aid
string
mid
string
abstract
string
related_work
string
ref_abstract
dict
title
string
text_except_rw
string
total_words
int64
1105.5236
2952132893
Traceroute measurements are one of our main instruments to shed light onto the structure and properties of today's complex networks such as the Internet. This paper studies the feasibility and infeasibility of inferring the network topology given traceroute data from a worst-case perspective, i.e., without any probabilistic assumptions on, e.g., the nodes' degree distribution. We attend to a scenario where some of the routers are anonymous, and propose two fundamental axioms that model two basic assumptions on the traceroute data: (1) each trace corresponds to a real path in the network, and (2) the routing paths are at most a factor 1/alpha off the shortest paths, for some parameter alpha in (0,1]. In contrast to existing literature that focuses on the cardinality of the set of (often only minimal) inferrable topologies, we argue that a large number of possible topologies alone is often unproblematic, as long as the networks have a similar structure. We hence seek to characterize the set of topologies inferred with our axioms. We introduce the notion of star graphs whose colorings capture the differences among inferred topologies; it also allows us to construct inferred topologies explicitly. We find that in general, inferrable topologies can differ significantly in many important aspects, such as the nodes' distances or the number of triangles. These negative results are complemented by a discussion of a scenario where the trace set is best possible, i.e., "complete". It turns out that while some properties such as the node degrees are still hard to measure, a complete trace set can help to determine global properties such as the connectivity.
In the field of network tomography, topologies are explored using pairwise end-to-end measurements, without the cooperation of nodes along these paths. This approach is quite flexible and applicable in various contexts, e.g., in social networks @cite_6 . For a good discussion of this approach as well as results for a routing model along shortest and second shortest paths, see @cite_6 . For example, @cite_6 shows that for sparse random graphs, a relatively small number of cooperating participants is sufficient to discover a network fairly well.
{ "abstract": [ "We consider the task of topology discovery of sparse random graphs using end-to-end random measurements (e.g., delay) between a subset of nodes, referred to as the participants. The rest of the nodes are hidden, and do not provide any information for topology discovery. We consider topology discovery under two routing models: (a) the participants exchange messages along the shortest paths and obtain end-to-end measurements, and (b) additionally, the participants exchange messages along the second shortest path. For scenario (a), our proposed algorithm results in a sub-linear edit-distance guarantee using a sub-linear number of uniformly selected participants. For scenario (b), we obtain a much stronger result, and show that we can achieve consistent reconstruction when a sub-linear number of uniformly selected nodes participate. This implies that accurate discovery of sparse random graphs is tractable using an extremely small number of participants. We finally obtain a lower bound on the number of participants required by any algorithm to reconstruct the original random graph up to a given edit distance. We also demonstrate that while consistent discovery is tractable for sparse random graphs using a small number of participants, in general, there are graphs which cannot be discovered by any algorithm even with a significant number of participants, and with the availability of end-to-end information along all the paths between the participants. © 2012 Wiley Periodicals, Inc. Random Struct. Alg., 2013" ], "cite_N": [ "@cite_6" ], "mid": [ "2568950526" ] }
Misleading Stars: What Cannot Be Measured in the Internet?
Surprisingly little is known about the structure of many important complex networks such as the Internet. One reason is the inherent difficulty of performing accurate, large-scale and preferably synchronous measurements from a large number of different vantage points. Another reason is privacy and information hiding: for example, network providers may seek to hide the details of their infrastructure to avoid tailored attacks. Since knowledge of the network characteristics is crucial for many applications (e.g., RMTP [12], or PaDIS [13]), the research community implements measurement tools to analyze at least the main properties of the network. The results can then, e.g., be used to design more efficient network protocols in the future. This paper focuses on the most basic characteristic of the network: its topology. The classic tool to study topological properties is traceroute. Traceroute allows us to collect traces from a given source node to a set of specified destination nodes. A trace between two nodes contains a sequence of identifiers describing the route traveled by the packet. However, not every node along such a path is configured to answer with its identifier. Rather, some nodes may be anonymous in the sense that they appear as stars (' * ') in a trace. Anonymous nodes exacerbate the exploration of a topology because already a small number of anonymous nodes may increase the spectrum of inferrable topologies that correspond to a trace set T . This paper is motivated by the observation that the mere number of inferrable topologies alone does not contradict the usefulness or feasibility of topology inference; if the set of inferrable topologies is homogeneous in the sense that the different topologies share many important properties, the generation of all possible graphs can be avoided: an arbitrary representative may characterize the underlying network accurately.
Therefore, we identify important topological metrics such as the diameter or the maximal node degree and examine how "close" the possible inferred topologies are with respect to these metrics. Our Contribution This paper initiates the study and characterization of topologies that can be inferred from a given trace set computed with the traceroute tool. While existing literature taking a worst-case perspective has mainly focused on the cardinality of minimal topologies, we go one step further and examine specific topological graph properties. We introduce a formal theory of topology inference by proposing basic axioms (i.e., assumptions on the trace set) that are used to guide the inference process. We present a novel and, we believe, appealing definition for the isomorphism of inferred topologies which is aware of traffic paths; it is motivated by the observation that although two topologies may look equivalent up to a renaming of anonymous nodes, the same trace set may result in different paths. Moreover, we initiate the study of two extremes: in the first scenario, we only require that each link appears at least once in the trace set; interestingly, however, it turns out that this is often not sufficient, and we propose a "best case" scenario where the trace set is, in some sense, complete: it contains paths between all pairs of nodes. The main result of the paper is a negative one. It is shown that already a small number of anonymous nodes in the network renders topology inference difficult. In particular, we prove that in general, the possible inferrable topologies differ in many crucial aspects. We introduce the concept of the star graph of a trace set that is useful for the characterization of inferred topologies. In particular, colorings of the star graphs allow us to constructively derive inferred topologies.
(Although the general problem of computing the set of inferrable topologies is related to NP-hard problems such as minimal graph coloring and graph isomorphism, some important instances of inferrable topologies can be computed efficiently.) The minimal coloring (i.e., the chromatic number) of the star graph defines a lower bound on the number of anonymous nodes from which the stars in the traces could originate. And the number of possible colorings of the star graph (a function of its chromatic polynomial) gives an upper bound on the number of inferrable topologies. We show that this bound is tight in the sense that there are situations where there indeed exist so many inferrable topologies. In particular, there are problem instances where the cardinality of the set of inferrable topologies equals the Bell number. This insight complements (and generalizes to arbitrary, not only minimal, inferrable topologies) existing cardinality results. Finally, we examine the scenario of fully explored networks for which "complete" trace sets are available. As expected, inferrable topologies are more homogeneous and can be characterized well with respect to many properties such as node distances. However, we also find that other properties are inherently difficult to estimate. Interestingly, our results indicate that full exploration is often useful for global properties (such as connectivity) while it does not help much for more local properties (such as node degree). Organization The remainder of this paper is organized as follows. Our theory of topology inference is introduced in Section 2. The main contribution is presented in Sections 3 and 4 where we derive bounds for general trace sets and fully explored networks, respectively. In Section 5, the paper concludes with a discussion of our results and directions for future research. Due to space constraints, some proofs are moved to the appendix.
Model Let T denote the set of traces obtained from probing (e.g., by traceroute) a (not necessarily connected) undirected network G 0 = (V 0 , E 0 ) with nodes or vertices V 0 (the set of routers) and links or edges E 0 . We assume that G 0 is static during the probing time (or that probing is instantaneous). Each trace T (u, v) ∈ T describes a path connecting two nodes u, v ∈ V 0 ; when u and v do not matter or are clear from the context, we simply write T . Moreover, let d T (u, v) denote the distance (number of hops) between two nodes u and v in trace T . We define d G 0 (u, v) to be the corresponding shortest path distance in G 0 . Note that a trace between two nodes u and v may not describe the shortest path between u and v in G 0 . The nodes in V 0 fall into two categories: anonymous nodes and non-anonymous (or shorter: named) nodes. Therefore, each trace T ∈ T describes a sequence of symbols representing anonymous and non-anonymous nodes. We make the natural assumption that the first and the last node in each trace T is non-anonymous. Moreover, we assume that traces are given in a form where non-anonymous nodes appear with a unique, anti-aliased identifier (i.e., the multiple IP addresses corresponding to different interfaces of a node are resolved to one identifier); an anonymous node is represented as * ("star") in the traces. For our formal analysis, we assign to each star in a trace set T a unique identifier i: * i . (Note that except for the numbering of the stars, we allow identical copies of T in T , and we do not make any assumptions on the implications of identical traces: they may or may not describe the same paths.) Thus, a trace T ∈ T is a sequence of symbols taken from an alphabet Σ = ID ∪ ⋃ i { * i }, where ID is the set of non-anonymous node identifiers (IDs): Σ is the union of the (anti-aliased) non-anonymous nodes and the set of all stars (with their unique identifiers) appearing in a trace set.
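To make the model concrete, the following minimal sketch (all names hypothetical, not from the paper) shows one way to represent a trace set T, the symbol alphabet Σ, and the trace distance d_T:

```python
from itertools import count

# Hypothetical encoding (a sketch, not the paper's implementation):
# a trace is a tuple of symbols; named nodes are plain strings and each
# anonymous hop "*" receives a unique identifier "*1", "*2", ...
def number_stars(traces):
    """Replace every '*' by a uniquely numbered star symbol."""
    ids = count(1)
    return [tuple(f"*{next(ids)}" if s == "*" else s for s in t)
            for t in traces]

T = number_stars([("u", "*", "v"), ("v", "*", "w")])

ID = {s for t in T for s in t if not s.startswith("*")}   # named nodes
STARS = {s for t in T for s in t if s.startswith("*")}    # star symbols
n, s_count = len(ID), len(STARS)
N = n + s_count                                           # N = |Sigma|

def d_T(trace, a, b):
    """Hop distance between two symbols occurring in one trace."""
    return abs(trace.index(a) - trace.index(b))
```

With the two traces above, n = 3, s = 2, N = 5, and d_T(T[0], "u", "v") = 2.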
The main challenge in topology inference is to determine which stars in the traces may originate from which anonymous nodes. Henceforth, let n = |ID| denote the number of non-anonymous nodes and let s = |⋃ i { * i }| be the number of stars in T ; similarly, let a denote the number of anonymous nodes in a topology. Let N = n + s = |Σ| be the total number of symbols occurring in T . Clearly, the process of topology inference depends on the assumptions on the measurements. In the following, we postulate the fundamental axioms that guide the reconstruction. First, we make the assumption that each link of G 0 is visited by the measurement process, i.e., it appears as a transition in the trace set T . In other words, we are only interested in inferring the (sub-)graph for which measurement data is available. AXIOM 0 (Complete Cover): Each edge of G 0 appears at least once in some trace in T . The next fundamental axiom assumes that traces always represent paths on G 0 . AXIOM 1 (Reality Sampling): For every trace T ∈ T , if the distance between two symbols σ 1 , σ 2 ∈ T is d T (σ 1 , σ 2 ) = k, then there exists a path (i.e., a walk without cycles) of length k connecting the two (named or anonymous) nodes corresponding to σ 1 and σ 2 in G 0 . The following axiom captures the consistency of the routing protocol on which the traceroute probing relies. In the current Internet, policy routing is known to have an impact both on the route length [14] and on the convergence time [11]. AXIOM 2 (α-(Routing) Consistency): There exists an α ∈ (0, 1] such that, for every trace T ∈ T , if d T (σ 1 , σ 2 ) = k for two entries σ 1 , σ 2 in trace T , then the shortest path connecting the two (named or anonymous) nodes corresponding to σ 1 and σ 2 in G 0 has distance at least αk. Note that if α = 1, the routing is a shortest path routing. Moreover, note that in the limit α → 0, there can be loops in the paths and there are hardly any topological constraints left, rendering almost any topology inferrable.
(For example, the complete graph with one anonymous router is always a solution.) A natural axiom to merge traces is the following. AXIOM 3 (Trace Merging): For two traces T 1 , T 2 ∈ T for which ∃σ 1 , σ 2 , σ 3 , where σ 2 refers to a named node, such that d T 1 (σ 1 , σ 2 ) = i and d T 2 (σ 2 , σ 3 ) = j, it holds that the distance between the two nodes corresponding to σ 1 and σ 3 in G 0 is at most i + j, i.e., d G 0 (σ 1 , σ 3 ) ≤ i + j. Any topology G which is consistent with these axioms (when applied to T ) is called inferrable from T . Definition 2.1 (Inferrable Topologies). A topology G is (α-consistently) inferrable from a trace set T if axioms AXIOM 0, AXIOM 1, AXIOM 2 (with parameter α), and AXIOM 3 are fulfilled. We denote by G T the set of topologies inferrable from T . Please note the following important observation. Remark 2.2. While we generally have that G 0 ∈ G T , since T was generated from G 0 and AXIOM 0, AXIOM 1, AXIOM 2 and AXIOM 3 are fulfilled by definition, there can be situations where an α-consistent trace set for G 0 contradicts AXIOM 0: some edges may not appear in T . If this is the case, we will focus on the inferrable topologies containing the links we know, even if G 0 may have additional, hidden links that cannot be explored due to the high α value. The main objective of a topology inference algorithm ALG is to compute topologies which are consistent with these axioms. Concretely, ALG's input is the trace set T together with the parameter α specifying the assumed routing consistency. Essentially, the goal of any topology inference algorithm ALG is to compute a mapping of the symbols Σ (appearing in T ) to nodes in an inferred topology G; or, in case the input parameters α and T are contradictory, reject the input. This mapping of symbols to nodes implicitly describes the edge set of G as well: the edge set is unique as all the transitions of the traces in T are now unambiguously tied to two nodes.
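An inference algorithm must be able to verify these axioms against a candidate mapping. The sketch below (hypothetical helper names; unweighted hop distances via BFS; each symbol assumed to occur at most once per trace) checks AXIOM 2 for a candidate topology given as an adjacency dict and a symbol-to-node mapping:

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, src):
    """Hop distances from src in an undirected adjacency dict."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def alpha_consistent(adj, traces, MAP, alpha):
    """AXIOM 2: for any two entries of a trace at trace distance k, the
    shortest path between their images must have length at least alpha*k."""
    for t in traces:
        for i, j in combinations(range(len(t)), 2):
            d = bfs_dist(adj, MAP[t[i]]).get(MAP[t[j]])
            if d is None or d < alpha * (j - i):
                return False
    return True

# The canonic graph of a single trace is 1-consistent ...
T = [("u", "*1", "v")]
G_C = {"u": {"*1"}, "*1": {"u", "v"}, "v": {"*1"}}
ident = {s: s for t in T for s in t}
# ... while adding a shortcut edge {u, v} violates shortest-path routing.
G_bad = {"u": {"*1", "v"}, "*1": {"u", "v"}, "v": {"*1", "u"}}
```

Rejecting contradictory inputs, as required of ALG, then amounts to alpha_consistent returning False for every candidate mapping.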
So far, we have ignored an important and non-trivial question: When are two topologies G 1 , G 2 ∈ G T different (and hence appear as two independent topologies in G T )? In this paper, we pursue the following approach: We are not interested in purely topological isomorphisms, but we care about the identifiers of the non-anonymous nodes, i.e., we are interested in the locations of the non-anonymous nodes and their distance to other nodes. For anonymous nodes, the situation is slightly more complicated: one might think that as the nodes are anonymous, their "names" do not matter. Consider however the example in Figure 1: the two inferrable topologies have two anonymous nodes, once where { * 1 , * 2 } plus { * 3 , * 4 } are merged into one node each in the inferrable topology and once where { * 1 , * 4 } plus { * 2 , * 3 } are merged into one node each in the inferrable topology. In this paper, we regard the two topologies as different, for the following reason: Assume that there are two paths in the network, one u * 2 v (e.g., during day time) and one u * 3 v (e.g., at night); clearly, this traffic has different consequences and hence we want to be able to distinguish between the two topologies described above. In other words, our notion of isomorphism of inferred topologies is path-aware. It is convenient to introduce the following MAP function. Essentially, an inference algorithm computes such a mapping. Definition 2.3 (Mapping Function MAP). Let G = (V, E) ∈ G T be a topology inferrable from T . A topology inference algorithm describes a surjective mapping function MAP : Σ → V . For the set of non-anonymous nodes in Σ, the mapping function is bijective; and each star is mapped to exactly one node in V , but multiple stars may be assigned to the same node. Note that for any σ ∈ Σ, MAP(σ) uniquely identifies a node v ∈ V . 
More specifically, we assume that MAP assigns labels to the nodes in V : in case of a named node, the label is simply the node's identifier; in case of anonymous nodes, the label is * β , where β is the concatenation of the sorted indices of the stars which are merged into node * β . With this definition, two topologies G 1 , G 2 ∈ G T differ if and only if they do not describe the identical (MAP-) labeled topology. We will use this MAP function also for G 0 , i.e., we will write MAP(σ) to refer to a symbol σ's corresponding node in G 0 . In the remainder of this paper, we will often assume that AXIOM 0 is given. Moreover, note that AXIOM 3 is redundant: as the following lemma shows, AXIOM 1 implies AXIOM 3. Therefore, in our proofs, we will not explicitly cover AXIOM 0, and it suffices to show that AXIOM 1 holds in order to establish AXIOM 3. Lemma 2.4. AXIOM 1 implies AXIOM 3. Proof. Let T be a trace set, and G ∈ G T . Let σ 1 , σ 2 , σ 3 s.t. ∃T 1 , T 2 ∈ T with σ 1 ∈ T 1 , σ 3 ∈ T 2 and σ 2 ∈ T 1 ∩ T 2 . Let i = d T 1 (σ 1 , σ 2 ) and j = d T 2 (σ 2 , σ 3 ). Since any inferrable topology G fulfills AXIOM 1, there is a path π 1 of length i between the nodes corresponding to σ 1 and σ 2 in G and a path π 2 of length j between the nodes corresponding to σ 2 and σ 3 in G. Concatenating π 1 and π 2 yields a walk of length at most i + j between the nodes corresponding to σ 1 and σ 3 ; the shortest path can only be shorter, and hence the claim follows. Inferrable Topologies What insights can be obtained from topology inference with minimal assumptions, i.e., with our axioms? Or what is the structure of the inferrable topology set G T ? We first make some general observations and then examine different graph metrics in more detail. Basic Observations Although the generation of the entire topology set G T may be computationally hard, some instances of G T can be computed efficiently. The simplest possible inferrable topology is the so-called canonic graph G C : the topology which assumes that all stars in the traces refer to different anonymous nodes.
In other words, if a trace set T contains n = |ID| named nodes and s stars, G C will contain |V (G C )| = N = n + s nodes. Definition 3.1 (Canonic Graph G C ). The canonic graph is defined by G C (V C , E C ) where V C = Σ is the set of (anti-aliased) nodes appearing in T (where each star is considered a unique anonymous node) and where {σ 1 , σ 2 } ∈ E C ⇔ ∃T ∈ T , T = (. . . , σ 1 , σ 2 , . . .), i.e., σ 2 directly follows σ 1 in some trace T (σ 1 , σ 2 ∈ T can be either non-anonymous nodes or stars). Let d C (σ 1 , σ 2 ) denote the canonic distance between two nodes, i.e., the length of a shortest path in G C between the nodes σ 1 and σ 2 . Note that G C is indeed an inferrable topology. In this case, MAP : Σ → Σ is the identity function. The proof appears in the appendix. Theorem 3.2. G C is inferrable from T . G C can be computed efficiently from T : represent each non-anonymous node and star as a separate node, and for any pair of consecutive entries (i.e., nodes) in a trace, add the corresponding link. The time complexity of this construction is linear in the size of T . With the definition of the canonic graph, we can derive the following lemma which establishes a necessary condition when two stars cannot represent the same node in G 0 from constraints on the routing paths. This is useful for the characterization of inferred topologies. Lemma 3.3. Let * 1 , * 2 be two stars occurring in some traces in T . * 1 , * 2 cannot be mapped to the same node, i.e., MAP( * 1 ) = MAP( * 2 ), without violating the axioms in the following conflict situations: (i) if * 1 ∈ T 1 and * 2 ∈ T 2 , and T 1 describes a too long path between anonymous node MAP( * 1 ) and non-anonymous node u, i.e., α · d T 1 ( * 1 , u) > d C (u, * 2 ). (ii) if * 1 ∈ T 1 and * 2 ∈ T 2 , and there exists a trace T that contains a path between two non-anonymous nodes u and v and α · d T (u, v) > d C (u, * 1 ) + d C (v, * 2 ). Proof. The first proof is by contradiction.
Assume MAP( * 1 ) = MAP( * 2 ) represents the same node v of G 0 , and that α · d T 1 (v, u) > d C (u, v). Then we know from AXIOM 2 that d C (v, u) ≥ d G 0 (v, u) ≥ α · d T 1 (u, v) > d C (v, u), which yields the desired contradiction. Similarly for the second proof. Assume for the sake of contradiction that MAP( * 1 ) = MAP( * 2 ) represents the same node w of G 0 , and that α · d T (u, v) > d C (u, w) + d C (v, w). Due to the triangle inequality, we have that d C (u, w) + d C (v, w) ≥ d C (u, v) and hence, α · d T (u, v) > d C (u, v), which contradicts the fact that G C is inferrable (Theorem 3.2). Lemma 3.3 can be applied to show that a topology is not inferrable from a given trace set because it merges (i.e., maps to the same node) two stars in a manner that violates the axioms. Let us introduce a useful concept for our analysis: the star graph that describes the conflicts between stars. Definition 3.4 (Star Graph G * ). The star graph G * = (V * , E * ) has a vertex for every star occurring in T and an edge { * 1 , * 2 } ∈ E * whenever * 1 and * 2 are in conflict, i.e., whenever merging them would violate condition (i) or (ii) of Lemma 3.3. Note that the star graph G * is unique and can be computed efficiently for a given trace set T : Conditions (i) and (ii) can be checked by computing G C . However, note that while G * specifies some stars which cannot be merged, the construction is not sufficient: as Lemma 3.3 is based on G C , additional links might be needed to characterize the set of inferrable and α-consistent topologies G T exactly. In other words, a topology G obtained by merging stars that are adjacent in G * is never inferrable (G ∈ G T ); however, merging non-adjacent stars does not guarantee that the resulting topology is inferrable. What do star graphs look like? The answer is that they can be arbitrary: the following lemma states that the set of possible star graphs is equivalent to the class of general graphs. This claim holds for any α. The proof appears in the appendix. The problem of computing inferrable topologies is related to the vertex colorings of the star graphs.
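Before turning to colorings, the constructions so far can be summarized in code. The sketch below (hypothetical representation: stars are symbols starting with "*"; each symbol assumed to occur at most once per trace) builds G_C as described after Theorem 3.2 and then derives the conflict edges of G* from conditions (i) and (ii) of Lemma 3.3:

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, src):
    """Hop distances from src in an undirected adjacency dict."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def canonic_graph(traces):
    """G_C: every star is its own anonymous node; consecutive entries are linked."""
    adj = {}
    for t in traces:
        for a, b in zip(t, t[1:]):
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    return adj

def star_graph_edges(traces, alpha):
    """Conflict edges between stars, per conditions (i) and (ii) of Lemma 3.3."""
    adj = canonic_graph(traces)
    d_C = {v: bfs_dist(adj, v) for v in adj}
    INF = float("inf")
    stars = sorted(v for v in adj if v.startswith("*"))
    edges = set()
    for s1, s2 in combinations(stars, 2):
        for t in traces:
            named = [x for x in t if not x.startswith("*")]
            # (i): a trace path from one star to a named node u that is too
            # long compared with u's canonic distance to the other star.
            for u in named:
                for a, b in ((s1, s2), (s2, s1)):
                    if a in t and alpha * abs(t.index(a) - t.index(u)) > d_C[u].get(b, INF):
                        edges.add((s1, s2))
            # (ii): a named-to-named trace segment that is too long compared
            # with the canonic detour through the two stars.
            for u, v in combinations(named, 2):
                k = alpha * abs(t.index(u) - t.index(v))
                if k > min(d_C[u].get(s1, INF) + d_C[v].get(s2, INF),
                           d_C[u].get(s2, INF) + d_C[v].get(s1, INF)):
                    edges.add((s1, s2))
    return edges
```

For two traces that both pass an anonymous hop between the same named endpoints, no conflict arises and the stars may be merged; a short and a long route between the same pair of named nodes, in contrast, puts the two stars in conflict for alpha = 1.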
We will use the following definition which relates a vertex coloring of G * to an inferrable topology G by contracting independent stars in G * to become one anonymous node in G. For example, observe that a maximum coloring treating every star in the trace as a separate anonymous node describes the inferrable topology G C . Definition 3.6 (Coloring-Induced Graph). Let γ denote a coloring of G * which assigns colors 1, . . . , k to the vertices of G * : γ : V * → {1, . . . , k}. We require that γ is a proper coloring of G * , i.e., that different anonymous nodes are assigned different colors: {u, v} ∈ E * ⇒ γ(u) ≠ γ(v). G γ is defined as the topology induced by γ. G γ describes the graph G C where nodes of the same color are contracted: two stars * i and * j represent the same node in G γ , i.e., MAP( * i ) = MAP( * j ), if and only if γ( * i ) = γ( * j ). The following two lemmas establish an intriguing relationship between colorings of G * and inferrable topologies. Also note that Definition 3.6 implies that two different colorings of G * define two non-isomorphic inferrable topologies. We first show that while a coloring-induced topology always fulfills AXIOM 1, the routing consistency may degrade. The proof appears in the appendix. Lemma 3.7. Let γ be a proper coloring of G * . The coloring-induced topology G γ is a topology fulfilling AXIOM 2 with a routing consistency of α', for some positive α'. An inferrable topology always defines a proper coloring on G * . Lemma 3.8. Let T be a trace set and G * its corresponding star graph. If a topology G is inferrable from T , then G induces a proper coloring on G * . Proof. For any α-consistent inferrable topology G there exists some mapping function MAP that assigns each symbol of T to a corresponding node in G (cf Definition 2.3), and this mapping function gives a coloring on G * (i.e., merged stars appear as nodes of the same color in G * ).
The coloring must be proper: due to Lemma 3.3, an inferrable topology can never merge adjacent nodes of G * . The colorings of G * allow us to derive an upper bound on the cardinality of G T . Theorem 3.9. Given a trace set T sampled from a network G 0 and G T , the set of topologies inferrable from T , it holds that: Σ_{k=γ(G * )}^{|V * |} P(G * , k)/k! ≥ |G T |, where γ(G * ) is the chromatic number of G * and P(G * , k) is the number of colorings of G * with k colors (known as the chromatic polynomial of G * ). Proof. The proof follows directly from Lemma 3.8 which shows that each inferred topology has proper colorings, and the fact that a coloring of G * cannot result in two different inferred topologies, as the coloring uniquely describes which stars to merge (Lemma 3.7). In order to account for isomorphic colorings, we need to divide by the number of color permutations. Note that the fact that G * can be an arbitrary graph (Lemma 3.5) implies that we cannot exploit any special properties of G * to compute colorings of G * and γ(G * ). Also note that the exact computation of the upper bound is hard, since both the minimal coloring (an NP-hard problem) and the chromatic polynomial of G * (a #P-hard problem) are needed. To complement the upper bound, we note that star graphs with a small number of conflict edges can indeed result in a large number of inferred topologies. Lemma 3.10. There are trace sets T whose star graph contains no edges and for which the cardinality of G T equals the Bell number B s . Proof. Consider a trace set T = {(σ i , * i , σ' i ) : i = 1, . . . , s} (e.g., obtained from exploring a topology G 0 where one anonymous center node is connected to 2s named nodes). The trace set does not impose any constraints on how the stars relate to each other, and hence, G * does not contain any edges at all; even when stars are merged, there are no constraints on how the stars relate to each other. Therefore, the star graph for T has B s = Σ_{j=0}^{s} S(s, j) colorings (up to color permutations), where S(s, j) = (1/j!) Σ_{ℓ=0}^{j} (−1)^ℓ C(j, ℓ) (j − ℓ)^s is the number of ways to group s nodes into j different, disjoint non-empty subsets (known as the Stirling number of the second kind). Each of these colorings also describes a distinct inferrable topology, as MAP assigns unique labels to anonymous nodes stemming from merging a group of stars (cf Definition 2.3). Properties Even if the number of inferrable topologies is large, topology inference can still be useful if one is mainly interested in the properties of G 0 and if the ensemble G T is homogeneous with respect to these properties; for example, if "most" of the instances in G T are close to G 0 , there may be an option to conduct an efficient sampling analysis on random representatives. Therefore, in the following, we will take a closer look at how much the members of G T differ. Important metrics to characterize inferrable topologies are, for instance, the graph size, the diameter DIAM(·), the number of triangles C 3 (·) of G, and so on. In the following, let G 1 = (V 1 , E 1 ), G 2 = (V 2 , E 2 ) ∈ G T be two arbitrary representatives of G T . As one might expect, the graph size can be estimated quite well. Lemma 3.11. It holds that |V 1 | − |V 2 | ≤ s − γ(G * ) ≤ s − 1 and |V 1 |/|V 2 | ≤ (n + s)/(n + γ(G * )) ≤ (2 + s)/3. Moreover, |E 1 | − |E 2 | ≤ 2(s − γ(G * )) and |E 1 |/|E 2 | ≤ (ν + 2s)/(ν + 2) ≤ s, where ν denotes the number of edges between non-anonymous nodes. There are traces with inferrable topologies G 1 , G 2 reaching these bounds. Observe that inferrable topologies can also differ in the number of connected components. This implies that the shortest distance between two named nodes can differ arbitrarily between two representatives in G T . Lemma 3.12. Let COMP(G) denote the number of connected components of a topology G. Then, |COMP(G 1 ) − COMP(G 2 )| ≤ ⌊n/2⌋. There are instances G 1 , G 2 that reach this bound. Proof. Consider the trace set T = {T i , i = 1, . . . , ⌊n/2⌋} in which T i = (n 2i , * i , n 2i+1 ).
Since i ≠ j ⇒ T i ∩ T j = ∅, we have |E * | = 0. Take G 1 as the 1-coloring of G * : G 1 is a topology with one anonymous node connected to all named nodes. Take G 2 as the ⌊n/2⌋-coloring of the star graph: G 2 has ⌊n/2⌋ distinct connected components (each consisting of three nodes). Upper bound: For the sake of contradiction, suppose ∃T s.t. |COMP(G 1 ) − COMP(G 2 )| > ⌊n/2⌋. Let us assume that G 1 has the most connected components: G 1 has at least ⌊n/2⌋ + 1 more connected components than G 2 . Let C refer to a connected component of G 2 whose nodes are not connected in G 1 . This means that C contains at least one anonymous node. Thus, C contains at least two named nodes (since a trace T cannot start or end with a star). There must exist at least ⌊n/2⌋ + 1 such connected components C. Thus G 2 has to contain at least 2(⌊n/2⌋ + 1) ≥ n + 1 named nodes. Contradiction. An important criterion for topology inference regards the distortion of shortest paths. Definition 3.13 (Stretch). The maximal ratio of the distance of two non-anonymous nodes in G 0 and a connected topology G is called the stretch ρ: ρ = max u,v∈ID(G 0 ) max{d G 0 (u, v)/d G (u, v), d G (u, v)/d G 0 (u, v)}. From Lemma 3.12 we already know that inferrable topologies can differ in the number of connected components, and hence, the distance and the stretch between nodes can be arbitrarily wrong. Hence, in the following, we will focus on connected graphs only. However, even if two nodes are connected, their distance can be much longer or shorter than in G 0 . Figure 2 gives an example. Both topologies are inferrable from the traces T 1 = (v, * 1 , v 1 , . . . , v k , u) and T 2 = (w, * 2 , w 1 , . . . , w k , u). One inferrable topology is the canonic graph G C (Figure 2 left), whereas the other topology merges the two anonymous nodes (Figure 2 right). The distances between v and w are 2(k + 2) and 2, respectively, implying a stretch of k + 2.
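The distances in this two-trace example can be checked numerically. The sketch below (hypothetical encoding; BFS hop distances) builds the canonic graph, where the two stars stay distinct, and the topology that merges them:

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in an undirected adjacency dict."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj.get(u, ()):
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def add_path(adj, path):
    """Insert the edges of a trace into an adjacency dict."""
    for a, b in zip(path, path[1:]):
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)

k = 5
T1 = ("v", "*1", *(f"v{i}" for i in range(1, k + 1)), "u")
T2 = ("w", "*2", *(f"w{i}" for i in range(1, k + 1)), "u")

g_c = {}                        # canonic graph: *1 and *2 stay distinct
add_path(g_c, T1)
add_path(g_c, T2)

merged = {}                     # topology where both stars become one node "*"
for t in (T1, T2):
    add_path(merged, tuple("*" if x in ("*1", "*2") else x for x in t))

d_canonic = bfs_dist(g_c, "v")["w"]    # 2(k + 2)
d_merged = bfs_dist(merged, "v")["w"]  # 2
```

For k = 5 this gives distances 14 and 2 between v and w, i.e., a stretch of k + 2 = 7, matching the discussion above.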
Figure 2: Due to the lack of a trace between v and w, the stretch of an inferred topology can be large. Lemma 3.14. Let u and v be two arbitrary named nodes in the connected topologies G 1 and G 2 . Then, even for only two stars in the trace set, it holds for the stretch that ρ ≤ (N − 1)/2. There are instances G 1 , G 2 that reach this bound. We now turn our attention to the diameter and the degree. Lemma 3.15. For the diameter, it holds that DIAM(G 1 )/DIAM(G 2 ) ≤ s. There are instances G 1 , G 2 that reach this bound. Proof. Upper bound: As G C does not merge any stars, it describes the network with the largest diameter. Let π be a longest path between two nodes u and v in G C . In the extreme case, π is the only path determining the network diameter and π contains all star nodes. Then, the graph where all s stars are merged into one anonymous node has a minimal diameter of at least DIAM(G C )/s. Example reaching the bound: consider the trace set T = {(u 1 , . . . , * 1 , . . . , u 2 ), (u 2 , . . . , * 2 , . . . , u 3 ), . . . , (u s , . . . , * s , . . . , u s+1 )} with x named nodes between u i and u i+1 and the star * i in the middle (assume x to be even; x does not include u i and u i+1 ). It holds that DIAM(G C ) = s · (x + 2), whereas in a graph G where all stars are merged, DIAM(G) = x + 2. There are n = s(x + 1) + 1 non-anonymous nodes, so x = (n − s − 1)/s. Figure 3 depicts an example. Lemma 3.16. For the maximal node degree DEG, we have DEG(G 1 ) − DEG(G 2 ) ≤ 2(s − γ(G * )) and DEG(G 1 )/DEG(G 2 ) ≤ s − γ(G * ) + 1. There are instances G 1 , G 2 that reach these bounds. Another important topology measure that indicates how well meshed the network is, is the number of triangles. Lemma 3.17. Let C 3 (G) be the number of cycles of length 3 of the graph G. It holds that C 3 (G 1 ) − C 3 (G 2 ) ≤ 2s(s − 1), which can be reached. The relative error C 3 (G 1 )/C 3 (G 2 ) can be arbitrarily large unless the number of links between non-anonymous nodes exceeds n 2 /4, in which case the ratio is upper bounded by 2s(s − 1) + 1. Proof. Upper bound: Each node which is part of a triangle has at least two incident edges. Thus, a node v can be part of at most C(DEG(v), 2) = DEG(v)(DEG(v) − 1)/2 triangles, where DEG(v) denotes v's degree.
As a consequence, the number of triangles containing an anonymous node in an inferrable topology with a anonymous nodes u_1, ..., u_a is at most Σ_{j=1}^{a} DEG(u_j)(DEG(u_j) − 1)/2. Given s, this sum is maximized if a = 1 and DEG(u_1) = 2s, as 2s is the maximum degree possible due to Lemma 3.16. Thus there can be at most s · (2s − 1) triangles containing an anonymous node in G_1. The number of triangles with at least one anonymous node is minimized in G_C because in the canonic graph the degrees of the anonymous nodes are minimized, i.e., they are always exactly two. As a consequence there cannot be more than s such triangles in G_C. If the number of such triangles in G_C is smaller by x, then the number of triangles with at least one anonymous node in the topology G_1 is upper bounded by s · (2s − 1) − x. The difference between the triangles in G_1 and G_2 is thus at most s(2s − 1) − x − s + x = 2s(s − 1).

Example meeting this bound: If the non-anonymous nodes form a complete graph and all star nodes can be merged into one node in G_1 and G_2 = G_C, then the difference in the number of triangles matches the upper bound.

Consequently, it holds for the ratio of triangles with anonymous nodes that it does not exceed (s(2s − 1) − x)/(s − x). Thus the ratio can be infinite, as x can reach s. However, if the number of links between the n non-anonymous nodes exceeds n²/4, then there is at least one triangle among the named nodes, as the densest triangle-free (complete bipartite) graph contains at most n²/4 links.

Full Exploration

So far, we assumed that the trace set T contains each node and link of G_0 at least once. At first sight, this seems to be the best we can hope for. However, sometimes traces exploring the vicinity of anonymous nodes in different ways yield additional information that helps to characterize G_T better. This section introduces the concept of fully explored networks: T contains sufficiently many traces such that the distances between non-anonymous nodes can be estimated accurately.
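The extremal instance of Lemma 3.17 (named nodes forming a complete graph; all stars merged in G_1 versus the canonic graph G_2) is small enough to verify directly. The following sketch uses illustrative node names:

```python
from itertools import combinations

def triangles(adj):
    """Count 3-cycles in an adjacency-set graph."""
    nodes = sorted(adj)
    return sum(1 for a, b, c in combinations(nodes, 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

def add_edge(adj, a, b):
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

s = 3
named = [f"x{i}" for i in range(2 * s)]  # the 2s trace endpoints

def complete_named():
    adj = {}
    for a, b in combinations(named, 2):
        add_edge(adj, a, b)
    return adj

# G1: all s stars merged into one anonymous node of degree 2s.
g1 = complete_named()
for v in named:
    add_edge(g1, "*", v)

# G2 = G_C: each star keeps degree two (the endpoints of its trace).
g2 = complete_named()
for i in range(s):
    add_edge(g2, f"*{i}", named[2 * i])
    add_edge(g2, f"*{i}", named[2 * i + 1])

print(triangles(g1) - triangles(g2))  # 2s(s-1) = 12
```

The merged anonymous node closes a triangle with every pair of its 2s neighbors, i.e., s(2s − 1) triangles, while each canonic star closes exactly one, matching the bound 2s(s − 1).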
In some sense, a trace set for a fully explored network is the best we can hope for: properties that cannot be inferred well under the fully explored topology model are infeasible to infer without additional assumptions on G_0. In this sense, this section provides upper bounds on what can be learned from topology inference. In the following, we will constrain ourselves to routing along shortest paths only (α = 1).

Let us again study the properties of the family of inferrable topologies fully explored by a trace set. Obviously, all the upper bounds from Section 3 are still valid for fully explored topologies. In the following, let G_1, G_2 ∈ G_T be arbitrary representatives of G_T for a fully explored trace set T. A direct consequence of Definition 4.1 concerns the number of connected components and the stretch. (Recall that the stretch is defined with respect to named nodes only, and since α = 1, a 1-consistent inferrable topology cannot include a shorter path between u and v than the one that must appear in a trace of T.)

Lemma 4.2. It holds that COMP(G_1) = COMP(G_2) (= COMP(G_0)) and the stretch is 1.

The proofs for the claims of the following lemmata are analogous to our former proofs, as the main difference is the fact that there might be more conflicts, i.e., edges in G*.

Lemma 4.3. For fully explored networks it holds that |V_1| − |V_2| ≤ s − γ(G*) ≤ s − 1 and |V_1|/|V_2| ≤ (n + s)/(n + γ(G*)) ≤ (2 + s)/3. Moreover, |E_1| − |E_2| ≤ 2(s − γ(G*)) and |E_1|/|E_2| ≤ (ν + 2s)/(ν + 2) ≤ s, where ν denotes the number of links between non-anonymous nodes. There are traces with inferrable topologies G_1, G_2 reaching these bounds.

Lemma 4.4. For the maximal node degree, we have DEG(G_1) − DEG(G_2) ≤ 2(s − γ(G*)) and DEG(G_1)/DEG(G_2) ≤ s − γ(G*) + 1. There are instances G_1, G_2 that reach these bounds.

From Lemma 4.2 we know that fully explored scenarios yield a perfect stretch of one.
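Before turning to the diameter, note that the degree bounds of Lemma 4.4 (and Lemma 3.16) are witnessed by the simple trace set {(v_i, *, w_i)}: all stars can be merged into one node, or kept separate as in the canonic graph. A small sketch with illustrative names:

```python
s = 4

# One representation: a single anonymous node absorbs all s stars.
g_merged = {"*": set()}
# The other representation: the canonic graph with s separate stars.
g_canonic = {}

for i in range(s):
    v, w, star = f"v{i}", f"w{i}", f"*{i}"
    g_merged["*"] |= {v, w}
    g_merged[v] = {"*"}
    g_merged[w] = {"*"}
    g_canonic[star] = {v, w}
    g_canonic[v] = {star}
    g_canonic[w] = {star}

max_merged = max(len(nbrs) for nbrs in g_merged.values())
max_canonic = max(len(nbrs) for nbrs in g_canonic.values())
print(max_merged, max_canonic)  # 2s = 8 and 2
```

Here the star graph has no edges, so γ(G*) = 1: the degree difference is 2s − 2 = 2(s − γ(G*)) and the ratio is s = s − γ(G*) + 1, matching the lemma.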
However, regarding the diameter, the situation is different in the sense that distances between anonymous nodes play a role.

Lemma 4.5. For fully explored networks, it holds that DIAM(G_1)/DIAM(G_2) ≤ 2, and there are instances G_1, G_2 where DIAM(G_1) − DIAM(G_2) = s/2.

The number of triangles with anonymous nodes can still not be estimated accurately in the fully explored scenario.

Lemma 4.6. There exist graphs where C_3(G_1) − C_3(G_2) = s(s − 1)/2, and the relative error C_3(G_1)/C_3(G_2) can be arbitrarily large.

Conclusion

We understand our work as a first step to shed light onto the similarity of inferrable topologies based on most basic axioms and without any assumptions on power-law properties, i.e., in the worst case. Using our formal framework we show that the topologies for a given trace set may differ significantly. Thus, it is impossible to accurately characterize topological properties of complex networks. To complement the general analysis, we propose the notion of fully explored networks or trace sets as a "best possible scenario". As expected, we find that fully explored trace sets allow us to determine several properties of the network more accurately; however, it also turns out that even in this scenario, other topological properties are inherently hard to compute. Our results are summarized in Figure 4.

Our work opens several directions for future research. On a theoretical side, one may study whether the minimal inferrable topologies considered in, e.g., [1,2], are more similar in nature. More importantly, while this paper presented results for the general worst case, it would be interesting to devise algorithms that compute, for a given trace set, worst-case bounds for the properties under consideration. For example, such approximate bounds would be helpful to decide whether additional measurements are needed. Moreover, such algorithms may even give advice on the locations at which such measurements would be most useful.
Figure 4: Summary of our bounds on the properties of inferrable topologies. s denotes the number of stars in the traces, n is the number of named nodes, N = n + s, and ν denotes the number of links between named nodes. Note that trace sets meeting these bounds exist for all properties for which we have tight or upper bounds. For the two entries marked with (¶), only "lower bounds" are derived, i.e., examples that yield at least the corresponding accuracy; as the upper bounds from the arbitrary scenario do not match, how to close the gap remains an open question.

Property/Scenario          | Arbitrary: G1 − G2    | G1/G2                | Fully Explored (α = 1): G1 − G2 | G1/G2
# of nodes                 | ≤ s − γ(G*)           | ≤ (n + s)/(n + γ(G*))| ≤ s − γ(G*)                     | ≤ (n + s)/(n + γ(G*))
# of links                 | ≤ 2(s − γ(G*))        | ≤ (ν + 2s)/(ν + 2)   | ≤ 2(s − γ(G*))                  | ≤ (ν + 2s)/(ν + 2)
# of connected components  | ≤ n/2                 | ≤ n/2                | = 0                             | = 1
Stretch                    | -                     | ≤ (N − 1)/2          | -                               | = 1
Diameter                   | ≤ (s − 1)/s · (N − 1) | ≤ s                  | s/2 (¶)                         | 2
Max. Deg.                  | ≤ 2(s − γ(G*))        | ≤ s − γ(G*) + 1      | ≤ 2(s − γ(G*))                  | ≤ s − γ(G*) + 1
Triangles                  | ≤ 2s(s − 1)           | ∞                    | s(s − 1)/2 (¶)                  | ∞

[13] Ingmar Poese, Benjamin Frank, Bernhard Ager, Georgios Smaragdakis, and Anja Feldmann. Improving content delivery using provider-aided distance information. In Proc. ACM IMC, 2010.

Fix T. We have to prove that G_C fulfills AXIOM 0, AXIOM 1 (which implies AXIOM 3) and AXIOM 2.

AXIOM 0: The axiom holds trivially: only edges from the traces are used in G_C.

AXIOM 1: Let T ∈ T and σ_1, σ_2 ∈ T. Let k = d_T(σ_1, σ_2). We show that G_C fulfills AXIOM 1, namely, that there exists a path of length k in G_C. Induction on k. (k = 1:) By the definition of G_C, {σ_1, σ_2} ∈ E_C, thus there exists a path of length one between σ_1 and σ_2. (k > 1:) Suppose AXIOM 1 holds up to k − 1. Let σ′_1, ..., σ′_{k−1} be the intermediary nodes between σ_1 and σ_2 in T: T = (..., σ_1, σ′_1, ..., σ′_{k−1}, σ_2, ...). By the induction hypothesis, in G_C there is a path of length k − 1 between σ_1 and σ′_{k−1}. Let π be this path.
By the definition of G_C, {σ′_{k−1}, σ_2} ∈ E_C. Thus appending (σ′_{k−1}, σ_2) to π yields the desired path of length k linking σ_1 and σ_2: AXIOM 1 thus holds up to k.

AXIOM 2: We have to show that d_T(σ_1, σ_2) = k ⇒ d_C(σ_1, σ_2) ≥ ⌈α · k⌉. By contradiction, suppose that G_C does not fulfill AXIOM 2 with respect to α. So there exist k′ < ⌈α · k⌉ and σ_1, σ_2 ∈ V_C such that d_C(σ_1, σ_2) = k′. Let π be a shortest path between σ_1 and σ_2 in G_C, and let (T_1, ..., T_ℓ) be the corresponding (maybe repeating) traces covering this path π in G_C. For T_i ∈ (T_1, ..., T_ℓ), let s_i and e_i be the corresponding start and end nodes of π in T_i. We will show that this path π implies the existence of a path in G_0 which violates α-consistency. Since G_0 is inferrable, G_0 fulfills AXIOM 2, thus we have: d_C(σ_1, σ_2) = Σ_{i=1}^{ℓ} d_{T_i}(s_i, e_i) = k′ < ⌈α · k⌉ ≤ d_{G_0}(σ_1, σ_2), since G_0 is α-consistent. However, G_0 also fulfills AXIOM 1, thus d_{T_i}(s_i, e_i) ≥ d_{G_0}(s_i, e_i). Thus Σ_{i=1}^{ℓ} d_{G_0}(s_i, e_i) ≤ Σ_{i=1}^{ℓ} d_{T_i}(s_i, e_i) < d_{G_0}(σ_1, σ_2): we have constructed a path from σ_1 to σ_2 in G_0 whose length is shorter than the distance between σ_1 and σ_2 in G_0, leading to the desired contradiction.

A.2 Proof of Lemma 3.5

First we construct a topology G_0 = (V_0, E_0) and then describe a trace set on this graph that generates the star graph G = (V, E). The node set V_0 consists of |V| anonymous nodes and |V| · (2 + τ) named nodes, where τ = ⌈3/(2α) − 1/2⌉. The first building block of G_0 is a copy of G. To each node v_i in the copy of G we add a chain consisting of 2 + τ nodes, first appending τ non-anonymous nodes w_(i,k) where 1 ≤ k ≤ τ, followed by an anonymous node u_i and finally a named node w_(i,τ+1). More formally, we can describe the link set as E_0 = E ∪ ⋃_{i=1}^{|V|} { {v_i, w_(i,1)}, {w_(i,1), w_(i,2)}, ..., {w_(i,τ), u_i}, {u_i, w_(i,τ+1)} }.
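The chain construction above can be sketched programmatically. The helper below is illustrative (names w_{v,k} and u_v follow the proof; the star graph is passed as node and edge sets):

```python
import math

def build_g0(nodes, edges, alpha):
    """Attach to every node v of the star graph a chain of tau named
    nodes, one anonymous node u_v, and one final named node, as in the
    proof of Lemma 3.5."""
    tau = math.ceil(3 / (2 * alpha) - 0.5)
    e0 = {frozenset(e) for e in edges}
    for v in nodes:
        chain = ([v] + [f"w_{v}_{k}" for k in range(1, tau + 1)]
                 + [f"u_{v}", f"w_{v}_{tau + 1}"])
        e0 |= {frozenset(p) for p in zip(chain, chain[1:])}
    return tau, e0

tau, e0 = build_g0({"a", "b"}, [("a", "b")], alpha=1)
# Each chain contributes tau + 2 links, so |E0| = |E| + |V| * (tau + 2).
print(tau, len(e0))
```

For α = 1 this gives τ = 1 and |E_0| = 1 + 2 · 3 = 7 links for a two-node, one-edge star graph.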
The trace set T consists of the following |V| + |E| shortest-path traces: the traces T_ℓ for ℓ ∈ {1, ..., |V|} are given by T(w_(ℓ,τ), w_(ℓ,τ+1)) (one for each node in V), and the traces T_ℓ for ℓ ∈ {|V| + 1, ..., |V| + |E|} are given by T(w_(i,τ), w_(j,τ)) for each link {v_i, v_j} in E. Note that G_0 = G_C, as each star appears as a separate anonymous node. The star graph G* corresponding to this trace set contains the |V| nodes *_i (corresponding to u_i). In order to prove the claim of the lemma, we have to show that two nodes *_i, *_j are conflicting according to Lemma 3.3 if and only if there is a link {v_i, v_j} in E. Case (i) does not apply because the minimum distance between any two nodes in the canonic graph is at least one, and ⌈α · d_{T_i}(*_i, w_(i,τ))⌉ = 1 and ⌈α · d_{T_i}(*_i, w_(i,τ+1))⌉ = 1. It remains to examine Case (ii).

"⇒": If {v_i, v_j} ∈ E and MAP(*_i) = MAP(*_j), there would be a path of length two between w_(i,τ) and w_(j,τ) in the topology generated by MAP; the trace set however contains a trace T(w_(i,τ), w_(j,τ)) of length 2τ + 1. So ⌈α · d_T(w_(i,τ), w_(j,τ))⌉ = ⌈α · (2τ + 1)⌉ = ⌈α · (2⌈3/(2α) − 1/2⌉ + 1)⌉ ≥ 3, which violates the α-consistency (Lemma 3.3 (ii)), and hence {*_i, *_j} ∈ E*.

"⇐": If {v_i, v_j} ∉ E, there is no trace T(w_(i,τ), w_(j,τ)); thus we have to prove that no trace T(w_(i′,τ), w_(j′,τ)) with {i′, j′} ≠ {i, j} leads to a conflict between *_i and *_j. We show that an even more general statement is true, namely that merging *_i and *_j does not violate the α-consistency for any pair of distinct non-anonymous nodes x_1, x_2. Since G_C = G_0 and the traces contain shortest paths only, the trace distance between two nodes in the same trace is the same as the distance in G_C.
The following tables contain the relevant lower bounds on distances in G_C and on µ(x_1, x_2) := d_C(x_1, *_i) + d_C(x_2, *_j), where x_1, x_2 ∈ {v_{i′}, v_{j′}, w_(i′,k), w_(j′,k) | 1 ≤ k ≤ τ + 1, i′ ≠ i, j′ ≠ i, j′ ≠ j}; we have to show that ⌈α · d_C(x_1, x_2)⌉ ≤ µ(x_1, x_2) = d_C(x_1, *_i) + d_C(x_2, *_j).

d_C(·,·) ≥   | v_{i′}      | v_{j′}      | w_(i′,k_1)      | w_(j′,k_1)
v_{i′}       | 0           | 1           | k_1             | k_1 + 1
v_{j′}       | 1           | 0           | k_1 + 1         | k_1
w_(i′,k_2)   | k_2         | k_2 + 1     | |k_2 − k_1|     | k_1 + 1 + k_2
w_(j′,k_2)   | k_2 + 1     | k_2         | k_1 + 1 + k_2   | |k_2 − k_1|
*_i          | τ + 2       | τ + 1       | 2 + τ + k_1     | τ − k_1 + 1
*_j          | τ + 2       | τ + 2       | 2 + τ + k_1     | 2 + τ + k_1

µ(·,·) ≥     | v_{i′}       | v_{j′}       | w_(i′,k_1)         | w_(j′,k_1)
v_{i′}       | 2τ + 4       | 2τ + 3       | 4 + 2τ + k_1       | 4 + 2τ + k_1
v_{j′}       | 2τ + 3       | 2τ + 4       | 2τ + 3 + k_1       | 3 + 2τ + k_1
w_(i′,k_2)   | 4 + 2τ + k_2 | 4 + 2τ + k_2 | 4 + 2τ + k_1 + k_2 | 4 + 2τ + k_1 + k_2
w_(j′,k_2)   | 2τ − k_2 + 3 | 2τ − k_2 + 3 | 2τ + 3 + k_1 − k_2 | 2τ + k_1 − k_2 + 3

If x_1 = w_(j′,k_2), then it holds for all x_2 that d_T(x_1, x_2) ≤ 2τ + 1, whereas µ(x_1, x_2) = d_C(x_1, *_i) + d_C(x_2, *_j) ≥ 2τ + 2. In all other cases it holds at least that d_C(x_1, x_2) < µ(x_1, x_2). Thus ⌈α · d_C(x_1, x_2)⌉ ≤ d_C(x_1, *_i) + d_C(x_2, *_j).

Figure 5: Visualization for the proof of Lemma 3.7. Solid lines denote links, dashed lines denote paths (of annotated length).

A.3 Proof of Lemma 3.7

We have to show that the paths in the traces correspond to paths in G_γ. Let T ∈ T, and σ_1, σ_2 ∈ T. Let π be the sequence of nodes in T connecting σ_1 and σ_2. This is also a path in G_γ: since α > 0, for any two symbols σ_1, σ_2 ∈ T it holds that MAP(σ_1) ≠ MAP(σ_2). We now construct an example showing that the α for which G_γ fulfills AXIOM 2 can be arbitrarily small. Consider the graph represented in Figure 5. Let T_1 = (s, ..., t), T_2 = (s, *_1, ..., m_1), T_3 = (m_1, ..., *_2, m_2), T_4 = (m_2, *_3, ..., m_3), T_5 = (m_3, ..., *_4, t). We assume α = 1.
By changing the parameters k = d_C(s, t) and k′ = d_C(m_1, *_1) = d_C(m_1, *_2) = d_C(m_3, *_3) = d_C(m_3, *_4), we can modulate the links of the corresponding star graph G*. Using d_{T_1}(s, t) = k, observe that k > 2 ⇔ {*_1, *_4} ∈ E*. Similarly, k > 2(k′ + 1) ⇔ {*_1, *_3} ∈ E* ∧ {*_2, *_4} ∈ E*, and k > 2(k′ + 2) ⇔ {*_1, *_2} ∈ E* ∧ {*_3, *_4} ∈ E*. Taking k = 2k′ + 4, we thus have E* = {{*_1, *_3}, {*_2, *_4}, {*_1, *_4}}. Thus, we here construct a situation where *_1 and *_2 as well as *_3 and *_4 can be merged without breaking the consistency requirement, but where merging both pairs simultaneously leads to a topology G that is only 4/k-consistent, since d_G(s, t) = 4. This ratio can be made arbitrarily small provided we choose k′ = (k − 4)/2.

A.4 Proof of Lemma 3.11

In the worst case, each star in the trace set represents a different node in G_1, so the maximal number of nodes in any topology in G_T is the total number of non-anonymous nodes plus the total number of stars in T. This number of nodes is reached in the topology G_C. According to Definition 3.4, only non-adjacent stars in G* can represent the same node in an inferrable topology. Thus, the stars in the trace set T must originate from at least γ(G*) different nodes. As a consequence, |V_1| − |V_2| ≤ s − γ(G*), which can reach s − 1 for a trace set T = {T_i = (v, *_i, w) | 1 ≤ i ≤ s}. Analogously, |V_1|/|V_2| ≤ (n + s)/(n + γ(G*)) ≤ (2 + s)/3. Observe that each occurrence of a node in a trace describes at most two edges. If all anonymous nodes are merged into γ(G*) nodes in G_1 and are separate nodes in G_2, the difference in the number of edges is at most 2(s − γ(G*)). Analogously, |E_1|/|E_2| ≤ (ν + 2s)/(ν + 2) ≤ s. The trace set T = {T_i = (v, *_i, w) | 1 ≤ i ≤ s} reaches this bound.

A.5 Proof of Lemma 3.14

A "lower bound" example follows from Figure 2.
Essentially, this is also the worst case: note that the difference in the shortest distance between a pair of nodes u and v in G_1 and G_2 is only greater than zero if the shortest path between them involves at least one anonymous node. Hence the shortest distance between such a pair is at least two. The longest shortest distance between the same pair of nodes in another inferred topology visits all nodes in the network, i.e., its length is bounded by N − 1.

A.6 Proof of Lemma 3.16

Each occurrence of a node in a trace describes at most two links incident to this node. For the degree difference we only have to consider the links incident to at least one anonymous node, as the number of links between non-anonymous nodes is the same in G_1 and G_2. If all anonymous nodes can be merged into γ(G*) nodes in G_1 and all anonymous nodes are separate in G_2, the difference in the maximum degree is thus at most 2(s − γ(G*)), as there can be at most s − γ(G*) + 1 nodes merged into one node and the minimal maximum degree of a node in G_2 is two. This bound is tight, as the trace set T = {T_i = (v_i, *, w_i) | 1 ≤ i ≤ s} containing s stars can be represented by a graph with one anonymous node of degree 2s or by a graph with s anonymous nodes of degree two each. For the ratio of the maximal degree we can ignore links between non-anonymous nodes as well, as these only decrease the ratio. The highest number of links incident at a node with one endpoint in the set of anonymous nodes is s − γ(G*) + 1 for non-anonymous nodes and 2(s − γ(G*) + 1) for anonymous nodes, whereas the lowest number is two.

A.7 Proof of Lemma 4.4

The proof for the upper bound is analogous to the case without full exploration. To prove that this bound can be reached, we need to add traces to the trace set to ensure that all pairs of named nodes appear in a trace, without changing the degrees of the anonymous nodes.
To this end, we add to G_0 a named node u for each pair {v, w} that does not yet appear in a trace, together with a trace T = (v, u, w). This does not increase the maximum degree and guarantees full exploration.

A.8 Proof of Lemma 4.5

We first prove the upper bound for the relative case. Note that the maximal distance between two anonymous nodes MAP(*_1) and MAP(*_2) in an inferred topology component cannot be larger than twice the distance of two named nodes u and v: from Definition 4.1 we know that there must be a trace in T connecting u and v, and the maximal distance δ of a pair of named nodes is given by the path of the trace that includes u and v. Therefore, and since any trace starts and ends with a named node, any star is at distance at most δ/2 from a named node. Therefore, the maximal distance between MAP(*_1) and MAP(*_2) is δ/2 + δ/2 to get to the corresponding closest named nodes, plus δ for the connection between the named nodes. As, according to Lemma 4.2, the distance between named nodes is the same in all inferred topologies, the diameter of inferred topologies can vary at most by a factor of two.

We now construct an example that reaches this bound. Consider a topology consisting of a center node c and four rays of length k. Let u_1, u_2, u_3, u_4 be the "end nodes" of each ray. We assume that all these nodes are named. Now add to the topology two chains of anonymous nodes of length 2k + 1, one between nodes u_1 and u_2 and one between nodes u_3 and u_4. The trace set consists of the minimal trace set to obtain a fully explored topology: six traces of length 2k + 1 between each pair of end nodes u_1, u_2, u_3, u_4. Now we add two traces of length 2k + 1, one between nodes u_1 and u_2 and one between nodes u_3 and u_4. These traces explore the anonymous chains and have the following shape: T_7 = (u_1, *_1, ..., *_k, σ, *_{k+1}, ..., *_{2k}, u_2) and T_8 = (u_3, *_{2k+1}, ..., *_{3k}, σ′, *_{3k+1}, ..., *_{4k}, u_4), where σ and σ′ are stars.
Let G_1 = G_C and let G_2 be the inferrable graph where σ and σ′ are merged. The resulting diameters are DIAM(G_1) = 4k + 2 and DIAM(G_2) = 2k + 1. Since s = 4k + 2, the difference can thus be as large as s/2. Note that this construction also yields the bound of the relative difference: DIAM(G_1)/DIAM(G_2) = (4k + 2)/(2k + 1) = 2.

A.9 Proof of Lemma 4.6

Given the number of stars s, we construct a trace set T with two inferrable graphs such that in one graph the number of triangles with anonymous nodes is s(s − 1)/2 and in the other graph there are no such triangles. As a first step, we add s traces T_i = (v_i, *_i, w) to the trace set T, where 1 ≤ i ≤ s. To make this trace set fully explored, we add traces for each pair v_i, v_j to T as a second step, i.e., traces T_{i,j} = (v_i, v_j) for 1 ≤ i < j ≤ s. The resulting trace set contains s stars and none of the stars are in conflict with each other. Thus the graph G_1 merging all stars into one anonymous node is inferrable from this trace set, and the number of triangles containing the anonymous node is s(s − 1)/2. Let G_2 be the canonic graph of this trace set. This graph does not contain any triangles with anonymous nodes, and hence the difference C_3(G_1) − C_3(G_2) is s(s − 1)/2. To see that the ratio can be unbounded, consider the trace set {(v, *_1, w), (u, *_2, w), (u, v)}. This set is fully explored since all pairs of named nodes appear in a trace. The graph where the two stars are merged has one triangle, and the canonic graph has no triangle.
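The trace set of the Lemma 4.6 proof can be replayed directly. The sketch below (illustrative names) builds the merged graph G_1 and the canonic graph G_2 and compares their triangle counts:

```python
from itertools import combinations

def triangles(adj):
    """Count 3-cycles in an adjacency-set graph."""
    return sum(1 for a, b, c in combinations(sorted(adj), 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

def add_edge(adj, a, b):
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

s = 4
vs = [f"v{i}" for i in range(s)]

# Named part shared by both topologies: traces T_{i,j} = (v_i, v_j).
g1, g2 = {}, {}
for g in (g1, g2):
    for a, b in combinations(vs, 2):
        add_edge(g, a, b)

# G1: all stars of the traces (v_i, *_i, w) merged into one node.
for v in vs:
    add_edge(g1, "*", v)
add_edge(g1, "*", "w")

# G2 = canonic graph: each star stays a separate anonymous node.
for i, v in enumerate(vs):
    add_edge(g2, f"*{i}", v)
    add_edge(g2, f"*{i}", "w")

print(triangles(g1) - triangles(g2))  # s(s-1)/2 = 6
```

In G_1 the merged star closes a triangle with every pair v_i, v_j, while in the canonic graph no star closes a triangle, so the difference is exactly s(s − 1)/2.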
Traceroute measurements are one of our main instruments to shed light onto the structure and properties of today's complex networks such as the Internet. This paper studies the feasibility and infeasibility of inferring the network topology given traceroute data from a worst-case perspective, i.e., without any probabilistic assumptions on, e.g., the nodes' degree distribution. We attend to a scenario where some of the routers are anonymous, and propose two fundamental axioms that model two basic assumptions on the traceroute data: (1) each trace corresponds to a real path in the network, and (2) the routing paths are at most a factor 1/α off the shortest paths, for some parameter α ∈ (0,1]. In contrast to existing literature that focuses on the cardinality of the set of (often only minimal) inferrable topologies, we argue that a large number of possible topologies alone is often unproblematic, as long as the networks have a similar structure. We hence seek to characterize the set of topologies inferred with our axioms. We introduce the notion of star graphs whose colorings capture the differences among inferred topologies; it also allows us to construct inferred topologies explicitly. We find that in general, inferrable topologies can differ significantly in many important aspects, such as the nodes' distances or the number of triangles. These negative results are complemented by a discussion of a scenario where the trace set is best possible, i.e., "complete". It turns out that while some properties such as the node degrees are still hard to measure, a complete trace set can help to determine global properties such as the connectivity.
The classic tool to discover Internet topologies is traceroute @cite_13 . Unfortunately, there are several problems with this approach that render topology inference difficult, such as or , which has motivated researchers to develop new tools such as @cite_11 @cite_1 . Another complication stems from the fact that routers may appear as stars in the trace since they are overloaded or since they are configured not to send out any ICMP responses. The lack of complete information in the trace set renders the accurate characterization of Internet topologies difficult.
Misleading Stars: What Cannot Be Measured in the Internet?
Surprisingly little is known about the structure of many important complex networks such as the Internet. One reason is the inherent difficulty of performing accurate, large-scale and preferably synchronous measurements from a large number of different vantage points. Other reasons are privacy and information hiding issues: for example, network providers may seek to hide the details of their infrastructure to avoid tailored attacks. Since knowledge of the network characteristics is crucial for many applications (e.g., RMTP [12], or PaDIS [13]), the research community implements measurement tools to analyze at least the main properties of the network. The results can then, e.g., be used to design more efficient network protocols in the future. This paper focuses on the most basic characteristic of the network: its topology. The classic tool to study topological properties is traceroute. Traceroute allows us to collect traces from a given source node to a set of specified destination nodes. A trace between two nodes contains a sequence of identifiers describing the route traveled by the packet. However, not every node along such a path is configured to answer with its identifier. Rather, some nodes may be anonymous in the sense that they appear as stars ('*') in a trace. Anonymous nodes exacerbate the exploration of a topology because already a small number of anonymous nodes may increase the spectrum of inferrable topologies that correspond to a trace set T. This paper is motivated by the observation that the mere number of inferrable topologies alone does not contradict the usefulness or feasibility of topology inference; if the set of inferrable topologies is homogeneous in the sense that the different topologies share many important properties, the generation of all possible graphs can be avoided: an arbitrary representative may characterize the underlying network accurately.
Therefore, we identify important topological metrics such as diameter or maximal node degree and examine how "close" the possible inferred topologies are with respect to these metrics.

Our Contribution

This paper initiates the study and characterization of topologies that can be inferred from a given trace set computed with the traceroute tool. While existing literature assuming a worst-case perspective has mainly focused on the cardinality of minimal topologies, we go one step further and examine specific topological graph properties. We introduce a formal theory of topology inference by proposing basic axioms (i.e., assumptions on the trace set) that are used to guide the inference process. We present a novel and, we believe, appealing definition for the isomorphism of inferred topologies which is aware of traffic paths; it is motivated by the observation that although two topologies look equivalent up to a renaming of anonymous nodes, the same trace set may result in different paths. Moreover, we initiate the study of two extremes: in the first scenario, we only require that each link appears at least once in the trace set; interestingly, however, it turns out that this is often not sufficient, and we propose a "best case" scenario where the trace set is, in some sense, complete: it contains paths between all pairs of nodes.

The main result of the paper is a negative one. It is shown that already a small number of anonymous nodes in the network renders topology inference difficult. In particular, we prove that in general, the possible inferrable topologies differ in many crucial aspects. We introduce the concept of the star graph of a trace set that is useful for the characterization of inferred topologies. In particular, colorings of the star graphs allow us to constructively derive inferred topologies.
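The coloring-based construction can be sketched mechanically: every star occurrence becomes a fresh anonymous node (yielding the canonic graph), and a coloring that assigns the same color to non-conflicting stars merges them into one node. A minimal illustrative sketch (names and representation are assumptions, not the paper's code):

```python
def infer_topology(traces, coloring=None):
    """Build an inferred topology from traces.

    traces: sequences of symbols; entries like '*1' are stars.
    coloring: optional dict mapping star ids to color classes; stars
    with the same color are merged into one anonymous node. Without a
    coloring, each star stays separate, which yields the canonic graph.
    """
    coloring = coloring or {}
    adj = {}
    for trace in traces:
        mapped = [coloring.get(sym, sym) for sym in trace]
        for a, b in zip(mapped, mapped[1:]):
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    return adj

traces = [("v", "*1", "u"), ("w", "*2", "u")]
gc = infer_topology(traces)                          # canonic graph: 5 nodes
gm = infer_topology(traces, {"*1": "*", "*2": "*"})  # 1-coloring: 4 nodes
print(len(gc), len(gm))  # 5 4
```

Note that the sketch does not check whether a coloring is admissible (i.e., that merged stars are non-adjacent in the star graph); that check corresponds to the consistency conditions developed in the paper.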
(Although the general problem of computing the set of inferrable topologies is related to NP-hard problems such as minimal graph coloring and graph isomorphism, some important instances of inferrable topologies can be computed efficiently.) The minimal coloring (i.e., the chromatic number) of the star graph defines a lower bound on the number of anonymous nodes from which the stars in the traces could originate. And the number of possible colorings of the star graph (a function of the chromatic polynomial of the star graph) gives an upper bound on the number of inferrable topologies. We show that this bound is tight in the sense that there are situations where there indeed exist so many inferrable topologies. Especially, there are problem instances where the cardinality of the set of inferrable topologies equals the Bell number. This insight complements (and generalizes to arbitrary, not only minimal, inferrable topologies) existing cardinality results. Finally, we examine the scenario of fully explored networks for which "complete" trace sets are available. As expected, inferrable topologies are more homogeneous and can be characterized well with respect to many properties such as node distances. However, we also find that other properties are inherently difficult to estimate. Interestingly, our results indicate that full exploration is often useful for global properties (such as connectivity) while it does not help much for more local properties (such as node degree).

Organization

The remainder of this paper is organized as follows. Our theory of topology inference is introduced in Section 2. The main contribution is presented in Sections 3 and 4 where we derive bounds for general trace sets and fully explored networks, respectively. In Section 5, the paper concludes with a discussion of our results and directions for future research. Due to space constraints, some proofs are moved to the appendix.
Model

Let T denote the set of traces obtained from probing (e.g., by traceroute) a (not necessarily connected) undirected network G 0 = (V 0 , E 0 ) with nodes or vertices V 0 (the set of routers) and links or edges E 0 . We assume that G 0 is static during the probing time (or that probing is instantaneous). Each trace T (u, v) ∈ T describes a path connecting two nodes u, v ∈ V 0 ; when u and v do not matter or are clear from the context, we simply write T . Moreover, let d T (u, v) denote the distance (number of hops) between two nodes u and v in trace T . We define d G 0 (u, v) to be the corresponding shortest path distance in G 0 . Note that a trace between two nodes u and v may not describe the shortest path between u and v in G 0 . The nodes in V 0 fall into two categories: anonymous nodes and non-anonymous (or shorter: named) nodes. Therefore, each trace T ∈ T describes a sequence of symbols representing anonymous and non-anonymous nodes. We make the natural assumption that the first and the last node in each trace T is non-anonymous. Moreover, we assume that traces are given in a form where non-anonymous nodes appear with a unique, anti-aliased identifier (i.e., the multiple IP addresses corresponding to different interfaces of a node are resolved to one identifier); an anonymous node is represented as * ("star") in the traces. For our formal analysis, we assign to each star in a trace set T a unique identifier i: * i . (Note that except for the numbering of the stars, we allow identical copies of T in T , and we do not make any assumptions on the implications of identical traces: they may or may not describe the same paths.) Thus, a trace T ∈ T is a sequence of symbols taken from an alphabet Σ = ID ∪ (∪ i * i ), where ID is the set of non-anonymous node identifiers (IDs): Σ is the union of the (anti-aliased) non-anonymous nodes and the set of all stars (with their unique identifiers) appearing in the trace set.
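The preprocessing step just described, i.e., giving every star a unique identifier, can be sketched in a few lines. The helper name `number_stars` is ours, not the paper's; it merely illustrates the convention that raw anonymous hops "*" become the distinct symbols *1, *2, ...:

```python
from itertools import count

# Hypothetical helper (our name, not the paper's): rewrite raw traceroute
# output, where every anonymous hop appears as "*", so that each star
# receives a unique identifier "*1", "*2", ... as required by the model.
def number_stars(raw_traces):
    ctr = count(1)
    return [tuple(f"*{next(ctr)}" if sym == "*" else sym for sym in trace)
            for trace in raw_traces]

traces = number_stars([("u", "*", "v"), ("v", "*", "w", "*", "u")])
# Every trace still starts and ends with a named node; stars are now unique.
assert traces[0] == ("u", "*1", "v")
assert traces[1] == ("v", "*2", "w", "*3", "u")
```

Note that, as in the model, identical raw traces deliberately receive different star identifiers, since nothing guarantees they describe the same paths.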
The main challenge in topology inference is to determine which stars in the traces may originate from which anonymous nodes. Henceforth, let n = |ID| denote the number of non-anonymous nodes and let s = |∪ i * i | be the number of stars in T ; similarly, let a denote the number of anonymous nodes in a topology. Let N = n + s = |Σ| be the total number of symbols occurring in T . Clearly, the process of topology inference depends on the assumptions on the measurements. In the following, we postulate the fundamental axioms that guide the reconstruction. First, we make the assumption that each link of G 0 is visited by the measurement process, i.e., it appears as a transition in the trace set T . In other words, we are only interested in inferring the (sub-)graph for which measurement data is available. AXIOM 0 (Complete Cover): Each edge of G 0 appears at least once in some trace in T . The next fundamental axiom assumes that traces always represent paths on G 0 . AXIOM 1 (Reality Sampling): For every trace T ∈ T , if the distance between two symbols σ 1 , σ 2 ∈ T is d T (σ 1 , σ 2 ) = k, then there exists a path (i.e., a walk without cycles) of length k connecting the two (named or anonymous) nodes σ 1 and σ 2 in G 0 . The following axiom captures the consistency of the routing protocol on which the traceroute probing relies. In the current Internet, policy routing is known to have an impact both on the route length [14] and on the convergence time [11]. AXIOM 2 (α-(Routing) Consistency): There exists an α ∈ (0, 1] such that, for every trace T ∈ T , if d T (σ 1 , σ 2 ) = k for two entries σ 1 , σ 2 in trace T , then the shortest path connecting the two (named or anonymous) nodes corresponding to σ 1 and σ 2 in G 0 has distance at least αk. Note that if α = 1, the routing is a shortest path routing. Moreover, note that if α were 0, there could be loops in the paths, and there would be hardly any topological constraints, rendering almost any topology inferrable.
(For example, the complete graph with one anonymous router is always a solution.) A natural axiom to merge traces is the following. AXIOM 3 (Trace Merging): For two traces T 1 , T 2 ∈ T for which ∃σ 1 , σ 2 , σ 3 , where σ 2 refers to a named node, such that d T 1 (σ 1 , σ 2 ) = i and d T 2 (σ 2 , σ 3 ) = j, it holds that the distance between the two nodes corresponding to σ 1 and σ 3 in G 0 is at most d G 0 (σ 1 , σ 3 ) ≤ i + j. Any topology G which is consistent with these axioms (when applied to T ) is called inferrable from T . Definition 2.1 (Inferrable Topologies). A topology G is (α-consistently) inferrable from a trace set T if axioms AXIOM 0, AXIOM 1, AXIOM 2 (with parameter α), and AXIOM 3 are fulfilled. We will refer by G T to the set of topologies inferrable from T . Please note the following important observation. Remark 2.2. While we generally have that G 0 ∈ G T , since T was generated from G 0 and AXIOM 0, AXIOM 1, AXIOM 2 and AXIOM 3 are fulfilled by definition, there can be situations where an α-consistent trace set for G 0 contradicts AXIOM 0: some edges may not appear in T . If this is the case, we will focus on the inferrable topologies containing the links we know, even if G 0 may have additional, hidden links that cannot be explored due to the high α value. The main objective of a topology inference algorithm ALG is to compute topologies which are consistent with these axioms. Concretely, ALG's input is the trace set T together with the parameter α specifying the assumed routing consistency. Essentially, the goal of any topology inference algorithm ALG is to compute a mapping of the symbols Σ (appearing in T ) to nodes in an inferred topology G; or, in case the input parameters α and T are contradictory, to reject the input. This mapping of symbols to nodes implicitly describes the edge set of G as well: the edge set is unique as all the transitions of the traces in T are now unambiguously tied to two nodes.
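The routing-consistency axiom can be checked mechanically for a candidate topology. The following sketch (helper names are ours; the graph is an adjacency-set dictionary) compares every trace sub-path against BFS shortest-path distances, in the spirit of AXIOM 2:

```python
from collections import deque

# Plain BFS: hop distances from src in an unweighted adjacency-set graph.
def bfs_dist(adj, src):
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Sketch of AXIOM 2 (alpha-consistency): for every pair of entries at trace
# distance k, the shortest path in the topology must have length >= alpha*k.
def alpha_consistent(adj, trace, alpha):
    for i in range(len(trace)):
        dist = bfs_dist(adj, trace[i])
        for j in range(i + 1, len(trace)):
            k = j - i                      # distance inside the trace
            if dist.get(trace[j], float("inf")) < alpha * k:
                return False
    return True

# Triangle u-a-v with shortcut u-v: the trace (u, a, v) has length 2 while
# d_G(u, v) = 1, so it is 1/2-consistent but not shortest-path routed.
adj = {"u": ["a", "v"], "a": ["u", "v"], "v": ["a", "u"]}
assert alpha_consistent(adj, ("u", "a", "v"), 0.5)
assert not alpha_consistent(adj, ("u", "a", "v"), 1.0)
```

A topology inference algorithm could reject its input when no assignment of stars to nodes makes all traces pass this test for the given α.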
So far, we have ignored an important and non-trivial question: When are two topologies G 1 , G 2 ∈ G T different (and hence appear as two independent topologies in G T )? In this paper, we pursue the following approach: We are not interested in purely topological isomorphisms, but we care about the identifiers of the non-anonymous nodes, i.e., we are interested in the locations of the non-anonymous nodes and their distance to other nodes. For anonymous nodes, the situation is slightly more complicated: one might think that as the nodes are anonymous, their "names" do not matter. Consider however the example in Figure 1: the two inferrable topologies have two anonymous nodes, once where { * 1 , * 2 } plus { * 3 , * 4 } are merged into one node each in the inferrable topology and once where { * 1 , * 4 } plus { * 2 , * 3 } are merged into one node each in the inferrable topology. In this paper, we regard the two topologies as different, for the following reason: Assume that there are two paths in the network, one u * 2 v (e.g., during day time) and one u * 3 v (e.g., at night); clearly, this traffic has different consequences and hence we want to be able to distinguish between the two topologies described above. In other words, our notion of isomorphism of inferred topologies is path-aware. It is convenient to introduce the following MAP function. Essentially, an inference algorithm computes such a mapping. Definition 2.3 (Mapping Function MAP). Let G = (V, E) ∈ G T be a topology inferrable from T . A topology inference algorithm describes a surjective mapping function MAP : Σ → V . For the set of non-anonymous nodes in Σ, the mapping function is bijective; and each star is mapped to exactly one node in V , but multiple stars may be assigned to the same node. Note that for any σ ∈ Σ, MAP(σ) uniquely identifies a node v ∈ V . 
More specifically, we assume that MAP assigns labels to the nodes in V : in case of a named node, the label is simply the node's identifier; in case of anonymous nodes, the label is * β , where β is the concatenation of the sorted indices of the stars which are merged into node * β . With this definition, two topologies G 1 , G 2 ∈ G T differ if and only if they do not describe the identical (MAP-) labeled topology. We will use this MAP function also for G 0 , i.e., we will write MAP(σ) to refer to a symbol σ's corresponding node in G 0 . In the remainder of this paper, we will often assume that AXIOM 0 is given and will not explicitly cover it in our proofs. Moreover, AXIOM 3 is redundant: it is sufficient to show that AXIOM 1 holds to prove that AXIOM 3 is satisfied. Lemma 2.4. AXIOM 1 implies AXIOM 3. Proof. Let T be a trace set, and G ∈ G T . Let σ 1 , σ 2 , σ 3 s.t. ∃T 1 , T 2 ∈ T with σ 1 ∈ T 1 , σ 3 ∈ T 2 and σ 2 ∈ T 1 ∩ T 2 . Let i = d T 1 (σ 1 , σ 2 ) and j = d T 2 (σ 2 , σ 3 ). Since any inferrable topology G fulfills AXIOM 1, there is a path π 1 of length at most i between the nodes corresponding to σ 1 and σ 2 in G and a path π 2 of length at most j between the nodes corresponding to σ 2 and σ 3 in G. The concatenation of π 1 and π 2 yields a walk of length at most i + j between the nodes corresponding to σ 1 and σ 3 , and the shortest path can only be shorter; hence the claim follows.

Inferrable Topologies

What insights can be obtained from topology inference with minimal assumptions, i.e., with our axioms? And what is the structure of the inferrable topology set G T ? We first make some general observations and then examine different graph metrics in more detail.

Basic Observations

Although the generation of the entire topology set G T may be computationally hard, some instances of G T can be computed efficiently. The simplest possible inferrable topology is the so-called canonic graph G C : the topology which assumes that all stars in the traces refer to different anonymous nodes.
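This simplest inferrable topology can be assembled in a single linear pass over the trace set by linking consecutive entries; a minimal sketch (the helper name is ours):

```python
# Sketch of the linear-time construction of the canonic graph G_C: one vertex
# per symbol (every star kept as a distinct anonymous node), one edge per
# pair of consecutive entries in any trace.
def canonic_graph(traces):
    adj = {}
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    return adj

traces = [("u", "*1", "v"), ("v", "*2", "w")]
gc = canonic_graph(traces)
assert gc["v"] == {"*1", "*2"}     # both stars stay separate in G_C
assert len(gc) == 5                # N = n + s = 3 named nodes + 2 stars
```

The running time is linear in the size of T, matching the construction described in the text.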
Thus, if a trace set T contains n = |ID| named nodes and s stars, G C will contain |V (G C )| = N = n + s nodes. Definition 3.1 (Canonic Graph G C ). The canonic graph is defined by G C (V C , E C ) where V C = Σ is the set of (anti-aliased) nodes appearing in T (where each star is considered a unique anonymous node) and where {σ 1 , σ 2 } ∈ E C ⇔ ∃T ∈ T , T = (. . . , σ 1 , σ 2 , . . .), i.e., σ 1 and σ 2 appear consecutively in some trace T (σ 1 , σ 2 ∈ T can be either non-anonymous nodes or stars). Let d C (σ 1 , σ 2 ) denote the canonic distance between two nodes, i.e., the length of a shortest path in G C between the nodes σ 1 and σ 2 . Note that G C is indeed an inferrable topology. In this case, MAP : Σ → Σ is the identity function. The proof appears in the appendix. Theorem 3.2. G C is inferrable from T . G C can be computed efficiently from T : represent each non-anonymous node and star as a separate node, and for any pair of consecutive entries (i.e., nodes) in a trace, add the corresponding link. The time complexity of this construction is linear in the size of T . With the definition of the canonic graph, we can derive the following lemma which establishes, from constraints on the routing paths, a necessary condition under which two stars cannot represent the same node in G 0 . This is useful for the characterization of inferred topologies. Lemma 3.3. Let * 1 , * 2 be two stars occurring in some traces in T . * 1 and * 2 cannot be mapped to the same node, i.e., MAP( * 1 ) = MAP( * 2 ), without violating the axioms in the following conflict situations: (i) if * 1 ∈ T 1 and * 2 ∈ T 2 , and T 1 describes a too long path between the anonymous node MAP( * 1 ) and a non-anonymous node u, i.e., α · d T 1 ( * 1 , u) > d C (u, * 2 ); (ii) if * 1 ∈ T 1 and * 2 ∈ T 2 , and there exists a trace T that contains a path between two non-anonymous nodes u and v with α · d T (u, v) > d C (u, * 1 ) + d C (v, * 2 ). Proof. The first proof is by contradiction.
Assume MAP( * 1 ) = MAP( * 2 ) represents the same node v of G 0 , and that α · d T 1 (v, u) > d C (u, v). Then we know from AXIOM 2 that d C (v, u) ≥ d G 0 (v, u) ≥ α · d T 1 (u, v) > d C (v, u), which yields the desired contradiction. Similarly for the second proof. Assume for the sake of contradiction that MAP( * 1 ) = MAP( * 2 ) represents the same node w of G 0 , and that α · d T (u, v) > d C (u, w) + d C (v, w). Due to the triangle inequality, we have that d C (u, w) + d C (v, w) ≥ d C (u, v) and hence, α · d T (u, v) > d C (u, v), which contradicts the fact that G C is inferrable (Theorem 3.2). Lemma 3.3 can be applied to show that a topology is not inferrable from a given trace set because it merges (i.e., maps to the same node) two stars in a manner that violates the axioms. Let us introduce a useful concept for our analysis: the star graph that describes the conflicts between stars. Definition 3.4 (Star Graph G * ). The star graph G * (V * , E * ) of a trace set T has one vertex per star, V * = { * 1 , . . . , * s }, and an edge { * i , * j } ∈ E * whenever * i and * j must not be merged according to Lemma 3.3. Note that the star graph G * is unique and can be computed efficiently for a given trace set T : Conditions (i) and (ii) can be checked by computing G C . However, note that while G * specifies some stars which cannot be merged, the construction is not sufficient: as Lemma 3.3 is based on G C , additional links might be needed to characterize the set of inferrable and α-consistent topologies G T exactly. In other words, a topology G obtained by merging stars that are adjacent in G * is never inferrable (G ∉ G T ); however, merging non-adjacent stars does not guarantee that the resulting topology is inferrable. What do star graphs look like? The answer is: arbitrary. The following lemma states that the set of possible star graphs is equivalent to the class of general graphs; this claim holds for any α. The proof appears in the appendix. Lemma 3.5. For any graph G, there exists a network G 0 and a trace set T on G 0 whose star graph G * is isomorphic to G. The problem of computing inferrable topologies is related to the vertex colorings of the star graphs.
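Before turning to colorings, the conflict-edge construction can be sketched as follows. This is a simplified illustration, not the paper's algorithm: for brevity it checks only condition (ii) of Lemma 3.3, and only for the named endpoints of each trace (a full implementation would also check condition (i) and all symbol pairs):

```python
from collections import deque
from itertools import combinations
from math import inf

def bfs_dist(adj, src):
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Sketch: conflict edges of the star graph G* via Lemma 3.3, condition (ii),
# evaluated on the canonic graph G_C and on trace endpoints only.
def star_graph(traces, alpha):
    adj = {}
    for t in traces:                       # build G_C first
        for a, b in zip(t, t[1:]):
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    stars = sorted(s for s in adj if s.startswith("*"))
    dist = {v: bfs_dist(adj, v) for v in adj}
    conflicts = set()
    for s1, s2 in combinations(stars, 2):
        for t in traces:
            u, v, k = t[0], t[-1], len(t) - 1   # named endpoints, trace length
            if (alpha * k > dist[u].get(s1, inf) + dist[v].get(s2, inf)
                    or alpha * k > dist[u].get(s2, inf) + dist[v].get(s1, inf)):
                conflicts.add(frozenset((s1, s2)))
    return stars, conflicts

# Two parallel short traces plus one long detour between the same endpoints
# force the two stars apart under shortest-path routing (alpha = 1), while a
# small alpha tolerates the detour and leaves G* edgeless.
traces = [("u", "*1", "v"), ("u", "*2", "v"), ("u", "a", "b", "c", "v")]
assert star_graph(traces, 1.0)[1] == {frozenset(("*1", "*2"))}
assert star_graph(traces, 0.4)[1] == set()
```

As the text cautions, absence of an edge in this (partial) G* does not guarantee that merging the corresponding stars yields an inferrable topology.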
We will use the following definition which relates a vertex coloring of G * to an inferrable topology G by contracting independent stars in G * to become one anonymous node in G. For example, observe that a maximum coloring treating every star in the trace as a separate anonymous node describes the inferrable topology G C . Definition 3.6 (Coloring-Induced Graph). Let γ denote a coloring of G * which assigns colors 1, . . . , k to the vertices of G * : γ : V * → {1, . . . , k}. We require that γ is a proper coloring of G * , i.e., that conflicting stars are assigned different colors: {u, v} ∈ E * ⇒ γ(u) ≠ γ(v). G γ is defined as the topology induced by γ: G γ describes the graph G C where nodes of the same color are contracted, i.e., two stars * i and * j represent the same node in G γ , MAP( * i ) = MAP( * j ), if and only if γ( * i ) = γ( * j ). The following two lemmas establish an intriguing relationship between colorings of G * and inferrable topologies. Also note that Definition 3.6 implies that two different colorings of G * define two non-isomorphic inferrable topologies. We first show that while a coloring-induced topology always fulfills AXIOM 1, the routing consistency may be sacrificed. The proof appears in the appendix. Lemma 3.7. Let γ be a proper coloring of G * . The coloring-induced topology G γ fulfills AXIOM 2 with a routing consistency of α′, for some positive α′. An inferrable topology always defines a proper coloring on G * . Lemma 3.8. Let T be a trace set and G * its corresponding star graph. If a topology G is inferrable from T , then G induces a proper coloring on G * . Proof. For any α-consistent inferrable topology G there exists some mapping function MAP that assigns each symbol of T to a corresponding node in G (cf. Definition 2.3), and this mapping function gives a coloring on G * (i.e., merged stars appear as nodes of the same color in G * ).
The coloring must be proper: due to Lemma 3.3, an inferrable topology can never merge adjacent nodes of G * . The colorings of G * allow us to derive an upper bound on the cardinality of G T . Theorem 3.9. Given a trace set T sampled from a network G 0 and G T , the set of topologies inferrable from T , it holds that Σ_{k=γ(G * )}^{|V * |} P (G * , k)/k! ≥ |G T |, where γ(G * ) is the chromatic number of G * and P (G * , k) is the number of colorings of G * with k colors (known as the chromatic polynomial of G * ). Proof. The proof follows directly from Lemma 3.8, which shows that each inferred topology has proper colorings, and the fact that a coloring of G * cannot result in two different inferred topologies, as the coloring uniquely describes which stars to merge (Lemma 3.7). In order to account for isomorphic colorings, we need to divide by the number of color permutations. Note that the fact that G * can be an arbitrary graph (Lemma 3.5) implies that we cannot exploit special properties of G * to compute colorings of G * and γ(G * ). Also note that the exact computation of the upper bound is hard, since both the minimal coloring and the chromatic polynomial of G * (a ♯P-hard problem) are needed. To complement the upper bound, we note that star graphs with a small number of conflict edges can indeed result in a large number of inferred topologies. Lemma 3.10. There are trace sets T with s stars for which the cardinality of G T equals the Bell number B s . Proof. Consider a trace set T = {(σ i , * i , σ′ i ) : i = 1, . . . , s} (e.g., obtained from exploring a topology G 0 where one anonymous center node is connected to 2s named nodes). The trace set does not impose any constraints on how the stars relate to each other, and hence, G * does not contain any edges at all; even when stars are merged, there are no constraints on how the stars relate to each other. Therefore, the star graph for T has B s = Σ_{j=0}^{s} S (s,j) colorings, where S (s,j) = (1/j!) · Σ_{ℓ=0}^{j} (−1)^ℓ (j choose ℓ) (j − ℓ)^s is the number of ways to group s nodes into j different, disjoint non-empty subsets (known as the Stirling number of the second kind). Each of these colorings also describes a distinct inferrable topology, as MAP assigns unique labels to anonymous nodes stemming from merging a group of stars (cf. Definition 2.3).

Properties

Even if the number of inferrable topologies is large, topology inference can still be useful if one is mainly interested in the properties of G 0 and if the ensemble G T is homogeneous with respect to these properties; for example, if "most" of the instances in G T are close to G 0 , there may be an option to conduct an efficient sampling analysis on random representatives. Therefore, in the following, we take a closer look at how much the members of G T differ. Important metrics to characterize inferrable topologies are, for instance, the graph size, the diameter DIAM(·), the number of triangles C 3 (·) of G, and so on. In the following, let G 1 = (V 1 , E 1 ), G 2 = (V 2 , E 2 ) ∈ G T be two arbitrary representatives of G T . As one might expect, the graph size can be estimated quite well. Lemma 3.11. It holds that |V 1 | − |V 2 | ≤ s − γ(G * ) ≤ s − 1 and |V 1 |/|V 2 | ≤ (n + s)/(n + γ(G * )) ≤ (2 + s)/3. Moreover, |E 1 | − |E 2 | ≤ 2(s − γ(G * )) and |E 1 |/|E 2 | ≤ (ν + 2s)/(ν + 2) ≤ s, where ν denotes the number of edges between non-anonymous nodes. There are traces with inferrable topologies G 1 , G 2 reaching these bounds. Observe that inferrable topologies can also differ in the number of connected components. This implies that the shortest distance between two named nodes can differ arbitrarily between two representatives in G T . Lemma 3.12. Let COMP(G) denote the number of connected components of a topology G. Then, |COMP(G 1 ) − COMP(G 2 )| ≤ ⌊n/2⌋. There are instances G 1 , G 2 that reach this bound. Proof. Consider the trace set T = {T i , i = 1, . . . , ⌊n/2⌋} in which T i = (n 2i−1 , * i , n 2i ).
Since i ≠ j ⇒ T i ∩ T j = ∅, we have |E * | = 0. Take G 1 as the 1-coloring of G * : G 1 is a topology with one anonymous node connected to all named nodes. Take G 2 as the ⌊n/2⌋-coloring of the star graph: G 2 has ⌊n/2⌋ distinct connected components (each consisting of three nodes). Upper bound: For the sake of contradiction, suppose ∃T s.t. |COMP(G 1 ) − COMP(G 2 )| > ⌊n/2⌋. Let us assume that G 1 has the most connected components: G 1 has at least ⌊n/2⌋ + 1 more connected components than G 2 . Let C refer to a connected component of G 2 whose nodes are not connected in G 1 . This means that C contains at least one anonymous node. Thus, C contains at least two named nodes (since a trace cannot start or end with a star). There must exist at least ⌊n/2⌋ + 1 such connected components C. Thus G 2 would have to contain at least 2(⌊n/2⌋ + 1) ≥ n + 1 named nodes. Contradiction. An important criterion for topology inference regards the distortion of shortest paths. Definition 3.13 (Stretch). The maximal ratio of the distance of two non-anonymous nodes in G 0 and a connected topology G is called the stretch ρ: ρ = max u,v∈ID(G 0 ) max{d G 0 (u, v)/d G (u, v), d G (u, v)/d G 0 (u, v)}. From Lemma 3.12 we already know that inferrable topologies can differ in the number of connected components, and hence, the distance and the stretch between nodes can be arbitrarily wrong. Hence, in the following, we will focus on connected graphs only. However, even if two nodes are connected, their distance can be much longer or shorter than in G 0 . Figure 2 gives an example. Both topologies are inferrable from the traces T 1 = (v, * 1 , v 1 , . . . , v k , u) and T 2 = (w, * 2 , w 1 , . . . , w k , u). One inferrable topology is the canonic graph G C (Figure 2 left), whereas the other topology merges the two anonymous nodes (Figure 2 right). The distances between v and w are 2(k + 2) and 2, respectively, implying a stretch of k + 2.
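The conflict-free instance used in the proof of Lemma 3.12 can be explored numerically. The sketch below (helper names are ours) builds its two extreme colorings, counts their connected components, and evaluates the Bell number B_s via Stirling numbers of the second kind, which counts the colorings of an edgeless star graph:

```python
from collections import deque
from math import comb, factorial

def stirling2(s, j):
    # ways to partition s stars into j non-empty, disjoint groups
    return sum((-1) ** l * comb(j, l) * (j - l) ** s for l in range(j + 1)) // factorial(j)

def bell(s):
    # number of inferrable topologies for s pairwise conflict-free stars
    return sum(stirling2(s, j) for j in range(s + 1))

def components(adj):
    seen, comps = set(), 0
    for start in adj:
        if start not in seen:
            comps += 1
            queue = deque([start]); seen.add(start)
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v); queue.append(v)
    return comps

def build(num_traces, merge_all):
    # traces T_i = (n_{2i-1}, *_i, n_{2i}); merge_all realizes the 1-coloring
    adj = {}
    def link(a, b):
        adj.setdefault(a, set()).add(b); adj.setdefault(b, set()).add(a)
    for i in range(1, num_traces + 1):
        star = "*" if merge_all else f"*{i}"
        link(f"n{2 * i - 1}", star); link(star, f"n{2 * i}")
    return adj

# n = 8 named nodes, s = 4 stars, no conflict edges:
assert components(build(4, merge_all=True)) == 1    # G_1: one anonymous hub
assert components(build(4, merge_all=False)) == 4   # G_2: floor(n/2) components
assert stirling2(4, 2) == 7
assert bell(4) == 15                                # inferrable topologies
```

The intermediate colorings between these two extremes account for the remaining partitions counted by B_s.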
Figure 2: Due to the lack of a trace between v and w, the stretch of an inferred topology can be large. Lemma 3.14. Let u and v be two arbitrary named nodes in the connected topologies G 1 and G 2 . Then, even for only two stars in the trace set, it holds for the stretch that ρ ≤ (N − 1)/2. There are instances G 1 , G 2 that reach this bound. We now turn our attention to the diameter and the degree. Lemma 3.15. For the diameter, we have DIAM(G 1 ) − DIAM(G 2 ) ≤ (s − 1)/s · (N − 1) and DIAM(G 1 )/DIAM(G 2 ) ≤ s. There are instances G 1 , G 2 that reach these bounds. Proof. Upper bound: As G C does not merge any stars, it describes the network with the largest diameter. Let π be a longest shortest path between two nodes u and v in G C . In the extreme case, π is the only path determining the network diameter and π contains all star nodes. Then, the graph where all s stars are merged into one anonymous node has a minimal diameter of at least DIAM(G C )/s. Example meeting this bound: Consider the trace set T = {(u 1 , . . . , * 1 , . . . , u 2 ), (u 2 , . . . , * 2 , . . . , u 3 ), . . . , (u s , . . . , * s , . . . , u s+1 )} with x named nodes between u i and u i+1 in each trace and the star in the middle (assume x to be even; x does not include u i and u i+1 ). It holds that DIAM(G C ) = s · (x + 2), whereas in a graph G where all stars are merged, DIAM(G) = x + 2. There are n = s(x + 1) + 1 non-anonymous nodes, so x = (n − s − 1)/s. Figure 3 depicts an example. Lemma 3.16. For the maximal node degree DEG, we have DEG(G 1 ) − DEG(G 2 ) ≤ 2(s − γ(G * )) and DEG(G 1 )/DEG(G 2 ) ≤ s − γ(G * ) + 1. There are instances G 1 , G 2 that reach these bounds. Another important topology measure that indicates how well meshed the network is, is the number of triangles. Lemma 3.17. Let C 3 (G) be the number of cycles of length 3 of the graph G. It holds that C 3 (G 1 ) − C 3 (G 2 ) ≤ 2s(s − 1), which can be reached. The relative error C 3 (G 1 )/C 3 (G 2 ) can be arbitrarily large unless the number of links between non-anonymous nodes exceeds n²/4, in which case the ratio is upper bounded by 2s(s − 1) + 1. Proof. Upper bound: Each node which is part of a triangle has at least two incident edges. Thus, a node v can be part of at most (DEG(v) choose 2) triangles, where DEG(v) denotes v's degree.
As a consequence, the number of triangles containing an anonymous node in an inferrable topology with a anonymous nodes u 1 , . . . , u a is at most Σ_{j=1}^{a} (DEG(u j ) choose 2). Given s, this sum is maximized if a = 1 and DEG(u 1 ) = 2s, as 2s is the maximum degree possible due to Lemma 3.16. Thus there can be at most (2s choose 2) = s · (2s − 1) triangles containing an anonymous node in G 1 . The number of triangles with at least one anonymous node is minimized in G C because in the canonic graph the degrees of the anonymous nodes are minimized, i.e., they are always exactly two. As a consequence there cannot be more than s such triangles in G C . If the number of such triangles in G C is smaller by x, then the number of triangles with at least one anonymous node in the topology G 1 is upper bounded by s · (2s − 1) − x. The difference between the triangles in G 1 and G 2 is thus at most s(2s − 1) − x − (s − x) = 2s(s − 1). Example meeting this bound: If the non-anonymous nodes form a complete graph and all star nodes can be merged into one node in G 1 , and G 2 = G C , then the difference in the number of triangles matches the upper bound. Consequently, it holds for the ratio of triangles with anonymous nodes that it does not exceed (s(2s − 1) − x)/(s − x). Thus the ratio can be unbounded, as x can reach s. However, if the number of links between the n non-anonymous nodes exceeds n²/4, then there is at least one triangle among the named nodes, as a triangle-free graph (the densest being the complete bipartite graph) contains at most n²/4 links.

Full Exploration

So far, we assumed that the trace set T contains each node and link of G 0 at least once. At first sight, this seems to be the best we can hope for. However, sometimes traces exploring the vicinity of anonymous nodes in different ways yield additional information that helps to characterize G T better. This section introduces the concept of fully explored networks: T contains sufficiently many traces such that the distances between non-anonymous nodes can be estimated accurately.
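Whether a trace set qualifies as fully exploring can be tested directly; the sketch below assumes, as suggested by the notion of a "complete" trace set, that full exploration means a trace between every pair of named nodes (helper names are ours):

```python
from itertools import combinations

# Sketch of a full-exploration check: does the trace set contain a trace
# whose endpoints cover every pair of named nodes? (Stars are the symbols
# starting with "*"; traces always start and end with named nodes.)
def fully_explores(traces):
    named = {sym for t in traces for sym in t if not sym.startswith("*")}
    endpoints = {frozenset((t[0], t[-1])) for t in traces}
    return all(frozenset(pair) in endpoints for pair in combinations(named, 2))

T = [("u", "*1", "v"), ("u", "w"), ("v", "*2", "w")]
assert fully_explores(T)
assert not fully_explores(T[:2])      # the pair (v, w) remains unexplored
```

For a disconnected network, the check would be applied per connected component, since no trace can cross components.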
In some sense, a trace set for a fully explored network is the best we can hope for: properties that cannot be inferred well under the fully explored topology model are infeasible to infer without additional assumptions on G 0 . In this sense, this section provides upper bounds on what can be learned from topology inference. In the following, we will constrain ourselves to routing along shortest paths only (α = 1). Definition 4.1 (Fully Explored Networks). A trace set T fully explores a network G 0 if T contains a trace between every pair of named nodes (of the same connected component) of G 0 . Let us again study the properties of the family of inferrable topologies fully explored by a trace set. Obviously, all the upper bounds from Section 3 are still valid for fully explored topologies. In the following, let G 1 , G 2 ∈ G T be arbitrary representatives of G T for a fully explored trace set T . A direct consequence of Definition 4.1 concerns the number of connected components and the stretch. (Recall that the stretch is defined with respect to named nodes only, and since α = 1, a 1-consistent inferrable topology cannot include a shorter path between u and v than the one that must appear in a trace of T .) Lemma 4.2. It holds that COMP(G 1 ) = COMP(G 2 ) (= COMP(G 0 )) and the stretch is 1. The proofs for the claims of the following lemmata are analogous to our former proofs, as the main difference is the fact that there might be more conflicts, i.e., edges in G * . Lemma 4.3. For fully explored networks it holds that |V 1 | − |V 2 | ≤ s − γ(G * ) ≤ s − 1 and |V 1 |/|V 2 | ≤ (n + s)/(n + γ(G * )) ≤ (2 + s)/3. Moreover, |E 1 | − |E 2 | ≤ 2(s − γ(G * )) and |E 1 |/|E 2 | ≤ (ν + 2s)/(ν + 2) ≤ s, where ν denotes the number of links between non-anonymous nodes. There are traces with inferrable topologies G 1 , G 2 reaching these bounds. Lemma 4.4. For the maximal node degree, we have DEG(G 1 ) − DEG(G 2 ) ≤ 2(s − γ(G * )) and DEG(G 1 )/DEG(G 2 ) ≤ s − γ(G * ) + 1. There are instances G 1 , G 2 that reach these bounds. From Lemma 4.2 we know that fully explored scenarios yield a perfect stretch of one.
However, regarding the diameter, the situation is different in the sense that distances between anonymous nodes play a role; here, our examples yield lower bounds only (cf. Figure 4). The number of triangles with anonymous nodes can still not be estimated accurately in the fully explored scenario. Lemma 4.6. There exist instances where C 3 (G 1 ) − C 3 (G 2 ) = s(s − 1)/2, and the relative error C 3 (G 1 )/C 3 (G 2 ) can be arbitrarily large.

Conclusion

We understand our work as a first step to shed light onto the similarity of inferrable topologies based on most basic axioms and without any assumptions on power-law properties, i.e., in the worst case. Using our formal framework, we show that the topologies for a given trace set may differ significantly. Thus, in the worst case, it is impossible to accurately characterize the topological properties of complex networks from traceroute data alone. To complement the general analysis, we propose the notion of fully explored networks or trace sets as a "best possible scenario". As expected, we find that fully exploring trace sets allow us to determine several properties of the network more accurately; however, it also turns out that even in this scenario, other topological properties are inherently hard to compute. Our results are summarized in Figure 4. Our work opens several directions for future research. On the theoretical side, one may study whether the minimal inferrable topologies considered in, e.g., [1,2], are more similar in nature. More importantly, while this paper presented results for the general worst case, it would be interesting to devise algorithms that compute, for a given trace set, worst-case bounds for the properties under consideration. For example, such approximate bounds would be helpful to decide whether additional measurements are needed. Moreover, maybe such algorithms could even give advice on the locations at which such measurements would be most useful.
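As a first step toward such per-instance bounds, the node-count range implied by Lemma 3.11 can already be estimated from the star graph alone. The sketch below (our names; a greedy coloring only upper-bounds the chromatic number γ(G*), so the reported lower end may overestimate the true minimum):

```python
# Greedy proper coloring of the star graph: an upper bound on gamma(G*).
def greedy_coloring(vertices, edges):
    color = {}
    for v in vertices:
        taken = {color[u] for e in edges if v in e for u in e if u in color}
        color[v] = min(c for c in range(len(vertices) + 1) if c not in taken)
    return color

# By Lemma 3.11, every inferrable topology has between n + gamma(G*) and
# n + s nodes; we report a conservative version of this bracket.
def node_count_bounds(n, star_vertices, conflict_edges):
    s = len(star_vertices)
    gamma_ub = len(set(greedy_coloring(star_vertices, conflict_edges).values()))
    return n + gamma_ub, n + s

# Three stars, two of which conflict: any inferrable topology with n = 5
# named nodes has between 7 and 8 nodes.
lo, hi = node_count_bounds(5, ["*1", "*2", "*3"], [frozenset(("*1", "*2"))])
assert (lo, hi) == (7, 8)
```

A practitioner could use such brackets to decide whether the inferrable topologies are already homogeneous enough, or whether additional probing is warranted.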
Property/Scenario          | Arbitrary: G1 − G2     | Arbitrary: G1/G2       | Fully Explored (α = 1): G1 − G2 | Fully Explored (α = 1): G1/G2
# of nodes                 | ≤ s − γ(G*)            | ≤ (n + s)/(n + γ(G*))  | ≤ s − γ(G*)                     | ≤ (n + s)/(n + γ(G*))
# of links                 | ≤ 2(s − γ(G*))         | ≤ (ν + 2s)/(ν + 2)     | ≤ 2(s − γ(G*))                  | ≤ (ν + 2s)/(ν + 2)
# of connected components  | ≤ ⌊n/2⌋                | ≤ ⌊n/2⌋                | = 0                             | = 1
Stretch                    | -                      | ≤ (N − 1)/2            | -                               | = 1
Diameter                   | ≤ (s − 1)/s · (N − 1)  | ≤ s                    | s/2 (¶)                         | 2 (¶)
Max. Deg.                  | ≤ 2(s − γ(G*))         | ≤ s − γ(G*) + 1        | ≤ 2(s − γ(G*))                  | ≤ s − γ(G*) + 1
Triangles                  | ≤ 2s(s − 1)            | ∞                      | ≤ 2s(s − 1)/2                   | ∞

Figure 4: Summary of our bounds on the properties of inferrable topologies. s denotes the number of stars in the traces, n is the number of named nodes, N = n + s, and ν denotes the number of links between named nodes. Note that trace sets meeting these bounds exist for all properties for which we have tight or upper bounds. For the two entries marked with (¶), only "lower bounds" are derived, i.e., examples that yield at least the corresponding accuracy; as the upper bounds from the arbitrary scenario do not match, how to close the gap remains an open question.

A.1 Proof of Theorem 3.2

Fix T . We have to prove that G C fulfills AXIOM 0, AXIOM 1 (which implies AXIOM 3) and AXIOM 2. AXIOM 0: The axiom holds trivially: only edges from the traces are used in G C . AXIOM 1: Let T ∈ T and σ 1 , σ 2 ∈ T . Let k = d T (σ 1 , σ 2 ). We show that G C fulfills AXIOM 1, namely, there exists a path of length k in G C . Induction on k. (k = 1:) By the definition of G C , {σ 1 , σ 2 } ∈ E C , thus there exists a path of length one between σ 1 and σ 2 . (k > 1:) Suppose AXIOM 1 holds up to k − 1. Let σ′ 1 , . . . , σ′ k−1 be the intermediary nodes between σ 1 and σ 2 in T : T = (. . . , σ 1 , σ′ 1 , . . . , σ′ k−1 , σ 2 , . . .). By the induction hypothesis, in G C there is a path of length k − 1 between σ 1 and σ′ k−1 . Let π be this path.
By the definition of G C , {σ′ k−1 , σ 2 } ∈ E C . Thus appending (σ′ k−1 , σ 2 ) to π yields the desired path of length k linking σ 1 and σ 2 : AXIOM 1 thus holds up to k. AXIOM 2: We have to show that d T (σ 1 , σ 2 ) = k ⇒ d C (σ 1 , σ 2 ) ≥ α · k. By contradiction, suppose that G C does not fulfill AXIOM 2 with respect to α. Then there exist k′ < α · k and σ 1 , σ 2 ∈ V C such that d C (σ 1 , σ 2 ) = k′. Let π be a shortest path between σ 1 and σ 2 in G C . Let (T 1 , . . . , T ℓ ) be the corresponding (maybe repeating) traces covering this path π in G C . For T i ∈ (T 1 , . . . , T ℓ ), let s i and e i be the corresponding start and end nodes of π in T i . We will show that this path π implies the existence of a path in G 0 which violates α-consistency. Since G 0 is inferrable, G 0 fulfills AXIOM 2, thus we have: d C (σ 1 , σ 2 ) = Σ_{i=1}^{ℓ} d T i (s i , e i ) = k′ < α · k ≤ d G 0 (σ 1 , σ 2 ), since G 0 is α-consistent. However, G 0 also fulfills AXIOM 1, thus d T i (s i , e i ) ≥ d G 0 (s i , e i ). Thus Σ_{i=1}^{ℓ} d G 0 (s i , e i ) ≤ Σ_{i=1}^{ℓ} d T i (s i , e i ) < d G 0 (σ 1 , σ 2 ): we have constructed a path from σ 1 to σ 2 in G 0 whose length is shorter than the distance between σ 1 and σ 2 in G 0 , leading to the desired contradiction.

A.2 Proof of Lemma 3.5

First we construct a topology G 0 = (V 0 , E 0 ) and then describe a trace set on this graph that generates the star graph G = (V, E). The node set V 0 consists of |V | anonymous nodes and |V | · (2 + τ ) named nodes, where τ = ⌈3/(2α) − 1/2⌉. The first building block of G 0 is a copy of G. To each node v i in the copy of G we add a chain consisting of 2 + τ nodes, first appending τ non-anonymous nodes w (i,k) where 1 ≤ k ≤ τ , followed by an anonymous node u i and finally a named node w (i,τ+1) . More formally, we can describe the link set as E 0 = E ∪ (∪_{i=1}^{|V|} { {v i , w (i,1) }, {w (i,1) , w (i,2) }, . . . , {w (i,τ ) , u i }, {u i , w (i,τ+1) } }).
The trace set T consists of the following |V | + |E| shortest path traces: the traces T ℓ for ℓ ∈ {1, . . . , |V |} are given by T (w (ℓ,τ) , w (ℓ,τ+1) ) (one for each node in V ), and the traces T ℓ for ℓ ∈ {|V | + 1, . . . , |V | + |E|} are given by T (w (i,τ) , w (j,τ) ) for each link {v i , v j } in E. Note that G 0 = G C , as each star appears as a separate anonymous node. The star graph G * corresponding to this trace set contains the |V | nodes * i (corresponding to the u i ). In order to prove the claim of the lemma we have to show that two nodes * i , * j are conflicting according to Lemma 3.3 if and only if there is a link {v i , v j } in E. Case (i) does not apply because the minimum distance between any two nodes in the canonic graph is at least one, while α · d T ( * i , w (i,τ) ) ≤ 1 and α · d T ( * i , w (i,τ+1) ) ≤ 1. It remains to examine Case (ii). "⇒": if {v i , v j } ∈ E and MAP( * i ) = MAP( * j ), there would be a path of length two between w (i,τ) and w (j,τ) in the topology generated by MAP; the trace set however contains a trace T (w (i,τ) , w (j,τ) ) of length 2τ + 1. So α · d T (w (i,τ) , w (j,τ) ) = α · (2τ + 1) = α · (2⌈3/(2α) − 1/2⌉ + 1) ≥ 3, which violates the α-consistency (Lemma 3.3 (ii)); hence {v i , v j } ∈ E implies { * i , * j } ∈ E * . "⇐": if {v i , v j } ∉ E, there is no trace T (w (i,τ) , w (j,τ) ), thus we have to prove that no trace T (w (i',τ) , w (j',τ) ) with i' ≠ i, j' ≠ i and j' ≠ j leads to a conflict between * i and * j . We show that an even more general statement is true, namely that for any pair of non-anonymous nodes x 1 , x 2 it holds that α · d C (x 1 , x 2 ) ≤ d C (x 1 , * i ) + d C (x 2 , * j ). Since G C = G 0 and the traces contain shortest paths only, the trace distance between two nodes in the same trace is the same as the distance in G C .
The following tables contain the relevant lower bounds on the distances d C (·, ·) and on µ(x 1 , x 2 ) := d C (x 1 , * i ) + d C (x 2 , * j ), where x 1 , x 2 ∈ {v i , v j , w (i',k) , w (j',k) | 1 ≤ k ≤ τ + 1, i' ≠ i, j' ≠ i, j' ≠ j}; we have to show that α · d C (x 1 , x 2 ) ≤ µ(x 1 , x 2 ).

d C (·, ·) ≥  | v i     | v j     | w (i',k 1 )   | w (j',k 1 )
v i          | 0       | 1       | k 1           | k 1 + 1
v j          | 1       | 0       | k 1 + 1       | k 1
w (i',k 2 )  | k 2     | k 2 + 1 | |k 2 − k 1 |  | k 1 + 1 + k 2
w (j',k 2 )  | k 2 + 1 | k 2     | k 1 + 1 + k 2 | |k 2 − k 1 |
* i          | τ + 2   | τ + 1   | 2 + τ + k 1   | τ − k 1 + 1
* j          | τ + 2   | τ + 2   | 2 + τ + k 1   | 2 + τ + k 1

µ(·, ·) ≥    | v i          | v j          | w (i',k 1 )        | w (j',k 1 )
v i          | 2τ + 4       | 2τ + 3       | 4 + 2τ + k 1       | 4 + 2τ + k 1
v j          | 2τ + 3       | 2τ + 4       | 2τ + 3 + k 1       | 3 + 2τ + k 1
w (i',k 2 )  | 4 + 2τ + k 2 | 4 + 2τ + k 2 | 4 + 2τ + k 1 + k 2 | 4 + 2τ + k 1 + k 2
w (j',k 2 )  | 2τ − k 2 + 3 | 2τ − k 2 + 3 | 2τ + 3 + k 1 − k 2 | 2τ + k 1 − k 2 + 3

If x 1 = w (j',k 2 ) then it holds for all x 2 that d T (x 1 , x 2 ) ≤ 2τ + 1, whereas µ(x 1 , x 2 ) = d C (x 1 , * i ) + d C (x 2 , * j ) ≥ 2τ + 2. In all other cases it holds at least that d C (x 1 , x 2 ) < µ(x 1 , x 2 ). Thus α · d C (x 1 , x 2 ) ≤ d C (x 1 , * i ) + d C (x 2 , * j ).

Figure 5: Visualization for the proof of Lemma 3.7. Solid lines denote links, dashed lines denote paths (of annotated length).

A.3 Proof of Lemma 3.7
We have to show that the paths in the traces correspond to paths in G γ . Let T ∈ T , and σ 1 , σ 2 ∈ T . Let π be the sequence of nodes in T connecting σ 1 and σ 2 . This is also a path in G γ : since α > 0, for any two symbols σ 1 , σ 2 ∈ T it holds that MAP(σ 1 ) ≠ MAP(σ 2 ). We now construct an example showing that the α' for which G γ fulfills AXIOM 2 can be arbitrarily small. Consider the graph represented in Figure 5. Let T 1 = (s, . . . , t), T 2 = (s, * 1 , . . . , m 1 ), T 3 = (m 1 , . . . , * 2 , m 2 ), T 4 = (m 2 , * 3 , . . . , m 3 ), T 5 = (m 3 , . . . , * 4 , t). We assume α = 1.
By changing the parameters k = d C (s, t) and k' = d C (m 1 , * 1 ) = d C (m 1 , * 2 ) = d C (m 3 , * 3 ) = d C (m 3 , * 4 ), we can modulate the links of the corresponding star graph G * . Using d T 1 (s, t) = k, observe that k > 2 ⇔ { * 1 , * 4 } ∈ E * . Similarly, k > 2(k' + 1) ⇔ { * 1 , * 3 } ∈ E * ∧ { * 2 , * 4 } ∈ E * , and k > 2(k' + 2) ⇔ { * 1 , * 2 } ∈ E * ∧ { * 3 , * 4 } ∈ E * . Taking k = 2k' + 4, we thus have E * = {{ * 1 , * 3 }, { * 2 , * 4 }, { * 1 , * 4 }}. Thus, we have constructed a situation where * 1 and * 2 as well as * 3 and * 4 can be merged without breaking the consistency requirement, but where merging both pairs simultaneously leads to a topology G that is only 4/k-consistent, since d G (s, t) = 4. This ratio can be made arbitrarily small provided we choose k' = (k − 4)/2.

A.4 Proof of Lemma 3.11
In the worst case, each star in the traces represents a different node in G 1 , so the maximal number of nodes in any topology in G T is the total number of non-anonymous nodes plus the total number of stars in T . This number of nodes is reached in the topology G C . According to Definition 3.4, only non-adjacent stars in G * can represent the same node in an inferrable topology. Thus, the stars in the trace set T must originate from at least γ(G * ) different nodes. As a consequence, |V 1 | − |V 2 | ≤ s − γ(G * ), which can reach s − 1 for a trace set T = {T i = (v, * i , w) | 1 ≤ i ≤ s}. Analogously, |V 1 |/|V 2 | ≤ (n + s)/(n + γ(G * )) ≤ (2 + s)/3. Observe that each occurrence of a node in a trace describes at most two edges. If all anonymous nodes are merged into γ(G * ) nodes in G 1 and are separate nodes in G 2 , the difference in the number of edges is at most 2(s − γ(G * )). Analogously, |E 1 |/|E 2 | ≤ (ν + 2s)/(ν + 2) ≤ s. The trace set T = {T i = (v, * i , w) | 1 ≤ i ≤ s} reaches this bound.

A.5 Proof of Lemma 3.14
A "lower bound" example follows from Figure 2.
Essentially, this is also the worst case: note that the difference in the shortest distance between a pair of nodes u and v in G 1 and G 2 is only greater than zero if the shortest path between them involves at least one anonymous node. Hence the shortest distance between such a pair is at least two. The longest shortest distance between the same pair of nodes in another inferred topology visits at most all nodes in the network, i.e., its length is bounded by N − 1.

A.6 Proof of Lemma 3.16
Each occurrence of a node in a trace describes at most two links incident to this node. For the degree difference we only have to consider the links incident to at least one anonymous node, as the number of links between non-anonymous nodes is the same in G 1 and G 2 . If all anonymous nodes are merged into γ(G * ) nodes in G 1 and all anonymous nodes are separate in G 2 , the difference in the maximum degree is thus at most 2(s − γ(G * )), as at most s − γ(G * ) + 1 nodes can be merged into one node and the minimal maximum degree of a node in G 2 is two. This bound is tight, as the trace set {T i = (v i , * i , w i ) | 1 ≤ i ≤ s} containing s stars can be represented by a graph with one anonymous node of degree 2s or by a graph with s anonymous nodes of degree two each. For the ratio of the maximal degree we can ignore links between non-anonymous nodes as well, as these only decrease the ratio. The highest number of links incident at a node v with one endpoint in the set of anonymous nodes is s − γ(G * ) + 1 for non-anonymous nodes and 2(s − γ(G * ) + 1) for anonymous nodes, whereas the lowest number is two.

A.7 Proof of Lemma 4.4
The proof for the upper bound is analogous to the case without full exploration. To prove that this bound can be reached, we need to add traces to the trace set ensuring that all pairs of named nodes appear in some trace, without changing the degrees of the anonymous nodes.
To this end, we add to G 0 a named node u for each pair {v, w} that does not yet appear in a common trace, together with a trace T = (v, u, w). This does not increase the maximum degree and guarantees full exploration.

A.8 Proof of Lemma 4.5
We first prove the upper bound for the relative case. Note that the maximal distance between two anonymous nodes MAP( * 1 ) and MAP( * 2 ) in an inferred topology component cannot be larger than twice the distance of two named nodes u and v: from Definition 4.1 we know that there must be a trace in T connecting u and v, and the maximal distance δ of a pair of named nodes is given by the path of the trace that includes u and v. Therefore, and since any trace starts and ends with a named node, any star is at distance at most δ/2 from a named node. Consequently, the maximal distance between MAP( * 1 ) and MAP( * 2 ) is δ/2 + δ/2 to get to the corresponding closest named nodes, plus δ for the connection between the named nodes. As, according to Lemma 4.2, the distance between named nodes is the same in all inferred topologies, the diameter of inferred topologies can vary at most by a factor of two. We now construct an example that reaches this bound. Consider a topology consisting of a center node c and four rays of length k. Let u 1 , u 2 , u 3 , u 4 be the "end nodes" of each ray. We assume that all these nodes are named. Now add two chains of anonymous nodes of length 2k + 1 to the topology, one between nodes u 1 and u 2 and one between nodes u 3 and u 4 . The trace set consists of the minimal trace set to obtain a fully explored topology: six traces of length 2k + 1 between each pair of end nodes u 1 , u 2 , u 3 , u 4 . We then add two traces between nodes u 1 and u 2 , and between nodes u 3 and u 4 . These traces explore the anonymous chains and have the following shape: T 7 = (u 1 , * 1 , . . . , * k , σ, * k+1 , . . . , * 2k , u 2 ) and T 8 = (u 3 , * 2k+1 , . . . , * 3k , σ', * 3k+1 , . . . , * 4k , u 4 ), where σ and σ' are stars.
Let G 1 = G C and let G 2 be the inferrable graph in which σ and σ' are merged. The resulting diameters are DIAM(G 1 ) = 4k + 2 and DIAM(G 2 ) = 2k + 1. Since s = 4k + 2, the difference can thus be as large as s/2. Note that this construction also yields the bound on the relative difference: DIAM(G 1 )/DIAM(G 2 ) = (4k + 2)/(2k + 1) = 2.

A.9 Proof of Lemma 4.6
Given the number of stars s, we construct a trace set T with two inferrable graphs such that in one graph the number of triangles with anonymous nodes is s(s − 1)/2, while in the other graph there are no such triangles. As a first step we add s traces T i = (v i , * i , w) to the trace set T , where 1 ≤ i ≤ s. To make this trace set fully explored we add, as a second step, a trace T i,j = (v i , v j ) for each pair 1 ≤ i < j ≤ s. The resulting trace set contains s stars, and none of the stars are in conflict with each other. Thus the graph G 1 merging all stars into one anonymous node is inferrable from this trace set, and the number of triangles containing the anonymous node is s(s − 1)/2. Let G 2 be the canonic graph of this trace set. This graph does not contain any triangles with anonymous nodes, and hence the difference C(G 1 ) − C(G 2 ) is s(s − 1)/2. To see that the ratio can be unbounded, consider the trace set {(v, * 1 , w), (u, * 2 , w), (u, v)}. This set is fully explored since all pairs of named nodes appear in a trace. The graph where the two stars are merged has one triangle, while the canonic graph has none.
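The construction of Lemma 4.6 is easy to reproduce programmatically. The following Python sketch is our own illustration (function and variable names are ours, not the paper's): it builds the trace set for s = 4 and counts the triangles containing an anonymous node in the two inferrable graphs.

```python
from itertools import combinations

# Build the Lemma 4.6 trace set: s star traces (v_i, *_i, w) plus one
# two-node trace (v_i, v_j) for each pair of named nodes v_i, v_j.
def traces_for(s):
    T = [[f"v{i}", f"*{i}", "w"] for i in range(s)]
    T += [[f"v{i}", f"v{j}"] for i, j in combinations(range(s), 2)]
    return T

def contract_edges(traces, MAP):
    """Edge set induced by a mapping function applied to the trace symbols."""
    return {frozenset((MAP(a), MAP(b)))
            for t in traces for a, b in zip(t, t[1:])}

def anon_triangles(E):
    """Number of triangles containing at least one anonymous ('*') node."""
    nodes = {x for e in E for x in e}
    return sum(1 for a, b, c in combinations(sorted(nodes), 3)
               if {frozenset((a, b)), frozenset((a, c)), frozenset((b, c))} <= E
               and any(x.startswith("*") for x in (a, b, c)))

s = 4
T = traces_for(s)
G1 = contract_edges(T, lambda x: "*" if x.startswith("*") else x)  # stars merged
G2 = contract_edges(T, lambda x: x)                                # canonic graph
print(anon_triangles(G1))  # s(s-1)/2 = 6
print(anon_triangles(G2))  # 0
```

Each pair v i , v j closes a triangle with the single merged anonymous node in G 1 , while in the canonic graph G 2 every star is adjacent only to one v i and to w, which are not linked.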
Traceroute measurements are one of our main instruments to shed light onto the structure and properties of today's complex networks such as the Internet. This paper studies the feasibility and infeasibility of inferring the network topology given traceroute data from a worst-case perspective, i.e., without any probabilistic assumptions on, e.g., the nodes' degree distribution. We attend to a scenario where some of the routers are anonymous, and propose two fundamental axioms that model two basic assumptions on the traceroute data: (1) each trace corresponds to a real path in the network, and (2) the routing paths are at most a factor 1/α off the shortest paths, for some parameter α ∈ (0, 1]. In contrast to existing literature that focuses on the cardinality of the set of (often only minimal) inferrable topologies, we argue that a large number of possible topologies alone is often unproblematic, as long as the networks have a similar structure. We hence seek to characterize the set of topologies inferred with our axioms. We introduce the notion of star graphs whose colorings capture the differences among inferred topologies; this notion also allows us to construct inferred topologies explicitly. We find that in general, inferrable topologies can differ significantly in many important aspects, such as the nodes' distances or the number of triangles. These negative results are complemented by a discussion of a scenario where the trace set is best possible, i.e., "complete". It turns out that while some properties such as the node degrees are still hard to measure, a complete trace set can help to determine global properties such as the connectivity.
This paper attends to the problem of anonymous nodes and assumes a conservative, "worst-case" perspective that does not rely on any assumptions on the underlying network. There are already several works on the subject. @cite_2 initiated the study of possible candidate topologies for a given trace set and suggested computing the minimal topology, that is, the topology with the minimal number of anonymous nodes, which turns out to be NP-hard. Consequently, different heuristics have been proposed @cite_5 @cite_1 .
Misleading Stars: What Cannot Be Measured in the Internet?
Surprisingly little is known about the structure of many important complex networks such as the Internet. One reason is the inherent difficulty of performing accurate, large-scale and preferably synchronous measurements from a large number of different vantage points. Another reason lies in privacy and information hiding issues: for example, network providers may seek to hide the details of their infrastructure to avoid tailored attacks. Since knowledge of the network characteristics is crucial for many applications (e.g., RMTP [12], or PaDIS [13]), the research community implements measurement tools to analyze at least the main properties of the network. The results can then, e.g., be used to design more efficient network protocols in the future. This paper focuses on the most basic characteristic of the network: its topology. The classic tool to study topological properties is traceroute. Traceroute allows us to collect traces from a given source node to a set of specified destination nodes. A trace between two nodes contains a sequence of identifiers describing the route traveled by the packet. However, not every node along such a path is configured to answer with its identifier. Rather, some nodes may be anonymous in the sense that they appear as stars (' * ') in a trace. Anonymous nodes exacerbate the exploration of a topology because already a small number of anonymous nodes may increase the spectrum of inferrable topologies that correspond to a trace set T . This paper is motivated by the observation that the mere number of inferrable topologies alone does not contradict the usefulness or feasibility of topology inference; if the set of inferrable topologies is homogeneous in the sense that the different topologies share many important properties, the generation of all possible graphs can be avoided: an arbitrary representative may characterize the underlying network accurately.
Therefore, we identify important topological metrics such as diameter or maximal node degree and examine how "close" the possible inferred topologies are with respect to these metrics.

Our Contribution
This paper initiates the study and characterization of topologies that can be inferred from a given trace set computed with the traceroute tool. While existing literature assuming a worst-case perspective has mainly focused on the cardinality of minimal topologies, we go one step further and examine specific topological graph properties. We introduce a formal theory of topology inference by proposing basic axioms (i.e., assumptions on the trace set) that are used to guide the inference process. We present a novel and, we believe, appealing definition for the isomorphism of inferred topologies which is aware of traffic paths; it is motivated by the observation that although two topologies look equivalent up to a renaming of anonymous nodes, the same trace set may result in different paths. Moreover, we initiate the study of two extremes: in the first scenario, we only require that each link appears at least once in the trace set; interestingly, however, it turns out that this is often not sufficient, and we propose a "best case" scenario where the trace set is, in some sense, complete: it contains paths between all pairs of nodes. The main result of the paper is a negative one. It is shown that already a small number of anonymous nodes in the network renders topology inference difficult. In particular, we prove that in general, the possible inferrable topologies differ in many crucial aspects. We introduce the concept of the star graph of a trace set that is useful for the characterization of inferred topologies. In particular, colorings of the star graphs allow us to constructively derive inferred topologies.
(Although the general problem of computing the set of inferrable topologies is related to NP-hard problems such as minimal graph coloring and graph isomorphism, some important instances of inferrable topologies can be computed efficiently.) The minimal coloring (i.e., the chromatic number) of the star graph defines a lower bound on the number of anonymous nodes from which the stars in the traces could originate. And the number of possible colorings of the star graph (a function of the chromatic polynomial of the star graph) gives an upper bound on the number of inferrable topologies. We show that this bound is tight in the sense that there are situations where there indeed exist this many inferrable topologies. In particular, there are problem instances where the cardinality of the set of inferrable topologies equals the Bell number. This insight complements (and generalizes to arbitrary, not only minimal, inferrable topologies) existing cardinality results. Finally, we examine the scenario of fully explored networks for which "complete" trace sets are available. As expected, inferrable topologies are then more homogeneous and can be characterized well with respect to many properties such as node distances. However, we also find that other properties are inherently difficult to estimate. Interestingly, our results indicate that full exploration is often useful for global properties (such as connectivity) while it does not help much for more local properties (such as node degree).

Organization
The remainder of this paper is organized as follows. Our theory of topology inference is introduced in Section 2. The main contribution is presented in Sections 3 and 4 where we derive bounds for general trace sets and fully explored networks, respectively. In Section 5, the paper concludes with a discussion of our results and directions for future research. Due to space constraints, some proofs are moved to the appendix.
Model
Let T denote the set of traces obtained from probing (e.g., by traceroute) a (not necessarily connected) undirected network G 0 = (V 0 , E 0 ) with nodes or vertices V 0 (the set of routers) and links or edges E 0 . We assume that G 0 is static during the probing time (or that probing is instantaneous). Each trace T (u, v) ∈ T describes a path connecting two nodes u, v ∈ V 0 ; when u and v do not matter or are clear from the context, we simply write T . Moreover, let d T (u, v) denote the distance (number of hops) between two nodes u and v in trace T . We define d G 0 (u, v) to be the corresponding shortest path distance in G 0 . Note that a trace between two nodes u and v may not describe the shortest path between u and v in G 0 . The nodes in V 0 fall into two categories: anonymous nodes and non-anonymous (or shorter: named) nodes. Therefore, each trace T ∈ T describes a sequence of symbols representing anonymous and non-anonymous nodes. We make the natural assumption that the first and the last node in each trace T are non-anonymous. Moreover, we assume that traces are given in a form where non-anonymous nodes appear with a unique, anti-aliased identifier (i.e., the multiple IP addresses corresponding to different interfaces of a node are resolved to one identifier); an anonymous node is represented as * ("star") in the traces. For our formal analysis, we assign to each star in a trace set T a unique identifier i: * i . (Note that except for the numbering of the stars, we allow identical copies of T in T , and we do not make any assumptions on the implications of identical traces: they may or may not describe the same paths.) Thus, a trace T ∈ T is a sequence of symbols taken from an alphabet Σ = ID ∪ (⋃ i * i ), where ID is the set of non-anonymous node identifiers (IDs): Σ is the union of the (anti-aliased) non-anonymous nodes and the set of all stars (with their unique identifiers) appearing in a trace set.
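As a minimal illustration of the model, a trace can be represented as a sequence of symbols, with the trace distance d T given by the index difference. This is a hypothetical sketch of ours (symbol names are illustrative, not from the paper):

```python
# Illustrative sketch (not from the paper): a trace as a list of symbols.
# Named nodes carry unique anti-aliased identifiers; each star gets a
# unique label "*i".

def d_T(trace, s1, s2):
    """Trace distance: number of hops between two symbols of one trace."""
    return abs(trace.index(s1) - trace.index(s2))

# Example trace with named nodes "u", "v", "w" and stars "*1", "*2":
T1 = ["u", "*1", "v", "*2", "w"]

print(d_T(T1, "u", "v"))   # 2
print(d_T(T1, "*1", "w"))  # 3
```

Here every symbol occurs at most once per trace, matching the unique star identifiers assumed in the model.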
The main challenge in topology inference is to determine which stars in the traces may originate from which anonymous nodes. Henceforth, let n = |ID| denote the number of non-anonymous nodes and let s = |⋃ i * i | be the number of stars in T ; similarly, let a denote the number of anonymous nodes in a topology. Let N = n + s = |Σ| be the total number of symbols occurring in T . Clearly, the process of topology inference depends on the assumptions on the measurements. In the following, we postulate the fundamental axioms that guide the reconstruction. First, we make the assumption that each link of G 0 is visited by the measurement process, i.e., it appears as a transition in the trace set T . In other words, we are only interested in inferring the (sub-)graph for which measurement data is available. AXIOM 0 (Complete Cover): Each edge of G 0 appears at least once in some trace in T . The next fundamental axiom assumes that traces always represent paths on G 0 . AXIOM 1 (Reality Sampling): For every trace T ∈ T , if the distance between two symbols σ 1 , σ 2 ∈ T is d T (σ 1 , σ 2 ) = k, then there exists a path (i.e., a walk without cycles) of length k connecting the two (named or anonymous) nodes corresponding to σ 1 and σ 2 in G 0 . The following axiom captures the consistency of the routing protocol on which the traceroute probing relies. In the current Internet, policy routing is known to have an impact both on the route length [14] and on the convergence time [11]. AXIOM 2 (α-(Routing) Consistency): There exists an α ∈ (0, 1] such that, for every trace T ∈ T , if d T (σ 1 , σ 2 ) = k for two entries σ 1 , σ 2 in trace T , then the shortest path connecting the two (named or anonymous) nodes corresponding to σ 1 and σ 2 in G 0 has distance at least α · k. Note that if α = 1, the routing is a shortest path routing. Moreover, note that if α = 0, there can be loops in the paths, and there are hardly any topological constraints, rendering almost any topology inferrable.
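The two sampling axioms can be checked mechanically for a candidate topology in which every trace symbol is already a node, i.e., under the identity mapping. The following breadth-first-search sketch is our own hedged illustration (names are ours; whether α · k is rounded up is left open here, as in the text):

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in a graph given as an adjacency dict."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def satisfies_axioms(adj, trace, alpha):
    # AXIOM 1: consecutive trace entries must be linked (the trace is a path).
    if any(b not in adj[a] for a, b in zip(trace, trace[1:])):
        return False
    # AXIOM 2: d_G(s1, s2) >= alpha * d_T(s1, s2) for all symbol pairs.
    for i, s1 in enumerate(trace):
        dist = bfs_dist(adj, s1)
        if any(dist[trace[j]] < alpha * (j - i)
               for j in range(i + 1, len(trace))):
            return False
    return True

# A triangle u-v-*1 violates 1-consistency for the trace (u, *1, v), since
# the direct link {u, v} is a shortcut; with alpha = 0.5 it is acceptable.
adj = {"u": {"*1", "v"}, "v": {"*1", "u"}, "*1": {"u", "v"}}
print(satisfies_axioms(adj, ["u", "*1", "v"], 1.0))  # False
print(satisfies_axioms(adj, ["u", "*1", "v"], 0.5))  # True
```

This also illustrates the role of α: lowering it enlarges the set of topologies that pass the consistency check.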
(For example, the complete graph with one anonymous router is always a solution.) A natural axiom to merge traces is the following. AXIOM 3 (Trace Merging): For two traces T 1 , T 2 ∈ T for which ∃σ 1 , σ 2 , σ 3 , where σ 2 refers to a named node, such that d T 1 (σ 1 , σ 2 ) = i and d T 2 (σ 2 , σ 3 ) = j, it holds that the distance between the two nodes corresponding to σ 1 and σ 3 in G 0 is at most i + j: d G 0 (σ 1 , σ 3 ) ≤ i + j. Any topology G which is consistent with these axioms (when applied to T ) is called inferrable from T . Definition 2.1 (Inferrable Topologies). A topology G is (α-consistently) inferrable from a trace set T if axioms AXIOM 0, AXIOM 1, AXIOM 2 (with parameter α), and AXIOM 3 are fulfilled. We will refer by G T to the set of topologies inferrable from T . Please note the following important observation. Remark 2.2. While we generally have that G 0 ∈ G T , since T was generated from G 0 and AXIOM 0, AXIOM 1, AXIOM 2 and AXIOM 3 are fulfilled by definition, there can be situations where an α-consistent trace set for G 0 contradicts AXIOM 0: some edges may not appear in T . If this is the case, we will focus on the inferrable topologies containing the links we know, even if G 0 may have additional, hidden links that cannot be explored due to the high α value. The main objective of a topology inference algorithm ALG is to compute topologies which are consistent with these axioms. Concretely, ALG's input is the trace set T together with the parameter α specifying the assumed routing consistency. Essentially, the goal of any topology inference algorithm ALG is to compute a mapping of the symbols Σ (appearing in T ) to nodes in an inferred topology G; or, in case the input parameters α and T are contradictory, to reject the input. This mapping of symbols to nodes implicitly describes the edge set of G as well: the edge set is unique as all the transitions of the traces in T are now unambiguously tied to two nodes.
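How a mapping of symbols to nodes determines the edge set can be made concrete with a small sketch of ours (names and the dictionary representation are illustrative, not from the paper): every transition between consecutive trace entries becomes a link between the corresponding nodes.

```python
# Illustrative sketch: a mapping function as a dict over the symbols of T;
# the induced edge set follows directly from the trace transitions.

def infer_edges(traces, MAP):
    edges = set()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            edges.add(frozenset((MAP[a], MAP[b])))
    return edges

traces = [["u", "*1", "v"], ["u", "*2", "w"]]

# Identity mapping: every star stays a separate anonymous node.
identity = {s: s for t in traces for s in t}
# A mapping that merges the two stars into one anonymous node "*12".
merging = dict(identity, **{"*1": "*12", "*2": "*12"})

print(len(infer_edges(traces, identity)))  # 4 links
print(len(infer_edges(traces, merging)))   # 3 links
```

With the identity mapping every star remains separate, which is exactly the situation captured by the canonic graph discussed in Section 3.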
So far, we have ignored an important and non-trivial question: When are two topologies G 1 , G 2 ∈ G T different (and hence appear as two independent topologies in G T )? In this paper, we pursue the following approach: We are not interested in purely topological isomorphisms, but we care about the identifiers of the non-anonymous nodes, i.e., we are interested in the locations of the non-anonymous nodes and their distance to other nodes. For anonymous nodes, the situation is slightly more complicated: one might think that as the nodes are anonymous, their "names" do not matter. Consider however the example in Figure 1: the two inferrable topologies have two anonymous nodes, once where { * 1 , * 2 } plus { * 3 , * 4 } are merged into one node each in the inferrable topology and once where { * 1 , * 4 } plus { * 2 , * 3 } are merged into one node each in the inferrable topology. In this paper, we regard the two topologies as different, for the following reason: Assume that there are two paths in the network, one u * 2 v (e.g., during day time) and one u * 3 v (e.g., at night); clearly, this traffic has different consequences and hence we want to be able to distinguish between the two topologies described above. In other words, our notion of isomorphism of inferred topologies is path-aware. It is convenient to introduce the following MAP function. Essentially, an inference algorithm computes such a mapping. Definition 2.3 (Mapping Function MAP). Let G = (V, E) ∈ G T be a topology inferrable from T . A topology inference algorithm describes a surjective mapping function MAP : Σ → V . For the set of non-anonymous nodes in Σ, the mapping function is bijective; and each star is mapped to exactly one node in V , but multiple stars may be assigned to the same node. Note that for any σ ∈ Σ, MAP(σ) uniquely identifies a node v ∈ V . 
More specifically, we assume that MAP assigns labels to the nodes in V : in case of a named node, the label is simply the node's identifier; in case of anonymous nodes, the label is * β , where β is the concatenation of the sorted indices of the stars which are merged into node * β . With this definition, two topologies G 1 , G 2 ∈ G T differ if and only if they do not describe the identical (MAP-)labeled topology. We will use this MAP function also for G 0 , i.e., we will write MAP(σ) to refer to a symbol σ's corresponding node in G 0 . In the remainder of this paper, we will often assume that AXIOM 0 is given. Moreover, note that AXIOM 3 is redundant. Therefore, in our proofs, we will not explicitly cover AXIOM 0, and it is sufficient to show that AXIOM 1 holds to prove that AXIOM 3 is satisfied. Lemma 2.4. AXIOM 1 implies AXIOM 3. Proof. Let T be a trace set, and G ∈ G T . Let σ 1 , σ 2 , σ 3 be such that ∃T 1 , T 2 ∈ T with σ 1 ∈ T 1 , σ 3 ∈ T 2 and σ 2 ∈ T 1 ∩ T 2 . Let i = d T 1 (σ 1 , σ 2 ) and j = d T 2 (σ 2 , σ 3 ). Since any inferrable topology G fulfills AXIOM 1, there is a path π 1 of length i between the nodes corresponding to σ 1 and σ 2 in G and a path π 2 of length j between the nodes corresponding to σ 2 and σ 3 in G. The combined path has length at most i + j, and hence the claim follows.

Inferrable Topologies
What insights can be obtained from topology inference with minimal assumptions, i.e., with our axioms? Or what is the structure of the inferrable topology set G T ? We first make some general observations and then examine different graph metrics in more detail.

Basic Observations
Although the generation of the entire topology set G T may be computationally hard, some instances of G T can be computed efficiently. The simplest possible inferrable topology is the so-called canonic graph G C : the topology which assumes that all stars in the traces refer to different anonymous nodes.
In other words, if a trace set T contains n = |ID| named nodes and s stars, G C will contain |V (G C )| = N = n + s nodes. Definition 3.1 (Canonic Graph G C ). The canonic graph is defined by G C (V C , E C ) where V C = Σ is the set of (anti-aliased) nodes appearing in T (where each star is considered a unique anonymous node) and where {σ 1 , σ 2 } ∈ E C ⇔ ∃T ∈ T , T = (. . . , σ 1 , σ 2 , . . .), i.e., σ 2 directly follows σ 1 in some trace T (σ 1 , σ 2 ∈ T can be either non-anonymous nodes or stars). Let d C (σ 1 , σ 2 ) denote the canonic distance between two nodes, i.e., the length of a shortest path in G C between the nodes σ 1 and σ 2 . Note that G C is indeed an inferrable topology. In this case, MAP : Σ → Σ is the identity function. The proof appears in the appendix. Theorem 3.2. G C is inferrable from T . G C can be computed efficiently from T : represent each non-anonymous node and star as a separate node, and for any pair of consecutive entries (i.e., nodes) in a trace, add the corresponding link. The time complexity of this construction is linear in the size of T . With the definition of the canonic graph, we can derive the following lemma, which establishes a necessary condition when two stars cannot represent the same node in G 0 from constraints on the routing paths. This is useful for the characterization of inferred topologies. Lemma 3.3. Let * 1 , * 2 be two stars occurring in some traces in T . * 1 , * 2 cannot be mapped to the same node, i.e., MAP( * 1 ) = MAP( * 2 ), without violating the axioms in the following conflict situations: (i) if * 1 ∈ T 1 and * 2 ∈ T 2 , and T 1 describes a too long path between anonymous node MAP( * 1 ) and non-anonymous node u, i.e., α · d T 1 ( * 1 , u) > d C (u, * 2 ); (ii) if * 1 ∈ T 1 and * 2 ∈ T 2 , and there exists a trace T that contains a path between two non-anonymous nodes u and v with α · d T (u, v) > d C (u, * 1 ) + d C (v, * 2 ). Proof. The first claim is proven by contradiction.
Assume MAP(*_1) = MAP(*_2) represents the same node v of G_0, and that α · d_{T_1}(v, u) > d_C(u, v). Then we know from AXIOM 2 that d_C(v, u) ≥ d_{G_0}(v, u) ≥ α · d_{T_1}(u, v) > d_C(v, u), which yields the desired contradiction. Similarly for the second claim: assume for the sake of contradiction that MAP(*_1) = MAP(*_2) represents the same node w of G_0, and that α · d_T(u, v) > d_C(u, w) + d_C(v, w). Due to the triangle inequality, we have that d_C(u, w) + d_C(v, w) ≥ d_C(u, v) and hence α · d_T(u, v) > d_C(u, v), which contradicts the fact that G_C is inferrable (Theorem 3.2).

Lemma 3.3 can be applied to show that a topology is not inferrable from a given trace set because it merges (i.e., maps to the same node) two stars in a manner that violates the axioms. Let us introduce a useful concept for our analysis: the star graph that describes the conflicts between stars.

Definition 3.4 (Star Graph). The star graph G*(V*, E*) of a trace set T consists of the vertices V* = {*_1, ..., *_s}, one for each star occurring in T, and an edge {*_i, *_j} ∈ E* whenever *_i and *_j must not be mapped to the same node according to Lemma 3.3.

Note that the star graph G* is unique and can be computed efficiently for a given trace set T: Conditions (i) and (ii) can be checked by computing G_C. However, note that while G* specifies some stars which cannot be merged, the construction is not sufficient: as Lemma 3.3 is based on G_C, additional links might be needed to characterize the set of inferrable and α-consistent topologies G_T exactly. In other words, a topology G obtained by merging stars that are adjacent in G* is never inferrable (G ∉ G_T); however, merging non-adjacent stars does not guarantee that the resulting topology is inferrable.

What do star graphs look like? The answer is: arbitrary. The following lemma states that the set of possible star graphs is equivalent to the class of general graphs, and this claim holds for any α. The proof appears in the appendix.

Lemma 3.5. For any graph G, there exists a trace set T such that G is the star graph G* of T.

The problem of computing inferrable topologies is related to the vertex colorings of the star graphs.
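The conflict conditions of Lemma 3.3 can be checked mechanically on G_C. The sketch below (our own helper names; a naive quadratic scan over stars and trace positions, not an optimized implementation) builds G_C with uniquely labeled stars, computes canonic distances by BFS, and reports the conflict edges E* of the star graph:

```python
from collections import defaultdict, deque
from itertools import combinations

def bfs(adj, src):
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def star_graph_edges(traces, alpha=1.0):
    """Conflict edges between stars according to Lemma 3.3, computed on G_C."""
    adj, labeled, stars, sid = defaultdict(set), [], [], 0
    for trace in traces:
        t = []
        for sym in trace:
            if sym == '*':
                sid += 1
                sym = f'*{sid}'
                stars.append(sym)
            t.append(sym)
        labeled.append(t)
        for u, v in zip(t, t[1:]):
            adj[u].add(v)
            adj[v].add(u)
    d_C = {x: bfs(adj, x) for x in adj}
    inf = float('inf')
    conflicts = set()
    for s1, s2 in combinations(stars, 2):
        for a, b in ((s1, s2), (s2, s1)):
            # condition (i): a trace path from star a to a named node u is
            # longer (scaled by alpha) than the canonic distance from u to b
            for t in labeled:
                if a in t:
                    i = t.index(a)
                    for j, u in enumerate(t):
                        if not u.startswith('*') and alpha * abs(i - j) > d_C[u].get(b, inf):
                            conflicts.add(frozenset((s1, s2)))
            # condition (ii): a trace between named u, v is longer than a
            # detour via a and b when the two stars are identified
            for t in labeled:
                for i, u in enumerate(t):
                    for j, v in enumerate(t):
                        if i < j and not u.startswith('*') and not v.startswith('*'):
                            if alpha * (j - i) > d_C[u].get(a, inf) + d_C[v].get(b, inf):
                                conflicts.add(frozenset((s1, s2)))
    return conflicts

# The long trace (a, e1, e2, e3, c) forbids identifying the two stars:
# merging them would create a path of length 2 between a and c.
assert star_graph_edges([['a', '*', 'b'], ['c', '*', 'd'],
                         ['a', 'e1', 'e2', 'e3', 'c']]) == {frozenset(('*1', '*2'))}
```

As noted in the text, such a computation only yields a necessary condition: non-adjacent stars in the resulting graph may still fail to be mergeable.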
We will use the following definition, which relates a vertex coloring of G* to an inferrable topology G by contracting independent stars in G* to become one anonymous node in G. For example, observe that a maximum coloring treating every star in the trace as a separate anonymous node describes the inferrable topology G_C.

Definition 3.6 (Coloring-Induced Graph). Let γ denote a coloring of G* which assigns colors 1, ..., k to the vertices of G*: γ: V* → {1, ..., k}. We require that γ is a proper coloring of G*, i.e., that adjacent anonymous nodes are assigned different colors: {u, v} ∈ E* ⇒ γ(u) ≠ γ(v). G_γ is defined as the topology induced by γ: G_γ describes the graph G_C where nodes of the same color are contracted. Two vertices u and v represent the same node in G_γ, i.e., MAP(*_i) = MAP(*_j), if and only if γ(*_i) = γ(*_j).

The following two lemmas establish an intriguing relationship between colorings of G* and inferrable topologies. Also note that Definition 3.6 implies that two different colorings of G* define two non-isomorphic inferrable topologies. We first show that while a coloring-induced topology always fulfills AXIOM 1, the routing consistency may be sacrificed. The proof appears in the appendix.

Lemma 3.7. Let γ be a proper coloring of G*. The coloring-induced topology G_γ is a topology fulfilling AXIOM 2 with a routing consistency of α′, for some positive α′.

An inferrable topology always defines a proper coloring on G*.

Lemma 3.8. Let T be a trace set and G* its corresponding star graph. If a topology G is inferrable from T, then G induces a proper coloring on G*.

Proof. For any α-consistent inferrable topology G there exists some mapping function MAP that assigns each symbol of T to a corresponding node in G (cf. Definition 2.3), and this mapping function gives a coloring on G* (i.e., merged stars appear as nodes of the same color in G*).
The coloring must be proper: due to Lemma 3.3, an inferrable topology can never merge adjacent nodes of G*.

The colorings of G* allow us to derive an upper bound on the cardinality of G_T.

Theorem 3.9. Given a trace set T sampled from a network G_0 and G_T, the set of topologies inferrable from T, it holds that Σ_{k=γ(G*)}^{|V*|} P(G*, k)/k! ≥ |G_T|, where γ(G*) is the chromatic number of G* and P(G*, k) is the number of colorings of G* with k colors (known as the chromatic polynomial of G*).

Proof. The proof follows directly from Lemma 3.8, which shows that each inferred topology has proper colorings, and the fact that a coloring of G* cannot result in two different inferred topologies, as the coloring uniquely describes which stars to merge (Lemma 3.7). In order to account for isomorphic colorings, we need to divide by the number of color permutations.

Note that the fact that G* can be an arbitrary graph (Lemma 3.5) implies that we cannot exploit special properties of G* to compute colorings of G* and γ(G*). Also note that the exact computation of the upper bound is hard, since the minimal coloring as well as the chromatic polynomial of G* (a ♯P-hard problem in general) are needed. To complement the upper bound, we note that star graphs with a small number of conflict edges can indeed result in a large number of inferred topologies.

Lemma 3.10. There are trace sets T with s stars and without any conflicts among the stars (E* = ∅) for which |G_T| equals the Bell number B_s.

Proof. Consider a trace set T = {(σ_i, *_i, σ′_i)}_{i=1,...,s} (e.g., obtained from exploring a topology G_0 where one anonymous center node is connected to 2s named nodes). The trace set does not impose any constraints on how the stars relate to each other, and hence, G* does not contain any edges at all; even when stars are merged, there are no constraints on how the stars relate to each other. Therefore, the star graph for T has B_s = Σ_{j=0}^{s} S_(s,j) colorings, where S_(s,j) = 1/j!
· Σ_{ℓ=0}^{j} (−1)^ℓ · C(j, ℓ) · (j − ℓ)^s is the number of ways to group s nodes into j different, disjoint non-empty subsets (known as the Stirling number of the second kind). Each of these colorings also describes a distinct inferrable topology, as MAP assigns unique labels to anonymous nodes stemming from merging a group of stars (cf. Definition 2.3).

Properties

Even if the number of inferrable topologies is large, topology inference can still be useful if one is mainly interested in the properties of G_0 and if the ensemble G_T is homogeneous with respect to these properties; for example, if "most" of the instances in G_T are close to G_0, there may be an option to conduct an efficient sampling analysis on random representatives. Therefore, in the following, we will take a closer look at how much the members of G_T differ. Important metrics to characterize inferrable topologies are, for instance, the graph size, the diameter DIAM(·), the number of triangles C_3(·) of G, and so on. In the following, let G_1 = (V_1, E_1), G_2 = (V_2, E_2) ∈ G_T be two arbitrary representatives of G_T.

As one might expect, the graph size can be estimated quite well.

Lemma 3.11. It holds that |V_1| − |V_2| ≤ s − γ(G*) ≤ s − 1 and |V_1|/|V_2| ≤ (n + s)/(n + γ(G*)) ≤ (2 + s)/3. Moreover, |E_1| − |E_2| ≤ 2(s − γ(G*)) and |E_1|/|E_2| ≤ (ν + 2s)/(ν + 2) ≤ s, where ν denotes the number of edges between non-anonymous nodes. There are traces with inferrable topologies G_1, G_2 reaching these bounds.

Observe that inferrable topologies can also differ in the number of connected components. This implies that the shortest distance between two named nodes can differ arbitrarily between two representatives in G_T.

Lemma 3.12. Let COMP(G) denote the number of connected components of a topology G. Then, |COMP(G_1) − COMP(G_2)| ≤ n/2. There are instances G_1, G_2 that reach this bound.

Proof. Consider the trace set T = {T_i, i = 1, ..., ⌊n/2⌋} in which T_i = (n_{2i}, *_i, n_{2i+1}).
Since i ≠ j ⇒ T_i ∩ T_j = ∅, we have |E*| = 0. Take G_1 as the 1-coloring of G*: G_1 is a topology with one anonymous node connected to all named nodes. Take G_2 as the ⌊n/2⌋-coloring of the star graph: G_2 has ⌊n/2⌋ distinct connected components (each consisting of three nodes). Upper bound: For the sake of contradiction, suppose there exists T such that |COMP(G_1) − COMP(G_2)| > ⌊n/2⌋. Let us assume that G_1 has the most connected components: G_1 has at least ⌊n/2⌋ + 1 more connected components than G_2. Let C refer to a connected component of G_2 whose nodes are not connected in G_1. This means that C contains at least one anonymous node. Thus, C contains at least two named nodes (since a trace cannot start or end with a star). There must exist at least ⌊n/2⌋ + 1 such connected components C. Thus G_2 would have to contain at least 2(⌊n/2⌋ + 1) ≥ n + 1 named nodes. Contradiction.

An important criterion for topology inference regards the distortion of shortest paths.

Definition 3.13 (Stretch). The maximal ratio of the distance of two non-anonymous nodes in G_0 and a connected topology G is called the stretch ρ: ρ = max_{u,v ∈ ID(G_0)} max{d_{G_0}(u, v)/d_G(u, v), d_G(u, v)/d_{G_0}(u, v)}.

From Lemma 3.12 we already know that inferrable topologies can differ in the number of connected components, and hence, the distance and the stretch between nodes can be arbitrarily wrong. Hence, in the following, we will focus on connected graphs only. However, even if two nodes are connected, their distance can be much longer or shorter than in G_0. Figure 2 gives an example. Both topologies are inferrable from the traces T_1 = (v, *, v_1, ..., v_k, u) and T_2 = (w, *, w_1, ..., w_k, u). One inferrable topology is the canonic graph G_C (Figure 2 left), whereas the other topology merges the two anonymous nodes (Figure 2 right). The distances between v and w are 2(k + 2) and 2, respectively, implying a stretch of k + 2.
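The Figure 2 example can be replayed numerically. Assuming traces of the stated shape (the helper functions below are ours), the canonic graph keeps the two stars apart while the merged topology identifies them, and the stretch k + 2 appears exactly as claimed:

```python
from collections import defaultdict, deque

def add_path(adj, nodes):
    for u, v in zip(nodes, nodes[1:]):
        adj[u].add(v)
        adj[v].add(u)

def dist(adj, src, dst):
    d = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return d[u]
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return None

k = 5
T1 = ['v', '*1'] + [f'v{i}' for i in range(1, k + 1)] + ['u']
T2 = ['w', '*2'] + [f'w{i}' for i in range(1, k + 1)] + ['u']

gc = defaultdict(set)          # canonic graph: stars stay distinct
add_path(gc, T1)
add_path(gc, T2)

gm = defaultdict(set)          # merged topology: both stars become '*'
add_path(gm, ['*' if x in ('*1', '*2') else x for x in T1])
add_path(gm, ['*' if x in ('*1', '*2') else x for x in T2])

assert dist(gc, 'v', 'w') == 2 * (k + 2)   # 14: via v1..vk, u, wk..w1
assert dist(gm, 'v', 'w') == 2             # via the merged anonymous node
```

Both graphs satisfy every trace, yet their v-w distances differ by the factor k + 2.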
Figure 2: Due to the lack of a trace between v and w, the stretch of an inferred topology can be large.

Lemma 3.14. Let u and v be two arbitrary named nodes in the connected topologies G_1 and G_2. Then, even for only two stars in the trace set, it holds for the stretch that ρ ≤ (N − 1)/2. There are instances G_1, G_2 that reach this bound.

We now turn our attention to the diameter and the degree.

Lemma 3.15. For the diameter, we have DIAM(G_1) − DIAM(G_2) ≤ (s − 1)/s · (N − 1) and DIAM(G_1)/DIAM(G_2) ≤ s. There are instances G_1, G_2 that reach these bounds.

Proof. Upper bound: As G_C does not merge any stars, it describes the network with the largest diameter. Let π be a longest path between two nodes u and v in G_C. In the extreme case, π is the only path determining the network diameter and π contains all star nodes. Then, the graph where all s stars are merged into one anonymous node has a minimal diameter of at least DIAM(G_C)/s. Example meeting the bound: consider the trace set T = {(u_1, ..., *_1, ..., u_2), (u_2, ..., *_2, ..., u_3), ..., (u_s, ..., *_s, ..., u_{s+1})} with x named nodes per trace and the star in the middle between u_i and u_{i+1} (assume x to be even; x does not include u_i and u_{i+1}). It holds that DIAM(G_C) = s · (x + 2), whereas in a graph G where all stars are merged, DIAM(G) = x + 2. There are n = s(x + 1) + 1 non-anonymous nodes, so x = (n − s − 1)/s. Figure 3 depicts an example.

Lemma 3.16. For the maximal node degree DEG, we have DEG(G_1) − DEG(G_2) ≤ 2(s − γ(G*)) and DEG(G_1)/DEG(G_2) ≤ s − γ(G*) + 1. There are instances G_1, G_2 that reach these bounds.

Another important topology measure, which indicates how well meshed the network is, is the number of triangles.

Lemma 3.17. Let C_3(G) be the number of cycles of length 3 of the graph G. It holds that C_3(G_1) − C_3(G_2) ≤ 2s(s − 1), which can be reached. The relative error C_3(G_1)/C_3(G_2) can be arbitrarily large unless the number of links between non-anonymous nodes exceeds n²/4, in which case the ratio is upper bounded by 2s(s − 1) + 1.

Proof. Upper bound: Each node which is part of a triangle has at least two incident edges. Thus, a node v can be part of at most C(DEG(v), 2) triangles, where DEG(v) denotes v's degree.
As a consequence, the number of triangles containing an anonymous node in an inferrable topology with a anonymous nodes u_1, ..., u_a is at most Σ_{j=1}^{a} C(DEG(u_j), 2). Given s, this sum is maximized if a = 1 and DEG(u_1) = 2s, as 2s is the maximum degree possible due to Lemma 3.16. Thus there can be at most C(2s, 2) = s · (2s − 1) triangles containing an anonymous node in G_1. The number of triangles with at least one anonymous node is minimized in G_C, because in the canonic graph the degrees of the anonymous nodes are minimized, i.e., they are always exactly two. As a consequence there cannot be more than s such triangles in G_C. If the number of such triangles in G_C is smaller by x, then the number of triangles with at least one anonymous node in the topology G_1 is upper bounded by s · (2s − 1) − x. The difference between the triangles in G_1 and G_2 is thus at most s(2s − 1) − x − (s − x) = 2s(s − 1). Example meeting this bound: If the non-anonymous nodes form a complete graph and all star nodes can be merged into one node in G_1, and G_2 = G_C, then the difference in the number of triangles matches the upper bound. Consequently it holds for the ratio of triangles with anonymous nodes that it does not exceed (s(2s − 1) − x)/(s − x). Thus the ratio can be infinite, as x can reach s. However, if the number of links between the n non-anonymous nodes exceeds n²/4, then there is at least one triangle among them, as the densest triangle-free graph is complete bipartite and contains at most n²/4 links.

Full Exploration

So far, we assumed that the trace set T contains each node and link of G_0 at least once. At first sight, this seems to be the best we can hope for. However, sometimes traces exploring the vicinity of anonymous nodes in different ways yield additional information that helps to characterize G_T better. This section introduces the concept of fully explored networks: T contains sufficiently many traces such that the distances between non-anonymous nodes can be estimated accurately.
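Before moving on, the tightness example in the proof of Lemma 3.17 above can be instantiated and counted directly. The construction below is our own encoding of that argument: a complete graph on 2s named nodes, with all stars merged into one hub in G_1 versus kept separate in G_2 = G_C:

```python
from itertools import combinations

def triangles(adj):
    return sum(1 for a, b, c in combinations(sorted(adj), 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

s = 4
named = [f'v{i}' for i in range(s)] + [f'w{i}' for i in range(s)]

def complete_on_named():
    return {u: {x for x in named if x != u} for u in named}

g1 = complete_on_named()           # all s stars merged into one node '*'
g1['*'] = set()
for i in range(s):
    for endpoint in (f'v{i}', f'w{i}'):
        g1['*'].add(endpoint)
        g1[endpoint].add('*')

g2 = complete_on_named()           # canonic graph: s separate stars
for i in range(s):
    g2[f'*{i}'] = {f'v{i}', f'w{i}'}
    g2[f'v{i}'].add(f'*{i}')
    g2[f'w{i}'].add(f'*{i}')

assert triangles(g1) - triangles(g2) == 2 * s * (s - 1)   # 24 for s = 4
```

The merged hub lies in C(2s, 2) = s(2s − 1) triangles, each canonic star in exactly one, so the difference is s(2s − 1) − s = 2s(s − 1), matching the lemma.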
In some sense, a trace set for a fully explored network is the best we can hope for. Properties that cannot be inferred well under the fully explored topology model are infeasible to infer without additional assumptions on G_0. In this sense, this section provides upper bounds on what can be learned from topology inference. In the following, we will constrain ourselves to routing along shortest paths only (α = 1).

Let us again study the properties of the family of inferrable topologies fully explored by a trace set. Obviously, all the upper bounds from Section 3 are still valid for fully explored topologies. In the following, let G_1, G_2 ∈ G_T be arbitrary representatives of G_T for a fully explored trace set T. A direct consequence of Definition 4.1 concerns the number of connected components and the stretch. (Recall that the stretch is defined with respect to named nodes only, and since α = 1, a 1-consistent inferrable topology cannot include a shorter path between u and v than the one that must appear in a trace of T.)

Lemma 4.2. It holds that COMP(G_1) = COMP(G_2) (= COMP(G_0)) and the stretch is 1.

The proofs for the claims of the following lemmata are analogous to our former proofs, as the main difference is the fact that there might be more conflicts, i.e., edges in G*.

Lemma 4.3. For fully explored networks it holds that |V_1| − |V_2| ≤ s − γ(G*) ≤ s − 1 and |V_1|/|V_2| ≤ (n + s)/(n + γ(G*)) ≤ (2 + s)/3. Moreover, |E_1| − |E_2| ≤ 2(s − γ(G*)) and |E_1|/|E_2| ≤ (ν + 2s)/(ν + 2) ≤ s, where ν denotes the number of links between non-anonymous nodes. There are traces with inferrable topologies G_1, G_2 reaching these bounds.

Lemma 4.4. For the maximal node degree, we have DEG(G_1) − DEG(G_2) ≤ 2(s − γ(G*)) and DEG(G_1)/DEG(G_2) ≤ s − γ(G*) + 1. There are instances G_1, G_2 that reach these bounds.

From Lemma 4.2 we know that fully explored scenarios yield a perfect stretch of one.
However, regarding the diameter, the situation is different in the sense that distances between anonymous nodes play a role.

Lemma 4.5. For fully explored networks, DIAM(G_1)/DIAM(G_2) ≤ 2, and there are instances G_1, G_2 with DIAM(G_1) − DIAM(G_2) = s/2 and DIAM(G_1)/DIAM(G_2) = 2.

The number of triangles with anonymous nodes can still not be estimated accurately in the fully explored scenario.

Lemma 4.6. There exist graphs where C_3(G_1) − C_3(G_2) = s(s − 1)/2, and the relative error C_3(G_1)/C_3(G_2) can be arbitrarily large.

Conclusion

We understand our work as a first step towards shedding light onto the similarity of inferrable topologies based on most basic axioms and without any assumptions on power-law properties, i.e., in the worst case. Using our formal framework we show that the topologies for a given trace set may differ significantly. Thus, in the worst case, it is impossible to accurately characterize topological properties of complex networks. To complement the general analysis, we propose the notion of fully explored networks or trace sets as a "best possible scenario". As expected, we find that fully explored trace sets allow us to determine several properties of the network more accurately; however, it also turns out that even in this scenario, other topological properties are inherently hard to compute. Our results are summarized in Figure 4.

Our work opens several directions for future research. On the theoretical side, one may study whether the minimal inferrable topologies considered in, e.g., [1,2], are more similar in nature. More importantly, while this paper presented results for the general worst case, it would be interesting to devise algorithms that compute, for a given trace set, worst-case bounds for the properties under consideration. For example, such approximate bounds would be helpful to decide whether additional measurements are needed. Moreover, maybe such algorithms may even give advice on the locations at which such measurements would be most useful.
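One ingredient of such bound-computing algorithms is already simple today: when the star graph has no conflict edges, the number of inferrable topologies from Lemma 3.10 is the Bell number B_s, computable via Stirling numbers of the second kind. A minimal sketch (function names are ours):

```python
from math import comb, factorial

def stirling2(s, j):
    """S_(s,j): ways to partition s stars into j non-empty groups."""
    return sum((-1) ** l * comb(j, l) * (j - l) ** s
               for l in range(j + 1)) // factorial(j)

def bell(s):
    """|G_T| for a trace set whose star graph has no conflict edges."""
    return sum(stirling2(s, j) for j in range(s + 1))

assert [bell(s) for s in range(1, 6)] == [1, 2, 5, 15, 52]
```

Already for s = 5 conflict-free stars there are 52 distinct inferrable topologies, which illustrates how quickly the candidate set grows without conflicts.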
Property                  | Arbitrary: G1 − G2      | Arbitrary: G1/G2        | Fully Explored (α = 1): G1 − G2 | Fully Explored (α = 1): G1/G2
# of nodes                | ≤ s − γ(G*)             | ≤ (n + s)/(n + γ(G*))   | ≤ s − γ(G*)                     | ≤ (n + s)/(n + γ(G*))
# of links                | ≤ 2(s − γ(G*))          | ≤ (ν + 2s)/(ν + 2)      | ≤ 2(s − γ(G*))                  | ≤ (ν + 2s)/(ν + 2)
# of connected components | ≤ n/2                   | ≤ n/2                   | = 0                             | = 1
Stretch                   | -                       | ≤ (N − 1)/2             | -                               | = 1
Diameter                  | ≤ (s − 1)/s · (N − 1)   | ≤ s                     | s/2 (¶)                         | 2 (¶)
Max. Deg.                 | ≤ 2(s − γ(G*))          | ≤ s − γ(G*) + 1         | ≤ 2(s − γ(G*))                  | ≤ s − γ(G*) + 1
Triangles                 | ≤ 2s(s − 1)             | ∞                       | s(s − 1)/2                      | ∞

Figure 4: Summary of our bounds on the properties of inferrable topologies. s denotes the number of stars in the traces, n is the number of named nodes, N = n + s, and ν denotes the number of links between named nodes. Note that trace sets meeting these bounds exist for all properties for which we have tight or upper bounds. For the two entries marked with (¶), only "lower bounds" are derived, i.e., examples that yield at least the corresponding accuracy; as the upper bounds from the arbitrary scenario do not match, how to close the gap remains an open question.

A.1 Proof of Theorem 3.2

Fix T. We have to prove that G_C fulfills AXIOM 0, AXIOM 1 (which implies AXIOM 3) and AXIOM 2. AXIOM 0: The axiom holds trivially: only edges from the traces are used in G_C. AXIOM 1: Let T ∈ T and σ_1, σ_2 ∈ T. Let k = d_T(σ_1, σ_2). We show that G_C fulfills AXIOM 1, namely that there exists a path of length k in G_C. Induction on k. (k = 1:) By the definition of G_C, {σ_1, σ_2} ∈ E_C, thus there exists a path of length one between σ_1 and σ_2. (k > 1:) Suppose AXIOM 1 holds up to k − 1. Let σ′_1, ..., σ′_{k−1} be the intermediary nodes between σ_1 and σ_2 in T: T = (..., σ_1, σ′_1, ..., σ′_{k−1}, σ_2, ...). By the induction hypothesis, in G_C there is a path of length k − 1 between σ_1 and σ′_{k−1}. Let π be this path.
By definition of G_C, {σ′_{k−1}, σ_2} ∈ E_C. Thus appending (σ′_{k−1}, σ_2) to π yields the desired path of length k linking σ_1 and σ_2: AXIOM 1 thus holds up to k. AXIOM 2: We have to show that d_T(σ_1, σ_2) = k ⇒ d_C(σ_1, σ_2) ≥ α · k. By contradiction, suppose that G_C does not fulfill AXIOM 2 with respect to α. Then there exist k′ < α · k and σ_1, σ_2 ∈ V_C such that d_C(σ_1, σ_2) = k′. Let π be a shortest path between σ_1 and σ_2 in G_C, and let (T_1, ..., T_ℓ) be the corresponding (maybe repeating) traces covering this path π in G_C. For each T_i ∈ (T_1, ..., T_ℓ), let s_i and e_i be the corresponding start and end nodes of π in T_i. We will show that this path π implies the existence of a path in G_0 which violates α-consistency. Since G_0 is inferrable, G_0 fulfills AXIOM 2, thus we have: d_C(σ_1, σ_2) = Σ_{i=1}^{ℓ} d_{T_i}(s_i, e_i) = k′ < α · k ≤ d_{G_0}(σ_1, σ_2), since G_0 is α-consistent. However, G_0 also fulfills AXIOM 1, thus d_{T_i}(s_i, e_i) ≥ d_{G_0}(s_i, e_i). Thus Σ_{i=1}^{ℓ} d_{G_0}(s_i, e_i) ≤ Σ_{i=1}^{ℓ} d_{T_i}(s_i, e_i) < d_{G_0}(σ_1, σ_2): we have constructed a path from σ_1 to σ_2 in G_0 whose length is shorter than the distance between σ_1 and σ_2 in G_0, leading to the desired contradiction.

A.2 Proof of Lemma 3.5

First we construct a topology G_0 = (V_0, E_0) and then describe a trace set on this graph that generates the star graph G = (V, E). The node set V_0 consists of |V| anonymous nodes and |V| · (1 + τ) named nodes, where τ = ⌈3/(2α) − 1/2⌉. The first building block of G_0 is a copy of G. To each node v_i in the copy of G we attach a chain consisting of τ + 2 nodes: first τ non-anonymous nodes w_(i,1), ..., w_(i,τ), followed by an anonymous node u_i, and finally a named node w_(i,τ+1). More formally, the link set is E_0 = E ∪ ⋃_{i=1}^{|V|} { {v_i, w_(i,1)}, {w_(i,1), w_(i,2)}, ..., {w_(i,τ), u_i}, {u_i, w_(i,τ+1)} }.
The trace set T consists of the following |V| + |E| shortest-path traces: the traces T_ℓ for ℓ ∈ {1, ..., |V|} are given by T(w_(ℓ,τ), w_(ℓ,τ+1)) (one for each node in V), and the traces T_ℓ for ℓ ∈ {|V| + 1, ..., |V| + |E|} are given by T(w_(i,τ), w_(j,τ)) for each link {v_i, v_j} in E. Note that G_0 = G_C, as each star appears as a separate anonymous node. The star graph G* corresponding to this trace set contains the |V| nodes *_i (corresponding to the u_i). In order to prove the claim of the lemma we have to show that two nodes *_i, *_j are conflicting according to Lemma 3.3 if and only if there is a link {v_i, v_j} in E. Case (i) does not apply because the minimum distance between any two nodes in the canonic graph is at least one, and α · d_T(*_i, w_(i,τ)) = α ≤ 1 as well as α · d_T(*_i, w_(i,τ+1)) = α ≤ 1. It remains to examine Case (ii). "⇒": if MAP(*_i) = MAP(*_j), there would be a path of length two between w_(i,τ) and w_(j,τ) in the topology generated by MAP; the trace set however contains a trace T(w_(i,τ), w_(j,τ)) of length 2τ + 1. So α · d_T(w_(i,τ), w_(j,τ)) = α · (2τ + 1) = α · (2⌈3/(2α) − 1/2⌉ + 1) ≥ 3, which violates the α-consistency (Lemma 3.3 (ii)), and hence {*_i, *_j} ∈ E* whenever {v_i, v_j} ∈ E. "⇐": if {v_i, v_j} ∉ E, there is no trace T(w_(i,τ), w_(j,τ)), thus we have to prove that no trace T(w_(i′,τ), w_(j′,τ)) with i′ ≠ i, j′ ≠ j and j′ ≠ i leads to a conflict between *_i and *_j. We show that an even more general statement is true, namely that for any pair of distinct non-anonymous nodes x_1, x_2 it holds that α · d_C(x_1, x_2) ≤ d_C(x_1, *_i) + d_C(x_2, *_j). Since G_C = G_0 and the traces contain shortest paths only, the trace distance between two nodes in the same trace is the same as the distance in G_C.
The following tables contain the relevant lower bounds on distances in G_C and on µ(x_1, x_2) := d_C(x_1, *_i) + d_C(x_2, *_j), where x_1, x_2 ∈ {v_i, v_j, w_(i′,k), w_(j′,k) | 1 ≤ k ≤ τ + 1, i′ ≠ i, j′ ≠ i, j′ ≠ j}:

d_C(·,·) ≥  | v_i     | v_j     | w_(i′,k_1)    | w_(j′,k_1)
v_i         | 0       | 1       | k_1           | k_1 + 1
v_j         | 1       | 0       | k_1 + 1       | k_1
w_(i′,k_2)  | k_2     | k_2 + 1 | |k_2 − k_1|   | k_1 + 1 + k_2
w_(j′,k_2)  | k_2 + 1 | k_2     | k_1 + 1 + k_2 | |k_2 − k_1|
*_i         | τ + 2   | τ + 1   | 2 + τ + k_1   | τ − k_1 + 1
*_j         | τ + 2   | τ + 2   | 2 + τ + k_1   | 2 + τ + k_1

µ(·,·) ≥    | v_i          | v_j          | w_(i′,k_1)         | w_(j′,k_1)
v_i         | 2τ + 4       | 2τ + 3       | 4 + 2τ + k_1       | 4 + 2τ + k_1
v_j         | 2τ + 3       | 2τ + 4       | 2τ + 3 + k_1       | 3 + 2τ + k_1
w_(i′,k_2)  | 4 + 2τ + k_2 | 4 + 2τ + k_2 | 4 + 2τ + k_1 + k_2 | 4 + 2τ + k_1 + k_2
w_(j′,k_2)  | 2τ − k_2 + 3 | 2τ − k_2 + 3 | 2τ + 3 + k_1 − k_2 | 2τ + k_1 − k_2 + 3

If x_1 = w_(j′,k_2), then it holds for all x_2 that d_T(x_1, x_2) ≤ 2τ + 1, whereas µ(x_1, x_2) = d_C(x_1, *_i) + d_C(x_2, *_j) ≥ 2τ + 2. In all other cases it holds at least that d_C(x_1, x_2) < µ(x_1, x_2). Thus α · d_C(x_1, x_2) ≤ d_C(x_1, *_i) + d_C(x_2, *_j), which concludes the proof.

Figure 5: Visualization for the proof of Lemma 3.7. Solid lines denote links, dashed lines denote paths (of annotated length).

A.3 Proof of Lemma 3.7

We have to show that the paths in the traces correspond to paths in G_γ. Let T ∈ T, and σ_1, σ_2 ∈ T. Let π be the sequence of nodes in T connecting σ_1 and σ_2. This is also a path in G_γ: since α > 0, for any two symbols σ_1, σ_2 ∈ T, it holds that MAP(σ_1) ≠ MAP(σ_2). We now construct an example showing that the α′ for which G_γ fulfills AXIOM 2 can be arbitrarily small. Consider the graph represented in Figure 5. Let T_1 = (s, ..., t), T_2 = (s, *_1, ..., m_1), T_3 = (m_1, ..., *_2, m_2), T_4 = (m_2, *_3, ..., m_3), T_5 = (m_3, ..., *_4, t). We assume α = 1.
By changing the parameters k := d_C(s, t) and k′ := d_C(m_1, *_1) = d_C(m_1, *_2) = d_C(m_3, *_3) = d_C(m_3, *_4), we can modulate the links of the corresponding star graph G*. Using d_{T_1}(s, t) = k, observe that k > 2 ⇔ {*_1, *_4} ∈ E*. Similarly, k > 2(k′ + 1) ⇔ {*_1, *_3} ∈ E* ∧ {*_2, *_4} ∈ E*, and k > 2(k′ + 2) ⇔ {*_1, *_2} ∈ E* ∧ {*_3, *_4} ∈ E*. Taking k = 2k′ + 4, we thus have E* = {{*_1, *_3}, {*_2, *_4}, {*_1, *_4}}. Thus, we here construct a situation where *_1 and *_2 as well as *_3 and *_4 can be merged without breaking the consistency requirement, but where merging both pairs simultaneously leads to a topology G that is only 4/k-consistent, since d_G(s, t) = 4. This ratio can be made arbitrarily small provided we choose k′ = (k − 4)/2.

A.4 Proof of Lemma 3.11

In the worst case, each star in the trace set represents a different node in G_1, so the maximal number of nodes in any topology in G_T is the total number of non-anonymous nodes plus the total number of stars in T. This number of nodes is reached in the topology G_C. According to Definition 3.4, only non-adjacent stars in G* can represent the same node in an inferrable topology. Thus, the stars in the trace set T must originate from at least γ(G*) different nodes. As a consequence, |V_1| − |V_2| ≤ s − γ(G*), which can reach s − 1 for a trace set T = {T_i = (v, *_i, w) | 1 ≤ i ≤ s}. Analogously, |V_1|/|V_2| ≤ (n + s)/(n + γ(G*)) ≤ (2 + s)/3. Observe that each occurrence of a node in a trace describes at most two edges. If all anonymous nodes are merged into γ(G*) nodes in G_1 and are separate nodes in G_2, the difference in the number of edges is at most 2(s − γ(G*)). Analogously, |E_1|/|E_2| ≤ (ν + 2s)/(ν + 2) ≤ s. The trace set T = {T_i = (v, *_i, w) | 1 ≤ i ≤ s} reaches this bound.

A.5 Proof of Lemma 3.14

A "lower bound" example follows from Figure 2.
Essentially, this is also the worst case: note that the difference in the shortest distance between a pair of nodes u and v in G_1 and G_2 is only greater than 0 if the shortest path between them involves at least one anonymous node. Hence the shortest such distance is at least two. The longest shortest distance between the same pair of nodes in another inferred topology visits all nodes in the network, i.e., its length is bounded by N − 1.

A.6 Proof of Lemma 3.16

Each occurrence of a node in a trace describes at most two links incident to this node. For the degree difference we only have to consider the links incident to at least one anonymous node, as the number of links between non-anonymous nodes is the same in G_1 and G_2. If all anonymous nodes can be merged into γ(G*) nodes in G_1 and all anonymous nodes are separate in G_2, the difference in the maximum degree is thus at most 2(s − γ(G*)), as there can be at most s − γ(G*) + 1 nodes merged into one node and the minimal maximum degree of a node in G_2 is two. This bound is tight, as the trace set {T_i = (v_i, *, w_i) | 1 ≤ i ≤ s} containing s stars can be represented by a graph with one anonymous node of degree 2s or by a graph with s anonymous nodes of degree two each. For the ratio of the maximal degree we can ignore links between non-anonymous nodes as well, as these only decrease the ratio. The highest number of links incident at a node v with one endpoint in the set of anonymous nodes is s − γ(G*) + 1 for non-anonymous nodes and 2(s − γ(G*) + 1) for anonymous nodes, whereas the lowest number is two.

A.7 Proof of Lemma 4.4

The proof for the upper bound is analogous to the case without full exploration. To prove that this bound can be reached, we need to add traces to the trace set that ensure that all pairs of named nodes appear in some trace without changing the degrees of the anonymous nodes.
To this end, for each pair {v, w} not yet covered by a trace, we add a named node u to G_0 and a trace T = (v, u, w). This does not increase the maximum degree and guarantees full exploration.

A.8 Proof of Lemma 4.5

We first prove the upper bound for the relative case. Note that the maximal distance between two anonymous nodes MAP(*_1) and MAP(*_2) in a connected component of an inferred topology cannot be larger than twice the distance of two named nodes u and v: from Definition 4.1 we know that there must be a trace in T connecting u and v, and the maximal distance δ of a pair of named nodes is given by the path of the trace that includes u and v. Since any trace starts and ends with a named node, any star is at distance at most δ/2 from a named node. Therefore, the maximal distance between MAP(*_1) and MAP(*_2) is at most δ/2 + δ/2 to get to the corresponding closest named nodes, plus δ for the connection between the named nodes. As, according to Lemma 4.2, the distance between named nodes is the same in all inferred topologies, the diameter of inferred topologies can vary at most by a factor of two. We now construct an example that reaches this bound. Consider a topology consisting of a center node c and four rays of length k. Let u_1, u_2, u_3, u_4 be the "end nodes" of each ray. We assume that all these nodes are named. Now add to the topology two chains of 2k + 1 anonymous nodes each, one between u_1 and u_2 and one between u_3 and u_4. The trace set consists of the minimal trace set needed to obtain a fully explored topology: six traces of length 2k + 1 between each pair of end nodes u_1, u_2, u_3, u_4. Now we add two traces of length 2k + 1 between nodes u_1 and u_2, and between nodes u_3 and u_4. These traces explore the anonymous chains and have the following shape: T_7 = (u_1, *_1, ..., *_k, σ, *_{k+1}, ..., *_{2k}, u_2) and T_8 = (u_3, *_{2k+1}, ..., *_{3k}, σ′, *_{3k+1}, ..., *_{4k}, u_4), where σ and σ′ are stars.
Let G_1 = G_C and let G_2 be the inferrable graph where σ and σ′ are merged. The resulting diameters are DIAM(G_1) = 4k + 2 and DIAM(G_2) = 2k + 1. Since s = 4k + 2, the difference can thus be as large as s/2. Note that this construction also yields the bound on the relative difference: DIAM(G_1)/DIAM(G_2) = (4k + 2)/(2k + 1) = 2.

A.9 Proof of Lemma 4.6

Given the number of stars s, we construct a trace set T with two inferrable graphs such that in one graph the number of triangles with anonymous nodes is s(s − 1)/2 and in the other graph there are no such triangles. As a first step, we add s traces T_i = (v_i, *_i, w) to the trace set T, where 1 ≤ i ≤ s. To make this trace set fully explored, we add, as a second step, traces for each pair v_i, v_j, i.e., traces T_{i,j} = (v_i, v_j) for 1 ≤ i < j ≤ s. The resulting trace set contains s stars, and none of the stars are in conflict with each other. Thus the graph G_1 merging all stars into one anonymous node is inferrable from this trace set, and the number of triangles that the anonymous node is part of is s(s − 1)/2. Let G_2 be the canonic graph of this trace set. This graph does not contain any triangles with anonymous nodes, and hence the difference C_3(G_1) − C_3(G_2) is s(s − 1)/2. To see that the ratio can be unbounded, consider the trace set {(v, *_1, w), (u, *_2, w), (u, v)}. This set is fully explored since all pairs of named nodes appear in a trace. The graph where the two stars are merged has one triangle, whereas the canonic graph has no triangle.
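The diameters claimed in the proof of Lemma 4.5 above can be checked by brute force. The sketch below is our own encoding of the center-and-rays topology (names like 'sigma' and the p/q/x node labels are ours); it builds both inferrable graphs and confirms DIAM(G_1) = 4k + 2 and DIAM(G_2) = 2k + 1:

```python
from collections import defaultdict, deque

def add_path(adj, nodes):
    for u, v in zip(nodes, nodes[1:]):
        adj[u].add(v)
        adj[v].add(u)

def diameter(adj):
    def ecc(src):
        d = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        return max(d.values())
    return max(ecc(v) for v in adj)

k = 3

def build(middle1, middle2):
    """Four rays of length k around center 'c', plus two anonymous chains
    of 2k + 1 nodes; middle1/middle2 name the chain midpoints, which are
    merged iff the same label is passed twice."""
    adj = defaultdict(set)
    for r in (1, 2, 3, 4):                      # rays c -> u_r
        add_path(adj, ['c'] + [f'x{r}_{i}' for i in range(k - 1)] + [f'u{r}'])
    add_path(adj, ['u1'] + [f'p{i}' for i in range(k)] + [middle1]
                  + [f'p{i}' for i in range(k, 2 * k)] + ['u2'])
    add_path(adj, ['u3'] + [f'q{i}' for i in range(k)] + [middle2]
                  + [f'q{i}' for i in range(k, 2 * k)] + ['u4'])
    return adj

g1 = build('sigma', 'sigma2')   # canonic graph: the two midpoints distinct
g2 = build('m', 'm')            # the two chain midpoints merged
assert diameter(g1) == 4 * k + 2   # realized between the two midpoints
assert diameter(g2) == 2 * k + 1
```

The diameter of G_1 is realized between the two chain midpoints, which the merge in G_2 collapses to a single node, halving the diameter as stated.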
Traceroute measurements are one of our main instruments to shed light onto the structure and properties of today's complex networks such as the Internet. This paper studies the feasibility and infeasibility of inferring the network topology given traceroute data from a worst-case perspective, i.e., without any probabilistic assumptions on, e.g., the nodes' degree distribution. We attend to a scenario where some of the routers are anonymous, and propose two fundamental axioms that model two basic assumptions on the traceroute data: (1) each trace corresponds to a real path in the network, and (2) the routing paths are at most a factor 1/α off the shortest paths, for some parameter α ∈ (0,1]. In contrast to existing literature that focuses on the cardinality of the set of (often only minimal) inferrable topologies, we argue that a large number of possible topologies alone is often unproblematic, as long as the networks have a similar structure. We hence seek to characterize the set of topologies inferred with our axioms. We introduce the notion of star graphs whose colorings capture the differences among inferred topologies; it also allows us to construct inferred topologies explicitly. We find that in general, inferrable topologies can differ significantly in many important aspects, such as the nodes' distances or the number of triangles. These negative results are complemented by a discussion of a scenario where the trace set is best possible, i.e., "complete". It turns out that while some properties such as the node degrees are still hard to measure, a complete trace set can help to determine global properties such as the connectivity.
Our work is motivated by a series of papers by Acharya and Gouda. In @cite_14 , a network tracing theory model is introduced where nodes are "irregular" in the sense that each node appears in at least one trace with its real identifier. In @cite_3 , hardness results are derived for this model. However, as pointed out by the authors themselves, the irregular node model---where nodes are anonymous due to high loads---is less relevant in practice, and hence they consider strictly anonymous nodes in their follow-up studies @cite_7 . As proved in @cite_7 , the problem is still hard (in the sense that there are many minimal networks corresponding to a trace set), even with only two anonymous nodes, symmetric routing and without aliasing.
Reference abstracts:

@cite_14 (MID 1520619684): "Traceroute is a widely used program for computing the topology of any network in the Internet. Using Traceroute, one starts from a node and chooses any other node in the network. Traceroute obtains the sequence of nodes that occur between these two nodes, as specified by the routing tables in these nodes. Each use of Traceroute in a network produces a trace of nodes that constitute a simple path in this network. In every trace that is produced by Traceroute, each node occurs either by its unique identifier, or by the anonymous identifier "*". In this paper, we introduce the first theory aimed at answering the following important question. Is there an algorithm to compute the topology of a network N from a trace set T that is produced by using Traceroute in network N, assuming that each edge in N occurs in at least one trace in T, and that each node in N occurs by its unique identifier in at least one trace in T? We prove that the answer to this question is "No" if N is an even ring or a general network. However, it is "Yes" if N is a tree or an odd ring. The answer is also "No" if N is mostly-regular, but "Yes" if N is a mostly-regular even ring."

@cite_7 (MID 1774487707): "Many systems require information about the topology of networks on the Internet, for purposes like management, efficiency, testing of new protocols and so on. However, ISPs usually do not share the actual topology maps with outsiders; thus, in order to obtain the topology of a network on the Internet, a system must reconstruct it from publicly observable data. The standard method employs traceroute to obtain paths between nodes; next, a topology is generated such that the observed paths occur in the graph. However, traceroute has the problem that some routers refuse to reveal their addresses, and appear as anonymous nodes in traces. Previous research on the problem of topology inference with anonymous nodes has demonstrated that it is at best NP-complete. In this paper, we improve upon this result. In our previous research, we showed that in the special case where nodes may be anonymous in some traces but not in all traces (so all node identifiers are known), there exist trace sets that are generable from multiple topologies. This paper extends our theory of network tracing to the general case (with strictly anonymous nodes), and shows that the problem of computing the network that generated a trace set, given the trace set, has no general solution. The weak version of the problem, which allows an algorithm to output a "small" set of networks, any one of which is the correct one, is also not solvable. Any algorithm guaranteed to output the correct topology outputs at least an exponential number of networks. Our results are surprisingly robust: they hold even when the network is known to have exactly two anonymous nodes, and every node as well as every edge in the network is guaranteed to occur in some trace. On the basis of this result, we suggest that exact reconstruction of network topology requires more powerful tools than traceroute."

@cite_3 (MID 1561479608): "Computing the topology of a network in the Internet is a problem that has attracted considerable research interest. The usual method is to employ Traceroute, which produces sequences of nodes that occur along the routes from one node (source) to another (destination). In every trace thus produced, a node occurs by either its unique identifier, or by the anonymous identifier "*". We have earlier proved that there exists no algorithm that can take a set of traces produced by running Traceroute on network N and compute one topology which is guaranteed to be the topology of N. This paper proves a much stronger result: no algorithm can produce a small set of topologies that is guaranteed to contain the topology of N, as the size of the solution set is exponentially large. This result holds even when every edge occurs in a trace, all the unique identifiers of all the nodes are known, and the number of nodes that are irregular (anonymous in some traces) is given. On the basis of this strong result, we suggest that efforts to exactly reconstruct network topology should focus on special cases where the solution set is small."
Misleading Stars: What Cannot Be Measured in the Internet?
Surprisingly little is known about the structure of many important complex networks such as the Internet. One reason is the inherent difficulty of performing accurate, large-scale and preferably synchronous measurements from a large number of different vantage points. Another reason is privacy and information hiding issues: for example, network providers may seek to hide the details of their infrastructure to avoid tailored attacks. Since knowledge of the network characteristics is crucial for many applications (e.g., RMTP [12], or PaDIS [13]), the research community implements measurement tools to analyze at least the main properties of the network. The results can then, e.g., be used to design more efficient network protocols in the future. This paper focuses on the most basic characteristic of the network: its topology. The classic tool to study topological properties is traceroute. Traceroute allows us to collect traces from a given source node to a set of specified destination nodes. A trace between two nodes contains a sequence of identifiers describing the route traveled by the packet. However, not every node along such a path is configured to answer with its identifier. Rather, some nodes may be anonymous in the sense that they appear as stars (' * ') in a trace. Anonymous nodes exacerbate the exploration of a topology because already a small number of anonymous nodes may increase the spectrum of inferrable topologies that correspond to a trace set T . This paper is motivated by the observation that the mere number of inferrable topologies alone does not contradict the usefulness or feasibility of topology inference; if the set of inferrable topologies is homogeneous in the sense that the different topologies share many important properties, the generation of all possible graphs can be avoided: an arbitrary representative may characterize the underlying network accurately.
Therefore, we identify important topological metrics such as diameter or maximal node degree and examine how "close" the possible inferred topologies are with respect to these metrics. Our Contribution This paper initiates the study and characterization of topologies that can be inferred from a given trace set computed with the traceroute tool. While existing literature assuming a worst-case perspective has mainly focused on the cardinality of minimal topologies, we go one step further and examine specific topological graph properties. We introduce a formal theory of topology inference by proposing basic axioms (i.e., assumptions on the trace set) that are used to guide the inference process. We present a novel and, we believe, appealing definition of the isomorphism of inferred topologies which is aware of traffic paths; it is motivated by the observation that although two topologies may look equivalent up to a renaming of anonymous nodes, the same trace set may result in different paths. Moreover, we initiate the study of two extremes: in the first scenario, we only require that each link appears at least once in the trace set; interestingly, however, it turns out that this is often not sufficient, and we propose a "best case" scenario where the trace set is, in some sense, complete: it contains paths between all pairs of nodes. The main result of the paper is a negative one. It is shown that already a small number of anonymous nodes in the network renders topology inference difficult. In particular, we prove that in general, the possible inferrable topologies differ in many crucial aspects. We introduce the concept of the star graph of a trace set that is useful for the characterization of inferred topologies. In particular, colorings of the star graphs allow us to constructively derive inferred topologies.
(Although the general problem of computing the set of inferrable topologies is related to NP-hard problems such as minimal graph coloring and graph isomorphism, some important instances of inferrable topologies can be computed efficiently.) The minimal coloring (i.e., the chromatic number) of the star graph defines a lower bound on the number of anonymous nodes from which the stars in the traces could originate. And the number of possible colorings of the star graph (a function of the chromatic polynomial of the star graph) gives an upper bound on the number of inferrable topologies. We show that this bound is tight in the sense that there are situations where there indeed exist that many inferrable topologies. In particular, there are problem instances where the cardinality of the set of inferrable topologies equals the Bell number. This insight complements (and generalizes to arbitrary, not only minimal, inferrable topologies) existing cardinality results. Finally, we examine the scenario of fully explored networks for which "complete" trace sets are available. As expected, inferrable topologies are more homogeneous and can be characterized well with respect to many properties such as node distances. However, we also find that other properties are inherently difficult to estimate. Interestingly, our results indicate that full exploration is often useful for global properties (such as connectivity) while it does not help much for more local properties (such as node degree). Organization The remainder of this paper is organized as follows. Our theory of topology inference is introduced in Section 2. The main contribution is presented in Sections 3 and 4 where we derive bounds for general trace sets and fully explored networks, respectively. In Section 5, the paper concludes with a discussion of our results and directions for future research. Due to space constraints, some proofs are moved to the appendix.
Model Let T denote the set of traces obtained from probing (e.g., by traceroute) an undirected (and not necessarily connected) network G 0 = (V 0 , E 0 ) with nodes or vertices V 0 (the set of routers) and links or edges E 0 . We assume that G 0 is static during the probing time (or that probing is instantaneous). Each trace T (u, v) ∈ T describes a path connecting two nodes u, v ∈ V 0 ; when u and v do not matter or are clear from the context, we simply write T . Moreover, let d T (u, v) denote the distance (number of hops) between two nodes u and v in trace T . We define d G 0 (u, v) to be the corresponding shortest path distance in G 0 . Note that a trace between two nodes u and v may not describe the shortest path between u and v in G 0 . The nodes in V 0 fall into two categories: anonymous nodes and non-anonymous (or shorter: named) nodes. Therefore, each trace T ∈ T describes a sequence of symbols representing anonymous and non-anonymous nodes. We make the natural assumption that the first and the last node in each trace T is non-anonymous. Moreover, we assume that traces are given in a form where non-anonymous nodes appear with a unique, anti-aliased identifier (i.e., the multiple IP addresses corresponding to different interfaces of a node are resolved to one identifier); an anonymous node is represented as * ("star") in the traces. For our formal analysis, we assign to each star in a trace set T a unique identifier i: * i . (Note that except for the numbering of the stars, we allow identical copies of T in T , and we do not make any assumptions on the implications of identical traces: they may or may not describe the same paths.) Thus, a trace T ∈ T is a sequence of symbols taken from an alphabet Σ = ID ∪ (∪ i * i ), where ID is the set of non-anonymous node identifiers (IDs): Σ is the union of the (anti-aliased) non-anonymous nodes and the set of all stars (with their unique identifiers) appearing in a trace set.
The main challenge in topology inference is to determine which stars in the traces may originate from which anonymous nodes. Henceforth, let n = |ID| denote the number of non-anonymous nodes and let s = |∪ i * i | be the number of stars in T ; similarly, let a denote the number of anonymous nodes in a topology. Let N = n + s = |Σ| be the total number of symbols occurring in T . Clearly, the process of topology inference depends on the assumptions on the measurements. In the following, we postulate the fundamental axioms that guide the reconstruction. First, we make the assumption that each link of G 0 is visited by the measurement process, i.e., it appears as a transition in the trace set T . In other words, we are only interested in inferring the (sub-)graph for which measurement data is available. AXIOM 0 (Complete Cover): Each edge of G 0 appears at least once in some trace in T . The next fundamental axiom assumes that traces always represent paths on G 0 . AXIOM 1 (Reality Sampling): For every trace T ∈ T , if the distance between two symbols σ 1 , σ 2 ∈ T is d T (σ 1 , σ 2 ) = k, then there exists a path (i.e., a walk without cycles) of length k connecting the two (named or anonymous) nodes corresponding to σ 1 and σ 2 in G 0 . The following axiom captures the consistency of the routing protocol on which the traceroute probing relies. In the current Internet, policy routing is known to have an impact both on the route length [14] and on the convergence time [11]. AXIOM 2 (α-(Routing) Consistency): There exists an α ∈ (0, 1] such that, for every trace T ∈ T , if d T (σ 1 , σ 2 ) = k for two entries σ 1 , σ 2 in trace T , then the shortest path connecting the two (named or anonymous) nodes corresponding to σ 1 and σ 2 in G 0 has distance at least ⌈αk⌉. Note that if α = 1, the routing is a shortest path routing. Moreover, note that if α were 0, there could be loops in the paths, and there would be hardly any topological constraints, rendering almost any topology inferrable.
(For example, the complete graph with one anonymous router is always a solution.) A natural axiom to merge traces is the following. AXIOM 3 (Trace Merging): For two traces T 1 , T 2 ∈ T for which ∃σ 1 , σ 2 , σ 3 , where σ 2 refers to a named node, such that d T 1 (σ 1 , σ 2 ) = i and d T 2 (σ 2 , σ 3 ) = j, it holds that the distance between the two nodes corresponding to σ 1 and σ 3 in G 0 is at most d G 0 (σ 1 , σ 3 ) ≤ i + j. Any topology G which is consistent with these axioms (when applied to T ) is called inferrable from T . Definition 2.1 (Inferrable Topologies). A topology G is (α-consistently) inferrable from a trace set T if axioms AXIOM 0, AXIOM 1, AXIOM 2 (with parameter α), and AXIOM 3 are fulfilled. We will refer by G T to the set of topologies inferrable from T . Please note the following important observation. Remark 2.2. While we generally have that G 0 ∈ G T , since T was generated from G 0 and AXIOM 0, AXIOM 1, AXIOM 2 and AXIOM 3 are fulfilled by definition, there can be situations where an α-consistent trace set for G 0 contradicts AXIOM 0: some edges may not appear in T . If this is the case, we will focus on the inferrable topologies containing the links we know, even if G 0 may have additional, hidden links that cannot be explored due to the high α value. The main objective of a topology inference algorithm ALG is to compute topologies which are consistent with these axioms. Concretely, ALG's input is the trace set T together with the parameter α specifying the assumed routing consistency. Essentially, the goal of any topology inference algorithm ALG is to compute a mapping of the symbols Σ (appearing in T ) to nodes in an inferred topology G; or, in case the input parameters α and T are contradictory, to reject the input. This mapping of symbols to nodes implicitly describes the edge set of G as well: the edge set is unique as all the transitions of the traces in T are now unambiguously tied to two nodes.
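As a concrete illustration of AXIOM 1 and AXIOM 2, the following sketch (plain Python; the helper names `bfs_dist` and `alpha_consistent` are ours, with a topology given as a dict of neighbor sets) checks whether a single trace is α-consistent with a candidate topology: consecutive entries must be adjacent, and entries at trace distance k must lie at shortest-path distance at least α·k.

```python
from collections import deque

def bfs_dist(adj, src):
    """Shortest-path (hop) distances from src via breadth-first search."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def alpha_consistent(adj, trace, alpha):
    """AXIOM 1 + AXIOM 2 for one trace: consecutive entries must be
    adjacent, and entries at trace distance k must be at shortest-path
    distance >= alpha * k in the candidate topology."""
    for i, u in enumerate(trace):
        dist = bfs_dist(adj, u)
        for j in range(i + 1, len(trace)):
            k = j - i
            d = dist.get(trace[j])
            if d is None or d < alpha * k or (k == 1 and d != 1):
                return False
    return True

# A triangle topology: a trace taking the two-hop detour a-b-c violates
# 1-consistency (shortest path a-c has length 1 < 2) but is 0.5-consistent.
triangle = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
print(alpha_consistent(triangle, ["a", "b", "c"], 1.0))  # False
print(alpha_consistent(triangle, ["a", "b", "c"], 0.5))  # True
```

A full inference algorithm would run such a check for every trace in T against every candidate merging of stars.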
So far, we have ignored an important and non-trivial question: When are two topologies G 1 , G 2 ∈ G T different (and hence appear as two independent topologies in G T )? In this paper, we pursue the following approach: We are not interested in purely topological isomorphisms, but we care about the identifiers of the non-anonymous nodes, i.e., we are interested in the locations of the non-anonymous nodes and their distance to other nodes. For anonymous nodes, the situation is slightly more complicated: one might think that as the nodes are anonymous, their "names" do not matter. Consider however the example in Figure 1: the two inferrable topologies have two anonymous nodes, once where { * 1 , * 2 } plus { * 3 , * 4 } are merged into one node each in the inferrable topology and once where { * 1 , * 4 } plus { * 2 , * 3 } are merged into one node each in the inferrable topology. In this paper, we regard the two topologies as different, for the following reason: Assume that there are two paths in the network, one u * 2 v (e.g., during day time) and one u * 3 v (e.g., at night); clearly, this traffic has different consequences and hence we want to be able to distinguish between the two topologies described above. In other words, our notion of isomorphism of inferred topologies is path-aware. It is convenient to introduce the following MAP function. Essentially, an inference algorithm computes such a mapping. Definition 2.3 (Mapping Function MAP). Let G = (V, E) ∈ G T be a topology inferrable from T . A topology inference algorithm describes a surjective mapping function MAP : Σ → V . For the set of non-anonymous nodes in Σ, the mapping function is bijective; and each star is mapped to exactly one node in V , but multiple stars may be assigned to the same node. Note that for any σ ∈ Σ, MAP(σ) uniquely identifies a node v ∈ V . 
More specifically, we assume that MAP assigns labels to the nodes in V : in case of a named node, the label is simply the node's identifier; in case of anonymous nodes, the label is * β , where β is the concatenation of the sorted indices of the stars which are merged into node * β . With this definition, two topologies G 1 , G 2 ∈ G T differ if and only if they do not describe the identical (MAP-) labeled topology. We will use this MAP function also for G 0 , i.e., we will write MAP(σ) to refer to a symbol σ's corresponding node in G 0 . In the remainder of this paper, we will often assume that AXIOM 0 is given. Moreover, note that AXIOM 3 is redundant. Therefore, in our proofs, we will not explicitly cover AXIOM 0, and it is sufficient to show that AXIOM 1 holds to prove that AXIOM 3 is satisfied. Lemma 2.4. AXIOM 1 implies AXIOM 3. Proof. Let T be a trace set, and G ∈ G T . Let σ 1 , σ 2 , σ 3 be symbols such that ∃T 1 , T 2 ∈ T with σ 1 ∈ T 1 , σ 3 ∈ T 2 and σ 2 ∈ T 1 ∩ T 2 . Let i = d T 1 (σ 1 , σ 2 ) and j = d T 2 (σ 2 , σ 3 ). Since any inferrable topology G fulfills AXIOM 1, there is a path π 1 of length at most i between the nodes corresponding to σ 1 and σ 2 in G and a path π 2 of length at most j between the nodes corresponding to σ 2 and σ 3 in G. The concatenation of these two paths has length at most i + j, and the shortest path between σ 1 and σ 3 can only be shorter, hence the claim follows. Inferrable Topologies What insights can be obtained from topology inference with minimal assumptions, i.e., with our axioms? And what is the structure of the inferrable topology set G T ? We first make some general observations and then examine different graph metrics in more detail. Basic Observations Although the generation of the entire topology set G T may be computationally hard, some instances of G T can be computed efficiently. The simplest possible inferrable topology is the so-called canonic graph G C : the topology which assumes that all stars in the traces refer to different anonymous nodes.
In other words, if a trace set T contains n = |ID| named nodes and s stars, G C will contain |V (G C )| = N = n + s nodes. Definition 3.1 (Canonic Graph G C ). The canonic graph is defined by G C (V C , E C ) where V C = Σ is the set of (anti-aliased) nodes appearing in T (where each star is considered a unique anonymous node) and where {σ 1 , σ 2 } ∈ E C ⇔ ∃T ∈ T , T = (. . . , σ 1 , σ 2 , . . .), i.e., σ 2 directly follows σ 1 in some trace T (σ 1 , σ 2 ∈ T can be either non-anonymous nodes or stars). Let d C (σ 1 , σ 2 ) denote the canonic distance between two nodes, i.e., the length of a shortest path in G C between the nodes σ 1 and σ 2 . Note that G C is indeed an inferrable topology. In this case, MAP : Σ → Σ is the identity function. The proof appears in the appendix. Theorem 3.2. G C is inferrable from T . G C can be computed efficiently from T : represent each non-anonymous node and star as a separate node, and for any pair of consecutive entries (i.e., nodes) in a trace, add the corresponding link. The time complexity of this construction is linear in the size of T . With the definition of the canonic graph, we can derive the following lemma which establishes a necessary condition, derived from constraints on the routing paths, under which two stars cannot represent the same node in G 0 . This is useful for the characterization of inferred topologies. Lemma 3.3. Let * 1 , * 2 be two stars occurring in some traces in T . * 1 , * 2 cannot be mapped to the same node, i.e., MAP( * 1 ) = MAP( * 2 ), without violating the axioms in the following conflict situations: (i) if * 1 ∈ T 1 and * 2 ∈ T 2 , and T 1 describes a too long path between the anonymous node MAP( * 1 ) and a non-anonymous node u, i.e., α · d T 1 ( * 1 , u) > d C (u, * 2 ). (ii) if * 1 ∈ T 1 and * 2 ∈ T 2 , and there exists a trace T that contains a path between two non-anonymous nodes u and v with α · d T (u, v) > d C (u, * 1 ) + d C (v, * 2 ). Proof. The first proof is by contradiction.
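The linear-time construction of G C described above can be sketched as follows (plain Python; the helper name `canonic_graph` is ours, and traces are sequences of symbols with stars already carrying unique identifiers such as '*1'):

```python
def canonic_graph(traces):
    """Build the canonic graph G_C: keep every star as a distinct anonymous
    node and add an edge for each pair of consecutive entries in a trace.
    One pass over the trace set, i.e., linear time in its size."""
    nodes, edges = set(), set()
    for trace in traces:
        nodes.update(trace)
        for a, b in zip(trace, trace[1:]):
            edges.add(frozenset((a, b)))
    return nodes, edges

nodes, edges = canonic_graph([("v", "*1", "w"), ("u", "*2", "w"), ("u", "v")])
print(len(nodes), len(edges))  # prints: 5 5
```

Using `frozenset` pairs makes the edges undirected, matching the model's undirected network G 0.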
Assume MAP( * 1 ) = MAP( * 2 ) represents the same node v of G 0 , and that α · d T 1 (v, u) > d C (u, v). Then we know from AXIOM 2 that d C (v, u) ≥ d G 0 (v, u) ≥ α · d T 1 (u, v) > d C (v, u), which yields the desired contradiction. Similarly for the second part: assume for the sake of contradiction that MAP( * 1 ) = MAP( * 2 ) represents the same node w of G 0 , and that α · d T (u, v) > d C (u, w) + d C (v, w). Due to the triangle inequality, we have that d C (u, w) + d C (v, w) ≥ d C (u, v) and hence, α · d T (u, v) > d C (u, v), which contradicts the fact that G C is inferrable (Theorem 3.2). Lemma 3.3 can be applied to show that a topology is not inferrable from a given trace set because it merges (i.e., maps to the same node) two stars in a manner that violates the axioms. Let us introduce a useful concept for our analysis: the star graph that describes the conflicts between stars. The star graph G * (V * , E * ) has a vertex for each star appearing in T , and two stars are connected by an edge in E * if and only if they are in conflict according to Lemma 3.3, i.e., if they may not be mapped to the same node. Note that the star graph G * is unique and can be computed efficiently for a given trace set T : Conditions (i) and (ii) can be checked by computing G C . However, note that while G * specifies some stars which cannot be merged, the construction is not sufficient: as Lemma 3.3 is based on G C , additional links might be needed to characterize the set of inferrable and α-consistent topologies G T exactly. In other words, a topology G obtained by merging stars that are adjacent in G * is never inferrable (G ∉ G T ); however, merging non-adjacent stars does not guarantee that the resulting topology is inferrable. What do star graphs look like? The answer is: arbitrary. The following lemma states that the set of possible star graphs is equivalent to the class of general graphs, i.e., every general graph can occur as the star graph of some trace set. This claim holds for any α. The proof appears in the appendix. The problem of computing inferrable topologies is related to the vertex colorings of the star graphs.
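The conflict conditions of Lemma 3.3 can be evaluated mechanically on the canonic graph. The sketch below (plain Python; the helper names `bfs_dist`, `star_graph` and `conflict` are ours, stars are symbols starting with '*', and unreachable pairs are treated as infinitely distant, so they never trigger a conflict) builds the star graph of a trace set:

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, src):
    """Canonic (hop) distances from src via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def star_graph(traces, alpha=1.0):
    """Vertices: the stars of the trace set. Edges: conflicts per
    Lemma 3.3, conditions (i) and (ii), evaluated on G_C."""
    adj = {}
    for t in traces:
        for a, b in zip(t, t[1:]):
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    d_C = {v: bfs_dist(adj, v) for v in adj}  # canonic distances
    stars = sorted(v for v in adj if v.startswith("*"))
    INF = float("inf")

    def conflict(s1, s2):
        # condition (i): a trace path from a star to a named node u that is
        # too long compared with d_C(u, other star); check both roles
        for a, b in ((s1, s2), (s2, s1)):
            for t in traces:
                if a not in t:
                    continue
                ia = t.index(a)
                for j, u in enumerate(t):
                    if not u.startswith("*") and alpha * abs(ia - j) > d_C[u].get(b, INF):
                        return True
        # condition (ii): a trace path between named u and v that is too
        # long compared with d_C(u, s1) + d_C(v, s2)
        for t in traces:
            for i, u in enumerate(t):
                if u.startswith("*"):
                    continue
                for j in range(i + 1, len(t)):
                    v = t[j]
                    if v.startswith("*"):
                        continue
                    lhs = alpha * (j - i)
                    if (lhs > d_C[u].get(s1, INF) + d_C[v].get(s2, INF)
                            or lhs > d_C[u].get(s2, INF) + d_C[v].get(s1, INF)):
                        return True
        return False

    edges = {frozenset(p) for p in combinations(stars, 2) if conflict(*p)}
    return stars, edges

# A conflict-free example: both stars may be merged.
stars, edges = star_graph([("v", "*1", "w"), ("u", "*2", "w"), ("u", "v")])
print(stars, edges)  # prints: ['*1', '*2'] set()
```

The quadratic enumeration over star pairs is fine for illustration; the point is that only G C is needed to evaluate both conditions.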
We will use the following definition which relates a vertex coloring of G * to an inferrable topology G by contracting independent stars in G * to become one anonymous node in G. For example, observe that a maximum coloring treating every star in the trace as a separate anonymous node describes the inferrable topology G C . Definition 3.6 (Coloring-Induced Graph). Let γ denote a coloring of G * which assigns colors 1, . . . , k to the vertices of G * : γ : V * → {1, . . . , k}. We require that γ is a proper coloring of G * , i.e., that conflicting stars are assigned different colors: {u, v} ∈ E * ⇒ γ(u) ≠ γ(v). G γ is defined as the topology induced by γ. G γ describes the graph G C where nodes of the same color are contracted: two vertices u and v represent the same node in G γ , i.e., MAP( * i ) = MAP( * j ), if and only if γ( * i ) = γ( * j ). The following two lemmas establish an intriguing relationship between colorings of G * and inferrable topologies. Also note that Definition 3.6 implies that two different colorings of G * define two non-isomorphic inferrable topologies. We first show that while a coloring-induced topology always fulfills AXIOM 1, the routing consistency may be sacrificed. The proof appears in the appendix. Lemma 3.7. Let γ be a proper coloring of G * . The coloring-induced topology G γ is a topology fulfilling AXIOM 2 with a routing consistency of α′, for some positive α′. An inferrable topology always defines a proper coloring on G * . Lemma 3.8. Let T be a trace set and G * its corresponding star graph. If a topology G is inferrable from T , then G induces a proper coloring on G * . Proof. For any α-consistent inferrable topology G there exists some mapping function MAP that assigns each symbol of T to a corresponding node in G (cf. Definition 2.3), and this mapping function gives a coloring on G * (i.e., merged stars appear as nodes of the same color in G * ).
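Definition 3.6 translates into code directly: given the canonic edge set and a proper coloring of the stars, contract each color class into one anonymous node whose label concatenates the sorted star indices, as in Definition 2.3 (a sketch; the helper name `coloring_induced_edges` is ours):

```python
def coloring_induced_edges(canonic_edges, coloring):
    """Contract stars of the same color class into one anonymous node.
    `coloring` maps star symbols like '*1' to colors; named nodes keep
    their identifiers. Merged nodes are labelled '*' plus the sorted
    indices of their stars."""
    groups = {}
    for star, color in coloring.items():
        groups.setdefault(color, []).append(star)
    label = {}
    for members in groups.values():
        name = "*" + ",".join(sorted(m.lstrip("*") for m in members))
        for m in members:
            label[m] = name
    edges = set()
    for e in canonic_edges:
        a, b = tuple(e)
        a, b = label.get(a, a), label.get(b, b)
        if a != b:  # a proper coloring never merges adjacent (conflicting) stars
            edges.add(frozenset((a, b)))
    return edges

canonic = [frozenset(e) for e in
           [("v", "*1"), ("*1", "w"), ("u", "*2"), ("*2", "w"), ("u", "v")]]
induced = coloring_induced_edges(canonic, {"*1": 1, "*2": 1})
print(len(induced))  # prints: 4
```

Here the 1-coloring merges *1 and *2 into a single node labelled "*1,2", collapsing the two parallel length-2 paths through w into one.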
The coloring must be proper: due to Lemma 3.3, an inferrable topology can never merge adjacent nodes of G * . The colorings of G * allow us to derive an upper bound on the cardinality of G T . Theorem 3.9. Given a trace set T sampled from a network G 0 and G T , the set of topologies inferrable from T , it holds that: ∑_{k=γ(G * )}^{|V * |} P (G * , k)/k! ≥ |G T |, where γ(G * ) is the chromatic number of G * and P (G * , k) is the number of colorings of G * with k colors (known as the chromatic polynomial of G * ). Proof. The proof follows directly from Lemma 3.8 which shows that each inferred topology has proper colorings, and the fact that a coloring of G * cannot result in two different inferred topologies, as the coloring uniquely describes which stars to merge (Lemma 3.7). In order to account for isomorphic colorings, we need to divide by the number of color permutations. Note that the fact that G * can be an arbitrary graph (Lemma 3.5) implies that we cannot exploit special properties of G * to compute colorings of G * and γ(G * ). Also note that the exact computation of the upper bound is hard, since both the minimal coloring and the chromatic polynomial of G * are needed, and computing the chromatic polynomial is #P-hard. To complement the upper bound, we note that star graphs with a small number of conflict edges can indeed result in a large number of inferred topologies: there are trace sets T with s stars for which the number of inferrable topologies reaches the Bell number, i.e., |G T | = B s . Proof. Consider a trace set T = {(σ i , * i , σ i ′) i=1,...,s } (e.g., obtained from exploring a topology G 0 where one anonymous center node is connected to 2s named nodes). The trace set does not impose any constraints on how the stars relate to each other, and hence, G * does not contain any edges at all; even when stars are merged, there are no constraints on how the stars relate to each other. Therefore, the star graph for T has B s = ∑_{j=0}^{s} S (s,j) colorings, where S (s,j) = 1/j! · ∑_{ℓ=0}^{j} (−1)^ℓ C(j, ℓ) (j − ℓ)^s is the number of ways to group s nodes into j different, disjoint non-empty subsets (known as the Stirling number of the second kind; C(j, ℓ) denotes the binomial coefficient). Each of these colorings also describes a distinct inferrable topology, as MAP assigns unique labels to anonymous nodes stemming from merging a group of stars (cf. Definition 2.3). Properties Even if the number of inferrable topologies is large, topology inference can still be useful if one is mainly interested in the properties of G 0 and if the ensemble G T is homogeneous with respect to these properties; for example, if "most" of the instances in G T are close to G 0 , there may be an option to conduct an efficient sampling analysis on random representatives. Therefore, in the following, we will take a closer look at how much the members of G T differ. Important metrics to characterize inferrable topologies are, for instance, the graph size, the diameter DIAM(·), the number of triangles C 3 (·) of G, and so on. In the following, let G 1 = (V 1 , E 1 ), G 2 = (V 2 , E 2 ) ∈ G T be two arbitrary representatives of G T . As one might expect, the graph size can be estimated quite well. Lemma 3.11. It holds that |V 1 | − |V 2 | ≤ s − γ(G * ) ≤ s − 1 and |V 1 |/|V 2 | ≤ (n + s)/(n + γ(G * )) ≤ (2 + s)/3. Moreover, |E 1 | − |E 2 | ≤ 2(s − γ(G * )) and |E 1 |/|E 2 | ≤ (ν + 2s)/(ν + 2) ≤ s, where ν denotes the number of edges between non-anonymous nodes. There are traces with inferrable topologies G 1 , G 2 reaching these bounds. Observe that inferrable topologies can also differ in the number of connected components. This implies that the shortest distance between two named nodes can differ arbitrarily between two representatives in G T . Lemma 3.12. Let COMP(G) denote the number of connected components of a topology G. Then, |COMP(G 1 ) − COMP(G 2 )| ≤ ⌊n/2⌋. There are instances G 1 , G 2 that reach this bound. Proof. Consider the trace set T = {T i , i = 1 . . . ⌊n/2⌋} in which T i = {n 2i , * i , n 2i+1 }.
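The Bell and Stirling numbers used in this proof can be computed directly via the inclusion-exclusion formula for Stirling numbers of the second kind (a standalone sketch; the helper names `stirling2` and `bell` are ours):

```python
from math import comb, factorial

def stirling2(s, j):
    """Stirling number of the second kind S(s, j): the number of ways to
    partition s labelled items into j non-empty unlabelled subsets,
    via inclusion-exclusion."""
    return sum((-1) ** l * comb(j, l) * (j - l) ** s
               for l in range(j + 1)) // factorial(j)

def bell(s):
    """Bell number B_s = sum over j of S(s, j): the number of colorings of
    an edgeless star graph with s stars, and hence the number of
    inferrable topologies in the conflict-free case."""
    return sum(stirling2(s, j) for j in range(s + 1))

print([bell(s) for s in range(1, 6)])  # prints: [1, 2, 5, 15, 52]
```

Already for moderate s the Bell numbers grow super-exponentially, which is why the cardinality of G T alone says little about how useful the inference is.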
Since i ≠ j ⇒ T i ∩ T j = ∅, we have |E * | = 0. Take G 1 as the 1-coloring of G * : G 1 is a topology with one anonymous node connected to all named nodes. Take G 2 as the ⌊n/2⌋-coloring of the star graph: G 2 has ⌊n/2⌋ distinct connected components (each consisting of three nodes). Upper bound: For the sake of contradiction, suppose ∃T s.t. |COMP(G 1 ) − COMP(G 2 )| > ⌊n/2⌋. Let us assume that G 1 has the most connected components: G 1 has at least ⌊n/2⌋ + 1 more connected components than G 2 . Let C refer to a connected component of G 2 whose nodes are not connected in G 1 . This means that C contains at least one anonymous node. Thus, C contains at least two named nodes (since a trace cannot start or end with a star). There must exist at least ⌊n/2⌋ + 1 such connected components C. Thus G 2 would have to contain at least 2(⌊n/2⌋ + 1) ≥ n + 1 named nodes. Contradiction. An important criterion for topology inference regards the distortion of shortest paths. Definition 3.13 (Stretch). The maximal ratio of the distance of two non-anonymous nodes in G 0 and a connected topology G is called the stretch ρ: ρ = max u,v∈ID(G 0 ) max{d G 0 (u, v)/d G (u, v), d G (u, v)/d G 0 (u, v)}. From Lemma 3.12 we already know that inferrable topologies can differ in the number of connected components, and hence, the distance and the stretch between nodes can be arbitrarily wrong. Hence, in the following, we will focus on connected graphs only. However, even if two nodes are connected, their distance can be much longer or shorter than in G 0 . Figure 2 gives an example. Both topologies are inferrable from the traces T 1 = (v, * , v 1 , . . . , v k , u) and T 2 = (w, * , w 1 , . . . , w k , u). One inferrable topology is the canonic graph G C (Figure 2 left), whereas the other topology merges the two anonymous nodes (Figure 2 right). The distances between v and w are 2(k + 2) and 2, respectively, implying a stretch of k + 2.
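The stretch example of Figure 2 can be replayed numerically: build the canonic graph and the star-merging graph from the two traces and compare the v-w distances (a sketch with k = 3; the helper names `build` and `dist` are ours):

```python
from collections import deque

def build(traces):
    """Adjacency map from consecutive trace entries."""
    adj = {}
    for t in traces:
        for a, b in zip(t, t[1:]):
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    return adj

def dist(adj, src, dst):
    """BFS hop distance between src and dst (None if disconnected)."""
    seen, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return seen[u]
        for v in adj[u]:
            if v not in seen:
                seen[v] = seen[u] + 1
                q.append(v)
    return None

k = 3
T1 = ["v", "*1"] + [f"v{i}" for i in range(1, k + 1)] + ["u"]
T2 = ["w", "*2"] + [f"w{i}" for i in range(1, k + 1)] + ["u"]
canonic = build([T1, T2])                     # keep *1 and *2 distinct
merged = build([[("*" if x.startswith("*") else x) for x in t]
                for t in [T1, T2]])           # merge the two stars
print(dist(canonic, "v", "w"), dist(merged, "v", "w"))  # prints: 10 2
```

With k = 3 the two inferrable distances are 2(k + 2) = 10 and 2, i.e., a stretch of k + 2 = 5, exactly as derived above.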
Figure 2: Due to the lack of a trace between v and w, the stretch of an inferred topology can be large.

Lemma 3.14. Let u and v be two arbitrary named nodes in the connected topologies G_1 and G_2. Then, even for only two stars in the trace set, it holds for the stretch that ρ ≤ (N − 1)/2. There are instances G_1, G_2 that reach this bound.

We now turn our attention to the diameter and the degree.

Lemma 3.15. For the diameter, we have DIAM(G_1) − DIAM(G_2) ≤ (s − 1)/s · (N − 1) and DIAM(G_1)/DIAM(G_2) ≤ s. There are instances G_1, G_2 that reach these bounds.

Proof. Upper bound: As G_C does not merge any stars, it describes the network with the largest diameter. Let π be a longest shortest path between two nodes u and v in G_C. In the extreme case, π is the only path determining the network diameter and π contains all star nodes. Then, the graph where all s stars are merged into one anonymous node has a diameter of at least DIAM(G_C)/s. Example meeting this bound: Consider the trace set T = {(u_1, . . . , *_1, . . . , u_2), . . . , (u_s, . . . , *_s, . . . , u_{s+1})} with x named nodes per trace and the star in the middle between u_i and u_{i+1} (assume x to be even; x does not include u_i and u_{i+1}). It holds that DIAM(G_C) = s · (x + 2), whereas in a graph G where all stars are merged, DIAM(G) = x + 2. There are n = s(x + 1) + 1 non-anonymous nodes, so x = (n − s − 1)/s. Figure 3 depicts an example.

Lemma 3.16. For the maximal node degree DEG, we have DEG(G_1) − DEG(G_2) ≤ 2(s − γ(G*)) and DEG(G_1)/DEG(G_2) ≤ s − γ(G*) + 1. There are instances G_1, G_2 that reach these bounds.

Another important topology measure, indicating how well meshed the network is, is the number of triangles.

Lemma 3.17. Let C_3(G) be the number of cycles of length 3 of the graph G. It holds that C_3(G_1) − C_3(G_2) ≤ 2s(s − 1), which can be reached. The relative error C_3(G_1)/C_3(G_2) can be arbitrarily large unless the number of links between non-anonymous nodes exceeds n²/4, in which case the ratio is upper bounded by 2s(s − 1) + 1.

Proof. Upper bound: Each node which is part of a triangle has at least two incident edges. Thus, a node v can be part of at most DEG(v)(DEG(v) − 1)/2 triangles, where DEG(v) denotes v's degree.
As a consequence, the number of triangles containing an anonymous node in an inferrable topology with anonymous nodes u_1, . . . , u_a is at most Σ_{j=1}^{a} DEG(u_j)(DEG(u_j) − 1)/2. Given s, this sum is maximized if a = 1 and DEG(u_1) = 2s, as 2s is the maximum degree possible due to Lemma 3.16. Thus there can be at most s · (2s − 1) triangles containing an anonymous node in G_1. The number of triangles with at least one anonymous node is minimized in G_C because in the canonic graph the degrees of the anonymous nodes are minimized, i.e., they are always exactly two. As a consequence there cannot be more than s such triangles in G_C. If the number of such triangles in G_C is smaller by x, then the number of triangles with at least one anonymous node in the topology G_1 is upper bounded by s · (2s − 1) − x. The difference between the triangles in G_1 and G_2 is thus at most s(2s − 1) − x − s + x = 2s(s − 1). Example meeting this bound: If the non-anonymous nodes form a complete graph and all star nodes can be merged into one node in G_1 and G_2 = G_C, then the difference in the number of triangles matches the upper bound. Consequently, it holds for the ratio of triangles with anonymous nodes that it does not exceed (s(2s − 1) − x)/(s − x). Thus the ratio can be unbounded, as x can reach s. However, if the number of links between the n non-anonymous nodes exceeds n²/4, then there is at least one triangle, as the densest triangle-free graph (a complete bipartite graph) contains at most n²/4 links.

Full Exploration

So far, we assumed that the trace set T contains each node and link of G_0 at least once. At first sight, this seems to be the best we can hope for. However, sometimes traces exploring the vicinity of anonymous nodes in different ways yield additional information that helps to characterize G_T better. This section introduces the concept of fully explored networks: T contains sufficiently many traces such that the distances between non-anonymous nodes can be estimated accurately.
In some sense, a trace set for a fully explored network is the best we can hope for. Properties that cannot be inferred well under the fully explored topology model are infeasible to infer without additional assumptions on G_0. In this sense, this section provides upper bounds on what can be learned from topology inference. In the following, we constrain ourselves to routing along shortest paths only (α = 1). Let us again study the properties of the family of inferrable topologies fully explored by a trace set. Obviously, all the upper bounds from Section 3 are still valid for fully explored topologies. In the following, let G_1, G_2 ∈ G_T be arbitrary representatives of G_T for a fully explored trace set T. A direct consequence of Definition 4.1 concerns the number of connected components and the stretch. (Recall that the stretch is defined with respect to named nodes only, and since α = 1, a 1-consistent inferrable topology cannot include a shorter path between u and v than the one that must appear in a trace of T.)

Lemma 4.2. It holds that COMP(G_1) = COMP(G_2) (= COMP(G_0)) and the stretch is 1.

The proofs for the claims of the following lemmata are analogous to our former proofs, as the main difference is the fact that there might be more conflicts, i.e., edges in G*.

Lemma 4.3. For fully explored networks it holds that |V_1| − |V_2| ≤ s − γ(G*) ≤ s − 1 and |V_1|/|V_2| ≤ (n + s)/(n + γ(G*)) ≤ (2 + s)/3. Moreover, |E_1| − |E_2| ≤ 2(s − γ(G*)) and |E_1|/|E_2| ≤ (ν + 2s)/(ν + 2) ≤ s, where ν denotes the number of links between non-anonymous nodes. There are traces with inferrable topologies G_1, G_2 reaching these bounds.

Lemma 4.4. For the maximal node degree, we have DEG(G_1) − DEG(G_2) ≤ 2(s − γ(G*)) and DEG(G_1)/DEG(G_2) ≤ s − γ(G*) + 1. There are instances G_1, G_2 that reach these bounds.

From Lemma 4.2 we know that fully explored scenarios yield a perfect stretch of one.
However, regarding the diameter, the situation is different in the sense that distances between anonymous nodes play a role.

Lemma 4.5. For fully explored networks, DIAM(G_1)/DIAM(G_2) ≤ 2, and there are instances where DIAM(G_1) − DIAM(G_2) = s/2.

The number of triangles with anonymous nodes can still not be estimated accurately in the fully explored scenario.

Lemma 4.6. There exist fully explored trace sets where C_3(G_1) − C_3(G_2) = s(s − 1)/2, and the relative error C_3(G_1)/C_3(G_2) can be arbitrarily large.

Conclusion

We understand our work as a first step to shed light onto the similarity of inferrable topologies based on most basic axioms and without any assumptions on power-law properties, i.e., in the worst case. Using our formal framework we show that the topologies for a given trace set may differ significantly. Thus, it is impossible to accurately characterize topological properties of complex networks. To complement the general analysis, we propose the notion of fully explored networks or trace sets as a "best possible scenario". As expected, we find that fully explored trace sets allow us to determine several properties of the network more accurately; however, it also turns out that even in this scenario, other topological properties are inherently hard to compute. Our results are summarized in Figure 4. Our work opens several directions for future research. On the theoretical side, one may study whether the minimal inferrable topologies considered in, e.g., [1,2], are more similar in nature. More importantly, while this paper presented results for the general worst case, it would be interesting to devise algorithms that compute, for a given trace set, worst-case bounds for the properties under consideration. For example, such approximate bounds would be helpful to decide whether additional measurements are needed. Moreover, such algorithms may even give advice on the locations at which such measurements would be most useful.
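The diameter gap discussed in Section 3 can be reproduced numerically. The sketch below (parameter and helper names are ours) builds the chain of s traces with x named inner nodes each and compares the canonic graph against the topology where all stars are merged, using BFS to compute the diameter:

```python
from collections import deque

def diameter(adj):
    """Largest BFS eccentricity over all nodes (graph assumed connected)."""
    def ecc(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return max(dist.values())
    return max(ecc(v) for v in adj)

def build_chain(s, x, merge_stars):
    """s traces (u_i, ..., *_i, ..., u_{i+1}), each with x named inner
    nodes and the star in the middle; optionally merge all stars."""
    adj = {}
    def link(u, v):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    for i in range(s):
        star = "*" if merge_stars else f"*{i}"
        seq = ([f"u{i}"] + [f"a{i}_{k}" for k in range(x // 2)] + [star]
               + [f"b{i}_{k}" for k in range(x // 2)] + [f"u{i+1}"])
        for p, q in zip(seq, seq[1:]):
            link(p, q)
    return adj
```

For s = 3 and x = 4, the canonic graph has diameter s(x + 2) = 18, while the merged topology has diameter x + 2 = 6, matching the analysis.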
Figure 4: Summary of our bounds on the properties of inferrable topologies. s denotes the number of stars in the traces, n is the number of named nodes, N = n + s, and ν denotes the number of links between named nodes.

Property / Scenario | Arbitrary: G_1 − G_2 | Arbitrary: G_1/G_2 | Fully Explored (α = 1): G_1 − G_2 | Fully Explored (α = 1): G_1/G_2
# of nodes | ≤ s − γ(G*) | ≤ (n + s)/(n + γ(G*)) | ≤ s − γ(G*) | ≤ (n + s)/(n + γ(G*))
# of links | ≤ 2(s − γ(G*)) | ≤ (ν + 2s)/(ν + 2) | ≤ 2(s − γ(G*)) | ≤ (ν + 2s)/(ν + 2)
# of connected components | ≤ n/2 | ≤ n/2 | = 0 | = 1
Stretch | − | ≤ (N − 1)/2 | − | = 1
Diameter | ≤ (s − 1)/s · (N − 1) | ≤ s | s/2 (¶) | 2
Max. Deg. | ≤ 2(s − γ(G*)) | ≤ s − γ(G*) + 1 | ≤ 2(s − γ(G*)) | ≤ s − γ(G*) + 1
Triangles | ≤ 2s(s − 1) | ∞ | s(s − 1)/2 (¶) | ∞

Note that trace sets meeting these bounds exist for all properties for which we have tight or upper bounds. For the two entries marked with (¶), only "lower bounds" are derived, i.e., examples that yield at least the corresponding accuracy; as the upper bounds from the arbitrary scenario do not match, how to close the gap remains an open question.

[13] Ingmar Poese, Benjamin Frank, Bernhard Ager, Georgios Smaragdakis, and Anja Feldmann. Improving content delivery using provider-aided distance information. In Proc. ACM IMC, 2010.

A.1 Proof that the canonic graph G_C fulfills the axioms

Fix T. We have to prove that G_C fulfills AXIOM 0, AXIOM 1 (which implies AXIOM 3) and AXIOM 2. AXIOM 0: The axiom holds trivially: only edges from the traces are used in G_C. AXIOM 1: Let T ∈ T and σ_1, σ_2 ∈ T. Let k = d_T(σ_1, σ_2). We show that G_C fulfills AXIOM 1, namely, there exists a path of length k in G_C. Induction on k. (k = 1): By the definition of G_C, {σ_1, σ_2} ∈ E_C, thus there exists a path of length one between σ_1 and σ_2. (k > 1): Suppose AXIOM 1 holds up to k − 1. Let σ'_1, . . . , σ'_{k−1} be the intermediary nodes between σ_1 and σ_2 in T: T = (. . . , σ_1, σ'_1, . . . , σ'_{k−1}, σ_2, . . .). By the induction hypothesis, in G_C there is a path of length k − 1 between σ_1 and σ'_{k−1}. Let π be this path.
By the definition of G_C, {σ'_{k−1}, σ_2} ∈ E_C. Thus appending (σ'_{k−1}, σ_2) to π yields the desired path of length k linking σ_1 and σ_2: AXIOM 1 thus holds up to k. AXIOM 2: We have to show that d_T(σ_1, σ_2) = k ⇒ d_C(σ_1, σ_2) ≥ ⌈α · k⌉. By contradiction, suppose that G_C does not fulfill AXIOM 2 with respect to α. Then there exist k' < ⌈α · k⌉ and σ_1, σ_2 ∈ V_C such that d_C(σ_1, σ_2) = k'. Let π be a shortest path between σ_1 and σ_2 in G_C. Let (T_1, . . . , T_ℓ) be the corresponding (maybe repeating) traces covering this path π in G_C. Let T_i ∈ (T_1, . . . , T_ℓ), and let s_i and e_i be the corresponding start and end nodes of π in T_i. We will show that this path π implies the existence of a path in G_0 which violates α-consistency. Since G_0 is inferrable, G_0 fulfills AXIOM 2, thus we have: d_C(σ_1, σ_2) = Σ_{i=1}^{ℓ} d_{T_i}(s_i, e_i) = k' < ⌈α · k⌉ ≤ d_{G_0}(σ_1, σ_2), since G_0 is α-consistent. However, G_0 also fulfills AXIOM 1, thus d_{T_i}(s_i, e_i) ≥ d_{G_0}(s_i, e_i). Thus Σ_{i=1}^{ℓ} d_{G_0}(s_i, e_i) ≤ Σ_{i=1}^{ℓ} d_{T_i}(s_i, e_i) < d_{G_0}(σ_1, σ_2): we have constructed a path from σ_1 to σ_2 in G_0 whose length is shorter than the distance between σ_1 and σ_2 in G_0, leading to the desired contradiction.

A.2 Proof of Lemma 3.5

First we construct a topology G_0 = (V_0, E_0) and then describe a trace set on this graph that generates the star graph G = (V, E). The node set V_0 consists of |V| anonymous nodes and |V| · (1 + τ) named nodes, where τ = ⌈3/(2α) − 1/2⌉. The first building block of G_0 is a copy of G. To each node v_i in the copy of G we add a chain consisting of 2 + τ nodes, first appending τ non-anonymous nodes w_(i,k), where 1 ≤ k ≤ τ, followed by an anonymous node u_i and finally a named node w_(i,τ+1). More formally, we can describe the link set as E_0 = E ∪ ⋃_{i=1}^{|V|} {{v_i, w_(i,1)}, {w_(i,1), w_(i,2)}, . . . , {w_(i,τ), u_i}, {u_i, w_(i,τ+1)}}.
The trace set T consists of the following |V| + |E| shortest-path traces: the traces T_ℓ for ℓ ∈ {1, . . . , |V|} are given by T(w_(ℓ,τ), w_(ℓ,τ+1)) (for each node in V), and the traces T_ℓ for ℓ ∈ {|V| + 1, . . . , |V| + |E|} are given by T(w_(i,τ), w_(j,τ)) for each link {v_i, v_j} in E. Note that G_0 = G_C, as each star appears as a separate anonymous node. The star graph G* corresponding to this trace set contains the |V| nodes *_i (corresponding to u_i). In order to prove the claim of the lemma, we have to show that two nodes *_i, *_j are conflicting according to Lemma 3.3 if and only if there is a link {v_i, v_j} in E. Case (i) does not apply because the minimum distance between any two nodes in the canonic graph is at least one, and ⌈α · d_{T_i}(*_i, w_(i,τ))⌉ = 1 and ⌈α · d_{T_i}(*_i, w_(i,τ+1))⌉ = 1. It remains to examine Case (ii). "⇒": if MAP(*_i) = MAP(*_j), there would be a path of length two between w_(i,τ) and w_(j,τ) in the topology generated by MAP; the trace set however contains a trace T(w_(i,τ), w_(j,τ)) of length 2τ + 1. So ⌈α · d_T(w_(i,τ), w_(j,τ))⌉ = ⌈α · (2τ + 1)⌉ = ⌈α · (2⌈3/(2α) − 1/2⌉ + 1)⌉ ≥ 3, which violates the α-consistency (Lemma 3.3 (ii)), and hence {*_i, *_j} ∈ E* and {v_i, v_j} ∈ E. "⇐": if {v_i, v_j} ∉ E, there is no trace T(w_(i,τ), w_(j,τ)), thus we have to prove that no trace T(w_(i',τ), w_(j',τ)) with i' ≠ i, j' ≠ i and j' ≠ j leads to a conflict between *_i and *_j. We show that an even more general statement is true, namely that for any pair of distinct non-anonymous nodes x_1, x_2 ∈ {v_i, v_j, w_(i',k), w_(j',k) | 1 ≤ k ≤ τ + 1, i' ≠ i, j' ≠ i, j' ≠ j}, it holds that ⌈α · d_C(x_1, x_2)⌉ ≤ d_C(x_1, *_i) + d_C(x_2, *_j). Since G_C = G_0 and the traces contain shortest paths only, the trace distance between two nodes in the same trace is the same as the distance in G_C.
The following tables contain the relevant lower bounds on the distances d_C(·, ·) in G_C and on µ(x_1, x_2) := d_C(x_1, *_i) + d_C(x_2, *_j), for x_1, x_2 ∈ {v_i, v_j, w_(i',k), w_(j',k) | 1 ≤ k ≤ τ + 1, i' ≠ i, j' ≠ i, j' ≠ j}.

d_C(·, ·) ≥ | v_i | v_j | w_(i',k_1) | w_(j',k_1)
v_i | 0 | 1 | k_1 | k_1 + 1
v_j | 1 | 0 | k_1 + 1 | k_1
w_(i',k_2) | k_2 | k_2 + 1 | |k_2 − k_1| | k_1 + 1 + k_2
w_(j',k_2) | k_2 + 1 | k_2 | k_1 + 1 + k_2 | |k_2 − k_1|
*_i | τ + 2 | τ + 1 | 2 + τ + k_1 | τ − k_1 + 1
*_j | τ + 2 | τ + 2 | 2 + τ + k_1 | 2 + τ + k_1

µ(·, ·) ≥ | v_i | v_j | w_(i',k_1) | w_(j',k_1)
v_i | 2τ + 4 | 2τ + 3 | 4 + 2τ + k_1 | 4 + 2τ + k_1
v_j | 2τ + 3 | 2τ + 4 | 2τ + 3 + k_1 | 3 + 2τ + k_1
w_(i',k_2) | 4 + 2τ + k_2 | 4 + 2τ + k_2 | 4 + 2τ + k_1 + k_2 | 4 + 2τ + k_1 + k_2
w_(j',k_2) | 2τ − k_2 + 3 | 2τ − k_2 + 3 | 2τ + 3 + k_1 − k_2 | 2τ + k_1 − k_2 + 3

If x_1 = w_(j',k_2), then it holds for all x_2 that d_T(x_1, x_2) ≤ 2τ + 1, whereas µ(x_1, x_2) = d_C(x_1, *_i) + d_C(x_2, *_j) ≥ 2τ + 2. In all other cases it holds at least that d_C(x_1, x_2) < µ(x_1, x_2). Thus ⌈α · d_C(x_1, x_2)⌉ ≤ d_C(x_1, *_i) + d_C(x_2, *_j).

Figure 5: Visualization for the proof of Lemma 3.7. Solid lines denote links, dashed lines denote paths (of annotated length).

A.3 Proof of Lemma 3.7

We have to show that the paths in the traces correspond to paths in G_γ. Let T ∈ T, and σ_1, σ_2 ∈ T. Let π be the sequence of nodes in T connecting σ_1 and σ_2. This is also a path in G_γ: since α > 0, for any two consecutive symbols σ_1, σ_2 ∈ T, it holds that MAP(σ_1) ≠ MAP(σ_2). We now construct an example showing that the α for which G_γ fulfills AXIOM 2 can be arbitrarily small. Consider the graph represented in Figure 5. Let T_1 = (s, . . . , t), T_2 = (s, *_1, . . . , m_1), T_3 = (m_1, . . . , *_2, m_2), T_4 = (m_2, *_3, . . . , m_3), T_5 = (m_3, . . . , *_4, t). We assume α = 1.
By changing the parameters k = d_C(s, t) and k' = d_C(m_1, *_4), we can modulate the links of the corresponding star graph G*. Using d_{T_1}(s, t) = k, observe that k > 2 ⇔ {*_1, *_4} ∈ E*. Similarly, k > 2(k' + 1) ⇔ {*_1, *_3} ∈ E* ∧ {*_2, *_4} ∈ E*, and k > 2(k' + 2) ⇔ {*_1, *_2} ∈ E* ∧ {*_3, *_4} ∈ E*. Taking k = 2k' + 4, we thus have E* = {{*_1, *_3}, {*_2, *_4}, {*_1, *_4}}. Note that d_C(m_1, *_1) = d_C(m_1, *_2) and d_C(m_3, *_3) = d_C(m_3, *_4). Thus, we here construct a situation where *_1 and *_2 as well as *_3 and *_4 can be merged without breaking the consistency requirement, but where merging both pairs simultaneously leads to a topology G that is only 4/k-consistent, since d_G(s, t) = 4. This ratio can be made arbitrarily small provided we choose k' = (k − 4)/2.

A.4 Proof of Lemma 3.11

In the worst case, each star in the trace set represents a different node in G_1, so the maximal number of nodes in any topology in G_T is the total number of non-anonymous nodes plus the total number of stars in T. This number of nodes is reached in the topology G_C. According to Definition 3.4, only non-adjacent stars in G* can represent the same node in an inferrable topology. Thus, the stars in the trace set T must originate from at least γ(G*) different nodes. As a consequence, |V_1| − |V_2| ≤ s − γ(G*), which can reach s − 1 for a trace set T = {T_i = (v, *_i, w) | 1 ≤ i ≤ s}. Analogously, |V_1|/|V_2| ≤ (n + s)/(n + γ(G*)) ≤ (2 + s)/3. Observe that each occurrence of a node in a trace describes at most two edges. If all anonymous nodes are merged into γ(G*) nodes in G_1 and are separate nodes in G_2, the difference in the number of edges is at most 2(s − γ(G*)). Analogously, |E_1|/|E_2| ≤ (ν + 2s)/(ν + 2) ≤ s. The trace set T = {T_i = (v, *_i, w) | 1 ≤ i ≤ s} reaches this bound.

A.5 Proof of Lemma 3.14

A "lower bound" example follows from Figure 2.
Essentially, this is also the worst case: note that the difference in the shortest distance between a pair of nodes u and v in G_1 and G_2 is only greater than 0 if the shortest path between them involves at least one anonymous node. Hence the shortest distance between such a pair is at least two. The longest shortest distance between the same pair of nodes in another inferred topology visits all nodes in the network, i.e., its length is bounded by N − 1.

A.6 Proof of Lemma 3.16

Each occurrence of a node in a trace describes at most two links incident to this node. For the degree difference, we only have to consider the links incident to at least one anonymous node, as the number of links between non-anonymous nodes is the same in G_1 and G_2. If all anonymous nodes can be merged into γ(G*) nodes in G_1 and all anonymous nodes are separate in G_2, the difference in the maximum degree is thus at most 2(s − γ(G*)), as there can be at most s − γ(G*) + 1 nodes merged into one node and the minimal maximum degree of a node in G_2 is two. This bound is tight, as the trace set T = {T_i = (v_i, *_i, w_i) | 1 ≤ i ≤ s} containing s stars can be represented by a graph with one anonymous node of degree 2s or by a graph with s anonymous nodes of degree two each. For the ratio of the maximal degree we can ignore links between non-anonymous nodes as well, as these only decrease the ratio. The highest number of links incident at a node v with one endpoint in the set of anonymous nodes is s − γ(G*) + 1 for non-anonymous nodes and 2(s − γ(G*) + 1) for anonymous nodes, whereas the lowest number is two.

A.7 Proof of Lemma 4.4

The proof for the upper bound is analogous to the case without full exploration. To prove that this bound can be reached, we need to add traces to the trace set to ensure that all pairs of named nodes appear in some trace without changing the degrees of the anonymous nodes.
To this end, we add a named node u for each pair {v, w} that does not yet appear together in a trace to G_0, together with a trace T = (v, u, w). This does not increase the maximum degree and guarantees full exploration.

A.8 Proof of Lemma 4.5

We first prove the upper bound for the relative case. Note that the maximal distance between two anonymous nodes MAP(*_1) and MAP(*_2) in an inferred topology component cannot be larger than twice the distance of two named nodes u and v: from Definition 4.1 we know that there must be a trace in T connecting u and v, and the maximal distance δ of a pair of named nodes is given by the path of the trace that includes u and v. Since any trace starts and ends with a named node, any star is at a distance of at most δ/2 from a named node. Therefore, the maximal distance between MAP(*_1) and MAP(*_2) is δ/2 + δ/2 to get to the corresponding closest named nodes, plus δ for the connection between the named nodes. As, according to Lemma 4.2, the distance between named nodes is the same in all inferred topologies, the diameter of inferred topologies can vary by at most a factor of two. We now construct an example that reaches this bound. Consider a topology consisting of a center node c and four rays of length k. Let u_1, u_2, u_3, u_4 be the "end nodes" of each ray. We assume that all these nodes are named. Now add two chains of anonymous nodes of length 2k + 1 to the topology, between nodes u_1 and u_2 and between nodes u_3 and u_4. The trace set consists of the minimal trace set to obtain a fully explored topology: six traces of length 2k + 1 between each pair of end nodes u_1, u_2, u_3, u_4. Now we add two traces of length 2k + 1, between nodes u_1 and u_2 and between nodes u_3 and u_4. These traces explore the anonymous chains and have the following shape: T_7 = (u_1, *_1, . . . , *_k, σ, *_{k+1}, . . . , *_{2k}, u_2) and T_8 = (u_3, *_{2k+1}, . . . , *_{3k}, σ', *_{3k+1}, . . . , *_{4k}, u_4), where σ and σ' are stars.
Let G_1 = G_C and let G_2 be the inferrable graph in which σ and σ' are merged. The resulting diameters are DIAM(G_1) = 4k + 2 and DIAM(G_2) = 2k + 1. Since s = 4k + 2, the difference can thus be as large as s/2. Note that this construction also yields the bound of the relative difference: DIAM(G_1)/DIAM(G_2) = (4k + 2)/(2k + 1) = 2.

A.9 Proof of Lemma 4.6

Given the number of stars s, we construct a trace set T with two inferrable graphs such that in one graph the number of triangles with anonymous nodes is s(s − 1)/2, and in the other graph there are no such triangles. As a first step, we add s traces T_i = (v_i, *_i, w) to the trace set T, where 1 ≤ i ≤ s. To make this trace set fully explored, we add traces for each pair v_i, v_j to T in a second step, i.e., traces T_{i,j} = (v_i, v_j) for 1 ≤ i < j ≤ s. The resulting trace set contains s stars, and none of the stars are in conflict with each other. Thus the graph G_1 merging all stars into one anonymous node is inferrable from this trace set, and the number of triangles the anonymous node is part of is s(s − 1)/2. Let G_2 be the canonic graph of this trace set. This graph does not contain any triangles with anonymous nodes, and hence the difference C_3(G_1) − C_3(G_2) is s(s − 1)/2. To see that the ratio can be unbounded, look at the trace set {(v, *_1, w), (u, *_2, w), (u, v)}. This set is fully explored since all pairs of named nodes appear in a trace. The graph where the two stars are merged has one triangle, whereas the canonic graph has no triangle.
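The construction from the proof of Lemma 4.6 can be verified by brute force. The sketch below (helper and node names are ours) builds the merged topology G_1 and the canonic graph G_2 for s = 5 and counts triangles in each:

```python
from itertools import combinations

def triangles(edges):
    """Count 3-cycles by checking all node triples (tiny graphs only)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return sum(1 for a, b, c in combinations(sorted(adj), 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

s = 5
named = [f"v{i}" for i in range(s)]
pair_traces = list(combinations(named, 2))  # T_ij = (v_i, v_j): full exploration

# G1: all stars merged into one anonymous node "u" adjacent to w and every v_i.
g1 = pair_traces + [(v, "u") for v in named] + [("u", "w")]
# G2 = G_C: each star *_i stays a separate node of degree two.
g2 = (pair_traces + [(v, f"*{i}") for i, v in enumerate(named)]
      + [(f"*{i}", "w") for i in range(s)])
```

The star node's neighbors in G_1 are the s mutually adjacent nodes v_i plus w, so G_1 gains exactly s(s − 1)/2 triangles over G_2, matching the lemma.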
In contrast to this line of research on cardinalities, we are interested in the similarity of the inferrable topologies. If the inferred topologies share the most important characteristics, the negative results in @cite_3 @cite_7 may be of little concern. Moreover, we believe that a study limited to minimal topologies only may miss important redundancy aspects of the Internet. Unlike @cite_3 @cite_7, our work is constructive in the sense that algorithms can be derived to compute inferred topologies.
Misleading Stars: What Cannot Be Measured in the Internet?
Surprisingly little is known about the structure of many important complex networks such as the Internet. One reason is the inherent difficulty of performing accurate, large-scale and preferably synchronous measurements from a large number of different vantage points. Other reasons are privacy and information hiding issues: for example, network providers may seek to hide the details of their infrastructure to avoid tailored attacks. Since knowledge of the network characteristics is crucial for many applications (e.g., RMTP [12] or PaDIS [13]), the research community implements measurement tools to analyze at least the main properties of the network. The results can then, e.g., be used to design more efficient network protocols in the future. This paper focuses on the most basic characteristic of the network: its topology. The classic tool to study topological properties is traceroute. Traceroute allows us to collect traces from a given source node to a set of specified destination nodes. A trace between two nodes contains a sequence of identifiers describing the route traveled by the packet. However, not every node along such a path is configured to answer with its identifier. Rather, some nodes may be anonymous in the sense that they appear as stars ('*') in a trace. Anonymous nodes exacerbate the exploration of a topology because already a small number of anonymous nodes may increase the spectrum of inferrable topologies that correspond to a trace set T. This paper is motivated by the observation that the mere number of inferrable topologies alone does not contradict the usefulness or feasibility of topology inference; if the set of inferrable topologies is homogeneous in the sense that the different topologies share many important properties, the generation of all possible graphs can be avoided: an arbitrary representative may characterize the underlying network accurately.
Therefore, we identify important topological metrics such as diameter or maximal node degree and examine how "close" the possible inferred topologies are with respect to these metrics. Our Contribution This paper initiates the study and characterization of topologies that can be inferred from a given trace set computed with the traceroute tool. While existing literature assuming a worst-case perspective has mainly focused on the cardinality of minimal topologies, we go one step further and examine specific topological graph properties. We introduce a formal theory of topology inference by proposing basic axioms (i.e., assumptions on the trace set) that are used to guide the inference process. We present a novel and we believe appealing definition for the isomorphism of inferred topologies which is aware of traffic paths; it is motivated by the observation that although two topologies look equivalent up to a renaming of anonymous nodes, the same trace set may result in different paths. Moreover, we initiate the study of two extremes: in the first scenario, we only require that each link appears at least once in the trace set; interestingly, however, it turns out that this is often not sufficient, and we propose a "best case" scenario where the trace set is, in some sense, complete: it contains paths between all pairs of nodes. The main result of the paper is a negative one. It is shown that already a small number of anonymous nodes in the network renders topology inference difficult. In particular, we prove that in general, the possible inferrable topologies differ in many crucial aspects. We introduce the concept of the star graph of a trace set that is useful for the characterization of inferred topologies. In particular, colorings of the star graphs allow us to constructively derive inferred topologies. 
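The constructive role of star-graph colorings can be illustrated with a standard greedy coloring (a textbook heuristic, not an algorithm from this paper): the number of colors used bounds the chromatic number γ(G*) from above, and each color class is a group of pairwise non-conflicting stars that may be merged into one anonymous node. Names below are hypothetical.

```python
def greedy_coloring(star_adj):
    """Assign each star the smallest color not used by its neighbors
    in the star graph. Each color class is a set of stars that may be
    merged into a single anonymous node (no two of them conflict)."""
    color = {}
    for star in sorted(star_adj):  # fixed order, for reproducibility
        used = {color[v] for v in star_adj[star] if v in color}
        c = 0
        while c in used:
            c += 1
        color[star] = c
    return color

# Hypothetical star graph: *1 conflicts with *2 and *3, which are compatible.
star_graph = {"*1": {"*2", "*3"}, "*2": {"*1"}, "*3": {"*1"}}
coloring = greedy_coloring(star_graph)
```

Here *1 receives its own color while *2 and *3 share one, so the traces can be explained with two anonymous nodes.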
(Although the general problem of computing the set of inferrable topologies is related to NP-hard problems such as minimal graph coloring and graph isomorphism, some important instances of inferrable topologies can be computed efficiently.) The minimal coloring (i.e., the chromatic number) of the star graph defines a lower bound on the number of anonymous nodes from which the stars in the traces could originate. And the number of possible colorings of the star graph, a function of the chromatic polynomial of the star graph, gives an upper bound on the number of inferrable topologies. We show that this bound is tight in the sense that there are situations where there indeed exist so many inferrable topologies. In particular, there are problem instances where the cardinality of the set of inferrable topologies equals the Bell number. This insight complements (and generalizes to arbitrary, not only minimal, inferrable topologies) existing cardinality results. Finally, we examine the scenario of fully explored networks for which "complete" trace sets are available. As expected, inferrable topologies are then more homogeneous and can be characterized well with respect to many properties such as node distances. However, we also find that other properties are inherently difficult to estimate. Interestingly, our results indicate that full exploration is often useful for global properties (such as connectivity) while it does not help much for more local properties (such as node degree).

Organization

The remainder of this paper is organized as follows. Our theory of topology inference is introduced in Section 2. The main contribution is presented in Sections 3 and 4, where we derive bounds for general trace sets and fully explored networks, respectively. In Section 5, the paper concludes with a discussion of our results and directions for future research. Due to space constraints, some proofs are moved to the appendix.
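The Bell-number bound mentioned above is easy to evaluate directly. The sketch below (function names are ours) computes the Stirling numbers of the second kind via inclusion-exclusion; summing over the number of groups yields the Bell number, i.e., the number of inferrable topologies when the star graph has no conflict edges:

```python
from math import comb, factorial

def stirling2(s, j):
    """S(s, j): ways to partition s stars into j non-empty groups,
    computed by the standard inclusion-exclusion formula."""
    return sum((-1) ** l * comb(j, l) * (j - l) ** s
               for l in range(j + 1)) // factorial(j)

def max_inferrable_topologies(s):
    """Upper bound on |G_T| for s stars when the star graph has no
    edges: one topology per partition of the stars (the Bell number)."""
    return sum(stirling2(s, j) for j in range(1, s + 1))
```

For example, four mutually compatible stars already admit 15 distinct groupings into anonymous nodes.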
2 Model

Let T denote the set of traces obtained from probing (e.g., by traceroute) a (not necessarily connected) undirected network G_0 = (V_0, E_0) with nodes or vertices V_0 (the set of routers) and links or edges E_0. We assume that G_0 is static during the probing time (or that probing is instantaneous). Each trace T(u, v) ∈ T describes a path connecting two nodes u, v ∈ V_0; when u and v do not matter or are clear from the context, we simply write T. Moreover, let d_T(u, v) denote the distance (number of hops) between two nodes u and v in trace T. We define d_{G_0}(u, v) to be the corresponding shortest path distance in G_0. Note that a trace between two nodes u and v need not describe a shortest path between u and v in G_0. The nodes in V_0 fall into two categories: anonymous nodes and non-anonymous (or shorter: named) nodes. Accordingly, each trace T ∈ T describes a sequence of symbols representing anonymous and non-anonymous nodes. We make the natural assumption that the first and the last node in each trace T is non-anonymous. Moreover, we assume that traces are given in a form where non-anonymous nodes appear with a unique, anti-aliased identifier (i.e., the multiple IP addresses corresponding to different interfaces of a node are resolved to one identifier); an anonymous node is represented as * ("star") in the traces. For our formal analysis, we assign to each star in a trace set T a unique identifier i: *_i. (Note that, except for the numbering of the stars, we allow identical traces in T, and we do not make any assumptions on the implications of identical traces: they may or may not describe the same paths.) Thus, a trace T ∈ T is a sequence of symbols taken from an alphabet Σ = ID ∪ (⋃_i {*_i}), where ID is the set of non-anonymous node identifiers (IDs): Σ is the union of the (anti-aliased) non-anonymous nodes and the set of all stars (with their unique identifiers) appearing in the trace set.
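To make the model concrete, the preprocessing of raw traces into the alphabet Σ can be sketched as follows; this is a minimal Python illustration, where the function name and the string representation of stars are our own choices:

```python
from itertools import count

def label_stars(raw_traces):
    """Assign a unique identifier *i to every star in a trace set.

    `raw_traces` is a list of symbol lists, where anonymous hops appear
    as "*" and named nodes carry their (anti-aliased) identifier.
    Returns the relabeled traces, the sorted set ID, and the star list.
    """
    ctr = count(1)
    traces, ids, stars = [], set(), []
    for raw in raw_traces:
        # the model assumes traces start and end at named nodes
        assert raw[0] != "*" and raw[-1] != "*"
        trace = []
        for sym in raw:
            if sym == "*":
                sym = f"*{next(ctr)}"   # unique identifier *_i
                stars.append(sym)
            else:
                ids.add(sym)
            trace.append(sym)
        traces.append(trace)
    return traces, sorted(ids), stars
```

With this representation, n = |ID| and s is simply the length of the returned star list.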
The main challenge in topology inference is to determine which stars in the traces may originate from which anonymous nodes. Henceforth, let n = |ID| denote the number of non-anonymous nodes and let s = |⋃_i {*_i}| be the number of stars in T; similarly, let a denote the number of anonymous nodes in a topology. Let N = n + s = |Σ| be the total number of symbols occurring in T. Clearly, the process of topology inference depends on the assumptions on the measurements. In the following, we postulate the fundamental axioms that guide the reconstruction. First, we make the assumption that each link of G_0 is visited by the measurement process, i.e., it appears as a transition in the trace set T. In other words, we are only interested in inferring the (sub-)graph for which measurement data is available. AXIOM 0 (Complete Cover): Each edge of G_0 appears at least once in some trace in T. The next fundamental axiom assumes that traces always represent paths on G_0. AXIOM 1 (Reality Sampling): For every trace T ∈ T, if the distance between two symbols σ_1, σ_2 ∈ T is d_T(σ_1, σ_2) = k, then there exists a path (i.e., a walk without cycles) of length k connecting the two (named or anonymous) nodes corresponding to σ_1 and σ_2 in G_0. The following axiom captures the consistency of the routing protocol on which the traceroute probing relies. In the current Internet, policy routing is known to have an impact both on the route length [14] and on the convergence time [11]. AXIOM 2 (α-(Routing) Consistency): There exists an α ∈ (0, 1] such that, for every trace T ∈ T, if d_T(σ_1, σ_2) = k for two entries σ_1, σ_2 in trace T, then the shortest path connecting the two (named or anonymous) nodes corresponding to σ_1 and σ_2 in G_0 has distance at least ⌈α · k⌉. Note that if α = 1, the routing is a shortest path routing. Moreover, note that for α = 0, there could be loops in the paths, and there would be hardly any topological constraints, rendering almost any topology inferrable.
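AXIOM 2 can be checked mechanically for a candidate topology: for every pair of trace entries at trace distance k, the shortest-path distance between their images under the symbol-to-node mapping must be at least ⌈α · k⌉. The following sketch uses an adjacency-dict graph representation and function names of our own:

```python
import math
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in an undirected graph (adjacency dict)."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def alpha_consistent(adj, trace, mapping, alpha):
    """AXIOM 2 check: d_G(MAP(s1), MAP(s2)) >= ceil(alpha * d_T(s1, s2))
    for every pair of entries s1, s2 of the trace."""
    for i, s1 in enumerate(trace):
        dist = bfs_dist(adj, mapping[s1])
        for j in range(i + 1, len(trace)):
            d = dist.get(mapping[trace[j]])
            if d is None or d < math.ceil(alpha * (j - i)):
                return False
    return True
```

For α = 1 this rejects any topology containing a shortcut between two trace entries, matching the shortest-path-routing interpretation.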
(For example, the complete graph with one anonymous router is always a solution.) A natural axiom to merge traces is the following. AXIOM 3 (Trace Merging): For two traces T_1, T_2 ∈ T for which ∃σ_1, σ_2, σ_3, where σ_2 refers to a named node, such that d_{T_1}(σ_1, σ_2) = i and d_{T_2}(σ_2, σ_3) = j, it holds that the distance between the two nodes corresponding to σ_1 and σ_3 in G_0 is at most i + j: d_{G_0}(σ_1, σ_3) ≤ i + j. Any topology G which is consistent with these axioms (when applied to T) is called inferrable from T. Definition 2.1 (Inferrable Topologies). A topology G is (α-consistently) inferrable from a trace set T if AXIOM 0, AXIOM 1, AXIOM 2 (with parameter α), and AXIOM 3 are fulfilled. We denote by G_T the set of topologies inferrable from T. Please note the following important observation. Remark 2.2. While we generally have that G_0 ∈ G_T, since T was generated from G_0 and AXIOM 0, AXIOM 1, AXIOM 2 and AXIOM 3 are fulfilled by definition, there can be situations where an α-consistent trace set for G_0 contradicts AXIOM 0: some edges may not appear in T. If this is the case, we will focus on the inferrable topologies containing the links we know, even if G_0 may have additional, hidden links that cannot be explored due to the high α value. The main objective of a topology inference algorithm ALG is to compute topologies which are consistent with these axioms. Concretely, ALG's input is the trace set T together with the parameter α specifying the assumed routing consistency. Essentially, the goal of any topology inference algorithm ALG is to compute a mapping of the symbols Σ (appearing in T) to nodes in an inferred topology G, or, in case the input parameters α and T are contradictory, to reject the input. This mapping of symbols to nodes implicitly describes the edge set of G as well: the edge set is unique, as all the transitions of the traces in T are then unambiguously tied to pairs of nodes.
So far, we have ignored an important and non-trivial question: When are two topologies G_1, G_2 ∈ G_T different (and hence appear as two independent topologies in G_T)? In this paper, we pursue the following approach: We are not interested in purely topological isomorphisms, but we care about the identifiers of the non-anonymous nodes, i.e., we are interested in the locations of the non-anonymous nodes and their distances to other nodes. For anonymous nodes, the situation is slightly more complicated: one might think that as the nodes are anonymous, their "names" do not matter. Consider however the example in Figure 1: the two inferrable topologies have two anonymous nodes each: in one, {*_1, *_2} and {*_3, *_4} are each merged into one node, and in the other, {*_1, *_4} and {*_2, *_3} are each merged into one node. In this paper, we regard the two topologies as different, for the following reason: Assume that there are two paths in the network, one via u, *_2, v (e.g., during day time) and one via u, *_3, v (e.g., at night); clearly, this traffic has different consequences, and hence we want to be able to distinguish between the two topologies described above. In other words, our notion of isomorphism of inferred topologies is path-aware. It is convenient to introduce the following MAP function; essentially, an inference algorithm computes such a mapping. Definition 2.3 (Mapping Function MAP). Let G = (V, E) ∈ G_T be a topology inferrable from T. A topology inference algorithm describes a surjective mapping function MAP : Σ → V. For the set of non-anonymous nodes in Σ, the mapping function is bijective; and each star is mapped to exactly one node in V, but multiple stars may be assigned to the same node. Note that for any σ ∈ Σ, MAP(σ) uniquely identifies a node v ∈ V.
More specifically, we assume that MAP assigns labels to the nodes in V: in case of a named node, the label is simply the node's identifier; in case of an anonymous node, the label is *_β, where β is the concatenation of the sorted indices of the stars which are merged into node *_β. With this definition, two topologies G_1, G_2 ∈ G_T differ if and only if they do not describe the identical (MAP-)labeled topology. We will use this MAP function also for G_0, i.e., we will write MAP(σ) to refer to a symbol σ's corresponding node in G_0. In the remainder of this paper, we will often assume that AXIOM 0 is given; therefore, in our proofs, we will not explicitly cover AXIOM 0. Moreover, note that AXIOM 3 is redundant (Lemma 2.4), so it is sufficient to show that AXIOM 1 holds to prove that AXIOM 3 is satisfied. Lemma 2.4. AXIOM 1 implies AXIOM 3. Proof. Let T be a trace set, and G ∈ G_T. Let σ_1, σ_2, σ_3 be such that ∃T_1, T_2 ∈ T with σ_1 ∈ T_1, σ_3 ∈ T_2 and σ_2 ∈ T_1 ∩ T_2. Let i = d_{T_1}(σ_1, σ_2) and j = d_{T_2}(σ_2, σ_3). Since any inferrable topology G fulfills AXIOM 1, there is a path π_1 of length i between the nodes corresponding to σ_1 and σ_2 in G and a path π_2 of length j between the nodes corresponding to σ_2 and σ_3 in G. The concatenated path has length at most i + j, hence the distance between the nodes corresponding to σ_1 and σ_3 is at most i + j, and the claim follows.

3 Inferrable Topologies

What insights can be obtained from topology inference with minimal assumptions, i.e., with our axioms? And what is the structure of the inferrable topology set G_T? We first make some general observations and then examine different graph metrics in more detail.

Basic Observations. Although the generation of the entire topology set G_T may be computationally hard, some instances of G_T can be computed efficiently. The simplest possible inferrable topology is the so-called canonic graph G_C: the topology which assumes that all stars in the traces refer to different anonymous nodes.
In other words, if a trace set T contains n = |ID| named nodes and s stars, G_C will contain |V(G_C)| = N = n + s nodes. Definition 3.1 (Canonic Graph G_C). The canonic graph is defined by G_C = (V_C, E_C), where V_C = Σ is the set of (anti-aliased) symbols appearing in T (each star being considered a unique anonymous node) and where {σ_1, σ_2} ∈ E_C ⇔ ∃T ∈ T, T = (. . . , σ_1, σ_2, . . .), i.e., σ_2 directly follows σ_1 in some trace T (σ_1, σ_2 ∈ T can be either non-anonymous nodes or stars). Let d_C(σ_1, σ_2) denote the canonic distance between two nodes, i.e., the length of a shortest path in G_C between the nodes σ_1 and σ_2. Note that G_C is indeed an inferrable topology; in this case, MAP : Σ → Σ is the identity function. The proof appears in the appendix. Theorem 3.2. G_C is inferrable from T. G_C can be computed efficiently from T: represent each non-anonymous node and each star as a separate node, and for any pair of consecutive entries (i.e., nodes) in a trace, add the corresponding link. The time complexity of this construction is linear in the size of T. With the definition of the canonic graph, we can derive the following lemma, which establishes a necessary condition for when two stars cannot represent the same node in G_0, derived from constraints on the routing paths. This is useful for the characterization of inferred topologies. Lemma 3.3. Let *_1, *_2 be two stars occurring in some traces in T. Then *_1, *_2 cannot be mapped to the same node, i.e., MAP(*_1) ≠ MAP(*_2), without violating the axioms in the following conflict situations: (i) if *_1 ∈ T_1 and *_2 ∈ T_2, and T_1 describes a too long path between the anonymous node MAP(*_1) and a non-anonymous node u, i.e., α · d_{T_1}(*_1, u) > d_C(u, *_2); (ii) if *_1 ∈ T_1 and *_2 ∈ T_2, and there exists a trace T that contains a path between two non-anonymous nodes u and v with α · d_T(u, v) > d_C(u, *_1) + d_C(v, *_2). Proof. The first proof is by contradiction.
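The linear-time construction of G_C just described can be rendered in a few lines; the following is our own minimal Python sketch:

```python
def canonic_graph(traces):
    """Build the canonic graph G_C: every symbol (named node or uniquely
    labeled star) is a vertex, and each pair of consecutive trace entries
    contributes an edge. Runs in time linear in the size of the trace set."""
    adj = {}
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    return adj
```

Using sets for the adjacency lists deduplicates transitions that occur in several traces, so repeated probing does not inflate the edge set.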
Assume that MAP(*_1) = MAP(*_2) represents the same node v of G_0, and that α · d_{T_1}(v, u) > d_C(u, v). Then we know from AXIOM 2 that d_C(v, u) ≥ d_{G_0}(v, u) ≥ ⌈α · d_{T_1}(u, v)⌉ > d_C(v, u), which yields the desired contradiction. The second proof is similar: assume for the sake of contradiction that MAP(*_1) = MAP(*_2) represents the same node w of G_0, and that α · d_T(u, v) > d_C(u, w) + d_C(v, w). Due to the triangle inequality, we have that d_C(u, w) + d_C(v, w) ≥ d_C(u, v) and hence α · d_T(u, v) > d_C(u, v), which contradicts the fact that G_C is inferrable (Theorem 3.2). Lemma 3.3 can be applied to show that a topology is not inferrable from a given trace set because it merges (i.e., maps to the same node) two stars in a manner that violates the axioms. Let us introduce a useful concept for our analysis: the star graph that describes the conflicts between stars. Definition 3.4 (Star Graph G*). The star graph G* = (V*, E*) of a trace set T has a vertex for each star appearing in T, and two stars are connected by an edge whenever they conflict according to Lemma 3.3 and hence must not be merged. Note that the star graph G* is unique and can be computed efficiently for a given trace set T: Conditions (i) and (ii) can be checked by computing G_C. However, note that while G* specifies some stars which cannot be merged, the construction is not sufficient: as Lemma 3.3 is based on G_C, additional links might be needed to characterize the set of inferrable and α-consistent topologies G_T exactly. In other words, a topology G obtained by merging stars that are adjacent in G* is never inferrable (G ∉ G_T); however, merging non-adjacent stars does not guarantee that the resulting topology is inferrable. What do star graphs look like? The answer is: arbitrary. The following lemma states that the set of possible star graphs is equivalent to the class of general graphs, and this claim holds for any α. The proof appears in the appendix. Lemma 3.5. For any graph G, there exists a trace set T whose star graph G* is isomorphic to G. The problem of computing inferrable topologies is related to the vertex colorings of the star graphs.
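Combining G_C with Lemma 3.3 yields an executable construction of the star graph's conflict edges. The following sketch (trace representation and names are ours) checks both conditions of the lemma using BFS distances in G_C; since the lemma only gives necessary conditions for conflicts, the result is the conservative G* discussed above:

```python
from collections import deque
from itertools import combinations

def _dists(adj, src):
    """BFS hop distances from src."""
    d, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def star_graph(traces, alpha=1.0):
    """Conflict edges of the star graph G*: two stars are joined iff one
    of the conditions of Lemma 3.3 forbids merging them."""
    inf = float("inf")
    adj = {}
    for t in traces:
        for a, b in zip(t, t[1:]):
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    d_C = {v: _dists(adj, v) for v in adj}   # all-pairs distances in G_C
    stars = sorted(v for v in adj if v.startswith("*"))
    edges = set()
    for s1, s2 in combinations(stars, 2):
        conflict = False
        for t in traces:
            pos = {sym: i for i, sym in enumerate(t)}
            named = [x for x in t if not x.startswith("*")]
            # condition (i): trace path from one star to a named node u is
            # longer than alpha allows, given the canonic distance to the
            # other star
            for a, b in ((s1, s2), (s2, s1)):
                if a in pos and any(
                        alpha * abs(pos[a] - pos[u]) > d_C[u].get(b, inf)
                        for u in named):
                    conflict = True
            # condition (ii): trace between named u, v is longer than alpha
            # allows if routing could go via the merged anonymous node
            for u, v in combinations(named, 2):
                via = min(d_C[u].get(s1, inf) + d_C[v].get(s2, inf),
                          d_C[u].get(s2, inf) + d_C[v].get(s1, inf))
                if alpha * abs(pos[u] - pos[v]) > via:
                    conflict = True
        if conflict:
            edges.add((s1, s2))
    return stars, edges
```

For instance, probing a node pair once directly via a star and once via a long detour through a second star puts the two stars in conflict, while two parallel two-hop traces do not.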
We will use the following definition, which relates a vertex coloring of G* to an inferrable topology G by contracting independent stars in G* into one anonymous node of G. For example, observe that a maximum coloring, treating every star in the traces as a separate anonymous node, describes the inferrable topology G_C. Definition 3.6 (Coloring-Induced Graph). Let γ denote a coloring of G* which assigns colors 1, . . . , k to the vertices of G*: γ : V* → {1, . . . , k}. We require that γ is a proper coloring of G*, i.e., that conflicting stars are assigned different colors: {u, v} ∈ E* ⇒ γ(u) ≠ γ(v). G_γ is defined as the topology induced by γ: G_γ describes the graph G_C where nodes of the same color are contracted, i.e., two vertices u and v represent the same node in G_γ, MAP(*_i) = MAP(*_j), if and only if γ(*_i) = γ(*_j). The following two lemmas establish an intriguing relationship between colorings of G* and inferrable topologies. Also note that Definition 3.6 implies that two different colorings of G* define two non-isomorphic inferrable topologies. We first show that while a coloring-induced topology always fulfills AXIOM 1, the routing consistency may be sacrificed. The proof appears in the appendix. Lemma 3.7. Let γ be a proper coloring of G*. The coloring-induced topology G_γ fulfills AXIOM 2 with a routing consistency of α′, for some positive α′ (which may be smaller than α). Conversely, an inferrable topology always defines a proper coloring on G*. Lemma 3.8. Let T be a trace set and G* its corresponding star graph. If a topology G is inferrable from T, then G induces a proper coloring on G*. Proof. For any α-consistent inferrable topology G there exists some mapping function MAP that assigns each symbol of T to a corresponding node in G (cf. Definition 2.3), and this mapping function gives a coloring on G* (i.e., merged stars appear as nodes of the same color in G*).
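Definition 3.6 can be turned into code directly: contract all stars of one color into a single node labeled by its sorted star indices (mirroring the MAP labeling of Section 2), and reread the trace transitions as edges. A sketch of our own:

```python
def induced_topology(traces, coloring):
    """Contract stars of the same color in G_C (Definition 3.6).
    `coloring` maps each star to a color; the merged node is labeled by
    the sorted star indices, mirroring the MAP labeling."""
    groups = {}
    for star, color in coloring.items():
        groups.setdefault(color, []).append(star)
    name = {}
    for members in groups.values():
        label = "*" + ",".join(sorted((m.lstrip("*") for m in members),
                                      key=int))
        for m in members:
            name[m] = label
    node = lambda s: name.get(s, s)   # named nodes keep their identifier
    edges = set()
    for t in traces:
        for a, b in zip(t, t[1:]):
            edges.add(frozenset((node(a), node(b))))
    return edges
```

Note how the edge count shrinks when stars are merged, which is exactly the effect quantified by Lemma 3.11 below.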
The coloring must be proper: due to Lemma 3.3, an inferrable topology can never merge adjacent nodes of G*. The colorings of G* allow us to derive an upper bound on the cardinality of G_T. Theorem 3.9. Given a trace set T sampled from a network G_0 and the set G_T of topologies inferrable from T, it holds that Σ_{k=γ(G*)}^{|V*|} P(G*, k)/k! ≥ |G_T|, where γ(G*) is the chromatic number of G* and P(G*, k) is the number of colorings of G* with k colors (known as the chromatic polynomial of G*). Proof. The proof follows directly from Lemma 3.8, which shows that each inferred topology has proper colorings, and from the fact that a coloring of G* cannot result in two different inferred topologies, as the coloring uniquely describes which stars to merge (Lemma 3.7). In order to account for isomorphic colorings, we need to divide by the number of color permutations. Note that the fact that G* can be an arbitrary graph (Lemma 3.5) implies that we cannot exploit special properties of G* to compute colorings of G* and γ(G*). Also note that the exact computation of the upper bound is hard, since both the minimal coloring and the chromatic polynomial of G* are needed, and computing the chromatic polynomial is #P-hard in general. To complement the upper bound, we note that star graphs with a small number of conflict edges can indeed result in a large number of inferred topologies. Lemma 3.10. There exist trace sets T with s stars for which |G_T| = B_s, the s-th Bell number. Proof. Consider a trace set T = {(σ_i, *_i, σ′_i) : i = 1, . . . , s} (e.g., obtained from exploring a topology G_0 where one anonymous center node is connected to 2s named nodes). The trace set does not impose any constraints on how the stars relate to each other, and hence G* does not contain any edges at all; even when stars are merged, there are no constraints on how the stars relate to each other. Therefore, the star graph for T has B_s = Σ_{j=0}^{s} S_{(s,j)} colorings, where S_{(s,j)} = (1/j!) · Σ_{ℓ=0}^{j} (−1)^ℓ · (j choose ℓ) · (j − ℓ)^s is the number of ways to group s nodes into j different, disjoint non-empty subsets (known as the Stirling number of the second kind). Each of these colorings also describes a distinct inferrable topology, as MAP assigns unique labels to anonymous nodes stemming from merging a group of stars (cf. Definition 2.3).

Properties. Even if the number of inferrable topologies is large, topology inference can still be useful if one is mainly interested in the properties of G_0 and if the ensemble G_T is homogeneous with respect to these properties; for example, if "most" of the instances in G_T are close to G_0, there may be an option to conduct an efficient sampling analysis on random representatives. Therefore, in the following, we take a closer look at how much the members of G_T can differ. Important metrics to characterize inferrable topologies are, for instance, the graph size, the diameter DIAM(·), the number of triangles C_3(·), and so on. In the following, let G_1 = (V_1, E_1), G_2 = (V_2, E_2) ∈ G_T be two arbitrary representatives of G_T. As one might expect, the graph size can be estimated quite well. Lemma 3.11. It holds that |V_1| − |V_2| ≤ s − γ(G*) ≤ s − 1 and |V_1|/|V_2| ≤ (n + s)/(n + γ(G*)) ≤ (2 + s)/3. Moreover, |E_1| − |E_2| ≤ 2(s − γ(G*)) and |E_1|/|E_2| ≤ (ν + 2s)/(ν + 2) ≤ s, where ν denotes the number of edges between non-anonymous nodes. There are traces with inferrable topologies G_1, G_2 reaching these bounds. Observe that inferrable topologies can also differ in the number of connected components. This implies that the shortest distance between two named nodes can differ arbitrarily between two representatives in G_T. Lemma 3.12. Let COMP(G) denote the number of connected components of a topology G. Then, |COMP(G_1) − COMP(G_2)| ≤ ⌊n/2⌋. There are instances G_1, G_2 that reach this bound. Proof. Consider the trace set T = {T_i : i = 1, . . . , ⌊n/2⌋} in which T_i = (n_{2i}, *_i, n_{2i+1}).
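The counting argument above can be reproduced numerically: the proof's inclusion-exclusion formula for the Stirling numbers yields B_s, and for a general star graph one can enumerate the partitions of the stars into conflict-free groups, which upper-bounds |G_T| since each inferrable topology induces such a partition (Lemma 3.8). A small sketch of our own:

```python
import math
from itertools import combinations

def stirling2(s, j):
    """S(s, j) via the inclusion-exclusion formula used in the proof."""
    return sum((-1) ** l * math.comb(j, l) * (j - l) ** s
               for l in range(j + 1)) // math.factorial(j)

def bell(s):
    """B_s = sum_j S(s, j): partitions of s stars into anonymous nodes."""
    return sum(stirling2(s, j) for j in range(s + 1))

def partitions(items):
    """Yield all partitions of `items` into non-empty blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield part + [[first]]

def count_conflict_free_partitions(stars, conflict_edges):
    """Partitions of the stars in which no block contains two conflicting
    stars -- an upper bound on |G_T| for the given star graph."""
    bad = {frozenset(e) for e in conflict_edges}
    ok = lambda block: all(frozenset(p) not in bad
                           for p in combinations(block, 2))
    return sum(1 for part in partitions(list(stars))
               if all(ok(b) for b in part))
```

With no conflict edges the enumeration recovers the Bell number of the proof, while every added conflict edge prunes the count.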
Since i ≠ j ⇒ T_i ∩ T_j = ∅, we have |E*| = 0. Take G_1 as the 1-coloring of G*: G_1 is a topology with one anonymous node connected to all named nodes. Take G_2 as the ⌊n/2⌋-coloring of the star graph: G_2 has ⌊n/2⌋ distinct connected components (each consisting of three nodes). Upper bound: For the sake of contradiction, suppose ∃T such that |COMP(G_1) − COMP(G_2)| > ⌊n/2⌋. Let us assume that G_1 has the most connected components: G_1 has at least ⌊n/2⌋ + 1 more connected components than G_2. Let C refer to a connected component of G_2 whose nodes are not all connected in G_1. This means that C contains at least one anonymous node. Thus, C contains at least two named nodes (since a trace cannot start or end with a star). There must exist at least ⌊n/2⌋ + 1 such connected components C. Thus G_2 would have to contain at least 2(⌊n/2⌋ + 1) ≥ n + 1 named nodes. Contradiction. An important criterion for topology inference regards the distortion of shortest paths. Definition 3.13 (Stretch). The maximal ratio of the distance of two non-anonymous nodes in G_0 and a connected topology G is called the stretch ρ: ρ = max_{u,v ∈ ID(G_0)} max{d_{G_0}(u, v)/d_G(u, v), d_G(u, v)/d_{G_0}(u, v)}. From Lemma 3.12 we already know that inferrable topologies can differ in the number of connected components, and hence the distance and the stretch between nodes can be arbitrarily wrong. Therefore, in the following, we will focus on connected graphs only. However, even if two nodes are connected, their distance can be much longer or shorter than in G_0. Figure 2 gives an example. Both topologies are inferrable from the traces T_1 = (v, *, v_1, . . . , v_k, u) and T_2 = (w, *, w_1, . . . , w_k, u). One inferrable topology is the canonic graph G_C (Figure 2, left), whereas the other topology merges the two anonymous nodes (Figure 2, right). The distances between v and w are 2(k + 2) and 2, respectively, implying a stretch of k + 2.
Figure 2: Due to the lack of a trace between v and w, the stretch of an inferred topology can be large.

Lemma 3.14. Let u and v be two arbitrary named nodes in the connected topologies G_1 and G_2. Then, even for only two stars in the trace set, it holds for the stretch that ρ ≤ (N − 1)/2. There are instances G_1, G_2 that reach this bound. We now turn our attention to the diameter and the degree. Lemma 3.15. For the diameter, it holds that DIAM(G_1) − DIAM(G_2) ≤ (s − 1)/s · (N − 1) and DIAM(G_1)/DIAM(G_2) ≤ s. There are instances G_1, G_2 that reach these bounds. Proof. Upper bound: As G_C does not merge any stars, it describes the network with the largest diameter. Let π be a longest path between two nodes u and v in G_C. In the extreme case, π is the only path determining the network diameter and π contains all star nodes. Then, the graph where all s stars are merged into one anonymous node has a diameter of at least DIAM(G_C)/s. Example meeting these bounds: consider the trace set T = {(u_1, . . . , *_1, . . . , u_2), (u_2, . . . , *_2, . . . , u_3), . . . , (u_s, . . . , *_s, . . . , u_{s+1})} with x named nodes between each pair u_i, u_{i+1} and the star in the middle (assume x to be even; x does not include u_i and u_{i+1}). It holds that DIAM(G_C) = s · (x + 2), whereas in a graph G where all stars are merged, DIAM(G) = x + 2. There are n = s(x + 1) + 1 non-anonymous nodes, so x = (n − s − 1)/s. Figure 3 depicts an example. Lemma 3.16. For the maximal node degree DEG, we have DEG(G_1) − DEG(G_2) ≤ 2(s − γ(G*)) and DEG(G_1)/DEG(G_2) ≤ s − γ(G*) + 1. There are instances G_1, G_2 that reach these bounds. Another important topology measure, which indicates how well meshed the network is, is the number of triangles. Lemma 3.17. Let C_3(G) be the number of cycles of length 3 of the graph G. It holds that C_3(G_1) − C_3(G_2) ≤ 2s(s − 1), which can be reached. The relative error C_3(G_1)/C_3(G_2) can be arbitrarily large unless the number of links between non-anonymous nodes exceeds n²/4, in which case the ratio is upper bounded by 2s(s − 1) + 1. Proof. Upper bound: Each node which is part of a triangle has at least two incident edges. Thus, a node v can be part of at most (DEG(v) choose 2) triangles, where DEG(v) denotes v's degree.
As a consequence, the number of triangles containing an anonymous node in an inferrable topology with a anonymous nodes u_1, . . . , u_a is at most Σ_{j=1}^{a} (DEG(u_j) choose 2). Given s, this sum is maximized if a = 1 and DEG(u_1) = 2s, as 2s is the maximum degree possible due to Lemma 3.16. Thus there can be at most (2s choose 2) = s · (2s − 1) triangles containing an anonymous node in G_1. The number of triangles with at least one anonymous node is minimized in G_C, because in the canonic graph the degrees of the anonymous nodes are minimized, i.e., they are always exactly two. As a consequence, there cannot be more than s such triangles in G_C. If the number of such triangles in G_C is smaller by x, then the number of triangles with at least one anonymous node in the topology G_1 is upper bounded by s · (2s − 1) − x. The difference between the triangles in G_1 and G_2 is thus at most s(2s − 1) − x − (s − x) = 2s(s − 1). Example meeting this bound: If the non-anonymous nodes form a complete graph and all star nodes can be merged into one node in G_1, while G_2 = G_C, then the difference in the number of triangles matches the upper bound. Consequently, it holds for the ratio of triangles with anonymous nodes that it does not exceed (s(2s − 1) − x)/(s − x). Thus the ratio can be unbounded, as x can reach s. However, if the number of links between the n non-anonymous nodes exceeds n²/4, then there is at least one triangle among the named nodes, as a triangle-free graph (the densest being a complete bipartite graph) contains at most n²/4 links.

4 Full Exploration

So far, we assumed that the trace set T contains each node and link of G_0 at least once. At first sight, this seems to be the best we can hope for. However, sometimes traces exploring the vicinity of anonymous nodes in different ways yield additional information that helps to characterize G_T better. This section introduces the concept of fully explored networks: T contains sufficiently many traces such that the distances between non-anonymous nodes can be estimated accurately.
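The effect of merging stars on the triangle count can be observed on a toy instance (a sketch of our own): with a named edge {u, v} present, keeping the two stars of the traces (u, *_1, v) and (u, *_2, v) apart yields two triangles, while merging them leaves only one.

```python
from itertools import combinations

def triangles(edges):
    """Count cycles of length 3 in an undirected graph given as an
    iterable of 2-tuples."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return sum(1 for a, b, c in combinations(sorted(adj), 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

# canonic topology: both stars kept apart, plus a named link {u, v}
g_canonic = [("u", "*1"), ("*1", "v"), ("u", "*2"), ("*2", "v"), ("u", "v")]
# induced topology: *1 and *2 merged into one anonymous node
g_merged = [("u", "*12"), ("*12", "v"), ("u", "v")]
```

This is the small-scale version of the gap quantified in Lemma 3.17: the count depends on which stars are contracted, not only on the trace set.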
In some sense, a trace set for a fully explored network is the best we can hope for: properties that cannot be inferred well under the fully explored topology model are infeasible to infer without additional assumptions on G_0. In this sense, this section provides upper bounds on what can be learned from topology inference. In the following, we constrain ourselves to routing along shortest paths only (α = 1). Definition 4.1 (Fully Explored Networks). A trace set T fully explores a network G_0 if T contains a trace between every pair of non-anonymous nodes of the same connected component of G_0. Let us again study the properties of the family of inferrable topologies fully explored by a trace set. Obviously, all the upper bounds from Section 3 are still valid for fully explored topologies. In the following, let G_1, G_2 ∈ G_T be arbitrary representatives of G_T for a fully explored trace set T. A direct consequence of Definition 4.1 concerns the number of connected components and the stretch. (Recall that the stretch is defined with respect to named nodes only, and since α = 1, a 1-consistent inferrable topology cannot include a shorter path between u and v than the one that must appear in a trace of T.) Lemma 4.2. It holds that COMP(G_1) = COMP(G_2) (= COMP(G_0)) and the stretch is 1. The proofs of the claims of the following lemmata are analogous to our former proofs, the main difference being that there might be more conflicts, i.e., edges in G*. Lemma 4.3. For fully explored networks, it holds that |V_1| − |V_2| ≤ s − γ(G*) ≤ s − 1 and |V_1|/|V_2| ≤ (n + s)/(n + γ(G*)) ≤ (2 + s)/3. Moreover, |E_1| − |E_2| ≤ 2(s − γ(G*)) and |E_1|/|E_2| ≤ (ν + 2s)/(ν + 2) ≤ s, where ν denotes the number of links between non-anonymous nodes. There are traces with inferrable topologies G_1, G_2 reaching these bounds. Lemma 4.4. For the maximal node degree, we have DEG(G_1) − DEG(G_2) ≤ 2(s − γ(G*)) and DEG(G_1)/DEG(G_2) ≤ s − γ(G*) + 1. There are instances G_1, G_2 that reach these bounds. From Lemma 4.2 we know that fully explored scenarios yield a perfect stretch of one.
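Under full exploration, Lemma 4.2 lets the number of connected components be read off the traces directly, e.g., with a union-find pass over consecutive trace entries. The sketch below uses our own trace representation; by Lemma 4.2, the count it returns on the canonic graph equals COMP(G_0) for every inferrable topology:

```python
def named_components(traces):
    """Number of connected components spanned by the traces: union-find
    over consecutive trace entries, counting components of named nodes."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for t in traces:
        for a, b in zip(t, t[1:]):
            parent[find(a)] = find(b)
    named = {s for t in traces for s in t if not s.startswith("*")}
    return len({find(v) for v in named})
```

Without the full-exploration assumption, this count is only one extreme of the range described by Lemma 3.12, since merging stars across components could connect them.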
However, regarding the diameter, the situation is different in the sense that distances between anonymous nodes play a role. The number of triangles with anonymous nodes still cannot be estimated accurately in the fully explored scenario. Lemma 4.6. There exist graphs where C_3(G_1) − C_3(G_2) = s(s − 1)/2, and the relative error C_3(G_1)/C_3(G_2) can be arbitrarily large.

5 Conclusion

We understand our work as a first step towards shedding light onto the similarity of inferrable topologies based on most basic axioms and without any assumptions on power-law properties, i.e., in the worst case. Using our formal framework, we show that the topologies inferrable from a given trace set may differ significantly. Thus, it is in general impossible to accurately characterize the topological properties of complex networks. To complement the general analysis, we propose the notion of fully explored networks or trace sets as a "best possible scenario". As expected, we find that full exploration allows us to determine several properties of the network more accurately; however, it also turns out that even in this scenario, other topological properties are inherently hard to compute. Our results are summarized in Figure 4. Our work opens several directions for future research. On the theoretical side, one may study whether the minimal inferrable topologies considered in, e.g., [1,2], are more similar in nature. More importantly, while this paper presented results for the general worst case, it would be interesting to devise algorithms that compute, for a given trace set, worst-case bounds for the properties under consideration. For example, such approximate bounds would be helpful to decide whether additional measurements are needed. Moreover, such algorithms might even give advice on the locations at which additional measurements would be most useful.
Property/Scenario            | Arbitrary: G1 − G2        | Arbitrary: G1/G2          | Fully Explored (α = 1): G1 − G2 | Fully Explored (α = 1): G1/G2
# of nodes                   | ≤ s − γ(G*)               | ≤ (n + s)/(n + γ(G*))     | ≤ s − γ(G*)                     | ≤ (n + s)/(n + γ(G*))
# of links                   | ≤ 2(s − γ(G*))            | ≤ (ν + 2s)/(ν + 2)        | ≤ 2(s − γ(G*))                  | ≤ (ν + 2s)/(ν + 2)
# of connected components    | ≤ ⌊n/2⌋                   | ≤ ⌊n/2⌋                   | = 0                             | = 1
Stretch                      | –                         | ≤ (N − 1)/2               | –                               | = 1
Diameter                     | ≤ (s − 1)/s · (N − 1)     | ≤ s                       | s/2 (¶)                         | 2 (¶)
Triangles                    | ≤ 2s(s − 1)               | ∞                         | ≤ s(s − 1)                      | ∞
Max. Deg.                    | ≤ 2(s − γ(G*))            | ≤ s − γ(G*) + 1           | ≤ 2(s − γ(G*))                  | ≤ s − γ(G*) + 1

Figure 4: Summary of our bounds on the properties of inferrable topologies. s denotes the number of stars in the traces, n is the number of named nodes, N = n + s, and ν denotes the number of links between named nodes. Note that trace sets meeting these bounds exist for all properties for which we have tight or upper bounds. For the two entries marked with (¶), only "lower bounds" are derived, i.e., examples that yield at least the corresponding accuracy; as the upper bounds from the arbitrary scenario do not match, how to close the gap remains an open question.

A.1 Proof of Theorem 3.2 Fix T. We have to prove that G_C fulfills AXIOM 0, AXIOM 1 (which implies AXIOM 3), and AXIOM 2. AXIOM 0: The axiom holds trivially: only edges from the traces are used in G_C. AXIOM 1: Let T ∈ T and σ_1, σ_2 ∈ T. Let k = d_T(σ_1, σ_2). We show that G_C fulfills AXIOM 1, namely, that there exists a path of length k in G_C. Induction on k: (k = 1:) By the definition of G_C, {σ_1, σ_2} ∈ E_C, thus there exists a path of length one between σ_1 and σ_2. (k > 1:) Suppose AXIOM 1 holds up to k − 1. Let σ′_1, . . . , σ′_{k−1} be the intermediary nodes between σ_1 and σ_2 in T: T = (. . . , σ_1, σ′_1, . . . , σ′_{k−1}, σ_2, . . .). By the induction hypothesis, in G_C there is a path of length k − 1 between σ_1 and σ′_{k−1}. Let π be this path.
By definition of G_C, {σ′_{k−1}, σ_2} ∈ E_C. Thus appending (σ′_{k−1}, σ_2) to π yields the desired path of length k linking σ_1 and σ_2: AXIOM 1 thus holds up to k. AXIOM 2: We have to show that d_T(σ_1, σ_2) = k ⇒ d_C(σ_1, σ_2) ≥ ⌈α · k⌉. By contradiction, suppose that G_C does not fulfill AXIOM 2 with respect to α. Then there exist k′ < ⌈α · k⌉ and σ_1, σ_2 ∈ V_C such that d_C(σ_1, σ_2) = k′. Let π be a shortest path between σ_1 and σ_2 in G_C, and let (T_1, . . . , T_ℓ) be the corresponding (maybe repeating) traces covering this path π in G_C. For each T_i ∈ (T_1, . . . , T_ℓ), let s_i and e_i be the corresponding start and end nodes of π in T_i. We will show that this path π implies the existence of a path in G_0 which violates α-consistency. Since G_0 is inferrable, G_0 fulfills AXIOM 2, thus we have: d_C(σ_1, σ_2) = Σ_{i=1}^{ℓ} d_{T_i}(s_i, e_i) = k′ < ⌈α · k⌉ ≤ d_{G_0}(σ_1, σ_2), since G_0 is α-consistent. However, G_0 also fulfills AXIOM 1, thus d_{T_i}(s_i, e_i) ≥ d_{G_0}(s_i, e_i). Thus Σ_{i=1}^{ℓ} d_{G_0}(s_i, e_i) ≤ Σ_{i=1}^{ℓ} d_{T_i}(s_i, e_i) < d_{G_0}(σ_1, σ_2): we have constructed a path from σ_1 to σ_2 in G_0 whose length is shorter than the distance between σ_1 and σ_2 in G_0, leading to the desired contradiction.

A.2 Proof of Lemma 3.5 First we construct a topology G_0 = (V_0, E_0) and then describe a trace set on this graph that generates the star graph G = (V, E). The node set V_0 consists of |V| anonymous nodes and |V| · (1 + τ) named nodes, where τ = ⌈3/(2α) − 1/2⌉. The first building block of G_0 is a copy of G. To each node v_i in the copy of G we attach a chain consisting of 2 + τ nodes: first τ non-anonymous nodes w_{(i,k)}, where 1 ≤ k ≤ τ, followed by an anonymous node u_i and finally a named node w_{(i,τ+1)}. More formally, we can describe the link set as E_0 = E ∪ ⋃_{i=1}^{|V|} { {v_i, w_{(i,1)}}, {w_{(i,1)}, w_{(i,2)}}, . . . , {w_{(i,τ)}, u_i}, {u_i, w_{(i,τ+1)}} }.
The trace set T consists of the following |V | + |E| shortest path traces: the traces T ℓ for ℓ ∈ {1, . . . , |V |} are given by T (w (ℓ,τ) , w (ℓ,τ+1) ) (one for each node in V ), and the traces T ℓ for ℓ ∈ {|V | + 1, . . . , |V | + |E|} are given by T (w (i,τ) , w (j,τ) ) (one for each link {v i , v j } in E). Note that G 0 = G C as each star appears as a separate anonymous node. The star graph G * corresponding to this trace set contains the |V | nodes * i (corresponding to u i ). In order to prove the claim of the lemma we have to show that two nodes * i , * j are conflicting according to Lemma 3.3 if and only if there is a link {v i , v j } in E. Case (i) does not apply because the minimum distance between any two nodes in the canonic graph is at least one, and ⌈α · d T i ( * i , w (i,τ) )⌉ = 1 and ⌈α · d T i ( * i , w (i,τ+1) )⌉ = 1. It remains to examine Case (ii): "⇒": if MAP( * i ) = MAP( * j ), there would be a path of length two between w (i,τ) and w (j,τ) in the topology generated by MAP; the trace set however contains a trace T (w (i,τ) , w (j,τ) ) of length 2τ + 1. So ⌈α · d T (w (i,τ) , w (j,τ) )⌉ = ⌈α · (2τ + 1)⌉ = ⌈α · (2⌈3/(2α) − 1/2⌉ + 1)⌉ ≥ 3, which violates the α-consistency (Lemma 3.3 (ii)); hence { * i , * j } ∈ E * whenever {v i , v j } ∈ E. "⇐": if {v i , v j } ∉ E, there is no trace T (w (i,τ) , w (j,τ) ), thus we have to prove that no trace T (w (i',τ) , w (j',τ) ) with i' ≠ i, j' ≠ j and j' ≠ i leads to a conflict between * i and * j . We show that an even more general statement is true, namely that for any pair of distinct non-anonymous nodes x 1 , x 2 it holds that ⌈α · d C (x 1 , x 2 )⌉ ≤ d C (x 1 , * i ) + d C (x 2 , * j ). Since G C = G 0 and the traces contain shortest paths only, the trace distance between two nodes in the same trace is the same as the distance in G C .
The following tables contain the relevant lower bounds on the distances d C (·, ·) in G C and on µ(x 1 , x 2 ) := d C (x 1 , * i ) + d C (x 2 , * j ), where x 1 , x 2 ∈ {v i , v j , w (i',k) , w (j',k) | 1 ≤ k ≤ τ + 1, i' ≠ i, j' ≠ i, j' ≠ j}; we show that ⌈α · d C (x 1 , x 2 )⌉ ≤ µ(x 1 , x 2 ).
d C (·, ·) ≥ | v i | v j | w (i',k 1 ) | w (j',k 1 )
v i | 0 | 1 | k 1 | k 1 + 1
v j | 1 | 0 | k 1 + 1 | k 1
w (i',k 2 ) | k 2 | k 2 + 1 | |k 2 − k 1 | | k 1 + 1 + k 2
w (j',k 2 ) | k 2 + 1 | k 2 | k 1 + 1 + k 2 | |k 2 − k 1 |
* i | τ + 2 | τ + 1 | 2 + τ + k 1 | τ − k 1 + 1
* j | τ + 2 | τ + 2 | 2 + τ + k 1 | 2 + τ + k 1
µ(·, ·) ≥ | v i | v j | w (i',k 1 ) | w (j',k 1 )
v i | 2τ + 4 | 2τ + 3 | 4 + 2τ + k 1 | 4 + 2τ + k 1
v j | 2τ + 3 | 2τ + 4 | 2τ + 3 + k 1 | 3 + 2τ + k 1
w (i',k 2 ) | 4 + 2τ + k 2 | 4 + 2τ + k 2 | 4 + 2τ + k 1 + k 2 | 4 + 2τ + k 1 + k 2
w (j',k 2 ) | 2τ − k 2 + 3 | 2τ − k 2 + 3 | 2τ + 3 + k 1 − k 2 | 2τ + k 1 − k 2 + 3
If x 1 = w (j',k 2 ) then it holds for all x 2 that d T (x 1 , x 2 ) ≤ 2τ + 1, whereas µ(x 1 , x 2 ) = d C (x 1 , * i ) + d C (x 2 , * j ) ≥ 2τ + 2. In all other cases it holds at least that d C (x 1 , x 2 ) < µ(x 1 , x 2 ). Thus ⌈α · d C (x 1 , x 2 )⌉ ≤ d C (x 1 , * i ) + d C (x 2 , * j ).
Figure 5: Visualization for proof of Lemma 3.7. Solid lines denote links, dashed lines denote paths (of annotated length).
A.3 Proof of Lemma 3.7 We have to show that the paths in the traces correspond to paths in G γ . Let T ∈ T , and σ 1 , σ 2 ∈ T . Let π be the sequence of nodes in T connecting σ 1 and σ 2 . This is also a path in G γ : since α > 0, for any two symbols σ 1 , σ 2 ∈ T it holds that MAP(σ 1 ) ≠ MAP(σ 2 ). We now construct an example showing that the α for which G γ fulfills AXIOM 2 can be arbitrarily small. Consider the graph represented in Figure 5. Let T 1 = (s, . . . , t), T 2 = (s, * 1 , . . . , m 1 ), T 3 = (m 1 , . . . , * 2 , m 2 ), T 4 = (m 2 , * 3 , . . . , m 3 ), T 5 = (m 3 , . . . , * 4 , t). We assume α = 1.
By changing the parameters k = d C (s, t) and k' = d C (m 1 , * 4 ), we can modulate the links of the corresponding star graph G * . Using d T 1 (s, t) = k, observe that k > 2 ⇔ { * 1 , * 4 } ∈ E * . Similarly, k > 2(k' + 1) ⇔ { * 1 , * 3 } ∈ E * ∧ { * 2 , * 4 } ∈ E * , and k > 2(k' + 2) ⇔ { * 1 , * 2 } ∈ E * ∧ { * 3 , * 4 } ∈ E * . Taking k = 2k' + 4, we thus have E * = {{ * 1 , * 3 }, { * 2 , * 4 }, { * 1 , * 4 }}. (Note that d C (m 1 , * 1 ) = d C (m 1 , * 2 ) = d C (m 3 , * 3 ) = d C (m 3 , * 4 ).) Thus, we here construct a situation where * 1 and * 2 as well as * 3 and * 4 can be merged without breaking the consistency requirement, but where merging both simultaneously leads to a topology G that is only 4/k-consistent, since d G (s, t) = 4. This ratio can be made arbitrarily small provided we choose k' = (k − 4)/2. A.4 Proof of Lemma 3.11 In the worst case, each star in the traces represents a different node in G 1 , so the maximal number of nodes in any topology in G T is the total number of non-anonymous nodes plus the total number of stars in T . This number of nodes is reached in the topology G C . According to Definition 3.4, only non-adjacent stars in G * can represent the same node in an inferrable topology. Thus, the stars in the trace set T must originate from at least γ(G * ) different nodes. As a consequence, |V 1 | − |V 2 | ≤ s − γ(G * ), which can reach s − 1 for the trace set T = {T i = (v, * i , w) | 1 ≤ i ≤ s}. Analogously, |V 1 |/|V 2 | ≤ (n + s)/(n + γ(G * )) ≤ (2 + s)/3. Observe that each occurrence of a node in a trace describes at most two edges. If all anonymous nodes are merged into γ(G * ) nodes in G 1 and are separate nodes in G 2 , the difference in the number of edges is at most 2(s − γ(G * )). Analogously, |E 1 |/|E 2 | ≤ (ν + 2s)/(ν + 2) ≤ s. The trace set T = {T i = (v, * i , w) | 1 ≤ i ≤ s} reaches this bound. A.5 Proof of Lemma 3.14 A "lower bound" example follows from Figure 2.
Essentially, this is also the worst case: note that the difference in the shortest distance between a pair of nodes u and v in G 1 and G 2 is only greater than 0 if the shortest path between them involves at least one anonymous node. Hence the shortest distance between such a pair is two. The longest shortest distance between the same pair of nodes in another inferred topology visits all nodes in the network, i.e., its length is bounded by N − 1. A.6 Proof of Lemma 3.16 Each occurrence of a node in a trace describes at most two links incident to this node. For the degree difference we only have to consider the links incident to at least one anonymous node, as the number of links between non-anonymous nodes is the same in G 1 and G 2 . If all anonymous nodes can be merged into γ(G * ) nodes in G 1 and all anonymous nodes are separate in G 2 , the difference in the maximum degree is thus at most 2(s − γ(G * )), as there can be at most s − γ(G * ) + 1 nodes merged into one node and the minimal maximum degree of a node in G 2 is two. This bound is tight, as the trace set T i = (v i , * i , w i ) for 1 ≤ i ≤ s containing s stars can be represented by a graph with one anonymous node of degree 2s or by a graph with s anonymous nodes of degree two each. For the ratio of the maximal degree we can ignore links between non-anonymous nodes as well, as these only decrease the ratio. The highest number of links incident at a node v with one endpoint in the set of anonymous nodes is s − γ(G * ) + 1 for non-anonymous nodes and 2(s − γ(G * ) + 1) for anonymous nodes, whereas the lowest number is two. A.7 Proof of Lemma 4.4 The proof for the upper bound is analogous to the case without full exploration. To prove that this bound can be reached, we need to add traces to the trace set that ensure that all pairs of named nodes appear in a trace but do not change the degrees of anonymous nodes.
To this end we add to G 0 a named node u for each pair {v, w} of named nodes that does not yet appear in a trace, together with a trace T = (v, u, w). This does not increase the maximum degree and guarantees full exploration. A.8 Proof of Lemma 4.5 We first prove the upper bound for the relative case. Note that the maximal distance between two anonymous nodes MAP( * 1 ) and MAP( * 2 ) in an inferred topology component cannot be larger than twice the distance of two named nodes u and v: from Definition 4.1 we know that there must be a trace in T connecting u and v, and the maximal distance δ of a pair of named nodes is given by the path of the trace that includes u and v. Therefore, and since any trace starts and ends with a named node, any star can be at a distance of at most δ/2 from a named node. Therefore, the maximal distance between MAP( * 1 ) and MAP( * 2 ) is at most δ/2 + δ/2 to get to the corresponding closest named nodes, plus δ for the connection between the named nodes. Since, according to Lemma 4.2, the distance between named nodes is the same in all inferred topologies, the diameter of inferred topologies can vary at most by a factor of two. We now construct an example that reaches this bound. Consider a topology consisting of a center node c and four rays of length k. Let u 1 , u 2 , u 3 , u 4 be the "end nodes" of each ray. We assume that all these nodes are named. Now add to the topology two chains of anonymous nodes of length 2k + 1, one between nodes u 1 and u 2 and one between nodes u 3 and u 4 . The trace set consists of the minimal trace set to obtain a fully explored topology: six traces of length 2k + 1, one between each pair of the end nodes u 1 , u 2 , u 3 , u 4 . Now we add two traces of length 2k + 1, between nodes u 1 and u 2 and between nodes u 3 and u 4 . These traces explore the anonymous chains and have the following shape: T 7 = (u 1 , * 1 , . . . , * k , σ, * k+1 , . . . , * 2k , u 2 ) and T 8 = (u 3 , * 2k+1 , . . . , * 3k , σ', * 3k+1 , . . . , * 4k , u 4 ), where σ and σ' are stars.
Let G 1 = G C and let G 2 be the inferrable graph where σ and σ' are merged. The resulting diameters are DIAM(G 1 ) = 4k + 2 and DIAM(G 2 ) = 2k + 1. Since s = 4k + 2, the difference can thus be as large as s/2. Note that this construction also yields the bound of the relative difference: DIAM(G 1 )/DIAM(G 2 ) = (4k + 2)/(2k + 1) = 2. A.9 Proof of Lemma 4.6 Given the number of stars s, we construct a trace set T with two inferrable graphs such that in one graph the number of triangles with anonymous nodes is s(s − 1)/2 and in the other graph there are no such triangles. As a first step we add s traces T i = (v i , * i , w) to the trace set T , where 1 ≤ i ≤ s. To make this trace set fully explored we add traces for each pair v i , v j to T as a second step, i.e., traces T i,j = (v i , v j ) for 1 ≤ i < j ≤ s. The resulting trace set contains s stars and none of the stars are in conflict with each other. Thus the graph G 1 merging all stars into one anonymous node is inferrable from this trace set, and the number of triangles the anonymous node is part of is s(s − 1)/2. Let G 2 be the canonic graph of this trace set. This graph does not contain any triangles with anonymous nodes, and hence the difference C(G 1 ) − C(G 2 ) is s(s − 1)/2. To see that the ratio can be unbounded, consider the trace set {(v, * 1 , w), (u, * 2 , w), (u, v)}. This set is fully explored since all pairs of named nodes appear in a trace. The graph where the two stars are merged has one triangle and the canonic graph has no triangle.
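The two inferrable graphs from the proof of Lemma 4.6 can be rebuilt and checked mechanically. The sketch below uses my own encoding (named nodes "v0".."v{s-1}" and "w"; the stars merged into a single anonymous node "u" in G1 versus kept separate as "u0".."u{s-1}" in the canonic graph G2; all identifiers are illustrative, not from the paper):

```python
from itertools import combinations

def build_graphs(s):
    """Fully explored trace set: T_i = (v_i, *_i, w) plus all pairs (v_i, v_j).
    G1 merges every star into one anonymous node u; G2 (the canonic graph)
    keeps each star as its own anonymous node u_i."""
    named = [f"v{i}" for i in range(s)]
    pair_edges = {frozenset(p) for p in combinations(named, 2)}
    g1 = pair_edges | {frozenset((v, "u")) for v in named} | {frozenset(("u", "w"))}
    g2 = pair_edges | {frozenset((v, f"u{i}")) for i, v in enumerate(named)} \
                    | {frozenset((f"u{i}", "w")) for i in range(s)}
    return g1, g2

def triangles_with_anonymous(edges, anon):
    """Count triangles containing at least one anonymous node."""
    nodes = {x for e in edges for x in e}
    return sum(1 for t in combinations(nodes, 3)
               if any(x in anon for x in t)
               and all(frozenset(p) in edges for p in combinations(t, 2)))

s = 5
g1, g2 = build_graphs(s)
assert triangles_with_anonymous(g1, {"u"}) == s * (s - 1) // 2   # s(s-1)/2 triangles
assert triangles_with_anonymous(g2, {f"u{i}" for i in range(s)}) == 0
```

The check matches the proof: every pair v_i, v_j closes a triangle through the merged node u, while in the canonic graph no v_i is adjacent to w, so no anonymous triangle exists.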
10,634
1105.5032
2951177312
Many electoral bribery, control, and manipulation problems (which we will refer to in general as "manipulative actions" problems) are NP-hard in the general case. It has recently been noted that many of these problems fall into polynomial time if the electorate is single-peaked (i.e., is polarized along some axis issue). However, real-world electorates are not truly single-peaked. There are usually some mavericks, and so real-world electorates tend to merely be nearly single-peaked. This paper studies the complexity of manipulative-action algorithms for elections over nearly single-peaked electorates, for various notions of nearness and various election systems. We provide instances where even one maverick jumps the manipulative-action complexity up to NP-hardness, but we also provide many instances where a reasonable number of mavericks can be tolerated without increasing the manipulative-action complexity.
The four papers most related to the present one are the following. insightfully raised the idea that general complexity results may change in single-peaked societies. His manipulative-action example (STV) actually provides a case where single-peakedness fails to lower manipulation complexity, but in a different context he did find a lowering of complexity for single-peakedness. The papers , the first of which was in the TARK conference, then broadly explored the effect of single-peakedness on manipulative actions. These three papers are all in the model of (perfect) single-peakedness. , in the context of preference elicitation, raised and experimentally studied the issue of single-peaked societies. also discussed nearness to single-peakedness, and the papers @cite_3 @cite_2 both raise as open issues whether shield-evaporation (complexity) results for single-peakedness will withstand near-single-peakedness. The present paper seeks to bring the "nearly single-peaked" lens to the study of manipulative actions.
{ "abstract": [ "Much work has been devoted, during the past 20 years, to using complexity to protect elections from manipulation and control. Many 'complexity shield' results have been obtained: results showing that the attacker's task can be made NP-hard. Recently there has been much focus on whether such worst-case hardness protections can be bypassed by frequently correct heuristics or by approximations. This paper takes a very different approach: We argue that when electorates follow the canonical political science model of societal preferences the complexity shield never existed in the first place. In particular, we show that for electorates having single-peaked preferences, many existing NP-hardness results on manipulation and control evaporate.", "For many election systems, bribery (and related) attacks have been shown NP-hard using constructions on combinatorially rich structures such as partitions and covers. This paper shows that for voters who follow the most central political-science model of electorates--single-peaked preferences--those hardness protections vanish. By using single-peaked preferences to simplify combinatorial covering challenges, we for the first time show that NP-hard bribery problems--including those for Kemeny and Llull elections--fall to polynomial time for single-peaked electorates. By using single-peaked preferences to simplify combinatorial partition challenges, we for the first time show that NP-hard partition-of-voters problems fall to polynomial time for single-peaked electorates. We show that for single-peaked electorates, the winner problems for Dodgson and Kemeny elections, though Θ_2^p-complete in the general case, fall to polynomial time. And we completely classify the complexity of weighted coalition manipulation for scoring protocols in single-peaked electorates." ], "cite_N": [ "@cite_3", "@cite_2" ], "mid": [ "1983312130", "2233988666" ] }
The Complexity of Manipulative Attacks in Nearly Single-Peaked Electorates *
Elections are a model of collective decision-making so central in human and multiagent-systems contexts (ranging from planning to collaborative filtering to reducing web spam) that it is natural to want to get a handle on the computational difficulty of finding whether manipulative actions can obtain a given outcome (see the survey [22]). A recent line of work started by Walsh [37,20,4] has looked at the extent to which NP-hardness results for the complexity of manipulative actions (bribery, control, and manipulation) may evaporate when one focuses on electorates that are (unidimensional) single-peaked, a central social-science model of electoral behavior. That model basically views society as polarized along some (perhaps hidden) issue or axis. However, real-world elections are unlikely to be perfectly single-peaked. Rather, they are merely very close to being single-peaked, a notion that was recently raised in a computational context by Conitzer [7] and Escoffier et al. [16]. There will almost always be a few mavericks, who vote based on some reason having nothing to do with the societal axis. For example, in recent US presidential primary and final elections, there was much discussion of whether some voters would vote not based on the political positioning of the candidates but rather based on the candidates' religion, race, or gender. In this paper, we most centrally study whether the evaporation of complexity results that often holds for single-peaked electorates will also occur in nearly single-peaked electorates. We prove that often the answer is yes, and sometimes the answer is no. We defer to Section 6 our discussion of previous and related work. Among the contributions of our paper are the following. • Most centrally, we show that in many control and bribery settings, a reasonable number of mavericks (voters whose votes are not consistent with the societal axis) can be handled.
In such cases, the "complexity-shield evaporation" results of the earlier work can now be declared free from the worry that the results might hold only for perfect single-peakedness. • We give settings, for example 3-candidate Borda and 3-candidate veto, in which even one maverick raises the (constructive coalition weighted) manipulation complexity from P to NP-hardness. • For all scoring systems of the form (α 1 , α 2 , α 3 ), α 2 > α 3 , we provide a dichotomy theorem determining when the (constructive coalition weighted) manipulation problem is in P and when it is NP-complete, for so-called single-caved societies. • We show cases where the price of mavericity is paid in nondeterminism: cases where, for each k, we prove the control problem for societies with O(log^k n) mavericks to be in complexity class β k , the kth level of the limited nondeterminism hierarchy of Kintala and Fischer [31]. This paper touches on bribery, control, and manipulation, discusses various election systems and notions of nearness to single-peaked, and gives both polynomial-time attack results and NP-hardness results. It thus is not surprising that the proofs vary broadly in their techniques and approaches; we have no single approach that covers this entire range of cases. Almost all of our proofs are relegated to the appendix. Preliminaries In this section we give intuitive descriptions of the problems that we study. More detailed coverage, and discussion of the motivations and limitations of the models, can be found in the various bibliography entries, including, for example, Faliszewski et al. [19,22]. We have also included formal definitions in the appendix. As is standard, throughout this paper the terms "NP-hard"/"NP-hardness" will refer to polynomial-time many-one "NP-hard"/"NP-hardness." Elections An election E = (C,V ) consists of a finite candidate set C and a finite collection V of votes over the candidates.
V is a list of entries, one per voter, with each entry containing a linear (i.e., tie-free total) ordering of the candidates (except for approval elections, where each vote is a |C|-long 0-1 vector denoting disapproval/approval of each candidate). 1 In plurality elections, whichever candidate gets the most top-of-the-preference-order votes wins. Each vector (α 1 , . . . , α k ), α i ∈ N, α 1 ≥ · · · ≥ α k ≥ 0, defines a k-candidate scoring protocol election, in which each voter's ith favorite candidate gets α i points, and whichever candidate gets the most points wins. k-candidate veto is defined by the vector (1, . . . , 1, 0) (with k − 1 ones), and k-candidate Borda is defined by the vector (k − 1, k − 2, . . . , 0). In approval elections, whichever candidate is approved of by the most voters wins. In all the systems just mentioned, if candidates tie for the highest number of points, those tying for highest are all considered winners. In Condorcet elections, a candidate wins if he or she strictly beats every other candidate in pairwise head-on-head votes.
top of each other. The set of preferences that can be supported among them by curves of the mentioned sort on which there are no ties among candidates in utility are precisely the single-peaked vote ensembles. Note that different voters can have different peaks/plateaus and different curves, e.g., if both Alice and Bob think 40 percent is the ideal top tax rate, it is completely legal for Alice to prefer 30 percent to 50 percent and Bob to prefer 50 percent to 30 percent. There is extensive political science literature on single-peaked voting's naturalness, ranging from conceptual discussions to empirical studies of actual US political elections (with few candidates) showing that most voters are single-peaked with respect to the left-right political spectrum; the setting has been described as "the canonical setting for models of political institutions" [25].
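The scoring-protocol definitions above can be turned into a short winner computation (an illustrative sketch; the function name and the tuple encoding of votes are mine, not the paper's):

```python
def scoring_winners(alpha, votes):
    """alpha: non-increasing score vector (alpha_1, ..., alpha_k).
    votes: list of linear orders, most-preferred candidate first.
    Returns the set of candidates with maximum total score (co-winners)."""
    scores = {}
    for vote in votes:
        for pos, cand in enumerate(vote):
            scores[cand] = scores.get(cand, 0) + alpha[pos]
    top = max(scores.values())
    return {c for c, sc in scores.items() if sc == top}

votes = [("a", "b", "c"), ("a", "c", "b"), ("b", "c", "a")]
borda = (2, 1, 0)       # 3-candidate Borda: (k-1, k-2, ..., 0)
veto = (1, 1, 0)        # 3-candidate veto: (1, ..., 1, 0)
plurality = (1, 0, 0)
assert scoring_winners(plurality, votes) == {"a"}
assert scoring_winners(veto, votes) == {"a", "b", "c"}   # everyone ties at 2 points
```

Plurality, veto, and Borda are all instances of the same routine with different score vectors, which is exactly how the text presents them.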
If in the definition of single-peaked one replaces the final "forall" with this one, (∀v ∈ V )[c 2 P v c 1 =⇒ c 3 P v c 2 ], one defines the closely related notion of single-caved preferences, which we will also study. For approval ballots, a vote set V is said to be single-peaked if there is a linear order L such that for each voter v, all candidates that v approves of (if any) form an adjacent block in L. In all our manipulative action problems about single-peaked and nearly single-peaked societies, we will follow Walsh's model, which is that the societal order, L, is part of the input. (See the earlier papers for extensive discussion of why this is a reasonable model.) In this paper, we will primarily focus on elections whose voters are "nearly" single-peaked, under the following notions of nearness. Our "maverick" notions apply to both voting by approval ballots and voting by linear orders; our other notions are specific to voting by linear orders. We will say an election is over a k-maverick-SP society (equivalently, a k-maverick-SP electorate) if all but k of the voters are consistent with (in the sense of single-peakedness; this does not mean identical to) the societal order L. That is, we allow up to k mavericks. We will speak of f (·)-maverick-SP societies when this usage and the type of f 's argument(s) is clear from context ( f 's argument(s) will typically be the size of the election instance or some parameters of the election, e.g., the number of candidates or the number of voters). 
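Consistency with a societal axis can be tested with a standard equivalent characterization: a linear order is single-peaked with respect to L exactly when, for every k, the voter's k most-preferred candidates occupy consecutive positions on L. The sketch below (all names mine) uses this test to count the mavericks in a profile:

```python
def consistent_with_axis(vote, axis):
    """Single-peaked consistency w.r.t. the societal axis L, via the
    prefix-interval test: each next candidate in the ranking must extend
    the interval of already-ranked candidates by one position on L."""
    pos = {c: i for i, c in enumerate(axis)}
    lo = hi = pos[vote[0]]
    for c in vote[1:]:
        p = pos[c]
        if p == lo - 1:
            lo = p
        elif p == hi + 1:
            hi = p
        else:
            return False
    return True

def count_mavericks(votes, axis):
    """An electorate is k-maverick-SP when at most k votes fail the test."""
    return sum(1 for v in votes if not consistent_with_axis(v, axis))

axis = ("a", "b", "c", "d")
votes = [("b", "c", "a", "d"),   # peak b; consistent with L
         ("d", "a", "b", "c")]   # a maverick: top-2 set {d, a} is no interval of L
assert count_mavericks(votes, axis) == 1
```

Note that, as the text stresses, consistency does not mean the votes are identical: the two consistent voters may have different peaks and different orderings.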
Also, we will prove a number of results that state that "PROBLEM for ELECTION-SYSTEM over log-maverick-SP societies is in P"; this is a shorthand for the claim that for each function f (that is computable in time polynomial in the size of the input, which is roughly |V| · |C| · log |C| for the election (C,V ) itself plus whatever space is taken by other parameters; and, to avoid possible technical problems, we should assume f is nondecreasing) whose value is O(log(ProblemInputSize)), it holds that "PROBLEM for ELECTION-SYSTEM over f -maverick-SP societies is in P," where the argument to f is the input size of the problem. An election is over a (k, k′)-swoon-SP society if each voter has the property that if one removes the voter's k favorite and k′ least favorite candidates from the voter's preference order, the resulting order is consistent with societal order L after removing those same candidates from L. We will use swoon-SP as a shorthand for (1, 0)-swoon-SP, as we will not study other swoon values in this paper. In swoon-SP, each person may have as her or his favorite some candidate chosen due to some personal passion (such as hairstyle or religion), but all the rest of that person's vote must be consistent with the societal polarization. An election is over a Dodgson k -SP society if for each voter some at-most-k sequential exchanges of adjacent candidates in his or her order make the vote consistent with the societal order L. An election is over a PerceptionFlip k -SP society if, for each voter, there is some series of at most k sequential exchanges of adjacent candidates in the societal order L after which the voter's vote is consistent with L. This models each voter being consistent with that voter's humanly blurred view of the societal order. Some model details follow. For control by adding voters for maverick-SP societies, the total number of mavericks in the initial voter set and the pool of potential additional voters is what the maverick bound limits.
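Under the reading of the definition above, (1, 0)-swoon-SP consistency is easy to test: drop the voter's favorite from both the vote and the axis, then test single-peaked consistency of what remains. A small sketch (function names mine; the inner consistency test uses the standard prefix-interval characterization of single-peakedness):

```python
def consistent_with_axis(vote, axis):
    # Single-peaked consistency: every top-k prefix of the vote must
    # occupy consecutive positions on the societal axis.
    pos = {c: i for i, c in enumerate(axis)}
    lo = hi = pos[vote[0]]
    for c in vote[1:]:
        p = pos[c]
        if p == lo - 1:
            lo = p
        elif p == hi + 1:
            hi = p
        else:
            return False
    return True

def is_swoon_sp(vote, axis):
    """(1,0)-swoon-SP: remove the voter's favorite from the vote and from
    the societal axis L, then require single-peaked consistency."""
    favorite = vote[0]
    rest = tuple(c for c in vote if c != favorite)
    axis_rest = tuple(c for c in axis if c != favorite)
    return consistent_with_axis(rest, axis_rest)

axis = ("a", "b", "c", "d")
# Favorite d reflects a personal passion; the remainder (b, c, a) is
# consistent with the reduced axis (a, b, c), so the vote is swoon-SP
# even though it is not single-peaked w.r.t. L itself.
vote = ("d", "b", "c", "a")
assert not consistent_with_axis(vote, axis)
assert is_swoon_sp(vote, axis)
```

The same drop-and-retest pattern extends to general (k, k′)-swoon-SP by removing the k favorites and k′ least favorites.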
For manipulation, nonmanipulators as well as manipulators can be mavericks, and we bound the total number of mavericks. For bribery involving f (·)-maverick-SP societies, we will consider both the "standard" model and the "marked" model. In the standard model, the f (ProblemInputSize) limit on the number of mavericks must hold both for the input and for the voter set after the bribing is done; anyone may be bribed and bribes can create mavericks and can make mavericks become nonmavericks. In the marked model, each voter has a flag saying whether or not he or she can cast a maverick vote (we will call that being "maverick-enabled"). The at most f (ProblemInputSize) voters with the maverick-enabled flag may (subject to the other constraints of the bribery problem such as total number of bribes) be bribed in any way, and so may legally cross in either direction between consistency and inconsistency with the societal ordering. All non-maverick-enabled voters must be consistent with societal order L both before and after the bribing, although they too can be bribed (again, subject to the problem's other constraints such as total number of bribes). For single-peaked electorates, "median voting" (in which the candidate wins who on the societal axis is preferred by the "median voter") is known to be strategy-proof, i.e., a voter never benefits from misrepresenting his or her preferences. It might seem tempting to conclude from that that all elections on single-peaked societies "should" use median voting, and that we thus need not discuss single-peaked (or perhaps even nearly single-peaked) elections with respect to other voting systems, such as plurality, veto, etc. But that temptation should be resisted. First, median voting's strategy-proofness regards manipulation, not control or bribery. Second, even in real-world political elections broadly viewed as being (nearly) single-peaked, it simply is not the case that median voting is used. 
People, for whatever reasons of history and comfort, use such systems as plurality, approval, and so on for such elections. And so algorithms for those systems are worth studying. Third, for manipulation of nearly single-peaked electorates, strategy-proofness does not even hold. And although for them indeed only the mavericks can have an incentive to lie, that doesn't mean that the outcome won't be utterly distorted even by a single maverick. There are arbitrarily large electorates, having just one maverick, where that maverick can change the winner from being the median one to instead being a candidate on the outer extreme of the societal order. Manipulation This paper's sections on control and bribery focus on, and provide many examples of, settings where not just the single-peaked case but even the nearly single-peaked cases have polynomial-time algorithms. Regarding manipulation, the results are more sharply varied. We show that NP-hardness holds for a rich class of scoring protocols, in the presence of even one maverick. (When α 2 = α 3 the system is either equivalent to plurality or is a trivial system where everyone always is a winner. These cases are easily seen to be in P.) Recall from Section 2 the meaning of "(α 1 , α 2 , α 3 ) elections," namely, scoring protocol elections using the vector (α 1 , α 2 , α 3 ). Theorem 3.1. For each α 1 ≥ α 2 > α 3 , CCWM for (α 1 , α 2 , α 3 ) elections over 1-maverick-SP societies is NP-complete. We point out that this theorem is of the same form as that for the general case (see [8,28,35]). However, the proofs for the general case do not work in our case, since those proofs construct elections with at least two mavericks. In the general case (i.e., no single-peakedness is required), the above cases also are NP-complete [8,28,35], so allowing a one-maverick single-peaked society is jumping us up to the same level of complexity that holds in the general case here. 
In contrast, for SP societies (without mavericks), 3-candidate CCWM is NP-complete when (α 1 − α 3 ) > 2(α 2 − α 3 ) > 0 and is in P otherwise [23]. So, in particular, 3-candidate veto and 3-candidate Borda elections are in P for the SP (single-peaked) case, but are already NP-complete for SP with one maverick allowed. In contrast, all of Theorem 3.2's cases are well-known to be NP-complete in the general case [8]. Still, the contrast is a bit fragile. For example, although the above theorem shows that CCWM for (1, 1, 1, 1, 0) elections over 2-maverick-SP societies is in P, we prove below that CCWM for (1, 1, 1, 0) elections over 2-maverick-SP societies is NP-complete. Note also that this theorem gives an example where 4-candidate veto elections are NP-complete but 5-candidate veto elections are in P, in contrast with the behavior that one often expects regarding NP-completeness and parameters, namely, one might expect that increasing the number of candidates wouldn't lower the complexity. (However, see Faliszewski et al. [23] for another example of this unusual behavior.) Theorem 3.3. For each k ≥ 0 and m ≥ 3 such that m ≤ k + 2, CCWM for m-candidate veto elections over k-maverick-SP societies is NP-complete. Theorems 3.2 and 3.3 also show that for any number of mavericks, there exists a voting system such that CCWM is easy for up to that number of mavericks, and hard for more mavericks. Corollary 3.4. Let k ≥ 0. For all ℓ ≥ 0, CCWM for (k + 3)-candidate veto elections over ℓ-maverick-SP societies is in P if ℓ ≤ k and NP-complete otherwise. Let us now turn from the maverick notion of nearness to single-peakedness, and look at the "swoon" notion, in which, recall, each voter must be consistent with the societal ordering (minus the voter's first-choice candidate) when one removes from the voter's preference list the voter's first-choice candidate. We will still mostly focus on the case of veto elections.
For three candidates (see Observation 3.6) and four candidates we have NP-completeness, and for five or more candidates we have membership in P. Observation 3.6. Every 3-candidate election is a swoon-SP election and a Dodgson 1 -SP election, and so all complexity results for 3-candidate elections in the general case also hold for swoon-SP elections and Dodgson 1 -SP elections. Complexity results for general elections do not always hold for swoon-SP elections or for Dodgson 1 -SP elections. For example, for m ≥ 5, CCWM for m-veto elections is NP-complete in the general case, but in P for swoon-SP societies (Theorem 3.5) and Dodgson 1 -SP societies (Theorem 3.7). We conclude this section with a brief comment about single-caved electorates. (We remind the reader that single-caved is not a "nearness to SP" notion, but rather is in some sense a mirror-sibling.) For scoring vectors (α 1 , α 2 , α 3 ), the known CCWM dichotomy result for single-peaked electorates is that if α 1 − α 3 > 2(α 2 − α 3 ) then the problem is NP-complete and otherwise the problem is in P. For single-caved, the opposite holds for each case that is not in P in the general case. Theorem 3.8. For each α 1 ≥ α 2 > α 3 , CCWM for (α 1 , α 2 , α 3 ) elections over single-caved societies is NP-complete if (α 1 − α 3 ) ≤ 2(α 2 − α 3 ) and otherwise is in P. Control The very first results of Faliszewski et al. [23] showing that NP-complete general-case control results can simplify to P results for single-peaked electorates were for constructive control by adding voters and for constructive control by deleting voters, for approval elections. We show that each of those results can be reestablished even in the presence of logarithmically many mavericks.
(Indeed, we mention in passing that even if the attacker is allowed to simultaneously both add and delete voters (so-called "AV+DV" multimode control in the recent model that allows simultaneous attacks [17]), the complexity of planning an optimal attack still remains polynomial-time even with logarithmically many mavericks.) Theorem 4.1. CCAV and CCDV for approval elections over log-maverick-SP societies are each in P. For CCAV, the complexity remains in P even for the case where no limit is imposed on the number of mavericks in the initial voter set, and the number of mavericks in the set of potential additional voters is logarithmically bounded (in the overall problem input size). 2 Although we will soon prove a more general result, we start by proving this result directly. We do so both as that will make the more general result clearer, and as P-time results are the core focus of this paper. Our proof involves the "demaverickification" of the society, in order to allow us to exploit the power of single-peakedness. By doing so, our proof establishes that there is a polynomial-time disjunctive truth-table reduction (see Ladner et al. [32] for the formal definition of ≤^p_dtt, but one does not need to know that to follow our proof) to the single-peaked case. In this version of this paper, the proof of Theorem 4.1 is located in Appendix C, and we assume that the interested reader will now read it (doing so is not required and probably not even a good idea on a first reading, but such reading will make clearer the next paragraph, which briefly refers to the algorithm within that proof). Now, before we move on to other election systems, let us pause to wonder whether Theorem 4.1 is just the tip of an iceberg, and is in fact hiding some broader connection between number of mavericks and computational complexity theory. We won't give this type of discussion for all, or even most, of our theorems.
But since this is our first control result, it is worthwhile to look here at what holds. And what holds is that Theorem 4.1 is indeed in some sense the tip of an iceberg. However, it is an iceberg whose tip is its most interesting part, since it gives the part that admits polynomial-time attacks. Still, the rest of the iceberg brings out an interesting connection between maverick frequency and nondeterminism. Let us think again of the proof we just saw. It worked by sequentially generating each member of the powerset of a logarithmic-sized set (call it Q). And we did that, naturally enough, in polynomial time. However, note that we could also have done it with nondeterminism. We can nondeterministically guess for each member of Q whether or not it will be added (for CCAV) or deleted (for CCDV). And then after that nondeterministic guess, we for the CCAV case do the demaverickification presented in the above theorem's proof and for the CCDV case do the deletable/nondeletable marking, and then we run the polynomial-time algorithms for the single-peaked approval-voting CCAV and the approval-voting CCDV (with deletable/nondeletable flag, and all mavericks, i.e., those not consistent with the societal ordering, being nondeletable) cases. It is easy to see that the above proof argument works fine with the change to nondeterminism. Indeed, the reason Theorem 4.1 is about "P" is that sequentially handling O(log(ProblemInputSize)) nondeterministic bits can be done in polynomial time. So, what underlies the above theorem are the following results, which say that frequency of mavericks in one's society exacts a price in nondeterminism. (We here are proving just an upper bound, but we conjecture that the connection is quite tight: that commonality of wild voter behavior is very closely connected with nondeterminism.) To state the results, we need to briefly introduce some notions from complexity theory. Complexity theorists often separate out the weighing of differing resources, putting bounds on each. The only such class we need here is the class of languages that can be accepted in time t(n) on machines using g(n) bits of nondeterminism, which is typically denoted NONDET-TIME[g(n), t(n)]. The most widely known such classes are those of the limited nondeterminism hierarchy, known as the beta hierarchy, of Kintala and Fischer [31] (see also [9] and the survey [27]). β_k is the class of sets that can be accepted in polynomial time on machines that use O(log^k n) bits of nondeterminism: β_k = {L | (∃ polynomial t(n))(∃ g(n) ∈ O(log^k n))[L ∈ NONDET-TIME[g(n), t(n)]]}, or for short, β_k = NONDET-TIME[O(log^k n), poly]. Of course, β_0 = β_1 = P.

² By that last part, we mean precisely the definition (including its various restrictions on the complexity of the function) of the notion of log-maverick-SP, except with the limit being placed just on the number of mavericks in the additional voter set. Formally put, to avoid any possibility of ambiguity or confusion, we mean that for each function f (that is computable in time polynomial in the size of the input; and to avoid possible technical problems, we require that f be nondecreasing) whose value is O(log(ProblemInputSize)), it holds that "CCAV for approval elections over inputs for which (as is standard in our model regarding SP and nearly-SP cases, a societal ordering of single-peakedness is given as part of the input, and) at most f(ProblemInputSize) of the additional voters are inconsistent with the societal ordering" is in P.
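The simulation argument just discussed (handling O(log(ProblemInputSize)) nondeterministic bits deterministically in polynomial time) is nothing more than exhaustive search over the guess strings. A small Python sketch (the function names and the toy subset-sum verifier are ours, for illustration only): a polynomial-time verifier that takes g bits of guess is simulated by trying all 2^g guess strings, which is polynomially many whenever g = O(log n).

```python
from itertools import product

def simulate_limited_nondeterminism(instance, guess_bits, verifier):
    """Deterministically simulate a machine using `guess_bits` bits of
    nondeterminism: accept iff some guess string makes `verifier` accept.
    With guess_bits = O(log n), the 2**guess_bits iterations keep the
    whole loop polynomial in n."""
    return any(verifier(instance, guess)
               for guess in product((0, 1), repeat=guess_bits))

# Toy use: "is there a subset of a logarithmic-sized set summing to a
# target?" Each guess bit decides whether the corresponding element is picked.
def subset_sum_verifier(inst, guess):
    items, target = inst
    return sum(x for x, bit in zip(items, guess) if bit) == target
```

For instance, simulate_limited_nondeterminism(([3, 5, 9], 8), 3, subset_sum_verifier) accepts (pick 3 and 5), while target 4 is rejected.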
We can now state our result, which says that frequency of mavericity is paid for in nondeterminism. For Condorcet elections, both CCAV and CCDV are known to be NP-complete in the general case [3] but to be in P for single-peaked electorates [4]. For Condorcet, results analogous to those for approval under f(·)-maverick-SP societies hold. For CCAV, the complexity remains in NONDET-TIME[f(ProblemInputSize), poly] even for the case where no limit is imposed on the number of mavericks in the initial voter set, and the number of mavericks in the set of potential additional voters is f(·)-bounded (in the overall problem input size). For plurality, the most important of systems, CCAV and CCDV are known to be in P for the general case [3], so there is no need to seek a maverick result there. However, CCAC and CCDC are both known to be NP-complete in the general case and in P in the single-peaked case. For both of those, a constant number of mavericks can be handled.

Theorem 4.6. For each k, CCAC and CCDC for plurality over k-maverick-SP societies are in P.

As in the case of Theorem 4.1, the idea of the algorithm is to (polynomial-time disjunctively truth-table) reduce to the single-peaked case. However, here the mavericks require a more involved brute-force search and thus we can only handle a constant number of them. Unfortunately, swooning cannot be handled at all (unless P = NP, of course). Also, allowing the number of mavericks to be some root of the input size cannot be handled. For the Dodgson and PerceptionFlip notions of nearness to single-peakedness, we can prove that allowing a constant number of adjacent swaps (for each voter, separately, in the appropriate structure) still leaves the CCAC and CCDC problems in P.
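The Dodgson_k-SP condition, that each vote be at most k adjacent swaps away from some vote that is single-peaked with respect to the societal axis, can be checked by brute force for small candidate sets. The sketch below (all names are ours; it is exponential in the number of candidates and purely illustrative) uses two standard facts: a vote is single-peaked iff each of its prefixes is contiguous on the axis, and the minimum number of adjacent swaps between two orders equals their number of inversions.

```python
from itertools import permutations

def is_single_peaked(vote, axis):
    # every prefix of the vote must be a contiguous interval of the axis
    pos = {c: i for i, c in enumerate(axis)}
    lo = hi = pos[vote[0]]
    for c in vote[1:]:
        if pos[c] == lo - 1:
            lo -= 1
        elif pos[c] == hi + 1:
            hi += 1
        else:
            return False
    return True

def adjacent_swap_distance(u, v):
    # Kendall tau distance: pairs ordered differently in u and v, which
    # equals the minimum number of adjacent swaps turning u into v
    rank = {c: i for i, c in enumerate(v)}
    seq = [rank[c] for c in u]
    return sum(1 for i in range(len(seq))
                 for j in range(i + 1, len(seq)) if seq[i] > seq[j])

def dodgson_sp_degree(vote, axis):
    # least k such that `vote` is within k adjacent swaps of some vote
    # that is single-peaked with respect to `axis`
    return min(adjacent_swap_distance(vote, p)
               for p in permutations(axis) if is_single_peaked(p, axis))
```

For the axis a L b L c this reproduces Observation 3.6: the only non-single-peaked votes, a > c > b and c > a > b, both have degree 1.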
(We mention in passing that the following result holds not just for constructive control, but even for the concept, which we are not focusing on in this paper, of destructive control, in which one's goal is just to preclude a certain candidate from winning.) It turns out that this local structure, slightly distorted, still occurs in Dodgson_k-SP societies and in PerceptionFlip_k-SP societies. Thus, our strategy for proving Theorem 4.9 is to first formally define what we mean by a distorted variant of the above lemma, and then to adapt the CCAC and CCDC algorithms of Faliszewski et al. [23] for single-peaked societies to the distorted setting. Specifically, the local structure in question is the following: in a single-peaked plurality election (C,V) whose societal axis is c_1 L c_2 L · · · L c_m,

1. score_{(C,V)}(c_1) = score_{({c_1,c_2},V)}(c_1),
2. for each i, 2 ≤ i ≤ m − 1, score_{(C,V)}(c_i) = score_{({c_{i−1},c_i,c_{i+1}},V)}(c_i), and
3. score_{(C,V)}(c_m) = score_{({c_{m−1},c_m},V)}(c_m).

In particular, the proof of Lemma 4.10 (given in [23]) shows that single-peaked plurality elections are 1-local with respect to the societal axis. We extend this result to Dodgson_k-SP societies and to PerceptionFlip_k-SP societies.

Lemma 4.13. Let k be a positive integer, let E = (C,V) be a plurality election, and let L be some linear order over C (the societal axis). (a) If E is Dodgson_k-SP with respect to L then E is (k + 1)-local with respect to L. (b) If E is PerceptionFlip_k-SP with respect to L then E is (k + 1)-local with respect to L.

Proof. Cases (a) and (b) are similar but not identical and thus we will handle each of them separately.

Case (a). Let E = (C,V) be a Dodgson_k-SP election, where C = {c_1, . . . , c_m} and where the societal axis L is such that c_1 L c_2 L · · · L c_m. Since for every C′ ⊆ C, (C′,V) is Dodgson_k-SP with respect to L and since E was chosen arbitrarily, it suffices to show that for each candidate c_i ∈ C it holds that score_{(N(L,C,c_i,k+1),V)}(c_i) = score_{(C,V)}(c_i), where N(L,C,c_i,r) denotes the set of candidates at distance at most r from c_i on the axis L. Fix a candidate c_i in C and a voter v in V.
By definition of Dodgson_k-SP elections, there exists a vote v′ that is single-peaked with respect to L, such that v can be obtained from v′ by at most k swaps. Let C′ = N(L,C,c_i,k+1). We will show that score_{(C′,{v})}(c_i) = score_{(C,{v})}(c_i). First, if v ranks c_i on top among all candidates in C, then certainly v ranks c_i on top among candidates in C′. Thus, score_{(C′,{v})}(c_i) ≥ score_{(C,{v})}(c_i). It remains to show that if v does not rank c_i on top among candidates in C, then v also does not rank c_i on top among candidates in C′. We consider two cases:

1. The peak of v′ is not in C′. Then it is easy to verify that some candidate from C′ precedes c_i in v. Otherwise, to convert v′ to v one would need enough swaps for c_i to pass k + 1 candidates from C′ (either those "to the left" of c_i in L if the peak of v′ was "to the left" of c_i, or those "to the right" of c_i if the peak of v′ was "to the right" of c_i).

2. The peak of v′ is in C′. If v's top-ranked candidate is in C′ then clearly c_i is not ranked first among C′ in v. Thus, let us assume that the top-ranked candidate in v is not in C′ and that it is some candidate c_j. Without loss of generality, let us assume that j > i + k + 1 (the case where j < i − k − 1 is analogous). Let us also assume that the peak of v′ is some candidate c_{i+s}, such that 1 ≤ s ≤ k + 1 (the case where −k − 1 ≤ s ≤ 0 is impossible because converting v′ to v requires at most k swaps). The minimal number of swaps that convert v′ to a vote where c_j is ranked first is at least (k + 1) − s + 1 = k − s + 2 (these swaps involve c_j and candidates c_{i+s}, c_{i+s+1}, . . . , c_{i+k+1}). The minimum number of swaps in v′ that ensure that c_i is ranked ahead of c_{i+s} is at least s (these swaps involve c_i and candidates c_{i+1}, c_{i+2}, . . . , c_{i+s}).
Thus, the minimum number of swaps of candidates in v′ that ensure that c_j is ranked first and that c_i is ranked ahead of c_{i+s} is at least k + 2, which is more than the allowed k swaps. Thus, this situation is impossible. As a result, some candidate from C′ is ranked ahead of c_i in v.

Thus, we have shown that if c_i is not ranked first in v among the candidates from C, then c_i is not ranked first in v among the candidates from C′. Thus, score_{(C′,{v})}(c_i) ≤ score_{(C,{v})}(c_i), and with the previously shown inequality score_{(C′,{v})}(c_i) ≥ score_{(C,{v})}(c_i), it must be the case that score_{(C′,{v})}(c_i) = score_{(C,{v})}(c_i). Since v was chosen arbitrarily, we have that score_{(C′,V)}(c_i) = score_{(C,V)}(c_i). This completes the proof of part (a) of the lemma.

Case (b). Let E = (C,V) be an election, where C = {c_1, . . . , c_m}. Let us assume, without loss of generality, that E is PerceptionFlip_k-SP via societal axis L, where c_1 L c_2 L · · · L c_m. Since for every C′ ⊆ C, (C′,V) is PerceptionFlip_k-SP with respect to L and since E was chosen arbitrarily, it suffices to show that for each candidate c_i ∈ C it holds that score_{(N(L,C,c_i,k+1),V)}(c_i) = score_{(C,V)}(c_i). Let us fix a candidate c_i in C and a voter v in V. This voter's preference order is single-peaked with respect to some order L′ that can be obtained from L by at most k swaps of adjacent candidates. Let us assume that c_{j′} is the candidate directly preceding c_i in L′ and c_{j′′} is the candidate directly succeeding c_i in L′ (in this proof we omit the easy-to-handle cases where c_i is either the maximum or the minimum element of L′). We claim that for any C′ ⊆ C that includes c_{j′}, c_i, and c_{j′′}, it holds that score_{(C′,{v})}(c_i) = score_{(C,{v})}(c_i). This is so because any voter that ranks c_i on top does so irrespective of which other candidates are included. So, score_{(C′,{v})}(c_i) ≥ score_{(C,{v})}(c_i).
On the other hand, by Lemma 3.4 of [23], score_{({c_{j′},c_i,c_{j′′}},{v})}(c_i) = score_{(C,{v})}(c_i). Thus, any voter that does not rank c_i on top, given a choice between c_i, c_{j′}, and c_{j′′}, ranks one of c_{j′}, c_{j′′} on top. It is easy to see that {c_i, c_{j′}, c_{j′′}} ⊆ N(L,C,c_i,k+1), and so, score_{(C′,{v})}(c_i) ≤ score_{(C,{v})}(c_i). Thus, score_{(C′,{v})}(c_i) = score_{(C,{v})}(c_i) and since we picked v arbitrarily, score_{(C′,V)}(c_i) = score_{(C,V)}(c_i). The proof of case (b) is complete.

Now, Theorem 4.9 is a consequence of the following, more general, result.

Theorem 4.14. For each constant k, CCAC and CCDC for plurality elections are in P for k-local elections.

However, we do have NP-completeness if in Dodgson_k-SP societies or PerceptionFlip_k-SP societies we allow the parameter k to increase to m − 2, where m is the total number of candidates involved in the election. (This many swaps allow us to, in effect, use the same technique as for swoon-SP societies.)

Theorem 4.15. CCAC and CCDC for plurality elections are NP-complete for Dodgson_{m−2}-SP societies and for PerceptionFlip_{m−2}-SP societies, where m is the total number of candidates involved in the election.

Finally, we note that for single-caved elections, CCAC and CCDC are in P for plurality.

Bribery

We now briefly look at bribery of nearly single-peaked electorates, focusing on approval elections. For all three cases (bribery, negative-bribery, and strongnegative-bribery) in which general-case NP-hardness bribery results have been shown to be in P for single-peaked societies [4], we show that the complexity remains in polynomial time even if the number of mavericks is logarithmic in the input size.

Theorem 5.1. Bribery, negative-bribery, and strongnegative-bribery for approval elections over log-maverick-SP societies are each in P, in both the standard and the marked model.
We mention in passing that although plurality bribery has never been discussed with respect to single-peaked (or nearly single-peaked) electorates, it is not hard to see that the two NP-complete bribery cases for plurality (plurality-weighted-$bribery and plurality-weighted-negative-bribery) remain NP-complete on single-peaked electorates, in one case immediately from a theorem of, and in one case as a corollary to a proof of, Faliszewski et al. [18].

Conclusions and Open Directions

Motivated by the fact that real-world electorates are unlikely to be flawlessly single-peaked, we have studied the complexity of manipulative actions on nearly single-peaked electorates. We observed a wide range of behavior. Often, a modest amount of non-single-peaked behavior is not enough to obliterate an existing polynomial-time claim. We find this the most important theme of this paper: its "take-home message." So if one feels that previous polynomial-time manipulative-action algorithms for single-peaked electorates are suspect since real-world electorates tend not to be truly single-peaked but rather nearly single-peaked, our results of this sort should reassure one on this point, although they are but a first step, as the paragraph after this one will explain. Yet we also found that sometimes allowing even one deviant voter is enough to raise the complexity from P to NP-hardness, and sometimes allowing any number of deviant voters has no effect at all on the complexity. We also saw cases where frequency of mavericity extracted a price in terms of the amount of nondeterminism used. We feel this is a connection that should be further explored, and regarding Corollary 4.3, we particularly commend to the reader's attention the issue of proving completeness for, not merely membership in, the levels of the beta hierarchy. We conjecture that completeness holds. One might wish to study other notions of closeness to single-peakedness and, in particular, one might want to combine our notions.
Indeed, in real human elections, there probably are both mavericks and swooners, and so our models are but a first step. In addition, the types of nearness that appear in different human contexts may differ from each other, and from the types of nearness that appear in computer multiagent-systems contexts. Models of human/multiagent-system behavior, and empirical study of actually occurring vote sets, may help identify the most appropriate notions of nearness for a given setting. Our control work studies just one type of control attack at a time. We suspect that many of our polynomial-time results could be extended to handle multiple types of attacks simultaneously, as has recently been explored (without single-peakedness constraints) by Faliszewski et al. [17], and we mentioned in passing one result for which we have already shown this.

[9] J. Díaz and J. Torán

A Formal Definitions

In this section we provide the missing formal definitions of the problems that we study (variants of manipulation, control, and bribery) and formal definitions of the problems we reduce from in our hardness proofs.

Definition A.1 ([8]). Let R be an election system. In the CCWM problem for R we are given a set of candidates C, a preferred candidate p ∈ C, a collection of nonmanipulative voters S (each vote consists of a preference order and a positive integer, the weight of the vote), and a collection T of n manipulators, each specified by its positive integer weight. We ask if it is possible to set the preference orders of the manipulators in such a way that p is a winner of the resulting R election (C, S ∪ T).

The following control notions are due to the seminal paper of Bartholdi et al. [3], except the notion below of CCAC follows Faliszewski et al. [21] in employing a bound, K, to make it better match the other control types.

Definition A.2 ([3]). Let R be an election system.
(a) In the CCAC problem for R we are given two disjoint sets of candidates, C and A, a collection V of votes over C ∪ A, a candidate p ∈ C, and a nonnegative integer K. We ask if there is a set A′ ⊆ A such that (a) |A′| ≤ K, and (b) p is a winner of R election (C ∪ A′, V).

(b) In the CCDC problem for R we are given an election (C,V), a candidate p ∈ C, and a nonnegative integer K. We ask if there is a set C′ ⊆ C such that (a) |C′| ≤ K, (b) p ∉ C′, and (c) p is a winner of R election (C − C′, V).

(c) In the CCAV problem for R we are given a set of candidates C, two collections of voters, V and W, over C, a candidate p ∈ C, and a nonnegative integer K. We ask if there is a subcollection W′ ⊆ W such that (a) |W′| ≤ K, and (b) p is a winner of R election (C, V ∪ W′).

(d) In the CCDV problem for R we are given an election (C,V), a candidate p ∈ C, and a nonnegative integer K. We ask if there is a collection V′ of voters that can be obtained from V by deleting at most K voters such that p is a winner of R election (C, V′).

The bribery notions below are due to Faliszewski et al. [18], except the notions below of negative and strongnegative bribery for approval voting are due to Brandt et al. [4].

Definition A.3 ([18, 4]). Let R be an election system. In the weighted-$bribery problem for R we are given an election (C,V), where each vote consists of the voter's preferences (as appropriate for the election system, e.g., an approval vector for approval voting and a preference order for plurality) and two integers (this vote's positive integer weight and this vote's nonnegative integer price), a candidate p ∈ C, and a nonnegative integer K (the allowed budget). We ask if there is a subcollection of votes, whose total price does not exceed K, such that it is possible to ensure that p is an R-winner of the election by modifying the preferences of these votes.
The problems (a) weighted-bribery, (b) $bribery, and (c) bribery for R are variants of weighted-$bribery for R where, respectively: (a) no prices are specified and each vote is treated as having unit cost, (b) no weights are specified, and each vote is treated as having unit weight, and (c) no prices or weights are specified, and each vote is treated as having unit price and unit weight. For plurality, "negative" bribery means no bribed voter can have p as the most preferred candidate in his/her preference order. For approval voting, "negative" bribery means a bribe cannot change someone from disapproving of p to approving of p, and "strongnegative" bribery means every bribed voter must end up disapproving of p.

Definition A.5. For score-based election systems (e.g., plurality, approval, scoring protocols), we write score_{(C,V)}(c) to denote the score of candidate c in election (C,V); naturally we require that c ∈ C. The particular election system that we use will always be clear from context.

B Proofs from Section 3

Theorem 3.1. For each α_1 ≥ α_2 > α_3, CCWM for (α_1, α_2, α_3) elections over 1-maverick-SP societies is NP-complete.

Proof of Theorem 3.1. Without loss of generality, we assume that α_3 = 0. We will reduce from PARTITION. Given a set {k_1, . . . , k_n} of n distinct positive integers that sums to 2K, define the following instance of CCWM. Let C = {p, a, b}, let society's order be a L p L b, let S consist of one voter with preference order a > b > p (note that this voter is the maverick) with weight (2α_1 − α_2)α_1K, and one voter with preference order b > p > a with weight (2α_1 − α_2)(α_1 − α_2)K. (Technically, weights need to be positive, but if α_1 = α_2 we can get the same effect by letting S consist of just the maverick.) Note that score_{(C,S)}(a) = score_{(C,S)}(b) = (2α_1^3 − α_1^2α_2)K and that score_{(C,S)}(p) = (2α_1 − α_2)(α_1 − α_2)α_2K = (2α_1^2α_2 − 3α_1α_2^2 + α_2^3)K.
Let T consist of n manipulators with weights (α_1^2 − α_1α_2 + α_2^2)k_1, . . . , (α_1^2 − α_1α_2 + α_2^2)k_n. If there is a subset of k_1, . . . , k_n that sums to K, then we let all manipulators in T whose weight divided by (α_1^2 − α_1α_2 + α_2^2) is in this subset vote p > a > b, and all manipulators in T whose weight divided by (α_1^2 − α_1α_2 + α_2^2) is not in this subset vote p > b > a. It is immediate that score_{(C,S∪T)}(a) = score_{(C,S∪T)}(b) = score_{(C,S)}(a) + (α_1^2α_2 − α_1α_2^2 + α_2^3)K = (2α_1^3 − α_1α_2^2 + α_2^3)K and that score_{(C,S∪T)}(p) = score_{(C,S)}(p) + (2α_1^3 − 2α_1^2α_2 + 2α_1α_2^2)K = (2α_1^3 − α_1α_2^2 + α_2^3)K. It follows that all candidates are tied, and thus all candidates are winners. For the converse, suppose the manipulators vote so that p becomes a winner. It is easy to see that we can assume that all manipulators rank p first. From the calculations above, it is also easy to see that it is always the case that 2·score_{(C,S∪T)}(p) ≤ score_{(C,S∪T)}(a) + score_{(C,S∪T)}(b). In order for p to be a winner, we thus certainly need the scores of a and b to be equal. This implies that score_{(C,T)}(a) = score_{(C,T)}(b). But then the weights of the manipulators voting p > a > b sum to (α_1^2 − α_1α_2 + α_2^2)K.

Proof. We will again reduce from PARTITION. Given a set {k_1, . . . , k_n} of n distinct positive integers that sums to 2K, define the following instance of CCWM. Let C = {p, a, b, c_1, . . . , c_{m−3}}, let society's order be a L p L c_1 L · · · L c_{m−3} L b, and let S consist of m − 2 voters, each of weight K. For every candidate c in C − {a, b} there is a voter in S that ranks c last. Note that all voters in S are mavericks. This is allowed, since m − 2 ≤ k. Let T consist of n manipulators with weights k_1, . . . , k_n. If there is a subset of k_1, . . .
, k_n that sums to K, then we let all manipulators in T whose weight is in this subset vote a > p > c_1 > · · · > c_{m−3} > b, and all manipulators in T whose weight is not in this subset vote b > c_{m−3} > · · · > c_1 > p > a. It is immediate that all candidates in election (C, S ∪ T) are tied and so p is a winner. For the converse, suppose the manipulators can vote so that p becomes a winner. Note that p needs to gain at least K points over a and over b in T. Clearly, the only way this can happen is if score_{(C,T)}(a) = score_{(C,T)}(b) = K. But then the weights of the voters in T who rank a last add to K.

Proof. First suppose that m ≥ 5. Let L be society's order. Let c be a candidate such that there are at least two candidates to the left of c in L and at least two candidates to the right of c in L. In an m-candidate veto election in a swoon-SP society, c will never be vetoed. Given a CCWM instance for m-candidate veto elections in swoon-SP societies, p can be made a winner if and only if p is never vetoed by the nonmanipulators.

Now consider the case that m = 4. We will reduce from PARTITION. Given a set {k_1, . . . , k_n} of n distinct positive integers that sums to 2K, define the following instance of CCWM. Let C = {p, a, b, c}, let society's order be a L p L b L c, and let S consist of two voters, each with weight K. One voter in S votes a > c > b > p and one voter votes c > a > p > b. Let T consist of n manipulators with weights k_1, . . . , k_n. If there is a subset of k_1, . . . , k_n that sums to K, then we let all manipulators in T whose weight is in this subset veto a and all manipulators in T whose weight is not in this subset veto c. It is immediate that all candidates in election (C, S ∪ T) are tied and so p is a winner. For the converse, suppose the manipulators can vote so that p becomes a winner. Note that p needs to gain at least K points over a and over c in T.
Clearly, the only way this can happen is if score_{(C,T)}(a) = score_{(C,T)}(c) = K. But then the weights of the voters in T who veto a add to K. A very similar proof can be used to show the statement for m = 3. However, the statement for m = 3 also follows immediately from the fact that CCWM for 3-candidate veto elections is NP-complete and the observation below that every 3-candidate election is a swoon-SP election.

Observation 3.6. Every 3-candidate election is a swoon-SP election and a Dodgson_1-SP election and so all complexity results for 3-candidate elections in the general case also hold for swoon-SP elections and Dodgson_1-SP elections.

Proof. Since every 2-candidate vote is single-peaked, it follows immediately that every 3-candidate election is a swoon-SP election. For the Dodgson_1-SP case, suppose society's order is a L b L c. The only votes that are not single-peaked are a > c > b and c > a > b. But note that both of these votes are one adjacent swap away from being single-peaked, by swapping the last two candidates.

Theorem 3.7. For each m ≥ 5, CCWM for m-candidate veto elections in Dodgson_1-SP societies is in P. For m ∈ {3, 4}, this problem is NP-complete.

Proof. The m ≥ 5 case follows using the same proof as the m ≥ 5 case of Theorem 3.5. The m = 4 case follows using the same proof as the m = 4 case of Theorem 3.5 except that, in order for the votes to be within one adjacent swap of being consistent with the societal order, the two voters in S now vote c > b > a > p and a > p > c > b. The m = 3 case follows from Observation 3.6.

Theorem 3.8. For each α_1 ≥ α_2 > α_3, CCWM for (α_1, α_2, α_3) elections over single-caved societies is NP-complete if (α_1 − α_3) ≤ 2(α_2 − α_3) and otherwise is in P.

Proof. Without loss of generality, assume that α_3 = 0. We first consider the case that α_1 > 2α_2. Let (C,V) be an (α_1, α_2, α_3) election over a single-caved society, let W be the total weight of V, and let L be society's order.
Consider the middle candidate in L. This candidate can never be ranked first, and so its score will be at most α_2W, and the sum of the scores of the other two candidates will be at least α_1W. Since α_1 > 2α_2, it follows that the middle candidate will never be a winner if W > 0. Given an instance of CCWM for (α_1, α_2, 0) elections over single-caved societies, p can be made a winner if and only if (1) p is the middle candidate in L and W = 0, or (2) p is not the middle candidate in L and p is a winner if all manipulators rank p first, then the middle candidate, and then the last candidate. All this is easy to check in polynomial time.

Now consider the case that α_1 ≤ 2α_2. We will show that in this case PARTITION can be reduced to CCWM for (α_1, α_2, 0) elections over single-caved societies. Given a set {k_1, . . . , k_n} of n distinct positive integers that sums to 2K, define the following instance of CCWM. Let C = {p, a, b}, let society's order be a L p L b, and let S consist of two voters, each with weight (2α_2 − α_1)K. One voter in S votes a > b > p and one voter votes b > a > p. (Technically, weights need to be positive, but if α_1 = 2α_2 we can get the same effect by letting S = ∅.) Let T consist of n manipulators with weights (α_1 + α_2)k_1, . . . , (α_1 + α_2)k_n. If there exists a subset of {k_1, . . . , k_n} that sums to K, we let the manipulators whose weight divided by (α_1 + α_2) is in this subset vote a > p > b and the manipulators whose weight divided by (α_1 + α_2) is not in this subset vote b > p > a. It is easy to see that score_{(C,S∪T)}(a) = score_{(C,S∪T)}(b) = (2α_2 − α_1)(α_1 + α_2)K + α_1(α_1 + α_2)K = 2α_2(α_1 + α_2)K = score_{(C,S∪T)}(p), and so p is a winner. For the converse, suppose p can be made a winner. Since p is the middle candidate in L, p cannot be ranked first. Without loss of generality we can assume that the voters in T vote a > p > b or b > p > a.
It follows that score_{(C,S∪T)}(p) = 2α_2(α_1 + α_2)K. Since score_{(C,S∪T)}(a) + score_{(C,S∪T)}(b) = 4α_2(α_1 + α_2)K, by the argument above, the only way p can be a winner is if a and b tie in T. But then the weights of the manipulators voting a > p > b sum to (α_1 + α_2)K.

C Proofs from Section 4

In the following subsections we provide the missing proofs from Section 4.

C.1 Proofs of Theorems 4.1, 4.4, and 4.6

Theorem 4.1. CCAV and CCDV for approval elections over log-maverick-SP societies are each in P. For CCAV, the complexity remains in P even for the case where no limit is imposed on the number of mavericks in the initial voter set, and the number of mavericks in the set of potential additional voters is logarithmically bounded (in the overall problem input size).

Proof. Consider the case of CCAV. We will handle directly the stronger case mentioned in the theorem, namely the one with no limit on the number of mavericks in the initial voter set. There of course will be an O(log(ProblemInputSize)) limit on the number of mavericks in the set of voters to potentially add. Let that (easy, nondecreasing) upper-bounding function be called f. So, suppose we are given an input instance of this problem. Let K, which is part of the input, be the limit on the number of voters we are allowed to add. We start by doing the obvious syntactic checks, and we also check that the number of voters in the additional voter set who are not consistent with the input societal order is at most f(ProblemInputSize). If any of these checks fail, we reject. Now, we will show how to build a polynomially long list of instances of the CCAV problem over single-peaked elections such that our control goal is possible to achieve if and only if one or more of those control problems has a goal that can be achieved. (That is, we will implicitly give a polynomial-time disjunctive truth-table reduction to the single-peaked case.) Our construction is as follows.
For each choice of which mavericks from the additional voter set to add to our election, we will generate at most one instance of a single-peaked control question. Since there are at most logarithmically many such mavericks, and the number of cases we have to look at is the cardinality of the powerset of the set of mavericks among the additional voters, the number of instances we generate is polynomially bounded. For each choice A of which mavericks from among the additional voter set to add to the main election, we generate at most one instance as follows. If |A| > K, we will generate no instance, as that choice is trying to add illegally many additional voters. Otherwise, we generate a single-peaked election instance as follows. Move the elements of A from the additional voter set to the main election. Remove all remaining mavericks from the additional voter set. Demaverickify our election as follows: For each maverick voter v, for each candidate c that v approves, add a new voter who approves of only c (and so certainly is consistent with the single-peaked societal order). Then remove all the maverick voters. Note that this demaverickification process does not change the approval counts of the election and does ensure that the electorate is single-peaked. The entire demaverickification process does not increase the problem's size by more than a polynomial, since no voter is replaced by more than |C| − 1 voters. Replace K by K − |A|. The resulting instance is the instance that this choice of A adds to our collection of instances. So, we have created a polynomial-length list of (polynomial-sized) instances of the single-peaked CCAV problem. It is easy to see that our control goal can be achieved exactly if for at least one of these instances the control goal can be achieved.
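The construction just described can be sketched in Python. This is our own illustrative rendering, not the paper's pseudocode: we take an approval vote consistent with the societal order to be one approving a contiguous interval of the axis, and we plug in a brute-force stand-in for the actual polynomial-time single-peaked CCAV algorithm of [23].

```python
from itertools import combinations

def is_interval(approved, axis):
    # our reading of consistency for approval votes: the approved set
    # must form a contiguous interval of the societal axis
    idx = sorted(axis.index(c) for c in approved)
    return idx == list(range(idx[0], idx[0] + len(idx))) if idx else True

def demaverickify(votes, axis):
    # replace each maverick by one singleton approver per approved
    # candidate; approval totals are unchanged, and every resulting
    # vote approves an interval of the axis
    out = []
    for v in votes:
        if is_interval(v, axis):
            out.append(set(v))
        else:
            out.extend({c} for c in v)
    return out

def brute_sp_ccav(axis, V, W, p, K):
    # brute-force stand-in for the polynomial-time single-peaked CCAV
    # algorithm of Faliszewski et al. [23]; correct but exponential in |W|
    for r in range(min(K, len(W)) + 1):
        for S in combinations(W, r):
            votes = V + list(S)
            score = {c: sum(c in v for v in votes) for c in axis}
            if all(score[p] >= score[c] for c in axis):
                return True
    return False

def ccav_with_mavericks(axis, V, W, p, K, solve_sp_ccav=brute_sp_ccav):
    # the disjunctive truth-table reduction from the proof: for every
    # legal choice A of mavericks among the additional voters W, move A
    # into the main election, drop the remaining mavericks, demaverickify,
    # and ask the single-peaked solver with the reduced budget K - |A|
    mavericks = [w for w in W if not is_interval(w, axis)]
    rest = [w for w in W if is_interval(w, axis)]
    for r in range(min(K, len(mavericks)) + 1):
        for A in combinations(mavericks, r):
            main = demaverickify(list(V) + [set(a) for a in A], axis)
            if solve_sp_ccav(axis, main, rest, p, K - r):
                return True
    return False
```

For example, with axis (a, b, c, d), two main voters approving {a}, and additional voters {b, d} (a maverick) and {b}, candidate b can be made a winner with budget 2 but not with budget 1.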
Briefly put, that is because our problem has a successful control action (after passing the initial maverick-cardinality-limit check) exactly if there is some appropriately sized subset of additional voters that we can add to make the favored candidate become a winner. Our above process tries every legal choice of which mavericks from the additional voter set might be the mavericks in the added set. And the instance it generates, based on that choice, will have a successful control action precisely if what remains of our initial bound K, after we subtract the number of added mavericks, is such that there is some number of nonmaverick additional voters who can be added to achieve the desired victory for p. In addition, the instance generated is a single-peaked society, and the transformation we used to make it single-peaked does not in any way affect the answer to the created instance, since the demaverickification occurred only on voters that were (at that point, although some had not started there) in our main voter set, and the only effect that set has on the single-peaked CCAV control question is through the approval totals of each candidate, and our demaverickification did not alter those totals. Our polynomial-length list of instances is composed just of instances of the CCAV problem for approval elections over single-peaked electorates. That problem has a polynomial-time algorithm [23]. And so we run that algorithm on each of the polynomially many instances; if any finds a successful control action, our original problem has a successful control action, and if not, our original problem does not. Thus, our proof of the CCAV case is complete. However, a final comment is needed, since we wish not only to give a yes/no answer, but in fact to find what control action to take, when one is possible.
(Doing so goes beyond what the theorem promises, but we in general will try to give algorithms that not only give yes/no answers but also at least implicitly make available the actual successful actions for the yes instances.) Formally speaking, disjunctive truth-table reductions are about languages, rather than about solutions. Nonetheless, from our construction it is immediately clear how a successful control action for any problem on the list (and the polynomial-time algorithm of Faliszewski et al. [23] in fact gives not merely a yes/no answer but finds a successful control action when one exists) specifies a successful control action for our nearly single-peaked original problem. The CCDV case might at first seem to be almost completely analogous, except that in that case there is no separate pool of additional voters, and the logarithmic bound applies to the entire set of initial voters. However, the reduction approach we took above for CCAV at first seems not to work here. The reason is that for the CCAV case, the mavericks we added could be demaverickified in a way that did not interfere with the call to the single-peaked case of the CCAV approval-voting algorithm, and the mavericks we decided not to add could (for the instance being generated) be deleted. In contrast, for the CCDV case, whichever mavericks we do not delete remain very much a part of the election, and are indeed part of the instance we would like to generate for a call to CCDV. But that means the generated instance may not be single-peaked, as we would like it to be. We can work around this obstacle by noting that the algorithm given by Faliszewski et al. [23] for the single-peaked CCDV approval-voting case in fact does a bit more than is claimed there.
It is easy to see, looking at that paper, that it in effect gives a polynomial-time algorithm for the following problem: Given an instance of CCDV and a societal ordering, where each voter in the instance's voter set carries an extra bit specifying whether the voter is deletable or not deletable, and where every voter specified as deletable must be consistent with the societal ordering (voters specified as not deletable are not required to be consistent with the societal ordering; they may be mavericks), is there a set of at most K (K being part of the input) deletable voters such that if we delete them our preferred candidate p is a winner? The fact that the paper implicitly gives such an algorithm is clear from that paper, since regarding the "deleting voters" actions described on its page 96 we can choose to allow them only on the deletable voters, and the correctness of the Faliszewski et al. [23] algorithm in our case hinges (assuming that nondeletable voters are indeed nondeletable) just on the fact that the deletable voters all respect the societal ordering. (Once we allow a deletable/nondeletable flag, we could in fact demaverickify all the remaining mavericks, and then flag all the 1-approval-each voters added by that demaverickification as nondeletable, but there is no need to do any of that. Doing it requires the deletable/nondeletable flag, and as just noted, if one has that flag, one can outright tolerate (nondeletable) mavericks.) So, we have noted a polynomial-time algorithm that, while not stated as their theorem, is a corollary to their theorem's proof, i.e., the proof of the CCDV result that in [23] appears on that paper's page 93. With this in hand, we can handle our nearly single-peaked CCDV case using the same basic approach we used for CCAV, naturally modified for the CCDV case.
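The flagged CCDV variant just described can be pinned down with a small brute-force reference check. This Python sketch is my own (exponential in the number of deletable voters, so only a sanity-checking stand-in for the polynomial-time algorithm of [23]); the function and parameter names are hypothetical.

```python
from itertools import combinations

def ccdv_flagged_bruteforce(voters, deletable, p, K):
    """voters: list of approval sets; deletable: indices of voters that may be
    deleted (by assumption, all consistent with the societal order).  Is there a
    set of at most K deletable voters whose removal makes p an approval winner?"""
    candidates = sorted({c for v in voters for c in v})
    idx = sorted(deletable)
    for r in range(min(K, len(idx)) + 1):
        for drop in combinations(idx, r):
            dropped = set(drop)
            remaining = [v for i, v in enumerate(voters) if i not in dropped]
            score = {c: sum(1 for v in remaining if c in v) for c in candidates}
            if score.get(p, 0) >= max(score.values(), default=0):
                return True  # p ties-or-beats everyone, hence is a winner
    return False
```

Nondeletable voters (including mavericks) simply stay in every candidate deletion set, which is exactly the behavior the flag is meant to enforce.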
In particular, we again polynomial-time disjunctive truth-table reduce to a problem known to be in polynomial time; in this case, CCDV for approval voting over single-peaked societies, with a deletable/nondeletable flag for each voter, and with all deletable voters required to be nonmavericks, which was argued above to be in polynomial time. Our reduction is that (after checking that the input election is syntactically correct and does not have illegally many mavericks) for each subset of the mavericks that is of cardinality at most K (the input bound on the number of voters to delete), we delete the mavericks in that subset, decrement K by the cardinality of the subset, mark all the nonmaverick voters as deletable, and mark each remaining maverick as nondeletable. This approach works for essentially the same reason as in the CCAV case, and as in that case, we can get not merely a yes/no answer but can even, for the yes cases, produce a successful control action.

Theorem 4.4. For CCAV, the complexity remains in NONDET-TIME[f(ProblemInputSize), poly] even for the case where no limit is imposed on the number of mavericks in the initial voter set and the number of mavericks in the set of potential additional voters is f(·)-bounded (in the overall problem input size).

Proof. Let us handle the CCAV case first. We assume we are in the more general setting where there is no limit on the number of mavericks in the initial voter set. Let (C, V, W, p, K) be our input instance of CCAV for Condorcet and let L be the societal axis. We assume that the candidate set C is of the form C = {b_{m′}, . . . , b_1, p, c_1, . . . , c_{m″}} and that b_{m′} L · · · L b_1 L p L c_1 L · · · L c_{m″} holds. We partition the voters in W into four groups, W_ℓ, W_r, W_p, and W_m: 1. W_ℓ contains those voters from W who are not mavericks and whose most preferred candidate c is such that c L p (intuitively, these are the voters whose top choice is "to the left of p"). 2.
W_r contains those voters from W who are not mavericks and whose most preferred candidate c is such that p L c (intuitively, these are the voters whose top choice is "to the right of p"). 3. W_p contains those voters from W whose most preferred candidate is p. 4. W_m contains the remaining voters from W (i.e., W_m contains those mavericks who do not rank p first; note that there are at most f(ProblemInputSize) voters in W_m). Voters in W_ℓ (in W_r) have an interesting structure in their preference orders. For each v ∈ W_ℓ (each v ∈ W_r), due to his or her single-peakedness, there is a positive integer i such that v prefers each candidate in {b_1, . . . , b_i} to p to each of the remaining candidates (v prefers each candidate in {c_1, . . . , c_i} to p to each of the remaining candidates). Thus, we can conveniently sort the voters in W_ℓ (the voters in W_r) in increasing order of the cardinalities of the sets of candidates they prefer to p. Our (nondeterministic) algorithm works as follows. First, we add min(‖W_p‖, K) voters from W_p. If that makes p a Condorcet winner, we accept. Otherwise, we set K′ = K − min(‖W_p‖, K). If K′ = 0, we reject. Next, using (at most) f(ProblemInputSize) nondeterministic binary decisions, for each voter v in W_m we decide whether or not to add v to the election. Let M be the number of voters we add in this process. If M > K′ then we reject (on this computation path) and otherwise we set K″ = K′ − M. Then, we execute the following algorithm: 1. For each two nonnegative integers K_ℓ and K_r such that K_ℓ + K_r ≤ K″, execute the following steps. (a) Add K_ℓ voters from W_ℓ to the election (in the order described one paragraph above). (b) Add K_r voters from W_r to the election (in the order described one paragraph above). (c) Check whether p is the Condorcet winner of the resulting election. If so, accept. Otherwise, undo the adding of the voters from the two preceding steps.
2. If we have not accepted up to this point, reject on this computation path. It is easy to verify that this algorithm indeed runs in polynomial time (given access to f(ProblemInputSize) nondeterministic steps). Its correctness follows naturally from the observations regarding the preference orders of voters in W_ℓ and W_r (it is clear that, given K_ℓ and K_r, our algorithm adds K_ℓ voters from W_ℓ in an optimal way and adds K_r voters from W_r in an optimal way). Let us now turn to the case of CCDV. The algorithm is very similar. Let (C, V, p, K) be our input instance. We assume that the candidate set C is of the form C = {b_{m′}, . . . , b_1, p, c_1, . . . , c_{m″}} and that b_{m′} L · · · L b_1 L p L c_1 L · · · L c_{m″} holds. We partition the voters in V into four groups, V_ℓ, V_r, V_p, and V_m: 1. V_ℓ contains those voters from V who are not mavericks and whose most preferred candidate c is such that c L p (intuitively, these are the voters whose top choice is "to the left of p"). 2. V_r contains those voters from V who are not mavericks and whose most preferred candidate c is such that p L c (intuitively, these are the voters whose top choice is "to the right of p"). 3. V_p contains those voters from V whose most preferred candidate is p. 4. V_m contains the remaining voters from V (i.e., V_m contains those mavericks who do not rank p first; note that there are at most f(ProblemInputSize) voters in V_m). For each v ∈ V_ℓ (each v ∈ V_r), due to his or her single-peakedness, there is a positive integer i such that v prefers each candidate in {b_1, . . . , b_i} to p to each of the remaining candidates (v prefers each candidate in {c_1, . . . , c_i} to p to each of the remaining candidates). Thus, we can conveniently sort the voters in V_ℓ (the voters in V_r) in decreasing order of the cardinalities of the sets of candidates they prefer to p. It is clear that we should never delete voters from V_p. Our (nondeterministic) algorithm proceeds as follows.
First, for each voter v in V_m we make a nondeterministic decision whether or not to delete v from the election. Let M be the number of voters we delete in this process. If M > K then we reject on this computation path and otherwise we set K′ = K − M. Next, we execute the following algorithm: 1. For each two nonnegative integers K_ℓ and K_r such that K_ℓ + K_r ≤ K′, execute the following steps: (a) Delete K_ℓ voters from V_ℓ (in the order described one paragraph above). (b) Delete K_r voters from V_r (in the order described one paragraph above). (c) Check whether p is the Condorcet winner of the resulting election. If so, accept. Otherwise, undo the deleting of the voters from the two preceding steps. 2. If we have not accepted so far, reject on this computation path. Correctness and polynomial running time of the algorithm (given access to f(ProblemInputSize) nondeterministic steps) follow analogously to the CCAV case.

Theorem 4.6. For each k, CCAC and CCDC for plurality over k-maverick-SP societies are in P.

Proof. The main idea of our proof is analogous to that of the proof of Theorem 4.1, but the details of demaverickification are different and, as a result, we can only handle a constant number of mavericks. We handle the CCAC case first. Let I = (C, A, V, p, K) be our input instance of CCAC for plurality and let L be the societal axis. Let k′ be the number of mavericks in V (k′ ≤ k) and let M = {m_1, . . . , m_{k′}} be the subcollection of V containing exactly these k′ maverick voters. Our algorithm proceeds as follows: 1. For each vector B = (b_1, . . . , b_{k′}) ∈ (C ∪ A)^{k′} of candidates, execute the following steps (intuitively, we intend to enforce that candidates b_1, . . . , b_{k′} are the top-ranked candidates of the voters in M and that it is impossible to change the top-ranked candidates of the voters in M by adding other candidates).
(a) If for any voter m_i, 1 ≤ i ≤ k′, it holds that m_i prefers some candidate in (C ∪ {b_j | 1 ≤ j ≤ k′}) − {b_i} to b_i, then drop this B and return to Step 1. (This condition guarantees that it is possible to ensure, via adding candidates from A, that for each i, 1 ≤ i ≤ k′, voter m_i ranks candidate b_i first among the participating candidates.) (b) Set C′ = C ∪ {b_j | 1 ≤ j ≤ k′, b_j ∈ A}. (c) Set A′ = (A − {b_j | 1 ≤ j ≤ k′, b_j ∈ A}) − {a ∈ A | some voter m_i, 1 ≤ i ≤ k′, prefers a to b_i}. (d) Set K′ = K − ‖{b_j | 1 ≤ j ≤ k′, b_j ∈ A}‖. If K′ < 0 then drop this B and return to Step 1. (e) Form a voter collection V′ that is identical to V except that we restrict the voters' preferences to candidates in C′ ∪ A′ and, for each voter m_i, 1 ≤ i ≤ k′, we replace m_i's preference order with an easily computable preference order over C′ ∪ A′ that ranks b_i first and is single-peaked with respect to L. (f) Check whether (C′, A′, V′, p, K′) is a yes instance of CCAC for plurality over single-peaked electorates (this is in P by [23]); if so, accept. 2. If the algorithm has not accepted so far, reject. It is easy to verify that the above algorithm indeed runs in polynomial time: there are exactly ‖C ∪ A‖^{k′} choices of the vector B to test, and for each fixed B each step can clearly be performed in polynomial time. It remains to show that the algorithm is correct. Let us assume that I is a yes instance. We will show that in this case the algorithm accepts. Let A″ be a subset of A such that ‖A″‖ ≤ K and p is a winner of election E″ = (C ∪ A″, V). Let B″ = (b″_1, . . . , b″_{k′}) be the vector of candidates from C ∪ A″ such that for each i, 1 ≤ i ≤ k′, in E″ voter m_i ranks b″_i first. We claim that our algorithm accepts at the latest when it considers vector B″. First, by our choice of B″ it is clear that in Step (1a) we do not drop B″. Let C′, A′, K′, and V′ be as computed by our algorithm for B = B″. By our choice of B″, it is clear that A″ − {b″_i | 1 ≤ i ≤ k′} ⊆ A′.
Thus, there is a set A‴ ⊆ A′ such that ‖A‴‖ ≤ K′ and p is a winner of election (C′ ∪ A‴, V). Further, since every voter m_i, 1 ≤ i ≤ k′, prefers candidate b″_i to all other candidates in C′ ∪ A′, it holds that p is a winner of election (C′ ∪ A‴, V′). That is, (C′, A′, V′, p, K′) is a yes instance. Similarly, it is easy to see that the construction of the instances (C′, A′, V′, p, K′), and in particular the construction of V′ in Step (1e), ensures that if the algorithm accepts then I is a yes instance. This completes the discussion of the CCAC case. Let us now move on to the case of CCDC. As in the case of CCAC, we will, essentially, reduce the problem to the case where all voters are single-peaked. However, we will need the following more general variant of the CCDC problem.

Definition C.1. Let R be an election system. In the CCDC with restricted deleting problem for R, we are given an election (C, V), a candidate p ∈ C, a set F ⊆ C such that p ∈ F, and a nonnegative integer K. We ask if there is a set C′ ⊆ C such that (a) ‖C′‖ ≤ K, (b) C′ ∩ F = ∅, and (c) p is a winner of the R election (C − C′, V).

That is, in CCDC with restricted deleting we can specify which candidates are impossible to delete. The following result is a direct corollary to the proof of Faliszewski et al. [23] that CCDC for plurality is in P for single-peaked electorates.

Observation C.2 (Implicit in Faliszewski et al. [23]). CCDC with restricted deleting is in P for plurality over single-peaked electorates.

Let I = (C, V, p, K) be our input instance of CCDC for plurality and let L be the societal axis. Let k′ be the number of mavericks in V (k′ ≤ k) and let M = {m_1, . . . , m_{k′}} be the subcollection of V that contains these k′ maverick voters. Our algorithm works as follows: 1. For each vector B = (b_1, . . . , b_{k′}) ∈ C^{k′} of candidates, execute the following steps: (b) Set F′ = {b_i | 1 ≤ i ≤ k′} ∪ {p}.
(c) Set C′ = C − {c ∈ C | there is an i, 1 ≤ i ≤ k′, such that m_i prefers c to b_i}. (d) Set K′ = K − ‖{c ∈ C | there is an i, 1 ≤ i ≤ k′, such that m_i prefers c to b_i}‖. If K′ < 0 then drop this B and return to Step 1. (e) Form a voter collection V′ that is identical to V except that we restrict the voters' preferences to candidates in C′ and, for each voter m_i, 1 ≤ i ≤ k′, we replace m_i's preference order with an easily computable preference order over C′ that ranks b_i first and is single-peaked with respect to L. (f) Using Observation C.2, check if (C′, V′, p, F′, K′) is a yes instance of CCDC with restricted deleting for single-peaked plurality elections with societal axis L. If so, accept. 2. If the algorithm has not accepted so far, reject. Using the same arguments as in the case of CCAC, we can see that this algorithm is correct and runs in polynomial time.

C.2 Proofs of Theorems 4.7, 4.8, and 4.15

Proof of Theorem 4.7. We first consider the case of CCAC. We easily note that CCAC for plurality over swoon-SP societies is in NP. It remains to show that it is NP-hard, and we do so by giving a reduction from X3C. Let I = (B, S) be our input X3C instance, where B = {b_1, . . . , b_{3k}} and S = {S_1, . . . , S_n}. Without loss of generality, we assume that k ≥ 2 and n ≥ 4. For each b_i ∈ B, we set ℓ_i to be the number of sets in S that contain b_i. We construct an election E = (C ∪ A, V), where C = B ∪ {p, d} is the set of registered candidates, A = {a_1, . . . , a_n} is the set of spoiler (unregistered) candidates, and V is a collection of votes. Each candidate a_i in A corresponds to the set S_i in S. We assume that the societal axis L is p L d L b_1 L · · · L b_{3k} L a_1 L · · · L a_n. (Our proof works for any easily computable axis.)
Collection V contains the following (6kn) + (n) + (2nk + k − n) + (2nk) + Σ_{i=1}^{3k}(2nk + 2k − 2kℓ_i) votes; each of the five parenthesized terms in this expression corresponds to an item in the description of the votes below. (For each vote we only specify up to the two top-ranked candidates. Note that voters in swoon-SP societies can legally pick any two candidates to be ranked in the top two positions of their votes. This is so because the top-ranked candidate can be chosen freely as the candidate to whom the voter swoons, and the second-ranked candidate can be chosen to be the voter's peak on the societal axis. We assume that the remaining positions in each vote, irrelevant from the point of view of our proof, are filled in an easily computable way consistent with the societal axis L. For example, each voter we describe below could rank the candidates as follows: (a) in the first up to two positions of the vote he or she would rank the candidates as described below (appropriately choosing the candidate to whom he or she swoons, and the candidate who takes the role of the voter's peak on the societal axis), and (b) in the remaining positions the voter would first rank the remaining candidates "to the left" of the peak and then those "to the right" of the peak.) 1. For each set S_j ∈ S, for each b_i ∈ S_j, we have 2k votes a_j > b_i > · · ·. 2. For each set S_j ∈ S, we have a single vote a_j > p > · · ·. 3. We have 2nk + k − n voters that rank p first. 4. We have 2nk voters that rank d first. 5. For each b_i ∈ B, we have 2nk + 2k − 2kℓ_i voters that rank b_i first. We note that in election (C, V) the scores of the candidates are as follows: 1. p has 2nk + k points, 2. d has 2nk points, and 3. each candidate b_i ∈ B has 2nk + 2k points. That is, the winners of the plurality election (C, V) are exactly the candidates in B.
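The score bookkeeping of this reduction can be checked mechanically. The following Python sketch is my own (the function name and encoding are hypothetical); it computes the plurality scores of the constructed election after a set A″ of spoiler candidates has been added, using the fact that a vote of the form a_j > x > · · · counts for a_j when a_j participates and for x otherwise.

```python
def plurality_scores(k, n, sets, added):
    """Plurality scores in election (C ∪ A'', V) of the X3C reduction.
    sets: list of n sets, each of 3 elements of range(3*k) (the S_j);
    added: set of indices j of spoiler candidates a_j added to the election."""
    scores = {}
    # p: its 2nk + k - n dedicated voters, plus each 'a_j > p' vote whose a_j is absent
    scores['p'] = (2 * n * k + k - n) + sum(1 for j in range(n) if j not in added)
    scores['d'] = 2 * n * k
    for i in range(3 * k):
        ell = sum(1 for S in sets if i in S)          # how many sets contain b_i
        covered = sum(1 for j in added if i in sets[j])
        # dedicated voters, plus 2k 'a_j > b_i' votes for each absent a_j with b_i in S_j
        scores[('b', i)] = (2 * n * k + 2 * k - 2 * k * ell) + 2 * k * (ell - covered)
    for j in added:
        scores[('a', j)] = 6 * k + 1                  # 6k covering votes + one 'a_j > p' vote
    return scores
```

Running it on a small X3C yes instance confirms the claimed pattern: before adding spoilers the B-candidates beat p, and after adding an exact cover everyone relevant ties at 2nk.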
We claim that there is a set A′ ⊆ A such that ‖A′‖ ≤ k and p is a winner of the plurality election (C ∪ A′, V) if and only if I is a yes instance of X3C (that is, if there exists a collection of exactly k sets from S that union to B; such a collection of sets is called an exact set cover of B). Let A″ be some subset of A. It is easy to see that in election (C ∪ A″, V), the plurality scores of the candidates are as follows: p has score 2nk + k − ‖A″‖, d has score 2nk, each candidate b_i ∈ B has score 2nk + 2k − 2k‖{a_j ∈ A″ | b_i ∈ S_j}‖, and each candidate a_i ∈ A″ has score 6k + 1. Assume that p is a winner of election (C ∪ A″, V). Since d's score is 2nk and p's score is 2nk + k − ‖A″‖, it holds that ‖A″‖ ≤ k. Further, for each b_i ∈ B it holds that b_i's score is no larger than that of p. It is easy to verify that this is possible only if A″ corresponds to an exact set cover of B (the score of each of the 3k candidates in B has to be decreased, and each a_j ∈ A″ corresponds to decreasing the scores of exactly three candidates in B). On the other hand, if A″ corresponds to an exact cover of B, then p is a winner of election (C ∪ A″, V). In such a case ‖A″‖ = k and so the score of p is 2nk. Since each a_j ∈ A″ corresponds to a set S_j ∈ S that contains three unique members of B, the score of each b_i ∈ B is 2nk. The score of d is 2nk as well. Each a_j ∈ A″ has score 6k + 1 < 2nk (this is so because n ≥ 4 and k ≥ 2). The proof is complete. We now move on to the case of CCDC. CCDC for plurality over swoon-SP societies is clearly in NP, and we focus on proving NP-hardness. We do so by giving a reduction from X3C. Let I = (B, S) be an X3C instance, where B = {b_1, . . . , b_{3k}} and S = {S_1, . . . , S_n}. Without loss of generality we assume that k > 5. We use the societal axis p L d L b_1 L · · · L b_{3k} L a_1 L · · · L a_n. We construct an instance of CCDC for plurality as follows. Set A = {a_1, . . .
, a_n} and let E = (C, V) be an election, where C = B ∪ A ∪ {p} and V contains the following groups of votes (for each vote we only specify up to the two top candidates and up to one lowest-ranked candidate; the reader can verify that, using the societal axis L, it is possible to create swoon-SP votes of the form we require). 1. For each S_j ∈ S and for each b_i ∈ S_j, we have one vote a_j > b_i > · · · > p. 2. For each S_j ∈ S, we have one vote a_j > p > · · ·. 3. For each b_i ∈ B, we have k − 1 votes b_i > · · · > p. In this election the candidates have the following scores: 1. p has 0 points, 2. each b_i ∈ B has k − 1 points, and 3. each a_j, 1 ≤ j ≤ n, has 4 points (note that 4 < k − 1). We claim that it is possible to ensure that p is a winner of this election by deleting at most k candidates if and only if I is a yes instance of X3C. First, assume that I is a yes instance of X3C and let A′ be a subset of A such that {S_i | a_i ∈ A′} is an exact cover of B. It is easy to see that p is a plurality winner of election E′ = (C − A′, V): compared to E, in E′ the score of p increases by k, the score of each b_i ∈ B increases by 1, and the scores of the remaining members of A do not change. Thus, p and all members of B tie for victory. On the other hand, assume that there exists a set A″ ⊆ B ∪ A of candidates, ‖A″‖ ≤ k, such that p is a winner of election E″ = (C − A″, V). Since ‖A″‖ ≤ k, there are at least 2k candidates from B in E″ and so the score of p in E″ has to be at least k − 1, to tie with these candidates. However, the only way to increase p's score to k − 1 (or higher) by deleting at most k candidates is to delete k − 1 (or more) candidates from A. Yet if we delete exactly k − 1 candidates from A, then there is some candidate b_i in the election whose score is at least k. Thus, A″ must contain exactly k candidates from A. Deleting these candidates increases p's score to k.
To ensure that the scores of the candidates in B do not exceed k, we must ensure that A″ corresponds to an exact cover of B by sets from S. This completes the proof. By a simple extension of the above proof, we can also show that allowing the number of mavericks to be some root of the input size cannot be handled either. By the proof of Theorem 4.7, (B, S) is a yes instance of X3C if and only if (C, A, V, p, K) is a yes instance of CCAC for plurality. However, of course, we have no guarantee that V contains at most ‖I‖^ε mavericks (with respect to L), where ‖I‖ denotes the input size of (C, A, V, p, K). Yet it is easy to verify that for each positive integer t, (C, A, V, p, K) is a yes instance of CCAC for plurality if and only if (C, A, V ∪ V′_t, p, K) is a yes instance of the same problem, where V′_t is a collection of t blocks of votes that each contain the following 3k + 2 votes: 1. For each i, 1 ≤ i ≤ 3k, there is a single vote that is single-peaked with respect to L and ranks b_i first and p last (note that, by our choice of L in the proof of Theorem 4.7, such a vote exists). 2. There is a single vote that is single-peaked with respect to L and ranks p first. 3. There is a single vote that is single-peaked with respect to L and ranks d first and p last. It is easy to see that by choosing a large enough value of t (but polynomially bounded in ‖I‖^{1/ε}), it is possible to form an instance (C, A, V ∪ V′_t, p, K), whose encoding size is ‖I′‖, that is a yes instance of CCAC for plurality if and only if (B, S) is a yes instance of X3C, and which contains at most ‖I′‖^ε mavericks with respect to the societal axis L (namely, the voters in V). This proves our theorem for the CCAC case. Essentially the same proof approach works for the CCDC case. The crucial observation here is that the proof of the CCDC case of Theorem 4.7 ensures that deleting candidates outside of the set A is never a successful strategy.
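The choice of t in the padding argument can be illustrated numerically. The sketch below is my own simplification: it measures "input size" in number of votes rather than encoding length, and finds the least number t of padding blocks for which the maverick count drops to at most size^ε.

```python
def padding_blocks(num_mavericks, base_votes, block_votes, eps):
    """Least t with num_mavericks <= (base_votes + t * block_votes) ** eps,
    i.e., enough single-peaked padding blocks to dilute the mavericks."""
    t = 0
    while num_mavericks > (base_votes + t * block_votes) ** eps:
        t += 1
    return t
```

The resulting t is polynomial in ‖I‖^{1/ε}, matching the bound used in the proof; the padding votes themselves are single-peaked and, by construction, do not change the answer to the control question.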
Adding the voters V′_t does not affect this observation because in all votes in V′_t candidate p is either ranked first or ranked last (however, of course, for the case of CCDC each of the t blocks of votes in V′_t contains only 3k + 1 votes; the vote that ranks d first and p last is not included, and the candidate d does not occur in any of the other votes). It is easy to see that Theorem 4.15 is a simple corollary to the proof of Theorem 4.7. If m is the total number of candidates involved in the election, then both in Dodgson_{m−2}-SP societies and in PerceptionFlip_{m−2}-SP societies the voters can legally rank any two candidates on top of their votes (see the lemma below). Further, the societal axis in the CCDC part of the proof of Theorem 4.7 is such that the voters can easily rank p last if need be. This is all that we need for the proof of Theorem 4.7 to work for the case of Theorem 4.15.

Lemma C.3. Let C = {c_1, . . . , c_m} be a set of candidates, m ≥ 2, and let L be a societal axis over C such that c_1 L c_2 L · · · L c_m. For each two candidates c_i, c_j ∈ C, there exist two preference orders of the form c_i > c_j > · · · such that the first is nearly single-peaked in the sense of Dodgson_{m−2}-SP societies and the second is nearly single-peaked in the sense of PerceptionFlip_{m−2}-SP societies. Further, if i ≠ 1 and j ≠ 1, it is possible to ensure that these preference orders rank c_1 last.

Proof. Let c_i and c_j be two arbitrary, distinct candidates in C. We first consider the case of Dodgson_{m−2}-SP societies. Let >′ be an arbitrary preference order that is single-peaked with respect to L and that ranks c_i first (and c_1 last, if i ≠ 1 and j ≠ 1). We obtain > from >′ by shifting c_j forward in >′ to the second position (that is, just below c_i). It is easy to see that this requires at most m − 2 swaps.
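The swap count in the Dodgson case is easy to verify directly. A small sketch of my own (not code from the paper): bubble c_j up to the second position by swaps of adjacent candidates and count the swaps; since c_j starts at position at most m − 1 (counting from zero) and stops at position 1, at most m − 2 swaps are used.

```python
def shift_to_second(order, cj):
    """Bubble candidate cj up to position 1 (just below the top candidate) via
    swaps of adjacent candidates; return the new order and the swap count."""
    order = list(order)
    pos = order.index(cj)
    swaps = 0
    while pos > 1:
        order[pos - 1], order[pos] = order[pos], order[pos - 1]
        pos -= 1
        swaps += 1
    return order, swaps
```

For example, with axis c_1 L c_2 L c_3 L c_4, the order c_3 > c_2 > c_1 > c_4 is single-peaked and ranks c_1 last until c_4 is shifted to second place, which takes exactly m − 2 = 2 swaps.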
For the case of PerceptionFlip_{m−2}-SP societies, note that by using at most m − 2 swaps of adjacent candidates it is possible to obtain from L a societal axis L′ in which candidates c_i and c_j are adjacent (and in which, if i ≠ 1 and j ≠ 1, it still holds that c_1 L′ c_k for each k, 2 ≤ k ≤ m). Clearly, there is a preference order > that is single-peaked with respect to L′ and that ranks c_i first and c_j second (and c_1 last, if i ≠ 1 and j ≠ 1).

C.3 Proof of Theorem 4.14

Theorem 4.14. For each constant k, CCAC and CCDC for plurality elections are in P for k-local elections.

We first give a polynomial-time algorithm for CCAC for plurality k-local elections. The main idea of our algorithm is the following. Let p be the candidate whose victory we want to ensure in our input k-local instance of plurality CCAC. We first add up to 2k candidates so that the score of p is fixed, and then we run a dynamic-programming algorithm that ensures that no candidate has score higher than this fixed score of p. Of course, we do not know which candidates to add in the first part of the algorithm, so we perform an exhaustive search (since k is a constant, it is possible to perform such a search in polynomial time). We will first describe the dynamic-programming algorithm in Lemma C.4 and then we will provide the main algorithm. Before we proceed with this plan, we need to provide some additional notation. Let E = (C ∪ A, V) be a k-local plurality election, where we interpret C as the registered candidates and A as the spoiler candidates. Let L be the societal axis for E. We rename the candidates so that D = C ∪ A = {d_1, . . . , d_m} and d_1 L d_2 L · · · L d_m. For each set B ⊆ D, we define lt(B) to be the minimal (leftmost) element of B with respect to L and rt(B) to be the maximal (rightmost) element of B with respect to L. For each d_i ∈ D, we define S(d_i) to be the family of sets {N(L, C ∪ A′, d_i, k) | A′ ⊆ A, d_i ∈ C ∪ A′}.
The reader can verify that each S(d_i) contains a number of sets that is at most polynomial in ‖C ∪ A‖^k and that each S(d_i) is easily computable (to compute S(d_i) it suffices to consider sets A′ of cardinality at most 2k + 1).

Lemma C.4. Let E = (C ∪ A, V) be a plurality election, where C = {c_1, . . . , c_{m′}} and A = {a_1, . . . , a_{m″}}, such that E is k-local for some positive integer k. There exists an algorithm that, given election E, integer k, a societal axis L with respect to which E is k-local, and a nonnegative integer t, outputs the cardinality of a smallest (in terms of cardinality) set A′ ⊆ A such that the plurality scores of all candidates in election (C ∪ A′, V) are at most t, or indicates that no such set A′ exists. This algorithm runs in time polynomial with respect to (‖C ∪ A‖ + ‖V‖)^k.

Proof. The proof of this lemma is a much extended version of the proof of Lemma 3.7 of [23]. Let the notation be as in the statement of the lemma. We assume that C is nonempty. We let D = C ∪ A and, without loss of generality, we rename the candidates so that D = {d_1, . . . , d_m}, where m = m′ + m″, and d_1 L d_2 L · · · L d_m. Without loss of generality, we assume that d_1, d_m ∈ C (if this were not the case, we could extend C to include two additional candidates, ranked last by all voters, without destroying the k-locality of the election). For each d_i ∈ D and each D′ ∈ S(d_i) we define f(d_i, D′) to be the cardinality of a smallest (with respect to cardinality) set A′ ⊆ A such that: 1. For each candidate d_j ∈ C ∪ A′ such that j ≤ i it holds that score_{(C∪A′,V)}(d_j) ≤ t. 2. If d_{j′} = lt(D′) and d_{j″} = rt(D′) then D′ = (C ∪ A′) ∩ {d_{j′}, d_{j′+1}, . . . , d_{j″}}. (Since d_1, d_m ∈ C, this is equivalent to D′ = N(L, C ∪ A′, d_i, k).) The values f(d_i, D′) can be computed by dynamic programming, proceeding from left to right along the axis L, which yields the lemma. We can now describe the main CCAC algorithm: 1. For each set D′ ∈ S(p), i.e., for each possible k-radius neighborhood of p, execute the following steps. 2. Set K_p = ‖D′ ∩ A‖ (the number of spoiler candidates whose addition fixes this neighborhood) and let t be the plurality score of p in election (C ∪ (D′ ∩ A), V) (by k-locality, this score depends only on D′). 3. (Check how many candidates are needed to ensure that candidates "to the left" of p do not beat p): (a) Set j′ to be such that d_{j′} = lt(D′). (b) Set C_left = ({d_1, . . . , d_{j′−1}} ∩ C) ∪ D′ and set A_left = {d_1, . . . , d_{j′−1}} ∩ A. (C_left is the set of all registered candidates "to the left" of D′, union D′ (we treat D′ as already fixed); A_left is the set of spoiler candidates "to the left" of D′.) (c) Using Lemma C.4, compute the minimal number of candidates from A_left that need to be added to election (C_left, V) so that each candidate in the resulting election has score at most t. Call this number K_left. If it is impossible to achieve the desired effect, drop this D′. 4. (Check how many candidates are needed to ensure that candidates "to the right" of p do not beat p): (a) Set j″ to be such that d_{j″} = rt(D′). (b) Set C_right = ({d_{j″+1}, . . . , d_m} ∩ C) ∪ D′ and set A_right = {d_{j″+1}, . . . , d_m} ∩ A. (C_right is the set of all registered candidates "to the right" of D′, union D′ (we treat D′ as already fixed); A_right is the set of spoiler candidates "to the right" of D′.) (c) Using Lemma C.4, compute the minimal number of candidates from A_right that need to be added to election (C_right, V) so that each candidate in the resulting election has score at most t. Call this number K_right. If it is impossible to achieve the desired effect, drop this D′. 5. If K_p + K_left + K_right ≤ K then accept. If the above procedure does not accept for any D′ then reject. By Lemma C.4 and the fact that k is a fixed constant, it is easy to see that this algorithm works in polynomial time. The correctness is easy to observe as well. We now move on to the case of CCDC for plurality k-local elections.

Lemma C.6. For each fixed k, CCDC for k-local plurality elections, where the societal axis L is given, is in P.

Proof. Let E = (C, V) be our input election, p be the preferred candidate, and K be a nonnegative integer. Our goal is to determine whether it is possible to ensure that p is a winner by deleting at most K candidates. Let L be the input societal axis with respect to which E is k-local. We rename the candidates in C so that C = {ℓ_{m′}, . . . , ℓ_1, p, r_1, . . . , r_{m″}} and ℓ_{m′} L · · · L ℓ_1 L p L r_1 L · · · L r_{m″}. Recall that by the definition of k-local plurality elections, the score of p depends only on the presence of the k candidates "to the left of p" and the k candidates "to the right of p" (with respect to L). Our algorithm works as follows: 1. For each size-min(k, m′) subset L of {ℓ_1, . . . , ℓ_{m′}} and each size-min(k, m″) subset R of {r_1, . . .
, r m ′′ } execute the following steps: (a) Let i be the largest integer such that ℓ i ∈ L and let j be the largest integer such that r j ∈ R. . . , ℓ 1 , r 1 , . . . r j } − (L ∪ R) (at this point, intuitively, D is the unique smallest set of candidates that one has to delete from C to ensure that the k-radius neighborhood of p is exactly L ∪ R). (b) Let D = {ℓ i , . (c) Execute the following loop: If there is a candidate c ∈ C − D, c = p, such that the score of c in (C − D,V ) is higher than that of p, then add c to D. (d) If D ≤ K then accept. Reject. Since k is a constant, there are only polynomially many pairs of sets L and R to try. Thus, it is easy to see that the algorithm runs in polynomial time. To see the correctness, it suffices to note the following two facts. First, the score of p depends only on the k-radius neighborhood of p. Second, it is impossible to decrease a score of a candidate by deleting (other) candidates, so if for a given k-radius neighborhood of p some candidates still have score higher than p, the only way to ensure that they do not preclude p from winning is by deleting them. 5 C.4 Proof of Theorem 4.16 The following lemma will be very useful in proving Theorem 4.16. Proof. Let us consider the CCAC case first. Let (C, A,V, p, K) be our input instance of CCAC for plurality, where votes in V are single-caved with respect to a given societal axis L. We assume that C ≥ 2 (otherwise p, the only candidate, is already a winner). Let a be some candidate in A. We claim that if p is not a winner of election E = (C,V ) then p is not a winner of election E ′ = (C ∪ {a},V ). First, adding a cannot increase p's plurality score. Thus, if p's plurality score in E is 0 then it is 0 in E ′ as well and p is not a winner in either of them. Thus, let us assume that there is at least one voter that prefers p to all other candidates in C. 
This means that we can assume, without loss of generality, that there is a candidate d ∈ C such that for each candidate c ∈ C − {p, d} it holds that p L c L d. By Lemma C.7, p and d are the only candidates in election E whose plurality score is nonzero. We assume that p does not win in E, so the score of p is smaller than the score of d. We now consider three possible cases, depending on a's position on the societal axis.
1. If a L p then it is easy to note that in every vote in which p was ranked first prior to adding a, now a is ranked first, and so p is not a winner of the election.
2. If d L a then a is ranked first in each vote in which d was ranked first prior to adding a, and so now p loses to a.
3. If p L a L d then adding a to the election does not change the plurality scores of p and d and thus p still loses to d.
Thus, by induction on the number of added candidates, it is impossible to move p from losing an election to winning it by adding candidates. Our CCAC algorithm simply checks if p is a winner already; it accepts if so and rejects otherwise.
Let us now consider the case of CCDC for plurality and single-caved societies. Let (C, V, p, K) be our input instance where votes in V are single-caved with respect to the given societal axis L. Let us rename the candidates so that C = {c_1, . . . , c_m}, c_1 L · · · L c_m, and let us fix i such that p = c_i. Let D = {{c_1, c_2, . . . , c_i−1, c_j, . . . , c_m} | j > i} ∪ {{c_1, . . . , c_k, c_i+1, c_i+2, . . . , c_m} | k < i}. By Lemma C.7 and the definition of single-cavedness, it is easy to see that p can become a winner of election (C, V) by deleting at most K candidates if and only if V = ∅ or there is a set D in D such that |D| ≤ K and p is a winner of election (C − D, V).

D Proof from Section 5

We provide the proof of Section 5's theorem.

Theorem 5.1.
Bribery, negative-bribery, and strong-negative-bribery for approval elections over log-maverick-SP societies are each in P, in both the standard and the marked model.

Proof. We first prove in detail the "bribery" case (the first of the three types of bribery that the theorem covers), in both its marked-model and standard-model cases. Let us look first at the marked model. In this case, much as in the proof of Theorem 4.1, we will note that an earlier paper is implicitly obtaining a stronger result than what its theorem states, and then we will use that observation to build a disjunctive truth-table reduction from our problem to that problem. In this case, the earlier paper is not Faliszewski et al. [23] as it was in Theorem 4.1, but rather is Brandt et al. [4]. The result of theirs that we focus on is their theorem stating that approval bribery can be solved in polynomial time for single-peaked societies. This is "Theorem 4" of Brandt et al. [4], but for its proof/algorithm, one needs to refer to Appendix A.2 of that paper's technical report version [5]. Now, by inspection of that proof, one can see that the algorithm given there does not need all the voters it operates on to respect the societal ordering. Rather, it can handle perfectly well the case where each voter has an "open to bribes" flag, and every voter whose open to bribes flag is set respects the single-peaked ordering. (In that algorithm, as modified to handle this, the surpluses are computed with respect to all the voters, both those with the flag set and those with the flag unset. But then the pool of voters that the algorithm looks at to try to find a good bribe is limited to just those with the open to bribes flag set, although the surplus recomputations throughout the algorithm are always with respect to the entire set of voters. This is a slight extension of the algorithm, but is clearly correct, for the same reasons the original algorithm is.)
Note that the algorithm can bribe voters whose open to bribes flag is set, but (by the nature of the algorithm) will only bribe them to values consistent with the societal order. Call the language problem defined by this FlagBribe. Having made the previous paragraph's observation, we can now disjunctive truth-table reduce to FlagBribe, which is put into polynomial time by the above algorithm (due to Brandt et al. [4], except slightly adapted as just mentioned). We do so as follows. Suppose our logarithmic bound is given. Given an input to our problem, we check that the number of voters with the maverick-enabled flag (not to be confused with the open to bribes flag mentioned above) set does not exceed the logarithmic bound; if it does, reject immediately. Otherwise, for each member A of the powerset of the set of maverick-enabled voters (i.e., for each choice of which of the maverick-enabled voters we will bribe), we generate at most one instance of FlagBribe as follows. If |A| > K, generate no instance. (The number of voters being bribed would exceed the problem's bound.) Otherwise, generate an instance of FlagBribe that has the same set of voters as our instance, except with the members of A modified to each approve of p and only p. The voters who in our original problem were not maverick-enabled will all have their open to bribes flag set. All others will have their open to bribes flag unset. Set K to now be K − |A|. So, since there are a logarithmic number of maverick-enabled voters, the powerset above is polynomial in size, and we generate a polynomial number of (polynomial-sized) instances of FlagBribe. It is clear that our original problem has a successful bribe exactly if at least one of those instances has a successful bribe (i.e., belongs to FlagBribe).
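The enumeration at the heart of this disjunctive truth-table reduction (try every subset A of the maverick-enabled voters, bribe its members to approve only p, and ask FlagBribe with budget K − |A|) can be sketched as follows. This is a minimal illustration: the vote representation (approval sets) and the `flagbribe_solver` interface are assumptions, since the underlying FlagBribe algorithm is due to Brandt et al. [4] and is not reproduced here.

```python
from itertools import combinations

def bribery_via_flagbribe(voters, maverick_enabled, p, K, flagbribe_solver):
    """Sketch of the reduction: voters is a list of approval sets,
    maverick_enabled is a set of voter indices, and flagbribe_solver is
    an assumed oracle for the FlagBribe problem (illustration only)."""
    mav = sorted(maverick_enabled)
    for r in range(len(mav) + 1):
        for A in combinations(mav, r):
            if len(A) > K:
                continue  # bribing A alone would already exceed the budget
            # Members of A are bribed to approve of p and only p.
            new_voters = [({p} if i in A else v)
                          for i, v in enumerate(voters)]
            # Voters that were not maverick-enabled get the open-to-bribes flag.
            open_to_bribes = {i for i in range(len(voters))
                              if i not in maverick_enabled}
            if flagbribe_solver(new_voters, open_to_bribes, p, K - len(A)):
                return True  # some generated instance has a successful bribe
    return False
```

Since the number of maverick-enabled voters is logarithmically bounded, the outer enumeration stays polynomial, matching the counting argument above.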
This is so, due to the properties of the algorithm underlying FlagBribe, and the fact that if there is a bribe of K voters that makes p a winner, then the same bribe action, except with any subset of them instead bribed to approve only of p, will also make p a winner. Changing a voter to approve of p and only p is a best possible bribe of that voter, if the voter is to be bribed at all. That concludes the marked-model case for bribery. We turn now to the standard-model case. In this case, each voter can potentially turn into a maverick. And so with an O(log(ProblemInputSize)) bound on the number of mavericks, as long as the bribery problem itself has a generously large K, even deciding which voters to make into mavericks would seem to involve C^O(log(ProblemInputSize)) options, superpolynomially many, which is potentially a worry if they can be turned into complex mavericks. But we are again saved here by the fact that any good bribe of a voter is at least as good if one just bribes that voter to approve only p (and clearly that vote is also inherently consistent with the societal ordering). That means that if there is a good set of bribes, then there is a good set of bribes that never bribes people to vote in ways that are inconsistent with the societal order. But given that, we can turn this case into our marked-model case. (The calls to FlagBribe that will underlie the handling of that case indeed also may present problems with superpolynomial numbers of options as to which voters to bribe. But due to the single-peakedness-respecting nature of all voters who are open to bribes, that can be handled easily; that is the real power of FlagBribe's underlying algorithm: it uses single-peakedness to tame combinatorial explosion.) In particular, we can proceed here as follows. Take our input. Reject if the number of voters who are inconsistent with the societal ordering exceeds our logarithmic bound.
Otherwise, have the maverick-enabled flag be set for each voter who violates the societal ordering and have the maverick-enabled flag be unset for all other voters. Keep the K parameter the same as it originally was. And then solve this marked-model case as described above. This works, due to the comments of the previous paragraph. That covers in detail the case of bribery. The remaining two cases, negative-bribery and strong-negative-bribery, are similarly proven by noting that one can alter the algorithms from Brandt et al. [4] for those two cases, and by noting that if there is a good bribe in these models, then there is a good bribe where no bribed voter will have any approval set other than either "just p" or the empty set.
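The single-caved CCDC characterization from Appendix C.4 (p can be made a winner iff V is empty, or some set D in the family D with |D| ≤ K leaves p a plurality winner of (C − D, V)) admits a direct sketch. The vote representation and the inclusion of the delete-only-one-side boundary cases in the family are illustration choices, not the paper's pseudocode; single-cavedness of the input is assumed, not verified.

```python
def ccdc_single_caved(axis, votes, p, K):
    """Sketch of the single-caved CCDC check. axis lists the candidates
    in societal order L; votes are full preference orders (lists, most
    preferred first); p is the preferred candidate; K is the budget."""
    def wins(remaining):
        # plurality (co-)winner test restricted to the remaining candidates
        scores = {c: 0 for c in remaining}
        for vote in votes:
            top = next(c for c in vote if c in remaining)
            scores[top] += 1
        return scores[p] == max(scores.values())

    if not votes:
        return True  # with no voters, p trivially wins
    i = axis.index(p)
    m = len(axis)
    family = []
    # delete everything left of p together with a suffix c_j, ..., c_m (j > i);
    # the empty-suffix boundary case is included here for completeness
    for j in range(i + 1, m + 1):
        family.append(set(axis[:i]) | set(axis[j:]))
    # delete a prefix c_1, ..., c_k (k < i) together with everything right of p
    for k in range(0, i):
        family.append(set(axis[:k]) | set(axis[i + 1:]))
    return any(len(D) <= K and wins([c for c in axis if c not in D])
               for D in family)
```

The family has only O(m) members, so the check runs in polynomial time, mirroring the argument in the proof.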
19,076
1104.5636
2951202000
In this work, we develop a distributed source-routing algorithm for topology discovery that is suitable for ISP transport networks yet inspired by opportunistic algorithms used in ad hoc wireless networks. We propose a plug-and-play control plane, able to find multiple paths toward the same destination, and introduce a novel algorithm, called adaptive probabilistic flooding, to achieve this goal. By keeping a small amount of state in routers taking part in the discovery process, our technique significantly limits the amount of control messages exchanged with flooding and, at the same time, only minimally affects the quality of the discovered multiple paths with respect to the optimal solution. Simple analytical bounds, confirmed by results gathered with extensive simulations on four realistic topologies, show our approach to be of high practical interest.
Routing is a critical component of the Internet, and as such has long been studied by the scientific community. The problem of finding a path interconnecting any two nodes of a graph is solved by well-known algorithms like Dijkstra and Bellman-Ford, which have been implemented in widely deployed protocols such as OSPF and RIP, respectively. However, interconnecting nodes through a single path (typically, the shortest) does not make the network resilient against failures and traffic surges. Hence, different techniques relying on multiple paths have been proposed. For instance, ECMP @cite_10 aims at balancing load over multiple paths of equal cost. In standard IP MPLS networks, the control and data planes are generally considered jointly; multipath routing is then achieved through a centralized algorithm, solving some standard multicommodity flow problem @cite_15 @cite_4 .
{ "abstract": [ "This paper presents and discusses path selection algorithms to support QoS routes in IP networks. The work is carried out in the context of extensions to the OSPF protocol, and the initial focus is on unicast flows, although some of the proposed extensions are also applicable to multicast flows. We first review the metrics required to support QoS, and then present and compare several path selection algorithms, which represent different trade-offs between accuracy and computational complexity. We also describe and discuss the associated link advertisement mechanisms, and investigate some options in balancing the requirements for accurate and timely information with the associated control overhead. The overall goal of this study is to identify a framework and possible approaches to allow deployment of QoS routing capabilities with the minimum possible impact to the existing routing infrastructure.", "Equal-cost multi-path (ECMP) is a routing technique for routing packets along multiple paths of equal cost. The forwarding engine identifies paths by next-hop. When forwarding a packet the router must decide which next-hop (path) to use. This document gives an analysis of one method for making that decision. The analysis includes the performance of the algorithm and the disruption caused by changes to the set of next-hops.", "" ], "cite_N": [ "@cite_15", "@cite_10", "@cite_4" ], "mid": [ "2120608001", "2169246522", "" ] }
0
1104.4298
2019919182
Gabor filters (GFs) play an important role in many application areas for the enhancement of various types of images and the extraction of Gabor features. For the purpose of enhancing curved structures in noisy images, we introduce curved GFs that locally adapt their shape to the direction of flow. These curved GFs enable the choice of filter parameters that increase the smoothing power without creating artifacts in the enhanced image. In this paper, curved GFs are applied to the curved ridge and valley structures of low-quality fingerprint images. First, we combine two orientation-field estimation methods in order to obtain a more robust estimation for very noisy images. Next, curved regions are constructed by following the respective local orientation. Subsequently, these curved regions are used for estimating the local ridge frequency. Finally, curved GFs are defined based on curved regions, and they apply the previously estimated orientations and ridge frequencies for the enhancement of low-quality fingerprint images. Experimental results on the FVC2004 databases show improvements of this approach in comparison with state-of-the-art enhancement methods.
Gabor functions @cite_61 , in the form of Gabor filters (GFs) @cite_32 and Gabor wavelets @cite_0 , are applied for a multitude of purposes in many areas of image processing and pattern recognition. Basically, the intentions for using GFs and log-GFs @cite_45 can be grouped into two categories: the first is to enhance images @cite_24 , and the second is to extract Gabor features from the responses of filterbanks. Typical fields of application include:
{ "abstract": [ "It is suggested that it may be possible to transmit speech and music in much narrower wavebands than was hitherto thought necessary, not by clipping the ends of the waveband, but by condensing the information. Two possibilities of more economical transmission are discussed. Both have in common that the original waveband is compressed in transmission and re-expanded to the original width in reception. In the first or ?kinematical? method a temporary or permanent record is scanned by moving slits or their equivalents, which replace one another in continuous succession before a ?window.? Mathematical analysis is simplest if the transmission of the window is graded according to a probability function. A simple harmonic oscillation is reproduced as a group of spectral lines with frequencies which have an approximately constant ratio to the original frequency. The average departure from the law of proportional conversion is in inverse ratio to the time interval in which the record passes before the window. Experiments carried out with simple apparatus indicate that speech can be compressed into a frequency band of 800 or even 500 c s without losing much of its intelligibility. There are various possibilities for utilizing frequency compression in telephony by means of the ?kinematical? method. In a second method the compression and expansion are carried out electrically, without mechanical motion. This method consists essentially in using non-sinusoidal carriers, such as repeated probability pulses, and local oscillators producing waves of the same type. It is shown that one variety of the electrical method is mathematically equivalent to the kinematical method of frequency conversion.", "", "This paper extends to two dimensions the frame criterion developed by Daubechies for one-dimensional wavelets, and it computes the frame bounds for the particular case of 2D Gabor wavelets. 
Completeness criteria for 2D Gabor image representations are important because of their increasing role in many computer vision applications and also in modeling biological vision, since recent neurophysiological evidence from the visual cortex of mammalian brains suggests that the filter response profiles of the main class of linearly-responding cortical neurons (called simple cells) are best modeled as a family of self-similar 2D Gabor wavelets. We therefore derive the conditions under which a set of continuous 2D Gabor wavelets will provide a complete representation of any image, and we also find self-similar wavelet parametrization which allow stable reconstruction by summation as though the wavelets formed an orthonormal basis. Approximating a \"tight frame\" generates redundancy which allows low-resolution neural responses to represent high-resolution images.", "Abstract Dennis Gabor is mainly known for the invention of optical holography and the introduction of the so-called Gabor functions in communications. A few people know that he was also interested in image processing. In a paper entitled “Information theory in electron microscopy” ( Laboratory Investigation 14 (6), 801–807 (1965)), written in 1965, he examined the problem of image deblurring and was the first to suggest a method for edge enhancement based on principles widely accepted today and implemented in advanced image processing systems. In this paper his ideas are reviewed, their relation to contemporary methods is shown, and some simulations he could not do in 1965 are performed.", "The relative efficiency of any particular image-coding scheme should be defined only in relation to the class of images that the code is likely to encounter. To understand the representation of images by the mammalian visual system, it might therefore be useful to consider the statistics of images from the natural environment (i.e., images with trees, rocks, bushes, etc). 
In this study, various coding schemes are compared in relation to how they represent the information in such natural images. The coefficients of such codes are represented by arrays of mechanisms that respond to local regions of space, spatial frequency, and orientation (Gabor-like transforms). For many classes of image, such codes will not be an efficient means of representing information. However, the results obtained with six natural images suggest that the orientation and the spatial-frequency tuning of mammalian simple cells are well suited for coding the information in such images if the goal of the code is to convert higher-order redundancy (e.g., correlation between the intensities of neighboring pixels) into first-order redundancy (i.e., the response distribution of the coefficients). Such coding produces a relatively high signal-to-noise ratio and permits information to be transmitted with only a subset of the total number of cells. These results support Barlow’s theory that the goal of natural vision is to represent the information in the natural environment with minimal redundancy." ], "cite_N": [ "@cite_61", "@cite_32", "@cite_0", "@cite_24", "@cite_45" ], "mid": [ "2109930949", "90244687", "2138584058", "1994424757", "2167034998" ] }
Curved Gabor Filters for Fingerprint Image Enhancement
Fingerprint Image Enhancement Image quality [2] has a big impact on the performance of a fingerprint recognition system (see e.g. [51] and [8]). The goal of image enhancement is to improve the overall performance by optimally preparing input images for later processing stages. Most systems extract minutiae from fingerprints [41], and the presence of noise can interfere with the extraction. As a result, true minutiae may be missed and false minutiae may be detected, both having a negative effect on the recognition rate. In order to avoid these two types of errors, image enhancement aims at improving the clarity of the ridge and valley structure. With special consideration to the typical types of noise occurring in fingerprints, an image enhancement method should have three important properties: • reconnect broken ridges, e.g. caused by dryness of the finger or scars; • separate falsely conglutinated ridges, e.g. caused by wetness of the finger or smudges; • preserve ridge endings and bifurcations. Enhancement of low quality images (occurring e.g. in all databases of FVC2004 [40]) and very low quality prints like latents (e.g. NIST SD27 [20]) is still a challenge. Techniques based on contextual filtering are widely used for fingerprint image enhancement [41] and a major difficulty lies in an automatic and reliable estimation of the local context, i.e. the local orientation and ridge frequency as input of the GF. Failure to correctly estimate the local context can lead to the creation of artifacts in the enhanced image [32] which consequently tends to increase the number of identification or verification errors. For low quality images, there is a substantial risk that an image enhancement step may impair the recognition performance as shown in [17] (results are cited in Table 1 of Section 5). 
The situation is even worse for very low quality images, and current approaches focus on minimizing the efforts required by a human expert for manually marking information in images of latent prints (see [7] and [56]). The present work addresses these challenges as follows: in the next section, two state-of-the-art methods for orientation field estimation are combined for obtaining an estimation which is more robust than each individual one. In Section 3, curved regions are introduced and employed for achieving a reliable ridge frequency estimation. Based on the curved regions, in Section 4 curved Gabor filters are defined. In Section 5, all previously described methods are combined for the enhancement of low quality images from FVC2004 and performance improvements in comparison to existing methods are shown. The paper concludes with a discussion of the advantages and drawbacks of this approach, as well as possible future directions, in Section 6. Orientation Field Estimation In order to obtain a robust orientation field (OF) estimation for low quality images, two estimation methods are combined: the line sensor method [23] and the gradient-based method [4] (with a smoothing window size of 33 pixels). The OFs are compared at each pixel. If the angle between both estimations is smaller than a threshold (here t = 15°), the orientation of the combined OF is set to the average of the two. Otherwise, the pixel is marked as missing. Afterwards, all inner gaps are reconstructed and, up to a radius of 16 pixels, the orientation of the outer proximity is extrapolated, both as described in [23]. Results of verification tests on all 12 databases of FVC2000 to 2004 [38,39,40] showed a better performance of the combined OF applied for contextual image enhancement than each individual OF estimation [22].
The OF being the only parameter that was changed, lower equal error rates can be interpreted as an indicator that the combined OF contains fewer estimation errors than each of the individual estimations. Simultaneously, we regard the combined OF as a segmentation of the fingerprint image into foreground (endowed with an OF estimation) and background. The information fusion strategy for obtaining the combined OF was inspired by [44]. The two OF estimation methods can be regarded as judges or experts and the orientation estimation for a certain pixel as a judgment. If the angle between both estimations is greater than a threshold t, the judgments are considered as incoherent, and consequently not averaged. If an estimation method provides no estimation for a pixel, it is regarded as abstaining. Orientation estimations for pixels with incoherent or abstaining judges are reconstructed or extrapolated from pixels with coherent judgments. Ridge Frequency Estimation Using Curved Regions In [26], a ridge frequency (RF) estimation method was proposed which divides a fingerprint image into blocks of 16 × 16 pixels, and for each block, it obtains an estimation from an oriented window of 32 × 16 pixels by a method called 'x-signature' which detects peaks in the gray-level profile. Failures to estimate a RF, e.g. caused due to presence of noise, curvature or minutiae, are handled by interpolation and outliers are removed by low-pass filtering. In our experience, this method works well for good and medium quality prints, but it encounters serious difficulties obtaining a useful estimation when dealing with low quality prints. 
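The two-expert fusion rule described above (average the two orientation estimates when they differ by less than t = 15°, otherwise mark the pixel as missing) can be sketched as follows. The array layout, the use of NaN for missing/abstaining pixels, and the angle-doubling trick for averaging orientations modulo π are illustration choices; the paper's gap reconstruction and extrapolation steps are not shown.

```python
import numpy as np

def combine_orientation_fields(of1, of2, t_deg=15.0):
    """Fuse two orientation fields (radians, defined modulo pi).
    np.nan marks pixels where an estimator abstains; pixels where the
    two estimates differ by more than t_deg are marked missing."""
    t = np.deg2rad(t_deg)
    # smallest angle between two undirected orientations (modulo pi)
    d = np.abs(of1 - of2) % np.pi
    d = np.minimum(d, np.pi - d)
    coherent = d <= t  # False wherever either input is NaN
    # average via angle doubling, so orientations are treated modulo pi
    s = np.sin(2 * of1) + np.sin(2 * of2)
    c = np.cos(2 * of1) + np.cos(2 * of2)
    avg = (0.5 * np.arctan2(s, c)) % np.pi
    return np.where(coherent, avg, np.nan)
```

Note that a pixel where one expert abstains (NaN) automatically fails the coherence test and is marked missing, which matches the "abstaining judge" interpretation above.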
In this section, we propose a RF estimation method following the same basic idea -to obtain an estimation from the gray-level profile -but which bears several improvements in comparison to [26]: (i) the profile is derived from a curved region which is different in shape and size from the oriented window of the x-signature method, (ii) we introduce an information criterion (IC) for the reliability of an estimation and (iii) depending on the IC, the gray-level profile is smoothed with a Gaussian kernel, (iv) both, minima and maxima are taken into account and (v) the inverse median is applied for the RF estimate. If the clarity of the ridge and valley structure is disturbed by noise, e.g. caused by dryness or wetness of the finger, an oriented window of 32 × 16 pixels may not contain a sufficient amount of information for a RF estimation (e.g. see Figure 3, left image). In regions where the ridges run almost parallel, this may be compensated by averaging over larger distances along the lines. However, if the ridges are curved, the enlargement of the rectangular window does not improve the consistency of the gray-profile, because the straight lines cut neighboring ridges and valleys. In order to overcome this limitation, we propose curved regions which adapt their shape to the local orientation. It is important to take the curvature of ridges and valleys into account, because about 94 % of all fingerprints belong to the classes right loop, whorl, left loop and tented arch [30], so that they contain core points and therefore regions of high curvature. Curved Regions Let (x c , y c ) be the center of a curved region which consists of 2p + 1 parallel curves and 2q + 1 points along each curve. The midpoints (depicted as blue squares in Figure 2) of the parallel curves are initialised by following both directions orthogonal to the orientation for p steps of one pixel unit, starting from the central pixel (x c , y c ) (red square). 
At each step, the direction is adjusted, so that it is orthogonal to the local orientation. If the change between two consecutive local orientations is greater than a threshold, the presence of a core point is assumed, and the iteration is stopped. Since all x-and y-coordinates are decimal values, the local orientation is interpolated. Nearest neighbour and bilinear interpolation using the orientation of the four neighboring pixels are examined in Section 5. Starting from each of the 2p + 1 midpoints, curves are obtained by following the respective local orientation and its opposite direction (local orientation θ + π) for q steps of one pixel unit, respectively. Curvature estimation As a by-product of constructing curved regions, a pixel-wise estimate of the local curvature is obtained using the central curve of each region (cf. the red curves in Figures 2 and 3). The estimate is computed by adding up the absolute values of differences in orientation between the central point of the curve and the two end points. The outcome is an estimate of the curvature, i.e. integrated change in orientation along a curve (here: of 65 pixel steps). For an illustration, see Figure 4. The curvature estimate can be useful for singular point detection, fingerprint alignment or as additional information at the matching stage. Ridge Frequency Estimation Gray values at the decimal coordinates of the curve points are interpolated. In this study, three interpolation methods are taken into account: nearest neighbor, bilinear and bicubic [47] (considering 1, 4 and 16 neighboring pixels for the gray value interpolation, respectively). The gray-level profile is produced by averaging the interpolated gray values along each curve (in our experiments, the minimum number of valid points is set to 50% of the points per line). Next, local extrema are detected and the distances between consecutive minima and consecutive maxima are stored. 
The RF estimate is the reciprocal of the median of the inter-extrema distances (IEDs). The proportion p_maxmin of the largest IED to the smallest IED is regarded as an information criterion. Large values of p_maxmin are considered as an indicator for the occurrence of false extrema in the profile or for the absence of true extrema (see Figure 5). Only RF estimations where p_maxmin is below a threshold are regarded as valid (for the tests in Section 5, we used the threshold p_maxmin ≤ 1.5). If p_maxmin of the gray-level profile produced by averaging along the curves exceeds the threshold, then, in some cases, it is still possible to obtain a feasible RF estimation by smoothing the profile, which may remove false minima and maxima, followed by a repetition of the estimation steps (see Figure 5). A Gaussian with a size of 7 and σ = 1.0 was applied in our study, and a maximum number of three smoothing iterations was performed. In an additional constraint, we require that at least two minima and two maxima are detected and that the RF estimation is located within an appropriate range of valid values (between 1/3 and 1/25). As a final step, the RF image is smoothed by averaging over a window of size w = 49 pixels. Curved Gabor Filters Definition The Gabor filter is a two-dimensional filter formed by the combination of a cosine with a two-dimensional Gaussian function and it has the general form:

g(x, y, θ, f, σ_x, σ_y) = exp(−(1/2) · (x_θ^2 / σ_x^2 + y_θ^2 / σ_y^2)) · cos(2π · f · x_θ)   (1)

x_θ = x · cos θ + y · sin θ   (2)

y_θ = −x · sin θ + y · cos θ   (3)

In (1), the Gabor filter is centered at the origin. θ denotes the rotation of the filter relative to the x-axis and f the local frequency. σ_x and σ_y signify the standard deviations of the Gaussian function along the x- and y-axis, respectively. A curved Gabor filter is computed by mapping a curved region to a two-dimensional array, followed by a point-wise multiplication with an unrotated GF (θ = 0).
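Equations (1)–(3) can be sketched directly. The window half-sizes are illustration parameters, not values from the paper:

```python
import numpy as np

def gabor_kernel(size_x, size_y, theta, f, sigma_x, sigma_y):
    """Straight Gabor filter of Equations (1)-(3): a 2-D Gaussian
    multiplied by a cosine of frequency f along the rotated x-axis.
    size_x, size_y are window half-sizes (illustration parameters)."""
    y, x = np.mgrid[-size_y:size_y + 1, -size_x:size_x + 1].astype(float)
    x_theta = x * np.cos(theta) + y * np.sin(theta)   # Eq. (2)
    y_theta = -x * np.sin(theta) + y * np.cos(theta)  # Eq. (3)
    g = np.exp(-0.5 * (x_theta**2 / sigma_x**2 + y_theta**2 / sigma_y**2))
    return g * np.cos(2 * np.pi * f * x_theta)        # Eq. (1)
```

For θ = 0 this reduces to a Gaussian envelope times a cosine varying along x, which is the unrotated GF used in the curved-filter construction below.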
The curved region C i,j centered in (i, j) consists of 2p + 1 parallel lines and 2q + 1 points along each line. The corresponding array A i,j contains the interpolated gray values (see right image in Figure 2). The enhanced pixel E(i, j) is obtained by: E(i, j, A i,j , f (i,j) ) = 2p+1 k=0 2q+1 l=0 A(k, l) · g(k − p, l − q, 0, f (i,j) , σ x , σ y )(4) Finally, differences in brightness are compensated by a locally adaptive normalization (using the formula from [26] who proposed a global normalization as a first step before the OF and RF estimation, and the Gabor filtering). In our experiments, the desired mean and standard deviation were set to 127.5 and 100, respectively, and neighboring pixels within a circle of radius r = 16 were considered. Parameter Choice In the case of image enhancement by straight GFs, [26] and other authors (e.g. [41]) use quadratic windows of size 11×11 pixels and choices for the standard deviation of the Gaussian of σ x = σ y = 4.0, or very similar values. We agree with their arguments that the parameter selection of σ x and σ y involves a trade-off between an ineffective filter (for small values of σ x and σ y ) and the risk of creating artifacts in the enhanced image (for large values of σ x and σ y ). Moreover, the same reasoning holds true for the size of the window. In analogy to the situation during the RF estimation (see Figure 3), enlarging a rectangular window in a region with curved ridge and valley flow increases the risk for introducing noise and, as a consequence of this, false structures into the enhanced image. The main advantage of curved Gabor filters is that they enable the choice of larger curved regions and high values for σ x and σ y without creating spurious features (see Figures 6 and 7). In this way, curved Gabor filters have a much greater smoothing potential in comparison to traditional GF. For curved GFs, the only limitation is the accuracy of the OF and RF estimation, and no longer the filter itself. 
The authors of [64] applied a straight GF for fingerprint enhancement and proposed to use a circle instead of a square as the window underlying the GF in order to reduce the number of artifacts in the enhanced image. Similarly, we tested an ellipse with major axis 2q + 1 and minor axis 2p + 1 instead of the full curved region, i.e. in Equation 4, only those interpolated gray values of array A i,j are considered which are located within the ellipse. In our tests, both variants achieved similar results on the FVC2004 databases (see Table 1). As opposed to [64], the term 'circular GF' is used in [59] and [60] for denoting the case σ x = σ y . Results Test Setup Two algorithms were employed for matching the original and the enhanced gray-scale images. The matcher "BOZORTH3" is based on the NIST biometric image software package (NBIS) [53], applying MINDTCT for minutiae extraction and BOZORTH3 for template matching. The matcher "VeriFinger 5.0 Grayscale" is derived from the Neurotechnology VeriFinger 5.0 SDK. For the verification tests, we follow the FVC protocol in order to ensure comparability of the results with [17] and other researchers. 2800 genuine and 4950 impostor recognition attempts were conducted for each of the FVC databases. Equal error rates (EERs) were calculated as described in [38]. Verification tests Curved Gabor filters were applied for enhancing the images of FVC2004 [40]. Several choices for σ x , σ y , the size of the curved region and interpolation methods were tested. EERs for Figure 7: The detail on the left (impression 1 of finger 90 in FVC2004 database 4) is enhanced by Gabor filtering using rectangular windows (center) and curved regions (right). Both filters resort to the same orientation field estimation and the same ridge frequency estimation based on curved regions. Filter parameters are also identical (p = 16, q = 32, σ x = 16, σ y = 32), the only difference between the two is the shape of window underlying the Gabor filter. 
Artifacts are created by the straight filter which may impair the recognition performance and a true minutia is deleted (highlighted by a red circle). some combinations of filter parameters are reported in Table 1. Other choices for the size of the curved region and the standard deviations of the Gaussian resulted in similar EERs. Relating to the interpolation method, only results for nearest neighbor are listed, because replacing it by bilinear or bicubic interpolation did not lead to a noticeable improvement in our tests. In order to compare the enhancement performance of curved Gabor filters for low quality images with existing enhancement methods, matcher BOZORTH3 was applied to the enhanced images of FVC2004 which enables the comparison with the traditional GF proposed in [26], short time Fourier transform (STFT) analysis [9] and pyramid-based image filtering [17] (see Table 1). Furthermore, in order to isolate the influence of the OF estimation and segmentation on the verification performance, we tested the x-signature method [26] for RF estimation and straight Gabor filters in combination with our OF estimation and segmentation. EERs are listed in the second and sixth row of Table 1. In comparison to the results of the cited implementation which applied an OF estimation and segmentation as described in [26], this led to lower EERs on DB1 and DB2, a higher EER on DB3 and a similar performance on DB4. In comparison to the performance on the original images, an improvement was observed on the first database and a deterioration on DB3 and DB4. Visual inspection of the enhanced images on DB3 showed that the increase of the EER was caused largely by incorrect RF estimates of the x-signature method. Moreover, we combined minutiae templates which were extracted by MINDTCT from images enhanced by curved Gabor filters and from images enhanced by anisotropic diffusion filtering. 
A detailed representation of this combination can be found in [21] and results are listed in Table 1 [40]. Parentheses indicate that only a small foreground area of the fingerprints was useful for recognition. Results listed in the top four rows are cited from [17]. Parameters of the curved Gabor filters: size of the curved region, interpolation method (NN = nearest neighbor), considered pixels (F = full curved region, E = elliptical), standard deviations of Gaussian. EERs on the FVC2004 databases which have been achieved so far using MINDTCT and BOZORTH3. The matcher referred to as VeriFinger 5.0 Grayscale has a built-in enhancement step which can not be turned off, so that the results for the original images in Table 1 are obtained on matching images which were also enhanced (by an undisclosed procedure of the commercial software). Results using this matcher were included in order to show that even in the face of this built-in enhancement, the proposed image smoothing by curved Gabor filters leads to considerable improvements in verification performance. Conclusions The present work describes a method for ridge frequency estimation using curved regions and image enhancement by curved Gabor filters. For low quality fingerprint images, in comparison to existing enhancement methods improvements of the matching performance were shown. Besides matching accuracy, speed is an important factor for fingerprint recognition systems. Results given in Section 5 were achieved using a proof of concept implementation written in Java. In a first test of a GPU based implementation on a Nvidia Tesla C2070, computing the RF image using curved regions of size 33 × 65 pixels took about 320 ms and applying curved Gabor filters of size 65 × 33 pixels took about 280 ms. The RF estimation can be further accelerated, if an estimate is computed only e.g. for every fourth pixel horizontally and vertically instead of a pixel-wise computation. 
These computing times indicate the practicability of the presented method for on-line verification systems. In our opinion, the potential for further improvements of the matching performance rests upon a better OF estimation. The combined method delineated in Section 2 produces fewer erroneous estimations than each of the individual methods, but there is still room for improvement. As long as OF estimation errors occur, it is necessary to choose the size of the curved Gabor filters and the standard deviations of the Gaussian envelope with care in order to balance strong image smoothing while avoiding spurious features. Future work includes an exploration of a locally adaptive choice of these parameters, depending on the local image quality, and e.g. the local reliability of the OF estimation. In addition, it will be of interest to apply the curved region based RF estimation and curved Gabor filters to latent fingerprints.
3,338
1104.4298
2019919182
Gabor filters (GFs) play an important role in many application areas for the enhancement of various types of images and the extraction of Gabor features. For the purpose of enhancing curved structures in noisy images, we introduce curved GFs that locally adapt their shape to the direction of flow. These curved GFs enable the choice of filter parameters that increase the smoothing power without creating artifacts in the enhanced image. In this paper, curved GFs are applied to the curved ridge and valley structures of low-quality fingerprint images. First, we combine two orientation-field estimation methods in order to obtain a more robust estimation for very noisy images. Next, curved regions are constructed by following the respective local orientation. Subsequently, these curved regions are used for estimating the local ridge frequency. Finally, curved GFs are defined based on curved regions, and they apply the previously estimated orientations and ridge frequencies for the enhancement of low-quality fingerprint images. Experimental results on the FVC2004 databases show improvements of this approach in comparison with state-of-the-art enhancement methods.
Texture segmentation @cite_40 and classification @cite_47, with applications such as recognizing species of tropical wood @cite_11 or classifying developmental stages of fruit flies @cite_14.
{ "abstract": [ "In this paper, we present a supervised classification system for sorting Drosophila embryonic in situ hybridization (ISH) images according to their developmental stages. The proposed system first segments the embryo from an image and registers it for subsequent texture feature extraction. In order to extract the most distinguishing features for classifying developmental stages, we identify several areas of interest in an embryo with peculiar traits. Gabor filter is applied on these areas to extract texture features and Principal Component Analysis (PCA) is then performed on the extracted features to reduce dimensionality while retaining significant information. We adopt multi-class Support Vector Machine (SVM) as the classifier that learns model parameters from the training examples and classifies new examples with the trained model. We evaluate the system performance by comparing it to existing algorithms. The experimental results show that the proposed system achieves good performance in classifying Drosophila embryonic developmental stages and outperforms other state-of-the-art algorithms.", "Texture segmentation involves subdividing an image into differently textured regions. Many texture segmentation schemes are based on a filter-bank model, where the filters, called Gabor filters, are derived from Gabor elementary functions. The goal is to transform texture differences into detectable filter-output discontinuities at texture boundaries. By locating these discontinuities, one can segment the image into differently textured regions. Distinct discontinuities occur, however, only if the Gabor filter parameters are suitably chosen. Some previous analysis has shown how to design filters for discriminating simple textures. Designing filters for more general natural textures, though, has largely been done ad hoc. We have devised a more rigorously based method for designing Gabor filters. 
It assumes that an image contains two different textures and that prototype samples of the textures are given a priori. We argue that Gabor filter outputs can be modeled as Rician random variables (often approximated well as Gaussian rv's) and develop a decision-theoretic algorithm for selecting optimal filter parameters. To improve segmentations for difficult texture pairs, we also propose a multiple-filter segmentation scheme, motivated by the Rician model. Experimental results indicate that our method is superior to previous methods in providing useful Gabor filters for a wide range of texture pairs.", "This paper proposes a novel approach to extract image features for texture classification. The proposed features are robust to image rotation, less sensitive to histogram equalization and noise. It comprises of two sets of features: dominant local binary patterns (DLBP) in a texture image and the supplementary features extracted by using the circularly symmetric Gabor filter responses. The dominant local binary pattern method makes use of the most frequently occurred patterns to capture descriptive textural information, while the Gabor-based features aim at supplying additional global textural information to the DLBP features. Through experiments, the proposed approach has been intensively evaluated by applying a large number of classification tests to histogram-equalized, randomly rotated and noise corrupted images in Outex, Brodatz, Meastex, and CUReT texture image databases. Our method has also been compared with six published texture features in the experiments. It is experimentally demonstrated that the proposed method achieves the highest classification accuracy in various texture databases and image conditions.", "Tropical timber woods have more than 1,000 species. Some of the species have similar patterns with others and some have different patterns even though they are of the same species. 
One of the main problems in wood species recognition system is the lack of discriminative features of the texture images. Gabor filter has been extensively used as feature extractor for various applications such as face detection, face recognition, image retrieval and font type extraction. In our work, we propose the use of Gabor filter to generate multiple processed images from a single image so that more features can be extracted and will be trained by neural network. The use of Gabor filters will optimally localized the properties of the images in both spatial and frequency domain. The features of the filtered images are extracted using co- occurrence matrix approach, known as grey level co- occurrence matrix (GLCM). A multi-layer neural network based on the popular BP (back propagation) algorithm is used for classification. The results show that increasing the number of features by means of Gabor filters as well as the right combination of Gabor filters increases the accuracy rate of the system." ], "cite_N": [ "@cite_14", "@cite_40", "@cite_47", "@cite_11" ], "mid": [ "2153613314", "2126440645", "2103496373", "2164519644" ] }
Curved Gabor Filters for Fingerprint Image Enhancement
Fingerprint Image Enhancement Image quality [2] has a big impact on the performance of a fingerprint recognition system (see e.g. [51] and [8]). The goal of image enhancement is to improve the overall performance by optimally preparing input images for later processing stages. Most systems extract minutiae from fingerprints [41], and the presence of noise can interfere with the extraction. As a result, true minutiae may be missed and false minutiae may be detected, both having a negative effect on the recognition rate. In order to avoid these two types of errors, image enhancement aims at improving the clarity of the ridge and valley structure. With special consideration to the typical types of noise occurring in fingerprints, an image enhancement method should have three important properties: • reconnect broken ridges, e.g. caused by dryness of the finger or scars; • separate falsely conglutinated ridges, e.g. caused by wetness of the finger or smudges; • preserve ridge endings and bifurcations. Enhancement of low quality images (occurring e.g. in all databases of FVC2004 [40]) and very low quality prints like latents (e.g. NIST SD27 [20]) is still a challenge. Techniques based on contextual filtering are widely used for fingerprint image enhancement [41] and a major difficulty lies in an automatic and reliable estimation of the local context, i.e. the local orientation and ridge frequency as input of the GF. Failure to correctly estimate the local context can lead to the creation of artifacts in the enhanced image [32] which consequently tends to increase the number of identification or verification errors. For low quality images, there is a substantial risk that an image enhancement step may impair the recognition performance as shown in [17] (results are cited in Table 1 of Section 5). 
The situation is even worse for very low quality images, and current approaches focus on minimizing the efforts required by a human expert for manually marking information in images of latent prints (see [7] and [56]). The present work addresses these challenges as follows: in the next section, two state-of-the-art methods for orientation field estimation are combined for obtaining an estimation which is more robust than each individual one. In Section 3, curved regions are introduced and employed for achieving a reliable ridge frequency estimation. Based on the curved regions, in Section 4 curved Gabor filters are defined. In Section 5, all previously described methods are combined for the enhancement of low quality images from FVC2004 and performance improvements in comparison to existing methods are shown. The paper concludes with a discussion of the advantages and drawbacks of this approach, as well as possible future directions in Section 6. Orientation Field Estimation In order to obtain a robust orientation field (OF) estimation for low quality images, two estimation methods are combined: the line sensor method [23] and the gradients based method [4] (with a smoothing window size of 33 pixels). The OFs are compared at each pixel. If the angle between both estimations is smaller than a threshold (here t = 15°), the orientation of the combined OF is set to the average of the two. Otherwise, the pixel is marked as missing. Afterwards, all inner gaps are reconstructed and up to a radius of 16 pixels, the orientation of the outer proximity is extrapolated, both as described in [23]. Results of verification tests on all 12 databases of FVC2000 to 2004 [38,39,40] showed a better performance of the combined OF applied for contextual image enhancement than each individual OF estimation [22]. 
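As an illustration of the pixel-wise fusion rule described above, the following sketch (our own reimplementation, not the authors' code; function and variable names are hypothetical) averages the two orientation estimates when they are coherent and marks the pixel as missing otherwise. Since ridge orientations are only defined modulo π, comparison and averaging are carried out in the doubled-angle domain:

```python
import numpy as np

def fuse_orientation_fields(of_a, of_b, t_deg=15.0):
    """Pixel-wise fusion of two orientation fields (radians, mod pi).

    If the two estimates differ by less than t degrees, average them;
    otherwise mark the pixel as missing (NaN). A NaN input is treated
    as an 'abstaining' estimator, so the other estimate is kept.
    """
    t = np.deg2rad(t_deg)
    # Work in the doubled-angle domain, where the mod-pi wraparound
    # of orientations disappears.
    da, db = 2.0 * of_a, 2.0 * of_b
    # Signed angular difference, mapped back to (-pi/2, pi/2].
    diff = np.angle(np.exp(1j * (da - db))) / 2.0
    # Circular mean of the two orientations, mapped back to [0, pi).
    mean = np.angle(np.exp(1j * da) + np.exp(1j * db)) / 2.0 % np.pi
    fused = np.where(np.abs(diff) < t, mean, np.nan)
    # Abstaining judges: fall back to the single available estimate.
    fused = np.where(np.isnan(of_a), of_b, fused)
    fused = np.where(np.isnan(of_b), of_a, fused)
    return fused
```

The subsequent reconstruction of inner gaps and extrapolation of the outer proximity, done as in [23], are not sketched here.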
The OF being the only parameter that was changed, lower equal error rates can be interpreted as an indicator that the combined OF contains fewer estimation errors than each of the individual estimations. Simultaneously, we regard the combined OF as a segmentation of the fingerprint image into foreground (endowed with an OF estimation) and background. The information fusion strategy for obtaining the combined OF was inspired by [44]. The two OF estimation methods can be regarded as judges or experts and the orientation estimation for a certain pixel as a judgment. If the angle between both estimations is greater than a threshold t, the judgments are considered as incoherent, and consequently not averaged. If an estimation method provides no estimation for a pixel, it is regarded as abstaining. Orientation estimations for pixels with incoherent or abstaining judges are reconstructed or extrapolated from pixels with coherent judgments. Ridge Frequency Estimation Using Curved Regions In [26], a ridge frequency (RF) estimation method was proposed which divides a fingerprint image into blocks of 16 × 16 pixels, and for each block, it obtains an estimation from an oriented window of 32 × 16 pixels by a method called 'x-signature' which detects peaks in the gray-level profile. Failures to estimate an RF, e.g. caused by the presence of noise, curvature or minutiae, are handled by interpolation and outliers are removed by low-pass filtering. In our experience, this method works well for good and medium quality prints, but it encounters serious difficulties obtaining a useful estimation when dealing with low quality prints. 
In this section, we propose a RF estimation method following the same basic idea -to obtain an estimation from the gray-level profile -but which bears several improvements in comparison to [26]: (i) the profile is derived from a curved region which is different in shape and size from the oriented window of the x-signature method, (ii) we introduce an information criterion (IC) for the reliability of an estimation and (iii) depending on the IC, the gray-level profile is smoothed with a Gaussian kernel, (iv) both, minima and maxima are taken into account and (v) the inverse median is applied for the RF estimate. If the clarity of the ridge and valley structure is disturbed by noise, e.g. caused by dryness or wetness of the finger, an oriented window of 32 × 16 pixels may not contain a sufficient amount of information for a RF estimation (e.g. see Figure 3, left image). In regions where the ridges run almost parallel, this may be compensated by averaging over larger distances along the lines. However, if the ridges are curved, the enlargement of the rectangular window does not improve the consistency of the gray-profile, because the straight lines cut neighboring ridges and valleys. In order to overcome this limitation, we propose curved regions which adapt their shape to the local orientation. It is important to take the curvature of ridges and valleys into account, because about 94 % of all fingerprints belong to the classes right loop, whorl, left loop and tented arch [30], so that they contain core points and therefore regions of high curvature. Curved Regions Let (x c , y c ) be the center of a curved region which consists of 2p + 1 parallel curves and 2q + 1 points along each curve. The midpoints (depicted as blue squares in Figure 2) of the parallel curves are initialised by following both directions orthogonal to the orientation for p steps of one pixel unit, starting from the central pixel (x c , y c ) (red square). 
At each step, the direction is adjusted, so that it is orthogonal to the local orientation. If the change between two consecutive local orientations is greater than a threshold, the presence of a core point is assumed, and the iteration is stopped. Since all x- and y-coordinates are decimal values, the local orientation is interpolated. Nearest-neighbor and bilinear interpolation using the orientation of the four neighboring pixels are examined in Section 5. Starting from each of the 2p + 1 midpoints, curves are obtained by following the respective local orientation and its opposite direction (local orientation θ + π) for q steps of one pixel unit, respectively. Curvature estimation As a by-product of constructing curved regions, a pixel-wise estimate of the local curvature is obtained using the central curve of each region (cf. the red curves in Figures 2 and 3). The estimate is computed by adding up the absolute values of differences in orientation between the central point of the curve and the two end points. The outcome is an estimate of the curvature, i.e. the integrated change in orientation along a curve (here: of 65 pixel steps). For an illustration, see Figure 4. The curvature estimate can be useful for singular point detection, fingerprint alignment or as additional information at the matching stage. Ridge Frequency Estimation Gray values at the decimal coordinates of the curve points are interpolated. In this study, three interpolation methods are taken into account: nearest neighbor, bilinear and bicubic [47] (considering 1, 4 and 16 neighboring pixels for the gray value interpolation, respectively). The gray-level profile is produced by averaging the interpolated gray values along each curve (in our experiments, the minimum number of valid points is set to 50% of the points per line). Next, local extrema are detected and the distances between consecutive minima and consecutive maxima are stored. 
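To make the curve-following step concrete, here is a minimal sketch (our own illustration with hypothetical names, not the original implementation). It traces one curve of a curved region by repeatedly stepping one pixel unit along the re-evaluated local orientation and its opposite direction; it uses nearest-neighbor orientation lookup and omits the core-point stopping criterion and the bilinear interpolation variant discussed above:

```python
import numpy as np

def trace_curve(orient, x0, y0, q):
    """Trace the 2q + 1 points of one curve through (x0, y0).

    orient: 2-D array of local orientations in radians (mod pi).
    Follows the local orientation for q unit steps and the opposite
    direction (theta + pi) for q steps, re-reading the orientation
    at each step (nearest-neighbor lookup, clipped to the image).
    """
    H, W = orient.shape
    def walk(sign):
        pts, x, y = [], float(x0), float(y0)
        for _ in range(q):
            yi = int(np.clip(round(y), 0, H - 1))
            xi = int(np.clip(round(x), 0, W - 1))
            th = orient[yi, xi]           # local orientation at current point
            x += sign * np.cos(th)        # one pixel-unit step
            y += sign * np.sin(th)
            pts.append((x, y))
        return pts
    # Backward points (reversed) + center + forward points: 2q + 1 points.
    return list(reversed(walk(-1.0))) + [(float(x0), float(y0))] + walk(+1.0)
```

For a constant orientation field the traced curve degenerates to a straight line of 2q + 1 equally spaced points, as expected.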
The RF estimate is the reciprocal of the median of the inter-extrema distances (IEDs). The proportion p_maxmin of the largest IED to the smallest IED is regarded as an information criterion: large values of p_maxmin are considered an indicator for the occurrence of false extrema in the profile, or for the absence of true extrema (see Figure 5). Only RF estimations where p_maxmin is below a threshold are regarded as valid (for the tests in Section 5, we required p_maxmin ≤ 1.5). If p_maxmin of the gray-level profile produced by averaging along the curves exceeds the threshold, then in some cases it is still possible to obtain a feasible RF estimation by smoothing the profile, which may remove false minima and maxima, followed by a repetition of the estimation steps (see Figure 5). A Gaussian kernel of size 7 and σ = 1.0 was applied in our study, and a maximum of three smoothing iterations was performed. As an additional constraint, we require that at least two minima and two maxima are detected and that the RF estimate lies within an appropriate range of valid values (between 1/3 and 1/25). As a final step, the RF image is smoothed by averaging over a window of size w = 49 pixels. Curved Gabor Filters Definition The Gabor filter is a two-dimensional filter formed by the combination of a cosine with a two-dimensional Gaussian function, and it has the general form

g(x, y, \theta, f, \sigma_x, \sigma_y) = \exp\left( -\frac{1}{2} \left[ \frac{x_\theta^2}{\sigma_x^2} + \frac{y_\theta^2}{\sigma_y^2} \right] \right) \cdot \cos(2\pi \cdot f \cdot x_\theta)  (1)

x_\theta = x \cdot \cos\theta + y \cdot \sin\theta  (2)

y_\theta = -x \cdot \sin\theta + y \cdot \cos\theta  (3)

In (1), the Gabor filter is centered at the origin. θ denotes the rotation of the filter relative to the x-axis and f the local frequency. σ_x and σ_y signify the standard deviations of the Gaussian function along the x- and y-axis, respectively. A curved Gabor filter is computed by mapping a curved region to a two-dimensional array, followed by a point-wise multiplication with an unrotated GF (θ = 0). 
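The ridge-frequency procedure above (median of inter-extrema distances, validated by the p_maxmin criterion) can be sketched as follows. This is an illustrative reimplementation with hypothetical names, not the original code; it omits the Gaussian profile-smoothing retry and the final averaging of the RF image:

```python
import numpy as np

def rf_from_profile(profile, thr=1.5, rf_min=1/25.0, rf_max=1/3.0):
    """Ridge-frequency estimate from a gray-level profile (sketch).

    Detects strict local minima and maxima, collects distances between
    consecutive extrema of the same kind (IEDs), and returns the
    reciprocal of the median IED. The estimate is accepted only if
    p_maxmin = max(IED)/min(IED) <= thr, at least two minima and two
    maxima exist, and the RF lies in the valid range [1/25, 1/3].
    Returns None for an invalid estimate.
    """
    prof = np.asarray(profile, dtype=float)
    interior = prof[1:-1]
    maxima = np.flatnonzero((interior > prof[:-2]) & (interior > prof[2:])) + 1
    minima = np.flatnonzero((interior < prof[:-2]) & (interior < prof[2:])) + 1
    if len(maxima) < 2 or len(minima) < 2:
        return None
    ieds = np.concatenate([np.diff(maxima), np.diff(minima)])
    p_maxmin = ieds.max() / ieds.min()
    if p_maxmin > thr:
        return None  # unreliable profile: false or missing extrema
    rf = 1.0 / np.median(ieds)
    return rf if rf_min <= rf <= rf_max else None
```

On a clean cosine profile with period 10 pixels, for instance, the sketch returns an RF of 0.1; on a flat profile it returns None.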
The curved region C_{i,j} centered at (i, j) consists of 2p + 1 parallel lines and 2q + 1 points along each line. The corresponding array A_{i,j} contains the interpolated gray values (see right image in Figure 2). The enhanced pixel E(i, j) is obtained by

E(i, j) = \sum_{k=0}^{2p} \sum_{l=0}^{2q} A(k, l) \cdot g(k - p, l - q, 0, f_{(i,j)}, \sigma_x, \sigma_y)  (4)

Finally, differences in brightness are compensated by a locally adaptive normalization (using the formula from [26], who proposed a global normalization as a first step before the OF and RF estimation and the Gabor filtering). In our experiments, the desired mean and standard deviation were set to 127.5 and 100, respectively, and neighboring pixels within a circle of radius r = 16 were considered. Parameter Choice In the case of image enhancement by straight GFs, [26] and other authors (e.g. [41]) use quadratic windows of size 11 × 11 pixels and choices for the standard deviation of the Gaussian of σ_x = σ_y = 4.0, or very similar values. We agree with their argument that the selection of σ_x and σ_y involves a trade-off between an ineffective filter (for small values of σ_x and σ_y) and the risk of creating artifacts in the enhanced image (for large values of σ_x and σ_y). The same reasoning holds true for the size of the window. In analogy to the situation during the RF estimation (see Figure 3), enlarging a rectangular window in a region with curved ridge and valley flow increases the risk of introducing noise and, as a consequence, false structures into the enhanced image. The main advantage of curved Gabor filters is that they enable the choice of larger curved regions and high values for σ_x and σ_y without creating spurious features (see Figures 6 and 7). In this way, curved Gabor filters have a much greater smoothing potential than traditional GFs. For curved GFs, the only limitation is the accuracy of the OF and RF estimation, and no longer the filter itself. 
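Equations (1)-(4) can be sketched in code as follows (an illustration under our own naming, not the reference implementation; the sum limits are taken as k = 0..2p and l = 0..2q, i.e. over the 2p + 1 lines and 2q + 1 points of the region). With θ = 0 the cosine of the filter oscillates along the k index, i.e. across the parallel curves, which is the direction of the ridge/valley oscillation:

```python
import numpy as np

def gabor(x, y, theta, f, sx, sy):
    """Gabor filter of Equations (1)-(3)."""
    xt = x * np.cos(theta) + y * np.sin(theta)
    yt = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-0.5 * (xt**2 / sx**2 + yt**2 / sy**2)) * np.cos(2 * np.pi * f * xt)

def enhance_pixel(A, f_ij, sx, sy):
    """Equation (4): filter response for one curved region (sketch).

    A is the (2p+1) x (2q+1) array of gray values interpolated along
    the curved region; f_ij is the local ridge frequency. The region
    is multiplied point-wise with an unrotated Gabor filter (theta = 0)
    and summed.
    """
    n_lines, n_pts = A.shape                 # 2p+1 lines, 2q+1 points per line
    p, q = (n_lines - 1) // 2, (n_pts - 1) // 2
    k = np.arange(n_lines)[:, None]          # line index, offset k - p across ridges
    l = np.arange(n_pts)[None, :]            # point index, offset l - q along ridges
    return float(np.sum(A * gabor(k - p, l - q, 0.0, f_ij, sx, sy)))
```

A region whose gray values oscillate across the lines in phase with the filter's frequency yields a large positive response; the negated (anti-phase) region yields the negated response.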
The authors of [64] applied a straight GF for fingerprint enhancement and proposed to use a circle instead of a square as the window underlying the GF in order to reduce the number of artifacts in the enhanced image. Similarly, we tested an ellipse with major axis 2q + 1 and minor axis 2p + 1 instead of the full curved region, i.e. in Equation 4, only those interpolated gray values of array A_{i,j} are considered which are located within the ellipse. In our tests, both variants achieved similar results on the FVC2004 databases (see Table 1). As opposed to [64], the term 'circular GF' is used in [59] and [60] for denoting the case σ_x = σ_y. Results Test Setup Two algorithms were employed for matching the original and the enhanced gray-scale images. The matcher "BOZORTH3" is based on the NIST biometric image software package (NBIS) [53], applying MINDTCT for minutiae extraction and BOZORTH3 for template matching. The matcher "VeriFinger 5.0 Grayscale" is derived from the Neurotechnology VeriFinger 5.0 SDK. For the verification tests, we follow the FVC protocol in order to ensure comparability of the results with [17] and other researchers. 2800 genuine and 4950 impostor recognition attempts were conducted for each of the FVC databases. Equal error rates (EERs) were calculated as described in [38]. Verification tests Curved Gabor filters were applied for enhancing the images of FVC2004 [40]. Several choices for σ_x, σ_y, the size of the curved region and interpolation methods were tested. EERs for some combinations of filter parameters are reported in Table 1.

Figure 7: The detail on the left (impression 1 of finger 90 in FVC2004 database 4) is enhanced by Gabor filtering using rectangular windows (center) and curved regions (right). Both filters resort to the same orientation field estimation and the same ridge frequency estimation based on curved regions. Filter parameters are also identical (p = 16, q = 32, σ_x = 16, σ_y = 32); the only difference between the two is the shape of the window underlying the Gabor filter. Artifacts are created by the straight filter, which may impair the recognition performance, and a true minutia is deleted (highlighted by a red circle).

Other choices for the size of the curved region and the standard deviations of the Gaussian resulted in similar EERs. Relating to the interpolation method, only results for nearest neighbor are listed, because replacing it by bilinear or bicubic interpolation did not lead to a noticeable improvement in our tests. In order to compare the enhancement performance of curved Gabor filters for low quality images with existing enhancement methods, matcher BOZORTH3 was applied to the enhanced images of FVC2004, which enables the comparison with the traditional GF proposed in [26], short time Fourier transform (STFT) analysis [9] and pyramid-based image filtering [17] (see Table 1). Furthermore, in order to isolate the influence of the OF estimation and segmentation on the verification performance, we tested the x-signature method [26] for RF estimation and straight Gabor filters in combination with our OF estimation and segmentation. EERs are listed in the second and sixth row of Table 1. In comparison to the results of the cited implementation, which applied an OF estimation and segmentation as described in [26], this led to lower EERs on DB1 and DB2, a higher EER on DB3 and a similar performance on DB4. In comparison to the performance on the original images, an improvement was observed on the first database and a deterioration on DB3 and DB4. Visual inspection of the enhanced images on DB3 showed that the increase of the EER was caused largely by incorrect RF estimates of the x-signature method. Moreover, we combined minutiae templates which were extracted by MINDTCT from images enhanced by curved Gabor filters and from images enhanced by anisotropic diffusion filtering. 
A detailed description of this combination can be found in [21], and results are listed in Table 1.

[Table 1: EERs on the FVC2004 databases [40]. Results listed in the top four rows are cited from [17]. Parentheses indicate that only a small foreground area of the fingerprints was useful for recognition. Parameters of the curved Gabor filters: size of the curved region, interpolation method (NN = nearest neighbor), considered pixels (F = full curved region, E = elliptical), and standard deviations of the Gaussian. EERs on the FVC2004 databases which have been achieved so far using MINDTCT and BOZORTH3.]

The matcher referred to as VeriFinger 5.0 Grayscale has a built-in enhancement step which cannot be turned off, so the results for the original images in Table 1 are obtained by matching images which were also enhanced (by an undisclosed procedure of the commercial software). Results using this matcher were included in order to show that, even in the face of this built-in enhancement, the proposed image smoothing by curved Gabor filters leads to considerable improvements in verification performance.

Conclusions

The present work describes a method for ridge frequency estimation using curved regions and image enhancement by curved Gabor filters. For low-quality fingerprint images, improvements of the matching performance in comparison with existing enhancement methods were shown. Besides matching accuracy, speed is an important factor for fingerprint recognition systems. The results given in Section 5 were achieved using a proof-of-concept implementation written in Java. In a first test of a GPU-based implementation on an Nvidia Tesla C2070, computing the RF image using curved regions of size 33 × 65 pixels took about 320 ms, and applying curved Gabor filters of size 65 × 33 pixels took about 280 ms. The RF estimation can be further accelerated if an estimate is computed only for, e.g., every fourth pixel horizontally and vertically instead of pixel-wise. These computing times indicate the practicability of the presented method for on-line verification systems.

In our opinion, the potential for further improvements of the matching performance rests upon a better OF estimation. The combined method delineated in Section 2 produces fewer erroneous estimations than each of the individual methods, but there is still room for improvement. As long as OF estimation errors occur, the size of the curved Gabor filters and the standard deviations of the Gaussian envelope must be chosen with care in order to balance strong image smoothing against the risk of creating spurious features. Future work includes an exploration of a locally adaptive choice of these parameters, depending e.g. on the local image quality and the local reliability of the OF estimation. In addition, it will be of interest to apply the curved-region-based RF estimation and curved Gabor filters to latent fingerprints.
3,338
1104.4298
2019919182
Gabor filters (GFs) play an important role in many application areas for the enhancement of various types of images and the extraction of Gabor features. For the purpose of enhancing curved structures in noisy images, we introduce curved GFs that locally adapt their shape to the direction of flow. These curved GFs enable the choice of filter parameters that increase the smoothing power without creating artifacts in the enhanced image. In this paper, curved GFs are applied to the curved ridge and valley structures of low-quality fingerprint images. First, we combine two orientation-field estimation methods in order to obtain a more robust estimation for very noisy images. Next, curved regions are constructed by following the respective local orientation. Subsequently, these curved regions are used for estimating the local ridge frequency. Finally, curved GFs are defined based on curved regions, and they apply the previously estimated orientations and ridge frequencies for the enhancement of low-quality fingerprint images. Experimental results on the FVC2004 databases show improvements of this approach in comparison with state-of-the-art enhancement methods.
In medical imaging, GFs are applied for the enhancement of structures such as finger veins @cite_52 and muscle fibers in ultrasound images @cite_34 , for the detection of blood vessels in retinal images @cite_1 , as well as for many other tasks, e.g. analyzing event-related brain activity @cite_31 , assessing osteoporosis in radiographs @cite_59 and modeling the behavior of simple cells in the mammalian visual cortex @cite_30 .
{ "abstract": [ "Two-dimensional spatial linear filters are constrained by general uncertainty relations that limit their attainable information resolution for orientation, spatial frequency, and two-dimensional (2D) spatial position. The theoretical lower limit for the joint entropy, or uncertainty, of these variables is achieved by an optimal 2D filter family whose spatial weighting functions are generated by exponentiated bivariate second-order polynomials with complex coefficients, the elliptic generalization of the one-dimensional elementary functions proposed in Gabor’s famous theory of communication [ J. Inst. Electr. Eng.93, 429 ( 1946)]. The set includes filters with various orientation bandwidths, spatial-frequency bandwidths, and spatial dimensions, favoring the extraction of various kinds of information from an image. Each such filter occupies an irreducible quantal volume (corresponding to an independent datum) in a four-dimensional information hyperspace whose axes are interpretable as 2D visual space, orientation, and spatial frequency, and thus such a filter set could subserve an optimally efficient sampling of these variables. Evidence is presented that the 2D receptive-field profiles of simple cells in mammalian visual cortex are well described by members of this optimal 2D filter family, and thus such visual neurons could be said to optimize the general uncertainty relations for joint 2D-spatial–2D-spectral information resolution. The variety of their receptive-field dimensions and orientation and spatial-frequency bandwidths, and the correlations among these, reveal several underlying constraints, particularly in width length aspect ratio and principal axis organization, suggesting a polar division of labor in occupying the quantal volumes of information hyperspace. 
Such an ensemble of 2D neural receptive fields in visual cortex could locally embed coarse polar mappings of the orientation–frequency plane piecewise within the global retinotopic mapping of visual space, thus efficiently representing 2D spatial visual information by localized 2D spectral signatures.", "This paper proposes an automated blood vessel detection scheme based on adaptive contrast enhancement, feature extraction, and tracing. Feature extraction of small blood vessels is performed by using the standard deviation of Gabor filter responses. Tracing of vessels is done via forward detection, bifurcation identification, and backward verification. Tests over twenty images show that for normal images, the true positive rate (TPR) ranges from 80% to 91%, and their corresponding false positive rates (FPR) range from 2.8% to 5.5%. For abnormal images, the TPR ranges from 73.8% to 86.5% and the FPR ranges from 2.1% to 5.3%, respectively. In comparison with two published solution schemes that were also based on the STARE database, our scheme has a lower FPR for the reported TPR measure.", "tion. Generally, finger-vein images have low contrast and uneven illumination due to finger-vein imaging manner and finger-shape variation. So, finger-vein enhancement is indispensable for reliable finger-vein network extraction. This paper proposes a new method based on combination of Gray-Level Grouping (GLG) and Circular Gabor Filter (CGF) for finger-vein image enhancement. First, GLG is used to reduce illumination fluctuation and improve the contrast of finger-vein images. Then a circular Gabor filter is used to further strengthen vein ridges in images. The experimental results show that this proposed method is capable of enhancing finger-vein image effectively.", "Multi-channel filtering implemented using the Gabor function, or Gabor filter, is capable of mimicking characteristics of the human visual system. 
In the assessment of osteoporosis, Gabor filter is used to calculate features from a trabecular pattern recorded on radiographs of a proximal femur. The assessment of osteoporosis can be done by observing and analyzing the trabecular pattern in the proximal femur. The feature extraction method used is energy calculation. The energy then is used for assessment or classification. The predetermined Singh index of the trabecular pattern is used to justify the classification result.", "Abstract Frequency-specific, i.e., narrow-band brain, activity is traditionally analyzed on the basis of either a time- or frequency-domain representation of the signal. Here we demonstrate an alternative method based on Gabor functions which are well known for their optimal concentration in time and frequency. Using Gabor filtering, amplitude and frequency information can be separated clearly from one another and certain novel approaches to averaging become possible.", "In this study, the Gabor filter bank technique was applied to the biceps and gastrocnemius ultrasound images to longitudinally enhance the coherently oriented and hyperechoic perimysiums regions. The method involved three steps, orientation field estimation, frequency map computation and Gabor filtering. The method was evaluated using a simulated image distorted with multiplicative speckle noises, where the “muscles” were arranged in a bipennate fashion with a central “aponeurosis”. After the enhancement using the proposed method, most of the original hyperechoic bands in the simulated image could be recovered and the noises in other locations were greatly reduced. The proposed method was also tested on biceps sonograms collected from 4 healthy adult subjects. Based on the filtering results, the hyperechoic regions, especially the fibroadipose septas, could be enhanced in several aspects, such as the completion of broken but coherently oriented regions. 
It is believed that the proposed method have potentials in assisting the visualization of strongly oriented patterns in ultrasound images and especially the quantitative estimation of muscle thickness, muscle fiber pennation angle and fascicle length." ], "cite_N": [ "@cite_30", "@cite_1", "@cite_52", "@cite_59", "@cite_31", "@cite_34" ], "mid": [ "2006500012", "2118823856", "2170371365", "2137923066", "2067390842", "2123481956" ] }
Curved Gabor Filters for Fingerprint Image Enhancement
Fingerprint Image Enhancement

Image quality [2] has a big impact on the performance of a fingerprint recognition system (see e.g. [51] and [8]). The goal of image enhancement is to improve the overall performance by optimally preparing input images for later processing stages. Most systems extract minutiae from fingerprints [41], and the presence of noise can interfere with the extraction. As a result, true minutiae may be missed and false minutiae may be detected, both having a negative effect on the recognition rate. In order to avoid these two types of errors, image enhancement aims at improving the clarity of the ridge and valley structure. With special consideration of the typical types of noise occurring in fingerprints, an image enhancement method should have three important properties:

• reconnect broken ridges, e.g. caused by dryness of the finger or scars;
• separate falsely conglutinated ridges, e.g. caused by wetness of the finger or smudges;
• preserve ridge endings and bifurcations.

Enhancement of low-quality images (occurring e.g. in all databases of FVC2004 [40]) and very low-quality prints like latents (e.g. NIST SD27 [20]) is still a challenge. Techniques based on contextual filtering are widely used for fingerprint image enhancement [41], and a major difficulty lies in an automatic and reliable estimation of the local context, i.e. the local orientation and ridge frequency as input of the GF. Failure to correctly estimate the local context can lead to the creation of artifacts in the enhanced image [32], which consequently tends to increase the number of identification or verification errors. For low-quality images, there is a substantial risk that an image enhancement step may impair the recognition performance, as shown in [17] (results are cited in Table 1 of Section 5). 
The situation is even worse for very low-quality images, and current approaches focus on minimizing the effort required by a human expert for manually marking information in images of latent prints (see [7] and [56]). The present work addresses these challenges as follows: in the next section, two state-of-the-art methods for orientation field estimation are combined to obtain an estimation which is more robust than each individual one. In Section 3, curved regions are introduced and employed for achieving a reliable ridge frequency estimation. Based on the curved regions, curved Gabor filters are defined in Section 4. In Section 5, all previously described methods are combined for the enhancement of low-quality images from FVC2004, and performance improvements in comparison to existing methods are shown. The paper concludes with a discussion of the advantages and drawbacks of this approach, as well as possible future directions, in Section 6.

Orientation Field Estimation

In order to obtain a robust orientation field (OF) estimation for low-quality images, two estimation methods are combined: the line-sensor method [23] and the gradients-based method [4] (with a smoothing window size of 33 pixels). The OFs are compared at each pixel. If the angle between both estimations is smaller than a threshold (here t = 15°), the orientation of the combined OF is set to the average of the two. Otherwise, the pixel is marked as missing. Afterwards, all inner gaps are reconstructed, and up to a radius of 16 pixels, the orientation of the outer proximity is extrapolated, both as described in [23]. Results of verification tests on all 12 databases of FVC2000 to FVC2004 [38,39,40] showed a better performance of the combined OF applied for contextual image enhancement than each individual OF estimation [22]. 
Since the OF was the only parameter that was changed, lower equal error rates can be interpreted as an indicator that the combined OF contains fewer estimation errors than each of the individual estimations. Simultaneously, we regard the combined OF as a segmentation of the fingerprint image into foreground (endowed with an OF estimation) and background. The information fusion strategy for obtaining the combined OF was inspired by [44]: the two OF estimation methods can be regarded as judges or experts, and the orientation estimation for a certain pixel as a judgment. If the angle between both estimations is greater than a threshold t, the judgments are considered incoherent and are consequently not averaged. If an estimation method provides no estimation for a pixel, it is regarded as abstaining. Orientation estimations for pixels with incoherent or abstaining judges are reconstructed or extrapolated from pixels with coherent judgments.

Ridge Frequency Estimation Using Curved Regions

In [26], a ridge frequency (RF) estimation method was proposed which divides a fingerprint image into blocks of 16 × 16 pixels and, for each block, obtains an estimation from an oriented window of 32 × 16 pixels by a method called 'x-signature' which detects peaks in the gray-level profile. Failures to estimate an RF, e.g. caused by the presence of noise, curvature or minutiae, are handled by interpolation, and outliers are removed by low-pass filtering. In our experience, this method works well for good and medium quality prints, but it encounters serious difficulties obtaining a useful estimation when dealing with low-quality prints. 
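As an illustration of the orientation-field fusion described in Section 2 (average the two estimates if they agree within t = 15°, otherwise mark the pixel as missing), the pixel-wise rule might be sketched as follows. This is a minimal sketch, not the paper's implementation; the function name and array-based interface are our assumptions, and orientations are assumed to be given in radians modulo π:

```python
import numpy as np

def combine_orientation_fields(of1, of2, t_deg=15.0):
    """Fuse two orientation fields (radians, defined modulo pi).

    Where the two estimates differ by more than t_deg, the pixel is
    marked as missing (NaN) for later reconstruction/extrapolation."""
    # smallest angular difference between two orientations (modulo pi)
    d = np.abs(of1 - of2) % np.pi
    d = np.minimum(d, np.pi - d)
    # average on the doubled-angle circle, which is the correct way to
    # average orientations (0 and pi denote the same orientation)
    avg = (0.5 * np.angle(np.exp(2j * of1) + np.exp(2j * of2))) % np.pi
    return np.where(d < np.deg2rad(t_deg), avg, np.nan)
```

Inner gaps in the resulting field would then be reconstructed and the border orientation extrapolated as described in [23].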
In this section, we propose an RF estimation method following the same basic idea, namely to obtain an estimation from the gray-level profile, but which bears several improvements in comparison to [26]: (i) the profile is derived from a curved region which differs in shape and size from the oriented window of the x-signature method, (ii) we introduce an information criterion (IC) for the reliability of an estimation, (iii) depending on the IC, the gray-level profile is smoothed with a Gaussian kernel, (iv) both minima and maxima are taken into account, and (v) the inverse median is applied for the RF estimate. If the clarity of the ridge and valley structure is disturbed by noise, e.g. caused by dryness or wetness of the finger, an oriented window of 32 × 16 pixels may not contain a sufficient amount of information for an RF estimation (e.g. see Figure 3, left image). In regions where the ridges run almost parallel, this may be compensated by averaging over larger distances along the lines. However, if the ridges are curved, enlarging the rectangular window does not improve the consistency of the gray-level profile, because the straight lines cut neighboring ridges and valleys. In order to overcome this limitation, we propose curved regions which adapt their shape to the local orientation. It is important to take the curvature of ridges and valleys into account, because about 94% of all fingerprints belong to the classes right loop, whorl, left loop and tented arch [30], so that they contain core points and therefore regions of high curvature.

Curved Regions

Let (x_c, y_c) be the center of a curved region which consists of 2p + 1 parallel curves and 2q + 1 points along each curve. The midpoints (depicted as blue squares in Figure 2) of the parallel curves are initialised by following both directions orthogonal to the orientation for p steps of one pixel unit, starting from the central pixel (x_c, y_c) (red square). 
At each step, the direction is adjusted so that it is orthogonal to the local orientation. If the change between two consecutive local orientations is greater than a threshold, the presence of a core point is assumed, and the iteration is stopped. Since all x- and y-coordinates are decimal values, the local orientation is interpolated. Nearest neighbor and bilinear interpolation using the orientation of the four neighboring pixels are examined in Section 5. Starting from each of the 2p + 1 midpoints, curves are obtained by following the respective local orientation and its opposite direction (local orientation θ + π) for q steps of one pixel unit, respectively.

Curvature Estimation

As a by-product of constructing curved regions, a pixel-wise estimate of the local curvature is obtained using the central curve of each region (cf. the red curves in Figures 2 and 3). The estimate is computed by adding up the absolute values of the differences in orientation between the central point of the curve and the two end points. The outcome is an estimate of the curvature, i.e. the integrated change in orientation along a curve (here: of 65 pixel steps). For an illustration, see Figure 4. The curvature estimate can be useful for singular point detection, fingerprint alignment or as additional information at the matching stage.

Ridge Frequency Estimation

Gray values at the decimal coordinates of the curve points are interpolated. In this study, three interpolation methods are taken into account: nearest neighbor, bilinear and bicubic [47] (considering 1, 4 and 16 neighboring pixels for the gray value interpolation, respectively). The gray-level profile is produced by averaging the interpolated gray values along each curve (in our experiments, the minimum number of valid points is set to 50% of the points per line). Next, local extrema are detected, and the distances between consecutive minima and consecutive maxima are stored. 
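The curved-region construction described above (trace 2p + 1 midpoints orthogonally to the flow, then follow the flow for q steps in both directions from each midpoint) can be sketched as follows. This is a simplified illustration under stated assumptions: `orientation(x, y)` is assumed to return the interpolated local orientation in radians, and the core-point stopping criterion is omitted:

```python
import numpy as np

def curved_region(orientation, xc, yc, p, q, step=1.0):
    """Return 2p+1 curves with 2q+1 points each, adapted to the flow."""
    def walk(x, y, angle_of, steps):
        # follow the field point by point, re-reading it at each step
        pts = []
        for _ in range(steps):
            a = angle_of(x, y)
            x, y = x + step * np.cos(a), y + step * np.sin(a)
            pts.append((x, y))
        return pts

    # midpoints: step orthogonally to the ridge flow in both directions
    up = walk(xc, yc, lambda x, y: orientation(x, y) + np.pi / 2, p)
    down = walk(xc, yc, lambda x, y: orientation(x, y) - np.pi / 2, p)
    midpoints = down[::-1] + [(xc, yc)] + up

    # curves: follow the local orientation and its opposite direction
    curves = []
    for mx, my in midpoints:
        fwd = walk(mx, my, orientation, q)
        back = walk(mx, my, lambda x, y: orientation(x, y) + np.pi, q)
        curves.append(back[::-1] + [(mx, my)] + fwd)
    return curves
```

Averaging the interpolated gray values along each of these curves then yields the gray-level profile used for the RF estimate.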
The RF estimate is the reciprocal of the median of the inter-extrema distances (IEDs). The proportion p_maxmin of the largest IED to the smallest IED is regarded as an information criterion (IC) for the reliability of the estimation: large values of p_maxmin are considered an indicator for the occurrence of false extrema in the profile or for the absence of true extrema (see Figure 5). Only RF estimations where p_maxmin is below a threshold are regarded as valid (for the tests in Section 5, we required p_maxmin ≤ 1.5). If p_maxmin of the gray-level profile produced by averaging along the curves exceeds the threshold, then in some cases it is still possible to obtain a feasible RF estimation by smoothing the profile, which may remove false minima and maxima, followed by a repetition of the estimation steps (see Figure 5). A Gaussian kernel of size 7 with σ = 1.0 was applied in our study, and a maximum of three smoothing iterations was performed. As an additional constraint, we require that at least two minima and two maxima are detected and that the RF estimate lies within an appropriate range of valid values (between 1/3 and 1/25). As a final step, the RF image is smoothed by averaging over a window of size w = 49 pixels.

Curved Gabor Filters

Definition

The Gabor filter is a two-dimensional filter formed by the combination of a cosine with a two-dimensional Gaussian function, and it has the general form:

g(x, y, θ, f, σ_x, σ_y) = exp( -(1/2) · (x_θ^2 / σ_x^2 + y_θ^2 / σ_y^2) ) · cos(2π · f · x_θ)    (1)

x_θ = x · cos θ + y · sin θ    (2)

y_θ = -x · sin θ + y · cos θ    (3)

In (1), the Gabor filter is centered at the origin. θ denotes the rotation of the filter relative to the x-axis and f the local frequency. σ_x and σ_y signify the standard deviations of the Gaussian function along the x- and y-axis, respectively. A curved Gabor filter is computed by mapping a curved region to a two-dimensional array, followed by a point-wise multiplication with an unrotated GF (θ = 0). 
The curved region C_{i,j} centered at (i, j) consists of 2p + 1 parallel lines and 2q + 1 points along each line. The corresponding array A_{i,j} contains the interpolated gray values (see right image in Figure 2). The enhanced pixel E(i, j) is obtained by:

E(i, j, A_{i,j}, f_{(i,j)}) = Σ_{k=0}^{2p+1} Σ_{l=0}^{2q+1} A(k, l) · g(k - p, l - q, 0, f_{(i,j)}, σ_x, σ_y)    (4)

Finally, differences in brightness are compensated by a locally adaptive normalization (using the formula from [26], who proposed a global normalization as a first step before the OF and RF estimation and the Gabor filtering). In our experiments, the desired mean and standard deviation were set to 127.5 and 100, respectively, and neighboring pixels within a circle of radius r = 16 were considered.

Parameter Choice

In the case of image enhancement by straight GFs, [26] and other authors (e.g. [41]) use quadratic windows of size 11 × 11 pixels and choices for the standard deviations of the Gaussian of σ_x = σ_y = 4.0, or very similar values. We agree with their argument that the selection of σ_x and σ_y involves a trade-off between an ineffective filter (for small values of σ_x and σ_y) and the risk of creating artifacts in the enhanced image (for large values of σ_x and σ_y). Moreover, the same reasoning holds true for the size of the window. In analogy to the situation during the RF estimation (see Figure 3), enlarging a rectangular window in a region with curved ridge and valley flow increases the risk of introducing noise and, as a consequence, false structures into the enhanced image. The main advantage of curved Gabor filters is that they enable the choice of larger curved regions and of high values for σ_x and σ_y without creating spurious features (see Figures 6 and 7). In this way, curved Gabor filters have a much greater smoothing potential than traditional GFs. For curved GFs, the only limitation is the accuracy of the OF and RF estimation, and no longer the filter itself. 
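Equations (1) to (4) translate directly into code. The sketch below is illustrative, not the paper's implementation: the function names are ours, and the double sum of Equation 4 is written as a sum over the whole (2p + 1) × (2q + 1) array A_{i,j} of interpolated gray values:

```python
import numpy as np

def gabor(x, y, theta, f, sigma_x, sigma_y):
    """Two-dimensional Gabor filter, Equations (1)-(3)."""
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-0.5 * (x_t**2 / sigma_x**2 + y_t**2 / sigma_y**2))
            * np.cos(2 * np.pi * f * x_t))

def enhance_pixel(A, f, sigma_x, sigma_y):
    """Equation (4): point-wise product of the gray values of a curved
    region (array A of shape (2p+1, 2q+1)) with an unrotated Gabor
    filter (theta = 0), summed to give the enhanced pixel value."""
    P, Q = A.shape
    p, q = (P - 1) // 2, (Q - 1) // 2
    k = np.arange(P)[:, None]   # line index within the curved region
    l = np.arange(Q)[None, :]   # point index along each line
    return float(np.sum(A * gabor(k - p, l - q, 0.0, f, sigma_x, sigma_y)))
```

For example, with p = 16, q = 32, σ_x = 16 and σ_y = 32 (the parameters of Figure 7), A would hold the interpolated gray values of a 33 × 65 curved region, and f the local ridge frequency estimated in Section 3.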
GFs are utilized for text segmentation @cite_42 , character recognition @cite_2 , font recognition @cite_18 , and license plate recognition @cite_28 .
{ "abstract": [ "In this paper, a video processing methodology for a field-programmable gate array (FPGA)-based license plate recognition (LPR) system is researched. The raster scan video is used as an input with low memory utilization. During the design, Gabor filter, threshold, and connected component labeling (CCL) algorithms are used to obtain license plate region. This region is segmented into disjoint characters for the character recognition phase, where the self-organizing map (SOM) neural network is used to identify the characters. The system is portable and relatively faster than computer-based recognition systems. The robustness of the system has been tested with a large database acquired from parking lots and a highway. The memory requirements are uniquely designed to be extremely low, which enables usage of smaller FPGAs. The resulting hardware is suitable for applications where cost, compactness, and efficiency are system design constraints.", "The font recognition of Chinese characters is an important part in OCR (optical character recognition) system. It is also a main technical challenge due to the similarity of different fonts. The reconstruction quality of layout depends on the accuracy of font recognition. However, the prevalent method of font recognition is predominant font recognition based on the fact that the most layouts are printed in a single font, which makes it impossible to reconstruct the original layout. In this paper, an improved font recognition method of individual character is proposed. The approach consists of three steps. In the first step, the guidance fonts are acquired based on Gabor filter optimized with genetic algorithm (GA). Then a single font recognizer is applied to get the matching results with the help of the guidance fonts and the layout knowledge of font typesetting. Finally, the post-processing of font recognition is fulfilled according to the layout knowledge. 
Experiments were carried out with samples from newspaper and magazines and the results show that the method is of immense practical and theoretical value.", "There is a considerable interest in designing automatic systems that will scan a given paper document and store it on electronic media for easier storage, manipulation, and access. Most documents contain graphics and images in addition to text. Thus, the document image has to be segmented to identify the text regions, so that OCR techniques may be applied only to those regions. In this paper, we present a simple method for document image segmentation in which text regions in a given document image are automatically identified. The proposed segmentation method for document images is based on a multichannel filtering approach to texture segmentation. The text in the document is considered as a textured region. Nontext contents in the document, such as blank spaces, graphics, and pictures, are considered as regions with different textures. Thus, the problem of segmenting document images into text and nontext regions can be posed as a texture segmentation problem. Two-dimensional Gabor filters are used to extract texture features for each of these regions. These filters have been extensively used earlier for a variety of texture segmentation tasks. Here we apply the same filters to the document image segmentation problem. Our segmentation method does not assume any a priori knowledge about the content or font styles of the document, and is shown to work even for skewed images and handwritten text. Results of the proposed segmentation method are presented for several test images which demonstrate the robustness of this technique.", "Optical Character Recognition (OCR) is a classical research field and has become one of most thriving applications in the field of pattern recognition. Feature extraction is a key step in the process of OCR, which in fact is a deciding factor of the accuracy of the system. 
This paper proposes a novel and robust technique for feature extraction using Gabor Filters, to be employed in the OCR. The use of 2D Gabor filters is investigated and features are extracted using these filters. The technique generally extracts fifty features based on global texture analysis and can be further extended to increase the number of features if necessary. The algorithm is well explained and is found that the proposed method demonstrated better performance in efficiency. In addition, experimental results show that the method gains high recognition rate and cost reasonable average running time." ], "cite_N": [ "@cite_28", "@cite_18", "@cite_42", "@cite_2" ], "mid": [ "2115639406", "2131069463", "2019273017", "2158030311" ] }
Curved Gabor Filters for Fingerprint Image Enhancement
Fingerprint Image Enhancement Image quality [2] has a big impact on the performance of a fingerprint recognition system (see e.g. [51] and [8]). The goal of image enhancement is to improve the overall performance by optimally preparing input images for later processing stages. Most systems extract minutiae from fingerprints [41], and the presence of noise can interfere with the extraction. As a result, true minutiae may be missed and false minutiae may be detected, both having a negative effect on the recognition rate. In order to avoid these two types of errors, image enhancement aims at improving the clarity of the ridge and valley structure. With special consideration to the typical types of noise occurring in fingerprints, an image enhancement method should have three important properties: • reconnect broken ridges, e.g. caused by dryness of the finger or scars; • separate falsely conglutinated ridges, e.g. caused by wetness of the finger or smudges; • preserve ridge endings and bifurcations. Enhancement of low quality images (occurring e.g. in all databases of FVC2004 [40]) and very low quality prints like latents (e.g. NIST SD27 [20]) is still a challenge. Techniques based on contextual filtering are widely used for fingerprint image enhancement [41] and a major difficulty lies in an automatic and reliable estimation of the local context, i.e. the local orientation and ridge frequency as input of the GF. Failure to correctly estimate the local context can lead to the creation of artifacts in the enhanced image [32] which consequently tends to increase the number of identification or verification errors. For low quality images, there is a substantial risk that an image enhancement step may impair the recognition performance as shown in [17] (results are cited in Table 1 of Section 5). 
The situation is even worse for very low quality images, and current approaches focus on minimizing the efforts required by a human expert for manually marking information in images of latent prints (see [7] and [56]). The present work addresses these challenges as follows: in the next section, two state-of-the-art methods for orientation field estimation are combined for obtaining an estimation which is more robust than each individual one. In Section 3, curved regions are introduced and employed for achieving a reliable ridge frequency estimation. Based on the curved regions, in Section 4 curved Gabor filters are defined. In Section 5, all previously described methods are combined for the enhancement of low quality images from FVC2004, and performance improvements in comparison to existing methods are shown. The paper concludes with a discussion of the advantages and drawbacks of this approach, as well as possible future directions, in Section 6. Orientation Field Estimation In order to obtain a robust orientation field (OF) estimation for low quality images, two estimation methods are combined: the line sensor method [23] and the gradients based method [4] (with a smoothing window size of 33 pixels). The OFs are compared at each pixel. If the angle between both estimations is smaller than a threshold (here t = 15°), the orientation of the combined OF is set to the average of the two. Otherwise, the pixel is marked as missing. Afterwards, all inner gaps are reconstructed and, up to a radius of 16 pixels, the orientation of the outer proximity is extrapolated, both as described in [23]. Results of verification tests on all 12 databases of FVC2000 to FVC2004 [38,39,40] showed a better performance of the combined OF applied for contextual image enhancement than each individual OF estimation [22].
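The fusion rule just described (average the two estimations where they agree within the threshold, otherwise mark the pixel as missing) can be sketched as follows. This is a simplified illustration, not the authors' implementation; treating orientations as undirected (defined modulo π) and averaging via angle doubling are assumptions on our part:

```python
import numpy as np

def combine_orientation_fields(of_a, of_b, t_deg=15.0):
    """Fuse two orientation fields (radians, undirected, i.e. defined mod pi).

    Where the two estimations agree within t_deg degrees, the combined
    orientation is their average; elsewhere the pixel is marked as missing
    (NaN), to be reconstructed or extrapolated in a later step.
    """
    # Smallest angle between two undirected orientations (range [0, pi/2]).
    diff = np.abs(of_a - of_b) % np.pi
    diff = np.minimum(diff, np.pi - diff)

    # Average via angle doubling, so that e.g. orientations just below pi
    # and just above 0 average to ~0 rather than pi/2.
    avg = 0.5 * np.arctan2(np.sin(2 * of_a) + np.sin(2 * of_b),
                           np.cos(2 * of_a) + np.cos(2 * of_b)) % np.pi

    return np.where(diff < np.deg2rad(t_deg), avg, np.nan)
```

The angle-doubling step matters because fingerprint orientations wrap around at π; a naive arithmetic mean of two nearly identical orientations on opposite sides of the wrap-around would be off by roughly 90°.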
As the OF was the only parameter that was changed, lower equal error rates can be interpreted as an indicator that the combined OF contains fewer estimation errors than each of the individual estimations. Simultaneously, we regard the combined OF as a segmentation of the fingerprint image into foreground (endowed with an OF estimation) and background. The information fusion strategy for obtaining the combined OF was inspired by [44]. The two OF estimation methods can be regarded as judges or experts, and the orientation estimation for a certain pixel as a judgment. If the angle between both estimations is greater than a threshold t, the judgments are considered as incoherent, and consequently not averaged. If an estimation method provides no estimation for a pixel, it is regarded as abstaining. Orientation estimations for pixels with incoherent or abstaining judges are reconstructed or extrapolated from pixels with coherent judgments. Ridge Frequency Estimation Using Curved Regions In [26], a ridge frequency (RF) estimation method was proposed which divides a fingerprint image into blocks of 16 × 16 pixels, and for each block, it obtains an estimation from an oriented window of 32 × 16 pixels by a method called 'x-signature' which detects peaks in the gray-level profile. Failures to estimate an RF, e.g. due to the presence of noise, curvature or minutiae, are handled by interpolation, and outliers are removed by low-pass filtering. In our experience, this method works well for good and medium quality prints, but it encounters serious difficulties obtaining a useful estimation when dealing with low quality prints.
In this section, we propose an RF estimation method following the same basic idea (to obtain an estimation from the gray-level profile), but which bears several improvements in comparison to [26]: (i) the profile is derived from a curved region which is different in shape and size from the oriented window of the x-signature method, (ii) we introduce an information criterion (IC) for the reliability of an estimation, (iii) depending on the IC, the gray-level profile is smoothed with a Gaussian kernel, (iv) both minima and maxima are taken into account, and (v) the inverse median is applied for the RF estimate. If the clarity of the ridge and valley structure is disturbed by noise, e.g. caused by dryness or wetness of the finger, an oriented window of 32 × 16 pixels may not contain a sufficient amount of information for an RF estimation (e.g. see Figure 3, left image). In regions where the ridges run almost parallel, this may be compensated by averaging over larger distances along the lines. However, if the ridges are curved, the enlargement of the rectangular window does not improve the consistency of the gray-level profile, because the straight lines cut neighboring ridges and valleys. In order to overcome this limitation, we propose curved regions which adapt their shape to the local orientation. It is important to take the curvature of ridges and valleys into account, because about 94% of all fingerprints belong to the classes right loop, whorl, left loop and tented arch [30], so that they contain core points and therefore regions of high curvature. Curved Regions Let (x_c, y_c) be the center of a curved region which consists of 2p + 1 parallel curves and 2q + 1 points along each curve. The midpoints (depicted as blue squares in Figure 2) of the parallel curves are initialised by following both directions orthogonal to the orientation for p steps of one pixel unit, starting from the central pixel (x_c, y_c) (red square).
At each step, the direction is adjusted so that it is orthogonal to the local orientation. If the change between two consecutive local orientations is greater than a threshold, the presence of a core point is assumed, and the iteration is stopped. Since all x- and y-coordinates are decimal values, the local orientation is interpolated. Nearest-neighbor and bilinear interpolation using the orientation of the four neighboring pixels are examined in Section 5. Starting from each of the 2p + 1 midpoints, curves are obtained by following the respective local orientation and its opposite direction (local orientation θ + π) for q steps of one pixel unit, respectively. Curvature estimation As a by-product of constructing curved regions, a pixel-wise estimate of the local curvature is obtained using the central curve of each region (cf. the red curves in Figures 2 and 3). The estimate is computed by adding up the absolute values of differences in orientation between the central point of the curve and the two end points. The outcome is an estimate of the curvature, i.e. the integrated change in orientation along a curve (here: of 65 pixel steps). For an illustration, see Figure 4. The curvature estimate can be useful for singular point detection, fingerprint alignment or as additional information at the matching stage. Ridge Frequency Estimation Gray values at the decimal coordinates of the curve points are interpolated. In this study, three interpolation methods are taken into account: nearest neighbor, bilinear and bicubic [47] (considering 1, 4 and 16 neighboring pixels for the gray value interpolation, respectively). The gray-level profile is produced by averaging the interpolated gray values along each curve (in our experiments, the minimum number of valid points is set to 50% of the points per line). Next, local extrema are detected and the distances between consecutive minima and consecutive maxima are stored.
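The construction of a curved region's point grid can be sketched as follows. Here `orientation(x, y)` is a hypothetical callable standing in for interpolation of the estimated orientation field, and the sketch omits the core-point stopping criterion described above:

```python
import numpy as np

def walk(x, y, direction, n):
    """Take n one-pixel steps from (x, y); `direction(x, y)` returns a unit
    step vector and is re-evaluated after every step."""
    pts = []
    for _ in range(n):
        dx, dy = direction(x, y)
        x, y = x + dx, y + dy
        pts.append((x, y))
    return pts

def curved_region(xc, yc, p, q, orientation):
    """Point grid of a curved region: (2p+1) curves with (2q+1) points each.

    Midpoints are found by walking p steps orthogonally to the local
    orientation in both directions; each curve then follows the local
    orientation (and its opposite, theta + pi) for q steps from its midpoint.
    """
    def ortho(sign):   # unit step orthogonal to the local orientation
        return lambda x, y: (-sign * np.sin(orientation(x, y)),
                             sign * np.cos(orientation(x, y)))
    def along(sign):   # unit step along the local orientation
        return lambda x, y: (sign * np.cos(orientation(x, y)),
                             sign * np.sin(orientation(x, y)))
    midpoints = (walk(xc, yc, ortho(-1), p)[::-1]
                 + [(xc, yc)]
                 + walk(xc, yc, ortho(+1), p))
    return [walk(mx, my, along(-1), q)[::-1] + [(mx, my)]
            + walk(mx, my, along(+1), q)
            for (mx, my) in midpoints]
```

With a constant orientation field the curved region degenerates to a straight (2p+1) × (2q+1) grid, which makes the stepping logic easy to check.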
The RF estimate is the reciprocal of the median of the inter-extrema distances (IEDs). The proportion p_maxmin of the largest IED to the smallest IED is regarded as an information criterion for the reliability of the estimation: large values of p_maxmin are considered as an indicator for the occurrence of false extrema in the profile, or for the absence of true extrema (see Figure 5). Only RF estimations where p_maxmin is below a threshold are regarded as valid (for the tests in Section 5, we used the threshold p_maxmin ≤ 1.5). If p_maxmin of the gray-level profile produced by averaging along the curves exceeds the threshold, then in some cases it is still possible to obtain a feasible RF estimation by smoothing the profile, which may remove false minima and maxima, followed by a repetition of the estimation steps (see Figure 5). A Gaussian with a size of 7 and σ = 1.0 was applied in our study, and a maximum number of three smoothing iterations was performed. As an additional constraint, we require that at least two minima and two maxima are detected and that the RF estimate is located within an appropriate range of valid values (between 1/3 and 1/25). As a final step, the RF image is smoothed by averaging over a window of size w = 49 pixels. Curved Gabor Filters Definition The Gabor filter is a two-dimensional filter formed by the combination of a cosine with a two-dimensional Gaussian function, and it has the general form: g(x, y, θ, f, σ_x, σ_y) = exp(−(1/2) · (x_θ²/σ_x² + y_θ²/σ_y²)) · cos(2π · f · x_θ) (1), with x_θ = x · cos θ + y · sin θ (2) and y_θ = −x · sin θ + y · cos θ (3). In (1), the Gabor filter is centered at the origin. θ denotes the rotation of the filter relative to the x-axis and f the local frequency. σ_x and σ_y signify the standard deviations of the Gaussian function along the x- and y-axis, respectively. A curved Gabor filter is computed by mapping a curved region to a two-dimensional array, followed by a point-wise multiplication with an unrotated GF (θ = 0).
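The profile-based RF estimation loop (extrema detection, the p_maxmin criterion, optional Gaussian smoothing with retry, and the validity range between 1/3 and 1/25) can be sketched as follows; the plateau-free extrema detection is a simplification of what a production implementation would need:

```python
import numpy as np

def rf_from_profile(profile, thr_pmaxmin=1.5, max_smoothings=3):
    """Estimate a ridge frequency from a 1-D gray-level profile.

    Detect local minima and maxima, collect inter-extrema distances (IEDs),
    use p_maxmin = max(IED) / min(IED) as a validity criterion, optionally
    smooth the profile with a size-7, sigma = 1.0 Gaussian and retry, and
    return 1 / median(IED). Returns None if no valid estimation is found.
    """
    x = np.asarray(profile, dtype=float)
    g = np.exp(-0.5 * (np.arange(-3, 4) / 1.0) ** 2)  # Gaussian, size 7
    g /= g.sum()
    for _ in range(max_smoothings + 1):
        d = np.diff(x)
        maxima = np.where((d[:-1] > 0) & (d[1:] < 0))[0] + 1
        minima = np.where((d[:-1] < 0) & (d[1:] > 0))[0] + 1
        ieds = np.concatenate([np.diff(maxima), np.diff(minima)])
        if len(maxima) >= 2 and len(minima) >= 2:
            p_maxmin = ieds.max() / ieds.min()
            rf = 1.0 / np.median(ieds)
            if p_maxmin <= thr_pmaxmin and 1 / 25 <= rf <= 1 / 3:
                return rf
        x = np.convolve(x, g, mode="same")  # smooth and retry
    return None
```

On a clean periodic profile the loop succeeds without smoothing; on a noisy one it falls back to up to three smoothing iterations before giving up.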
The curved region C_{i,j} centered in (i, j) consists of 2p + 1 parallel lines and 2q + 1 points along each line. The corresponding array A_{i,j} contains the interpolated gray values (see right image in Figure 2). The enhanced pixel E(i, j) is obtained by: E(i, j, A_{i,j}, f(i,j)) = Σ_{k=0}^{2p} Σ_{l=0}^{2q} A(k, l) · g(k − p, l − q, 0, f(i,j), σ_x, σ_y) (4). Finally, differences in brightness are compensated by a locally adaptive normalization (using the formula from [26], who proposed a global normalization as a first step before the OF and RF estimation and the Gabor filtering). In our experiments, the desired mean and standard deviation were set to 127.5 and 100, respectively, and neighboring pixels within a circle of radius r = 16 were considered. Parameter Choice In the case of image enhancement by straight GFs, [26] and other authors (e.g. [41]) use square windows of size 11 × 11 pixels and choices for the standard deviation of the Gaussian of σ_x = σ_y = 4.0, or very similar values. We agree with their argument that the selection of σ_x and σ_y involves a trade-off between an ineffective filter (for small values of σ_x and σ_y) and the risk of creating artifacts in the enhanced image (for large values of σ_x and σ_y). The same reasoning holds true for the size of the window. In analogy to the situation during the RF estimation (see Figure 3), enlarging a rectangular window in a region with curved ridge and valley flow increases the risk of introducing noise and, as a consequence, false structures into the enhanced image. The main advantage of curved Gabor filters is that they enable the choice of larger curved regions and high values for σ_x and σ_y without creating spurious features (see Figures 6 and 7). In this way, curved Gabor filters have a much greater smoothing potential in comparison to traditional GFs. For curved GFs, the only limitation is the accuracy of the OF and RF estimation, and no longer the filter itself.
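Equations (1) through (4) can be transcribed almost directly. In this sketch, A is the gray-value array of a curved region mapped to a rectangle, the filter is applied unrotated (θ = 0) because the mapping has already straightened the ridges, and the default σ_x = 16, σ_y = 32 mirror the filter parameters reported in the experiments:

```python
import numpy as np

def gabor(x, y, theta, f, sx, sy):
    """Gabor filter value, Equations (1)-(3)."""
    xt = x * np.cos(theta) + y * np.sin(theta)
    yt = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-0.5 * (xt**2 / sx**2 + yt**2 / sy**2))
            * np.cos(2 * np.pi * f * xt))

def enhance_pixel(A, f_ij, sx=16.0, sy=32.0):
    """Enhanced value E(i, j) per Equation (4).

    A is the (2p+1) x (2q+1) array of interpolated gray values of the
    curved region; the cosine oscillates along the first (across-ridge)
    axis, since the lines of the region run along the ridges.
    """
    rows, cols = A.shape
    p, q = (rows - 1) // 2, (cols - 1) // 2
    k = np.arange(rows)[:, None]   # line index (across ridges)
    l = np.arange(cols)[None, :]   # point index along each line
    return float(np.sum(A * gabor(k - p, l - q, 0.0, f_ij, sx, sy)))
```

A quick sanity check: a ridge pattern whose frequency matches f produces a strongly positive response, and the phase-inverted pattern a negative one.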
The authors of [64] applied a straight GF for fingerprint enhancement and proposed to use a circle instead of a square as the window underlying the GF in order to reduce the number of artifacts in the enhanced image. Similarly, we tested an ellipse with major axis 2q + 1 and minor axis 2p + 1 instead of the full curved region, i.e. in Equation 4, only those interpolated gray values of array A_{i,j} are considered which are located within the ellipse. In our tests, both variants achieved similar results on the FVC2004 databases (see Table 1). As opposed to [64], the term 'circular GF' is used in [59] and [60] for denoting the case σ_x = σ_y. Results Test Setup Two algorithms were employed for matching the original and the enhanced gray-scale images. The matcher "BOZORTH3" is based on the NIST biometric image software package (NBIS) [53], applying MINDTCT for minutiae extraction and BOZORTH3 for template matching. The matcher "VeriFinger 5.0 Grayscale" is derived from the Neurotechnology VeriFinger 5.0 SDK. For the verification tests, we follow the FVC protocol in order to ensure comparability of the results with [17] and other researchers. 2800 genuine and 4950 impostor recognition attempts were conducted for each of the FVC databases. Equal error rates (EERs) were calculated as described in [38]. Verification tests Curved Gabor filters were applied for enhancing the images of FVC2004 [40]. Several choices for σ_x, σ_y, the size of the curved region and interpolation methods were tested. EERs for some combinations of filter parameters are reported in Table 1. [Figure 7 caption: The detail on the left (impression 1 of finger 90 in FVC2004 database 4) is enhanced by Gabor filtering using rectangular windows (center) and curved regions (right). Both filters resort to the same orientation field estimation and the same ridge frequency estimation based on curved regions. Filter parameters are also identical (p = 16, q = 32, σ_x = 16, σ_y = 32); the only difference between the two is the shape of the window underlying the Gabor filter. Artifacts are created by the straight filter which may impair the recognition performance, and a true minutia is deleted (highlighted by a red circle).] Other choices for the size of the curved region and the standard deviations of the Gaussian resulted in similar EERs. Relating to the interpolation method, only results for nearest neighbor are listed, because replacing it by bilinear or bicubic interpolation did not lead to a noticeable improvement in our tests. In order to compare the enhancement performance of curved Gabor filters for low quality images with existing enhancement methods, matcher BOZORTH3 was applied to the enhanced images of FVC2004, which enables the comparison with the traditional GF proposed in [26], short time Fourier transform (STFT) analysis [9] and pyramid-based image filtering [17] (see Table 1). Furthermore, in order to isolate the influence of the OF estimation and segmentation on the verification performance, we tested the x-signature method [26] for RF estimation and straight Gabor filters in combination with our OF estimation and segmentation. EERs are listed in the second and sixth row of Table 1. In comparison to the results of the cited implementation, which applied an OF estimation and segmentation as described in [26], this led to lower EERs on DB1 and DB2, a higher EER on DB3 and a similar performance on DB4. In comparison to the performance on the original images, an improvement was observed on the first database and a deterioration on DB3 and DB4. Visual inspection of the enhanced images on DB3 showed that the increase of the EER was caused largely by incorrect RF estimates of the x-signature method. Moreover, we combined minutiae templates which were extracted by MINDTCT from images enhanced by curved Gabor filters and from images enhanced by anisotropic diffusion filtering.
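The verification protocol above reports equal error rates. An EER can be computed from genuine and impostor score lists as sketched below; this is a generic approximation (minimum over candidate thresholds of the larger of FNMR and FMR), not the exact interpolation prescribed by the FVC protocol [38]:

```python
import numpy as np

def eer(genuine, impostor):
    """Equal error rate: the operating point where the false non-match rate
    (FNMR, on genuine scores) equals the false match rate (FMR, on impostor
    scores). Higher score = better match."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best = 1.0
    for t in thresholds:
        fnmr = np.mean(genuine < t)       # genuine attempts rejected
        fmr = np.mean(impostor >= t)      # impostor attempts accepted
        best = min(best, max(fnmr, fmr))  # approximate crossing point
    return best
```

For perfectly separated score distributions the EER is 0; for heavily overlapping ones it approaches 0.5.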
A detailed representation of this combination can be found in [21], and results are listed in Table 1. [Table 1 caption: EERs on the FVC2004 databases [40] which have been achieved so far using MINDTCT and BOZORTH3. Parentheses indicate that only a small foreground area of the fingerprints was useful for recognition. Results listed in the top four rows are cited from [17]. Parameters of the curved Gabor filters: size of the curved region, interpolation method (NN = nearest neighbor), considered pixels (F = full curved region, E = elliptical), standard deviations of the Gaussian.] The matcher referred to as VeriFinger 5.0 Grayscale has a built-in enhancement step which cannot be turned off, so that the results for the original images in Table 1 are obtained by matching images which were also enhanced (by an undisclosed procedure of the commercial software). Results using this matcher were included in order to show that even in the face of this built-in enhancement, the proposed image smoothing by curved Gabor filters leads to considerable improvements in verification performance. Conclusions The present work describes a method for ridge frequency estimation using curved regions and image enhancement by curved Gabor filters. For low quality fingerprint images, improvements of the matching performance in comparison to existing enhancement methods were shown. Besides matching accuracy, speed is an important factor for fingerprint recognition systems. Results given in Section 5 were achieved using a proof-of-concept implementation written in Java. In a first test of a GPU-based implementation on an Nvidia Tesla C2070, computing the RF image using curved regions of size 33 × 65 pixels took about 320 ms and applying curved Gabor filters of size 65 × 33 pixels took about 280 ms. The RF estimation can be further accelerated if an estimate is computed only, e.g., for every fourth pixel horizontally and vertically instead of pixel-wise.
These computing times indicate the practicability of the presented method for on-line verification systems. In our opinion, the potential for further improvements of the matching performance rests upon a better OF estimation. The combined method delineated in Section 2 produces fewer erroneous estimations than each of the individual methods, but there is still room for improvement. As long as OF estimation errors occur, it is necessary to choose the size of the curved Gabor filters and the standard deviations of the Gaussian envelope with care in order to balance strong image smoothing while avoiding spurious features. Future work includes an exploration of a locally adaptive choice of these parameters, depending on the local image quality, and e.g. the local reliability of the OF estimation. In addition, it will be of interest to apply the curved region based RF estimation and curved Gabor filters to latent fingerprints.
3,338
1104.4298
2019919182
Gabor filters (GFs) play an important role in many application areas for the enhancement of various types of images and the extraction of Gabor features. For the purpose of enhancing curved structures in noisy images, we introduce curved GFs that locally adapt their shape to the direction of flow. These curved GFs enable the choice of filter parameters that increase the smoothing power without creating artifacts in the enhanced image. In this paper, curved GFs are applied to the curved ridge and valley structures of low-quality fingerprint images. First, we combine two orientation-field estimation methods in order to obtain a more robust estimation for very noisy images. Next, curved regions are constructed by following the respective local orientation. Subsequently, these curved regions are used for estimating the local ridge frequency. Finally, curved GFs are defined based on curved regions, and they apply the previously estimated orientations and ridge frequencies for the enhancement of low-quality fingerprint images. Experimental results on the FVC2004 databases show improvements of this approach in comparison with state-of-the-art enhancement methods.
Objects can be detected by GFs @cite_49 , e.g. cars @cite_25 . Moreover, GFs can be used for performing content-based image retrieval @cite_22 .
{ "abstract": [ "This paper provides a content-based digital image retrieval system. Our CBIR system uses the query by example technique and the relevance feedback. A Gabor filter based image feature extraction is proposed first. Thus, 3D image feature vectors using even-symmetric 2D Gabor filters are computed for the images of a large collection and for the input image. At each step an input image is selected, from the output set obtained in the previous step, and the most similar images from the collection are retrieved.", "This paper describes a car recognition system using a camera as sensor to recognize a moving car. There are four main stages in this process: object detection, object segmentation, feature extraction using Gabor filters and Gabor jet matching to the car database. The experiment was conducted for various types of car with various illuminations (daylight and night). The result shows that Gabor filter responses give good feature representation. The system achieved an average recognition rate of 93.88 .", "Abstract This paper pertains to the detection of objects located in complex backgrounds. A feature-based segmentation approach to the object detection problem is pursued, where the features are computed over multiple spatial orientations and frequencies. The method proceeds as follows: a given image is passed through a bank of even-symmetric Gabor filters. A selection of these filtered images is made and each (selected) filtered image is subjected to a nonlinear (sigmoidal like) transformation. Then, a measure of texture energy is computed in a window around each transformed image pixel. The texture energy (“Gabor features”) and their spatial locations are inputted to a squared-error clustering algorithm. 
This clustering algorithm yields a segmentation of the original image—it assigns to each pixel in the image a cluster label that identifies the amount of mean local energy the pixel possesses across different spatial orientations and frequencies. The method is applied to a number of visual and infrared images, each one of which contains one or more objects. The region corresponding to the object is usually segmented correctly, and a unique signature of “Gabor features” is typically associated with the segment containing the object(s) of interest. Experimental results are provided to illustrate the usefulness of this object detection method in a number of problem domains. These problems arise in IVHS, military reconnaissance, fingerprint analysis, and image database query." ], "cite_N": [ "@cite_22", "@cite_25", "@cite_49" ], "mid": [ "2150254808", "1659951205", "1969840923" ] }
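The object-detection pipeline described in the abstract above (bank of even-symmetric Gabor filters, a sigmoid-like nonlinearity, then a local texture-energy measure per pixel) can be sketched as follows. The filter size, frequencies, tanh nonlinearity and box-window energy are our own illustrative choices, and the final clustering step is omitted:

```python
import numpy as np

def gabor_kernel(size, theta, f, sigma=4.0):
    """Even-symmetric (cosine) Gabor kernel."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xt = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * f * xt)

def gabor_energy_features(img, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4),
                          freqs=(0.1, 0.2), alpha=0.25, win=8):
    """Per-pixel texture features: filter bank -> tanh nonlinearity ->
    local average of magnitudes (a simple stand-in for 'texture energy')."""
    feats = []
    box = np.ones((win, win)) / win**2
    F_img = np.fft.fft2(img)
    for theta in thetas:
        for f in freqs:
            k = gabor_kernel(17, theta, f)
            # FFT-based (circular) convolution keeps the sketch short.
            resp = np.real(np.fft.ifft2(F_img * np.fft.fft2(k, img.shape)))
            nl = np.tanh(alpha * resp)   # sigmoid-like transformation
            energy = np.real(np.fft.ifft2(np.fft.fft2(np.abs(nl))
                                          * np.fft.fft2(box, img.shape)))
            feats.append(energy)
    return np.stack(feats, axis=-1)  # H x W x (len(thetas) * len(freqs))
```

The resulting feature vectors would then be fed to a squared-error clustering algorithm, as in the cited work, to label pixels by their dominant orientation/frequency energy.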
Curved Gabor Filters for Fingerprint Image Enhancement
Fingerprint Image Enhancement Image quality [2] has a big impact on the performance of a fingerprint recognition system (see e.g. [51] and [8]). The goal of image enhancement is to improve the overall performance by optimally preparing input images for later processing stages. Most systems extract minutiae from fingerprints [41], and the presence of noise can interfere with the extraction. As a result, true minutiae may be missed and false minutiae may be detected, both having a negative effect on the recognition rate. In order to avoid these two types of errors, image enhancement aims at improving the clarity of the ridge and valley structure. With special consideration to the typical types of noise occurring in fingerprints, an image enhancement method should have three important properties: • reconnect broken ridges, e.g. caused by dryness of the finger or scars; • separate falsely conglutinated ridges, e.g. caused by wetness of the finger or smudges; • preserve ridge endings and bifurcations. Enhancement of low quality images (occurring e.g. in all databases of FVC2004 [40]) and very low quality prints like latents (e.g. NIST SD27 [20]) is still a challenge. Techniques based on contextual filtering are widely used for fingerprint image enhancement [41] and a major difficulty lies in an automatic and reliable estimation of the local context, i.e. the local orientation and ridge frequency as input of the GF. Failure to correctly estimate the local context can lead to the creation of artifacts in the enhanced image [32] which consequently tends to increase the number of identification or verification errors. For low quality images, there is a substantial risk that an image enhancement step may impair the recognition performance as shown in [17] (results are cited in Table 1 of Section 5). 
The situation is even worse for very low quality images, and current approaches focus on minimizing the efforts required by a human expert for manually marking information in images of latent prints (see [7] and [56]). The present work addresses these challenges as follows: in the next section, two state-of-the-art methods for orientation field estimation are combined for obtaining an estimation which is more robust than each individual one. In Section 3, curved regions are introduced and employed for achieving a reliable ridge frequency estimation. Based on the curved regions, in Section 4 curved Gabor filters are defined. In Section 5, all previously described methods are combined for the enhancement of low quality images from FVC2004, and performance improvements in comparison to existing methods are shown. The paper concludes with a discussion of the advantages and drawbacks of this approach, as well as possible future directions, in Section 6. Orientation Field Estimation In order to obtain a robust orientation field (OF) estimation for low quality images, two estimation methods are combined: the line sensor method [23] and the gradients based method [4] (with a smoothing window size of 33 pixels). The OFs are compared at each pixel. If the angle between both estimations is smaller than a threshold (here t = 15°), the orientation of the combined OF is set to the average of the two. Otherwise, the pixel is marked as missing. Afterwards, all inner gaps are reconstructed and, up to a radius of 16 pixels, the orientation of the outer proximity is extrapolated, both as described in [23]. Results of verification tests on all 12 databases of FVC2000 to FVC2004 [38,39,40] showed a better performance of the combined OF applied for contextual image enhancement than each individual OF estimation [22].
As the OF was the only parameter that was changed, lower equal error rates can be interpreted as an indicator that the combined OF contains fewer estimation errors than each of the individual estimations. Simultaneously, we regard the combined OF as a segmentation of the fingerprint image into foreground (endowed with an OF estimation) and background. The information fusion strategy for obtaining the combined OF was inspired by [44]. The two OF estimation methods can be regarded as judges or experts, and the orientation estimation for a certain pixel as a judgment. If the angle between both estimations is greater than a threshold t, the judgments are considered as incoherent, and consequently not averaged. If an estimation method provides no estimation for a pixel, it is regarded as abstaining. Orientation estimations for pixels with incoherent or abstaining judges are reconstructed or extrapolated from pixels with coherent judgments. Ridge Frequency Estimation Using Curved Regions In [26], a ridge frequency (RF) estimation method was proposed which divides a fingerprint image into blocks of 16 × 16 pixels, and for each block, it obtains an estimation from an oriented window of 32 × 16 pixels by a method called 'x-signature' which detects peaks in the gray-level profile. Failures to estimate an RF, e.g. due to the presence of noise, curvature or minutiae, are handled by interpolation, and outliers are removed by low-pass filtering. In our experience, this method works well for good and medium quality prints, but it encounters serious difficulties obtaining a useful estimation when dealing with low quality prints.
In this section, we propose an RF estimation method following the same basic idea -to obtain an estimation from the gray-level profile -but which bears several improvements in comparison to [26]: (i) the profile is derived from a curved region which is different in shape and size from the oriented window of the x-signature method, (ii) we introduce an information criterion (IC) for the reliability of an estimation, (iii) depending on the IC, the gray-level profile is smoothed with a Gaussian kernel, (iv) both minima and maxima are taken into account, and (v) the inverse median is applied for the RF estimate. If the clarity of the ridge and valley structure is disturbed by noise, e.g. caused by dryness or wetness of the finger, an oriented window of 32 × 16 pixels may not contain a sufficient amount of information for an RF estimation (e.g. see Figure 3, left image). In regions where the ridges run almost parallel, this may be compensated by averaging over larger distances along the lines. However, if the ridges are curved, the enlargement of the rectangular window does not improve the consistency of the gray-level profile, because the straight lines cut neighboring ridges and valleys. In order to overcome this limitation, we propose curved regions which adapt their shape to the local orientation. It is important to take the curvature of ridges and valleys into account, because about 94% of all fingerprints belong to the classes right loop, whorl, left loop and tented arch [30], so that they contain core points and therefore regions of high curvature. Curved Regions Let (x_c, y_c) be the center of a curved region which consists of 2p + 1 parallel curves and 2q + 1 points along each curve. The midpoints (depicted as blue squares in Figure 2) of the parallel curves are initialised by following both directions orthogonal to the orientation for p steps of one pixel unit, starting from the central pixel (x_c, y_c) (red square). 
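A sketch of this midpoint construction (our own illustrative code; the step-wise re-adjustment of the direction and the core-point stop rule are detailed next in the text, and `orientation_at` stands for an interpolated orientation-field lookup at decimal coordinates):

```python
import math

def curve_midpoints(xc, yc, orientation_at, p, stop_deg=45.0):
    """Midpoints of the 2p+1 parallel curves of a curved region (sketch).

    Walks p one-pixel steps in both directions orthogonal to the local
    orientation, re-adjusting the direction at every step; the walk stops
    early if the orientation changes abruptly (core point assumed).
    The stop threshold is an illustrative choice, not the paper's value.
    """
    midpoints = [(xc, yc)]
    for sign in (+1.0, -1.0):  # both directions orthogonal to the orientation
        x, y = xc, yc
        prev = orientation_at(xc, yc)
        pts = []
        for _ in range(p):
            o = orientation_at(x, y)
            d = abs(o - prev)
            if min(d, math.pi - d) > math.radians(stop_deg):
                break  # abrupt orientation change: core point assumed
            # one pixel step orthogonal to the local orientation
            x += sign * math.cos(o + math.pi / 2)
            y += sign * math.sin(o + math.pi / 2)
            pts.append((x, y))
            prev = o
        midpoints = pts[::-1] + midpoints if sign < 0 else midpoints + pts
    return midpoints
```

For a constant orientation field this returns 2p + 1 collinear midpoints; if a core point is encountered after fewer than p valid steps on one side, the region simply ends early there.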
At each step, the direction is adjusted so that it is orthogonal to the local orientation. If the change between two consecutive local orientations is greater than a threshold, the presence of a core point is assumed, and the iteration is stopped. Since all x- and y-coordinates are decimal values, the local orientation is interpolated. Nearest neighbour and bilinear interpolation using the orientation of the four neighboring pixels are examined in Section 5. Starting from each of the 2p + 1 midpoints, curves are obtained by following the respective local orientation and its opposite direction (local orientation θ + π) for q steps of one pixel unit, respectively. Curvature Estimation As a by-product of constructing curved regions, a pixel-wise estimate of the local curvature is obtained using the central curve of each region (cf. the red curves in Figures 2 and 3). The estimate is computed by adding up the absolute values of the differences in orientation between the central point of the curve and the two end points. The outcome is an estimate of the curvature, i.e. the integrated change in orientation along a curve (here: of 65 pixel steps). For an illustration, see Figure 4. The curvature estimate can be useful for singular point detection, fingerprint alignment or as additional information at the matching stage. Ridge Frequency Estimation Gray values at the decimal coordinates of the curve points are interpolated. In this study, three interpolation methods are taken into account: nearest neighbor, bilinear and bicubic [47] (considering 1, 4 and 16 neighboring pixels for the gray value interpolation, respectively). The gray-level profile is produced by averaging the interpolated gray values along each curve (in our experiments, the minimum number of valid points is set to 50% of the points per line). Next, local extrema are detected and the distances between consecutive minima and consecutive maxima are stored. 
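This extrema bookkeeping, together with the validity rules given in the next paragraph (a threshold of 1.5 on the max/min distance proportion, a minimum number of extrema, and an RF within the valid range), might be sketched as follows (function names and the simplified strict-neighbor extrema test are ours):

```python
import statistics

def inter_extrema_distances(profile):
    """Distances between consecutive minima and between consecutive
    maxima of a gray-level profile (simple strict-neighbor extrema test)."""
    minima, maxima = [], []
    for i in range(1, len(profile) - 1):
        if profile[i] < profile[i - 1] and profile[i] < profile[i + 1]:
            minima.append(i)
        elif profile[i] > profile[i - 1] and profile[i] > profile[i + 1]:
            maxima.append(i)
    ieds = [b - a for a, b in zip(minima, minima[1:])]
    ieds += [b - a for a, b in zip(maxima, maxima[1:])]
    return ieds

def ridge_frequency(ieds, thr=1.5, rf_min=1.0 / 25, rf_max=1.0 / 3):
    """RF = 1 / median(IEDs), or None if the estimation is invalid:
    too few distances (standing in for 'at least two minima and two
    maxima'), p_maxmin above the threshold, or RF out of range."""
    if len(ieds) < 2:
        return None
    p_maxmin = max(ieds) / min(ieds)
    if p_maxmin > thr:
        return None  # false extrema suspected; caller may smooth the profile and retry
    rf = 1.0 / statistics.median(ieds)
    return rf if rf_min <= rf <= rf_max else None
```

In the paper's pipeline, an invalid estimate triggers up to three Gaussian smoothings of the profile before the pixel is given up.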
The RF estimate is the reciprocal of the median of the inter-extrema distances (IEDs). The proportion p_maxmin of the largest IED to the smallest IED is regarded as an information criterion (IC). Large values of p_maxmin are considered as an indicator for the occurrence of false extrema in the profile or for the absence of true extrema (see Figure 5). Only RF estimations where p_maxmin is below a threshold are regarded as valid (for the tests in Section 5, we required p_maxmin ≤ 1.5). If p_maxmin of the gray-level profile produced by averaging along the curves exceeds the threshold, then in some cases it is still possible to obtain a feasible RF estimation by smoothing the profile, which may remove false minima and maxima, followed by a repetition of the estimation steps (see Figure 5). A Gaussian kernel of size 7 and σ = 1.0 was applied in our study, and a maximum number of three smoothing iterations was performed. As an additional constraint, we require that at least two minima and two maxima are detected and that the RF estimate is located within an appropriate range of valid values (between 1/3 and 1/25). As a final step, the RF image is smoothed by averaging over a window of size w = 49 pixels. Curved Gabor Filters Definition The Gabor filter is a two-dimensional filter formed by the combination of a cosine with a two-dimensional Gaussian function, and it has the general form: g(x, y, θ, f, σ_x, σ_y) = exp(−(1/2) · (x_θ²/σ_x² + y_θ²/σ_y²)) · cos(2π · f · x_θ) (1), with x_θ = x · cos θ + y · sin θ (2) and y_θ = −x · sin θ + y · cos θ (3). In (1), the Gabor filter is centered at the origin. θ denotes the rotation of the filter related to the x-axis and f the local frequency. σ_x and σ_y signify the standard deviations of the Gaussian function along the x- and y-axis, respectively. A curved Gabor filter is computed by mapping a curved region to a two-dimensional array, followed by a point-wise multiplication with an unrotated GF (θ = 0). 
The curved region C_{i,j} centered at (i, j) consists of 2p + 1 parallel lines and 2q + 1 points along each line. The corresponding array A_{i,j} contains the interpolated gray values (see right image in Figure 2). The enhanced pixel E(i, j) is obtained by: E(i, j, A_{i,j}, f_{(i,j)}) = Σ_{k=0}^{2p} Σ_{l=0}^{2q} A(k, l) · g(k − p, l − q, 0, f_{(i,j)}, σ_x, σ_y) (4). Finally, differences in brightness are compensated by a locally adaptive normalization (using the formula from [26], where a global normalization was proposed as a first step before the OF and RF estimation, and the Gabor filtering). In our experiments, the desired mean and standard deviation were set to 127.5 and 100, respectively, and neighboring pixels within a circle of radius r = 16 were considered. Parameter Choice In the case of image enhancement by straight GFs, [26] and other authors (e.g. [41]) use square windows of size 11 × 11 pixels and choices for the standard deviations of the Gaussian of σ_x = σ_y = 4.0, or very similar values. We agree with their arguments that the selection of σ_x and σ_y involves a trade-off between an ineffective filter (for small values of σ_x and σ_y) and the risk of creating artifacts in the enhanced image (for large values of σ_x and σ_y). Moreover, the same reasoning holds true for the size of the window. In analogy to the situation during the RF estimation (see Figure 3), enlarging a rectangular window in a region with curved ridge and valley flow increases the risk of introducing noise and, as a consequence, false structures into the enhanced image. The main advantage of curved Gabor filters is that they enable the choice of larger curved regions and high values for σ_x and σ_y without creating spurious features (see Figures 6 and 7). In this way, curved Gabor filters have a much greater smoothing potential in comparison to traditional GFs. For curved GFs, the only limitation is the accuracy of the OF and RF estimation, and no longer the filter itself. 
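Equations (1)-(4) translate almost directly into code; the sketch below evaluates an unrotated Gabor filter on the (2p+1) × (2q+1) array A of interpolated gray values (names are ours, and A is assumed to be indexed as A[k][l]):

```python
import math

def gabor(x, y, theta, f, sigma_x, sigma_y):
    """Gabor filter of Equation (1) with the rotated coordinates (2)-(3)."""
    x_t = x * math.cos(theta) + y * math.sin(theta)
    y_t = -x * math.sin(theta) + y * math.cos(theta)
    envelope = math.exp(-0.5 * (x_t ** 2 / sigma_x ** 2 + y_t ** 2 / sigma_y ** 2))
    return envelope * math.cos(2.0 * math.pi * f * x_t)

def enhance_pixel(A, p, q, f, sigma_x, sigma_y):
    """Equation (4): point-wise product of the curved-region gray values
    with an unrotated Gabor filter (theta = 0) centered at (p, q)."""
    return sum(A[k][l] * gabor(k - p, l - q, 0.0, f, sigma_x, sigma_y)
               for k in range(2 * p + 1) for l in range(2 * q + 1))
```

Because the curved region already follows the ridge flow, the filter itself needs no rotation; all geometry is absorbed into the mapping that fills A.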
The authors of [64] applied a straight GF for fingerprint enhancement and proposed to use a circle instead of a square as the window underlying the GF in order to reduce the number of artifacts in the enhanced image. Similarly, we tested an ellipse with major axis 2q + 1 and minor axis 2p + 1 instead of the full curved region, i.e. in Equation 4, only those interpolated gray values of array A_{i,j} are considered which are located within the ellipse. In our tests, both variants achieved similar results on the FVC2004 databases (see Table 1). As opposed to [64], the term 'circular GF' is used in [59] and [60] for denoting the case σ_x = σ_y. Results Test Setup Two algorithms were employed for matching the original and the enhanced gray-scale images. The matcher "BOZORTH3" is based on the NIST biometric image software package (NBIS) [53], applying MINDTCT for minutiae extraction and BOZORTH3 for template matching. The matcher "VeriFinger 5.0 Grayscale" is derived from the Neurotechnology VeriFinger 5.0 SDK. For the verification tests, we follow the FVC protocol in order to ensure comparability of the results with [17] and other researchers. 2800 genuine and 4950 impostor recognition attempts were conducted for each of the FVC databases. Equal error rates (EERs) were calculated as described in [38]. Verification Tests Curved Gabor filters were applied for enhancing the images of FVC2004 [40]. Several choices for σ_x, σ_y, the size of the curved region and the interpolation method were tested. Figure 7 shows an example: the detail on the left (impression 1 of finger 90 in FVC2004 database 4) is enhanced by Gabor filtering using rectangular windows (center) and curved regions (right). Both filters resort to the same orientation field estimation and the same ridge frequency estimation based on curved regions. Filter parameters are also identical (p = 16, q = 32, σ_x = 16, σ_y = 32); the only difference between the two is the shape of the window underlying the Gabor filter. 
Artifacts are created by the straight filter which may impair the recognition performance, and a true minutia is deleted (highlighted by a red circle). EERs for some combinations of filter parameters are reported in Table 1. Other choices for the size of the curved region and the standard deviations of the Gaussian resulted in similar EERs. Regarding the interpolation method, only results for nearest neighbor are listed, because replacing it by bilinear or bicubic interpolation did not lead to a noticeable improvement in our tests. In order to compare the enhancement performance of curved Gabor filters for low quality images with existing enhancement methods, the matcher BOZORTH3 was applied to the enhanced images of FVC2004, which enables the comparison with the traditional GF proposed in [26], short time Fourier transform (STFT) analysis [9] and pyramid-based image filtering [17] (see Table 1). Furthermore, in order to isolate the influence of the OF estimation and segmentation on the verification performance, we tested the x-signature method [26] for RF estimation and straight Gabor filters in combination with our OF estimation and segmentation. EERs are listed in the second and sixth row of Table 1. In comparison to the results of the cited implementation, which applied an OF estimation and segmentation as described in [26], this led to lower EERs on DB1 and DB2, a higher EER on DB3 and a similar performance on DB4. In comparison to the performance on the original images, an improvement was observed on the first database and a deterioration on DB3 and DB4. Visual inspection of the enhanced images on DB3 showed that the increase of the EER was caused largely by incorrect RF estimates of the x-signature method. Moreover, we combined minutiae templates which were extracted by MINDTCT from images enhanced by curved Gabor filters and from images enhanced by anisotropic diffusion filtering. 
A detailed representation of this combination can be found in [21] and results are listed in Table 1 [40]. (Table 1 caption: Parentheses indicate that only a small foreground area of the fingerprints was useful for recognition. Results listed in the top four rows are cited from [17]. Parameters of the curved Gabor filters: size of the curved region, interpolation method (NN = nearest neighbor), considered pixels (F = full curved region, E = elliptical), and standard deviations of the Gaussian.) These are the lowest EERs on the FVC2004 databases which have been achieved so far using MINDTCT and BOZORTH3. The matcher referred to as VeriFinger 5.0 Grayscale has a built-in enhancement step which cannot be turned off, so that the results for the original images in Table 1 are obtained on matching images which were also enhanced (by an undisclosed procedure of the commercial software). Results using this matcher were included in order to show that even in the face of this built-in enhancement, the proposed image smoothing by curved Gabor filters leads to considerable improvements in verification performance. Conclusions The present work describes a method for ridge frequency estimation using curved regions and image enhancement by curved Gabor filters. For low quality fingerprint images, improvements of the matching performance in comparison to existing enhancement methods were shown. Besides matching accuracy, speed is an important factor for fingerprint recognition systems. Results given in Section 5 were achieved using a proof-of-concept implementation written in Java. In a first test of a GPU-based implementation on a Nvidia Tesla C2070, computing the RF image using curved regions of size 33 × 65 pixels took about 320 ms, and applying curved Gabor filters of size 65 × 33 pixels took about 280 ms. The RF estimation can be further accelerated if an estimate is computed, e.g., only for every fourth pixel horizontally and vertically instead of a pixel-wise computation. 
These computing times indicate the practicability of the presented method for on-line verification systems. In our opinion, the potential for further improvements of the matching performance rests upon a better OF estimation. The combined method delineated in Section 2 produces fewer erroneous estimations than each of the individual methods, but there is still room for improvement. As long as OF estimation errors occur, it is necessary to choose the size of the curved Gabor filters and the standard deviations of the Gaussian envelope with care, in order to balance strong image smoothing against the risk of creating spurious features. Future work includes an exploration of a locally adaptive choice of these parameters, depending on the local image quality and, e.g., the local reliability of the OF estimation. In addition, it will be of interest to apply the curved region based RF estimation and curved Gabor filters to latent fingerprints.
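For reference, the equal error rate used throughout Section 5 can be computed from lists of genuine and impostor similarity scores roughly as follows (a simplified sketch, not the exact FVC procedure of [38]):

```python
def equal_error_rate(genuine, impostor):
    """Sweep decision thresholds over similarity scores (higher = more
    likely genuine) and return the rate where the false match rate and
    the false non-match rate are (approximately) equal."""
    best_gap, eer = None, None
    for t in sorted(set(genuine) | set(impostor)):
        fnmr = sum(s < t for s in genuine) / len(genuine)    # false non-match rate
        fmr = sum(s >= t for s in impostor) / len(impostor)  # false match rate
        gap = abs(fmr - fnmr)
        if best_gap is None or gap < best_gap:
            best_gap, eer = gap, (fmr + fnmr) / 2.0
    return eer
```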
3,338
1104.4298
2019919182
Gabor filters (GFs) play an important role in many application areas for the enhancement of various types of images and the extraction of Gabor features. For the purpose of enhancing curved structures in noisy images, we introduce curved GFs that locally adapt their shape to the direction of flow. These curved GFs enable the choice of filter parameters that increase the smoothing power without creating artifacts in the enhanced image. In this paper, curved GFs are applied to the curved ridge and valley structures of low-quality fingerprint images. First, we combine two orientation-field estimation methods in order to obtain a more robust estimation for very noisy images. Next, curved regions are constructed by following the respective local orientation. Subsequently, these curved regions are used for estimating the local ridge frequency. Finally, curved GFs are defined based on curved regions, and they apply the previously estimated orientations and ridge frequencies for the enhancement of low-quality fingerprint images. Experimental results on the FVC2004 databases show improvements of this approach in comparison with state-of-the-art enhancement methods.
Gabor functions play an important role in biometric recognition. They are employed for many physical or behavioral traits including iris @cite_26 , face @cite_53 , facial expression @cite_43 , speaker @cite_33 , speech @cite_60 , emotion recognition in speech @cite_54 , gait @cite_3 , handwriting @cite_39 , palmprint @cite_36 , and fingerprint recognition.
{ "abstract": [ "Algorithms developed by the author for recognizing persons by their iris patterns have now been tested in many field and laboratory trials, producing no false matches in several million comparison tests. The recognition principle is the failure of a test of statistical independence on iris phase structure encoded by multi-scale quadrature wavelets. The combinatorial complexity of this phase information across different persons spans about 249 degrees of freedom and generates a discrimination entropy of about 3.2 bits/mm^2 over the iris, enabling real-time decisions about personal identity with extremely high confidence. The high confidence levels are important because they allow very large databases to be searched exhaustively (one-to-many "identification mode") without making false matches, despite so many chances. Biometrics that lack this property can only survive one-to-one ("verification") or few comparisons. The paper explains the iris recognition algorithms and presents results of 9.1 million comparisons among eye images from trials in Britain, the USA, Japan, and Korea.", "For text-independent speaker identification a prominent combination is to use Gaussian mixture models (GMM) for classification while relying on Mel-frequency cepstral coefficients (MFCC) as features. To take temporal information into account the time difference of features of adjacent speech frames are appended to the initial features. In this paper we investigate the applicability of spectro-temporal features obtained from Gabor-filters and present an algorithm for optimizing the possible parameters. Simulation results on a database show that spectro-temporal features achieve higher recognition rates than purely temporal features for clean speech as well as for disturbed speech.", "In this paper, we investigate the speech feature extraction problem in the noisy environment. A novel approach based on Gabor filtering and tensor factorization is proposed. 
From recent physiological and psychoacoustic experimental results, localized spectro-temporal features are essential for auditory perception. We employ 2D-Gabor functions with different scales and directions to analyze the localized patches of power spectrogram, by which speech signal can be encoded as a general higher order tensor. Then Nonnegative Tensor PCA with sparse constraint is used to learn the projection matrices from multiple interrelated feature subspaces and extract the robust features. Experimental results confirm that our proposed method can improve the speech recognition performance, especially in noisy environment, compared with traditional speech feature extraction methods.", "Biometrics-based personal identification is regarded as an effective method for automatically recognizing, with a high confidence, a person's identity. This paper presents a new biometric approach to online personal identification using palmprint technology. In contrast to the existing methods, our online palmprint identification system employs low-resolution palmprint images to achieve effective personal identification. The system consists of two parts: a novel device for online palmprint image acquisition and an efficient algorithm for fast palmprint recognition. A robust image coordinate system is defined to facilitate image alignment for feature extraction. In addition, a 2D Gabor phase encoding scheme is proposed for palmprint feature extraction and representation. The experimental results demonstrate the feasibility of the proposed system.", "Elastic graph matching has been proposed as a practical implementation of dynamic link matching, which is a neural network with dynamically evolving links between a reference model and an input image. Each node of the graph contains features that characterize the neighborhood of its location in the image. 
The elastic graph matching usually consists of two consecutive steps, namely a matching with a rigid grid, followed by a deformation of the grid, which is actually the elastic part. The deformation step is introduced in order to allow for some deformation, rotation, and scaling of the object to be matched. This method is applied here to the authentication of human faces where candidates claim an identity that is to be checked. The matching error as originally suggested is not powerful enough to provide satisfying results in this case. We introduce an automatic weighting of the nodes according to their significance. We also explore the significance of the elastic deformation for an application of face-based person authentication. We compare performance results obtained with and without the second matching step. Results show that the deformation step slightly increases the performance, but has lower influence than the weighting of the nodes. The best results are obtained with the combination of both aspects. The results provided by the proposed method compare favorably with two methods that require a prior geometric face normalization, namely the synergetic and eigenface approaches.", "We present new methods that extract characteristic features from speech magnitude spectrograms. Two of the presented approaches have been found particularly efficient in the process of automatic stress and emotion classification. In the first approach, the spectrograms are sub-divided into ERB frequency bands and the average energy for each band is calculated. In the second approach, the spectrograms are passed through a bank of 12 log-Gabor filters and the outputs are averaged and passed through an optimal feature selection procedure based on mutual information criteria. The proposed methods were tested using single vowels, words and sentences from SUSAS data base with 3 classes of stress, and spontaneous speech recordings made by psychologists (ORI) with 5 emotional classes. 
The classification results based on the Gaussian mixture model show correct classification rates of 40-81% for different SUSAS data sets and 40-53.4% for the ORI data base.", "Traditional image representations are not suited to conventional classification methods such as the linear discriminant analysis (LDA) because of the undersample problem (USP): the dimensionality of the feature space is much higher than the number of training samples. Motivated by the successes of the two-dimensional LDA (2DLDA) for face recognition, we develop a general tensor discriminant analysis (GTDA) as a preprocessing step for LDA. The benefits of GTDA, compared with existing preprocessing methods such as the principal components analysis (PCA) and 2DLDA, include the following: 1) the USP is reduced in subsequent classification by, for example, LDA, 2) the discriminative information in the training tensors is preserved, and 3) GTDA provides stable recognition rates because the alternating projection optimization algorithm to obtain a solution of GTDA converges, whereas that of 2DLDA does not. We use human gait recognition to validate the proposed GTDA. The averaged gait images are utilized for gait representation. Given the popularity of Gabor-function-based image decompositions for image understanding and object recognition, we develop three different Gabor-function-based image representations: 1) GaborD is the sum of Gabor filter responses over directions, 2) GaborS is the sum of Gabor filter responses over scales, and 3) GaborSD is the sum of Gabor filter responses over scales and directions. The GaborD, GaborS, and GaborSD representations are applied to the problem of recognizing people from their averaged gait images. 
A large number of experiments were carried out to evaluate the effectiveness (recognition rate) of gait recognition based on first obtaining a Gabor, GaborD, GaborS, or GaborSD image representation, then using GTDA to extract features and, finally, using LDA for classification. The proposed methods achieved good performance for gait recognition based on image sequences from the University of South Florida (USF) HumanID Database. Experimental comparisons are made with nine state-of-the-art classification methods in gait recognition.", "In this paper, we describe a new method to identify the writer of Chinese handwritten documents. There are many methods for signature verification or writer identification, but most of them require segmentation or connected component analysis. They are content dependent identification methods, as signature verification requires the writer to write the same text (e.g. his name). In our new method, we take the handwriting as an image containing some special texture, and writer identification is regarded as texture identification. This is a content independent method. We apply the well-established 2D Gabor filtering technique to extract features of such textures and a weighted Euclidean distance classifier to fulfil the identification task. Experiments are made using Chinese handwritings from 17 different people and very promising results were achieved.", "Facial expression classification has achieved good results in the past using manually extracted facial points convolved with Gabor filters. In this paper, classification performance was tested on feature vectors composed of facial points convolved with Gabor and Log-Gabor filters, as well as with whole image pixel representation of static facial images. Principal Component Analysis was performed on these feature vectors, and classification accuracies compared using Linear Discriminant Analysis. 
Experiments carried out on two databases show comparable performance between Gabor and Log-Gabor filters, with a classification accuracy of around 85%. This was achieved on low-resolution images, without the need to precisely locate facial points on each face image." ], "cite_N": [ "@cite_26", "@cite_33", "@cite_60", "@cite_36", "@cite_53", "@cite_54", "@cite_3", "@cite_39", "@cite_43" ], "mid": [ "1974821667", "2103438437", "2134905334", "2114214092", "2148486884", "2050748687", "2154624311", "2100202779", "2098532194" ] }
Curved Gabor Filters for Fingerprint Image Enhancement
Fingerprint Image Enhancement Image quality [2] has a big impact on the performance of a fingerprint recognition system (see e.g. [51] and [8]). The goal of image enhancement is to improve the overall performance by optimally preparing input images for later processing stages. Most systems extract minutiae from fingerprints [41], and the presence of noise can interfere with the extraction. As a result, true minutiae may be missed and false minutiae may be detected, both having a negative effect on the recognition rate. In order to avoid these two types of errors, image enhancement aims at improving the clarity of the ridge and valley structure. With special consideration to the typical types of noise occurring in fingerprints, an image enhancement method should have three important properties: • reconnect broken ridges, e.g. caused by dryness of the finger or scars; • separate falsely conglutinated ridges, e.g. caused by wetness of the finger or smudges; • preserve ridge endings and bifurcations. Enhancement of low quality images (occurring e.g. in all databases of FVC2004 [40]) and very low quality prints like latents (e.g. NIST SD27 [20]) is still a challenge. Techniques based on contextual filtering are widely used for fingerprint image enhancement [41] and a major difficulty lies in an automatic and reliable estimation of the local context, i.e. the local orientation and ridge frequency as input of the GF. Failure to correctly estimate the local context can lead to the creation of artifacts in the enhanced image [32] which consequently tends to increase the number of identification or verification errors. For low quality images, there is a substantial risk that an image enhancement step may impair the recognition performance as shown in [17] (results are cited in Table 1 of Section 5). 
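The contextual-filtering idea referred to above can be summarized in a generic loop (a schematic sketch; the four callables are placeholders for the concrete estimation and filtering methods developed in the remainder of the paper):

```python
def contextual_enhancement(image, estimate_orientation, estimate_frequency, filter_pixel):
    """Generic contextual filtering (sketch): every pixel is filtered with
    a filter tuned to its local context, here the local orientation and
    the local ridge frequency. `image` is a 2D list of gray values."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            theta = estimate_orientation(image, i, j)   # local orientation
            freq = estimate_frequency(image, i, j)      # local ridge frequency
            out[i][j] = filter_pixel(image, i, j, theta, freq)
    return out
```

Errors in either context estimate propagate directly into the filter response, which is why robust OF and RF estimation is addressed before the filter itself is defined.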
The situation is even worse for very low quality images, and current approaches focus on minimizing the efforts required by a human expert for manually marking information in images of latent prints (see [7] and [56]). The present work addresses these challenges as follows: in the next section, two state-of-the-art methods for orientation field estimation are combined for obtaining an estimation which is more robust than each individual one. In Section 3, curved regions are introduced and employed for achieving a reliable ridge frequency estimation. Based on the curved regions, in Section 4 curved Gabor filters are defined. In Section 5, all previously described methods are combined for the enhancement of low quality images from FVC2004 and performance improvements in comparison to existing methods are shown. The paper concludes with a discussion of the advantages and drawbacks of this approach, as well as possible future directions in Section 6. Orientation Field Estimation In order to obtain a robust orientation field (OF) estimation for low quality images, two estimation methods are combined: the line sensor method [23] and the gradient-based method [4] (with a smoothing window size of 33 pixels). The OFs are compared at each pixel. If the angle between both estimations is smaller than a threshold (here t = 15°), the orientation of the combined OF is set to the average of the two. Otherwise, the pixel is marked as missing. Afterwards, all inner gaps are reconstructed and, up to a radius of 16 pixels, the orientation of the outer proximity is extrapolated, both as described in [23]. Results of verification tests on all 12 databases of FVC2000 to 2004 [38,39,40] showed a better performance of the combined OF applied for contextual image enhancement than each individual OF estimation [22]. 
The OF being the only parameter that was changed, lower equal error rates can be interpreted as an indicator that the combined OF contains fewer estimation errors than each of the individual estimations. Simultaneously, we regard the combined OF as a segmentation of the fingerprint image into foreground (endowed with an OF estimation) and background. The information fusion strategy for obtaining the combined OF was inspired by [44]. The two OF estimation methods can be regarded as judges or experts and the orientation estimation for a certain pixel as a judgment. If the angle between both estimations is greater than a threshold t, the judgments are considered as incoherent, and consequently not averaged. If an estimation method provides no estimation for a pixel, it is regarded as abstaining. Orientation estimations for pixels with incoherent or abstaining judges are reconstructed or extrapolated from pixels with coherent judgments. Ridge Frequency Estimation Using Curved Regions In [26], a ridge frequency (RF) estimation method was proposed which divides a fingerprint image into blocks of 16 × 16 pixels, and for each block, it obtains an estimation from an oriented window of 32 × 16 pixels by a method called 'x-signature' which detects peaks in the gray-level profile. Failures to estimate an RF, e.g. caused by the presence of noise, curvature or minutiae, are handled by interpolation, and outliers are removed by low-pass filtering. In our experience, this method works well for good and medium quality prints, but it encounters serious difficulties in obtaining a useful estimation when dealing with low quality prints. 
In this section, we propose an RF estimation method following the same basic idea -to obtain an estimation from the gray-level profile -but which bears several improvements in comparison to [26]: (i) the profile is derived from a curved region which is different in shape and size from the oriented window of the x-signature method, (ii) we introduce an information criterion (IC) for the reliability of an estimation, (iii) depending on the IC, the gray-level profile is smoothed with a Gaussian kernel, (iv) both minima and maxima are taken into account, and (v) the inverse median is applied for the RF estimate. If the clarity of the ridge and valley structure is disturbed by noise, e.g. caused by dryness or wetness of the finger, an oriented window of 32 × 16 pixels may not contain a sufficient amount of information for an RF estimation (e.g. see Figure 3, left image). In regions where the ridges run almost parallel, this may be compensated by averaging over larger distances along the lines. However, if the ridges are curved, the enlargement of the rectangular window does not improve the consistency of the gray-level profile, because the straight lines cut neighboring ridges and valleys. In order to overcome this limitation, we propose curved regions which adapt their shape to the local orientation. It is important to take the curvature of ridges and valleys into account, because about 94% of all fingerprints belong to the classes right loop, whorl, left loop and tented arch [30], so that they contain core points and therefore regions of high curvature. Curved Regions Let (x_c, y_c) be the center of a curved region which consists of 2p + 1 parallel curves and 2q + 1 points along each curve. The midpoints (depicted as blue squares in Figure 2) of the parallel curves are initialised by following both directions orthogonal to the orientation for p steps of one pixel unit, starting from the central pixel (x_c, y_c) (red square). 
At each step, the direction is adjusted so that it is orthogonal to the local orientation. If the change between two consecutive local orientations is greater than a threshold, the presence of a core point is assumed and the iteration is stopped. Since all x- and y-coordinates are decimal values, the local orientation is interpolated. Nearest-neighbor and bilinear interpolation using the orientations of the four neighboring pixels are examined in Section 5. Starting from each of the 2p + 1 midpoints, curves are obtained by following the respective local orientation and its opposite direction (local orientation θ + π) for q steps of one pixel unit, respectively.

Curvature Estimation

As a by-product of constructing curved regions, a pixel-wise estimate of the local curvature is obtained using the central curve of each region (cf. the red curves in Figures 2 and 3). The estimate is computed by adding up the absolute values of the differences in orientation between the central point of the curve and the two end points. The outcome is an estimate of the curvature, i.e. the integrated change in orientation along a curve (here: over 65 pixel steps). For an illustration, see Figure 4. The curvature estimate can be useful for singular point detection, fingerprint alignment or as additional information at the matching stage.

Ridge Frequency Estimation

Gray values at the decimal coordinates of the curve points are interpolated. In this study, three interpolation methods are taken into account: nearest neighbor, bilinear and bicubic [47] (considering 1, 4 and 16 neighboring pixels for the gray value interpolation, respectively). The gray-level profile is produced by averaging the interpolated gray values along each curve (in our experiments, the minimum number of valid points is set to 50% of the points per line). Next, local extrema are detected, and the distances between consecutive minima and consecutive maxima are stored.
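The profile-based frequency estimate can be sketched as follows. The thresholds mirror those quoted in this paper (p_maxmin at most 1.5, valid RF range 1/25 to 1/3, Gaussian of size 7 with σ = 1.0, at most three smoothing iterations); the extrema detection is deliberately simplistic and all names are our own illustrative choices.

```python
import numpy as np

def local_extrema(x):
    """Indices of strict local minima and maxima of a 1-D profile."""
    mins, maxs = [], []
    for i in range(1, len(x) - 1):
        if x[i] < x[i - 1] and x[i] < x[i + 1]:
            mins.append(i)
        elif x[i] > x[i - 1] and x[i] > x[i + 1]:
            maxs.append(i)
    return mins, maxs

def ridge_frequency(profile, p_maxmin_thr=1.5, max_smooth=3):
    """RF = 1 / median(inter-extrema distances), with validity checks:
    at least two minima and two maxima, ratio of largest to smallest
    IED at most p_maxmin_thr, and RF within [1/25, 1/3]. Otherwise the
    profile is smoothed with a Gaussian (size 7, sigma = 1.0) and the
    estimation is repeated, up to max_smooth times. Returns None when
    no valid estimate is found."""
    g = np.exp(-0.5 * np.arange(-3, 4) ** 2)
    g /= g.sum()
    x = np.asarray(profile, dtype=float)
    for _ in range(max_smooth + 1):
        mins, maxs = local_extrema(x)
        if len(mins) >= 2 and len(maxs) >= 2:
            ied = np.diff(mins).tolist() + np.diff(maxs).tolist()
            if max(ied) / min(ied) <= p_maxmin_thr:
                rf = 1.0 / float(np.median(ied))
                return rf if 1 / 25 <= rf <= 1 / 3 else None
        x = np.convolve(x, g, mode="same")
    return None
```

On a clean sinusoidal profile with a 9-pixel period this returns 1/9, while a flat (noise-free but structureless) profile yields no valid estimate.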
The RF estimate is the reciprocal of the median of the inter-extrema distances (IEDs). The proportion p_maxmin of the largest IED to the smallest IED is regarded as an information criterion for the reliability of the estimation. Large values of p_maxmin are considered an indicator for the occurrence of false extrema in the profile or for the absence of true extrema (see Figure 5). Only RF estimations where p_maxmin is below a threshold are regarded as valid (for the tests in Section 5, we used the threshold p_maxmin ≤ 1.5). If p_maxmin of the gray-level profile produced by averaging along the curves exceeds the threshold, then in some cases it is still possible to obtain a feasible RF estimation by smoothing the profile, which may remove false minima and maxima, followed by a repetition of the estimation steps (see Figure 5). A Gaussian kernel of size 7 and σ = 1.0 was applied in our study, and a maximum of three smoothing iterations was performed. As an additional constraint, we require that at least two minima and two maxima are detected and that the RF estimate lies within an appropriate range of valid values (between 1/3 and 1/25). As a final step, the RF image is smoothed by averaging over a window of size w = 49 pixels.

Curved Gabor Filters

Definition

The Gabor filter is a two-dimensional filter formed by the combination of a cosine with a two-dimensional Gaussian function, and it has the general form:

g(x, y, θ, f, σ_x, σ_y) = exp(−(1/2) · (x_θ² / σ_x² + y_θ² / σ_y²)) · cos(2π · f · x_θ)   (1)
x_θ = x · cos θ + y · sin θ   (2)
y_θ = −x · sin θ + y · cos θ   (3)

In (1), the Gabor filter is centered at the origin. θ denotes the rotation of the filter relative to the x-axis and f the local frequency. σ_x and σ_y signify the standard deviations of the Gaussian function along the x- and y-axis, respectively. A curved Gabor filter is computed by mapping a curved region to a two-dimensional array, followed by a point-wise multiplication with an unrotated GF (θ = 0).
The curved region C_{i,j} centered at (i, j) consists of 2p + 1 parallel lines and 2q + 1 points along each line. The corresponding array A_{i,j} contains the interpolated gray values (see right image in Figure 2). The enhanced pixel E(i, j) is obtained by:

E(i, j, A_{i,j}, f_{(i,j)}) = Σ_{k=0}^{2p} Σ_{l=0}^{2q} A_{i,j}(k, l) · g(k − p, l − q, 0, f_{(i,j)}, σ_x, σ_y)   (4)

Finally, differences in brightness are compensated by a locally adaptive normalization (using the formula from [26], where a global normalization was proposed as a first step before the OF and RF estimation and the Gabor filtering). In our experiments, the desired mean and standard deviation were set to 127.5 and 100, respectively, and neighboring pixels within a circle of radius r = 16 were considered.

Parameter Choice

In the case of image enhancement by straight GFs, [26] and other authors (e.g. [41]) use square windows of size 11 × 11 pixels and standard deviations of the Gaussian of σ_x = σ_y = 4.0, or very similar values. We agree with their arguments that the selection of σ_x and σ_y involves a trade-off between an ineffective filter (for small values of σ_x and σ_y) and the risk of creating artifacts in the enhanced image (for large values of σ_x and σ_y). The same reasoning holds true for the size of the window. In analogy to the situation during the RF estimation (see Figure 3), enlarging a rectangular window in a region with curved ridge and valley flow increases the risk of introducing noise and, as a consequence, false structures into the enhanced image. The main advantage of curved Gabor filters is that they enable the choice of larger curved regions and high values for σ_x and σ_y without creating spurious features (see Figures 6 and 7). In this way, curved Gabor filters have a much greater smoothing potential in comparison to traditional GFs. For curved GFs, the only limitation is the accuracy of the OF and RF estimation, and no longer the filter itself.
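Equations (1) to (4) translate almost directly into code. The sketch below assumes the rows of the region array run across the ridges (so that the cosine of the unrotated filter oscillates perpendicular to the ridge flow); the function names and this index convention are our own.

```python
import numpy as np

def gabor(x, y, theta, f, sigma_x, sigma_y):
    """Gabor filter of Eqs. (1)-(3), centered at the origin."""
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-0.5 * (x_t ** 2 / sigma_x ** 2 + y_t ** 2 / sigma_y ** 2))
    return envelope * np.cos(2 * np.pi * f * x_t)

def curved_gabor_pixel(A, f_ij, sigma_x, sigma_y):
    """Enhanced pixel in the spirit of Eq. (4): A is the (2p+1) x (2q+1)
    array of interpolated gray values of the curved region; the filter is
    applied unrotated (theta = 0) because the region itself already
    follows the ridge flow."""
    P, Q = A.shape
    p, q = (P - 1) // 2, (Q - 1) // 2
    k = np.arange(P)[:, None] - p   # across-ridge index, centered
    l = np.arange(Q)[None, :] - q   # along-ridge index, centered
    return float(np.sum(A * gabor(k, l, 0.0, f_ij, sigma_x, sigma_y)))
```

A region whose gray values oscillate in phase with the filter's cosine at the estimated frequency produces a strong positive response, while a quadrature (90°-shifted) pattern cancels out, which is the familiar band-pass behavior of the Gabor filter.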
The authors of [64] applied a straight GF for fingerprint enhancement and proposed to use a circle instead of a square as the window underlying the GF in order to reduce the number of artifacts in the enhanced image. Similarly, we tested an ellipse with major axis 2q + 1 and minor axis 2p + 1 instead of the full curved region, i.e. in Equation 4 only those interpolated gray values of array A_{i,j} are considered which are located within the ellipse. In our tests, both variants achieved similar results on the FVC2004 databases (see Table 1). Note that, unlike in [64], the term 'circular GF' is used in [59] and [60] to denote the case σ_x = σ_y.

Results

Test Setup

Two algorithms were employed for matching the original and the enhanced gray-scale images. The matcher "BOZORTH3" is based on the NIST biometric image software package (NBIS) [53], applying MINDTCT for minutiae extraction and BOZORTH3 for template matching. The matcher "VeriFinger 5.0 Grayscale" is derived from the Neurotechnology VeriFinger 5.0 SDK. For the verification tests, we follow the FVC protocol in order to ensure comparability of the results with [17] and other researchers. 2800 genuine and 4950 impostor recognition attempts were conducted for each of the FVC databases. Equal error rates (EERs) were calculated as described in [38].

Verification Tests

Curved Gabor filters were applied for enhancing the images of FVC2004 [40]. Several choices for σ_x, σ_y, the size of the curved region and the interpolation method were tested. EERs for some combinations of filter parameters are reported in Table 1; other choices for the size of the curved region and the standard deviations of the Gaussian resulted in similar EERs.

[Figure 7: The detail on the left (impression 1 of finger 90 in FVC2004 database 4) is enhanced by Gabor filtering using rectangular windows (center) and curved regions (right). Both filters resort to the same orientation field estimation and the same ridge frequency estimation based on curved regions. The filter parameters are also identical (p = 16, q = 32, σ_x = 16, σ_y = 32); the only difference between the two is the shape of the window underlying the Gabor filter. Artifacts which may impair the recognition performance are created by the straight filter, and a true minutia is deleted (highlighted by a red circle).]

Regarding the interpolation method, only results for nearest neighbor are listed, because replacing it by bilinear or bicubic interpolation did not lead to a noticeable improvement in our tests. In order to compare the enhancement performance of curved Gabor filters for low quality images with existing enhancement methods, the matcher BOZORTH3 was applied to the enhanced images of FVC2004, which enables the comparison with the traditional GF proposed in [26], short time Fourier transform (STFT) analysis [9] and pyramid-based image filtering [17] (see Table 1). Furthermore, in order to isolate the influence of the OF estimation and segmentation on the verification performance, we tested the x-signature method [26] for RF estimation and straight Gabor filters in combination with our OF estimation and segmentation. EERs are listed in the second and sixth row of Table 1. In comparison to the results of the cited implementation, which applied an OF estimation and segmentation as described in [26], this led to lower EERs on DB1 and DB2, a higher EER on DB3 and a similar performance on DB4. In comparison to the performance on the original images, an improvement was observed on the first database and a deterioration on DB3 and DB4. Visual inspection of the enhanced images on DB3 showed that the increase of the EER was caused largely by incorrect RF estimates of the x-signature method. Moreover, we combined minutiae templates which were extracted by MINDTCT from images enhanced by curved Gabor filters and from images enhanced by anisotropic diffusion filtering.
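The verification results throughout this section are reported as EERs. As a rough illustration of what is being measured, the following sketch computes an EER from genuine and impostor similarity scores; it is a simplified approximation, not the exact interpolation procedure of [38], and the function name is our own.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Approximate EER: the operating point where the false match rate
    (impostor pairs accepted) equals the false non-match rate (genuine
    pairs rejected). Scores are similarities; higher means more likely
    genuine."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    fmr = np.array([np.mean(impostor >= t) for t in thresholds])
    fnmr = np.array([np.mean(genuine < t) for t in thresholds])
    i = int(np.argmin(np.abs(fmr - fnmr)))
    return 0.5 * (fmr[i] + fnmr[i])
```

Perfectly separated score distributions yield an EER of zero; the more the genuine and impostor distributions overlap, the higher the EER, which is why a single EER per database is a convenient summary for comparing enhancement methods.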
A detailed description of this combination can be found in [21], and results are listed in Table 1.

[Table 1: EERs on the FVC2004 databases [40] which have been achieved using MINDTCT and BOZORTH3. Results listed in the top four rows are cited from [17]. Parentheses indicate that only a small foreground area of the fingerprints was useful for recognition. Parameters of the curved Gabor filters: size of the curved region, interpolation method (NN = nearest neighbor), considered pixels (F = full curved region, E = elliptical), and standard deviations of the Gaussian.]

The matcher referred to as VeriFinger 5.0 Grayscale has a built-in enhancement step which cannot be turned off, so that the results for the original images in Table 1 are obtained on matching images which were also enhanced (by an undisclosed procedure of the commercial software). Results using this matcher were included in order to show that even in the face of this built-in enhancement, the proposed image smoothing by curved Gabor filters leads to considerable improvements in verification performance.

Conclusions

The present work describes a method for ridge frequency estimation using curved regions and image enhancement by curved Gabor filters. For low quality fingerprint images, improvements of the matching performance in comparison to existing enhancement methods were shown. Besides matching accuracy, speed is an important factor for fingerprint recognition systems. The results given in Section 5 were achieved using a proof-of-concept implementation written in Java. In a first test of a GPU-based implementation on an Nvidia Tesla C2070, computing the RF image using curved regions of size 33 × 65 pixels took about 320 ms, and applying curved Gabor filters of size 65 × 33 pixels took about 280 ms. The RF estimation can be further accelerated if an estimate is computed only e.g. for every fourth pixel horizontally and vertically instead of pixel-wise.
These computing times indicate the practicability of the presented method for on-line verification systems. In our opinion, the potential for further improvements of the matching performance rests upon a better OF estimation. The combined method delineated in Section 2 produces fewer erroneous estimations than each of the individual methods, but there is still room for improvement. As long as OF estimation errors occur, it is necessary to choose the size of the curved Gabor filters and the standard deviations of the Gaussian envelope with care, in order to balance strong image smoothing against the risk of introducing spurious features. Future work includes an exploration of a locally adaptive choice of these parameters, depending on the local image quality and e.g. the local reliability of the OF estimation. In addition, it will be of interest to apply the curved-region-based RF estimation and curved Gabor filters to latent fingerprints.
arXiv:1104.4298
Gabor filters (GFs) play an important role in many application areas for the enhancement of various types of images and the extraction of Gabor features. For the purpose of enhancing curved structures in noisy images, we introduce curved GFs that locally adapt their shape to the direction of flow. These curved GFs enable the choice of filter parameters that increase the smoothing power without creating artifacts in the enhanced image. In this paper, curved GFs are applied to the curved ridge and valley structures of low-quality fingerprint images. First, we combine two orientation-field estimation methods in order to obtain a more robust estimation for very noisy images. Next, curved regions are constructed by following the respective local orientation. Subsequently, these curved regions are used for estimating the local ridge frequency. Finally, curved GFs are defined based on curved regions, and they apply the previously estimated orientations and ridge frequencies for the enhancement of low-quality fingerprint images. Experimental results on the FVC2004 databases show improvements of this approach in comparison with state-of-the-art enhancement methods.
Gabor filterbanks are used for the segmentation @cite_10 and quality estimation @cite_21 of fingerprint images, for core point estimation @cite_55, classification @cite_56 and fingerprint matching based on Gabor features @cite_46 @cite_55. GFs are also employed for generating synthetic fingerprints @cite_23. The use of GFs for fingerprint image enhancement was introduced in @cite_44.
{ "abstract": [ "We propose a Gabor-filter-based method for fingerprint recognition in this paper. The method makes use of Gabor filtering technologies and need only to do the core point detection before the feature extraction process without any other pre-processing steps such as smoothing, binarization, thinning, and minutiae detection. The proposed Gabor-filter-based features play a central role in the processes of fingerprint recognition, including local ridge orientation, core point detection, and feature extraction. Experimental results show that the recognition rate of the k-nearest neighbor classifier using the proposed features is 97.2 for a small-scale fingerprint database, and thus that the proposed method is an efficient and reliable approach.", "", "Fingerprint classification provides an important indexing mechanism in a fingerprint database. An accurate and consistent classification can greatly reduce fingerprint matching time for a large database. We present a fingerprint classification algorithm which is able to achieve an accuracy better than previously reported in the literature. We classify fingerprints into five categories: whorl, right loop, left loop, arch, and tented arch. The algorithm uses a novel representation (FingerCode) and is based on a two-stage classifier to make a classification. It has been tested on 4000 images in the NIST-4 database. For the five-class problem, a classification accuracy of 90 percent is achieved (with a 1.8 percent rejection during the feature extraction phase). For the four-class problem (arch and tented arch combined into one class), we are able to achieve a classification accuracy of 94.8 percent (with 1.8 percent rejection). 
By incorporating a reject option at the classifier, the classification accuracy can be increased to 96 percent for the five-class classification task, and to 97.8 percent for the four-class classification task after a total of 32.5 percent of the images are rejected.", "In order to ensure that the performance of an automatic fingerprint identification verification system will be robust with respect to the quality of input fingerprint images, it is essential to incorporate a fingerprint enhancement algorithm in the minutiae extraction module. We present a fast fingerprint enhancement algorithm, which can adaptively improve the clarity of ridge and valley structures of input fingerprint images based on the estimated local ridge orientation and frequency. We have evaluated the performance of the image enhancement algorithm using the goodness index of the extracted minutiae and the accuracy of an online fingerprint verification system. Experimental results show that incorporating the enhancement algorithm improves both the goodness index and the verification accuracy.", "Introduces a method for the generation of synthetic fingerprint images. Gabor-like space-variant filters are used for iteratively expanding an initially empty image containing just one or a few seeds. A directional image model, whose inputs are the number and location of the fingerprint cores and deltas, is used for tuning the filters according to the underlying ridge orientation. Very realistic fingerprint images are obtained after the final noising-and-rendering stage.", "Biometrics-based verification, especially fingerprint-based identification, is receiving a lot of attention. There are two major shortcomings of the traditional approaches to fingerprint representation. For a considerable fraction of population, the representations based on explicit detection of complete ridge structures in the fingerprint are difficult to extract automatically. 
The widely used minutiae-based representation does not utilize a significant component of the rich discriminatory information available in the fingerprints. Local ridge structures cannot be completely characterized by minutiae. Further, minutiae-based matching has difficulty in quickly matching two fingerprint images containing a different number of unregistered minutiae points. The proposed filter-based algorithm uses a bank of Gabor filters to capture both local and global details in a fingerprint as a compact fixed length FingerCode. The fingerprint matching is based on the Euclidean distance between the two corresponding FingerCodes and hence is extremely fast. We are able to achieve a verification accuracy which is only marginally inferior to the best results of minutiae-based algorithms published in the open literature. Our system performs better than a state-of-the-art minutiae-based system when the performance requirement of the application system does not demand a very low false acceptance rate. Finally, we show that the matching performance can be improved by combining the decisions of the matchers based on complementary (minutiae-based and filter-based) fingerprint information.", "An important step in fingerprint recognition is the segmentation of the region of interest. In this paper, we present an enhanced approach for fingerprint segmentation based on the response of eight oriented Gabor filters. The performance of the algorithm has been evaluated in terms of decision error trade-off curves of an overall verification system. Experimental results demonstrate the robustness of the proposed method." ], "cite_N": [ "@cite_55", "@cite_21", "@cite_56", "@cite_44", "@cite_23", "@cite_46", "@cite_10" ], "mid": [ "2156614183", "", "2124874699", "2127717018", "1834846564", "2109131581", "2100698407" ] }
Curved Gabor Filters for Fingerprint Image Enhancement
Fingerprint Image Enhancement

Image quality [2] has a big impact on the performance of a fingerprint recognition system (see e.g. [51] and [8]). The goal of image enhancement is to improve the overall performance by optimally preparing input images for later processing stages. Most systems extract minutiae from fingerprints [41], and the presence of noise can interfere with the extraction. As a result, true minutiae may be missed and false minutiae may be detected, both having a negative effect on the recognition rate. In order to avoid these two types of errors, image enhancement aims at improving the clarity of the ridge and valley structure. With special consideration to the typical types of noise occurring in fingerprints, an image enhancement method should have three important properties:
• reconnect broken ridges, e.g. caused by dryness of the finger or scars;
• separate falsely conglutinated ridges, e.g. caused by wetness of the finger or smudges;
• preserve ridge endings and bifurcations.
Enhancement of low quality images (occurring e.g. in all databases of FVC2004 [40]) and very low quality prints like latents (e.g. NIST SD27 [20]) is still a challenge. Techniques based on contextual filtering are widely used for fingerprint image enhancement [41], and a major difficulty lies in an automatic and reliable estimation of the local context, i.e. the local orientation and ridge frequency as input of the GF. Failure to correctly estimate the local context can lead to the creation of artifacts in the enhanced image [32] which consequently tends to increase the number of identification or verification errors. For low quality images, there is a substantial risk that an image enhancement step may impair the recognition performance as shown in [17] (results are cited in Table 1 of Section 5).
The situation is even worse for very low quality images, and current approaches focus on minimizing the efforts required by a human expert for manually marking information in images of latent prints (see [7] and [56]). The present work addresses these challenges as follows: in the next section, two state-of-the-art methods for orientation field estimation are combined for obtaining an estimation which is more robust than each individual one. In Section 3, curved regions are introduced and employed for achieving a reliable ridge frequency estimation. Based on the curved regions, in Section 4 curved Gabor filters are defined. In Section 5, all previously described methods are combined for the enhancement of low quality images from FVC2004, and performance improvements in comparison to existing methods are shown. The paper concludes with a discussion of the advantages and drawbacks of this approach, as well as possible future directions, in Section 6.

Orientation Field Estimation

In order to obtain a robust orientation field (OF) estimation for low quality images, two estimation methods are combined: the line sensor method [23] and the gradient-based method [4] (with a smoothing window size of 33 pixels). The OFs are compared at each pixel. If the angle between both estimations is smaller than a threshold (here t = 15°), the orientation of the combined OF is set to the average of the two. Otherwise, the pixel is marked as missing. Afterwards, all inner gaps are reconstructed, and up to a radius of 16 pixels, the orientation of the outer proximity is extrapolated, both as described in [23]. Results of verification tests on all 12 databases of FVC2000 to 2004 [38, 39, 40] showed a better performance of the combined OF applied for contextual image enhancement than each individual OF estimation [22].
The OF being the only parameter that was changed, lower equal error rates can be interpreted as an indicator that the combined OF contains fewer estimation errors than each of the individual estimations. Simultaneously, we regard the combined OF as a segmentation of the fingerprint image into foreground (endowed with an OF estimation) and background. The information fusion strategy for obtaining the combined OF was inspired by [44]. The two OF estimation methods can be regarded as judges or experts and the orientation estimation for a certain pixel as a judgment. If the angle between both estimations is greater than a threshold t, the judgments are considered as incoherent, and consequently not averaged. If an estimation method provides no estimation for a pixel, it is regarded as abstaining. Orientation estimations for pixels with incoherent or abstaining judges are reconstructed or extrapolated from pixels with coherent judgments. Ridge Frequency Estimation Using Curved Regions In [26], a ridge frequency (RF) estimation method was proposed which divides a fingerprint image into blocks of 16 × 16 pixels, and for each block, it obtains an estimation from an oriented window of 32 × 16 pixels by a method called 'x-signature' which detects peaks in the gray-level profile. Failures to estimate a RF, e.g. caused due to presence of noise, curvature or minutiae, are handled by interpolation and outliers are removed by low-pass filtering. In our experience, this method works well for good and medium quality prints, but it encounters serious difficulties obtaining a useful estimation when dealing with low quality prints. 
In this section, we propose a RF estimation method following the same basic idea -to obtain an estimation from the gray-level profile -but which bears several improvements in comparison to [26]: (i) the profile is derived from a curved region which is different in shape and size from the oriented window of the x-signature method, (ii) we introduce an information criterion (IC) for the reliability of an estimation and (iii) depending on the IC, the gray-level profile is smoothed with a Gaussian kernel, (iv) both, minima and maxima are taken into account and (v) the inverse median is applied for the RF estimate. If the clarity of the ridge and valley structure is disturbed by noise, e.g. caused by dryness or wetness of the finger, an oriented window of 32 × 16 pixels may not contain a sufficient amount of information for a RF estimation (e.g. see Figure 3, left image). In regions where the ridges run almost parallel, this may be compensated by averaging over larger distances along the lines. However, if the ridges are curved, the enlargement of the rectangular window does not improve the consistency of the gray-profile, because the straight lines cut neighboring ridges and valleys. In order to overcome this limitation, we propose curved regions which adapt their shape to the local orientation. It is important to take the curvature of ridges and valleys into account, because about 94 % of all fingerprints belong to the classes right loop, whorl, left loop and tented arch [30], so that they contain core points and therefore regions of high curvature. Curved Regions Let (x c , y c ) be the center of a curved region which consists of 2p + 1 parallel curves and 2q + 1 points along each curve. The midpoints (depicted as blue squares in Figure 2) of the parallel curves are initialised by following both directions orthogonal to the orientation for p steps of one pixel unit, starting from the central pixel (x c , y c ) (red square). 
At each step, the direction is adjusted, so that it is orthogonal to the local orientation. If the change between two consecutive local orientations is greater than a threshold, the presence of a core point is assumed, and the iteration is stopped. Since all x-and y-coordinates are decimal values, the local orientation is interpolated. Nearest neighbour and bilinear interpolation using the orientation of the four neighboring pixels are examined in Section 5. Starting from each of the 2p + 1 midpoints, curves are obtained by following the respective local orientation and its opposite direction (local orientation θ + π) for q steps of one pixel unit, respectively. Curvature estimation As a by-product of constructing curved regions, a pixel-wise estimate of the local curvature is obtained using the central curve of each region (cf. the red curves in Figures 2 and 3). The estimate is computed by adding up the absolute values of differences in orientation between the central point of the curve and the two end points. The outcome is an estimate of the curvature, i.e. integrated change in orientation along a curve (here: of 65 pixel steps). For an illustration, see Figure 4. The curvature estimate can be useful for singular point detection, fingerprint alignment or as additional information at the matching stage. Ridge Frequency Estimation Gray values at the decimal coordinates of the curve points are interpolated. In this study, three interpolation methods are taken into account: nearest neighbor, bilinear and bicubic [47] (considering 1, 4 and 16 neighboring pixels for the gray value interpolation, respectively). The gray-level profile is produced by averaging the interpolated gray values along each curve (in our experiments, the minimum number of valid points is set to 50% of the points per line). Next, local extrema are detected and the distances between consecutive minima and consecutive maxima are stored. 
The RF estimate is the reciprocal of the median of the inter-extrema distances (IEDs). The proportion p maxmin of the largest IED to the smallest IED is regarded as an information Large values of p maxmin are considered as an indicator for the occurrence of false extrema in the profile (see Figure 5). or for the absence of true extrema. Only RF estimations where p maxmin is below a threshold are regarded as valid (for the tests in Section 5, we used thr p maxmin ≤ 1.5). If p maxmin of the gray-level profile produced by averaging along the curves exceeds the threshold, then, in some cases it is still possible to obtain a feasible RF estimation by smoothing the profile which may remove false minima and maxima, followed by a repetition of the estimation steps (see Figure 5). A Gaussian with a size of 7 and σ = 1.0 was applied in our study, and a maximum number of three smoothing iterations was performed. In an additional constraint we require that at least two minima and two maxima are detected and the RF estimation is located within an appropriate range of valid values (between 1 3 and 1 25 ). As a final step, the RF image is smoothed by averaging over a window of size w = 49 pixels. Curved Gabor Filters Definition The Gabor filter is a two-dimensional filter formed by the combination of a cosine with a two-dimensional Gaussian function and it has the general form: g(x, y, θ, f, σ x , σ y ) = exp − 1 2 x 2 θ σ 2 x + y 2 θ σ 2 y · cos (2π · f · x θ )(1) x θ = x · cos θ + y · sin θ (2) y θ = −x · sin θ + y · cos θ(3) In (1), the Gabor filter is centered at the origin. θ denotes the rotation of the filter related to the x-axis and f the local frequency. σ x and σ y signify the standard deviation of the Gaussian function along the x-and y-axis, respectively. A curved Gabor filter is computed by mapping a curved region to a two-dimensional array, followed by a point-wise multiplication with an unrotated GF (θ = 0). 
The curved region C i,j centered in (i, j) consists of 2p + 1 parallel lines and 2q + 1 points along each line. The corresponding array A i,j contains the interpolated gray values (see right image in Figure 2). The enhanced pixel E(i, j) is obtained by: E(i, j, A i,j , f (i,j) ) = 2p+1 k=0 2q+1 l=0 A(k, l) · g(k − p, l − q, 0, f (i,j) , σ x , σ y )(4) Finally, differences in brightness are compensated by a locally adaptive normalization (using the formula from [26] who proposed a global normalization as a first step before the OF and RF estimation, and the Gabor filtering). In our experiments, the desired mean and standard deviation were set to 127.5 and 100, respectively, and neighboring pixels within a circle of radius r = 16 were considered. Parameter Choice In the case of image enhancement by straight GFs, [26] and other authors (e.g. [41]) use quadratic windows of size 11×11 pixels and choices for the standard deviation of the Gaussian of σ x = σ y = 4.0, or very similar values. We agree with their arguments that the parameter selection of σ x and σ y involves a trade-off between an ineffective filter (for small values of σ x and σ y ) and the risk of creating artifacts in the enhanced image (for large values of σ x and σ y ). Moreover, the same reasoning holds true for the size of the window. In analogy to the situation during the RF estimation (see Figure 3), enlarging a rectangular window in a region with curved ridge and valley flow increases the risk for introducing noise and, as a consequence of this, false structures into the enhanced image. The main advantage of curved Gabor filters is that they enable the choice of larger curved regions and high values for σ x and σ y without creating spurious features (see Figures 6 and 7). In this way, curved Gabor filters have a much greater smoothing potential in comparison to traditional GF. For curved GFs, the only limitation is the accuracy of the OF and RF estimation, and no longer the filter itself. 
The authors of [64] applied a straight GF for fingerprint enhancement and proposed to use a circle instead of a square as the window underlying the GF in order to reduce the number of artifacts in the enhanced image. Similarly, we tested an ellipse with major axis 2q + 1 and minor axis 2p + 1 instead of the full curved region, i.e. in Equation 4, only those interpolated gray values of array A_{i,j} are considered which are located within the ellipse. In our tests, both variants achieved similar results on the FVC2004 databases (see Table 1). As opposed to [64], the term 'circular GF' is used in [59] and [60] for denoting the case σ_x = σ_y.

Results

Test Setup

Two algorithms were employed for matching the original and the enhanced gray-scale images. The matcher "BOZORTH3" is based on the NIST biometric image software package (NBIS) [53], applying MINDTCT for minutiae extraction and BOZORTH3 for template matching. The matcher "VeriFinger 5.0 Grayscale" is derived from the Neurotechnology VeriFinger 5.0 SDK. For the verification tests, we follow the FVC protocol in order to ensure comparability of the results with [17] and other researchers. 2800 genuine and 4950 impostor recognition attempts were conducted for each of the FVC databases. Equal error rates (EERs) were calculated as described in [38].

Verification Tests

Curved Gabor filters were applied for enhancing the images of FVC2004 [40]. Several choices for σ_x, σ_y, the size of the curved region and the interpolation method were tested. EERs for some combinations of filter parameters are reported in Table 1. Other choices for the size of the curved region and the standard deviations of the Gaussian resulted in similar EERs. Regarding the interpolation method, only results for nearest neighbor are listed, because replacing it by bilinear or bicubic interpolation did not lead to a noticeable improvement in our tests.

Figure 7 (caption): The detail on the left (impression 1 of finger 90 in FVC2004 database 4) is enhanced by Gabor filtering using rectangular windows (center) and curved regions (right). Both filters resort to the same orientation field estimation and the same ridge frequency estimation based on curved regions. Filter parameters are also identical (p = 16, q = 32, σ_x = 16, σ_y = 32); the only difference between the two is the shape of the window underlying the Gabor filter. Artifacts which may impair the recognition performance are created by the straight filter, and a true minutia is deleted (highlighted by a red circle).

In order to compare the enhancement performance of curved Gabor filters for low-quality images with existing enhancement methods, the matcher BOZORTH3 was applied to the enhanced images of FVC2004, which enables a comparison with the traditional GF proposed in [26], short-time Fourier transform (STFT) analysis [9] and pyramid-based image filtering [17] (see Table 1). Furthermore, in order to isolate the influence of the OF estimation and segmentation on the verification performance, we tested the x-signature method [26] for RF estimation and straight Gabor filters in combination with our OF estimation and segmentation. EERs are listed in the second and sixth rows of Table 1. In comparison to the results of the cited implementation, which applied an OF estimation and segmentation as described in [26], this led to lower EERs on DB1 and DB2, a higher EER on DB3 and a similar performance on DB4. In comparison to the performance on the original images, an improvement was observed on the first database and a deterioration on DB3 and DB4. Visual inspection of the enhanced images on DB3 showed that the increase of the EER was caused largely by incorrect RF estimates of the x-signature method. Moreover, we combined minutiae templates which were extracted by MINDTCT from images enhanced by curved Gabor filters and from images enhanced by anisotropic diffusion filtering.
A detailed representation of this combination can be found in [21], and results are listed in Table 1.

Table 1 (caption): EERs on the FVC2004 databases [40], including the EERs which have been achieved so far using MINDTCT and BOZORTH3. Parentheses indicate that only a small foreground area of the fingerprints was useful for recognition. Results listed in the top four rows are cited from [17]. Parameters of the curved Gabor filters: size of the curved region, interpolation method (NN = nearest neighbor), considered pixels (F = full curved region, E = elliptical), and standard deviations of the Gaussian.

The matcher referred to as VeriFinger 5.0 Grayscale has a built-in enhancement step which cannot be turned off, so that the results for the original images in Table 1 are obtained by matching images which were also enhanced (by an undisclosed procedure of the commercial software). Results using this matcher were included in order to show that, even in the face of this built-in enhancement, the proposed image smoothing by curved Gabor filters leads to considerable improvements in verification performance.

Conclusions

The present work describes a method for ridge frequency estimation using curved regions and image enhancement by curved Gabor filters. For low-quality fingerprint images, improvements of the matching performance in comparison to existing enhancement methods were shown. Besides matching accuracy, speed is an important factor for fingerprint recognition systems. Results given in Section 5 were achieved using a proof-of-concept implementation written in Java. In a first test of a GPU-based implementation on an Nvidia Tesla C2070, computing the RF image using curved regions of size 33 × 65 pixels took about 320 ms, and applying curved Gabor filters of size 65 × 33 pixels took about 280 ms. The RF estimation can be further accelerated if an estimate is computed only for, e.g., every fourth pixel horizontally and vertically instead of pixel-wise.
These computing times indicate the practicability of the presented method for on-line verification systems. In our opinion, the potential for further improvements of the matching performance rests upon a better OF estimation. The combined method delineated in Section 2 produces fewer erroneous estimations than each of the individual methods, but there is still room for improvement. As long as OF estimation errors occur, it is necessary to choose the size of the curved Gabor filters and the standard deviations of the Gaussian envelope with care, in order to balance strong image smoothing against the risk of creating spurious features. Future work includes an exploration of a locally adaptive choice of these parameters, depending on the local image quality and, e.g., the local reliability of the OF estimation. In addition, it will be of interest to apply the curved-region-based RF estimation and curved Gabor filters to latent fingerprints.
3,338
1104.4298
2019919182
Gabor filters (GFs) play an important role in many application areas for the enhancement of various types of images and the extraction of Gabor features. For the purpose of enhancing curved structures in noisy images, we introduce curved GFs that locally adapt their shape to the direction of flow. These curved GFs enable the choice of filter parameters that increase the smoothing power without creating artifacts in the enhanced image. In this paper, curved GFs are applied to the curved ridge and valley structures of low-quality fingerprint images. First, we combine two orientation-field estimation methods in order to obtain a more robust estimation for very noisy images. Next, curved regions are constructed by following the respective local orientation. Subsequently, these curved regions are used for estimating the local ridge frequency. Finally, curved GFs are defined based on curved regions, and they apply the previously estimated orientations and ridge frequencies for the enhancement of low-quality fingerprint images. Experimental results on the FVC2004 databases show improvements of this approach in comparison with state-of-the-art enhancement methods.
Image quality @cite_15 has a big impact on the performance of a fingerprint recognition system (see e.g. @cite_38 and @cite_57 ). The goal of image enhancement is to improve the overall performance by optimally preparing input images for later processing stages. Most systems extract minutiae from fingerprints @cite_29 , and the presence of noise can interfere with the extraction. As a result, true minutiae may be missed and false minutiae may be detected, both having a negative effect on the recognition rate. In order to avoid these two types of errors, image enhancement aims at improving the clarity of the ridge and valley structure. With special consideration to the typical types of noise occurring in fingerprints, an image enhancement method should have three important properties:
{ "abstract": [ "The performance of an automatic fingerprint authentication system relies heavily on the quality of the captured fingerprint images. In this paper, two new quality indices for fingerprint images are developed. The first index measures the energy concentration in the frequency domain as a global feature. The second index measures the spatial coherence in local regions. We present a novel framework for evaluating and comparing quality indices in terms of their capability of predicting the system performance at three different stages, namely, image enhancement, feature extraction and matching. Experimental results on the IBM-HURSLEY and FVC2002 DB3 databases demonstrate that the global index is better than the local index in the enhancement stage (correlation of 0.70 vs. 0.50) and comparative in the feature extraction stage (correlation of 0.70 vs. 0.71). Both quality indices are effective in predicting the matching performance, and by applying a quality-based weighting scheme in the matching algorithm, the overall matching performance can be improved; a decrease of 1.94 in EER is observed on the FVC2002 DB3 database.", "A small hole detection system which detects small holes in fast moving sheets of material. An array of LEDs is pulsed from an emitter which is aligned with an array of photocells positioned to detect light emitting from the emitter. The receiver sends a signal to a processing unit when light is detected. The small hole detection system may include an auto-shuttering mechanism which automatically adjusts the system according to the width of the material being scanned.", "One of the open issues in fingerprint verification is the lack of robustness against image-quality degradation. Poor-quality images result in spurious and missing features, thus degrading the performance of the overall system. Therefore, it is important for a fingerprint recognition system to estimate the quality and validity of the captured fingerprint images. 
In this work, we review existing approaches for fingerprint image-quality estimation, including the rationale behind the published measures and visual examples showing their behavior under different quality conditions. We have also tested a selection of fingerprint image-quality estimation algorithms. For the experiments, we employ the BioSec multimodal baseline corpus, which includes 19 200 fingerprint images from 200 individuals acquired in two sessions with three different sensors. The behavior of the selected quality measures is compared, showing high correlation between them in most cases. The effect of low-quality samples in the verification performance is also studied for a widely available minutiae-based fingerprint matching system.", "A major new professional reference work on fingerprint security systems and technology from leading international researchers in the field. Handbook provides authoritative and comprehensive coverage of all major topics, concepts, and methods for fingerprint security systems. This unique reference work is an absolutely essential resource for all biometric security professionals, researchers, and systems administrators." ], "cite_N": [ "@cite_57", "@cite_38", "@cite_15", "@cite_29" ], "mid": [ "1483712199", "1565631400", "2129631732", "1639795441" ] }
Curved Gabor Filters for Fingerprint Image Enhancement
Fingerprint Image Enhancement

Image quality [2] has a big impact on the performance of a fingerprint recognition system (see e.g. [51] and [8]). The goal of image enhancement is to improve the overall performance by optimally preparing input images for later processing stages. Most systems extract minutiae from fingerprints [41], and the presence of noise can interfere with the extraction. As a result, true minutiae may be missed and false minutiae may be detected, both having a negative effect on the recognition rate. In order to avoid these two types of errors, image enhancement aims at improving the clarity of the ridge and valley structure. With special consideration of the typical types of noise occurring in fingerprints, an image enhancement method should have three important properties:

• reconnect broken ridges, e.g. caused by dryness of the finger or scars;
• separate falsely conglutinated ridges, e.g. caused by wetness of the finger or smudges;
• preserve ridge endings and bifurcations.

Enhancement of low-quality images (occurring e.g. in all databases of FVC2004 [40]) and very low-quality prints like latents (e.g. NIST SD27 [20]) is still a challenge. Techniques based on contextual filtering are widely used for fingerprint image enhancement [41], and a major difficulty lies in an automatic and reliable estimation of the local context, i.e. the local orientation and ridge frequency as input of the GF. Failure to correctly estimate the local context can lead to the creation of artifacts in the enhanced image [32], which consequently tends to increase the number of identification or verification errors. For low-quality images, there is a substantial risk that an image enhancement step may impair the recognition performance, as shown in [17] (results are cited in Table 1 of Section 5).
The situation is even worse for very low-quality images, and current approaches focus on minimizing the effort required by a human expert for manually marking information in images of latent prints (see [7] and [56]). The present work addresses these challenges as follows: in the next section, two state-of-the-art methods for orientation field estimation are combined to obtain an estimation which is more robust than each individual one. In Section 3, curved regions are introduced and employed for achieving a reliable ridge frequency estimation. Based on the curved regions, curved Gabor filters are defined in Section 4. In Section 5, all previously described methods are combined for the enhancement of low-quality images from FVC2004, and performance improvements in comparison to existing methods are shown. The paper concludes with a discussion of the advantages and drawbacks of this approach, as well as possible future directions, in Section 6.

Orientation Field Estimation

In order to obtain a robust orientation field (OF) estimation for low-quality images, two estimation methods are combined: the line-sensor method [23] and the gradients-based method [4] (with a smoothing window size of 33 pixels). The OFs are compared at each pixel. If the angle between both estimations is smaller than a threshold (here t = 15°), the orientation of the combined OF is set to the average of the two. Otherwise, the pixel is marked as missing. Afterwards, all inner gaps are reconstructed, and up to a radius of 16 pixels, the orientation of the outer proximity is extrapolated, both as described in [23]. Results of verification tests on all 12 databases of FVC2000 to FVC2004 [38, 39, 40] showed a better performance of the combined OF, applied for contextual image enhancement, than of each individual OF estimation [22].
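The pixel-wise combination rule described above (average the two estimations where they agree within t = 15°, otherwise mark the pixel as missing) can be sketched as follows. This is an illustrative sketch, not the authors' code: the function names are ours, missing estimations are represented as NaN, and the averaging is done via doubled-angle vectors to respect the π-periodicity of ridge orientations.

```python
import numpy as np

def angular_diff(a, b):
    """Smallest difference between two orientations (pi-periodic)."""
    d = np.abs(a - b) % np.pi
    return np.minimum(d, np.pi - d)

def combine_orientations(of1, of2, t=np.deg2rad(15.0)):
    """Average two orientation fields where they agree within t;
    mark the remaining pixels as missing (NaN). Pixels where an
    input provides no estimation (NaN) count as abstaining."""
    of1 = np.asarray(of1, dtype=float)
    of2 = np.asarray(of2, dtype=float)
    out = np.full(of1.shape, np.nan)
    both = ~np.isnan(of1) & ~np.isnan(of2)
    agree = both & (angular_diff(of1, of2) < t)
    # average in the doubled-angle domain (orientations are pi-periodic)
    c = np.cos(2 * of1[agree]) + np.cos(2 * of2[agree])
    s = np.sin(2 * of1[agree]) + np.sin(2 * of2[agree])
    out[agree] = (0.5 * np.arctan2(s, c)) % np.pi
    return out
```

The reconstruction of inner gaps and the extrapolation at the foreground boundary (done as in [23]) are omitted here.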
The OF being the only parameter that was changed, lower equal error rates can be interpreted as an indicator that the combined OF contains fewer estimation errors than each of the individual estimations. Simultaneously, we regard the combined OF as a segmentation of the fingerprint image into foreground (endowed with an OF estimation) and background. The information fusion strategy for obtaining the combined OF was inspired by [44]. The two OF estimation methods can be regarded as judges or experts, and the orientation estimation for a certain pixel as a judgment. If the angle between both estimations is greater than a threshold t, the judgments are considered incoherent and consequently not averaged. If an estimation method provides no estimation for a pixel, it is regarded as abstaining. Orientation estimations for pixels with incoherent or abstaining judges are reconstructed or extrapolated from pixels with coherent judgments.

Ridge Frequency Estimation Using Curved Regions

In [26], a ridge frequency (RF) estimation method was proposed which divides a fingerprint image into blocks of 16 × 16 pixels and, for each block, obtains an estimation from an oriented window of 32 × 16 pixels by a method called 'x-signature' which detects peaks in the gray-level profile. Failures to estimate an RF, e.g. caused by the presence of noise, curvature or minutiae, are handled by interpolation, and outliers are removed by low-pass filtering. In our experience, this method works well for good- and medium-quality prints, but it encounters serious difficulties in obtaining a useful estimation when dealing with low-quality prints.
In this section, we propose an RF estimation method following the same basic idea, namely to obtain an estimation from the gray-level profile, but with several improvements in comparison to [26]: (i) the profile is derived from a curved region which differs in shape and size from the oriented window of the x-signature method, (ii) we introduce an information criterion (IC) for the reliability of an estimation, (iii) depending on the IC, the gray-level profile is smoothed with a Gaussian kernel, (iv) both minima and maxima are taken into account, and (v) the inverse of the median is applied as the RF estimate. If the clarity of the ridge and valley structure is disturbed by noise, e.g. caused by dryness or wetness of the finger, an oriented window of 32 × 16 pixels may not contain a sufficient amount of information for an RF estimation (e.g. see Figure 3, left image). In regions where the ridges run almost parallel, this may be compensated by averaging over larger distances along the lines. However, if the ridges are curved, enlarging the rectangular window does not improve the consistency of the gray-level profile, because the straight lines cut neighboring ridges and valleys. In order to overcome this limitation, we propose curved regions which adapt their shape to the local orientation. It is important to take the curvature of ridges and valleys into account, because about 94% of all fingerprints belong to the classes right loop, whorl, left loop and tented arch [30], so that they contain core points and therefore regions of high curvature.

Curved Regions

Let (x_c, y_c) be the center of a curved region which consists of 2p + 1 parallel curves and 2q + 1 points along each curve. The midpoints (depicted as blue squares in Figure 2) of the parallel curves are initialised by following both directions orthogonal to the orientation for p steps of one pixel unit, starting from the central pixel (x_c, y_c) (red square).
At each step, the direction is adjusted so that it is orthogonal to the local orientation. If the change between two consecutive local orientations is greater than a threshold, the presence of a core point is assumed, and the iteration is stopped. Since all x- and y-coordinates are decimal values, the local orientation is interpolated. Nearest-neighbour and bilinear interpolation using the orientations of the four neighboring pixels are examined in Section 5. Starting from each of the 2p + 1 midpoints, curves are obtained by following the respective local orientation and its opposite direction (local orientation θ + π) for q steps of one pixel unit, respectively.

Curvature Estimation

As a by-product of constructing curved regions, a pixel-wise estimate of the local curvature is obtained using the central curve of each region (cf. the red curves in Figures 2 and 3). The estimate is computed by adding up the absolute values of the differences in orientation between the central point of the curve and the two end points. The outcome is an estimate of the curvature, i.e. the integrated change in orientation along a curve (here: of 65 pixel steps). For an illustration, see Figure 4. The curvature estimate can be useful for singular point detection, fingerprint alignment or as additional information at the matching stage.

Ridge Frequency Estimation

Gray values at the decimal coordinates of the curve points are interpolated. In this study, three interpolation methods are taken into account: nearest neighbor, bilinear and bicubic [47] (considering 1, 4 and 16 neighboring pixels for the gray-value interpolation, respectively). The gray-level profile is produced by averaging the interpolated gray values along each curve (in our experiments, the minimum number of valid points is set to 50% of the points per line). Next, local extrema are detected, and the distances between consecutive minima and consecutive maxima are stored.
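The construction of the curved regions described above can be sketched as follows. This is a simplified illustration under our own naming: the local orientation is assumed to be available as a callable `orientation(x, y)` returning an interpolated angle in radians, and the core-point stopping criterion is omitted for brevity.

```python
import numpy as np

def curved_region(orientation, xc, yc, p=16, q=32):
    """Sample coordinates of a curved region centred at (xc, yc):
    2p+1 parallel curves with 2q+1 points each. `orientation(x, y)`
    returns the local ridge orientation (radians) at decimal coords."""
    # 1) midpoints: walk p unit steps in both directions orthogonal to
    #    the local orientation, re-reading the orientation at each step
    mids = [(xc, yc)]
    for sign in (+1, -1):
        x, y = xc, yc
        pts = []
        for _ in range(p):
            th = orientation(x, y) + np.pi / 2  # orthogonal direction
            x, y = x + sign * np.cos(th), y + sign * np.sin(th)
            pts.append((x, y))
        mids = pts[::-1] + mids if sign < 0 else mids + pts
    # 2) curves: from each midpoint, q unit steps along the local
    #    orientation and q steps in the opposite direction
    region = []
    for (mx, my) in mids:
        curve = [(mx, my)]
        for sign in (+1, -1):
            x, y = mx, my
            pts = []
            for _ in range(q):
                th = orientation(x, y)
                x, y = x + sign * np.cos(th), y + sign * np.sin(th)
                pts.append((x, y))
            curve = pts[::-1] + curve if sign < 0 else curve + pts
        region.append(curve)
    return region  # list of 2p+1 curves, each with 2q+1 (x, y) points
```

For a constant orientation field this degenerates to an axis-aligned rectangular grid of sample points, which is the expected behavior: the curvature adaptation only matters where the orientation changes along the walk.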
The RF estimate is the reciprocal of the median of the inter-extrema distances (IEDs). The proportion p_maxmin of the largest IED to the smallest IED is regarded as an information criterion: large values of p_maxmin are considered an indicator for the occurrence of false extrema in the profile, or for the absence of true extrema (see Figure 5). Only RF estimations where p_maxmin is below a threshold are regarded as valid (for the tests in Section 5, we required p_maxmin ≤ 1.5). If p_maxmin of the gray-level profile produced by averaging along the curves exceeds the threshold, in some cases it is still possible to obtain a feasible RF estimation by smoothing the profile, which may remove false minima and maxima, followed by a repetition of the estimation steps (see Figure 5). A Gaussian kernel of size 7 and σ = 1.0 was applied in our study, and a maximum of three smoothing iterations was performed. As an additional constraint, we require that at least two minima and two maxima are detected and that the RF estimate lies within an appropriate range of valid values (between 1/3 and 1/25). As a final step, the RF image is smoothed by averaging over a window of size w = 49 pixels.

Curved Gabor Filters

Definition

The Gabor filter is a two-dimensional filter formed by the combination of a cosine with a two-dimensional Gaussian function, and it has the general form:

g(x, y, θ, f, σ_x, σ_y) = exp(−(1/2) · (x_θ²/σ_x² + y_θ²/σ_y²)) · cos(2π · f · x_θ)   (1)

x_θ = x · cos θ + y · sin θ   (2)

y_θ = −x · sin θ + y · cos θ   (3)

In (1), the Gabor filter is centered at the origin. θ denotes the rotation of the filter relative to the x-axis, and f the local frequency. σ_x and σ_y signify the standard deviations of the Gaussian function along the x- and y-axis, respectively. A curved Gabor filter is computed by mapping a curved region to a two-dimensional array, followed by a point-wise multiplication with an unrotated GF (θ = 0).
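Before moving on to the filter itself, the RF estimation rules above (RF as the reciprocal of the median IED, the p_maxmin validity criterion with threshold 1.5, up to three smoothing passes with a Gaussian of size 7 and σ = 1.0, and the plausibility range between 1/25 and 1/3) might be sketched like this. All names are ours, and the peak detection is a simplified strict-neighbor test rather than the authors' exact procedure:

```python
import numpy as np

def rf_from_profile(profile, thr=1.5, rf_min=1 / 25, rf_max=1 / 3,
                    max_smooth=3):
    """Estimate the ridge frequency from a 1-D gray-level profile.
    Returns 1 / median(inter-extrema distances), or None if the
    validity checks keep failing after `max_smooth` smoothing passes."""
    kernel = np.exp(-0.5 * (np.arange(-3, 4) / 1.0) ** 2)
    kernel /= kernel.sum()  # Gaussian of size 7, sigma = 1.0
    for _ in range(max_smooth + 1):
        x = np.asarray(profile, dtype=float)
        # local extrema of the profile (strictly below/above both neighbors)
        interior = x[1:-1]
        minima = np.where((interior < x[:-2]) & (interior < x[2:]))[0] + 1
        maxima = np.where((interior > x[:-2]) & (interior > x[2:]))[0] + 1
        if len(minima) >= 2 and len(maxima) >= 2:
            ieds = np.concatenate([np.diff(minima), np.diff(maxima)])
            p_maxmin = ieds.max() / ieds.min()
            rf = 1.0 / np.median(ieds)
            if p_maxmin <= thr and rf_min <= rf <= rf_max:
                return rf
        # validity check failed: retry on a smoothed profile
        profile = np.convolve(x, kernel, mode="same")
    return None
```

On a clean periodic profile this returns the expected frequency immediately; on a flat or chaotic profile it falls through all smoothing passes and reports failure, which mirrors the paper's handling of invalid estimations.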
The curved region C_{i,j} centered at (i, j) consists of 2p + 1 parallel lines and 2q + 1 points along each line. The corresponding array A_{i,j} contains the interpolated gray values (see right image in Figure 2). The enhanced pixel E(i, j) is obtained by:

E(i, j, A_{i,j}, f_{(i,j)}) = Σ_{k=0}^{2p} Σ_{l=0}^{2q} A(k, l) · g(k − p, l − q, 0, f_{(i,j)}, σ_x, σ_y)   (4)

Finally, differences in brightness are compensated by a locally adaptive normalization (using the formula from [26], who proposed a global normalization as a first step before the OF and RF estimation and the Gabor filtering). In our experiments, the desired mean and standard deviation were set to 127.5 and 100, respectively, and neighboring pixels within a circle of radius r = 16 were considered.

Parameter Choice

In the case of image enhancement by straight GFs, [26] and other authors (e.g. [41]) use square windows of size 11 × 11 pixels and choices for the standard deviations of the Gaussian of σ_x = σ_y = 4.0, or very similar values. We agree with their arguments that the selection of σ_x and σ_y involves a trade-off between an ineffective filter (for small values of σ_x and σ_y) and the risk of creating artifacts in the enhanced image (for large values of σ_x and σ_y). The same reasoning holds true for the size of the window. In analogy to the situation during the RF estimation (see Figure 3), enlarging a rectangular window in a region with curved ridge and valley flow increases the risk of introducing noise and, as a consequence, false structures into the enhanced image. The main advantage of curved Gabor filters is that they enable the choice of larger curved regions and high values for σ_x and σ_y without creating spurious features (see Figures 6 and 7). In this way, curved Gabor filters have a much greater smoothing potential in comparison to traditional GFs. For curved GFs, the only limitation is the accuracy of the OF and RF estimation, and no longer the filter itself.
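Equations (1)-(4) translate directly into code. The sketch below is a minimal illustration with our own naming: the interpolated array A_{i,j} (the unrolled curved region) is assumed to be given, and the sum limits follow the indexing k = 0..2p, l = 0..2q so that the filter argument (k − p, l − q) is centred on the region's midpoint.

```python
import numpy as np

def gabor(p, q, f, sigma_x, sigma_y, theta=0.0):
    """Gabor filter of Eqs. (1)-(3), sampled on a (2p+1) x (2q+1) grid.
    x runs across the 2p+1 lines (k - p) and y along the 2q+1 points
    (l - q), matching the arguments g(k - p, l - q, ...) in Eq. (4)."""
    x, y = np.mgrid[-p:p + 1, -q:q + 1].astype(float)
    x_th = x * np.cos(theta) + y * np.sin(theta)
    y_th = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-0.5 * (x_th**2 / sigma_x**2 + y_th**2 / sigma_y**2))
    return envelope * np.cos(2.0 * np.pi * f * x_th)

def enhance_pixel(A, f, sigma_x=16.0, sigma_y=32.0):
    """Eq. (4): point-wise product of the interpolated gray values A
    with an unrotated Gabor filter (theta = 0), summed over the region."""
    p = (A.shape[0] - 1) // 2
    q = (A.shape[1] - 1) // 2
    return float(np.sum(A * gabor(p, q, f, sigma_x, sigma_y)))
```

Because the filter is unrotated, the oscillation runs across the parallel lines of the region, i.e. across the ridges once the curved region has been unrolled: a pattern in phase with the cosine yields a large positive response, a pattern in quadrature yields a response near zero.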
The authors of [64] applied a straight GF for fingerprint enhancement and proposed to use a circle instead of a square as the window underlying the GF in order to reduce the number of artifacts in the enhanced image. Similarly, we tested an ellipse with major axis 2q + 1 and minor axis 2p + 1 instead of the full curved region, i.e. in Equation 4, only those interpolated gray values of array A_{i,j} are considered which are located within the ellipse. In our tests, both variants achieved similar results on the FVC2004 databases (see Table 1). As opposed to [64], the term 'circular GF' is used in [59] and [60] for denoting the case σ_x = σ_y.

Results

Test Setup

Two algorithms were employed for matching the original and the enhanced gray-scale images. The matcher "BOZORTH3" is based on the NIST biometric image software package (NBIS) [53], applying MINDTCT for minutiae extraction and BOZORTH3 for template matching. The matcher "VeriFinger 5.0 Grayscale" is derived from the Neurotechnology VeriFinger 5.0 SDK. For the verification tests, we follow the FVC protocol in order to ensure comparability of the results with [17] and other researchers. 2800 genuine and 4950 impostor recognition attempts were conducted for each of the FVC databases. Equal error rates (EERs) were calculated as described in [38].

Verification Tests

Curved Gabor filters were applied for enhancing the images of FVC2004 [40]. Several choices for σ_x, σ_y, the size of the curved region and the interpolation method were tested. EERs for some combinations of filter parameters are reported in Table 1. Other choices for the size of the curved region and the standard deviations of the Gaussian resulted in similar EERs. Regarding the interpolation method, only results for nearest neighbor are listed, because replacing it by bilinear or bicubic interpolation did not lead to a noticeable improvement in our tests.

Figure 7 (caption): The detail on the left (impression 1 of finger 90 in FVC2004 database 4) is enhanced by Gabor filtering using rectangular windows (center) and curved regions (right). Both filters resort to the same orientation field estimation and the same ridge frequency estimation based on curved regions. Filter parameters are also identical (p = 16, q = 32, σ_x = 16, σ_y = 32); the only difference between the two is the shape of the window underlying the Gabor filter. Artifacts which may impair the recognition performance are created by the straight filter, and a true minutia is deleted (highlighted by a red circle).

In order to compare the enhancement performance of curved Gabor filters for low-quality images with existing enhancement methods, the matcher BOZORTH3 was applied to the enhanced images of FVC2004, which enables a comparison with the traditional GF proposed in [26], short-time Fourier transform (STFT) analysis [9] and pyramid-based image filtering [17] (see Table 1). Furthermore, in order to isolate the influence of the OF estimation and segmentation on the verification performance, we tested the x-signature method [26] for RF estimation and straight Gabor filters in combination with our OF estimation and segmentation. EERs are listed in the second and sixth rows of Table 1. In comparison to the results of the cited implementation, which applied an OF estimation and segmentation as described in [26], this led to lower EERs on DB1 and DB2, a higher EER on DB3 and a similar performance on DB4. In comparison to the performance on the original images, an improvement was observed on the first database and a deterioration on DB3 and DB4. Visual inspection of the enhanced images on DB3 showed that the increase of the EER was caused largely by incorrect RF estimates of the x-signature method. Moreover, we combined minutiae templates which were extracted by MINDTCT from images enhanced by curved Gabor filters and from images enhanced by anisotropic diffusion filtering.
A detailed representation of this combination can be found in [21], and results are listed in Table 1.

Table 1 (caption): EERs on the FVC2004 databases [40], including the EERs which have been achieved so far using MINDTCT and BOZORTH3. Parentheses indicate that only a small foreground area of the fingerprints was useful for recognition. Results listed in the top four rows are cited from [17]. Parameters of the curved Gabor filters: size of the curved region, interpolation method (NN = nearest neighbor), considered pixels (F = full curved region, E = elliptical), and standard deviations of the Gaussian.

The matcher referred to as VeriFinger 5.0 Grayscale has a built-in enhancement step which cannot be turned off, so that the results for the original images in Table 1 are obtained by matching images which were also enhanced (by an undisclosed procedure of the commercial software). Results using this matcher were included in order to show that, even in the face of this built-in enhancement, the proposed image smoothing by curved Gabor filters leads to considerable improvements in verification performance.

Conclusions

The present work describes a method for ridge frequency estimation using curved regions and image enhancement by curved Gabor filters. For low-quality fingerprint images, improvements of the matching performance in comparison to existing enhancement methods were shown. Besides matching accuracy, speed is an important factor for fingerprint recognition systems. Results given in Section 5 were achieved using a proof-of-concept implementation written in Java. In a first test of a GPU-based implementation on an Nvidia Tesla C2070, computing the RF image using curved regions of size 33 × 65 pixels took about 320 ms, and applying curved Gabor filters of size 65 × 33 pixels took about 280 ms. The RF estimation can be further accelerated if an estimate is computed only for, e.g., every fourth pixel horizontally and vertically instead of pixel-wise.
These computing times indicate the practicability of the presented method for on-line verification systems. In our opinion, the potential for further improvements of the matching performance rests upon a better OF estimation. The combined method delineated in Section 2 produces fewer erroneous estimations than each of the individual methods, but there is still room for improvement. As long as OF estimation errors occur, it is necessary to choose the size of the curved Gabor filters and the standard deviations of the Gaussian envelope with care, in order to balance strong image smoothing against the risk of creating spurious features. Future work includes an exploration of a locally adaptive choice of these parameters, depending on the local image quality and, e.g., the local reliability of the OF estimation. In addition, it will be of interest to apply the curved-region-based RF estimation and curved Gabor filters to latent fingerprints.
3,338
1104.4298
2019919182
Gabor filters (GFs) play an important role in many application areas for the enhancement of various types of images and the extraction of Gabor features. For the purpose of enhancing curved structures in noisy images, we introduce curved GFs that locally adapt their shape to the direction of flow. These curved GFs enable the choice of filter parameters that increase the smoothing power without creating artifacts in the enhanced image. In this paper, curved GFs are applied to the curved ridge and valley structures of low-quality fingerprint images. First, we combine two orientation-field estimation methods in order to obtain a more robust estimation for very noisy images. Next, curved regions are constructed by following the respective local orientation. Subsequently, these curved regions are used for estimating the local ridge frequency. Finally, curved GFs are defined based on curved regions, and they apply the previously estimated orientations and ridge frequencies for the enhancement of low-quality fingerprint images. Experimental results on the FVC2004 databases show improvements of this approach in comparison with state-of-the-art enhancement methods.
Enhancement of low quality images (occurring e.g. in all databases of FVC2004 @cite_5 ) and of very low quality prints like latents (e.g. NIST SD27 @cite_13 ) is still a challenge. Techniques based on contextual filtering are widely used for fingerprint image enhancement @cite_29 , and a major difficulty lies in an automatic and reliable estimation of the local context, i.e. the local orientation and ridge frequency as inputs to the GF. Failure to correctly estimate the local context can lead to the creation of artifacts in the enhanced image @cite_41 , which consequently tends to increase the number of identification or verification errors.
{ "abstract": [ "Fingerprint image enhancement is a crucial step in automatic fingerprint recognition. A large number of approaches for filtering the fingerprint image have been suggested. Most of them perform oriented band pass filtering. However, such filters, for example, Gabor filters, may create spurious ridge structure information. This is harmful for the feature extraction and therefore is harmful for the automatic fingerprint recognition. This paper examines the properties of applying the Gabor filter in the fingerprint image enhancement. It shows that the nonsinusoidal-shaped ridge structure, the ridge frequency estimation error, and the small filter size are the causes of creating spurious ridge structures. As a solution, we suggest an adaptive oriented low pass filter instead of the Gabor filter to avoid producing undesired harmful side effects for the automatic fingerprint recognition.", "A new technology evaluation of fingerprint verification algorithms has been organized following the approach of the previous FVC2000 and FVC2002 evaluations, with the aim of tracking the quickly evolving state-of-the-art of fingerprint recognition systems. Three sensors have been used for data collection, including a solid state sweeping sensor, and two optical sensors of different characteristics. The competition included a new category dedicated to ”light” systems, characterized by limited computational and storage resources. This paper summarizes the main activities of the FVC2004 organization and provides a first overview of the evaluation. Results will be further elaborated and officially presented at the International Conference on Biometric Authentication (Hong Kong) on July 2004.", "A major new professional reference work on fingerprint security systems and technology from leading international researchers in the field. Handbook provides authoritative and comprehensive coverage of all major topics, concepts, and methods for fingerprint security systems. 
This unique reference work is an absolutely essential resource for all biometric security professionals, researchers, and systems administrators.", "The National Institute of Standards and Technology in conjunction with the Federal Bureau of Investigation has developed a new database of grayscale fingerprint images and corresponding minutiae data. The database contains latent fingerprints from crime scenes and their matching rolled fingerprint mates. In all there are 258 latent cases. Each case includes the latent image, the matching tenprint image, and four sets of minutiae that have been validated by a professional team of latent examiners. One set of minutiae contains all minutiae points on the latent fingerprint; the second set contains all minutiae points on the tenprint mate; the other two sets contain the minutiae points in common between the latent fingerprint and tenprint mate. In all there are 27,426 minutiae recorded across the set of tenprints with 5460 minutiae in common with their matching latent fingerprint. All data files are formatted according to the ANSI NISTITL 1-2000 standard using Type-1, 9, 13, & 14 records. Software utilities are provided to read, write, and manipulate these files. The database can be used to develop and test new fingerprint algorithms, test commercial and research AFIS systems, train latent examiners, and promote the ANSI NIST file format standard." ], "cite_N": [ "@cite_41", "@cite_5", "@cite_29", "@cite_13" ], "mid": [ "2141852517", "1569096659", "1639795441", "2626330247" ] }
Curved Gabor Filters for Fingerprint Image Enhancement
Fingerprint Image Enhancement Image quality [2] has a strong impact on the performance of a fingerprint recognition system (see e.g. [51] and [8]). The goal of image enhancement is to improve the overall performance by optimally preparing input images for later processing stages. Most systems extract minutiae from fingerprints [41], and the presence of noise can interfere with the extraction. As a result, true minutiae may be missed and false minutiae may be detected, both having a negative effect on the recognition rate. In order to avoid these two types of errors, image enhancement aims at improving the clarity of the ridge and valley structure. With special consideration of the typical types of noise occurring in fingerprints, an image enhancement method should have three important properties:
• reconnect broken ridges, e.g. caused by dryness of the finger or scars;
• separate falsely conglutinated ridges, e.g. caused by wetness of the finger or smudges;
• preserve ridge endings and bifurcations.
Enhancement of low quality images (occurring e.g. in all databases of FVC2004 [40]) and of very low quality prints like latents (e.g. NIST SD27 [20]) is still a challenge. Techniques based on contextual filtering are widely used for fingerprint image enhancement [41], and a major difficulty lies in an automatic and reliable estimation of the local context, i.e. the local orientation and ridge frequency as inputs to the GF. Failure to correctly estimate the local context can lead to the creation of artifacts in the enhanced image [32], which consequently tends to increase the number of identification or verification errors. For low quality images, there is a substantial risk that an image enhancement step may impair the recognition performance, as shown in [17] (results are cited in Table 1 of Section 5).
The situation is even worse for very low quality images, and current approaches focus on minimizing the effort required by a human expert for manually marking information in images of latent prints (see [7] and [56]). The present work addresses these challenges as follows: in the next section, two state-of-the-art methods for orientation field estimation are combined to obtain an estimation which is more robust than each individual one. In Section 3, curved regions are introduced and employed for achieving a reliable ridge frequency estimation. Based on the curved regions, curved Gabor filters are defined in Section 4. In Section 5, all previously described methods are combined for the enhancement of low quality images from FVC2004, and performance improvements in comparison to existing methods are shown. The paper concludes with a discussion of the advantages and drawbacks of this approach, as well as possible future directions, in Section 6. Orientation Field Estimation In order to obtain a robust orientation field (OF) estimation for low quality images, two estimation methods are combined: the line sensor method [23] and the gradient-based method [4] (with a smoothing window size of 33 pixels). The OFs are compared at each pixel. If the angle between both estimations is smaller than a threshold (here t = 15°), the orientation of the combined OF is set to the average of the two. Otherwise, the pixel is marked as missing. Afterwards, all inner gaps are reconstructed, and up to a radius of 16 pixels, the orientation of the outer proximity is extrapolated, both as described in [23]. Verification tests on all 12 databases of FVC2000 to FVC2004 [38,39,40] showed a better performance of the combined OF, applied for contextual image enhancement, than of each individual OF estimation [22].
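The fusion rule described above can be sketched in a few lines of numpy. This is our illustration, not the authors' code: it assumes orientations are given as arrays in radians in [0, π), uses NaN to mark missing pixels, and averages the two π-periodic orientations via the doubled-angle representation (a plain arithmetic mean would fail near the wrap-around at 0/π); gap reconstruction and extrapolation from [23] are omitted.

```python
import numpy as np

def combine_orientation_fields(of1, of2, t_deg=15.0):
    """Fuse two pixel-wise orientation fields (radians, in [0, pi)).

    Where the angular difference is below the threshold t, the two
    estimations are averaged (respecting the pi-periodicity of
    orientations); otherwise the pixel is marked as missing (NaN).
    """
    t = np.deg2rad(t_deg)
    # smallest angle between two undirected orientations (pi-periodic)
    diff = np.abs(of1 - of2) % np.pi
    diff = np.minimum(diff, np.pi - diff)
    # circular average via the doubled-angle representation
    avg = 0.5 * np.arctan2(np.sin(2 * of1) + np.sin(2 * of2),
                           np.cos(2 * of1) + np.cos(2 * of2)) % np.pi
    return np.where(diff < t, avg, np.nan)

of1 = np.array([[0.10, 1.00]])
of2 = np.array([[0.12, 1.60]])
combined = combine_orientation_fields(of1, of2)
# first pixel: coherent judgments, averaged; second: incoherent, NaN
```

In a full implementation the NaN pixels would then be filled by the reconstruction and extrapolation steps of [23].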
Since the OF was the only parameter that was changed, lower equal error rates can be interpreted as an indicator that the combined OF contains fewer estimation errors than each of the individual estimations. Simultaneously, we regard the combined OF as a segmentation of the fingerprint image into foreground (endowed with an OF estimation) and background. The information fusion strategy for obtaining the combined OF was inspired by [44]. The two OF estimation methods can be regarded as judges or experts, and the orientation estimation for a certain pixel as a judgment. If the angle between both estimations is greater than a threshold t, the judgments are considered incoherent and consequently are not averaged. If an estimation method provides no estimation for a pixel, it is regarded as abstaining. Orientation estimations for pixels with incoherent or abstaining judges are reconstructed or extrapolated from pixels with coherent judgments. Ridge Frequency Estimation Using Curved Regions In [26], a ridge frequency (RF) estimation method was proposed which divides a fingerprint image into blocks of 16 × 16 pixels and, for each block, obtains an estimation from an oriented window of 32 × 16 pixels by a method called 'x-signature' which detects peaks in the gray-level profile. Failures to estimate an RF, e.g. caused by the presence of noise, curvature or minutiae, are handled by interpolation, and outliers are removed by low-pass filtering. In our experience, this method works well for good and medium quality prints, but it encounters serious difficulties in obtaining a useful estimation when dealing with low quality prints.
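The core idea of the x-signature from [26] can be illustrated with a toy sketch (ours, not the original implementation, which works on 16 × 16 blocks with oriented 32 × 16 windows and adds interpolation and low-pass filtering): rows of the window are assumed to run parallel to the ridges, so column-wise averaging yields a gray-level profile orthogonal to the flow, and the ridge frequency follows from the peak spacing.

```python
import numpy as np

def x_signature_frequency(window):
    """Estimate the ridge frequency from an oriented window.

    `window` is a 2D array whose rows already run parallel to the
    ridges, so averaging each column yields the gray-level profile
    orthogonal to the ridge flow. The frequency is the reciprocal of
    the mean peak-to-peak distance of that profile.
    """
    profile = window.mean(axis=0)
    # local maxima of the profile correspond to ridge (or valley) centers
    peaks = [i for i in range(1, len(profile) - 1)
             if profile[i - 1] < profile[i] >= profile[i + 1]]
    if len(peaks) < 2:
        return None  # not enough structure to estimate a frequency
    mean_dist = (peaks[-1] - peaks[0]) / (len(peaks) - 1)
    return 1.0 / mean_dist

# synthetic window: sinusoidal ridges with a period of 8 pixels
xs = np.arange(32)
window = np.tile(np.cos(2 * np.pi * xs / 8.0), (16, 1))
f = x_signature_frequency(window)  # ~ 1/8
```

The failure mode discussed in the text is visible here: if noise corrupts the profile, spurious peaks shift `mean_dist` and thus the estimate, which motivates the curved-region method proposed next.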
In this section, we propose an RF estimation method following the same basic idea, namely to obtain an estimation from the gray-level profile, but with several improvements in comparison to [26]: (i) the profile is derived from a curved region which differs in shape and size from the oriented window of the x-signature method; (ii) we introduce an information criterion (IC) for the reliability of an estimation; (iii) depending on the IC, the gray-level profile is smoothed with a Gaussian kernel; (iv) both minima and maxima are taken into account; and (v) the inverse median is applied for the RF estimate. If the clarity of the ridge and valley structure is disturbed by noise, e.g. caused by dryness or wetness of the finger, an oriented window of 32 × 16 pixels may not contain a sufficient amount of information for an RF estimation (e.g. see Figure 3, left image). In regions where the ridges run almost parallel, this may be compensated by averaging over larger distances along the lines. However, if the ridges are curved, enlarging the rectangular window does not improve the consistency of the gray-level profile, because the straight lines cut through neighboring ridges and valleys. In order to overcome this limitation, we propose curved regions which adapt their shape to the local orientation. It is important to take the curvature of ridges and valleys into account, because about 94% of all fingerprints belong to the classes right loop, whorl, left loop and tented arch [30], so that they contain core points and therefore regions of high curvature. Curved Regions Let (x_c, y_c) be the center of a curved region which consists of 2p + 1 parallel curves and 2q + 1 points along each curve. The midpoints (depicted as blue squares in Figure 2) of the parallel curves are initialised by following both directions orthogonal to the orientation for p steps of one pixel unit, starting from the central pixel (x_c, y_c) (red square).
At each step, the direction is adjusted so that it is orthogonal to the local orientation. If the change between two consecutive local orientations is greater than a threshold, the presence of a core point is assumed and the iteration is stopped. Since all x- and y-coordinates are decimal values, the local orientation is interpolated. Nearest neighbour and bilinear interpolation using the orientations of the four neighboring pixels are examined in Section 5. Starting from each of the 2p + 1 midpoints, curves are obtained by following the respective local orientation and its opposite direction (local orientation θ + π) for q steps of one pixel unit, respectively. Curvature estimation As a by-product of constructing curved regions, a pixel-wise estimate of the local curvature is obtained using the central curve of each region (cf. the red curves in Figures 2 and 3). The estimate is computed by adding up the absolute values of the differences in orientation between the central point of the curve and the two end points. The outcome is an estimate of the curvature, i.e. the integrated change in orientation along a curve (here: of 65 pixel steps). For an illustration, see Figure 4. The curvature estimate can be useful for singular point detection, fingerprint alignment, or as additional information at the matching stage. Ridge Frequency Estimation Gray values at the decimal coordinates of the curve points are interpolated. In this study, three interpolation methods are taken into account: nearest neighbor, bilinear and bicubic [47] (considering 1, 4 and 16 neighboring pixels for the gray value interpolation, respectively). The gray-level profile is produced by averaging the interpolated gray values along each curve (in our experiments, the minimum number of valid points is set to 50% of the points per line). Next, local extrema are detected, and the distances between consecutive minima and consecutive maxima are stored.
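The region construction and the curvature by-product can be sketched as follows. This is a simplified illustration under our own assumptions: the orientation field is a callable `of(x, y)` returning radians at decimal coordinates (standing in for the interpolated OF of the paper), and the core-point stopping criterion is omitted.

```python
import numpy as np

def walk(of, x, y, steps, direction):
    """Take unit-pixel steps from (x, y), re-reading the local
    orientation of(x, y) after each step; `direction` maps an
    orientation to a step vector."""
    pts = []
    for _ in range(steps):
        dx, dy = direction(of(x, y))
        x, y = x + dx, y + dy
        pts.append((x, y))
    return pts

def curved_region(of, xc, yc, p, q):
    """2p+1 parallel curves with 2q+1 points each, following the flow
    of the orientation field `of` around the central pixel (xc, yc)."""
    tangent = lambda th: (np.cos(th), np.sin(th))
    anti_tangent = lambda th: (-np.cos(th), -np.sin(th))
    normal = lambda th: (-np.sin(th), np.cos(th))
    anti_normal = lambda th: (np.sin(th), -np.cos(th))
    center = (float(xc), float(yc))
    # midpoints: p steps orthogonal to the flow in both directions
    mids = (walk(of, xc, yc, p, anti_normal)[::-1] + [center]
            + walk(of, xc, yc, p, normal))
    # curves: q steps along the flow and q steps against it per midpoint
    return [walk(of, mx, my, q, anti_tangent)[::-1] + [(mx, my)]
            + walk(of, mx, my, q, tangent) for (mx, my) in mids]

def curvature(of, curve):
    """Curvature estimate of the paper: sum of absolute orientation
    differences between the curve's central point and its end points."""
    (cx, cy) = curve[len(curve) // 2]
    (ax, ay), (bx, by) = curve[0], curve[-1]
    return abs(of(cx, cy) - of(ax, ay)) + abs(of(cx, cy) - of(bx, by))

# toy orientation field: everything flows horizontally
of_const = lambda x, y: 0.0
region = curved_region(of_const, 10, 10, 1, 2)  # 3 curves, 5 points each
```

For the constant field, the curves degenerate to straight horizontal lines and the curvature estimate is zero; with a real OF, the curves bend with the ridge flow.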
The RF estimate is the reciprocal of the median of the inter-extrema distances (IEDs). The proportion p_maxmin of the largest IED to the smallest IED is regarded as an information criterion for the reliability of the estimation. Large values of p_maxmin are considered an indicator for the occurrence of false extrema in the profile (see Figure 5) or for the absence of true extrema. Only RF estimations where p_maxmin is below a threshold are regarded as valid (for the tests in Section 5, we used the threshold p_maxmin ≤ 1.5). If p_maxmin of the gray-level profile produced by averaging along the curves exceeds the threshold, then in some cases it is still possible to obtain a feasible RF estimation by smoothing the profile, which may remove false minima and maxima, followed by a repetition of the estimation steps (see Figure 5). A Gaussian kernel of size 7 and σ = 1.0 was applied in our study, and a maximum of three smoothing iterations was performed. As an additional constraint, we require that at least two minima and two maxima are detected and that the RF estimate lies within an appropriate range of valid values (between 1/3 and 1/25). As a final step, the RF image is smoothed by averaging over a window of size w = 49 pixels. Curved Gabor Filters Definition The Gabor filter is a two-dimensional filter formed by the combination of a cosine with a two-dimensional Gaussian function, and it has the general form:
g(x, y, θ, f, σ_x, σ_y) = exp(−(1/2) · (x_θ²/σ_x² + y_θ²/σ_y²)) · cos(2π · f · x_θ)   (1)
x_θ = x · cos θ + y · sin θ   (2)
y_θ = −x · sin θ + y · cos θ   (3)
In (1), the Gabor filter is centered at the origin. θ denotes the rotation of the filter relative to the x-axis and f the local frequency. σ_x and σ_y signify the standard deviations of the Gaussian function along the x- and y-axis, respectively. A curved Gabor filter is computed by mapping a curved region to a two-dimensional array, followed by a point-wise multiplication with an unrotated GF (θ = 0).
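The profile-based estimation steps above (median of IEDs, the p_maxmin criterion, Gaussian smoothing with up to three retries, and the validity range) can be sketched as follows; this is our condensed illustration of those steps, not the authors' code, and it assumes the gray-level profile has already been produced by averaging along the curves.

```python
import numpy as np

def ridge_frequency(profile, thr=1.5, max_smoothings=3):
    """RF estimate from a gray-level profile: the reciprocal of the
    median inter-extrema distance (IED). If the ratio of the largest
    to the smallest IED exceeds `thr` (hinting at false or missing
    extrema), the profile is smoothed with a Gaussian (size 7,
    sigma = 1.0) and the estimation is repeated, up to three times."""
    gauss = np.exp(-0.5 * (np.arange(-3, 4) / 1.0) ** 2)
    gauss /= gauss.sum()
    for _ in range(max_smoothings + 1):
        minima, maxima = [], []
        for i in range(1, len(profile) - 1):
            if profile[i - 1] < profile[i] >= profile[i + 1]:
                maxima.append(i)
            elif profile[i - 1] > profile[i] <= profile[i + 1]:
                minima.append(i)
        if len(minima) >= 2 and len(maxima) >= 2:
            ieds = np.diff(minima).tolist() + np.diff(maxima).tolist()
            p_maxmin = max(ieds) / min(ieds)
            rf = 1.0 / np.median(ieds)
            # accept only reliable estimates within the valid range
            if p_maxmin <= thr and 1.0 / 25 <= rf <= 1.0 / 3:
                return rf
        profile = np.convolve(profile, gauss, mode='same')
    return None  # no valid estimation for this region

xs = np.arange(60)
profile = np.cos(2 * np.pi * xs / 8.0)  # ridge period of 8 pixels
rf = ridge_frequency(profile)  # ~ 1/8
```

The final block-free smoothing of the RF image (averaging over a 49-pixel window) is left out of the sketch.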
The curved region C_{i,j} centered at (i, j) consists of 2p + 1 parallel lines and 2q + 1 points along each line. The corresponding array A_{i,j} contains the interpolated gray values (see the right image in Figure 2). The enhanced pixel E(i, j) is obtained by:
E(i, j, A_{i,j}, f_{(i,j)}) = Σ_{k=0}^{2p} Σ_{l=0}^{2q} A(k, l) · g(k − p, l − q, 0, f_{(i,j)}, σ_x, σ_y)   (4)
Finally, differences in brightness are compensated by a locally adaptive normalization (using the formula from [26], where a global normalization was proposed as a first step before the OF and RF estimation and the Gabor filtering). In our experiments, the desired mean and standard deviation were set to 127.5 and 100, respectively, and neighboring pixels within a circle of radius r = 16 were considered. Parameter Choice In the case of image enhancement by straight GFs, [26] and other authors (e.g. [41]) use quadratic windows of size 11 × 11 pixels and choices for the standard deviations of the Gaussian of σ_x = σ_y = 4.0, or very similar values. We agree with their argument that the selection of σ_x and σ_y involves a trade-off between an ineffective filter (for small values of σ_x and σ_y) and the risk of creating artifacts in the enhanced image (for large values of σ_x and σ_y). Moreover, the same reasoning holds true for the size of the window. In analogy to the situation during the RF estimation (see Figure 3), enlarging a rectangular window in a region with curved ridge and valley flow increases the risk of introducing noise and, as a consequence, false structures into the enhanced image. The main advantage of curved Gabor filters is that they enable the choice of larger curved regions and of high values for σ_x and σ_y without creating spurious features (see Figures 6 and 7). In this way, curved Gabor filters have a much greater smoothing potential in comparison to traditional GFs. For curved GFs, the only limitation is the accuracy of the OF and RF estimation, and no longer the filter itself.
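Equations (1) to (4) translate directly into code. The sketch below is our illustration under the assumption that the curved-region array A of interpolated gray values has already been built; the subsequent locally adaptive normalization is omitted.

```python
import numpy as np

def gabor_kernel(p, q, theta, f, sigma_x, sigma_y):
    """Gabor filter of Equation (1), sampled on a (2p+1) x (2q+1)
    grid centered at the origin."""
    y, x = np.mgrid[-p:p + 1, -q:q + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)   # Equation (2)
    y_t = -x * np.sin(theta) + y * np.cos(theta)  # Equation (3)
    return (np.exp(-0.5 * (x_t**2 / sigma_x**2 + y_t**2 / sigma_y**2))
            * np.cos(2 * np.pi * f * x_t))

def enhance_pixel(A, f_ij, sigma_x, sigma_y):
    """Equation (4): point-wise multiplication of the curved-region
    array A (interpolated gray values) with an unrotated Gabor
    kernel (theta = 0), summed over the region."""
    p, q = A.shape[0] // 2, A.shape[1] // 2
    return float(np.sum(A * gabor_kernel(p, q, 0.0, f_ij, sigma_x, sigma_y)))

# a region whose gray values oscillate with exactly the filter
# frequency produces a strongly positive response
y, x = np.mgrid[-4:5, -8:9]
A = np.cos(2 * np.pi * 0.125 * x)
response = enhance_pixel(A, 0.125, 4.0, 4.0)
```

Because the region follows the ridge flow, the kernel never needs to be rotated: the curvature is absorbed by the mapping from the curved region to the array A.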
The authors of [64] applied a straight GF for fingerprint enhancement and proposed to use a circle instead of a square as the window underlying the GF in order to reduce the number of artifacts in the enhanced image. Similarly, we tested an ellipse with major axis 2q + 1 and minor axis 2p + 1 instead of the full curved region, i.e. in Equation 4, only those interpolated gray values of array A_{i,j} are considered which are located within the ellipse. In our tests, both variants achieved similar results on the FVC2004 databases (see Table 1). As opposed to [64], the term 'circular GF' is used in [59] and [60] to denote the case σ_x = σ_y. Results Test Setup Two algorithms were employed for matching the original and the enhanced gray-scale images. The matcher "BOZORTH3" is based on the NIST biometric image software package (NBIS) [53], applying MINDTCT for minutiae extraction and BOZORTH3 for template matching. The matcher "VeriFinger 5.0 Grayscale" is derived from the Neurotechnology VeriFinger 5.0 SDK. For the verification tests, we follow the FVC protocol in order to ensure comparability of the results with [17] and other researchers. 2800 genuine and 4950 impostor recognition attempts were conducted for each of the FVC databases. Equal error rates (EERs) were calculated as described in [38]. Verification tests Curved Gabor filters were applied for enhancing the images of FVC2004 [40]. Several choices for σ_x, σ_y, the size of the curved region and the interpolation method were tested. EERs for some combinations of filter parameters are reported in Table 1. (Caption of Figure 7: The detail on the left (impression 1 of finger 90 in FVC2004 database 4) is enhanced by Gabor filtering using rectangular windows (center) and curved regions (right). Both filters resort to the same orientation field estimation and the same ridge frequency estimation based on curved regions. The filter parameters are also identical (p = 16, q = 32, σ_x = 16, σ_y = 32); the only difference between the two is the shape of the window underlying the Gabor filter. Artifacts are created by the straight filter which may impair the recognition performance, and a true minutia is deleted (highlighted by a red circle).) Other choices for the size of the curved region and the standard deviations of the Gaussian resulted in similar EERs. Relating to the interpolation method, only results for nearest neighbor are listed, because replacing it by bilinear or bicubic interpolation did not lead to a noticeable improvement in our tests. In order to compare the enhancement performance of curved Gabor filters for low quality images with existing enhancement methods, the matcher BOZORTH3 was applied to the enhanced images of FVC2004, which enables the comparison with the traditional GF proposed in [26], short time Fourier transform (STFT) analysis [9] and pyramid-based image filtering [17] (see Table 1). Furthermore, in order to isolate the influence of the OF estimation and segmentation on the verification performance, we tested the x-signature method [26] for RF estimation and straight Gabor filters in combination with our OF estimation and segmentation. The EERs are listed in the second and sixth rows of Table 1. In comparison to the results of the cited implementation, which applied an OF estimation and segmentation as described in [26], this led to lower EERs on DB1 and DB2, a higher EER on DB3 and a similar performance on DB4. In comparison to the performance on the original images, an improvement was observed on the first database and a deterioration on DB3 and DB4. Visual inspection of the enhanced images of DB3 showed that the increase of the EER was caused largely by incorrect RF estimates of the x-signature method. Moreover, we combined minutiae templates which were extracted by MINDTCT from images enhanced by curved Gabor filters and from images enhanced by anisotropic diffusion filtering.
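For readers unfamiliar with the FVC evaluation metric, the equal error rate can be approximated from the lists of genuine and impostor matching scores as sketched below. This is a generic illustration (higher score = stronger evidence for a genuine match), not the exact procedure of [38], which interpolates FMR and FNMR curves.

```python
def equal_error_rate(genuine, impostor):
    """Approximate the EER from genuine and impostor matching scores.

    Scans all candidate decision thresholds and returns the operating
    point where the false match rate (FMR, impostors accepted) and the
    false non-match rate (FNMR, genuines rejected) are closest."""
    best = None
    for t in sorted(set(genuine) | set(impostor)):
        fmr = sum(s >= t for s in impostor) / len(impostor)
        fnmr = sum(s < t for s in genuine) / len(genuine)
        gap = abs(fmr - fnmr)
        if best is None or gap < best[0]:
            best = (gap, (fmr + fnmr) / 2)
    return best[1]

genuine = [0.9, 0.8, 0.75, 0.3]   # one weak genuine attempt
impostor = [0.1, 0.2, 0.35, 0.4]  # one strong impostor attempt
eer = equal_error_rate(genuine, impostor)  # one error on each side
```

In the FVC2004 setting, `genuine` would hold the scores of the 2800 genuine attempts and `impostor` those of the 4950 impostor attempts per database.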
A detailed description of this combination can be found in [21], and the results are listed in Table 1. (Caption of Table 1: EERs on the FVC2004 databases [40] which have been achieved so far using MINDTCT and BOZORTH3. Parentheses indicate that only a small foreground area of the fingerprints was useful for recognition. Results listed in the top four rows are cited from [17]. Parameters of the curved Gabor filters: size of the curved region, interpolation method (NN = nearest neighbor), considered pixels (F = full curved region, E = elliptical), standard deviations of the Gaussian.) The matcher referred to as VeriFinger 5.0 Grayscale has a built-in enhancement step which cannot be turned off, so that the results for the original images in Table 1 were obtained by matching images which were also enhanced (by an undisclosed procedure of the commercial software). Results using this matcher were included in order to show that even in the face of this built-in enhancement, the proposed image smoothing by curved Gabor filters leads to considerable improvements in verification performance. Conclusions The present work describes a method for ridge frequency estimation using curved regions and image enhancement by curved Gabor filters. For low quality fingerprint images, improvements of the matching performance over existing enhancement methods were shown. Besides matching accuracy, speed is an important factor for fingerprint recognition systems. The results given in Section 5 were achieved using a proof-of-concept implementation written in Java. In a first test of a GPU-based implementation on an Nvidia Tesla C2070, computing the RF image using curved regions of size 33 × 65 pixels took about 320 ms, and applying curved Gabor filters of size 65 × 33 pixels took about 280 ms. The RF estimation can be accelerated further if an estimate is computed only, e.g., for every fourth pixel horizontally and vertically instead of pixel-wise.
These computing times indicate the practicability of the presented method for on-line verification systems. In our opinion, the potential for further improvements of the matching performance rests upon a better OF estimation. The combined method delineated in Section 2 produces fewer erroneous estimations than each of the individual methods, but there is still room for improvement. As long as OF estimation errors occur, the size of the curved Gabor filters and the standard deviations of the Gaussian envelope have to be chosen with care in order to balance strong image smoothing against the risk of creating spurious features. Future work includes the exploration of a locally adaptive choice of these parameters, depending e.g. on the local image quality and the local reliability of the OF estimation. In addition, it will be of interest to apply the curved-region-based RF estimation and curved Gabor filters to latent fingerprints.
3,338
1104.4298
2019919182
Gabor filters (GFs) play an important role in many application areas for the enhancement of various types of images and the extraction of Gabor features. For the purpose of enhancing curved structures in noisy images, we introduce curved GFs that locally adapt their shape to the direction of flow. These curved GFs enable the choice of filter parameters that increase the smoothing power without creating artifacts in the enhanced image. In this paper, curved GFs are applied to the curved ridge and valley structures of low-quality fingerprint images. First, we combine two orientation-field estimation methods in order to obtain a more robust estimation for very noisy images. Next, curved regions are constructed by following the respective local orientation. Subsequently, these curved regions are used for estimating the local ridge frequency. Finally, curved GFs are defined based on curved regions, and they apply the previously estimated orientations and ridge frequencies for the enhancement of low-quality fingerprint images. Experimental results on the FVC2004 databases show improvements of this approach in comparison with state-of-the-art enhancement methods.
For low quality images, there is a substantial risk that an image enhancement step may impair the recognition performance, as shown in @cite_12 (results are cited in Table 1 of Section 5). The situation is even worse for very low quality images, and current approaches focus on minimizing the effort required by a human expert for manually marking information in images of latent prints (see @cite_6 and @cite_20 ).
{ "abstract": [ "Automatic feature extraction in latent fingerprints is a challenging problem due to poor quality of most latents, such as unclear ridge structures, overlapped lines and letters, and overlapped fingerprints. We proposed a latent fingerprint enhancement algorithm which requires manually marked region of interest (ROI) and singular points. The core of the proposed enhancement algorithm is a novel orientation field estimation algorithm, which fits orientation field model to coarse orientation field estimated from skeleton outputted by a commercial fingerprint SDK. Experimental results on NIST SD27 latent fingerprint database indicate that by incorporating the proposed enhancement algorithm, the matching accuracy of the commercial matcher was significantly improved.", "In this paper we propose a semi-automatic approach for the enhancement of very low quality fingerprints such as latent fingerprints. A specific markup tool is designed to allow fingerprint examiners to simply and quickly provide sparse estimates of local orientations and frequencies. These estimates are then interpolated though Delaunay triangulation and fed to a contextual Gabor-based enhancement algorithm that significantly improves the image quality, thus making the successive automatic feature extraction much more reliable. Experimental results (both qualitative and quantitative) confirm the effectiveness of this method over a fully-automatic state-of-the-art approach.", "Accurate fingerprint recognition presupposes robust feature extraction which is often hampered by noisy input data. We suggest common techniques for both enhancement and minutiae extraction, employing symmetry features. For enhancement, a Laplacian-like image pyramid is used to decompose the original fingerprint into sub-bands corresponding to different spatial scales. 
In a further step, contextual smoothing is performed on these pyramid levels, where the corresponding filtering directions stem from the frequency-adapted structure tensor (linear symmetry features). For minutiae extraction, parabolic symmetry is added to the local fingerprint model which allows to accurately detect the position and direction of a minutia simultaneously. Our experiments support the view that using the suggested parabolic symmetry features, the extraction of which does not require explicit thinning or other morphological operations, constitute a robust alternative to conventional minutiae extraction. All necessary image processing is done in the spatial domain using 1-D filters only, avoiding block artifacts that reduce the biometric information. We present comparisons to other studies on enhancement in matching tasks employing the open source matcher from NIST, FIS2. Furthermore, we compare the proposed minutiae extraction method with the corresponding method from the NIST package, mindtct. A top five commercial matcher from FVC2006 is used in enhancement quantification as well. The matching error is lowered significantly when plugging in the suggested methods. The FVC2004 fingerprint database, notable for its exceptionally low-quality fingerprints, is used for all experiments." ], "cite_N": [ "@cite_20", "@cite_6", "@cite_12" ], "mid": [ "2021181521", "2140602039", "2145624469" ] }
Curved Gabor Filters for Fingerprint Image Enhancement
Fingerprint Image Enhancement Image quality [2] has a strong impact on the performance of a fingerprint recognition system (see e.g. [51] and [8]). The goal of image enhancement is to improve the overall performance by optimally preparing input images for later processing stages. Most systems extract minutiae from fingerprints [41], and the presence of noise can interfere with the extraction. As a result, true minutiae may be missed and false minutiae may be detected, both having a negative effect on the recognition rate. In order to avoid these two types of errors, image enhancement aims at improving the clarity of the ridge and valley structure. With special consideration of the typical types of noise occurring in fingerprints, an image enhancement method should have three important properties:
• reconnect broken ridges, e.g. caused by dryness of the finger or scars;
• separate falsely conglutinated ridges, e.g. caused by wetness of the finger or smudges;
• preserve ridge endings and bifurcations.
Enhancement of low quality images (occurring e.g. in all databases of FVC2004 [40]) and of very low quality prints like latents (e.g. NIST SD27 [20]) is still a challenge. Techniques based on contextual filtering are widely used for fingerprint image enhancement [41], and a major difficulty lies in an automatic and reliable estimation of the local context, i.e. the local orientation and ridge frequency as inputs to the GF. Failure to correctly estimate the local context can lead to the creation of artifacts in the enhanced image [32], which consequently tends to increase the number of identification or verification errors. For low quality images, there is a substantial risk that an image enhancement step may impair the recognition performance, as shown in [17] (results are cited in Table 1 of Section 5).
The situation is even worse for very low quality images, and current approaches focus on minimizing the effort required by a human expert for manually marking information in images of latent prints (see [7] and [56]). The present work addresses these challenges as follows: in the next section, two state-of-the-art methods for orientation field estimation are combined to obtain an estimation which is more robust than each individual one. In Section 3, curved regions are introduced and employed for achieving a reliable ridge frequency estimation. Based on the curved regions, curved Gabor filters are defined in Section 4. In Section 5, all previously described methods are combined for the enhancement of low quality images from FVC2004, and performance improvements in comparison to existing methods are shown. The paper concludes with a discussion of the advantages and drawbacks of this approach, as well as possible future directions, in Section 6. Orientation Field Estimation In order to obtain a robust orientation field (OF) estimation for low quality images, two estimation methods are combined: the line sensor method [23] and the gradient-based method [4] (with a smoothing window size of 33 pixels). The OFs are compared at each pixel. If the angle between both estimations is smaller than a threshold (here t = 15°), the orientation of the combined OF is set to the average of the two. Otherwise, the pixel is marked as missing. Afterwards, all inner gaps are reconstructed, and up to a radius of 16 pixels, the orientation of the outer proximity is extrapolated, both as described in [23]. Results of verification tests on all 12 databases of FVC2000 to FVC2004 [38,39,40] showed a better performance of the combined OF applied for contextual image enhancement than each individual OF estimation [22].
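The pixel-wise fusion rule just described (average the two estimates if they agree within t = 15°, otherwise mark the pixel as missing) can be sketched as follows. This is an illustrative reimplementation, not the authors' code, and the function name is ours; since orientations are π-periodic, the averaging is done in the doubled-angle domain:

```python
import numpy as np

def combine_orientation_fields(of1, of2, t_deg=15.0):
    """Illustrative pixel-wise fusion of two orientation fields
    (angles in radians, defined modulo pi).  Where the estimates agree
    within t_deg degrees, the combined orientation is their average;
    otherwise the pixel is marked as missing (NaN).  Averaging uses the
    doubled-angle trick so that orientations near 0 and pi count as close."""
    t = np.deg2rad(t_deg)
    diff = np.abs(of1 - of2) % np.pi              # angular difference, period pi
    diff = np.minimum(diff, np.pi - diff)
    avg = 0.5 * np.arctan2(np.sin(2 * of1) + np.sin(2 * of2),
                           np.cos(2 * of1) + np.cos(2 * of2)) % np.pi
    return np.where(diff < t, avg, np.nan)
```

Reconstruction of inner gaps and extrapolation toward the background, as done in [23], would then operate on the NaN pixels.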
The OF being the only parameter that was changed, lower equal error rates can be interpreted as an indicator that the combined OF contains fewer estimation errors than each of the individual estimations. Simultaneously, we regard the combined OF as a segmentation of the fingerprint image into foreground (endowed with an OF estimation) and background. The information fusion strategy for obtaining the combined OF was inspired by [44]. The two OF estimation methods can be regarded as judges or experts, and the orientation estimation for a certain pixel as a judgment. If the angle between both estimations is greater than a threshold t, the judgments are considered incoherent and consequently are not averaged. If an estimation method provides no estimation for a pixel, it is regarded as abstaining. Orientation estimations for pixels with incoherent or abstaining judges are reconstructed or extrapolated from pixels with coherent judgments. Ridge Frequency Estimation Using Curved Regions In [26], a ridge frequency (RF) estimation method was proposed which divides a fingerprint image into blocks of 16 × 16 pixels and, for each block, obtains an estimation from an oriented window of 32 × 16 pixels by a method called 'x-signature' which detects peaks in the gray-level profile. Failures to estimate a RF, e.g. caused by the presence of noise, curvature or minutiae, are handled by interpolation, and outliers are removed by low-pass filtering. In our experience, this method works well for good and medium quality prints, but it encounters serious difficulties obtaining a useful estimation when dealing with low quality prints.
In this section, we propose a RF estimation method following the same basic idea, namely to obtain an estimation from the gray-level profile, but which bears several improvements in comparison to [26]: (i) the profile is derived from a curved region which differs in shape and size from the oriented window of the x-signature method, (ii) we introduce an information criterion (IC) for the reliability of an estimation, (iii) depending on the IC, the gray-level profile is smoothed with a Gaussian kernel, (iv) both minima and maxima are taken into account, and (v) the inverse median is applied for the RF estimate. If the clarity of the ridge and valley structure is disturbed by noise, e.g. caused by dryness or wetness of the finger, an oriented window of 32 × 16 pixels may not contain a sufficient amount of information for a RF estimation (e.g. see Figure 3, left image). In regions where the ridges run almost parallel, this may be compensated by averaging over larger distances along the lines. However, if the ridges are curved, enlarging the rectangular window does not improve the consistency of the gray-level profile, because the straight lines cut neighboring ridges and valleys. In order to overcome this limitation, we propose curved regions which adapt their shape to the local orientation. It is important to take the curvature of ridges and valleys into account, because about 94% of all fingerprints belong to the classes right loop, whorl, left loop and tented arch [30], so that they contain core points and therefore regions of high curvature. Curved Regions Let (x_c, y_c) be the center of a curved region which consists of 2p + 1 parallel curves and 2q + 1 points along each curve. The midpoints (depicted as blue squares in Figure 2) of the parallel curves are initialised by following both directions orthogonal to the orientation for p steps of one pixel unit, starting from the central pixel (x_c, y_c) (red square).
At each step, the direction is adjusted so that it is orthogonal to the local orientation. If the change between two consecutive local orientations is greater than a threshold, the presence of a core point is assumed, and the iteration is stopped. Since all x- and y-coordinates are decimal values, the local orientation is interpolated. Nearest neighbor and bilinear interpolation using the orientation of the four neighboring pixels are examined in Section 5. Starting from each of the 2p + 1 midpoints, curves are obtained by following the respective local orientation and its opposite direction (local orientation θ + π) for q steps of one pixel unit, respectively. Curvature estimation As a by-product of constructing curved regions, a pixel-wise estimate of the local curvature is obtained using the central curve of each region (cf. the red curves in Figures 2 and 3). The estimate is computed by adding up the absolute values of the differences in orientation between the central point of the curve and the two end points. The outcome is an estimate of the curvature, i.e. the integrated change in orientation along a curve (here: of 65 pixel steps). For an illustration, see Figure 4. The curvature estimate can be useful for singular point detection, fingerprint alignment, or as additional information at the matching stage. Ridge Frequency Estimation Gray values at the decimal coordinates of the curve points are interpolated. In this study, three interpolation methods are taken into account: nearest neighbor, bilinear and bicubic [47] (considering 1, 4 and 16 neighboring pixels for the gray value interpolation, respectively). The gray-level profile is produced by averaging the interpolated gray values along each curve (in our experiments, the minimum number of valid points is set to 50% of the points per line). Next, local extrema are detected, and the distances between consecutive minima and consecutive maxima are stored.
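The profile-to-frequency step of this section can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the extrema detection, the median-of-IEDs estimate, the p_maxmin criterion with threshold 1.5, and the Gaussian smoothing retries (kernel size 7, σ = 1.0) follow the description in the text, but the function name and minor details are our own:

```python
import numpy as np

def estimate_ridge_frequency(profile, thr_pmaxmin=1.5, max_smooth=3):
    """Illustrative sketch: RF = reciprocal of the median inter-extrema
    distance (IED), guarded by the p_maxmin information criterion.
    If p_maxmin exceeds the threshold, the profile is smoothed with a
    Gaussian (size 7, sigma = 1.0) and the estimation is retried,
    up to max_smooth times.  Returns the RF estimate or None."""
    x = np.arange(-3, 4)                     # Gaussian kernel, size 7, sigma 1.0
    kernel = np.exp(-x ** 2 / 2.0)
    kernel /= kernel.sum()

    p = np.asarray(profile, dtype=float)
    for _ in range(max_smooth + 1):
        minima = [i for i in range(1, len(p) - 1)
                  if p[i] < p[i - 1] and p[i] < p[i + 1]]
        maxima = [i for i in range(1, len(p) - 1)
                  if p[i] > p[i - 1] and p[i] > p[i + 1]]
        if len(minima) >= 2 and len(maxima) >= 2:
            ieds = np.diff(minima).tolist() + np.diff(maxima).tolist()
            p_maxmin = max(ieds) / min(ieds)
            rf = 1.0 / np.median(ieds)
            # accept only coherent profiles with a frequency in the valid range
            if p_maxmin <= thr_pmaxmin and 1.0 / 25 <= rf <= 1.0 / 3:
                return rf
        p = np.convolve(p, kernel, mode='same')  # smooth and retry
    return None
```

On a clean periodic profile the estimate is simply one over the ridge period; noisy profiles either get smoothed into a usable shape or are rejected.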
The RF estimate is the reciprocal of the median of the inter-extrema distances (IEDs). The proportion p_maxmin of the largest IED to the smallest IED is regarded as an information criterion. Large values of p_maxmin are considered an indicator for the occurrence of false extrema or the absence of true extrema in the profile (see Figure 5). Only RF estimations where p_maxmin is below a threshold are regarded as valid (for the tests in Section 5, we required p_maxmin ≤ 1.5). If p_maxmin of the gray-level profile produced by averaging along the curves exceeds the threshold, then in some cases it is still possible to obtain a feasible RF estimation by smoothing the profile, which may remove false minima and maxima, followed by a repetition of the estimation steps (see Figure 5). A Gaussian kernel of size 7 and σ = 1.0 was applied in our study, and a maximum of three smoothing iterations was performed. As an additional constraint, we require that at least two minima and two maxima are detected and that the RF estimate lies within an appropriate range of valid values (between 1/3 and 1/25). As a final step, the RF image is smoothed by averaging over a window of size w = 49 pixels. Curved Gabor Filters Definition The Gabor filter is a two-dimensional filter formed by the combination of a cosine with a two-dimensional Gaussian function, and it has the general form: g(x, y, θ, f, σ_x, σ_y) = exp(−(1/2) · (x_θ²/σ_x² + y_θ²/σ_y²)) · cos(2π · f · x_θ) (1), with x_θ = x · cos θ + y · sin θ (2) and y_θ = −x · sin θ + y · cos θ (3). In (1), the Gabor filter is centered at the origin. θ denotes the rotation of the filter relative to the x-axis and f the local frequency. σ_x and σ_y signify the standard deviation of the Gaussian function along the x- and y-axis, respectively. A curved Gabor filter is computed by mapping a curved region to a two-dimensional array, followed by a point-wise multiplication with an unrotated GF (θ = 0).
The curved region C_{i,j} centered at (i, j) consists of 2p + 1 parallel lines and 2q + 1 points along each line. The corresponding array A_{i,j} contains the interpolated gray values (see right image in Figure 2). The enhanced pixel E(i, j) is obtained by: E(i, j, A_{i,j}, f_{(i,j)}) = Σ_{k=0}^{2p+1} Σ_{l=0}^{2q+1} A(k, l) · g(k − p, l − q, 0, f_{(i,j)}, σ_x, σ_y) (4). Finally, differences in brightness are compensated by a locally adaptive normalization (using the formula from [26], where a global normalization was proposed as a first step before the OF and RF estimation and the Gabor filtering). In our experiments, the desired mean and standard deviation were set to 127.5 and 100, respectively, and neighboring pixels within a circle of radius r = 16 were considered. Parameter Choice In the case of image enhancement by straight GFs, [26] and other authors (e.g. [41]) use square windows of size 11 × 11 pixels and choices for the standard deviations of the Gaussian of σ_x = σ_y = 4.0, or very similar values. We agree with their argument that the selection of σ_x and σ_y involves a trade-off between an ineffective filter (for small values of σ_x and σ_y) and the risk of creating artifacts in the enhanced image (for large values of σ_x and σ_y). Moreover, the same reasoning holds true for the size of the window. In analogy to the situation during the RF estimation (see Figure 3), enlarging a rectangular window in a region with curved ridge and valley flow increases the risk of introducing noise and, as a consequence, false structures into the enhanced image. The main advantage of curved Gabor filters is that they enable the choice of larger curved regions and high values for σ_x and σ_y without creating spurious features (see Figures 6 and 7). In this way, curved Gabor filters have a much greater smoothing potential in comparison to traditional GFs. For curved GFs, the only limitation is the accuracy of the OF and RF estimation, and no longer the filter itself.
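Equations (1)-(4) can be read directly as code. The sketch below is our own illustrative reading of the formulas (θ = 0 for the curved filter, and A holding the gray values interpolated along the curved region), not the authors' implementation:

```python
import numpy as np

def gabor(x, y, theta, f, sigma_x, sigma_y):
    """Straight Gabor filter g of Equations (1)-(3)."""
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-0.5 * (x_t ** 2 / sigma_x ** 2 + y_t ** 2 / sigma_y ** 2))
            * np.cos(2 * np.pi * f * x_t))

def curved_gabor_pixel(A, f, sigma_x, sigma_y):
    """Equation (4): enhanced pixel value from the (2p+1) x (2q+1) array A
    of gray values interpolated along a curved region, point-wise
    multiplied with an unrotated Gabor filter (theta = 0) and summed."""
    p = (A.shape[0] - 1) // 2
    q = (A.shape[1] - 1) // 2
    k = np.arange(A.shape[0])[:, None]   # index over the 2p+1 parallel curves
    l = np.arange(A.shape[1])[None, :]   # index along each curve (2q+1 points)
    return float(np.sum(A * gabor(k - p, l - q, 0.0, f, sigma_x, sigma_y)))
```

Because the curvature is absorbed into the interpolation that fills A, the filter itself stays a plain unrotated Gabor kernel.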
The authors of [64] applied a straight GF for fingerprint enhancement and proposed to use a circle instead of a square as the window underlying the GF in order to reduce the number of artifacts in the enhanced image. Similarly, we tested an ellipse with major axis 2q + 1 and minor axis 2p + 1 instead of the full curved region, i.e. in Equation 4, only those interpolated gray values of array A_{i,j} are considered which are located within the ellipse. In our tests, both variants achieved similar results on the FVC2004 databases (see Table 1). As opposed to [64], the term 'circular GF' is used in [59] and [60] for denoting the case σ_x = σ_y. Results Test Setup Two algorithms were employed for matching the original and the enhanced gray-scale images. The matcher "BOZORTH3" is based on the NIST biometric image software package (NBIS) [53], applying MINDTCT for minutiae extraction and BOZORTH3 for template matching. The matcher "VeriFinger 5.0 Grayscale" is derived from the Neurotechnology VeriFinger 5.0 SDK. For the verification tests, we follow the FVC protocol in order to ensure comparability of the results with [17] and other researchers. 2800 genuine and 4950 impostor recognition attempts were conducted for each of the FVC databases. Equal error rates (EERs) were calculated as described in [38]. Verification tests Curved Gabor filters were applied for enhancing the images of FVC2004 [40]. Several choices for σ_x, σ_y, the size of the curved region, and the interpolation method were tested. Figure 7: The detail on the left (impression 1 of finger 90 in FVC2004 database 4) is enhanced by Gabor filtering using rectangular windows (center) and curved regions (right). Both filters resort to the same orientation field estimation and the same ridge frequency estimation based on curved regions. Filter parameters are also identical (p = 16, q = 32, σ_x = 16, σ_y = 32); the only difference between the two is the shape of the window underlying the Gabor filter.
Artifacts are created by the straight filter, which may impair the recognition performance, and a true minutia is deleted (highlighted by a red circle). EERs for some combinations of filter parameters are reported in Table 1. Other choices for the size of the curved region and the standard deviations of the Gaussian resulted in similar EERs. Regarding the interpolation method, only results for nearest neighbor are listed, because replacing it by bilinear or bicubic interpolation did not lead to a noticeable improvement in our tests. In order to compare the enhancement performance of curved Gabor filters for low quality images with existing enhancement methods, the matcher BOZORTH3 was applied to the enhanced images of FVC2004, which enables the comparison with the traditional GF proposed in [26], short time Fourier transform (STFT) analysis [9] and pyramid-based image filtering [17] (see Table 1). Furthermore, in order to isolate the influence of the OF estimation and segmentation on the verification performance, we tested the x-signature method [26] for RF estimation and straight Gabor filters in combination with our OF estimation and segmentation. EERs are listed in the second and sixth rows of Table 1. In comparison to the results of the cited implementation, which applied an OF estimation and segmentation as described in [26], this led to lower EERs on DB1 and DB2, a higher EER on DB3 and a similar performance on DB4. In comparison to the performance on the original images, an improvement was observed on the first database and a deterioration on DB3 and DB4. Visual inspection of the enhanced images on DB3 showed that the increase of the EER was caused largely by incorrect RF estimates of the x-signature method. Moreover, we combined minutiae templates which were extracted by MINDTCT from images enhanced by curved Gabor filters and from images enhanced by anisotropic diffusion filtering.
A detailed representation of this combination can be found in [21], and results on FVC2004 [40] are listed in Table 1. (Table 1 notes: parentheses indicate that only a small foreground area of the fingerprints was useful for recognition; results listed in the top four rows are cited from [17]; the parameters of the curved Gabor filters are the size of the curved region, the interpolation method (NN = nearest neighbor), the considered pixels (F = full curved region, E = elliptical), and the standard deviations of the Gaussian.) These include the lowest EERs on the FVC2004 databases which have been achieved so far using MINDTCT and BOZORTH3. The matcher referred to as VeriFinger 5.0 Grayscale has a built-in enhancement step which cannot be turned off, so that the results for the original images in Table 1 are obtained by matching images which were also enhanced (by an undisclosed procedure of the commercial software). Results using this matcher were included in order to show that, even in the face of this built-in enhancement, the proposed image smoothing by curved Gabor filters leads to considerable improvements in verification performance. Conclusions The present work describes a method for ridge frequency estimation using curved regions and image enhancement by curved Gabor filters. For low quality fingerprint images, improvements of the matching performance in comparison to existing enhancement methods were shown. Besides matching accuracy, speed is an important factor for fingerprint recognition systems. The results given in Section 5 were achieved using a proof-of-concept implementation written in Java. In a first test of a GPU-based implementation on an Nvidia Tesla C2070, computing the RF image using curved regions of size 33 × 65 pixels took about 320 ms, and applying curved Gabor filters of size 65 × 33 pixels took about 280 ms. The RF estimation can be further accelerated if an estimate is computed only e.g. for every fourth pixel horizontally and vertically instead of pixel-wise.
These computing times indicate the practicability of the presented method for on-line verification systems. In our opinion, the potential for further improvements of the matching performance rests upon a better OF estimation. The combined method delineated in Section 2 produces fewer erroneous estimations than each of the individual methods, but there is still room for improvement. As long as OF estimation errors occur, it is necessary to choose the size of the curved Gabor filters and the standard deviations of the Gaussian envelope with care, in order to balance strong image smoothing against the risk of spurious features. Future work includes an exploration of a locally adaptive choice of these parameters, depending on the local image quality and e.g. the local reliability of the OF estimation. In addition, it will be of interest to apply the curved-region-based RF estimation and curved Gabor filters to latent fingerprints.
3,338
1103.4875
2952271679
One of the most influential recent results in network analysis is that many natural networks exhibit a power-law or log-normal degree distribution. This has inspired numerous generative models that match this property. However, more recent work has shown that while these generative models do have the right degree distribution, they are not good models for real-life networks due to their differences on other important metrics like conductance. We believe this is, in part, because many of these real-world networks have very different joint degree distributions, i.e. the probability that a randomly selected edge will be between nodes of degree k and l. Assortativity is a sufficient statistic of the joint degree distribution, and it has been previously noted that social networks tend to be assortative, while biological and technological networks tend to be disassortative. We suggest that understanding the relationship between network structure and the joint degree distribution of graphs is an interesting avenue of further research. An important tool for such studies is an algorithm that can generate random instances of graphs with the same joint degree distribution. This is the main topic of this paper, and we study the problem from both a theoretical and a practical perspective. We provide an algorithm for constructing simple graphs from a given joint degree distribution, and a Monte Carlo Markov Chain method for sampling them. We also show that the state space of simple graphs with a fixed degree distribution is connected via end point switches. We empirically evaluate the mixing time of this Markov Chain by using experiments based on the autocorrelation of each edge. These experiments show that our Markov Chain mixes quickly on real graphs, allowing for utilization of our techniques in practice.
The methods for constructing graphs with a given degree distribution are primarily either reductions to perfect matchings or sequential sampling methods. There are two popular perfect matching methods. The first is the configuration model @cite_15 @cite_48 : @math mini-vertices are created for each degree @math vertex, and all the mini-vertices are connected. Any perfect matching in the configuration graph corresponds to a graph with the correct degree distribution by merging all of the identified mini-vertices. This allows multiple edges and self-loops, which are often undesirable. See Figure . The second approach prevents multi-edges and self-loops by creating a gadget for each vertex. If @math has degree @math , then it is replaced with a complete bipartite graph @math with @math and @math . Exactly one node in each @math is connected to each other @math , representing edge @math @cite_6 . Any perfect matching in this model corresponds exactly to a simple graph by using the edges in the matching that correspond with edges connecting any @math to any @math . We use a natural extension of the first configuration model to the joint degree distribution problem.
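A minimal sketch of the first (stub-matching) configuration model described above; the function name is ours, and the result is a pseudograph that may contain self-loops and multi-edges:

```python
import random

def configuration_model(degrees, seed=None):
    """Illustrative stub-matching configuration model: deg(v) mini-vertices
    (stubs) are created for vertex v, a uniformly random perfect matching
    of all stubs is drawn, and matched stubs are merged back into vertices.
    The returned edge list is a pseudograph: self-loops and multi-edges
    may occur."""
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    if len(stubs) % 2 != 0:
        raise ValueError("degree sum must be even")
    rng = random.Random(seed)
    rng.shuffle(stubs)                  # random matching = pair up shuffled stubs
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]
```

Whatever matching is drawn, every vertex ends up with exactly its prescribed degree, which is the point of the construction.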
{ "abstract": [ "We propose a random graph model which is a special case of sparse random graphs with given degree sequences. This model involves only a small number of parameters, called logsize and log-log growth rate. These parameters capture some universal characteristics of massive graphs. Furthermore, from these parameters, various properties of the graph can be derived. For example, for certain ranges of the parameters, we will compute the expected distribution of the sizes of the connected components which almost surely occur with high probability. We will illustrate the consistency of our model with the behavior of some massive graphs derived from data in telecommunications. We will also discuss the threshold function, the giant component, and the evolution of random graphs in this model.", "Let Δ and n be natural numbers such that Δn = 2m is even and Δ ⩽ (2 log n)^{1/2} − 1. Then as n → ∞, the number of labelled Δ-regular graphs on n vertices is asymptotic to e^{−λ−λ²} (2m)! / (m! 2^m (Δ!)^n), where λ = (Δ − 1)/2. As a consequence of the method we determine the asymptotic distribution of the number of short cycles in graphs with a given degree sequence, and give analogous formulae for hypergraphs.", "We consider two problems: randomly generating labeled bipartite graphs with a given degree sequence and randomly generating labeled tournaments with a given score sequence. We analyze simple Markov chains for both problems. For the first problem, we cannot prove that our chain is rapidly mixing in general, but in the near-regular case, i.e., when all the degrees are almost equal, we give a proof of rapid mixing. Our methods also apply to the corresponding problem for general (nonbipartite) regular graphs, which was studied earlier by several researchers. 
One significant difference in our approach is that our chain has one state for every graph (or bipartite graph) with the given degree sequence; in particular, there are no auxiliary states as in the chain used by Jerrum and Sinclair. For the problem of generating tournaments, we are able to prove that our Markov chain on tournaments is rapidly mixing, if the score sequence is near-regular. The proof techniques we use for the two problems are similar. ©1999 John Wiley & Sons, Inc. Random Struct. Alg., 14: 293–308, 1999" ], "cite_N": [ "@cite_48", "@cite_15", "@cite_6" ], "mid": [ "2097147952", "2091476183", "1988307095" ] }
Constructing and Sampling Graphs with a Prescribed Joint Degree Distribution
Introduction Graphs are widely recognized as the standard modeling language for many complex systems, including physical infrastructure (e.g., Internet, electric power, water, and gas networks), scientific processes (e.g., chemical kinetics, protein interactions, and regulatory networks in biology, from the gene level through ecological systems), and relational networks (e.g., citation networks, hyperlinks on the web, and social networks). The broader adoption of graph models over the last decade, along with the growing importance of associated applications, calls for descriptive and generative models for real networks. What is common among these networks? How do they differ statistically? Can we quantify the differences among these networks? Answering these questions requires understanding the topological properties of these graphs, which has led to numerous studies on many "real-world" networks, from the Internet to social, biological and technological networks [18]. Perhaps the most prominent theme in these studies is the skewed degree distribution: real-world graphs have a few vertices with very high degree and many vertices with small degree. There is some dispute as to the exact distribution: some have called it power-law [5,18], some log-normal [4,51,41,8], but all agree that it is 'heavy-tailed' [17,54]. The ubiquity of this distribution has been a motivator for many different generative models and is often used as a metric for the quality of a model. Models like preferential attachment [5], the copying model [31], the Barabasi hierarchical model [53], the forest-fire model, the Kronecker graph model [33], geometric preferential attachment [19] and many more [34,59,11] study the expected degree distribution and use the results to argue for the strength of their method.
Many of these models also match other observed features, such as small diameter or densification [28]. However, recent studies comparing the generative models with real networks on metrics like conductance [35], core numbers [13] and clustering coefficients [30] show that the models do not match other important features of the networks. The degree distribution alone does not define a graph. McKay's estimate [39] shows that there may be exponentially many graphs with the same degree distribution. Nevertheless, models based on the degree distribution are commonly used to compute statistically significant structures in a graph. For example, the modularity metric for community detection in graphs [43,42] assumes a null hypothesis for the structure of a graph based on its degree distribution, namely that the probability of an edge between vertices v_i and v_j is proportional to d_i · d_j, where d_i and d_j represent the degrees of vertices v_i and v_j. The modularity of a group of vertices is defined by how much its structure deviates from the null hypothesis, and a higher modularity signifies a better community. The key point here is that the null hypothesis is based solely on the degree distribution and therefore might be incorrect. Degree distribution based models are also used to predict graph properties [40,2,15,14,16], to benchmark [32], and to analyze the expected run time of algorithms [7]. These studies improve our understanding of the relationship between the degree distribution and the structure of a graph. The shortcomings of these studies give insight into what other features besides the degree distribution would give us a better grasp of a graph's structure. For example, the degree assortativity of a network measures whether nodes attach to other similar or dissimilar vertices. This is not specified by the degree distribution, yet studies have shown that social networks tend to be assortative, while biological and technological networks tend to be disassortative [47,46].
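The degree-based null hypothesis underlying modularity can be made concrete with a small sketch. This is the standard community-sum formulation Q = Σ_c (e_c/m − (d_c/2m)²), written by us for illustration, not code from the papers cited above:

```python
from collections import Counter

def modularity(edges, community):
    """Illustrative modularity under the degree-based null model:
    Q = sum over communities c of (e_c / m - (d_c / 2m)^2), where e_c is
    the number of intra-community edges and d_c the total degree of
    community c.  `edges` is a list of undirected edges; `community`
    maps each node to its community label."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    m = len(edges)
    two_m = 2.0 * m
    # observed fraction of edges that stay inside a community
    intra = sum(1 for u, v in edges if community[u] == community[v]) / m
    # expected intra-community fraction when edge probability ~ d_i * d_j
    expected = 0.0
    for c in set(community.values()):
        d_c = sum(d for node, d in deg.items() if community[node] == c)
        expected += (d_c / two_m) ** 2
    return intra - expected
```

For two triangles joined by a single bridge edge, splitting the graph into the two triangles gives Q = 6/7 − 1/2 = 5/14, positive because the observed intra-community edge fraction exceeds the null model's expectation.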
An example of recent work using assortativity is [30]. In this study, a high assortativity is assumed for connections that generate high clustering coefficients, and this, in addition to preserving the degree distribution, results in very realistic instances of real-world graphs. Another study that has looked at the joint degree distribution is dK-graphs [38]. They propose modeling a graph by looking at the distribution of the structure of all size-d subsets of vertices, where d = 1 gives vertex degrees, d = 2 gives edge degrees (the joint degree distribution), d = 3 gives the degree distribution of triangles and wedges, and so on. It is an interesting idea, as clearly the nK distribution contains all information about the graph, but it is far too detailed as a model. At what value of d does the additional information become less useful? One way to enhance results based on the degree distribution is to use a more restrictive feature such as the joint degree distribution. Intuitively, if the degree distribution of a graph describes the probability that a vertex selected uniformly at random will be of degree k, then its joint degree distribution describes the probability that a randomly selected edge will be between nodes of degree k and l. We will use a slightly different concept, the joint degree matrix, where the total number of nodes and edges is specified, and the number of edges between each pair of degrees is counted. Note that while the joint degree distribution uniquely defines the degree distribution of a graph up to isolated nodes, graphs with the same degree distribution may have very different joint degree distributions. We are not proposing that the joint degree distribution be used as a stand-alone descriptive model for generating networks.
We believe that understanding the relationship between the joint degree distribution and the network structure is important, and that having the capability to generate random instances of graphs with the same joint degree distribution will help enable this goal. Experiments on real data are valuable, but drawing conclusions based only on limited data may be misleading, as the graphs may all be biased the same way. For a more rigorous study, we need a sampling algorithm that can generate random instances in a reasonable time, which is the motivation of this work. The primary questions investigated by this paper are: Given a joint degree distribution and an integer n, does the joint degree distribution correspond to a real labeled graph? If so, can one construct a graph of size n with that joint degree distribution? Is it possible to construct or generate a uniformly random graph with that same joint degree distribution? We address these problems from both a theoretical and an empirical perspective. In particular, being able to uniformly sample graphs allows one to empirically evaluate which other graph features, like diameter or eigenvalues, are correlated with the joint degree distribution. Contributions We make several contributions to this problem, both theoretically and experimentally. First, we discuss the necessary and sufficient conditions for a given joint degree vector to be graphical. We prove that these conditions are sufficient by providing a new constructive algorithm. Next, we introduce a new configuration model for the joint degree matrix problem which is a natural extension of the configuration model for the degree sequence problem. Finally, using this configuration model, we develop Markov Chains for sampling both pseudographs and simple graphs with a fixed joint degree matrix. A pseudograph allows multiple edges between two nodes and self-loops.
We prove the correctness of both chains, and the mixing time of the pseudograph chain, by using previous work. The mixing time of the simple graph chain is experimentally evaluated using autocorrelation. In practice, Monte Carlo Markov Chains are a very popular method for sampling from difficult distributions. However, it is often very difficult to theoretically evaluate the mixing time of a chain, and many practitioners simply stop the chain after 5,000, 10,000 or 20,000 iterations without much justification. Our experimental design with autocorrelation provides a set of statistics that can be used as a justification for choosing a stopping point. Further, we show one way that the autocorrelation technique can be adapted from real-valued samples to combinatorial samples. Notation and Definitions Formally, the degree distribution of a graph is the probability that a node chosen at random will be of degree k. Similarly, the joint degree distribution is the probability that a randomly selected edge will have end points of degree k and l. In this paper, we are concerned with constructing graphs that exactly match these distributions, so rather than probabilities, we will use a counting definition below and call it the joint degree matrix. In particular, we will be concerned with generating simple graphs that do not contain multiple edges or self-loops. Any graph that may have multiple edges or self-loops will be referred to as a pseudograph. A generic degree vector will be denoted by D. Definition 2. The joint degree matrix (JDM) J(G) of a graph G is a matrix where J(G)_{k,l} is exactly the number of edges between nodes of degree k and degree l in G. A generic joint degree matrix will be denoted by J. Given a joint degree matrix J, we can recover the number of edges in the graph as m = Σ_{k=1}^∞ Σ_{l=k}^∞ J_{k,l}. We can also recover the degree vector as D_k = (1/k) (J_{k,k} + Σ_{l=1}^∞ J_{k,l}).
The term J_{k,k} is counted twice because k·D_k is the number of endpoints of degree k, and each edge counted by J_{k,k} contributes two such endpoints. The number of nodes is then n = Σ_{k≥1} D_k. This count does not include any degree-0 vertices, as these have no edges in the joint degree matrix. Given n and m, we can easily recover the degree distribution and joint degree distribution: P(k) = D_k / n and P(k, l) = J_{k,l} / m. Note that P(k) is not quite the marginal of P(k, l), although it is closely related.

The Joint Degree Matrix Configuration Model

We propose a new configuration model for the joint degree distribution problem. Given J and its corresponding D, we create k labeled mini-vertices for every vertex of degree k. In addition, for every edge with endpoints of degree k and l we create two labeled mini-endpoints, one of class k and one of class l. We connect all degree-k mini-vertices to the class-k mini-endpoints. This forms a complete bipartite graph for each degree, and each of these forms a connected component that is disconnected from all other components. We call each of these components the "k-neighborhood". Notice that there are k·D_k mini-vertices of degree k, and k·D_k = J_{k,k} + Σ_l J_{k,l} corresponding mini-endpoints in each k-neighborhood. This is pictured in Figure 2. Take any perfect matching in this graph: if we merge each pair of mini-endpoints that correspond to the same edge, we obtain a pseudograph with exactly the desired joint degree matrix. This observation forms the basis of our sampling method.

Constructing Graphs with a Given Joint Degree Matrix

The Erdős–Gallai condition is a necessary and sufficient condition for a degree sequence to be realizable as a simple graph.

Theorem 1 (Erdős–Gallai). A degree sequence d = {d_1, d_2, ..., d_n} sorted in non-increasing order is graphical if and only if for every k ≤ n, Σ_{i=1}^{k} d_i ≤ k(k − 1) + Σ_{i=k+1}^{n} min(d_i, k).
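The Erdős–Gallai test is easy to state in code. A minimal sketch (the statement above leaves the evenness of the degree sum implicit, so we check it explicitly; the function name is our own):

```python
def erdos_gallai(degrees):
    """Check whether a degree sequence is graphical (Erdős–Gallai).

    Sorts into non-increasing order, verifies the degree sum is even,
    then checks the inequality for every prefix length k.
    """
    d = sorted(degrees, reverse=True)
    n = len(d)
    if sum(d) % 2 == 1:
        return False
    for k in range(1, n + 1):
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(di, k) for di in d[k:])
        if lhs > rhs:
            return False
    return True
```

For example, [3, 3, 3, 3] is graphical (K4), while [3, 3, 1, 1] fails the condition at k = 2.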
The necessity of this condition comes from noting that a set of k vertices can contain at most k(k−1)/2 internal edges, and each vertex v outside the set can contribute at most min(d(v), k) entering edges. The condition considers each subset of vertices of decreasing degree and examines the degree requirements of those nodes; if the requirement exceeds the available edges, the sequence cannot be graphical. Sufficiency is shown via the constructive Havel–Hakimi algorithm [23,22]. The existence of the Erdős–Gallai condition inspires us to ask whether similar necessary and sufficient conditions exist for a joint degree matrix to be graphical. The following conditions were independently studied by Amanatidis et al. [3].

Theorem 2. Let J be given and let D be the associated degree vector. J can be realized as a simple graph if and only if (1) D_k is integer-valued for all k, and (2) for all k ≠ l, J_{k,l} ≤ D_k·D_l, and for each k, J_{k,k} ≤ C(D_k, 2), where C(D_k, 2) = D_k(D_k − 1)/2.

The necessity of these conditions is clear. The first condition requires that there is an integer number of nodes of each degree value. The others require that the number of edges between nodes of degree k and l (or k and k) is no more than the total possible number of such edges in a simple graph defined by the marginal degree sequences. Amanatidis et al. show sufficiency through a constructive algorithm; we now introduce a new algorithm that runs in O(m) time. The algorithm proceeds by building a nearly regular graph for each class of edges J_{k,l}. Assume k ≠ l for simplicity. Each of the D_k nodes of degree k receives ⌊J_{k,l}/D_k⌋ edges, while J_{k,l} mod D_k of them receive one extra edge; similarly, the degree-l nodes receive ⌊J_{k,l}/D_l⌋ edges, with J_{k,l} mod D_l receiving one extra. We can then construct a simple bipartite graph with this degree sequence. This can be done in time linear in the number of edges using queues, as discussed after Lemma 1.
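The two conditions of Theorem 2 can be checked directly from J. A sketch, assuming J is stored as a symmetric square list-of-lists indexed by degree (our own layout choice):

```python
from math import comb

def jdm_is_graphical(J):
    """Check the conditions of Theorem 2 for a symmetric JDM.

    Condition (1): each D_k, recovered as (row sum + J[k][k]) / k,
    must be an integer. Condition (2): J[k][l] <= D_k * D_l for k != l,
    and J[k][k] <= C(D_k, 2).
    """
    max_deg = len(J) - 1
    D = {}
    for k in range(1, max_deg + 1):
        endpoints = sum(J[k][l] for l in range(max_deg + 1)) + J[k][k]
        if endpoints % k != 0:           # D_k would not be an integer
            return False
        D[k] = endpoints // k
    for k in range(1, max_deg + 1):
        for l in range(k, max_deg + 1):
            cap = comb(D[k], 2) if k == l else D[k] * D[l]
            if J[k][l] > cap:
                return False
    return True
```

The triangle's JDM (J[2][2] = 3) passes, while a lone degree-1-to-degree-2 edge (J[1][2] = 1) fails because D_2 would be 1/2.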
If k = l, the only differences are that the graph is no longer bipartite and there are 2·J_{k,k} endpoints to distribute among D_k nodes. To find a simple nearly regular graph, one can use the Havel–Hakimi algorithm [22,23] in O(J_{k,k}) time, using the degree sequence of the graph as input. We must show that there is a way to combine all of these nearly regular graphs without violating any degree constraints. Let d = d_1, d_2, ..., d_n be the degree sequence from D sorted in non-increasing order. Let d̂_v denote the residual degree of a vertex v, i.e. d_v minus the number of edges currently neighboring v. Also, let D̂_k denote the number of nodes of degree k that have non-zero residual degree, i.e. D̂_k = Σ_{d_j = k} 1(d̂_j ≠ 0).

Algorithm 1 Greedy Graph Construction with a Fixed JDM. Input: J, n, m, D
1: for k = n ... 1 and l = k ... 1 do
2:   if k ≠ l then
3:     Let a = J_{k,l} mod D_k and b = J_{k,l} mod D_l
4:     Let x_1 ... x_a = ⌊J_{k,l}/D_k⌋ + 1, x_{a+1} ... x_{D_k} = ⌊J_{k,l}/D_k⌋, and y_1 ... y_b = ⌊J_{k,l}/D_l⌋ + 1, y_{b+1} ... y_{D_l} = ⌊J_{k,l}/D_l⌋
5:     Construct a simple bipartite graph B with degree sequence x_1 ... x_{D_k}, y_1 ... y_{D_l}
6:   else
7:     Let c = 2J_{k,k} mod D_k
8:     Let x_1 ... x_c = ⌊2J_{k,k}/D_k⌋ + 1 and x_{c+1} ... x_{D_k} = ⌊2J_{k,k}/D_k⌋
9:     Construct a simple graph B with degree sequence x_1 ... x_{D_k}
10:  end if
11:  Place B into G by matching the degree-k nodes of G with higher residual degree to x_1 ... x_a and the degree-l nodes with higher residual degree to y_1 ... y_b; the remaining vertices of B can be matched arbitrarily to nodes of G of degree k and l
12:  Update the residual degrees of each affected degree-k and degree-l node
13: end for

To combine the nearly regular subgraphs, we start with the largest-degree nodes and the corresponding largest degree classes. It is not necessary to start with the largest, but it simplifies the proof.
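The queue discipline used to keep residual degrees balanced (discussed after Lemma 1 below) can be sketched as follows; the helper name is illustrative:

```python
from collections import deque

def assign_class_edges(queue, num_edges):
    """Round-robin assignment of num_edges endpoint slots to the vertices
    in `queue` (a deque of vertex ids of one degree class).

    Pop from the front, reinsert at the back: no vertex is assigned a
    second endpoint before every other vertex has received one, so any
    two residual degrees in the class differ by at most 1. The queue
    position persists between calls, as in the Lemma 1 argument.
    """
    counts = {v: 0 for v in queue}
    for _ in range(num_edges):
        v = queue.popleft()
        counts[v] += 1
        queue.append(v)
    return counts
```

Two successive calls on the same deque continue from where the previous round stopped, which is exactly what keeps the balanced degree invariant across edge classes.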
First, we note that after every iteration, the residual joint degree matrix is still feasible if, for all k ≠ l, Ĵ_{k,l} ≤ D̂_k·D̂_l, and for all k, Ĵ_{k,k} ≤ C(D̂_k, 2). We will prove that Algorithm 1 can always maintain these feasibility conditions. First, we note a fact.

Observation 1. For all k, Σ_l Ĵ_{k,l} + Ĵ_{k,k} = Σ_{d_j = k} d̂_j.

This follows directly from the fact that the left-hand side sums all of the degree-k endpoints still required by Ĵ, while the right-hand side sums the available residual endpoints from the degree sequence. Next, we note that if all residual degrees of degree-k nodes are either 0 or 1, then:

Observation 2. If d̂_j ∈ {0, 1} for all j with d_j = k, then Σ_{d_j = k} d̂_j = Σ_{d_j = k} 1(d̂_j ≠ 0) = D̂_k.

Lemma 1. After every iteration, for every pair of vertices u, v of any degree k, |d̂_u − d̂_v| ≤ 1.

Amanatidis et al. refer to Lemma 1 as the balanced degree invariant. It is most easily proven by viewing the vertices of degree k as a queue. If there are x edges to be assigned, the process of deciding how many edges each vertex receives amounts to popping vertices from the front of the queue and reinserting them at the back, x times; each vertex is assigned as many edges as the number of times it was popped. The next time we assign edges with endpoints of degree k, we start the queue at the position where we previously ended. Clearly no vertex can be popped twice before all other vertices have been popped at least once.

Lemma 2. The algorithm can always greedily produce a graph that satisfies J, provided J satisfies the initial necessary conditions.

Proof. There is one key observation about this algorithm: it maximizes D̂_k·D̂_l by ensuring that the residual degrees of any two vertices of the same degree never differ by more than 1. By maximizing the number of available vertices, we cannot get stuck having to add a self-loop or multiple edge.
From this we gather that if, for some degree k, there exists a vertex j with d̂_j = 0, then all vertices of degree k must have residual 0 or 1. By Observation 2, this means Σ_{d_j=k} d̂_j = D̂_k ≥ Ĵ_{k,l} for every other l. From the initial conditions we have J_{k,l} ≤ D_k·D_l for every k, l, and D̂_k = D_k as long as all degree-k vertices have non-zero residuals. Otherwise, for any unprocessed pair, Ĵ_{k,l} ≤ min{D̂_k, D̂_l} ≤ D̂_k·D̂_l. For the (k, k) case, it is clear that Ĵ_{k,k} ≤ D̂_k ≤ C(D̂_k, 2). Therefore, the residual joint degree matrix and degree sequence always remain feasible, and the algorithm can always continue.

A natural question is whether, since the joint degree distribution contains all of the information in the degree distribution, the necessary conditions for the joint degree matrix easily imply the Erdős–Gallai condition. This can indeed be shown.

Corollary 1. The necessary conditions for a joint degree matrix to be graphical imply that the associated degree vector satisfies the Erdős–Gallai condition.

Uniformly Sampling Graphs with Markov Chain Monte Carlo (MCMC) Methods

We now turn our attention to uniformly sampling graphs with a given graphical joint degree matrix using MCMC methods. We return to the joint degree matrix configuration model. We can obtain a starting configuration for any graphical joint degree matrix by using Algorithm 1. This configuration consists of one complete bipartite component for each degree, with a perfect matching selected. The transitions select an endpoint uniformly at random, then select any other endpoint in its degree neighborhood, and swap the two edges that these endpoints neighbor. In Figure 2, this is equivalent to selecting one of the square endpoints uniformly at random, selecting another uniformly at random from the same connected component, and swapping the edges. A more complex version of this chain checks that the swap does not create a multiple edge or self-loop.
Formally, the transition function is the randomized algorithm given by Algorithm 2.

Algorithm 2 Endpoint swap transition
1: Select an endpoint e_1 uniformly at random
2: Select an endpoint e_2 uniformly at random from the same degree neighborhood
3: Let v_1 and v_2 be the mini-vertices currently matched to e_1 and e_2
4: (Chain B only) If E ∪ {(e_1, v_2), (e_2, v_1)} \ {(e_1, v_1), (e_2, v_2)} contains a multi-edge or self-loop, reject
5: E ← E ∪ {(e_1, v_2), (e_2, v_1)} \ {(e_1, v_1), (e_2, v_2)}

There are two chains described by Algorithm 2. The first, A, lacks step (4), and its state space is all pseudographs with the desired joint degree matrix. The second, B, includes step (4) and transitions only to and from simple graphs with the correct joint degree matrix. We remind the reader of the standard result that any irreducible, aperiodic Markov chain with symmetric transitions converges to the uniform distribution over its state space. Both A and B are aperiodic, due to the self-loop at each state. From the description of the transition function, we can see that A is symmetric. This is less clear for the transition function of B: could two connected configurations have a different number of feasible transitions within a given degree neighborhood? We show in the following lemma that this cannot happen.

Proof. Let C_1 and C_2 be two neighboring configurations of B. This means that they differ in exactly 4 edges within exactly one degree neighborhood. Let this degree be k, and let these edges be e_1 v_1 and e_2 v_2 in C_1, versus e_1 v_2 and e_2 v_1 in C_2. We want to show that C_1 and C_2 have exactly the same number of feasible degree-k swaps. Without loss of generality, let (e_x, e_y) be a swap that is prevented by e_1 in C_1 but allowed in C_2. This must mean that e_x neighbors v_1 and e_y neighbors some v_y ≠ v_1, v_2. Notice that the swap (e_1, e_x) is currently feasible. However, in C_2, the swap (e_1, e_x) becomes infeasible, even though (e_x, e_y) becomes possible. Considering the other cases, e.g. when (e_x, e_y) is prevented by both e_1 and e_2, then after swapping e_1 and e_2, the swap (e_x, e_y) is still infeasible.
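The transition of Algorithm 2 can be sketched as follows; all names are illustrative, and the optional `simple_check` callback stands in for the step (4) test of chain B:

```python
import random

def swap_step(matching, neighborhoods, simple_check=None):
    """One transition of the endpoint-swap chain.

    `matching` maps each mini-endpoint to its current mini-vertex;
    `neighborhoods` maps each endpoint to the other endpoints of its
    degree neighborhood. With `simple_check` supplied (chain B), the move
    is rejected when the swapped edge set would contain a multi-edge or
    self-loop. Returns True iff the swap was performed.
    """
    e1 = random.choice(list(matching))
    e2 = random.choice(neighborhoods[e1])
    if e1 == e2:
        return False                        # lazy self-loop move
    v1, v2 = matching[e1], matching[e2]
    if simple_check is not None and not simple_check(e1, v2, e2, v1):
        return False                        # step (4): reject, stay put
    matching[e1], matching[e2] = v2, v1     # step (5): perform the swap
    return True
```

With `simple_check=None` this is chain A; passing a predicate that tests the candidate edge set gives chain B.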
If swapping e_1 and e_2 makes something feasible in C_1 infeasible in C_2, then we can use the above argument in reverse. This means that the number of feasible swaps in a k-neighborhood is invariant under k-degree swaps.

The remaining important question is the connectivity of the state space of these chains. It is simple to show that the state space of A is connected: it is a standard result that all perfect matchings in a complete bipartite graph are connected via edge swaps [58], and the space of pseudographs is exactly the set of all perfect matchings over the disjoint complete bipartite degree neighborhoods of the joint degree matrix configuration model. The connectivity result is much less obvious for B. To prove it, we adapt a result of Taylor [58] that all graphs with a given degree sequence are connected via edge swaps; the proof is inductive and follows the structure of Taylor's proof.

Theorem 3. Given two simple graphs G_1 and G_2 of the same size with the same joint degree matrix, there exists a series of endpoint rewirings transforming G_1 into G_2 (and vice versa) in which every intermediate graph is also simple.

Proof. The proof proceeds by induction on the number of nodes in the graph. The base case is a graph with 3 nodes: there are 3 realizable JDMs, each uniquely realizable, so no switchings are available. Assume the statement holds for n = k, and let G_1 and G_2 have k + 1 vertices. Label the nodes of G_1 and G_2 as v_1 ... v_{k+1} such that deg(v_1) ≥ deg(v_2) ≥ ... ≥ deg(v_{k+1}). Our goal is to show that both graphs can be transformed, through simple intermediate graphs, so that v_1 neighbors the same nodes in each. We can then remove v_1 from both graphs, leaving two graphs on n − 1 nodes with identical JDMs; by the inductive hypothesis these can be transformed into one another, and the result follows.

Fig. 4 The dotted edges represent the troublesome edges that we may need to swap out before we can swap v_1 and v_c.
Fig. 5 The disk is v_1; the crosses are e_1 ... e_{d_1}.

We break the analysis into two cases. For both cases, we have a set of target edges e_1, e_2, ..., e_{d_1} that we want v_1 to be connected to. Without loss of generality, let this set be the edges that v_1 neighbors in G_2. We assume the edges are ordered in reverse lexicographic order by the degrees of their endpoints; this guarantees that the resulting construction for v_1 is graphical and that we have a non-increasing ordering on the requisite endpoints. Let k_i denote the endpoint of edge e_i in G_2 that is not v_1.

Case 1) For the first case, assume that v_1 is already an endpoint of all edges e_1, e_2, ..., e_{d_1}, but that the k_i may not all be assigned correctly, as in Figure 5. Assume that e_1, e_2, ..., e_{i−1} are the edges (v_1, k_1), ..., (v_1, k_{i−1}) and that e_i is the first edge not matched to its appropriate k_i. Call the other current endpoint of e_i u_i. We know that deg(k_i) = deg(u_i) and that k_i currently neighbors deg(k_i) other nodes, Γ(k_i). We have two sub-cases. The first is that v_1 ∈ Γ(k_i), but via an edge f instead of e_i. Here, we can swap v_1 between the endpoints of f and e_i so that the edge v_1 − e_i − k_i is in the graph; f cannot be an e_j with j < i, because those edges already have their correct endpoints k_j assigned. This is demonstrated in Figure 6. The other sub-case is that v_1 ∉ Γ(k_i). In this case, there must exist some x ∈ Γ(k_i) \ Γ(u_i), because deg(u_i) = deg(k_i) and u_i neighbors v_1 while k_i does not. Therefore, we can swap the edges v_1 − e_i − u_i and x − f − k_i to v_1 − e_i − k_i and x − f − u_i without creating any self-loops or multiple edges. This is also demonstrated in Figure 6.
Therefore, we can swap all of the correct endpoints onto the correct edges.

Fig. 7 The two parts of Case (2).

Case 2) For the second case, we assume that the edges e_1, ..., e_{d_1} are distributed over l nodes of degree d_1. We want to show that we can move all of the edges e_1 ... e_{d_1} so that v_1 is an endpoint; once this is achieved, we are exactly in Case 1. Let e_1, ..., e_{i−1} be currently matched to v_1, and let e_i be matched to some x with deg(x) = d_1. Let f be an edge currently matched to v_1 that is not among e_1 ... e_{d_1}, and let its other endpoint be u_f. Let the other endpoint of e_i be u_x, as in Figure 7. We now have several initial cases that are all easy to handle. First, if v_1, x, u_x, u_f are all distinct and (v_1, u_x) and (x, u_f) are not edges, then we can easily swap v_1 and x, taking the edges v_1 − f − u_f and x − e_i − u_x to v_1 − e_i − u_x and x − f − u_f. Next, if u_f = u_x, then we can simply swap v_1 onto e_i and x onto f, and again v_1 will neighbor e_i. This does not create any self-loops or multiple edges because the resulting graph is isomorphic. These situations are both shown in Figure 7. The next case is x = u_f. If we try to swap v_1 onto e_i, we create a self-loop from x to x via f. Instead, we note that since the JDM is graphical, there must exist a third vertex y of the same degree as v_1 and x that does not neighbor x. Now, y neighbors an edge g, and we can swap x − f and y − g to x − g and y − f. The edges are now v_1 − f − y and x − e_i − u_x, and e_i can be swapped onto v_1 without conflict. The cases left to analyze are those where the nodes are all distinct and (v_1, u_x) or (x, u_f) is an edge in the graph; we analyze these separately.

Case 2a) If (v_1, u_x) is an edge in the graph, then it must be so through some edge named g. Note that this means we have v_1 − g − u_x and x − e_i − u_x.
We can swap this to v_1 − e_i − u_x and x − g − u_x and obtain an isomorphic graph, provided that g is not some e_j with j < i. This is the top case in Figure 8. If g is some e_j, then it must be that u_x = k_j, which is distinct from k_i. Since deg(k_j) = deg(k_i), there must exist some edge h that k_i neighbors, with its other endpoint being y. There are again three cases: when y ≠ x, v_1; when y = x; and when y = v_1. These are the bottom three rows illustrated in Figure 8. The first is the simplest: here we can assume that k_j does not neighbor y (because it neighbors v_1 and x, which k_i does not), so we can swap k_j onto h and k_i onto e_j. This removes the offending edge, and we can now swap v_1 onto e_i and x onto f. When y = x, we first swap k_i onto e_j and k_j onto h; next, we swap v_1 onto e_i and x onto f, as they no longer share an offending edge. Finally, when y = v_1, we use a sequence of three swaps: first k_i onto e_j and k_j onto h; next v_1 onto e_i and x onto h; finally, we swap k_j back onto e_j and k_i onto e_i.

Case 2b) If (x, u_f) is an edge in the graph, then it must be through some edge g such that x − g − u_f and x − e_i − u_x. Without loss of generality, assume that f is the only edge neighboring v_1 that is not an e_j. Since f does not neighbor v_1 in G_2, there must exist either a w with deg(w) = deg(u_f) or a v_s with deg(v_s) = deg(v_1). This relies critically on the fact that f and g are edges of the same class. If there is a w, then it does not neighbor v_1 (or we can apply the above argument to find a w′ that does not), and it must have some neighbor y ∈ Γ(w) \ Γ(u_f) through an edge h. Therefore, we can swap u_f onto h and w onto f. This removes the offending edge, and we can now swap v_1 onto e_i and x onto f. If v_s exists instead, then by the same argument there exists some edge h with endpoint u_s such that v_s ∉ Γ(u_f) and u_s ∉ Γ(x). Therefore, we can swap v_s − h and x − g to v_s − g and x − h.
This again removes the troublesome edge and allows us to swap v_1 onto e_i. Therefore, given any node, a precise set of edges that it should neighbor, and a set of vertices that are the endpoints of those edges, we can use half-edge rewirings to transform any graph G into a graph G′ with this property, provided the set of edges is graphical.

Now that we have shown that both A and B converge to the uniform distribution over their respective state spaces, the next question is how quickly this happens. Note that from the proof that the state space of B is connected, we can upper-bound the diameter of the state space by 3m. The diameter provides a lower bound on the mixing time. In the next section, we empirically estimate the mixing time to be also linear in m.

Estimating the Mixing Time of the Markov Chain

The Markov chain A is very similar to one analyzed by Kannan, Tetali and Vempala [26]. We can use their canonical paths and analysis exactly to show that the mixing time is polynomial. This result follows directly from Theorem 3 of [26] for chain A, because the joint degree matrix configuration model can be viewed as |D| complete, bipartite, disjoint components. These components remain disjoint, so the Markov chain can be viewed as a 'meta-chain' that samples a component and then runs one step of the Kannan–Tetali–Vempala chain on that component. Even though the mixing time of this chain is provably polynomial, the upper bound is too large to be useful in practice.

The analysis needed to bound the mixing time of chain B is significantly more complicated. One approach is to use the canonical path method to bound the congestion of the chain. The standard trick is to define a path from G_1 to G_2 that fixes the misplaced edges identified by G_1 ⊕ G_2 in a globally ordered way. However, this is difficult to apply to chain B because fixing a specific edge may not be atomic, i.e.
from the proof of Theorem 3, it may take up to 4 swaps to correctly connect a vertex with an endpoint if there are conflicts with other degree neighborhoods. These swaps take place in other degree neighborhoods and are not local moves. They therefore introduce new errors that must be fixed but cannot be incorporated into G_1 ⊕ G_2. In addition, step (4) also prevents us from using path coupling to prove the mixing time. Given that bounding the mixing time of this chain seems difficult without new techniques or ideas, we run a series of experiments that substitute the autocorrelation time for the mixing time.

Autocorrelation Time

Autocorrelation time is a quantity related to the mixing time that is popular among physicists. We give a brief introduction to the concept and refer the reader to Sokal's lecture notes for further details and discussion [56]. The autocorrelation of a signal is the cross-correlation of the signal with itself at a given lag t. More formally, given a series of data X_i, where each X_i is drawn from the same distribution X with mean µ and variance σ², the autocorrelation function is R_X(t) = E[(X_i − µ)(X_{i−t} − µ)] / σ². Intuitively, the inherent problem with using a Markov chain sampling method is that successive states generated by the chain may be highly correlated. If we were able to draw independent samples from the stationary distribution, then the autocorrelation of that set of samples with itself would go to 0 as the number of samples increased. The autocorrelation time captures the size of the gap needed between sampled states of the chain before the autocorrelation of this 'thinned' chain is very small. If the thinned chain has 0 autocorrelation, then it is exactly sampled from the stationary distribution.
In practice, when estimating the autocorrelation from a finite number of samples, we do not expect it to reach exactly 0, but we do expect it to 'die away' as the number of samples and the gap increase.

Definition 3. The exponential autocorrelation time is τ_{exp,X} = limsup_{t→∞} t / (−log |R_X(t)|) [56].

Definition 4. The integrated autocorrelation time is τ_{int,X} = (1/2) Σ_{t=−∞}^{∞} R_X(t) = 1/2 + Σ_{t=1}^{∞} R_X(t) [56].

The difference between the two is that the exponential autocorrelation time measures the time it takes for the chain to reach equilibrium after a cold start, i.e. the 'burn-in' time, while the integrated autocorrelation time is related to the increase in the variance of estimates computed from Markov chain samples relative to truly independent samples. These measurements often coincide, although this is not necessarily true. We can substitute the autocorrelation time for the mixing time because they are, in effect, measuring the same thing: the number of iterations the Markov chain needs to run before the difference between the current distribution and the stationary distribution is small. We use the integrated autocorrelation time estimate.

Experimental Design

We used the Markov chain B in two different ways. First, for each of the smaller datasets, we ran the chain for 50,000 iterations 15 times. We used these runs to calculate the autocorrelation values for each edge for each lag between 100 and 15,000, in multiples of 100. From this, we calculated the estimated integrated autocorrelation time, as well as the iteration time for the autocorrelation of each edge to drop under a threshold of 0.001. This is discussed in Section 6.4. We also replicated the experimental design of Raftery and Lewis [52].
Given our estimates of the autocorrelation time for each graph in Section 6.4, we ran the chain again long enough to capture 10,000 samples, with x iterations of the chain between successive samples; x was chosen to range from much smaller than the estimated autocorrelation time to much larger. From these samples, we calculated the sample mean for each edge and compared it with the true mean determined by the joint degree matrix. We examined the total variational distance between the sample means and true means and found that the difference appears to converge to 0. We chose the mean as an evaluation metric because we can calculate the true means theoretically; we are unaware of another similarly simple metric. We used the formulas for empirical evaluation of mixing time from page 14 of Sokal's survey [56]. In particular, we used the following:

• The sample mean is µ = (1/n) Σ_{i=1}^{n} x_i.
• The sample unnormalized autocorrelation function is Ĉ(t) = (1/(n−t)) Σ_{i=1}^{n−t} (x_i − µ)(x_{i+t} − µ).
• The natural estimator of R_X(t) is ρ̂(t) = Ĉ(t)/Ĉ(0).
• The estimator for τ_{int,X} is τ̂_int = (1/2) Σ_{t=−(n−1)}^{n−1} λ(t) ρ̂(t), where λ is a 'suitable' cutoff function.

For a sequence of length x, calculating the autocorrelation at gap t requires x − t products. Our experiments require calculating the autocorrelation of each possible edge in a graph at many lags, so running the full set of experiments requires O(|V|² x log x) time, which is prohibitive when |V| is large. Note that x must be at least Θ(E), since the mixing time cannot be sub-linear in the number of edges. In Section 6.3 we discuss results on the smaller datasets (AdjNoun, Dolphins, Football, Karate, and LesMis) that suggest a more feasible method for estimating the autocorrelation time of larger graphs. We use this method to evaluate the autocorrelation time for the larger graphs as well, and present all of the results together.
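Sokal's estimators above translate directly into code. A sketch (function names are ours; the cutoff is passed as a window length rather than a function λ):

```python
import numpy as np

def autocorrelation(x):
    """Sample unnormalized autocorrelation C(t) and normalized rho(t).

    C(t) = (1/(n-t)) * sum_{i=1}^{n-t} (x_i - mu)(x_{i+t} - mu),
    rho(t) = C(t)/C(0), following Sokal's estimators.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    mu = x.mean()
    C = np.array([np.dot(x[:n - t] - mu, x[t:] - mu) / (n - t)
                  for t in range(n)])
    return C, C / C[0]

def integrated_autocorrelation_time(rho, cutoff):
    """tau_int = 1/2 + sum_{t=1}^{cutoff-1} rho(t), i.e. the symmetric
    sum with a hard cutoff window lambda(t) = 1 for |t| < cutoff."""
    return 0.5 + rho[1:cutoff].sum()
```

For an alternating series 1, −1, 1, −1, ..., the lag-1 autocorrelation is exactly −1, the extreme anticorrelated case.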
Rather than running the chain for 15,000 steps for the larger graphs, we selected more appropriate stopping conditions, generally 10|E|, based on the results for the smaller graphs.

Data Sets

We used several publicly available datasets: Word Adjacencies [48], Les Miserables [29], American College Football [20], the Karate Club [63], the Dolphin Social Network [36], the C. Elegans Neural Network (celegans) [60,62], the power grid (power) [61], Astrophysics collaborations (astro-ph) [44], High-Energy Theory collaborations (hep-th) [45], Coauthorships in network science (netscience) [49], and a snapshot of the Internet from 2006 (as-22july) [50].

Table 1 Details about the datasets: |V| is the number of nodes, |E| is the number of edges, and |J| is the number of non-zero entries in J.

Relationship Between Mean of an Edge and Autocorrelation

For each of the smaller graphs (AdjNoun, Dolphins, Football, Karate and LesMis), we ran the Markov chain 10 times for 50,000 iterations and collected an indicator variable for each potential edge. For each edge and each run, we calculated the autocorrelation function for values of t between 100 and 15,000 in multiples of 100, and recorded the t value at which the autocorrelation function first dropped below the threshold of 0.001. We then plotted the mean of these values against the mean of the edge: if the edge connects vertices of degree d_i and d_j with d_i ≠ d_j, then µ_e = J_{d_i,d_j}/(D_{d_i}·D_{d_j}), and otherwise µ_e = J_{d_i,d_i}/C(D_{d_i}, 2). The three most useful plots are given in Figures 10 and 11, as the other graphs did not contain a large range of mean values. From these results, we identified a potential relationship between µ_e and the time to pass under the threshold.
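The µ_e formula above is a one-liner per case; a sketch with an illustrative function name, using the same array layout as before:

```python
from math import comb

def edge_mean(J, D, di, dj):
    """True mean of the indicator variable for a potential edge between
    a degree-di node and a degree-dj node:
      mu_e = J[di][dj] / (D[di] * D[dj])   if di != dj,
      mu_e = J[di][di] / C(D[di], 2)       if di == dj.
    """
    if di == dj:
        return J[di][di] / comb(D[di], 2)
    return J[di][dj] / (D[di] * D[dj])
```

For the star K_{1,3} (J[1][3] = 3, D_1 = 3, D_3 = 1) every potential (1, 3) edge is present, so µ_e = 1; for the 4-cycle (J[2][2] = 4, D_2 = 4), µ_e = 4/6.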
Unfortunately, none of our datasets contained a significant number of edges with larger µ_e values, i.e. between 0.5 and 1. In order to test this hypothesis, we designed a synthetic dataset containing many edges with µ_e values at i/20 for i = 1, ..., 20. We describe the creation of this dataset in the appendix. The final dataset had 326 edges, 194 vertices and 21 distinct entries in J. We ran the Markov chain 200 times for this synthetic graph and, for each run, calculated the threshold value for each edge. Figure 11 plots each edge's mean against its mean time for the autocorrelation value to pass under 0.001. We see a roughly symmetric curve that attains its maximum at µ_e = 0.5. This result suggests a way to estimate the autocorrelation time for larger graphs without repeating the entire experiment for every edge that could possibly appear: one can calculate µ_e for each edge from the JDM and sample edges with µ_e around 0.5. We use this method for selecting our subset of edges to analyze. In particular, we sampled about 300 edges from each of the larger graphs. For all of these except

Autocorrelation Values

For each dataset and each run, we calculated the unnormalized autocorrelation values. For the smaller graphs, this entailed setting t to every value between 100 and 15,000 in multiples of 100. We randomly selected one run for each dataset and graphed the autocorrelation values for each of the edges. We present the data for the Karate and Dolphins datasets in Figures 12 and 13. For the larger graphs, we changed the starting and ending points based on the graph size; for example, netscience was analyzed from 2,000 to 15,000 in multiples of 100, while as-22july was analyzed from 1,000 to 500,000 in multiples of 1,000.

Fig. 13 The exponential drop-off for Dolphins appears to end after 600 iterations. All of the graphs exhibit the same behavior.
We see an exponential drop-off initially, after which the autocorrelation values oscillate around 0. This behavior is due to the limited number of samples and a bias introduced by using the sample mean for each edge. Ignoring the noisy tail, we estimate that the autocorrelation 'dies off' at the point where the mean absolute value of the autocorrelation approximately converges; this lets us locate the 'elbow' in the graphs. This estimate for all graphs is given in Table 3 at the end of this section.

Estimated Integrated Autocorrelation Time

For each dataset and run, we calculated the estimated integrated autocorrelation time. For the datasets with fewer than 1,000 edges, we calculated the autocorrelation at lags from 100 to 15,000 in multiples of 100; for the larger ones, we used intervals that depended on the total size of the graph. We estimate τ̂_int as the size of the intervals times the sum of the sampled ρ̂(t) values. The cutoff function we used for the smaller graphs was λ(t) = 1 if 0 < t < 15,000 and 0 otherwise. This value was calculated for each edge. In Table 2 we present the mean, maximum and minimum estimated integrated autocorrelation time for each dataset over the runs of the Markov chain, using three different methods: for each edge, we first calculated the mean, median and max estimated integrated autocorrelation value over the various runs; then, for each of these three values, we calculated the max, mean and min over all edges. For each of the graphs, the data series representing the median and max have had their x-values perturbed slightly for clarity. These values are graphed on a log-log scale plot. We also present a graph showing the ratio of these values to the number of edges. The ratio plot, Figure 15, suggests that the autocorrelation time may be a linear function of the number of edges in the graph, although the estimates are noisy due to the limited number of runs. All three metrics give roughly the same picture.
We note that there is much higher variance in the estimated autocorrelation time for the larger graphs. Considering the evidence of the log-log plot and the ratio plot, we suspect that the autocorrelation time of this Markov Chain is linear in the number of edges.

The Sample Mean Approaches the Real Mean for Each Edge

Given the results of the previous experiment estimating the integrated autocorrelation time, we next executed an experiment suggested by Raftery and Lewis [52]. First we note that for each edge e between nodes of degree k and l, we know the true value of P(e ∈ G | G has J) exactly: it is J_{k,l}/(D_k D_l) if k ≠ l, and J_{k,k}/(D_k choose 2) if k = l. This is because there are D_k D_l potential (k, l) edges that can show up in any graph with a fixed J, and each such graph contains J_{k,l} of them. If we consider the graphs as labeled, then each edge has an equal probability of showing up when we consider permutations of the orderings.

Fig. 15 The ratio of the max, median and min values over the edges to the number of edges for the estimated integrated autocorrelation times. L to R in order of size: Karate, Dolphins, LesMis, AdjNoun, Football, celegans, netscience, power, hep-th, as-22july and astro-ph

Thus, our experiment was to take samples at varying intervals and compare the sample mean of each edge with the known theoretical mean. For the smaller graphs, we took 10,000 samples at varying gaps depending on our estimated integrated autocorrelation time, and repeated this 10 times. Additionally, we saw that the total variational distance quickly converged to a small but non-zero value. We repeated this experiment with 20,000 samples and, for the two smallest graphs, Karate and Dolphins, with 5,000 and 40,000 samples. These results show that this error is due to the number of samples and not the sampler. For the graphs with more than 1,000 edges, each run produced 20,000 samples at varying gaps, and this was repeated 5 times.
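This exact per-edge probability is easy to compute; a minimal sketch (ours), with J a dict mapping degree pairs (k, l), k ≤ l, to edge counts and D a dict of degree counts:

```python
from math import comb

def true_edge_mean(J, D, k, l):
    """P(e in G | G has JDM J) for a potential edge between a fixed node of
    degree k and a fixed node of degree l: the J_{k,l} edges are spread
    evenly over all D_k * D_l (or C(D_k, 2) when k == l) potential slots."""
    if k != l:
        return J[(min(k, l), max(k, l))] / (D[k] * D[l])
    return J[(k, k)] / comb(D[k], 2)
```

For example, in a triangle (three degree-2 nodes, three (2, 2) edges) every potential edge is present, so the mean is 1.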
We present these results in Figures 18 through 28. If S_{e,g} is the sample mean for edge e and gap g, and μ_e is the true mean, then the graphed value is ∑_e |S_{e,g} − μ_e| / ∑_e μ_e. In all of the figures, the line runs through the median error for the runs, and the error bars mark the maximum and minimum values. The maximum and minimum are very close to the median, within 0.05% for most intervals. These graphs imply that we are sampling uniformly after a gap of 175 for the Karate graph. For the Dolphins graph, we see very similar results, with the error becoming constant after a sampling gap of 400 iterations. For the larger graphs, we varied the gaps based on the graph size and then focused on the region where the error appeared to be decreasing. Again, we see consistent results, although the residual error is higher. This is to be expected because there are more potential edges in these graphs, so we took relatively fewer samples per edge. A summary of the results can be found in Table 3. Based on the results in this table, our recommendation is that running the Markov Chain for 5m steps would satisfy all running-time estimates except Power's result for the maximum estimated integrated autocorrelation time. This estimate is significantly lower than the result for chain A that was obtained using the standard theoretical technique of canonical paths.

Conclusions and Future Work

This paper makes two primary contributions. The first is the investigation of Markov Chain methods for uniformly sampling graphs with a fixed joint degree distribution. Previous work shows that the mixing time of A is polynomial, while our experiments suggest that the mixing time of B is also polynomial. The relationship between the mean of an edge and the autocorrelation values can be used to efficiently experiment with larger graphs, by sampling only edges with mean between 0.4 and 0.6 and repeating the analysis for just those edges.
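The plotted error metric itself is a one-liner; a small sketch (ours, not the paper's code):

```python
import numpy as np

def relative_edge_error(sample_means, true_means):
    """sum_e |S_{e,g} - mu_e| / sum_e mu_e for one sampling gap g."""
    s = np.asarray(sample_means, dtype=float)
    mu = np.asarray(true_means, dtype=float)
    return float(np.abs(s - mu).sum() / mu.sum())
```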
This was used to repeat the experiments for larger graphs and provided further convincing evidence of polynomial mixing time. Our second contribution is the design of the experiments used to evaluate the mixing time of the Markov Chain. In practice, the stopping time for sampling is often chosen without justification. Autocorrelation is a simple metric to use and, when used correctly, can provide strong evidence that a chain is close to its stationary distribution.

Now, we must fill in the rest of J so that D is integer-valued for all degrees. One way is to note that we should have 4 × 20 = 80 edge end points of degree 20. We can sum the number of currently allocated edges with one end point of degree 20, call this x, and set J_{1,20} = 80 − x. There are many other ways of consistently completing J, such as assigning as many edges as possible to the K × K and L × L entries, like J_{20,21}; this results in a denser graph. For the synthetic graph used in this paper, we completed J by adding all remaining edges as (1, 20), (1, 21), etc. We chose this because it was simple to verify and made it easy to ignore the edges that were not of interest.
9,108
1103.4875
2952271679
One of the most influential recent results in network analysis is that many natural networks exhibit a power-law or log-normal degree distribution. This has inspired numerous generative models that match this property. However, more recent work has shown that while these generative models do have the right degree distribution, they are not good models for real-life networks due to their differences on other important metrics like conductance. We believe this is, in part, because many of these real-world networks have very different joint degree distributions, i.e., the probability that a randomly selected edge will be between nodes of degree k and l. Assortativity is a sufficient statistic of the joint degree distribution, and it has been previously noted that social networks tend to be assortative, while biological and technological networks tend to be disassortative. We suggest that understanding the relationship between network structure and the joint degree distribution of graphs is an interesting avenue of further research. An important tool for such studies is an algorithm that can generate random instances of graphs with the same joint degree distribution. This is the main topic of this paper, and we study the problem from both a theoretical and a practical perspective. We provide an algorithm for constructing simple graphs from a given joint degree distribution, and a Monte Carlo Markov Chain method for sampling them. We also show that the state space of simple graphs with a fixed joint degree distribution is connected via end-point switches. We empirically evaluate the mixing time of this Markov Chain using experiments based on the autocorrelation of each edge. These experiments show that our Markov Chain mixes quickly on real graphs, allowing for utilization of our techniques in practice.
There are also sequential sampling methods that will construct a graph with a given degree distribution. Some of these are based on the necessary and sufficient Erdős–Gallai conditions for a degree sequence to be graphical @cite_35 , while others follow the method of Steger and Wormald @cite_60 @cite_31 @cite_16 @cite_25 @cite_32 . These combine the construction and sampling parts of the problem and can be quite fast. The current best work can sample graphs where @math in @math time @cite_60 .
{ "abstract": [ "Random graphs with a given degree sequence are a useful model capturing several features absent in the classical Erdős–Rényi model, such as dependent edges and non-binomial degrees. In this paper, we use a characterization due to Erdős and Gallai to develop a sequential algorithm for generating a random labeled graph with a given degree sequence. The algorithm is easy to implement and allows surprisingly efficient sequential importance sampling. Applications are given, including simulating a biological network and estimating the number of graphs with a given degree sequence. 1. Introduction. Random graphs with given vertex degrees have recently attracted great interest as a model for many real-world complex networks, including the World Wide Web, peer-to-peer networks, social networks, and biological networks. Newman (58) contains an excellent survey of these networks, with extensive references. A common approach to simulating these systems is to study (empirically or theoretically) the degrees of the vertices in instances of the network, and then to generate a random graph with the appropriate degrees. Graphs with prescribed degrees also appear in random matrix theory and string theory, which can call for large simulations based on random k-regular graphs. Throughout, we are concerned with generating simple graphs, i.e., no loops or multiple edges are allowed (the problem becomes considerably easier if loops and multiple edges are allowed). The main result of this paper is a new sequential importance sampling algorithm for generating random graphs with a given degree sequence. The idea is to build up the graph sequentially, at each stage choosing an edge from a list of candidates with probability proportional to the degrees. Most previously studied algorithms for this problem sometimes either get stuck or produce loops or multiple edges in the output, which is handled by starting over and trying again. 
Often for such algorithms, the probability of a restart being needed on a trial rapidly approaches 1 as the degree parameters grow, resulting in an enormous number of trials being needed on average to obtain a simple graph. A major advantage of our algorithm is that it never gets stuck. This is achieved using the Erdős–Gallai characterization, which is explained in Section 2, and a carefully chosen order of edge selection.", "", "We present a practical algorithm for generating random regular graphs. For all d growing as a small power of n, the d-regular graphs on n vertices are generated approximately uniformly at random, in the sense that all d-regular graphs on n vertices have in the limit the same probability as n → ∞. The expected runtime for these ds is O(nd^2).", "", "Abstract The paper studies effective approximate solutions to combinatorial counting and uniform generation problems. Using a technique based on the simulation of ergodic Markov chains, it is shown that, for self-reducible structures, almost uniform generation is possible in polynomial time provided only that randomised approximate counting to within some arbitrary polynomial factor is possible in polynomial time. It follows that, for self-reducible structures, polynomial time randomised algorithms for counting to within factors of the form (1 + n^{-β}) are available either for all β ∈ R or for no β ∈ R. A substantial part of the paper is devoted to investigating the rate of convergence of finite ergodic Markov chains, and a simple but powerful characterisation of rapid convergence for a broad class of chains based on a structural property of the underlying graph is established. 
Finally, the general techniques of the paper are used to derive an almost uniform generation procedure for labelled graphs with a given degree sequence which is valid over a much wider range of degrees than previous methods: this in turn leads to randomised approximate counting algorithms for these graphs with very good asymptotic behaviour.", "Abstract An algorithm is presented which randomly selects a labelled graph with specified vertex degrees from a distribution which is arbitrarily close to uniform. The algorithm is based on simulation of a rapidly convergent stochastic process, and runs in polynomial time for a wide class of degree sequences, including all regular sequences and all n-vertex sequences with no degree exceeding √(n/2). The algorithm can be extended to cover the selection of a graph with given degree sequence which avoids a specified set of edges. One consequence of this extension is the existence of a polynomial-time algorithm for selecting an f-factor in a sufficiently dense graph. A companion algorithm for counting degree-constrained graphs is also presented; this algorithm has exactly the same range of validity as the one for selection." ], "cite_N": [ "@cite_35", "@cite_60", "@cite_32", "@cite_31", "@cite_16", "@cite_25" ], "mid": [ "1978479505", "", "2078391137", "", "2072211488", "2087275015" ] }
Constructing and Sampling Graphs with a Prescribed Joint Degree Distribution
Introduction

Graphs are widely recognized as the standard modeling language for many complex systems, including physical infrastructure (e.g., Internet, electric power, water, and gas networks), scientific processes (e.g., chemical kinetics, protein interactions, and regulatory networks in biology, from the gene level up through ecological systems), and relational networks (e.g., citation networks, hyperlinks on the web, and social networks). The broader adoption of graph models over the last decade, along with the growing importance of associated applications, calls for descriptive and generative models for real networks. What is common among these networks? How do they differ statistically? Can we quantify the differences among them? Answering these questions requires understanding the topological properties of these graphs, which has led to numerous studies of many "real-world" networks, from the Internet to social, biological and technological networks [18]. Perhaps the most prominent theme in these studies is the skewed degree distribution: real-world graphs have a few vertices with very high degree and many vertices with small degree. There is some dispute as to the exact distribution, some calling it power-law [5,18] and some log-normal [4,51,41,8], but all agree that it is 'heavy-tailed' [17,54]. The ubiquity of this distribution has been a motivator for many different generative models and is often used as a metric for the quality of a model. Models like preferential attachment [5], the copying model [31], the Barabasi hierarchical model [53], the forest-fire model, the Kronecker graph model [33], geometric preferential attachment [19] and many more [34,59,11] study the expected degree distribution and use the results to argue for the strength of their method.
Many of these models also match other observed features, such as small diameter or densification [28]. However, recent studies comparing the generative models with real networks on metrics like conductance [35], core numbers [13] and clustering coefficients [30] show that the models do not match other important features of the networks. The degree distribution alone does not define a graph: McKay's estimate [39] shows that there may be exponentially many graphs with the same degree distribution. Nevertheless, models based on the degree distribution are commonly used to compute statistically significant structures in a graph. For example, the modularity metric for community detection in graphs [43,42] assumes a null hypothesis for the structure of a graph based on its degree distribution, namely that the probability of an edge between vertices v_i and v_j is proportional to d_i d_j, where d_i and d_j are the degrees of v_i and v_j. The modularity of a group of vertices is defined by how much its structure deviates from the null hypothesis, and a higher modularity signifies a better community. The key point here is that the null hypothesis is based solely on the degree distribution and therefore might be incorrect. Degree-distribution-based models are also used to predict graph properties [40,2,15,14,16], to benchmark [32], and to analyze the expected run time of algorithms [7]. These studies improve our understanding of the relationship between the degree distribution and the structure of a graph, and their shortcomings give insight into what other features besides the degree distribution would give us a better grasp of a graph's structure. For example, the degree assortativity of a network measures whether nodes attach to similar or dissimilar vertices. This is not specified by the degree distribution, yet studies have shown that social networks tend to be assortative, while biological and technological networks tend to be disassortative [47,46].
An example of recent work using assortativity is [30]. In that study, a high assortativity is assumed for connections that generate high clustering coefficients, and this, in addition to preserving the degree distribution, results in very realistic instances of real-world graphs. Another study that has looked at the joint degree distribution is dK-graphs [38]. The authors propose modeling a graph by looking at the degree distribution of all size-d subsets of vertices, where d = 1 gives the vertex degrees, d = 2 the edge degrees (the joint degree distribution), d = 3 the degree distribution of triangles and wedges, and so on. It is an interesting idea, as the nK distribution clearly contains all information about the graph, but it is far too detailed to serve as a model. At what value of d does the additional information become less useful? One way to enhance results based on the degree distribution is to use a more restrictive feature such as the joint degree distribution. Intuitively, if the degree distribution of a graph describes the probability that a vertex selected uniformly at random will be of degree k, then its joint degree distribution describes the probability that a randomly selected edge will be between nodes of degree k and l. We will use a slightly different concept, the joint degree matrix, where the total numbers of nodes and edges are specified and the number of edges between each pair of degrees is counted. Note that while the joint degree distribution uniquely defines the degree distribution of a graph up to isolated nodes, graphs with the same degree distribution may have very different joint degree distributions. We are not proposing that the joint degree distribution be used as a stand-alone descriptive model for generating networks.
We believe that understanding the relationship between the joint degree distribution and the network structure is important, and that having the capability to generate random instances of graphs with the same joint degree distribution will help enable this goal. Experiments on real data are valuable, but drawing conclusions based only on limited data may be misleading, as the graphs may all be biased the same way. For a more rigorous study, we need a sampling algorithm that can generate random instances in a reasonable time, which is the motivation of this work. The primary questions investigated by this paper are: Given a joint degree distribution and an integer n, does the joint degree distribution correspond to a real labeled graph? If so, can one construct a graph of size n with that joint degree distribution? Is it possible to construct or generate a uniformly random graph with that same joint degree distribution? We address these problems from both a theoretical and an empirical perspective. In particular, being able to uniformly sample graphs allows one to empirically evaluate which other graph features, like diameter or eigenvalues, are correlated with the joint degree distribution.

Contributions

We make several contributions to this problem, both theoretically and experimentally. First, we discuss the necessary and sufficient conditions for a given joint degree vector to be graphical, and we prove that these conditions are sufficient by providing a new constructive algorithm. Next, we introduce a new configuration model for the joint degree matrix problem, a natural extension of the configuration model for the degree sequence problem. Finally, using this configuration model, we develop Markov Chains for sampling both pseudographs and simple graphs with a fixed joint degree matrix. A pseudograph allows multiple edges between two nodes as well as self-loops.
We prove the correctness of both chains, and the mixing time of the pseudograph chain, by using previous work. The mixing time of the simple graph chain is experimentally evaluated using autocorrelation. In practice, Monte Carlo Markov Chains are a very popular method for sampling from difficult distributions. However, it is often very difficult to theoretically evaluate the mixing time of a chain, and many practitioners simply stop the chain after 5,000, 10,000 or 20,000 iterations without much justification. Our experimental design with autocorrelation provides a set of statistics that can be used to justify the choice of a stopping point. Further, we show one way that the autocorrelation technique can be adapted from real-valued samples to combinatorial samples.

Notation and Definitions

Formally, the degree distribution of a graph is the probability that a node chosen at random will be of degree k. Similarly, the joint degree distribution is the probability that a randomly selected edge will have end points of degree k and l. In this paper, we are concerned with constructing graphs that exactly match these distributions, so rather than probabilities we will use a counting definition below and call it the joint degree matrix. In particular, we are concerned with generating simple graphs, which contain no multiple edges or self-loops; any graph that may have multiple edges or self-loops will be referred to as a pseudograph. A generic degree vector will be denoted by D.

Definition 2. The joint degree matrix (JDM) J(G) of a graph G is a matrix where J(G)_{k,l} is exactly the number of edges between nodes of degree k and degree l in G.

A generic joint degree matrix will be denoted by J. Given a joint degree matrix J, we can recover the number of edges in the graph as m = ∑_{k=1}^∞ ∑_{l=k}^∞ J_{k,l}. We can also recover the degree vector as D_k = (1/k)(J_{k,k} + ∑_{l=1}^∞ J_{k,l}).
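These recovery formulas can be checked mechanically; a sketch (ours, not the paper's code), with J a dict mapping degree pairs (k, l), k ≤ l, to edge counts:

```python
def jdm_stats(J):
    """Recover the edge count m, degree vector D, and node count n from a
    JDM. A degree-k node supplies k end points, and J_{k,k} edges supply
    two degree-k end points each, hence k*D_k = J_{k,k} + sum_l J_{k,l}."""
    m = sum(J.values())
    degrees = set(k for kl in J for k in kl)
    D = {}
    for k in degrees:
        ends = J.get((k, k), 0)  # counted once here, once more in the loop
        for (a, b), c in J.items():
            if a == k or b == k:
                ends += c
        assert ends % k == 0, "D_k must be an integer"
        D[k] = ends // k
    n = sum(D.values())
    return m, D, n
```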
The term J_{k,k} is added twice because kD_k counts the edge end points of degree k, and each edge in J_{k,k} contributes two such end points. The number of nodes is then n = ∑_{k=1}^∞ D_k. This count does not include any degree-0 vertices, as these have no edges in the joint degree matrix. Given n and m, we can easily recover the degree distribution and joint degree distribution: P(k) = D_k / n, while P(k, l) = J_{k,l} / m. Note that P(k) is not quite the marginal of P(k, l), although it is closely related.

The Joint Degree Matrix Configuration Model

We propose a new configuration model for the joint degree distribution problem. Given J and its corresponding D, we create k labeled mini-vertices for every vertex of degree k. In addition, for every edge with end points of degree k and l, we create two labeled mini-end points, one of class k and one of class l. We connect all degree-k mini-vertices to the class-k mini-end points. This forms a complete bipartite graph for each degree, and each of these forms a connected component that is disconnected from all other components. We call each of these components the "k-neighborhood". Notice that there are kD_k mini-vertices of degree k, and kD_k = J_{k,k} + ∑_l J_{k,l} corresponding mini-end points in each k-neighborhood. This is pictured in Figure 2. Take any perfect matching in this graph. If we merge each pair of mini-end points that correspond to the same edge, we obtain some pseudograph that has exactly the desired joint degree matrix. This observation forms the basis of our sampling method.

Constructing Graphs with a Given Joint Degree Matrix

The Erdős–Gallai condition is a necessary and sufficient condition for a degree sequence to be realizable as a simple graph.

Theorem 1 (Erdős–Gallai). A degree sequence d = (d_1, d_2, ..., d_n), sorted in non-increasing order, is graphical if and only if for every k ≤ n, ∑_{i=1}^k d_i ≤ k(k − 1) + ∑_{i=k+1}^n min(d_i, k).
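Theorem 1 translates directly into a check; a sketch (ours), which also includes the usual even-degree-sum requirement that the theorem statement above leaves implicit:

```python
def erdos_gallai(d):
    """True iff the degree sequence d is graphical: the degree sum is even
    and, for every k, sum_{i<=k} d_i <= k(k-1) + sum_{i>k} min(d_i, k)."""
    d = sorted(d, reverse=True)
    if sum(d) % 2:
        return False
    n = len(d)
    for k in range(1, n + 1):
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(di, k) for di in d[k:])
        if lhs > rhs:
            return False
    return True
```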
The necessity of this condition comes from noting that a set of k vertices can contain at most k(k − 1)/2 internal edges, and each vertex v outside the set can contribute at most min{d(v), k} entering edges. The condition considers each subset of highest-degree vertices and compares the degree requirements of those nodes against the available edges; if the requirement exceeds what is available, the sequence cannot be graphical. The sufficiency is shown via the constructive Havel–Hakimi algorithm [23,22]. The existence of the Erdős–Gallai condition inspires us to ask whether similar necessary and sufficient conditions exist for a joint degree matrix to be graphical. The following necessary and sufficient conditions were independently studied by Amanatidis et al. [3].

Theorem 2. Let J be given and D be the associated degree vector. J can be realized as a simple graph if and only if (1) D_k is integer-valued for all k, and (2) for all k ≠ l, J_{k,l} ≤ D_k D_l, and for each k, J_{k,k} ≤ (D_k choose 2).

The necessity of these conditions is clear. The first condition requires that there be an integer number of nodes of each degree value. The others require that the number of edges between nodes of degree k and l (or k and k) not exceed the total possible number of k-to-l edges in a simple graph defined by the marginal degree sequences. Amanatidis et al. show sufficiency through a constructive algorithm; we now introduce a new algorithm that runs in O(m) time. The algorithm proceeds by building a nearly regular graph for each class of edges J_{k,l}. Assume that k ≠ l for simplicity. Each of the D_k nodes of degree k receives ⌊J_{k,l}/D_k⌋ edges, while J_{k,l} mod D_k of them receive one extra edge. Similarly, the degree-l nodes receive ⌊J_{k,l}/D_l⌋ edges, with J_{k,l} mod D_l of them receiving one extra. We can then construct a simple bipartite graph with this degree sequence. This can be done in time linear in the number of edges using queues, as discussed after Lemma 1.
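Theorem 2 likewise yields a direct feasibility check; a sketch (ours), with J keyed by pairs (k, l), k ≤ l, and D the degree vector (possibly fractional, to exercise condition (1)):

```python
from math import comb

def jdm_is_graphical(J, D):
    """Theorem 2: J is realizable as a simple graph iff every D_k is a
    non-negative integer and no J_{k,l} exceeds the number of potential
    (k, l) edges: D_k * D_l for k != l, C(D_k, 2) for k == l."""
    if any(Dk < 0 or Dk != int(Dk) for Dk in D.values()):
        return False
    for (k, l), c in J.items():
        cap = comb(int(D[k]), 2) if k == l else int(D[k]) * int(D[l])
        if c > cap:
            return False
    return True
```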
If k = l, the only differences are that the graph is no longer bipartite and there are 2J_{k,k} end points to be distributed among D_k nodes. To find a simple nearly regular graph, one can use the Havel–Hakimi algorithm [22,23] in O(J_{k,k}) time, using the degree sequence of the graph as input. We must show that there is a way to combine all of these nearly regular graphs without violating any degree constraints. Let d = d_1, d_2, ..., d_n be the degree sequence from D, sorted in non-increasing order. Let d̂_v denote the residual degree of a vertex v, i.e., d_v minus the number of edges currently incident to v, and let D̂_k denote the number of nodes of degree k that have non-zero residual degree, i.e., D̂_k = ∑_{j : d_j = k} 1(d̂_j ≠ 0).

Algorithm 1: Greedy Graph Construction with a Fixed JDM. Input: J, n, m, D
1: for k = n, ..., 1 and l = k, ..., 1 do
2:   if k ≠ l then
3:     Let a = J_{k,l} mod D_k and b = J_{k,l} mod D_l
4:     Let x_1, ..., x_a = ⌊J_{k,l}/D_k⌋ + 1 and x_{a+1}, ..., x_{D_k} = ⌊J_{k,l}/D_k⌋; let y_1, ..., y_b = ⌊J_{k,l}/D_l⌋ + 1 and y_{b+1}, ..., y_{D_l} = ⌊J_{k,l}/D_l⌋
5:     Construct a simple bipartite graph B with degree sequence x_1, ..., x_{D_k}, y_1, ..., y_{D_l}
6:   else
7:     Let c = 2J_{k,k} mod D_k
8:     Let x_1, ..., x_c = ⌊2J_{k,k}/D_k⌋ + 1 and x_{c+1}, ..., x_{D_k} = ⌊2J_{k,k}/D_k⌋
9:     Construct a simple graph B with degree sequence x_1, ..., x_{D_k}
10:  end if
11:  Place B into G by matching the nodes of degree k with higher residual degree to x_1, ..., x_a and those of degree l with higher residual degree to y_1, ..., y_b; the remaining vertices of B can be matched in any way with those in G of degree k and l
12:  Update the residual degrees of each degree-k and degree-l node
13: end for

To combine the nearly uniform subgraphs, we start with the largest-degree nodes and the corresponding largest degree classes. It is not necessary to start with the largest, but it simplifies the proof.
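The queue-based balanced assignment behind line 4 (and Lemma 1 below) can be sketched as follows (our illustration); popping from the front and reinserting at the back keeps per-vertex counts within 1 of each other, even across successive calls:

```python
from collections import deque

def assign_edge_counts(queue, x):
    """Distribute x edge end points over the vertices in `queue`. The queue
    position persists across calls, which is what maintains the balanced
    degree invariant |d_u - d_v| <= 1 for same-degree vertices."""
    counts = {v: 0 for v in queue}
    for _ in range(x):
        v = queue.popleft()
        counts[v] += 1
        queue.append(v)
    return counts
```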
First, we note that after every iteration, the residual joint degree matrix remains feasible if for all k ≠ l, Ĵ_{k,l} ≤ D̂_k D̂_l, and for all k, Ĵ_{k,k} ≤ (D̂_k choose 2). We will prove that Algorithm 1 can always satisfy these feasibility conditions. First, we note a fact.

Observation 1. For all k, ∑_l Ĵ_{k,l} + Ĵ_{k,k} = ∑_{j : d_j = k} d̂_j.

This follows directly from the fact that the left-hand side counts all of the degree-k end points still needed by Ĵ, while the right-hand side counts the available residual end points from the degree sequence. Next, we note that if all residual degrees of degree-k nodes are either 0 or 1, then:

Observation 2. If d̂_j = 0 or 1 for all j such that d_j = k, then ∑_{j : d_j = k} d̂_j = ∑_{j : d_j = k} 1(d̂_j ≠ 0) = D̂_k.

Lemma 1. After every iteration, for every pair of vertices u, v of any degree k, |d̂_u − d̂_v| ≤ 1.

Amanatidis et al. refer to Lemma 1 as the balanced degree invariant. It is most easily proven by considering the vertices of degree k as a queue. If there are x edges to be assigned, we can view the process of deciding how many edges each vertex receives as popping a vertex from the front of the queue and reinserting it at the back, x times; each vertex is assigned as many edges as the number of times it was popped. The next time we assign edges with end points of degree k, we start the queue at the position where we previously ended. Clearly, no vertex can be popped twice before all other vertices have been popped at least once.

Lemma 2. The above algorithm can always greedily produce a graph that satisfies J, provided J satisfies the initial necessary conditions.

Proof. The key observation about this algorithm is that it maximizes D̂_k D̂_l by ensuring that the residual degrees of any two vertices of the same degree never differ by more than 1. By maximizing the number of available vertices, we cannot get stuck adding a self-loop or multiple edge.
From this, we gather that if, for some degree k, there exists a vertex j with d̂_j = 0, then the residuals of all vertices of degree k must be either 0 or 1. This means that ∑_{j : d_j = k} d̂_j = D̂_k ≥ Ĵ_{k,l} for every other l, by Observation 2. From the initial conditions, we have J_{k,l} ≤ D_k D_l for every k, l, and D̂_k = D_k provided that all degree-k vertices have non-zero residuals. Otherwise, for any unprocessed pair, Ĵ_{k,l} ≤ min{D̂_k, D̂_l} ≤ D̂_k D̂_l. For the (k, k) case, it is clear that Ĵ_{k,k} ≤ D̂_k ≤ (D̂_k choose 2). Therefore, the residual joint degree matrix and degree sequence always remain feasible, and the algorithm can always continue.

A natural question is whether, since the joint degree distribution contains all of the information in the degree distribution, the joint degree matrix conditions easily imply the Erdős–Gallai condition. This can easily be shown to be true.

Corollary 1. The necessary conditions for a joint degree matrix to be graphical imply that the associated degree vector satisfies the Erdős–Gallai condition.

Uniformly Sampling Graphs with Monte Carlo Markov Chain (MCMC) Methods

We now turn our attention to uniformly sampling graphs with a given graphical joint degree matrix using MCMC methods. We return to the joint degree matrix configuration model. We can obtain a starting configuration for any graphical joint degree matrix by using Algorithm 1. This configuration consists of one complete bipartite component for each degree, with a perfect matching selected. The transitions select an end point uniformly at random, then select any other end point in its degree neighborhood, and swap the two edges that these neighbor. In Figure 2, this is equivalent to selecting one of the square end points uniformly at random, selecting another uniformly at random from the same connected component, and swapping the edges. A more complex version of this chain checks that the swap does not create a multiple edge or self-loop.
Formally, the transition function is a randomized algorithm given by Algorithm 2, whose final steps are:
4: If E ∪ {(e_1, v_2), (e_2, v_1)} \ {(e_1, v_1), (e_2, v_2)} contains a multi-edge or self-loop, reject
5: E ← E ∪ {(e_1, v_2), (e_2, v_1)} \ {(e_1, v_1), (e_2, v_2)}

There are two chains described by Algorithm 2. The first, A, does not include step (4), and its state space is all pseudographs with the desired joint degree matrix. The second, B, includes step (4) and only transitions to and from simple graphs with the correct joint degree matrix. We remind the reader of the standard result that any irreducible, aperiodic Markov Chain with symmetric transitions converges to the uniform distribution over its state space. Both A and B are aperiodic, due to the self-loop at each state. From the description of the transition function, we can see that A is symmetric. This is less clear for the transition function of B: is it possible for two connected configurations to have a different number of feasible transitions in a given degree neighborhood? We show in the following lemma that it is not.

Proof. Let C_1 and C_2 be two neighboring configurations in B. This means that they differ in exactly 4 edges in exactly 1 degree neighborhood. Let this degree be k, and let these edges be e_1 v_1 and e_2 v_2 in C_1, whereas they are e_1 v_2 and e_2 v_1 in C_2. We want to show that C_1 and C_2 have exactly the same number of feasible k-degree swaps. Without loss of generality, let e_x, e_y be a swap that is prevented by e_1 in C_1 but allowed in C_2. This must mean that e_x neighbors v_1 and e_y neighbors some v_y ≠ v_1, v_2. Notice that the swap e_1, e_x is currently feasible. However, in C_2, it is now infeasible to swap e_1, e_x, even though e_x and e_y are now possible. If we consider the other cases, e.g., e_x, e_y prevented by both e_1 and e_2, then after swapping e_1 and e_2, the swap e_x, e_y is still infeasible.
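One transition of chain B can be sketched as follows (our own illustration on an edge-list representation; all names are ours, not the paper's). It selects an end point uniformly, then another end point in the same degree class, swaps them between their edges, and applies step (4)'s rejection:

```python
import random

def chain_b_step(edges, degree, rng):
    """edges: list of (u, v) tuples of a simple graph; degree: node -> its
    (fixed) degree. Returns the next state; rejected moves return `edges`."""
    edge_set = {frozenset(e) for e in edges}
    i = rng.randrange(len(edges))
    side = rng.randrange(2)
    v1 = edges[i][side]
    # every other end point in the same degree neighborhood
    cands = [(j, s) for j, e in enumerate(edges) for s in (0, 1)
             if degree[e[s]] == degree[v1] and (j, s) != (i, side)]
    j, s = rng.choice(cands)
    if j == i:  # both end points lie on the same edge: nothing to swap
        return edges
    v2 = edges[j][s]
    new_i = (v2, edges[i][1]) if side == 0 else (edges[i][0], v2)
    new_j = (v1, edges[j][1]) if s == 0 else (edges[j][0], v1)
    # step (4): reject if the swap creates a self-loop or multiple edge
    if new_i[0] == new_i[1] or new_j[0] == new_j[1]:
        return edges
    remaining = edge_set - {frozenset(edges[i]), frozenset(edges[j])}
    if (frozenset(new_i) in remaining or frozenset(new_j) in remaining
            or frozenset(new_i) == frozenset(new_j)):
        return edges
    out = list(edges)
    out[i], out[j] = new_i, new_j
    return out
```

Because swapped end points share a degree class and rejected moves return the current state, every state along the run keeps the same joint degree matrix and remains a simple graph.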
If swapping e_1 and e_2 makes something feasible in C_1 infeasible in C_2, then we can use the above argument in reverse. This means that the number of feasible swaps in a k-neighborhood is invariant under k-degree swaps. The remaining important question is the connectivity of the state space of these chains. It is simple to show that the state space of A is connected. We note that it is a standard result that all perfect matchings in a complete bipartite graph are connected via edge swaps [58]. Moreover, the space of pseudographs can be seen exactly as the set of all perfect matchings over the disconnected complete bipartite degree neighborhoods in the joint degree matrix configuration model. The connectivity result is much less obvious for B. We adapt a result of Taylor [58] that all graphs with a given degree sequence are connected via edge swaps in order to prove this. The proof is inductive and follows the structure of Taylor's proof. Theorem 3. Given two simple graphs, G_1 and G_2, of the same size with the same joint degree matrix, there exists a series of endpoint rewirings to transform G_1 into G_2 (and vice versa) where every intermediate graph is also simple. Proof. This proof will proceed by induction on the number of nodes in the graph. The base case is when there are 3 nodes. There are 3 realizable JDMs. Each is uniquely realizable, so there are no switchings available. Assume that the claim is true for n = k, and let G_1 and G_2 have k + 1 vertices. Label the nodes of G_1 and G_2 as v_1 · · · v_{k+1} such that deg(v_1) ≥ deg(v_2) ≥ · · · ≥ deg(v_{k+1}). Our goal will be to show that both graphs can be transformed such that v_1 neighbors the same nodes in each graph, with all transitions passing through simple graphs. Then we can remove v_1 to create G'_1 and G'_2, each with n − 1 nodes and identical JDMs. By the inductive hypothesis, these can be transformed into one another, and the result follows.
Fig. 4: The dotted edges represent the troublesome edges that we may need to swap out before we can swap v_1 and v_c. Fig. 5: The disk is v_1; the crosses are e_1 · · · e_{d_1}.
We will break the analysis into two cases. For both cases, we will have a set of target edges, e_1, e_2, · · · e_{d_1}, that we want v_1 to be connected to. Without loss of generality, we let this set be the edges that v_1 currently neighbors in G_2. We assume that the edges are ordered in reverse lexicographic order by the degrees of their endpoints. This will guarantee that the resulting construction for v_1 is graphical and that we have a non-increasing ordering on the requisite endpoints. Now, let k_i denote the endpoint in G_2 for edge e_i that isn't v_1. Case 1) For the first case, we will assume that v_1 is already the endpoint of all edges e_1, e_2, · · · e_{d_1}, but that the k_i may not all be assigned correctly, as in Figure 5. Assume that e_1, e_2, · · · e_{i−1} are all edges (v_1, k_1) · · · (v_1, k_{i−1}) and that e_i is the first that isn't matched to its appropriate k_i. Call the current other endpoint of e_i u_i. We know that deg(k_i) = deg(u_i) and that k_i currently neighbors deg(k_i) other nodes, Γ(k_i). We have two cases here. One is that v_1 ∈ Γ(k_i), but via edge f instead of e_i. Here, we can swap v_1 on the endpoints of f and e_i so that the edge v_1 − e_i − k_i is in the graph. f cannot be an e_j where j < i because those edges have their correct endpoints k_j assigned. This is demonstrated in Figure 6. The other case is that v_1 ∉ Γ(k_i). If this is the case, then there must exist some x ∈ Γ(k_i) \ Γ(u_i), because d(u_i) = d(k_i) and u_i neighbors v_1 while k_i doesn't. Therefore, we can swap the edges v_1 − e_i − u_i and x − f − k_i to v_1 − e_i − k_i and x − f − u_i without creating any self-loops or multiple edges. This is demonstrated in Figure 6.
Fig. 7: The two parts of Case (2).
Therefore, we can swap all of the correct endpoints onto the correct edges. Case 2) For the second case, we assume that the edges e_1, · · · e_{d_1} are distributed over l nodes of degree d_1. We want to show that we can move all of the edges e_1 · · · e_{d_1} so that v_1 is an endpoint. If this is achievable, we have exactly Case 1. Let e_1, · · · e_{i−1} be currently matched to v_1, and let e_i be matched to some x such that deg(x) = d_1. Let f be an edge currently matched to v_1 that is not part of e_1 · · · e_{d_1}, and let its other endpoint be u_f. Let the other endpoint of e_i be u_x, as in Figure 7. We now have several initial cases that are all easy to handle. First, if v_1, x, u_x, u_f are all distinct and (v_1, u_x) and (x, u_f) are not edges, then we can easily swap v_1 and x so that the edges go from v_1 − f − u_f and x − e_i − u_x to v_1 − e_i − u_x and x − f − u_f. Next, if u_f = u_x, then we can simply swap v_1 onto e_i and x onto f and, again, v_1 will neighbor e_i. This will not create any self-loops or multiple edges because the resulting graph is isomorphic. These situations are both shown in Figure 7. The next case is that x = u_f. If we try to swap v_1 onto e_i, then we create a self-loop from x to x via f. Instead, we note that since the JDM is graphical, there must exist a third vertex y of the same degree as v_1 and x that does not neighbor x. Now, y neighbors an edge g, and we can swap x − f and y − g to x − g and y − f. The edges are now v_1 − f − y and x − e_i − u_x, and e_i can be swapped onto v_1 without conflict. The cases left to analyze are those where the nodes are all distinct and (v_1, u_x) or (x, u_f) are edges in the graph. We will analyze these separately. Case 2a) If (v_1, u_x) is an edge in the graph, then it must be so through some edge named g. Note that this means we have v_1 − g − u_x and x − e_i − u_x.
We can swap this to v_1 − e_i − u_x and x − g − u_x and have an isomorphic graph, provided that g is not some e_j where j < i. This is the top case in Figure 8. If g is some e_j, then it must be that u_x = k_j, which is distinct from k_i. Since deg(k_j) = deg(k_i), there must exist some edge h that k_i neighbors, with its other endpoint being y. There are again three cases: when y is distinct from x and v_1, when y = x, and when y = v_1. These are the bottom three rows illustrated in Figure 8. The first is the simplest. Here, we can assume that k_j does not neighbor y (because k_j neighbors v_1 and x, which k_i does not), so we can swap k_j onto h and k_i onto e_j. This has removed the offending edge, and we can now swap v_1 onto e_i and x onto f. When y = x, we first swap k_i onto e_j and k_j onto h. Next, we swap v_1 onto e_i and x onto f, as they no longer share an offending edge. Finally, when y = v_1, we use a sequence of three swaps. The first is k_i onto e_j and k_j onto h. The next is v_1 onto e_i and x onto h. Finally, we swap k_j back onto e_j and k_i onto e_i. Case 2b) If (x, u_f) is an edge in the graph, then it must be through some edge g such that x − g − u_f and x − e_i − u_x. Without loss of generality, assume that f is the only edge neighboring v_1 that isn't an e_j. Since f doesn't neighbor v_1 in G_2, there must either exist a w with deg(w) = deg(u_f) or a v_s with deg(v_s) = d(v_1). This relies critically upon the fact that f and g are the same class of edge. If there is a w, then it doesn't neighbor v_1 (or we can apply the above argument to find a w' that doesn't), and it must have some neighbor y ∈ Γ(w) \ Γ(u_f) through edge h. Therefore, we can swap u_f onto h and w onto f. This removes the offending edge, and we can now swap v_1 onto e_i and x onto f. If a v_s exists instead, then by the same argument, there exists some edge h with endpoint u_s such that v_s ∉ Γ(u_f) and u_s ∉ Γ(x). Therefore, we can swap v_s − h and x − g to v_s − g and x − h.
This again removes the troublesome edge and allows us to swap v_1 onto e_i. Therefore, given any node, a precise set of edges that it should neighbor, and a set of vertices that are the endpoints of those edges, we can use half-edge rewirings to transform any graph G into a graph G' that has this property, provided the set of edges is graphical. Now that we have shown that both A and B converge to the uniform distribution over their respective state spaces, the next question is how quickly this happens. Note that from the proof that the state space of B is connected, we can upper-bound the diameter of the state space by 3m. The diameter provides a lower bound on the mixing time. In the next section, we will empirically estimate the mixing time to be also linear in m.
Estimating the Mixing Time of the Markov Chain
The Markov chain A is very similar to one analyzed by Kannan, Tetali and Vempala [26]. We can use their canonical paths and analysis exactly to show that the mixing time is polynomial. This result follows directly from Theorem 3 of [26] for chain A. This is because the joint degree matrix configuration model can be viewed as |D| complete, bipartite, and disjoint components. These components should remain disjoint, so the Markov chain can be viewed as a 'meta-chain' which samples a component and then runs one step of the Kannan, Tetali and Vempala chain on that component. Even though the mixing time for this chain is provably polynomial, this upper bound is too large to be useful in practice. The analysis needed to bound the mixing time of chain B is significantly more complicated. One approach is to use the canonical path method to bound the congestion of this chain. The standard trick is to define a path from G_1 to G_2 that fixes the misplaced edges identified by G_1 ⊕ G_2 in a globally ordered way. However, this is difficult to apply to chain B because fixing a specific edge may not be atomic, i.e.
from the proof of Theorem 3 it may take up to 4 swaps to correctly connect a vertex with an endpoint if there are conflicts with the other degree neighborhoods. These swaps take place in other degree neighborhoods and are not local moves. Therefore, this introduces new errors that must be fixed but cannot be incorporated into G_1 ⊕ G_2. In addition, step (4) also prevents us from using path coupling as a proof of the mixing time. Given that bounding the mixing time of this chain seems to be difficult without new techniques or ideas, we use a series of experiments that substitute the autocorrelation time for the mixing time.
Autocorrelation Time
Autocorrelation time is a quantity that is related to the mixing time and is popular among physicists. We will give a brief introduction to this concept, and refer the reader to Sokal's lecture notes for further details and discussion [56]. The autocorrelation of a signal is the cross-correlation of the signal with itself at a given lag t. More formally, given a series of data X_i, where each X_i is drawn from the same distribution X with mean µ and variance σ^2, the autocorrelation function is R_X(t) = E[(X_i − µ)(X_{i−t} − µ)] / σ^2. Intuitively, the inherent problem with using a Markov chain sampling method is that successive states generated by the chain may be highly correlated. If we were able to draw independent samples from the stationary distribution, then the autocorrelation of that set of samples with itself would go to 0 as the number of samples increased. The autocorrelation time captures the size of the gaps between sampled states of the chain needed before the autocorrelation of this 'thinned' chain is very small. If the thinned chain has 0 autocorrelation, then it is sampled exactly from the stationary distribution.
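As a concrete illustration, R_X(t) can be estimated from a finite sequence of samples by plugging in the sample mean and variance; a minimal sketch (assuming a non-constant sequence and 0 ≤ t < n):

```python
def autocorrelation(xs, t):
    """Sample autocorrelation of the sequence xs at lag t, using the
    sample mean and variance as plug-in estimates for mu and sigma^2."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    # average of (x_i - mu)(x_{i-t} - mu) over the (n - t) valid pairs
    cov = sum((xs[i] - mu) * (xs[i - t] - mu) for i in range(t, n)) / (n - t)
    return cov / var
```

For a perfectly alternating 0/1 sequence this returns 1 at lag 0 and −1 at lag 1, matching the intuition that successive states are maximally (anti-)correlated.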
In practice, when estimating the autocorrelation from a finite number of samples, we do not expect it to go to exactly 0, but we do expect it to 'die away' as the number of samples and the gap increase. Definition 3. The exponential autocorrelation time is τ_{exp,X} = lim sup_{t→∞} t / (−log |R_X(t)|) [56]. Definition 4. The integrated autocorrelation time is τ_{int,X} = (1/2) ∑_{t=−∞}^{∞} R_X(t) = 1/2 + ∑_{t=1}^{∞} R_X(t) [56]. The difference between the exponential autocorrelation time and the integrated autocorrelation time is that the exponential autocorrelation time measures the time it takes for the chain to reach equilibrium after a cold start, or 'burn-in' time. The integrated autocorrelation time is related to the increase in the variance over the samples from the Markov chain as opposed to samples that are truly independent. Often, these measurements are the same, although this is not necessarily true. We can substitute the autocorrelation time for the mixing time because they are, in effect, measuring the same thing: the number of iterations that the Markov chain needs to run before the difference between the current distribution and the stationary distribution is small. We will use the integrated autocorrelation time estimate.
Experimental Design
We used the Markov chain B in two different ways. First, for each of the smaller datasets, we ran the chain for 50,000 iterations 15 times. We used this to calculate the autocorrelation values for each edge for each lag between 100 and 15,000 in multiples of 100. From this, we calculated the estimated integrated autocorrelation time, as well as the iteration time for the autocorrelation of each edge to drop under a threshold of 0.001. This is discussed in Section 6.4. We also replicated the experimental design of Raftery and Lewis [52].
Given our estimates of the autocorrelation time for each size of graph in Section 6.4, we ran the chain again for long enough to capture 10,000 samples, where each sample had x iterations of the chain between them. x was chosen to vary from much smaller than the estimated autocorrelation time to much larger. From these samples, we calculated the sample mean for each edge and compared it with the actual mean from the joint degree matrix. We looked at the total variational distance between the sample means and actual means and showed that the difference appears to be converging to 0. We chose the mean as an evaluation metric because we were able to calculate the true means theoretically. We are unaware of another similarly simple metric. We used the formulas for empirical evaluation of mixing time from page 14 of Sokal's survey [56]. In particular, we used the following:
• The sample mean is µ = (1/n) ∑_{i=1}^{n} x_i.
• The sample unnormalized autocorrelation function is Ĉ(t) = (1/(n−t)) ∑_{i=1}^{n−t} (x_i − µ)(x_{i+t} − µ).
• The natural estimator of R_X(t) is ρ̂(t) = Ĉ(t)/Ĉ(0).
• The estimator for τ_{int,X} is τ̂_int = (1/2) ∑_{t=−(n−1)}^{n−1} λ(t) ρ̂(t), where λ is a 'suitable' cutoff function.
For a sequence of length x, calculating the autocorrelation at gap t requires a dot product of two vectors of length x − t. Our experiments require that we calculate the autocorrelation for each possible edge in a graph for many lags. Thus running the full set of experiments requires O(|V|^2 x log x) time and is prohibitive when |V| is large. Note that x must necessarily be at least Θ(|E|) as well, since the mixing time cannot be sub-linear in the number of edges. In Section 6.3 we will discuss results on the smaller datasets (AdjNoun, Dolphins, Football, Karate, and LesMis) that suggest a more feasible method for estimating autocorrelation time for larger graphs. We use this method to evaluate the autocorrelation time for the larger graphs as well, and present all of the results together.
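The τ_int estimator above, with a hard cutoff λ(t) = 1 for |t| < cutoff, can be sketched as follows; here `rho` stands for any estimate of the normalized autocorrelation function (e.g. ρ̂(t) = Ĉ(t)/Ĉ(0)), and the symmetry ρ(−t) = ρ(t) folds the two-sided sum into a one-sided one:

```python
def integrated_autocorrelation_time(rho, cutoff):
    """tau_int estimate: 1/2 + sum of rho(t) for 1 <= t < cutoff,
    using rho(-t) = rho(t) to fold the two-sided sum.  `rho` is a
    callable giving the (estimated) normalized autocorrelation."""
    return 0.5 + sum(rho(t) for t in range(1, cutoff))
```

For truly independent samples (ρ(t) = 0 for t ≥ 1) this gives 1/2, and for a geometrically decaying ρ(t) = r^t it approaches 1/2 + r/(1 − r), so larger correlations directly inflate the required sampling gap.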
Rather than running the chain for 15,000 steps for the larger graphs, we selected more appropriate stopping conditions, generally 10|E|, based on the results for the smaller graphs.
Data Sets
We have used several publicly available datasets: Word Adjacencies [48], Les Miserables [29], American College Football [20], the Karate Club [63], the Dolphin Social Network [36], the C. Elegans Neural Network (celegans) [60,62], Power grid (power) [61], Astrophysics collaborations (astro-ph) [44], High-Energy Theory collaborations (hep-th) [45], Coauthorships in network science (netscience) [49], and a snapshot of the Internet from 2006 (as-22july) [50]. In the following, |V| is the number of nodes, |E| is the number of edges, and |J| is the number of non-zero entries in the joint degree matrix. Table 1: Details about the datasets; |V| is the number of nodes, |E| is the number of edges and |J| is the number of unique entries in J.
Relationship Between the Mean of an Edge and Autocorrelation
For each of the smaller graphs, AdjNoun, Dolphins, Football, Karate and LesMis, we ran the Markov chain 10 times for 50,000 iterations and collected an indicator variable for each potential edge. For each of these edges, and each run, we calculated the autocorrelation function for values of t between 100 and 15,000 in multiples of 100. For each edge, and each run, we looked at the t value where the autocorrelation function first dropped below the threshold of 0.001. We then plotted the mean of these values against the mean of the edge, i.e. if the edge connects vertices of degree d_i and d_j (where d_i ≠ d_j) then µ_e = J_{d_i,d_j}/(D_{d_i} D_{d_j}), and µ_e = J_{d_i,d_i}/C(D_{d_i}, 2) otherwise. The three most useful plots are given in Figures 10 and 11, as the other graphs did not contain a large range of mean values. From these results, we identified a potential relationship between µ_e and the time to pass under the threshold.
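The true edge means µ_e can be computed directly from the joint degree matrix; a sketch, where `J` is a dict-of-dicts of edge counts and `D` maps each degree to its vertex count (both hypothetical container choices, not the authors' data structures):

```python
from math import comb

def edge_mean(J, D, k, l):
    """True mean of the indicator for a (k, l) edge under a fixed joint
    degree matrix: J[k][l] / (D[k] * D[l]) for k != l, and
    J[k][k] / C(D[k], 2) for k == l."""
    if k != l:
        return J[k][l] / (D[k] * D[l])
    return J[k][k] / comb(D[k], 2)
```

Scanning all (k, l) pairs with this helper is enough to pick out the edges with µ_e near 0.5, which is how the subset of edges to analyze for the larger graphs can be selected.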
Unfortunately, none of our datasets contained a significant number of edges with larger µ_e values, i.e. between 0.5 and 1. In order to test this hypothesis, we designed a synthetic dataset that contained many edges with values of µ_e at i/20 for i = 1, · · · , 20. We describe the creation of this dataset in the appendix. The final dataset we created had 326 edges, 194 vertices and 21 distinct J entries. We ran the Markov chain 200 times for this synthetic graph. For each run, we calculated the threshold value for each edge. Figure 11 shows each edge's mean vs. its mean time for the autocorrelation value to pass under 0.001. We see that there is a roughly symmetric curve that obtains its maximum at µ_e = 0.5. This result suggests a way to estimate the autocorrelation time for larger graphs without repeating the entire experiment for every edge that could possibly appear. One can calculate µ_e for each edge from the JDM and sample edges with µ_e around 0.5. We use this method for selecting our subset of edges to analyze. In particular, we sampled about 300 edges from each of the larger graphs. For all of these except
Autocorrelation Values
For each dataset and each run we calculated the unnormalized autocorrelation values. For the smaller graphs, this entailed setting t to every value between 100 and 15,000 in multiples of 100. We randomly selected 1 run for each dataset and graphed the autocorrelation values for each of the edges. We present the data for the Karate and Dolphins datasets in Figures 12 and 13. For the larger graphs, we changed the starting and ending points based on the graph size. For example, netscience was analyzed from 2,000 to 15,000 in multiples of 100, while as-22july was analyzed from 1,000 to 500,000 in multiples of 1,000. Fig. 13: The exponential drop-off for Dolphins appears to end after 600 iterations. All of the graphs exhibit the same behavior.
We see an exponential drop-off initially, and then the autocorrelation values oscillate around 0. This behavior is due to the limited number of samples and a bias due to using the sample mean for each edge. If we ignore the noisy tail and estimate that the autocorrelation 'dies off' at the point where the mean absolute value of the autocorrelation approximately converges, then we can locate the 'elbow' in the graphs. This estimate for all graphs is given in Table 3 at the end of this section.
Estimated Integrated Autocorrelation Time
For each dataset and run, we calculated the estimated integrated autocorrelation time. For the datasets with fewer than 1,000 edges, we calculated the autocorrelation in lags of 100 from 100 to 15,000 for each dataset. For the larger ones, we used intervals that depended on the total size of the graph. We estimate ρ̂(t) as the size of the intervals times the sum of the values. The cut-off function we used for the smaller graphs was λ(t) = 1 if 0 < t < 15,000 and 0 otherwise. This value was calculated for each edge. In Table 2 we present the mean, maximum and minimum estimated integrated autocorrelation time for each dataset over the runs of the Markov chain, using three different methods. For each of the edges, we first calculated the mean, median and max estimated integrated autocorrelation value over the various runs. Then, for each of these three values for each edge, we calculated the max, mean and min over all edges. For each of the graphs, the data series representing the median and max have each had their x-values perturbed slightly for clarity. These values are graphed on a log-log scale plot. Further, we also present a graph showing the ratio of these values to the number of edges. The ratio plot, Figure 15, suggests that the autocorrelation time may be a linear function of the number of edges in the graph; however, the estimates are noisy due to the limited number of runs. All three metrics give roughly the same picture.
We note that there is much higher variance in the estimated autocorrelation time for the larger graphs. If we consider the evidence of the log-log plot and the ratio plot, we suspect that the autocorrelation time of this Markov chain is linear in the number of edges.
The Sample Mean Approaches the Real Mean for Each Edge
Given the results of the previous experiment estimating the integrated autocorrelation time, we next executed an experiment suggested by Raftery and Lewis [52]. First we note that for each edge e between degrees k and l, we know the true value of P(e ∈ G | G has J) exactly: it is J_{k,l}/(D_k D_l), or J_{k,k}/C(D_k, 2) if k = l. This is because there are D_k D_l potential (k, l) edges that can show up in any graph with a fixed J, and each graph has J_{k,l} of them. If we consider the graphs as being labeled, then we can see that each edge has an equal probability of showing up when we consider permutations of the orderings. Fig. 15: The ratio of the max, median and min values over the edges to the number of edges for the estimated integrated autocorrelation times. L to R in order of size: Karate, Dolphins, LesMis, AdjNoun, Football, celegans, netscience, power, hep-th, as-22july and astro-ph.
Thus, our experiment was to take samples at varying intervals and consider how the sample mean of each edge compared with our known theoretical mean. For the smaller graphs, we took 10,000 samples at varying gaps depending on our estimated integrated autocorrelation time and repeated this 10 times. Additionally, we saw that the total variational distance quickly converged to a small but non-zero value. We repeated this experiment with 20,000 samples and, for the two smallest graphs, Karate and Dolphins, we repeated the experiment with 5,000 and 40,000 samples. These results show that this error is due to the number of samples and not the sampler. For the graphs with more than 1,000 edges, each run resulted in 20,000 samples at varying gaps, and this was repeated 5 times.
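The comparison of per-edge sample means against the known theoretical means can be sketched as the normalized absolute error ∑_e |S_e − µ_e| / ∑_e µ_e, where S_e is the sample mean of edge e's indicator and µ_e its true mean under the JDM (a sketch of the evaluation metric, with dicts keyed by edge as an assumed representation):

```python
def mean_error(sample_means, true_means):
    """Normalized error sum_e |S_e - mu_e| / sum_e mu_e between per-edge
    sample means and the exact means implied by the joint degree matrix."""
    num = sum(abs(sample_means[e] - true_means[e]) for e in true_means)
    den = sum(true_means.values())
    return num / den
```

As the gap between retained samples grows past the autocorrelation time, this quantity should decay toward the residual level set by the finite number of samples.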
We present these results in Figures 18 through 28. If S_{e,g} is the sample mean for edge e and gap g, and µ_e is the true mean, then the graphed value is ∑_e |S_{e,g} − µ_e| / ∑_e µ_e. In all of the figures, the line runs through the median error for the runs, and the error bars are the maximum and minimum values. We note that the maximum and minimum are very close to the median, as they are within 0.05% for most intervals. These graphs imply that we are sampling uniformly after a gap of 175 for the Karate graph. For the Dolphins graph, we see very similar results, and note that the error becomes constant after a sampling gap of 400 iterations. For the larger graphs, we varied the gaps based on the graph size, and then focused on the area where the error appeared to be decreasing. Again, we see consistent results, although the residual error is higher. This is to be expected because there are more potential edges in these graphs, so we took relatively fewer samples per edge. A summary of the results can be found in Table 3. Based on the results in this table, our recommendation would be that running the Markov chain for 5m steps satisfies all running time estimates except for power's result for the maximum estimated integrated autocorrelation time. This estimate is significantly lower than the result for chain A that was obtained using the standard theoretical technique of canonical paths.
Conclusions and Future Work
This paper makes two primary contributions. The first is the investigation of Markov chain methods for uniformly sampling graphs with a fixed joint degree distribution. Previous work shows that the mixing time of A is polynomial, while our experiments suggest that the mixing time of B is also polynomial. The relationship between the mean of an edge and the autocorrelation values can be used to efficiently experiment with larger graphs by sampling edges with mean between 0.4 and 0.6 and repeating the analysis for just those edges.
This was used to repeat the experiments for larger graphs and to provide further convincing evidence of polynomial mixing time. Our second contribution is the design of the experiments to evaluate the mixing time of the Markov chain. In practice, it seems the stopping time for sampling is often chosen without justification. Autocorrelation is a simple metric to use and, when used correctly, can provide strong evidence that a chain is close to the stationary distribution.
Now, we must fill in the rest of J so that D is integer-valued for all degrees. One way is to note that we should have 4 × 20 = 80 edge endpoints of degree 20. We can sum the number of currently allocated edges with one endpoint of degree 20, call this x, and set J_{1,20} = 80 − x. There are many other ways of consistently completing J, such as assigning as many edges as possible to the K × K and L × L entries, like J_{20,21}. This results in a denser graph. For the synthetic graph used in this paper, we completed J by adding all remaining edges as (1, 20), (1, 21), etc. edges. We chose this because it was simple to verify, and it also made it easy to ignore the edges that were not of interest.
9,108
1103.4875
2952271679
One of the most influential recent results in network analysis is that many natural networks exhibit a power-law or log-normal degree distribution. This has inspired numerous generative models that match this property. However, more recent work has shown that while these generative models do have the right degree distribution, they are not good models for real-life networks due to their differences on other important metrics like conductance. We believe this is, in part, because many of these real-world networks have very different joint degree distributions, i.e. the probability that a randomly selected edge will be between nodes of degree k and l. Assortativity is a sufficient statistic of the joint degree distribution, and it has been previously noted that social networks tend to be assortative, while biological and technological networks tend to be disassortative. We suggest that understanding the relationship between network structure and the joint degree distribution of graphs is an interesting avenue of further research. An important tool for such studies are algorithms that can generate random instances of graphs with the same joint degree distribution. This is the main topic of this paper, and we study the problem from both a theoretical and practical perspective. We provide an algorithm for constructing simple graphs from a given joint degree distribution, and a Monte Carlo Markov Chain method for sampling them. We also show that the state space of simple graphs with a fixed joint degree distribution is connected via endpoint switches. We empirically evaluate the mixing time of this Markov Chain by using experiments based on the autocorrelation of each edge. These experiments show that our Markov Chain mixes quickly on real graphs, allowing for utilization of our techniques in practice.
Another vein of related work is that of who introduce the concept of @math -series @cite_21 @cite_17 . In this model, @math refers to the dimension of the distribution and @math is the joint degree distribution. They propose a heuristic for generating random @math -graphs for a fixed @math distribution via edge rewirings. However, their method can get stuck if there exists a degree in the graph for which there is only 1 node with that degree. This is because the state space is not connected. We provide a theoretically sound method of doing this. Finally, Newman also studies the problem of fixing an assortativity value, finding a joint degree distribution with that value, and then sampling a random graph with that distribution using Markov Chains @cite_50 @cite_36 . His Markov Chain starts at any graph with the correct degree distribution and converges to a pseudograph with the correct joint remaining degree distribution. By contrast, our work provides a theoretically sound way of constructing a simple graph with a given joint degree distribution first, and our Markov Chain only has simple graphs with the same joint degree distribution as its state space.
{ "abstract": [ "A network is said to show assortative mixing if the nodes in the network that have many connections tend to be connected to other nodes with many connections. Here we measure mixing patterns in a variety of networks and find that social networks are mostly assortatively mixed, but that technological and biological networks tend to be disassortative. We propose a model of an assortatively mixed network, which we study both analytically and numerically. Within this model we find that networks percolate more easily if they are assortative and that they are also more robust to vertex removal.", "Researchers have proposed a variety of metrics to measure important graph properties, for instance, in social, biological, and computer networks. Values for a particular graph metric may capture a graph's resilience to failure or its routing efficiency. Knowledge of appropriate metric values may influence the engineering of future topologies, repair strategies in the face of failure, and understanding of fundamental properties of existing networks. Unfortunately, there are typically no algorithms to generate graphs matching one or more proposed metrics and there is little understanding of the relationships among individual metrics or their applicability to different settings. We present a new, systematic approach for analyzing network topologies. We first introduce the dK-series of probability distributions specifying all degree correlations within d-sized subgraphs of a given graph G. Increasing values of d capture progressively more properties of G at the cost of more complex representation of the probability distribution. Using this series, we can quantitatively measure the distance between two graphs and construct random graphs that accurately reproduce virtually all metrics proposed in the literature. The nature of the dK-series implies that it will also capture any future metrics that may be proposed. 
Using our approach, we construct graphs for d=0, 1, 2, 3 and demonstrate that these graphs reproduce, with increasing accuracy, important properties of measured and modeled Internet topologies. We find that the d=2 case is sufficient for most practical purposes, while d=3 essentially reconstructs the Internet AS-and router-level topologies exactly. We hope that a systematic method to analyze and synthesize topologies offers a significant improvement to the set of tools available to network topology and protocol researchers.", "We study assortative mixing in networks, the tendency for vertices in networks to be connected to other vertices that are like (or unlike) them in some way. We consider mixing according to discrete characteristics such as language or race in social networks and scalar characteristics such as age. As a special example of the latter we consider mixing according to vertex degree, i.e., according to the number of connections vertices have to other vertices: do gregarious people tend to associate with other gregarious people? We propose a number of measures of assortative mixing appropriate to the various mixing types, and apply them to a variety of real-world networks, showing that assortative mixing is a pervasive phenomenon found in many networks. We also propose several models of assortatively mixed networks, both analytic ones based on generating function methods, and numerical ones based on Monte Carlo graph generation techniques. We use these models to probe the properties of networks as their level of assortativity is varied. In the particular case of mixing by degree, we find strong variation with assortativity in the connectivity of the network and in the resilience of the network to the removal of vertices.", "" ], "cite_N": [ "@cite_36", "@cite_21", "@cite_50", "@cite_17" ], "mid": [ "2040956707", "2100256240", "2033193852", "" ] }
Constructing and Sampling Graphs with a Prescribed Joint Degree Distribution
We show that our Markov Chain mixes quickly on real graphs, allowing our techniques to be used in practice.

Introduction

Graphs are widely recognized as the standard modeling language for many complex systems, including physical infrastructure (e.g., Internet, electric power, water, and gas networks), scientific processes (e.g., chemical kinetics, protein interactions, and regulatory networks in biology from the gene level up through ecological systems), and relational networks (e.g., citation networks, hyperlinks on the web, and social networks). The broader adoption of graph models over the last decade, along with the growing importance of associated applications, calls for descriptive and generative models for real networks. What is common among these networks? How do they differ statistically? Can we quantify the differences among them? Answering these questions requires understanding the topological properties of these graphs, which has led to numerous studies of "real-world" networks, from the Internet to social, biological, and technological networks [18]. Perhaps the most prominent theme in these studies is the skewed degree distribution: real-world graphs have a few vertices with very high degree and many vertices with small degree. There is some dispute as to the exact distribution (some call it a power law [5,18], some log-normal [4,51,41,8]), but all agree that it is 'heavy-tailed' [17,54]. The ubiquity of this distribution has been a motivator for many different generative models and is often used as a metric for the quality of a model. Models like preferential attachment [5], the copying model [31], the Barabasi hierarchical model [53], the forest-fire model, the Kronecker graph model [33], geometric preferential attachment [19], and many more [34,59,11] study the expected degree distribution and use the results to argue for the strength of their method.
Many of these models also match other observed features, such as small diameter or densification [28]. However, recent studies comparing the generative models with real networks on metrics like conductance [35], core numbers [13], and clustering coefficients [30] show that the models do not match other important features of the networks. The degree distribution alone does not define a graph: McKay's estimate [39] shows that there may be exponentially many graphs with the same degree distribution. Nevertheless, models based on the degree distribution are commonly used to compute statistically significant structures in a graph. For example, the modularity metric for community detection in graphs [43,42] assumes a null hypothesis for the structure of a graph based on its degree distribution, namely that the probability of an edge between vertices v_i and v_j is proportional to d_i d_j, where d_i and d_j are the degrees of v_i and v_j. The modularity of a group of vertices is defined by how much their structure deviates from the null hypothesis, and a higher modularity signifies a better community. The key point here is that the null hypothesis is based solely on the degree distribution and therefore might be incorrect. Degree-distribution-based models are also used to predict graph properties [40,2,15,14,16], to benchmark [32], and to analyze the expected run time of algorithms [7]. These studies improve our understanding of the relationship between the degree distribution and the structure of a graph. Their shortcomings give insight into what other features besides the degree distribution would give us a better grasp of a graph's structure. For example, the degree assortativity of a network measures whether nodes attach to similar or dissimilar vertices. This is not specified by the degree distribution, yet studies have shown that social networks tend to be assortative, while biological and technological networks tend to be disassortative [47,46].
An example of recent work using assortativity is [30]. In this study, a high assortativity is assumed for connections that generate high clustering coefficients, and this, in addition to preserving the degree distribution, results in very realistic instances of real-world graphs. Another study that has looked at the joint degree distribution is dK-graphs [38]. They propose modeling a graph by looking at the distribution of the structure of all d-sized subsets of vertices, where d = 1 gives vertex degrees, d = 2 gives edge degrees (the joint degree distribution), d = 3 gives the degree distribution of triangles and wedges, and so on. It is an interesting idea, as the nK distribution clearly contains all information about the graph, but it is far too detailed as a model. At what value of d does the additional information become less useful? One way to enhance results based on the degree distribution is to use a more restrictive feature such as the joint degree distribution. Intuitively, if the degree distribution of a graph describes the probability that a vertex selected uniformly at random will be of degree k, then its joint degree distribution describes the probability that a randomly selected edge will be between nodes of degree k and l. We will use a slightly different concept, the joint degree matrix, where the total numbers of nodes and edges are specified and the number of edges between each pair of degrees is counted. Note that while the joint degree distribution uniquely defines the degree distribution of a graph up to isolated nodes, graphs with the same degree distribution may have very different joint degree distributions. We are not proposing that the joint degree distribution be used as a stand-alone descriptive model for generating networks.
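To make the object concrete, the following sketch (our own illustrative code, not from the paper) tallies the edge counts between degree classes for a simple graph given as an edge list:

```python
from collections import Counter

def joint_degree_matrix(edges):
    """Count, for each degree pair (k, l) with k <= l, the number of
    edges whose end points have degrees k and l."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    J = Counter()
    for u, v in edges:
        k, l = sorted((deg[u], deg[v]))
        J[(k, l)] += 1
    return dict(J)

# A star with three leaves: every edge joins the degree-3 hub to a degree-1 leaf.
print(joint_degree_matrix([(0, 1), (0, 2), (0, 3)]))  # → {(1, 3): 3}
```

Two graphs with the same degree sequence can give different tallies here, which is exactly the distinction drawn above.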
We believe that understanding the relationship between the joint degree distribution and the network structure is important, and that the capability to generate random instances of graphs with the same joint degree distribution will help enable this goal. Experiments on real data are valuable, but drawing conclusions based only on limited data may be misleading, as the graphs may all be biased in the same way. For a more rigorous study, we need a sampling algorithm that can generate random instances in a reasonable time, which is the motivation of this work. The primary questions investigated by this paper are: Given a joint degree distribution and an integer n, does the joint degree distribution correspond to a real labeled graph? If so, can one construct a graph of size n with that joint degree distribution? Is it possible to construct or generate a uniformly random graph with that same joint degree distribution? We address these problems from both a theoretical and an empirical perspective. In particular, being able to sample uniformly allows one to empirically evaluate which other graph features, like diameter or eigenvalues, are correlated with the joint degree distribution.

Contributions

We make several contributions to this problem, both theoretically and experimentally. First, we discuss the necessary and sufficient conditions for a given joint degree vector to be graphical. We prove that these conditions are sufficient by providing a new constructive algorithm. Next, we introduce a new configuration model for the joint degree matrix problem, a natural extension of the configuration model for the degree sequence problem. Finally, using this configuration model, we develop Markov Chains for sampling both pseudographs and simple graphs with a fixed joint degree matrix. A pseudograph allows multiple edges between two nodes as well as self-loops.
We prove the correctness of both chains, and the mixing time of the pseudograph chain, by using previous work. The mixing time of the simple graph chain is experimentally evaluated using autocorrelation. In practice, Monte Carlo Markov Chains are a very popular method for sampling from difficult distributions. However, it is often very difficult to theoretically evaluate the mixing time of a chain, and many practitioners simply stop the chain after 5,000, 10,000 or 20,000 iterations without much justification. Our experimental design with autocorrelation provides a set of statistics that can be used as a justification for choosing a stopping point. Further, we show one way that the autocorrelation technique can be adapted from real-valued samples to combinatorial samples.

Notation and Definitions

Formally, the degree distribution of a graph is the probability that a node chosen at random will be of degree k. Similarly, the joint degree distribution is the probability that a randomly selected edge will have end points of degree k and l. In this paper, we are concerned with constructing graphs that exactly match these distributions, so rather than probabilities, we will use the counting definition below and call it the joint degree matrix. In particular, we will be concerned with generating simple graphs that contain no multiple edges or self-loops. Any graph that may have multiple edges or self-loops will be referred to as a pseudograph. A generic degree vector will be denoted by D. Definition 2. The joint degree matrix (JDM) J(G) of a graph G is a matrix where J(G)_{k,l} is exactly the number of edges between nodes of degree k and degree l in G. A generic joint degree matrix will be denoted by J. Given a joint degree matrix J, we can recover the number of edges in the graph as m = ∑_{k=1}^∞ ∑_{l=k}^∞ J_{k,l}. We can also recover the degree vector as D_k = (1/k) (J_{k,k} + ∑_{l=1}^∞ J_{k,l}).
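The recovery of m and D from J can be transcribed directly (an illustrative sketch with a dictionary layout of our choosing; the (k, k) entry contributes two degree-k end points, as the text notes):

```python
from collections import defaultdict

def jdm_to_degree_vector(J):
    """Recover the edge count m and the degree vector D_k from a joint
    degree matrix given as {(k, l): count} with k <= l."""
    m = sum(J.values())
    ends = defaultdict(int)  # k -> total number of degree-k edge end points
    for (k, l), cnt in J.items():
        ends[k] += cnt
        ends[l] += cnt       # for k == l this adds J_{k,k} a second time
    D = {k: e // k for k, e in ends.items()}
    n = sum(D.values())      # excludes isolated (degree-0) vertices
    return m, D, n
```

For example, the path on four vertices has J = {(1, 2): 2, (2, 2): 1}, which recovers m = 3, D = {1: 2, 2: 2}, and n = 4.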
The term J_{k,k} is added twice because kD_k is the number of end points of degree k, and the edges counted by J_{k,k} contribute two end points each. The number of nodes, n, is then ∑_{k=1}^∞ D_k. This count does not include any degree-0 vertices, as these have no edges in the joint degree matrix. Given n and m, we can easily get the degree distribution and joint degree distribution: P(k) = D_k/n, while P(k, l) = J_{k,l}/m. Note that P(k) is not quite the marginal of P(k, l), although it is closely related.

The Joint Degree Matrix Configuration Model

We propose a new configuration model for the joint degree distribution problem. Given J and its corresponding D, we create k labeled mini-vertices for every vertex of degree k. In addition, for every edge with end points of degree k and l, we create two labeled mini-end points, one of class k and one of class l. We connect all degree-k mini-vertices to the class-k mini-end points. This forms a complete bipartite graph for each degree, and each of these forms a connected component that is disconnected from all other components. We call each of these components the "k-neighborhood". Notice that there are kD_k mini-vertices of degree k, and kD_k = J_{k,k} + ∑_l J_{k,l} corresponding mini-end points in each k-neighborhood. This is pictured in Figure 2. Take any perfect matching in this graph. If we merge each pair of mini-end points that correspond to the same edge, we obtain some pseudograph that has exactly the desired joint degree matrix. This observation forms the basis of our sampling method.

Constructing Graphs with a Given Joint Degree Matrix

The Erdős-Gallai condition is a necessary and sufficient condition for a degree sequence to be realizable as a simple graph. Theorem 1 (Erdős-Gallai). A degree sequence d = {d_1, d_2, ..., d_n} sorted in non-increasing order is graphical if and only if for every k ≤ n, ∑_{i=1}^k d_i ≤ k(k − 1) + ∑_{i=k+1}^n min(d_i, k).
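The condition can be checked by direct transcription (our sketch; a plain O(n^2) loop, with the even-degree-sum requirement of the handshake lemma added explicitly):

```python
def erdos_gallai(degrees):
    """Return True iff the degree sequence is graphical, per the
    Erdős-Gallai condition quoted above (plus the even-sum requirement)."""
    d = sorted(degrees, reverse=True)
    if sum(d) % 2 != 0:  # handshake lemma: total degree must be even
        return False
    n = len(d)
    for k in range(1, n + 1):
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(di, k) for di in d[k:])
        if lhs > rhs:
            return False
    return True

print(erdos_gallai([3, 3, 3, 3]))  # K4 → True
print(erdos_gallai([3, 3, 1, 1]))  # → False
```

The second example fails at k = 2: the two degree-3 vertices demand 6 end points, but at most 2 + min(1,2) + min(1,2) = 4 are available.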
The necessity of this condition comes from noting that within a set of k vertices there can be at most k(k − 1)/2 internal edges, and each vertex v outside the set can contribute at most min{d(v), k} entering edges. The condition considers each subset of vertices of decreasing degree and looks at the degree requirements of those nodes. If the requirement exceeds the available edges, the sequence cannot be graphical. The sufficiency is shown via the constructive Havel-Hakimi algorithm [23,22]. The existence of the Erdős-Gallai condition inspires us to ask whether similar necessary and sufficient conditions exist for a joint degree matrix to be graphical. The following necessary and sufficient conditions were independently studied by Amanatidis et al. [3]. Theorem 2. Let J be given and let D be the associated degree distribution. J can be realized as a simple graph if and only if (1) D_k is integer-valued for all k, and (2) for all k ≠ l, J_{k,l} ≤ D_k D_l, and for each k, J_{k,k} ≤ D_k(D_k − 1)/2. The necessity of these conditions is clear. The first condition requires that there be an integer number of nodes of each degree value. The next two require that the number of edges between nodes of degree k and l (or k and k) be no more than the total possible number of k-to-l edges in a simple graph defined by the marginal degree sequences. Amanatidis et al. show the sufficiency through a constructive algorithm. We will now introduce a new algorithm that runs in O(m) time. The algorithm proceeds by building a nearly regular graph for each class of edges J_{k,l}. Assume that k ≠ l for simplicity. Each of the D_k nodes of degree k receives ⌊J_{k,l}/D_k⌋ edges, while J_{k,l} mod D_k of them receive one extra edge. Similarly, the degree-l nodes receive ⌊J_{k,l}/D_l⌋ edges each, with J_{k,l} mod D_l of them receiving one extra. We can then construct a simple bipartite graph with this degree sequence. This can be done in time linear in the number of edges using queues, as discussed after Lemma 1.
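The conditions of Theorem 2 can be checked mechanically (our sketch, with the JDM as a {(k, l): count} dictionary, k ≤ l, which is our own layout):

```python
from math import comb

def jdm_is_graphical(J):
    """Check the conditions of Theorem 2 for a JDM given as
    {(k, l): count} with k <= l."""
    ends = {}
    for (k, l), cnt in J.items():
        ends[k] = ends.get(k, 0) + cnt
        ends[l] = ends.get(l, 0) + cnt
    # (1) D_k = (number of degree-k end points) / k must be an integer
    if any(e % k != 0 for k, e in ends.items()):
        return False
    D = {k: e // k for k, e in ends.items()}
    # (2) J_{k,l} <= D_k * D_l for k != l, and J_{k,k} <= C(D_k, 2)
    for (k, l), cnt in J.items():
        if k == l and cnt > comb(D[k], 2):
            return False
        if k != l and cnt > D[k] * D[l]:
            return False
    return True
```

For instance, {(1, 2): 2, (2, 2): 1} (the four-vertex path) passes, while {(2, 2): 2} fails: two edges among only two degree-2 vertices would require a multi-edge.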
If k = l, the only differences are that the graph is no longer bipartite and that there are 2J_{k,k} end points to be distributed among the D_k nodes. To find a simple nearly regular graph, one can use the Havel-Hakimi algorithm [22,23] in O(J_{k,k}) time by using the degree sequence of the graph as its input. We must show that there is a way to combine all of these nearly regular graphs without violating any degree constraints. Let d = d_1, d_2, ..., d_n be the degree sequence from D, sorted in non-increasing order. Let d̂_v denote the residual degree of a vertex v, that is, d_v minus the number of edges that currently neighbor v. Also, let D̂_k denote the number of nodes of degree k that have non-zero residual degree, i.e., D̂_k = ∑_{d_j = k} 1(d̂_j ≠ 0).

Algorithm 1 Greedy Graph Construction with a Fixed JDM. Input: J, n, m, D
1: for k = n ... 1 and l = k ... 1 do
2: if k ≠ l then
3: Let a = J_{k,l} mod D_k and b = J_{k,l} mod D_l
4: Let x_1 ... x_a = ⌊J_{k,l}/D_k⌋ + 1, x_{a+1} ... x_{D_k} = ⌊J_{k,l}/D_k⌋, and y_1 ... y_b = ⌊J_{k,l}/D_l⌋ + 1, y_{b+1} ... y_{D_l} = ⌊J_{k,l}/D_l⌋
5: Construct a simple bipartite graph B with degree sequence x_1 ... x_{D_k}, y_1 ... y_{D_l}
6: else
7: Let c = 2J_{k,k} mod D_k
8: Let x_1 ... x_c = ⌊2J_{k,k}/D_k⌋ + 1 and x_{c+1} ... x_{D_k} = ⌊2J_{k,k}/D_k⌋
9: Construct a simple graph B with degree sequence x_1 ... x_{D_k}
10: end if
11: Place B into G by matching the nodes of degree k with higher residual degree to x_1 ... x_a, and those of degree l with higher residual degree to y_1 ... y_b. The other vertices in B can be matched in any way with those in G of degree k and l.
12: Update the residual degrees of each degree-k and degree-l node.
13: end for

To combine the nearly uniform subgraphs, we start with the largest degree nodes and the corresponding largest degree classes. It is not necessary to start with the largest, but it simplifies the proof.
First, we note that after every iteration the residual joint degree sequence is still feasible if, for all k ≠ l, Ĵ_{k,l} ≤ D̂_k D̂_l, and for all k, Ĵ_{k,k} ≤ D̂_k(D̂_k − 1)/2. We will prove that Algorithm 1 can always satisfy these feasibility conditions. First, we note a fact. Observation 1. For all k, ∑_l Ĵ_{k,l} + Ĵ_{k,k} = ∑_{d_j = k} d̂_j. This follows directly from the fact that the left-hand side sums over all of the degree-k end points needed by Ĵ, while the right-hand side sums the available residual end points from the degree distribution. Next, we note that if all residual degrees of degree-k nodes are either 0 or 1, then: Observation 2. If, for all j such that d_j = k, d̂_j = 0 or 1, then ∑_{d_j = k} d̂_j = ∑_{d_j = k} 1(d̂_j ≠ 0) = D̂_k. Lemma 1. After every iteration, for every pair of vertices u, v of any degree k, |d̂_u − d̂_v| ≤ 1. Amanatidis et al. refer to Lemma 1 as the balanced degree invariant. This is most easily proven by considering the vertices of degree k as a queue. If there are x edges to be assigned, we can view the process of deciding how many edges to assign to each vertex as popping vertices from the top of the queue and reinserting them at the end, x times. Each vertex is assigned as many edges as the number of times it was popped. The next time we assign edges with end points of degree k, we start the queue at the position where we previously ended. It is clear that no vertex can be popped twice without all other vertices being popped at least once. Lemma 2. The above algorithm can always greedily produce a graph that satisfies J, provided J satisfies the initial necessary conditions. Proof. There is one key observation about this algorithm: it maximizes D̂_k D̂_l by ensuring that the residual degrees of any two vertices of the same degree never differ by more than 1. By maximizing the number of available vertices, we cannot get stuck adding a self-loop or multiple edge.
From this, we gather that if, for some degree k, there exists a vertex j such that d̂_j = 0, then the residuals of all vertices of degree k must be either 0 or 1. This means that ∑_{d_j = k} d̂_j = D̂_k ≥ Ĵ_{k,l} for every other l, by Observation 2. From the initial conditions, we have that J_{k,l} ≤ D_k D_l for every k, l, and D̂_k = D_k provided that all degree-k vertices have non-zero residuals. Otherwise, for any unprocessed pair, Ĵ_{k,l} ≤ min{D̂_k, D̂_l} ≤ D̂_k D̂_l. For the k, k case, it is clear that Ĵ_{k,k} ≤ D̂_k ≤ D̂_k(D̂_k − 1)/2. Therefore, the residual joint degree matrix and degree sequence always remain feasible, and the algorithm can always continue. A natural question is: since the joint degree distribution contains all of the information in the degree distribution, do the joint degree matrix necessary conditions easily imply the Erdős-Gallai condition? This can easily be shown to be true. Corollary 1. The necessary conditions for a joint degree matrix to be graphical imply that the associated degree vector satisfies the Erdős-Gallai condition.

Uniformly Sampling Graphs with Monte Carlo Markov Chain (MCMC) Methods

We now turn our attention to uniformly sampling graphs with a given graphical joint degree matrix using MCMC methods. We return to the joint degree matrix configuration model. We can obtain a starting configuration for any graphical joint degree matrix by using Algorithm 1. This configuration consists of one complete bipartite component for each degree, with a perfect matching selected. The transitions we use select any end point uniformly at random, then select any other end point in its degree neighborhood, and swap the two edges that these neighbor. In Figure 2, this is equivalent to selecting one of the square end points uniformly at random, then selecting another uniformly at random from the same connected component, and swapping the edges. A more complex version of this chain checks that this swap does not create a multiple edge or self-loop.
Formally, the transition function is the randomized algorithm given by Algorithm 2, whose final steps are:

4: (if E ∪ {(e_1, v_2), (e_2, v_1)} \ {(e_1, v_1), (e_2, v_2)} contains a multi-edge or self-loop, reject)
5: E ← E ∪ {(e_1, v_2), (e_2, v_1)} \ {(e_1, v_1), (e_2, v_2)}

There are two chains described by Algorithm 2. The first, A, omits step (4), and its state space is all pseudographs with the desired joint degree matrix. The second, B, includes step (4) and only transitions to and from simple graphs with the correct joint degree matrix. We remind the reader of the standard result that any irreducible, aperiodic Markov Chain with symmetric transitions converges to the uniform distribution over its state space. Both A and B are aperiodic, due to the self-loop at each state. From the description of the transition function, we can see that A is symmetric. This is less clear for the transition function of B. Is it possible for two connected configurations to have a different number of feasible transitions in a given degree neighborhood? We show in the following lemma that this is not the case. Proof. Let C_1 and C_2 be two neighboring configurations of B. This means that they differ by exactly 4 edges in exactly 1 degree neighborhood. Let this degree be k, and let these edges be e_1 v_1 and e_2 v_2 in C_1, whereas they are e_1 v_2 and e_2 v_1 in C_2. We want to show that C_1 and C_2 have exactly the same number of feasible k-degree swaps. Without loss of generality, let e_x, e_y be a swap that is prevented by e_1 in C_1 but allowed in C_2. This must mean that e_x neighbors v_1 and e_y neighbors some v_y ≠ v_1, v_2. Notice that the swap e_1, e_x is currently feasible. However, in C_2, it is now infeasible to swap e_1, e_x, even though e_x, e_y is now possible. If we consider the other cases, e.g. that e_x, e_y is prevented by both e_1 and e_2, then after swapping e_1 and e_2, the swap e_x, e_y is still infeasible.
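A single transition of chain B can be sketched on a plain edge-list representation (our own simplification of the configuration-model chain; all names are ours, and the rejection mirrors step (4)):

```python
import random

def swap_step(edges, deg):
    """One move of the simple-graph chain: pick a random edge end point,
    pick another end point of the same degree class, and swap them,
    rejecting moves that would create a self-loop or multi-edge."""
    edge_set = set(edges)
    i = random.randrange(len(edges))
    side = random.randrange(2)
    v1 = edges[i][side]
    k = deg[v1]
    # all other end points in the same degree neighborhood
    cands = [(j, s) for j, e in enumerate(edges) for s in (0, 1)
             if deg[e[s]] == k and (j, s) != (i, side)]
    if not cands:
        return edges
    j, t = random.choice(cands)
    v2 = edges[j][t]
    e1, e2 = list(edges[i]), list(edges[j])
    e1[side], e2[t] = v2, v1
    ne1, ne2 = tuple(sorted(e1)), tuple(sorted(e2))
    # step (4): reject self-loops and multi-edges, staying in place
    if ne1[0] == ne1[1] or ne2[0] == ne2[1] or ne1 == ne2:
        return edges
    remaining = edge_set - {edges[i], edges[j]}
    if ne1 in remaining or ne2 in remaining:
        return edges
    new = list(edges)
    new[i], new[j] = ne1, ne2
    return new
```

Since both swapped end points have the same degree, every accepted move preserves the joint degree matrix, and every rejection keeps the chain at its current state, which is the self-loop that makes it aperiodic.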
If swapping e_1 and e_2 makes something feasible in C_1 infeasible in C_2, then we can use the above argument in reverse. This means that the number of feasible swaps in a k-neighborhood is invariant under k-degree swaps. The remaining important question is the connectivity of the state space of these chains. It is simple to show that the state space of A is connected. We note that it is a standard result that all perfect matchings in a complete bipartite graph are connected via edge swaps [58]. Moreover, the space of pseudographs can be seen exactly as the set of all perfect matchings over the disconnected complete bipartite degree neighborhoods in the joint degree matrix configuration model. The connectivity result is much less obvious for B. We adapt a result of Taylor [58], that all graphs with a given degree sequence are connected via edge swaps, in order to prove this. The proof is inductive and follows the structure of Taylor's proof. Theorem 3. Given two simple graphs G_1 and G_2 of the same size with the same joint degree matrix, there exists a series of end point rewirings to transform G_1 into G_2 (and vice versa) where every intermediate graph is also simple. Proof. This proof proceeds by induction on the number of nodes in the graph. The base case is when there are 3 nodes. There are 3 realizable JDMs; each is uniquely realizable, so there are no switchings available. Assume that the claim is true for n = k, and let G_1 and G_2 have k + 1 vertices. Label the nodes of G_1 and G_2 as v_1 ... v_{k+1} such that deg(v_1) ≥ deg(v_2) ≥ ... ≥ deg(v_{k+1}). Our goal will be to show that both graphs can be transformed into G'_1 and G'_2 respectively, such that v_1 neighbors the same nodes in each graph and the transitions are all through simple graphs. We can then remove v_1 from each, leaving two graphs on n − 1 nodes with identical JDMs. By the inductive hypothesis, these can be transformed into one another, and the result follows.

[Fig. 4: The dotted edges represent the troublesome edges that we may need to swap out before we can swap v_1 and v_c.]
[Fig. 5: The disk is v_1; the crosses are e_1 ... e_{d_1}.]

We will break the analysis into two cases. For both cases, we will have a set of target edges e_1, e_2, ..., e_{d_1} that we want v_1 to be connected to. Without loss of generality, we let this set be the edges that v_1 currently neighbors in G_2. We assume that the edges are ordered in reverse lexicographic order by the degrees of their end points. This guarantees that the resulting construction for v_1 is graphical and that we have a non-increasing ordering on the requisite end points. Now, let k_i denote the end point of edge e_i in G_2 other than v_1. Case 1) For the first case, we assume that v_1 is already an end point of all the edges e_1, e_2, ..., e_{d_1}, but that the k_i may not all be assigned correctly, as in Figure 5. Assume that e_1, e_2, ..., e_{i−1} are the edges (v_1, k_1), ..., (v_1, k_{i−1}), and that e_i is the first edge not matched to its appropriate k_i; call its current other end point u_i. We know that deg(k_i) = deg(u_i) and that k_i currently neighbors deg(k_i) other nodes, Γ(k_i). We have two cases here. One is that v_1 ∈ Γ(k_i), but via an edge f instead of e_i. Here, we can swap v_1 on the end points of f and e_i so that the edge v_1 − e_i − k_i is in the graph. f cannot be an e_j with j < i, because those edges have their correct end points k_j assigned. This is demonstrated in Figure 6. The other case is that v_1 ∉ Γ(k_i). Then there must exist some x ∈ Γ(k_i) \ Γ(u_i), because d(u_i) = d(k_i) and u_i neighbors v_1 while k_i doesn't. Therefore, we can swap the edges v_1 − e_i − u_i and x − f − k_i to v_1 − e_i − k_i and x − f − u_i without creating any self-loops or multiple edges. This is also demonstrated in Figure 6.
[Fig. 7: The two parts of Case (2).]

Therefore, we can swap all of the correct end points onto the correct edges. Case 2) For the second case, we assume that the edges e_1, ..., e_{d_1} are distributed over l nodes of degree d_1. We want to show that we can move all of the edges e_1, ..., e_{d_1} so that v_1 is an end point of each; if this is achievable, we are back to exactly Case 1. Let e_1, ..., e_{i−1} be currently matched to v_1, and let e_i be matched to some x with deg(x) = d_1. Let f be an edge currently matched to v_1 that is not among e_1, ..., e_{d_1}, and let its other end point be u_f. Let the other end point of e_i be u_x, as in Figure 7. We now have several initial cases that are all easy to handle. First, if v_1, x, u_x, u_f are all distinct and (v_1, u_x) and (x, u_f) are not edges, then we can easily swap v_1 and x, so that the edges go from v_1 − f − u_f and x − e_i − u_x to v_1 − e_i − u_x and x − f − u_f. Next, if u_f = u_x, then we can simply swap v_1 onto e_i and x onto f, and again v_1 will neighbor e_i. This will not create any self-loops or multiple edges because the resulting graph is isomorphic. These situations are both shown in Figure 7. The next case is that x = u_f. If we try to swap v_1 onto e_i, then we create a self-loop from x to x via f. Instead, we note that since the JDM is graphical, there must exist a third vertex y of the same degree as v_1 and x that does not neighbor x. Now, y neighbors some edge g, and we can swap x − f and y − g to x − g and y − f. The edges are now v_1 − f − y and x − e_i − u_x, and e_i can be swapped onto v_1 without conflict. The cases left to analyze are those where the nodes are all distinct and (v_1, u_x) or (x, u_f) is an edge in the graph. We will analyze these separately. Case 2a) If (v_1, u_x) is an edge in the graph, then it must be so through some edge named g. Note that this means we have v_1 − g − u_x and x − e_i − u_x.
We can swap this to v_1 − e_i − u_x and x − g − u_x and obtain an isomorphic graph, provided that g is not some e_j with j < i. This is the top case in Figure 8. If g is some e_j, then it must be that u_x = k_j, which is distinct from k_i. Since deg(k_j) = deg(k_i), there must exist some edge h that k_i neighbors, with its other end point being y. There are again three cases: when y ≠ x, v_1; when y = x; and when y = v_1. These are the bottom three rows illustrated in Figure 8. The first is the simplest. Here, we can assume that k_j does not neighbor y (because it neighbors v_1 and x, which k_i does not), so we can swap k_j onto h and k_i onto e_1. This removes the offending edge, and we can now swap v_1 onto e_1 and x onto f. When y = x, we first swap k_i onto e_j and k_j onto h. Next, we swap v_1 onto e_i and x onto f, as they no longer share an offending edge. Finally, when y = v_1, we use a sequence of three swaps. The first is k_i onto e_j and k_j onto h. The next is v_1 onto e_1 and x onto h. Finally, we swap k_j back onto e_j and k_i onto e_i. Case 2b) If (x, u_f) is an edge in the graph, then it must be through some edge g such that x − g − u_f and x − e_i − u_x. Without loss of generality, assume that f is the only edge neighboring v_1 that isn't an e_j. Since f doesn't neighbor v_1 in G_2, there must exist either a w with deg(w) = deg(u_f) or a v_s with deg(v_s) = d(v_1). This relies critically upon the fact that f and g are edges of the same class. If there is a w, then it doesn't neighbor v_1 (or we can apply the above argument to find a w' that doesn't), and it must have some neighbor y ∈ Γ(w) \ Γ(u_f) through an edge h. Therefore, we can swap u_f onto h and w onto f. This removes the offending edge, and we can now swap v_1 onto e_i and x onto f. If a v_s exists instead, then by the same argument there exists some edge h with end point u_s such that v_s ∉ Γ(u_f) and u_s ∉ Γ(x). Therefore, we can swap v_s − h and x − g to v_s − g and x − h.
This again removes the troublesome edge and allows us to swap v_1 onto e_i. Therefore, given any node, a precise set of edges that it should neighbor, and a set of vertices that are the end points of those edges, we can use half-edge rewirings to transform any graph G into a graph G' with this property, provided the set of edges is graphical. Now that we have shown that both A and B converge to the uniform distribution over their respective state spaces, the next question is how quickly this happens. Note that from the proof that the state space of B is connected, we can upper bound the diameter of the state space by 3m. The diameter provides a lower bound on the mixing time. In the next section, we will empirically estimate that the mixing time is also linear in m.

Estimating the Mixing Time of the Markov Chain

The Markov chain A is very similar to one analyzed by Kannan, Tetali and Vempala [26]. We can use their canonical paths and analysis exactly to show that the mixing time is polynomial. This result follows directly from Theorem 3 of [26] for chain A, because the joint degree matrix configuration model can be viewed as |D| complete, bipartite, and disjoint components. These components should remain disjoint, so the Markov Chain can be viewed as a 'meta-chain' that samples a component and then runs one step of the Kannan, Tetali and Vempala chain on that component. Even though the mixing time for this chain is provably polynomial, the upper bound is too large to be useful in practice. The analysis needed to bound the mixing time of chain B is significantly more complicated. One approach is to use the canonical path method to bound the congestion of the chain. The standard trick is to define a path from G_1 to G_2 that fixes the misplaced edges identified by G_1 ⊕ G_2 in a globally ordered way. However, this is difficult to apply to chain B because fixing a specific edge may not be atomic, i.e.
from the proof of Theorem 3, it may take up to 4 swaps to correctly connect a vertex with an end point if there are conflicts with the other degree neighborhoods. These swaps take place in other degree neighborhoods and are not local moves. They therefore introduce new errors that must be fixed but cannot be incorporated into G_1 ⊕ G_2. In addition, step (4) prevents us from using path coupling to prove the mixing time. Given that bounding the mixing time of this chain seems difficult without new techniques or ideas, we instead use a series of experiments that substitute the autocorrelation time for the mixing time.

Autocorrelation Time

Autocorrelation time is a quantity related to the mixing time that is popular among physicists. We give a brief introduction to this concept and refer the reader to Sokal's lecture notes for further details and discussion [56]. The autocorrelation of a signal is the cross-correlation of the signal with itself at a given lag t. More formally, given a series of data X_i, where each X_i is drawn from the same distribution X with mean µ and variance σ², the autocorrelation function is R_X(t) = E[(X_i − µ)(X_{i−t} − µ)]/σ². Intuitively, the inherent problem with using a Markov Chain sampling method is that successive states generated by the chain may be highly correlated. If we were able to draw independent samples from the stationary distribution, then the autocorrelation of that set of samples with itself would go to 0 as the number of samples increased. The autocorrelation time captures the size of the gaps needed between sampled states of the chain before the autocorrelation of this 'thinned' chain is very small. If the thinned chain has 0 autocorrelation, then it must be exactly sampled from the stationary distribution.
In practice, when estimating the autocorrelation from a finite number of samples, we do not expect it to go to exactly 0, but we do expect it to 'die away' as the number of samples and the gap increase.

Definition 3. The exponential autocorrelation time is τ_exp,X = lim sup_{t→∞} t / (−log |R_X(t)|) [56].

Definition 4. The integrated autocorrelation time is τ_int,X = (1/2) Σ_{t=−∞}^{∞} R_X(t) = 1/2 + Σ_{t=1}^{∞} R_X(t) [56].

The difference between the two is that the exponential autocorrelation time measures the time it takes for the chain to reach equilibrium after a cold start, or 'burn-in' time, while the integrated autocorrelation time is related to the increase in the variance over the samples from the Markov chain as opposed to samples that are truly independent. Often these two measurements coincide, although this is not necessarily true. We can substitute the autocorrelation time for the mixing time because they are, in effect, measuring the same thing: the number of iterations the Markov chain needs to run before the difference between the current distribution and the stationary distribution is small. We use the integrated autocorrelation time estimate.

Experimental Design

We used the Markov chain B in two different ways. First, for each of the smaller datasets, we ran the chain for 50,000 iterations 15 times. We used this to calculate the autocorrelation values for each edge for each lag between 100 and 15,000 in multiples of 100. From this, we calculated the estimated integrated autocorrelation time, as well as the iteration time for the autocorrelation of each edge to drop under a threshold of 0.001. This is discussed in Section 6.4. We also replicated the experimental design of Raftery and Lewis [52].
Given our estimates of the autocorrelation time for each size of graph in Section 6.4, we ran the chain again for long enough to capture 10,000 samples with x iterations of the chain between successive samples, where x was chosen to vary from much smaller than the estimated autocorrelation time to much larger. From these samples, we calculated the sample mean for each edge and compared it with the actual mean from the joint degree matrix. We looked at the total variational distance between the sample means and actual means and showed that the difference appears to be converging to 0. We chose the mean as an evaluation metric because we were able to calculate the true means theoretically; we are unaware of another similarly simple metric. We used the formulas for empirical evaluation of the mixing time from page 14 of Sokal's survey [56]. In particular, we used the following:

• The sample mean is µ = (1/n) Σ_{i=1}^{n} x_i.
• The sample unnormalized autocorrelation function is Ĉ(t) = (1/(n−t)) Σ_{i=1}^{n−t} (x_i − µ)(x_{i+t} − µ).
• The natural estimator of R_X(t) is ρ̂(t) = Ĉ(t)/Ĉ(0).
• The estimator for τ_int,X is τ̂_int = (1/2) Σ_{t=−(n−1)}^{n−1} λ(t) ρ̂(t), where λ is a 'suitable' cutoff function.

For a sequence of length x, calculating the autocorrelation at gap t requires on the order of x − t products. Our experiments require that we calculate the autocorrelation for each possible edge in a graph for many lags, so running the full set of experiments requires O(|V|² x log x) time and is prohibitive when |V| is large. Note that x must necessarily be at least Θ(|E|), since the mixing time cannot be sub-linear in the number of edges. In Section 6.3 we discuss results on the smaller datasets (AdjNoun, Dolphins, Football, Karate, and LesMis) that suggest a more feasible method for estimating the autocorrelation time for larger graphs. We use this method to evaluate the autocorrelation time for the larger graphs as well, and present all of the results together.
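Sokal's estimators quoted above can be transcribed directly. The sketch below is a minimal version operating on a plain list of 0/1 edge indicators, with a simple hard cutoff standing in for the 'suitable' window λ (function names are ours, not the authors'):

```python
def autocorr(xs, t):
    """Unnormalized sample autocorrelation:
    C_hat(t) = (1/(n-t)) * sum_{i=1}^{n-t} (x_i - mu)(x_{i+t} - mu)."""
    n = len(xs)
    mu = sum(xs) / n
    return sum((xs[i] - mu) * (xs[i + t] - mu) for i in range(n - t)) / (n - t)

def integrated_autocorr_time(xs, cutoff):
    """tau_int estimate: 1/2 + sum_{t=1}^{cutoff} rho_hat(t), using
    rho_hat(t) = C_hat(t)/C_hat(0) and the symmetry rho_hat(-t) = rho_hat(t)."""
    c0 = autocorr(xs, 0)
    return 0.5 + sum(autocorr(xs, t) / c0 for t in range(1, cutoff + 1))
```

For truly independent samples, ρ̂(t) hovers near 0 for t ≥ 1 and the estimate stays near 1/2; strong correlation inflates it.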
Rather than running the chain for 15,000 steps for the larger graphs, we selected more appropriate stopping conditions, generally 10|E|, based on the results for smaller graphs.

Data Sets

We used several publicly available datasets: Word Adjacencies [48], Les Miserables [29], American College Football [20], the Karate Club [63], the Dolphin Social Network [36], the C. Elegans Neural Network (celegans) [60,62], the power grid (power) [61], Astrophysics collaborations (astro-ph) [44], High-Energy Theory collaborations (hep-th) [45], Coauthorships in network science (netscience) [49], and a snapshot of the Internet from 2006 (as-22july) [50]. In the following, |V| is the number of nodes, |E| is the number of edges and |J| is the number of non-zero entries in the joint degree matrix J.

Table 1: Details about the datasets; |V| is the number of nodes, |E| is the number of edges and |J| is the number of unique entries in J.

Relationship Between the Mean of an Edge and Autocorrelation

For each of the smaller graphs (AdjNoun, Dolphins, Football, Karate and LesMis), we ran the Markov chain 10 times for 50,000 iterations and collected an indicator variable for each potential edge. For each of these edges, and each run, we calculated the autocorrelation function for values of t between 100 and 15,000 in multiples of 100. For each edge and each run, we recorded the t value at which the autocorrelation function first dropped below the threshold of 0.001. We then plotted the mean of these values against the mean of the edge: if the edge connects vertices of degree d_i and d_j, then µ_e = J_{d_i,d_j} / (D_{d_i} D_{d_j}) when d_i ≠ d_j, and µ_e = J_{d_i,d_i} / C(D_{d_i}, 2) otherwise. The three most useful plots are given in Figures 10 and 11; the other graphs did not contain a large range of mean values. From these results, we identified a potential relationship between µ_e and the time to pass under the threshold.
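The per-edge mean used in this plot follows from the joint degree matrix alone. A minimal sketch, assuming J is a dict keyed by sorted degree pairs and D maps each degree to its vertex count (both container names are ours):

```python
from math import comb

def edge_mean(J, D, di, dj):
    """Expected indicator of an edge between a degree-di and a degree-dj
    vertex: the J[(di, dj)] such edges are spread uniformly over the
    D[di]*D[dj] candidate pairs, or over C(D[di], 2) pairs when di == dj."""
    key = tuple(sorted((di, dj)))
    if di != dj:
        return J[key] / (D[di] * D[dj])
    return J[key] / comb(D[di], 2)
```

This is the quantity one would precompute per edge class in order to select edges with µ_e near 0.5, as described below.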
Unfortunately, none of our datasets contained a significant number of edges with larger µ_e values, i.e., between 0.5 and 1. In order to test this hypothesis, we designed a synthetic dataset containing many edges with µ_e values of i/20 for i = 1, ..., 20. We describe the creation of this dataset in the appendix. The final dataset we created had 326 edges, 194 vertices and 21 distinct J entries. We ran the Markov chain 200 times for this synthetic graph. For each run, we calculated the threshold value for each edge. Figure 11 shows each edge's mean versus its mean time for the autocorrelation value to pass under 0.001. We see a roughly symmetric curve that attains its maximum at µ_e = 0.5. This result suggests a way to estimate the autocorrelation time for larger graphs without repeating the entire experiment for every edge that could possibly appear: one can calculate µ_e for each edge from the JDM and sample edges with µ_e around 0.5. We use this method for selecting our subset of edges to analyze. In particular, we sampled about 300 edges from each of the larger graphs. For all of these except

Autocorrelation Values

For each dataset and each run we calculated the unnormalized autocorrelation values. For the smaller graphs, this entailed setting t to every value between 100 and 15,000 in multiples of 100. We randomly selected one run for each dataset and graphed the autocorrelation values for each of the edges. We present the data for the Karate and Dolphins datasets in Figures 12 and 13. For the larger graphs, we changed the starting and ending points based on the graph size; for example, netscience was analyzed from 2,000 to 15,000 in multiples of 100, while as-22july was analyzed from 1,000 to 500,000 in multiples of 1,000.

Fig. 13: The exponential drop-off for Dolphins appears to end after 600 iterations. All of the graphs exhibit the same behavior.
We see an exponential drop-off initially, after which the autocorrelation values oscillate around 0. This behavior is due to the limited number of samples and a bias from using the sample mean for each edge. If we ignore the noisy tail, we can estimate the point at which the autocorrelation 'dies off' by locating the 'elbow' in the graphs, i.e., the point where the mean absolute value of the autocorrelation approximately converges. This estimate for all graphs is given in Table 3 at the end of this section.

Estimated Integrated Autocorrelation Time

For each dataset and run, we calculated the estimated integrated autocorrelation time. For the datasets with fewer than 1,000 edges, we calculated the autocorrelation in lags of 100 from 100 to 15,000. For the larger ones, we used intervals that depended on the total size of the graph, and estimated the sum of ρ̂(t) as the size of the intervals times the sum of the sampled values. The cutoff function we used for the smaller graphs was λ(t) = 1 if 0 < t < 15,000 and 0 otherwise. This value was calculated for each edge. In Table 2 we present the mean, maximum and minimum estimated integrated autocorrelation time for each dataset over the runs of the Markov chain, using three different methods: for each edge, we first calculated the mean, median and maximum estimated integrated autocorrelation value over the various runs; then, for each of these three values, we calculated the maximum, mean and minimum over all edges. For each of the graphs, the data series representing the median and maximum have had their x-values perturbed slightly for clarity. These values are graphed on a log-log plot. We also present a graph showing the ratio of these values to the number of edges. The ratio plot, Figure 15, suggests that the autocorrelation time may be a linear function of the number of edges in the graph, although the estimates are noisy due to the limited number of runs. All three metrics give roughly the same picture.
We note that there is much higher variance in the estimated autocorrelation time for the larger graphs. Considering the evidence of the log-log plot and the ratio plot, we suspect that the autocorrelation time of this Markov chain is linear in the number of edges.

The Sample Mean Approaches the Real Mean for Each Edge

Given the results of the previous experiment estimating the integrated autocorrelation time, we next executed an experiment suggested by Raftery and Lewis [52]. First we note that for each edge e between degrees k and l, we know the true value of P(e ∈ G | G has J): it is exactly J_{k,l} / (D_k D_l), or J_{k,k} / C(D_k, 2) when k = l. This is because there are D_k D_l potential (k, l) edges that can show up in any graph with a fixed J, and each graph has J_{k,l} of them. If we consider the graphs as labeled, each edge has an equal probability of showing up when we consider permutations of the orderings.

Fig. 15: The ratio of the max, median and min values over the edges to the number of edges for the estimated integrated autocorrelation times. L to R in order of size: Karate, Dolphins, LesMis, AdjNoun, Football, celegans, netscience, power, hep-th, as-22july and astro-ph.

Thus, our experiment was to take samples at varying intervals and consider how the sample mean of each edge compared with our known theoretical mean. For the smaller graphs, we took 10,000 samples at varying gaps depending on our estimated integrated autocorrelation time and repeated this 10 times. We saw that the total variational distance quickly converged to a small but non-zero value. We repeated this experiment with 20,000 samples and, for the two smallest graphs, Karate and Dolphins, with 5,000 and 40,000 samples. These results show that this error is due to the number of samples and not the sampler. For the graphs with more than 1,000 edges, each run resulted in 20,000 samples at varying gaps, and this was repeated 5 times.
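Given these true means, comparing them against the sample means reduces to a normalized absolute deviation, Σ_e |S_e − µ_e| / Σ_e µ_e. A sketch with dict inputs (names are ours):

```python
def normalized_mean_error(sample_means, true_means):
    """Sum of absolute deviations between sampled and true per-edge means,
    normalized by the sum of the true means."""
    num = sum(abs(sample_means[e] - true_means[e]) for e in true_means)
    den = sum(true_means.values())
    return num / den
```

As the sampling gap grows past the autocorrelation time, this quantity should decay toward the residual error dictated by the finite number of samples.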
We present these results in Figures 18 through 28. If S_{e,g} is the sample mean for edge e and gap g, and µ_e is the true mean, then the graphed value is Σ_e |S_{e,g} − µ_e| / Σ_e µ_e. In all of the figures, the line runs through the median error for the runs and the error bars are the maximum and minimum values. The maximum and minimum are very close to the median, within 0.05% for most intervals. These graphs imply that we are sampling uniformly after a gap of 175 for the Karate graph. For the Dolphins graph, we see very similar results, and note that the error becomes constant after a sampling gap of 400 iterations. For the larger graphs, we varied the gaps based on the graph size and then focused on the region where the error appeared to be decreasing. Again, we see consistent results, although the residual error is higher. This is to be expected because there are more potential edges in these graphs, so we took relatively fewer samples per edge. A summary of the results can be found in Table 3. Based on the results in this table, our recommendation would be to run the Markov chain for 5m steps, which satisfies all running-time estimates except the maximum estimated integrated autocorrelation time for the power dataset. This estimate is significantly lower than the result for chain A that was obtained using the standard theoretical technique of canonical paths.

Conclusions and Future Work

This paper makes two primary contributions. The first is the investigation of Markov chain methods for uniformly sampling graphs with a fixed joint degree distribution. Previous work shows that the mixing time of A is polynomial, while our experiments suggest that the mixing time of B is also polynomial. The relationship between the mean of an edge and the autocorrelation values can be used to efficiently experiment with larger graphs by sampling edges with mean between 0.4 and 0.6 and repeating the analysis for just those edges.
This was used to repeat the experiments for larger graphs and to provide further convincing evidence of polynomial mixing time. Our second contribution is the design of the experiments to evaluate the mixing time of the Markov chain. In practice, the stopping time for sampling is often chosen without justification. Autocorrelation is a simple metric to use, and when used correctly it can provide strong evidence that a chain is close to the stationary distribution.

Now, we must fill in the rest of J so that D is integer-valued for all degrees. One way is to note that we should have 4 × 20 = 80 edge endpoints of degree 20. We can sum the number of currently allocated edges with one endpoint of degree 20, call this x, and set J_{1,20} = 80 − x. There are many other ways of consistently completing J, such as assigning as many edges as possible to the K × K and L × L entries, like J_{20,21}; this results in a denser graph. For the synthetic graph used in this paper, we completed J by adding all remaining edges as (1, 20), (1, 21), etc. entries. We chose this because it was simple to verify and it also made it easy to ignore the edges that were not of interest.
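The integrality requirement on D can be checked mechanically: the number of degree-k edge endpoints, Σ_{l≠k} J_{k,l} + 2·J_{k,k}, must be divisible by k. A sketch under the convention that J stores plain edge counts keyed by sorted degree pairs (our convention for illustration, not necessarily the authors' exact encoding):

```python
def degree_counts(J, degrees):
    """Recover D_k from a joint degree matrix J.  Each (k, l) edge with
    l != k contributes one degree-k endpoint; each (k, k) edge contributes
    two.  Raises if the endpoint count is not divisible by k."""
    D = {}
    for k in degrees:
        endpoints = 0
        for (a, b), count in J.items():
            if a == b == k:
                endpoints += 2 * count
            elif k in (a, b):
                endpoints += count
        if endpoints % k != 0:
            raise ValueError(f"J is inconsistent at degree {k}")
        D[k] = endpoints // k
    return D
```

Running such a check after each completion step makes it easy to verify that a proposed way of filling in J is consistent.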
9,108
1103.4271
2122895885
In this paper we present a framework for the rendering of dynamic 3D virtual environments which can be integrated in the development of videogames. It includes methods to manage sounds and particle effects, paged static geometries, the support of a physics engine and various input systems. It has been designed with a modular structure to allow future expansions. We exploited some open-source state-of-the-art components such as OGRE, PhysX, ParticleUniverse, etc.; all of them have been properly integrated to obtain peculiar physical and environmental effects. The stand-alone version of the application is fully compatible with Direct3D and OpenGL APIs and adopts OpenAL APIs to manage audio cards. Concluding, we devised a showcase demo which reproduces a dynamic 3D environment, including some particular effects: the alternation of day and night influencing the lighting of the scene, the rendering of terrain, water and vegetation, the reproduction of sounds and atmospheric agents.
In the area of environmental effects simulation, several methods for reproducing weather phenomena, such as particle-based rain techniques, are presented in @cite_2 . The authors describe in detail methods to render, in a real-time system, very complex atmospheric physical phenomena: strong rainfall, raindrops dripping off objects' surfaces, various reflections in surface materials and puddles, water ripples, puddles on the streets, and so on.
{ "abstract": [ "In this chapter we will cover approaches for creating visually complex, rich interactive environments as a case study of developing the world of ATI \"ToyShop\" demo. We will discuss the constraints for developing large immersive worlds in real-time, and go over the considerations for developing lighting environments for such scene rendering. Rain-specific effects in city environments will be presented. We will overview the lightning system used to create illumination from the lightning flashes, the high dynamic range rendering techniques used, various approaches for rendering rain effects and dynamic water simulation on the GPU. Methods for rendering reflections in real-time will be illustrated. Additionally, a number of specific material shaders for enhancing the feel of the rainy urban environment will be examined." ], "cite_N": [ "@cite_2" ], "mid": [ "2013275563" ] }
0
1103.4271
2122895885
In this paper we present a framework for the rendering of dynamic 3D virtual environments which can be integrated in the development of videogames. It includes methods to manage sounds and particle effects, paged static geometries, the support of a physics engine and various input systems. It has been designed with a modular structure to allow future expansions. We exploited some open-source state-of-the-art components such as OGRE, PhysX, ParticleUniverse, etc.; all of them have been properly integrated to obtain peculiar physical and environmental effects. The stand-alone version of the application is fully compatible with Direct3D and OpenGL APIs and adopts OpenAL APIs to manage audio cards. Concluding, we devised a showcase demo which reproduces a dynamic 3D environment, including some particular effects: the alternation of day and night influencing the lighting of the scene, the rendering of terrain, water and vegetation, the reproduction of sounds and atmospheric agents.
Realistic animations of water, smoke, explosions, and related phenomena are produced via fluid simulation. The authors of @cite_12 address the problem of high-resolution fluid motion in real-time videogame applications, describing techniques that operate on a scale previously unattainable in computer graphics. The central idea is to cover the simulation domain with a small set of simulation primitives, called tiles. Each tile consists of a velocity basis representing the possible flow within its sub-domain; the boundaries between sub-domains correspond to tile faces.
{ "abstract": [ "We present a new approach to fluid simulation that balances the speed of model reduction with the flexibility of grid-based methods. We construct a set of composable reduced models, or tiles, which capture spatially localized fluid behavior. We then precompute coupling terms so that these models can be rearranged at runtime. To enforce consistency between tiles, we introduce constraint reduction. This technique modifies a reduced model so that a given set of linear constraints can be fulfilled. Because dynamics and constraints can be solved entirely in the reduced space, our method is extremely fast and scales to large domains." ], "cite_N": [ "@cite_12" ], "mid": [ "2142037563" ] }
0
1103.4271
2122895885
In this paper we present a framework for the rendering of dynamic 3D virtual environments which can be integrated in the development of videogames. It includes methods to manage sounds and particle effects, paged static geometries, the support of a physics engine and various input systems. It has been designed with a modular structure to allow future expansions. We exploited some open-source state-of-the-art components such as OGRE, PhysX, ParticleUniverse, etc.; all of them have been properly integrated to obtain peculiar physical and environmental effects. The stand-alone version of the application is fully compatible with Direct3D and OpenGL APIs and adopts OpenAL APIs to manage audio cards. Concluding, we devised a showcase demo which reproduces a dynamic 3D environment, including some particular effects: the alternation of day and night influencing the lighting of the scene, the rendering of terrain, water and vegetation, the reproduction of sounds and atmospheric agents.
An overview of Halo 3's unique lighting and material system and its main components is presented in @cite_4 . Halo includes key innovations in the following areas: spherical harmonics lightmap generation, compression and rendering; rendering complex materials under area light sources; and HDR rendering and post-processing. Some of the effects presented in that work have been adopted here as well (e.g., HDR rendering).
{ "abstract": [ "Lighting and material are very important aspects of the visual appearances of games and they present some of the hardest challenges in real time graphics today. For Halo and indeed many other games, keeping the players immersed in the virtual environment for long periods of time is a top priority of the graphics system, and good quality lighting and realistic materials are the fundamental building blocks for achieving the level of realism necessary to accomplish this goal." ], "cite_N": [ "@cite_4" ], "mid": [ "2017214388" ] }
0
1103.1453
2150471503
Interference in wireless networks is one of the key capacity-limiting factors. Recently developed interference-embracing techniques show promising performance on turning collisions into useful transmissions. However, the interference-embracing techniques are hard to apply in practical applications due to their strict requirements. In this paper, we consider utilizing the interference-embracing techniques in a common scenario of two interfering sender-receiver pairs. By employing opportunistic listening and analog network coding (ANC), we show that compared to traditional ARQ retransmission, a higher retransmission throughput can be achieved by allowing two interfering senders to cooperatively retransmit selected lost packets at the same time. This simultaneous retransmission is facilitated by a simple handshaking procedure without introducing additional overhead. Simulation results demonstrate the superior performance of the proposed cooperative retransmission.
The idea of turning a collision of two simultaneous wireless transmissions into a useful transmission was first introduced in PNC @cite_5 . In particular, the authors proposed a frame-based decode-and-forward strategy in packet forwarding. In their scenario of a relay network, two nodes transmit simultaneously to a common receiver. Assuming perfect transmission synchronization at the physical layer, and based on the additive nature of simultaneously arriving electromagnetic (EM) waves, the receiver detects a single collided signal which is the sum of the two transmitted signals. They show that for certain modulation schemes there exists a mapping such that the relationship between the two transmitted binary bits and the decoded binary bit follows the exclusive-or (XOR) principle. ANC @cite_2 was later proposed to relax the restrictions of symbol-level synchronization, carrier-frequency synchronization and carrier-phase synchronization required in PNC, which makes ANC more practical. Specifically, ANC is able to decode an unknown packet @math from a collided packet @math (we use the notation @math to denote a collision of two packet transmissions) based on the known packet @math , by leveraging the co-channel FM signal separation technique @cite_1 and network-layer information to cancel the interference.
{ "abstract": [ "A main distinguishing feature of a wireless network compared with a wired network is its broadcast nature, in which the signal transmitted by a node may reach several other nodes, and a node may receive signals from several other nodes simultaneously. Rather than a blessing, this feature is treated more as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). The goal of this paper is to show how the concept of network coding can be applied at the physical layer to turn the broadcast property into a capacity-boosting advantage in wireless ad hoc networks. Specifically, we propose a physical-layer network coding (PNC) scheme to coordinate transmissions among nodes. In contrast to \"straightforward\" network coding which performs coding arithmetic on digital bit streams after they have been received, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves for equivalent coding operation. PNC can yield higher capacity than straight-forward network coding when applied to wireless networks. We believe this is a first paper that ventures into EM-wave-based network coding at the physical layer and demonstrates its potential for boosting network capacity. PNC opens up a whole new research area because of its implications and new design requirements for the physical, MAC, and network layers of ad hoc wireless stations. The resolution of the many outstanding but interesting issues in PNC may lead to a revolutionary new paradigm for wireless ad hoc networking.", "A new technique is presented to separate two cochannel frequency modulation signals. Two candidate solutions for the phases are analytically derived, and a sequence of phase solutions is chosen to match the expected power spectral density of each constituent signal. This is accomplished with a one-step linear predictor and a simple two-state Viterbi algorithm. 
Simulations on recorded radio frequency data indicate improved separation capability over other techniques such as the joint Viterbi algorithm and cross-coupled phase-locked loop.", "Traditionally, interference is considered harmful. Wireless networks strive to avoid scheduling multiple transmissions at the same time in order to prevent interference. This paper adopts the opposite approach; it encourages strategically picked senders to interfere. Instead of forwarding packets, routers forward the interfering signals. The destination leverages network-level information to cancel the interference and recover the signal destined to it. The result is analog network coding because it mixes signals not bits. So, what if wireless routers forward signals instead of packets? Theoretically, such an approach doubles the capacity of the canonical 2-way relay network. Surprisingly, it is also practical. We implement our design using software radios and show that it achieves significantly higher throughput than both traditional wireless routing and prior work on wireless network coding." ], "cite_N": [ "@cite_5", "@cite_1", "@cite_2" ], "mid": [ "1975099099", "2166383582", "2152496949" ] }
Cooperative Retransmissions Through Collisions
Compared with centralized medium access control (MAC) protocols, random-access-based MAC protocols such as Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) do not suffer from a single point of failure or the network scalability problem, and thus have become dominant in wireless local area networks (WLANs). However, using random access for multiple nodes to share a common channel inevitably introduces collisions or interference, especially at heavy traffic load. Numerous approaches have been proposed to deal with wireless signal interference; the common idea is to avoid collisions as much as possible. For example, CSMA/CA uses carrier sensing and random backoff to avoid collisions. Other techniques [1] include channel assignment, load balancing and power control. All these techniques can alleviate wireless interference to a certain extent, but cannot completely eliminate it. In 2006, Zhang et al. [2] introduced a novel idea of decoding a transmission collision on a wireless channel, which directly challenges the traditional rule that a collided transmission on a wireless channel is undecodable. This pioneering work, called physical-layer network coding (PNC) [2], shows that two simultaneous wireless transmissions added together at the electromagnetic-wave level can be decoded to produce the same outcome as network coding. Katti et al. [3] further elaborated this concept of embracing wireless interference and proposed an analog network coding (ANC) scheme, which is more practical than PNC. Despite this remarkable idea of turning collisions into useful transmissions, it is hard to apply PNC and ANC in practical applications, for a few reasons. First, PNC and ANC are only suitable for decoding a superimposed transmission consisting of exactly two simultaneous transmissions. Second, the collided transmissions need to be well synchronized, although perfect synchronization is not required for ANC.
Third, in order to decode a superimposed transmission, one of the two collided transmissions needs to be known. All these constraints limit the use of the interference-embracing techniques. Although there is so far no practical solution for decoding a superimposition of multiple (more than two) transmissions, some recently developed schemes [4], [5] show that it is possible to detect the presence of individual transmissions involved in a collision. These techniques have been successfully applied to improve the reliability of wireless broadcasting [4], [5]. The idea is that upon receiving a broadcast transmission, each receiver detecting the transmission replies with an acknowledgment (ACK) transmission. These simultaneous ACK packet transmissions cause a collision; decoding of the superimposed ACK packet is then performed to identify the ACK transmitters. The synchronization issue is well handled in this case since the simultaneous ACK transmissions appear after the completion of a broadcast transmission, which is a common event. In this paper, we consider utilizing the interference-embracing techniques in a common unit of two interfering sender-receiver pairs. In particular, we study the scenario of two interfering WLAN APs which are simulcasting bulky data to their associated stations in a lossy environment, as shown in Fig. 1. This scenario is in line with the increasing density of WLAN APs and the increasing popularity of multimedia applications such as video streaming and online games [1], [6]. Due to their high performance-to-price ratio, more and more WLANs are being deployed in public and residential places. Thus it is quite common, especially in metropolitan cities, that multiple APs overlap with each other and share a common channel.
By employing opportunistic listening and ANC, we show that compared to traditional ARQ retransmission, a higher retransmission throughput can be achieved by allowing two interfering APs to cooperatively retransmit selected lost packets at the same time. This simultaneous retransmission is facilitated by a simple handshaking procedure without introducing additional overhead. Simulation results demonstrate the superior performance of the proposed cooperative retransmission. The rest of the paper is organized as follows. Section II reviews the existing interference-embracing techniques. Section III introduces our ideas and describes the detailed protocol design. We analyze the proposed collision-based retransmission scheme in Section IV and provide the simulation results in Section V. Finally, we conclude the paper in Section VI.

A. Interference Based Network Coding

The idea of turning a collision of two simultaneous wireless transmissions into a useful transmission was first introduced in PNC [2]. In particular, the authors proposed a frame-based decode-and-forward strategy in packet forwarding. In their scenario of a relay network, two nodes transmit simultaneously to a common receiver. Assuming perfect transmission synchronization at the physical layer, and based on the additive nature of simultaneously arriving electromagnetic (EM) waves, the receiver detects a single collided signal which is the sum of the two transmitted signals. They show that for certain modulation schemes there exists a mapping such that the relationship between the two transmitted binary bits and the decoded binary bit follows the exclusive-or (XOR) principle. ANC [3] was later proposed to relax the restrictions of symbol-level synchronization, carrier-frequency synchronization and carrier-phase synchronization required in PNC, which makes ANC more practical.
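For BPSK, the PNC mapping described above can be written down explicitly: bits map to amplitudes ±1, the relay observes the sum in {−2, 0, +2}, and mapping amplitude 0 to 1 and ±2 to 0 yields exactly the XOR of the two transmitted bits. This is an idealized, noise- and synchronization-free sketch of the principle, not a full PNC receiver:

```python
def bpsk(bit):
    """Map a bit in {0, 1} to a BPSK amplitude in {-1, +1}."""
    return 2 * bit - 1

def pnc_relay_decode(s1, s2):
    """The relay sees the superimposed amplitude s1 + s2.  Amplitude 0
    means the two bits differed (XOR = 1); amplitude +/-2 means they
    agreed (XOR = 0)."""
    return 1 if s1 + s2 == 0 else 0
```

Checking all four bit combinations confirms that the decoded bit equals b1 XOR b2, which is what lets the relay forward a network-coded packet without ever decoding the individual transmissions.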
Specifically, ANC is able to decode an unknown packet c_2 from a collided packet c_1 ⊙ c_2 based on the known packet c_1, by leveraging the co-channel FM signal separation technique [7] and network-layer information to cancel the interference. (We use the notation ⊙ to denote a collision of two packet transmissions.) B. Superimposed Acknowledgement As mentioned, some interesting methods [4], [5] have been proposed to decode superimposed ACKs for providing reliability in wireless multicast, where the requirement is not to decode the content of the collided packets but to detect the existence of individual ACKs from different receivers. In [5], Durvy et al. proposed to use a bit sequence of N + 1 bits to decode a collision of up to N simultaneous ACK transmissions. The main limitation of the scheme is that it requires precise power-level differentiation in the decoding procedure. Comparisons of analog received signals are needed for the operation, and a delay line is used to store the analog signals for comparison. In our previous work [4], we designed a MAC-layer coding method, called collision codes, that can also achieve the decoding of collided ACK transmissions. Our coding method does not require precise detection of signal energy, and thus no modification of the physical-layer modulation is needed. In particular, each receiver is assigned a unique bitstream pattern to embed in its ACK packet. Different superimpositions of these bitstream patterns result in different decoded bitstreams, which enables the sender to deduce the presence of individual ACK transmissions involved in the collision. In this way, there is no need for each receiver to transmit its ACK in a different time slot. III. PROPOSED COOPERATIVE RETRANSMISSIONS THROUGH COLLISIONS In this section, we show how these interference-embracing techniques can be used in a common scenario of two interfering sender-receiver pairs communicating in a lossy environment.
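The ACK-detection principle of the collision codes described in Section II-B can be sketched in a few lines. This is a minimal model only, assuming one-hot patterns and modeling the superimposition of simultaneous on-off keyed ACKs as a bitwise OR; the actual code construction in [4] may differ.

```python
# Minimal sketch of collision-code style ACK detection (the actual code
# construction in [4] may differ): each paired receiver gets a unique
# one-hot bit pattern, and the superimposition of simultaneous on-off
# keyed ACKs is modeled as a bitwise OR of the patterns.
PATTERNS = {"R1": 0b10, "R2": 0b01}

def superimpose(ack_senders):
    signal = 0
    for r in ack_senders:
        signal |= PATTERNS[r]          # energy present in that bit slot
    return signal

def decode(signal):
    """Deduce which receivers' ACKs are present in the collided signal."""
    return {r for r, pat in PATTERNS.items() if signal & pat == pat}

assert decode(superimpose(["R1"])) == {"R1"}
assert decode(superimpose(["R2"])) == {"R2"}
assert decode(superimpose(["R1", "R2"])) == {"R1", "R2"}
```

Because each superimposition yields a distinct decoded bitstream, the sender recovers the set of ACK transmitters without requiring per-receiver time slots or precise energy measurements.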
We will use the case of two interfering WLAN APs as an example to illustrate our idea, although it can be applied to other wireless network scenarios as well. A. Basic Idea Consider the two pairs, AP_1 ∼ R_1 and AP_2 ∼ R_2 in Fig. 1, in a lossy wireless network, where both receivers are within the transmission range of the two APs. Let us assume that AP_1 wishes to transmit a packet c_1 to R_1 and AP_2 wishes to transmit a packet c_2 to R_2. Suppose that after the transmission, packet c_1 is not heard by R_1 but is overheard by R_2, while packet c_2 is not heard by R_2 but is overheard by R_1, due to the broadcast nature of wireless transmission (also known as opportunistic listening [3]). In this case, rather than retransmitting each of the two lost packets in different time slots to avoid interference, both AP_1 and AP_2 can retransmit their packets c_1 and c_2 simultaneously; the collision can be decoded by the two receivers using ANC, as each of them already has one known packet. In this way, we improve the retransmission efficiency by saving one retransmission. B. Protocol Design In the practical scenario of two interfering APs shown in Fig. 1, there are typically multiple receivers associated with each AP. For receivers located in non-interference regions, their transmission and retransmission follow the standard IEEE 802.11 protocol. Only for receivers located in the interference region is the retransmission carried out using both the proposed cooperative collision and conventional ARQ. In order to enjoy the proposed cooperative retransmission, two receivers belonging to different APs in the interference region need to be paired up. In particular, each receiver station first connects to an AP. After the establishment of the AP_i ∼ R_i connection, the receiver detects whether it is in the interference region by overhearing transmissions from the other AP, AP_j.
If it is in the interference region, it then broadcasts its availability to pair up with a receiver R_j connected to AP_j and located inside the interference region. If receiver R_j is available, it accepts the pairing invitation. After that, both R_i and R_j broadcast their pairing information to the APs. Once paired, both R_i and R_j acknowledge packets destined for either of them, and no third receiver is allowed to participate in acknowledging packets destined for R_i or R_j. Suppose we have established the connections AP_1 ∼ R_1 and AP_2 ∼ R_2 and the pair-up R_1 ∼ R_2 as shown in Fig. 1. Initially, both APs transmit and retransmit packets using the 802.11 MAC protocol, and both receivers reply with an ACK embedded with the collision codes [4] for every packet they hear that is destined for either of them. If both receivers hear the same packet, they transmit their ACKs simultaneously and the APs use the aforementioned technique [4] to decode the superimposed ACK. If AP_1 only detects an ACK from R_2 for a packet c_1 destined for R_1, it defers the retransmission until it finds an opportunity for cooperative retransmission. Because of the broadcast nature of ACK transmissions, AP_2 is aware that AP_1 has deferred a retransmission. When AP_2 only detects an ACK from R_1 for a packet c_2 destined for R_2, AP_2 is then available to participate in cooperative retransmission. Since both APs are aware of each other's deferred-retransmission status, they then simultaneously retransmit their corresponding packets, which results in a collision. Once the receivers successfully decode the collided packet using ANC, they send a superimposed ACK immediately. Figure 2 illustrates the handshake procedure for the cooperative retransmission. IV.
PERFORMANCE ANALYSIS So far, we have only shown that there is a possibility that the interference-embracing techniques can be utilized to improve the retransmission efficiency in the scenario of two interfering sender-receiver pairs. In this section, we mathematically analyze the probability and the corresponding performance gain. A. System Model Let d_{AP} denote the distance between the two APs, and r_t the transmission range of each AP, with both APs transmitting at the same transmission power and the same transmission rate. Consider that the interfering APs overlap such that d_{AP} < 2 r_t. Each AP associates with N client stations, which are uniformly distributed within the transmission range of the AP. Of interest to us are the receivers located inside the interference region. Consider the two pairs AP_1 ∼ R_1 and AP_2 ∼ R_2 shown in Fig. 1. The average packet loss probability p_{ij} for transmissions from AP_i to receiver R_j follows an independent Bernoulli packet loss model [8], where i, j ∈ {1, 2}. (Table I lists the four reception states of a packet transmitted by AP_i, destined for R_i: state 1, received by R_i but not by R_j, with probability p_{ij}(1 − p_{ii}); state 2, received by R_j but not by R_i, with probability p_{ii}(1 − p_{ij}); state 3, received by both R_i and R_j, with probability (1 − p_{ii})(1 − p_{ij}); state 4, received by neither, with probability p_{ii} p_{ij}.) The packet batch size for transmissions from AP_i to R_i is denoted as B_i. We assume that B_1 = B_2 = B. For multimedia applications such as video streaming, B is usually a large value. B. Retransmission Efficiency We use ARQ as the benchmark for performance comparison. It is well known that the number of retransmissions needed for recovering a lost packet follows the geometric distribution. Thus, the average total number of retransmissions needed for both AP_1 and AP_2 to successfully deliver B packets each is N_{ARQ} = \sum_{i=1}^{2} B · p_{ii} / (1 − p_{ii}). (1) In our proposed scheme, each AP keeps track of the reception status of every packet it transmits. Because of the broadcast nature of the superimposed ACK, AP_i is aware of the reception status at both R_i and R_j.
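The geometric-distribution term in Eq. (1) can be sanity-checked with a quick Monte Carlo sketch: under independent Bernoulli loss with probability p per attempt, the expected number of retransmissions needed to recover one packet is p/(1 − p).

```python
import random

random.seed(0)

# Monte Carlo sanity check of the geometric term in Eq. (1): with an
# independent Bernoulli loss probability p per attempt, the expected number
# of ARQ retransmissions needed to deliver one packet is p / (1 - p).
def arq_retransmissions(p):
    """Count failed attempts; each failure triggers one retransmission."""
    n = 0
    while random.random() < p:   # this attempt is lost, retry needed
        n += 1
    return n

p, trials = 0.3, 200_000
sim = sum(arq_retransmissions(p) for _ in range(trials)) / trials
theory = p / (1 - p)
assert abs(sim - theory) < 0.01
```

Summing this per-packet expectation over B packets and both APs gives exactly Eq. (1).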
Therefore, any transmitted packet will be in one of the four reception states shown in Table I. Since the packets transmitted by AP_i are destined for R_i, both state 1 and state 3 in Table I are considered successful reception cases. For state 4, where the packet is received by neither receiver, AP_i repeatedly retransmits the packet in traditional ARQ fashion until the retransmission falls into one of the first three states. The total number of retransmissions needed for moving all packets in state 4 into one of the first three states follows a geometric distribution with loss probability p_{i1} p_{i2}. Thus, the total number of retransmissions needed for packets in state 4 by both senders is N_{CR−S4} = \sum_{i=1}^{2} B · p_{i1} p_{i2} / (1 − p_{i1} p_{i2}). (2) State 2 in Table I is the case for cooperative retransmission. Suppose AP_1 and AP_2 are now simultaneously retransmitting c_1 and c_2 to R_1 and R_2, respectively. There are two states for the reception of c_1 ⊙ c_2 at each receiver R_i, as shown in Table II: state S_a, c_1 ⊙ c_2 is decodable, with probability (1 − p_{ii})(1 − p_{ji}); and state S_b, c_1 ⊙ c_2 is corrupted, with probability 1 − (1 − p_{ii})(1 − p_{ji}). Note that the reception of the collided packet at R_i is considered a success only when both c_1 and c_2 reach R_i successfully; any corruption in one of the packets renders the collided packet undecodable by ANC. Therefore, the total number of retransmissions needed for state 2 can be derived as N_{CR−S2} = B · P_{S2,i} / ((1 − p_{ii})(1 − p_{ji})), (3) where P_{S2,i}, the probability that a packet transmitted by AP_i is in state 2, is given by P_{S2,i} = p_{ii}(1 − p_{ij}) (1 + \sum_{n=1}^{∞} (p_{ii} p_{ij})^n), (4) which takes into consideration the additional packets falling into state 2 after retransmission of packets in state 4. Note that unlike (1) and (2), there is no summation sign in (3).
This is because of the collision-based cooperative retransmission, where the retransmissions for one receiver can always be piggybacked on the retransmissions for the other receiver. It is reasonable to assume that both AP_1 and AP_2 can always find 'partner packets' for cooperative retransmission in multimedia applications such as video streaming, which typically have large B values. In practice, if there is no 'partner packet', the lost packet is simply retransmitted in the traditional way. Finally, we compute the total number of retransmissions needed for our proposed cooperative retransmission as N_{CR} = N_{CR−S4} + N_{CR−S2}. (5) Assuming that p_{11} = p_{12} = p_{21} = p_{22} = p, we derive the retransmission gain against ARQ as G_r = N_{ARQ} / N_{CR} = 2(1 − p^2) / (2p(1 − p) + 1), (6) which gives a theoretical retransmission gain of 2 > G_r > 1 for 0 < p < 1/2. We now derive the total gain for the entire network, where each AP is associated with N uniformly distributed receivers. According to the system model in Section IV-A and the geometric relationships shown in Fig. 1, we can derive the overlapped area as A = 2 r_t^2 \arccos(d_{AP} / (2 r_t)) − d_{AP} \sqrt{r_t^2 − d_{AP}^2 / 4}. (7) It is clear that the total network gain depends on the number of receiver pairs located in the overlapped area, which is N_A = N · A / (π r_t^2). Therefore, the total retransmission gain with respect to all receivers in the network is G_N = N · N_{ARQ} / (N_A · N_{CR} + (N − N_A) · N_{ARQ}). (8) V. SIMULATION RESULTS Packet decoding using ANC has been successfully demonstrated on a testbed in [3]. Therefore, we can confidently assume that ANC is a practically applicable technique. For the proposed collision-based cooperative retransmission, we construct a C++ discrete-time simulator with the system model described in Section IV-A. For simplicity, we assume the network environment for the two APs is homogeneous and symmetric, e.g., the same packet loss rate and the same distance between AP_1 ∼ R_1 and AP_2 ∼ R_2. Fig.
3 shows the retransmission gain G_r under different packet loss probabilities. We can see that the simulation results with B = 1000 match the theoretical results well. The relatively large difference at low packet loss rates is due to the unavailability of 'partner packets' for the cooperative retransmission. Comparing the results for B = 1000 and B = 100, we can see that the difference between the theoretical gain and the simulated gain becomes smaller as the batch size increases. This is because a larger batch size leads to more cooperative collision coding opportunities, which is consistent with the assumptions made in the theoretical analysis. On the other hand, for small batch sizes, the problem of missing 'partner packets' becomes more severe. It can also be seen from Fig. 3 that the retransmission gain is reduced with increasing packet loss rate. There are two main reasons for this. First, a large packet loss rate reduces the probability of successful reception of the collided packets, as shown in Table II. Second, with the increase of the packet loss rate, the probability of state 4 becomes significant (see Fig. 4); in state 4 the traditional ARQ-based retransmission is used, which reduces the gain from the cooperative retransmission. Fig. 5 shows the network retransmission gain as the overlapped area increases, i.e., as the distance between the APs decreases. Each AP is associated with 10 uniformly distributed receivers. With the increase of the overlapped area, more receivers are located inside the overlapped region, where there are more pairs for the cooperative retransmission. As expected, from Fig. 5, we can see that the network gain increases with the increased number of receivers located in the overlapped area. VI.
CONCLUSION In this paper, we have successfully applied existing interference-embracing techniques to the scenario of two interfering WLAN APs, which are simulcasting bulky data to their associated individual stations in a lossy environment. In particular, we have proposed a collision-based cooperative retransmission scheme. Our major contribution lies in the protocol design, which combines different interference-embracing techniques to solve the retransmission problem. We have also analyzed the performance gain of our proposed cooperative retransmission against the traditional ARQ scheme. Both theoretical analysis and simulations show that our proposed collision-based retransmission method is able to reduce the number of retransmissions of ARQ by up to 50%. Although we focus on the scenario of two interfering WLAN APs, the proposed collision-based cooperative retransmission scheme can be applied to any two interfering sender-receiver pairs, which are quite common in WLANs, wireless mesh networks and wireless sensor networks. Our future work will be to extend the proposed scheme to more general scenarios such as a mixture of simulcasting and multicasting receivers and heterogeneous receivers.
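The "up to 50%" figure corresponds to the retransmission gain G_r in Eq. (6) approaching 2 as p approaches 0. A small numerical sketch can confirm that the gain assembled from Eqs. (1)-(5) matches the closed form of Eq. (6) and stays in (1, 2) for 0 < p < 1/2:

```python
# Numerical consistency check of Eqs. (1)-(6) in the symmetric case
# p11 = p12 = p21 = p22 = p. The batch size B cancels in the ratio, and
# the gain assembled from Eqs. (1)-(5) must equal the closed form (6).
def gain_from_components(p, B=1000.0):
    n_arq = 2 * B * p / (1 - p)                     # Eq. (1)
    n_s4 = 2 * B * p * p / (1 - p * p)              # Eq. (2)
    p_s2 = p * (1 - p) / (1 - p * p)                # Eq. (4), series summed
    n_s2 = B * p_s2 / ((1 - p) * (1 - p))           # Eq. (3)
    return n_arq / (n_s4 + n_s2)                    # Eqs. (5) and (6)

def gain_closed_form(p):
    return 2 * (1 - p * p) / (2 * p * (1 - p) + 1)  # Eq. (6)

for p in (0.05, 0.1, 0.2, 0.3, 0.45):
    assert abs(gain_from_components(p) - gain_closed_form(p)) < 1e-9
    assert 1 < gain_closed_form(p) < 2              # 2 > G_r > 1 for 0 < p < 1/2
```

Note that the geometric series in Eq. (4) is summed in closed form as 1/(1 − p^2), which is how P_{S2,i} reduces to p/(1 + p) in the symmetric case.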
3,413
1103.0040
2953139329
We design an expected polynomial-time, truthful-in-expectation, (1 − 1/e)-approximation mechanism for welfare maximization in a fundamental class of combinatorial auctions. Our results apply to bidders with valuations that are matroid rank sums (MRS), which encompass most concrete examples of submodular functions studied in this context, including coverage functions, matroid weighted-rank functions, and convex combinations thereof. Our approximation factor is the best possible, even for known and explicitly given coverage valuations, assuming P != NP. Ours is the first truthful-in-expectation and polynomial-time mechanism to achieve a constant-factor approximation for an NP-hard welfare maximization problem in combinatorial auctions with heterogeneous goods and restricted valuations. Our mechanism is an instantiation of a new framework for designing approximation mechanisms based on randomized rounding algorithms. A typical such algorithm first optimizes over a fractional relaxation of the original problem, and then randomly rounds the fractional solution to an integral one. With rare exceptions, such algorithms cannot be converted into truthful mechanisms. The high-level idea of our mechanism design framework is to optimize directly over the (random) output of the rounding algorithm, rather than over the input to the rounding algorithm. This approach leads to truthful-in-expectation mechanisms, and these mechanisms can be implemented efficiently when the corresponding objective function is concave. For bidders with MRS valuations, we give a novel randomized rounding algorithm that leads to both a concave objective function and a (1 − 1/e)-approximation of the optimal welfare.
Without incentive-compatibility constraints, the welfare maximization problem with submodular bidder valuations is completely solved. Vondrák @cite_20 gave a @math -approximation algorithm for the problem, improving over the @math -approximation given in @cite_3 . The algorithm in @cite_20 works in the value oracle model, where each valuation @math is modeled as a "black box" that returns the value @math of a queried set @math in a single operation. The approximation factor of @math is unconditionally optimal in the value-oracle model (for polynomial communication) @cite_12 , and is also optimal (for polynomial time) for certain succinctly represented submodular valuations, assuming @math @cite_11 . The result of @cite_11 implies that @math is the optimal approximation factor in our model as well, assuming @math . We show in the Appendix that our oracle model is no more powerful than polynomial-time computation in the special case of explicitly represented coverage functions, for which @math is optimal assuming @math @cite_11 . In contrast, the work of @cite_23 improves on the approximation factor of @math by using , which cannot be simulated in polynomial time for explicit coverage functions.
{ "abstract": [ "In most of microeconomic theory, consumers are assumed to exhibit decreasing marginal utilities. This paper considers combinatorial auctions among such buyers. The valuations of such buyers are placed within a hierarchy of valuations that exhibit no complementarities, a hierarchy that includes also OR and XOR combinations of singleton valuations, and valuations satisfying the gross substitutes property. While we show that the allocation problem among valuations with decreasing marginal utilities is NP-hard, we present an efficient greedy 2-approximation algorithm for this case. No such approximation algorithm exists in a setting allowing for complementarities. Some results about strategic aspects of combinatorial auctions among players with decreasing marginal utilities are also presented.", "Combinatorial allocation problems require allocating items to players in a way that maximizes the total utility. Two such problems received attention recently, and were addressed using the same linear programming (LP) relaxation. In the Maximum Submodular Welfare (SMW) problem, utility functions of players are submodular, and for this case Dobzinski and Schapira [SODA 2006] showed an approximation ratio of 1 − 1/e. In the Generalized Assignment Problem (GAP) utility functions are linear but players also have capacity constraints. GAP admits a (1 − 1/e)-approximation as well, as shown by Fleischer, Goemans, Mirrokni and Sviridenko [SODA 2006]. In both cases, the approximation ratio was in fact shown for a more general version of the problem, for which improving 1 − 1/e is NP-hard. In this paper, we show how to improve the 1 − 1/e approximation ratio, both for SMW and for GAP. A common theme in both improvements is the use of a new and optimal Fair Contention Resolution technique. However, each of the improvements involves a different rounding procedure for the above mentioned LP. 
In addition, we prove APX-hardness results for SMW (such results were known for GAP). An important feature of our hardness results is that they apply even in very restricted settings, e.g. when every player has nonzero utility only for a constant number of items.", "In the Submodular Welfare Problem, m items are to be distributed among n players with utility functions w_i : 2^[m] → R_+. The utility functions are assumed to be monotone and submodular. Assuming that player i receives a set of items S_i, we wish to maximize the total utility ∑_{i=1}^n w_i(S_i). In this paper, we work in the value oracle model where the only access to the utility functions is through a black box returning w_i(S) for a given set S. Submodular Welfare is in fact a special case of the more general problem of submodular maximization subject to a matroid constraint: max {f(S) : S ∈ I}, where f is monotone submodular and I is the collection of independent sets in some matroid. For both problems, a greedy algorithm is known to yield a 1/2-approximation [21, 16]. In special cases where the matroid is uniform (I = {S : |S| ≤ k}) [20] or the submodular function is of a special type [4, 2], a (1 − 1/e)-approximation has been achieved and this is optimal for these problems in the value oracle model [22, 6, 15]. A (1 − 1/e)-approximation for the general Submodular Welfare Problem has been known only in a stronger demand oracle model [4], where in fact 1 − 1/e can be improved [9]. In this paper, we develop a randomized continuous greedy algorithm which achieves a (1 − 1/e)-approximation for the Submodular Welfare Problem in the value oracle model. We also show that the special case of n equal players is approximation resistant, in the sense that the optimal (1 − 1/e)-approximation is achieved by a uniformly random solution. Using the pipage rounding technique [1, 2], we obtain a (1 − 1/e)-approximation for submodular maximization subject to any matroid constraint. 
The continuous greedy algorithm has a potential of wider applicability, which we demonstrate on the examples of the Generalized Assignment Problem and the AdWords Assignment Problem.", "We provide tight information-theoretic lower bounds for the welfare maximization problem in combinatorial auctions. In this problem, the goal is to partition m items among k bidders in a way that maximizes the sum of bidders' values for their allocated items. Bidders have complex preferences over items expressed by valuation functions that assign values to all subsets of items. We study the \"black box\" setting in which the auctioneer has oracle access to the valuation functions of the bidders. In particular, we explore the well-known value query model in which the permitted query to a valuation function is in the form of a subset of items, and the reply is the value assigned to that subset of items by the valuation function. We consider different classes of valuation functions: submodular, subadditive, and superadditive. For these classes, it has been shown that one can achieve approximation ratios of 1 − 1/e, 1/√m, and √m/m, respectively, via a polynomial (in k and m) number of value queries. We prove that these approximation factors are essentially the best possible: For any fixed ε > 0, a (1 − 1/e + ε)-approximation for submodular valuations or a 1/m^{1/2−ε}-approximation for subadditive valuations would require exponentially many value queries, and a (log^{1+ε} m)/m-approximation for superadditive valuations would require a superpolynomial number of value queries.", "" ], "cite_N": [ "@cite_3", "@cite_23", "@cite_20", "@cite_12", "@cite_11" ], "mid": [ "2004045061", "2126307323", "1989453388", "2059527251", "" ] }
From Convex Optimization to Randomized Mechanisms: Toward Optimal Combinatorial Auctions *
The overarching goal of algorithmic mechanism design is to design computationally efficient algorithms that solve or approximate fundamental optimization problems in which the underlying data is a priori unknown to the algorithm. A central example in both theory and practice is welfare maximization in combinatorial auctions. Here, there are m items for sale and n bidders vying for them. Each bidder i has a private valuation v_i(S) for each subset S of the items. 1 The welfare of an allocation S_1, . . . , S_n of the items to the bidders is \sum_{i=1}^{n} v_i(S_i). Since valuations are initially unknown to the seller, computing a near-optimal allocation requires eliciting information from the (self-interested) bidders, for example via a bid. A mechanism is a protocol that extracts such information and computes an allocation of the items and payments. The "holy grail" for a mechanism designer is to devise a computationally efficient and incentive-compatible mechanism with an approximation factor that matches the best one known for the (easier) problem in which the underlying data is provided up front. 2 Such results are usually difficult to obtain, and in some cases are provably impossible using deterministic mechanisms [20,27]. The space of randomized mechanisms, however, is much more promising, as shown recently in [9,12]. 3 This paper provides such a positive result for a fundamental class of combinatorial auctions, via a novel randomized mechanism design framework based on convex optimization. Algorithmic mechanism design is difficult because incentive compatibility severely limits how the algorithm can compute an outcome, which prohibits the use of most of the ingenious approximation algorithms that have been developed for different optimization problems. More concretely, the only general approach known for designing (randomized) truthful mechanisms is via maximal-in-distributional-range (MIDR) algorithms [9,12].
An MIDR algorithm fixes a set of distributions over feasible solutions - the distributional range - independently of the valuations reported by the self-interested participants, and outputs a random sample from the distribution that maximizes expected (reported) welfare. The Vickrey-Clarke-Groves (VCG) payment scheme renders an MIDR algorithm truthful-in-expectation. Most approximation algorithms are not MIDR algorithms. Consider, as an example, a randomized rounding algorithm for welfare maximization in combinatorial auctions (e.g. [14,11]). We can view such an algorithm as the composition of two algorithms, a relaxation algorithm and a rounding algorithm. The relaxation algorithm is deterministic and takes as input the problem data (players' valuations v), and outputs the (fractional) solution to a linear programming relaxation of the welfare-maximization problem that is optimal for the objective function defined by v. The rounding algorithm is randomized and takes as input this fractional solution and outputs a feasible allocation of the items to the players. Taken together, these algorithms assign to each input v a probability distribution D(v) over integral allocations. For almost all known randomized rounding algorithms, there is an input v such that the expected objective function value E_{y∼D(v)}[v^T y] under the distribution D(v) is inferior to the value E_{y∼D(w)}[v^T y] under a distribution D(w) that the algorithm would produce for a different input w - and this is a violation of the MIDR property. Informally, such violations are inevitable unless a rounding algorithm is designed explicitly to avoid them, on top of the usual approximation requirements. The exception that proves the rule is the important and well-known mechanism design framework of Lavi and Swamy [21]. Lavi and Swamy [21] begin with the foothold that the fractional welfare maximization problem - the relaxation algorithm above - can be made truthful by charging appropriate VCG payments. Further, they identify a very special type of rounding algorithm that preserves truthfulness: if the expected allocation produced by the rounding algorithm is always identical to the input to the rounding algorithm, component-wise, up to some universal scaling factor α, then composing the two algorithms easily yields an α-approximate truthful-in-expectation mechanism (after scaling the fractional VCG payments by α). Perhaps surprisingly, there are some interesting problems, such as welfare maximization in combinatorial auctions with general valuations, that admit such a rounding algorithm with a best-possible approximation guarantee (assuming P ≠ NP). However, most NP-hard welfare maximization problems do not seem to admit good randomized rounding algorithms of the rigid type required by this design framework. Our Contributions We introduce a new approach to designing truthful-in-expectation approximation mechanisms based on randomized rounding algorithms; we outline it here for the special case of welfare maximization in combinatorial auctions. Footnotes: 1 Each bidder has an exponential number of private values; we ignore the attendant representation issues for the moment. 2 In this paper, by "incentive compatible" we generally mean a (possibly randomized) mechanism such that every participant maximizes its expected payoff by truthfully revealing its information to the mechanism, no matter how the other participants behave. Such mechanisms are called truthful-in-expectation, and are defined formally in Section 2.2. 3 We note that the impressively general positive results for implementations in Bayes-Nash equilibria that were recently obtained in [18,17,1] do not apply to the stronger incentive-compatibility notions used in this paper and in most of the algorithmic mechanism design literature.
The high-level idea is to optimize directly on the outcome of the rounding algorithm, rather than merely on the outcome of the relaxation algorithm (the input to the rounding algorithm). In other words, let r(x) denote a randomized rounding algorithm mapping fractional allocations to integer allocations. Given players' valuations v, we compute a fractional allocation x that maximizes the expected welfare E_{y∼r(x)}[v^T y] over all fractional allocations x. This methodology evidently gives MIDR algorithms. This optimization problem is often intractable, but when the rounding algorithm r and the space of valuations v are such that the function E_{y∼r(x)}[v^T y] is always concave in x - in which case we call r a convex rounding algorithm - it can be solved in polynomial time using convex programming (modulo numerical issues that we address later). We use this design framework to give an expected polynomial-time, truthful-in-expectation, (1 − 1/e)-approximation mechanism for welfare maximization in combinatorial auctions in which bidders' valuations are matroid rank sums (MRS) - non-negative linear combinations of matroid rank functions on the items. MRS valuations are submodular and encompass most concrete examples of submodular functions that have been studied in the combinatorial auctions literature, including all coverage functions and matroid weighted-rank functions (see Section 2.4 for formal definitions). Our approximation guarantee is optimal, assuming P ≠ NP, even for the special case of the welfare maximization problem with known and explicitly presented coverage valuations. Our mechanism is the first truthful-in-expectation and polynomial-time mechanism to achieve a constant-factor approximation for any NP-hard special case of combinatorial auctions that does not assume that there are multiple copies of every type of item. It works with "black-box" valuations, provided that they support a randomized analog of a "value oracle".
We also give a (non-oracle-based) version of the mechanism for explicitly represented coverage valuations. Preliminaries Optimization Problems We consider optimization problems Π of the following general form. Each instance of Π consists of a feasible set S, and an objective function w : S → R. The solution to an instance of Π is given by the following optimization problem: maximize w(x) subject to x ∈ S. (1) Mechanism Design Basics We consider mechanism design optimization problems of the form in (1). In such problems, there are n players, where each player i has a valuation function v_i : S → R. We are concerned with welfare maximization problems, where the objective is w(x) = \sum_{i=1}^{n} v_i(x). We consider direct-revelation mechanisms for optimization mechanism design problems. Such a mechanism comprises an allocation rule, which is a function from (hopefully truthfully) reported valuation functions v_1, . . . , v_n to an outcome x ∈ S, and a payment rule, which is a function from reported valuation functions to a required payment from each player. We allow the allocation and payment rules to be randomized. A mechanism with allocation and payment rules A and p is truthful-in-expectation if every player always maximizes its expected payoff by truthfully reporting its valuation function, meaning that E[v_i(A(v)) − p_i(v)] ≥ E[v_i(A(v'_i, v_{−i})) − p_i(v'_i, v_{−i})] (2) for every player i, (true) valuation function v_i, (reported) valuation function v'_i, and (reported) valuation functions v_{−i} of the other players. The expectation in (2) is over the coin flips of the mechanism. If (2) holds for every flip of the coins, rather than merely in expectation, we call the mechanism universally truthful. The mechanisms that we design can be thought of as randomized variations on the classical VCG mechanism, as we explain next.
Recall that the VCG mechanism is defined by the (generally intractable) allocation rule that selects the welfare-maximizing outcome with respect to the reported valuation functions, and the payment rule that charges each player i a bid-independent "pivot term" minus the reported welfare earned by other players in the selected outcome. This (deterministic) mechanism is truthful; see e.g. [25]. Now let dist(S) denote the probability distributions over a feasible set S, and let D ⊆ dist(S) be a compact subset of them. The corresponding Maximal in Distributional Range (MIDR) allocation rule is defined as follows: given reported valuation functions v_1, . . . , v_n, return an outcome that is sampled randomly from a distribution D* ∈ D that maximizes the expected welfare E_{x∼D}[Σ_i v_i(x)] over all distributions D ∈ D. Analogous to the VCG mechanism, there is a (randomized) payment rule that can be coupled with this allocation rule to yield a truthful-in-expectation mechanism (see [9]). Combinatorial Auctions. In Combinatorial Auctions there is a set [m] = {1, 2, . . . , m} of items, and a set [n] = {1, 2, . . . , n} of players. Each player i has a valuation function v_i : 2^[m] → R_+ that is normalized (v_i(∅) = 0) and monotone (v_i(A) ≤ v_i(B) whenever A ⊆ B). A feasible solution is an allocation (S_1, . . . , S_n), where S_i denotes the items assigned to player i, and the sets {S_i}_i are mutually disjoint subsets of [m]. Player i's value for outcome (S_1, . . . , S_n) is equal to v_i(S_i). The goal is to choose the allocation maximizing social welfare: Σ_i v_i(S_i). Matroid Rank Sum Valuations. We now define matroid rank sum valuations. Relevant concepts from matroid theory are reviewed in Appendix C.1. Definition 2.1. A set function v : 2^[m] → R_+ is a matroid rank sum (MRS) function if there exist matroid rank functions u_1, . . . , u_κ : 2^[m] → N and non-negative weights w_1, . . . , w_κ ∈ R_+, such that v(S) = Σ_{ℓ=1}^κ w_ℓ u_ℓ(S) for all S ⊆ [m]. We do not assume any particular representation of MRS valuations, and require only oracle access to their (expected) values on certain distributions (see Section 2.5).
MRS functions include most concrete examples of monotone submodular functions that appear in the literature; this includes coverage functions 6, matroid weighted-rank functions 7, and all convex combinations thereof. Moreover, as shown in [19], 1 − 1/e is the best approximation possible in polynomial time for combinatorial auctions with MRS valuations unless P = NP, even ignoring strategic considerations. That being said, we note that some interesting submodular functions, such as some budget additive functions 8, are not in the matroid rank sum family (see Appendix D.2). Lotteries and Oracles. A value oracle for a valuation v : 2^[m] → R takes as input a set S ⊆ [m], and returns v(S). We define an analogous oracle that takes in a description of a simple lottery over subsets of [m], and outputs the expectation of v over this lottery. Given a vector x ∈ [0, 1]^m of probabilities on the items, let D_x be the distribution over S ⊆ [m] that includes each item j in S independently with probability x_j. We use F_v(x) to denote the expected value of v(S) over draws S ∼ D_x from this lottery. Definition 2.2. A lottery-value oracle for set function v : 2^[m] → R takes as input a vector x ∈ [0, 1]^m, and outputs F_v(x) = E_{S∼D_x}[v(S)] = Σ_{S⊆[m]} v(S) Π_{j∈S} x_j Π_{j∉S} (1 − x_j). (3) We note that F_v is simply the well-studied multilinear extension of v (see for example [6, 29]). In addition to being the natural randomized analog of a value oracle, a lottery-value oracle is easily implemented for various succinctly represented examples of MRS valuations, like explicit coverage functions (see Appendix A). We also note that lottery-value oracle queries can be approximated arbitrarily well with high probability using a polynomial number of value oracle queries (see [29]). Unfortunately, we are not able to reconcile the incurred sampling errors, small as they may be, with the requirement that our mechanism be exactly truthful.
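To illustrate Definition 2.2, a lottery-value oracle can be realized by brute-force enumeration for tiny m, and cross-checked against the per-element closed form that Appendix A uses for coverage functions. The two-item coverage valuation below (ground set {'a','b'}, sets A_0, A_1) is a hypothetical example, not an instance from the paper:

```python
from itertools import combinations
from math import isclose

def lottery_value(v, m, x):
    # Brute-force lottery-value oracle (Definition 2.2): enumerate all
    # subsets S, weighting v(S) by prod_{j in S} x_j * prod_{j not in S} (1 - x_j).
    # Exponential in m; for illustration only.
    total = 0.0
    for size in range(m + 1):
        for S in combinations(range(m), size):
            p = 1.0
            for j in range(m):
                p *= x[j] if j in S else 1.0 - x[j]
            total += p * v(set(S))
    return total

# Hypothetical coverage valuation: A_0 = {'a'}, A_1 = {'a','b'},
# v(S) = |union of A_j over j in S|.
A = [{'a'}, {'a', 'b'}]
def v(S):
    return len(set().union(*(A[j] for j in S))) if S else 0

x = [0.5, 0.5]
F = lottery_value(v, 2, x)

# Closed form for coverage: F_v(x) = sum over elements of
# 1 - prod_{j : element in A_j} (1 - x_j).
closed = 0.0
for ell in {'a', 'b'}:
    miss = 1.0
    for j in range(2):
        if ell in A[j]:
            miss *= 1.0 - x[j]
    closed += 1.0 - miss

assert isclose(F, closed)  # both equal 1.25 for this instance
```

The agreement of the two computations is exactly the observation behind Claim A.1: for explicitly represented coverage valuations, the exponential sum (3) collapses to a polynomial-size product formula.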
We suspect that relaxing our solution concept to approximate truthfulness, also known as ǫ-truthfulness, would remove this difficulty, and allow us to relax our oracle model to the more traditional value oracles. Convex Rounding Framework. Relaxations and Rounding Schemes. Let Π be an optimization problem. A relaxation Π′ of Π defines, for every (S, w) ∈ Π, a convex and compact relaxed feasible set R ⊆ R^m that is independent of w (we suppress the dependence on S), and an extension w_R : R → R of the objective w to the relaxed feasible set R. This gives the following relaxed optimization problem: maximize w_R(x) subject to x ∈ R. (4) Generally, the extension is defined so that it is computationally tractable to find a point x ∈ R that maximizes w_R(x) (possibly approximately). For example, S could be the allocations of m items to n bidders in a combinatorial auction, w(x) the welfare of an allocation, R the feasible region of a linear programming relaxation, and w_R the natural linear extension of w to fractional allocations. The solution x ∈ R to the relaxed problem need not be in S. A rounding scheme for relaxation Π′ of Π defines, for each feasible set S of Π and its corresponding relaxed set R, a (possibly randomized) function r : R → S. Since our rounding scheme will be randomized, we will frequently use r(x) to denote the distribution over S resulting from rounding the point x ∈ R. Commonly, the rounding scheme satisfies the following approximation guarantee: E_{y∼r(x)}[w(y)] ≥ α · w_R(x) for every x ∈ R. In this case, if x* maximizes w_R over R and w_R agrees with w on S, then E_{y∼r(x*)}[w(y)] ≥ α · max_{y∈S} w(y). Convex Rounding Schemes and MIDR. Our technique is motivated by the following observation: instead of solving the relaxed problem and subsequently rounding the solution, why not optimize directly on the outcome of the rounding scheme? In particular, consider the following relaxation of Π that "absorbs" the rounding scheme r into the objective.
maximize E_{y∼r(x)}[w(y)] subject to x ∈ R. (5) The solution to this problem rounds to the best possible distribution in the range of the rounding scheme, over all possible fractional solutions in R. While this problem is often intractable, it always leads to an MIDR allocation rule. Lemma 3.1. Algorithm 1 is an MIDR allocation rule. Algorithm 1 MIDR Allocation Rule via Optimizing over Output of Rounding Scheme. Parameter: Feasible set S of Π. Parameter: Relaxed feasible set R ⊆ R^m. Parameter: (Randomized) rounding scheme r : R → S. Input: Objective w : S → R satisfying (S, w) ∈ Π. Output: Feasible solution z ∈ S. 1: Let x* maximize E_{y∼r(x)}[w(y)] over x ∈ R. 2: Let z ∼ r(x*). We say a rounding scheme r : R → S is α-approximate for α ≤ 1 if w(x) ≥ E_{y∼r(x)}[w(y)] ≥ α · w(x) for every x ∈ S. When r is α-approximate, so is the allocation rule of Algorithm 1. For most rounding schemes in the approximation algorithms literature, the optimization problem (5) cannot be solved in polynomial time (assuming P ≠ NP). The reason is that for any rounding scheme that always rounds a feasible solution to itself, i.e., r(x) = x for all x ∈ S, an optimal solution to (5) is also optimal for (1). Thus, in this case, hardness of the original problem (1) implies hardness of (5). We conclude that we need to design rounding schemes with the unusual property that r(x) ≠ x for some x ∈ S. We call a (randomized) rounding scheme r : R → S convex if E_{y∼r(x)}[w(y)] is a concave function of x ∈ R. Under additional technical conditions, discussed in the context of combinatorial auctions in Appendix B, the convex program (5) can then be solved efficiently (e.g., using the ellipsoid method). This reduces the design of a polynomial-time α-approximate MIDR algorithm to designing a polynomial-time α-approximate convex rounding scheme. Summarizing, Lemmas 3.1, 3.2, and 3.3 give the following informal theorem. Theorem 3.4.
(Informal) Let Π be a welfare-maximization optimization problem, and let Π′ be a relaxation of Π. If there exists a polynomial-time, α-approximate, convex rounding scheme for Π′, then there exists a truthful-in-expectation, polynomial-time, α-approximate mechanism for Π. Of course, there is no reason a priori to believe that useful convex rounding schemes, let alone ones computable in polynomial time, exist for any important problems. We show in Section 4 that they do in fact exist and yield new results for an interesting class of combinatorial auctions. Combinatorial Auctions. In this section, we use the framework of Section 3 to prove our main result. Theorem 4.1. There is a (1 − 1/e)-approximate, truthful-in-expectation mechanism for combinatorial auctions with matroid rank sum valuations in the lottery-value oracle model, running in expected poly(n, m) time. We formulate welfare maximization in combinatorial auctions as an optimization problem Π. An instance (S, w) ∈ Π is given by the following integer program with feasible set S contained in {0, 1}^{n×m}. Variable x_ij indicates whether item j is allocated to player i, and w(x) denotes the social welfare of allocation x. maximize w(x) = Σ_i v_i({j : x_ij = 1}) subject to Σ_i x_ij ≤ 1, for j ∈ [m]; x_ij ∈ {0, 1}, for i ∈ [n], j ∈ [m]. (6) We let the relaxed feasible set R = R(S) be the result of relaxing the constraints x_ij ∈ {0, 1} of (6) to 0 ≤ x_ij ≤ 1. We structure the proof of Theorem 4.1 as follows. We define the Poisson rounding scheme, which we denote by r_poiss, in Section 4.1. We prove that r_poiss is (1 − 1/e)-approximate (Lemma 4.3) and convex (Lemma 4.2). Lemmas 3.1, 3.2, and 4.3, taken together, imply that Algorithm 1, when instantiated for combinatorial auctions with r = r_poiss, is a (1 − 1/e)-approximate MIDR allocation rule. Lemma 4.2 reduces implementing this allocation rule to solving a convex program.
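To make the optimize-then-round pattern of Algorithm 1 concrete, here is a toy instantiation: a single item, two players, and a grid search standing in for the convex program. The instance, the value pair (3, 1), and the grid resolution are all illustrative assumptions; the rounding rule (assign with probability 1 − e^{−x_i}) anticipates the Poisson scheme defined below, whose objective is concave, so the grid optimum approximates the true optimum:

```python
import math, random

def expected_welfare(v, x):
    # E_{y~r(x)}[w(y)] for the toy rounding rule: player i wins the
    # single item with probability 1 - e^{-x_i}.
    return sum(vi * (1.0 - math.exp(-xi)) for vi, xi in zip(v, x))

def midr_allocate(v, steps=100, rng=random):
    # Step 1 of Algorithm 1: maximize expected welfare over the relaxed
    # feasible set {x >= 0 : x_1 + x_2 <= 1}, here by grid search.
    best_x, best_val = (0.0, 0.0), -1.0
    for a in range(steps + 1):
        for b in range(steps + 1 - a):
            x = (a / steps, b / steps)
            val = expected_welfare(v, x)
            if val > best_val:
                best_x, best_val = x, val
    # Step 2: sample z ~ r(x*). The item may remain unassigned.
    p, cum = rng.random(), 0.0
    for i, xi in enumerate(best_x):
        cum += 1.0 - math.exp(-xi)
        if cum >= p:
            return i
    return None  # unassigned

print(midr_allocate((3.0, 1.0), rng=random.Random(0)))  # -> None with this seed
```

Because the allocation maximizes expected welfare over a fixed range of distributions (those reachable by rounding some x in the relaxed set), the rule is MIDR regardless of how coarse the grid is; only the approximation quality depends on the optimizer.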
In Appendix B, we handle the technical and numerical issues related to solving convex programs. First, we prove that our instantiation of Algorithm 1 for combinatorial auctions can be implemented in expected polynomial time using the ellipsoid method, under a simplifying assumption on the numerical conditioning of our convex program (Lemma B.2). Then we show in Section B.3 that the previous assumption can be removed by slightly modifying our algorithm. Finally, we prove that truth-telling VCG payments can be computed efficiently in Lemma D.1. Taken together, these lemmas complete the proof of Theorem 4.1. In Appendix D.2, we discuss prospects for extending our result beyond matroid rank sum valuations. The Poisson Rounding Scheme. In this section we define the Poisson rounding scheme, which we denote by r_poiss. The random map r_poiss : R → S renders the following optimization problem over R a convex optimization problem. maximize f(x) = E_{y∼r_poiss(x)}[w(y)] subject to Σ_i x_ij ≤ 1, for j ∈ [m]; 0 ≤ x_ij ≤ 1, for i ∈ [n], j ∈ [m]. (7) We define the Poisson rounding scheme as follows. Given a fractional solution x to (7), do the following independently for each item j: assign j to player i with probability 1 − e^{−x_ij}. (This is well defined since 1 − e^{−x_ij} ≤ x_ij for all players i and items j, and Σ_i x_ij ≤ 1 for all items j.) We make this more precise in Algorithm 2. For clarity, we represent an allocation as a function from items to players, with an additional null player * reserved for items that are left unassigned. For each item j, Algorithm 2 draws p_j ∈ [0, 1] uniformly at random; if Σ_i (1 − e^{−x_ij}) ≥ p_j, it lets a(j) be the minimum index such that Σ_{i≤a(j)} (1 − e^{−x_ij}) ≥ p_j and assigns j to player a(j), and otherwise it leaves j unassigned. Lemma 4.3. The Poisson rounding scheme r_poiss is (1 − 1/e)-approximate. Proof. Let S_1, . . . , S_n be an allocation, and let x be the integer point of (7) corresponding to S_1, . . . , S_n. Let (S'_1, . . . , S'_n) ∼ r_poiss(x). It suffices to show that E[Σ_i v_i(S'_i)] ≥ (1 − 1/e) · Σ_i v_i(S_i).
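The per-item assignment step of Algorithm 2 can be sketched in a few lines of Python (a minimal sketch; x is the n×m matrix of the fractional solution, and the null player is written '*'):

```python
import math, random

def poisson_round(x, rng=random):
    # Poisson rounding scheme: independently for each item j, draw
    # p uniform in [0, 1] and assign j to the minimum-index player a(j)
    # with sum_{i <= a(j)} (1 - e^{-x[i][j]}) >= p; otherwise j stays
    # with the null player '*'. Assumes sum_i x[i][j] <= 1 for every j.
    n, m = len(x), len(x[0])
    alloc = {}
    for j in range(m):
        p = rng.random()
        alloc[j] = '*'
        cum = 0.0
        for i in range(n):
            cum += 1.0 - math.exp(-x[i][j])
            if cum >= p:
                alloc[j] = i
                break
    return alloc
```

Note that even for an integral x (item j fully assigned to one player, x_ij = 1), the item is delivered only with probability 1 − e^{−1}; this deliberate "self-distortion" is what makes the scheme convex rather than an identity on feasible points.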
By definition of the Poisson rounding scheme, S'_i includes each j ∈ S_i independently with probability 1 − 1/e. Submodularity implies that E[v_i(S'_i)] ≥ (1 − 1/e) · v_i(S_i), which completes the proof. Warm-up: Convexity for Coverage Valuations. In this section, we prove the special case of Lemma 4.2 for coverage valuations, as defined in Section 2.4. Fix n, m, and coverage valuations {v_i}_{i=1}^n, and let R denote the feasible set of mathematical program (7). Let (S_1, . . . , S_n) ∼ r_poiss(x) be the (random) allocation computed by the Poisson rounding scheme for point x ∈ R. The expected welfare E[w(r_poiss(x))] can be written as E[Σ_{i=1}^n v_i(S_i)], where the expectation is taken over the internal random coins of the rounding scheme. By linearity of expectation, as well as the fact that the sum of concave functions is concave, it suffices to show that E[v_i(S_i)] is a concave function of x for an arbitrary player i with coverage valuation v_i. Fix player i, and use x_j, v, and S as shorthand for x_ij, v_i, and S_i respectively. Recall that v is a coverage function; let L be a ground set and A_1, . . . , A_m ⊆ L be such that v(T) = |∪_{j∈T} A_j| for each T ⊆ [m]. The Poisson rounding scheme includes each item j in S independently with probability 1 − e^{−x_j}. The expected value of player i can be written as follows: E[v(S)] = E[|∪_{j∈S} A_j|] = Σ_{ℓ∈L} Pr[ℓ ∈ ∪_{j∈S} A_j]. Since the sum of concave functions is concave, it suffices to show that Pr[ℓ ∈ ∪_{j∈S} A_j] is concave in x for each ℓ ∈ L. We can interpret Pr[ℓ ∈ ∪_{j∈S} A_j] as the probability that element ℓ is covered by an item in S, where j ∈ [m] covers ℓ ∈ L if ℓ ∈ A_j. For each ℓ ∈ L, let C_ℓ be the set of items that cover ℓ. Element ℓ ∈ L is covered by S precisely when C_ℓ ∩ S ≠ ∅. Each item j ∈ C_ℓ is included in S independently with probability 1 − e^{−x_j}.
Therefore, the probability that ℓ ∈ L is covered by S can be rewritten as follows: Pr[ℓ ∈ ∪_{j∈S} A_j] = 1 − Π_{j∈C_ℓ} e^{−x_j} = 1 − exp(−Σ_{j∈C_ℓ} x_j). (8) Form (8) is the composition of the concave function g(y) = 1 − e^{−y} with the affine function x ↦ Σ_{j∈C_ℓ} x_j. It is well known that composing a concave function with an affine function yields another concave function (see e.g. [4]). Therefore, Pr[ℓ ∈ ∪_{j∈S} A_j] is concave in x for each ℓ ∈ L, as needed. This completes the proof. Convexity for Matroid Rank Sum Valuations. In this section, we will prove Lemma 4.2 in its full generality. First, we define a discrete analogue of a Hessian matrix for set functions, and show that these discrete Hessians are negative semi-definite for matroid rank sum functions. Definition 4.4. The discrete Hessian of a set function v : 2^[m] → R at S ⊆ [m] is the matrix H^v_S given by H^v_S(j, k) = v(S ∪ {j, k}) − v(S ∪ {j}) − v(S ∪ {k}) + v(S) (9) for j, k ∈ [m]. Claim 4.5. If v : 2^[m] → R_+ is a matroid rank sum function, then H^v_S is negative semi-definite for each S ⊆ [m]. Proof. We observe that H^v_S is linear in v, and recall that a non-negative weighted sum of negative semi-definite matrices is negative semi-definite. Therefore, it is sufficient to prove this claim when v is a matroid rank function. In that case, one can verify that −H^v_S is a binary matrix encoding a symmetric and transitive relation on the items. A binary matrix encoding a symmetric and transitive relation is a block-diagonal matrix where each diagonal block is an all-ones or all-zeros sub-matrix. It is known, and easy to prove, that such a matrix is positive semi-definite. Therefore H^v_S is negative semi-definite. We now return to Lemma 4.2. Fix n, m, and MRS valuations {v_i}_{i=1}^n, and let R denote the feasible set of mathematical program (7). Let (S_1, . . . , S_n) ∼ r_poiss(x) be the (random) allocation computed by the Poisson rounding scheme for point x ∈ R. The expected welfare E[w(r_poiss(x))] can be written as E[Σ_{i=1}^n v_i(S_i)], where the expectation is taken over the internal random coins of the rounding scheme.
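As a quick numerical sanity check (not from the paper), Claim 4.5 can be spot-checked on a concrete matroid: below we take the rank function of the uniform matroid U(2, 4) as an assumed example and verify x^T H^v_S x ≤ 0 for random vectors and several sets S:

```python
import random

def rank_uniform(S, k=2):
    # Rank function of the uniform matroid U(k, m): rank(S) = min(|S|, k).
    return min(len(S), k)

def discrete_hessian(v, S, m):
    # Eq. (9): H^v_S(j, k) = v(S+{j,k}) - v(S+{j}) - v(S+{k}) + v(S).
    return [[v(S | {j, k}) - v(S | {j}) - v(S | {k}) + v(S)
             for k in range(m)] for j in range(m)]

# Spot-check negative semi-definiteness via random quadratic forms.
rng = random.Random(0)
for S in (set(), {0}, {1, 3}):
    H = discrete_hessian(rank_uniform, S, 4)
    for _ in range(200):
        x = [rng.uniform(-1, 1) for _ in range(4)]
        q = sum(x[j] * H[j][k] * x[k] for j in range(4) for k in range(4))
        assert q <= 1e-12
```

For S = ∅ the Hessian here works out to −I, and for S = {0} its nontrivial block is the all-(−1) matrix, matching the block structure described in the proof of Claim 4.5.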
By linearity of expectation, as well as the fact that the sum of concave functions is concave, it suffices to show that E[v_i(S_i)] is a concave function of x for an arbitrary player i with MRS valuation v_i. Fix player i, and use x_j, v, S as shorthand for x_ij, v_i, S_i respectively. The Poisson rounding scheme includes each item j in S independently with probability 1 − e^{−x_j}. We can now write the expected value of player i as the following function G_v : R^m → R: G_v(x_1, . . . , x_m) = Σ_{S⊆[m]} v(S) Π_{j∈S} (1 − e^{−x_j}) Π_{j∉S} e^{−x_j}. (10) The following claim, combined with Claim 4.5, completes the proof of Lemma 4.2. Claim 4.6. If all discrete Hessians of v are negative semi-definite, then G_v is concave. Proof. Assume H^v_S is negative semi-definite for each S ⊆ [m]. We work with G_v as expressed in Equation (10). We will show that the Hessian matrix of G_v at an arbitrary x ∈ R^m is negative semi-definite, which is a sufficient condition for concavity. We take the mixed derivative of G_v with respect to x_j and x_k (possibly j = k): ∂²G_v(x) / ∂x_j ∂x_k = Σ_{S⊆[m]\{j,k}} Π_{ℓ∈S} (1 − e^{−x_ℓ}) Π_{ℓ∈[m]\S} e^{−x_ℓ} [v(S) − v(S ∪ {j}) − v(S ∪ {k}) + v(S ∪ {j, k})] = Σ_{S⊆[m]} Π_{ℓ∈S} (1 − e^{−x_ℓ}) Π_{ℓ∈[m]\S} e^{−x_ℓ} H^v_S(j, k). The first equality follows by grouping the terms of Equation (10) according to S \ {j, k}. Therefore, the Hessian of G_v can be written as ∇²G_v(x) = Σ_{S⊆[m]} Π_{ℓ∈S} (1 − e^{−x_ℓ}) Π_{ℓ∈[m]\S} e^{−x_ℓ} H^v_S. (11) A non-negative weighted sum of negative semi-definite matrices is negative semi-definite. This completes the proof of the claim. A Combinatorial Auctions with Explicit Coverage Valuations. In this section, we apply our mechanism to explicitly represented coverage valuations. This demonstrates the utility of our mechanism in a concrete, non-oracle-based setting, and moreover allows us to establish an interesting separation result.
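Before moving to Appendix A, the concavity established above is easy to sanity-check numerically. The sketch below tests midpoint concavity of the per-element coverage probability from Eq. (8) at random pairs of points (a spot check, not a proof):

```python
import math, random

def cover_prob(xs):
    # Eq. (8): the probability an element ell is covered, when each item
    # j in C_ell joins S independently with probability 1 - e^{-x_j}:
    # 1 - prod_j e^{-x_j} = 1 - exp(-sum_j x_j).
    return 1.0 - math.exp(-sum(xs))

# Midpoint-concavity spot check along random pairs of points in [0, 1]^3.
rng = random.Random(0)
for _ in range(1000):
    a = [rng.random() for _ in range(3)]
    b = [rng.random() for _ in range(3)]
    mid = [(u + w) / 2 for u, w in zip(a, b)]
    assert cover_prob(mid) >= (cover_prob(a) + cover_prob(b)) / 2 - 1e-12
```

The check passes for every pair because cover_prob is exactly the concave function g(y) = 1 − e^{−y} composed with an affine map, which is the structural fact the warm-up proof rests on.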
Specifically, we show that (1) the (1 − 1/e)-approximate mechanism of Theorem 4.1 can be implemented in expected polynomial time for this problem, and (2) no polynomial-time, universally-truthful, VCG-based 9 mechanism guarantees an approximation ratio of o(n), unless NP ⊆ P/poly. The approximation ratio of 1 − 1/e is the best possible in polynomial time for this problem, even without incentive constraints, assuming P ≠ NP [19]. Ours is the first separation of its kind in the computational complexity model. 10 An n-player, m-item instance of combinatorial auctions with explicit coverage valuations is described as follows. For each player i, there is a finite set L_i, and a family A^i_1, . . . , A^i_m of subsets of L_i. The valuation function of player i is then defined as v_i(S) = |∪_{j∈S} A^i_j|. The set system (L_i, {A^i_j}_{j=1}^m) is encoded explicitly as a bipartite graph. A.1 A Truthful-in-Expectation Mechanism. As discussed previously, MRS valuations include all coverage valuations. Therefore, in order to implement the MIDR allocation rule of Section 4 for this problem, it suffices to answer lottery-value queries in time polynomial in the number of bits encoding the instance. Claim A.1. A lottery-value oracle for an explicitly represented coverage valuation can be implemented in polynomial time. Proof. Let v : 2^[m] → R_+ be a coverage valuation presented explicitly as a set system (L, {A_j}_{j=1}^m), and let x ∈ [0, 1]^m. Let S be a random set that includes each j ∈ [m] independently with probability x_j. The outcome of the lottery-value oracle of v evaluated at x is equal to the sum, over all ℓ ∈ L, of the probability that ℓ is "covered" by S; specifically, Σ_{ℓ∈L} Pr[ℓ ∈ ∪_{j∈S} A_j]. It is easy to verify that a term of this sum can be expressed in the following closed form: Pr[ℓ ∈ ∪_{j∈S} A_j] = 1 − Π_{j: A_j ∋ ℓ} (1 − x_j). This expression can be evaluated in time polynomial in the representation of the set system. This completes the proof. Claim A.1 implies the following theorem. Theorem A.2.
There is an expected polynomial-time, (1 − 1/e)-approximate, truthful-in-expectation mechanism for combinatorial auctions with explicit coverage valuations. 9 A universally-truthful mechanism is VCG-based if it is a randomization over deterministic truthful mechanisms that each implement a maximal-in-range allocation rule, the special case of MIDR where each distribution in the distributional range is supported on a single allocation. 10 We note that this separation is meaningful because there are no known universally-truthful polynomial-time mechanisms, VCG-based or otherwise, for this problem that achieve an approximation ratio better than min(n, √m). In particular, the result of [8] uses demand queries, which cannot be answered in polynomial time for explicit coverage valuations by the results of [19] and [16]. A.2 A Lower Bound on Universally Truthful VCG-Based Mechanisms. We use the following special case of [5, Theorem 1.2]: if a succinct combinatorial auction problem satisfies the regularity conditions on the valuations defined in [5], and moreover the 2-player version of the problem is APX-hard, then no polynomial-time, universally-truthful, VCG-based mechanism guarantees an approximation ratio of o(n). It is routine to verify the regularity assumptions of [5] for explicit coverage valuations. APX-hardness of the 2-player problem follows by an elementary reduction from the APX-hard problem max-cut. Given an instance of max-cut on a graph G = (V, E), we let [m] = V and L_1 = L_2 = E. For e ∈ E, i ∈ {1, 2}, and j ∈ V, we let e ∈ A^i_j if j is one of the endpoints of edge e. It is easy to check that the welfare-maximizing allocation of the resulting 2-player instance of combinatorial auctions corresponds to the maximum cut of G. Moreover, using the fact that the optimal objective value of max-cut is at least |E|/2, it is elementary to verify that the reduction preserves hardness of approximation up to a constant factor.
Therefore, combinatorial auctions with explicit coverage valuations and 2 players is APX-hard. This yields the following theorem. B Solving The Convex Program. In this section, we overcome some technical difficulties related to the solvability of convex programs. We show in Section B.1 that, in the lottery-value oracle model, the four conditions for "solvability" of convex programs, as stated in Fact C.3, are easily satisfied for convex program (7). However, an additional challenge remains: "solving" a convex program, as in Definition C.2, returns an approximately optimal solution. Indeed, the optimal solution of a convex program may be irrational in general, so this is unavoidable. We show how to overcome this difficulty if we settle for polynomial runtime in expectation. While the optimal solution x* of (7) cannot be computed explicitly, the random variable r_poiss(x*) can be sampled in expected polynomial time. The key idea is the following: sampling the random variable r_poiss(x*) rarely requires precise knowledge of x*. Depending on the coin flips of r_poiss, we decide how accurately we need to solve convex program (7) in order to compute r_poiss(x*). Roughly speaking, we show that the probability of requiring a (1 − ǫ)-approximation falls exponentially in 1/ǫ. As a result, we can sample r_poiss(x*) in expected polynomial time. We implement this plan in Section B.2 under the simplifying assumption that convex program (7) is well-conditioned, i.e., is "sufficiently concave" everywhere. In Section B.3, we show how to remove that assumption by slightly modifying our algorithm. B.1 Approximating the Convex Program. Claim B.1. There is an algorithm for combinatorial auctions with MRS valuations in the lottery-value oracle model that takes as input an instance of the problem and an approximation parameter ǫ > 0, runs in poly(n, m, log(1/ǫ)) time, and returns a (1 − ǫ)-approximate solution to convex program (7).
It suffices to show that the four conditions of Fact C.3 are satisfied in our setting. The first three are immediate from elementary combinatorial optimization (see for example [28]). It remains to show that the first-order oracle, as defined in Fact C.3, can be implemented in polynomial time in the lottery-value oracle model. The objective f(x) of convex program (7) can, by definition, be written as f(x) = Σ_i G_{v_i}(x_i), where v_i is the valuation function of player i, x_i is the vector (x_i1, . . . , x_im), and G_{v_i} is as defined in (10). By definition, G_{v_i}(x_i) is the outcome of querying the lottery-value oracle of player i with (1 − e^{−x_i1}, . . . , 1 − e^{−x_im}). Therefore, we can evaluate f(x) using n lottery-value queries, one for each player. It remains to show that we can also evaluate the (multivariate) gradient ∇f(x) of f(x). Using definition (10), we take the partial derivative corresponding to x_ij. By rearranging the sum appropriately, we get that ∂f/∂x_ij (x) = e^{−x_ij} [F_{v_i}((1 − e^{−x_i1}, . . . , 1 − e^{−x_im}) ∨ 1_j) − F_{v_i}((1 − e^{−x_i1}, . . . , 1 − e^{−x_im}) ∧ 0_j)], where F_{v_i} is as defined in Equation (3). Here, ∨ and ∧ denote the entry-wise maximum and minimum respectively, 1_j denotes the vector with all entries equal to 0 except for a 1 at position j, and 0_j denotes the vector with all entries equal to 1 except for a 0 at position j. It is clear that each entry of the gradient of f can be evaluated using two lottery-value queries. Therefore, ∇f(x) can be evaluated using 2nm lottery-value queries, two for each coordinate (i, j). This completes the proof of Claim B.1. B.2 The Well-Conditioned Case. In this section, we make the following simplifying assumption: the objective function f(x) of convex program (7), when restricted to any line in the feasible set R, has a second derivative of magnitude at least λ = (Σ_{i=1}^n v_i([m])) / (e · mn · 2^{2mn}) everywhere. Let x* be the optimal solution to convex program (7). Algorithm 1 allocates items according to the distribution r_poiss(x*).
The Poisson rounding scheme, as described in Algorithm 2, requires making m independent decisions, one for each item j. Therefore, we fix item j and show how to simulate this decision. It suffices to do the following in expected polynomial time: flip a uniform coin p_j ∈ [0, 1], and find the minimum index a(j) (if any) such that Σ_{i≤a(j)} (1 − e^{−x*_ij}) ≥ p_j. For most realizations of p_j, this can be decided using only coarse estimates x̃_ij of x*_ij. Assume we have an estimation oracle for x* that, on input δ, returns a δ-estimate x̃ of x*: specifically, |x̃_ij − x*_ij| ≤ δ for each i and j. When p_j falls outside the "uncertainty zones" of x̃, i.e., when |p_j − Σ_{i'≤i} (1 − e^{−x̃_i'j})| > δn for each i ∈ [n], it is easy to see that we can correctly determine a(j) by using x̃ in lieu of x*. The total measure of the uncertainty zones of x̃ is at most 2n²δ, therefore p_j lands outside the uncertainty zones with probability at least 1 − 2n²δ. The following claim shows that if the estimation oracle for x* can be implemented in time polynomial in log(1/δ), then we can simulate the Poisson rounding procedure in expected polynomial time. Claim B.3. Let x* be the optimal solution of convex program (7). Assume access to a subroutine B(δ) that returns a δ-estimate of x* in time poly(n, m, log(1/δ)). Then Algorithm 1 with r = r_poiss can be simulated in expected poly(n, m) time. Proof. It suffices to show that we can simulate the allocation of an item j by Algorithm 2 on input x*. The simulation proceeds as follows. Draw p_j ∈ [0, 1] uniformly at random. Start with δ = δ_0 = 1/(2n²), and let x̃ = B(δ). While |p_j − Σ_{i'≤i} (1 − e^{−x̃_i'j})| ≤ δn for some i ∈ [n] (i.e., p_j may fall inside an "uncertainty zone"), do the following: let δ = δ/2, x̃ = B(δ), and repeat. After the loop terminates, we have a sufficiently accurate estimate of x* to calculate a(j) as in Algorithm 2. It is easy to see that the above procedure is a faithful simulation of Algorithm 2 on x*.
It remains to bound its expected running time. Let δ_k = 1/(2^{k+1} n²) denote the value of δ at the kth iteration. By assumption, the kth iteration takes poly(n, m, log(1/δ_k)) = poly(n, m, log(2^{k+1} n²)) = poly(n, m, k) time. The probability that this procedure does not terminate after k iterations is at most 2n²δ_k = 1/2^k. Taken together, these two facts and a simple geometric summation imply that the expected runtime is polynomial in n and m. It remains to show that the estimation oracle B(δ) can be implemented in poly(n, m, log(1/δ)) time. At first blush, one may expect that the ellipsoid method can be used in the usual manner here. However, there is one complication: we require an estimate x̃ that is close to x* in solution space, rather than in terms of objective value. Using our assumption on the curvature of f(x), we will reduce finding a δ-estimate of x* to finding a (1 − ǫ(δ))-approximate solution to convex program (7). The dependence of ǫ on δ will be such that ǫ ≥ poly(δ)/2^{poly(n,m)}, so we can invoke Claim B.1 to deduce that B(δ) can be implemented in poly(n, m, log(1/δ)) time. Let ǫ = ǫ(δ) = δ²λ / (2 Σ_i v_i([m])). Plugging in the definition of λ, we deduce that ǫ ≥ δ²/2^{poly(n,m)}, which is the desired dependence. It remains to show that if x̃ is a (1 − ǫ)-approximate solution to (7), then x̃ is also a δ-estimate of x*. Using the fact that f(x) is concave, and moreover that its second derivative has magnitude at least λ along any line in R, it is a simple exercise to bound the distance of any point x from the optimal point x* in terms of its sub-optimality f(x*) − f(x), as follows: f(x*) − f(x) ≥ (λ/2) ||x − x*||². (12) Assume x̃ is a (1 − ǫ)-approximate solution to (7). Equation (12) implies that ||x̃ − x*||² ≤ (2/λ) ǫ f(x*) = δ² f(x*) / Σ_i v_i([m]) ≤ δ², where the last inequality follows from the fact that Σ_i v_i([m]) is an upper bound on the optimal value f(x*). Therefore, ||x̃ − x*|| ≤ δ, as needed. This completes the proof of Lemma B.2.
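The adaptive-precision sampling loop from the proof of Claim B.3 can be sketched for a single item as follows. B is the assumed estimation oracle (returning a vector within δ of the optimal column of x* in every coordinate); everything else follows the halving procedure above:

```python
import math, random

def simulate_item(B, n, rng=random):
    # Simulate Algorithm 2's decision for one item using only
    # delta-estimates of x*: halve delta until the coin p falls outside
    # every "uncertainty zone", then read off the winner.
    # B(delta) is an assumed oracle with |B(delta)[i] - x*[i]| <= delta.
    p = rng.random()
    delta = 1.0 / (2 * n * n)
    while True:
        x = B(delta)
        cum = [0.0]
        for i in range(n):
            cum.append(cum[-1] + 1.0 - math.exp(-x[i]))
        if all(abs(p - c) > delta * n for c in cum[1:]):
            # Safe: the estimate determines a(j) unambiguously.
            for i in range(n):
                if cum[i + 1] >= p:
                    return i
            return None          # item remains unassigned
        delta /= 2               # need a sharper estimate; retry
```

Each retry doubles the precision but is needed with probability that halves, which is exactly the geometric trade-off that makes the expected runtime polynomial.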
B.3 Guaranteeing Good Conditioning. In this section, we propose a modification r+_poiss of the Poisson rounding scheme r_poiss. We will argue that r+_poiss satisfies all the properties of r_poiss established so far, with one exception: the approximation guarantee of Lemma 4.3 is reduced to 1 − 1/e − 2^{−2mn}. Then we will show that r+_poiss satisfies the curvature assumption of Lemma B.2, demonstrating that said assumption may be removed. Therefore Algorithm 1, instantiated with r = r+_poiss for combinatorial auctions with MRS valuations in the lottery-value oracle model, is (1 − 1/e − 2^{−2mn})-approximate and can be implemented in expected poly(n, m) time. Finally, we show in Remark B.4 how to recover the 2^{−2mn} term to get a clean 1 − 1/e approximation ratio, as claimed in Theorem 4.1. Let µ = 2^{−2mn}. We define r+_poiss in Algorithm 3. Intuitively, r+_poiss at first makes a tentative allocation using r_poiss. Then, with small probability µ, it cancels said allocation: it sets (S_1, . . . , S_n) = (∅, ∅, . . . , ∅), draws q_2 ∈ [0, 1] uniformly at random, and if q_2 ∈ [0, β], chooses a "lucky winner" i* uniformly at random and gives him all the items, where β is defined as the fraction of items allocated in the original tentative allocation. The motivation behind this seemingly bizarre definition of r+_poiss is purely technical: as we will see, it can be thought of as adding "concave noise" to r_poiss. We can write the expected welfare E[w(r+_poiss(x))] as follows; we use linearity of expectation and the fact that β is independent of the choice of i* to simplify the expression: E[w(r+_poiss(x))] = E[(1 − µ) w(r_poiss(x)) + µ β v_{i*}([m])] = (1 − µ) E[w(r_poiss(x))] + µ E[β] E[v_{i*}([m])] = (1 − µ) E[w(r_poiss(x))] + µ E[β] Σ_i v_i([m]) / n. Observe that r_poiss allocates item j with probability Σ_i (1 − e^{−x_ij}). Therefore, the expectation of β is Σ_{ij} (1 − e^{−x_ij}) / m.
This gives: E[w(r+_poiss(x))] = (1 − µ) E[w(r_poiss(x))] + (µ / mn) Σ_i v_i([m]) Σ_{i,j} (1 − e^{−x_ij}). (13) It is clear that the expected welfare when using r = r+_poiss is within a factor 1 − µ = 1 − 2^{−2mn} of the expected welfare when using r = r_poiss in the instantiation of Algorithm 1. Using Lemma 4.3, we conclude that r+_poiss is a (1 − 1/e − 2^{−2mn})-approximate rounding scheme. Moreover, using Lemma 4.2, as well as the fact that (1 − e^{−x_ij}) is a concave function, we conclude that r+_poiss is a convex rounding scheme. This establishes the analogues of Lemmas 4.3 and 4.2 for r+_poiss. It is elementary to verify that our proof of Lemma B.2 can be adapted to r+_poiss as well. It remains to show that r+_poiss is "sufficiently concave"; this establishes that the conditioning assumption of Section B.2 is unnecessary for r+_poiss. We will show that expression (13) is a concave function with curvature of magnitude at least λ = (Σ_{i=1}^n v_i([m])) / (e · mn · 2^{2mn}) everywhere. Since the curvature of concave functions is always non-positive, and moreover the curvature of the sum of two functions is the sum of their curvatures, it suffices to show that the second term of the sum (13) has curvature of magnitude at least λ. We note that the curvature of Σ_{ij} (1 − e^{−x_ij}) has magnitude at least e^{−1} over x ∈ [0, 1]^{n×m}. Therefore, the curvature of the second term of (13) has magnitude at least (µ / mn) Σ_i v_i([m]) · e^{−1} = λ, as needed. Remark B.4. In this section, we sacrificed 2^{−2mn} in the approximation ratio in order to guarantee expected polynomial runtime of our algorithm even when convex program (7) is not well-conditioned. This loss can be recovered to get a clean 1 − 1/e approximation as follows. Given our (1 − 1/e − 2^{−2mn})-approximate MIDR algorithm A, construct the following algorithm A': given an instance of combinatorial auctions, A' runs A on the instance with probability 1 − e·2^{−2mn}, and with the remaining probability solves the instance optimally in exponential time O(2^{2mn}).
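The ratio recovery in this mixture can be checked numerically; writing ε = 2^{−2mn}, a short calculation shows the slack over 1 − 1/e is exactly e·ε² (a sanity check of the arithmetic, not part of the paper's argument):

```python
import math

# Remark B.4 arithmetic: run the (1 - 1/e - eps)-approximate algorithm A
# with probability 1 - e*eps and the exact (exponential-time) solver with
# probability e*eps. The mixed ratio exceeds 1 - 1/e by exactly e*eps^2.
for mn in range(1, 30):
    eps = 2.0 ** (-2 * mn)
    ratio = (1 - math.e * eps) * (1 - 1 / math.e - eps) + math.e * eps
    assert ratio >= 1 - 1 / math.e
    assert abs(ratio - (1 - 1 / math.e) - math.e * eps * eps) < 1e-15
```

The e·ε² slack also explains the choice of mixing probability e·ε: it is the smallest weight on the exact solver for which the two error terms cancel without a remainder.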
It was shown in [12] that a random composition of MIDR mechanisms is MIDR; therefore A′ is MIDR. The expected runtime of A′ is bounded by the expected runtime of A plus e · 2^{−2mn} · O(2^{2mn}) = O(1). Finally, the expected approximation ratio of A′ is the weighted average of the approximation ratio of A and the optimal approximation ratio 1, and is at least (1 − e · 2^{−2mn})(1 − 1/e − 2^{−2mn}) + e · 2^{−2mn} ≥ 1 − 1/e.

C Additional Preliminaries

C.1 Matroid Theory

In this section, we review some basics of matroid theory. For a more comprehensive reference, we refer the reader to [26]. A matroid M is a pair (X, I), where X is a finite ground set and I is a non-empty family of subsets of X satisfying the following two properties. (1) Downward closure: if S belongs to I, then so do all subsets of S. (2) The exchange property: whenever T, S ∈ I with |T| < |S|, there is some x ∈ S \ T such that T ∪ {x} ∈ I. Elements of I are often referred to as the independent sets of the matroid. Subsets of X that are not in I are often called dependent. We associate with matroid M a set function rank_M : 2^X → N, known as the rank function of M, defined as follows: rank_M(A) = max_{S∈I} |S ∩ A|. Equivalently, the rank of a set A in matroid M is the maximum size of an independent set contained in A.

C.2 Convex Optimization

In this section, we distill some basics of convex optimization. For more details, see [2].

Definition C.1. A maximization problem is given by a set Π of instances (P, c), where P is a subset of some Euclidean space, c : P → R, and the goal is to maximize c(x) over x ∈ P. We say Π is a convex maximization problem if for every (P, c) ∈ Π, P is a compact convex set and c : P → R is concave. If c : P → R_+ for every instance of Π, we say Π is non-negative.

Definition C.2.
We say a non-negative maximization problem Π is R-solvable in polynomial time if there is an algorithm that takes as input the representation of an instance I = (P, c) ∈ Π (where we use |I| to denote the number of bits in the representation) and an approximation parameter ε, and in time poly(|I|, log(1/ε)) outputs x ∈ P such that c(x) ≥ (1 − ε) max_{y∈P} c(y).

Fact C.3. Consider a non-negative convex maximization problem Π. If the following are satisfied, then Π is R-solvable in polynomial time using the ellipsoid method. We let I = (P, c) denote an instance of Π, and let m denote the dimension of the ambient Euclidean space.

1. Polynomial dimension: m is polynomial in |I|.

2. Starting ellipsoid: There is an algorithm that computes, in time poly(|I|), a point c ∈ R^m, a matrix A ∈ R^{m×m}, and a number V ∈ R such that the following hold. We use E(c, A) to denote the ellipsoid given by center c and linear transformation A.

3. Separation oracle for P: There is an algorithm that takes as input I and x ∈ R^m, and in time poly(|I|, |x|), where |x| denotes the size of the representation of x, outputs "yes" if x ∈ P, and otherwise outputs h ∈ R^m such that h^T x < h^T y for every y ∈ P.

4. First-order oracle for c: There is an algorithm that takes as input I and x ∈ R^m, and in time poly(|I|, |x|) outputs c(x) ∈ R and ∇c(x) ∈ R^m.

D Additional Technical Details and Commentary

D.1 Computing Payments

In this section, we show how to efficiently compute truth-telling payments for our mechanism. In fact, as shown below, this is possible for any maximal-in-distributional-range allocation rule for combinatorial auctions given as a black box.

Lemma D.1. Let A be an MIDR allocation rule for combinatorial auctions, and let v_1, . . . , v_n be input valuations. Assume black-box access to A, and value oracle access to {v_i}_{i=1}^n. We can compute, with poly(n) overhead in runtime, payments p_1, . . .
, p_n such that E[p_i] equals the VCG payment of player i for MIDR allocation rule A on input v_1, . . . , v_n.

Proof. Without loss of generality, it suffices to show how to compute p_1. Let 0 : 2^[m] → R be the valuation evaluating to 0 at each bundle. Recall (see e.g. [25]) that the VCG payment of player 1 is equal to

E_{T∼A(0, v_2, ..., v_n)}[Σ_{i=2}^n v_i(T_i)] − E_{S∼A(v_1, ..., v_n)}[Σ_{i=2}^n v_i(S_i)].   (14)

Let (S_1, . . . , S_n) be a sample from A(v_1, . . . , v_n), and let (T_1, . . . , T_n) be a sample from A(0, v_2, . . . , v_n). Let p_1 = Σ_{i=2}^n v_i(T_i) − Σ_{i=2}^n v_i(S_i). By linearity of expectation, the expectation of p_1 equals the expression in (14). This completes the proof.

We note that the mechanism resulting from Lemma D.1 is individually rational in expectation, and each payment is non-negative in expectation. We leave open the question of whether it is possible to enforce individual rationality and non-negative payments for our mechanism ex post.

D.2 Beyond Matroid Rank Sum Valuations

In this section, we discuss the prospect of extending our result beyond matroid rank sum valuations. First, we argue that our restriction to a subset of submodular functions is not merely an artifact of our analysis: we exhibit a submodular function that is not in the matroid rank sum family, for which, moreover, the Poisson rounding scheme can be non-convex when a player has this function as his valuation. Then, we briefly argue that our mechanism may yet apply to some valuations that are not matroid rank sums.

We define a budget additive function v on four items {1, 2, 3, 4}. Three of the items are "small", one item is "big", and the budget equals the value of the big item. We can show that v is not a matroid rank sum function by invoking Claim 4.5. Specifically, one can manually check that the discrete Hessian matrix H^v_∅ of v at ∅ (see Definition 4.4) is not negative semi-definite.
Moreover, for a player with valuation v, the Poisson rounding scheme renders the player's expected value function G^v(x) (Equation (10)) non-concave in x: by Equation (11), the Hessian matrix of G^v(x) approaches the discrete Hessian H^v_∅ as x tends to zero. Since H^v_∅ is not negative semi-definite, G^v(x) is non-concave for x near zero. We note that we can construct a large family of similar counterexamples by simply increasing the number of "small" items in v.

Finally, we observe that our mechanism may apply to some valuations that are not matroid rank sums. We used only two properties of MRS functions: their discrete Hessian matrices are negative semi-definite (Claim 4.5, which is used to prove Lemma 4.2), and they are submodular (used to prove Lemma 4.3). Therefore, our result extends directly to the class of all set functions satisfying both of these properties. We leave open the question of whether there exist interesting functions in this class that are not matroid rank sums. More generally, understanding the class of set functions with negative semi-definite discrete Hessian matrices (in particular, the relationship of this class to other classes of set functions studied in the literature) may be an interesting direction for future inquiry.
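The discrete Hessian test above is easy to carry out mechanically. The following sketch instantiates the budget additive counterexample with hypothetical concrete numbers (three small items of value 1, one big item of value 2, and budget 2, consistent with the description above) and exhibits a direction along which the quadratic form of H^v_∅ is positive:

```python
def v(S, a=(1, 1, 1, 2), B=2):
    """Budget additive valuation: v(S) = min(B, sum of item values in S).
    Items 0-2 are 'small' (value 1); item 3 is 'big' (value 2 = budget).
    The concrete numbers are an illustrative choice, not taken from the text."""
    return min(B, sum(a[j] for j in S))

m = 4
# Discrete Hessian of v at the empty set, per Equation (9):
# H[j][k] = v({j, k}) - v({j}) - v({k}) + v(emptyset)
H = [[v({j, k}) - v({j}) - v({k}) + v(set()) for k in range(m)]
     for j in range(m)]

# A negative semi-definite matrix satisfies x^T H x <= 0 for every x.
# The direction x = (1, 1, 1, -1) witnesses that H is NOT negative semi-definite:
x = [1, 1, 1, -1]
quad = sum(H[j][k] * x[j] * x[k] for j in range(m) for k in range(m))
print(quad)  # prints 1, a positive value
```

By Claim 4.5, a positive quadratic form certifies that this budget additive v is not a matroid rank sum function.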
arXiv:1103.0040
We design an expected polynomial-time, truthful-in-expectation, (1 − 1/e)-approximation mechanism for welfare maximization in a fundamental class of combinatorial auctions. Our results apply to bidders with valuations that are matroid rank sums (MRS), which encompass most concrete examples of submodular functions studied in this context, including coverage functions, matroid weighted-rank functions, and convex combinations thereof. Our approximation factor is the best possible, even for known and explicitly given coverage valuations, assuming P ≠ NP. Ours is the first truthful-in-expectation and polynomial-time mechanism to achieve a constant-factor approximation for an NP-hard welfare maximization problem in combinatorial auctions with heterogeneous goods and restricted valuations. Our mechanism is an instantiation of a new framework for designing approximation mechanisms based on randomized rounding algorithms. A typical such algorithm first optimizes over a fractional relaxation of the original problem, and then randomly rounds the fractional solution to an integral one. With rare exceptions, such algorithms cannot be converted into truthful mechanisms. The high-level idea of our mechanism design framework is to optimize directly over the (random) output of the rounding algorithm, rather than over the input to the rounding algorithm. This approach leads to truthful-in-expectation mechanisms, and these mechanisms can be implemented efficiently when the corresponding objective function is concave. For bidders with MRS valuations, we give a novel randomized rounding algorithm that leads to both a concave objective function and a (1 − 1/e)-approximation of the optimal welfare.
Despite intense study, prior to this work there were no truthful-in-expectation and polynomial-time constant-factor approximation mechanisms for welfare maximization with any non-trivial subclass of submodular bidder valuations. The best previous results, which apply to all submodular valuations, are a truthful-in-expectation @math approximation mechanism in the communication complexity model due to Dobzinski, Fu and Kleinberg @cite_15, and a universally-truthful @math approximation mechanism due to Dobzinski @cite_18. (A mechanism is universally truthful if, for every realization of the mechanism's coins, each player maximizes his payoff by bidding truthfully; universally truthful mechanisms are defined formally in Section @math.)
Abstracts of the works cited above:

@cite_15: "This short note exhibits a truthful-in-expectation @math-approximation mechanism for combinatorial auctions with subadditive bidders that uses polynomial communication."

@cite_18: "This paper discusses two advancements in the theory of designing truthful randomized mechanisms. Our first contribution is a new framework for developing truthful randomized mechanisms. The framework enables the construction of mechanisms with polynomially small failure probability, in contrast to previous mechanisms that fail with constant probability. Another appealing feature of the new framework is that bidding truthfully is a strongly dominant strategy. The power of the framework is demonstrated by an @math-mechanism for combinatorial auctions that succeeds with probability @math. The other major result of this paper is an O(log m log log m) randomized truthful mechanism for combinatorial auctions with subadditive bidders. The best previously-known truthful mechanism for this setting guaranteed an approximation ratio of @math. En route, the new mechanism also provides the best approximation ratio for combinatorial auctions with submodular bidders currently achieved by truthful mechanisms."
From Convex Optimization to Randomized Mechanisms: Toward Optimal Combinatorial Auctions *
The overarching goal of algorithmic mechanism design is to design computationally efficient algorithms that solve or approximate fundamental optimization problems in which the underlying data is a priori unknown to the algorithm. A central example in both theory and practice is welfare maximization in combinatorial auctions. Here, there are m items for sale and n bidders vying for them. Each bidder i has a private valuation v_i(S) for each subset S of the items.^1 The welfare of an allocation S_1, . . . , S_n of the items to the bidders is Σ_{i=1}^n v_i(S_i). Since valuations are initially unknown to the seller, computing a near-optimal allocation requires eliciting information from the (self-interested) bidders, for example via a bid. A mechanism is a protocol that extracts such information and computes an allocation of the items and payments.

The "holy grail" for a mechanism designer is to devise a computationally efficient and incentive-compatible mechanism with an approximation factor that matches the best one known for the (easier) problem in which the underlying data is provided up front.^2 Such results are usually difficult to obtain, and in some cases are provably impossible using deterministic mechanisms [20, 27]. The space of randomized mechanisms, however, is much more promising, as shown recently in [9, 12].^3 This paper provides such a positive result for a fundamental class of combinatorial auctions, via a novel randomized mechanism design framework based on convex optimization.

Algorithmic mechanism design is difficult because incentive compatibility severely limits how the algorithm can compute an outcome, which prohibits use of most of the ingenious approximation algorithms that have been developed for different optimization problems. More concretely, the only general approach known for designing (randomized) truthful mechanisms is via maximal-in-distributional-range (MIDR) algorithms [9, 12].
An MIDR algorithm fixes a set of distributions over feasible solutions, the distributional range, independently of the valuations reported by the self-interested participants, and outputs a random sample from the distribution that maximizes expected (reported) welfare. The Vickrey-Clarke-Groves (VCG) payment scheme renders an MIDR algorithm truthful-in-expectation.

Most approximation algorithms are not MIDR algorithms. Consider, as an example, a randomized rounding algorithm for welfare maximization in combinatorial auctions (e.g. [14, 11]). We can view such an algorithm as the composition of two algorithms, a relaxation algorithm and a rounding algorithm. The relaxation algorithm is deterministic and takes as input the problem data (players' valuations v), and outputs the (fractional) solution to a linear programming relaxation of the welfare-maximization problem that is optimal for the objective function defined by v. The rounding algorithm is randomized and takes as input this fractional solution and outputs a feasible allocation of the items to the players. Taken together, these algorithms assign to each input v a probability distribution D(v) over integral allocations. For almost all known randomized rounding algorithms, there is an input v such that the expected objective function value E_{y∼D(v)}[v^T y] under the distribution D(v) is inferior to the expected value E_{y∼D(w)}[v^T y] under a distribution D(w) that the algorithm

^1 Each bidder has an exponential number of private values; we ignore the attendant representation issues for the moment.

^2 In this paper, by "incentive compatible" we generally mean a (possibly randomized) mechanism such that every participant maximizes its expected payoff by truthfully revealing its information to the mechanism, no matter how the other participants behave. Such mechanisms are called truthful-in-expectation, and are defined formally in Section 2.2.
^3 We note that the impressively general positive results for implementations in Bayes-Nash equilibria that were recently obtained in [18, 17, 1] do not apply to the stronger incentive-compatibility notions used in this paper and in most of the algorithmic mechanism design literature.

would produce for a different input w, and this is a violation of the MIDR property. Informally, such violations are inevitable unless a rounding algorithm is designed explicitly to avoid them, on top of the usual approximation requirements.

The exception that proves the rule is the important and well-known mechanism design framework of Lavi and Swamy [21]. Lavi and Swamy [21] begin with the foothold that the fractional welfare maximization problem (the relaxation algorithm above) can be made truthful by charging appropriate VCG payments. Further, they identify a very special type of rounding algorithm that preserves truthfulness: if the expected allocation produced by the rounding algorithm is always identical to its input, component-wise, up to some universal scaling factor α, then composing the two algorithms easily yields an α-approximate truthful-in-expectation mechanism (after scaling the fractional VCG payments by α). Perhaps surprisingly, there are some interesting problems, such as welfare maximization in combinatorial auctions with general valuations, that admit such a rounding algorithm with a best-possible approximation guarantee (assuming P ≠ NP). However, most NP-hard welfare maximization problems do not seem to admit good randomized rounding algorithms of the rigid type required by this design framework.

Our Contributions

We introduce a new approach to designing truthful-in-expectation approximation mechanisms based on randomized rounding algorithms; we outline it here for the special case of welfare maximization in combinatorial auctions.
The high-level idea is to optimize directly on the outcome of the rounding algorithm, rather than merely on the outcome of the relaxation algorithm (the input to the rounding algorithm). In other words, let r(x) denote a randomized rounding algorithm, mapping fractional allocations to integer allocations. Given players' valuations v, we compute a fractional allocation x that maximizes the expected welfare E_{y∼r(x)}[v^T y] over all fractional allocations x. This methodology evidently gives MIDR algorithms. The resulting optimization problem is often intractable, but when the rounding algorithm r and the space of valuations v are such that the function E_{y∼r(x)}[v^T y] is always concave in x, in which case we call r a convex rounding algorithm, it can be solved in polynomial time using convex programming (modulo numerical issues that we address later).

We use this design framework to give an expected polynomial-time, truthful-in-expectation, (1 − 1/e)-approximation mechanism for welfare maximization in combinatorial auctions in which bidders' valuations are matroid rank sums (MRS): non-negative linear combinations of matroid rank functions on the items. MRS valuations are submodular and encompass most concrete examples of submodular functions that have been studied in the combinatorial auctions literature, including all coverage functions and matroid weighted-rank functions (see Section 2.4 for formal definitions). Our approximation guarantee is optimal, assuming P ≠ NP, even for the special case of the welfare maximization problem with known and explicitly presented coverage valuations. Our mechanism is the first truthful-in-expectation and polynomial-time mechanism to achieve a constant-factor approximation for any NP-hard special case of combinatorial auctions that doesn't assume that there are multiple copies of every type of item. It works with "black-box" valuations, provided that they support a randomized analog of a value oracle.
We also give a (non-oracle-based) version of the mechanism for explicitly represented coverage valuations.

Preliminaries

Optimization Problems

We consider optimization problems Π of the following general form. Each instance of Π consists of a feasible set S and an objective function w : S → R. The solution to an instance of Π is given by the following optimization problem.

maximize w(x) subject to x ∈ S.   (1)

Mechanism Design Basics

We consider mechanism design optimization problems of the form in (1). In such problems, there are n players, where each player i has a valuation function v_i : S → R. We are concerned with welfare maximization problems, where the objective is w(x) = Σ_{i=1}^n v_i(x).

We consider direct-revelation mechanisms for optimization mechanism design problems. Such a mechanism comprises an allocation rule, which is a function from (hopefully truthfully) reported valuation functions v_1, . . . , v_n to an outcome x ∈ S, and a payment rule, which is a function from reported valuation functions to a required payment from each player. We allow the allocation and payment rules to be randomized. A mechanism with allocation and payment rules A and p is truthful-in-expectation if every player always maximizes its expected payoff by truthfully reporting its valuation function, meaning that

E[v_i(A(v)) − p_i(v)] ≥ E[v_i(A(v'_i, v_{−i})) − p_i(v'_i, v_{−i})]   (2)

for every player i, (true) valuation function v_i, (reported) valuation function v'_i, and (reported) valuation functions v_{−i} of the other players. The expectation in (2) is over the coin flips of the mechanism. If (2) holds for every flip of the coins, rather than merely in expectation, we call the mechanism universally truthful. The mechanisms that we design can be thought of as randomized variations on the classical VCG mechanism, as we explain next.
Recall that the VCG mechanism is defined by the (generally intractable) allocation rule that selects the welfare-maximizing outcome with respect to the reported valuation functions, and the payment rule that charges each player i a bid-independent "pivot term" minus the reported welfare earned by the other players in the selected outcome. This (deterministic) mechanism is truthful; see e.g. [25]. Now let dist(S) denote the probability distributions over a feasible set S, and let D ⊆ dist(S) be a compact subset of them. The corresponding maximal-in-distributional-range (MIDR) allocation rule is defined as follows: given reported valuation functions v_1, . . . , v_n, return an outcome that is sampled randomly from a distribution D* ∈ D that maximizes the expected welfare E_{x∼D}[Σ_i v_i(x)] over all distributions D ∈ D. Analogous to the VCG mechanism, there is a (randomized) payment rule that can be coupled with this allocation rule to yield a truthful-in-expectation mechanism (see [9]).

Combinatorial Auctions

In combinatorial auctions there is a set [m] = {1, 2, . . . , m} of items and a set [n] = {1, 2, . . . , n} of players. Each player i has a valuation function v_i : 2^[m] → R_+ that is normalized (v_i(∅) = 0) and monotone (v_i(A) ≤ v_i(B) whenever A ⊆ B). A feasible solution is an allocation (S_1, . . . , S_n), where S_i denotes the items assigned to player i, and {S_i}_i are mutually disjoint subsets of [m]. Player i's value for outcome (S_1, . . . , S_n) is equal to v_i(S_i). The goal is to choose the allocation maximizing social welfare: Σ_i v_i(S_i).

Matroid Rank Sum Valuations

We now define matroid rank sum valuations. Relevant concepts from matroid theory are reviewed in Appendix C.1. A set function v : 2^[m] → R_+ is a matroid rank sum (MRS) function if there exist matroid rank functions u_1, . . . , u_κ : 2^[m] → N and associated non-negative weights w_1, . . . , w_κ ∈ R_+, such that v(S) = Σ_{ℓ=1}^κ w_ℓ u_ℓ(S) for all S ⊆ [m]. We do not assume any particular representation of MRS valuations, and require only oracle access to their (expected) values on certain distributions (see Section 2.5).
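As a small illustration of the definition, a coverage function is an MRS function in which every weight is 1 and every matroid has rank 1: each ground element ℓ contributes the rank function of the rank-1 matroid whose non-loop elements are the items covering ℓ. A sketch on a hypothetical instance (the sets A_j and the ground set are made up for illustration):

```python
def coverage(T, A):
    """Coverage valuation: v(T) = |union of A_j over j in T|."""
    return len(set().union(*(A[j] for j in T))) if T else 0

def rank1_uniform(T, C):
    """Rank function of the rank-1 matroid whose non-loop elements are C:
    rank(T) = min(1, |T intersect C|)."""
    return min(1, len(set(T) & C))

# Hypothetical instance: 3 items covering elements of ground set {a, b, c, d}.
A = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"d"}}
L = {"a", "b", "c", "d"}
C = {ell: {j for j in A if ell in A[j]} for ell in L}  # items covering ell

# The coverage function equals a sum of matroid rank functions, one per element:
for T in [set(), {0}, {0, 1}, {1, 2}, {0, 1, 2}]:
    assert coverage(T, A) == sum(rank1_uniform(T, C[ell]) for ell in L)
```

The same decomposition works for any explicit coverage function, which is how coverage valuations fit the MRS definition above.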
MRS functions include most concrete examples of monotone submodular functions that appear in the literature; this includes coverage functions, matroid weighted-rank functions, and all convex combinations thereof. Moreover, as shown in [19], 1 − 1/e is the best approximation possible in polynomial time for combinatorial auctions with MRS valuations unless P = NP, even ignoring strategic considerations. That being said, we note that some interesting submodular functions, such as some budget additive functions, are not in the matroid rank sum family (see Appendix D.2).

Lotteries and Oracles

A value oracle for a valuation v : 2^[m] → R takes as input a set S ⊆ [m] and returns v(S). We define an analogous oracle that takes in a description of a simple lottery over subsets of [m] and outputs the expectation of v over this lottery. Given a vector x ∈ [0, 1]^m of probabilities on the items, let D_x be the distribution over S ⊆ [m] that includes each item j in S independently with probability x_j. We use F_v(x) to denote the expected value of v(S) over draws S ∼ D_x from this lottery.

Definition 2.2. A lottery-value oracle for a set function v : 2^[m] → R takes as input a vector x ∈ [0, 1]^m and outputs

F_v(x) = E_{S∼D_x}[v(S)] = Σ_{S⊆[m]} v(S) Π_{j∈S} x_j Π_{j∉S} (1 − x_j).   (3)

We note that F_v is simply the well-studied multi-linear extension of v (see for example [6, 29]). In addition to being the natural randomized analog of a value oracle, a lottery-value oracle is easily implemented for various succinctly represented examples of MRS valuations, like explicit coverage functions (see Appendix A). We also note that lottery-value oracle queries can be approximated arbitrarily well with high probability using a polynomial number of value oracle queries (see [29]). Unfortunately, we are not able to reconcile the incurred sampling errors, small as they may be, with the requirement that our mechanism be exactly truthful.
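The expectation in Definition 2.2 can be checked by brute-force enumeration on tiny instances. The sketch below evaluates Equation (3) directly for a hypothetical two-item coverage valuation and compares it against the closed form coverage functions admit (each element ℓ is covered with probability 1 − Π_{j∈C_ℓ}(1 − x_j)). The enumeration is exponential in m, so this is only a sanity check, not an implementation of the oracle:

```python
import itertools
import math

def F(v, x):
    """Lottery value F_v(x) = E_{S ~ D_x}[v(S)] per Equation (3), by
    enumerating all 2^m subsets S with their product probabilities."""
    m = len(x)
    total = 0.0
    for bits in itertools.product([0, 1], repeat=m):
        S = {j for j in range(m) if bits[j]}
        p = 1.0
        for j in range(m):
            p *= x[j] if bits[j] else (1.0 - x[j])
        total += p * v(S)
    return total

# Hypothetical coverage instance: item 0 covers {a, b}, item 1 covers {b, c}.
A = {0: {"a", "b"}, 1: {"b", "c"}}
v = lambda S: len(set().union(*[A[j] for j in S])) if S else 0
x = [0.3, 0.6]

# Closed form for coverage: sum over elements of Pr[element covered].
closed = sum(1 - math.prod(1 - x[j] for j in A if ell in A[j])
             for ell in {"a", "b", "c"})
assert math.isclose(F(v, x), closed)
```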
We suspect that relaxing our solution concept to approximate truthfulness, also known as ε-truthfulness, would remove this difficulty and allow us to relax our oracle model to the more traditional value oracles.

Convex Rounding Framework

Relaxations and Rounding Schemes

Let Π be an optimization problem. A relaxation Π' of Π defines, for every (S, w) ∈ Π, a convex and compact relaxed feasible set R ⊆ R^m that is independent of w (we suppress the dependence on S), and an extension w_R : R → R of the objective w to the relaxed feasible set R. This gives the following relaxed optimization problem.

maximize w_R(x) subject to x ∈ R.   (4)

Generally, the extension is defined so that it is computationally tractable to find a point x ∈ R that maximizes w_R(x) (possibly approximately). For example, S could be the allocations of m items to n bidders in a combinatorial auction, w(x) the welfare of an allocation, R the feasible region of a linear programming relaxation, and w_R the natural linear extension of w to fractional allocations. The solution x ∈ R to the relaxed problem need not be in S. A rounding scheme for relaxation Π' of Π defines, for each feasible set S of Π and its corresponding relaxed set R, a (possibly randomized) function r : R → S. Since our rounding scheme will be randomized, we will frequently use r(x) to denote the distribution over S resulting from rounding the point x ∈ R. Commonly, the rounding scheme satisfies the following approximation guarantee: E_{y∼r(x)}[w(y)] ≥ α · w_R(x) for every x ∈ R. In this case, if x* maximizes w_R over R and w_R agrees with w on S, then E_{y∼r(x*)}[w(y)] ≥ α · max_{y∈S} w(y).

Convex Rounding Schemes and MIDR

Our technique is motivated by the following observation: instead of solving the relaxed problem and subsequently rounding the solution, why not optimize directly on the outcome of the rounding scheme? In particular, consider the following relaxation of Π that "absorbs" rounding scheme r into the objective.
maximize E_{y∼r(x)}[w(y)] subject to x ∈ R.   (5)

The solution to this problem rounds to the best possible distribution in the range of the rounding scheme, over all possible fractional solutions in R. While this problem is often intractable, it always leads to an MIDR allocation rule.

Lemma 3.1. Algorithm 1 is an MIDR allocation rule.

Algorithm 1 MIDR Allocation Rule via Optimizing over Output of Rounding Scheme
Parameter: Feasible set S of Π.
Parameter: Relaxed feasible set R ⊆ R^m.
Parameter: (Randomized) rounding scheme r : R → S.
Input: Objective w : S → R satisfying (S, w) ∈ Π.
Output: Feasible solution z ∈ S.
1: Let x* maximize E_{y∼r(x)}[w(y)] over x ∈ R.
2: Let z ∼ r(x*).

We say a rounding scheme r : R → S is α-approximate for α ≤ 1 if w(x) ≥ E_{y∼r(x)}[w(y)] ≥ α · w(x) for every x ∈ S. When r is α-approximate, so is the allocation rule of Algorithm 1.

For most rounding schemes in the approximation algorithms literature, the optimization problem (5) cannot be solved in polynomial time (assuming P ≠ NP). The reason is that for any rounding scheme that always rounds a feasible solution to itself, i.e., r(x) = x for all x ∈ S, an optimal solution to (5) is also optimal for (1). Thus, in this case, hardness of the original problem (1) implies hardness of (5). We conclude that we need to design rounding schemes with the unusual property that r(x) ≠ x for some x ∈ S. We call a (randomized) rounding scheme r : R → S convex if E_{y∼r(x)}[w(y)] is a concave function of x ∈ R. Under additional technical conditions, discussed in the context of combinatorial auctions in Appendix B, the convex program (5) can then be solved efficiently (e.g., using the ellipsoid method). This reduces the design of a polynomial-time α-approximate MIDR algorithm to designing a polynomial-time α-approximate convex rounding scheme. Summarizing, Lemmas 3.1, 3.2, and 3.3 give the following informal theorem.

Theorem 3.4.
(Informal) Let Π be a welfare-maximization optimization problem, and let Π' be a relaxation of Π. If there exists a polynomial-time, α-approximate, convex rounding scheme for Π', then there exists a truthful-in-expectation, polynomial-time, α-approximate mechanism for Π.

Of course, there is no reason a priori to believe that useful convex rounding schemes, let alone ones computable in polynomial time, exist for any important problems. We show in Section 4 that they do in fact exist and yield new results for an interesting class of combinatorial auctions.

Combinatorial Auctions

In this section, we use the framework of Section 3 to prove our main result.

Theorem 4.1. There is a (1 − 1/e)-approximate, truthful-in-expectation mechanism for combinatorial auctions with matroid rank sum valuations in the lottery-value oracle model, running in expected poly(n, m) time.

We formulate welfare maximization in combinatorial auctions as an optimization problem Π. An instance (S, w) ∈ Π is given by the following integer program with feasible set S contained in {0, 1}^{n×m}. Variable x_ij indicates whether item j is allocated to player i, and w(x) denotes the social welfare of allocation x.

maximize w(x) = Σ_i v_i({j : x_ij = 1})
subject to Σ_i x_ij ≤ 1, for j ∈ [m];
x_ij ∈ {0, 1}, for i ∈ [n], j ∈ [m].   (6)

We let the relaxed feasible set R = R(S) be the result of relaxing the constraints x_ij ∈ {0, 1} of (6) to 0 ≤ x_ij ≤ 1.

We structure the proof of Theorem 4.1 as follows. We define the Poisson rounding scheme, which we denote by r_poiss, in Section 4.1. We prove that r_poiss is (1 − 1/e)-approximate (Lemma 4.3) and convex (Lemma 4.2). Lemmas 3.1, 3.2 and 4.3, taken together, imply that Algorithm 1, when instantiated for combinatorial auctions with r = r_poiss, is a (1 − 1/e)-approximate MIDR allocation rule. Lemma 4.2 reduces implementing this allocation rule to solving a convex program.
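For intuition about why the convex program solved by our instantiation is indeed convex (Lemma 4.2), consider the simplest MRS case of additive valuations: under the Poisson rounding scheme the objective has the closed form Σ_{i,j} v_ij (1 − e^{−x_ij}), a non-negative combination of concave univariate functions. A sketch (with made-up numbers) checking midpoint concavity numerically:

```python
import math
import random

def f(x, vals):
    """E[w(r_poiss(x))] for additive valuations vals[i][j]: item j reaches
    player i independently with probability 1 - e^{-x_ij}, so the expected
    welfare is sum_ij vals[i][j] * (1 - e^{-x_ij}), a concave function of x."""
    return sum(vals[i][j] * (1.0 - math.exp(-x[i][j]))
               for i in range(len(x)) for j in range(len(x[0])))

random.seed(1)
n, m = 2, 3
vals = [[random.random() for _ in range(m)] for _ in range(n)]

# Two feasible fractional points (column sums <= 1) and their midpoint:
a = [[0.2, 0.7, 0.1], [0.5, 0.1, 0.8]]
b = [[0.6, 0.2, 0.4], [0.3, 0.5, 0.2]]
mid = [[(a[i][j] + b[i][j]) / 2 for j in range(m)] for i in range(n)]

# Midpoint concavity: f(mid) >= (f(a) + f(b)) / 2.
assert f(mid, vals) >= (f(a, vals) + f(b, vals)) / 2
```

The general MRS case replaces this closed form with the argument via discrete Hessians given in Section 4.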
In Appendix B, we handle the technical and numerical issues related to solving convex programs. First, we prove that our instantiation of Algorithm 1 for combinatorial auctions can be implemented in expected polynomial time using the ellipsoid method, under a simplifying assumption on the numerical conditioning of our convex program (Lemma B.2). Then we show in Section B.3 that this assumption can be removed by slightly modifying our algorithm. Finally, we prove that truth-telling VCG payments can be computed efficiently in Lemma D.1. Taken together, these lemmas complete the proof of Theorem 4.1. In Appendix D.2, we discuss prospects for extending our result beyond matroid rank sum valuations.

The Poisson Rounding Scheme

In this section we define the Poisson rounding scheme, which we denote by r_poiss. The random map r_poiss : R → S renders the following optimization problem over R a convex optimization problem.

maximize f(x) = E_{y∼r_poiss(x)}[w(y)]
subject to Σ_i x_ij ≤ 1, for j ∈ [m];
0 ≤ x_ij ≤ 1, for i ∈ [n], j ∈ [m].   (7)

We define the Poisson rounding scheme as follows. Given a fractional solution x to (7), do the following independently for each item j: assign j to player i with probability 1 − e^{−x_ij}. (This is well defined since 1 − e^{−x_ij} ≤ x_ij for all players i and items j, and Σ_i x_ij ≤ 1 for each item j.) We make this more precise in Algorithm 2. For clarity, we represent an allocation as a function from items to players, with an additional null player * reserved for items that are left unassigned. For each item j, a value p_j ∈ [0, 1] is drawn uniformly at random; if Σ_i (1 − e^{−x_ij}) ≥ p_j, then j is assigned to the player a(j) with the minimum index such that Σ_{i≤a(j)} (1 − e^{−x_ij}) ≥ p_j, and otherwise j is assigned to the null player *.

Proof (of Lemma 4.3). Let S_1, . . . , S_n be an allocation, and let x be the integer point of (7) corresponding to S_1, . . . , S_n. Let (S'_1, . . . , S'_n) ∼ r_poiss(x). It suffices to show that E[Σ_i v_i(S'_i)] ≥ (1 − 1/e) · Σ_i v_i(S_i).
By definition of the Poisson rounding scheme, S′_i includes each j ∈ S_i independently with probability 1 − 1/e. Submodularity implies that E[v_i(S′_i)] ≥ (1 − 1/e) · v_i(S_i).

Warm-up: Convexity for Coverage Valuations

In this section, we prove the special case of Lemma 4.2 for coverage valuations, as defined in Section 2.4. Fix n, m, and coverage valuations {v_i}_{i=1}^n, and let R denote the feasible set of mathematical program (7). Let (S_1, . . . , S_n) ∼ r_poiss(x) be the (random) allocation computed by the Poisson rounding scheme for point x ∈ R. The expected welfare E[w(r_poiss(x))] can be written as E[Σ_{i=1}^n v_i(S_i)], where the expectation is taken over the internal random coins of the rounding scheme. By linearity of expectation, as well as the fact that the sum of concave functions is concave, it suffices to show that E[v_i(S_i)] is a concave function of x for an arbitrary player i with coverage valuation v_i. Fix player i, and use x_j, v, and S as shorthand for x_ij, v_i, and S_i respectively. Recall that v is a coverage function; let L be a ground set and A_1, . . . , A_m ⊆ L be such that v_i(T) = |∪_{j∈T} A_j| for each T ⊆ [m]. The Poisson rounding scheme includes each item j in S independently with probability 1 − e^{−x_j}. The expected value of player i can be written as follows:

E[v(S)] = E[|∪_{j∈S} A_j|] = Σ_{ℓ∈L} Pr[ℓ ∈ ∪_{j∈S} A_j].

Since the sum of concave functions is concave, it suffices to show that Pr[ℓ ∈ ∪_{j∈S} A_j] is concave in x for each ℓ ∈ L. We can interpret Pr[ℓ ∈ ∪_{j∈S} A_j] as the probability that element ℓ is covered by an item in S, where j ∈ [m] covers ℓ ∈ L if ℓ ∈ A_j. For each ℓ ∈ L, let C_ℓ be the set of items that cover ℓ. Element ℓ ∈ L is covered by S precisely when C_ℓ ∩ S ≠ ∅. Each item j ∈ C_ℓ is included in S independently with probability 1 − e^{−x_j}.
Therefore, the probability that ℓ ∈ L is covered by S can be rewritten as follows:

Pr[ℓ ∈ ∪_{j∈S} A_j] = 1 − Π_{j∈C_ℓ} e^{−x_j} = 1 − exp(−Σ_{j∈C_ℓ} x_j).  (8)

Expression (8) is the composition of the concave function g(y) = 1 − e^{−y} with the affine function x ↦ Σ_{j∈C_ℓ} x_j. It is well known that composing a concave function with an affine function yields another concave function (see e.g. [4]). Therefore, Pr[ℓ ∈ ∪_{j∈S} A_j] is concave in x for each ℓ ∈ L, as needed. This completes the proof.

Convexity for Matroid Rank Sum Valuations

In this section, we will prove Lemma 4.2 in its full generality. First, we define a discrete analogue of a Hessian matrix for set functions, and show that these discrete Hessians are negative semi-definite for matroid rank sum functions.

Definition 4.4. For a set function v : 2^[m] → R and S ⊆ [m], the discrete Hessian H^v_S ∈ R^{m×m} is given by

H^v_S(j, k) = v(S ∪ {j, k}) − v(S ∪ {j}) − v(S ∪ {k}) + v(S)  (9)

for j, k ∈ [m].

Claim 4.5. If v : 2^[m] → R_+ is a matroid rank sum function, then H^v_S is negative semi-definite for each S ⊆ [m].

Proof. We observe that H^v_S is linear in v, and recall that a non-negative weighted sum of negative semi-definite matrices is negative semi-definite. Therefore, it is sufficient to prove this claim when v is a matroid rank function. In that case, one can show that −H^v_S is a binary matrix encoding a symmetric and transitive relation on the items. A binary matrix encoding a symmetric and transitive relation is a block diagonal matrix (after permuting rows and columns) where each diagonal block is an all-ones or all-zeros sub-matrix. It is known, and easy to prove, that such a matrix is positive semi-definite. Therefore H^v_S is negative semi-definite.

We now return to Lemma 4.2. Fix n, m, and MRS valuations {v_i}_{i=1}^n, and let R denote the feasible set of mathematical program (7). Let (S_1, . . . , S_n) ∼ r_poiss(x) be the (random) allocation computed by the Poisson rounding scheme for point x ∈ R. The expected welfare E[w(r_poiss(x))] can be written as E[Σ_{i=1}^n v_i(S_i)], where the expectation is taken over the internal random coins of the rounding scheme.
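Returning briefly to the coverage warm-up, the closed form (8), summed over the ground set, gives a player's expected value under Poisson rounding. The sketch below (with a made-up two-element ground set) checks midpoint concavity numerically; it is an illustration, not part of the paper's machinery.

```python
import math

def expected_coverage_value(cover_sets, x):
    """Expected value of a coverage valuation under Poisson rounding:
    sum over ground elements l of Pr[l covered] = 1 - exp(-sum_{j in C_l} x_j),
    as in Equation (8). cover_sets[l] plays the role of C_l; x[j] is the
    fractional amount of item j given to this player."""
    return sum(1.0 - math.exp(-sum(x[j] for j in C)) for C in cover_sets)
```

A midpoint test along an arbitrary segment is consistent with concavity of this expression.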
By linearity of expectation, as well as the fact that the sum of concave functions is concave, it suffices to show that E[v_i(S_i)] is a concave function of x for an arbitrary player i with MRS valuation v_i. Fix player i, and use x_j, v, S as shorthand for x_ij, v_i, S_i respectively. The Poisson rounding scheme includes each item j in S independently with probability 1 − e^{−x_j}. We can now write the expected value of player i as the following function G^v : R^m → R:

G^v(x_1, . . . , x_m) = Σ_{S⊆[m]} v(S) Π_{j∈S} (1 − e^{−x_j}) Π_{j∉S} e^{−x_j}  (10)

The following claim, combined with Claim 4.5, completes the proof of Lemma 4.2.

Claim 4.6. If all discrete Hessians of v are negative semi-definite, then G^v is concave.

Proof. Assume H^v_S is negative semi-definite for each S ⊆ [m]. We work with G^v as expressed in Equation (10). We will show that the Hessian matrix of G^v at an arbitrary x ∈ R^m is negative semi-definite, which is a sufficient condition for concavity. We take the mixed derivative of G^v with respect to x_j and x_k (possibly j = k):

∂²G^v(x) / ∂x_j ∂x_k
= Σ_{S ⊆ [m]\{j,k}} Π_{ℓ∈S} (1 − e^{−x_ℓ}) Π_{ℓ∈[m]\S} e^{−x_ℓ} (v(S) − v(S ∪ {j}) − v(S ∪ {k}) + v(S ∪ {j, k}))
= Σ_{S ⊆ [m]} Π_{ℓ∈S} (1 − e^{−x_ℓ}) Π_{ℓ∈[m]\S} e^{−x_ℓ} (v(S) − v(S ∪ {j}) − v(S ∪ {k}) + v(S ∪ {j, k}))
= Σ_{S ⊆ [m]} Π_{ℓ∈S} (1 − e^{−x_ℓ}) Π_{ℓ∈[m]\S} e^{−x_ℓ} H^v_S(j, k)

The first equality follows by grouping the terms of Equation (10) according to S = T \ {j, k}, and the second since the summand vanishes whenever S contains j or k. Writing the Hessian matrix entrywise, we obtain

∇²G^v(x) = Σ_{S⊆[m]} Π_{ℓ∈S} (1 − e^{−x_ℓ}) Π_{ℓ∈[m]\S} e^{−x_ℓ} H^v_S  (11)

A non-negative weighted sum of negative semi-definite matrices is negative semi-definite. This completes the proof of the claim.

A Combinatorial Auctions with Explicit Coverage Valuations

In this section, we apply our mechanism to explicitly represented coverage valuations. This demonstrates the utility of our mechanism in a concrete, non-oracle-based setting, and moreover allows us to establish an interesting separation result.
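The discrete Hessian of Definition 4.4 is easy to compute directly. The sketch below uses the rank function of the 2-uniform matroid on four items as an assumed example, and checks the conclusion of Claim 4.5 on random directions; it is illustrative only.

```python
import random

def discrete_hessian(v, S, m):
    """Discrete Hessian H^v_S of Equation (9), as an m x m nested list:
    H[j][k] = v(S ∪ {j,k}) - v(S ∪ {j}) - v(S ∪ {k}) + v(S)."""
    return [[v(S | {j, k}) - v(S | {j}) - v(S | {k}) + v(S)
             for k in range(m)] for j in range(m)]
```

For the 2-uniform matroid rank function, the discrete Hessian at the empty set works out to −I, which is visibly negative semi-definite.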
Specifically, we show that (1) the (1 − 1/e)-approximate mechanism of Theorem 4.1 can be implemented in expected polynomial time for this problem, and (2) no polynomial-time, universally-truthful, VCG-based 9 mechanism guarantees an approximation ratio of o(n), unless NP ⊆ P/poly. The approximation ratio of 1 − 1/e is the best possible in polynomial time for this problem - even without incentive constraints - assuming P ≠ NP [19]. Ours is the first separation of its kind in the computational complexity model. 10

An n-player, m-item instance of combinatorial auctions with explicit coverage valuations is described as follows. For each player i, there is a finite set L_i, and a family A^i_1, . . . , A^i_m of subsets of L_i. The valuation function of player i is then defined as v_i(S) = |∪_{j∈S} A^i_j|. The set system (L_i, {A^i_j}_{j=1}^m) is encoded explicitly as a bipartite graph.

A.1 A Truthful-in-Expectation Mechanism

As discussed previously, MRS valuations include all coverage valuations. Therefore, in order to implement the MIDR allocation rule of Section 4 for this problem, it suffices to answer lottery-value queries in time polynomial in the number of bits encoding the instance.

Claim A.1. Lottery-value queries for an explicitly represented coverage valuation can be answered in polynomial time.

Proof. Let v : 2^[m] → R_+ be a coverage valuation presented explicitly as a set system (L, {A_j}_{j=1}^m), and let x ∈ [0, 1]^m. Let S be a random set that includes each j ∈ [m] independently with probability x_j. The outcome of the lottery-value oracle of v evaluated at x is equal to the sum, over all ℓ ∈ L, of the probability that ℓ is "covered" by S - specifically, Σ_{ℓ∈L} Pr[ℓ ∈ ∪_{j∈S} A_j]. It is easy to verify that a term of this sum can be expressed as the following closed-form expression:

Pr[ℓ ∈ ∪_{j∈S} A_j] = 1 − Π_{j : A_j ∋ ℓ} (1 − x_j)

This expression can be evaluated in time polynomial in the representation of the set system. This completes the proof.

Claim A.1 implies the following Theorem. Theorem A.2.
There is an expected polynomial-time, (1 − 1/e)-approximate, truthful-in-expectation mechanism for combinatorial auctions with explicit coverage valuations.

9 A universally-truthful mechanism is VCG-based if it is a randomization over deterministic truthful mechanisms that each implement a maximal-in-range allocation rule - the special case of MIDR where each distribution in the distributional range is supported on a single allocation.

10 We note that this separation is meaningful because there are no known universally-truthful polynomial-time mechanisms - VCG-based or otherwise - for this problem that achieve an approximation ratio better than min(n, √m). In particular, the result of [8] uses demand queries, which cannot be answered in polynomial time for explicit coverage valuations by the results of [19] and [16].

A.2 A Lower Bound on Universally-Truthful VCG-Based Mechanisms

We use the following special case of [5, Theorem 1.2]: if a succinct combinatorial auction problem satisfies the regularity conditions on the valuations defined in [5], and moreover the 2-player version of the problem is APX-hard, then no polynomial-time, universally-truthful, VCG-based mechanism guarantees an approximation ratio of o(n). It is routine to verify the regularity assumptions of [5] for explicit coverage valuations. APX-hardness of the 2-player problem follows by an elementary reduction from the APX-hard problem max-cut. Given an instance of max-cut on a graph G = (V, E), we let [m] = V and L_1 = L_2 = E. For e ∈ E, i ∈ {1, 2}, and j ∈ V, we let e ∈ A^i_j if j is one of the endpoints of edge e. It is easy to check that the welfare-maximizing allocation of the resulting 2-player instance of combinatorial auctions corresponds to the maximum cut of G. Moreover, using the fact that the optimal objective value of max-cut is at least |E|/2, it is elementary to verify that the reduction preserves hardness of approximation up to a constant factor.
Therefore, combinatorial auctions with explicit coverage valuations and 2 players is APX-hard. This yields the following theorem: no polynomial-time, universally-truthful, VCG-based mechanism for this problem guarantees an approximation ratio of o(n), unless NP ⊆ P/poly.

B Solving The Convex Program

In this section, we overcome some technical difficulties related to the solvability of convex programs. We show in Section B.1 that, in the lottery-value oracle model, the four conditions for "solvability" of convex programs, as stated in Fact C.3, are easily satisfied for convex program (7). However, an additional challenge remains: "solving" a convex program - as in Definition C.2 - returns an approximately optimal solution. Indeed, the optimal solution of a convex program may be irrational in general, so this is unavoidable. We show how to overcome this difficulty if we settle for polynomial runtime in expectation. While the optimal solution x* of (7) cannot be computed explicitly, the random variable r_poiss(x*) can be sampled in expected polynomial time. The key idea is the following: sampling the random variable r_poiss(x*) rarely requires precise knowledge of x*. Depending on the coin flips of r_poiss, we decide how accurately we need to solve convex program (7) in order to compute r_poiss(x*). Roughly speaking, we show that the probability of requiring a (1 − ǫ)-approximation falls exponentially in 1/ǫ. As a result, we can sample r_poiss(x*) in expected polynomial time. We implement this plan in Section B.2 under the simplifying assumption that convex program (7) is well-conditioned - i.e., is "sufficiently concave" everywhere. In Section B.3, we show how to remove that assumption by slightly modifying our algorithm.

B.1 Approximating the Convex Program

Claim B.1. There is an algorithm for combinatorial auctions with MRS valuations in the lottery-value oracle model that takes as input an instance of the problem and an approximation parameter ǫ > 0, runs in poly(n, m, log(1/ǫ)) time, and returns a (1 − ǫ)-approximate solution to convex program (7).
It suffices to show that the four conditions of Fact C.3 are satisfied in our setting. The first three are immediate from elementary combinatorial optimization (see for example [28]). It remains to show that the first-order oracle, as defined in Fact C.3, can be implemented in polynomial time in the lottery-value oracle model. The objective f(x) of convex program (7) can, by definition, be written as f(x) = Σ_i G^{v_i}(x_i), where v_i is the valuation function of player i, x_i is the vector (x_i1, . . . , x_im), and G^{v_i} is as defined in (10). By definition, G^{v_i}(x_i) is the outcome of querying the lottery-value oracle of player i with (1 − e^{−x_i1}, . . . , 1 − e^{−x_im}). Therefore, we can evaluate f(x) using n lottery-value queries, one for each player. It remains to show that we can also evaluate the (multi-variate) gradient ∇f(x) of f(x). Using definition (10), we take the partial derivative corresponding to x_ij. By rearranging the sum appropriately, we get that

∂f/∂x_ij (x) = e^{−x_ij} ( F^{v_i}((1 − e^{−x_i1}, . . . , 1 − e^{−x_im}) ∨ 1_j) − F^{v_i}((1 − e^{−x_i1}, . . . , 1 − e^{−x_im}) ∧ 0_j) ),

where F^{v_i} is as defined in Equation (3). Here, ∨ and ∧ denote entry-wise maximum and minimum respectively, 1_j denotes the vector with all entries equal to 0 except for a 1 at position j, and 0_j denotes the vector with all entries equal to 1 except for a 0 at position j. It is clear that each entry of the gradient of f can be evaluated using two lottery-value queries, so ∇f(x) can be evaluated using 2nm lottery-value queries in total. This completes the proof of Claim B.1.

B.2 The Well-Conditioned Case

In this section, we make the following simplifying assumption: the objective function f(x) of convex program (7), when restricted to any line in the feasible set R, has a second derivative of magnitude at least λ, for a parameter λ > 0 that is at least Σ_i v_i([m]) divided by a quantity exponential in poly(n, m). Let x* be the optimal solution to convex program (7). Algorithm 1 allocates items according to the distribution r_poiss(x*).
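The two-query gradient formula above can be sketched and sanity-checked in Python. The oracle `F` below is a stand-in lottery-value oracle for a single hypothetical coverage valuation (one ground element covered by items 0 and 1); it is an assumption for illustration, not the paper's oracle.

```python
import math

def grad_entry(F, x, j):
    """One gradient entry of f via two lottery-value queries, following
    the formula in Section B.1: with y_j = 1 - e^{-x_j},
    df/dx_j = e^{-x_j} * (F(y ∨ 1_j) - F(y ∧ 0_j))."""
    y = [1.0 - math.exp(-t) for t in x]
    hi, lo = y[:], y[:]
    hi[j], lo[j] = 1.0, 0.0        # y ∨ 1_j and y ∧ 0_j
    return math.exp(-x[j]) * (F(hi) - F(lo))
```

For F(y) = 1 − (1 − y_0)(1 − y_1), the induced objective is G(x) = 1 − e^{−x_0−x_1}, whose partial derivative in x_0 is e^{−x_0−x_1}; the formula reproduces this exactly.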
The Poisson rounding scheme, as described in Algorithm 2, makes m independent decisions, one for each item j. Therefore, we fix item j and show how to simulate this decision. It suffices to do the following in expected polynomial time: flip a uniform coin p_j ∈ [0, 1], and find the minimum index a(j) (if any) such that Σ_{i ≤ a(j)} (1 − e^{−x*_ij}) ≥ p_j. For most realizations of p_j, this can be decided using only coarse estimates of the x*_ij. Assume we have an estimation oracle for x* that, on input δ, returns a δ-estimate x̃ of x*: specifically, |x̃_ij − x*_ij| ≤ δ for each i and j. When p_j falls outside the "uncertainty zones" of x̃ - that is, when |p_j − Σ_{i′≤i} (1 − e^{−x̃_i′j})| > δn for each i ∈ [n] - it is easy to see that we can correctly determine a(j) by using x̃ in lieu of x*. The total measure of the uncertainty zones of x̃ is at most 2n²δ, therefore p_j lands outside the uncertainty zones with probability at least 1 − 2n²δ. The following claim shows that if the estimation oracle for x* can be implemented in time polynomial in log(1/δ), then we can simulate the Poisson rounding procedure in expected polynomial time.

Claim B.3. Let x* be the optimal solution of convex program (7). Assume access to a subroutine B(δ) that returns a δ-estimate of x* in time poly(n, m, log(1/δ)). Then Algorithm 1 with r = r_poiss can be simulated in expected poly(n, m) time.

Proof. It suffices to show that we can simulate the allocation of an item j by Algorithm 2 on input x*. The simulation proceeds as follows: draw p_j ∈ [0, 1] uniformly at random. Start with δ = δ_0 = 1/(2n²). Let x̃ = B(δ). While |p_j − Σ_{i′≤i} (1 − e^{−x̃_i′j})| ≤ δn for some i ∈ [n] (i.e., p_j may fall inside an "uncertainty zone"), do the following: let δ = δ/2, x̃ = B(δ), and repeat. After the loop terminates, we have a sufficiently accurate estimate of x* to calculate a(j) as in Algorithm 2. It is easy to see that the above procedure is a faithful simulation of Algorithm 2 on x*.
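The refinement loop just described can be sketched as follows. Here `B` is the assumed estimation oracle and `p` plays the role of the coin p_j; both are hypothetical hooks added so the loop can be exercised deterministically.

```python
import math

def sample_item_winner(B, n, j, p):
    """Sketch of the simulation in Claim B.3 for a single item j:
    halve delta until p clears every uncertainty zone of the estimate
    B(delta), then read off the winner a(j), or None if the item is
    left unassigned."""
    delta = 1.0 / (2 * n * n)          # delta_0 of the proof
    while True:
        x = B(delta)
        cums, c = [], 0.0
        for i in range(n):
            c += 1.0 - math.exp(-x[i][j])
            cums.append(c)
        if all(abs(p - c) > delta * n for c in cums):
            break                      # p is outside all uncertainty zones
        delta /= 2.0
    for i, c in enumerate(cums):
        if c >= p:
            return i
    return None
```

With a stand-in oracle that is already exact at every precision, the loop terminates after a constant number of refinements.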
It remains to bound its expected running time. Let δ_k = 1/(2^{k+1} n²) denote the value of δ at the k-th iteration. By assumption, the k-th iteration takes poly(n, m, log(1/δ_k)) = poly(n, m, log(2^{k+1} n²)) = poly(n, m, k) time. The probability that this procedure does not terminate after k iterations is at most 2n²δ_k = 1/2^k. Taken together, these two facts and a simple geometric summation imply that the expected runtime is polynomial in n and m.

It remains to show that the estimation oracle B(δ) can be implemented in poly(n, m, log(1/δ)) time. At first blush, one may expect that the ellipsoid method can be used in the usual manner here. However, there is one complication: we require an estimate x̃ that is close to x* in solution space rather than in terms of objective value. Using our assumption on the curvature of f(x), we will reduce finding a δ-estimate of x* to finding a (1 − ǫ(δ))-approximate solution to convex program (7). The dependence of ǫ on δ will be such that ǫ ≥ poly(δ)/2^{poly(n,m)}, thereby we can invoke Claim B.1 to deduce that B(δ) can be implemented in poly(n, m, log(1/δ)) time. Let ǫ = ǫ(δ) = δ²λ / (2 Σ_i v_i([m])). Plugging in the definition of λ, we deduce that ǫ ≥ δ²/2^{poly(n,m)}, which is the desired dependence. It remains to show that if x̃ is a (1 − ǫ)-approximate solution to (7), then x̃ is also a δ-estimate of x*. Using the fact that f(x) is concave, and moreover its second derivative has magnitude at least λ, it is a simple exercise to bound the distance of any point x from the optimal point x* in terms of its sub-optimality f(x*) − f(x), as follows:

f(x*) − f(x) ≥ (λ/2) ||x − x*||².  (12)

Assume x̃ is a (1 − ǫ)-approximate solution to (7). Equation (12) implies that

||x̃ − x*||² ≤ (2/λ) ǫ f(x*) = (δ² / Σ_i v_i([m])) f(x*) ≤ δ²,

where the last inequality follows from the fact that Σ_i v_i([m]) is an upper bound on the optimal value f(x*). Therefore, ||x̃ − x*|| ≤ δ, as needed. This completes the proof of Lemma B.2.
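The distance bound (12) used above follows from a standard second-order Taylor argument; a sketch, under the curvature assumption of this section:

```latex
% Let \phi(t) = f(x^* + t\,(x - x^*)) for t \in [0,1]. Optimality of
% x^* over the convex set R gives \phi'(0) \le 0, and the assumption
% that f has curvature of magnitude at least \lambda along every line
% in R gives \phi''(t) \le -\lambda \, \lVert x - x^* \rVert^2.
% Taylor's theorem with integral remainder then yields
\[
  f(x) \;=\; \phi(1)
        \;=\; \phi(0) + \phi'(0) + \int_0^1 (1-t)\,\phi''(t)\,dt
        \;\le\; f(x^*) \;-\; \tfrac{\lambda}{2}\,\lVert x - x^* \rVert^2,
\]
% which rearranges to inequality (12).
```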
B.3 Guaranteeing Good Conditioning

In this section, we propose a modification r+_poiss of the Poisson rounding scheme r_poiss. We will argue that r+_poiss satisfies all the properties of r_poiss established so far, with one exception: the approximation guarantee of Lemma 4.3 is reduced to 1 − 1/e − 2^{−2mn}. Then we will show that r+_poiss satisfies the curvature assumption of Lemma B.2, demonstrating that said assumption may be removed. Therefore Algorithm 1, instantiated with r = r+_poiss for combinatorial auctions with MRS valuations in the lottery-value oracle model, is (1 − 1/e − 2^{−2mn})-approximate and can be implemented in expected poly(n, m) time. Finally, we show in Remark B.4 how to recover the 2^{−2mn} term to get a clean 1 − 1/e approximation ratio, as claimed in Theorem 4.1.

Let µ = 2^{−2mn}. We define r+_poiss in Algorithm 3. Intuitively, r+_poiss at first makes a tentative allocation using r_poiss. Then, with small probability µ, it cancels said allocation, resetting (S_1, . . . , S_n) = (∅, ∅, . . . , ∅); in that event it draws q_2 ∈ [0, 1] uniformly at random and, if q_2 ∈ [0, β], chooses a player i* uniformly at random and gives him all the items. Here β is defined as the fraction of items allocated in the original tentative allocation. The motivation behind this seemingly bizarre definition of r+_poiss is purely technical: as we will see, it can be thought of as adding "concave noise" to r_poiss.

We can write the expected welfare E[w(r+_poiss(x))] as follows, using linearity of expectation and the fact that β is independent of the choice of i*:

E[w(r+_poiss(x))] = E[(1 − µ) w(r_poiss(x)) + µ β v_i*([m])]
= (1 − µ) E[w(r_poiss(x))] + µ E[β] E[v_i*([m])]
= (1 − µ) E[w(r_poiss(x))] + µ E[β] Σ_i v_i([m]) / n

Observe that r_poiss allocates an item j with probability Σ_i (1 − e^{−x_ij}). Therefore, the expectation of β is Σ_{ij} (1 − e^{−x_ij}) / m.
This gives:

E[w(r+_poiss(x))] = (1 − µ) E[w(r_poiss(x))] + (µ / (mn)) Σ_i v_i([m]) Σ_{i,j} (1 − e^{−x_ij}).  (13)

It is clear that the expected welfare when using r = r+_poiss is within 1 − µ = 1 − 2^{−2mn} of the expected welfare when using r = r_poiss in the instantiation of Algorithm 1. Using Lemma 4.3, we conclude that r+_poiss is a (1 − 1/e − 2^{−2mn})-approximate rounding scheme. Moreover, using Lemma 4.2, as well as the fact that (1 − e^{−x_ij}) is a concave function, we conclude that r+_poiss is a convex rounding scheme. This establishes the analogues of Lemmas 4.3 and 4.2 for r+_poiss. It is elementary to verify that our proof of Lemma B.2 can be adapted to r+_poiss as well.

It remains to show that r+_poiss is "sufficiently concave"; this establishes that the conditioning assumption of Section B.2 is unnecessary for r+_poiss. We will show that expression (13) is a concave function with curvature of magnitude at least λ = Σ_{i=1}^n v_i([m]) / (e · m · n · 2^{2mn}) everywhere. Since the curvature of concave functions is always non-positive, and moreover the curvature of the sum of two functions is the sum of their curvatures, it suffices to show that the second term of the sum (13) has curvature of magnitude at least λ. We note that the curvature of Σ_{ij} (1 − e^{−x_ij}) has magnitude at least e^{−1} over x ∈ [0, 1]^{n×m}. Therefore, the curvature of the second term of (13) has magnitude at least (µ / (mn)) Σ_i v_i([m]) e^{−1} = λ, as needed.

Remark B.4. In this section, we sacrificed 2^{−2mn} in the approximation ratio in order to guarantee expected polynomial runtime of our algorithm even when convex program (7) is not well-conditioned. This loss can be recovered to get a clean 1 − 1/e approximation as follows. Given our (1 − 1/e − 2^{−2mn})-approximate MIDR algorithm A, construct the following algorithm A′: given an instance of combinatorial auctions, A′ runs A on the instance with probability 1 − e·2^{−2mn}, and with the remaining probability solves the instance optimally in exponential time O(2^{2mn}).
It was shown in [12] that a random composition of MIDR mechanisms is MIDR, therefore A′ is MIDR. The expected runtime of A′ is bounded by the expected runtime of A plus e·2^{−2mn} · O(2^{2mn}) = O(1). Finally, the expected approximation ratio of A′ is the weighted average of the approximation ratio of A and the optimal approximation ratio 1, and is at least (1 − e·2^{−2mn})(1 − 1/e − 2^{−2mn}) + e·2^{−2mn} ≥ 1 − 1/e.

C Additional Preliminaries

C.1 Matroid Theory

In this section, we review some basics of matroid theory. For a more comprehensive reference, we refer the reader to [26]. A matroid M is a pair (X, I), where X is a finite ground set, and I is a non-empty family of subsets of X satisfying the following two properties. (1) Downward closure: if S belongs to I, then so do all subsets of S. (2) The exchange property: whenever T, S ∈ I with |T| < |S|, there is some x ∈ S \ T such that T ∪ {x} ∈ I. Elements of I are often referred to as the independent sets of the matroid. Subsets of X that are not in I are often called dependent. We associate with matroid M a set function rank_M : 2^X → N, known as the rank function of M, defined as follows: rank_M(A) = max_{S∈I} |S ∩ A|. Equivalently, the rank of set A in matroid M is the maximum size of an independent set contained in A.

C.2 Convex Optimization

In this section, we distill some basics of convex optimization. For more details, see [2].

Definition C.1. A maximization problem is given by a set Π of instances (P, c), where P is a subset of some euclidean space, c : P → R, and the goal is to maximize c(x) over x ∈ P. We say Π is a convex maximization problem if for every (P, c) ∈ Π, P is a compact convex set, and c : P → R is concave. If c : P → R_+ for every instance of Π, we say Π is non-negative.

Definition C.2.
We say a non-negative maximization problem Π is R-solvable in polynomial time if there is an algorithm that takes as input the representation of an instance I = (P, c) ∈ Π - where we use |I| to denote the number of bits in the representation - and an approximation parameter ǫ, and in time poly(|I|, log(1/ǫ)) outputs x ∈ P such that c(x) ≥ (1 − ǫ) max_{y∈P} c(y).

Fact C.3. Consider a non-negative convex maximization problem Π. If the following are satisfied, then Π is R-solvable in polynomial time using the ellipsoid method. We let I = (P, c) denote an instance of Π, and let m denote the dimension of the ambient euclidean space.

1. Polynomial dimension: m is polynomial in |I|.

2. Starting ellipsoid: There is an algorithm that computes, in time poly(|I|), a point c ∈ R^m, a matrix A ∈ R^{m×m}, and a number V ∈ R such that the required containment and volume conditions hold. We use E(c, A) to denote the ellipsoid given by center c and linear transformation A.

3. Separation oracle for P: There is an algorithm that takes as input I and x ∈ R^m, and in time poly(|I|, |x|), where |x| denotes the size of the representation of x, outputs "yes" if x ∈ P, and otherwise outputs h ∈ R^m such that h^T x < h^T y for every y ∈ P.

4. First-order oracle for c: There is an algorithm that takes as input I and x ∈ R^m, and in time poly(|I|, |x|) outputs c(x) ∈ R and ∇c(x) ∈ R^m.

D Additional Technical Details and Commentary

D.1 Computing Payments

In this section, we show how to efficiently compute truth-telling payments for our mechanism. In fact, as shown below, this is possible for any maximal-in-distributional-range allocation rule for combinatorial auctions given as a black box.

Lemma D.1. Let A be an MIDR allocation rule for combinatorial auctions, and let v_1, . . . , v_n be input valuations. Assume black-box access to A, and value oracle access to {v_i}_{i=1}^n. We can compute, with poly(n) overhead in runtime, payments p_1, . . . , p_n such that E[p_i] equals the VCG payment of player i for MIDR allocation rule A on input v_1, . . . , v_n.

Proof. Without loss of generality, it suffices to show how to compute p_1. Let 0 : 2^[m] → R be the valuation evaluating to 0 at each bundle. Recall (see e.g. [25]) that the VCG payment of player 1 is equal to

E_{T ∼ A(0, v_2, . . . , v_n)}[Σ_{i=2}^n v_i(T_i)] − E_{S ∼ A(v_1, . . . , v_n)}[Σ_{i=2}^n v_i(S_i)].  (14)

Let (S_1, . . . , S_n) be a sample from A(v_1, . . . , v_n), and let (T_1, . . . , T_n) be a sample from A(0, v_2, . . . , v_n). Let p_1 = Σ_{i=2}^n v_i(T_i) − Σ_{i=2}^n v_i(S_i). Using linearity of expectation, it is easy to see that the expectation of p_1 is equal to the expression in (14). This completes the proof.

We note that the mechanism resulting from Lemma D.1 is individually rational in expectation, and each payment is non-negative in expectation. We leave open the question of whether it is possible to enforce individual rationality and non-negative payments for our mechanism ex-post.

D.2 Beyond Matroid Rank Sum Valuations

In this section, we discuss the prospect of extending our result beyond matroid rank sum valuations. First, we argue that our restriction to a subset of submodular functions is not merely an artifact of our analysis. Specifically, we exhibit a submodular function that is not in the matroid rank sum family, and moreover the Poisson rounding scheme can be non-convex when a player has this function as their valuation. Then, we briefly argue that our mechanism may yet apply to some valuations that are not matroid rank sums.

We define a budget-additive function v on four items {1, 2, 3, 4}. Three of the items are "small", one item is "big", and the budget equals the value of the big item. We can show that v is not a matroid rank sum function by invoking Claim 4.5. Specifically, one can manually check that the discrete Hessian matrix H^v_∅ of v at ∅ (see Definition 4.4) is not negative semi-definite.
Moreover, for a player with valuation v, Poisson rounding renders the player's expected value function G^v(x) (Equation (10)) non-concave in x: by Equation (11), the Hessian matrix of G^v(x) approaches the discrete Hessian H^v_∅ as x tends to zero. Since H^v_∅ is not negative semi-definite, G^v(x) is non-concave for x near zero. We note that we can construct a large family of similar counterexamples by simply increasing the number of "small" items in v. Finally, we observe that our mechanism may apply to some valuations that are not matroid rank sums. We observe that we only used two properties of MRS functions: their discrete Hessian matrices are negative semi-definite (Claim 4.5, which is used to prove Lemma 4.2), and they are submodular (used to prove Lemma 4.3). Therefore, our result extends directly to the class of all set functions satisfying both of these properties. We leave open the question of whether there exist interesting functions in this class that are not matroid rank sums. More generally, understanding the class of set functions with negative semi-definite discrete Hessian matrices - in particular the relationship of this class to other classes of set functions studied in the literature - may be an interesting direction for future inquiry.
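The counterexample above can be checked mechanically. The concrete numbers below (small items of value 1, a big item of value 2, budget 2) are an assumption for illustration; the paper leaves them unspecified. The direction z certifies that H^v_∅ has a positive quadratic form, hence is not negative semi-definite.

```python
def budget_additive(values, budget):
    """Budget-additive set function: v(S) = min(sum of item values, budget)."""
    return lambda S: min(sum(values[j] for j in S), budget)

# Three small items (value 1) and one big item (value 2 = the budget);
# these numbers are a hypothetical instantiation of the counterexample.
v = budget_additive([1, 1, 1, 2], 2)

# Discrete Hessian at the empty set (Definition 4.4 / Equation (9)).
H = [[v({j, k}) - v({j}) - v({k}) + v(set())
     for k in range(4)] for j in range(4)]

# A direction with positive quadratic form witnesses that H is not NSD.
z = [1, 1, 1, -1]
quad = sum(z[j] * H[j][k] * z[k] for j in range(4) for k in range(4))
```

Here quad works out to +1, so H^v_∅ is indeed not negative semi-definite.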
9,302
1103.0040
2953139329
We design an expected polynomial-time, truthful-in-expectation, (1-1/e)-approximation mechanism for welfare maximization in a fundamental class of combinatorial auctions. Our results apply to bidders with valuations that are matroid rank sums (MRS), which encompass most concrete examples of submodular functions studied in this context, including coverage functions, matroid weighted-rank functions, and convex combinations thereof. Our approximation factor is the best possible, even for known and explicitly given coverage valuations, assuming P != NP. Ours is the first truthful-in-expectation and polynomial-time mechanism to achieve a constant-factor approximation for an NP-hard welfare maximization problem in combinatorial auctions with heterogeneous goods and restricted valuations. Our mechanism is an instantiation of a new framework for designing approximation mechanisms based on randomized rounding algorithms. A typical such algorithm first optimizes over a fractional relaxation of the original problem, and then randomly rounds the fractional solution to an integral one. With rare exceptions, such algorithms cannot be converted into truthful mechanisms. The high-level idea of our mechanism design framework is to optimize directly over the (random) output of the rounding algorithm, rather than over the input to the rounding algorithm. This approach leads to truthful-in-expectation mechanisms, and these mechanisms can be implemented efficiently when the corresponding objective function is concave. For bidders with MRS valuations, we give a novel randomized rounding algorithm that leads to both a concave objective function and a (1-1/e)-approximation of the optimal welfare.
The aforementioned works @cite_15 @cite_9 are precursors to our general design framework that optimizes directly over the output of a randomized rounding algorithm. In the framework of Lavi and Swamy @cite_9 , the input to and output of the rounding algorithm are assumed to coincide up to a scaling factor, so optimizing over its input (as they do) is equivalent to optimizing over its output (as we do). In the result of @cite_15 , optimizing with respect to their "proxy bidders" is equivalent to optimizing over the output of a particular randomized rounding algorithm.
References for the citations above:

@cite_9 (MID 2103751307): "We give a general technique to obtain approximation mechanisms that are truthful in expectation. We show that for packing domains, any α-approximation algorithm that also bounds the integrality gap of the LP relaxation of the problem by α can be used to construct an α-approximation mechanism that is truthful in expectation. This immediately yields a variety of new and significantly improved results for various problem domains and, furthermore, yields truthful (in expectation) mechanisms with guarantees that match the best known approximation guarantees when truthfulness is not required. In particular, we obtain the first truthful mechanisms with approximation guarantees for a variety of multi-parameter domains. We obtain truthful (in expectation) mechanisms achieving approximation guarantees of O(√m) for combinatorial auctions (CAs), (1 + ε) for multi-unit CAs with B = Ω(log m) copies of each item, and 2 for multi-parameter knapsack problems (multi-unit auctions). Our construction is based on considering an LP relaxation of the problem and using the classic VCG mechanism by W. Vickrey (1961), E. Clarke (1971), and T. Groves (1973) to obtain a truthful mechanism in this fractional domain. We argue that the (fractional) optimal solution scaled down by α, where α is the integrality gap of the problem, can be represented as a convex combination of integer solutions, and by viewing this convex combination as specifying a probability distribution over integer solutions, we get a randomized, truthful-in-expectation mechanism. Our construction can be seen as a way of exploiting VCG in a computationally tractable way even when the underlying social-welfare maximization problem is NP-hard."

@cite_15 (MID 1727602370): "This short note exhibits a truthful-in-expectation @math-approximation mechanism for combinatorial auctions with subadditive bidders that uses polynomial communication."
From Convex Optimization to Randomized Mechanisms: Toward Optimal Combinatorial Auctions *
The overarching goal of algorithmic mechanism design is to design computationally efficient algorithms that solve or approximate fundamental optimization problems in which the underlying data is a priori unknown to the algorithm. A central example in both theory and practice is welfare maximization in combinatorial auctions. Here, there are m items for sale and n bidders vying for them. Each bidder i has a private valuation v_i(S) for each subset S of the items. 1 The welfare of an allocation S_1, ..., S_n of the items to the bidders is Σ_{i=1}^n v_i(S_i). Since valuations are initially unknown to the seller, computing a near-optimal allocation requires eliciting information from the (self-interested) bidders, for example via a bid. A mechanism is a protocol that extracts such information and computes an allocation of the items and payments. The "holy grail" for a mechanism designer is to devise a computationally efficient and incentive-compatible mechanism with an approximation factor that matches the best one known for the (easier) problem in which the underlying data is provided up front. 2 Such results are usually difficult to obtain, and in some cases are provably impossible using deterministic mechanisms [20,27]. The space of randomized mechanisms, however, is much more promising, as shown recently in [9,12]. 3 This paper provides such a positive result for a fundamental class of combinatorial auctions, via a novel randomized mechanism design framework based on convex optimization. Algorithmic mechanism design is difficult because incentive compatibility severely limits how the algorithm can compute an outcome, which prohibits use of most of the ingenious approximation algorithms that have been developed for different optimization problems. More concretely, the only general approach known for designing (randomized) truthful mechanisms is via maximal-in-distributional-range (MIDR) algorithms [9,12].
An MIDR algorithm fixes a set of distributions over feasible solutions -the distributional range -independently of the valuations reported by the self-interested participants, and outputs a random sample from the distribution that maximizes expected (reported) welfare. The Vickrey-Clarke-Groves (VCG) payment scheme renders an MIDR algorithm truthful-in-expectation. Most approximation algorithms are not MIDR algorithms. Consider, as an example, a randomized rounding algorithm for welfare maximization in combinatorial auctions (e.g. [14,11]). We can view such an algorithm as the composition of two algorithms, a relaxation algorithm and a rounding algorithm. The relaxation algorithm is deterministic and takes as input the problem data (players' valuations v), and outputs the (fractional) solution to a linear programming relaxation of the welfare-maximization problem that is optimal for the objective function defined by v. The rounding algorithm is randomized and takes as input this fractional solution and outputs a feasible allocation of the items to the players. Taken together, these algorithms assign to each input v a probability distribution D(v) over integral allocations. For almost all known randomized rounding algorithms, there is an input v such that the expected objective function value E y∼D (v) [v T y] with the distribution D(v) is inferior to that E y∼D(w) [v T y] with a distribution D(w) that the algorithm 1 Each bidder has an exponential number of private values; we ignore the attendant representation issues for the moment. 2 In this paper, by "incentive compatible" we generally mean a (possibly randomized) mechanism such that every participant maximizes its expected payoff by truthfully revealing its information to the mechanism, no matter how the other participants behave. Such mechanisms are called truthful-in-expectation, and are defined formally in Section 2.2. 
3 We note that the impressively general positive results for implementations in Bayes-Nash equilibria that were recently obtained in [18,17,1] do not apply to the stronger incentive-compatibility notions used in this paper and in most of the algorithmic mechanism design literature. would produce for a different input w -and this is a violation of the MIDR property. Informally, such violations are inevitable unless a rounding algorithm is designed explicitly to avoid them, on top of the usual approximation requirements. The exception that proves the rule is the important and well-known mechanism design framework of Lavi and Swamy [21]. Lavi and Swamy [21] begin with the foothold that the fractional welfare maximization problem -the relaxation algorithm above -can be made truthful by charging appropriate VCG payments. Further, they identify a very special type of rounding algorithm that preserves truthfulness: if the expected allocation produced by the rounding algorithm is always identical to the input to the rounding algorithm, component-wise, up to some universal scaling factor α, then composing the two algorithms easily yields an α-approximate truthful-in-expectation mechanism (after scaling the fractional VCG payments by α). Perhaps surprisingly, there are some interesting problems, such as welfare maximization in combinatorial auctions with general valuations, that admit such a rounding algorithm with a best-possible approximation guarantee (assuming P = N P ). However, most N P -hard welfare maximization problems do not seem to admit good randomized rounding algorithms of the rigid type required by this design framework. Our Contributions We introduce a new approach to designing truthful-in-expectation approximation mechanisms based on randomized rounding algorithms; we outline it here for the special case of welfare maximization in combinatorial auctions. 
The high-level idea is to optimize directly over the outcome of the rounding algorithm, rather than merely over the outcome of the relaxation algorithm (the input to the rounding algorithm). In other words, let r(x) denote a randomized rounding algorithm, mapping fractional allocations to integer allocations. Given players' valuations v, we compute a fractional allocation x that maximizes the expected welfare E_{y∼r(x)}[v^T y] over all fractional allocations x. This methodology evidently gives MIDR algorithms. This optimization problem is often intractable, but when the rounding algorithm r and the space of valuations v are such that the function E_{y∼r(x)}[v^T y] is always concave in x (in which case we call r a convex rounding algorithm), it can be solved in polynomial time using convex programming (modulo numerical issues that we address later). We use this design framework to give an expected polynomial-time, truthful-in-expectation, (1 − 1/e)-approximation mechanism for welfare maximization in combinatorial auctions in which bidders' valuations are matroid rank sums (MRS): non-negative linear combinations of matroid rank functions on the items. MRS valuations are submodular and encompass most concrete examples of submodular functions that have been studied in the combinatorial auctions literature, including all coverage functions and matroid weighted-rank functions (see Section 2.4 for formal definitions). Our approximation guarantee is optimal, assuming P ≠ NP, even for the special case of the welfare maximization problem with known and explicitly presented coverage valuations. Our mechanism is the first truthful-in-expectation and polynomial-time mechanism to achieve a constant-factor approximation for any NP-hard special case of combinatorial auctions that does not assume that there are multiple copies of every type of item. It works with "black-box" valuations, provided that they support a randomized analog of a "value oracle".
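To make this concrete, here is a small, self-contained sketch (our illustration, not code from the paper) for a single bidder with a coverage valuation. Under a Poisson-style rounding that keeps item j independently with probability 1 − e^{−x_j} (as in Section 4.1), the expected value E_{y∼r(x)}[v(y)] has a closed form that is concave in x, so it can be maximized directly, here by projected gradient ascent over the box 0 ≤ x_j ≤ 1 (the single-bidder relaxed feasible set). The coverage instance `covers` is invented for illustration.

```python
import math

# Invented single-bidder coverage instance over m = 3 items:
# element -> the items that cover it.
covers = {"a": [0, 1], "b": [1, 2], "c": [2]}
m = 3

def expected_value(x):
    # E[v(S)] when item j is kept independently with prob 1 - e^{-x_j}:
    # element ell is covered with prob 1 - exp(-sum_{j in C_ell} x_j),
    # which is concave in x (a concave function of an affine map).
    return sum(1.0 - math.exp(-sum(x[j] for j in C)) for C in covers.values())

def gradient(x):
    g = [0.0] * m
    for C in covers.values():
        p_miss = math.exp(-sum(x[j] for j in C))
        for j in C:
            g[j] += p_miss
    return g

# Projected gradient ascent over the box 0 <= x_j <= 1 (the relaxed
# feasible set R for a single bidder).
x = [0.0] * m
for _ in range(2000):
    g = gradient(x)
    x = [min(1.0, max(0.0, xj + 0.05 * gj)) for xj, gj in zip(x, g)]
```

Because the objective is concave and increasing here, the ascent converges to the corner x = (1, 1, 1); in the mechanism, the contention constraints Σ_i x_ij ≤ 1 across bidders would make the optimum genuinely fractional.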
We also give a (non-oraclebased) version of the mechanism for explicitly represented coverage valuations. Preliminaries Optimization Problems We consider optimization problems Π of the following general form. Each instance of Π consists of a feasible set S, and an objective function w : S → R. The solution to an instance of Π is given by the following optimization problem. maximize w(x) subject to x ∈ S. (1) Mechanism Design Basics We consider mechanism design optimization problems of the form in (1). In such problems, there are n players, where each player i has a valuation function v i : S → R. We are concerned with welfare maximization problems, where the objective is w(x) = n i=1 v i (x). We consider direct-revelation mechanisms for optimization mechanism design problems. Such a mechanism comprises an allocation rule, which is a function from (hopefully truthfully) reported valuation functions v 1 , . . . , v n to an outcome x ∈ S, and a payment rule, which is a function from reported valuation functions to a required payment from each player. We allow the allocation and payment rules to be randomized. A mechanism with allocation and payment rules A and p is truthful-in-expectation if every player always maximizes its expected payoff by truthfully reporting its valuation function, meaning that E[v i (A(v)) − p i (v)] ≥ E[v i (A(v ′ i , v −i )) − p i (v ′ i , v −i )](2) for every player i, (true) valuation function v i , (reported) valuation function v ′ i , and (reported) valuation functions v −i of the other players. The expectation in (2) is over the coin flips of the mechanism. If (2) holds for every flip of the coins, rather than merely in expectation, we call the mechanism universally truthful. The mechanisms that we design can be thought of as randomized variations on the classical VCG mechanism, as we explain next. 
Recall that the VCG mechanism is defined by the (generally intractable) allocation rule that selects the welfare-maximizing outcome with respect to the reported valuation functions, and the payment rule that charges each player i a bid-independent "pivot term" minus the reported welfare earned by the other players in the selected outcome. This (deterministic) mechanism is truthful; see e.g. [25]. Now let dist(S) denote the probability distributions over a feasible set S, and let D ⊆ dist(S) be a compact subset of them. The corresponding Maximal in Distributional Range (MIDR) allocation rule is defined as follows: given reported valuation functions v_1, ..., v_n, return an outcome that is sampled randomly from a distribution D* ∈ D that maximizes the expected welfare E_{x∼D}[Σ_i v_i(x)] over all distributions D ∈ D. Analogous to the VCG mechanism, there is a (randomized) payment rule that can be coupled with this allocation rule to yield a truthful-in-expectation mechanism (see [9]).

Combinatorial Auctions. In Combinatorial Auctions there is a set [m] = {1, 2, ..., m} of items, and a set [n] = {1, 2, ..., n} of players. Each player i has a valuation function v_i : 2^[m] → R+ that is normalized (v_i(∅) = 0) and monotone (v_i(A) ≤ v_i(B) whenever A ⊆ B). A feasible solution is an allocation (S_1, ..., S_n), where S_i denotes the items assigned to player i, and the {S_i}_i are mutually disjoint subsets of [m]. Player i's value for outcome (S_1, ..., S_n) is equal to v_i(S_i). The goal is to choose the allocation maximizing social welfare: Σ_i v_i(S_i).

Matroid Rank Sum Valuations. We now define matroid rank sum valuations. Relevant concepts from matroid theory are reviewed in Appendix C.1. Definition 2.1. A set function v : 2^[m] → R+ is a matroid rank sum (MRS) function if there exist matroid rank functions u_1, ..., u_κ : 2^[m] → R+ and non-negative weights w_1, ..., w_κ ∈ R+ such that v(S) = Σ_{ℓ=1}^κ w_ℓ u_ℓ(S) for all S ⊆ [m]. We do not assume any particular representation of MRS valuations, and require only oracle access to their (expected) values on certain distributions (see Section 2.5).
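As an illustration of the MRS definition above (with hypothetical weights and matroids chosen only for the example), one can build a small MRS valuation from uniform-matroid rank functions and verify by brute force that it is submodular:

```python
from itertools import chain, combinations

m = 4
items = range(m)

def rank_uniform(k):
    # Rank function of the uniform matroid U_{k,m}: r(S) = min(|S|, k).
    return lambda S: min(len(S), k)

# A hypothetical MRS valuation: a non-negative weighted sum of matroid
# rank functions (weights and matroids chosen only for illustration).
parts = [(2.0, rank_uniform(1)), (1.5, rank_uniform(3))]

def v(S):
    return sum(w * u(S) for w, u in parts)

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Brute-force submodularity check: the marginal value of adding item j can
# only shrink as the base set grows, i.e. v(S+j) - v(S) >= v(T+j) - v(T)
# whenever S is a subset of T and j lies outside T.
submodular = all(
    v(set(S) | {j}) - v(set(S)) >= v(set(T) | {j}) - v(set(T))
    for T in powerset(items)
    for S in powerset(T)
    for j in items
    if j not in T
)
```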
MRS functions include most concrete examples of monotone submodular functions that appear in the literaturethis includes coverage functions 6 , matroid weighted-rank functions 7 , and all convex combinations thereof. Moreover, as shown in [19], 1 − 1/e is the best approximation possible in polynomial time for combinatorial auctions with MRS valuations unless P = N P , even ignoring strategic considerations. That being said, we note that some interesting submodular functions -such as some budget additive functions 8 -are not in the matroid rank sum family (see Appendix D.2). Lotteries and Oracles A value oracle for a valuation v : 2 [m] → R takes as input a set S ⊆ [m], and returns v(S). We define an analogous oracle that takes in a description of a simple lottery over subsets of [m], and outputs the expectation of v over this lottery. Given a vector x ∈ [0, 1] m of probabilities on the items, let D x be the distribution over S ⊆ [m] that includes each item j in S independently with probability x j . We use F v (x) to denote the expected value of v(S) over draws S ∼ D x from this lottery. Definition 2.2. A lottery-value oracle for set function v : 2 [m] → R takes as input a vector x ∈ [0, 1] m , and outputs F v (x) = E S∼Dx [v(S)] = S⊆[m] v(S) j∈S x j j =S (1 − x j ).(3) We note that F v is simply the well-studied multi-linear extension of v (see for example [6,29]). In addition to being the natural randomized analog of a value oracle, a lottery-value oracle is easily implemented for various succinctly represented examples of MRS valuations, like explicit coverage functions (see Appendix A). We also note that lottery-value oracle queries can be approximated arbitrarily well with high probability using a polynomial number of value oracle queries (see [29]). Unfortunately, we are not able to reconcile the incurred sampling errors -small as they may be -with the requirement that our mechanism be exactly truthful. 
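For explicit coverage valuations, the lottery-value oracle of Definition 2.2 admits a simple closed form: each ground-set element is covered with probability 1 − Π_{j∈C_ℓ}(1 − x_j) (cf. Appendix A). A sketch with an invented set system, cross-checked against the 2^m-term definition (3):

```python
import math
from itertools import combinations

# Invented explicit coverage valuation: element -> list of covering items.
covers = {"u": [0, 1], "v": [1, 2], "w": [0, 2, 3]}
m = 4

def v(S):
    # v(S) = number of ground-set elements covered by the items in S.
    return sum(1 for C in covers.values() if any(j in S for j in C))

def F_brute(x):
    # Definition (3) verbatim: sum over all S of v(S) * Pr[D_x draws S].
    total = 0.0
    for r in range(m + 1):
        for S in combinations(range(m), r):
            Sset = set(S)
            p = 1.0
            for j in range(m):
                p *= x[j] if j in Sset else (1.0 - x[j])
            total += v(Sset) * p
    return total

def F_closed(x):
    # Coverage closed form: element ell is covered with probability
    # 1 - prod_{j in C_ell} (1 - x_j); F_v sums these by linearity.
    return sum(1.0 - math.prod(1.0 - x[j] for j in C) for C in covers.values())
```

The closed form runs in time polynomial in the set-system encoding, while the brute-force definition is exponential in m; agreement of the two on small instances is the point of the check below.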
We suspect that relaxing our solution concept to approximate truthfulness -also known as ǫ-truthfulness -would remove this difficulty, and allow us to relax our oracle model to the more traditional value oracles. Convex Rounding Framework Relaxations and Rounding Schemes Let Π be an optimization problem. A relaxation Π ′ of Π defines for every (S, w) ∈ Π a convex and compact relaxed feasible set R ⊆ R m that is independent of w (we suppress the dependence on S); and an extension w R : R → R of the objective w to the relaxed feasible set R. This gives the following relaxed optimization problem. maximize w R (x) subject to x ∈ R.(4) Generally, the extension is defined so that it is computationally tractable to find a point x ∈ R that maximizes w R (x) (possibly approximately). For example, S could be the allocations of m items to n bidders in a combinatorial auction, w(x) the welfare of an allocation, R the feasible region of a linear programming relaxation, and w R the natural linear extension of w to fractional allocations. The solution x ∈ R to the relaxed problem need not be in S. A rounding scheme for relaxation Π ′ of Π defines for each feasible set S of Π, and its corresponding relaxed set R, a (possibly randomized) function r : R → S. Since our rounding scheme will be randomized, we will frequently use r(x) to denote the distribution over S resulting from rounding the point x ∈ R. Commonly, the rounding scheme satisfies the following approximation guarantee: E y∼r(x) [w(y)] ≥ α · w R (x) for every x ∈ R. In this case, if x * maximizes w R over R and w R agrees with w on S, then E y∼r(x * ) [w(y)] ≥ α · max y∈S w(y). Convex Rounding Schemes and MIDR Our technique is motivated by the following observation: instead of solving the relaxed problem and subsequently rounding the solution, why not optimize directly on the outcome of the rounding scheme? In particular, consider the following relaxation of Π that "absorbs" rounding scheme r into the objective. 
maximize E_{y∼r(x)}[w(y)] subject to x ∈ R. (5) The solution to this problem rounds to the best possible distribution in the range of the rounding scheme, over all possible fractional solutions in R. While this problem is often intractable, it always leads to an MIDR allocation rule. Lemma 3.1. Algorithm 1 is an MIDR allocation rule.

Algorithm 1 (MIDR Allocation Rule via Optimizing over the Output of a Rounding Scheme). Parameter: feasible set S of Π. Parameter: relaxed feasible set R ⊆ R^m. Parameter: (randomized) rounding scheme r : R → S. Input: objective w : S → R satisfying (S, w) ∈ Π. Output: feasible solution z ∈ S. 1: Let x* maximize E_{y∼r(x)}[w(y)] over x ∈ R. 2: Let z ∼ r(x*).

We say a rounding scheme r : R → S is α-approximate for α ≤ 1 if w(x) ≥ E_{y∼r(x)}[w(y)] ≥ α · w(x) for every x ∈ S. When r is α-approximate, so is the allocation rule of Algorithm 1. For most rounding schemes in the approximation algorithms literature, the optimization problem (5) cannot be solved in polynomial time (assuming P ≠ NP). The reason is that for any rounding scheme that always rounds a feasible solution to itself, i.e., r(x) = x for all x ∈ S, an optimal solution to (5) is also optimal for (1). Thus, in this case, hardness of the original problem (1) implies hardness of (5). We conclude that we need to design rounding schemes with the unusual property that r(x) ≠ x for some x ∈ S. We call a (randomized) rounding scheme r : R → S convex if E_{y∼r(x)}[w(y)] is a concave function of x ∈ R. Under additional technical conditions, discussed in the context of combinatorial auctions in Appendix B, the convex program (5) can be solved efficiently (e.g., using the ellipsoid method). This reduces the design of a polynomial-time α-approximate MIDR algorithm to designing a polynomial-time α-approximate convex rounding scheme. Summarizing, Lemmas 3.1, 3.2, and 3.3 give the following informal theorem. Theorem 3.4.
(Informal) Let Π be a welfare-maximization optimization problem, and let Π ′ be a relaxation of Π. If there exists a polynomial-time, α-approximate, convex rounding scheme for Π ′ , then there exists a truthful-in-expectation, polynomial-time, α-approximate mechanism for Π. Of course, there is no reason a priori to believe that useful convex rounding schemes -let alone ones computable in polynomial time -exist for any important problems. We show in Section 4 that they do in fact exist and yield new results for an interesting class of combinatorial auctions. Combinatorial Auctions In this section, we use the framework of Section 3 to prove our main result. Theorem 4.1. There is a (1 − 1/e)-approximate, truthful-in-expectation mechanism for combinatorial auctions with matroid rank sum valuations in the lottery-value oracle model, running in expected poly(n, m) time. We formulate welfare maximization in combinatorial auctions as an optimization problem Π. An instance (S, w) ∈ Π is given by the following integer program with feasible set S contained in {0, 1} n×m . Variable x ij indicates whether item j is allocated to player i, and w(x) denotes the social welfare of allocation x. maximize w(x) = i v i ({j : x ij = 1}) subject to i x ij ≤ 1, for j ∈ [m]. x ij ∈ {0, 1} , for i ∈ [n], j ∈ [m].(6) We let the relaxed feasible set R = R(S) be the result of relaxing the constraints x ij ∈ {0, 1} of (6) to 0 ≤ x ij ≤ 1. We structure the proof of Theorem 4.1 as follows. We define the Poisson rounding scheme, which we denote by r poiss , in Section 4.1. We prove that r poiss is (1 − 1/e)-approximate (Lemma 4.3), and convex (Lemma 4.2). Lemmas 3.1, 3.2 and 4.3, taken together, imply that Algorithm 1 when instantiated for combinatorial auctions with r = r poiss , is a (1 − 1/e)-approximate MIDR allocation rule. Lemma 4.2 reduces implementing this allocation rule to solving a convex program. 
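The rounding step of this instantiation, the Poisson rounding scheme r_poiss of Section 4.1 (item j goes to the minimum-index player whose cumulative mass Σ_{i′≤i}(1 − e^{−x_i′j}) covers a uniform draw p_j), can be sketched as follows; the fractional point x is an arbitrary illustrative input, and the Monte Carlo check looks at the marginal Pr[item j → player i] = 1 − e^{−x_ij}:

```python
import math, random

def poisson_round(x, rng):
    # Poisson rounding (cf. Algorithm 2): independently for each item j,
    # draw p_j uniform in [0, 1] and assign j to the minimum-index player
    # a(j) with sum_{i <= a(j)} (1 - e^{-x_ij}) >= p_j; if no such player
    # exists, the item stays with the null player (None here).
    n, m = len(x), len(x[0])
    alloc = [None] * m
    for j in range(m):
        p, acc = rng.random(), 0.0
        for i in range(n):
            acc += 1.0 - math.exp(-x[i][j])
            if acc >= p:
                alloc[j] = i
                break
    return alloc

# Illustrative fractional allocation (rows = players, columns = items);
# column sums are <= 1, so the scheme is well defined.
x = [[0.6, 0.2],
     [0.4, 0.8]]
rng = random.Random(0)
trials = 200_000
# Empirical marginal Pr[item 0 -> player 0]; should approach 1 - e^{-0.6}.
hits = sum(poisson_round(x, rng)[0] == 0 for _ in range(trials))
```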
In Appendix B, we handle the technical and numerical issues related to solving convex programs. First, we prove that our instantiation of Algorithm 1 for combinatorial auctions can be implemented in expected polynomial time using the ellipsoid method, under a simplifying assumption on the numerical conditioning of our convex program (Lemma B.2). Then we show in Section B.3 that the previous assumption can be removed by slightly modifying our algorithm. Finally, we prove that truth-telling VCG payments can be computed efficiently in Lemma D.1. Taken together, these lemmas complete the proof of Theorem 4.1. In Appendix D.2, we discuss prospects for extending our result beyond matroid rank sum valuations.

The Poisson Rounding Scheme. In this section we define the Poisson rounding scheme, which we denote by r_poiss. The random map r_poiss : R → S renders the following optimization problem over R a convex optimization problem: maximize f(x) = E_{y∼r_poiss(x)}[w(y)] subject to Σ_i x_ij ≤ 1 for j ∈ [m], and 0 ≤ x_ij ≤ 1 for i ∈ [n], j ∈ [m]. (7) We define the Poisson rounding scheme as follows. Given a fractional solution x to (7), do the following independently for each item j: assign j to player i with probability 1 − e^{−x_ij}. (This is well defined since 1 − e^{−x_ij} ≤ x_ij for all players i and items j, and Σ_i x_ij ≤ 1 for all items j.) We make this more precise in Algorithm 2. For clarity, we represent an allocation as a function from items to players, with an additional null player * reserved for items that are left unassigned: for each item j, draw p_j uniformly from [0, 1]; if Σ_i (1 − e^{−x_ij}) ≥ p_j, let a(j) be the minimum index such that Σ_{i≤a(j)} (1 − e^{−x_ij}) ≥ p_j and assign j to player a(j); otherwise set a(j) = *. Lemma 4.3. r_poiss is a (1 − 1/e)-approximate rounding scheme. Proof. Let S_1, ..., S_n be an allocation, and let x be the integer point of (7) corresponding to S_1, ..., S_n. Let (S′_1, ..., S′_n) ∼ r_poiss(x). It suffices to show that E[Σ_i v_i(S′_i)] ≥ (1 − 1/e) · Σ_i v_i(S_i).
By definition of the Poisson rounding scheme, S ′ i includes each j ∈ S i independently with probability 1 − 1/e. Submodularity implies that E[v i (S ′ i )] ≥ (1 − 1/e) · v i (S i ) - Warm-up: Convexity for Coverage Valuations In this section, we prove the special case of Lemma 4.2 for coverage valuations, as defined in Section 2.4. Fix n, m, and coverage valuations {v i } n i=1 , and let R denote the feasible set of mathematical program (7). Let (S 1 , . . . , S n ) ∼ r poiss (x) be the (random) allocation computed by the Poisson rounding scheme for point x ∈ R. The expected welfare E[w(r poiss (x))] can be written as E[ n i=1 v i (S i )], where the expectation is taken over the internal random coins of the rounding scheme. By linearity of expectation, as well as the fact that the sum of concave functions is concave, it suffices to show that E[v i (S i )] is a concave function of x for an arbitrary player i with coverage valuation v i . Fix player i, and use x j , v, and S as short-hand for x ij , v i , and S i respectively. Recall that v is a coverage function; let L be a ground set and A 1 , . . . , A m ⊆ L be such that v i (T ) = | ∪ j∈T A j | for each T ⊆ [m]. The Poisson rounding scheme includes each item j in S independently with probability 1 − e −x j . The expected value of player i can be written as follows. E [v(S)] = E[| ∪ j∈S A j |] = ℓ∈L Pr[ℓ ∈ ∪ j∈S A j ] Since the sum of concave functions is concave, it suffices to show that Pr[ℓ ∈ ∪ j∈S A j ] is concave in x for each ℓ ∈ L. We can interpret Pr[ℓ ∈ ∪ j∈S A j ] as the probability that element ℓ is covered by an item in S, where j ∈ [m] covers ℓ ∈ L if ℓ ∈ A j . For each ℓ ∈ L, let C ℓ be the set of items that cover ℓ. Element ℓ ∈ L is covered by S precisely when C ℓ ∩ S = ∅. Each item j ∈ C ℓ is included in S independently with probability 1 − e −x j . 
Therefore, the probability ℓ ∈ L is covered by S can be re-written as follows: Pr[ℓ ∈ ∪ j∈S A j ] = 1 − j∈C ℓ e −x j = 1 − exp   − j∈C ℓ x j   .(8) Form (8) is the composition of the concave function g(y) = 1 − e −y with the affine function y → j∈C ℓ x j . It is well-known that composing a concave function with an affine function yields another concave function (see e.g. [4]). Therefore, Pr[ℓ ∈ ∪ j∈S A j ] is concave in x for each ℓ ∈ L, as needed. This completes the proof. Convexity for Matroid Rank Sum Valuations In this section, we will prove Lemma 4.2 in its full generality. First, we define a discrete analogue of a Hessian matrix for set functions, and show that these discrete Hessians are negative semi-definite for matroid rank sum functions. H v S (j, k) = v(S ∪ {j, k}) − v(S ∪ {j}) − v(S ∪ {k}) + v(S)(9) for j, k ∈ [m]. Claim 4.5. If v : 2 [m] → R + is a matroid rank sum function, then H v S is negative semi-definite for each S ⊆ [m]. Proof. We observe that H v S is linear in v, and recall that a non-negative weighted-sum of negative semi-definite matrices is negative semi-definite. Therefore, it is sufficient to prove this claim when v is a matroid rank function. Let A binary matrix encoding a symmetric and transitive relation is a block diagonal matrix where each diagonal block is an all-ones or all-zeros sub-matrix. It is known, and easy to prove, that such a matrix is positive semi-definite. Therefore H v S is negative semi-definite. We now return to Lemma 4.2. Fix n, m, and MRS valuations {v i } n i=1 , and let R denote the feasible set of mathematical program (7). Let (S 1 , . . . , S n ) ∼ r poiss (x) be the (random) allocation computed by the Poisson rounding scheme for point x ∈ R. The expected welfare E[w(r poiss (x))] can be written as E [ n i=1 v i (S i )], where the expectation is taken over the internal random coins of the rounding scheme. 
By linearity of expectation, as well as the fact that the sum of concave functions is concave, it suffices to show that E[v i (S i )] is a concave function of x for an arbitrary player i with MRS valuation v i . Fix player i, and use x j , v, S as short-hand for x ij , v i , S i respectively. The Poisson rounding scheme includes each item j in S independently with probability 1 − e −x j . We can now write the expected value of player i as the following function G v : R m → R: G v (x 1 , . . . , x m ) = S⊆[m] v(S) j∈S (1 − e −x j ) j =S e −x j(10) The following claim, combined with Claim 4.5, completes the proof of Lemma 4.2. Claim 4.6. If all discrete Hessians of v are negative semi-definite, then G v is concave. Proof. Assume H v S is negative semi-definite for each S ⊆ [m] . We work with G v as expressed in Equation (10). We will show that the Hessian matrix of G v at an arbitrary x ∈ R m is negative semi-definite, which is a sufficient condition for concavity. We take the mixed-derivative of G v with respect to x j and x k (possibly j = k). ∂ 2 G v (x) ∂x j ∂x k = S⊆[m]\{j,k} ℓ∈S (1 − e −x ℓ ) ℓ∈[m]\S e −x ℓ v(S) − v(S ∪ {j}) − v(S ∪ {k}) + v(S ∪ {j, k}) = S⊆[m] ℓ∈S (1 − e −x ℓ ) ℓ∈[m]\S e −x ℓ v(S) − v(S ∪ {j}) − v(S ∪ {k}) + v(S ∪ {j, k}) = S⊆[m] ℓ∈S (1 − e −x ℓ ) ℓ∈[m]\S e −x ℓ H v S (j, k) The first equality follows by grouping the terms of Equation (10) ▽ 2 G v (x) = S⊆[m] ℓ∈S (1 − e −x ℓ ) ℓ∈[m]\S e −x ℓ H v S(11) A non-negative weighted-sum of negative semi-definite matrices is negative semi-definite. This completes the proof of the claim. A Combinatorial Auctions with Explicit Coverage Valuations In this section, we apply our mechanism to explicitly represented coverage valuations. This demonstrates the utility of our mechanism in a concrete, non-oracle-based setting, and moreover allows us to establish an interesting separation result. 
Specifically, we show that (1) The (1 − 1/e)approximate mechanism of Theorem 4.1 can be implemented in expected polynomial-time for this problem, and (2) No polynomial-time, universally-truthful, VCG-based 9 mechanism guarantees an approximation ratio of o(n), unless N P ⊆ P/poly. The approximation ratio of 1 − 1/e is the best possible in polynomial-time for this problem -even without incentive constraints -assuming P = N P [19]. Ours is the first separation of its kind in the computational complexity model. 10 An n player, m item instance combinatorial auctions with explicit coverage valuations is described as follows. For each player i, there is a finite set L i , and a family A i 1 , . . . , A i m of subsets of L i . The valuation function of player i is then defined as v i (S) = | ∪ j∈S A i j |. The set system L i , A i j m j=1 is encoded explicitly as a bipartite graph. A.1 A Truthful-in-Expectation Mechanism As discussed previously, MRS valuations include all coverage valuations. Therefore, in order to implement the MIDR allocation rule of Section 4 for this problem, it suffices to answer lotteryvalue queries in time polynomial in the number of bits encoding the instance. Proof. Let v : 2 [m] → R + be a coverage valuation presented explicitly as a set system (L, {A j } m j=1 ), and let x ∈ [0, 1] m . Let S be a random set that includes each j ∈ [m] independently with probability x j . The outcome of the lottery value oracle of v evaluated at x is equal to the sum, over all ℓ ∈ L, of the probability that ℓ is "covered" by S -specifically, ℓ∈L Pr[ℓ ∈ ∪ j∈S A j ]. It is easy to verify that a term of this sum can be expressed as the following closed form expression. Pr[ℓ ∈ ∪ j∈S A j ] = 1 − j:A j ∋ℓ (1 − x j ) This expression can be evaluated in time polynomial in the representation of the set system. This completes the proof. Claim A.1 implies the following Theorem. Theorem A.2. 
There is an expected polynomial-time, (1−1/e)-approximate, truthful-in-expectation mechanism for combinatorial auctions with explicit coverage valuations. 9 A universally-truthful mechanism is VCG-based if it is a randomization over deterministic truthful mechanisms that each implement a maximal in range allocation rule -the special case of MIDR where each distribution in the distributional range is supported on a single allocation. 10 We note that this separation is meaningful because there are no known universally-truthful polynomial-time mechanisms -VCG-based or otherwise -for this problem that achieve an approximation ratio better than min(n, √ m). In particular, the result of [8] uses demand queries, which can not be answered in polynomial time for explicit coverage valuations by the results of [19] and [16]. A.2 A Lower-bound on Universally Truthful VCG-Based Mechanisms We use the following special case of [5,Theorem 1.2]: If a succinct combinatorial auction problem satisfies the regularity conditions on the valuations defined in [5], and moreover the 2-player version of the problem is APX hard, then no polynomial-time, universally-truthful, VCG-based mechanism guarantees an approximation ratio of o(n). It is routine to verify the regularity assumptions of [5] for explicit coverage valuations. APXhardness of the 2-player problem follows by an elementary reduction from the APX-hard problem max-cut. Given an instance of max-cut on a graph G = (V, E), we let [m] = V , L 1 = L 2 = E. For e ∈ E, i ∈ {1, 2}, and j ∈ V , we let e ∈ A i j if j is one of the endpoints of edge e. It is easy to check that the welfare maximizing allocation of the resulting 2-player instance of combinatorial auctions corresponds to the maximum cut of G. Moreover, using the fact that the optimal objective value of max-cut is at least |E|/2, it is elementary to verify that the reduction preserves hardness of approximation up to a constant factor. 
Therefore, combinatorial auctions with explicit coverage valuations and 2 players is APX-hard. This yields the following Theorem.

B Solving the Convex Program. In this section, we overcome some technical difficulties related to the solvability of convex programs. We show in Section B.1 that, in the lottery-value oracle model, the four conditions for "solvability" of convex programs, as stated in Fact C.3, are easily satisfied for convex program (7). However, an additional challenge remains: "solving" a convex program, as in Definition C.2, returns an approximately optimal solution. Indeed, the optimal solution of a convex program may be irrational in general, so this is unavoidable. We show how to overcome this difficulty if we settle for polynomial runtime in expectation. While the optimal solution x* of (7) cannot be computed explicitly, the random variable r_poiss(x*) can be sampled in expected polynomial time. The key idea is the following: sampling the random variable r_poiss(x*) rarely requires precise knowledge of x*. Depending on the coin flips of r_poiss, we decide how accurately we need to solve convex program (7) in order to compute r_poiss(x*). Roughly speaking, we show that the probability of requiring a (1 − ǫ)-approximation falls exponentially in 1/ǫ. As a result, we can sample r_poiss(x*) in expected polynomial time. We implement this plan in Section B.2 under the simplifying assumption that convex program (7) is well-conditioned, i.e., is "sufficiently concave" everywhere. In Section B.3, we show how to remove that assumption by slightly modifying our algorithm.

B.1 Approximating the Convex Program. Claim B.1. There is an algorithm for Combinatorial Auctions with MRS valuations in the lottery-value oracle model that takes as input an instance of the problem and an approximation parameter ǫ > 0, runs in poly(n, m, log(1/ǫ)) time, and returns a (1 − ǫ)-approximate solution to convex program (7).
Proof. It suffices to show that the four conditions of Fact C.3 are satisfied in our setting. The first three are immediate from elementary combinatorial optimization (see for example [28]). It remains to show that the first-order oracle, as defined in Fact C.3, can be implemented in polynomial time in the lottery-value oracle model. The objective f(x) of convex program (7) can, by definition, be written as f(x) = ∑_i G_{v_i}(x_i), where v_i is the valuation function of player i, x_i is the vector (x_{i1}, ..., x_{im}), and G_{v_i} is as defined in (10). By definition, G_{v_i}(x_i) is the outcome of querying the lottery-value oracle of player i with (1 − e^{−x_{i1}}, ..., 1 − e^{−x_{im}}). Therefore, we can evaluate f(x) using n lottery-value queries, one for each player. It remains to show that we can also evaluate the (multivariate) gradient ∇f(x) of f(x). Using definition (10), we take the partial derivative corresponding to x_{ij}. By rearranging the sum appropriately, we get that

∂f/∂x_{ij}(x) = e^{−x_{ij}} [ F_{v_i}((1 − e^{−x_{i1}}, ..., 1 − e^{−x_{im}}) ∨ 1_j) − F_{v_i}((1 − e^{−x_{i1}}, ..., 1 − e^{−x_{im}}) ∧ 0_j) ],

where F_{v_i} is as defined in Equation (3). Here, ∨ and ∧ denote the entry-wise maximum and minimum, respectively, 1_j denotes the vector with all entries equal to 0 except for a 1 at position j, and 0_j denotes the vector with all entries equal to 1 except for a 0 at position j. It is clear that this entry of the gradient of f can be evaluated using two lottery-value queries. Therefore, ∇f(x) can be evaluated using 2n lottery-value queries, 2 for each player. This completes the proof of Claim B.1.

B.2 The Well-Conditioned Case

In this section, we make the following simplifying assumption: the objective function f(x) of convex program (7), when restricted to any line in the feasible set R, has a second derivative of magnitude at least λ, for a parameter λ ≥ (∑_i v_i([m]))/2^{poly(n,m)}.

Lemma B.2. Under this assumption, Algorithm 1 with r = r_poiss can be implemented in expected poly(n, m) time.

Let x* be the optimal solution to convex program (7). Algorithm 1 allocates items according to the distribution r_poiss(x*).
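As an aside, the two-query gradient formula from the proof of Claim B.1 above can be checked numerically. The sketch below is our own illustration: F_v is computed by brute-force enumeration as the multilinear extension, the valuation is a small uniform-matroid rank (an MRS function), and the formula is compared against a central finite difference; since the argument vector lies in [0,1]^m, the operations ∨ 1_j and ∧ 0_j simply pin coordinate j to 1 or 0.

```python
import math
from itertools import combinations

def F(v, p):
    # Multilinear extension F_v(p): expected value of v(S) when each item j
    # is included in S independently with probability p[j] (brute force).
    m = len(p)
    total = 0.0
    for r in range(m + 1):
        for S in combinations(range(m), r):
            prob = 1.0
            for j in range(m):
                prob *= p[j] if j in S else 1 - p[j]
            total += prob * v(set(S))
    return total

def G(v, x):
    # Expected value under Poisson rounding: G_v(x) = F_v(1 - e^{-x}).
    return F(v, [1 - math.exp(-xj) for xj in x])

v = lambda S: min(len(S), 2)   # rank of a uniform matroid (illustrative choice)
x = [0.3, 0.7, 0.1]
j = 1
p = [1 - math.exp(-xj) for xj in x]
# Closed form: pin coordinate j to 1 (resp. 0) and scale by e^{-x_j}.
closed = math.exp(-x[j]) * (F(v, p[:j] + [1.0] + p[j + 1:])
                            - F(v, p[:j] + [0.0] + p[j + 1:]))
h = 1e-6
numeric = (G(v, x[:j] + [x[j] + h] + x[j + 1:])
           - G(v, x[:j] + [x[j] - h] + x[j + 1:])) / (2 * h)
```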
The Poisson rounding scheme, as described in Algorithm 2, requires making m independent decisions, one for each item j. Therefore, we fix item j and show how to simulate this decision. It suffices to do the following in expected polynomial time: draw p_j ∈ [0, 1] uniformly at random, and find the minimum index a(j) (if any) such that ∑_{i ≤ a(j)} (1 − e^{−x*_{ij}}) ≥ p_j. For most realizations of p_j, this can be decided using only coarse estimates x̃_{ij} of x*_{ij}. Assume we have an estimation oracle for x* that, on input δ, returns a δ-estimate x̃ of x*: specifically, |x̃_{ij} − x*_{ij}| ≤ δ for each i. When p_j falls outside the "uncertainty zones" of x̃, i.e., when |p_j − ∑_{i' ≤ i} (1 − e^{−x̃_{i'j}})| > δn for each i ∈ [n], it is easy to see that we can correctly determine a(j) by using x̃ in lieu of x*. The total measure of the uncertainty zones of x̃ is at most 2n²δ, therefore p_j lands outside the uncertainty zones with probability at least 1 − 2n²δ. The following claim shows that if the estimation oracle for x* can be implemented in time polynomial in log(1/δ), then we can simulate the Poisson rounding procedure in expected polynomial time.

Claim B.3. Let x* be the optimal solution of convex program (7). Assume access to a subroutine B(δ) that returns a δ-estimate of x* in time poly(n, m, log(1/δ)). Then Algorithm 1 with r = r_poiss can be simulated in expected poly(n, m) time.

Proof. It suffices to show that we can simulate the allocation of an item j by Algorithm 2 on input x*. The simulation proceeds as follows: draw p_j ∈ [0, 1] uniformly at random. Start with δ = δ_0 = 1/(2n²) and let x̃ = B(δ). While |p_j − ∑_{i' ≤ i} (1 − e^{−x̃_{i'j}})| ≤ δn for some i ∈ [n] (i.e., p_j may fall inside an "uncertainty zone"), let δ = δ/2, x̃ = B(δ), and repeat. After the loop terminates, we have a sufficiently accurate estimate of x* to calculate a(j) as in Algorithm 2. It is easy to see that the above procedure is a faithful simulation of Algorithm 2 on x*.
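The simulation loop in the proof of Claim B.3, restricted to a single item j, can be sketched as follows; `oracle(delta)` stands in for the subroutine B(δ) restricted to column j, and all names are ours.

```python
import math

def allocate_item(pj, oracle, n):
    # Simulate the Poisson rounding decision for one item j using only
    # delta-estimates of the column (x*_{1j}, ..., x*_{nj}).  oracle(delta)
    # must return a list of n estimates within delta of x* coordinate-wise.
    delta = 1.0 / (2 * n * n)
    while True:
        xt = oracle(delta)
        sums, prefix = [], 0.0
        for i in range(n):
            prefix += 1 - math.exp(-xt[i])
            sums.append(prefix)
        if all(abs(pj - s) > delta * n for s in sums):
            # p_j clears every uncertainty zone: the answer computed from
            # the estimate agrees with the answer under x* itself.
            for i, s in enumerate(sums):
                if s >= pj:
                    return i      # minimum index whose prefix sum reaches p_j
            return None           # the item remains unallocated
        delta /= 2                # halve delta, re-query B, and retry

# Usage with a hypothetical exact column x*_{.j} = (0.5, 0.5), whose prefix
# sums are roughly 0.393 and 0.787.
exact_oracle = lambda delta: [0.5, 0.5]
```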
It remains to bound its expected running time. Let δ_k = 1/(2^{k+1} n²) denote the value of δ at the k-th iteration. By assumption, the k-th iteration takes poly(n, m, log(1/δ_k)) = poly(n, m, log(2^{k+1} n²)) = poly(n, m, k) time. The probability that the procedure does not terminate after k iterations is at most 2n²δ_k = 1/2^k. Taken together, these two facts and a simple geometric summation imply that the expected runtime is polynomial in n and m.

It remains to show that the estimation oracle B(δ) can be implemented in poly(n, m, log(1/δ)) time. At first blush, one may expect that the ellipsoid method can be used in the usual manner here. However, there is one complication: we require an estimate x̃ that is close to x* in solution space, rather than in terms of objective value. Using our assumption on the curvature of f(x), we will reduce finding a δ-estimate of x* to finding a (1 − ǫ(δ))-approximate solution to convex program (7). The dependence of ǫ on δ will be such that ǫ ≥ poly(δ)/2^{poly(n,m)}, so we can invoke Claim B.1 to deduce that B(δ) can be implemented in poly(n, m, log(1/δ)) time. Let ǫ = ǫ(δ) = δ²λ / (2 ∑_i v_i([m])). Plugging in the definition of λ, we deduce that ǫ ≥ δ²/2^{poly(n,m)}, which is the desired dependence. It remains to show that if x̃ is a (1 − ǫ)-approximate solution to (7), then x̃ is also a δ-estimate of x*. Using the fact that f(x) is concave, and moreover that its second derivative has magnitude at least λ, it is a simple exercise to bound the distance of any point x from the optimal point x* in terms of its sub-optimality f(x*) − f(x), as follows:

f(x*) − f(x) ≥ (λ/2) ||x − x*||².  (12)

Assume x̃ is a (1 − ǫ)-approximate solution to (7). Equation (12) implies that ||x̃ − x*||² ≤ (2/λ) ǫ f(x*) = (δ² / ∑_i v_i([m])) f(x*) ≤ δ², where the last inequality follows from the fact that ∑_i v_i([m]) is an upper bound on the optimal value f(x*). Therefore, ||x̃ − x*|| ≤ δ, as needed. This completes the proof of Lemma B.2.
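The geometric summation in the runtime bound can be made concrete: iteration k is reached with probability at most 2^{−(k−1)} and costs poly(n, m, k), so with n and m fixed the expected cost is dominated by a convergent series. Below, k³ is an illustrative stand-in for the polynomial cost (our choice, not the paper's).

```python
# Iteration k is reached with probability at most 2^{-(k-1)}; substituting
# an illustrative polynomial cost k**3, the expected total cost is bounded
# by the convergent series sum_k k^3 * 2^{-(k-1)}.
expected_cost_bound = sum(k ** 3 * 2.0 ** (-(k - 1)) for k in range(1, 200))
# Extending the horizon changes nothing measurable: the tail vanishes.
tail_check = sum(k ** 3 * 2.0 ** (-(k - 1)) for k in range(1, 400))
```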
B.3 Guaranteeing Good Conditioning

In this section, we propose a modification r⁺_poiss of the Poisson rounding scheme r_poiss. We will argue that r⁺_poiss satisfies all the properties of r_poiss established so far, with one exception: the approximation guarantee of Lemma 4.3 is reduced to 1 − 1/e − 2^{−2mn}. Then we will show that r⁺_poiss satisfies the curvature assumption of Lemma B.2, demonstrating that said assumption may be removed. Therefore Algorithm 1, instantiated with r = r⁺_poiss for combinatorial auctions with MRS valuations in the lottery-value oracle model, is (1 − 1/e − 2^{−2mn})-approximate and can be implemented in expected poly(n, m) time. Finally, we show in Remark B.4 how to recover the 2^{−2mn} term to get a clean 1 − 1/e approximation ratio, as claimed in Theorem 4.1.

Let µ = 2^{−2mn}. We define r⁺_poiss in Algorithm 3. Intuitively, r⁺_poiss at first makes a tentative allocation (S_1, ..., S_n) using r_poiss, and lets β be the fraction of items allocated. Then, with small probability µ, it cancels said allocation, resetting (S_1, ..., S_n) = (∅, ∅, ..., ∅); in that case it draws q_2 ∈ [0, 1] uniformly at random, and if q_2 ∈ [0, β], it chooses a random "lucky winner" i* uniformly at random and gives him all the items. The motivation behind this seemingly bizarre definition of r⁺_poiss is purely technical: as we will see, it can be thought of as adding "concave noise" to r_poiss.

We can write the expected welfare E[w(r⁺_poiss(x))] as follows, using linearity of expectation and the fact that β is independent of the choice of i*:

E[w(r⁺_poiss(x))] = E[(1 − µ) w(r_poiss(x)) + µ β v_{i*}([m])] = (1 − µ) E[w(r_poiss(x))] + µ E[β] E[v_{i*}([m])] = (1 − µ) E[w(r_poiss(x))] + µ E[β] (∑_i v_i([m]))/n.

Observe that r_poiss allocates an item j with probability ∑_i (1 − e^{−x_{ij}}). Therefore, the expectation of β is (∑_{ij} (1 − e^{−x_{ij}}))/m.
This gives:

E[w(r⁺_poiss(x))] = (1 − µ) E[w(r_poiss(x))] + (µ/(mn)) (∑_i v_i([m])) ∑_{i,j} (1 − e^{−x_{ij}}).  (13)

It is clear that the expected welfare when using r = r⁺_poiss is within a factor 1 − µ = 1 − 2^{−2mn} of the expected welfare when using r = r_poiss in the instantiation of Algorithm 1. Using Lemma 4.3, we conclude that r⁺_poiss is a (1 − 1/e − 2^{−2mn})-approximate rounding scheme. Moreover, using Lemma 4.2, as well as the fact that (1 − e^{−x_{ij}}) is a concave function, we conclude that r⁺_poiss is a convex rounding scheme. This establishes the analogues of Lemmas 4.2 and 4.3 for r⁺_poiss. It is elementary to verify that our proof of Lemma B.2 can be adapted to r⁺_poiss as well. It remains to show that r⁺_poiss is "sufficiently concave"; this establishes that the conditioning assumption of Section B.2 is unnecessary for r⁺_poiss. We will show that expression (13) is a concave function with curvature of magnitude at least λ = (∑_{i=1}^n v_i([m])) / (e mn 2^{2mn}) everywhere. Since the curvature of concave functions is always non-positive, and moreover the curvature of the sum of two functions is the sum of their curvatures, it suffices to show that the second term of the sum (13) has curvature of magnitude at least λ. We note that the curvature of ∑_{ij} (1 − e^{−x_{ij}}) has magnitude at least e^{−1} over x ∈ [0, 1]^{n×m}. Therefore, the curvature of the second term of (13) has magnitude at least (µ/(mn)) (∑_i v_i([m])) e^{−1} = λ, as needed.

Remark B.4. In this section, we sacrificed 2^{−2mn} in the approximation ratio in order to guarantee expected polynomial runtime of our algorithm even when convex program (7) is not well-conditioned. This loss can be recovered to get a clean 1 − 1/e approximation as follows. Given our (1 − 1/e − 2^{−2mn})-approximate MIDR algorithm A, construct the following algorithm A′: given an instance of combinatorial auctions, A′ runs A on the instance with probability 1 − e·2^{−2mn}, and with the remaining probability solves the instance optimally in exponential time O(2^{2mn}).
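The arithmetic behind this mixture can be verified numerically: running A with probability 1 − eµ and an exact algorithm otherwise yields expected ratio (1 − eµ)(1 − 1/e − µ) + eµ, which expands to 1 − 1/e + eµ², hence at least 1 − 1/e. A quick check of our own, for a few values of mn:

```python
import math

e = math.e

def mixed_ratio(mn):
    # Expected approximation ratio of A': run A (ratio 1 - 1/e - mu) with
    # probability 1 - e*mu, and an exact algorithm (ratio 1) otherwise.
    mu = 2.0 ** (-2 * mn)
    return (1 - e * mu) * (1 - 1 / e - mu) + e * mu
```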
It was shown in [12] that a random composition of MIDR mechanisms is MIDR, therefore A′ is MIDR. The expected runtime of A′ is bounded by the expected runtime of A plus e·2^{−2mn} · O(2^{2mn}) = O(1). Finally, the expected approximation ratio of A′ is the weighted average of the approximation ratio of A and the optimal ratio 1, and is at least (1 − e·2^{−2mn})(1 − 1/e − 2^{−2mn}) + e·2^{−2mn} ≥ 1 − 1/e.

C Additional Preliminaries

C.1 Matroid Theory

In this section, we review some basics of matroid theory. For a more comprehensive reference, we refer the reader to [26]. A matroid M is a pair (X, I), where X is a finite ground set, and I is a non-empty family of subsets of X satisfying the following two properties. (1) Downward closure: if S belongs to I, then so do all subsets of S. (2) The exchange property: whenever T, S ∈ I with |T| < |S|, there is some x ∈ S \ T such that T ∪ {x} ∈ I. Elements of I are often referred to as the independent sets of the matroid. Subsets of X that are not in I are often called dependent. We associate with matroid M a set function rank_M : 2^X → N, known as the rank function of M, defined as follows: rank_M(A) = max_{S∈I} |S ∩ A|. Equivalently, the rank of a set A in matroid M is the maximum size of an independent set contained in A.

C.2 Convex Optimization

In this section, we distill some basics of convex optimization. For more details, see [2].

Definition C.1. A maximization problem is given by a set Π of instances (P, c), where P is a subset of some Euclidean space, c : P → R, and the goal is to maximize c(x) over x ∈ P. We say Π is a convex maximization problem if for every (P, c) ∈ Π, P is a compact convex set and c : P → R is concave. If c : P → R₊ for every instance of Π, we say Π is non-negative.

Definition C.2.
We say a non-negative maximization problem Π is R-solvable in polynomial time if there is an algorithm that takes as input the representation of an instance I = (P, c) ∈ Π, where we use |I| to denote the number of bits in the representation, and an approximation parameter ǫ, and in time poly(|I|, log(1/ǫ)) outputs x ∈ P such that c(x) ≥ (1 − ǫ) max_{y∈P} c(y).

Fact C.3. Consider a non-negative convex maximization problem Π. If the following are satisfied, then Π is R-solvable in polynomial time using the ellipsoid method. We let I = (P, c) denote an instance of Π, and let m denote the dimension of the ambient Euclidean space.

1. Polynomial dimension: m is polynomial in |I|.
2. Starting ellipsoid: There is an algorithm that computes, in time poly(|I|), a point c ∈ R^m, a matrix A ∈ R^{m×m}, and a number V ∈ R such that the following hold, where E(c, A) denotes the ellipsoid given by center c and linear transformation A.
3. Separation oracle for P: There is an algorithm that takes as input I and x ∈ R^m, and in time poly(|I|, |x|), where |x| denotes the size of the representation of x, outputs "yes" if x ∈ P, and otherwise outputs h ∈ R^m such that h^T x < h^T y for every y ∈ P.
4. First-order oracle for c: There is an algorithm that takes as input I and x ∈ R^m, and in time poly(|I|, |x|) outputs c(x) ∈ R and ∇c(x) ∈ R^m.

D Additional Technical Details and Commentary

D.1 Computing Payments

In this section, we show how to efficiently compute truth-telling payments for our mechanism. In fact, as shown below, this is possible for any maximal-in-distributional-range allocation rule for combinatorial auctions given as a black box.

Lemma D.1. Let A be an MIDR allocation rule for combinatorial auctions, and let v_1, ..., v_n be input valuations. Assume black-box access to A, and value oracle access to {v_i}_{i=1}^n. We can compute, with poly(n) overhead in runtime, payments p_1, …
, p_n such that E[p_i] equals the VCG payment of player i for MIDR allocation rule A on input v_1, ..., v_n.

Proof. Without loss of generality, it suffices to show how to compute p_1. Let 0 : 2^{[m]} → R be the valuation evaluating to 0 at each bundle. Recall (see e.g. [25]) that the VCG payment of player 1 is equal to

E_{T ∼ A(0, v_2, ..., v_n)} [∑_{i=2}^n v_i(T_i)] − E_{S ∼ A(v_1, ..., v_n)} [∑_{i=2}^n v_i(S_i)].  (14)

Let (S_1, ..., S_n) be a sample from A(v_1, ..., v_n), and let (T_1, ..., T_n) be a sample from A(0, v_2, ..., v_n). Let p_1 = ∑_{i=2}^n v_i(T_i) − ∑_{i=2}^n v_i(S_i). Using linearity of expectation, it is easy to see that the expectation of p_1 is equal to the expression in (14). This completes the proof.

We note that the mechanism resulting from Lemma D.1 is individually rational in expectation, and each payment is non-negative in expectation. We leave open the question of whether it is possible to enforce individual rationality and non-negative payments for our mechanism ex post.

D.2 Beyond Matroid Rank Sum Valuations

In this section, we discuss the prospect of extending our result beyond matroid rank sum valuations. First, we argue that our restriction to a subset of submodular functions is not merely an artifact of our analysis. Specifically, we exhibit a submodular function that is not in the matroid rank sum family, such that the Poisson rounding scheme can be non-convex when a player has this function as their valuation. Then, we briefly argue that our mechanism may yet apply to some valuations that are not matroid rank sums. We define a budget-additive function v on four items {1, 2, 3, 4}. Three of the items are "small", one item is "big", and the budget equals the value of the big item. We can show that v is not a matroid rank sum function by invoking Claim 4.5. Specifically, one can manually check that the discrete Hessian matrix H^v_∅ of v at ∅ (see Definition 4.4) is not negative semi-definite.
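This can be confirmed mechanically. The check below uses the second-difference reading of the discrete Hessian, H[j][k] = v(S ∪ {j,k}) − v(S ∪ {j}) − v(S ∪ {k}) + v(S) (Definition 4.4 is not reproduced in this excerpt), together with one concrete choice of parameters, three small items of value 1 and one big item of value 2 with budget 2; the text fixes only the structure, not the numbers.

```python
from itertools import combinations

# Budget-additive valuation: three "small" items of value 1 and one "big"
# item of value 2, with the budget equal to the big item's value.  (The
# specific numbers are our illustrative choice.)
values = [1, 1, 1, 2]
budget = 2
v = lambda S: min(budget, sum(values[j] for j in S))
m = len(values)
items = set(range(m))

def subsets(xs):
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

# v is submodular: marginal values are diminishing.
assert all(v(A | {x}) - v(A) >= v(B | {x}) - v(B)
           for B in subsets(items) for A in subsets(B) for x in items - B)

# Discrete Hessian at the empty set (second-difference form).
H = [[v({j, k}) - v({j}) - v({k}) + v(set()) for k in range(m)] for j in range(m)]

# The direction z certifies that H is not negative semi-definite:
# the quadratic form z^T H z is strictly positive.
z = [1, 1, 1, -1]
quad = sum(z[j] * H[j][k] * z[k] for j in range(m) for k in range(m))
```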
Moreover, for a player with valuation v, Poisson rounding renders the player's expected value function G_v(x) (Equation (10)) non-concave in x: by Equation (11), the Hessian matrix of G_v(x) approaches the discrete Hessian H^v_∅ as x tends to zero. Since H^v_∅ is not negative semi-definite, G_v(x) is non-concave for x near zero. We note that we can construct a large family of similar counterexamples by simply increasing the number of "small" items in v. Finally, we observe that our mechanism may apply to some valuations that are not matroid rank sums: we used only two properties of MRS functions, namely that their discrete Hessian matrices are negative semi-definite (Claim 4.5, which is used to prove Lemma 4.2) and that they are submodular (used to prove Lemma 4.3). Therefore, our result extends directly to the class of all set functions satisfying both of these properties. We leave open the question of whether there exist interesting functions in this class that are not matroid rank sums. More generally, understanding the class of set functions with negative semi-definite discrete Hessian matrices, in particular the relationship of this class to other classes of set functions studied in the literature, may be an interesting direction for future inquiry.
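Returning to the payment computation of Section D.1, the two-sample estimator from the proof of Lemma D.1 can be sketched as follows; the black-box rule and the valuations in the usage example are our own toy choices, not the paper's.

```python
import random

def payment_estimate(A, valuations, rng):
    # Unbiased estimator of player 1's VCG payment (Equation (14)) for an
    # MIDR allocation rule A given as a black box: A(vals, rng) -> bundles.
    zero = lambda S: 0.0
    T = A([zero] + valuations[1:], rng)   # one sample with player 1 zeroed out
    S = A(valuations, rng)                # one sample with the true valuations
    others = range(1, len(valuations))
    return (sum(valuations[i](T[i]) for i in others)
            - sum(valuations[i](S[i]) for i in others))

# Toy deterministic rule: a single item {0} goes to the player valuing it most.
def greedy_single_item(vals, rng):
    winner = max(range(len(vals)), key=lambda i: vals[i]({0}))
    return [({0} if i == winner else set()) for i in range(len(vals))]

v1 = lambda S: 3.0 if 0 in S else 0.0
v2 = lambda S: 2.0 if 0 in S else 0.0
# For this deterministic rule the estimator is exact: player 1's payment is
# the externality imposed on player 2.
p1 = payment_estimate(greedy_single_item, [v1, v2], random.Random(0))
```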
Besides the complexity in time or in number of messages, a common approach for analyzing distributed algorithms is to look at the assumptions they make on the underlying network. We investigate this question from the perspective of network dynamics. In particular, we ask how a given property on the evolution of the network can be rigorously proven as necessary or sufficient for a given algorithm. The main contribution of this paper is to propose the combination of two existing tools in this direction: local computations by means of graph relabelings, and evolving graphs. Such a combination makes it possible to express fine-grained properties on the network dynamics, then examine what impact those properties have on the execution at a precise, intertwined, level. We illustrate the use of this framework through the analysis of three simple algorithms, then discuss general implications of this work, which include (i) the possibility to compare distributed algorithms on the basis of their topological requirements, (ii) a formal hierarchy of dynamic networks based on these requirements, and (iii) the potential for mechanization induced by our framework, which we believe opens a door towards automated analysis and decision support in dynamic networks.
Distributed Computing in Dynamic Networks: Towards a Framework for Automated Analysis of Algorithms ⋆
The past decade has seen a burst of research in the field of communication networks. This is particularly true for dynamic networks, due to the arrival, or impending deployment, of a multitude of applications involving new types of communicating entities such as wireless sensors, smartphones, satellites, vehicles, or swarms of mobile robots. These contexts offer both unprecedented opportunities and challenges for the research community, which is striving to design appropriate algorithms and protocols. Behind the apparent unity of these networks lies a great diversity of assumptions on their dynamics. One end of the spectrum corresponds to infrastructured networks, in which only terminal nodes are dynamic; these include 3G/4G telecommunication networks, access-point-based Wi-Fi networks, and to some extent the Internet itself. At the other end lie delay-tolerant networks (DTNs), which are characterized by the possible absence of an end-to-end communication route at any instant. The defining property of DTNs actually reflects many types of real-world contexts, from satellite or vehicular networks to pedestrian or social animal networks (e.g. birds, ants, termites). In-between lies a number of environments whose capabilities and limitations require specific attention. A consequence of this diversity is that a given protocol for dynamic networks may prove appropriate in one context, while performing poorly (or not at all) in another. The most common approach for evaluating protocols in dynamic networks is to run simulations, and use a given mobility model (or set of traces) to generate topological changes during the execution. These parameters must faithfully reflect the target context to yield an accurate evaluation. Likewise, the comparison between two protocols is only meaningful if similar traces or mobility models are used. (⋆ A preliminary version of this paper appeared in [9].)
This state of affairs often makes it ambiguous and difficult to judge the appropriateness of solutions based on the sole experimental results reported in the literature. The problem is even more complex if we consider the possible biases induced by further parameters like the size of the network, the density of nodes, the choice of PHY or MAC layers, bandwidth limitations, latency, buffer size, etc. The fundamental requirements of an algorithm on the network dynamics will likely be better understood from an analytical standpoint, and some recent efforts have been carried out in this direction. They include the works by O'Dell et al. [23] and Kuhn et al. [18], in which the impact of given assumptions on the network dynamics is studied for some basic problems of distributed computing (broadcast, counting, and election). These works have in common an effort to make the dynamics amenable to analysis by exploiting properties of a static essence: even though the network is possibly highly dynamic, it remains connected at every instant. The approach of population protocols [1,2] also contributed to a more analytical understanding. Here, no assumptions are made on the network connectivity at a given instant, and yet the same fundamental idea of looking at dynamic networks through the eyes of static properties is leveraged by the concept of graph of interaction, in which every entity is assumed to interact infinitely often with its neighbors (and thus, dynamics is reduced to a scheduling problem in static networks). Besides the fact that the above assumptions are strong (we will show how strong in comparison to others in a hierarchy), we believe that the very attempt to flatten the time dimension prevents one from understanding the true requirements of an algorithm on the network dynamics. As a trivial example, consider the broadcasting of a piece of information in the network depicted in Figure 1.
The possibility to complete the broadcast in this scenario clearly depends on which node is the initial emitter: a and b may succeed, while c cannot. Why? How can we express the intuitive property that the topology evolution must have with respect to the emitter and the other nodes? Flattening the time dimension without keeping information on the ordering of events would obviously lose some important specificities, such as the fact that nodes a and c are in a non-symmetrical configuration. How can we prove, more generally, that a given assumption on the dynamics is necessary or sufficient for a given problem (or algorithm)? How can we find (and define) properties that relate to finer-grain aspects than recurrence or, more generally, regularities? Even when intuitive, rigorous characterizations of this kind might be difficult to obtain without appropriate models and formalisms; a conceptual shift is needed. We investigate these questions in the present paper. Contrary to the aforementioned approaches, in which a given context is first considered and the feasibility of problems is then studied in this particular context, we suggest the somewhat reverse approach of considering first a problem, then trying to characterize its necessary and/or sufficient conditions (if any) in terms of network dynamics. We introduce a general-purpose analysis framework based on the combination of 1) local computations by means of graph relabelings [19], and 2) an appropriate formalism for dynamic networks, evolving graphs [15], which formalizes the evolution of the network topology as an ordered sequence of static graphs. The strengths of this combination are several. First, the use of local computations makes it possible to obtain general impossibility results that do not depend on a particular communication model (e.g., message passing, mailboxes, or shared memory). Second, the use of evolving graphs makes it possible to express fine-grain network properties that remain temporal in essence.
(For instance, a necessary condition for the broadcast problem above is the existence of a temporal path, or journey, from the emitter to every other node, a statement that can be expressed using monadic second-order logic on evolving graphs.) The combination of graph relabelings and evolving graphs makes it possible to study the execution of an algorithm as an intertwined sequence of topological events and computations, leading to a precise characterization of their relation. The framework we propose should be considered as a conceptual framework to guide the analysis of distributed algorithms. As such, it is specified at a high level of abstraction and does not impose the choice of, say, a particular logic (e.g. first-order vs. LMSO) or scope of computation (e.g. pairwise vs. starwise interaction), although all our examples assume LMSO and pairwise interactions. Finally, we believe this framework could pave the way to decision support systems or mechanized analysis in dynamic networks, both of which are discussed as possible applications. Local computations and evolving graphs are first presented in Section 2, together with central properties of dynamic networks (such as connectivity over time, whose intuitive implications for the broadcast problem were explored in various works, see e.g. [3,6]). We describe the analysis framework based on the combination of both tools in Section 3. This includes the reformulation of an execution in terms of relabelings over a sequence of graphs, as well as new formulations of what a necessary or sufficient condition is, in terms of the existence and non-existence of such a relabeling sequence. We illustrate these theoretical tools in Section 4 through the analysis of three basic examples: one broadcast algorithm and two counting algorithms, one of which can also be used for election. (Note that our framework was recently applied to the problem of mutual exclusion in [16].)
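The broadcast example of Figure 1 and the journey condition just mentioned can be simulated directly. Since the figure itself is not reproduced here, the schedule below is a hypothetical stand-in with the same qualitative behavior: edge (a, b) exists strictly before edge (b, c), so journeys from a and b reach everyone, while c cannot reach a.

```python
def reachable_over_time(schedule, source):
    # schedule: a time-ordered list of edge sets; within each interval,
    # information may hop across any number of present edges, but it can
    # never use an edge from an interval that has already passed.
    informed = {source}
    for edges in schedule:
        changed = True
        while changed:
            changed = False
            for (u, w) in edges:
                if (u in informed) != (w in informed):
                    informed |= {u, w}
                    changed = True
    return informed

# Hypothetical stand-in for Figure 1: (a,b) present first, (b,c) afterwards.
schedule = [{("a", "b")}, {("b", "c")}]
```

Note how the ordering of events is what breaks the symmetry between a and c: the flattened (time-ignoring) union of both snapshots is the same path a-b-c for either emitter.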
The rest of the paper is devoted to exploring some implications of the proposed approach, articulated around the two major motifs of classification (Section 5) and mechanization (Section 6). The section on classification discusses how the conditions resulting from analysis translate into more general properties that define classes of evolving graphs. The relations of inclusion between these classes are examined, and interestingly enough, they allow us to organize the classes as a connected hierarchy. We show how this classification can reciprocally be used to evaluate and compare algorithms on the basis of their topological requirements. The section on mechanization discusses to what extent the tasks related to assessing the appropriateness of an algorithm in a given context can be automated. We provide canonical and efficient ways of checking the inclusion of a given network trace in all classes resulting from the analyses in this paper, and mention some ongoing work around the use of the Coq proof assistant in the context of local computations, which we believe could be extended to evolving graphs. Section 7 eventually concludes with some remarks and open problems.

Abstracting communications through local computations and graph relabelings

Distributed algorithms can be expressed using a variety of communication models (e.g. message passing, mailboxes, shared memory). Although a vast majority of algorithms is designed in one of these models, predominantly the message passing model, the very fact that one of them is chosen implies that the obtained results (e.g. positive or negative characterizations and associated proofs) are limited to the scope of this model. This problem of diversity among formalisms and results, already pointed out twenty years ago in [20], led researchers to consider higher abstractions when studying fundamental properties of distributed systems. Local computations and graph relabelings were jointly proposed in this perspective in [19].
These theoretical tools make it possible to represent a distributed algorithm as a set of local interaction rules that are independent from the effective communications. Within the formalism of graph relabelings, the network is represented by a graph whose vertices and edges are associated with labels that represent the algorithmic state of the corresponding nodes and links. An interaction rule is then defined as a transition pattern (precondition, action), where precondition and action relate to these label values. Since the interactions are local, each transition pattern must involve a limited and connected subset of vertices and edges. Figure 2 shows different scopes of computation, which are not necessarily the same for preconditions and actions. The approach taken by local computations shares a number of traits with that of population protocols, more recently introduced in [1,2]. Both approaches work at a similar level of abstraction and are concerned with characterizing what can or cannot be done in distributed computing. As far as the scope of computation is concerned, population protocols can be seen as a particular case of local computations focusing on pairwise interaction (see Figure 2(c)). The main difference between these tools (if any, besides that of originating from distinct lines of research) has more to do with the role given to the underlying synchronization between nodes. While local computations typically see this as a lower layer that is itself abstracted (whenever possible), population protocols consider the execution of an algorithm given some explicit properties of an interaction scheduler. This particularity led population protocols to become an appropriate tool to study distributed computing in dynamic networks, by reducing the network dynamics to specific properties of the scheduler (e.g., every pair of nodes interacts infinitely often).
Several variants of population protocols have subsequently been introduced (e.g., assuming various types of fairness of the scheduler and graphs of interaction); however, we believe the analogy between dynamics and scheduling has its limits (e.g., in reality two nodes that interact once will not necessarily interact twice, and the precise order in which a group of nodes interacts matters all the more when interactions do not repeat infinitely often). We advocate looking at the dynamics at a finer scale, without always assuming infinite recurrence on the scheduler (such a scheduler can still be formulated as a specific class of dynamics), with the purpose of studying the precise relationship between an algorithm and the dynamics underlying its execution. To remain as general as possible, we are building on top of local computations. One may ask whether remaining this general is relevant, and whether the various models of Figure 2 are in fact equivalent in power (e.g., could we simulate any of them by repetition of another?). The answer is negative, due to different levels of atomicity (e.g. models 2(a) vs. 2(c)) and symmetry breaking (e.g. models 2(c) vs. 2(d)). The reader is referred to [14] for a detailed hierarchy of these models. Note that the equivalences between models would have to be reconsidered anyway in a dynamic context, since the dynamics may prevent the possibility of applying several steps of a weaker model to simulate a stronger one.

We now describe the graph relabeling formalism traditionally associated with local computations. Let the network topology be represented by a finite undirected loopless graph G = (V_G, E_G), with V_G representing the set of nodes and E_G representing the set of communication links between them. Two vertices u and v are said to be neighbors if and only if they share a common edge (u, v) in E_G.
Let λ : V_G ∪ E_G → L* be a mapping that associates every vertex and edge from G with one or several labels from an alphabet L (which denotes all the possible states these elements can take). The state of a given vertex v, resp. edge e, at a given time t is denoted by λ_t(v), resp. λ_t(e). The whole labeled graph is represented by the pair (G, λ), noted G. According to [19], a complete algorithm can be given by a triplet {L, I, P}, where I is the set of initial states and P is a set of relabeling rules (transition patterns) representing the distributed interactions; these rules are considered uniform (i.e., the same for all nodes). Algorithm 1 below (A_1 for short) gives the example of a one-rule algorithm that represents the general broadcasting scheme discussed in the introduction. We assume here that the label I (resp. N) stands for the state informed (resp. non-informed). Propagating the information thus consists in repeating this single rule, starting from the emitter vertex, until all vertices are labeled I.

Algorithm 1 A propagation algorithm coded by a single relabeling rule (r_1).

Let us repeat that an algorithm does not specify how the nodes synchronize, i.e., how they select each other to perform a common computation step. From the abstraction level of local computations, this underlying synchronization is seen as an implementation choice (dedicated procedures were designed to fit the various models, e.g., local elections [21] and local rendezvous [22] for starwise and pairwise interactions, respectively). A direct consequence is that the execution of an algorithm at this level may not be deterministic. Another consequence is that the characterization of sufficient conditions on the dynamics will additionally require assumptions on the synchronization; we suggest later a generic progression hypothesis that serves this purpose.
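Since the body of rule r_1 is given in the paper only as a figure, the following Python sketch (ours; the function names are hypothetical) reconstructs its intended behavior: an N-labeled vertex becomes I when it interacts with an I-labeled neighbor, and the rule is repeated until no precondition holds.

```python
# A minimal sketch of Algorithm 1 (A_1), not the paper's code: rule r1
# relabels an N-labeled vertex to I when it interacts with an I-labeled
# neighbor, over a static graph given as a list of edges.

def r1(labels, u, v):
    """Try to apply rule r1 on edge (u, v); return True if it fired."""
    if labels[u] == "I" and labels[v] == "N":
        labels[v] = "I"
        return True
    if labels[v] == "I" and labels[u] == "N":
        labels[u] = "I"
        return True
    return False

def propagate(edges, emitter):
    """Repeat r1 until no rule applies (fixpoint), starting from the emitter."""
    vertices = {x for e in edges for x in e}
    labels = {x: "N" for x in vertices}
    labels[emitter] = "I"
    changed = True
    while changed:
        changed = any(r1(labels, u, v) for (u, v) in edges)
    return labels
```

On a connected static graph this informs every vertex; on a disconnected one, only the emitter's component, which already hints at the role topology plays in the analysis below.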
Note that the three algorithms provided in this paper rely on pairwise interactions, but the concepts and methodology involved apply to local computations in general.

Expressing dynamic network properties using Evolving Graphs

In a different context, evolving graphs [15] were proposed as a combinatorial model for dynamic networks. The initial purpose of this model was to provide a suitable representation of fixed schedule dynamic networks (FSDNs), in order to compute optimal communication routes such as shortest, fastest and foremost journeys [6]. In such a context, the evolution of the network was known beforehand. In the present work, we use evolving graphs for a very different purpose, namely to express properties of the network dynamics. It is important to keep in mind that the analyzed algorithms are never supposed to know the evolution of the network beforehand. An evolving graph is a structure in which the evolution of the network topology is recorded as a sequence of static graphs S_G = G_1, G_2, ..., where every G_i = (V_i, E_i) corresponds to the network topology during an interval of time [t_i, t_{i+1}). Several models of dynamic networks can be captured by this formalism, depending on the meaning given to the sequence of dates S_T = t_1, t_2, .... For example, these dates could correspond to every time step in a discrete-time system (and therefore be taken from a time domain T ⊆ N), or to variable-size time intervals in continuous-time systems (T ⊆ R), where each t_i is the date when a topological event occurs in the system (e.g., appearance or disappearance of an edge in the graph); see for example Figure 3. We consider continuous-time evolving graphs in general. (Our results actually hold for any of the above meanings.) Formally, we consider an evolving graph as the structure G = (G, S_G, S_T), where G is the union of all G_i in S_G, called the underlying graph of G.
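As an illustration only (this encoding is ours, not part of the formalism), an evolving graph can be stored as the sequence of dates S_T together with one edge set per interval:

```python
# A possible encoding of an evolving graph G = (G, S_G, S_T): a sorted list
# of dates and, for each interval [t_i, t_{i+1}), the edge set E_i of G_i.

class EvolvingGraph:
    def __init__(self, dates, edge_sets):
        # edge_sets[i] is valid over [dates[i], dates[i+1])
        assert len(edge_sets) == len(dates) - 1
        self.dates = list(dates)                          # S_T
        self.edge_sets = [set(map(frozenset, es)) for es in edge_sets]

    def underlying_edges(self):
        """Edges of the underlying graph G, the union of all G_i."""
        return set().union(*self.edge_sets) if self.edge_sets else set()

    def vertices(self):
        return {v for e in self.underlying_edges() for v in e}
```

Using `frozenset` for edges makes them undirected, matching the computation models considered here.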
Henceforth, we will simply use the notations V and E to denote V(G) and E(G), the sets of vertices and edges of the underlying graph G. Since we focus here on computation models that are undirected, we logically consider evolving graphs as being themselves undirected. The original version of evolving graphs considered directed edges, as well as possible restrictions on bandwidth and latency. Finally, we will use the notation G_{[t_a,t_b)} to denote the temporal subgraph G′ = (G′, S′_G, S′_T) built from G = (G, S_G, S_T) such that G′ = G, S′_G = {G_i ∈ S_G : t_i ∈ [t_a, t_b)}, and S′_T = {t_i ∈ S_T ∩ [t_a, t_b)}.

Fig. 3. An example evolving graph, shown (a) as a sequence of static graphs, one per period t_i → t_{i+1}, and (b) as a compact representation in which each edge carries its presence intervals.

Basic concepts and notations (given an evolving graph G = (G, S_G, S_T)). As a notational convenience, we use a presence function ρ : E × T → {0, 1} that indicates whether a given edge is present at a given date; that is, for e ∈ E and t ∈ [t_i, t_{i+1}) (with t_i, t_{i+1} ∈ S_T), ρ(e, t) = 1 ⇐⇒ e ∈ E_i. A central concept in dynamic networks is that of a journey, which is the temporal extension of the concept of path. A journey can be thought of as a path over time from one vertex to another. Formally, a sequence of couples J = {(e_1, σ_1), (e_2, σ_2), ..., (e_k, σ_k)} such that {e_1, e_2, ..., e_k} is a walk in G and {σ_1, σ_2, ..., σ_k} is a non-decreasing sequence of dates from T, is a journey in G if and only if ρ(e_i, σ_i) = 1 for all i ≤ k. We will say that a given journey is strict if every couple (e_i, σ_i) is taken from a distinct graph of the sequence S_G. Let us denote by J* the set of all possible journeys in an evolving graph G, and by J*_{(u,v)} ⊆ J* those journeys starting at node u and ending at node v.
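The presence function ρ and the journey test can be sketched as follows (a self-contained illustration in our own encoding: each undirected edge maps to a list of presence intervals [a, b)):

```python
# Sketch of the presence function ρ(e, t) and of the journey test: a
# sequence of (edge, date) couples with non-decreasing dates, forming a
# walk from a start vertex, each edge being present at its date.

def rho(intervals, e, t):
    """ρ(e, t) = 1 iff edge e is present at date t."""
    return any(a <= t < b for (a, b) in intervals.get(frozenset(e), []))

def is_journey(intervals, start, couples):
    """couples = [((u, v), sigma), ...]. Checks: non-decreasing dates,
    consecutive edges chained into a walk from `start`, and ρ = 1 at
    each traversal date."""
    cur, prev = start, None
    for ((u, v), s) in couples:
        if (prev is not None and s < prev) or cur not in (u, v):
            return False
        if not rho(intervals, (u, v), s):
            return False
        cur = v if cur == u else u   # move to the other endpoint
        prev = s
    return True
```

The strictness condition (one couple per graph of S_G) is omitted here for brevity; it would additionally require the dates to fall in distinct intervals.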
If a journey exists from a node u to a node v, that is, if J*_{(u,v)} ≠ ∅, then we say that u can reach v in a graph G, and allow the simplified notations u ⇝ v (in G), or u ⇝_st v if this can be done through a strict journey. Clearly, the existence of journeys is not symmetrical: u ⇝ v does not imply v ⇝ u; this holds regardless of whether the edges are directed or not, because the time dimension creates its own level of direction, as the example of Figure 1 makes clear. Given a node u, the set {v ∈ V : u ⇝ v} is called the horizon of u. We assume that every node belongs to its own horizon by means of an empty journey. Here are examples of journeys in the evolving graph of Figure 3:
- J_{(a,e)} = {(ab, σ_1 ∈ [t_1, t_2)), (bc, σ_2 ∈ [σ_1, t_2)), (ce, σ_3 ∈ [t_2, t_3))} is a journey from a to e;
- J_{(a,e)} = {(ac, σ_1 ∈ [t_0, t_1)), (cd, σ_2 ∈ [σ_1, t_1)), (de, σ_3 ∈ [t_3, t_4))} is another journey from a to e;
- J_{(a,e)} = {(ac, σ_1 ∈ [t_0, t_1)), (cd, σ_2 ∈ [t_1, t_2)), (de, σ_3 ∈ [t_3, t_4))} is yet another (strict) journey from a to e.

We will say that the network is connected over time iff ∀u, v ∈ V, u ⇝ v ∧ v ⇝ u. The concept of connectivity over time is not new and goes back at least to [3], in which it was called eventual connectivity (although recent literature on DTNs has used this term for another concept, which we renamed eventual instant-connectivity to avoid confusion in Section 5).

The proposed analysis framework

As a reminder of the previous section, the algorithmic state of the network is given by a labeling on the corresponding graph G, then noted G. We denote by G_i the graph covering the period [t_i, t_{i+1}) in the evolving graph G = (G, S_G, S_T), with G_i ∈ S_G and t_i, t_{i+1} ∈ S_T. Notice that the symbol G is used here with two different meanings: first as the generic letter representing the network, and second to denote the underlying graph of G. Both notations are kept as is in the following, while preventing ambiguous uses in the text.
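The horizon of a vertex can be computed by an earliest-arrival relaxation over the presence intervals. The sketch below is our own (the relaxation strategy is an assumption, not the paper's method); it also illustrates the asymmetry of ⇝.

```python
# Sketch: horizon(u) = {v : u ⇝ v}, computed by earliest-arrival
# relaxation. Edges (frozensets) carry presence intervals [a, b); a
# journey may cross an edge at any date in an interval, no earlier
# than the date of its previous hop.
import math

def horizon(intervals, u):
    arrival = {u: -math.inf}     # u reaches itself via the empty journey
    changed = True
    while changed:
        changed = False
        for e, spans in intervals.items():
            x, y = tuple(e)
            for (a, b) in spans:
                for s, t in ((x, y), (y, x)):   # undirected edge, both ways
                    if s in arrival:
                        at = max(arrival[s], a)
                        if at < b and at < arrival.get(t, math.inf):
                            arrival[t] = at
                            changed = True
    return set(arrival)
```

On the three-edge example used earlier (ab then bc, then ce in a later interval), a reaches e but e cannot reach b: the time dimension directs the journeys even though every edge is undirected.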
Putting the pieces together: relabelings over evolving graphs

For an evolving graph G = (G, S_G, S_T) and a given date t_i ∈ S_T, we denote by G_i the labeled graph (G_i, λ_{t_i+ε}) representing the state of the network just after the topological event of date t_i, and by G_{i[} the labeled graph (G_{i−1}, λ_{t_i−ε}) representing the network state just before that event. We note Event_{t_i}(G_{i[}) = G_i. A number of distributed operations may occur between two consecutive events. Hence, for a given algorithm A and two consecutive dates t_i, t_{i+1} ∈ S_T, we denote by R_A[t_i, t_{i+1}) one of the possible relabeling sequences induced by A on the graph G_i during the period [t_i, t_{i+1}). We note R_A[t_i, t_{i+1})(G_i) = G_{i+1[}. For simplicity, we will sometimes use the notation r_i(u, v) ∈ R_A[t, t′) to indicate that the rule r_i is applied on the edge (u, v) during [t, t′). A complete execution sequence from t_0 to t_k is then given by an alternated sequence of relabeling steps and topological events, which we note

X = R_A[t_{k−1}, t_k) ∘ Event_{t_{k−1}} ∘ ... ∘ Event_{t_i} ∘ R_A[t_{i−1}, t_i) ∘ ... ∘ Event_{t_1} ∘ R_A[t_0, t_1)(G_0)

This combination is illustrated in Figure 4. As mentioned at the end of Section 2.1, the execution of a local computation algorithm is not necessarily deterministic, and may depend on the way nodes select one another at a lower level before applying a relabeling rule. Hence, we denote by X_{A/G} the set of all possible execution sequences of an algorithm A over an evolving graph G.

Methodology

Below are some proposed methods and concepts to characterize the requirements of an algorithm in terms of topology dynamics. More precisely, we use the above combination to define the concept of topology-related necessary or sufficient conditions, and discuss how a given property can be proved to be so.
Fig. 4. Combination of Graph Relabelings and Evolving Graphs.

Objectives of an algorithm

Given an algorithm A and a labeled graph G, the state one wishes to reach can be given by a logic formula P on the labels of vertices (and edges, if appropriate). In the case of the propagation scheme (Algorithm 1, Section 2.1), such a terminal state could be that all nodes are informed: P_1(G) = ∀v ∈ V, λ(v) = I. The objective O_A is then defined as the satisfaction of the desired property by the end of the execution, that is, on the final labeled graph G_k. In this example, we consider O_{A_1} = P_1(G_k). The opportunity must be taken here to distinguish between two fundamentally different types of objectives in dynamic networks. In the example above, as well as in the other examples of this paper, we consider algorithms whose objective is to reach a given property by the end of the execution. Another type of objective in dynamic networks is the maintenance of a desired property despite the network evolution (e.g., covering every connected component of the network by a single spanning tree). In this case, the objective must not be formulated in terms of a terminal state, but rather in terms of a satisfactory state, for example in-between every two consecutive topological events, i.e., O_A = ∀G_i ∈ S_G, P(G_{i+1[}). This actually corresponds to a self-stabilization scenario where the recurrent faults are the topological events, and the network must stabilize in-between any two consecutive faults. We restrict ourselves to the first type of objective in the following. Because the abstraction level of these computations is not concerned with the underlying synchronization, no topological property can guarantee, alone, that the nodes will effectively communicate and collaborate to reach the desired objective. Therefore, the characterization of sufficient conditions requires additional assumptions on the synchronization.
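The alternation X = R ∘ Event ∘ ... ∘ R(G_0) can be mimicked in a few lines of Python (our sketch, not the paper's code; the relabeling phase below runs rule r_1 of Algorithm 1 to a fixpoint within each interval, which is one possible synchronization choice, stronger than strictly needed):

```python
# Schematic execution: alternate relabeling phases R_[t_i, t_{i+1}) with
# topological events Event_{t_i} (modeled as switching to the next edge set).

def broadcast_phase(labels, edges):
    """One relabeling phase: rule r1 of Algorithm 1, repeated to a fixpoint
    over the current static graph (an assumed, rather strong scheduler)."""
    changed = True
    while changed:
        changed = False
        for (u, v) in edges:
            if labels[u] == "I" and labels[v] == "N":
                labels[v] = "I"; changed = True
            elif labels[v] == "I" and labels[u] == "N":
                labels[u] = "I"; changed = True

def execute(edge_sets, labels, phase):
    """X = R_[t_{k-1},t_k) o Event_{t_{k-1}} o ... o R_[t_0,t_1)(G_0)."""
    for edges in edge_sets:    # Event_{t_i}: the topology becomes G_i
        phase(labels, edges)   # R_[t_i, t_{i+1}): rules applied in-between
    return labels

def P1(labels):
    """Objective predicate of Algorithm 1: all vertices informed."""
    return all(l == "I" for l in labels.values())
```

Running the same edge sets in a different order can make the objective unreachable, which is precisely the kind of topology-related condition studied next.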
We propose below a generic progression hypothesis applicable to the pairwise interaction model (Figure 2(c)). This assumption may or may not be considered realistic depending on the expected rate of topological changes.

Progression Hypothesis 1 (PH_1). In every time interval [t_i, t_{i+1}), with t_i ∈ S_T, each vertex is able to apply at least one relabeling rule with each of its neighbors, provided the rule preconditions are already satisfied at time t_i (and still satisfied at the time the rule is applied).

In the case when starwise interaction (see Figure 2(b)) is considered, this hypothesis could be partially relaxed by assuming only that every node applies at least one rule in each interval.

Examples of basic analyses

This section illustrates the proposed framework through the analysis of three basic algorithms, namely the propagation algorithm previously given, and two counting algorithms (one centralized, one decentralized). The results obtained here are used in the next section to highlight some implications of this work.

Analysis of the propagation algorithm

We want to prove that the existence of a journey (resp. strict journey) between the emitter and every other node is a necessary (resp. sufficient) condition to achieve O_{A_1}. Our purpose is not so much to emphasize the results themselves - they are rather intuitive - as to illustrate how the characterizations can be written in a rigorous way.

Condition 1 ∀v ∈ V, emitter ⇝ v (there exists a journey between the emitter and every other vertex).

Lemma 1 ∀v ∈ V : λ_{t_0}(v) = N, λ_{σ>t_0}(v) = I =⇒ ∃u ∈ V, ∃σ′ ∈ [t_0, σ) : λ_{σ′}(u) = I ∧ u ⇝ v in G_{[σ′,σ)} (if a non-emitter vertex has the information at some point, this implies the existence of an incoming journey from a vertex that had the information before). Proof.
∀v ∈ V : λ_{t_0}(v) = N,
(λ_{σ>t_0}(v) = I =⇒ ∃v′ ∈ V : r_1(v′, v) ∈ R_{A_1}[t_0, σ)) (if a non-emitter vertex has the information at some point, then it has necessarily applied rule r_1 with another vertex)
=⇒ ∃v′ ∈ V, ∃σ′ ∈ [t_0, σ) : λ_{σ′}(v′) = I ∧ ρ((v′, v), σ′) = 1 (an edge existed at a previous date between this vertex and a vertex labeled I).
By transitivity,
=⇒ ∃v′′ ∈ V, ∃σ′′ ∈ [t_0, σ) : λ_{σ′′}(v′′) = I ∧ v′′ ⇝ v in G_{[σ′′,σ)} (a journey existed between a vertex labeled I and this vertex).

Proposition 1 Condition 1 (C_1) is a necessary condition on G to allow Algorithm 1 (A_1) to reach its objective O_{A_1}.

Analysis of a centralized counting algorithm

Like the propagation algorithm, the distributed algorithm presented below assumes a distinguished vertex at initial time. This vertex, called the counter, is in charge of counting all the vertices it meets during the execution (its successive neighbors in the changing topology). Hence, the counter vertex has two labels (C, i), meaning that it is the counter (C) and that it has already counted i vertices (initially 1, i.e., itself). The other vertices are labeled either F or N, depending on whether they have already been counted or not. The counting rule is given by r_1 in Algorithm 2 below.

Algorithm 2 Counting algorithm with a pre-selected counter.
rule r_1: (C, i), N → (C, i + 1), F

Objective of the algorithm. Under the assumption of a fixed number of vertices, the algorithm reaches a terminal state when all vertices are counted, which corresponds to the fact that no more vertices are labeled N: P_2 = ∀v ∈ V, λ(v) ≠ N. The objective of Algorithm 2 is to satisfy this property at the end of the execution (O_{A_2} = P_2(G_k)). We prove here that the existence of an edge at some point of the execution between the counter node and every other node is a necessary and sufficient condition.
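Under PH_1, Algorithm 2 can be simulated over an evolving graph by offering every present edge to the counter in each interval. The sketch below is ours (names and the simulation loop are assumptions, not the paper's code):

```python
# Sketch of Algorithm 2 (A_2): rule r1: (C, i), N -> (C, i+1), F.
# PH_1 is modeled by trying r1 on every edge of every interval.

def run_a2(vertices, edge_sets, counter):
    labels = {v: "N" for v in vertices}
    labels[counter] = "C"
    count = 1                      # the counter starts at (C, 1): itself
    for edges in edge_sets:        # one static graph G_i per interval
        for (u, v) in edges:
            for a, b in ((u, v), (v, u)):
                if labels[a] == "C" and labels[b] == "N":
                    labels[b] = "F"
                    count += 1
    return count, labels
```

When some vertex never shares an edge with the counter (Condition C_3 violated), it remains labeled N and the count is wrong, matching Proposition 3 below.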
Condition 3 ∀v ∈ V \{counter}, ∃t_i ∈ S_T : (counter, v) ∈ E_i, or equivalently, with the notion of underlying graph, ∀v ∈ V \{counter}, (counter, v) ∈ E.

Proposition 3 For a given evolving graph G representing the topological evolutions that take place during the execution of A_2, Condition 3 (C_3) is a necessary condition on G to allow A_2 to reach its objective O_{A_2}.

Proof.
¬C_3(G) =⇒ ∃v ∈ V \{counter} : (counter, v) ∉ E
=⇒ ∃v ∈ V \{counter} : ∀t_i ∈ S_T \{t_k}, r_1(counter, v) ∉ R_{A_2}[t_i, t_{i+1})
=⇒ ∃v ∈ V \{counter} : ∀X ∈ X_{A_2/G}, λ_{t_k}(v) = N
=⇒ ∄X ∈ X_{A_2/G} : P_2(G_k)
=⇒ ¬O_{A_2}

Proposition 4 Under Progression Hypothesis 1 (noted PH_1 below), C_3 is also a sufficient condition on G to guarantee that A_2 will reach its objective O_{A_2}.

Proof.
C_3(G) =⇒ ∀v ∈ V \{counter}, ∃t_i ∈ S_T : (counter, v) ∈ E_i
by PH_1, =⇒ ∀v ∈ V \{counter}, ∃t_i ∈ S_T : r_1(counter, v) ∈ R_{A_2}[t_i, t_{i+1})
=⇒ ∀v ∈ V \{counter}, λ_{t_k}(v) ≠ N
=⇒ ∀X ∈ X_{A_2/G}, P_2(G_k)
=⇒ O_{A_2}

Analysis of a decentralized counting algorithm

Contrary to the previous algorithm, Algorithm 3 below does not require a distinguished initial state for any vertex. Indeed, all vertices are initialized with the same labels (C, 1), meaning that they are all initially counters that have already included themselves into the count. Then, depending on the topological evolutions, the counters opportunistically merge by pairs (rule r_1 in Algorithm A_3). In the optimistic scenario, at the end of the execution only one node remains labeled C, and its second label gives the total number of vertices in the graph. A similar counting principle was used in [1] to illustrate population protocols; a possible application of this protocol was anecdotally mentioned, consisting of monitoring a flock of birds for fever, with the role of counters being played by sensors.

Algorithm 3 Decentralized counting algorithm.
initial states: {(C, 1)} (for all vertices)
alphabet: {C, F, N*}
rule r_1: (C, i), (C, j) → (C, i + j), F

Objective of the algorithm. Under the assumption of a fixed number of vertices, this algorithm reaches the desired state when exactly one vertex remains labeled C: P_3 = ∃u ∈ V : ∀v ∈ V \{u}, λ(u) = C ∧ λ(v) ≠ C. As with the two previous algorithms, the objective here is to reach this property by the end of the execution: O_{A_3} = P_3(G_k). The characterization below proves that the existence of a vertex belonging to the horizon of every other vertex is a necessary condition for this algorithm.

Condition 4 ∃v ∈ V : ∀u ∈ V, u ⇝ v.

Lemma 2 ∀u ∈ V, ∃u′ ∈ V : u ⇝ u′ ∧ λ_{t_k}(u′) = C (counters cannot disappear from their own horizon).

This lemma is proven in natural language because the equivalent formal steps would prove substantially longer and inelegant (at least without introducing further notations on sequences of relabelings). One should however see without effort how the proof could be technically translated.

Proof (by contradiction). The only operation that can suppress C labels is the application of r_1. Since all vertices are initially labeled C, assuming that Lemma 2 is false (i.e., that there is no C-labeled vertex in the horizon of a vertex) comes down to assuming that a relabeling sequence took place transitively from vertex u to a vertex u′ that is outside the horizon of u, which is by definition impossible.

Proposition 5 Condition 4 (C_4) is necessary for A_3 to reach its objective O_{A_3}.

Proof.
¬C_4(G) =⇒ ∄v ∈ V : ∀u ∈ V, u ⇝ v
=⇒ ∀v ∈ V : λ_{t_k}(v) = C, ∃u ∈ V : ¬(u ⇝ v) (given any final counter, there is a vertex that could not reach it by a journey)
By Lemma 2, =⇒ ∀v ∈ V : λ_{t_k}(v) = C, ∃v′ ∈ V \{v} : λ_{t_k}(v′) = C (there are at least two final counters)
=⇒ ¬P_3(G_k)
=⇒ ¬O_{A_3}

The characterization of a sufficient condition for A_3 is left open.
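A simulation sketch for Algorithm 3 (ours, under the same simplified scheduler as before): all vertices start as counters (C, 1), and when two counters share an edge, one absorbs the other's count via r_1.

```python
# Sketch of Algorithm 3 (A_3): rule r1: (C, i), (C, j) -> (C, i+j), F.
# Success (objective P_3) means exactly one counter remains at the end.

def run_a3(vertices, edge_sets):
    label = {v: ("C", 1) for v in vertices}
    for edges in edge_sets:            # one static graph per interval
        for (u, v) in edges:
            if label[u][0] == "C" and label[v][0] == "C":
                label[u] = ("C", label[u][1] + label[v][1])
                label[v] = ("F", 0)    # v's count is absorbed by u
    counters = [v for v in vertices if label[v][0] == "C"]
    return len(counters) == 1, label
```

If no vertex lies in the horizon of every other vertex (Condition C_4 violated), at least two counters survive and the final count is partial.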
This question is addressed from a probabilistic perspective in [1], but we believe a deterministic condition should also exist, although a very specific one.

Classification of dynamic networks and algorithms

In this section, we show how the previously characterized conditions can be used to define evolving graph classes, some of which are included in others. The relations of inclusion lead to a de facto classification of dynamic networks based on the properties they verify. As a result, the classification can in turn be used to compare several algorithms or problems on the basis of their topological requirements. Besides the classification based on the above conditions, we discuss a possible extension with 10 more classes considered in various recent works.

From conditions to classes of evolving graphs

From C_1 = ∀v ∈ V, emitter ⇝ v, we derive two classes of evolving graphs. F_1 is the class in which at least one vertex can reach all the others by a journey. If an evolving graph does not belong to this class, then there is no chance for A_1 to succeed, whatever the initial emitter. F_2 is the class where every vertex can reach all the others by a journey. If an evolving graph does not belong to this class, then at least one vertex, if chosen as initial emitter, will fail to inform all the others using A_1. From C_2 = ∀v ∈ V, emitter ⇝_st v, we derive two classes of evolving graphs. F_3 is the class in which at least one vertex can reach all the others by a strict journey. If an evolving graph belongs to this class, then there is at least one vertex that could, for sure, inform all the others using A_1 (under Progression Hypothesis 1). F_4 is the class of evolving graphs in which every vertex can reach all the others by a strict journey. If an evolving graph belongs to this class, then the success of A_1 is guaranteed for any vertex as initial emitter (again, under Progression Hypothesis 1). From C_3 = ∀v ∈ V \{counter}, (counter, v) ∈ E, we derive two classes of graphs.
F_5 is the class of evolving graphs in which at least one vertex shares, at some point of the execution, an edge with every other vertex. If an evolving graph does not belong to this class, then there is no chance of success for A_2, whatever the vertex chosen as counter. Here, if we assume Progression Hypothesis 1, then F_5 is also a class in which the success of the algorithm can be guaranteed for one specific vertex as counter. F_6 is the class of evolving graphs in which every vertex shares an edge with every other vertex at some point of the execution. If an evolving graph does not belong to this class, then there exists at least one vertex that cannot count all the others using A_2. Again, if we consider Progression Hypothesis 1, then F_6 becomes a class in which success is guaranteed whatever the counter. Finally, from C_4 = ∃v ∈ V : ∀u ∈ V, u ⇝ v, we derive the class F_7, which is the class of graphs such that at least one vertex can be reached from all the others by a journey (in other words, the intersection of all nodes' horizons is non-empty). If a graph does not belong to this class, then there is absolutely no chance of success for A_3.

Relations between classes

Since "all" implies "at least one", we have: F_2 ⊆ F_1, F_4 ⊆ F_3, and F_6 ⊆ F_5. Since a strict journey is a journey, we have: F_3 ⊆ F_1, and F_4 ⊆ F_2. Since an edge is a (strict) journey, we have: F_5 ⊆ F_3, F_6 ⊆ F_4, and F_5 ⊆ F_7. Finally, the existence of a journey between all pairs of vertices (F_2) implies that each vertex can be reached by all the others, which implies in turn that at least one vertex can be reached by all the others (F_7). We then have: F_2 ⊆ F_7. Although we have used here a non-strict inclusion (⊆), the inclusions described above are strict (one easily finds, for each inclusion, a graph that belongs to the parent class but is outside the child class). Figure 5 summarizes all these relations.
Further classes were introduced in the recent literature, and organized into a classification in [?]. They include F_8 (round connectivity): every node can reach every other node, and be reached back afterwards; F_9 (recurrent connectivity): every node can reach all the others infinitely often; F_10 (recurrence of edges): the underlying graph G = (V, E) is connected, and every edge in E re-appears infinitely often; F_11 (time-bounded recurrence of edges): same as F_10, but the re-appearance is bounded by a given time duration; F_12 (periodicity): the underlying graph G is connected and every edge in E re-appears at regular intervals; F_13 (eventual instant-routability): given any pair of nodes and at any time, there always exists a future G_i in which a (static) path exists between them; F_14 (eventual instant-connectivity): at any time, there always exists a future G_i that is connected in a classic sense (i.e., a static path exists in G_i between any pair of nodes); F_15 (perpetual instant-connectivity): every G_i is connected in a static sense; F_16 (T-interval-connectivity): all the graphs in any sub-sequence G_i, G_{i+1}, ..., G_{i+T} have at least one connected spanning subgraph in common. Finally, F_17 is the reference class for population protocols; it corresponds to the subclass of F_10 in which the underlying graph G (graph of interaction) is a complete graph. All these classes were shown to have particular algorithmic significance. For example, F_16 allows one to speed up the execution of some algorithms by a factor T [18].

Fig. 5. A first classification of dynamic networks, based on evolving graph properties that result from the analysis of Section 4: F_1 : ∃u ∈ V : ∀v ∈ V, u ⇝ v; F_2 : ∀u, v ∈ V, u ⇝ v; F_3 : ∃u ∈ V : ∀v ∈ V, u ⇝_st v; F_4 : ∀u, v ∈ V, u ⇝_st v; F_5 : ∃u ∈ V : ∀v ∈ V \{u}, (u, v) ∈ E; F_6 : ∀u, v ∈ V, (u, v) ∈ E; F_7 : ∃u ∈ V : ∀v ∈ V, v ⇝ u.
In a context of broadcast, F_15 makes it possible to have at least one new node informed in every G_i, and consequently to bound the broadcast time by (a constant factor of) the network size [23]. F_13 and F_14 were used in [24] to characterize the contexts in which non-delay-tolerant routing protocols can eventually work if they retry upon failure. Classes F_10, F_11, and F_12 were shown to have an impact on the distributed versions of foremost, shortest, and fastest broadcasts with termination detection. Precisely, foremost broadcast is feasible in F_10, whereas shortest and fastest broadcasts are not; shortest broadcast becomes feasible in F_11 [10], whereas fastest broadcast does not, becoming feasible only in F_12. Also, even though foremost broadcast is possible in F_10, the memorization of the journeys for subsequent use is not possible in either F_10 or F_11; it is however possible in F_12 [11]. Finally, F_8 could be regarded as a sine qua non for termination detection in many contexts. Interestingly, this new range of classes - from F_8 to F_17 - can also be integrally connected by means of a set of inclusion relations, as illustrated in Figure 6. Both classifications can also be inter-connected through F_8, a subclass of F_2, which brings us to 17 connected classes. A classification of this type can be useful in several respects, including the possibility of transposing results or comparing solutions or problems on a formal basis, which we discuss now.

Comparison of algorithms based on their topological requirements

Let us consider the two counting algorithms given in Section 4. To have any chance of success, A_2 requires the evolving graph to be in F_5 (with a fortunate choice of counter) or in F_6 (with any vertex as counter). On the other hand, A_3 requires the evolving graph to be in F_7.
Since both F_5 (directly) and F_6 (transitively) are included in F_7, there are some topological scenarios (i.e., G ∈ F_7 \ F_5) in which A_2 has no chance of success, while A_3 has some. This observation allows us to claim that A_3 is more general than A_2 with respect to its topological requirements, and illustrates how a classification can help compare two solutions on a fair and formal basis. In the particular case of these two counting algorithms, however, the claim could be balanced by the fact that a sufficient condition is known for A_2, whereas none is known for A_3. The choice of the right algorithm may thus depend on the target mobility context: if this context is thought to produce topological scenarios in F_5 or F_6, then A_2 could be preferred; otherwise A_3 should be considered. A similar type of reasoning could also teach us something about the problems themselves. Considering the above-mentioned results about shortest, fastest, and foremost broadcast with termination detection, the fact that F_12 is included in F_11, which is itself included in F_10, tells us that there is an (at least partial) order between these problems' topological requirements: foremost ⪯ shortest ⪯ fastest. We believe that classifications of this type have the potential to lead to more equivalence results and formal comparisons between problems and algorithms. Now, one must also keep in mind that these are only topology-related conditions, and that other dimensions of properties - e.g., what knowledge is available to the nodes, or whether they have unique identifiers - keep playing the same important role as they do in a static context. Considering again the same example, the above classification hides the fact that detecting termination in the foremost case in F_10 requires the emitter to know the number of nodes n in the network, whereas this knowledge is not necessary for shortest broadcast in F_11 (knowing a bound on the recurrence time is sufficient instead).
In other words, lower topology-related requirements do not necessarily imply lower requirements in general.

Mechanization potential

One of the motivations of this work is to contribute to the development of assistance tools for algorithmic design and decision support in mobile ad hoc networks. The usual approach to assessing the correct behavior of an algorithm, or its appropriateness to a particular mobility context, is to perform simulations. A typical simulation scenario consists in executing the algorithm concurrently with topological changes that are generated using a mobility model (e.g., the random waypoint model, in which every node repeatedly selects a new destination at random and moves towards it), or on top of real network traces that are first collected from the real world, then replayed at simulation time. As discussed in the introduction, the simulation approach has some limitations, among which is the generation of results that are difficult to generalize, reproduce, or compare with one another on a non-subjective basis. The framework presented in this paper allows for an analytical alternative to simulations. The previous section already discussed how two algorithms could be compared on the basis of their topological requirements. We could actually envision a broader chain of operations, aiming to characterize how appropriate a given algorithm is to a given mobility context. The complete workflow is depicted in Figure 7. On the one hand, algorithms are analyzed and necessary/sufficient conditions determined. This step produces classes of evolving graphs. On the other hand, mobility models and real-world networks can be used to generate a collection of network traces, each of which corresponds to an instance of an evolving graph. Checking how given instances distribute within given classes - i.e., are they included or not, and in what proportion? - may give a clue about the appropriateness of an algorithm in a given mobility context.
This section starts by discussing to what extent such a workflow could be automated (mechanized), in particular through the two core operations of Inclusion checking and Analysis, both capable of raising problems of a theoretical nature.

Checking network traces for inclusion in the classes

We provide below an efficient solution to check the inclusion of an evolving graph in any of the seven classes of Figure 5 - that is, all classes derived from the analysis carried out in Section 4. Interestingly, each of these classes allows for efficient checking strategies, provided a few transformations are done. The transitive closure of the journeys of an evolving graph G is the graph H = (V, A_H), where A_H = {(v_i, v_j) : v_i ⇝ v_j}. Because journeys are oriented entities, their transitive closure is by nature a directed graph (see Figure 8). As explained in [5], the computation of transitive closures can be done efficiently, in O(|V|·|E|·(log|S_T|·log|V|)) time, by building the tree of shortest journeys from each node in the network. We extend this notion to the case of strict journeys, with H_strict = (V, A_Hstrict), where A_Hstrict = {(v_i, v_j) : v_i ⇝_st v_j}. Given an evolving graph G, its underlying graph G, its transitive closure H, and the transitive closure of its strict journeys H_strict, the inclusion in each of the seven classes can be tested as follows:
- G ∈ F_1 ⇐⇒ H contains an out-dominating set of size 1.
- G ∈ F_2 ⇐⇒ H is a complete graph.
- G ∈ F_3 ⇐⇒ H_strict contains an out-dominating set of size 1.
- G ∈ F_4 ⇐⇒ H_strict is a complete graph.
- G ∈ F_5 ⇐⇒ G contains a dominating set of size 1.
- G ∈ F_6 ⇐⇒ G is a complete graph.
- G ∈ F_7 ⇐⇒ H contains an in-dominating set of size 1.

How the classes of Figure 6 could be checked is left open. Their case is more complex, or at least substantially different, because the corresponding definitions rely on infinite behaviors, which a (finite) network trace cannot capture.
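The tests above can be sketched in a few lines (our code; H is represented as a set of ordered pairs (u, v) with u ⇝ v, and the underlying graph as a set of undirected edges):

```python
# Sketch of the membership tests for F1, F2, F5, F6, F7. F3/F4 are the
# same tests as F1/F2 applied to H_strict instead of H.

def check_classes(V, H, und_edges):
    V = set(V)
    out = {u: {y for (x, y) in H if x == u} for u in V}   # out-neighbors in H
    inc = {u: {x for (x, y) in H if y == u} for u in V}   # in-neighbors in H
    und = {frozenset(e) for e in und_edges}
    return {
        "F1": any(out[u] >= V - {u} for u in V),   # out-dominating set of size 1
        "F2": all(out[u] >= V - {u} for u in V),   # H complete
        "F5": any(all(frozenset((u, v)) in und for v in V - {u}) for u in V),
        "F6": all(frozenset((u, v)) in und for u in V for v in V - {u}),
        "F7": any(inc[u] >= V - {u} for u in V),   # in-dominating set of size 1
    }
```

Each test runs in time polynomial in |V| once H and the underlying graph are available, so the overall cost is dominated by the transitive closure computation.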
For example, whether a given edge is eventually going to reappear (e.g. in the context of checking inclusion in class F_8 or F_9) cannot be inferred from a finite sequence of events. However, it is certainly feasible to check whether a given recurrence bound applies within the time-span of a given network trace (bounded recurrence, F_10), or similarly, whether the sequence of events repeats modulo p (for a given p) within the given trace (periodic networks, F_11).

Towards a mechanized analysis

The most challenging component of the workflow in Figure 7 is certainly that of Analysis. Ultimately, one may hope to build a component like that of Figure 9, capable of answering whether a given property is necessary (no possible success without), sufficient (no possible failure with), or orthogonal (both success and failure possible) to a given algorithm under given computation assumptions (e.g., a particular type of synchronization or progression hypothesis). Such a workflow could ultimately be used to confirm an intuition of the analyst, as well as to discover new conditions automatically, based on a collection of properties. As of today, such an objective is still far from reach, and a number of intermediate steps should be taken. For example, one may consider specific instances of evolving graphs rather than general properties. We develop below a prospective idea inspired by the work of Castéran et al. on static networks [8,7]. Their work focuses on bridging the gap between local computations and the formal proof management system Coq [4], and materializes, among other things, in the development of a Coq library: Loco. This library contains appropriate representations for graphs and labelings in Coq (by means of sets and maps), as well as an operational description of relabeling rule execution (see Section 6 of [7] for details).
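Both trace-bounded checks fit in a few lines of Python, assuming a trace is given as a chronological list of edge sets; the function names and the windowed reading of the recurrence bound are illustrative choices of ours:

```python
def is_periodic_within_trace(trace, p):
    """True iff the trace repeats modulo p over its own time-span,
    i.e. G_i == G_{i+p} whenever both indices exist (vacuously true
    for traces shorter than p)."""
    return all(trace[i] == trace[i + p] for i in range(len(trace) - p))

def recurrence_bound_holds(trace, b):
    """True iff every edge of the underlying graph appears at least once
    in every window of b consecutive steps of the trace."""
    underlying = set().union(*trace)
    for start in range(len(trace) - b + 1):
        window = set().union(*trace[start:start + b])
        if underlying - window:        # some edge missing from this window
            return False
    return True
```

For example, the alternating trace [{ab}, {bc}, {ab}, {bc}] is periodic modulo 2 and satisfies a recurrence bound of 2, but not of 1.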
The fact that such machinery is already developed is worth noting, because we believe evolving graphs could themselves be seen as relabelings acting on a 'presence' label on vertices and edges. The idea in this case would be to re-define topological events as being themselves graph relabeling rules, whose preconditions correspond to a G_i and whose actions lead to the next G_{i+1}. Considering the execution of these rules concurrently with those of the studied algorithm could make it possible to leverage the power of Coq to mechanize proofs of correctness and/or impossibility results in given instances of evolving graphs.

Concluding remarks and open problems

This paper suggested the combination of existing tools and the use of dedicated methods for the analysis of distributed algorithms in dynamic networks. The resulting framework makes it possible to characterize the assumptions that a given algorithm requires in terms of topological evolution during its execution. We illustrated it by the analysis of three basic algorithms, whose necessary and sufficient conditions were derived into a sketch of a classification of dynamic networks. We showed how such a classification could in turn be used to compare algorithms on a formal basis and provide assistance in the selection of an algorithm. This classification was extended by an additional 10 classes from recent literature. We finally discussed some implications of this work for the mechanization of both decision support and analysis, including respectively the question of checking whether a given network trace belongs to one of the introduced classes, and prospective ideas on the combination of evolving graphs and graph relabeling systems within the Coq proof assistant. Analyzing the network requirements of algorithms is not a novel approach in general. It appears however that it was never considered in a systematic manner for dynamics-related assumptions.
Instead, the apparent norm in analytical research on dynamic networks is to study problems once a given set of assumptions has been fixed, these assumptions being likely chosen for analytical convenience. This appears particularly striking in the recent field of population protocols, where a common assumption is that a pair of nodes interacting once will interact infinitely often. In the light of the classification shown in this paper, such an assumption corresponds to a highly specific computing context. We believe the framework in this paper may help characterize weaker topological assumptions for the same class of problems. Since our work is mostly conceptual in essence, a number of questions may be raised regarding its broader applicability. For example, the algorithms studied here are simple. A natural question is whether the framework will scale to more complex algorithms. We hope it could suit the analysis of most fundamental problems in distributed computing, such as election, naming, consensus, or the construction of spanning structures (note that election and naming may not have identical assumptions in a dynamic context, although they do in a static one). Our discussion on mechanization potentials left two significant questions undiscussed: how to check for the inclusion of an evolving graph in all the remaining classes, and how to approach the problem of mechanizing analysis relative to a general property. Another prospect is to investigate how intermediate properties could be explored between necessary and sufficient conditions, for example to guarantee a desired probability of success. Finally, besides these characterizations of feasibility, one may also want to look at the impact that particular properties may have on the complexity of problems and algorithms. Analytical research in dynamic networks is still in its infancy, and many exciting questions remain to be explored.
9,353
1102.5529
2953379898
Besides the complexity in time or in number of messages, a common approach for analyzing distributed algorithms is to look at the assumptions they make on the underlying network. We investigate this question from the perspective of network dynamics. In particular, we ask how a given property on the evolution of the network can be rigorously proven as necessary or sufficient for a given algorithm. The main contribution of this paper is to propose the combination of two existing tools in this direction: local computations by means of graph relabelings, and evolving graphs. Such a combination makes it possible to express fine-grained properties on the network dynamics, then examine what impact those properties have on the execution at a precise, intertwined, level. We illustrate the use of this framework through the analysis of three simple algorithms, then discuss general implications of this work, which include (i) the possibility to compare distributed algorithms on the basis of their topological requirements, (ii) a formal hierarchy of dynamic networks based on these requirements, and (iii) the potential for mechanization induced by our framework, which we believe opens a door towards automated analysis and decision support in dynamic networks.
Let us repeat that an algorithm does not specify how the nodes synchronize, i.e., how they select each other to perform a common computation step. From the abstraction level of local computations, this underlying synchronization is seen as an implementation choice (dedicated procedures were designed to fit the various models, e.g. local elections @cite_14 and local rendezvous @cite_6 for starwise and pairwise interactions, respectively). A direct consequence is that the execution of an algorithm at this level may not be deterministic. Another consequence is that the characterization of conditions on the dynamics will additionally require assumptions on the synchronization -- we suggest later a generic progression hypothesis that serves this purpose. Note that the three algorithms provided in this paper rely on pairwise interactions, but the concepts and methodology involved apply to local computations in general.
{ "abstract": [ "We propose and analyze two randomized local election algorithms in an asynchronous anonymous graph.", "In this paper we propose and analyze a randomized algorithm to get rendezvous between neighbours in an anonymous graph. We examine in particular the probability to obtain at least one rendezvous and the expected number of rendezvous. We study the rendezvous number distribution in the cases of chain graphs, rings, and complete graphs. The last part is devoted to the efficiency of the proposed algorithm." ], "cite_N": [ "@cite_14", "@cite_6" ], "mid": [ "2118991742", "2088310932" ] }
Distributed Computing in Dynamic Networks: Towards a Framework for Automated Analysis of Algorithms ⋆
The past decade has seen a burst of research in the field of communication networks. This is particularly true for dynamic networks, due to the arrival, or impending deployment, of a multitude of applications involving new types of communicating entities such as wireless sensors, smartphones, satellites, vehicles, or swarms of mobile robots. These contexts offer both unprecedented opportunities and challenges for the research community, which is striving to design appropriate algorithms and protocols. Behind the apparent unity of these networks lies a great diversity of assumptions on their dynamics. One end of the spectrum corresponds to infrastructured networks, in which only terminal nodes are dynamic - these include 3G/4G telecommunication networks, access-point-based Wi-Fi networks, and to some extent the Internet itself. At the other end lie delay-tolerant networks (DTNs), which are characterized by the possible absence of an end-to-end communication route at any instant. The defining property of DTNs actually reflects many types of real-world contexts, from satellite or vehicular networks to pedestrian or social animal networks (e.g. birds, ants, termites). In-between lie a number of environments whose capabilities and limitations require specific attention. A consequence of this diversity is that a given protocol for dynamic networks may prove appropriate in one context, while performing poorly (or not at all) in another. The most common approach for evaluating protocols in dynamic networks is to run simulations, and to use a given mobility model (or set of traces) to generate topological changes during the execution. These parameters must faithfully reflect the target context to yield an accurate evaluation. Likewise, the comparison between two protocols is only meaningful if similar traces or mobility models are used. (A preliminary version of this paper appeared in [9].)
This state of affairs often makes it ambiguous and difficult to judge the appropriateness of solutions based solely on the experimental results reported in the literature. The problem is even more complex if we consider the possible biases induced by further parameters like the size of the network, the density of nodes, the choice of PHY or MAC layers, bandwidth limitations, latency, buffer size, etc. The fundamental requirements of an algorithm on the network dynamics will likely be better understood from an analytical standpoint, and some recent efforts have been carried out in this direction. They include the works by O'Dell et al. [23] and Kuhn et al. [18], in which the impact of given assumptions on the network dynamics is studied for some basic problems of distributed computing (broadcast, counting, and election). These works have in common an effort to make the dynamics amenable to analysis by exploiting properties of a static essence: even though the network is possibly highly dynamic, it remains connected at every instant. The approach of population protocols [1,2] has also contributed to a more analytical understanding. Here, no assumptions are made on the network connectivity at a given instant, and yet the same fundamental idea of looking at dynamic networks through the eyes of static properties is leveraged by the concept of the graph of interaction, in which every entity is assumed to interact infinitely often with its neighbors (and thus, dynamics is reduced to a scheduling problem in static networks). Besides the fact that the above assumptions are strong - we will show how strong in comparison to others in a hierarchy - we believe that the very attempt to flatten the time dimension prevents one from understanding the true requirements of an algorithm on the network dynamics. As a trivial example, consider the broadcasting of a piece of information in the network depicted in Figure 1.
The possibility of completing the broadcast in this scenario clearly depends on which node is the initial emitter: a and b may succeed, while c cannot. Why? How can we express the intuitive property that the topology evolution must have with respect to the emitter and the other nodes? Flattening the time dimension without keeping information on the ordering of events would obviously lose some important specificities, such as the fact that nodes a and c are in a non-symmetrical configuration. How can we prove, more generally, that a given assumption on the dynamics is necessary or sufficient for a given problem (or algorithm)? How can we find (and define) properties that relate to finer-grained aspects than recurrence or, more generally, regularities? Even when intuitive, rigorous characterizations of this kind may be difficult to obtain without appropriate models and formalisms - a conceptual shift is needed. We investigate these questions in the present paper. Contrary to the aforementioned approaches, in which a given context is first considered and the feasibility of problems then studied in this particular context, we suggest the somewhat reverse approach of considering a problem first, then trying to characterize its necessary and/or sufficient conditions (if any) in terms of network dynamics. We introduce a general-purpose analysis framework based on the combination of 1) local computations by means of graph relabelings [19], and 2) an appropriate formalism for dynamic networks, evolving graphs [15], which formalizes the evolution of the network topology as an ordered sequence of static graphs. The strengths of this combination are several: first, the use of local computations makes it possible to obtain general impossibility results that do not depend on a particular communication model (e.g., message passing, mailbox, or shared memory); second, the use of evolving graphs makes it possible to express fine-grained network properties that remain temporal in essence.
(For instance, a necessary condition for the broadcast problem above is the existence of a temporal path, or journey, from the emitter to every other node, a statement that can be expressed using monadic second-order logic on evolving graphs.) The combination of graph relabelings and evolving graphs makes it possible to study the execution of an algorithm as an intertwined sequence of topological events and computations, leading to a precise characterization of their relation. The framework we propose should be considered as a conceptual framework to guide the analysis of distributed algorithms. As such, it is specified at a high level of abstraction and does not impose the choice of, say, a particular logic (e.g. first-order vs. LMSO) or scope of computation (e.g. pairwise vs. starwise interaction), although all our examples assume LMSO and pairwise interactions. Finally, we believe this framework could pave the way to decision support systems or mechanized analysis in dynamic networks, both of which are discussed as possible applications. Local computations and evolving graphs are first presented in Section 2, together with central properties of dynamic networks (such as connectivity over time, whose intuitive implications for the broadcast problem were explored in various works - see e.g. [3,6]). We describe the analysis framework based on the combination of both tools in Section 3. This includes the reformulation of an execution in terms of relabelings over a sequence of graphs, as well as new formulations of what a necessary or sufficient condition is in terms of the existence and non-existence of such a relabeling sequence. We illustrate these theoretical tools in Section 4 through the analysis of three basic examples, i.e., one broadcast algorithm and two counting algorithms, one of which can also be used for election. (Note that our framework was recently applied to the problem of mutual exclusion in [16].)
The rest of the paper is devoted to exploring some implications of the proposed approach, articulated around the two major motifs of classification (Section 5) and mechanization (Section 6). The section on classification discusses how the conditions resulting from analysis translate into more general properties that define classes of evolving graphs. The relations of inclusion between these classes are examined and, interestingly enough, they make it possible to organize the classes as a connected hierarchy. We show how this classification can reciprocally be used to evaluate and compare algorithms on the basis of their topological requirements. The section on mechanization discusses to what extent the tasks related to assessing the appropriateness of an algorithm in a given context can be automated. We provide canonical ways of checking the inclusion of a given network trace in all classes resulting from the analyses in this paper (in efficient time), and mention some ongoing work around the use of the Coq proof assistant in the context of local computations, which we believe could be extended to evolving graphs. Section 7 eventually concludes with some remarks and open problems.

Abstracting communications through local computations and graph relabelings

Distributed algorithms can be expressed using a variety of communication models (e.g. message passing, mailboxes, shared memory). Although the vast majority of algorithms are designed in one of these models - predominantly the message passing model - the very fact that one of them is chosen implies that the obtained results (e.g. positive or negative characterizations and associated proofs) are limited to the scope of this model. This problem of diversity among formalisms and results, already pointed out twenty years ago in [20], led researchers to consider higher abstractions when studying fundamental properties of distributed systems. Local computations and graph relabelings were jointly proposed in this perspective in [19].
These theoretical tools make it possible to represent a distributed algorithm as a set of local interaction rules that are independent from the effective communications. Within the formalism of graph relabelings, the network is represented by a graph whose vertices and edges are associated with labels that represent the algorithmic state of the corresponding nodes and links. An interaction rule is then defined as a transition pattern (preconditions, actions), where preconditions and actions relate to these label values. Since the interactions are local, each transition pattern must involve a limited and connected subset of vertices and edges. Figure 2 shows different scopes of computation, which are not necessarily the same for preconditions and actions. The approach taken by local computations shares a number of traits with that of population protocols, more recently introduced in [1,2]. Both approaches work at a similar level of abstraction and are concerned with characterizing what can or cannot be done in distributed computing. As far as the scope of computation is concerned, population protocols can be seen as a particular case of local computations focusing on pairwise interaction (see Figure 2(c)). The main difference between these tools (if any, besides that of originating from distinct lines of research) has more to do with the role given to the underlying synchronization between nodes. While local computations typically see this as a lower layer that is itself abstracted (whenever possible), population protocols consider the execution of an algorithm given some explicit properties of an interaction scheduler. This particularity led population protocols to become an appropriate tool to study distributed computing in dynamic networks, by reducing the network dynamics to specific properties of the scheduler (e.g., every pair of nodes interacts infinitely often).
Several variants of population protocols have subsequently been introduced (e.g., assuming various types of fairness of the scheduler and graphs of interaction); however, we believe the analogy between dynamics and scheduling has some limits (e.g., in reality two nodes that interact once will not necessarily interact twice; and the precise order in which a group of nodes interacts matters all the more when interactions do not repeat infinitely often). We advocate looking at the dynamics at a finer scale, without always assuming infinite recurrence on the scheduler (such a scheduler can still be formulated as a specific class of dynamics), for the purpose of studying the precise relationship between an algorithm and the dynamics underlying its execution. To remain as general as possible, we are building on top of local computations. One may ask whether remaining so general is relevant, and whether the various models in Figure 2 are in fact equivalent in power (e.g. could we simulate any of them by repetition of another?). The answer is negative, due to different levels of atomicity (e.g. models 2(a) vs. 2(c)) and symmetry breaking (e.g. models 2(c) vs. 2(d)). The reader is referred to [14] for a detailed hierarchy of these models. Note that the equivalences between models would have to be re-considered anyway in a dynamic context, since the dynamics may prevent the possibility of applying several steps of a weaker model to simulate a stronger one. We now describe the graph relabeling formalism traditionally associated with local computations. Let the network topology be represented by a finite undirected loopless graph G = (V_G, E_G), with V_G representing the set of nodes and E_G representing the set of communication links between them. Two vertices u and v are said to be neighbors if and only if they share a common edge (u, v) in E_G.
Let λ : V_G ∪ E_G → L* be a mapping that associates every vertex and edge from G with one or several labels from an alphabet L (which denotes all the possible states these elements can take). The state of a given vertex v, resp. edge e, at a given time t is denoted by λ_t(v), resp. λ_t(e). The whole labeled graph is represented by the pair (G, λ), noted G. According to [19], a complete algorithm can be given by a triplet {L, I, P}, where I is the set of initial states, and P is a set of relabeling rules (transition patterns) representing the distributed interactions - these rules are considered uniform (i.e., the same for all nodes). Algorithm 1 below (A_1 for short) gives the example of a one-rule algorithm that represents the general broadcasting scheme discussed in the introduction. We assume here that the label I (resp. N) stands for the state informed (resp. non-informed). Propagating the information thus consists in repeating this single rule, starting from the emitter vertex, until all vertices are labeled I. Algorithm 1 A propagation algorithm coded by a single relabeling rule (r_1): I, N −→ I, I. Let us repeat that an algorithm does not specify how the nodes synchronize, i.e., how they select each other to perform a common computation step. From the abstraction level of local computations, this underlying synchronization is seen as an implementation choice (dedicated procedures were designed to fit the various models, e.g. local elections [21] and local rendezvous [22] for starwise and pairwise interactions, respectively). A direct consequence is that the execution of an algorithm at this level may not be deterministic. Another consequence is that the characterization of sufficient conditions on the dynamics will additionally require assumptions on the synchronization - we suggest later a generic progression hypothesis that serves this purpose.
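As an illustration, the execution of A_1 over an evolving graph can be sketched in Python under a deliberately conservative scheduler of our own choosing: in each interval, rule r_1 fires for every edge whose precondition held at the start of the interval (one synchronous round per static graph). This is a simplifying assumption, not the paper's formal model; success under it mirrors the strict-journey condition analyzed later.

```python
def broadcast_reaches_all(nodes, sequence, emitter):
    """Simulate Algorithm 1 (rule r1: I,N -> I,I) over a chronological
    list of edge sets, one synchronous round per static graph: vertices
    labelled I at the start of an interval inform their current neighbours."""
    informed = {emitter}              # emitter starts labelled I, others N
    for edges in sequence:
        at_start = set(informed)      # preconditions evaluated at time t_i
        for u, v in edges:
            if u in at_start:
                informed.add(v)
            if v in at_start:
                informed.add(u)
    return informed == set(nodes)
```

On the two-step sequence {ab} then {bc}, the broadcast completes from emitter a but not from emitter c, matching the discussion of Figure 1 in the introduction.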
Note that the three algorithms provided in this paper rely on pairwise interactions, but the concepts and methodology involved apply to local computations in general.

Expressing dynamic network properties using Evolving Graphs

In a different context, evolving graphs [15] were proposed as a combinatorial model for dynamic networks. The initial purpose of this model was to provide a suitable representation of fixed schedule dynamic networks (FSDNs), in order to compute optimal communication routes such as shortest, fastest and foremost journeys [6]. In such a context, the evolution of the network was known beforehand. In the present work, we use evolving graphs for a very different purpose, which is to express properties on the network dynamics. It is important to keep in mind that the analyzed algorithms are never supposed to know the evolution of the network beforehand. An evolving graph is a structure in which the evolution of the network topology is recorded as a sequence of static graphs S_G = G_1, G_2, ..., where every G_i = (V_i, E_i) corresponds to the network topology during an interval of time [t_i, t_{i+1}). Several models of dynamic networks can be captured by this formalism, depending on the meaning given to the sequence of dates S_T = t_1, t_2, .... For example, these dates could correspond to every time step in a discrete-time system (and therefore be taken from a time domain T ⊆ N), or to variable-size time intervals in continuous-time systems (T ⊆ R), where each t_i is the date when a topological event occurs in the system (e.g., appearance or disappearance of an edge in the graph); see for example Figure 3. We consider continuous-time evolving graphs in general. (Our results actually hold for any of the above meanings.) Formally, we consider an evolving graph as the structure G = (G, S_G, S_T), where G is the union of all G_i in S_G, called the underlying graph of G.
Henceforth, we will simply use the notations V and E to denote V(G) and E(G), the sets of vertices and edges of the underlying graph G. Since we focus here on computation models that are undirected, we logically consider evolving graphs as being themselves undirected. (The original version of evolving graphs considered directed edges, as well as possible restrictions on bandwidth and latency.) Finally, we will use the notation G_[ta,tb) to denote the temporal subgraph G′ = (G′, S′_G, S′_T) built from G = (G, S_G, S_T) such that G′ = G, S′_G = {G_i ∈ S_G : t_i ∈ [t_a, t_b)}, and S′_T = {t_i ∈ S_T ∩ [t_a, t_b)}. (Figure 3: an example evolving graph, shown (a) period by period and (b) in a compact representation with presence intervals on the edges.) Basic concepts and notations (given an evolving graph G = (G, S_G, S_T)). As a writing facility, we consider the use of a presence function ρ : E × T → {0, 1} that indicates whether a given edge is present at a given date, that is, for e ∈ E and t ∈ [t_i, t_{i+1}) (with t_i, t_{i+1} ∈ S_T), ρ(e, t) = 1 ⇐⇒ e ∈ E_i. A central concept in dynamic networks is that of a journey, which is the temporal extension of the concept of path. A journey can be thought of as a path over time from one vertex to another. Formally, a sequence of couples J = {(e_1, σ_1), (e_2, σ_2), ..., (e_k, σ_k)} such that {e_1, e_2, ..., e_k} is a walk in G and {σ_1, σ_2, ..., σ_k} is a non-decreasing sequence of dates from T, is a journey in G if and only if ρ(e_i, σ_i) = 1 for all i ≤ k. We will say that a given journey is strict if every couple (e_i, σ_i) is taken from a distinct graph of the sequence S_G. Let us denote by J* the set of all possible journeys in an evolving graph G, and by J*_(u,v) ⊆ J* those journeys starting at node u and ending at node v.
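The presence function ρ and the journey test can be transcribed directly; the concrete dates (t_0..t_4 = 0..4) and edge presence intervals below are a hypothetical schedule consistent with the Figure 3 examples, not the paper's actual figure:

```python
# Hypothetical presence intervals [lo, hi) per undirected edge.
PRESENCE = {
    frozenset('ab'): [(1, 2)], frozenset('bc'): [(1, 2)],
    frozenset('ac'): [(0, 1)], frozenset('cd'): [(0, 2)],
    frozenset('ce'): [(2, 3)], frozenset('de'): [(3, 4)],
}

def rho(edge, t):
    """Presence function: 1 iff the edge is present at date t."""
    return int(any(lo <= t < hi for lo, hi in PRESENCE.get(frozenset(edge), [])))

def is_journey(couples):
    """A sequence of (edge, date) couples is a journey iff the dates are
    non-decreasing and every edge is present at its date."""
    dates = [t for _, t in couples]
    return (all(a <= b for a, b in zip(dates, dates[1:]))
            and all(rho(e, t) == 1 for e, t in couples))
```

With this schedule, [('ab', 1.2), ('bc', 1.5), ('ce', 2.5)] is a valid journey from a to e, whereas traversing the same edges with decreasing dates breaks the non-decreasing requirement.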
If a journey exists from a node u to a node v, that is, if J*_(u,v) ≠ ∅, then we say that u can reach v in the graph G, and allow the simplified notations u ⇝ v (in G), or u ⇝_st v if this can be done through a strict journey. Clearly, the existence of a journey is not symmetrical: u ⇝ v does not imply v ⇝ u; this holds regardless of whether the edges are directed or not, because the time dimension creates its own level of direction - this point is clear from the example of Figure 1. Given a node u, the set {v ∈ V : u ⇝ v} is called the horizon of u. We assume that every node belongs to its own horizon by means of an empty journey. Here are examples of journeys in the evolving graph of Figure 3:
- J(a,e) = {(ab, σ_1 ∈ [t_1, t_2)), (bc, σ_2 ∈ [σ_1, t_2)), (ce, σ_3 ∈ [t_2, t_3))} is a journey from a to e;
- J(a,e) = {(ac, σ_1 ∈ [t_0, t_1)), (cd, σ_2 ∈ [σ_1, t_1)), (de, σ_3 ∈ [t_3, t_4))} is another journey from a to e;
- J(a,e) = {(ac, σ_1 ∈ [t_0, t_1)), (cd, σ_2 ∈ [t_1, t_2)), (de, σ_3 ∈ [t_3, t_4))} is yet another (strict) journey from a to e.
We will say that the network is connected over time iff ∀u, v ∈ V, u ⇝ v ∧ v ⇝ u. The concept of connectivity over time is not new and goes back at least to [3], in which it was called eventual connectivity (although recent literature on DTNs uses this term for another concept, which we renamed eventual instant-connectivity to avoid confusion in Section 5).

The proposed analysis framework

As a reminder from the previous section, the algorithmic state of the network is given by a labeling on the corresponding graph G, then noted G. We denote by G_i the graph covering the period [t_i, t_{i+1}) in the evolving graph G = (G, S_G, S_T), with G_i ∈ S_G and t_i, t_{i+1} ∈ S_T. Notice that the symbol G was used here with two different meanings: the first as the generic letter to represent the network, the second to denote the underlying graph of G. Both notations are kept as is in the following, while preventing ambiguous uses in the text.
Putting the pieces together: relabelings over evolving graphs

For an evolving graph G = (G, S_G, S_T) and a given date t_i ∈ S_T, we denote by G_i the labeled graph (G_i, λ_{t_i+ε}) representing the state of the network just after the topological event of date t_i, and by G_i[ the labeled graph (G_{i-1}, λ_{t_i−ε}) representing the network state just before that event. We note Event_{t_i}(G_i[) = G_i. A number of distributed operations may occur between two consecutive events. Hence, for a given algorithm A and two consecutive dates t_i, t_{i+1} ∈ S_T, we denote by R_A[t_i, t_{i+1}) one of the possible relabeling sequences induced by A on the graph G_i during the period [t_i, t_{i+1}). We note R_A[t_i, t_{i+1})(G_i) = G_{i+1}[. For simplicity, we will sometimes use the notation r_i(u, v) ∈ R_A[t, t′) to indicate that the rule r_i is applied on the edge (u, v) during [t, t′). A complete execution sequence from t_0 to t_k is then given by means of an alternated sequence of relabeling steps and topological events, which we note X = R_A[t_{k−1}, t_k) ∘ Event_{t_{k−1}} ∘ ... ∘ Event_{t_i} ∘ R_A[t_{i−1}, t_i) ∘ ... ∘ Event_{t_1} ∘ R_A[t_0, t_1)(G_0). This combination is illustrated in Figure 4. As mentioned at the end of Section 2.1, the execution of a local computation algorithm is not necessarily deterministic, and may depend on the way nodes select one another at a lower level before applying a relabeling rule. Hence, we denote by X_A/G the set of all possible execution sequences of an algorithm A over an evolving graph G.

Methodology

Below are some proposed methods and concepts to characterize the requirements of an algorithm in terms of topology dynamics. More precisely, we use the above combination to define the concept of topology-related necessary or sufficient conditions, and discuss how a given property can be proved to be so.
Fig. 4. Combination of Graph Relabelings and Evolving Graphs.

Objectives of an algorithm

Given an algorithm A and a labeled graph G, the state one wishes to reach can be given by a logic formula P on the labels of vertices (and edges, if appropriate). In the case of the propagation scheme (Algorithm 1, Section 2.1), such a terminal state could be that all nodes are informed: P_1(G) = ∀v ∈ V, λ(v) = I. The objective O_A is then defined as the fact of verifying the desired property by the end of the execution, that is, on the final labeled graph G_k. In this example, we consider O_A1 = P_1(G_k). The opportunity must be taken here to distinguish two fundamentally different types of objectives in dynamic networks. In the example above, as well as in the other examples in this paper, we consider algorithms whose objective is to reach a given property by the end of the execution. Another type of objective in dynamic networks is to consider the maintenance of a desired property despite the network evolution (e.g. covering every connected component of the network by a single spanning tree). In this case, the objective must not be formulated in terms of a terminal state, but rather in terms of a satisfactory state, for example in-between every two consecutive topological events, i.e., O_A = ∀G_i ∈ S_G, P(G_{i+1}[). This actually corresponds to a self-stabilization scenario where the recurrent faults are the topological events, and the network must stabilize in-between any two consecutive faults. We restrict ourselves to the first type of objective in the following. Because the abstraction level of these computations is not concerned with the underlying synchronization, no topological property can guarantee, alone, that the nodes will effectively communicate and collaborate to reach the desired objective. Therefore, the characterization of sufficient conditions requires additional assumptions on the synchronization.
We propose below a generic progression hypothesis applicable to the pairwise interaction model (Figure 2(c)). This assumption may or may not be considered realistic depending on the expected rate of topological changes.

Progression Hypothesis 1 (PH_1). In every time interval [t_i, t_{i+1}), with t_i ∈ S_T, each vertex is able to apply at least one relabeling rule with each of its neighbors, provided the rule preconditions are already satisfied at time t_i (and still satisfied at the time the rule is applied).

In the case when starwise interaction (see Figure 2(b)) is considered, this hypothesis could be partially relaxed by assuming only that every node applies at least one rule in each interval.

Examples of basic analyses

This section illustrates the proposed framework through the analysis of three basic algorithms, namely the propagation algorithm previously given, and two counting algorithms (one centralized, one decentralized). The results obtained here are used in the next section to highlight some implications of this work.

Analysis of the propagation algorithm

We want to prove that the existence of a journey (resp. strict journey) between the emitter and every other node is a necessary (resp. sufficient) condition to achieve O_A1. Our purpose is not so much to emphasize the results themselves (they are rather intuitive) as to illustrate how the characterizations can be written in a rigorous way.

Condition 1. ∀v ∈ V, emitter ⇝ v (there exists a journey between the emitter and every other vertex).

Lemma 1. ∀v ∈ V : λ_{t_0}(v) = N, λ_{σ>t_0}(v) = I ⟹ ∃u ∈ V, ∃σ′ ∈ [t_0, σ) : λ_{σ′}(u) = I ∧ u ⇝ v in G_{[σ′,σ)} (if a non-emitter vertex has the information at some point, this implies the existence of an incoming journey from a vertex that had the information before).

Proof.
∀v ∈ V : λ_{t_0}(v) = N, (λ_{σ>t_0}(v) = I ⟹ ∃v′ ∈ V : r_1(v′, v) ∈ R_A1[t_0,σ)) (if a non-emitter vertex has the information at some point, then it has necessarily applied rule r_1 with another vertex)

⟹ ∃v′ ∈ V, ∃σ′ ∈ [t_0, σ) : λ_{σ′}(v′) = I ∧ ρ((v′, v), σ′) = 1 (an edge existed at a previous date between this vertex and a vertex labeled I)

By transitivity,

⟹ ∃v′′ ∈ V, ∃σ′′ ∈ [t_0, σ) : λ_{σ′′}(v′′) = I ∧ v′′ ⇝ v in G_{[σ′′,σ)} (a journey existed between a vertex labeled I and this vertex)

Proposition 1. Condition 1 (C_1) is a necessary condition on G to allow Algorithm 1 (A_1) to reach its objective O_A1.

Analysis of a centralized counting algorithm

Like the propagation algorithm, the distributed algorithm presented below assumes a distinguished vertex at initial time. This vertex, called the counter, is in charge of counting all the vertices it meets during the execution (its successive neighbors in the changing topology). Hence, the counter vertex has two labels (C, i), meaning that it is the counter (C) and that it has already counted i vertices (initially 1, i.e., itself). The other vertices are labeled either F or N, depending on whether they have already been counted or not. The counting rule is given by r_1 in Algorithm 2, below.

Algorithm 2 Counting algorithm with a pre-selected counter.
rule r_1: (C, i), N → (C, i+1), F

Objective of the algorithm. Under the assumption of a fixed number of vertices, the algorithm reaches a terminal state when all vertices are counted, which corresponds to the fact that no more vertices are labeled N: P_2 = ∀v ∈ V, λ(v) ≠ N. The objective of Algorithm 2 is to satisfy this property at the end of the execution (O_A2 = P_2(G_k)). We prove here that the existence of an edge at some point of the execution between the counter node and every other node is a necessary and sufficient condition.
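This condition can be exercised on toy traces. The following Python sketch simulates Algorithm 2 and tests the edge condition; the encoding (snapshot lists, label dictionaries) and names are ours, and letting the counter apply r_1 with every current neighbor in each period is a stand-in for Progression Hypothesis 1.

```python
def run_counting(snapshots, counter, nodes):
    """Algorithm 2 under a strong progression assumption: in each period,
    the counter applies r1 = ((C,i), N) -> ((C,i+1), F) with every current
    neighbor still labeled N. Returns the final labeling."""
    labels = {v: 'N' for v in nodes}
    labels[counter] = ('C', 1)           # counts itself initially
    for edges in snapshots:
        for a, b in edges:
            for u, v in ((a, b), (b, a)):
                if u == counter and labels[v] == 'N':
                    labels[counter] = ('C', labels[counter][1] + 1)
                    labels[v] = 'F'
    return labels

def condition_C3(snapshots, counter, nodes):
    """C3: every other vertex shares an edge with the counter at some point,
    i.e. (counter, v) belongs to the underlying graph G for all v."""
    underlying = {frozenset(e) for edges in snapshots for e in edges}
    return all(frozenset((counter, v)) in underlying
               for v in nodes if v != counter)
```

On a trace where some vertex never meets the counter, C3 fails and that vertex necessarily remains labeled N, matching the necessity argument.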
Condition 3. ∀v ∈ V \{counter}, ∃t_i ∈ S_T : (counter, v) ∈ E_i, or equivalently, with the notion of underlying graph, ∀v ∈ V \{counter}, (counter, v) ∈ E.

Proposition 3. For a given evolving graph G representing the topological evolution that takes place during the execution of A_2, Condition 3 (C_3) is a necessary condition on G to allow A_2 to reach its objective O_A2.

Proof.
¬C_3(G) ⟹ ∃v ∈ V \{counter} : (counter, v) ∉ E
⟹ ∃v ∈ V \{counter} : ∀t_i ∈ S_T \{t_k}, r_1(counter, v) ∉ R_A2[t_i,t_{i+1})
⟹ ∃v ∈ V \{counter} : ∀X ∈ X_{A2/G}, λ_{t_k}(v) = N
⟹ ∄X ∈ X_{A2/G} : P_2(G_k)
⟹ ¬O_A2

Proposition 4. Under Progression Hypothesis 1 (noted PH_1 below), C_3 is also a sufficient condition on G to guarantee that A_2 will reach its objective O_A2.

Proof.
C_3(G) ⟹ ∀v ∈ V \{counter}, ∃t_i ∈ S_T : (counter, v) ∈ E_i
by PH_1, ⟹ ∀v ∈ V \{counter}, ∃t_i ∈ S_T : r_1(counter, v) ∈ R_A2[t_i,t_{i+1})
⟹ ∀v ∈ V \{counter}, λ_{t_k}(v) ≠ N
⟹ ∀X ∈ X_{A2/G}, P_2(G_k)
⟹ O_A2

Analysis of a decentralized counting algorithm

Contrary to the previous algorithm, Algorithm 3 below does not require a distinguished initial state for any vertex. Indeed, all vertices are initialized with the same labels (C, 1), meaning that they are all initially counters that have already included themselves into the count. Then, depending on the topological evolution, the counters opportunistically merge by pairs (rule r_1 of Algorithm A_3). In the optimistic scenario, at the end of the execution, only one node remains labeled C and its second label gives the total number of vertices in the graph. A similar counting principle was used in [1] to illustrate population protocols; a possible application of this protocol was anecdotally mentioned, consisting in monitoring a flock of birds for fever, with the role of counters being played by sensors.

Algorithm 3 Decentralized counting algorithm.
initial states: {(C, 1)} (for all vertices)
alphabet: {C, F, N}
rule r_1: (C, i), (C, j) → (C, i+j), F

Objective of the algorithm. Under the assumption of a fixed number of vertices, this algorithm reaches the desired state when exactly one vertex remains labeled C: P_3 = ∃u ∈ V : ∀v ∈ V \{u}, λ(u) = C ∧ λ(v) ≠ C. As with the two previous algorithms, the objective here is to reach this property by the end of the execution: O_A3 = P_3(G_k). The characterization below proves that the existence of a vertex belonging to the horizon of every other vertex is a necessary condition for this algorithm.

Condition 4. ∃v ∈ V : ∀u ∈ V, u ⇝ v.

Lemma 2. ∀u ∈ V, ∃u′ ∈ V : u ⇝ u′ ∧ λ_{t_k}(u′) = C (counters cannot disappear from their own horizon).

This lemma is proven in natural language because the equivalent formal steps would prove substantially longer and inelegant (at least, without introducing further notations on sequences of relabelings). One should however see without effort how the proof could be technically translated.

Proof (by contradiction). The only operation that can suppress C labels is the application of r_1. Since all vertices are initially labeled C, assuming that Lemma 2 is false (i.e., that there is no C-labeled vertex in the horizon of a vertex) comes to assume that a relabeling sequence took place transitively from vertex u to a vertex u′ that is outside the horizon of u, which is by definition impossible.

Proposition 5. Condition 4 (C_4) is necessary for A_3 to reach its objective O_A3.

Proof.
¬C_4(G) ⟹ ∄v ∈ V : ∀u ∈ V, u ⇝ v
⟹ ∀v ∈ V : λ_{t_k}(v) = C, ∃u ∈ V : ¬(u ⇝ v) (given any final counter, there is a vertex that could not reach it by a journey)
By Lemma 2, ⟹ ∀v ∈ V : λ_{t_k}(v) = C, ∃v′ ∈ V \{v} : λ_{t_k}(v′) = C (there are at least two final counters)
⟹ ¬P_3(G_k)
⟹ ¬O_A3

The characterization of a sufficient condition for A_3 is left open.
This question is addressed from a probabilistic perspective in [1], but we believe a deterministic condition should also exist, although a very specific one.

Classification of dynamic networks and algorithms

In this section, we show how the previously characterized conditions can be used to define evolving graph classes, some of which are included in others. The relations of inclusion lead to a de facto classification of dynamic networks based on the properties they verify. As a result, the classification can in turn be used to compare several algorithms or problems on the basis of their topological requirements. Besides the classification based on the above conditions, we discuss a possible extension with 10 more classes considered in various recent works.

From conditions to classes of evolving graphs

From C_1 = ∀v ∈ V, emitter ⇝ v, we derive two classes of evolving graphs. F_1 is the class in which at least one vertex can reach all the others by a journey. If an evolving graph does not belong to this class, then there is no chance for A_1 to succeed whatever the initial emitter. F_2 is the class in which every vertex can reach all the others by a journey. If an evolving graph does not belong to this class, then at least one vertex, if chosen as initial emitter, will fail to inform all the others using A_1.

From C_2 = ∀v ∈ V, emitter ⇝_st v, we derive two classes of evolving graphs. F_3 is the class in which at least one vertex can reach all the others by a strict journey. If an evolving graph belongs to this class, then there is at least one vertex that could, for sure, inform all the others using A_1 (under Progression Hypothesis 1). F_4 is the class of evolving graphs in which every vertex can reach all the others by a strict journey. If an evolving graph belongs to this class, then the success of A_1 is guaranteed for any vertex as initial emitter (again, under Progression Hypothesis 1).

From C_3 = ∀v ∈ V \{counter}, (counter, v) ∈ E, we derive two classes of graphs.
F_5 is the class of evolving graphs in which at least one vertex shares, at some point of the execution, an edge with every other vertex. If an evolving graph does not belong to this class, then there is no chance of success for A_2, whatever the vertex chosen as counter. Here, if we assume Progression Hypothesis 1, then F_5 is also a class in which the success of the algorithm can be guaranteed for one specific vertex as counter. F_6 is the class of evolving graphs in which every vertex shares an edge with every other vertex at some point of the execution. If an evolving graph does not belong to this class, then there exists at least one vertex that cannot count all the others using A_2. Again, if we consider Progression Hypothesis 1, then F_6 becomes a class in which success is guaranteed whatever the counter.

Finally, from C_4 = ∃v ∈ V : ∀u ∈ V, u ⇝ v, we derive the class F_7, which is the class of graphs such that at least one vertex can be reached from all the others by a journey (in other words, the intersection of all nodes' horizons is non-empty). If a graph does not belong to this class, then there is absolutely no chance of success for A_3.

Relations between classes

Since "all" implies "at least one", we have: F_2 ⊆ F_1, F_4 ⊆ F_3, and F_6 ⊆ F_5. Since a strict journey is a journey, we have: F_3 ⊆ F_1, and F_4 ⊆ F_2. Since an edge is a (strict) journey, we have: F_5 ⊆ F_3, F_6 ⊆ F_4, and F_5 ⊆ F_7. Finally, the existence of a journey between all pairs of vertices (F_2) implies that each vertex can be reached by all the others, which implies in turn that at least one vertex can be reached by all the others (F_7). We then have: F_2 ⊆ F_7. Although we have used here a non-strict inclusion (⊆), the inclusions described above are strict (one easily finds, for each inclusion, a graph that belongs to the parent class but is outside the child class). Figure 5 summarizes all these relations.
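On a finite trace, membership in some of these classes can be tested directly from the transitive closure of journeys. Here is a sketch for F_1, F_2 and F_7, under our own encoding (ordered snapshot lists and Python dictionaries, not the paper's formal machinery):

```python
def journey_closure(snapshots, nodes):
    """For each node u, the set of nodes v with u ~> v (a journey),
    computed by propagating reachable sets forward through the ordered
    snapshots; several hops may share one snapshot (non-strict journeys)."""
    reach = {u: {u} for u in nodes}
    for edges in snapshots:
        changed = True
        while changed:          # saturate within the current snapshot
            changed = False
            for a, b in edges:
                for x, y in ((a, b), (b, a)):
                    for u in nodes:
                        if x in reach[u] and y not in reach[u]:
                            reach[u].add(y)
                            changed = True
    return reach

def in_F1(reach, nodes):
    """F1: at least one vertex reaches all the others by a journey."""
    return any(reach[u] == set(nodes) for u in nodes)

def in_F2(reach, nodes):
    """F2: every vertex reaches all the others by a journey."""
    return all(reach[u] == set(nodes) for u in nodes)

def in_F7(reach, nodes):
    """F7: some vertex is reached by all the others (non-empty
    intersection of horizons)."""
    return any(all(v in reach[u] for u in nodes) for v in nodes)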
Fig. 5. A first classification of dynamic networks, based on the evolving graph properties that result from the analysis of Section 4. The depicted classes are:
F_1: ∃u ∈ V : ∀v ∈ V, u ⇝ v
F_2: ∀u, v ∈ V, u ⇝ v
F_3: ∃u ∈ V : ∀v ∈ V, u ⇝_st v
F_4: ∀u, v ∈ V, u ⇝_st v
F_5: ∃u ∈ V : ∀v ∈ V \{u}, (u, v) ∈ E
F_6: ∀u, v ∈ V, (u, v) ∈ E
F_7: ∃u ∈ V : ∀v ∈ V, v ⇝ u
(The figure also shows the inclusion relations among F_1 to F_7, with F_8 from Fig. 6 included in F_2.)

Further classes were introduced in the recent literature, and organized into a classification in [?]. They include:
F_8 (round connectivity): every node can reach every other node, and be reached back afterwards;
F_9 (recurrent connectivity): every node can reach all the others infinitely often;
F_10 (recurrence of edges): the underlying graph G = (V, E) is connected, and every edge in E re-appears infinitely often;
F_11 (time-bounded recurrence of edges): same as F_10, but the re-appearance is bounded by a given time duration;
F_12 (periodicity): the underlying graph G is connected and every edge in E re-appears at regular intervals;
F_13 (eventual instant-routability): given any pair of nodes and at any time, there always exists a future G_i in which a (static) path exists between them;
F_14 (eventual instant-connectivity): at any time, there always exists a future G_i that is connected in a classic sense (i.e., a static path exists in G_i between any pair of nodes);
F_15 (perpetual instant-connectivity): every G_i is connected in a static sense;
F_16 (T-interval-connectivity): all the graphs in any sub-sequence G_i, G_{i+1}, ..., G_{i+T} have at least one connected spanning subgraph in common.
Finally, F_17 is the reference class for population protocols; it corresponds to the subclass of F_10 in which the underlying graph G (the graph of interaction) is a complete graph.

All these classes were shown to have particular algorithmic significance. For example, F_16 makes it possible to speed up the execution of some algorithms by a factor T [18].
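Unlike the recurrence-based classes, F_16 is directly checkable on a finite trace. A sketch under our own encoding (we interpret the window as T consecutive snapshots; a common connected spanning subgraph exists iff the intersection of the edge sets is connected and spanning):

```python
def connected(nodes, edges):
    """Static connectivity of an undirected edge set, by simple traversal."""
    nodes = list(nodes)
    if not nodes:
        return True
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        x = stack.pop()
        for a, b in edges:
            for u, v in ((a, b), (b, a)):
                if u == x and v not in seen:
                    seen.add(v)
                    stack.append(v)
    return len(seen) == len(nodes)

def t_interval_connected(snapshots, nodes, T):
    """F16 (our finite-trace reading): every window of T consecutive
    snapshots shares a connected spanning subgraph, i.e. the intersection
    of their edge sets is connected and spans all nodes."""
    for i in range(len(snapshots) - T + 1):
        common = set(snapshots[i])
        for edges in snapshots[i + 1:i + T]:
            common &= set(edges)    # edges common to the whole window
        if not connected(nodes, common):
            return False
    return True
```

With T = 1 this degenerates to F_15 (every G_i connected), consistent with the inclusion of the two classes.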
In a context of broadcast, F_15 makes it possible to have at least one new node informed in every G_i, and consequently to bound the broadcast time by (a constant factor of) the network size [23]. F_13 and F_14 were used in [24] to characterize the contexts in which non-delay-tolerant routing protocols can eventually work if they retry upon failure. Classes F_10, F_11, and F_12 were shown to have an impact on the distributed versions of foremost, shortest, and fastest broadcasts with termination detection. Precisely, foremost broadcast is feasible in F_10, whereas shortest and fastest broadcasts are not; shortest broadcast becomes feasible in F_11 [10], whereas fastest broadcast does not, becoming feasible only in F_12. Also, even though foremost broadcast is possible in F_10, the memorization of the journeys for subsequent use is possible neither in F_10 nor in F_11; it is however possible in F_12 [11]. Finally, F_8 could be regarded as a sine qua non for termination detection in many contexts.

Interestingly, this new range of classes, from F_8 to F_17, can also be integrally connected by means of a set of inclusion relations, as illustrated in Figure 6. Both classifications can also be inter-connected through F_8, a subclass of F_2, which brings us to 17 connected classes. A classification of this type can be useful in several respects, including the possibility to transpose results or to compare solutions or problems on a formal basis, which we discuss now.

Comparison of algorithms based on their topological requirements

Let us consider the two counting algorithms given in Section 4. To have any chance of success, A_2 requires the evolving graph to be in F_5 (with a fortunate choice of counter) or in F_6 (with any vertex as counter). On the other hand, A_3 requires the evolving graph to be in F_7.
Since both F_5 (directly) and F_6 (transitively) are included in F_7, there are some topological scenarios (i.e., G ∈ F_7 \ F_5) in which A_2 has no chance of success, while A_3 has some. This observation allows us to claim that A_3 is more general than A_2 with respect to its topological requirements, and illustrates how a classification can help compare two solutions on a fair and formal basis. In the particular case of these two counting algorithms, however, the claim could be balanced by the fact that a sufficient condition is known for A_2, whereas none is known for A_3. The choice of the right algorithm may thus depend on the target mobility context: if this context is thought to produce topological scenarios in F_5 or F_6, then A_2 could be preferred, otherwise A_3 should be considered.

A similar type of reasoning can also teach us something about the problems themselves. Considering the above-mentioned results about shortest, fastest, and foremost broadcast with termination detection, the fact that F_12 is included in F_11, which is itself included in F_10, tells us that there is an (at least partial) order between these problems' topological requirements: foremost ⪯ shortest ⪯ fastest. We believe that classifications of this type have the potential to lead to more equivalence results and formal comparisons between problems and algorithms.

Now, one must also keep in mind that these are only topology-related conditions, and that other dimensions of properties (e.g., what knowledge is available to the nodes, or whether they have unique identifiers) keep playing the same important role as they do in a static context. Considering again the same example, the above classification hides the fact that detecting termination in the foremost case in F_10 requires the emitter to know the number of nodes n in the network, whereas this knowledge is not necessary for shortest broadcast in F_11 (knowing a bound on the recurrence time is sufficient).
In other words, lower topology-related requirements do not necessarily imply lower requirements in general.

Mechanization potential

One of the motivations of this work is to contribute to the development of assistance tools for algorithmic design and decision support in mobile ad hoc networks. The usual approach to assessing the correct behavior of an algorithm, or its appropriateness to a particular mobility context, is to perform simulations. A typical simulation scenario consists in executing the algorithm concurrently with topological changes that are generated using a mobility model (e.g., the random waypoint model, in which every node repeatedly selects a new destination at random and moves towards it), or on top of real network traces that are first collected from the real world, then replayed at simulation time. As discussed in the introduction, the simulation approach has some limitations, among which generating results that are difficult to generalize, reproduce, or compare with one another on a non-subjective basis.

The framework presented in this paper allows for an analytical alternative to simulations. The previous section already discussed how two algorithms could be compared on the basis of their topological requirements. We could actually envision a larger-purpose chain of operations aiming to characterize how appropriate a given algorithm is to a given mobility context. The complete workflow is depicted in Figure 7. On the one hand, algorithms are analyzed, and necessary/sufficient conditions are determined; this step produces classes of evolving graphs. On the other hand, mobility models and real-world networks can be used to generate a collection of network traces, each of which corresponds to an instance of an evolving graph. Checking how given instances distribute within given classes (are they included or not, and in what proportion?) may give a clue about the appropriateness of an algorithm in a given mobility context.
This section discusses the question of understanding to what extent such a workflow could be automated (mechanized), in particular through the two core operations of inclusion checking and analysis, both capable of raising problems of a theoretical nature.

Checking network traces for inclusion in the classes

We provide below an efficient solution to check the inclusion of an evolving graph in any of the seven classes of Figure 5, that is, all the classes derived from the analysis carried out in Section 4. Interestingly, each of these classes allows for efficient checking strategies, provided a few transformations are done first.

The transitive closure of the journeys of an evolving graph G is the graph H = (V, A_H), where A_H = {(v_i, v_j) : v_i ⇝ v_j}. Because journeys are oriented entities, their transitive closure is by nature a directed graph (see Figure 8). As explained in [5], the computation of transitive closures can be done efficiently, in O(|V|.|E|.(log|S_T|.log|V|)) time, by building the tree of shortest journeys from each node in the network. We extend this notion to the case of strict journeys, with H_strict = (V, A_Hstrict), where A_Hstrict = {(v_i, v_j) : v_i ⇝_st v_j}.

Given an evolving graph G, its underlying graph G, its transitive closure H, and the transitive closure of its strict journeys H_strict, the inclusion in each of the seven classes can be tested as follows:
- G ∈ F_1 ⇐⇒ H contains an out-dominating set of size 1.
- G ∈ F_2 ⇐⇒ H is a complete graph.
- G ∈ F_3 ⇐⇒ H_strict contains an out-dominating set of size 1.
- G ∈ F_4 ⇐⇒ H_strict is a complete graph.
- G ∈ F_5 ⇐⇒ G contains a dominating set of size 1.
- G ∈ F_6 ⇐⇒ G is a complete graph.
- G ∈ F_7 ⇐⇒ H contains an in-dominating set of size 1.

How the classes of Figure 6 could be checked is left open. Their case is more complex, or at least substantially different, because the corresponding definitions rely on the notion of infinity, which a finite network trace cannot capture.
For example, whether a given edge will eventually reappear (e.g., in the context of checking inclusion in classes F_9 or F_10) cannot be inferred from a finite sequence of events. However, it is certainly feasible to check whether a given recurrence bound applies within the time-span of a given network trace (time-bounded recurrence, F_11), or similarly, whether the sequence of events repeats modulo p (for a given p) within the given trace (periodic networks, F_12).

Towards a mechanized analysis

The most challenging component of the workflow in Figure 7 is certainly that of analysis. Ultimately, one may hope to build a component like that of Figure 9, which is capable of answering whether a given property is necessary (no possible success without it), sufficient (no possible failure with it), or orthogonal (both success and failure possible) to a given algorithm under given computational assumptions (e.g., a particular type of synchronization or progression hypothesis). Such a workflow could ultimately be used to confirm an intuition of the analyst, as well as to discover new conditions automatically, based on a collection of properties. As of today, such an objective is still far from reach, and a number of intermediate steps should be taken. For example, one may consider specific instances of evolving graphs rather than general properties.

We develop below a prospective idea inspired by the work of Castéran et al. on static networks [8,7]. Their work focuses on bridging the gap between local computations and the formal proof management system Coq [4], and materializes, among other things, as the development of a Coq library, Loco. This library contains appropriate representations for graphs and labelings in Coq (by means of sets and maps), as well as an operational description of relabeling rule execution (see Section 6 of [7] for details).
The fact that such machinery is already developed is worth noting, because we believe evolving graphs could themselves be seen as relabelings acting on a 'presence' label on vertices and edges. The idea, in this case, would be to re-define topological events as graph relabeling rules whose preconditions correspond to a given G_i and whose actions lead to the next G_{i+1}. Considering the execution of these rules concurrently with those of the studied algorithm could make it possible to leverage the power of Coq to mechanize proofs of correctness and/or impossibility results on given instances of evolving graphs.

Concluding remarks and open problems

This paper suggested the combination of existing tools and the use of dedicated methods for the analysis of distributed algorithms in dynamic networks. The resulting framework makes it possible to characterize the assumptions that a given algorithm requires in terms of topological evolution during its execution. We illustrated it through the analysis of three basic algorithms, whose necessary and sufficient conditions were derived into a sketch of a classification of dynamic networks. We showed how such a classification could in turn be used to compare algorithms on a formal basis and to provide assistance in the selection of an algorithm. This classification was extended with an additional 10 classes from the recent literature. We finally discussed some implications of this work for the mechanization of both decision support and analysis, including, respectively, the question of checking whether a given network trace belongs to one of the introduced classes, and prospective ideas on the combination of evolving graphs and graph relabeling systems within the Coq proof assistant.

Analyzing the network requirements of algorithms is not a novel approach in general. It appears, however, that it was never considered in a systematic manner for dynamics-related assumptions.
Instead, the apparent norm in analytical research on dynamic networks is to study problems once a given set of assumptions has been fixed, these assumptions being likely chosen for analytical convenience. This appears particularly striking in the recent field of population protocols, where a common assumption is that a pair of nodes interacting once will interact infinitely often. In the light of the classification shown in this paper, such an assumption corresponds to a highly specific computing context. We believe the framework in this paper may help characterize weaker topological assumptions for the same class of problems.

Our work being mostly of a conceptual essence, a number of questions may be raised relative to its broader applicability. For example, the algorithms studied here are simple. A natural question is whether the framework will scale to more complex algorithms. We hope it can suit the analysis of most fundamental problems in distributed computing, such as election, naming, consensus, or the construction of spanning structures (note that election and naming may not have identical assumptions in a dynamic context, although they do in a static one). Our discussion of mechanization potentials left two significant questions undiscussed: how to check for the inclusion of an evolving graph in all the remaining classes, and how to approach the problem of mechanizing analysis relative to a general property. Another prospect is to investigate how intermediate properties could be explored between necessary and sufficient conditions, for example to guarantee a desired probability of success. Finally, besides these characterizations of feasibility, one may also want to look at the impact that particular properties may have on the complexity of problems and algorithms. Analytical research in dynamic networks is still in its infancy, and many exciting questions remain to be explored.
arXiv:1102.5529
Besides the complexity in time or in number of messages, a common approach for analyzing distributed algorithms is to look at the assumptions they make on the underlying network. We investigate this question from the perspective of network dynamics. In particular, we ask how a given property on the evolution of the network can be rigorously proven as necessary or sufficient for a given algorithm. The main contribution of this paper is to propose the combination of two existing tools in this direction: local computations by means of graph relabelings, and evolving graphs. Such a combination makes it possible to express fine-grained properties on the network dynamics, then examine what impact those properties have on the execution at a precise, intertwined level. We illustrate the use of this framework through the analysis of three simple algorithms, then discuss general implications of this work, which include (i) the possibility to compare distributed algorithms on the basis of their topological requirements, (ii) a formal hierarchy of dynamic networks based on these requirements, and (iii) the potential for mechanization induced by our framework, which we believe opens a door towards automated analysis and decision support in dynamic networks.
In a different context, evolving graphs @cite_2 were proposed as a combinatorial model for dynamic networks. The initial purpose of this model was to provide a suitable representation of fixed-schedule dynamic networks (FSDNs), in order to compute optimal communication routes such as shortest, fastest and foremost journeys @cite_9. In such a context, the evolution of the network was known beforehand. In the present work, we use evolving graphs for a very different purpose, namely to express properties on the network dynamics. It is important to keep in mind that the analyzed algorithms are never supposed to know the evolution of the network beforehand.
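To give the flavor of such route computations (a simplified sketch of ours, not the actual algorithms of @cite_9; the index-based encoding of dates is an assumption), the arrival dates of foremost strict journeys can be computed by a single forward pass over the ordered snapshots:

```python
def foremost_arrival(snapshots, src):
    """For each node, the earliest snapshot index by which it can be
    reached from src via a strict journey (at most one hop per snapshot).
    Arrival i+1 means 'after using an edge of snapshot i'."""
    arrival = {src: 0}
    for i, edges in enumerate(snapshots):
        for a, b in edges:
            for x, y in ((a, b), (b, a)):
                # x must have arrived before snapshot i (strictness),
                # and we only improve y's arrival date
                if arrival.get(x, i + 1) <= i and arrival.get(y, i + 2) > i + 1:
                    arrival[y] = i + 1
    return arrival
```

Nodes absent from the returned dictionary are unreachable by any journey, which is exactly the kind of temporal reachability information that a static graph model cannot express.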
Abstracts of the cited works:

@cite_9 (MAG 1984196269): "New technologies and the deployment of mobile and nomadic services are driving the emergence of complex communications networks, that have a highly dynamic behavior. This naturally engenders new route-discovery problems under changing conditions over these networks. Unfortunately, the temporal variations in the network topology are hard to be effectively captured in a classical graph model. In this paper, we use and extend a recently proposed graph theoretic model, which helps capture the evolving characteristic of such networks, in order to propose and formally analyze least cost journeys (the analog of paths in usual graphs) in a class of dynamic networks, where the changes in the topology can be predicted in advance. Cost measures investigated here are hop count (shortest journeys), arrival date (foremost journeys), and time span (fastest journeys)."

@cite_2 (MAG 1982459859): "Wireless technologies and the deployment of mobile and nomadic services are driving the emergence of complex ad hoc networks that have a highly dynamic behavior. Modeling such dynamics and creating a reference model on which results could be compared and reproduced, was stated as a fundamental issue by a recent NSF workshop on networking. In this article we show how the modeling of time-changes unsettles old questions and allows for new insights into central problems in networking, such as routing metrics, connectivity, and spanning trees. Such modeling is made possible through evolving graphs, a simple combinatorial model that helps capture the behavior of dynamic networks over time."
Distributed Computing in Dynamic Networks: Towards a Framework for Automated Analysis of Algorithms ⋆
The past decade has seen a burst of research in the field of communication networks. This is particularly true for dynamic networks, due to the arrival, or impending deployment, of a multitude of applications involving new types of communicating entities such as wireless sensors, smartphones, satellites, vehicles, or swarms of mobile robots. These contexts offer both unprecedented opportunities and challenges for the research community, which is striving to design appropriate algorithms and protocols. Behind the apparent unity of these networks lies a great diversity of assumptions on their dynamics. One end of the spectrum corresponds to infrastructured networks, in which only terminal nodes are dynamic; these include 3G/4G telecommunication networks, access-point-based Wi-Fi networks, and to some extent the Internet itself. At the other end lie delay-tolerant networks (DTNs), which are characterized by the possible absence of an end-to-end communication route at any instant. The defining property of DTNs actually reflects many types of real-world contexts, from satellite or vehicular networks to pedestrian or social animal networks (e.g., birds, ants, termites). In-between lies a number of environments whose capabilities and limitations require specific attention.

A consequence of this diversity is that a given protocol for dynamic networks may prove appropriate in one context, while performing poorly (or not at all) in another. The most common approach for evaluating protocols in dynamic networks is to run simulations, and use a given mobility model (or set of traces) to generate topological changes during the execution. These parameters must faithfully reflect the target context to yield an accurate evaluation. Likewise, the comparison between two protocols is only meaningful if similar traces or mobility models are used.

⋆ A preliminary version of this paper appeared in [9].
This state of affairs often makes it ambiguous and difficult to judge the appropriateness of solutions based solely on the experimental results reported in the literature. The problem is even more complex if we consider the possible biases induced by further parameters like the size of the network, the density of nodes, the choice of PHY or MAC layers, bandwidth limitations, latency, buffer size, etc. The fundamental requirements of an algorithm on the network dynamics will likely be better understood from an analytical standpoint, and some recent efforts have been carried out in this direction. They include the works by O'Dell et al. [23] and Kuhn et al. [18], in which the impact of given assumptions on the network dynamics is studied for some basic problems of distributed computing (broadcast, counting, and election). These works have in common an effort to make the dynamics amenable to analysis by exploiting properties of a static essence: even though the network is possibly highly dynamic, it remains connected at every instant. The approach of population protocols [1,2] also contributed to a more analytical understanding. Here, no assumption is made on the network connectivity at a given instant, and yet the same fundamental idea of looking at dynamic networks through the eyes of static properties is leveraged by the concept of graph of interaction, in which every entity is assumed to interact infinitely often with its neighbors (and thus, dynamics is reduced to a scheduling problem in static networks). Besides the fact that the above assumptions are strong - we will show how strong in comparison to others in a hierarchy - we believe that the very attempt to flatten the time dimension prevents one from understanding the true requirements of an algorithm on the network dynamics. As a trivial example, consider the broadcasting of a piece of information in the network depicted in Figure 1.
The possibility to complete the broadcast in this scenario clearly depends on which node is the initial emitter: a and b may succeed, while c cannot. Why? How can we express this intuitive property that the topology evolution must have with respect to the emitter and the other nodes? Flattening the time dimension without keeping information on the ordering of events would obviously lose some important specificities, such as the fact that nodes a and c are in a non-symmetrical configuration. How can we prove, more generally, that a given assumption on the dynamics is necessary or sufficient for a given problem (or algorithm)? How can we find (and define) properties that relate to finer-grain aspects than recurrence or, more generally, regularities? Even when intuitive, rigorous characterizations of this kind may be difficult to obtain without appropriate models and formalisms - a conceptual shift is needed. We investigate these questions in the present paper. Contrary to the aforementioned approaches, in which a given context is first considered and then the feasibility of problems is studied in this particular context, we suggest the somewhat reverse approach of considering first a problem, then trying to characterize its necessary and/or sufficient conditions (if any) in terms of network dynamics. We introduce a general-purpose analysis framework based on the combination of 1) local computations by means of graph relabelings [19], and 2) an appropriate formalism for dynamic networks, evolving graphs [15], which formalizes the evolution of the network topology as an ordered sequence of static graphs. The strengths of this combination are several: First, the use of local computations allows us to obtain general impossibility results that do not depend on a particular communication model (e.g., message passing, mailboxes, or shared memory). Second, the use of evolving graphs enables us to express fine-grain network properties that remain temporal in essence.
(For instance, a necessary condition for the broadcast problem above is the existence of a temporal path, or journey, from the emitter to every other node, a statement that can be expressed using monadic second-order logic on evolving graphs.) The combination of graph relabelings and evolving graphs makes it possible to study the execution of an algorithm as an intertwined sequence of topological events and computations, leading to a precise characterization of their relation. The framework we propose should be considered as a conceptual framework to guide the analysis of distributed algorithms. As such, it is specified at a high level of abstraction and does not impose the choice of, say, a particular logic (e.g. first-order vs. LMSO) or scope of computation (e.g. pairwise vs. starwise interaction), although all our examples assume LMSO and pairwise interactions. Finally, we believe this framework could pave the way to decision support systems or mechanized analysis in dynamic networks, both of which are discussed as possible applications. Local computations and evolving graphs are first presented in Section 2, together with central properties of dynamic networks (such as connectivity over time, whose intuitive implications on the broadcast problem were explored in various works - see e.g. [3,6]). We describe the analysis framework based on the combination of both tools in Section 3. This includes the reformulation of an execution in terms of relabelings over a sequence of graphs, as well as new formulations of what a necessary or sufficient condition is in terms of existence and non-existence of such a relabeling sequence. We illustrate these theoretical tools in Section 4 through the analysis of three basic examples, i.e., one broadcast algorithm and two counting algorithms, one of which can also be used for election. (Note that our framework was recently applied to the problem of mutual exclusion in [16].)
The rest of the paper is devoted to exploring some implications of the proposed approach, articulated around the two major motifs of classification (Section 5) and mechanization (Section 6). The section on classification discusses how the conditions resulting from analysis translate into more general properties that define classes of evolving graphs. The relations of inclusion between these classes are examined and, interestingly enough, they allow us to organize the classes into a connected hierarchy. We show how this classification can reciprocally be used to evaluate and compare algorithms on the basis of their topological requirements. The section on mechanization discusses to what extent the tasks related to assessing the appropriateness of an algorithm in a given context can be automated. We provide canonical ways of checking the inclusion of a given network trace in all classes resulting from the analyses in this paper (in efficient time), and mention some ongoing work around the use of the Coq proof assistant in the context of local computations, which we believe could be extended to evolving graphs. Section 7 eventually concludes with some remarks and open problems.

Abstracting communications through local computations and graph relabelings

Distributed algorithms can be expressed using a variety of communication models (e.g. message passing, mailboxes, shared memory). Although a vast majority of algorithms is designed in one of these models - predominantly the message passing model - the very fact that one of them is chosen implies that the obtained results (e.g. positive or negative characterizations and associated proofs) are limited to the scope of this model. This problem of diversity among formalisms and results, already pointed out twenty years ago in [20], led researchers to consider higher abstractions when studying fundamental properties of distributed systems. Local computations and graph relabelings were jointly proposed in this perspective in [19].
These theoretical tools allow us to represent a distributed algorithm as a set of local interaction rules that are independent from the effective communications. Within the formalism of graph relabelings, the network is represented by a graph whose vertices and edges are associated with labels that represent the algorithmic state of the corresponding nodes and links. An interaction rule is then defined as a transition pattern (preconditions, actions), where preconditions and actions relate to these label values. Since the interactions are local, each transition pattern must involve a limited and connected subset of vertices and edges. Figure 2 shows different scopes of computation, which are not necessarily the same for preconditions and actions. The approach taken by local computations shares a number of traits with that of population protocols, more recently introduced in [1,2]. Both approaches work at a similar level of abstraction and are concerned with characterizing what can or cannot be done in distributed computing. As far as the scope of computation is concerned, population protocols can be seen as a particular case of local computations focusing on pairwise interaction (see Figure 2(c)). The main difference between these tools (if any, besides that of originating from distinct lines of research) has more to do with the role given to the underlying synchronization between nodes. While local computations typically see this as a lower layer that is itself abstracted (whenever possible), population protocols consider the execution of an algorithm given some explicit properties of an interaction scheduler. This particularity led population protocols to become an appropriate tool to study distributed computing in dynamic networks, by reducing the network dynamics to specific properties of the scheduler (e.g., every pair of nodes interacts infinitely often).
Several variants of population protocols have subsequently been introduced (e.g., assuming various types of fairness of the scheduler and graphs of interaction); however, we believe the analogy between dynamics and scheduling has some limits (e.g., in reality two nodes that interact once will not necessarily interact twice, and the precise order in which a group of nodes interacts matters all the more when interactions do not repeat infinitely often). We advocate looking at the dynamics at a finer scale, without always assuming infinite recurrence of the scheduler (such a scheduler can still be formulated as a specific class of dynamics), for the purpose of studying the precise relationship between an algorithm and the dynamics underlying its execution. To remain as general as possible, we build on top of local computations. One may ask whether remaining this general is relevant, and whether the various models in Figure 2 are in fact equivalent in power (e.g., could we simulate any of them by repetition of another?). The answer is negative, due to different levels of atomicity (e.g. models 2(a) vs. 2(c)) and symmetry breaking (e.g. models 2(c) vs. 2(d)). The reader is referred to [14] for a detailed hierarchy of these models. Note that the equivalences between models would have to be re-considered anyway in a dynamic context, since the dynamics may prevent the possibility of applying several steps of a weaker model to simulate a stronger one. We now describe the graph relabeling formalism traditionally associated with local computations. Let the network topology be represented by a finite undirected loopless graph G = (V_G, E_G), with V_G representing the set of nodes and E_G representing the set of communication links between them. Two vertices u and v are said to be neighbors if and only if they share a common edge (u, v) in E_G.
Let λ : V_G ∪ E_G → L* be a mapping that associates every vertex and edge of G with one or several labels from an alphabet L (which denotes all the possible states these elements can take). The state of a given vertex v, resp. edge e, at a given time t is denoted by λ_t(v), resp. λ_t(e). The whole labeled graph is represented by the pair (G, λ), noted G. According to [19], a complete algorithm can be given by a triplet {L, I, P}, where I is the set of initial states, and P is a set of relabeling rules (transition patterns) representing the distributed interactions - these rules are considered uniform (i.e., the same for all nodes). Algorithm 1 below (A_1 for short) gives the example of a one-rule algorithm that represents the general broadcasting scheme discussed in the introduction. We assume here that the label I (resp. N) stands for the state informed (resp. non-informed). Propagating the information thus consists in repeating this single rule, starting from the emitter vertex, until all vertices are labeled I.

Algorithm 1 A propagation algorithm coded by a single relabeling rule (r_1).

Let us repeat that an algorithm does not specify how the nodes synchronize, i.e., how they select each other to perform a common computation step. From the abstraction level of local computations, this underlying synchronization is seen as an implementation choice (dedicated procedures were designed to fit the various models, e.g. local elections [21] and local rendezvous [22] for starwise and pairwise interactions, respectively). A direct consequence is that the execution of an algorithm at this level may not be deterministic. Another consequence is that the characterization of sufficient conditions on the dynamics additionally requires assumptions on the synchronization - we suggest later a generic progression hypothesis that serves this purpose.
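As a concrete illustration, the single-rule broadcast can be simulated over an evolving graph encoded as a sequence of edge sets. This is only a sketch under stated assumptions: the encoding (one set of undirected edges per time interval), the function name, and the topology sequence are all hypothetical - the text does not reproduce Figure 1's actual trace - but the trace below is chosen to be consistent with the behavior described in the introduction (a and b may succeed while c cannot). Under the progression hypothesis introduced later, the rule is applied within each interval until no (I, N) edge remains.

```python
def broadcast_reaches_all(sequence, nodes, emitter):
    """Simulate rule r_1 (I, N -> I, I) of Algorithm 1: within each
    static graph G_i, keep applying the rule until no edge joins an
    informed vertex and a non-informed one."""
    informed = {emitter}
    for edges in sequence:
        changed = True
        while changed:  # exhaust rule applications within the interval
            changed = False
            for u, v in edges:
                # precondition of r_1: exactly one endpoint labeled I
                if (u in informed) != (v in informed):
                    informed |= {u, v}
                    changed = True
    return informed == set(nodes)

# Hypothetical trace: edge ab exists during the first interval,
# edge bc during the second.
nodes = {"a", "b", "c"}
trace = [{("a", "b")}, {("b", "c")}]
```

With this trace, a and b can inform everyone (a journey leads from each of them to all other nodes), while c cannot, since no journey leads from c to a.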
Note that the three algorithms provided in this paper rely on pairwise interactions, but the concepts and methodology involved apply to local computations in general.

Expressing dynamic network properties using Evolving Graphs

In a different context, evolving graphs [15] were proposed as a combinatorial model for dynamic networks. The initial purpose of this model was to provide a suitable representation of fixed schedule dynamic networks (FSDNs), in order to compute optimal communication routes such as shortest, fastest and foremost journeys [6]. In such a context, the evolution of the network was known beforehand. In the present work, we use evolving graphs for a very different purpose, which is to express properties of the network dynamics. It is important to keep in mind that the analyzed algorithms are never supposed to know the evolution of the network beforehand. An evolving graph is a structure in which the evolution of the network topology is recorded as a sequence of static graphs S_G = G_1, G_2, ..., where every G_i = (V_i, E_i) corresponds to the network topology during an interval of time [t_i, t_i+1). Several models of dynamic networks can be captured by this formalism, depending on the meaning given to the sequence of dates S_T = t_1, t_2, .... For example, these dates could correspond to every time step in a discrete-time system (and therefore be taken from a time domain T ⊆ N), or to variable-size time intervals in continuous-time systems (T ⊆ R), where each t_i is the date when a topological event occurs in the system (e.g., appearance or disappearance of an edge in the graph); see for example Figure 3. We consider continuous-time evolving graphs in general. (Our results actually hold for any of the above meanings.) Formally, we consider an evolving graph as the structure G = (G, S_G, S_T), where G is the union of all G_i in S_G, called the underlying graph of G.
Henceforth, we will simply use the notations V and E to denote V(G) and E(G), the sets of vertices and edges of the underlying graph G. Since we focus here on computation models that are undirected, we logically consider evolving graphs as being themselves undirected. The original version of evolving graphs considered undirected edges, as well as possible restrictions on bandwidth and latency. Finally, we will use the notation G_[ta,tb) to denote the temporal subgraph G' = (G', S'_G, S'_T) built from G = (G, S_G, S_T) such that G' = G, S'_G = {G_i ∈ S_G : t_i ∈ [t_a, t_b)}, and S'_T = S_T ∩ [t_a, t_b).

Fig. 3. An evolving graph and (b) a compact representation of it.

Basic concepts and notations (given an evolving graph G = (G, S_G, S_T)). As a writing facility, we consider the use of a presence function ρ : E × T → {0, 1} that indicates whether a given edge is present at a given date; that is, for e ∈ E and t ∈ [t_i, t_i+1) (with t_i, t_i+1 ∈ S_T), ρ(e, t) = 1 ⇐⇒ e ∈ E_i. A central concept in dynamic networks is that of journey, which is the temporal extension of the concept of path. A journey can be thought of as a path over time from one vertex to another. Formally, a sequence of couples J = {(e_1, σ_1), (e_2, σ_2), ..., (e_k, σ_k)} such that {e_1, e_2, ..., e_k} is a walk in G and {σ_1, σ_2, ..., σ_k} is a non-decreasing sequence of dates from T, is a journey in G if and only if ρ(e_i, σ_i) = 1 for all i ≤ k. We will say that a given journey is strict if every couple (e_i, σ_i) is taken from a distinct graph of the sequence S_G. Let us denote by J* the set of all possible journeys in an evolving graph G, and by J*_(u,v) ⊆ J* those journeys starting at node u and ending at node v.
If a journey exists from a node u to a node v, that is, if J*_(u,v) ≠ ∅, then we say that u can reach v in G, and allow the simplified notations u ⇝ v (in G), or u ⇝_st v if this can be done through a strict journey. Clearly, the existence of a journey is not symmetrical: u ⇝ v does not imply v ⇝ u; this holds regardless of whether the edges are directed or not, because the time dimension creates its own level of direction - this point is made clear by the example of Figure 1. Given a node u, the set {v ∈ V : u ⇝ v} is called the horizon of u. We assume that every node belongs to its own horizon by means of an empty journey. Here are examples of journeys in the evolving graph of Figure 3:
- J_(a,e) = {(ab, σ_1 ∈ [t_1, t_2)), (bc, σ_2 ∈ [σ_1, t_2)), (ce, σ_3 ∈ [t_2, t_3))} is a journey from a to e;
- J'_(a,e) = {(ac, σ_1 ∈ [t_0, t_1)), (cd, σ_2 ∈ [σ_1, t_1)), (de, σ_3 ∈ [t_3, t_4))} is another journey from a to e;
- J''_(a,e) = {(ac, σ_1 ∈ [t_0, t_1)), (cd, σ_2 ∈ [t_1, t_2)), (de, σ_3 ∈ [t_3, t_4))} is yet another (strict) journey from a to e.
We will say that the network is connected over time iff ∀u, v ∈ V, u ⇝ v ∧ v ⇝ u. The concept of connectivity over time is not new and goes back at least to [3], in which it was called eventual connectivity (although recent literature on DTNs has used this term for another concept, which we renamed eventual instant-connectivity to avoid confusion in Section 5).

The proposed analysis framework

As a reminder of the previous section, the algorithmic state of the network is given by a labeling on the corresponding graph G, then noted G. We denote by G_i the graph covering the period [t_i, t_i+1) in the evolving graph G = (G, S_G, S_T), with G_i ∈ S_G and t_i, t_i+1 ∈ S_T. Notice that the symbol G is used here with two different meanings: the first as the generic letter to represent the network, the second to denote the underlying graph of G. Both notations are kept as is in the following, while preventing ambiguous uses in the text.
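The notions of journey, strict journey, and horizon lend themselves to direct computation once an evolving graph is encoded concretely. The sketch below assumes a hypothetical encoding (one set of undirected edges per interval); the function name is illustrative, and the edge sets are merely chosen to admit the three example journeys listed above, since the text does not reproduce Figure 3 itself. Non-strict journeys may traverse several edges of the same G_i (non-decreasing dates), so reachability is closed under static paths within each interval; strict journeys use at most one edge per G_i.

```python
def horizons(sequence, nodes, strict=False):
    """For each node u, compute its horizon {v : u ~> v}."""
    both = lambda edges: [(a, b) for a, b in edges] + [(b, a) for a, b in edges]
    reach = {u: {u} for u in nodes}  # empty journey: u reaches itself
    for edges in sequence:
        for u in nodes:
            if strict:
                # strict: a single hop per static graph G_i
                reach[u] |= {b for a, b in both(edges) if a in reach[u]}
            else:
                # non-strict: fixpoint of static reachability inside G_i
                new = {b for a, b in both(edges) if a in reach[u]}
                while not new.issubset(reach[u]):
                    reach[u] |= new
                    new = {b for a, b in both(edges) if a in reach[u]}
    return reach

# Hypothetical edge sets admitting the journeys J_(a,e) above:
nodes = {"a", "b", "c", "d", "e"}
trace = [{("a", "c"), ("c", "d")},             # [t_0, t_1)
         {("a", "b"), ("b", "c"), ("c", "d")},  # [t_1, t_2)
         {("c", "e")},                          # [t_2, t_3)
         {("d", "e")}]                          # [t_3, t_4)

h = horizons(trace, nodes)
```

On this trace, a reaches every node (even by a strict journey), but a is not in the horizon of e, illustrating the asymmetry of reachability; the network is therefore not connected over time.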
Putting the pieces together: relabelings over evolving graphs

For an evolving graph G = (G, S_G, S_T) and a given date t_i ∈ S_T, we denote by G_i the labeled graph (G_i, λ_ti+ε) representing the state of the network just after the topological event of date t_i, and by G_i[ the labeled graph (G_i−1, λ_ti−ε) representing the network state just before that event. We note Event_ti(G_i[) = G_i. A number of distributed operations may occur between two consecutive events. Hence, for a given algorithm A and two consecutive dates t_i, t_i+1 ∈ S_T, we denote by R_A[t_i, t_i+1) one of the possible relabeling sequences induced by A on the graph G_i during the period [t_i, t_i+1). We note R_A[t_i, t_i+1)(G_i) = G_i+1[. For simplicity, we will sometimes use the notation r_i(u, v) ∈ R_A[t, t') to indicate that the rule r_i is applied on the edge (u, v) during [t, t'). A complete execution sequence from t_0 to t_k is then given by means of an alternated sequence of relabeling steps and topological events, which we note X = R_A[t_k−1, t_k) ∘ Event_t_k−1 ∘ ... ∘ Event_t_i ∘ R_A[t_i−1, t_i) ∘ ... ∘ Event_t_1 ∘ R_A[t_0, t_1)(G_0). This combination is illustrated in Figure 4. As mentioned at the end of Section 2.1, the execution of a local computation algorithm is not necessarily deterministic, and may depend on the way nodes select one another at a lower level before applying a relabeling rule. Hence, we denote by X_A/G the set of all possible execution sequences of an algorithm A over an evolving graph G.

Methodology

Below are some proposed methods and concepts to characterize the requirements of an algorithm in terms of topology dynamics. More precisely, we use the above combination to define the concept of topology-related necessary or sufficient conditions, and discuss how a given property can be proved to be so.
Fig. 4. Combination of Graph Relabelings and Evolving Graphs.

Objectives of an algorithm

Given an algorithm A and a labeled graph G, the state one wishes to reach can be given by a logic formula P on the labels of vertices (and edges, if appropriate). In the case of the propagation scheme (Algorithm 1, Section 2.1), such a terminal state could be that all nodes are informed: P_1(G) = ∀v ∈ V, λ(v) = I. The objective O_A is then defined as the fact of verifying the desired property by the end of the execution, that is, on the final labeled graph G_k. In this example, we consider O_A1 = P_1(G_k). The opportunity must be taken here to distinguish two fundamentally different types of objectives in dynamic networks. In the example above, as well as in the other examples in this paper, we consider algorithms whose objective is to reach a given property by the end of the execution. Another type of objective in dynamic networks is to consider the maintenance of a desired property despite the network evolution (e.g. covering every connected component of the network by a single spanning tree). In this case, the objective must not be formulated in terms of a terminal state, but rather in terms of a satisfactory state, for example in-between every two consecutive topological events, i.e., O_A = ∀G_i ∈ S_G, P(G_i+1[). This actually corresponds to a self-stabilization scenario where the recurrent faults are the topological events, and the network must stabilize in-between any two consecutive faults. We restrict ourselves to the first type of objective in the following. Because the abstraction level of these computations is not concerned with the underlying synchronization, no topological property can guarantee, alone, that the nodes will effectively communicate and collaborate to reach the desired objective. Therefore, the characterization of sufficient conditions requires additional assumptions on the synchronization.
We propose below a generic progression hypothesis applicable to the pairwise interaction model (Figure 2(c)). This assumption may or may not be considered realistic depending on the expected rate of topological changes.

Progression Hypothesis 1 (PH_1). In every time interval [t_i, t_i+1), with t_i in S_T, each vertex is able to apply at least one relabeling rule with each of its neighbors, provided the rule preconditions are already satisfied at time t_i (and still satisfied at the time the rule is applied).

In the case when starwise interaction (see Figure 2(b)) is considered, this hypothesis could be partially relaxed by assuming only that every node applies at least one rule in each interval.

Examples of basic analyses

This section illustrates the proposed framework through the analysis of three basic algorithms, namely the propagation algorithm previously given, and two counting algorithms (one centralized, one decentralized). The results obtained here are used in the next section to highlight some implications of this work.

Analysis of the propagation algorithm

We want to prove that the existence of a journey (resp. strict journey) between the emitter and every other node is a necessary (resp. sufficient) condition to achieve O_A1. Our purpose is not so much to emphasize the results themselves - they are rather intuitive - as to illustrate how the characterizations can be written in a rigorous way.

Condition 1 ∀v ∈ V, emitter ⇝ v (there exists a journey between the emitter and every other vertex).

Lemma 1 ∀v ∈ V : λ_t0(v) = N, λ_σ>t0(v) = I =⇒ ∃u ∈ V, ∃σ' ∈ [t_0, σ) : λ_σ'(u) = I ∧ u ⇝ v in G_[σ',σ) (if a non-emitter vertex has the information at some point, this implies the existence of an incoming journey from a vertex that had the information before). Proof.
∀v ∈ V : λ_t0(v) = N, (λ_σ>t0(v) = I =⇒ ∃v' ∈ V : r_1(v', v) ∈ R_A1[t0, σ)) (if a non-emitter vertex has the information at some point, then it has necessarily applied rule r_1 with another vertex)
=⇒ ∃v' ∈ V, ∃σ' ∈ [t_0, σ) : λ_σ'(v') = I ∧ ρ((v', v), σ') = 1 (an edge existed at a previous date between this vertex and a vertex labeled I).
By transitivity, =⇒ ∃v'' ∈ V, ∃σ'' ∈ [t_0, σ) : λ_σ''(v'') = I ∧ v'' ⇝ v in G_[σ'',σ) (a journey existed between a vertex labeled I and this vertex).

Proposition 1 Condition 1 (C_1) is a necessary condition on G to allow Algorithm 1 (A_1) to reach its objective O_A1.

Analysis of a centralized counting algorithm

Like the propagation algorithm, the distributed algorithm presented below assumes a distinguished vertex at initial time. This vertex, called the counter, is in charge of counting all the vertices it meets during the execution (its successive neighbors in the changing topology). Hence, the counter vertex has two labels (C, i), meaning that it is the counter (C), and that it has already counted i vertices (initially 1, i.e., itself). The other vertices are labeled either F or N, depending on whether they have already been counted or not. The counting rule is given by r_1 in Algorithm 2, below.

Algorithm 2 Counting algorithm with a pre-selected counter.
rule r_1: (C, i), N → (C, i+1), F

Objective of the algorithm. Under the assumption of a fixed number of vertices, the algorithm reaches a terminal state when all vertices are counted, which corresponds to the fact that no more vertices are labeled N: P_2 = ∀v ∈ V, λ(v) ≠ N. The objective of Algorithm 2 is to satisfy this property at the end of the execution (O_A2 = P_2(G_k)). We prove here that the existence of an edge at some point of the execution between the counter node and every other node is a necessary and sufficient condition.
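Before stating this formally, the claimed equivalence can be sanity-checked on a concrete trace. The sketch below is hypothetical in its encoding (one list of undirected edges per interval) and in its function names and trace; under the progression hypothesis, the counter applies r_1 with every current uncounted neighbor.

```python
def run_A2(sequence, nodes, counter):
    """Simulate Algorithm 2: in every interval, the counter applies
    r_1 ((C, i), N -> (C, i+1), F) with each uncounted neighbor."""
    counted = {counter}  # the counter counts itself (label (C, 1))
    for edges in sequence:
        for a, b in edges:
            if counter in (a, b):
                counted.add(b if a == counter else a)
    return counted == set(nodes)  # objective O_A2: no vertex left N

def condition_C3(sequence, nodes, counter):
    """The counter shares an edge of the underlying graph with all
    other vertices."""
    neigh = {v for edges in sequence for e in edges for v in e
             if counter in e and v != counter}
    return neigh == set(nodes) - {counter}

# Hypothetical trace (one edge list per interval):
nodes = {"a", "b", "c", "d", "e"}
trace = [[("a", "c"), ("c", "d")],
         [("a", "b"), ("b", "c"), ("c", "d")],
         [("c", "e")],
         [("d", "e")]]
```

On this trace, c eventually shares an edge with every other node and counts all five vertices, while a does not; under the progression hypothesis, success coincides with the edge condition for every choice of counter.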
Condition 3 ∀v ∈ V \{counter}, ∃t_i ∈ S_T : (counter, v) ∈ E_i, or equivalently, with the notion of underlying graph, ∀v ∈ V \{counter}, (counter, v) ∈ E.

Proposition 3 For a given evolving graph G representing the topological evolutions that take place during the execution of A_2, Condition 3 (C_3) is a necessary condition on G to allow A_2 to reach its objective O_A2.

Proof. ¬C_3(G)
=⇒ ∃v ∈ V \{counter} : (counter, v) ∉ E
=⇒ ∃v ∈ V \{counter} : ∀t_i ∈ S_T \{t_k}, r_1(counter, v) ∉ R_A2[t_i, t_i+1)
=⇒ ∃v ∈ V \{counter} : ∀X ∈ X_A2/G, λ_tk(v) = N
=⇒ ∄X ∈ X_A2/G : P_2(G_k)
=⇒ ¬O_A2

Proposition 4 Under Progression Hypothesis 1 (noted PH_1 below), C_3 is also a sufficient condition on G to guarantee that A_2 will reach its objective O_A2.

Proof. C_3(G)
=⇒ ∀v ∈ V \{counter}, ∃t_i ∈ S_T : (counter, v) ∈ E_i
by PH_1, =⇒ ∀v ∈ V \{counter}, ∃t_i ∈ S_T : r_1(counter, v) ∈ R_A2[t_i, t_i+1)
=⇒ ∀v ∈ V \{counter}, λ_tk(v) ≠ N
=⇒ ∀X ∈ X_A2/G, P_2(G_k)
=⇒ O_A2

Analysis of a decentralized counting algorithm

Contrary to the previous algorithm, Algorithm 3 below does not require a distinguished initial state for any vertex. Indeed, all vertices are initialized with the same labels (C, 1), meaning that they are all initially counters that have already included themselves into the count. Then, depending on the topological evolutions, the counters opportunistically merge by pairs (rule r_1 in Algorithm A_3). In the optimistic scenario, at the end of the execution only one node remains labeled C, and its second label gives the total number of vertices in the graph. A similar counting principle was used in [1] to illustrate population protocols - a possible application of this protocol was anecdotally mentioned, consisting in monitoring a flock of birds for fever, with the role of counters being played by sensors.

Algorithm 3 Decentralized counting algorithm.
initial states: {(C, 1)} (for all vertices)
alphabet: {C, F, N*}
rule r_1: (C, i), (C, j) → (C, i+j), F

Objective of the algorithm. Under the assumption of a fixed number of vertices, this algorithm reaches the desired state when exactly one vertex remains labeled C: P_3 = ∃u ∈ V : ∀v ∈ V \{u}, λ(u) = C ∧ λ(v) ≠ C. As with the two previous algorithms, the objective here is to reach this property by the end of the execution: O_A3 = P_3(G_k). The characterization below proves that the existence of a vertex belonging to the horizon of every other vertex is a necessary condition for this algorithm.

Condition 4 ∃v ∈ V : ∀u ∈ V, u ⇝ v

Lemma 2 ∀u ∈ V, ∃u' ∈ V : u ⇝ u' ∧ λ_tk(u') = C (counters cannot disappear from their own horizon). This lemma is proven in natural language because the equivalent formal steps would be substantially longer and inelegant (at least, without introducing further notations on sequences of relabelings). One should however see without effort how the proof could be technically translated.

Proof (by contradiction). The only operation that can suppress C labels is the application of r_1. Since all vertices are initially labeled C, assuming that Lemma 2 is false (i.e., that there is no C-labeled vertex in the horizon of a vertex) comes down to assuming that a relabeling sequence took place transitively from vertex u to a vertex u' that is outside the horizon of u, which is by definition impossible.

Proposition 5 Condition 4 (C_4) is necessary for A_3 to reach its objective O_A3.

Proof. ¬C_4(G)
=⇒ ∄v ∈ V : ∀u ∈ V, u ⇝ v
=⇒ ∀v ∈ V : λ_tk(v) = C, ∃u ∈ V : ¬(u ⇝ v) (given any final counter, there is a vertex that could not reach it by a journey).
By Lemma 2, =⇒ ∀v ∈ V : λ_tk(v) = C, ∃v' ∈ V \{v} : λ_tk(v') = C (there are at least two final counters).
=⇒ ¬P_3(G_k)
=⇒ ¬O_A3

The characterization of a sufficient condition for A_3 is left open.
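The gap between necessity and sufficiency can be observed on a small simulation. The sketch below is hypothetical in its encoding (one list of undirected edges per interval) and in its merge order, which is just one arbitrary execution among the many allowed, since executions at this abstraction level are nondeterministic.

```python
def run_A3(sequence):
    """Simulate one execution of Algorithm 3: every vertex starts as a
    counter (C, 1); whenever two counters share an edge they merge
    (r_1: (C, i), (C, j) -> (C, i+j), F). The scan order of the edge
    lists fixes which of the possible executions is simulated."""
    nodes = {v for edges in sequence for e in edges for v in e}
    counters = {u: 1 for u in nodes}          # value i of each (C, i)
    for edges in sequence:
        for a, b in edges:
            if a in counters and b in counters:  # both still labeled C
                counters[a] += counters.pop(b)   # b becomes F
    return counters                              # remaining counters

# Hypothetical trace on five nodes:
trace = [[("a", "c"), ("c", "d")],
         [("a", "b"), ("b", "c"), ("c", "d")],
         [("c", "e")],
         [("d", "e")]]

final = run_A3(trace)
```

On this trace C_4 holds (e, for instance, is in every node's horizon), yet this particular execution still ends with two counters whose values sum to the number of vertices; merging them would require a further edge between the two surviving counters. This is consistent with C_4 being necessary while no sufficient condition is known.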
This question is addressed from a probabilistic perspective in [1], but we believe a deterministic condition should also exist, although a very specific one.

Classification of dynamic networks and algorithms

In this section, we show how the previously characterized conditions can be used to define evolving graph classes, some of which are included in others. The relations of inclusion lead to a de facto classification of dynamic networks based on the properties they verify. As a result, the classification can in turn be used to compare several algorithms or problems on the basis of their topological requirements. Besides the classification based on the above conditions, we discuss a possible extension with 10 more classes considered in various recent works.

From conditions to classes of evolving graphs

From C_1 = ∀v ∈ V, emitter ⇝ v, we derive two classes of evolving graphs. F_1 is the class in which at least one vertex can reach all the others by a journey. If an evolving graph does not belong to this class, then there is no chance for A_1 to succeed, whatever the initial emitter. F_2 is the class in which every vertex can reach all the others by a journey. If an evolving graph does not belong to this class, then at least one vertex, if chosen as the initial emitter, will fail to inform all the others using A_1. From C_2 = ∀v ∈ V, emitter ⇝_st v, we derive two classes of evolving graphs. F_3 is the class in which at least one vertex can reach all the others by a strict journey. If an evolving graph belongs to this class, then there is at least one vertex that could, for sure, inform all the others using A_1 (under Progression Hypothesis 1). F_4 is the class of evolving graphs in which every vertex can reach all the others by a strict journey. If an evolving graph belongs to this class, then the success of A_1 is guaranteed for any vertex as the initial emitter (again, under Progression Hypothesis 1). From C_3 = ∀v ∈ V \{counter}, (counter, v) ∈ E, we derive two classes of graphs.
F5 is the class of evolving graphs in which at least one vertex shares, at some point of the execution, an edge with every other vertex. If an evolving graph does not belong to this class, then there is no chance of success for A2, whatever the vertex chosen as counter. Here, if we assume Progression Hypothesis 1, then F5 is also a class in which the success of the algorithm can be guaranteed for one specific vertex as counter. F6 is the class of evolving graphs in which every vertex shares an edge with every other vertex at some point of the execution. If an evolving graph does not belong to this class, then there exists at least one vertex that cannot count all the others using A2. Again, if we consider Progression Hypothesis 1, then F6 becomes a class in which success is guaranteed whatever the counter. Finally, from C4 = (∃v ∈ V : ∀u ∈ V, u ⇝ v), we derive the class F7, which is the class of graphs such that at least one vertex can be reached from all the others by a journey (in other words, the intersection of all nodes' horizons is non-empty). If a graph does not belong to this class, then there is absolutely no chance of success for A3.

Relations between classes
Since "every" implies "at least one", we have: F2 ⊆ F1, F4 ⊆ F3, and F6 ⊆ F5. Since a strict journey is a journey, we have: F3 ⊆ F1, and F4 ⊆ F2. Since an edge is a (strict) journey, we have: F5 ⊆ F3, F6 ⊆ F4, and F5 ⊆ F7. Finally, the existence of a journey between all pairs of vertices (F2) implies that each vertex can be reached by all the others, which implies in turn that at least one vertex can be reached by all the others (F7). We then have: F2 ⊆ F7. Although we have used here a non-strict inclusion (⊆), the inclusions described above are strict (one easily finds, for each inclusion, a graph that belongs to the parent class but is outside the child class). Figure 5 summarizes all these relations.
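The inclusion relations above form a small partial order that can be closed transitively. The encoding below is our own; the final query reproduces, for instance, that F6 ⊆ F7 holds transitively (via F5).

```python
def transitive_inclusions(direct):
    """Transitive closure of direct inclusion relations (child -> parents)."""
    closed = {a: set(bs) for a, bs in direct.items()}
    changed = True
    while changed:
        changed = False
        for a in closed:
            for b in set(closed[a]):
                for c in closed.get(b, ()):
                    if c not in closed[a]:
                        closed[a].add(c)
                        changed = True
    return closed

# Direct inclusions stated above, written child -> {parents}:
direct = {
    'F2': {'F1', 'F7'},
    'F3': {'F1'},
    'F4': {'F3', 'F2'},
    'F5': {'F3', 'F7'},
    'F6': {'F5', 'F4'},
}
closure = transitive_inclusions(direct)
print('F7' in closure['F6'])  # -> True: F6 is transitively included in F7
```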
Further classes were introduced in the recent literature, and organized into a classification in [?]. They include F8 (round connectivity): every node can reach every other node, and be reached back afterwards; F9 (recurrent connectivity): every node can reach all the others infinitely often; F10 (recurrence of edges): the underlying graph G = (V, E) is connected, and every edge in E re-appears infinitely often; F11 (time-bounded recurrence of edges): same as F10, but the re-appearance is bounded by a given time duration; F12 (periodicity): the underlying graph G is connected and every edge in E re-appears at regular intervals; F13 (eventual instant-routability): given any pair of nodes and at any time, there always exists a future G_i in which a (static) path exists between them; F14 (eventual instant-connectivity): at any time, there always exists a future G_i that is connected in a classic sense (i.e., a static path exists in G_i between any pair of nodes); F15 (perpetual instant-connectivity): every G_i is connected in a static sense; F16 (T-interval-connectivity): all the graphs in any sub-sequence G_i, G_{i+1}, ..., G_{i+T} have at least one connected spanning subgraph in common. Finally, F17 is the reference class for population protocols; it corresponds to the subclass of F10 in which the underlying graph G (the graph of interaction) is a complete graph. All these classes were shown to have particular algorithmic significance. For example, F16 allows one to speed up the execution of some algorithms by a factor T [18].

[Fig. 5: A first classification of dynamic networks, based on evolving graph properties that result from the analysis of Section 4. Legend: F1: ∃u ∈ V : ∀v ∈ V, u ⇝ v; F2: ∀u, v ∈ V, u ⇝ v; F3: ∃u ∈ V : ∀v ∈ V, u ⇝st v; F4: ∀u, v ∈ V, u ⇝st v; F5: ∃u ∈ V : ∀v ∈ V \{u}, (u, v) ∈ E; F6: ∀u, v ∈ V, (u, v) ∈ E; F7: ∃u ∈ V : ∀v ∈ V, v ⇝ u.]
In a context of broadcast, F15 guarantees that at least one new node is informed in every G_i, and consequently bounds the broadcast time by (a constant factor of) the network size [23]. F13 and F14 were used in [24] to characterize the contexts in which non-delay-tolerant routing protocols can eventually work if they retry upon failure. Classes F10, F11, and F12 were shown to have an impact on the distributed versions of foremost, shortest, and fastest broadcasts with termination detection. Precisely, foremost broadcast is feasible in F10, whereas shortest and fastest broadcasts are not; shortest broadcast becomes feasible in F11 [10], whereas fastest broadcast does not, becoming feasible only in F12. Also, even though foremost broadcast is possible in F10, the memorization of the journeys for subsequent use is possible in neither F10 nor F11; it is however possible in F12 [11]. Finally, F8 could be regarded as a sine qua non for termination detection in many contexts. Interestingly, this new range of classes, from F8 to F17, can also be fully connected by means of a set of inclusion relations, as illustrated in Figure 6. Both classifications can in turn be inter-connected through F8, a subclass of F2, which brings us to 17 connected classes. A classification of this type can be useful in several respects, including the possibility to transpose results or to compare solutions or problems on a formal basis, which we discuss now.

Comparison of algorithms based on their topological requirements
Let us consider the two counting algorithms given in Section 4. To have any chance of success, A2 requires the evolving graph to be in F5 (with a fortunate choice of counter) or in F6 (with any vertex as counter). On the other hand, A3 requires the evolving graph to be in F7.
Since both F5 (directly) and F6 (transitively) are included in F7, there are some topological scenarios (i.e., G ∈ F7 \ F5) in which A2 has no chance of success, while A3 has some. This observation allows us to claim that A3 is more general than A2 with respect to its topological requirements, and it illustrates how a classification can help compare two solutions on a fair and formal basis. In the particular case of these two counting algorithms, however, the claim should be tempered by the fact that a sufficient condition is known for A2, whereas none is known for A3. The choice of the right algorithm may thus depend on the target mobility context: if this context is thought to produce topological scenarios in F5 or F6, then A2 could be preferred; otherwise A3 should be considered. A similar type of reasoning could also teach us something about the problems themselves. Consider the above-mentioned results about shortest, fastest, and foremost broadcast with termination detection: the fact that F12 is included in F11, which is itself included in F10, tells us that there is an (at least partial) order between these problems' topological requirements: foremost ⪯ shortest ⪯ fastest. We believe that classifications of this type have the potential to lead to more equivalence results and formal comparisons between problems and algorithms. Now, one must also keep in mind that these are only topology-related conditions, and that other dimensions of properties (e.g., what knowledge is available to the nodes, or whether they have unique identifiers) keep playing the same important role as they do in a static context. Considering again the same example, the above classification hides the fact that detecting termination in the foremost case in F10 requires the emitter to know the number of nodes n in the network, whereas this knowledge is not necessary for shortest broadcast in F11 (knowing a bound on the recurrence time is sufficient instead).
In other words, lower topology-related requirements do not necessarily imply lower requirements in general.

Mechanization potential
One of the motivations of this work is to contribute to the development of assistance tools for algorithmic design and decision support in mobile ad hoc networks. The usual approach to assess the correct behavior of an algorithm, or its appropriateness to a particular mobility context, is to perform simulations. A typical simulation scenario consists in executing the algorithm concurrently with topological changes that are generated using a mobility model (e.g., the random waypoint model, in which every node repeatedly selects a new destination at random and moves towards it), or on top of real network traces that are first collected from the real world, then replayed at simulation time. As discussed in the introduction, the simulation approach has some limitations, among them the generation of results that are difficult to generalize, reproduce, or compare with one another on a non-subjective basis. The framework presented in this paper allows for an analytical alternative to simulations. The previous section already discussed how two algorithms could be compared on the basis of their topological requirements. We could actually envision a larger-purpose chain of operations, aiming to characterize how appropriate a given algorithm is to a given mobility context. The complete workflow is depicted in Figure 7. On the one hand, algorithms are analyzed, and necessary/sufficient conditions determined. This step produces classes of evolving graphs. On the other hand, mobility models and real-world networks can be used to generate a collection of network traces, each of which corresponds to an instance of an evolving graph. Checking how given instances distribute within given classes (i.e., are they included or not, and in what proportion?) may give a clue about the appropriateness of an algorithm in a given mobility context.
This section discusses to what extent such a workflow could be automated (mechanized), in particular through its two core operations, inclusion checking and analysis, both of which raise problems of a theoretical nature.

Checking network traces for inclusion in the classes
We provide below an efficient solution to check the inclusion of an evolving graph in any of the seven classes of Figure 5, that is, all classes derived from the analysis carried out in Section 4. Interestingly, each of these classes allows for efficient checking strategies, provided a few transformations are done. The transitive closure of the journeys of an evolving graph G is the graph H = (V, A_H), where A_H = {(v_i, v_j) : v_i ⇝ v_j}. Because journeys are oriented entities, their transitive closure is by nature a directed graph (see Figure 8). As explained in [5], the computation of transitive closures can be done efficiently, in O(|V|·|E|·log|S_T|·log|V|) time, by building the tree of shortest journeys from each node in the network. We extend this notion to the case of strict journeys, with H_strict = (V, A_{H_strict}), where A_{H_strict} = {(v_i, v_j) : v_i ⇝st v_j}. Given an evolving graph G, its underlying graph G, its transitive closure H, and the transitive closure of its strict journeys H_strict, the inclusion in each of the seven classes can be tested as follows:
- G ∈ F1 ⟺ H contains an out-dominating set of size 1.
- G ∈ F2 ⟺ H is a complete graph.
- G ∈ F3 ⟺ H_strict contains an out-dominating set of size 1.
- G ∈ F4 ⟺ H_strict is a complete graph.
- G ∈ F5 ⟺ the underlying graph G contains a dominating set of size 1.
- G ∈ F6 ⟺ the underlying graph G is a complete graph.
- G ∈ F7 ⟺ H contains an in-dominating set of size 1.
How the classes of Figure 6 could be checked is left open. Their case is more complex, or at least substantially different, because the corresponding definitions rely on the notion of infinity, which a finite network trace cannot capture.
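A direct (unoptimized) way to run these seven tests can be sketched as follows. The trace encoding (one set of undirected edges per time step, vertices 0..n-1) and the forward-sweep closure computation, used here instead of the tree-based algorithm of [5], are our own choices.

```python
def journey_closure(schedule, n, strict=False):
    """reach[u] = set of vertices u can reach by a (strict) journey.

    A strict journey uses at most one hop per time step; a non-strict
    journey may additionally follow a static path inside a single G_i.
    """
    reach = {u: {u} for u in range(n)}
    for edges in schedule:
        hops = {(a, b) for (a, b) in edges} | {(b, a) for (a, b) in edges}
        step = {u: set(r) for u, r in reach.items()}
        for u in range(n):
            for (x, y) in hops:
                if x in reach[u]:            # one hop from previous reach
                    step[u].add(y)
        if not strict:                       # static paths within this G_i
            changed = True
            while changed:
                changed = False
                for u in range(n):
                    for (x, y) in hops:
                        if x in step[u] and y not in step[u]:
                            step[u].add(y)
                            changed = True
        reach = step
    return reach

def classes(schedule, n):
    """Membership tests for F1..F7, phrased on the closures directly."""
    H = journey_closure(schedule, n)
    Hs = journey_closure(schedule, n, strict=True)
    E = {frozenset(e) for step in schedule for e in step}
    neigh = lambda u: {v for v in range(n) if v != u and frozenset((u, v)) in E}
    return {
        'F1': any(len(H[u]) == n for u in range(n)),
        'F2': all(len(H[u]) == n for u in range(n)),
        'F3': any(len(Hs[u]) == n for u in range(n)),
        'F4': all(len(Hs[u]) == n for u in range(n)),
        'F5': any(len(neigh(u)) == n - 1 for u in range(n)),
        'F6': all(len(neigh(u)) == n - 1 for u in range(n)),
        'F7': any(all(v in H[u] for u in range(n)) for v in range(n)),
    }

c = classes([{(0, 1)}, {(1, 2)}], 3)
print(c['F1'], c['F7'], c['F2'])  # -> True True False
```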
For example, whether a given edge is eventually going to reappear (e.g., in the context of checking inclusion in class F9 or F10) cannot be inferred from a finite sequence of events. However, it is certainly feasible to check whether a given recurrence bound applies within the time-span of a given network trace (time-bounded recurrence, F11), or similarly, whether the sequence of events repeats modulo p (for a given p) within the given trace (periodicity, F12).

Towards a mechanized analysis
The most challenging component of the workflow of Figure 7 is certainly that of analysis. Ultimately, one may hope to build a component like that of Figure 9, which is capable of answering whether a given property is necessary (no possible success without it), sufficient (no possible failure with it), or orthogonal (both success and failure possible) to a given algorithm under given computation assumptions (e.g., a particular type of synchronization or progression hypothesis). Such a workflow could ultimately be used to confirm an intuition of the analyst, as well as to discover new conditions automatically, based on a collection of properties. As of today, such an objective is still far out of reach, and a number of intermediate steps should be taken. For example, one may consider specific instances of evolving graphs rather than general properties. We develop below a prospective idea inspired by the work of Castéran et al. in static networks [8,7]. Their work focuses on bridging the gap between local computations and the formal proof management system Coq [4], and materializes, among other things, as the development of a Coq library: Loco. This library contains appropriate representations for graphs and labelings in Coq (by means of sets and maps), as well as an operational description of relabeling rule execution (see Section 6 of [7] for details).
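The two finite-trace checks mentioned above (a recurrence bound holding within the trace's time-span, and repetition modulo p) can be sketched directly. The trace encoding, one set of undirected edges per step, is our own assumption.

```python
def underlying_edges(schedule):
    """Edge set of the underlying graph (undirected edges)."""
    return {frozenset(e) for step in schedule for e in step}

def max_gap(schedule, edge):
    """Largest number of steps the edge stays absent within the trace."""
    times = [t for t, step in enumerate(schedule)
             if frozenset(edge) in {frozenset(e) for e in step}]
    gaps = [times[0] + 1]                      # wait before first appearance
    gaps += [b - a for a, b in zip(times, times[1:])]
    gaps.append(len(schedule) - times[-1])     # silence at the end of the trace
    return max(gaps)

def bounded_recurrence(schedule, bound):
    """Every underlying edge re-appears within `bound` steps (finite check)."""
    return all(max_gap(schedule, e) <= bound
               for e in underlying_edges(schedule))

def periodic(schedule, p):
    """The sequence of events repeats modulo p within the trace's span."""
    return all(schedule[t] == schedule[t + p]
               for t in range(len(schedule) - p))

trace = [{(0, 1)}, {(1, 2)}, {(0, 1)}, {(1, 2)}]
print(bounded_recurrence(trace, 2), periodic(trace, 2))  # -> True True
```

As the text notes, such checks only establish the property within the observed span; they cannot certify the infinite behaviour that the class definitions demand.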
The fact that such machinery is already developed is worth noting, because we believe evolving graphs could themselves be seen as relabelings acting on a 'presence' label on vertices and edges. The idea in this case would be to re-define topological events as graph relabeling rules in their own right, whose preconditions correspond to a given G_i and whose actions lead to the next graph G_{i+1}. Considering the execution of these rules concurrently with those of the studied algorithm could make it possible to leverage the power of Coq to mechanize proofs of correctness and/or impossibility results on given instances of evolving graphs.

Concluding remarks and open problems
This paper suggested the combination of existing tools and the use of dedicated methods for the analysis of distributed algorithms in dynamic networks. The resulting framework allows one to characterize the assumptions that a given algorithm requires in terms of topological evolution during its execution. We illustrated it by the analysis of three basic algorithms, whose necessary and sufficient conditions led to a sketch of a classification of dynamic networks. We showed how such a classification could in turn be used to compare algorithms on a formal basis and to provide assistance in the selection of an algorithm. This classification was extended with an additional 10 classes from the recent literature. We finally discussed some implications of this work for the mechanization of both decision support and analysis, including, respectively, the question of checking whether a given network trace belongs to one of the introduced classes, and prospective ideas on combining evolving graphs and graph relabeling systems within the Coq proof assistant. Analyzing the network requirements of algorithms is not a novel approach in general. It appears, however, that it was never considered in a systematic manner for dynamics-related assumptions.
Instead, the apparent norm in analytical research on dynamic networks is to study problems once a given set of assumptions has been fixed, these assumptions being likely chosen for analytical convenience. This appears particularly striking in the recent field of population protocols, where a common assumption is that a pair of nodes interacting once will interact infinitely often. In the light of the classification shown in this paper, such an assumption corresponds to a highly specific computing context. We believe the framework in this paper may help characterize weaker topological assumptions for the same class of problems. Our work being mostly conceptual in essence, a number of questions may be raised regarding its broader applicability. For example, the algorithms studied here are simple. A natural question is whether the framework will scale to more complex algorithms. We hope it could suit the analysis of most fundamental problems in distributed computing, such as election, naming, consensus, or the construction of spanning structures (note that election and naming may not have identical assumptions in a dynamic context, although they do in a static one). Our discussion on mechanization potential left two significant questions undiscussed: how to check the inclusion of an evolving graph in all the remaining classes, and how to approach the problem of mechanizing analysis relative to a general property. Another prospect is to investigate how intermediate properties, between necessary and sufficient conditions, could be explored, for example to guarantee a desired probability of success. Finally, besides these characterizations of feasibility, one may also want to look at the impact that particular properties have on the complexity of problems and algorithms. Analytical research in dynamic networks is still in its infancy, and many exciting questions remain to be explored.
9,353
1102.2094
1552521148
Multi-mode real-time systems are those which support applications with different modes of operation, where each mode is characterized by a specific set of tasks. At run-time, such systems can, at any time, be requested to switch from its current operating mode to another mode (called "new mode") by replacing the current set of tasks with that of the new-mode. Thereby, ensuring that all the timing requirements are met not only requires that a schedulability test is performed on the tasks of each mode but also that (i) a protocol for transitioning from one mode to another is specified and (ii) a schedulability test for each transition is performed. We propose two distinct protocols that manage the mode transitions upon uniform and identical multiprocessor platforms at run-time, each specific to distinct task requirements. For each protocol, we formally establish schedulability analyses that indicate beforehand whether all the timing requirements will be met during any mode transition of the system. This is performed assuming both Fixed-Task-Priority and Fixed-Job-Priority schedulers.
Among the uniprocessor protocols, the authors of @cite_16 @cite_22 @cite_25 proposed the following. A protocol @cite_22 where tasks are assigned priorities according to the Deadline Monotonic scheduling algorithm and are scheduled with time offsets only during the mode change.
{ "abstract": [ "It is noted that in many hard real-time systems, the set of functions that a system is required to provide may change over time. One way of providing this change is to allow currently running hard real-time tasks to be deleted or changed, or new tasks to be added. The authors define this change as a mode change, and seek to guarantee a priori the timing constraints of all tasks across the change from one mode to another. The authors derive a scheduling theory for static priority preemptive scheduling that can be used to make such guarantees. The schedulability test discussed could easily be incorporated into engineering support tools. The authors also discuss some of the approaches that could be taken to extend the analysis to cope with more complex and interesting scheduling problems, and to handle distributed hard real-time systems.", "Consider the problem of scheduling sporadically-arriving tasks with implicit deadlines using Earliest-Deadline-First (EDF) on a single processor. The system may undergo changes in its operational modes and therefore the characteristics of the task set may change at run-time. We consider a well-established previously published mode-change protocol and we show that if every mode utilizes at most 50% of the processing capacity then all deadlines are met. We also show that there exists a task set that misses a deadline although the utilization exceeds 50% by just an arbitrarily small amount. Finally, we present, for a relevant special case, an exact schedulability test for EDF with mode change.", "One important requirement of many real-time systems is the ability to undergo several mutually exclusive modes of operation. By means of a mode change the system changes its functionality over time, thus being able to adapt to changing environmental situations. In order to successfully include mode changes in real-time systems, a mode change protocol with well known real-time behaviour is necessary. 
The authors provide a new model and related schedulability analysis for mode changes in flexible real-time systems." ], "cite_N": [ "@cite_16", "@cite_25", "@cite_22" ], "mid": [ "2145680671", "1535203306", "1714922633" ] }
Global Scheduling of Multi-Mode Real-Time Applications upon Multiprocessor Platforms
0
1102.2094
1552521148
Multi-mode real-time systems are those which support applications with different modes of operation, where each mode is characterized by a specific set of tasks. At run-time, such systems can, at any time, be requested to switch from its current operating mode to another mode (called "new mode") by replacing the current set of tasks with that of the new-mode. Thereby, ensuring that all the timing requirements are met not only requires that a schedulability test is performed on the tasks of each mode but also that (i) a protocol for transitioning from one mode to another is specified and (ii) a schedulability test for each transition is performed. We propose two distinct protocols that manage the mode transitions upon uniform and identical multiprocessor platforms at run-time, each specific to distinct task requirements. For each protocol, we formally establish schedulability analyses that indicate beforehand whether all the timing requirements will be met during any mode transition of the system. This is performed assuming both Fixed-Task-Priority and Fixed-Job-Priority schedulers.
A protocol was introduced in @cite_12 , assuming Fixed-Task-Priority scheduling. The authors of @cite_25 then extended this protocol to the Earliest Deadline First @cite_9 scheduling algorithm.
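The EDF mode-change result cited here (see the @cite_25 abstract) gives a simple sufficient test: if every mode utilizes at most 50% of the processing capacity, all deadlines are met across any mode change, and the bound is tight. A minimal sketch, with our own task encoding as (wcet, period) pairs for implicit-deadline sporadic tasks:

```python
def mode_utilization(tasks):
    """Utilization of one mode: sum of C_i / T_i over its tasks,
    each task given as a (wcet, period) pair."""
    return sum(c / t for c, t in tasks)

def mode_change_schedulable(modes):
    """Sufficient test from the cited EDF mode-change result:
    every mode using at most 50% of the processor implies all
    deadlines are met during any mode transition."""
    return all(mode_utilization(m) <= 0.5 for m in modes)

modes = [[(1, 4), (2, 10)],     # U = 0.45
         [(1, 5), (3, 12)]]     # U = 0.45
print(mode_change_schedulable(modes))  # -> True
```

Note that this is only sufficient: a system failing the test may still be schedulable, which is why the cited work also develops an exact test for a special case.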
{ "abstract": [ "The problem of multiprogram scheduling on a single processor is studied from the viewpoint of the characteristics peculiar to the program functions that need guaranteed service. It is shown that an optimum fixed priority scheduler possesses an upper bound to processor utilization which may be as low as 70 percent for large task sets. It is also shown that full processor utilization can be achieved by dynamically assigning priorities on the basis of their current deadlines. A combination of these two scheduling techniques is also discussed.", "Consider the problem of scheduling sporadically-arriving tasks with implicit deadlines using Earliest-Deadline-First (EDF) on a single processor. The system may undergo changes in its operational modes and therefore the characteristics of the task set may change at run-time. We consider a well-established previously published mode-change protocol and we show that if every mode utilizes at most 50% of the processing capacity then all deadlines are met. We also show that there exists a task set that misses a deadline although the utilization exceeds 50% by just an arbitrarily small amount. Finally, we present, for a relevant special case, an exact schedulability test for EDF with mode change.", "In many real-time applications, the set of tasks in the system as well as the characteristics of the tasks change during system execution. Specifically, the system moves from one mode of execution to another as its mission progresses. A mode change is characterized by the deletion of some tasks, addition of new tasks, or changes in the parameters of certain tasks, e.g., increasing the sampling rate to obtain a more accurate result. This paper discusses how mode changes can be accommodated within a given framework of priority driven real-time scheduling." ], "cite_N": [ "@cite_9", "@cite_25", "@cite_12" ], "mid": [ "2109488193", "1535203306", "2010894613" ] }
Global Scheduling of Multi-Mode Real-Time Applications upon Multiprocessor Platforms
0
1102.2094
1552521148
Multi-mode real-time systems are those which support applications with different modes of operation, where each mode is characterized by a specific set of tasks. At run-time, such systems can, at any time, be requested to switch from its current operating mode to another mode (called "new mode") by replacing the current set of tasks with that of the new-mode. Thereby, ensuring that all the timing requirements are met not only requires that a schedulability test is performed on the tasks of each mode but also that (i) a protocol for transitioning from one mode to another is specified and (ii) a schedulability test for each transition is performed. We propose two distinct protocols that manage the mode transitions upon uniform and identical multiprocessor platforms at run-time, each specific to distinct task requirements. For each protocol, we formally establish schedulability analyses that indicate beforehand whether all the timing requirements will be met during any mode transition of the system. This is performed assuming both Fixed-Task-Priority and Fixed-Job-Priority schedulers.
The authors of @cite_16 introduced a particular protocol which allows tasks to modify their parameters (period, execution time, etc.) during the mode changes. As in @cite_22 , this study assumes that the tasks are scheduled according to the Deadline Monotonic scheduling algorithm.
{ "abstract": [ "It is noted that in many hard real-time systems, the set of functions that a system is required to provide may change over time. One way of providing this change is to allow currently running hard real-time tasks to be deleted or changed, or new tasks to be added. The authors define this change as a mode change, and seek to guarantee a priori the timing constraints of all tasks across the change from one mode to another. The authors derive a scheduling theory for static priority preemptive scheduling that can be used to make such guarantees. The schedulability test discussed could easily be incorporated into engineering support tools. The authors also discuss some of the approaches that could be taken to extend the analysis to cope with more complex and interesting scheduling problems, and to handle distributed hard real-time systems.", "One important requirement of many real-time systems is the ability to undergo several mutually exclusive modes of operation. By means of a mode change the system changes its functionality over time, thus being able to adapt to changing environmental situations. In order to successfully include mode changes in real-time systems, a mode change protocol with well known real-time behaviour is necessary. The authors provide a new model and related schedulability analysis for mode changes in flexible real-time systems." ], "cite_N": [ "@cite_16", "@cite_22" ], "mid": [ "2145680671", "1714922633" ] }
Global Scheduling of Multi-Mode Real-Time Applications upon Multiprocessor Platforms
0
1102.1402
2950593226
Social media generates a prodigious wealth of real-time content at an incessant rate. From all the content that people create and share, only a few topics manage to attract enough attention to rise to the top and become temporal trends which are displayed to users. The question of what factors cause the formation and persistence of trends is an important one that has not been answered yet. In this paper, we conduct an intensive study of trending topics on Twitter and provide a theoretical basis for the formation, persistence and decay of trends. We also demonstrate empirically how factors such as user activity and number of followers do not contribute strongly to trend creation and its propagation. In fact, we find that the resonance of the content with the users of the social network plays a major role in causing trends.
There has been some prior work on analyzing connections on Twitter. @cite_1 studied social interactions on Twitter to reveal that the driving process for usage is a sparse hidden network underlying the friends and followers, while most of the links represent meaningless interactions. @cite_11 have examined Twitter as a mechanism for word-of-mouth advertising. They considered particular brands and products and examined the structure of the postings and the change in sentiments. @cite_9 proposed a propagation model that predicts which users will tweet about which URL based on the history of past user activity.
{ "abstract": [ "Microblogging sites are a unique and dynamic Web 2.0 communication medium. Understanding the information flow in these systems can not only provide better insights into the underlying sociology, but is also crucial for applications such as content ranking, recommendation and filtering, spam detection and viral marketing. In this paper, we characterize the propagation of URLs in the social network of Twitter, a popular microblogging site. We track 15 million URLs exchanged among 2.7 million users over a 300 hour period. Data analysis uncovers several statistical regularities in the user activity, the social graph, the structure of the URL cascades and the communication dynamics. Based on these results we propose a propagation model that predicts which users are likely to mention which URLs. The model correctly accounts for more than half of the URL mentions in our data set, while maintaining a false positive rate lower than 15%.", "", "In this paper we report research results investigating microblogging as a form of electronic word-of-mouth for sharing consumer opinions concerning brands. We analyzed more than 150,000 microblog postings containing branding comments, sentiments, and opinions. We investigated the overall structure of these microblog postings, the types of expressions, and the movement in positive or negative sentiment. We compared automated methods of classifying sentiment in these microblogs with manual coding. Using a case study approach, we analyzed the range, frequency, timing, and content of tweets in a corporate account. Our research findings show that 19% of microblogs contain mention of a brand. Of the branding microblogs, nearly 20% contained some expression of brand sentiments. Of these, more than 50% were positive and 33% were critical of the company or product. Our comparison of automated and manual coding showed no significant differences between the two approaches. 
In analyzing microblogs for structure and composition, the linguistic structure of tweets approximate the linguistic patterns of natural language expressions. We find that microblogging is an online tool for customer word of mouth communications and discuss the implications for corporations using microblogging as part of their overall marketing strategy. © 2009 Wiley Periodicals, Inc." ], "cite_N": [ "@cite_9", "@cite_1", "@cite_11" ], "mid": [ "1943015726", "", "2139043937" ] }
Trends in Social Media : Persistence and Decay
Social media is growing at an explosive rate, with millions of people all over the world generating and sharing content on a scale barely imaginable a few years ago. This has resulted in massive participation, with countless updates, opinions, news items, comments and product reviews being constantly posted and discussed in social web sites such as Facebook, Digg and Twitter, to name a few. This widespread generation and consumption of content has created an extremely competitive online environment where different types of content vie with each other for the scarce attention of the user community. In spite of the seemingly chaotic fashion in which all these interactions take place, certain topics manage to attract an inordinate amount of attention, thus bubbling to the top in terms of popularity. Through their visibility, these popular topics contribute to the collective awareness of what is trending and at times can also affect the public agenda of the community. At present there is no clear picture of what causes these topics to become extremely popular, nor of how some persist in the public eye longer than others. There is considerable evidence that one aspect that causes topics to decay over time is their novelty [11]. Another factor responsible for their decay is the competitive nature of the medium. As content starts propagating through a social network it can usurp the positions of earlier topics of interest, and due to the limited attention of users it is soon rendered invisible by newer content. Yet another aspect responsible for the popularity of certain topics is the influence of members of the network on the propagation of content. Some users generate content that resonates very strongly with their followers, thus causing the content to propagate and gain popularity [9].
The source of that content can originate in standard media outlets or from users who generate topics that eventually become part of the trends and capture the attention of large communities. In either case, the fact that a small set of topics become part of the trending set means that they will capture the attention of a large audience for a short time, thus contributing in some measure to the public agenda. When topics originate in media outlets, the social medium acts as a filter and an amplifier of what the standard media produces, and thus contributes to the agenda-setting mechanisms that have been thoroughly studied for more than three decades [7]. In this paper, we study trending topics on Twitter, an immensely popular microblogging network on which millions of users create and propagate an enormous stream of content on a daily basis. The trending topics, which are shown on the main website, represent those pieces of content that bubble to the surface on Twitter owing to frequent mentions by the community. Thus they can be equated to crowdsourced popularity. We then determine the factors that contribute to the creation and evolution of these trends, as they provide insight into the complex interactions that lead to the popularity and persistence of certain topics on Twitter, while most others fail to catch on and are lost in the flow. We first analyze the distribution of the number of tweets across trending topics. We observe that they are characterized by a strong log-normal distribution, similar to that found in other networks such as Digg, and which is generated by a stochastic multiplicative process [11]. We also find that the decay function for the tweets is mostly linear. Subsequently we study the persistence of the trends to determine which topics last long at the top. Our analysis reveals that there are few topics that last for long times, while most topics break fairly quickly, on the order of 20-40 minutes.
Finally, we look at the impact of users on trend persistence times within Twitter. We find that traditional notions of user influence such as the frequency of posting and the number of followers are not the main drivers of trends, as previously thought. Rather, long trends are characterized by the resonating nature of the content, which is found to arise mainly from traditional media sources. We observe that social media behaves as a selective amplifier for the content generated by traditional media, with chains of retweets by many users leading to the observed trends.
Twitter
Twitter is an extremely popular online microblogging service that has gained a very large user following, consisting of close to 200 million users. The Twitter graph is a directed social network, where each user chooses to follow certain other users. Each user submits periodic status updates, known as tweets, that consist of short messages limited in size to 140 characters. These updates typically consist of personal information about the users, news or links to content such as images, video and articles. The posts made by a user are automatically displayed on the user's profile page, as well as shown to his followers. A retweet is a post originally made by one user that is forwarded by another user. Retweets are useful for propagating interesting posts and links through the Twitter community. Twitter has attracted lots of attention from corporations due to the immense potential it provides for viral marketing. Due to its huge reach, Twitter is increasingly used by news organizations to disseminate news updates, which are then filtered and commented on by the Twitter community. A number of businesses and organizations are using Twitter or similar micro-blogging services to advertise products and disseminate information to stockholders.
Twitter Trends Data
Trending topics are presented as a list by Twitter on their main Twitter.com site, and are selected by an algorithm proprietary to the service. They mostly consist of two- to three-word expressions, and we can assume with high confidence that they are snippets that appear more frequently in the most recent stream of tweets than one would expect from a document term frequency analysis such as TFIDF. The list of trending topics is updated every few minutes as new topics become popular. Twitter provides a Search API for extracting tweets containing particular keywords. To obtain the dataset of trends for this study, we repeatedly used the API in two stages. First, we collected the trending topics by issuing an API query every 20 minutes. Second, for each trending topic, we used the Search API to collect all the tweets mentioning this topic over the past 20 minutes. For each tweet, we collected the author, the text of the tweet and the time it was posted. Using this procedure for data collection, we obtained 16.32 million tweets on 3361 different topics over a course of 40 days in Sep-Oct 2010. We picked 20 minutes as the duration of a timestamp after evaluating different time lengths, to optimize the discovery of new trends while still capturing all of their tweets, since Twitter only allows 1500 tweets per search query. We found that with 20-minute intervals, we were able to capture all the tweets for the trending topics efficiently. We noticed that many topics become trends again after they stop trending according to the Twitter trend algorithm. We therefore considered these trends as separate sequences: it is very likely that the spreading mechanism of trends has a strong time component with an initial increase and a trailing decline, and once a topic stops trending, it should be considered as new when it reappears among the users that become aware of it later.
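The 20-minute timestamping described above can be sketched as a simple time-bucketing step. This is an illustrative Python sketch, not the authors' code; the tweet timestamps and collection start time are hypothetical stand-ins for the collected data:

```python
from datetime import datetime, timezone

INTERVAL = 20 * 60  # the paper's 20-minute timestamp duration, in seconds

def bucket_index(ts: datetime, start: datetime) -> int:
    """Index of the 20-minute interval containing timestamp `ts`."""
    return int((ts - start).total_seconds() // INTERVAL)

def counts_per_interval(times, start):
    """Per-interval tweet counts n_q(t) for one topic's tweet times."""
    counts = {}
    for t in times:
        i = bucket_index(t, start)
        counts[i] = counts.get(i, 0) + 1
    return counts

# Hypothetical tweet times for one topic within its first hour of trending.
start = datetime(2010, 9, 1, tzinfo=timezone.utc)
times = [datetime(2010, 9, 1, 0, m, tzinfo=timezone.utc) for m in (0, 5, 25, 59)]
print(counts_per_interval(times, start))  # {0: 2, 1: 1, 2: 1}
```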
This procedure split the 3468 originally collected trend titles into 6084 individual trend sequences.
Distribution of tweets
We measured the number of tweets that each topic gets in 20-minute intervals, from the time the topic starts trending until it stops, as described earlier. From this we can sum up the tweet counts over time to obtain the cumulative number of tweets $N_q(t_i)$ of topic $q$ for any time frame $t_i$,

$$N_q(t_i) = \sum_{\tau=1}^{i} n_q(t_\tau), \qquad (1)$$

where $n_q(t)$ is the number of tweets on topic $q$ in time interval $t$. Since it is plausible to assume that initially popular topics will stay popular later on in time as well, we can calculate the ratios $C_q(t_i, t_j) = N_q(t_i)/N_q(t_j)$ for topic $q$ and time frames $t_i$ and $t_j$. Figure 1(a) shows the distribution of the $C_q(t_i, t_j)$ over all topics for four arbitrarily chosen pairs of time frames (such that $t_i > t_j$, with $t_i$ relatively large and $t_j$ small). These figures immediately suggest that the ratios $C_q(t_i, t_j)$ are distributed according to log-normal distributions, since the horizontal axes are logarithmically rescaled and the histograms appear to be Gaussian functions. To check whether this assumption holds, consider Fig. 1(b), where we show the Q-Q plots of the distributions of Fig. 1(a) in comparison to normal distributions. We can observe that the (logarithmically rescaled) empirical distributions exhibit normality to a high degree for later time frames, with the exception of the high end of the distributions. These 10-15 outliers occur more frequently than could be expected for a normal distribution. Log-normals arise as a result of multiplicative growth processes with noise [8].
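The cumulative counts of Eq. (1) and the ratios $C_q(t_i, t_j)$ can be sketched in a few lines. This is a minimal illustration using synthetic per-interval counts in place of the measured $n_q(t)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# n[q, t]: synthetic per-interval tweet counts for 500 topics over 50
# intervals, standing in for the paper's measured n_q(t).
n = rng.poisson(lam=100, size=(500, 50))
N = np.cumsum(n, axis=1)          # N_q(t_i) as in Eq. (1)

t_i, t_j = 40, 5                  # a "large" and a "small" time frame
C = N[:, t_i] / N[:, t_j]         # ratios C_q(t_i, t_j) over all topics

# Under the multiplicative-growth picture, log C should look roughly
# normal over topics; here we just report its first two moments.
logC = np.log(C)
print(logC.mean(), logC.std())
```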
In our case, if $N_q(t)$ is the number of tweets for a given topic $q$ at time $t$, then the dynamics that leads to a log-normally distributed $N_q(t)$ over $q$ can be written as:

$$N_q(t) = \left[1 + \gamma(t)\,\xi(t)\right] N_q(t-1), \qquad (2)$$

where the random variables $\xi(t)$ are positive, independent and identically distributed as a function of $t$, with mean 1 and variance $\sigma^2$. Note that time here is measured in discrete steps ($t-1$ expresses the previous time step with respect to $t$), in accordance with our measurement setup. $\gamma(t)$ is introduced to account for the novelty decay [11]. We would expect topics to initially increase in popularity but to slow down their activity as they become obsolete or known to most users. Since $\gamma(t)$ is made up of decreasing positive numbers, the growth of $N_q(t)$ slows with time. To see that Eq. (2) leads to a log-normal distribution of $N_q(t)$, we first expand the recursion relation:

$$N_q(t) = \prod_{s=1}^{t} \left[1 + \gamma(s)\,\xi(s)\right] N_q(0). \qquad (3)$$

Here $N_q(0)$ is the initial number of tweets in the earliest time step. Taking the logarithm of both sides of Eq. (3),

$$\ln N_q(t) - \ln N_q(0) = \sum_{s=1}^{t} \ln\left[1 + \gamma(s)\,\xi(s)\right]. \qquad (4)$$

The right-hand side of Eq. (4) is the sum of a large number of random variables. The central limit theorem thus states that if the random variables are independent and identically distributed, the sum asymptotically approximates a normal distribution. The i.i.d. condition holds exactly for the $\xi(s)$ terms, and it can be shown that in the presence of the discounting factors (if the rate of decline is not too fast), the resulting distribution is still normal [11]. In other words, we expect from this model that $\ln\left[N_q(t)/N_q(0)\right]$ will be distributed normally over $q$ when fixing $t$. These quantities were shown in Fig. 1 above. Essentially, if the difference between the two times where we take the ratio is big enough, the log-normal property is observed. The intuitive explanation for the multiplicative model of Eq.
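The multiplicative recursion of Eq. (2) is easy to simulate. A minimal sketch, assuming $\gamma(t) = 1/t$ (the form measured later in the paper), an arbitrary initial count $N_q(0) = 100$, and log-normal noise as one convenient choice of positive i.i.d. $\xi$ with mean 1:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate Eq. (2): N_q(t) = [1 + gamma(t) xi(t)] N_q(t-1),
# with novelty decay gamma(t) = 1/t and positive i.i.d. noise xi, E[xi] = 1.
Q, T = 2000, 200                  # synthetic topics and time steps
N = np.full(Q, 100.0)             # assumed initial counts N_q(0)
for t in range(1, T + 1):
    xi = rng.lognormal(mean=-0.02, sigma=0.2, size=Q)  # E[xi] = exp(0) = 1
    N *= 1.0 + xi / t

# By the CLT argument of Eq. (4), ln[N_q(t)/N_q(0)] should be roughly
# normal over q, i.e. N_q(t) is approximately log-normally distributed.
x = np.log(N / 100.0)
print(x.mean(), x.std())
```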
(2) is that at each time step the number of new tweets on a topic is a multiple of the tweets that we already have. The number of past tweets, in turn, is a proxy for the number of users that are aware of the topic up to that point. These users discuss the topic on different forums, including Twitter, essentially creating an effective network through which the topic spreads. As more users talk about a particular topic, many others are likely to learn about it, thus giving the multiplicative nature of the spreading. The noise term is necessary to account for the stochasticity of this process. On the other hand, the monotonically decreasing $\gamma(t)$ characterizes the decay in timeliness and novelty of the topic as it slowly becomes obsolete and known to most users, and guarantees that $N_q(t)$ does not grow unbounded [11]. To measure the functional form of $\gamma(t)$, we observe that the expected value of the noise term $\xi(t)$ in Eq. (2) is 1. Thus averaging the ratios between consecutive tweet counts over topics yields $\gamma(t)$:

$$\gamma(t) = \left\langle \frac{N_q(t)}{N_q(t-1)} \right\rangle_q - 1. \qquad (5)$$

The experimental values of $\gamma(t)$ in time are shown in Fig. 2. [Figure 2 caption: the log-log plot shows that $\gamma(t)$ decreases in a power-law fashion, with an exponent measured to be exactly $-1$ (a linear regression on the logarithmically transformed data fits with $R^2 = 0.98$); the fit was performed in the range of the solid line next to the function, which shows the fit result shifted lower for easy comparison; the inset displays the same $\gamma(t)$ function on standard linear scales.] It is interesting to notice that $\gamma(t)$ follows a power-law decay very precisely, with an exponent of $-1$, which means that $\gamma(t) \sim 1/t$.
The growth of tweets over time
The interesting fact about the decay function $\gamma(t) = 1/t$ is that it results in a linear increase in the total number of tweets for a topic over time. To see this, we can again consider Eq.
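The estimator of Eq. (5) and the power-law fit for its exponent can be sketched as follows, with simulated trajectories standing in for the measured tweet counts (the true decay is set to $1/t$, so the fitted slope should come out near $-1$):

```python
import numpy as np

rng = np.random.default_rng(2)

# Recover gamma(t) from simulated trajectories via Eq. (5),
# gamma(t) = <N_q(t)/N_q(t-1)>_q - 1, then fit the log-log slope.
Q, T = 5000, 100
N = np.empty((Q, T + 1))
N[:, 0] = 100.0                                   # assumed N_q(0)
for t in range(1, T + 1):
    xi = rng.lognormal(-0.005, 0.1, size=Q)       # positive noise, E[xi] = 1
    N[:, t] = (1.0 + xi / t) * N[:, t - 1]        # true gamma(t) = 1/t

gamma = (N[:, 1:] / N[:, :-1]).mean(axis=0) - 1.0     # Eq. (5)
t = np.arange(1, T + 1)
slope, _ = np.polyfit(np.log(t), np.log(gamma), 1)    # log-log regression
print(round(slope, 2))    # fitted exponent, close to -1 as in the paper
```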
(4), and approximate the discrete sum of random variables with an integral of the operand of the sum, substituting the noise term with its expectation value, $\langle \xi(t) \rangle = 1$, as defined earlier (this is valid if $\gamma(t)$ is changing slowly). These approximations yield the following:

$$\ln \frac{N_q(t)}{N_q(0)} \approx \int_{0}^{t} \ln\left[1 + \gamma(\tau)\right] d\tau \approx \int_{0}^{t} \frac{1}{\tau}\, d\tau = \ln t. \qquad (6)$$

In simplifying the logarithm above, we used the Taylor expansion $\ln(1+x) \approx x$ for small $x$, and also used the fact that $\gamma(\tau) = 1/\tau$, as we found experimentally earlier. It can be immediately seen then that $N_q(t) \approx N_q(0)\, t$ for the range of $t$ where $\gamma(t)$ is inversely proportional to $t$. In fact, it can easily be proven that no functional form for $\gamma(t)$ other than $\gamma(t) \sim 1/t$ would yield a linear increase in $N_q(t)$ (assuming that the above approximations are valid for the stochastic discrete case). This suggests that the trending topics featured on Twitter increase their tweet counts linearly in time, and that their dynamics is captured by the multiplicative noise model we discussed above. To check this, we first plotted a few representative examples of the cumulative number of tweets for a few topics in Fig. 3. It is apparent that all the (randomly selected) topics show an approximate initial linear growth in the number of tweets. We also checked whether this is true in general. Figure 4 shows the second discrete derivative of the total number of tweets, which we expect to be 0 if the trend lines are linear on average. A positive second derivative would mean that the growth is superlinear, while a negative one suggests that it is sublinear. We point out that before taking the average of all second derivatives over the different topics in time, we divided the derivatives by the average of the total number of tweets of the given topics.
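The normalized second-discrete-derivative check for linearity can be sketched directly; this is an illustrative implementation, with toy series in place of the measured cumulative counts:

```python
import numpy as np

# Linearity check as in the paper: the second discrete derivative of a
# cumulative count series, normalized by the series' mean, should average
# to ~0 for linear growth, >0 for superlinear, <0 for sublinear.
def norm_second_derivative(N):
    """Second difference of a cumulative series, divided by its mean count."""
    N = np.asarray(N, dtype=float)
    return np.diff(N, 2) / N.mean()

linear = np.arange(0, 1000, 10)          # N(t) ~ c * t: derivative is 0
quadratic = np.arange(100) ** 2          # superlinear: derivative is positive
print(norm_second_derivative(linear).mean())     # ≈ 0
print(norm_second_derivative(quadratic).mean())  # > 0
```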
We did this so as to account for the large difference between the ranges of the number of tweets across topics, since a simple averaging without prior normalization would likely bias the results towards topics with large tweet counts and their fluctuations. The averages are shown in Fig. 4. [Figure 3 caption: the number of total tweets on topics in the first 48 hours, normalized to 1 so that they can be shown on the same plot; the randomly selected topics were (from left to right): "Earnings", "#pulpopaul", "Sheen", "Deuces Remix", "Isaacs", "#gmp24", and "Mac App".] We observe from the figure that when we consider all topics there is a very slight sublinear growth regime right after the topic starts trending, which then becomes mostly linear, as the derivatives data is distributed around 0. If we consider only very popular topics (those that were on the trends site for more than 4 hours), we observe an even better linear trend. One reason for this may be that topics that trend only for short periods exhibit a concave curvature, since they lose popularity quickly and are removed from among the Twitter trends by the system early on. These results suggest that once a topic is highlighted as a trend on a very visible website, its growth becomes linear in time. The reason for this may be that as more and more visitors come to the site and see the trending topics, there is a constant probability that they will also talk and tweet about them. This is in contrast to scenarios where the primary channel of information flow is more informal. In that case we expect that the growth will first exhibit a phase of accelerated growth and then slow down to a point where no one talks about the topic any more. Content that spreads through a social network without external "driving" will follow such a course, as has been shown elsewhere [10,12]. An important reason to study trending topics on Twitter is to understand why some of them remain at the top while others dissipate quickly.
To see the general pattern of behavior on Twitter, we examined the lifetimes of the topics that trended in our study. From Fig. 5(a) we can see that while most topics occur continuously, around 34% of topics appear in more than one sequence. This means that they stop trending for a certain period of time before beginning to trend again. A reason for this behavior may be the time zones that are involved. For instance, if a topic is a piece of news relevant to North American readers, a trend may first appear in the Eastern time zone, and 3 hours later in the Pacific time zone. Likewise, a trend may return the next morning if it was trending the previous evening, when more users check their accounts again after the night. Given that many topics do not occur continuously, we examined the distribution of the lengths of the sequences for all topics. In Fig. 5(b) we show the lengths of the topic sequences. It can be observed that this distribution follows a power law, which means that most topic sequences are short and a few topics last for a very long time. This could be due to the fact that there are many topics competing for attention. Thus, the topics that make it to the top (the trend list) last for a short time. However, in many cases the topics return to trend for more time, which is captured by the number of sequences shown in Fig. 5(a), as mentioned.
Relation to authors and activity
We first examine the authors who tweet about given trending topics, to see whether the authors change over time or whether it is the same people who keep tweeting to cause trends. When we computed the correlation between the number of unique authors for a topic and the duration (number of timestamps) that the topic trends, we noticed that the correlation is very strong (0.80). This indicates that as the number of authors increases so does the lifetime, suggesting that the propagation through the network causes the topic to trend.
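The splitting of a topic's trending timestamps into contiguous sequences (the step that turned 3468 trend titles into 6084 trend sequences earlier) can be sketched as grouping interval indices into consecutive runs. The interval indices below are hypothetical:

```python
# Split a topic's trending interval indices into contiguous sequences,
# so that a topic that stops trending and later reappears is counted
# as separate trend sequences.
def split_sequences(intervals):
    """Group sorted interval indices into runs of consecutive values."""
    runs, current = [], [intervals[0]]
    for a, b in zip(intervals, intervals[1:]):
        if b == a + 1:
            current.append(b)      # still trending in the next interval
        else:
            runs.append(current)   # gap: close this trend sequence
            current = [b]
    runs.append(current)
    return runs

# A topic that trends, stops, then trends again later (e.g. across time zones):
print(split_sequences([3, 4, 5, 9, 10, 22]))  # [[3, 4, 5], [9, 10], [22]]
```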
To measure the impact of authors we compute for each topic the active-ratio $a_q$ as:

$$a_q = \frac{\text{Number of Tweets}}{\text{Number of Unique Authors}}. \qquad (7)$$

The correlation of the active-ratio with trending duration is shown in Fig. 6. We observe that the active-ratio quickly saturates and varies little with time for any given topic. Since the authors change over time with the topic propagation, the correlation between the number of tweets and the number of authors is high (0.83).
[Figure 6 caption: relation between the active-ratio and the length of the trend across all topics, showing that the active-ratio does not vary significantly with time.]
Persistence of long trending topics
On Twitter each topic competes with the others to survive on the trending page. As we now show, for the long-trending topics we can derive an expression for the distribution of their average length. We assume that if the relative growth rate of tweets, denoted by $\varphi_t = N_t / N_{t-1}$, falls below a certain threshold $\theta$, the topic stops trending. When we consider long-trending topics, as they grow in time, they overcome the initial novelty decay, and the $\gamma$ term in Eq. (3) becomes fairly constant. So we can measure the change over time using only the random variable $\xi$:

$$\log \varphi_t = \log \frac{N_t}{N_{t-1}} = \log \frac{N_t}{N_0} - \log \frac{N_{t-1}}{N_0} \approx \xi_t. \qquad (8)$$

Since the $\xi$'s are independent and identically distributed random variables, $\varphi_1, \varphi_2, \dots, \varphi_t$ are independent of each other. Thus the probability that a topic stops trending in a time interval $s$, where $s$ is large, is equal to the probability that $\varphi_s$ is lower than the threshold $\theta$, which can be written as:

$$p = \Pr(\varphi_s < \theta) = \Pr(\log \varphi_s < \log \theta) = \Pr(\xi_s < \log \theta) = F(\log \theta), \qquad (9)$$

where $F(x)$ is the cumulative distribution function of the random variable $\xi$. Given that distribution, we can determine the threshold for survival as:

$$\theta = e^{F^{-1}(p)}. \qquad (10)$$

From the independence of the $\varphi$'s, the duration or lifetime of a trending topic, denoted by $L$, follows a geometric distribution, which in the continuum case becomes the exponential distribution.
Thus, the probability that a topic survives the first $k$ time intervals and fails in the $(k+1)$-th interval, given that $k$ is large, can be written as:

$$\Pr(L = k) = (1-p)^k\, p. \qquad (11)$$

The expected length of the trending duration $L$ is thus:

$$\langle L \rangle = \sum_{k=0}^{\infty} (1-p)^k\, p \cdot k = \frac{1}{p} - 1 = \frac{1}{F(\log \theta)} - 1. \qquad (12)$$

We considered trending durations for topics that trended for more than 10 timestamps on Twitter. The comparison between the geometric distribution and the observed trending durations is shown in Fig. 7. In Fig. 8, plotting the trending duration against density on a logarithmic scale suggests an exponential function for the trending time; the $R^2$ of the fit is 0.9112.
Trend-setters
We consider two types of people who contribute to trending topics: the sources who begin trends, and the propagators who are responsible for those trends propagating through the network due to the nature of the content they share.
Sources
We examined the users who initiate the most trending topics. First, for each topic we extracted the first 100 users who tweeted about it prior to its trending. The distribution of these authors over topics is a power law, as shown in Fig. 9. This shows that there are a few authors who contribute to the creation of many different topics. To focus on these multi-tasking users, we considered only the authors who contributed to at least five trending topics. When we consider people who are influential in starting trends on Twitter, we can hypothesize two attributes: a high frequency of activity for these users, as well as a large follower network. To evaluate these hypotheses we measured these two attributes for these authors over these months. Frequency: The tweet-rate can effectively measure the frequency of participation of a Twitter user. The mean tweet-rate for these users was 26.38 tweets per day, indicating that these authors tweeted fairly regularly.
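The geometric-lifetime prediction of Eqs. (11)-(12) is easy to verify numerically. A sketch with an assumed per-interval stopping probability $p$ (the empirical $p$ would come from $F(\log\theta)$):

```python
import numpy as np

rng = np.random.default_rng(3)

# If a long trend stops in each interval independently with probability p,
# its lifetime L follows the geometric law of Eq. (11), with expected
# duration <L> = 1/p - 1 from Eq. (12). p = 0.05 is an assumed value.
p = 0.05
L = rng.geometric(p, size=100_000) - 1   # intervals survived before stopping
expected = 1.0 / p - 1.0                 # Eq. (12): 19 intervals
print(L.mean(), expected)
```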
However, when we computed the correlation of the tweet-rate with the number of trending topics that they contributed to, the result was a weak positive correlation of 0.22. This indicates that although people who tweet a lot do tend to contribute to the trending topics, the rate by itself does not strongly determine the popularity of the topic. In fact, they happen to tweet on a variety of topics, many of which do not become trends. We found that a large number of them tended to tweet frequently about sporting events and the players and teams involved. When some sports-related topics begin to trend, these users are among their early initiators, by virtue of their high tweet-rate. This suggests that the nature of the content plays a strong role in determining whether a topic trends, rather than the users who initiate it. Audience: When we looked at the number of followers for these authors, we were surprised to find that it was almost completely uncorrelated (correlation of 0.01) with the number of trending topics, although the mean is fairly high (2481). The absence of correlation indicates that a large audience by itself is not what causes a user's topics to trend. Domination: We found that in some cases almost all the retweets for a topic are credited to one single user. These are topics that are entirely based on the comments by that user. They can thus be said to be dominating the topic. The domination-ratio for a topic can be defined as the fraction of the retweets of that topic that can be attributed to the largest contributing user for that topic. However, we observed a negative correlation of $-0.19$ between the domination-ratio of a topic and its trending duration. This means that topics revolving around a particular author's tweets do not typically last long. This is consistent with the earlier observed strong correlation between the number of authors and the trend duration. Hence, for a topic to trend for a long time, it requires many people to contribute actively to it.
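The domination-ratio just defined can be sketched as a one-liner over a topic's retweet author list; the author lists below are hypothetical examples:

```python
from collections import Counter

# Domination-ratio of a topic: fraction of its retweets attributable to
# the single largest contributing user.
def domination_ratio(retweet_authors):
    counts = Counter(retweet_authors)
    return max(counts.values()) / len(retweet_authors)

print(domination_ratio(["a", "a", "a", "b"]))  # 0.75: one user dominates
print(domination_ratio(["a", "b", "c", "d"]))  # 0.25: broad participation
```

A high ratio means the topic revolves around one author's tweets, which, per the negative correlation above, tends to shorten its trending duration.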
Influence: On the other hand, we observed that there were authors who contributed actively to many topics and were retweeted significantly in many of them. For each author, we computed the ratio of retweets to topics, which we call the retweet-ratio. The list of influential authors who are retweeted in at least 50 trending topics is shown in Table 1. We find that a large portion of these authors are popular news sources such as CNN, the New York Times and ESPN. This illustrates that social media, far from being an alternate source of news, functions more as a filter and an amplifier for interesting news from traditional media.

Conclusions

To study the dynamics of trends in social media, we have conducted a comprehensive study of trending topics on Twitter. We first derived a stochastic model to explain the growth of trending topics and showed that it leads to a log-normal distribution, which is validated by our empirical results. We also found that most topics do not trend for long, and that for those that are long-trending, their persistence obeys a geometric distribution. When we considered the impact of the users of the network, we discovered that the number of followers and the tweet-rate of users are not the attributes that cause trends. What proves to be more important in determining trends is retweeting by other users, which is related more to the content being shared than to the attributes of the users. Furthermore, we found that the content that trended was largely news from traditional media sources, which is then amplified by repeated retweets on Twitter to generate trends.
9,462
1102.1402
2950593226
Social media generates a prodigious wealth of real-time content at an incessant rate. From all the content that people create and share, only a few topics manage to attract enough attention to rise to the top and become temporal trends which are displayed to users. The question of what factors cause the formation and persistence of trends is an important one that has not been answered yet. In this paper, we conduct an intensive study of trending topics on Twitter and provide a theoretical basis for the formation, persistence and decay of trends. We also demonstrate empirically how factors such as user activity and number of followers do not contribute strongly to trend creation and its propagation. In fact, we find that the resonance of the content with the users of the social network plays a major role in causing trends.
Yang and Leskovec @cite_7 examined patterns of temporal behavior for hashtags in Twitter. They presented a stable time-series clustering algorithm and demonstrated the common temporal patterns that tweets containing hashtags follow. There have also been earlier studies focused on social influence and propagation. @cite_6 studied the problem of identifying influential bloggers in the blogosphere and discovered that the most influential bloggers were not necessarily the most active. @cite_10 distinguished the effects of homophily from influence as motivators for propagation. As to the study of influence within Twitter, @cite_3 performed a comparison of three different measures of influence: indegree, retweets, and user mentions. They discovered that while retweets and mentions correlated well with each other, the indegree of users did not correlate well with the other two measures. Based on this, they hypothesized that the number of followers may not be a good measure of influence. Recently, Romero and others @cite_5 introduced a novel influence measure that takes into account the passivity of the audience in the social network. They developed an iterative algorithm to compute influence in the style of the HITS algorithm and empirically demonstrated that the number of followers is a poor measure of influence.
{ "abstract": [ "Online content exhibits rich temporal dynamics, and diverse realtime user generated content further intensifies this process. However, temporal patterns by which online content grows and fades over time, and by which different pieces of content compete for attention remain largely unexplored. We study temporal patterns associated with online content and how the content's popularity grows and fades over time. The attention that content receives on the Web varies depending on many factors and occurs on very different time scales and at different resolutions. In order to uncover the temporal dynamics of online content we formulate a time series clustering problem using a similarity metric that is invariant to scaling and shifting. We develop the K-Spectral Centroid (K-SC) clustering algorithm that effectively finds cluster centroids with our similarity measure. By applying an adaptive wavelet-based incremental approach to clustering, we scale K-SC to large data sets. We demonstrate our approach on two massive datasets: a set of 580 million Tweets, and a set of 170 million blog posts and news media articles. We find that K-SC outperforms the K-means clustering algorithm in finding distinct shapes of time series. Our analysis shows that there are six main temporal shapes of attention of online content. We also present a simple model that reliably predicts the shape of attention by using information about only a small number of participants. Our analyses offer insight into common temporal patterns of the content on theWeb and broaden the understanding of the dynamics of human attention.", "Directed links in social media could represent anything from intimate friendships to common interests, or even a passion for breaking news or celebrity gossip. Such directed links determine the flow of information and hence indicate a user's influence on others — a concept that is crucial in sociology and viral marketing. 
In this paper, using a large amount of data collected from Twitter, we present an in-depth comparison of three measures of influence: indegree, retweets, and mentions. Based on these measures, we investigate the dynamics of user influence across topics and time. We make several interesting observations. First, popular users who have high indegree are not necessarily influential in terms of spawning retweets or mentions. Second, most influential users can hold significant influence over a variety of topics. Third, influence is not gained spontaneously or accidentally, but through concerted effort such as limiting tweets to a single topic. We believe that these findings provide new insights for viral marketing and suggest that topological measures such as indegree alone reveals very little about the influence of a user.", "Blogging becomes a popular way for a Web user to publish information on the Web. Bloggers write blog posts, share their likes and dislikes, voice their opinions, provide suggestions, report news, and form groups in Blogosphere. Bloggers form their virtual communities of similar interests. Activities happened in Blogosphere affect the external world. One way to understand the development on Blogosphere is to find influential blog sites. There are many non-influential blog sites which form the \"the long tail\". Regardless of a blog site being influential or not, there are influential bloggers. Inspired by the high impact of the influentials in a physical community, we study a novel problem of identifying influential bloggers at a blog site. Active bloggers are not necessarily influential. Influential bloggers can impact fellow bloggers in various ways. 
In this paper, we discuss the challenges of identifying influential bloggers, investigate what constitutes influential bloggers, present a preliminary model attempting to quantify an influential blogger, and pave the way for building a robust model that allows for finding various types of the influentials. To illustrate these issues, we conduct experiments with data from a real-world blog site, evaluate multi-facets of the problem of identifying influential bloggers, and discuss unique challenges. We conclude with interesting findings and future work", "The ever-increasing amount of information flowing through Social Media forces the members of these networks to compete for attention and influence by relying on other people to spread their message. A large study of information propagation within Twitter reveals that the majority of users act as passive information consumers and do not forward the content to the network. Therefore, in order for individuals to become influential they must not only obtain attention and thus be popular, but also overcome user passivity. We propose an algorithm that determines the influence and passivity of users based on their information forwarding activity. An evaluation performed with a 2.5 million user dataset shows that our influence measure is a good predictor of URL clicks, outperforming several other measures that do not explicitly take user passivity into account. We demonstrate that high popularity does not necessarily imply high influence and vice-versa.", "Node characteristics and behaviors are often correlated with the structure of social networks over time. While evidence of this type of assortative mixing and temporal clustering of behaviors among linked nodes is used to support claims of peer influence and social contagion in networks, homophily may also explain such evidence. 
Here we develop a dynamic matched sample estimation framework to distinguish influence and homophily effects in dynamic networks, and we apply this framework to a global instant messaging network of 27.4 million users, using data on the day-by-day adoption of a mobile service application and users' longitudinal behavioral, demographic, and geographic data. We find that previous methods overestimate peer influence in product adoption decisions in this network by 300–700 , and that homophily explains >50 of the perceived behavioral contagion. These findings and methods are essential to both our understanding of the mechanisms that drive contagions in networks and our knowledge of how to propagate or combat them in domains as diverse as epidemiology, marketing, development economics, and public health." ], "cite_N": [ "@cite_7", "@cite_3", "@cite_6", "@cite_5", "@cite_10" ], "mid": [ "2112056172", "1814023381", "2098479684", "2976207093", "2149910108" ] }
Trends in Social Media: Persistence and Decay
Social media is growing at an explosive rate, with millions of people all over the world generating and sharing content on a scale barely imaginable a few years ago. This has resulted in massive participation, with countless updates, opinions, news items, comments and product reviews being constantly posted and discussed on social web sites such as Facebook, Digg and Twitter, to name a few. This widespread generation and consumption of content has created an extremely competitive online environment where different types of content vie with each other for the scarce attention of the user community. In spite of the seemingly chaotic fashion in which all these interactions take place, certain topics manage to attract an inordinate amount of attention, thus bubbling to the top in terms of popularity. Through their visibility, these popular topics contribute to the collective awareness of what is trending, and at times can also affect the public agenda of the community. At present there is no clear picture of what causes these topics to become extremely popular, nor of why some persist in the public eye longer than others. There is considerable evidence that one aspect that causes topics to decay over time is their novelty [11]. Another factor responsible for their decay is the competitive nature of the medium: as content starts propagating through a social network, it can usurp the positions of earlier topics of interest, and due to the limited attention of users it is soon rendered invisible by newer content. Yet another aspect responsible for the popularity of certain topics is the influence of members of the network on the propagation of content. Some users generate content that resonates very strongly with their followers, causing the content to propagate and gain popularity [9].
The source of that content can be standard media outlets or users who generate topics that eventually become part of the trends and capture the attention of large communities. In either case, the fact that a small set of topics become part of the trending set means that they will capture the attention of a large audience for a short time, thus contributing in some measure to the public agenda. When topics originate in media outlets, the social medium acts as a filter and amplifier of what the standard media produces, and thus contributes to the agenda-setting mechanisms that have been thoroughly studied for more than three decades [7]. In this paper, we study trending topics on Twitter, an immensely popular microblogging network on which millions of users create and propagate an enormous, steady stream of content on a daily basis. The trending topics, which are shown on the main website, represent those pieces of content that bubble to the surface on Twitter owing to frequent mentions by the community; they can thus be equated with crowdsourced popularity. We determine the factors that contribute to the creation and evolution of these trends, as they provide insight into the complex interactions that lead to the popularity and persistence of certain topics on Twitter, while most others fail to catch on and are lost in the flow. We first analyze the distribution of the number of tweets across trending topics. We observe that they are characterized by a strong log-normal distribution, similar to that found in other networks such as Digg, and generated by a stochastic multiplicative process [11]. We also find that the decay function for the tweets is mostly linear. Subsequently, we study the persistence of the trends to determine which topics last long at the top. Our analysis reveals that few topics last for long times, while most break fairly quickly, on the order of 20-40 minutes.
Finally, we look at the impact of users on trend persistence times within Twitter. We find that traditional notions of user influence, such as the frequency of posting and the number of followers, are not the main drivers of trends, as previously thought. Rather, long trends are characterized by the resonating nature of the content, which is found to arise mainly from traditional media sources. We observe that social media behaves as a selective amplifier for the content generated by traditional media, with chains of retweets by many users leading to the observed trends.

Twitter

Twitter is an extremely popular online microblogging service that has gained a very large user following, consisting of close to 200 million users. The Twitter graph is a directed social network, where each user chooses to follow certain other users. Each user submits periodic status updates, known as tweets, that consist of short messages limited to 140 characters. These updates typically consist of personal information about the users, news, or links to content such as images, video and articles. The posts made by a user are automatically displayed on the user's profile page, as well as shown to his followers. A retweet is a post originally made by one user that is forwarded by another user. Retweets are useful for propagating interesting posts and links through the Twitter community. Twitter has attracted much attention from corporations due to the immense potential it provides for viral marketing. Due to its huge reach, Twitter is increasingly used by news organizations to disseminate news updates, which are then filtered and commented on by the Twitter community. A number of businesses and organizations are using Twitter or similar micro-blogging services to advertise products and disseminate information to stockholders.
Twitter Trends Data

Trending topics are presented as a list by Twitter on their main Twitter.com site, and are selected by an algorithm proprietary to the service. They mostly consist of two- to three-word expressions, and we can assume with high confidence that they are snippets that appear more frequently in the most recent stream of tweets than one would expect from a document term frequency analysis such as TFIDF. The list of trending topics is updated every few minutes as new topics become popular. Twitter provides a Search API for extracting tweets containing particular keywords. To obtain the dataset of trends for this study, we repeatedly used the API in two stages. First, we collected the trending topics by issuing an API query every 20 minutes. Second, for each trending topic, we used the Search API to collect all the tweets mentioning this topic over the past 20 minutes. For each tweet, we collected the author, the text of the tweet and the time it was posted. Using this procedure, we obtained 16.32 million tweets on 3361 different topics over a course of 40 days in Sep-Oct 2010. We picked 20 minutes as the duration of a timestamp after evaluating different time lengths, to optimize the discovery of new trends while still capturing all trends; this matters because Twitter only allows 1500 tweets per search query. We found that with 20-minute intervals we were able to capture all the tweets for the trending topics efficiently. We noticed that many topics become trends again after they stop trending according to the Twitter trend algorithm. We therefore considered these trends as separate sequences: it is very likely that the spreading mechanism of trends has a strong time component, with an initial increase and a trailing decline, and once a topic stops trending, it should be considered as new when it reappears among the users who become aware of it later.
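The two-stage collection loop described above can be sketched as follows; `get_trending`, `search_tweets` and `store` are hypothetical placeholders for the real Twitter API calls and storage layer, not actual endpoints:

```python
import time

POLL_INTERVAL = 20 * 60  # 20 minutes, matching the collection setup

def collect_trends(get_trending, search_tweets, store, rounds,
                   interval=POLL_INTERVAL):
    """Two-stage collection loop: each round, fetch the current trending
    topics, then pull the recent tweets mentioning each one. The three
    callables stand in for the real API client and storage backend."""
    for _ in range(rounds):
        for topic in get_trending():
            for tweet in search_tweets(topic, window_seconds=interval):
                store(topic, tweet)  # e.g. (author, text, timestamp)
        time.sleep(interval)
```

In practice each callable would wrap an authenticated HTTP request and respect the per-query tweet cap mentioned above.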
This procedure split the 3468 originally collected trend titles into 6084 individual trend sequences.

Distribution of tweets

We measured the number of tweets that each topic gets in 20-minute intervals, from the time the topic starts trending until it stops, as described earlier. From this we can sum up the tweet counts over time to obtain the cumulative number of tweets N_q(t_i) of topic q for any time frame t_i:

N_q(t_i) = Σ_{τ=1}^{i} n_q(t_τ), (1)

where n_q(t) is the number of tweets on topic q in time interval t. Since it is plausible to assume that initially popular topics will stay popular later on in time as well, we can calculate the ratios C_q(t_i, t_j) = N_q(t_i)/N_q(t_j) for topic q and time frames t_i and t_j. Figure 1(a) shows the distribution of the C_q(t_i, t_j) over all topics for four arbitrarily chosen pairs of time frames (such that t_i > t_j, with t_i relatively large and t_j small). These figures immediately suggest that the ratios C_q(t_i, t_j) are distributed according to log-normal distributions: the horizontal axes are logarithmically rescaled, and the histograms appear to be Gaussian functions. To check this assumption, consider Fig. 1(b), where we show the Q-Q plots of the distributions of Fig. 1(a) in comparison to normal distributions. We can observe that the (logarithmically rescaled) empirical distributions exhibit normality to a high degree for later time frames, with the exception of the high end of the distributions, where 10-15 outliers occur more frequently than would be expected for a normal distribution. Log-normals arise as a result of multiplicative growth processes with noise [8].
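Why multiplicative growth yields log-normal ratios can be seen in a short simulation: the log of C_q(t_i, t_j) is a sum of i.i.d. terms, so by the central limit theorem it concentrates like a Gaussian over topics. A minimal sketch with synthetic noise (σ and the step count are illustrative, not fitted to the Twitter data):

```python
import math
import random
import statistics

def log_ratio(steps, sigma, rng):
    """log C_q(t_i, t_j) for one synthetic topic: each interval multiplies
    the tweet count by an independent positive noise factor, so the
    log-ratio is a sum of i.i.d. terms (CLT -> approximately normal)."""
    return sum(math.log(max(1e-9, rng.gauss(1.0, sigma)))
               for _ in range(steps))

def log_ratio_spread(topics=20_000, steps=30, sigma=0.1, seed=1):
    """Mean and standard deviation of the log-ratios over many topics."""
    rng = random.Random(seed)
    logs = [log_ratio(steps, sigma, rng) for _ in range(topics)]
    return statistics.mean(logs), statistics.stdev(logs)
```

For small σ the mean is close to −steps·σ²/2 and the spread close to σ·√steps, the signature of a log-normal built from many small multiplicative shocks.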
In our case, if N_q(t) is the number of tweets for a given topic q at time t, then the dynamics that leads to a log-normally distributed N_q(t) over q can be written as:

N_q(t) = [1 + γ(t) ξ(t)] N_q(t − 1), (2)

where the random variables ξ(t) are positive, independent and identically distributed as a function of t with mean 1 and variance σ². Note that time here is measured in discrete steps (t − 1 denotes the time step preceding t), in accordance with our measurement setup. γ(t) is introduced to account for novelty decay [11]: we expect topics to initially increase in popularity but to slow down as they become obsolete or known to most users. Since γ(t) consists of decreasing positive numbers, the growth of N_q(t) slows with time. To see that Eq. (2) leads to a log-normal distribution of N_q(t), we first expand the recursion:

N_q(t) = Π_{s=1}^{t} [1 + γ(s) ξ(s)] · N_q(0). (3)

Here N_q(0) is the initial number of tweets, in the earliest time step. Taking the logarithm of both sides of Eq. (3),

ln N_q(t) − ln N_q(0) = Σ_{s=1}^{t} ln [1 + γ(s) ξ(s)]. (4)

The RHS of Eq. (4) is the sum of a large number of random variables. The central limit theorem thus states that if the random variables are independent and identically distributed, the sum asymptotically approximates a normal distribution. The i.i.d. condition holds exactly for the ξ(s) terms, and it can be shown that in the presence of the discounting factors (if the rate of decline is not too fast), the resulting distribution is still normal [11]. In other words, this model predicts that ln [N_q(t)/N_q(0)] will be distributed normally over q when fixing t. These quantities were shown in Fig. 1 above: essentially, if the difference between the two times at which we take the ratio is big enough, the log-normal property is observed. The intuitive explanation for the multiplicative model of Eq.
(2) is that at each time step the number of new tweets on a topic is a multiple of the tweets that we already have. The number of past tweets, in turn, is a proxy for the number of users who are aware of the topic up to that point. These users discuss the topic on different forums, including Twitter, essentially creating an effective network through which the topic spreads. As more users talk about a particular topic, many others are likely to learn about it; this gives the spreading its multiplicative nature, and the noise term accounts for the stochasticity of the process. On the other hand, the monotonically decreasing γ(t) characterizes the decay in timeliness and novelty of the topic as it slowly becomes obsolete and known to most users, and guarantees that N_q(t) does not grow unbounded [11].

[Fig. 2 caption: The log-log plot shows that γ(t) decreases in a power-law fashion, with an exponent measured to be exactly −1 (a linear regression on the logarithmically transformed data fits with R² = 0.98). The fit to determine the exponent was performed in the range of the solid line next to the function, which also shows the result of the fit, shifted lower for easy comparison. The inset displays the same γ(t) function on standard linear scales.]

To measure the functional form of γ(t), we observe that the expected value of the noise term ξ(t) in Eq. (2) is 1. Thus averaging the fractions between consecutive tweet counts over topics yields γ(t):

γ(t) = ⟨ N_q(t) / N_q(t − 1) ⟩_q − 1. (5)

The experimental values of γ(t) in time are shown in Fig. 2. It is interesting to notice that γ(t) follows a power-law decay very precisely, with an exponent of −1, which means that γ(t) ∼ 1/t.

The growth of tweets over time

The interesting fact about the decay function γ(t) = 1/t is that it results in a linear increase in the total number of tweets for a topic over time. To see this, we can again consider Eq.
(4), and approximate the discrete sum of random variables with an integral of the summand, substituting the noise term with its expectation value, ⟨ξ(t)⟩ = 1 (this is valid if γ(t) changes slowly). These approximations yield:

ln [N_q(t) / N_q(0)] ≈ ∫_{τ=0}^{t} ln [1 + γ(τ)] dτ ≈ ∫_{τ=0}^{t} (1/τ) dτ = ln t. (6)

In simplifying the logarithm above, we used the Taylor expansion ln(1 + x) ≈ x for small x, together with the experimental finding that γ(τ) = 1/τ. It can be immediately seen that N_q(t) ≈ N_q(0) · t in the range of t where γ(t) is inversely proportional to t. In fact, it can easily be proven that no functional form of γ(t) other than γ(t) ∼ 1/t would yield a linear increase in N_q(t) (assuming that the above approximations are valid for the stochastic discrete case). This suggests that the trending topics featured on Twitter increase their tweet counts linearly in time, and that their dynamics is captured by the multiplicative noise model discussed above. To check this, we first plotted a few representative examples of the cumulative number of tweets for a few topics in Fig. 3. It is apparent that all the topics (selected randomly) show an approximately linear initial growth in the number of tweets. We also checked whether this is true in general. Figure 4 shows the second discrete derivative of the total number of tweets, which we expect to be 0 if the trend lines are linear on average. A positive second derivative would mean that the growth is superlinear, while a negative one suggests that it is sublinear. We point out that before taking the average of all second derivatives over the different topics in time, we divided the derivatives by the average of the total number of tweets of the given topics.
We did this so as to account for the large difference between the ranges of the number of tweets across topics, since simple averaging without prior normalization would likely bias the results towards topics with large tweet counts and their fluctuations. The averages are shown in Fig. 4.

[Fig. 3 caption: The number of total tweets on topics in the first 48 hours, normalized to 1 so that they can be shown on the same plot. The randomly selected topics were (from left to right): "Earnings", "#pulpopaul", "Sheen", "Deuces Remix", "Isaacs", "#gmp24", and "Mac App".]

We observe from the figure that when we consider all topics there is a very slight sublinear growth regime right after the topic starts trending, which then becomes mostly linear, as the derivative data is distributed around 0. If we consider only very popular topics (those on the trends site for more than 4 hours), we observe an even better linear trend. One reason for this may be that topics that trend only for short periods exhibit a concave curvature, since they lose popularity quickly and are removed from among the Twitter trends by the system early on. These results suggest that once a topic is highlighted as a trend on a very visible website, its growth becomes linear in time. The reason may be that as more and more visitors come to the site and see the trending topics, there is a constant probability that they will also talk and tweet about them. This is in contrast to scenarios where the primary channel of information flow is more informal. In that case we expect the growth to first show an accelerating phase and then slow down to the point where no one talks about the topic any more. Content that spreads through a social network without external "driving" will follow such a course, as has been shown elsewhere [10,12]. An important reason to study trending topics on Twitter is to understand why some of them remain at the top while others dissipate quickly.
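The linear-growth prediction and the second-derivative check above can both be reproduced on synthetic trajectories: under the multiplicative model with γ(t) = 1/t, the expected tweet count grows linearly, and the normalized second differences scatter around zero. A sketch with illustrative parameters (not the measured tweet series):

```python
import random

def simulate_topic(t_max, sigma=0.2, n0=100.0, seed=0):
    """Multiplicative model N(t) = [1 + gamma(t) xi(t)] N(t-1) with
    novelty decay gamma(t) = 1/t; xi is positive noise with mean ~1.
    Returns the trajectory N(0), N(1), ..., N(t_max)."""
    rng = random.Random(seed)
    n, traj = n0, [n0]
    for t in range(1, t_max + 1):
        xi = max(0.0, rng.gauss(1.0, sigma))  # positive noise, mean ~1
        n *= 1.0 + xi / t                     # gamma(t) = 1/t
        traj.append(n)
    return traj

def normalized_second_derivatives(counts):
    """Second discrete differences divided by the series mean, so topics
    with very different tweet volumes are comparable."""
    mean = sum(counts) / len(counts)
    return [(counts[i + 1] - 2 * counts[i] + counts[i - 1]) / mean
            for i in range(1, len(counts) - 1)]
```

Since E[ξ] = 1, the expectation telescopes to E[N(t)] = n0 · Π(1 + 1/s) = n0 · (t + 1), i.e. exactly linear growth, and a linear trajectory has (near-)zero normalized second derivatives.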
To see the general pattern of behavior on Twitter, we examined the lifetimes of the topics that trended in our study. From Fig 5(a) we can see that while most topics occur continuously, around 34% of topics appear in more than one sequence. This means that they stop trending for a certain period of time before beginning to trend again. One reason for this behavior may be time zones: for instance, if a topic is a piece of news relevant to North American readers, a trend may first appear in the Eastern time zone and 3 hours later in the Pacific time zone. Likewise, a trend may return the next morning if it was trending the previous evening, when more users check their accounts again after the night. Given that many topics do not occur continuously, we examined the distribution of sequence lengths for all topics. Fig 5(b) shows the lengths of the topic sequences. This distribution follows a power law, which means that most topic sequences are short while a few topics last for a very long time. This could be due to the many topics competing for attention: the topics that make it to the top (the trend list) last only a short time. In many cases, however, topics return to trend for more time, which is captured by the number of sequences shown in Fig 5(a), as mentioned.

Relation to authors and activity

We first examine the authors who tweet about given trending topics, to see whether the authors change over time or whether it is the same people who keep tweeting to cause trends. When we computed the correlation between the number of unique authors for a topic and the duration (number of timestamps) that the topic trends, we noticed that the correlation is very strong (0.80). This indicates that as the number of authors increases so does the lifetime, suggesting that the propagation through the network causes the topic to trend.
To measure the impact of authors, we compute for each topic the active-ratio a_q as:

a_q = (Number of Tweets) / (Number of Unique Authors)

The correlation of the active-ratio with trending duration is shown in Fig 6. We observe that the active-ratio quickly saturates and varies little with time for any given topic. Since the authors change over time as the topic propagates, the correlation between the number of tweets and the number of authors is high (0.83).
q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q qq q q qq q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q qq q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qqq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q qq q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q qq q q q q q q q q q q q q q q q q qq q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q qq q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q qq q q q qq q qq q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qqq qq q q q q q 
q q q q q q q q q qq q q q q qq q qq q q q q q q q qq q q q q q q q qq q qq q q q q q q q q q q qq q qq q q q q q qq q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q qq q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q qq q q q q q q q q qq q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q qq q q q q q q q q q q q q qq q q qq q q q q q qq q q q qq q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q qq q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q qq q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq qqq q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq qq q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q qq q q q q q q q q q q qq q q q q q q q q q q q q 
[Figure 6 (scatter plot; x-axis: Trending Duration (mins)): Relation between the active-ratio and the length of the trend across all topics, showing that the active-ratio does not vary significantly with time.]

Persistence of long trending topics

On Twitter each topic competes with the others to survive on the trending page. As we now show, for the long-trending ones we can derive an expression for the distribution of their average length. We assume that if the relative growth rate of tweets, denoted by φ_t = N_t / N_{t−1}, falls below a certain threshold θ, the topic stops trending. When we consider long-trending topics, as they grow in time they overcome the initial novelty decay, and the γ term in equation (3) becomes fairly constant. So we can measure the change over time using only the random variable ξ as:

log φ_t = log(N_t / N_{t−1}) = log(N_t / N_0) − log(N_{t−1} / N_0) = ξ_t   (8)

Since the ξ_t are independent and identically distributed random variables, φ_1, φ_2, ..., φ_t are independent of each other. Thus the probability that a topic stops trending in a time interval s, where s is large, is equal to the probability that φ_s is lower than the threshold θ, which can be written as:

p = Pr(φ_s < θ) = Pr(log φ_s < log θ) = Pr(ξ_s < log θ) = F(log θ)   (9)

where F(x) is the cumulative distribution function of the random variable ξ. Given that distribution we can determine the threshold for survival as:

θ = e^{F^{−1}(p)}   (10)

From the independence of the φ_t, the duration or lifetime of a trending topic, denoted by L, follows a geometric distribution, which in the continuous case becomes the exponential distribution.
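This survival process is easy to check numerically. The sketch below assumes, purely for illustration, that the ξ_t are i.i.d. Gaussian (so F is a normal CDF); the parameter values mu, sigma, and log_theta are hypothetical. It computes p = F(log θ) and simulates topic lifetimes, whose mean should match the geometric-distribution prediction 1/p − 1.

```python
import math
import random

def normal_cdf(x, mu=0.0, sigma=1.0):
    """F(x): CDF of a Gaussian, via the error function (assumed form of F)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def simulate_lifetime(log_theta, mu, sigma, rng):
    """Intervals survived until log(phi_s) = xi_s first drops below log(theta)."""
    k = 0
    while rng.gauss(mu, sigma) >= log_theta:
        k += 1
    return k

rng = random.Random(42)
mu, sigma = 0.1, 0.5        # hypothetical parameters of xi
log_theta = -0.8            # hypothetical survival threshold log(theta)

p = normal_cdf(log_theta, mu, sigma)   # p = F(log theta)
predicted = 1.0 / p - 1.0              # expected lifetime 1/p - 1

lifetimes = [simulate_lifetime(log_theta, mu, sigma, rng) for _ in range(20000)]
simulated = sum(lifetimes) / len(lifetimes)
print(f"p = {p:.4f}, predicted <L> = {predicted:.2f}, simulated <L> = {simulated:.2f}")
```

With these illustrative parameters the simulated mean lifetime agrees closely with 1/p − 1, as the geometric model predicts.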
Thus, the probability that a topic survives the first k time intervals and fails in the (k + 1)-st interval, given that k is large, can be written as:

Pr(L = k) = (1 − p)^k p   (11)

The expected length of the trending duration L would thus be:

<L> = Σ_{k=0}^{∞} k (1 − p)^k p = 1/p − 1 = 1/F(log θ) − 1   (12)

We considered trending durations for topics that trended for more than 10 timestamps on Twitter. The comparison between the geometric distribution and the empirical trending durations is shown in Fig 7. In Fig 8, plotting the trending duration against its density on a logarithmic scale suggests an exponential function for the trending time. The R-square of the fit is 0.9112.

Trend-setters

We consider two types of people who contribute to trending topics: the sources, who begin trends, and the propagators, who are responsible for those trends propagating through the network due to the nature of the content they share.

Sources

We examined the users who initiate the most trending topics. First, for each topic we extracted the first 100 users who tweeted about it prior to its trending. The distribution of these authors over topics follows a power law, as shown in Fig 9. This shows that a few authors contribute to the creation of many different topics. To focus on these multi-tasking users, we considered only the authors who contributed to at least five trending topics. When we consider people who are influential in starting trends on Twitter, we can hypothesize two attributes: a high frequency of activity and a large follower network. To evaluate these hypotheses we measured both attributes for these authors over these months.

Frequency: The tweet-rate effectively measures the frequency of participation of a Twitter user. The mean tweet-rate for these users was 26.38 tweets per day, indicating that these authors tweeted fairly regularly.
However, when we computed the correlation of the tweet-rate with the number of trending topics that they contributed to, the result was a weak positive correlation of 0.22. This indicates that although people who tweet a lot do tend to contribute to the trending topics, the rate by itself does not strongly determine the popularity of the topic. In fact, these users happen to tweet on a variety of topics, many of which do not become trends. We found that a large number of them tended to tweet frequently about sporting events and the players and teams involved. When some sports-related topics begin to trend, these users are among their early initiators, by virtue of their high tweet-rate. This suggests that the nature of the content plays a stronger role in determining whether a topic trends than the users who initiate it.

Audience: When we looked at the number of followers for these authors, we were surprised to find that it was almost completely uncorrelated (correlation of 0.01) with the number of trending topics, although the mean is fairly high (2481). The absence of correlation indicates that a large follower network is not what enables these users to initiate trends.

Domination: We found that in some cases, almost all the retweets for a topic are credited to one single user. These are topics that are entirely based on the comments by that user, who can thus be said to dominate the topic. The domination-ratio for a topic can be defined as the fraction of the retweets of that topic that can be attributed to its largest contributing user. However, we observed a negative correlation of −0.19 between the domination-ratio of a topic and its trending duration. This means that topics revolving around a particular author's tweets do not typically last long. This is consistent with the strong correlation observed earlier between the number of authors and the trend duration. Hence, for a topic to trend for a long time, it requires many people contributing actively to it.
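The domination-ratio just defined is straightforward to compute from a topic's retweet log. The sketch below uses hypothetical author lists (one entry per retweet) purely for illustration:

```python
from collections import Counter

def domination_ratio(retweet_authors):
    """Fraction of a topic's retweets credited to its largest contributing user."""
    counts = Counter(retweet_authors)
    return max(counts.values()) / sum(counts.values())

# Hypothetical retweet logs: one author id per retweet of a topic.
single_voice = ["a"] * 90 + ["b"] * 5 + ["c"] * 5   # dominated by one user
crowd_driven = ["a", "b", "c", "d", "e"] * 20       # many active contributors

print(domination_ratio(single_voice))  # 0.9
print(domination_ratio(crowd_driven))  # 0.2
```

Under the negative correlation reported above, a topic resembling `single_voice` would be expected to trend for a shorter time than one resembling `crowd_driven`.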
Influence: On the other hand, we observed that there were authors who contributed actively to many topics and were retweeted significantly in many of them. For each author, we computed the ratio of retweets to topics, which we call the retweet-ratio. The list of influential authors who are retweeted in at least 50 trending topics is shown in Table 1. We find that a large portion of these authors are popular news sources such as CNN, the New York Times and ESPN. This illustrates that social media, far from being an alternate source of news, functions more as a filter and an amplifier for interesting news from traditional media.

Conclusions

To study the dynamics of trends in social media, we have conducted a comprehensive study of trending topics on Twitter. We first derived a stochastic model to explain the growth of trending topics and showed that it leads to a lognormal distribution, which is validated by our empirical results. We also found that most topics do not trend for long, and for those that are long-trending, their persistence obeys a geometric distribution. When we considered the impact of the users of the network, we discovered that the number of followers and the tweet-rate of users are not the attributes that cause trends. What proves to be more important in determining trends is the retweets by other users, which depend more on the content being shared than on the attributes of the users. Furthermore, we found that the content that trended was largely news from traditional media sources, which is then amplified by repeated retweets on Twitter to generate trends.
9,462
1102.0467
2950424720
The aim of rendezvous in a graph is the meeting of two mobile agents at some node of an unknown anonymous connected graph. In this paper, we focus on rendezvous in trees, and, analogously to the efforts that have been made for solving the exploration problem with compact automata, we study the size of memory of mobile agents that permits solving the rendezvous problem deterministically. We assume that the agents are identical and move in synchronous rounds. We first show that if the delay between the starting times of the agents is arbitrary, then the lower bound on memory required for rendezvous is Omega(log n) bits, even for the line of length n. This lower bound meets a previously known upper bound of O(log n) bits for rendezvous in arbitrary graphs of size at most n. Our main result is a proof that the amount of memory needed for rendezvous with simultaneous start depends essentially on the number L of leaves of the tree, and is exponentially less impacted by the number n of nodes. Indeed, we present two identical agents with O(log L + log log n) bits of memory that solve the rendezvous problem in all trees with at most n nodes and at most L leaves. Hence, for the class of trees with polylogarithmically many leaves, there is an exponential gap in minimum memory size needed for rendezvous between the scenario with arbitrary delay and the scenario with delay zero. Moreover, we show that our upper bound is optimal by proving that Omega(log L + log log n) bits of memory are required for rendezvous, even in the class of trees with degrees bounded by 3.
The rendezvous problem was first mentioned in @cite_15 . Authors investigating rendezvous (cf. @cite_18 for an extensive survey) considered either the geometric scenario (rendezvous in an interval of the real line, see, e.g., @cite_28 @cite_19 @cite_7 , or in the plane, see, e.g., @cite_13 @cite_31 ), or rendezvous in networks, see, e.g., @cite_20 @cite_35 @cite_12 . Many papers, e.g., @cite_22 @cite_3 @cite_33 @cite_28 @cite_24 , study the probabilistic setting: inputs and/or rendezvous strategies are random.
{ "abstract": [ "We obtain several improved solutions for the deterministic rendezvous problem in general undirected graphs. Our solutions answer several problems left open in a recent paper by We also introduce an interesting variant of the rendezvous problem which we call the deterministic treasure hunt problem. Both the rendezvous and the treasure hunt problems motivate the study of universal traversal sequences and universal exploration sequences with some strengthened properties. We call such sequences strongly universal traversal (exploration) sequences. We give an explicit construction of strongly universal exploration sequences. The existence of strongly universal traversal sequences, as well as the solution of the most difficult variant of the deterministic treasure hunt problem, are left as intriguing open problems.", "Search Theory is one of the original disciplines within the field of Operations Research. It deals with the problem faced by a Searcher who wishes to minimize the time required to find a hidden object, or “target. ” The Searcher chooses a path in the “search space” and finds the target when he is sufficiently close to it. Traditionally, the target is assumed to have no motives of its own regarding when it is found; it is simply stationary and hidden according to a known distribution (e. g. , oil), or its motion is determined stochastically by known rules (e. g. , a fox in a forest). The problems dealt with in this book assume, on the contrary, that the “target” is an independent player of equal status to the Searcher, who cares about when he is found. We consider two possible motives of the target, and divide the book accordingly. Book I considers the zero-sum game that results when the target (here called the Hider) does not want to be found. Such problems have been called Search Games (with the “ze- sum” qualifier understood). Book II considers the opposite motive of the target, namely, that he wants to be found. 
In this case the Searcher and the Hider can be thought of as a team of agents (simply called Player I and Player II) with identical aims, and the coordination problem they jointly face is called the Rendezvous Search Problem.", "We consider rendezvous problems in which two players move on the plane and wish to cooperate to minimise their first meeting time. We begin by considering the case where both players are placed such that the vector difference is chosen equiprobably from a finite set. We also consider a situation in which they know they are a distanced apart, but they do not know the direction of the other player. Finally, we give some results for the case in which player 1 knows the initial position of player 2, while player 2 is given information only on the initial distance of player 1.", "The author considers the problem faced by two people who are placed randomly in a known search region and move about at unit speed to find each other in the least expected time. This time is called the rendezvous value of the region. It is shown how symmetries in the search region may hinder the process by preventing coordination based on concepts such as north or clockwise. A general formulation of the rendezvous search problem is given for a compact metric space endowed with a group of isometrics which represents the spatial uncertainties of the players. These concepts are illustrated by considering upper bounds for various rendezvous values for the circle and an arbitrary metric network. The discrete rendezvous problem on a cycle graph for players restricted to symmetric Markovian strategies is then solved. Finally, the author considers the problem faced by two people on an infinite line who each know the distribution of the distance but not the direction to each other.", "We present two new results for the asymmetric rendezvous problem on the line. 
We first show that it is never optimal for one player to be stationary during the entire search period in the two-player rendezvous. Then we consider the meeting time of n-players in the worst case and show that it has an asymptotic behavior of n/2 + O(log n).", "Two friends have become separated in a building or shopping mall and wish to meet as quickly as possible. There are n possible locations where they might meet. However, the locations are identical and there has been no prior agreement where to meet or how to search. Hence they must use identical strategies and must treat all locations in a symmetrical fashion. Suppose their search proceeds in discrete time. Since they wish to avoid the possibility of never meeting, they will wish to use some randomizing strategy. If each person searches one of the n locations at random at each step, then rendezvous will require n steps on average. It is possible to do better than this: although the optimal strategy is difficult to characterize for general n, there is a strategy with an expected time until rendezvous of less than 0.829 n for large enough n. For n = 2 and 3 the optimal strategy can be established and on average 2 and 8/3 steps are required respectively. There are many tantalizing variations on this problem, which we discuss with some conjectures. DYNAMIC PROGRAMMING; SEARCH PROBLEMS", "Two players A and B are randomly placed on a line. The distribution of the distance between them is unknown except that the expected initial distance of the (two) players does not exceed some constant @math The players can move with maximal velocity 1 and would like to meet one another as soon as possible. Most of the paper deals with the asymmetric rendezvous in which each player can use a different trajectory. We find rendezvous trajectories which are efficient against all probability distributions in the above class. 
(It turns out that our trajectories do not depend on the value of @math ) We also obtain the minimax trajectory of player A if player B just waits for him. This trajectory oscillates with a geometrically increasing amplitude. It guarantees an expected meeting time not exceeding @math We show that, if player B also moves, then the expected meeting time can be reduced to @math The expected meeting time can be further reduced if the players use mixed strategies. We show that if player B rests, then the optimal strategy of player A is a mixture of geometric trajectories. It guarantees an expected meeting time not exceeding @math This value can be reduced even more (below @math ) if player B also moves according to a (correlated) mixed strategy. We also obtain a bound for the expected meeting time of the corresponding symmetric rendezvous problem.", "Two players are independently placed on a commonly labelled network X. They cannot see each other but wish to meet in least expected time. We consider continuous and discrete versions, in which they may move at unit speed or between adjacent distinct nodes, respectively. There are two versions of the problem (asymmetric or symmetric), depending on whether or not we allow the players to use different strategies. After obtaining some optimality conditions for general networks, we specialize to the interval and circle networks. In the first setting, we extend the work of J. V. Howard; in the second we prove a conjecture concerning the optimal symmetric strategy. © 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 256–274, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002 nav.10011", "", "Leaving marks at the starting points in a rendezvous search problem may provide the players with important information. Many of the standard rendezvous search problems are investigated under this new framework which we call markstart rendezvous search. 
Somewhat surprisingly, the relative difficulties of analysing problems in the two scenarios differ from problem to problem. Symmetric rendezvous on the line seems to be more tractable in the new setting whereas asymmetric rendezvous on the line when the initial distance is chosen by means of a convex distribution appears easier to analyse in the original setting. Results are also obtained for markstart rendezvous on complete graphs and on the line when the players' initial distance is given by an unknown probability distribution. © 2001 John Wiley & Sons, Inc. Naval Research Logistics 48: 722–731, 2001", "", "", "We consider the problem of a rendezvous (coordinated meeting) of distributed units (intelligent agents in network computing or autonomous robots). The environment is modeled as a graph, the node labeling of which may not be “common knowledge” to the units, due to protocol and naming convention mismatch, machine faults, status change, or even hostility of the environment. Meeting of such units is likely to be a basic procedure in the area of distributed “intelligent agent” computing and in the domain of coordinated tasks of autonomous robots. The crux of the problem which we present here and initiate research on, is the breaking of potential symmetry while the units dynamically move. The units are more intelligent (computing power, control and memory) than simple (traditional) pebbles or tokens, and our algorithms will make use of this capability for speeding up the convergence to a common place (e.g., we will allow units to meet exchange information and depart). We consider both randomized protocols and deterministic (but non-uniform) protocols; the problem is unsolvable by a uniform deterministic algorithm. 
The deterministic procedure employs ideas from design theory and achieves O(n) time, while the randomized methods are based on random walks and may achieve O(n) time where k is the number of agents.", "" ], "cite_N": [ "@cite_35", "@cite_18", "@cite_31", "@cite_22", "@cite_7", "@cite_33", "@cite_28", "@cite_3", "@cite_24", "@cite_19", "@cite_15", "@cite_13", "@cite_12", "@cite_20" ], "mid": [ "2623545498", "1501957312", "2014192080", "2040570464", "2034224690", "2103469743", "1971467559", "2071069929", "2010875998", "2030068303", "", "2096182510", "1547036184", "" ] }
Delays induce an exponential memory gap for rendezvous in trees
We first show that if the delay between the starting times of the agents is arbitrary, then the lower bound on memory required for rendezvous is Ω(log n) bits, even for the line of length n. This lower bound matches the upper bound from [14] valid for arbitrary graphs. Our main positive result is a proof that the amount of memory needed for rendezvous with simultaneous start in trees depends essentially on the number L of leaves of the tree, and is exponentially less impacted by the number n of nodes. Indeed, we present two identical agents with O(log L + log log n) bits of memory that solve the rendezvous problem in all trees with n nodes and L leaves. Hence, for the class of trees with polylogarithmically many leaves, there is an exponential gap in minimum memory size needed for rendezvous between the scenario with arbitrary delay and the scenario with delay zero. Moreover, we show that the size O(log L + log log n) of memory needed for rendezvous is optimal, even in the class of trees with degrees bounded by 3. More precisely, we prove two lower bounds. First, for infinitely many integers L, we show a class of arbitrarily large trees with maximum degree 3 and with L leaves, for which rendezvous with simultaneous start requires Ω(log L) bits of memory. Second, we show that Ω(log log n) bits of memory are required for rendezvous with simultaneous start in the line of length n. These two bounds together imply that our upper bound O(log L + log log n) cannot be improved, even for the class of trees with maximum degree 3.

Bibliographic note

Note that our definition of solving the rendezvous problem is stronger than the definition used in the conference versions [24,25] of this paper. Indeed, rendezvous should occur for any port labeling. As opposed to what is claimed in [25], the exponential gap described in this paper does not carry over to the case where the ability of achieving rendezvous may depend on the port labeling.
More precisely, it was claimed in [25] that the positive result concerning the size O(log L + log log n) of memory for which rendezvous with simultaneous start is possible holds for arbitrary initial positions that are not symmetric with respect to a given port labeling µ of the tree in which the agents operate. This result is in fact incorrect in this formulation. Indeed, it has been recently proved in [15] that, for some port labeling of a line and some initial positions that are not symmetric with respect to this labeling, rendezvous with simultaneous start requires a logarithmic number of bits, while L = 2 for the line. However, our positive result holds for agents starting from arbitrary non perfectly symmetrizable initial positions. The algorithm and its analysis remain similar to those in [25]. (The exact place where the provided arguments do not extend to the case where the ability of achieving rendezvous may depend on the port labeling will be pointed out to the reader.) On the other hand, all negative results from [24] and [25] hold in the present setting as well.

Framework and Preliminaries

Model

We consider mobile agents traveling in trees with locally labeled ports. The tree and its size are a priori unknown to the agents. We first define precisely an individual agent. An agent is an abstract state machine A = (S, π, λ, s_0), where S is a set of states among which there is a specified state s_0 called the initial state, π : S × Z^2 → S, and λ : S → Z. Initially the agent is at some node u_0 in the initial state s_0 ∈ S. The agent performs actions in rounds measured by its internal clock. Each action can be either a move to an adjacent node or a null move resulting in remaining in the currently occupied node. State s_0 determines an integer λ(s_0). If λ(s_0) = −1 then the agent makes a null move (i.e., remains at u_0). If λ(s_0) ≥ 0 then the agent leaves u_0 by port λ(s_0) modulo the degree of u_0.
When incoming to a node v in state s ∈ S, the behavior of the agent is as follows. It reads the number i of the port through which it entered v and the degree d of v. The pair (i, d) ∈ Z^2 is an input symbol that causes the transition from state s to state s' = π(s, (i, d)). If the previous move of the agent was null (i.e., the agent stayed at node v in state s), then the pair (−1, d) ∈ Z^2 is the input symbol read by the agent, causing the transition from state s to state s' = π(s, (−1, d)). In both cases s' determines an integer λ(s'), which is either −1, in which case the agent makes a null move, or a non-negative integer indicating a port number by which the agent leaves v (this port is λ(s') mod d). The agent continues moving in this way, possibly infinitely. Since we consider the rendezvous problem for identical agents, we assume that the agents are copies A and A' of the same abstract state machine A, starting at two distinct nodes v_A and v_{A'}, called the initial positions. We will refer to such identical machines as a pair of agents. It is assumed that the internal clocks of a pair of agents tick at the same rate. The clock of each agent starts when the agent starts executing its actions. Agents start from their initial positions with delay θ ≥ 0, controlled by an adversary. This means that the later agent starts executing its actions θ rounds after the first agent. Agents do not know which of them is first and what is the value of θ. We seek agents with small memory, measured by the number of states of the corresponding automaton, or equivalently by the number of bits on which these states are encoded. An automaton with K states requires Θ(log K) bits of memory. We say that a pair of agents solves the rendezvous problem with arbitrary delay (resp. 
with simultaneous start) in a class of trees, if, for any tree in this class, for any port labeling of this tree, and for any initial positions that are not perfectly symmetrizable, both agents are eventually in the same node of the tree in the same round, regardless of the starting rounds of the agents (resp. provided that they start in the same round).

Preliminary results

Consider any tree T and the following sequence of trees constructed recursively: T_0 = T, and T_{i+1} is the tree obtained from T_i by removing all its leaves. Let T' = T_j for the smallest j for which T_j has at most two nodes. If T' has one node, then this node is called the central node of T. If T' has two nodes, then the edge joining them is called the central edge of T. A tree T with a port labeling µ is called symmetric if there exists a non-trivial automorphism f of the tree (i.e., an automorphism f such that f(u) ≠ u for some u ∈ V) preserving this port labeling. If a tree with port numbers has a central node, then it cannot be symmetric. We define the "basic walk" starting at node v as the walk resulting from an agent performing the following actions: leave node v by port 0, and, perpetually, whenever entering a degree-d node by port i ∈ {0, . . . , d − 1}, leave that node by port (i + 1) mod d. Of course, a basic walk can be bounded to perform t steps (instead of running perpetually), in which case we refer to a basic walk of length t. Note that a basic walk of length 2(n − 1) in an n-node tree returns to its starting node. The following statement is an easy consequence of the techniques and results from [27].

Fact 2.1 There exists an agent accomplishing the following task in an arbitrary tree: using O(log m) bits of memory, it finds the number m of nodes in the tree, returns and stops at its initial position, and detects whether the tree has a central node, or has a central edge but is not symmetric, or has a central edge and is symmetric. 
Moreover,
• if the tree has a central node x, then the agent finds the minimum number of steps of a basic walk from its initial position to the central node x;
• if the tree has a central edge e = {x, y} but is not symmetric, then, for every initial position, the agent finds the minimum number of steps of a basic walk from its initial position to the same extremity x of the central edge; moreover, it knows which port at this extremity corresponds to the central edge;
• if the tree is symmetric, then the agent finds the minimum number of steps of a basic walk from its initial position to the farthest extremity of the central edge; moreover, it knows which port at this extremity corresponds to the central edge.
In the sequel, the procedure accomplishing the above task starting at node v will be called Procedure Explo(v).

3 Rendezvous with arbitrary delay

It was proved in [14] that rendezvous with arbitrary delay can be accomplished in arbitrary n-node graphs using O(log n) bits of memory. On the other hand, observe that rendezvous requires Ω(log n) bits of memory in arbitrarily large trees with 2n + 1 nodes and maximum degree n. The lower bound examples are trees T_n consisting of two nodes u and v of degree n, both linked to a common node w, and each linked to n − 1 leaves. However, these trees have linear degree, and the reason for the logarithmic memory requirement is simply that agents with smaller memory are incapable of having an output function λ with a range of linear size; thus the adversary can place one agent at node u and the other at a leaf adjacent to v, and distribute ports in such a way that neither agent can ever get to node w, which makes rendezvous infeasible in spite of non perfectly symmetrizable initial positions. This example leaves open the question whether rendezvous with sub-logarithmic memory is possible, e.g., in all trees with constant maximum degree.
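As an illustration of the agent model and of the basic walk defined above, the following sketch (the tree encoding and all names are ours, not from the paper) simulates a basic walk on a port-labeled tree:

```python
# Sketch (hypothetical encoding): tree[v] maps each local port of node v to a
# pair (neighbor, port by which the walk enters that neighbor).  A basic walk
# leaves its start node by port 0 and, on entering a degree-d node by port i,
# leaves it by port (i + 1) mod d.

def basic_walk(tree, start, length):
    node, in_port = start, None
    trace = [node]
    for _ in range(length):
        d = len(tree[node])
        out = 0 if in_port is None else (in_port + 1) % d
        node, in_port = tree[node][out]
        trace.append(node)
    return trace

# A 3-node path 0-1-2: node 1 has ports 0 and 1, the endpoints have port 0 only.
path3 = {0: {0: (1, 0)}, 1: {0: (0, 0), 1: (2, 0)}, 2: {0: (1, 1)}}
```

For instance, basic_walk(path3, 0, 4) visits the nodes 0, 1, 2, 1, 0, returning to its starting node after 2(n − 1) = 4 steps, as stated above.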
It turns out that if the delay is arbitrary, this is not the case: rendezvous requires logarithmic memory even for the class of lines.

Theorem 3.1 Rendezvous with arbitrary delay in the n-node line requires agents with Ω(log n) bits of memory.

Proof. Let k be the number of memory bits of the agent and K = 2^k be its number of states. Place one agent at some node u of the infinite line in which each edge has the same port number at both its extremities. In any interval of length K + 1 there exist two nodes at which the agent is in the same state. Let x_1 be the first node of the trajectory of the agent at which this happens and let s be the state of the agent at x_1. Let x_2 be the second node of the trajectory of the agent at which the agent is in state s. Let δ be the distance between u and x_1, and let d be the distance between x_1 and x_2. We construct the following instance of the rendezvous problem (see Fig. 1). The line is of length 8(K + 1) + 1. Let e be the central edge of this line. Assign number 0 to the ports leading to edge e from both its extremities, and assign the other port labels so that the ports leading to any edge get the same number 0 or 1 at both its extremities. (This is equivalent to a 2-edge-coloring of the line.) Let z be the endpoint of the line for which x_1 is between z and x_2. Let y_1 and y_2 be the symmetric images of x_1 and x_2, respectively, according to the axis of symmetry of the line. Let y_0 be the node distinct from y_2 at distance d from y_1. Let v be the node at distance δ from y_0 such that the vectors [x_1, u] and [y_0, v] have opposite directions. The other agent is placed at node v. Let t_1 be the number of rounds that the agent starting at u takes to reach x_1 in state s. Let t_2 be the number of rounds that the agent starting at v takes to reach y_1 in state s. Let θ = t_2 − t_1. The adversary delays the agent starting at u by θ rounds.
Hence the agent starting at u reaches x_1 at the same time t and in the same state as the agent starting at v reaches y_1. The nodes x_1 and y_1 are symmetric positions, hence rendezvous is impossible after time t; before time t the two agents have not met either, since until then their trajectories are confined to disjoint parts of the line. Together with the logarithmic upper bound from [14], the above result completely solves the problem of determining the minimum memory of the agents permitting rendezvous with arbitrary delay. Hence in the rest of the paper we concentrate on rendezvous with simultaneous start, thus assuming that the delay θ = 0.

(Figure 1: the lower-bound instance, a line of length 8(K + 1) + 1 with central edge e; the figure shows the nodes z, x_1, x_2, u and their symmetric images y_2, y_0, y_1, v, together with the distances d and δ and segments of length 2(K + 1).)

4 Rendezvous with simultaneous start

4.1 Upper bound

It turns out that the size of memory needed for rendezvous with simultaneous start depends on two parameters of the tree: the number n of nodes and the number ℓ of leaves. In fact we show that rendezvous in trees with n nodes and ℓ leaves can be done using only O(log ℓ + log log n) bits of memory. Thus, for trees with polylogarithmically many leaves, O(log log n) bits of memory are enough. In view of Theorem 3.1, this shows an exponential gap in the minimum memory size needed for rendezvous between the scenarios with arbitrary delay and with delay zero.

Theorem 4.1 There is a pair of identical agents solving rendezvous with simultaneous start in all trees, and using, for any integers n and ℓ, O(log ℓ + log log n) bits of memory in trees with at most n nodes and at most ℓ leaves.

The rest of the section is dedicated to the proof of Theorem 4.1. Let T be any tree, and let v and v' be the initial positions of the two agents in T. Let T' be the contraction of T, that is, the tree obtained from T by replacing every path in T joining two nodes of degree different from 2 by an edge (the ports of this edge correspond to the ports at both extremities of the contracted path). Notice that if T has ℓ leaves, then its contraction T' has at most 2ℓ − 1 nodes.
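The contraction operation just defined can be sketched as follows (the graph representation and all names are ours): from an adjacency-list view of T, keep the nodes of degree different from 2 and replace each maximal chain of degree-2 nodes by a single edge; port numbers are omitted here for brevity.

```python
# Sketch: adj[v] is the set of neighbors of v in the tree T.  The function
# returns the edge set of the contraction, obtained by walking from each node
# of degree != 2 through chains of degree-2 nodes to the next such node.

def contraction_edges(adj):
    keep = {v for v in adj if len(adj[v]) != 2}
    edges = set()
    for v in keep:
        for w in adj[v]:
            prev, cur = v, w
            while cur not in keep:              # follow the degree-2 chain
                nxt = next(x for x in adj[cur] if x != prev)
                prev, cur = cur, nxt
            edges.add(frozenset((v, cur)))      # one contracted edge
    return edges
```

For example, a path on 4 nodes contracts to a single edge between its two leaves, and a spider with three legs of length 2 contracts to a 3-edge star, consistently with the bound of at most 2ℓ − 1 nodes for a tree with ℓ leaves.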
Our rendezvous algorithm uses Procedure Explo, defined in Section 2, as a subroutine. More precisely, each of the two agents executes procedure Explo in T, ignoring the degree-2 nodes. That is, protocol Explo is modified so that whenever an agent enters a degree-2 node through port i ∈ {0, 1} in some state s, it leaves that node in the next round by port (i + 1) mod 2, in the same state s. In fact, there are some subtle additional details in the modified version of Explo when the initial node is of degree different from 2. Specifically, let s_0 be the initial state of an agent executing Explo. Our modified agent starts in an additional state s*_0. If the initial node v has degree different from 2, then the agent enters state s_0 and starts Explo(v), ignoring the degree-2 nodes. Otherwise, the agent remains in state s*_0 and leaves the initial node through port 0. The agent then performs a basic walk, remaining in state s*_0, until it enters a node of degree 1 (i.e., a leaf of the tree T). At such a node, denoted by v_leaf, the agent enters state s_0 and starts Explo(v_leaf), ignoring the degree-2 nodes. We call Explo-bis the procedure Explo modified in this way. Observe that, in trees with no nodes of degree 2, the two protocols Explo and Explo-bis are executed identically. Hence, protocols Explo and Explo-bis are executed identically in T'. Formally, for an initial position v, let us define
v̄ = v if deg(v) ≠ 2, and v̄ = v_leaf otherwise.
Then, the following holds.

Claim 4.1 Once an agent starting from some node v has reached node v̄, the states of the agent performing Explo-bis in T at nodes of degree different from 2 are identical to the states of an agent performing Explo in T' starting from node v̄.

Using this claim, rendezvous in T is achieved as follows.

Stage 1. Each of the two agents executes procedure Explo-bis from their respective initial positions v and v'. After having completed Explo-bis, each agent knows whether the contraction tree T' is symmetric or not.
(It is non-symmetric if either there is a central node, or there is a central edge and the two port-labeled trees obtained by removing the central edge of T' are not isomorphic; the isomorphism must preserve both the structure of the trees and the port labelings.)

Stage 2. The nature of the second stage differs according to whether T' is symmetric or not. In the non-symmetric case, the rendezvous protocol uses Fact 2.1, which states that the two agents performing Procedure Explo will eventually identify a single node x of T'. Node x is identified by the number of steps of the basic walk performed in T' to reach that node from the initial position. Notice that, although Explo ensures (by Fact 2.1) that each agent returns to its initial position after completing the procedure, Claim 4.1 guarantees only that the agent applying Explo-bis returns to the node v̄. Nevertheless, this is sufficient, since the length of the basic walk reaching x is the length of the one starting from node v̄, ignoring degree-2 nodes. Note that this length does not exceed twice the number of edges of T', and thus it can be encoded on O(log ℓ) bits. Therefore, each of the agents acts as follows:
• If there is a central node x in T', then rendezvous is achieved by waiting for the other agent at that node.
• Similarly, if there is a central edge in T' and the tree T' is not symmetric, then let x be the extremity of the central edge of T' identified by protocol Explo-bis; rendezvous is achieved by waiting for the other agent at that node.
The difficult and more challenging situation is when the contraction tree T' has a central edge with two indistinguishable extremities, in which case the ability to solve the rendezvous problem depends on the whole tree T and on the initial positions of the two agents in T. Achieving rendezvous is complicated by the constraint that the agents must use sub-logarithmic memory when ℓ is small.
The main part of the proof is dedicated to describing how this task can actually be achieved in a memory-efficient manner.

Sub-stage 2.1 (for the case when T' is symmetric): Resynchronization. Recall that we are in a situation where each of the two agents has performed Explo-bis. An agent starting from node v ∈ T has not necessarily returned to node v, but to node v̄ ∈ T. Each agent executes Procedure Synchro, defined as follows. It starts the execution of a basic walk in T, leaving the current node v̄ by port 0. This basic walk ends when the agent is back at node v̄. This is simply ensured by counting the number of edge-traversals in T': the agent stops the basic walk after 2(ν − 1) edge-traversals in T', where ν denotes the number of nodes in T'. Since ν ≤ 2ℓ − 1, counting up to O(ν) does not require more than O(log ℓ) bits. The basic walk proceeds with the following insertions: at each visited node w of degree different from 2 (i.e., at each node of T'), the agent performs Explo-bis(w), except at the very last node of T' visited by the basic walk, that is, except when the agent returns, for the last time, to its initial position v̄. Since agents performing Procedure Synchro starting from different initial positions execute identical actions, only in a different order, we have the following:

Claim 4.2 Two agents starting simultaneously at arbitrary initial positions v and v' in T finish Procedure Synchro with a delay β = |L − L'|, where L (resp., L') is the length of the basic walk in T leading from v to v̄ (resp., from v' to v̄').

Once the agents are resynchronized (their desynchronization is now precisely β), each of them proceeds to the second part of Stage 2.

Sub-stage 2.2 (for the case when T' is symmetric): Rendezvous in a virtual line. After the execution of Procedure Synchro, the agent with initial position v is back at v̄.
In view of Fact 2.1, since it applied Explo(v̄) at the very beginning of the rendezvous protocol, the agent knows the number of steps of the basic walk from v̄ to the farthest extremity of the central edge of T'. So, its first action in Sub-stage 2.2 is to go to this node, following a basic walk. We denote by v_far (resp., v'_far) the farthest extremity of the central edge of T' reached by the agent starting from v (resp., from v'). Since the contraction tree T' is symmetric, the two agents may end up in two different nodes of T, i.e., possibly v_far ≠ v'_far. For instance, in the n-node path with an odd number of edges, the two agents may end up at the two extremities of the path. Also, in the binomial tree with n nodes (cf. [13]), the two agents may end up at the roots of the two binomial subtrees of T with n/2 nodes. Still, we prove that rendezvous is possible with little memory assuming that the two initial positions of the agents were not perfectly symmetrizable in T. Actually, the first of the two key ingredients in our proof is showing how rendezvous can be achieved in the path (or line) using agents with O(log log n) bits of memory. In the lemma below, we consider blind agents in paths, that is, agents that ignore port labels. More precisely, when entering a node, such an agent can just distinguish between the incoming edge and the other edge (if any). Let P = (v_1, . . . , v_m) be an m-node path, and consider two identical blind agents initially located at nodes v_a and v_b, a < b. Rendezvous using blind agents is possible if and only if m is odd, or m is even and a − 1 ≠ m − b. Of course, a standard agent can simulate the behavior of a blind agent. When applying the lemma below with standard agents, we will make sure that the starting positions v_a and v_b are such that rendezvous is achievable even with blind agents.
Lemma 4.1 There exists a pair of identical blind agents accomplishing rendezvous with simultaneous start in all paths, whenever it is possible, and using O(log log m) bits of memory in paths with at most m nodes.

Proof. Let P = (v_1, . . . , v_m) be an m-node path, and consider two identical blind agents initially located at nodes v_a and v_b, a < b. To achieve rendezvous, the two agents perform a sequence of traversals of P, executed at lower and lower speeds, aiming at eventually meeting each other at some node. More precisely, for an integer s ≥ 1, a traversal of the path is performed at speed 1/s if the agent remains idle s − 1 rounds before traversing any edge. For instance, traversing P from v_1 to v_m at speed 1/s requires (m − 1)s rounds. Our rendezvous algorithm for the line, called prime, performs as follows.

Begin
  start in an arbitrary direction;
  move at speed 1 until reaching one extremity of the path;
  p ← 2;
  While no rendezvous do
    traverse the entire path twice, at speed 1/p;
    p ← smallest prime larger than p;
End

We now prove that, whenever rendezvous is possible for blind agents (i.e., when m is odd, or m is even and a − 1 ≠ m − b), the two agents meet before the jth iteration of the loop, for some j with log p_j = O(log log m), where p_j denotes the jth prime number (p_1 = 2). The speed of each agent at the jth execution of the loop is 1/p_j. If rendezvous has not occurred during the jth execution of the loop, then the two agents have crossed the same edge, say e = {v_c, v_{c+1}}, at the same time t, in opposite directions. This can occur if, for instance, the agent initially at v_a moves to node v_1, traverses the path twice at each of the successive speeds 1/p_1, . . . , 1/p_{j−1}, and, c·p_j rounds after having eventually started walking at speed 1/p_j, traverses the edge e at time t, while the other agent, initially at v_b, moves to v_m, traverses the path twice at each of the successive speeds 1/p_1, . . .
, 1/p_{j−1}, and, (m − c)·p_j rounds after having eventually started walking at speed 1/p_j, traverses the same edge e in the other direction at the same time t. In fact, there are four cases to consider, depending on the two starting directions of the two agents: towards v_1 or towards v_m. From these four cases, we get that one of the following four equalities must hold (the first one corresponds to the previously described scenario: the agent at v_a moves towards v_1 while the agent at v_b moves towards v_m):
• t = (a − 1) + 2(m − 1)·Σ_{i=1}^{j−1} p_i + c·p_j = (m − b) + 2(m − 1)·Σ_{i=1}^{j−1} p_i + (m − c)·p_j
• t = (a − 1) + 2(m − 1)·Σ_{i=1}^{j−1} p_i + (m − 1)·p_j + (m − c)·p_j = (b − 1) + 2(m − 1)·Σ_{i=1}^{j−1} p_i + c·p_j
• t = (m − a) + 2(m − 1)·Σ_{i=1}^{j−1} p_i + (m − c)·p_j = (m − b) + 2(m − 1)·Σ_{i=1}^{j−1} p_i + (m − 1)·p_j + c·p_j
• t = (m − a) + 2(m − 1)·Σ_{i=1}^{j−1} p_i + (m − c)·p_j = (b − 1) + 2(m − 1)·Σ_{i=1}^{j−1} p_i + c·p_j
Therefore we get that p_j divides |a − b|, or p_j divides |m − (a + b) + 1|. As a consequence, if the two agents have not met after the jth execution of the loop, then Π_{i∈I} p_i divides |a − b| and Π_{i∈J} p_i divides |m − (a + b) + 1|, where I ∪ J = {1, . . . , j}. Therefore, since the p_i's are primes, Π_{i=1}^{j} p_i divides |a − b| · |m − (a + b) + 1|. Hence, if rendezvous is feasible, it must occur at or before the jth execution of the loop, where j is the largest index such that Π_{i=1}^{j} p_i divides |a − b| · |m − (a + b) + 1|; in particular, it must occur at or before the jth execution of the loop, where j is the largest index such that Π_{i=1}^{j} p_i ≤ m^2. Let π(x) be the number of prime numbers smaller than or equal to x. On the one hand, we have Π_{i=1}^{j} p_i ≥ 2^{π(p_j)}. Hence, rendezvous must occur at or before the jth execution of the loop, where j is the largest index such that 2^{π(p_j)} ≤ m^2, i.e., π(p_j) ≤ 2 log m. On the other hand, from the Prime Number Theorem we get that π(x) ∼ x/ln(x), i.e., lim_{x→∞} π(x)/(x/ln(x)) = 1. Hence, for x large enough, π(x) ≥ x/(2 ln(x)).
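The protocol prime analyzed in this proof can be simulated directly. The sketch below (the encoding and the two fixed starting directions are our assumptions; the protocol itself lets each agent pick its initial direction arbitrarily) tracks the round-by-round positions of two blind agents on a path with nodes 0, . . . , m − 1 and reports the first round at which they occupy the same node.

```python
def next_prime(p):
    """Smallest prime larger than p, by trial division."""
    q = p + 1
    while any(q % d == 0 for d in range(2, int(q ** 0.5) + 1)):
        q += 1
    return q

def prime_positions(m, start, direction, horizon):
    """Positions of a blind agent running protocol prime, one entry per round."""
    pos, x, d = [start], start, direction
    while 0 < x < m - 1:                 # phase 1: speed 1 to an extremity
        x += d
        pos.append(x)
    p = 2
    while len(pos) <= horizon:           # phase 2: two traversals per prime p
        for _ in range(2 * (m - 1)):     # 2(m-1) edge traversals per prime
            pos.extend([x] * (p - 1))    # idle p-1 rounds before each edge
            d = 1 if x == 0 else (-1 if x == m - 1 else d)
            x += d
            pos.append(x)
        p = next_prime(p)
    return pos[:horizon + 1]

def meet_round(m, a, b, dir_a, dir_b, horizon=5000):
    """First round at which the two agents stand at the same node, or None."""
    pa = prime_positions(m, a, dir_a, horizon)
    pb = prime_positions(m, b, dir_b, horizon)
    for t, (x, y) in enumerate(zip(pa, pb)):
        if x == y:
            return t
    return None
```

For instance, on a 5-node path with the agents at the second and fourth nodes and mirrored starting directions, the simulated agents first meet at the center node during the speed-1/2 traversals, in accordance with the lemma (m odd, so rendezvous is always possible).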
Thus rendezvous must occur at or before the jth execution of the loop, where j is the largest index such that p_j/ln p_j ≤ 4 log m. From the above, we get that (1) rendezvous must occur whenever it is feasible, and (2) it occurs at or before the jth execution of the loop, where log p_j = O(log log m). Since the next prime p can be found using O(log p) bits, e.g., by exhaustive search, we get that prime performs rendezvous using agents with O(log log m) bits of memory.

The (blind) agents described in Lemma 4.1 perform a protocol called prime. This protocol uses the infinite sequence of prime numbers. We denote by prime(i) the protocol prime modified so that it stops after having considered the ith prime number. We now come back to our general rendezvous protocol in trees (with port numbers). Let ν = 2x be the number of nodes in the contraction tree T'. (ν is even, since T' is symmetric with respect to its central edge.) We define a (non-simple) path, called the rendezvous path and denoted by P, that will be used by the agents to rendezvous using protocol prime. To define P, let u and v be the two extremities of the path in T corresponding to the central edge in T'. We have {v_far, v'_far} ⊆ {u, v}. This path in T is called the central path, and is denoted by C. Abusing notation, C will also be used as a shortcut for the instruction "traverse C". Let bw (for "basic walk") be the instruction of performing the following actions: leave by port 0, and, perpetually, whenever entering a degree-d node by port i ∈ {0, . . . , d − 1}, leave that node by port (i + 1) mod d. Similarly, let cbw (for "counter basic walk") be the instruction of performing the following: leave by the port used to enter the current node at the previous step, and, perpetually, whenever entering a degree-d node by port i, leave that node by port (i − 1) mod d.
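The instructions bw and cbw can be sketched as follows (tree encoding and names ours, as in the earlier sketches); the point is that cbw, started by the port through which the previous node was entered, retraces a basic walk backwards:

```python
# Sketch: tree[v] maps each port of v to (neighbor, entry port at neighbor).
# bw leaves a degree-d node entered by port i via port (i + 1) mod d;
# cbw leaves it via port (i - 1) mod d, undoing a basic walk step by step.

def walk(tree, start, length, step, first_out=0):
    node, in_port = start, None
    trace = [node]
    for _ in range(length):
        d = len(tree[node])
        out = first_out if in_port is None else step(in_port, d)
        node, in_port = tree[node][out % d]
        trace.append(node)
    return trace, in_port          # also return the last entry port

def bw(tree, v, t):
    return walk(tree, v, t, lambda i, d: (i + 1) % d)

def cbw(tree, v, t, entry_port):
    return walk(tree, v, t, lambda i, d: (i - 1) % d, first_out=entry_port)
```

On the 3-node path from the earlier sketch, bw from one endpoint for two steps ends at the other endpoint, and cbw started with the corresponding entry port visits the same nodes in reverse order.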
For j ≥ 1, let bw(j) (resp., cbw(j)) be the instruction to execute bw (resp., cbw) until j nodes of degree different from 2 have been visited. Let B_u (resp., B_v) be the path corresponding to the execution of bw(2(ν − 1)) from u (resp., from v). Note that a node can be visited several times by the walk, and thus neither B_u nor B_v is simple. Note also that since T' has ν nodes, it has ν − 1 edges, and thus both B_u and B_v are closed paths, i.e., the two extremities of B_u (resp., B_v) coincide with u (resp., v). Let B̄_u (resp., B̄_v) be the path corresponding to the execution of cbw(2(ν − 1)) from u (resp., from v). We define
P = (B_u | C_{u→v} | B_v | C_{v→u})^{5ℓ} | (B_u | C_{u→v} | B_v)
where "|" denotes the concatenation of paths, C_{u→v} (resp., C_{v→u}) denotes the path C traversed from u to v (resp., from v to u), and, for a closed path Q, Q^α denotes Q concatenated with itself α times. The path P is well defined. Indeed, the sequence B_u | C_{u→v} | B_v | C_{v→u} leads back to node u. Also, the two extremities of the path P are u and v. Now, the agents have no clue whether they are standing at u or at v. Nevertheless, each agent can traverse the path P from one of its extremities to the other. Before establishing this claim, note that the instructions bw(2(ν − 1)) and cbw(2(ν − 1)) are meaningful, since agents can have counters of size O(log ℓ) bits, and they know ν in view of Fact 2.1. To establish the claim, it suffices to notice that the path reverse to P has the same structure, with the roles of u and v exchanged and the basic walks B replaced by the counter basic walks B̄. The protocol for this stage (summarized in Figure 2) is the following:

Begin
  for consecutive values i ≥ 1 do /* outer loop */
    /* try rendezvous */
    for j = 0, 1, . . . , 2(ν − 1) do /* first inner loop */
      perform bw(j);
      perform cbw(j); /* back to the original position */
      perform prime(i) on the rendezvous path P;
    /* reset */
    go to the other extremity of the central path C;
    for j = 0, 1, . . . , 2(ν − 1) do /* second inner loop */
      perform bw(j);
      perform cbw(j); /* back to the original position */
    return to the original extremity of the central path C;
End

The two agents will use protocol prime along the path P to achieve rendezvous. However, to make sure that rendezvous succeeds, the two agents must not start prime simultaneously at the two extremities of P, in order to break symmetry. Unfortunately, this requirement is not trivial to satisfy. Indeed, one can guarantee some upper bound on the delay between the times the two agents reach the two extremities of C (and thus of P as well) that does not exceed n, but no guarantee can be given for the minimum delay, which could be zero. This is because the delay does not depend on the tree T', but on the tree T. Hence two agents starting simultaneously in T may actually finish Stage 2.1 of our protocol (i.e., the execution of Synchro) at the same time, even if T is not symmetric, and even if T' is symmetric but the starting positions were not perfectly symmetrizable. The second key ingredient in our proof is a technique guaranteeing eventual desynchronization of the two agents. A high-level description of this technique is summarized in Figure 2. We describe this technique in detail below. The outer loop of the protocol in Figure 2 states how many consecutive prime numbers the protocol will test while performing prime along the path P. Performing prime(i) for successive values of i, instead of just prime, avoids a perpetual execution of prime in the case when the two agents started the execution of phase 2 at the same time from the two extremities of P. For every number i ≥ 1 of primes to be used in prime, the protocol performs two inner loops.
The first one is an attempt to achieve rendezvous along P, while the second one is used to upper bound the delay between the two agents at the end of the outer loop, in order to guarantee that the next execution of the outer loop starts with a delay between the two agents that does not exceed n. During the first inner loop, an agent executing the protocol performs a series of basic walks of different lengths. For j = 0, the agent does nothing; in this case, prime(i) is performed on P directly. For j > 0, the agent performs a basic walk in T to the jth node of degree different from 2 that it encounters along its walk. When j = 2(ν − 1), the basic walk is a complete one, traversing each edge of T twice. Each bw(j) is followed by a cbw(j), so as to come back to the original position at the same extremity of the path P. Once this is done, the agent performs prime(i) on P. The second inner loop aims at resetting the two agents. For this purpose, each agent goes to the other extremity of C, performs the same sequence of actions as the other agent had performed during its execution of the first inner loop, and returns to its original extremity of C. This enables resetting the two agents in the following sense.

Claim 4.4 At the beginning of each execution of the outer loop, the delay between the two agents is |t − t'|, where t (resp., t') is the time at which the agent starting from v (resp., v') reaches v_far (resp., v'_far).

To establish the claim, just notice that, during every execution of the outer loop, the sets of actions performed by the two agents inside the loop are identical, differing only by their order. Note that we can express |t − t'| = |(L + L̄) − (L' + L̄')|, where L and L' are defined in Claim 4.2, and L̄ (resp., L̄') denotes the length of the basic walk leading from v̄ (resp., v̄') to v_far (resp., to v'_far). A consequence of Claim 4.4 is the following lemma.

Lemma 4.2 At the beginning of each execution of prime(i) in the outer loop, the delay between the two agents is at most |t − t'| + 16nℓ.

Proof. For j ≥ 1, let l_j and l'_j be the lengths (i.e., numbers of edges) of the paths in T between the (j − 1)th and the jth nodes of degree different from 2 met by the two agents, respectively, during their basic walks from their positions at the two extremities of C.
At the jth iteration of the inner loop, one agent has traversed 2·Σ_{a=1}^{j} Σ_{b=1}^{a} l_b edges during bw(a) and cbw(a) for all a = 1, . . . , j. The other agent has traversed 2·Σ_{a=1}^{j} Σ_{b=1}^{a} l'_b edges during the same bw(a) and cbw(a). Since the number of rounds of prime(i) is the same for both agents, we get that their delay is at most
|t − t'| + 2·Σ_{a=1}^{j} Σ_{b=1}^{a} |l_b − l'_b| ≤ |t − t'| + 4(ν − 1)·Σ_{b=1}^{2(ν−1)} |l_b − l'_b| ≤ |t − t'| + 4(ν − 1)·Σ_{b=1}^{2(ν−1)} max{l_b, l'_b} ≤ |t − t'| + 8(ν − 1)n ≤ |t − t'| + 8νn ≤ |t − t'| + 16nℓ.
This completes the proof of the lemma.

Lemma 4.3 Assume that the two agents have not met when they arrive at v_far and v'_far after the execution of Synchro. For every i, if at the beginning of each execution of prime(i) the delay between the two agents is zero, then their initial positions were perfectly symmetrizable in T.

Proof. Fix i ≥ 1, and assume that, at the beginning of each of the 2ν − 1 executions of prime(i) in the outer loop, the delay between the two agents is zero. This implies that, using the same notation as in the proof of Lemma 4.2, for every j = 0, . . . , 2(ν − 1) we have t + 2·Σ_{a=1}^{j} Σ_{b=1}^{a} l_b = t' + 2·Σ_{a=1}^{j} Σ_{b=1}^{a} l'_b. Therefore, t = t' and l_j = l'_j for every j = 1, . . . , 2(ν − 1). These equalities imply that the tree T is topologically symmetric: there is an automorphism f̄ of T which extends the port-preserving automorphism f of T' mapping to each other the two symmetric subtrees T_1 and T_2 of T' hanging at the two extremities of the central edge of T' (f induces an isomorphism between T_1 and T_2 preserving port labels). Indeed, since the two agents have not met when both of them arrive at v_far and v'_far, the fact that t = t' implies that v_far ≠ v'_far. We have v'_far = f(v_far). More generally, if x_j (resp., x'_j) denotes the jth node of T' reached by the basic walk starting at v_far (resp., v'_far), we have x'_j = f(x_j).
By definition, l_j (resp., l'_j) is the length of the path in T between x_{j−1} and x_j (resp., between x'_{j−1} and x'_j). Since l_j = l'_j, we get that the number of degree-2 nodes in T between x_{j−1} and x_j is the same as the number of degree-2 nodes in T between x'_{j−1} and x'_j. Thus f can be extended to match the nodes of these two paths, preserving adjacencies. Since this holds for every j, we get that T is topologically symmetric. To sum up, the tree T is topologically symmetric (by the automorphism f̄), and its contraction tree T' is symmetric (by the automorphism f, which preserves port labels). A consequence of this fact is the following crucial observation. Let us consider the following port labeling µ. The port numbers at nodes of degree larger than 2 are the same as in T. The port labeling is completed arbitrarily at nodes of degree 2, preserving the following condition: if {z, z'} is an edge in T with at least one extremity z of degree 2, then the port number at z corresponding to {z, z'} is equal to the port number at f̄(z) corresponding to {f̄(z), f̄(z')}. Two basic walks starting from two symmetric positions in T generate two sequences of nodes such that the ith nodes of the two sequences are symmetric in T with respect to µ. Indeed, the "branching" nodes, i.e., the nodes of degree at least 3, are symmetric, and basic walks are oblivious of the port numbers at nodes of degree at most 2. The same observation holds for counter basic walks. It also holds if the port numbers of the outgoing edges from the starting nodes are not 0, under the simple assumption that they are equal. We use the above observation to show that the two nodes v and v' are perfectly symmetrizable. Since T' is symmetric, it is sufficient to show that v and v' are topologically symmetric. The two agents have reached nodes v_far and v'_far after procedure Synchro, entering these nodes from the central path.
Indeed, on the one hand, v_far and v'_far are the farthest extremities of the central edge of T' reached from v and v', respectively, and, on the other hand, the basic walks reaching these nodes are of minimum length (cf. Fact 2.1). Since v_far and v'_far are symmetric in T', the port numbers of the edges incident to these nodes on the central path are identical. Let i be this port number. Consider two counter basic walks of length t = t' starting from v_far and v'_far, leaving the starting node by port number i. These counter basic walks proceed backwards, first along the basic walk from v̄ to v_far for L̄ steps, and next along the basic walk from v to v̄ for L steps. If v = v̄ then L = 0. If v ≠ v̄, then the articulation between the two basic walks v → v̄ and v̄ → v_far occurs at v̄ = v_leaf. Since we have chosen this latter node to be a leaf, the sequence of basic walks v → v̄ and v̄ → v_far is actually equal to a basic walk v → v_far of length t = L + L̄. Hence the counter basic walk of length t starting from v_far by port i leads to the initial position v. The same holds for the other counter basic walk, of length t' = t. Therefore, v and v' are topologically symmetric, and thus they are perfectly symmetrizable. In view of the previous lemma, since v and v' are not perfectly symmetrizable, at each execution i of the outer loop there is an execution j of prime(i) for which the two agents do not start the second phase at the same time from their respective extremities of P. Moreover, by Lemma 4.2, during this jth execution of prime(i), the delay δ between the two agents is at most |t − t'| + 16nℓ. We have |t − t'| = |(L + L̄) − (L' + L̄')|, where the four parameters are lengths of basic walks. These four basic walks have lengths at most 2(n − 1). Hence, |t − t'| ≤ 4n. Therefore, δ ≤ 20nℓ. The length of the rendezvous path P is larger than 20nℓ, because B_u and B_v are each of length at least 2n.
Therefore, at the first time when both agents are simultaneously in the jth execution of prime(i), they occupy two non perfectly symmetrizable positions in P: one is at one extremity of P, and the other is at some node of P at distance δ > 0 along P from the other extremity of P. Moreover, since the delay δ between the two agents is smaller than the length of the path P, the agent first executing prime(i) has not yet completed the first traversal of P when the other agent starts prime(i). As a consequence, the two agents act as if prime(i) were executed with both agents starting simultaneously at non perfectly symmetrizable positions in the path. Now, for small values of i, prime(i) may not achieve rendezvous in P. However, in view of Lemma 4.1, for some i = O(log n), rendezvous will be completed whenever the initial positions of the agents were not perfectly symmetrizable in T. We complete the proof by checking that each agent uses O(log ℓ + log log n) bits of memory. Protocol Explo-bis executed in T consumes the same amount of memory as Protocol Explo executed in T'. Since T' has at most 2ℓ − 1 nodes, Explo-bis uses O(log ℓ) bits of memory. During the second stage of the rendezvous, a counter is used for identifying the index j of the inner loop. Since j ≤ 2ν ≤ 4ℓ, this counter uses O(log ℓ) bits of memory. All executions of prime are independent and performed one after the other. Thus, in view of Lemma 4.1, a total of O(log log n) bits suffices to implement these executions. The index i of the outer loop grows until it is large enough so that prime(i) achieves rendezvous in a path of length O(nℓ). Thus i ≤ log(nℓ), and hence O(log log(nℓ)) = O(log log n) bits suffice to encode this index. This completes the proof of Theorem 4.1.

4.2 The lower bound Ω(log log n)

In this section we prove the lower bound Ω(log log n) on the size of memory required for rendezvous with simultaneous start in the n-node line.
Theorem 4.2 Rendezvous with simultaneous start in the n-node line requires agents with Ω(log log n) bits of memory. The rest of the section is dedicated to the proof of Theorem 4.2. For proving the theorem, note that we can restrict ourselves to lines whose edges are properly colored 1 and 2, so that the port numbers at the two extremities of an edge colored i are set to i. In this setting, the transition function of an agent in a line is π : S × {1, 2} → S; it describes the transition that occurs when an agent enters a node of degree d ∈ {1, 2} in state s ∈ S. In this situation, the agent changes its state to state s' = π(s, d), and performs the action λ(s'). The fact that one does not need to specify the incoming port number is a consequence of the edge-coloring, which implies that whenever an agent leaves a node by port i, it enters the next node by port i too. Let us fix two identical agents A and A', with finite state set S, and transition function π. Let π' : S → S be the transition function applied at nodes of degree 2 of the edge-colored line, i.e., π'(s) = π(s, 2) for any s ∈ S. To π' is associated its transition digraph, whose nodes are the states in S, and in which there is an arc from s to s' if and only if s' = π'(s). This digraph is composed of a certain number of connected components, say r, each of them of a similar shape, that is, a circuit with inward trees rooted at the nodes of the circuit. Let C 1 , . . . , C r be the r circuits corresponding to the r connected components of the transition digraph, and let γ be the least common multiple of the numbers of arcs of these circuits, i.e., γ = lcm(|C 1 |, . . . , |C r |). We prove that there is a line of length proportional to 2γ + |S| in which A and A' do not rendezvous. First, observe that if A and A' cannot go at arbitrarily large distance from their starting positions, say they go at maximum distance D, then they cannot rendezvous in a line of length 4D + 4.
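The circuit structure of the transition digraph of π' and the parameter γ can be computed mechanically. The sketch below uses an assumed example transition function, not one from the paper:

```python
from math import gcd

def circuit_lengths(pi, states):
    """Circuit lengths of the functional digraph s -> pi[s]: each
    connected component is one circuit with inward trees hanging off it."""
    lengths, seen = set(), set()
    for s in states:
        if s in seen:
            continue
        path, t, cur = {}, 0, s
        while cur not in path and cur not in seen:
            path[cur] = t
            cur, t = pi[cur], t + 1
        if cur in path:                    # closed a new circuit
            lengths.add(t - path[cur])
        seen.update(path)
    return lengths

# Example pi' (assumed): two components, with circuits of lengths 3 and 2.
pi = {0: 1, 1: 2, 2: 0, 3: 2, 4: 5, 5: 4}
ls = circuit_lengths(pi, range(6))
gamma = 1
for c in ls:
    gamma = gamma * c // gcd(gamma, c)     # gamma = lcm of circuit lengths
print(sorted(ls), gamma)  # -> [2, 3] 6
```

Here state 3 lies on an inward tree feeding the circuit 0 → 1 → 2 → 0, so only the two circuit lengths contribute to γ = lcm(3, 2) = 6.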
Indeed, if the initial positions are two nodes at distance 2D + 1, and at distance at least D + 1 from the extremities of the line, then the ranges of activity of the two agents are disjoint, and thus they cannot meet (one edge is added at one extremity of the line to break the symmetry of the initial configuration). Thus from now on, we assume that both agents can go at arbitrarily large distance from their starting positions. For the purpose of establishing our result, place the two agents A and A' on two adjacent nodes v A and v A' of an infinite line (whose edges are properly colored). Let e = {v A , v A' } be the edge linking these two nodes. • Let t 0 be large enough so that A is at distance at least 2γ + |S| from its starting position after t 0 steps. Since t 0 > |S|, agent A at time t 0 is in some state s i ∈ C i for some i ∈ {1, . . . , r}. In fact, since |C i | divides γ, agent A has fully executed C i at least twice. We define the notion of extreme position for a circuit C. Let s, π'(s), . . . , π'^(k)(s) be a circuit, with s = π'^(k)(s). Assume that agent A starts in state s from node u 0 at distance at least k + 1 from both extremities of the line. After having performed C exactly once, i.e., after k steps, agent A is at some node u k , back in state s. Let u 0 , u 1 , u 2 , . . . , u k be the k + 1 not necessarily distinct nodes visited by A while executing C. The extreme position for C starting in state s is the node u j satisfying dist(u 0 , u j ) = dist(u 0 , u k ) + dist(u k , u j ), and dist(u 0 , u j ) = max 0≤ℓ≤k dist(u 0 , u ℓ ). Let u i be the extreme position for C i starting in s i , and let us define the following parameters: • τ is the first time step among the |C i | steps after step t 0 at which A reaches u i . • x is the distance of agent A at time τ from its original position, i.e., x = dist(u i , v A ); • τ' = τ + 2γ; • x' is the distance of agent A at time τ' from its original position v A .
Note that, by symmetry of the port labeling, and from the fact that A and A' are identical and operate in an infinite line, the two agents are on the two different sides of edge e at time τ. Note also that, between times τ and τ', agent A keeps on going further away from its original position, by repeating the sequence of actions determined by the circuit C i . Hence x' ≥ x. Actually, we have x' > x. We can therefore consider the following construction. Initial configuration of the agents. Let L be the properly 2-edge-colored line of length x + x' + 1, formed by x edges, followed by one edge called e, and followed by x' edges. The two agents A and A' are placed at the two extremities v A and v A' of e, the same way they were placed at the two extremities of e in the infinite line used to define x and x'. Since x ≠ x', the initial positions of the agents are not perfectly symmetrizable. Nevertheless, we prove that the two agents never meet in L, and thus rendezvous is not accomplished. The adversary imposes no delay between the starting times of the agents, i.e., they both start acting simultaneously from their respective initial positions. One ingredient used for proving that the two agents do not rendezvous is the following general result, that we state as a lemma for further reference. Lemma 4.4 (Parity Lemma) Consider two (not necessarily identical) agents initially at odd distance in a tree T, that start acting simultaneously in T. Let t ≥ 1. Assume that one agent stays idle q times in the time interval [1, t], while the other one stays idle q' times in the same time interval. If |q − q'| is even, then the two agents are at odd distance at step t. Proof. At any step, if one agent moves while the other one stays idle, then the parity of their distance changes. On the other hand, if both agents move or both stay idle, then the parity of their distance remains unchanged. Let a be the number of steps in [1, t] when both agents were idle simultaneously.
Then the parity of the inter-agent distance changes exactly (q − a) + (q' − a) times in the time interval [1, t]. Since |q − q'| is even, q + q' is also even, and thus (q − a) + (q' − a) is even too. Thus the parity of the inter-agent distance is the same at time 1 and at time t. The Parity Lemma enables us to establish the following. Lemma 4.5 The two agents A and A' do not meet during the first τ steps. Proof. Since the agents perform the same sequence of actions in the time interval [1, τ], we get that, for any t ≤ τ, the two agents have remained idle the same number of times in the time interval [1, t], and thus, by the Parity Lemma (with q = q'), they are at odd distance at step t, since they originally started at distance 1. In other words, the two agents remain permanently at odd distance during the time interval [1, τ]. Thus they cannot meet during this time interval. At step τ, the behavior of the two agents becomes different. Indeed, agent A is reaching one extremity of L, while A' is visiting a degree-2 node. We analyze the states of the two agents when they reach extremities of L during the execution of their protocol. Assume that agent A reaches the extremities of L at least k ≥ 1 times. Let σ j be the state of agent A when it reaches any of the two extremities of L for the jth time, 1 ≤ j ≤ k. Lemma 4.6 Agent A' reaches the extremities of L at least k times. Moreover, if σ' j is the state of agent A' when it reaches any of the two extremities of L for the jth time, 1 ≤ j ≤ k, then σ' j = σ j . Proof. First, let us consider the case k = 1. After time τ (i.e., after the time when A reaches one extremity of L, in state σ 1 ), agent A' keeps on repeating the execution of circuit C i . This leads A' to eventually reach the other extremity of L.
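The Parity Lemma can be exercised on a toy model. The sketch below samples random move/idle schedules for two walkers on the integer line (an assumed setup, only for illustration) and checks the lemma's conclusion: whenever the idle counts q and q' differ by an even number, two walkers that started at odd distance are still at odd distance.

```python
import random

def parity_check(trials=200, seed=0):
    """Sample random schedules; return False if the Parity Lemma's
    conclusion ever fails, True otherwise."""
    rng = random.Random(seed)
    for _ in range(trials):
        a, b = 0, 5                      # odd initial distance
        qa = qb = 0                      # idle counts q and q'
        for _ in range(rng.randint(1, 50)):
            ma = rng.choice([-1, 0, 1])  # 0 means: stay idle this round
            mb = rng.choice([-1, 0, 1])
            qa += ma == 0
            qb += mb == 0
            a += ma
            b += mb
        if (qa - qb) % 2 == 0 and (a - b) % 2 == 0:
            return False                 # even idle gap but even distance
    return True

print(parity_check())  # -> True
```

The invariant holds deterministically, for the reason given in the proof: the distance parity flips exactly when one walker moves and the other idles.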
Recall that we have considered the behavior of A after time t 0 when A was in state s i ∈ C i , and that τ was defined as the first time step among the |C i | steps after step t 0 at which A reaches the extreme position u i of C i starting at s i . Since τ' = τ + 2γ, and since |C i | divides γ, we get that agent A' is in state σ 1 at time τ'. Moreover, since |C i | divides γ, A' reaches the extreme position of C i (the mirror image of u i ) at time τ', and therefore time τ' is the first time when A' is at distance x' from e. Therefore σ' 1 = σ 1 , and the lemma holds for k = 1. For k > 1, the proof is by induction on the number of times j agent A reaches an extremity of L, j = 1, . . . , k. By the previous arguments, the result holds for j = 1. When agent A reaches an extremity of L for the jth time, it is in state σ j . By the induction hypothesis, when agent A' reaches an extremity of L for the jth time, it is also in state σ' j = σ j . Therefore, the configuration for A and A' between two consecutive hits of an extremity of L is actually symmetric. As a consequence, σ' j+1 = σ j+1 , and the lemma holds. After time τ the walks of the agents can be decomposed in two different types of subwalks. A traversal period for an agent is the subwalk between two consecutive hits of two different extremities of L by this agent. A bouncing period for an agent is a subwalk (possibly empty) performed between two consecutive traversal periods. Roughly, a bouncing period for an agent is a walk during which the agent starts from one extremity of L and repeats bouncing at (i.e., leaving and going back to) that extremity until it eventually starts the next traversal period. Globally, an agent starts from its original position, performs some initial steps (τ for A, and τ' for A'), and then alternates between bouncing periods and traversal periods. These periods are not synchronous between the two agents because there is a delay of 2γ between them.
Nevertheless, by Lemma 4.6, if one agent bounces at one extremity of L during its kth bouncing period, then the other agent bounces at the other extremity of L during its kth bouncing period. Similarly, if one agent traverses L during its kth traversal period, then the other agent traverses L in the opposite direction during its kth traversal period. In fact, Lemma 4.6 guarantees that the two agents perform symmetric actions with a delay of 2γ, alternating bouncing at the two different extremities of L, and traversing L in two opposite directions. The following lemma holds, by establishing that whenever one agent is in a bouncing period, the two agents are far apart. Lemma 4.7 The two agents A and A' do not meet whenever one of them is in a bouncing period. Proof. There is a delay of 2γ between the two agents. During such a period of time, an agent can travel a distance at most 2γ. Also, during its bouncing period, an agent cannot go at distance more than |S| from the extremity of the line where it is bouncing. On the other hand, by the definitions of t 0 and τ > t 0 , we have x > 2γ + |S|, and thus x' > 2γ + |S| as well. Therefore, when one of the agents is in a bouncing period, the distance between the two agents is at least 2γ + |S|, and thus they cannot meet. The following lemma holds, by using the fact that γ is the least common multiple of the circuit lengths in the transition digraph of the agents, and by applying the Parity Lemma. Lemma 4.8 The two agents A and A' do not meet when both of them are in a traversal period. Proof. When both agents are in a traversal period, they started their period in the same state, from Lemma 4.6. Hence, they are eventually both performing the same circuit of states C i . This occurs after the same initial time of duration at most |S|. This time corresponds to the time it takes to reach the circuit C i from the initial state at which the agents started their traversal period.
As we already observed in the proof of Lemma 4.7, since x' > x > 2γ + |S|, the two agents are far apart during the transition period before both of them have entered the circuit C i executed during the considered traversal. Thus we can now assume that the two agents are performing C i , traversing the line in two opposite directions. We prove that they cross along an edge, and hence they do not meet. Since the delay between the two agents is 2γ and since γ is a multiple of |C i | for any i ∈ {1, . . . , r}, the delay is an even multiple of the length of the circuit |C i | performed at this traversal. As a consequence, at any step of their traversal periods, the number of times one agent was idle when the other was not is even. The Parity Lemma with |q − q'| = 2γ/|C i | then ensures that the distance between the two agents remains odd during the whole traversal period. Thus they do not meet. Proof of Theorem 4.2. The two agents start an initial period that lasts τ steps. By Lemma 4.5 they do not meet during this period. Then the two agents alternate between bouncing periods and traversal periods. By Lemma 4.7, they do not meet when one of the two agents is in a bouncing period. When the two agents are in a traversal period, Lemma 4.8 guarantees that they do not meet. Hence the two agents never meet, in spite of starting from non perfectly symmetrizable positions, and thus they do not rendezvous in L. By the construction of the line L and the setting of γ, we get that L is of length O(|S|^|S|). Therefore, rendezvous with simultaneous start in lines of size at most n requires agents with at least Ω(log log n) memory bits. The lower bound Ω(log ℓ) In this section we prove that rendezvous with simultaneous start in trees with ℓ leaves requires Ω(log ℓ) bits of memory, even in the class of trees with maximum degree 3.
Together with the lower bound of Ω(log log n) on memory size needed for rendezvous in the n-node line established in Theorem 4.2, this result proves that our upper bound O(log ℓ + log log n) from Section 4.1 cannot be improved even for trees of maximum degree 3. Theorem 4.3 For infinitely many integers ℓ, there exists an infinite family of trees with ℓ leaves, for which rendezvous with simultaneous start requires Ω(log ℓ) bits of memory. Proof. Consider an integer ℓ = 2i, for any even i. Consider an (i+1)-node path with a distinguished endpoint called the root. To every internal node x of the path attach either a new leaf, or a new node y of degree 2 with a new leaf z attached to it. There are 2^(i−1) = 2^(ℓ/2−1) possible resulting non-isomorphic rooted trees. Call them side trees. Note that non-isomorphic is meant here without the port-preserving clause: there are that many rooted trees which cannot be mapped to each other by any isomorphism, not only by any isomorphism preserving port numbering. Fix an arbitrary port labeling in every side tree. For any pair of side trees T' and T'' and for any positive even integer m, consider the tree T consisting of side trees T' and T'' whose roots are joined by a path of length m + 1 (i.e., there are m added nodes of degree two). Ports at the added nodes of degree two are labeled as follows: both ports at the central edge have label 0, and ports at both ends of any other edge of the line have the same label 0 or 1. (This corresponds to a 2-edge-coloring of the line.) Call any tree resulting from this construction a two-sided tree. Any such tree has ℓ leaves and maximum degree 3. For any two-sided tree consider initial positions of the agents at nodes u and v of the joining path adjacent to the roots of its side trees. Consider agents with k bits of memory (thus with K = 2^k states).
A tour of a side tree associated with an initial position (u or v) is the part of the trajectory of the agent in this side tree between consecutive visits of the associated initial position. Observe that the maximum duration D of a tour is smaller than K · (3i). Indeed, the number of nodes in a side tree is at most 3i − 1, hence the number of possible pairs (state, node of the side tree) is at most K · (3i − 1). A tour of longer duration than this value would cause the agent to leave the same node twice in the same state, implying an infinite loop. Such a tour could not come back to the initial position. For a fixed agent with the set S of states and a fixed side tree, we define the function p : S → S as follows. Let s be the state in which the agent starts a tour. Then p(s) is the state in which the agent finishes the tour. Now we define the function q : S → S × {1, . . . , D}, called the behavior function, by the formula q(s) = (p(s), t), where t is the number of rounds to complete the tour when starting in state s. The number of possible behavior functions is at most F = (KD)^K. A behavior function depends on the side tree for which it is constructed. Suppose that k ≤ (1/3) log ℓ. We have D < 3Ki = (3/2)Kℓ, hence KD < (3/2)K^2 ℓ. Hence we have log K + log log(KD) ≤ k + log log((3/2)K^2 ℓ) ≤ k + 2 + log k + log log ℓ, which is smaller than (2/3) log ℓ for sufficiently large k. It follows that K log(KD) < ℓ^(2/3) < ℓ/2 − 1, which implies F = (KD)^K < 2^(ℓ/2−1). Thus the number of possible behavior functions is strictly smaller than the total number of side trees. It follows that there are two side trees T 1 and T 2 for which the corresponding behavior functions are equal. Consider two instances of the rendezvous problem for any length m + 1 of the joining line, where m is a positive even integer: one in which both side trees are equal to T 1 , and the other for which one side tree is T 1 and the other is T 2 .
Rendezvous is impossible in the first instance because in this instance the initial positions of the agents form a symmetric pair of nodes with respect to the given port labeling. Consider the second instance, in which the initial positions of the agents do not form a perfectly symmetrizable pair. Because of the symmetry of the labeling of the joining line, the agents cannot meet inside any of the side trees. Indeed, when one of them is in one tree, the other one is in the other tree. Since the behavior function associated with side trees T 1 and T 2 is the same, the agents leave these trees always at the same time and in the same state. Hence they cannot meet on the line, in view of its odd length and symmetric port labeling. This implies that they never meet, in spite of initial positions that are not perfectly symmetrizable. Hence rendezvous in the second instance requires Ω(log ℓ) bits of memory.
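The pigeonhole step of the proof above (fewer behavior functions than side trees once k ≤ (1/3) log ℓ) can be checked numerically. The sample values of ℓ below are assumed for illustration; the inequality only kicks in for sufficiently large ℓ, as the proof states.

```python
import math

def behavior_bound(l):
    """For k = floor((log2 l)/3) memory bits: the number of behavior
    functions F = (K*D)**K versus the number 2**(l/2 - 1) of side trees."""
    k = math.floor(math.log2(l) / 3)
    K = 2 ** k                      # number of states
    D = (3 * K * l) // 2            # upper bound on the tour duration
    return (K * D) ** K, 2 ** (l // 2 - 1)

for l in [256, 1024, 4096]:
    F, trees = behavior_bound(l)
    print(l, F < trees)             # -> all True
```

For ℓ = 1024, for instance, K log(KD) is only a few hundred while ℓ/2 − 1 = 511, so F is astronomically smaller than the number of side trees, forcing two side trees with the same behavior function.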
11,632
1102.0467
2950424720
The aim of rendezvous in a graph is meeting of two mobile agents at some node of an unknown anonymous connected graph. In this paper, we focus on rendezvous in trees, and, analogously to the efforts that have been made for solving the exploration problem with compact automata, we study the size of memory of mobile agents that permits to solve the rendezvous problem deterministically. We assume that the agents are identical, and move in synchronous rounds. We first show that if the delay between the starting times of the agents is arbitrary, then the lower bound on memory required for rendezvous is Omega(log n) bits, even for the line of length n. This lower bound meets a previously known upper bound of O(log n) bits for rendezvous in arbitrary graphs of size at most n. Our main result is a proof that the amount of memory needed for rendezvous with simultaneous start depends essentially on the number L of leaves of the tree, and is exponentially less impacted by the number n of nodes. Indeed, we present two identical agents with O(log L + loglog n) bits of memory that solve the rendezvous problem in all trees with at most n nodes and at most L leaves. Hence, for the class of trees with polylogarithmically many leaves, there is an exponential gap in minimum memory size needed for rendezvous between the scenario with arbitrary delay and the scenario with delay zero. Moreover, we show that our upper bound is optimal by proving that Omega(log L + loglog n) bits of memory are required for rendezvous, even in the class of trees with degrees bounded by 3.
A natural extension of the rendezvous problem is that of gathering @cite_10 @cite_24 @cite_2 @cite_6, when more than two agents have to meet in one location. In @cite_12 the authors considered rendezvous of many agents with unique labels.
{ "abstract": [ "If two searchers are searching for a stationary target and wish to minimize the expected time until both searchers and the lost target are reunited, there is a trade off between searching for the target and checking back to see if the other searcher has already found the target. This note solves a non-linear optimization problem to find the optimal search strategy for this problem.", "", "Suppose that @math players are placed randomly on the real line at consecutive integers, and faced in random directions. Each player has maximum speed one, cannot see the others, and doesn't know his relative position. What is the minimum time @math required to ensure that all the players can meet together at a single point, regardless of their initial placement? We prove that @math , @math , and @math is asymptotic to @math We also consider a variant of the problem which requires players who meet to stick together, and find in this case that three players require @math time units to ensure a meeting. This paper is thus a minimax version of the rendezvous search problem, which has hitherto been studied only in terms of minimizing the expected meeting time.", "We consider a collection of robots which are identical (anonymous), have limited visibility of the environment, and no memory of the past (oblivious); furthermore, they are totally asynchronous in their actions, computations, and movements. We show that, even in such a totally asynchronous setting, it is possible for the robots to gather in the same location in finite time, provided they have a compass.", "We consider the problem of a rendezvous (coordinated meeting) of distributed units (intelligent agents in network computing or autonomous robots). The environment is modeled as a graph, the node labeling of which may not be “common knowledge” to the units, due to protocol and naming convention mismatch, machine faults, status change, or even hostility of the environment. 
Meeting of such units is likely to be a basic procedure in the area of distributed “intelligent agent” computing and in the domain of coordinated tasks of autonomous robots. The crux of the problem which we present here and initiate research on, is the breaking of potential symmetry while the units dynamically move. The units are more intelligent (computing power, control and memory) than simple (traditional) pebbles or tokens, and our algorithms will make use of this capability for speeding up the convergence to a common place (e.g., we will allow units to meet exchange information and depart). We consider both randomized protocols and deterministic (but non-uniform) protocols; the problem is unsolvable by a uniform deterministic algorithm. The deterministic procedure employs ideas from design theory and achieves O(n) time, while the randomized methods are based on random walks and may achieve O(n) time where k is the number of agents." ], "cite_N": [ "@cite_6", "@cite_24", "@cite_2", "@cite_10", "@cite_12" ], "mid": [ "1975690872", "2010875998", "2007267487", "1635699204", "1547036184" ] }
Delays induce an exponential memory gap for rendezvous in trees
We first show that if the delay between the starting times of the agents is arbitrary, then the lower bound on memory required for rendezvous is Ω(log n) bits, even for the line of length n. This lower bound matches the upper bound from [14] valid for arbitrary graphs. Our main positive result is a proof that the amount of memory needed for rendezvous with simultaneous start in trees depends essentially on the number ℓ of leaves of the tree, and is exponentially less impacted by the number n of nodes. Indeed, we show two identical agents with O(log ℓ + log log n) bits of memory that solve the rendezvous problem in all trees with n nodes and ℓ leaves. Hence, for the class of trees with polylogarithmically many leaves, there is an exponential gap in minimum memory size needed for rendezvous between the scenario with arbitrary delay and the scenario with delay zero. Moreover, we show that the size O(log ℓ + log log n) of memory needed for rendezvous is optimal, even in the class of trees with degrees bounded by 3. More precisely, we prove two lower bounds. First, for infinitely many integers ℓ, we show a class of arbitrarily large trees with maximum degree 3 and with ℓ leaves, for which rendezvous with simultaneous start requires Ω(log ℓ) bits of memory. Second, we show that Ω(log log n) bits of memory are required for rendezvous with simultaneous start in the line of length n. These two bounds together imply that our upper bound O(log ℓ + log log n) cannot be improved, even for the class of trees with maximum degree 3. Bibliographic note Note that our definition of solving the rendezvous problem is stronger than the definition used in the conference versions [24,25] of this paper. Indeed, rendezvous should occur for any port labeling. As opposed to what is claimed in [25], the exponential gap described in this paper does not carry over to the case where the ability of achieving rendezvous may depend on the port labeling.
More precisely, it was claimed in [25] that the positive result concerning the size O(log ℓ + log log n) of memory for which rendezvous with simultaneous start is possible holds for arbitrary initial positions that are not symmetric with respect to a given port labeling µ of the tree in which the agents operate. This result is in fact incorrect in this formulation. Indeed, it has been recently proved in [15] that, for some port labeling of a line and some initial positions that are not symmetric with respect to this labeling, rendezvous with simultaneous start requires a logarithmic number of bits, while ℓ = 2 for the line. However, our positive result holds for agents starting from arbitrary non perfectly symmetrizable initial positions. The algorithm and its analysis remain similar as in [25]. (The exact place where the provided arguments do not extend to the case where the ability of achieving rendezvous may depend on the port labeling will be pointed out to the reader.) On the other hand, all negative results from [24] and [25] hold in the present setting as well. Framework and Preliminaries Model We consider mobile agents traveling in trees with locally labeled ports. The tree and its size are a priori unknown to the agents. We first define precisely an individual agent. An agent is an abstract state machine A = (S, π, λ, s 0 ), where S is a set of states among which there is a specified state s 0 called the initial state, π : S × Z^2 → S, and λ : S → Z. Initially the agent is at some node u 0 in the initial state s 0 ∈ S. The agent performs actions in rounds measured by its internal clock. Each action can be either a move to an adjacent node or a null move resulting in remaining in the currently occupied node. State s 0 determines an integer λ(s 0 ). If λ(s 0 ) = −1 then the agent makes a null move (i.e., remains at u 0 ). If λ(s 0 ) ≥ 0 then the agent leaves u 0 by port λ(s 0 ) modulo the degree of u 0 .
When entering a node v in state s ∈ S, the behavior of the agent is as follows. It reads the number i of the port through which it entered v, and the degree d of v. The pair (i, d) ∈ Z^2 is an input symbol that causes the transition from state s to state s' = π(s, (i, d)). If the previous move of the agent was null (i.e., the agent stayed at node v in state s), then the pair (−1, d) ∈ Z^2 is the input symbol read by the agent, which causes the transition from state s to state s' = π(s, (−1, d)). In both cases s' determines an integer λ(s'), which is either −1, in which case the agent makes a null move, or a non-negative integer indicating a port number by which the agent leaves v (this port is λ(s') mod d). The agent continues moving in this way, possibly infinitely. Since we consider the rendezvous problem for identical agents, we assume that the agents are copies A and A' of the same abstract state machine A, starting at two distinct nodes v A and v A' , called the initial positions. We will refer to such identical machines as a pair of agents. It is assumed that the internal clocks of a pair of agents tick at the same rate. The clock of each agent starts when the agent starts executing its actions. Agents start from their initial positions with delay θ ≥ 0, controlled by an adversary. This means that the later agent starts executing its actions θ rounds after the first agent. Agents do not know which of them is first and what is the value of θ. We seek agents with small memory, measured by the number of states of the corresponding automaton, or equivalently by the number of bits on which these states are encoded. An automaton with K states requires Θ(log K) bits of memory. We say that a pair of agents solves the rendezvous problem with arbitrary delay (resp.
with simultaneous start) in a class of trees if, for any tree in this class, for any port labeling of this tree, and for any initial positions that are not perfectly symmetrizable, both agents are eventually in the same node of the tree in the same round, regardless of the starting rounds of the agents (resp. provided that they start in the same round). Preliminary results Consider any tree T and the following sequence of trees constructed recursively: T 0 = T, and T i+1 is the tree obtained from T i by removing all its leaves. T' = T j for the smallest j for which T j has at most two nodes. If T' has one node, then this node is called the central node of T. If T' has two nodes, then the edge joining them is called the central edge of T. A tree T with a port labeling µ is called symmetric if there exists a non-trivial automorphism f of the tree (i.e., an automorphism f such that f(u) ≠ u, for some u ∈ V) preserving this port labeling. If a tree with port numbers has a central node, then it cannot be symmetric. We define the "basic walk" starting at node v as the walk resulting from an agent performing the following actions: leave node v by port 0, and, perpetually, whenever entering a degree-d node by port i ∈ {0, . . . , d − 1}, leave that node by port (i + 1) mod d. Of course, a basic walk can be bounded to perform for t steps (instead of perpetually), in which case we refer to a basic walk of length t. Note that a basic walk of length 2(n − 1) in an n-node tree returns to its starting node. The following statement is an easy consequence of the techniques and results from [27]. Fact 2.1 There exists an agent accomplishing the following task in an arbitrary tree: using O(log m) bits of memory, it finds the number m of nodes in the tree, returns and stops at its initial position, and detects whether the tree has a central node, or has a central edge but is not symmetric, or has a central edge and is symmetric.
Moreover, • if the tree has a central node x, then the agent finds the minimum number of steps of a basic walk from its initial position to the central node x; • if the tree has a central edge e = {x, y} but is not symmetric, then, for every initial position, the agent finds the minimum number of steps of a basic walk from its initial position to the same extremity x of the central edge; moreover, it knows which port at this extremity corresponds to the central edge; • if the tree is symmetric, then the agent finds the minimum number of steps of a basic walk from its initial position to the farthest extremity of the central edge; moreover, it knows which port at this extremity corresponds to the central edge. In the sequel, the procedure accomplishing the above task starting at node v will be called Procedure Explo(v). Rendezvous with arbitrary delay It was proved in [14] that rendezvous with arbitrary delay can be accomplished in arbitrary n-node graphs using O(log n) bits of memory. On the other hand, observe that rendezvous requires Ω(log n) bits of memory in arbitrarily large trees with 2n + 1 nodes and maximum degree n. The lower bound examples are trees T n consisting of two nodes u and v of degree n, both linked to a common node w, and to n − 1 leaves. However, these trees have linear degree and the reason for the logarithmic memory requirement is simply that agents with smaller memory are incapable of having an output function λ with a range of linear size, and thus the adversary can place one agent in node u, the other in a leaf adjacent to v, and distribute ports in such a way that none of the agents can ever get to node w, which makes rendezvous infeasible, in spite of non perfectly symmetrizable initial positions. This example leaves open the question whether rendezvous with sub-logarithmic memory is possible, e.g., in all trees with constant maximum degree.
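The agent model and the basic walk defined in the preliminaries can be sketched in code. The tree encoding below is assumed for illustration (each node stores a port-indexed list of (neighbor, entry port) pairs), and the step function is a simplified rendering of the semantics above: transition on the pair (incoming port, degree), then perform a null move or leave by the indicated port.

```python
def run_agent(adj, start, pi, lam, s0, rounds):
    """One agent: each round, transition on (incoming port, degree),
    then either make a null move (lam = -1) or leave by port lam mod d."""
    u, s, in_port = start, s0, -1     # -1 encodes "no incoming port"
    trace = [u]
    for _ in range(rounds):
        d = len(adj[u])
        s = pi(s, in_port, d)
        out = lam(s)
        if out == -1:                 # null move: stay; next input is (-1, d)
            in_port = -1
        else:
            u, in_port = adj[u][out % d]
        trace.append(u)
    return trace

# The basic-walk automaton (enter by port i, leave by (i + 1) mod d)
# needs only the incoming port as its state:
pi = lambda s, i, d: (i + 1) % d if i != -1 else 0
lam = lambda s: s

# adj[u][p] = (neighbor, entry port): a 3-node path, n = 3.
path3 = {0: [(1, 0)], 1: [(0, 0), (2, 0)], 2: [(1, 1)]}
print(run_agent(path3, 0, pi, lam, 0, 4))  # -> [0, 1, 2, 1, 0]
```

After 2(n − 1) = 4 steps the basic walk is back at its starting node, as stated in the preliminaries.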
It turns out that if the delay is arbitrary, this is not the case: rendezvous requires logarithmic memory even for the class of lines.

Theorem 3.1 Rendezvous with arbitrary delay in the n-node line requires agents with Ω(log n) bits of memory.

Proof. Let k be the number of memory bits of the agent and K = 2^k be its number of states. Place one agent at some node u of the infinite line in which each edge has the same port number at its two extremities. In any interval of length K + 1 there exist two nodes at which the agent is in the same state. Let x_1 be the first node of the trajectory of the agent at which this happens, and let s be the state of the agent at x_1. Let x_2 be the second node of the trajectory of the agent at which the agent is in state s. Let δ be the distance between u and x_1, and let d be the distance between x_1 and x_2. We construct the following instance of the rendezvous problem (see Fig. 1). The line is of length 8(K + 1) + 1. Let e be the central edge of this line. Assign number 0 to the ports leading to edge e from both its extremities, and assign the other port labels so that the two ports of every edge get the same number, 0 or 1. (This is equivalent to a 2-edge-coloring of the line.) Let z be the endpoint of the line for which x_1 is between z and x_2. Let y_1 and y_2 be the symmetric images of x_1 and x_2, respectively, according to the axis of symmetry of the line. Let y_0 be the node distinct from y_2 at distance d from y_1. Let v be the node at distance δ from y_0, such that the vectors [x_1, u] and [y_0, v] have opposite directions. The other agent is placed at node v. Let t_1 be the number of rounds that the agent starting at u takes to reach x_1 in state s. Let t_2 be the number of rounds that the agent starting at v takes to reach y_1 in state s. Let θ = t_2 − t_1. The adversary delays the agent starting at u by θ rounds.
Hence the agent starting at u reaches x_1 at the same time t and in the same state as the agent starting at v reaches y_1. The nodes x_1 and y_1 are symmetric positions, hence rendezvous is impossible after time t. Before time t, the two agents could not have met either, since their ranges of activity up to this time are disjoint. Together with the logarithmic upper bound from [14], the above result completely solves the problem of determining the minimum memory of the agents permitting rendezvous with arbitrary delay. Hence in the rest of the paper we concentrate on rendezvous with simultaneous start, thus assuming that the delay θ = 0.

[Figure 1: the line of length 8(K + 1) + 1, with central edge e splitting it into four segments of length 2(K + 1); the positions z, x_1, x_2, u, y_2, y_1, y_0, v, and the distances d and δ, are as defined above.]

4 Rendezvous with simultaneous start

4.1 Upper bound

It turns out that the size of memory needed for rendezvous with simultaneous start depends on two parameters of the tree: the number n of nodes and the number ℓ of leaves. In fact we show that rendezvous in trees with n nodes and ℓ leaves can be done using only O(log ℓ + log log n) bits of memory. Thus, for trees with polylogarithmically many leaves, O(log log n) bits of memory are enough. In view of Theorem 3.1, this shows an exponential gap in the minimum memory size needed for rendezvous between the scenarios with arbitrary delay and with delay zero.

Theorem 4.1 There is a pair of identical agents solving rendezvous with simultaneous start in all trees, and using, for any integers n and ℓ, O(log ℓ + log log n) bits of memory in trees with at most n nodes and at most ℓ leaves.

The rest of the section is dedicated to the proof of Theorem 4.1. Let T be any tree, and let v and v' be the initial positions of the two agents in T. Let T̄ be the contraction of T, that is, the tree obtained from T by replacing every path in T joining two nodes of degree different from 2 by an edge (the ports of this edge correspond to the ports at both extremities of the contracted path). Notice that if T has ℓ leaves, then its contraction T̄ has at most 2ℓ − 1 nodes.
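The contraction can be computed by collapsing every maximal path whose internal nodes have degree 2, as in the following sketch (our own adjacency-set encoding, not the paper's code):

```python
# Sketch of the contraction of a tree T: each maximal path whose internal
# nodes all have degree 2 is replaced by a single edge.

def contraction(adj):
    """adj: {node: set of neighbors} of a tree; returns the contracted tree."""
    keep = {u for u, nbrs in adj.items() if len(nbrs) != 2}
    cadj = {u: set() for u in keep}
    for u in keep:
        for first in adj[u]:
            prev, cur = u, first
            while cur not in keep:                       # walk through degree-2 nodes
                prev, cur = cur, next(w for w in adj[cur] if w != prev)
            cadj[u].add(cur)
    return cadj

# A spider with center 0 and three legs of lengths 1, 2 and 3 (7 nodes, 3 leaves).
adj = {0: {1, 2, 4}, 1: {0}, 2: {0, 3}, 3: {2}, 4: {0, 5}, 5: {4, 6}, 6: {5}}
cadj = contraction(adj)
print(sorted(cadj))       # only the nodes of degree != 2 survive
print(sorted(cadj[0]))    # the center becomes adjacent to the three leaves
```

The example has ℓ = 3 leaves and its contraction has 4 nodes, consistent with the 2ℓ − 1 bound stated above.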
Our rendezvous algorithm uses Procedure Explo, defined in Section 2, as a subroutine. More precisely, each of the two agents executes procedure Explo in T̄, ignoring the degree-2 nodes. That is, protocol Explo is modified so that whenever an agent enters a degree-2 node through port i ∈ {0, 1} in some state s, it leaves that node in the next round by port (i + 1) mod 2, in the same state s. In fact, there are some subtle additional details in the modified version of Explo when the initial node is of degree 2. Specifically, let s_0 be the initial state of an agent executing Explo. Our modified agent starts in an additional state s*_0. If the initial node v has degree different from 2, then the agent enters state s_0 and starts Explo(v), ignoring the degree-2 nodes. Otherwise, the agent remains in state s*_0 and leaves the initial node through port 0. The agent then performs a basic walk, remaining in state s*_0, until it enters a node of degree 1 (i.e., a leaf of the tree T). At such a node, denoted by v_leaf, the agent enters state s_0 and starts Explo(v_leaf), ignoring the degree-2 nodes. We call Explo-bis the procedure Explo modified in this way. Observe that, in trees with no nodes of degree 2, the two protocols Explo and Explo-bis are executed identically. Hence, protocols Explo and Explo-bis are executed identically in T̄. Formally, for an initial position v, let us define v̄ = v if deg(v) ≠ 2, and v̄ = v_leaf otherwise. Then, the following holds.

Claim 4.1 Once an agent starting from some node v has reached node v̄, the states at nodes of degree different from 2 of the agent performing Explo-bis in T are identical to the states of an agent performing Explo in T̄ starting from node v̄.

Using this claim, rendezvous in T is achieved as follows.

Stage 1. Each of the two agents executes procedure Explo-bis from their respective initial positions v and v'. After having completed Explo-bis, each agent knows whether the contraction tree T̄ is symmetric or not.
(It is non-symmetric if either there is a central node, or there is a central edge and the two port-labeled trees obtained by removing the central edge of T̄ are not isomorphic; the isomorphism must preserve both the structure of the trees and the port labelings.)

Stage 2. The nature of the second stage differs according to whether T̄ is symmetric or not. In the non-symmetric case, the rendezvous protocol uses Fact 2.1, which states that the two agents performing Procedure Explo will eventually identify a single node x of T̄. Node x is identified by the number of steps of the basic walk performed in T̄ to reach that node from the initial position. Notice that, although Explo ensures (by Fact 2.1) that each agent returns to its initial position v after completing the procedure, Claim 4.1 guarantees only that the agent applying Explo-bis returns to node v̄. Nevertheless, this is sufficient, since the length of the basic walk reaching x is the length of the one starting from node v̄, ignoring degree-2 nodes. Note that this length does not exceed twice the number of edges of T̄, and thus it can be encoded on O(log ℓ) bits. Therefore, each of the agents acts as follows:
• If there is a central node x in T̄, then rendezvous is achieved by waiting for the other agent at that node.
• Similarly, if there is a central edge in T̄ and the tree T̄ is not symmetric, then let x be the extremity of the central edge of T̄ identified by protocol Explo-bis; rendezvous is achieved by waiting for the other agent at that node.
The difficult and more challenging situation is when the contraction tree T̄ has a central edge with two non-distinguishable extremities, in which case the ability to solve the rendezvous problem depends on the entire tree T and on the initial positions of the two agents in T. Achieving rendezvous is complicated by the constraint that the agents must use sub-logarithmic memory when ℓ is small.
The main part of the proof is dedicated to describing how this task can actually be achieved in a memory-efficient manner.

Sub-stage 2.1 (for the case when T̄ is symmetric): Resynchronization. Recall that we are in a situation where each of the two agents has performed Explo-bis. An agent starting from node v ∈ T has not necessarily returned to node v, but to node v̄ ∈ T. Each agent executes Procedure Synchro, defined as follows. It starts the execution of a basic walk in T, leaving the current node v̄ by port 0. This basic walk ends when the agent is back at node v̄. This is simply ensured by counting the number of edge-traversals in T̄: the agent stops the basic walk after 2(ν − 1) edge-traversals in T̄, where ν denotes the number of nodes in T̄. Since ν ≤ 2ℓ − 1, counting up to O(ν) does not require more than O(log ℓ) bits. The basic walk proceeds with the following insertions: at each visited node w with degree different from 2 (i.e., at each node of T̄), the agent performs Explo-bis(w), except at the very last node of T̄ visited by the basic walk, that is, except when the agent returns, for the last time, to its position v̄. Since agents performing Procedure Synchro starting from different initial positions execute identical actions, only in different orders, we have the following:

Claim 4.2 Two agents starting simultaneously at arbitrary initial positions v and v' in T finish Procedure Synchro with a delay β = |L − L'|, where L (resp., L') is the length of the basic walk in T leading from v to v̄ (resp., from v' to v̄').

Once the agents are resynchronized (their desynchronization is now precisely β), each of them proceeds to the second part of Stage 2.

Sub-stage 2.2 (for the case when T̄ is symmetric): Rendezvous in a virtual line. After the execution of Procedure Synchro, the agent with initial position v is back at v̄.
In view of Fact 2.1, since it has applied Explo(v̄) at the very beginning of the rendezvous protocol, the agent knows the number of steps of the basic walk from v̄ to the farthest extremity of the central edge of T̄. So its first action in Sub-stage 2.2 is to go to this node, following a basic walk. We denote by v_far (resp., v'_far) the farthest extremity of the central edge of T̄ reached by the agent starting from v (resp., from v'). Since the contraction tree T̄ is symmetric, the two agents may end up in two different nodes of T, i.e., possibly v_far ≠ v'_far. For instance, in the n-node path with an odd number of edges, the two agents may end up in the two extremities of the path. Also, in the binomial tree with n nodes (cf. [13]), the two agents may end up in the two roots of the two binomial subtrees of T with n/2 nodes. Still, we prove that rendezvous is possible with little memory, assuming that the two initial positions of the agents were not perfectly symmetrizable in T. Actually, the first of the two key ingredients of our proof shows how rendezvous can be achieved in the path (or line) using agents with O(log log n) bits of memory. In the lemma below, we consider blind agents in paths, that is, agents that ignore port labels. More precisely, when entering a node, such an agent can just distinguish between the incoming edge and the other edge (if any). Let P = (v_1, . . . , v_m) be an m-node path, and consider two identical blind agents initially located at nodes v_a and v_b, a < b. Rendezvous using blind agents is possible if and only if m is odd, or m is even and a − 1 ≠ m − b. Of course, a standard agent can simulate the behavior of a blind agent. When applying the lemma below with standard agents, we will make sure that the starting positions v_a and v_b are such that rendezvous is achievable even with blind agents.
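The feasibility condition for blind agents can be phrased as a one-line predicate (an illustrative helper, with our own naming):

```python
# Blind-agent rendezvous in an m-node path, agents at v_a and v_b (a < b),
# is possible iff m is odd, or m is even and a - 1 != m - b (i.e., the two
# starting positions are not mirror images of each other).

def blind_rendezvous_possible(m, a, b):
    assert 1 <= a < b <= m
    return m % 2 == 1 or (a - 1) != (m - b)

print(blind_rendezvous_possible(10, 3, 6))   # True: 2 != 4
print(blind_rendezvous_possible(10, 2, 9))   # False: mirror positions
```

When the positions are mirror images in an even path, the adversary can keep the two blind agents perfectly symmetric forever, so no deterministic protocol can make them meet at a node.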
Lemma 4.1 There exists a pair of identical blind agents accomplishing rendezvous with simultaneous start in all paths, whenever it is possible, and using O(log log m) bits of memory in paths with at most m nodes.

Proof. Let P = (v_1, . . . , v_m) be an m-node path, and consider two identical blind agents initially located at nodes v_a and v_b, a < b. To achieve rendezvous, the two agents perform a sequence of traversals of P, executed at lower and lower speeds, aiming at eventually meeting each other at some node. More precisely, for an integer s ≥ 1, a traversal of the path is performed at speed 1/s if the agent remains idle s − 1 rounds before traversing any edge. For instance, traversing P from v_1 to v_m at speed 1/s requires (m − 1)s rounds. Our rendezvous algorithm for the line, called prime, performs as follows.

Begin
  start in an arbitrary direction; move at speed 1 until reaching one extremity of the path;
  p ← 2;
  while no rendezvous do
    traverse the entire path twice, at speed 1/p;
    p ← smallest prime larger than p;
End

We now prove that, whenever rendezvous is possible for blind agents (i.e., when m is odd, or m is even and a − 1 ≠ m − b), the two agents meet after at most O(log m) iterations of the loop. Let p_j be the jth prime number (p_1 = 2). Hence the speed of each agent at the jth execution of the loop is 1/p_j. If rendezvous has not occurred during the jth execution of the loop, then the two agents have crossed the same edge, say e = {v_c, v_{c+1}}, at the same time t, in opposite directions. This can occur if, for instance, the agent initially at v_a moves to node v_1, traverses the path twice at each of the successive speeds 1/p_1, . . . , 1/p_{j−1}, and, c·p_j rounds after having eventually started walking at speed 1/p_j, traverses the edge e at time t, while the other agent initially at v_b moves to v_m, traverses the path twice at each of the successive speeds 1/p_1, . . .
, 1/p_{j−1}, and, (m − c)·p_j rounds after having eventually started walking at speed 1/p_j, traverses the same edge e in the other direction at the same time t. In fact, there are four cases to consider, depending on the two starting directions of the two agents: towards v_1 or towards v_m. From these four cases, we get that one of the following four equalities must hold (the first one corresponds to the previously described scenario: v_a moves towards v_1 while v_b moves towards v_m):
• t = (a − 1) + 2(m − 1)·Σ_{i=1}^{j−1} p_i + c·p_j = (m − b) + 2(m − 1)·Σ_{i=1}^{j−1} p_i + (m − c)·p_j
• t = (a − 1) + 2(m − 1)·Σ_{i=1}^{j−1} p_i + (m − 1)·p_j + (m − c)·p_j = (b − 1) + 2(m − 1)·Σ_{i=1}^{j−1} p_i + c·p_j
• t = (m − a) + 2(m − 1)·Σ_{i=1}^{j−1} p_i + (m − c)·p_j = (m − b) + 2(m − 1)·Σ_{i=1}^{j−1} p_i + (m − 1)·p_j + c·p_j
• t = (m − a) + 2(m − 1)·Σ_{i=1}^{j−1} p_i + (m − c)·p_j = (b − 1) + 2(m − 1)·Σ_{i=1}^{j−1} p_i + c·p_j
Therefore we get that p_j divides |a − b|, or p_j divides |m − (a + b) + 1|. As a consequence, since the p_i's are primes, we get that if the two agents have not met after the jth execution of the loop, then Π_{i∈I} p_i divides |a − b| and Π_{i∈J} p_i divides |m − (a + b) + 1|, where I ∪ J = {1, . . . , j}. Therefore, since the p_i's are primes, Π_{i=1}^{j} p_i divides |a − b| · |m − (a + b) + 1|. Hence, if rendezvous is feasible, it must occur at or before the jth execution of the loop, where j is the largest index such that Π_{i=1}^{j} p_i divides |a − b| · |m − (a + b) + 1|. Thus it must occur at or before the jth execution of the loop, where j is the largest index such that Π_{i=1}^{j} p_i ≤ m². Let π(x) be the number of prime numbers smaller than or equal to x. On the one hand, we have Π_{i=1}^{j} p_i ≥ 2^{π(p_j)}. Hence, rendezvous must occur at or before the jth execution of the loop, where j is the largest index such that 2^{π(p_j)} ≤ m², i.e., π(p_j) ≤ 2 log m. On the other hand, from the Prime Number Theorem we get that π(x) ∼ x/ln(x), i.e., lim_{x→∞} π(x)/(x/ln(x)) = 1. Hence, for x large enough, π(x) ≥ x/(2 ln(x)).
Thus rendezvous must occur at or before the jth execution of the loop, where j is the largest index such that p_j / ln(p_j) ≤ 4 log m. From the above, we get that (1) rendezvous must occur whenever it is feasible, and (2) it occurs at or before the jth execution of the loop, where log p_j = O(log log m). Since the next prime p can be found using O(log p) bits, e.g., by exhaustive search, we get that prime performs rendezvous using agents with O(log log m) bits of memory.

The (blind) agents described in Lemma 4.1 perform a protocol called prime. This protocol uses the infinite sequence of prime numbers. We denote by prime(i) the protocol prime modified so that it stops after having considered the ith prime number. We now come back to our general rendezvous protocol in trees (with port numbers). Let ν = 2x be the number of nodes in the contraction tree T̄. (We have ν even, since T̄ is symmetric with respect to its central edge.) We define a (non-simple) path called the rendezvous path, denoted by P, that will be used by the agents to rendezvous using protocol prime. To define P, let u and v be the two extremities of the path in T corresponding to the central edge of T̄. We have {v_far, v'_far} ⊆ {u, v}. This path in T is called the central path, and is denoted by C. Abusing notation, C will also be used as a shortcut for the instruction "traverse C". Let bw (for "basic walk") be the instruction of performing the following actions: leave by port 0, and, perpetually, whenever entering a degree-d node by port i ∈ {0, . . . , d − 1}, leave that node by port (i + 1) mod d. Similarly, let cbw (for "counter basic walk") be the instruction of performing the following: leave by the port used to enter the current node at the previous step, and, perpetually, whenever entering a degree-d node by port i, leave that node by port (i − 1) mod d.
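Returning to protocol prime of Lemma 4.1, its behavior can be illustrated with a round-by-round simulation. This is a toy sketch under our own assumptions (both agents start simultaneously, they head toward opposite extremities, and meeting means occupying the same node in the same round); agent, next_prime and rendezvous_round are our own names, not the paper's.

```python
# Toy simulation of protocol `prime` on an m-node path (nodes 1..m).

def next_prime(p):
    q = p + 1
    while any(q % d == 0 for d in range(2, int(q ** 0.5) + 1)):
        q += 1
    return q

def agent(start, m, extremity):
    """Yield the agent's node at every round (infinite generator)."""
    pos = start
    yield pos
    step = 1 if extremity > start else -1
    while pos != extremity:                  # phase 1: walk at speed 1
        pos += step
        yield pos
    p = 2
    while True:                              # traverse the path twice at speed 1/p
        for _ in range(2):
            direction = 1 if pos == 1 else -1
            for _ in range(m - 1):
                for _ in range(p - 1):       # stay idle p - 1 rounds per move
                    yield pos
                pos += direction
                yield pos
        p = next_prime(p)                    # then retry with the next prime

def rendezvous_round(a, b, m, horizon=10000):
    """First round at which the two agents share a node, or None."""
    A, B = agent(a, m, 1), agent(b, m, m)
    for t in range(horizon):
        if next(A) == next(B):
            return t
    return None

print(rendezvous_round(3, 6, 10))   # feasible positions: the agents meet
print(rendezvous_round(2, 9, 10))   # mirror positions: they never meet
```

The second call illustrates the infeasible case of the lemma: from mirror positions in an even path the two trajectories remain mirror images forever.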
For j ≥ 1, let bw(j) (resp., cbw(j)) be the instruction to execute bw (resp., cbw) until j nodes of degree different from 2 have been visited. Let B_u (resp., B_v) be the path corresponding to the execution of bw(2(ν − 1)) from u (resp., from v). Note that a node can be visited several times by the walk, and thus neither B_u nor B_v is simple. Note also that since T̄ has ν nodes, it has ν − 1 edges, and thus both B_u and B_v are closed paths, i.e., their extremities are u and v, respectively. Let B'_u (resp., B'_v) be the path corresponding to the execution of cbw(2(ν − 1)) from u (resp., from v). We define

P = (B_u | C_{u→v} | B_v | C_{v→u})^{5ℓ} | (B_u | C_{u→v} | B_v),

where "|" denotes the concatenation of paths, C_{u→v} (resp., C_{v→u}) denotes the path C traversed from u to v (resp., from v to u), and, for a closed path Q, Q^α denotes Q concatenated with itself α times. The path P is well defined. Indeed, the sequence B_u | C_{u→v} | B_v | C_{v→u} leads back to node u. Also, the two extremities of the path are u and v. Now, the agents have no clue whether they are standing at u or at v. Nevertheless, we have the following.

Claim 4.3 An agent standing at u or at v can traverse the path P from its own extremity to the other one.

Before establishing the claim, note that the instructions bw(2(ν − 1)) and cbw(2(ν − 1)) are meaningful, since agents can maintain counters of size O(log ℓ) bits, and they know ν in view of Fact 2.1. To establish the claim, it suffices to notice that the path P̄ reverse to P is given by

P̄ = (B'_v | C_{v→u} | B'_u | C_{u→v})^{5ℓ} | (B'_v | C_{v→u} | B'_u),

i.e., P̄ has the same structure as P, with the roles of u and v exchanged, and with basic walks replaced by counter basic walks. Hence an agent standing at u can traverse P using the first description, while an agent standing at v can traverse P using the second one.

Begin
  for consecutive values i ≥ 1 do /* outer loop */
    /* try rendezvous */
    for j = 0, 1, . . . , 2(ν − 1) do /* first inner loop */
      perform bw(j);
      perform cbw(j); /* back to the original position */
      perform prime(i) on the rendezvous path P;
    /* reset */
    go to the other extremity of the central path C;
    for j = 0, 1, . . . , 2(ν − 1) do /* second inner loop */
      perform bw(j);
      perform cbw(j); /* back to the original position */
    return to the original extremity of the central path C;
End

Figure 2: Stage 2 of the rendezvous protocol.

The two agents will use protocol prime along the path P to achieve rendezvous. However, to make sure that rendezvous succeeds, the two agents must not start prime simultaneously at the two extremities of P, in order to break symmetry. Unfortunately, this requirement is not trivial to satisfy. Indeed, one can guarantee some upper bound on the delay between the times at which the two agents reach the two extremities of C (and thus of P as well), a bound that does not exceed n, but no guarantee can be given for the minimum delay, which could be zero. This is because the delay does not depend on the tree T̄, but on the tree T. Hence two agents starting simultaneously in T may actually finish Stage 2.1 of our protocol (i.e., the execution of Synchro) at the same time, even if T is not symmetric, and even if T̄ is symmetric but the starting positions were not perfectly symmetrizable. The second key ingredient in our proof is a technique guaranteeing the eventual desynchronization of the two agents. A high-level description of this technique is summarized in Figure 2. We describe this technique in detail below. The outer loop of the protocol in Figure 2 states how many consecutive prime numbers the protocol will test while performing prime along the path P. Performing prime(i) for successive values of i, instead of just prime, avoids a perpetual execution of prime in the case when the two agents started the execution of phase 2 at the same time from the two extremities of P. For every number i ≥ 1 of primes to be used in prime, the protocol performs two inner loops.
The first one is an attempt to achieve rendezvous along P, while the second one is used to upper bound the delay between the two agents at the end of the outer loop, in order to guarantee that the next execution of the outer loop will start with a delay between the two agents that does not exceed n. During the first inner loop, an agent executing the protocol performs a series of basic walks of different lengths. For j = 0, the agent performs nothing; in this case, prime(i) is performed on P directly. For j > 0, the agent performs a basic walk in T up to the jth node of degree different from 2 that it encounters along its walk. When j = 2(ν − 1), the basic walk is a complete one, traversing each edge of T twice. Each bw(j) is followed by a cbw(j), so as to come back to the original position at the same extremity of the path P. Once this is done, the agent performs prime(i) on P. The second inner loop aims at resetting the two agents. For this purpose, each agent goes to the other extremity of C, performs the same sequence of actions as the other agent had performed during its execution of the first inner loop, and returns to its original extremity of C. This enables resetting the two agents in the following sense.

Claim 4.4 At the beginning of each execution of the outer loop, the delay between the two agents is |t − t'|, where t (resp., t') denotes the time at which the agent starting at v (resp., at v') reaches v_far (resp., v'_far) at the beginning of Sub-stage 2.2.

To establish the claim, just notice that, during every execution of the outer loop, the sets of actions performed by the two agents inside the loop are identical, differing only by their orders. Note that we can express |t − t'| = |(L + L̄) − (L' + L̄')|, where L and L' are defined in Claim 4.2, and L̄ (resp., L̄') denotes the length of the basic walk leading from v̄ (resp., v̄') to v_far (resp., to v'_far). A consequence of Claim 4.4 is the following lemma.

Lemma 4.2 At the beginning of each execution of prime(i) in the outer loop, the delay between the two agents is at most |t − t'| + 16nℓ.

Proof. For j ≥ 1, let l_j and l'_j be the lengths (i.e., numbers of edges) of the paths in T between the (j − 1)th and the jth node of degree different from 2 met by the two agents, respectively, during their basic walks from their positions at the two extremities of C.
At the jth iteration of the inner loop, one agent has traversed 2·Σ_{a=1}^{j} Σ_{b=1}^{a} l_b edges during bw(a) and cbw(a), for a = 1, . . . , j. The other agent has traversed 2·Σ_{a=1}^{j} Σ_{b=1}^{a} l'_b edges during the same bw(a) and cbw(a). Since the number of rounds of prime(i) is the same for both agents, we get that their delay is at most:

|t − t'| + 2·Σ_{a=1}^{j} Σ_{b=1}^{a} |l_b − l'_b| ≤ |t − t'| + 4(ν − 1)·Σ_{b=1}^{2(ν−1)} |l_b − l'_b| ≤ |t − t'| + 4(ν − 1)·Σ_{b=1}^{2(ν−1)} max{l_b, l'_b} ≤ |t − t'| + 8(ν − 1)n ≤ |t − t'| + 8νn ≤ |t − t'| + 16nℓ.

This completes the proof of the lemma.

Lemma 4.3 Assume that the two agents have not met when they arrive at v_far and v'_far after the execution of Synchro. For every i, if at the beginning of each execution of prime(i) the delay between the two agents is zero, then their initial positions were perfectly symmetrizable in T.

Proof. Fix i ≥ 1, and assume that, at the beginning of each of the 2ν − 1 executions of prime(i) in the outer loop, the delay between the two agents is zero. This implies that, using the same notation as in the proof of Lemma 4.2, for every j = 0, . . . , 2(ν − 1) we have t + 2·Σ_{a=1}^{j} Σ_{b=1}^{a} l_b = t' + 2·Σ_{a=1}^{j} Σ_{b=1}^{a} l'_b. Therefore, t = t' and l_j = l'_j for every j = 0, . . . , 2(ν − 1). These equalities imply that the tree T is topologically symmetric: there is an automorphism f̄ of T which extends the port-preserving automorphism f of T̄ mapping onto each other the two symmetric subtrees T_1 and T_2 of T̄ hanging at the two extremities of the central edge of T̄ (f induces an isomorphism between T_1 and T_2 preserving port labels). Indeed, since the two agents have not met when both of them arrive at v_far and v'_far, the fact that t = t' implies that v_far ≠ v'_far. We have v'_far = f(v_far). More generally, if x_j (resp., x'_j) denotes the jth node of T̄ reached by the basic walk starting at v_far (resp., at v'_far), we have x'_j = f(x_j).
By definition, l_j (resp., l'_j) is the length of the path in T between x_{j−1} and x_j (resp., between x'_{j−1} and x'_j). Since l_j = l'_j, we get that the number of degree-2 nodes in T between x_{j−1} and x_j is the same as the number of degree-2 nodes in T between x'_{j−1} and x'_j. Thus f can be extended to match the nodes of these two paths, preserving adjacencies. Since this holds for every j, we get that T is topologically symmetric. To sum up, the tree T is topologically symmetric (by the automorphism f̄), and its contraction tree T̄ is symmetric (by the automorphism f, which preserves port labels). A consequence of this fact is the following crucial observation. Let us consider the following port labeling µ. The port numbers at nodes of degree larger than 2 are the same as in T. The port labeling is completed arbitrarily at nodes of degree 2, preserving the following condition: if {z, z'} is an edge of T with at least one extremity z of degree 2, then the port number at z corresponding to {z, z'} is equal to the port number at f̄(z) corresponding to {f̄(z), f̄(z')}. Two basic walks starting from two symmetric positions in T generate two sequences of nodes such that the ith nodes of the two sequences are symmetric in T with respect to µ. Indeed, the "branching" nodes, i.e., the nodes of degree at least 3, are symmetric, and basic walks are oblivious of the port numbers at nodes of degree at most 2. The same observation holds for counter basic walks. It also holds if the port numbers of the outgoing edges at the starting nodes are not 0, under the simple assumption that they are equal. We use the above observation to show that the two nodes v and v' are perfectly symmetrizable. Since T̄ is symmetric, it is sufficient to show that v and v' are topologically symmetric. The two agents have reached nodes v_far and v'_far after Procedure Synchro, entering these nodes from the central path.
Indeed, on the one hand, v_far and v'_far are the farthest extremities of the central edge of T̄ coming from v̄ and v̄', respectively, and, on the other hand, the basic walks reaching these nodes are of minimum length (cf. Fact 2.1). Since v_far and v'_far are symmetric in T, the port numbers of the edges incident to these nodes on the central path are identical. Let i be this port number. Consider two counter basic walks of length t = t' starting from v_far and v'_far, leaving the starting node by port number i. These counter basic walks proceed backwards, first along the basic walk from v̄ to v_far for L̄ steps, and next along the basic walk from v to v̄ for L steps. If v = v̄ then L = 0. If v ≠ v̄, then the articulation between the two basic walks v → v̄ and v̄ → v_far occurs at v̄ = v_leaf. Since we have chosen this latter node as a leaf, the sequence of basic walks v → v̄ and v̄ → v_far is actually equal to a basic walk v → v_far of length t = L + L̄. Hence the counter basic walk of length t starting from v_far by port i leads back to the initial position v. The same holds for the other walk, of length t' = t. Therefore, v and v' are topologically symmetric, and thus they are perfectly symmetrizable. In view of the previous lemma, since v and v' are not perfectly symmetrizable, at each execution i of the outer loop there is an execution j of prime(i) for which the two agents do not start the second phase at the same time from their respective extremities of P. Moreover, by Lemma 4.2, during this jth execution of prime(i), the delay δ between the two agents is at most |t − t'| + 16nℓ. We have |t − t'| = |(L + L̄) − (L' + L̄')|, where the four parameters are lengths of basic walks. These four basic walks have lengths at most 2(n − 1). Hence |t − t'| ≤ 4n. Therefore, δ ≤ 20nℓ. The length of the rendezvous path P is larger than 20nℓ, because B_u and B_v are each of length at least 2n.
Therefore, at the first time when both agents are simultaneously in the jth execution of prime(i), they occupy two non perfectly symmetrizable positions in P: one is at one extremity of P, and the other is at some node of P at distance δ > 0 along P from the other extremity of P. Moreover, since the delay δ between the two agents is smaller than the length of the path P, the agent first executing prime(i) has not yet completed the first traversal of P when the other agent starts prime(i). As a consequence, the two agents act as if prime(i) were executed with both agents starting simultaneously at non perfectly symmetrizable positions in the path. Now, for small values of i, prime(i) may not achieve rendezvous in P. However, in view of Lemma 4.1, for some i = O(log n), rendezvous will be completed whenever the initial positions of the agents were not perfectly symmetrizable in T. We complete the proof by checking that each agent uses O(log ℓ + log log n) bits of memory. Protocol Explo-bis executed in T consumes the same amount of memory as Protocol Explo executed in T̄. Since T̄ has at most 2ℓ − 1 nodes, Explo-bis uses O(log ℓ) bits of memory. During the second stage of the rendezvous, a counter is used for identifying the index j of the inner loops. Since j ≤ 2ν ≤ 4ℓ, this counter uses O(log ℓ) bits of memory. All executions of prime are independent, and performed one after the other. Thus, in view of Lemma 4.1, a total of O(log log n) bits suffice to implement these executions. The index i of the outer loop grows until it is large enough so that prime(i) achieves rendezvous in a path of length O(nℓ). Thus, i ≤ log(nℓ), and O(log log(nℓ)) = O(log log n) bits suffice to encode this index. This completes the proof of Theorem 4.1.

The lower bound Ω(log log n)

In this section we prove the lower bound Ω(log log n) on the size of memory required for rendezvous with simultaneous start in an n-node line.
Theorem 4.2 Rendezvous with simultaneous start in the n-node line requires agents with Ω(log log n) bits of memory.

The rest of the section is dedicated to the proof of Theorem 4.2. For proving the theorem, note that we can restrict ourselves to lines whose edges are properly colored 1 and 2, so that the port numbers at the two extremities of an edge colored i are both set to i. In this setting, the transition function of an agent in a line is π : S × {1, 2} → S; it describes the transition that occurs when an agent enters a node of degree d ∈ {1, 2} in state s ∈ S. In this situation, the agent changes its state to s' = π(s, d), and performs the action λ(s'). The fact that one does not need to specify the incoming port number is a consequence of the edge-coloring, which implies that whenever an agent leaves a node by port i, it enters the next node by port i too. Let us fix two identical agents A and A', with finite state set S and transition function π. Let π' : S → S be the transition function applied at nodes of degree 2 of the edge-colored line, i.e., π'(s) = π(s, 2) for any s ∈ S. To π' is associated its transition digraph, whose nodes are the states in S, and which has an arc from s to s' if and only if s' = π'(s). This digraph is composed of a certain number of connected components, say r, each of them of a similar shape, that is, a circuit with inward trees rooted at the nodes of the circuit. Let C_1, . . . , C_r be the r circuits corresponding to the r connected components of the transition digraph, and let γ be the least common multiple of the numbers of arcs of these circuits, i.e., γ = lcm(|C_1|, . . . , |C_r|). We prove that there is a line of length proportional to 2γ + |S| in which A and A' do not rendezvous. First, observe that if A and A' cannot go at arbitrarily large distance from their starting positions, say they go at maximum distance D, then they cannot rendezvous in a line of length 4D + 4.
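The quantity γ can be computed from the functional graph of π' as in this sketch (the 7-state transition function below is made up purely for illustration):

```python
# Sketch: circuits of the transition digraph of pi' : S -> S, and
# gamma = lcm of their lengths.

from math import gcd

def circuit_lengths(pi):
    """pi: dict mapping each state to its successor; returns circuit lengths."""
    lengths, on_circuit = [], set()
    for s in pi:
        slow = fast = s
        while True:                          # Floyd cycle detection from s
            slow, fast = pi[slow], pi[pi[fast]]
            if slow == fast:
                break                        # `slow` now lies on a circuit
        if slow in on_circuit:
            continue                         # this component already counted
        on_circuit.add(slow)
        length, cur = 1, pi[slow]
        while cur != slow:                   # walk once around the circuit
            on_circuit.add(cur)
            length, cur = length + 1, pi[cur]
        lengths.append(length)
    return lengths

def lcm(values):
    out = 1
    for v in values:
        out = out * v // gcd(out, v)
    return out

# Hypothetical 7-state function: circuits of lengths 2 and 3, plus tree states.
pi = {0: 1, 1: 0, 2: 0, 3: 4, 4: 5, 5: 3, 6: 3}
print(sorted(circuit_lengths(pi)), lcm(circuit_lengths(pi)))   # [2, 3] 6
```

Since every node of a functional graph has out-degree one, iterating π' from any state must eventually enter a circuit, which is exactly what the cycle detection exploits.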
Indeed, if the initial positions are two nodes at distance 2D + 1, and at distance at least D + 1 from the extremities of the line, then the ranges of activity of the two agents are disjoint, and thus they cannot meet (one edge is added at one extremity of the line to break the symmetry of the initial configuration). Thus from now on, we assume that both agents can go at arbitrarily large distance from their starting positions. For the purpose of establishing our result, place the two agents A and A′ on two adjacent nodes v A and v A′ of an infinite line (whose edges are properly colored). Let e = {v A , v A′ } be the edge linking these two nodes. • Let t 0 be large enough so that A is at distance at least 2γ + |S| from its starting position after t 0 steps. Since t 0 > |S|, agent A at time t 0 is in some state s i ∈ C i for some i ∈ {1, . . . , r}. In fact, since |C i | divides γ, agent A has fully executed C i at least twice. We define the notion of extreme position for a circuit C. Let s, π′(s), . . . , π′ (k) (s) be a circuit, with s = π′ (k) (s). Assume that agent A starts in state s from node u 0 at distance at least k + 1 from both extremities of the line. After having performed C exactly once, i.e., after k steps, agent A is at some node u k , back in state s. Let u 0 , u 1 , u 2 , . . . , u k be the k + 1 not necessarily distinct nodes visited by A while executing C. The extreme position for C starting in state s is the node u j satisfying dist(u 0 , u j ) = dist(u 0 , u k ) + dist(u k , u j ), and dist(u 0 , u j ) = max 0≤ℓ≤k dist(u 0 , u ℓ ). Let u i be the extreme position for C i starting in s i , and let us define the following parameters: • τ is the first time step among the |C i | steps after step t 0 at which A reaches u i . • x is the distance of agent A at time τ from its original position, i.e., x = dist(u i , v A ); • τ′ = τ + 2γ; • x′ is the distance of agent A at time τ′ from its original position v A .
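The decomposition of the transition digraph of π′ into circuits with inward trees, and the value γ = lcm(|C 1 |, . . . , |C r |), can be computed mechanically for any function on a finite state set. The sketch below is illustrative only; the 7-state transition function is hypothetical, with states numbered 0..|S|−1.

```python
from math import gcd

def circuit_lengths(f):
    # Lengths of the circuits in the functional graph of f : S -> S,
    # where S = range(len(f)); every connected component is a circuit
    # with inward trees hanging off it.
    n = len(f)
    color = [0] * n              # 0 unvisited, 1 on current path, 2 done
    lengths = []
    for s in range(n):
        if color[s]:
            continue
        path, index = [], {}
        v = s
        while color[v] == 0:
            color[v] = 1
            index[v] = len(path)
            path.append(v)
            v = f[v]
        if color[v] == 1:        # closed a new circuit
            lengths.append(len(path) - index[v])
        for u in path:
            color[u] = 2
    return lengths

def lcm_all(vals):
    g = 1
    for v in vals:
        g = g * v // gcd(g, v)
    return g

# hypothetical 7-state pi': states 0,1,2 form a circuit of length 3,
# states 3,4 a circuit of length 2, states 5 and 6 lead into the first one
pi_prime = [1, 2, 0, 4, 3, 0, 5]
gamma = lcm_all(circuit_lengths(pi_prime))
print(gamma)  # lcm(3, 2) = 6
```

Note that γ can be exponential in |S| in the worst case, which is exactly what drives the length of the line constructed in the proof.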
Note that, by symmetry of the port labeling, and from the fact that A and A′ are identical and operate in an infinite line, the two agents are on the two different sides of edge e at time τ. Note also that, between times τ and τ′, agent A keeps on going further away from its original position, by repeating the sequence of actions determined by the circuit C i . Hence x′ ≥ x. Actually, we have x′ > x. We can therefore consider the following construction. Initial configuration of the agents. Let L be the properly 2-edge-colored line of length x + x′ + 1, formed by x edges, followed by one edge called e, and followed by x′ edges. The two agents A and A′ are placed at the two extremities v A and v A′ of e, the same way they were placed at the two extremities of e in the infinite line used to define x and x′. Since x ≠ x′, the initial positions of agents are not perfectly symmetrizable. Nevertheless, we prove that the two agents never meet in L, and thus rendezvous is not accomplished. The adversary imposes no delay between the starting times of the agents, i.e., they both start acting simultaneously from their respective initial positions. One ingredient used for proving that the two agents do not rendezvous is the following general result, that we state as a lemma for further reference. Lemma 4.4 (Parity Lemma) Consider two (not necessarily identical) agents initially at odd distance in a tree T , that start acting simultaneously in T . Let t ≥ 1. Assume that one agent stays idle q times in the time interval [1, t], while the other one stays idle q′ times in the same time interval. If |q − q′| is even, then the two agents are at odd distance at step t. Proof. At any step, if one agent moves while the other one stays idle, then the parity of their distance changes. On the other hand, if both agents move or both stay idle, then the parity of their distance remains unchanged. Let a be the number of steps in [1, t] when both agents were idle simultaneously.
Then the parity of the inter-agent distance changes exactly (q − a) + (q′ − a) times in the time interval [1, t]. Since |q − q′| is even, q + q′ is also even, and thus (q − a) + (q′ − a) is even too. Thus the parity of the inter-agent distance is the same at time 1 and at time t. The Parity Lemma enables us to establish the following. Lemma 4.5 The two agents A and A′ do not meet during the first τ steps. Proof. Since the agents perform the same sequence of actions in the time interval [1, τ], we get that, for any t ≤ τ, the two agents have remained idle the same number of times in the time interval [1, t], and thus, by the Parity Lemma (with q = q′), they are at odd distance at step t, since they originally started at distance 1. In other words, the two agents remain permanently at odd distance during the time interval [1, τ]. Thus they cannot meet during this time interval. At step τ, the behavior of the two agents becomes different. Indeed, agent A is reaching one extremity of L, while A′ is visiting a degree-2 node. We analyze the states of the two agents when they reach extremities of L during the execution of their protocol. Assume that agent A reaches the extremities of L at least k ≥ 1 times. Let σ j be the state of agent A when it reaches any of the two extremities of L for the jth time, 1 ≤ j ≤ k. Lemma 4.6 Agent A′ reaches the extremities of L at least k times. Moreover, if σ′ j is the state of agent A′ when it reaches any of the two extremities of L for the jth time, 1 ≤ j ≤ k, then σ′ j = σ j . Proof. First, let us consider the case k = 1. After time τ (i.e., after the time when A reaches one extremity of L, in state σ 1 ), agent A′ keeps on repeating the execution of circuit C i . This leads A′ to eventually reach the other extremity of L.
Recall that we have considered the behavior of A after time t 0 when A was in state s i ∈ C i , and that τ was defined as the first time step among the |C i | steps after step t 0 at which A reaches the extreme position u i of C i starting at s i . Since τ′ = τ + 2γ, and since |C i | divides γ, we get that agent A is in state σ 1 at time τ′. Moreover, since |C i | divides γ, A reaches the extreme position u i of C i at time τ′, and therefore time τ′ is the first time when A is at distance x′ from e. Therefore σ′ 1 = σ 1 , and the lemma holds for k = 1. For k > 1, the proof is by induction on the number of times j agent A reaches an extremity of L, j = 1, . . . , k. By the previous arguments, the result holds for j = 1. When agent A reaches an extremity of L for the jth time, it is in state σ j . By the induction hypothesis, when agent A′ reaches an extremity of L for the jth time, it is also in state σ′ j = σ j . Therefore, the configuration for A and A′ between two consecutive hits of an extremity of L is actually symmetric. As a consequence, σ′ j+1 = σ j+1 , and the lemma holds. After time τ the walks of the agents can be decomposed into two different types of subwalks. A traversal period for an agent is the subwalk between two consecutive hits of two different extremities of L by this agent. A bouncing period for an agent is a subwalk (possibly empty) performed between two consecutive traversal periods. Roughly, a bouncing period for an agent is a walk during which the agent starts from one extremity of L and repeatedly bounces at (i.e., leaves and goes back to) that extremity until it eventually starts the next traversal period. Globally, an agent starts from its original position, performs some initial steps (τ for A, and τ′ for A′), and then alternates between bouncing periods and traversal periods. These periods are not synchronous between the two agents because there is a delay of 2γ between them.
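The Parity Lemma (Lemma 4.4) can be sanity-checked by simulation in the one case needed here, an infinite line, where a move changes the inter-agent distance by ±1 and a null move leaves it unchanged. The idle/move schedules below are randomly generated and purely illustrative.

```python
import random

def check_parity_lemma(dist0, moves_a, moves_b):
    # moves are 0 (stay idle) or +/-1 (traverse one edge) on an
    # infinite line; the agents start at distance dist0.
    xa, xb = 0, dist0
    qa = qb = 0                       # idle counts of the two agents
    for ma, mb in zip(moves_a, moves_b):
        qa += (ma == 0)
        qb += (mb == 0)
        xa += ma
        xb += mb
        if (qa - qb) % 2 == 0:        # whenever |q - q'| is even ...
            # ... the distance parity equals the initial parity
            assert (xb - xa) % 2 == dist0 % 2
    return True

random.seed(0)
for _ in range(200):
    A = [random.choice((-1, 0, 1)) for _ in range(60)]
    B = [random.choice((-1, 0, 1)) for _ in range(60)]
    check_parity_lemma(1, A, B)       # odd initial distance stays odd
print("Parity Lemma verified on 200 random schedules")
```

The invariant behind the check is exactly the one in the proof: the distance parity flips precisely at the steps where exactly one agent is idle, and these are also the steps at which the parity of q − q′ flips.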
Nevertheless, by Lemma 4.6, if one agent bounces at one extremity of L during its kth bouncing period, then the other agent bounces at the other extremity of L during its kth bouncing period. Similarly, if one agent traverses L during its kth traversal period, then the other agent traverses L in the opposite direction during its kth traversal period. In fact, Lemma 4.6 guarantees that the two agents perform symmetric actions with a delay of 2γ, alternating bouncing at the two different extremities of L, and traversing L in two opposite directions. The following lemma holds, by establishing that whenever one agent is in a bouncing period, the two agents are far apart. Lemma 4.7 The two agents A and A′ do not meet whenever one of them is in a bouncing period. Proof. There is a delay of 2γ between the two agents. During such a period of time, an agent can travel a distance at most 2γ. Also, during its bouncing period, an agent cannot go at distance more than |S| from the extremity of the line where it is bouncing. On the other hand, by the definitions of t 0 and τ > t 0 , we have x > 2γ + |S|, and thus x′ > 2γ + |S| as well. Therefore, when one of the agents is in a bouncing period, the distance between the two agents is at least 2γ + |S|, and thus they cannot meet. The following lemma holds, by using the fact that γ is the least common multiple of the circuit lengths in the transition digraph of the agents, and by applying the Parity Lemma. Lemma 4.8 The two agents A and A′ do not meet when both of them are in traversal periods. Proof. When both agents are in a traversal period, they started their period in the same state, by Lemma 4.6. Hence, they are eventually both performing the same circuit of states C i . This occurs after the same initial time of duration at most |S|. This time corresponds to the time it takes to reach the circuit C i from the initial state at which the agents started their traversal period.
As we already observed in the proof of Lemma 4.7, since x′ > x > 2γ + |S|, the two agents are far apart during the transition period before both of them have entered the circuit C i executed during the considered traversal. Thus we can now assume that the two agents are performing C i , traversing the line in two opposite directions. We prove that they cross along an edge, and hence they do not meet. Since the delay between the two agents is 2γ and since γ is a multiple of |C i | for any i ∈ {1, . . . , r}, the delay is an even multiple of the length of the circuit |C i | performed at this traversal. As a consequence, at any step of their traversal periods, the number of times one agent was idle when the other was not, is even. The Parity Lemma with |q − q′| = 2γ/|C i | then ensures that the distance between the two agents remains odd during the whole traversal period. Thus they do not meet. Proof of Theorem 4.2. The two agents start an initial period that lasts τ steps. By Lemma 4.5 they do not meet during this period. Then the two agents alternate between bouncing periods and traversal periods. By Lemma 4.7, they do not meet when one of the two agents is in a bouncing period. When the two agents are in a traversal period, Lemma 4.8 guarantees that they do not meet. Hence the two agents never meet, in spite of starting from non perfectly symmetrizable positions, and thus they do not rendezvous in L. By the construction of the line L and the setting of γ, we get that L is of length O(|S|^|S|). Therefore, rendezvous with simultaneous start in lines of size at most n requires agents with at least Ω(log log n) memory bits.

The lower bound Ω(log ℓ)

In this section we prove that rendezvous with simultaneous start in trees with ℓ leaves requires Ω(log ℓ) bits of memory, even in the class of trees with maximum degree 3.
Together with the lower bound of Ω(log log n) on memory size needed for rendezvous in the n-node line established in Theorem 4.2, this result proves that our upper bound O(log ℓ + log log n) from Section 4.1 cannot be improved even for trees of maximum degree 3. Theorem 4.3 For infinitely many integers ℓ, there exists an infinite family of trees with ℓ leaves, for which rendezvous with simultaneous start requires Ω(log ℓ) bits of memory. Proof. Consider an integer ℓ = 2i, for any even i. Consider an (i+1)-node path with a distinguished endpoint called the root. To every internal node x of the path attach either a new leaf, or a new node y of degree 2 with a new leaf z attached to it. There are 2^(i−1) = 2^(ℓ/2−1) possible resulting non-isomorphic rooted trees. Call them side trees. Note that non-isomorphic is meant here without the port-preserving clause: there are this many rooted trees which cannot be mapped to each other by any isomorphism, not only by any isomorphism preserving port numbering. Fix an arbitrary port labeling in every side tree. For any pair of side trees T and T′ and for any positive even integer m, consider the tree consisting of side trees T and T′ whose roots are joined by a path of length m + 1 (i.e., there are m added nodes of degree two). Ports at the added nodes of degree two are labeled as follows: both ports at the central edge have label 0, and ports at both ends of any other edge of the line have the same label 0 or 1. (This corresponds to a 2-edge-coloring of the line.) Call any tree resulting from this construction a two-sided tree. Any such tree has ℓ leaves and maximum degree 3. For any two-sided tree consider initial positions of the agents at nodes u and v of the joining path adjacent to the roots of its side trees. Consider agents with k bits of memory (thus with K = 2^k states).
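The side trees just described can be enumerated explicitly for small i; the edge-list encoding below is our own, hypothetical representation (port numbers omitted), but the count 2^(i−1) is the one used in the proof.

```python
from itertools import product

def side_trees(i):
    # Side trees built on an (i+1)-node path rooted at node 0: each of
    # the i-1 internal path nodes gets either a pendant leaf ('L') or a
    # degree-2 node with a leaf below it ('Y').
    trees = []
    for choice in product("LY", repeat=i - 1):
        edges = [(j, j + 1) for j in range(i)]       # the path 0..i
        nxt = i + 1                                   # fresh node ids
        for node, c in zip(range(1, i), choice):
            if c == "L":
                edges.append((node, nxt)); nxt += 1
            else:                                     # 'Y': node - y - z
                edges.append((node, nxt))
                edges.append((nxt, nxt + 1)); nxt += 2
        trees.append(edges)
    return trees

i = 4                                                 # hypothetical small i
assert len(side_trees(i)) == 2 ** (i - 1)             # 2^(i-1) side trees
# each side tree has at most 3i - 1 nodes, as used in the tour bound below
assert all(len({x for e in t for x in e}) <= 3 * i - 1 for t in side_trees(i))
```

Each side tree has i leaves (i − 1 pendant ones plus the far endpoint of the path), so a two-sided tree indeed has ℓ = 2i leaves.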
A tour of a side tree associated with an initial position (u or v) is the part of the trajectory of the agent in this side tree between consecutive visits of the associated initial position. Observe that the maximum duration D of a tour is smaller than K · (3i). Indeed, the number of nodes in a side tree is at most 3i − 1, hence the number of possible pairs (state, node of the side tree) is at most K · (3i − 1). A tour of longer duration than this value would cause the agent to leave the same node twice in the same state, implying an infinite loop. Such a tour could not come back to the initial position. For a fixed agent with the set S of states and a fixed side tree, we define the function p : S → S as follows. Let s be the state in which the agent starts a tour. Then p(s) is the state in which the agent finishes the tour. Now we define the function q : S → S × {1, . . . , D}, called the behavior function, by the formula q(s) = (p(s), t), where t is the number of rounds to complete the tour when starting in state s. The number of possible behavior functions is at most F = (KD)^K. A behavior function depends on the side tree for which it is constructed. Suppose that k ≤ (1/3) log ℓ. We have D < 3Ki = (3/2)Kℓ, hence KD < (3/2)K^2 ℓ. Hence we have log K + log log(KD) ≤ k + log log((3/2)K^2 ℓ) ≤ k + 2 + log k + log log ℓ, which is smaller than (2/3) log ℓ for sufficiently large k. It follows that K log(KD) < ℓ^(2/3) < ℓ/2 − 1, which implies F = (KD)^K < 2^(ℓ/2−1). Thus the number of possible behavior functions is strictly smaller than the total number of side trees. It follows that there are two side trees T 1 and T 2 for which the corresponding behavior functions are equal. Consider two instances of the rendezvous problem for any length m + 1 of the joining line, where m is a positive even integer: one in which both side trees are equal to T 1 , and the other for which one side tree is T 1 and the other is T 2 .
Rendezvous is impossible in the first instance because in this instance the initial positions of the agents form a symmetric pair of nodes with respect to the given port labeling. Consider the second instance, in which the initial positions of the agents do not form a perfectly symmetrizable pair. Because of the symmetry of the labeling of the joining line, the agents cannot meet inside any of the side trees. Indeed, when one of them is in one tree, the other one is in the other tree. Since the behavior function associated with side trees T 1 and T 2 is the same, the agents leave these trees always at the same time and in the same state. Hence they cannot meet on the line, in view of its odd length and symmetric port labeling. This implies that they never meet, in spite of initial positions that are not perfectly symmetrizable. Hence rendezvous in the second instance requires Ω(log ℓ) bits of memory.
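The pigeonhole computation in the proof above can be checked numerically in log scale (the counts themselves are astronomically large); the concrete value of ℓ below is an arbitrary illustration.

```python
from math import log2

def log2_behavior_functions(k, num_leaves):
    # log2 of F = (K*D)**K, with K = 2**k states and the tour-duration
    # bound D < 3*K*i, where i = num_leaves / 2.
    K = 2 ** k
    D = 3 * K * (num_leaves // 2)
    return K * log2(K * D)

num_leaves = 2 ** 30                  # hypothetical large ell
k = int(log2(num_leaves)) // 3        # memory k <= (1/3) log ell
# strictly fewer behavior functions than side trees: F < 2**(ell/2 - 1)
assert log2_behavior_functions(k, num_leaves) < num_leaves / 2 - 1
print("pigeonhole margin (bits):",
      num_leaves / 2 - 1 - log2_behavior_functions(k, num_leaves))
```

With these numbers, log2 F is on the order of 5 · 10^4 while ℓ/2 − 1 is above 5 · 10^8, so two side trees with equal behavior functions must exist.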
11,632
1102.0467
2950424720
The aim of rendezvous in a graph is meeting of two mobile agents at some node of an unknown anonymous connected graph. In this paper, we focus on rendezvous in trees, and, analogously to the efforts that have been made for solving the exploration problem with compact automata, we study the size of memory of mobile agents that permits to solve the rendezvous problem deterministically. We assume that the agents are identical, and move in synchronous rounds. We first show that if the delay between the starting times of the agents is arbitrary, then the lower bound on memory required for rendezvous is Omega(log n) bits, even for the line of length n. This lower bound meets a previously known upper bound of O(log n) bits for rendezvous in arbitrary graphs of size at most n. Our main result is a proof that the amount of memory needed for rendezvous with simultaneous start depends essentially on the number L of leaves of the tree, and is exponentially less impacted by the number n of nodes. Indeed, we present two identical agents with O(log L + loglog n) bits of memory that solve the rendezvous problem in all trees with at most n nodes and at most L leaves. Hence, for the class of trees with polylogarithmically many leaves, there is an exponential gap in minimum memory size needed for rendezvous between the scenario with arbitrary delay and the scenario with delay zero. Moreover, we show that our upper bound is optimal by proving that Omega(log L + loglog n) bits of memory are required for rendezvous, even in the class of trees with degrees bounded by 3.
Apart from the synchronous model used in this paper, several authors have investigated asynchronous rendezvous in the plane @cite_34 @cite_10 and in network environments @cite_23 @cite_1 @cite_32 . In the latter scenario the agent chooses the edge which it decides to traverse but the adversary controls the speed of the agent. Under this assumption rendezvous in a node cannot be guaranteed even in very simple graphs, hence the rendezvous requirement is relaxed to permit the agents to meet inside an edge.
{ "abstract": [ "Two mobile agents (robots) with distinct labels have to meet in an arbitrary, possibly infinite, unknown connected graph or in an unknown connected terrain in the plane. Agents are modeled as points, and the route of each of them only depends on its label and on the unknown environment. The actual walk of each agent also depends on an asynchronous adversary that may arbitrarily vary the speed of the agent, stop it, or even move it back and forth, as long as the walk of the agent in each segment of its route is continuous, does not leave it and covers all of it. Meeting in a graph means that both agents must be at the same time in some node or in some point inside an edge of the graph, while meeting in a terrain means that both agents must be at the same time in some point of the terrain. Does there exist a deterministic algorithm that allows any two agents to meet in any unknown environment in spite of this very powerful adversary? We give deterministic rendezvous algorithms for agents starting at arbitrary nodes of any anonymous connected graph (finite or infinite) and for agents starting at any interior points with rational coordinates in any closed region of the plane with path-connected interior. While our algorithms work in a very general setting - agents can, indeed, meet almost everywhere - we show that none of the above few limitations imposed on the environment can be removed. On the other hand, our algorithm also guarantees the following approximate rendezvous for agents starting at arbitrary interior points of a terrain as above: agents will eventually get at an arbitrarily small positive distance from each other.", "Two mobile agents (robots) having distinct labels and located in nodes of an unknown anonymous connected graph have to meet. We consider the asynchronous version of this well-studied rendezvous problem and we seek fast deterministic algorithms for it. 
Since in the asynchronous setting, meeting at a node, which is normally required in rendezvous, is in general impossible, we relax the demand by allowing meeting of the agents inside an edge as well. The measure of performance of a rendezvous algorithm is its cost: for a given initial location of agents in a graph, this is the number of edge traversals of both agents until rendezvous is achieved. If agents are initially situated at a distance D in an infinite line, we show a rendezvous algorithm with cost O(D|Lmin|2) when D is known and O((D + |Lmax|)3) if D is unknown, where |Lmin| and |Lmax| are the lengths of the shorter and longer label of the agents, respectively. These results still hold for the case of the ring of unknown size, but then we also give an optimal algorithm of cost O(n|Lmin|), if the size n of the ring is known, and of cost O(n|Lmax|), if it is unknown. For arbitrary graphs, we show that rendezvous is feasible if an upper bound on the size of the graph is known and we give an optimal algorithm of cost O(D|Lmin|) if the topology of the graph and the initial positions are known to agents.", "We investigate the relation between the time complexity and the space complexity for the rendezvous problem with k agents in asynchronous tree networks. The rendezvous problem requires that all the agents in the system have to meet at a single node within finite time. First, we consider asymptotically time-optimal algorithms and investigate the minimum memory requirement per agent for asymptotically time-optimal algorithms. We show that there exists a tree with n nodes in which Ω(n) bits of memory per agent is required to solve the rendezvous problem in O(n) time (asymptotically time-optimal). Then, we present an asymptotically time-optimal rendezvous algorithm. This algorithm can be executed if each agent has O(n) bits of memory. 
From this lower upper bound, this algorithm is asymptotically space-optimal on the condition that the time complexity is asymptotically optimal. Finally, we consider asymptotically space-optimal algorithms while allowing slowdown in time required to achieve rendezvous. We present an asymptotically space-optimal algorithm that each agent uses only O(logn) bits of memory. This algorithm terminates in O(Δn8) time where Δ is the maximum degree of the tree.", "Consider a set of n > 2 simple autonomous mobile robots (decentralized, asynchronous, no common coordinate system, no identities, no central coordination, no direct communication, no memory of the past, deterministic) moving freely in the plane and able to sense the positions of the other robots. We study the primitive task of gathering them at a point not fixed in advance (GATHERING PROBLEM). In the literature, most contributions are simulation-validated heuristics. The existing algorithmic contributions for such robots are limited to solutions for n ≤ 4 or for restricted sets of initial configurations of the robots. In this paper, we present the first algorithm that solves the GATHERING PROBLEM for any initial configuration of the robots.", "We consider a collection of robots which are identical (anonymous), have limited visibility of the environment, and no memory of the past (oblivious); furthermore, they are totally asynchronous in their actions, computations, and movements. We show that, even in such a totally asynchronous setting, it is possible for the robots to gather in the same location in finite time, provided they have a compass." ], "cite_N": [ "@cite_1", "@cite_32", "@cite_23", "@cite_34", "@cite_10" ], "mid": [ "2100580556", "2050063944", "1571240480", "1592126212", "1635699204" ] }
Delays induce an exponential memory gap for rendezvous in trees
We first show that if the delay between the starting times of the agents is arbitrary, then the lower bound on memory required for rendezvous is Ω(log n) bits, even for the line of length n. This lower bound matches the upper bound from [14] valid for arbitrary graphs. Our main positive result is a proof that the amount of memory needed for rendezvous with simultaneous start in trees depends essentially on the number of leaves of the tree, and is exponentially less impacted by the number n of nodes. Indeed, we show two identical agents with O(log ℓ + log log n) bits of memory that solve the rendezvous problem in all trees with n nodes and ℓ leaves. Hence, for the class of trees with polylogarithmically many leaves, there is an exponential gap in minimum memory size needed for rendezvous between the scenario with arbitrary delay and the scenario with delay zero. Moreover, we show that the size O(log ℓ + log log n) of memory needed for rendezvous is optimal, even in the class of trees with degrees bounded by 3. More precisely, we prove two lower bounds. First, for infinitely many integers ℓ, we show a class of arbitrarily large trees with maximum degree 3 and with ℓ leaves, for which rendezvous with simultaneous start requires Ω(log ℓ) bits of memory. Second, we show that Ω(log log n) bits of memory are required for rendezvous with simultaneous start in the line of length n. These two bounds together imply that our upper bound O(log ℓ + log log n) cannot be improved, even for the class of trees with maximum degree 3.

Bibliographic note

Note that our definition of solving the rendezvous problem is stronger than the definition used in the conference versions [24,25] of this paper. Indeed, rendezvous should occur for any port labeling. As opposed to what is claimed in [25], the exponential gap described in this paper does not carry over to the case where the ability of achieving rendezvous may depend on the port labeling.
More precisely, it was claimed in [25] that the positive result concerning the size O(log ℓ + log log n) of memory for which rendezvous with simultaneous start is possible, holds for arbitrary initial positions that are not symmetric with respect to a given port labeling µ of the tree in which the agents operate. This result is in fact incorrect in this formulation. Indeed, it has been recently proved in [15] that, for some port labeling of a line and some initial positions that are not symmetric with respect to this labeling, rendezvous with simultaneous start requires a logarithmic number of bits, while ℓ = 2 for the line. However, our positive result holds for agents starting from arbitrary non perfectly symmetrizable initial positions. The algorithm and its analysis remain similar as in [25]. (The exact place where the provided arguments do not extend to the case where the ability of achieving rendezvous may depend on the port labeling will be pointed out to the reader.) On the other hand, all negative results from [24] and [25] hold in the present setting as well.

Framework and Preliminaries

Model

We consider mobile agents traveling in trees with locally labeled ports. The tree and its size are a priori unknown to the agents. We first define precisely an individual agent. An agent is an abstract state machine A = (S, π, λ, s 0 ), where S is a set of states among which there is a specified state s 0 called the initial state, π : S × Z^2 → S, and λ : S → Z. Initially the agent is at some node u 0 in the initial state s 0 ∈ S. The agent performs actions in rounds measured by its internal clock. Each action can be either a move to an adjacent node or a null move resulting in remaining in the currently occupied node. State s 0 determines an integer λ(s 0 ). If λ(s 0 ) = −1 then the agent makes a null move (i.e., remains at u 0 ). If λ(s 0 ) ≥ 0 then the agent leaves u 0 by port λ(s 0 ) modulo the degree of u 0 .
When incoming to a node v in state s ∈ S, the behavior of the agent is as follows. It reads the number i of the port through which it entered v and the degree d of v. The pair (i, d) ∈ Z^2 is an input symbol that causes the transition from state s to state s′ = π(s, (i, d)). If the previous move of the agent was null (i.e., the agent stayed at node v in state s), then the pair (−1, d) ∈ Z^2 is the input symbol read by the agent, that causes the transition from state s to state s′ = π(s, (−1, d)). In both cases s′ determines an integer λ(s′), which is either −1, in which case the agent makes a null move, or a non negative integer indicating a port number by which the agent leaves v (this port is λ(s′) mod d). The agent continues moving in this way, possibly infinitely. Since we consider the rendezvous problem for identical agents, we assume that agents are copies A and A′ of the same abstract state machine A, starting at two distinct nodes v A and v A′ , called the initial positions. We will refer to such identical machines as a pair of agents. It is assumed that the internal clocks of a pair of agents tick at the same rate. The clock of each agent starts when the agent starts executing its actions. Agents start from their initial positions with delay θ ≥ 0, controlled by an adversary. This means that the later agent starts executing its actions θ rounds after the first agent. Agents do not know which of them is first and what is the value of θ. We seek agents with small memory, measured by the number of states of the corresponding automaton, or equivalently by the number of bits on which these states are encoded. An automaton with K states requires Θ(log K) bits of memory. We say that a pair of agents solves the rendezvous problem with arbitrary delay (resp.
with simultaneous start) in a class of trees, if, for any tree in this class, for any port labeling of this tree, and for any initial positions that are not perfectly symmetrizable, both agents are eventually in the same node of the tree in the same round, regardless of the starting rounds of the agents (resp. provided that they start in the same round).

Preliminary results

Consider any tree T and the following sequence of trees constructed recursively: T 0 = T , and T i+1 is the tree obtained from T i by removing all its leaves. Let T′ = T j for the smallest j for which T j has at most two nodes. If T′ has one node, then this node is called the central node of T . If T′ has two nodes, then the edge joining them is called the central edge of T . A tree T with a port labeling µ is called symmetric, if there exists a non-trivial automorphism f of the tree (i.e., an automorphism f such that f (u) ≠ u, for some u ∈ V ) preserving this port labeling. If a tree with port numbers has a central node, then it cannot be symmetric. We define the "basic walk" starting at node v as the walk resulting from an agent performing the following actions: leave node v by port 0, and, perpetually, whenever entering a degree-d node by port i ∈ {0, . . . , d − 1}, leave that node by port (i + 1) mod d. Of course, a basic walk can be bounded to perform for t steps (instead of perpetually), in which case we refer to a basic walk of length t. Note that a basic walk of length 2(n − 1) in an n-node tree returns to its starting node. The following statement is an easy consequence of the techniques and results from [27]. Fact 2.1 There exists an agent accomplishing the following task in an arbitrary tree: using O(log m) bits of memory, it finds the number m of nodes in the tree, returns and stops at its initial position, and detects whether the tree has a central node, or has a central edge but is not symmetric, or has a central edge and is symmetric.
Moreover, • if the tree has a central node x, then the agent finds the minimum number of steps of a basic walk from its initial position to the central node x; • if the tree has a central edge e = {x, y} but is not symmetric, then, for every initial position, the agent finds the minimum number of steps of a basic walk from its initial position to the same extremity x of the central edge; moreover, it knows which port at this extremity corresponds to the central edge; • if the tree is symmetric, then the agent finds the minimum number of steps of a basic walk from its initial position to the farthest extremity of the central edge; moreover, it knows which port at this extremity corresponds to the central edge. In the sequel, the procedure accomplishing the above task starting at node v will be called Procedure Explo(v).

Rendezvous with arbitrary delay

It was proved in [14] that rendezvous with arbitrary delay can be accomplished in arbitrary n-node graphs using O(log n) bits of memory. On the other hand, observe that rendezvous requires Ω(log n) bits of memory in arbitrarily large trees with 2n + 1 nodes and maximum degree n. The lower bound examples are trees T n consisting of two nodes u and v of degree n, both linked to a common node w, and to n − 1 leaves. However, these trees have linear degree and the reason for the logarithmic memory requirement is simply that agents with smaller memory are incapable of having an output function λ with range of linear size, and thus the adversary can place one agent in node u, the other in a leaf adjacent to v, and distribute ports in such a way that none of the agents can ever get to node w, which makes rendezvous infeasible, in spite of non perfectly symmetrizable initial positions. This example leaves open the question if rendezvous with sub-logarithmic memory is possible, e.g., in all trees with constant maximum degree.
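The basic walk defined in the preliminaries can be sketched directly from its rule (leave by port 0, then always leave by port (i + 1) mod d); the adjacency encoding below is our own, hypothetical representation of a port-labelled tree.

```python
def basic_walk(adj, start, steps):
    # Basic walk in a port-labelled tree: leave `start` by port 0;
    # on entering a degree-d node by port i, leave by port (i+1) mod d.
    # adj[u][p] = (v, q): port p at u leads to v, entered there by port q.
    u, p = start, 0
    route = [u]
    for _ in range(steps):
        v, q = adj[u][p]
        u = v
        p = (q + 1) % len(adj[v])
        route.append(u)
    return route

# hypothetical 4-node star: center 0 with leaves 1, 2, 3
adj = {
    0: {0: (1, 0), 1: (2, 0), 2: (3, 0)},
    1: {0: (0, 0)},
    2: {0: (0, 1)},
    3: {0: (0, 2)},
}
print(basic_walk(adj, 1, 6))  # [1, 0, 2, 0, 3, 0, 1]
```

As stated in the text, a basic walk of length 2(n − 1) in an n-node tree returns to its starting node: here n = 4 and the walk is back at node 1 after 6 steps.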
It turns out that if the delay is arbitrary, this is not the case: rendezvous requires logarithmic memory even for the class of lines.

Theorem 3.1 Rendezvous with arbitrary delay in the n-node line requires agents with Ω(log n) bits of memory.

Proof. Let k be the number of memory bits of the agent and K = 2^k be its number of states. Place one agent at some node u of the infinite line where each edge has the same port number at its two extremities. In any interval of length K + 1 there exist two nodes at which the agent is in the same state. Let x_1 be the first node of the trajectory of the agent at which this happens and let s be the state of the agent at x_1. Let x_2 be the second node of the trajectory of the agent at which the agent is in state s. Let δ be the distance between u and x_1, and let d be the distance between x_1 and x_2. We construct the following instance of the rendezvous problem (see Fig. 1). The line is of length 8(K + 1) + 1. Let e be the central edge of this line. Assign number 0 to the ports leading to edge e from both its extremities, and assign the other port labels so that the ports leading to any edge at both its extremities get the same number, 0 or 1. (This is equivalent to a 2-edge-coloring of the line.) Let z be the endpoint of the line for which x_1 is between z and x_2. Let y_1 and y_2 be the symmetric images of x_1 and x_2, respectively, according to the axis of symmetry of the line. Let y_0 be the node distinct from y_2 at distance d from y_1. Let v be the node at distance δ from y_0 such that the vectors [x_1, u] and [y_0, v] have opposite directions. The other agent is placed at node v. Let t_1 be the number of rounds that the agent starting at u takes to reach x_1 in state s. Let t_2 be the number of rounds that the agent starting at v takes to reach y_1 in state s. Let θ = t_2 − t_1. The adversary delays the agent starting at u by θ rounds.
Hence the agent starting at u reaches x_1 at the same time t and in the same state as the agent starting at v reaches y_1. The points x_1 and y_1 are symmetric positions, hence rendezvous is impossible after time t; before time t the two agents cannot meet either. Together with the logarithmic upper bound from [14], the above result completely solves the problem of determining the minimum memory of the agents permitting rendezvous with arbitrary delay. Hence in the rest of the paper we concentrate on rendezvous with simultaneous start, thus assuming that the delay θ = 0.

[Figure 1: the line of length 8(K + 1) + 1 with central edge e; the nodes z, x_1, x_2, u and their symmetric images y_1, y_2, y_0, v; the distances d and δ; the four marked segments are each of length 2(K + 1).]

4 Rendezvous with simultaneous start

4.1 Upper bound

It turns out that the size of memory needed for rendezvous with simultaneous start depends on two parameters of the tree: the number n of nodes and the number ℓ of leaves. In fact we show that rendezvous in trees with n nodes and ℓ leaves can be done using only O(log ℓ + log log n) bits of memory. Thus, for trees with polylogarithmically many leaves, O(log log n) bits of memory are enough. In view of Theorem 3.1, this shows an exponential gap in the minimum memory size needed for rendezvous between the scenarios with arbitrary delay and with delay zero.

Theorem 4.1 There is a pair of identical agents solving rendezvous with simultaneous start in all trees, and using, for any integers n and ℓ, O(log ℓ + log log n) bits of memory in trees with at most n nodes and at most ℓ leaves.

The rest of the section is dedicated to the proof of Theorem 4.1. Let T be any tree, and let v and v' be the initial positions of the two agents in T. Let T̄ be the contraction of T, that is, the tree obtained from T by replacing every path in T joining two nodes of degree different from 2 by an edge (the ports of this edge correspond to the ports at both extremities of the contracted path). Notice that if T has ℓ leaves, then its contraction T̄ has at most 2ℓ − 1 nodes.
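The contraction T̄ is straightforward to compute. The sketch below (illustrative Python, ours, not the paper's formalism) keeps every node of degree different from 2 and follows each incident edge through the chain of degree-2 nodes to the next such node; the port order at each kept node is preserved, matching the statement that the ports of a contracted edge correspond to the ports at the extremities of the contracted path.

```python
def contract(adj):
    """Contraction of a tree: every maximal path whose internal nodes
    all have degree 2 is replaced by a single edge between its two
    extremities (nodes of degree != 2)."""
    keep = [u for u in adj if len(adj[u]) != 2]
    cadj = {u: [] for u in keep}
    for u in keep:
        for w0 in adj[u]:
            prev, w = u, w0
            while len(adj[w]) == 2:  # walk through the degree-2 chain
                prev, w = w, next(x for x in adj[w] if x != prev)
            cadj[u].append(w)        # next node of degree != 2
    return cadj
```

For a subdivided star with ℓ = 3 leaves, the contraction has 4 ≤ 2ℓ − 1 nodes, and a bare path contracts to a single edge between its two endpoints.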
Our rendezvous algorithm uses Procedure Explo, defined in Section 2, as a subroutine. More precisely, each of the two agents executes procedure Explo in T̄, ignoring the degree-2 nodes. That is, protocol Explo is modified so that whenever an agent enters a degree-2 node through port i ∈ {0, 1} in some state s, it leaves that node in the next round by port (i + 1) mod 2, in the same state s. In fact, there are some subtle additional details in the modified version of Explo, depending on the degree of the initial node. Specifically, let s_0 be the initial state of an agent executing Explo. Our modified agent starts in an additional state s*_0. If the initial node v has degree different from 2, then the agent enters state s_0 and starts Explo(v), ignoring the degree-2 nodes. Otherwise, the agent remains in state s*_0 and leaves the initial node through port 0. The agent then performs a basic walk, remaining in state s*_0, until it enters a node of degree 1 (i.e., a leaf of the tree T). At such a node, denoted by v_leaf, the agent enters state s_0 and starts Explo(v_leaf), ignoring the degree-2 nodes. We call Explo-bis the procedure Explo modified in this way. Observe that, in trees with no nodes of degree 2, the two protocols Explo and Explo-bis are executed identically. Hence, protocols Explo and Explo-bis are executed identically in T̄. Formally, for an initial position v, let us define v̄ = v if deg(v) ≠ 2, and v̄ = v_leaf otherwise. Then, the following holds.

Claim 4.1 Once an agent starting from some node v has reached node v̄, the states at nodes of degree different from 2 of the agent performing Explo-bis in T are identical to the states of an agent performing Explo in T̄ starting from node v̄.

Using this claim, rendezvous in T is achieved as follows.

Stage 1. Each of the two agents executes procedure Explo-bis from their respective initial positions v and v'. After having completed Explo-bis, each agent knows whether the contraction tree T̄ is symmetric or not.
(It is non-symmetric if either there is a central node, or there is a central edge and the two port-labeled trees obtained by removing the central edge in T̄ are not isomorphic; the isomorphism must preserve both the structure of the trees and the port labelings.)

Stage 2. The nature of the second stage differs according to whether T̄ is symmetric or not. In the non-symmetric case, the rendezvous protocol uses Fact 2.1, which states that the two agents performing Procedure Explo will eventually identify a single node x of T̄. Node x is identified by the number of steps of the basic walk performed in T̄ to reach that node from the initial position. Notice that, although Explo ensures (by Fact 2.1) that each agent returns to its initial position v after completing the procedure, Claim 4.1 guarantees only that the agent applying Explo-bis returns to the node v̄. Nevertheless, this is sufficient, since the length of the basic walk reaching x is the length of the one starting from node v̄, ignoring degree-2 nodes. Note that this length does not exceed twice the number of edges of T̄, and thus it can be encoded on O(log ℓ) bits. Therefore, each of the agents acts as follows:
• If there is a central node x in T̄, then rendezvous is achieved by waiting for the other agent at that node.
• Similarly, if there is a central edge in T̄, and the tree T̄ is not symmetric, then let x be the extremity of the central edge of T̄ identified by protocol Explo-bis; rendezvous is achieved by waiting for the other agent at that node.

The difficult and more challenging situation is when the contraction tree T̄ has a central edge with two indistinguishable extremities, in which case the ability to solve the rendezvous problem depends on the entire tree T and on the initial positions of the two agents in T. Achieving rendezvous is complicated by the constraint that the agents must use sub-logarithmic memory when ℓ is small.
The main part of the proof will be dedicated to describing how this task can actually be achieved in a memory-efficient manner.

Sub-stage 2.1 (for the case when T̄ is symmetric): Resynchronization. Recall that we are in a situation where each of the two agents has performed Explo-bis. An agent starting from node v ∈ T has not necessarily returned to node v, but to node v̄ ∈ T. Each agent executes Procedure Synchro, defined as follows. It starts the execution of a basic walk in T, leaving the current node v̄ by port 0. This basic walk ends when the agent is back at node v̄. This is simply ensured by counting the number of edge-traversals in T̄: the agent stops the basic walk after 2(ν − 1) edge-traversals in T̄, where ν denotes the number of nodes in T̄. Since ν ≤ 2ℓ − 1, counting up to O(ν) does not require more than O(log ℓ) bits. The basic walk proceeds with the following insertions: at each visited node w of degree different from 2 (i.e., at each node of T̄), the agent performs Explo-bis(w), except at the very last node of T̄ visited by the basic walk, that is, except when the agent returns, for the last time, to its initial position v̄. Since agents performing Procedure Synchro starting from different initial positions execute identical actions, only in different order, we have the following:

Claim 4.2 Two agents starting simultaneously at arbitrary initial positions v and v' in T finish Procedure Synchro with a delay β = |L − L'|, where L (resp., L') is the length of the basic walk in T leading from v to v̄ (resp., from v' to v̄').

Once the agents are resynchronized (their desynchronization is now precisely β), each of them proceeds to the second part of Stage 2.

Sub-stage 2.2 (for the case when T̄ is symmetric): Rendezvous in a virtual line. After the execution of Procedure Synchro, the agent with initial position v is back at v̄.
In view of Fact 2.1, since it has applied Explo(v̄) at the very beginning of the rendezvous protocol, the agent knows the number of steps of the basic walk from v̄ to the farthest extremity of the central edge of T̄. So, its first action in Sub-stage 2.2 is to go to this node, following a basic walk. We denote by v_far (resp., v'_far) the farthest extremity of the central edge of T̄ reached by the agent starting from v (resp., from v'). Since the contraction tree T̄ is symmetric, the two agents may end up in two different nodes of T, i.e., possibly v_far ≠ v'_far. For instance, in the n-node path with an odd number of edges, the two agents may end up at the two extremities of the path. Also, in the binomial tree with n nodes (cf. [13]), the two agents may end up at the two roots of the two binomial subtrees of T with n/2 nodes. Still, we prove that rendezvous is possible with little memory, assuming that the two initial positions of the agents were not perfectly symmetrizable in T. Actually, the first of the two key ingredients of our proof is showing how rendezvous can be achieved in the path (or line) using agents with O(log log n) bits of memory. In the lemma below, we consider blind agents in paths, that is, agents that ignore port labels. More precisely, when entering a node, such an agent can just distinguish between the incoming edge and the other edge (if any). Let P = (v_1, . . . , v_m) be an m-node path, and consider two identical blind agents initially located at nodes v_a and v_b, a < b. Rendezvous using blind agents is possible if and only if m is odd, or m is even and a − 1 ≠ m − b. Of course, a standard agent can simulate the behavior of a blind agent. When applying the lemma below with standard agents, we will make sure that the starting positions v_a and v_b are such that rendezvous is achievable even with blind agents.
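The feasibility condition for blind agents is a simple predicate. The sketch below (illustrative Python, with a function name of ours) encodes the condition as we read it: rendezvous is possible when m is odd, or when the two positions are not mirror images of each other on the path, i.e., a − 1 ≠ m − b.

```python
def blind_rendezvous_possible(m, a, b):
    """Two identical blind agents at nodes v_a and v_b (a < b) of an
    m-node path can be made to meet iff m is odd, or m is even and
    the positions are not mirror-symmetric (a - 1 != m - b)."""
    assert 1 <= a < b <= m
    return m % 2 == 1 or (a - 1) != (m - b)
```

For example, on a 6-node path the mirror-symmetric pair (v_2, v_5) is infeasible, whereas (v_2, v_4) is feasible, and on a 7-node path every pair is feasible.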
Lemma 4.1 There exists a pair of identical blind agents accomplishing rendezvous with simultaneous start in all paths, whenever it is possible, and using O(log log m) bits of memory in paths with at most m nodes.

Proof. Let P = (v_1, . . . , v_m) be an m-node path, and consider two identical blind agents initially located at nodes v_a and v_b, a < b. To achieve rendezvous, the two agents perform a sequence of traversals of P, executed at lower and lower speeds, aiming at eventually meeting each other at some node. More precisely, for an integer s ≥ 1, a traversal of the path is performed at speed 1/s if the agent remains idle s − 1 rounds before traversing any edge. For instance, traversing P from v_1 to v_m at speed 1/s requires (m − 1)s rounds. Our rendezvous algorithm for the line, called prime, performs as follows.

Begin
  start in an arbitrary direction; move at speed 1 until reaching one extremity of the path;
  p ← 2;
  While no rendezvous do
    traverse the entire path twice, at speed 1/p;
    p ← smallest prime larger than p;
End

We now prove that, whenever rendezvous is possible for blind agents (i.e., when m is odd, or m is even and a − 1 ≠ m − b), the two agents meet before the jth iteration of the loop, for some j = O(log m). Let p_j be the jth prime number (p_1 = 2). Hence the speed of each agent at the jth execution of the loop is 1/p_j. If rendezvous has not occurred during the jth execution of the loop, then the two agents have crossed the same edge, say e = {v_c, v_{c+1}}, at the same time t, in opposite directions. This can occur if, for instance, the agent initially at v_a moves to node v_1, traverses the path twice at each of the successive speeds 1/p_1, . . . , 1/p_{j−1}, and, c·p_j rounds after having eventually started walking at speed 1/p_j, traverses the edge e at time t, while the other agent initially at v_b moves to v_m, traverses the path twice at each of the successive speeds 1/p_1, . . . , 1/p_{j−1}, and, (m − c)·p_j rounds after having eventually started walking at speed 1/p_j, traverses the same edge e in the other direction at the same time t. In fact, there are four cases to consider, depending on the two starting directions of the two agents: towards v_1 or towards v_m. From these four cases, we get that one of the following four equalities must hold (the first one corresponds to the previously described scenario: the agent at v_a moves towards v_1 while the agent at v_b moves towards v_m), where S = 2(m − 1) Σ_{i=1}^{j−1} p_i:
• t = (a − 1) + S + c·p_j = (m − b) + S + (m − c)·p_j
• t = (a − 1) + S + (m − 1)·p_j + (m − c)·p_j = (b − 1) + S + c·p_j
• t = (m − a) + S + (m − c)·p_j = (m − b) + S + (m − 1)·p_j + c·p_j
• t = (m − a) + S + (m − c)·p_j = (b − 1) + S + c·p_j

Therefore we get that p_j divides |a − b|, or p_j divides |m − (a + b) + 1|. As a consequence, since the p_i's are primes, we get that if the two agents have not met after the jth execution of the loop, then Π_{i∈I} p_i divides |a − b| and Π_{i∈J} p_i divides |m − (a + b) + 1|, where I ∪ J = {1, . . . , j}. Therefore, since the p_i's are primes, Π_{i=1}^{j} p_i divides |a − b| · |m − (a + b) + 1|. Hence, if rendezvous is feasible, it must occur at or before the jth execution of the loop, where j is the largest index such that Π_{i=1}^{j} p_i divides |a − b| · |m − (a + b) + 1|. Thus it must occur at or before the jth execution of the loop, where j is the largest index such that Π_{i=1}^{j} p_i ≤ m². Let π(x) be the number of prime numbers smaller than or equal to x. On the one hand, we have Π_{i=1}^{j} p_i ≥ 2^{π(p_j)}. Hence, rendezvous must occur at or before the jth execution of the loop, where j is the largest index such that 2^{π(p_j)} ≤ m², i.e., π(p_j) ≤ 2 log m. On the other hand, from the Prime Number Theorem we get that π(x) ∼ x/ln(x), i.e., lim_{x→∞} π(x)/(x/ln(x)) = 1. Hence, for m large enough, π(x) ≥ x/(2 ln(x)).
Thus rendezvous must occur at or before the jth execution of the loop, where j is the largest index such that p_j / ln p_j ≤ 4 log m. From the above, we get that (1) rendezvous must occur whenever it is feasible, and (2) it occurs at or before the jth execution of the loop, where log p_j ≤ O(log log m). Since the next prime p can be found using O(log p) bits, e.g., by exhaustive search, we get that prime performs rendezvous using agents with O(log log m) bits of memory.

The (blind) agents described in Lemma 4.1 perform a protocol called prime. This protocol uses the infinite sequence of prime numbers. We denote by prime(i) the protocol prime modified so that it stops after having considered the ith prime number.

We now come back to our general rendezvous protocol in trees (with port numbers). Let ν = 2x be the number of nodes in the contraction tree T̄. (We have ν even, since T̄ is symmetric with respect to its central edge.) We define a (non-simple) path, called the rendezvous path and denoted by P, that will be used by the agents to rendezvous using protocol prime. To define P, let u and v be the two extremities of the path in T corresponding to the central edge of T̄. We have {v_far, v'_far} ⊆ {u, v}. This path is called the central path, and is denoted by C. Abusing notation, C will also be used as a shortcut for the instruction: "traverse C". Let bw (for "basic walk") be the instruction of performing the following actions: leave by port 0, and, perpetually, whenever entering a degree-d node by port i ∈ {0, . . . , d − 1}, leave that node by port (i + 1) mod d. Similarly, let cbw (for "counter basic walk") be the instruction of performing the following: leave by the port used to enter the current node at the previous step, and, perpetually, whenever entering a degree-d node by port i, leave that node by port (i − 1) mod d.
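Protocol prime of Lemma 4.1 is easy to simulate. The sketch below is an illustrative Python model of ours (the path is the set of nodes 1, . . . , m, function names and the simulation horizon are assumptions): it produces the round-by-round position of one blind agent, and a meeting is a round in which both agents occupy the same node. On a mirror-symmetric instance of an even path the two traces stay mirror images forever, while on feasible instances the agents meet for every choice of initial directions.

```python
def next_prime(p):
    """Smallest prime strictly larger than p (p >= 2), by trial division."""
    q = p + 1
    while any(q % d == 0 for d in range(2, int(q ** 0.5) + 1)):
        q += 1
    return q

def prime_schedule(m, start, direction, rounds):
    """Positions (one per round) of a blind agent running protocol prime
    on the path 1..m: walk at speed 1 to an extremity, then traverse the
    whole path twice at speed 1/p for successive primes p = 2, 3, 5, ...
    direction is +1 (towards v_m) or -1 (towards v_1)."""
    trace = [start]
    pos, d = start, direction
    while len(trace) <= rounds and 1 < pos < m:   # initial speed-1 phase
        pos += d
        trace.append(pos)
    p = 2
    while len(trace) <= rounds:
        for _ in range(2 * (m - 1)):              # one double traversal
            trace.extend([pos] * (p - 1))         # idle p-1 rounds per edge
            if pos == 1:
                d = 1
            elif pos == m:
                d = -1
            pos += d
            trace.append(pos)
        p = next_prime(p)
    return trace[:rounds + 1]

def first_meeting(m, a, b, da, db, horizon=5000):
    """First round at which the two agents share a node, or None."""
    ta = prime_schedule(m, a, da, horizon)
    tb = prime_schedule(m, b, db, horizon)
    for t, (x, y) in enumerate(zip(ta, tb)):
        if x == y:
            return t
    return None
```

Note that only same-node meetings count: two agents crossing the same edge in opposite directions in the same round do not rendezvous, which is exactly the bad event analyzed in the proof above.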
For j ≥ 1, let bw(j) (resp., cbw(j)) be the instruction to execute bw (resp., cbw) until j nodes of degree different from 2 have been visited. Let B_u (resp., B_v) be the path corresponding to the execution of bw(2(ν − 1)) from u (resp., from v). Note that a node can be visited several times by the walk, and thus neither B_u nor B_v is simple. Note also that since T̄ has ν nodes, it has ν − 1 edges, and thus both B_u and B_v are closed paths, i.e., their extremities are u and v, respectively. Let B'_u (resp., B'_v) be the path corresponding to the execution of cbw(2(ν − 1)) from u (resp., from v). We define

P = (B_u | C_{u→v} | B'_v | C_{v→u})^{5ℓ} | (B_u | C_{u→v} | B'_v),

where "|" denotes the concatenation of paths, C_{u→v} (resp., C_{v→u}) denotes the path C traversed from u to v (resp., from v to u), and, for a closed path Q, Q^α denotes Q concatenated with itself α times. The path P is well defined. Indeed, the sequence B_u | C_{u→v} | B'_v | C_{v→u} leads back to node u. Also, the two extremities of the path P are u and v. Now, the agents have no clue whether they are standing at u or at v. Nevertheless, we have the following: an agent standing at either u or v, and executing the appropriate sequence of bw, cbw, and C instructions, traverses the path P from one of its extremities to the other. Before establishing the claim, note that the instructions bw(2(ν − 1)) and cbw(2(ν − 1)) are meaningful, since agents can have counters of size O(log ℓ) bits, and they know ν in view of Fact 2.1. To establish the claim, it suffices to notice that the path P̄ reverse to P is given by

P̄ = (B_v | C_{v→u} | B'_u | C_{u→v})^{5ℓ} | (B_v | C_{v→u} | B'_u),

i.e., by the defining expression of P with the roles of u and v exchanged.

Begin
  for consecutive values i ≥ 1 do            /* outer loop */
    /* try rendezvous */
    for j = 0, 1, . . . , 2(ν − 1) do         /* first inner loop */
      perform bw(j); perform cbw(j);          /* back to the original position */
      perform prime(i) on the rendezvous path P;
    /* reset */
    go to the other extremity of the central path C;
    for j = 0, 1, . . . , 2(ν − 1) do         /* second inner loop */
      perform bw(j); perform cbw(j);          /* back to the original position */
    return to the original extremity of the central path C;
End

Figure 2: the rendezvous protocol for the case when the contraction tree T̄ is symmetric.

The two agents use protocol prime along the path P to achieve rendezvous. However, to make sure that rendezvous succeeds, the two agents must not start prime simultaneously at the two extremities of P, in order to break symmetry. Unfortunately, this requirement is not trivial to satisfy. Indeed, one can guarantee some upper bound, not exceeding n, on the delay between the times the two agents reach the two extremities of C (and thus of P as well), but no guarantee can be given for the minimum delay, which could be zero. This is because the delay does not depend on the tree T̄, but on the tree T. Hence two agents starting simultaneously in T may actually finish Stage 2.1 of our protocol (i.e., the execution of Synchro) at the same time, even if T is not symmetric, and even if T̄ is symmetric but the starting positions were not perfectly symmetrizable. The second key ingredient in our proof is a technique guaranteeing eventual desynchronization of the two agents. A high-level description of this technique is summarized in Figure 2. We describe this technique in detail below. The outer loop of the protocol in Figure 2 states how many consecutive prime numbers the protocol will test while performing prime along the path P. Performing prime(i) for successive values of i, instead of just prime, avoids a perpetual execution of prime in the case when the two agents started the execution of phase 2 at the same time from the two extremities of P. For every number i ≥ 1 of primes to be used in prime, the protocol performs two inner loops.
The first one is an attempt to achieve rendezvous along P, while the second one is used to upper-bound the delay between the two agents at the end of the outer loop, in order to guarantee that the next execution of the outer loop will start with a delay between the two agents that does not exceed n. During the first inner loop, an agent executing the protocol performs a series of basic walks of different lengths. For j = 0, the agent performs nothing. In this case, prime(i) is performed on P directly. For j > 0, the agent performs a basic walk in T to the jth node of degree different from 2 that it encounters along its walk. When j = 2(ν − 1), the basic walk is a complete one, traversing each edge of T twice. Each bw(j) is followed by a cbw(j), so as to come back to the original position at the same extremity of the path P. Once this is done, the agent performs prime(i) on P. The second inner loop aims at resetting the two agents. For this purpose, each agent goes to the other extremity of C, performs the same sequence of actions as the other agent had performed during its execution of the first inner loop, and returns to its original extremity of C. This enables resetting the two agents in the following sense.

Claim 4.4 At the end of every execution of the outer loop, the delay between the two agents equals |t − t'|, where t (resp., t') denotes the length of the basic walk leading from the initial position v (resp., v') to v_far (resp., v'_far).

To establish the claim, just notice that, during every execution of the outer loop, the sets of actions performed by the two agents inside the loop are identical, differing only by their order. Note that we can express |t − t'| = |(L + M) − (L' + M')|, where L and L' are defined in Claim 4.2, and M (resp., M') denotes the length of the basic walk leading from v̄ (resp., v̄') to v_far (resp., to v'_far). A consequence of Claim 4.4 is the following lemma.

Lemma 4.2 At the beginning of each execution of prime(i) within the outer loop, the delay between the two agents is at most |t − t'| + 16nℓ.

Proof. For j ≥ 1, let l_j and l'_j be the lengths (i.e., numbers of edges) of the paths in T between the (j − 1)th and the jth node of degree different from 2 that is met by the two agents, respectively, during their basic walks from their positions at the two extremities of C.
At the jth iteration of the first inner loop, one agent has traversed 2 Σ_{a=1}^{j} Σ_{b=1}^{a} l_b edges during bw(a) and cbw(a) for all a = 1, . . . , j. The other agent has traversed 2 Σ_{a=1}^{j} Σ_{b=1}^{a} l'_b edges during the same bw(a) and cbw(a). Since the number of rounds of prime(i) is the same for both agents, we get that their delay is at most

|t − t'| + 2 Σ_{a=1}^{j} Σ_{b=1}^{a} |l_b − l'_b| ≤ |t − t'| + 4(ν − 1) Σ_{b=1}^{2(ν−1)} |l_b − l'_b|
  ≤ |t − t'| + 4(ν − 1) Σ_{b=1}^{2(ν−1)} max{l_b, l'_b}
  ≤ |t − t'| + 8(ν − 1)n ≤ |t − t'| + 8νn ≤ |t − t'| + 16nℓ.

This completes the proof of the lemma.

Lemma 4.3 Assume that the two agents have not met when they arrive at v_far and v'_far after the execution of Synchro. For every i, if at the beginning of each execution of prime(i) the delay between the two agents is zero, then their initial positions were perfectly symmetrizable in T.

Proof. Fix i ≥ 1, and assume that, at the beginning of each of the 2ν − 1 executions of prime(i) in the outer loop, the delay between the two agents is zero. This implies that, using the same notation as in the proof of Lemma 4.2, for every j = 0, . . . , 2(ν − 1) we have

t + 2 Σ_{a=1}^{j} Σ_{b=1}^{a} l_b = t' + 2 Σ_{a=1}^{j} Σ_{b=1}^{a} l'_b.

Therefore, t = t' and l_j = l'_j for every j = 1, . . . , 2(ν − 1). These equalities imply that the tree T is topologically symmetric: there is an automorphism f of T which extends the port-preserving automorphism f̄ of T̄ mapping to each other the two symmetric subtrees T_1 and T_2 of T̄ hanging at the two extremities of the central edge of T̄ (f̄ induces an isomorphism between T_1 and T_2 preserving port labels). Indeed, since the two agents have not met when both of them arrive at v_far and v'_far, the fact that t = t' implies that v_far ≠ v'_far. We have v_far = f(v'_far). More generally, if x_j (resp., x'_j) denotes the jth node of T̄ reached by the basic walk starting at v_far (resp., v'_far), we have x_j = f(x'_j).
By definition, l_j (resp., l'_j) is the length of the path in T between x_{j−1} and x_j (resp., between x'_{j−1} and x'_j). Since l_j = l'_j, we get that the number of degree-2 nodes in T between x_{j−1} and x_j is the same as the number of degree-2 nodes in T between x'_{j−1} and x'_j. Thus f̄ can be extended to match the nodes of these two paths, preserving adjacencies. Since this holds for every j, we get that T is topologically symmetric. To sum up, the tree T is topologically symmetric (by the automorphism f), and its contraction tree T̄ is symmetric (by the automorphism f̄, which preserves port labels). A consequence of this fact is the following crucial observation. Let us consider the following port labeling µ. The port numbers at nodes of degree larger than 2 are the same as in T. The port labeling is completed arbitrarily at nodes of degree 2, preserving the following condition: if {z, z'} is an edge of T with at least one extremity z of degree 2, then the port number at z corresponding to {z, z'} is equal to the port number at f(z) corresponding to {f(z), f(z')}. Two basic walks starting from two symmetric positions in T generate two sequences of nodes such that the ith nodes of the two sequences are symmetric in T with respect to µ. Indeed, the "branching" nodes, i.e., the nodes of degree at least 3, are symmetric, and basic walks are oblivious of the port numbers at nodes of degree at most 2. The same observation holds for counter basic walks. It also holds if the port numbers of the outgoing edges from the starting nodes are not 0, under the simple assumption that they are equal. We use the above observation to show that the two nodes v and v' are perfectly symmetrizable. Since T is symmetric, it is sufficient to show that v and v' are topologically symmetric. The two agents have reached nodes v_far and v'_far after procedure Synchro, entering these nodes from the central path.
Indeed, on the one hand, v_far and v'_far are the farthest extremities of the central edge of T̄ coming from v̄ and v̄', respectively, and, on the other hand, the basic walks reaching these nodes are of minimum length (cf. Fact 2.1). Since v_far and v'_far are symmetric in T̄, the port numbers of the edges incident to these nodes on the central path are identical. Let i be this port number. Consider two counter basic walks of length t = t', starting from v_far and v'_far, leaving the starting node by port number i. These counter basic walks proceed backwards, first along the basic walk from v̄ to v_far for M steps, and next along the basic walk from v to v̄ for L steps. If v̄ = v then L = 0. If v̄ ≠ v, then the articulation between the two basic walks v → v̄ and v̄ → v_far occurs at v̄ = v_leaf. Since we have chosen this latter node to be a leaf, the sequence of the basic walks v → v̄ and v̄ → v_far is actually equal to a basic walk v → v_far of length t = L + M. Hence the counter basic walk of length t starting from v_far by port i leads to the initial position v. The same holds for the other counter basic walk, of length t' = t. Therefore, v and v' are topologically symmetric, and thus they are perfectly symmetrizable.

In view of the previous lemma, since v and v' are not perfectly symmetrizable, at each execution i of the outer loop there is an execution j of prime(i) for which the two agents do not start the second phase at the same time from their respective extremities of P. Moreover, by Lemma 4.2, during this jth execution of prime(i), the delay δ between the two agents is at most |t − t'| + 16nℓ. We have |t − t'| = |(L + M) − (L' + M')|, where the four parameters are lengths of basic walks. These four basic walks have lengths at most 2(n − 1). Hence, |t − t'| ≤ 4n. Therefore, δ ≤ 20nℓ. The length of the rendezvous path P is larger than 20nℓ, because B_u and B'_v are each of length at least 2n.
Therefore, at the first time when both agents are simultaneously in the jth execution of prime(i), they occupy two non perfectly symmetrizable positions in P: one is at one extremity of P, and the other is at some node of P at distance δ > 0 along P from the other extremity of P. Moreover, since the delay δ between the two agents is smaller than the length of the path P, the agent first executing prime(i) has not yet completed the first traversal of P when the other agent starts prime(i). As a consequence, the two agents act as if prime(i) were executed with both agents starting simultaneously at non perfectly symmetrizable positions in the path. Now, for small values of i, prime(i) may not achieve rendezvous in P. However, in view of Lemma 4.1, for some i = O(log n), rendezvous will be completed whenever the initial positions of the agents were not perfectly symmetrizable in T.

We complete the proof by checking that each agent uses O(log ℓ + log log n) bits of memory. Protocol Explo-bis executed in T consumes the same amount of memory as Protocol Explo executed in T̄. Since T̄ has at most 2ℓ − 1 nodes, Explo-bis uses O(log ℓ) bits of memory. During the second stage of the rendezvous, a counter is used for identifying the index j of the inner loops. Since j ≤ 2ν ≤ 4ℓ, this counter uses O(log ℓ) bits of memory. All executions of prime are independent, and performed one after the other. Thus, in view of Lemma 4.1, a total of O(log log n) bits suffice to implement these executions. The index i of the outer loop grows until it is large enough so that prime(i) achieves rendezvous in a path of length O(nℓ). Thus, i ≤ log(nℓ), and hence O(log log(nℓ)) = O(log log n) bits suffice to encode this index. This completes the proof of Theorem 4.1.

The lower bound Ω(log log n)

In this section we prove the lower bound Ω(log log n) on the size of the memory required for rendezvous with simultaneous start in the n-node line.
Theorem 4.2 Rendezvous with simultaneous start in the n-node line requires agents with Ω(log log n) bits of memory.

The rest of the section is dedicated to the proof of Theorem 4.2. For proving the theorem, note that we can restrict ourselves to lines whose edges are properly colored 1 and 2, so that the port numbers at the two extremities of an edge colored i are both set to i. In this setting, the transition function of an agent in a line is π : S × {1, 2} → S; it describes the transition that occurs when an agent enters a node of degree d ∈ {1, 2} in state s ∈ S. In this situation, the agent changes its state to s' = π(s, d), and performs the action λ(s'). The fact that one does not need to specify the incoming port number is a consequence of the edge-coloring, which implies that whenever an agent leaves a node by port i, it enters the next node by port i too. Let us fix two identical agents A and A', with finite state set S and transition function π. Let π' : S → S be the transition function applied at nodes of degree 2 of the edge-colored line, i.e., π'(s) = π(s, 2) for any s ∈ S. To π' is associated its transition digraph, whose nodes are the states in S, and in which there is an arc from s to s' if and only if s' = π'(s). This digraph is composed of a certain number of connected components, say r, each of them of a similar shape, that is, a circuit with inward trees rooted at the nodes of the circuit. Let C_1, . . . , C_r be the r circuits corresponding to the r connected components of the transition digraph, and let γ be the least common multiple of the numbers of arcs of these circuits, i.e., γ = lcm(|C_1|, . . . , |C_r|). We prove that there is a line of length proportional to 2γ + |S| in which A and A' do not rendezvous. First, observe that if A and A' cannot go at arbitrarily large distance from their starting positions, say they go at maximum distance D, then they cannot rendezvous in a line of length 4D + 4.
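Since π' is a function from S to S, its transition digraph is a functional graph, and the circuits C_1, . . . , C_r and the value γ can be computed mechanically. The following is an illustrative Python sketch of ours (π' is modeled as a dict mapping each state to its successor):

```python
from math import gcd

def circuit_lengths(pi):
    """Lengths of the circuits (cycles) in the transition digraph of a
    functional map pi : S -> S, given as a dict state -> successor."""
    lengths, seen = [], set()
    for s in pi:
        steps = {}               # state -> position along the current walk
        u, t = s, 0
        while u not in steps and u not in seen:
            steps[u] = t
            u, t = pi[u], t + 1
        if u in steps:           # closed a circuit not seen before
            lengths.append(t - steps[u])
        seen.update(steps)
    return lengths

def gamma(pi):
    """gamma = lcm(|C_1|, ..., |C_r|), as in the text."""
    out = 1
    for c in circuit_lengths(pi):
        out = out * c // gcd(out, c)
    return out
```

For instance, a map with one 3-circuit, one 2-circuit, a fixed point, and one state on an inward tree has circuit lengths {1, 2, 3} and γ = 6.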
Indeed, if the initial positions are two nodes at distance 2D + 1, and at distance at least D + 1 from the extremities of the line, then the ranges of activity of the two agents are disjoint, and thus they cannot meet (one edge is added at one extremity of the line to break the symmetry of the initial configuration). Thus from now on, we assume that both agents can go at arbitrarily large distance from their starting positions. For the purpose of establishing our result, place the two agents A and A′ on two adjacent nodes vA and vA′ of an infinite line (whose edges are properly colored). Let e = {vA, vA′} be the edge linking these two nodes.
• Let t0 be large enough so that A is at distance at least 2γ + |S| from its starting position after t0 steps. Since t0 > |S|, agent A at time t0 is in some state si ∈ Ci for some i ∈ {1, . . . , r}. In fact, since |Ci| divides γ, agent A has fully executed Ci at least twice.
We define the notion of extreme position for a circuit C. Let s, π′(s), . . . , π′^(k)(s) be a circuit, with s = π′^(k)(s). Assume that agent A starts in state s from node u0 at distance at least k + 1 from both extremities of the line. After having performed C exactly once, i.e., after k steps, agent A is at some node uk, back in state s. Let u0, u1, u2, . . . , uk be the k + 1 not necessarily distinct nodes visited by A while executing C. The extreme position for C starting in state s is the node uj satisfying dist(u0, uj) = dist(u0, uk) + dist(uk, uj), and dist(u0, uj) = max{dist(u0, um) : 0 ≤ m ≤ k}. Let ui be the extreme position for Ci starting in si, and let us define the following parameters:
• τ is the first time step among the |Ci| steps after step t0 at which A reaches ui.
• x is the distance of agent A at time τ from its original position, i.e., x = dist(ui, vA);
• τ′ = τ + 2γ;
• x′ is the distance of agent A′ at time τ′ from its original position vA′.
Note that, by symmetry of the port labeling, and from the fact that A and A′ are identical and operate in an infinite line, the two agents are on the two different sides of edge e at time τ. Note also that, between times τ and τ′, agent A keeps on going further away from its original position, by repeating the sequence of actions determined by the circuit Ci. Hence x′ ≥ x. Actually, we have x′ > x. We can therefore consider the following construction.

Initial configuration of the agents. Let L be the properly 2-edge-colored line of length x + x′ + 1, formed by x edges, followed by one edge called e, and followed by x′ edges. The two agents A and A′ are placed at the two extremities vA and vA′ of e, the same way they were placed at the two extremities of e in the infinite line used to define x and x′. Since x ≠ x′, the initial positions of the agents are not perfectly symmetrizable. Nevertheless, we prove that the two agents never meet in L, and thus rendezvous is not accomplished. The adversary imposes no delay between the starting times of the agents, i.e., they both start acting simultaneously from their respective initial positions. One ingredient used for proving that the two agents do not rendezvous is the following general result, that we state as a lemma for further reference.

Lemma 4.4 (Parity Lemma) Consider two (not necessarily identical) agents initially at odd distance in a tree T, that start acting simultaneously in T. Let t ≥ 1. Assume that one agent stays idle q times in the time interval [1, t], while the other one stays idle q′ times in the same time interval. If |q − q′| is even, then the two agents are at odd distance at step t.

Proof. At any step, if one agent moves while the other one stays idle, then the parity of their distance changes. On the other hand, if both agents move or both stay idle, then the parity of their distance remains unchanged. Let a be the number of steps in [1, t] at which both agents were idle simultaneously.
Then the parity of the inter-agent distance changes exactly (q − a) + (q′ − a) times in the time interval [1, t]. Since |q − q′| is even, q + q′ is also even, and thus (q − a) + (q′ − a) is even too. Thus the parity of the inter-agent distance is the same at time 1 and at time t.

The Parity Lemma enables us to establish the following.

Lemma 4.5 The two agents A and A′ do not meet during the first τ steps.

Proof. Since the agents perform the same sequence of actions in the time interval [1, τ], we get that, for any t ≤ τ, the two agents have remained idle the same number of times in the time interval [1, t], and thus, by the Parity Lemma (with q = q′), they are at odd distance at step t, since they originally started at distance 1. In other words, the two agents remain permanently at odd distance during the time interval [1, τ]. Thus they cannot meet during this time interval.

At step τ, the behavior of the two agents becomes different. Indeed, agent A is reaching one extremity of L, while A′ is visiting a degree-2 node. We analyze the states of the two agents when they reach extremities of L during the execution of their protocol. Assume that agent A reaches the extremities of L at least k ≥ 1 times. Let σj be the state of agent A when it reaches any of the two extremities of L for the jth time, 1 ≤ j ≤ k.

Lemma 4.6 Agent A′ reaches the extremities of L at least k times. Moreover, if σ′j is the state of agent A′ when it reaches any of the two extremities of L for the jth time, 1 ≤ j ≤ k, then σ′j = σj.

Proof. First, let us consider the case k = 1. After time τ (i.e., after the time when A reaches one extremity of L, in state σ1), agent A′ keeps on repeating the execution of circuit Ci. This leads A′ to eventually reach the other extremity of L.
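The Parity Lemma depends only on the idle/move pattern of the two agents: in a tree, a step changes the inter-agent distance by ±1 when exactly one agent moves, and by an even amount (0 or ±2) otherwise. The sketch below is an illustration, not part of the proof; it checks the lemma on random idle/move schedules under that abstraction.

```python
import random

def distance_parity_odd(idle_A, idle_B):
    """Start at odd distance; return True iff the distance parity is
    still odd after processing the two idle/move schedules."""
    odd = True  # initial distance is odd (e.g. 1)
    for a_idle, b_idle in zip(idle_A, idle_B):
        if a_idle != b_idle:   # exactly one agent moved: parity flips
            odd = not odd
    return odd

random.seed(0)
for _ in range(1000):
    t = random.randint(1, 50)
    A = [random.random() < 0.5 for _ in range(t)]  # idle flags of agent A
    B = [random.random() < 0.5 for _ in range(t)]  # idle flags of agent B
    q, qp = sum(A), sum(B)
    if (q - qp) % 2 == 0:                 # hypothesis of the Parity Lemma
        assert distance_parity_odd(A, B)  # distance parity stays odd
print("Parity Lemma verified on random schedules")
```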
Recall that we have considered the behavior of A after time t0, when A was in state si ∈ Ci, and that τ was defined as the first time step among the |Ci| steps after step t0 at which A reaches the extreme position ui of Ci starting at si. Since τ′ = τ + 2γ, and since |Ci| divides γ, we get that agent A′ is in state σ1 at time τ′. Moreover, since |Ci| divides γ, A′ reaches the extreme position of Ci at time τ′, and therefore time τ′ is the first time when A′ is at distance x′ from e. Therefore σ′1 = σ1, and the lemma holds for k = 1. For k > 1, the proof is by induction on the number of times j agent A reaches an extremity of L, j = 1, . . . , k. By the previous arguments, the result holds for j = 1. When agent A reaches an extremity of L for the jth time, it is in state σj. By the induction hypothesis, when agent A′ reaches an extremity of L for the jth time, it is also in state σ′j = σj. Therefore, the configuration for A and A′ between two consecutive hits of an extremity of L is actually symmetric. As a consequence, σ′j+1 = σj+1, and the lemma holds.

After time τ the walks of the agents can be decomposed into two different types of subwalks. A traversal period for an agent is the subwalk between two consecutive hits of two different extremities of L by this agent. A bouncing period for an agent is a subwalk (possibly empty) performed between two consecutive traversal periods. Roughly, a bouncing period for an agent is a walk during which the agent starts from one extremity of L and repeatedly bounces at (i.e., leaves and goes back to) that extremity until it eventually starts the next traversal period. Globally, an agent starts from its original position, performs some initial steps (τ for A, and τ′ for A′), and then alternates between bouncing periods and traversal periods. These periods are not synchronous between the two agents because there is a delay of 2γ between them.
Nevertheless, by Lemma 4.6, if one agent bounces at one extremity of L during its kth bouncing period, then the other agent bounces at the other extremity of L during its kth bouncing period. Similarly, if one agent traverses L during its kth traversal period, then the other agent traverses L in the opposite direction during its kth traversal period. In fact, Lemma 4.6 guarantees that the two agents perform symmetric actions with a delay of 2γ, alternately bouncing at the two different extremities of L and traversing L in two opposite directions. The following lemma holds, by establishing that whenever one agent is in a bouncing period, the two agents are far apart.

Lemma 4.7 The two agents A and A′ do not meet whenever one of them is in a bouncing period.

Proof. There is a delay of 2γ between the two agents. During such a period of time, an agent can travel a distance of at most 2γ. Also, during its bouncing period, an agent cannot go at distance more than |S| from the extremity of the line where it is bouncing. On the other hand, by the definitions of t0 and τ > t0, we have x > 2γ + |S|, and thus x′ > 2γ + |S| as well. Therefore, when one of the agents is in a bouncing period, the distance between the two agents is at least 2γ + |S|, and thus they cannot meet.

The following lemma holds, by using the fact that γ is the least common multiple of the circuit lengths in the transition digraph of the agents, and by applying the Parity Lemma.

Lemma 4.8 The two agents A and A′ do not meet whenever both of them are in a traversal period.

Proof. When both agents are in a traversal period, they started their period in the same state, by Lemma 4.6. Hence, they are eventually both performing the same circuit of states Ci. This occurs after the same initial phase, of duration at most |S|: this duration corresponds to the time it takes to reach the circuit Ci from the initial state at which the agents started their traversal period.
As we already observed in the proof of Lemma 4.7, since x′ > x > 2γ + |S|, the two agents are far apart during the transition period before both of them have entered the circuit Ci executed during the considered traversal. Thus we can now assume that the two agents are performing Ci, traversing the line in two opposite directions. We prove that they cross along an edge, and hence they do not meet. Since the delay between the two agents is 2γ and since γ is a multiple of |Ci| for any i ∈ {1, . . . , r}, the delay is an even multiple of the length |Ci| of the circuit performed at this traversal. As a consequence, at any step of their traversal periods, the number of times one agent was idle while the other was not is even. The Parity Lemma, applied with |q − q′| even, then ensures that the distance between the two agents remains odd during the whole traversal period. Thus they do not meet.

Proof of Theorem 4.2. The two agents start with an initial period that lasts τ steps. By Lemma 4.5 they do not meet during this period. Then the two agents alternate between bouncing periods and traversal periods. By Lemma 4.7, they do not meet when one of the two agents is in a bouncing period. When the two agents are in a traversal period, Lemma 4.8 guarantees that they do not meet. Hence the two agents never meet, in spite of starting from non perfectly symmetrizable positions, and thus they do not rendezvous in L. By the construction of the line L and the setting of γ, we get that L is of length O(|S|^|S|). Therefore, rendezvous with simultaneous start in lines of size at most n requires agents with at least Ω(log log n) memory bits.

The lower bound Ω(log ℓ)

In this section we prove that rendezvous with simultaneous start in trees with ℓ leaves requires Ω(log ℓ) bits of memory, even in the class of trees with maximum degree 3.
Together with the lower bound of Ω(log log n) on the memory size needed for rendezvous in the n-node line established in Theorem 4.2, this result proves that our upper bound O(log ℓ + log log n) from Section 4.1 cannot be improved even for trees of maximum degree 3.

Theorem 4.3 For infinitely many integers ℓ, there exists an infinite family of trees with ℓ leaves for which rendezvous with simultaneous start requires Ω(log ℓ) bits of memory.

Proof. Consider an integer ℓ = 2i, for any even i. Consider an (i+1)-node path with a distinguished endpoint called the root. To every internal node x of the path attach either a new leaf, or a new node y of degree 2 with a new leaf z attached to it. There are 2^(i−1) = 2^(ℓ/2−1) possible resulting non-isomorphic rooted trees. Call them side trees. Note that non-isomorphic is meant here without the port-preserving clause: there are that many rooted trees which cannot be mapped to each other by any isomorphism, not only by any isomorphism preserving port numbering. Fix an arbitrary port labeling in every side tree. For any pair of side trees T′ and T′′ and for any positive even integer m, consider the tree T consisting of side trees T′ and T′′ whose roots are joined by a path of length m + 1 (i.e., there are m added nodes of degree two). Ports at the added nodes of degree two are labeled as follows: both ports at the central edge have label 0, and ports at both ends of any other edge of the line have the same label, 0 or 1. (This corresponds to a 2-edge-coloring of the line.) Call any tree resulting from this construction a two-sided tree. Any such tree has ℓ leaves and maximum degree 3. For any two-sided tree, consider initial positions of the agents at the nodes u and v of the joining path adjacent to the roots of its side trees. Consider agents with k bits of memory (thus with K = 2^k states).
A tour of a side tree associated with an initial position (u or v) is the part of the trajectory of the agent in this side tree between consecutive visits of the associated initial position. Observe that the maximum duration D of a tour is smaller than K · (3i). Indeed, the number of nodes in a side tree is at most 3i − 1, hence the number of possible pairs (state, node of the side tree) is at most K · (3i − 1). A tour of longer duration would cause the agent to leave the same node twice in the same state, implying an infinite loop. Such a tour could not come back to the initial position. For a fixed agent with the set S of states and a fixed side tree, we define the function p : S → S as follows. Let s be the state in which the agent starts a tour. Then p(s) is the state in which the agent finishes the tour. Now we define the function q : S → S × {1, . . . , D}, called the behavior function, by the formula q(s) = (p(s), t), where t is the number of rounds needed to complete the tour when starting in state s. The number of possible behavior functions is at most F = (KD)^K. A behavior function depends on the side tree for which it is constructed. Suppose that k ≤ (1/3) log ℓ. We have D < 3Ki = (3/2)Kℓ, hence KD < (3/2)K²ℓ. Hence we have log K + log log(KD) ≤ k + log log((3/2)K²ℓ) ≤ k + 2 + log k + log log ℓ, which is smaller than (2/3) log ℓ for sufficiently large k. It follows that K log(KD) < ℓ^(2/3) < ℓ/2 − 1, which implies F = (KD)^K < 2^(ℓ/2−1). Thus the number of possible behavior functions is strictly smaller than the total number of side trees. It follows that there are two side trees T1 and T2 for which the corresponding behavior functions are equal. Consider two instances of the rendezvous problem, for any length m + 1 of the joining line, where m is a positive even integer: one in which both side trees are equal to T1, and the other in which one side tree is T1 and the other is T2.
Rendezvous is impossible in the first instance, because in this instance the initial positions of the agents form a symmetric pair of nodes with respect to the given port labeling. Consider the second instance, in which the initial positions of the agents do not form a perfectly symmetrizable pair. Because of the symmetry of the labeling of the joining line, the agents cannot meet inside any of the side trees. Indeed, when one of them is in one tree, the other one is in the other tree. Since the behavior function associated with side trees T1 and T2 is the same, the agents leave these trees always at the same time and in the same state. Hence they cannot meet on the line, in view of its odd length and symmetric port labeling. This implies that they never meet, in spite of initial positions that are not perfectly symmetrizable. Hence rendezvous in the second instance requires Ω(log ℓ) bits of memory.
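The counting at the heart of the proof, namely that with k ≤ (1/3) log ℓ bits the number F = (KD)^K of behavior functions is strictly smaller than the 2^(ℓ/2−1) side trees, can be evaluated numerically. The sketch below (an illustration only; the constants follow the bounds stated in the proof) compares the logarithms of both quantities for a few values of ℓ.

```python
from math import log2

def behavior_functions_vs_side_trees(ell):
    """Compare log2(F) = K*log2(K*D) with log2(2^(ell/2 - 1)) = ell/2 - 1
    for k = floor(log2(ell)/3)."""
    k = int(log2(ell) / 3)          # memory bound k <= (1/3) log ell
    K = 2 ** k                      # number of states, K = 2^k
    D = 3 * K * (ell // 2)          # max tour duration: D < K * 3i, i = ell/2
    return K * log2(K * D), ell / 2 - 1

# For sufficiently large ell the left side is far below the right side,
# so some two side trees must share a behavior function.
for ell in [2**10, 2**14, 2**18]:
    lhs, rhs = behavior_functions_vs_side_trees(ell)
    print(ell, lhs < rhs)
```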
11,632
1101.4068
2951248193
A mode of a multiset @math is an element @math of maximum multiplicity; that is, @math occurs at least as frequently as any other element in @math . Given a list @math of @math items, we consider the problem of constructing a data structure that efficiently answers range mode queries on @math . Each query consists of an input pair of indices @math for which a mode of @math must be returned. We present an @math -space static data structure that supports range mode queries in @math time in the worst case, for any fixed @math . When @math , this corresponds to the first linear-space data structure to guarantee @math query time. We then describe three additional linear-space data structures that provide @math , @math , and @math query time, respectively, where @math denotes the number of distinct elements in @math and @math denotes the frequency of the mode of @math . Finally, we examine generalizing our data structures to higher dimensions.
Range Mode Query Naturally, a mode of the query interval @math can be computed directly without preprocessing using any of the methods described in . @cite_10 describe data structures that provide constant-time queries using @math space and @math -time queries using @math space, for any fixed @math . Petersen and Grabowski @cite_28 improve the first bound to constant time and @math space and Petersen @cite_8 improves the second bound to @math -time queries using @math space, for any fixed @math . When @math , the data structure of @cite_10 requires only linear space and provides @math query time. Although its space requirement is almost linear in @math as @math approaches @math , the data structure of Petersen @cite_8 requires @math space. Furthermore, the construction becomes impractical as @math approaches @math (the number of levels in a hierarchical set of tables and hash functions approaches @math as @math ) and no obvious modification reduces its space requirement to @math . @cite_37 prove a lower bound of @math query time for any data structure that uses @math memory cells of @math bits.
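As a point of reference for the preprocessing-based structures surveyed above, a range mode query can always be answered without any preprocessing by a single scan of the queried subarray. The sketch below is such a baseline (not one of the cited data structures); it runs in time linear in the range length per query.

```python
from collections import Counter

def range_mode(A, i, j):
    """Return a mode of A[i..j] (0-based, inclusive) by direct counting.

    O(j - i + 1) time and space per query; no preprocessing.
    """
    counts = Counter(A[i:j + 1])
    # Maximum by multiplicity; ties broken arbitrarily, as the
    # problem definition allows any mode to be returned.
    return max(counts, key=counts.get)

A = [2, 5, 2, 3, 5, 5, 2]
print(range_mode(A, 0, 3))  # mode of [2, 5, 2, 3] -> 2
print(range_mode(A, 1, 5))  # mode of [5, 2, 3, 5, 5] -> 5
```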
{ "abstract": [ "Given a list of n items and a function defined over sub-lists, we study the space required for computing the function for arbitrary sub-lists in constant time. For the function mode we improve the previously known space bound O(n^2 / log n) to O(n^2 log log n / log^2 n) words. For median the space bound is improved to O(n^2 (log log n)^2 / log^2 n) words from O(n^2 log^(k) n / log n), where k is an arbitrary constant and log^(k) is the iterated logarithm.", "The mode of a multiset of labels is a label that occurs at least as often as any other label. The input to the range mode problem is an array A of size n. A range query [i, j] must return the mode of the subarray A[i], A[i+1],...,A[j]. We prove that any data structure that uses S memory cells of w bits needs Ω(log n / log(Sw/n)) time to answer a range mode query. Secondly, we consider the related range k-frequency problem. The input to this problem is an array A of size n, and a query [i, j] must return whether there exists a label that occurs precisely k times in the subarray A[i], A[i+1],...,A[j]. We show that for any constant k > 1, this problem is equivalent to 2D orthogonal rectangle stabbing, and that for k = 1 it is no harder than four-sided 3D orthogonal range emptiness. Finally, we consider approximate range mode queries. A c-approximate range mode query must return a label that occurs at least 1/c times as often as the mode.
We describe a linear-space data structure that supports 3-approximate range mode queries in constant time, and a data structure that uses O(n/e) space and supports (1 + e)-approximate queries in O(log(1/e)) time.", "We consider algorithms for preprocessing labelled lists and trees so that, for any two nodes u and v, we can answer queries of the form: What is the mode or median label in the sequence of labels on the path from u to v.", "We investigate the following problem: Given a list of n items and a function defined over lists of these items, generate a bounded amount of auxiliary information such that range queries asking for the value of the function on sub-lists can be answered within a certain time bound. For the function \"mode\" we improve the previously known time bound O(n^ε log n) to O(n^ε) with space O(n^(2-2ε)), where 0 ≤ ε < 1/2. We improve the space bound O(n^2 log log n / log n) for an O(1)-time solution to O(n^2 / log n). For the function \"median\" the space bound O(n^2 log log n / log n) is improved to O(n^2 log^(k) n / log n) for an O(1)-time solution, where k is an arbitrary constant and log^(k) is the iterated logarithm." ], "cite_N": [ "@cite_28", "@cite_37", "@cite_10", "@cite_8" ], "mid": [ "2090221650", "1550061821", "1624497191", "1484905448" ] }
0
1101.4068
2951248193
A mode of a multiset @math is an element @math of maximum multiplicity; that is, @math occurs at least as frequently as any other element in @math . Given a list @math of @math items, we consider the problem of constructing a data structure that efficiently answers range mode queries on @math . Each query consists of an input pair of indices @math for which a mode of @math must be returned. We present an @math -space static data structure that supports range mode queries in @math time in the worst case, for any fixed @math . When @math , this corresponds to the first linear-space data structure to guarantee @math query time. We then describe three additional linear-space data structures that provide @math , @math , and @math query time, respectively, where @math denotes the number of distinct elements in @math and @math denotes the frequency of the mode of @math . Finally, we examine generalizing our data structures to higher dimensions.
@cite_33 consider approximate range mode queries, in which the objective is to return an element whose frequency is at least @math . They give a data structure that requires @math space and answers approximate range mode queries in @math time for any fixed @math , as well as data structures that provide constant-time queries for @math , using space @math , @math , and @math , respectively. @cite_37 give a linear-space data structure that supports approximate range mode queries in constant time for @math , and an @math -space data structure that supports approximate range mode queries in @math time for any fixed @math .
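The contract of a c-approximate range mode query can be made explicit with a brute-force checker: a returned label is valid if it occurs at least 1/c times as often as a true mode. The sketch below illustrates the query semantics only, not the cited constant-time data structures.

```python
from collections import Counter

def is_c_approximate_mode(A, i, j, x, c):
    """Check whether x is a valid c-approximate mode of A[i..j]:
    freq(x) must be at least (1/c) * freq(true mode)."""
    counts = Counter(A[i:j + 1])
    mode_freq = max(counts.values())
    return counts.get(x, 0) * c >= mode_freq

A = [1, 2, 2, 2, 3, 3, 1]
# The true mode of A[0..6] is 2 (frequency 3); label 3 occurs twice.
print(is_c_approximate_mode(A, 0, 6, 3, 2))  # 2*2 >= 3 -> True
print(is_c_approximate_mode(A, 0, 6, 1, 1))  # exact query: 2 < 3 -> False
```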
{ "abstract": [ "The mode of a multiset of labels is a label that occurs at least as often as any other label. The input to the range mode problem is an array A of size n. A range query [i, j] must return the mode of the subarray A[i], A[i+1],...,A[j]. We prove that any data structure that uses S memory cells of w bits needs Ω(log n / log(Sw/n)) time to answer a range mode query. Secondly, we consider the related range k-frequency problem. The input to this problem is an array A of size n, and a query [i, j] must return whether there exists a label that occurs precisely k times in the subarray A[i], A[i+1],...,A[j]. We show that for any constant k > 1, this problem is equivalent to 2D orthogonal rectangle stabbing, and that for k = 1 it is no harder than four-sided 3D orthogonal range emptiness. Finally, we consider approximate range mode queries. A c-approximate range mode query must return a label that occurs at least 1/c times as often as the mode. We describe a linear-space data structure that supports 3-approximate range mode queries in constant time, and a data structure that uses O(n/e) space and supports (1 + e)-approximate queries in O(log(1/e)) time.", "We consider data structures and algorithms for preprocessing a labelled list of length n so that, for any given indices i and j, we can answer queries of the form: What is the mode or median label in the sequence of labels between indices i and j. Our results are on approximate versions of this problem. Using @math space, our data structure can find in @math time an element whose number of occurrences is at least α times that of the mode, for some user-specified parameter 0 < α < 1. Data structures are proposed to achieve constant query time for α = 1/2, 1/3 and 1/4, using storage space of O(n log n), O(n log log n) and O(n), respectively. Finally, if the elements are comparable, we construct an @math space data structure that answers approximate range median queries.
Specifically, given indices i and j, in O(1) time, an element whose rank is at least @math and at most @math is returned, for 0 < α < 1." ], "cite_N": [ "@cite_37", "@cite_33" ], "mid": [ "1550061821", "2111741778" ] }
0
1101.4101
1546437762
The context of a software developer is something hard to define and capture, as it represents a complex network of elements across different dimensions that are not limited to the work developed on an IDE. We propose the definition of a software developer context model that takes into account all the dimensions that characterize the work environment of the developer. We are especially focused on what the software developer context encompasses at the project level and how it can be captured. The experimental work done so far shows that useful context information can be extracted from project management tools. The extraction, analysis and availability of this context information can be used to enrich the work environment of developers with additional knowledge to support their work.
In the same line of task management and recovery, Parnin and Görg @cite_5 propose an approach for capturing the context relevant for a task from a programmer's interactions with an IDE, which is then used to help the programmer recover the mental state associated with a task and to facilitate the exploration of source code using recommendation systems. Their approach is focused on analyzing the interactions of the programmer with the source code, in order to create techniques for supporting recovery and exploration. Again, this approach is largely restricted to the IDE and the developer's interaction with it.
{ "abstract": [ "Software developers often work on multiple simultaneous projects. Even when only a single project is underway, everyday distractions interrupt the development effort. Consequently, developers spend significant effort pursuing recovery of their context. By context, we focus on the classes and methods within the code that are relevant to a specific bug being fixed or enhancement made. Context is reified by a program in terms of a set of presentations (windows containing source code, command executions, and data files); however, it is not enough to save the latest context. Even when working on a single task, programmers flip between contexts as they extend their understanding, and when they decide on a change, they may have to visit several contexts in order to address all possible ripple effects. Consequently, we would like to record a history of contexts and be able to retrieve them as demanded by the current task. We introduce a novel technique to obtain a context, consisting of a set of methods relevant for the current task, from a programmer’s interactions with an IDE. Using this context, we demonstrate how to improve the ability of a programmer to recover the mental state associated with tasks and to facilitate the exploration of software through recommendation systems." ], "cite_N": [ "@cite_5" ], "mid": [ "2150601094" ] }
Context Capture in Software Development
The term context has an intuitive meaning for humans, but due to this intuitive connotation it remains vague and generalist. Furthermore, the interest in the many roles of context comes from different fields such as literature, philosophy, linguistics and computer science, with each field proposing its own view of context [1]. The term context typically refers to the set of circumstances and facts that surround the center of interest, providing additional information and increasing understanding. The context-aware computing concept was first introduced by Schilit and Theimer [2], who refer to context as "location of use, the collection of nearby people and objects, as well as the changes to those objects over time". In a similar way, Brown et al. [3] define context as location, identities of the people around the user, the time of day, season, temperature, etc. In a more generic definition, Dey and Abowd [4] define context as "any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves". In software development, the context of a developer can be viewed as a rich and complex network of elements across different dimensions that are not limited to the work developed on an IDE (Integrated Development Environment). Due to the difficulty of approaching such a challenge, there is no unique notion of what it really covers and how it can be truly exploited. With the increasing dimension of software systems, software development projects have grown in complexity and size, as well as in the number of functionalities and technologies involved. During their work, software developers need to cope with a large amount of contextual information that is typically not captured and processed in order to enrich their work environment.
Our aim is the definition of a software developer context model that takes into account all the dimensions that characterize the work environment of the developer. We propose that these dimensions can be represented as a layered model with four main layers: personal, project, organization and domain. Also, we believe that a context model needs to be analyzed from different perspectives: capture, modeling, representation and application. This way, each layer of the proposed context model will be founded on a definition of what context capture, modeling, representation and application should be for that layer. This work is especially focused on the project layer of the software developer context model. We give a definition of what the context model encompasses at the project layer and present some experimental work on the context capture perspective. The remainder of the paper starts with an overview of the software developer context model we envision. In section 3 we describe the current work on context capture, some preliminary experimentation and the prototype developed. An overview of related work is given in section 4. Finally, section 5 concludes the paper and points out some directions for future work.

Context Model

The software developer context model we propose takes into account all the dimensions that comprise the software developer work environment. This way, we have identified four main dimensions: personal, project, organization and domain. As shown in figure 1, these dimensions form a layered model and will be described from four different perspectives: context capture, context modeling, context representation and context application. The personal layer represents the context of the work a developer has at hand at any point in time, which can be defined as a set of tasks. In order to accomplish these tasks, the developer has to deal with various kinds of resources at the same time, such as source code files, specification documents, bug reports, etc.
These resources are typically dispersed through different places and systems, although being connected by a set of explicit and implicit relations that exist between them.

The project layer focuses on the context of the project, or projects, in which the developer is somehow involved. A software development project is an aggregation of a team, a set of resources and a combination of explicit and implicit knowledge that keeps the project running. The team is responsible for accomplishing tasks, which end up consuming and producing resources. The relations that exist between people and resources are the glue that makes everything work. The project layer represents the people and resources, as well as their relations, of the software development projects in which the developer is included. The organization layer takes into account the organization context to which the developer belongs. Similarly to a project, an organization is made up of people, resources and their relations, but in a much more complex network. While in a project the people and resources are necessarily connected due to the requisites of their work, in a software development organization these projects easily become separate islands. The knowledge and competences developed in each project may be of interest in other projects, and valuable synergies can be created when this information is available. The organization layer represents the organizational context that surrounds a developer. The domain layer takes into account the knowledge domain, or domains, in which the developer works. This layer goes beyond the project and organization levels and includes a set of knowledge sources that stand outside these spheres. Nowadays, a typical developer uses the Internet to search for information and to keep informed of the advances in the technologies s/he works with. These actions are based on services and communities, such as source code repositories, development forums, news websites, blogs, etc.
These knowledge sources cannot be detached from the developer context and are integrated in the domain layer of our context model. For instance, due to the dynamic nature of the software development field, the developer must be able to gather knowledge from sources that go beyond the limits of the organization, either to follow the technological evolution or to search for help whenever needed. The four context dimensions described before can be analyzed through four different perspectives: context capture, which represents the sources of context information and the way this information is gathered in order to build the context model; context modeling, which represents the different dimensions, entities and aspects of the context information (conceptual model); context representation, which represents the technologies and data structures used to represent the context model (implementation); and context application, which represents how the context model is used and the objectives behind its use. Context Capture Our current work is focused on the project layer of our developer context model, and we will discuss our work at this level from the different perspectives presented before. Concerning context capture, the main sources of contextual information that feed the developer context model at the project level are project management tools. These tools store a large amount of explicit and implicit information about the resources produced during a software development project, how the people involved relate with these resources and how the resources relate to each other. We are focusing on two types of tools: Version Control Systems (VCS) and Issue Tracking Systems (ITS). As shown in figure 2, the former deals with resources and their changes, while the latter deals with tasks. These systems store valuable information about how developers, tasks and resources relate and how these relations evolve over time. We are especially interested in revision logs and tasks. 
Briefly described, a revision log tells us that a set of changes was applied by a developer to a set of resources in the repository. A task commonly represents a problem report and the respective fix. The network of developers, resources, tasks and their relations will be used to build our context model at the project level. This way, the context model of the developer, from a project point of view, will be modeled as a set of implicit and explicit relations, as shown in figure 3. The lines with a filled pattern represent the explicit relations and those with a dotted pattern represent the implicit ones. The developers are explicitly related with revisions, as they are the ones who commit the revisions, and with tasks, as each task is assigned to a developer. The relation between tasks and resources is not explicit, but we believe it can be identified by analyzing the information that describes tasks and revisions. The proximity between developers can be inferred by analyzing the resources on which they share work. Also, the resources can be implicitly related by analyzing how often they are changed together. In order to extract relations from the information stored in project management tools, that information is first imported and stored locally for analysis. The prototype developed uses a database to store both the imported data and the extracted relations. In the next phase, we intend to represent these relations and connection elements in an ontology [5], which will gradually evolve into a global developer context model ontology. We believe that representing the context model in an ontology and formalising it using Semantic Web [6] technologies promotes knowledge sharing and reusability, since these technologies are standards accepted by the scientific community. The context information extracted at the project level will be used to inform the developer about the network that links people, resources and tasks on the project s/he works on. 
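The explicit relations above (developer–revision, developer–task, revision–resource) can be held in a very simple labeled relation store; the implicit ones are then derived from it. A minimal sketch, where the class, node names and relation labels are illustrative and not the paper's actual implementation:

```python
from collections import defaultdict

# Minimal relation store for the project-layer context model.
# Node kinds in play: developer, revision, task, resource.
class RelationStore:
    def __init__(self):
        # label -> set of (source, target) pairs
        self.relations = defaultdict(set)

    def add(self, label, a, b):
        self.relations[label].add((a, b))

    def related(self, label, a):
        # All targets reachable from `a` via relation `label`.
        return {b for (x, b) in self.relations[label] if x == a}

store = RelationStore()
# Explicit relations, recovered directly from VCS/ITS logs.
store.add("committed", "alice", "r101")      # developer -> revision
store.add("assigned", "alice", "task-42")    # developer -> task
store.add("changes", "r101", "Parser.java")  # revision -> resource

print(store.related("committed", "alice"))   # {'r101'}
```

The implicit relations (task–resource, resource–resource, developer–developer) would be computed over such a store and added back under their own labels.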
This information can be prepared to facilitate consulting and presented to the developer, easily accessible in her/his working environment. Relation Extraction We have implemented connectors that allowed us to collect all the desired information from the Subversion and Bugzilla systems, as they are among the most popular in use. Through the collected information, we could already perceive a set of explicit relations: which resources are created/changed by which developers, and which tasks have been assigned to which developers (see relations number 1 and 2 in figure 3). There is also a set of implicit relations that would be valuable if disclosed. Our approach to extract implicit relations between resources and tasks (see relation number 3 in figure 3) relies on the analysis of the text provided by revision messages, task summaries and task comments. It is common to find references to task identifiers in revision messages, denoting that the revision was made in the context of a specific task. Also, task summaries and comments commonly reference specific resources, either because a problem has been identified in a specific class or because an error stack trace is attached to the task summary to help developers identify the source of the problem. Taking this into account, we have defined three algorithms to find resource/task and task/revision relations: -Resource/Task (I). For each resource referenced in a revision, the respective name was searched in all task summaries. The search was performed using the file name and, in case it was a source code file, the fully qualified name (package included) and the class name separately. -Resource/Task (II). For each resource referenced in a revision, the respective name was searched in task comments. This search was performed as described for the previous relation. -Task/Revision. For each task, the respective identification number was searched in revision messages. 
The search was performed using common patterns such as "<id>", "bug <id>" and "#<id>". The implicit relations between resources (see relation number 4 in figure 3) can be extracted by analyzing resources that are changed together very often. Revisions are often associated with specific goals, such as the implementation of a new feature or the correction of a bug. When developers commit revisions, they typically change a set of resources at a time, those that needed to be changed in order to accomplish a goal. When two resources are changed together in various revisions, this means that these resources are somehow related, even if they do not have any explicit relation between them, because when one of them is modified there is a high probability that the other also needs to be modified. The proximity between developers (see relation number 5 in figure 3) can also be inferred by analyzing the resources on which they share work. Developers can share work when they commit revisions on the same resources or when they are assigned to tasks that are related to the same resources. This way, if two developers often make changes, or perform tasks, on the same resources, they are likely to be related. Preliminary Results To validate the relation extraction algorithms, these were tested against two open-source projects from the Eclipse foundation: By applying the relation extraction algorithms to the information related with these two projects, we have gathered the results represented in table 1. The table shows the number of distinct relations extracted using each one of the algorithms in the two projects analyzed. The results show that a large amount of implicit relations can be extracted from the analysis of the information associated with tasks and revisions. These relations complement the context model we are building by connecting tasks with related resources. 
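The Task/Revision pattern search and the resource co-change analysis described above can be sketched in a few lines. This is a minimal illustration, not the actual prototype: the sample revisions, task identifiers and exact regular expression are assumptions standing in for real Subversion logs and Bugzilla tasks.

```python
import re
from collections import Counter
from itertools import combinations

# Hypothetical sample data standing in for revision logs and task ids.
revisions = [
    {"id": "r1", "message": "fix bug 42 in parser", "files": ["Parser.java", "Lexer.java"]},
    {"id": "r2", "message": "refactor #42 cleanup",  "files": ["Parser.java", "Lexer.java"]},
    {"id": "r3", "message": "update docs",           "files": ["README.md"]},
]
task_ids = ["42", "99"]

def task_revision_relations(revisions, task_ids):
    # Task/Revision: search for "<id>", "bug <id>" and "#<id>"
    # in revision messages, as described in the text.
    rels = set()
    for task in task_ids:
        pattern = re.compile(r"(?:\bbug\s+|#|\b)" + re.escape(task) + r"\b")
        for rev in revisions:
            if pattern.search(rev["message"]):
                rels.add((task, rev["id"]))
    return rels

def cochange_counts(revisions):
    # Resource/Resource: count how often each pair of resources
    # is changed together; frequent pairs are implicitly related.
    counts = Counter()
    for rev in revisions:
        for a, b in combinations(sorted(rev["files"]), 2):
            counts[(a, b)] += 1
    return counts

print(task_revision_relations(revisions, task_ids))
print(cochange_counts(revisions))
```

On this sample, task 42 is linked to r1 and r2 (via "bug 42" and "#42"), and Parser.java/Lexer.java co-change twice; a real deployment would threshold the co-change count before asserting a relation.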
With a more detailed analysis we have identified some problems with the algorithms, creating some relations that do not correspond to an effective correlation between the two entities analyzed. These problems are mainly related with string matching inconsistencies and can be corrected with minor improvements in the way expressions are searched in the text. Prototype We have developed a prototype, in the form of an Eclipse plug-in, to show how the context information can be integrated into an IDE and used to help developers. In figure 4 we show a screenshot of the prototype, where we can see an Eclipse view named "Context" that shows context information related to a specific resource. Each time the developer opens a resource, this view is updated with a list of developers, resources and tasks that are related with that resource through the relations we have described before. This way the developer can easily gather information about what resources are likely to be related with that resource, what other tasks affected that resource in the past, and what other developers may be of help if extra information is needed. The availability of this information inside the IDE, where developers perform most of their work, increases developers' awareness and reduces their effort in finding information that would otherwise be hidden and dispersed. Conclusions We have presented our approach to a software developer context model. Our context model is based on a layered structure, taking into account four main dimensions of the work environment of a software developer: personal, project, organization and domain. The current work is focused on the project layer of the software developer context model. We have discussed this layer in more detail and presented preliminary experimentation on the context capture perspective. The results show that it is possible to relate tasks and revisions/resources using simple relation extraction algorithms. 
These relations were then used in a plug-in for Eclipse to unveil relevant information to the developer. As future work we plan to improve the prototype we have developed with better visualization, search and filtering functionality. We also want to explore the use of ontologies to represent the developer context model. The remaining layers of the context model will be addressed iteratively, as an extension of the work already developed. Finally, we intend to test our approach with developers working on real-world projects.
2,542
1101.0562
2115587794
We consider the sizing of network buffers in IEEE 802.11-based networks. Wireless networks face a number of fundamental issues that do not arise in wired networks. We demonstrate that the use of fixed-size buffers in 802.11 networks inevitably leads to either undesirable channel underutilization or unnecessary high delays. We present two novel dynamic buffer-sizing algorithms that achieve high throughput while maintaining low delay across a wide range of network conditions. Experimental measurements demonstrate the utility of the proposed algorithms in a production WLAN and a lab test bed.
The foregoing work is in the context of wired links, and to our knowledge the question of buffer sizing for 802.11 wireless links has received almost no attention in the literature. Exceptions include @cite_12 @cite_21 @cite_26 . Sizing of buffers for voice traffic in WLANs is investigated in @cite_12 . The impact of fixed buffer sizes on TCP flows is studied in @cite_21 . In @cite_26 , TCP performance with a variety of AP buffer sizes and 802.11e parameter settings is investigated. In @cite_31 @cite_29 , initial investigations are reported related to the eBDP algorithm and the ALT algorithm of the A* algorithm. We substantially extend this previous work with theoretical analysis, experimental implementations in both a testbed and a production WLAN, and additional NS simulations.
{ "abstract": [ "There has been an explosive growth in the use of wireless LANs (WLANs) to support network applications ranging from web-browsing and file-sharing to voice calls. It is difficult to optimally configure WLAN components, such as access points (APs), to meet the quality-of-service requirements of the different applications, as well as ensuring flow-level fairness. Recent work has shown that the widely-deployed IEEE 802.11 MAC Distributed Coordination Function (DCF) is biased against downstream flows. The new IEEE 802.11e standard introduces QoS mechanisms, such as Enhanced Distributed Channel Access (EDCA), that allow this unfairness to be addressed. So far, only limited work has been done to evaluate the impact of these MAC protocols on TCP-based applications. In this paper, through ns-2 simulations, we evaluate the impact of EDCA on TCP application traffic consisting of both long and short-lived TCP flows. We find that the performance of TCP applications is very dependent upon the settings of the EDCA parameters and buffer lengths at the AP. We also show that the performance of the admission control strategy employed depends on the buffer lengths at the AP and the traffic intensity.", "We consider the provision of access point buffers in WLANs. We first demonstrate that the default use of static buffers in WLANs leads to either undesirable channel under-utilisation or unnecessary high delays, which motivates the use of dynamic buffer sizing. Although adaptive algorithms have been proposed for wired Internet, a number of fundamental new issues arise in WLANs which necessitates new algorithms to be designed. These new issues include the fact that channel bandwidth is time-varying, the mean service rate is dependent on the level of channel contention, and packet inter-service times vary stochastically due to the random nature of CSMA CA operation. 
We propose an adaptive sizing algorithms which is demonstrated to be able to maintain high throughput efficiency whilst achieving low delay.", "As local area wireless networks based on the IEEE 802.11 standard see increasing public deployment, it is important to ensure that access to the network by different users remains fair. While fairness issues in 802.11 networks have been studied before, this paper is the first to focus on TCP fairness in 802.11 networks in the presence of both mobile senders and receivers. In this paper, we evaluate extensively through analysis, simulation, and experimentation the interaction between the 802.11 MAC protocol and TCP. We identify four different regions of TCP unfairness that depend on the buffer availability at the base station, with some regions exhibiting significant unfairness of over 10 in terms of throughput ratio between upstream and downstream TCP flows. We also propose a simple solution that can be implemented at the base station above the MAC layer that ensures that different TCP flows share the 802.11 bandwidth equitably irrespective of the buffer availability at the base station.", "We consider the task of sizing buffers for TCP flows in 802.11e WLANs. A number of fundamental new issues arise compared to wired networks. These include that the mean service rate is dependent on the level of channel contention and packet inter-service times vary stochastically due to the random nature of CSMA CA operation. We find that these considerations lead naturally to a requirement for adaptation of buffer sizes in response to changing network conditions.", "The use of 802.11 to transport delay sensitive traffic is becoming increasingly common. This raises the question of the tradeoff between buffering delay and loss in 802.11 networks. We find that there exists a sharp transition from the low-loss, low-delay regime to high-loss, high-delay operation. 
Given modest buffering at the access point, this transition determines the voice capacity of a WLAN and its location is largely insensitive to the buffer size used." ], "cite_N": [ "@cite_26", "@cite_29", "@cite_21", "@cite_31", "@cite_12" ], "mid": [ "2113832452", "2038262034", "2111579580", "2114069676", "2134712343" ] }
0
1011.2152
2953378433
A central theme in distributed network algorithms concerns understanding and coping with the issue of locality. Inspired by sequential complexity theory, we focus on a complexity theory for distributed decision problems. In the context of locality, solving a decision problem requires the processors to independently inspect their local neighborhoods and then collectively decide whether a given global input instance belongs to some specified language. This paper introduces several classes of distributed decision problems, proves separation among them and presents some complete problems. More specifically, we consider the standard LOCAL model of computation and define LD (for local decision) as the class of decision problems that can be solved in constant number of communication rounds. We first study the intriguing question of whether randomization helps in local distributed computing, and to what extent. Specifically, we define the corresponding randomized class BPLD, and ask whether LD=BPLD. We provide a partial answer to this question by showing that in many cases, randomization does not help for deciding hereditary languages. In addition, we define the notion of local many-one reductions, and introduce the (nondeterministic) class NLD of decision problems for which there exists a certificate that can be verified in constant number of communication rounds. We prove that there exists an NLD-complete problem. We also show that there exist problems not in NLD. On the other hand, we prove that the class NLD#n, which is NLD assuming that each processor can access an oracle that provides the number of nodes in the network, contains all (decidable) languages. For this class we provide a natural complete problem as well.
Locality issues have been thoroughly studied in the literature, via the analysis of various construction problems, including @math -coloring and Maximal Independent Set (MIS) @cite_39 @cite_40 @cite_24 @cite_35 @cite_20 @cite_0 @cite_19 , Minimum Spanning Tree (MST) @cite_31 @cite_26 @cite_33 , Maximal Matching @cite_38 , Maximum Weighted Matching @cite_8 @cite_23 @cite_9 , Minimum Dominating Set @cite_28 @cite_7 , Spanners @cite_4 @cite_32 @cite_12 , etc. For some problems (e.g., coloring @cite_40 @cite_24 @cite_19 ), there are still large gaps between the best known results on specific families of graphs (e.g., bounded degree graphs) and on arbitrary graphs.
{ "abstract": [ "Coloring the nodes of a graph with a small number of colors is one of the most fundamental problems in theoretical computer science. In this paper, we study graph coloring in a distributed setting. Processors of a distributed system are nodes of an undirected graph G. There is an edge between two nodes whenever the corresponding processors can directly communicate with each other. We assume that distributed coloring algorithms start with an initial m-coloring of G. In the paper, we prove new strong lower bounds for two special kinds of coloring algorithms. For algorithms which run for a single communication round---i.e., every node of the network can only send its initial color to all its neighbors---, we show that the number of colors of the computed coloring has to be at least Ω(Δ2 log2 Δ+ log log m). If such one-round algorithms are iteratively applied to reduce the number of colors step-by-step, we prove a time lower bound of Ω(Δ log2 Δ+ log*m) to obtain an O(Δ)-coloring. The best previous lower bounds for the two types of algorithms are Ω(log log m) and Ω(log*m), respectively.", "", "", "We use the recently introduced advising scheme framework for measuring the difficulty of locally distributively computing a Minimum Spanning Tree (MST). An (m,t)-advising scheme for a distributed problem P is a way, for every possible input I of P, to provide an \"advice\" (i.e., a bit string) about I to each node so that: (1) the maximum size of the advices is at most m bits, and (2) the problem P can be solved distributively in at most t rounds using the advices as inputs. In case of MST, the output returned by each node of a weighted graph G is the edge leading to its parent in some rooted MST T of G. Clearly, there is a trivial (log n,0)-advising scheme for MST (each node is given the local port number of the edge leading to the root of some MST T), and it is known that any (0,t)-advising scheme satisfies t ≥ Ω (√n). 
Our main result is the construction of an (O(1),O(log n))-advising scheme for MST. That is, by only giving a constant number of bits of advice to each node, one can decrease exponentially the distributed computation time of MST in arbitrary graph, compared to algorithms dealing with the problem in absence of any a priori information. We also consider the average size of the advices. On the one hand, we show that any (m,0)-advising scheme for MST gives advices of average size Ω(log n). On the other hand we design an (m,1)-advising scheme for MST with advices of constant average size, that is one round is enough to decrease the average size of the advices from log(n) to constant.", "We consider distributed algorithms for approximate maximum matching on general graphs. Our main result is a randomized @math -approximation distributed algorithm for maximum weighted matching, whose running time is @math for any constant @math , where @math is the number of nodes in the graph. This is, to the best of our knowledge, the first log-time distributed algorithm that achieves constant approximation for maximum weighted matching on general graphs. In addition, we consider the dynamic case, where nodes are inserted and deleted one at a time. For unweighted dynamic graphs, we give a distributed algorithm that maintains a @math -approximation in @math time for each node insertion or deletion for any constant @math . For weighted dynamic graphs we give a constant-factor approximation distributed algorithm that runs in constant time for each insertion or deletion.", "Abstract A simple parallel randomized algorithm to find a maximal independent set in a graph G = ( V , E ) on n vertices is presented. Its expected running time on a concurrent-read concurrent-write PRAM with O (| E | d max ) processors is O (log n ), where d max denotes the maximum degree. On an exclusive-read exclusive-write PRAM with O (| E |) processors the algorithm runs in O (log 2 n ). 
Previously, an O (log 4 n ) deterministic algorithm was given by Karp and Wigderson for the EREW-PRAM model. This was recently (independently of our work) improved to O (log 2 n ) by M. Luby. In both cases randomized algorithms depending on pairwise independent choices were turned into deterministic algorithms. We comment on how randomized combinatorial algorithms whose analysis only depends on d -wise rather than fully independent random choices (for some constant d ) can be converted into deterministic algorithms. We apply a technique due to A. Joffe (1974) and obtain deterministic construction in fast parallel time of various combinatorial objects whose existence follows from probabilistic arguments.", "We present improved algorithms for finding approximately optimal matchings in both weighted and unweighted graphs. For unweighted graphs, we give an algorithm providing >(1-e-approximation in O(log n) time for any constant e > 0. This result improves on the classical 1 over 2-approximation due to Israeli and Itai. As a by-product, we also provide an improved algorithm for unweighted matchings in bipartite graphs. In the context of weighted graphs, we give another algorithm which provides (1 over 2-e) approximation in general graphs in O(log n)time. The latter result improves on the known (1 over 4-e-approximation in O(log n)time.", "This article presents a fast distributed algorithm to compute a smallk-dominating setD(for any fixedk) and to compute its induced graph partition (breaking the graph into radiuskclusters centered around the vertices ofD). The time complexity of the algorithm isO(klog*n). Smallk-dominating sets have applications in a number of areas, including routing with sparse routing tables, the design of distributed data structures, and center selection in a distributed network. The main application described in this article concerns a fast distributed algorithm for constructing a minimum-weight spanning tree (MST). 
On ann-vertex network of diameterd, the new algorithm constructs an MST in time, improving on previous results.", "Whether local algorithms can compute constant approximations of NP-hard problems is of both practical and theoretical interest. So far, no algorithms achieving this goal are known, as either the approximation ratio or the running time exceed O(1), or the nodes are provided with non-trivial additional information. In this paper, we present the first distributed algorithm approximating a minimum dominating set on a planar graph within a constant factor in constant time. Moreover, the nodes do not need any additional information.", "", "For an unweighted undirected graph G = (V,E), and a pair of positive integers α ≥ 1, β ≥ 0, a subgraph G′ = (V,H), H ⊂eqE, is called an (α,β)-spanner of G if for every pair of vertices u,v ∊ V, dist G′(u,v) ≤ α ⋅ dist G (u,v) + β.", "", "", "We present efficient algorithms for computing very sparse low distortion spanners in distributed networks and prove some non-trivial lower bounds on the trade-off between time, sparseness, and distortion. All of our algorithms assume a synchronized distributed network, where relatively short messages may be communicated in each time step. Our first result is an O(log n)1+o(1)-time algorithm for finding a (2O(log* n)log n)-spanner with size O(n). Besides being nearly optimal in time and distortion, this algorithm appears to be the first that constructs a O(n)-size skeleton without requiring unbounded length messages or time proportional to the diameter of the network. Our second result is a new class of efficiently constructible (α,β)-spanners called Fibonacci spanners whose distortion improves with the distance being approximated. At their sparsest Fibonacci spanners can have nearly linear size O(n(log log n)φ) where φ = 1+☂5 2 is the golden ratio. 
As the distance increases the Fibonacci spanner's multiplicative distortion passes through four discrete stages, moving from logarithmic to doubly logarithmic, then into a period where it is constant, tending to 3, followed by another period tending to 1. On the lower bound side we prove that many recent sequential spanner constructions have no efficient counterparts in distributed networks, even if the desired distortion only needs to be achieved on the average or for a tiny fraction of the vertices. In particular, any distance preservers, purely additive spanners, or spanners with sublinear additive distortion must either be very dense, slow to construct, or have very weak guarantees on distortion.", "This paper presents a lower bound of @math on the time required for the distributed construction of a minimum-weight spanning tree (MST) in weighted n-vertex networks of diameter @math , in the bounded message model. This establishes the asymptotic near-optimality of existing time-efficient distributed algorithms for the problem, whose complexity is @math .", "In this paper, we present fast and fully distributed algorithms for matching in weighted trees and general weighted graphs. The time complexity as well as the approximation ratio of the tree algorithm is constant. In particular, the approximation ratio is 4. For the general graph algorithm we prove a constant ratio bound of 5 and a polylogarithmic time complexity of O(log2 n).", "We study deterministic, distributed algorithms for two weak variants of the standard graph coloring problem. We consider defective colorings, i.e., colorings where nodes of a color class may induce a graph of maximum degree d for some parameter d>0. We also look at colorings where a minimum number of multi-chromatic edges is required. For an integer k>0, we call a coloring k-partially proper if every node v has at least min k,deg(v) neighbors with a different color. 
We show that for all d∈ 1,...,Δ , it is possible to compute a O(Δ2 d2)-coloring with defect d in time O(log*n) where Δ is the largest degree of the network graph. Similarly, for all k∈ 1,...,Δ , a k-partially proper O(k2)-coloring can be computed in O(log*n) rounds. As an application of our weak defective coloring algorithm, we obtain a faster deterministic algorithm for the standard vertex coloring problem on graphs with moderate degrees. We show that in time O(Δ+log*n), a (Δ+1)-coloring can be computed, a task for which the best previous algorithm required time O(Δ*log(Δ) + log*n). The same result holds for the problem of computing a maximal independent set.", "", "This paper studies the problem of constructing a minimum-weight spanning tree (MST) in a distributed network. This is one of the most important problems in the area of distributed computing. There is a long line of gradually improving protocols for this problem, and the state of the art today is a protocol with running time O(Λ(G)+n⋅log∗n) due to Kutten and Peleg [S. Kutten, D. Peleg, Fast distributed construction of k-dominating sets and applications, J. Algorithms 28 (1998) 40–66; preliminary version appeared in: Proc. of 14th ACM Symp. on Principles of Distributed Computing, Ottawa, Canada, August 1995, pp. 20–27], where Λ(G) denotes the diameter of the graph G. Peleg and Rubinovich [D. Peleg, V. Rubinovich, A near-tight lower bound on the time complexity of distributed MST construction, in: Proc. 40th IEEE Symp. on Foundations of Computer Science, 1999, pp. 253–261] have shown that Ω˜(n) time is required for constructing MST even on graphs of small diameter, and claimed that their result “establishes the asymptotic near-optimality” of the protocol of [S. Kutten, D. Peleg, Fast distributed construction of k-dominating sets and applications, J. Algorithms 28 (1998) 40–66; preliminary version appeared in: Proc. of 14th ACM Symp. on Principles of Distributed Computing, Ottawa, Canada, August 1995, pp. 
20–27]. In this paper we refine this claim, and devise a protocol that constructs the MST in Ω˜(μ(G,ω)+n) rounds, where μ(G,ω) is the MST-radius of the graph. The ratio between the diameter and the MST-radius may be as large as Θ(n), and, consequently, on some inputs our protocol is faster than the protocol of [S. Kutten, D. Peleg, Fast distributed construction of k-dominating sets and applications, J. Algorithms 28 (1998) 40–66; preliminary version appeared in: Proc. of 14th ACM Symp. on Principles of Distributed Computing, Ottawa, Canada, August 1995, pp. 20–27] by a factor of Ω˜(n). Also, on every input, the running time of our protocol is never greater than twice the running time of the protocol of [S. Kutten, D. Peleg, Fast distributed construction of k-dominating sets and applications, J. Algorithms 28 (1998) 40–66; preliminary version appeared in: Proc. of 14th ACM Symp. on Principles of Distributed Computing, Ottawa, Canada, August 1995, pp. 20–27]. As part of our protocol for constructing an MST, we develop a protocol for constructing neighborhood covers with a drastically improved running time. The latter result may be of independent interest." ], "cite_N": [ "@cite_35", "@cite_20", "@cite_38", "@cite_4", "@cite_8", "@cite_39", "@cite_23", "@cite_26", "@cite_7", "@cite_28", "@cite_32", "@cite_19", "@cite_40", "@cite_12", "@cite_33", "@cite_9", "@cite_24", "@cite_0", "@cite_31" ], "mid": [ "2109368894", "", "", "1975595616", "1993913882", "1964089073", "2030863680", "1534484868", "2077158870", "2592539685", "2056518357", "", "", "2032654248", "2015819397", "2626065578", "2157562188", "", "2030982625" ] }
Local Distributed Decision *
Distributed computing concerns a collection of processors which collaborate in order to achieve some global task. With time, two main disciplines have evolved in the field. One discipline deals with timing issues, namely, uncertainties due to asynchrony (the fact that processors run at their own speed, and possibly crash), and the other concerns topology issues, namely, uncertainties due to locality constraints (the lack of knowledge about far away processors). Studies carried out by the distributed computing community within these two disciplines were to a large extent problem-driven. Indeed, several major problems considered in the literature concern coping with one of the two uncertainties. For instance, in the asynchrony-discipline, Fischer, Lynch and Paterson [14] proved that consensus cannot be achieved in the asynchronous model, even in the presence of a single fault, and in the locality-discipline, Linial [28] proved that (∆ + 1)-coloring cannot be achieved locally (i.e., in a constant number of communication rounds), even in the ring network. One of the significant achievements of the asynchrony-discipline was its success in establishing unifying theories in the flavor of computational complexity theory. Some central examples of such theories are failure detectors [6,7] and the wait-free hierarchy (including Herlihy's hierarchy) [18]. In contrast, despite considerable progress, the locality-discipline still suffers from the absence of a solid basis in the form of a fundamental computational complexity theory. Obviously, defining some common cost measures (e.g., time, message, memory, etc.) enables us to compare problems in terms of their relative cost. Still, from a computational complexity point of view, it is not clear how to relate the difficulty of problems in the locality-discipline. Specifically, if two problems have different kinds of outputs, it is not clear how to reduce one to the other, even if they cost the same. 
Inspired by sequential complexity theory, we focus on decision problems, in which one is aiming at deciding whether a given global input instance belongs to some specified language. In the context of distributed computing, each processor must produce a boolean output, and the decision is defined by the conjunction of the processors' outputs, i.e., if the instance belongs to the language, then all processors must output "yes", and otherwise, at least one processor must output "no". Observe that decision problems provide a natural framework for tackling fault-tolerance: the processors have to collectively check whether the network is fault-free, and a node detecting a fault raises an alarm. In fact, many natural problems can be phrased as decision problems, like "is there a unique leader in the network?" or "is the network planar?". Moreover, decision problems occur naturally when one is aiming at checking the validity of the output of a computational task, such as "is the produced coloring legal?", or "is the constructed subgraph an MST?". Construction tasks such as exact or approximated solutions to problems like coloring, MST, spanner, MIS, maximum matching, etc., received enormous attention in the literature (see, e.g., [5,25,26,28,30,31,32,38]), yet the corresponding decision problems have hardly been considered. The purpose of this paper is to investigate the nature of local decision problems. Decision problems seem to provide a promising approach to building up a distributed computational theory for the locality-discipline. Indeed, as we will show, one can define local reductions in the framework of decision problems, thus enabling the introduction of complexity classes and notions of completeness. We consider the LOCAL model [36], which is a standard distributed computing model capturing the essence of locality. 
In this model, processors are woken up simultaneously, and computation proceeds in fault-free synchronous rounds during which every processor exchanges messages of unlimited size with its neighbors, and performs arbitrary computations on its data. Informally, let us define LD(t) (for local decision) as the class of decision problems that can be solved in t communication rounds in the LOCAL model. (We find special interest in the case where t represents a constant, but in general we view t as a function of the input graph. We note that in the LOCAL model, every decidable decision problem can be solved in n communication rounds, where n denotes the number of nodes in the input graph.) Some decision problems are trivially in LD(O(1)) (e.g., "is the given coloring a (∆ + 1)-coloring?", "do the selected nodes form an MIS?", etc.), while some others can easily be shown to be outside LD(t), for any t = o(n) (e.g., "is the network planar?", "is there a unique leader?", etc.). In contrast to the above examples, there are some languages for which it is not clear whether they belong to LD(t), even for t = O(1). To elaborate on this, consider the particular case where it is required to decide whether the network belongs to some specified family F of graphs. If this question can be decided in a constant number of communication rounds, then this means, informally, that the family F can somehow be characterized by relatively simple conditions. For example, a family F of graphs that can be characterized as consisting of all graphs having no subgraph from C, for some specified finite set C of finite subgraphs, is obviously in LD(O(1)). However, the question of whether a family of graphs can be characterized as above is often non-trivial. For example, characterizing cographs as precisely the graphs with no induced P_4, attributed to Seinsche [40], is not easy, and requires nontrivial usage of modular decomposition.
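As a concrete illustration of a language trivially in LD(1), the following minimal sketch (ours, not from the paper; all names are hypothetical) simulates a one-round LOCAL decider for "is the given coloring proper?": every node sends its color to its neighbors and outputs "yes" iff no neighbor shares its color, and the global verdict is the conjunction of the local outputs.

```python
# Minimal sketch of a 1-round LOCAL decider for "the given coloring is proper".
# A node's only communication is its own color, sent to all neighbors in round 1.

def decide_proper_coloring(adj, color):
    """adj: dict node -> set of neighbors; color: dict node -> int."""
    outputs = {}
    for v, neighbors in adj.items():
        # Round 1: v receives color[w] from each neighbor w, then decides locally.
        outputs[v] = all(color[w] != color[v] for w in neighbors)
    return all(outputs.values())  # global decision = conjunction of local outputs

# Triangle with a proper 3-coloring vs. an improper one.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
assert decide_proper_coloring(triangle, {0: 0, 1: 1, 2: 2}) is True
assert decide_proper_coloring(triangle, {0: 0, 1: 0, 2: 2}) is False
```

The same one-round pattern decides "do the selected nodes form an MIS?", since both the independence and the maximality of the selected set are checkable from a node's immediate neighborhood.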
The first question we address is whether and to what extent randomization helps. For p, q ∈ (0, 1], define BPLD(t, p, q) as the class of all distributed languages that can be decided by a randomized distributed algorithm that runs in t communication rounds and produces correct answers on legal (respectively, illegal) instances with probability at least p (resp., q). An interesting observation is that for p and q such that p^2 + q ≤ 1, we have LD(t) ⊊ BPLD(t, p, q). In fact, for such p and q, there exists a language L ∈ BPLD(0, p, q) such that L ∉ LD(t), for any t = o(n). To see why, consider the following Unique-Leader language. The input is a graph where each node has a bit indicating whether it is a leader or not. An input is in the language Unique-Leader if and only if there is at most one leader in the graph. Obviously, this language is not in LD(t), for any t < n. We claim it is in BPLD(0, p, q), for p and q such that p^2 + q ≤ 1. Indeed, for such p and q, we can design the following simple randomized algorithm that runs in 0 time: every node which is not a leader says "yes" with probability 1, and every node which is a leader says "yes" with probability p. Clearly, if the graph has at most one leader then all nodes say "yes" with probability at least p. On the other hand, if there are at least k ≥ 2 leaders, at least one node says "no" with probability at least 1 − p^k ≥ 1 − p^2 ≥ q. It turns out that the aforementioned choice of p and q is not coincidental, and that p^2 + q = 1 is really the correct threshold. Indeed, we show that Unique-Leader ∉ BPLD(t, p, q), for any t < n, and any p and q such that p^2 + q > 1. In fact, we show a much more general result, that is, we prove that if p^2 + q > 1, then restricted to hereditary languages, BPLD(t, p, q) actually collapses into LD(O(t)), for any t.
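The acceptance probability of this 0-round decider can be worked out exactly, which makes the threshold arithmetic easy to check. A small sketch (ours, not the paper's; the function name is hypothetical): with k leaders, all nodes say "yes" with probability exactly p^k, so legal instances (k ≤ 1) are accepted with probability at least p, and illegal ones (k ≥ 2) are rejected with probability at least 1 − p^2 = q.

```python
# Exact acceptance probability of the 0-round randomized Unique-Leader decider:
# non-leaders always say "yes" (factor 1 each); each leader says "yes" w.p. p.

def acceptance_probability(num_leaders, p):
    return p ** num_leaders

p = 0.6
q = 1 - acceptance_probability(2, p)      # the largest q with p^2 + q <= 1
assert acceptance_probability(0, p) == 1  # no leader: all surely say "yes"
assert acceptance_probability(1, p) >= p  # one leader: accepted w.p. p
for k in range(2, 10):
    # k >= 2 leaders: rejected w.p. 1 - p**k >= 1 - p**2 = q
    assert 1 - acceptance_probability(k, p) >= q
```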
In the second part of the paper, we investigate the impact of non-determinism on local decision, and establish some structural results inspired by classical computational complexity theory. Specifically, we show that non-determinism does help, but that this help is limited, as there exist languages that cannot be decided non-deterministically. Perhaps surprisingly, it turns out that it is the combination of randomization with non-determinism that enables to decide all languages in constant time. Finally, we introduce the notion of local reduction, and establish some completeness results.

1.2 Our contributions

1.2.1 Impact of randomization

We study the impact of randomization on local decision. We prove that if p^2 + q > 1, then restricted to hereditary languages, BPLD(t, p, q) = LD(O(t)), for any function t. This, together with the observation that LD(t) ⊊ BPLD(t, p, q), for any t = o(n), may indicate that p^2 + q = 1 serves as a sharp threshold for distinguishing the deterministic case from the randomized one.

1.2.2 Impact of non-determinism

We first show that non-determinism helps local decision, i.e., we show that the class NLD(t) (cf. Section 2.3) strictly contains LD(t). More precisely, we show that there exists a language in NLD(O(1)) which is not in LD(t) for every t = o(n), where n is the size of the input graph. Nevertheless, NLD(t) does not capture all (decidable) languages, for t = o(n). Indeed, we show that there exists a language not in NLD(t) for every t = o(n). Specifically, this language is #n = {(G, n) | |V (G)| = n}. Perhaps surprisingly, it turns out that it is the combination of randomization with nondeterminism that enables to decide all languages in constant time. Let BPNLD(O(1)) = BPNLD(O(1), p, q), for some constants p and q such that p^2 + q ≤ 1. We prove that BPNLD(O(1)) contains all languages. To sum up, LD(o(n)) ⊊ NLD(O(1)) ⊂ NLD(o(n)) ⊊ BPNLD(O(1)) = All.
Finally, we introduce the notion of many-one local reduction, and establish some completeness results. We show that there exists a problem, called cover, which is, in a sense, the most difficult decision problem. That is, we show that cover is BPNLD(O(1))-complete. (Interestingly, a small relaxation of cover, called containment, turns out to be NLD(O(1))-complete.)

2 Decision problems and complexity classes

2.1 Model of computation

Let us first recall some basic notions in distributed computing. We consider the LOCAL model [36], which is a standard model capturing the essence of locality. In this model, processors are assumed to be nodes of a network G, provided with arbitrary distinct identities, and computation proceeds in fault-free synchronous rounds. At each round, every processor v ∈ V (G) exchanges messages of unrestricted size with its neighbors in G, and performs computations on its data. We assume that the number of steps (sequential time) used for the local computation made by the node v in some round r is bounded by some function f_A(H(r, v)), where H(r, v) denotes the size of the "history" seen by node v up to the beginning of round r, that is, the total number of bits encoded in the input and the identity of the node, as well as in the incoming messages from previous rounds. Here, we do not impose any restriction on the growth rate of f_A. We would like to point out, however, that imposing such restrictions, or alternatively, imposing restrictions on the memory used by a node for local computation, may lead to interesting connections between the theory of locality and classical computational complexity theory. To sum up, during the execution of a distributed algorithm A, all processors are woken up simultaneously, and, initially, a processor is solely aware of its own identity, and possibly of some local input too.
Then, in each round r, every processor v (1) sends messages to its neighbors, (2) receives messages from its neighbors, and (3) performs at most f_A(H(r, v)) computations. After a number of rounds (that may depend on the network G and may vary among the processors, simply because nodes have different identities, potentially different inputs, and are typically located at non-isomorphic positions in the network), every processor v terminates and outputs some value out(v). Consider an algorithm running in a network G with input x and identity assignment Id. The running time of a node v, denoted T_v(G, x, Id), is the number of rounds until v outputs. The running time of the algorithm, denoted T(G, x, Id), is the number of rounds until all processors terminate, i.e., T(G, x, Id) = max{T_v(G, x, Id) | v ∈ V (G)}. Let t be a non-decreasing function of input configurations (G, x, Id). (By non-decreasing, we mean that if G′ is an induced subgraph of G and x′ and Id′ are the restrictions of x and Id, respectively, to the nodes in G′, then t(G′, x′, Id′) ≤ t(G, x, Id).) We say that an algorithm A has running time at most t if T(G, x, Id) ≤ t(G, x, Id), for every (G, x, Id). We shall give special attention to the case that t represents a constant function. Note that in general, given (G, x, Id), the nodes may not be aware of t(G, x, Id). On the other hand, note that, if t = t(G, x, Id) is known, then w.l.o.g. one can always assume that a local algorithm running in time at most t operates at each node v in two stages: (A) collect all information available in B_G(v, t), the t-neighborhood, or ball of radius t of v in G, including inputs, identities and adjacencies, and (B) compute the output based on this information.

2.2 Local decision (LD)

We now refine some of the above concepts, in order to formally define our objects of interest.
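The two-stage normal form (A)-(B) rests on the fact that in t synchronous rounds with unbounded messages, every node can learn its entire radius-t ball. A small simulation sketch (ours, not the paper's; here nodes only accumulate identities, as a stand-in for full ball information):

```python
# Sketch of stage (A): in each synchronous round, every node forwards everything
# it has heard so far to all neighbors; after t rounds, node v has heard exactly
# the identities of the nodes in its ball B_G(v, t).

def collect_balls(adj, t):
    """adj: dict node -> set of neighbors. Returns node -> set of known ids."""
    knowledge = {v: {v} for v in adj}            # round 0: own identity only
    for _ in range(t):
        incoming = {v: set() for v in adj}
        for v, neighbors in adj.items():         # everyone broadcasts its knowledge
            for w in neighbors:
                incoming[w] |= knowledge[v]
        for v in adj:
            knowledge[v] |= incoming[v]
    return knowledge

# Path 0-1-2-3: after 1 round node 0 knows {0,1}; after 2 rounds {0,1,2}.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
assert collect_balls(path, 1)[0] == {0, 1}
assert collect_balls(path, 2)[0] == {0, 1, 2}
```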
Obviously, a distributed algorithm that runs on a graph G operates separately on each connected component of G, and nodes of a component G′ of G cannot distinguish the underlying graph G from G′. For this reason, we consider connected graphs only.

Definition 2.1 A configuration is a pair (G, x) where G is a connected graph, and every node v ∈ V (G) is assigned as its local input a binary string x(v) ∈ {0, 1}*.

In some problems, the local input of every node is empty, i.e., x(v) = ǫ for every v ∈ V (G), where ǫ denotes the empty binary string. Since an undecidable collection of configurations remains undecidable in the distributed setting too, we consider only decidable collections of configurations. Formally, we define the following.

Definition 2.2 A distributed language is a decidable collection L of configurations.

In general, there are several possible ways of representing a configuration of a distributed language corresponding to standard distributed computing problems. Some examples considered in this paper are the following.

Unique-Leader = {(G, x) | #{v ∈ V (G) | x(v) = 1} ≤ 1} consists of all configurations such that there exists at most one node with local input 1, with all the others having local input 0.

Consensus = {(G, (x_1, x_2)) | ∃u ∈ V (G), ∀v ∈ V (G), x_2(v) = x_1(u)} consists of all configurations such that all nodes agree on the value proposed by some node.

Coloring = {(G, x) | ∀v ∈ V (G), ∀w ∈ N(v), x(v) ≠ x(w)}, where N(v) denotes the (open) neighborhood of v, that is, all nodes at distance 1 from v.

MIS = {(G, x) | S = {v ∈ V (G) | x(v) = 1} forms an MIS}.

SpanningTree = {(G, (name, head)) | T = {e_v = (v, v+), v ∈ V (G), head(v) = name(v+)} is a spanning tree of G} consists of all configurations such that the set T of edges e_v between every node v and its neighbor v+ satisfying name(v+) = head(v) forms a spanning tree of G. (The language MST, for minimum spanning tree, can be defined similarly.)
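These languages are ordinary decidable predicates on configurations. For concreteness, here is a centralized sketch of two of the membership predicates (ours, not from the paper; the helper names are hypothetical):

```python
# Centralized membership predicates for two of the example languages above.

def in_unique_leader(x):
    """Unique-Leader: at most one node has local input 1."""
    return sum(x.values()) <= 1

def in_mis(adj, x):
    """MIS: the set S = {v : x(v) = 1} is independent and maximal."""
    S = {v for v, bit in x.items() if bit == 1}
    independent = all(w not in S for v in S for w in adj[v])
    maximal = all(v in S or any(w in S for w in adj[v]) for v in adj)
    return independent and maximal

path = {0: {1}, 1: {0, 2}, 2: {1}}
assert in_unique_leader({0: 1, 1: 0, 2: 0})
assert not in_unique_leader({0: 1, 1: 1, 2: 0})
assert in_mis(path, {0: 1, 1: 0, 2: 1})      # {0, 2} is an MIS of the path
assert not in_mis(path, {0: 1, 1: 0, 2: 0})  # {0} is independent but not maximal
```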
An identity assignment Id for a graph G is an assignment of distinct integers to the nodes of G. A node v ∈ V (G) executing a distributed algorithm in a configuration (G, x) initially knows only its own identity Id(v) and its own input x(v), and is unaware of the graph G. After t rounds, v acquires knowledge only of its t-neighborhood B_G(v, t). In each round r of the algorithm A, a node may communicate with its neighbors by sending and receiving messages, and may perform at most f_A(H(r, v)) computations. Eventually, each node v ∈ V (G) must output a local output out(v) ∈ {0, 1}*. Let L be a distributed language. We say that a distributed algorithm A decides L if and only if for every configuration (G, x), and for every identity assignment Id for the nodes of G, every node of G eventually terminates and outputs "yes" or "no", satisfying the following decision rules:

• If (G, x) ∈ L, then out(v) = "yes" for every node v ∈ V (G);

• If (G, x) ∉ L, then there exists at least one node v ∈ V (G) such that out(v) = "no".

We are now ready to define one of our main subjects of interest, the class LD(t), for local decision.

Definition 2.3 Let t be a non-decreasing function of triplets (G, x, Id). Define LD(t) as the class of all distributed languages that can be decided by a local distributed algorithm that runs in at most t communication rounds.

For instance, Coloring ∈ LD(1) and MIS ∈ LD(1). On the other hand, it is not hard to see that languages such as Unique-Leader, Consensus, and SpanningTree are not in LD(t), for any t = o(n). In what follows, we define LD(O(t)) = ∪_{c>1} LD(c · t).

2.3 Non-deterministic local decision (NLD)

A distributed verification algorithm is a distributed algorithm A that gets as input, in addition to a configuration (G, x), a global certificate vector y, i.e., every node v of a graph G gets as input a binary string x(v) ∈ {0, 1}*, and a certificate y(v) ∈ {0, 1}*.
A verification algorithm A verifies L if and only if for every configuration (G, x), the following hold:

• If (G, x) ∈ L, then there exists a certificate y such that for every id-assignment Id, algorithm A applied on (G, x) with certificate y and id-assignment Id outputs out(v) = "yes" for all v ∈ V (G);

• If (G, x) ∉ L, then for every certificate y and for every id-assignment Id, algorithm A applied on (G, x) with certificate y and id-assignment Id outputs out(v) = "no" for at least one node v ∈ V (G).

One motivation for studying the nondeterministic verification framework comes from settings in which one must perform local verifications repeatedly. In such cases, one can afford to have a relatively "wasteful" preliminary step in which a certificate is computed for each node. Using these certificates, local verifications can then be performed very fast. See [21, 22] for more details regarding such applications. Indeed, the definition of a verification algorithm finds similarities with the notion of proof-labeling schemes discussed in [21, 22]. Informally, in a proof-labeling scheme, the construction of a "good" certificate y for a configuration (G, x) ∈ L may depend also on the given id-assignment. Since the question of whether a configuration (G, x) belongs to a language L is independent from the particular id-assignment, we prefer to let the "good" certificate y depend only on the configuration. In other words, as defined above, a verification algorithm operating on a configuration (G, x) ∈ L and a "good" certificate y must say "yes" at every node regardless of the id-assignment. We now define the class NLD(t), for nondeterministic local decision. (Our terminology is by direct analogy to the class NP in sequential computational complexity.)
2.4 Bounded-error probabilistic local decision (BPLD)

A randomized distributed algorithm is a distributed algorithm A that enables every node v, at any round r during the execution, to toss a number of random bits, obtaining a string r(v) ∈ {0, 1}*. Clearly, this number cannot exceed f_A(H(r, v)), the bound on the number of computational steps used by node v at round r. Note, however, that H(r, v) may now also depend on the random bits produced by other nodes in previous rounds. For p, q ∈ (0, 1], we say that a randomized distributed algorithm A is a (p, q)-decider for L, or, that it decides L with "yes" success probability p and "no" success probability q, if and only if for every configuration (G, x), and for every identity assignment Id for the nodes of G, every node of G eventually terminates and outputs "yes" or "no", and the following properties are satisfied:

• If (G, x) ∈ L, then Pr[out(v) = "yes" for every node v ∈ V (G)] ≥ p,

• If (G, x) ∉ L, then Pr[out(v) = "no" for at least one node v ∈ V (G)] ≥ q,

where the probabilities in the above definition are taken over all possible coin tosses performed by nodes. We define the class BPLD(t, p, q), for "Bounded-error Probabilistic Local Decision", as follows.

Definition 2.5 For p, q ∈ (0, 1] and a function t, BPLD(t, p, q) is the class of all distributed languages that have a local randomized distributed (p, q)-decider running in time t (i.e., that can be decided in time t by a local randomized distributed algorithm with "yes" success probability p and "no" success probability q).

3 A sharp threshold for randomization

Consider some graph G, and a subset U of the nodes of G, i.e., U ⊆ V (G), such that the subgraph G[U] of G induced by U is connected; the configuration (G[U], x[U]) is then called a prefix of (G, x). A language L is hereditary if every prefix of every configuration (G, x) ∈ L is also in L. Theorem 3.1 below asserts that, for hereditary languages, randomization does not help if one imposes that p^2 + q > 1, i.e., the "no" success probability is at least as large as one minus the square of the "yes" success probability.
Somewhat more formally, we prove that for hereditary languages, we have ∪_{p^2+q>1} BPLD(t, p, q) = LD(O(t)). This complements the fact that for p^2 + q ≤ 1, we have LD(t) ⊊ BPLD(t, p, q), for any t = o(n). Recall that [34] investigates the question of whether randomization helps for constructing in constant time a solution for a problem in LCL ⊆ LD(O(1)). We stress that the technique used in [34] for tackling this question relies heavily on the definition of LCL, specifically, that only graphs of constant degree and of constant input size are considered. Hence it is not clear whether the technique of [34] can be useful for our purposes, as we impose no such assumptions on the degrees or input sizes. Also, although it may seem at first glance that the Lovász local lemma might be helpful here, we could not effectively apply it in our proof. Instead, we use a completely different approach.

Theorem 3.1 Let L be a hereditary language and let t be a function. If L ∈ BPLD(t, p, q) for constants p, q ∈ (0, 1] such that p^2 + q > 1, then L ∈ LD(O(t)).

Proof. Let us start with some definitions. Let L be a language in BPLD(t, p, q) where p, q ∈ (0, 1] and p^2 + q > 1, and t is some function. Let A be a randomized algorithm deciding L, with "yes" success probability p, and "no" success probability q, whose running time is at most t(G, x, Id), for every configuration (G, x) with identity assignment Id. Fix a configuration (G, x), and an id-assignment Id for the nodes of V (G). The distance dist_G(u, v) between two nodes of G is the minimum number of edges in a path connecting u and v in G. The distance between two subsets U_1, U_2 ⊆ V is defined as dist_G(U_1, U_2) = min{dist_G(u, v) | u ∈ U_1, v ∈ U_2}. For a set U ⊆ V, let E(G, x, Id, U) denote the event that, when running A on (G, x) with id-assignment Id, all nodes in U output "yes". Let v ∈ V (G). The running time of A at v may depend on the coin tosses made by the nodes.
Let t_v = t_v(G, x, Id) denote the maximal running time of v over all possible coin tosses. Note that t_v ≤ t(G, x, Id) (we do not assume that either t or t_v is known to v). The radius of a node v, denoted r_v, is the maximum value t_u over all nodes u such that v ∈ B_G(u, t_u). (Observe that the radius of a node is at most t.) The radius of a set of nodes S is r_S := max{r_v | v ∈ S}. In what follows, fix a constant δ such that 0 < δ < p^2 + q − 1, and define λ = 11⌈log p / log(1 − δ)⌉. A splitter of (G, x, Id) is a triplet (S, U_1, U_2) of pairwise disjoint subsets of nodes such that S ∪ U_1 ∪ U_2 = V and dist_G(U_1, U_2) ≥ λ r_S. (Observe that r_S may depend on the identity assignment and the input, and therefore, being a splitter is not just a topological property depending only on G.) Given a splitter (S, U_1, U_2) of (G, x, Id), let G_k = G[U_k ∪ S], and let x_k be the input x restricted to nodes in G_k, for k = 1, 2. The following structural claim does not use the fact that L is hereditary.

Lemma 3.2 For every configuration (G, x) with identity assignment Id, and every splitter (S, U_1, U_2) of (G, x, Id), we have (G_1, x_1) ∈ L and (G_2, x_2) ∈ L ⇒ (G, x) ∈ L.

Proof. Let (G, x) be a configuration with identity assignment Id. Assume, towards contradiction, that there exists a splitter (S, U_1, U_2) of the triplet (G, x, Id) such that (G_1, x_1) ∈ L and (G_2, x_2) ∈ L, yet (G, x) ∉ L. (The fact that (G_1, x_1) ∈ L and (G_2, x_2) ∈ L implies that both G_1 and G_2 are connected.)

Claim 3.3 There exists some i ∈ (2r_S, d − 2r_S) such that i ∉ I.

Proof. For proving Claim 3.3, we upper bound the size of I by d − 4r_S − 2. This is done by covering the integers in (2r_S, d − 2r_S) by at most 4r_S + 1 sets, such that each one is (4r_S + 1)-independent, that is, for every two integers in the same set, they are at least 4r_S + 1 apart.
Specifically, for s ∈ [1, 4r_S + 1] and m(S) = ⌈(d − 8r_S)/(4r_S + 1)⌉, we define J_s = {s + 2r_S + j(4r_S + 1) | j ∈ [0, m(S)]}. Observe that, as desired, (2r_S, d − 2r_S) ⊂ ∪_{s∈[1,4r_S+1]} J_s, and for each s ∈ [1, 4r_S + 1], J_s is (4r_S + 1)-independent. In what follows, fix s ∈ [1, 4r_S + 1] and let J = J_s. Since (G_1, x_1) ∈ L, we know that Pr[E(G_1, x_1, Id, S_{J∩I})] ≥ p. Observe that for every node v ∈ S_i with i ∈ (2r_S, d − 2r_S), we have t_v ≤ r_v ≤ r_S, and hence the t_v-neighborhood in G of every such node v is contained in G_1, i.e., B_G(v, t_v) ⊆ G_1. It therefore follows that:

Pr[E(G, x, Id, S_{J∩I})] = Pr[E(G_1, x_1, Id, S_{J∩I})] ≥ p. (1)

Consider two integers a and b in J. We know that |a − b| ≥ 4r_S + 1. Hence, the distance in G between any two nodes u ∈ S_a and v ∈ S_b is at least 2r_S + 1. Thus, the events E(G, x, Id, S_a) and E(G, x, Id, S_b) are independent. It follows by the definition of I that

Pr[E(G, x, Id, S_{J∩I})] < (1 − δ)^{|J∩I|}. (2)

By (1) and (2), we have that p < (1 − δ)^{|J∩I|}, and thus |J ∩ I| < log p / log(1 − δ). Since (2r_S, d − 2r_S) can be covered by the sets J_s, s = 1, . . . , 4r_S + 1, each of which is (4r_S + 1)-independent, we get that |I| = Σ_{s=1}^{4r_S+1} |J_s ∩ I| < (4r_S + 1)(log p / log(1 − δ)). Combining this bound with the fact that d = λ r_S, we get that d − 4r_S − 1 > |I|. It follows by the pigeonhole principle that there exists some i ∈ (2r_S, d − 2r_S) such that i ∉ I, as desired. This completes the proof of Claim 3.3.

Fix i ∈ (2r_S, d − 2r_S) such that i ∉ I, and let F = E(G, x, Id, S_i). By the definition of I,

Pr[¬F] ≤ δ < p^2 + q − 1. (3)

Let H_1 denote the subgraph of G induced by the nodes in (∪_{j=1}^{i−r_S−1} L_j) ∪ U_1. We similarly define H_2 as the subgraph of G induced by the nodes in (∪_{j>i+r_S} L_j) ∪ U_2. Note that S_i ∪ V (H_1) ∪ V (H_2) = V, and for any two nodes u ∈ V (H_1) and v ∈ V (H_2), we have d_G(u, v) > 2r_S.
It follows that, for k = 1, 2, the t_u-neighborhood in G of each node u ∈ V (H_k) equals the t_u-neighborhood in G_k of u, that is, B_G(u, t_u) ⊆ G_k. (To see why, consider, for example, the case k = 2. Given u ∈ V (H_2), it is sufficient to show that there is no v ∈ V (H_1) such that v ∈ B_G(u, t_u). Indeed, if such a vertex v exists then d_G(u, v) > 2r_S, and hence t_u > 2r_S. Since there must exist a vertex w ∈ S_i such that w ∈ B(u, t_u), we get that r_w > 2r_S, in contradiction to the fact that w ∈ S.) Thus, for k = 1, 2, since (G_k, x_k) ∈ L, we get Pr[E(G, x, Id, V (H_k))] = Pr[E(G_k, x_k, Id, V (H_k))] ≥ p. Let F′ = E(G, x, Id, V (H_1) ∪ V (H_2)). As the events E(G, x, Id, V (H_1)) and E(G, x, Id, V (H_2)) are independent, it follows that Pr[F′] ≥ p^2, that is,

Pr[¬F′] ≤ 1 − p^2. (4)

By Eqs. (3) and (4), and using the union bound, it follows that Pr[¬F ∨ ¬F′] < q. Thus

Pr[E(G, x, Id, V (G))] = Pr[E(G, x, Id, S_i ∪ V (H_1) ∪ V (H_2))] = Pr[F ∧ F′] > 1 − q.

This is in contradiction to the assumption that (G, x) ∉ L. This concludes the proof of Lemma 3.2.

Our goal now is to show that L ∈ LD(O(t)) by proving the existence of a deterministic local algorithm D that runs in time O(t) and recognizes L. (No attempt is made here to minimize the constant factor hidden in the O(t) notation.) Recall that both t = t(G, x, Id) and t_v = t_v(G, x, Id) may not be known to v. Nevertheless, by inspecting the balls B_G(v, 2^i) for increasing i = 1, 2, · · · , each node v can compute an upper bound on t_v, as given by the following claim: for every constant c, each node v can compute a value t*_v = t*_v(c) such that (1) c · t_v ≤ t*_v = O(t), and (2) for every u ∈ B_G(v, c · t*_v), we have t_u ≤ t*_v. To establish the claim, observe first that in O(t) time, each node v can compute a value t′_v satisfying t_v ≤ t′_v ≤ 2t.
Indeed, given the ball B_G(v, 2^i), for some integer i, and using the upper bound on the number of (sequential) local computations, node v can simulate all its possible executions up to round r = 2^i. The desired value t′_v is the smallest r = 2^i for which all executions of v up to round r conclude with an output at v. Once t′_v is computed, node v aims at computing t*_v. For this purpose, it starts again to inspect the balls B_G(v, 2^i) for increasing i = 1, 2, · · · , to obtain t′_u from each u ∈ B_G(v, 2^i). (For this purpose, it may need to wait until u computes t′_u, but this delays the whole computation by at most O(t) time.) Now, node v outputs t*_v = 2^i for the smallest i satisfying (1) c · t′_v ≤ 2^i and (2) for every u ∈ B_G(v, c · 2^i), we have t′_u ≤ t*_v. It is easy to see that for this i, we have 2^i = O(t), hence t*_v = O(t). Given a configuration (G, x), and an id-assignment Id, Algorithm D, applied at a node u, first calculates t*_u = t*_u(6λ), and then outputs "yes" if and only if the 2λt*_u-neighborhood of u in (G, x) belongs to L. That is,

out(u) = "yes" ⟺ (B_G(u, 2λt*_u), x[B_G(u, 2λt*_u)]) ∈ L.

Obviously, Algorithm D is a deterministic algorithm that runs in time O(t). We claim that Algorithm D decides L. Indeed, since L is hereditary, if (G, x) ∈ L, then every prefix of (G, x) is also in L, and thus every node u outputs out(u) = "yes". Now consider the case where (G, x) ∉ L, and assume by contradiction that by applying D on (G, x) with id-assignment Id, every node u outputs out(u) = "yes". Let U ⊆ V (G) be maximal by inclusion such that G[U] is connected and (G[U], x[U]) ∈ L. Obviously, U is not empty, as (B_G(u, 2λt*_u), x[B_G(u, 2λt*_u)]) ∈ L for every node u. On the other hand, we have |U| < |V (G)|, because (G, x) ∉ L. Let u ∈ U be a node with maximal t_u such that B_G(u, 2t_u) contains a node outside U. Define G′ as the subgraph of G induced by U ∪ V (B_G(u, 2t_u)).
Observe that G ′ is connected and that G ′ strictly contains U. Towards contradiction, our goal is to show that (G ′ , x[G ′ ]) ∈ L. Let H denote the graph which is maximal by inclusion such that H is connected and B G (u, 2t u ) ⊂ H ⊆ B G (u, 2t u ) ∪ (U ∩ B G (u, 2λt * u )) . Let W 1 , W 2 , · · · , W ℓ be the ℓ connected components of G[U]\B G (u, 2t u ), ordered arbitrarily. Let W 0 be the empty graph, and for k = 0, 1, 2, · · · , ℓ, define the graph Z k = H ∪ W 0 ∪ W 1 ∪ W 2 ∪ · · · ∪ W k . Observe that Z k is connected for each k = 0, 1, 2, · · · , ℓ. We prove by induction on k that (Z k , x[Z k ]) ∈ L for every k = 0, 1, 2, · · · , ℓ. This will establish the contradiction since Z ℓ = G ′ . For the basis of the induction, the case k = 0, we need to show that (H, x[H]) ∈ L. However, this is immediate by the facts that H is a connected subgraph of B G (u, 2λt * u ), the configuration (B G (u, 2λt * u ), x[B G (u, 2λt * u )]) ∈ L, and L is hereditary. Assume now that we have (Z k , x[Z k ]) ∈ L for 0 ≤ k < ℓ, and consider the graph Z k+1 = Z k ∪ W k+1 . Define the sets of nodes S = V (Z k ) ∩ V (W k+1 ), U 1 = V (Z k ) \ S, and U 2 = V (W k+1 ) \ S . A crucial observation is that (S, U 1 , U 2 ) is a splitter of Z k+1 . This follows from the following arguments. Let us first show that r S ≤ t * u . By definition, we have t v ≤ t * u , for every v ∈ B G (u, 6λt * u ). Hence, in order to bound the radius of S (in Z k+1 ) by t * u it is sufficient to prove that there is no node w ∈ U \ B G (u, 6λt * u ) such that B G (w, t w ) ∩ S = ∅. Indeed, if such a node w exists then t w > 4λt * u and hence B G (w, 2t w ) contains a node outside U, in contradiction to the choice of u. It follows that r S ≤ t * u . We now claim that dist Z k+1 (U 1 , U 2 ) ≥ λt * u . Consider a simple directed path P in Z k+1 going from a node x ∈ U 1 to a node y ∈ U 2 . Since x / ∈ V (W k+1 ) and y ∈ V (W k+1 ), we get that P must pass through a vertex in B G (u, 2t u ). 
Let z be the last vertex in P such that z ∈ B_G(u, 2t_u), and consider the directed subpath P_[z,y] of P going from z to y. Now, let P′ = P_[z,y] \ {z}. The first d′ = min{(2λ − 2)t*_u, |P′|} vertices in the directed subpath P′ must belong to V (H) ⊆ V (Z_k). In addition, observe that all nodes in P′ must be in V (W_{k+1}). It follows that the first d′ nodes of P′ are in S. Since y ∉ S, we get that |P′| ≥ d′ = (2λ − 2)t*_u, and thus |P| > λt*_u. Consequently, dist_{Z_{k+1}}(U_1, U_2) ≥ λt*_u, as desired. This completes the proof that (S, U_1, U_2) is a splitter of Z_{k+1}. Now, by the induction hypothesis, we have (G_1, x[G_1]) ∈ L, because G_1 = G[U_1 ∪ S] = Z_k. In addition, we have (G_2, x[G_2]) ∈ L, because G_2 = G[U_2 ∪ S] = W_{k+1}, which is a connected induced subgraph of G[U], (G[U], x[U]) ∈ L, and L is hereditary. By Lemma 3.2, it follows that (Z_{k+1}, x[Z_{k+1}]) ∈ L, which completes the induction, and with it the proof of Theorem 3.1.

4 Nondeterminism and complete problems

4.1 Separation results

Our first separation result indicates that non-determinism helps for local decision. Indeed, we show that there exists a language, specifically, tree = {(G, ǫ) | G is a tree}, which belongs to NLD(1) but not to LD(t), for any t = o(n). The proof follows by rather standard arguments.

Theorem 4.1 There exists a language L such that L ∈ NLD(1) and L ∉ LD(t), for any t = o(n).

Proof. To establish the theorem it is sufficient to show that there exists a language L such that L ∉ LD(o(n)) and L ∈ NLD(1). Let tree = {(G, ǫ) | G is a tree}. We have tree ∉ LD(o(n)). To see why, consider a cycle C with nodes labeled consecutively from 1 to 4n, and the path P_1 (resp., P_2) with nodes labeled consecutively 1, . . . , 4n (resp., 2n + 1, . . . , 4n, 1, . . . , 2n), from one extremity to the other. For any algorithm A deciding tree, all nodes n + 1, . . . , 3n output "yes" in configuration (P_1, ǫ) for any identity assignment for the nodes in P_1, while all nodes 3n + 1, . . . , 4n, 1, . . . , n output "yes" in configuration (P_2, ǫ) for any identity assignment for the nodes in P_2. Thus if A is local, then all nodes output "yes" in configuration (C, ǫ), a contradiction. In contrast, we next show that tree ∈ NLD(1).
The (nondeterministic) local algorithm A verifying tree operates as follows. Given a configuration (G, ǫ), the certificate given at node v is y(v) = dist G (v, r) where r ∈ V (G) is an arbitrary fixed node. The verification procedure is then as follows. At each node v, A inspects every neighbor (with its certificates), and verifies the following: • y(v) is a non-negative integer, • if y(v) = 0, then y(w) = 1 for every neighbor w of v, and • if y(v) > 0, then there exists a neighbor w of v such that y(w) = y(v) − 1, and, for all other neighbors w ′ of v, we have y(w ′ ) = y(v) + 1. If G is a tree, then applying Algorithm A on G with the certificate yields the answer "yes" at all nodes regardless of the given id-assignment. On the other hand, if G is not a tree, then we claim that for every certificate, and every id-assignment Id, Algorithm A outputs "no" at some node. Indeed, consider some certificate y given to the nodes of G, and let C be a simple cycle in G. Assume, for the sake of contradiction, that all nodes in C output "yes". In this case, each node in C has at least one neighbor in C with a larger certificate. This creates an infinite sequence of strictly increasing certificates, in contradiction with the finiteness of C. Theorem 4.2 There exists a language L such that L / ∈ NLD(t), for any t = o(n). Proof. Let InpEqSize = {(G, x) | ∀v ∈ V (G), x(v) = |V (G)|}. We show that InpEqSize / ∈ NLD(t), for any t = o(n). Assume, for the sake of contradiction, that there exists a local nondeterministic algorithm A deciding InpEqSize. Let t < n/4 be the running time of A. Consider the cycle C with 2t + 1 nodes u 1 , u 2 , · · · , u 2t+1 , enumerated clockwise. Assume that the input at each node u i of C satisfies x(u i ) = 2t + 1. Then, there exists a certificate y such that, for any identity assignment Id, algorithm A outputs "yes" at each node of C. 
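The verifier just described is short enough to transcribe directly. The sketch below (our own encoding; the synchronous round is collapsed into direct reads of the neighbors' certificates) computes the honest certificate by BFS and applies the three local tests; on a cycle, the honest BFS certificate is rejected, as the argument above predicts:

```python
from collections import deque

def bfs_certificates(adj, root):
    """The honest certificate: y(v) = dist_G(v, root), computed by BFS."""
    y = {root: 0}
    queue = deque([root])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in y:
                y[w] = y[v] + 1
                queue.append(w)
    return y

def accepts_at(v, adj, y):
    """Node v's local test from the text: y(v) is a non-negative integer;
    if y(v) = 0 then every neighbor carries 1; otherwise exactly one
    neighbor carries y(v) - 1 and every other neighbor carries y(v) + 1."""
    if not isinstance(y[v], int) or y[v] < 0:
        return False
    if y[v] == 0:
        return all(y[w] == 1 for w in adj[v])
    closer = sum(1 for w in adj[v] if y[w] == y[v] - 1)
    farther = sum(1 for w in adj[v] if y[w] == y[v] + 1)
    return closer == 1 and closer + farther == len(adj[v])

def global_verdict(adj, y):
    """The conjunction rule: the instance is accepted iff every node accepts."""
    return all(accepts_at(v, adj, y) for v in adj)
```

Note that rejecting the honest certificate on a cycle is only an illustration; the actual proof above shows that no certificate at all can make every node of a cycle accept.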
Now, consider the configuration (C ′ , x ′ ) where the cycle C ′ has 4t + 2 nodes, and for each node v i of C ′ , x ′ (v i ) = 2t + 1. We have (C ′ , x ′ ) ∉ InpEqSize. To fool Algorithm A, we enumerate the nodes in C ′ clockwise, i.e., C ′ = (v 1 , v 2 , · · · , v 4t+2 ). We then define the certificate y ′ as follows: y ′ (v i ) = y ′ (v i+2t+1 ) = y(u i ) for i = 1, 2, · · · , 2t + 1. Fix an id-assignment Id ′ for the nodes in V (C ′ ), and fix i ∈ {1, 2, · · · , 2t + 1}. There exists an id-assignment Id 1 for the nodes in V (C), such that the output of A at node v i in (C ′ , x ′ ) with certificate y ′ and id-assignment Id ′ is identical to the output of A at node u i in (C, x) with certificate y and id-assignment Id 1 . Similarly, there exists an id-assignment Id 2 for the nodes in V (C) such that the output of A at node v i+2t+1 in (C ′ , x ′ ) with certificate y ′ and id-assignment Id ′ is identical to the output of A at node u i in (C, x) with certificate y and id-assignment Id 2 . Thus, Algorithm A at both v i and v i+2t+1 outputs "yes" in (C ′ , x ′ ) with certificate y ′ and id-assignment Id ′ . Hence, since i was arbitrary, all nodes output "yes" for this configuration, certificate and id-assignment, contradicting the fact that (C ′ , x ′ ) ∉ InpEqSize. For p, q ∈ (0, 1] and a function t, let us define BPNLD(t, p, q) as the class of all distributed languages that have a local randomized non-deterministic distributed (p, q)-decider running in time t. Theorem 4.3 Let p, q ∈ (0, 1] such that p 2 + q ≤ 1. For every language L, we have L ∈ BPNLD(1, p, q). Proof. Let L be a language. The certificate of a configuration (G, x) ∈ L is a map of G, with nodes labeled with distinct integers in {1, ..., n}, where n = |V (G)|, together with the inputs of all nodes in G. In addition, every node v receives the label λ(v) of the corresponding vertex in the map.
Precisely, the certificate at node v is y(v) = (G ′ , x ′ , i) where G ′ is an isomorphic copy of G with nodes labeled from 1 to n, x ′ is an n-dimensional vector such that x ′ [λ(u)] = x(u) for every node u, and i = λ(v). The verification algorithm involves checking that the configuration (G ′ , x ′ ) is identical to (G, x). This is sufficient because distributed languages are sequentially decidable, hence every node can individually decide whether (G ′ , x ′ ) belongs to L or not, once it has secured the fact that (G ′ , x ′ ) is the actual configuration. It remains to show that there exists a local randomized non-deterministic distributed (p, q)-decider for verifying that the configuration (G ′ , x ′ ) is identical to (G, x), and running in time 1. The non-deterministic (p, q)-decider operates as follows. First, every node v checks that it has received the input as specified by x ′ , i.e., v checks whether x ′ [λ(v)] = x(v), and outputs "no" if this does not hold. Second, each node v communicates with its neighbors to check that (1) they all got the same map G ′ and the same input vector x ′ , and (2) they are labeled the way they should be according to the map G ′ . If some inconsistency is detected by a node, then this node outputs "no". Finally, consider a node v that passed the aforementioned two phases without outputting "no". If λ(v) ≠ 1 then v outputs "yes" (with probability 1), and if λ(v) = 1 then v outputs "yes" with probability p. We claim that the above implements a non-deterministic distributed (p, q)-decider for verifying that the configuration (G ′ , x ′ ) is identical to (G, x). Indeed, if all nodes pass the two phases without outputting "no", then they all agree on the map G ′ and on the input vector x ′ , and they know that their respective neighborhood fits with what is indicated on the map. Hence, (G ′ , x ′ ) is a lift of (G, x).
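The deterministic part of this decider amounts to a single exchange of certificates with the neighbors. A compressed sketch with an invented data layout — cert[v] = (map_adj, map_inputs, label) — omitting the probabilistic rule applied at label 1:

```python
def check_map_at(v, adj, x, cert):
    """v's deterministic checks: its input matches the map, all neighbors
    hold the same map, and the neighbors' labels are exactly the map's
    neighbors of v's own label."""
    map_adj, map_inputs, i = cert[v]
    if map_inputs[i] != x[v]:                     # phase 1: my input fits the map
        return False
    neighbor_labels = set()
    for w in adj[v]:
        w_adj, w_inputs, j = cert[w]
        if (w_adj, w_inputs) != (map_adj, map_inputs):  # phase 2(1): same map
            return False
        neighbor_labels.add(j)
    return neighbor_labels == set(map_adj[i])     # phase 2(2): labels fit the map
```

If every node passes these checks, the map is consistent with a lift of the actual configuration, which is exactly the situation the probabilistic rule at label 1 is designed to disambiguate.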
It follows that (G ′ , x ′ ) = (G, x) if and only if there exists at most one node v ∈ V (G) whose label satisfies λ(v) = 1. Consequently, if (G ′ , x ′ ) = (G, x) then all nodes say "yes" with probability at least p. On the other hand, if (G ′ , x ′ ) ≠ (G, x) then there are at least two nodes in G whose label is "1". These two nodes say "yes" with probability p 2 , hence, the probability that at least one of them says "no" is at least 1 − p 2 ≥ q. This completes the proof of Theorem 4.3. The above theorem guarantees that the following definition is well defined. Let BPNLD = BPNLD(1, p, q), for some p, q ∈ (0, 1] such that p 2 + q ≤ 1.

4.2 Completeness results

Let us first define a notion of reduction that fits the class LD. For two languages L 1 , L 2 , we say that L 1 is locally reducible to L 2 , denoted by L 1 ⪯ L 2 , if there exists a constant time local algorithm A such that, for every configuration (G, x) and every id-assignment Id, A produces out(v) ∈ {0, 1} * as output at every node v ∈ V (G) so that (G, x) ∈ L 1 ⇐⇒ (G, out) ∈ L 2 . By definition, LD(O(t)) is closed under local reductions, that is, for every two languages L 1 , L 2 satisfying L 1 ⪯ L 2 , if L 2 ∈ LD(O(t)) then L 1 ∈ LD(O(t)). We now show that there exists a natural problem, called cover, which is in some sense the "most difficult" decision problem; that is, we show that cover is BPNLD-complete. Language cover is defined as follows. Every node v is given as input an element E(v), and a finite collection of sets S(v). The union of these inputs is in the language if there exists a node v such that one set in S(v) equals the union of all the elements given to the nodes. Formally, cover = {(G, (E, S)) | ∃v ∈ V (G), ∃S ∈ S(v) s.t. S = {E(u) | u ∈ V (G)}}. Proof. The fact that cover ∈ BPNLD follows from Theorem 4.3. To prove that cover is BPNLD-hard, we consider some L ∈ BPNLD and show that L ⪯ cover.
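For reference, the global predicate defining cover is straightforward to evaluate sequentially (the dictionary encoding is ours):

```python
def in_cover(E, S):
    """(G, (E, S)) is in cover iff some node v holds a set in its collection
    S(v) that equals the set of all elements handed out in the graph."""
    all_elements = set(E.values())
    return any(set(candidate) == all_elements
               for collection in S.values()
               for candidate in collection)
```

For example, with elements {'p', 'q', 'r'} spread over three nodes, the instance is in cover exactly when some node's collection contains the full set {'p', 'q', 'r'}; a strict superset does not qualify.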
For this purpose, we describe a local distributed algorithm A transforming any configuration for L to a configuration for cover preserving membership in these languages. Let (G, x) be a configuration for L and let Id be an identity assignment. Algorithm A operating at a node v outputs a pair (E(v), S(v)), where E(v) is the "local view" at v in (G, x), i.e., the star subgraph of G consisting of v and its neighbors, together with the inputs of these nodes and their identities, and S(v) is the collection of sets S defined as follows. For a binary string x, let |x| denote the length of x, i.e., the number of bits in x. For every vertex v, let ψ(v) = 2 |Id(v)|+|x(v)| . (Recall that cover = {(G, (E, S)) | ∃v ∈ V (G), ∃S ∈ S(v) s.t. S = {E(u) | u ∈ V (G)}}.) Node v first generates all configurations (G ′ , x ′ ) where G ′ is a graph with k ≤ ψ(v) vertices, and x ′ is a collection of k input strings of length at most ψ(v), such that (G ′ , x ′ ) ∈ L. For each such configuration (G ′ , x ′ ), node v generates all possible Id ′ assignments to V (G ′ ) such that for every node u ∈ V (G ′ ), |Id(u)| ≤ ψ(v). Now, for each such pair of a graph (G ′ , x ′ ) and an Id ′ assignment, algorithm A associates a set S ∈ S(v) consisting of the k = |V (G ′ )| local views of the nodes of G ′ in (G ′ , x ′ ). We show that (G, x) ∈ L ⇐⇒ A(G, x) ∈ cover. If (G, x) ∈ L, then by the construction of Algorithm A, there exist a node v and a set S ∈ S(v) such that S covers the collection of local views for (G, x), i.e., S = {E(u) | u ∈ V (G)}. Indeed, the node v maximizing ψ(v) satisfies ψ(v) ≥ max{Id(u) | u ∈ V (G)} ≥ n and ψ(v) ≥ max{x(u) | u ∈ V (G)}. Therefore, that specific node has constructed a set S which contains all local views of the given configuration (G, x) and Id assignment. Thus A(G, x) ∈ cover. Now consider the case that A(G, x) ∈ cover. In this case, there exists a node v and a set S ∈ S(v) such that S = {E(u) | u ∈ V (G)}.
Such a set S is the collection of local views of nodes of some configuration (G ′ , x ′ ) ∈ L and some Id ′ assignment. On the other hand, S is also the collection of local views of nodes of the given configuration (G, x) with its Id assignment. It follows that (G, x) = (G ′ , x ′ ) ∈ L. We now define a natural problem, called containment, which is NLD(O(1))-complete. Somewhat surprisingly, the definition of containment is quite similar to the definition of cover. Specifically, as in cover, every node v is given as input an element E(v), and a finite collection of sets S(v). However, in contrast to cover, the union of these inputs is in the containment language if there exists a node v such that one set in S(v) contains the union of all the elements given to the nodes. Formally, we define containment = {(G, (E, S)) | ∃v ∈ V (G), ∃S ∈ S(v) s.t. S ⊇ {E(u) | u ∈ V (G)}}. Proof. We first prove that containment is NLD(O(1))-hard. Consider some L ∈ NLD(O(1)); we show that L ⪯ containment. For this purpose, we describe a local distributed algorithm D transforming any configuration for L to a configuration for containment preserving membership in these languages. Let t = t L ≥ 0 be some (constant) integer such that there exists a local nondeterministic algorithm A L deciding L in time at most t. Let (G, x) be a configuration for L and let Id be an identity assignment. Algorithm D operating at a node v outputs a pair (E(v), S(v)), where E(v) is the "t-local view" at v in (G, x), i.e., the ball of radius t around v, B G (v, t), together with the inputs of these nodes and their identities, and S(v) is the collection of sets S defined as follows. As before, for every vertex v, let ψ(v) = 2 |Id(v)|+|x(v)| . Node v first generates all configurations (G ′ , x ′ ) where G ′ is a graph with m ≤ ψ(v) vertices, and x ′ is a collection of m input strings of length at most ψ(v), such that (G ′ , x ′ ) ∈ L.
For each such configuration (G ′ , x ′ ), node v generates all possible Id ′ assignments to V (G ′ ) such that for every node u ∈ V (G ′ ), |Id(u)| ≤ ψ(v). Now, for each such pair of a graph (G ′ , x ′ ) and an Id ′ assignment, algorithm D associates a set S ∈ S(v) consisting of the m = |V (G ′ )| t-local views of the nodes of G ′ in (G ′ , x ′ ). We show that (G, x) ∈ L ⇐⇒ D(G, x) ∈ containment. If (G, x) ∈ L, then by the construction of Algorithm D, there exists a set S ∈ S(v) such that S covers the collection of t-local views for (G, x), i.e., S = {E(u) | u ∈ G}. Indeed, the node v maximizing ψ(v) satisfies ψ(v) ≥ max{Id(u) | u ∈ V (G)} ≥ n and ψ(v) ≥ max{x(u) | u ∈ V (G)}. Therefore, that specific node has constructed a set S that precisely corresponds to (G, x) and its given Id assignment; hence, S contains all corresponding t-local views. Thus, D(G, x) ∈ containment. Now consider the case that D(G, x) ∈ containment. In this case, there exists a node v and a set S ∈ S(v) such that S ⊇ {E(u) | u ∈ G}. Such a set S is the collection of t-local views of nodes of some configuration (G ′ , x ′ ) ∈ L and some Id ′ assignment. Since (G ′ , x ′ ) ∈ L, there exists a certificate y ′ for the nodes of G ′ , such that when algorithm A L operates on (G ′ , x ′ , y ′ ), all nodes say "yes". Now, since S contains the t-local views of nodes (G, x), with the corresponding identities, there exists a mapping φ : (G, x, Id) → (G ′ , x ′ , Id ′ ) that preserves inputs and identities. Moreover, when restricted to a ball of radius t around a vertex v ∈ G, φ is actually an isomorphism between this ball and its image. We assign a certificate y to the nodes of G: for each v ∈ V (G), y(v) = y ′ (φ(v)). Now, Algorithm A L when operating on (G, x, y) outputs "yes" at each node of G. By the correctness of A L , we obtain (G, x) ∈ L. We now show that containment ∈ NLD(O(1)). 
For this purpose, we design a nondeterministic local algorithm A that decides whether a configuration (G, x) is in containment. Such an algorithm A is designed to operate on (G, x, y), where y is a certificate. The configuration (G, x) satisfies that x(v) = (E(v), S(v)). Algorithm A aims at verifying whether there exists a node v * with a set S * ∈ S(v * ) such that S * ⊇ {E(v) | v ∈ V (G)}. Given a correct instance, i.e., a configuration (G, x), we define the certificate y as follows. For each node v, the certificate y(v) at v consists of several fields, specifically, y(v) = (y c (v), y s (v), y id (v), y l (v)). The candidate configuration field y c (v) is a triplet y c (v) = (G ′ , x ′ , Id ′ ), where (G ′ , x ′ ) is an isomorphic copy of (G, x) and Id ′ is an identity assignment for the nodes of G ′ .

(For other values of p and q, one can modify the success probabilities by performing k runs and requiring each node to individually output "no" if it decided "no" on at least one of the runs. In this case, the "no" success probability increases from q to at least 1 − (1 − q) k , and the "yes" success probability then decreases from p to p k .) Another interesting question is whether the phenomenon we observed regarding randomization occurs also in the non-deterministic setting, that is, whether BPNLD(t, p, q) collapses into NLD(O(t)), for p 2 + q > 1. Our model of computation, namely, the LOCAL model, focuses on difficulties arising from pure locality issues, and abstracts away other complexity measures. Naturally, it would be very interesting to come up with a rigorous complexity framework taking into account also other complexity measures. For example, it would be interesting to investigate the connections between classical computational complexity theory and the local complexity one.
The bound on the (centralized) running time in each round (given by the function f , see Section 2) may serve as a bridge for connecting the two theories, by putting constraints on this bound (e.g., requiring f to be polynomial or exponential). Also, one could restrict the memory used by a node, in addition to, or instead of, bounding the sequential time. Finally, it would be interesting to come up with a complexity framework taking also congestion into account.
A central theme in distributed network algorithms concerns understanding and coping with the issue of locality. Inspired by sequential complexity theory, we focus on a complexity theory for distributed decision problems. In the context of locality, solving a decision problem requires the processors to independently inspect their local neighborhoods and then collectively decide whether a given global input instance belongs to some specified language. This paper introduces several classes of distributed decision problems, proves separation among them and presents some complete problems. More specifically, we consider the standard LOCAL model of computation and define LD (for local decision) as the class of decision problems that can be solved in constant number of communication rounds. We first study the intriguing question of whether randomization helps in local distributed computing, and to what extent. Specifically, we define the corresponding randomized class BPLD, and ask whether LD=BPLD. We provide a partial answer to this question by showing that in many cases, randomization does not help for deciding hereditary languages. In addition, we define the notion of local many-one reductions, and introduce the (nondeterministic) class NLD of decision problems for which there exists a certificate that can be verified in constant number of communication rounds. We prove that there exists an NLD-complete problem. We also show that there exist problems not in NLD. On the other hand, we prove that the class NLD#n, which is NLD assuming that each processor can access an oracle that provides the number of nodes in the network, contains all (decidable) languages. For this class we provide a natural complete problem as well.
The question of what can be computed in a constant number of communication rounds was investigated in the seminal work of Naor and Stockmeyer @cite_16 . In particular, that paper considers a subclass of LD(O(1)), called LCL, which is essentially LD(O(1)) restricted to languages involving graphs of constant maximum degree and processor inputs taken from a set of constant size, and studies the question of how to compute in a constant number of rounds the constructive versions of decision problems in LCL. The paper provides some beautiful general results. In particular, the authors show that if there exists a randomized algorithm that constructs a solution for a problem in LCL in a constant number of rounds, then there is also a deterministic algorithm constructing a solution for this problem in a constant number of rounds. Unfortunately, the proof of this result relies heavily on the definition of LCL. Indeed, the constant bounds on the degrees and input sizes allow the authors to cleverly use Ramsey theory. It is thus not clear whether it is possible to extend this result to all distributed languages.
Reference @cite_16 : Naor and Stockmeyer, "What can be computed locally?". Abstract: The purpose of this paper is a study of computation that can be done locally in a distributed network, where "locally" means within time (or distance) independent of the size of the network. Locally checkable labeling (LCL) problems are considered, where the legality of a labeling can be checked locally (e.g., coloring). The results include the following: There are nontrivial LCL problems that have local algorithms. There is a variant of the dining philosophers problem that can be solved locally. Randomization cannot make an LCL problem local; i.e., if a problem has a local randomized algorithm then it has a local deterministic algorithm. It is undecidable, in general, whether a given LCL has a local algorithm. However, it is decidable whether a given LCL has an algorithm that operates in a given time t. Any LCL problem that has a local algorithm has one that is order-invariant (the algorithm depends only on the order of the processor IDs).
Local Distributed Decision
Distributed computing concerns a collection of processors which collaborate in order to achieve some global task. With time, two main disciplines have evolved in the field. One discipline deals with timing issues, namely, uncertainties due to asynchrony (the fact that processors run at their own speed, and possibly crash), and the other concerns topology issues, namely, uncertainties due to locality constraints (the lack of knowledge about far away processors). Studies carried out by the distributed computing community within these two disciplines were to a large extent problem-driven. Indeed, several major problems considered in the literature concern coping with one of the two uncertainties. For instance, in the asynchrony-discipline, Fischer, Lynch and Paterson [14] proved that consensus cannot be achieved in the asynchronous model, even in the presence of a single fault, and in the locality-discipline, Linial [28] proved that (∆ + 1)-coloring cannot be achieved locally (i.e., in a constant number of communication rounds), even in the ring network. One of the significant achievements of the asynchrony-discipline was its success in establishing unifying theories in the flavor of computational complexity theory. Some central examples of such theories are failure detectors [6,7] and the wait-free hierarchy (including Herlihy's hierarchy) [18]. In contrast, despite considerable progress, the locality-discipline still suffers from the absence of a solid basis in the form of a fundamental computational complexity theory. Obviously, defining some common cost measures (e.g., time, message, memory, etc.) enables us to compare problems in terms of their relative cost. Still, from a computational complexity point of view, it is not clear how to relate the difficulty of problems in the locality-discipline. Specifically, if two problems have different kinds of outputs, it is not clear how to reduce one to the other, even if they cost the same. 
Inspired by sequential complexity theory, we focus on decision problems, in which one is aiming at deciding whether a given global input instance belongs to some specified language. In the context of distributed computing, each processor must produce a boolean output, and the decision is defined by the conjunction of the processors' outputs, i.e., if the instance belongs to the language, then all processors must output "yes", and otherwise, at least one processor must output "no". Observe that decision problems provide a natural framework for tackling fault-tolerance: the processors have to collectively check whether the network is fault-free, and a node detecting a fault raises an alarm. In fact, many natural problems can be phrased as decision problems, like "is there a unique leader in the network?" or "is the network planar?". Moreover, decision problems occur naturally when one is aiming at checking the validity of the output of a computational task, such as "is the produced coloring legal?", or "is the constructed subgraph an MST?". Construction tasks such as exact or approximated solutions to problems like coloring, MST, spanner, MIS, maximum matching, etc., received enormous attention in the literature (see, e.g., [5,25,26,28,30,31,32,38]), yet the corresponding decision problems have hardly been considered. The purpose of this paper is to investigate the nature of local decision problems. Decision problems seem to provide a promising approach to building up a distributed computational theory for the locality-discipline. Indeed, as we will show, one can define local reductions in the framework of decision problems, thus enabling the introduction of complexity classes and notions of completeness. We consider the LOCAL model [36], which is a standard distributed computing model capturing the essence of locality. 
In this model, processors are woken up simultaneously, and computation proceeds in fault-free synchronous rounds during which every processor exchanges messages of unlimited size with its neighbors, and performs arbitrary computations on its data. Informally, let us define LD(t) (for local decision) as the class of decision problems that can be solved in t number of communication rounds in the LOCAL model. (We find special interest in the case where t represents a constant, but in general we view t as a function of the input graph. We note that in the LOCAL model, every decidable decision problem can be solved in n communication rounds, where n denotes the number of nodes in the input graph.) Some decision problems are trivially in LD(O(1)) (e.g., "is the given coloring a (∆ + 1)coloring?", "do the selected nodes form an MIS?", etc.), while some others can easily be shown to be outside LD(t), for any t = o(n) (e.g., "is the network planar?", "is there a unique leader?", etc.). In contrast to the above examples, there are some languages for which it is not clear whether they belong to LD(t), even for t = O(1). To elaborate on this, consider the particular case where it is required to decide whether the network belongs to some specified family F of graphs. If this question can be decided in a constant number of communication rounds, then this means, informally, that the family F can somehow be characterized by relatively simple conditions. For example, a family F of graphs that can be characterized as consisting of all graphs having no subgraph from C, for some specified finite set C of finite subgraphs, is obviously in LD(O(1)). However, the question of whether a family of graphs can be characterized as above is often non-trivial. For example, characterizing cographs as precisely the graphs with no induced P 4 , attributed to Seinsche [40], is not easy, and requires nontrivial usage of modular decomposition. 
The first question we address is whether and to what extent randomization helps. For p, q ∈ (0, 1], define BPLD(t, p, q) as the class of all distributed languages that can be decided by a randomized distributed algorithm that runs in t communication rounds and produces correct answers on legal (respectively, illegal) instances with probability at least p (resp., q). An interesting observation is that for p and q such that p 2 + q ≤ 1, we have LD(t) ⊊ BPLD(t, p, q). In fact, for such p and q, there exists a language L ∈ BPLD(0, p, q), such that L ∉ LD(t), for any t = o(n). To see why, consider the following Unique-Leader language. The input is a graph where each node has a bit indicating whether it is a leader or not. An input is in the language Unique-Leader if and only if there is at most one leader in the graph. Obviously, this language is not in LD(t), for any t < n. We claim it is in BPLD(0, p, q), for p and q such that p 2 + q ≤ 1. Indeed, for such p and q, we can design the following simple randomized algorithm that runs in 0 time: every node which is not a leader says "yes" with probability 1, and every node which is a leader says "yes" with probability p. Clearly, if the graph has at most one leader then all nodes say "yes" with probability at least p. On the other hand, if there are at least k ≥ 2 leaders, at least one node says "no", with probability at least 1 − p k ≥ 1 − p 2 ≥ q. It turns out that the aforementioned choice of p and q is not coincidental, and that p 2 + q = 1 is really the correct threshold. Indeed, we show that Unique-Leader ∉ BPLD(t, p, q), for any t < n, and any p and q such that p 2 + q > 1. In fact, we show a much more general result, that is, we prove that if p 2 + q > 1, then restricted to hereditary languages, BPLD(t, p, q) actually collapses into LD(O(t)), for any t.
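The arithmetic behind this threshold is easy to make concrete. In the sketch below (framing and names are ours), leaders say "yes" with probability p and all other nodes with probability 1; exact rationals keep the boundary case p 2 + q = 1 free of floating-point noise:

```python
from fractions import Fraction

def prob_all_yes(num_leaders, p):
    """Probability that every node outputs "yes": only leaders flip coins."""
    return p ** num_leaders

p = Fraction(3, 5)                        # p = 3/5
q = 1 - p * p                             # boundary case: p**2 + q = 1, q = 16/25
assert prob_all_yes(0, p) == 1            # no leader: accepted surely
assert prob_all_yes(1, p) == p            # unique leader: accepted with prob p
for k in range(2, 8):                     # k >= 2 leaders: some node says "no"
    assert 1 - prob_all_yes(k, p) >= q    # with prob 1 - p**k >= 1 - p**2 = q
```

With p = 3/5 the rejection probability for two leaders is exactly 16/25, matching the chosen q, which is why p 2 + q > 1 is unattainable by this scheme.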
In the second part of the paper, we investigate the impact of non-determinism on local decision, and establish some structural results inspired by classical computational complexity theory. Specifically, we show that non-determinism does help, but that this help is limited, as there exist languages that cannot be decided non-deterministically. Perhaps surprisingly, it turns out that it is the combination of randomization with non-determinism that makes it possible to decide all languages in constant time. Finally, we introduce the notion of local reduction, and establish some completeness results.

1.2 Our contributions

1.2.1 Impact of randomization

We study the impact of randomization on local decision. We prove that if p 2 + q > 1, then restricted to hereditary languages, BPLD(t, p, q) = LD(O(t)), for any function t. This, together with the observation that LD(t) ⊊ BPLD(t, p, q), for any t = o(n), may indicate that p 2 + q = 1 serves as a sharp threshold for distinguishing the deterministic case from the randomized one.

1.2.2 Impact of non-determinism

We first show that non-determinism helps local decision, i.e., we show that the class NLD(t) (cf. Section 2.3) strictly contains LD(t). More precisely, we show that there exists a language in NLD(O(1)) which is not in LD(t) for every t = o(n), where n is the size of the input graph. Nevertheless, NLD(t) does not capture all (decidable) languages, for t = o(n). Indeed we show that there exists a language not in NLD(t) for every t = o(n). Specifically, this language is #n = {(G, n) | |V (G)| = n}. Perhaps surprisingly, it turns out that it is the combination of randomization with non-determinism that makes it possible to decide all languages in constant time. Let BPNLD(O(1)) = BPNLD(O(1), p, q), for some constants p and q such that p 2 + q ≤ 1. We prove that BPNLD(O(1)) contains all languages. To sum up, LD(o(n)) ⊊ NLD(O(1)) ⊂ NLD(o(n)) ⊊ BPNLD(O(1)) = All.
Finally, we introduce the notion of many-one local reduction, and establish some completeness results. We show that there exists a problem, called cover, which is, in a sense, the most difficult decision problem. That is, we show that cover is BPNLD(O(1))-complete. (Interestingly, a small relaxation of cover, called containment, turns out to be NLD(O(1))-complete.)

2 Decision problems and complexity classes

2.1 Model of computation

Let us first recall some basic notions in distributed computing. We consider the LOCAL model [36], which is a standard model capturing the essence of locality. In this model, processors are assumed to be nodes of a network G, provided with arbitrary distinct identities, and computation proceeds in fault-free synchronous rounds. At each round, every processor v ∈ V (G) exchanges messages of unrestricted size with its neighbors in G, and performs computations on its data. We assume that the number of steps (sequential time) used for the local computation made by the node v in some round r is bounded by some function f A (H(r, v)), where H(r, v) denotes the size of the "history" seen by node v up to the beginning of round r, that is, the total number of bits encoded in the input and the identity of the node, as well as in the incoming messages from previous rounds. Here, we do not impose any restriction on the growth rate of f A . We would like to point out, however, that imposing such restrictions, or alternatively, imposing restrictions on the memory used by a node for local computation, may lead to interesting connections between the theory of locality and classical computational complexity theory. To sum up, during the execution of a distributed algorithm A, all processors are woken up simultaneously, and, initially, a processor is solely aware of its own identity, and possibly of some local input too.
Then, in each round r, every processor v (1) sends messages to its neighbors, (2) receives messages from its neighbors, and (3) performs at most f A (H(r, v)) computations. After a number of rounds (that may depend on the network G and may vary among the processors, simply because nodes have different identities, potentially different inputs, and are typically located at non-isomorphic positions in the network), every processor v terminates and outputs some value out(v). Consider an algorithm running in a network G with input x and identity assignment Id. The running time of a node v, denoted T v (G, x, Id), is the number of rounds until v outputs. The running time of the algorithm, denoted T (G, x, Id), is the maximum, over all nodes, of the number of rounds until they terminate, i.e., T (G, x, Id) = max{T v (G, x, Id) | v ∈ V (G)}. Let t be a non-decreasing function of input configurations (G, x, Id). (By non-decreasing, we mean that if G ′ is an induced subgraph of G and x ′ and Id ′ are the restrictions of x and Id, respectively, to the nodes in G ′ , then t(G ′ , x ′ , Id ′ ) ≤ t(G, x, Id).) We say that an algorithm A has running time at most t if T (G, x, Id) ≤ t(G, x, Id), for every (G, x, Id). We shall give special attention to the case that t represents a constant function. Note that in general, given (G, x, Id), the nodes may not be aware of t(G, x, Id). On the other hand, note that, if t = t(G, x, Id) is known, then w.l.o.g. one can always assume that a local algorithm running in time at most t operates at each node v in two stages: (A) collect all information available in B G (v, t), the t-neighborhood, or ball of radius t of v in G, including inputs, identities and adjacencies, and (B) compute the output based on this information.

2.2 Local decision (LD)

We now refine some of the above concepts, in order to formally define our objects of interest.
Obviously, a distributed algorithm that runs on a graph G operates separately on each connected component of G, and nodes of a component G ′ of G cannot distinguish the underlying graph G from G ′ . For this reason, we consider connected graphs only.

Definition 2.1 A configuration is a pair (G, x) where G is a connected graph, and every node v ∈ V (G) is assigned as its local input a binary string x(v) ∈ {0, 1} * .

In some problems, the local input of every node is empty, i.e., x(v) = ǫ for every v ∈ V (G), where ǫ denotes the empty binary string. Since an undecidable collection of configurations remains undecidable in the distributed setting too, we consider only decidable collections of configurations. Formally, we define the following.

Definition 2.2 A distributed language is a decidable collection L of configurations.

In general, there are several possible ways of representing a configuration of a distributed language corresponding to standard distributed computing problems. Some examples considered in this paper are the following.

Unique-Leader = {(G, x) | |{v ∈ V (G) | x(v) = 1}| ≤ 1} consists of all configurations such that there exists at most one node with local input 1, with all the others having local input 0.

Consensus = {(G, (x 1 , x 2 )) | ∃u ∈ V (G), ∀v ∈ V (G), x 2 (v) = x 1 (u)} consists of all configurations such that all nodes agree on the value proposed by some node.

Coloring = {(G, x) | ∀v ∈ V (G), ∀w ∈ N(v), x(v) ≠ x(w)}, where N(v) denotes the (open) neighborhood of v, that is, all nodes at distance 1 from v.

MIS = {(G, x) | S = {v ∈ V (G) | x(v) = 1} forms a maximal independent set}.

SpanningTree = {(G, (name, head)) | T = {e v = (v, v + ), v ∈ V (G), head(v) = name(v + )} is a spanning tree of G} consists of all configurations such that the set T of edges e v between every node v and its neighbor v + satisfying name(v + ) = head(v) forms a spanning tree of G. (The language MST, for minimum spanning tree, can be defined similarly.)
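Two of these example languages can be checked sequentially in a few lines. The adjacency-dict graph encoding below is an illustrative assumption; the predicates themselves follow the definitions above:

```python
# Sequential membership tests for two of the example languages
# (graphs encoded as adjacency dicts: node -> list of neighbors).

def in_coloring(adj, x):
    """(G, x) ∈ Coloring: no node shares its color with a neighbor."""
    return all(x[v] != x[w] for v in adj for w in adj[v])

def in_mis(adj, x):
    """(G, x) ∈ MIS: S = {v : x(v) = 1} is a maximal independent set."""
    s = {v for v in adj if x[v] == 1}
    independent = all(w not in s for v in s for w in adj[v])
    maximal = all(v in s or any(w in s for w in adj[v]) for v in adj)
    return independent and maximal
```

Note that both checks only ever compare a node against its radius-1 ball, which is exactly why these two languages are locally decidable in one round (Coloring ∈ LD(1) and MIS ∈ LD(1), as noted below).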
An identity assignment Id for a graph G is an assignment of distinct integers to the nodes of G. A node v ∈ V (G) executing a distributed algorithm in a configuration (G, x) initially knows only its own identity Id(v) and its own input x(v), and is unaware of the graph G. After t rounds, v acquires knowledge only of its t-neighborhood B G (v, t). In each round r of the algorithm A, a node may communicate with its neighbors by sending and receiving messages, and may perform at most f A (H(r, v)) computations. Eventually, each node v ∈ V (G) must output a local output out(v) ∈ {0, 1} * .

Let L be a distributed language. We say that a distributed algorithm A decides L if and only if for every configuration (G, x), and for every identity assignment Id for the nodes of G, every node of G eventually terminates and outputs "yes" or "no", satisfying the following decision rules:

• If (G, x) ∈ L, then out(v) = "yes" for every node v ∈ V (G);

• If (G, x) ∉ L, then there exists at least one node v ∈ V (G) such that out(v) = "no".

We are now ready to define one of our main subjects of interest, the class LD(t), for local decision.

Definition 2.3 Let t be a non-decreasing function of triplets (G, x, Id). Define LD(t) as the class of all distributed languages that can be decided by a local distributed algorithm that runs in number of rounds at most t.

For instance, Coloring ∈ LD(1) and MIS ∈ LD(1). On the other hand, it is not hard to see that languages such as Unique-Leader, Consensus, and SpanningTree are not in LD(t), for any t = o(n). In what follows, we define LD(O(t)) = ∪ c>1 LD(c · t).

2.3 Non-deterministic local decision (NLD)

A distributed verification algorithm is a distributed algorithm A that gets as input, in addition to a configuration (G, x), a global certificate vector y, i.e., every node v of a graph G gets as input a binary string x(v) ∈ {0, 1} * , and a certificate y(v) ∈ {0, 1} * .
A verification algorithm A verifies L if and only if for every configuration (G, x), the following hold:

• If (G, x) ∈ L, then there exists a certificate y such that for every id-assignment Id, algorithm A applied on (G, x) with certificate y and id-assignment Id outputs out(v) = "yes" for all v ∈ V (G);

• If (G, x) ∉ L, then for every certificate y and for every id-assignment Id, algorithm A applied on (G, x) with certificate y and id-assignment Id outputs out(v) = "no" for at least one node v ∈ V (G).

One motivation for studying the nondeterministic verification framework comes from settings in which one must perform local verifications repeatedly. In such cases, one can afford to have a relatively "wasteful" preliminary step in which a certificate is computed for each node. Using these certificates, local verifications can then be performed very fast. See [21,22] for more details regarding such applications. Indeed, the definition of a verification algorithm finds similarities with the notion of proof-labeling schemes discussed in [21,22]. Informally, in a proof-labeling scheme, the construction of a "good" certificate y for a configuration (G, x) ∈ L may depend also on the given id-assignment. Since the question of whether a configuration (G, x) belongs to a language L is independent from the particular id-assignment, we prefer to let the "good" certificate y depend only on the configuration. In other words, as defined above, a verification algorithm operating on a configuration (G, x) ∈ L and a "good" certificate y must say "yes" at every node regardless of the id-assignment. We now define the class NLD(t), for nondeterministic local decision. (Our terminology is by direct analogy to the class NP in sequential computational complexity.)

Definition 2.4 Let t be a non-decreasing function of triplets (G, x, Id). Define NLD(t) as the class of all distributed languages that can be verified by a local distributed verification algorithm that runs in number of rounds at most t.
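To make the verification framework concrete, the following sketch implements the certificate used later (Section 4) for the language tree = {(G, ǫ) | G is a tree}: the prover labels each node with its distance to an arbitrary fixed root, and every node checks purely local consistency rules on its radius-1 ball. The adjacency-dict encoding is an illustrative assumption:

```python
# Distance certificates for the language tree, and their local verification.
from collections import deque

def distance_certificate(adj, root):
    """Prover side: y(v) = dist_G(v, root), computed by BFS."""
    y = {root: 0}
    queue = deque([root])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in y:
                y[w] = y[v] + 1
                queue.append(w)
    return y

def verify_tree(adj, y):
    """Verifier side: every node checks only itself and its neighbors."""
    for v in adj:
        if y[v] < 0:
            return False
        if y[v] == 0 and any(y[w] != 1 for w in adj[v]):
            return False
        if y[v] > 0:
            parents = [w for w in adj[v] if y[w] == y[v] - 1]
            others_ok = all(y[w] == y[v] + 1
                            for w in adj[v] if y[w] != y[v] - 1)
            if len(parents) != 1 or not others_ok:
                return False
    return True
```

On a tree, the BFS distances pass every local check; on a cycle, no matter where the BFS starts, some node sees an inconsistent neighborhood, matching the soundness argument given in Section 4.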
2.4 Bounded-error probabilistic local decision (BPLD)

A randomized distributed algorithm is a distributed algorithm A that enables every node v, at any round r during the execution, to toss a number of random bits, obtaining a string r(v) ∈ {0, 1} * . Clearly, this number cannot exceed f A (H(r, v)), the bound on the number of computational steps used by node v at round r. Note, however, that H(r, v) may now also depend on the random bits produced by other nodes in previous rounds. For p, q ∈ (0, 1], we say that a randomized distributed algorithm A is a (p, q)-decider for L, or, that it decides L with "yes" success probability p and "no" success probability q, if and only if for every configuration (G, x), and for every identity assignment Id for the nodes of G, every node of G eventually terminates and outputs "yes" or "no", and the following properties are satisfied:

• If (G, x) ∈ L, then Pr[out(v) = "yes" for every node v ∈ V (G)] ≥ p,

• If (G, x) ∉ L, then Pr[out(v) = "no" for at least one node v ∈ V (G)] ≥ q,

where the probabilities in the above definition are taken over all possible coin tosses performed by nodes. We define the class BPLD(t, p, q), for "Bounded-error Probabilistic Local Decision", as follows.

Definition 2.5 For p, q ∈ (0, 1] and a function t, BPLD(t, p, q) is the class of all distributed languages that have a local randomized distributed (p, q)-decider running in time t (i.e., that can be decided in time t by a local randomized distributed algorithm with "yes" success probability p and "no" success probability q).

3 A sharp threshold for randomization

Consider some graph G, and a subset U of the nodes of G, i.e., U ⊆ V (G), such that the subgraph G[U] of G induced by U is connected. We say that a language L is hereditary if, for every configuration (G, x) ∈ L and every such U, we have (G[U], x[U]) ∈ L; we refer to such a configuration (G[U], x[U]) as a prefix of (G, x). Theorem 3.1 below asserts that, for hereditary languages, randomization does not help if one imposes that p² + q > 1, i.e., the "no" success probability is at least as large as one minus the square of the "yes" success probability.
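For intuition about how per-node randomness combines into the global "yes"/"no" probabilities of Definition 2.5, here is a tiny numeric sketch. The scenario (k designated nodes each saying "yes" independently with some probability, all other nodes always saying "yes") is illustrative; the same p^k calculation reappears later in the proof of Theorem 4.3:

```python
# How per-node acceptance probabilities combine globally. Suppose k
# designated nodes each output "yes" independently with probability p_node,
# and every other node always outputs "yes" (illustrative scenario).

def all_yes_probability(p_node, k):
    """Probability that every node outputs "yes"."""
    return p_node ** k

def some_no_probability(p_node, k):
    """Probability that at least one node outputs "no"."""
    return 1 - all_yes_probability(p_node, k)
```

With p_node = p and k = 2, at least one node says "no" with probability 1 − p², which is at least q exactly when p² + q ≤ 1; this is the quantity at stake in the threshold p² + q > 1 discussed here.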
Somewhat more formally, we prove that for hereditary languages, we have ∩_{p²+q>1} BPLD(t, p, q) = LD(O(t)). This complements the fact that for p² + q ≤ 1, we have LD(t) ⊊ BPLD(t, p, q), for any t = o(n). Recall that [34] investigates the question of whether randomization helps for constructing in constant time a solution for a problem in LCL ⊊ LD(O(1)). We stress that the technique used in [34] for tackling this question relies heavily on the definition of LCL, specifically, that only graphs of constant degree and of constant input size are considered. Hence it is not clear whether the technique of [34] can be useful for our purposes, as we impose no such assumptions on the degrees or input sizes. Also, although it may seem at first glance that the Lovász local lemma could have been helpful here, we could not effectively apply it in our proof. Instead, we use a completely different approach.

Theorem 3.1 Let L be a hereditary language and let t be a function. If L ∈ BPLD(t, p, q) for constants p, q ∈ (0, 1] such that p² + q > 1, then L ∈ LD(O(t)).

Proof. Let us start with some definitions. Let L be a language in BPLD(t, p, q) where p, q ∈ (0, 1] and p² + q > 1, and t is some function. Let A be a randomized algorithm deciding L, with "yes" success probability p, and "no" success probability q, whose running time is at most t(G, x, Id), for every configuration (G, x) with identity assignment Id. Fix a configuration (G, x), and an id-assignment Id for the nodes of V (G). The distance dist G (u, v) between two nodes of G is the minimum number of edges in a path connecting u and v in G. The distance between two subsets U 1 , U 2 ⊆ V is defined as dist G (U 1 , U 2 ) = min{dist G (u, v) | u ∈ U 1 , v ∈ U 2 }. For a set U ⊆ V , let E(G, x, Id, U) denote the event that when running A on (G, x) with id-assignment Id, all nodes in U output "yes". Let v ∈ V (G). The running time of A at v may depend on the coin tosses made by the nodes.
Let t v = t v (G, x, Id) denote the maximal running time of v over all possible coin tosses. Note that t v ≤ t(G, x, Id) (we do not assume that either t or t v is known to v). The radius of a node v, denoted r v , is the maximum value t u such that there exists a node u with v ∈ B G (u, t u ). (Observe that the radius of a node is at most t.) The radius of a set of nodes S is r S := max{r v | v ∈ S}. In what follows, fix a constant δ such that 0 < δ < p² + q − 1, and define λ = 11⌈log p/ log(1 − δ)⌉. A splitter of (G, x, Id) is a triplet (S, U 1 , U 2 ) of pairwise disjoint subsets of nodes such that S ∪ U 1 ∪ U 2 = V and dist G (U 1 , U 2 ) ≥ λr S . (Observe that r S may depend on the identity assignment and the input, and therefore, being a splitter is not just a topological property depending only on G.) Given a splitter (S, U 1 , U 2 ) of (G, x, Id), let G k = G[U k ∪ S], and let x k be the input x restricted to nodes in G k , for k = 1, 2. The following structural claim does not use the fact that L is hereditary.

Lemma 3.2 For every configuration (G, x) with identity assignment Id, and every splitter (S, U 1 , U 2 ) of (G, x, Id), we have (G 1 , x 1 ) ∈ L and (G 2 , x 2 ) ∈ L ⇒ (G, x) ∈ L.

Let (G, x) be a configuration with identity assignment Id. Assume, towards contradiction, that there exists a splitter (S, U 1 , U 2 ) of the triplet (G, x, Id), such that (G 1 , x 1 ) ∈ L and (G 2 , x 2 ) ∈ L, yet (G, x) ∉ L. (The fact that (G 1 , x 1 ) ∈ L and (G 2 , x 2 ) ∈ L implies that both G 1 and G 2 are connected, however, we note, that for the claim to be true, it is not required that G[

Claim 3.3 There exists some i ∈ (2r S , d − 2r S ) such that i ∉ I.

Proof. For proving Claim 3.3, we upper bound the size of I by d − 4r S − 2. This is done by covering the integers in (2r S , d − 2r S ) by at most 4r S + 1 sets, such that each one is (4r S + 1)-independent, that is, for every two integers in the same set, they are at least 4r S + 1 apart.
Specifically, for s ∈ [1, 4r S + 1] and m(S) = ⌈(d − 8r S )/(4r S + 1)⌉, we define J s = {s + 2r S + j(4r S + 1) | j ∈ [0, m(S)]}. Observe that, as desired, (2r S , d − 2r S ) ⊂ ∪ s∈[1,4r S +1] J s , and for each s ∈ [1, 4r S + 1], J s is (4r S + 1)-independent. In what follows, fix s ∈ [1, 4r S + 1] and let J = J s . Since (G 1 , x 1 ) ∈ L, we know that Pr[E(G 1 , x 1 , Id, S J∩I )] ≥ p. Observe that for i ∈ (2r S , d − 2r S ) and every v ∈ S i , we have t v ≤ r v ≤ r S , and hence, the t v -neighborhood in G of every node v ∈ S i is contained in G 1 , i.e., B G (v, t v ) ⊆ G 1 . It therefore follows that:

Pr[E(G, x, Id, S J∩I )] = Pr[E(G 1 , x 1 , Id, S J∩I )] ≥ p . (1)

Consider two integers a and b in J. We know that |a − b| ≥ 4r S + 1. Hence, the distance in G between any two nodes u ∈ S a and v ∈ S b is at least 2r S + 1. Thus, the events E(G, x, Id, S a ) and E(G, x, Id, S b ) are independent. It follows by the definition of I, that

Pr[E(G, x, Id, S J∩I )] < (1 − δ)^|J∩I| . (2)

By (1) and (2), we have that p < (1 − δ)^|J∩I| and thus |J ∩ I| < log p/ log(1 − δ). Since (2r S , d − 2r S ) can be covered by the sets J s , s = 1, . . . , 4r S + 1, each of which is (4r S + 1)-independent, we get that

|I| = Σ s=1,...,4r S +1 |J s ∩ I| < (4r S + 1)(log p/ log(1 − δ)) .

Combining this bound with the fact that d ≥ λr S , we get that d − 4r S − 1 > |I|. It follows by the pigeonhole principle that there exists some i ∈ (2r S , d − 2r S ) such that i ∉ I, as desired. This completes the proof of Claim 3.3.

Fix i ∈ (2r S , d − 2r S ) such that i ∉ I, and let F = E(G, x, Id, S i ). By the definition of I, we have

Pr[¬F] ≤ δ < p² + q − 1 . (3)

Let H 1 denote the subgraph of G induced by the nodes in (∪ j=1,...,i−r S −1 L j ) ∪ U 1 . We similarly define H 2 as the subgraph of G induced by the nodes in (∪ j>i+r S L j ) ∪ U 2 . Note that S i ∪ V (H 1 ) ∪ V (H 2 ) = V , and for any two nodes u ∈ V (H 1 ) and v ∈ V (H 2 ), we have dist G (u, v) > 2r S .
It follows that, for k = 1, 2, the t u -neighborhood in G of each node u ∈ V (H k ) equals the t u -neighborhood in G k of u, that is, B G (u, t u ) ⊆ G k . (To see why, consider, for example, the case k = 2. Given u ∈ V (H 2 ), it is sufficient to show that there is no v ∈ V (H 1 ) such that v ∈ B G (u, t u ). Indeed, if such a vertex v exists then dist G (u, v) > 2r S , and hence t u > 2r S . Since there must exist a vertex w ∈ S i such that w ∈ B G (u, t u ), we get that r w > 2r S , in contradiction to the fact that w ∈ S.) Thus, for k = 1, 2, since (G k , x k ) ∈ L, we get

Pr[E(G, x, Id, V (H k ))] = Pr[E(G k , x k , Id, V (H k ))] ≥ p .

Let F ′ = E(G, x, Id, V (H 1 ) ∪ V (H 2 )). As the events E(G, x, Id, V (H 1 )) and E(G, x, Id, V (H 2 )) are independent, it follows that Pr[F ′ ] ≥ p², that is,

Pr[¬F ′ ] ≤ 1 − p² . (4)

By Eqs. (3) and (4), and using the union bound, it follows that Pr[¬F ∨ ¬F ′ ] ≤ δ + 1 − p² < q. Thus

Pr[E(G, x, Id, V (G))] = Pr[E(G, x, Id, S i ∪ V (H 1 ) ∪ V (H 2 ))] = Pr[F ∧ F ′ ] > 1 − q .

This is in contradiction to the assumption that (G, x) ∉ L, since for such a configuration a (p, q)-decider outputs "no" at some node with probability at least q, i.e., satisfies Pr[E(G, x, Id, V (G))] ≤ 1 − q. This concludes the proof of Lemma 3.2.

Our goal now is to show that L ∈ LD(O(t)) by proving the existence of a deterministic local algorithm D that runs in time O(t) and recognizes L. (No attempt is made here to minimize the constant factor hidden in the O(t) notation.) Recall that both t = t(G, x, Id) and t v = t v (G, x, Id) may not be known to v. Nevertheless, by inspecting the balls B G (v, 2 i ) for increasing i = 1, 2, · · · , each node v can compute an upper bound on t v as given by the following claim.

Claim 3.4 For every constant c ≥ 1, in O(t) time each node v can compute a value t * v = t * v (c) such that (1) c · t v ≤ t * v = O(t) and (2) for every u ∈ B G (v, c · t * v ), we have t u ≤ t * v .

To establish the claim, observe first that in O(t) time, each node v can compute a value t ′ v satisfying t v ≤ t ′ v ≤ 2t.
Indeed, given the ball B G (v, 2 i ), for some integer i, and using the upper bound on the number of (sequential) local computations, node v can simulate all its possible executions up to round r = 2 i . The desired value t ′ v is the smallest r = 2 i for which all executions of v up to round r conclude with an output at v. Once t ′ v is computed, node v aims at computing t * v . For this purpose, it starts again to inspect the balls B G (v, 2 i ) for increasing i = 1, 2, · · · , to obtain t ′ u from each u ∈ B G (v, 2 i ). (For this purpose, it may need to wait until u computes t ′ u , but this delays the whole computation by at most O(t) time.) Now, node v outputs t * v = 2 i for the smallest i satisfying (1) c · t ′ v ≤ 2 i and (2) for every u ∈ B G (v, c · 2 i ), we have t ′ u ≤ 2 i . It is easy to see that for this i, we have 2 i = O(t), hence t * v = O(t).

Given a configuration (G, x), and an id-assignment Id, Algorithm D, applied at a node u, first calculates t * u = t * u (6λ), and then outputs "yes" if and only if the 2λt * u -neighborhood of u in (G, x) belongs to L. That is,

out(u) = "yes" ⇐⇒ (B G (u, 2λt * u ), x[B G (u, 2λt * u )]) ∈ L .

Obviously, Algorithm D is a deterministic algorithm that runs in time O(t). We claim that Algorithm D decides L. Indeed, since L is hereditary, if (G, x) ∈ L, then every prefix of (G, x) is also in L, and thus, every node u outputs out(u) = "yes". Now consider the case where (G, x) ∉ L, and assume by contradiction that by applying D on (G, x) with id-assignment Id, every node u outputs out(u) = "yes". Let U ⊆ V (G) be maximal by inclusion, such that G[U] is connected and (G[U], x[U]) ∈ L. Obviously, U is not empty, as (B G (u, 2λt * u ), x[B G (u, 2λt * u )]) ∈ L for every node u. On the other hand, we have |U| < |V (G)|, because (G, x) ∉ L. Let u ∈ U be a node with maximal t u such that B G (u, 2t u ) contains a node outside U. Define G ′ as the subgraph of G induced by U ∪ V (B G (u, 2t u )).
Observe that G ′ is connected and that G ′ strictly contains U. Towards contradiction, our goal is to show that (G ′ , x[G ′ ]) ∈ L. Let H denote the graph which is maximal by inclusion such that H is connected and B G (u, 2t u ) ⊂ H ⊆ B G (u, 2t u ) ∪ (U ∩ B G (u, 2λt * u )) . Let W 1 , W 2 , · · · , W ℓ be the ℓ connected components of G[U]\B G (u, 2t u ), ordered arbitrarily. Let W 0 be the empty graph, and for k = 0, 1, 2, · · · , ℓ, define the graph Z k = H ∪ W 0 ∪ W 1 ∪ W 2 ∪ · · · ∪ W k . Observe that Z k is connected for each k = 0, 1, 2, · · · , ℓ. We prove by induction on k that (Z k , x[Z k ]) ∈ L for every k = 0, 1, 2, · · · , ℓ. This will establish the contradiction since Z ℓ = G ′ . For the basis of the induction, the case k = 0, we need to show that (H, x[H]) ∈ L. However, this is immediate by the facts that H is a connected subgraph of B G (u, 2λt * u ), the configuration (B G (u, 2λt * u ), x[B G (u, 2λt * u )]) ∈ L, and L is hereditary. Assume now that we have (Z k , x[Z k ]) ∈ L for 0 ≤ k < ℓ, and consider the graph Z k+1 = Z k ∪ W k+1 . Define the sets of nodes S = V (Z k ) ∩ V (W k+1 ), U 1 = V (Z k ) \ S, and U 2 = V (W k+1 ) \ S . A crucial observation is that (S, U 1 , U 2 ) is a splitter of Z k+1 . This follows from the following arguments. Let us first show that r S ≤ t * u . By definition, we have t v ≤ t * u , for every v ∈ B G (u, 6λt * u ). Hence, in order to bound the radius of S (in Z k+1 ) by t * u it is sufficient to prove that there is no node w ∈ U \ B G (u, 6λt * u ) such that B G (w, t w ) ∩ S = ∅. Indeed, if such a node w exists then t w > 4λt * u and hence B G (w, 2t w ) contains a node outside U, in contradiction to the choice of u. It follows that r S ≤ t * u . We now claim that dist Z k+1 (U 1 , U 2 ) ≥ λt * u . Consider a simple directed path P in Z k+1 going from a node x ∈ U 1 to a node y ∈ U 2 . Since x / ∈ V (W k+1 ) and y ∈ V (W k+1 ), we get that P must pass through a vertex in B G (u, 2t u ). 
Let z be the last vertex in P such that z ∈ B G (u, 2t u ), and consider the directed subpath P [z,y] of P going from z to y. Now, let P ′ = P [z,y] \ {z}. The first d ′ = min{(2λ − 2)t * u , |P ′ |} vertices in the directed subpath P ′ must belong to V (H) ⊆ V (Z k ). In addition, observe that all nodes in P ′ must be in V (W k+1 ). It follows that the first d ′ nodes of P ′ are in S. Since y ∉ S, we get that |P ′ | ≥ d ′ = (2λ − 2)t * u , and thus |P | > λt * u . Consequently, dist Z k+1 (U 1 , U 2 ) ≥ λt * u , as desired. This completes the proof that (S, U 1 , U 2 ) is a splitter of Z k+1 . Now, by the induction hypothesis, we have (G 1 , x[G 1 ]) ∈ L, because G 1 = G[U 1 ∪ S] = Z k . In addition, we have (G 2 , x[G 2 ]) ∈ L, because G 2 = G[U 2 ∪ S] = W k+1 , a connected induced subgraph of G[U], while (G[U], x[U]) ∈ L and L is hereditary. Applying Lemma 3.2 to the splitter (S, U 1 , U 2 ) of Z k+1 thus yields (Z k+1 , x[Z k+1 ]) ∈ L, completing the induction, and with it the proof of Theorem 3.1.

4 Nondeterminism and complete problems

4.1 Separation results

Our first separation result indicates that non-determinism helps for local decision. Indeed, we show that there exists a language, specifically, tree = {(G, ǫ) | G is a tree}, which belongs to NLD(1) but not to LD(t), for any t = o(n). The proof follows by rather standard arguments.

Theorem 4.1 There exists a language L such that L ∈ NLD(1) and L ∉ LD(t), for any t = o(n).

Proof. To establish the theorem it is sufficient to show that there exists a language L such that L ∉ LD(o(n)) and L ∈ NLD(1). Let tree = {(G, ǫ) | G is a tree}. We have tree ∉ LD(o(n)). To see why, consider a cycle C with nodes labeled consecutively from 1 to 4n, and the path P 1 (resp., P 2 ) with nodes labeled consecutively 1, . . . , 4n (resp., 2n + 1, . . . , 4n, 1, . . . , 2n), from one extremity to the other. For any algorithm A deciding tree, all nodes n + 1, . . . , 3n output "yes" in configuration (P 1 , ǫ) for any identity assignment for the nodes in P 1 , while all nodes 3n + 1, . . . , 4n, 1, . . . , n output "yes" in configuration (P 2 , ǫ) for any identity assignment for the nodes in P 2 . Thus if A is local, then all nodes output "yes" in configuration (C, ǫ), a contradiction. In contrast, we next show that tree ∈ NLD(1).
The (nondeterministic) local algorithm A verifying tree operates as follows. Given a configuration (G, ǫ), the certificate given at node v is y(v) = dist G (v, r) where r ∈ V (G) is an arbitrary fixed node. The verification procedure is then as follows. At each node v, A inspects every neighbor (with its certificates), and verifies the following: • y(v) is a non-negative integer, • if y(v) = 0, then y(w) = 1 for every neighbor w of v, and • if y(v) > 0, then there exists a neighbor w of v such that y(w) = y(v) − 1, and, for all other neighbors w ′ of v, we have y(w ′ ) = y(v) + 1. If G is a tree, then applying Algorithm A on G with the certificate yields the answer "yes" at all nodes regardless of the given id-assignment. On the other hand, if G is not a tree, then we claim that for every certificate, and every id-assignment Id, Algorithm A outputs "no" at some node. Indeed, consider some certificate y given to the nodes of G, and let C be a simple cycle in G. Assume, for the sake of contradiction, that all nodes in C output "yes". In this case, each node in C has at least one neighbor in C with a larger certificate. This creates an infinite sequence of strictly increasing certificates, in contradiction with the finiteness of C. Theorem 4.2 There exists a language L such that L / ∈ NLD(t), for any t = o(n). Proof. Let InpEqSize = {(G, x) | ∀v ∈ V (G), x(v) = |V (G)|}. We show that InpEqSize / ∈ NLD(t), for any t = o(n). Assume, for the sake of contradiction, that there exists a local nondeterministic algorithm A deciding InpEqSize. Let t < n/4 be the running time of A. Consider the cycle C with 2t + 1 nodes u 1 , u 2 , · · · , u 2t+1 , enumerated clockwise. Assume that the input at each node u i of C satisfies x(u i ) = 2t + 1. Then, there exists a certificate y such that, for any identity assignment Id, algorithm A outputs "yes" at each node of C. 
Now, consider the configuration (C ′ , x ′ ) where the cycle C ′ has 4t + 2 nodes, and for each node v i of C ′ , x ′ (v i ) = 2t + 1. We have (C ′ , x ′ ) ∉ InpEqSize. To fool Algorithm A, we enumerate the nodes in C ′ clockwise, i.e., C ′ = (v 1 , v 2 , · · · , v 4t+2 ). We then define the certificate y ′ as follows:

y ′ (v i ) = y ′ (v i+2t+1 ) = y(u i ) for i = 1, 2, · · · , 2t + 1 .

Fix an id-assignment Id ′ for the nodes in V (C ′ ), and fix i ∈ {1, 2, · · · , 2t + 1}. There exists an id-assignment Id 1 for the nodes in V (C), such that the output of A at node v i in (C ′ , x ′ ) with certificate y ′ and id-assignment Id ′ is identical to the output of A at node u i in (C, x) with certificate y and id-assignment Id 1 . Similarly, there exists an id-assignment Id 2 for the nodes in V (C) such that the output of A at node v i+2t+1 in (C ′ , x ′ ) with certificate y ′ and id-assignment Id ′ is identical to the output of A at node u i in (C, x) with certificate y and id-assignment Id 2 . Thus, Algorithm A at both v i and v i+2t+1 outputs "yes" in (C ′ , x ′ ) with certificate y ′ and id-assignment Id ′ . Hence, since i was arbitrary, all nodes output "yes" for this configuration, certificate and id-assignment, contradicting the fact that (C ′ , x ′ ) ∉ InpEqSize.

For p, q ∈ (0, 1] and a function t, let us define BPNLD(t, p, q) as the class of all distributed languages that have a local randomized non-deterministic distributed (p, q)-decider running in time t.

Theorem 4.3 Let p, q ∈ (0, 1] such that p² + q ≤ 1. For every language L, we have L ∈ BPNLD(1, p, q).

Proof. Let L be a language. The certificate of a configuration (G, x) ∈ L is a map of G, with nodes labeled with distinct integers in {1, ..., n}, where n = |V (G)|, together with the inputs of all nodes in G. In addition, every node v receives the label λ(v) of the corresponding vertex in the map.
Precisely, the certificate at node v is y(v) = (G ′ , x ′ , i) where G ′ is an isomorphic copy of G with nodes labeled from 1 to n, x ′ is an n-dimensional vector such that x ′ [λ(u)] = x(u) for every node u, and i = λ(v). The verification algorithm involves checking that the configuration (G ′ , x ′ ) is identical to (G, x). This is sufficient because distributed languages are sequentially decidable, hence every node can individually decide whether (G ′ , x ′ ) belongs to L or not, once it has secured the fact that (G ′ , x ′ ) is the actual configuration. It remains to show that there exists a local randomized non-deterministic distributed (p, q)-decider for verifying that the configuration (G ′ , x ′ ) is identical to (G, x), and running in time 1. The non-deterministic (p, q)-decider operates as follows. First, every node v checks that it has received the input as specified by x ′ , i.e., v checks whether x ′ [λ(v)] = x(v), and outputs "no" if this does not hold. Second, each node v communicates with its neighbors to check that (1) they all got the same map G ′ and the same input vector x ′ , and (2) they are labeled the way they should be according to the map G ′ . If some inconsistency is detected by a node, then this node outputs "no". Finally, consider a node v that passed the aforementioned two phases without outputting "no". If λ(v) ≠ 1 then v outputs "yes" (with probability 1), and if λ(v) = 1 then v outputs "yes" with probability p. We claim that the above implements a non-deterministic distributed (p, q)-decider for verifying that the configuration (G ′ , x ′ ) is identical to (G, x). Indeed, if all nodes pass the two phases without outputting "no", then they all agree on the map G ′ and on the input vector x ′ , and they know that their respective neighborhood fits with what is indicated on the map. Hence, (G ′ , x ′ ) is a lift of (G, x).
It follows that (G ′ , x ′ ) = (G, x) if and only if there exists at most one node v ∈ V (G) whose label satisfies λ(v) = 1. Consequently, if (G ′ , x ′ ) = (G, x) then all nodes say "yes" with probability at least p. On the other hand, if (G ′ , x ′ ) ≠ (G, x) then there are at least two nodes in G whose label is "1". These two nodes say "yes" with probability p², hence, the probability that at least one of them says "no" is at least 1 − p² ≥ q. This completes the proof of Theorem 4.3. The above theorem guarantees that the following definition is well defined. Let BPNLD = BPNLD(1, p, q), for some p, q ∈ (0, 1] such that p² + q ≤ 1.

4.2 Completeness results

Let us first define a notion of reduction that fits the class LD. For two languages L 1 , L 2 , we say that L 1 is locally reducible to L 2 , denoted by L 1 ⪯ L 2 , if there exists a constant time local algorithm A such that, for every configuration (G, x) and every id-assignment Id, A produces out(v) ∈ {0, 1} * as output at every node v ∈ V (G) so that (G, x) ∈ L 1 ⇐⇒ (G, out) ∈ L 2 . By definition, LD(O(t)) is closed under local reductions, that is, for every two languages L 1 , L 2 satisfying L 1 ⪯ L 2 , if L 2 ∈ LD(O(t)) then L 1 ∈ LD(O(t)). We now show that there exists a natural problem, called cover, which is in some sense the "most difficult" decision problem; that is, we show that cover is BPNLD-complete. Language cover is defined as follows. Every node v is given as input an element E(v), and a finite collection of sets S(v). The union of these inputs is in the language if there exists a node v such that one set in S(v) equals the union of all the elements given to the nodes. Formally, cover = {(G, (E, S)) | ∃v ∈ V (G), ∃S ∈ S(v) s.t. S = {E(u) | u ∈ V (G)}}.

Theorem 4.4 cover is BPNLD-complete.

Proof. The fact that cover ∈ BPNLD follows from Theorem 4.3. To prove that cover is BPNLD-hard, we consider some L ∈ BPNLD and show that L ⪯ cover.
For this purpose, we describe a local distributed algorithm A transforming any configuration for L to a configuration for cover preserving the memberships to these languages. Let (G, x) be a configuration for L and let Id be an identity assignment. Algorithm A operating at a node v outputs a pair (E(v), S(v)), where E(v) is the "local view" at v in (G, x), i.e., the star subgraph of G consisting of v and its neighbors, together with the inputs of these nodes and their identities, and S(v) is the collection of sets S defined as follows. For a binary string x, let |x| denote the length of x, i.e., the number of bits in x. For every vertex v, let ψ(v) = 2^(|Id(v)|+|x(v)|) . Node v first generates all configurations (G ′ , x ′ ) where G ′ is a graph with k ≤ ψ(v) vertices, and x ′ is a collection of k input strings of length at most ψ(v), such that (G ′ , x ′ ) ∈ L. For each such configuration (G ′ , x ′ ), node v generates all possible Id ′ assignments to V (G ′ ) such that for every node u ∈ V (G ′ ), |Id(u)| ≤ ψ(v). Now, for each such pair of a graph (G ′ , x ′ ) and an Id ′ assignment, algorithm A associates a set S ∈ S(v) consisting of the k = |V (G ′ )| local views of the nodes of G ′ in (G ′ , x ′ ). We show that (G, x) ∈ L ⇐⇒ A(G, x) ∈ cover. If (G, x) ∈ L, then by the construction of Algorithm A, there exists a set S ∈ S(v) such that S covers the collection of local views for (G, x), i.e., S = {E(u) | u ∈ V (G)}. Indeed, the node v maximizing ψ(v) satisfies ψ(v) ≥ max{Id(u) | u ∈ V (G)} ≥ n and ψ(v) ≥ max{|x(u)| | u ∈ V (G)}. Therefore, that specific node has constructed a set S which contains all local views of the given configuration (G, x) and Id assignment. Thus A(G, x) ∈ cover. Now consider the case that A(G, x) ∈ cover. In this case, there exists a node v and a set S ∈ S(v) such that S = {E(u) | u ∈ V (G)}.
Such a set S is the collection of local views of nodes of some configuration (G ′ , x ′ ) ∈ L and some Id ′ assignment. On the other hand, S is also the collection of local views of nodes of the given configuration (G, x) and Id assignment. It follows that (G, x) = (G ′ , x ′ ) ∈ L. We now define a natural problem, called containment, which is NLD(O(1))-complete. Somewhat surprisingly, the definition of containment is quite similar to the definition of cover. Specifically, as in cover, every node v is given as input an element E(v), and a finite collection of sets S(v). However, in contrast to cover, the union of these inputs is in the containment language if there exists a node v such that one set in S(v) contains the union of all the elements given to the nodes. Formally, we define containment = {(G, (E, S)) | ∃v ∈ V (G), ∃S ∈ S(v) s.t. S ⊇ {E(u) | u ∈ V (G)}}.

Theorem 4.5 containment is NLD(O(1))-complete.

Proof. We first prove that containment is NLD(O(1))-hard. Consider some L ∈ NLD(O(1)); we show that L ⪯ containment. For this purpose, we describe a local distributed algorithm D transforming any configuration for L to a configuration for containment preserving the memberships to these languages. Let t = t L ≥ 0 be some (constant) integer such that there exists a local nondeterministic algorithm A L deciding L in time at most t. Let (G, x) be a configuration for L and let Id be an identity assignment. Algorithm D operating at a node v outputs a pair (E(v), S(v)), where E(v) is the "t-local view" at v in (G, x), i.e., the ball of radius t around v, B G (v, t), together with the inputs of these nodes and their identities, and S(v) is the collection of sets S defined as follows. For a binary string x, let |x| denote the length of x, i.e., the number of bits in x. For every vertex v, let ψ(v) = 2^(|Id(v)|+|x(v)|) . Node v first generates all configurations (G ′ , x ′ ) where G ′ is a graph with m ≤ ψ(v) vertices, and x ′ is a collection of m input strings of length at most ψ(v), such that (G ′ , x ′ ) ∈ L.
For each such configuration (G′, x′), node v generates all possible Id′ assignments to V (G′) such that for every node u ∈ V (G′), |Id′(u)| ≤ ψ(v). Now, for each such pair of a configuration (G′, x′) and an Id′ assignment, algorithm D associates a set S ∈ S(v) consisting of the m = |V (G′)| t-local views of the nodes of G′ in (G′, x′). We show that (G, x) ∈ L ⇐⇒ D(G, x) ∈ containment. If (G, x) ∈ L, then by the construction of Algorithm D, there exists a set S ∈ S(v) such that S covers the collection of t-local views for (G, x), i.e., S = {E(u) | u ∈ V (G)}. Indeed, the node v maximizing ψ(v) satisfies ψ(v) ≥ max{Id(u) | u ∈ V (G)} ≥ n and ψ(v) ≥ max{|x(u)| | u ∈ V (G)}. Therefore, that specific node has constructed a set S that precisely corresponds to (G, x) and its given Id assignment; hence, S contains all corresponding t-local views. Thus, D(G, x) ∈ containment. Now consider the case that D(G, x) ∈ containment. In this case, there exists a node v and a set S ∈ S(v) such that S ⊇ {E(u) | u ∈ V (G)}. Such a set S is the collection of t-local views of the nodes of some configuration (G′, x′) ∈ L under some Id′ assignment. Since (G′, x′) ∈ L, there exists a certificate y′ for the nodes of G′ such that when algorithm A_L operates on (G′, x′, y′), all nodes say "yes". Now, since S contains the t-local views of the nodes of (G, x), with the corresponding identities, there exists a mapping φ : (G, x, Id) → (G′, x′, Id′) that preserves inputs and identities. Moreover, when restricted to a ball of radius t around a vertex v ∈ G, φ is actually an isomorphism between this ball and its image. We assign a certificate y to the nodes of G: for each v ∈ V (G), y(v) = y′(φ(v)). Now, Algorithm A_L, when operating on (G, x, y), outputs "yes" at each node of G. By the correctness of A_L, we obtain (G, x) ∈ L. We now show that containment ∈ NLD(O(1)).
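The only difference between the two predicates is superset inclusion versus equality. A centralized sketch of the containment test (names ours), for contrast with cover:

```python
# Hypothetical check of the containment predicate: as for cover,
# but a superset suffices: some node must hold a candidate set S*
# with S* ⊇ {E(u) | u in V(G)}.

def in_containment(E, S):
    all_views = set(E.values())
    return any(set(candidate) >= all_views     # superset, not equality
               for v in E
               for candidate in S[v])

E = {1: "a", 2: "b"}
S = {1: [{"a", "b", "c"}], 2: []}  # a strict superset suffices here
assert in_containment(E, S)
```

The candidate set above would fail an equality test, which is precisely the relaxation separating containment from cover.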
For this purpose, we design a nondeterministic local algorithm A that decides whether a configuration (G, x) is in containment. Such an algorithm A is designed to operate on (G, x, y), where y is a certificate. The configuration (G, x) satisfies x(v) = (E(v), S(v)). Algorithm A aims at verifying whether there exists a node v* with a set S* ∈ S(v*) such that S* ⊇ {E(v) | v ∈ V (G)}. Given a correct instance, i.e., a configuration (G, x) ∈ containment, we define the certificate y as follows. For each node v, the certificate y(v) at v consists of several fields, specifically, y(v) = (y_c(v), y_s(v), y_id(v), y_l(v)). The candidate-configuration field y_c(v) is a triplet y_c(v) = (G′, x′, Id′), where (G′, x′) is an isomorphic copy of (G, x) and Id′ is an (Given p and q, one can modify the success probabilities by performing k runs and requiring each node to individually output "no" if it decided "no" on at least one of the runs. In this case, the "no" success probability increases from q to at least 1 − (1 − q)^k, and the "yes" success probability then decreases from p to p^k.) Another interesting question is whether the phenomenon we observed regarding randomization occurs also in the non-deterministic setting, that is, whether BPNLD(t, p, q) collapses into NLD(O(t)), for p² + q > 1. Our model of computation, namely the LOCAL model, focuses on difficulties arising from purely locality issues, and abstracts away other complexity measures. Naturally, it would be very interesting to come up with a rigorous complexity framework taking into account other complexity measures as well. For example, it would be interesting to investigate the connections between classical computational complexity theory and the local complexity one.
The bound on the (centralized) running time in each round (given by the function f, see Section 2) may serve as a bridge for connecting the two theories, by putting constraints on this bound (e.g., f must be polynomial, exponential, etc.). Also, one could restrict the memory used by a node, in addition to, or instead of, bounding the sequential time. Finally, it would be interesting to come up with a complexity framework that also takes congestion into account.
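The k-run amplification mentioned in the discussion above can be checked numerically; the function below simply instantiates the stated formulas 1 − (1 − q)^k and p^k (names ours):

```python
# Run k independent executions of a (p, q)-decider and output "no"
# if any run said "no": the "no" success probability becomes
# 1 - (1 - q)**k while the "yes" success probability drops to p**k.

def amplified(p, q, k):
    return p ** k, 1 - (1 - q) ** k

p, q = 0.9, 0.3
for k in (1, 2, 5):
    yes_k, no_k = amplified(p, q, k)
    print(k, round(yes_k, 4), round(no_k, 4))

# The "no" probability rises with k while the "yes" probability falls:
assert amplified(p, q, 5)[1] > amplified(p, q, 2)[1] > q - 1e-12
assert amplified(p, q, 5)[0] < amplified(p, q, 2)[0] <= p
```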
10,308
1011.2152
2953378433
A central theme in distributed network algorithms concerns understanding and coping with the issue of locality. Inspired by sequential complexity theory, we focus on a complexity theory for distributed decision problems. In the context of locality, solving a decision problem requires the processors to independently inspect their local neighborhoods and then collectively decide whether a given global input instance belongs to some specified language. This paper introduces several classes of distributed decision problems, proves separations among them, and presents some complete problems. More specifically, we consider the standard LOCAL model of computation and define LD (for local decision) as the class of decision problems that can be solved in a constant number of communication rounds. We first study the intriguing question of whether randomization helps in local distributed computing, and to what extent. Specifically, we define the corresponding randomized class BPLD, and ask whether LD=BPLD. We provide a partial answer to this question by showing that in many cases, randomization does not help for deciding hereditary languages. In addition, we define the notion of local many-one reductions, and introduce the (nondeterministic) class NLD of decision problems for which there exists a certificate that can be verified in a constant number of communication rounds. We prove that there exists an NLD-complete problem. We also show that there exist problems not in NLD. On the other hand, we prove that the class NLD#n, which is NLD assuming that each processor can access an oracle that provides the number of nodes in the network, contains all (decidable) languages. For this class we provide a natural complete problem as well.
The question of whether randomization helps in decreasing the locality parameter of construction problems has been the focus of numerous studies. To date, there exists evidence that, for some problems at least, randomization does not help. For instance, @cite_30 proves this for 3-coloring the ring. In fact, for low degree graphs, the gaps between the efficiencies of the best known randomized and deterministic algorithms for problems like MIS, @math -coloring, and Maximal Matching are very small. On the other hand, for graphs of arbitrarily large degrees, there seem to be indications that randomization does help, at least in some cases. For instance, @math -coloring can be randomly computed in expected @math communication rounds on @math -node graphs @cite_39 @cite_6 , whereas the best known deterministic algorithm for this problem performs in @math rounds @cite_1 . @math -coloring results whose performances are measured also with respect to the maximum degree @math illustrate this phenomenon as well. Specifically, @cite_27 shows that @math -coloring can be randomly computed in expected @math communication rounds whereas the best known deterministic algorithm performs in @math rounds @cite_40 @cite_24 .
{ "abstract": [ "Suppose that n processors are arranged in a ring and can communicate only with their immediate neighbors. It is shown that any probabilistic algorithm for 3-coloring the ring must take at least @math rounds, otherwise the probability that all processors are colored legally is less than @math . A similar time bound holds for selecting a maximal independent set. The bound is tight (up to a constant factor) in light of the deterministic algorithms of Cole and Vishkin [Inform. and Control, 70 (1986), pp. 32–53] and extends the lower bound for deterministic algorithms of Linial [Proc. 28th IEEE Foundations of Computer Science Symposium, 1987, pp. 331–335].", "In this paper, we improve the bounds for computing a network decomposition distributively and deterministically. Our algorithm computes an (n^ε(n), n^ε(n))-decomposition in n^O(ε(n)) time, where ε(n) = 1/√(log n). As a corollary we obtain improved deterministic bounds for distributively computing several graph structures such as maximal independent sets and Δ-vertex colorings. We also show that the class of graphs G whose maximum degree is n^O(δ(n)), where δ(n) = 1/log log n, is complete for the task of computing a near-optimal decomposition, i.e., a (log n, log n)-decomposition, in polylog(n) time. This is a corollary of a more general characterization, which pinpoints the weak points of existing network decomposition algorithms. Completeness is to be intended in the following sense: if we have an algorithm A that computes a near-optimal decomposition in polylog(n) time for graphs in G, then we can compute a near-optimal decomposition in polylog(n) time for all graphs.", "Two basic design strategies are used to develop very simple and fast parallel algorithms for the maximal independent set (MIS) problem. The first strategy consists of assigning identical copies of a simple algorithm to small local portions of the problem input.
The algorithm is designed so that when the copies are executed in parallel the correct problem output is produced very quickly. A very simple Monte Carlo algorithm for the MIS problem is presented which is based upon this strategy. The second strategy is a general and powerful technique for removing randomization from algorithms. This strategy is used to convert the Monte Carlo algorithm for the MIS problem into a simple deterministic algorithm with the same parallel running time.", "Abstract A simple parallel randomized algorithm to find a maximal independent set in a graph G = (V, E) on n vertices is presented. Its expected running time on a concurrent-read concurrent-write PRAM with O(|E| d_max) processors is O(log n), where d_max denotes the maximum degree. On an exclusive-read exclusive-write PRAM with O(|E|) processors the algorithm runs in O(log² n). Previously, an O(log⁴ n) deterministic algorithm was given by Karp and Wigderson for the EREW-PRAM model. This was recently (independently of our work) improved to O(log² n) by M. Luby. In both cases randomized algorithms depending on pairwise independent choices were turned into deterministic algorithms. We comment on how randomized combinatorial algorithms whose analysis only depends on d-wise rather than fully independent random choices (for some constant d) can be converted into deterministic algorithms. We apply a technique due to A. Joffe (1974) and obtain deterministic construction in fast parallel time of various combinatorial objects whose existence follows from probabilistic arguments.", "We study deterministic, distributed algorithms for two weak variants of the standard graph coloring problem. We consider defective colorings, i.e., colorings where nodes of a color class may induce a graph of maximum degree d for some parameter d>0. We also look at colorings where a minimum number of multi-chromatic edges is required.
For an integer k>0, we call a coloring k-partially proper if every node v has at least min{k, deg(v)} neighbors with a different color. We show that for all d ∈ {1,...,Δ}, it is possible to compute a O(Δ²/d²)-coloring with defect d in time O(log* n), where Δ is the largest degree of the network graph. Similarly, for all k ∈ {1,...,Δ}, a k-partially proper O(k²)-coloring can be computed in O(log* n) rounds. As an application of our weak defective coloring algorithm, we obtain a faster deterministic algorithm for the standard vertex coloring problem on graphs with moderate degrees. We show that in time O(Δ + log* n), a (Δ+1)-coloring can be computed, a task for which the best previous algorithm required time O(Δ log Δ + log* n). The same result holds for the problem of computing a maximal independent set.", "We introduce Multi-Trials, a new technique for symmetry breaking for distributed algorithms and apply it to various problems in general graphs. For instance, we present three randomized algorithms for distributed (vertex or edge) coloring improving on previous algorithms and showing a time/color trade-off. To get a (Δ+1)-coloring takes time O(log Δ + √(log n)). To obtain an O(Δ + log^{1+1/log* n} n)-coloring takes time O(log* n). This is more than an exponential improvement in time for graphs of polylogarithmic degree. Our fastest algorithm works in constant time using O(Δ log^(c) n + log^{1+1/c} n) colors, where c denotes an arbitrary constant and log^(c) n denotes the c times (recursively) applied logarithm of n. We also use the Multi-Trials technique to compute network decompositions and to compute maximal independent sets (MIS), obtaining new results for several graph classes.", "" ], "cite_N": [ "@cite_30", "@cite_1", "@cite_6", "@cite_39", "@cite_24", "@cite_27", "@cite_40" ], "mid": [ "1970330908", "1998643836", "2100061495", "1964089073", "2157562188", "2119761906", "" ] }
Local Distributed Decision
Distributed computing concerns a collection of processors which collaborate in order to achieve some global task. With time, two main disciplines have evolved in the field. One discipline deals with timing issues, namely, uncertainties due to asynchrony (the fact that processors run at their own speed, and possibly crash), and the other concerns topology issues, namely, uncertainties due to locality constraints (the lack of knowledge about far away processors). Studies carried out by the distributed computing community within these two disciplines were to a large extent problem-driven. Indeed, several major problems considered in the literature concern coping with one of the two uncertainties. For instance, in the asynchrony-discipline, Fischer, Lynch and Paterson [14] proved that consensus cannot be achieved in the asynchronous model, even in the presence of a single fault, and in the locality-discipline, Linial [28] proved that (∆ + 1)-coloring cannot be achieved locally (i.e., in a constant number of communication rounds), even in the ring network. One of the significant achievements of the asynchrony-discipline was its success in establishing unifying theories in the flavor of computational complexity theory. Some central examples of such theories are failure detectors [6,7] and the wait-free hierarchy (including Herlihy's hierarchy) [18]. In contrast, despite considerable progress, the locality-discipline still suffers from the absence of a solid basis in the form of a fundamental computational complexity theory. Obviously, defining some common cost measures (e.g., time, message, memory, etc.) enables us to compare problems in terms of their relative cost. Still, from a computational complexity point of view, it is not clear how to relate the difficulty of problems in the locality-discipline. Specifically, if two problems have different kinds of outputs, it is not clear how to reduce one to the other, even if they cost the same. 
Inspired by sequential complexity theory, we focus on decision problems, in which one is aiming at deciding whether a given global input instance belongs to some specified language. In the context of distributed computing, each processor must produce a boolean output, and the decision is defined by the conjunction of the processors' outputs, i.e., if the instance belongs to the language, then all processors must output "yes", and otherwise, at least one processor must output "no". Observe that decision problems provide a natural framework for tackling fault-tolerance: the processors have to collectively check whether the network is fault-free, and a node detecting a fault raises an alarm. In fact, many natural problems can be phrased as decision problems, like "is there a unique leader in the network?" or "is the network planar?". Moreover, decision problems occur naturally when one is aiming at checking the validity of the output of a computational task, such as "is the produced coloring legal?", or "is the constructed subgraph an MST?". Construction tasks such as exact or approximated solutions to problems like coloring, MST, spanner, MIS, maximum matching, etc., received enormous attention in the literature (see, e.g., [5,25,26,28,30,31,32,38]), yet the corresponding decision problems have hardly been considered. The purpose of this paper is to investigate the nature of local decision problems. Decision problems seem to provide a promising approach to building up a distributed computational theory for the locality-discipline. Indeed, as we will show, one can define local reductions in the framework of decision problems, thus enabling the introduction of complexity classes and notions of completeness. We consider the LOCAL model [36], which is a standard distributed computing model capturing the essence of locality. 
In this model, processors are woken up simultaneously, and computation proceeds in fault-free synchronous rounds during which every processor exchanges messages of unlimited size with its neighbors, and performs arbitrary computations on its data. Informally, let us define LD(t) (for local decision) as the class of decision problems that can be solved in at most t communication rounds in the LOCAL model. (We find special interest in the case where t represents a constant, but in general we view t as a function of the input graph. We note that in the LOCAL model, every decidable decision problem can be solved in n communication rounds, where n denotes the number of nodes in the input graph.) Some decision problems are trivially in LD(O(1)) (e.g., "is the given coloring a (∆ + 1)-coloring?", "do the selected nodes form an MIS?", etc.), while some others can easily be shown to be outside LD(t), for any t = o(n) (e.g., "is the network planar?", "is there a unique leader?", etc.). In contrast to the above examples, there are some languages for which it is not clear whether they belong to LD(t), even for t = O(1). To elaborate on this, consider the particular case where it is required to decide whether the network belongs to some specified family F of graphs. If this question can be decided in a constant number of communication rounds, then this means, informally, that the family F can somehow be characterized by relatively simple conditions. For example, a family F of graphs that can be characterized as consisting of all graphs having no subgraph from C, for some specified finite set C of finite subgraphs, is obviously in LD(O(1)). However, the question of whether a family of graphs can be characterized as above is often non-trivial. For example, characterizing cographs as precisely the graphs with no induced P_4, attributed to Seinsche [40], is not easy, and requires nontrivial usage of modular decomposition.
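The forbidden-subgraph case can be made concrete with the simplest example: triangle-freeness, where C consists of the single graph K₃. Each node inspects its 1-ball and says "no" iff two of its neighbors are adjacent; the sketch below (names ours) simulates this one-round rule centrally over an adjacency dictionary.

```python
# One-round local decider for triangle-freeness (a family defined by a
# single forbidden subgraph): node v outputs "no" iff two of its
# neighbors are adjacent, which v can see inside its 1-ball.

def local_outputs(adj):
    out = {}
    for v, neighbors in adj.items():
        nbrs = list(neighbors)
        has_triangle = any(nbrs[j] in adj[nbrs[i]]
                           for i in range(len(nbrs))
                           for j in range(i + 1, len(nbrs)))
        out[v] = "no" if has_triangle else "yes"
    return out

def decide(adj):  # global answer = conjunction of local outputs
    return all(o == "yes" for o in local_outputs(adj).values())

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
path = {0: {1}, 1: {0, 2}, 2: {1}}
assert not decide(triangle) and decide(path)
```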
The first question we address is whether and to what extent randomization helps. For p, q ∈ (0, 1], define BPLD(t, p, q) as the class of all distributed languages that can be decided by a randomized distributed algorithm that runs in t communication rounds and produces correct answers on legal (respectively, illegal) instances with probability at least p (resp., q). An interesting observation is that for p and q such that p² + q ≤ 1, we have LD(t) ⊊ BPLD(t, p, q). In fact, for such p and q, there exists a language L ∈ BPLD(0, p, q) such that L ∉ LD(t), for any t = o(n). To see why, consider the following Unique-Leader language. The input is a graph where each node has a bit indicating whether it is a leader or not. An input is in the language Unique-Leader if and only if there is at most one leader in the graph. Obviously, this language is not in LD(t), for any t < n. We claim it is in BPLD(0, p, q), for p and q such that p² + q ≤ 1. Indeed, for such p and q, we can design the following simple randomized algorithm that runs in 0 time: every node which is not a leader says "yes" with probability 1, and every node which is a leader says "yes" with probability p. Clearly, if the graph has at most one leader then all nodes say "yes" with probability at least p. On the other hand, if there are k ≥ 2 leaders, at least one node says "no" with probability at least 1 − p^k ≥ 1 − p² ≥ q. It turns out that the aforementioned choice of p and q is not coincidental, and that p² + q = 1 is really the correct threshold. Indeed, we show that Unique-Leader ∉ BPLD(t, p, q), for any t < n, and any p and q such that p² + q > 1. In fact, we show a much more general result: we prove that if p² + q > 1, then restricted to hereditary languages, BPLD(t, p, q) actually collapses into LD(O(t)), for any t.
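The zero-round decider for Unique-Leader can be checked in closed form rather than by sampling: with k leaders, all nodes say "yes" with probability exactly p^k. A small sketch (names ours) verifying the probability bounds from the argument above:

```python
# Zero-round (p, q)-decider for Unique-Leader: non-leaders always say
# "yes"; each leader independently says "yes" with probability p.
# With k leaders, Pr[all nodes say "yes"] = p**k.

def yes_probability(leader_bits, p):
    k = sum(leader_bits)        # number of nodes with leader bit 1
    return p ** k

p, q = 0.7, 0.5                 # p**2 + q = 0.99 <= 1
# Legal instance (at most one leader): all-"yes" probability >= p.
assert yes_probability([0, 1, 0], p) >= p
# Illegal instance (3 leaders): some node says "no" with probability
# 1 - p**k >= 1 - p**2 >= q, as required of a (p, q)-decider.
no_prob = 1 - yes_probability([1, 1, 1], p)
assert no_prob >= 1 - p ** 2 >= q
```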
In the second part of the paper, we investigate the impact of non-determinism on local decision, and establish some structural results inspired by classical computational complexity theory. Specifically, we show that non-determinism does help, but that this help is limited, as there exist languages that cannot be decided non-deterministically. Perhaps surprisingly, it turns out that it is the combination of randomization with non-determinism that enables to decide all languages in constant time. Finally, we introduce the notion of local reduction, and establish some completeness results. Our contributions. 1.2.1 Impact of randomization. We study the impact of randomization on local decision. We prove that if p² + q > 1, then restricted to hereditary languages, BPLD(t, p, q) = LD(O(t)), for any function t. This, together with the observation that LD(t) ⊊ BPLD(t, p, q), for any t = o(n), may indicate that p² + q = 1 serves as a sharp threshold for distinguishing the deterministic case from the randomized one. Impact of non-determinism. We first show that non-determinism helps local decision, i.e., we show that the class NLD(t) (cf. Section 2.3) strictly contains LD(t). More precisely, we show that there exists a language in NLD(O(1)) which is not in LD(t) for every t = o(n), where n is the size of the input graph. Nevertheless, NLD(t) does not capture all (decidable) languages, for t = o(n). Indeed, we show that there exists a language not in NLD(t) for every t = o(n). Specifically, this language is #n = {(G, n) | |V (G)| = n}. Perhaps surprisingly, it turns out that it is the combination of randomization with non-determinism that enables to decide all languages in constant time. Let BPNLD(O(1)) = BPNLD(O(1), p, q), for some constants p and q such that p² + q ≤ 1. We prove that BPNLD(O(1)) contains all languages. To sum up, LD(o(n)) ⊊ NLD(O(1)) ⊆ NLD(o(n)) ⊊ BPNLD(O(1)) = All.
Finally, we introduce the notion of many-one local reduction, and establish some completeness results. We show that there exists a problem, called cover, which is, in a sense, the most difficult decision problem. That is, we show that cover is BPNLD(O(1))-complete. (Interestingly, a small relaxation of cover, called containment, turns out to be NLD(O(1))-complete.) Decision problems and complexity classes. 2.1 Model of computation. Let us first recall some basic notions in distributed computing. We consider the LOCAL model [36], which is a standard model capturing the essence of locality. In this model, processors are assumed to be nodes of a network G, provided with arbitrary distinct identities, and computation proceeds in fault-free synchronous rounds. At each round, every processor v ∈ V (G) exchanges messages of unrestricted size with its neighbors in G, and performs computations on its data. We assume that the number of steps (sequential time) used for the local computation made by the node v in some round r is bounded by some function f_A(H(r, v)), where H(r, v) denotes the size of the "history" seen by node v up to the beginning of round r, that is, the total number of bits encoded in the input and the identity of the node, as well as in the incoming messages from previous rounds. Here, we do not impose any restriction on the growth rate of f_A. We would like to point out, however, that imposing such restrictions, or alternatively, imposing restrictions on the memory used by a node for local computation, may lead to interesting connections between the theory of locality and classical computational complexity theory. To sum up, during the execution of a distributed algorithm A, all processors are woken up simultaneously, and, initially, a processor is solely aware of its own identity, and possibly of some local input too.
Then, in each round r, every processor v (1) sends messages to its neighbors, (2) receives messages from its neighbors, and (3) performs at most f_A(H(r, v)) computations. After a number of rounds (that may depend on the network G and may vary among the processors, simply because nodes have different identities, potentially different inputs, and are typically located at non-isomorphic positions in the network), every processor v terminates and outputs some value out(v). Consider an algorithm running in a network G with input x and identity assignment Id. The running time of a node v, denoted T_v(G, x, Id), is the number of rounds until v outputs. The running time of the algorithm, denoted T(G, x, Id), is the number of rounds until all processors terminate, i.e., T(G, x, Id) = max{T_v(G, x, Id) | v ∈ V (G)}. Let t be a non-decreasing function of input configurations (G, x, Id). (By non-decreasing, we mean that if G′ is an induced subgraph of G and x′ and Id′ are the restrictions of x and Id, respectively, to the nodes in G′, then t(G′, x′, Id′) ≤ t(G, x, Id).) We say that an algorithm A has running time at most t if T(G, x, Id) ≤ t(G, x, Id), for every (G, x, Id). We shall give special attention to the case where t represents a constant function. Note that in general, given (G, x, Id), the nodes may not be aware of t(G, x, Id). On the other hand, note that, if t = t(G, x, Id) is known, then w.l.o.g. one can always assume that a local algorithm running in time at most t operates at each node v in two stages: (A) collect all information available in B_G(v, t), the t-neighborhood, or ball of radius t of v in G, including inputs, identities and adjacencies, and (B) compute the output based on this information. Local decision (LD). We now refine some of the above concepts, in order to formally define our objects of interest.
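The two-stage normal form (collect B_G(v, t), then compute) can be sketched centrally with a breadth-first search truncated at radius t; function names are ours and the "compute" stage is left abstract, since it may be an arbitrary function of the ball.

```python
from collections import deque

# Stage (A) of the normal form: node v gathers everything within its
# ball B_G(v, t) by BFS.  Stage (B) would then apply an arbitrary
# function to the collected ball.  Simulated over an adjacency dict.

def ball(adj, v, t):
    seen, frontier = {v}, deque([(v, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == t:                 # radius reached: do not expand further
            continue
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                frontier.append((w, d + 1))
    return seen

adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}   # a path on 4 nodes
assert ball(adj, 0, 1) == {0, 1}
assert ball(adj, 0, 2) == {0, 1, 2}
assert ball(adj, 0, 3) == {0, 1, 2, 3}
```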
Obviously, a distributed algorithm that runs on a graph G operates separately on each connected component of G, and nodes of a component G′ of G cannot distinguish the underlying graph G from G′. For this reason, we consider connected graphs only. Definition 2.1 A configuration is a pair (G, x) where G is a connected graph, and every node v ∈ V (G) is assigned as its local input a binary string x(v) ∈ {0, 1}*. In some problems, the local input of every node is empty, i.e., x(v) = ǫ for every v ∈ V (G), where ǫ denotes the empty binary string. Since an undecidable collection of configurations remains undecidable in the distributed setting too, we consider only decidable collections of configurations. Formally, we define the following. Definition 2.2 A distributed language is a decidable collection L of configurations. In general, there are several possible ways of representing a configuration of a distributed language corresponding to standard distributed computing problems. Some examples considered in this paper are the following. Unique-Leader = {(G, x) | ‖x‖₁ ≤ 1} consists of all configurations such that there exists at most one node with local input 1, with all the others having local input 0. Consensus = {(G, (x₁, x₂)) | ∃u ∈ V (G), ∀v ∈ V (G), x₂(v) = x₁(u)} consists of all configurations such that all nodes agree on the value proposed by some node. Coloring = {(G, x) | ∀v ∈ V (G), ∀w ∈ N(v), x(v) ≠ x(w)}, where N(v) denotes the (open) neighborhood of v, that is, all nodes at distance 1 from v. MIS = {(G, x) | S = {v ∈ V (G) | x(v) = 1} forms a MIS}. SpanningTree = {(G, (name, head)) | T = {e_v = (v, v⁺), v ∈ V (G), head(v) = name(v⁺)} is a spanning tree of G} consists of all configurations such that the set T of edges e_v between every node v and its neighbor v⁺ satisfying name(v⁺) = head(v) forms a spanning tree of G. (The language MST, for minimum spanning tree, can be defined similarly.)
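Two of these example languages, Coloring and MIS, are defined by purely local conditions, which makes their membership tests short to write out. The following centralized sketch (names ours) checks the two predicates on a toy path graph:

```python
# Membership tests for two of the example languages.
# Coloring: every node's color differs from all its neighbors' colors.
# MIS: the selected set is independent and maximal.

def is_coloring(adj, x):
    return all(x[v] != x[w] for v in adj for w in adj[v])

def is_mis(adj, x):
    S = {v for v in adj if x[v] == 1}
    independent = all(w not in S for v in S for w in adj[v])
    maximal = all(v in S or any(w in S for w in adj[v]) for v in adj)
    return independent and maximal

path = {0: {1}, 1: {0, 2}, 2: {1}}              # path 0 - 1 - 2
assert is_coloring(path, {0: "r", 1: "g", 2: "r"})
assert not is_coloring(path, {0: "r", 1: "r", 2: "g"})
assert is_mis(path, {0: 1, 1: 0, 2: 1})
assert not is_mis(path, {0: 0, 1: 0, 2: 1})     # node 0 is uncovered
```

Since each node can evaluate its own clause of these conjunctions from its 1-ball, both languages sit in LD(1), as noted right after Definition 2.3.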
An identity assignment Id for a graph G is an assignment of distinct integers to the nodes of G. A node v ∈ V (G) executing a distributed algorithm in a configuration (G, x) initially knows only its own identity Id(v) and its own input x(v), and is unaware of the graph G. After t rounds, v acquires knowledge only of its t-neighborhood B_G(v, t). In each round r of the algorithm A, a node may communicate with its neighbors by sending and receiving messages, and may perform at most f_A(H(r, v)) computations. Eventually, each node v ∈ V (G) must output a local output out(v) ∈ {0, 1}*. Let L be a distributed language. We say that a distributed algorithm A decides L if and only if for every configuration (G, x), and for every identity assignment Id for the nodes of G, every node of G eventually terminates and outputs "yes" or "no", satisfying the following decision rules: • If (G, x) ∈ L, then out(v) = "yes" for every node v ∈ V (G); • If (G, x) ∉ L, then there exists at least one node v ∈ V (G) such that out(v) = "no". We are now ready to define one of our main subjects of interest, the class LD(t), for local decision. Definition 2.3 Let t be a non-decreasing function of triplets (G, x, Id). Define LD(t) as the class of all distributed languages that can be decided by a local distributed algorithm that runs in at most t rounds. For instance, Coloring ∈ LD(1) and MIS ∈ LD(1). On the other hand, it is not hard to see that languages such as Unique-Leader, Consensus, and SpanningTree are not in LD(t), for any t = o(n). In what follows, we define LD(O(t)) = ∪_{c>1} LD(c · t). 2.3 Non-deterministic local decision (NLD). A distributed verification algorithm is a distributed algorithm A that gets as input, in addition to a configuration (G, x), a global certificate vector y, i.e., every node v of a graph G gets as input a binary string x(v) ∈ {0, 1}*, and a certificate y(v) ∈ {0, 1}*.
A verification algorithm A verifies L if and only if for every configuration (G, x), the following hold: • If (G, x) ∈ L, then there exists a certificate y such that for every id-assignment Id, algorithm A applied on (G, x) with certificate y and id-assignment Id outputs out(v) = "yes" for all v ∈ V (G); • If (G, x) ∉ L, then for every certificate y and for every id-assignment Id, algorithm A applied on (G, x) with certificate y and id-assignment Id outputs out(v) = "no" for at least one node v ∈ V (G). One motivation for studying the nondeterministic verification framework comes from settings in which one must perform local verifications repeatedly. In such cases, one can afford to have a relatively "wasteful" preliminary step in which a certificate is computed for each node. Using these certificates, local verifications can then be performed very fast. See [21,22] for more details regarding such applications. Indeed, the definition of a verification algorithm finds similarities with the notion of proof-labeling schemes discussed in [21,22]. Informally, in a proof-labeling scheme, the construction of a "good" certificate y for a configuration (G, x) ∈ L may depend also on the given id-assignment. Since the question of whether a configuration (G, x) belongs to a language L is independent of the particular id-assignment, we prefer to let the "good" certificate y depend only on the configuration. In other words, as defined above, a verification algorithm operating on a configuration (G, x) ∈ L and a "good" certificate y must say "yes" at every node regardless of the id-assignment. We now define the class NLD(t), for nondeterministic local decision. (Our terminology is by direct analogy to the class NP in sequential computational complexity.)
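A classic example in the spirit of the proof-labeling schemes cited above is certifying that parent pointers form a spanning tree: give each node a certificate (root, dist), and let each node perform a one-round consistency check. The sketch below is our own illustration of this idea, not the scheme from [21,22] verbatim; names and the exact check are ours.

```python
# Illustrative one-round verification of parent pointers forming a
# spanning tree.  Certificate y(v) = (root, dist): the claimed root
# and v's claimed distance to it.  Each node checks local consistency:
# the root claims distance 0, every other node's parent is a neighbor
# at distance one less, and all neighbors agree on the root.

def verify_tree(parent, y, adj):
    ok = True
    for v in adj:
        root_v, d_v = y[v]
        if parent[v] == v:                       # claimed root
            ok &= (d_v == 0)
        else:
            ok &= parent[v] in adj[v]            # parent is a neighbor
            root_p, d_p = y[parent[v]]
            ok &= (root_p == root_v and d_v == d_p + 1)
        ok &= all(y[w][0] == root_v for w in adj[v])  # same root around v
    return ok

adj = {0: {1, 2}, 1: {0}, 2: {0}}                # star centered at 0
parent = {0: 0, 1: 0, 2: 0}
y = {0: (0, 0), 1: (0, 1), 2: (0, 1)}
assert verify_tree(parent, y, adj)
y_bad = {0: (0, 0), 1: (0, 1), 2: (2, 0)}        # forged second root
assert not verify_tree(parent, y_bad, adj)
```

Note that this matches the NLD flavor of the definition: a good certificate exists for legal instances, while for illegal ones every certificate is rejected by at least one node.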
Bounded-error probabilistic local decision (BPLD). A randomized distributed algorithm is a distributed algorithm A that enables every node v, at any round r during the execution, to toss a number of random bits, obtaining a string r(v) ∈ {0, 1}*. Clearly, this number cannot exceed f_A(H(r, v)), the bound on the number of computational steps used by node v at round r. Note, however, that H(r, v) may now also depend on the random bits produced by other nodes in previous rounds. For p, q ∈ (0, 1], we say that a randomized distributed algorithm A is a (p, q)-decider for L, or that it decides L with "yes" success probability p and "no" success probability q, if and only if for every configuration (G, x), and for every identity assignment Id for the nodes of G, every node of G eventually terminates and outputs "yes" or "no", and the following properties are satisfied: • If (G, x) ∈ L, then Pr[out(v) = "yes" for every node v ∈ V (G)] ≥ p, • If (G, x) ∉ L, then Pr[out(v) = "no" for at least one node v ∈ V (G)] ≥ q, where the probabilities in the above definition are taken over all possible coin tosses performed by nodes. We define the class BPLD(t, p, q), for "Bounded-error Probabilistic Local Decision", as follows. Definition 2.5 For p, q ∈ (0, 1] and a function t, BPLD(t, p, q) is the class of all distributed languages that have a local randomized distributed (p, q)-decider running in time t (i.e., that can be decided in time t by a local randomized distributed algorithm with "yes" success probability p and "no" success probability q). A sharp threshold for randomization. Consider some graph G, and a subset U of the nodes of G, i.e., U ⊆ V (G). Theorem 3.1 below asserts that, for hereditary languages, randomization does not help if one imposes that p² + q > 1, i.e., the "no" success probability is at least as large as one minus the square of the "yes" success probability.
Somewhat more formally, we prove that for hereditary languages, ∩_{p^2+q>1} BPLD(t, p, q) = LD(O(t)). This complements the fact that for p^2 + q ≤ 1, we have LD(t) ⊊ BPLD(t, p, q), for any t = o(n). Recall that [34] investigates the question of whether randomization helps for constructing in constant time a solution for a problem in LCL ⊆ LD(O(1)). We stress that the technique used in [34] for tackling this question relies heavily on the definition of LCL, specifically, that only graphs of constant degree and of constant input size are considered. Hence it is not clear whether the technique of [34] can be useful for our purposes, as we impose no such assumptions on the degrees or input sizes. Also, although it may seem at first glance that the Lovász local lemma could be helpful here, we could not effectively apply it in our proof. Instead, we use a completely different approach.

Theorem 3.1 Let L be a hereditary language and let t be a function. If L ∈ BPLD(t, p, q) for constants p, q ∈ (0, 1] such that p^2 + q > 1, then L ∈ LD(O(t)).

Proof. Let us start with some definitions. Let L be a language in BPLD(t, p, q), where p, q ∈ (0, 1], p^2 + q > 1, and t is some function. Let A be a randomized algorithm deciding L, with "yes" success probability p and "no" success probability q, whose running time is at most t(G, x, Id) for every configuration (G, x) with identity assignment Id. Fix a configuration (G, x) and an id-assignment Id for the nodes of V(G). The distance dist_G(u, v) between two nodes of G is the minimum number of edges in a path connecting u and v in G. The distance between two subsets U_1, U_2 ⊆ V is defined as dist_G(U_1, U_2) = min{dist_G(u, v) | u ∈ U_1, v ∈ U_2}. For a set U ⊆ V, let E(G, x, Id, U) denote the event that, when running A on (G, x) with id-assignment Id, all nodes in U output "yes". Let v ∈ V(G). The running time of A at v may depend on the coin tosses made by the nodes.
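The set distance dist_G(U_1, U_2) just defined can be computed by a multi-source BFS; a centralized sketch (the adjacency-dict representation is our choice, not the paper's):

```python
from collections import deque

# dist_G(U1, U2) = min over u in U1, v in U2 of dist_G(u, v), computed
# by a BFS started simultaneously from every node of U1.
def set_distance(adj, U1, U2):
    dist = {u: 0 for u in U1}
    dq = deque(U1)
    while dq:
        v = dq.popleft()
        if v in U2:            # first U2 node dequeued is the closest one
            return dist[v]
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                dq.append(w)
    return float("inf")        # U2 unreachable from U1

# Path 0-1-2-3-4-5: dist({0,1}, {4,5}) = 3, realized between 1 and 4.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
assert set_distance(path, {0, 1}, {4, 5}) == 3
```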
Let t_v = t_v(G, x, Id) denote the maximal running time of v over all possible coin tosses. Note that t_v ≤ t(G, x, Id) (we do not assume that either t or t_v is known to v). The radius of a node v, denoted r_v, is the maximum value t_u over all nodes u such that v ∈ B_G(u, t_u). (Observe that the radius of a node is at most t.) The radius of a set of nodes S is r_S := max{r_v | v ∈ S}. In what follows, fix a constant δ such that 0 < δ < p^2 + q − 1, and define λ = 11⌈log p / log(1 − δ)⌉. A splitter of (G, x, Id) is a triplet (S, U_1, U_2) of pairwise disjoint subsets of nodes such that S ∪ U_1 ∪ U_2 = V and dist_G(U_1, U_2) ≥ λ r_S. (Observe that r_S may depend on the identity assignment and the input, and therefore being a splitter is not just a topological property depending only on G.) Given a splitter (S, U_1, U_2) of (G, x, Id), let G_k = G[U_k ∪ S], and let x_k be the input x restricted to the nodes of G_k, for k = 1, 2. The following structural claim does not use the fact that L is hereditary.

Lemma 3.2 For every configuration (G, x) with identity assignment Id, and every splitter (S, U_1, U_2) of (G, x, Id), we have (G_1, x_1) ∈ L and (G_2, x_2) ∈ L ⇒ (G, x) ∈ L.

Let (G, x) be a configuration with identity assignment Id. Assume, towards contradiction, that there exists a splitter (S, U_1, U_2) of the triplet (G, x, Id) such that (G_1, x_1) ∈ L and (G_2, x_2) ∈ L, yet (G, x) ∉ L. (The fact that (G_1, x_1) ∈ L and (G_2, x_2) ∈ L implies that both G_1 and G_2 are connected; we note, however, that for the claim to be true it is not required that G[U_1] or G[U_2] be connected.)

Proof. For proving Claim 3.3, we upper bound the size of I by d − 4r_S − 2. This is done by covering the integers in (2r_S, d − 2r_S) by at most 4r_S + 1 sets, such that each one is (4r_S + 1)-independent, that is, every two integers in the same set are at least 4r_S + 1 apart.
Specifically, for s ∈ [1, 4r_S + 1] and m(S) = ⌈(d − 8r_S)/(4r_S + 1)⌉, we define J_s = {s + 2r_S + j(4r_S + 1) | j ∈ [0, m(S)]}. Observe that, as desired, (2r_S, d − 2r_S) ⊂ ∪_{s ∈ [1, 4r_S+1]} J_s, and for each s ∈ [1, 4r_S + 1], J_s is (4r_S + 1)-independent. In what follows, fix s ∈ [1, 4r_S + 1] and let J = J_s. Since (G_1, x_1) ∈ L, we know that Pr[E(G_1, x_1, Id, S_{J∩I})] ≥ p. Observe that for i ∈ (2r_S, d − 2r_S) and every node v ∈ S_i, we have t_v ≤ r_v ≤ r_S, and hence the t_v-neighborhood in G of every node v ∈ S_i is contained in S ⊆ G_1, i.e., B_G(v, t_v) ⊆ G_1. It therefore follows that:

Pr[E(G, x, Id, S_{J∩I})] = Pr[E(G_1, x_1, Id, S_{J∩I})] ≥ p. (1)

Consider two integers a and b in J. We know that |a − b| ≥ 4r_S + 1. Hence, the distance in G between any two nodes u ∈ S_a and v ∈ S_b is at least 2r_S + 1. Thus, the events E(G, x, Id, S_a) and E(G, x, Id, S_b) are independent. It follows, by the definition of I, that

Pr[E(G, x, Id, S_{J∩I})] < (1 − δ)^{|J∩I|}. (2)

By (1) and (2), we have p < (1 − δ)^{|J∩I|}, and thus |J ∩ I| < log p / log(1 − δ). Since (2r_S, d − 2r_S) can be covered by the sets J_s, s = 1, . . . , 4r_S + 1, each of which is (4r_S + 1)-independent, we get that |I| = Σ_{s=1}^{4r_S+1} |J_s ∩ I| < (4r_S + 1)(log p / log(1 − δ)). Combining this bound with the fact that d = λ r_S, we get that d − 4r_S − 1 > |I|. It follows by the pigeonhole principle that there exists some i ∈ (2r_S, d − 2r_S) such that i ∉ I, as desired. This completes the proof of Claim 3.3. Fix i ∈ (2r_S, d − 2r_S) such that i ∉ I, and let F denote the complement event of E(G, x, Id, S_i), i.e., the event that some node of S_i outputs "no". By the definition of I,

Pr[F] ≤ δ < p^2 + q − 1. (3)

Let H_1 denote the subgraph of G induced by the nodes in (∪_{j=1}^{i−r_S−1} L_j) ∪ U_1. We similarly define H_2 as the subgraph of G induced by the nodes in (∪_{j>i+r_S} L_j) ∪ U_2. Note that S_i ∪ V(H_1) ∪ V(H_2) = V, and for any two nodes u ∈ V(H_1) and v ∈ V(H_2), we have dist_G(u, v) > 2r_S.
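The covering step can be checked mechanically: with hypothetical concrete values for r_S and d (ours, for illustration), the sets J_s defined above do cover the open interval (2r_S, d − 2r_S), and each J_s is (4r_S + 1)-independent.

```python
import math

# The 4r+1 sets J_s from the proof: J_s = {s + 2r + j(4r+1) | 0 <= j <= m},
# with m = ceil((d - 8r) / (4r + 1)).
def cover_sets(r, d):
    m = math.ceil((d - 8 * r) / (4 * r + 1))
    return {s: [s + 2 * r + j * (4 * r + 1) for j in range(m + 1)]
            for s in range(1, 4 * r + 2)}

r, d = 2, 100          # hypothetical values of r_S and d
J = cover_sets(r, d)

# Coverage of the open interval (2r, d-2r).
covered = set().union(*J.values())
assert all(i in covered for i in range(2 * r + 1, d - 2 * r))

# (4r+1)-independence: consecutive members of one J_s are 4r+1 apart.
assert all(b - a >= 4 * r + 1
           for js in J.values() for a, b in zip(js, js[1:]))
```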
It follows that, for k = 1, 2, the t_u-neighborhood in G of each node u ∈ V(H_k) equals the t_u-neighborhood in G_k of u, that is, B_G(u, t_u) ⊆ G_k. (To see why, consider, for example, the case k = 2. Given u ∈ V(H_2), it is sufficient to show that there is no v ∈ V(H_1) such that v ∈ B_G(u, t_u). Indeed, if such a vertex v exists then dist_G(u, v) > 2r_S, and hence t_u > 2r_S. Since there must exist a vertex w ∈ S_i such that w ∈ B_G(u, t_u), we get that r_w > 2r_S, in contradiction to the fact that w ∈ S.) Thus, for k = 1, 2, since (G_k, x_k) ∈ L, we get Pr[E(G, x, Id, V(H_k))] = Pr[E(G_k, x_k, Id, V(H_k))] ≥ p. Let F′ denote the complement event of E(G, x, Id, V(H_1) ∪ V(H_2)). As the events E(G, x, Id, V(H_1)) and E(G, x, Id, V(H_2)) are independent, it follows that Pr[E(G, x, Id, V(H_1) ∪ V(H_2))] ≥ p^2, that is,

Pr[F′] ≤ 1 − p^2. (4)

By Eqs. (3) and (4), and using the union bound, it follows that Pr[F ∨ F′] < q. Thus

Pr[E(G, x, Id, V(G))] = Pr[E(G, x, Id, S_i ∪ V(H_1) ∪ V(H_2))] = Pr[¬F ∧ ¬F′] > 1 − q.

This is in contradiction to the assumption that (G, x) ∉ L. This concludes the proof of Lemma 3.2.

Our goal now is to show that L ∈ LD(O(t)) by proving the existence of a deterministic local algorithm D that runs in time O(t) and recognizes L. (No attempt is made here to minimize the constant factor hidden in the O(t) notation.) Recall that both t = t(G, x, Id) and t_v = t_v(G, x, Id) may be unknown to v. Nevertheless, by inspecting the balls B_G(v, 2^i) for increasing i = 1, 2, · · ·, each node v can compute an upper bound on t_v, as given by the following claim: for any constant c, each node v can compute in O(t) time a value t*_v = t*_v(c) such that (1) c · t_v ≤ t*_v = O(t), and (2) for every u ∈ B_G(v, c · t*_v), we have t_u ≤ t*_v. To establish the claim, observe first that in O(t) time each node v can compute a value t′_v satisfying t_v ≤ t′_v ≤ 2t.
Indeed, given the ball B_G(v, 2^i) for some integer i, and using the upper bound on the number of (sequential) local computations, node v can simulate all its possible executions up to round r = 2^i. The desired value t′_v is the smallest r = 2^i for which all executions of v up to round r conclude with an output at v. Once t′_v is computed, node v aims at computing t*_v. For this purpose, it starts again to inspect the balls B_G(v, 2^i) for increasing i = 1, 2, · · ·, to obtain t′_u from each u ∈ B_G(v, 2^i). (For this purpose, it may need to wait until u computes t′_u, but this delays the whole computation by at most O(t) time.) Node v then outputs t*_v = 2^i for the smallest i satisfying (1) c · t′_v ≤ 2^i and (2) for every u ∈ B_G(v, c · 2^i), we have t′_u ≤ 2^i. It is easy to see that for this i we have 2^i = O(t), hence t*_v = O(t). Given a configuration (G, x) and an id-assignment Id, Algorithm D, applied at a node u, first calculates t*_u = t*_u(6λ), and then outputs "yes" if and only if the 2λt*_u-neighborhood of u in (G, x) belongs to L. That is,

out(u) = "yes" ⇐⇒ (B_G(u, 2λt*_u), x[B_G(u, 2λt*_u)]) ∈ L.

Obviously, Algorithm D is a deterministic algorithm that runs in time O(t). We claim that Algorithm D decides L. Indeed, since L is hereditary, if (G, x) ∈ L, then every connected subconfiguration of (G, x) is also in L, and thus every node u outputs out(u) = "yes". Now consider the case where (G, x) ∉ L, and assume by contradiction that by applying D on (G, x) with id-assignment Id, every node u outputs out(u) = "yes". Let U ⊆ V(G) be maximal by inclusion such that G[U] is connected and (G[U], x[U]) ∈ L. Obviously, U is not empty, as (B_G(u, 2λt*_u), x[B_G(u, 2λt*_u)]) ∈ L for every node u. On the other hand, we have |U| < |V(G)| because (G, x) ∉ L. Let u ∈ U be a node with maximal t_u such that B_G(u, 2t_u) contains a node outside U. Define G′ as the subgraph of G induced by U ∪ V(B_G(u, 2t_u)).
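The doubling search used above to compute t′_v (and then t*_v) can be sketched abstractly: probe balls of exponentially growing radius until a halting predicate holds, which overshoots the unknown target by at most a factor of 2. A toy model, where the predicate stands in for "all simulated executions of v conclude within r rounds":

```python
# Doubling search: return the smallest power of two r with halts_within(r).
# If the predicate first holds at an (unknown) threshold t, the answer r
# satisfies t <= r < 2t, using only O(log t) probes.
def doubling_upper_bound(halts_within):
    r = 1
    while not halts_within(r):
        r *= 2
    return r

t = 13  # unknown to the node; revealed only through the predicate
tp = doubling_upper_bound(lambda r: r >= t)
assert t <= tp <= 2 * t   # the guarantee t_v <= t'_v <= 2t from the text
assert tp == 16
```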
Observe that G′ is connected and that G′ strictly contains U. Towards contradiction, our goal is to show that (G′, x[G′]) ∈ L. Let H denote the graph which is maximal by inclusion such that H is connected and B_G(u, 2t_u) ⊂ H ⊆ B_G(u, 2t_u) ∪ (U ∩ B_G(u, 2λt*_u)). Let W_1, W_2, · · ·, W_ℓ be the ℓ connected components of G[U] \ B_G(u, 2t_u), ordered arbitrarily. Let W_0 be the empty graph, and for k = 0, 1, 2, · · ·, ℓ, define the graph Z_k = H ∪ W_0 ∪ W_1 ∪ W_2 ∪ · · · ∪ W_k. Observe that Z_k is connected for each k = 0, 1, 2, · · ·, ℓ. We prove by induction on k that (Z_k, x[Z_k]) ∈ L for every k = 0, 1, 2, · · ·, ℓ. This will establish the contradiction, since Z_ℓ = G′. For the basis of the induction, the case k = 0, we need to show that (H, x[H]) ∈ L. This is immediate from the facts that H is a connected subgraph of B_G(u, 2λt*_u), the configuration (B_G(u, 2λt*_u), x[B_G(u, 2λt*_u)]) ∈ L, and L is hereditary. Assume now that we have (Z_k, x[Z_k]) ∈ L for 0 ≤ k < ℓ, and consider the graph Z_{k+1} = Z_k ∪ W_{k+1}. Define the sets of nodes S = V(Z_k) ∩ V(W_{k+1}), U_1 = V(Z_k) \ S, and U_2 = V(W_{k+1}) \ S. A crucial observation is that (S, U_1, U_2) is a splitter of Z_{k+1}. This follows from the following arguments. Let us first show that r_S ≤ t*_u. By definition, we have t_v ≤ t*_u for every v ∈ B_G(u, 6λt*_u). Hence, in order to bound the radius of S (in Z_{k+1}) by t*_u, it is sufficient to prove that there is no node w ∈ U \ B_G(u, 6λt*_u) such that B_G(w, t_w) ∩ S ≠ ∅. Indeed, if such a node w exists then t_w > 4λt*_u, and hence B_G(w, 2t_w) contains a node outside U, in contradiction to the choice of u. It follows that r_S ≤ t*_u. We now claim that dist_{Z_{k+1}}(U_1, U_2) ≥ λt*_u. Consider a simple directed path P in Z_{k+1} going from a node x ∈ U_1 to a node y ∈ U_2. Since x ∉ V(W_{k+1}) and y ∈ V(W_{k+1}), P must pass through a vertex in B_G(u, 2t_u).
Let z be the last vertex in P such that z ∈ B_G(u, 2t_u), and consider the directed subpath P[z,y] of P going from z to y. Now, let P′ = P[z,y] \ {z}. The first d′ = min{(2λ − 2)t*_u, |P′|} vertices of the directed subpath P′ must belong to V(H) ⊆ V(Z_k). In addition, observe that all nodes of P′ must be in V(W_{k+1}). It follows that the first d′ nodes of P′ are in S. Since y ∉ S, we get that |P′| ≥ d′ = (2λ − 2)t*_u, and thus |P| > λt*_u. Consequently, dist_{Z_{k+1}}(U_1, U_2) ≥ λt*_u, as desired. This completes the proof that (S, U_1, U_2) is a splitter of Z_{k+1}. Now, by the induction hypothesis, we have (G_1, x[G_1]) ∈ L, because G_1 = G[U_1 ∪ S] = Z_k. In addition, we have (G_2, x[G_2]) ∈ L, because G_2 = G[U_2 ∪ S] = W_{k+1} is a connected subgraph of G[U], (G[U], x[U]) ∈ L, and L is hereditary. Lemma 3.2 applied to the splitter (S, U_1, U_2) of Z_{k+1} now yields (Z_{k+1}, x[Z_{k+1}]) ∈ L, completing the induction. This establishes the desired contradiction (G′ strictly contains U, yet (G′, x[G′]) ∈ L), and with it the proof of Theorem 3.1.

4 Nondeterminism and complete problems

4.1 Separation results

Our first separation result indicates that nondeterminism helps for local decision. Indeed, we show that there exists a language, specifically, tree = {(G, ǫ) | G is a tree}, which belongs to NLD(1) but not to LD(t), for any t = o(n). The proof follows by rather standard arguments.

Theorem 4.1 There exists a language L such that L ∈ NLD(1) and L ∉ LD(t) for any t = o(n).

Proof. To establish the theorem, it is sufficient to show that there exists a language L such that L ∉ LD(o(n)) and L ∈ NLD(1). Let tree = {(G, ǫ) | G is a tree}. We have tree ∉ LD(o(n)). To see why, consider a cycle C with nodes labeled consecutively from 1 to 4n, and the path P_1 (resp., P_2) with nodes labeled consecutively 1, . . . , 4n (resp., 2n + 1, . . . , 4n, 1, . . . , 2n), from one extremity to the other. For any algorithm A deciding tree, all nodes n + 1, . . . , 3n output "yes" in configuration (P_1, ǫ) for any identity assignment for the nodes in P_1, while all nodes 3n + 1, . . . , 4n, 1, . . . , n output "yes" in configuration (P_2, ǫ) for any identity assignment for the nodes in P_2. Thus, if A is local, then all nodes output "yes" in configuration (C, ǫ), a contradiction. In contrast, we next show that tree ∈ NLD.
The (nondeterministic) local algorithm A verifying tree operates as follows. Given a configuration (G, ǫ), the certificate given at node v is y(v) = dist_G(v, r), where r ∈ V(G) is an arbitrary fixed node. The verification procedure is then as follows. At each node v, A inspects every neighbor (with its certificate) and verifies the following:

• y(v) is a non-negative integer,

• if y(v) = 0, then y(w) = 1 for every neighbor w of v, and

• if y(v) > 0, then there exists a neighbor w of v such that y(w) = y(v) − 1, and, for all other neighbors w′ of v, we have y(w′) = y(v) + 1.

If G is a tree, then applying Algorithm A on G with the above certificate yields the answer "yes" at all nodes, regardless of the given id-assignment. On the other hand, if G is not a tree, then we claim that for every certificate and every id-assignment Id, Algorithm A outputs "no" at some node. Indeed, consider some certificate y given to the nodes of G, and let C be a simple cycle in G. Assume, for the sake of contradiction, that all nodes in C output "yes". In this case, each node in C has at least one neighbor in C with a larger certificate. This creates an infinite sequence of strictly increasing certificates, in contradiction with the finiteness of C.

Theorem 4.2 There exists a language L such that L ∉ NLD(t), for any t = o(n).

Proof. Let InpEqSize = {(G, x) | ∀v ∈ V(G), x(v) = |V(G)|}. We show that InpEqSize ∉ NLD(t), for any t = o(n). Assume, for the sake of contradiction, that there exists a local nondeterministic algorithm A deciding InpEqSize. Let t < n/4 be the running time of A. Consider the cycle C with 2t + 1 nodes u_1, u_2, · · ·, u_{2t+1}, enumerated clockwise. Assume that the input at each node u_i of C satisfies x(u_i) = 2t + 1. Then there exists a certificate y such that, for any identity assignment Id, algorithm A outputs "yes" at each node of C.
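The three-part verifier for tree above is easy to simulate centrally (our sketch; graphs and certificates are toy values):

```python
# Local check at v for the tree verifier: y(v) is a non-negative integer;
# a node with y(v) = 0 has all neighbors carrying 1; a node with y(v) > 0
# has exactly one neighbor carrying y(v)-1 and all others carrying y(v)+1.
def tree_check(adj, y):
    def ok(v):
        ns = [y[u] for u in adj[v]]
        if y[v] == 0:
            return all(c == 1 for c in ns)
        return ns.count(y[v] - 1) == 1 and \
            all(c == y[v] + 1 for c in ns if c != y[v] - 1)
    return {v: y[v] >= 0 and ok(v) for v in adj}

# A path (a tree) with distances from node 0 as certificates: all accept.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
assert all(tree_check(path, {0: 0, 1: 1, 2: 2, 3: 3}).values())

# A 4-cycle with BFS-distance-like certificates: some node rejects
# (node 2 sees two neighbors carrying y(2)-1 = 1).
cyc = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
assert not all(tree_check(cyc, {0: 0, 1: 1, 2: 2, 3: 1}).values())
```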
Now, consider the configuration (C′, x′) where the cycle C′ has 4t + 2 nodes, and for each node v_i of C′, x′(v_i) = 2t + 1. We have (C′, x′) ∉ InpEqSize. To fool Algorithm A, we enumerate the nodes of C′ clockwise, i.e., C′ = (v_1, v_2, · · ·, v_{4t+2}). We then define the certificate y′ as follows:

y′(v_i) = y′(v_{i+2t+1}) = y(u_i) for i = 1, 2, · · ·, 2t + 1.

Fix an id-assignment Id′ for the nodes in V(C′), and fix i ∈ {1, 2, · · ·, 2t + 1}. There exists an id-assignment Id_1 for the nodes in V(C) such that the output of A at node v_i in (C′, x′) with certificate y′ and id-assignment Id′ is identical to the output of A at node u_i in (C, x) with certificate y and id-assignment Id_1. Similarly, there exists an id-assignment Id_2 for the nodes in V(C) such that the output of A at node v_{i+2t+1} in (C′, x′) with certificate y′ and id-assignment Id′ is identical to the output of A at node u_i in (C, x) with certificate y and id-assignment Id_2. Thus, Algorithm A at both v_i and v_{i+2t+1} outputs "yes" in (C′, x′) with certificate y′ and id-assignment Id′. Hence, since i was arbitrary, all nodes output "yes" for this configuration, certificate and id-assignment, contradicting the fact that (C′, x′) ∉ InpEqSize.

For p, q ∈ (0, 1] and a function t, let us define BPNLD(t, p, q) as the class of all distributed languages that have a local randomized nondeterministic distributed (p, q)-decider running in time t.

Theorem 4.3 Let p, q ∈ (0, 1] be such that p^2 + q ≤ 1. For every language L, we have L ∈ BPNLD(1, p, q).

Proof. Let L be a language. The certificate of a configuration (G, x) ∈ L is a map of G, with nodes labeled with distinct integers in {1, ..., n}, where n = |V(G)|, together with the inputs of all nodes in G. In addition, every node v receives the label λ(v) of the corresponding vertex in the map.
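The doubling construction used to fool A can be checked concretely: copying the certificate of C twice around C′ makes the radius-t view of every node of C′ coincide with that of the matching node of C, once identities are ignored (inputs are all equal to 2t + 1, so only certificates matter here). A toy simulation with symbolic certificates:

```python
# Certificates on the (2t+1)-cycle C are copied twice around the
# (4t+2)-cycle C'. For every node of C', the certificates at distance
# <= t match those around the corresponding node of C, so a t-round
# verifier cannot tell the two configurations apart.
t = 3
n, N = 2 * t + 1, 4 * t + 2
y = [f"y{i}" for i in range(n)]       # some accepting certificate on C
yp = [y[i % n] for i in range(N)]     # y'(v_i) = y'(v_{i+2t+1}) = y(u_i)

def view(cert, size, i, t):
    """Certificates at distance <= t from node i on a size-node cycle."""
    return tuple(cert[(i + k) % size] for k in range(-t, t + 1))

for i in range(N):
    assert view(yp, N, i, t) == view(y, n, i % n, t)
```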
Precisely, the certificate at node v is y(v) = (G′, x′, i), where G′ is an isomorphic copy of G with nodes labeled from 1 to n, x′ is an n-dimensional vector such that x′[λ(u)] = x(u) for every node u, and i = λ(v). The verification algorithm involves checking that the configuration (G′, x′) is identical to (G, x). This is sufficient because distributed languages are sequentially decidable, hence every node can individually decide whether (G′, x′) belongs to L or not, once it has secured the fact that (G′, x′) is the actual configuration. It remains to show that there exists a local randomized nondeterministic distributed (p, q)-decider, running in time 1, for verifying that the configuration (G′, x′) is identical to (G, x). The nondeterministic (p, q)-decider operates as follows. First, every node v checks that it has received the input as specified by x′, i.e., v checks whether x′[λ(v)] = x(v), and outputs "no" if this does not hold. Second, each node v communicates with its neighbors to check that (1) they all got the same map G′ and the same input vector x′, and (2) they are labeled the way they should be according to the map G′. If some inconsistency is detected by a node, then this node outputs "no". Finally, consider a node v that passed the aforementioned two phases without outputting "no". If λ(v) ≠ 1, then v outputs "yes" (with probability 1), and if λ(v) = 1, then v outputs "yes" with probability p. We claim that the above implements a nondeterministic distributed (p, q)-decider for verifying that the configuration (G′, x′) is identical to (G, x). Indeed, if all nodes pass the two phases without outputting "no", then they all agree on the map G′ and on the input vector x′, and they know that their respective neighborhoods fit with what is indicated on the map. Hence, (G′, x′) is a lift of (G, x).
It follows that (G′, x′) = (G, x) if and only if there exists at most one node v ∈ G whose label satisfies λ(v) = 1. Consequently, if (G′, x′) = (G, x), then all nodes say "yes" with probability at least p. On the other hand, if (G′, x′) ≠ (G, x), then there are at least two nodes in G whose label is "1". These two nodes simultaneously say "yes" with probability p^2; hence, the probability that at least one of them says "no" is at least 1 − p^2 ≥ q. This completes the proof of Theorem 4.3.

The above theorem guarantees that the following definition is well defined. Let BPNLD = BPNLD(1, p, q), for some p, q ∈ (0, 1] such that p^2 + q ≤ 1.

4.2 Completeness results

Let us first define a notion of reduction that fits the class LD. For two languages L_1, L_2, we say that L_1 is locally reducible to L_2, denoted by L_1 ⪯ L_2, if there exists a constant-time local algorithm A such that, for every configuration (G, x) and every id-assignment Id, A produces out(v) ∈ {0, 1}* as output at every node v ∈ V(G) so that (G, x) ∈ L_1 ⇐⇒ (G, out) ∈ L_2. By definition, LD(O(t)) is closed under local reductions, that is, for every two languages L_1, L_2 satisfying L_1 ⪯ L_2, if L_2 ∈ LD(O(t)) then L_1 ∈ LD(O(t)). We now show that there exists a natural problem, called cover, which is in some sense the "most difficult" decision problem; that is, we show that cover is BPNLD-complete. Language cover is defined as follows. Every node v is given as input an element E(v) and a finite collection of sets S(v). The union of these inputs is in the language if there exists a node v such that one set in S(v) equals the union of all the elements given to the nodes.

Proof. The fact that cover ∈ BPNLD follows from Theorem 4.3. To prove that cover is BPNLD-hard, we consider some L ∈ BPNLD and show that L ⪯ cover.
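The probability analysis for the two label-1 nodes can be verified by enumerating the four coin outcomes exactly, with an illustrative choice of p and q satisfying p^2 + q ≤ 1:

```python
from fractions import Fraction

# Two nodes labeled 1, each independently saying "yes" with probability
# p: enumerate the four coin outcomes and confirm that some node says
# "no" with probability exactly 1 - p^2, which is >= q when p^2 + q <= 1.
p = Fraction(4, 5)
pr = lambda accepted: p if accepted else 1 - p
some_no = sum(pr(a) * pr(b)
              for a in (True, False) for b in (True, False)
              if not (a and b))
assert some_no == 1 - p ** 2          # 9/25 for p = 4/5
q = Fraction(1, 4)                    # p^2 + q = 16/25 + 1/4 <= 1
assert some_no >= q
```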
Formally, cover = {(G, (E, S)) | ∃v ∈ V(G), ∃S ∈ S(v) s.t. S = {E(u) | u ∈ V(G)}}. For this purpose, we describe a local distributed algorithm A transforming any configuration for L to a configuration for cover, preserving membership in these languages. Let (G, x) be a configuration for L and let Id be an identity assignment. Algorithm A operating at a node v outputs a pair (E(v), S(v)), where E(v) is the "local view" at v in (G, x), i.e., the star subgraph of G consisting of v and its neighbors, together with the inputs of these nodes and their identities, and S(v) is the collection of sets S defined as follows. For a binary string x, let |x| denote the length of x, i.e., the number of bits in x. For every vertex v, let ψ(v) = 2^{|Id(v)|+|x(v)|}. Node v first generates all configurations (G′, x′) where G′ is a graph with k ≤ ψ(v) vertices, and x′ is a collection of k input strings of length at most ψ(v), such that (G′, x′) ∈ L. For each such configuration (G′, x′), node v generates all possible Id′ assignments to V(G′) such that for every node u ∈ V(G′), |Id′(u)| ≤ ψ(v). Now, for each such pair of a configuration (G′, x′) and an Id′ assignment, algorithm A associates a set S ∈ S(v) consisting of the k = |V(G′)| local views of the nodes of G′ in (G′, x′). We show that (G, x) ∈ L ⇐⇒ A(G, x) ∈ cover. If (G, x) ∈ L, then by the construction of Algorithm A, there exists a set S ∈ S(v) such that S covers the collection of local views for (G, x), i.e., S = {E(u) | u ∈ V(G)}. Indeed, the node v maximizing ψ(v) satisfies ψ(v) ≥ max{Id(u) | u ∈ V(G)} ≥ n and ψ(v) ≥ max{x(u) | u ∈ V(G)}. Therefore, that specific node has constructed a set S which contains all local views of the given configuration (G, x) and Id assignment. Thus A(G, x) ∈ cover. Now consider the case that A(G, x) ∈ cover. In this case, there exists a node v and a set S ∈ S(v) such that S = {E(u) | u ∈ V(G)}.
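Once every node has output its pair (E(v), S(v)), membership in cover is a simple global predicate; a centralized sketch on a toy instance (elements are strings here purely for illustration, not the star subgraphs of the actual reduction):

```python
# (G, (E, S)) is in `cover` iff some node holds a set equal to the union
# of all elements given to the nodes.
def in_cover(E, S):
    universe = set(E.values())
    return any(set(cand) == universe
               for sets in S.values() for cand in sets)

E = {1: "a", 2: "b", 3: "c"}
S = {1: [{"a", "b"}], 2: [], 3: [{"a", "b", "c"}, {"a"}]}
assert in_cover(E, S)                  # node 3 holds exactly {a, b, c}
assert not in_cover(E, {1: [{"a", "b"}], 2: [], 3: [{"a"}]})
```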
Such a set S is the collection of local views of the nodes of some configuration (G′, x′) ∈ L under some Id′ assignment. On the other hand, S is also the collection of local views of the nodes of the given configuration (G, x) under the given Id assignment. It follows that (G, x) = (G′, x′) ∈ L. We now define a natural problem, called containment, which is NLD(O(1))-complete. Somewhat surprisingly, the definition of containment is quite similar to the definition of cover. Specifically, as in cover, every node v is given as input an element E(v) and a finite collection of sets S(v). However, in contrast to cover, the union of these inputs is in the containment language if there exists a node v such that one set in S(v) contains the union of all the elements given to the nodes. Formally, we define containment = {(G, (E, S)) | ∃v ∈ V(G), ∃S ∈ S(v) s.t. S ⊇ {E(u) | u ∈ V(G)}}.

Proof. We first prove that containment is NLD(O(1))-hard. Consider some L ∈ NLD(O(1)); we show that L ⪯ containment. For this purpose, we describe a local distributed algorithm D transforming any configuration for L to a configuration for containment, preserving membership in these languages. Let t = t_L ≥ 0 be some (constant) integer such that there exists a local nondeterministic algorithm A_L deciding L in time at most t. Let (G, x) be a configuration for L and let Id be an identity assignment. Algorithm D operating at a node v outputs a pair (E(v), S(v)), where E(v) is the "t-local view" at v in (G, x), i.e., the ball of radius t around v, B_G(v, t), together with the inputs of these nodes and their identities, and S(v) is the collection of sets S defined as follows. For a binary string x, let |x| denote the length of x, i.e., the number of bits in x. For every vertex v, let ψ(v) = 2^{|Id(v)|+|x(v)|}. Node v first generates all configurations (G′, x′) where G′ is a graph with m ≤ ψ(v) vertices, and x′ is a collection of m input strings of length at most ψ(v), such that (G′, x′) ∈ L.
For each such configuration (G′, x′), node v generates all possible Id′ assignments to V(G′) such that for every node u ∈ V(G′), |Id′(u)| ≤ ψ(v). Now, for each such pair of a configuration (G′, x′) and an Id′ assignment, algorithm D associates a set S ∈ S(v) consisting of the m = |V(G′)| t-local views of the nodes of G′ in (G′, x′). We show that (G, x) ∈ L ⇐⇒ D(G, x) ∈ containment. If (G, x) ∈ L, then by the construction of Algorithm D, there exists a set S ∈ S(v) such that S covers the collection of t-local views for (G, x), i.e., S = {E(u) | u ∈ V(G)}. Indeed, the node v maximizing ψ(v) satisfies ψ(v) ≥ max{Id(u) | u ∈ V(G)} ≥ n and ψ(v) ≥ max{x(u) | u ∈ V(G)}. Therefore, that specific node has constructed a set S that precisely corresponds to (G, x) and its given Id assignment; hence, S contains all corresponding t-local views. Thus, D(G, x) ∈ containment. Now consider the case that D(G, x) ∈ containment. In this case, there exists a node v and a set S ∈ S(v) such that S ⊇ {E(u) | u ∈ V(G)}. Such a set S is the collection of t-local views of the nodes of some configuration (G′, x′) ∈ L under some Id′ assignment. Since (G′, x′) ∈ L, there exists a certificate y′ for the nodes of G′ such that, when algorithm A_L operates on (G′, x′, y′), all nodes say "yes". Now, since S contains the t-local views of the nodes of (G, x), with the corresponding identities, there exists a mapping φ : (G, x, Id) → (G′, x′, Id′) that preserves inputs and identities. Moreover, when restricted to a ball of radius t around a vertex v ∈ G, φ is actually an isomorphism between this ball and its image. We assign a certificate y to the nodes of G: for each v ∈ V(G), y(v) = y′(φ(v)). Now, Algorithm A_L, when operating on (G, x, y), outputs "yes" at each node of G. By the correctness of A_L, we obtain (G, x) ∈ L. We now show that containment ∈ NLD(O(1)).
For this purpose, we design a nondeterministic local algorithm A that decides whether a configuration (G, x) is in containment. Such an algorithm A is designed to operate on (G, x, y), where y is a certificate. The configuration (G, x) satisfies x(v) = (E(v), S(v)). Algorithm A aims at verifying whether there exists a node v* with a set S* ∈ S(v*) such that S* ⊇ {E(v) | v ∈ V(G)}. Given a correct instance, i.e., a configuration (G, x), we define the certificate y as follows. For each node v, the certificate y(v) at v consists of several fields, specifically, y(v) = (y_c(v), y_s(v), y_id(v), y_l(v)). The candidate configuration field y_c(v) is a triplet y_c(v) = (G′, x′, Id′), where (G′, x′) is an isomorphic copy of (G, x) and Id′ is an id-assignment for its nodes.

For any fixed constants p and q, one can modify the success probabilities by performing k runs and requiring each node to individually output "no" if it decided "no" on at least one of the runs. In this case, the "no" success probability increases from q to at least 1 − (1 − q)^k, and the "yes" success probability then decreases from p to p^k. Another interesting question is whether the phenomenon we observed regarding randomization occurs also in the nondeterministic setting, that is, whether BPNLD(t, p, q) collapses into NLD(O(t)) for p^2 + q > 1. Our model of computation, namely, the LOCAL model, focuses on difficulties arising from purely locality issues, and abstracts away other complexity measures. Naturally, it would be very interesting to come up with a rigorous complexity framework taking into account also other complexity measures. For example, it would be interesting to investigate the connections between classical computational complexity theory and local complexity theory.
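The amplification trade-off mentioned above is a one-liner to check numerically (illustrative values of p, q, and k):

```python
# k independent runs; a node answers "no" iff some run said "no".
def amplified(p, q, k):
    """Returns the new ("yes", "no") success probabilities."""
    return p ** k, 1 - (1 - q) ** k

p_k, q_k = amplified(0.9, 0.2, 3)
assert abs(p_k - 0.9 ** 3) < 1e-12          # "yes" drops to p^k
assert abs(q_k - (1 - 0.8 ** 3)) < 1e-12    # "no" rises to 1-(1-q)^k
```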
The bound on the (centralized) running time in each round (given by the function f; see Section 2) may serve as a bridge for connecting the two theories, by putting constraints on this bound (i.e., f must be polynomial, exponential, etc.). Also, one could restrict the memory used by a node, in addition to, or instead of, bounding the sequential time. Finally, it would be interesting to come up with a complexity framework taking congestion into account as well.
10,308
1011.2152
2953378433
A central theme in distributed network algorithms concerns understanding and coping with the issue of locality. Inspired by sequential complexity theory, we focus on a complexity theory for distributed decision problems. In the context of locality, solving a decision problem requires the processors to independently inspect their local neighborhoods and then collectively decide whether a given global input instance belongs to some specified language. This paper introduces several classes of distributed decision problems, proves separation among them and presents some complete problems. More specifically, we consider the standard LOCAL model of computation and define LD (for local decision) as the class of decision problems that can be solved in a constant number of communication rounds. We first study the intriguing question of whether randomization helps in local distributed computing, and to what extent. Specifically, we define the corresponding randomized class BPLD, and ask whether LD=BPLD. We provide a partial answer to this question by showing that in many cases, randomization does not help for deciding hereditary languages. In addition, we define the notion of local many-one reductions, and introduce the (nondeterministic) class NLD of decision problems for which there exists a certificate that can be verified in a constant number of communication rounds. We prove that there exists an NLD-complete problem. We also show that there exist problems not in NLD. On the other hand, we prove that the class NLD#n, which is NLD assuming that each processor can access an oracle that provides the number of nodes in the network, contains all (decidable) languages. For this class we provide a natural complete problem as well.
The theory of proof-labeling schemes @cite_17 @cite_15 was designed to tackle the issue of locally verifying (with the aid of a proof, i.e., a certificate, at each node) solutions to problems that cannot be decided locally (e.g., "is the given subgraph a spanning tree of the network?", or "is it an MST?"). In fact, the model of proof-labeling schemes has some resemblance to our definition of the class @math . Investigations in the framework of proof-labeling schemes mostly focus on the minimum size of the certificate necessary so that verification can be performed in a single round. The notion of proof-labeling schemes also has interesting similarities with the notions of local detection @cite_25 , local checking @cite_5 , and silent stabilization @cite_18 , which were introduced in the context of self-stabilization @cite_36 .
{ "abstract": [ "A stabilizing algorithm is silent if starting from an arbitrary state it converges to a global state after which the values stored in the communication registers are fixed. Many silent stabilizing algorithms have appeared in the literature. In this paper we show that there cannot exist constant memory silent stabilizing algorithms for finding the centers of a graph, electing a leader, and constructing a spanning tree. We demonstrate a lower bound of Ω(log n) bits per communication register for each of the above tasks.", "", "The first self-stabilizing end-to-end communication protocol and the most efficient known self-stabilizing network reset protocol are introduced. A simple method of local checking and correction, by which distributed protocols can be made self-stabilizing without the use of unbounded counters, is used. The self-stabilization model distinguishes between catastrophic faults that abstract arbitrary corruption of global state, and other restricted kinds of anticipated faults. It is assumed that after the execution starts there are no further catastrophic faults, but the anticipated faults may continue to occur. >", "This paper addresses the problem of locally verifying global properties. Several natural questions are studied, such as “how expensive is local verification?” and more specifically, “how expensive is local verification compared to computation?” A suitable model is introduced in which these questions are studied in terms of the number of bits a vertex needs to communicate. The model includes the definition of a proof labeling scheme (a pair of algorithms- one to assign the labels, and one to use them to verify that the global property holds). In addition, approaches are presented for the efficient construction of schemes, and upper and lower bounds are established on the bit complexity of schemes for multiple basic problems. 
The paper also studies the role and cost of unique identities in terms of impossibility and complexity, in the context of proof labeling schemes. Previous studies on related questions deal with distributed algorithms that simultaneously compute a configuration and verify that this configuration has a certain desired property. It turns out that this combined approach enables the verification to be less costly sometimes, since the configuration is typically generated so as to be easily verifiable. In contrast, our approach separates the configuration design from the verification. That is, it first generates the desired configuration without bothering with the need to verify it, and then handles the task of constructing a suitable verification scheme. Our approach thus allows for a more modular design of algorithms, and has the potential to aid in verifying properties even when the original design of the structures for maintaining them was done without verification in mind.", "Abstract A new paradigm for the design of self-stabilizing distributed algorithms, called local detection , is introduced. The essence of the paradigm is in defining a local condition based on the state of a processor and its immediate neighborhood such that the system is in a globally legal state if and only if the local condition is satisfied at all the nodes. In this work we also extend the model of self-stabilizing networks traditionally assuming memory failure to include the model of dynamic networks (assuming edge failures and recoveries). We apply the paradigm to the extended model which we call “dynamic self-stabilizing networks”. Without loss of generality, we present the results in the least restrictive shared memory model of read write atomicity, to which end we construct basic information transfer primitives. 
Using local detection, we develop deterministic and randomized self-stabilizing algorithms that maintain a rooted spanning tree in a general network whose topology changes dynamically. The deterministic algorithm assumes unique identities while the randomized assumes an anonymous network. The algorithms use a constant number of memory words per edge in each node; and both the size of memory words and of messages is the number of bits necessary to represent a node identity (typically O (log n ) bits where n is the size of the network). These algorithms provide for the easy construction of self-stabilizing protocols for numerous tasks: reset, routing, topology-update and self-stabilization transformers that automatically self-stabilize existing protocols for which local detection conditions can be defined.", "" ], "cite_N": [ "@cite_18", "@cite_36", "@cite_5", "@cite_15", "@cite_25", "@cite_17" ], "mid": [ "2019918796", "2568887944", "1901784050", "2056295140", "2015849324", "" ] }
Local Distributed Decision
Distributed computing concerns a collection of processors which collaborate in order to achieve some global task. With time, two main disciplines have evolved in the field. One discipline deals with timing issues, namely, uncertainties due to asynchrony (the fact that processors run at their own speed, and possibly crash), and the other concerns topology issues, namely, uncertainties due to locality constraints (the lack of knowledge about far away processors). Studies carried out by the distributed computing community within these two disciplines were to a large extent problem-driven. Indeed, several major problems considered in the literature concern coping with one of the two uncertainties. For instance, in the asynchrony-discipline, Fischer, Lynch and Paterson [14] proved that consensus cannot be achieved in the asynchronous model, even in the presence of a single fault, and in the locality-discipline, Linial [28] proved that (∆ + 1)-coloring cannot be achieved locally (i.e., in a constant number of communication rounds), even in the ring network. One of the significant achievements of the asynchrony-discipline was its success in establishing unifying theories in the flavor of computational complexity theory. Some central examples of such theories are failure detectors [6,7] and the wait-free hierarchy (including Herlihy's hierarchy) [18]. In contrast, despite considerable progress, the locality-discipline still suffers from the absence of a solid basis in the form of a fundamental computational complexity theory. Obviously, defining some common cost measures (e.g., time, message, memory, etc.) enables us to compare problems in terms of their relative cost. Still, from a computational complexity point of view, it is not clear how to relate the difficulty of problems in the locality-discipline. Specifically, if two problems have different kinds of outputs, it is not clear how to reduce one to the other, even if they cost the same. 
Inspired by sequential complexity theory, we focus on decision problems, in which one is aiming at deciding whether a given global input instance belongs to some specified language. In the context of distributed computing, each processor must produce a boolean output, and the decision is defined by the conjunction of the processors' outputs, i.e., if the instance belongs to the language, then all processors must output "yes", and otherwise, at least one processor must output "no". Observe that decision problems provide a natural framework for tackling fault-tolerance: the processors have to collectively check whether the network is fault-free, and a node detecting a fault raises an alarm. In fact, many natural problems can be phrased as decision problems, like "is there a unique leader in the network?" or "is the network planar?". Moreover, decision problems occur naturally when one is aiming at checking the validity of the output of a computational task, such as "is the produced coloring legal?", or "is the constructed subgraph an MST?". Construction tasks such as exact or approximated solutions to problems like coloring, MST, spanner, MIS, maximum matching, etc., received enormous attention in the literature (see, e.g., [5,25,26,28,30,31,32,38]), yet the corresponding decision problems have hardly been considered. The purpose of this paper is to investigate the nature of local decision problems. Decision problems seem to provide a promising approach to building up a distributed computational theory for the locality-discipline. Indeed, as we will show, one can define local reductions in the framework of decision problems, thus enabling the introduction of complexity classes and notions of completeness. We consider the LOCAL model [36], which is a standard distributed computing model capturing the essence of locality. 
In this model, processors are woken up simultaneously, and computation proceeds in fault-free synchronous rounds during which every processor exchanges messages of unlimited size with its neighbors, and performs arbitrary computations on its data. Informally, let us define LD(t) (for local decision) as the class of decision problems that can be solved in t communication rounds in the LOCAL model. (We find special interest in the case where t represents a constant, but in general we view t as a function of the input graph. We note that in the LOCAL model, every decidable decision problem can be solved in n communication rounds, where n denotes the number of nodes in the input graph.) Some decision problems are trivially in LD(O(1)) (e.g., "is the given coloring a (∆ + 1)-coloring?", "do the selected nodes form an MIS?", etc.), while some others can easily be shown to be outside LD(t), for any t = o(n) (e.g., "is the network planar?", "is there a unique leader?", etc.). In contrast to the above examples, there are some languages for which it is not clear whether they belong to LD(t), even for t = O(1). To elaborate on this, consider the particular case where it is required to decide whether the network belongs to some specified family F of graphs. If this question can be decided in a constant number of communication rounds, then this means, informally, that the family F can somehow be characterized by relatively simple conditions. For example, a family F of graphs that can be characterized as consisting of all graphs having no subgraph from C, for some specified finite set C of finite subgraphs, is obviously in LD(O(1)). However, the question of whether a family of graphs can be characterized as above is often non-trivial. For example, characterizing cographs as precisely the graphs with no induced P_4, attributed to Seinsche [40], is not easy, and requires nontrivial usage of modular decomposition.
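As an illustration of such local decision tasks, the following small simulation (not from the paper; the adjacency-dict encoding and function names are our own) sketches a one-round decider for the question "is the given coloring legal?", together with the global verdict obtained as the conjunction of the per-node outputs.

```python
# Sketch of a one-round local decider for legal coloring: every node inspects
# only its immediate neighborhood and outputs "yes"/"no"; the instance is
# accepted iff all nodes output "yes". The graph encoding is our own assumption.

def node_output(v, graph, color):
    """Local check at v: my color differs from every neighbor's color."""
    return all(color[v] != color[u] for u in graph[v])

def decide(graph, color):
    """Global verdict: the conjunction of all local outputs."""
    return all(node_output(v, graph, color) for v in graph)

# A 4-cycle: a legal 2-coloring versus an illegal coloring.
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
legal = {0: 0, 1: 1, 2: 0, 3: 1}
illegal = {0: 0, 1: 0, 2: 1, 3: 1}
```

Note the asymmetry built into the decision rule: a single node noticing a monochromatic edge incident to it already suffices to reject the whole instance.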
The first question we address is whether and to what extent randomization helps. For p, q ∈ (0, 1], define BPLD(t, p, q) as the class of all distributed languages that can be decided by a randomized distributed algorithm that runs in t communication rounds and produces correct answers on legal (resp., illegal) instances with probability at least p (resp., q). An interesting observation is that for p and q such that p^2 + q ≤ 1, we have LD(t) ⊊ BPLD(t, p, q). In fact, for such p and q, there exists a language L ∈ BPLD(0, p, q) such that L ∉ LD(t), for any t = o(n). To see why, consider the following Unique-Leader language. The input is a graph where each node has a bit indicating whether it is a leader or not. An input is in the language Unique-Leader if and only if there is at most one leader in the graph. Obviously, this language is not in LD(t), for any t < n. We claim it is in BPLD(0, p, q), for p and q such that p^2 + q ≤ 1. Indeed, for such p and q, we can design the following simple randomized algorithm that runs in 0 time: every node which is not a leader says "yes" with probability 1, and every node which is a leader says "yes" with probability p. Clearly, if the graph has at most one leader, then all nodes say "yes" with probability at least p. On the other hand, if there are k ≥ 2 leaders, at least one node says "no" with probability at least 1 − p^k ≥ 1 − p^2 ≥ q. It turns out that the aforementioned choice of p and q is not coincidental, and that p^2 + q = 1 is really the correct threshold. Indeed, we show that Unique-Leader ∉ BPLD(t, p, q), for any t < n and any p and q such that p^2 + q > 1. In fact, we show a much more general result: we prove that if p^2 + q > 1, then, restricted to hereditary languages, BPLD(t, p, q) actually collapses into LD(O(t)), for any t.
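The 0-round Unique-Leader decider just described can be checked symbolically. The sketch below (our own code, mirroring the counting argument rather than implementing message passing) computes the exact acceptance probability p^k for an instance with k leaders and verifies both decision rules at the threshold p^2 + q = 1.

```python
# Exact acceptance probabilities of the 0-round randomized Unique-Leader
# decider: each leader independently says "yes" with probability p,
# non-leaders always say "yes", so Pr[all say "yes"] = p^(number of leaders).

def prob_all_yes(num_leaders, p):
    """Probability that every node outputs 'yes'."""
    return p ** num_leaders

p = 0.8
q = 1 - p * p  # choose q on the threshold p^2 + q = 1

# Legal instances (at most one leader) are accepted with probability >= p;
# illegal instances (k >= 2 leaders) are rejected with probability
# 1 - p^k >= 1 - p^2 >= q, as required when p^2 + q <= 1.
legal_ok = all(prob_all_yes(k, p) >= p for k in (0, 1))
illegal_ok = all(1 - prob_all_yes(k, p) >= q for k in range(2, 10))
```

With k = 2 leaders the rejection probability is exactly 1 − p^2, which is why no q above 1 − p^2 can be guaranteed by this algorithm, matching the p^2 + q = 1 threshold discussed in the text.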
In the second part of the paper, we investigate the impact of non-determinism on local decision, and establish some structural results inspired by classical computational complexity theory. Specifically, we show that non-determinism does help, but that this help is limited, as there exist languages that cannot be decided non-deterministically. Perhaps surprisingly, it turns out that it is the combination of randomization with non-determinism that enables one to decide all languages in constant time. Finally, we introduce the notion of local reduction, and establish some completeness results.
Our contributions
1.2.1 Impact of randomization
We study the impact of randomization on local decision. We prove that if p^2 + q > 1, then, restricted to hereditary languages, BPLD(t, p, q) = LD(O(t)), for any function t. This, together with the observation that LD(t) ⊊ BPLD(t, p, q), for any t = o(n), may indicate that p^2 + q = 1 serves as a sharp threshold for distinguishing the deterministic case from the randomized one.
Impact of non-determinism
We first show that non-determinism helps local decision, i.e., we show that the class NLD(t) (cf. Section 2.3) strictly contains LD(t). More precisely, we show that there exists a language in NLD(O(1)) which is not in LD(t) for every t = o(n), where n is the size of the input graph. Nevertheless, NLD(t) does not capture all (decidable) languages, for t = o(n). Indeed, we show that there exists a language not in NLD(t) for every t = o(n). Specifically, this language is #n = {(G, n) | |V (G)| = n}. Perhaps surprisingly, it turns out that it is the combination of randomization with non-determinism that enables one to decide all languages in constant time. Let BPNLD(O(1)) = BPNLD(O(1), p, q), for some constants p and q such that p^2 + q ≤ 1. We prove that BPNLD(O(1)) contains all languages. To sum up, LD(o(n)) ⊊ NLD(O(1)) ⊆ NLD(o(n)) ⊊ BPNLD(O(1)) = All.
Finally, we introduce the notion of many-one local reduction, and establish some completeness results. We show that there exists a problem, called cover, which is, in a sense, the most difficult decision problem. That is, we show that cover is BPNLD(O(1))-complete. (Interestingly, a small relaxation of cover, called containment, turns out to be NLD(O(1))-complete.)
Decision problems and complexity classes
2.1 Model of computation
Let us first recall some basic notions in distributed computing. We consider the LOCAL model [36], which is a standard model capturing the essence of locality. In this model, processors are assumed to be nodes of a network G, provided with arbitrary distinct identities, and computation proceeds in fault-free synchronous rounds. At each round, every processor v ∈ V (G) exchanges messages of unrestricted size with its neighbors in G, and performs computations on its data. We assume that the number of steps (sequential time) used for the local computation made by the node v in some round r is bounded by some function f_A(H(r, v)), where H(r, v) denotes the size of the "history" seen by node v up to the beginning of round r, that is, the total number of bits encoded in the input and the identity of the node, as well as in the incoming messages from previous rounds. Here, we do not impose any restriction on the growth rate of f_A. We would like to point out, however, that imposing such restrictions, or alternatively, imposing restrictions on the memory used by a node for local computation, may lead to interesting connections between the theory of locality and classical computational complexity theory. To sum up, during the execution of a distributed algorithm A, all processors are woken up simultaneously, and, initially, a processor is solely aware of its own identity, and possibly of some local input too.
Then, in each round r, every processor v (1) sends messages to its neighbors, (2) receives messages from its neighbors, and (3) performs at most f_A(H(r, v)) computations. After a number of rounds (that may depend on the network G and may vary among the processors, simply because nodes have different identities, potentially different inputs, and are typically located at non-isomorphic positions in the network), every processor v terminates and outputs some value out(v). Consider an algorithm running in a network G with input x and identity assignment Id. The running time of a node v, denoted T_v(G, x, Id), is the number of rounds until v outputs. The running time of the algorithm, denoted T(G, x, Id), is the number of rounds until all processors terminate, i.e., T(G, x, Id) = max{T_v(G, x, Id) | v ∈ V (G)}. Let t be a non-decreasing function of input configurations (G, x, Id). (By non-decreasing, we mean that if G′ is an induced subgraph of G and x′ and Id′ are the restrictions of x and Id, respectively, to the nodes in G′, then t(G′, x′, Id′) ≤ t(G, x, Id).) We say that an algorithm A has running time at most t if T(G, x, Id) ≤ t(G, x, Id) for every (G, x, Id). We shall give special attention to the case where t represents a constant function. Note that in general, given (G, x, Id), the nodes may not be aware of t(G, x, Id). On the other hand, note that, if t = t(G, x, Id) is known, then w.l.o.g. one can always assume that a local algorithm running in time at most t operates at each node v in two stages: (A) collect all information available in B_G(v, t), the t-neighborhood, or ball of radius t, of v in G, including inputs, identities and adjacencies, and (B) compute the output based on this information.
Local decision (LD)
We now refine some of the above concepts, in order to formally define our objects of interest.
Obviously, a distributed algorithm that runs on a graph G operates separately on each connected component of G, and nodes of a component G′ of G cannot distinguish the underlying graph G from G′. For this reason, we consider connected graphs only.
Definition 2.1 A configuration is a pair (G, x) where G is a connected graph, and every node v ∈ V (G) is assigned as its local input a binary string x(v) ∈ {0, 1}*.
In some problems, the local input of every node is empty, i.e., x(v) = ǫ for every v ∈ V (G), where ǫ denotes the empty binary string. Since an undecidable collection of configurations remains undecidable in the distributed setting too, we consider only decidable collections of configurations. Formally, we define the following.
Definition 2.2 A distributed language is a decidable collection L of configurations.
In general, there are several possible ways of representing a configuration of a distributed language corresponding to standard distributed computing problems. Some examples considered in this paper are the following.
Unique-Leader = {(G, x) | ‖x‖_1 ≤ 1} consists of all configurations such that there exists at most one node with local input 1, with all the others having local input 0.
Consensus = {(G, (x_1, x_2)) | ∃u ∈ V (G), ∀v ∈ V (G), x_2(v) = x_1(u)} consists of all configurations such that all nodes agree on the value proposed by some node.
Coloring = {(G, x) | ∀v ∈ V (G), ∀w ∈ N(v), x(v) ≠ x(w)}, where N(v) denotes the (open) neighborhood of v, that is, all nodes at distance exactly 1 from v.
MIS = {(G, x) | S = {v ∈ V (G) | x(v) = 1} forms a maximal independent set (MIS)}.
SpanningTree = {(G, (name, head)) | T = {e_v = (v, v⁺) | v ∈ V (G), head(v) = name(v⁺)} is a spanning tree of G} consists of all configurations such that the set T of edges e_v between every node v and its neighbor v⁺ satisfying name(v⁺) = head(v) forms a spanning tree of G. (The language MST, for minimum spanning tree, can be defined similarly.)
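The language definitions above can be transcribed directly into (centralized) membership predicates. The sketch below is illustrative only, with our own adjacency-dict encoding; it shows that each language is indeed a decidable collection of configurations.

```python
# Hypothetical centralized membership checks for some of the distributed
# languages defined above; a configuration is a graph (adjacency dict)
# together with per-node inputs (dicts indexed by node).

def in_unique_leader(graph, x):
    # At most one node carries local input 1.
    return sum(x[v] for v in graph) <= 1

def in_consensus(graph, x1, x2):
    # All nodes agree (x2) on a value proposed by some node (x1).
    vals = {x2[v] for v in graph}
    return len(vals) == 1 and vals.pop() in {x1[v] for v in graph}

def in_mis(graph, x):
    # The selected set S = {v : x(v) = 1} is independent and maximal.
    S = {v for v in graph if x[v] == 1}
    independent = all(u not in S for v in S for u in graph[v])
    maximal = all(v in S or any(u in S for u in graph[v]) for v in graph)
    return independent and maximal

P3 = {0: [1], 1: [0, 2], 2: [1]}  # a path on three nodes
```

For instance, on the path P3 the two endpoints form a MIS, while any two adjacent selected nodes violate independence.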
An identity assignment Id for a graph G is an assignment of distinct integers to the nodes of G. A node v ∈ V (G) executing a distributed algorithm in a configuration (G, x) initially knows only its own identity Id(v) and its own input x(v), and is unaware of the graph G. After t rounds, v acquires knowledge only of its t-neighborhood B_G(v, t). In each round r of the algorithm A, a node may communicate with its neighbors by sending and receiving messages, and may perform at most f_A(H(r, v)) computations. Eventually, each node v ∈ V (G) must output a local output out(v) ∈ {0, 1}*. Let L be a distributed language. We say that a distributed algorithm A decides L if and only if for every configuration (G, x), and for every identity assignment Id for the nodes of G, every node of G eventually terminates and outputs "yes" or "no", satisfying the following decision rules:
• If (G, x) ∈ L, then out(v) = "yes" for every node v ∈ V (G);
• If (G, x) ∉ L, then there exists at least one node v ∈ V (G) such that out(v) = "no".
We are now ready to define one of our main subjects of interest, the class LD(t), for local decision.
Definition 2.3 Let t be a non-decreasing function of triplets (G, x, Id). Define LD(t) as the class of all distributed languages that can be decided by a local distributed algorithm that runs in at most t communication rounds.
For instance, Coloring ∈ LD(1) and MIS ∈ LD(1). On the other hand, it is not hard to see that languages such as Unique-Leader, Consensus, and SpanningTree are not in LD(t), for any t = o(n). In what follows, we define LD(O(t)) = ∪_{c>1} LD(c · t).
Non-deterministic local decision (NLD)
A distributed verification algorithm is a distributed algorithm A that gets as input, in addition to a configuration (G, x), a global certificate vector y, i.e., every node v of a graph G gets as input a binary string x(v) ∈ {0, 1}*, and a certificate y(v) ∈ {0, 1}*.
A verification algorithm A verifies L if and only if for every configuration (G, x), the following hold:
• If (G, x) ∈ L, then there exists a certificate y such that for every id-assignment Id, algorithm A applied on (G, x) with certificate y and id-assignment Id outputs out(v) = "yes" for all v ∈ V (G);
• If (G, x) ∉ L, then for every certificate y and for every id-assignment Id, algorithm A applied on (G, x) with certificate y and id-assignment Id outputs out(v) = "no" for at least one node v ∈ V (G).
One motivation for studying the nondeterministic verification framework comes from settings in which one must perform local verifications repeatedly. In such cases, one can afford a relatively "wasteful" preliminary step in which a certificate is computed for each node. Using these certificates, local verifications can then be performed very fast. See [21,22] for more details regarding such applications. Indeed, the definition of a verification algorithm bears similarities to the notion of proof-labeling schemes discussed in [21,22]. Informally, in a proof-labeling scheme, the construction of a "good" certificate y for a configuration (G, x) ∈ L may depend also on the given id-assignment. Since the question of whether a configuration (G, x) belongs to a language L is independent of the particular id-assignment, we prefer to let the "good" certificate y depend only on the configuration. In other words, as defined above, a verification algorithm operating on a configuration (G, x) ∈ L and a "good" certificate y must say "yes" at every node regardless of the id-assignment. We now define the class NLD(t), for nondeterministic local decision (our terminology is by direct analogy to the class NP in sequential computational complexity): NLD(t) is the class of all distributed languages that can be verified by a distributed verification algorithm running in at most t communication rounds.
Bounded-error probabilistic local decision (BPLD)
A randomized distributed algorithm is a distributed algorithm A that enables every node v, at any round r during the execution, to toss a number of random bits, obtaining a string r(v) ∈ {0, 1}*. Clearly, this number cannot exceed f_A(H(r, v)), the bound on the number of computational steps used by node v at round r. Note, however, that H(r, v) may now also depend on the random bits produced by other nodes in previous rounds. For p, q ∈ (0, 1], we say that a randomized distributed algorithm A is a (p, q)-decider for L, or that it decides L with "yes" success probability p and "no" success probability q, if and only if for every configuration (G, x), and for every identity assignment Id for the nodes of G, every node of G eventually terminates and outputs "yes" or "no", and the following properties are satisfied:
• If (G, x) ∈ L, then Pr[out(v) = "yes" for every node v ∈ V (G)] ≥ p,
• If (G, x) ∉ L, then Pr[out(v) = "no" for at least one node v ∈ V (G)] ≥ q,
where the probabilities in the above definition are taken over all possible coin tosses performed by the nodes. We define the class BPLD(t, p, q), for "Bounded-error Probabilistic Local Decision", as follows.
Definition 2.5 For p, q ∈ (0, 1] and a function t, BPLD(t, p, q) is the class of all distributed languages that have a local randomized distributed (p, q)-decider running in time t (i.e., that can be decided in time t by a local randomized distributed algorithm with "yes" success probability p and "no" success probability q).
A sharp threshold for randomization
Consider some graph G and a subset U of the nodes of G, i.e., U ⊆ V (G), such that G[U] is connected; the configuration (G[U], x[U]) is called a prefix of (G, x), and a language is hereditary if every prefix of every configuration in the language is also in the language. Theorem 3.1 below asserts that, for hereditary languages, randomization does not help if one imposes that p^2 + q > 1, i.e., the "no" success probability is at least as large as one minus the square of the "yes" success probability.
Somewhat more formally, we prove that for hereditary languages, we have ∩_{p^2+q>1} BPLD(t, p, q) = LD(O(t)). This complements the fact that for p^2 + q ≤ 1, we have LD(t) ⊊ BPLD(t, p, q), for any t = o(n). Recall that [34] investigates the question of whether randomization helps for constructing in constant time a solution for a problem in LCL ⊆ LD(O(1)). We stress that the technique used in [34] for tackling this question relies heavily on the definition of LCL, specifically, on the fact that only graphs of constant degree and of constant input size are considered. Hence it is not clear whether the technique of [34] can be useful for our purposes, as we impose no such assumptions on the degrees or input sizes. Also, although it seems at first glance that the Lovász local lemma might have been helpful here, we could not effectively apply it in our proof. Instead, we use a completely different approach.
Theorem 3.1 Let L be a hereditary language and let t be a function. If L ∈ BPLD(t, p, q) for constants p, q ∈ (0, 1] such that p^2 + q > 1, then L ∈ LD(O(t)).
Proof. Let us start with some definitions. Let L be a language in BPLD(t, p, q), where p, q ∈ (0, 1] and p^2 + q > 1, and t is some function. Let A be a randomized algorithm deciding L, with "yes" success probability p and "no" success probability q, whose running time is at most t(G, x, Id), for every configuration (G, x) with identity assignment Id. Fix a configuration (G, x) and an id-assignment Id for the nodes of V (G). The distance dist_G(u, v) between two nodes of G is the minimum number of edges in a path connecting u and v in G. The distance between two subsets U_1, U_2 ⊆ V is defined as dist_G(U_1, U_2) = min{dist_G(u, v) | u ∈ U_1, v ∈ U_2}. For a set U ⊆ V, let E(G, x, Id, U) denote the event that, when running A on (G, x) with id-assignment Id, all nodes in U output "yes". Let v ∈ V (G). The running time of A at v may depend on the coin tosses made by the nodes.
Let t_v = t_v(G, x, Id) denote the maximal running time of v over all possible coin tosses. Note that t_v ≤ t(G, x, Id) (we do not assume that either t or t_v is known to v). The radius of a node v, denoted r_v, is the maximum value t_u such that there exists a node u with v ∈ B_G(u, t_u). (Observe that the radius of a node is at most t.) The radius of a set of nodes S is r_S := max{r_v | v ∈ S}. In what follows, fix a constant δ such that 0 < δ < p^2 + q − 1, and define λ = 11⌈log p / log(1 − δ)⌉. A splitter of (G, x, Id) is a triplet (S, U_1, U_2) of pairwise disjoint subsets of nodes such that S ∪ U_1 ∪ U_2 = V and dist_G(U_1, U_2) ≥ λr_S. (Observe that r_S may depend on the identity assignment and the input, and therefore being a splitter is not just a topological property depending only on G.) Given a splitter (S, U_1, U_2) of (G, x, Id), let G_k = G[U_k ∪ S], and let x_k be the input x restricted to nodes in G_k, for k = 1, 2. The following structural claim does not use the fact that L is hereditary.
Lemma 3.2 For every configuration (G, x) with identity assignment Id, and every splitter (S, U_1, U_2) of (G, x, Id), we have (G_1, x_1) ∈ L and (G_2, x_2) ∈ L ⇒ (G, x) ∈ L.
Let (G, x) be a configuration with identity assignment Id. Assume, towards contradiction, that there exists a splitter (S, U_1, U_2) of the triplet (G, x, Id) such that (G_1, x_1) ∈ L and (G_2, x_2) ∈ L, yet (G, x) ∉ L. (The fact that (G_1, x_1) ∈ L and (G_2, x_2) ∈ L implies that both G_1 and G_2 are connected; however, we note that, for the claim to be true, it is not required that G[
Proof. For proving Claim 3.3, we upper bound the size of I by d − 4r_S − 2. This is done by covering the integers in (2r_S, d − 2r_S) by at most 4r_S + 1 sets, such that each one is (4r_S + 1)-independent, that is, every two integers in the same set are at least 4r_S + 1 apart.
Specifically, for s ∈ [1, 4r_S + 1] and m(S) = ⌈(d − 8r_S)/(4r_S + 1)⌉, we define J_s = {s + 2r_S + j(4r_S + 1) | j ∈ [0, m(S)]}. Observe that, as desired, (2r_S, d − 2r_S) ⊂ ∪_{s∈[1,4r_S+1]} J_s, and for each s ∈ [1, 4r_S + 1], J_s is (4r_S + 1)-independent. In what follows, fix s ∈ [1, 4r_S + 1] and let J = J_s. Since (G_1, x_1) ∈ L, we know that Pr[E(G_1, x_1, Id, S_{J∩I})] ≥ p. Observe that for i ∈ (2r_S, d − 2r_S), every node v ∈ S_i satisfies t_v ≤ r_v ≤ r_S, and hence the t_v-neighborhood in G of every node v ∈ S_i is contained in S ⊆ G_1, i.e., B_G(v, t_v) ⊆ G_1. It therefore follows that:
Pr[E(G, x, Id, S_{J∩I})] = Pr[E(G_1, x_1, Id, S_{J∩I})] ≥ p.   (1)
Consider two integers a and b in J. We know that |a − b| ≥ 4r_S + 1. Hence, the distance in G between any two nodes u ∈ S_a and v ∈ S_b is at least 2r_S + 1. Thus, the events E(G, x, Id, S_a) and E(G, x, Id, S_b) are independent. It follows, by the definition of I, that
Pr[E(G, x, Id, S_{J∩I})] < (1 − δ)^{|J∩I|}.   (2)
By (1) and (2), we have that p < (1 − δ)^{|J∩I|}, and thus |J ∩ I| < log p / log(1 − δ). Since (2r_S, d − 2r_S) can be covered by the sets J_s, s = 1, . . . , 4r_S + 1, each of which is (4r_S + 1)-independent, we get that |I| = Σ_{s=1}^{4r_S+1} |J_s ∩ I| < (4r_S + 1)(log p / log(1 − δ)). Combining this bound with the fact that d = λr_S, we get that d − 4r_S − 1 > |I|. It follows, by the pigeonhole principle, that there exists some i ∈ (2r_S, d − 2r_S) such that i ∉ I, as desired. This completes the proof of Claim 3.3.
Fix i ∈ (2r_S, d − 2r_S) such that i ∉ I, and let F = E(G, x, Id, S_i). By the definition of I,
Pr[¬F] ≤ δ < p^2 + q − 1.   (3)
Let H_1 denote the subgraph of G induced by the nodes in (∪_{j=1}^{i−r_S−1} L_j) ∪ U_1. We similarly define H_2 as the subgraph of G induced by the nodes in (∪_{j>i+r_S} L_j) ∪ U_2. Note that S_i ∪ V (H_1) ∪ V (H_2) = V, and for any two nodes u ∈ V (H_1) and v ∈ V (H_2), we have d_G(u, v) > 2r_S.
It follows that, for k = 1, 2, the t_u-neighborhood in G of each node u ∈ V (H_k) equals the t_u-neighborhood in G_k of u, that is, B_G(u, t_u) ⊆ G_k. (To see why, consider, for example, the case k = 2. Given u ∈ V (H_2), it is sufficient to show that there is no v ∈ V (H_1) such that v ∈ B_G(u, t_u). Indeed, if such a vertex v exists, then d_G(u, v) > 2r_S, and hence t_u > 2r_S. Since there must exist a vertex w ∈ S_i such that w ∈ B_G(u, t_u), we get that r_w > 2r_S, in contradiction to the fact that w ∈ S.) Thus, for k = 1, 2, since (G_k, x_k) ∈ L, we get
Pr[E(G, x, Id, V (H_k))] = Pr[E(G_k, x_k, Id, V (H_k))] ≥ p.
Let F′ = E(G, x, Id, V (H_1) ∪ V (H_2)). As the events E(G, x, Id, V (H_1)) and E(G, x, Id, V (H_2)) are independent, it follows that Pr[F′] ≥ p^2, that is,
Pr[¬F′] ≤ 1 − p^2.   (4)
By Eqs. (3) and (4), and using the union bound, it follows that Pr[¬F ∨ ¬F′] < q. Thus,
Pr[E(G, x, Id, V (G))] = Pr[E(G, x, Id, S_i ∪ V (H_1) ∪ V (H_2))] = Pr[F ∧ F′] > 1 − q.
This is in contradiction to the assumption that (G, x) ∉ L. This concludes the proof of Lemma 3.2.
Our goal now is to show that L ∈ LD(O(t)) by proving the existence of a deterministic local algorithm D that runs in time O(t) and recognizes L. (No attempt is made here to minimize the constant factor hidden in the O(t) notation.) Recall that both t = t(G, x, Id) and t_v = t_v(G, x, Id) may not be known to v. Nevertheless, by inspecting the balls B_G(v, 2^i) for increasing i = 1, 2, · · · , each node v can compute an upper bound on t_v, as given by the following claim: each node v can compute a value t*_v = t*_v(c) such that (1) c · t_v ≤ t*_v = O(t), and (2) for every u ∈ B_G(v, c · t*_v), we have t_u ≤ t*_v. To establish the claim, observe first that in O(t) time each node v can compute a value t′_v satisfying t_v ≤ t′_v ≤ 2t.
Indeed, given the ball B_G(v, 2^i), for some integer i, and using the upper bound on the number of (sequential) local computations, node v can simulate all its possible executions up to round r = 2^i. The desired value t′_v is the smallest r = 2^i for which all executions of v up to round r conclude with an output at v. Once t′_v is computed, node v aims at computing t*_v. For this purpose, it starts again to inspect the balls B_G(v, 2^i) for increasing i = 1, 2, · · · , to obtain t′_u from each u ∈ B_G(v, 2^i). (For this purpose, it may need to wait until u computes t′_u, but this delays the whole computation by at most O(t) time.) Now, node v outputs t*_v = 2^i for the smallest i satisfying (1) c · t′_v ≤ 2^i and (2) for every u ∈ B_G(v, c · 2^i), we have t′_u ≤ 2^i. It is easy to see that for this i, we have 2^i = O(t), hence t*_v = O(t). Given a configuration (G, x) and an id-assignment Id, Algorithm D, applied at a node u, first calculates t*_u = t*_u(6λ), and then outputs "yes" if and only if the 2λt*_u-neighborhood of u in (G, x) belongs to L. That is,
out(u) = "yes" ⇐⇒ (B_G(u, 2λt*_u), x[B_G(u, 2λt*_u)]) ∈ L.
Obviously, Algorithm D is a deterministic algorithm that runs in time O(t). We claim that Algorithm D decides L. Indeed, since L is hereditary, if (G, x) ∈ L, then every prefix of (G, x) is also in L, and thus every node u outputs out(u) = "yes". Now consider the case where (G, x) ∉ L, and assume by contradiction that, by applying D on (G, x) with id-assignment Id, every node u outputs out(u) = "yes". Let U ⊆ V (G) be maximal by inclusion such that G[U] is connected and (G[U], x[U]) ∈ L. Obviously, U is not empty, as (B_G(u, 2λt*_u), x[B_G(u, 2λt*_u)]) ∈ L for every node u. On the other hand, we have |U| < |V (G)|, because (G, x) ∉ L. Let u ∈ U be a node with maximal t_u such that B_G(u, 2t_u) contains a node outside U. Define G′ as the subgraph of G induced by U ∪ V (B_G(u, 2t_u)).
Observe that G ′ is connected and that G ′ strictly contains U. Our goal is to show that (G ′ , x[G ′ ]) ∈ L, which contradicts the maximality of U. Let H denote the graph which is maximal by inclusion such that H is connected and B G (u, 2t u ) ⊂ H ⊆ B G (u, 2t u ) ∪ (U ∩ B G (u, 2λt * u )) . Let W 1 , W 2 , · · · , W ℓ be the ℓ connected components of G[U] \ B G (u, 2t u ), ordered arbitrarily. Let W 0 be the empty graph, and for k = 0, 1, 2, · · · , ℓ, define the graph Z k = H ∪ W 0 ∪ W 1 ∪ W 2 ∪ · · · ∪ W k . Observe that Z k is connected for each k = 0, 1, 2, · · · , ℓ. We prove by induction on k that (Z k , x[Z k ]) ∈ L for every k = 0, 1, 2, · · · , ℓ. This will establish the contradiction, since Z ℓ = G ′ . For the basis of the induction, the case k = 0, we need to show that (H, x[H]) ∈ L. This is immediate from the facts that H is a connected subgraph of B G (u, 2λt * u ), the configuration (B G (u, 2λt * u ), x[B G (u, 2λt * u )]) ∈ L, and L is hereditary. Assume now that we have (Z k , x[Z k ]) ∈ L for 0 ≤ k < ℓ, and consider the graph Z k+1 = Z k ∪ W k+1 . Define the sets of nodes S = V (Z k ) ∩ V (W k+1 ), U 1 = V (Z k ) \ S, and U 2 = V (W k+1 ) \ S. A crucial observation is that (S, U 1 , U 2 ) is a splitter of Z k+1 . This follows from the following arguments. Let us first show that r S ≤ t * u . By definition, we have t v ≤ t * u for every v ∈ B G (u, 6λt * u ). Hence, in order to bound the radius of S (in Z k+1 ) by t * u , it is sufficient to prove that there is no node w ∈ U \ B G (u, 6λt * u ) such that B G (w, t w ) ∩ S ≠ ∅. Indeed, if such a node w exists then t w > 4λt * u , and hence B G (w, 2t w ) contains a node outside U, in contradiction to the choice of u. It follows that r S ≤ t * u . We now claim that dist Z k+1 (U 1 , U 2 ) ≥ λt * u . Consider a simple directed path P in Z k+1 going from a node x ∈ U 1 to a node y ∈ U 2 . Since x / ∈ V (W k+1 ) and y ∈ V (W k+1 ), we get that P must pass through a vertex in B G (u, 2t u ).
Let z be the last vertex in P such that z ∈ B G (u, 2t u ), and consider the directed subpath P [z,y] of P going from z to y. Now, let P ′ = P [z,y] \ {z}. The first d ′ = min{(2λ − 2)t * u , |P ′ |} vertices in the directed subpath P ′ must belong to V (H) ⊆ V (Z k ). In addition, observe that all nodes in P ′ must be in V (W k+1 ). It follows that the first d ′ nodes of P ′ are in S. Since y / ∈ S, we get that |P ′ | ≥ d ′ = (2λ − 2)t * u , and thus |P | > λt * u . Consequently, dist Z k+1 (U 1 , U 2 ) ≥ λt * u , as desired. This completes the proof that (S, U 1 , U 2 ) is a splitter of Z k+1 . Now, by the induction hypothesis, we have (G 1 , x[G 1 ]) ∈ L, because G 1 = G[U 1 ∪ S] = Z k . In addition, we have (G 2 , x[G 2 ]) ∈ L, because G 2 = G[U 2 ∪ S] = W k+1 is a subgraph of G[U], (G[U], x[U]) ∈ L, and L is hereditary. Hence, by Lemma 3.2, (Z k+1 , x[Z k+1 ]) ∈ L, completing the induction step.

4 Nondeterminism and complete problems

4.1 Separation results

Our first separation result indicates that non-determinism helps for local decision. Indeed, we show that there exists a language, specifically, tree = {(G, ǫ) | G is a tree}, which belongs to NLD(1) but not to LD(t), for any t = o(n). The proof follows by rather standard arguments.

Theorem 4.1 There exists a language L such that L ∈ NLD(1) and L / ∈ LD(t), for any t = o(n).

Proof. To establish the theorem it is sufficient to show that there exists a language L such that L / ∈ LD(o(n)) and L ∈ NLD(1). Let tree = {(G, ǫ) | G is a tree}. We have tree / ∈ LD(o(n)). To see why, consider a cycle C with nodes labeled consecutively from 1 to 4n, and the path P 1 (resp., P 2 ) with nodes labeled consecutively 1, . . . , 4n (resp., 2n + 1, . . . , 4n, 1, . . . , 2n), from one extremity to the other. For any algorithm A deciding tree, all nodes n + 1, . . . , 3n output "yes" in configuration (P 1 , ǫ) for any identity assignment for the nodes in P 1 , while all nodes 3n + 1, . . . , 4n, 1, . . . , n output "yes" in configuration (P 2 , ǫ) for any identity assignment for the nodes in P 2 . Thus, if A is local, then all nodes output "yes" in configuration (C, ǫ), a contradiction. In contrast, we next show that tree ∈ NLD.
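The indistinguishability argument above can be made concrete: for t < n, the t-round view of each "middle" node is identical in P 1 , P 2 and C, so a local algorithm must produce the same output in all three configurations. A minimal Python sketch (the list-based encoding of views and identities is ours, not the paper's):

```python
def ball_path(ids, i, t):
    # radius-t view of position i in a path: the visible id sequence
    # and the node's own offset within it
    lo, hi = max(0, i - t), min(len(ids) - 1, i + t)
    return tuple(ids[lo:hi + 1]), i - lo

def ball_cycle(ids, i, t):
    # radius-t view of position i in a cycle (indices wrap around)
    m = len(ids)
    return tuple(ids[(i + d) % m] for d in range(-t, t + 1)), t

n, t = 5, 4                                      # any t < n
N = 4 * n
cycle_ids = list(range(1, N + 1))                # cycle C, labeled 1..4n
p1_ids = cycle_ids[:]                            # path P1: 1..4n
p2_ids = cycle_ids[2 * n:] + cycle_ids[:2 * n]   # path P2: 2n+1..4n,1..2n

# middle nodes of P1 (positions n..3n-1, 0-indexed) see their view in C
assert all(ball_path(p1_ids, i, t) == ball_cycle(cycle_ids, i, t)
           for i in range(n, 3 * n))
# middle nodes of P2 see the view of the corresponding nodes of C
assert all(ball_path(p2_ids, i, t) == ball_cycle(cycle_ids, (i + 2 * n) % N, t)
           for i in range(n, 3 * n))
```

Together, the middle nodes of P 1 and P 2 cover all positions of C, which is exactly why an o(n)-round algorithm that accepts both paths must also accept the cycle.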
The (nondeterministic) local algorithm A verifying tree operates as follows. Given a configuration (G, ǫ), the certificate given at node v is y(v) = dist G (v, r) where r ∈ V (G) is an arbitrary fixed node. The verification procedure is then as follows. At each node v, A inspects every neighbor (with its certificates), and verifies the following: • y(v) is a non-negative integer, • if y(v) = 0, then y(w) = 1 for every neighbor w of v, and • if y(v) > 0, then there exists a neighbor w of v such that y(w) = y(v) − 1, and, for all other neighbors w ′ of v, we have y(w ′ ) = y(v) + 1. If G is a tree, then applying Algorithm A on G with the certificate yields the answer "yes" at all nodes regardless of the given id-assignment. On the other hand, if G is not a tree, then we claim that for every certificate, and every id-assignment Id, Algorithm A outputs "no" at some node. Indeed, consider some certificate y given to the nodes of G, and let C be a simple cycle in G. Assume, for the sake of contradiction, that all nodes in C output "yes". In this case, each node in C has at least one neighbor in C with a larger certificate. This creates an infinite sequence of strictly increasing certificates, in contradiction with the finiteness of C. Theorem 4.2 There exists a language L such that L / ∈ NLD(t), for any t = o(n). Proof. Let InpEqSize = {(G, x) | ∀v ∈ V (G), x(v) = |V (G)|}. We show that InpEqSize / ∈ NLD(t), for any t = o(n). Assume, for the sake of contradiction, that there exists a local nondeterministic algorithm A deciding InpEqSize. Let t < n/4 be the running time of A. Consider the cycle C with 2t + 1 nodes u 1 , u 2 , · · · , u 2t+1 , enumerated clockwise. Assume that the input at each node u i of C satisfies x(u i ) = 2t + 1. Then, there exists a certificate y such that, for any identity assignment Id, algorithm A outputs "yes" at each node of C. 
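The verification procedure of Algorithm A for tree is straightforward to implement. The sketch below (Python; the adjacency-dict encoding is ours) checks the three rules at every node, accepting a tree with a BFS-distance certificate and rejecting a sample certificate on a cycle:

```python
from collections import deque

def bfs_dist(adj, root):
    # distances from an arbitrary fixed node r: the certificate y
    d = {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in d:
                d[w] = d[u] + 1
                q.append(w)
    return d

def accepts(adj, y, v):
    # the three verification rules checked locally at node v
    if not (isinstance(y.get(v), int) and y[v] >= 0):
        return False
    if y[v] == 0:
        return all(y[w] == 1 for w in adj[v])
    parents = [w for w in adj[v] if y[w] == y[v] - 1]
    others_ok = all(y[w] == y[v] + 1 for w in adj[v] if y[w] != y[v] - 1)
    return len(parents) == 1 and others_ok

tree = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
y = bfs_dist(tree, 0)
assert all(accepts(tree, y, v) for v in tree)   # every node says "yes"

tri = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
assert not all(accepts(tri, {0: 0, 1: 1, 2: 1}, v) for v in tri)
```

The cycle test exercises only one certificate; the argument in the text shows that no certificate at all can make every node of a cycle accept.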
Now, consider the configuration (C ′ , x ′ ) where the cycle C ′ has 4t + 2 nodes, and for each node v i of C ′ , x ′ (v i ) = 2t + 1. We have (C ′ , x ′ ) / ∈ InpEqSize. To fool Algorithm A, we enumerate the nodes in C ′ clockwise, i.e., C ′ = (v 1 , v 2 , · · · , v 4t+2 ). We then define the certificate y ′ as follows: y ′ (v i ) = y ′ (v i+2t+1 ) = y(u i ) for i = 1, 2, · · · , 2t + 1 . Fix an id-assignment Id ′ for the nodes in V (C ′ ), and fix i ∈ {1, 2, · · · , 2t + 1}. There exists an id-assignment Id 1 for the nodes in V (C), such that the output of A at node v i in (C ′ , x ′ ) with certificate y ′ and id-assignment Id ′ is identical to the output of A at node u i in (C, x) with certificate y and id-assignment Id 1 . Similarly, there exists an id-assignment Id 2 for the nodes in V (C) such that the output of A at node v i+2t+1 in (C ′ , x ′ ) with certificate y ′ and id-assignment Id ′ is identical to the output of A at node u i in (C, x) with certificate y and id-assignment Id 2 . Thus, Algorithm A at both v i and v i+2t+1 outputs "yes" in (C ′ , x ′ ) with certificate y ′ and id-assignment Id ′ . Hence, since i was arbitrary, all nodes output "yes" for this configuration, certificate and id-assignment, contradicting the fact that (C ′ , x ′ ) / ∈ InpEqSize. For p, q ∈ (0, 1] and a function t, let us define BPNLD(t, p, q) as the class of all distributed languages that have a local randomized non-deterministic distributed (p, q)-decider running in time t. Theorem 4.3 Let p, q ∈ (0, 1] such that p 2 + q ≤ 1. For every language L, we have L ∈ BPNLD(1, p, q). Proof. Let L be a language. The certificate of a configuration (G, x) ∈ L is a map of G, with nodes labeled with distinct integers in {1, ..., n}, where n = |V (G)|, together with the inputs of all nodes in G. In addition, every node v receives the label λ(v) of the corresponding vertex in the map.
Precisely, the certificate at node v is y(v) = (G ′ , x ′ , i) where G ′ is an isomorphic copy of G with nodes labeled from 1 to n, x ′ is an n-dimensional vector such that x ′ [λ(u)] = x(u) for every node u, and i = λ(v). The verification algorithm involves checking that the configuration (G ′ , x ′ ) is identical to (G, x). This is sufficient because distributed languages are sequentially decidable, hence every node can individually decide whether (G ′ , x ′ ) belongs to L or not, once it has secured the fact that (G ′ , x ′ ) is the actual configuration. It remains to show that there exists a local randomized non-deterministic distributed (p, q)-decider for verifying that the configuration (G ′ , x ′ ) is identical to (G, x), and running in time 1. The non-deterministic (p, q)-decider operates as follows. First, every node v checks that it has received the input as specified by x ′ , i.e., v checks whether x ′ [λ(v)] = x(v), and outputs "no" if this does not hold. Second, each node v communicates with its neighbors to check that (1) they all got the same map G ′ and the same input vector x ′ , and (2) they are labeled the way they should be according to the map G ′ . If some inconsistency is detected by a node, then this node outputs "no". Finally, consider a node v that passed the aforementioned two phases without outputting "no". If λ(v) ≠ 1 then v outputs "yes" (with probability 1), and if λ(v) = 1 then v outputs "yes" with probability p. We claim that the above implements a non-deterministic distributed (p, q)-decider for verifying that the configuration (G ′ , x ′ ) is identical to (G, x). Indeed, if all nodes pass the two phases without outputting "no", then they all agree on the map G ′ and on the input vector x ′ , and they know that their respective neighborhoods fit with what is indicated on the map. Hence, (G ′ , x ′ ) is a lift of (G, x).
It follows that (G ′ , x ′ ) = (G, x) if and only if there exists at most one node v ∈ G whose label satisfies λ(v) = 1. Consequently, if (G ′ , x ′ ) = (G, x) then all nodes say "yes" with probability at least p. On the other hand, if (G ′ , x ′ ) ≠ (G, x) then there are at least two nodes in G whose label is "1". These two nodes say "yes" with probability p 2 , hence, the probability that at least one of them says "no" is at least 1 − p 2 ≥ q. This completes the proof of Theorem 4.3. The above theorem guarantees that the following definition is well defined. Let BPNLD = BPNLD(1, p, q), for some p, q ∈ (0, 1] such that p 2 + q ≤ 1.

4.2 Completeness results

Let us first define a notion of reduction that fits the class LD. For two languages L 1 , L 2 , we say that L 1 is locally reducible to L 2 , denoted by L 1 ≼ L 2 , if there exists a constant time local algorithm A such that, for every configuration (G, x) and every id-assignment Id, A produces out(v) ∈ {0, 1} * as output at every node v ∈ V (G) so that (G, x) ∈ L 1 ⇐⇒ (G, out) ∈ L 2 . By definition, LD(O(t)) is closed under local reductions, that is, for every two languages L 1 , L 2 satisfying L 1 ≼ L 2 , if L 2 ∈ LD(O(t)) then L 1 ∈ LD(O(t)). We now show that there exists a natural problem, called cover, which is in some sense the "most difficult" decision problem; that is, we show that cover is BPNLD-complete. Language cover is defined as follows. Every node v is given as input an element E(v), and a finite collection of sets S(v). The union of these inputs is in the language if there exists a node v such that one set in S(v) equals the union of all the elements given to the nodes. Formally, cover = {(G, (E, S)) | ∃v ∈ V (G), ∃S ∈ S(v) s.t. S = {E(u) | u ∈ V (G)}}. Proof. The fact that cover ∈ BPNLD follows from Theorem 4.3. To prove that cover is BPNLD-hard, we consider some L ∈ BPNLD and show that L ≼ cover.
For this purpose, we describe a local distributed algorithm A transforming any configuration for L to a configuration for cover, preserving the memberships to these languages. Let (G, x) be a configuration for L and let Id be an identity assignment. Algorithm A operating at a node v outputs a pair (E(v), S(v)), where E(v) is the "local view" at v in (G, x), i.e., the star subgraph of G consisting of v and its neighbors, together with the inputs of these nodes and their identities, and S(v) is the collection of sets S defined as follows. For a binary string s, let |s| denote the length of s, i.e., the number of bits in s. For every vertex v, let ψ(v) = 2^(|Id(v)|+|x(v)|) . (Recall that cover = {(G, (E, S)) | ∃v ∈ V (G), ∃S ∈ S(v) s.t. S = {E(u) | u ∈ V (G)}}.) Node v first generates all configurations (G ′ , x ′ ) where G ′ is a graph with k ≤ ψ(v) vertices, and x ′ is a collection of k input strings of length at most ψ(v), such that (G ′ , x ′ ) ∈ L. For each such configuration (G ′ , x ′ ), node v generates all possible Id ′ assignments to V (G ′ ) such that for every node u ∈ V (G ′ ), |Id ′ (u)| ≤ ψ(v). Now, for each such pair of a configuration (G ′ , x ′ ) and an Id ′ assignment, algorithm A associates a set S ∈ S(v) consisting of the k = |V (G ′ )| local views of the nodes of G ′ in (G ′ , x ′ ). We show that (G, x) ∈ L ⇐⇒ A(G, x) ∈ cover. If (G, x) ∈ L, then by the construction of Algorithm A, there exists a set S ∈ S(v) such that S covers the collection of local views for (G, x), i.e., S = {E(u) | u ∈ V (G)}. Indeed, the node v maximizing ψ(v) satisfies ψ(v) ≥ max{Id(u) | u ∈ V (G)} ≥ n and ψ(v) ≥ max{|x(u)| | u ∈ V (G)}. Therefore, that specific node has constructed a set S which contains all local views of the given configuration (G, x) and Id assignment. Thus A(G, x) ∈ cover. Now consider the case that A(G, x) ∈ cover. In this case, there exists a node v and a set S ∈ S(v) such that S = {E(u) | u ∈ V (G)}.
Such a set S is the collection of local views of nodes of some configuration (G ′ , x ′ ) ∈ L and some Id ′ assignment. On the other hand, S is also the collection of local views of nodes of the given configuration (G, x) with the given Id assignment. It follows that (G, x) = (G ′ , x ′ ) ∈ L. We now define a natural problem, called containment, which is NLD(O(1))-complete. Somewhat surprisingly, the definition of containment is quite similar to the definition of cover. Specifically, as in cover, every node v is given as input an element E(v), and a finite collection of sets S(v). However, in contrast to cover, the union of these inputs is in the containment language if there exists a node v such that one set in S(v) contains the union of all the elements given to the nodes. Formally, we define containment = {(G, (E, S)) | ∃v ∈ V (G), ∃S ∈ S(v) s.t. S ⊇ {E(u) | u ∈ V (G)}}. Proof. We first prove that containment is NLD(O(1))-hard. Consider some L ∈ NLD(O(1)); we show that L ≼ containment. For this purpose, we describe a local distributed algorithm D transforming any configuration for L to a configuration for containment, preserving the memberships to these languages. Let t = t L ≥ 0 be some (constant) integer such that there exists a local nondeterministic algorithm A L deciding L in time at most t. Let (G, x) be a configuration for L and let Id be an identity assignment. Algorithm D operating at a node v outputs a pair (E(v), S(v)), where E(v) is the "t-local view" at v in (G, x), i.e., the ball of radius t around v, B G (v, t), together with the inputs of these nodes and their identities, and S(v) is the collection of sets S defined as follows. For a binary string s, let |s| denote the length of s, i.e., the number of bits in s. For every vertex v, let ψ(v) = 2^(|Id(v)|+|x(v)|) . Node v first generates all configurations (G ′ , x ′ ) where G ′ is a graph with m ≤ ψ(v) vertices, and x ′ is a collection of m input strings of length at most ψ(v), such that (G ′ , x ′ ) ∈ L.
For each such configuration (G ′ , x ′ ), node v generates all possible Id ′ assignments to V (G ′ ) such that for every node u ∈ V (G ′ ), |Id ′ (u)| ≤ ψ(v). Now, for each such pair of a configuration (G ′ , x ′ ) and an Id ′ assignment, algorithm D associates a set S ∈ S(v) consisting of the m = |V (G ′ )| t-local views of the nodes of G ′ in (G ′ , x ′ ). We show that (G, x) ∈ L ⇐⇒ D(G, x) ∈ containment. If (G, x) ∈ L, then by the construction of Algorithm D, there exists a set S ∈ S(v) such that S covers the collection of t-local views for (G, x), i.e., S = {E(u) | u ∈ V (G)}. Indeed, the node v maximizing ψ(v) satisfies ψ(v) ≥ max{Id(u) | u ∈ V (G)} ≥ n and ψ(v) ≥ max{|x(u)| | u ∈ V (G)}. Therefore, that specific node has constructed a set S that precisely corresponds to (G, x) and its given Id assignment; hence, S contains all corresponding t-local views. Thus, D(G, x) ∈ containment. Now consider the case that D(G, x) ∈ containment. In this case, there exists a node v and a set S ∈ S(v) such that S ⊇ {E(u) | u ∈ V (G)}. Such a set S is the collection of t-local views of nodes of some configuration (G ′ , x ′ ) ∈ L and some Id ′ assignment. Since (G ′ , x ′ ) ∈ L, there exists a certificate y ′ for the nodes of G ′ such that when algorithm A L operates on (G ′ , x ′ , y ′ ), all nodes say "yes". Now, since S contains the t-local views of the nodes of (G, x), with the corresponding identities, there exists a mapping φ : (G, x, Id) → (G ′ , x ′ , Id ′ ) that preserves inputs and identities. Moreover, when restricted to a ball of radius t around a vertex v ∈ G, φ is actually an isomorphism between this ball and its image. We assign a certificate y to the nodes of G: for each v ∈ V (G), y(v) = y ′ (φ(v)). Now, Algorithm A L , when operating on (G, x, y), outputs "yes" at each node of G. By the correctness of A L , we obtain (G, x) ∈ L. We now show that containment ∈ NLD(O(1)).
For this purpose, we design a nondeterministic local algorithm A that decides whether a configuration (G, x) is in containment. Such an algorithm A is designed to operate on (G, x, y), where y is a certificate. The configuration (G, x) satisfies x(v) = (E(v), S(v)). Algorithm A aims at verifying whether there exists a node v * with a set S * ∈ S(v * ) such that S * ⊇ {E(v) | v ∈ V (G)}. Given a correct instance, i.e., a configuration (G, x), we define the certificate y as follows. For each node v, the certificate y(v) at v consists of several fields, specifically, y(v) = (y c (v), y s (v), y id (v), y l (v)). The candidate configuration field y c (v) is a triplet y c (v) = (G ′ , x ′ , Id ′ ), where (G ′ , x ′ ) is an isomorphic copy of (G, x) and Id ′ is an id-assignment for the nodes of G ′ .

(One can modify the success probabilities p and q by performing k runs and requiring each node to individually output "no" if it decided "no" on at least one of the runs. In this case, the "no" success probability increases from q to at least 1 − (1 − q)^k , and the "yes" success probability decreases from p to p^k .) Another interesting question is whether the phenomenon we observed regarding randomization occurs also in the non-deterministic setting, that is, whether BPNLD(t, p, q) collapses into NLD(O(t)) for p 2 + q > 1. Our model of computation, namely, the LOCAL model, focuses on difficulties arising from purely locality issues, and abstracts away other complexity measures. Naturally, it would be very interesting to come up with a rigorous complexity framework taking into account other complexity measures as well. For example, it would be interesting to investigate the connections between classical computational complexity theory and local complexity theory.
The bound on the (centralized) running time in each round (given by the function f , see Section 2) may serve as a bridge between the two theories, by putting constraints on this bound (e.g., requiring f to be polynomial, exponential, etc.). Also, one could restrict the memory used by a node, in addition to, or instead of, bounding the sequential time. Finally, it would be interesting to come up with a complexity framework that also takes congestion into account.
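The k-run amplification mentioned above is easy to check numerically; the following sketch (Python, with illustrative values of p, q and k chosen by us) verifies the two bounds:

```python
# running a (p, q)-decider k times and answering "no" if any run said "no":
#   "no"-instance success:  q  ->  1 - (1 - q)**k   (increases with k)
#   "yes"-instance success: p  ->  p**k             (decreases with k)
p, q, k = 0.9, 0.2, 5
no_success = 1 - (1 - q) ** k    # about 0.672
yes_success = p ** k             # about 0.590
assert no_success > q            # the "no" side is amplified
assert yes_success < p           # at the price of the "yes" side
```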
10,308
1011.2152
2953378433
A central theme in distributed network algorithms concerns understanding and coping with the issue of locality. Inspired by sequential complexity theory, we focus on a complexity theory for distributed decision problems. In the context of locality, solving a decision problem requires the processors to independently inspect their local neighborhoods and then collectively decide whether a given global input instance belongs to some specified language. This paper introduces several classes of distributed decision problems, proves separation among them and presents some complete problems. More specifically, we consider the standard LOCAL model of computation and define LD (for local decision) as the class of decision problems that can be solved in constant number of communication rounds. We first study the intriguing question of whether randomization helps in local distributed computing, and to what extent. Specifically, we define the corresponding randomized class BPLD, and ask whether LD=BPLD. We provide a partial answer to this question by showing that in many cases, randomization does not help for deciding hereditary languages. In addition, we define the notion of local many-one reductions, and introduce the (nondeterministic) class NLD of decision problems for which there exists a certificate that can be verified in constant number of communication rounds. We prove that there exists an NLD-complete problem. We also show that there exist problems not in NLD. On the other hand, we prove that the class NLD#n, which is NLD assuming that each processor can access an oracle that provides the number of nodes in the network, contains all (decidable) languages. For this class we provide a natural complete problem as well.
The use of oracles that provide information to nodes was studied intensively in the context of distributed construction tasks. For instance, this framework, called local computation with advice, was studied in @cite_37 for MST construction and in @cite_37 for 3-coloring a cycle.
{ "abstract": [ "We study the problem of the amount of information (advice) about a graph that must be given to its nodes in order to achieve fast distributed computations. The required size of the advice enables to measure the information sensitivity of a network problem. A problem is information sensitive if little advice is enough to solve the problem rapidly (i.e., much faster than in the absence of any advice), whereas it is information insensitive if it requires giving a lot of information to the nodes in order to ensure fast computation of the solution. In this paper, we study the information sensitivity of distributed graph coloring." ], "cite_N": [ "@cite_37" ], "mid": [ "1502033971" ] }
Local Distributed Decision *
Distributed computing concerns a collection of processors which collaborate in order to achieve some global task. With time, two main disciplines have evolved in the field. One discipline deals with timing issues, namely, uncertainties due to asynchrony (the fact that processors run at their own speed, and possibly crash), and the other concerns topology issues, namely, uncertainties due to locality constraints (the lack of knowledge about far away processors). Studies carried out by the distributed computing community within these two disciplines were to a large extent problem-driven. Indeed, several major problems considered in the literature concern coping with one of the two uncertainties. For instance, in the asynchrony-discipline, Fischer, Lynch and Paterson [14] proved that consensus cannot be achieved in the asynchronous model, even in the presence of a single fault, and in the locality-discipline, Linial [28] proved that (∆ + 1)-coloring cannot be achieved locally (i.e., in a constant number of communication rounds), even in the ring network. One of the significant achievements of the asynchrony-discipline was its success in establishing unifying theories in the flavor of computational complexity theory. Some central examples of such theories are failure detectors [6,7] and the wait-free hierarchy (including Herlihy's hierarchy) [18]. In contrast, despite considerable progress, the locality-discipline still suffers from the absence of a solid basis in the form of a fundamental computational complexity theory. Obviously, defining some common cost measures (e.g., time, message, memory, etc.) enables us to compare problems in terms of their relative cost. Still, from a computational complexity point of view, it is not clear how to relate the difficulty of problems in the locality-discipline. Specifically, if two problems have different kinds of outputs, it is not clear how to reduce one to the other, even if they cost the same. 
Inspired by sequential complexity theory, we focus on decision problems, in which one is aiming at deciding whether a given global input instance belongs to some specified language. In the context of distributed computing, each processor must produce a boolean output, and the decision is defined by the conjunction of the processors' outputs, i.e., if the instance belongs to the language, then all processors must output "yes", and otherwise, at least one processor must output "no". Observe that decision problems provide a natural framework for tackling fault-tolerance: the processors have to collectively check whether the network is fault-free, and a node detecting a fault raises an alarm. In fact, many natural problems can be phrased as decision problems, like "is there a unique leader in the network?" or "is the network planar?". Moreover, decision problems occur naturally when one is aiming at checking the validity of the output of a computational task, such as "is the produced coloring legal?", or "is the constructed subgraph an MST?". Construction tasks such as exact or approximated solutions to problems like coloring, MST, spanner, MIS, maximum matching, etc., received enormous attention in the literature (see, e.g., [5,25,26,28,30,31,32,38]), yet the corresponding decision problems have hardly been considered. The purpose of this paper is to investigate the nature of local decision problems. Decision problems seem to provide a promising approach to building up a distributed computational theory for the locality-discipline. Indeed, as we will show, one can define local reductions in the framework of decision problems, thus enabling the introduction of complexity classes and notions of completeness. We consider the LOCAL model [36], which is a standard distributed computing model capturing the essence of locality. 
In this model, processors are woken up simultaneously, and computation proceeds in fault-free synchronous rounds during which every processor exchanges messages of unlimited size with its neighbors, and performs arbitrary computations on its data. Informally, let us define LD(t) (for local decision) as the class of decision problems that can be solved in t communication rounds in the LOCAL model. (We find special interest in the case where t represents a constant, but in general we view t as a function of the input graph. We note that in the LOCAL model, every decidable decision problem can be solved in n communication rounds, where n denotes the number of nodes in the input graph.) Some decision problems are trivially in LD(O(1)) (e.g., "is the given coloring a (∆ + 1)-coloring?", "do the selected nodes form an MIS?", etc.), while some others can easily be shown to be outside LD(t), for any t = o(n) (e.g., "is the network planar?", "is there a unique leader?", etc.). In contrast to the above examples, there are some languages for which it is not clear whether they belong to LD(t), even for t = O(1). To elaborate on this, consider the particular case where it is required to decide whether the network belongs to some specified family F of graphs. If this question can be decided in a constant number of communication rounds, then this means, informally, that the family F can somehow be characterized by relatively simple conditions. For example, a family F of graphs that can be characterized as consisting of all graphs having no subgraph from C, for some specified finite set C of finite subgraphs, is obviously in LD(O(1)). However, the question of whether a family of graphs can be characterized as above is often non-trivial. For example, characterizing cographs as precisely the graphs with no induced P 4 , attributed to Seinsche [40], is not easy, and requires nontrivial usage of modular decomposition.
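The forbidden-subgraph case mentioned above can be illustrated on the simplest example: triangle-freeness is in LD(O(1)), since after a single round each node knows its neighbors' adjacency lists and can detect any triangle through itself. A Python sketch (the graph encoding is ours):

```python
def node_output(adj, v):
    # after one round, v knows adj[w] for each neighbor w; it outputs "no"
    # (False) iff it detects a triangle through itself
    for w in adj[v]:
        if any(u in adj[v] for u in adj[w] if u != v):
            return False
    return True

def decide(adj):
    # the global answer is the conjunction of all local outputs
    return all(node_output(adj, v) for v in adj)

square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
assert decide(square) and not decide(triangle)
```

The same pattern works for any finite forbidden set C: each node inspects a ball of radius bounded by the diameter of the largest subgraph in C.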
The first question we address is whether and to what extent randomization helps. For p, q ∈ (0, 1], define BPLD(t, p, q) as the class of all distributed languages that can be decided by a randomized distributed algorithm that runs in t number of communication rounds and produces correct answers on legal (respectively, illegal) instances with probability at least p (resp., q). An interesting observation is that for p and q such that p 2 + q ≤ 1, we have LD(t) ⊊ BPLD(t, p, q). In fact, for such p and q, there exists a language L ∈ BPLD(0, p, q) such that L / ∈ LD(t), for any t = o(n). To see why, consider the following Unique-Leader language. The input is a graph where each node has a bit indicating whether it is a leader or not. An input is in the language Unique-Leader if and only if there is at most one leader in the graph. Obviously, this language is not in LD(t), for any t < n. We claim it is in BPLD(0, p, q), for p and q such that p 2 + q ≤ 1. Indeed, for such p and q, we can design the following simple randomized algorithm that runs in 0 time: every node which is not a leader says "yes" with probability 1, and every node which is a leader says "yes" with probability p. Clearly, if the graph has at most one leader then all nodes say "yes" with probability at least p. On the other hand, if there are k ≥ 2 leaders, at least one node says "no" with probability at least 1 − p k ≥ 1 − p 2 ≥ q. It turns out that the aforementioned choice of p and q is not coincidental, and that p 2 + q = 1 is really the correct threshold. Indeed, we show that Unique-Leader / ∈ BPLD(t, p, q), for any t < n, and any p and q such that p 2 + q > 1. In fact, we show a much more general result: we prove that if p 2 + q > 1, then restricted to hereditary languages, BPLD(t, p, q) actually collapses into LD(O(t)), for any t.
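The zero-time randomized decider for Unique-Leader described above can be simulated directly; the Monte Carlo sketch below (Python; the concrete p, trial count and seed are ours) illustrates the acceptance probability p on legal instances and p^k on instances with k leaders:

```python
import random

def run_decider(num_leaders, p, rng):
    # non-leaders say "yes" with probability 1; each leader with probability p
    return all(rng.random() < p for _ in range(num_leaders))

def accept_rate(num_leaders, p, trials=20000, seed=0):
    rng = random.Random(seed)
    return sum(run_decider(num_leaders, p, rng) for _ in range(trials)) / trials

p = 0.6
# legal instance (at most one leader): accepted with probability p
assert abs(accept_rate(1, p) - p) < 0.02
# illegal instance with k = 3 leaders: accepted only with probability p**3,
# i.e., rejected with probability 1 - p**3 >= 1 - p**2 >= q when p**2 + q <= 1
assert abs(accept_rate(3, p) - p ** 3) < 0.02
```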
In the second part of the paper, we investigate the impact of non-determinism on local decision, and establish some structural results inspired by classical computational complexity theory. Specifically, we show that non-determinism does help, but that this help is limited, as there exist languages that cannot be decided non-deterministically. Perhaps surprisingly, it turns out that it is the combination of randomization with non-determinism that enables to decide all languages in constant time. Finally, we introduce the notion of local reduction, and establish some completeness results.

1.2 Our contributions

1.2.1 Impact of randomization

We study the impact of randomization on local decision. We prove that if p 2 + q > 1, then restricted to hereditary languages, BPLD(t, p, q) = LD(O(t)), for any function t. This, together with the observation that LD(t) ⊊ BPLD(t, p, q) for any t = o(n), may indicate that p 2 + q = 1 serves as a sharp threshold for distinguishing the deterministic case from the randomized one.

1.2.2 Impact of non-determinism

We first show that non-determinism helps local decision, i.e., we show that the class NLD(t) (cf. Section 2.3) strictly contains LD(t). More precisely, we show that there exists a language in NLD(O(1)) which is not in LD(t) for every t = o(n), where n is the size of the input graph. Nevertheless, NLD(t) does not capture all (decidable) languages, for t = o(n). Indeed, we show that there exists a language not in NLD(t) for every t = o(n). Specifically, this language is #n = {(G, n) | |V (G)| = n}. Perhaps surprisingly, it turns out that it is the combination of randomization with nondeterminism that enables to decide all languages in constant time. Let BPNLD(O(1)) = BPNLD(O(1), p, q), for some constants p and q such that p 2 + q ≤ 1. We prove that BPNLD(O(1)) contains all languages. To sum up, LD(o(n)) ⊊ NLD(O(1)) ⊆ NLD(o(n)) ⊊ BPNLD(O(1)) = All.
Finally, we introduce the notion of many-one local reduction, and establish some completeness results. We show that there exists a problem, called cover, which is, in a sense, the most difficult decision problem. That is, we show that cover is BPNLD(O(1))-complete. (Interestingly, a small relaxation of cover, called containment, turns out to be NLD(O(1))-complete.)

2 Decision problems and complexity classes

2.1 Model of computation

Let us first recall some basic notions in distributed computing. We consider the LOCAL model [36], which is a standard model capturing the essence of locality. In this model, processors are assumed to be nodes of a network G, provided with arbitrary distinct identities, and computation proceeds in fault-free synchronous rounds. At each round, every processor v ∈ V (G) exchanges messages of unrestricted size with its neighbors in G, and performs computations on its data. We assume that the number of steps (sequential time) used for the local computation made by the node v in some round r is bounded by some function f A (H(r, v)), where H(r, v) denotes the size of the "history" seen by node v up to the beginning of round r, that is, the total number of bits encoded in the input and the identity of the node, as well as in the incoming messages from previous rounds. Here, we do not impose any restriction on the growth rate of f A . We would like to point out, however, that imposing such restrictions, or alternatively, imposing restrictions on the memory used by a node for local computation, may lead to interesting connections between the theory of locality and classical computational complexity theory. To sum up, during the execution of a distributed algorithm A, all processors are woken up simultaneously, and, initially, a processor is solely aware of its own identity, and possibly of some local input too.
Then, in each round r, every processor v (1) sends messages to its neighbors, (2) receives messages from its neighbors, and (3) performs at most f_A(H(r, v)) computations. After a number of rounds (that may depend on the network G and may vary among the processors, simply because nodes have different identities, potentially different inputs, and are typically located at non-isomorphic positions in the network), every processor v terminates and outputs some value out(v). Consider an algorithm running in a network G with input x and identity assignment Id. The running time of a node v, denoted T_v(G, x, Id), is the number of rounds until v outputs. The running time of the algorithm, denoted T(G, x, Id), is the number of rounds until all processors terminate, i.e., T(G, x, Id) = max{T_v(G, x, Id) | v ∈ V(G)}. Let t be a non-decreasing function of input configurations (G, x, Id). (By non-decreasing, we mean that if G′ is an induced subgraph of G and x′ and Id′ are the restrictions of x and Id, respectively, to the nodes in G′, then t(G′, x′, Id′) ≤ t(G, x, Id).) We say that an algorithm A has running time at most t if T(G, x, Id) ≤ t(G, x, Id) for every (G, x, Id). We shall give special attention to the case where t is a constant function. Note that in general, given (G, x, Id), the nodes may not be aware of t(G, x, Id). On the other hand, note that if t = t(G, x, Id) is known, then w.l.o.g. one can always assume that a local algorithm running in time at most t operates at each node v in two stages: (A) collect all information available in B_G(v, t), the t-neighborhood, or ball of radius t, of v in G, including inputs, identities and adjacencies, and (B) compute the output based on this information.

2.2 Local decision (LD)

We now refine some of the above concepts, in order to formally define our objects of interest.
Obviously, a distributed algorithm that runs on a graph G operates separately on each connected component of G, and nodes of a component G′ of G cannot distinguish the underlying graph G from G′. For this reason, we consider connected graphs only.

Definition 2.1 A configuration is a pair (G, x) where G is a connected graph, and every node v ∈ V(G) is assigned as its local input a binary string x(v) ∈ {0, 1}*.

In some problems, the local input of every node is empty, i.e., x(v) = ǫ for every v ∈ V(G), where ǫ denotes the empty binary string. Since an undecidable collection of configurations remains undecidable in the distributed setting too, we consider only decidable collections of configurations. Formally, we define the following.

Definition 2.2 A distributed language is a decidable collection L of configurations.

In general, there are several possible ways of representing a configuration of a distributed language corresponding to standard distributed computing problems. Some examples considered in this paper are the following.

Unique-Leader = {(G, x) | ‖x‖_1 ≤ 1} consists of all configurations such that there exists at most one node with local input 1, with all the others having local input 0.

Consensus = {(G, (x_1, x_2)) | ∃u ∈ V(G), ∀v ∈ V(G), x_2(v) = x_1(u)} consists of all configurations such that all nodes agree on the value proposed by some node.

Coloring = {(G, x) | ∀v ∈ V(G), ∀w ∈ N(v), x(v) ≠ x(w)}, where N(v) denotes the (open) neighborhood of v, that is, all nodes at distance exactly 1 from v.

MIS = {(G, x) | S = {v ∈ V(G) | x(v) = 1} forms a MIS}.

SpanningTree = {(G, (name, head)) | T = {e_v = (v, v⁺), v ∈ V(G), head(v) = name(v⁺)} is a spanning tree of G} consists of all configurations such that the set T of edges e_v between every node v and its neighbor v⁺ satisfying name(v⁺) = head(v) forms a spanning tree of G. (The language MST, for minimum spanning tree, can be defined similarly.)
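The membership predicates of two of these languages can be sketched centrally (encoding ours: graphs as adjacency dicts, inputs as dicts; this checks membership of a configuration, not the distributed decision procedure itself).

```python
# Membership tests for the distributed languages Coloring and MIS,
# evaluated on a configuration (G, x) given as (adj, x).

def is_coloring(adj, x):
    # (G, x) ∈ Coloring iff every node's color differs from all its neighbors'.
    return all(x[v] != x[w] for v in adj for w in adj[v])

def is_mis(adj, x):
    # S = {v : x(v) = 1} must be an independent set that is also maximal.
    S = {v for v in adj if x[v] == 1}
    independent = all(w not in S for v in S for w in adj[v])
    maximal = all(v in S or any(w in S for w in adj[v]) for v in adj)
    return independent and maximal
```

Both predicates only ever compare a node with its direct neighbors, which is why Coloring and MIS are decidable in a single round (Coloring ∈ LD(1), MIS ∈ LD(1), as stated below).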
An identity assignment Id for a graph G is an assignment of distinct integers to the nodes of G. A node v ∈ V(G) executing a distributed algorithm in a configuration (G, x) initially knows only its own identity Id(v) and its own input x(v), and is unaware of the graph G. After t rounds, v acquires knowledge only of its t-neighborhood B_G(v, t). In each round r of the algorithm A, a node may communicate with its neighbors by sending and receiving messages, and may perform at most f_A(H(r, v)) computations. Eventually, each node v ∈ V(G) must output a local output out(v) ∈ {0, 1}*. Let L be a distributed language. We say that a distributed algorithm A decides L if and only if for every configuration (G, x), and for every identity assignment Id for the nodes of G, every node of G eventually terminates and outputs "yes" or "no", satisfying the following decision rules:

• If (G, x) ∈ L, then out(v) = "yes" for every node v ∈ V(G);

• If (G, x) ∉ L, then there exists at least one node v ∈ V(G) such that out(v) = "no".

We are now ready to define one of our main subjects of interest, the class LD(t), for local decision.

Definition 2.3 Let t be a non-decreasing function of triplets (G, x, Id). Define LD(t) as the class of all distributed languages that can be decided by a local distributed algorithm that runs in at most t rounds.

For instance, Coloring ∈ LD(1) and MIS ∈ LD(1). On the other hand, it is not hard to see that languages such as Unique-Leader, Consensus, and SpanningTree are not in LD(t), for any t = o(n). In what follows, we define LD(O(t)) = ∪_{c>1} LD(c · t).

2.3 Non-deterministic local decision (NLD)

A distributed verification algorithm is a distributed algorithm A that gets as input, in addition to a configuration (G, x), a global certificate vector y, i.e., every node v of a graph G gets as input a binary string x(v) ∈ {0, 1}*, and a certificate y(v) ∈ {0, 1}*.
A verification algorithm A verifies L if and only if for every configuration (G, x), the following hold:

• If (G, x) ∈ L, then there exists a certificate y such that for every id-assignment Id, algorithm A applied on (G, x) with certificate y and id-assignment Id outputs out(v) = "yes" for all v ∈ V(G);

• If (G, x) ∉ L, then for every certificate y and for every id-assignment Id, algorithm A applied on (G, x) with certificate y and id-assignment Id outputs out(v) = "no" for at least one node v ∈ V(G).

One motivation for studying the non-deterministic verification framework comes from settings in which one must perform local verifications repeatedly. In such cases, one can afford to have a relatively "wasteful" preliminary step in which a certificate is computed for each node. Using these certificates, local verifications can then be performed very fast. See [21, 22] for more details regarding such applications. Indeed, the definition of a verification algorithm finds similarities with the notion of proof-labeling schemes discussed in [21, 22]. Informally, in a proof-labeling scheme, the construction of a "good" certificate y for a configuration (G, x) ∈ L may depend also on the given id-assignment. Since the question of whether a configuration (G, x) belongs to a language L is independent of the particular id-assignment, we prefer to let the "good" certificate y depend only on the configuration. In other words, as defined above, a verification algorithm operating on a configuration (G, x) ∈ L and a "good" certificate y must say "yes" at every node regardless of the id-assignment. We now define the class NLD(t), for non-deterministic local decision. (Our terminology is by direct analogy to the class NP in sequential computational complexity.)
2.4 Bounded-error probabilistic local decision (BPLD)

A randomized distributed algorithm is a distributed algorithm A that enables every node v, at any round r during the execution, to toss a number of random bits, obtaining a string r(v) ∈ {0, 1}*. Clearly, this number cannot exceed f_A(H(r, v)), the bound on the number of computational steps used by node v at round r. Note, however, that H(r, v) may now also depend on the random bits produced by other nodes in previous rounds. For p, q ∈ (0, 1], we say that a randomized distributed algorithm A is a (p, q)-decider for L, or that it decides L with "yes" success probability p and "no" success probability q, if and only if for every configuration (G, x), and for every identity assignment Id for the nodes of G, every node of G eventually terminates and outputs "yes" or "no", and the following properties are satisfied:

• If (G, x) ∈ L, then Pr[out(v) = "yes" for every node v ∈ V(G)] ≥ p;

• If (G, x) ∉ L, then Pr[out(v) = "no" for at least one node v ∈ V(G)] ≥ q,

where the probabilities in the above definition are taken over all possible coin tosses performed by nodes. We define the class BPLD(t, p, q), for "bounded-error probabilistic local decision", as follows.

Definition 2.5 For p, q ∈ (0, 1] and a function t, BPLD(t, p, q) is the class of all distributed languages that have a local randomized distributed (p, q)-decider running in time t (i.e., that can be decided in time t by a local randomized distributed algorithm with "yes" success probability p and "no" success probability q).

A sharp threshold for randomization

Consider some graph G, and a subset U of the nodes of G, i.e., U ⊆ V(G). We denote by G[U] the subgraph of G induced by U. A language L is hereditary if, for every configuration (G, x) ∈ L and every subset U ⊆ V(G) such that G[U] is connected, we have (G[U], x[U]) ∈ L, where x[U] denotes the input x restricted to the nodes in U. Theorem 3.1 below asserts that, for hereditary languages, randomization does not help if one imposes that p² + q > 1, i.e., the "no" success probability is at least as large as one minus the square of the "yes" success probability.
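Definition 2.5 can be made concrete with a toy example (ours, not the paper's: the choice of language, the encoding, and the function names are illustrative). Consider Unique-Leader and the following decider, simulated centrally: a node with input 1 says "yes" with probability p, every other node always says "yes", and a run accepts iff all nodes say "yes". On legal instances at most one coin is tossed, so Pr[all "yes"] ≥ p; on illegal instances at least two independent coins are tossed, so Pr[some "no"] ≥ 1 − p². This yields a (p, 1 − p²)-decider, sitting exactly at the p² + q ≤ 1 boundary discussed above.

```python
import random

# Illustrative (p, q)-decider for Unique-Leader (at most one node has input 1),
# simulated centrally. x maps node -> input bit.

def decide_unique_leader(x, p, rng=random):
    # Each node with input 1 says "yes" with probability p; others always "yes".
    outputs = {v: (rng.random() < p) if xv == 1 else True for v, xv in x.items()}
    return all(outputs.values())  # global acceptance: every node says "yes"

def estimate_accept_prob(x, p, trials=20000, seed=0):
    # Monte Carlo estimate of Pr[all nodes say "yes"] for input assignment x.
    rng = random.Random(seed)
    return sum(decide_unique_leader(x, p, rng) for _ in range(trials)) / trials
```

With p = 0.8, the acceptance probability is about 0.8 on a single-leader instance and about 0.64 = p² on a two-leader instance, so the "no" success probability on illegal instances is about 1 − p².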
Somewhat more formally, we prove that for hereditary languages, and for every p, q with p² + q > 1, BPLD(t, p, q) = LD(O(t)). This complements the fact that for p² + q ≤ 1, we have LD(t) ⊊ BPLD(t, p, q), for any t = o(n). Recall that [34] investigates the question of whether randomization helps for constructing in constant time a solution for a problem in LCL ⊆ LD(O(1)). We stress that the technique used in [34] for tackling this question relies heavily on the definition of LCL, specifically, on the fact that only graphs of constant degree and of constant input size are considered. Hence it is not clear whether the technique of [34] can be useful for our purposes, as we impose no such assumptions on the degrees or input sizes. Also, although it seems at first glance that the Lovász local lemma might have been helpful here, we could not effectively apply it in our proof. Instead, we use a completely different approach.

Theorem 3.1 Let L be a hereditary language and let t be a function. If L ∈ BPLD(t, p, q) for constants p, q ∈ (0, 1] such that p² + q > 1, then L ∈ LD(O(t)).

Proof. Let us start with some definitions. Let L be a language in BPLD(t, p, q), where p, q ∈ (0, 1] and p² + q > 1, and t is some function. Let A be a randomized algorithm deciding L, with "yes" success probability p and "no" success probability q, whose running time is at most t(G, x, Id), for every configuration (G, x) with identity assignment Id. Fix a configuration (G, x), and an id-assignment Id for the nodes of V(G). The distance dist_G(u, v) between two nodes of G is the minimum number of edges in a path connecting u and v in G. The distance between two subsets U_1, U_2 ⊆ V is defined as dist_G(U_1, U_2) = min{dist_G(u, v) | u ∈ U_1, v ∈ U_2}. For a set U ⊆ V, let E(G, x, Id, U) denote the event that, when running A on (G, x) with id-assignment Id, all nodes in U output "yes". Let v ∈ V(G). The running time of A at v may depend on the coin tosses made by the nodes.
Let t_v = t_v(G, x, Id) denote the maximal running time of v over all possible coin tosses. Note that t_v ≤ t(G, x, Id) (we do not assume that either t or t_v is known to v). The radius of a node v, denoted r_v, is the maximum value t_u such that there exists a node u with v ∈ B_G(u, t_u). (Observe that the radius of a node is at most t.) The radius of a set of nodes S is r_S := max{r_v | v ∈ S}. In what follows, fix a constant δ such that 0 < δ < p² + q − 1, and define λ = 11⌈log p / log(1 − δ)⌉. A splitter of (G, x, Id) is a triplet (S, U_1, U_2) of pairwise disjoint subsets of nodes such that S ∪ U_1 ∪ U_2 = V and dist_G(U_1, U_2) ≥ λ r_S. (Observe that r_S may depend on the identity assignment and the input, and therefore being a splitter is not just a topological property depending only on G.) Given a splitter (S, U_1, U_2) of (G, x, Id), let G_k = G[U_k ∪ S], and let x_k be the input x restricted to nodes in G_k, for k = 1, 2. The following structural claim does not use the fact that L is hereditary.

Lemma 3.2 For every configuration (G, x) with identity assignment Id, and every splitter (S, U_1, U_2) of (G, x, Id), we have (G_1, x_1) ∈ L and (G_2, x_2) ∈ L ⇒ (G, x) ∈ L.

Let (G, x) be a configuration with identity assignment Id. Assume, towards contradiction, that there exists a splitter (S, U_1, U_2) of the triplet (G, x, Id) such that (G_1, x_1) ∈ L and (G_2, x_2) ∈ L, yet (G, x) ∉ L. (The fact that (G_1, x_1) ∈ L and (G_2, x_2) ∈ L implies that both G_1 and G_2 are connected; we note, however, that for the claim to be true it is not required that G[S] be connected.)

Proof. For proving Claim 3.3, we upper bound the size of I by d − 4r_S − 2. This is done by covering the integers in (2r_S, d − 2r_S) by at most 4r_S + 1 sets, such that each one is (4r_S + 1)-independent, that is, any two integers in the same set are at least 4r_S + 1 apart.
Specifically, for s ∈ [1, 4r_S + 1] and m(S) = ⌈(d − 8r_S)/(4r_S + 1)⌉, we define J_s = {s + 2r_S + j(4r_S + 1) | j ∈ [0, m(S)]}. Observe that, as desired, (2r_S, d − 2r_S) ⊂ ∪_{s ∈ [1, 4r_S+1]} J_s, and for each s ∈ [1, 4r_S + 1], J_s is (4r_S + 1)-independent. In what follows, fix s ∈ [1, 4r_S + 1] and let J = J_s. Since (G_1, x_1) ∈ L, we know that Pr[E(G_1, x_1, Id, S_{J∩I})] ≥ p. Observe that for i ∈ (2r_S, d − 2r_S), we have t_v ≤ r_v ≤ r_S, and hence the t_v-neighborhood in G of every node v ∈ S_i is contained in S ⊆ G_1, i.e., B_G(v, t_v) ⊆ G_1. It therefore follows that:

Pr[E(G, x, Id, S_{J∩I})] = Pr[E(G_1, x_1, Id, S_{J∩I})] ≥ p.    (1)

Consider two integers a and b in J. We know that |a − b| ≥ 4r_S + 1. Hence, the distance in G between any two nodes u ∈ S_a and v ∈ S_b is at least 2r_S + 1. Thus, the events E(G, x, Id, S_a) and E(G, x, Id, S_b) are independent. It follows, by the definition of I, that

Pr[E(G, x, Id, S_{J∩I})] < (1 − δ)^{|J∩I|}.    (2)

By (1) and (2), we have p < (1 − δ)^{|J∩I|}, and thus |J ∩ I| < log p / log(1 − δ). Since (2r_S, d − 2r_S) can be covered by the sets J_s, s = 1, . . . , 4r_S + 1, each of which is (4r_S + 1)-independent, we get that

|I| = Σ_{s=1}^{4r_S+1} |J_s ∩ I| < (4r_S + 1)(log p / log(1 − δ)).

Combining this bound with the fact that d = λ r_S, we get that d − 4r_S − 1 > |I|. It follows by the pigeonhole principle that there exists some i ∈ (2r_S, d − 2r_S) such that i ∉ I, as desired. This completes the proof of Claim 3.3.

Fix i ∈ (2r_S, d − 2r_S) such that i ∉ I, and let F = E(G, x, Id, S_i). By definition,

Pr[¬F] ≤ δ < p² + q − 1.    (3)

Let H_1 denote the subgraph of G induced by the nodes in (∪_{j=1}^{i−r_S−1} L_j) ∪ U_1. We similarly define H_2 as the subgraph of G induced by the nodes in (∪_{j>i+r_S} L_j) ∪ U_2. Note that S_i ∪ V(H_1) ∪ V(H_2) = V, and for any two nodes u ∈ V(H_1) and v ∈ V(H_2), we have dist_G(u, v) > 2r_S.
It follows that, for k = 1, 2, the t_u-neighborhood in G of each node u ∈ V(H_k) equals the t_u-neighborhood in G_k of u, that is, B_G(u, t_u) ⊆ G_k. (To see why, consider, for example, the case k = 2. Given u ∈ V(H_2), it is sufficient to show that there is no v ∈ V(H_1) such that v ∈ B_G(u, t_u). Indeed, if such a vertex v existed, then dist_G(u, v) > 2r_S would imply t_u > 2r_S. Since there must exist a vertex w ∈ S_i such that w ∈ B_G(u, t_u), we would get r_w > 2r_S, in contradiction to the fact that w ∈ S.) Thus, for k = 1, 2, since (G_k, x_k) ∈ L, we get

Pr[E(G, x, Id, V(H_k))] = Pr[E(G_k, x_k, Id, V(H_k))] ≥ p.

Let F′ = E(G, x, Id, V(H_1) ∪ V(H_2)). As the events E(G, x, Id, V(H_1)) and E(G, x, Id, V(H_2)) are independent, it follows that Pr[F′] ≥ p², that is,

Pr[¬F′] ≤ 1 − p².    (4)

By Eqs. (3) and (4), and using the union bound, it follows that Pr[¬F ∨ ¬F′] < q. Thus,

Pr[E(G, x, Id, V(G))] = Pr[E(G, x, Id, S_i ∪ V(H_1) ∪ V(H_2))] = Pr[F ∧ F′] > 1 − q.

This is in contradiction to the assumption that (G, x) ∉ L. This concludes the proof of Lemma 3.2.

Our goal now is to show that L ∈ LD(O(t)) by proving the existence of a deterministic local algorithm D that runs in time O(t) and recognizes L. (No attempt is made here to minimize the constant factor hidden in the O(t) notation.) Recall that both t = t(G, x, Id) and t_v = t_v(G, x, Id) may not be known to v. Nevertheless, by inspecting the balls B_G(v, 2^i) for increasing i = 1, 2, · · · , each node v can compute an upper bound on t_v, as given by the following claim: for every constant c, each node v can compute in O(t) time a value t*_v = t*_v(c) such that (1) c · t_v ≤ t*_v = O(t), and (2) for every u ∈ B_G(v, c · t*_v), we have t_u ≤ t*_v. To establish the claim, observe first that in O(t) time each node v can compute a value t′_v satisfying t_v ≤ t′_v ≤ 2t.
Indeed, given the ball B_G(v, 2^i), for some integer i, and using the upper bound on the number of (sequential) local computations, node v can simulate all its possible executions up to round r = 2^i. The desired value t′_v is the smallest r = 2^i for which all executions of v up to round r conclude with an output at v. Once t′_v is computed, node v aims at computing t*_v. For this purpose, it starts again to inspect the balls B_G(v, 2^i) for increasing i = 1, 2, · · · , to obtain t′_u from each u ∈ B_G(v, 2^i). (For this purpose, it may need to wait until u computes t′_u, but this delays the whole computation by at most O(t) time.) Now, node v outputs t*_v = 2^i for the smallest i satisfying (1) c · t′_v ≤ 2^i and (2) for every u ∈ B_G(v, c · 2^i), we have t′_u ≤ 2^i. It is easy to see that for this i we have 2^i = O(t), hence t*_v = O(t). Given a configuration (G, x) and an id-assignment Id, Algorithm D, applied at a node u, first calculates t*_u = t*_u(6λ), and then outputs "yes" if and only if the 2λt*_u-neighborhood of u in (G, x) belongs to L. That is,

out(u) = "yes" ⇐⇒ (B_G(u, 2λt*_u), x[B_G(u, 2λt*_u)]) ∈ L.

Obviously, Algorithm D is a deterministic algorithm that runs in time O(t). We claim that Algorithm D decides L. Indeed, since L is hereditary, if (G, x) ∈ L, then every prefix of (G, x) is also in L, and thus every node u outputs out(u) = "yes". Now consider the case where (G, x) ∉ L, and assume by contradiction that by applying D on (G, x) with id-assignment Id, every node u outputs out(u) = "yes". Let U ⊆ V(G) be maximal by inclusion such that G[U] is connected and (G[U], x[U]) ∈ L. Obviously, U is not empty, as (B_G(u, 2λt*_u), x[B_G(u, 2λt*_u)]) ∈ L for every node u. On the other hand, we have |U| < |V(G)|, because (G, x) ∉ L. Let u ∈ U be a node with maximal t_u such that B_G(u, 2t_u) contains a node outside U. Define G′ as the subgraph of G induced by U ∪ V(B_G(u, 2t_u)).
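The doubling search used above to compute t′_v can be isolated as a short sketch (ours, not the paper's; `halts_by(v, r)` is a stand-in oracle for "all executions of v up to round r conclude with an output"). The first power of two that succeeds satisfies t_v ≤ t′_v < 2·t_v, consistent with the bound t_v ≤ t′_v ≤ 2t.

```python
# Doubling search for an overestimate of the unknown running time t_v:
# probe radii 2^0, 2^1, ... until the halting predicate holds.

def doubling_estimate(halts_by, v):
    r = 1
    while not halts_by(v, r):  # "do all executions of v finish within r rounds?"
        r *= 2
    return r  # smallest power of two >= t_v, hence < 2 * t_v
```

For instance, if the true bound is t_v = 5, the search probes 1, 2, 4, 8 and returns 8, which lies in [5, 10).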
Observe that G′ is connected and that G′ strictly contains U. Towards a contradiction, our goal is to show that (G′, x[G′]) ∈ L. Let H denote the graph which is maximal by inclusion such that H is connected and B_G(u, 2t_u) ⊂ H ⊆ B_G(u, 2t_u) ∪ (U ∩ B_G(u, 2λt*_u)). Let W_1, W_2, · · · , W_ℓ be the ℓ connected components of G[U] \ B_G(u, 2t_u), ordered arbitrarily. Let W_0 be the empty graph, and for k = 0, 1, 2, · · · , ℓ, define the graph Z_k = H ∪ W_0 ∪ W_1 ∪ W_2 ∪ · · · ∪ W_k. Observe that Z_k is connected for each k = 0, 1, 2, · · · , ℓ. We prove by induction on k that (Z_k, x[Z_k]) ∈ L for every k = 0, 1, 2, · · · , ℓ. This will establish the contradiction, since Z_ℓ = G′. For the basis of the induction, the case k = 0, we need to show that (H, x[H]) ∈ L. However, this is immediate from the facts that H is a connected subgraph of B_G(u, 2λt*_u), the configuration (B_G(u, 2λt*_u), x[B_G(u, 2λt*_u)]) ∈ L, and L is hereditary. Assume now that we have (Z_k, x[Z_k]) ∈ L for 0 ≤ k < ℓ, and consider the graph Z_{k+1} = Z_k ∪ W_{k+1}. Define the sets of nodes S = V(Z_k) ∩ V(W_{k+1}), U_1 = V(Z_k) \ S, and U_2 = V(W_{k+1}) \ S. A crucial observation is that (S, U_1, U_2) is a splitter of Z_{k+1}. This follows from the following arguments. Let us first show that r_S ≤ t*_u. By definition, we have t_v ≤ t*_u for every v ∈ B_G(u, 6λt*_u). Hence, in order to bound the radius of S (in Z_{k+1}) by t*_u, it is sufficient to prove that there is no node w ∈ U \ B_G(u, 6λt*_u) such that B_G(w, t_w) ∩ S ≠ ∅. Indeed, if such a node w existed, then t_w > 4λt*_u, and hence B_G(w, 2t_w) would contain a node outside U, in contradiction to the choice of u. It follows that r_S ≤ t*_u. We now claim that dist_{Z_{k+1}}(U_1, U_2) ≥ λt*_u. Consider a simple directed path P in Z_{k+1} going from a node x ∈ U_1 to a node y ∈ U_2. Since x ∉ V(W_{k+1}) and y ∈ V(W_{k+1}), P must pass through a vertex in B_G(u, 2t_u).
Let z be the last vertex in P such that z ∈ B_G(u, 2t_u), and consider the directed subpath P_{[z,y]} of P going from z to y. Now, let P′ = P_{[z,y]} \ {z}. The first d′ = min{(2λ − 2)t*_u, |P′|} vertices in the directed subpath P′ must belong to V(H) ⊆ V(Z_k). In addition, observe that all nodes in P′ must be in V(W_{k+1}). It follows that the first d′ nodes of P′ are in S. Since y ∉ S, we get that |P′| ≥ d′ = (2λ − 2)t*_u, and thus |P| > λt*_u. Consequently, dist_{Z_{k+1}}(U_1, U_2) ≥ λt*_u, as desired. This completes the proof that (S, U_1, U_2) is a splitter of Z_{k+1}. Now, by the induction hypothesis, we have (G_1, x[G_1]) ∈ L, because G_1 = G[U_1 ∪ S] = Z_k. In addition, we have (G_2, x[G_2]) ∈ L, because G_2 = G[U_2 ∪ S] = W_{k+1}. Hence, by Lemma 3.2, (Z_{k+1}, x[Z_{k+1}]) ∈ L, which completes the induction.

Nondeterminism and complete problems

4.1 Separation results

Our first separation result indicates that non-determinism helps for local decision. Indeed, we show that there exists a language, specifically, tree = {(G, ǫ) | G is a tree}, which belongs to NLD(1) but not to LD(t), for any t = o(n). The proof follows by rather standard arguments.

Proof. To establish the theorem, it is sufficient to show that there exists a language L such that L ∉ LD(o(n)) and L ∈ NLD(1). Let tree = {(G, ǫ) | G is a tree}. We first show that tree ∉ LD(o(n)). To see why, consider a cycle C with nodes labeled consecutively from 1 to 4n, and the path P_1 (resp., P_2) with nodes labeled consecutively 1, . . . , 4n (resp., 2n + 1, . . . , 4n, 1, . . . , 2n) from one extremity to the other. For any algorithm A deciding tree, all nodes n + 1, . . . , 3n output "yes" in the configuration (P_1, ǫ) for any identity assignment for the nodes in P_1, while all nodes 3n + 1, . . . , 4n, 1, . . . , n output "yes" in the configuration (P_2, ǫ) for any identity assignment for the nodes in P_2. Thus, if A is local, then all nodes output "yes" in the configuration (C, ǫ), a contradiction. In contrast, we next show that tree ∈ NLD(1).
The (non-deterministic) local algorithm A verifying tree operates as follows. Given a configuration (G, ǫ), the certificate given at node v is y(v) = dist_G(v, r), where r ∈ V(G) is an arbitrary fixed node. The verification procedure is then as follows. At each node v, A inspects every neighbor (with its certificate), and verifies the following:

• y(v) is a non-negative integer;

• if y(v) = 0, then y(w) = 1 for every neighbor w of v;

• if y(v) > 0, then there exists a neighbor w of v such that y(w) = y(v) − 1, and, for all other neighbors w′ of v, we have y(w′) = y(v) + 1.

If G is a tree, then applying Algorithm A on G with this certificate yields the answer "yes" at all nodes, regardless of the given id-assignment. On the other hand, if G is not a tree, then we claim that for every certificate and every id-assignment Id, Algorithm A outputs "no" at some node. Indeed, consider some certificate y given to the nodes of G, and let C be a simple cycle in G. Assume, for the sake of contradiction, that all nodes in C output "yes". In this case, each node in C has at least one neighbor in C with a larger certificate. This creates an infinite sequence of strictly increasing certificates, in contradiction with the finiteness of C.

Theorem 4.2 There exists a language L such that L ∉ NLD(t), for any t = o(n).

Proof. Let InpEqSize = {(G, x) | ∀v ∈ V(G), x(v) = |V(G)|}. We show that InpEqSize ∉ NLD(t), for any t = o(n). Assume, for the sake of contradiction, that there exists a local non-deterministic algorithm A deciding InpEqSize. Let t < n/4 be the running time of A. Consider the cycle C with 2t + 1 nodes u_1, u_2, · · · , u_{2t+1}, enumerated clockwise. Assume that the input at each node u_i of C satisfies x(u_i) = 2t + 1. Then, there exists a certificate y such that, for any identity assignment Id, algorithm A outputs "yes" at each node of C.
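The three local checks of the tree verifier above can be sketched as a centralized simulation (encoding ours: adjacency dicts, certificates as a dict y). Note that the three bullets together force exactly one neighbor to carry y(v) − 1 when y(v) > 0.

```python
# Centralized simulation of the verifier for tree: certificate y(v) = dist_G(v, r)
# for an arbitrary fixed root r.

def accepts(adj, y, v):
    if not (isinstance(y[v], int) and y[v] >= 0):
        return False
    if y[v] == 0:
        return all(y[w] == 1 for w in adj[v])          # the root sees only 1s
    parents = [w for w in adj[v] if y[w] == y[v] - 1]  # neighbors at distance y(v) - 1
    # exactly one neighbor at y(v) - 1, every other neighbor at y(v) + 1
    return len(parents) == 1 and all(y[w] == y[v] + 1 for w in adj[v] if w != parents[0])

def all_accept(adj, y):
    return all(accepts(adj, y, v) for v in adj)
```

On a tree with true root distances every node accepts, while on a triangle the certificate {0, 1, 1} (the actual distances) is rejected at some node, in line with the cycle argument above.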
Now, consider the configuration (C′, x′) where the cycle C′ has 4t + 2 nodes, and for each node v_i of C′, x′(v_i) = 2t + 1. We have (C′, x′) ∉ InpEqSize. To fool Algorithm A, we enumerate the nodes in C′ clockwise, i.e., C′ = (v_1, v_2, · · · , v_{4t+2}). We then define the certificate y′ as follows:

y′(v_i) = y′(v_{i+2t+1}) = y(u_i) for i = 1, 2, · · · , 2t + 1.

Fix an id-assignment Id′ for the nodes in V(C′), and fix i ∈ {1, 2, · · · , 2t + 1}. There exists an id-assignment Id_1 for the nodes in V(C) such that the output of A at node v_i in (C′, x′) with certificate y′ and id-assignment Id′ is identical to the output of A at node u_i in (C, x) with certificate y and id-assignment Id_1. Similarly, there exists an id-assignment Id_2 for the nodes in V(C) such that the output of A at node v_{i+2t+1} in (C′, x′) with certificate y′ and id-assignment Id′ is identical to the output of A at node u_i in (C, x) with certificate y and id-assignment Id_2. Thus, Algorithm A at both v_i and v_{i+2t+1} outputs "yes" in (C′, x′) with certificate y′ and id-assignment Id′. Hence, since i was arbitrary, all nodes output "yes" for this configuration, certificate and id-assignment, contradicting the fact that (C′, x′) ∉ InpEqSize.

For p, q ∈ (0, 1] and a function t, let us define BPNLD(t, p, q) as the class of all distributed languages that have a local randomized non-deterministic distributed (p, q)-decider running in time t.

Theorem 4.3 Let p, q ∈ (0, 1] be such that p² + q ≤ 1. For every language L, we have L ∈ BPNLD(1, p, q).

Proof. Let L be a language. The certificate of a configuration (G, x) ∈ L is a map of G, with nodes labeled with distinct integers in {1, ..., n}, where n = |V(G)|, together with the inputs of all nodes in G. In addition, every node v receives the label λ(v) of the corresponding vertex in the map.
Precisely, the certificate at node v is y(v) = (G′, x′, i), where G′ is an isomorphic copy of G with nodes labeled from 1 to n, x′ is an n-dimensional vector such that x′[λ(u)] = x(u) for every node u, and i = λ(v). The verification algorithm involves checking that the configuration (G′, x′) is identical to (G, x). This is sufficient because distributed languages are sequentially decidable, hence every node can individually decide whether (G′, x′) belongs to L or not, once it has secured the fact that (G′, x′) is the actual configuration. It remains to show that there exists a local randomized non-deterministic distributed (p, q)-decider for verifying that the configuration (G′, x′) is identical to (G, x), running in time 1. The non-deterministic (p, q)-decider operates as follows. First, every node v checks that it has received the input as specified by x′, i.e., v checks whether x′[λ(v)] = x(v), and outputs "no" if this does not hold. Second, each node v communicates with its neighbors to check that (1) they all got the same map G′ and the same input vector x′, and (2) they are labeled the way they should be according to the map G′. If some inconsistency is detected by a node, then this node outputs "no". Finally, consider a node v that passed the aforementioned two phases without outputting "no". If λ(v) ≠ 1, then v outputs "yes" (with probability 1), and if λ(v) = 1, then v outputs "yes" with probability p. We claim that the above implements a non-deterministic distributed (p, q)-decider for verifying that the configuration (G′, x′) is identical to (G, x). Indeed, if all nodes pass the two phases without outputting "no", then they all agree on the map G′ and on the input vector x′, and they know that their respective neighborhoods fit with what is indicated on the map. Hence, (G′, x′) is a lift of (G, x).
It follows that (G′, x′) = (G, x) if and only if there exists at most one node v ∈ V(G) whose label satisfies λ(v) = 1. Consequently, if (G′, x′) = (G, x), then all nodes say "yes" with probability at least p. On the other hand, if (G′, x′) ≠ (G, x), then there are at least two nodes in G whose label is 1. These two nodes both say "yes" with probability p², hence the probability that at least one of them says "no" is at least 1 − p² ≥ q. This completes the proof of Theorem 4.3. The above theorem guarantees that the following definition is well defined. Let BPNLD = BPNLD(1, p, q), for some p, q ∈ (0, 1] such that p² + q ≤ 1. The following follows from the results above: LD(o(n)) ⊊ NLD(O(1)) ⊂ NLD(o(n)) ⊊ BPNLD = All.

Completeness results

Let us first define a notion of reduction that fits the class LD. For two languages L_1, L_2, we say that L_1 is locally reducible to L_2, denoted by L_1 ⪯ L_2, if there exists a constant-time local algorithm A such that, for every configuration (G, x) and every id-assignment Id, A produces out(v) ∈ {0, 1}* as output at every node v ∈ V(G) so that

(G, x) ∈ L_1 ⇐⇒ (G, out) ∈ L_2.

By definition, LD(O(t)) is closed under local reductions, that is, for every two languages L_1, L_2 satisfying L_1 ⪯ L_2, if L_2 ∈ LD(O(t)) then L_1 ∈ LD(O(t)). We now show that there exists a natural problem, called cover, which is in some sense the "most difficult" decision problem; that is, we show that cover is BPNLD-complete. Language cover is defined as follows. Every node v is given as input an element E(v) and a finite collection of sets S(v). The union of these inputs is in the language if there exists a node v such that one set in S(v) equals the union of all the elements given to the nodes. Formally, cover = {(G, (E, S)) | ∃v ∈ V(G), ∃S ∈ S(v) s.t. S = {E(u) | u ∈ V(G)}}.

Proof. The fact that cover ∈ BPNLD follows from Theorem 4.3. To prove that cover is BPNLD-hard, we consider some L ∈ BPNLD and show that L ⪯ cover.
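To make the notion of local reduction concrete, here is a toy instance (ours, not the paper's: the target language AllOnes and the reduction are illustrative). Take L_1 = Coloring and L_2 = AllOnes = {(G, z) | z(v) = 1 for all v}. The reduction is a 1-round local algorithm: each node outputs 1 iff its color differs from the colors of all its neighbors, so (G, x) ∈ Coloring ⇐⇒ (G, out) ∈ AllOnes.

```python
# Toy local reduction Coloring ⪯ AllOnes, simulated centrally.
# adj maps each node to its set of neighbors; x maps each node to its color.

def reduce_coloring_to_allones(adj, x):
    # out(v) depends only on v's 1-neighborhood: a constant-time local map.
    return {v: 1 if all(x[v] != x[w] for w in adj[v]) else 0 for v in adj}

def in_allones(out):
    # Membership test for the (illustrative) target language AllOnes.
    return all(z == 1 for z in out.values())
```

Since AllOnes is trivially locally decidable, closure of LD(O(t)) under ⪯ re-derives the earlier observation that Coloring ∈ LD(1).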
For this purpose, we describe a local distributed algorithm A transforming any configuration for L to a configuration for cover, preserving membership in these languages. Let (G, x) be a configuration for L and let Id be an identity assignment. Algorithm A operating at a node v outputs a pair (E(v), S(v)), where E(v) is the "local view" at v in (G, x), i.e., the star subgraph of G consisting of v and its neighbors, together with the inputs of these nodes and their identities, and S(v) is the collection of sets S defined as follows. For a binary string x, let |x| denote the length of x, i.e., the number of bits in x. For every vertex v, let ψ(v) = 2^{|Id(v)|+|x(v)|}. (Recall that cover = {(G, (E, S)) | ∃v ∈ V(G), ∃S ∈ S(v) s.t. S = {E(u) | u ∈ V(G)}}.) Node v first generates all configurations (G′, x′) where G′ is a graph with k ≤ ψ(v) vertices, and x′ is a collection of k input strings of length at most ψ(v), such that (G′, x′) ∈ L. For each such configuration (G′, x′), node v generates all possible Id′ assignments to V(G′) such that for every node u ∈ V(G′), |Id′(u)| ≤ ψ(v). Now, for each such pair of a configuration (G′, x′) and an Id′ assignment, algorithm A associates a set S ∈ S(v) consisting of the k = |V(G′)| local views of the nodes of G′ in (G′, x′). We show that (G, x) ∈ L ⇐⇒ A(G, x) ∈ cover. If (G, x) ∈ L, then by the construction of Algorithm A, there exists a set S ∈ S(v) such that S covers the collection of local views for (G, x), i.e., S = {E(u) | u ∈ V(G)}. Indeed, the node v maximizing ψ(v) satisfies ψ(v) ≥ max{Id(u) | u ∈ V(G)} ≥ n and ψ(v) ≥ max{|x(u)| | u ∈ V(G)}. Therefore, that specific node has constructed a set S which contains all local views of the given configuration (G, x) and Id assignment. Thus, A(G, x) ∈ cover. Now consider the case that A(G, x) ∈ cover. In this case, there exists a node v and a set S ∈ S(v) such that S = {E(u) | u ∈ V(G)}.
Such a set S is the collection of local views of the nodes of some configuration (G′, x′) ∈ L with some Id′ assignment. On the other hand, S is also the collection of local views of the nodes of the given configuration (G, x) with the given Id assignment. It follows that (G, x) = (G′, x′) ∈ L. We now define a natural problem, called containment, which is NLD(O(1))-complete. Somewhat surprisingly, the definition of containment is quite similar to the definition of cover. Specifically, as in cover, every node v is given as input an element E(v), and a finite collection of sets S(v). However, in contrast to cover, the union of these inputs is in the containment language if there exists a node v such that one set in S(v) contains the union of all the elements given to the nodes. Formally, we define containment = {(G, (E, S)) | ∃v ∈ V(G), ∃S ∈ S(v) s.t. S ⊇ {E(u) | u ∈ V(G)}}. Proof. We first prove that containment is NLD(O(1))-hard. Consider some L ∈ NLD(O(1)); we show that L ≼ containment. For this purpose, we describe a local distributed algorithm D transforming any configuration for L to a configuration for containment preserving the memberships to these languages. Let t = t_L ≥ 0 be some (constant) integer such that there exists a local nondeterministic algorithm A_L deciding L in time at most t. Let (G, x) be a configuration for L and let Id be an identity assignment. Algorithm D operating at a node v outputs a pair (E(v), S(v)), where E(v) is the "t-local view" at v in (G, x), i.e., the ball of radius t around v, B_G(v, t), together with the inputs of these nodes and their identities, and S(v) is the collection of sets S defined as follows. For a binary string x, let |x| denote the length of x, i.e., the number of bits in x. For every vertex v, let ψ(v) = 2^{|Id(v)|+|x(v)|}. Node v first generates all configurations (G′, x′) where G′ is a graph with m ≤ ψ(v) vertices, and x′ is a collection of m input strings of length at most ψ(v), such that (G′, x′) ∈ L. 
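The languages cover and containment differ only in whether the candidate set must equal, or merely contain, the union of all elements held by the nodes. As a centralized sanity check of the two membership predicates (the helper names and the dictionary encoding of a configuration are our own, not the paper's):

```python
# Centralized reference checks for the cover and containment predicates.
# inputs: dict mapping node -> (E_v, S_v), where E_v is a hashable "local view"
# and S_v is a list of candidate sets, each a frozenset of views.

def is_cover(inputs):
    """True iff some node v holds a set S in S(v) with S == {E(u) : u in V}."""
    universe = frozenset(E for E, _ in inputs.values())
    return any(S == universe for _, Sv in inputs.values() for S in Sv)

def is_containment(inputs):
    """True iff some node v holds a set S in S(v) with S >= {E(u) : u in V}."""
    universe = frozenset(E for E, _ in inputs.values())
    return any(S >= universe for _, Sv in inputs.values() for S in Sv)

cfg = {
    1: ("a", [frozenset({"a", "b"})]),        # node 1 holds exactly the universe
    2: ("b", [frozenset({"a", "b", "c"})]),   # node 2 holds a strict superset
}
print(is_cover(cfg))        # True
print(is_containment(cfg))  # True
```

A configuration holding only the strict superset would still be in containment but fall out of cover, which is exactly the "small relaxation" the text refers to.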
For each such configuration (G ′ , x ′ ), node v generates all possible Id ′ assignments to V (G ′ ) such that for every node u ∈ V (G ′ ), |Id(u)| ≤ ψ(v). Now, for each such pair of a graph (G ′ , x ′ ) and an Id ′ assignment, algorithm D associates a set S ∈ S(v) consisting of the m = |V (G ′ )| t-local views of the nodes of G ′ in (G ′ , x ′ ). We show that (G, x) ∈ L ⇐⇒ D(G, x) ∈ containment. If (G, x) ∈ L, then by the construction of Algorithm D, there exists a set S ∈ S(v) such that S covers the collection of t-local views for (G, x), i.e., S = {E(u) | u ∈ G}. Indeed, the node v maximizing ψ(v) satisfies ψ(v) ≥ max{Id(u) | u ∈ V (G)} ≥ n and ψ(v) ≥ max{x(u) | u ∈ V (G)}. Therefore, that specific node has constructed a set S that precisely corresponds to (G, x) and its given Id assignment; hence, S contains all corresponding t-local views. Thus, D(G, x) ∈ containment. Now consider the case that D(G, x) ∈ containment. In this case, there exists a node v and a set S ∈ S(v) such that S ⊇ {E(u) | u ∈ G}. Such a set S is the collection of t-local views of nodes of some configuration (G ′ , x ′ ) ∈ L and some Id ′ assignment. Since (G ′ , x ′ ) ∈ L, there exists a certificate y ′ for the nodes of G ′ , such that when algorithm A L operates on (G ′ , x ′ , y ′ ), all nodes say "yes". Now, since S contains the t-local views of nodes (G, x), with the corresponding identities, there exists a mapping φ : (G, x, Id) → (G ′ , x ′ , Id ′ ) that preserves inputs and identities. Moreover, when restricted to a ball of radius t around a vertex v ∈ G, φ is actually an isomorphism between this ball and its image. We assign a certificate y to the nodes of G: for each v ∈ V (G), y(v) = y ′ (φ(v)). Now, Algorithm A L when operating on (G, x, y) outputs "yes" at each node of G. By the correctness of A L , we obtain (G, x) ∈ L. We now show that containment ∈ NLD(O(1)). 
For this purpose, we design a nondeterministic local algorithm A that decides whether a configuration (G, x) is in containment. Such an algorithm A is designed to operate on (G, x, y), where y is a certificate. The configuration (G, x) satisfies that x(v) = (E(v), S(v)). Algorithm A aims at verifying whether there exists a node v * with a set S * ∈ S(v * ) such that S * ⊇ {E(v) | v ∈ V (G)}. Given a correct instance, i.e., a configuration (G, x), we define the certificate y as follows. For each node v, the certificate y(v) at v consists of several fields, specifically, y(v) = (y c (v), y s (v), y id (v), y l (v)). The candidate configuration field y c (v) is a triplet y c (v) = (G ′ , x ′ , Id ′ ), where (G ′ , x ′ ) is an isomorphic copy (G ′ , x ′ ) of (G, x) and Id ′ is an p and q, one can modify the success probabilities by performing k runs and requiring each node to individually output "no" if it decided "no" on at least one of the runs. In this case, the "no" success probability increases from q to at least 1 − (1 − q) k , and the "yes" success probability then decreases from p to p k .) Another interesting question is whether the phenomena we observed regarding randomization occurs also in the non-deterministic setting, that is, whether BPNLD(t, p, q) collapses into NLD(O(t)), for p 2 + q > 1. Our model of computation, namely, the LOCAL model, focuses on difficulties arising from purely locality issues, and abstracts away other complexity measures. Naturally, it would be very interesting to come up with a rigorous complexity framework taking into account also other complexity measures. For example, it would be interesting to investigate the connections between classical computational complexity theory and the local complexity one. 
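The k-run amplification mentioned above (output "no" iff at least one run said "no") trades the two success probabilities against each other; the arithmetic is a one-liner (a sketch in our own notation, with illustrative values):

```python
def amplify(p, q, k):
    """After k independent runs, a node saying "no" if any run said "no":
    'yes' success drops to p**k, 'no' success rises to 1 - (1 - q)**k."""
    return p ** k, 1 - (1 - q) ** k

p, q = 0.9, 0.3
for k in (1, 2, 5):
    py, qn = amplify(p, q, k)
    print(k, round(py, 4), round(qn, 4))
```

This makes the trade-off concrete: the "no" side can be pushed arbitrarily close to 1, but only at the price of an exponentially shrinking "yes" side.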
The bound on the (centralized) running time in each round (given by the function f, see Section 2) may serve as a bridge for connecting the two theories, by putting constraints on this bound (e.g., requiring f to be polynomial, exponential, etc.). Also, one could restrict the memory used by a node, in addition to, or instead of, bounding the sequential time. Finally, it would be interesting to come up with a complexity framework that also takes congestion into account.
10,308
1011.2152
2953378433
A central theme in distributed network algorithms concerns understanding and coping with the issue of locality. Inspired by sequential complexity theory, we focus on a complexity theory for distributed decision problems. In the context of locality, solving a decision problem requires the processors to independently inspect their local neighborhoods and then collectively decide whether a given global input instance belongs to some specified language. This paper introduces several classes of distributed decision problems, proves separation among them and presents some complete problems. More specifically, we consider the standard LOCAL model of computation and define LD (for local decision) as the class of decision problems that can be solved in constant number of communication rounds. We first study the intriguing question of whether randomization helps in local distributed computing, and to what extent. Specifically, we define the corresponding randomized class BPLD, and ask whether LD=BPLD. We provide a partial answer to this question by showing that in many cases, randomization does not help for deciding hereditary languages. In addition, we define the notion of local many-one reductions, and introduce the (nondeterministic) class NLD of decision problems for which there exists a certificate that can be verified in constant number of communication rounds. We prove that there exists an NLD-complete problem. We also show that there exist problems not in NLD. On the other hand, we prove that the class NLD#n, which is NLD assuming that each processor can access an oracle that provides the number of nodes in the network, contains all (decidable) languages. For this class we provide a natural complete problem as well.
Finally, we note that our notion of NLD seems to be related to the theory of lifts, e.g., @cite_41.
{ "abstract": [ "We describe here a simple probabilistic model for graphs that are lifts of a fixed base graph G, i.e., those graphs from which there is a covering map onto G. Our aim is to investigate the properties of typical graphs in this class. In particular, we show that almost every lift of G is δ(G)-connected where δ(G) is the minimal degree of G. We calculate the typical edge expansion of lifts of the bouquet B d and" ], "cite_N": [ "@cite_41" ], "mid": [ "2037363355" ] }
Local Distributed Decision *
Distributed computing concerns a collection of processors which collaborate in order to achieve some global task. With time, two main disciplines have evolved in the field. One discipline deals with timing issues, namely, uncertainties due to asynchrony (the fact that processors run at their own speed, and possibly crash), and the other concerns topology issues, namely, uncertainties due to locality constraints (the lack of knowledge about far away processors). Studies carried out by the distributed computing community within these two disciplines were to a large extent problem-driven. Indeed, several major problems considered in the literature concern coping with one of the two uncertainties. For instance, in the asynchrony-discipline, Fischer, Lynch and Paterson [14] proved that consensus cannot be achieved in the asynchronous model, even in the presence of a single fault, and in the locality-discipline, Linial [28] proved that (∆ + 1)-coloring cannot be achieved locally (i.e., in a constant number of communication rounds), even in the ring network. One of the significant achievements of the asynchrony-discipline was its success in establishing unifying theories in the flavor of computational complexity theory. Some central examples of such theories are failure detectors [6,7] and the wait-free hierarchy (including Herlihy's hierarchy) [18]. In contrast, despite considerable progress, the locality-discipline still suffers from the absence of a solid basis in the form of a fundamental computational complexity theory. Obviously, defining some common cost measures (e.g., time, message, memory, etc.) enables us to compare problems in terms of their relative cost. Still, from a computational complexity point of view, it is not clear how to relate the difficulty of problems in the locality-discipline. Specifically, if two problems have different kinds of outputs, it is not clear how to reduce one to the other, even if they cost the same. 
Inspired by sequential complexity theory, we focus on decision problems, in which one is aiming at deciding whether a given global input instance belongs to some specified language. In the context of distributed computing, each processor must produce a boolean output, and the decision is defined by the conjunction of the processors' outputs, i.e., if the instance belongs to the language, then all processors must output "yes", and otherwise, at least one processor must output "no". Observe that decision problems provide a natural framework for tackling fault-tolerance: the processors have to collectively check whether the network is fault-free, and a node detecting a fault raises an alarm. In fact, many natural problems can be phrased as decision problems, like "is there a unique leader in the network?" or "is the network planar?". Moreover, decision problems occur naturally when one is aiming at checking the validity of the output of a computational task, such as "is the produced coloring legal?", or "is the constructed subgraph an MST?". Construction tasks such as exact or approximated solutions to problems like coloring, MST, spanner, MIS, maximum matching, etc., received enormous attention in the literature (see, e.g., [5,25,26,28,30,31,32,38]), yet the corresponding decision problems have hardly been considered. The purpose of this paper is to investigate the nature of local decision problems. Decision problems seem to provide a promising approach to building up a distributed computational theory for the locality-discipline. Indeed, as we will show, one can define local reductions in the framework of decision problems, thus enabling the introduction of complexity classes and notions of completeness. We consider the LOCAL model [36], which is a standard distributed computing model capturing the essence of locality. 
In this model, processors are woken up simultaneously, and computation proceeds in fault-free synchronous rounds during which every processor exchanges messages of unlimited size with its neighbors, and performs arbitrary computations on its data. Informally, let us define LD(t) (for local decision) as the class of decision problems that can be solved in t communication rounds in the LOCAL model. (We find special interest in the case where t represents a constant, but in general we view t as a function of the input graph. We note that in the LOCAL model, every decidable decision problem can be solved in n communication rounds, where n denotes the number of nodes in the input graph.) Some decision problems are trivially in LD(O(1)) (e.g., "is the given coloring a (∆ + 1)-coloring?", "do the selected nodes form an MIS?", etc.), while some others can easily be shown to be outside LD(t), for any t = o(n) (e.g., "is the network planar?", "is there a unique leader?", etc.). In contrast to the above examples, there are some languages for which it is not clear whether they belong to LD(t), even for t = O(1). To elaborate on this, consider the particular case where it is required to decide whether the network belongs to some specified family F of graphs. If this question can be decided in a constant number of communication rounds, then this means, informally, that the family F can somehow be characterized by relatively simple conditions. For example, a family F of graphs that can be characterized as consisting of all graphs having no subgraph from C, for some specified finite set C of finite subgraphs, is obviously in LD(O(1)). However, the question of whether a family of graphs can be characterized as above is often non-trivial. For example, characterizing cographs as precisely the graphs with no induced P_4, attributed to Seinsche [40], is not easy, and requires nontrivial usage of modular decomposition. 
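The forbidden-subgraph case is easy to see concretely: triangle-freeness is in LD(O(1)), since after one exchange each node knows its neighbors' adjacency lists and can check for a triangle through itself. A simulation sketch (the round is collapsed into direct reads of an adjacency dict; function name is our own):

```python
def local_decide_triangle_free(adj):
    """Each node inspects its radius-1 view: it outputs "no" iff two of its
    neighbors are adjacent to each other, i.e., a triangle passes through it.
    The global verdict is the conjunction of all local outputs."""
    out = {v: not any(w in adj[u] for u in adj[v] for w in adj[v] if w != u)
           for v in adj}
    return all(out.values())

print(local_decide_triangle_free({1: [2], 2: [1, 3], 3: [2]}))        # path: True
print(local_decide_triangle_free({1: [2, 3], 2: [1, 3], 3: [1, 2]}))  # triangle: False
```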
The first question we address is whether and to what extent randomization helps. For p, q ∈ (0, 1], define BPLD(t, p, q) as the class of all distributed languages that can be decided by a randomized distributed algorithm that runs in t communication rounds and produces correct answers on legal (respectively, illegal) instances with probability at least p (resp., q). An interesting observation is that for p and q such that p^2 + q ≤ 1, we have LD(t) ⊊ BPLD(t, p, q). In fact, for such p and q, there exists a language L ∈ BPLD(0, p, q), such that L ∉ LD(t), for any t = o(n). To see why, consider the following Unique-Leader language. The input is a graph where each node has a bit indicating whether it is a leader or not. An input is in the language Unique-Leader if and only if there is at most one leader in the graph. Obviously, this language is not in LD(t), for any t < n. We claim it is in BPLD(0, p, q), for p and q such that p^2 + q ≤ 1. Indeed, for such p and q, we can design the following simple randomized algorithm that runs in 0 time: every node which is not a leader says "yes" with probability 1, and every node which is a leader says "yes" with probability p. Clearly, if the graph has at most one leader then all nodes say "yes" with probability at least p. On the other hand, if there are at least k ≥ 2 leaders, at least one node says "no" with probability at least 1 − p^k ≥ 1 − p^2 ≥ q. It turns out that the aforementioned choice of p and q is not coincidental, and that p^2 + q = 1 is really the correct threshold. Indeed, we show that Unique-Leader ∉ BPLD(t, p, q), for any t < n, and any p and q such that p^2 + q > 1. In fact, we show a much more general result, that is, we prove that if p^2 + q > 1, then restricted to hereditary languages, BPLD(t, p, q) actually collapses into LD(O(t)), for any t. 
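The 0-round decider for Unique-Leader admits an exact analysis: with k leaders, each saying "yes" independently with probability p, all nodes accept with probability exactly p^k. The check below verifies the two success bounds for illustrative values of p and q strictly inside the feasible region (helper name is our own):

```python
def accept_probability(num_leaders, p):
    """Probability that every node outputs "yes": non-leaders always accept,
    each leader accepts independently with probability p."""
    return p ** num_leaders

p, q = 0.8, 0.35          # p**2 + q = 0.99 <= 1, inside the feasible region
# Legal instances (at most one leader): accept probability >= p.
assert accept_probability(0, p) >= p and accept_probability(1, p) >= p
# Illegal instances (k >= 2 leaders): some node says "no" with prob 1 - p**k >= q.
for k in range(2, 10):
    assert 1 - accept_probability(k, p) >= q
print("0-round decider meets (p, q) =", (p, q))
```

The worst illegal instance is k = 2, which is why p^2 + q ≤ 1 is exactly the condition the algorithm needs.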
In the second part of the paper, we investigate the impact of non-determinism on local decision, and establish some structural results inspired by classical computational complexity theory. Specifically, we show that non-determinism does help, but that this help is limited, as there exist languages that cannot be decided non-deterministically. Perhaps surprisingly, it turns out that it is the combination of randomization with non-determinism that enables to decide all languages in constant time. Finally, we introduce the notion of local reduction, and establish some completeness results. Our contributions 1.2.1 Impact of randomization We study the impact of randomization on local decision. We prove that if p^2 + q > 1, then restricted to hereditary languages, BPLD(t, p, q) = LD(O(t)), for any function t. This, together with the observation that LD(t) ⊊ BPLD(t, p, q), for any t = o(n), may indicate that p^2 + q = 1 serves as a sharp threshold for distinguishing the deterministic case from the randomized one. Impact of non-determinism We first show that non-determinism helps local decision, i.e., we show that the class NLD(t) (cf. Section 2.3) strictly contains LD(t). More precisely, we show that there exists a language in NLD(O(1)) which is not in LD(t) for every t = o(n), where n is the size of the input graph. Nevertheless, NLD(t) does not capture all (decidable) languages, for t = o(n). Indeed we show that there exists a language not in NLD(t) for every t = o(n). Specifically, this language is #n = {(G, n) | |V (G)| = n}. Perhaps surprisingly, it turns out that it is the combination of randomization with non-determinism that enables to decide all languages in constant time. Let BPNLD(O(1)) = BPNLD(O(1), p, q), for some constants p and q such that p^2 + q ≤ 1. We prove that BPNLD(O(1)) contains all languages. To sum up, LD(o(n)) ⊊ NLD(O(1)) ⊆ NLD(o(n)) ⊊ BPNLD(O(1)) = All. 
Finally, we introduce the notion of many-one local reduction, and establish some completeness results. We show that there exists a problem, called cover, which is, in a sense, the most difficult decision problem. That is, we show that cover is BPNLD(O(1))-complete. (Interestingly, a small relaxation of cover, called containment, turns out to be NLD(O(1))-complete.) Decision problems and complexity classes 2.1 Model of computation Let us first recall some basic notions in distributed computing. We consider the LOCAL model [36], which is a standard model capturing the essence of locality. In this model, processors are assumed to be nodes of a network G, provided with arbitrary distinct identities, and computation proceeds in fault-free synchronous rounds. At each round, every processor v ∈ V (G) exchanges messages of unrestricted size with its neighbors in G, and performs computations on its data. We assume that the number of steps (sequential time) used for the local computation made by the node v in some round r is bounded by some function f_A(H(r, v)), where H(r, v) denotes the size of the "history" seen by node v up to the beginning of round r, that is, the total number of bits encoded in the input and the identity of the node, as well as in the incoming messages from previous rounds. Here, we do not impose any restriction on the growth rate of f_A. We would like to point out, however, that imposing such restrictions, or alternatively, imposing restrictions on the memory used by a node for local computation, may lead to interesting connections between the theory of locality and classical computational complexity theory. To sum up, during the execution of a distributed algorithm A, all processors are woken up simultaneously, and, initially, a processor is solely aware of its own identity, and possibly of some local input too. 
Then, in each round r, every processor v (1) sends messages to its neighbors, (2) receives messages from its neighbors, and (3) performs at most f_A(H(r, v)) computations. After a number of rounds (that may depend on the network G and may vary among the processors, simply because nodes have different identities, potentially different inputs, and are typically located at non-isomorphic positions in the network), every processor v terminates and outputs some value out(v). Consider an algorithm running in a network G with input x and identity assignment Id. The running time of a node v, denoted T_v(G, x, Id), is the number of rounds until v outputs. The running time of the algorithm, denoted T(G, x, Id), is the number of rounds until all processors terminate, i.e., T(G, x, Id) = max{T_v(G, x, Id) | v ∈ V (G)}. Let t be a non-decreasing function of input configurations (G, x, Id). (By non-decreasing, we mean that if G′ is an induced subgraph of G and x′ and Id′ are the restrictions of x and Id, respectively, to the nodes in G′, then t(G′, x′, Id′) ≤ t(G, x, Id).) We say that an algorithm A has running time at most t, if T(G, x, Id) ≤ t(G, x, Id), for every (G, x, Id). We shall give special attention to the case that t represents a constant function. Note that in general, given (G, x, Id), the nodes may not be aware of t(G, x, Id). On the other hand, note that, if t = t(G, x, Id) is known, then w.l.o.g. one can always assume that a local algorithm running in time at most t operates at each node v in two stages: (A) collect all information available in B_G(v, t), the t-neighborhood, or ball of radius t of v in G, including inputs, identities and adjacencies, and (B) compute the output based on this information. Local decision (LD) We now refine some of the above concepts, in order to formally define our objects of interest. 
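When t is known, stage (A) of the two-stage normal form is simply a breadth-first collection of the ball B_G(v, t). A centralized sketch of that collection step (adjacency given as a dict of lists; the helper name is our own):

```python
from collections import deque

def ball(adj, v, t):
    """Return the set of nodes within distance t of v, i.e., B_G(v, t)."""
    seen, frontier = {v}, deque([(v, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == t:          # do not expand past radius t
            continue
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                frontier.append((w, d + 1))
    return seen

# Path 1-2-3-4-5: the radius-1 ball of node 3 is {2, 3, 4}.
path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(sorted(ball(path, 3, 1)))  # [2, 3, 4]
```

Stage (B) is then an arbitrary computation on the collected ball, which is why a t-round local algorithm is fully determined by a function from radius-t views to outputs.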
Obviously, a distributed algorithm that runs on a graph G operates separately on each connected component of G, and nodes of a component G′ of G cannot distinguish the underlying graph G from G′. For this reason, we consider connected graphs only. Definition 2.1 A configuration is a pair (G, x) where G is a connected graph, and every node v ∈ V (G) is assigned as its local input a binary string x(v) ∈ {0, 1}*. In some problems, the local input of every node is empty, i.e., x(v) = ǫ for every v ∈ V (G), where ǫ denotes the empty binary string. Since an undecidable collection of configurations remains undecidable in the distributed setting too, we consider only decidable collections of configurations. Formally, we define the following. Definition 2.2 A distributed language is a decidable collection L of configurations. In general, there are several possible ways of representing a configuration of a distributed language corresponding to standard distributed computing problems. Some examples considered in this paper are the following. Unique-Leader = {(G, x) | ‖x‖_1 ≤ 1} consists of all configurations such that there exists at most one node with local input 1, with all the others having local input 0. Consensus = {(G, (x_1, x_2)) | ∃u ∈ V (G), ∀v ∈ V (G), x_2(v) = x_1(u)} consists of all configurations such that all nodes agree on the value proposed by some node. Coloring = {(G, x) | ∀v ∈ V (G), ∀w ∈ N(v), x(v) ≠ x(w)}, where N(v) denotes the (open) neighborhood of v, that is, all nodes at distance exactly 1 from v. MIS = {(G, x) | S = {v ∈ V (G) | x(v) = 1} forms a MIS}. SpanningTree = {(G, (name, head)) | T = {e_v = (v, v⁺), v ∈ V (G), head(v) = name(v⁺)} is a spanning tree of G} consists of all configurations such that the set T of edges e_v between every node v and its neighbor v⁺ satisfying name(v⁺) = head(v) forms a spanning tree of G. (The language MST, for minimum spanning tree, can be defined similarly.) 
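These languages are ordinary decidable predicates on configurations; for instance, Coloring and MIS can be checked centrally as follows (a reference sketch with our own helper names, not a distributed algorithm):

```python
def is_coloring(adj, x):
    """Coloring: every edge joins nodes with different inputs."""
    return all(x[v] != x[w] for v in adj for w in adj[v])

def is_mis(adj, x):
    """MIS: S = {v : x[v] == 1} is independent and maximal (dominating)."""
    S = {v for v in adj if x[v] == 1}
    independent = all(w not in S for v in S for w in adj[v])
    maximal = all(v in S or any(w in S for w in adj[v]) for v in adj)
    return independent and maximal

triangle = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
print(is_coloring(triangle, {1: "r", 2: "g", 3: "b"}))  # True
print(is_mis(triangle, {1: 1, 2: 0, 3: 0}))             # True: {1} dominates
```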
An identity assignment Id for a graph G is an assignment of distinct integers to the nodes of G. A node v ∈ V (G) executing a distributed algorithm in a configuration (G, x) initially knows only its own identity Id(v) and its own input x(v), and is unaware of the graph G. After t rounds, v acquires knowledge only of its t-neighborhood B G (v, t). In each round r of the algorithm A, a node may communicate with its neighbors by sending and receiving messages, and may perform at most f A (H(r, v)) computations. Eventually, each node v ∈ V (G) must output a local output out(v) ∈ {0, 1} * . Let L be a distributed language. We say that a distributed algorithm A decides L if and only if for every configuration (G, x), and for every identity assignment Id for the nodes of G, every node of G eventually terminates and outputs "yes" or "no", satisfying the following decision rules: • If (G, x) ∈ L, then out(v) ="yes" for every node v ∈ V (G); • If (G, x) / ∈ L, then there exists at least one node v ∈ V (G) such that out(v) ="no". We are now ready to define one of our main subjects of interest, the class LD(t), for local decision. Definition 2.3 Let t be a non-decreasing function of triplets (G, x, Id). Define LD(t) as the class of all distributed languages that can be decided by a local distributed algorithm that runs in number of rounds at most t. For instance, Coloring ∈ LD(1) and MIS ∈ LD(1). On the other hand, it is not hard to see that languages such as Unique-Leader, Consensus, and SpanningTree are not in LD(t), for any t = o(n). In what follows, we define LD(O(t)) = ∪ c>1 LD(c · t). Non-deterministic local decision (NLD) A distributed verification algorithm is a distributed algorithm A that gets as input, in addition to a configuration (G, x), a global certificate vector y, i.e., every node v of a graph G gets as input a binary string x(v) ∈ {0, 1} * , and a certificate y(v) ∈ {0, 1} * . 
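The decision rule above (accept iff every node outputs "yes") is easy to see in action for Coloring ∈ LD(1): after one exchange of inputs, each node compares its color with its neighbors'. A minimal simulation (the round is collapsed into direct neighbor reads; names are illustrative):

```python
def local_decide_coloring(adj, x):
    """One-round decider: node v outputs "yes" iff no neighbor shares x[v].
    The global verdict is the conjunction of all local outputs."""
    out = {v: all(x[w] != x[v] for w in adj[v]) for v in adj}
    return all(out.values()), out

cycle4 = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [3, 1]}
verdict, _ = local_decide_coloring(cycle4, {1: 0, 2: 1, 3: 0, 4: 1})
print(verdict)  # True: a proper 2-coloring of the 4-cycle
verdict, _ = local_decide_coloring(cycle4, {1: 0, 2: 1, 3: 1, 4: 1})
print(verdict)  # False: the edges {2,3} and {3,4} are monochromatic
```

Note the asymmetry of the rule: a single dissenting node suffices to reject, which is exactly what the definition of deciding a language requires.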
A verification algorithm A verifies L if and only if for every configuration (G, x), the following hold: • If (G, x) ∈ L, then there exists a certificate y such that for every id-assignment Id, algorithm A applied on (G, x) with certificate y and id-assignment Id outputs out(v) ="yes" for all v ∈ V (G); • If (G, x) / ∈ L, then for every certificate y and for every id-assignment Id, algorithm A applied on (G, x) with certificate y and id-assignment Id outputs out(v) ="no" for at least one node v ∈ V (G). One motivation for studying the nondeterministic verification framework comes from settings in which one must perform local verifications repeatedly. In such cases, one can afford to have a relatively "wasteful" preliminary step in which a certificate is computed for each node. Using these certificates, local verifications can then be performed very fast. See [21,22] for more details regarding such applications. Indeed, the definition of a verification algorithm finds similarities with the notion of proof-labeling schemes discussed in [21,22]. Informally, in a proof-labeling scheme, the construction of a "good" certificate y for a configuration (G, x) ∈ L may depend also on the given id-assignment. Since the question of whether a configuration (G, x) belongs to a language L is independent from the particular id-assignment, we prefer to let the "good" certificate y depend only on the configuration. In other words, as defined above, a verification algorithm operating on a configuration (G, x) ∈ L and a "good" certificate y must say "yes" at every node regardless of the id-assignment. We now define the class NLD(t), for nondeterministic local decision. (our terminology is by direct analogy to the class NP in sequential computational complexity). 
Bounded-error probabilistic local decision (BPLD) A randomized distributed algorithm is a distributed algorithm A that enables every node v, at any round r during the execution, to toss a number of random bits, obtaining a string r(v) ∈ {0, 1}*. Clearly, this number cannot exceed f_A(H(r, v)), the bound on the number of computational steps used by node v at round r. Note, however, that H(r, v) may now also depend on the random bits produced by other nodes in previous rounds. For p, q ∈ (0, 1], we say that a randomized distributed algorithm A is a (p, q)-decider for L, or, that it decides L with "yes" success probability p and "no" success probability q, if and only if for every configuration (G, x), and for every identity assignment Id for the nodes of G, every node of G eventually terminates and outputs "yes" or "no", and the following properties are satisfied: • If (G, x) ∈ L, then Pr[out(v) = "yes" for every node v ∈ V (G)] ≥ p, • If (G, x) ∉ L, then Pr[out(v) = "no" for at least one node v ∈ V (G)] ≥ q, where the probabilities in the above definition are taken over all possible coin tosses performed by nodes. We define the class BPLD(t, p, q), for "Bounded-error Probabilistic Local Decision", as follows. Definition 2.5 For p, q ∈ (0, 1] and a function t, BPLD(t, p, q) is the class of all distributed languages that have a local randomized distributed (p, q)-decider running in time t (i.e., that can be decided in time t by a local randomized distributed algorithm with "yes" success probability p and "no" success probability q). A sharp threshold for randomization Consider some graph G, and a subset U of the nodes of G, i.e., U ⊆ V(G), and let G[U] denote the subgraph of G induced by U. A language L is hereditary if it is closed under taking connected induced subgraphs, that is, if (G, x) ∈ L then (G[U], x[U]) ∈ L for every U ⊆ V(G) such that G[U] is connected, where x[U] denotes the input x restricted to the nodes in U. Theorem 3.1 below asserts that, for hereditary languages, randomization does not help if one imposes that p^2 + q > 1, i.e., the "no" success probability is at least as large as one minus the square of the "yes" success probability. 
Somewhat more formally, we prove that for hereditary languages, ⋂_{p^2+q>1} BPLD(t, p, q) = LD(O(t)). This complements the fact that for p^2 + q ≤ 1, we have LD(t) ⊊ BPLD(t, p, q), for any t = o(n). Recall that [34] investigates the question of whether randomization helps for constructing in constant time a solution for a problem in LCL ⊊ LD(O(1)). We stress that the technique used in [34] for tackling this question relies heavily on the definition of LCL, specifically, that only graphs of constant degree and of constant input size are considered. Hence it is not clear whether the technique of [34] can be useful for our purposes, as we impose no such assumptions on the degrees or input sizes. Also, although it seems at first glance that the Lovász Local Lemma might have been helpful here, we could not effectively apply it in our proof. Instead, we use a completely different approach. Theorem 3.1 Let L be a hereditary language and let t be a function. If L ∈ BPLD(t, p, q) for constants p, q ∈ (0, 1] such that p^2 + q > 1, then L ∈ LD(O(t)). Proof. Let us start with some definitions. Let L be a language in BPLD(t, p, q) where p, q ∈ (0, 1], p^2 + q > 1, and t is some function. Let A be a randomized algorithm deciding L, with "yes" success probability p and "no" success probability q, whose running time is at most t(G, x, Id), for every configuration (G, x) with identity assignment Id. Fix a configuration (G, x), and an id-assignment Id for the nodes of V(G). The distance dist_G(u, v) between two nodes of G is the minimum number of edges in a path connecting u and v in G. The distance between two subsets U_1, U_2 ⊆ V is defined as dist_G(U_1, U_2) = min{dist_G(u, v) | u ∈ U_1, v ∈ U_2}. For a set U ⊆ V, let E(G, x, Id, U) denote the event that, when running A on (G, x) with id-assignment Id, all nodes in U output "yes". Let v ∈ V(G). The running time of A at v may depend on the coin tosses made by the nodes. 
Let t_v = t_v(G, x, Id) denote the maximal running time of v over all possible coin tosses. Note that t_v ≤ t(G, x, Id) (we do not assume that either t or t_v is known to v). The radius of a node v, denoted r_v, is the maximum value t_u such that there exists a node u with v ∈ B_G(u, t_u). (Observe that the radius of a node is at most t.) The radius of a set of nodes S is r_S := max{r_v | v ∈ S}. In what follows, fix a constant δ such that 0 < δ < p^2 + q − 1, and define λ = 11⌈log p / log(1 − δ)⌉. A splitter of (G, x, Id) is a triplet (S, U_1, U_2) of pairwise disjoint subsets of nodes such that S ∪ U_1 ∪ U_2 = V and dist_G(U_1, U_2) ≥ λ·r_S. (Observe that r_S may depend on the identity assignment and the input, and therefore being a splitter is not just a topological property depending only on G.) Given a splitter (S, U_1, U_2) of (G, x, Id), let G_k = G[U_k ∪ S], and let x_k be the input x restricted to nodes in G_k, for k = 1, 2. The following structural claim does not use the fact that L is hereditary. Lemma 3.2 For every configuration (G, x) with identity assignment Id, and every splitter (S, U_1, U_2) of (G, x, Id), we have (G_1, x_1) ∈ L and (G_2, x_2) ∈ L ⇒ (G, x) ∈ L. Let (G, x) be a configuration with identity assignment Id. Assume, towards contradiction, that there exists a splitter (S, U_1, U_2) of the triplet (G, x, Id) such that (G_1, x_1) ∈ L and (G_2, x_2) ∈ L, yet (G, x) ∉ L. (The fact that (G_1, x_1) ∈ L and (G_2, x_2) ∈ L implies that both G_1 and G_2 are connected; we note, however, that for the claim to be true, it is not required that G[…].) Proof. For proving Claim 3.3, we upper bound the size of I by d − 4r_S − 2. This is done by covering the integers in (2r_S, d − 2r_S) by at most 4r_S + 1 sets, such that each one is (4r_S + 1)-independent, that is, every two integers in the same set are at least 4r_S + 1 apart. 
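The constants in the proof interact as follows: δ must lie strictly below p^2 + q − 1, and λ = 11⌈log p / log(1 − δ)⌉ then fixes the required separation λ·r_S between U_1 and U_2. A quick numeric check (values are illustrative, the function name is our own):

```python
import math

def splitter_lambda(p, delta):
    """lambda = 11 * ceil(log p / log(1 - delta)); both logs are negative,
    so the ratio is a positive bound on |J ∩ I| per covering set."""
    return 11 * math.ceil(math.log(p) / math.log(1 - delta))

p, q = 0.9, 0.3              # p**2 + q = 1.11 > 1, as Theorem 3.1 requires
delta = (p * p + q - 1) / 2  # any 0 < delta < p**2 + q - 1 works
assert 0 < delta < p * p + q - 1
print(splitter_lambda(p, delta))
```

Note that as p^2 + q approaches 1 from above, δ must shrink and λ blows up, which matches the intuition that the threshold p^2 + q = 1 is where the argument breaks down.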
Specifically, for s ∈ [1, 4r_S + 1] and m(S) = ⌈(d − 8r_S)/(4r_S + 1)⌉, we define J_s = {s + 2r_S + j(4r_S + 1) | j ∈ [0, m(S)]}. Observe that, as desired, (2r_S, d − 2r_S) ⊂ ∪_{s ∈ [1, 4r_S+1]} J_s, and for each s ∈ [1, 4r_S + 1], J_s is (4r_S + 1)-independent. In what follows, fix s ∈ [1, 4r_S + 1] and let J = J_s. Since (G_1, x_1) ∈ L, we know that Pr[E(G_1, x_1, Id, S_{J∩I})] ≥ p. Observe that for i ∈ (2r_S, d − 2r_S) and v ∈ S_i, we have t_v ≤ r_v ≤ r_S, and hence the t_v-neighborhood in G of every node v ∈ S_i is contained in G_1, i.e., B_G(v, t_v) ⊆ G_1. It therefore follows that

Pr[E(G, x, Id, S_{J∩I})] = Pr[E(G_1, x_1, Id, S_{J∩I})] ≥ p.  (1)

Consider two integers a and b in J. We know that |a − b| ≥ 4r_S + 1. Hence, the distance in G between any two nodes u ∈ S_a and v ∈ S_b is at least 2r_S + 1. Thus, the events E(G, x, Id, S_a) and E(G, x, Id, S_b) are independent. It follows from the definition of I that

Pr[E(G, x, Id, S_{J∩I})] < (1 − δ)^{|J∩I|}.  (2)

By (1) and (2), we have p < (1 − δ)^{|J∩I|}, and thus |J ∩ I| < log p / log(1 − δ). Since (2r_S, d − 2r_S) can be covered by the sets J_s, s = 1, ..., 4r_S + 1, each of which is (4r_S + 1)-independent, we get that

|I| = Σ_{s=1}^{4r_S+1} |J_s ∩ I| < (4r_S + 1)(log p / log(1 − δ)).

Combining this bound with the fact that d = λ r_S, we get d − 4r_S − 1 > |I|. It follows by the pigeonhole principle that there exists some i ∈ (2r_S, d − 2r_S) such that i ∉ I, as desired. This completes the proof of Claim 3.3.

Fix i ∈ (2r_S, d − 2r_S) such that i ∉ I, and let F = E(G, x, Id, S_i). By definition,

Pr[¬F] ≤ δ < p^2 + q − 1.  (3)

Let H_1 denote the subgraph of G induced by the nodes in (∪_{j=1}^{i−r_S−1} L_j) ∪ U_1. We similarly define H_2 as the subgraph of G induced by the nodes in (∪_{j>i+r_S} L_j) ∪ U_2. Note that S_i ∪ V(H_1) ∪ V(H_2) = V, and for any two nodes u ∈ V(H_1) and v ∈ V(H_2), we have d_G(u, v) > 2r_S.
It follows that, for k = 1, 2, the t_u-neighborhood in G of each node u ∈ V(H_k) equals the t_u-neighborhood in G_k of u, that is, B_G(u, t_u) ⊆ G_k. (To see why, consider, for example, the case k = 2. Given u ∈ V(H_2), it is sufficient to show that no vertex v ∈ V(H_1) satisfies v ∈ B_G(u, t_u). Indeed, if such a vertex v existed, then d_G(u, v) > 2r_S would give t_u > 2r_S. Since there must exist a vertex w ∈ S_i such that w ∈ B_G(u, t_u), we would get r_w > 2r_S, in contradiction to the fact that w ∈ S.) Thus, for k = 1, 2, since (G_k, x_k) ∈ L, we get

Pr[E(G, x, Id, V(H_k))] = Pr[E(G_k, x_k, Id, V(H_k))] ≥ p.

Let F′ = E(G, x, Id, V(H_1) ∪ V(H_2)). As the events E(G, x, Id, V(H_1)) and E(G, x, Id, V(H_2)) are independent, it follows that Pr[F′] > p^2, that is,

Pr[¬F′] < 1 − p^2.  (4)

By Eqs. (3) and (4), and using a union bound, it follows that Pr[¬F ∨ ¬F′] < q. Thus

Pr[E(G, x, Id, V(G))] = Pr[E(G, x, Id, S_i ∪ V(H_1) ∪ V(H_2))] = Pr[F ∧ F′] > 1 − q.

This is in contradiction to the assumption that (G, x) ∉ L. This concludes the proof of Lemma 3.2.

Our goal now is to show that L ∈ LD(O(t)) by proving the existence of a deterministic local algorithm D that runs in time O(t) and recognizes L. (No attempt is made here to minimize the constant factor hidden in the O(t) notation.) Recall that both t = t(G, x, Id) and t_v = t_v(G, x, Id) may not be known to v. Nevertheless, by inspecting the balls B_G(v, 2^i) for increasing i = 1, 2, ..., each node v can compute an upper bound on t_v, as given by the following claim: for any constant c, each node v can compute in O(t) time a value t*_v = t*_v(c) such that (1) c · t_v ≤ t*_v = O(t), and (2) for every u ∈ B_G(v, c · t*_v), we have t_u ≤ t*_v. To establish the claim, observe first that in O(t) time each node v can compute a value t′_v satisfying t_v ≤ t′_v ≤ 2t.
Indeed, given the ball B_G(v, 2^i), for some integer i, and using the upper bound on the number of (sequential) local computations, node v can simulate all its possible executions up to round r = 2^i. The desired value t′_v is the smallest r = 2^i for which all executions of v up to round r conclude with an output at v. Once t′_v is computed, node v aims at computing t*_v. For this purpose, it starts again to inspect the balls B_G(v, 2^i) for increasing i = 1, 2, ..., to obtain t′_u from each u ∈ B_G(v, 2^i). (For this purpose, it may need to wait until u computes t′_u, but this delays the whole computation by at most O(t) time.) Now, node v outputs t*_v = 2^i for the smallest i satisfying (1) c · t′_v ≤ 2^i and (2) for every u ∈ B_G(v, c · 2^i), we have t′_u ≤ 2^i. It is easy to see that for this i, we have 2^i = O(t), hence t*_v = O(t).

Given a configuration (G, x) and an id-assignment Id, Algorithm D, applied at a node u, first calculates t*_u = t*_u(6λ), and then outputs "yes" if and only if the 2λt*_u-neighborhood of u in (G, x) belongs to L. That is,

out(u) = "yes" ⟺ (B_G(u, 2λt*_u), x[B_G(u, 2λt*_u)]) ∈ L.

Obviously, Algorithm D is a deterministic algorithm that runs in time O(t). We claim that Algorithm D decides L. Indeed, since L is hereditary, if (G, x) ∈ L, then every prefix of (G, x) is also in L, and thus every node u outputs out(u) = "yes". Now consider the case where (G, x) ∉ L, and assume by contradiction that by applying D on (G, x) with id-assignment Id, every node u outputs out(u) = "yes". Let U ⊆ V(G) be maximal by inclusion such that G[U] is connected and (G[U], x[U]) ∈ L. Obviously, U is not empty, as (B_G(u, 2λt*_u), x[B_G(u, 2λt*_u)]) ∈ L for every node u. On the other hand, we have |U| < |V(G)|, because (G, x) ∉ L. Let u ∈ U be a node with maximal t_u such that B_G(u, 2t_u) contains a node outside U. Define G′ as the subgraph of G induced by U ∪ V(B_G(u, 2t_u)).
Observe that G′ is connected and that G′ strictly contains U. Towards contradiction, our goal is to show that (G′, x[G′]) ∈ L. Let H denote the graph that is maximal by inclusion such that H is connected and

B_G(u, 2t_u) ⊂ H ⊆ B_G(u, 2t_u) ∪ (U ∩ B_G(u, 2λt*_u)).

Let W_1, W_2, ..., W_ℓ be the ℓ connected components of G[U] \ B_G(u, 2t_u), ordered arbitrarily. Let W_0 be the empty graph, and for k = 0, 1, 2, ..., ℓ, define the graph Z_k = H ∪ W_0 ∪ W_1 ∪ W_2 ∪ ... ∪ W_k. Observe that Z_k is connected for each k = 0, 1, 2, ..., ℓ. We prove by induction on k that (Z_k, x[Z_k]) ∈ L for every k = 0, 1, 2, ..., ℓ. This will establish the contradiction, since Z_ℓ = G′.

For the basis of the induction, the case k = 0, we need to show that (H, x[H]) ∈ L. However, this is immediate from the facts that H is a connected subgraph of B_G(u, 2λt*_u), the configuration (B_G(u, 2λt*_u), x[B_G(u, 2λt*_u)]) ∈ L, and L is hereditary. Assume now that we have (Z_k, x[Z_k]) ∈ L for 0 ≤ k < ℓ, and consider the graph Z_{k+1} = Z_k ∪ W_{k+1}. Define the sets of nodes S = V(Z_k) ∩ V(W_{k+1}), U_1 = V(Z_k) \ S, and U_2 = V(W_{k+1}) \ S. A crucial observation is that (S, U_1, U_2) is a splitter of Z_{k+1}. This follows from the following arguments.

Let us first show that r_S ≤ t*_u. By definition, we have t_v ≤ t*_u for every v ∈ B_G(u, 6λt*_u). Hence, in order to bound the radius of S (in Z_{k+1}) by t*_u, it is sufficient to prove that there is no node w ∈ U \ B_G(u, 6λt*_u) such that B_G(w, t_w) ∩ S ≠ ∅. Indeed, if such a node w existed, then t_w > 4λt*_u, and hence B_G(w, 2t_w) would contain a node outside U, in contradiction to the choice of u. It follows that r_S ≤ t*_u. We now claim that dist_{Z_{k+1}}(U_1, U_2) ≥ λt*_u. Consider a simple directed path P in Z_{k+1} going from a node x ∈ U_1 to a node y ∈ U_2. Since x ∉ V(W_{k+1}) and y ∈ V(W_{k+1}), we get that P must pass through a vertex in B_G(u, 2t_u).
Let z be the last vertex in P such that z ∈ B_G(u, 2t_u), and consider the directed subpath P[z,y] of P going from z to y. Now, let P′ = P[z,y] \ {z}. The first d′ = min{(2λ − 2)t*_u, |P′|} vertices in the directed subpath P′ must belong to V(H) ⊆ V(Z_k). In addition, observe that all nodes in P′ must be in V(W_{k+1}). It follows that the first d′ nodes of P′ are in S. Since y ∉ S, we get that |P′| ≥ d′ = (2λ − 2)t*_u, and thus |P| > λt*_u. Consequently, dist_{Z_{k+1}}(U_1, U_2) ≥ λt*_u, as desired. This completes the proof that (S, U_1, U_2) is a splitter of Z_{k+1}. Now, by the induction hypothesis, we have (G_1, x[G_1]) ∈ L, because G_1 = G[U_1 ∪ S] = Z_k. In addition, we have (G_2, x[G_2]) ∈ L, because G_2 = G[U_2 ∪ S] = W_{k+1}. Hence, by Lemma 3.2, (Z_{k+1}, x[Z_{k+1}]) ∈ L, which completes the induction.

Nondeterminism and complete problems

4.1 Separation results

Our first separation result indicates that non-determinism helps for local decision. Indeed, we show that there exists a language, specifically tree = {(G, ε) | G is a tree}, which belongs to NLD(1) but not to LD(t), for any t = o(n). The proof follows by rather standard arguments.

Proof. To establish the theorem it is sufficient to show that there exists a language L such that L ∉ LD(o(n)) and L ∈ NLD(1). Let tree = {(G, ε) | G is a tree}. We have tree ∉ LD(o(n)). To see why, consider a cycle C with nodes labeled consecutively from 1 to 4n, and the path P_1 (resp., P_2) with nodes labeled consecutively 1, ..., 4n (resp., 2n + 1, ..., 4n, 1, ..., 2n) from one extremity to the other. For any algorithm A deciding tree, all nodes n + 1, ..., 3n output "yes" in configuration (P_1, ε) for any identity assignment for the nodes in P_1, while all nodes 3n + 1, ..., 4n, 1, ..., n output "yes" in configuration (P_2, ε) for any identity assignment for the nodes in P_2. Thus, if A is local, then all nodes output "yes" in configuration (C, ε), a contradiction. In contrast, we next show that tree ∈ NLD.
The (nondeterministic) local algorithm A verifying tree operates as follows. Given a configuration (G, ε), the certificate given at node v is y(v) = dist_G(v, r), where r ∈ V(G) is an arbitrary fixed node. The verification procedure is then as follows. At each node v, A inspects every neighbor (with its certificate) and verifies the following:

• y(v) is a non-negative integer,
• if y(v) = 0, then y(w) = 1 for every neighbor w of v, and
• if y(v) > 0, then there exists a neighbor w of v such that y(w) = y(v) − 1, and, for all other neighbors w′ of v, we have y(w′) = y(v) + 1.

If G is a tree, then applying Algorithm A on G with this certificate yields the answer "yes" at all nodes, regardless of the given id-assignment. On the other hand, if G is not a tree, then we claim that for every certificate and every id-assignment Id, Algorithm A outputs "no" at some node. Indeed, consider some certificate y given to the nodes of G, and let C be a simple cycle in G. Assume, for the sake of contradiction, that all nodes in C output "yes". In this case, each node in C has at least one neighbor in C with a larger certificate. This creates an infinite sequence of strictly increasing certificates, in contradiction with the finiteness of C.

Theorem 4.2 There exists a language L such that L ∉ NLD(t), for any t = o(n).

Proof. Let InpEqSize = {(G, x) | ∀v ∈ V(G), x(v) = |V(G)|}. We show that InpEqSize ∉ NLD(t), for any t = o(n). Assume, for the sake of contradiction, that there exists a local nondeterministic algorithm A deciding InpEqSize. Let t < n/4 be the running time of A. Consider the cycle C with 2t + 1 nodes u_1, u_2, ..., u_{2t+1}, enumerated clockwise. Assume that the input at each node u_i of C satisfies x(u_i) = 2t + 1. Then, there exists a certificate y such that, for any identity assignment Id, algorithm A outputs "yes" at each node of C.
Now, consider the configuration (C′, x′) where the cycle C′ has 4t + 2 nodes, and for each node v_i of C′, x′(v_i) = 2t + 1. We have (C′, x′) ∉ InpEqSize. To fool Algorithm A, we enumerate the nodes in C′ clockwise, i.e., C′ = (v_1, v_2, ..., v_{4t+2}). We then define the certificate y′ as follows:

y′(v_i) = y′(v_{i+2t+1}) = y(u_i) for i = 1, 2, ..., 2t + 1.

Fix an id-assignment Id′ for the nodes in V(C′), and fix i ∈ {1, 2, ..., 2t + 1}. There exists an id-assignment Id_1 for the nodes in V(C) such that the output of A at node v_i in (C′, x′) with certificate y′ and id-assignment Id′ is identical to the output of A at node u_i in (C, x) with certificate y and id-assignment Id_1. Similarly, there exists an id-assignment Id_2 for the nodes in V(C) such that the output of A at node v_{i+2t+1} in (C′, x′) with certificate y′ and id-assignment Id′ is identical to the output of A at node u_i in (C, x) with certificate y and id-assignment Id_2. Thus, Algorithm A at both v_i and v_{i+2t+1} outputs "yes" in (C′, x′) with certificate y′ and id-assignment Id′. Hence, since i was arbitrary, all nodes output "yes" for this configuration, certificate, and id-assignment, contradicting the fact that (C′, x′) ∉ InpEqSize.

For p, q ∈ (0, 1] and a function t, let us define BPNLD(t, p, q) as the class of all distributed languages that have a local randomized non-deterministic distributed (p, q)-decider running in time t.

Theorem 4.3 Let p, q ∈ (0, 1] be such that p^2 + q ≤ 1. For every language L, we have L ∈ BPNLD(1, p, q).

Proof. Let L be a language. The certificate of a configuration (G, x) ∈ L is a map of G, with nodes labeled with distinct integers in {1, ..., n}, where n = |V(G)|, together with the inputs of all nodes in G. In addition, every node v receives the label λ(v) of the corresponding vertex in the map.
Precisely, the certificate at node v is y(v) = (G′, x′, i), where G′ is an isomorphic copy of G with nodes labeled from 1 to n, x′ is an n-dimensional vector such that x′[λ(u)] = x(u) for every node u, and i = λ(v). The verification algorithm involves checking that the configuration (G′, x′) is identical to (G, x). This is sufficient because distributed languages are sequentially decidable, hence every node can individually decide whether (G′, x′) belongs to L or not, once it has secured the fact that (G′, x′) is the actual configuration. It remains to show that there exists a local randomized non-deterministic distributed (p, q)-decider, running in time 1, for verifying that the configuration (G′, x′) is identical to (G, x).

The non-deterministic (p, q)-decider operates as follows. First, every node v checks that it has received the input as specified by x′, i.e., v checks whether x′[λ(v)] = x(v), and outputs "no" if this does not hold. Second, each node v communicates with its neighbors to check that (1) they all got the same map G′ and the same input vector x′, and (2) they are labeled the way they should be according to the map G′. If some inconsistency is detected by a node, then this node outputs "no". Finally, consider a node v that passed the aforementioned two phases without outputting "no". If λ(v) ≠ 1, then v outputs "yes" (with probability 1), and if λ(v) = 1, then v outputs "yes" with probability p.

We claim that the above implements a non-deterministic distributed (p, q)-decider for verifying that the configuration (G′, x′) is identical to (G, x). Indeed, if all nodes pass the two phases without outputting "no", then they all agree on the map G′ and on the input vector x′, and they know that their respective neighborhoods fit with what is indicated on the map. Hence, (G′, x′) is a lift of (G, x).
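As a quick numerical sanity check of this gadget (entirely our own sketch, not part of the proof; the list entries stand for the labels λ(v)):

```python
def accept_probability(labels, p):
    """Exact probability that all nodes say "yes" in the gadget above:
    a node labeled 1 says "yes" with probability p, every other node
    says "yes" with probability 1, and coin tosses are independent."""
    return p ** sum(1 for lab in labels if lab == 1)

p, q = 0.7, 0.5                     # p^2 + q = 0.99 <= 1
# Correct certificate: exactly one node labeled 1 -> all-"yes" w.p. p.
assert accept_probability([1, 2, 3], p) == p
# A double lift: two nodes labeled 1 -> all-"yes" only w.p. p^2,
# so some node says "no" with probability 1 - p^2 >= q.
assert 1 - accept_probability([1, 2, 3, 1, 2, 3], p) >= q
```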
It follows that (G′, x′) = (G, x) if and only if at most one node v ∈ G has a label satisfying λ(v) = 1. Consequently, if (G′, x′) = (G, x), then all nodes say "yes" with probability at least p. On the other hand, if (G′, x′) ≠ (G, x), then there are at least two nodes in G whose label is 1. These two nodes both say "yes" with probability p^2; hence, the probability that at least one of them says "no" is at least 1 − p^2 ≥ q. This completes the proof of Theorem 4.3.

The above theorem guarantees that the following definition is well defined. Let BPNLD = BPNLD(1, p, q), for some p, q ∈ (0, 1] such that p^2 + q ≤ 1.

Completeness results

Let us first define a notion of reduction that fits the class LD. For two languages L_1, L_2, we say that L_1 is locally reducible to L_2, denoted by L_1 ⪯ L_2, if there exists a constant time local algorithm A such that, for every configuration (G, x) and every id-assignment Id, A produces out(v) ∈ {0, 1}* as output at every node v ∈ V(G) so that

(G, x) ∈ L_1 ⟺ (G, out) ∈ L_2.

By definition, LD(O(t)) is closed under local reductions, that is, for every two languages L_1, L_2 satisfying L_1 ⪯ L_2, if L_2 ∈ LD(O(t)) then L_1 ∈ LD(O(t)). We now show that there exists a natural problem, called cover, which is in some sense the "most difficult" decision problem; that is, we show that cover is BPNLD-complete. Language cover is defined as follows. Every node v is given as input an element E(v) and a finite collection of sets S(v). The union of these inputs is in the language if there exists a node v such that one set in S(v) equals the union of all the elements given to the nodes. Formally, cover = {(G, (E, S)) | ∃v ∈ V(G), ∃S ∈ S(v) s.t. S = {E(u) | u ∈ V(G)}}.

Proof. The fact that cover ∈ BPNLD follows from Theorem 4.3. To prove that cover is BPNLD-hard, we consider some L ∈ BPNLD and show that L ⪯ cover.
For this purpose, we describe a local distributed algorithm A transforming any configuration for L to a configuration for cover, preserving membership in these languages. Let (G, x) be a configuration for L, and let Id be an identity assignment. Algorithm A operating at a node v outputs a pair (E(v), S(v)), where E(v) is the "local view" at v in (G, x), i.e., the star subgraph of G consisting of v and its neighbors, together with the inputs of these nodes and their identities, and S(v) is the collection of sets S defined as follows. For a binary string x, let |x| denote the length of x, i.e., the number of bits in x. For every vertex v, let ψ(v) = 2^{|Id(v)| + |x(v)|}. Node v first generates all configurations (G′, x′) where G′ is a graph with k ≤ ψ(v) vertices and x′ is a collection of k input strings of length at most ψ(v), such that (G′, x′) ∈ L. For each such configuration (G′, x′), node v generates all possible Id′ assignments to V(G′) such that for every node u ∈ V(G′), |Id′(u)| ≤ ψ(v). Now, for each such pair of a configuration (G′, x′) and an Id′ assignment, algorithm A associates a set S ∈ S(v) consisting of the k = |V(G′)| local views of the nodes of G′ in (G′, x′).

We show that (G, x) ∈ L ⟺ A(G, x) ∈ cover. If (G, x) ∈ L, then by the construction of Algorithm A, there exists a set S ∈ S(v) such that S covers the collection of local views for (G, x), i.e., S = {E(u) | u ∈ G}. Indeed, the node v maximizing ψ(v) satisfies ψ(v) ≥ max{Id(u) | u ∈ V(G)} ≥ n and ψ(v) ≥ max{x(u) | u ∈ V(G)}. Therefore, that specific node has constructed a set S which contains all local views of the given configuration (G, x) and Id assignment. Thus A(G, x) ∈ cover. Now consider the case that A(G, x) ∈ cover. In this case, there exists a node v and a set S ∈ S(v) such that S = {E(u) | u ∈ G}.
Such a set S is the collection of local views of the nodes of some configuration (G′, x′) ∈ L with some Id′ assignment. On the other hand, S is also the collection of local views of the nodes of the given configuration (G, x) with its Id assignment. It follows that (G, x) = (G′, x′) ∈ L.

We now define a natural problem, called containment, which is NLD(O(1))-complete. Somewhat surprisingly, the definition of containment is quite similar to the definition of cover. Specifically, as in cover, every node v is given as input an element E(v) and a finite collection of sets S(v). However, in contrast to cover, the union of these inputs is in the containment language if there exists a node v such that one set in S(v) contains the union of all the elements given to the nodes. Formally, we define containment = {(G, (E, S)) | ∃v ∈ V(G), ∃S ∈ S(v) s.t. S ⊇ {E(u) | u ∈ V(G)}}.

Proof. We first prove that containment is NLD(O(1))-hard. Consider some L ∈ NLD(O(1)); we show that L ⪯ containment. For this purpose, we describe a local distributed algorithm D transforming any configuration for L to a configuration for containment, preserving membership in these languages. Let t = t_L ≥ 0 be some (constant) integer such that there exists a local nondeterministic algorithm A_L deciding L in time at most t. Let (G, x) be a configuration for L, and let Id be an identity assignment. Algorithm D operating at a node v outputs a pair (E(v), S(v)), where E(v) is the "t-local view" at v in (G, x), i.e., the ball of radius t around v, B_G(v, t), together with the inputs of these nodes and their identities, and S(v) is the collection of sets S defined as follows. For a binary string x, let |x| denote the length of x, i.e., the number of bits in x. For every vertex v, let ψ(v) = 2^{|Id(v)| + |x(v)|}. Node v first generates all configurations (G′, x′) where G′ is a graph with m ≤ ψ(v) vertices and x′ is a collection of m input strings of length at most ψ(v), such that (G′, x′) ∈ L.
For each such configuration (G′, x′), node v generates all possible Id′ assignments to V(G′) such that for every node u ∈ V(G′), |Id′(u)| ≤ ψ(v). Now, for each such pair of a configuration (G′, x′) and an Id′ assignment, algorithm D associates a set S ∈ S(v) consisting of the m = |V(G′)| t-local views of the nodes of G′ in (G′, x′).

We show that (G, x) ∈ L ⟺ D(G, x) ∈ containment. If (G, x) ∈ L, then by the construction of Algorithm D, there exists a set S ∈ S(v) such that S covers the collection of t-local views for (G, x), i.e., S = {E(u) | u ∈ G}. Indeed, the node v maximizing ψ(v) satisfies ψ(v) ≥ max{Id(u) | u ∈ V(G)} ≥ n and ψ(v) ≥ max{x(u) | u ∈ V(G)}. Therefore, that specific node has constructed a set S that precisely corresponds to (G, x) and its given Id assignment; hence, S contains all the corresponding t-local views. Thus, D(G, x) ∈ containment.

Now consider the case that D(G, x) ∈ containment. In this case, there exists a node v and a set S ∈ S(v) such that S ⊇ {E(u) | u ∈ G}. Such a set S is the collection of t-local views of the nodes of some configuration (G′, x′) ∈ L with some Id′ assignment. Since (G′, x′) ∈ L, there exists a certificate y′ for the nodes of G′ such that, when algorithm A_L operates on (G′, x′, y′), all nodes say "yes". Now, since S contains the t-local views of the nodes of (G, x), with the corresponding identities, there exists a mapping φ : (G, x, Id) → (G′, x′, Id′) that preserves inputs and identities. Moreover, when restricted to a ball of radius t around a vertex v ∈ G, φ is actually an isomorphism between this ball and its image. We assign a certificate y to the nodes of G: for each v ∈ V(G), y(v) = y′(φ(v)). Now, Algorithm A_L, when operating on (G, x, y), outputs "yes" at each node of G. By the correctness of A_L, we obtain (G, x) ∈ L. We now show that containment ∈ NLD(O(1)).
For this purpose, we design a nondeterministic local algorithm A that decides whether a configuration (G, x) is in containment. Such an algorithm A is designed to operate on (G, x, y), where y is a certificate. The configuration (G, x) satisfies x(v) = (E(v), S(v)). Algorithm A aims at verifying whether there exists a node v* with a set S* ∈ S(v*) such that S* ⊇ {E(v) | v ∈ V(G)}. Given a correct instance, i.e., a configuration (G, x) ∈ containment, we define the certificate y as follows. For each node v, the certificate y(v) at v consists of several fields, specifically, y(v) = (y_c(v), y_s(v), y_id(v), y_l(v)). The candidate-configuration field y_c(v) is a triplet y_c(v) = (G′, x′, Id′), where (G′, x′) is an isomorphic copy of (G, x) and Id′ is an [⋯].

(Regarding the success probabilities p and q, one can modify them by performing k runs and requiring each node to individually output "no" if it decided "no" on at least one of the runs. In this case, the "no" success probability increases from q to at least 1 − (1 − q)^k, and the "yes" success probability decreases from p to p^k.) Another interesting question is whether the phenomenon we observed regarding randomization also occurs in the non-deterministic setting, that is, whether BPNLD(t, p, q) collapses into NLD(O(t)) for p^2 + q > 1.

Our model of computation, namely the LOCAL model, focuses on difficulties arising from purely locality issues, and abstracts away other complexity measures. Naturally, it would be very interesting to come up with a rigorous complexity framework taking into account other complexity measures as well. For example, it would be interesting to investigate the connections between classical computational complexity theory and the local complexity one.
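The run-amplification trade-off mentioned in the parenthetical remark above is easy to transcribe (a direct restatement of the probabilities from the text, for illustration only):

```python
def amplified(p, q, k):
    """Success probabilities after k independent runs when a node
    outputs "no" iff it said "no" in at least one run: the "yes"
    probability drops to p^k while the "no" probability rises to
    1 - (1 - q)^k, exactly as noted in the text."""
    return p ** k, 1 - (1 - q) ** k

p3, q3 = amplified(0.9, 0.2, 3)
assert abs(p3 - 0.9 ** 3) < 1e-12          # "yes" probability shrinks
assert abs(q3 - (1 - 0.8 ** 3)) < 1e-12    # "no" probability grows
```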
The bound on the (centralized) running time in each round (given by the function f, see Section 2) may serve as a bridge for connecting the two theories, by putting constraints on this bound (e.g., requiring f to be polynomial, exponential, etc.). Also, one could restrict the memory used by a node, in addition to, or instead of, bounding the sequential time. Finally, it would be interesting to come up with a complexity framework that also takes congestion into account.
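As a concrete companion to the certificate-based verifier for tree described earlier, here is a sketch of the local check (our own modelling: graphs as adjacency dicts; `verify_tree_certificate` and its data layout are illustrative, not the paper's notation):

```python
def verify_tree_certificate(adj, y):
    """Each node locally checks its certificate y[v] (a claimed distance
    to a fixed root) against its neighbors', following the three tests
    in the text. G is accepted iff every node outputs True."""
    out = {}
    for v, nbrs in adj.items():
        if not (isinstance(y[v], int) and y[v] >= 0):
            out[v] = False
        elif y[v] == 0:
            out[v] = all(y[w] == 1 for w in nbrs)   # the root's test
        else:
            # one neighbor one step closer, all others one step farther
            parents = [w for w in nbrs if y[w] == y[v] - 1]
            out[v] = (len(parents) == 1 and
                      all(y[w] == y[v] + 1 for w in nbrs if w != parents[0]))
    return out

# A path (a tree) with true distances is accepted at every node...
path = {0: [1], 1: [0, 2], 2: [1]}
assert all(verify_tree_certificate(path, {0: 0, 1: 1, 2: 2}).values())
# ...while on a triangle this particular certificate is rejected somewhere.
tri = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
assert not all(verify_tree_certificate(tri, {0: 0, 1: 1, 2: 1}).values())
```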
10,308
1011.0041
2949683464
We propose a new approach to value function approximation which combines linear temporal difference reinforcement learning with subspace identification. In practical applications, reinforcement learning (RL) is complicated by the fact that state is either high-dimensional or partially observable. Therefore, RL methods are designed to work with features of state rather than state itself, and the success or failure of learning is often determined by the suitability of the selected features. By comparison, subspace identification (SSID) methods are designed to select a feature set which preserves as much information as possible about state. In this paper we connect the two approaches, looking at the problem of reinforcement learning with a large set of features, each of which may only be marginally useful for value function approximation. We introduce a new algorithm for this situation, called Predictive State Temporal Difference (PSTD) learning. As in SSID for predictive state representations, PSTD finds a linear compression operator that projects a large set of features down to a small set that preserves the maximum amount of predictive information. As in RL, PSTD then uses a Bellman recursion to estimate a value function. We discuss the connection between PSTD and prior approaches in RL and SSID. We prove that PSTD is statistically consistent, perform several experiments that illustrate its properties, and demonstrate its potential on a difficult optimal stopping problem.
Partially observable Markov decision processes (POMDPs) extend MDPs to situations where the state is not directly observable @cite_31 @cite_14 @cite_29 . In this circumstance, an agent can plan using a continuous belief space with dimensionality equal to the number of hidden states in the POMDP. When the number of hidden states is large, dimensionality reduction in POMDPs can be achieved by projecting a high-dimensional belief space to a lower-dimensional one; of course, the difficulty is to find a projection which preserves decision quality. Strategies for finding good projections include value-directed compression @cite_15 and non-negative matrix factorization @cite_3 @cite_21 . The resulting model after compression is a Predictive State Representation (PSR) @cite_36 @cite_6 , an Observable Operator Model @cite_30 , or a multiplicity automaton @cite_26 . Moving to one of these representations can often compress a POMDP by a large factor with little or no loss in accuracy: examples exist with arbitrarily large lossless compression factors, and in practice, we can often achieve large compression ratios with little loss.
{ "abstract": [ "A widely used class of models for stochastic systems is hidden Markov models. Systems that can be modeled by hidden Markov models are a proper subclass of linearly dependent processes, a class of stochastic systems known from mathematical investigations carried out over the past four decades. This article provides a novel, simple characterization of linearly dependent processes, called observable operator models. The mathematical properties of observable operator models lead to a constructive learning algorithm for the identification of linearly dependent processes. The core of the algorithm has a time complexity of O (N + nm3), where N is the size of training data, n is the number of distinguishable outcomes of observations, and m is model state-space dimension.", "", "Planning and learning in Partially Observable MDPs (POMDPs) are among the most challenging tasks in both the AI and Operation Research communities. Although solutions to these problems are intractable in general, there might be special cases, such as structured POMDPs, which can be solved efficiently. A natural and possibly efficient way to represent a POMDP is through the predictive state representation (PSR) — a representation which recently has been receiving increasing attention. In this work, we relate POMDPs to multiplicity automata — showing that POMDPs can be represented by multiplicity automata with no increase in the representation size. Furthermore, we show that the size of the multiplicity automaton is equal to the rank of the predictive state representation. Therefore, we relate both the predictive state representation and POMDPs to the well-founded multiplicity automata literature. Based on the multiplicity automata representation, we provide a planning algorithm which is exponential only in the multiplicity automata rank rather than the number of states of the POMDP. 
As a result, whenever the predictive state representation is logarithmic in the standard POMDP representation, our planning algorithm is efficient.", "We examine the problem of generating state-space compressions of POMDPs in a way that minimally impacts decision quality. We analyze the impact of compressions on decision quality, observing that compressions that allow accurate policy evaluation (prediction of expected future reward) will not affect decision quality We derive a set of sufficient conditions that ensure accurate prediction in this respect, illustrate interesting mathematical properties these confer on lossless linear compressions, and use these to derive an iterative procedure for finding good linear lossy compressions. We also elaborate on how structured representations of a POMDP can be used to find such compressions.", "", "In this paper, we describe the partially observable Markov decision process (POMDP) approach to finding optimal or near-optimal control strategies for partially observable stochastic environments, given a complete model of the environment. The POMDP approach was originally developed in the operations research community and provides a formal basis for planning problems that have been of interest to the AI community. We found the existing algorithms for computing optimal control strategies to be highly computationally inefficient and have developed a new algorithm that is empirically more efficient. We sketch this algorithm and present preliminary results on several small problems that illustrate important properties of the POMDP approach.", "", "High dimensionality of POMDP's belief state space is one major cause that makes the underlying optimal policy computation intractable. Belief compression refers to the methodology that projects the belief state space to a low-dimensional one to alleviate the problem. In this paper, we propose a novel orthogonal non-negative matrix factorization (O-NMF) for the projection. 
The proposed O-NMF not only factors the belief state space by minimizing the reconstruction error, but also allows the compressed POMDP formulation to be efficiently computed (due to its orthogonality) in a value-directed manner so that the value function will take same values for corresponding belief states in the original and compressed state spaces. We have tested the proposed approach using a number of benchmark problems and the empirical results confirms its effectiveness in achieving substantial computational cost saving in policy computation.", "", "" ], "cite_N": [ "@cite_30", "@cite_14", "@cite_26", "@cite_15", "@cite_36", "@cite_29", "@cite_21", "@cite_3", "@cite_6", "@cite_31" ], "mid": [ "2149960632", "1484113995", "1539249135", "2138089556", "", "2099873296", "", "2100078368", "", "2302698253" ] }
Predictive State Temporal Difference Learning
Value Function Approximation

We start from a discrete time dynamical system with a set of states S, a set of actions A, a distribution over initial states π_0, a state transition function T, a reward function R, and a discount factor γ ∈ [0, 1]. We seek a policy π, a mapping from states to actions. The notion of a value function is of central importance in reinforcement learning: for a given policy π, the value of state s is defined as the expected discounted sum of rewards obtained when starting in state s and following policy π, J^π(s) = E[Σ_{t=0}^∞ γ^t R(s_t) | s_0 = s, π]. It is well known that the value function must obey the Bellman equation

J^π(s) = R(s) + γ Σ_{s'} J^π(s') Pr[s' | s, π(s)]    (1)

If we know the transition function T, and if the set of states S is sufficiently small, we can use (1) directly to solve for the value function J^π. We can then execute the greedy policy for J^π, setting the action at each state to maximize the right-hand side of (1). However, we consider instead the harder problem of estimating the value function when s is a partially observable latent variable, and when the transition function T is unknown. In this situation, we receive information about s through observations from a finite set O. Our state (i.e., the information which we can use to make decisions) is not an element of S but a history (an ordered sequence of action-observation pairs h = a^h_1 o^h_1 ... a^h_t o^h_t that have been executed and observed prior to time t). If we knew the transition model T, we could use h to infer a belief distribution over S, and use that belief (or a compression of that belief) as a state instead; below, we will discuss how to learn a compressed belief state. Because of partial observability, we can only hope to predict reward conditioned on history, R(h) = E[R(s) | h], and we must choose actions as a function of history, π(h) instead of π(s). Let H be the set of all possible histories.
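When the model is known and small, Eq. 1 is just a linear system. A minimal sketch on a toy 3-state chain (all numbers made up for illustration), solving the Bellman equation directly and by iterating the Bellman backup:

```python
import numpy as np

# Toy illustration of Eq. 1: with a known model and a small state set,
# the Bellman equation is a linear system we can solve directly.
# The 3-state chain below uses made-up numbers.
P = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0]])   # P[s, s'] = Pr[s' | s, pi(s)]
R = np.array([0.0, 0.0, 1.0])     # R(s)
gamma = 0.9

# Eq. 1 rearranged: (I - gamma P) J = R
J = np.linalg.solve(np.eye(3) - gamma * P, R)

# The same fixed point is reached by iterating the Bellman backup.
J_iter = np.zeros(3)
for _ in range(1000):
    J_iter = R + gamma * P @ J_iter
```

Both routes agree; e.g., the absorbing third state satisfies J = 1 + γJ, so its value is 1/(1 − γ).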
H is often very large or infinite, so instead of finding a value separately for each history, we focus on value functions that are linear in features of histories:

J^π(h) = w^T φ^H(h)    (2)

Here w ∈ R^j is a parameter vector and φ^H(h) ∈ R^j is a feature vector for a history h. So, we can rewrite the Bellman equation as

w^T φ^H(h) = R(h) + γ Σ_{o∈O} w^T φ^H(hπo) Pr[hπo | hπ]    (3)

where hπo is history h extended by taking action π(h) and observing o.

Least Squares Temporal Difference Learning

In general we don't know the transition probabilities Pr[hπo | h], but we do have samples of state features φ^H_t = φ^H(h_t), next-state features φ^H_{t+1} = φ^H(h_{t+1}), and immediate rewards R_t = R(h_t). We can thus approximate the Bellman equation as

w^T φ^H_{1:k} ≈ R_{1:k} + γ w^T φ^H_{2:k+1}    (4)

(Here we have used the notation φ^H_{1:k} to mean the matrix whose columns are φ^H_t for t = 1 ... k.) We can immediately attempt to estimate the parameter w by solving this linear system in the least squares sense: ŵ^T = R_{1:k} (φ^H_{1:k} − γ φ^H_{2:k+1})†, where † indicates the Moore-Penrose pseudoinverse. However, this solution is biased [3], since the independent variables φ^H_t − γ φ^H_{t+1} are noisy samples of the expected difference E[φ^H(h) − γ Σ_{o∈O} φ^H(hπo) Pr[hπo | h]]. In other words, estimating the value function parameters w is an error-in-variables problem. The least squares temporal difference (LSTD) algorithm provides a consistent estimate by right-multiplying the approximate Bellman equation (Equation 4) by φ^H_t^T. The quantity φ^H_t can be viewed as an instrumental variable [3], i.e., a measurement that is correlated with the true independent variables, but uncorrelated with the noise in our estimates of these variables.
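The instrumental-variable correction above can be sketched on a synthetic Markov reward process with one-hot features (so the linear value function is exact in the limit); the transition matrix, rewards, and discount below are made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# LSTD sketch on a synthetic Markov reward process with one-hot features.
P = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.5,  0.0,  0.5]])       # P[s, s'] = Pr[s' | s]
R = np.array([1.0, 0.0, 2.0])
gamma = 0.8
J_true = np.linalg.solve(np.eye(3) - gamma * P, R)

# Simulate one long trajectory by inverse-CDF sampling.
k = 100_000
cumP = np.cumsum(P, axis=1)
u = rng.random(k)
states = np.empty(k + 1, dtype=int)
states[0] = 0
for t in range(k):
    states[t + 1] = np.searchsorted(cumP[states[t]], u[t])

phi = np.eye(3)[states]                  # one-hot features, shape (k+1, 3)

# w^T = (sum_t R_t phi_t^T)(sum_t phi_t phi_t^T - gamma sum_t phi_{t+1} phi_t^T)^{-1}
A = phi[:-1].T @ phi[:-1] - gamma * phi[1:].T @ phi[:-1]
b = R[states[:-1]] @ phi[:-1]
w = b @ np.linalg.inv(A)                 # approaches J_true as k grows
```

With one-hot features the LSTD weights converge to the true values J_true as the trajectory length grows.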
The value function parameter w may then be estimated as follows:

ŵ^T = (1/k Σ_{t=1}^k R_t φ^H_t^T) (1/k Σ_{t=1}^k φ^H_t φ^H_t^T − γ/k Σ_{t=1}^k φ^H_{t+1} φ^H_t^T)^{−1}    (5)

As the amount of data k increases, the empirical covariance matrices in (5) converge to their true expectations. Therefore, as long as the matrix being inverted is nonsingular, our estimate of its inverse is also consistent, and our estimate of w converges to the true parameters with probability 1.

Predictive Features

Although LSTD provides a consistent estimate of the value function parameters w, in practice, the potential size of the feature vectors can be a problem. If the number of features is large relative to the number of training samples, then the estimation of w is prone to overfitting. This problem can be alleviated by choosing some small set of features that only contain information that is relevant for value function approximation. However, with the exception of LARS-TD [18], there has been little work on the problem of how to select features automatically for value function approximation when the system model is unknown; and of course, manual feature selection depends on not-always-available expert guidance. We approach the problem of finding a good set of features from a bottleneck perspective. That is, given some signal from history, in this case a large set of features, we would like to find a compression that preserves only relevant information for predicting the value function J^π. As we will see in Section 4, this improvement is directly related to spectral identification of PSRs.

Tests and Features of the Future

We first need to define precisely the task of predicting the future. Just as a history is an ordered sequence of action-observation pairs executed prior to time t, we define a test of length i to be an ordered sequence of action-observation pairs τ = a_1 o_1 ... a_i o_i that can be executed and observed after time t [14].
The prediction for a test τ after a history h, written τ(h), is the probability that we will see the test observations τ^O = o_1 ... o_i, given that we intervene [22] to execute the test actions τ^A = a_1 ... a_i: τ(h) = Pr[τ^O | h, do(τ^A)]. If Q = {τ_1, ..., τ_n} is a set of tests, we write Q(h) = (τ_1(h), ..., τ_n(h))^T for the corresponding vector of test predictions. We can generalize the notion of a test to a feature of the future, a linear combination of several tests sharing a common action sequence. For example, if τ_1 and τ_2 are two tests with τ^A_1 = τ^A_2 ≡ τ^A, then we can make a feature φ = 3τ_1 + τ_2. This feature is executed if we intervene to do(τ^A), and if it is executed its value is 3I(τ^O_1) + I(τ^O_2), where I(o_1 ... o_i) stands for an indicator random variable, taking the value 0 or 1 depending on whether we observe the sequence of observations o_1 ... o_i. The prediction of φ given h is φ(h) ≡ E(φ | h, do(τ^A)) = 3τ_1(h) + τ_2(h). While linear combinations of tests may seem restrictive, our definition is actually very expressive: we can represent an arbitrary function of a finite sequence of future observations. To do so, we take a collection of tests, each of which picks out one possible realization of the sequence, and weight each test by the value of the function conditioned on that realization. For example, if our observations are integers 1, 2, ..., 10, we can write the square of the next observation as Σ_{o=1}^{10} o^2 I(o), and the mean of the next two observations as Σ_{o=1}^{10} Σ_{o'=1}^{10} (1/2)(o + o') I(o, o'). The restriction to a common action sequence is necessary: without it, all the tests making up a feature could never be executed at once. Once we move to feature predictions, however, it makes sense to lift this restriction: we will say that any linear combination of feature predictions is also a feature prediction, even if the features involved have different action sequences.
Action sequences raise some problems with obtaining empirical estimates of means and covariances of features of the future: e.g., it is not always possible to get a sample of a particular feature's value on every time step, and the feature we choose to sample at one step can restrict which features we can sample at subsequent steps. In order to carry out our derivations without running into these problems repeatedly, we will assume for the rest of the paper that we can reset our system after every sample, and get a new history independently distributed as h_t ∼ ω for some distribution ω. (With some additional bookkeeping we could remove this assumption [23], but this bookkeeping would unnecessarily complicate our derivations.) Furthermore, we will introduce some new language, again to keep derivations simple: if we have a vector of features of the future φ^T, we will pretend that we can get a sample φ^T_t in which we evaluate all of our features starting from a single history h_t, even if the different elements of φ^T require us to execute different action sequences. When our algorithms call for such a sample, we will instead use the following trick to get a random vector with the correct expectation (and somewhat higher variance, which doesn't matter for any of our arguments): write τ^A_1, τ^A_2, ... for the different action sequences, and let ζ_1, ζ_2, ... > 0 be a probability distribution over these sequences. We pick a single action sequence τ^A_a according to ζ, and execute τ^A_a to get a sample φ̂^T of the features which depend on τ^A_a. We then enter φ̂^T/ζ_a into the corresponding coordinates of φ^T_t, and fill in zeros everywhere else. It is easy to see that the expected value of our sample vector is then correct: the probability of selection ζ_a and the weighting factor 1/ζ_a cancel out. We will write E(φ^T | h_t, do(ζ)) to stand for this expectation.
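The unbiasedness of this importance-weighting trick is easy to check numerically. A minimal sketch with made-up values: two action sequences, each "owning" a subset of the coordinates of φ^T, where true_vals[i] plays the role of the expected value of feature i given its action sequence is executed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Importance-weighting trick: sample one action sequence per (reset) step,
# reweight the observed features by 1/zeta_a, fill zeros elsewhere.
owner = np.array([0, 0, 1])            # action sequence required by each feature
true_vals = np.array([2.0, -1.0, 3.0]) # made-up expected feature values
zeta = np.array([0.7, 0.3])            # sampling distribution over sequences

n = 200_000
a = rng.choice(2, size=n, p=zeta)      # one action sequence per sample
# mask[t, i] is True iff feature i's action sequence was executed at step t.
mask = owner[None, :] == a[:, None]
samples = mask * true_vals[None, :] / zeta[a][:, None]
est = samples.mean(axis=0)             # unbiased estimate of true_vals
```

The selection probability ζ_a and the weight 1/ζ_a cancel, so est converges to true_vals even though only one action sequence is executed per sample.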
None of the above tricks are actually necessary in our experiments with stopping problems: we simply execute the "continue" action on every step, and use only sequences of "continue" actions in every test and feature.

Finding Predictive Features Through a Bottleneck

In order to find a predictive feature compression, we first need to determine what we would like to predict. Since we are interested in value function approximation, the most relevant prediction is the value function itself; so, we could simply try to predict total future discounted reward given a history. Unfortunately, total discounted reward has high variance, so unless we have a lot of data, learning will be difficult. We can reduce variance by including other prediction tasks as well. For example, predicting individual rewards at future time steps, while not strictly necessary to predict total discounted reward, seems highly relevant, and gives us much more immediate feedback. Similarly, future observations hopefully contain information about future reward, so trying to predict observations can help us predict reward better. Finally, in any specific RL application, we may be able to add problem-specific prediction tasks that will help focus our attention on relevant information: for example, in a path-planning problem, we might try to predict which of several goal states we will reach (in addition to how much it will cost to get there). We can represent all of these prediction tasks as features of the future: e.g., to predict which goal we will reach, we add a distinct observation at each goal state, or to predict individual rewards, we add individual rewards as observations. We will write φ^T_t for the vector of all features of the "future at time t," i.e., events starting at time t + 1 and continuing forward. So, instead of remembering a large arbitrary set of features of history, we want to find a small subspace of features of history that is relevant for predicting features of the future.
We will call this subspace a predictive compression, and we will write the value function as a linear function of only the predictive compression of features. To find our predictive compression, we will use reduced-rank regression [24]. We define the following empirical covariance matrices between features of the future and features of histories:

Σ̂_{T,H} = (1/k) Σ_{t=1}^k φ^T_t φ^H_t^T        Σ̂_{H,H} = (1/k) Σ_{t=1}^k φ^H_t φ^H_t^T    (6)

Let L_H be the lower triangular Cholesky factor of Σ̂_{H,H}. Then we can find a predictive compression of histories by a singular value decomposition (SVD) of the weighted covariance: write

U D V^T ≈ Σ̂_{T,H} L_H^{−T}    (7)

for a truncated SVD [25] of the weighted covariance, where U are the left singular vectors, V^T are the right singular vectors, and D is the diagonal matrix of singular values. The number of columns of U, V, or D is equal to the number of retained singular values. Then we define

Û = U D^{1/2}    (8)

to be the mapping from the low-dimensional compressed space up to the high-dimensional space of features of the future. Given Û, we would like to find a compression operator V̂ that optimally predicts features of the future through the bottleneck defined by Û. The least squares estimate can be found by minimizing the loss

L(V̂) = ||φ^T_{1:k} − Û V̂ φ^H_{1:k}||²_F    (9)

where ||·||_F denotes the Frobenius norm. We can find the minimum by taking the derivative of this loss with respect to V̂, setting it to zero, and solving for V̂ (see Appendix, Section A for details), giving us:

V̂ = arg min_{V̂} L(V̂) = Û† Σ̂_{T,H} (Σ̂_{H,H})^{−1}    (10)

By weighting different features of the future differently, we can change the approximate compression in interesting ways. For example, as we will see in Section 4.2, scaling up future reward by a constant factor results in a value-directed compression; but, unlike previous ways to find value-directed compressions [11], we do not need to know a model of our system ahead of time.
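The bottleneck computation in Eqs. 6-10 can be sketched on synthetic data where features of the future are a noisy rank-3 linear function of features of history (all dimensions, noise levels, and the seed below are made-up assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Predictive bottleneck sketch: future features = rank-3 map of history features.
k, dH, dT, r = 50_000, 8, 6, 3
M = rng.standard_normal((dT, r)) @ rng.standard_normal((r, dH))  # true rank-3 map
phiH = rng.standard_normal((dH, k))
phiT = M @ phiH + 0.1 * rng.standard_normal((dT, k))

Sigma_TH = phiT @ phiH.T / k
Sigma_HH = phiH @ phiH.T / k

L_H = np.linalg.cholesky(Sigma_HH)               # lower-triangular Cholesky factor
W = Sigma_TH @ np.linalg.inv(L_H).T              # weighted covariance (Eq. 7)
U, d, _ = np.linalg.svd(W, full_matrices=False)
U, d = U[:, :r], d[:r]                           # keep r singular values

U_hat = U * np.sqrt(d)                           # Eq. 8: U D^{1/2}
# Least-squares compression operator through the bottleneck (Eq. 10),
# written with an explicit pseudoinverse:
V_hat = np.linalg.pinv(U_hat) @ Sigma_TH @ np.linalg.inv(Sigma_HH)

# Because the true map has rank 3, predicting through the bottleneck
# recovers the full linear regression of phi^T on phi^H.
W_full = Sigma_TH @ np.linalg.inv(Sigma_HH)
```

Since the retained rank matches the true rank, Û V̂ agrees (up to noise) with the unconstrained regression W_full, while using only an r-dimensional intermediate state.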
For another example, define L_T to be the lower triangular Cholesky factor of the empirical covariance of future features Σ̂_{T,T}. Then, if we scale features of the future by L_T^{−T}, the singular value decomposition will preserve the largest possible amount of mutual information between features of the future and features of history. This is equivalent to canonical correlation analysis [26,27], and the matrix D becomes a diagonal matrix of canonical correlations between futures and histories.

Predictive State Temporal Difference Learning

Now that we have found a predictive compression operator V̂ via Equation 10, we can replace the features of history φ^H_t with the compressed features V̂ φ^H_t in the Bellman recursion, Equation 4. Doing so results in the following approximate Bellman equation:

w^T V̂ φ^H_{1:k} ≈ R_{1:k} + γ w^T V̂ φ^H_{2:k+1}    (11)

The least squares solution for w is still prone to an error-in-variables problem. The variable φ^H is still correlated with the true independent variables and uncorrelated with noise, and so we can again use it as an instrumental variable to unbias the estimate of w. Define the additional empirical covariance matrices:

Σ̂_{R,H} = (1/k) Σ_{t=1}^k R_t φ^H_t^T        Σ̂_{H+,H} = (1/k) Σ_{t=1}^k φ^H_{t+1} φ^H_t^T    (12)

Then, the corrected Bellman equation is

ŵ^T V̂ Σ̂_{H,H} = Σ̂_{R,H} + γ ŵ^T V̂ Σ̂_{H+,H}

and solving for ŵ gives us the Predictive State Temporal Difference (PSTD) learning algorithm:

ŵ^T = Σ̂_{R,H} (V̂ Σ̂_{H,H} − γ V̂ Σ̂_{H+,H})†    (13)

So far we have provided some intuition for why predictive features should be better than arbitrary features for temporal difference learning. Below we will show an additional benefit: the model-free algorithm in Equation 13 is, under some circumstances, equivalent to a model-based value function approximation method which uses subspace identification to learn Predictive State Representations [20,21].

Predictive State Representations

A predictive state representation (PSR) [14] is a compact and complete description of a dynamical system.
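One quick sanity check on Eq. 13 (a sketch with synthetic data, not the paper's code): if the compression V̂ is square and invertible, no information is discarded, so PSTD's value predictions w^T V̂ φ^H must coincide with LSTD's:

```python
import numpy as np

rng = np.random.default_rng(3)

# With an invertible "compression" V, PSTD (Eq. 13) reproduces LSTD (Eq. 5).
k, d = 5_000, 4
gamma = 0.9
phiH = rng.standard_normal((d, k))
phiH_next = 0.5 * phiH + rng.standard_normal((d, k))  # stand-in successor features
R = rng.standard_normal(k)

Sigma_RH = R[None, :] @ phiH.T / k       # 1 x d
Sigma_HH = phiH @ phiH.T / k
Sigma_HpH = phiH_next @ phiH.T / k

w_lstd = Sigma_RH @ np.linalg.inv(Sigma_HH - gamma * Sigma_HpH)           # Eq. 5

V = rng.standard_normal((d, d))          # invertible with probability 1
w_pstd = Sigma_RH @ np.linalg.pinv(V @ Sigma_HH - gamma * V @ Sigma_HpH)  # Eq. 13
```

Algebraically, (V A)† = A^{−1} V^{−1} for square invertible factors, so w_pstd^T V = w_lstd^T: the two methods give identical predictions, and compression only matters once V̂ is genuinely low-rank.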
Unlike POMDPs, which represent state as a distribution over a latent variable, PSRs represent state as a set of predictions of tests. Formally, a PSR consists of five elements ⟨A, O, Q, F, m_1⟩. A is a finite set of possible actions, and O is a finite set of possible observations. Q is a core set of tests, i.e., a set whose vector of predictions Q(h) is a sufficient statistic for predicting the success probabilities of all tests. F is the set of functions f_τ which embody these predictions: τ(h) = f_τ(Q(h)). And m_1 = Q(ε), the prediction vector for the empty history ε, is the initial state. In this work we will restrict ourselves to linear PSRs, in which all prediction functions are linear: f_τ(Q(h)) = r_τ^T Q(h) for some vector r_τ ∈ R^|Q|. Finally, a core set Q for a linear PSR is said to be minimal if the tests in Q are linearly independent [16,15], i.e., no one test's prediction is a linear function of the other tests' predictions. Since Q(h) is a sufficient statistic for all tests, it is a state for our PSR: i.e., we can remember just Q(h) instead of h itself. After action a and observation o, we can update Q(h) recursively: if we write M_ao for the matrix with rows r_{aoτ}^T for τ ∈ Q, then we can use Bayes' Rule to show:

Q(hao) = M_ao Q(h) / Pr[o | h, do(a)] = M_ao Q(h) / (m_∞^T M_ao Q(h))    (14)

where m_∞ is a normalizer, defined by m_∞^T Q(h) = 1 for all h. In addition to the above PSR parameters, we need a few additional definitions for reinforcement learning: a reward function R(h) = η^T Q(h) mapping predictive states to immediate rewards, a discount factor γ ∈ [0, 1] which weights the importance of future rewards vs. present ones, and a policy π(Q(h)) mapping from predictive states to actions. (Specifying a reward in terms of the core test predictions Q(h) is fully general: e.g., if we want to add a unit reward for some test τ ∈ Q, we can equivalently set η := η + r_τ, where r_τ is defined as above so that τ(h) = r_τ^T Q(h).)
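Eq. 14 has the same multiplicative form as Bayes filtering. A minimal sketch using an uncontrolled 2-state HMM with made-up numbers, where the prediction vector is the belief, M_o = diag(Pr[o | s']) T, and m_∞ is the all-ones vector:

```python
import numpy as np

# Filtering with the multiplicative update of Eq. 14 on a tiny HMM.
T = np.array([[0.7, 0.2],
              [0.3, 0.8]])            # T[s', s] = Pr[s' | s]
O = np.array([[0.9, 0.1],
              [0.4, 0.6]])            # O[s, o] = Pr[o | s]
M = [np.diag(O[:, o]) @ T for o in (0, 1)]
m_inf = np.ones(2)

b = np.array([0.5, 0.5])              # initial prediction vector
loglik = 0.0
for o in [0, 1, 1, 0]:                # an observation sequence
    num = M[o] @ b
    norm = m_inf @ num                # Pr[o | history]
    loglik += np.log(norm)
    b = num / norm                    # Eq. 14
```

The per-step normalizers telescope, so their product equals the sequence probability m_∞^T M_{o_4} M_{o_3} M_{o_2} M_{o_1} b_1, exactly as in the forward algorithm.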
Instead of ordinary PSRs, we will work with transformed PSRs (TPSRs) [20,21]. TPSRs are a generalization of regular PSRs: a TPSR maintains a small number of sufficient statistics which are linear combinations of a (potentially very large) set of test probabilities. That is, a TPSR maintains a small number of feature predictions instead of test predictions. TPSRs have exactly the same predictive abilities as regular PSRs, but are invariant under similarity transforms: given an invertible matrix S, we can transform m 1 → Sm 1 , m T ∞ → m T ∞ S −1 , and M ao → SM ao S −1 without changing the corresponding dynamical system, since pairs S −1 S cancel in Eq. 14. The main benefit of TPSRs over regular PSRs is that, given any core set of tests, low dimensional parameters can be found using spectral matrix decomposition and regression instead of combinatorial search. In this respect, TPSRs are closely related to the transformed representations of LDSs and HMMs found by subspace identification [28,29,27,30]. Learning Transformed PSRs Let Q be a minimal core set of tests for a dynamical system, with cardinality n = |Q| equal to the linear dimension of the system. Then, let T be a larger core set of tests (not necessarily minimal, and possibly even with |T | countably infinite). And, let H be the set of all possible histories. (|H| is finite or countably infinite, depending on whether our system is finite-horizon or infinite-horizon.) As before, write φ H t ∈ R for a vector of features of history at time t, and write φ T t ∈ R for a vector of features of the future at time t. Since T is a core set of tests, by definition we can compute any test prediction τ (h) as a linear function of T (h). And, since feature predictions are linear combinations of test predictions, we can also compute any feature prediction φ(h) as a linear function of T (h). 
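The similarity invariance described above is easy to verify directly; a minimal sketch with arbitrary synthetic parameters (not a valid PSR, just matrices, since the cancellation is purely algebraic):

```python
import numpy as np

rng = np.random.default_rng(4)

# TPSR similarity invariance: transforming parameters by any invertible S
# leaves every prediction unchanged, because pairs S^{-1} S cancel.
n = 3
m1 = rng.random(n)                    # initial state
m_inf = rng.random(n)                 # normalizer
M_ao = rng.random((n, n))             # one (action, observation) operator

S = rng.standard_normal((n, n)) + 3.0 * np.eye(n)   # invertible w.h.p.
S_inv = np.linalg.inv(S)

pred = m_inf @ M_ao @ m1                                   # original parameters
pred_t = (m_inf @ S_inv) @ (S @ M_ao @ S_inv) @ (S @ m1)   # transformed parameters
```

The two predictions agree to numerical precision, which is why spectral methods are free to recover the parameters only up to an unknown similarity transform.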
We define the matrix Φ^T, with one row per feature of the future and one column per test in T, to embody our predictions of future features: that is, an entry of Φ^T is the weight of one of the tests in T for calculating the prediction of one of the features in φ^T. Below we define several covariance matrices, Equation 15(a-d), in terms of the observable quantities φ^T_t, φ^H_t, a_t, and o_t, and show how these matrices relate to the parameters of the underlying PSR. These relationships then lead to our learning algorithm, Eq. 17 below. First we define Σ_{H,H}, the covariance matrix of features of histories, as E[φ^H_t φ^H_t^T | h_t ∼ ω]. Given k samples, we can approximate this covariance:

Σ̂_{H,H} = (1/k) φ^H_{1:k} φ^H_{1:k}^T    (15a)

As k → ∞, the empirical covariance Σ̂_{H,H} converges to the true covariance Σ_{H,H} with probability 1. Next we define Σ_{S,H}, the cross covariance of states and features of histories. Writing s_t = Q(h_t) for the (unobserved) state at time t, let Σ_{S,H} = E[(1/k) s_{1:k} φ^H_{1:k}^T | h_t ∼ ω (∀t)]. We cannot directly estimate Σ_{S,H} from data, but this matrix will appear as a factor in several of the matrices that we define below. Next we define Σ_{T,H}, the cross covariance matrix of the features of tests and histories: Σ_{T,H} ≡ E[φ^T_t φ^H_t^T | h_t ∼ ω, do(ζ)], the expectation of the sample covariance Σ̂_{T,H} = (1/k) Σ_{t=1}^k φ^T_t φ^H_t^T. Conditioning on h_t and using E[φ^T_{i,t} | h_t, do(ζ)] = Σ_{τ∈T} Φ^T_{i,τ} τ(h_t) = Σ_{τ∈T} Φ^T_{i,τ} r_τ^T Q(h_t) = Σ_{τ∈T} Φ^T_{i,τ} r_τ^T s_t, we get

Σ_{T,H} = Φ^T R Σ_{S,H}    (15b)

where the vector r_τ is the linear function that specifies the probability of the test τ given the probabilities of tests in the core set Q, and the matrix R has all of the r_τ vectors as rows.
The above derivation shows that, because of our assumptions about the linear dimension of the system, the matrix Σ_{T,H} has factors R ∈ R^{|T|×n} and Σ_{S,H} ∈ R^{n×j}, where j is the dimension of the history features. Therefore, the rank of Σ_{T,H} is no more than n, the linear dimension of the system. We can also see that, since the size of Σ̂_{T,H} is fixed but the number of samples k is increasing, the empirical covariance Σ̂_{T,H} converges to the true covariance Σ_{T,H} with probability 1. Next we define Σ_{H,ao,H}, a set of matrices, one for each action-observation pair, that represent the covariance between features of history before and after taking action a and observing o. In the following, I_t(o) is an indicator variable for whether we see observation o at step t:

Σ̂_{H,ao,H} ≡ (1/k) Σ_{t=1}^k φ^H_{t+1} I_t(o) φ^H_t^T        Σ_{H,ao,H} ≡ E[Σ̂_{H,ao,H} | h_t ∼ ω (∀t), do(a) (∀t)]    (15c)

Since the dimensions of each Σ̂_{H,ao,H} are fixed, as k → ∞ these empirical covariances converge to the true covariances Σ_{H,ao,H} with probability 1. Finally we define Σ_{R,H} ≡ E[R_t φ^H_t^T | h_t ∼ ω] and approximate this covariance (in this case a vector) of reward and features of history by Σ̂_{R,H} ≡ (1/k) Σ_{t=1}^k R_t φ^H_t^T. Using R_t = η^T Q(h_t) = η^T s_t, we get

Σ_{R,H} = η^T Σ_{S,H}    (15d)

Again, as k → ∞, Σ̂_{R,H} converges to Σ_{R,H} with probability 1. We now wish to use the above-defined matrices to learn a TPSR from data. To do so we need to make a somewhat restrictive assumption: we assume that our features of history are rich enough to determine the state of the system, i.e., the regression from φ^H to s is exact: s_t = Σ_{S,H} Σ_{H,H}^{−1} φ^H_t. We discuss how to relax this assumption below in Section 4.3.
We also need a matrix U such that U^T Φ^T R is invertible; with probability 1 a random matrix satisfies this condition, but as we will see below, it is useful to choose U via SVD of a scaled version of Σ̂_{T,H} as described in Sec. 3.2. Using our assumptions we can show a useful identity for Σ_{H,ao,H}: applying the regression s_t = Σ_{S,H} Σ_{H,H}^{−1} φ^H_t and the PSR update E[s_{t+1} I_t(o) | s_t, do(a)] = M_ao s_t, we get

Σ_{S,H} Σ_{H,H}^{−1} Σ_{H,ao,H} = M_ao Σ_{S,H}    (16)

This identity is at the heart of our learning algorithm: it shows that Σ_{H,ao,H} contains a hidden copy of M_ao, the main TPSR parameter that we need to learn. We would like to recover M_ao via Eq. 16 as M_ao = Σ_{S,H} Σ_{H,H}^{−1} Σ_{H,ao,H} Σ_{S,H}†; but of course we do not know Σ_{S,H}. Fortunately, though, it turns out that we can use U^T Σ_{T,H} as a stand-in, as described below, since this matrix differs from Σ_{S,H} only by an invertible transform (Eq. 15b). We now show how to recover a TPSR from the matrices Σ_{T,H}, Σ_{H,H}, Σ_{R,H}, Σ_{H,ao,H}, and U. Since a TPSR's predictions are invariant to a similarity transform of its parameters, our algorithm only recovers the TPSR parameters to within a similarity transform:

b_t ≡ U^T Σ_{T,H} (Σ_{H,H})^{−1} φ^H_t = (U^T Φ^T R) s_t    (17a)

B_ao ≡ U^T Σ_{T,H} (Σ_{H,H})^{−1} Σ_{H,ao,H} (U^T Σ_{T,H})† = (U^T Φ^T R) M_ao (U^T Φ^T R)^{−1}    (17b)

b_η^T ≡ Σ_{R,H} (U^T Σ_{T,H})† = η^T (U^T Φ^T R)^{−1}    (17c)

(In 17b-c we have used Eqs. 15b and 16, together with the identity Σ_{S,H} (U^T Σ_{T,H})† = (U^T Φ^T R)^{−1}, which holds when Σ_{S,H} has full row rank.) Our PSR learning algorithm is simple: replace each true covariance matrix in Eq. 17 by its empirical estimate.
Since the empirical estimates converge to their true values with probability 1 as the sample size increases, our learning algorithm is clearly statistically consistent.

Predictive State Temporal Difference Learning (Revisited)

Finally, we are ready to show that the model-free PSTD learning algorithm introduced in Section 3.3 is equivalent to a model-based algorithm built around PSR learning. For a fixed policy π, a TPSR's value function is a linear function of state, J^π(b) = w^T b, and is the solution of the TPSR Bellman equation [31]: for all b,

w^T b = b_η^T b + γ Σ_{o∈O} w^T B_πo b,   or equivalently,   w^T = b_η^T + γ Σ_{o∈O} w^T B_πo

If we substitute in our learned PSR parameters from Equations 17(a-c), we get

ŵ^T = Σ̂_{R,H} (U^T Σ̂_{T,H})† + γ Σ_{o∈O} ŵ^T U^T Σ̂_{T,H} (Σ̂_{H,H})^{−1} Σ̂_{H,πo,H} (U^T Σ̂_{T,H})†

ŵ^T U^T Σ̂_{T,H} = Σ̂_{R,H} + γ ŵ^T U^T Σ̂_{T,H} (Σ̂_{H,H})^{−1} Σ̂_{H+,H}

since, by comparing Eqs. 15c and 12, we can see that Σ_{o∈O} Σ̂_{H,πo,H} = Σ̂_{H+,H}. Now, suppose that we define Û and V̂ by Eqs. 8 and 10, and choose U as suggested above in Sec. 4.1. Then U^T Σ̂_{T,H} = V̂ Σ̂_{H,H}, and

ŵ^T V̂ Σ̂_{H,H} = Σ̂_{R,H} + γ ŵ^T V̂ Σ̂_{H+,H}

ŵ^T = Σ̂_{R,H} (V̂ Σ̂_{H,H} − γ V̂ Σ̂_{H+,H})†    (18)

Eq. 18 is exactly the PSTD algorithm (Eq. 13). So, we have shown that, if we learn a PSR by the subspace identification algorithm of Sec. 4.1 and then compute its value function via the Bellman equation, we get the exact same answer as if we had directly learned the value function via the model-free PSTD method. In addition to adding to our understanding of both methods, an important corollary of this result is that PSTD is a statistically consistent algorithm for PSR value function approximation: to our knowledge, the first such result for a TD method. PSTD learning is related to value-directed compression of POMDPs [11]. If we learn a TPSR from data generated by a POMDP, then the TPSR state is exactly a linear compression of the POMDP state [15,20].
The compression can be exact or approximate, depending on whether we include enough features of the future and whether we keep all or only some nonzero singular values in our bottleneck. If we include only reward as a feature of the future, we get a value-directed compression in the sense of Poupart and Boutilier [11]. If desired, we can tune the degree of value-directedness of our compression by scaling the relative variance of our features: the higher the variance of the reward feature compared to other features, the more value-directed the resulting compression will be. Our work significantly diverges from previous work on POMDP compression in one important respect: prior work assumes access to the true POMDP model, while we make no such assumption, and learn a compressed representation directly from data.

Insights from Subspace Identification

The close connection to subspace identification for PSRs provides additional insight into the temporal difference learning procedure. In Equation 17 we made the assumption that the features of history are rich enough to completely determine the state of the dynamical system. In fact, using theory developed in [21], it is possible to relax this assumption and instead assume that state is merely correlated with features of history. In this case, we need to introduce a new set of covariance matrices Σ_{T,ao,H} ≡ E[φ^T_t I_t(o) φ^H_t^T | h_t ∼ ω, do(a, ζ)], one for each action-observation pair, that represent the covariance between features of history before, and features of tests after, taking action a and observing o. We can then estimate the TPSR transition matrices as B̂_ao = U^T Σ̂_{T,ao,H} (U^T Σ̂_{T,H})† (see [21] for proof details). The value function parameter w can be estimated as

ŵ^T = Σ̂_{R,H} (U^T Σ̂_{T,H})† (I − Σ_{o∈O} U^T Σ̂_{T,ao,H} (U^T Σ̂_{T,H})†)† = Σ̂_{R,H} (U^T Σ̂_{T,H} − Σ_{o∈O} U^T Σ̂_{T,ao,H})†

(the proof is similar to Equation 18).
Since we no longer assume that state is completely specified by features of history, we can no longer apply the learned value function to U^T Σ_{T,H} (Σ_{H,H})^{−1} φ^H_t at each time t. Instead we need to learn a full PSR model and filter with the model to estimate state. Details on this procedure can be found in [21].

Experimental Results

We designed several experiments to evaluate the properties of the PSTD learning algorithm. In the first set of experiments we look at the comparative merits of PSTD with respect to LSTD and LARS-TD when applied to the problem of estimating the value function of a reduced-rank POMDP. In the second set of experiments, we apply PSTD to a benchmark optimal stopping problem (pricing a fictitious financial derivative), and show that PSTD outperforms competing approaches.

Estimating the Value Function of a RR-POMDP

We evaluate the PSTD learning algorithm on a synthetic example derived from [32]. The problem is to find the value function of a policy in a partially observable Markov decision process (POMDP). The POMDP has 4 latent states, but the policy's transition matrix is low rank: the resulting belief distributions can be represented in a 3-dimensional subspace of the original belief simplex. A reward of 1 is given in the first and third latent states and a reward of 0 in the other two latent states (see Appendix, Section B). The system emits 2 possible observations, conflating information about the latent states. We perform 3 experiments, comparing the performance of LSTD, LARS-TD, PSTD, and PSTD as formulated in Section 4.3 (which we call PSTD2) when different sets of features are used. In each case we compare the value function estimated by each algorithm to the true value function computed by J^π = R(I − γT^π)^{−1}. In the first experiment we execute the policy π for 1000 time steps.
We split the data into overlapping histories and tests of length 5, and sample 10 of these histories and tests to serve as centers for Gaussian radial basis functions. We then evaluate each basis function at every remaining sample. Using these features, we learn the value function by LSTD, LARS-TD, PSTD with linear dimension 3, and PSTD2 with linear dimension 3 (Figure 1(A)). In this experiment, PSTD and PSTD2 both had lower mean squared error than the other approaches. For the second experiment, we added 490 random features to the 10 good features and then attempted to learn the value function with each of the algorithms (Figure 1(B)). In this case, LSTD and PSTD both had difficulty fitting the value function due to the large number of irrelevant features in both tests and histories and the relatively small amount of training data. LARS-TD, designed for precisely this scenario, was able to select the 10 relevant features and estimate the value function better by a substantial margin. Surprisingly, in this experiment PSTD2 not only outperformed PSTD but bested even LARS-TD. For the third experiment, we increased the number of sampled features from 10 to 500. In this case, each feature was somewhat relevant, but the number of features was large compared to the amount of training data. This situation occurs frequently in practice: it is often easy to find a large number of features that are at least somewhat related to state. PSTD and PSTD2 both outperform LARS-TD, and each of these subspace and subset selection methods outperforms LSTD by a large margin by efficiently estimating the value function (Figure 1(C)).

Pricing a High-dimensional Financial Derivative

Derivatives are financial contracts with payoffs linked to the future prices of basic assets such as stocks, bonds and commodities.
In some derivatives the contract holder has no choices, but in more complex cases the contract owner must make decisions; e.g., with early exercise, the contract holder can decide to terminate the contract at any time and receive payments based on prevailing market conditions. In these cases, the value of the derivative depends on how the contract holder acts. Deciding when to exercise is therefore an optimal stopping problem: at each point in time, the contract holder must decide whether to continue holding the contract or to exercise. Such stopping problems provide an ideal testbed for policy evaluation methods, since we can easily collect a single data set which is sufficient to evaluate any policy: we just choose the "continue" action forever. (We can then evaluate the "stop" action easily in any of the resulting states, since the immediate reward is given by the rules of the contract, and the next state is the terminal state by definition.) We consider the financial derivative introduced by Tsitsiklis and Van Roy [33]. The derivative generates payoffs that are contingent on the prices of a single stock. At the end of a given day, the holder may opt to exercise. At exercise the owner receives a payoff equal to the current price of the stock divided by its price 100 days beforehand. We can think of this derivative as a "psychic call": the owner gets to decide whether s/he would like to have bought an ordinary 100-day European call option, at the then-current market price, 100 days ago. In our simulation (and unknown to the investor), the underlying stock price follows a geometric Brownian motion with volatility σ = 0.02 and continuously compounded short-term growth rate ρ = 0.0004. Assuming stock prices fluctuate only on days when the market is open, these parameters correspond to an annual growth rate of ∼ 10%.
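A minimal simulation of this price process, assuming a daily Euler discretization (the discretization scheme is an assumption for this sketch, not specified in the text), looks like:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, rho = 0.02, 0.0004   # volatility and growth rate from the text
n_days = 1000

# Euler step for the geometric Brownian motion described above, with
# dt = 1 trading day.
p = np.empty(n_days)
p[0] = 1.0
for t in range(1, n_days):
    p[t] = p[t - 1] * (1.0 + rho + sigma * rng.standard_normal())

# Over roughly 250 trading days, a drift of 0.0004/day compounds to about
# e^(0.0004 * 250) - 1, i.e. ~10% expected annual growth, matching the text.
```

An exact log-normal update p[t] = p[t-1] * exp((rho - sigma**2/2) + sigma * z) would serve equally well here.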
In more detail, if w_t is a standard Brownian motion, then the stock price p_t evolves as dp_t = ρ p_t dt + σ p_t dw_t, and we can summarize the relevant state at the end of each day as a vector x_t ∈ R^100, with x_t = (p_{t−99}/p_{t−100}, p_{t−98}/p_{t−100}, …, p_t/p_{t−100})^T. The ith dimension x_t(i) represents the amount a $1 investment in the stock at time t − 100 would grow to at time t − 100 + i. This process is Markov and ergodic [33,34]: x_t and x_{t+100} are independent and identically distributed. The immediate reward for exercising the option is G(x) = x(100), and the immediate reward for continuing to hold the option is 0. The discount factor γ = e^{−ρ} is determined by the growth rate; this corresponds to assuming that the risk-free interest rate is equal to the stock's growth rate, meaning that the investor gains nothing in expectation by holding the stock itself. The value of the derivative, if the current state is x, is given by V*(x) = sup_t E[γ^t G(x_t) | x_0 = x]. Our goal is to calculate an approximate value function V(x) = w^T φ^H(x), and then use this value function to generate a stopping time min{t | G(x_t) ≥ V(x_t)}. To do so, we sample a sequence of 1,000,000 states x_t ∈ R^100 and calculate features φ^H of each state. We then perform policy iteration on this sample, alternately estimating the value function under a given policy and then using this value function to define a new greedy policy "stop if G(x_t) ≥ w^T φ^H(x_t)." Within the above strategy, we have two main choices: which features to use, and how to estimate the value function in terms of these features. For value function estimation, we used LSTD, LARS-TD, or PSTD. In each case we re-used our 1,000,000-state sample trajectory for all iterations: we start at the beginning and follow the trajectory as long as the policy chooses the "continue" action, with reward 0 at each step.
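The state vector x_t and the greedy stopping rule "stop if G(x_t) ≥ w^T φ^H(x_t)" can be sketched as follows; the feature map `phi` and the forced exercise at the horizon are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def state(p, t):
    """x_t in R^100: the price ratios p_{t-100+i} / p_{t-100}, i = 1..100."""
    return p[t - 99:t + 1] / p[t - 100]

def stopping_time(p, w, phi, start=100):
    """First day t at which G(x_t) >= w^T phi(x_t), the greedy stopping
    rule above; `phi` is a user-supplied feature map (an assumption)."""
    for t in range(start, len(p)):
        x = state(p, t)
        if x[-1] >= w @ phi(x):   # G(x) = x(100) is the last component
            return t
    return len(p) - 1             # forced exercise at the horizon

# With a value estimate that is identically 1, the rule exercises on the
# first day the psychic-call payoff exceeds 1.
p = 1.0 + 0.001 * np.arange(300)  # deterministic rising prices (illustration)
t_stop = stopping_time(p, np.ones(1), lambda x: np.ones(1))
```

With the rising price path above, the payoff p_t/p_{t−100} already exceeds 1 on the first admissible day, so the rule stops immediately at t = 100.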
When the policy executes the "stop" action, the reward is G(x) and the next state's features are all 0; we then restart the policy 100 steps in the future, after the process has fully mixed. For feature selection, we are fortunate: previous researchers have hand-selected a "good" set of 16 features for this data set through repeated trial and error (see Appendix, Section B and [33,34]). We greatly expand this set of features, then use PSTD to synthesize a small set of high-quality combined features. Specifically, we add the entire 100-step state vector, the squares of the components of the state vector, and several additional nonlinear features, increasing the total number of features from 16 to 220. We use histories of length 1, tests of length 5, and (for comparison's sake) we choose a linear dimension of 16. Tests (but not histories) were value-directed by reducing the variance of all features except reward by a factor of 100. Figure 1(D) shows results. We compared PSTD (reducing 220 to 16 features) to LSTD with either the 16 hand-selected features or the full 220 features, as well as to LARS-TD (220 features) and to a simple thresholding strategy [33]. In each case we evaluated the final policy on 10,000 new random trajectories. PSTD outperformed each of its competitors, improving on the next best approach, LARS-TD, by 1.75 percentage points. In fact, PSTD performs better than the best previously reported approach [33,34] by 1.24 percentage points. These improvements correspond to appreciable fractions of the risk-free interest rate (which is about 4 percentage points over the 100-day window of the contract), and therefore to significant arbitrage opportunities: an investor who doesn't know the best strategy will consistently undervalue the security, allowing an informed investor to buy it for below its expected value. Conclusion In this paper, we attack the feature selection problem for temporal difference learning.
Although well-known temporal difference algorithms such as LSTD can provide asymptotically unbiased estimates of value function parameters in linear architectures, they can have trouble in finite samples: if the number of features is large relative to the number of training samples, then they can have high variance in their value function estimates. For this reason, in real-world problems, a substantial amount of time is spent selecting a small set of features, often by trial and error [33,34]. To remedy this problem, we present the PSTD algorithm, a new approach to feature selection for TD methods, which demonstrates how insights from system identification can benefit reinforcement learning. PSTD automatically chooses a small set of features that are relevant for prediction and value function approximation. It approaches feature selection from a bottleneck perspective, by finding a small set of features that preserves only predictive information. Because of the focus on predictive information, the PSTD approach is closely connected to PSRs: under appropriate assumptions, PSTD's compressed set of features is asymptotically equivalent to TPSR state, and PSTD is a consistent estimator of the PSR value function. We demonstrate the merits of PSTD compared to two popular alternative algorithms, LARS-TD and LSTD, on a synthetic example, and argue that PSTD is most effective when approximating a value function from a large number of features, each of which contains at least a little information about state. Finally, we apply PSTD to a difficult optimal stopping problem, and demonstrate the practical utility of the algorithm by outperforming several alternative approaches and topping the best reported previous results. The first basis functions summarize the minimal and maximal returns of the sample path and how long ago they occurred. The next set of basis functions summarize the characteristics of the basic shape of the 100-day sample path: they are the inner products of the path with the first four normalized Legendre polynomials.
Let j = i/50 − 1.
φ_1(x) = 1
φ_2(x) = G(x)
φ_3(x) = min_{i=1,…} …
φ_7(x) = (1/100) Σ_{i=1}^{100} x(i) · (1/√2)
φ_8(x) = (1/100) Σ_{i=1}^{100} x(i) · √(3/2) j
φ_9(x) = (1/100) Σ_{i=1}^{100} x(i) · √(5/2) (3j^2 − 1)/2
φ_10(x) = (1/100) Σ_{i=1}^{100} x(i) · √(7/2) (5j^3 − 3j)/2
Nonlinear combinations of basis functions:
φ_11(x) = φ_2(x) φ_3(x)
φ_12(x) = φ_2(x) φ_4(x)
φ_13(x) = φ_2(x) φ_7(x)
φ_14(x) = φ_2(x) φ_8(x)
φ_15(x) = φ_2(x) φ_9(x)
φ_16(x) = φ_2(x) φ_10(x)
In order to improve our results, we added a large number of additional basis functions to these hand-picked 16. PSTD will compress these features for us, so we can use as many additional basis functions as we would like. First we defined 4 additional basis functions consisting of the inner products of the 100-day sample path with the 5th and 6th Legendre polynomials, and we added the corresponding nonlinear combinations of basis functions:
φ_17(x) = (1/100) Σ_{i=1}^{100} x(i) · √(9/2) (35j^4 − 30j^2 + 3)/8
φ_18(x) = (1/100) Σ_{i=1}^{100} x(i) · √(11/2) (63j^5 − 70j^3 + 15j)/8
φ_19(x) = φ_2(x) φ_17(x)
φ_20(x) = φ_2(x) φ_18(x)
Finally we added the entire sample path and the squared sample path: φ_{21:120} = x_{1:100} and φ_{121:220} = x^2_{1:100}.
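Assuming the reconstruction of the Legendre features φ_7–φ_10 above is right (normalized Legendre polynomials √((2n+1)/2) P_n(j) evaluated at j = i/50 − 1), they can be computed as:

```python
import numpy as np

def legendre_features(x):
    """Inner products of the 100-day path x with the first four normalized
    Legendre polynomials in j = i/50 - 1 (phi_7..phi_10 above)."""
    i = np.arange(1, 101)
    j = i / 50.0 - 1.0
    phi7  = np.mean(x * np.sqrt(1 / 2))
    phi8  = np.mean(x * np.sqrt(3 / 2) * j)
    phi9  = np.mean(x * np.sqrt(5 / 2) * (3 * j**2 - 1) / 2)
    phi10 = np.mean(x * np.sqrt(7 / 2) * (5 * j**3 - 3 * j) / 2)
    return np.array([phi7, phi8, phi9, phi10])
```

The `np.mean` supplies the 1/100 factor, and the φ_17, φ_18 extensions follow the same pattern with the degree-4 and degree-5 polynomials.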
8,106
1011.0041
2949683464
We propose a new approach to value function approximation which combines linear temporal difference reinforcement learning with subspace identification. In practical applications, reinforcement learning (RL) is complicated by the fact that state is either high-dimensional or partially observable. Therefore, RL methods are designed to work with features of state rather than state itself, and the success or failure of learning is often determined by the suitability of the selected features. By comparison, subspace identification (SSID) methods are designed to select a feature set which preserves as much information as possible about state. In this paper we connect the two approaches, looking at the problem of reinforcement learning with a large set of features, each of which may only be marginally useful for value function approximation. We introduce a new algorithm for this situation, called Predictive State Temporal Difference (PSTD) learning. As in SSID for predictive state representations, PSTD finds a linear compression operator that projects a large set of features down to a small set that preserves the maximum amount of predictive information. As in RL, PSTD then uses a Bellman recursion to estimate a value function. We discuss the connection between PSTD and prior approaches in RL and SSID. We prove that PSTD is statistically consistent, perform several experiments that illustrate its properties, and demonstrate its potential on a difficult optimal stopping problem.
The drawback of all of the approaches enumerated above is that they first assume that the dynamical system model is known, and only then give us a way of finding a compact representation and a value function. In practice, we would like to be able to find a good set of features directly from data. Kolter and Ng @cite_18 contend with this problem from a sparse feature selection standpoint. Given a large set of possibly-relevant features of observations, they propose augmenting LSTD by applying an L1 penalty to the coefficients, forcing LSTD to select a sparse set of features for value function estimation. The resulting algorithm, LARS-TD, works well in certain situations, but only if the original large set of features contains a small subset of highly relevant features.
{ "abstract": [ "We consider the task of reinforcement learning with linear value function approximation. Temporal difference algorithms, and in particular the Least-Squares Temporal Difference (LSTD) algorithm, provide a method for learning the parameters of the value function, but when the number of features is large this algorithm can over-fit to the data and is computationally expensive. In this paper, we propose a regularization framework for the LSTD algorithm that overcomes these difficulties. In particular, we focus on the case of l1 regularization, which is robust to irrelevant features and also serves as a method for feature selection. Although the l1 regularized LSTD solution cannot be expressed as a convex optimization problem, we present an algorithm similar to the Least Angle Regression (LARS) algorithm that can efficiently compute the optimal solution. Finally, we demonstrate the performance of the algorithm experimentally." ], "cite_N": [ "@cite_18" ], "mid": [ "2112264645" ] }
Predictive State Temporal Difference Learning
Value Function Approximation We start from a discrete time dynamical system with a set of states S, a set of actions A, a distribution over initial states π_0, a state transition function T, a reward function R, and a discount factor γ ∈ [0, 1]. We seek a policy π, a mapping from states to actions. The notion of a value function is of central importance in reinforcement learning: for a given policy π, the value of state s is defined as the expected discounted sum of rewards obtained when starting in state s and following policy π, J^π(s) = E[Σ_{t=0}^∞ γ^t R(s_t) | s_0 = s, π]. It is well known that the value function must obey the Bellman equation

J^π(s) = R(s) + γ Σ_{s'} J^π(s') Pr[s' | s, π(s)]   (1)

If we know the transition function T, and if the set of states S is sufficiently small, we can use (1) directly to solve for the value function J^π. We can then execute the greedy policy for J^π, setting the action at each state to maximize the right-hand side of (1). However, we consider instead the harder problem of estimating the value function when s is a partially observable latent variable, and when the transition function T is unknown. In this situation, we receive information about s through observations from a finite set O. Our state (i.e., the information which we can use to make decisions) is not an element of S but a history, an ordered sequence of action-observation pairs h = a^h_1 o^h_1 … a^h_t o^h_t that have been executed and observed prior to time t. If we knew the transition model T, we could use h to infer a belief distribution over S, and use that belief (or a compression of that belief) as a state instead; below, we will discuss how to learn a compressed belief state. Because of partial observability, we can only hope to predict reward conditioned on history, R(h) = E[R(s) | h], and we must choose actions as a function of history, π(h) instead of π(s). Let H be the set of all possible histories.
H is often very large or infinite, so instead of finding a value separately for each history, we focus on value functions that are linear in features of histories:

J^π(h) = w^T φ^H(h)   (2)

Here w ∈ R^j is a parameter vector and φ^H(h) ∈ R^j is a feature vector for a history h. So, we can rewrite the Bellman equation as

w^T φ^H(h) = R(h) + γ Σ_{o∈O} w^T φ^H(hπo) Pr[hπo | hπ]   (3)

where hπo is history h extended by taking action π(h) and observing o. Least Squares Temporal Difference Learning In general we don't know the transition probabilities Pr[hπo | h], but we do have samples of state features φ^H_t = φ^H(h_t), next-state features φ^H_{t+1} = φ^H(h_{t+1}), and immediate rewards R_t = R(h_t). We can thus estimate the Bellman equation

w^T φ^H_{1:k} ≈ R_{1:k} + γ w^T φ^H_{2:k+1}   (4)

(Here we have used the notation φ^H_{1:k} to mean the matrix whose columns are φ^H_t for t = 1 … k.) We can immediately attempt to estimate the parameter w by solving this linear system in the least squares sense: ŵ^T = R_{1:k} (φ^H_{1:k} − γ φ^H_{2:k+1})^†, where † indicates the Moore-Penrose pseudoinverse. However, this solution is biased [3], since the independent variables φ^H_t − γ φ^H_{t+1} are noisy samples of the expected difference E[φ^H(h) − γ Σ_{o∈O} φ^H(hπo) Pr[hπo | h]]. In other words, estimating the value function parameters w is an error-in-variables problem. The least squares temporal difference (LSTD) algorithm provides a consistent estimate by right-multiplying the approximate Bellman equation (Equation 4) by (φ^H_t)^T. The quantity (φ^H_t)^T can be viewed as an instrumental variable [3], i.e., a measurement that is correlated with the true independent variables, but uncorrelated with the noise in our estimates of these variables.
The value function parameter w may then be estimated as follows:

ŵ^T = (1/k Σ_{t=1}^k R_t (φ^H_t)^T) (1/k Σ_{t=1}^k φ^H_t (φ^H_t)^T − γ/k Σ_{t=1}^k φ^H_{t+1} (φ^H_t)^T)^{−1}   (5)

As the amount of data k increases, the empirical covariance matrices in (5) converge to their expectations with probability 1. Therefore, as long as the matrix being inverted is nonsingular, our estimate of the inverse is also consistent, and our estimate of w therefore converges to the true parameters with probability 1. Predictive Features Although LSTD provides a consistent estimate of the value function parameters w, in practice, the potential size of the feature vectors can be a problem. If the number of features is large relative to the number of training samples, then the estimation of w is prone to overfitting. This problem can be alleviated by choosing some small set of features that only contains information that is relevant for value function approximation. However, with the exception of LARS-TD [18], there has been little work on the problem of how to select features automatically for value function approximation when the system model is unknown; and of course, manual feature selection depends on not-always-available expert guidance. We approach the problem of finding a good set of features from a bottleneck perspective. That is, given some signal from history, in this case a large set of features, we would like to find a compression that preserves only the information relevant for predicting the value function J^π. As we will see in Section 4, this compression is directly related to spectral identification of PSRs. Tests and Features of the Future We first need to define precisely the task of predicting the future. Just as a history is an ordered sequence of action-observation pairs executed prior to time t, we define a test of length i to be an ordered sequence of action-observation pairs τ = a_1 o_1 … a_i o_i that can be executed and observed after time t [14].
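Equation 5 translates directly into a few lines of NumPy. This sketch stores feature vectors as rows of matrices, so the sums over t become matrix products (a convention choice, not from the paper):

```python
import numpy as np

def lstd(phi, phi_next, r, gamma):
    """LSTD estimate of w (Equation 5): phi holds current-step features
    (k x j, one row per time step), phi_next the next-step features,
    and r the immediate rewards (length k)."""
    k = len(r)
    b = r @ phi / k                                       # (1/k) sum R_t phi_t^T
    A = (phi.T @ phi - gamma * phi_next.T @ phi) / k      # instrumented covariance
    return np.linalg.solve(A.T, b)                        # solves w^T A = b^T
```

On a deterministic two-state chain with one-hot features, this recovers the exact value function, as the test below checks.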
The prediction for a test τ after a history h, written τ(h), is the probability that we will see the test observations τ^O = o_1 … o_i, given that we intervene [22] to execute the test actions τ^A = a_1 … a_i: τ(h) = Pr[τ^O | h, do(τ^A)]. If Q = {τ_1, …, τ_n} is a set of tests, we write Q(h) = (τ_1(h), …, τ_n(h))^T for the corresponding vector of test predictions. We can generalize the notion of a test to a feature of the future, a linear combination of several tests sharing a common action sequence. For example, if τ_1 and τ_2 are two tests with τ^A_1 = τ^A_2 ≡ τ^A, then we can make a feature φ = 3τ_1 + τ_2. This feature is executed if we intervene to do(τ^A), and if it is executed its value is 3 I(τ^O_1) + I(τ^O_2), where I(o_1 … o_i) stands for an indicator random variable, taking the value 0 or 1 depending on whether we observe the sequence of observations o_1 … o_i. The prediction of φ given h is φ(h) ≡ E(φ | h, do(τ^A)) = 3τ_1(h) + τ_2(h). While linear combinations of tests may seem restrictive, our definition is actually very expressive: we can represent an arbitrary function of a finite sequence of future observations. To do so, we take a collection of tests, each of which picks out one possible realization of the sequence, and weight each test by the value of the function conditioned on that realization. For example, if our observations are integers 1, 2, …, 10, we can write the square of the next observation as Σ_{o=1}^{10} o^2 I(o), and the mean of the next two observations as Σ_{o=1}^{10} Σ_{o'=1}^{10} (1/2)(o + o') I(o, o'). The restriction to a common action sequence is necessary: without this restriction, all the tests making up a feature could never be executed at once. Once we move to feature predictions, however, it makes sense to lift this restriction: we will say that any linear combination of feature predictions is also a feature prediction, even if the features involved have different action sequences.
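The "square of the next observation" construction above can be checked numerically; the one-hot encoding below is just one way to realize the indicator variables I(o).

```python
import numpy as np

rng = np.random.default_rng(3)

# Observations are integers 1..10; the feature "square of the next
# observation" is sum_o o^2 I(o), evaluated here on sampled observations.
obs = rng.integers(1, 11, size=100000)
weights = np.array([o ** 2 for o in range(1, 11)], dtype=float)
indicators = np.eye(10)[obs - 1]        # each row is the one-hot I(o)
feature_vals = indicators @ weights     # equals obs**2 for every sample
assert np.array_equal(feature_vals, obs.astype(float) ** 2)
```

The same weighting trick expresses any function of a finite observation sequence as a linear combination of indicator tests.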
Action sequences raise some problems with obtaining empirical estimates of means and covariances of features of the future: e.g., it is not always possible to get a sample of a particular feature's value on every time step, and the feature we choose to sample at one step can restrict which features we can sample at subsequent steps. In order to carry out our derivations without running into these problems repeatedly, we will assume for the rest of the paper that we can reset our system after every sample, and get a new history independently distributed as h_t ∼ ω for some distribution ω. (With some additional bookkeeping we could remove this assumption [23], but this bookkeeping would unnecessarily complicate our derivations.) Furthermore, we will introduce some new language, again to keep derivations simple: if we have a vector of features of the future φ^T, we will pretend that we can get a sample φ^T_t in which we evaluate all of our features starting from a single history h_t, even if the different elements of φ^T require us to execute different action sequences. When our algorithms call for such a sample, we will instead use the following trick to get a random vector with the correct expectation (and somewhat higher variance, which doesn't matter for any of our arguments): write τ^A_1, τ^A_2, … for the different action sequences, and let ζ_1, ζ_2, … > 0 be a probability distribution over these sequences. We pick a single action sequence τ^A_a according to ζ, and execute τ^A_a to get a sample φ̃^T of the features which depend on τ^A_a. We then enter φ̃^T/ζ_a into the corresponding coordinates of φ^T_t, and fill in zeros everywhere else. It is easy to see that the expected value of our sample vector is then correct: the probability of selection ζ_a and the weighting factor 1/ζ_a cancel out. We will write E(φ^T | h_t, do(ζ)) to stand for this expectation.
None of the above tricks are actually necessary in our experiments with stopping problems: we simply execute the "continue" action on every step, and use only sequences of "continue" actions in every test and feature. Finding Predictive Features Through a Bottleneck In order to find a predictive feature compression, we first need to determine what we would like to predict. Since we are interested in value function approximation, the most relevant prediction is the value function itself; so, we could simply try to predict total future discounted reward given a history. Unfortunately, total discounted reward has high variance, so unless we have a lot of data, learning will be difficult. We can reduce variance by including other prediction tasks as well. For example, predicting individual rewards at future time steps, while not strictly necessary to predict total discounted reward, seems highly relevant, and gives us much more immediate feedback. Similarly, future observations hopefully contain information about future reward, so trying to predict observations can help us predict reward better. Finally, in any specific RL application, we may be able to add problem-specific prediction tasks that will help focus our attention on relevant information: for example, in a path-planning problem, we might try to predict which of several goal states we will reach (in addition to how much it will cost to get there). We can represent all of these prediction tasks as features of the future: e.g., to predict which goal we will reach, we add a distinct observation at each goal state, or to predict individual rewards, we add individual rewards as observations. We will write φ^T_t for the vector of all features of the "future at time t," i.e., events starting at time t + 1 and continuing forward. So, instead of remembering a large arbitrary set of features of history, we want to find a small subspace of features of history that is relevant for predicting features of the future.
We will call this subspace a predictive compression, and we will write the value function as a linear function of only the predictive compression of features. To find our predictive compression, we will use reduced-rank regression [24]. We define the following empirical covariance matrices between features of the future and features of histories:

Σ̂_{T,H} = (1/k) Σ_{t=1}^k φ^T_t (φ^H_t)^T    Σ̂_{H,H} = (1/k) Σ_{t=1}^k φ^H_t (φ^H_t)^T   (6)

Let L_H be the lower triangular Cholesky factor of Σ̂_{H,H}. Then we can find a predictive compression of histories by a singular value decomposition (SVD) of the weighted covariance: write

U D V^T ≈ Σ̂_{T,H} L_H^{−T}   (7)

for a truncated SVD [25] of the weighted covariance, where U are the left singular vectors, V^T are the right singular vectors, and D is the diagonal matrix of singular values. The number of columns of U, V, or D is equal to the number of retained singular values. Then we define

Û = U D^{1/2}   (8)

to be the mapping from the low-dimensional compressed space up to the high-dimensional space of features of the future. Given Û, we would like to find a compression operator V̂ that optimally predicts features of the future through the bottleneck defined by Û. The least squares estimate can be found by minimizing the loss

L(V) = ||φ^T_{1:k} − Û V φ^H_{1:k}||^2_F   (9)

where ||·||_F denotes the Frobenius norm. We can find the minimum by taking the derivative of this loss with respect to V, setting it to zero, and solving for V (see Appendix, Section A for details), giving us:

V̂ = arg min_V L(V) = Û^T Σ̂_{T,H} (Σ̂_{H,H})^{−1}   (10)

By weighting different features of the future differently, we can change the approximate compression in interesting ways. For example, as we will see in Section 4.2, scaling up future reward by a constant factor results in a value-directed compression; but, unlike previous ways to find value-directed compressions [11], we do not need to know a model of our system ahead of time.
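Equations 6–10 amount to a Cholesky-weighted truncated SVD followed by a regression. The sketch below runs the pipeline on synthetic data; the rank-3 data-generating process, noise level, and dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
k, nT, nH, dim = 5000, 8, 8, 3

# Synthetic features whose cross covariance has (approximately) rank 3:
# futures and histories are both noisy linear views of a 3-dim state.
s = rng.standard_normal((k, dim))
phi_T = s @ rng.standard_normal((dim, nT)) + 0.1 * rng.standard_normal((k, nT))
phi_H = s @ rng.standard_normal((dim, nH)) + 0.1 * rng.standard_normal((k, nH))

Sigma_TH = phi_T.T @ phi_H / k                       # Eq. 6
Sigma_HH = phi_H.T @ phi_H / k

L_H = np.linalg.cholesky(Sigma_HH)                   # lower-triangular factor
M = Sigma_TH @ np.linalg.inv(L_H).T                  # weighted covariance (Eq. 7)
U, d, Vt = np.linalg.svd(M)
U, d = U[:, :dim], d[:dim]                           # truncate to linear dimension

U_hat = U * np.sqrt(d)                               # U D^{1/2}       (Eq. 8)
V_hat = U_hat.T @ Sigma_TH @ np.linalg.inv(Sigma_HH) # compression op. (Eq. 10)
state = V_hat @ phi_H[0]                             # 3-dim predictive state
```

Scaling selected rows of `phi_T` before forming `Sigma_TH` implements the value-directed weighting mentioned in the text.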
For another example, define L_T to be the lower triangular Cholesky factor of the empirical covariance of future features Σ̂_{T,T}. Then, if we scale features of the future by L_T^{−T}, the singular value decomposition will preserve the largest possible amount of mutual information between features of the future and features of history. This is equivalent to canonical correlation analysis [26,27], and the matrix D becomes a diagonal matrix of canonical correlations between futures and histories. Predictive State Temporal Difference Learning Now that we have found a predictive compression operator V̂ via Equation 10, we can replace the features of history φ^H_t with the compressed features V̂ φ^H_t in the Bellman recursion, Equation 4. Doing so results in the following approximate Bellman equation:

w^T V̂ φ^H_{1:k} ≈ R_{1:k} + γ w^T V̂ φ^H_{2:k+1}   (11)

The least squares solution for w is still prone to an error-in-variables problem. The variable φ^H_t is still correlated with the true independent variables and uncorrelated with noise, and so we can again use it as an instrumental variable to unbias the estimate of w. Define the additional empirical covariance matrices:

Σ̂_{R,H} = (1/k) Σ_{t=1}^k R_t (φ^H_t)^T    Σ̂_{H+,H} = (1/k) Σ_{t=1}^k φ^H_{t+1} (φ^H_t)^T   (12)

Then, the corrected Bellman equation is ŵ^T V̂ Σ̂_{H,H} = Σ̂_{R,H} + γ ŵ^T V̂ Σ̂_{H+,H}, and solving for ŵ gives us the Predictive State Temporal Difference (PSTD) learning algorithm:

ŵ^T = Σ̂_{R,H} (V̂ Σ̂_{H,H} − γ V̂ Σ̂_{H+,H})^†   (13)

So far we have provided some intuition for why predictive features should be better than arbitrary features for temporal difference learning. Below we will show an additional benefit: the model-free algorithm in Equation 13 is, under some circumstances, equivalent to a model-based value function approximation method which uses subspace identification to learn Predictive State Representations [20,21]. Predictive State Representations A predictive state representation (PSR) [14] is a compact and complete description of a dynamical system.
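Equation 13 is a small modification of the LSTD estimator. A sketch, again assuming features are stored as rows of matrices:

```python
import numpy as np

def pstd_weights(V, phi_H, phi_H_next, r, gamma):
    """PSTD estimate of w (Equation 13), given a compression operator V
    (d x j), history features (k x j, one row per step), the one-step
    shifted features, and rewards r (length k)."""
    k = len(r)
    Sigma_RH = r @ phi_H / k
    Sigma_HH = phi_H.T @ phi_H / k
    Sigma_HpH = phi_H_next.T @ phi_H / k
    A = V @ Sigma_HH - gamma * V @ Sigma_HpH      # d x j
    # w^T = Sigma_RH A^dagger, so w = (A^dagger)^T Sigma_RH^T
    return np.linalg.pinv(A).T @ Sigma_RH
```

With V set to the identity this reduces to LSTD; in PSTD proper, V is the compression operator of Equation 10.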
Unlike POMDPs, which represent state as a distribution over a latent variable, PSRs represent state as a set of predictions of tests. Formally, a PSR consists of five elements ⟨A, O, Q, m_1, F⟩. A is a finite set of possible actions, and O is a finite set of possible observations. Q is a core set of tests, i.e., a set whose vector of predictions Q(h) is a sufficient statistic for predicting the success probabilities of all tests. F is the set of functions f_τ which embody these predictions: τ(h) = f_τ(Q(h)). And m_1 = Q(ε) is the initial prediction vector, the prediction vector for the empty history ε. In this work we will restrict ourselves to linear PSRs, in which all prediction functions are linear: f_τ(Q(h)) = r_τ^T Q(h) for some vector r_τ ∈ R^{|Q|}. Finally, a core set Q for a linear PSR is said to be minimal if the tests in Q are linearly independent [16,15], i.e., no one test's prediction is a linear function of the other tests' predictions. Since Q(h) is a sufficient statistic for all tests, it is a state for our PSR: i.e., we can remember just Q(h) instead of h itself. After action a and observation o, we can update Q(h) recursively: if we write M_{ao} for the matrix with rows r_{aoτ}^T for τ ∈ Q, then we can use Bayes' Rule to show:

Q(hao) = M_{ao} Q(h) / Pr[o | h, do(a)] = M_{ao} Q(h) / (m_∞^T M_{ao} Q(h))   (14)

where m_∞ is a normalizer, defined by m_∞^T Q(h) = 1 for all h. In addition to the above PSR parameters, we need a few additional definitions for reinforcement learning: a reward function R(h) = η^T Q(h) mapping predictive states to immediate rewards, a discount factor γ ∈ [0, 1] which weights the importance of future rewards vs. present ones, and a policy π(Q(h)) mapping from predictive states to actions. (Specifying a reward in terms of the core test predictions Q(h) is fully general: e.g., if we want to add a unit reward for some test τ ∈ Q, we can instead equivalently set η := η + r_τ, where r_τ is defined (as above) so that τ(h) = r_τ^T Q(h).)
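Equation 14 is a one-line filtering update. In this sketch the parameters M_ao and m_inf are arbitrary placeholders rather than a learned model; the only structural fact exercised is that the update renormalizes so that m_∞^T Q(hao) = 1.

```python
import numpy as np

def psr_update(q, M_ao, m_inf):
    """One PSR state update (Equation 14): after taking action a and
    observing o, the new prediction vector is
    M_ao q / (m_inf^T M_ao q)."""
    v = M_ao @ q
    return v / (m_inf @ v)
```

Filtering through a trajectory is just repeated application of this update with the (a, o) pair observed at each step.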
Instead of ordinary PSRs, we will work with transformed PSRs (TPSRs) [20,21]. TPSRs are a generalization of regular PSRs: a TPSR maintains a small number of sufficient statistics which are linear combinations of a (potentially very large) set of test probabilities. That is, a TPSR maintains a small number of feature predictions instead of test predictions. TPSRs have exactly the same predictive abilities as regular PSRs, but are invariant under similarity transforms: given an invertible matrix S, we can transform m_1 → S m_1, m_∞^T → m_∞^T S^{−1}, and M_{ao} → S M_{ao} S^{−1} without changing the corresponding dynamical system, since the pairs S^{−1} S cancel in Eq. 14. The main benefit of TPSRs over regular PSRs is that, given any core set of tests, low dimensional parameters can be found using spectral matrix decomposition and regression instead of combinatorial search. In this respect, TPSRs are closely related to the transformed representations of LDSs and HMMs found by subspace identification [28,29,27,30]. Learning Transformed PSRs Let Q be a minimal core set of tests for a dynamical system, with cardinality n = |Q| equal to the linear dimension of the system. Then, let T be a larger core set of tests (not necessarily minimal, and possibly even with |T| countably infinite). And, let H be the set of all possible histories. (|H| is finite or countably infinite, depending on whether our system is finite-horizon or infinite-horizon.) As before, write φ^H_t for a vector of features of history at time t, and write φ^T_t for a vector of features of the future at time t. Since T is a core set of tests, by definition we can compute any test prediction τ(h) as a linear function of T(h). And, since feature predictions are linear combinations of test predictions, we can also compute any feature prediction φ(h) as a linear function of T(h).
We define the matrix Φ^T to embody our predictions of future features: that is, an entry of Φ^T is the weight of one of the tests in T for calculating the prediction of one of the features in φ^T. Below we define several covariance matrices, Equations 15(a-d), in terms of the observable quantities φ^T_t, φ^H_t, a_t, and o_t, and show how these matrices relate to the parameters of the underlying PSR. These relationships then lead to our learning algorithm, Eq. 17 below. First we define Σ_{H,H}, the covariance matrix of features of histories, as E[φ^H_t (φ^H_t)^T | h_t ∼ ω]. Given k samples, we can approximate this covariance:

[Σ̂_{H,H}]_{i,j} = (1/k) Σ_{t=1}^k φ^H_{i,t} φ^H_{j,t}  ⇒  Σ̂_{H,H} = (1/k) φ^H_{1:k} (φ^H_{1:k})^T   (15a)

As k → ∞, the empirical covariance Σ̂_{H,H} converges to the true covariance Σ_{H,H} with probability 1. Next we define Σ_{S,H}, the cross covariance of states and features of histories. Writing s_t = Q(h_t) for the (unobserved) state at time t, let

Σ_{S,H} = E[(1/k) s_{1:k} (φ^H_{1:k})^T | h_t ∼ ω (∀t)]

We cannot directly estimate Σ_{S,H} from data, but this matrix will appear as a factor in several of the matrices that we define below. Next we define Σ_{T,H}, the cross covariance matrix of the features of tests and histories: Σ_{T,H} ≡ E[φ^T_t (φ^H_t)^T | h_t ∼ ω, do(ζ)]. The true covariance is the expectation of the sample covariance Σ̂_{T,H}:

[Σ̂_{T,H}]_{i,j} ≡ (1/k) Σ_{t=1}^k φ^T_{i,t} φ^H_{j,t}
[Σ_{T,H}]_{i,j} = E[(1/k) Σ_{t=1}^k φ^T_{i,t} φ^H_{j,t} | h_t ∼ ω (∀t), do(ζ) (∀t)]
= E[(1/k) Σ_{t=1}^k E[φ^T_{i,t} | h_t, do(ζ)] φ^H_{j,t} | h_t ∼ ω (∀t), do(ζ) (∀t)]
= E[(1/k) Σ_{t=1}^k Σ_{τ∈T} Φ^T_{i,τ} τ(h_t) φ^H_{j,t} | h_t ∼ ω (∀t)]
= E[(1/k) Σ_{t=1}^k Σ_{τ∈T} Φ^T_{i,τ} r_τ^T Q(h_t) φ^H_{j,t} | h_t ∼ ω (∀t)]
= Σ_{τ∈T} Φ^T_{i,τ} r_τ^T E[(1/k) Σ_{t=1}^k s_t φ^H_{j,t} | h_t ∼ ω (∀t)]
⇒ Σ_{T,H} = Φ^T R Σ_{S,H}   (15b)

where the vector r_τ is the linear function that specifies the probability of the test τ given the probabilities of tests in the core set Q, and the matrix R has all of the r_τ vectors as rows.
The above derivation shows that, because of our assumptions about the linear dimension of the system, the matrix $\Sigma_{T,H}$ has factors $R \in \mathbb{R}^{|T|\times n}$ and $\Sigma_{S,H}$, which has $n$ rows. Therefore, the rank of $\Sigma_{T,H}$ is no more than $n$, the linear dimension of the system. We can also see that, since the size of $\Sigma_{T,H}$ is fixed but the number of samples $k$ is increasing, the empirical covariance $\hat\Sigma_{T,H}$ converges to the true covariance $\Sigma_{T,H}$ with probability 1. Next we define $\Sigma_{H,ao,H}$, a set of matrices, one for each action-observation pair, that represent the covariance between features of history before and after taking action $a$ and observing $o$. In the following, $I_t(o)$ is an indicator variable for whether we see observation $o$ at step $t$.
$$\hat\Sigma_{H,ao,H} \equiv \frac{1}{k}\sum_{t=1}^k \phi^H_{t+1} I_t(o)(\phi^H_t)^T,\qquad
\Sigma_{H,ao,H} \equiv E\left[\hat\Sigma_{H,ao,H} \,\middle|\, h_t \sim \omega\ (\forall t),\ do(a)\ (\forall t)\right] = E\left[\frac{1}{k}\sum_{t=1}^k \phi^H_{t+1} I_t(o)(\phi^H_t)^T \,\middle|\, h_t \sim \omega\ (\forall t),\ do(a)\ (\forall t)\right] \tag{15c}$$
Since the dimensions of each $\hat\Sigma_{H,ao,H}$ are fixed, as $k\to\infty$ these empirical covariances converge to the true covariances $\Sigma_{H,ao,H}$ with probability 1. Finally we define $\Sigma_{R,H} \equiv E[R_t(\phi^H_t)^T \mid h_t \sim \omega]$, and approximate the covariance (in this case a vector) of reward and features of history:
$$\hat\Sigma_{R,H} \equiv \frac{1}{k}\sum_{t=1}^k R_t(\phi^H_t)^T,\qquad
\Sigma_{R,H} = E\left[\frac{1}{k}\sum_{t=1}^k R_t(\phi^H_t)^T \,\middle|\, h_t \sim \omega\ (\forall t)\right] = E\left[\frac{1}{k}\sum_{t=1}^k \eta^T Q(h_t)(\phi^H_t)^T \,\middle|\, h_t \sim \omega\ (\forall t)\right] = \eta^T E\left[\frac{1}{k}\sum_{t=1}^k s_t(\phi^H_t)^T \,\middle|\, h_t \sim \omega\ (\forall t)\right] = \eta^T\Sigma_{S,H} \tag{15d}$$
Again, as $k\to\infty$, $\hat\Sigma_{R,H}$ converges to $\Sigma_{R,H}$ with probability 1. We now wish to use the above-defined matrices to learn a TPSR from data. To do so we need to make a somewhat restrictive assumption: we assume that our features of history are rich enough to determine the state of the system exactly, i.e., the regression from $\phi^H$ to $s$ is exact: $s_t = \Sigma_{S,H}\Sigma_{H,H}^{-1}\phi^H_t$. We discuss how to relax this assumption below in Section 4.3.
We also need a matrix $U$ such that $U^T\Phi^T R$ is invertible; with probability 1 a random matrix satisfies this condition, but as we will see below, it is useful to choose $U$ via SVD of a scaled version of $\Sigma_{T,H}$ as described in Sec. 3.2. Using our assumptions we can show a useful identity for $\Sigma_{H,ao,H}$:
$$\begin{aligned}
\Sigma_{S,H}\Sigma_{H,H}^{-1}\Sigma_{H,ao,H} &= E\left[\frac{1}{k}\sum_{t=1}^k \Sigma_{S,H}\Sigma_{H,H}^{-1}\phi^H_{t+1}I_t(o)(\phi^H_t)^T \,\middle|\, h_t\sim\omega\ (\forall t),\ do(a)\ (\forall t)\right]\\
&= E\left[\frac{1}{k}\sum_{t=1}^k s_{t+1}I_t(o)(\phi^H_t)^T \,\middle|\, h_t\sim\omega\ (\forall t),\ do(a)\ (\forall t)\right]\\
&= E\left[\frac{1}{k}\sum_{t=1}^k M_{ao}\,s_t(\phi^H_t)^T \,\middle|\, h_t\sim\omega\ (\forall t)\right]\\
&= M_{ao}\Sigma_{S,H}
\end{aligned} \tag{16}$$
This identity is at the heart of our learning algorithm: it shows that $\Sigma_{H,ao,H}$ contains a hidden copy of $M_{ao}$, the main TPSR parameter that we need to learn. We would like to recover $M_{ao}$ via Eq. 16 as $M_{ao} = \Sigma_{S,H}\Sigma_{H,H}^{-1}\Sigma_{H,ao,H}\Sigma_{S,H}^\dagger$; but of course we do not know $\Sigma_{S,H}$. Fortunately, though, it turns out that we can use $U^T\Sigma_{T,H}$ as a stand-in, as described below, since this matrix differs from $\Sigma_{S,H}$ only by an invertible transform (Eq. 15b). We now show how to recover a TPSR from the matrices $\Sigma_{T,H}$, $\Sigma_{H,H}$, $\Sigma_{R,H}$, $\Sigma_{H,ao,H}$, and $U$. Since a TPSR's predictions are invariant to a similarity transform of its parameters, our algorithm only recovers the TPSR parameters to within a similarity transform.
$$b_t \equiv U^T\Sigma_{T,H}(\Sigma_{H,H})^{-1}\phi^H_t = U^T\Phi^T R\,\Sigma_{S,H}(\Sigma_{H,H})^{-1}\phi^H_t = (U^T\Phi^T R)\,s_t \tag{17a}$$
$$\begin{aligned}
B_{ao} &\equiv U^T\Sigma_{T,H}(\Sigma_{H,H})^{-1}\Sigma_{H,ao,H}(U^T\Sigma_{T,H})^\dagger\\
&= U^T\Phi^T R\,\Sigma_{S,H}(\Sigma_{H,H})^{-1}\Sigma_{H,ao,H}(U^T\Sigma_{T,H})^\dagger\\
&= (U^T\Phi^T R)M_{ao}\Sigma_{S,H}(U^T\Sigma_{T,H})^\dagger\\
&= (U^T\Phi^T R)M_{ao}(U^T\Phi^T R)^{-1}(U^T\Phi^T R)\Sigma_{S,H}(U^T\Sigma_{T,H})^\dagger\\
&= (U^T\Phi^T R)M_{ao}(U^T\Phi^T R)^{-1}
\end{aligned} \tag{17b}$$
$$b_\eta^T \equiv \Sigma_{R,H}(U^T\Sigma_{T,H})^\dagger = \eta^T\Sigma_{S,H}(U^T\Sigma_{T,H})^\dagger = \eta^T(U^T\Phi^T R)^{-1}(U^T\Phi^T R)\Sigma_{S,H}(U^T\Sigma_{T,H})^\dagger = \eta^T(U^T\Phi^T R)^{-1} \tag{17c}$$
Our PSR learning algorithm is simple: replace each true covariance matrix in Eq. 17 by its empirical estimate.
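The recovery in Eq. 17b can be checked numerically. The sketch below builds covariances that satisfy Eqs. 15b and 16 exactly from made-up ground-truth matrices (all names and dimensions are ours), then verifies that the recovered $B_{ao}$ is a similarity transform of $M_{ao}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dH, dT = 3, 5, 6  # latent dim, history/test feature dims (toy sizes)

# Ground-truth quantities (hypothetical, for illustration only).
PhiR = rng.random((dT, n))            # plays the role of Phi^T R
Sig_SH = rng.random((n, dH))          # full row rank with probability 1
A = rng.random((dH, dH)); Sig_HH = A @ A.T + np.eye(dH)  # SPD covariance
M_ao = rng.random((n, n))             # true transition operator

# Covariances implied by the model (Eqs. 15b and 16).
Sig_TH = PhiR @ Sig_SH
Sig_HaoH = Sig_HH @ np.linalg.pinv(Sig_SH) @ M_ao @ Sig_SH

# Spectral recovery (Eq. 17b) with any U making U^T Phi^T R invertible.
U = rng.random((dT, n))
B_ao = (U.T @ Sig_TH @ np.linalg.inv(Sig_HH) @ Sig_HaoH
        @ np.linalg.pinv(U.T @ Sig_TH))

# B_ao equals M_ao up to the similarity transform J = U^T Phi^T R,
# so in particular its eigenvalues match those of M_ao.
J = U.T @ PhiR
assert np.allclose(B_ao, J @ M_ao @ np.linalg.inv(J))
```

This is the population-level identity; the learning algorithm substitutes empirical covariances, and consistency of those estimates carries over to $\hat B_{ao}$.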
Since the empirical estimates converge to their true values with probability 1 as the sample size increases, our learning algorithm is clearly statistically consistent.

Predictive State Temporal Difference Learning (Revisited) Finally, we are ready to show that the model-free PSTD learning algorithm introduced in Section 3.3 is equivalent to a model-based algorithm built around PSR learning. For a fixed policy $\pi$, a TPSR's value function is a linear function of state, $J^\pi(s) = w^T b$, and is the solution of the TPSR Bellman equation [31]: for all $b$, $w^T b = b_\eta^T b + \gamma\sum_{o\in O} w^T B_{\pi o}b$, or equivalently,
$$w^T = b_\eta^T + \gamma\sum_{o\in O} w^T B_{\pi o}$$
If we substitute in our learned PSR parameters from Equations 17(a-c), we get
$$\hat w^T = \hat\Sigma_{R,H}(U^T\hat\Sigma_{T,H})^\dagger + \gamma\sum_{o\in O}\hat w^T U^T\hat\Sigma_{T,H}(\hat\Sigma_{H,H})^{-1}\hat\Sigma_{H,\pi o,H}(U^T\hat\Sigma_{T,H})^\dagger$$
$$\hat w^T U^T\hat\Sigma_{T,H} = \hat\Sigma_{R,H} + \gamma\hat w^T U^T\hat\Sigma_{T,H}(\hat\Sigma_{H,H})^{-1}\hat\Sigma_{H^+,H}$$
since, by comparing Eqs. 15c and 12, we can see that $\sum_{o\in O}\hat\Sigma_{H,\pi o,H} = \hat\Sigma_{H^+,H}$. Now, suppose that we define $\hat U$ and $\hat V$ by Eqs. 8 and 10, and let $U = \hat U$ as suggested above in Sec. 4.1. Then $U^T\hat\Sigma_{T,H} = \hat V\hat\Sigma_{H,H}$, and
$$\hat w^T\hat V\hat\Sigma_{H,H} = \hat\Sigma_{R,H} + \gamma\hat w^T\hat V\hat\Sigma_{H^+,H}$$
$$\hat w^T = \hat\Sigma_{R,H}\left(\hat V\hat\Sigma_{H,H} - \gamma\hat V\hat\Sigma_{H^+,H}\right)^\dagger \tag{18}$$
Eq. 18 is exactly the PSTD algorithm (Eq. 13). So, we have shown that, if we learn a PSR by the subspace identification algorithm of Sec. 4.1 and then compute its value function via the Bellman equation, we get the exact same answer as if we had directly learned the value function via the model-free PSTD method. In addition to adding to our understanding of both methods, an important corollary of this result is that PSTD is a statistically consistent algorithm for PSR value function approximation; to our knowledge, this is the first such result for a TD method. PSTD learning is related to value-directed compression of POMDPs [11]. If we learn a TPSR from data generated by a POMDP, then the TPSR state is exactly a linear compression of the POMDP state [15,20].
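The TPSR Bellman equation above is a linear fixed-point equation in $w$, so given learned parameters it can be solved directly. A sketch with made-up parameters (scaled so the fixed point exists):

```python
import numpy as np

rng = np.random.default_rng(2)
n, gamma = 4, 0.9

# Hypothetical learned TPSR parameters for a fixed policy: one B_{pi o}
# per observation, plus the reward parameter b_eta.
Bs = [rng.random((n, n)) for _ in range(2)]
Bsum = Bs[0] + Bs[1]
# Scale so the spectral radius of gamma * sum_o B_{pi o} is below 1.
Bsum *= 0.95 / (gamma * np.max(np.abs(np.linalg.eigvals(Bsum))))
b_eta = rng.random(n)

# w^T = b_eta^T + gamma * w^T * sum_o B_{pi o}
# => w^T (I - gamma * Bsum) = b_eta^T, a linear system in w.
w = np.linalg.solve((np.eye(n) - gamma * Bsum).T, b_eta)

# Check the fixed point.
assert np.allclose(w, b_eta + gamma * (w @ Bsum))
```

Substituting the learned parameters of Eq. 17 into this solve is what produces the closed form in Eq. 18.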
The compression can be exact or approximate, depending on whether we include enough features of the future and whether we keep all or only some nonzero singular values in our bottleneck. If we include only reward as a feature of the future, we get a value-directed compression in the sense of Poupart and Boutilier [11]. If desired, we can tune the degree of value-directedness of our compression by scaling the relative variance of our features: the higher the variance of the reward feature compared to other features, the more value-directed the resulting compression will be. Our work significantly diverges from previous work on POMDP compression in one important respect: prior work assumes access to the true POMDP model, while we make no such assumption and learn a compressed representation directly from data.

Insights from Subspace Identification The close connection to subspace identification for PSRs provides additional insight into the temporal difference learning procedure. In Equation 17 we made the assumption that the features of history are rich enough to completely determine the state of the dynamical system. In fact, using theory developed in [21], it is possible to relax this assumption and instead assume that state is merely correlated with features of history. In this case, we need to introduce a new set of covariance matrices $\Sigma_{T,ao,H} \equiv E[\phi^T_{t+1} I_t(o)(\phi^H_t)^T \mid h_t \sim \omega, do(a,\zeta)]$, one for each action-observation pair, that represent the covariance between features of history before, and features of tests after, taking action $a$ and observing $o$. We can then estimate the TPSR transition matrices as $\hat B_{ao} = \hat U^T\hat\Sigma_{T,ao,H}(\hat U^T\hat\Sigma_{T,H})^\dagger$ (see [21] for proof details). The value function parameter $w$ can be estimated as
$$\hat w^T = \hat\Sigma_{R,H}(\hat U^T\hat\Sigma_{T,H})^\dagger\left(I - \gamma\sum_{o\in O}\hat U^T\hat\Sigma_{T,ao,H}(\hat U^T\hat\Sigma_{T,H})^\dagger\right)^\dagger = \hat\Sigma_{R,H}\left(\hat U^T\hat\Sigma_{T,H} - \gamma\sum_{o\in O}\hat U^T\hat\Sigma_{T,ao,H}\right)^\dagger$$
(the proof is similar to Equation 18).
Since we no longer assume that state is completely specified by features of history, we can no longer apply the learned value function to $\hat U^T\hat\Sigma_{T,H}(\hat\Sigma_{H,H})^{-1}\phi^H_t$ at each time $t$. Instead we need to learn a full PSR model and filter with the model to estimate state. Details on this procedure can be found in [21].

Experimental Results We designed several experiments to evaluate the properties of the PSTD learning algorithm. In the first set of experiments we look at the comparative merits of PSTD with respect to LSTD and LARS-TD when applied to the problem of estimating the value function of a reduced-rank POMDP. In the second set of experiments, we apply PSTD to a benchmark optimal stopping problem (pricing a fictitious financial derivative), and show that PSTD outperforms competing approaches.

Estimating the Value Function of a RR-POMDP We evaluate the PSTD learning algorithm on a synthetic example derived from [32]. The problem is to find the value function of a policy in a partially observable Markov decision process (POMDP). The POMDP has 4 latent states, but the policy's transition matrix is low rank: the resulting belief distributions can be represented in a 3-dimensional subspace of the original belief simplex. A reward of 1 is given in the first and third latent states and a reward of 0 in the other two latent states (see Appendix, Section B). The system emits 2 possible observations, conflating information about the latent states. We perform 3 experiments, comparing the performance of LSTD, LARS-TD, PSTD, and PSTD as formulated in Section 4.3 (which we call PSTD2) when different sets of features are used. In each case we compare the value function estimated by each algorithm to the true value function computed by $J^\pi = R(I - \gamma T^\pi)^{-1}$. In the first experiment we execute the policy $\pi$ for 1000 time steps.
We split the data into overlapping histories and tests of length 5, and sample 10 of these histories and tests to serve as centers for Gaussian radial basis functions. We then evaluate each basis function at every remaining sample. Then, using these features, we learn the value function using LSTD, LARS-TD, PSTD with linear dimension 3, and PSTD2 with linear dimension 3 (Figure 1(A)). In this experiment, PSTD and PSTD2 both had lower mean squared error than the other approaches. For the second experiment, we added 490 random features to the 10 good features and then attempted to learn the value function with each of the algorithms (Figure 1(B)). In this case, LSTD and PSTD both had difficulty fitting the value function due to the large number of irrelevant features in both tests and histories and the relatively small amount of training data. LARS-TD, designed for precisely this scenario, was able to select the 10 relevant features and estimate the value function better by a substantial margin. Surprisingly, in this experiment PSTD2 not only outperformed PSTD but bested even LARS-TD. For the third experiment, we increased the number of sampled features from 10 to 500. In this case, each feature was somewhat relevant, but the number of features was relatively large compared to the amount of training data. This situation occurs frequently in practice: it is often easy to find a large number of features that are at least somewhat related to state. PSTD and PSTD2 both outperform LARS-TD, and each of these subspace and subset selection methods outperforms LSTD by a large margin by efficiently estimating the value function (Figure 1(C)).

Pricing a High-dimensional Financial Derivative Derivatives are financial contracts with payoffs linked to the future prices of basic assets such as stocks, bonds, and commodities.
In some derivatives the contract holder has no choices, but in more complex cases the contract owner must make decisions: for example, with early exercise the contract holder can decide to terminate the contract at any time and receive payments based on prevailing market conditions. In these cases, the value of the derivative depends on how the contract holder acts. Deciding when to exercise is therefore an optimal stopping problem: at each point in time, the contract holder must decide whether to continue holding the contract or to exercise. Such stopping problems provide an ideal testbed for policy evaluation methods, since we can easily collect a single data set which is sufficient to evaluate any policy: we just choose the "continue" action forever. (We can then evaluate the "stop" action easily in any of the resulting states, since the immediate reward is given by the rules of the contract, and the next state is the terminal state by definition.) We consider the financial derivative introduced by Tsitsiklis and Van Roy [33]. The derivative generates payoffs that are contingent on the prices of a single stock. At the end of a given day, the holder may opt to exercise. At exercise the owner receives a payoff equal to the current price of the stock divided by the price 100 days beforehand. We can think of this derivative as a "psychic call": the owner gets to decide whether s/he would like to have bought an ordinary 100-day European call option, at the then-current market price, 100 days ago. In our simulation (and unknown to the investor), the underlying stock price follows a geometric Brownian motion with volatility $\sigma = 0.02$ and continuously compounded short-term growth rate $\rho = 0.0004$. Assuming stock prices fluctuate only on days when the market is open, these parameters correspond to an annual growth rate of approximately 10%.
In more detail, if $w_t$ is a standard Brownian motion, then the stock price $p_t$ evolves as $dp_t = \rho p_t\,dt + \sigma p_t\,dw_t$, and we can summarize the relevant state at the end of each day as a vector $x_t \in \mathbb{R}^{100}$, with
$$x_t = \left(\frac{p_{t-99}}{p_{t-100}}, \frac{p_{t-98}}{p_{t-100}}, \dots, \frac{p_t}{p_{t-100}}\right)^T$$
The $i$th dimension $x_t(i)$ represents the amount a \$1 investment in the stock at time $t-100$ would grow to at time $t-100+i$. This process is Markov and ergodic [33,34]: $x_t$ and $x_{t+100}$ are independent and identically distributed. The immediate reward for exercising the option is $G(x) = x(100)$, and the immediate reward for continuing to hold the option is 0. The discount factor $\gamma = e^{-\rho}$ is determined by the growth rate; this corresponds to assuming that the risk-free interest rate is equal to the stock's growth rate, meaning that the investor gains nothing in expectation by holding the stock itself. The value of the derivative, if the current state is $x$, is given by $V^*(x) = \sup_t E[\gamma^t G(x_t) \mid x_0 = x]$. Our goal is to calculate an approximate value function $V(x) = w^T\phi^H(x)$, and then use this value function to generate a stopping time $\min\{t \mid G(x_t) \ge V(x_t)\}$. To do so, we sample a sequence of 1,000,000 states $x_t \in \mathbb{R}^{100}$ and calculate features $\phi^H$ of each state. We then perform policy iteration on this sample, alternately estimating the value function under a given policy and then using this value function to define a new greedy policy "stop if $G(x_t) \ge w^T\phi^H(x_t)$." Within the above strategy, we have two main choices: which features we use, and how we estimate the value function in terms of these features. For value function estimation, we used LSTD, LARS-TD, or PSTD. In each case we re-used our 1,000,000-state sample trajectory for all iterations: we start at the beginning and follow the trajectory as long as the policy chooses the "continue" action, with reward 0 at each step.
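The price process and state vector described above can be sketched as follows. This is a minimal simulation under our own discretization choices (an Euler-Maruyama step with a one-day step size); only $\sigma$ and $\rho$ come from the text:

```python
import numpy as np

# Simulate the stock price as geometric Brownian motion and build the
# 100-dimensional state vector x_t of price ratios.
rng = np.random.default_rng(3)
sigma, rho, dt = 0.02, 0.0004, 1.0
T = 400
p = np.empty(T); p[0] = 1.0
for t in range(1, T):
    # Euler-Maruyama step for dp = rho*p*dt + sigma*p*dw
    p[t] = p[t-1] + rho * p[t-1] * dt \
         + sigma * p[t-1] * np.sqrt(dt) * rng.standard_normal()

def state(p, t):
    """x_t(i) = p_{t-100+i} / p_{t-100}, for i = 1..100."""
    return p[t-99:t+1] / p[t-100]

x = state(p, 300)
assert x.shape == (100,)
assert np.isclose(x[-1], p[300] / p[200])  # x_t(100) = p_t / p_{t-100}, i.e. G(x)
```

Feature vectors $\phi^H(x_t)$ are then computed from each such state along the trajectory.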
When the policy executes the "stop" action, the reward is $G(x)$ and the next state's features are all 0; we then restart the policy 100 steps in the future, after the process has fully mixed. For feature selection, we are fortunate: previous researchers have hand-selected a "good" set of 16 features for this data set through repeated trial and error (see Appendix, Section B and [33,34]). We greatly expand this set of features, then use PSTD to synthesize a small set of high-quality combined features. Specifically, we add the entire 100-step state vector, the squares of the components of the state vector, and several additional nonlinear features, increasing the total number of features from 16 to 220. We use histories of length 1, tests of length 5, and (for comparison's sake) we choose a linear dimension of 16. Tests (but not histories) were value-directed by reducing the variance of all features except reward by a factor of 100. Figure 1D shows results. We compared PSTD (reducing 220 to 16 features) to LSTD with either the 16 hand-selected features or the full 220 features, as well as to LARS-TD (220 features) and to a simple thresholding strategy [33]. In each case we evaluated the final policy on 10,000 new random trajectories. PSTD outperformed each of its competitors, improving on the next best approach, LARS-TD, by 1.75 percentage points. In fact, PSTD performs better than the best previously reported approach [33,34] by 1.24 percentage points. These improvements correspond to appreciable fractions of the risk-free interest rate (which is about 4 percentage points over the 100-day window of the contract), and therefore to significant arbitrage opportunities: an investor who doesn't know the best strategy will consistently undervalue the security, allowing an informed investor to buy it for below its expected value.

Conclusion In this paper, we attack the feature selection problem for temporal difference learning.
Although well-known temporal difference algorithms such as LSTD can provide asymptotically unbiased estimates of value function parameters in linear architectures, they can have trouble in finite samples: if the number of features is large relative to the number of training samples, then they can have high variance in their value function estimates. For this reason, in real-world problems, a substantial amount of time is spent selecting a small set of features, often by trial and error [33,34]. To remedy this problem, we present the PSTD algorithm, a new approach to feature selection for TD methods, which demonstrates how insights from system identification can benefit reinforcement learning. PSTD automatically chooses a small set of features that are relevant for prediction and value function approximation. It approaches feature selection from a bottleneck perspective, by finding a small set of features that preserves only predictive information. Because of the focus on predictive information, the PSTD approach is closely connected to PSRs: under appropriate assumptions, PSTD's compressed set of features is asymptotically equivalent to TPSR state, and PSTD is a consistent estimator of the PSR value function. We demonstrate the merits of PSTD compared to two popular alternative algorithms, LARS-TD and LSTD, on a synthetic example, and argue that PSTD is most effective when approximating a value function from a large number of features, each of which contains at least a little information about state. Finally, we apply PSTD to a difficult optimal stopping problem, and demonstrate the practical utility of the algorithm by outperforming several alternative approaches and topping the best reported previous results.

Appendix, Section B: Basis Functions The first basis functions encode the payoff and the minimal and maximal returns, and how long ago they occurred (the definitions of $\phi_4$ through $\phi_6$ are garbled in the source): $\phi_1(x) = 1$, $\phi_2(x) = G(x)$, $\phi_3(x) = \min_{i=1,\dots,100} x(i)$. The next set of basis functions summarize the characteristics of the basic shape of the 100-day sample path: they are the inner products of the path with the first four Legendre polynomial degrees. Let $j = i/50 - 1$. Then
$$\phi_7(x) = \frac{1}{100}\sum_{i=1}^{100} x(i)\,\frac{1}{\sqrt{2}},\quad
\phi_8(x) = \frac{1}{100}\sum_{i=1}^{100} x(i)\,\sqrt{\tfrac{3}{2}}\, j,\quad
\phi_9(x) = \frac{1}{100}\sum_{i=1}^{100} x(i)\,\sqrt{\tfrac{5}{2}}\,\frac{3j^2 - 1}{2},\quad
\phi_{10}(x) = \frac{1}{100}\sum_{i=1}^{100} x(i)\,\sqrt{\tfrac{7}{2}}\,\frac{5j^3 - 3j}{2}$$
Nonlinear combinations of basis functions:
$$\phi_{11}(x) = \phi_2(x)\phi_3(x),\quad \phi_{12}(x) = \phi_2(x)\phi_4(x),\quad \phi_{13}(x) = \phi_2(x)\phi_7(x),\quad \phi_{14}(x) = \phi_2(x)\phi_8(x),\quad \phi_{15}(x) = \phi_2(x)\phi_9(x),\quad \phi_{16}(x) = \phi_2(x)\phi_{10}(x)$$
In order to improve our results, we added a large number of additional basis functions to these hand-picked 16. PSTD will compress these features for us, so we can use as many additional basis functions as we would like. First we defined 4 additional basis functions consisting of the inner products of the 100-day sample path with the 5th and 6th Legendre polynomials, and we added the corresponding nonlinear combinations of basis functions:
$$\phi_{17}(x) = \frac{1}{100}\sum_{i=1}^{100} x(i)\,\sqrt{\tfrac{9}{2}}\,\frac{35j^4 - 30j^2 + 3}{8},\quad
\phi_{18}(x) = \frac{1}{100}\sum_{i=1}^{100} x(i)\,\sqrt{\tfrac{11}{2}}\,\frac{63j^5 - 70j^3 + 15j}{8},\quad
\phi_{19}(x) = \phi_2(x)\phi_{17}(x),\quad \phi_{20}(x) = \phi_2(x)\phi_{18}(x)$$
Finally we added the entire sample path and the squared sample path: $\phi_{21:120} = x_{1:100}$ and $\phi_{121:220} = x^2_{1:100}$.
We propose a new approach to value function approximation which combines linear temporal difference reinforcement learning with subspace identification. In practical applications, reinforcement learning (RL) is complicated by the fact that state is either high-dimensional or partially observable. Therefore, RL methods are designed to work with features of state rather than state itself, and the success or failure of learning is often determined by the suitability of the selected features. By comparison, subspace identification (SSID) methods are designed to select a feature set which preserves as much information as possible about state. In this paper we connect the two approaches, looking at the problem of reinforcement learning with a large set of features, each of which may only be marginally useful for value function approximation. We introduce a new algorithm for this situation, called Predictive State Temporal Difference (PSTD) learning. As in SSID for predictive state representations, PSTD finds a linear compression operator that projects a large set of features down to a small set that preserves the maximum amount of predictive information. As in RL, PSTD then uses a Bellman recursion to estimate a value function. We discuss the connection between PSTD and prior approaches in RL and SSID. We prove that PSTD is statistically consistent, perform several experiments that illustrate its properties, and demonstrate its potential on a difficult optimal stopping problem.
Recent work looked at the problem of value function estimation from the perspective of both model-free and model-based reinforcement learning @cite_33 . The model-free approach estimates a value function directly from sample trajectories, i.e., from sequences of feature vectors of visited states. The model-based approach, by contrast, first learns a model and then computes the value function from the learned model. That work compared LSTD (a model-free method) to a model-based method in which a linear model is first learned by viewing features as a proxy for state (leading to a linear transition matrix that predicts future features from past features), and the value function is then computed from this approximate model. It demonstrated that these two approaches compute exactly the same value function @cite_33 , formalizing a fact that has been recognized to some degree before @cite_37 .
Abstracts of the works cited above:

- @cite_37 : "TD(λ) is a popular family of algorithms for approximate policy evaluation in large MDPs. TD(λ) works by incrementally updating the value function after each observed transition. It has two major drawbacks: it makes inefficient use of data, and it requires the user to manually tune a stepsize schedule for good performance. For the case of linear value function approximations and λ = 0, the Least-Squares TD (LSTD) algorithm of Bradtke and Barto (1996) eliminates all stepsize parameters and improves data efficiency. This paper extends Bradtke and Barto's work in three significant ways. First, it presents a simpler derivation of the LSTD algorithm. Second, it generalizes from λ = 0 to arbitrary values of λ; at the extreme of λ = 1, the resulting algorithm is shown to be a practical formulation of supervised linear regression. Third, it presents a novel, intuitive interpretation of LSTD as a model-based reinforcement learning technique."
- @cite_33 : "We show that linear value-function approximation is equivalent to a form of linear model approximation. We then derive a relationship between the model-approximation error and the Bellman error, and show how this relationship can guide feature selection for model improvement and/or value-function improvement."
Predictive State Temporal Difference Learning
Value Function Approximation We start from a discrete-time dynamical system with a set of states $S$, a set of actions $A$, a distribution over initial states $\pi_0$, a state transition function $T$, a reward function $R$, and a discount factor $\gamma \in [0,1]$. We seek a policy $\pi$, a mapping from states to actions. The notion of a value function is of central importance in reinforcement learning: for a given policy $\pi$, the value of state $s$ is defined as the expected discounted sum of rewards obtained when starting in state $s$ and following policy $\pi$, $J^\pi(s) = E\left[\sum_{t=0}^{\infty}\gamma^t R(s_t) \mid s_0 = s, \pi\right]$. It is well known that the value function must obey the Bellman equation
$$J^\pi(s) = R(s) + \gamma\sum_{s'} J^\pi(s')\Pr[s' \mid s, \pi(s)] \tag{1}$$
If we know the transition function $T$, and if the set of states $S$ is sufficiently small, we can use (1) directly to solve for the value function $J^\pi$. We can then execute the greedy policy for $J^\pi$, setting the action at each state to maximize the right-hand side of (1). However, we consider instead the harder problem of estimating the value function when $s$ is a partially observable latent variable, and when the transition function $T$ is unknown. In this situation, we receive information about $s$ through observations from a finite set $O$. Our state (i.e., the information which we can use to make decisions) is not an element of $S$ but a history (an ordered sequence of action-observation pairs $h = a^h_1 o^h_1 \dots a^h_t o^h_t$ that have been executed and observed prior to time $t$). If we knew the transition model $T$, we could use $h$ to infer a belief distribution over $S$, and use that belief (or a compression of that belief) as a state instead; below, we will discuss how to learn a compressed belief state. Because of partial observability, we can only hope to predict reward conditioned on history, $R(h) = E[R(s) \mid h]$, and we must choose actions as a function of history, $\pi(h)$ instead of $\pi(s)$. Let $H$ be the set of all possible histories.
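For intuition, in the small fully observable case Eq. 1 is just a linear system and can be solved directly. A sketch on a made-up 3-state chain (the transition matrix and rewards are ours, chosen only for illustration):

```python
import numpy as np

# For a fixed policy pi, Eq. 1 in matrix form is J = R + gamma * P J,
# so J = (I - gamma * P)^{-1} R, where P[s, s'] = Pr[s' | s, pi(s)].
gamma = 0.9
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
R = np.array([1.0, 0.0, 0.0])

J = np.linalg.solve(np.eye(3) - gamma * P, R)

# Sanity check: J satisfies the Bellman equation.
assert np.allclose(J, R + gamma * P @ J)
```

The rest of the section addresses the harder setting where neither $s$ nor $T$ is available and this direct solve is impossible.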
$H$ is often very large or infinite, so instead of finding a value separately for each history, we focus on value functions that are linear in features of histories:
$$J^\pi(h) = w^T\phi^H(h) \tag{2}$$
Here $w \in \mathbb{R}^j$ is a parameter vector and $\phi^H(h) \in \mathbb{R}^j$ is a feature vector for a history $h$. So, we can rewrite the Bellman equation as
$$w^T\phi^H(h) = R(h) + \gamma\sum_{o\in O} w^T\phi^H(h\pi o)\Pr[h\pi o \mid h\pi] \tag{3}$$
where $h\pi o$ is history $h$ extended by taking action $\pi(h)$ and observing $o$.

Least Squares Temporal Difference Learning In general we don't know the transition probabilities $\Pr[h\pi o \mid h]$, but we do have samples of state features $\phi^H_t = \phi^H(h_t)$, next-state features $\phi^H_{t+1} = \phi^H(h_{t+1})$, and immediate rewards $R_t = R(h_t)$. We can thus approximate the Bellman equation:
$$w^T\phi^H_{1:k} \approx R_{1:k} + \gamma w^T\phi^H_{2:k+1} \tag{4}$$
(Here we have used the notation $\phi^H_{1:k}$ to mean the matrix whose columns are $\phi^H_t$ for $t = 1\dots k$.) We can immediately attempt to estimate the parameter $w$ by solving this linear system in the least squares sense: $\hat w^T = R_{1:k}\left(\phi^H_{1:k} - \gamma\phi^H_{2:k+1}\right)^\dagger$, where $\dagger$ indicates the Moore-Penrose pseudoinverse. However, this solution is biased [3], since the independent variables $\phi^H_t - \gamma\phi^H_{t+1}$ are noisy samples of the expected difference $E\left[\phi^H(h) - \gamma\sum_{o\in O}\phi^H(h\pi o)\Pr[h\pi o \mid h]\right]$. In other words, estimating the value function parameters $w$ is an error-in-variables problem. The least squares temporal difference (LSTD) algorithm provides a consistent estimate by right-multiplying the approximate Bellman equation (Equation 4) by $(\phi^H_t)^T$. The quantity $(\phi^H_t)^T$ can be viewed as an instrumental variable [3], i.e., a measurement that is correlated with the true independent variables, but uncorrelated with the noise in our estimates of these variables.
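The instrumental-variable fix can be sketched on a toy Markov reward process. The example below is ours (a deterministic 3-state cycle with one-hot features); with deterministic transitions the LSTD solution matches the analytic value function exactly:

```python
import numpy as np

gamma = 0.9
R = np.array([1.0, 0.0, 0.0])
nxt = [1, 2, 0]                       # deterministic successor of each state

# Generate a trajectory of features and rewards.
k = 300
states = [t % 3 for t in range(k + 1)]
phi = np.eye(3)[states].T             # 3 x (k+1); columns are one-hot features
rewards = R[states[:k]]

# LSTD: w^T = (sum_t R_t phi_t^T) (sum_t (phi_t - gamma*phi_{t+1}) phi_t^T)^{-1}
A = (phi[:, :k] - gamma * phi[:, 1:k+1]) @ phi[:, :k].T
b = rewards @ phi[:, :k].T
w = np.linalg.solve(A.T, b)           # solves w^T A = b^T

# Analytic value function of the cycle for comparison.
P = np.zeros((3, 3)); P[np.arange(3), nxt] = 1.0
J_true = np.linalg.solve(np.eye(3) - gamma * P, R)
assert np.allclose(w, J_true)
```

Right-multiplying by $(\phi^H_t)^T$ is what turns the biased regression into the pair of empirical covariance sums appearing in Eq. 5.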
The value function parameter $w$ may then be estimated as follows:
$$\hat w^T = \left(\frac{1}{k}\sum_{t=1}^k R_t(\phi^H_t)^T\right)\left(\frac{1}{k}\sum_{t=1}^k \phi^H_t(\phi^H_t)^T - \frac{\gamma}{k}\sum_{t=1}^k \phi^H_{t+1}(\phi^H_t)^T\right)^{-1} \tag{5}$$
As the amount of data $k$ increases, the empirical covariance matrices in (5) converge to their true values with probability 1, so our estimate of the matrix to be inverted is consistent. Therefore, as long as this matrix is nonsingular, our estimate of its inverse is also consistent, and our estimate of $w$ converges to the true parameters with probability 1.

Predictive Features Although LSTD provides a consistent estimate of the value function parameters $w$, in practice the potential size of the feature vectors can be a problem. If the number of features is large relative to the number of training samples, then the estimation of $w$ is prone to overfitting. This problem can be alleviated by choosing some small set of features that only contains information that is relevant for value function approximation. However, with the exception of LARS-TD [18], there has been little work on the problem of how to select features automatically for value function approximation when the system model is unknown; and of course, manual feature selection depends on not-always-available expert guidance. We approach the problem of finding a good set of features from a bottleneck perspective. That is, given some signal from history, in this case a large set of features, we would like to find a compression that preserves only relevant information for predicting the value function $J^\pi$. As we will see in Section 4, this improvement is directly related to spectral identification of PSRs.

Tests and Features of the Future We first need to define precisely the task of predicting the future. Just as a history is an ordered sequence of action-observation pairs executed prior to time $t$, we define a test of length $i$ to be an ordered sequence of action-observation pairs $\tau = a_1 o_1 \dots a_i o_i$ that can be executed and observed after time $t$ [14].
The prediction for a test $\tau$ after a history $h$, written $\tau(h)$, is the probability that we will see the test observations $\tau^O = o_1\dots o_i$, given that we intervene [22] to execute the test actions $\tau^A = a_1\dots a_i$: $\tau(h) = \Pr[\tau^O \mid h, do(\tau^A)]$. If $Q = \{\tau_1,\dots,\tau_n\}$ is a set of tests, we write $Q(h) = (\tau_1(h),\dots,\tau_n(h))^T$ for the corresponding vector of test predictions. We can generalize the notion of a test to a feature of the future, a linear combination of several tests sharing a common action sequence. For example, if $\tau_1$ and $\tau_2$ are two tests with $\tau^A_1 = \tau^A_2 \equiv \tau^A$, then we can make a feature $\phi = 3\tau_1 + \tau_2$. This feature is executed if we intervene to $do(\tau^A)$, and if it is executed its value is $3I(\tau^O_1) + I(\tau^O_2)$, where $I(o_1\dots o_i)$ stands for an indicator random variable, taking the value 0 or 1 depending on whether we observe the sequence of observations $o_1\dots o_i$. The prediction of $\phi$ given $h$ is $\phi(h) \equiv E(\phi \mid h, do(\tau^A)) = 3\tau_1(h) + \tau_2(h)$. While linear combinations of tests may seem restrictive, our definition is actually very expressive: we can represent an arbitrary function of a finite sequence of future observations. To do so, we take a collection of tests, each of which picks out one possible realization of the sequence, and weight each test by the value of the function conditioned on that realization. For example, if our observations are integers $1, 2, \dots, 10$, we can write the square of the next observation as $\sum_{o=1}^{10} o^2 I(o)$, and the mean of the next two observations as $\sum_{o=1}^{10}\sum_{o'=1}^{10}\frac{1}{2}(o + o')I(o, o')$. The restriction to a common action sequence is necessary: without this restriction, all the tests making up a feature could never be executed at once. Once we move to feature predictions, however, it makes sense to lift this restriction: we will say that any linear combination of feature predictions is also a feature prediction, even if the features involved have different action sequences.
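The weighted-indicator construction can be illustrated numerically. The sketch below checks that the prediction of the feature $\sum_o o^2 I(o)$ equals the expectation of the squared observation, using a made-up observation distribution:

```python
import numpy as np

rng = np.random.default_rng(4)
obs = np.arange(1, 11)                # observations are integers 1..10
p = rng.random(10); p /= p.sum()      # hypothetical Pr[next observation = o]

# Prediction of phi = sum_o o^2 I(o): weight each indicator test by o^2.
phi_pred = np.sum(obs**2 * p)

# Monte Carlo check: average feature value over sampled observations.
samples = rng.choice(obs, size=200000, p=p)
mc = np.mean(samples.astype(float)**2)
assert abs(mc - phi_pred) / phi_pred < 0.02
```

Each indicator test picks out one realization of the next observation, and the weights encode the function of interest, exactly as in the text.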
Action sequences raise some problems with obtaining empirical estimates of means and covariances of features of the future: e.g., it is not always possible to get a sample of a particular feature's value on every time step, and the feature we choose to sample at one step can restrict which features we can sample at subsequent steps. In order to carry out our derivations without running into these problems repeatedly, we will assume for the rest of the paper that we can reset our system after every sample, and get a new history independently distributed as $h_t \sim \omega$ for some distribution $\omega$. (With some additional bookkeeping we could remove this assumption [23], but this bookkeeping would unnecessarily complicate our derivations.) Furthermore, we will introduce some new language, again to keep derivations simple: if we have a vector of features of the future $\phi^T$, we will pretend that we can get a sample $\phi^T_t$ in which we evaluate all of our features starting from a single history $h_t$, even if the different elements of $\phi^T$ require us to execute different action sequences. When our algorithms call for such a sample, we will instead use the following trick to get a random vector with the correct expectation (and somewhat higher variance, which doesn't matter for any of our arguments): write $\tau^A_1, \tau^A_2, \dots$ for the different action sequences, and let $\zeta_1, \zeta_2, \dots > 0$ be a probability distribution over these sequences. We pick a single action sequence $\tau^A_a$ according to $\zeta$, and execute $\tau^A_a$ to get a sample $\hat\phi^T$ of the features which depend on $\tau^A_a$. We then enter $\hat\phi^T/\zeta_a$ into the corresponding coordinates of $\phi^T_t$, and fill in zeros everywhere else. It is easy to see that the expected value of our sample vector is then correct: the probability of selection $\zeta_a$ and the weighting factor $1/\zeta_a$ cancel out. We will write $E(\phi^T \mid h_t, do(\zeta))$ to stand for this expectation.
None of the above tricks are actually necessary in our experiments with stopping problems: we simply execute the "continue" action on every step, and use only sequences of "continue" actions in every test and feature.

Finding Predictive Features Through a Bottleneck

In order to find a predictive feature compression, we first need to determine what we would like to predict. Since we are interested in value function approximation, the most relevant prediction is the value function itself; so, we could simply try to predict total future discounted reward given a history. Unfortunately, total discounted reward has high variance, so unless we have a lot of data, learning will be difficult. We can reduce variance by including other prediction tasks as well. For example, predicting individual rewards at future time steps, while not strictly necessary to predict total discounted reward, seems highly relevant, and gives us much more immediate feedback. Similarly, future observations hopefully contain information about future reward, so trying to predict observations can help us predict reward better. Finally, in any specific RL application, we may be able to add problem-specific prediction tasks that will help focus our attention on relevant information: for example, in a path-planning problem, we might try to predict which of several goal states we will reach (in addition to how much it will cost to get there). We can represent all of these prediction tasks as features of the future: e.g., to predict which goal we will reach, we add a distinct observation at each goal state, and to predict individual rewards, we add individual rewards as observations. We will write $\phi^T_t$ for the vector of all features of the "future at time $t$," i.e., events starting at time $t+1$ and continuing forward. So, instead of remembering a large arbitrary set of features of history, we want to find a small subspace of features of history that is relevant for predicting features of the future.
We will call this subspace a predictive compression, and we will write the value function as a linear function of only the predictive compression of features. To find our predictive compression, we will use reduced-rank regression [24]. We define the following empirical covariance matrices between features of the future and features of histories:

$\hat\Sigma_{T,H} = \frac{1}{k}\sum_{t=1}^k \phi^T_t(\phi^H_t)^T \qquad \hat\Sigma_{H,H} = \frac{1}{k}\sum_{t=1}^k \phi^H_t(\phi^H_t)^T$ (6)

Let $L_H$ be the lower triangular Cholesky factor of $\hat\Sigma_{H,H}$. Then we can find a predictive compression of histories by a singular value decomposition (SVD) of the weighted covariance: write

$UDV^T \approx \hat\Sigma_{T,H}\,L_H^{-T}$ (7)

for a truncated SVD [25] of the weighted covariance, where $U$ are the left singular vectors, $V^T$ are the right singular vectors, and $D$ is the diagonal matrix of singular values. The number of columns of $U$, $V$, or $D$ is equal to the number of retained singular values. Then we define

$\hat U = UD^{1/2}$ (8)

to be the mapping from the low-dimensional compressed space up to the high-dimensional space of features of the future. Given $\hat U$, we would like to find a compression operator $\hat V$ that optimally predicts features of the future through the bottleneck defined by $\hat U$. The least squares estimate can be found by minimizing the loss

$L(\hat V) = \left\|\phi^T_{1:k} - \hat U\hat V\phi^H_{1:k}\right\|^2_F$ (9)

where $\|\cdot\|_F$ denotes the Frobenius norm. We can find the minimum by taking the derivative of this loss with respect to $\hat V$, setting it to zero, and solving for $\hat V$ (see Appendix, Section A for details), giving us:

$\hat V = \arg\min_{\hat V} L(\hat V) = \hat U^T\hat\Sigma_{T,H}(\hat\Sigma_{H,H})^{-1}$ (10)

By weighting different features of the future differently, we can change the approximate compression in interesting ways. For example, as we will see in Section 4.2, scaling up future reward by a constant factor results in a value-directed compression; but, unlike previous ways to find value-directed compressions [11], we do not need to know a model of our system ahead of time.
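The compression in Eqs. 6-10 can be sketched with plain linear algebra on synthetic data; the dimensions, noise level, and retained rank below are illustrative assumptions, not values from the paper:

```python
# Sketch of the predictive compression (Eqs. 6-10) on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
k, dH, dT, rank = 500, 8, 6, 3
phi_H = rng.standard_normal((dH, k))                    # features of history
W = rng.standard_normal((dT, dH))
phi_T = W @ phi_H + 0.1 * rng.standard_normal((dT, k))  # futures ~ linear in histories

Sigma_TH = phi_T @ phi_H.T / k                          # Eq. 6
Sigma_HH = phi_H @ phi_H.T / k
L_H = np.linalg.cholesky(Sigma_HH)                      # lower-triangular factor

# Truncated SVD of the weighted covariance (Eq. 7)
U, d, Vt = np.linalg.svd(Sigma_TH @ np.linalg.inv(L_H).T)
U, d = U[:, :rank], d[:rank]

U_hat = U * np.sqrt(d)                                  # Eq. 8: U D^{1/2}
V_hat = U_hat.T @ Sigma_TH @ np.linalg.inv(Sigma_HH)    # Eq. 10: least-squares V

compressed = V_hat @ phi_H                              # rank-dim predictive features
assert compressed.shape == (rank, k)
```

Scaling selected rows of `phi_T` (e.g., a reward feature) before forming `Sigma_TH` is how the value-directed weighting mentioned above would enter this sketch.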
For another example, define $L_T$ to be the lower triangular Cholesky factor of the empirical covariance of future features $\hat\Sigma_{T,T}$. Then, if we scale features of the future by $L_T^{-T}$, the singular value decomposition will preserve the largest possible amount of mutual information between features of the future and features of history. This is equivalent to canonical correlation analysis [26,27], and the matrix $D$ becomes a diagonal matrix of canonical correlations between futures and histories.

Predictive State Temporal Difference Learning

Now that we have found a predictive compression operator $\hat V$ via Equation 10, we can replace the features of history $\phi^H_t$ with the compressed features $\hat V\phi^H_t$ in the Bellman recursion, Equation 4. Doing so results in the following approximate Bellman equation:

$\hat w^T\hat V\phi^H_{1:k} \approx R_{1:k} + \gamma\hat w^T\hat V\phi^H_{2:k+1}$ (11)

The least squares solution for $w$ is still prone to an error-in-variables problem. The variable $\phi^H$ is still correlated with the true independent variables and uncorrelated with noise, and so we can again use it as an instrumental variable to unbias the estimate of $w$. Define the additional empirical covariance matrices:

$\hat\Sigma_{R,H} = \frac{1}{k}\sum_{t=1}^k R_t(\phi^H_t)^T \qquad \hat\Sigma_{H^+,H} = \frac{1}{k}\sum_{t=1}^k \phi^H_{t+1}(\phi^H_t)^T$ (12)

Then, the corrected Bellman equation is $\hat w^T\hat V\hat\Sigma_{H,H} = \hat\Sigma_{R,H} + \gamma\hat w^T\hat V\hat\Sigma_{H^+,H}$, and solving for $\hat w$ gives us the Predictive State Temporal Difference (PSTD) learning algorithm:

$\hat w^T = \hat\Sigma_{R,H}\left(\hat V\hat\Sigma_{H,H} - \gamma\hat V\hat\Sigma_{H^+,H}\right)^\dagger$ (13)

So far we have provided some intuition for why predictive features should be better than arbitrary features for temporal difference learning. Below we will show an additional benefit: the model-free algorithm in Equation 13 is, under some circumstances, equivalent to a model-based value function approximation method which uses subspace identification to learn Predictive State Representations [20,21].

Predictive State Representations

A predictive state representation (PSR) [14] is a compact and complete description of a dynamical system.
Unlike POMDPs, which represent state as a distribution over a latent variable, PSRs represent state as a set of predictions of tests. Formally, a PSR consists of five elements $\langle A, O, Q, F, m_1\rangle$. $A$ is a finite set of possible actions, and $O$ is a finite set of possible observations. $Q$ is a core set of tests, i.e., a set whose vector of predictions $Q(h)$ is a sufficient statistic for predicting the success probabilities of all tests. $F$ is the set of functions $f_\tau$ which embody these predictions: $\tau(h) = f_\tau(Q(h))$. And $m_1 = Q(\epsilon)$ is the initial prediction vector. In this work we will restrict ourselves to linear PSRs, in which all prediction functions are linear: $f_\tau(Q(h)) = r_\tau^T Q(h)$ for some vector $r_\tau \in \mathbb{R}^{|Q|}$. Finally, a core set $Q$ for a linear PSR is said to be minimal if the tests in $Q$ are linearly independent [16,15], i.e., no one test's prediction is a linear function of the other tests' predictions. Since $Q(h)$ is a sufficient statistic for all tests, it is a state for our PSR: i.e., we can remember just $Q(h)$ instead of $h$ itself. After action $a$ and observation $o$, we can update $Q(h)$ recursively: if we write $M_{ao}$ for the matrix with rows $r_{ao\tau}^T$ for $\tau \in Q$, then we can use Bayes' Rule to show:

$Q(hao) = \dfrac{M_{ao}Q(h)}{\Pr[o \mid h, do(a)]} = \dfrac{M_{ao}Q(h)}{m_\infty^T M_{ao}Q(h)}$ (14)

where $m_\infty$ is a normalizer, defined by $m_\infty^T Q(h) = 1$ for all $h$. In addition to the above PSR parameters, we need a few additional definitions for reinforcement learning: a reward function $R(h) = \eta^T Q(h)$ mapping predictive states to immediate rewards, a discount factor $\gamma \in [0,1]$ which weights the importance of future rewards vs. present ones, and a policy $\pi(Q(h))$ mapping from predictive states to actions. (Specifying a reward in terms of the core test predictions $Q(h)$ is fully general: e.g., if we want to add a unit reward for some test $\tau \in Q$, we can instead equivalently set $\eta := \eta + r_\tau$, where $r_\tau$ is defined (as above) so that $\tau(h) = r_\tau^T Q(h)$.)
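A minimal sketch of the filtering update in Eq. 14, with made-up PSR parameters for a two-test, two-observation system under a single action:

```python
# Sketch of the PSR state update (Eq. 14); all parameters are hypothetical.
import numpy as np

Q_h = np.array([0.6, 0.4])                     # current predictive state Q(h)
M_ao = {0: np.array([[0.5, 0.1], [0.2, 0.3]]), # one matrix per (action, observation)
        1: np.array([[0.3, 0.2], [0.1, 0.4]])}
m_inf = np.array([1.0, 1.0])                   # normalizer with m_inf^T Q(h) = 1 (assumed)

def update(Q, o):
    """Bayes-rule update: Q(hao) = M_ao Q(h) / (m_inf^T M_ao Q(h))."""
    v = M_ao[o] @ Q
    return v / (m_inf @ v)

Q_next = update(Q_h, 0)
assert abs(m_inf @ Q_next - 1.0) < 1e-12       # updated state stays normalized
```

The denominator $m_\infty^T M_{ao} Q(h)$ doubles as the one-step observation probability, so the same computation serves both filtering and prediction.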
Instead of ordinary PSRs, we will work with transformed PSRs (TPSRs) [20,21]. TPSRs are a generalization of regular PSRs: a TPSR maintains a small number of sufficient statistics which are linear combinations of a (potentially very large) set of test probabilities. That is, a TPSR maintains a small number of feature predictions instead of test predictions. TPSRs have exactly the same predictive abilities as regular PSRs, but are invariant under similarity transforms: given an invertible matrix $S$, we can transform $m_1 \to Sm_1$, $m_\infty^T \to m_\infty^T S^{-1}$, and $M_{ao} \to SM_{ao}S^{-1}$ without changing the corresponding dynamical system, since the pairs $S^{-1}S$ cancel in Eq. 14. The main benefit of TPSRs over regular PSRs is that, given any core set of tests, low dimensional parameters can be found using spectral matrix decomposition and regression instead of combinatorial search. In this respect, TPSRs are closely related to the transformed representations of LDSs and HMMs found by subspace identification [28,29,27,30].

Learning Transformed PSRs

Let $Q$ be a minimal core set of tests for a dynamical system, with cardinality $n = |Q|$ equal to the linear dimension of the system. Then, let $T$ be a larger core set of tests (not necessarily minimal, and possibly even with $|T|$ countably infinite). And, let $H$ be the set of all possible histories. ($|H|$ is finite or countably infinite, depending on whether our system is finite-horizon or infinite-horizon.) As before, write $\phi^H_t$ for a vector of features of history at time $t$, and write $\phi^T_t$ for a vector of features of the future at time $t$. Since $T$ is a core set of tests, by definition we can compute any test prediction $\tau(h)$ as a linear function of $T(h)$. And, since feature predictions are linear combinations of test predictions, we can also compute any feature prediction $\phi(h)$ as a linear function of $T(h)$.
We define a matrix $\Phi^T$, with one row per feature of the future and one column per test in $T$, to embody our predictions of future features: that is, an entry of $\Phi^T$ is the weight of one of the tests in $T$ for calculating the prediction of one of the features in $\phi^T$. Below we define several covariance matrices, Equations 15(a-d), in terms of the observable quantities $\phi^T_t$, $\phi^H_t$, $a_t$, and $o_t$, and show how these matrices relate to the parameters of the underlying PSR. These relationships then lead to our learning algorithm, Eq. 17 below. First we define $\Sigma_{H,H}$, the covariance matrix of features of histories, as $E[\phi^H_t(\phi^H_t)^T \mid h_t \sim \omega]$. Given $k$ samples, we can approximate this covariance:

$[\hat\Sigma_{H,H}]_{i,j} = \frac{1}{k}\sum_{t=1}^k \phi^H_{i,t}\phi^H_{j,t} \;\Longrightarrow\; \hat\Sigma_{H,H} = \frac{1}{k}\phi^H_{1:k}(\phi^H_{1:k})^T$ (15a)

As $k \to \infty$, the empirical covariance $\hat\Sigma_{H,H}$ converges to the true covariance $\Sigma_{H,H}$ with probability 1. Next we define $\Sigma_{S,H}$, the cross covariance of states and features of histories. Writing $s_t = Q(h_t)$ for the (unobserved) state at time $t$, let

$\Sigma_{S,H} = E\left[\tfrac{1}{k}\, s_{1:k}(\phi^H_{1:k})^T \,\middle|\, h_t \sim \omega\ (\forall t)\right]$

We cannot directly estimate $\Sigma_{S,H}$ from data, but this matrix will appear as a factor in several of the matrices that we define below. Next we define $\Sigma_{T,H}$, the cross covariance matrix of the features of tests and histories: $\Sigma_{T,H} \equiv E[\phi^T_t(\phi^H_t)^T \mid h_t \sim \omega, do(\zeta)]$. The true covariance is the expectation of the sample covariance $\hat\Sigma_{T,H}$:

$[\hat\Sigma_{T,H}]_{i,j} \equiv \frac{1}{k}\sum_{t=1}^k \phi^T_{i,t}\phi^H_{j,t}$

$[\Sigma_{T,H}]_{i,j} = E\left[\frac{1}{k}\sum_{t=1}^k \phi^T_{i,t}\phi^H_{j,t} \,\middle|\, h_t \sim \omega\ (\forall t),\, do(\zeta)\ (\forall t)\right]$
$\;= E\left[\frac{1}{k}\sum_{t=1}^k E\!\left[\phi^T_{i,t} \mid h_t, do(\zeta)\right]\phi^H_{j,t} \,\middle|\, h_t \sim \omega\ (\forall t),\, do(\zeta)\ (\forall t)\right]$
$\;= E\left[\frac{1}{k}\sum_{t=1}^k \sum_{\tau\in T}\Phi^T_{i,\tau}\,\tau(h_t)\,\phi^H_{j,t} \,\middle|\, h_t \sim \omega\ (\forall t)\right]$
$\;= E\left[\frac{1}{k}\sum_{t=1}^k \sum_{\tau\in T}\Phi^T_{i,\tau}\, r_\tau^T Q(h_t)\,\phi^H_{j,t} \,\middle|\, h_t \sim \omega\ (\forall t)\right]$
$\;= \sum_{\tau\in T}\Phi^T_{i,\tau}\, r_\tau^T\, E\left[\frac{1}{k}\sum_{t=1}^k Q(h_t)\,\phi^H_{j,t} \,\middle|\, h_t \sim \omega\ (\forall t)\right]$
$\;= \sum_{\tau\in T}\Phi^T_{i,\tau}\, r_\tau^T\, E\left[\frac{1}{k}\sum_{t=1}^k s_t\,\phi^H_{j,t} \,\middle|\, h_t \sim \omega\ (\forall t)\right]$
$\Longrightarrow\; \Sigma_{T,H} = \Phi^T R\,\Sigma_{S,H}$ (15b)

where the vector $r_\tau$ is the linear function that specifies the probability of the test $\tau$ given the probabilities of tests in the core set $Q$, and the matrix $R$ has all of the $r_\tau$ vectors as rows.
The above derivation shows that, because of our assumptions about the linear dimension of the system, the matrix $\Sigma_{T,H}$ has factors $R \in \mathbb{R}^{|T|\times n}$ and $\Sigma_{S,H}$. Therefore, the rank of $\Sigma_{T,H}$ is no more than $n$, the linear dimension of the system. We can also see that, since the size of $\hat\Sigma_{T,H}$ is fixed but the number of samples $k$ is increasing, the empirical covariance $\hat\Sigma_{T,H}$ converges to the true covariance $\Sigma_{T,H}$ with probability 1. Next we define $\Sigma_{H,ao,H}$, a set of matrices, one for each action-observation pair, that represent the covariance between features of history before and after taking action $a$ and observing $o$. In the following, $I_t(o)$ is an indicator variable for whether we see observation $o$ at step $t$.

$\hat\Sigma_{H,ao,H} \equiv \frac{1}{k}\sum_{t=1}^k \phi^H_{t+1}\, I_t(o)\,(\phi^H_t)^T$
$\Sigma_{H,ao,H} \equiv E\left[\hat\Sigma_{H,ao,H} \,\middle|\, h_t\sim\omega\ (\forall t),\, do(a)\ (\forall t)\right] = E\left[\frac{1}{k}\sum_{t=1}^k \phi^H_{t+1}\, I_t(o)\,(\phi^H_t)^T \,\middle|\, h_t\sim\omega\ (\forall t),\, do(a)\ (\forall t)\right]$ (15c)

Since the dimensions of each $\hat\Sigma_{H,ao,H}$ are fixed, as $k\to\infty$ these empirical covariances converge to the true covariances $\Sigma_{H,ao,H}$ with probability 1. Finally we define $\Sigma_{R,H} \equiv E[R_t(\phi^H_t)^T \mid h_t\sim\omega]$, and approximate the covariance (in this case a vector) of reward and features of history:

$\hat\Sigma_{R,H} \equiv \frac{1}{k}\sum_{t=1}^k R_t(\phi^H_t)^T$
$\Sigma_{R,H} = E\left[\hat\Sigma_{R,H} \,\middle|\, h_t\sim\omega\ (\forall t)\right] = E\left[\frac{1}{k}\sum_{t=1}^k R_t(\phi^H_t)^T \,\middle|\, h_t\sim\omega\ (\forall t)\right] = E\left[\frac{1}{k}\sum_{t=1}^k \eta^T Q(h_t)(\phi^H_t)^T \,\middle|\, h_t\sim\omega\ (\forall t)\right] = \eta^T E\left[\frac{1}{k}\sum_{t=1}^k s_t(\phi^H_t)^T \,\middle|\, h_t\sim\omega\ (\forall t)\right] = \eta^T\Sigma_{S,H}$ (15d)

Again, as $k\to\infty$, $\hat\Sigma_{R,H}$ converges to $\Sigma_{R,H}$ with probability 1. We now wish to use the above-defined matrices to learn a TPSR from data. To do so we need to make a somewhat-restrictive assumption: we assume that our features of history are rich enough to determine the state of the system, i.e., that the regression from $\phi^H$ to $s$ is exact: $s_t = \Sigma_{S,H}\Sigma_{H,H}^{-1}\phi^H_t$. We discuss how to relax this assumption below in Section 4.3.
We also need a matrix $U$ such that $U^T\Phi^T R$ is invertible; with probability 1 a random matrix satisfies this condition, but as we will see below, it is useful to choose $U$ via SVD of a scaled version of $\hat\Sigma_{T,H}$, as described in Sec. 3.2. Using our assumptions we can show a useful identity for $\Sigma_{H,ao,H}$:

$\Sigma_{S,H}\Sigma_{H,H}^{-1}\Sigma_{H,ao,H} = E\left[\frac{1}{k}\sum_{t=1}^k \Sigma_{S,H}\Sigma_{H,H}^{-1}\phi^H_{t+1}\, I_t(o)\,(\phi^H_t)^T \,\middle|\, h_t\sim\omega\ (\forall t),\, do(a)\ (\forall t)\right]$
$\;= E\left[\frac{1}{k}\sum_{t=1}^k s_{t+1}\, I_t(o)\,(\phi^H_t)^T \,\middle|\, h_t\sim\omega\ (\forall t),\, do(a)\ (\forall t)\right]$
$\;= E\left[\frac{1}{k}\sum_{t=1}^k M_{ao}\, s_t\,(\phi^H_t)^T \,\middle|\, h_t\sim\omega\ (\forall t)\right]$
$\;= M_{ao}\Sigma_{S,H}$ (16)

This identity is at the heart of our learning algorithm: it shows that $\Sigma_{H,ao,H}$ contains a hidden copy of $M_{ao}$, the main TPSR parameter that we need to learn. We would like to recover $M_{ao}$ via Eq. 16 as $M_{ao} = \Sigma_{S,H}\Sigma_{H,H}^{-1}\Sigma_{H,ao,H}\Sigma_{S,H}^\dagger$; but of course we do not know $\Sigma_{S,H}$. Fortunately, though, it turns out that we can use $U^T\Sigma_{T,H}$ as a stand-in, as described below, since this matrix differs from $\Sigma_{S,H}$ only by an invertible transform (Eq. 15b). We now show how to recover a TPSR from the matrices $\Sigma_{T,H}$, $\Sigma_{H,H}$, $\Sigma_{R,H}$, $\Sigma_{H,ao,H}$, and $U$. Since a TPSR's predictions are invariant to a similarity transform of its parameters, our algorithm only recovers the TPSR parameters to within a similarity transform.

$b_t \equiv U^T\Sigma_{T,H}(\Sigma_{H,H})^{-1}\phi^H_t = U^T\Phi^T R\,\Sigma_{S,H}(\Sigma_{H,H})^{-1}\phi^H_t = (U^T\Phi^T R)\,s_t$ (17a)

$B_{ao} \equiv U^T\Sigma_{T,H}(\Sigma_{H,H})^{-1}\Sigma_{H,ao,H}(U^T\Sigma_{T,H})^\dagger = U^T\Phi^T R\,\Sigma_{S,H}(\Sigma_{H,H})^{-1}\Sigma_{H,ao,H}(U^T\Sigma_{T,H})^\dagger$
$\;= (U^T\Phi^T R)\,M_{ao}\,\Sigma_{S,H}(U^T\Sigma_{T,H})^\dagger = (U^T\Phi^T R)\,M_{ao}\,(U^T\Phi^T R)^{-1}(U^T\Phi^T R)\,\Sigma_{S,H}(U^T\Sigma_{T,H})^\dagger$
$\;= (U^T\Phi^T R)\,M_{ao}\,(U^T\Phi^T R)^{-1}$ (17b)

$b_\eta^T \equiv \Sigma_{R,H}(U^T\Sigma_{T,H})^\dagger = \eta^T\Sigma_{S,H}(U^T\Sigma_{T,H})^\dagger = \eta^T(U^T\Phi^T R)^{-1}(U^T\Phi^T R)\,\Sigma_{S,H}(U^T\Sigma_{T,H})^\dagger = \eta^T(U^T\Phi^T R)^{-1}$ (17c)

Our PSR learning algorithm is simple: replace each true covariance matrix in Eq. 17 by its empirical estimate.
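The recovery equations (17a-c) amount to a few matrix products and pseudoinverses. The sketch below wires them up with random placeholder covariances of the right shapes (not matrices learned from a real system), just to make the computation concrete:

```python
# Sketch of the TPSR recovery equations (17a-c); all matrices below are random
# placeholders with plausible shapes, standing in for empirical covariances.
import numpy as np

rng = np.random.default_rng(1)
dT, dH, n = 10, 8, 3                       # feature dims and linear dimension
U = rng.standard_normal((dT, n))           # e.g., left singular vectors of Sigma_TH
Sigma_TH = rng.standard_normal((dT, dH))
Sigma_HH = np.eye(dH)
Sigma_aoH = {('a', 'o'): rng.standard_normal((dH, dH))}
Sigma_RH = rng.standard_normal((1, dH))
phi_H_t = rng.standard_normal(dH)

pinv = np.linalg.pinv
state = U.T @ Sigma_TH @ pinv(Sigma_HH) @ phi_H_t                    # Eq. 17a
B_ao = U.T @ Sigma_TH @ pinv(Sigma_HH) @ Sigma_aoH[('a', 'o')] \
       @ pinv(U.T @ Sigma_TH)                                        # Eq. 17b
b_eta = Sigma_RH @ pinv(U.T @ Sigma_TH)                              # Eq. 17c

assert state.shape == (n,) and B_ao.shape == (n, n) and b_eta.shape == (1, n)
```

With real data, `Sigma_TH`, `Sigma_HH`, `Sigma_aoH`, and `Sigma_RH` would be the empirical estimates from Eq. 15 and `U` the SVD-based choice from Sec. 3.2.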
Since the empirical estimates converge to their true values with probability 1 as the sample size increases, our learning algorithm is clearly statistically consistent.

Predictive State Temporal Difference Learning (Revisited)

Finally, we are ready to show that the model-free PSTD learning algorithm introduced in Section 3.3 is equivalent to a model-based algorithm built around PSR learning. For a fixed policy $\pi$, a TPSR's value function is a linear function of state, $J^\pi(b) = w^T b$, and is the solution of the TPSR Bellman equation [31]: for all $b$, $w^T b = b_\eta^T b + \gamma\sum_{o\in O} w^T B_{\pi o}\, b$, or equivalently, $w^T = b_\eta^T + \gamma\sum_{o\in O} w^T B_{\pi o}$. If we substitute in our learned PSR parameters from Equations 17(a-c), we get

$\hat w^T = \hat\Sigma_{R,H}(U^T\hat\Sigma_{T,H})^\dagger + \gamma\sum_{o\in O}\hat w^T U^T\hat\Sigma_{T,H}(\hat\Sigma_{H,H})^{-1}\hat\Sigma_{H,\pi o,H}(U^T\hat\Sigma_{T,H})^\dagger$
$\hat w^T U^T\hat\Sigma_{T,H} = \hat\Sigma_{R,H} + \gamma\hat w^T U^T\hat\Sigma_{T,H}(\hat\Sigma_{H,H})^{-1}\hat\Sigma_{H^+,H}$

since, by comparing Eqs. 15c and 12, we can see that $\sum_{o\in O}\hat\Sigma_{H,\pi o,H} = \hat\Sigma_{H^+,H}$. Now, suppose that we define $\hat U$ and $\hat V$ by Eqs. 8 and 10, and let $U = \hat U$ as suggested above in Sec. 4.1. Then $\hat U^T\hat\Sigma_{T,H} = \hat V\hat\Sigma_{H,H}$, and

$\hat w^T\hat V\hat\Sigma_{H,H} = \hat\Sigma_{R,H} + \gamma\hat w^T\hat V\hat\Sigma_{H^+,H} \;\Longrightarrow\; \hat w^T = \hat\Sigma_{R,H}\left(\hat V\hat\Sigma_{H,H} - \gamma\hat V\hat\Sigma_{H^+,H}\right)^\dagger$ (18)

Eq. 18 is exactly the PSTD algorithm (Eq. 13). So, we have shown that, if we learn a PSR by the subspace identification algorithm of Sec. 4.1 and then compute its value function via the Bellman equation, we get the exact same answer as if we had directly learned the value function via the model-free PSTD method. In addition to adding to our understanding of both methods, an important corollary of this result is that PSTD is a statistically consistent algorithm for PSR value function approximation: to our knowledge, the first such result for a TD method. PSTD learning is related to value-directed compression of POMDPs [11]. If we learn a TPSR from data generated by a POMDP, then the TPSR state is exactly a linear compression of the POMDP state [15,20].
The compression can be exact or approximate, depending on whether we include enough features of the future and whether we keep all or only some nonzero singular values in our bottleneck. If we include only reward as a feature of the future, we get a value-directed compression in the sense of Poupart and Boutilier [11]. If desired, we can tune the degree of value-directedness of our compression by scaling the relative variance of our features: the higher the variance of the reward feature compared to other features, the more value-directed the resulting compression will be. Our work significantly diverges from previous work on POMDP compression in one important respect: prior work assumes access to the true POMDP model, while we make no such assumption, and learn a compressed representation directly from data.

Insights from Subspace Identification

The close connection to subspace identification for PSRs provides additional insight into the temporal difference learning procedure. In Equation 17 we made the assumption that the features of history are rich enough to completely determine the state of the dynamical system. In fact, using theory developed in [21], it is possible to relax this assumption and instead assume that state is merely correlated with features of history. In this case, we need to introduce a new set of covariance matrices $\Sigma_{T,ao,H} \equiv E[\phi^T_t\, I_t(o)\,(\phi^H_t)^T \mid h_t\sim\omega,\, do(a,\zeta)]$, one for each action-observation pair, that represent the covariance between features of history before, and features of tests after, taking action $a$ and observing $o$. We can then estimate the TPSR transition matrices as $\hat B_{ao} = \hat U^T\hat\Sigma_{T,ao,H}(\hat U^T\hat\Sigma_{T,H})^\dagger$ (see [21] for proof details). The value function parameter $w$ can be estimated as

$\hat w^T = \hat\Sigma_{R,H}(\hat U^T\hat\Sigma_{T,H})^\dagger\left(I - \sum_{o\in O}\hat U^T\hat\Sigma_{T,ao,H}(\hat U^T\hat\Sigma_{T,H})^\dagger\right)^\dagger = \hat\Sigma_{R,H}\left(\hat U^T\hat\Sigma_{T,H} - \sum_{o\in O}\hat U^T\hat\Sigma_{T,ao,H}\right)^\dagger$

(the proof is similar to Equation 18).
Since we no longer assume that state is completely specified by features of history, we can no longer apply the learned value function to the state estimate $\hat U^T\hat\Sigma_{T,H}(\hat\Sigma_{H,H})^{-1}\phi^H_t$ at each time $t$. Instead we need to learn a full PSR model and filter with the model to estimate state. Details on this procedure can be found in [21].

Experimental Results

We designed several experiments to evaluate the properties of the PSTD learning algorithm. In the first set of experiments we look at the comparative merits of PSTD with respect to LSTD and LARS-TD when applied to the problem of estimating the value function of a reduced-rank POMDP. In the second set of experiments, we apply PSTD to a benchmark optimal stopping problem (pricing a fictitious financial derivative), and show that PSTD outperforms competing approaches.

Estimating the Value Function of a RR-POMDP

We evaluate the PSTD learning algorithm on a synthetic example derived from [32]. The problem is to find the value function of a policy in a partially observable Markov decision process (POMDP). The POMDP has 4 latent states, but the policy's transition matrix is low rank: the resulting belief distributions can be represented in a 3-dimensional subspace of the original belief simplex. A reward of 1 is given in the first and third latent states and a reward of 0 in the other two (see Appendix, Section B). The system emits 2 possible observations, conflating information about the latent states. We perform 3 experiments, comparing the performance of LSTD, LARS-TD, PSTD, and PSTD as formulated in Section 4.3 (which we call PSTD2) when different sets of features are used. In each case we compare the value function estimated by each algorithm to the true value function computed by $J^\pi = R(I - \gamma T^\pi)^{-1}$. In the first experiment we execute the policy $\pi$ for 1000 time steps.
We split the data into overlapping histories and tests of length 5, and sample 10 of these histories and tests to serve as centers for Gaussian radial basis functions. We then evaluate each basis function at every remaining sample. Then, using these features, we learned the value function using LSTD, LARS-TD, PSTD with linear dimension 3, and PSTD2 with linear dimension 3 (Figure 1(A)). In this experiment, PSTD and PSTD2 both had lower mean squared error than the other approaches. For the second experiment, we added 490 random features to the 10 good features and then attempted to learn the value function with each algorithm (Figure 1(B)). In this case, LSTD and PSTD both had difficulty fitting the value function due to the large number of irrelevant features in both tests and histories and the relatively small amount of training data. LARS-TD, designed for precisely this scenario, was able to select the 10 relevant features and estimate the value function better by a substantial margin. Surprisingly, in this experiment PSTD2 not only outperformed PSTD but bested even LARS-TD. For the third experiment, we increased the number of sampled features from 10 to 500. In this case, each feature was somewhat relevant, but the number of features was large relative to the amount of training data. This situation occurs frequently in practice: it is often easy to find a large number of features that are each at least somewhat related to state. PSTD and PSTD2 both outperform LARS-TD, and each of these subspace and subset selection methods outperforms LSTD by a large margin by efficiently estimating the value function (Figure 1(C)).

Pricing a High-dimensional Financial Derivative

Derivatives are financial contracts with payoffs linked to the future prices of basic assets such as stocks, bonds, and commodities.
In some derivatives the contract holder has no choices, but in more complex cases the contract owner must make decisions; e.g., with early exercise the contract holder can decide to terminate the contract at any time and receive payments based on prevailing market conditions. In these cases, the value of the derivative depends on how the contract holder acts. Deciding when to exercise is therefore an optimal stopping problem: at each point in time, the contract holder must decide whether to continue holding the contract or to exercise. Such stopping problems provide an ideal testbed for policy evaluation methods, since we can easily collect a single data set which is sufficient to evaluate any policy: we just choose the "continue" action forever. (We can then evaluate the "stop" action easily in any of the resulting states, since the immediate reward is given by the rules of the contract, and the next state is the terminal state by definition.) We consider the financial derivative introduced by Tsitsiklis and Van Roy [33]. The derivative generates payoffs that are contingent on the prices of a single stock. At the end of a given day, the holder may opt to exercise. At exercise the owner receives a payoff equal to the current price of the stock divided by its price 100 days beforehand. We can think of this derivative as a "psychic call": the owner gets to decide whether s/he would like to have bought an ordinary 100-day European call option, at the then-current market price, 100 days ago. In our simulation (and unknown to the investor), the underlying stock price follows a geometric Brownian motion with volatility $\sigma = 0.02$ and continuously compounded short-term growth rate $\rho = 0.0004$. Assuming stock prices fluctuate only on days when the market is open, these parameters correspond to an annual growth rate of approximately 10%.
In more detail, if $w_t$ is a standard Brownian motion, then the stock price $p_t$ evolves as $dp_t = \rho p_t\,dt + \sigma p_t\,dw_t$, and we can summarize the relevant state at the end of each day as a vector $x_t \in \mathbb{R}^{100}$, with $x_t = \left(\frac{p_{t-99}}{p_{t-100}}, \frac{p_{t-98}}{p_{t-100}}, \ldots, \frac{p_t}{p_{t-100}}\right)^T$. The $i$th dimension $x_t(i)$ represents the amount a \$1 investment in the stock at time $t-100$ would grow to at time $t-100+i$. This process is Markov and ergodic [33,34]: $x_t$ and $x_{t+100}$ are independent and identically distributed. The immediate reward for exercising the option is $G(x) = x(100)$, and the immediate reward for continuing to hold the option is 0. The discount factor $\gamma = e^{-\rho}$ is determined by the growth rate; this corresponds to assuming that the risk-free interest rate is equal to the stock's growth rate, meaning that the investor gains nothing in expectation by holding the stock itself. The value of the derivative, if the current state is $x$, is given by $V^*(x) = \sup_t E[\gamma^t G(x_t) \mid x_0 = x]$. Our goal is to calculate an approximate value function $V(x) = w^T\phi^H(x)$, and then use this value function to generate a stopping time $\min\{t \mid G(x_t) \ge V(x_t)\}$. To do so, we sample a sequence of 1,000,000 states $x_t \in \mathbb{R}^{100}$ and calculate features $\phi^H$ of each state. We then perform policy iteration on this sample, alternately estimating the value function under a given policy and then using this value function to define a new greedy policy "stop if $G(x_t) \ge w^T\phi^H(x_t)$." Within the above strategy, we have two main choices: which features to use, and how to estimate the value function in terms of these features. For value function estimation, we used LSTD, LARS-TD, or PSTD. In each case we re-used our 1,000,000-state sample trajectory for all iterations: we start at the beginning and follow the trajectory as long as the policy chooses the "continue" action, with reward 0 at each step.
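A minimal simulation sketch of the price process and state vector described above, using the stated parameters; the Euler discretization and horizon are assumptions of this sketch:

```python
# Sketch: simulate the stock price and build the 100-dimensional state vector.
# Parameters sigma and rho come from the text; the discretization is assumed.
import numpy as np

rng = np.random.default_rng(42)
sigma, rho, T = 0.02, 0.0004, 300
p = np.empty(T)
p[0] = 1.0
for t in range(1, T):                       # Euler step of dp = rho*p*dt + sigma*p*dw
    p[t] = p[t - 1] * (1.0 + rho + sigma * rng.standard_normal())

def state(t):
    """x_t(i) = p_{t-100+i} / p_{t-100}, i = 1..100 (requires t >= 100)."""
    return p[t - 99 : t + 1] / p[t - 100]

x = state(250)
G = x[-1]                                   # immediate reward for exercising: G(x) = x(100)
assert x.shape == (100,) and abs(x[-1] - p[250] / p[150]) < 1e-12
```

Collecting `state(t)` along a long trajectory, and features of it, yields the sample set on which the policy iteration above operates.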
When the policy executes the "stop" action, the reward is $G(x)$ and the next state's features are all 0; we then restart the policy 100 steps in the future, after the process has fully mixed. For feature selection, we are fortunate: previous researchers have hand-selected a "good" set of 16 features for this data set through repeated trial and error (see Appendix, Section B and [33,34]). We greatly expand this set of features, then use PSTD to synthesize a small set of high-quality combined features. Specifically, we add the entire 100-step state vector, the squares of the components of the state vector, and several additional nonlinear features, increasing the total number of features from 16 to 220. We use histories of length 1, tests of length 5, and (for comparison's sake) we choose a linear dimension of 16. Tests (but not histories) were value-directed by reducing the variance of all features except reward by a factor of 100. Figure 1D shows results. We compared PSTD (reducing 220 to 16 features) to LSTD with either the 16 hand-selected features or the full 220 features, as well as to LARS-TD (220 features) and to a simple thresholding strategy [33]. In each case we evaluated the final policy on 10,000 new random trajectories. PSTD outperformed each of its competitors, improving on the next best approach, LARS-TD, by 1.75 percentage points. In fact, PSTD performs better than the best previously reported approach [33,34] by 1.24 percentage points. These improvements correspond to appreciable fractions of the risk-free interest rate (which is about 4 percentage points over the 100-day window of the contract), and therefore to significant arbitrage opportunities: an investor who doesn't know the best strategy will consistently undervalue the security, allowing an informed investor to buy it for below its expected value.

Conclusion

In this paper, we attack the feature selection problem for temporal difference learning.
Although well-known temporal difference algorithms such as LSTD can provide asymptotically unbiased estimates of value function parameters in linear architectures, they can have trouble in finite samples: if the number of features is large relative to the number of training samples, then they can have high variance in their value function estimates. For this reason, in real-world problems, a substantial amount of time is spent selecting a small set of features, often by trial and error [33,34]. To remedy this problem, we present the PSTD algorithm, a new approach to feature selection for TD methods, which demonstrates how insights from system identification can benefit reinforcement learning. PSTD automatically chooses a small set of features that are relevant for prediction and value function approximation. It approaches feature selection from a bottleneck perspective, by finding a small set of features that preserves only predictive information. Because of the focus on predictive information, the PSTD approach is closely connected to PSRs: under appropriate assumptions, PSTD's compressed set of features is asymptotically equivalent to TPSR state, and PSTD is a consistent estimator of the PSR value function. We demonstrate the merits of PSTD compared to two popular alternative algorithms, LARS-TD and LSTD, on a synthetic example, and argue that PSTD is most effective when approximating a value function from a large number of features, each of which contains at least a little information about state. Finally, we apply PSTD to a difficult optimal stopping problem, and demonstrate the practical utility of the algorithm by outperforming several alternative approaches and topping the best reported previous results.

Appendix B: Basis Functions

Some of the hand-picked basis functions summarize the maximal returns, and how long ago they occurred. The next set of basis functions summarize the characteristics of the basic shape of the 100-day sample path. They are the inner products of the path with the first four Legendre polynomial degrees.
Let $j = i/50 - 1$.

$\phi_1(x) = 1$
$\phi_2(x) = G(x)$
$\phi_3(x) = \min_{1\le i\le 100} x(i)$
$\phi_7(x) = \frac{1}{100}\sum_{i=1}^{100} x(i)\,\sqrt{\tfrac{1}{2}}$
$\phi_8(x) = \frac{1}{100}\sum_{i=1}^{100} x(i)\,\sqrt{\tfrac{3}{2}}\,j$
$\phi_9(x) = \frac{1}{100}\sum_{i=1}^{100} x(i)\,\sqrt{\tfrac{5}{2}}\,\frac{3j^2-1}{2}$
$\phi_{10}(x) = \frac{1}{100}\sum_{i=1}^{100} x(i)\,\sqrt{\tfrac{7}{2}}\,\frac{5j^3-3j}{2}$

Nonlinear combinations of basis functions:

$\phi_{11}(x) = \phi_2(x)\phi_3(x)$
$\phi_{12}(x) = \phi_2(x)\phi_4(x)$
$\phi_{13}(x) = \phi_2(x)\phi_7(x)$
$\phi_{14}(x) = \phi_2(x)\phi_8(x)$
$\phi_{15}(x) = \phi_2(x)\phi_9(x)$
$\phi_{16}(x) = \phi_2(x)\phi_{10}(x)$

In order to improve our results, we added a large number of additional basis functions to these hand-picked 16. PSTD will compress these features for us, so we can use as many additional basis functions as we would like. First we defined 4 additional basis functions consisting of the inner products of the 100-day sample path with the 5th and 6th Legendre polynomials, and we added the corresponding nonlinear combinations of basis functions:

$\phi_{17}(x) = \frac{1}{100}\sum_{i=1}^{100} x(i)\,\sqrt{\tfrac{9}{2}}\,\frac{35j^4 - 30j^2 + 3}{8}$
$\phi_{18}(x) = \frac{1}{100}\sum_{i=1}^{100} x(i)\,\sqrt{\tfrac{11}{2}}\,\frac{63j^5 - 70j^3 + 15j}{8}$
$\phi_{19}(x) = \phi_2(x)\phi_{17}(x)$
$\phi_{20}(x) = \phi_2(x)\phi_{18}(x)$

Finally we added the entire sample path and the squared sample path:

$\phi_{21:120} = x_{1:100}$
$\phi_{121:220} = x^2_{1:100}$
We propose a new approach to value function approximation which combines linear temporal difference reinforcement learning with subspace identification. In practical applications, reinforcement learning (RL) is complicated by the fact that state is either high-dimensional or partially observable. Therefore, RL methods are designed to work with features of state rather than state itself, and the success or failure of learning is often determined by the suitability of the selected features. By comparison, subspace identification (SSID) methods are designed to select a feature set which preserves as much information as possible about state. In this paper we connect the two approaches, looking at the problem of reinforcement learning with a large set of features, each of which may only be marginally useful for value function approximation. We introduce a new algorithm for this situation, called Predictive State Temporal Difference (PSTD) learning. As in SSID for predictive state representations, PSTD finds a linear compression operator that projects a large set of features down to a small set that preserves the maximum amount of predictive information. As in RL, PSTD then uses a Bellman recursion to estimate a value function. We discuss the connection between PSTD and prior approaches in RL and SSID. We prove that PSTD is statistically consistent, perform several experiments that illustrate its properties, and demonstrate its potential on a difficult optimal stopping problem.
Second, we look at the problem of value function estimation from a model-based perspective. Instead of learning a linear transition model from features, as in @cite_33 , we use subspace identification @cite_20 @cite_27 to learn a PSR from our samples. Then we compute a value function via the Bellman equations for our learned PSR. This new approach has a substantial benefit: while the linear feature-to-feature transition model of @cite_33 does not seem to have any common uses outside that paper, PSRs have been proposed numerous times on their own merits (including being invented independently at least three times), and are a strict generalization of POMDPs.
{ "abstract": [ "", "We show that linear value-function approximation is equivalent to a form of linear model approximation. We then derive a relationship between the model-approximation error and the Bellman error, and show how this relationship can guide feature selection for model improvement and or value-function improvement. We also show how these results give insight into the behavior of existing feature-selection algorithms.", "Predictive state representations (PSRs) have recently been proposed as an alternative to partially observable Markov decision processes (POMDPs) for representing the state of a dynamical system (, 2001). We present a learning algorithm that learns a PSR from observational data. Our algorithm produces a variant of PSRs called transformed predictive state representations (TPSRs). We provide an efficient principal-components-based algorithm for learning a TPSR, and show that TPSRs can perform well in comparison to Hidden Markov Models learned with Baum-Welch in a real world robot tracking task for low dimensional representations and long prediction horizons." ], "cite_N": [ "@cite_27", "@cite_33", "@cite_20" ], "mid": [ "", "2123979492", "2154384352" ] }
Predictive State Temporal Difference Learning
Value Function Approximation

We start from a discrete time dynamical system with a set of states S, a set of actions A, a distribution over initial states π_0, a state transition function T, a reward function R, and a discount factor γ ∈ [0, 1]. We seek a policy π, a mapping from states to actions. The notion of a value function is of central importance in reinforcement learning: for a given policy π, the value of state s is defined as the expected discounted sum of rewards obtained when starting in state s and following policy π, J^π(s) = E[Σ_{t=0}^∞ γ^t R(s_t) | s_0 = s, π]. It is well known that the value function must obey the Bellman equation

J^π(s) = R(s) + γ Σ_{s'} J^π(s') Pr[s' | s, π(s)]   (1)

If we know the transition function T, and if the set of states S is sufficiently small, we can use (1) directly to solve for the value function J^π. We can then execute the greedy policy for J^π, setting the action at each state to maximize the right-hand side of (1). However, we consider instead the harder problem of estimating the value function when s is a partially observable latent variable, and when the transition function T is unknown. In this situation, we receive information about s through observations from a finite set O. Our state (i.e., the information which we can use to make decisions) is not an element of S but a history (an ordered sequence of action-observation pairs h = a^h_1 o^h_1 … a^h_t o^h_t that have been executed and observed prior to time t). If we knew the transition model T, we could use h to infer a belief distribution over S, and use that belief (or a compression of that belief) as a state instead; below, we will discuss how to learn a compressed belief state. Because of partial observability, we can only hope to predict reward conditioned on history, R(h) = E[R(s) | h], and we must choose actions as a function of history, π(h) instead of π(s). Let H be the set of all possible histories.
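When S is small and T is known, the Bellman equation (1) is a linear system in J^π and can be solved directly. A minimal NumPy sketch on a made-up 3-state chain (all numbers illustrative, not from the paper):

```python
import numpy as np

# Toy 3-state chain under a fixed policy pi (made-up numbers).
T_pi = np.array([[0.9, 0.1, 0.0],   # T_pi[s, s'] = Pr[s' | s, pi(s)]
                 [0.0, 0.8, 0.2],
                 [0.1, 0.0, 0.9]])
R = np.array([0.0, 0.0, 1.0])
gamma = 0.9

# Bellman equation (1) in matrix form: J = R + gamma * T_pi @ J,
# i.e. (I - gamma * T_pi) J = R.
J = np.linalg.solve(np.eye(3) - gamma * T_pi, R)
```

The rest of the paper concerns the harder setting where neither s nor T is available and this direct solve is impossible.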
H is often very large or infinite, so instead of finding a value separately for each history, we focus on value functions that are linear in features of histories:

J^π(h) = w^T φ^H(h)   (2)

Here w ∈ R^j is a parameter vector and φ^H(h) ∈ R^j is a feature vector for a history h. So, we can rewrite the Bellman equation as

w^T φ^H(h) = R(h) + γ Σ_{o∈O} w^T φ^H(hπo) Pr[hπo | hπ]   (3)

where hπo is history h extended by taking action π(h) and observing o.

Least Squares Temporal Difference Learning

In general we don't know the transition probabilities Pr[hπo | h], but we do have samples of state features φ^H_t = φ^H(h_t), next-state features φ^H_{t+1} = φ^H(h_{t+1}), and immediate rewards R_t = R(h_t). We can thus estimate the Bellman equation

w^T φ^H_{1:k} ≈ R_{1:k} + γ w^T φ^H_{2:k+1}   (4)

(Here we have used the notation φ^H_{1:k} to mean the matrix whose columns are φ^H_t for t = 1 … k.) We can immediately attempt to estimate the parameter w by solving the linear system in the least squares sense: ŵ^T = R_{1:k} (φ^H_{1:k} − γ φ^H_{2:k+1})†, where † indicates the Moore-Penrose pseudoinverse. However, this solution is biased [3], since the independent variables φ^H_t − γ φ^H_{t+1} are noisy samples of the expected difference E[φ^H(h) − γ Σ_{o∈O} φ^H(hπo) Pr[hπo | h]]. In other words, estimating the value function parameters w is an error-in-variables problem. The least squares temporal difference (LSTD) algorithm provides a consistent estimate of the independent variables by right multiplying the approximate Bellman equation (Equation 4) by (φ^H_t)^T. The quantity (φ^H_t)^T can be viewed as an instrumental variable [3], i.e., a measurement that is correlated with the true independent variables, but uncorrelated with the noise in our estimates of these variables.
The value function parameter w may then be estimated as follows:

ŵ^T = (1/k Σ_{t=1}^k R_t (φ^H_t)^T) (1/k Σ_{t=1}^k φ^H_t (φ^H_t)^T − γ/k Σ_{t=1}^k φ^H_{t+1} (φ^H_t)^T)^{−1}   (5)

As the amount of data k increases, the empirical covariance matrices in (5) converge to their true values with probability 1. Therefore, as long as the matrix being inverted is nonsingular, our estimate of the inverse is also consistent, and our estimate of w therefore converges to the true parameters with probability 1.

Predictive Features

Although LSTD provides a consistent estimate of the value function parameters w, in practice, the potential size of the feature vectors can be a problem. If the number of features is large relative to the number of training samples, then the estimation of w is prone to overfitting. This problem can be alleviated by choosing some small set of features that only contain information that is relevant for value function approximation. However, with the exception of LARS-TD [18], there has been little work on the problem of how to select features automatically for value function approximation when the system model is unknown; and of course, manual feature selection depends on not-always-available expert guidance. We approach the problem of finding a good set of features from a bottleneck perspective. That is, given some signal from history, in this case a large set of features, we would like to find a compression that preserves only relevant information for predicting the value function J^π. As we will see in Section 4, this improvement is directly related to spectral identification of PSRs.

Tests and Features of the Future

We first need to define precisely the task of predicting the future. Just as a history is an ordered sequence of action-observation pairs executed prior to time t, we define a test of length i to be an ordered sequence of action-observation pairs τ = a_1 o_1 … a_i o_i that can be executed and observed after time t [14].
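The LSTD estimate in Equation 5 is just two empirical moment matrices and a solve. A minimal NumPy sketch on random data (the function name `lstd` and all dimensions are ours):

```python
import numpy as np

def lstd(phi, phi_next, rewards, gamma):
    """LSTD estimate of w (Equation 5): phi_t is its own instrumental
    variable, so we invert E[phi_t phi_t^T] - gamma E[phi_{t+1} phi_t^T]."""
    k = len(rewards)
    b = rewards @ phi / k                          # (1/k) sum R_t phi_t^T
    A = (phi.T @ phi - gamma * phi_next.T @ phi) / k
    return np.linalg.solve(A.T, b)                 # w with w^T A = b

rng = np.random.default_rng(0)
phi = rng.normal(size=(500, 4))        # rows = samples of phi^H_t
phi_next = rng.normal(size=(500, 4))   # rows = samples of phi^H_{t+1}
rewards = rng.normal(size=500)
w = lstd(phi, phi_next, rewards, gamma=0.9)
```

The solve fails only if the instrumented matrix is singular, matching the nonsingularity condition in the text.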
The prediction for a test τ after a history h, written τ(h), is the probability that we will see the test observations τ^O = o_1 … o_i, given that we intervene [22] to execute the test actions τ^A = a_1 … a_i:

τ(h) = Pr[τ^O | h, do(τ^A)]

If Q = {τ_1, …, τ_n} is a set of tests, we write Q(h) = (τ_1(h), …, τ_n(h))^T for the corresponding vector of test predictions. We can generalize the notion of a test to a feature of the future, a linear combination of several tests sharing a common action sequence. For example, if τ_1 and τ_2 are two tests with τ^A_1 = τ^A_2 ≡ τ^A, then we can make a feature φ = 3τ_1 + τ_2. This feature is executed if we intervene to do(τ^A), and if it is executed its value is 3I(τ^O_1) + I(τ^O_2), where I(o_1 … o_i) stands for an indicator random variable, taking the value 0 or 1 depending on whether we observe the sequence of observations o_1 … o_i. The prediction of φ given h is φ(h) ≡ E(φ | h, do(τ^A)) = 3τ_1(h) + τ_2(h). While linear combinations of tests may seem restrictive, our definition is actually very expressive: we can represent an arbitrary function of a finite sequence of future observations. To do so, we take a collection of tests, each of which picks out one possible realization of the sequence, and weight each test by the value of the function conditioned on that realization. For example, if our observations are integers 1, 2, …, 10, we can write the square of the next observation as Σ_{o=1}^{10} o² I(o), and the mean of the next two observations as Σ_{o=1}^{10} Σ_{o'=1}^{10} ½(o + o') I(o, o'). The restriction to a common action sequence is necessary: without this restriction, all the tests making up a feature could never be executed at once. Once we move to feature predictions, however, it makes sense to lift this restriction: we will say that any linear combination of feature predictions is also a feature prediction, even if the features involved have different action sequences.
Action sequences raise some problems with obtaining empirical estimates of means and covariances of features of the future: e.g., it is not always possible to get a sample of a particular feature's value on every time step, and the feature we choose to sample at one step can restrict which features we can sample at subsequent steps. In order to carry out our derivations without running into these problems repeatedly, we will assume for the rest of the paper that we can reset our system after every sample, and get a new history independently distributed as h_t ∼ ω for some distribution ω. (With some additional bookkeeping we could remove this assumption [23], but this bookkeeping would unnecessarily complicate our derivations.) Furthermore, we will introduce some new language, again to keep derivations simple: if we have a vector of features of the future φ^T, we will pretend that we can get a sample φ^T_t in which we evaluate all of our features starting from a single history h_t, even if the different elements of φ^T require us to execute different action sequences. When our algorithms call for such a sample, we will instead use the following trick to get a random vector with the correct expectation (and somewhat higher variance, which doesn't matter for any of our arguments): write τ^A_1, τ^A_2, … for the different action sequences, and let ζ_1, ζ_2, … > 0 be a probability distribution over these sequences. We pick a single action sequence τ^A_a according to ζ, and execute τ^A_a to get a sample φ̂^T of the features which depend on τ^A_a. We then enter φ̂^T/ζ_a into the corresponding coordinates of φ^T_t, and fill in zeros everywhere else. It is easy to see that the expected value of our sample vector is then correct: the probability of selection ζ_a and the weighting factor 1/ζ_a cancel out. We will write E(φ^T | h_t, do(ζ)) to stand for this expectation.
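The reweighting trick above (sample one action sequence with probability ζ_a, scale its features by 1/ζ_a, zero-fill the rest) can be checked by simulation. A small NumPy sketch with two made-up mutually exclusive features (all numbers are ours, chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Two mutually exclusive action sequences; made-up true feature means:
mu = np.array([0.3, 0.7])
zeta = np.array([0.25, 0.75])    # probability of executing each sequence

k = 200_000
samples = np.zeros((k, 2))
for t in range(k):
    a = rng.choice(2, p=zeta)            # pick one action sequence
    value = rng.binomial(1, mu[a])       # observe only that feature
    samples[t, a] = value / zeta[a]      # reweight; other coord stays 0

# The reweighted sample mean recovers mu: zeta_a and 1/zeta_a cancel.
est = samples.mean(axis=0)
```

As the text notes, the estimator is unbiased but has higher variance than sampling every feature would.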
None of the above tricks are actually necessary in our experiments with stopping problems: we simply execute the "continue" action on every step, and use only sequences of "continue" actions in every test and feature.

Finding Predictive Features Through a Bottleneck

In order to find a predictive feature compression, we first need to determine what we would like to predict. Since we are interested in value function approximation, the most relevant prediction is the value function itself; so, we could simply try to predict total future discounted reward given a history. Unfortunately, total discounted reward has high variance, so unless we have a lot of data, learning will be difficult. We can reduce variance by including other prediction tasks as well. For example, predicting individual rewards at future time steps, while not strictly necessary to predict total discounted reward, seems highly relevant, and gives us much more immediate feedback. Similarly, future observations hopefully contain information about future reward, so trying to predict observations can help us predict reward better. Finally, in any specific RL application, we may be able to add problem-specific prediction tasks that will help focus our attention on relevant information: for example, in a path-planning problem, we might try to predict which of several goal states we will reach (in addition to how much it will cost to get there). We can represent all of these prediction tasks as features of the future: e.g., to predict which goal we will reach, we add a distinct observation at each goal state, or to predict individual rewards, we add individual rewards as observations. We will write φ^T_t for the vector of all features of the "future at time t," i.e., events starting at time t + 1 and continuing forward. So, instead of remembering a large arbitrary set of features of history, we want to find a small subspace of features of history that is relevant for predicting features of the future.
We will call this subspace a predictive compression, and we will write the value function as a linear function of only the predictive compression of features. To find our predictive compression, we will use reduced-rank regression [24]. We define the following empirical covariance matrices between features of the future and features of histories:

Σ̂_{T,H} = (1/k) Σ_{t=1}^k φ^T_t (φ^H_t)^T    Σ̂_{H,H} = (1/k) Σ_{t=1}^k φ^H_t (φ^H_t)^T   (6)

Let L_H be the lower triangular Cholesky factor of Σ̂_{H,H}. Then we can find a predictive compression of histories by a singular value decomposition (SVD) of the weighted covariance: write

U D V^T ≈ Σ̂_{T,H} L_H^{−T}   (7)

for a truncated SVD [25] of the weighted covariance, where U are the left singular vectors, V^T are the right singular vectors, and D is the diagonal matrix of singular values. The number of columns of U, V, or D is equal to the number of retained singular values. Then we define

Û = U D^{1/2}   (8)

to be the mapping from the low-dimensional compressed space up to the high-dimensional space of features of the future. Given Û, we would like to find a compression operator V̂ that optimally predicts features of the future through the bottleneck defined by Û. The least squares estimate can be found by minimizing the loss

L(V) = ‖φ^T_{1:k} − Û V φ^H_{1:k}‖²_F   (9)

where ‖·‖_F denotes the Frobenius norm. We can find the minimum by taking the derivative of this loss with respect to V, setting it to zero, and solving for V (see Appendix, Section A for details), giving us:

V̂ = arg min_V L(V) = Û^T Σ̂_{T,H} (Σ̂_{H,H})^{−1}   (10)

By weighting different features of the future differently, we can change the approximate compression in interesting ways. For example, as we will see in Section 4.2, scaling up future reward by a constant factor results in a value-directed compression, but, unlike previous ways to find value-directed compressions [11], we do not need to know a model of our system ahead of time.
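The bottleneck of Eqs. 6–10 can be sketched in a few lines of NumPy (the function name `predictive_compression`, the random test data, and the chosen dimensions are ours):

```python
import numpy as np

def predictive_compression(phi_T, phi_H, dim):
    """Eqs. 6-10: truncated SVD of the weighted covariance gives U-hat
    (Eq. 8); V-hat (Eq. 10) is the compression operator."""
    k = phi_H.shape[1]
    Sigma_TH = phi_T @ phi_H.T / k                    # Eq. 6
    Sigma_HH = phi_H @ phi_H.T / k
    L_H = np.linalg.cholesky(Sigma_HH)
    U, D, Vt = np.linalg.svd(Sigma_TH @ np.linalg.inv(L_H).T)  # Eq. 7
    U_hat = U[:, :dim] * np.sqrt(D[:dim])             # Eq. 8: U D^{1/2}
    V_hat = U_hat.T @ Sigma_TH @ np.linalg.inv(Sigma_HH)       # Eq. 10
    return U_hat, V_hat

rng = np.random.default_rng(2)
phi_H = rng.normal(size=(6, 300))    # features of history, columns = samples
phi_T = rng.normal(size=(4, 6)) @ phi_H + 0.01 * rng.normal(size=(4, 300))
U_hat, V_hat = predictive_compression(phi_T, phi_H, dim=3)
```

Scaling rows of `phi_T` before calling this function changes the compression, as the value-directed weighting discussion above describes.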
For another example, define L_T to be the lower triangular Cholesky factor of the empirical covariance of future features Σ̂_{T,T}. Then, if we scale features of the future by L_T^{−T}, the singular value decomposition will preserve the largest possible amount of mutual information between features of the future and features of history. This is equivalent to canonical correlation analysis [26, 27], and the matrix D becomes a diagonal matrix of canonical correlations between futures and histories.

Predictive State Temporal Difference Learning

Now that we have found a predictive compression operator V̂ via Equation 10, we can replace the features of history φ^H_t with the compressed features V̂ φ^H_t in the Bellman recursion, Equation 4. Doing so results in the following approximate Bellman equation:

w^T V̂ φ^H_{1:k} ≈ R_{1:k} + γ w^T V̂ φ^H_{2:k+1}   (11)

The least squares solution for w is still prone to an error-in-variables problem. The variable φ^H is still correlated with the true independent variables and uncorrelated with noise, and so we can again use it as an instrumental variable to unbias the estimate of w. Define the additional empirical covariance matrices:

Σ̂_{R,H} = (1/k) Σ_{t=1}^k R_t (φ^H_t)^T    Σ̂_{H+,H} = (1/k) Σ_{t=1}^k φ^H_{t+1} (φ^H_t)^T   (12)

Then, the corrected Bellman equation is:

ŵ^T V̂ Σ̂_{H,H} = Σ̂_{R,H} + γ ŵ^T V̂ Σ̂_{H+,H}

and solving for ŵ gives us the Predictive State Temporal Difference (PSTD) learning algorithm:

ŵ^T = Σ̂_{R,H} (V̂ Σ̂_{H,H} − γ V̂ Σ̂_{H+,H})†   (13)

So far we have provided some intuition for why predictive features should be better than arbitrary features for temporal difference learning. Below we will show an additional benefit: the model-free algorithm in Equation 13 is, under some circumstances, equivalent to a model-based value function approximation method which uses subspace identification to learn Predictive State Representations [20, 21].

Predictive State Representations

A predictive state representation (PSR) [14] is a compact and complete description of a dynamical system.
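Equation 13 reduces to a handful of matrix products. A minimal NumPy sketch on random data (the function name `pstd`, the compression operator, and all dimensions are ours, chosen for illustration):

```python
import numpy as np

def pstd(phi_H, phi_H_next, rewards, V, gamma):
    """PSTD (Eq. 13): LSTD on the compressed features V phi^H,
    with the raw phi^H as the instrumental variable."""
    k = len(rewards)
    Sigma_RH = rewards @ phi_H.T / k          # Sigma-hat_{R,H}
    Sigma_HH = phi_H @ phi_H.T / k            # Sigma-hat_{H,H}
    Sigma_HpH = phi_H_next @ phi_H.T / k      # Sigma-hat_{H+,H}
    return Sigma_RH @ np.linalg.pinv(V @ Sigma_HH - gamma * V @ Sigma_HpH)

rng = np.random.default_rng(3)
j, m, k = 8, 3, 400
phi_H = rng.normal(size=(j, k))          # columns = samples of phi^H_t
phi_H_next = rng.normal(size=(j, k))     # columns = samples of phi^H_{t+1}
rewards = rng.normal(size=k)
V = rng.normal(size=(m, j))              # compression operator (here random)
w = pstd(phi_H, phi_H_next, rewards, V, gamma=0.9)
```

In practice V would come from the bottleneck of Equation 10 rather than being random.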
Unlike POMDPs, which represent state as a distribution over a latent variable, PSRs represent state as a set of predictions of tests. Formally, a PSR consists of five elements A, O, Q, m_1, F. A is a finite set of possible actions, and O is a finite set of possible observations. Q is a core set of tests, i.e., a set whose vector of predictions Q(h) is a sufficient statistic for predicting the success probabilities of all tests. F is the set of functions f_τ which embody these predictions: τ(h) = f_τ(Q(h)). And m_1 = Q(ε) is the initial prediction vector, the predictions for the empty history ε. In this work we will restrict ourselves to linear PSRs, in which all prediction functions are linear: f_τ(Q(h)) = r_τ^T Q(h) for some vector r_τ ∈ R^{|Q|}. Finally, a core set Q for a linear PSR is said to be minimal if the tests in Q are linearly independent [16, 15], i.e., no one test's prediction is a linear function of the other tests' predictions. Since Q(h) is a sufficient statistic for all tests, it is a state for our PSR: i.e., we can remember just Q(h) instead of h itself. After action a and observation o, we can update Q(h) recursively: if we write M_{ao} for the matrix with rows r^T_{aoτ} for τ ∈ Q, then we can use Bayes' Rule to show:

Q(hao) = M_{ao} Q(h) / Pr[o | h, do(a)] = M_{ao} Q(h) / (m_∞^T M_{ao} Q(h))   (14)

where m_∞ is a normalizer, defined by m_∞^T Q(h) = 1 for all h. In addition to the above PSR parameters, we need a few additional definitions for reinforcement learning: a reward function R(h) = η^T Q(h) mapping predictive states to immediate rewards, a discount factor γ ∈ [0, 1] which weights the importance of future rewards vs. present ones, and a policy π(Q(h)) mapping from predictive states to actions. (Specifying a reward in terms of the core test predictions Q(h) is fully general: e.g., if we want to add a unit reward for some test τ ∈ Q, we can instead equivalently set η := η + r_τ, where r_τ is defined (as above) so that τ(h) = r_τ^T Q(h).)
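The recursive update of Eq. 14 is a matrix-vector product followed by a renormalization. A tiny NumPy sketch with a made-up two-test PSR (all parameter values are ours, for illustration only):

```python
import numpy as np

def psr_update(M_ao, m_inf, q):
    """One step of PSR filtering (Eq. 14): apply M_ao for the executed
    action / seen observation, then renormalize with m_inf."""
    q_new = M_ao @ q
    pr_o = m_inf @ q_new       # Pr[o | h, do(a)] = m_inf^T M_ao Q(h)
    return q_new / pr_o, pr_o

# Tiny made-up 2-test PSR.
M_ao = np.array([[0.6, 0.1],
                 [0.2, 0.3]])
m_inf = np.array([1.0, 1.0])   # normalizer: m_inf^T Q(h) = 1 for all h
q = np.array([0.7, 0.3])
q_next, pr = psr_update(M_ao, m_inf, q)
```

With the all-ones normalizer chosen here, the updated prediction vector sums to one by construction.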
Instead of ordinary PSRs, we will work with transformed PSRs (TPSRs) [20, 21]. TPSRs are a generalization of regular PSRs: a TPSR maintains a small number of sufficient statistics which are linear combinations of a (potentially very large) set of test probabilities. That is, a TPSR maintains a small number of feature predictions instead of test predictions. TPSRs have exactly the same predictive abilities as regular PSRs, but are invariant under similarity transforms: given an invertible matrix S, we can transform m_1 → S m_1, m_∞^T → m_∞^T S^{−1}, and M_{ao} → S M_{ao} S^{−1} without changing the corresponding dynamical system, since pairs S^{−1} S cancel in Eq. 14. The main benefit of TPSRs over regular PSRs is that, given any core set of tests, low dimensional parameters can be found using spectral matrix decomposition and regression instead of combinatorial search. In this respect, TPSRs are closely related to the transformed representations of LDSs and HMMs found by subspace identification [28, 29, 27, 30].

Learning Transformed PSRs

Let Q be a minimal core set of tests for a dynamical system, with cardinality n = |Q| equal to the linear dimension of the system. Then, let T be a larger core set of tests (not necessarily minimal, and possibly even with |T| countably infinite). And, let H be the set of all possible histories. (|H| is finite or countably infinite, depending on whether our system is finite-horizon or infinite-horizon.) As before, write φ^H_t ∈ R^ℓ for a vector of features of history at time t, and write φ^T_t ∈ R^{ℓ′} for a vector of features of the future at time t. Since T is a core set of tests, by definition we can compute any test prediction τ(h) as a linear function of T(h). And, since feature predictions are linear combinations of test predictions, we can also compute any feature prediction φ(h) as a linear function of T(h).
We define the matrix Φ^T ∈ R^{ℓ′×|T|} to embody our predictions of future features: that is, an entry of Φ^T is the weight of one of the tests in T for calculating the prediction of one of the features in φ^T. Below we define several covariance matrices, Equation 15(a-d), in terms of the observable quantities φ^T_t, φ^H_t, a_t, and o_t, and show how these matrices relate to the parameters of the underlying PSR. These relationships then lead to our learning algorithm, Eq. 17 below. First we define Σ_{H,H}, the covariance matrix of features of histories, as E[φ^H_t (φ^H_t)^T | h_t ∼ ω]. Given k samples, we can approximate this covariance:

[Σ̂_{H,H}]_{i,j} = (1/k) Σ_{t=1}^k φ^H_{it} φ^H_{jt}  ⟹  Σ̂_{H,H} = (1/k) φ^H_{1:k} (φ^H_{1:k})^T.   (15a)

As k → ∞, the empirical covariance Σ̂_{H,H} converges to the true covariance Σ_{H,H} with probability 1. Next we define Σ_{S,H}, the cross covariance of states and features of histories. Writing s_t = Q(h_t) for the (unobserved) state at time t, let

Σ_{S,H} = E[(1/k) s_{1:k} (φ^H_{1:k})^T | h_t ∼ ω (∀t)]

We cannot directly estimate Σ_{S,H} from data, but this matrix will appear as a factor in several of the matrices that we define below. Next we define Σ_{T,H}, the cross covariance matrix of the features of tests and histories: Σ_{T,H} ≡ E[φ^T_t (φ^H_t)^T | h_t ∼ ω, do(ζ)]. The true covariance is the expectation of the sample covariance Σ̂_{T,H}:

[Σ̂_{T,H}]_{i,j} ≡ (1/k) Σ_{t=1}^k φ^T_{i,t} φ^H_{j,t}

[Σ_{T,H}]_{i,j} = E[(1/k) Σ_{t=1}^k φ^T_{i,t} φ^H_{j,t} | h_t ∼ ω (∀t), do(ζ) (∀t)]
= E[(1/k) Σ_{t=1}^k E[φ^T_{i,t} | h_t, do(ζ)] φ^H_{j,t} | h_t ∼ ω (∀t), do(ζ) (∀t)]
= E[(1/k) Σ_{t=1}^k Σ_{τ∈T} Φ^T_{i,τ} τ(h_t) φ^H_{j,t} | h_t ∼ ω (∀t)]
= E[(1/k) Σ_{t=1}^k Σ_{τ∈T} Φ^T_{i,τ} r_τ^T Q(h_t) φ^H_{j,t} | h_t ∼ ω (∀t)]
= Σ_{τ∈T} Φ^T_{i,τ} r_τ^T E[(1/k) Σ_{t=1}^k Q(h_t) φ^H_{j,t} | h_t ∼ ω (∀t)]
= Σ_{τ∈T} Φ^T_{i,τ} r_τ^T E[(1/k) Σ_{t=1}^k s_t φ^H_{j,t} | h_t ∼ ω (∀t)]
⟹ Σ_{T,H} = Φ^T R Σ_{S,H}   (15b)

where the vector r_τ is the linear function that specifies the probability of the test τ given the probabilities of tests in the core set Q, and the matrix R has all of the r_τ vectors as rows.
The above derivation shows that, because of our assumptions about the linear dimension of the system, the matrix Σ_{T,H} has factors R ∈ R^{|T|×n} and Σ_{S,H} ∈ R^{n×ℓ}. Therefore, the rank of Σ_{T,H} is no more than n, the linear dimension of the system. We can also see that, since the size of Σ_{T,H} is fixed but the number of samples k is increasing, the empirical covariance Σ̂_{T,H} converges to the true covariance Σ_{T,H} with probability 1.

Next we define Σ_{H,ao,H}, a set of matrices, one for each action-observation pair, that represent the covariance between features of history before and after taking action a and observing o. In the following, I_t(o) is an indicator variable for whether we see observation o at step t.

Σ̂_{H,ao,H} ≡ (1/k) Σ_{t=1}^k φ^H_{t+1} I_t(o) (φ^H_t)^T
Σ_{H,ao,H} ≡ E[Σ̂_{H,ao,H} | h_t ∼ ω (∀t), do(a) (∀t)] = E[(1/k) Σ_{t=1}^k φ^H_{t+1} I_t(o) (φ^H_t)^T | h_t ∼ ω (∀t), do(a) (∀t)]   (15c)

Since the dimensions of each Σ_{H,ao,H} are fixed, as k → ∞ these empirical covariances converge to the true covariances Σ_{H,ao,H} with probability 1. Finally we define Σ_{R,H} ≡ E[R_t (φ^H_t)^T | h_t ∼ ω], and approximate the covariance (in this case a vector) of reward and features of history:

Σ̂_{R,H} ≡ (1/k) Σ_{t=1}^k R_t (φ^H_t)^T
Σ_{R,H} ≡ E[Σ̂_{R,H} | h_t ∼ ω (∀t)] = E[(1/k) Σ_{t=1}^k R_t (φ^H_t)^T | h_t ∼ ω (∀t)] = E[(1/k) Σ_{t=1}^k η^T Q(h_t) (φ^H_t)^T | h_t ∼ ω (∀t)] = η^T E[(1/k) Σ_{t=1}^k s_t (φ^H_t)^T | h_t ∼ ω (∀t)] = η^T Σ_{S,H}   (15d)

Again, as k → ∞, Σ̂_{R,H} converges to Σ_{R,H} with probability 1. We now wish to use the above-defined matrices to learn a TPSR from data. To do so we need to make a somewhat-restrictive assumption: we assume that our features of history are rich enough to determine the state of the system, i.e., the regression from φ^H to s is exact: s_t = Σ_{S,H} Σ_{H,H}^{−1} φ^H_t. We discuss how to relax this assumption below in Section 4.3.
We also need a matrix U such that U^T Φ^T R is invertible; with probability 1 a random matrix satisfies this condition, but as we will see below, it is useful to choose U via SVD of a scaled version of Σ̂_{T,H} as described in Sec. 3.2. Using our assumptions we can show a useful identity for Σ_{H,ao,H}:

Σ_{S,H} Σ_{H,H}^{−1} Σ_{H,ao,H}
= E[(1/k) Σ_{t=1}^k Σ_{S,H} Σ_{H,H}^{−1} φ^H_{t+1} I_t(o) (φ^H_t)^T | h_t ∼ ω (∀t), do(a) (∀t)]
= E[(1/k) Σ_{t=1}^k s_{t+1} I_t(o) (φ^H_t)^T | h_t ∼ ω (∀t), do(a) (∀t)]
= E[(1/k) Σ_{t=1}^k M_{ao} s_t (φ^H_t)^T | h_t ∼ ω (∀t)]
= M_{ao} Σ_{S,H}   (16)

This identity is at the heart of our learning algorithm: it shows that Σ_{H,ao,H} contains a hidden copy of M_{ao}, the main TPSR parameter that we need to learn. We would like to recover M_{ao} via Eq. 16, M_{ao} = Σ_{S,H} Σ_{H,H}^{−1} Σ_{H,ao,H} Σ_{S,H}†; but of course we do not know Σ_{S,H}. Fortunately, though, it turns out that we can use U^T Σ_{T,H} as a stand-in, as described below, since this matrix differs from Σ_{S,H} only by an invertible transform (Eq. 15b). We now show how to recover a TPSR from the matrices Σ_{T,H}, Σ_{H,H}, Σ_{R,H}, Σ_{H,ao,H}, and U. Since a TPSR's predictions are invariant to a similarity transform of its parameters, our algorithm only recovers the TPSR parameters to within a similarity transform.

b_t ≡ U^T Σ_{T,H} (Σ_{H,H})^{−1} φ^H_t = U^T Φ^T R Σ_{S,H} (Σ_{H,H})^{−1} φ^H_t = (U^T Φ^T R) s_t   (17a)

B_{ao} ≡ U^T Σ_{T,H} (Σ_{H,H})^{−1} Σ_{H,ao,H} (U^T Σ_{T,H})†
= U^T Φ^T R Σ_{S,H} (Σ_{H,H})^{−1} Σ_{H,ao,H} (U^T Σ_{T,H})†
= (U^T Φ^T R) M_{ao} Σ_{S,H} (U^T Σ_{T,H})†
= (U^T Φ^T R) M_{ao} (U^T Φ^T R)^{−1} (U^T Φ^T R) Σ_{S,H} (U^T Σ_{T,H})†
= (U^T Φ^T R) M_{ao} (U^T Φ^T R)^{−1}   (17b)

b_η^T ≡ Σ_{R,H} (U^T Σ_{T,H})†
= η^T Σ_{S,H} (U^T Σ_{T,H})†
= η^T (U^T Φ^T R)^{−1} (U^T Φ^T R) Σ_{S,H} (U^T Σ_{T,H})†
= η^T (U^T Φ^T R)^{−1}   (17c)

Our PSR learning algorithm is simple: simply replace each true covariance matrix in Eq. 17 by its empirical estimate.
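Plugging empirical covariances into Eq. 17 is mechanical. A NumPy sketch (the function name `learn_tpsr` and the random inputs are ours; real inputs would be the empirical covariances of Eq. 15):

```python
import numpy as np

def learn_tpsr(Sigma_TH, Sigma_HH, Sigma_aoH, Sigma_RH, U):
    """Recover TPSR parameters up to a similarity transform (Eq. 17),
    using U^T Sigma_TH as the stand-in for Sigma_SH."""
    C = U.T @ Sigma_TH                         # stand-in for Sigma_SH
    B = {ao: C @ np.linalg.solve(Sigma_HH, S) @ np.linalg.pinv(C)
         for ao, S in Sigma_aoH.items()}       # Eq. 17b
    b_eta = Sigma_RH @ np.linalg.pinv(C)       # Eq. 17c
    state = lambda phi_h: C @ np.linalg.solve(Sigma_HH, phi_h)  # Eq. 17a
    return B, b_eta, state

rng = np.random.default_rng(4)
lF, lH, m = 4, 5, 3
Sigma_TH = rng.normal(size=(lF, lH))
A = np.eye(lH) + 0.1 * rng.normal(size=(lH, lH))
Sigma_HH = A @ A.T                             # a positive definite stand-in
Sigma_aoH = {("a", "o"): rng.normal(size=(lH, lH))}
Sigma_RH = rng.normal(size=lH)
U = rng.normal(size=(lF, m))
B, b_eta, state = learn_tpsr(Sigma_TH, Sigma_HH, Sigma_aoH, Sigma_RH, U)
```

Consistency of the empirical covariances then carries over to these plug-in parameter estimates.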
Since the empirical estimates converge to their true values with probability 1 as the sample size increases, our learning algorithm is clearly statistically consistent.

Predictive State Temporal Difference Learning (Revisited)

Finally, we are ready to show that the model-free PSTD learning algorithm introduced in Section 3.3 is equivalent to a model-based algorithm built around PSR learning. For a fixed policy π, a TPSR's value function is a linear function of state, J^π(s) = w^T b, and is the solution of the TPSR Bellman equation [31]: for all b,

w^T b = b_η^T b + γ Σ_{o∈O} w^T B_{πo} b,  or equivalently,  w^T = b_η^T + γ Σ_{o∈O} w^T B_{πo}

If we substitute in our learned PSR parameters from Equations 17(a-c), we get

ŵ^T = Σ̂_{R,H} (Û^T Σ̂_{T,H})† + γ Σ_{o∈O} ŵ^T Û^T Σ̂_{T,H} (Σ̂_{H,H})^{−1} Σ̂_{H,πo,H} (Û^T Σ̂_{T,H})†
ŵ^T Û^T Σ̂_{T,H} = Σ̂_{R,H} + γ ŵ^T Û^T Σ̂_{T,H} (Σ̂_{H,H})^{−1} Σ̂_{H+,H}

since, by comparing Eqs. 15c and 12, we can see that Σ_{o∈O} Σ̂_{H,πo,H} = Σ̂_{H+,H}. Now, suppose that we define Û and V̂ by Eqs. 8 and 10, and let U = Û as suggested above in Sec. 4.1. Then Û^T Σ̂_{T,H} = V̂ Σ̂_{H,H}, and

ŵ^T V̂ Σ̂_{H,H} = Σ̂_{R,H} + γ ŵ^T V̂ Σ̂_{H+,H}
ŵ^T = Σ̂_{R,H} (V̂ Σ̂_{H,H} − γ V̂ Σ̂_{H+,H})†   (18)

Eq. 18 is exactly the PSTD algorithm (Eq. 13). So, we have shown that, if we learn a PSR by the subspace identification algorithm of Sec. 4.1 and then compute its value function via the Bellman equation, we get the exact same answer as if we had directly learned the value function via the model-free PSTD method. In addition to adding to our understanding of both methods, an important corollary of this result is that PSTD is a statistically consistent algorithm for PSR value function approximation, to our knowledge the first such result for a TD method. PSTD learning is related to value-directed compression of POMDPs [11]. If we learn a TPSR from data generated by a POMDP, then the TPSR state is exactly a linear compression of the POMDP state [15, 20].
The compression can be exact or approximate, depending on whether we include enough features of the future and whether we keep all or only some nonzero singular values in our bottleneck. If we include only reward as a feature of the future, we get a value-directed compression in the sense of Poupart and Boutilier [11]. If desired, we can tune the degree of value-directedness of our compression by scaling the relative variance of our features: the higher the variance of the reward feature compared to other features, the more value-directed the resulting compression will be. Our work significantly diverges from previous work on POMDP compression in one important respect: prior work assumes access to the true POMDP model, while we make no such assumption, and learn a compressed representation directly from data.

Insights from Subspace Identification

The close connection to subspace identification for PSRs provides additional insight into the temporal difference learning procedure. In Equation 17 we made the assumption that the features of history are rich enough to completely determine the state of the dynamical system. In fact, using theory developed in [21], it is possible to relax this assumption and instead assume that state is merely correlated with features of history. In this case, we need to introduce a new set of covariance matrices Σ_{T,ao,H} ≡ E[φ^T_t I_t(o) (φ^H_t)^T | h_t ∼ ω, do(a, ζ)], one for each action-observation pair, that represent the covariance between features of history before and features of tests after taking action a and observing o. We can then estimate the TPSR transition matrices as B̂_{ao} = Û^T Σ̂_{T,ao,H} (Û^T Σ̂_{T,H})† (see [21] for proof details). The value function parameter w can be estimated as

ŵ^T = Σ̂_{R,H} (Û^T Σ̂_{T,H})† (I − Σ_{o∈O} Û^T Σ̂_{T,ao,H} (Û^T Σ̂_{T,H})†)† = Σ̂_{R,H} (Û^T Σ̂_{T,H} − Σ_{o∈O} Û^T Σ̂_{T,ao,H})†

(the proof is similar to Equation 18).
Since we no longer assume that state is completely specified by features of history, we can no longer apply the learned value function to Û^T Σ̂_{T,H} (Σ̂_{H,H})^{−1} φ^H_t at each time t. Instead we need to learn a full PSR model and filter with the model to estimate state. Details on this procedure can be found in [21].

Experimental Results

We designed several experiments to evaluate the properties of the PSTD learning algorithm. In the first set of experiments we look at the comparative merits of PSTD with respect to LSTD and LARS-TD when applied to the problem of estimating the value function of a reduced-rank POMDP. In the second set of experiments, we apply PSTD to a benchmark optimal stopping problem (pricing a fictitious financial derivative), and show that PSTD outperforms competing approaches.

Estimating the Value Function of a RR-POMDP

We evaluate the PSTD learning algorithm on a synthetic example derived from [32]. The problem is to find the value function of a policy in a partially observable Markov decision process (POMDP). The POMDP has 4 latent states, but the policy's transition matrix is low rank: the resulting belief distributions can be represented in a 3-dimensional subspace of the original belief simplex. A reward of 1 is given in the first and third latent state and a reward of 0 in the other two latent states (see Appendix, Section B). The system emits 2 possible observations, conflating information about the latent states. We perform 3 experiments, comparing the performance of LSTD, LARS-TD, PSTD, and PSTD as formulated in Section 4.3 (which we call PSTD2) when different sets of features are used. In each case we compare the value function estimated by each algorithm to the true value function computed by J^π = R(I − γT^π)^{−1}. In the first experiment we execute the policy π for 1000 time steps.
We split the data into overlapping histories and tests of length 5, and sample 10 of these histories and tests to serve as centers for Gaussian radial basis functions. We then evaluate each basis function at every remaining sample. Then, using these features, we learned the value function using LSTD, LARS-TD, PSTD with linear dimension 3, and PSTD2 with linear dimension 3 (Figure 1(A)). In this experiment, PSTD and PSTD2 both had lower mean squared error than the other approaches. For the second experiment, we added 490 random features to the 10 good features and then attempted to learn the value function with each algorithm (Figure 1(B)). In this case, LSTD and PSTD both had difficulty fitting the value function due to the large number of irrelevant features in both tests and histories and the relatively small amount of training data. LARS-TD, designed for precisely this scenario, was able to select the 10 relevant features and estimate the value function better by a substantial margin. Surprisingly, in this experiment PSTD2 not only outperformed PSTD but bested even LARS-TD. For the third experiment, we increased the number of sampled features from 10 to 500. In this case, each feature was somewhat relevant, but the number of features was relatively large compared to the amount of training data. This situation occurs frequently in practice: it is often easy to find a large number of features that are at least somewhat related to state. PSTD and PSTD2 both outperform LARS-TD, and each of these subspace and subset selection methods outperforms LSTD by a large margin by efficiently estimating the value function (Figure 1(C)). Pricing A High-dimensional Financial Derivative Derivatives are financial contracts with payoffs linked to the future prices of basic assets such as stocks, bonds and commodities.
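The Gaussian radial-basis features used in the first experiment above can be sketched as follows. The data, the RBF width, and the choice to evaluate at all samples are assumed placeholders for illustration, not the paper's actual values.

```python
import numpy as np

def rbf_features(samples, centers, width=1.0):
    """Evaluate Gaussian radial basis functions centered at `centers` for
    each row of `samples`. The width is an assumed free parameter."""
    # Pairwise squared distances between samples and centers.
    d2 = ((samples[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(1)
histories = rng.standard_normal((1000, 5))   # length-5 windows, as in the text
centers = histories[rng.choice(1000, size=10, replace=False)]
phi = rbf_features(histories, centers)       # one feature per sampled center
```

Each row of `phi` is the 10-dimensional feature vector for one history sample.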
In some derivatives the contract holder has no choices, but in more complex cases, the contract owner must make decisions; e.g., with early exercise the contract holder can decide to terminate the contract at any time and receive payments based on prevailing market conditions. In these cases, the value of the derivative depends on how the contract holder acts. Deciding when to exercise is therefore an optimal stopping problem: at each point in time, the contract holder must decide whether to continue holding the contract or exercise. Such stopping problems provide an ideal testbed for policy evaluation methods, since we can easily collect a single data set which is sufficient to evaluate any policy: we just choose the "continue" action forever. (We can then evaluate the "stop" action easily in any of the resulting states, since the immediate reward is given by the rules of the contract, and the next state is the terminal state by definition.) We consider the financial derivative introduced by Tsitsiklis and Van Roy [33]. The derivative generates payoffs that are contingent on the prices of a single stock. At the end of a given day, the holder may opt to exercise. At exercise the owner receives a payoff equal to the current price of the stock divided by the price 100 days beforehand. We can think of this derivative as a "psychic call": the owner gets to decide whether s/he would like to have bought an ordinary 100-day European call option, at the then-current market price, 100 days ago. In our simulation (and unknown to the investor), the underlying stock price follows a geometric Brownian motion with volatility σ = 0.02 and continuously compounded short-term growth rate ρ = 0.0004. Assuming stock prices fluctuate only on days when the market is open, these parameters correspond to an annual growth rate of ∼ 10%.
In more detail, if w_t is a standard Brownian motion, then the stock price p_t evolves as dp_t = ρ p_t dt + σ p_t dw_t, and we can summarize relevant state at the end of each day as a vector x_t ∈ R^100, with x_t = (p_{t−99}/p_{t−100}, p_{t−98}/p_{t−100}, . . . , p_t/p_{t−100})^T. The ith dimension x_t(i) represents the amount a $1 investment in a stock at time t − 100 would grow to at time t − 100 + i. This process is Markov and ergodic [33,34]: x_t and x_{t+100} are independent and identically distributed. The immediate reward for exercising the option is G(x) = x(100), and the immediate reward for continuing to hold the option is 0. The discount factor γ = e^{−ρ} is determined by the growth rate; this corresponds to assuming that the risk-free interest rate is equal to the stock's growth rate, meaning that the investor gains nothing in expectation by holding the stock itself. The value of the derivative, if the current state is x, is given by V*(x) = sup_t E[γ^t G(x_t) | x_0 = x]. Our goal is to calculate an approximate value function V(x) = w^T φ^H(x), and then use this value function to generate a stopping time min{t | G(x_t) ≥ V(x_t)}. To do so, we sample a sequence of 1,000,000 states x_t ∈ R^100 and calculate features φ^H of each state. We then perform policy iteration on this sample, alternately estimating the value function under a given policy and then using this value function to define a new greedy policy "stop if G(x_t) ≥ w^T φ^H(x_t)." Within the above strategy, we have two main choices: which features do we use, and how do we estimate the value function in terms of these features. For value function estimation, we used LSTD, LARS-TD, or PSTD. In each case we re-used our 1,000,000-state sample trajectory for all iterations: we start at the beginning and follow the trajectory as long as the policy chooses the "continue" action, with reward 0 at each step.
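A minimal simulation of this setup can be sketched as below, using the stated parameters (σ = 0.02, ρ = 0.0004) but a made-up horizon; this is not the authors' implementation.

```python
import numpy as np

# Simulate the stock price as a geometric Brownian motion (daily steps),
# then form the 100-dimensional state x_t of price ratios.
rng = np.random.default_rng(0)
sigma, rho, T = 0.02, 0.0004, 500
log_steps = (rho - 0.5 * sigma**2) + sigma * rng.standard_normal(T)
p = np.exp(np.concatenate(([0.0], np.cumsum(log_steps))))   # p_0 = 1

def state(p, t):
    """x_t(i) = p_{t-100+i} / p_{t-100} for i = 1..100."""
    return p[t - 99 : t + 1] / p[t - 100]

x = state(p, 200)
G = x[-1]          # exercise payoff G(x) = x(100) = p_t / p_{t-100}
```

The last component of the state is exactly the 100-day return that the option pays out on exercise.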
When the policy executes the "stop" action, the reward is G(x) and the next state's features are all 0; we then restart the policy 100 steps in the future, after the process has fully mixed. For feature selection, we are fortunate: previous researchers have hand-selected a "good" set of 16 features for this data set through repeated trial and error (see Appendix, Section B and [33,34]). We greatly expand this set of features, then use PSTD to synthesize a small set of high-quality combined features. Specifically, we add the entire 100-step state vector, the squares of the components of the state vector, and several additional nonlinear features, increasing the total number of features from 16 to 220. We use histories of length 1, tests of length 5, and (for comparison's sake) we choose a linear dimension of 16. Tests (but not histories) were value-directed by reducing the variance of all features except reward by a factor of 100. Figure 1D shows results. We compared PSTD (reducing 220 to 16 features) to LSTD with either the 16 hand-selected features or the full 220 features, as well as to LARS-TD (220 features) and to a simple thresholding strategy [33]. In each case we evaluated the final policy on 10,000 new random trajectories. PSTD outperformed each of its competitors, improving on the next best approach, LARS-TD, by 1.75 percentage points. In fact, PSTD performs better than the best previously reported approach [33,34] by 1.24 percentage points. These improvements correspond to appreciable fractions of the risk-free interest rate (which is about 4 percentage points over the 100-day window of the contract), and therefore to significant arbitrage opportunities: an investor who doesn't know the best strategy will consistently undervalue the security, allowing an informed investor to buy it below its expected value. Conclusion In this paper, we attack the feature selection problem for temporal difference learning.
Although well-known temporal difference algorithms such as LSTD can provide asymptotically unbiased estimates of value function parameters in linear architectures, they can have trouble in finite samples: if the number of features is large relative to the number of training samples, then they can have high variance in their value function estimates. For this reason, in real-world problems, a substantial amount of time is spent selecting a small set of features, often by trial and error [33,34]. To remedy this problem, we present the PSTD algorithm, a new approach to feature selection for TD methods, which demonstrates how insights from system identification can benefit reinforcement learning. PSTD automatically chooses a small set of features that are relevant for prediction and value function approximation. It approaches feature selection from a bottleneck perspective, by finding a small set of features that preserves only predictive information. Because of the focus on predictive information, the PSTD approach is closely connected to PSRs: under appropriate assumptions, PSTD's compressed set of features is asymptotically equivalent to TPSR state, and PSTD is a consistent estimator of the PSR value function. We demonstrate the merits of PSTD compared to two popular alternative algorithms, LARS-TD and LSTD, on a synthetic example, and argue that PSTD is most effective when approximating a value function from a large number of features, each of which contains at least a little information about state. Finally, we apply PSTD to a difficult optimal stopping problem, and demonstrate the practical utility of the algorithm by outperforming several alternative approaches and topping the best reported previous results. The first basis functions summarize the minimal and maximal returns, and how long ago they occurred. The next set of basis functions summarize the basic shape of the 100-day sample path: they are the inner products of the path with the first four Legendre polynomials.
Let j = i/50 − 1, which maps the day index i = 1, . . . , 100 into [−1, 1]. The hand-picked basis functions are:
φ_1(x) = 1
φ_2(x) = G(x)
φ_3(x) = min_{i=1,...,100} x(i)
φ_7(x) = (1/100) Σ_{i=1}^{100} x(i) · (1/√2)
φ_8(x) = (1/100) Σ_{i=1}^{100} x(i) · √(3/2) · j
φ_9(x) = (1/100) Σ_{i=1}^{100} x(i) · √(5/2) · (3j^2 − 1)/2
φ_10(x) = (1/100) Σ_{i=1}^{100} x(i) · √(7/2) · (5j^3 − 3j)/2
Nonlinear combinations of basis functions:
φ_11(x) = φ_2(x)φ_3(x)
φ_12(x) = φ_2(x)φ_4(x)
φ_13(x) = φ_2(x)φ_7(x)
φ_14(x) = φ_2(x)φ_8(x)
φ_15(x) = φ_2(x)φ_9(x)
φ_16(x) = φ_2(x)φ_10(x)
In order to improve our results, we added a large number of additional basis functions to these hand-picked 16. PSTD will compress these features for us, so we can use as many additional basis functions as we would like. First we defined 4 additional basis functions consisting of the inner products of the 100-day sample path with the 5th and 6th Legendre polynomials, and we added the corresponding nonlinear combinations of basis functions:
φ_17(x) = (1/100) Σ_{i=1}^{100} x(i) · √(9/2) · (35j^4 − 30j^2 + 3)/8
φ_18(x) = (1/100) Σ_{i=1}^{100} x(i) · √(11/2) · (63j^5 − 70j^3 + 15j)/8
φ_19(x) = φ_2(x)φ_17(x)
φ_20(x) = φ_2(x)φ_18(x)
Finally we added the entire sample path and the squared sample path: φ_21:120 = x_1:100 and φ_121:220 = x^2_1:100.
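For concreteness, the normalized Legendre features φ_7 through φ_10 above can be computed as below. This is a sketch; the flat sample path is used only to check the normalization of the constant polynomial.

```python
import numpy as np

# Normalized Legendre basis over the 100-day path: j = i/50 - 1 maps
# days 1..100 into [-1, 1]; P_k are the first four Legendre polynomials.
i = np.arange(1, 101)
j = i / 50.0 - 1.0
P = [np.ones_like(j), j, (3 * j**2 - 1) / 2, (5 * j**3 - 3 * j) / 2]

def legendre_feature(x, k):
    """phi = (1/100) sum_i x(i) * sqrt((2k+1)/2) * P_k(j_i)."""
    return np.mean(x * np.sqrt((2 * k + 1) / 2.0) * P[k])

# For a flat path x = 1, the degree-0 feature reduces to 1/sqrt(2).
x = np.ones(100)
f7 = legendre_feature(x, 0)
```

The sqrt((2k+1)/2) factor is the usual L2 normalization of the Legendre polynomials on [-1, 1].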
8,106
1011.0041
2949683464
We propose a new approach to value function approximation which combines linear temporal difference reinforcement learning with subspace identification. In practical applications, reinforcement learning (RL) is complicated by the fact that state is either high-dimensional or partially observable. Therefore, RL methods are designed to work with features of state rather than state itself, and the success or failure of learning is often determined by the suitability of the selected features. By comparison, subspace identification (SSID) methods are designed to select a feature set which preserves as much information as possible about state. In this paper we connect the two approaches, looking at the problem of reinforcement learning with a large set of features, each of which may only be marginally useful for value function approximation. We introduce a new algorithm for this situation, called Predictive State Temporal Difference (PSTD) learning. As in SSID for predictive state representations, PSTD finds a linear compression operator that projects a large set of features down to a small set that preserves the maximum amount of predictive information. As in RL, PSTD then uses a Bellman recursion to estimate a value function. We discuss the connection between PSTD and prior approaches in RL and SSID. We prove that PSTD is statistically consistent, perform several experiments that illustrate its properties, and demonstrate its potential on a difficult optimal stopping problem.
Just as was shown for the two simpler methods, we show that our two improved methods (model-free and model-based) are equivalent. This result yields some appealing theoretical benefits: for example, PSTD features can be explicitly interpreted as a statistically consistent estimate of the true underlying system state. And, the feasibility of finding the true value function can be shown to depend on the linear dimension of the dynamical system, or equivalently, the dimensionality of the predictive state representation, rather than on the cardinality of the POMDP state space. Therefore our representation is naturally "compressed" in the sense of @cite_15 , speeding up convergence.
{ "abstract": [ "We examine the problem of generating state-space compressions of POMDPs in a way that minimally impacts decision quality. We analyze the impact of compressions on decision quality, observing that compressions that allow accurate policy evaluation (prediction of expected future reward) will not affect decision quality We derive a set of sufficient conditions that ensure accurate prediction in this respect, illustrate interesting mathematical properties these confer on lossless linear compressions, and use these to derive an iterative procedure for finding good linear lossy compressions. We also elaborate on how structured representations of a POMDP can be used to find such compressions." ], "cite_N": [ "@cite_15" ], "mid": [ "2138089556" ] }
Predictive State Temporal Difference Learning
Value Function Approximation We start from a discrete-time dynamical system with a set of states S, a set of actions A, a distribution over initial states π_0, a state transition function T, a reward function R, and a discount factor γ ∈ [0, 1]. We seek a policy π, a mapping from states to actions. The notion of a value function is of central importance in reinforcement learning: for a given policy π, the value of state s is defined as the expected discounted sum of rewards obtained when starting in state s and following policy π, J^π(s) = E[Σ_{t=0}^∞ γ^t R(s_t) | s_0 = s, π]. It is well known that the value function must obey the Bellman equation J^π(s) = R(s) + γ Σ_{s′} J^π(s′) Pr[s′ | s, π(s)]  (1) If we know the transition function T, and if the set of states S is sufficiently small, we can use (1) directly to solve for the value function J^π. We can then execute the greedy policy for J^π, setting the action at each state to maximize the right-hand side of (1). However, we consider instead the harder problem of estimating the value function when s is a partially observable latent variable, and when the transition function T is unknown. In this situation, we receive information about s through observations from a finite set O. Our state (i.e., the information which we can use to make decisions) is not an element of S but a history (an ordered sequence of action-observation pairs h = a^h_1 o^h_1 . . . a^h_t o^h_t that have been executed and observed prior to time t). If we knew the transition model T, we could use h to infer a belief distribution over S, and use that belief (or a compression of that belief) as a state instead; below, we will discuss how to learn a compressed belief state. Because of partial observability, we can only hope to predict reward conditioned on history, R(h) = E[R(s) | h], and we must choose actions as a function of history, π(h) instead of π(s). Let H be the set of all possible histories.
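For a small, fully observed MDP with a known transition matrix, Equation 1 can be solved in closed form, J^π = (I − γT^π)^{-1} R. A quick sketch with made-up numbers (the two-state chain here is purely illustrative):

```python
import numpy as np

# Solve the Bellman equation J = R + gamma * T J directly:
# J = (I - gamma T)^{-1} R, for an assumed 2-state policy-induced chain.
gamma = 0.9
T_pi = np.array([[0.5, 0.5],
                 [0.3, 0.7]])          # row-stochastic transitions under pi
R = np.array([1.0, 0.0])               # reward per state
J = np.linalg.solve(np.eye(2) - gamma * T_pi, R)
```

The solution satisfies the fixed-point equation exactly, which is what the iterative methods later in the paper approximate from data.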
H is often very large or infinite, so instead of finding a value separately for each history, we focus on value functions that are linear in features of histories J^π(h) = w^T φ^H(h)  (2) Here w ∈ R^j is a parameter vector and φ^H(h) ∈ R^j is a feature vector for a history h. So, we can rewrite the Bellman equation as w^T φ^H(h) = R(h) + γ Σ_{o∈O} w^T φ^H(hπo) Pr[hπo | hπ]  (3) where hπo is history h extended by taking action π(h) and observing o. Least Squares Temporal Difference Learning In general we don't know the transition probabilities Pr[hπo | h], but we do have samples of state features φ^H_t = φ^H(h_t), next-state features φ^H_{t+1} = φ^H(h_{t+1}), and immediate rewards R_t = R(h_t). We can thus estimate the Bellman equation w^T φ^H_{1:k} ≈ R_{1:k} + γ w^T φ^H_{2:k+1}  (4) (Here we have used the notation φ^H_{1:k} to mean the matrix whose columns are φ^H_t for t = 1 . . . k.) We can immediately attempt to estimate the parameter w by solving the linear system in the least squares sense: ŵ^T = R_{1:k} (φ^H_{1:k} − γ φ^H_{2:k+1})^†, where † indicates the Moore-Penrose pseudoinverse. However, this solution is biased [3], since the independent variables φ^H_t − γ φ^H_{t+1} are noisy samples of the expected difference E[φ^H(h) − γ Σ_{o∈O} φ^H(hπo) Pr[hπo | h]]. In other words, estimating the value function parameters w is an error-in-variables problem. The least squares temporal difference (LSTD) algorithm provides a consistent estimate of the independent variables by right multiplying the approximate Bellman equation (Equation 4) by (φ^H_t)^T. The quantity (φ^H_t)^T can be viewed as an instrumental variable [3], i.e., a measurement that is correlated with the true independent variables, but uncorrelated with the noise in our estimates of these variables.
The value function parameter w may then be estimated as follows: ŵ^T = (1/k Σ_{t=1}^{k} R_t (φ^H_t)^T) (1/k Σ_{t=1}^{k} φ^H_t (φ^H_t)^T − γ/k Σ_{t=1}^{k} φ^H_{t+1} (φ^H_t)^T)^{-1}  (5) As the amount of data k increases, the empirical covariance matrices in Equation 5 converge to their true expectations with probability 1. Therefore, as long as the matrix to be inverted is nonsingular, our estimate of the inverse is also consistent, and our estimate of w therefore converges to the true parameters with probability 1. Predictive Features Although LSTD provides a consistent estimate of the value function parameters w, in practice, the potential size of the feature vectors can be a problem. If the number of features is large relative to the number of training samples, then the estimation of w is prone to overfitting. This problem can be alleviated by choosing some small set of features that only contain information that is relevant for value function approximation. However, with the exception of LARS-TD [18], there has been little work on the problem of how to select features automatically for value function approximation when the system model is unknown; and of course, manual feature selection depends on not-always-available expert guidance. We approach the problem of finding a good set of features from a bottleneck perspective. That is, given some signal from history, in this case a large set of features, we would like to find a compression that preserves only relevant information for predicting the value function J^π. As we will see in Section 4, this improvement is directly related to spectral identification of PSRs. Tests and Features of the Future We first need to define precisely the task of predicting the future. Just as a history is an ordered sequence of action-observation pairs executed prior to time t, we define a test of length i to be an ordered sequence of action-observation pairs τ = a_1 o_1 . . . a_i o_i that can be executed and observed after time t [14].
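The LSTD estimate of Equation 5 above takes only a few lines. The two-state chain below is an assumed toy example (not from the paper) used to check that the estimate converges to the fixed point (I − γT)^{-1} R.

```python
import numpy as np

def lstd(Phi, R, gamma):
    """LSTD weights (Equation 5): w^T = (sum_t R_t phi_t^T)
    (sum_t phi_t phi_t^T - gamma sum_t phi_{t+1} phi_t^T)^{-1}.
    Phi: (k+1) x j feature matrix; R: length-k rewards."""
    phi_t, phi_t1 = Phi[:-1], Phi[1:]
    b = R @ phi_t                              # the 1/k factors cancel
    A = phi_t.T @ phi_t - gamma * phi_t1.T @ phi_t
    return np.linalg.solve(A.T, b)             # w such that w^T A = b

# Assumed toy problem: a uniform 2-state chain with one-hot state features,
# so the true value function is J = (I - gamma T)^{-1} R.
rng = np.random.default_rng(0)
gamma, k = 0.9, 50000
s = rng.integers(0, 2, size=k + 1)             # uniform transitions
Phi = np.eye(2)[s]
R = np.array([1.0, 0.0])[s[:-1]]
w = lstd(Phi, R, gamma)
J = np.linalg.solve(np.eye(2) - gamma * np.full((2, 2), 0.5), [1.0, 0.0])
```

With one-hot features the instrumental-variable correction makes `w` a consistent estimate of `J` (here J = (5.5, 4.5)).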
The prediction for a test τ after a history h, written τ(h), is the probability that we will see the test observations τ^O = o_1 . . . o_i, given that we intervene [22] to execute the test actions τ^A = a_1 . . . a_i: τ(h) = Pr[τ^O | h, do(τ^A)] If Q = {τ_1, . . . , τ_n} is a set of tests, we write Q(h) = (τ_1(h), . . . , τ_n(h))^T for the corresponding vector of test predictions. We can generalize the notion of a test to a feature of the future, a linear combination of several tests sharing a common action sequence. For example, if τ_1 and τ_2 are two tests with τ^A_1 = τ^A_2 ≡ τ^A, then we can make a feature φ = 3τ_1 + τ_2. This feature is executed if we intervene to do(τ^A), and if it is executed its value is 3I(τ^O_1) + I(τ^O_2), where I(o_1 . . . o_i) stands for an indicator random variable, taking the value 0 or 1 depending on whether we observe the sequence of observations o_1 . . . o_i. The prediction of φ given h is φ(h) ≡ E(φ | h, do(τ^A)) = 3τ_1(h) + τ_2(h). While linear combinations of tests may seem restrictive, our definition is actually very expressive: we can represent an arbitrary function of a finite sequence of future observations. To do so, we take a collection of tests, each of which picks out one possible realization of the sequence, and weight each test by the value of the function conditioned on that realization. For example, if our observations are integers 1, 2, . . . , 10, we can write the square of the next observation as Σ_{o=1}^{10} o^2 I(o), and the mean of the next two observations as Σ_{o=1}^{10} Σ_{o′=1}^{10} (1/2)(o + o′) I(o, o′). The restriction to a common action sequence is necessary: without this restriction, all the tests making up a feature could never be executed at once. Once we move to feature predictions, however, it makes sense to lift this restriction: we will say that any linear combination of feature predictions is also a feature prediction, even if the features involved have different action sequences.
Action sequences raise some problems with obtaining empirical estimates of means and covariances of features of the future: e.g., it is not always possible to get a sample of a particular feature's value on every time step, and the feature we choose to sample at one step can restrict which features we can sample at subsequent steps. In order to carry out our derivations without running into these problems repeatedly, we will assume for the rest of the paper that we can reset our system after every sample, and get a new history independently distributed as h_t ∼ ω for some distribution ω. (With some additional bookkeeping we could remove this assumption [23], but this bookkeeping would unnecessarily complicate our derivations.) Furthermore, we will introduce some new language, again to keep derivations simple: if we have a vector of features of the future φ^T, we will pretend that we can get a sample φ^T_t in which we evaluate all of our features starting from a single history h_t, even if the different elements of φ^T require us to execute different action sequences. When our algorithms call for such a sample, we will instead use the following trick to get a random vector with the correct expectation (and somewhat higher variance, which doesn't matter for any of our arguments): write τ^A_1, τ^A_2, . . . for the different action sequences, and let ζ_1, ζ_2, . . . > 0 be a probability distribution over these sequences. We pick a single action sequence τ^A_a according to ζ, and execute τ^A_a to get a sample φ̂^T of the features which depend on τ^A_a. We then enter φ̂^T/ζ_a into the corresponding coordinates of φ^T_t, and fill in zeros everywhere else. It is easy to see that the expected value of our sample vector is then correct: the probability of selection ζ_a and the weighting factor 1/ζ_a cancel out. We will write E(φ^T | h_t, do(ζ)) to stand for this expectation.
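The weighting trick can be checked numerically: sampling one action sequence a ∼ ζ and entering the observed feature values divided by ζ_a (zeros elsewhere) leaves the expectation unchanged. A sketch with made-up feature expectations:

```python
import numpy as np

rng = np.random.default_rng(0)
zeta = np.array([0.3, 0.7])          # distribution over 2 action sequences
true_vals = np.array([2.0, -1.0])    # assumed E[phi | h, do(tau_a)] per sequence

# Draw one action sequence per sample, fill in its feature value / zeta_a,
# and zeros for the features of the other (unexecuted) sequence.
n = 200_000
a = rng.choice(2, size=n, p=zeta)
phi = np.zeros((n, 2))
phi[np.arange(n), a] = true_vals[a] / zeta[a]

est = phi.mean(axis=0)               # should recover true_vals in expectation
```

The selection probability ζ_a and the 1/ζ_a weight cancel, so `est` is an unbiased (if higher-variance) estimate of both feature expectations.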
None of the above tricks are actually necessary in our experiments with stopping problems: we simply execute the "continue" action on every step, and use only sequences of "continue" actions in every test and feature. Finding Predictive Features Through a Bottleneck In order to find a predictive feature compression, we first need to determine what we would like to predict. Since we are interested in value function approximation, the most relevant prediction is the value function itself; so, we could simply try to predict total future discounted reward given a history. Unfortunately, total discounted reward has high variance, so unless we have a lot of data, learning will be difficult. We can reduce variance by including other prediction tasks as well. For example, predicting individual rewards at future time steps, while not strictly necessary to predict total discounted reward, seems highly relevant, and gives us much more immediate feedback. Similarly, future observations hopefully contain information about future reward, so trying to predict observations can help us predict reward better. Finally, in any specific RL application, we may be able to add problem-specific prediction tasks that will help focus our attention on relevant information: for example, in a path-planning problem, we might try to predict which of several goal states we will reach (in addition to how much it will cost to get there). We can represent all of these prediction tasks as features of the future: e.g., to predict which goal we will reach, we add a distinct observation at each goal state, or to predict individual rewards, we add individual rewards as observations. We will write φ^T_t for the vector of all features of the "future at time t," i.e., events starting at time t + 1 and continuing forward. So, instead of remembering a large arbitrary set of features of history, we want to find a small subspace of features of history that is relevant for predicting features of the future.
We will call this subspace a predictive compression, and we will write the value function as a linear function of only the predictive compression of features. To find our predictive compression, we will use reduced-rank regression [24]. We define the following empirical covariance matrices between features of the future and features of histories: Σ_{T,H} = 1/k Σ_{t=1}^{k} φ^T_t (φ^H_t)^T and Σ_{H,H} = 1/k Σ_{t=1}^{k} φ^H_t (φ^H_t)^T  (6) Let L_H be the lower triangular Cholesky factor of Σ_{H,H}. Then we can find a predictive compression of histories by a singular value decomposition (SVD) of the weighted covariance: write U D V^T ≈ Σ_{T,H} L_H^{−T}  (7) for a truncated SVD [25] of the weighted covariance, where U are the left singular vectors, V^T are the right singular vectors, and D is the diagonal matrix of singular values. The number of columns of U, V, or D is equal to the number of retained singular values. Then we define U = U D^{1/2}  (8) to be the mapping from the low-dimensional compressed space up to the high-dimensional space of features of the future. Given U, we would like to find a compression operator V that optimally predicts features of the future through the bottleneck defined by U. The least squares estimate can be found by minimizing the loss L(V) = ||φ^T_{1:k} − U V φ^H_{1:k}||^2_F  (9) where ||·||_F denotes the Frobenius norm. We can find the minimum by taking the derivative of this loss with respect to V, setting it to zero, and solving for V (see Appendix, Section A for details), giving us: V = arg min_V L(V) = U^† Σ_{T,H} (Σ_{H,H})^{−1}  (10) By weighting different features of the future differently, we can change the approximate compression in interesting ways. For example, as we will see in Section 4.2, scaling up future reward by a constant factor results in a value-directed compression, but, unlike previous ways to find value-directed compressions [11], we do not need to know a model of our system ahead of time.
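Equations 6 to 10 amount to a reduced-rank regression, which can be sketched as follows on synthetic low-rank data (all dimensions and data here are assumed for illustration):

```python
import numpy as np

def predictive_compression(PhiT, PhiH, dim):
    """SVD bottleneck of Equations 6-10. PhiT: k x m features of the
    future; PhiH: k x j features of history; dim: retained rank."""
    k = PhiH.shape[0]
    Sigma_TH = PhiT.T @ PhiH / k
    Sigma_HH = PhiH.T @ PhiH / k
    L_H = np.linalg.cholesky(Sigma_HH)           # lower-triangular factor
    W = np.linalg.solve(L_H, Sigma_TH.T).T       # Sigma_TH @ L_H^{-T}
    Uf, d, _ = np.linalg.svd(W, full_matrices=False)
    U = Uf[:, :dim] * np.sqrt(d[:dim])           # U = U D^{1/2}  (Eq. 8)
    # Least-squares compression through the bottleneck (Eqs. 9-10).
    V = np.linalg.pinv(U) @ Sigma_TH @ np.linalg.inv(Sigma_HH)
    return U, V

# Synthetic data: futures depend on histories through a rank-2 map M.
rng = np.random.default_rng(0)
PhiH = rng.standard_normal((5000, 6))
M = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 6))
PhiT = PhiH @ M.T + 0.01 * rng.standard_normal((5000, 8))
U, V = predictive_compression(PhiT, PhiH, dim=2)
err = np.linalg.norm(M - U @ V) / np.linalg.norm(M)
```

Because the true map is rank 2, the rank-2 bottleneck U V recovers it almost exactly; with a larger retained rank the compression would simply reproduce ordinary linear regression.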
For another example, define L_T to be the lower triangular Cholesky factor of the empirical covariance of future features Σ_{T,T}. Then, if we scale features of the future by L_T^{−T}, the singular value decomposition will preserve the largest possible amount of mutual information between features of the future and features of history. This is equivalent to canonical correlation analysis [26,27], and the matrix D becomes a diagonal matrix of canonical correlations between futures and histories. Predictive State Temporal Difference Learning Now that we have found a predictive compression operator V via Equation 10, we can replace the features of history φ^H_t with the compressed features V φ^H_t in the Bellman recursion, Equation 4. Doing so results in the following approximate Bellman equation: w^T V φ^H_{1:k} ≈ R_{1:k} + γ w^T V φ^H_{2:k+1}  (11) The least squares solution for w is still prone to an error-in-variables problem. The variable φ^H is still correlated with the true independent variables and uncorrelated with noise, and so we can again use it as an instrumental variable to unbias the estimate of w. Define the additional empirical covariance matrices: Σ_{R,H} = 1/k Σ_{t=1}^{k} R_t (φ^H_t)^T and Σ_{H+,H} = 1/k Σ_{t=1}^{k} φ^H_{t+1} (φ^H_t)^T  (12) Then, the corrected Bellman equation is: ŵ^T V Σ_{H,H} = Σ_{R,H} + γ ŵ^T V Σ_{H+,H}, and solving for ŵ gives us the Predictive State Temporal Difference (PSTD) learning algorithm: ŵ^T = Σ_{R,H} (V Σ_{H,H} − γ V Σ_{H+,H})^†  (13) So far we have provided some intuition for why predictive features should be better than arbitrary features for temporal difference learning. Below we will show an additional benefit: the model-free algorithm in Equation 13 is, under some circumstances, equivalent to a model-based value function approximation method which uses subspace identification to learn Predictive State Representations [20,21]. Predictive State Representations A predictive state representation (PSR) [14] is a compact and complete description of a dynamical system.
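The PSTD estimate of Equation 13 differs from LSTD only by the compression operator V; with V equal to the identity it reduces to LSTD, which the toy chain below (an assumed example, not from the paper) uses as a sanity check.

```python
import numpy as np

def pstd(PhiH, R, V, gamma):
    """PSTD weights (Equation 13):
    w^T = Sigma_RH (V Sigma_HH - gamma V Sigma_{H+,H})^+."""
    phi_t, phi_t1 = PhiH[:-1], PhiH[1:]
    k = len(R)
    Sigma_RH = (R @ phi_t) / k
    Sigma_HH = phi_t.T @ phi_t / k
    Sigma_HpH = phi_t1.T @ phi_t / k
    return Sigma_RH @ np.linalg.pinv(V @ (Sigma_HH - gamma * Sigma_HpH))

# Assumed 2-state uniform chain with one-hot features: with V = I, PSTD
# reduces to LSTD, whose fixed point is (I - gamma T)^{-1} R.
rng = np.random.default_rng(0)
gamma, k = 0.9, 50000
s = rng.integers(0, 2, size=k + 1)
PhiH = np.eye(2)[s]
R = np.array([1.0, 0.0])[s[:-1]]
w = pstd(PhiH, R, np.eye(2), gamma)
J = np.linalg.solve(np.eye(2) - gamma * np.full((2, 2), 0.5), [1.0, 0.0])
```

In the intended use, V comes from the bottleneck of Equation 10 and w lives in the low-dimensional compressed space, so the value estimate is w^T V φ^H(h).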
Unlike POMDPs, which represent state as a distribution over a latent variable, PSRs represent state as a set of predictions of tests. Formally, a PSR consists of five elements A, O, Q, m_1, F. A is a finite set of possible actions, and O is a finite set of possible observations. Q is a core set of tests, i.e., a set whose vector of predictions Q(h) is a sufficient statistic for predicting the success probabilities of all tests. F is the set of functions f_τ which embody these predictions: τ(h) = f_τ(Q(h)). And, m_1 = Q(ε) is the initial prediction vector, where ε is the empty history. In this work we will restrict ourselves to linear PSRs, in which all prediction functions are linear: f_τ(Q(h)) = r_τ^T Q(h) for some vector r_τ ∈ R^{|Q|}. Finally, a core set Q for a linear PSR is said to be minimal if the tests in Q are linearly independent [16,15], i.e., no one test's prediction is a linear function of the other tests' predictions. Since Q(h) is a sufficient statistic for all tests, it is a state for our PSR: i.e., we can remember just Q(h) instead of h itself. After action a and observation o, we can update Q(h) recursively: if we write M_ao for the matrix with rows r_{aoτ}^T for τ ∈ Q, then we can use Bayes' Rule to show: Q(hao) = M_ao Q(h) / Pr[o | h, do(a)] = M_ao Q(h) / (m_∞^T M_ao Q(h))  (14) where m_∞ is a normalizer, defined by m_∞^T Q(h) = 1 for all h. In addition to the above PSR parameters, we need a few additional definitions for reinforcement learning: a reward function R(h) = η^T Q(h) mapping predictive states to immediate rewards, a discount factor γ ∈ [0, 1] which weights the importance of future rewards vs. present ones, and a policy π(Q(h)) mapping from predictive states to actions. (Specifying a reward in terms of the core test predictions Q(h) is fully general: e.g., if we want to add a unit reward for some test τ ∈ Q, we can instead equivalently set η := η + r_τ, where r_τ is defined (as above) so that τ(h) = r_τ^T Q(h).)
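The recursive update of Equation 14 can be sketched as follows. The observable operators here come from a small made-up 2-state system where the predictive state is simply a belief vector, so the normalizer is m_∞ = (1, 1); this is an assumed example, not one from the paper.

```python
import numpy as np

def psr_update(q, M_ao, m_inf):
    """One filtering step (Equation 14):
    Q(hao) = M_ao Q(h) / (m_inf^T M_ao Q(h))."""
    v = M_ao @ q
    return v / (m_inf @ v)

# Assumed 2-state HMM-like system: column-stochastic transitions T and
# diagonal observation likelihoods O give observable operators M_o = T O_o.
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])
O = {0: np.diag([0.7, 0.4]), 1: np.diag([0.3, 0.6])}
M = {o: T @ O[o] for o in (0, 1)}
m_inf = np.ones(2)                  # beliefs sum to 1, so m_inf = (1, 1)

q = np.array([0.5, 0.5])            # initial prediction vector m_1
q = psr_update(q, M[0], m_inf)      # filter on observing o = 0
```

After each update the state remains a normalized prediction vector, mirroring the Bayes-rule belief update of the underlying latent-state model.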
Instead of ordinary PSRs, we will work with transformed PSRs (TPSRs) [20,21]. TPSRs are a generalization of regular PSRs: a TPSR maintains a small number of sufficient statistics which are linear combinations of a (potentially very large) set of test probabilities. That is, a TPSR maintains a small number of feature predictions instead of test predictions. TPSRs have exactly the same predictive abilities as regular PSRs, but are invariant under similarity transforms: given an invertible matrix S, we can transform m 1 → Sm 1 , m T ∞ → m T ∞ S −1 , and M ao → SM ao S −1 without changing the corresponding dynamical system, since pairs S −1 S cancel in Eq. 14. The main benefit of TPSRs over regular PSRs is that, given any core set of tests, low dimensional parameters can be found using spectral matrix decomposition and regression instead of combinatorial search. In this respect, TPSRs are closely related to the transformed representations of LDSs and HMMs found by subspace identification [28,29,27,30]. Learning Transformed PSRs Let Q be a minimal core set of tests for a dynamical system, with cardinality n = |Q| equal to the linear dimension of the system. Then, let T be a larger core set of tests (not necessarily minimal, and possibly even with |T | countably infinite). And, let H be the set of all possible histories. (|H| is finite or countably infinite, depending on whether our system is finite-horizon or infinite-horizon.) As before, write φ H t ∈ R for a vector of features of history at time t, and write φ T t ∈ R for a vector of features of the future at time t. Since T is a core set of tests, by definition we can compute any test prediction τ (h) as a linear function of T (h). And, since feature predictions are linear combinations of test predictions, we can also compute any feature prediction φ(h) as a linear function of T (h). 
We define the matrix $\Phi^T$ to embody our predictions of future features: that is, an entry of $\Phi^T$ is the weight of one of the tests in $T$ for calculating the prediction of one of the features in $\phi^T$. Below we define several covariance matrices, Equations 15(a–d), in terms of the observable quantities $\phi^T_t$, $\phi^H_t$, $a_t$, and $o_t$, and show how these matrices relate to the parameters of the underlying PSR. These relationships then lead to our learning algorithm, Eq. 17 below.

First we define $\Sigma_{H,H}$, the covariance matrix of features of histories, as $E[\phi^H_t (\phi^H_t)^T \mid h_t \sim \omega]$. Given $k$ samples, we can approximate this covariance:

$$[\hat\Sigma_{H,H}]_{i,j} = \frac{1}{k}\sum_{t=1}^k \phi^H_{i,t}\,\phi^H_{j,t} \;\Longrightarrow\; \hat\Sigma_{H,H} = \frac{1}{k}\,\phi^H_{1:k}\,(\phi^H_{1:k})^T. \qquad (15a)$$

As $k \to \infty$, the empirical covariance $\hat\Sigma_{H,H}$ converges to the true covariance $\Sigma_{H,H}$ with probability 1. Next we define $\Sigma_{S,H}$, the cross covariance of states and features of histories. Writing $s_t = Q(h_t)$ for the (unobserved) state at time $t$, let

$$\Sigma_{S,H} = E\left[\tfrac{1}{k}\, s_{1:k}\,(\phi^H_{1:k})^T \,\middle|\, h_t \sim \omega\ (\forall t)\right].$$

We cannot directly estimate $\Sigma_{S,H}$ from data, but this matrix will appear as a factor in several of the matrices that we define below. Next we define $\Sigma_{T,H}$, the cross covariance matrix of the features of tests and histories: $\Sigma_{T,H} \equiv E[\phi^T_t (\phi^H_t)^T \mid h_t \sim \omega, \mathrm{do}(\zeta)]$. The true covariance is the expectation of the sample covariance $\hat\Sigma_{T,H}$:

$$[\hat\Sigma_{T,H}]_{i,j} \equiv \frac{1}{k}\sum_{t=1}^k \phi^T_{i,t}\,\phi^H_{j,t}$$

$$\begin{aligned}
[\Sigma_{T,H}]_{i,j} &= E\left[\tfrac{1}{k}\textstyle\sum_{t=1}^k \phi^T_{i,t}\,\phi^H_{j,t} \,\middle|\, h_t \sim \omega\ (\forall t),\ \mathrm{do}(\zeta)\ (\forall t)\right]\\
&= E\left[\tfrac{1}{k}\textstyle\sum_{t=1}^k E\!\left[\phi^T_{i,t} \mid h_t, \mathrm{do}(\zeta)\right]\phi^H_{j,t} \,\middle|\, h_t \sim \omega\ (\forall t),\ \mathrm{do}(\zeta)\ (\forall t)\right]\\
&= E\left[\tfrac{1}{k}\textstyle\sum_{t=1}^k \sum_{\tau\in T}\Phi^T_{i,\tau}\,\tau(h_t)\,\phi^H_{j,t} \,\middle|\, h_t \sim \omega\ (\forall t)\right]\\
&= E\left[\tfrac{1}{k}\textstyle\sum_{t=1}^k \sum_{\tau\in T}\Phi^T_{i,\tau}\, r_\tau^T Q(h_t)\,\phi^H_{j,t} \,\middle|\, h_t \sim \omega\ (\forall t)\right]\\
&= \textstyle\sum_{\tau\in T}\Phi^T_{i,\tau}\, r_\tau^T\, E\left[\tfrac{1}{k}\sum_{t=1}^k s_t\,\phi^H_{j,t} \,\middle|\, h_t \sim \omega\ (\forall t)\right]\\
\Longrightarrow\ \Sigma_{T,H} &= \Phi^T R\,\Sigma_{S,H} \qquad (15b)
\end{aligned}$$

where the vector $r_\tau$ is the linear function that specifies the probability of the test $\tau$ given the probabilities of tests in the core set $Q$, and the matrix $R$ has all of the $r_\tau$ vectors as rows.
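A minimal sketch of the empirical estimates in Eq. 15(a–b), assuming features are stored column-wise (one column per time step); the data here are random placeholders rather than a real dynamical system:

```python
import numpy as np

rng = np.random.default_rng(1)
k, dH, dT = 5000, 4, 6                 # samples, history / test feature dims
phiH = rng.standard_normal((dH, k))    # columns are phi^H_t
phiT = rng.standard_normal((dT, k))    # columns are phi^T_t

Sigma_HH = phiH @ phiH.T / k           # empirical Sigma_{H,H}, Eq. 15a
Sigma_TH = phiT @ phiH.T / k           # empirical Sigma_{T,H}

assert Sigma_HH.shape == (dH, dH)
assert np.allclose(Sigma_HH, Sigma_HH.T)   # a covariance matrix is symmetric
assert Sigma_TH.shape == (dT, dH)
```

With real data generated by a system of linear dimension $n$, Eq. 15b implies $\Sigma_{T,H}$ would have rank at most $n$; with the independent random placeholders above it is generically full rank.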
The above derivation shows that, because of our assumptions about the linear dimension of the system, the matrix $\Sigma_{T,H}$ has factors $R \in \mathbb{R}^{|T| \times n}$ and $\Sigma_{S,H}$. Therefore, the rank of $\Sigma_{T,H}$ is no more than $n$, the linear dimension of the system. We can also see that, since the size of $\Sigma_{T,H}$ is fixed but the number of samples $k$ is increasing, the empirical covariance $\hat\Sigma_{T,H}$ converges to the true covariance $\Sigma_{T,H}$ with probability 1.

Next we define $\Sigma_{H,ao,H}$, a set of matrices, one for each action–observation pair, that represent the covariance between features of history before and after taking action $a$ and observing $o$. In the following, $I_t(o)$ is an indicator variable for whether we see observation $o$ at step $t$:

$$\hat\Sigma_{H,ao,H} \equiv \frac{1}{k}\sum_{t=1}^k \phi^H_{t+1}\, I_t(o)\, (\phi^H_t)^T, \qquad
\Sigma_{H,ao,H} \equiv E\left[\hat\Sigma_{H,ao,H} \,\middle|\, h_t \sim \omega\ (\forall t),\ \mathrm{do}(a)\ (\forall t)\right]. \qquad (15c)$$

Since the dimensions of each $\Sigma_{H,ao,H}$ are fixed, as $k \to \infty$ these empirical covariances converge to the true covariances $\Sigma_{H,ao,H}$ with probability 1. Finally we define $\Sigma_{R,H} \equiv E[R_t (\phi^H_t)^T \mid h_t \sim \omega]$, and approximate the covariance (in this case a vector) of reward and features of history:

$$\hat\Sigma_{R,H} \equiv \frac{1}{k}\sum_{t=1}^k R_t\,(\phi^H_t)^T$$

$$\begin{aligned}
\Sigma_{R,H} &\equiv E\left[\hat\Sigma_{R,H} \,\middle|\, h_t \sim \omega\ (\forall t)\right]
= E\left[\tfrac{1}{k}\textstyle\sum_{t=1}^k R_t\,(\phi^H_t)^T \,\middle|\, h_t \sim \omega\ (\forall t)\right]\\
&= E\left[\tfrac{1}{k}\textstyle\sum_{t=1}^k \eta^T Q(h_t)\,(\phi^H_t)^T \,\middle|\, h_t \sim \omega\ (\forall t)\right]
= \eta^T E\left[\tfrac{1}{k}\textstyle\sum_{t=1}^k s_t\,(\phi^H_t)^T \,\middle|\, h_t \sim \omega\ (\forall t)\right]
= \eta^T \Sigma_{S,H} \qquad (15d)
\end{aligned}$$

Again, as $k \to \infty$, $\hat\Sigma_{R,H}$ converges to $\Sigma_{R,H}$ with probability 1. We now wish to use the above-defined matrices to learn a TPSR from data. To do so we need to make a somewhat restrictive assumption: we assume that our features of history are rich enough to determine the state of the system, i.e., the regression from $\phi^H$ to $s$ is exact: $s_t = \Sigma_{S,H}\,\Sigma_{H,H}^{-1}\,\phi^H_t$. We discuss how to relax this assumption below in Section 4.3.
We also need a matrix $U$ such that $U^T \Phi^T R$ is invertible; with probability 1 a random matrix satisfies this condition, but as we will see below, it is useful to choose $U$ via SVD of a scaled version of $\Sigma_{T,H}$ as described in Sec. 3.2. Using our assumptions we can show a useful identity for $\Sigma_{H,ao,H}$:

$$\begin{aligned}
\Sigma_{S,H}\,\Sigma_{H,H}^{-1}\,\Sigma_{H,ao,H}
&= E\left[\tfrac{1}{k}\textstyle\sum_{t=1}^k \Sigma_{S,H}\Sigma_{H,H}^{-1}\phi^H_{t+1}\, I_t(o)\,(\phi^H_t)^T \,\middle|\, h_t\sim\omega\ (\forall t),\ \mathrm{do}(a)\ (\forall t)\right]\\
&= E\left[\tfrac{1}{k}\textstyle\sum_{t=1}^k s_{t+1}\, I_t(o)\,(\phi^H_t)^T \,\middle|\, h_t\sim\omega\ (\forall t),\ \mathrm{do}(a)\ (\forall t)\right]\\
&= E\left[\tfrac{1}{k}\textstyle\sum_{t=1}^k M_{ao}\, s_t\,(\phi^H_t)^T \,\middle|\, h_t\sim\omega\ (\forall t)\right]
= M_{ao}\,\Sigma_{S,H} \qquad (16)
\end{aligned}$$

This identity is at the heart of our learning algorithm: it shows that $\Sigma_{H,ao,H}$ contains a hidden copy of $M_{ao}$, the main TPSR parameter that we need to learn. We would like to recover $M_{ao}$ via Eq. 16 as $M_{ao} = \Sigma_{S,H}\Sigma_{H,H}^{-1}\Sigma_{H,ao,H}\Sigma_{S,H}^\dagger$; but of course we do not know $\Sigma_{S,H}$. Fortunately, though, it turns out that we can use $U^T \Sigma_{T,H}$ as a stand-in, as described below, since this matrix differs from $\Sigma_{S,H}$ only by an invertible transform (Eq. 15b). We now show how to recover a TPSR from the matrices $\Sigma_{T,H}$, $\Sigma_{H,H}$, $\Sigma_{R,H}$, $\Sigma_{H,ao,H}$, and $U$. Since a TPSR's predictions are invariant to a similarity transform of its parameters, our algorithm only recovers the TPSR parameters to within a similarity transform.

$$b_t \equiv U^T\Sigma_{T,H}(\Sigma_{H,H})^{-1}\phi^H_t
= U^T\Phi^T R\,\Sigma_{S,H}(\Sigma_{H,H})^{-1}\phi^H_t
= (U^T\Phi^T R)\,s_t \qquad (17a)$$

$$\begin{aligned}
B_{ao} &\equiv U^T\Sigma_{T,H}(\Sigma_{H,H})^{-1}\Sigma_{H,ao,H}(U^T\Sigma_{T,H})^\dagger\\
&= U^T\Phi^T R\,\Sigma_{S,H}(\Sigma_{H,H})^{-1}\Sigma_{H,ao,H}(U^T\Sigma_{T,H})^\dagger\\
&= (U^T\Phi^T R)\,M_{ao}\,\Sigma_{S,H}(U^T\Sigma_{T,H})^\dagger\\
&= (U^T\Phi^T R)\,M_{ao}\,(U^T\Phi^T R)^{-1}(U^T\Phi^T R)\,\Sigma_{S,H}(U^T\Sigma_{T,H})^\dagger\\
&= (U^T\Phi^T R)\,M_{ao}\,(U^T\Phi^T R)^{-1} \qquad (17b)
\end{aligned}$$

$$\begin{aligned}
b_\eta^T &\equiv \Sigma_{R,H}(U^T\Sigma_{T,H})^\dagger
= \eta^T\Sigma_{S,H}(U^T\Sigma_{T,H})^\dagger\\
&= \eta^T(U^T\Phi^T R)^{-1}(U^T\Phi^T R)\,\Sigma_{S,H}(U^T\Sigma_{T,H})^\dagger
= \eta^T(U^T\Phi^T R)^{-1} \qquad (17c)
\end{aligned}$$

Our PSR learning algorithm is simple: replace each true covariance matrix in Eq. 17 by its empirical estimate.
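The plug-in estimator of Eq. 17 can be sketched as follows; the function and variable names are our own, and the demo data are random placeholders rather than a real dynamical system (so only the shapes, not the learned values, are meaningful):

```python
import numpy as np

def learn_tpsr(phiT, phiH, phiH_next, ind_o, rewards, n):
    """Plug-in spectral learning in the spirit of Eq. 17.  Columns of the
    feature matrices are time steps; ind_o maps each observation to a 0/1
    vector marking the steps at which it occurred (all under one action)."""
    k = phiH.shape[1]
    Sigma_TH = phiT @ phiH.T / k                  # empirical Sigma_{T,H}
    Sigma_HH = phiH @ phiH.T / k                  # empirical Sigma_{H,H}
    U, _, _ = np.linalg.svd(Sigma_TH, full_matrices=False)
    U = U[:, :n]                                  # rank-n bottleneck via SVD
    core = U.T @ Sigma_TH                         # stand-in for Sigma_{S,H}
    Hinv = np.linalg.pinv(Sigma_HH)
    B = {o: core @ Hinv @ ((phiH_next * m) @ phiH.T / k) @ np.linalg.pinv(core)
         for o, m in ind_o.items()}               # Eq. 17b, empirical version
    b_eta = (rewards @ phiH.T / k) @ np.linalg.pinv(core)   # Eq. 17c
    return U, B, b_eta

# Tiny synthetic run, just to exercise the shapes.
rng = np.random.default_rng(3)
k, dT, dH, n = 200, 6, 4, 2
phiH = rng.standard_normal((dH, k))
phiH_next = rng.standard_normal((dH, k))
phiT = rng.standard_normal((dT, k))
ind_o = {0: (rng.random(k) < 0.5).astype(float)}
rewards = rng.standard_normal(k)
U, B, b_eta = learn_tpsr(phiT, phiH, phiH_next, ind_o, rewards, n)
assert U.shape == (dT, n) and B[0].shape == (n, n) and b_eta.shape == (n,)
```

The SVD step is the "choose $U$" step mentioned above; everything downstream is matrix multiplication and pseudo-inverses, which is why the method avoids combinatorial search.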
Since the empirical estimates converge to their true values with probability 1 as the sample size increases, our learning algorithm is clearly statistically consistent.

Predictive State Temporal Difference Learning (Revisited). Finally, we are ready to show that the model-free PSTD learning algorithm introduced in Section 3.3 is equivalent to a model-based algorithm built around PSR learning. For a fixed policy $\pi$, a TPSR's value function is a linear function of state, $J^\pi(b) = w^T b$, and is the solution of the TPSR Bellman equation [31]: for all $b$,

$$w^T b = b_\eta^T b + \gamma\sum_{o\in O} w^T B_{\pi o}\, b, \quad\text{or equivalently,}\quad w^T = b_\eta^T + \gamma\sum_{o\in O} w^T B_{\pi o}.$$

If we substitute in our learned PSR parameters from Equations 17(a–c), we get

$$\hat w^T = \hat\Sigma_{R,H}(\hat U^T\hat\Sigma_{T,H})^\dagger + \gamma\sum_{o\in O}\hat w^T\, \hat U^T\hat\Sigma_{T,H}(\hat\Sigma_{H,H})^{-1}\hat\Sigma_{H,\pi o,H}(\hat U^T\hat\Sigma_{T,H})^\dagger$$

$$\hat w^T \hat U^T\hat\Sigma_{T,H} = \hat\Sigma_{R,H} + \gamma\,\hat w^T \hat U^T\hat\Sigma_{T,H}(\hat\Sigma_{H,H})^{-1}\hat\Sigma_{H^+,H}$$

since, by comparing Eqs. 15c and 12, we can see that $\sum_{o\in O}\hat\Sigma_{H,\pi o,H} = \hat\Sigma_{H^+,H}$. Now, suppose that we define $\hat U$ and $\hat V$ by Eqs. 8 and 10, and let $U = \hat U$ as suggested above in Sec. 4.1. Then $\hat U^T\hat\Sigma_{T,H} = \hat V\hat\Sigma_{H,H}$, and

$$\hat w^T \hat V\hat\Sigma_{H,H} = \hat\Sigma_{R,H} + \gamma\,\hat w^T\hat V\hat\Sigma_{H^+,H}
\;\Longrightarrow\;
\hat w^T = \hat\Sigma_{R,H}\left(\hat V\hat\Sigma_{H,H} - \gamma\,\hat V\hat\Sigma_{H^+,H}\right)^\dagger \qquad (18)$$

Eq. 18 is exactly the PSTD algorithm (Eq. 13). So, we have shown that, if we learn a PSR by the subspace identification algorithm of Sec. 4.1 and then compute its value function via the Bellman equation, we get the exact same answer as if we had directly learned the value function via the model-free PSTD method. In addition to adding to our understanding of both methods, an important corollary of this result is that PSTD is a statistically consistent algorithm for PSR value function approximation, to our knowledge the first such result for a TD method. PSTD learning is related to value-directed compression of POMDPs [11]. If we learn a TPSR from data generated by a POMDP, then the TPSR state is exactly a linear compression of the POMDP state [15,20].
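Eq. 18 reduces to a single pseudo-inverse; a minimal sketch (function name ours), with a scalar sanity check against the geometric series:

```python
import numpy as np

def pstd_weights(Sigma_RH, V_Sigma_HH, V_Sigma_HplusH, gamma):
    """Eq. 18: w^T = Sigma_{R,H} (V Sigma_{H,H} - gamma V Sigma_{H+,H})^dagger."""
    return Sigma_RH @ np.linalg.pinv(V_Sigma_HH - gamma * V_Sigma_HplusH)

# Scalar sanity check: a single state with reward 1 that always transitions
# to itself has value 1 / (1 - gamma), the usual geometric series.
w = pstd_weights(np.array([[1.0]]), np.array([[1.0]]), np.array([[1.0]]), 0.9)
assert np.isclose(w[0, 0], 10.0)
```

In the scalar case the pseudo-inverse is just $1/(1-\gamma)$, recovering the discounted value of a constant unit reward.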
The compression can be exact or approximate, depending on whether we include enough features of the future and whether we keep all or only some nonzero singular values in our bottleneck. If we include only reward as a feature of the future, we get a value-directed compression in the sense of Poupart and Boutilier [11]. If desired, we can tune the degree of value-directedness of our compression by scaling the relative variance of our features: the higher the variance of the reward feature compared to other features, the more value-directed the resulting compression will be. Our work significantly diverges from previous work on POMDP compression in one important respect: prior work assumes access to the true POMDP model, while we make no such assumption, and learn a compressed representation directly from data.

Insights from Subspace Identification. The close connection to subspace identification for PSRs provides additional insight into the temporal difference learning procedure. In Equation 17 we made the assumption that the features of history are rich enough to completely determine the state of the dynamical system. In fact, using theory developed in [21], it is possible to relax this assumption and instead assume that state is merely correlated with features of history. In this case, we need to introduce a new set of covariance matrices $\Sigma_{T,ao,H} \equiv E[\phi^T_t\, I_t(o)\,(\phi^H_t)^T \mid h_t \sim \omega, \mathrm{do}(a,\zeta)]$, one for each action–observation pair, that represent the covariance between features of history before, and features of tests after, taking action $a$ and observing $o$. We can then estimate the TPSR transition matrices as $\hat B_{ao} = \hat U^T\hat\Sigma_{T,ao,H}(\hat U^T\hat\Sigma_{T,H})^\dagger$ (see [21] for proof details). The value function parameter $w$ can be estimated as

$$\hat w^T = \hat\Sigma_{R,H}(\hat U^T\hat\Sigma_{T,H})^\dagger\Big(I - \sum_{o\in O}\hat U^T\hat\Sigma_{T,ao,H}(\hat U^T\hat\Sigma_{T,H})^\dagger\Big)^\dagger
= \hat\Sigma_{R,H}\Big(\hat U^T\hat\Sigma_{T,H} - \sum_{o\in O}\hat U^T\hat\Sigma_{T,ao,H}\Big)^\dagger$$

(the proof is similar to Equation 18).
Since we no longer assume that state is completely specified by features of history, we can no longer apply the learned value function to $\hat U^T\hat\Sigma_{T,H}(\hat\Sigma_{H,H})^{-1}\phi^H_t$ at each time $t$. Instead we need to learn a full PSR model and filter with the model to estimate state. Details on this procedure can be found in [21].

Experimental Results. We designed several experiments to evaluate the properties of the PSTD learning algorithm. In the first set of experiments we look at the comparative merits of PSTD with respect to LSTD and LARS-TD when applied to the problem of estimating the value function of a reduced-rank POMDP. In the second set of experiments, we apply PSTD to a benchmark optimal stopping problem (pricing a fictitious financial derivative), and show that PSTD outperforms competing approaches.

Estimating the Value Function of a RR-POMDP. We evaluate the PSTD learning algorithm on a synthetic example derived from [32]. The problem is to find the value function of a policy in a partially observable Markov decision process (POMDP). The POMDP has 4 latent states, but the policy's transition matrix is low rank: the resulting belief distributions can be represented in a 3-dimensional subspace of the original belief simplex. A reward of 1 is given in the first and third latent states and a reward of 0 in the other two latent states (see Appendix, Section B). The system emits 2 possible observations, conflating information about the latent states. We perform 3 experiments, comparing the performance of LSTD, LARS-TD, PSTD, and PSTD as formulated in Section 4.3 (which we call PSTD2) when different sets of features are used. In each case we compare the value function estimated by each algorithm to the true value function computed by $J^\pi = R(I - \gamma T^\pi)^{-1}$. In the first experiment we execute the policy $\pi$ for 1000 time steps.
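The ground-truth computation $J^\pi = R(I - \gamma T^\pi)^{-1}$ is a single linear solve; a sketch on a made-up 2-state chain (written in the column-vector convention $J = (I - \gamma T)^{-1}R$, which is the same computation transposed):

```python
import numpy as np

# Toy 2-state Markov chain under a fixed policy; the numbers are illustrative
# and are not the RR-POMDP from the paper's appendix.
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])     # row-stochastic transition matrix
R = np.array([1.0, 0.0])       # expected immediate reward per state
gamma = 0.9

# Solve (I - gamma T) J = R instead of forming an explicit inverse.
J = np.linalg.solve(np.eye(2) - gamma * T, R)
assert np.all(J > 0) and J[0] > J[1]   # the rewarded state is worth more
```

Using `solve` rather than an explicit matrix inverse is the standard numerically preferable way to evaluate this expression.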
We split the data into overlapping histories and tests of length 5, and sample 10 of these histories and tests to serve as centers for Gaussian radial basis functions. We then evaluate each basis function at every remaining sample. Then, using these features, we learn the value function using LSTD, LARS-TD, PSTD with linear dimension 3, and PSTD2 with linear dimension 3 (Figure 1(A)). In this experiment, PSTD and PSTD2 both had lower mean squared error than the other approaches. For the second experiment, we added 490 random features to the 10 good features and then attempted to learn the value function with each of the algorithms (Figure 1(B)). In this case, LSTD and PSTD both had difficulty fitting the value function due to the large number of irrelevant features in both tests and histories and the relatively small amount of training data. LARS-TD, designed for precisely this scenario, was able to select the 10 relevant features and estimate the value function better by a substantial margin. Surprisingly, in this experiment PSTD2 not only outperformed PSTD but bested even LARS-TD. For the third experiment, we increased the number of sampled features from 10 to 500. In this case, each feature was somewhat relevant, but the number of features was relatively large compared to the amount of training data. This situation occurs frequently in practice: it is often easy to find a large number of features that are at least somewhat related to state. PSTD and PSTD2 both outperform LARS-TD, and each of these subspace and subset selection methods outperforms LSTD by a large margin by efficiently estimating the value function (Figure 1(C)).

Pricing a High-dimensional Financial Derivative. Derivatives are financial contracts with payoffs linked to the future prices of basic assets such as stocks, bonds and commodities.
In some derivatives the contract holder has no choices, but in more complex cases, the contract owner must make decisions-e.g., with early exercise the contract holder can decide to terminate the contract at any time and receive payments based on prevailing market conditions. In these cases, the value of the derivative depends on how the contract holder acts. Deciding when to exercise is therefore an optimal stopping problem: at each point in time, the contract holder must decide whether to continue holding the contract or exercise. Such stopping problems provide an ideal testbed for policy evaluation methods, since we can easily collect a single data set which is sufficient to evaluate any policy: we just choose the "continue" action forever. (We can then evaluate the "stop" action easily in any of the resulting states, since the immediate reward is given by the rules of the contract, and the next state is the terminal state by definition.) We consider the financial derivative introduced by Tsitsiklis and Van Roy [33]. The derivative generates payoffs that are contingent on the prices of a single stock. At the end of a given day, the holder may opt to exercise. At exercise the owner receives a payoff equal to the current price of the stock divided by the price 100 days beforehand. We can think of this derivative as a "psychic call": the owner gets to decide whether s/he would like to have bought an ordinary 100-day European call option, at the then-current market price, 100 days ago. In our simulation (and unknown to the investor), the underlying stock price follows a geometric Brownian motion with volatility σ = 0.02 and continuously compounded short term growth rate ρ = 0.0004. Assuming stock prices fluctuate only on days when the market is open, these parameters correspond to an annual growth rate of ∼ 10%. 
In more detail, if $w_t$ is a standard Brownian motion, then the stock price $p_t$ evolves as $dp_t = \rho\, p_t\, dt + \sigma\, p_t\, dw_t$, and we can summarize the relevant state at the end of each day as a vector $x_t \in \mathbb{R}^{100}$, with $x_t = \left(\frac{p_{t-99}}{p_{t-100}}, \frac{p_{t-98}}{p_{t-100}}, \ldots, \frac{p_t}{p_{t-100}}\right)^T$. The $i$th dimension $x_t(i)$ represents the amount a $1 investment in the stock at time $t-100$ would grow to at time $t-100+i$. This process is Markov and ergodic [33,34]: $x_t$ and $x_{t+100}$ are independent and identically distributed. The immediate reward for exercising the option is $G(x) = x(100)$, and the immediate reward for continuing to hold the option is 0. The discount factor $\gamma = e^{-\rho}$ is determined by the growth rate; this corresponds to assuming that the risk-free interest rate is equal to the stock's growth rate, meaning that the investor gains nothing in expectation by holding the stock itself. The value of the derivative, if the current state is $x$, is given by $V^*(x) = \sup_t E[\gamma^t G(x_t) \mid x_0 = x]$. Our goal is to calculate an approximate value function $\hat V(x) = w^T\phi^H(x)$, and then use this value function to generate a stopping time $\min\{t \mid G(x_t) \ge \hat V(x_t)\}$. To do so, we sample a sequence of 1,000,000 states $x_t \in \mathbb{R}^{100}$ and calculate features $\phi^H$ of each state. We then perform policy iteration on this sample, alternately estimating the value function under a given policy and then using this value function to define a new greedy policy: "stop if $G(x_t) \ge w^T\phi^H(x_t)$." Within the above strategy, we have two main choices: which features to use, and how to estimate the value function in terms of these features. For value function estimation, we used LSTD, LARS-TD, or PSTD. In each case we re-used our 1,000,000-state sample trajectory for all iterations: we start at the beginning and follow the trajectory as long as the policy chooses the "continue" action, with reward 0 at each step.
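The simulation setup can be sketched as follows; the path length and evaluation time are arbitrary choices of ours, and the exact log-space discretization of geometric Brownian motion is our own implementation detail:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, rho, n_days = 0.02, 0.0004, 300   # parameters from the text

# Exact one-day steps of geometric Brownian motion in log space:
# log p_{t+1} - log p_t = (rho - sigma^2/2) + sigma * z_t.
z = rng.standard_normal(n_days)
log_p = np.cumsum((rho - 0.5 * sigma**2) + sigma * z)
p = np.exp(np.concatenate(([0.0], log_p)))      # p_0 = 1

t = 250                                          # any day with t >= 100
x_t = p[t - 99 : t + 1] / p[t - 100]             # x_t(i) = p_{t-100+i} / p_{t-100}
G = x_t[-1]                                      # exercise payoff G(x) = x(100)
assert x_t.shape == (100,) and G > 0
```

Note that $x_t(100) = p_t / p_{t-100}$ is the last component, matching the payoff definition $G(x) = x(100)$.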
When the policy executes the "stop" action, the reward is $G(x)$ and the next state's features are all 0; we then restart the policy 100 steps in the future, after the process has fully mixed. For feature selection, we are fortunate: previous researchers have hand-selected a "good" set of 16 features for this data set through repeated trial and error (see Appendix, Section B and [33,34]). We greatly expand this set of features, then use PSTD to synthesize a small set of high-quality combined features. Specifically, we add the entire 100-step state vector, the squares of the components of the state vector, and several additional nonlinear features, increasing the total number of features from 16 to 220. We use histories of length 1, tests of length 5, and (for comparison's sake) we choose a linear dimension of 16. Tests (but not histories) were value-directed by reducing the variance of all features except reward by a factor of 100. Figure 1D shows results. We compared PSTD (reducing 220 to 16 features) to LSTD with either the 16 hand-selected features or the full 220 features, as well as to LARS-TD (220 features) and to a simple thresholding strategy [33]. In each case we evaluated the final policy on 10,000 new random trajectories. PSTD outperformed each of its competitors, improving on the next best approach, LARS-TD, by 1.75 percentage points. In fact, PSTD performs better than the best previously reported approach [33,34] by 1.24 percentage points. These improvements correspond to appreciable fractions of the risk-free interest rate (which is about 4 percentage points over the 100-day window of the contract), and therefore to significant arbitrage opportunities: an investor who doesn't know the best strategy will consistently undervalue the security, allowing an informed investor to buy it for below its expected value.

Conclusion. In this paper, we attack the feature selection problem for temporal difference learning.
Although well-known temporal difference algorithms such as LSTD can provide asymptotically unbiased estimates of value function parameters in linear architectures, they can have trouble in finite samples: if the number of features is large relative to the number of training samples, then they can have high variance in their value function estimates. For this reason, in real-world problems, a substantial amount of time is spent selecting a small set of features, often by trial and error [33,34]. To remedy this problem, we present the PSTD algorithm, a new approach to feature selection for TD methods, which demonstrates how insights from system identification can benefit reinforcement learning. PSTD automatically chooses a small set of features that are relevant for prediction and value function approximation. It approaches feature selection from a bottleneck perspective, by finding a small set of features that preserves only predictive information. Because of the focus on predictive information, the PSTD approach is closely connected to PSRs: under appropriate assumptions, PSTD's compressed set of features is asymptotically equivalent to TPSR state, and PSTD is a consistent estimator of the PSR value function. We demonstrate the merits of PSTD compared to two popular alternative algorithms, LARS-TD and LSTD, on a synthetic example, and argue that PSTD is most effective when approximating a value function from a large number of features, each of which contains at least a little information about state. Finally, we apply PSTD to a difficult optimal stopping problem, and demonstrate the practical utility of the algorithm by outperforming several alternative approaches and topping the best reported previous results.

Appendix, Section B: basis functions. The first basis functions summarize the extreme (minimal and maximal) returns, and how long ago they occurred. The next set of basis functions summarize the characteristics of the basic shape of the 100-day sample path: they are the inner products of the path with the first four Legendre polynomial degrees.
Let $j = i/50 - 1$.

$$\phi_1(x) = 1 \qquad \phi_2(x) = G(x) \qquad \phi_3(x) = \min_{i=1,\ldots,100} x(i) \qquad \cdots$$

$$\phi_7(x) = \frac{1}{100}\sum_{i=1}^{100} x(i)\,\frac{1}{\sqrt 2} \qquad
\phi_8(x) = \frac{1}{100}\sum_{i=1}^{100} x(i)\,\sqrt{\tfrac{3}{2}}\, j \qquad
\phi_9(x) = \frac{1}{100}\sum_{i=1}^{100} x(i)\,\sqrt{\tfrac{5}{2}}\,\frac{3j^2 - 1}{2} \qquad
\phi_{10}(x) = \frac{1}{100}\sum_{i=1}^{100} x(i)\,\sqrt{\tfrac{7}{2}}\,\frac{5j^3 - 3j}{2}$$

Nonlinear combinations of basis functions:

$$\phi_{11}(x) = \phi_2(x)\phi_3(x) \quad \phi_{12}(x) = \phi_2(x)\phi_4(x) \quad \phi_{13}(x) = \phi_2(x)\phi_7(x) \quad \phi_{14}(x) = \phi_2(x)\phi_8(x) \quad \phi_{15}(x) = \phi_2(x)\phi_9(x) \quad \phi_{16}(x) = \phi_2(x)\phi_{10}(x)$$

In order to improve our results, we added a large number of additional basis functions to these hand-picked 16. PSTD will compress these features for us, so we can use as many additional basis functions as we would like. First we defined 4 additional basis functions consisting of the inner products of the 100-day sample path with the 5th and 6th Legendre polynomials, and we added the corresponding nonlinear combinations of basis functions:

$$\phi_{17}(x) = \frac{1}{100}\sum_{i=1}^{100} x(i)\,\sqrt{\tfrac{9}{2}}\,\frac{35j^4 - 30j^2 + 3}{8} \qquad
\phi_{18}(x) = \frac{1}{100}\sum_{i=1}^{100} x(i)\,\sqrt{\tfrac{11}{2}}\,\frac{63j^5 - 70j^3 + 15j}{8}$$

$$\phi_{19}(x) = \phi_2(x)\phi_{17}(x) \qquad \phi_{20}(x) = \phi_2(x)\phi_{18}(x)$$

Finally we added the entire sample path and the squared sample path: $\phi_{21:120} = x_{1:100}$ and $\phi_{121:220} = x^2_{1:100}$.
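The Legendre-polynomial features $\phi_7$ through $\phi_{10}$ can be computed as below (a sketch; the function name is ours, and the normalization constants follow the formulas above):

```python
import numpy as np

def legendre_features(x):
    """Inner products of a 100-day path with the first four (normalized)
    Legendre polynomials, i.e. phi_7 .. phi_10 above, with j = i/50 - 1."""
    i = np.arange(1, 101)
    j = i / 50.0 - 1.0
    P = [np.full_like(j, 1.0 / np.sqrt(2.0)),
         np.sqrt(3.0 / 2.0) * j,
         np.sqrt(5.0 / 2.0) * (3 * j**2 - 1) / 2,
         np.sqrt(7.0 / 2.0) * (5 * j**3 - 3 * j) / 2]
    return np.array([np.mean(x * Pk) for Pk in P])

feats = legendre_features(np.ones(100))
# For a flat path x = 1: phi_7 = 1/sqrt(2), and the higher-degree features
# nearly vanish because the Legendre polynomials are (almost) orthogonal
# to the constant function on this grid.
assert np.isclose(feats[0], 1 / np.sqrt(2))
assert abs(feats[1]) < 0.05
```

The same pattern extends directly to $\phi_{17}$ and $\phi_{18}$ by appending the degree-4 and degree-5 polynomials.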
Achieving optimal transmission throughput in multi-hop wireless networks is a fundamental but hard problem. The situation is aggravated when nodes are mobile, and multi-rate systems make the analysis of throughput more complicated. In the mobile scenario, links may break or be created as nodes move within communication range. 'Route discovery', which is to find the optimal route and transmission schedule, is therefore an important issue. Route discovery entails some cost, so one would not like to initiate discovery too often. On the other hand, not discovering reasonably often entails the risk of being stuck with a suboptimal route and/or schedule, which hurts end-to-end throughput. The formulation of the routing decision problem in a one-dimensional mobile ad hoc network as a Markov decision process is discussed in [1]. A heuristic based on a threshold policy is discussed in the same paper, without giving a way to find the threshold. In this paper, we suggest a rule for setting the threshold, given the parameters of the system. We also point out that our results remain valid in a slightly different mobility model; this model is a first step towards an 'open' network in which existing relay nodes can leave and/or new relay nodes can join the network.
Gupta and Kumar studied the throughput of static wireless networks @cite_8 . They considered a protocol model and a physical model to study the impact of interfering transmissions on SNR. They observed that in a network comprising @math identical nodes, each communicating with another node, the throughput per node under the protocol model is of order @math if the placement of nodes is random. The throughput per node becomes @math if the node placement and communication patterns are optimal. The latter result is also valid for the physical model, as explained intuitively by @cite_9 . While the overall one-hop throughput of the network grows as @math , the average path length grows as @math , which makes the throughput per node vary as @math .
Threshold Policy for Route Discovery Initiation in Mobile Ad hoc Networks
Mobile multi-hop ad hoc networks play a crucial role in setting up a network on the fly, where deployment of network infrastructure is not practical in times of utmost urgency due to both time and economic constraints. Industrial instrumentation, personal communication, inter-vehicular networking, law enforcement operations, battlefield communications, disaster recovery situations, and mobile Internet access are a few examples. In a mobile ad hoc network (MANET), communication between nodes situated beyond their radio range is also possible. For this type of communication, the nodes have to take help from other relay nodes with overlapping radio coverage. Here communication is possible by knowing a path or route between the source and destination node, together with the transmission schedule for that route. Finding the optimal route and transmission schedule shall be referred to as 'route discovery'. In the static scenario, route discovery needs to be initiated only at the beginning. In the mobile scenario, links may break or be created as nodes move within communication range. We are motivated by the question: when should route and schedule discovery be initiated in a MANET? A discovery entails some cost; so one would not like to initiate discovery too often. On the other hand, not discovering reasonably often entails the risk of being stuck with a suboptimal route and/or schedule, which hurts end-to-end throughput. Our interest in this question stems from the need to assess how policies based on simple heuristics perform in comparison with policies that are optimal in some precisely defined sense. If it turns out that the simple heuristic is far from optimal, then the search for improved heuristics must continue. Else, it is reassuring to know that the heuristic performs nearly as well as it can. In our earlier work [1], we had studied this problem in the framework of Markov decision theory.
A simple one-dimensional network was considered, and a simple mobility model led to a controlled Markov chain; our interest was in obtaining the best route and schedule discovery policy. The resulting problem was solved numerically, using the Value Iteration Algorithm (VIA). However, as pointed out in the earlier work, the VIA approach led to a huge computational burden. Computing the optimal policy required knowing the present 'state' (often impossible in practice), as well as significant computation. Therefore, a simple and suboptimal policy was considered: the threshold policy. Whenever the end-to-end throughput dropped below a threshold, route and schedule discovery was initiated. While the idea of a threshold policy is straightforward, the issue was the threshold value to use. In the earlier work, the best threshold was obtained by an exhaustive search within a finite set of possible thresholds: the one resulting in the best performance was found in this way. In this paper, we address this specific question: can we arrive at a simple rule for setting the threshold, given the parameters of the system (number of relay nodes, number of positions, cost parameter, mobility parameters)? Even though the literature on MANETs is extensive, the issue of capturing the cost of route discovery in a formal framework does not seem to have received much attention. In this paper our contributions are: (i) a rule that yields the threshold value for use in the threshold policy for deciding whether or not to do route and schedule discovery, the threshold value being computed using the configuration information and ideal scheduling; (ii) a study of the scheduling and end-to-end throughput characteristics, which provides many insights into linear ad hoc networks; (iii) pointing out that our results remain valid in a slightly different mobility model, which is a first step towards an 'open' network in which existing relay nodes can leave and/or new relay nodes can join the network.
The boundary condition is relaxed and modeled as a wrap-around condition, making an open network. The mobility here need not be symmetric. It is shown that the characteristics of the network do not change. Our results indicate that the performance of the proposed rule is no worse than 7% below the best possible threshold policy, and no worse than 15% below the optimal, when the route discovery cost is low. (arXiv:1009.4672v1 [cs.NI] 23 Sep 2010)

In the following Section II, the related work for this paper is discussed. In Section III, the system model is described in detail. Section IV discusses our previous work on finding the long-run average throughput, studied in the framework of Markov decision theory. Section V derives the threshold value for the simple threshold-based heuristic and compares its performance with that of the throughput-optimal policy. In Section VI, we discuss the relaxed boundary conditions that make an open network; this network may cater to a scenario which can be seen as a small area of concern in a large linear system. We conclude in Section VII.

III. SYSTEM MODEL

We consider a network, the same as in our earlier work [1], shown in Fig. 1. Here the network is $G = (V, L)$, where $V$ is the set of vertices and $L$ is the set of links. While the source and destination nodes are assumed to be fixed at the two ends of the linear grid, relay nodes are movable and can occupy any position in between the source and destination nodes. The number of possible positions which the relay nodes can occupy is K. We consider a bounded area, i.e., the number of relay nodes is N, which is assumed to remain the same all the time. In Fig. 1, the values of K and N are 4 and 3 respectively. The number of relay nodes N can be more or less than the number of grid positions K; we will consider different ranges of node density, expressed as the ratio of N to K. We consider a discrete-time slotted system.
Nodes can change grid position to left or right with a probability of p l and p r respectively only at the beginning of a time slot. A node may stay at the same position with probability p t = 1 − (p l + p r ) at the beginning of the time slot but the node will not change the positions during whole period of time slot. However, if the node finds a boundary at the beginning of the time slot, it will wait at the boundary. We model the mobility of the network by specifying the duration of each time slot and the probability with which a node can move to the left or the right. Note that short (long) time slots correspond to a network with high (low) mobility. All nodes transmit and receive over a common channel. Transmission range is assumed to be m units lengths of the linear grid. The link capacity or data rate will be 1 (normalized) if nodes are at neighboring positions, data rate will reduced to a smaller rate (1/2) if there is a vacant position in between and data rate will be 0 if there are at least two vacant places at two consecutive places. We assume interference range is more than the transmission range (m) but less than m + 1 units of lengths i.e., if node transmits, it will interfere with any other nodes trying to transmit during the same time slot if the separation of nodes is less than interference range (here m + 1). Again as any communication between two nodes requires exchange of packets both by the transmitter and by the receiver for setting up the link, both the nodes of a link should not be within the interference range of another communication link at the same time. Among the links of communication between nodes if at least one of the nodes of each such links is in the interference range of others, only one link can be active at time. Hence the links whose both ends (the nodes) are away than both the nodes of other links, more than the interference range, can be active simultaneously. 
Hence for end-to-end communication through these links, the links are to be scheduled, i.e., when and what fraction of time they will be active satisfying the criteria discussed just now. We model the cost associated with route discovery as follows. In every slot in which route discovery is initiated, we assume that no data can be transmitted for a fraction φ of the slot. Suppose that route and schedule discovery takes no more thań t units, wheret is less than a slot duration. Then the ratio of t to the slot duration is φ. Clearly, as φ moves closer to 1, the mobility level and the cost of route and schedule discovery increase. Correspondingly, as φ becomes smaller, the network is more and more static and the cost of route and schedule discovery can be amortized by sending more data over the slot. In the limit as φ goes to zero, we have a static network where route and schedule discovery is done at the beginning, and data can be transferred forever. This reduces to the model considered in, for example, [13]. Just as φ is treated as a cost, the number of bits transferred over the slot duration behaves like a reward. Suppose that an end-toend transmission rate R can be supported over the duration of the slot for the chosen route and transmission schedule. Then, assuming that the slot duration is defined as the unit of time, the net reward over the slot is (1 − φ)R if route discovery is done. When route discovery is not done, the net reward is simply R. Clearly, the net reward corresponds to the number of data bits transmitted from the source to the destination during the slot. A route is defined as a sequence of grid-positions (0, i 1 , i 2 , . . . , i l , . . . , (K + 1)), where position 0 and (K + 1) indicate the positions of S and D respectively, and i 1 , i 2 , . . ., i l indicate positions on the line, with i 1 ≤ i 2 ≤ . . . ≤ i l , and i 1 ≤ m, (i 2 − i 1 ) ≤ m, (i 3 − i 2 ) ≤ m, . . ., (i l − i l−1 ) ≤ m, ((K + 1) − i l ) ≤ m in this one-dimensional network. 
Given a route, it is possible that there is no node at a particular position. We still consider this as a valid route; however, the rate that can be supported on such a route is clearly zero. Similarly, it is also possible that there are multiple nodes at a particular position. In this case, any of the nodes at that position can act as the relay node. Because our performance criterion depends on the transmission rate that corresponds to a route, the individual node identities do not matter. IV. RECAPITULATION OF EARLIER WORK The problem of route and schedule discovery was solved in the framework of Markov Decision Process (MDP) [14] in our earlier work [1]. Five elements of MDP namely (a) the State Space, (b) the Action Space, (c) the Conditional Transition Probability given the current state and action, (d) the One-step Expected Cost and (e) the Total Cost Criterion over a finite or infinite time horizon. The details of each elements can be found in the paper [1]. Some of the results are reproduced in Fig 2. Two networks with K = 5, N = 9 and K = 6, N = 3, 9 respectively are considered for the optimal net throughput with cost parameter(φ). It is known that computing the optimal policy using VIA is a significant computational burden. Here we discuss a simplified policy which is used for obtaining high net end-to-end throughput. The motivation for this policy is as follows: If the observed throughput in a slot is small, then the current route is likely to be poor. The policy is: If the observed throughput is smaller than the threshold, then perform route and schedule discovery else continue with the currently known route and schedule. This is discussed in our earlier paper [1]. Some of the results are reproduced here in Fig 3. This figure indicates that, by proper choice of threshold value, there is advantage to implement the (best) threshold policy at the very low implementation cost. V. 
THRESHOLD VALUE In this paper our objective is to find the threshold value which is close to the best threshold value that gives throughput as good as the best threshold value. Also, we would like to compute this threshold value in a simple manner. Given a configuration, and ignoring any discovery cost, we can ask: What is the best possible throughput in this configuration? Let this end-to-end throughput named as 'raw' throughput corresponding to a given configuration. Now allowing the configuration to vary over all possibilities, we can come up with an expected raw throughput. This is possible because we can find out the steady state probability of each configuration as given in Section V-A. E(raw throughput) = Steady state prob. of node distribution for the configuration ×raw throughput for the best route. No discovery cost means φ=0. Finally, we incorporate the role of discovery cost (φ), by setting the threshold as follows. At higher φ, the tendency to have less route discovery, with other conditions being same. Which implies that as φ increases, the threshold value will decrease. In other word, threshold value is decreasing function of φ. We propose the following: Threshold Value = (1 − φ x ) * E(raw throughput), x > 0 (2) A. Steady State Probability of Specific Configuration (States based on Positions of users ) It can be easily shown that the positions of single node when movement is random walk with boundary behavior 'pause and restart (stuck-at-boundary)' in one dimension, is uniformly distributed at all movement positions. This is because of doubly stochastic nature of the state transition probability matrix. The probability that the node is at any of the moving positions = 1 K and when N such nodes are there, steady state probability of any ordered configuration of N nodes =( 1 K ) N . 
As in our case the first part of the state which is based only on movement positions counted as 'how many nodes are at one movement positions' with K such movement positions, we have to find out how many numbers of ordered pair of nodes those make one single state as discussed. This problem reduces as follows: Let there be [n 1 , n 2 , ..., n K ] nodes at the K grid positions respectively. Hence K i=1 n i = N . It can be shown that for the k th position, the number of possible options is A k = (N − k−1 i=1 ni) C n k . Hence the total numbers of ordered pairs of nodes that make one single state is . For each route, the best throughput can be computed by using optimal scheduling as discussed in Section V-C. The null route is added here just to address the situation when the system has no route at the beginning. = K i=1 A i = K−1 i=1 A i (3) = N ! n 1 !(N − n 1 )! × (N − n 1 )! n 2 !(N − n 1 − n 2 )! × ...(4)= N ! n 1 !...n K !(5) C. Optimal Scheduling This part of the our derivation is similar to the derivation of the problem when the network is static as in paper [13]. Any communication between two nodes causes contention with any other node within the interference range of both the nodes if both are active simultaneously. This problem is approached using 'Conflict graph' whose vertices correspond to the links of the transmission graph (G) of the network. In this conflict graph an edge from a vertex to itself is not drawn. If the edge between two nodes exist then the corresponding links in transmission graph interfere with each other and hence can not be active simultaneously. Links belonging to an independent set in the conflict graph can be scheduled simultaneously. Using maximal independent set the optimal scheduling problem can be expressed as linear program. And solving the linear program, we can get link schedule i.e., the fraction of time the links which will be active. D. 
Results The expected raw throughput computed using the above method is taken as a threshold value for the threshold policy and simulations are done for different system parameters. From simulations it is observed that x = 2 gives a good approximation for most of the cases. The related graphs are given in Fig 4, 5. It can observed from the graphs that most of the configurations, when φ ≤ 0.5, the performance will not be worse than 7% of the best possible threshold value and will not be worse than 15% of the average throughput as obtained by using the optimal policy. As φ > 0.5, it is observed that threshold policy follows the route break policy. VI. OPEN ENDED BOUNDARY The boundary condition explained earlier is close ended system i.e., Stuck-at-boundary model is necessarily make the number of relay nodes in the area of concerned constant as nodes are neither allowed to leave or join the existing network. But the model is meaningful only when mobility is symmetrical i.e., p l = p r ; otherwise, eventually, all the nodes move to the leftmost or rightmost position with probability 1. To allow the unsymmetrical Relative throughput with respect to optimal policy vs φ mobility model(p l = p r ), another boundary model namely wraparound model is considered. To make the relay node constant, an assumption, though little bit artificial, is made: if a node move out of(into) the area at one end then another node is move into(out of) the area at the other end. A. Wrap Around Boundary Conditions (Open Ended Boundary). Claim 1: The fraction of time a single node is at any of the moving positions for 1-dimensional random walk with wrap around boundary conditions is uniformly distributed even when moving probabilities toward left or right are unequal. Proof: In this model, when ever a boundary is found, instead of jumping out of the bounded area, node will be transfered to the other end for that time slot. 
The state transition probability matrix (P) = As the state transition matrix is doubly stochastic matrix, even when mobility is non-uniform, the steady state distribution (π)=[1/K, ..., 1/K, ..., 1/K]. So the earlier analysis applies. VII. CONCLUSIONS Threshold policy is a practical method for being both a simplified, less computational intensive approach, and the approach which can be implemented by measuring the throughput instead of knowing the states. The policy that measures the throughput systematically along with randomly measuring it relieved from the requirement to know the mobility condition (time slot depends upon mobility) by measuring the change of throughput also. The expected throughput is analytically derived in this paper and it is observed that (1 − φ 2 ) is a good approximation to the multiplying factor for most of the cases when φ is considered. The analytical method for finding the steady state probability of different configurations based on number of relay nodes at different nodes is obtained. It is observed from the simulations that for most of the configurations, by considering the stated rule, when φ ≤ 0.5 the performance of the proposed rule is no worse than 7% of that of the best threshold policy, and no worse than 15% of the optimal. As φ > 0.5, it is observed that threshold policy follows the route break policy. The the boundary condition is modified to satisfy open network where nodes can leave/join the network. This is modeled as wraparound boundary condition. Here assumption taken is: whenever a node leaves (joins) another node joins (leaves) simultaneously at the other end of the boundary so as to make the number of nodes in the network same. It is analyzed that the behavior in this case is also same as Struck-at-boundary condition. 'By withdrawing the previous assumption, i.e., the number of relay nodes varies randomly, now the analysis can be modeled as birthdeath process', is the future work we are continuing now.
3,414
1009.4672
1667827919
Achieving optimal transmission throughput in a multi-hop wireless network is a fundamental but hard problem. The situation is aggravated when nodes are mobile, and multi-rate systems make the analysis of throughput more complicated. In a mobile scenario, links may break or be created as nodes move in and out of communication range. 'Route discovery', which finds the optimal route and transmission schedule, is therefore an important issue. Route discovery entails some cost, so one would not like to initiate it too often. On the other hand, not discovering reasonably often entails the risk of being stuck with a suboptimal route and/or schedule, which hurts end-to-end throughput. The formulation of this routing decision problem for a one-dimensional mobile ad hoc network as a Markov decision process was discussed in [1]. A heuristic based on a threshold policy is discussed in the same paper, but without a way to find the threshold. In this paper, we suggest a rule for setting the threshold, given the parameters of the system. We also point out that our results remain valid in a slightly different mobility model; this model is a first step towards an 'open' network in which existing relay nodes can leave and/or new relay nodes can join the network.
A linear programming approach was used to characterize networks with interference in @cite_6 , where a conflict graph models the constraints on simultaneous transmissions. In @cite_2 , approximation algorithms are proposed that solve both the end-to-end flow routing problem and the link scheduling problem near-optimally. In @cite_10 , it is shown that the problem of finding an optimal schedule under the concurrency constraints, so as to maximize network throughput, is NP-hard.
{ "abstract": [ "It is shown that the decision problem regarding the membership of a point in the capacity region of a packet radio network is nondeterministic polynomial time hard (NP hard). The capacity region is the set of all feasible origin-to-destination message rates where feasibility is defined as the existence of any set of rules for moving the data through the network so that the desired rates are satisfied.", "In this paper, we address the following question: given a specific placement of wireless nodes in physical space and a specific traffic workload, what is the maximum throughput that can be supported by the resulting network? Unlike previous work that has focused on computing asymptotic performance bounds under assumptions of homogeneity or randomness in the network topology and or workload, we work with any given network and workload specified as inputs.A key issue impacting performance is wireless interference between neighboring nodes. We model such interference using a conflict graph, and present methods for computing upper and lower bounds on the optimal throughput for the given network and workload. To compute these bounds, we assume that packet transmissions at the individual nodes can be finely controlled and carefully scheduled by an omniscient and omnipotent central entity, which is unrealistic. Nevertheless, using ns-2 simulations, we show that the routes derived from our analysis often yield noticeably better throughput than the default shortest path routes even in the presence of uncoordinated packet transmissions and MAC contention. This suggests that there is opportunity for achieving throughput gains by employing an interference-aware routing protocol.", "" ], "cite_N": [ "@cite_10", "@cite_6", "@cite_2" ], "mid": [ "2038603123", "2435603672", "" ] }
Threshold Policy for Route Discovery Initiation in Mobile Ad hoc Networks
Mobile multi-hop ad hoc networks play a crucial role in setting up a network on the fly where deploying infrastructure is impractical in times of utmost urgency, due to both time and economic constraints. Industrial instrumentation, personal communication, inter-vehicular networking, law enforcement operations, battlefield communications, disaster recovery, and mobile Internet access are a few examples. In a mobile ad hoc network (MANET), communication between nodes situated beyond their radio range is also possible: the nodes take the help of relay nodes whose radio coverage overlaps. Such communication requires knowing a path, or route, between the source and destination nodes, as well as a transmission schedule for that route. Finding the optimal route and transmission schedule shall be referred to as 'route discovery'. In a static scenario, route discovery need be initiated only at the beginning. In a mobile scenario, links may break or be created as nodes move in and out of communication range. We are motivated by the question: when should route and schedule discovery be initiated in a MANET? A discovery entails some cost, so one would not like to initiate it too often. On the other hand, not discovering reasonably often entails the risk of being stuck with a suboptimal route and/or schedule, which hurts end-to-end throughput. Our interest in this question stems from the need to assess how policies based on simple heuristics perform in comparison with policies that are optimal in some precisely defined sense. If it turns out that the simple heuristic is far from optimal, then the search for improved heuristics must continue. Else, it is reassuring to know that the heuristic performs nearly as well as it can. In our earlier work [1], we studied this problem in the framework of Markov decision theory. 
A simple one-dimensional network was considered, and a simple mobility model led to a controlled Markov chain; our interest was in obtaining the best route and schedule discovery policy. The resulting problem was solved numerically, using the Value Iteration Algorithm (VIA). However, as pointed out in the earlier work, the VIA approach led to a huge computational burden: computing the optimal policy required knowing the present 'state' (often impossible in practice), as well as significant computation. Therefore, a simple and suboptimal policy was considered: the threshold policy. Whenever the end-to-end throughput dropped below a threshold, route and schedule discovery was initiated. While the idea of a threshold policy is straightforward, the issue was which threshold value to use. In the earlier work, the best threshold was obtained by an exhaustive search within a finite set of possible thresholds: the one resulting in the best performance was found in this way. In this paper, we address this specific question: can we arrive at a simple rule for setting the threshold, given the parameters of the system (number of relay nodes, number of positions, cost parameter, mobility parameters)? Even though the literature on MANETs is extensive, the issue of capturing the cost of route discovery in a formal framework does not seem to have received much attention. Our contributions in this paper are: (i) a rule that yields the threshold value for use in the threshold policy when deciding whether to perform route and schedule discovery, the value being computed from the configuration information under ideal scheduling; (ii) a study of the scheduling and end-to-end throughput characteristics, which provides much insight into a linear ad hoc network; (iii) the observation that our results remain valid in a slightly different mobility model, which is a first step towards an 'open' network in which existing relay nodes can leave and/or new relay nodes can join the network. 
The boundary condition is relaxed and modeled as a wrap-around condition to obtain an open-ended network. The mobility here need not be symmetric. It is shown that the characteristics of the network do not change. Our results indicate that, when the route discovery cost is low, the performance of the proposed rule is no worse than 7% below that of the best possible threshold policy, and no worse than 15% below the optimal. In Section II, related work is discussed. In Section III, the system model is described in detail. Section IV discusses our previous work on finding the long-run average throughput, studied in the framework of Markov decision theory. Section V derives the threshold value for the simple threshold-based heuristic and compares its performance with that of the throughput-optimal policy. In Section VI, the boundary condition is relaxed to make an open network; this caters to a scenario that can be seen as a small area of concern within a large linear system. We conclude in Section VII. III. SYSTEM MODEL We consider the same network as in our earlier work [1], shown in Fig. 1: a network G = (V, L), where V is the set of vertices and L is the set of links. While the source and destination nodes are assumed to be fixed at the two ends of a linear grid, the relay nodes are mobile and can occupy any position between the source and destination nodes. The number of positions the relay nodes can occupy is K. We consider a bounded area, i.e., the number of relay nodes is N, assumed constant over time. In Fig. 1, K = 4 and N = 3. The number of relay nodes N can be more or less than the number of grid positions K; we consider different node densities, expressed as the ratio of N to K. We consider a discrete-time slotted system. 
A node can move one grid position to the left or right with probability p_l or p_r, respectively, only at the beginning of a time slot; it stays at the same position with probability p_t = 1 − (p_l + p_r). A node does not change position during the time slot. If a node finds itself at a boundary at the beginning of the time slot, it waits at the boundary. We model the mobility of the network by specifying the duration of each time slot and the probabilities with which a node can move to the left or the right; note that short (long) time slots correspond to a network with high (low) mobility. All nodes transmit and receive over a common channel. The transmission range is assumed to be m units of length on the linear grid. The link capacity, or data rate, is 1 (normalized) if the nodes are at neighboring positions, is reduced to a smaller rate (1/2) if there is one vacant position in between, and is 0 if there are at least two consecutive vacant positions in between. We assume the interference range is more than the transmission range m but less than m + 1 units of length, i.e., if a node transmits, it interferes with any other node trying to transmit during the same time slot whose separation from it is less than the interference range (here m + 1). Since any communication between two nodes requires an exchange of packets by both the transmitter and the receiver to set up the link, neither node of a link should be within the interference range of another active communication link at the same time. If at least one endpoint of each of two links is within the interference range of the other, only one of the links can be active at a time; hence, links both of whose endpoints are farther than the interference range from both endpoints of every other active link can be active simultaneously. 
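The per-slot mobility rule and the gap-dependent link rates above can be sketched as follows. This is a minimal illustration, not the authors' code: the function names, the use of Python's random module, and the clamping used for the stuck-at-boundary behavior are our own.

```python
import random

def move(pos, K, p_l, p_r):
    """One mobility step for a relay node on grid positions 1..K.
    The node moves left with probability p_l, right with p_r, and stays
    with p_t = 1 - (p_l + p_r); a node that would cross a boundary
    waits at the boundary (stuck-at-boundary behavior)."""
    r = random.random()
    if r < p_l:
        new = pos - 1
    elif r < p_l + p_r:
        new = pos + 1
    else:
        new = pos
    return min(max(new, 1), K)  # wait at the boundary instead of leaving

def link_rate(gap):
    """Normalized data rate between two consecutive relay positions:
    1 for neighboring positions, 1/2 with one vacant position in
    between, 0 with two or more vacant positions in between."""
    if gap <= 1:
        return 1.0
    if gap == 2:
        return 0.5
    return 0.0
```

A slot of the network is then one `move` per node followed by reading off the rates of the links along the current route.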
For end-to-end communication through these links, the links must therefore be scheduled, i.e., we must decide when and for what fraction of time each link is active while satisfying the criteria just discussed. We model the cost associated with route discovery as follows. In every slot in which route discovery is initiated, we assume that no data can be transmitted for a fraction φ of the slot. Suppose that route and schedule discovery takes no more than t̂ units, where t̂ is less than the slot duration; then φ is the ratio of t̂ to the slot duration. Clearly, as φ moves closer to 1, the mobility level and the cost of route and schedule discovery increase. Correspondingly, as φ becomes smaller, the network is more and more static and the cost of route and schedule discovery can be amortized by sending more data over the slot. In the limit as φ goes to zero, we have a static network where route and schedule discovery is done once at the beginning, and data can be transferred forever; this reduces to the model considered in, for example, [13]. Just as φ is treated as a cost, the number of bits transferred over the slot duration behaves like a reward. Suppose that an end-to-end transmission rate R can be supported over the duration of the slot for the chosen route and transmission schedule. Then, taking the slot duration as the unit of time, the net reward over the slot is (1 − φ)R if route discovery is done; when route discovery is not done, the net reward is simply R. The net reward thus corresponds to the number of data bits transmitted from the source to the destination during the slot. A route is defined as a sequence of grid positions (0, i_1, i_2, ..., i_l, K + 1), where positions 0 and K + 1 are the positions of S and D respectively, and i_1, ..., i_l are positions on the line with i_1 ≤ i_2 ≤ ... ≤ i_l and i_1 ≤ m, (i_2 − i_1) ≤ m, (i_3 − i_2) ≤ m, ..., (i_l − i_{l−1}) ≤ m, ((K + 1) − i_l) ≤ m in this one-dimensional network. 
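The route-validity constraints and the per-slot reward just defined can be expressed directly; a small sketch under the stated model, with the function names our own:

```python
def is_valid_route(route, K, m):
    """A route (i_1, ..., i_l) between S at 0 and D at K+1 is valid if
    the positions are non-decreasing and every consecutive pair
    (including the hops from S and to D) is within transmission range m."""
    full = [0] + list(route) + [K + 1]
    for a, b in zip(full, full[1:]):
        if b < a or b - a > m:
            return False
    return True

def net_reward(R, phi, discovered):
    """Per-slot net reward: (1 - phi) * R if route and schedule
    discovery was done in this slot, otherwise the full rate R."""
    return (1 - phi) * R if discovered else R
```

For example, with K = 4 and m = 2, the route (2, 4) is valid, while (3,) is not, since the first hop 0 → 3 exceeds the range.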
Given a route, it is possible that there is no node at a particular position. We still consider this a valid route; however, the rate that can be supported on such a route is clearly zero. Similarly, it is also possible that there are multiple nodes at a particular position; in this case, any of the nodes at that position can act as the relay node. Because our performance criterion depends only on the transmission rate that corresponds to a route, the individual node identities do not matter. IV. RECAPITULATION OF EARLIER WORK The problem of route and schedule discovery was solved in the framework of a Markov Decision Process (MDP) [14] in our earlier work [1]. An MDP is specified by five elements, namely (a) the state space, (b) the action space, (c) the conditional transition probability given the current state and action, (d) the one-step expected cost, and (e) the total cost criterion over a finite or infinite time horizon; details of each element can be found in [1]. Some of the results are reproduced in Fig. 2, where two networks, with K = 5, N = 9 and K = 6, N = 3, 9 respectively, are considered and the optimal net throughput is shown against the cost parameter φ. As noted, computing the optimal policy using VIA is a significant computational burden. Here we discuss a simplified policy used for obtaining high net end-to-end throughput. The motivation for this policy is as follows: if the observed throughput in a slot is small, then the current route is likely to be poor. The policy is: if the observed throughput is smaller than the threshold, perform route and schedule discovery; else continue with the currently known route and schedule. This is discussed in our earlier paper [1], and some of the results are reproduced here in Fig. 3. This figure indicates that, with a proper choice of threshold value, there is an advantage in implementing the (best) threshold policy at very low implementation cost. V. 
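The threshold policy itself is a one-line decision rule; spelling it out makes explicit that it needs only the observed throughput, not the network state (the function name is our own):

```python
def should_discover(observed_throughput, threshold):
    """Threshold policy: initiate route and schedule discovery in this
    slot if and only if the observed end-to-end throughput has dropped
    below the threshold; otherwise keep the current route and schedule."""
    return observed_throughput < threshold
```

This is what makes the policy implementable in practice: `observed_throughput` is measurable at the destination, whereas the optimal VIA policy would require knowing the full node configuration.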
THRESHOLD VALUE Our objective in this paper is to find, in a simple manner, a threshold value close to the best threshold value, i.e., one that yields nearly as good a throughput. Given a configuration, and ignoring any discovery cost, we can ask: what is the best possible throughput in this configuration? We call this end-to-end throughput the 'raw' throughput of the configuration. Allowing the configuration to vary over all possibilities, we can compute an expected raw throughput, since the steady-state probability of each configuration can be found as described in Section V-A: E(raw throughput) = Σ over configurations of (steady-state probability of the configuration) × (raw throughput of its best route). No discovery cost means φ = 0. Finally, we incorporate the role of the discovery cost φ by setting the threshold as follows. At higher φ, the tendency is to perform route discovery less often, other conditions being the same; this implies that as φ increases, the threshold value should decrease, i.e., the threshold is a decreasing function of φ. We propose the following: Threshold Value = (1 − φ^x) × E(raw throughput), x > 0. (2) A. Steady State Probability of a Specific Configuration (States Based on Positions of Nodes) It can easily be shown that the position of a single node performing a one-dimensional random walk with 'pause and restart (stuck-at-boundary)' boundary behavior is uniformly distributed over all movement positions; this is because of the doubly stochastic nature of the state transition probability matrix. The probability that the node is at any given movement position is 1/K, and with N such nodes, the steady-state probability of any ordered configuration of the N nodes is (1/K)^N. 
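The proposed rule (2) is straightforward to compute once the expected raw throughput is known; a minimal sketch, with the function name our own and x = 2 as the default found to work well later in the paper:

```python
def threshold_value(expected_raw_throughput, phi, x=2.0):
    """Proposed rule (Eq. 2): threshold = (1 - phi^x) * E[raw throughput],
    with x > 0. The factor (1 - phi^x) makes the threshold a decreasing
    function of the discovery cost phi, so discovery is triggered less
    often when it is expensive."""
    assert x > 0
    return (1 - phi ** x) * expected_raw_throughput
```

At φ = 0 the threshold equals the expected raw throughput itself, and it shrinks monotonically toward 0 as φ approaches 1.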
Since in our case the first part of the state is based only on movement positions, counted as 'how many nodes are at each movement position' over the K positions, we must find how many ordered node assignments make up one single such state. The problem reduces to the following: let there be [n_1, n_2, ..., n_K] nodes at the K grid positions respectively, so that Σ_{i=1}^{K} n_i = N. It can be shown that for the k-th position, the number of possible options is A_k = C(N − Σ_{i=1}^{k−1} n_i, n_k). Hence the total number of ordered node assignments that make up one single state is ∏_{i=1}^{K} A_i = ∏_{i=1}^{K−1} A_i (3) = [N! / (n_1! (N − n_1)!)] × [(N − n_1)! / (n_2! (N − n_1 − n_2)!)] × ... (4) = N! / (n_1! ⋯ n_K!). (5) For each route, the best throughput can be computed by using optimal scheduling as discussed in Section V-C. The null route is added just to address the situation when the system has no route at the beginning. C. Optimal Scheduling This part of our derivation is similar to the derivation for the static-network problem in [13]. Any communication between two nodes causes contention with any other node within the interference range of either endpoint if both are active simultaneously. The problem is approached using a 'conflict graph' whose vertices correspond to the links of the transmission graph G of the network; edges from a vertex to itself are not drawn. If an edge between two vertices exists, then the corresponding links in the transmission graph interfere with each other and hence cannot be active simultaneously. Links belonging to an independent set in the conflict graph can be scheduled simultaneously. Using maximal independent sets, the optimal scheduling problem can be expressed as a linear program; solving this linear program yields the link schedule, i.e., the fraction of time each link is active. D. 
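The telescoping product (3)–(5) can be checked numerically: multiplying the binomials A_k reproduces the multinomial coefficient N!/(n_1! ⋯ n_K!). A small sketch, with the function name our own:

```python
from math import comb, factorial

def state_multiplicity(counts):
    """Number of ordered assignments of N labeled nodes that yield the
    occupancy (n_1, ..., n_K): built as the telescoping product of
    binomials A_k = C(N - n_1 - ... - n_{k-1}, n_k), which collapses
    to the multinomial coefficient N! / (n_1! ... n_K!)."""
    N = sum(counts)
    total, remaining = 1, N
    for n in counts:
        total *= comb(remaining, n)  # A_k
        remaining -= n
    # sanity check against the closed form (5)
    denom = 1
    for n in counts:
        denom *= factorial(n)
    assert total == factorial(N) // denom
    return total
```

The steady-state probability of the occupancy state is then this multiplicity times (1/K)^N, since each ordered configuration has probability (1/K)^N.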
Results The expected raw throughput computed using the above method is taken as a threshold value for the threshold policy and simulations are done for different system parameters. From simulations it is observed that x = 2 gives a good approximation for most of the cases. The related graphs are given in Fig 4, 5. It can observed from the graphs that most of the configurations, when φ ≤ 0.5, the performance will not be worse than 7% of the best possible threshold value and will not be worse than 15% of the average throughput as obtained by using the optimal policy. As φ > 0.5, it is observed that threshold policy follows the route break policy. VI. OPEN ENDED BOUNDARY The boundary condition explained earlier is close ended system i.e., Stuck-at-boundary model is necessarily make the number of relay nodes in the area of concerned constant as nodes are neither allowed to leave or join the existing network. But the model is meaningful only when mobility is symmetrical i.e., p l = p r ; otherwise, eventually, all the nodes move to the leftmost or rightmost position with probability 1. To allow the unsymmetrical Relative throughput with respect to optimal policy vs φ mobility model(p l = p r ), another boundary model namely wraparound model is considered. To make the relay node constant, an assumption, though little bit artificial, is made: if a node move out of(into) the area at one end then another node is move into(out of) the area at the other end. A. Wrap Around Boundary Conditions (Open Ended Boundary). Claim 1: The fraction of time a single node is at any of the moving positions for 1-dimensional random walk with wrap around boundary conditions is uniformly distributed even when moving probabilities toward left or right are unequal. Proof: In this model, when ever a boundary is found, instead of jumping out of the bounded area, node will be transfered to the other end for that time slot. 
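Claim 1 can be verified numerically. A minimal sketch, assuming the walk moves left with probability p_l, right with p_r, and stays with p_t = 1 − p_l − p_r, with wrap-around at the boundaries:

```python
def wraparound_matrix(K, p_l, p_r):
    # Circulant transition matrix for the 1-D random walk with wrap-around:
    # from position i the node moves left with p_l, right with p_r, and
    # stays with p_t = 1 - p_l - p_r; the boundaries wrap to the other end.
    p_t = 1 - p_l - p_r
    P = [[0.0] * K for _ in range(K)]
    for i in range(K):
        P[i][(i - 1) % K] += p_l
        P[i][i] += p_t
        P[i][(i + 1) % K] += p_r
    return P

def stationary(P, steps=2000):
    # Power iteration from an arbitrary start; it converges to the
    # stationary distribution, which is uniform because P is doubly
    # stochastic (each column, like each row, sums to 1).
    K = len(P)
    pi = [1.0] + [0.0] * (K - 1)
    for _ in range(steps):
        pi = [sum(pi[i] * P[i][j] for i in range(K)) for j in range(K)]
    return pi
```

Running this with unequal p_l and p_r (e.g. p_l = 0.4, p_r = 0.25) still yields the uniform distribution [1/K, ..., 1/K], as claimed.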
The state transition probability matrix P is a K × K circulant matrix in which each row has p_l in the column for the position to the left, p_t = 1 − (p_l + p_r) on the diagonal, and p_r in the column for the position to the right, with wrap-around at the boundaries. As this matrix is doubly stochastic, even when mobility is non-uniform, the steady-state distribution is π = [1/K, ..., 1/K]. So the earlier analysis applies. VII. CONCLUSIONS The threshold policy is a practical method: it is a simplified, less computationally intensive approach, and one that can be implemented by measuring the throughput instead of knowing the state. A policy that measures the throughput systematically, along with measuring it at random times, is relieved of the requirement to know the mobility conditions (the time slot depends on mobility), since it can also track changes in the throughput. The expected throughput is analytically derived in this paper, and it is observed that (1 − φ^2) is a good approximation of the multiplying factor in most cases once φ is taken into account. An analytical method for finding the steady-state probability of different configurations, based on the number of relay nodes at each position, is obtained. It is observed from the simulations that, for most configurations, under the stated rule with φ ≤ 0.5, the performance of the proposed rule is no worse than 7% below that of the best threshold policy, and no worse than 15% below the optimal. For φ > 0.5, it is observed that the threshold policy follows the route-break policy. The boundary condition is then modified to accommodate an open network in which nodes can leave/join; this is modeled as a wrap-around boundary condition. The assumption taken here is that whenever a node leaves (joins), another node joins (leaves) simultaneously at the other end of the boundary, so as to keep the number of nodes in the network the same. It is shown that the behavior in this case is the same as under the stuck-at-boundary condition. Withdrawing this assumption, i.e., letting the number of relay nodes vary randomly so that the analysis can be modeled as a birth-death process, is future work we are continuing now.
3,414
1009.4672
1667827919
Achieving optimal transmission throughput in a multi-hop wireless network is a fundamental but hard problem. The situation is aggravated when nodes are mobile. Further, a multi-rate system makes the analysis of throughput more complicated. In a mobile scenario, links may break or be created as nodes move within communication range. 'Route Discovery', which is to find the optimal route and transmission schedule, is an important issue. Route discovery entails some cost, so one would not like to initiate discovery too often. On the other hand, not discovering reasonably often entails the risk of being stuck with a suboptimal route and/or schedule, which hurts end-to-end throughput. The formulation of the routing decision problem in a one-dimensional mobile ad hoc network as a Markov decision process has already been discussed in [1]. A heuristic based on a threshold policy is discussed in the same paper, without giving a way to find the threshold. In this paper, we suggest a rule for setting the threshold, given the parameters of the system. We also point out that our results remain valid in a slightly different mobility model; this model is a first step towards an 'open' network in which existing relay nodes can leave and/or new relay nodes can join the network.
Many authors have discussed the route discovery process @cite_7 , @cite_3 , @cite_12 , but they neither suggest when to initiate route discovery, as their setting is the static case, nor how frequently to initiate the discovery process in the case of a mobile network. Most of them suggest initiating discovery only when a link in the existing route is disrupted, i.e., on a route break. In @cite_7 , the authors propose a modified AODV that uses the concept of a reliable distance that changes dynamically. Peng @cite_3 suggests a distributed route discovery method that uses reinforcement learning. In @cite_12 , the authors use a fuzzy controller in every node; when the route-request packet reaches its destination, the destination evaluates the performance of all those routes and ranks them in order of preference.
{ "abstract": [ "The quality of service (QoS) routing has been receiving increasingly intensive attention in the mobile ad hoc networks (MANETs) fields, but it is difficult to solve the problem for the nature of MANETs such as performance constraints and dynamic network topology. In order to increase the probability of success in finding QoS feasible paths and reduce average cost in flooding path discovery scheme of the traditional MANETs routing protocols, we proposed a heuristic and distributed route discovery method named RLGAMAN that supports QoS requirement for MANETs in this paper. This method integrates the route discovery scheme with a reinforcement learning (RL) method that only utilizes the local information for the dynamic network environment; and the route expand scheme based on genetic algorithms (GA) method to avoid the problem of stagnation route. We investigate the performance of the RLGAMAN by simulation experiment bed in NS2. Compared with the traditional method, the experiment results showed the network performance is improved obviously, and RLGAMAN is efficient and effective", "", "The Ad Hoc On-Demand Distance Vector (AODV) protocol is an on-demand protocol specialized for mobile ad hoc network. Because of node's mobility and limited transmission range, the routes created by original AODV become invalid frequently leading to larger control overhead. In this paper, we propose a new scheme to improve AODV protocol by the concept of reliable distance. The reliable distance, which is always smaller than transmission range, is depended on the node's velocity and direction information attained from Global Positioning System (GPS). By the new mechanism, the routes are more reliable. Performance comparison of optimized AODV with conventional AODV by NS-2 simulator in various conditions shows the performance improvement." ], "cite_N": [ "@cite_3", "@cite_12", "@cite_7" ], "mid": [ "2160980555", "", "2103966151" ] }
Threshold Policy for Route Discovery Initiation in Mobile Ad hoc Networks
Mobile multi-hop ad hoc networks play a crucial role in setting up a network on the fly where deploying a network infrastructure is not practical, in times of utmost urgency, due to both time and economic constraints. Industrial instrumentation, personal communication, inter-vehicular networking, law enforcement operations, battlefield communications, disaster recovery situations and mobile Internet access are a few examples. In a mobile ad hoc network (MANET), communication between nodes situated beyond their radio range is also possible. For this type of communication, the nodes have to take the help of other relay nodes whose radio coverage overlaps. Such communication is possible by knowing a path, or route, between the source and destination nodes; a transmission schedule must also be known for the route. Finding the optimal route and transmission schedule shall be referred to as 'Route Discovery'. In a static scenario, route discovery needs to be initiated only at the beginning. In a mobile scenario, links may break or be created (as nodes move within communication range). We are motivated by the question: when to initiate route and schedule discovery in a MANET? A discovery entails some cost, so one would not like to initiate discovery too often. On the other hand, not discovering reasonably often entails the risk of being stuck with a suboptimal route and/or schedule, which hurts end-to-end throughput. Our interest in this question stems from the need to assess how policies based on simple heuristics perform in comparison with policies that are optimal in some precisely defined sense. If it turns out that the simple heuristic is far from optimal, then the search for improved heuristics must continue; else, it is reassuring to know that the heuristic performs nearly as well as it can. In our earlier work [1], we had studied this problem in the framework of Markov decision theory.
A simple one-dimensional network was considered, and a simple mobility model led to a controlled Markov chain; our interest was in obtaining the best route and schedule discovery policy. The resulting problem was solved numerically, using the Value Iteration Algorithm (VIA). However, as pointed out in the earlier work, the VIA approach led to a huge computational burden: computing the optimal policy required knowing the present 'state' (often impossible in practice), as well as significant computation. Therefore, a simple and suboptimal policy was considered: the threshold policy. Whenever the end-to-end throughput dropped below a threshold, route and schedule discovery was initiated. While the idea of a threshold policy is straightforward, the issue was which threshold value to use. In the earlier work, the best threshold was obtained by an exhaustive search within a finite set of possible thresholds: the one resulting in the best performance was found in this way. In this paper, we address this specific question: can we arrive at a simple rule for setting the threshold, given the parameters of the system (number of relay nodes, number of positions, cost parameter, mobility parameters)? Even though the literature on MANETs is extensive, the issue of capturing the cost of route discovery in a formal framework does not seem to have received much attention. In this paper our contributions are: i. providing a rule that yields the threshold value for use in the threshold policy for deciding whether or not to do route and schedule discovery, the threshold value being computed using the configuration information and ideal scheduling; ii. a study of the scheduling and end-to-end throughput characteristics, which provides many insights into a linear ad hoc network; iii. pointing out that our results remain valid in a slightly different mobility model; this model is a first step towards an 'open' network in which existing relay nodes can leave and/or new relay nodes can join the network.
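The threshold policy and the exhaustive threshold search described above can be sketched in a few lines. This is a minimal illustration, not the implementation from [1]: discovered_rate stands in for the rate of the route found by discovery, and evaluate for a simulation that returns the average net throughput of the policy at a given threshold (both are hypothetical helpers).

```python
def threshold_policy_step(observed_rate, threshold, phi, discovered_rate):
    # One slot of the threshold policy: if the observed throughput falls
    # below the threshold, initiate route/schedule discovery; a fraction
    # phi of the slot is then lost, so the net reward is (1 - phi) * R,
    # versus R when no discovery is done.
    if observed_rate < threshold:
        return True, (1 - phi) * discovered_rate
    return False, observed_rate

def best_threshold(candidates, evaluate):
    # Exhaustive search over a finite set of candidate thresholds, as in
    # the earlier work: keep the one with the best simulated performance.
    return max(candidates, key=evaluate)
```

The rule proposed in Section V replaces the exhaustive search with a closed-form choice of the threshold.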
The boundary condition is relaxed and modeled as a wrap-around condition to obtain an open-ended network. Mobility here need not be symmetrical. It is shown that the characteristics of the network do not change. Our results indicate that the performance of the proposed rule is no worse than 7% below that of the best threshold policy, and no worse than 15% below the optimal, when the route discovery cost is low. In the following Section II, the related work for this paper is discussed. In Section III, the system model is described in detail. Section IV discusses our previous work on finding the long-run average throughput, which was studied in the framework of Markov decision theory. Section V discusses the derivation of the threshold value for the simple threshold-based heuristic and compares its performance with that of the throughput-optimal policy. In Section VI, we discuss relaxing the boundary conditions to make an open network; this network may cater to a scenario that can be seen as a small area of concern in a large linear system. We conclude in Section VII. III. SYSTEM MODEL We consider the same network as in our earlier work [1], shown in Fig. 1. Here the network is G = (V, L), where V is the set of vertices and L is the set of links. While the source and destination nodes are assumed to be fixed at the two ends of the linear grid, the relay nodes are movable and can occupy any position between the source and destination nodes. The number of possible positions that the relay nodes can occupy is K. We consider a bounded area, i.e., the number of relay nodes is N, which is assumed to remain the same at all times. In Fig. 1, the values of K and N are 4 and 3 respectively. The number of relay nodes N can be more or less than the number of grid positions K, but we will consider different ranges of node density as the ratio of N to K. We will consider a discrete-time slotted system.
Nodes can change grid position to the left or right with probability p_l and p_r respectively, only at the beginning of a time slot. A node may stay at the same position with probability p_t = 1 − (p_l + p_r); in any case, a node does not change position during the course of a time slot. However, if the node finds a boundary at the beginning of the time slot, it waits at the boundary. We model the mobility of the network by specifying the duration of each time slot and the probabilities with which a node can move to the left or the right. Note that short (long) time slots correspond to a network with high (low) mobility. All nodes transmit and receive over a common channel. The transmission range is assumed to be m unit lengths of the linear grid. The link capacity, or data rate, is 1 (normalized) if the nodes are at neighboring positions; the data rate is reduced to a smaller rate (1/2) if there is one vacant position in between; and the data rate is 0 if there are at least two consecutive vacant positions. We assume the interference range is more than the transmission range (m) but less than m + 1 unit lengths, i.e., if a node transmits, it interferes with any other node trying to transmit during the same time slot whose separation from it is less than the interference range (here between m and m + 1). Again, as any communication between two nodes requires an exchange of packets by both the transmitter and the receiver to set up the link, neither node of a link should be within the interference range of another communication link at the same time. Among the communicating links, if at least one of the nodes of each such link is within the interference range of the others, only one link can be active at a time. Hence, links both of whose end nodes are farther than the interference range from both end nodes of the other links can be active simultaneously.
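The interference rules above can be encoded in a conflict graph, the structure later used for optimal scheduling (Section V-C). A minimal sketch under these model assumptions, with links given as pairs of grid positions and the interference range taken to lie just above m:

```python
from itertools import combinations

def links_conflict(a, b, m):
    # Per the model, a link interferes with another if any endpoint of one
    # is within the interference range (just over m grid units) of any
    # endpoint of the other; only mutually distant links can be co-active.
    return any(abs(u - v) <= m for u in a for v in b)

def conflict_graph(links, m):
    # Vertices are links of the transmission graph; an edge joins two
    # links that cannot be active simultaneously (no self-loops).
    return {(a, b) for a, b in combinations(links, 2)
            if links_conflict(a, b, m)}
```

For a four-hop route over positions 0..4 with m = 1, only the first and last hops are far enough apart to be scheduled simultaneously.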
Hence, for end-to-end communication through these links, the links have to be scheduled, i.e., we must decide when and for what fraction of time they will be active, satisfying the criteria just discussed. We model the cost associated with route discovery as follows. In every slot in which route discovery is initiated, we assume that no data can be transmitted for a fraction φ of the slot. Suppose that route and schedule discovery takes no more than t̂ units, where t̂ is less than a slot duration; then φ is the ratio of t̂ to the slot duration. Clearly, as φ moves closer to 1, the mobility level and the cost of route and schedule discovery increase. Correspondingly, as φ becomes smaller, the network is more and more static and the cost of route and schedule discovery can be amortized by sending more data over the slot. In the limit as φ goes to zero, we have a static network where route and schedule discovery is done at the beginning, and data can be transferred forever; this reduces to the model considered in, for example, [13]. Just as φ is treated as a cost, the number of bits transferred over the slot duration behaves like a reward. Suppose that an end-to-end transmission rate R can be supported over the duration of the slot for the chosen route and transmission schedule. Then, assuming that the slot duration is defined as the unit of time, the net reward over the slot is (1 − φ)R if route discovery is done. When route discovery is not done, the net reward is simply R. Clearly, the net reward corresponds to the number of data bits transmitted from the source to the destination during the slot. A route is defined as a sequence of grid positions (0, i_1, i_2, ..., i_l, (K + 1)), where positions 0 and (K + 1) indicate the positions of S and D respectively, and i_1, i_2, ..., i_l indicate positions on the line, with i_1 ≤ i_2 ≤ ... ≤ i_l, and i_1 ≤ m, (i_2 − i_1) ≤ m, (i_3 − i_2) ≤ m, ..., (i_l − i_{l−1}) ≤ m, ((K + 1) − i_l) ≤ m in this one-dimensional network.
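The route definition above translates directly into a validity check. A minimal sketch (the function name is illustrative):

```python
def is_valid_route(route, K, m):
    # A route is a non-decreasing sequence of grid positions starting at
    # the source (position 0) and ending at the destination (K + 1), with
    # every hop spanning at most the transmission range m.
    if route[0] != 0 or route[-1] != K + 1:
        return False
    return all(0 <= b - a <= m for a, b in zip(route, route[1:]))
```

Consistent with the model, the check looks only at positions: a valid route may pass through a position with no node, which simply forces the supported rate to zero.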
Given a route, it is possible that there is no node at a particular position. We still consider this a valid route; however, the rate that can be supported on such a route is clearly zero. Similarly, it is also possible that there are multiple nodes at a particular position; in this case, any of the nodes at that position can act as the relay node. Because our performance criterion depends on the transmission rate that corresponds to a route, the individual node identities do not matter. IV. RECAPITULATION OF EARLIER WORK The problem of route and schedule discovery was solved in the framework of a Markov Decision Process (MDP) [14] in our earlier work [1]. The five elements of the MDP are (a) the state space, (b) the action space, (c) the conditional transition probability given the current state and action, (d) the one-step expected cost and (e) the total cost criterion over a finite or infinite time horizon; the details of each element can be found in [1]. Some of the results are reproduced in Fig. 2, where two networks with K = 5, N = 9 and K = 6, N = 3, 9 respectively are considered, showing the optimal net throughput against the cost parameter φ. It is known that computing the optimal policy using VIA is a significant computational burden. Here we discuss a simplified policy used for obtaining a high net end-to-end throughput. The motivation for this policy is as follows: if the observed throughput in a slot is small, then the current route is likely to be poor. The policy is: if the observed throughput is smaller than the threshold, then perform route and schedule discovery; else continue with the currently known route and schedule. This is discussed in our earlier paper [1], and some of the results are reproduced here in Fig. 3. This figure indicates that, by a proper choice of threshold value, there is an advantage to implementing the (best) threshold policy at very low implementation cost. V.
THRESHOLD VALUE In this paper our objective is to find, in a simple manner, a threshold value that gives throughput close to that of the best threshold value. Given a configuration, and ignoring any discovery cost, we can ask: what is the best possible throughput in this configuration? We call this end-to-end throughput the 'raw' throughput corresponding to the given configuration. Now, allowing the configuration to vary over all possibilities, we can compute an expected raw throughput. This is possible because we can find the steady-state probability of each configuration as given in Section V-A: E(raw throughput) = Σ over configurations (steady-state prob. of the node distribution for the configuration) × (raw throughput for the best route). No discovery cost means φ = 0. Finally, we incorporate the role of the discovery cost φ by setting the threshold as follows. At higher φ, the tendency is to do less route discovery, other conditions being the same, which implies that as φ increases, the threshold value should decrease; in other words, the threshold value is a decreasing function of φ. We propose the following: Threshold Value = (1 − φ^x) × E(raw throughput), x > 0 (2) A. Steady State Probability of a Specific Configuration (States Based on Positions of Users) It can easily be shown that the position of a single node performing a one-dimensional random walk with 'pause and restart (stuck-at-boundary)' boundary behavior is uniformly distributed over the movement positions. This is because of the doubly stochastic nature of the state transition probability matrix. The probability that the node is at any one of the movement positions is 1/K, and when there are N such nodes, the steady-state probability of any ordered configuration of the N nodes is (1/K)^N.
As in our case the first part of the state counts only 'how many nodes are at each movement position', with K such movement positions, we have to find how many ordered assignments of nodes correspond to one single state as discussed. The problem reduces as follows. Let there be [n_1, n_2, ..., n_K] nodes at the K grid positions respectively, so that Σ_{i=1}^{K} n_i = N. It can be shown that for the k-th position, the number of possible options is A_k = C(N − Σ_{i=1}^{k−1} n_i, n_k). Hence the total number of ordered assignments of nodes that make up one single state is Π_{i=1}^{K} A_i = Π_{i=1}^{K−1} A_i (3) = N!/(n_1!(N − n_1)!) × (N − n_1)!/(n_2!(N − n_1 − n_2)!) × ... (4) = N!/(n_1! ... n_K!) (5) B. For each route, the best throughput can be computed by using optimal scheduling as discussed in Section V-C. A null route is added just to address the situation in which the system has no route at the beginning. C. Optimal Scheduling This part of our derivation is similar to the derivation for the static-network problem in [13]. Any communication between two nodes contends with any other node within the interference range of either endpoint if both are active simultaneously. This problem is approached using a 'conflict graph' whose vertices correspond to the links of the transmission graph (G) of the network; no self-loops are drawn. An edge between two vertices means that the corresponding links in the transmission graph interfere with each other and hence cannot be active simultaneously. Links belonging to an independent set in the conflict graph can be scheduled simultaneously. Using maximal independent sets, the optimal scheduling problem can be expressed as a linear program; solving it yields the link schedule, i.e., the fraction of time each link is active. D.
Results The expected raw throughput computed using the above method is taken as the threshold value for the threshold policy, and simulations are done for different system parameters. From the simulations it is observed that x = 2 gives a good approximation in most cases. The related graphs are given in Figs. 4 and 5 (relative throughput with respect to the optimal policy vs. φ). It can be observed from the graphs that, for most configurations, when φ ≤ 0.5 the performance is no worse than 7% below that of the best possible threshold value and no worse than 15% below the average throughput obtained by the optimal policy. For φ > 0.5, it is observed that the threshold policy follows the route-break policy. VI. OPEN ENDED BOUNDARY The boundary condition explained earlier gives a close-ended system: the stuck-at-boundary model necessarily keeps the number of relay nodes in the area of concern constant, as nodes are allowed neither to leave nor to join the existing network. But the model is meaningful only when mobility is symmetrical, i.e., p_l = p_r; otherwise, eventually, all the nodes move to the leftmost or rightmost position with probability 1. To allow an unsymmetrical mobility model (p_l ≠ p_r), another boundary model, namely the wrap-around model, is considered. To keep the number of relay nodes constant, an assumption, though a little artificial, is made: if a node moves out of (into) the area at one end, then another node moves into (out of) the area at the other end. A. Wrap-Around Boundary Conditions (Open Ended Boundary). Claim 1: For a one-dimensional random walk with wrap-around boundary conditions, the fraction of time a single node spends at any of the movement positions is uniformly distributed, even when the probabilities of moving left and right are unequal. Proof: In this model, whenever a boundary is encountered, instead of jumping out of the bounded area, the node is transferred to the other end for that time slot.
The state transition probability matrix P is a K × K circulant matrix in which each row has p_l in the column for the position to the left, p_t = 1 − (p_l + p_r) on the diagonal, and p_r in the column for the position to the right, with wrap-around at the boundaries. As this matrix is doubly stochastic, even when mobility is non-uniform, the steady-state distribution is π = [1/K, ..., 1/K]. So the earlier analysis applies. VII. CONCLUSIONS The threshold policy is a practical method: it is a simplified, less computationally intensive approach, and one that can be implemented by measuring the throughput instead of knowing the state. A policy that measures the throughput systematically, along with measuring it at random times, is relieved of the requirement to know the mobility conditions (the time slot depends on mobility), since it can also track changes in the throughput. The expected throughput is analytically derived in this paper, and it is observed that (1 − φ^2) is a good approximation of the multiplying factor in most cases once φ is taken into account. An analytical method for finding the steady-state probability of different configurations, based on the number of relay nodes at each position, is obtained. It is observed from the simulations that, for most configurations, under the stated rule with φ ≤ 0.5, the performance of the proposed rule is no worse than 7% below that of the best threshold policy, and no worse than 15% below the optimal. For φ > 0.5, it is observed that the threshold policy follows the route-break policy. The boundary condition is then modified to accommodate an open network in which nodes can leave/join; this is modeled as a wrap-around boundary condition. The assumption taken here is that whenever a node leaves (joins), another node joins (leaves) simultaneously at the other end of the boundary, so as to keep the number of nodes in the network the same. It is shown that the behavior in this case is the same as under the stuck-at-boundary condition. Withdrawing this assumption, i.e., letting the number of relay nodes vary randomly so that the analysis can be modeled as a birth-death process, is future work we are continuing now.
3,414
1009.4672
1667827919
Achieving optimal transmission throughput in a multi-hop wireless network is a fundamental but hard problem. The situation is aggravated when nodes are mobile. Further, a multi-rate system makes the analysis of throughput more complicated. In a mobile scenario, links may break or be created as nodes move within communication range. 'Route Discovery', which is to find the optimal route and transmission schedule, is an important issue. Route discovery entails some cost, so one would not like to initiate discovery too often. On the other hand, not discovering reasonably often entails the risk of being stuck with a suboptimal route and/or schedule, which hurts end-to-end throughput. The formulation of the routing decision problem in a one-dimensional mobile ad hoc network as a Markov decision process has already been discussed in [1]. A heuristic based on a threshold policy is discussed in the same paper, without giving a way to find the threshold. In this paper, we suggest a rule for setting the threshold, given the parameters of the system. We also point out that our results remain valid in a slightly different mobility model; this model is a first step towards an 'open' network in which existing relay nodes can leave and/or new relay nodes can join the network.
In @cite_5 , the authors discuss route discovery initiation that reduces the frequency of flooding requests by elongating the link duration of the selected paths. In @cite_11 , the authors suggest an extension that stores multiple paths as a route rather than a single path.
{ "abstract": [ "Flooding-based approaches are incorporated in reactive routing protocols as the fundamental strategy for route discovery. They overtly affect traffic as the frequency of route discovery increases along with the mobility of users in a mobile ad hoc network (MANET). This paper presents a scheme for reducing overall traffic and end-to-end delay in highly MANET networks. Firstly a new routing algorithm is proposed to reduce the frequency of flood requests by elongating the link duration of the selected paths. In order to increase the path duration, non-disjoint paths are also considered. This concept is a novel approach in route discovery as previous reactive routing protocols seek only disjoint paths. Secondly another novel approach is presented to estimate the link expiration time without the need for global positioning system (GPS) devices. To prevent broadcast storms that may be intrigued during the path discovery operation, another scheme is also introduced. The basic concept behind the proposed scheme is to broadcast only specific and well-defined packets, referred to as \"best packets\" in the paper. The new protocol is simulated with regard to traffic overhead. Although our main aim in this paper is to reduce the net control traffic in a MANET network, there are other benefits arising from the proposed schemes, namely the increase in link duration, reduction in the end-to-end communication delay, less disruption in data flow, and fewer path setups.", "Increasing popularity and availability of portable wireless devices, which constitute mobile ad hoc networks, calls for scalable ad hoc routing protocols. On-demand routing protocols adapt well with dynamic topologies of ad hoc networks, because of their lower control overhead and quick response to route breaks. But, as the size of the network increases, these protocols cease to perform due to large routing overhead generated while repairing route breaks. 
We propose a multipath on-demand routing protocol (SMORT), which reduces the routing overhead incurred in recovering from route breaks, by using secondary paths. SMORT computes fail-safe multiple paths, which provide all the intermediate nodes on the primary path with multiple routes (if exists) to destination. Exhaustive simulations using GloMoSim with large networks (2000 nodes) confirm that SMORT is scalable, and performs better even at higher mobility and traffic loads, when compared to the disjoint multipath routing protocol (DMRP) and ad hoc on-demand distance vector (AODV) routing protocol." ], "cite_N": [ "@cite_5", "@cite_11" ], "mid": [ "2170526908", "2023256016" ] }
Threshold Policy for Route Discovery Initiation in Mobile Ad hoc Networks
Mobile multi-hop ad hoc networks play a crucial role in setting up a network on the fly where deploying a network infrastructure is not practical, in times of utmost urgency, due to both time and economic constraints. Industrial instrumentation, personal communication, inter-vehicular networking, law enforcement operations, battlefield communications, disaster recovery situations and mobile Internet access are a few examples. In a mobile ad hoc network (MANET), communication between nodes situated beyond their radio range is also possible. For this type of communication, the nodes have to take the help of other relay nodes whose radio coverage overlaps. Such communication is possible by knowing a path, or route, between the source and destination nodes; a transmission schedule must also be known for the route. Finding the optimal route and transmission schedule shall be referred to as 'Route Discovery'. In a static scenario, route discovery needs to be initiated only at the beginning. In a mobile scenario, links may break or be created (as nodes move within communication range). We are motivated by the question: when to initiate route and schedule discovery in a MANET? A discovery entails some cost, so one would not like to initiate discovery too often. On the other hand, not discovering reasonably often entails the risk of being stuck with a suboptimal route and/or schedule, which hurts end-to-end throughput. Our interest in this question stems from the need to assess how policies based on simple heuristics perform in comparison with policies that are optimal in some precisely defined sense. If it turns out that the simple heuristic is far from optimal, then the search for improved heuristics must continue; else, it is reassuring to know that the heuristic performs nearly as well as it can. In our earlier work [1], we had studied this problem in the framework of Markov decision theory.
A simple one-dimensional network was considered, a simple mobility model led to a Controlled Markov Chain, and our interest was in obtaining the best route and schedule discovery policy. The resulting problem was solved numerically, using the Value Iteration Algorithm (VIA). However, as pointed out in the earlier work, the VIA approach led to a huge computational burden. Computing the optimal policy required knowing the present 'state' (often impossible in practice), as well as significant computation. Therefore, a simple and suboptimal policy was considered: the threshold policy. Whenever the end-to-end throughput dropped below a threshold, route and schedule discovery was initiated. While the idea of a threshold policy is straightforward, the issue was the threshold value to use. In the earlier work, the best threshold was obtained by an exhaustive search within a finite set of possible thresholds: the one resulting in the best performance was found in this way. In this paper, we address this specific question: can we arrive at a simple rule for setting the threshold, given the parameters of the system (number of relay nodes, number of positions, cost parameter, mobility parameters)? Even though the literature on MANETs is extensive, the issue of capturing the cost of route discovery in a formal framework does not seem to have received much attention. In this paper our contributions are: (i) a rule that yields the threshold value for use in the threshold policy for deciding whether or not to perform route and schedule discovery, the threshold being computed from the configuration information and ideal scheduling; (ii) a study of the scheduling and end-to-end throughput characteristics, which provides many insights into a linear ad hoc network; (iii) we also point out that our results remain valid in a slightly different mobility model; this model is a first step towards an 'open' network in which existing relay nodes can leave and/or new relay nodes can join the network.
The boundary condition is relaxed and modeled as a wrap-around condition, yielding an open-ended network. The mobility here need not be symmetrical. It is shown that the characteristics of the network do not change. Our results indicate that the performance of the proposed rule is no worse than 7% below that of the best possible threshold policy, and no worse than 15% below the optimal, when the route discovery cost is low. In the following Section II, the related work for this paper is discussed. In Section III, the system model is described in detail. Section IV discusses our previous work on finding the long-run average throughput, studied in the framework of Markov Decision Theory. Section V derives the threshold value for the simple threshold-based heuristic and compares its performance with the throughput-optimal policy. In Section VI, we relax the boundary conditions to obtain an open network; this network may cater to a scenario that can be seen as a small area of concern within a large linear system. We conclude in Section VII. III. SYSTEM MODEL We consider the same network as in our earlier work [1], shown in Fig. 1. Here the network is G = (V, L), where V is the set of vertices and L is the set of links. While the source and destination nodes are assumed to be fixed at the two ends of the linear grid, relay nodes are movable and can occupy any position in between the source and destination nodes. The number of possible positions which the relay nodes can occupy is K. We consider a bounded area, i.e., the number of relay nodes is N, assumed to remain the same at all times. In Fig. 1, K = 4 and N = 3. The number of relay nodes N can be more or less than the number of grid positions K; we consider different node densities given by the ratio of N to K. We consider a discrete-time slotted system.
Nodes can change grid position to the left or right with probabilities p_l and p_r respectively, only at the beginning of a time slot. A node may stay at the same position with probability p_t = 1 − (p_l + p_r); in any case, a node does not change position during the rest of the time slot. However, if the node finds a boundary at the beginning of the time slot, it waits at the boundary. We model the mobility of the network by specifying the duration of each time slot and the probabilities with which a node can move to the left or the right. Note that short (long) time slots correspond to a network with high (low) mobility. All nodes transmit and receive over a common channel. The transmission range is assumed to be m unit lengths of the linear grid. The link capacity, or data rate, is 1 (normalized) if the nodes are at neighboring positions, is reduced to 1/2 if there is one vacant position in between, and is 0 if there are two or more consecutive vacant positions in between. We assume the interference range is more than the transmission range m but less than m + 1 unit lengths, i.e., a transmitting node interferes with any other node trying to transmit during the same time slot if their separation is less than the interference range (here, less than m + 1). Again, as any communication between two nodes requires an exchange of packets by both the transmitter and the receiver to set up the link, neither endpoint of a link should be within the interference range of another communication link at the same time. Among a set of links where at least one endpoint of each link is within the interference range of the others, only one link can be active at a time. Hence, links whose endpoints are all farther than the interference range from both endpoints of the other links can be active simultaneously.
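The movement rule above can be sketched in a few lines. This is a minimal illustration of the mobility model, assuming grid positions 1..K; it is not code from the paper:

```python
import random

def step(pos, K, p_l, p_r):
    """One time-slot move of a relay node on grid positions 1..K.
    The node moves left with probability p_l, right with p_r, and
    stays with p_t = 1 - (p_l + p_r). Under the stuck-at-boundary
    rule, a move that would leave the grid is replaced by waiting."""
    u = random.random()
    if u < p_l:
        new_pos = pos - 1
    elif u < p_l + p_r:
        new_pos = pos + 1
    else:
        new_pos = pos
    # stuck-at-boundary: wait at the boundary instead of leaving the grid
    return pos if new_pos < 1 or new_pos > K else new_pos
```

Iterating this step for all N relay nodes at each slot boundary generates the configurations whose steady-state distribution is analyzed in Section V-A.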
Hence, for end-to-end communication through these links, the links are to be scheduled, i.e., we must decide when and for what fraction of time each link is active, subject to the criteria just discussed. We model the cost associated with route discovery as follows. In every slot in which route discovery is initiated, we assume that no data can be transmitted for a fraction φ of the slot. Suppose that route and schedule discovery takes no more than t̂ time units, where t̂ is less than a slot duration; then φ is the ratio of t̂ to the slot duration. Clearly, as φ moves closer to 1, the mobility level and the cost of route and schedule discovery increase. Correspondingly, as φ becomes smaller, the network is more and more static and the cost of route and schedule discovery can be amortized by sending more data over the slot. In the limit as φ goes to zero, we have a static network where route and schedule discovery is done at the beginning, and data can be transferred forever. This reduces to the model considered in, for example, [13]. Just as φ is treated as a cost, the number of bits transferred over the slot duration behaves like a reward. Suppose that an end-to-end transmission rate R can be supported over the duration of the slot for the chosen route and transmission schedule. Then, taking the slot duration as the unit of time, the net reward over the slot is (1 − φ)R if route discovery is done. When route discovery is not done, the net reward is simply R. Clearly, the net reward corresponds to the number of data bits transmitted from the source to the destination during the slot. A route is defined as a sequence of grid positions (0, i_1, i_2, ..., i_l, (K + 1)), where positions 0 and (K + 1) indicate the positions of S and D respectively, and i_1, i_2, ..., i_l indicate positions on the line, with i_1 ≤ i_2 ≤ ... ≤ i_l, and i_1 ≤ m, (i_2 − i_1) ≤ m, (i_3 − i_2) ≤ m, ..., (i_l − i_{l−1}) ≤ m, ((K + 1) − i_l) ≤ m in this one-dimensional network.
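As an illustration (not the authors' code), the per-link rates of a candidate route under the distance-dependent rate model above can be computed as:

```python
def link_rate(gap):
    """Data rate of one link as a function of the position gap:
    1 for neighboring positions, 1/2 with one vacant position in
    between, 0 with two or more vacant positions in between."""
    if gap == 1:
        return 1.0
    if gap == 2:
        return 0.5
    return 0.0

def route_rates(route):
    """route = (0, i_1, ..., i_l, K + 1); returns per-link rates.
    The end-to-end rate is further limited by interference-aware
    scheduling (Section V-C), so these are only per-hop capacities."""
    return [link_rate(b - a) for a, b in zip(route, route[1:])]
```

A route containing a gap of three or more positions has a zero-rate link and therefore supports zero end-to-end throughput, matching the treatment of routes with unoccupied positions below.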
Given a route, it is possible that there is no node at a particular position. We still consider this a valid route; however, the rate that can be supported on such a route is clearly zero. Similarly, it is also possible that there are multiple nodes at a particular position. In this case, any of the nodes at that position can act as the relay node. Because our performance criterion depends on the transmission rate that corresponds to a route, the individual node identities do not matter. IV. RECAPITULATION OF EARLIER WORK The problem of route and schedule discovery was solved in the framework of a Markov Decision Process (MDP) [14] in our earlier work [1]. An MDP has five elements, namely (a) the State Space, (b) the Action Space, (c) the Conditional Transition Probability given the current state and action, (d) the One-step Expected Cost and (e) the Total Cost Criterion over a finite or infinite time horizon. The details of each element can be found in [1]. Some of the results are reproduced in Fig. 2: two networks with K = 5, N = 9 and K = 6, N = 3, 9 respectively are considered for the optimal net throughput against the cost parameter φ. It is known that computing the optimal policy using VIA is a significant computational burden. Here we discuss a simplified policy used for obtaining high net end-to-end throughput. The motivation for this policy is as follows: if the observed throughput in a slot is small, then the current route is likely to be poor. The policy is: if the observed throughput is smaller than the threshold, then perform route and schedule discovery; else continue with the currently known route and schedule. This is discussed in our earlier paper [1], and some of the results are reproduced here in Fig. 3. This figure indicates that, with a proper choice of threshold value, there is an advantage to implementing the (best) threshold policy at very low implementation cost. V.
THRESHOLD VALUE In this paper our objective is to find a threshold value which is close to the best threshold value, i.e., one that gives throughput as good as the best threshold value, and to compute this threshold value in a simple manner. Given a configuration, and ignoring any discovery cost, we can ask: what is the best possible throughput in this configuration? We call this end-to-end throughput the 'raw' throughput of the configuration. Now, allowing the configuration to vary over all possibilities, we can compute an expected raw throughput; this is possible because we can find the steady-state probability of each configuration as given in Section V-A: E(raw throughput) = Σ over configurations of (steady-state probability of the node distribution for the configuration × raw throughput of its best route). No discovery cost means φ = 0. Finally, we incorporate the role of the discovery cost φ by setting the threshold as follows. At higher φ, there is a tendency to perform route discovery less often, other conditions being the same. This implies that as φ increases, the threshold value should decrease; in other words, the threshold value is a decreasing function of φ. We propose the following: Threshold Value = (1 − φ^x) · E(raw throughput), x > 0 (2) A. Steady State Probability of a Specific Configuration (States Based on Positions of Users) It can easily be shown that the position of a single node performing a one-dimensional random walk with the boundary behavior 'pause and restart (stuck-at-boundary)' is uniformly distributed over all movement positions. This is because of the doubly stochastic nature of the state transition probability matrix. The probability that the node is at any one of the K movement positions is 1/K, and when N such nodes are there, the steady-state probability of any ordered configuration of the N nodes is (1/K)^N.
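The threshold rule of Eq. (2) is simple enough to state as code. This sketch assumes the steady-state probabilities and best-route raw rates have already been computed; the function names are illustrative, not from the paper:

```python
def expected_raw_throughput(configs):
    """configs: iterable of (steady-state probability, raw throughput
    of the best route) pairs, one per configuration."""
    return sum(p * r for p, r in configs)

def threshold_value(phi, e_raw, x=2.0):
    """Eq. (2): Threshold = (1 - phi^x) * E(raw throughput), x > 0.
    The threshold is a decreasing function of the discovery cost phi;
    x = 2 is the value found to work well empirically (Section V-D)."""
    return (1.0 - phi ** x) * e_raw
```

Route and schedule discovery is then triggered in any slot whose observed throughput falls below this value.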
As in our case the first part of the state, which is based only on movement positions, counts 'how many nodes are at each of the K movement positions', we have to find out how many ordered arrangements of nodes make up one single state as discussed. The problem reduces as follows: let there be [n_1, n_2, ..., n_K] nodes at the K grid positions respectively, so that Σ_{i=1}^{K} n_i = N. It can be shown that for the k-th position, the number of possible options is A_k = C(N − Σ_{i=1}^{k−1} n_i, n_k). Hence the total number of ordered arrangements of nodes that make up one single state is Π_{i=1}^{K} A_i = Π_{i=1}^{K−1} A_i (3) = N! / (n_1!(N − n_1)!) × (N − n_1)! / (n_2!(N − n_1 − n_2)!) × ... (4) = N! / (n_1! ... n_K!). (5) For each route, the best throughput can be computed by using optimal scheduling as discussed in Section V-C. The null route is added here just to address the situation when the system has no route at the beginning. C. Optimal Scheduling This part of our derivation is similar to the derivation for the static-network problem in [13]. Any communication between two nodes causes contention with any other node within the interference range of either of the two nodes if both are active simultaneously. This problem is approached using a 'conflict graph' whose vertices correspond to the links of the transmission graph G of the network. In this conflict graph, an edge from a vertex to itself is not drawn. If an edge between two vertices exists, then the corresponding links in the transmission graph interfere with each other and hence cannot be active simultaneously. Links belonging to an independent set in the conflict graph can be scheduled simultaneously. Using maximal independent sets, the optimal scheduling problem can be expressed as a linear program; solving this linear program gives the link schedule, i.e., the fraction of time each link will be active. D.
Results The expected raw throughput computed using the above method is taken as the threshold value for the threshold policy, and simulations are done for different system parameters. From the simulations it is observed that x = 2 gives a good approximation in most cases. The related graphs are given in Figs. 4 and 5. It can be observed from the graphs that for most configurations, when φ ≤ 0.5, the performance is no worse than 7% below that of the best possible threshold value, and no worse than 15% below the average throughput obtained by the optimal policy. For φ > 0.5, it is observed that the threshold policy follows the route-break policy. VI. OPEN ENDED BOUNDARY The boundary condition explained earlier yields a close-ended system, i.e., the stuck-at-boundary model necessarily keeps the number of relay nodes in the area of concern constant, as nodes are allowed neither to leave nor to join the existing network. But the model is meaningful only when mobility is symmetrical, i.e., p_l = p_r; otherwise, eventually, all the nodes move to the leftmost or rightmost position with probability 1. [Figure: Relative throughput with respect to the optimal policy vs φ.] To allow an asymmetric mobility model (p_l ≠ p_r), another boundary model, namely the wrap-around model, is considered. To keep the number of relay nodes constant, a somewhat artificial assumption is made: if a node moves out of (into) the area at one end, then another node moves into (out of) the area at the other end. A. Wrap-Around Boundary Conditions (Open-Ended Boundary). Claim 1: The fraction of time a single node spends at each of the moving positions in a one-dimensional random walk with wrap-around boundary conditions is uniformly distributed, even when the probabilities of moving left and right are unequal. Proof: In this model, whenever a boundary is found, instead of jumping out of the bounded area, the node is transferred to the other end for that time slot.
The state transition probability matrix P is a K × K circulant matrix with p_t on the diagonal, p_r on the superdiagonal, p_l on the subdiagonal, and wrap-around entries P(1, K) = p_l and P(K, 1) = p_r. As this matrix is doubly stochastic, even when mobility is non-uniform, the steady-state distribution is π = [1/K, ..., 1/K]. So the earlier analysis applies. VII. CONCLUSIONS The threshold policy is a practical method, being both simple and less computationally intensive, and it can be implemented by measuring the throughput instead of knowing the state. A policy that measures the throughput systematically, along with measuring it at random times, is relieved of the requirement to know the mobility conditions (the time slot duration depends on mobility) by also measuring the change in throughput. The expected throughput is analytically derived in this paper, and it is observed that (1 − φ^2) is a good approximation to the multiplying factor in most cases. An analytical method for finding the steady-state probability of different configurations, based on the number of relay nodes at the different positions, is obtained. It is observed from the simulations that for most configurations, under the stated rule, when φ ≤ 0.5 the performance of the proposed rule is no worse than 7% below that of the best threshold policy, and no worse than 15% below the optimal. For φ > 0.5, it is observed that the threshold policy follows the route-break policy. The boundary condition is then modified to model an open network where nodes can leave/join the network. This is modeled as the wrap-around boundary condition, under the assumption that whenever a node leaves (joins), another node joins (leaves) simultaneously at the other end of the boundary, so as to keep the number of nodes in the network the same. It is shown that the behavior in this case is the same as under the stuck-at-boundary condition. Withdrawing this assumption, so that the number of relay nodes varies randomly, would allow the analysis to be modeled as a birth-death process; this is future work we are continuing now.
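The configuration probabilities of Section V-A (Eqs. (3)-(5)) can be sketched as follows; this is an illustrative reimplementation, not the authors' code:

```python
from math import factorial

def config_probability(counts, K):
    """Steady-state probability of the configuration [n_1, ..., n_K]:
    the multinomial count N! / (n_1! ... n_K!) of ordered node
    arrangements, times the per-arrangement probability (1/K)^N."""
    N = sum(counts)
    arrangements = factorial(N)
    for n in counts:
        arrangements //= factorial(n)
    return arrangements * (1.0 / K) ** N
```

Summing over all configurations with Σ n_i = N gives 1, by the multinomial theorem, which is a quick sanity check on the derivation.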
3,414
1009.5183
1591748822
Understanding constellations in large data collections has become a common task. One obstacle a user has to overcome is the internal complexity of these repositories. For example, extracting connected data from a normalized relational database requires knowledge of the table structure, which might not be available to the casual user. In this paper we present a visualization framework which presents the collection as a set of entities and relations (on the data level). Using rating functions, we divide large relation networks into small graphs which resemble ego-centered networks. These graphs are connected so the user can browse from one to another. To further assist the user, we present two views which embed information on the evolution of the relations into the graphs. Each view emphasizes a different aspect of temporal development. The framework can be adapted to any repository by a flexible data interface and a graph configuration file. We present initial web-based applications, including a visualization of the DBLP data set. We use the DBLP visualization to evaluate our approach.
Aggregated approaches show the temporal data in a single drawing. LifeLines @cite_10 visualizes a person's disease pattern. For each condition there is a timeline, i.e., a horizontal bar along a time axis. Coloration and thickness of these bars change to show the status of the condition at different times. TimeRadarTrees @cite_11 uses a radial drawing to show how several entities are related with each other. Unlike our approach, there is no ego and all relations between the visible nodes are displayed. Instead of node-link drawing, it uses colored segments which fill a circle with multiple layers. The approach is limited to small graphs but supports hierarchical nesting to compensate for this. Segments at the perimeter represent recent events while those near the center represent old influences. The intensity view is similar to this drawing but has a much lower information density. ConfSearch @cite_2 searches DBLP for relations between conferences, authors and keywords. The related entities of an ego are presented as a rated list with additional information. ConfSearch does not show the evolution of relations, but we adopted some of its rating functions for the examples in Section .
{ "abstract": [ "Recent models have introduced the notion of dimensions and hierarchies in social networks. These models motivate the mining of small world graphs under a new perspective. We exemplary base our work on a conference graph, which is constructed from the DBLP publication records. We show that this graph indeed exhibits a layered structure as the models suggest. We then introduce a subtraction approach that allows to segregate layers. Using this technique we separate the conference graph into a thematic and a quality layer. As concrete applications of the discussed methods we present a novel rating method as well as a conference search tool that bases on our graph and its layer separation.", "LifeLines provide a general visualization environment for personal histories that can be applied to medical and court records, professional histories and other types of biographical data. A one screen overview shows multiple facets of the records. Aspects, for example medical conditions or legal cases, are displayed as individual time lines, while icons indicate discrete events, such as physician consultations or legal reviews. Line color and thickness illustrate relationships or significance, rescaling tools and filters allow users to focus on part of the information. LifeLines reduce the chances of missing information, facilitate spotting anomalies and trends, streamline access to details, while remaining tailorable and easily transferable between applications. The paper describes the use of LifeLines for youth records of the Maryland Department of Juvenile Justice and also for medical records. User's feedback was collected using a Visual Basic prototype for the youth record. Techniques to deal with complex records are reviewed and issues of a standard personal record format are discussed.", "The evolution of dependencies in information hierarchies can be modeled by sequences of compound digraphs with edge weights. 
In this paper we present a novel approach to visualize such sequences of graphs. It uses radial tree layout to draw the hierarchy, and circle sectors to represent the temporal change of edges in the digraphs. We have developed several interaction techniques that allow the users to explore the structural and temporal data. Smooth animations help them to track the transitions between views. The usefulness of the approach is illustrated by examples from very different application domains." ], "cite_N": [ "@cite_2", "@cite_10", "@cite_11" ], "mid": [ "1587992703", "2128938146", "2100786833" ] }
A Framework for an Ego-centered and Time-aware Visualization of Relations in Arbitrary Data Repositories
The amount of data we create and store increases every day. The result is a growing number of large and possibly complex repositories. Much effort has been invested in defining potent query languages like SQL and XQuery, and there are many fast and reliable algorithms to apply them. However, these techniques do little to hide the internal structure of the collection. For example, a highly normalized relational database might require several table joins to gather all information which belongs to one entry. While an expert can use information on the internal structure to speed up queries, a casual novice might easily be overwhelmed. This is a problem because the user wastes valuable time needed for the actual task on understanding the data organization. A common way to hide internal complexity is to present the collection as a set of entities which are connected by various types of relations. Consider the DBLP bibliographic data set. It contains meta data for 1.3 million publications. Authors are one type of entity that can be found in the collection. They are connected by the coauthor relation if they cooperated on at least one publication. There are several approaches to use this model in applications. Some prefer a textual representation [1] [9] while others provide a more complex visualization [10] [15]. Relations also provide a context for a given entity and thereby support exploring the data set. The networks which are defined by relations in real-world data sets can be very large and dense. The DBLP coauthor network consists of 750,000 authors and 2.5 million relations. Suppose we want to examine the publication behavior of author Adam. Even finding Adam in this network is cumbersome. Figure 1 shows a node-link drawing of Adam's direct neighborhood in the coauthor network. It still contains 178 nodes and 1154 edges (177 Adam-coauthor + 977 coauthor-coauthor). The drawing hides the fact that some relations are stronger than others.
For example, Adam has cooperated 25 times with Bob, but only 1 time with Jack, so the tie to Bob is much stronger. It also does not reveal at which time the relations were formed and how they have evolved over time. In this paper we show how relation networks can be divided into small connected graphs. In Section 2 we describe how filtering and rating can reduce the neighborhood graphs. We obtain simple graph drawings which resemble ego-centered networks [5]. These drawings can be enriched with information on how a relation has evolved. In Section 3 we present two views which cover different aspects of evolution. One emphasizes when a relation was influenced while the other focuses on how strong this influence was. A single graph is of little use if it is not linked with others. In Section 4 we show how the graphs are embedded into a framework which allows browsing the graphs. In Section 5 we present examples of visualized relations in DBLP and the German-language Wikipedia. We conclude this paper with a two-part evaluation of our approach. A common approach to visualize graphs is the node-link diagram. Entities are represented by nodes. If two entities are in relation, their nodes are connected by an edge. In most real-world data sets there are a number of different relations. The networks defined by these relations can become very large and do not allow comprehensible node-link drawings. To reduce their size, we generate a set of ego-centered drawings. In a first step we split the relation networks into small graphs. Each graph represents the direct proximity of one entity. We call this entity the ego of the drawing. In the coauthor example, we obtain 750,000 graphs, one for each author. Figure 1 shows that this approach is virtually useless when the drawing violates several aesthetic criteria which have been identified for this type of drawing [2] [7]. Above all, the number of edge crossings is problematic here.
To improve the comprehensibility of the split graphs we only include the most relevant neighbors. For most relations there is a straightforward definition of relevance. In the coauthor example, we can use the number of joint publications. If we put the node which represents the ego in the center of the drawing and place the selected neighbors (the alters) at a distance according to their relevance, we obtain an ego-centered network [5]. We do not draw edges between alters to avoid edge crossings. Figure 2a shows Adam's ego-centered network. We have included the ten most relevant alters. Bob is placed closest because the relation with him is the strongest. Eve is the least relevant of the included authors. Inverted Ego Graphs In Section 3 we will use the edges to display information on how the relation has evolved over time. In many cases, the most relevant alter is of special interest and we can expect that the development of this relation is more complex than others. However, the edge which represents this relation is the shortest in the drawing and thereby provides the least space for additional information. To increase the available space, we introduce the inverted ego graph. We place the most relevant alter at maximum distance to the ego while less important alters are positioned closer. To further increase the edge length, we place the ego and the most relevant alter at opposite sides of the drawing. This contradicts the "closely related entities are placed close to each other" metaphor, but it provides sufficient space. Figure 2b shows Adam's inverted ego graph. Ego-centered networks originate from the field of qualitative network analysis. Aside from the relevance value, these drawings often contain additional information which might be important for a user who interprets them. We can add information by modifying size, shape and filling of the nodes. For example, in Figure 2b we use the size of the alters to show the total number of publications.
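The relevance-based selection of alters can be sketched as follows; `joint_pubs` is a hypothetical input mapping, not part of the framework's actual data interface:

```python
def select_alters(joint_pubs, k=10):
    """Pick the k most relevant alters for the ego graph, using the
    number of joint publications as the relevance function for the
    coauthor relation. Ties are broken by name so the drawing is
    stable across reloads. Returns (alter, relevance) pairs, the
    most relevant first."""
    ranked = sorted(joint_pubs.items(), key=lambda kv: (-kv[1], kv[0]))
    return ranked[:k]
```

In the standard ego-centered drawing the first pair is placed closest to the ego; in the inverted ego graph the order of distances is simply reversed.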
Obviously, Eve is by far the most active author. However, she has a low relevance because only a small part of her publications were joint work with Adam. We use the node filling to show this fraction. Mike, for example, published all his work with Adam. Together with the low number of publications, this suggests that he is a PhD student rather than an established researcher. Both ego and alters represent persons, so we use the same node shape to draw both. Table 1 shows the six types of node fillings and the parameters they require. none leaves the nodes empty while solid fills them with a specified color. fraction requires a value d ∈ [0, 1] which defines how much of the node area is filled. pie is a generalization of fraction where multiple color segments can be used. In the time color filling, all segments have the same size and the whole area is covered. The presence filling divides the area into equal-sized parts. A list of Boolean values defines which parts are filled and which are left white. In Section 5 we will show examples of all types. Not all relations have evolved in the same way and a user might be interested to see these differences. For example, if we look for Adam's long-term partners, we want to discriminate them from coauthors with a short but intense cooperation. First, we divide the time frame, i.e., the interval between the oldest and the youngest time stamp in the data set, into equal-sized periods. The data set must contain sufficient information to determine how strong a relation was in each period. In the DBLP example, the time stamps represent the year of publication. The oldest paper is from 1936, the newest from 2010. We use 75 periods, each covering a single year. The strength of a relation in a single period is the number of joint papers in that year. This information is available in the data set. There are periods in which the relation becomes stronger and periods in which it remains unchanged.
We do not consider relations which become less intense over time. Drawing We use the edge between ego and alter to show the development in different periods. There are two aspects we must consider: when was the relation influenced and how strong was this influence. For each aspect we have implemented a view, i.e., a way to modify the edge drawings. The positions of the nodes are the same for both views so we can easily switch between the two to see both aspects. Time-Color View The time-color view emphasizes the moment in which a relation was influenced. We assign a color to each period which expresses its position in the time frame. There is no linear order of colors but most people accept the purple-blue-green-yellow-orange-red sequence for this purpose [14]. We assign purple and blue tones to periods from the beginning of the time frame. Red tones indicate recent periods. For each period which affected the relation, regardless of how strongly, we add a colored segment to the respective edge. The segment color matches the represented period. Periods with no influence have no representation. The segments are ordered by time, the oldest close to the ego and the newest close to the alter. Figure 3 shows Adam's coauthors in the time-color view. At the bottom, there is a color bar which shows the time-color mapping. Green and yellow segments refer to the 1980s while reddish segments represent the years after 2000. We can see that the relation between Adam and Bob has evolved over a long time while the cooperation with Claire started later. The cooperation with Dave ended in the late 1990s. The segment size depends on the number of periods which are relevant for the respective relation and the available edge length. Figure 4a shows two edges with different content. While edge A covers many consecutive periods, B shows two distinct phases of development marked by an abrupt color change.
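One way to realize the purple-to-red period mapping is to sweep the HSV hue; the concrete 280-degree starting hue below is an illustrative choice, not the paper's exact palette:

```python
import colorsys

def period_color(period, n_periods):
    """Map period index 0..n_periods-1 to a hex color, sweeping the
    HSV hue from 280 deg (purple, oldest period) down to 0 deg (red,
    newest period) through blue, green, yellow and orange."""
    hue_deg = 280.0 * (1.0 - period / max(n_periods - 1, 1))
    r, g, b = colorsys.hsv_to_rgb(hue_deg / 360.0, 1.0, 1.0)
    return '#%02x%02x%02x' % (round(r * 255), round(g * 255), round(b * 255))
```

With the 75 DBLP periods, adjacent years receive hues less than 4 degrees apart, which illustrates why matching segment colors against the bottom bar becomes difficult and why linking and brushing are needed.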
In a first version of this view, we modified the segment lengths based on the strength of influence. The segments of important periods became longer compared to those of less relevant periods. Early user feedback (see Section 6.1) showed that this additional distortion confused the viewers. We gave up on this feature in favor of the intensity view. The number of periods is limited by the number of colors a human can discriminate. In an ideal environment, we can distinguish more than one million colors [8], but the user feedback showed that even the 75 DBLP periods can become problematic. In particular, matching segment colors with the bottom bar was reported to be difficult. We use linking and brushing to compensate. Whenever the mouse cursor moves over a segment, all segments of the same color are painted in double stroke. The bottom bar shows only those segments which are relevant for at least one relation. Intensity View The intensity view emphasizes how strongly a relation was influenced during a period. We use colors to show the strength of development. Periods with little influence are painted in blue while more important ones are presented in reddish tones. Unlike the time-color view, each edge contains segments for all periods. Like before, we leave out periods which are not relevant for any relation. If a period had no influence on a relation, the segment is painted in white so it appears as a gap. Because the number of segments is the same for all edges, one level of distortion is eliminated. Now we can use the position of the segments to infer the period. Cleveland and McGill [4] showed that humans can perceive positions much better than colors. This compensates for the smaller segments. Figure 4b shows two edges from the intensity view. While edge A shows a homogeneous but weak development with only three gaps, edge B represents a younger relation with a strong recent development. Figure 5a shows Adam's coauthor graph in the intensity view. 
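The intensity coloring (a white gap for no influence, blue for weak, red for strong) can be sketched like this; the linear blue-to-red blend is an assumption, not the paper's exact scale:

```python
def intensity_segments(strengths, max_strength):
    """One segment per period: '#ffffff' (a gap) for periods with no
    influence, otherwise a color blending from blue (weak) to red (strong)."""
    segments = []
    for s in strengths:
        if s == 0:
            segments.append('#ffffff')        # gap: no influence in this period
        else:
            frac = s / max_strength           # 0 < frac <= 1
            r = int(255 * frac)
            b = int(255 * (1 - frac))
            segments.append('#%02x00%02x' % (r, b))
    return segments

intensity_segments([0, 1, 2], 2)   # ['#ffffff', '#7f007f', '#ff0000']
```

Because every edge carries one segment per period, the i-th segment of any edge always refers to the same period, which is what lets position encode time.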
The strength of development in a period is equivalent to the number of joint publications in that year. Although Bob and Claire have similar importance values, we can clearly see the differences in development. It is also visible that the last cooperation between Adam and Dave happened some time ago. An additional legend at the right side of the graph (Figure 5a) maps colors and values. Framework Components The framework consists of three major components: the user interface, the graph generator and the interface with the underlying data. The front end is composed of the graph drawing and some additional elements which assist the user. We use the SVG (Scalable Vector Graphics) format for the drawings. Vector graphics are smaller than pixel graphics and can be zoomed without loss of quality. There are a number of frameworks which allow SVG rendering in applications, and many web browsers support a sufficient part of the standard. Figure 6 shows an example of a web-based front end displayed by the Firefox 3.0 browser. The component positions are a suggestion and can be changed if needed. The most important front end function is to link the graphs. We can click on a node, and a new graph is created where this node is the ego. In Figure 6 there is more than one possible relation for a person type ego. A connection menu (Bob) lists the available options. In the example, there is also an external link to Bob's DBLP author page. In theory, SVG graphics can provide smooth changeovers between the graphs where nodes move to their new positions and new ones are faded in. However, only a few browsers support SVG animation. We expect this to improve in the future. There are two other ways to switch to a new graph. The head menu provides a search field which can be used to find a new ego if the node is not contained in the current graph. The menu also contains control elements for the time lens feature. 
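The SVG drawings mentioned above can be produced with plain markup. A hypothetical helper that renders an ego-alter edge as consecutive colored segments might look like this (names and attribute choices are illustrative):

```python
def edge_svg(x1, y1, x2, y2, colors):
    """Render an edge as len(colors) equal-length SVG line segments,
    ordered from the ego (x1, y1) to the alter (x2, y2)."""
    n = len(colors)
    parts = []
    for i, color in enumerate(colors):
        sx, sy = x1 + (x2 - x1) * i / n, y1 + (y2 - y1) * i / n
        ex, ey = x1 + (x2 - x1) * (i + 1) / n, y1 + (y2 - y1) * (i + 1) / n
        parts.append('<line x1="%g" y1="%g" x2="%g" y2="%g" '
                     'stroke="%s" stroke-width="3"/>' % (sx, sy, ex, ey, color))
    return '\n'.join(parts)

svg = edge_svg(0, 0, 300, 0, ['#9900e5', '#00cc00', '#e50000'])
```

Interactive behavior such as the linking-and-brushing highlight would then be attached via JavaScript embedded in the same SVG file, as the paper describes.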
The time lens allows the user to define which periods should be considered for displaying the graphs and computing the relevance function. In Figure 6 we see Adam as if he had only published in the 1990s. The bottom bar shows which periods are relevant here. If we compare the relevance values with those in Figure 3, we clearly see some differences. The head menu also contains a control to modify the maximum number of alters. Often, it is useful to go back to a previous graph. The history bar shows thumbnails of up to four old requests. We can click the thumbnails to enlarge the drawings again. In some situations it is useful to have a textual representation of the data. All nodes have a tooltip menu which appears when the cursor moves over them. In the next section, we will show how they are defined. Like all visual effects, including those we discussed in Sections 3.1 and 3.2, tooltips are created by JavaScript functions which are embedded into the SVG files. No additional command processing is necessary. There is no limit to the complexity of the underlying data source. In most cases we need sophisticated information extraction strategies and hand-tuned queries to achieve acceptable performance. The data which is necessary to draw an entity or a relation might be scattered over the whole data set. Only an expert can provide a fast interface with these repositories by implementing a given Java interface. An important part of this step is to actually define the entities and relations. The expert can utilize caching strategies and pre-calculations if necessary and enforce privacy policies, for example by anonymizing entities. Other framework components request data from the interface by giving a pre-defined key and a data type. A similar technique is used to define the rating function for the relations. Example Applications To demonstrate the framework, we deployed two applications on data sets with different characteristics: DBLP and the German language Wikipedia. 
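The time lens described at the start of this section boils down to filtering events by period before recomputing relevance. A minimal sketch with hypothetical names:

```python
from collections import Counter

def time_lens(events, first_year, last_year):
    """events: (alter, year) pairs. Keep only events inside the lens
    window, then recompute each alter's relevance from what remains."""
    return dict(Counter(alter for alter, year in events
                        if first_year <= year <= last_year))

# Adam as if he had only published in the 1990s
papers = [('Bob', 1989), ('Bob', 1995), ('Claire', 2003), ('Dave', 1992)]
time_lens(papers, 1990, 1999)   # {'Bob': 1, 'Dave': 1}
```

Because both the drawing and the relevance function are derived from the filtered events, narrowing the lens can change which alters appear at all, as the comparison with Figure 3 shows.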
DBLP The coauthor graph drawings we used in this paper are taken from this application. In this section we give some background information and additional examples for drawing details. This visualization is available online 2 and generated some user feedback. The DBLP data set is available as an XML file which lists all records. The size and the structure of this file make efficient queries impossible, so we import the document into a relational database. During this step, we pre-calculate some frequently used values. We take additional information on journals and conferences from HTML pages on the DBLP web server. Entities: We extract three types of entities (counts as of December 8, 2009): person (760,277), word (36,150) and stream (3480). Stream is the term DBLP uses to refer to conferences and journals. Each author with at least one publication listed in DBLP is represented by a person entity. DBLP uses person names as identifiers. If there are two homonymous authors, a numeric suffix is appended to the name. If a person uses different names, the data set contains entries to match them. The search engine is customized to consider this additional data as well. The word entities are derived from the publication titles. We remove stop words and reduce inflected forms to their base form. Relations: Figure 7 shows the relations between Petra and the streams which accepted her papers. The relevance for this relation is defined by the number of accepted papers. The edge coloration shows that Petra stopped publishing at eik, mfcs and stacs in the 1990s. To better understand the reasons for this, we use a time-color filling for the stream nodes. For each year a conference or a journal was active (held a venue or published an issue) we add a segment in the respective color. If a stream is old, some colors will not be shown in the bottom bar. Missing red segments show that eik was not continued after 1994, which explains why there are no further publications. 
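The import step can be illustrated on a toy DBLP-style fragment. Real DBLP records carry many more fields and record types; the helper below is a simplified sketch of how coauthor relation strengths could be pre-calculated:

```python
import xml.etree.ElementTree as ET
from collections import Counter
from itertools import combinations

def coauthor_counts(dblp_xml):
    """Count joint publications per author pair in a DBLP-style fragment."""
    pairs = Counter()
    for record in ET.fromstring(dblp_xml):   # article, inproceedings, ...
        authors = sorted(a.text for a in record.findall('author'))
        for pair in combinations(authors, 2):
            pairs[pair] += 1
    return pairs

sample = """<dblp>
  <article><author>Adam</author><author>Bob</author><year>1995</year></article>
  <article><author>Adam</author><author>Bob</author><year>1998</year></article>
</dblp>"""
coauthor_counts(sample)[('Adam', 'Bob')]   # 2
```

In the actual application these counts would be written to the relational database once, so that building an ego graph does not require rescanning the XML file.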
However, mfcs and stacs were continued after Petra's last publications, so there must be other reasons which the data set does not reveal. With the exception of SIGUCCS, all stream nodes with recent publications are small. This means they do not have many years of activity and are rather young. The pie filling of the ego node shows when Petra was active. The more papers she published in a year, the larger the respective segment. Figure 8 shows which themes were popular at the Journal of Symbolic Logic (JSYML). JSYML is the stream with the most active years in DBLP. We use the words extracted from the publication titles. They are a poor replacement for actual keyword lists but provide acceptable results. Based on the titles from a single year, we apply term frequency–inverse document frequency (tf-idf) [13] to sort out nondescriptive words like system. Kuhn and Wattenhofer [9] used a similar approach to get thematic descriptions of conferences. The ego node has a time-color filling. Red segments are missing here because JSYML was not continued after 2003. We can see different categories of themes. meeting and symbolic were used from the beginning but do not appear later. cardinality was not used at the beginning, while theory and logic appear at all times. Note that meeting and cardinality have a similar importance value although they developed differently. Figure 9 shows the opposite relation. We see the streams related to the word query in the intensity view. tf-idf returns values that are difficult to interpret, but the higher the value, the stronger the influence. We use presence filling to show when a stream started. For each relevant period we add a segment. If the alter was active in this period, the segment is painted blue; otherwise it is left white. The webdb conference for example was established late and therefore could not use any keywords in early years. We can see that icdt is a biennial conference because only every second segment is colored. 
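The tf-idf filtering can be sketched by treating each year's titles as one document, so a word like system that occurs in every year scores zero. This is the standard formulation; the exact weighting used in [13] may differ:

```python
import math
from collections import Counter

def tf_idf(docs):
    """docs: year -> list of title words. Returns per-year tf-idf scores."""
    n = len(docs)
    df = Counter()                 # in how many years does a word occur?
    for words in docs.values():
        df.update(set(words))
    scores = {}
    for year, words in docs.items():
        tf = Counter(words)
        scores[year] = {w: tf[w] / len(words) * math.log(n / df[w]) for w in tf}
    return scores

docs = {2001: ['query', 'logic', 'system'],
        2002: ['query', 'system'],
        2003: ['cardinality', 'system']}
scores = tf_idf(docs)
# 'system' occurs in every year, so its idf = log(3/3) = 0 and it is sorted out
```

Words with a high score in one year but not in others then serve as thematic descriptors for that year.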
There are other relations as well but without additional drawing details. The most important one is a relation between streams which is rated by comparing communities and themes. We also provide a relation between authors and relevant themes. Wikipedia As another application we considered relations in the German language Wikipedia. This data set differs from DBLP in size and density of the relation networks. There are 4.6 million authors (including unregistered) and 2.7 million pages. A page is an article, a user page or an administrative page. While an average DBLP author contributed to 2.89 streams, a Wikipedia author modified on average 12.59 articles. This requires more sophisticated caching and pre-calculation strategies for the data interface. There are also differences in the time frame. While DBLP only contains 75 time stamps (1936-2010), the Wikipedia dump we considered was changed on more than 3000 days since 2001. Via the data interface, we define a period as a month and map the respective time stamps to it. Figure 10 shows the relation between Gil and the articles she modified. Gil is an administrator, so we paint the ego node as a circle. Otherwise, we would have used a rounded rectangle for registered and a triangle for unregistered users. We use presence filling to show in which months an article was changed. The color we use for the presence filling of the alters shows the type of page. Evaluation The evaluation of our approach consists of a field study in which we observed people using the visual interface and a controlled experiment in the laboratory. We used the DBLP data set (Section 5.1) in either case. Basic Field Study In January 2009, we launched a web application using the DBLP data set. For a period of 320 days, we logged which type of graph was requested and which settings were used. To get an approximate mapping of requests and users, we also logged a hash of the session ID. 
Sessions with a very high number of requests (most probably web bots) were excluded. During the observation, there were 42,068 sessions with a total of 107,683 requests. Most sessions terminated after the first request, but in 1277 sessions the user viewed more than ten graphs. The mean length of these long sessions is 30.5, which indicates that the application was actually used for browsing. The time lens was used more often as the session length increased, while the use of the search engine dropped. The intensity view was added later, so there is no significant data on which view was favored. We also received direct feedback. In Section 3.1 we already described the remarks on the time-color view and the resulting modifications. The tenor was that the users considered the program to be useful, but only after they gathered some experience with interpreting the drawings. Especially the drawing details seem to pose the risk of confusion and misinterpretation. In general, there were two groups of users. One requested additional information while the other favored simple drawings. Task-based Study To get more direct information on the usefulness of our approach, we conducted a study in the laboratory. Participants: Two female and eight male undergraduate students aged 22 to 29 participated in the study. All rated themselves as regular computer and web users. Eight participants stated that they knew the DBLP data set, but only one had advanced experience. Nobody was experienced with this framework and the associated visualizations. Setup: After a short introduction on the visual interface and DBLP, we gave the participants time to familiarize themselves with the application. Then we asked them to complete three groups of tasks (G 1 , G 2 and G 3 ) with three problems each. G 1 Tasks could be solved by analyzing a single graph. G 2 Tasks required using a specific feature (for example the time lens) or a specific view. G 3 Open tasks. 
We asked participants to explore the neighborhood of a given entity. Noteworthy constellations should be reported. Because no participant was experienced in the application domain, we had to limit the complexity of the tasks in G 3 . For G 1 and G 2 we expected short answers to specific questions. For G 3 we were mainly interested in how the users applied their experience from G 1 and G 2 . We randomized the order of tasks within each group to counter learning effects. We observed the participants and logged which graphs they requested and which parameters they used. The tasks had to be completed in 30 minutes. At the end of the session, the participants were asked to answer open-ended questions on what they liked and did not like about the framework. Task Results: Eight participants solved all G 1 tasks and six all G 2 tasks. Nobody failed more than one task in a group. For each task we asked if completing it had been easy, slightly difficult or difficult. In 55 out of 60 cases, the task was rated easy. The remaining cases were considered to be slightly difficult. Eight participants were able to find interesting constellations in G 3 . The others reported only uninteresting details or actual misinterpretations. Nine participants stated that the lack of knowledge of the DBLP data set was the major problem. All were positive that they would find more if given additional time. By observing the users and analyzing the log files we found out that: -All tasks (or parts of tasks) which required identifying the relevance of an entity were solved correctly. -At first, the users preferred the time-color view. A task in G 2 forced them to use the intensity view. After that, this view was preferred. -In G 2 half of the users browsed to a new graph using the search function rather than the connection menus. With growing experience this behavior changed. In G 3 all participants used the connection menus if possible. -Not understanding drawing details was the major cause of errors. 
While there was no problem with the fraction filling, many users misinterpreted the fillings of the stream nodes (see Figure 7). Eight participants reported that they were confused because the filling used colors which did not appear in the bottom bar. -Tooltips were used more often than we expected. All participants tried to validate their interpretation of the drawing with textual information if possible. -The time lens was considered an important part of the application. It was used in many cases, even if it did not contribute to the solution. In general, the comments were positive. The users liked the small and clear graph drawings and the idea of presenting a rating by the length of an edge. Only one person criticized the fact that closely related alters are placed far away from the ego. As in the field study feedback, learning to use the application was a major issue. Four participants explicitly stated that they had to learn how to use the application first. There also was a significant learning effect. Unlike the feedback from the field study, the participants requested more textual information. Four persons explicitly mentioned text integrated into the drawings. For example, the relevance value should be visible next to the nodes. Two proposed an additional view which should contain the information in tabular form. This supports our finding that tooltips are frequently used. The tooltips were consistently mentioned as a positive aspect. Four participants stated that the information density was too high. But, as in the field study feedback, all users posted ideas on what information should be added. Three participants proposed additional drawing details like patterns or more types of node shapes. This shows that a graph definition has to be done very carefully with respect to the user group. The definer must not give in to the wish to integrate as much information as possible but has to make reasonable selections. 
Conclusion and Future Work In this paper, we presented a framework that can visualize relations in any given data set. Large and complex relation networks are reduced to small graphs. The drawings of these graphs resemble ego-centered networks and convey information on when and how strongly a relation evolved. The drawings are part of a visual interface which supports the user in understanding the data and links the graphs with each other. Experiments from an online application and the results of a basic usability study indicated that our approach is useful, though it poses the risk of generating graphs which overstrain the user. Future work will have to address the problem of how difficult it is to understand the drawing details. These studies will also have to include other drawing details. There is no clear definition of an ego-centered graph in the literature. Many definitions allow additional edges between the alters or even nodes that are in no direct connection with the ego. The decision to include a node or an edge is usually based on complex strategies or the intuition of the person who generates the graph. It is unclear whether the inverted ego network could be extended this way and whether this would benefit the user.
4,837
1009.5183
1591748822
Understanding constellations in large data collections has become a common task. One obstacle a user has to overcome is the internal complexity of these repositories. For example, extracting connected data from a normalized relational database requires knowledge of the table structure which might not be available for the casual user. In this paper we present a visualization framework which presents the collection as a set of entities and relations (on the data level). Using rating functions, we divide large relation networks into small graphs which resemble ego-centered networks. These graphs are connected so the user can browse from one to another. To further assist the user, we present two views which embed information on the evolution of the relations into the graphs. Each view emphasizes another aspect of temporal development. The framework can be adapted to any repository by a flexible data interface and a graph configuration file. We present some first web-based applications including a visualization of the DBLP data set. We use the DBLP visualization to evaluate our approach.
Many systems combine different types of visualizations. PaperLens @cite_6 and FacetLens @cite_8 provide bar charts, textual result lists and nested node drawings to show entities in faceted data sets. Among others, data can be plotted against time, like the number of an author's publications by year. Both tools provide extensive filtering and sorting functions. The DB-Browser @cite_3 features similar views including simple graph drawings. PaperLens and DB-Browser visualize DBLP data. Both provide aggregated information like the number of joint papers for two given authors.
{ "abstract": [ "For a scientific researcher it is more and more vital to find relevant publications with their correct bibliographical data, not only for accurate citations but particularly for getting further information about their current research topic. This paper describes a new approach to develop user-friendly interfaces: Multi-Layered-Browsing. Two example applications are introduced that play a central role in searching, browsing and visualising bibliographical data.", "PaperLens is a novel visualization that reveals trends, connections, and activity throughout a conference community. It tightly couples views across papers, authors, and references. PaperLens was developed to visualize 8 years (1995-2002) of InfoVis conference proceedings and was then extended to visualize 23 years (1982-2004) of the CHI conference proceedings. This paper describes how we analyzed the data and designed PaperLens. We also describe a user study to focus our redesign efforts along with the design changes we made to address usability issues. We summarize lessons learned in the process of design and scaling up to the larger set of CHI conference papers.", "Previous research has shown that faceted browsing is effective and enjoyable in searching and browsing large collections of data. In this work, we explore the efficacy of interactive visualization systems in supporting exploration and sensemaking within faceted datasets. To do this, we developed an interactive visualization system called FacetLens, which exposes trends and relationships within faceted datasets. FacetLens implements linear facets to enable users not only to identify trends but also to easily compare several trends simultaneously. Furthermore, it offers pivot operations to allow users to navigate the faceted dataset using relationships between items. 
We evaluate the utility of the system through a description of insights gained while experts used the system to explore the CHI publication repository as well as a database of funding grant data, and report a formative user study that identified usability issues." ], "cite_N": [ "@cite_3", "@cite_6", "@cite_8" ], "mid": [ "1546552929", "2122843645", "2110328813" ] }
A Framework for an Ego-centered and Time-aware Visualization of Relations in Arbitrary Data Repositories
The amount of data we create and store increases every day. The result is a growing number of large and possibly complex repositories. Much effort has been invested in defining powerful query languages like SQL and XQuery, and there are many fast and reliable algorithms to apply them. However, these techniques do little to hide the internal structure of the collection. For example, a highly normalized relational database might require several table joins to gather all information which belongs to one entry. While an expert can use information on the internal structure to speed up queries, a casual novice might easily be overstrained. This is a problem because the user wastes valuable time needed for the actual task on understanding the data organization. A common way to hide internal complexity is to present the collection as a set of entities which are connected by various types of relations. Consider the DBLP bibliographic data set 1 . It contains meta data for 1.3 million publications. Authors are one type of entity that can be found in the collection. They are connected by the coauthor relation if they cooperated on at least one publication. There are several approaches to use this model in applications. Some prefer a textual representation [1] [9] while others provide a more complex visualization [10] [15]. Relations also provide a context for a given entity and thereby support exploring the data set. The networks which are defined by relations in real-world data sets can be very large and dense. The DBLP coauthor network consists of 750,000 authors and 2.5 million relations. Suppose we want to examine the publication behavior of author Adam. Even finding Adam in this network is cumbersome. Figure 1 shows a node-link drawing of Adam's direct neighborhood in the coauthor network. It still contains 178 nodes and 1154 edges (177 Adam-coauthor + 977 coauthor-coauthor). The drawing hides the fact that some relations are stronger than others. 
For example, Adam has cooperated 25 times with Bob, but only once with Jack, so the tie to Bob is much stronger. It also does not reveal at which time the relations were formed and how they have evolved over time. In this paper we show how relation networks can be divided into small connected graphs. In Section 2 we describe how filtering and rating can reduce the neighborhood graphs. We obtain simple graph drawings which resemble ego-centered networks [5]. These drawings can be enriched with information on how a relation has evolved. In Section 3 we present two views which cover different aspects of evolution. One emphasizes when a relation was influenced while the other focuses on how strong this influence was. A single graph is of little use if it is not linked with others. In Section 4 we show how the graphs are embedded into a framework which allows browsing the graphs. In Section 5 we present examples of visualized relations in DBLP and the German language Wikipedia. We conclude this paper with a two-part evaluation of our approach. A common approach to visualize graphs is the node-link diagram. Entities are represented by nodes. If two entities are in relation, their nodes are connected by an edge. In most real-world data sets there are a number of different relations. The networks defined by these relations can become very large and do not allow comprehensible node-link drawings. To reduce their size, we generate a set of ego-centered drawings. In a first step we split the relation networks into small graphs. Each graph represents the direct proximity of one entity. We call this entity the ego of the drawing. In the coauthor example, we obtain 750,000 graphs, one for each author. Figure 1 shows that this approach is virtually useless as long as the drawing violates several aesthetic criteria which have been identified for this type of drawing [2] [7]. Above all, the number of edge crossings is problematic here. 
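The splitting step, combined with the joint-publication strength from the Bob/Jack example above, amounts to keeping only the k strongest neighbors per ego. A sketch with hypothetical names:

```python
from collections import Counter

def top_alters(ego, papers, k=10):
    """papers: iterable of author sets, one per publication. The relevance
    of an alter is the number of joint publications with the ego; return
    the k most relevant neighbors for the drawing."""
    relevance = Counter()
    for authors in papers:
        if ego in authors:
            for a in authors:
                if a != ego:
                    relevance[a] += 1
    return relevance.most_common(k)

papers = [{'Adam', 'Bob'}] * 25 + [{'Adam', 'Jack'}]
top_alters('Adam', papers, k=2)   # [('Bob', 25), ('Jack', 1)]
```

Restricting each drawing to the top-k alters is what shrinks Adam's 178-node neighborhood of Figure 1 to a readable graph.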
To improve the comprehensibility of the split graphs, we only include the most relevant neighbors. For most relations there is a straightforward definition of relevance. In the coauthor example, we can use the number of joint publications. If we put the node which represents the ego in the center of the drawing and place the selected neighbors (the alters) at a distance according to their relevance, we obtain an ego-centered network [5]. We do not draw edges between alters to avoid edge crossings. Figure 2a shows Adam's ego-centered network. We have included the ten most relevant alters. Bob is placed closest because the relation with him is the strongest. Eve is the least relevant of the included authors. Inverted Ego Graphs In Section 3 we will use the edges to display information on how the relation has evolved over time. In many cases, the most relevant alter is of special interest, and we can expect that the development of this relation is more complex than others. However, the edge which represents this relation is the shortest in the drawing and thereby provides the least space for additional information. To increase the available space, we introduce the inverted ego graph. We place the most relevant alter at maximum distance to the ego while less important alters are positioned closer. To further increase the edge length, we place the ego and the most relevant alter at opposite sides of the drawing. This contradicts the "closely related entities are placed close to each other" metaphor, but it provides sufficient space. Figure 2b shows Adam's inverted ego graph. Ego-centered networks originate from the field of qualitative network analysis. Aside from the relevance value, these drawings often contain additional information which might be important for a user who interprets them. We can add information by modifying size, shape and filling of the nodes. For example, in Figure 2b we use the size of the alters to show the total number of publications. 
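The inverted placement can be sketched as a mapping from relevance to edge length: the strongest alter gets the full edge length, weaker alters sit proportionally closer to the ego. This is a one-dimensional sketch of the edge lengths only; the real drawings also assign angles around the ego:

```python
def inverted_edge_lengths(alters, max_length=400.0):
    """alters: (name, relevance) pairs, strongest first. In the inverted
    ego graph the MOST relevant alter is placed farthest from the ego."""
    top = alters[0][1]
    return {name: max_length * rel / top for name, rel in alters}

lengths = inverted_edge_lengths([('Bob', 25), ('Claire', 10), ('Eve', 5)])
# Bob gets the longest edge (400.0), Eve the shortest (80.0)
```

The long edge to the most relevant alter is exactly what Section 3 exploits: it offers the most room for the colored period segments.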
Obviously, Eve is by far the most active author. However, she has a low relevance because only a small part of her publications were joint work with Adam. We use the node filling to show this fraction. Mike for example published all work with Adam. Together with the low number of publications, this suggests that he is a PhD student rather than an established researcher. Both ego and alters represent persons so we use the same node shape to draw both. Table 1 shows the six types of node fillings and the parameters they require. none leaves the nodes empty while solid fills them with a specified color. fraction requires a value d ∈ [0, 1] which defines how much of the node area is filled. pie is a generalization of fraction where multiple color segments can be used. In the time color filling, all segments have the same size and the whole area is covered. The presence filling divides the area into equal-sized parts. A list of Boolean values defines which parts are filled and which are left white. In Section 5 we will show examples of all types. Not all relations have evolved in the same way and a user might be interested to see these differences. For example, if we look for Adam's long-term partners, we want to discriminate them from coauthors with a short but intense cooperation. First, we divide the time frame, i.e., the interval between the oldest and the youngest time stamp in the data set into equal-sized periods. The data set must contain sufficient information to determine how strong a relation was at each period. In the DBLP example, the time stamps represent the year of publication. The oldest paper is from 1936, the newest from 2010. We use 75 periods, each covering a single year. The strength of a relation in a single period is the number of joint papers in that year. This information is available in the data set. There are periods in which the relation becomes stronger and periods in which it remains unchanged. 
We do not consider relations which become less intense over time. Drawing We use the edge between ego and alter to show the development in different periods. There are two aspects we must consider: when was the relation influenced and how strong was this influence. For each aspect we have implemented a view, i.e., a way to modify the edge drawings. The positions of the nodes are the same for both views so we can easily switch between the two to see both aspects. Time-Color View The time-color view emphasizes the moment in which a relation was influenced. We assign a color to each period which expresses its position in the time frame. There is no linear order of colors but most people accept the purple-blue-green-yellow-orange-red sequence for this purpose [14]. We assign purple and blue tones to periods from the beginning of the time frame. Red tones indicate recent periods. For each period which affected the relation, regardless how strong, we add a colored segment to the respective edge. The segment color matches the represented period. Periods with no influence have no representation. The segments are ordered by time, the oldest close to the ego and the newest close to the alter. Figure 3 shows Adam's coauthors in the time-color view. At the bottom, there is a color bar which shows the time-color mapping. Green and yellow segments refer to the 1980s while reddish segments represent the years after 2000. We can see that the relation between Adam and Bob has evolved over a long time while the cooperation with Claire started later. The cooperation with Dave ended in the late 1990s. The segment size depends on the number of periods which are relevant for the respective relation and the available edge length. Figure 4a shows two edges with different content. While edge A covers many consecutive periods, B shows two distinct phases of development marked by an abrupt color change. 
In a first version of this view, we modified the segment length based on the strength of influence. The segments of important periods became longer compared to those of less relevant periods. Early user feedback (see Section 6.1) showed that this additional distortion confused the viewers. We dropped this feature in favor of the intensity view. The number of periods is limited by the number of colors a human can discriminate. In an ideal environment, we can distinguish more than one million colors [8], but the user feedback showed that even the 75 DBLP periods can become problematic. In particular, matching segment colors with the bottom bar was reported to be difficult. We use linking and brushing to compensate. Whenever the mouse cursor moves over a segment, all segments of the same color are painted in double stroke. The bottom bar shows only those segments which are relevant for at least one relation. Intensity View The intensity view emphasizes how strongly a relation was influenced during a period. We use colors to show the strength of development. Periods with little influence are painted in blue while more important ones are presented in reddish tones. Unlike the time-color view, each edge contains segments for all periods. As before, we leave out periods which are not relevant for any relation. If a period had no influence on a relation, the segment is painted in white so it appears as a gap. Because the number of segments is the same for all edges, one level of distortion is eliminated. Now we can use the position of the segments to determine the period. Cleveland and McGill [4] showed that humans can perceive positions much better than colors. This compensates for the smaller segments. Figure 4b shows two edges from the intensity view. While edge A shows a homogeneous but weak development with only three gaps, edge B represents a younger relation with a strong recent development. Figure 5a shows Adam's coauthor graph in the intensity view. 
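The intensity view can be sketched in the same style; the linear blue-to-red interpolation normalized by the strongest period is our assumption, not necessarily the framework's actual scale:

```python
# Sketch of the intensity view: one segment per period for every edge,
# white where the period had no influence, and a blue -> red ramp for
# weak -> strong influence (linear normalization is our assumption).
def intensity_segments(strengths):
    max_s = max(strengths.values()) or 1   # guard against all-zero relations
    segments = []
    for period in sorted(strengths):
        s = strengths[period]
        if s == 0:
            segments.append((period, (1.0, 1.0, 1.0)))   # white gap
        else:
            t = s / max_s                                # 0 < t <= 1
            segments.append((period, (t, 0.0, 1.0 - t))) # blue -> red
    return segments

segs = intensity_segments({2000: 0, 2001: 1, 2002: 4})
assert segs[0][1] == (1.0, 1.0, 1.0)   # gap
assert segs[2][1] == (1.0, 0.0, 0.0)   # strongest period is pure red
```

Because every edge emits a segment for every period, segment position alone identifies the period, which is exactly the property the text attributes to this view.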
The strength of development in a period is equivalent to the number of joint publications in this year. Although Bob and Claire have similar importance values, we can clearly see the differences in development. It is also visible that the last cooperation between Adam and Dave happened some time ago. An additional legend at the right side of the graph maps colors and values. In Figure 5a Framework Components The framework consists of three major components: the user interface, the graph generator and the interface to the underlying data. The front end is composed of the graph drawing and some additional elements which assist the user. We use the SVG (Scalable Vector Graphics) format for the drawings. Vector graphics are smaller than pixel graphics and can be zoomed without loss of quality. There are a number of frameworks which allow SVG rendering in applications, and many web browsers support a sufficient part of the standard. Figure 6 shows an example of a web-based front end displayed by the Firefox 3.0 browser. The component positions are a suggestion and can be changed if needed. The most important front end function is to link the graphs. We can click on a node and a new graph is created where this node is the ego. In Figure 6 there is more than one possible relation for a person-type ego. A connection menu (Bob) lists the available options. In the example, there is also an external link to Bob's DBLP author page. In theory, SVG graphics can provide smooth changeovers between the graphs where nodes move to their new positions and new ones are faded in. However, only a few browsers support SVG animation. We expect this to improve in the future. There are two other ways to switch to a new graph. The head menu provides a search field which can be used to find a new ego if the node is not contained in the current graph. The menu also contains control elements for the time lens feature. 
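A minimal sketch of the SVG generation and graph linking, under heavy simplifications of our own: ego on the left, alters on the right, edge length proportional to relevance (inverted, as in Section 2), and a hypothetical embedded JavaScript function load() standing in for the framework's actual linking code:

```python
# Sketch: emit a minimal clickable SVG for an inverted ego graph.
# load() is a hypothetical JavaScript hook, not the framework's real API.
import xml.etree.ElementTree as ET

def ego_graph_svg(ego, relevance):
    ranked = sorted(relevance.items(), key=lambda kv: -kv[1])
    max_rel = ranked[0][1]
    height = 30 * len(ranked) + 30
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="460" height="{height}">']
    parts.append(f'<text x="10" y="{height // 2}">{ego}</text>')
    for i, (name, rel) in enumerate(ranked):
        y = 30 * (i + 1)
        x = 60 + 360 * rel / max_rel   # inverted: most relevant alter farthest away
        parts.append(f'<line x1="50" y1="{height // 2}" x2="{x:.0f}" y2="{y}" stroke="black"/>')
        parts.append(f"<text x=\"{x:.0f}\" y=\"{y}\" onclick=\"load('{name}')\">{name}</text>")
    parts.append('</svg>')
    return '\n'.join(parts)

svg = ego_graph_svg("Adam", {"Bob": 25, "Jack": 1})
assert ET.fromstring(svg).tag.endswith("svg")   # well-formed XML
```

Clicking an alter's text node would call load() with that name, which is the behavior the front end implements: the clicked node becomes the ego of a newly generated graph.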
The time lens allows the user to define which periods should be considered for displaying the graphs and computing the relevance function. In Figure 6 we see Adam as if he had only published in the 1990s. The bottom bar shows which periods are relevant here. If we compare the relevance values with those in Figure 3, we clearly see some differences. The head menu also contains a control to modify the maximum number of alters. Often, it is useful to go back to a previous graph. The history bar shows thumbnails of up to four old requests. We can click the thumbnails to enlarge the drawings again. In some situations it is useful to have a textual representation of the data. All nodes have a tooltip menu which appears when the cursor moves over them. In the next section, we will show how they are defined. Like all visual effects, including those we discussed in Sections 3.1 and 3.2, tooltips are created by JavaScript functions which are embedded into the SVG files. No additional command processing is necessary. There is no limit to the complexity of the underlying data source. In most cases we need sophisticated information extraction strategies and hand-tuned queries to achieve acceptable performance. The data which is necessary to draw an entity or a relation might be scattered over the whole data set. Only an expert can provide a fast interface to these repositories by implementing a given Java interface. An important part of this step is to actually define the entities and relations. The expert can utilize caching strategies and pre-calculations if necessary and enforce privacy policies, for example by making entities anonymous. Other framework components request data from the interface by giving a pre-defined key and a data type. A similar technique is used to define the rating function for the relations. Example Applications To demonstrate the framework, we deployed two applications on data sets with different characteristics: DBLP and the German language Wikipedia. 
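The time lens amounts to restricting the relevance computation to a window of periods. A sketch, again assuming the flat list-of-years layout rather than the framework's real data interface:

```python
# Sketch of the time lens: relevance (= number of joint papers) is
# recomputed over only the selected periods. The (lo, hi) year-window
# representation of the lens is our own assumption.
def relevance_with_time_lens(joint_paper_years, lens=None):
    """lens=None means the full time frame; otherwise an inclusive (lo, hi) window."""
    if lens is None:
        return len(joint_paper_years)
    lo, hi = lens
    return sum(1 for y in joint_paper_years if lo <= y <= hi)

years = [1988, 1992, 1995, 2004]
assert relevance_with_time_lens(years) == 4
assert relevance_with_time_lens(years, (1990, 1999)) == 2  # "as if he had only published in the 1990s"
```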
DBLP The coauthor graph drawings we used in this paper are taken from this application. In this section we give some background information and additional examples for drawing details. This visualization is available online 2 and generated some user feedback. The DBLP data set is available as an XML file which lists all records. The size and the structure of this file make efficient queries impossible, so we import the document into a relational database. During this step, we pre-calculate some frequently used values. We take additional information on journals and conferences from HTML pages on the DBLP web server. Entities: We extract three types of entities (numbers as of December 8, 2009): person (760,277), word (36,150) and stream (3480). Stream is the term DBLP uses to refer to conferences and journals. Each author with at least one publication listed in DBLP is represented by a person entity. DBLP uses person names as identifiers. If there are two homonymous authors, a numeric suffix is appended to the name. If a person uses different names, the data set contains entries to match them. The search engine is customized to consider this additional data as well. The word entities are derived from the publication titles. We remove stop words and invert inflections. Relations: Figure 7 shows the relations between Petra and the streams which accepted her papers. The relevance for this relation is defined by the number of accepted papers. The edge coloration shows that Petra stopped publishing at eik, mfcs and stacs in the 1990s. To better understand the reasons for this, we use a time-color filling for the stream nodes. For each year a conference or a journal was active (held a venue or published an issue) we add a segment in the respective color. If a stream is old, some colors will not be shown in the bottom bar. Missing red segments show that eik was not continued after 1994, which explains why there are no further publications. 
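The word-entity extraction ("remove stop words and invert inflections") can be sketched very crudely; the tiny stop-word list and the suffix-stripping rules below are placeholders for whatever normalization the real import pipeline uses:

```python
# Very crude sketch of word-entity extraction from publication titles:
# lowercase, drop stop words, and "invert inflections" by stripping
# plural suffixes. Stop-word list and rules are illustrative assumptions.
STOP_WORDS = {"a", "an", "the", "of", "for", "and", "on", "in", "to", "with"}

def word_entities(title):
    words = []
    for w in title.lower().split():
        w = w.strip(".,:;()")
        if not w or w in STOP_WORDS:
            continue
        if w.endswith("ies"):
            w = w[:-3] + "y"        # queries -> query
        elif w.endswith("s") and len(w) > 3:
            w = w[:-1]              # systems -> system
        words.append(w)
    return words

assert word_entities("On the Complexity of Queries") == ["complexity", "query"]
```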
However, mfcs and stacs were continued after Petra's last publications, so there must be other reasons which the data set does not reveal. With the exception of SIGUCCS, all stream nodes with recent publications are small. This means they do not have many years of activity and are rather young. The pie filling of the ego node shows when Petra was active. The more papers she published in a year, the larger the respective segment. Figure 8 shows which themes were popular at the Journal of Symbolic Logic (JSYML). JSYML is the stream with the most active years in DBLP. We use the words extracted from the publication titles. They are a poor replacement for actual keyword lists but provide acceptable results. Based on the titles from a single year, we apply term frequency-inverse document frequency (tf-idf) [13] to sort out nondescriptive words like system. Kuhn and Wattenhofer [9] used a similar approach to get thematic descriptions of conferences. The ego node has a time-color filling. Red segments are missing here because JSYML was not continued after 2003. We can see different categories of themes. meeting and symbolic were used from the beginning but do not appear later. cardinality was not used at the beginning, and theory and logic appeared at all times. Note that meeting and cardinality have a similar importance value although they developed differently. Figure 9 shows the opposite relation. We see the streams related to the word query in the intensity view. tf-idf returns values that are difficult to interpret, but the higher the value, the stronger the influence. We use presence filling to show when a stream started. For each relevant period we add a segment. If the alter was active in this period, the segment is painted blue; otherwise it is painted white. The webdb conference for example was established late and therefore could not use any keywords in early years. We can see that icdt is a biennial conference because only every second segment is colored. 
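The tf-idf step can be sketched by treating each year's titles as one document, so that a word like system, appearing in every year, receives idf zero and is sorted out. The exact tf and idf variants used by the paper are not stated; the standard ones below are an assumption:

```python
# Sketch of the tf-idf filter over yearly title sets. Each year is one
# document; nondescriptive words appearing in every year get idf = 0.
import math
from collections import Counter

def tfidf(yearly_docs):
    n = len(yearly_docs)
    df = Counter(w for words in yearly_docs.values() for w in set(words))
    scores = {}
    for year, words in yearly_docs.items():
        tf = Counter(words)
        scores[year] = {w: tf[w] / len(words) * math.log(n / df[w]) for w in tf}
    return scores

docs = {
    2001: ["query", "system", "logic"],
    2002: ["system", "cardinality"],
    2003: ["system", "query"],
}
s = tfidf(docs)
assert s[2001]["system"] == 0.0                 # appears every year -> idf = log(1) = 0
assert s[2002]["cardinality"] > s[2003]["query"]
```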
There are other relations as well but without additional drawing details. The most important one is a relation between streams which is rated by comparing communities and themes. We also provide a relation between authors and relevant themes. Wikipedia As another application we considered relations in the German language Wikipedia. This data set differs from DBLP in size and density of the relation networks. There are 4.6 million authors (including unregistered) and 2.7 million pages. A page is an article, a user page or an administrative page. While an average DBLP author contributed to 2.89 streams, a Wikipedia author modified on average 12.59 articles. This requires more sophisticated caching and pre-calculation strategies for the data interface. There are also differences in the time frame. While DBLP only contains 75 time stamps (1936-2010), the Wikipedia dump we considered was changed on more than 3000 days since 2001. Via the data interface, we define a period as a month and map the respective time stamps to it. Figure 10 shows the relation between Gil and the articles she modified. Gil is an administrator, so we paint the ego node as a circle. Otherwise, we would have used a rounded rectangle for registered and a triangle for unregistered users. We use presence filling to show in which months an article was changed. The color we use for the presence filling of the alters shows the type of page. Evaluation The evaluation of our approach consists of a field study in which we observed people using the visual interface and a controlled experiment in the laboratory. We used the DBLP data set (Section 5.1) in both cases. Basic Field Study In January 2009, we launched a web application using the DBLP data set. For a period of 320 days, we logged which type of graph was requested and which settings were used. To get an approximate mapping of requests and users, we also logged a hash of the session ID. 
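The month-period mapping described above for the Wikipedia data interface can be sketched as a simple index computation; counting periods from January 2001 is our assumption based on the stated start of the dump:

```python
# Sketch: map a Wikipedia time stamp to its month period, counted from
# January 2001 (the data interface defines one period per month; the
# epoch choice is our assumption).
from datetime import date

EPOCH = date(2001, 1, 1)

def month_period(ts):
    return (ts.year - EPOCH.year) * 12 + (ts.month - EPOCH.month)

assert month_period(date(2001, 1, 15)) == 0
assert month_period(date(2002, 3, 1)) == 14
```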
Sessions with a very high number of requests (most probably web bots) were excluded. During the observation, there were 42,068 sessions with a total of 107,683 requests. Most sessions terminated after the first request, but in 1277 sessions the user viewed more than ten graphs. The mean length of these long sessions is 30.5 requests, which indicates that the application was actually used for browsing. The time lens was used more often as the session length increased, while the use of the search engine dropped. The intensity view was added later, so there is no significant data on which view was favored. We also received direct feedback. In Section 3.1 we already described the remarks on the time-color view and the resulting modifications. The tenor was that the users considered the program to be useful, but only after they had gathered some experience with interpreting the drawings. Especially the drawing details seem to pose the risk of confusion and misinterpretations. In general, there were two groups of users. One requested additional information while the other favored simple drawings. Task-based Study To get more direct information on the usefulness of our approach, we conducted a study in the laboratory. Participants: Two female and eight male undergraduate students aged 22 to 29 participated in the study. All rated themselves as regular computer and web users. Eight participants stated that they knew the DBLP data set but only one had advanced experience. Nobody was experienced with this framework and the associated visualizations. Setup: After a short introduction on the visual interface and DBLP, we gave the participants time to familiarize themselves with the application. Then we asked them to complete three groups of tasks (G1, G2 and G3) with three problems each. G1: Tasks could be solved by analyzing a single graph. G2: Tasks required using a specific feature (for example, the time lens) or a specific view. G3: Open tasks. 
We asked participants to explore the neighborhood of a given entity. Noteworthy constellations should be reported. Because no participant was experienced in the application domain, we had to limit the complexity of the tasks in G3. For G1 and G2 we expected short answers to specific questions. For G3 we were mainly interested in how the users applied their experience from G1 and G2. We randomized the order of tasks within each group to counter learning effects. We observed the participants and logged which graphs they requested and which parameters they used. The tasks had to be completed in 30 minutes. At the end of the session, the participants were asked to answer open-ended questions on what they liked and did not like about the framework. Task Results: Eight participants solved all G1 tasks and six all G2 tasks. Nobody failed more than one task in a group. For each task we asked if completing it had been easy, slightly difficult or difficult. In 55 out of 60 cases, the task was rated easy. The remaining cases were considered to be slightly difficult. Eight participants were able to find interesting constellations in G3. The others reported only uninteresting details or actual misinterpretations. Nine participants stated that the lack of knowledge on the DBLP data set was the major problem. All were positive they would find more if given additional time. By observing the users and analyzing the log files we found out that:
- All tasks (or parts of tasks) which required determining the relevance of an entity were solved correctly.
- At first, the users preferred the time-color view. A task in G2 forced them to use the intensity view. After that, this view was preferred.
- In G2, half of the users browsed to a new graph using the search function rather than the connection menus. With growing experience this behavior changed. In G3 all participants used the connection menus if possible.
- Not understanding drawing details was the major cause of errors. 
While there was no problem with the fraction filling, many users misinterpreted the fillings of the stream nodes (see Figure 7). Eight participants reported that they were confused because the filling used colors which did not appear in the bottom bar.
- Tooltips were used more often than we expected. All participants tried to validate their interpretation of the drawing with textual information if possible.
- The time lens was considered an important part of the application. It was used in many cases, even if it did not contribute to the solution.
In general, the comments were positive. The users liked the small and clear graph drawings and the idea of presenting a rating by the length of an edge. Only one person criticized the fact that closely related alters are placed far away from the ego. As in the field study feedback, learning to use the application was a major issue. Four participants explicitly stated that they had to learn how to use the application first. There also was a significant learning effect. Unlike the feedback from the field study, the participants requested more textual information. Four persons explicitly mentioned text integrated in the drawings. For example, the relevance value should be visible next to the nodes. Two proposed an additional view which should contain the information in tabular form. This supports our finding that tooltips are frequently used. The tooltips were consistently mentioned as a positive aspect. Four participants stated that the information density was too high. But, as in the field study feedback, all users posted ideas on what information should be added. Three participants proposed additional drawing details like patterns or more types of node shapes. This shows that a graph definition has to be done very carefully with respect to the user group. The definer must resist the wish to integrate as much information as possible and has to make reasonable selections. 
Conclusion and Future Work In this paper, we presented a framework that can visualize relations in any given data set. Large and complex relation networks are reduced to small graphs. The drawings of these graphs resemble ego-centered networks and convey information on when and how strongly a relation evolved. The drawings are part of a visual interface which supports the user in understanding the data and links the graphs with each other. Experiences from an online application and the results of a basic usability study indicated that our approach is useful, though it poses the risk of generating graphs which overstrain the user. Future work will have to address the problem of how difficult it is to understand the drawing details. These studies will also have to include other drawing details. There is no clear definition of an ego-centered graph in the literature. Many definitions allow additional edges between the alters or even nodes that are in no direct connection with the ego. The decision to include a node or an edge is usually based on complex strategies or the intuition of the person who generates the graph. It is unclear whether the inverted ego network could be extended this way and whether this would benefit the user.
4,837
1009.5183
1591748822
Understanding constellations in large data collections has become a common task. One obstacle a user has to overcome is the internal complexity of these repositories. For example, extracting connected data from a normalized relational database requires knowledge of the table structure which might not be available for the casual user. In this paper we present a visualization framework which presents the collection as a set of entities and relations (on the data level). Using rating functions, we divide large relation networks into small graphs which resemble ego-centered networks. These graphs are connected so the user can browse from one to another. To further assist the user, we present two views which embed information on the evolution of the relations into the graphs. Each view emphasizes another aspect of temporal development. The framework can be adapted to any repository by a flexible data interface and a graph configuration file. We present some first web-based applications including a visualization of the DBLP data set. We use the DBLP visualization to evaluate our approach.
Not all approaches use the entity and relation abstraction. The ThemeRiver @cite_12 application shows how the frequency of a term in a set of documents changes over time. The results for multiple terms are presented as a plot where one axis shows the time and the other axis the frequency.
{ "abstract": [ "The ThemeRiver visualization depicts thematic variations over time within a large collection of documents. The thematic changes are shown in the context of a time-line and corresponding external events. The focus on temporal thematic change within a context framework allows a user to discern patterns that suggest relationships or trends. For example, the sudden change of thematic strength following an external event may indicate a causal relationship. Such patterns are not readily accessible in other visualizations of the data. We use a river metaphor to convey several key notions. The document collection's time-line, selected thematic content and thematic strength are indicated by the river's directed flow, composition and changing width, respectively. The directed flow from left to right is interpreted as movement through time and the horizontal distance between two points on the river defines a time interval. At any point in time, the vertical distance, or width, of the river indicates the collective strength of the selected themes. Colored \"currents\" flowing within the river represent individual themes. A current's vertical width narrows or broadens to indicate decreases or increases in the strength of the individual theme." ], "cite_N": [ "@cite_12" ], "mid": [ "2106738877" ] }
A Framework for an Ego-centered and Time-aware Visualization of Relations in Arbitrary Data Repositories
The amount of data we create and store increases every day. The result is a growing number of large and possibly complex repositories. Much effort has been invested in defining powerful query languages like SQL and XQuery, and there are many fast and reliable algorithms to apply them. However, these techniques do little to hide the internal structure of the collection. For example, a highly normalized relational database might require several table joins to gather all information which belongs to one entry. While an expert can use information on the internal structure to speed up queries, a casual user might easily be overstrained. This is a problem because the user wastes valuable time needed for the actual task on understanding the data organization. A common way to hide internal complexity is to present the collection as a set of entities which are connected by various types of relations. Consider the DBLP bibliographic data set 1 . It contains meta data for 1.3 million publications. Authors are one type of entity that can be found in the collection. They are connected by the coauthor relation if they cooperated on at least one publication. There are several approaches to use this model in applications. Some prefer a textual representation [1] [9] while others provide a more complex visualization [10] [15]. Relations also provide a context for a given entity and thereby support exploring the data set. The networks which are defined by relations in real-world data sets can be very large and dense. The DBLP coauthor network consists of 750,000 authors and 2.5 million relations. Suppose we want to examine the publication behavior of author Adam. Even finding Adam in this network is cumbersome. Figure 1 shows a node-link drawing of Adam's direct neighborhood in the coauthor network. It still contains 178 nodes and 1154 edges (177 Adam-coauthor + 977 coauthor-coauthor). The drawing hides the fact that some relations are stronger than others. 
For example, Adam has cooperated 25 times with Bob, but only once with Jack, so the tie to Bob is much stronger. It also does not reveal at which time the relations were formed and how they have evolved over time. In this paper we show how relation networks can be divided into small connected graphs. In Section 2 we describe how filtering and rating can reduce the neighborhood graphs. We obtain simple graph drawings which resemble ego-centered networks [5]. These drawings can be enriched with information on how a relation has evolved. In Section 3 we present two views which cover different aspects of evolution. One emphasizes when a relation was influenced while the other focuses on how strong this influence was. A single graph is of little use if it is not linked with others. In Section 4 we show how the graphs are embedded into a framework which allows browsing the graphs. In Section 5 we present examples of visualized relations in DBLP and the German language Wikipedia. We conclude this paper with a two-part evaluation of our approach. A common approach to visualize graphs is the node-link diagram. Entities are represented by nodes. If two entities are in a relation, their nodes are connected by an edge. In most real-world data sets there are a number of different relations. The networks defined by these relations can become very large and do not allow comprehensible node-link drawings. To reduce their size, we generate a set of ego-centered drawings. In a first step we split the relation networks into small graphs. Each graph represents the direct proximity of one entity. We call this entity the ego of the drawing. In the coauthor example, we obtain 750,000 graphs, one for each author. Figure 1 shows that this approach is virtually useless if the drawing violates several aesthetic criteria which have been identified for this type of drawing [2] [7]. Above all, the number of edge crossings is problematic here. 
To improve the comprehensibility of the split graphs, we only include the most relevant neighbors. For most relations there is a straightforward definition of relevance. In the coauthor example, we can use the number of joint publications. If we put the node which represents the ego in the center of the drawing and place the selected neighbors (the alters) at a distance according to their relevance, we obtain an ego-centered network [5]. We do not draw edges between alters to avoid edge crossings. Figure 2a shows Adam's ego-centered network. We have included the ten most relevant alters. Bob is placed closest because the relation with him is the strongest. Eve is the least relevant of the included authors. Inverted Ego Graphs In Section 3 we will use the edges to display information on how the relation has evolved over time. In many cases, the most relevant alter is of special interest, and we can expect that the development of this relation is more complex than others. However, the edge which represents this relation is the shortest in the drawing and thereby provides the least space for additional information. To increase the available space, we introduce the inverted ego graph. We place the most relevant alter at maximum distance to the ego while less important alters are positioned closer. To further increase the edge length, we place the ego and the most relevant alter at opposite sides of the drawing. This contradicts the "closely related entities are placed close to each other" metaphor, but it provides sufficient space. Figure 2b shows Adam's inverted ego graph. Ego-centered networks originate from the field of qualitative network analysis. Aside from the relevance value, these drawings often contain additional information which might be important for a user who interprets them. We can add information by modifying size, shape and filling of the nodes. For example, in Figure 2b we use the size of the alters to show the total number of publications. 
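The alter selection and the inverted placement can be sketched together; the concrete distance bounds and the linear scaling are illustrative assumptions, not the framework's actual layout parameters:

```python
# Sketch of the inverted ego graph layout: keep the k most relevant
# alters and give the MOST relevant one the MAXIMUM distance, so its
# edge has the most room for time segments. d_min/d_max and the linear
# scaling are our own illustrative choices.
def inverted_ego_layout(relevance, k=10, d_min=40, d_max=400):
    top = sorted(relevance.items(), key=lambda kv: -kv[1])[:k]
    max_rel = top[0][1]
    return {name: d_min + (d_max - d_min) * rel / max_rel for name, rel in top}

rels = {"Bob": 25, "Eve": 3, "Jack": 1, "Mike": 4}
layout = inverted_ego_layout(rels, k=3)
assert set(layout) == {"Bob", "Mike", "Eve"}   # Jack is filtered out
assert layout["Bob"] == 400                    # strongest tie, longest edge
```

In a conventional ego-centered drawing the scaling would simply be reversed; inverting it is exactly the trade described in the text: the placement metaphor is sacrificed for edge length.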
Obviously, Eve is by far the most active author. However, she has a low relevance because only a small part of her publications were joint work with Adam. We use the node filling to show this fraction. Mike for example published all work with Adam. Together with the low number of publications, this suggests that he is a PhD student rather than an established researcher. Both ego and alters represent persons so we use the same node shape to draw both. Table 1 shows the six types of node fillings and the parameters they require. none leaves the nodes empty while solid fills them with a specified color. fraction requires a value d ∈ [0, 1] which defines how much of the node area is filled. pie is a generalization of fraction where multiple color segments can be used. In the time color filling, all segments have the same size and the whole area is covered. The presence filling divides the area into equal-sized parts. A list of Boolean values defines which parts are filled and which are left white. In Section 5 we will show examples of all types. Not all relations have evolved in the same way and a user might be interested to see these differences. For example, if we look for Adam's long-term partners, we want to discriminate them from coauthors with a short but intense cooperation. First, we divide the time frame, i.e., the interval between the oldest and the youngest time stamp in the data set into equal-sized periods. The data set must contain sufficient information to determine how strong a relation was at each period. In the DBLP example, the time stamps represent the year of publication. The oldest paper is from 1936, the newest from 2010. We use 75 periods, each covering a single year. The strength of a relation in a single period is the number of joint papers in that year. This information is available in the data set. There are periods in which the relation becomes stronger and periods in which it remains unchanged. 
We do not consider relations which become less intense over time. Drawing We use the edge between ego and alter to show the development in different periods. There are two aspects we must consider: when was the relation influenced and how strong was this influence. For each aspect we have implemented a view, i.e., a way to modify the edge drawings. The positions of the nodes are the same for both views so we can easily switch between the two to see both aspects. Time-Color View The time-color view emphasizes the moment in which a relation was influenced. We assign a color to each period which expresses its position in the time frame. There is no linear order of colors but most people accept the purple-blue-green-yellow-orange-red sequence for this purpose [14]. We assign purple and blue tones to periods from the beginning of the time frame. Red tones indicate recent periods. For each period which affected the relation, regardless how strong, we add a colored segment to the respective edge. The segment color matches the represented period. Periods with no influence have no representation. The segments are ordered by time, the oldest close to the ego and the newest close to the alter. Figure 3 shows Adam's coauthors in the time-color view. At the bottom, there is a color bar which shows the time-color mapping. Green and yellow segments refer to the 1980s while reddish segments represent the years after 2000. We can see that the relation between Adam and Bob has evolved over a long time while the cooperation with Claire started later. The cooperation with Dave ended in the late 1990s. The segment size depends on the number of periods which are relevant for the respective relation and the available edge length. Figure 4a shows two edges with different content. While edge A covers many consecutive periods, B shows two distinct phases of development marked by an abrupt color change. 
In a first version of this view, we modified a segment length based on the strength of influence. The segments of important periods became longer compared to those of less relevant periods. Early user feedback (see Section 6.1) showed that this additional distortion confused the viewers. We gave up on this feature in favor of the intensity view. The number of periods is limited by the number of colors a human can discriminate. In an ideal environment, we can distinguish more than one million colors [8] but the user feedback showed that even the 75 DBLP periods can become problematic. Especially, matching segment colors with the bottom bar was reported to be difficult. We use linking and brushing to compensate. Whenever the mouse cursor moves over a segment, all segments of the same color are painted in double stroke. The bottom bar shows only those segments which are relevant for at least one relation. Intensity View The intensity view emphasizes how strong a relation was influenced during a period. We use colors to show the strength of development. Periods with little influence are painted in blue while more important ones are presented in reddish tones. Unlike the time-color view, each edge contains segments for all periods. Like before we leave out periods which are not relevant for any relation. If a period had no influence on a relation, the segment is painted in white so it appears as a gap. Because the number of segments is the same for all edges, one level of distortion is eliminated. Now we can use the position of the segments to guess the period. Cleveland and McGill [4] showed that humans can perceive positions much better than colors. This compensates for the smaller segments. Figure 4b shows two edges from the intensity view. While edge A shows a homogeneous but weak development with only three gaps, edge B represents a younger relation with a strong recent development. Figure 5a shows Adam's coauthor graph in the intensity view. 
The strength of development in a period is equivalent to the number of joint publications in this year. Although Bob and Claire have similar importance values, we can clearly see the differences in development. It is also visible that the last cooperation between Adam and Dave happened some time ago. An additional legend at the right side of the graph maps colors and values, as shown in Figure 5a. Framework Components The framework consists of three major components: the user interface, the graph generator and the interface with the underlying data. The front end is composed of the graph drawing and some additional elements which assist the user. We use the SVG (Scalable Vector Graphics) format for the drawings. Vector graphics are smaller than pixel graphics and can be zoomed without loss of quality. There are a number of frameworks which allow SVG rendering in applications and many web browsers support a sufficient part of the standard. Figure 6 shows an example of a web-based front end displayed by the Firefox 3.0 browser. The component positions are a suggestion and can be changed if needed. The most important front end function is to link the graphs. We can click on a node and a new graph is created where this node is the ego. In Figure 6 there is more than one possible relation for a person-type ego. A connection menu (Bob) lists the available options. In the example, there is also an external link to Bob's DBLP author page. In theory, SVG graphics can provide smooth changeovers between the graphs where nodes move to their new positions and new ones are faded in. However, only a few browsers support SVG animation. We expect this to improve in the future. There are two other ways to switch to a new graph. The head menu provides a search field which can be used to find a new ego if the node is not contained in the current graph. The menu also contains control elements for the time lens feature. 
The time lens allows the user to define which periods should be considered for displaying the graphs and computing the relevance function. In Figure 6 we see Adam as if he only published in the 1990s. The bottom bar shows which periods are relevant here. If we compare the relevance values with those in Figure 3 we clearly see some differences. The head menu also contains a control to modify the maximum number of alters. Often, it is useful to go back to a previous graph. The history bar shows thumbnails of up to four old requests. We can click the thumbnails to enlarge the drawings again. In some situations it is useful to have a textual representation of the data. All nodes have a tooltip menu which appears when the cursor moves over them. In the next section, we will show how they are defined. Like all visual effects, including those we discussed in Sections 3.1 and 3.2, tooltips are created by JavaScript functions which are embedded into the SVG files. No additional command processing is necessary. There is no limit to the complexity of the underlying data source. In most cases we need sophisticated information extraction strategies and hand-tuned queries to achieve acceptable performance. The data which is necessary to draw an entity or a relation might be scattered over the whole data set. Only an expert can provide a fast interface with these repositories by implementing a given Java interface. An important part of this step is to actually define the entities and relations. The expert can utilize caching strategies and pre-calculations if necessary and enforce privacy policies, for example by making entities anonymous. Other framework components request data from the interface by giving a pre-defined key and a data type. A similar technique is used to define the rating function for the relations. Example Applications To demonstrate the framework, we deployed two applications on data sets with different characteristics: DBLP and the German language Wikipedia. 
DBLP The coauthor graph drawings we used in this paper are taken from this application. In this section we give some background information and additional examples for drawing details. This visualization is available online and has generated some user feedback. The DBLP data set is available as an XML file which lists all records. The size and the structure of this file make efficient queries impossible, so we import the document into a relational database. During this step, we pre-calculate some frequently used values. We take additional information on journals and conferences from HTML pages on the DBLP web server. Entities: We extract three types of entities (numbers as of December 8, 2009): person (760,277), word (36,150) and stream (3480). Stream is the term DBLP uses to refer to conferences and journals. Each author with at least one publication listed in DBLP is represented by a person entity. DBLP uses person names as identifiers. If there are two homonymous authors, a numeric suffix is appended to the name. If a person uses different names the data set contains entries to match them. The search engine is customized to consider this additional data as well. The word entities are derived from the publication titles. We remove stop words and invert inflections. Relations: Figure 7 shows the relations between Petra and the streams which accepted her papers. The relevance for this relation is defined by the number of accepted papers. The edge coloration shows that Petra stopped publishing at eik, mfcs and stacs in the 1990s. To better understand the reasons for this, we use a time-color filling for the stream nodes. For each year a conference or a journal was active (held a venue or published an issue) we add a segment in the respective color. If a stream is old some colors will not be shown in the bottom bar. Missing red segments show that eik was not continued after 1994 which explains why there are no further publications. 
However, mfcs and stacs were continued after Petra's last publications so there must be other reasons which the data set does not reveal. With the exception of SIGUCCS all stream nodes with recent publications are small. This means they do not have many years of activity and are rather young. The pie filling of the ego node shows when Petra was active. The more papers she published in a year the larger the respective segment. Figure 8 shows which themes were popular at the Journal of Symbolic Logic (JSYML). JSYML is the stream with the most active years in DBLP. We use the words extracted from the publication titles. They are a poor replacement for actual keyword lists but provide acceptable results. Based on the titles from a single year we apply term frequency-inverse document frequency (tf-idf) [13] to sort out nondescriptive words like system. Kuhn and Wattenhofer [9] used a similar approach to get thematic descriptions of conferences. The ego node has a time-color filling. Red segments are missing here because JSYML was not continued after 2003. We can see different categories of themes. meeting and symbolic were used from the beginning but do not appear later. cardinality was not used at the beginning and theory and logic appeared at all times. Note that meeting and cardinality have a similar importance value although they developed differently. Figure 9 shows the opposite relation. We see the streams related to the word query in the intensity view. tf-idf returns values that are difficult to interpret but the higher the value the stronger the influence. We use presence filling to show when a stream started. For each relevant period we add a segment. If the alter was active in this period, the segment is painted blue; otherwise it is painted white. The webdb conference for example was established late and therefore could not use any keywords in early years. We can see that icdt is a biennial conference because only every second segment is colored. 
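The tf-idf filtering mentioned above can be sketched as follows. Treating each year's titles as one document is our simplifying assumption; the paper does not spell out its exact weighting:

```python
import math
from collections import Counter

def tfidf_top_words(docs, top_k=3):
    """docs: {year: list of title words}. Each year counts as one 'document'.
    Returns the top_k most descriptive words per year by tf-idf; words that
    occur in every year (like 'system') get an idf of zero and drop out."""
    df = Counter()                               # document frequency per word
    for words in docs.values():
        df.update(set(words))
    n = len(docs)
    top = {}
    for year, words in docs.items():
        tf = Counter(words)
        score = {w: (tf[w] / len(words)) * math.log(n / df[w]) for w in tf}
        top[year] = sorted(score, key=score.get, reverse=True)[:top_k]
    return top
```

A word such as system that appears in every year scores zero regardless of its frequency, which is exactly why tf-idf suppresses nondescriptive terms.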
There are other relations as well but without additional drawing details. The most important one is a relation between streams, which is rated by comparing communities and themes. We also provide a relation between author and relevant themes. Wikipedia As another application we considered relations in the German language Wikipedia. This data set differs from DBLP in size and density of the relation networks. There are 4.6 million authors (including unregistered) and 2.7 million pages. A page is an article, a user page or an administrative page. While an average DBLP author contributed to 2.89 streams, a Wikipedia author modified on average 12.59 articles. This requires more sophisticated caching and pre-calculation strategies for the data interface. There are also differences in the time frame. While DBLP only contains 75 time stamps (1936-2010) the Wikipedia dump we considered was changed on more than 3000 days since 2001. Via the data interface, we define a period as a month and map the respective time stamps to it. Figure 10 shows the relation between Gil and the articles she modified. Gil is an administrator, so we paint the ego node as a circle. Otherwise, we would have used a rounded rectangle for registered and a triangle for unregistered users. We use presence filling to show in which months an article was changed. The color we use for the presence filling of the alters shows the type of page. Evaluation The evaluation of our approach consists of a field study in which we observed people using the visual interface and a controlled experiment in the laboratory. We used the DBLP data set (Section 5.1) in either case. Basic Field Study In January 2009, we launched a web application using the DBLP data set. For a period of 320 days, we logged which type of graph was requested and which settings were used. To get an approximate mapping between requests and users, we also logged a hash of the session ID. 
Sessions with a very high number of requests (most probably web bots) were excluded. During the observation, there were 42,068 sessions with a total of 107,683 requests. Most sessions terminated after the first request, but in 1277 sessions the user viewed more than ten graphs. The mean length of these long sessions is 30.5, which indicates that the application was actually used for browsing. The time lens was used more often when the session length increased while the use of the search engine dropped. The intensity view was added later so there is no significant data on which view was favored. We also received direct feedback. In Section 3.1 we already described the remarks on the time-color view and the resulting modifications. The tenor was that the users considered the program to be useful but only after they gathered some experience with interpreting the drawings. Especially the drawing details seem to pose the risk of confusion and misinterpretations. In general, there were two groups of users. One requested additional information while the other favored simple drawings. Task-based Study To get more direct information on the usefulness of our approach we conducted a study in the laboratory. Participants: Two female and eight male undergraduate students aged 22 to 29 participated in the study. All rated themselves as regular computer and web users. Eight participants stated that they knew the DBLP data set but only one had advanced experience. Nobody was experienced with this framework and the associated visualizations. Setup: After a short introduction on the visual interface and DBLP, we gave the participants time to familiarize themselves with the application. Then we asked them to complete three groups of tasks (G 1 , G 2 and G 3 ) with three problems each. G 1 Tasks could be solved by analyzing a single graph. G 2 Tasks required using a specific feature (for example time lens) or a specific view. G 3 Open tasks. 
We asked participants to explore the neighborhood of a given entity. Noteworthy constellations should be reported. Because no participant was experienced in the application domain, we had to limit the complexity of the tasks in G 3 . For G 1 and G 2 we expected short answers to specific questions. For G 3 we were mainly interested in how the users applied their experience from G 1 and G 2 . We randomized the order of tasks within each group to counter learning effects. We observed the participants and logged which graphs they requested and which parameters they used. The tasks had to be completed in 30 minutes. At the end of the session, the participants were asked to answer open-ended questions on what they liked and did not like about the framework. Task Results: Eight participants solved all G 1 tasks and six all G 2 tasks. Nobody failed more than one task in a group. For each task we asked if completing it had been easy, slightly difficult or difficult. In 55 out of 60 cases, the task was rated easy. The remaining cases were considered to be slightly difficult. Eight participants were able to find interesting constellations in G 3 . The others reported only uninteresting details or actual misinterpretations. Nine participants stated that the lack of knowledge on the DBLP data set was the major problem. All were confident that they would find more if given additional time. By observing the users and analyzing the log files we found out that: -All tasks (or parts of tasks) which required telling the relevance of an entity were correctly solved. -At first, the users preferred the time-color view. A task in G 2 forced them to use the intensity view. After that, this view was preferred. -In G 2 half of the users browsed to a new graph using the search function rather than the connection menus. With growing experience this behavior changed. In G 3 all participants used the connection menus if possible. -Not understanding drawing details was the major cause of errors. 
While there was no problem with the fraction filling, many users misinterpreted the fillings of the stream nodes (see Figure 7). Eight participants reported that they were confused because the filling used colors which did not appear in the bottom bar. -Tooltips were used more often than we expected. All participants tried to validate their interpretation of the drawing with textual information if possible. -The time lens was considered an important part of the application. It was used in many cases, even if it did not contribute to the solution. In general, the comments were positive. The users liked the small and clear graph drawings and the idea of presenting a rating by the length of an edge. Only one person criticized the fact that closely related alters are placed far away from the ego. As in the field study feedback, learning to use the application was a major issue. Four participants explicitly stated that they had to learn how to use the application first. There also was a significant learning effect. Unlike the feedback from the field study, the participants requested more textual information. Four persons explicitly mentioned text integrated in the drawings. For example, the relevance value should be visible next to the nodes. Two proposed an additional view which should contain the information in tabular form. This supports our finding that tooltips are frequently used. The tooltips were consistently mentioned as a positive aspect. Four participants stated that the information density was too high. But, as in the field study feedback, all users posted ideas on what information should be added. Three participants proposed additional drawing details like patterns or more types of node shapes. This shows that a graph definition has to be done very carefully with respect to the user group. The definer must not give in to the wish to integrate as much information as possible but has to make reasonable selections. 
Conclusion And Future Work In this paper, we presented a framework that can visualize relations in any given data set. Large and complex relation networks are reduced to small graphs. The drawings of these graphs resemble ego-centered networks and convey information on when and how strong a relation evolved. The drawings are part of a visual interface which supports the user in understanding the data and links the graphs with each other. Experiments from an online application and the results of a basic usability study indicated that our approach is useful though it poses the risk of generating graphs which overstrain the user. Future work will have to address the problem of how difficult it is to understand the drawing details. These studies will also have to include other drawing details. There is no clear definition of an ego-centered graph in the literature. Many definitions allow additional edges between the alters or even nodes that are in no direct connection with the ego. The decision to include a node or an edge is usually based on complex strategies or the intuition of the person who generates the graph. It is unclear whether the inverted ego network could be extended this way and whether this would benefit the user.
4,837
1009.3796
2952658630
(Non-)portability of Prolog programs is widely considered as an important factor in the lack of acceptance of the language. Since 1995, the core of the language is covered by the ISO standard 13211-1. Since 2007, YAP and SWI-Prolog have established a basic compatibility framework. This article describes and evaluates this framework. The aim of the framework is running the same code on both systems rather than migrating an application. We show that today, the portability within the family of Edinburgh Quintus derived Prolog implementations is good enough to allow for maintaining portable real-world applications.
Another popular approach is to write applications for environment @math and to completely emulate environment @math on top of the target environment @math . One of the most extreme examples here is http://www.winehq.org, which completely emulates the Windows API on top of POSIX systems. The opposite is Cygwin @cite_3 , which emulates the POSIX API on Windows platforms.
{ "abstract": [ "Development tools which accompany numerous variants of the Unix Operating System such as compilers, shells, editors, and other assorted development utilities can be a blessing for applied researchers. This review examines the Cygwin toolkit, a free toolkit containing ports of many of the popular GNU tools and utilities to the Windows 95, 98, and NT environments. This toolkit can make life easier for applied researchers who find themselves working within the confines of a Windows-based environment. Copyright © 2000 John Wiley & Sons, Ltd." ], "cite_N": [ "@cite_3" ], "mid": [ "2016790197" ] }
Portability of Prolog programs: theory and case-studies
0
1009.4102
1633665642
This article presents a new search algorithm for the NP-hard problem of optimizing functions of binary variables that decompose according to a graphical model. It can be applied to models of any order and structure. The main novelty is a technique to constrain the search space based on the topology of the model. When pursued to the full search depth, the algorithm is guaranteed to converge to a global optimum, passing through a series of monotonously improving local optima that are guaranteed to be optimal within a given and increasing Hamming distance. For a search depth of 1, it specializes to Iterated Conditional Modes. Between these extremes, a useful tradeoff between approximation quality and runtime is established. Experiments on models derived from both illustrative and real problems show that approximations found with limited search depth match or improve those obtained by state-of-the-art methods based on message passing and linear programming.
The Lazy Flipper is related in at least four ways to existing work. First of all, it generalizes Iterated Conditional Modes (ICM) for binary variables @cite_5 . While ICM leaves all variables except one fixed in each step, the Lazy Flipper can optimize over larger (for small models: all) connected subgraphs of a graphical model. Furthermore, it extends Block-ICM @cite_14 that optimizes over specific subsets of variables in grid graphs to irregular and higher-order graphical models.
{ "abstract": [ "may 7th, 1986, Professor A. F. M. Smith in the Chair] SUMMARY A continuous two-dimensional region is partitioned into a fine rectangular array of sites or \"pixels\", each pixel having a particular \"colour\" belonging to a prescribed finite set. The true colouring of the region is unknown but, associated with each pixel, there is a possibly multivariate record which conveys imperfect information about its colour according to a known statistical model. The aim is to reconstruct the true scene, with the additional knowledge that pixels close together tend to have the same or similar colours. In this paper, it is assumed that the local characteristics of the true scene can be represented by a nondegenerate Markov random field. Such information can be combined with the records by Bayes' theorem and the true scene can be estimated according to standard criteria. However, the computational burden is enormous and the reconstruction may reflect undesirable largescale properties of the random field. Thus, a simple, iterative method of reconstruction is proposed, which does not depend on these large-scale characteristics. The method is illustrated by computer simulations in which the original scene is not directly related to the assumed random field. Some complications, including parameter estimation, are discussed. Potential applications are mentioned briefly.", "Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. 
While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm (\"loopy\" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy." ], "cite_N": [ "@cite_5", "@cite_14" ], "mid": [ "1554544485", "2134963415" ] }
The Lazy Flipper: MAP Inference in Higher-Order Graphical Models by Depth-limited Exhaustive Search
Energy functions that depend on thousands of binary variables and decompose according to a graphical model [1,2,3,4] into potential functions that depend on subsets of all variables have been used successfully for pattern analysis, e.g. in the seminal works [5,6,7,8]. An important problem is the minimization of the sum of potentials, i.e. the search for an assignment of zeros and ones to the variables that minimizes the energy. This problem can be solved efficiently by dynamic programming if the graph is acyclic [9] or its treewidth is small enough [3], and by finding a minimum s-t-cut [6] if the energy function is (permutation) submodular [10,11]. In general, the problem is NP-hard [10]. For moderate problem sizes, exact optimization is sometimes tractable by means of Mixed Integer Linear Programming (MILP) [12,13]. Contrary to popular belief, some practical computer vision problems can indeed be solved to optimality by modern MILP solvers (cf. Section 5). However, all such solvers are eventually overburdened when the problem size becomes too large. In cases where exact optimization is intractable, one has to settle for approximations. While substantial progress has been made in this direction, a deterministic non-redundant search algorithm that constrains the search space based on the topology of the graphical model has not been proposed before. This article presents a depth-limited exhaustive search algorithm, the Lazy Flipper, that does just that. The Lazy Flipper starts from an arbitrary initial assignment of zeros and ones to the variables that can be chosen, for instance, to minimize the sum of only the first order potentials of the graphical model. Starting from this initial configuration, it searches for flips of variables that reduce the energy. As soon as such a flip is found, the current configuration is updated accordingly, i.e. in a greedy fashion. In the beginning, only single variables are flipped. 
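The greedy flip search, from single flips up to a depth limit, can be sketched as follows. For brevity this sketch enumerates all subsets up to the given depth rather than only the connected subsets the real algorithm restricts to via the CS-tree, and it omits the tag lists; the potential tables are illustrative:

```python
from itertools import combinations

def energy(x, unary, pairwise):
    """unary[i][v]: cost of variable i taking value v in {0, 1};
    pairwise[(i, j)][v][w]: cost of the pair (i, j) taking values (v, w)."""
    e = sum(unary[i][x[i]] for i in range(len(x)))
    e += sum(tbl[x[i]][x[j]] for (i, j), tbl in pairwise.items())
    return e

def lazy_flip(x, unary, pairwise, depth=1):
    """Greedily accept any flip of at most `depth` variables that lowers
    the energy; depth=1 is ICM, larger depths escape larger local minima."""
    n = len(x)
    subsets = [s for k in range(1, depth + 1) for s in combinations(range(n), k)]
    best = energy(x, unary, pairwise)
    improved = True
    while improved:
        improved = False
        for s in subsets:
            y = list(x)
            for i in s:
                y[i] ^= 1                     # flip the variables in the subset
            e = energy(y, unary, pairwise)
            if e < best:                      # accept improving flips greedily
                x, best, improved = y, e, True
    return x, best
```

With three variables whose unary terms prefer 0 but whose pairwise terms strongly reward agreement, depth 1 gets stuck at the all-ones configuration while depth 3 reaches the global optimum, illustrating the runtime/quality tradeoff the paper describes.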
Once a configuration is found whose energy can no longer be reduced by flipping of single variables, all those subsets of two and successively more variables that are connected via potentials in the graphical model are considered. When a subset of more than one variable is flipped, all smaller subsets that are affected by the flip are revisited. This allows the Lazy Flipper to perform an exhaustive search over all subsets of variables whose flip potentially reduces the energy. Two special data structures described in Section 3 are used to represent each subset of connected variables precisely once and to exclude subsets from the search whose flip cannot reduce the energy due to the topology of the graphical model and the history of unsuccessful flips. These data structures, the Lazy Flipper algorithm and an experimental evaluation of state-of-the-art optimization algorithms on higher-order graphical models are the main contributions of this article. Overall, the new algorithm has four favorable properties: (i) It is strictly convergent. While a global minimum is found when searching through all subgraphs (typically not tractable), approximate solutions with a guaranteed quality certificate (Section 4) are found if the search space is restricted to subgraphs of a given maximum size. The larger the subgraphs are allowed to be, the tighter the upper bound on the minimum energy becomes. This allows for a favorable tradeoff between runtime and approximation quality. (ii) Unlike in brute force search, the runtime of lazy flipping depends on the topology of the graphical model. It is exponential in the worst case but can be shorter compared to brute force search by an amount that is exponential in the number of variables. It is approximately linear in the size of the model for a fixed maximum search depth. (iii) The Lazy Flipper can be applied to graphical models of any order and topology, including but not limited to the more standard grid graphs. 
Directed Bayesian Networks and undirected Markov Random Fields are processed in the exact same manner; they are converted to factor graph models [14] before lazy flipping. (iv) Only trivial operations are performed on the graphical model, namely graph traversal and evaluations of potential functions. These operations are cheap compared, for instance, to the summation and minimization of potential functions performed by message passing algorithms, and require only an implicit specification of potential functions in terms of program code that computes the function value for any given assignment of values to the variables. Experiments on simulated and real-world problems, submodular and nonsubmodular functions, grids and irregular graphs (Section 5) assess the quality of Lazy Flipper approximations, their convergence as well as the dependence of the runtime of the algorithm on the size of the model and the search depth. The results are put into perspective by a comparison with Iterated Conditional Modes (ICM) [5], Belief Propagation (BP) [9,14], Tree-reweighted BP [15,4] and a Dual Decomposition ansatz using sub-gradient descent methods [16,17]. The Lazy Flipper Data Structures Two special data structures are crucial to the Lazy Flipper. The first data structure that we call a connected subgraph tree (CS-tree) ensures that only connected subsets of variables are considered, i.e. sets of variables which are connected via potentials in the graphical model. Moreover, it ensures that every such subset is represented precisely once (and not repeatedly) by an ordered sequence of its variables, cf. [30]. The rationale behind this concept is the following: If the flip of one variable and the flip of another variable not connected to the first one do not reduce the energy then it is pointless to try a simultaneous flip of both variables because the (energy increasing) contributions from both flips would sum up. 
Furthermore, if the flip of a disconnected set of variables reduces the energy then the same and possibly better reductions can be obtained by flipping connected subsets of this set consecutively, in any order. All disconnected subsets of variables can therefore be excluded from the search if the connected subsets are searched ordered by their size. Finding a unique representative for each connected subset of variables is important. The alternative would be to consider all sequences of pairwise distinct variables in which each variable is connected to at least one of its predecessors and to ignore the fact that many of these sequences represent the same set. Sampling algorithms that select and grow connected subsets in a randomized fashion do exactly this. However, the redundancy is large. As an example, consider a connected subset of six variables of a 2-dimensional grid graph as depicted in Fig. 1a. Although there is only one connected set that contains all six variables, 208 out of the 6! = 720 possible sequences of these variables meet the requirement that each variable is connected to at least one of its predecessors. This 208-fold redundancy hampers the exploration of the search space by means of randomized algorithms; it is avoided in lazy flipping at the cost of storing one unique representative for every connected subgraph in the CS-tree. The second data structure is a tag list that prevents the repeated assessment of unsuccessful flips. The idea is the following: If some variables have been flipped in one iteration (and the current best configuration has been updated accordingly), it suffices to revisit only those sets of variables that are connected to at least one variable that has been flipped. All other sets of variables are excluded from the search because the potentials that depend on these variables are unaffected by the flip and have been assessed in their current state before. 
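This bookkeeping is what the tag list (described in detail in Section 3.2) provides: a Boolean vector plus an explicit list of the tagged indices, so that resetting all tags costs only as much as the number of tags set. A minimal sketch, with method names chosen by us:

```python
class TagList:
    """Per-variable Boolean tags plus an explicit list of tagged indices,
    so untag_all() need not re-initialize the whole vector."""
    def __init__(self, num_variables):
        self.is_tagged = [False] * num_variables
        self.tagged = []                      # indices tagged since the last reset

    def tag(self, i):
        if not self.is_tagged[i]:
            self.is_tagged[i] = True
            self.tagged.append(i)

    def untag_all(self):
        for i in self.tagged:                 # touch only the tagged entries
            self.is_tagged[i] = False
        self.tagged.clear()
```

This matters when only a handful of the (possibly millions of) variables are affected by a flip.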
The tag list and the connected subgraph tree are essential to the Lazy Flipper and are described in the following sections, 3.1 and 3.2. For a quick overview, the reader can however skip these sections, take for granted that it is possible to efficiently enumerate all connected subgraphs of a graphical model, ordered by their size, and refer directly to the main algorithm (Section 4 and Alg. 1). All non-trivial sub-functions used in the main algorithm are related to tag lists and the CS-tree and are described in detail now. Connected Subgraph Tree (CS-tree) The CS-tree represents subsets of connected variables uniquely. Every node in the CS-tree except the special root node is labeled with the integer index of one variable in the graphical model. The same variable index is assigned to several nodes in the CS-tree unless the graphical model is completely disconnected. The CS-tree is constructed such that every connected subset of variables in the graphical model corresponds to precisely one path in the CS-tree from a node to the root node, the node labels along the path indicating precisely the variables in the subset, and vice versa, there exists precisely one connected subset of variables in the graphical model for each path in the CS-tree from a node to the root node. In order to guarantee by construction of the CS-tree that each subset of connected variables is represented precisely once, the variable indices of each subset are put in a special order, namely the lexicographically smallest order in which each variable is connected to at least one of its predecessors. The following definition of these sequences of variable indices is recursive and therefore motivates an algorithm for the construction of the CS-tree for the Lazy Flipper. A small grid model and its complete CS-tree are depicted in Fig. 1. Definition 1 (CSR-Sequence). Given an undirected graph G = (V, E) whose m ∈ N vertices V = {1, . . . 
, m} are integer indices, every sequence that consists of only one index is called connected subset representing (CSR). Given n ∈ N and a CSR-sequence (v_1, . . . , v_n), a sequence (v_1, . . . , v_n, v_{n+1}) of n + 1 indices is called a CSR-sequence precisely if the following conditions hold: (i) v_{n+1} is not among its predecessors, i.e. ∀j ∈ {1, . . . , n} : v_j ≠ v_{n+1}. (ii) v_{n+1} is connected to at least one of its predecessors, i.e. ∃j ∈ {1, . . . , n} : {v_j, v_{n+1}} ∈ E. (iii) v_{n+1} > v_1. (iv) If n ≥ 2 and v_{n+1} could have been added at an earlier position j ∈ {2, . . . , n} to the sequence, fulfilling (i)-(iii), then all subsequent vertices v_j, . . . , v_n are smaller than v_{n+1}, i.e. ∀j ∈ {2, . . . , n} : ({v_{j−1}, v_{n+1}} ∈ E ⇒ (∀k ∈ {j, . . . , n} : v_k < v_{n+1})) . (1) Based on this definition, three functions are sufficient to recursively build the CS-tree T of a graphical model G, starting from the root node. The function q = growSubset(T, G, p) appends to a node p in the CS-tree the smallest variable index that is not yet among the children of p and fulfills (i)-(iv) for the CSR-sequence of variable indices on the path from p to the root node. It returns the appended node or the empty set if no suitable variable index exists. The function q = firstSubsetOfSize(T, G, n) traverses the CS-tree on the current deepest level n − 1, calling the function growSubset for each leaf until a node can be appended and thus, the first subset of size n has been found. Finally, the function q = nextSubsetOfSameSize(T, G, p) starts from a node p, finds its parent and traverses from there in level order, calling growSubset for each node to find the length-lexicographic successor of the CSR-sequence associated with the node p, i.e. the representative of the next subset of the same size. These functions are used by the Lazy Flipper (Alg. 1) to construct the CS-tree.
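Definition 1 can be checked directly by enumeration. The following sketch applies conditions (i)-(iv) to all sequences of distinct vertices of a 2 × 2 grid (a toy example, not the model of Fig. 1) and verifies that exactly one CSR-sequence remains per connected subset:

```python
from itertools import combinations, permutations

V = [1, 2, 3, 4]                          # 2x2 grid, row-major indices
E = {frozenset(e) for e in [(1, 2), (1, 3), (2, 4), (3, 4)]}

def adj(u, v):
    return frozenset((u, v)) in E

def is_csr(seq):
    # Conditions (i)-(iv) of Definition 1, checked for every prefix;
    # (i) holds by construction since seq has distinct elements.
    for n in range(1, len(seq)):
        v = seq[n]                                         # v_{n+1}
        if not any(adj(seq[j], v) for j in range(n)):      # (ii)
            return False
        if not v > seq[0]:                                 # (iii)
            return False
        for p in range(n - 1):                             # (iv)
            if adj(seq[p], v) and any(seq[k] >= v for k in range(p + 1, n)):
                return False
    return True

csr = [seq for r in range(1, 5) for sub in combinations(V, r)
       for seq in permutations(sub) if is_csr(seq)]

def connected(sub):
    comp, rest = {sub[0]}, set(sub[1:])
    while True:
        grow = {v for v in rest if any(adj(v, u) for u in comp)}
        if not grow:
            return not rest
        comp |= grow
        rest -= grow

conn = [set(sub) for r in range(1, 5) for sub in combinations(V, r)
        if connected(sub)]
print(len(csr), len(conn))  # 13 13: one CSR-sequence per connected subset
```

Each of the 13 CSR-sequences is the lexicographically smallest prefix-connected ordering of its subset, e.g. (2, 4, 3) for {2, 3, 4}, whose vertices 2 and 3 are not adjacent.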
In contrast, the traversal of already constructed parts of the CS-tree (when revisiting subsets of variables after successful flips) is performed by functions associated with tag lists which are defined in the following section. Tag Lists Tag lists are used to tag variables that are affected by flips. A variable is affected by a flip either because it has been flipped itself or because it is connected (via a potential) to a flipped variable. The tag list data structure comprises a Boolean vector in which each entry corresponds to a variable, indicating whether or not this variable is affected by recent flips. As the total number of variables can be large (10^6 is not exceptional) and possibly only a few variables are affected by flips, a list of all affected variables is maintained in addition to the vector. This list allows the algorithm to untag all tagged variables without re-initializing the entire Boolean vector. The two fundamental operations on a tag list L are tag(L, x), which tags the variable with the index x, and untagAll(L). For the Lazy Flipper, three special functions are used in addition: Given a tag list L, a (possibly incomplete) CS-tree T, the graphical model G, and a node s ∈ T, tagConnectedVariables(L, T, G, s) tags all variables on the path from s to the root node in T, as well as all nodes that are connected (via a potential in G) to at least one of these nodes. The function s = firstTaggedSubset(L, T) traverses the first level of T and returns the first node s whose variable is tagged (or the empty set if all variables are untagged). Finally, the function t = nextTaggedSubset(L, T, s) traverses T in level order, starting with the successor of s, and returns the first node t for which the path to the root contains at least one tagged variable. These functions, together with those of the CS-tree, are sufficient for the Lazy Flipper, Alg. 1. The Lazy Flipper Algorithm In the main loop of the Lazy Flipper (lines 2-26 in Alg.
1), the size n of subsets is incremented until the limit n_max is reached (line 24). Inside this main loop, the algorithm falls into two parts, the exploration part (lines 3-11) and the revisiting part (lines 12-23). In the exploration part, flips of previously unseen subsets of n variables are assessed. The current best configuration c is updated in a greedy fashion, i.e. whenever a flip yields a lower energy. At the same time, the CS-tree is grown, using the functions defined in Section 3.1. In the revisiting part, all subsets of sizes 1 through n that are affected by recent flips are assessed iteratively until no flip of any of these subsets reduces the energy (line 14). The indices of affected variables are stored in the tag lists L_1 and L_2 (cf. Section 3.2). In practice, the Lazy Flipper can be stopped at any point, e.g. when a time limit is exceeded, and the current best configuration c taken as the output. It eventually reaches configurations for which it is guaranteed that no flip of n or fewer variables can yield a lower energy because all such flips that could potentially lower the energy have been assessed (line 14). Such configurations are therefore guaranteed to be optimal within a Hamming radius of n. Experiments For a comparative assessment of the Lazy Flipper, four optimization problems of different complexity are considered, two simulated problems and two problems based on real-world data. For the sake of reproducibility, the simulations are described in detail and the models constructed from real data are available from the authors as supplementary material. The first problem is a ferromagnetic Ising model that is widely used in computer vision for foreground vs. background segmentation [6]. Energy functions of this model consist of first and second order potentials that are submodular. The global minimum can therefore be found via a graph cut.
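The structure of the main loop described above can be illustrated on a toy model. The sketch below replaces the CS-tree and the tag lists by a brute-force enumeration of the connected (i.e. contiguous) subsets of a 4-variable chain with hypothetical potentials; it illustrates only the greedy accept-and-revisit scheme, not the efficiency of Alg. 1:

```python
from itertools import combinations, product

# Toy chain model with hypothetical potentials; connected subsets are
# precomputed instead of being enumerated lazily via the CS-tree.
m = 4
unary = [0.4, 0.9, 0.1, 0.7]                 # E_j(1); E_j(0) = 1 - E_j(1)

def energy(x):
    e = sum(unary[j] if x[j] else 1.0 - unary[j] for j in range(m))
    e += sum(0.8 for j in range(m - 1) if x[j] != x[j + 1])  # coupling
    return e

# On a chain, a subset of variables is connected iff it is contiguous.
subsets = [s for r in range(1, m + 1) for s in combinations(range(m), r)
           if max(s) - min(s) == len(s) - 1]

x = [0] * m                                  # initial configuration
improved = True
while improved:                              # revisit until no flip helps
    improved = False
    for s in subsets:                        # ordered by size, as in Alg. 1
        y = list(x)
        for j in s:
            y[j] = 1 - y[j]                  # flip the connected subset
        if energy(y) < energy(x):
            x, improved = y, True

best = min(product([0, 1], repeat=m), key=energy)
# At full search depth the local optimum is a global optimum, since
# disconnected flips decompose into connected ones.
print(energy(x), energy(best))
```

Because the search depth here equals the number of variables, the converged configuration attains the brute-force minimum energy.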
We simulate random instances of this model in order to measure how the runtime of lazy flipping depends on the size of the model and the coupling strength, and to compare Lazy Flipper approximations to the global optimum (Section 5.1). The second problem is a problem of finding optimal subgraphs on a grid. Energy functions of this model consist of first and fourth order potentials, of which the latter are not permutation submodular. We simulate difficult instances of this problem that cannot be solved to optimality, even when allowing several days of runtime. In this challenging setting, Lazy Flipper approximations and their convergence are compared to those of BP, TRBP and DD as well as to the lower bounds on local polytope relaxations obtained by DD (Section 5.2). The third problem is a graphical model for removing excessive boundaries from image over-segmentations that is related to the model proposed in [31]. Energy functions of this model consist of first, third and fourth order potentials. In contrast to the grid graphs of the Ising model and the optimal subgraph model, the corresponding factor graphs are irregular but still planar. The higher-order potentials are not permutation submodular, but the global optimum can be found by means of MILP in approximately 10 seconds per model using one of the fastest commercial solvers (IBM ILOG CPLEX, version 12.1). Since CPLEX is closed-source software, the algorithm is not known in detail and we use it as a black box. The general method used by CPLEX for MILP is a branch-and-bound algorithm [32,33]. 100 instances of this model obtained from the 100 natural test images of the Berkeley Segmentation Database (BSD) [34] are used to compare the Lazy Flipper to algorithms based on message passing and linear programming in a real-world setting where the global optimum is accessible (Section 5.3).
The fourth problem is identical to the third, except that instances are obtained from 3-dimensional volume images of neural tissue acquired by means of Serial Block Face Scanning Electron Microscopy (SBFSEM) [35]. Unlike in the 2-dimensional case, the factor graphs are no longer planar. Whether exact optimization by means of MILP is practical depends on the size of the model. In practice, SBFSEM datasets consist of more than 2000³ voxels. To be able to compare approximations to the global optimum, we consider 16 models obtained from 16 SBFSEM volume sub-images of only 150³ voxels for which the global optimum can be found by means of MILP within a few minutes (Section 5.4). Ferromagnetic Ising model The ferromagnetic Ising model consists of m ∈ N binary variables x_1, . . . , x_m ∈ {0, 1} that are associated with points on a 2-dimensional square grid and connected via second order potentials E_{jk}(x_j, x_k) = 1 − δ_{x_j, x_k} (δ: Kronecker delta) to their nearest neighbors. First order potentials E_j(x_j) relate the variables to observed evidence in underlying data. The total energy of this model is the following sum, in which α ≥ 0 is a weight on the second order potentials and j ∼ k indicates that the variables x_j and x_k are adjacent on the grid: ∀x ∈ {0, 1}^m : E(x) = Σ_{j=1}^{m} E_j(x_j) + α Σ_{j=1}^{m} Σ_{k=j+1, k∼j}^{m} E_{jk}(x_j, x_k) . (2) For each α ∈ {0.1, 0.3, 0.5, 0.7, 0.9}, an ensemble of ten simulated Ising models of 50 · 50 = 2500 variables is considered. The first order potentials E_j are initialized randomly by drawing E_j(0) uniformly from the interval [0, 1] and setting E_j(1) := 1 − E_j(0). The exact global minimum of the total energy is found via a graph cut. For each model, the Lazy Flipper is initialized with a configuration that minimizes the sum of the first order potentials. Upper bounds on the minimum energy found by means of lazy flipping converge towards the global optimum as depicted in Fig. 2.
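The model and its initialization can be sketched on a toy 3 × 3 grid (the experiments use 50 × 50), small enough to compare the unary-optimal initial configuration against the exact minimum by brute force instead of a graph cut:

```python
import random
from itertools import product

# Eq. (2) on a toy 3x3 grid: unaries are drawn as described above,
# E_j(0) ~ U[0,1] and E_j(1) = 1 - E_j(0); the initial configuration
# minimizes the sum of the first order potentials alone.
random.seed(0)
N, alpha = 3, 0.5
E0 = {(r, c): random.random() for r in range(N) for c in range(N)}

def energy(x):
    e = sum(E0[p] if x[p] == 0 else 1.0 - E0[p] for p in x)
    for (r, c) in x:                        # E_jk(x_j, x_k) = 1 - delta
        for q in ((r + 1, c), (r, c + 1)):
            if q in x:
                e += alpha * (x[(r, c)] != x[q])
    return e

points = list(E0)
init = {p: (1 if E0[p] > 0.5 else 0) for p in points}   # unary-optimal
best = min((dict(zip(points, v))
            for v in product([0, 1], repeat=len(points))), key=energy)
print(round(energy(init), 3), ">=", round(energy(best), 3))
```

The gap between the two printed energies is what lazy flipping (or, for this submodular model, a graph cut) closes.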
Color scales and gray scales in this figure respectively indicate the maximum size and the total number of distinct subsets that have been searched, averaged over all models in the ensemble. It can be seen from this figure that upper bounds on the minimum energy are tightened significantly by searching larger subsets of variables, independent of the coupling strength α. It takes the Lazy Flipper less than 100 seconds (on a single CPU of an Intel Quad Xeon E7220 at 2.93 GHz) to exhaustively search all connected subsets of 6 variables. The amount of RAM required for the CS-tree (in bytes) is 24 times as high as the number of subsets (approximately 50 MB in this case) because each subset is stored in the CS-tree as a node consisting of three 64-bit integers: a variable index, the index of the parent node and the index of the level order successor (Section 3.1). For n_max ∈ {1, 6}, configurations corresponding to the upper bounds on the minimum energy are depicted in Fig. 3. It can be seen from this figure that all connected subsets of falsely set variables are larger than n_max. For a fixed maximum subgraph size n_max, the runtime of lazy flipping scales approximately linearly with the number of variables in the Ising model (cf. Fig. 4). Optimal Subgraph Model The optimal subgraph model consists of m ∈ N binary variables x_1, . . . , x_m ∈ {0, 1} that are associated with the edges of a 2-dimensional grid graph. A subgraph is defined by those edges whose associated variables attain the value 1.
Energy functions of this model consist of first order potentials, one for each edge, and fourth order potentials, one for each node v ∈ V in which four edges (j, k, l, m) = N(v) meet: ∀x ∈ {0, 1}^m : E(x) = Σ_{j=1}^{m} E_j(x_j) + Σ_{(j,k,l,m)∈N(V)} E_{jklm}(x_j, x_k, x_l, x_m) . (3) All fourth order potentials are equal, penalizing dead ends and branches of paths in the selected subgraph: E_{jklm}(x_j, x_k, x_l, x_m) = 0.0 if s = 0, 100.0 if s = 1, 0.6 if s = 2, 1.2 if s = 3, 2.4 if s = 4, with s = x_j + x_k + x_l + x_m . (4) An ensemble of 16 such models is constructed by drawing the unary potentials at random, exactly as for the Ising models. Each model has 19800 variables, the same number of first order potentials, and 9801 fourth order potentials. Approximate optimal subgraphs are found by Min-Sum Belief Propagation (BP) with parallel message passing [9,14] and message damping [36], by Tree-reweighted Belief Propagation (TRBP) [4], by Dual Decomposition (DD) [16,17] and by lazy flipping (LF). DD also affords lower bounds on the minimum energy. Details on the parameters of the algorithms and the decomposition of the models are given in Appendix A. Bounds on the minimum energy converge with increasing runtime, as depicted in Fig. 5. It can be seen from this figure that Lazy Flipper approximations converge fast, reaching a smaller energy after 3 seconds than the other approximations after 10000 seconds. Subgraphs of up to 7 variables are searched, using approximately 2.2 GB of RAM for the CS-tree. A gap remains between the energies of all approximations and the lower bound on the minimum energy obtained by DD. Thus, there is no guarantee that any of the problems has been solved to optimality. However, the gaps are upper bounds on the deviation from the global optimum. They are compared at t = 10000 s in Fig. 5.
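Since the fourth order potential of Eq. (4) depends only on s, it can be transcribed directly as a lookup table:

```python
def E4(xj, xk, xl, xm):
    # Eq. (4): the penalty depends only on s, the number of selected
    # edges at the node; dead ends (s = 1) are penalized most severely,
    # while s = 0 (empty node) is free and s = 2 (a path passing through
    # the node) is cheap.
    s = xj + xk + xl + xm
    return {0: 0.0, 1: 100.0, 2: 0.6, 3: 1.2, 4: 2.4}[s]

print(E4(1, 0, 0, 0), E4(1, 1, 0, 0))  # 100.0 0.6
```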
For any model in the ensemble, the energy of the Lazy Flipper approximation is less than 4% away from the global optimum, a substantial improvement over the other algorithms for this particular model. Pruning of 2D Over-Segmentations The graphical model for removing excessive boundaries from image over-segmentations contains one binary variable for each boundary between segments, indicating whether this boundary is to be removed (0) or preserved (1). First order potentials relate these variables to the image content, and non-submodular third and fourth order potentials connect adjacent boundaries, supporting the closedness and smooth continuation of preserved boundaries. The energy function is a sum of these potentials: ∀x ∈ {0, 1}^m : E(x) = Σ_{j=1}^{m} E_j(x_j) + Σ_{(j,k,l)∈J} E_{jkl}(x_j, x_k, x_l) + Σ_{(j,k,l,p)∈K} E_{jklp}(x_j, x_k, x_l, x_p) . (5) We consider an ensemble of 100 such models obtained from the 100 BSD test images [34]. On average, a model has 8845 ± 670 binary variables, the same number of unary potentials, 5715 ± 430 third order potentials and 98 ± 18 fourth order potentials. Each variable is connected via potentials to at most six other variables, a sparse structure that is favorable for the Lazy Flipper. BP, TRBP, DD and the Lazy Flipper solve these problems approximately, thus providing upper bounds on the minimum energy. The differences between these bounds and the global optimum found by means of MILP are depicted in Fig. 6. It can be seen from this figure that, after 200 seconds, Lazy Flipper approximations provide a tighter upper bound on the global minimum in the median than those of the other three algorithms. BP and DD have a better peak performance, solving one problem to optimality. The Lazy Flipper reaches a search depth of 9 after around 1000 seconds for these sparse graphical models, using roughly 720 MB of RAM for the CS-tree. At t = 5000 s and on average over all models, its approximations deviate by only 2.6% from the global optimum.
Pruning of 3D Over-Segmentations The model described in the previous section is now applied in 3D to remove excessive boundaries from the over-segmentation of a volume image. In an ensemble of 16 such models obtained from 16 SBFSEM volume images, models have on average 16748 ± 1521 binary variables (and first order potentials), 26379 ± 2502 potentials of order 3, and 5081 ± 482 potentials of order 4. For BP, TRBP, DD and Lazy Flipper approximations, deviations from the global optimum are shown in Fig. 7. It can be seen from this figure that BP performs exceptionally well on these problems, providing approximations whose energies deviate by only 0.4% on average from the global optimum. One reason is that most variables influence many (up to 60) potential functions, and BP can propagate local evidence from all these potentials. Variables are connected via these potentials to as many as 100 neighboring variables, which hampers the exploration of the search space by the Lazy Flipper: it reaches a search depth of only 5 after 10000 seconds, using 4.8 GB of RAM for the CS-tree, and yields worse approximations than BP, TRBP and DD for these models. In practical applications where volume images and the corresponding models are several hundred times larger and can no longer be optimized exactly, it matters whether one can further improve upon the BP approximations. Dashed lines in the first plot in Fig. 7 show the result obtained when initializing the Lazy Flipper with the BP approximation at t = 100 s. This reduces the deviation from the global optimum at t = 50000 s from 0.4% on average over all models to 0.1%. Conclusion The optimum of a function of binary variables that decomposes according to a graphical model can be found by an exhaustive search over only the connected subgraphs of the model. We implemented this search, using a CS-tree to efficiently and uniquely enumerate the subgraphs. The C++ source code is available from http://hci.iwr.uni-heidelberg.de/software.php.
Our algorithm is guaranteed to converge to a global minimum when searching through all subgraphs, which is typically intractable. With limited runtime, approximations can be found by restricting the search to subgraphs of a given maximum size. Simulated and real-world problems exist for which these approximations compare favorably to those obtained by message passing and sub-gradient descent. For large scale problems, the applicability of the Lazy Flipper is limited by the memory required for the CS-tree. However, for regular graphs, this limit can be overcome by an implicit representation of the CS-tree that is the subject of future research. A Parameters and Model Decomposition In all experiments, the damping parameters for BP and TRBP are chosen optimally from the set {0, 0.1, 0.2, . . . , 0.9}. The step size of the sub-gradient descent is chosen according to τ_t = α · 1/(1 + βt) (6) where β = 0.01 and α is chosen optimally from {0.01, 0.025, 0.05, 0.1, 0.25, 0.5}. The sequence of step sizes, in particular the function (6) and β, could be tuned further. Moreover, [16] consider the primal-dual gap and [17] smooth the subgradient over iterations in order to suppress oscillations. These measures can have substantial impact on the convergence. The upper bounds obtained by BP, TRBP and DD do not decrease monotonically. After each iteration of these algorithms, we therefore consider the elapsed runtime and the current best bound, i.e. the best bound of the current and all preceding iterations. All five algorithms are implemented in C++, using the same optimized data structures for the graphical model and a visitor design pattern that allows us to measure runtime without significantly affecting performance. The same decomposition of each graphical model into tree models is used for TRBP and DD. Tree models are constructed in a greedy fashion, each comprising as many potential functions as possible.
The procedure is generally applicable to irregular models with higher-order potentials: Initially, all potentials of the graphical model are put on a white list that contains those potentials that have not been added to any tree model. A black list of already added potentials and a gray list of recently added potentials are initially empty. As long as there are potentials on the white list, new tree models are constructed. For each newly constructed tree model, the procedure iterates over the white list, adding potentials to the tree model if they do not introduce loops. Added potentials are moved from the white list to the gray list. After all potentials from the white list have been processed, potentials from the black list that do not introduce loops are added to the tree model. The gray list is then appended to the black list and cleared. The procedure finishes when the white list is empty. As recently shown in [17], decompositions into cyclic subproblems can lead to significantly tighter relaxations and better integer solutions.
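The list-based construction can be sketched as follows, under the assumption that each potential is given by the set of variable indices in its scope and that the loop check is performed with union-find; the toy model is a 4-cycle of pairwise potentials, which requires two tree models:

```python
# A sketch of the greedy decomposition into tree models using white,
# gray and black lists. A potential would introduce a loop if two of
# its variables are already connected within the current tree model.
def decompose(num_vars, potentials):
    white, black, trees = list(potentials), [], []
    while white:
        parent = list(range(num_vars))       # fresh union-find per tree
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        def acyclic_add(pot):
            roots = [find(v) for v in pot]
            if len(set(roots)) < len(roots):  # would introduce a loop
                return False
            for r in roots[1:]:
                parent[r] = roots[0]
            return True
        tree, gray = [], []
        for pot in list(white):              # first: unused potentials
            if acyclic_add(pot):
                tree.append(pot)
                white.remove(pot)
                gray.append(pot)
        for pot in black:                    # then: reuse old potentials
            if acyclic_add(pot):
                tree.append(pot)
        black += gray
        trees.append(tree)
    return trees

cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
trees = decompose(4, cycle)
print(len(trees))  # the 4-cycle decomposes into 2 tree models
```

Each tree model here contains three of the four edge potentials; the second tree reuses black-listed potentials so that both trees span the model.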
5,125
1009.4102
1633665642
This article presents a new search algorithm for the NP-hard problem of optimizing functions of binary variables that decompose according to a graphical model. It can be applied to models of any order and structure. The main novelty is a technique to constrain the search space based on the topology of the model. When pursued to the full search depth, the algorithm is guaranteed to converge to a global optimum, passing through a series of monotonically improving local optima that are guaranteed to be optimal within a given and increasing Hamming distance. For a search depth of 1, it specializes to Iterated Conditional Modes. Between these extremes, a useful tradeoff between approximation quality and runtime is established. Experiments on models derived from both illustrative and real problems show that approximations found with limited search depth match or improve those obtained by state-of-the-art methods based on message passing and linear programming.
Naive attempts to generalize ICM and Block-ICM to optimize over subgraphs of size @math would consider all sequences of @math connected variables and ignore the fact that many of these sequences represent the same set. This causes substantial problems because the redundancy is large, as we show in . The Lazy Flipper avoids this redundancy, at the cost of storing one unique representative for each subset. Compared to randomized algorithms that sample from the set of subgraphs @cite_22 @cite_24 @cite_33 , this is a memory intensive approach. Up to 8 GB of RAM are required for the optimizations shown in . Now that servers with much larger RAM are available, it has become a practical option.
{ "abstract": [ "A new approach to Monte Carlo simulations is presented, giving a highly efficient method of simulation for large systems near criticality. The algorithm violates dynamic universality at second-order phase transitions, producing unusually small values of the dynamical critical exponent.", "We consider the question of computing Maximum A Posteriori (MAP) assignment in an arbitrary pair-wise Markov Random Field (MRF). We present a randomized iterative algorithm based on simple local updates. The algorithm, starting with an arbitrary initial assignment, updates it in each iteration by first, picking a random node, then selecting an (appropriately chosen) random local neighborhood and optimizing over this local neighborhood. Somewhat surprisingly, we show that this algorithm finds a near optimal assignment within n log2 n iterations with high probability for any n node pair-wise MRF with geometry (i.e. MRF graph with polynomial growth) with the approximation error depending on (in a reasonable manner) the geometric growth rate of the graph and the average radius of the local neighborhood - this allows for a graceful tradeoff between the complexity of the algorithm and the approximation error. Through extensive simulations, we show that our algorithm finds extremely good approximate solutions for various kinds of MRFs with geometry.", "A Monte Carlo algorithm is presented that updates large clusters of spins simultaneously in systems at and near criticality. We demonstrate its efficiency in the two-dimensional @math @math models for @math (Ising) and @math ( @math ) at their critical temperatures, and for @math (Heisenberg) with correlation lengths around 10 and 20. On lattices up to @math no sign of critical slowing down is visible with autocorrelation times of 1-2 steps per spin for estimators of long-range quantities." ], "cite_N": [ "@cite_24", "@cite_22", "@cite_33" ], "mid": [ "2037139490", "2160561951", "1969758109" ] }
The Lazy Flipper: MAP Inference in Higher-Order Graphical Models by Depth-limited Exhaustive Search
Energy functions that depend on thousands of binary variables and decompose according to a graphical model [1,2,3,4] into potential functions that depend on subsets of all variables have been used successfully for pattern analysis, e.g. in the seminal works [5,6,7,8]. An important problem is the minimization of the sum of potentials, i.e. the search for an assignment of zeros and ones to the variables that minimizes the energy. This problem can be solved efficiently by dynamic programming if the graph is acyclic [9] or its treewidth is small enough [3], and by finding a minimum s-t-cut [6] if the energy function is (permutation) submodular [10,11]. In general, the problem is NP-hard [10]. For moderate problem sizes, exact optimization is sometimes tractable by means of Mixed Integer Linear Programming (MILP) [12,13]. Contrary to popular belief, some practical computer vision problems can indeed be solved to optimality by modern MILP solvers (cf. Section 5). However, all such solvers are eventually overburdened when the problem size becomes too large. In cases where exact optimization is intractable, one has to settle for approximations. While substantial progress has been made in this direction, a deterministic non-redundant search algorithm that constrains the search space based on the topology of the graphical model has not been proposed before. This article presents a depth-limited exhaustive search algorithm, the Lazy Flipper, that does just that. arXiv:1009.4102v1 [cs.DS] 21 Sep 2010 The Lazy Flipper starts from an arbitrary initial assignment of zeros and ones to the variables that can be chosen, for instance, to minimize the sum of only the first order potentials of the graphical model. Starting from this initial configuration, it searches for flips of variables that reduce the energy. As soon as such a flip is found, the current configuration is updated accordingly, i.e. in a greedy fashion. In the beginning, only single variables are flipped. 
Once a configuration is found whose energy can no longer be reduced by flipping of single variables, all those subsets of two and successively more variables that are connected via potentials in the graphical model are considered. When a subset of more than one variable is flipped, all smaller subsets that are affected by the flip are revisited. This allows the Lazy Flipper to perform an exhaustive search over all subsets of variables whose flip potentially reduces the energy. Two special data structures described in Section 3 are used to represent each subset of connected variables precisely once and to exclude subsets from the search whose flip cannot reduce the energy due to the topology of the graphical model and the history of unsuccessful flips. These data structures, the Lazy Flipper algorithm and an experimental evaluation of state-of-the-art optimization algorithms on higher-order graphical models are the main contributions of this article. Overall, the new algorithm has four favorable properties: (i) It is strictly convergent. While a global minimum is found when searching through all subgraphs (typically not tractable), approximate solutions with a guaranteed quality certificate (Section 4) are found if the search space is restricted to subgraphs of a given maximum size. The larger the subgraphs are allowed to be, the tighter the upper bound on the minimum energy becomes. This allows for a favorable tradeoff between runtime and approximation quality. (ii) Unlike in brute force search, the runtime of lazy flipping depends on the topology of the graphical model. It is exponential in the worst case but can be shorter compared to brute force search by an amount that is exponential in the number of variables. It is approximately linear in the size of the model for a fixed maximum search depth. (iii) The Lazy Flipper can be applied to graphical models of any order and topology, including but not limited to the more standard grid graphs. 
Directed Bayesian Networks and undirected Markov Random Fields are processed in the exact same manner; they are converted to factor graph models [14] before lazy flipping. (iv) Only trivial operations are performed on the graphical model, namely graph traversal and evaluations of potential functions. These operations are cheap compared, for instance, to the summation and minimization of potential functions performed by message passing algorithms, and require only an implicit specification of potential functions in terms of program code that computes the function value for any given assignment of values to the variables. Experiments on simulated and real-world problems, submodular and nonsubmodular functions, grids and irregular graphs (Section 5) assess the quality of Lazy Flipper approximations, their convergence as well as the dependence of the runtime of the algorithm on the size of the model and the search depth. The results are put into perspective by a comparison with Iterated Conditional Modes (ICM) [5], Belief Propagation (BP) [9,14], Tree-reweighted BP [15,4] and a Dual Decomposition ansatz using sub-gradient descent methods [16,17]. The Lazy Flipper Data Structures Two special data structures are crucial to the Lazy Flipper. The first data structure that we call a connected subgraph tree (CS-tree) ensures that only connected subsets of variables are considered, i.e. sets of variables which are connected via potentials in the graphical model. Moreover, it ensures that every such subset is represented precisely once (and not repeatedly) by an ordered sequence of its variables, cf. [30]. The rationale behind this concept is the following: If the flip of one variable and the flip of another variable not connected to the first one do not reduce the energy then it is pointless to try a simultaneous flip of both variables because the (energy increasing) contributions from both flips would sum up. 
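The additivity argument behind excluding disconnected flips can be verified numerically. In the sketch below (a hypothetical 4-variable chain with pairwise potentials), variables 0 and 3 share no potential, so the energy change of flipping both equals the sum of the individual changes:

```python
# Numeric check of the decomposition argument: flipping the disconnected
# set {0, 3} changes the energy by exactly the sum of the changes of
# flipping {0} and {3}, since no potential spans both variables.
unary = [0.4, 0.9, 0.1, 0.7]                 # hypothetical potentials

def energy(x):
    return (sum(unary[j] if x[j] else 1.0 - unary[j] for j in range(4))
            + sum(0.8 for j in range(3) if x[j] != x[j + 1]))

def delta(x, subset):                        # energy change of a flip
    y = list(x)
    for j in subset:
        y[j] = 1 - y[j]
    return energy(y) - energy(x)

x = [0, 1, 0, 1]
print(delta(x, {0, 3}), "=", delta(x, {0}) + delta(x, {3}))
```

Consequently, if neither single flip reduces the energy, the joint flip cannot either, which is why disconnected subsets can be skipped.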
Furthermore, if the flip of a disconnected set of variables reduces the energy then the same and possibly better reductions can be obtained by flipping connected subsets of this set consecutively, in any order. All disconnected subsets of variables can therefore be excluded from the search if the connected subsets are searched ordered by their size. Finding a unique representative for each connected subset of variables is important. The alternative would be to consider all sequences of pairwise distinct variables in which each variable is connected to at least one of its predecessors and to ignore the fact that many of these sequences represent the same set. Sampling algorithms that select and grow connected subsets in a randomized fashion do exactly this. However, the redundancy is large. As an example, consider a connected subset of six variables of a 2-dimensional grid graph as depicted in Fig. 1a. Although there is only one connected set that contains all six variables, 208 out of the 6! = 720 possible sequences of these variables meet the requirement that each variable is connected to at least one of its predecessors. This 208-fold redundancy hampers the exploration of the search space by means of randomized algorithms; it is avoided in lazy flipping at the cost of storing one unique representative for every connected subgraph in the CS-tree. The second data structure is a tag list that prevents the repeated assessment of unsuccessful flips. The idea is the following: If some variables have been flipped in one iteration (and the current best configuration has been updated accordingly), it suffices to revisit only those sets of variables that are connected to at least one variable that has been flipped. All other sets of variables are excluded from the search because the potentials that depend on these variables are unaffected by the flip and have been assessed in their current state before. 
The tag list and the connected subgraph tree are essential to the Lazy Flipper and are described in the following sections, 3.1 and 3.2. For a quick overview, the reader can however skip these sections, take for granted that it is possible to efficiently enumerate all connected subgraphs of a graphical model, ordered by their size, and refer directly to the main algorithm (Section 4 and Alg. 1). All non-trivial sub-functions used in the main algorithm are related to tag lists and the CS-tree and are described in detail now. Connected Subgraph Tree (CS-tree) The CS-tree represents subsets of connected variables uniquely. Every node in the CS-tree except the special root node is labeled with the integer index of one variable in the graphical model. The same variable index is assigned to several nodes in the CS-tree unless the graphical model is completely disconnected. The CS-tree is constructed such that every connected subset of variables in the graphical model corresponds to precisely one path in the CS-tree from a node to the root node, the node labels along the path indicating precisely the variables in the subset, and vice versa, there exists precisely one connected subset of variables in the graphical model for each path in the CS-tree from a node to the root node. In order to guarantee by construction of the CS-tree that each subset of connected variables is represented precisely once, the variable indices of each subset are put in a special order, namely the lexicographically smallest order in which each variable is connected to at least one of its predecessors. The following definition of these sequences of variable indices is recursive and therefore motivates an algorithm for the construction of the CS-tree for the Lazy Flipper. A small grid model and its complete CS-tree are depicted in Fig. 1. Definition 1 (CSR-Sequence). Given an undirected graph G = (V, E) whose m ∈ N vertices V = {1, . . . 
, m} are integer indices, every sequence that consists of only one index is called connected subset representing (CSR). Given n ∈ N and a CSR-sequence (v_1, ..., v_n), a sequence (v_1, ..., v_n, v_{n+1}) of n + 1 indices is called a CSR-sequence precisely if the following conditions hold: (i) v_{n+1} is not among its predecessors, i.e. ∀j ∈ {1, ..., n}: v_j ≠ v_{n+1}. (ii) v_{n+1} is connected to at least one of its predecessors, i.e. ∃j ∈ {1, ..., n}: {v_j, v_{n+1}} ∈ E. (iii) v_{n+1} > v_1. (iv) If n ≥ 2 and v_{n+1} could have been added at an earlier position j ∈ {2, ..., n} of the sequence, fulfilling (i)-(iii), then all subsequent vertices v_j, ..., v_n are smaller than v_{n+1}, i.e. ∀j ∈ {2, ..., n}: ({v_{j-1}, v_{n+1}} ∈ E ⇒ (∀k ∈ {j, ..., n}: v_k < v_{n+1})). (1) Based on this definition, three functions are sufficient to recursively build the CS-tree T of a graphical model G, starting from the root node. The function q = growSubset(T, G, p) appends to a node p in the CS-tree the smallest variable index that is not yet among the children of p and fulfills (i)-(iv) for the CSR-sequence of variable indices on the path from p to the root node. It returns the appended node, or the empty set if no suitable variable index exists. The function q = firstSubsetOfSize(T, G, n) traverses the CS-tree on the current deepest level n − 1, calling the function growSubset for each leaf until a node can be appended and thus the first subset of size n has been found. Finally, the function q = nextSubsetOfSameSize(T, G, p) starts from a node p, finds its parent and traverses from there in level order, calling growSubset for each node to find the length-lexicographic successor of the CSR-sequence associated with the node p, i.e. the representative of the next subset of the same size. These functions are used by the Lazy Flipper (Alg. 1) to construct the CS-tree.
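The CS-tree realizes this unique enumeration compactly. As an illustration of what is being enumerated (not the authors' implementation), the same subsets can be generated by breadth-first growth with explicit set deduplication:

```python
def connected_subsets(vertices, edges, max_size):
    """Yield every connected subset of at most max_size vertices exactly
    once, ordered by size; duplicates are collapsed via frozensets rather
    than via the CS-tree used in the paper."""
    adj = {v: set() for v in vertices}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    level = {frozenset([v]) for v in vertices}
    for size in range(1, max_size + 1):
        for s in sorted(level, key=sorted):
            yield set(s)
        if size == max_size:
            break
        # grow every subset by one neighboring vertex; duplicates collapse
        level = {s | {w} for s in level for v in s for w in adj[v] if w not in s}
```

For the path 1-2-3 this yields the six connected subsets {1}, {2}, {3}, {1,2}, {2,3}, {1,2,3} and correctly skips the disconnected pair {1,3}. The CS-tree achieves the same without ever materializing duplicate candidates.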
In contrast, the traversal of already constructed parts of the CS-tree (when revisiting subsets of variables after successful flips) is performed by functions associated with tag lists, which are defined in the following section. Tag Lists Tag lists are used to tag variables that are affected by flips. A variable is affected by a flip either because it has been flipped itself or because it is connected (via a potential) to a flipped variable. The tag list data structure comprises a Boolean vector in which each entry corresponds to a variable, indicating whether or not this variable is affected by recent flips. As the total number of variables can be large (10^6 is not exceptional) and possibly only a few variables are affected by flips, a list of all affected variables is maintained in addition to the vector. This list allows the algorithm to untag all tagged variables without re-initializing the entire Boolean vector. The two fundamental operations on a tag list L are tag(L, x), which tags the variable with the index x, and untagAll(L). For the Lazy Flipper, three special functions are used in addition: Given a tag list L, a (possibly incomplete) CS-tree T, the graphical model G, and a node s ∈ T, tagConnectedVariables(L, T, G, s) tags all variables on the path from s to the root node in T, as well as all nodes that are connected (via a potential in G) to at least one of these nodes. The function s = firstTaggedSubset(L, T) traverses the first level of T and returns the first node s whose variable is tagged (or the empty set if all variables are untagged). Finally, the function t = nextTaggedSubset(L, T, s) traverses T in level order, starting with the successor of s, and returns the first node t for which the path to the root contains at least one tagged variable. These functions, together with those of the CS-tree, are sufficient for the Lazy Flipper, Alg. 1. The Lazy Flipper Algorithm In the main loop of the Lazy Flipper (lines 2-26 in Alg.
1), the size n of subsets is incremented until the limit n_max is reached (line 24). Inside this main loop, the algorithm falls into two parts, the exploration part (lines 3-11) and the revisiting part (lines 12-23). In the exploration part, flips of previously unseen subsets of n variables are assessed. The current best configuration c is updated in a greedy fashion, i.e. whenever a flip yields a lower energy. At the same time, the CS-tree is grown, using the functions defined in Section 3.1. In the revisiting part, all subsets of sizes 1 through n that are affected by recent flips are assessed iteratively until no flip of any of these subsets reduces the energy (line 14). The indices of affected variables are stored in the tag lists L_1 and L_2 (cf. Section 3.2). In practice, the Lazy Flipper can be stopped at any point, e.g. when a time limit is exceeded, and the current best configuration c taken as the output. It eventually reaches configurations for which it is guaranteed that no flip of n or fewer variables can yield a lower energy because all such flips that could potentially lower the energy have been assessed (line 14). Such configurations are therefore guaranteed to be optimal within a Hamming radius of n. Experiments For a comparative assessment of the Lazy Flipper, four optimization problems of different complexity are considered, two simulated problems and two problems based on real-world data. For the sake of reproducibility, the simulations are described in detail and the models constructed from real data are available from the authors as supplementary material. The first problem is a ferromagnetic Ising model that is widely used in computer vision for foreground vs. background segmentation [6]. Energy functions of this model consist of first and second order potentials that are submodular. The global minimum can therefore be found via a graph cut.
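For search depth n_max = 1, where the algorithm specializes to Iterated Conditional Modes, the greedy exploration loop can be sketched as follows; the energy callable and the toy chain model are our own illustrative assumptions, and the CS-tree and tag lists of the full algorithm are omitted:

```python
def flip_until_stable(energy, x):
    """Greedily flip single binary variables while any flip lowers the
    energy; the result is guaranteed optimal within a Hamming radius of 1."""
    best = energy(x)
    improved = True
    while improved:
        improved = False
        for j in range(len(x)):
            x[j] ^= 1                      # tentative flip
            e = energy(x)
            if e < best:                   # greedy update of current best
                best, improved = e, True
            else:
                x[j] ^= 1                  # undo unsuccessful flip
    return x, best

# toy chain: unary terms E_j(0) = u_j, E_j(1) = 1 - u_j, plus a coupling
# of strength 0.5 that penalizes disagreeing neighbors (our own example)
unary = [0.9, 0.2, 0.8]
def chain_energy(x):
    u = sum(uj if xj == 0 else 1.0 - uj for uj, xj in zip(unary, x))
    return u + 0.5 * sum(int(x[j] != x[j + 1]) for j in range(len(x) - 1))

x, e = flip_until_stable(chain_energy, [0, 0, 0])
print(x)  # [1, 1, 1]
```

On this toy instance the greedy single-variable search happens to reach the global minimum; in general it only certifies optimality within Hamming radius 1.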
We simulate random instances of this model in order to measure how the runtime of lazy flipping depends on the size of the model and the coupling strength, and to compare Lazy Flipper approximations to the global optimum (Section 5.1). The second problem is a problem of finding optimal subgraphs on a grid. Energy functions of this model consist of first and fourth order potentials, of which the latter are not permutation submodular. We simulate difficult instances of this problem that cannot be solved to optimality, even when allowing several days of runtime. In this challenging setting, Lazy Flipper approximations and their convergence are compared to those of BP, TRBP and DD as well as to the lower bounds on local polytope relaxations obtained by DD (Section 5.2). The third problem is a graphical model for removing excessive boundaries from image over-segmentations that is related to the model proposed in [31]. Energy functions of this model consist of first, third and fourth order potentials. In contrast to the grid graphs of the Ising model and the optimal subgraph model, the corresponding factor graphs are irregular but still planar. The higher-order potentials are not permutation submodular but the global optimum can be found by means of MILP in approximately 10 seconds per model using one of the fastest commercial solvers (IBM ILOG CPLEX, version 12.1). Since CPLEX is closed-source software, the algorithm is not known in detail and we use it as a black box. The general method used by CPLEX for MILP is a branch-and-bound algorithm [32,33]. 100 instances of this model obtained from the 100 natural test images of the Berkeley Segmentation Database (BSD) [34] are used to compare the Lazy Flipper to algorithms based on message passing and linear programming in a real-world setting where the global optimum is accessible (Section 5.3).
The fourth problem is identical to the third, except that instances are obtained from 3-dimensional volume images of neural tissue acquired by means of Serial Block Face Scanning Electron Microscopy (SBFSEM) [35]. Unlike in the 2-dimensional case, the factor graphs are no longer planar. Whether exact optimization by means of MILP is practical depends on the size of the model. In practice, SBFSEM datasets consist of more than 2000³ voxels. To be able to compare approximations to the global optimum, we consider 16 models obtained from 16 SBFSEM volume sub-images of only 150³ voxels for which the global optimum can be found by means of MILP within a few minutes (Section 5.4). Ferromagnetic Ising model The ferromagnetic Ising model consists of m ∈ N binary variables x_1, ..., x_m ∈ {0, 1} that are associated with points on a 2-dimensional square grid and connected via second order potentials E_{jk}(x_j, x_k) = 1 − δ_{x_j, x_k} (δ: Kronecker delta) to their nearest neighbors. First order potentials E_j(x_j) relate the variables to observed evidence in underlying data. The total energy of this model is the following sum in which α ∈ R_0^+ is a weight on the second order potentials, and j ∼ k indicates that the variables x_j and x_k are adjacent on the grid: ∀x ∈ {0, 1}^m : E(x) = ∑_{j=1}^m E_j(x_j) + α ∑_{j=1}^m ∑_{k=j+1, k∼j}^m E_{jk}(x_j, x_k). (2) For each α ∈ {0.1, 0.3, 0.5, 0.7, 0.9}, an ensemble of ten simulated Ising models of 50 · 50 = 2500 variables is considered. The first order potentials E_j are initialized randomly by drawing E_j(0) uniformly from the interval [0, 1] and setting E_j(1) := 1 − E_j(0). The exact global minimum of the total energy is found via a graph cut. For each model, the Lazy Flipper is initialized with a configuration that minimizes the sum of the first order potentials. Upper bounds on the minimum energy found by means of lazy flipping converge towards the global optimum as depicted in Fig. 2.
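The total energy (2) can be evaluated directly; a minimal sketch, assuming row-major h × w arrays for the variables and the first order potentials:

```python
def ising_energy(x, e1, alpha):
    """Total energy (2) of a binary grid configuration.
    x, e1: h x w nested lists; e1[i][j] = E_j(0), and E_j(1) = 1 - E_j(0)."""
    h, w = len(x), len(x[0])
    unary = sum(e1[i][j] if x[i][j] == 0 else 1.0 - e1[i][j]
                for i in range(h) for j in range(w))
    # second order term 1 - delta(x_j, x_k): a penalty of 1 per pair of
    # disagreeing horizontal or vertical neighbors
    pair = (sum(int(x[i][j] != x[i][j + 1]) for i in range(h) for j in range(w - 1))
            + sum(int(x[i][j] != x[i + 1][j]) for i in range(h - 1) for j in range(w)))
    return unary + alpha * pair
```

For a 2 × 2 grid with all E_j(0) = 0.5, the configuration [[0, 1], [0, 0]] has two disagreeing neighbor pairs, so the energy at α = 0.3 is 2.0 + 0.3 · 2 = 2.6.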
Color scales and gray scales in this figure respectively indicate the maximum size and the total number of distinct subsets that have been searched, averaged over all models in the ensemble. It can be seen from this figure that upper bounds on the minimum energy are tightened significantly by searching larger subsets of variables, independent of the coupling strength α. It takes the Lazy Flipper less than 100 seconds (on a single CPU of an Intel Quad Xeon E7220 at 2.93GHz) to exhaustively search all connected subsets of 6 variables. The amount of RAM required for the CS-tree (in bytes) is 24 times as high as the number of subsets (approximately 50 MB in this case) because each subset is stored in the CS-tree as a node consisting of three 64-bit integers: a variable index, the index of the parent node and the index of the level order successor (Section 3.1). For n_max ∈ {1, 6}, configurations corresponding to the upper bounds on the minimum energy are depicted in Fig. 3. It can be seen from this figure that all connected subsets of falsely set variables are larger than n_max. For a fixed maximum subgraph size n_max, the runtime of lazy flipping scales approximately linearly with the number of variables in the Ising model (cf. Fig. 4). Optimal Subgraph Model The optimal subgraph model consists of m ∈ N binary variables x_1, ..., x_m ∈ {0, 1} that are associated with the edges of a 2-dimensional grid graph. A subgraph is defined by those edges whose associated variables attain the value 1.
Energy functions of this model consist of first order potentials, one for each edge, and fourth order potentials, one for each node v ∈ V in which four edges (j, k, l, m) = N(v) meet: ∀x ∈ {0, 1}^m : E(x) = ∑_{j=1}^m E_j(x_j) + ∑_{(j,k,l,m)∈N(V)} E_{jklm}(x_j, x_k, x_l, x_m). (3) All fourth order potentials are equal, penalizing dead ends and branches of paths in the selected subgraph: E_{jklm}(x_j, x_k, x_l, x_m) = {0.0 if s = 0; 100.0 if s = 1; 0.6 if s = 2; 1.2 if s = 3; 2.4 if s = 4}, with s = x_j + x_k + x_l + x_m. (4) An ensemble of 16 such models is constructed by drawing the unary potentials at random, exactly as for the Ising models. Each model has 19800 variables, the same number of first order potentials, and 9801 fourth order potentials. Approximate optimal subgraphs are found by Min-Sum Belief Propagation (BP) with parallel message passing [9,14] and message damping [36], by Tree-reweighted Belief Propagation (TRBP) [4], by Dual Decomposition (DD) [16,17] and by lazy flipping (LF). DD affords also lower bounds on the minimum energy. Details on the parameters of the algorithms and the decomposition of the models are given in Appendix A. Bounds on the minimum energy converge with increasing runtime, as depicted in Fig. 5. It can be seen from this figure that Lazy Flipper approximations converge fast, reaching a smaller energy after 3 seconds than the other approximations after 10000 seconds. Subgraphs of up to 7 variables are searched, using approximately 2.2 GB of RAM for the CS-tree. A gap remains between the energies of all approximations and the lower bound on the minimum energy obtained by DD. Thus, there is no guarantee that any of the problems has been solved to optimality. However, the gaps are upper bounds on the deviation from the global optimum. They are compared at t = 10000 s in Fig. 5.
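The piecewise potential (4) depends only on the number s of selected edges at a node and translates directly into a lookup table:

```python
def dead_end_branch_potential(xj, xk, xl, xm):
    """Fourth order potential (4): dead ends (s = 1) are penalized heavily,
    branches (s = 3, 4) mildly; s = 2 lets a path pass straight through."""
    table = {0: 0.0, 1: 100.0, 2: 0.6, 3: 1.2, 4: 2.4}
    return table[xj + xk + xl + xm]

print(dead_end_branch_potential(1, 1, 0, 0))  # 0.6
```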
For any model in the ensemble, the energy of the Lazy Flipper approximation is less than 4% away from the global optimum, a substantial improvement over the other algorithms for this particular model. Pruning of 2D Over-Segmentations The graphical model for removing excessive boundaries from image over-segmentations contains one binary variable for each boundary between segments, indicating whether this boundary is to be removed (0) or preserved (1). First order potentials relate these variables to the image content, and non-submodular third and fourth order potentials connect adjacent boundaries, supporting the closedness and smooth continuation of preserved boundaries. The energy function is a sum of these potentials: ∀x ∈ {0, 1}^m : E(x) = ∑_{j=1}^m E_j(x_j) + ∑_{(j,k,l)∈J} E_{jkl}(x_j, x_k, x_l) + ∑_{(j,k,l,p)∈K} E_{jklp}(x_j, x_k, x_l, x_p). (5) We consider an ensemble of 100 such models obtained from the 100 BSD test images [34]. On average, a model has 8845 ± 670 binary variables, the same number of unary potentials, 5715 ± 430 third order potentials and 98 ± 18 fourth order potentials. Each variable is connected via potentials to at most six other variables, a sparse structure that is favorable for the Lazy Flipper. BP, TRBP, DD and the Lazy Flipper solve these problems approximately, thus providing upper bounds on the minimum energy. The differences between these bounds and the global optimum found by means of MILP are depicted in Fig. 6. It can be seen from this figure that, after 200 seconds, Lazy Flipper approximations provide a tighter upper bound on the global minimum in the median than those of the other three algorithms. BP and DD have a better peak performance, solving one problem to optimality. The Lazy Flipper reaches a search depth of 9 after around 1000 seconds for these sparse graphical models using roughly 720 MB of RAM for the CS-tree. At t = 5000 s and on average over all models, its approximations deviate by only 2.6% from the global optimum.
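Energies like (5), with potentials of mixed order, fit a single generic evaluation scheme; the representation of potentials as (index tuple, function) pairs is our own assumption, not the authors' data structure:

```python
def model_energy(x, potentials):
    """Sum of potentials; each entry is (variable_indices, function)."""
    return sum(f(*(x[i] for i in idx)) for idx, f in potentials)

# tiny example with one unary and one third order potential
potentials = [
    ((0,), lambda a: 0.25 * a),
    ((0, 1, 2), lambda a, b, c: float(a + b + c == 2)),
]
print(model_energy([1, 1, 0], potentials))  # 1.25
```

Such a scheme requires only that each potential can be evaluated for a given assignment, which matches the paper's observation that lazy flipping needs no more than graph traversal and potential evaluations.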
Pruning of 3D Over-Segmentations The model described in the previous section is now applied in 3D to remove excessive boundaries from the over-segmentation of a volume image. In an ensemble of 16 such models obtained from 16 SBFSEM volume images, models have on average 16748 ± 1521 binary variables (and first order potentials), 26379 ± 2502 potentials of order 3, and 5081 ± 482 potentials of order 4. For BP, TRBP, DD and Lazy Flipper approximations, deviations from the global optimum are shown in Fig. 7. It can be seen from this figure that BP performs exceptionally well on these problems, providing approximations whose energies deviate by only 0.4% on average from the global optimum. One reason is that most variables influence many (up to 60) potential functions, and BP can propagate local evidence from all these potentials. Variables are connected via these potentials to as many as 100 neighboring variables, which hampers the exploration of the search space by the Lazy Flipper: it reaches only a search depth of 5 after 10000 seconds, using 4.8 GB of RAM for the CS-tree, and yields worse approximations than BP, TRBP and DD for these models. In practical applications where volume images and the according models are several hundred times larger and can no longer be optimized exactly, it matters whether one can further improve upon the BP approximations. Dashed lines in the first plot in Fig. 7 show the result obtained when initializing the Lazy Flipper with the BP approximation at t = 100 s. This reduces the deviation from the global optimum at t = 50000 s from 0.4% on average over all models to 0.1%. Conclusion The optimum of a function of binary variables that decomposes according to a graphical model can be found by an exhaustive search over only the connected subgraphs of the model. We implemented this search, using a CS-tree to efficiently and uniquely enumerate the subgraphs. The C++ source code is available from http://hci.iwr.uni-heidelberg.de/software.php.
Our algorithm is guaranteed to converge to a global minimum when searching through all subgraphs, which is typically intractable. With limited runtime, approximations can be found by restricting the search to subgraphs of a given maximum size. Simulated and real-world problems exist for which these approximations compare favorably to those obtained by message passing and sub-gradient descent. For large scale problems, the applicability of the Lazy Flipper is limited by the memory required for the CS-tree. However, for regular graphs, this limit can be overcome by an implicit representation of the CS-tree that is the subject of future research. A Parameters and Model Decomposition In all experiments, the damping parameters for BP and TRBP are chosen optimally from the set {0, 0.1, 0.2, ..., 0.9}. The step size of the sub-gradient descent is chosen according to τ_t = α · 1/(1 + βt) (6) where β = 0.01 and α is chosen optimally from {0.01, 0.025, 0.05, 0.1, 0.25, 0.5}. The sequence of step sizes, in particular the function (6) and β, could be tuned further. Moreover, [16] consider the primal-dual gap and [17] smooth the subgradient over iterations in order to suppress oscillations. These measures can have substantial impact on the convergence. The upper bounds obtained by BP, TRBP and DD do not decrease monotonically. After each iteration of these algorithms, we therefore consider the elapsed runtime and the current best bound, i.e. the best bound of the current and all preceding iterations. All five algorithms are implemented in C++, using the same optimized data structures for the graphical model and a visitor design pattern that allows us to measure runtime without significantly affecting performance. The same decomposition of each graphical model into tree models is used for TRBP and DD. Tree models are constructed in a greedy fashion, each comprising as many potential functions as possible.
The procedure is generally applicable to irregular models with higher-order potentials: Initially, all potentials of the graphical model are put on a white list that contains those potentials that have not been added to any tree model. A black list of already added potentials and a gray list of recently added potentials are initially empty. As long as there are potentials on the white list, new tree models are constructed. For each newly constructed tree model, the procedure iterates over the white list, adding potentials to the tree model if they do not introduce loops. Added potentials are moved from the white list to the gray list. After all potentials from the white list have been processed, potentials from the black list that do not introduce loops are added to the tree model. The gray list is then appended to the black list and cleared. The procedure finishes when the white list is empty. As recently shown in [17], decompositions into cyclic subproblems can lead to significantly tighter relaxations and better integer solutions.
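A sketch of this greedy decomposition, using union-find for the acyclicity test as our own implementation choice and omitting the black-list augmentation step for brevity:

```python
def decompose_into_trees(num_vars, potentials):
    """Greedily partition potentials (tuples of distinct variable indices)
    into tree-structured subsets: a potential joins the current tree only
    if it does not connect already-connected variables (no loop)."""
    def find(p, v):
        # union-find root lookup with path halving
        while p[v] != v:
            p[v] = p[p[v]]
            v = p[v]
        return v

    white = list(potentials)            # potentials not yet in any tree
    trees = []
    while white:
        parent = list(range(num_vars))  # fresh components for each tree
        tree, remaining = [], []
        for pot in white:
            roots = {find(parent, v) for v in pot}
            if len(roots) == len(pot):  # all variables in distinct components
                r0 = roots.pop()
                for r in roots:         # merge components spanned by pot
                    parent[r] = r0
                tree.append(pot)
            else:
                remaining.append(pot)   # would close a loop; defer
        trees.append(tree)
        white = remaining
    return trees

trees = decompose_into_trees(3, [(0, 1), (1, 2), (0, 2)])
print(trees)  # [[(0, 1), (1, 2)], [(0, 2)]]
```

The cycle of three pairwise potentials necessarily splits into two trees, mirroring why such models require more than one subproblem in TRBP and DD.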
5,125
1009.4102
1633665642
This article presents a new search algorithm for the NP-hard problem of optimizing functions of binary variables that decompose according to a graphical model. It can be applied to models of any order and structure. The main novelty is a technique to constrain the search space based on the topology of the model. When pursued to the full search depth, the algorithm is guaranteed to converge to a global optimum, passing through a series of monotonically improving local optima that are guaranteed to be optimal within a given and increasing Hamming distance. For a search depth of 1, it specializes to Iterated Conditional Modes. Between these extremes, a useful tradeoff between approximation quality and runtime is established. Experiments on models derived from both illustrative and real problems show that approximations found with limited search depth match or improve those obtained by state-of-the-art methods based on message passing and linear programming.
Second, the Lazy Flipper is a deterministic alternative to the randomized search for tighter bounds proposed and analyzed in 2009 by @cite_22 . Exactly as in @cite_22 , sets of variables that are connected via potentials in the graphical model are considered and variables are flipped if these flips lead to a smaller upper bound on the sum of potentials. In contrast to @cite_22 , unique representatives of these sets are visited in a deterministic order. Both algorithms maintain a current best assignment of values to the variables and are thus related to the Swendsen-Wang algorithm @cite_24 @cite_23 and the Wolff algorithm @cite_33 .
{ "abstract": [ "A new approach to Monte Carlo simulations is presented, giving a highly efficient method of simulation for large systems near criticality. The algorithm violates dynamic universality at second-order phase transitions, producing unusually small values of the dynamical critical exponent.", "A Monte Carlo algorithm is presented that updates large clusters of spins simultaneously in systems at and near criticality. We demonstrate its efficiency in the two-dimensional @math @math models for @math (Ising) and @math ( @math ) at their critical temperatures, and for @math (Heisenberg) with correlation lengths around 10 and 20. On lattices up to @math no sign of critical slowing down is visible with autocorrelation times of 1-2 steps per spin for estimators of long-range quantities.", "We consider the question of computing Maximum A Posteriori (MAP) assignment in an arbitrary pair-wise Markov Random Field (MRF). We present a randomized iterative algorithm based on simple local updates. The algorithm, starting with an arbitrary initial assignment, updates it in each iteration by first, picking a random node, then selecting an (appropriately chosen) random local neighborhood and optimizing over this local neighborhood. Somewhat surprisingly, we show that this algorithm finds a near optimal assignment within n log2 n iterations with high probability for any n node pair-wise MRF with geometry (i.e. MRF graph with polynomial growth) with the approximation error depending on (in a reasonable manner) the geometric growth rate of the graph and the average radius of the local neighborhood - this allows for a graceful tradeoff between the complexity of the algorithm and the approximation error. Through extensive simulations, we show that our algorithm finds extremely good approximate solutions for various kinds of MRFs with geometry.", "Vision tasks, such as segmentation, grouping, recognition, can be formulated as graph partition problems. 
The recent literature witnessed two popular graph cut algorithms: the Ncut using spectral graph analysis and the minimum-cut using the maximum flow algorithm. We present a third major approach by generalizing the Swendsen-Wang method - a well celebrated algorithm in statistical mechanics. Our algorithm simulates ergodic, reversible Markov chain jumps in the space of graph partitions to sample a posterior probability. At each step, the algorithm splits, merges, or regroups a sizable subgraph, and achieves fast mixing at low temperature enabling a fast annealing procedure. Experiments show it converges in 2-30 seconds on a PC for image segmentation. This is 400 times faster than the single-site update Gibbs sampler, and 20-40 times faster than the DDMCMC algorithm. The algorithm can optimize over the number of models and works for general forms of posterior probabilities, so it is more general than the existing graph cut approaches." ], "cite_N": [ "@cite_24", "@cite_33", "@cite_22", "@cite_23" ], "mid": [ "2037139490", "1969758109", "2160561951", "2063266501" ] }
The Lazy Flipper: MAP Inference in Higher-Order Graphical Models by Depth-limited Exhaustive Search
Energy functions that depend on thousands of binary variables and decompose according to a graphical model [1,2,3,4] into potential functions that depend on subsets of all variables have been used successfully for pattern analysis, e.g. in the seminal works [5,6,7,8]. An important problem is the minimization of the sum of potentials, i.e. the search for an assignment of zeros and ones to the variables that minimizes the energy. This problem can be solved efficiently by dynamic programming if the graph is acyclic [9] or its treewidth is small enough [3], and by finding a minimum s-t-cut [6] if the energy function is (permutation) submodular [10,11]. In general, the problem is NP-hard [10]. For moderate problem sizes, exact optimization is sometimes tractable by means of Mixed Integer Linear Programming (MILP) [12,13]. Contrary to popular belief, some practical computer vision problems can indeed be solved to optimality by modern MILP solvers (cf. Section 5). However, all such solvers are eventually overburdened when the problem size becomes too large. In cases where exact optimization is intractable, one has to settle for approximations. While substantial progress has been made in this direction, a deterministic non-redundant search algorithm that constrains the search space based on the topology of the graphical model has not been proposed before. This article presents a depth-limited exhaustive search algorithm, the Lazy Flipper, that does just that. The Lazy Flipper starts from an arbitrary initial assignment of zeros and ones to the variables that can be chosen, for instance, to minimize the sum of only the first order potentials of the graphical model. Starting from this initial configuration, it searches for flips of variables that reduce the energy. As soon as such a flip is found, the current configuration is updated accordingly, i.e. in a greedy fashion. In the beginning, only single variables are flipped.
Once a configuration is found whose energy can no longer be reduced by flipping of single variables, all those subsets of two and successively more variables that are connected via potentials in the graphical model are considered. When a subset of more than one variable is flipped, all smaller subsets that are affected by the flip are revisited. This allows the Lazy Flipper to perform an exhaustive search over all subsets of variables whose flip potentially reduces the energy. Two special data structures described in Section 3 are used to represent each subset of connected variables precisely once and to exclude subsets from the search whose flip cannot reduce the energy due to the topology of the graphical model and the history of unsuccessful flips. These data structures, the Lazy Flipper algorithm and an experimental evaluation of state-of-the-art optimization algorithms on higher-order graphical models are the main contributions of this article. Overall, the new algorithm has four favorable properties: (i) It is strictly convergent. While a global minimum is found when searching through all subgraphs (typically not tractable), approximate solutions with a guaranteed quality certificate (Section 4) are found if the search space is restricted to subgraphs of a given maximum size. The larger the subgraphs are allowed to be, the tighter the upper bound on the minimum energy becomes. This allows for a favorable tradeoff between runtime and approximation quality. (ii) Unlike in brute force search, the runtime of lazy flipping depends on the topology of the graphical model. It is exponential in the worst case but can be shorter compared to brute force search by an amount that is exponential in the number of variables. It is approximately linear in the size of the model for a fixed maximum search depth. (iii) The Lazy Flipper can be applied to graphical models of any order and topology, including but not limited to the more standard grid graphs. 
Directed Bayesian Networks and undirected Markov Random Fields are processed in the exact same manner; they are converted to factor graph models [14] before lazy flipping. (iv) Only trivial operations are performed on the graphical model, namely graph traversal and evaluations of potential functions. These operations are cheap compared, for instance, to the summation and minimization of potential functions performed by message passing algorithms, and require only an implicit specification of potential functions in terms of program code that computes the function value for any given assignment of values to the variables. Experiments on simulated and real-world problems, submodular and non-submodular functions, grids and irregular graphs (Section 5) assess the quality of Lazy Flipper approximations, their convergence as well as the dependence of the runtime of the algorithm on the size of the model and the search depth. The results are put into perspective by a comparison with Iterated Conditional Modes (ICM) [5], Belief Propagation (BP) [9,14], Tree-reweighted BP [15,4] and a Dual Decomposition ansatz using sub-gradient descent methods [16,17]. The Lazy Flipper Data Structures Two special data structures are crucial to the Lazy Flipper. The first data structure that we call a connected subgraph tree (CS-tree) ensures that only connected subsets of variables are considered, i.e. sets of variables which are connected via potentials in the graphical model. Moreover, it ensures that every such subset is represented precisely once (and not repeatedly) by an ordered sequence of its variables, cf. [30]. The rationale behind this concept is the following: If the flip of one variable and the flip of another variable not connected to the first one do not reduce the energy then it is pointless to try a simultaneous flip of both variables because the (energy increasing) contributions from both flips would sum up.
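This additivity argument can be checked numerically on a toy model (our own example): for two variables that share no potential, the energy change of the joint flip equals the sum of the individual changes, while for connected variables it generally does not.

```python
import math

def toy_energy(x):
    # two disconnected pairwise potentials plus unary terms (our own example)
    return (0.7 * (x[0] != x[1]) + 1.3 * (x[2] != x[3])
            + 0.2 * x[0] + 0.4 * x[2])

def delta(x, flip):
    """Energy change caused by flipping the variables in `flip`."""
    y = [xi ^ 1 if i in flip else xi for i, xi in enumerate(x)]
    return toy_energy(y) - toy_energy(x)

x = [0, 0, 0, 0]
# variables 0 and 2 share no potential: the deltas add up exactly
print(math.isclose(delta(x, {0, 2}), delta(x, {0}) + delta(x, {2})))  # True
# variables 0 and 1 do share a potential: the deltas are not additive
print(math.isclose(delta(x, {0, 1}), delta(x, {0}) + delta(x, {1})))  # False
```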
Furthermore, if the flip of a disconnected set of variables reduces the energy then the same and possibly better reductions can be obtained by flipping connected subsets of this set consecutively, in any order. All disconnected subsets of variables can therefore be excluded from the search if the connected subsets are searched ordered by their size. Finding a unique representative for each connected subset of variables is important. The alternative would be to consider all sequences of pairwise distinct variables in which each variable is connected to at least one of its predecessors and to ignore the fact that many of these sequences represent the same set. Sampling algorithms that select and grow connected subsets in a randomized fashion do exactly this. However, the redundancy is large. As an example, consider a connected subset of six variables of a 2-dimensional grid graph as depicted in Fig. 1a. Although there is only one connected set that contains all six variables, 208 out of the 6! = 720 possible sequences of these variables meet the requirement that each variable is connected to at least one of its predecessors. This 208-fold redundancy hampers the exploration of the search space by means of randomized algorithms; it is avoided in lazy flipping at the cost of storing one unique representative for every connected subgraph in the CS-tree. The second data structure is a tag list that prevents the repeated assessment of unsuccessful flips. The idea is the following: If some variables have been flipped in one iteration (and the current best configuration has been updated accordingly), it suffices to revisit only those sets of variables that are connected to at least one variable that has been flipped. All other sets of variables are excluded from the search because the potentials that depend on these variables are unaffected by the flip and have been assessed in their current state before. 
The tag list and the connected subgraph tree are essential to the Lazy Flipper and are described in the following sections, 3.1 and 3.2. For a quick overview, the reader can however skip these sections, take for granted that it is possible to efficiently enumerate all connected subgraphs of a graphical model, ordered by their size, and refer directly to the main algorithm (Section 4 and Alg. 1). All non-trivial sub-functions used in the main algorithm are related to tag lists and the CS-tree and are described in detail now. Connected Subgraph Tree (CS-tree) The CS-tree represents subsets of connected variables uniquely. Every node in the CS-tree except the special root node is labeled with the integer index of one variable in the graphical model. The same variable index is assigned to several nodes in the CS-tree unless the graphical model is completely disconnected. The CS-tree is constructed such that every connected subset of variables in the graphical model corresponds to precisely one path in the CS-tree from a node to the root node, the node labels along the path indicating precisely the variables in the subset; conversely, for each path in the CS-tree from a node to the root node, there exists precisely one connected subset of variables in the graphical model. In order to guarantee by construction of the CS-tree that each subset of connected variables is represented precisely once, the variable indices of each subset are put in a special order, namely the lexicographically smallest order in which each variable is connected to at least one of its predecessors. The following definition of these sequences of variable indices is recursive and therefore motivates an algorithm for the construction of the CS-tree for the Lazy Flipper. A small grid model and its complete CS-tree are depicted in Fig. 1.
Definition 1 (CSR-Sequence). Given an undirected graph G = (V, E) whose m ∈ N vertices V = {1, . . . , m} are integer indices, every sequence that consists of only one index is called connected subset representing (CSR). Given n ∈ N and a CSR-sequence (v_1, . . . , v_n), a sequence (v_1, . . . , v_n, v_{n+1}) of n + 1 indices is called a CSR-sequence precisely if the following conditions hold: (i) v_{n+1} is not among its predecessors, i.e. ∀j ∈ {1, . . . , n} : v_j ≠ v_{n+1}. (ii) v_{n+1} is connected to at least one of its predecessors, i.e. ∃j ∈ {1, . . . , n} : {v_j, v_{n+1}} ∈ E. (iii) v_{n+1} > v_1. (iv) If n ≥ 2 and v_{n+1} could have been added at an earlier position j ∈ {2, . . . , n} of the sequence, fulfilling (i)-(iii), then all subsequent vertices v_j, . . . , v_n are smaller than v_{n+1}, i.e. ∀j ∈ {2, . . . , n} : ({v_{j−1}, v_{n+1}} ∈ E ⇒ (∀k ∈ {j, . . . , n} : v_k < v_{n+1})). (1) Based on this definition, three functions are sufficient to recursively build the CS-tree T of a graphical model G, starting from the root node. The function q = growSubset(T, G, p) appends to a node p in the CS-tree the smallest variable index that is not yet among the children of p and fulfills (i)-(iv) for the CSR-sequence of variable indices on the path from p to the root node. It returns the appended node, or the empty set if no suitable variable index exists. The function q = firstSubsetOfSize(T, G, n) traverses the CS-tree on the current deepest level n − 1, calling the function growSubset for each leaf until a node can be appended and thus the first subset of size n has been found. Finally, the function q = nextSubsetOfSameSize(T, G, p) starts from a node p, finds its parent and traverses from there in level order, calling growSubset for each node to find the length-lexicographic successor of the CSR-sequence associated with the node p, i.e. the representative of the next subset of the same size. These functions are used by the Lazy Flipper (Alg. 1) to construct the CS-tree.
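Definition 1 translates directly into code. The following Python sketch (our illustration; the function names are hypothetical, and the paper's C++ implementation stores the sequences as paths in the CS-tree instead of recomputing them) checks conditions (i)-(iv) and enumerates every connected subset exactly once, ordered by size:

```python
def is_csr_extension(seq, v, adj):
    """Conditions (i)-(iv) of Definition 1 for appending vertex v to the
    CSR-sequence seq (a tuple of vertex indices)."""
    n = len(seq)
    if v in seq:                                  # (i) not among its predecessors
        return False
    if not any(u in adj[v] for u in seq):         # (ii) connected to a predecessor
        return False
    if v <= seq[0]:                               # (iii) larger than the first vertex
        return False
    for j in range(1, n):                         # (iv) lexicographic minimality
        if seq[j - 1] in adj[v] and any(seq[k] > v for k in range(j, n)):
            return False
    return True

def connected_subsets(adj, max_size):
    """Enumerate every connected subset of vertices exactly once, ordered by
    size, via its unique CSR-sequence (adj maps vertex -> set of neighbors)."""
    level = [(v,) for v in sorted(adj)]
    subsets = [frozenset(s) for s in level]
    for _ in range(max_size - 1):
        level = [seq + (v,) for seq in level for v in sorted(adj)
                 if is_csr_extension(seq, v, adj)]
        subsets.extend(frozenset(s) for s in level)
    return subsets

# 2x2 grid: 13 connected subsets (4 singletons, 4 edges, 4 triples, 1 full set),
# each produced exactly once.
grid = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2}}
print(len(connected_subsets(grid, 4)))  # -> 13
```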
In contrast, the traversal of already constructed parts of the CS-tree (when revisiting subsets of variables after successful flips) is performed by functions associated with tag lists, which are defined in the following section. Tag Lists Tag lists are used to tag variables that are affected by flips. A variable is affected by a flip either because it has been flipped itself or because it is connected (via a potential) to a flipped variable. The tag list data structure comprises a Boolean vector in which each entry corresponds to a variable, indicating whether or not this variable is affected by recent flips. As the total number of variables can be large (10^6 is not exceptional) and possibly only a few variables are affected by flips, a list of all affected variables is maintained in addition to the vector. This list allows the algorithm to untag all tagged variables without re-initializing the entire Boolean vector. The two fundamental operations on a tag list L are tag(L, x), which tags the variable with the index x, and untagAll(L). For the Lazy Flipper, three special functions are used in addition: Given a tag list L, a (possibly incomplete) CS-tree T, the graphical model G, and a node s ∈ T, tagConnectedVariables(L, T, G, s) tags all variables on the path from s to the root node in T, as well as all nodes that are connected (via a potential in G) to at least one of these nodes. The function s = firstTaggedSubset(L, T) traverses the first level of T and returns the first node s whose variable is tagged (or the empty set if all variables are untagged). Finally, the function t = nextTaggedSubset(L, T, s) traverses T in level order, starting with the successor of s, and returns the first node t for which the path to the root contains at least one tagged variable. These functions, together with those of the CS-tree, are sufficient for the Lazy Flipper, Alg. 1. The Lazy Flipper Algorithm In the main loop of the Lazy Flipper (lines 2-26 in Alg.
1), the size n of subsets is incremented until the limit n_max is reached (line 24). Inside this main loop, the algorithm falls into two parts, the exploration part (lines 3-11) and the revisiting part (lines 12-23). In the exploration part, flips of previously unseen subsets of n variables are assessed. The current best configuration c is updated in a greedy fashion, i.e. whenever a flip yields a lower energy. At the same time, the CS-tree is grown, using the functions defined in Section 3.1. In the revisiting part, all subsets of sizes 1 through n that are affected by recent flips are assessed iteratively until no flip of any of these subsets reduces the energy (line 14). The indices of affected variables are stored in the tag lists L_1 and L_2 (cf. Section 3.2). In practice, the Lazy Flipper can be stopped at any point, e.g. when a time limit is exceeded, and the current best configuration c taken as the output. It eventually reaches configurations for which it is guaranteed that no flip of n or fewer variables can yield a lower energy, because all such flips that could potentially lower the energy have been assessed (line 14). Such configurations are therefore guaranteed to be optimal within a Hamming radius of n. Experiments For a comparative assessment of the Lazy Flipper, four optimization problems of different complexity are considered, two simulated problems and two problems based on real-world data. For the sake of reproducibility, the simulations are described in detail, and the models constructed from real data are available from the authors as supplementary material. The first problem is a ferromagnetic Ising model that is widely used in computer vision for foreground vs. background segmentation [6]. Energy functions of this model consist of first and second order potentials that are submodular. The global minimum can therefore be found via a graph cut.
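The acceptance rule and the optimality guarantee of Alg. 1 can be illustrated by a brute-force Python sketch (ours, with made-up toy numbers): it tries flipping every connected subset of at most max_size variables and accepts any flip that lowers the energy. This yields exactly the stated guarantee of optimality within a Hamming radius of max_size, but without the CS-tree and tag lists that make the real algorithm efficient.

```python
from itertools import combinations, product

def greedy_flip_search(x, energy_fn, neighbors, max_size):
    """Accept any flip of a connected subset of <= max_size variables that
    lowers the energy; repeat until no such flip exists. The result is then
    optimal within a Hamming radius of max_size."""
    def connected(subset):
        subset = set(subset)
        seen, stack = set(), [next(iter(subset))]
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(u for u in neighbors[v] if u in subset)
        return seen == subset

    x, best, improved = list(x), energy_fn(x), True
    while improved:
        improved = False
        for size in range(1, max_size + 1):
            for subset in combinations(range(len(x)), size):
                if not connected(subset):
                    continue
                for v in subset:
                    x[v] ^= 1
                e = energy_fn(x)
                if e < best:
                    best, improved = e, True
                else:
                    for v in subset:   # undo the unsuccessful flip
                        x[v] ^= 1
    return x, best

# Tiny 3-variable chain with Ising-like couplings (toy values, not the paper's):
unary = [(0.0, 1.0), (0.6, 0.0), (0.0, 1.0)]
edges = [(0, 1), (1, 2)]
neighbors = {0: [1], 1: [0, 2], 2: [1]}

def energy(x):
    return (sum(unary[j][x[j]] for j in range(3))
            + sum(0.5 for j, k in edges if x[j] != x[k]))

x, best = greedy_flip_search([0, 1, 0], energy, neighbors, max_size=2)
assert best == min(energy(list(c)) for c in product([0, 1], repeat=3))
```

On this toy chain, a depth of 2 already suffices to reach the global optimum; on the models in the experiments, larger depths tighten the bound step by step.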
We simulate random instances of this model in order to measure how the runtime of lazy flipping depends on the size of the model and the coupling strength, and to compare Lazy Flipper approximations to the global optimum (Section 5.1). The second problem is a problem of finding optimal subgraphs on a grid. Energy functions of this model consist of first and fourth order potentials, of which the latter are not permutation submodular. We simulate difficult instances of this problem that cannot be solved to optimality, even when allowing several days of runtime. In this challenging setting, Lazy Flipper approximations and their convergence are compared to those of BP, TRBP and DD, as well as to the lower bounds on local polytope relaxations obtained by DD (Section 5.2). The third problem is a graphical model for removing excessive boundaries from image over-segmentations that is related to the model proposed in [31]. Energy functions of this model consist of first, third and fourth order potentials. In contrast to the grid graphs of the Ising model and the optimal subgraph model, the corresponding factor graphs are irregular but still planar. The higher-order potentials are not permutation submodular, but the global optimum can be found by means of mixed integer linear programming (MILP) in approximately 10 seconds per model using one of the fastest commercial solvers (IBM ILOG CPLEX, version 12.1). Since CPLEX is closed-source software, the algorithm is not known in detail and we use it as a black box. The general method used by CPLEX for MILP is a branch-and-bound algorithm [32,33]. 100 instances of this model obtained from the 100 natural test images of the Berkeley Segmentation Database (BSD) [34] are used to compare the Lazy Flipper to algorithms based on message passing and linear programming in a real-world setting where the global optimum is accessible (Section 5.3).
The fourth problem is identical to the third, except that instances are obtained from 3-dimensional volume images of neural tissue acquired by means of Serial Block Face Scanning Electron Microscopy (SBFSEM) [35]. Unlike in the 2-dimensional case, the factor graphs are no longer planar. Whether exact optimization by means of MILP is practical depends on the size of the model. In practice, SBFSEM datasets consist of more than 2000^3 voxels. To be able to compare approximations to the global optimum, we consider 16 models obtained from 16 SBFSEM volume sub-images of only 150^3 voxels for which the global optimum can be found by means of MILP within a few minutes (Section 5.4). Ferromagnetic Ising model The ferromagnetic Ising model consists of m ∈ N binary variables x_1, . . . , x_m ∈ {0, 1} that are associated with points on a 2-dimensional square grid and connected via second order potentials E_jk(x_j, x_k) = 1 − δ_{x_j,x_k} (δ: Kronecker delta) to their nearest neighbors. First order potentials E_j(x_j) relate the variables to observed evidence in underlying data. The total energy of this model is the following sum, in which α ∈ R_0^+ is a weight on the second order potentials, and j ∼ k indicates that the variables x_j and x_k are adjacent on the grid: ∀x ∈ {0, 1}^m : E(x) = Σ_{j=1}^{m} E_j(x_j) + α Σ_{j=1}^{m} Σ_{k=j+1, k∼j}^{m} E_jk(x_j, x_k). (2) For each α ∈ {0.1, 0.3, 0.5, 0.7, 0.9}, an ensemble of ten simulated Ising models of 50 · 50 = 2500 variables is considered. The first order potentials E_j are initialized randomly by drawing E_j(0) uniformly from the interval [0, 1] and setting E_j(1) := 1 − E_j(0). The exact global minimum of the total energy is found via a graph cut. For each model, the Lazy Flipper is initialized with a configuration that minimizes the sum of the first order potentials. Upper bounds on the minimum energy found by means of lazy flipping converge towards the global optimum, as depicted in Fig. 2.
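The energy (2) and the random instances are straightforward to reproduce. In the following Python sketch (ours; the grid width is reduced from 50 to 4 for brevity), the starting configuration x0 minimizes the sum of the first order potentials, as in the experiments:

```python
import random

def ising_energy(x, E1, alpha, width):
    """Energy (2) on a width x width grid: unary terms plus alpha for every
    disagreeing pair of grid neighbors (E_jk = 1 - Kronecker delta)."""
    e = sum(E1[j][x[j]] for j in range(width * width))
    for j in range(width * width):
        r, c = divmod(j, width)
        if c + 1 < width and x[j] != x[j + 1]:       # right neighbor
            e += alpha
        if r + 1 < width and x[j] != x[j + width]:   # lower neighbor
            e += alpha
    return e

# Random instance as in the experiments: E_j(0) ~ U[0,1], E_j(1) = 1 - E_j(0).
random.seed(0)
width = 4  # the paper uses 50
E1 = []
for _ in range(width * width):
    u = random.random()
    E1.append((u, 1.0 - u))

# Starting configuration minimizing the sum of the first order potentials:
x0 = [0 if e0 <= e1 else 1 for e0, e1 in E1]
```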
Color scales and gray scales in this figure respectively indicate the maximum size and the total number of distinct subsets that have been searched, averaged over all models in the ensemble. It can be seen from this figure that upper bounds on the minimum energy are tightened significantly by searching larger subsets of variables, independent of the coupling strength α. It takes the Lazy Flipper less than 100 seconds (on a single CPU of an Intel Quad Xeon E7220 at 2.93 GHz) to exhaustively search all connected subsets of 6 variables. The amount of RAM required for the CS-tree (in bytes) is 24 times as high as the number of subsets (approximately 50 MB in this case) because each subset is stored in the CS-tree as a node consisting of three 64-bit integers: a variable index, the index of the parent node and the index of the level order successor (Section 3.1). For n_max ∈ {1, 6}, configurations corresponding to the upper bounds on the minimum energy are depicted in Fig. 3. It can be seen from this figure that all connected subsets of falsely set variables are larger than n_max. For a fixed maximum subgraph size n_max, the runtime of lazy flipping scales approximately linearly with the number of variables in the Ising model (cf. Fig. 4). Optimal Subgraph Model The optimal subgraph model consists of m ∈ N binary variables x_1, . . . , x_m ∈ {0, 1} that are associated with the edges of a 2-dimensional grid graph. A subgraph is defined by those edges whose associated variables attain the value 1.
Energy functions of this model consist of first order potentials, one for each edge, and fourth order potentials, one for each node v ∈ V in which four edges (j, k, l, m) = N(v) meet: ∀x ∈ {0, 1}^m : E(x) = Σ_{j=1}^{m} E_j(x_j) + Σ_{(j,k,l,m)∈N(V)} E_jklm(x_j, x_k, x_l, x_m). (3) All fourth order potentials are equal, penalizing dead ends and branches of paths in the selected subgraph: E_jklm(x_j, x_k, x_l, x_m) = 0.0 if s = 0, 100.0 if s = 1, 0.6 if s = 2, 1.2 if s = 3, 2.4 if s = 4, with s = x_j + x_k + x_l + x_m. (4) An ensemble of 16 such models is constructed by drawing the unary potentials at random, exactly as for the Ising models. Each model has 19800 variables, the same number of first order potentials, and 9801 fourth order potentials. Approximate optimal subgraphs are found by Min-Sum Belief Propagation (BP) with parallel message passing [9,14] and message damping [36], by Tree-reweighted Belief Propagation (TRBP) [4], by Dual Decomposition (DD) [16,17] and by lazy flipping (LF). DD also affords lower bounds on the minimum energy. Details on the parameters of the algorithms and the decomposition of the models are given in Appendix A. Bounds on the minimum energy converge with increasing runtime, as depicted in Fig. 5. It can be seen from this figure that Lazy Flipper approximations converge fast, reaching a smaller energy after 3 seconds than the other approximations after 10000 seconds. Subgraphs of up to 7 variables are searched, using approximately 2.2 GB of RAM for the CS-tree. A gap remains between the energies of all approximations and the lower bound on the minimum energy obtained by DD. Thus, there is no guarantee that any of the problems has been solved to optimality. However, the gaps are upper bounds on the deviation from the global optimum. They are compared at t = 10000 s in Fig. 5.
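The shared fourth order potential (4) is a lookup on the number s of selected edges at a node. A small Python sketch (ours) makes the penalty structure explicit:

```python
def e4(xj, xk, xl, xm):
    """Fourth order potential (4): s = 1 (a dead end) is prohibitively
    expensive, s = 2 (a path passing through the node) is cheapest among
    the non-empty cases, s = 3 and s = 4 (branchings) cost more."""
    s = xj + xk + xl + xm
    return (0.0, 100.0, 0.6, 1.2, 2.4)[s]

assert e4(0, 0, 0, 0) == 0.0   # node not used by the subgraph
assert e4(1, 1, 0, 0) == 0.6   # path continues through the node
```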
For any model in the ensemble, the energy of the Lazy Flipper approximation is less than 4% away from the global optimum, a substantial improvement over the other algorithms for this particular model. Pruning of 2D Over-Segmentations The graphical model for removing excessive boundaries from image over-segmentations contains one binary variable for each boundary between segments, indicating whether this boundary is to be removed (0) or preserved (1). First order potentials relate these variables to the image content, and non-submodular third and fourth order potentials connect adjacent boundaries, supporting the closedness and smooth continuation of preserved boundaries. The energy function is a sum of these potentials: ∀x ∈ {0, 1}^m : E(x) = Σ_{j=1}^{m} E_j(x_j) + Σ_{(j,k,l)∈J} E_jkl(x_j, x_k, x_l) + Σ_{(j,k,l,p)∈K} E_jklp(x_j, x_k, x_l, x_p). (5) We consider an ensemble of 100 such models obtained from the 100 BSD test images [34]. On average, a model has 8845 ± 670 binary variables, the same number of unary potentials, 5715 ± 430 third order potentials and 98 ± 18 fourth order potentials. Each variable is connected via potentials to at most six other variables, a sparse structure that is favorable for the Lazy Flipper. BP, TRBP, DD and the Lazy Flipper solve these problems approximately, thus providing upper bounds on the minimum energy. The differences between these bounds and the global optimum found by means of MILP are depicted in Fig. 6. It can be seen from this figure that, after 200 seconds, Lazy Flipper approximations provide a tighter upper bound on the global minimum in the median than those of the other three algorithms. BP and DD have a better peak performance, solving one problem to optimality. The Lazy Flipper reaches a search depth of 9 after around 1000 seconds for these sparse graphical models, using roughly 720 MB of RAM for the CS-tree. At t = 5000 s and on average over all models, its approximations deviate by only 2.6% from the global optimum.
Pruning of 3D Over-Segmentations The model described in the previous section is now applied in 3D to remove excessive boundaries from the over-segmentation of a volume image. In an ensemble of 16 such models obtained from 16 SBFSEM volume images, models have on average 16748 ± 1521 binary variables (and first order potentials), 26379 ± 2502 potentials of order 3, and 5081 ± 482 potentials of order 4. For BP, TRBP, DD and Lazy Flipper approximations, deviations from the global optimum are shown in Fig. 7. It can be seen from this figure that BP performs exceptionally well on these problems, providing approximations whose energies deviate by only 0.4% on average from the global optimum. One reason is that most variables influence many (up to 60) potential functions, and BP can propagate local evidence from all these potentials. Variables are connected via these potentials to as many as 100 neighboring variables, which hampers the exploration of the search space by the Lazy Flipper: it reaches only a search depth of 5 after 10000 seconds, using 4.8 GB of RAM for the CS-tree, and yields worse approximations than BP, TRBP and DD for these models. In practical applications where volume images and the according models are several hundred times larger and can no longer be optimized exactly, it matters whether one can further improve upon the BP approximations. Dashed lines in the first plot in Fig. 7 show the result obtained when initializing the Lazy Flipper with the BP approximation at t = 100 s. This reduces the deviation from the global optimum at t = 50000 s from 0.4% on average over all models to 0.1%. Conclusion The optimum of a function of binary variables that decomposes according to a graphical model can be found by an exhaustive search over only the connected subgraphs of the model. We implemented this search, using a CS-tree to efficiently and uniquely enumerate the subgraphs. The C++ source code is available from http://hci.iwr.uni-heidelberg.de/software.php.
Our algorithm is guaranteed to converge to a global minimum when searching through all subgraphs, which is typically intractable. With limited runtime, approximations can be found by restricting the search to subgraphs of a given maximum size. Simulated and real-world problems exist for which these approximations compare favorably to those obtained by message passing and sub-gradient descent. For large scale problems, the applicability of the Lazy Flipper is limited by the memory required for the CS-tree. However, for regular graphs, this limit can be overcome by an implicit representation of the CS-tree that is the subject of future research. A Parameters and Model Decomposition In all experiments, the damping parameters for BP and TRBP are chosen optimally from the set {0, 0.1, 0.2, . . . , 0.9}. The step size of the sub-gradient descent is chosen according to τ_t = α · 1/(1 + βt), (6) where β = 0.01 and α is chosen optimally from {0.01, 0.025, 0.05, 0.1, 0.25, 0.5}. The sequence of step sizes, in particular the function (6) and the parameter β, could be tuned further. Moreover, [16] consider the primal-dual gap and [17] smooth the subgradient over iterations in order to suppress oscillations. These measures can have substantial impact on the convergence. The upper bounds obtained by BP, TRBP and DD do not decrease monotonically. After each iteration of these algorithms, we therefore consider the elapsed runtime and the current best bound, i.e. the best bound of the current and all preceding iterations. All five algorithms are implemented in C++, using the same optimized data structures for the graphical model and a visitor design pattern that allows us to measure runtime without significantly affecting performance. The same decomposition of each graphical model into tree models is used for TRBP and DD. Tree models are constructed in a greedy fashion, each comprising as many potential functions as possible.
The procedure is generally applicable to irregular models with higher-order potentials: Initially, all potentials of the graphical model are put on a white list that contains those potentials that have not been added to any tree model. A black list of already added potentials and a gray list of recently added potentials are initially empty. As long as there are potentials on the white list, new tree models are constructed. For each newly constructed tree model, the procedure iterates over the white list, adding potentials to the tree model if they do not introduce loops. Added potentials are moved from the white list to the gray list. After all potentials from the white list have been processed, potentials from the black list that do not introduce loops are added to the tree model. The gray list is then appended to the black list and cleared. The procedure finishes when the white list is empty. As recently shown in [17], decompositions into cyclic subproblems can lead to significantly tighter relaxations and better integer solutions.
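The white/gray/black-list construction can be sketched in Python as follows (our rendering, with a hypothetical decompose_into_trees; the acyclicity test uses a disjoint-set structure, exploiting that a potential keeps a tree model acyclic precisely if its variables lie in pairwise distinct connected components):

```python
class DisjointSet:
    """Union-find over variable indices, used to detect loops."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def decompose_into_trees(potentials):
    """Greedy decomposition: 'white' potentials are in no tree model yet,
    'black' ones are already placed, 'gray' ones were added to the tree
    under construction. Each potential is a tuple of variable indices."""
    white, black, trees = list(potentials), [], []
    while white:
        ds, tree, gray = DisjointSet(), [], []

        def try_add(p):
            roots = [ds.find(v) for v in p]
            if len(set(roots)) < len(roots):  # would close a loop
                return False
            for v in p[1:]:
                ds.union(p[0], v)
            tree.append(p)
            return True

        for p in list(white):                 # absorb new potentials first
            if try_add(p):
                white.remove(p)
                gray.append(p)
        for p in black:                       # then pad with already placed ones
            try_add(p)
        black += gray
        trees.append(tree)
    return trees

# A 4-cycle of pairwise potentials cannot fit into one tree; two are needed:
print(len(decompose_into_trees([(0, 1), (1, 2), (2, 3), (3, 0)])))  # -> 2
```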
5,125
1009.4102
1633665642
This article presents a new search algorithm for the NP-hard problem of optimizing functions of binary variables that decompose according to a graphical model. It can be applied to models of any order and structure. The main novelty is a technique to constrain the search space based on the topology of the model. When pursued to the full search depth, the algorithm is guaranteed to converge to a global optimum, passing through a series of monotonically improving local optima that are guaranteed to be optimal within a given and increasing Hamming distance. For a search depth of 1, it specializes to Iterated Conditional Modes. Between these extremes, a useful tradeoff between approximation quality and runtime is established. Experiments on models derived from both illustrative and real problems show that approximations found with limited search depth match or improve those obtained by state-of-the-art methods based on message passing and linear programming.
Third, lazy flipping with a limited search depth as a means of approximate optimization competes with message passing algorithms @cite_29 @cite_3 @cite_18 @cite_28 and with algorithms based on convex programming relaxations of the optimization problem @cite_18 @cite_7 @cite_0 @cite_8 , in particular with Tree-reweighted Belief Propagation (TRBP) @cite_13 @cite_28 @cite_21 and sub-gradient descent @cite_35 @cite_1 .
{ "abstract": [ "This paper introduces a new rigorous theoretical framework to address discrete MRF-based optimization in computer vision. Such a framework exploits the powerful technique of Dual Decomposition. It is based on a projected subgradient scheme that attempts to solve an MRF optimization problem by first decomposing it into a set of appropriately chosen subproblems, and then combining their solutions in a principled way. In order to determine the limits of this method, we analyze the conditions that these subproblems have to satisfy and demonstrate the extreme generality and flexibility of such an approach. We thus show that by appropriately choosing what subproblems to use, one can design novel and very powerful MRF optimization algorithms. For instance, in this manner we are able to derive algorithms that: 1) generalize and extend state-of-the-art message-passing methods, 2) optimize very tight LP-relaxations to MRF optimization, and 3) take full advantage of the special structure that may exist in particular MRFs, allowing the use of efficient inference techniques such as, e.g., graph-cut-based methods. Theoretical analysis on the bounds related with the different algorithms derived from our framework and experimental results comparisons using synthetic and real data for a variety of tasks in computer vision demonstrate the extreme potentials of our approach.", "We present a novel message passing algorithm for approximating the MAP problem in graphical models. The algorithm is similar in structure to max-product but unlike max-product it always converges, and can be proven to find the exact MAP solution in various settings. The algorithm is derived via block coordinate descent in a dual of the LP relaxation of MAP, but does not require any tunable parameters such as step size or tree weights. We also describe a generalization of the method to cluster based potentials. 
The new method is tested on synthetic and real-world problems, and compares favorably with previous approaches.", "The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review 's upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.", "The problem of obtaining the maximum a posteriori estimate of a general discrete Markov random field (i.e., a Markov random field defined using a discrete set of labels) is known to be NP-hard. However, due to its central importance in many applications, several approximation algorithms have been proposed in the literature. 
In this paper, we present an analysis of three such algorithms based on convex relaxations: (i) LP-S: the linear programming (LP) relaxation proposed by Schlesinger (1976) for a special case and independently in (2001), (1998), and (2005) for the general case; (ii) QP-RL: the quadratic programming (QP) relaxation of Ravikumar and Lafferty (2006); and (iii) SOCP-MS: the second order cone programming (SOCP) relaxation first proposed by Muramatsu and Suzuki (2003) for two label problems and later extended by (2006) for a general label set. We show that the SOCP-MS and the QP-RL relaxations are equivalent. Furthermore, we prove that despite the flexibility in the form of the constraints and objective function offered by QP and SOCP, the LP-S relaxation strictly dominates (i.e., provides a better approximation than) QP-RL and SOCP-MS. We generalize these results by defining a large class of SOCP (and equivalent QP) relaxations which is dominated by the LP-S relaxation. Based on these results we propose some novel SOCP relaxations which define constraints using random variables that form cycles or cliques in the graphical model representation of the random field. Using some examples we show that the new SOCP relaxations strictly dominate the previous approaches.", "The formalism of probabilistic graphical models provides a unifying framework for capturing complex dependencies among random variables, and building large-scale multivariate statistical models. Graphical models have become a focus of research in many statistical, computational and mathematical fields, including bioinformatics, communication theory, statistical physics, combinatorial optimization, signal and image processing, information retrieval and statistical machine learning. Many problems that arise in specific instances — including the key problems of computing marginals and modes of probability distributions — are best studied in the general setting.
Working with exponential family representations, and exploiting the conjugate duality between the cumulant function and the entropy for exponential families, we develop general variational representations of the problems of computing likelihoods, marginal probabilities and most probable configurations. We describe how a wide variety of algorithms — among them sum-product, cluster variational methods, expectation-propagation, mean field methods, max-product and linear programming relaxation, as well as conic programming relaxations — can all be understood in terms of exact or approximate forms of these variational representations. The variational approach provides a complementary alternative to Markov chain Monte Carlo as a general source of approximation methods for inference in large-scale statistical models.", "Algorithms that must deal with complicated global functions of many variables often exploit the manner in which the given functions factor as a product of \"local\" functions, each of which depends on a subset of the variables. Such a factorization can be visualized with a bipartite graph that we call a factor graph. In this tutorial paper, we present a generic message-passing algorithm, the sum-product algorithm, that operates in a factor graph. Following a single, simple computational rule, the sum-product algorithm computes - either exactly or approximately - various marginal functions derived from the global function. A wide variety of algorithms developed in artificial intelligence, signal processing, and digital communications can be derived as specific instances of the sum-product algorithm, including the forward/backward algorithm, the Viterbi algorithm, the iterative \"turbo\" decoding algorithm, Pearl's (1988) belief propagation algorithm for Bayesian networks, the Kalman filter, and certain fast Fourier transform (FFT) algorithms.", "Algorithms for discrete energy minimization are of fundamental importance in computer vision.
In this paper, we focus on the recent technique proposed by (Nov. 2005) - tree-reweighted max-product message passing (TRW). It was inspired by the problem of maximizing a lower bound on the energy. However, the algorithm is not guaranteed to increase this bound - it may actually go down. In addition, TRW does not always converge. We develop a modification of this algorithm which we call sequential tree-reweighted message passing. Its main property is that the bound is guaranteed not to decrease. We also give a weak tree agreement condition which characterizes local maxima of the bound with respect to TRW algorithms. We prove that our algorithm has a limit point that achieves weak tree agreement. Finally, we show that our algorithm requires half as much memory as traditional message passing approaches. Experimental results demonstrate that on certain synthetic and real problems, our algorithm outperforms both the ordinary belief propagation and tree-reweighted algorithm in (M. J. Wainwright, et al, Nov. 2005). In addition, on stereo problems with Potts interactions, we obtain a lower energy than graph cuts", "We present a novel dual decomposition approach to MAP inference with highly connected discrete graphical models. Decompositions into cyclic k-fan structured subproblems are shown to significantly tighten the Lagrangian relaxation relative to the standard local polytope relaxation, while enabling efficient integer programming for solving the subproblems. Additionally, we introduce modified update rules for maximizing the dual function that avoid oscillations and converge faster to an optimum of the relaxed problem, and never get stuck in nonoptimal fixed points.", "This paper presents a new deterministic approximation technique in Bayesian networks.
This method, "Expectation Propagation," unifies two previous techniques: assumed-density filtering, an extension of the Kalman filter, and loopy belief propagation, an extension of belief propagation in Bayesian networks. Loopy belief propagation, because it propagates exact belief states, is useful for a limited class of belief networks, such as those which are purely discrete. Expectation Propagation approximates the belief states by only retaining expectations, such as mean and variance, and iterates until these expectations are consistent throughout the network. This makes it applicable to hybrid networks with discrete and continuous nodes. Experiments with Gaussian mixture models show Expectation Propagation to be convincingly better than methods with similar computational cost: Laplace's method, variational Bayes, and Monte Carlo. Expectation Propagation also provides an efficient algorithm for training Bayes point machine classifiers.

We consider the problem of optimizing multilabel MRFs, which is in general NP-hard and ubiquitous in low-level computer vision. One approach for its solution is to formulate it as an integer linear program and relax the integrality constraints. The approach we consider in this paper is to first convert the multi-label MRF into an equivalent binary-label MRF and then to relax it. The resulting relaxation can be efficiently solved using a maximum flow algorithm. Its solution provides us with a partially optimal labelling of the binary variables. This partial labelling is then easily transferred to the multi-label problem. We study the theoretical properties of the new relaxation and compare it with the standard one. Specifically, we compare tightness, and characterize a subclass of problems where the two relaxations coincide.
We propose several combined algorithms based on this technique and demonstrate their performance on challenging computer vision problems.

We develop and analyze methods for computing provably optimal maximum a posteriori probability (MAP) configurations for a subclass of Markov random fields defined on graphs with cycles. By decomposing the original distribution into a convex combination of tree-structured distributions, we obtain an upper bound on the optimal value of the original problem (i.e., the log probability of the MAP assignment) in terms of the combined optimal values of the tree problems. We prove that this upper bound is tight if and only if all the tree distributions share an optimal configuration in common. An important implication is that any such shared configuration must also be a MAP configuration for the original distribution. Next we develop two approaches to attempting to obtain tight upper bounds: a) a tree-relaxed linear program (LP), which is derived from the Lagrangian dual of the upper bounds; and b) a tree-reweighted max-product message-passing algorithm that is related to but distinct from the max-product algorithm. In this way, we establish a connection between a certain LP relaxation of the mode-finding problem and a reweighted form of the max-product (min-sum) message-passing algorithm.
The Lazy Flipper: MAP Inference in Higher-Order Graphical Models by Depth-limited Exhaustive Search
Energy functions that depend on thousands of binary variables and decompose according to a graphical model [1,2,3,4] into potential functions that depend on subsets of all variables have been used successfully for pattern analysis, e.g. in the seminal works [5,6,7,8]. An important problem is the minimization of the sum of potentials, i.e. the search for an assignment of zeros and ones to the variables that minimizes the energy. This problem can be solved efficiently by dynamic programming if the graph is acyclic [9] or its treewidth is small enough [3], and by finding a minimum s-t-cut [6] if the energy function is (permutation) submodular [10,11]. In general, the problem is NP-hard [10]. For moderate problem sizes, exact optimization is sometimes tractable by means of Mixed Integer Linear Programming (MILP) [12,13]. Contrary to popular belief, some practical computer vision problems can indeed be solved to optimality by modern MILP solvers (cf. Section 5). However, all such solvers are eventually overburdened when the problem size becomes too large. In cases where exact optimization is intractable, one has to settle for approximations. While substantial progress has been made in this direction, a deterministic non-redundant search algorithm that constrains the search space based on the topology of the graphical model has not been proposed before. This article presents a depth-limited exhaustive search algorithm, the Lazy Flipper, that does just that.

The Lazy Flipper starts from an arbitrary initial assignment of zeros and ones to the variables that can be chosen, for instance, to minimize the sum of only the first order potentials of the graphical model. Starting from this initial configuration, it searches for flips of variables that reduce the energy. As soon as such a flip is found, the current configuration is updated accordingly, i.e. in a greedy fashion. In the beginning, only single variables are flipped.
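The single-variable stage can be sketched in a few lines. The chain-structured toy model below, with unary and Potts pairwise potentials, is a hypothetical example (not one of the paper's models), and all names are illustrative; at search depth 1 this procedure coincides with Iterated Conditional Modes:

```python
# Depth-1 lazy flipping on a toy model: greedily flip single binary variables
# until no flip reduces the energy. The unary/pairwise factorization below is
# a hypothetical example; at search depth 1 this coincides with ICM.

def energy(x, unary, pairwise):
    """Sum of first order potentials and pairwise potentials over edges."""
    e = sum(unary[i][xi] for i, xi in enumerate(x))
    e += sum(pot[x[i]][x[j]] for (i, j), pot in pairwise.items())
    return e

def flip_single_variables(x, unary, pairwise):
    """Greedy single-variable flips; stops in a depth-1 local optimum."""
    x = list(x)
    best = energy(x, unary, pairwise)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            x[i] ^= 1                     # tentatively flip variable i
            e = energy(x, unary, pairwise)
            if e < best:
                best, improved = e, True  # keep the flip (greedy update)
            else:
                x[i] ^= 1                 # undo the flip
    return x, best

# initial configuration minimizing the sum of the first order potentials
unary = [[0.0, 1.0], [0.9, 0.1], [0.0, 1.0]]
potts = [[0.0, 0.5], [0.5, 0.0]]
pairwise = {(0, 1): potts, (1, 2): potts}
x0 = [0 if u[0] <= u[1] else 1 for u in unary]
x, e = flip_single_variables(x0, unary, pairwise)
```

On this toy instance, flipping the middle variable lowers the energy from 1.1 to 0.9, after which no single flip helps; escaping such depth-1 local optima is exactly what the larger connected subsets are for.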
Once a configuration is found whose energy can no longer be reduced by flipping single variables, all those subsets of two and successively more variables that are connected via potentials in the graphical model are considered. When a subset of more than one variable is flipped, all smaller subsets that are affected by the flip are revisited. This allows the Lazy Flipper to perform an exhaustive search over all subsets of variables whose flip potentially reduces the energy. Two special data structures described in Section 3 are used to represent each subset of connected variables precisely once and to exclude subsets from the search whose flip cannot reduce the energy due to the topology of the graphical model and the history of unsuccessful flips. These data structures, the Lazy Flipper algorithm and an experimental evaluation of state-of-the-art optimization algorithms on higher-order graphical models are the main contributions of this article.

Overall, the new algorithm has four favorable properties: (i) It is strictly convergent. While a global minimum is found when searching through all subgraphs (typically not tractable), approximate solutions with a guaranteed quality certificate (Section 4) are found if the search space is restricted to subgraphs of a given maximum size. The larger the subgraphs are allowed to be, the tighter the upper bound on the minimum energy becomes. This allows for a favorable tradeoff between runtime and approximation quality. (ii) Unlike in brute force search, the runtime of lazy flipping depends on the topology of the graphical model. It is exponential in the worst case but can be shorter compared to brute force search by an amount that is exponential in the number of variables. It is approximately linear in the size of the model for a fixed maximum search depth. (iii) The Lazy Flipper can be applied to graphical models of any order and topology, including but not limited to the more standard grid graphs.
Directed Bayesian Networks and undirected Markov Random Fields are processed in the exact same manner; they are converted to factor graph models [14] before lazy flipping. (iv) Only trivial operations are performed on the graphical model, namely graph traversal and evaluations of potential functions. These operations are cheap compared, for instance, to the summation and minimization of potential functions performed by message passing algorithms, and require only an implicit specification of potential functions in terms of program code that computes the function value for any given assignment of values to the variables.

Experiments on simulated and real-world problems, submodular and nonsubmodular functions, grids and irregular graphs (Section 5) assess the quality of Lazy Flipper approximations, their convergence as well as the dependence of the runtime of the algorithm on the size of the model and the search depth. The results are put into perspective by a comparison with Iterated Conditional Modes (ICM) [5], Belief Propagation (BP) [9,14], Tree-reweighted BP [15,4] and a Dual Decomposition ansatz using sub-gradient descent methods [16,17].

The Lazy Flipper Data Structures

Two special data structures are crucial to the Lazy Flipper. The first data structure that we call a connected subgraph tree (CS-tree) ensures that only connected subsets of variables are considered, i.e. sets of variables which are connected via potentials in the graphical model. Moreover, it ensures that every such subset is represented precisely once (and not repeatedly) by an ordered sequence of its variables, cf. [30]. The rationale behind this concept is the following: If the flip of one variable and the flip of another variable not connected to the first one do not reduce the energy then it is pointless to try a simultaneous flip of both variables because the (energy increasing) contributions from both flips would sum up.
Furthermore, if the flip of a disconnected set of variables reduces the energy then the same and possibly better reductions can be obtained by flipping connected subsets of this set consecutively, in any order. All disconnected subsets of variables can therefore be excluded from the search if the connected subsets are searched ordered by their size. Finding a unique representative for each connected subset of variables is important. The alternative would be to consider all sequences of pairwise distinct variables in which each variable is connected to at least one of its predecessors and to ignore the fact that many of these sequences represent the same set. Sampling algorithms that select and grow connected subsets in a randomized fashion do exactly this. However, the redundancy is large. As an example, consider a connected subset of six variables of a 2-dimensional grid graph as depicted in Fig. 1a. Although there is only one connected set that contains all six variables, 208 out of the 6! = 720 possible sequences of these variables meet the requirement that each variable is connected to at least one of its predecessors. This 208-fold redundancy hampers the exploration of the search space by means of randomized algorithms; it is avoided in lazy flipping at the cost of storing one unique representative for every connected subgraph in the CS-tree. The second data structure is a tag list that prevents the repeated assessment of unsuccessful flips. The idea is the following: If some variables have been flipped in one iteration (and the current best configuration has been updated accordingly), it suffices to revisit only those sets of variables that are connected to at least one variable that has been flipped. All other sets of variables are excluded from the search because the potentials that depend on these variables are unaffected by the flip and have been assessed in their current state before. 
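The tag list just described can be sketched as a Boolean vector plus an index list, so that clearing all tags costs time proportional to the number of tagged variables rather than the total number of variables. This is an illustrative reconstruction, not the authors' C++ implementation:

```python
# Tag list: a Boolean vector with one entry per variable, plus a list of the
# currently tagged indices so untag_all need not re-initialize the whole vector.
class TagList:
    def __init__(self, num_variables):
        self.tagged = [False] * num_variables  # one flag per variable
        self.indices = []                      # indices of tagged variables

    def tag(self, x):
        """Tag variable x (idempotent, keeps the index list duplicate-free)."""
        if not self.tagged[x]:
            self.tagged[x] = True
            self.indices.append(x)

    def untag_all(self):
        """Clear only the tagged entries, in time O(number of tags)."""
        for x in self.indices:
            self.tagged[x] = False
        self.indices.clear()
```

The index list is what makes untagging cheap when, say, 10 of 10^6 variables are tagged.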
The tag list and the connected subgraph tree are essential to the Lazy Flipper and are described in the following sections, 3.1 and 3.2. For a quick overview, the reader can however skip these sections, take for granted that it is possible to efficiently enumerate all connected subgraphs of a graphical model, ordered by their size, and refer directly to the main algorithm (Section 4 and Alg. 1). All non-trivial sub-functions used in the main algorithm are related to tag lists and the CS-tree and are described in detail now.

Connected Subgraph Tree (CS-tree)

The CS-tree represents subsets of connected variables uniquely. Every node in the CS-tree except the special root node is labeled with the integer index of one variable in the graphical model. The same variable index is assigned to several nodes in the CS-tree unless the graphical model is completely disconnected. The CS-tree is constructed such that every connected subset of variables in the graphical model corresponds to precisely one path in the CS-tree from a node to the root node, the node labels along the path indicating precisely the variables in the subset, and vice versa, there exists precisely one connected subset of variables in the graphical model for each path in the CS-tree from a node to the root node. In order to guarantee by construction of the CS-tree that each subset of connected variables is represented precisely once, the variable indices of each subset are put in a special order, namely the lexicographically smallest order in which each variable is connected to at least one of its predecessors. The following definition of these sequences of variable indices is recursive and therefore motivates an algorithm for the construction of the CS-tree for the Lazy Flipper. A small grid model and its complete CS-tree are depicted in Fig. 1.

Definition 1 (CSR-Sequence). Given an undirected graph $G = (V, E)$ whose $m \in \mathbb{N}$ vertices $V = \{1, \ldots, m\}$ are integer indices, every sequence that consists of only one index is called connected subset representing (CSR). Given $n \in \mathbb{N}$ and a CSR-sequence $(v_1, \ldots, v_n)$, a sequence $(v_1, \ldots, v_n, v_{n+1})$ of $n + 1$ indices is called a CSR-sequence precisely if the following conditions hold: (i) $v_{n+1}$ is not among its predecessors, i.e. $\forall j \in \{1, \ldots, n\}: v_j \neq v_{n+1}$. (ii) $v_{n+1}$ is connected to at least one of its predecessors, i.e. $\exists j \in \{1, \ldots, n\}: \{v_j, v_{n+1}\} \in E$. (iii) $v_{n+1} > v_1$. (iv) If $n \geq 2$ and $v_{n+1}$ could have been added at an earlier position $j \in \{2, \ldots, n\}$ to the sequence, fulfilling (i)-(iii), all subsequent vertices $v_j, \ldots, v_n$ are smaller than $v_{n+1}$, i.e.

$$\forall j \in \{2, \ldots, n\}: \quad \left( \{v_{j-1}, v_{n+1}\} \in E \Rightarrow (\forall k \in \{j, \ldots, n\}: v_k < v_{n+1}) \right). \quad (1)$$

Based on this definition, three functions are sufficient to recursively build the CS-tree T of a graphical model G, starting from the root node. The function q = growSubset(T, G, p) appends to a node p in the CS-tree the smallest variable index that is not yet among the children of p and fulfills (i)-(iv) for the CSR-sequence of variable indices on the path from p to the root node. It returns the appended node or the empty set if no suitable variable index exists. The function q = firstSubsetOfSize(T, G, n) traverses the CS-tree on the current deepest level n − 1, calling the function growSubset for each leaf until a node can be appended and thus, the first subset of size n has been found. Finally, the function q = nextSubsetOfSameSize(T, G, p) starts from a node p, finds its parent and traverses from there in level order, calling growSubset for each node to find the length-lexicographic successor of the CSR-sequence associated with the node p, i.e. the representative of the next subset of the same size. These functions are used by the Lazy Flipper (Alg. 1) to construct the CS-tree.
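Definition 1 translates directly into a small enumerator. The sketch below checks conditions (i)-(iv) and grows CSR-sequences recursively; the 2x2 grid graph is a hypothetical miniature, on which each of the 13 connected vertex subsets is represented by exactly one sequence:

```python
# Enumerate the connected subsets of a graph uniquely via CSR-sequences (Def. 1).

def is_csr_extension(seq, v, adj):
    """Conditions (i)-(iv) for appending vertex v to the CSR-sequence seq."""
    n = len(seq)
    if v in seq:                               # (i) not among its predecessors
        return False
    if not any(u in adj[v] for u in seq):      # (ii) connected to a predecessor
        return False
    if v <= seq[0]:                            # (iii) larger than the first vertex
        return False
    for p in range(n - 1):                     # (iv) lexicographic minimality, eq. (1)
        if seq[p] in adj[v] and any(seq[q] >= v for q in range(p + 1, n)):
            return False
    return True

def csr_sequences(adj):
    """Yield one CSR-sequence per connected subset of vertices."""
    def grow(seq):
        yield tuple(seq)
        for v in sorted(adj):
            if is_csr_extension(seq, v, adj):
                yield from grow(seq + [v])
    for v in sorted(adj):
        yield from grow([v])

# 2x2 grid graph with integer vertex indices 1..4, following the paper's convention
adj = {1: {2, 3}, 2: {1, 4}, 3: {1, 4}, 4: {2, 3}}
subsets = [frozenset(s) for s in csr_sequences(adj)]
```

For example, the subset {1, 2, 3} is represented only by (1, 2, 3); the sequence (1, 3, 2) violates condition (iv) because 2 could have been appended directly after 1.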
In contrast, the traversal of already constructed parts of the CS-tree (when revisiting subsets of variables after successful flips) is performed by functions associated with tag lists, which are defined in the following section.

Tag Lists

Tag lists are used to tag variables that are affected by flips. A variable is affected by a flip either because it has been flipped itself or because it is connected (via a potential) to a flipped variable. The tag list data structure comprises a Boolean vector in which each entry corresponds to a variable, indicating whether or not this variable is affected by recent flips. As the total number of variables can be large (10^6 is not exceptional) and possibly only a few variables are affected by flips, a list of all affected variables is maintained in addition to the vector. This list allows the algorithm to untag all tagged variables without re-initializing the entire Boolean vector. The two fundamental operations on a tag list L are tag(L, x), which tags the variable with the index x, and untagAll(L). For the Lazy Flipper, three special functions are used in addition: Given a tag list L, a (possibly incomplete) CS-tree T, the graphical model G, and a node s ∈ T, tagConnectedVariables(L, T, G, s) tags all variables on the path from s to the root node in T, as well as all nodes that are connected (via a potential in G) to at least one of these nodes. The function s = firstTaggedSubset(L, T) traverses the first level of T and returns the first node s whose variable is tagged (or the empty set if all variables are untagged). Finally, the function t = nextTaggedSubset(L, T, s) traverses T in level order, starting with the successor of s, and returns the first node t for which the path to the root contains at least one tagged variable. These functions, together with those of the CS-tree, are sufficient for the Lazy Flipper, Alg. 1.

The Lazy Flipper Algorithm

In the main loop of the Lazy Flipper (lines 2-26 in Alg.
1), the size n of subsets is incremented until the limit n_max is reached (line 24). Inside this main loop, the algorithm falls into two parts, the exploration part (lines 3-11) and the revisiting part (lines 12-23). In the exploration part, flips of previously unseen subsets of n variables are assessed. The current best configuration c is updated in a greedy fashion, i.e. whenever a flip yields a lower energy. At the same time, the CS-tree is grown, using the functions defined in Section 3.1. In the revisiting part, all subsets of sizes 1 through n that are affected by recent flips are assessed iteratively until no flip of any of these subsets reduces the energy (line 14). The indices of affected variables are stored in the tag lists L_1 and L_2 (cf. Section 3.2). In practice, the Lazy Flipper can be stopped at any point, e.g. when a time limit is exceeded, and the current best configuration c taken as the output. It eventually reaches configurations for which it is guaranteed that no flip of n or less variables can yield a lower energy because all such flips that could potentially lower the energy have been assessed (line 14). Such configurations are therefore guaranteed to be optimal within a Hamming radius of n.

Experiments

For a comparative assessment of the Lazy Flipper, four optimization problems of different complexity are considered, two simulated problems and two problems based on real-world data. For the sake of reproducibility, the simulations are described in detail and the models constructed from real data are available from the authors as supplementary material. The first problem is a ferromagnetic Ising model that is widely used in computer vision for foreground vs. background segmentation [6]. Energy functions of this model consist of first and second order potentials that are submodular. The global minimum can therefore be found via a graph cut.
We simulate random instances of this model in order to measure how the runtime of lazy flipping depends on the size of the model and the coupling strength, and to compare Lazy Flipper approximations to the global optimum (Section 5.1). The second problem is a problem of finding optimal subgraphs on a grid. Energy functions of this model consist of first and fourth order potentials, of which the latter are not permutation submodular. We simulate difficult instances of this problem that cannot be solved to optimality, even when allowing several days of runtime. In this challenging setting, Lazy Flipper approximations and their convergence are compared to those of BP, TRBP and DD as well as to the lower bounds on local polytope relaxations obtained by DD (Section 5.2). The third problem is a graphical model for removing excessive boundaries from image over-segmentations that is related to the model proposed in [31]. Energy functions of this model consist of first, third and fourth order potentials. In contrast to the grid graphs of the Ising model and the optimal subgraph model, the corresponding factor graphs are irregular but still planar. The higher-order potentials are not permutation submodular but the global optimum can be found by means of MILP in approximately 10 seconds per model using one of the fastest commercial solvers (IBM ILOG CPLEX, version 12.1). Since CPLEX is closed-source software, the algorithm is not known in detail and we use it as a black box. The general method used by CPLEX for MILP is a branch-and-bound algorithm [32,33]. 100 instances of this model obtained from the 100 natural test images of the Berkeley Segmentation Database (BSD) [34] are used to compare the Lazy Flipper to algorithms based on message passing and linear programming in a real-world setting where the global optimum is accessible (Section 5.3).
The fourth problem is identical to the third, except that instances are obtained from 3-dimensional volume images of neural tissue acquired by means of Serial Block Face Scanning Electron Microscopy (SBFSEM) [35]. Unlike in the 2-dimensional case, the factor graphs are no longer planar. Whether exact optimization by means of MILP is practical depends on the size of the model. In practice, SBFSEM datasets consist of more than $2000^3$ voxels. To be able to compare approximations to the global optimum, we consider 16 models obtained from 16 SBFSEM volume sub-images of only $150^3$ voxels for which the global optimum can be found by means of MILP within a few minutes (Section 5.4).

Ferromagnetic Ising model

The ferromagnetic Ising model consists of $m \in \mathbb{N}$ binary variables $x_1, \ldots, x_m \in \{0, 1\}$ that are associated with points on a 2-dimensional square grid and connected via second order potentials $E_{jk}(x_j, x_k) = 1 - \delta_{x_j, x_k}$ ($\delta$: Kronecker delta) to their nearest neighbors. First order potentials $E_j(x_j)$ relate the variables to observed evidence in underlying data. The total energy of this model is the following sum, in which $\alpha \in \mathbb{R}^+_0$ is a weight on the second order potentials and $j \sim k$ indicates that the variables $x_j$ and $x_k$ are adjacent on the grid:

$$\forall x \in \{0,1\}^m: \quad E(x) = \sum_{j=1}^{m} E_j(x_j) + \alpha \sum_{j=1}^{m} \sum_{\substack{k=j+1 \\ k \sim j}}^{m} E_{jk}(x_j, x_k). \quad (2)$$

For each $\alpha \in \{0.1, 0.3, 0.5, 0.7, 0.9\}$, an ensemble of ten simulated Ising models of $50 \cdot 50 = 2500$ variables is considered. The first order potentials $E_j$ are initialized randomly by drawing $E_j(0)$ uniformly from the interval $[0, 1]$ and setting $E_j(1) := 1 - E_j(0)$. The exact global minimum of the total energy is found via a graph cut. For each model, the Lazy Flipper is initialized with a configuration that minimizes the sum of the first order potentials. Upper bounds on the minimum energy found by means of lazy flipping converge towards the global optimum as depicted in Fig. 2.
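Equation (2) and the random initialization of the first order potentials can be sketched as follows; the 3-by-3 grid is a hypothetical miniature of the 50-by-50 models, and all names are illustrative:

```python
# Ising energy on an n-by-n grid, eq. (2): unary terms plus a Potts penalty
# alpha * (1 - delta(x_j, x_k)) for each pair of grid neighbors.
import random

def ising_energy(x, unary, n, alpha):
    """x and unary are indexed row-major over the n-by-n grid."""
    e = sum(unary[j][x[j]] for j in range(n * n))
    for r in range(n):
        for c in range(n):
            j = r * n + c
            if c + 1 < n:                      # right neighbor
                e += alpha * (x[j] != x[j + 1])
            if r + 1 < n:                      # bottom neighbor
                e += alpha * (x[j] != x[j + n])
    return e

random.seed(0)
n, alpha = 3, 0.5
# draw E_j(0) uniformly from [0, 1] and set E_j(1) := 1 - E_j(0), as in the simulation
unary = []
for _ in range(n * n):
    u0 = random.random()
    unary.append((u0, 1.0 - u0))
```

Counting each grid edge once from its left or top endpoint matches the sum over $j < k$ with $k \sim j$ in (2).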
Color scales and gray scales in this figure respectively indicate the maximum size and the total number of distinct subsets that have been searched, averaged over all models in the ensemble. It can be seen from this figure that upper bounds on the minimum energy are tightened significantly by searching larger subsets of variables, independent of the coupling strength $\alpha$. It takes the Lazy Flipper less than 100 seconds (on a single CPU of an Intel Quad Xeon E7220 at 2.93 GHz) to exhaustively search all connected subsets of 6 variables. The amount of RAM required for the CS-tree (in bytes) is 24 times as high as the number of subsets (approximately 50 MB in this case) because each subset is stored in the CS-tree as a node consisting of three 64-bit integers: a variable index, the index of the parent node and the index of the level order successor (Section 3.1). For $n_{\max} \in \{1, 6\}$, configurations corresponding to the upper bounds on the minimum energy are depicted in Fig. 3. It can be seen from this figure that all connected subsets of falsely set variables are larger than $n_{\max}$. For a fixed maximum subgraph size $n_{\max}$, the runtime of lazy flipping scales approximately linearly with the number of variables in the Ising model (cf. Fig. 4).

Optimal Subgraph Model

The optimal subgraph model consists of $m \in \mathbb{N}$ binary variables $x_1, \ldots, x_m \in \{0, 1\}$ that are associated with the edges of a 2-dimensional grid graph. A subgraph is defined by those edges whose associated variables attain the value 1.
Energy functions of this model consist of first order potentials, one for each edge, and fourth order potentials, one for each node $v \in V$ in which four edges $(j, k, l, m) = N(v)$ meet:

$$\forall x \in \{0,1\}^m: \quad E(x) = \sum_{j=1}^{m} E_j(x_j) + \sum_{(j,k,l,m) \in N(V)} E_{jklm}(x_j, x_k, x_l, x_m). \quad (3)$$

All fourth order potentials are equal, penalizing dead ends and branches of paths in the selected subgraph:

$$E_{jklm}(x_j, x_k, x_l, x_m) = \begin{cases} 0.0 & \text{if } s = 0 \\ 100.0 & \text{if } s = 1 \\ 0.6 & \text{if } s = 2 \\ 1.2 & \text{if } s = 3 \\ 2.4 & \text{if } s = 4 \end{cases} \quad \text{with } s = x_j + x_k + x_l + x_m. \quad (4)$$

An ensemble of 16 such models is constructed by drawing the unary potentials at random, exactly as for the Ising models. Each model has 19800 variables, the same number of first order potentials, and 9801 fourth order potentials. Approximate optimal subgraphs are found by Min-Sum Belief Propagation (BP) with parallel message passing [9,14] and message damping [36], by Tree-reweighted Belief Propagation (TRBP) [4], by Dual Decomposition (DD) [16,17] and by lazy flipping (LF). DD also affords lower bounds on the minimum energy. Details on the parameters of the algorithms and the decomposition of the models are given in Appendix A. Bounds on the minimum energy converge with increasing runtime, as depicted in Fig. 5. It can be seen from this figure that Lazy Flipper approximations converge fast, reaching a smaller energy after 3 seconds than the other approximations after 10000 seconds. Subgraphs of up to 7 variables are searched, using approximately 2.2 GB of RAM for the CS-tree. A gap remains between the energies of all approximations and the lower bound on the minimum energy obtained by DD. Thus, there is no guarantee that any of the problems has been solved to optimality. However, the gaps are upper bounds on the deviation from the global optimum. They are compared at t = 10000 s in Fig. 5.
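The fourth order potential (4) depends on its arguments only through their sum s, so it reduces to a table lookup; the sketch below, with illustrative names, also evaluates the full energy (3) for given node neighborhoods:

```python
# Fourth order potential of the optimal subgraph model, eq. (4): the cost
# depends only on s, the number of selected edges meeting at a node, and
# penalizes dead ends (s = 1) most heavily.
COST = {0: 0.0, 1: 100.0, 2: 0.6, 3: 1.2, 4: 2.4}

def e_jklm(xj, xk, xl, xm):
    return COST[xj + xk + xl + xm]

def subgraph_energy(x, unary, quads):
    """Eq. (3): one unary term per edge variable, one 4th order term per node."""
    e = sum(unary[j][x[j]] for j in range(len(x)))
    e += sum(e_jklm(x[j], x[k], x[l], x[m]) for (j, k, l, m) in quads)
    return e
```

A node where a path passes straight through (s = 2) costs only 0.6, while a dead end (s = 1) costs 100.0, which is what drives the selected edges to form closed or continuing paths.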
For any model in the ensemble, the energy of the Lazy Flipper approximation is less than 4% away from the global optimum, a substantial improvement over the other algorithms for this particular model.

Pruning of 2D Over-Segmentations

The graphical model for removing excessive boundaries from image over-segmentations contains one binary variable for each boundary between segments, indicating whether this boundary is to be removed (0) or preserved (1). First order potentials relate these variables to the image content, and non-submodular third and fourth order potentials connect adjacent boundaries, supporting the closedness and smooth continuation of preserved boundaries. The energy function is a sum of these potentials:

$$\forall x \in \{0,1\}^m: \quad E(x) = \sum_{j=1}^{m} E_j(x_j) + \sum_{(j,k,l) \in J} E_{jkl}(x_j, x_k, x_l) + \sum_{(j,k,l,p) \in K} E_{jklp}(x_j, x_k, x_l, x_p). \quad (5)$$

We consider an ensemble of 100 such models obtained from the 100 BSD test images [34]. On average, a model has 8845 ± 670 binary variables, the same number of unary potentials, 5715 ± 430 third order potentials and 98 ± 18 fourth order potentials. Each variable is connected via potentials to at most six other variables, a sparse structure that is favorable for the Lazy Flipper. BP, TRBP, DD and the Lazy Flipper solve these problems approximately, thus providing upper bounds on the minimum energy. The differences between these bounds and the global optimum found by means of MILP are depicted in Fig. 6. It can be seen from this figure that, after 200 seconds, Lazy Flipper approximations provide a tighter upper bound on the global minimum in the median than those of the other three algorithms. BP and DD have a better peak performance, solving one problem to optimality. The Lazy Flipper reaches a search depth of 9 after around 1000 seconds for these sparse graphical models, using roughly 720 MB of RAM for the CS-tree. At t = 5000 s and on average over all models, its approximations deviate by only 2.6% from the global optimum.
Pruning of 3D Over-Segmentations

The model described in the previous section is now applied in 3D to remove excessive boundaries from the over-segmentation of a volume image. In an ensemble of 16 such models obtained from 16 SBFSEM volume images, models have on average 16748 ± 1521 binary variables (and first order potentials), 26379 ± 2502 potentials of order 3, and 5081 ± 482 potentials of order 4. For BP, TRBP, DD and Lazy Flipper approximations, deviations from the global optimum are shown in Fig. 7. It can be seen from this figure that BP performs exceptionally well on these problems, providing approximations whose energies deviate by only 0.4% on average from the global optimum. One reason is that most variables influence many (up to 60) potential functions, and BP can propagate local evidence from all these potentials. Variables are connected via these potentials to as many as 100 neighboring variables, which hampers the exploration of the search space by the Lazy Flipper: it reaches only a search depth of 5 after 10000 seconds, using 4.8 GB of RAM for the CS-tree, and yields worse approximations than BP, TRBP and DD for these models. In practical applications where volume images and the according models are several hundred times larger and can no longer be optimized exactly, it matters whether one can further improve upon the BP approximations. Dashed lines in the first plot in Fig. 7 show the result obtained when initializing the Lazy Flipper with the BP approximation at t = 100 s. This reduces the deviation from the global optimum at t = 50000 s from 0.4% on average over all models to 0.1%.

Conclusion

The optimum of a function of binary variables that decomposes according to a graphical model can be found by an exhaustive search over only the connected subgraphs of the model. We implemented this search, using a CS-tree to efficiently and uniquely enumerate the subgraphs. The C++ source code is available from http://hci.iwr.uni-heidelberg.de/software.php.
Our algorithm is guaranteed to converge to a global minimum when searching through all subgraphs, which is typically intractable. With limited runtime, approximations can be found by restricting the search to subgraphs of a given maximum size. Simulated and real-world problems exist for which these approximations compare favorably to those obtained by message passing and sub-gradient descent. For large scale problems, the applicability of the Lazy Flipper is limited by the memory required for the CS-tree. However, for regular graphs, this limit can be overcome by an implicit representation of the CS-tree that is the subject of future research.

A Parameters and Model Decomposition

In all experiments, the damping parameters for BP and TRBP are chosen optimally from the set {0, 0.1, 0.2, ..., 0.9}. The step size of the sub-gradient descent is chosen according to

$$\tau_t = \alpha \, \frac{1}{1 + \beta t}, \quad (6)$$

where $\beta = 0.01$ and $\alpha$ is chosen optimally from {0.01, 0.025, 0.05, 0.1, 0.25, 0.5}. The sequence of step sizes, in particular the function (6) and $\beta$, could be tuned further. Moreover, [16] consider the primal-dual gap and [17] smooth the subgradient over iterations in order to suppress oscillations. These measures can have substantial impact on the convergence. The upper bounds obtained by BP, TRBP and DD do not decrease monotonically. After each iteration of these algorithms, we therefore consider the elapsed runtime and the current best bound, i.e. the best bound of the current and all preceding iterations. All five algorithms are implemented in C++, using the same optimized data structures for the graphical model and a visitor design pattern that allows us to measure runtime without significantly affecting performance. The same decomposition of each graphical model into tree models is used for TRBP and DD. Tree models are constructed in a greedy fashion, each comprising as many potential functions as possible.
The procedure is generally applicable to irregular models with higher-order potentials: Initially, all potentials of the graphical model are put on a white list that contains those potentials that have not been added to any tree model. A black list of already added potentials and a gray list of recently added potentials are initially empty. As long as there are potentials on the white list, new tree models are constructed. For each newly constructed tree model, the procedure iterates over the white list, adding potentials to the tree model if they do not introduce loops. Added potentials are moved from the white list to the gray list. After all potentials from the white list have been processed, potentials from the black list that do not introduce loops are added to the tree model. The gray list is then appended to the black list and cleared. The procedure finishes when the white list is empty. As recently shown in [17], decompositions into cyclic subproblems can lead to significantly tighter relaxations and better integer solutions.
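The white-/gray-/black-list procedure can be sketched as follows. This is a Python toy version under assumptions the text leaves implicit: potentials are given as tuples of variable indices, and a potential "introduces a loop" if any two of its variables are already connected in the tree under construction (tracked here with a hypothetical union-find helper). All names are ours, not taken from the paper's C++ code:

```python
class UnionFind:
    """Minimal union-find to detect when a potential would close a loop."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]
            i = self.parent[i]
        return i

    def roots(self, variables):
        return {self.find(v) for v in variables}

    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)


def try_add(tree, uf, pot):
    """Add potential pot (a tuple of variable indices) unless it introduces
    a loop, i.e. unless two of its variables are already connected."""
    if len(uf.roots(pot)) < len(pot):
        return False
    for v in pot[1:]:
        uf.union(pot[0], v)
    tree.append(pot)
    return True


def decompose(num_vars, potentials):
    """Greedy decomposition into tree models via white/gray/black lists."""
    white = list(potentials)  # not yet in any tree model
    black = []                # already in some tree model
    trees = []
    while white:
        tree, uf, gray = [], UnionFind(num_vars), []
        for pot in list(white):
            if try_add(tree, uf, pot):
                white.remove(pot)
                gray.append(pot)
        for pot in black:     # pad the tree with already-used potentials
            try_add(tree, uf, pot)
        black.extend(gray)    # gray list appended to black list and cleared
        trees.append(tree)
    return trees
```

On a triangle of three pairwise potentials, the first tree absorbs two of them and the remaining one spawns a second tree, mirroring the greedy behavior described above.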
5,125
1009.4102
1633665642
This article presents a new search algorithm for the NP-hard problem of optimizing functions of binary variables that decompose according to a graphical model. It can be applied to models of any order and structure. The main novelty is a technique to constrain the search space based on the topology of the model. When pursued to the full search depth, the algorithm is guaranteed to converge to a global optimum, passing through a series of monotonically improving local optima that are guaranteed to be optimal within a given and increasing Hamming distance. For a search depth of 1, it specializes to Iterated Conditional Modes. Between these extremes, a useful tradeoff between approximation quality and runtime is established. Experiments on models derived from both illustrative and real problems show that approximations found with limited search depth match or improve those obtained by state-of-the-art methods based on message passing and linear programming.
Fourth, the Lazy Flipper guarantees that the best approximation found with a search depth @math is optimal within a Hamming distance @math . A similar guarantee, known as the Single Loop Tree (SLT) neighborhood @cite_17 , is given by BP in case of convergence. The SLT condition states that in any alteration of an assignment of values to the variables that leads to a lower energy, the altered variables form a subgraph in the graphical model that has at least two loops. The fact that Hamming optimality and SLT optimality differ can be exploited in practice. We show in one experiment that BP approximations can be further improved by means of lazy flipping.
{ "abstract": [ "Graphical models, such as Bayesian networks and Markov random fields (MRFs), represent statistical dependencies of variables by a graph. The max-product \"belief propagation\" algorithm is a local-message-passing algorithm on this graph that is known to converge to a unique fixed point when the graph is a tree. Furthermore, when the graph is a tree, the assignment based on the fixed point yields the most probable values of the unobserved variables given the observed ones. Good empirical performance has been obtained by running the max-product algorithm (or the equivalent min-sum algorithm) on graphs with loops, for applications including the decoding of \"turbo\" codes. Except for two simple graphs (cycle codes and single-loop graphs) there has been little theoretical understanding of the max-product algorithm on graphs with loops. Here we prove a result on the fixed points of max-product on a graph with arbitrary topology and with arbitrary probability distributions (discrete- or continuous-valued nodes). We show that the assignment based on a fixed point is a \"neighborhood maximum\" of the posterior probability: the posterior probability of the max-product assignment is guaranteed to be greater than all other assignments in a particular large region around that assignment. The region includes all assignments that differ from the max-product assignment in any subset of nodes that form no more than a single loop in the graph. In some graphs, this neighborhood is exponentially large. We illustrate the analysis with examples." ], "cite_N": [ "@cite_17" ], "mid": [ "2098387242" ] }
The Lazy Flipper: MAP Inference in Higher-Order Graphical Models by Depth-limited Exhaustive Search
Energy functions that depend on thousands of binary variables and decompose according to a graphical model [1,2,3,4] into potential functions that depend on subsets of all variables have been used successfully for pattern analysis, e.g. in the seminal works [5,6,7,8]. An important problem is the minimization of the sum of potentials, i.e. the search for an assignment of zeros and ones to the variables that minimizes the energy. This problem can be solved efficiently by dynamic programming if the graph is acyclic [9] or its treewidth is small enough [3], and by finding a minimum s-t-cut [6] if the energy function is (permutation) submodular [10,11]. In general, the problem is NP-hard [10]. For moderate problem sizes, exact optimization is sometimes tractable by means of Mixed Integer Linear Programming (MILP) [12,13]. Contrary to popular belief, some practical computer vision problems can indeed be solved to optimality by modern MILP solvers (cf. Section 5). However, all such solvers are eventually overburdened when the problem size becomes too large. In cases where exact optimization is intractable, one has to settle for approximations. While substantial progress has been made in this direction, a deterministic non-redundant search algorithm that constrains the search space based on the topology of the graphical model has not been proposed before. This article presents a depth-limited exhaustive search algorithm, the Lazy Flipper, that does just that. The Lazy Flipper starts from an arbitrary initial assignment of zeros and ones to the variables that can be chosen, for instance, to minimize the sum of only the first order potentials of the graphical model. Starting from this initial configuration, it searches for flips of variables that reduce the energy. As soon as such a flip is found, the current configuration is updated accordingly, i.e. in a greedy fashion. In the beginning, only single variables are flipped. 
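The initial, single-variable phase described here is essentially Iterated Conditional Modes. A minimal sketch, assuming the energy is available as a callable on full configurations (the real implementation evaluates only the affected potentials):

```python
def flip_singles(energy, x):
    """Greedy single-variable flipping (search depth 1, i.e. ICM):
    repeat sweeps until no flip of one variable reduces the energy."""
    x = list(x)
    best = energy(x)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            x[i] ^= 1
            e = energy(x)
            if e < best:
                best, improved = e, True
            else:
                x[i] ^= 1  # revert unsuccessful flip
    return x, best
```

The result is a Hamming-1 local optimum: no single flip can lower the energy any further.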
Once a configuration is found whose energy can no longer be reduced by flipping of single variables, all those subsets of two and successively more variables that are connected via potentials in the graphical model are considered. When a subset of more than one variable is flipped, all smaller subsets that are affected by the flip are revisited. This allows the Lazy Flipper to perform an exhaustive search over all subsets of variables whose flip potentially reduces the energy. Two special data structures described in Section 3 are used to represent each subset of connected variables precisely once and to exclude subsets from the search whose flip cannot reduce the energy due to the topology of the graphical model and the history of unsuccessful flips. These data structures, the Lazy Flipper algorithm and an experimental evaluation of state-of-the-art optimization algorithms on higher-order graphical models are the main contributions of this article. Overall, the new algorithm has four favorable properties: (i) It is strictly convergent. While a global minimum is found when searching through all subgraphs (typically not tractable), approximate solutions with a guaranteed quality certificate (Section 4) are found if the search space is restricted to subgraphs of a given maximum size. The larger the subgraphs are allowed to be, the tighter the upper bound on the minimum energy becomes. This allows for a favorable tradeoff between runtime and approximation quality. (ii) Unlike in brute force search, the runtime of lazy flipping depends on the topology of the graphical model. It is exponential in the worst case but can be shorter compared to brute force search by an amount that is exponential in the number of variables. It is approximately linear in the size of the model for a fixed maximum search depth. (iii) The Lazy Flipper can be applied to graphical models of any order and topology, including but not limited to the more standard grid graphs. 
Directed Bayesian Networks and undirected Markov Random Fields are processed in the exact same manner; they are converted to factor graph models [14] before lazy flipping. (iv) Only trivial operations are performed on the graphical model, namely graph traversal and evaluations of potential functions. These operations are cheap compared, for instance, to the summation and minimization of potential functions performed by message passing algorithms, and require only an implicit specification of potential functions in terms of program code that computes the function value for any given assignment of values to the variables. Experiments on simulated and real-world problems, submodular and nonsubmodular functions, grids and irregular graphs (Section 5) assess the quality of Lazy Flipper approximations, their convergence as well as the dependence of the runtime of the algorithm on the size of the model and the search depth. The results are put into perspective by a comparison with Iterated Conditional Modes (ICM) [5], Belief Propagation (BP) [9,14], Tree-reweighted BP [15,4] and a Dual Decomposition ansatz using sub-gradient descent methods [16,17]. The Lazy Flipper Data Structures Two special data structures are crucial to the Lazy Flipper. The first data structure that we call a connected subgraph tree (CS-tree) ensures that only connected subsets of variables are considered, i.e. sets of variables which are connected via potentials in the graphical model. Moreover, it ensures that every such subset is represented precisely once (and not repeatedly) by an ordered sequence of its variables, cf. [30]. The rationale behind this concept is the following: If the flip of one variable and the flip of another variable not connected to the first one do not reduce the energy then it is pointless to try a simultaneous flip of both variables because the (energy increasing) contributions from both flips would sum up. 
Furthermore, if the flip of a disconnected set of variables reduces the energy then the same and possibly better reductions can be obtained by flipping connected subsets of this set consecutively, in any order. All disconnected subsets of variables can therefore be excluded from the search if the connected subsets are searched ordered by their size. Finding a unique representative for each connected subset of variables is important. The alternative would be to consider all sequences of pairwise distinct variables in which each variable is connected to at least one of its predecessors and to ignore the fact that many of these sequences represent the same set. Sampling algorithms that select and grow connected subsets in a randomized fashion do exactly this. However, the redundancy is large. As an example, consider a connected subset of six variables of a 2-dimensional grid graph as depicted in Fig. 1a. Although there is only one connected set that contains all six variables, 208 out of the 6! = 720 possible sequences of these variables meet the requirement that each variable is connected to at least one of its predecessors. This 208-fold redundancy hampers the exploration of the search space by means of randomized algorithms; it is avoided in lazy flipping at the cost of storing one unique representative for every connected subgraph in the CS-tree. The second data structure is a tag list that prevents the repeated assessment of unsuccessful flips. The idea is the following: If some variables have been flipped in one iteration (and the current best configuration has been updated accordingly), it suffices to revisit only those sets of variables that are connected to at least one variable that has been flipped. All other sets of variables are excluded from the search because the potentials that depend on these variables are unaffected by the flip and have been assessed in their current state before. 
The tag list and the connected subgraph tree are essential to the Lazy Flipper and are described in the following sections, 3.1 and 3.2. For a quick overview, the reader can however skip these sections, take for granted that it is possible to efficiently enumerate all connected subgraphs of a graphical model, ordered by their size, and refer directly to the main algorithm (Section 4 and Alg. 1). All non-trivial sub-functions used in the main algorithm are related to tag lists and the CS-tree and are described in detail now. Connected Subgraph Tree (CS-tree) The CS-tree represents subsets of connected variables uniquely. Every node in the CS-tree except the special root node is labeled with the integer index of one variable in the graphical model. The same variable index is assigned to several nodes in the CS-tree unless the graphical model is completely disconnected. The CS-tree is constructed such that every connected subset of variables in the graphical model corresponds to precisely one path in the CS-tree from a node to the root node, the node labels along the path indicating precisely the variables in the subset, and vice versa, there exists precisely one connected subset of variables in the graphical model for each path in the CS-tree from a node to the root node. In order to guarantee by construction of the CS-tree that each subset of connected variables is represented precisely once, the variable indices of each subset are put in a special order, namely the lexicographically smallest order in which each variable is connected to at least one of its predecessors. The following definition of these sequences of variable indices is recursive and therefore motivates an algorithm for the construction of the CS-tree for the Lazy Flipper. A small grid model and its complete CS-tree are depicted in Fig. 1. Definition 1 (CSR-Sequence). Given an undirected graph G = (V, E) whose m ∈ N vertices V = {1, . . . 
, m} are integer indices, every sequence that consists of only one index is called connected subset representing (CSR). Given n ∈ N and a CSR-sequence (v_1, . . . , v_n), a sequence (v_1, . . . , v_n, v_{n+1}) of n + 1 indices is called a CSR-sequence precisely if the following conditions hold: (i) v_{n+1} is not among its predecessors, i.e. ∀j ∈ {1, . . . , n} : v_j ≠ v_{n+1}. (ii) v_{n+1} is connected to at least one of its predecessors, i.e. ∃j ∈ {1, . . . , n} : {v_j, v_{n+1}} ∈ E. (iii) v_{n+1} > v_1. (iv) If n ≥ 2 and v_{n+1} could have been added at an earlier position j ∈ {2, . . . , n} to the sequence, fulfilling (i)-(iii), all subsequent vertices v_j, . . . , v_n are smaller than v_{n+1}, i.e. $\forall j \in \{2, \ldots, n\}: \left( \{v_{j-1}, v_{n+1}\} \in E \Rightarrow (\forall k \in \{j, \ldots, n\}: v_k < v_{n+1}) \right)$ (1). Based on this definition, three functions are sufficient to recursively build the CS-tree T of a graphical model G, starting from the root node. The function q = growSubset(T, G, p) appends to a node p in the CS-tree the smallest variable index that is not yet among the children of p and fulfills (i)-(iv) for the CSR-sequence of variable indices on the path from p to the root node. It returns the appended node or the empty set if no suitable variable index exists. The function q = firstSubsetOfSize(T, G, n) traverses the CS-tree on the current deepest level n − 1, calling the function growSubset for each leaf until a node can be appended and thus the first subset of size n has been found. Finally, the function q = nextSubsetOfSameSize(T, G, p) starts from a node p, finds its parent and traverses from there in level order, calling growSubset for each node to find the length-lexicographic successor of the CSR-sequence associated with the node p, i.e. the representative of the next subset of the same size. These functions are used by the Lazy Flipper (Alg. 1) to construct the CS-tree. 
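Definition 1 can be turned directly into a checker, and on a small graph one can verify the key property that motivates it: every connected subset of vertices has exactly one CSR-sequence, and disconnected subsets have none. The following Python sketch (all names are ours) tests this exhaustively on a 4-cycle:

```python
from itertools import combinations, permutations

def is_csr(seq, edges):
    """Conditions (i)-(iv) of Definition 1, checked for every prefix extension."""
    E = {frozenset(e) for e in edges}
    for n in range(1, len(seq)):
        pre, v = seq[:n], seq[n]
        if v in pre:                                      # (i) no repetition
            return False
        if not any(frozenset((p, v)) in E for p in pre):  # (ii) connected to a predecessor
            return False
        if v <= pre[0]:                                   # (iii) larger than the first vertex
            return False
        for j in range(n - 1):                            # (iv) earliest-insertion rule
            if frozenset((pre[j], v)) in E and any(pre[k] >= v for k in range(j + 1, n)):
                return False
    return True

def is_connected(sub, edges):
    """Brute-force connectivity check of a vertex subset."""
    sub = set(sub)
    adj = {v: set() for v in sub}
    for a, b in edges:
        if a in sub and b in sub:
            adj[a].add(b)
            adj[b].add(a)
    seen, stack = set(), [min(sub)]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return seen == sub

# Exhaustive check on a 4-cycle: exactly one CSR-sequence per connected subset.
edges = [(1, 2), (2, 4), (3, 4), (1, 3)]
n_connected = 0
for k in range(1, 5):
    for sub in combinations((1, 2, 3, 4), k):
        n_csr = sum(1 for p in permutations(sub) if is_csr(p, edges))
        assert n_csr == (1 if is_connected(sub, edges) else 0)
        n_connected += is_connected(sub, edges)
assert n_connected == 13  # 4 singletons + 4 edges + 4 triples + the full cycle
```

This mirrors the uniqueness argument above: each connected subset corresponds to precisely one path from the root in the CS-tree.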
In contrast, the traversal of already constructed parts of the CS-tree (when revisiting subsets of variables after successful flips) is performed by functions associated with tag lists, which are defined in the following section. Tag Lists Tag lists are used to tag variables that are affected by flips. A variable is affected by a flip either because it has been flipped itself or because it is connected (via a potential) to a flipped variable. The tag list data structure comprises a Boolean vector in which each entry corresponds to a variable, indicating whether or not this variable is affected by recent flips. As the total number of variables can be large (10^6 is not exceptional) and possibly only a few variables are affected by flips, a list of all affected variables is maintained in addition to the vector. This list allows the algorithm to untag all tagged variables without re-initializing the entire Boolean vector. The two fundamental operations on a tag list L are tag(L, x) which tags the variable with the index x, and untagAll(L). For the Lazy Flipper, three special functions are used in addition: Given a tag list L, a (possibly incomplete) CS-tree T , the graphical model G, and a node s ∈ T , tagConnectedVariables(L, T, G, s) tags all variables on the path from s to the root node in T , as well as all nodes that are connected (via a potential in G) to at least one of these nodes. The function s = firstTaggedSubset(L, T ) traverses the first level of T and returns the first node s whose variable is tagged (or the empty set if all variables are untagged). Finally, the function t = nextTaggedSubset(L, T, s) traverses T in level order, starting with the successor of s, and returns the first node t for which the path to the root contains at least one tagged variable. These functions, together with those of the CS-tree, are sufficient for the Lazy Flipper, Alg. 1. The Lazy Flipper Algorithm In the main loop of the Lazy Flipper (lines 2-26 in Alg. 
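The core of the tag list is a Boolean vector paired with a list of the tagged indices, so that clearing all tags touches only the tagged entries instead of all variables. A minimal Python sketch (method names mirror tag and untagAll from the text; the attribute names are ours):

```python
class TagList:
    """Boolean vector plus a list of tagged indices so that untag_all
    touches only the affected variables, not the entire vector."""
    def __init__(self, num_variables):
        self.is_tagged = [False] * num_variables
        self.indices = []

    def tag(self, x):
        if not self.is_tagged[x]:
            self.is_tagged[x] = True
            self.indices.append(x)

    def untag_all(self):
        for x in self.indices:
            self.is_tagged[x] = False
        self.indices.clear()
```

With 10^6 variables and only a handful of tags, untag_all runs in time proportional to the number of tags rather than the number of variables.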
1), the size n of subsets is incremented until the limit n_max is reached (line 24). Inside this main loop, the algorithm falls into two parts, the exploration part (lines 3-11) and the revisiting part (lines 12-23). In the exploration part, flips of previously unseen subsets of n variables are assessed. The current best configuration c is updated in a greedy fashion, i.e. whenever a flip yields a lower energy. At the same time, the CS-tree is grown, using the functions defined in Section 3.1. In the revisiting part, all subsets of sizes 1 through n that are affected by recent flips are assessed iteratively until no flip of any of these subsets reduces the energy (line 14). The indices of affected variables are stored in the tag lists L_1 and L_2 (cf. Section 3.2). In practice, the Lazy Flipper can be stopped at any point, e.g. when a time limit is exceeded, and the current best configuration c taken as the output. It eventually reaches configurations for which it is guaranteed that no flip of n or fewer variables can yield a lower energy because all such flips that could potentially lower the energy have been assessed (line 14). Such configurations are therefore guaranteed to be optimal within a Hamming radius of n. Experiments For a comparative assessment of the Lazy Flipper, four optimization problems of different complexity are considered, two simulated problems and two problems based on real-world data. For the sake of reproducibility, the simulations are described in detail and the models constructed from real data are available from the authors as supplementary material. The first problem is a ferromagnetic Ising model that is widely used in computer vision for foreground vs. background segmentation [6]. Energy functions of this model consist of first and second order potentials that are submodular. The global minimum can therefore be found via a graph cut. 
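The Hamming-radius guarantee can be illustrated with a stripped-down flipper that enumerates connected subsets by brute force instead of a CS-tree and without tag lists, so it is far less efficient than the real algorithm but follows the same greedy acceptance rule. After convergence at maximum subset size n, no flip of n or fewer variables, connected or not, lowers the energy; the test below checks this exhaustively on a small chain. All names are ours:

```python
from itertools import combinations

def connected_subsets(n_vars, edges, max_size):
    """Enumerate all connected variable subsets up to max_size, smallest first."""
    adj = {i: set() for i in range(n_vars)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)

    def is_connected(sub):
        sub = set(sub)
        seen, stack = set(), [min(sub)]
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend((adj[v] & sub) - seen)
        return seen == sub

    for k in range(1, max_size + 1):
        for sub in combinations(range(n_vars), k):
            if is_connected(sub):
                yield sub

def flip_search(energy, n_vars, edges, max_size):
    """Greedily flip connected subsets until none of them lowers the energy."""
    x = [0] * n_vars
    best = energy(x)
    improved = True
    while improved:
        improved = False
        for sub in connected_subsets(n_vars, edges, max_size):
            for i in sub:
                x[i] ^= 1
            e = energy(x)
            if e < best:
                best, improved = e, True
            else:
                for i in sub:
                    x[i] ^= 1  # revert
    return x, best
```

Disconnected flips are covered by the argument from Section 3: their energy change is the sum of the changes of their connected components, each of which is nonnegative at convergence.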
We simulate random instances of this model in order to measure how the runtime of lazy flipping depends on the size of the model and the coupling strength, and to compare Lazy Flipper approximations to the global optimum (Section 5.1). The second problem is a problem of finding optimal subgraphs on a grid. Energy functions of this model consist of first and fourth order potentials, of which the latter are not permutation submodular. We simulate difficult instances of this problem that cannot be solved to optimality, even when allowing several days of runtime. In this challenging setting, Lazy Flipper approximations and their convergence are compared to those of BP, TRBP and DD as well as to the lower bounds on local polytope relaxations obtained by DD (Section 5.2). The third problem is a graphical model for removing excessive boundaries from image over-segmentations that is related to the model proposed in [31]. Energy functions of this model consist of first, third and fourth order potentials. In contrast to the grid graphs of the Ising model and the optimal subgraph model, the corresponding factor graphs are irregular but still planar. The higher-order potentials are not permutation submodular but the global optimum can be found by means of MILP in approximately 10 seconds per model using one of the fastest commercial solvers (IBM ILOG CPLEX, version 12.1). Since CPLEX is closed-source software, the algorithm is not known in detail and we use it as a black box. The general method used by CPLEX for MILP is a branch-and-bound algorithm [32,33]. 100 instances of this model obtained from the 100 natural test images of the Berkeley Segmentation Database (BSD) [34] are used to compare the Lazy Flipper to algorithms based on message passing and linear programming in a real-world setting where the global optimum is accessible (Section 5.3). 
The fourth problem is identical to the third, except that instances are obtained from 3-dimensional volume images of neural tissue acquired by means of Serial Block Face Scanning Electron Microscopy (SBFSEM) [35]. Unlike in the 2-dimensional case, the factor graphs are no longer planar. Whether exact optimization by means of MILP is practical depends on the size of the model. In practice, SBFSEM datasets consist of more than 2000^3 voxels. To be able to compare approximations to the global optimum, we consider 16 models obtained from 16 SBFSEM volume sub-images of only 150^3 voxels for which the global optimum can be found by means of MILP within a few minutes (Section 5.4). Ferromagnetic Ising model The ferromagnetic Ising model consists of m ∈ N binary variables x_1, . . . , x_m ∈ {0, 1} that are associated with points on a 2-dimensional square grid and connected via second order potentials E_jk(x_j, x_k) = 1 − δ_{x_j, x_k} (δ: Kronecker delta) to their nearest neighbors. First order potentials E_j(x_j) relate the variables to observed evidence in underlying data. The total energy of this model is the following sum in which α ∈ R^+_0 is a weight on the second order potentials, and j ∼ k indicates that the variables x_j and x_k are adjacent on the grid: $\forall x \in \{0,1\}^m: E(x) = \sum_{j=1}^m E_j(x_j) + \alpha \sum_{j=1}^m \sum_{k=j+1,\, k \sim j}^m E_{jk}(x_j, x_k)$ (2). For each α ∈ {0.1, 0.3, 0.5, 0.7, 0.9}, an ensemble of ten simulated Ising models of 50 · 50 = 2500 variables is considered. The first order potentials E_j are initialized randomly by drawing E_j(0) uniformly from the interval [0, 1] and setting E_j(1) := 1 − E_j(0). The exact global minimum of the total energy is found via a graph cut. For each model, the Lazy Flipper is initialized with a configuration that minimizes the sum of the first order potentials. Upper bounds on the minimum energy found by means of lazy flipping converge towards the global optimum as depicted in Fig. 2. 
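The Ising energy (2) in a sketch, a direct Python transcription assuming row-major variable indexing on the grid (the paper does not fix an indexing). On a tiny 2×2 grid with unaries that prefer label 1, brute force confirms the all-ones configuration is the global minimum:

```python
from itertools import product

def ising_energy(x, width, height, unary, alpha):
    """Energy (2): unary terms plus alpha * (1 - Kronecker delta)
    for every pair of nearest neighbors on the grid."""
    def idx(r, c):
        return r * width + c  # row-major indexing (our assumption)

    e = sum(unary[j][x[j]] for j in range(width * height))
    for r in range(height):
        for c in range(width):
            if c + 1 < width:   # horizontal neighbor
                e += alpha * (x[idx(r, c)] != x[idx(r, c + 1)])
            if r + 1 < height:  # vertical neighbor
                e += alpha * (x[idx(r, c)] != x[idx(r + 1, c)])
    return e
```

For ensembles as in the paper, unary would hold pairs (E_j(0), 1 − E_j(0)) with E_j(0) drawn uniformly from [0, 1].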
Color scales and gray scales in this figure respectively indicate the maximum size and the total number of distinct subsets that have been searched, averaged over all models in the ensemble. It can be seen from this figure that upper bounds on the minimum energy are tightened significantly by searching larger subsets of variables, independent of the coupling strength α. It takes the Lazy Flipper less than 100 seconds (on a single CPU of an Intel Quad Xeon E7220 at 2.93 GHz) to exhaustively search all connected subsets of 6 variables. The amount of RAM required for the CS-tree (in bytes) is 24 times as high as the number of subsets (approximately 50 MB in this case) because each subset is stored in the CS-tree as a node consisting of three 64-bit integers: a variable index, the index of the parent node and the index of the level-order successor (Section 3.1). For n_max ∈ {1, 6}, configurations corresponding to the upper bounds on the minimum energy are depicted in Fig. 3. It can be seen from this figure that all connected subsets of falsely set variables are larger than n_max. For a fixed maximum subgraph size n_max, the runtime of lazy flipping scales approximately linearly with the number of variables in the Ising model (cf. Fig. 4). Optimal Subgraph Model The optimal subgraph model consists of m ∈ N binary variables x_1, . . . , x_m ∈ {0, 1} that are associated with the edges of a 2-dimensional grid graph. A subgraph is defined by those edges whose associated variables attain the value 1. 
Energy functions of this model consist of first order potentials, one for each edge, and fourth order potentials, one for each node v ∈ V in which four edges (j, k, l, m) = N(v) meet: $\forall x \in \{0,1\}^m: E(x) = \sum_{j=1}^m E_j(x_j) + \sum_{(j,k,l,m) \in N(V)} E_{jklm}(x_j, x_k, x_l, x_m)$ (3). All fourth order potentials are equal, penalizing dead ends and branches of paths in the selected subgraph: $E_{jklm}(x_j, x_k, x_l, x_m) = \begin{cases} 0.0 & \text{if } s = 0 \\ 100.0 & \text{if } s = 1 \\ 0.6 & \text{if } s = 2 \\ 1.2 & \text{if } s = 3 \\ 2.4 & \text{if } s = 4 \end{cases}$ with s = x_j + x_k + x_l + x_m (4). An ensemble of 16 such models is constructed by drawing the unary potentials at random, exactly as for the Ising models. Each model has 19800 variables, the same number of first order potentials, and 9801 fourth order potentials. Approximate optimal subgraphs are found by Min-Sum Belief Propagation (BP) with parallel message passing [9,14] and message damping [36], by Tree-reweighted Belief Propagation (TRBP) [4], by Dual Decomposition (DD) [16,17] and by lazy flipping (LF). DD also affords lower bounds on the minimum energy. Details on the parameters of the algorithms and the decomposition of the models are given in Appendix A. Bounds on the minimum energy converge with increasing runtime, as depicted in Fig. 5. It can be seen from this figure that Lazy Flipper approximations converge fast, reaching a smaller energy after 3 seconds than the other approximations after 10000 seconds. Subgraphs of up to 7 variables are searched, using approximately 2.2 GB of RAM for the CS-tree. A gap remains between the energies of all approximations and the lower bound on the minimum energy obtained by DD. Thus, there is no guarantee that any of the problems has been solved to optimality. However, the gaps are upper bounds on the deviation from the global optimum. They are compared at t = 10000 s in Fig. 5. 
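The shared fourth-order potential (4) as a simple lookup, to make explicit how dead ends (s = 1) are penalized much harder than branchings (s = 3, 4):

```python
def subgraph_potential(xj, xk, xl, xm):
    """Fourth-order potential (4): the cost depends only on how many of
    the four edges meeting at a node are selected."""
    s = xj + xk + xl + xm
    return {0: 0.0,    # node not touched by the subgraph
            1: 100.0,  # dead end, heavily penalized
            2: 0.6,    # a path passes through the node
            3: 1.2,    # branching
            4: 2.4}[s] # full branching
```

Since the potential depends on the x variables only through their sum, it is invariant under permutations of its four arguments, which is why a single table suffices for all nodes.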
For any model in the ensemble, the energy of the Lazy Flipper approximation is less than 4% away from the global optimum, a substantial improvement over the other algorithms for this particular model. Pruning of 2D Over-Segmentations The graphical model for removing excessive boundaries from image over-segmentations contains one binary variable for each boundary between segments, indicating whether this boundary is to be removed (0) or preserved (1). First order potentials relate these variables to the image content, and non-submodular third and fourth order potentials connect adjacent boundaries, supporting the closedness and smooth continuation of preserved boundaries. The energy function is a sum of these potentials: $\forall x \in \{0,1\}^m: E(x) = \sum_{j=1}^m E_j(x_j) + \sum_{(j,k,l) \in J} E_{jkl}(x_j, x_k, x_l) + \sum_{(j,k,l,p) \in K} E_{jklp}(x_j, x_k, x_l, x_p)$ (5). We consider an ensemble of 100 such models obtained from the 100 BSD test images [34]. On average, a model has 8845 ± 670 binary variables, the same number of unary potentials, 5715 ± 430 third order potentials and 98 ± 18 fourth order potentials. Each variable is connected via potentials to at most six other variables, a sparse structure that is favorable for the Lazy Flipper. BP, TRBP, DD and the Lazy Flipper solve these problems approximately, thus providing upper bounds on the minimum energy. The differences between these bounds and the global optimum found by means of MILP are depicted in Fig. 6. It can be seen from this figure that, after 200 seconds, Lazy Flipper approximations provide a tighter upper bound on the global minimum in the median than those of the other three algorithms. BP and DD have a better peak performance, solving one problem to optimality. The Lazy Flipper reaches a search depth of 9 after around 1000 seconds for these sparse graphical models using roughly 720 MB of RAM for the CS-tree. At t = 5000 s and on average over all models, its approximations deviate by only 2.6% from the global optimum. 
Pruning of 3D Over-Segmentations The model described in the previous section is now applied in 3D to remove excessive boundaries from the over-segmentation of a volume image. In an ensemble of 16 such models obtained from 16 SBFSEM volume images, models have on average 16748 ± 1521 binary variables (and first order potentials), 26379 ± 2502 potentials of order 3, and 5081 ± 482 potentials of order 4. For BP, TRBP, DD and Lazy Flipper approximations, deviations from the global optimum are shown in Fig. 7. It can be seen from this figure that BP performs exceptionally well on these problems, providing approximations whose energies deviate by only 0.4% on average from the global optimum. One reason is that most variables influence many (up to 60) potential functions, and BP can propagate local evidence from all these potentials. Variables are connected via these potentials to as many as 100 neighboring variables, which hampers the exploration of the search space by the Lazy Flipper, which reaches only a search depth of 5 after 10000 seconds, using 4.8 GB of RAM for the CS-tree, yielding worse approximations than BP, TRBP and DD for these models. In practical applications where volume images and the corresponding models are several hundred times larger and can no longer be optimized exactly, it matters whether one can further improve upon the BP approximations. Dashed lines in the first plot in Fig. 7 show the result obtained when initializing the Lazy Flipper with the BP approximation at t = 100 s. This reduces the deviation from the global optimum at t = 50000 s from 0.4% on average over all models to 0.1%. Conclusion The optimum of a function of binary variables that decomposes according to a graphical model can be found by an exhaustive search over only the connected subgraphs of the model. We implemented this search, using a CS-tree to efficiently and uniquely enumerate the subgraphs. The C++ source code is available from http://hci.iwr.uni-heidelberg.de/software.php. 
Our algorithm is guaranteed to converge to a global minimum when searching through all subgraphs, which is typically intractable. With limited runtime, approximations can be found by restricting the search to subgraphs of a given maximum size. Simulated and real-world problems exist for which these approximations compare favorably to those obtained by message passing and sub-gradient descent. For large-scale problems, the applicability of the Lazy Flipper is limited by the memory required for the CS-tree. However, for regular graphs, this limit can be overcome by an implicit representation of the CS-tree that is the subject of future research. A Parameters and Model Decomposition In all experiments, the damping parameters for BP and TRBP are chosen optimally from the set {0, 0.1, 0.2, . . . , 0.9}. The step size of the sub-gradient descent is chosen according to $\tau_t = \alpha \cdot \frac{1}{1 + \beta t}$ (6) where β = 0.01 and α is chosen optimally from {0.01, 0.025, 0.05, 0.1, 0.25, 0.5}. The sequence of step sizes, in particular the function (6) and β, could be tuned further. Moreover, [16] consider the primal-dual gap and [17] smooth the subgradient over iterations in order to suppress oscillations. These measures can have substantial impact on the convergence. The upper bounds obtained by BP, TRBP and DD do not decrease monotonically. After each iteration of these algorithms, we therefore consider the elapsed runtime and the current best bound, i.e. the best bound of the current and all preceding iterations. All five algorithms are implemented in C++, using the same optimized data structures for the graphical model and a visitor design pattern that allows us to measure runtime without significantly affecting performance. The same decomposition of each graphical model into tree models is used for TRBP and DD. Tree models are constructed in a greedy fashion, each comprising as many potential functions as possible. 
The procedure is generally applicable to irregular models with higher-order potentials: Initially, all potentials of the graphical model are put on a white list that contains those potentials that have not been added to any tree model. A black list of already added potentials and a gray list of recently added potentials are initially empty. As long as there are potentials on the white list, new tree models are constructed. For each newly constructed tree model, the procedure iterates over the white list, adding potentials to the tree model if they do not introduce loops. Added potentials are moved from the white list to the gray list. After all potentials from the white list have been processed, potentials from the black list that do not introduce loops are added to the tree model. The gray list is then appended to the black list and cleared. The procedure finishes when the white list is empty. As recently shown in [17], decompositions into cyclic subproblems can lead to significantly tighter relaxations and better integer solutions.
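The white/gray/black-list procedure above can be sketched in a few lines. This is our own illustrative reconstruction, not the authors' C++ code: each potential is represented by the (hypothetical) tuple of variable indices it depends on, and acyclicity within a tree model is tracked with union-find.

```python
# Sketch of the greedy tree-model decomposition described above.
# Data layout is assumed: a potential = tuple of variable indices.

def _find(parent, x):
    # Union-find root lookup with path halving.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def _try_add(parent, variables):
    """Add a potential to the current tree model iff linking its
    variables introduces no cycle; merge their components if so."""
    roots = [_find(parent, v) for v in variables]
    if len(set(roots)) < len(roots):   # two variables already connected
        return False
    for r in roots[1:]:
        parent[r] = roots[0]
    return True

def greedy_tree_models(potentials, num_vars):
    white = list(potentials)   # potentials not yet in any tree model
    black = []                 # potentials already in some tree model
    trees = []
    while white:
        parent = list(range(num_vars))   # fresh forest per tree model
        tree, remaining = [], []
        for p in white:        # first pass: previously unused potentials
            (tree if _try_add(parent, p) else remaining).append(p)
        gray = list(tree)      # recently added potentials
        for p in black:        # second pass: fill up with used potentials
            if _try_add(parent, p):
                tree.append(p)
        black.extend(gray)     # gray list is appended to the black list
        white = remaining
        trees.append(tree)
    return trees
```

For pairwise potentials (0,1), (1,2), (2,3) plus the chord (0,2), the first tree model absorbs the first three potentials, the chord is deferred to a second tree model, and previously used potentials are re-added there where they do not close a loop.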
5,125
1009.1174
1820049239
Symmetry is a common feature of many combinatorial problems. Unfortunately eliminating all symmetry from a problem is often computationally intractable. This paper argues that recent parameterized complexity results provide insight into that intractability and help identify special cases in which symmetry can be dealt with more tractably.
The study of computational complexity in constraint programming has tended to focus on the structure of the constraint graph (e.g. especially measures like tree width @cite_33 @cite_42 ) or on the semantics of the constraints (e.g. @cite_4 ). However, these lines of research are mostly concerned with constraint satisfaction problems as a whole, and do not say much about individual (global) constraints. For global constraints of bounded arity, asymptotic analysis has been used to characterize the complexity of propagation both in general and for constraints with a particular semantics. For example, the generic domain consistency algorithm of @cite_39 has an @math time complexity on constraints of arity @math and domains of size @math , whilst the domain consistency algorithm of @cite_5 for the @math -ary ALLDIFFERENT constraint has @math time complexity. Bessiere et al. showed that many global constraints like NVALUE are also intractable to propagate @cite_36 . More recently, Samer and Szeider have studied the parameterized complexity of the EGCC constraint @cite_19 . Szeider has also studied the complexity of symmetry in a propositional resolution calculus @cite_25 . See Chapter 10 in @cite_8 for more about symmetry of propositional systems.
{ "abstract": [ "Abstract Finding solutions to a binary constraint satisfaction problem is known to be an NP-complete problem in general, but may be tractable in cases where either the set of allowed constraints or the graph structure is restricted. This paper considers restricted sets of constraints which are closed under permutation of the labels. We identify a set of constraints which gives rise to a class of tractable problems and give polynomial time algorithms for solving such problems, and for finding the equivalent minimal network. We also prove that the class of problems generated by any set of constraints not contained in this restricted set is NP-complete.", "", "", "We study the computational complexity of reasoning with global constraints. We show that reasoning with such constraints is intractable in general. We then demonstrate how the same tools of computational complexity can be used in the design and analysis of specific global constraints. In particular, we illustrate how computational complexity can be used to determine when a lesser level of local consistency should be enforced, when decomposing constraints will lose pruning, and when combining constraints is tractable. We also show how the same tools can be used to study symmetry breaking, meta-constraints like the cardinality constraint, and learning nogoods.", "Abstract The paper offers a systematic way of regrouping constraints into hierarchical structures capable of supporting search without backtracking. The method involves the formation and preprocessing of an acyclic database that permits a large variety of queries and local perturbations to be processed swiftly, either by sequential backtrack-free procedures, or by distributed constraint propagation processes.", "486,344. Roller bearings. BENEDEK, E. K. March 15, 1937, No. 7567. 
[Class 12 (i)] A high-speed heavyload needle roller bearing comprises two races 7, 8, with needle rollers 31, the individual rollers being spaced apart by a lubricant film and the total clearance between the rollers of a set lying between the limits of one to three needle roller diameters whereby an efficient circulation of lubricant is obtained. As shown, for a multiple row bearing, one of the races has annular oil channels 6 and the other lubricant ducts 9. Rings 10 separate the rows of rollers. The needle rollers may be conical. The invention may be applied to single and double thrust bearings. In a modification needle rollers 3, Fig. 12, carried in a rectangular cage 16 are arranged between members 14, 15, having relative oscillatory or rotary motion. The total clearance between the rollers and the cage lies between one to three roller diameters. A rectilinear-motion bearing is constructed similarly. Specification 410,968 is referred to.", "We study the consistency and domain consistency problem for extended global cardinality (EGC) constraints. An EGC constraint consists of a set X of variables, a set D of values, a domain @math for each variable x, and a \"cardinality set\" K(d) of non-negative integers for each value d. The problem is to instantiate each variable x with a value in D(x) such that for each value d, the number of variables instantiated with d belongs to the cardinality set K(d). It is known that this problem is NP-complete in general, but solvable in polynomial time if all cardinality sets are intervals. First we pinpoint connections between EGC constraints and general factors in graphs. This allows us to extend the known polynomial-time case to certain non-interval cardinality sets. Second we consider EGC constraints under restrictions in terms of the treewidth of the value graph (the bipartite graph representing variable-value pairs) and the cardinality-width (the largest integer occurring in the cardinality sets). 
We show that EGC constraints can be solved in polynomial time for instances of bounded treewidth, where the order of the polynomial depends on the treewidth. We show that (subject to the complexity theoretic assumption FPT???W[1]) this dependency cannot be avoided without imposing additional restrictions. If, however, also the cardinality-width is bounded, this dependency gets removed and EGC constraints can be solved in linear time.", "Many real-life Constraint Satisfaction Problems (CSPs) involve some constraints similar to the alldifferent constraints. These constraints are called constraints of difference. They are defined on a subset of variables by a set of tuples for which the values occuring in the same tuple are all different. In this paper, a new filtering algorithm for these constraints is presented. It achieves the generalized arc-consistency condition for these non-binary constraints. It is based on matching theory and its complexity is low. In fact, for a constraint defined on a subset of p variables having domains of cardinality at most d, its space complexity is O(pd) and its time complexity is O(p2d2). This filtering algorithm has been successfully used in the system RESYN ( 1992), to solve the subgraph isomorphism problem.", "We generalize Krishnamurthy’s well-studied symmetry rule for resolution systems by considering homomorphisms instead of symmetries; symmetries are injective maps of literals whichpreserve complements and clauses; homomorphisms arise from symmetries by releasing the constraint of being injective. We prove that the use of homomorphisms yields a strictly more powerful system than the use of symmetries by exhibiting an infinite sequence of sets of clauses for which the consideration of global homomorphisms allows exponentially shorter proofs than the consideration of local symmetries. 
It is known that local symmetries give rise to a strictly more powerful system than global symmetries; we prove a similar result for local and global homomorphisms. Finally, we obtain an exponential lower bound for the resolution system enhanced by the local homomorphism rule." ], "cite_N": [ "@cite_4", "@cite_33", "@cite_8", "@cite_36", "@cite_42", "@cite_39", "@cite_19", "@cite_5", "@cite_25" ], "mid": [ "1748315421", "2002478775", "", "137416953", "1999038767", "1592840579", "2123103890", "2164279585", "2905990799" ] }
Parameterized Complexity Results in Symmetry Breaking
Symmetry occurs in many constraint satisfaction problems. For example, in scheduling a round robin sports tournament, we may be able to interchange all the matches taking place in two stadia. Similarly, we may be able to interchange two teams throughout the tournament. As a second example, when colouring a graph (or equivalently when timetabling exams), the colours are interchangeable. We can swap red with blue throughout. If we have a proper colouring, any permutation of the colours is itself a proper colouring. Problems may have many symmetries at once. In fact, the symmetries of a problem form a group. Their action is to map solutions (a schedule, a proper colouring, etc.) onto solutions. Symmetry is problematic when solving constraint satisfaction problems as we may waste much time visiting symmetric solutions. In addition, we may visit many (failing) search states that are symmetric to those that we have already visited. One simple but effective mechanism to deal with symmetry is to add constraints which eliminate symmetric solutions [1]. Unfortunately eliminating all symmetry is NP-hard in general [2]. However, recent results in parameterized complexity give us a good understanding of the source of that complexity. In this survey paper, I summarize results in this area. For more background, see [3,4,5,6,7]. An example To illustrate the ideas, we consider a simple problem from musical composition. The all interval series problem (prob007 in CSPLib.org [8]) asks for a permutation of the numbers 0 to n − 1 so that neighbouring differences form a permutation of 1 to n − 1. For n = 12, the problem corresponds to arranging the half-notes of a scale so that all musical intervals (minor second to major seventh) are covered. This is a simple example of a graceful graph problem in which the graph is a path. We can model this as a constraint satisfaction problem in n variables with X i = j iff the ith number in the series is j. One solution for n = 11 is: X 1 , X 2 , . . . , X 11 = 3, 7, 4, 6, 5, 0, 10, 1, 9, 2, 8 (1) The differences form the series: 4, 3, 2, 1, 5, 10, 9, 8, 7, 6. The all interval series problem has a number of different symmetries. First, we can reverse any solution and generate a new (but symmetric) solution: X 1 , X 2 , . . . , X 11 = 8, 2, 9, 1, 10, 0, 5, 6, 4, 7, 3 (2) Second, the all interval series problem has a value symmetry as we can invert values. If we subtract all values in (1) from 10, we generate a second (but symmetric) solution: X 1 , X 2 , . . . , X 11 = 7, 3, 6, 4, 5, 10, 0, 9, 1, 8, 2 (3) Third, we can do both and generate a third (but symmetric) solution: X 1 , X 2 , . . . , X 11 = 2, 8, 1, 9, 0, 10, 5, 4, 6, 3, 7 (4) To eliminate such symmetric solutions from the search space, we can post additional constraints which eliminate all but one solution in each symmetry class. To eliminate the reversal of a solution, we can simply post the constraint: X 1 < X 11 (5) This eliminates solution (2) as it is a reversal of (1). To eliminate the value symmetry which subtracts all values from 10, we can post: X 1 ≤ 5, X 1 = 5 ⇒ X 2 < 5 (6) This eliminates solutions (2) and (3). Finally, eliminating the third symmetry where we both reverse the solution and subtract it from 10 is more difficult. We can, for instance, post: [X 1 , . . . , X 11 ] ≤ lex [10 − X 11 , . . . , 10 − X 1 ] (7) Note that of the four symmetric solutions given earlier, only (4) with X 1 = 2, X 2 = 8 and X 11 = 7 satisfies all three sets of symmetry breaking constraints: (5), (6) and (7). The other three solutions are eliminated. Symmetry breaking As we have argued, symmetry is a common feature of many real-world problems that dramatically increases the size of the search space if it is not factored out. Symmetry can be defined as a bijection on assignments that preserves solutions. The set of symmetries form a group under the action of composition. We focus on two special types of symmetry. 
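These solutions and their symmetries are easy to check mechanically. A small sketch (our own illustration, not from the paper) verifies the all-interval property for the solution above and its three symmetric variants:

```python
def is_all_interval(xs):
    """xs is an all-interval series: a permutation of 0..n-1 whose
    neighbouring absolute differences form a permutation of 1..n-1."""
    n = len(xs)
    diffs = {abs(a - b) for a, b in zip(xs, xs[1:])}
    return sorted(xs) == list(range(n)) and diffs == set(range(1, n))

s1 = [3, 7, 4, 6, 5, 0, 10, 1, 9, 2, 8]   # solution (1), n = 11
s2 = s1[::-1]                              # reversal: solution (2)
s3 = [10 - x for x in s1]                  # value inversion: solution (3)
s4 = s3[::-1]                              # both: solution (4)
# All four are valid all-interval series; only (4) survives the
# symmetry-breaking constraints (5)-(7).
```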
A value symmetry is a bijective mapping σ of the values such that if X 1 = d 1 , . . . , X n = d n is a solution then X 1 = σ(d 1 ), . . . , X n = σ(d n ) is also. For example, in our all interval series example, there is a value symmetry σ that maps the value i onto n − i. A variable symmetry, on the other hand, is a bijective mapping θ of the indices of variables such that if X 1 = d 1 , . . . , X n = d n is a solution then X θ(1) = d 1 , . . . , X θ(n) = d n is also. For example, in our all interval series example, there is a variable symmetry θ that maps the index i onto n + 1 − i. This swaps the variable X i with X n+1−i . A simple and effective mechanism to deal with symmetry is to add constraints to eliminate symmetric solutions [1,2,16,17,18,19]. The basic idea is very simple. We pick an ordering on the variables, and then post symmetry breaking constraints to ensure that the final solution is lexicographically less than any symmetric re-ordering of the variables. That is, we select the "lex leader" assignment. For example, to break the variable symmetry θ, we post the constraint: [X 1 , . . . , X n ] ≤ lex [X θ(1) , . . . , X θ(n) ] Efficient inference methods exist for propagating such constraints [20,21]. The symmetry breaking constraints in our all interval series example can all be derived from such lex leader constraints. In theory, the lex leader method solves the problem of symmetries, eliminating all symmetric solutions and pruning many symmetric states. Unfortunately, the set of symmetries might be exponentially large (for example, in graph k-colouring, there are k! symmetries). There may therefore be too many symmetry breaking constraints to post. In addition, decomposing symmetry breaking into many lex leader constraints typically hinders propagation. 
We focus on three special but commonly occurring cases where symmetry breaking is more tractable and propagation can be more powerful: value symmetry, interchangeable values, and row and column symmetry. In each case, we identify islands of tractability but show that the quicksands of intractability remain close to hand. Value symmetry Value symmetries are a commonly occurring type of symmetry that is more tractable to break [6]. For instance, Puget has proved that a linear number of symmetry breaking constraints will eliminate any number of value symmetries in polynomial time [22]. Given a set of value symmetries Σ, we can eliminate all value symmetry by posting the global constraint LEXLEADER(Σ, [X 1 , . . . , X n ]) [16]. This is a conjunction of lex leader constraints, ensuring that, for each σ ∈ Σ: [X 1 , . . . , X n ] ≤ lex [σ(X 1 ), . . . , σ(X n )] Unfortunately, enforcing domain consistency on this global constraint is NP-hard. However, this complexity depends on the number of symmetries. Breaking all value symmetry is fixed-parameter tractable in the number of symmetries. Theorem 1 Enforcing domain consistency on LEXLEADER(Σ, [X 1 , . . . , X n ]) is NP-hard in general but fixed-parameter tractable in k = |Σ|. Proof: NP-hardness is proved by Theorem 1 in [23], and fixed-parameter tractability by Theorem 7 in [24]. □ One situation where we may have only a small number of symmetries is when we focus on just the generators of the symmetry group [2,25]. This is attractive as the size of the generator set is logarithmic in the size of the group, many algorithms in computational group theory work on generators, and breaking just the generator symmetries has been shown to be effective on many benchmarks [25]. In general, breaking just the generators may leave some symmetry. However, on certain symmetry groups (like that for interchangeable values considered in the next section), all symmetry is eliminated (Theorem 3 in [23]). 
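Checking the lex-leader condition on a complete assignment is easy; it is propagating the conjunction that is hard, as Theorem 1 states. A sketch of the check (function names are ours, not a real solver API; each symmetry is a value-to-value map):

```python
def lex_leader_value(assignment, symmetries):
    """True iff the assignment is lexicographically <= every image of
    itself under the given value symmetries (each a dict value -> value).
    Checking is easy; propagating the conjunction is NP-hard in general,
    but fixed-parameter tractable in len(symmetries)."""
    return all(assignment <= [s[v] for v in assignment] for s in symmetries)

# Inversion symmetry from the all-interval example (n = 11): v -> 10 - v.
invert = {v: 10 - v for v in range(11)}
ok  = lex_leader_value([2, 8, 1, 9, 0, 10, 5, 4, 6, 3, 7], [invert])  # solution (4)
bad = lex_leader_value([7, 3, 6, 4, 5, 10, 0, 9, 1, 8, 2], [invert])  # solution (3)
```

Here solution (4) is its own lex leader under inversion, while solution (3) is not, mirroring the discussion of constraints (5)-(7) above.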
By exploiting special properties of the value symmetry group, we can identify even more tractable cases. A common type of value symmetry with such properties is that due to interchangeable values. We can break all such symmetry using the idea of value precedence [26]. In particular, we can post the global symmetry breaking constraint PRECEDENCE([X 1 , . . . , X n ]). This ensures that for all j < k: min{i | X i = j ∨ i = n + 1} < min{i | X i = k ∨ i = n + 2} That is, the first time we use j is before the first time we use k, for all j < k. For example, consider the assignment: X 1 , X 2 , X 3 , . . . , X n = 1, 1, 1, 2, 1, 3, . . . , 2 This satisfies value precedence as 1 first occurs before 2, 2 first occurs before 3, etc. Now consider the symmetric assignment in which we swap 2 with 3: X 1 , X 2 , X 3 , . . . , X n = 1, 1, 1, 3, 1, 2, . . . , 3 This does not satisfy value precedence as 3 first occurs before 2. A PRECEDENCE constraint eliminates all symmetry due to interchangeable values. In [27], we give a linear time propagator for enforcing domain consistency on the PRECEDENCE constraint. In [23], we argue that PRECEDENCE can be derived from the lex leader method (but offers more propagation by being a global constraint). Another way to ensure value precedence is to map onto dual variables Z j , which record the first index using each value j [22]. This transforms value symmetry into variable symmetry on the Z j . We can then eliminate this variable symmetry with some ordering constraints: Z 1 < Z 2 < Z 3 < . . . < Z m (10) In fact, Puget proves that we can eliminate all value symmetry (and not just that due to value interchangeability) with a linear number of such ordering constraints. Unfortunately, this decomposition into ordering constraints hinders propagation even for the tractable case of interchangeable values (Theorem 5 in [23]). Indeed, even with just two value symmetries, mapping into variable symmetry can hinder propagation. 
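The value-precedence condition can be checked directly from its definition. A sketch (our own illustration; the sentinels mirror the n+1 and n+2 terms in the definition, chosen so that unused values keep their natural order):

```python
def first_use(xs, value, default):
    """1-based index of the first occurrence of value in xs, or default."""
    for i, x in enumerate(xs, start=1):
        if x == value:
            return i
    return default

def satisfies_precedence(xs, num_values):
    """PRECEDENCE: for all j < k, value j first occurs before value k
    (unused values get increasing out-of-range sentinels)."""
    n = len(xs)
    firsts = [first_use(xs, v, n + 1 + v) for v in range(1, num_values + 1)]
    return all(a < b for a, b in zip(firsts, firsts[1:]))

ok  = satisfies_precedence([1, 1, 1, 2, 1, 3], 3)   # 1 before 2 before 3
bad = satisfies_precedence([1, 1, 1, 3, 1, 2], 3)   # 3 first occurs before 2
```

Note that checking adjacent pairs of first-occurrence indices suffices, since strict ordering of consecutive values implies it for all j < k.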
This is supported by the experiments in [23], where we see faster and more effective symmetry breaking with the global PRECEDENCE constraint. This global constraint thus appears to be a promising method to eliminate the symmetry due to interchangeable values. A generalization of the symmetry due to interchangeable values is when values partition into sets, and values within each set (but not between sets) are interchangeable. The idea of value precedence can be generalized to this case [27]. The global GENPRECEDENCE constraint ensures that values in each interchangeable set occur in order. More precisely, if the values are divided into s equivalence classes, and the jth equivalence class contains the values v j,1 to v j,mj , then GENPRECEDENCE ensures min{i | X i = v j,k ∨ i = n + 1} < min{i | X i = v j,k+1 ∨ i = n + 2} for all 1 ≤ j ≤ s and 1 ≤ k < m j . Enforcing domain consistency on GENPRECEDENCE is NP-hard in general but fixed-parameter tractable in k = s [23,24]. Another common type of symmetry where we can exploit special properties of the symmetry group is row and column symmetry [28]. Many problems can be modelled by a matrix model involving a matrix of decision variables [29,30,31]. Often the rows and columns of such matrices are fully or partially interchangeable [28]. For example, the Equidistant Frequency Permutation Array (EFPA) problem is a challenging combinatorial problem in coding theory. The aim is to find a set of v code words, each of length qλ, such that each word contains λ copies of the symbols 1 to q, and each pair of code words is at a Hamming distance of d apart. For example, solutions exist for v = 4, λ = 2, q = 3, d = 4. This problem has applications in communications, and is closely related to other combinatorial problems like finding orthogonal Latin squares. Huczynska et al. [32] consider a simple matrix model for this problem with a v by qλ array of variables, each with domains 1 to q. 
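A solution for these parameters is easy to verify mechanically. The code words below are our own construction for illustration (not necessarily the example from the paper); the checker confirms they satisfy the EFPA conditions:

```python
from itertools import combinations

def is_efpa(words, q, lam, d):
    """EFPA check: each word contains lam copies of each symbol 1..q,
    and every pair of words is at Hamming distance exactly d."""
    if any(sorted(w) != sorted(list(range(1, q + 1)) * lam) for w in words):
        return False
    return all(sum(a != b for a, b in zip(u, v)) == d
               for u, v in combinations(words, 2))

# v = 4 code words of length q*lam = 6, pairwise Hamming distance d = 4
# (constructed for illustration):
words = [(1, 1, 2, 2, 3, 3),
         (1, 2, 1, 3, 2, 3),
         (2, 1, 1, 3, 3, 2),
         (2, 3, 1, 2, 1, 3)]
```

Permuting the rows, the columns, or the symbols of this array yields further solutions, which is exactly the row, column and value symmetry discussed next.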
This model has row and column symmetry since we can permute the rows and the columns of any solution. Although breaking all row and column symmetry is intractable in general, it is fixed-parameter tractable in the number of columns (or rows). Theorem 2 With an n by m matrix, checking lex leader constraints that break all row and column symmetry is NP-hard in general but fixed-parameter tractable in k = m. Proof: NP-hardness is proved by Theorem 3.2 in [2], and fixed-parameter tractability by Theorem 1 in [33]. □ Note that the above result only talks about checking a constraint which breaks all row and column symmetry. That is, we only consider the computational cost of deciding if a complete assignment satisfies the constraint. Propagation of such a global constraint is computationally more difficult. Row or column symmetry on its own is tractable to break. To eliminate all row symmetry we can post lexicographical ordering constraints on the rows. Similarly, to eliminate all column symmetry we can post lexicographical ordering constraints on the columns. When we have both row and column symmetry, we can post a DOUBLELEX constraint that lexicographically orders both the rows and columns [28]. This does not eliminate all symmetry since it may not break symmetries which permute both rows and columns. Nevertheless, it is more tractable to propagate and is often highly effective in practice. Note that DOUBLELEX can be derived from a strict subset of the LEXLEADER constraints. Unfortunately, propagating such a DOUBLELEX constraint completely is already NP-hard. Theorem 3 With an n by m matrix, enforcing domain consistency on DOUBLELEX is NP-hard in general. Proof: See Theorem 3 in [33]. □ There are two special cases of matrix models where row and column symmetry is more tractable to break. The first case is with an all-different matrix, a matrix model in which every value is different. 
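As with the theorems above, checking DOUBLELEX on a complete assignment is cheap; only propagation is NP-hard. A sketch of the check (our own illustration, using non-decreasing lexicographic order on both rows and columns):

```python
def double_lex(matrix):
    """DOUBLELEX: rows are lexicographically non-decreasing and so are
    columns. This breaks much, but not all, row-and-column symmetry:
    symmetries permuting rows and columns together may survive."""
    cols = list(zip(*matrix))
    rows_ok = all(list(r1) <= list(r2) for r1, r2 in zip(matrix, matrix[1:]))
    cols_ok = all(list(c1) <= list(c2) for c1, c2 in zip(cols, cols[1:]))
    return rows_ok and cols_ok
```

For example, the 0/1 matrix [[0, 1], [1, 0]] satisfies DOUBLELEX, while its row swap [[1, 0], [0, 1]] does not, so only one representative of that symmetry class is admitted.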
If an all-different matrix has row and column symmetry then the lex-leader method ensures that the top left entry is the smallest value, and the first row and column are ordered [28]. Domain consistency can be enforced on such a global constraint in polynomial time [33]. The second more tractable case is with a matrix model of a function. In such a model, all entries are 0/1 and each row sum is 1. If a matrix model of a function has row and column symmetry then the lex-leader method ensures the rows and columns are lexicographically ordered, the row sums are 1, and the sums of the columns are in decreasing order [34,35,28]. Domain consistency can also be enforced on such a global constraint in polynomial time [33]. Related work The study of computational complexity in constraint programming has tended to focus on the structure of the constraint graph (e.g. especially measures like tree width [36,37]) or on the semantics of the constraints (e.g. [38]). However, these lines of research are mostly concerned with constraint satisfaction problems as a whole, and do not say much about individual (global) constraints. For global constraints of bounded arity, asymptotic analysis has been used to characterize the complexity of propagation both in general and for constraints with a particular semantics. For example, the generic domain consistency algorithm of [39] has an O(d^n) time complexity on constraints of arity n and domains of size d, whilst the domain consistency algorithm of [40] for the n-ary ALLDIFFERENT constraint has O(n^{3/2} d) time complexity. Bessiere et al. showed that many global constraints like NVALUE are also intractable to propagate [11]. More recently, Samer and Szeider have studied the parameterized complexity of the EGCC constraint [41]. Szeider has also studied the complexity of symmetry in a propositional resolution calculus [42]. See Chapter 10 in [43] for more about symmetry of propositional systems. 
Conclusions We have argued that parameterized complexity is a useful tool with which to study symmetry breaking. In particular, we have shown that whilst it is intractable to break all symmetry completely, there are special types of symmetry, like value symmetry and row and column symmetry, which are more tractable to break. In these cases, fixed-parameter tractability comes from natural parameters like the number of generators, which tend to be small. In future, we hope that insights provided by such analysis will inform the design of new search methods. For example, we might build a propagator that propagates completely when the parameter is small, but only partially when it is large. In the longer term, we hope that other aspects of parameterized complexity, like kernels, will find application in the domain of symmetry breaking.
2,936
1009.1344
1920329887
An online backup system should be quick and reliable in both saving and restoring users' data. To do so in a peer-to-peer implementation, data transfer scheduling and the amount of redundancy must be chosen wisely. We formalize the problem of exchanging multiple pieces of data with intermittently available peers, and we show that random scheduling completes transfers nearly optimally in terms of duration as long as the system is sufficiently large. Moreover, we propose an adaptive redundancy scheme that improves performance and decreases resource usage while keeping the risks of data loss low. Extensive simulations show that our techniques are effective in a realistic trace-driven scenario with heterogeneous bandwidth.
Data maintenance is cheap in our scenario, where it is performed by a data owner with a local copy. When maintenance is delegated to nodes that do not have a local copy of the backup objects, various coding schemes can be used @cite_21 @cite_0 to limit the amount of required data transit. For these settings, cryptographic protocols @cite_15 @cite_18 have been designed to verify the authenticity of stored data.
{ "abstract": [ "Redundancy is the basic technique to provide reliability in storage systems consisting of multiple components. A redundancy scheme defines how the redundant data are produced and maintained. The simplest redundancy scheme is replication, which however suffers from storage inefficiency. Another approach is erasure coding, which provides the same level of reliability as replication using a significantly smaller amount of storage. When redundant data are lost, they need to be replaced. While replacing replicated data consists in a simple copy, it becomes a complex operation with erasure codes: new data are produced performing a coding over some other available data. The amount of data to be read and coded is d times larger than the amount of data produced. This implies that coding has a larger computational and I O cost, which, for distributed storage systems, translates into increased network traffic. Participants of peer-to-peer systems have ample storage and CPU power, but their network bandwidth may be limited. For these reasons existing coding techniques are not suitable for P2P storage. This work explores the design space between replication and the existing erasure codes. We propose and evaluate a new class of erasure codes, called hierarchical codes, which aims at finding a flexible trade-off that allows the reduction of the network traffic due to maintenance without losing the benefits given by traditional codes.", "This paper describes a cryptographic protocol for securing self-organized data storage through periodic verifications. The proposed verification protocol, which goes beyond simple integrity checks and proves data conservation, is deterministic, efficient, and scalable. The security of this scheme relies both on the ECDLP intractability assumption and on the difficulty of finding the order of some specific elliptic curve over Zn. 
The protocol also makes it possible to personalize replicas and to delegate verification without revealing any secret information.", "Distributed storage systems provide reliable access to data through redundancy spread over individually unreliable nodes. Application scenarios include data centers, peer-to-peer storage systems, and storage in wireless networks. Storing data using an erasure code, in fragments spread across nodes, requires less redundancy than simple replication for the same level of reliability. However, since fragments must be periodically replaced as nodes fail, a key question is how to generate encoded fragments in a distributed way while transferring as little data as possible across the network. For an erasure coded system, a common practice to repair from a node failure is for a new node to download subsets of data stored at a number of surviving nodes, reconstruct a lost coded block using the downloaded data, and store it at the new node. We show that this procedure is sub-optimal. We introduce the notion of regenerating codes, which allow a new node to download of the stored data from the surviving nodes. We show that regenerating codes can significantly reduce the repair bandwidth. Further, we show that there is a fundamental tradeoff between storage and repair bandwidth which we theoretically characterize using flow arguments on an appropriately constructed graph. By invoking constructive results in network coding, we introduce regenerating codes that can achieve any point in this optimal tradeoff.", "" ], "cite_N": [ "@cite_0", "@cite_15", "@cite_21", "@cite_18" ], "mid": [ "2159028410", "1839970284", "2951800112", "" ] }
On Scheduling and Redundancy for P2P Backup
The advent of cloud computing as a new paradigm to enable service providers with the ability to deploy cost-effective solutions has favored the development of a range of new services, including online storage applications. Due to the economy of scale of cloud-based storage services, the costs incurred by end-users to hand over their data to a remote storage location in the Internet have approached the cost of ownership of commodity storage devices. As such, online storage applications spare users most of the time-consuming nuisance of data backup: user interaction is minimal, and in case of data loss due to an accident, restoring the original data is a seamless operation. However, the long-term storage costs that are typical of a backup application may easily go past those of traditional approaches to data backup. Additionally, while data availability is a key feature that large-scale data-center deployments guarantee, its durability is questionable, as reported recently [1]. For these reasons, peer-to-peer (P2P) storage systems are an alternative to cloud-based solutions. Storage costs are merely those of a commodity storage device, which is shared (together with some bandwidth resources) with a number of remote Internet users to form a distributed storage system. Such applications optimize the latency of individual file access: indeed, users hand over their data to the P2P system, which is used as a replacement for a local hard drive. In such a scenario, low access latency is difficult to achieve: the online behavior of users is unpredictable and, at large scale, crashes and failures are the norm rather than the exception. As a consequence, storage space is sacrificed for low access latency: a P2P application stores large amounts of redundant data to cope with such unfavorable events. In this work we study a particular case of online storage: P2P backup applications. 
Data backup involves the bulk transfer of potentially large quantities of data, both during regular data backups and in case of data loss. As a consequence, low access latency is not an issue, while short backup and restore times seem a more reasonable goal. Given these considerations, here we seek to optimize backup and restore times, while guaranteeing that data loss remains an unlikely event. There are two main design choices that affect these metrics: scheduling, i.e. deciding how to allocate data transfers between peers, and redundancy, i.e. the amount of data in the P2P system that guarantees a backup operation to be considered complete and safe. The endeavor of this work is to study and evaluate these two intertwined aspects. First, we describe in detail our application scenario (Sec. II), and show why the assumptions underlying a backup application can simplify many problems addressed in the literature. We then set off to define the problem of scheduling in a full knowledge setting, and we show that it can be solved in polynomial time by reducing it to a maximal flow problem. Full knowledge of future peer uptime is obviously an unrealistic assumption: thus, we show that a randomized approach to scheduling yields near optimal results when the system scale is large and we corroborate our findings using real availability traces from an instant messaging application (Sec. III). We then move to study a novel redundancy policy that, rather than focusing on short-term data availability, targets short data restore times. As such, our method alleviates the storage burden of large amounts of redundant data on client machines (Sec. IV). With a trace-driven simulation of a complete P2P backup system, we show that our technique is viable in practical scenarios and illustrate its benefits in terms of increased performance (Sec. V). We conclude by studying a range of data maintenance policies when restore operations may undergo some natural delays. 
For example, detecting a faulty external hard-drive may not be immediate, or obtaining new equipment upon a crash may require some time. We show that an "assisted" approach to data repair techniques (which involves a cloud-based storage service) can significantly reduce the probability of data loss, at an affordable cost (Sec. VI). II. APPLICATION SCENARIO In this work, similarly to many online backup applications (e.g., Dropbox), we assume that users specify one local folder containing important data to backup. We also assume that backup data remains available locally to peers. This is an important trait that distinguishes backup from storage applications, in which data is only stored remotely. Backup data consists of an opaque object, possibly representing an encrypted archive of changes to a set of files, that we term the backup object. In the spirit of incremental backups, we consider that each backup object should be kept in the system indefinitely. Consolidation and deletion of obsolete backups are not taken into account in this work. A backup object of size o is split into k original fragments of a fixed size f, with k = o/f. Since backup data is stored on unreliable machines characterized by an unpredictable online behavior, the original k fragments are encoded using erasure coding (e.g., Reed-Solomon). This creates n encoded fragments of size f, of which any k are sufficient to recover the original data. The redundancy rate is defined as r = n/k. Here we assume that encoded fragments reside on distinct remote peers, which prevents a single disk failure from causing the loss of multiple fragments. Backup Phase: The backup phase involves a data owner and a set of remote peers that eventually store encoded fragments for the data owner. We assume that any peer in the system can collect a list of remote peers with available storage space: this can be achieved by using known techniques, e.g.
a centralized "tracker" or a decentralized data structure such as a distributed hash table. Data backup requires a scheduling policy that drives the choice of where and when to upload encoded fragments to remote peers. Moreover, a redundancy policy determines when the data is safe, which completes the backup operation. Maintenance Phase: Once the backup phase is completed and encoded fragments reside on remote peers, the maintenance phase begins. Peer crashes and departures can cause the loss of some encoded fragments; during the maintenance phase, peers detect such losses and generate new encoded fragments to restore a redundancy level at which the backup is safe again. For a generic P2P storage system, in which encoded fragments only reside in the network and peers do not keep a local copy of their data, the maintenance phase is critical. Indeed, peers need to first download the whole backup object from remote machines, then generate new encoded fragments and upload them to available peers. This problem has fostered the design of efficient coding schemes to mitigate the excessive network traffic caused by the maintenance operation (see e.g. [2], [3]). In a backup application, the maintenance phase is less critical: the data owner can generate new encoded fragments using the local copy of the data, with no download required. Restore Phase: In the unfortunate case of a crash, the data owner initiates the restore phase. A peer contacts the remote machines holding encoded fragments, downloads at least k of them, and reconstructs the original backup data. Again, a scheduling policy drives the process. Since the ability to successfully restore data upon a crash is the ultimate goal of any backup system, in our application the restore traffic receives higher priority than the backup and maintenance traffic. A.
Performance Metrics We characterize the system performance in terms of the amount of time required to complete the backup and the restore phases, labelled time to backup (TTB) and time to restore (TTR). In the following Sections, we use baselines for backup and restore operations which lower-bound TTB and TTR. Let us assume an ideal storage system with unlimited capacity and uninterrupted online time that backs up user data. In this case, TTB and TTR only depend on the backup object size and on the bandwidth and availability of the data owner. We label these ideal values minTTB and minTTR, and we define them formally in Sec. III. Additionally, we consider the data loss probability, which accounts for the probability that a data owner is unable to restore backup data. A P2P backup application may exact a high toll in terms of peer resources, including storage and bandwidth. In this work we gloss over metrics of the burden on individual peers and the network, considering a scenario in which the resources of peers are lost if left unused. B. Availability Traces The online behavior of users, i.e., their patterns of connection and disconnection over time, is difficult to capture analytically. In this work we perform our evaluations on a real application trace that exhibits both heterogeneity and correlated user behavior. Our traces capture user availability, in terms of login/logoff events, from an instant messaging (IM) server in Italy for a duration of 3 months. We argue that the behavior of regular IM users constitutes a representative case study. Indeed, for both an IM and an online backup application, users are generally signed in for as long as their machine is connected to the Internet. In this work we only consider users that are online for an average of at least four hours per day, as done in the Wuala online storage application. Once this filter is applied, we obtain the trace of 376 users.
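As an illustration, the four-hours-per-day filter can be applied to a list of (login, logoff) timestamps as follows. This is a minimal sketch under our own assumptions about the event format (hours since trace start); it is not the authors' code.

```python
def average_daily_online_hours(sessions, trace_days):
    """sessions: list of (login, logoff) timestamps, in hours since trace start."""
    online = sum(logoff - login for login, logoff in sessions)
    return online / trace_days

def filter_users(traces, trace_days, min_hours=4.0):
    """Keep only users online for at least min_hours per day on average."""
    return {user: s for user, s in traces.items()
            if average_daily_online_hours(s, trace_days) >= min_hours}
```

A user online for 10 hours over a two-day trace (5 hours/day) passes the filter; one online for 3 hours (1.5 hours/day) does not.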
User availabilities are strongly correlated, in the sense that many users connect or disconnect around the same time. As shown in Fig. 1(a), there are strong differences between the number of users connected during day and night and between workdays and weekends. Most users are online for less than 40% of the trace, while some of them are almost always connected ( Fig. 1(b)). III. THE SCHEDULING PROBLEM Scheduling data transfers between peers is an important operation that affects the time required to complete a backup or a restore task, especially in a system involving unreliable machines with unpredictable online patterns. Because of churn, a node might not be able to find online nodes to exchange data with: hence, TTB and TTR can grow due to idle periods of time. Unexpected node disconnections require a method to handle partial fragments, which can be discarded or resumed. Moreover, the redundancy rate used to cope with failures and unavailability may decrease system performance. Finally, the available bandwidth between peers involved in a data transfer, which may be shared due to parallel transmissions, is another cause for slow backup and restore operations. In this Section, we focus on the implications of churn alone. We simplify the scheduling problem by assuming the redundancy factor to be a given input parameter, and neglecting the possibility of congestion due to several different backup, restore or maintenance processes interfering. Furthermore, we do not consider interrupted fragment transfers. In Section IV, we define an adaptive scheme to compute the redundancy rate applied to a backup operation and in Section V we relax all other assumptions. We now define a reference scenario to bound TTB and TTR. Consider an ideal storage system (e.g. a cloud service) with unbounded bandwidth and 100% availability. 
A peer i with upload and download bandwidth u_i and d_i starting the backup of an object of size o at time t completes its backup at time t′, after having spent o/u_i time online. Analogously, i restores a backup object of the same size at t′′ after having spent o/d_i time online. We define minTTB(i, t) = t′ − t and minTTR(i, t) = t′′ − t. We use these reference values throughout the paper to compare the relative performance of our P2P application against that of such an ideal system. Because we neglect congestion issues, we can focus on a backup/restore operation as seen from a single peer in the system. Let us consider a generic peer p_0 and I remote peers p_1, . . . , p_I used to store p_0's data. We assume time to be divided into time-slots of fixed length. Let a_{i,t} be an indicator variable such that a_{i,t} = 1 if and only if p_i is online at time t. Each peer i has integer upload and download capacities of respectively u_i and d_i fragments per time-slot. We now proceed with a series of definitions used to formalize the scheduling problem. Definition 1: A backup schedule is a set of (i, t) tuples representing the decision of uploading a fragment from p_0 to peer p_i, where i ∈ {1 . . . I}, at time-slot t. A valid backup schedule S satisfies the following properties: 1) ∀t : |{i : (i, t) ∈ S}| ≤ u_0: no more than u_0 fragments per time-slot can be uploaded. 2) ∀(i, t) ∈ S : a_{i,t} = a_{0,t} = 1: fragments are transferred only between online peers. 3) for any two distinct (i, t), (j, u) ∈ S : i ≠ j: no two fragments are stored on the same peer. Definition 2: A restore schedule is a set of (i, t) tuples representing the decision of downloading a fragment from a set of remote peers p_i ∈ P at time t, where P is the set of storage peers that received a fragment during the backup phase. A valid restore schedule S satisfies the following properties: 1) ∀t : |{i : (i, t) ∈ S}| ≤ d_0: no more than d_0 fragments per time-slot can be recovered. 2) ∀(i, t) ∈ S : a_{i,t} = a_{0,t} = 1.
3) for any two distinct (i, t), (j, u) ∈ S : i ≠ j. 4) ∀(i, t) ∈ S : p_i ∈ P: fragments can only be retrieved from storage peers. Definition 3: The completion time C of a schedule S is the last time-slot in which a transfer is performed, that is: C(S) = max{t : (i, t) ∈ S}. In the following, we first consider a full information setting, and show how to compute an optimal schedule which minimizes completion time, provided that the online behavior of peers is known a priori. Then, we compare optimal scheduling to a randomized policy that needs no knowledge of future peer uptime; via a numeric analysis, we show the conditions under which a randomized, uninformed approach achieves performance comparable to that of an optimal schedule. A. Full Information Setting We cast the problem of finding the optimal schedule for both backup and restore operations as finding the minimum completion time to transfer a given number x of fragments. For backups, x corresponds to the number n of redundant encoded fragments; for restores, x equals the number k of original fragments. We show that this problem can be reduced to finding the maximum number of fragments that can be transferred within a given time T. We then use a max-flow formulation and show that existing algorithms can solve the original problem in polynomial time. Definition 4: An optimal schedule to backup/restore x fragments is one that achieves the minimum completion time to transfer at least x fragments. Let S be the set of all valid schedules; the minimum completion time is: O(x) = min{C(S) : S ∈ S ∧ |S| ≥ x}. (1) The following proposition shows that the optimal completion time can be obtained by computing the maximum number of fragments that can be transferred in T time-slots.
Proposition 1: Let S be the set of all valid schedules and F(t) be the function denoting the maximum number of fragments that can be transferred within time-slot t, that is: F(t) = max{|S| : S ∈ S ∧ C(S) ≤ t}. (2) The optimal completion time is: O(x) = min{t : F(t) ≥ x}. Proof: Let t_1 = O(x) and t_2 = min{t : F(t) ≥ x}. • t_1 ≥ t_2. By Eq. 1, an S_1 ∈ S exists such that C(S_1) = t_1 and |S_1| ≥ x, implying that F(t_1) ≥ x. Therefore, t_1 ≥ min{t : F(t) ≥ x} = t_2. • t_1 ≤ t_2. By Eq. 2, an S_2 exists such that C(S_2) = t_2 and |S_2| ≥ x. This directly implies that t_1 = O(x) ≤ t_2. We can now iteratively compute F(t) for growing values of t; the above Proposition guarantees that the first value T that satisfies F(T) ≥ x will be the desired result. We now focus on a single instance of the problem of finding the maximum number of fragments F(T) that can be transferred within time-slot T, and show that it can be encoded as a max-flow problem on a flow network built as follows. First, we create a bipartite directed graph G′ = (V′, E′) where V′ = T ∪ P; the elements of T = {t_i : i ∈ 1 . . . T} represent time-slots, the elements of P = {p_i : i ∈ 1 . . . I} represent remote peers (only storage nodes for restores). An edge connects a time-slot to a peer if that peer is online during that particular time-slot: E′ = {(t_i, p_j) : t_i ∈ T ∧ p_j ∈ P ∧ a_{j,i} = 1}. Source s and sink t nodes complete the bipartite graph G′ and create a flow network G = (V, E). The source is connected to all the time-slots during which the data owner p_0 is online; all peers are connected to the sink. The capacities on the edges are defined as follows: edges from the source to time-slots have capacity u_0 or d_0 (respectively, for backup and restore operations); edges between time-slots and peers have capacity d_i or u_i (respectively, for backup and restore operations); finally, edges between peers and the sink have capacity m.
Note that in this work we assume individual fragments to be uploaded to distinct peers, hence m = 1. To simplify the presentation, we assume integer capacities u_k = d_k = 1 ∀k ∈ [0, I]. Fig. 2 illustrates an example of the whole procedure described above, for the case of a backup operation. Fig. 2(a) shows the online behavior for time-slots t_1, . . . , t_8 of the data owner and the remote peers (p_1, p_2, p_3) that can be selected as remote locations to backup data. The optimal schedule problem amounts to deciding which remote peer should be awarded a time-slot to transfer backup fragments, so that the operation can be completed within the shortest time. This problem is encoded in the graph of Fig. 2(b). Time-slots and remote peers are represented by the nodes of the inner bipartite graph. An edge of capacity 1 connects a time-slot to the set of online peers in that time-slot, as derived from Fig. 2(a). The source node has an edge of capacity u_0 = 1 to every time-slot in which the data owner is online (in the figure, t_4, t_5 are shaded as a reminder that p_0 is offline): this guarantees that only 1 fragment per time-slot can be transferred. The sink node has an incident edge with capacity m = 1 from every remote peer. In the particular case of the example, the smallest value of t ensuring F(t) ≥ 3 is 3, corresponding to a flow graph that contains only the t_1, t_2, t_3 time-slot nodes. The resulting optimal schedule corresponds to the thick edges in Fig. 2(b). For a flow network with V nodes and E edges, the max-flow can be computed with time complexity O(VE log(V²/E)) [4]. In our case, with p nodes and an optimal solution of t time-slots, V is O(p + t) and E is O(pt). The complexity of an instance of the algorithm is thus O(pt(p log(p/t) + t log(t/p))). The original problem, i.e., finding an optimal schedule that minimizes the time to transfer x fragments, can be solved by performing O(log t) max-flow computations.
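The flow-network construction above can be sketched in a few lines of Python. This is an illustrative re-implementation, not the authors' code: the function names and the list-based availability encoding are our own, the max-flow is computed with Edmonds-Karp (BFS augmenting paths), and a plain linear scan over T stands in for the faster O(log t) search.

```python
from collections import deque

def max_fragments(T, owner_online, peer_online, u0=1, m=1):
    """F(T): maximum number of fragments transferable in the first T
    time-slots, as a max-flow on source -> time-slots -> peers -> sink."""
    I = len(peer_online)
    src, sink = 0, T + I + 1  # nodes: source, slots 1..T, peers T+1..T+I, sink
    n_nodes = T + I + 2
    cap = [[0] * n_nodes for _ in range(n_nodes)]
    for t in range(T):
        if owner_online[t]:
            cap[src][1 + t] = u0          # owner sends at most u0 fragments/slot
        for i in range(I):
            if peer_online[i][t]:
                cap[1 + t][T + 1 + i] = 1  # peer i reachable in slot t
    for i in range(I):
        cap[T + 1 + i][sink] = m           # each peer stores at most m fragments
    flow = 0
    while True:
        # BFS for a shortest augmenting path (Edmonds-Karp)
        parent = [-1] * n_nodes
        parent[src] = src
        q = deque([src])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n_nodes):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            return flow
        v, bottleneck = sink, float("inf")
        while v != src:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = sink
        while v != src:  # update residual capacities along the path
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

def optimal_completion(x, owner_online, peer_online, u0=1):
    """O(x): smallest T with F(T) >= x (linear scan, for clarity)."""
    for T in range(1, len(owner_online) + 1):
        if max_fragments(T, owner_online, peer_online, u0) >= x:
            return T
    return None  # the backup cannot complete within the trace
```

On a toy instance with three peers and an always-online owner, F(T) grows as peers become reachable, and O(x) returns the first T at which F(T) reaches x.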
In fact, an upper bound for the optimal completion time can be found with O(log t) instances of the max-flow algorithm by doubling the value of T at each step; the optimal value can then be obtained, again in O(log t) time, by using binary search. The computational complexity of determining an optimal schedule in a full information framework is thus O(pt log t (p log(p/t) + t log(t/p))). B. Random Scheduling In practice, assuming complete knowledge of peers' online behavior is not realistic. We introduce a randomized scheduling policy which only requires knowing which peers are online at the time of the scheduling decision. In Sec. III-C, we compare optimal and randomized scheduling using real traces. For backup operations, in each time-slot, fragments are uploaded from the data owner to no more than u_0 remote peers chosen at random among those that are currently online and that did not receive a fragment in previous time-slots. This satisfies Def. 1. For restore operations, in each time-slot, d_0 remote peers in the set P are randomly chosen among those that are currently online, and data is transferred back to the data owner. This satisfies Def. 2. We now use Fig. 2 to illustrate a possible outcome of the randomized schedule defined here and compare it to the optimal schedule computed using the max-flow formalization. We focus on the backup operation of x = 3 fragments carried out by the data owner p_0. In Fig. 2(a), the data owner may randomly select p_1 to be the recipient of the first fragment in time-slot t_1. Since we assume that at most m = 1 fragment can be stored on each distinct peer, this choice implies that time-slot t_2 is "wasted". In time-slot t_3 the data owner has no choice but to store data on peer p_3. Only in time-slot t_7 is the backup process complete, when the last fragment is uploaded to peer p_2. Hence, this randomized schedule writes as (p_1, t_1); (p_3, t_3); (p_2, t_7).
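The randomized backup policy can be sketched as follows (our own illustrative code; the availability encoding matches the max-flow sketch, and the `rng` parameter is an assumption for reproducibility):

```python
import random

def randomized_backup_schedule(x, owner_online, peer_online, u0=1, rng=None):
    """In each slot, upload up to u0 fragments to peers chosen uniformly at
    random among those online that hold no fragment yet (satisfies Def. 1)."""
    rng = rng or random.Random(0)
    used, schedule = set(), []
    for t in range(len(owner_online)):
        if len(schedule) >= x:
            break
        if not owner_online[t]:
            continue  # fragments move only when the data owner is online
        candidates = [i for i in range(len(peer_online))
                      if peer_online[i][t] and i not in used]
        for i in rng.sample(candidates, min(u0, len(candidates))):
            if len(schedule) >= x:
                break
            used.add(i)
            schedule.append((i, t))
    return schedule
```

With everyone always online and u_0 = 1, the schedule trivially completes in x consecutive slots; the policy only pays a penalty, as in the Fig. 2 example, when random choices "waste" slots.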
The optimal schedule is obtained by computing the max-flow on the flow network in Fig. 2(b) (thick edges in the figure), and writes as (p_2, t_1); (p_1, t_2); (p_3, t_3). The backup operation only requires 3 time-slots to complete. C. Numerical Analysis Here, we take a numerical perspective and compare optimal and randomized scheduling in terms of TTB and TTR. We focus on a single data owner p_0 involved in a backup operation. The input to the scheduling problem is the availability trace described in Sec. II-B, starting the backup at a random moment; we set the duration of a time-slot to one hour. Let u_0 = 1 fragment per time-slot be the upload rate of p_0. We report results for x ∈ {40, 60, 80} backup fragments, and vary the number of randomly chosen remote peers so that I ∈ {1.1x, 1.2x, . . . , 2x}. We obtained each data point by averaging 1,000 runs of the experiment; furthermore, for each of those runs, we averaged the completion times of 1,000 random schedules in the same settings. Fig. 3 illustrates the ratio between the TTB achieved respectively by optimal and randomized scheduling, normalized to the ideal backup time minTTB. We observe that both optimal and randomized scheduling approach minTTB when the number of remote peers available to store backup fragments increases: a large system improves transmission opportunities, and TTB approaches the ideal lower bound. However, when the number of backup fragments grows, which is a consequence of higher redundancy rates, randomized scheduling requires a larger pool of remote machines to approach the performance of the optimal scheduling. We also note that the heterogeneous and correlated behavior of users in the availability trace results in "idle" time-slots in which neither optimal nor randomized scheduling can transfer data. This very same evaluation can be used to evaluate a restore operation, even if the parameters acquire a different meaning.
Fig. 3. Numerical analysis: a comparison between optimal and randomized scheduling, using real availability traces. In this case, the number x of fragments that need to be transferred is the number of original fragments k, and the number of remote peers I corresponds to the number of encoded fragments n. For restores, as the redundancy rate n/k = I/x grows, restore operations become more efficient. We conclude that randomized scheduling is a good choice for a P2P backup application, provided that: • to have efficient backups, the ratio between the number of nodes in the system and the number of fragments to store is not very close to one; • to have efficient restores, the redundancy rate is not very close to one. As a heuristic threshold, in our analysis we find that a value of I/x = 1.5 is sufficient to complete backup and restore within a tolerable (around 10%) deviation from minTTB or minTTR, respectively. In the following, we therefore use randomized scheduling and make sure that this ratio is reached, in order to ensure that scheduling does not impose too harsh a penalty on TTB and TTR. Birk and Kol [5] analyzed random backup scheduling by modeling peer uptime as a Markovian process. Albeit quantitatively different due to the absence of diurnal and weekly patterns in their model, their study reached a conclusion that is analogous to ours: in backups, the completion time of random scheduling converges to the optimal value as the system size grows. IV. REDUNDANCY POLICY In the literature, the redundancy rate is generally chosen a priori to ensure what we term prompt data availability. Given a system with average availability a, a target data availability t, and assuming the availability of each individual peer to be an independent random variable with probability a, a system-wide redundancy rate is computed as follows.
The total number n of redundant fragments required to meet the target t, when k original fragments constitute the data to backup, is computed as [6]: min{n ∈ N : Σ_{i=k}^{n} C(n, i) a^i (1 − a)^{n−i} ≥ t}. (3) We label this method fixed-redundancy, and use it in the following as a baseline approach. Ensuring prompt data availability is not our goal, since peers only retrieve their data upon (hopefully rare) crash events. Data downloads correspond to restore operations, which require a long time to complete because of the sheer size of backup data. Hence, we approach the design of our redundancy policy by taking into account the tradeoffs that a backup application has to face. On the one hand, low redundancy improves the aggregate storage capacity of the system, TTB decreases, and maintenance costs drop. On the other hand, two factors discourage selecting excessively low redundancy rates. First, TTR increases, as fewer peers will be online to serve fragments during data restores; second, there is a higher risk of data loss. Our redundancy policy operates as follows. During the backup phase, peers constantly estimate their TTR and the probability of losing data, and adjust the redundancy rate according to the characteristics of the remote peers that hold their data. In practice, data owners upload encoded fragments until the estimates of TTR and data loss probability are below an arbitrary threshold. When the threshold is crossed, the backup phase terminates. Note that TTB is generally several times longer than TTR. First, in the restore phase, peers are not likely to disconnect from the Internet. Second, most peers have asymmetric lines with fast downlink and slow uplink; third, backups require uploading redundant data while restores involve downloading an amount of data equivalent to the original backup object.
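The fixed-redundancy baseline of Eq. 3 can be evaluated directly; the sketch below (our own illustrative code, not the authors') finds the smallest n by linear search, since the left-hand side of Eq. 3 is non-decreasing in n.

```python
from math import comb

def backup_availability(n, k, a):
    """P(at least k of n fragments are on online peers), each peer
    independently online with probability a (left-hand side of Eq. 3)."""
    return sum(comb(n, i) * a**i * (1 - a)**(n - i) for i in range(k, n + 1))

def fixed_redundancy(k, a, t):
    """Smallest n meeting the target data availability t (Eq. 3)."""
    n = k
    while backup_availability(n, k, a) < t:
        n += 1
    return n
```

For instance, with k = 1, a = 0.5 and t = 0.9, the condition 1 − 0.5^n ≥ 0.9 first holds at n = 4.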
Because of this unbalance, we argue that it is reasonable to use a redundancy scheme that trades longer TTR (which affects only users that suffer a crash) for shorter TTB (which affects all users). We now delve into the details of how to approximate TTR and the data loss probability. A. Approximating TTR Similarly to the optimal scheduling problem, accurately predicting the TTR requires full knowledge of disk failure events and peer availability patterns. We obtain an estimate of the TTR with a heuristic approach; in Sec. V we show that our approximation is reasonable. We assume that a data owner p_0 remains online during the whole restore process. The TTR can be bounded for two reasons: i) the download bandwidth d_0 of the data owner is a bottleneck; ii) the upload rate of remote peers holding p_0's data is a bottleneck. Let us focus on the second case: we define the expected upload rate of a generic remote peer p_i holding a backup fragment of p_0 as the product a_i u_i of the average availability and the upload bandwidth of p_i. The data owner needs k fragments to recover the backup object: suppose these fragments are served by the k "fastest" remote peers. In this case, the "bottleneck" upload rate is that of the k-th peer p_j, the one with the smallest expected upload rate among them. If we consider l parallel downloads and a backup object of size o, a peer computes an estimate of the TTR as eTTR = max(o/d_0, o/(l a_j u_j)). (4) B. Approximating the Data Loss Probability Upon a crash, a peer with n fragments placed on remote peers can lose its data if more than n − k of them crash as well before the data is completely restored. Considering a delay w that can pass between the crash event and the beginning of the restore phase, we compute the data loss probability within a total delay of t = w + eTTR. We consider disk crashes to be memoryless events, with constant probability for any peer and at any time.
Disk lifetimes are thus exponentially distributed stochastic variables with a parametric average t̄: a peer crashes by time t with probability 1 − e^{−t/t̄}. The probability of data loss is then Σ_{i=n−k+1}^{n} C(n, i) (1 − e^{−t/t̄})^i (e^{−t/t̄})^{n−i}. (5) The data loss probability needs to be monitored with great care. In Fig. 4, we plot the probability of losing data as a function of the redundancy rate and the delay t. Here we set t̄ = 90 days and k = 64; when the time without maintenance is in the order of magnitude of weeks, even a small decrease in redundancy can increase the probability of data loss by several orders of magnitude. In summary, our redundancy policy triggers the end of the backup phase, and determines the redundancy rate applied to a backup object. Since we trade longer TTR for shorter TTB, our scheme ensures that data redundancy is enough to keep the data loss probability small, and keeps TTR under a certain value. Finally, we remark that our approximation techniques require knowing the uplink capacity and the average availability of remote peers. While a decentralized approach to resource monitoring is an appealing research subject, it is common practice (e.g. in Wuala) to rely on a centralized infrastructure to monitor peer resources. V. SYSTEM SIMULATION We proceed with a trace-driven system simulation, considering all the factors identified in Sec. III: churn, correlated uptime, peer bandwidth, congestion, and fragment granularity. A. Simulation Settings Our simulation covers three months, using the availability traces described in Sec. II-B, with the exception that peers remain online during restores. Uplink capacities of peers are obtained by sampling a real bandwidth distribution measured at more than 300,000 unique Internet hosts for a 48-hour period from roughly 3,500 distinct ASes across 160 countries [7]. These values have a highly skewed distribution, with a median of 77 kBps and a mean of 428 kBps.
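The two estimates driving the adaptive policy (Eqs. 4 and 5) can be sketched as follows. This is our own illustrative code: the function names are assumptions, `peers` is a hypothetical list of (availability, upload bandwidth) pairs, and `t_bar` denotes the average disk lifetime t̄.

```python
from math import comb, exp

def estimated_ttr(o, d0, peers, k, l=1):
    """eTTR (Eq. 4): the bottleneck is the k-th largest expected upload
    rate a_j * u_j among the peers holding fragments."""
    rates = sorted((a * u for a, u in peers), reverse=True)
    bottleneck = rates[k - 1]
    return max(o / d0, o / (l * bottleneck))

def data_loss_probability(n, k, t, t_bar):
    """Eq. 5: probability that more than n - k of the n fragment holders
    crash within time t, with exponential lifetimes of mean t_bar."""
    p_crash = 1 - exp(-t / t_bar)
    return sum(comb(n, i) * p_crash**i * (1 - p_crash)**(n - i)
               for i in range(n - k + 1, n + 1))
```

As a sanity check, adding one encoded fragment (n → n + 1) at fixed k strictly lowers the loss probability, which is the lever the adaptive policy pulls.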
To represent typical asymmetric residential Internet lines, we assign to each peer a downlink speed equal to four times its uplink. Our adaptive redundancy policy uses the following parameters: we set the threshold for the estimated TTR to satisfy eTTR ≤ max(1 day, 2 · minTTR), and we keep the probability of data loss smaller than 10^−4, when w = 2 weeks is the maximum delay between crash and restore events (see Sec. IV-B). Each node has 10 GB of data to backup, and dedicates 50 GB of storage space to the application. The high ratio between these two values lets us disregard issues due to insufficient storage capacity (which is considered to be cheap) and focus on the subjects of our investigation, i.e., scheduling and redundancy. The fragment size f is set to 160 MB, resulting in k = 64 original fragments per backup object. We define peers' lifetimes to be exponentially distributed random variables with an expected value of 90 days (here we neglect the economics of the application, e.g. promoting user loyalty to the system; hence, we do not consider unanticipated user departures). After they crash, peers return online immediately and start their restore process; in Sec. VI, we also consider a delay between crash events and restore operations. As discussed in Sec. IV-B, we compare against a baseline redundancy policy that assigns a fixed redundancy rate. Here we set a target data availability of t = 0.99, and use the system-wide average availability a = 0.36 as computed from our availability traces. Hence, we obtain a value n = 228 and a redundancy rate n/k = 3.56. Our simulations involve 376 peers. This is sufficient to ensure that the performance of randomized scheduling is close to optimality (see Sec. III). For each set of parameters, the simulation results are obtained by averaging ten simulation runs. B. Results Fig. 5 shows the cumulative distribution function of minTTB and minTTR: these baseline values are deeply influenced by the bandwidth distribution we used, and their gap is justified by the asymmetry of the access bandwidth and the assumption that peers stay online during the restore process. We now verify the accuracy of our approximation of TTR, expressed as the ratio of estimated versus measured TTR. This ratio has a median of 0.92, with 10th and 90th percentiles of respectively 0.50 and 2.56. The values of TTR vary mostly due to the diurnal and weekly connectivity patterns of users in our traces, but in most cases the eTTR is a sensible rough estimate of the TTR. The adaptive policy pays off, with an average redundancy rate of 1.91 against a flat value of 3.56 for the baseline approach (Fig. 6(a)); the maintenance traffic decreases accordingly, and the system almost doubles its storage capacity. In addition, TTB is roughly halved with the adaptive scheme (Fig. 6(b)); a price for this is paid by crashed peers, which will have longer TTR (Fig. 6(c)). As we argued in Sec. IV, we think this loss is tolerable and well offset by the benefits of reduced redundancy. We observe tails where a minority of the nodes have very high TTB/minTTB and TTR/minTTR ratios. These are nodes with very high bandwidths and therefore low values of minTTB and minTTR (see Fig. 5); their backup and restore speeds are limited by the bandwidths of remote nodes, which are orders of magnitude smaller. These results certify that our adaptive scheme beneficially affects performance. However, lower redundancy might result in higher risks of losing data: in the following Section, we analyze this. VI.
DATA LOSS AND DELAYED RESTORE Our simulation settings put the system under exceptional stress: the peer crash rate is two orders of magnitude higher than what is reported for commodity hardware [8]. In such an adverse scenario, we study the likelihood and the causes of data loss, and their relation to the redundancy scheme. In addition, we discuss the implications of delayed responses to crashes, affecting both restore and maintenance operations. We consider the following scenarios: • Immediate response: Peers start restores as soon as they crash. Moreover, they immediately alert relevant peers to start their maintenance. • Delayed response: Crashed peers return online after a random delay. If this delay exceeds a timeout, peers suffering from fragment loss start their maintenance. • Delayed assisted response: After the above timeout, a third party intervenes to rescue crashed peers whose data is at risk, by maintaining it. In our simulations, delays are exponentially distributed random variables with an average of one week; the timeout value is one week as well. For performance reasons, assisted maintenance can be supported by an online storage provider, which is used as a temporary buffer. Here we assume a provider with 100% uptime, unlimited bandwidth and storage space: maintenance is triggered upon expiration of the timeout, conditioned on a data loss probability greater than 10^−4. In our experiments, due to the inflated peer crash rates, between 11.4% and 14.6% of crashed peers could not recover their data. In Table I, we focus on those peers. The majority of data loss events affected peers that crashed before they completed their backups, according to the redundancy policy (unfinished backups column). This can be due to two reasons: the backup process is inherently time-consuming, due to the availability and bandwidth of data owners; or the backup system is inefficient.
To differentiate between these two cases, we consider unavoidable data loss events (rightmost column in the table). If a peer crashes before minTTB, no online backup system could have saved the data. Data backup takes time: this simple fact alone accounts for far more data loss than all the limitations of a P2P approach. Users should worry more about completing their backup quickly than about the reliability of their peers. The difference in redundancy between the high rate used by the fixed baseline and the adaptive approach does not significantly impact the data loss rate, except in the case of non-assisted delayed response. Assisted maintenance is an effective way to counter this effect. In Fig. 7 we show the costs of assisted repairs in terms of data traffic. Given that the prices of storage services are highly asymmetric (footnote 4), we only consider the outbound traffic, from provider to peers. Data volumes are expressed as fractions of the total size of backup objects in the system. There is a striking difference between the adaptive and fixed redundancy schemes: higher redundancy results in fewer emergency situations in which the server has to step in. The amount of data stored on the server has a peak load of less than 2.5% of the total backup size: the assisted repairs are quick, therefore only a small fraction of the peers need assistance simultaneously. VIII. CONCLUSION The P2P paradigm applied to backup applications is a compelling alternative to centralized online solutions, which become costly for long-term storage. In this work, we revisited P2P backup and argued that such an application is viable. Because the online behavior of users is unpredictable and, at large scale, crashes and failures are the norm rather than the exception, we showed that scheduling and redundancy policies are paramount to achieve short backup and restore times. 
We gave a novel formalization of optimal scheduling and showed that, with full information, a problem that may appear combinatorial in nature can actually be solved efficiently by reducing it to a maximal flow problem. Without full information, optimal scheduling is infeasible; however, we showed that as the system size grows, the gap between randomized and optimal scheduling policies diminishes rapidly. Furthermore, we studied an adaptive scheme that strives to keep data redundancy low, which implies shorter backup times than a state-of-the-art approach that uses a system-wide, fixed redundancy rate. This comes at the expense of increased restore times, which we argued to be a reasonable price to pay, especially in light of our study on the probability of data loss. In fact, we determined that the vast majority of data loss episodes are due to incomplete backups. Our experiments illustrated that such events are unavoidable, as they are determined by the limitations of data owners alone: no online storage system could have avoided such unfortunate events. We conclude that short backup times are crucial, far more so than the reliability of the P2P system itself. As such, the crux of a P2P backup application is to design mechanisms that optimize this metric. Our research agenda includes the design and implementation of a fully fledged prototype of a P2P backup application. Additionally, we will extend the parameter space of our study to include the natural heterogeneity of user demand in terms of storage requirements. To do so, we will collect measurements from both existing online storage systems and a controlled deployment of our prototype implementation.
6,917
1009.1344
1920329887
An online backup system should be quick and reliable in both saving and restoring users' data. To do so in a peer-to-peer implementation, data transfer scheduling and the amount of redundancy must be chosen wisely. We formalize the problem of exchanging multiple pieces of data with intermittently available peers, and we show that random scheduling completes transfers nearly optimally in terms of duration as long as the system is sufficiently large. Moreover, we propose an adaptive redundancy scheme that improves performance and decreases resource usage while keeping the risks of data loss low. Extensive simulations show that our techniques are effective in a realistic trace-driven scenario with heterogeneous bandwidth.
A recurrent problem for P2P applications is creating incentives to encourage nodes to contribute more resources. This can be done via reputation systems @cite_4 or virtual currency @cite_8 . Specifically for storage systems, an easy and efficient solution is segregating nodes into sub-networks with roughly homogeneous characteristics such as uptime and storage space @cite_11 @cite_1 .
{ "abstract": [ "", "Peer-to-peer backup systems provide mechanisms that allow participants to store their data cooperatively. When the proportion of unstable peers grows, these systems need to increase redundancy to maintain data availability. This redundancy could be reduced by storing data to more stable peers. In consequence, network traffic would be reduced and the global network's storage capacity would be increased. In this work we present a novel distributed backup system that improves data availability and network fairness. Using only local information our system creates groups of peers with similar stabilities. These groups are used to choose partners and to exchange data symmetrically with them. With this technique we guarantee that all peers obtain a data availability proportional to their stability and that all peers receive the same backup capacity than the disk space that they share.", "Peer-to-peer file-sharing networks are currently receiving much attention as a means of sharing and distributing information. However, as recent experience shows, the anonymous, open nature of these networks offers an almost ideal environment for the spread of self-replicating inauthentic files.We describe an algorithm to decrease the number of downloads of inauthentic files in a peer-to-peer file-sharing network that assigns each peer a unique global trust value, based on the peer's history of uploads. We present a distributed and secure method to compute global trust values, based on Power iteration. 
By having peers use these global trust values to choose the peers from whom they download, the network effectively identifies malicious peers and isolates them from the network.In simulations, this reputation system, called EigenTrust, has been shown to significantly decrease the number of inauthentic files on the network, even under a variety of conditions where malicious peers cooperate in an attempt to deliberately subvert the system.", "Peer-to-peer systems are typically designed around the assumption that all peers will willingly contribute resources to a global pool. They thus suffer from freeloaders, that is, participants who consume many more resources than they contribute. In this paper, we propose a general economic framework for avoiding freeloaders in peer-to-peer systems. Our system works by keeping track of the resource consumption and resource contribution of each participant. The overall standing of each participant in the system is represented by a single scalar value, called their karma. A set of nodes, called a bankset, keeps track of each node’s karma, increasing it as resources are contributed, and decreasing it as they are consumed. Our framework is resistant to malicious attempts by the resource provider, consumer, and a fraction of the members of the bank set. We illustrate the application of this framework to a peer-to-peer filesharing" ], "cite_N": [ "@cite_1", "@cite_11", "@cite_4", "@cite_8" ], "mid": [ "", "2110727285", "2156523427", "136591830" ] }
On Scheduling and Redundancy for P2P Backup
The advent of cloud computing as a new paradigm enabling service providers to deploy cost-effective solutions has favored the development of a range of new services, including online storage applications. Due to the economy of scale of cloud-based storage services, the costs incurred by end-users to hand over their data to a remote storage location in the Internet have approached the cost of ownership of commodity storage devices. As such, online storage applications spare users most of the time-consuming nuisance of data backup: user interaction is minimal, and in case of data loss due to an accident, restoring the original data is a seamless operation. However, the long-term storage costs that are typical of a backup application may easily exceed those of traditional approaches to data backup. Additionally, while data availability is a key feature that large-scale data-center deployments guarantee, its durability is questionable, as reported recently [1]. For these reasons, peer-to-peer (P2P) storage systems are an alternative to cloud-based solutions. Storage costs are merely those of a commodity storage device, which is shared (together with some bandwidth resources) with a number of remote Internet users to form a distributed storage system. Such applications optimize the latency of individual file accesses: indeed, users hand over their data to the P2P system, which is used as a replacement for a local hard drive. In such a scenario, low access latency is difficult to achieve: the online behavior of users is unpredictable and, at large scale, crashes and failures are the norm rather than the exception. As a consequence, storage space is sacrificed for low access latency: a P2P application stores large amounts of redundant data to cope with such unfavorable events. In this work we study a particular case of online storage: P2P backup applications. 
Data backup involves the bulk transfer of potentially large quantities of data, both during regular data backups and in case of data loss. As a consequence, low access latency is not an issue, while short backup and restore times are a more reasonable goal. Given these considerations, here we seek to optimize backup and restore times, while guaranteeing that data loss remains an unlikely event. There are two main design choices that affect these metrics: scheduling, i.e. deciding how to allocate data transfers between peers, and redundancy, i.e. the amount of data that must reside in the P2P system for a backup operation to be considered complete and safe. The endeavor of this work is to study and evaluate these two intertwined aspects. First, we describe in detail our application scenario (Sec. II), and show why the assumptions underlying a backup application can simplify many problems addressed in the literature. We then define the problem of scheduling in a full-knowledge setting, and show that it can be solved in polynomial time by reducing it to a maximal flow problem. Full knowledge of future peer uptime is obviously an unrealistic assumption: thus, we show that a randomized approach to scheduling yields near-optimal results when the system scale is large, and we corroborate our findings using real availability traces from an instant messaging application (Sec. III). We then move on to study a novel redundancy policy that, rather than focusing on short-term data availability, targets short data restore times. As such, our method alleviates the storage burden of large amounts of redundant data on client machines (Sec. IV). With a trace-driven simulation of a complete P2P backup system, we show that our technique is viable in practical scenarios and illustrate its benefits in terms of increased performance (Sec. V). We conclude by studying a range of data maintenance policies when restore operations may undergo some natural delays. 
For example, detecting a faulty external hard drive may not be immediate, and obtaining new equipment after a crash may require some time. We show that an "assisted" approach to data repair (which involves a cloud-based storage service) can significantly reduce the probability of data loss, at an affordable cost (Sec. VI). II. APPLICATION SCENARIO In this work, similarly to many online backup applications (e.g., Dropbox 1 ), we assume that users specify one local folder containing important data to back up. We also assume that backup data remains available locally to peers. This is an important trait that distinguishes backup from storage applications, in which data is only stored remotely. Backup data consists of an opaque object, possibly representing an encrypted archive of changes to a set of files, that we term the backup object. In the spirit of incremental backups, we consider that each backup object should be kept in the system indefinitely. Consolidation and deletion of obsolete backups are not taken into account in this work. A backup object of size o is split into k original fragments of a fixed size f, with k = o/f. Since backup data is stored on unreliable machines characterized by unpredictable online behavior, the original k fragments are encoded using erasure coding (e.g., Reed-Solomon). This creates n encoded fragments of size f, of which any k are sufficient to recover the original data. The redundancy rate is defined as r = n/k. Here we assume that encoded fragments reside on distinct remote peers, which prevents a single disk failure from causing the loss of multiple fragments. Backup Phase: The backup phase involves a data owner and a set of remote peers that eventually store encoded fragments for the data owner. We assume that any peer in the system can collect a list of remote peers with available storage space: this can be achieved by using known techniques, e.g. 
a centralized "tracker" or a decentralized data structure such as a distributed hash table. Data backup requires a scheduling policy that drives the choice of where and when to upload encoded fragments to remote peers. Moreover, a redundancy policy determines when the data is safe, which completes the backup operation. Maintenance Phase: Once the backup phase is completed and encoded fragments reside on remote peers, the maintenance phase begins. Peer crashes and departures can cause the loss of some encoded fragments; during the maintenance phase, peers detect such losses and generate new encoded fragments to restore a redundancy level at which the backup is safe again. For a generic P2P storage system, in which encoded fragments only reside in the network and peers do not keep a local copy of their data, the maintenance phase is critical. Indeed, peers need to first download the whole backup object from remote machines, then generate new encoded fragments and upload them to available peers. This problem has fostered the design of efficient coding schemes to mitigate the excessive network traffic caused by the maintenance operation (see e.g. [2], [3]). In a backup application, the maintenance phase is less critical: the data owner can generate new encoded fragments using the local copy of the data, with no download required. Restore Phase: In the unfortunate case of a crash, the data owner initiates the restore phase. The peer contacts the remote machines holding encoded fragments, downloads at least k of them, and reconstructs the original backup data. Again, a scheduling policy drives the process. Since the ability to successfully restore data upon a crash is the ultimate goal of any backup system, in our application the restore traffic receives higher priority than the backup and maintenance traffic. A. 
Performance Metrics We characterize the system performance in terms of the amount of time required to complete the backup and restore phases, labelled time to backup (TTB) and time to restore (TTR). In the following Sections, we use baselines for backup and restore operations which bound both TTB and TTR. Let us assume an ideal storage system with unlimited capacity and uninterrupted online time that backs up user data. In this case, TTB and TTR only depend on the backup object size and on the bandwidth and availability of the data owner. We label these ideal values minTTB and minTTR, and we define them formally in Sec. III. Additionally, we consider the data loss probability, which accounts for the probability of a data owner being unable to restore backup data. A P2P backup application may exact a high toll in terms of peer resources, including storage and bandwidth. In this work we gloss over metrics of the burden on individual peers and the network, considering a scenario in which the resources of peers are lost if left unused. B. Availability Traces The online behavior of users, i.e., their patterns of connection and disconnection over time, is difficult to capture analytically. In this work we perform our evaluations on a real application trace that exhibits both heterogeneity and correlated user behavior. Our traces capture user availability, in terms of login/logoff events, from an instant messaging (IM) server in Italy for a duration of 3 months. We argue that the behavior of regular IM users constitutes a representative case study. Indeed, for both an IM and an online backup application, users are generally signed in for as long as their machine is connected to the Internet. In this work we only consider users that are online for an average of at least four hours per day, as done in the Wuala online storage application (footnote 2). Once this filter is applied, we obtain the trace of 376 users. 
User availabilities are strongly correlated, in the sense that many users connect or disconnect around the same time. As shown in Fig. 1(a), there are strong differences between the number of users connected during day and night and between workdays and weekends. Most users are online for less than 40% of the trace, while some of them are almost always connected ( Fig. 1(b)). III. THE SCHEDULING PROBLEM Scheduling data transfers between peers is an important operation that affects the time required to complete a backup or a restore task, especially in a system involving unreliable machines with unpredictable online patterns. Because of churn, a node might not be able to find online nodes to exchange data with: hence, TTB and TTR can grow due to idle periods of time. Unexpected node disconnections require a method to handle partial fragments, which can be discarded or resumed. Moreover, the redundancy rate used to cope with failures and unavailability may decrease system performance. Finally, the available bandwidth between peers involved in a data transfer, which may be shared due to parallel transmissions, is another cause for slow backup and restore operations. In this Section, we focus on the implications of churn alone. We simplify the scheduling problem by assuming the redundancy factor to be a given input parameter, and neglecting the possibility of congestion due to several different backup, restore or maintenance processes interfering. Furthermore, we do not consider interrupted fragment transfers. In Section IV, we define an adaptive scheme to compute the redundancy rate applied to a backup operation and in Section V we relax all other assumptions. We now define a reference scenario to bound TTB and TTR. Consider an ideal storage system (e.g. a cloud service) with unbounded bandwidth and 100% availability. 
A peer i with upload and download bandwidth u_i and d_i starting the backup of an object of size o at time t completes its backup at time t', after having spent o/u_i time online. Analogously, i restores a backup object of the same size at t'' after having spent o/d_i time online. We define minTTB(i, t) = t' − t and minTTR(i, t) = t'' − t. We use these reference values throughout the paper to compare the relative performance of our P2P application versus that of such an ideal system. Because we neglect congestion issues, we can focus on a backup/restore operation as seen from a single peer in the system. Let us consider a generic peer p_0 and I remote peers p_1, ..., p_I used to store p_0's data. We assume time to be divided into time-slots of fixed length. Let a_{i,t} be an indicator variable such that a_{i,t} = 1 if and only if p_i is online at time t. Each peer i has integer upload and download capacity of respectively u_i and d_i fragments per time-slot. We now proceed with a series of definitions used to formalize the scheduling problem. Definition 1: A backup schedule is a set of (i, t) tuples representing the decision of uploading a fragment from p_0 to peer p_i, where i ∈ {1 ... I}, at time-slot t. A valid backup schedule S satisfies the following properties: 1) ∀t : |{i : (i, t) ∈ S}| ≤ u_0: no more than u_0 fragments per time-slot can be uploaded. 2) ∀(i, t) ∈ S : a_{i,t} = a_{0,t} = 1: fragments are transferred only between online peers. 3) ∀(i, t), (j, u) ∈ S, (i, t) ≠ (j, u) : i ≠ j: no two fragments are stored on the same peer. Definition 2: A restore schedule is a set of (i, t) tuples representing the decision of downloading a fragment from a set of remote peers p_i ∈ P at time t, where P is the set of storage peers that received a fragment during the backup phase. A valid restore schedule S satisfies the following properties: 1) ∀t : |{i : (i, t) ∈ S}| ≤ d_0: no more than d_0 fragments per time-slot can be recovered. 2) ∀(i, t) ∈ S : a_{i,t} = a_{0,t} = 1. 
3) ∀(i, t), (j, u) ∈ S, (i, t) ≠ (j, u) : i ≠ j. 4) ∀(i, t) ∈ S : p_i ∈ P: fragments can only be retrieved from storage peers. Definition 3: The completion time C of a schedule S is the last time-slot in which a transfer is performed, that is: C(S) = max{t : (i, t) ∈ S}. In the following, we first consider a full information setting, and show how to compute an optimal schedule which minimizes completion time provided that the online behavior of peers is known a priori. Then, we compare optimal scheduling to a randomized policy that needs no knowledge of future peer uptime; via a numeric analysis, we show the conditions under which a randomized, uninformed approach achieves performance comparable to that of an optimal schedule. A. Full Information Setting We cast the problem of finding the optimal schedule for both backup and restore operations as finding the minimum completion time to transfer a given number x of fragments. For backups, x corresponds to the number n of redundant encoded fragments; for restores, x equals the number k of original fragments. We show that this problem can be reduced to finding the maximum number of fragments that can be transferred within a given time T. We then use a max-flow formulation and show that existing algorithms can solve the original problem in polynomial time. Definition 4: An optimal schedule to backup/restore x fragments is one that achieves the minimum completion time to transfer at least x fragments. Let S be the set of all valid schedules; the minimum completion time is: O(x) = min{C(S) : S ∈ S ∧ |S| ≥ x}.  (1) The following proposition shows that the optimal completion time can be obtained by computing the maximum number of fragments that can be transferred in T time-slots. 
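The validity conditions of Definitions 1 and 3 translate directly into code. The sketch below is a minimal illustration under assumed names (`avail`, `u0`, `schedule`); the availability matrix convention (row 0 is the data owner p_0) is our own.

```python
# Minimal sketch of the validity check for a backup schedule (Definition 1)
# and of the completion time (Definition 3). All names are illustrative.

def is_valid_backup_schedule(schedule, avail, u0):
    """schedule: iterable of (i, t) tuples; avail[i][t] == 1 iff p_i is online
    at time-slot t, with row 0 being the data owner p_0; u0: upload capacity."""
    slots = [t for (_, t) in schedule]
    if any(slots.count(t) > u0 for t in set(slots)):
        return False                      # property 1: at most u0 uploads per slot
    if any(not (avail[0][t] and avail[i][t]) for (i, t) in schedule):
        return False                      # property 2: both endpoints online
    peers = [i for (i, _) in schedule]
    return len(peers) == len(set(peers))  # property 3: distinct storage peers

def completion_time(schedule):
    """Definition 3: C(S) = max{t : (i, t) in S}."""
    return max(t for (_, t) in schedule)
```

A restore schedule adds the constraint that every peer in the schedule must belong to the set P of storage peers, and bounds downloads by d_0 instead of uploads by u_0.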
Proposition 1: Let S be the set of all valid schedules and F(t) be the function denoting the maximum number of fragments that can be transferred within time-slot t, that is: F(t) = max{|S| : S ∈ S ∧ C(S) ≤ t}.  (2) The optimal completion time is: O(x) = min{t : F(t) ≥ x}. Proof: Let t_1 = O(x) and t_2 = min{t : F(t) ≥ x}. • t_1 ≥ t_2. By Eq. 1, an S_1 ∈ S exists such that C(S_1) = t_1 and |S_1| ≥ x, implying that F(t_1) ≥ x. Therefore, t_1 ≥ min{t : F(t) ≥ x} = t_2. • t_1 ≤ t_2. By Eq. 2, an S_2 exists such that C(S_2) = t_2 and |S_2| ≥ x. This directly implies that t_1 = O(x) ≤ t_2. We can now iteratively compute F(t) with growing values of t; the above Proposition guarantees that the first value T that satisfies F(T) ≥ x will be the desired result. We now focus on a single instance of the problem of finding the maximum number of fragments F(T) that can be transferred within time-slot T, and show that it can be encoded as a max-flow problem on a flow network built as follows. First, we create a bipartite directed graph G' = (V', E') where V' = T ∪ P; the elements of T = {t_i : i ∈ 1 ... T} represent time-slots, the elements of P = {p_i : i ∈ 1 ... I} represent remote peers (only storage nodes for restores). An edge connects a time-slot to a peer if that peer is online during that particular time-slot: E' = {(t_i, p_j) : t_i ∈ T ∧ p_j ∈ P ∧ a_{j,i} = 1}. Source s and sink t nodes complete the bipartite graph G' and create a flow network G = (V, E). The source is connected to all the time-slots during which the data owner p_0 is online; all peers are connected to the sink. The capacities on the edges are defined as follows: edges from the source to time-slots have capacity u_0 or d_0 (respectively, for backup and restore operations); edges between time-slots and peers have capacity d_i or u_i (respectively, for backup and restore operations); finally, edges between peers and the sink have capacity m. 
Note that in this work we assume individual fragments to be uploaded to distinct peers, hence m = 1. To simplify presentation, we assume integer capacities u_k = d_k = 1 ∀k ∈ [0, I]. Fig. 2 illustrates an example of the whole procedure described above, for the case of a backup operation. Fig. 2(a) shows the online behavior for time-slots t_1, ..., t_8 of the data owner and the remote peers (p_1, p_2, p_3) that can be selected as remote locations to backup data. The optimal schedule problem amounts to deciding which remote peer should be awarded a time-slot to transfer backup fragments, so that the operation can be completed within the shortest time. This problem is encoded in the graph of Fig. 2(b). Time-slots and remote peers are represented by the nodes of the inner bipartite graph. An edge of capacity 1 connects a time-slot to the set of online peers in that time-slot, as derived from Fig. 2(a). The source node has an edge of capacity u_0 = 1 to every time-slot in which the data owner is online (in the figure, t_4, t_5 are shaded to remind that p_0 is offline): this guarantees that only 1 fragment per time-slot can be transferred. The sink node has an incident edge with capacity m = 1 from every remote peer. In the particular case of the example, the smallest value of t ensuring F(t) ≥ 3 is 3, corresponding to a flow graph that contains only the t_1, t_2, t_3 time-slot nodes. The resulting optimal schedule corresponds to the thick edges in Fig. 2(b). For a flow network with V nodes and E edges, the max-flow can be computed with time complexity O(VE log(V²/E)) [4]. In our case, when we have p nodes and an optimal solution of t time-slots, V is O(p + t) and E is O(pt). The complexity of an instance of the algorithm is thus O(pt(p log(p/t) + t log(t/p))). The original problem, i.e., finding an optimal schedule that minimizes the time to transfer x fragments, can be solved by performing O(log t) max-flow computations. 
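The flow-network construction above can be sketched in code. The snippet below uses a plain Edmonds-Karp max-flow (a simpler algorithm than the one cited in [4], chosen for brevity) and a toy availability matrix that mirrors the structure of Fig. 2; all names are illustrative assumptions.

```python
# Sketch: F(T) as a max-flow computation (Eq. 2) and O(x) via binary search
# (Proposition 1). Edmonds-Karp is used for simplicity; names are assumptions.
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp on a dict-of-dicts residual capacity graph cap[u][v]."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:                 # BFS for an augmenting path
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                              # walk back from sink to source
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)          # bottleneck capacity
        for u, v in path:
            cap[u][v] -= b                           # push flow, add residual edge
            cap.setdefault(v, {})[u] = cap.get(v, {}).get(u, 0) + b
        flow += b

def F(T, owner_online, avail, u0=1, m=1):
    """Max number of fragments transferable within the first T time-slots."""
    cap = {}
    for t in range(T):
        if owner_online[t]:                          # source -> online time-slots
            cap.setdefault('s', {})[('t', t)] = u0
        for i, row in enumerate(avail):              # time-slot -> online peers
            if row[t]:
                cap.setdefault(('t', t), {})[('p', i)] = 1
    for i in range(len(avail)):                      # peers -> sink, capacity m
        cap.setdefault(('p', i), {})['k'] = m
    return max_flow(cap, 's', 'k')

def optimal_completion_time(x, owner_online, avail, u0=1):
    """O(x) = min{t : F(t) >= x}, found by binary search over the horizon."""
    lo, hi = 1, len(owner_online)
    if F(hi, owner_online, avail, u0) < x:
        return None                                  # not schedulable in the trace
    while lo < hi:
        mid = (lo + hi) // 2
        if F(mid, owner_online, avail, u0) >= x:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

On a toy trace shaped like Fig. 2 (owner offline in slots 4-5, one peer online only early, one only late), three fragments can indeed be placed within the first three slots, matching the worked example.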
In fact, an upper bound for the optimal completion time can be found in O(log t) instances of the max-flow algorithm by doubling the value of T at each step; the optimal value can then be obtained, again in O(log t) time, via binary search. The computational complexity of determining an optimal schedule in a full information framework is thus O(pt log t (p log(p/t) + t log(t/p))). B. Random Scheduling In practice, assuming complete knowledge of peers' online behavior is not realistic. We introduce a randomized scheduling policy which only requires knowing which peers are online at the time of the scheduling decision. In Sec. III-C, we compare optimal and randomized scheduling using real traces. For backup operations, in each time-slot, fragments are uploaded from the data owner to no more than u_0 remote peers chosen at random among those that are currently online and that did not receive a fragment in previous time-slots. This satisfies Def. 1. For restore operations, in each time-slot, d_0 remote peers in the set P are randomly chosen among those that are currently online, and data is transferred back to the data owner. This satisfies Def. 2. We now use Fig. 2 to illustrate a possible outcome of the randomized schedule defined here and compare it to the optimal schedule computed using the max-flow formalization. We focus on the backup operation of x = 3 fragments carried out by the data owner p_0. In Fig. 2(a), the data owner may randomly select p_1 to be the recipient of the first fragment in time-slot t_1. Since we assume that m = 1 fragment can be stored on a distinct peer, this choice implies that time-slot t_2 is "wasted". In time-slot t_3 the data owner has no choice but to store data on peer p_3. Only in time-slot t_7 is the backup process complete, when the last fragment is uploaded to peer p_2. Hence, this randomized schedule writes as (p_1, t_1); (p_3, t_3); (p_2, t_7). 
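The randomized backup policy just described can be sketched in a few lines; the toy availability matrix and all names are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the randomized backup policy (Sec. III-B): in each time-slot,
# upload to at most u0 peers chosen uniformly among those currently online
# that do not already hold a fragment. Names are assumptions.
import random

def random_backup_schedule(x, owner_online, avail, u0=1, rng=random):
    schedule, served = [], set()
    for t, owner_up in enumerate(owner_online):
        if len(schedule) >= x:
            break                         # x fragments placed: backup done
        if not owner_up:
            continue                      # transfers need both endpoints online
        candidates = [i for i, row in enumerate(avail)
                      if row[t] and i not in served]
        for i in rng.sample(candidates, min(u0, len(candidates))):
            schedule.append((i, t))       # one fragment to a fresh peer
            served.add(i)
    return schedule                       # valid w.r.t. Definition 1
```

On the Fig. 2-style toy trace, the unlucky choice at t_1 (picking the peer that is also the only one online at t_2) delays completion exactly as in the worked example.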
The optimal schedule is obtained by computing the max-flow on the flow network in Fig. 2(b) (thick edges in the figure), and writes as (p_2, t_1); (p_1, t_2); (p_3, t_3). The backup operation only requires 3 time-slots to complete. C. Numerical Analysis Here, we take a numerical perspective and compare optimal and randomized scheduling in terms of TTB and TTR. We focus on a single data owner p_0 involved in a backup operation. The input to the scheduling problem is the availability trace described in Sec. II-B, starting the backup at a random moment; we set the duration of a time-slot to one hour. Let u_0 = 1 fragment per time-slot be the upload rate of p_0. We report results for x ∈ {40, 60, 80} backup fragments, and vary the number of randomly chosen remote peers so that I ∈ {1.1x, 1.2x, ..., 2x}. We obtained each data point by averaging 1,000 runs of the experiment; furthermore, for each of those runs, we averaged the completion times of 1,000 random schedules in the same settings. Fig. 3 illustrates the ratio between the TTB achieved respectively by optimal and randomized scheduling, normalized to the ideal backup time minTTB. We observe that both optimal and randomized scheduling approach minTTB when the number of remote peers available to store backup fragments increases: a large system improves transmission opportunities, and TTB approaches the ideal lower bound. However, when the number of backup fragments grows, which is a consequence of higher redundancy rates, randomized scheduling requires a larger pool of remote machines to approach the performance of optimal scheduling. We also note that the heterogeneous and correlated behavior of users in the availability trace results in "idle" time-slots in which neither optimal nor randomized scheduling can transfer data. This very same evaluation can be used to evaluate a restore operation, even if the parameters acquire a different meaning. 
[Fig. 3. Numerical analysis: a comparison between optimal and randomized scheduling, using real availability traces. The plot reports TTB/minTTB against the number of remote peers, for random and optimal scheduling with x ∈ {40, 60, 80}.] In this case, the number x of fragments that need to be transferred is the number of original fragments k, and the number of remote peers I corresponds to the number of encoded fragments n. For restores, as the redundancy rate n/k = I/x grows, the operation becomes more efficient. We conclude that randomized scheduling is a good choice for a P2P backup application, provided that: • to have efficient backups, the ratio between the number of nodes in the system and the number of fragments to store is not very close to one; • to have efficient restores, the redundancy rate is not very close to one. As a heuristic threshold, in our analysis we obtain that a value of I/x = 1.5 is sufficient to complete backup and restore within a tolerable (around 10%) deviation from minTTB or minTTR, respectively. In the following, we will therefore use randomized scheduling and make sure that such a ratio is reached in order to ensure that scheduling does not impose too harsh a penalty on TTB and TTR. Birk and Kol [5] analyzed random backup scheduling by modeling peer uptime as a Markovian process. Albeit quantitatively different due to the absence of diurnal and weekly patterns in their model, their study reached a conclusion that is analogous to ours: in backups, the completion time of random scheduling converges to the optimal value as the system size grows. IV. REDUNDANCY POLICY In the literature, the redundancy rate is generally chosen a priori to ensure what we term prompt data availability. Given a system with average availability a, a target data availability t, and assuming the availability of each individual peer to be an independent random variable with probability a, a system-wide redundancy rate is computed as follows. 
The total number n of redundant fragments required to meet the target t, when k original fragments constitute the data to backup, is computed as [6]: min{ n ∈ N : Σ_{i=k}^{n} C(n, i) a^i (1 − a)^(n−i) ≥ t }.   (3) We label this method fixed-redundancy, and use it in the following as a baseline approach. Ensuring prompt data availability is not our goal, since peers only retrieve their data upon (hopefully rare) crash events. Data downloads correspond to restore operations, which require a long time to complete because of the sheer size of backup data. Hence, we approach the design of our redundancy policy by taking into account the tradeoffs that a backup application has to face. On the one hand, with low redundancy the aggregate storage capacity of the system improves, TTB decreases, and maintenance costs drop. On the other hand, two factors discourage selecting excessively low redundancy rates: first, TTR increases, as fewer peers will be online to serve fragments during data restores; second, there is a higher risk of data loss. Our redundancy policy operates as follows. During the backup phase, peers constantly estimate their TTR and the probability of losing data, and adjust the redundancy rate according to the characteristics of the remote peers that hold their data. In practice, data owners upload encoded fragments until the estimates of TTR and data loss probability fall below an arbitrary threshold; when the threshold is crossed, the backup phase terminates. Note that TTB is generally several times longer than TTR. First, in the restore phase, peers are not likely to disconnect from the Internet; second, most peers have asymmetric lines with fast downlink and slow uplink; third, backups require uploading redundant data while restores involve downloading an amount of data equivalent to the original backup object.
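Eq. (3) can be evaluated with a direct search over n; a small sketch (the function names are ours, not from the paper):

```python
from math import comb

def fragment_availability(n, k, a):
    """P[at least k of the n fragments are online], each fragment hosted on
    a peer that is independently online with probability a."""
    return sum(comb(n, i) * a**i * (1 - a)**(n - i) for i in range(k, n + 1))

def fixed_redundancy(k, a, t):
    """Smallest n meeting the target data availability t (Eq. 3)."""
    n = k
    while fragment_availability(n, k, a) < t:
        n += 1
    return n
```

With the trace-driven parameters used later in Sec. V (k = 64, a = 0.36, t = 0.99), this search lands close to the n = 228 and n/k = 3.56 reported there.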
Because of this imbalance between TTB and TTR, we argue that it is reasonable to use a redundancy scheme that trades longer TTR (which affects only users that suffer a crash) for shorter TTB (which affects all users). We now delve into the details of how to approximate TTR and the data loss probability. A. Approximating TTR As for the optimal scheduling problem, accurately predicting the TTR requires full knowledge of disk failure events and peer availability patterns. We obtain an estimate of the TTR with a heuristic approach; in Sec. V we show that our approximation is reasonable. We assume that a data owner p0 remains online during the whole restore process. The TTR can be bounded for two reasons: i) the download bandwidth d0 of the data owner is a bottleneck; ii) the upload rates of the remote peers holding p0's data are a bottleneck. Let us focus on the second case: we define the expected upload rate of a generic remote peer pi holding a backup fragment of p0 as the product ai·ui of the average availability and the upload bandwidth of pi. The data owner needs k fragments to recover the backup object: suppose these fragments are served by the k "fastest" remote peers. In this case, the bottleneck upload rate is that of the k-th peer pj, the one with the smallest expected upload rate among them. If we consider l parallel downloads and a backup object of size o, a peer computes an estimate of the TTR as eTTR = max( o/d0, o/(l·aj·uj) ).   (4) B. Approximating the Data Loss Probability Upon a crash, a peer with n fragments placed on remote peers can lose its data if more than n − k of them crash as well before the data is completely restored. Considering a delay w that can pass between the crash event and the beginning of the restore phase, we compute the data loss probability within a total delay of t = w + eTTR. We consider disk crashes to be memoryless events, with constant probability for any peer and at any time.
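The eTTR estimate of Eq. (4) is a one-liner once the expected upload rates are sorted; a sketch, with a peer list of our own invention:

```python
def estimate_ttr(o, d0, peers, k, l):
    """eTTR heuristic (Eq. 4). peers is a list of (availability a_i,
    upload bandwidth u_i) pairs for the remote peers storing fragments;
    the k-th largest expected upload rate a_j * u_j is the bottleneck
    when k fragments must be retrieved."""
    rates = sorted((a * u for a, u in peers), reverse=True)
    bottleneck = rates[k - 1]            # k-th "fastest" remote peer
    return max(o / d0, o / (l * bottleneck))
```

The outer `max` captures the two bottleneck cases: the owner's downlink d0, or the serving peers' expected upload rates.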
Disk lifetimes are thus exponentially distributed random variables with a parametric average t̄: a peer crashes by time t with probability 1 − e^(−t/t̄). The probability of data loss is then Σ_{i=n−k+1}^{n} C(n, i) (1 − e^(−t/t̄))^i (e^(−t/t̄))^(n−i).   (5) The data loss probability needs to be monitored with great care. In Fig. 4, we plot the probability of losing data as a function of the redundancy rate and the delay t; here we set t̄ = 90 days and k = 64. When the time without maintenance is in the order of magnitude of weeks, even a small decrease in redundancy can increase the probability of data loss by several orders of magnitude. In summary, our redundancy policy triggers the end of the backup phase, and determines the redundancy rate applied to a backup object. Since we trade longer TTR for shorter TTB, our scheme ensures that data redundancy is enough to make the data loss probability small, and keeps TTR under a certain value. Finally, we remark that our approximation techniques require knowing the uplink capacity and the average availability of remote peers. While a decentralized approach to resource monitoring is an appealing research subject, it is common practice (e.g., in Wuala) to rely on a centralized infrastructure to monitor peer resources. V. SYSTEM SIMULATION We proceed with a trace-driven system simulation, considering all the factors identified in Sec. III: churn, correlated uptime, peer bandwidth, congestion, and fragment granularity. A. Simulation Settings Our simulation covers three months, using the availability traces described in Sec. II-B, with the exception that peers remain online during restores. Uplink capacities of peers are obtained by sampling a real bandwidth distribution measured at more than 300,000 unique Internet hosts over a 48-hour period, from roughly 3,500 distinct ASes across 160 countries [7]. These values have a highly skewed distribution, with a median of 77 kBps and a mean of 428 kBps.
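The data loss probability of Eq. (5) above can be computed directly; a sketch, where `mean_lifetime` stands for the average disk lifetime t̄:

```python
from math import comb, exp

def data_loss_probability(n, k, t, mean_lifetime):
    """Eq. 5: probability that more than n - k of the n remote disks crash
    within time t, with i.i.d. exponentially distributed disk lifetimes."""
    p = 1 - exp(-t / mean_lifetime)      # per-disk crash probability by t
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n - k + 1, n + 1))
```

Sweeping n with k fixed reproduces the qualitative behavior plotted in Fig. 4: a small reduction in redundancy raises the loss probability sharply.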
To represent typical asymmetric residential Internet lines, we assign each peer a downlink speed equal to four times its uplink. Our adaptive redundancy policy uses the following parameters: we set the threshold for the estimated TTR to satisfy eTTR ≤ max(1 day, 2·minTTR), and we keep the probability of data loss smaller than 10^−4, where w = 2 weeks is the maximum delay between crash and restore events (see Sec. IV-B). Each node has 10 GB of data to backup, and dedicates 50 GB of storage space to the application. The high ratio between these two values lets us disregard issues due to insufficient storage capacity (which is considered to be cheap) and focus on the subjects of our investigation, i.e., scheduling and redundancy. The fragment size f is set to 160 MB, resulting in k = 64 original fragments per backup object. We define peers' lifetimes to be exponentially distributed random variables with an expected value of 90 days (here we neglect the economics of the application, e.g., promoting user loyalty to the system; hence, we do not consider unanticipated user departures). After they crash, peers return online immediately and start their restore process; in Sec. VI, we also consider a delay between crash events and restore operations. As discussed in Sec. IV-B, we compare against a baseline redundancy policy that assigns a fixed redundancy rate. Here we set a target data availability of t = 0.99, and use the system-wide average availability a = 0.36 as computed from our availability traces. Hence, we obtain a value n = 228 and a redundancy rate n/k = 3.56. Our simulations involve 376 peers. This is sufficient to ensure that the performance of randomized scheduling is close to optimality (see Sec. III). For each set of parameters, the simulation results are obtained by averaging ten simulation runs. B. Results Fig. 5 shows the cumulative distribution functions of minTTB and minTTR: these baseline values are deeply influenced by the bandwidth distribution we used, and their gap is justified by the asymmetry of the access bandwidth and the assumption that peers stay online during the restore process. We now verify the accuracy of our approximation of TTR, expressed as the ratio of estimated versus measured TTR. This ratio has a median of 0.92, with 10th and 90th percentiles of 0.50 and 2.56, respectively. The values of TTR vary mostly due to the diurnal and weekly connectivity patterns of users in our traces, but in most cases the eTTR is a sensible rough estimate of the TTR. The adaptive policy pays off, with an average redundancy rate of 1.91 against a flat value of 3.56 for the baseline approach (Fig. 6(a)); the maintenance traffic decreases accordingly, and the system almost doubles its storage capacity. In addition, TTB is roughly halved with the adaptive scheme (Fig. 6(b)); a price for this is paid by crashed peers, which will have longer TTR (Fig. 6(c)). As we argued in Sec. IV, we think this loss is tolerable and well offset by the benefits of reduced redundancy. We observe tails where a minority of the nodes have very high TTB/minTTB and TTR/minTTR ratios. These are nodes with very high bandwidths and therefore low values of minTTB and minTTR (see Fig. 5); their backup and restore speeds are limited by the bandwidth of remote nodes, which is orders of magnitude smaller. These results certify that our adaptive scheme beneficially affects performance. However, lower redundancy might result in higher risks of losing data: in the following Section, we analyze this. VI.
DATA LOSS AND DELAYED RESTORE Our simulation settings put the system under exceptional stress: the peer crash rate is two orders of magnitude higher than what is reported for commodity hardware [8]. In such an adverse scenario, we study the likelihood and the causes of data loss, and their relation to the redundancy scheme. In addition, we discuss the implications of a delayed response to crashes, affecting both restore and maintenance operations. We consider the following scenarios: • Immediate response: Peers start restores as soon as they crash. Moreover, they immediately alert the relevant peers to start their maintenance. • Delayed response: Crashed peers return online after a random delay. If this delay exceeds a timeout, peers suffering from fragment loss start their maintenance. • Delayed assisted response: After the above timeout, a third party intervenes to rescue crashed peers whose data is at risk, by maintaining it. In our simulations, delays are exponentially distributed random variables with an average of one week; the timeout value is one week as well. For performance reasons, assisted maintenance can be supported by an online storage provider, which is used as a temporary buffer. Here we assume a provider with 100% uptime and unlimited bandwidth and storage space: maintenance is triggered upon expiration of the timeout, conditioned on a data loss probability greater than 10^−4. In our experiments, due to the inflated peer crash rates, between 11.4% and 14.6% of crashed peers could not recover their data. In Table I, we focus on those peers. The majority of data loss events affected peers that crashed before they completed their backups, according to the redundancy policy (unfinished backups column). This can be due to two reasons: the backup process is inherently time-consuming, due to the availability and bandwidth of data owners; or the backup system is inefficient.
To differentiate between these two cases, we consider unavoidable data loss events (rightmost column in the table). If a peer crashes before minTTB, no online backup system could have saved its data. Data backup takes time: this simple fact alone accounts for far more data loss than all the limitations of a P2P approach. Users should worry more about completing their backup quickly than about the reliability of their peers. The difference in redundancy between the high rate used by the fixed baseline and the adaptive approach does not impact the data loss rate significantly, except in the case of non-assisted delayed response. Assisted maintenance is an effective way to counter this effect. In Fig. 7 we show the costs of assisted repairs in terms of data traffic. Given that the prices of storage services are highly asymmetric, we only consider the outbound traffic, from provider to peers. Data volumes are expressed as fractions of the total size of backup objects in the system. There is a striking difference between the adaptive and fixed redundancy schemes: higher redundancy results in fewer emergency situations in which the server has to step in. The amount of data stored on the server has a peak load of less than 2.5% of the total backup size: the assisted repairs are quick, therefore only a small fraction of the peers need assistance simultaneously. VIII. CONCLUSION The P2P paradigm applied to backup applications is a compelling alternative to centralized online solutions, which become costly for long-term storage. In this work, we revisited P2P backup and argued that such an application is viable. Because the online behavior of users is unpredictable and, at large scale, crashes and failures are the norm rather than the exception, we showed that scheduling and redundancy policies are paramount to achieve short backup and restore times.
We gave a novel formalization of optimal scheduling and showed that, with full information, a problem that may appear combinatorial in nature can actually be solved efficiently by reducing it to a maximal flow problem. Without full information, optimal scheduling is infeasible; however, we showed that as the system size grows, the gap between randomized and optimal scheduling policies diminishes rapidly. Furthermore, we studied an adaptive scheme that strives to keep data redundancy small, which implies shorter backup times than a state-of-the-art approach that uses a system-wide, fixed redundancy rate. This comes at the expense of increased restore times, which we argued to be a reasonable price to pay, especially in light of our study on the probability of data loss. In fact, we determined that the vast majority of data loss episodes are due to incomplete backups. Our experiments illustrated that such events are unavoidable, as they are determined by the limitations of data owners alone: no online storage system could have avoided them. We conclude that short backup times are crucial, far more so than the reliability of the P2P system itself. As such, the crux of a P2P backup application is to design mechanisms that optimize this metric. Our research agenda includes the design and implementation of a fully-fledged prototype of a P2P backup application. Additionally, we will extend the parameter space of our study to include the natural heterogeneity of user demand in terms of storage requirements. To do so, we will collect measurements from both existing online storage systems and from a controlled deployment of our prototype implementation.
An online backup system should be quick and reliable in both saving and restoring users' data. To do so in a peer-to-peer implementation, data transfer scheduling and the amount of redundancy must be chosen wisely. We formalize the problem of exchanging multiple pieces of data with intermittently available peers, and we show that random scheduling completes transfers nearly optimally in terms of duration as long as the system is sufficiently large. Moreover, we propose an adaptive redundancy scheme that improves performance and decreases resource usage while keeping the risks of data loss low. Extensive simulations show that our techniques are effective in a realistic trace-driven scenario with heterogeneous bandwidth.
Backup objects, whose confidentiality can be ensured by standard encryption techniques, should encode incremental differences between archive versions. Recently, various techniques have been proposed to optimize computational time and size of these differences @cite_3 .
@cite_3 (abstract): "Storage outsourcing is a rising trend which prompts a number of interesting security issues, many of which have been extensively investigated in the past. However, Provable Data Possession (PDP) is a topic that has only recently appeared in the research literature. The main issue is how to frequently, efficiently and securely verify that a storage server is faithfully storing its client's (potentially very large) outsourced data. The storage server is assumed to be untrusted in terms of both security and reliability. (In other words, it might maliciously or accidentally erase hosted data; it might also relegate it to slow or off-line storage.) The problem is exacerbated by the client being a small computing device with limited resources. Prior work has addressed this problem using either public key cryptography or requiring the client to outsource its data in encrypted form. In this paper, we construct a highly efficient and provably secure PDP technique based entirely on symmetric key cryptography, while not requiring any bulk encryption. Also, in contrast with its predecessors, our PDP technique allows outsourcing of dynamic data, i.e., it efficiently supports operations such as block modification, deletion and append."
On Scheduling and Redundancy for P2P Backup
The advent of cloud computing as a new paradigm that enables service providers to deploy cost-effective solutions has favored the development of a range of new services, including online storage applications. Due to the economy of scale of cloud-based storage services, the costs incurred by end-users to hand over their data to a remote storage location in the Internet have approached the cost of ownership of commodity storage devices. As such, online storage applications spare users most of the time-consuming nuisance of data backup: user interaction is minimal, and in case of data loss due to an accident, restoring the original data is a seamless operation. However, the long-term storage costs that are typical of a backup application may easily go past those of traditional approaches to data backup. Additionally, while data availability is a key feature that large-scale data-center deployments guarantee, its durability is questionable, as reported recently [1]. For these reasons, peer-to-peer (P2P) storage systems are an alternative to cloud-based solutions. Storage costs are merely those of a commodity storage device, which is shared (together with some bandwidth resources) with a number of remote Internet users to form a distributed storage system. Such applications optimize the latency of individual file accesses: indeed, users hand over their data to the P2P system, which is used as a replacement for a local hard drive. In such a scenario, low access latency is difficult to achieve: the online behavior of users is unpredictable and, at large scale, crashes and failures are the norm rather than the exception. As a consequence, storage space is sacrificed for low access latency: a P2P application stores large amounts of redundant data to cope with such unfavorable events. In this work we study a particular case of online storage: P2P backup applications.
Data backup involves the bulk transfer of potentially large quantities of data, both during regular data backups and in case of data loss. As a consequence, low access latency is not an issue, while short backup and restore times seem a more reasonable goal. Given these considerations, here we seek to optimize backup and restore times, while guaranteeing that data loss remains an unlikely event. There are two main design choices that affect these metrics: scheduling, i.e. deciding how to allocate data transfers between peers, and redundancy, i.e. the amount of data in the P2P system that guarantees a backup operation to be considered complete and safe. The endeavor of this work is to study and evaluate these two intertwined aspects. First, we describe in detail our application scenario (Sec. II), and show why the assumptions underlying a backup application can simplify many problems addressed in the literature. We then set off to define the problem of scheduling in a full knowledge setting, and we show that it can be solved in polynomial time by reducing it to a maximal flow problem. Full knowledge of future peer uptime is obviously an unrealistic assumption: thus, we show that a randomized approach to scheduling yields near optimal results when the system scale is large and we corroborate our findings using real availability traces from an instant messaging application (Sec. III). We then move to study a novel redundancy policy that, rather than focusing on short-term data availability, targets short data restore times. As such, our method alleviates the storage burden of large amounts of redundant data on client machines (Sec. IV). With a trace-driven simulation of a complete P2P backup system, we show that our technique is viable in practical scenarios and illustrate its benefits in terms of increased performance (Sec. V). We conclude by studying a range of data maintenance policies when restore operations may undergo some natural delays. 
For example, detecting a faulty external hard-drive may not be immediate, or obtaining new equipment upon a crash may require some time. We show that an "assisted" approach to data repair techniques (which involves a cloud-based storage service) can significantly reduce the probability of data loss, at an affordable cost (Sec. VI). II. APPLICATION SCENARIO In this work, similarly to many online backup applications (e.g., Dropbox), we assume that users specify one local folder containing important data to backup. We also assume that backup data remains available locally to peers. This is an important trait that distinguishes backup from storage applications, in which data is only stored remotely. Backup data consists of an opaque object, possibly representing an encrypted archive of changes to a set of files, that we term the backup object. In the spirit of incremental backups, we consider that each backup object should be kept in the system indefinitely. Consolidation and deletion of obsolete backups are not taken into account in this work. A backup object of size o is split into k original fragments of a fixed size f, with k = o/f. Since backup data is stored on unreliable machines characterized by an unpredictable online behavior, the original k fragments are encoded using erasure coding (e.g., Reed-Solomon). This creates n encoded fragments of size f, of which any k are sufficient to recover the original data. The redundancy rate is defined as r = n/k. Here we assume that encoded fragments reside on distinct remote peers, which prevents a single disk failure from causing the loss of multiple fragments. Backup Phase: The backup phase involves a data owner and a set of remote peers that eventually store encoded fragments for the data owner. We assume that any peer in the system can collect a list of remote peers with available storage space: this can be achieved by using known techniques, e.g.
a centralized "tracker" or a decentralized data structure such as a distributed hash table. Data backup requires a scheduling policy that drives the choice of where and when to upload encoded fragments to remote peers. Moreover, a redundancy policy determines when the data is safe, which completes the backup operation. Maintenance Phase: Once the backup phase is completed and encoded fragments reside on remote peers, the maintenance phase begins. Peer crashes and departures can cause the loss of some encoded fragments; during the maintenance phase, peers detect such losses and generate new encoded fragments to restore a redundancy level at which the backup is safe again. For a generic P2P storage system, in which encoded fragments only reside in the network and peers do not keep a local copy of their data, the maintenance phase is critical. Indeed, peers need to first download the whole backup object from remote machines, then generate new encoded fragments and upload them to available peers. This problem has fostered the design of efficient coding schemes to mitigate the excessive network traffic caused by the maintenance operation (see e.g. [2], [3]). In a backup application, the maintenance phase is less critical: the data owner can generate new encoded fragments using the local copy of the data, with no download required. Restore Phase: In the unfortunate case of a crash, the data owner initiates the restore phase. A peer contacts the remote machines holding encoded fragments, downloads at least k of them, and reconstructs the original backup data. Again, a scheduling policy drives the process. Since the ability to successfully restore data upon a crash is the ultimate goal of any backup system, in our application the restore traffic receives higher priority than the backup and maintenance traffic. A.
Performance Metrics We characterize the system performance in terms of the amount of time required to complete the backup and the restore phases, labelled time to backup (TTB) and time to restore (TTR). In the following Sections, we use baselines for backup and restore operations which lower-bound both TTB and TTR. Let us assume an ideal storage system with unlimited capacity and uninterrupted online time that backs up user data. In this case, TTB and TTR only depend on the backup object size and on the bandwidth and availability of the data owner. We label these ideal values minTTB and minTTR, and we define them formally in Sec. III. Additionally, we consider the data loss probability, i.e., the probability that a data owner is unable to restore its backup data. A P2P backup application may exact a high toll in terms of peer resources, including storage and bandwidth. In this work we gloss over metrics of the burden on individual peers and the network, considering a scenario in which the resources of peers are lost if left unused. B. Availability Traces The online behavior of users, i.e., their patterns of connection and disconnection over time, is difficult to capture analytically. In this work we perform our evaluations on a real application trace that exhibits both heterogeneity and correlated user behavior. Our traces capture user availability, in terms of login/logoff events, from an instant messaging (IM) server in Italy for a duration of 3 months. We argue that the behavior of regular IM users constitutes a representative case study. Indeed, for both an IM and an online backup application, users are generally signed in for as long as their machine is connected to the Internet. In this work we only consider users that are online for an average of at least four hours per day, as done in the Wuala online storage application. Once this filter is applied, we obtain the trace of 376 users.
User availabilities are strongly correlated, in the sense that many users connect or disconnect around the same time. As shown in Fig. 1(a), there are strong differences between the number of users connected during day and night and between workdays and weekends. Most users are online for less than 40% of the trace, while some of them are almost always connected ( Fig. 1(b)). III. THE SCHEDULING PROBLEM Scheduling data transfers between peers is an important operation that affects the time required to complete a backup or a restore task, especially in a system involving unreliable machines with unpredictable online patterns. Because of churn, a node might not be able to find online nodes to exchange data with: hence, TTB and TTR can grow due to idle periods of time. Unexpected node disconnections require a method to handle partial fragments, which can be discarded or resumed. Moreover, the redundancy rate used to cope with failures and unavailability may decrease system performance. Finally, the available bandwidth between peers involved in a data transfer, which may be shared due to parallel transmissions, is another cause for slow backup and restore operations. In this Section, we focus on the implications of churn alone. We simplify the scheduling problem by assuming the redundancy factor to be a given input parameter, and neglecting the possibility of congestion due to several different backup, restore or maintenance processes interfering. Furthermore, we do not consider interrupted fragment transfers. In Section IV, we define an adaptive scheme to compute the redundancy rate applied to a backup operation and in Section V we relax all other assumptions. We now define a reference scenario to bound TTB and TTR. Consider an ideal storage system (e.g. a cloud service) with unbounded bandwidth and 100% availability. 
A peer i with upload and download bandwidth ui and di starting the backup of an object of size o at time t completes its backup at time t′, after having spent o/ui time online. Analogously, i restores a backup object of the same size at t″ after having spent o/di time online. We define minTTB(i, t) = t′ − t and minTTR(i, t) = t″ − t. We use these reference values throughout the paper to compare the relative performance of our P2P application versus that of such an ideal system. Because we neglect congestion issues, we can focus on a backup/restore operation as seen from a single peer in the system. Let us consider a generic peer p0 and I remote peers p1, . . ., pI used to store p0's data. We assume time to be divided into time-slots of fixed length. Let ai,t be an indicator variable such that ai,t = 1 if and only if pi is online at time t. Each peer i has integer upload and download capacities of ui and di fragments per time-slot, respectively. We now proceed with a series of definitions used to formalize the scheduling problem. Definition 1: A backup schedule is a set of (i, t) tuples representing the decision of uploading a fragment from p0 to peer pi, where i ∈ {1 . . . I}, at time-slot t. A valid backup schedule S satisfies the following properties: 1) ∀t : |{i : (i, t) ∈ S}| ≤ u0: no more than u0 fragments per time-slot can be uploaded. 2) ∀(i, t) ∈ S : ai,t = a0,t = 1: fragments are transferred only between online peers. 3) ∀(i, t), (j, u) ∈ S, (i, t) ≠ (j, u) : i ≠ j: no two fragments are stored on the same peer. Definition 2: A restore schedule is a set of (i, t) tuples representing the decision of downloading a fragment from a set of remote peers pi ∈ P at time t, where P is the set of storage peers that received a fragment during the backup phase. A valid restore schedule S satisfies the following properties: 1) ∀t : |{i : (i, t) ∈ S}| ≤ d0: no more than d0 fragments per time-slot can be recovered. 2) ∀(i, t) ∈ S : ai,t = a0,t = 1.
3) ∀(i, t), (j, u) ∈ S, (i, t) ≠ (j, u) : i ≠ j: no two fragments are retrieved from the same peer. 4) ∀(i, t) ∈ S : pi ∈ P: fragments can only be retrieved from storage peers. Definition 3: The completion time C of a schedule S is the last time-slot in which a transfer is performed, that is: C(S) = max{t : (i, t) ∈ S}. In the following, we first consider a full information setting, and show how to compute an optimal schedule which minimizes the completion time, provided that the online behavior of peers is known a priori. Then, we compare optimal scheduling to a randomized policy that needs no knowledge of future peer uptime; via a numerical analysis, we show the conditions under which a randomized, uninformed approach achieves performance comparable to that of an optimal schedule. A. Full Information Setting We cast the problem of finding the optimal schedule for both backup and restore operations as finding the minimum completion time to transfer a given number x of fragments. For backups, x corresponds to the number n of redundant encoded fragments; for restores, x equals the number k of original fragments. We show that this problem can be reduced to finding the maximum number of fragments that can be transferred within a given time T. We then use a max-flow formulation and show that existing algorithms can solve the original problem in polynomial time. Definition 4: An optimal schedule to backup/restore x fragments is one that achieves the minimum completion time to transfer at least x fragments. Let S be the set of all valid schedules; the minimum completion time is: O(x) = min{C(S) : S ∈ S ∧ |S| ≥ x}.   (1) The following proposition shows that the optimal completion time can be obtained by computing the maximum number of fragments that can be transferred in T time-slots.
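The validity conditions of Definition 1 can be checked mechanically; a sketch with u0 uploads per slot and a hypothetical availability matrix `online[i][t]` of our own devising (index 0 is the data owner):

```python
def is_valid_backup_schedule(schedule, online, u0):
    """Check Definition 1 on a set of (peer, slot) tuples."""
    # property 2: both endpoints must be online during each transfer
    if any(not (online[i][t] and online[0][t]) for i, t in schedule):
        return False
    # property 1: at most u0 uploads per time-slot
    slots = [t for _, t in schedule]
    if any(slots.count(t) > u0 for t in set(slots)):
        return False
    # property 3: no two fragments stored on the same peer
    peers = [i for i, _ in schedule]
    return len(peers) == len(set(peers))
```

A restore schedule (Definition 2) would be checked identically, with d0 in place of u0 and the extra constraint that every peer belongs to the storage set P.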
Proposition 1: Let S be the set of all valid schedules and F(t) be the function denoting the maximum number of fragments that can be transferred within time-slot t, that is: F(t) = max{|S| : S ∈ S ∧ C(S) ≤ t}. (2) The optimal completion time is: O(x) = min{t : F(t) ≥ x}. Proof: Let t_1 = O(x) and t_2 = min{t : F(t) ≥ x}. • t_1 ≥ t_2. By Eq. 1, an S_1 ∈ S exists such that C(S_1) = t_1 and |S_1| ≥ x, implying that F(t_1) ≥ x. Therefore, t_1 ≥ min{t : F(t) ≥ x} = t_2. • t_1 ≤ t_2. By Eq. 2, an S_2 exists such that C(S_2) ≤ t_2 and |S_2| ≥ x. This directly implies that t_1 = O(x) ≤ t_2. We can now iteratively compute F(t) for growing values of t; the above proposition guarantees that the first value T satisfying F(T) ≥ x is the desired result. We now focus on a single instance of the problem of finding the maximum number of fragments F(T) that can be transferred within time-slot T, and show that it can be encoded as a max-flow problem on a flow network built as follows. First, we create a bipartite directed graph G′ = (V′, E′) where V′ = T ∪ P; the elements of T = {t_i : i ∈ 1 . . . T} represent time-slots, and the elements of P = {p_i : i ∈ 1 . . . I} represent remote peers (only storage nodes for restores). An edge connects a time-slot to a peer if that peer is online during that particular time-slot: E′ = {(t_i, p_j) : t_i ∈ T ∧ p_j ∈ P ∧ a_{j,i} = 1}. Source s and sink t nodes complete the bipartite graph G′ and create a flow network G = (V, E). The source is connected to all the time-slots during which the data owner p_0 is online; all peers are connected to the sink. The capacities on the edges are defined as follows: edges from the source to time-slots have capacity u_0 or d_0 (for backup and restore operations, respectively); edges between time-slots and peers have capacity d_i or u_i (for backup and restore operations, respectively); finally, edges between peers and the sink have capacity m.
Note that in this work we assume individual fragments to be uploaded to distinct peers, hence m = 1. To simplify the presentation, we assume integer capacities u_k = d_k = 1 ∀k ∈ [0, I]. Fig. 2 illustrates an example of the whole procedure described above, for the case of a backup operation. Fig. 2(a) shows the online behavior for time-slots t_1, . . . , t_8 of the data owner and the remote peers (p_1, p_2, p_3) that can be selected as remote locations to backup data. The optimal schedule problem amounts to deciding which remote peer should be awarded a time-slot to transfer backup fragments, so that the operation can be completed within the shortest time. This problem is encoded in the graph of Fig. 2(b). Time-slots and remote peers are represented by the nodes of the inner bipartite graph. An edge of capacity 1 connects a time-slot to the set of online peers in that time-slot, as derived from Fig. 2(a). The source node has an edge of capacity u_0 = 1 to every time-slot in which the data owner is online (in the figure, t_4 and t_5 are shaded as a reminder that p_0 is offline): this guarantees that only 1 fragment per time-slot can be transferred. The sink node has an incident edge with capacity m = 1 from every remote peer. In the particular case of the example, the smallest value of t ensuring F(t) ≥ 3 is 3, corresponding to a flow graph that contains only the t_1, t_2, t_3 time-slot nodes. The resulting optimal schedule corresponds to the thick edges in Fig. 2(b). For a flow network with V nodes and E edges, the max-flow can be computed with time complexity O(V E log(V²/E)) [4]. In our case, with p nodes and an optimal solution of t time-slots, V is O(p + t) and E is O(pt). The complexity of an instance of the algorithm is thus O(pt(p log(p/t) + t log(t/p))). The original problem, i.e., finding an optimal schedule that minimizes the time to transfer x fragments, can be solved by performing O(log t) max-flow computations.
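The construction above can be implemented directly. The sketch below assumes the unit capacities u_0 = d_i = m = 1 of the example; it computes F(T) with a simple Edmonds-Karp max-flow and recovers O(x) = min{t : F(t) ≥ x} by a linear scan over T (the O(log t) doubling-plus-binary-search strategy would replace that scan). Names are ours:

```python
from collections import deque

# Flow network: source 0, time-slot nodes 1..T, peer nodes T+1..T+I, sink K.
# `online[i][t]` gives a_{i,t} for peers i = 0 (the data owner) .. I.
def max_fragments(online, T, u0=1):
    I = len(online) - 1
    S, K = 0, T + I + 1
    cap = [[0] * (K + 1) for _ in range(K + 1)]
    for t in range(T):
        if online[0][t]:
            cap[S][1 + t] = u0               # owner online: slot usable
        for i in range(1, I + 1):
            if online[i][t]:
                cap[1 + t][T + i] = 1        # peer i reachable in slot t
    for i in range(1, I + 1):
        cap[T + i][K] = 1                    # m = 1 fragment per peer
    flow = 0
    while True:                              # Edmonds-Karp: BFS augmenting paths
        parent = {S: None}
        q = deque([S])
        while q and K not in parent:
            u = q.popleft()
            for v in range(K + 1):
                if v not in parent and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if K not in parent:
            return flow
        v = K
        while parent[v] is not None:         # augment by one unit along the path
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1

def optimal_completion_time(online, x):
    # O(x) = min{t : F(t) >= x}, scanning T over the length of the trace.
    for T in range(1, len(online[0]) + 1):
        if max_fragments(online, T) >= x:
            return T
    return None  # x fragments cannot be placed within the trace
```

On a trace shaped like Fig. 2 (owner offline in slots 4-5, three peers with sparse uptime), this returns the optimal completion time of 3 slots.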
In fact, an upper bound for the optimal completion time can be found with O(log t) instances of the max-flow algorithm by doubling the value of T at each step; the optimal value can then be obtained, again in O(log t) iterations, by binary search. The computational complexity of determining an optimal schedule in a full information framework is thus O(pt log t (p log(p/t) + t log(t/p))). B. Random Scheduling In practice, assuming complete knowledge of peers' online behavior is not realistic. We introduce a randomized scheduling policy which only requires knowing which peers are online at the time of the scheduling decision. In Sec. III-C, we compare optimal and randomized scheduling using real traces. For backup operations, in each time-slot, fragments are uploaded from the data owner to no more than u_0 remote peers chosen at random among those that are currently online and that did not receive a fragment in previous time-slots. This satisfies Def. 1. For restore operations, in each time-slot, d_0 remote peers in the set P are randomly chosen among those that are currently online, and data is transferred back to the data owner. This satisfies Def. 2. We now use Fig. 2 to illustrate a possible outcome of the randomized schedule defined here and compare it to the optimal schedule computed using the max-flow formalization. We focus on the backup operation of x = 3 fragments carried out by the data owner p_0. In Fig. 2(a), the data owner may randomly select p_1 to be the recipient of the first fragment in time-slot t_1. Since we assume that m = 1 fragment can be stored on each distinct peer, this choice implies that time-slot t_2 is "wasted". In time-slot t_3 the data owner has no choice but to store data on peer p_3. Only in time-slot t_7 is the backup process complete, when the last fragment is uploaded to peer p_2. Hence, this randomized schedule can be written as (p_1, t_1); (p_3, t_3); (p_2, t_7).
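The randomized backup policy just described fits in a few lines; this sketch (names are ours) picks, in each slot, up to u_0 online peers uniformly among those that have not yet received a fragment:

```python
import random

# Randomized backup scheduling: `online[i][t]` gives a_{i,t}; the owner is
# peer 0. Returns a list of (peer, time-slot) tuples satisfying Definition 1.
def random_backup_schedule(online, x, u0=1, rng=random):
    I = len(online) - 1
    used, schedule, t = set(), [], 0
    while len(schedule) < x and t < len(online[0]):
        if online[0][t]:                       # owner must be online
            candidates = [i for i in range(1, I + 1)
                          if online[i][t] and i not in used]
            for i in rng.sample(candidates, min(u0, len(candidates))):
                schedule.append((i, t))
                used.add(i)                    # m = 1: one fragment per peer
        t += 1
    return schedule
```

On the Fig. 2 trace, depending on the draw in the first slot, the policy completes in 3 slots (matching the optimum) or only in slot t_7, as in the example above.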
The optimal schedule is obtained by computing the max-flow on the flow network in Fig. 2(b) (thick edges in the figure), and can be written as (p_2, t_1); (p_1, t_2); (p_3, t_3). The backup operation only requires 3 time-slots to complete. C. Numerical Analysis Here, we take a numerical perspective and compare optimal and randomized scheduling in terms of TTB and TTR. We focus on a single data owner p_0 involved in a backup operation. The input to the scheduling problem is the availability trace described in Sec. II-B, with the backup starting at a random moment; we set the duration of a time-slot to one hour. Let u_0 = 1 fragment per time-slot be the upload rate of p_0. We report results for x ∈ {40, 60, 80} backup fragments, and vary the number of randomly chosen remote peers so that I ∈ {1.1x, 1.2x, . . . , 2x}. We obtained each data point by averaging 1,000 runs of the experiment; furthermore, for each of those runs, we averaged the completion times of 1,000 random schedules in the same settings. Fig. 3 illustrates the ratio between the TTB achieved by optimal and randomized scheduling, normalized to the ideal backup time minTTB. We observe that both optimal and randomized scheduling approach minTTB when the number of remote peers available to store backup fragments increases: a large system improves transmission opportunities, and TTB approaches the ideal lower bound. However, when the number of backup fragments grows, which is a consequence of higher redundancy rates, randomized scheduling requires a larger pool of remote machines to approach the performance of optimal scheduling. We also note that the heterogeneous and correlated behavior of users in the availability trace results in "idle" time-slots in which neither optimal nor randomized scheduling can transfer data. The same evaluation can be used to assess a restore operation, even if the parameters acquire a different meaning.
Fig. 3. Numerical analysis: a comparison between optimal and randomized scheduling, using real availability traces. In this case, the number x of fragments that need to be transferred is the number of original fragments k, and the number of remote peers I corresponds to the number of encoded fragments n. For restores, as the redundancy rate n/k = I/x grows, restores will be more efficient. We conclude that randomized scheduling is a good choice for a P2P backup application, provided that: • to have efficient backups, the ratio between the number of nodes in the system and the number of fragments to store is not very close to one; • to have efficient restores, the redundancy rate is not very close to one. As a heuristic threshold, in our analysis we find that a value of I/x = 1.5 is sufficient to complete backup and restore within a tolerable (around 10%) deviation from minTTB or minTTR, respectively. In the following, we will therefore use randomized scheduling and make sure that such a ratio is reached, in order to ensure that scheduling does not impose too harsh a penalty on TTB and TTR. Birk and Kol [5] analyzed random backup scheduling by modeling peer uptime as a Markovian process. Albeit quantitatively different due to the absence of diurnal and weekly patterns in their model, their study reached a conclusion analogous to ours: in backups, the completion time of random scheduling converges to the optimal value as the system size grows. IV. REDUNDANCY POLICY In the literature, the redundancy rate is generally chosen a priori to ensure what we term prompt data availability. Given a system with average availability a, a target data availability t, and assuming the availability of each individual peer to be an independent random variable with probability a, a system-wide redundancy rate is computed as follows.
The total number n of redundant fragments required to meet the target t, when k original fragments constitute the data to backup, is computed as [6]: min{ n ∈ ℕ : ∑_{i=k}^{n} C(n, i) a^i (1 − a)^{n−i} ≥ t }. (3) We label this method fixed-redundancy, and use it in the following as a baseline approach. Ensuring prompt data availability is not our goal, since peers only retrieve their data upon (hopefully rare) crash events. Data downloads correspond to restore operations, which require a long time to complete because of the sheer size of backup data. Hence, we approach the design of our redundancy policy by taking into account the tradeoffs that a backup application has to face. On the one hand, with low redundancy the aggregate storage capacity of the system improves, TTB decreases, and maintenance costs drop. On the other hand, two factors discourage selecting excessively low redundancy rates: first, TTR increases, as fewer peers will be online to serve fragments during data restores; second, there is a higher risk of data loss. Our redundancy policy operates as follows. During the backup phase, peers constantly estimate their TTR and the probability of losing data, and adjust the redundancy rate according to the characteristics of the remote peers that hold their data. In practice, data owners upload encoded fragments until the estimates of TTR and data loss probability fall below an arbitrary threshold. When the threshold is crossed, the backup phase terminates. Note that TTB is generally several times longer than TTR. First, in the restore phase, peers are not likely to disconnect from the Internet. Second, most peers have asymmetric lines with fast downlinks and slow uplinks; third, backups require uploading redundant data, while restores involve downloading an amount of data equivalent to the original backup object.
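Eq. (3) can be evaluated by direct search over n; a minimal sketch, assuming i.i.d. peer availability a (function name is ours):

```python
from math import comb

# Eq. (3): smallest n such that at least k of the n fragments are available
# with probability >= target, for i.i.d. peer availability a.
def fixed_redundancy(k, a, target):
    n = k
    while True:
        avail = sum(comb(n, i) * a**i * (1 - a)**(n - i)
                    for i in range(k, n + 1))
        if avail >= target:
            return n
        n += 1
```

For the parameters used in the simulations of Sec. V (k = 64, a = 0.36, t = 0.99), the paper reports n = 228.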
Because of this unbalance, we argue that it is reasonable to use a redundancy scheme that trades longer TTR (which affects only users that suffer a crash) for shorter TTB (which affects all users). We now delve into the details of how to approximate TTR and the data loss probability. A. Approximating TTR As with the optimal scheduling problem, predicting the TTR accurately requires full knowledge of disk failure events and peer availability patterns. We obtain an estimate of the TTR with a heuristic approach; in Sec. V we show that our approximation is reasonable. We assume that a data owner p_0 remains online during the whole restore process. The TTR can be bounded for two reasons: i) the download bandwidth d_0 of the data owner is a bottleneck; ii) the upload rate of remote peers holding p_0's data is a bottleneck. Let us focus on the second case: we define the expected upload rate of a generic remote peer p_i holding a backup fragment of p_0 as the product a_i u_i of the average availability and the upload bandwidth of p_i. The data owner needs k fragments to recover the backup object: suppose these fragments are served by the k "fastest" remote peers. In this case, the "bottleneck" upload rate is that of the k-th peer p_j, the one with the smallest expected upload rate. If we consider l parallel downloads and a backup object of size o, a peer computes an estimate of the TTR as eTTR = max( o/d_0 , o/(l a_j u_j) ). (4) B. Approximating the Data Loss Probability Upon a crash, a peer with n fragments placed on remote peers can lose its data if more than n − k of them crash as well before data is completely restored. Considering a delay w that can pass between the crash event and the beginning of the restore phase, we compute the data loss probability within a total delay of t = w + eTTR. We consider disk crashes to be memoryless events, with constant probability for any peer at any time.
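The eTTR estimate of Eq. (4) translates directly into code; parameter names below are illustrative:

```python
# Eq. (4): eTTR = max(o/d0, o/(l * a_j * u_j)), where a_j * u_j is the
# expected upload rate of the k-th fastest storage peer.
def estimate_ttr(o, d0, k, l, peers):
    """peers: list of (availability a_i, upload bandwidth u_i) tuples."""
    rates = sorted((a * u for a, u in peers), reverse=True)
    aj_uj = rates[k - 1]          # bottleneck: k-th largest expected rate
    return max(o / d0, o / (l * aj_uj))
```

The first term of the max captures case i) above (owner downlink bound), the second case ii) (storage-peer uplink bound).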
Disk lifetimes are thus exponentially distributed stochastic variables with a parametric average t̄: a peer crashes by time t with probability 1 − e^(−t/t̄). The probability of data loss is then ∑_{i=n−k+1}^{n} C(n, i) (1 − e^(−t/t̄))^i (e^(−t/t̄))^{n−i}. (5) The data loss probability needs to be monitored with great care. In Fig. 4, we plot the probability of losing data as a function of the redundancy rate and the delay t. Here we set t̄ = 90 days and k = 64; when the time without maintenance is on the order of weeks, even a small decrease in redundancy can increase the probability of data loss by several orders of magnitude. In summary, our redundancy policy triggers the end of the backup phase, and determines the redundancy rate applied to a backup object. Since we trade longer TTR for shorter TTB, our scheme ensures that data redundancy is enough to make the data loss probability small, and keeps TTR under a certain value. Finally, we remark that our approximation techniques require knowing the uplink capacity and the average availability of remote peers. While a decentralized approach to resource monitoring is an appealing research subject, it is common practice (e.g., in Wuala) to rely on a centralized infrastructure to monitor peer resources. V. SYSTEM SIMULATION We proceed with a trace-driven system simulation, considering all the factors identified in Sec. III: churn, correlated uptime, peer bandwidth, congestion, and fragment granularity. A. Simulation Settings Our simulation covers three months, using the availability traces described in Sec. II-B, with the exception that peers remain online during restores. Uplink capacities of peers are obtained by sampling a real bandwidth distribution measured at more than 300,000 unique Internet hosts over a 48-hour period from roughly 3,500 distinct ASes across 160 countries [7]. These values have a highly skewed distribution, with a median of 77 kBps and a mean of 428 kBps.
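The data loss estimate of Eq. (5) is a binomial tail and can be computed directly; below, t_avg stands for the average disk lifetime t̄ (names are ours):

```python
from math import comb, exp

# Eq. (5): probability that more than n-k of the n storage peers crash within
# time t, with exponentially distributed disk lifetimes of mean t_avg.
def data_loss_probability(n, k, t, t_avg):
    p_crash = 1 - exp(-t / t_avg)
    return sum(comb(n, i) * p_crash**i * (1 - p_crash)**(n - i)
               for i in range(n - k + 1, n + 1))
```

As expected, adding redundancy (larger n for the same k) lowers the estimate, which is the effect plotted in Fig. 4.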
To represent typical asymmetric residential Internet lines, we assign to each peer a downlink speed equal to four times its uplink. Our adaptive redundancy policy uses the following parameters: we set the threshold for the estimated TTR to satisfy eTTR ≤ max(1 day, 2 · minTTR), and we keep the probability of data loss smaller than 10^−4, where w = 2 weeks is the maximum delay between crash and restore events (see Sec. IV-B). Each node has 10 GB of data to backup, and dedicates 50 GB of storage space to the application. The high ratio between these two values lets us disregard issues due to insufficient storage capacity (which is considered to be cheap) and focus on the subjects of our investigation, i.e., scheduling and redundancy. The fragment size f is set to 160 MB, resulting in k = 64 original fragments per backup object. We define peers' lifetimes to be exponentially distributed random variables with an expected value of 90 days; here we neglect the economics of the application, e.g., promoting user loyalty to the system, and hence do not consider unanticipated user departures. After they crash, peers return online immediately and start their restore process; in Sec. VI, we also consider a delay between crash events and restore operations. As discussed in Sec. IV-B, we compare against a baseline redundancy policy that assigns a fixed redundancy rate. Here we set a target data availability of t = 0.99, and use the system-wide average availability a = 0.36 as computed from our availability traces. Hence, we obtain a value n = 228 and a redundancy rate n/k = 3.56. Our simulations involve 376 peers. This is sufficient to ensure that the performance of randomized scheduling is close to optimality (see Sec. III). For each set of parameters, the simulation results are obtained by averaging ten simulation runs. B. Results Fig. 5 shows the cumulative distribution function of minTTB and minTTR: these baseline values are deeply influenced by the bandwidth distribution we used, and their gap is justified by the asymmetry of the access bandwidth and the assumption that peers stay online during the restore process. We now verify the accuracy of our approximation of TTR, expressed as the ratio of estimated versus measured TTR. This ratio has a median of 0.92, with 10th and 90th percentiles of 0.50 and 2.56, respectively. The values of TTR vary mostly due to the diurnal and weekly connectivity patterns of users in our traces, but in most cases the eTTR is a sensible rough estimate of the TTR. The adaptive policy pays off, with an average redundancy rate of 1.91 against a flat value of 3.56 for the baseline approach (Fig. 6(a)); the maintenance traffic decreases accordingly, and the system almost doubles its storage capacity. In addition, TTB is roughly halved with the adaptive scheme (Fig. 6(b)); a price for this is paid by crashed peers, which will have longer TTR (Fig. 6(c)). As we argued in Sec. IV, we think this loss is tolerable and well offset by the benefits of reduced redundancy. We observe tails where a minority of the nodes have very high TTB/minTTB and TTR/minTTR ratios. These are nodes with very high bandwidths and therefore low values of minTTB and minTTR (see Fig. 5); their backup and restore speeds are limited by the bandwidth of remote nodes, which are orders of magnitude slower. These results certify that our adaptive scheme beneficially affects performance. However, lower redundancy might result in higher risks of losing data: in the following Section, we analyze this. VI.
DATA LOSS AND DELAYED RESTORE Our simulation settings put the system under exceptional stress: the peer crash rate is two orders of magnitude higher than what is reported for commodity hardware [8]. In such an adverse scenario, we study the likelihood and the causes of data loss, and their relation to the redundancy scheme. In addition, we discuss the implications of delayed responses to crashes, which affect both restore and maintenance operations. We consider the following scenarios: • Immediate response: Peers start restores as soon as they crash. Moreover, they immediately alert relevant peers to start their maintenance. • Delayed response: Crashed peers return online after a random delay. If this delay exceeds a timeout, peers suffering from fragment loss start their maintenance. • Delayed assisted response: After the above timeout, a third party intervenes to rescue crashed peers whose data is at risk, by maintaining it. In our simulations, delays are exponentially distributed random variables with an average of one week; the timeout value is one week as well. For performance reasons, assisted maintenance can be supported by an online storage provider, which is used as a temporary buffer. Here we assume a provider with 100% uptime and unlimited bandwidth and storage space: maintenance is triggered upon expiration of the timeout, conditioned on a data loss probability greater than 10^−4. In our experiments, due to the inflated peer crash rates, between 11.4% and 14.6% of crashed peers could not recover their data. In Table I, we focus on those peers. The majority of data loss events affected peers that crashed before they completed their backups, according to the redundancy policy (unfinished backups column). This can be due to two reasons: the backup process is inherently time-consuming, because of the availability and bandwidth of data owners; or the backup system is inefficient.
To differentiate between these two cases, we consider unavoidable data loss events (rightmost column in the table). If a peer crashes before minTTB, no online backup system could have saved its data. Data backup takes time: this simple fact alone accounts for far more data loss than all the limitations of a P2P approach. Users should worry more about completing their backups quickly than about the reliability of their peers. The difference in redundancy between the high rate used by the fixed baseline and the adaptive approach does not significantly impact the data loss rate, except in the case of non-assisted delayed response. Assisted maintenance is an effective way to counter this effect. In Fig. 7 we show the costs of assisted repairs in terms of data traffic. Given that prices for storage services are highly asymmetric 4 , we only consider the outbound traffic, from provider to peers. Data volumes are expressed as fractions of the total size of backup objects in the system. There is a striking difference between the adaptive and fixed redundancy schemes: higher redundancy results in fewer emergency situations in which the server has to step in. The amount of data stored on the server has a peak load of less than 2.5% of the total backup size: assisted repairs are quick, therefore only a small fraction of the peers need assistance simultaneously. VIII. CONCLUSION The P2P paradigm applied to backup applications is a compelling alternative to centralized online solutions, which become costly for long-term storage. In this work, we revisited P2P backup and argued that such an application is viable. Because the online behavior of users is unpredictable and, at large scale, crashes and failures are the norm rather than the exception, we showed that scheduling and redundancy policies are paramount to achieve short backup and restore times.
We gave a novel formalization of optimal scheduling and showed that, with full information, a problem that may appear combinatorial in nature can actually be solved efficiently by reducing it to a maximum flow problem. Without full information, optimal scheduling is unfeasible; however, we showed that as the system size grows, the gap between randomized and optimal scheduling policies diminishes rapidly. Furthermore, we studied an adaptive scheme that strives to keep data redundancy small, which implies shorter backup times than a state-of-the-art approach that uses a system-wide, fixed redundancy rate. This comes at the expense of increased restore times, which we argued to be a reasonable price to pay, especially in light of our study on the probability of data loss. In fact, we determined that the vast majority of data loss episodes are due to incomplete backups. Our experiments illustrated that such events are unavoidable, as they are determined by the limitations of data owners alone: no online storage system could have avoided them. We conclude that short backup times are crucial, far more than the reliability of the P2P system itself. As such, the crux of a P2P backup application is to design mechanisms that optimize this metric. Our research agenda includes the design and implementation of a fully fledged prototype of a P2P backup application. Additionally, we will extend the parameter space of our study to include the natural heterogeneity of user demand in terms of storage requirements. To do so, we will collect measurements from both existing online storage systems and from a controlled deployment of our prototype implementation.
1008.1842
Retransmission based on packet acknowledgement (ACK/NAK) is a fundamental error control technique employed in IEEE 802.11-2007 unicast networks. However, the 802.11-2007 standard falls short of proposing a reliable MAC-level recovery protocol for multicast frames. In this paper we propose a latency- and bandwidth-efficient coding algorithm based on the principles of network coding for retransmitting lost packets in a single-hop wireless multicast network, and demonstrate its effectiveness over previously proposed network-coding-based retransmission algorithms.
Packet retransmission based on network coding for a one-to-many, single-hop multicast network is a recent field of study, first proposed in @cite_1 and later elaborated in @cite_7 . In @cite_7 the authors demonstrate, through simulation, the bandwidth effectiveness achieved by employing greedy network coding for retransmission over traditional ARQ schemes. In @cite_3 the authors follow up the work in @cite_1 by comparing various packet coding algorithms for packet retransmissions, while in @cite_8 the authors present an analytical study of the reliability performance of network coding compared with ARQ and FEC in a lossy network. Network Coded Piggy Back (NCPB) @cite_2 demonstrates an efficient and practical testbed implementation of a reliable many-to-many network model based on random linear network coding for real-time multi-player game networks. Since our work primarily focuses on proposing an efficient network-coding-based retransmission algorithm for a one-to-many single-hop network, we compare our results with the algorithm given in @cite_3 , which is the most closely related work.
{ "abstract": [ "", "The capacity gain of network coding has been extensively studied in wired and wireless networks. Recently, it has been shown that network coding improves network reliability by reducing the number of packet retransmissions in lossy networks. However, the extent of the reliability benefit of network coding is not known. This paper quantifies the reliability gain of network coding for reliable multicasting in wireless networks, where network coding is most promising. We define the expected number of transmissions per packet as the performance metric for reliability and derive analytical expressions characterizing the performance of network coding. We also analyze the performance of reliability mechanisms based on rateless codes and automatic repeat request (ARQ), and compare them with network coding. We first study network coding performance in an access point model, where an access point broadcasts packets to a group of K receivers over lossy wireless channels. We show that the expected number of transmissions using ARQ, compared to network coding, scales as ominus (log K) as the number of receivers becomes large. We then use the access point model as a building block to study reliable multicast in a tree topology. In addition to scaling results, we derive expressions for the expected number of transmissions for finite multicast groups as well. Our results show that network coding significantly reduces the number of retransmissions in lossy networks compared to an ARQ scheme. However, rateless coding achieves asymptotic performance results similar to that of network coding.", "Traditional approaches to reliably transmit information over an error-prone network employ either forward error correction (FEC) or retransmission techniques. In this paper, we propose some network coding schemes to reduce the number of broadcast transmissions from one sender to multiple receivers. 
The main idea is to allow the sender to combine and retransmit the lost packets in a certain way so that with one transmission, multiple receivers are able to recover their own lost packets. For comparison, we derive a few theoretical results on the bandwidth efficiency of the proposed network coding and traditional automatic repeat-request (ARQ) schemes. Both simulations and theoretical analysis confirm the advantages of the proposed network coding schemes over the ARQ ones.", "Wireless LANs (WLANs) have been deployed at a remarkable rate at university campuses, office buildings, airports, hotels, and malls. Providing efficient and reliable wireless communications is challenging due to inherent lossy wireless medium and imperfect packet scheduling that results in packet collisions. In this paper, we develop an efficient retransmission scheme (ER) for wirless LANs. Instead of retransmitting the lost packets in their original forms, ER codes packets lost at different destinations and uses a single retransmission to potentially recover multiple packet losses. We develop a simple and practical protocol to realize the idea and implement it in both simulation and testbed, and our results demonstrate the effectiveness of this approach.", "A multi-player video game via wireless connections using portable devices is one of the most popular applications of ad-hoc networks. Broadcast transmissions can be used for many-to-many communications in multi-player games. However, reliable broadcast communications are hard to realize because of the lack of retransmissions in the medium access control (MAC) layer. In this paper, we propose a broadcast method using random linear network coding in order to enhance the reliability of many-to-many and real-time communications. 
Exploiting the periodic nature of game traffic as well as inherent robustness offered by random network coding, the proposed method can provide reliable packet deliveries between 2 nodes which have a link under constant fade or even under temporary loss. Our simulation results show that the proposed method can provide higher reliability than the other schemes using multi point relay (MPR) or redundant transmissions such as forward error correction (FEC). We also implement the proposed method in a wireless testbed, and show that the proposed method achieves high reliability in a real-world environment with practical degree of complexity when installed on current wireless devices." ], "cite_N": [ "@cite_7", "@cite_8", "@cite_1", "@cite_3", "@cite_2" ], "mid": [ "", "2119011159", "2130286537", "2165683708", "2150088023" ] }
An Efficient Network Coding based Retransmission Algorithm for Wireless Multicast
One-to-many (broadcast/multicast) transmission is popular for many applications, and is widely implemented in Wireless Local Area Networks (WLANs) for its effectiveness in bandwidth consumption in a spectrum-limited wireless space. WLAN transmission is currently dictated by standards set out in IEEE 802.11-2007 [1]. For one-to-one (unicast) wireless transmission, transmission reliability is achieved through Automatic Repeat Request (ARQ) variants and/or Forward Error Correction (FEC) schemes. Since broadcast is a special case of multicast, without loss of generality we will use the term multicast henceforth. However, for multicast, no consideration is made for ACK/NAK and RTS/CTS packet exchange in 802.11-2007, except for those frames sent with the To DS field set. Additionally, in multicast networks where consideration for control packets is made, such packets are collected individually, one by one, and lost packets are likewise retransmitted one by one. As such, for a multicast network the reliability problem is twofold: 1) an efficient mechanism for the transmission of control packets (ACK/NAK, RTS/CTS), and 2) efficient retransmission of lost packets. As multicasting is gaining popularity for applications such as file distribution and multimedia conferencing, a more reliable scheme is needed to support future growth in multicast networks. Motivated by promising applications of Network Coding (NC), recent works [2] - [5] have demonstrated the suitability of NC for the retransmission of lost packets to improve bandwidth performance in a multicast network. Our algorithm is based on the concept of network coding [9], [10].
Network coding in its simplest form exploits the fact that, rather than transmitting wireless packets individually to receivers that may be 'overheard' by other receivers already holding those packets, and vice versa, it is often possible to combine the packets using a bit-by-bit XOR (denoted by ⊕) and transmit them as a single coded packet, which can then be decoded by all (or most) of the receivers based on the packets they already have. For illustration, suppose receiver R_1 has packet c_1 but not c_2, while R_2 has c_2 but not c_1. Rather than transmitting these two packets individually, the transmitter can encode c_1 and c_2 to generate c_1 ⊕ c_2, which is then multicast to both receivers and decoded. The remainder of the paper is organized as follows: Section II gives an overview of related work, followed by the problem statement in Section III. We then discuss previously proposed coding algorithms in Section IV and our BENEFIT algorithm in Section V. We confirm the performance of BENEFIT with simulation results in Section VI and conclude in Section VII. A. Our Contribution The novelty of our work is the development of a computationally feasible network-coding-based retransmission algorithm whose gains are twofold: 1) our algorithm BENEFIT delivers better throughput than the current best single-hop, NC-based retransmission algorithm, and 2) we also demonstrate that our algorithm achieves minimum time to decode packets. None of the previous works [2]-[5] on NC-based retransmission considers packet latency. We also show that it is no longer necessary to strictly follow the packet coding rule [10], [11]. This relaxation of the coding rule has the potential to enable modification and development of other network-coding-based applications. III.
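The XOR example above can be made concrete with a short sketch. The packet contents are hypothetical (chosen for illustration); the `xor` helper name is ours, not from the paper.

```python
# Minimal sketch of the c1 ⊕ c2 example: R1 holds c1 but not c2,
# R2 holds c2 but not c1; a single coded multicast lets both recover
# their missing packet using the packet they already have.
c1 = b"\x10\x20\x30"  # hypothetical packet contents
c2 = b"\x0f\xff\x01"

def xor(a: bytes, b: bytes) -> bytes:
    """Bit-by-bit XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

coded = xor(c1, c2)                   # one multicast transmission
recovered_c2_at_R1 = xor(coded, c1)   # R1 decodes with the c1 it holds
recovered_c1_at_R2 = xor(coded, c2)   # R2 decodes with the c2 it holds
assert recovered_c2_at_R1 == c2 and recovered_c1_at_R2 == c1
```

One coded transmission thus replaces two individual retransmissions, which is the bandwidth saving the rest of the paper builds on.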
PROBLEM STATEMENT Consider a single-hop multicast network with M fixed receiver stations R_i (where i is the receiver station ID, 1 ≤ i ≤ M), static membership, M ≥ 2, and a single transmitting station T_x. The packet batch size is denoted by N. Packet reception at R_i follows a Bernoulli model, whereby a successful reception of packet c_k (where k is the datagram packet ID, 1 ≤ k ≤ N) at R_i is indicated by '0' and a packet loss by '1' in the transmission matrix (see Table I). The transmission matrix is a two-dimensional table whose rows represent the receivers R_i and whose columns represent the packets c_k. For a given packet, its packet utility cu_k (0 ≤ cu_k ≤ M) is defined as the number of receivers that have not received the packet (i.e., the number of '1's in its column), while the receiver utility ru_i is the number of packets from a given set that R_i has not received. Under the Bernoulli model, packet loss is homogeneous across receivers and is determined by a fixed loss probability p_i, giving a successful reception probability of 1 − p_i. For a fixed batch size, L_i denotes the number of packets lost by station i, and Q_j is the probability that after N transmissions exactly j retransmissions are required (1 ≤ j ≤ N). The time taken for one transmission is one time slot. The time to decode a lost packet at a given R_i is the total number of transmissions (original transmission, retransmissions, and transmissions of coded packets) after which the lost packet is recovered by that R_i. For simplicity we assume that there is a reliable control-packet exchange mechanism in the network^1 and that all coded/retransmitted packets are successfully received by the receivers.
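The packet and receiver utilities defined above are simple row/column sums over the transmission matrix. A sketch (helper names ours), using the Table I values for illustration:

```python
# Transmission matrix from Table I: rows are receivers R1..R4,
# columns are packets c1..c5; '1' marks a lost packet.
matrix = [
    [1, 1, 0, 0, 1],  # R1
    [0, 1, 0, 1, 0],  # R2
    [0, 1, 1, 0, 0],  # R3
    [1, 0, 0, 1, 1],  # R4
]

def packet_utility(matrix, k):
    """cu_k: number of receivers that have not received packet c_k."""
    return sum(row[k] for row in matrix)

def receiver_utility(matrix, i, packets):
    """ru_i: number of packets from `packets` not received by R_i."""
    return sum(matrix[i][k] for k in packets)

cu = [packet_utility(matrix, k) for k in range(5)]              # [2, 3, 1, 2, 2]
ru = [receiver_utility(matrix, i, range(5)) for i in range(4)]  # [3, 2, 2, 3]
```

The computed values match the cu_k and ru_i columns of Table I.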
In the context of the BENEFIT algorithm, a benefit value is defined as the number of '1's in the transmission matrix that are converted to '0's after the transmission of a coded packet or the retransmission of a packet; hence the name of the algorithm: 'BENEFIT'. A. Theoretical Number of Retransmissions The probability that L_i ≤ j for a single R_i is given by P[L_i ≤ j] = Σ_{c=0}^{j} C(N, c) p_i^c (1 − p_i)^{N−c}. (1) The probability that all M stations experience no more than j losses is Π_{i=1}^{M} P[L_i ≤ j]. Given this result, the probability that the total number of retransmissions is exactly j is Q_j = Π_{i=1}^{M} P[L_i ≤ j] − Π_{i=1}^{M} P[L_i ≤ j − 1]. (2) A more elaborate discussion of retransmission bandwidth for different transmission schemes compared with network coding is given in [4], [6]. ^1 Control packets in a multicast network can be implemented by designing ACK packets from multiple STAs such that, upon reception of these simultaneously transmitted ACK packets, the original sender can efficiently decode the superimposition of all ACK packets and infer which receiver STAs have received the datagram packet [8]. IV. CODING ALGORITHMS Previous coding algorithms (except random linear network coding, RLNC) were built on the foundation of a simple packet coding rule [10], [11]: for T_x to (re)transmit M packets c_1, ..., c_M to M receivers R_1, ..., R_M respectively, the coded packet obtained by coding the M packets c_1, ..., c_M can be decoded at R_i only if R_i already has (M − 1) of the packets c_j, i.e., all except c_i (j ≠ i). We now discuss the major coding algorithms used in the network coding literature. A. Greedy Network Coding A greedy algorithm for coding packets has traditionally been used in the network coding literature and continues to be a dominant approach in many NC settings such as IP-level routers in the Internet [9], wireless mesh networks [10], and multi-hop wireless routing [11].
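Equations (1) and (2) can be evaluated directly. A sketch (function names ours), assuming i.i.d. receivers with a common loss probability p as in the homogeneous Bernoulli model of Section III:

```python
from math import comb

def p_loss_at_most(j, N, p):
    """Eq. (1): binomial CDF P[L_i <= j] for one receiver with loss prob p."""
    return sum(comb(N, c) * p**c * (1 - p)**(N - c) for c in range(j + 1))

def Q(j, N, p, M):
    """Eq. (2): probability that exactly j retransmissions are needed,
    i.e. P[max_i L_i = j] over M i.i.d. receivers."""
    cdf_j = p_loss_at_most(j, N, p) ** M
    cdf_jm1 = p_loss_at_most(j - 1, N, p) ** M if j > 0 else 0.0
    return cdf_j - cdf_jm1

# Sanity check: Q_j over j = 0..N telescopes to a probability distribution.
total = sum(Q(j, 10, 0.2, 4) for j in range(11))
```

Since the products telescope, the Q_j sum to (P[L_i ≤ N])^M = 1, which the check above confirms.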
A greedy network coding algorithm, like any traditional greedy algorithm, makes coding decisions that are locally optimal: it iteratively encodes the currently available packets for as long as the result can be decoded by all the intended receivers, without regard to whether the solution is globally optimal. B. Random Linear Network Coding RLNC [7] is a decentralized network coding approach in which the coded packet is given by c_coded = Σ_{k=1}^{N} g_k(e) c_k, where g(e) = (g_1(e), ..., g_N(e)) is the global encoding vector, included in c_coded as overhead information in the packet header. Each receiver R_i must successfully receive N innovative packets (i.e., coded packets that are linearly independent of the previously received coded packets). Once a receiver has N innovative packets, it can decode all N packets by simple matrix inversion. C. Sort-by-Utility The coding algorithm that E. Rozner et al. [2] report to deliver the best performance for a one-hop multicast network is Sort-by-Utility; we therefore compare our BENEFIT algorithm against Sort-by-Utility for evaluation purposes. In Sort-by-Utility, T_x first transmits the N packets and then sorts them in descending order of their cu_k values, using arrival time as the tie-breaker for packets with equal utility. Once the packets are sorted, the remaining operation is essentially a greedy coding algorithm: T_x iteratively codes packets, starting from those with the highest packet utility, together with successive sorted packets for as long as the coded packet can be decoded by all receivers.

TABLE I. Example transmission matrix ('1' = lost).
R_i   ru_i | c_1 c_2 c_3 c_4 c_5
cu_k  (10) |  2   3   1   2   2
R_1    3   |  1   1   0   0   1
R_2    2   |  0   1   0   1   0
R_3    2   |  0   1   1   0   0
R_4    3   |  1   0   0   1   1

Consider as an example the matrix given in Table I.
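As a sanity check, the greedy Sort-by-Utility pass just described can be sketched in a few lines and run on the Table I matrix. This is our reconstruction from the prose (helper names ours): a coded combination is decodable by every receiver iff each receiver is missing at most one packet in it.

```python
# Transmission matrix from Table I ('1' = lost); rows R1..R4, columns c1..c5.
matrix = [
    [1, 1, 0, 0, 1],  # R1
    [0, 1, 0, 1, 0],  # R2
    [0, 1, 1, 0, 0],  # R3
    [1, 0, 0, 1, 1],  # R4
]

def decodable_by_all(matrix, combo):
    """A receiver can decode an XOR combination iff it misses <= 1 packet in it."""
    return all(sum(row[k] for k in combo) <= 1 for row in matrix)

def sort_by_utility(matrix):
    """Greedy Sort-by-Utility pass: sort lost packets by utility (descending,
    arrival-time tie-break), then greedily XOR successive packets while the
    coded packet remains decodable by all receivers."""
    M, N = len(matrix), len(matrix[0])
    cu = [sum(row[k] for row in matrix) for k in range(N)]
    order = [k for k in sorted(range(N), key=lambda k: (-cu[k], k)) if cu[k] > 0]
    remaining, transmissions = list(order), []
    while remaining:
        combo = [remaining.pop(0)]
        for k in list(remaining):
            if decodable_by_all(matrix, combo + [k]):
                combo.append(k)
                remaining.remove(k)
        transmissions.append(combo)
    return transmissions

txs = sort_by_utility(matrix)  # packet indices: c2, c1⊕c3, c4, c5
```

With 0-based packet indices, the schedule comes out as [[1], [0, 2], [3], [4]], i.e. c_2, c_1 ⊕ c_3, c_4, c_5 in 4 retransmissions, matching the walkthrough in the text.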
The Sort-by-Utility algorithm, after the initial transmission of packets (c_1 to c_5), would sort the transmitted packets by their packet utilities cu_k, i.e., c_2, c_1, c_4, c_5, c_3. T_x then transmits the packets as follows: c_2, c_1 ⊕ c_3, c_4, c_5, requiring a total of 4 retransmissions with an average time to decode a packet of 4.4 time slots. V. BENEFIT ALGORITHM BENEFIT (Fig. 1), unlike Sort-by-Utility, does not need to wait until the end of the batch (N packet transmissions) before starting the retransmission process. It starts transmitting a coded packet as soon as the prospective coding packets satisfy the following three conditions: CodingBenefit(), ColumnsBenefit(), and CombinationBenefit() (see Table II). Retransmitting as soon as the right conditions are met, rather than waiting until the end of the batch, reduces the time to decode a packet. BENEFIT works on the premise that a coded packet need not be decodable by all receivers immediately, assuming that a currently non-decodable coded packet can be decoded from future transmissions of coded packets. This principle is the key strength of BENEFIT, and it is how BENEFIT departs from the traditional packet coding rule. In the first scan cycle, the decision whether to transmit a packet c_k or to scan the next packet (see the first step of Fig. 1) is made as follows: if some previously transmitted packet has never been the first prospective coding packet (i.e., pros_pks[0]) in that cycle and its current utility satisfies 1 ≤ cu_k < M, the algorithm scans the next packet and stores it as the first prospective coding packet; otherwise it transmits the next packet. In subsequent cycles, the algorithm only scans the packets.
CodingBenefit() ensures that packets are coded only if the immediate benefit derived from coding equals or outweighs the benefit of transmitting uncoded the single packet with minimum packet utility (cu_k) among the prospective coding packets and the considered packet c_k. ColumnsBenefit() selects the most suitable (fittest) packets for coding by eliminating those that cannot be decoded immediately by at least one receiver STA. If the packets satisfy the CodingBenefit() and ColumnsBenefit() conditions but not CombinationBenefit(), they are kept as eligible prospective coding packets, and the algorithm then scans for further packets which, in combination with the previous prospective coding packets, satisfy all three coding conditions. If the algorithm reaches the end of the batch and there are still '1's in the transmission matrix, the CombinationBenefit() condition is relaxed by decrementing DesiredBenefit, and the algorithm starts a new scan cycle (at most M − 1 scan cycles) until the transmission matrix consists of M × N '0's. A. Computational Complexity of BENEFIT The computational complexity of CodingBenefit(), ColumnsBenefit(), and CombinationBenefit() grows linearly with the number of prospective coding packets and receivers, while DecodeSearch() can be implemented using binary search, whose average complexity is logarithmic. Given that the number of prospective coding packets increases with the number of receivers, the computational complexity of BENEFIT can be considered linear in the number of receiver STAs. B. Illustrative Example - BENEFIT Consider the algorithm given in Fig. 1, illustrated with the transmission matrix of Table I. After T_x transmits c_1, c_1 is stored as the first prospective coding packet.
T_x then transmits c_2; c_1 and c_2 are checked against the CodingBenefit() and ColumnsBenefit() conditions, which they satisfy, and are then checked for the CombinationBenefit() condition. Since c_1 and c_2 satisfy CombinationBenefit() as well, the packets are coded and transmitted. Only R_1 cannot decode c_1 ⊕ c_2 immediately (the current values are cu_1 = cu_2 = 1). The algorithm then scans the next packet, c_2, and stores it as the first prospective coding packet, and T_x transmits c_3. Since c_2 and c_3 satisfy the CodingBenefit() and ColumnsBenefit() conditions but not CombinationBenefit(), c_3 is saved as a prospective coding packet. T_x then transmits c_4, which together with c_2 and c_3 satisfies the CombinationBenefit() condition; the packets are coded and transmitted, and all receivers benefit immediately from the transmission of c_2 ⊕ c_3 ⊕ c_4. The DecodeSearch() function at R_1, after decoding c_2 ⊕ c_3 ⊕ c_4, decodes c_1 ⊕ c_2 using c_2 and obtains c_1. The last packet in the batch, c_5, is then transmitted and stored as a prospective coding packet; however, since there is no possibility of finding coding partners for c_5, the algorithm decrements the value of DesiredBenefit twice, after which c_5 is retransmitted. VI. SIMULATION RESULTS We construct a C++-based discrete-time simulator, using a random number generator to produce transmission matrices like the one in Table I. The network characteristics are the same as described in Section III. For each set of parameter values, the simulation is repeated 1000 times. For performance evaluation, we use the retransmission ratio (also used in [2]), defined as the total number of retransmissions using the coding algorithm divided by the total number of retransmissions using the traditional 802.11 retransmission scheme. Theory in Fig.
2 and 3 refers to the retransmission ratio obtained by dividing Q_j (derived in Section III-A) by the total number of retransmissions using the traditional 802.11 retransmission scheme. Figure 2 shows that BENEFIT consistently performs better than Sort-by-Utility. The initial trough in the graph is due to the growth in coding opportunities as the number of STAs increases: with 2 STAs there is scope for only 2 packets to be coded, whereas with 4 STAs there is scope for 2, 3, or 4 packets to be coded together as opportunities arise. As the number of STAs increases further, however, the retransmission ratio starts increasing, because the increase in conflicts between coding packets (as Fig. 4 shows, the time to search for prospective coding packets also grows with the number of STAs) outweighs the increase in coding opportunities. Figure 3 shows the performance of BENEFIT over a range of loss probability values. Figures 2 and 3 show that for a small-to-medium network, BENEFIT performs close to the theoretical bound for all ranges of p_i. While the bandwidth performance of BENEFIT and Sort-by-Utility is nearly identical for low loss probabilities and/or small networks, BENEFIT can still be useful in such networks for real-time applications that are highly delay sensitive: as Fig. 4 shows, even for a small batch size, the average time BENEFIT takes to decode/retransmit a packet is far lower than that of Sort-by-Utility. BENEFIT's latency efficiency can be improved further by decrementing the initial value of DesiredBenefit, which relaxes the CombinationBenefit() condition and thus requires T_x to spend less time searching for suitable coding packets. Decreasing the batch size also reduces packet retransmission delay, as shown in [2], [4]. However, both techniques come at the cost of an increase in the retransmission ratio.
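The shape of the retransmission-ratio metric can be sketched with a small Monte Carlo experiment. Note the assumptions, which are ours rather than the paper's exact setup: the traditional 802.11 scheme is taken to retransmit every loss individually (sum of per-receiver losses), and an idealized coding scheme is taken to need only max_i L_i retransmissions; BENEFIT and Sort-by-Utility would fall between these extremes.

```python
import random

def simulate_ratio(M=4, N=16, p=0.2, trials=1000, seed=1):
    """Monte Carlo sketch of an idealized retransmission ratio.
    Assumptions (ours): traditional scheme needs sum_i L_i retransmissions,
    a perfect coding scheme needs max_i L_i. Losses are i.i.d. Bernoulli(p)."""
    rng = random.Random(seed)
    coded = traditional = 0
    for _ in range(trials):
        losses = [sum(rng.random() < p for _ in range(N)) for _ in range(M)]
        traditional += sum(losses)
        coded += max(losses)
    return coded / traditional if traditional else 0.0

ratio = simulate_ratio()  # strictly between 0 and 1 for these parameters
```

Because max_i L_i ≤ Σ_i L_i in every trial, the ratio is at most 1; the gap between the two quantities is exactly the bandwidth headroom that coding-based retransmission exploits.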
The flexibility to balance the throughput-delay tradeoff in BENEFIT allows the network designer to modify the algorithm based on the network requirements. Figure 5 shows that the average time to decode a packet gradually increases with p_i for BENEFIT; however, once p_i exceeds 0.8 the average time to decode starts decreasing, because most packets then satisfy the cu_k == DesiredBenefit condition and are retransmitted without any coding. The time saved searching for prospective coding packets reduces the average time to decode. This also explains the increase in the standard deviation for BENEFIT in Fig. 5: some packets are retransmitted without any encoding (shorter waiting time), while others must wait longer to find suitable coding partners. VII. CONCLUSION In this paper we have demonstrated a computationally feasible, bandwidth- and latency-efficient retransmission coding algorithm that does not strictly follow the traditional coding rule. Selectively modifying the BENEFIT conditions also allows the network designer to adjust the algorithm to the throughput-delay requirements of the network. We believe there is potential for further research exploiting relaxations of the coding rule, and for studying modifications to mechanisms like COPE [10] based on rules derived from BENEFIT.
3,076
1008.1842
2145516517
Retransmission based on packet acknowledgement (ACK/NAK) is a fundamental error control technique employed in IEEE 802.11-2007 unicast networks. However, the 802.11-2007 standard falls short of proposing a reliable MAC-level recovery protocol for multicast frames. In this paper we propose a latency- and bandwidth-efficient coding algorithm based on the principles of network coding for retransmitting lost packets in a single-hop wireless multicast network, and demonstrate its effectiveness over previously proposed network coding based retransmission algorithms.
The novelty of our work is the development of a computationally feasible network coding based retransmission algorithm whose gains are twofold: 1) our algorithm BENEFIT delivers better throughput than the current best single-hop, NC-based retransmission algorithm, and 2) we also demonstrate that our algorithm achieves minimum time to decode packets. None of the previous works @cite_3 - @cite_0 on NC-based retransmission considers packet latency. We also show that it is no longer necessary to strictly follow the packet coding rule @cite_4 , @cite_6 . This relaxation of the coding rule has the potential to enable modification and development of other network coding based applications.
{ "abstract": [ "Owing to the variability of the wireless environment, information packets are prone to loss during wireless transmission. In wireless broadcasting, any of multiple nodes may request retransmission of lost packets. This paper presents a novel retransmission approach for wireless broadcasting based on network coding (NCWBR), whose key idea is to combine different lost packets with network coding for retransmission. Theoretical analysis reveals that our approach ensures decodability at the receiving nodes and achieves better retransmission performance. Simulation results indicate that, compared with the existing approach, our approach effectively reduces the average number of transmissions and improves transmission efficiency.", "This paper proposes COPE, a new architecture for wireless mesh networks. In addition to forwarding packets, routers mix (i.e., code) packets from different sources to increase the information content of each transmission. We show that intelligently mixing packets increases network throughput. Our design is rooted in the theory of network coding. Prior work on network coding is mainly theoretical and focuses on multicast traffic. This paper aims to bridge theory with practice; it addresses the common case of unicast traffic, dynamic and potentially bursty flows, and practical issues facing the integration of network coding in the current network stack. We evaluate our design on a 20-node wireless network, and discuss the results of the first testbed deployment of wireless network coding. The results show that using COPE at the forwarding layer, without modifying routing and higher layers, increases network throughput.
The gains vary from a few percent to several folds depending on the traffic pattern, congestion level, and transport protocol.", "A recent approach, COPE, for improving the throughput of unicast traffic in wireless multi-hop networks exploits the broadcast nature of the wireless medium through opportunistic network coding. In this paper, we analyze throughput improvements obtained by COPE-type network coding in wireless networks from a theoretical perspective. We make two key contributions. First, we obtain a theoretical formulation for computing the throughput of network coding on any wireless network topology and any pattern of concurrent unicast traffic sessions. Second, we advocate that routing be made aware of network coding opportunities rather than, as in COPE, being oblivious to it. More importantly, our work studies the tradeoff between routing flows \"close to each other\" for utilizing coding opportunities and \"away from each other\" for avoiding wireless interference. Our theoretical formulation provides a method for computing source-destination routes and utilizing the best coding opportunities from available ones so as to maximize the throughput. We handle scheduling of broadcast transmissions subject to wireless transmit receive diversity and link interference in our optimization framework. Using our formulations, we compare the performance of traditional unicast routing and network coding with coding-oblivious and coding-aware routing on a variety of mesh network topologies, including some derived from contemporary mesh network testbeds. Our evaluations show that a route selection strategy that is aware of network coding opportunities leads to higher end-to-end throughput when compared to coding-oblivious routing strategies.", "Wireless LANs (WLANs) have been deployed at a remarkable rate at university campuses, office buildings, airports, hotels, and malls. 
Providing efficient and reliable wireless communications is challenging due to the inherently lossy wireless medium and imperfect packet scheduling that results in packet collisions. In this paper, we develop an efficient retransmission scheme (ER) for wireless LANs. Instead of retransmitting the lost packets in their original forms, ER codes packets lost at different destinations and uses a single retransmission to potentially recover multiple packet losses. We develop a simple and practical protocol to realize the idea and implement it in both simulation and testbed, and our results demonstrate the effectiveness of this approach." ], "cite_N": [ "@cite_0", "@cite_4", "@cite_6", "@cite_3" ], "mid": [ "2058330612", "2149863032", "2098232769", "2165683708" ] }
An Efficient Network Coding based Retransmission Algorithm for Wireless Multicast
One-to-many (broadcast/multicast) transmission scheme is popular for many applications, and is widely implemented in Wireless Local Area Networks (WLANs) for its effectiveness in bandwidth consumption in a spectrum-limited wireless space. WLANs transmission is currently dictated by standards set out by IEEE 802. 11-2007 [1]. For one-to-one (unicast) wireless transmission, transmission reliability is achieved through Automatic Repeat Request (ARQ) variants or/and Forward Error Correction (FEC) schemes. Since broadcast is a special case of multicast, without loss of generality we will use the term multicast henceforth. However for a multicast, no consideration is made for ACK/NAK and RTS/CTS packet exchange in 802.11-2007 except for those frames sent with the To DS field set. Additionally, for multicast network where consideration for control packet is made, such packets are collected individually one-by-one, and so is the retransmission of the lost packets done, that is, one-by-one. As such for multicast network, the reliability problem is two-folded: 1) Efficient mechanism for the transmission of control packets (ACK/NAK, RTS/CTS), and 2) efficient retransmission of packets lost. As multicasting is gaining popularity for applications such as file distribution and multimedia conferencing, a more reliable scheme is needed for the fulfillment of future growth in multicast network. Motivated by promising applications of Network Coding (NC), recent works [2] - [5] have demonstrated the suitability of NC for retransmission of lost packets to improve bandwidth performance in a multicast network. Our algorithm is based on the concept of network coding [9], [10]. 
Network coding in its simplest form exploits the fact that rather than transmitting wireless packets individually to some receivers which may be 'overheard' by some other receivers already having those packets, and vice versa, it is often possible to combine those packets using bit-by-bit XOR (denoted by ⊕) and transmit it as a single coded packet, which can then be decoded by all (/most) of the receivers based on the packets they already have. For illustration consider that receiver R 1 has packet c 1 but not c 2 , while R 2 has c 2 but not c 1 . Rather than transmitting these two packets individually, the transmitter can encode c 1 and c 2 to generate c 1 ⊕ c 2 , which is then multicast to both the receivers and decoded. The remaining paper is organized as follow: In Section II we give an overview of related work, followed by the problem statement in Section III. Following that, we discuss previously proposed coding algorithm in Section IV and our BENEFIT algorithm in Section V. We then confirm the performance of BENEFIT with simulation results in Section VI, and finally present conclusion in Section VII. A. Our Contribution The novelty of our work is the development of a computationally feasible network coding based retransmission algorithm whose gains are two-folded: 1) Our algorithm BENEFIT delivers better throughput with respect to the current best single-hop, NC based retransmission algorithm, and 2) we also demonstrate that our algorithm achieves minimum time to decode packets. None of the previous works [2] - [5] on NC based retransmission incorporates consideration of packet latency in their work. Here we will also show that it is no longer necessary to follow the packet coding rule [10], [11] strictly. This relaxation in the coding rule has the potential for modification and development of other network coding based applications. III. 
PROBLEM STATEMENT Consider a single-hop multicast network with M fixed receiver stations R i , (i is the receiver station ID, 1 ≤ i ≤ M ) with static membership and M ≥ 2, and a single transmitting station T x . Packet batch size is denoted by N . Packet reception at R i follows Bernoulli model, whereby a successful reception of packet c k (k is the datagram packet ID, 1 ≤ k ≤ N ) at R i is indicated by '0' and packet loss by '1' in the transmission matrix (see Table I). A transmission matrix is a 2-dimensional array table, where the rows represents R i and columns represents c k . For a given packet, its packet utility cu k (0 ≤ cu k ≤ M ) is defined as the number of receiver(s) which have not received the packet (i.e. the numbers of '1's in a given column). While the receiver utility ru i indicates the number of packet(s) not received by the receiver R i from a given set of specified packet(s). For Bernoulli model, packet loss at all receivers is homogeneous, and is determined by a fixed loss probability p i , which gives a packet successful reception probability of 1 − p i . For a fixed batch size, L i denotes the number of lost packets for station i. Q j is the probability that after N transmissions, the total number of packets lost is no more than j (1 ≤ j ≤ N ). The time taken for one transmission is represented by one time slot. The time to decode a lost packet for a given R i is the total number of transmissions (original transmission, retransmission and transmission of coded packet) after which the lost packet is recovered by the given R i . For simplicity we assume that there is a reliable control packet exchange mechanism in the network 1 and that all coded/retransmitted packets are successfully received by the receivers. 
In the context of BENEFIT algorithm, a benefit value is generally defined as the number of '1's in the transmission matrix which are converted to '0's after the transmission of a coded packet or retransmission of the packet, hence the name of the algorithm: 'BENEFIT'. A. Theoretical Numbers of Retransmissions The probability that L i ≤ j, for a single R i is given by P [L i ≤ j] = j c=0 ( N c )p c i (1 − p i ) N −c .(1) 1 Control packets in multicast network can be implemented by designing ACK packets from multiple STA such that, upon reception of these simultaneously transmitted ACK packets, the original sender is able to efficiently decode the packet which is the superimposition of all ACK packets and infer which receiver STA have received the datagram packet [8]. The probability that all M stations experience a packet loss rate no more than j is given by M i=1 P [L i ≤ j]. Given this result, the probability that the total number retransmission is j, is given by Q j = M i=1 P [L i ≤ j] − M i=1 P [L i ≤ j − 1].(2) A more elaborative discussion of retransmission bandwidth for different transmission schemes compared with network coding is given in [4], [6]. IV. CODING ALGORITHM Previous coding algorithms (except random linear network coding, RLNC) were build on the foundation of a simple packet coding rule [10], [11]: For T x to transmit (/retransmit) M packets c 1 , ..., c M to M receivers, R 1 , ..., R M respectively, the coded packet obtained by coding M packets c 1 , ..., c M can only be decoded at R i if R i has (M − 1) of c j packets, except c i (j = i). We now discuss the major coding algorithm used in network coding literature. A. Greedy Network Coding A greedy algorithm for coding packets has been traditionally used in several network coding based literature and still continues to be a dominant approach in many NC networks like IP-level routers in the Internet [9], wireless mesh network [10] and multi-hop wireless routing [11]. 
A greedy network coding algorithm, like traditional greedy algorithm makes coding decisions which gives optimal local results. A greedy coding algorithm encodes current locally available packets iteratively as long as it can be decoded by all the intended receivers, without consideration whether its a 'globally' optimal solution or not. B. Random Linear Network Coding RLNC [7] is a decentralized network coding approach, whereby the coded packet is given by c coded = k=N k=1 g(e)c k , where g(e) is the global encoding vector, and is included in c coded as an overhead information in the packet header. Each of the receiver R i must successfully receive N innovative packets (i.e. coded packets which are linearly independent of the previously received coded packet). Once the receivers have N innovative packet, it can then decode N packets using simple matrix inversion. C. Sort-by-Utility The coding algorithm which E. Rozner et. al. [2] proclaims to be the delivering the best performance for a one-hop multicast network is Sort-by-Utility. Therefore we will be comparing our BENEFIT algorithm with Sort-by-Utility for evaluation purposes. In a Sort-by-Utility coding algorithm, the T x first transmits N packets, and then sorts the packets in descending order of their cu k values, using arrival time as tie-breaker for those packets have equal packet utility. Once the packets are sorted, the remaining operation of Sort-by-Utility is essentially a greedy coding algorithm, i.e. the T x then iteratively starts coding successive packets starting from packets having highest packet utilities and codes them with successive sorted packets as long as the coded packet can be decoded by all receivers. R i /c k ru i c 1 c 2 c 3 c 4 c 5 cu k 10 2 3 1 2 2 R 1 3 1 1 0 0 1 R 2 2 0 1 0 1 0 R 3 2 0 1 1 0 0 R 4 3 1 0 0 1 1 Consider as an example the matrix given in Table I. 
Sortby-Utility algorithm after initial packet transmission (c 1 to c 5 ) would sort the transmitted packets based on its packet utilities cu k , i.e. c 2 , c 1 , c 4 , c 5 , c 3 . Then T x transmits the packets as follows: c 2 , c 1 ⊕ c 3 , c 4 , c 5 . Thus requiring a total of 4 retransmissions with an average time to decode a packet to be 4.4 time slots. V. BENEFIT ALGORITHM BENEFIT (Fig 1), unlike Sort-by-Utility does not need to wait until the end of the batch size (N packet transmissions) before starting the retransmission process. It start transmitting coded packet once the prospective coding packets satisfy the following three conditions: CodingBenef it(), ColumnsBenef it() and CombinationBenef it() (see Table II). Retransmitting as soon as the right conditions are met rather than wait till the end of the batch size in effect reduces the time to decode the packet. BENEFIT works on the basis that it is not necessary for the coded packet to be decodable by all the receivers immediately, assuming that the non-decodable coded packet can be decoded based on future transmission of coded packet(s). This principle is in essence the key strength of BENEFIT and thus, this way it contrasts the traditional packet coding rule. For the first scan cycle, to decide whether to transmit a packet c k or to scan the next packet (see the first step of Fig. 1) is decided based on the fact that if any previously transmitted packet has never been the first prospective coding packet (i.e. pros pks[0]) in that cycle, and the current value of cu k of that previously transmitted packet is 1 ≤ cu k < M , then the algorithm scan the next packet and stores it as the first prospective coding packet, else it transmits the next packet. For consecutive cycles, the algorithm only scans the packets. 
CodingBenefit() ensures that packets are coded only if the immediate benefit derived from such coding outweighs or equals the benefit of transmitting, uncoded, the single packet with minimum packet utility (cu_k) among the set of prospective coding packets and the packet under consideration (c_k). ColumnsBenefit() selects the most suitable (fittest) packets for coding, by eliminating those packets that cannot be immediately decoded by at least one receiver STA. If the packets satisfy the CodingBenefit() and ColumnsBenefit() conditions but not CombinationBenefit(), they are considered eligible prospective coding packets for combination with other packet(s), and the algorithm then searches (scans) for other packet(s) which, in combination with the previous prospective coding packets, satisfy all three conditions for packet coding. If the algorithm reaches the end of the batch and there are still '1's in the transmission matrix, the CombinationBenefit() condition is relaxed by decrementing DesiredBenefit, and the algorithm starts a new scan cycle (at most M-1 scan cycles) until the transmission matrix consists of M × N '0's. A. Computational Complexity of BENEFIT The computational complexity of CodingBenefit(), ColumnsBenefit() and CombinationBenefit() all grow linearly with the number of prospective coding packets and receivers, while DecodeSearch() can be implemented using a binary search algorithm whose average complexity is logarithmic. Given that the number of prospective coding packets increases with the number of receivers, the computational complexity of BENEFIT can be considered to grow linearly with the number of receiver STAs. B. Illustrative example - BENEFIT Consider the algorithm given in Fig. 1, illustrated with the transmission matrix given in Table I. After the T_x transmits c_1, c_1 is stored as the first prospective coding packet.
The T_x then transmits c_2; c_1 and c_2 are checked against the CodingBenefit() and ColumnsBenefit() conditions, which they satisfy, and hence also against the CombinationBenefit() condition. Since c_1 and c_2 satisfy the CombinationBenefit() condition as well, the packets are coded and transmitted. Only R_1 is unable to decode c_1 ⊕ c_2 immediately (the current value of cu_1 = cu_2 = 1). The algorithm then scans the next packet c_2 and stores it as the first prospective coding packet, and the T_x transmits c_3. Since c_2 and c_3 satisfy the CodingBenefit() and ColumnsBenefit() conditions but not the CombinationBenefit() condition, c_3 is saved as a prospective coding packet. The T_x then transmits c_4, which together with c_2 and c_3 satisfies the CombinationBenefit() condition; the packets are therefore coded and transmitted. All receivers benefit immediately from the transmission of c_2 ⊕ c_3 ⊕ c_4. The DecodeSearch() function at R_1, after decoding c_2 ⊕ c_3 ⊕ c_4, decodes c_1 ⊕ c_2 using c_2 and obtains c_1. The last packet in the batch, c_5, is then transmitted and stored as a prospective coding packet; however, since there is clearly no possibility of finding coding partners for c_5, the algorithm decrements the value of DesiredBenefit twice, after which c_5 is retransmitted. VI. SIMULATION RESULTS We construct a C++ based discrete-time simulator, using a random number generator to generate transmission tables like the one given in Table I. The network characteristics are the same as in Section III. For each set of values, the simulation is repeated 1000 times. For performance evaluation, we use the retransmission ratio (also used in [2]), defined as the total number of retransmissions using the coding algorithm divided by the total number of retransmissions using the traditional 802.11 retransmission scheme. The Theory curves in Figs. 2 and 3 refer to the retransmission ratio obtained by dividing Q_j (derived in Section III-A) by the total number of retransmissions using the traditional 802.11 retransmission scheme. Figure 2 shows that BENEFIT consistently performs better than Sort-by-Utility. The initial trough in the graph is due to the greater coding opportunity available as the number of STAs increases: a simple heuristic explanation is that with 2 STAs there is scope for only 2 packets to be coded together, whereas with 4 STAs there is scope for 2, 3 or 4 packets to be coded together as opportunities arise. As the number of STAs increases further, however, the retransmission ratio starts to rise, because the increase in conflict opportunities between coding packets (as Fig. 4 shows, the time to search for prospective coding packets also grows with the number of STAs) outweighs the increase in coding opportunities. Figure 3 shows the performance of BENEFIT over a range of loss probability values. Figures 2 and 3 show that for a small-to-medium network BENEFIT performs close to the theoretical bound for all ranges of p_i. While the bandwidth performance of BENEFIT and Sort-by-Utility is almost identical for low loss probability and/or small networks, BENEFIT can still be useful in such networks for real-time applications that are highly delay sensitive. As Fig. 4 shows, even for a small batch size the average time BENEFIT takes to decode/retransmit a packet is far less than that of Sort-by-Utility. BENEFIT's latency efficiency can be improved further by decrementing the initial value of DesiredBenefit, which relaxes the CombinationBenefit() condition and thus requires the T_x to spend less time searching for suitable coding packets. Decreasing the batch size also reduces packet retransmission delay, as shown in [2], [4]. However, both of these techniques come at the tradeoff cost of an increase in the retransmission ratio.
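The 4.4-slot average decode time quoted for Sort-by-Utility in the Table I example can be reproduced with a short script. The timing model is an assumption that matches the quoted figure: one transmission per slot, c_1..c_5 sent in slots 1-5, the four retransmissions (c_2, c_1 ⊕ c_3, c_4, c_5) in slots 6-9, retransmissions themselves assumed loss-free, and a lost packet's delay counted from its original slot to the slot in which its receiver decodes it:

```python
# Check of the Table I figures under Sort-by-Utility: 4 retransmissions,
# average time to decode a packet = 4.4 slots (assumed timing model; a
# stored XOR combination is resolved once all but one of its packets are
# known, which also illustrates the deferred-decoding idea of DecodeSearch).

lost = [                # 1 = lost (Table I), rows R1..R4, columns c1..c5
    [1, 1, 0, 0, 1],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 1],
]
schedule = [({1}, 6), ({0, 2}, 7), ({3}, 8), ({4}, 9)]  # (XOR combo, slot)

delays = []
for row in lost:
    known = {k for k in range(5) if row[k] == 0}
    stored, decoded_at = [], {}
    for combo, slot in schedule:
        stored.append(set(combo))
        progress = True
        while progress:          # chain-decode combos with one unknown packet
            progress = False
            for s in list(stored):
                unknown = s - known
                if len(unknown) <= 1:
                    stored.remove(s)
                    if unknown:
                        k = unknown.pop()
                        known.add(k)
                        decoded_at[k] = slot
                    progress = True
    # packet k was originally sent in slot k+1
    delays += [decoded_at[k] - (k + 1) for k in range(5) if row[k] == 1]

print(sum(delays) / len(delays))  # 4.4
```

Averaged over the 10 lost (receiver, packet) pairs, the delays sum to 44 slots, giving exactly the 4.4 slots stated in the text.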
The flexibility to balance the throughput-delay tradeoff in BENEFIT allows the network designer to modify the algorithm based on the network requirements. Figure 5 shows that the average time to decode a packet gradually increases with p_i for BENEFIT; however, as p_i crosses 0.8 the average time to decode starts decreasing, because most packets then satisfy the cu_k == DesiredBenefit condition and are hence retransmitted without any coding. The time saved searching for prospective coding packets reduces the average time to decode. This also explains the increase in standard deviation for BENEFIT in Fig. 5: some packets are retransmitted without any encoding (shorter waiting time), while other packets must wait longer to find suitable coding partners. VII. CONCLUSION In this paper we have demonstrated a computationally feasible, bandwidth- and latency-efficient retransmission coding algorithm that does not strictly follow the traditional coding rule. Selectively modifying the BENEFIT conditions also allows the network designer to adjust the algorithm to the throughput-delay requirements of the network. We believe there is potential for research that exploits relaxations of the coding rule, and for studying modifications to mechanisms like COPE [10] based on rules derived from BENEFIT.
3,076
1007.4040
2110717496
Description Logic Programs (dl-programs) proposed by constitute an elegant yet powerful formalism for the integration of answer set programming with description logics, for the Semantic Web. In this paper, we generalize the notions of completion and loop formulas of logic programs to description logic programs and show that the answer sets of a dl-program can be precisely captured by the models of its completion and loop formulas. Furthermore, we propose a new, alternative semantics for dl-programs, called the canonical answer set semantics, which is defined by the models of completion that satisfy what are called canonical loop formulas. A desirable property of canonical answer sets is that they are free of circular justifications. Some properties of canonical answer sets are also explored.
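As background for the completion and loop-formula machinery that the paper generalizes to dl-programs, the classical Lin-Zhao correspondence can be recalled on an ordinary two-atom logic program (a standard textbook example, not taken from this paper):

```latex
% Program P:   p <- q.    q <- p.
% Clark completion: each atom is equivalent to the bodies of its rules.
\mathrm{Comp}(P) = \{\, p \leftrightarrow q,\ q \leftrightarrow p \,\}
% Comp(P) has two models, \emptyset and \{p,q\}, but only \emptyset is an
% answer set: \{p,q\} is circularly justified.  The loop L = \{p,q\} has
% no rule supporting it from outside L, so its loop formula is
\mathit{LF}(L) = (p \lor q) \rightarrow \bot
% Models of Comp(P) \cup \{\mathit{LF}(L)\}: only \emptyset, which is
% exactly the answer set of P.
```

The paper's contribution is to lift precisely this kind of correspondence (completion plus loop formulas capturing answer sets) from ordinary logic programs to dl-programs.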
Integrating ASP with description logics has attracted a great deal of attention recently. The existing approaches can be roughly classified into three categories. The first is to adopt a nonmonotonic formalism that covers both ASP and first-order logic (extending it to the first-order case if it does not already cover the latter) @cite_21 @cite_18 , where ontologies and rules are written in the same language, resulting in a tight coupling. The second is a loose approach: an ontology knowledge base and the rules share the same constants but not the same predicates, and the communication is via a well-defined interface, such as dl-atoms @cite_13 . The third is to combine ontologies with hybrid rules @cite_5 @cite_9 @cite_12 , where predicates in the language of ontologies are interpreted classically, whereas those in the language of rules are interpreted nonmonotonically.
{ "abstract": [ "In the context of the Semantic Web, several approaches to the combination of ontologies, given in terms of theories of classical first-order logic, and rule bases have been proposed. They either cast rules into classical logic or limit the interaction between rules and ontologies. Autoepistemic logic (AEL) is an attractive formalism which allows to overcome these limitations, by serving as a uniform host language to embed ontologies and nonmonotonic logic programs into it. For the latter, so far only the propositional setting has been considered. In this paper, we present several embeddings of normal and disjunctive non-ground logic programs under the stable-model semantics into first-order AEL, and compare them in combination with classical theories, with respect to stable expansions and autoepistemic consequences. Our results reveal differences and correspondences of the embeddings and provide a useful guidance in the choice of a particular embedding for knowledge combination.", "The integration of Description Logics and Datalog rules presents many semantic and computational problems. In particular, reasoning in a system fully integrating Description Logics knowledge bases (DL-KBs) and Datalog programs is undecidable. Many proposals have overcome this problem through a \"safeness\" condition that limits the interaction between the DL-KB and the Datalog rules. Such a safe integration of Description Logics and Datalog provides for systems with decidable reasoning, at the price of a strong limitation in terms of expressive power. In this paper we define DL +log, a general framework for the integration of Description Logics and disjunctive Datalog. From the knowledge representation viewpoint, DL +log extends previous proposals, since it allows for a tighter form of integration between DL-KBs and Datalog rules which overcomes the main representational limits of the approaches based on the safeness condition. 
From the reasoning viewpoint, we present algorithms for reasoning in DL +log, and prove decidability and complexity of reasoning in DL +log for several Description Logics. To the best of our knowledge, DL+log constitutes the most powerful decidable combination of Description Logics and disjunctive Datalog rules proposed so far.", "Description logics (DLs) and rules are formalisms that emphasize different aspects of knowledge representation: whereas DLs are focused on specifying and reasoning about conceptual knowledge, rules are focused on nonmonotonic inference. Many applications, however, require features of both DLs and rules. Developing a formalism that integrates DLs and rules would be a natural outcome of a large body of research in knowledge representation and reasoning of the last two decades; however, achieving this goal is very challenging and the approaches proposed thus far have not fully reached it. In this paper, we present a hybrid formalism of MKNFp knowledge bases, which integrates DLs and rules in a coherent semantic framework. Achieving seamless integration is nontrivial, since DLs use an open-world assumption, while the rules are based on a closed-world assumption. We overcome this discrepancy by basing the semantics of our formalism on the logic of minimal knowledge and negation as failure (MKNF) by Lifschitz. We present several algorithms for reasoning with MKNFp knowledge bases, each suitable to different kinds of rules, and establish tight complexity bounds.", "The integration of Description Logics and Datalog rules presents many semantic and computational problems. In particular, reasoning in a system fully integrating Description Logics knowledge bases (DL-KBs) and Datalog programs is undecidable. Many proposals have overcomed this problem through a \"safeness\" condition that limits the interaction between the DL-KB and the Datalog rules. 
Such a safe integration of Description Logics and Datalog provides for systems with decidable reasoning, at the price of a strong limitation in terms of expressive power. In this paper we define DL +log, a general framework for the integration of Description Logics and disjunctive Datalog. From the knowledge representation viewpoint, DL +log extends previous proposals, since it allows for a tighter form of integration between DL-KBs and Datalog rules which overcomes the main representational limits of the approaches based on the safeness condition. From the reasoning viewpoint, we present algorithms for reasoning in DL +log, and prove decidability and complexity of reasoning in DL +log for several Description Logics. To the best of our knowledge, DL+log constitutes the most powerful decidable combination of Description Logics and disjunctive Datalog rules proposed so far.", "We propose a combination of logic programming under the answer set semantics with the description logics SHIF(D) and SHOIN(D), which underly the Web ontology languages OWL Lite and OWL DL, respectively. To this end, we introduce description logic programs (or dl-programs), which consist of a description logic knowledge base L and a finite set P of description logic rules (or dl-rules). Such rules are similar to usual rules in nonmonotonic logic programs, but they may also contain queries to L, possibly under default negation, in their bodies. They allow for building rules on top of ontologies but also, to a limited extent, building ontologies on top of rules. We define a suite of semantics for various classes of dl-programs, which conservatively extend the standard semantics of the respective classes and coincide with it in absence of a description logic knowledge base. More concretely, we generalize positive, stratified, and arbitrary normal logic programs to dl-programs, and define a Herbrand model semantics for them. 
We show that they have similar properties as ordinary logic programs, and also provide fixpoint characterizations in terms of (iterated) consequence operators. For arbitrary dl-programs, we define answer sets by generalizing Gelfond and Lifschitz's notion of a transform, leading to a strong and a weak answer set semantics, which are based on reductions to the semantics of positive dl-programs and ordinary positive logic programs, respectively. We also show how the weak answer sets can be computed utilizing answer sets of ordinary normal logic programs. Furthermore, we show how some advanced reasoning tasks for the Semantic Web, including different forms of closed-world reasoning and default reasoning, as well as DL-safe rules, can be realized on top of dl-programs. Finally, we give a precise picture of the computational complexity of dl-programs, and we describe efficient algorithms and a prototype implementation of dl-programs which is available on the Web.", "In the ongoing discussion about combining rules and Ontologies on the Semantic Web a recurring issue is how to combine first-order classical logic with nonmonotonic rule languages. Whereas several modular approaches to define a combined semantics for such hybrid knowledge bases focus mainly on decidability issues, we tackle the matter from a more general point of view. In this paper we show how Quantified Equilibrium Logic (QEL) can function as a unified framework which embraces classical logic as well as disjunctive logic programs under the (open) answer set semantics. In the proposed variant of QEL we relax the unique names assumption, which was present in earlier versions of QEL. Moreover, we show that this framework elegantly captures the existing modular approaches for hybrid knowledge bases in a unified way." ], "cite_N": [ "@cite_18", "@cite_9", "@cite_21", "@cite_5", "@cite_13", "@cite_12" ], "mid": [ "2568391096", "1481181327", "2045427198", "1481181327", "2100983017", "1952953333" ] }
0
1007.4040
2110717496
Description Logic Programs (dl-programs) proposed by constitute an elegant yet powerful formalism for the integration of answer set programming with description logics, for the Semantic Web. In this paper, we generalize the notions of completion and loop formulas of logic programs to description logic programs and show that the answer sets of a dl-program can be precisely captured by the models of its completion and loop formulas. Furthermore, we propose a new, alternative semantics for dl-programs, called the canonical answer set semantics, which is defined by the models of completion that satisfy what are called canonical loop formulas. A desirable property of canonical answer sets is that they are free of circular justifications. Some properties of canonical answer sets are also explored.
Although each approach above has its own merits, the loose approach possesses some unique advantages. In many situations, we would like to combine existing knowledge bases, possibly under different logics. In this case, a notion of interface is natural and necessary. The loose approach seems particularly intuitive, as it relies neither on modal operators nor on a multi-valued logic. One notices that dl-programs share similar characteristics with another recent interest, multi-context systems, in which knowledge bases of arbitrary logics communicate through bridge rules @cite_14 .
{ "abstract": [ "We propose a general framework for multi-context reasoning which allows us to combine arbitrary monotonic and nonmonotonic logics. Nonmonotonic bridge rules are used to specify the information flow among contexts. We investigate several notions of equilibrium representing acceptable belief states for our multi-context systems. The approach generalizes the heterogeneous monotonic multi-context systems developed by F. Giunchiglia and colleagues as well as the homogeneous nonmonotonic multi-context systems of Brewka, Serafini and Roelofsen." ], "cite_N": [ "@cite_14" ], "mid": [ "89175885" ] }
0
1007.4040
2110717496
Description Logic Programs (dl-programs) proposed by constitute an elegant yet powerful formalism for the integration of answer set programming with description logics, for the Semantic Web. In this paper, we generalize the notions of completion and loop formulas of logic programs to description logic programs and show that the answer sets of a dl-program can be precisely captured by the models of its completion and loop formulas. Furthermore, we propose a new, alternative semantics for dl-programs, called the canonical answer set semantics, which is defined by the models of completion that satisfy what are called canonical loop formulas. A desirable property of canonical answer sets is that they are free of circular justifications. Some properties of canonical answer sets are also explored.
However, the relationships among these different approaches are currently not well understood. For example, although we know how to translate a dl-program without the nonmonotonic operator @math to an MKNF theory while preserving the strong answer set semantics @cite_21 , when @math is involved, no such translation is known. Similarly, although a variant of Quantified Equilibrium Logic (QEL) captures the existing hybrid approaches, as shown by @cite_12 , it is not clear how one would apply the loop formulas for logic programs with arbitrary sentences @cite_0 to dl-programs, since, to the best of our knowledge, there is no syntactic, semantics-preserving translation from dl-programs to logic programs with arbitrary sentences or to QEL.
{ "abstract": [ "Recently Ferraris, Lee and Lifschitz proposed a new definition of stable models that does not refer to grounding, which applies to the syntax of arbitrary first-order sentences. We show its relation to the idea of loop formulas with variables by Chen, Lin, Wang and Zhang, and generalize their loop formulas to disjunctive programs and to arbitrary first-order sentences. We also extend the syntax of logic programs to allow explicit quantifiers, and define its semantics as a subclass of the new language of stable models by Such programs inherit from the general language the ability to handle nonmonotonic reasoning under the stable model semantics even in the absence of the unique name and the domain closure assumptions, while yielding more succinct loop formulas than the general language due to the restricted syntax. We also show certain syntactic conditions under which query answering for an extended program can be reduced to entailment checking in first-order logic, providing a way to apply first-order theorem provers to reasoning about non-Herbrand stable models.", "Description logics (DLs) and rules are formalisms that emphasize different aspects of knowledge representation: whereas DLs are focused on specifying and reasoning about conceptual knowledge, rules are focused on nonmonotonic inference. Many applications, however, require features of both DLs and rules. Developing a formalism that integrates DLs and rules would be a natural outcome of a large body of research in knowledge representation and reasoning of the last two decades; however, achieving this goal is very challenging and the approaches proposed thus far have not fully reached it. In this paper, we present a hybrid formalism of MKNFp knowledge bases, which integrates DLs and rules in a coherent semantic framework. Achieving seamless integration is nontrivial, since DLs use an open-world assumption, while the rules are based on a closed-world assumption. 
We overcome this discrepancy by basing the semantics of our formalism on the logic of minimal knowledge and negation as failure (MKNF) by Lifschitz. We present several algorithms for reasoning with MKNFp knowledge bases, each suitable to different kinds of rules, and establish tight complexity bounds.", "In the ongoing discussion about combining rules and Ontologies on the Semantic Web a recurring issue is how to combine first-order classical logic with nonmonotonic rule languages. Whereas several modular approaches to define a combined semantics for such hybrid knowledge bases focus mainly on decidability issues, we tackle the matter from a more general point of view. In this paper we show how Quantified Equilibrium Logic (QEL) can function as a unified framework which embraces classical logic as well as disjunctive logic programs under the (open) answer set semantics. In the proposed variant of QEL we relax the unique names assumption, which was present in earlier versions of QEL. Moreover, we show that this framework elegantly captures the existing modular approaches for hybrid knowledge bases in a unified way." ], "cite_N": [ "@cite_0", "@cite_21", "@cite_12" ], "mid": [ "49730540", "2045427198", "1952953333" ] }
0
1006.4248
2008433804
Multi-packet reception (MPR) has been recognized as a powerful capacity-enhancement technique for random-access wireless local area networks (WLANs). As is common with all random access protocols, the wireless channel is often under-utilized in MPR WLANs. In this paper, we propose a novel multi-round contention random-access protocol to address this problem. This work complements the existing random-access methods that are based on single-round contention. In the proposed scheme, stations are given multiple chances to contend for the channel until there are a sufficient number of "winning" stations that can share the MPR channel for data packet transmission. The key issue here is the identification of the optimal time to stop the contention process and start data transmission. The solution corresponds to finding a desired tradeoff between channel utilization and contention overhead. In this paper, we conduct a rigorous analysis to characterize the optimal strategy using the theory of optimal stopping. An interesting result is that the optimal stopping strategy is a simple threshold-based rule, which stops the contention process as soon as the total number of winning stations exceeds a certain threshold. Compared with the conventional single-round contention protocol, the multi-round contention scheme significantly enhances channel utilization when the MPR capability of the channel is small to medium. Meanwhile, the scheme automatically falls back to single-round contention when the MPR capability is very large, in which case the throughput penalty due to random access is already small even with single-round contention.
In related work, @cite_0 attempts to enhance the utilization of MPR channels by allowing stations to count down and transmit as long as there are fewer than @math ongoing transmissions in the air. To do this, one key assumption is that a station is able to detect the number of ongoing transmissions using an energy detector. This assumption, however, is not valid in wireless networks, where the received energy from each transmitting station is random and unknown a priori.
{ "abstract": [ "With the improvement of the physical layer's ability to decode more than one packets from multiple users, the classical collision model no longer applies and a cross-layer approach should be employed when designing multiple access protocols. This is especially the case for CSMA communications, which previously have not been studied under a multipacket reception (MPR) model. Since CSMA is used in the IEEE 802.11 wireless LAN standards and other wireless networks, improving its performances could have widespread benefits. In here, we investigate the impact of MPR on CSMA and propose a performance-improving cross-layer designed CSMA protocol for wireless networks." ], "cite_N": [ "@cite_0" ], "mid": [ "2161813191" ] }
Multi-Round Contention in Wireless LANs with Multipacket Reception
In random-access wireless networks, such as IEEE 802.11 wireless local area networks (WLAN), stations share a common medium through contention-based medium access control (MAC). Most of what we know about WLAN is based on the conventional collision model, where packet collisions occur when two or more stations transmit at the same time [1], [2]. With advanced PHY-layer signal processing techniques, it is possible for an access point (AP) to detect multiple concurrently transmitted packets through, for example, multiuser detection (MUD) techniques [3], [4]. This new collision model, referred to as multi-packet reception (MPR), opens up new possibilities for drastically enhancing the capacity of WLANs. Our prior work in [5] shows that the throughput of WLANs scales super-linearly with the MPR capability of the channel. With MPR, up to M stations can transmit at the same time without causing collisions, where M is referred to as the MPR capability of the channel. An immediate question is how the MAC should be redesigned to fully utilize the advantages of MPR. In [5], [6], we derived the optimal transmission probability and backoff exponent that maximize the throughput of MPR WLAN. With the optimal transmission probability, system throughput is greatly enhanced compared with that in traditional single-packet reception (SPR) WLANs. One observation from our prior work, however, is that the MPR channel is still under-utilized from time to time even when the optimal transmission probability is adopted. In other words, the channel is not always fully occupied by M concurrent packet transmissions during the data transmission phase. This is because the current MAC protocols are based on a "single-round contention" framework. There is essentially only one contention round for each data transmission phase. For example, in the DCF RTS/CTS access mode, a data transmission phase follows immediately as long as there is one successful RTS contention. 
Similarly, in the DCF basic access mode, data packet transmission also serves the purpose of channel contention, which implies that there is only one round of contention for each data transmission. Due to the random-access nature, the number of stations contending for the channel at a time is a random variable. Hence, the channel is unavoidably under-utilized when fewer than M stations contend for the channel simultaneously. As such, enhancing the capacity of MPR WLANs beyond what is currently achievable remains a challenging problem. This paper proposes a novel multi-round contention random-access protocol to address the problem. With the multi-round contention framework, more contention rounds are executed before data transmission if the number of stations that have already won the channel contention is small. Intuitively, the more contention rounds, the more likely that the channel is fully packed with M concurrent packet transmissions in the data transmission phase. On the other hand, more contention rounds lead to higher channel-contention overhead. Finding the desired tradeoff between channel utilization and contention overhead boils down to deciding when to terminate the contention rounds and start data transmission. In this paper, we conduct a rigorous analysis to characterize the optimal strategy using the theory of optimal stopping [8]. The key contributions of this paper are summarized in the following. • We show that the problem of finding the optimal stopping strategy that maximizes the system throughput is equivalent to the problem of maximizing the rate of return (MR), a subclass of optimal stopping problems. • By exploiting the monotone nature of the problem, we prove that the optimal stopping strategy for multi-round contention is a simple threshold-based rule.
Specifically, it is optimal to terminate the contention rounds as soon as the total number of stations that have succeeded in channel contention exceeds a certain threshold, regardless of the number of contention rounds that have already been executed. • Based on the analysis, the maximum throughput that is achievable in MPR networks with multi-round contention is derived. In particular, network throughput is maximized when the stopping threshold and the transmission probability of stations are jointly optimized. Our results show that multi-round contention drastically enhances the channel utilization compared with networks with single-round contention, especially for small to moderate M, which is the case in most practical situations. This analysis complements our work in [5], which focused on MPR WLANs with single-round contention. • For practical implementation, we propose a multi-round contention protocol which only requires minor revisions to the current IEEE 802.11 DCF. II. SYSTEM MODEL AND PROBLEM FORMULATION A. Multi-round Contention and Problem of Maximizing Rate of Return We consider a fully connected network with K mobile stations transmitting to an AP. The transmission of stations is coordinated by a random-access protocol. We assume that the AP has the capability to decode up to M simultaneous packet transmissions, be they contention packets or data packets. Interested readers are referred to Section V in [5] for a practical protocol to implement MPR in random access networks. A sketch of the multi-round contention mechanism is illustrated in Fig. 1. The precise model will be made concrete in the next subsection, where we propose a multi-round contention protocol as a minor amendment of the IEEE 802.11 RTS/CTS mechanism. In Fig. 1, the time axis is divided into contention rounds and data transmission slots. The period between the ends of two neighboring data transmission slots is referred to as a super round.
Stations transmit a small contention packet with probability τ in each contention round. If there are no more than M stations contending for the channel in the same round, the contention is successful and these stations become winning stations. Let $\{X_1, X_2, \cdots\}$ denote a sequence of random variables representing the number of winning stations in each contention round. Obviously, $0 \le X_i \le M$. For each $i = 1, 2, \cdots$, after observing $X_1 = x_1, X_2 = x_2, \cdots, X_n = x_n$, we may stop the contention and transmit $\sum_{i=1}^{n} x_i$ data packets in the data transmission slot if $\sum_{i=1}^{n} x_i \le M$; only M winning stations will be selected to transmit if $\sum_{i=1}^{n} x_i > M$. In other words, the number of transmitting stations is
$$y_n(x_1, \cdots, x_n) = \min\Big(\sum_{i=1}^{n} x_i, M\Big). \quad (1)$$
Instead of stopping the contention at the n-th round, we may also continue and observe $X_{n+1}$, hoping that $y_{n+1}(x_1, \cdots, x_{n+1})$ will be much larger than $y_n$. Of course, this is at the risk of wasting more time on contention without getting a reasonably larger $y_{n+1}$ in return. A stopping rule φ determines the stopping time N based on the sequence of observations $X = (X_1, X_2, \cdots)$. Note that N is random, as it is a function of the random variables X. Different realizations of observations may lead to different stopping times. The system throughput can then be calculated as
$$S_\phi = \frac{E_X[\text{Data payload transmitted in one super round}]}{E_X[\text{Duration of a super round}]} = \frac{E_X[Y_N]}{E_X[T_N]}, \quad (2)$$
where $Y_1, Y_2, \cdots$ is a sequence of random variables with realizations $y_1, y_2, \cdots$, and $T_N$ is the random variable representing the total amount of time spent to obtain a return of $Y_N$. Let C denote the class of stopping rules with
$$C = \{N : N \ge 1, E[T_N] < \infty\}. \quad (3)$$
Our purpose is to find the optimal stopping rule $N^* \in C$ that maximizes the system throughput $E_X[Y_N]/E_X[T_N]$. In optimal stopping theory, this problem is referred to as the problem of MR.

B.
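The tradeoff formulated above can be made concrete with a small Monte Carlo sketch. The following Python snippet is illustrative only (not the paper's code): the function name, the generic per-round cost `t_round`, and the per-super-round overhead `t_overhead` are our own simplifications. It estimates $S_\phi = E_X[Y_N]/E_X[T_N]$ under a hypothetical threshold stopping rule, for an arbitrary per-round winner distribution:

```python
import random

def simulate_throughput(winner_pmf, M, theta, t_round, t_overhead,
                        n_super=20000, seed=1):
    """Monte Carlo estimate of S_phi = E[Y_N]/E[T_N] (Eq. (2)) under a
    threshold rule: run contention rounds until the winners accumulated
    so far reach `theta`, then transmit min(winners, M) packets.
    winner_pmf[k] is Pr{X = k} for k = 0..M; t_round and t_overhead are
    assumed per-round and per-super-round time costs."""
    rng = random.Random(seed)
    support = range(M + 1)
    pkts = time = 0.0
    for _ in range(n_super):
        winners = rounds = 0
        while winners < theta:
            rounds += 1
            winners += rng.choices(support, weights=winner_pmf)[0]
        pkts += min(winners, M)
        time += rounds * t_round + t_overhead
    return pkts / time

# Example with a made-up winner distribution: compare stopping at the
# first success (theta = 1) with waiting for three winners (theta = 3).
pmf = [0.1, 0.3, 0.3, 0.2, 0.1]
s1 = simulate_throughput(pmf, 4, 1, t_round=1.0, t_overhead=5.0)
s3 = simulate_throughput(pmf, 4, 3, t_round=1.0, t_overhead=5.0)
```

With this made-up distribution, waiting for three winners fills the M = 4 channel more often than stopping at the first success, at the cost of extra contention rounds; the simulation makes that tradeoff directly measurable.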
Multi-round Contention in IEEE 802.11 WLAN

Having introduced the general framework of multi-round contention, we now propose a multi-round contention protocol based on the IEEE 802.11 RTS/CTS access mode. Note that the problem formulated in the preceding subsection and the analysis in later sections are general and not restricted to the protocol proposed in this subsection. In IEEE 802.11, the transmission of stations is coordinated by an exponential backoff (EB) mechanism. The EB mechanism adaptively tunes the transmission probability of a station according to the traffic intensity of the network. It works as follows. At each packet transmission, a station sets its backoff timer by randomly choosing an integer within the range $[0, W - 1]$, where W is the size of the contention window. The backoff timer freezes when the channel is busy and is decreased by one following each time slot when the channel is idle. The station transmits a packet from its buffer once the backoff timer reaches zero. At the first transmission attempt of a packet, W is set to $W_0$, the minimum contention window. Each time the transmission is unsuccessful, W is multiplied by a backoff factor r. That is, the contention window size is $W_j = r^j W_0$ after j successive transmission failures. The multi-round contention protocol is illustrated in Fig. 2. A station transmits an RTS packet when its backoff timer reaches zero. Previous work in [1], [2], [5] has shown that the backoff process yields an equivalent transmission probability τ at which a station transmits in a generic (randomly chosen) time slot. When the number of stations, K, is large, it is reasonable to assume that the number of transmissions in a generic time slot follows a Poisson distribution with parameter λ = Kτ [5]. That is,
$$\Pr\{k \text{ stations transmit in a generic time slot}\} = \frac{\lambda^k}{k!} e^{-\lambda}. \quad (4)$$
If no more than M stations transmit at the same time, then the contention is successful and these stations are marked as winning stations.
Otherwise, a collision occurs and there are zero winning stations. From (4), it can be shown that the number of winning stations $X_i$ follows the distribution
$$\Pr\{X_i = k\} = \begin{cases} \frac{\lambda^k}{k!(e^\lambda - 1)}, & 1 \le k \le M, \\ \sum_{j=M+1}^{\infty} \frac{\lambda^j}{j!(e^\lambda - 1)}, & k = 0, \\ 0, & \text{otherwise}, \end{cases} \quad (5)$$
with the expectation being
$$E[X] = \frac{\lambda}{1 - e^{-\lambda}} \sum_{k=0}^{M-1} \frac{e^{-\lambda}\lambda^k}{k!}. \quad (6)$$
After observing the outcome of the contention, the AP determines whether to stop the contention rounds according to the optimal stopping strategy. It keeps silent if it decides not to stop the contention rounds. According to IEEE 802.11 DCF, other stations will then continue to count down after sensing the channel idle for a DIFS (DCF interframe space) time, and contend for the channel when their counter values reach zero. If the AP decides to stop the contention rounds, it will randomly select, from all winning stations, at most M stations for data packet transmission. Note that if the total number of winning stations in this super round does not exceed M, then all of them will be selected. This decision is broadcast to all mobile stations through a CTS packet after a SIFS interval. Then, the selected winning stations send their data packets. After that, the AP responds with a group ACK, indicating which data packets have been received successfully. The stations that have contended but are not notified to transmit data by the CTS packet regard themselves as having encountered a collision, and consequently multiply their contention window by r and back off. Note that the collision can be either an actual one that occurs when more than M stations transmit together in a contention round, or a virtual one that occurs to winning stations that are not selected by the AP when the total number of winning stations exceeds M by the end of the last contention round. This protocol falls back to the traditional single-round contention protocol if the AP always terminates contention after the first successful contention round.
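Equations (5)-(6) are easy to check numerically. A minimal sketch, with hypothetical helper names, assuming the truncated-Poisson model above:

```python
import math

def winner_pmf(lam, M):
    """PMF of the number of winning stations X in one contention round,
    per Eq. (5): k = 1..M simultaneous attempts all succeed; more than M
    attempts collide, which is folded into the k = 0 mass."""
    denom = math.expm1(lam)  # e^lam - 1
    pmf = {k: lam**k / (math.factorial(k) * denom) for k in range(1, M + 1)}
    pmf[0] = 1.0 - sum(pmf.values())  # tail mass of collisions (> M attempts)
    return pmf

def winner_mean(lam, M):
    """E[X], computed directly from the pmf; should match Eq. (6)."""
    return sum(k * p for k, p in winner_pmf(lam, M).items())
```

Summing $k \Pr\{X = k\}$ over the pmf reproduces the closed form (6), since $\sum_{k=1}^{M} k\lambda^k/(k!(e^\lambda - 1)) = \frac{\lambda}{1 - e^{-\lambda}}\sum_{k=0}^{M-1} e^{-\lambda}\lambda^k/k!$.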
Under the analytical framework described in the last subsection, we regard one RTS contention, including the preceding idle slots and the succeeding interframe spaces, as one contention round, as illustrated in Fig. 2. Likewise, the "data transmission slot" contains the data packet transmission, the group ACK, together with the interframe spaces. Let $T_{RTS}$ and $T_{data}$ be the durations defined in Fig. 2, i.e.,
$$T_{RTS} = RTS + DIFS, \quad (7)$$
$$T_{data} = T_H + \frac{L}{R} + T_{ACK} + SIFS + DIFS, \quad (8)$$
where $T_H$ denotes the transmission time of a packet header, L denotes the payload length of a packet, R denotes the data transmission rate, and $T_{ACK}$ denotes the time duration of a group ACK packet. The acronyms (i.e., RTS, CTS, SIFS, DIFS, ACK) represent the corresponding time durations specified in the IEEE 802.11 standard. If the contention phase stops at the N-th round, then
$$T_N = N T_{RTS} + \sum_{i=1}^{N} I_i \sigma + CTS + 2SIFS - DIFS + T_{data}, \quad (9)$$
where σ is the length of an idle slot and $I_i$ is the number of idle slots preceding the RTS packet in the i-th contention round. From (4), it can be seen that a slot is idle with probability $e^{-\lambda}$ when K is large. Therefore, $I_i$ follows a geometric distribution with mean value
$$m_I = E[I] = \frac{e^{-\lambda}}{1 - e^{-\lambda}}. \quad (10)$$
The term $CTS + 2SIFS - DIFS$ in (9) is due to the fact that the duration of the last contention round is statistically different from that of the others. The system throughput defined in (2) can now be written as
$$S_\phi = \frac{E_X[Y_N]}{E_{X,I}[T_N]} = \frac{E_X[\min(\sum_{i=1}^{N} X_i, M)]}{E_{X,I}[N T_{RTS} + \sum_{i=1}^{N} I_i \sigma + B]} = \frac{E_X[\min(\sum_{i=1}^{N} X_i, M)]}{E_X[N(T_{RTS} + m_I \sigma) + B]} \text{ packets/second}, \quad (11)$$
where $B = CTS + 2SIFS - DIFS + T_{data}$ is a constant invariant of the stopping criterion. Intuitively, the decision whether to stop contention at a certain round could be based on the number of contention rounds that have already taken place, the number of stations that have already won the contention, or a combination of both.
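The timing quantities in (9)-(10) can be sketched as follows (hypothetical helper names; the parameter `B` stands for the constant tail $CTS + 2SIFS - DIFS + T_{data}$):

```python
import math

def mean_idle_slots(lam):
    """Mean number of idle slots preceding an RTS, Eq. (10): a generic
    slot is idle w.p. e^-lam, so I is geometric with mean
    m_I = e^-lam / (1 - e^-lam)."""
    p_idle = math.exp(-lam)
    return p_idle / (1.0 - p_idle)

def mean_super_round_time(lam, n_rounds, T_RTS, sigma, B):
    """E[T_N | N = n] from Eq. (9): n contention rounds, each costing
    T_RTS plus m_I idle slots of length sigma on average, plus the
    constant tail B."""
    return n_rounds * (T_RTS + mean_idle_slots(lam) * sigma) + B
```

For instance, at λ = ln 2 a slot is idle with probability 1/2, so on average exactly one idle slot precedes each RTS.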
However, our analysis in Section IV reveals a somewhat surprising result: the optimal stopping rule is solely based on the number of winning stations, regardless of how many contention rounds have already been executed.

III. PRELIMINARY ON OPTIMAL STOPPING THEORY

Before deriving the optimal stopping rule in the next section, we introduce in this section some definitions and theorems that will be useful in our later discussions. Theorem 1 states that the problem of MR is equivalent to a stopping rule problem that aims to maximize the return $Y_N - \mu T_N$ for some µ, where $Y_N$ and $T_N$ are as defined in (2).

Theorem 1. (a) If $\sup_{N \in C} E_X[Y_N]/E_X[T_N] = \mu$ is attained at some $N^* \in C$, then $\sup_{N \in C}(E_X[Y_N] - \mu E_X[T_N]) = 0$ is attained at $N^*$. (b) Conversely, if $\sup_{N \in C}(E_X[Y_N] - \mu E_X[T_N]) = 0$ for some µ, then $\sup_{N \in C} E_X[Y_N]/E_X[T_N] = \mu$. Moreover, if $\sup_{N \in C}(E_X[Y_N] - \mu E_X[T_N]) = 0$ is attained at $N^* \in C$, then $N^*$ is optimal for maximizing $E_X[Y_N]/E_X[T_N]$.

Remark 1. µ is in fact the optimal rate of return, which is equal to $\sup_{N \in C} E_X[Y_N]/E_X[T_N]$.

Theorem 1 implies that to maximize the system throughput, we can alternatively solve a regular stopping rule problem that maximizes $Z_N$, where $Z_N = Y_N - \mu T_N$. It can be shown that the optimal stopping rule is the one that satisfies
$$N^* = \min\big\{n \ge 1 : Z_n \ge \sup_{m \ge n} E_X[Z_m \mid X_1 = x_1, \cdots, X_n = x_n]\big\}. \quad (12)$$
In other words, it is optimal to stop at a stage if the return at this stage is no less than the expected return of stopping at any future stage.

Definition 1 (One-stage look-ahead rule). The one-stage look-ahead (1-sla) rule is the one that stops if the return for stopping at the current stage is at least as large as the expected return of continuing one stage and then stopping. Mathematically, the 1-sla rule is described by the stopping time
$$N_1 = \min\big\{n \ge 1 : Z_n \ge E_{X_{n+1}}[Z_{n+1} \mid X_1 = x_1, \cdots, X_n = x_n]\big\}. \quad (13)$$

Remark 2. The 1-sla rule is not optimal in general. However, Definition 2 and Theorem 2 show that $N^* = N_1$ when some conditions are satisfied.

Definition 2.
Let $A_n$ denote the event $\{Z_n \ge E[Z_{n+1} \mid X_1 = x_1, \cdots, X_n = x_n]\}$. We say the stopping rule problem is monotone if $A_0 \subset A_1 \subset A_2 \subset \cdots$. In other words, the problem is monotone if, whenever the one-stage look-ahead rule calls for stopping at stage n, it also calls for stopping at all future stages no matter what the future observations turn out to be.

Theorem 2. If $\lim_{n \to \infty} Z_n = Z_\infty$, $E[\sup_n |Z_n|] < \infty$, and the stopping rule problem is monotone, then the 1-sla rule is optimal.

IV. OPTIMAL STOPPING RULE FOR MPR WLAN WITH MULTI-ROUND CONTENTION

In this section, we analyze the optimal stopping rule that maximizes the system throughput of MPR WLAN with multi-round contention. In what follows, Lemma 1 shows that the 1-sla rule is a threshold-based rule and that the stopping time is solely determined by the number of stations that have already won the contention. Furthermore, it is proved in Lemma 2 that the 1-sla rule is the optimal stopping rule for our particular problem of maximizing the system throughput of MPR WLAN.

Lemma 1. The 1-sla rule that maximizes $E_X[Y_N]/E_X[T_N]$ or, equivalently, $Y_N - \mu T_N$, is a threshold-based rule that stops at the n-th contention round as soon as $\sum_{i=1}^{n} X_i \ge \theta$. When the number of stations K is large, θ is a fixed constant invariant with n.

Proof: Note that
$$Z_N = Y_N - \mu T_N = \min\Big(\sum_{i=1}^{N} X_i, M\Big) - \mu N T_{RTS} - \mu \sum_{i=1}^{N} I_i \sigma - \mu B = M - \Big(M - \sum_{i=1}^{N} X_i\Big)^+ - \mu N T_{RTS} - \mu \sum_{i=1}^{N} I_i \sigma - \mu B, \quad (14)$$
where $(\cdot)^+$ is equal to the argument if the argument is positive, and zero otherwise. The 1-sla rule described in (13) can now be rewritten as (15), where $\theta_n = M - v_n$ and
$$v_n = \max\big\{u : u - E_{X_{n+1}}[(u - X_{n+1})^+] \le \mu(T_{RTS} + m_I \sigma)\big\}. \quad (16)$$
When K is large, the distribution of $X_i$ is identical for all i (see (5)). In this case, $v_n$ and $\theta_n$ are invariant with n. If no confusion arises, the subscript n will be omitted hereafter. It is obvious from (15) that the 1-sla rule is a threshold-based rule with a constant threshold θ.

Remark 3.
Step (a) in (15) is due to the fact that $u - E_{X_{n+1}}[(u - X_{n+1})^+]$ is an increasing function of u.

Lemma 2. For MPR WLANs with multi-round contention, the stopping rule $N_1$ obtained by (15) is the optimal solution to Problem (12). That is, $N_1 = N^*$.

Proof: To prove Lemma 2, we note that the return function $Z_n = Y_n - \mu n(T_{RTS} + m_I \sigma) - \mu B$ has the following properties in our particular problem:
$$\lim_{n \to \infty} Z_n = Z_\infty = -\infty, \quad (17)$$
and
$$E[\sup_n Z_n] \le M - \mu(T_{RTS} + m_I \sigma) - \mu B < \infty. \quad (18)$$
Furthermore, it can be seen from (15) that the problem is monotone, because as long as the threshold θ is exceeded at a certain stage n and the 1-sla rule calls for stopping, the threshold will always be exceeded at all future stages regardless of the future observations of X. Therefore, the 1-sla rule is the optimal stopping rule according to Theorem 2.

Theorem 3. The optimal stopping rule that maximizes the throughput of MPR WLAN is a threshold-based rule described as follows:
$$N^* = \min\Big\{n \ge 1 : \sum_{i=1}^{n} X_i \ge \theta\Big\}. \quad (19)$$

Lemma 4. Given a threshold $\theta \in (0, M]$ and an attempt rate λ, $N^*$ obtained by the optimal stopping rule (19) has the distribution given in (21).

The chain of equalities behind Lemma 1 is
$$N_1 = \min\Big\{n \ge 1 : \Big(M - \sum_{i=1}^{n} X_i\Big)^+ - E_{X_{n+1}}\Big[\Big(M - \sum_{i=1}^{n} X_i - X_{n+1}\Big)^+ \,\Big|\, X_1 = x_1, \cdots, X_n = x_n\Big] \le \mu(T_{RTS} + m_I \sigma)\Big\}$$
$$= \min\Big\{n \ge 1 : M - \sum_{i=1}^{n} X_i - E_{X_{n+1}}\Big[\Big(M - \sum_{i=1}^{n} X_i - X_{n+1}\Big)^+ \,\Big|\, X_1 = x_1, \cdots, X_n = x_n\Big] \le \mu(T_{RTS} + m_I \sigma)\Big\}$$
$$\overset{(a)}{=} \min\Big\{n \ge 1 : M - \sum_{i=1}^{n} X_i \le v_n\Big\} = \min\Big\{n \ge 1 : \sum_{i=1}^{n} X_i \ge \theta_n\Big\}, \quad (15)$$
and the distribution of the stopping time is
$$\Pr\{N^*(\lambda, \theta) = n\} = \begin{cases} \sum_{k=\theta}^{M} \frac{\lambda^k}{k!(e^\lambda - 1)}, & n = 1, \\ \frac{\sum_{i=\theta}^{M} \lambda^i/i!}{(e^\lambda - 1)^n}\Big(\sum_{j=M+1}^{\infty} \frac{\lambda^j}{j!}\Big)^{n-1} + \frac{1}{(e^\lambda - 1)^n}\sum_{s=1}^{\theta-1}\Big(\sum_{i=\theta-s}^{M} \frac{\lambda^i}{i!}\Big)\frac{\lambda^s}{s!}\sum_{l=1}^{n-1}\binom{n-1}{l}\Big(\sum_{j=M+1}^{\infty} \frac{\lambda^j}{j!} - 1\Big)^{n-1-l} l^s, & n > 1. \end{cases} \quad (21)$$

Proof: When n = 1,
$$\Pr\{N^*(\lambda, \theta) = 1\} = \Pr\{X_1 \ge \theta\} = \sum_{k=\theta}^{M} \frac{\lambda^k}{k!(e^\lambda - 1)}, \quad (22)$$
which proves the first half of (21).
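The threshold of the 1-sla rule can be computed directly from (16) by exploiting the monotonicity of $g(u) = u - E[(u - X)^+]$. A sketch with our own function name; here µ, the rate of return, is treated as a given parameter, although in the full problem it is determined jointly with the rule:

```python
import math

def one_stage_threshold(lam, M, mu, t_round):
    """1-sla threshold of Eqs. (15)-(16): theta = M - v, where
    v = max{u : u - E[(u - X)^+] <= mu * t_round} and
    t_round abbreviates T_RTS + m_I * sigma. Since
    g(u) = u - E[(u - X)^+] is nondecreasing, a linear scan suffices."""
    denom = math.expm1(lam)
    pmf = {k: lam**k / (math.factorial(k) * denom) for k in range(1, M + 1)}
    pmf[0] = 1.0 - sum(pmf.values())  # collision mass, per Eq. (5)

    def g(u):
        return u - sum(p * max(u - k, 0) for k, p in pmf.items())

    v = 0
    while v + 1 <= M and g(v + 1) <= mu * t_round:
        v += 1
    return M - v
```

Since $g(u+1) - g(u) = 1 - \Pr\{X \le u\} \ge 0$, the scan finds the largest feasible u. With µ = 0 the rule waits for a full channel (θ = M); as µ grows, contention time becomes more expensive and θ shrinks toward 0.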
When n > 1,
$$\Pr\{N^*(\lambda, \theta) = n\} = \Pr\Big\{\sum_{i=1}^{n-1} X_i < \theta, \sum_{i=1}^{n} X_i \ge \theta\Big\} = \sum_{s=0}^{\theta-1} \Pr\Big\{X_n \ge \theta - s \,\Big|\, \sum_{i=1}^{n-1} X_i = s\Big\}\Pr\Big\{\sum_{i=1}^{n-1} X_i = s\Big\} = \sum_{s=0}^{\theta-1} \Pr\{X \ge \theta - s\}\Pr\Big\{\sum_{i=1}^{n-1} X_i = s\Big\}, \quad (23)$$
where the last equality is due to the fact that the sequence of $X_i$ is i.i.d. Substituting (20) into (23), the second half of (21) is obtained.

From Lemma 4, we can derive $E[N^*(\lambda, \theta)]$ as
$$E[N^*(\lambda, \theta)] = \sum_{n=1}^{\infty} n \Pr\{N^*(\lambda, \theta) = n\} = \sum_{k=\theta}^{M} \frac{\lambda^k}{k!(e^\lambda - 1)} \cdot \frac{1}{\Big(1 - \sum_{k=M+1}^{\infty} \frac{\lambda^k}{k!(e^\lambda - 1)}\Big)^2} + \sum_{n=1}^{\infty} \frac{n+1}{(e^\lambda - 1)^{n+1}} \sum_{l=1}^{n}\binom{n}{l}\Big(\sum_{j=M+1}^{\infty} \frac{\lambda^j}{j!} - 1\Big)^{n-l} \times \sum_{k=1}^{M} \frac{\lambda^k}{k!} \sum_{s=\max(\theta-k,1)}^{\theta-1} \frac{\lambda^s l^s}{s!}. \quad (24)$$

Lemma 5. $\sum_{i=1}^{N^*(\lambda,\theta)} X_i$ follows the distribution given in (25) when the contention phase is stopped according to the stopping rule (19).

Proof: For $s \ge \theta$, we have
$$\Pr\Big\{\sum_{i=1}^{N^*(\lambda,\theta)} X_i = s\Big\} = \sum_{n=1}^{\infty} \Pr\Big\{\sum_{i=1}^{n} X_i = s \,\Big|\, \sum_{i=1}^{n-1} X_i < \theta\Big\}\Pr\Big\{\sum_{i=1}^{n-1} X_i < \theta\Big\} = \sum_{n=1}^{\infty}\sum_{t=0}^{\theta-1} \Pr\{X = s - t\}\Pr\Big\{\sum_{i=1}^{n-1} X_i = t\Big\} = \Pr\{X = s\} + \sum_{n=2}^{\infty}\sum_{t=0}^{\theta-1} \Pr\{X = s - t\}\Pr\Big\{\sum_{i=1}^{n-1} X_i = t\Big\}. \quad (26)$$
Substituting (5) and (20) into (26), we get (25):
$$\Pr\Big\{\sum_{i=1}^{N^*(\lambda,\theta)} X_i = s\Big\} = \begin{cases} 0, & s < \theta, \\ \frac{\lambda^s}{s!(e^\lambda - 1)}\Bigg[\dfrac{1}{\sum_{j=1}^{M} \frac{\lambda^j}{j!(e^\lambda - 1)}} + \displaystyle\sum_{n=1}^{\infty} \frac{1}{(e^\lambda - 1)^n}\sum_{l=1}^{n}\binom{n}{l}\Big(\sum_{j=M+1}^{\infty} \frac{\lambda^j}{j!} - 1\Big)^{n-l}\sum_{t=1}^{\theta-1}\binom{s}{t} l^t\Bigg], & s \ge \theta. \end{cases} \quad (25)$$

B. Throughput of MPR WLAN with Multi-round Contention

Given λ and θ, the system throughput is calculated as
$$S(\lambda, \theta) = \frac{E_X\big[Y_{N^*(\lambda,\theta)}\big]}{E_X\big[T_{N^*(\lambda,\theta)}\big]} = \frac{E_X\big[\min\big(\sum_{i=1}^{N^*(\lambda,\theta)} X_i, M\big)\big]}{E_X[N^*(\lambda, \theta)](T_{RTS} + m_I \sigma) + B} \text{ packets/second}, \quad (27)$$
where $E_X[N^*(\lambda, \theta)]$ is given by (24) and
$$E_X\Big[\min\Big(\sum_{i=1}^{N^*(\lambda,\theta)} X_i, M\Big)\Big] = \sum_{s=\theta}^{M} s \Pr\Big\{\sum_{i=1}^{N^*(\lambda,\theta)} X_i = s\Big\} + \sum_{s=M+1}^{\infty} M \Pr\Big\{\sum_{i=1}^{N^*(\lambda,\theta)} X_i = s\Big\}. \quad (28)$$
WLAN throughputs studied in previous papers can be regarded as special cases of (27). In particular, when θ = 1, (27) reduces to the throughput performance of MPR WLANs with single-round contention [5]. When M = 1 and θ = 1, (27) reduces to the throughput of traditional WLANs with single-packet reception [1].

From (16), it can be seen that there exists an optimal $\theta^*(\lambda)$ that maximizes the system throughput for a given λ (or, equivalently, a given distribution of X). With the analysis in this section, $\theta^*(\lambda)$ can be obtained by performing a simple line search instead of calculating it directly from (16). In addition, if we have the freedom to adjust the attempt rate λ as well, the maximum system throughput can be achieved by jointly optimizing λ and θ:
$$S^* = \max_{\lambda, \theta} S(\lambda, \theta). \quad (29)$$
If we blindly set θ = M without optimizing it, then we obtain a lower bound
$$B_L(\lambda) \triangleq S(\lambda, M) = \frac{M}{E_X[N^*(\lambda, M)](T_{RTS} + m_I \sigma) + B} \le S(\lambda, \theta^*(\lambda)), \quad (30)$$
which yields
$$B_L^* = \max_{\lambda} B_L(\lambda) \le S^*. \quad (31)$$
As we will show shortly, the gap between $B_L^*$ and $S^*$ is marginal in most cases. Nonetheless, $B_L(\lambda)$ is much easier to calculate than $S(\lambda, \theta)$. Moreover, to achieve $B_L^*$, we simply need to find the λ that minimizes $E_X[N^*(\lambda, M)]$. This is much less computationally involved than finding the right λ and θ to maximize $S(\lambda, \theta)$. Hence, $B_L^*$ serves as a good approximation of $S^*$ from both analytical and practical perspectives. As an illustration, the throughput S (in units of Mbps) is plotted against λ and θ in Fig. 3 for an IEEE 802.11a WLAN with M = 10 and data transmission rate 54 Mbps. Other system parameters are shown in Table II. It can be seen that for each attempt rate λ, there exists an optimal θ that maximizes the throughput S, and vice versa. In general, traditional single-round contention (i.e., setting θ = 1) does not yield as high a throughput as multi-round contention (i.e., setting θ > 1). Fixing λ = 6, we plot $S(\lambda, \theta)$ and $B_L(\lambda)$ in Fig. 4. In particular, $B_L(\lambda) = S(\lambda, 10)$ according to the definition. It can be seen that $B_L$ is close to the maximum value of $S(\lambda, \theta)$, which occurs when θ = 9. Furthermore, note that only 72% of the maximum throughput can be achieved when θ = 1, in which case the system reduces to one with single-round contention.
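Rather than evaluating the series (24)-(28), the lower bound $B_L(\lambda) = S(\lambda, M)$ can be estimated by simulating $E_X[N^*(\lambda, M)]$ alone. A sketch with illustrative parameter values (`t_round` abbreviates $T_{RTS} + m_I\sigma$ and `B` the constant tail; none of the numbers below come from Table II):

```python
import math, random

def lower_bound_BL(lam, M, t_round, B, n_trials=5000, seed=2):
    """Monte Carlo estimate of B_L(lam) = M / (E[N*(lam, M)] * t_round + B)
    (Eq. (30)), obtained by fixing theta = M: contend until >= M winners."""
    rng = random.Random(seed)
    denom = math.expm1(lam)
    weights = [0.0] + [lam**k / (math.factorial(k) * denom)
                       for k in range(1, M + 1)]
    weights[0] = 1.0 - sum(weights)  # collision mass, per Eq. (5)
    mean_rounds = 0.0
    for _ in range(n_trials):
        winners = rounds = 0
        while winners < M:
            rounds += 1
            winners += rng.choices(range(M + 1), weights=weights)[0]
        mean_rounds += rounds
    mean_rounds /= n_trials
    return M / (mean_rounds * t_round + B)

# Coarse line search over the attempt rate lambda (illustrative grid):
best = max((lower_bound_BL(l, 10, 1.0, 8.0), l) for l in [2, 4, 6, 8])
```

Maximizing $B_L$ only requires finding the λ that minimizes $E_X[N^*(\lambda, M)]$, which is exactly the computational shortcut noted above.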
Reducing the data transmission rate to 6 Mbps, we plot the curves again in Fig. 5. In this case, $B_L$ coincides with the maximum value of $S(\lambda, \theta)$. In Fig. 6, we plot the optimal threshold $\theta^*(\lambda^*)$ obtained from (29). One interesting observation is that $\theta^*$ is not necessarily close to M, especially for large M. This implies that in these cases the optimal strategy would rather transmit fewer than M packets than execute too many contention rounds.

VI. THROUGHPUT SCALING AND COMPARISON WITH SINGLE-ROUND CONTENTION

Our previous study in [5] has demonstrated MPR as a powerful capacity-enhancement technique in traditional WLANs with single-round contention. In particular, we have proved that the maximum throughput increases super-linearly with M, the MPR capability of the channel. In this section, we extend the study by investigating (i) how the maximum system throughput scales with the MPR capability M under multi-round contention; and (ii) how multi-round contention improves system performance compared with single-round contention. In the following figures, $B_L^*$ is obtained through analytical approaches, while $S^*$ is obtained through semi-analytical simulations. In Fig. 7, we plot the maximum throughput $S^*$ and its lower bound $B_L^*$ as a function of M when the system parameters are set as in Table II. It can be seen that the system throughput increases drastically with the MPR capability. Moreover, the lower bound $B_L^*$ is very close to the actual throughput $S^*$. The maximum normalized throughput with respect to M, i.e., $S^*/M$, is plotted in Fig. 8. For comparison, the maximum normalized throughput of a single-round contention RTS/CTS access network (derived in [5]) is also plotted. Three conclusions can be drawn from the figure. First, similar to the single-round contention case, $S^*/M$ increases with M in the multi-round contention case when M is larger than 4. In other words, multi-round contention preserves the super-linear throughput scaling.
In practical systems, M is directly related to cost (e.g., bandwidth in CDMA systems or the number of antennas in multi-antenna systems). Super-linear scaling of throughput implies that the achievable throughput per unit cost increases with M. Second, multi-round contention significantly improves system throughput compared with single-round contention, especially for small to medium M (say M ≤ 20). (Fig. 7 caption: $S^*$ and $B_L^*$ for IEEE 802.11a with L = 8184 bits and data transmission rate 54 Mbps.) The throughput improvement can be as high as 23%. This is because the channel is more likely to be "fully occupied" with packets under the multi-round contention MAC. Note that small to medium M is of particular interest for practical applications, where the multiuser detection capability at the receiver is typically not high. This provides a strong incentive for the deployment of a multi-round contention MAC in future wireless networks. Third, the gap between the normalized throughputs of multi-round and single-round contention networks diminishes when M grows (perhaps impractically) large. This is not surprising, however, as we have proved in [5] that the throughput penalty due to distributed random access diminishes to zero when M becomes large even with single-round contention. Thus, the optimal stopping strategy we have derived may turn out to stop the contention process after one contention round most of the time. Before leaving this section, note that we have assumed that the attempt rate λ remains constant over all contention rounds. In principle, the throughput $S^*$ can be further improved by allowing λ to vary from one contention round to another. For example, a smaller λ should be adopted if the number of winning stations is already close to the threshold θ, so as to reduce the probability of collision in the contention round. By doing so, however, the derivation of the optimal stopping rule would become much more involved.
Fortunately, from the throughput upper bound that we derive in Appendix B, it can be seen that the potential throughput enhancement from varying λ across different slots is marginal.

VII. DISCUSSIONS AND VERIFICATION OF ANALYSIS

In this section, we verify the analysis through simulation. We also discuss the validity of the Poisson assumption adopted in Sections IV and V. In Fig. 9, we simulate the multi-round IEEE 802.11 WLAN described in Section II-B and Fig. 2 when there are K = 100 stations. In the figure, the MPR capability M varies from 1 to 40. We set the backoff exponent r = 2, minimum contention window $W_0 = 16$, and threshold θ = M. Other parameters are the same as in Table II. For each M, the simulation is run for 100,000 generic time slots after 5,000 slots of warm-up. For comparison, we also plot the analytical results $S(\lambda, \theta)$, setting λ to the average aggregate transmission probability obtained from the simulations. It can be seen from the figure that the simulation and analytical results almost overlap when M is relatively small. When M exceeds 30, the simulation results deviate slightly from the analysis. This is because for large M, each station tends to transmit with a higher probability τ under the exponential backoff scheme. In this case, λ = Kτ is not much smaller than K, and hence the Poisson assumption becomes less accurate.

VIII. CONCLUSIONS

In this paper, we have proposed a multi-round contention random-access protocol for WLANs with MPR capability. An optimal stopping rule is derived to strike the desired tradeoff between channel utilization and contention overhead. In particular, we prove that the one-stage look-ahead rule, which is a simple threshold-based rule, is optimal due to the special structure of the return function. The multi-round contention protocol significantly improves system throughput compared with conventional single-round contention protocols, especially for small to medium M.
This is because the MPR channel is now more likely to be packed with as many packets as it can resolve. Furthermore, multi-round contention preserves super-linear throughput scaling, providing a strong incentive to deploy MPR in future WLANs. In Sections IV and V, we have assumed that K is large enough that the number of transmissions in a generic time slot follows a Poisson distribution. Our simulation in Fig. 9 shows that this assumption is very accurate when K is sufficiently larger than M. On the other hand, the Poisson assumption is less accurate when K is relatively small. In this case, the transmission attempts follow a binomial distribution that varies from one contention round to another, as the number of potential contenders decreases with the number of contention rounds. As a result, $\theta_n$ defined in (15) is no longer invariant with n, making the analysis of the 1-sla scheme much more complicated. In our future work, we will devise effective mechanisms to analyze multi-round contention MPR WLANs for small to medium K.

APPENDIX A
PROOF OF LEMMA 3

The proof is trivial for s = 0. For s > 0, we prove the lemma by induction. It is obvious that Lemma 3 holds when n = 1. Assuming that Lemma 3 holds for n = N, we show in (32) that it also holds for n = N + 1:
$$\Pr\Big\{\sum_{i=1}^{N+1} X_i = s\Big\} = \sum_{s_N=1}^{s-1} \Pr\Big\{X_{N+1} = s - s_N \,\Big|\, \sum_{i=1}^{N} X_i = s_N\Big\}\Pr\Big\{\sum_{i=1}^{N} X_i = s_N\Big\} + \Pr\Big\{X_{N+1} = 0 \,\Big|\, \sum_{i=1}^{N} X_i = s\Big\}\Pr\Big\{\sum_{i=1}^{N} X_i = s\Big\} + \Pr\Big\{X_{N+1} = s \,\Big|\, \sum_{i=1}^{N} X_i = 0\Big\}\Pr\Big\{\sum_{i=1}^{N} X_i = 0\Big\}$$
$$= \sum_{s_N=1}^{s-1} \frac{\lambda^{s-s_N}}{(s-s_N)!(e^\lambda - 1)} \cdot \frac{\lambda^{s_N}}{s_N!(e^\lambda - 1)^N}\sum_{l=1}^{N}\binom{N}{l}\Big(\sum_{j=M+1}^{\infty}\frac{\lambda^j}{j!} - 1\Big)^{N-l} l^{s_N} + \frac{\sum_{j=M+1}^{\infty}\frac{\lambda^j}{j!}}{e^\lambda - 1} \cdot \frac{\lambda^s}{s!(e^\lambda - 1)^N}\sum_{l=1}^{N}\binom{N}{l}\Big(\sum_{j=M+1}^{\infty}\frac{\lambda^j}{j!} - 1\Big)^{N-l} l^s + \frac{\lambda^s}{s!(e^\lambda - 1)} \cdot \frac{1}{(e^\lambda - 1)^N}\Big(\sum_{j=M+1}^{\infty}\frac{\lambda^j}{j!}\Big)^N$$
$$= \frac{\lambda^s}{s!(e^\lambda - 1)^{N+1}}\Bigg[\sum_{l=1}^{N}\binom{N}{l}\Big(\sum_{j=M+1}^{\infty}\frac{\lambda^j}{j!} - 1\Big)^{N-l}\big((l+1)^s - l^s - 1\big) + \sum_{j=M+1}^{\infty}\frac{\lambda^j}{j!}\sum_{l=1}^{N}\binom{N}{l}\Big(\sum_{j=M+1}^{\infty}\frac{\lambda^j}{j!} - 1\Big)^{N-l} l^s + \Big(\sum_{j=M+1}^{\infty}\frac{\lambda^j}{j!}\Big)^N\Bigg]$$
$$= \frac{\lambda^s}{s!(e^\lambda - 1)^{N+1}}\sum_{l=1}^{N+1}\binom{N+1}{l}\Big(\sum_{j=M+1}^{\infty}\frac{\lambda^j}{j!} - 1\Big)^{N+1-l} l^s. \quad (32)$$

APPENDIX B
MULTI-ROUND CONTENTION WITH CARRY-OVER: A THROUGHPUT UPPER BOUND

In our proposed scheme, a winning station may not be selected for data transmission when there are more than M winning stations by the end of the contention phase. The unselected stations regard themselves as having encountered virtual collisions and back off. In this appendix, we propose an alternative scheme where unselected winning stations are carried over to the next super round instead of being discarded. In other words, these stations are automatically categorized as winning stations without the need to contend for the channel again. This scheme is based on the ideal assumption that the AP can memorize the contention outcomes of the previous round. Hence, no "contention effort" is wasted: all winning stations can eventually transmit without the need to contend again. As such, it is always optimal to wait until there are no fewer than M winning stations (including the carried-over ones) before data transmission. The system throughput is given by (33), with the optimal λ that maximizes $S_c$ being
$$\lambda_c^* = \arg\max S_c = \arg\max \frac{E[X]}{T_{RTS} + m_I \sigma}. \quad (34)$$
It is not surprising that the optimal $\lambda_c^*$ is simply the one that maximizes the number of winning stations per unit time during the contention phase. $S_c(\lambda_c^*)$ serves as an upper bound on the throughput of a multi-round contention WLAN without carry-over. In other words, it puts a cap on the potential throughput enhancement achievable by varying λ across contention rounds. In Fig. 10, the throughput upper bound is plotted together with the maximum throughput $S^*$ of the non-carry-over protocol. It shows that $S^*$ can hardly be further improved when M is small, and at most by 11% when M is as large as 80.
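Since the closed form of (33) is not reproduced in the text above, the carry-over bound can be sketched under an assumption of ours: with no contention effort wasted, M packets cost on average $M/E[X]$ contention rounds plus the constant tail B, giving $S_c = M/\big((M/E[X])(T_{RTS} + m_I\sigma) + B\big)$. This form is at least consistent with (34), because maximizing $S_c$ over λ reduces to maximizing $E[X]$ for a fixed per-round cost:

```python
import math

def carryover_upper_bound(lam, M, t_round, B):
    """Assumed throughput upper bound with carry-over (Appendix B):
    M packets cost on average M / E[X] contention rounds of length
    t_round (= T_RTS + m_I * sigma) plus the constant tail B.
    E[X] is the closed form of Eq. (6)."""
    ex = lam / (1.0 - math.exp(-lam)) * sum(
        math.exp(-lam) * lam**k / math.factorial(k) for k in range(M))
    return M / ((M / ex) * t_round + B)
```

Because $S_c$ is an increasing function of $E[X]$, a line search over λ for $S_c$ and one for $E[X]/(T_{RTS} + m_I\sigma)$ pick out the same $\lambda_c^*$, matching (34).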
1006.4248
2008433804
Multi-packet reception (MPR) has been recognized as a powerful capacity-enhancement technique for random-access wireless local area networks (WLANs). As is common with all random access protocols, the wireless channel is often under-utilized in MPR WLANs. In this paper, we propose a novel multi-round contention random-access protocol to address this problem. This work complements the existing random-access methods that are based on single-round contention. In the proposed scheme, stations are given multiple chances to contend for the channel until there are a sufficient number of "winning" stations that can share the MPR channel for data packet transmission. The key issue here is the identification of the optimal time to stop the contention process and start data transmission. The solution corresponds to finding a desired tradeoff between channel utilization and contention overhead. In this paper, we conduct a rigorous analysis to characterize the optimal strategy using the theory of optimal stopping. An interesting result is that the optimal stopping strategy is a simple threshold-based rule, which stops the contention process as soon as the total number of winning stations exceeds a certain threshold. Compared with the conventional single-round contention protocol, the multi-round contention scheme significantly enhances channel utilization when the MPR capability of the channel is small to medium. Meanwhile, the scheme automatically falls back to single-round contention when the MPR capability is very large, in which case the throughput penalty due to random access is already small even with single-round contention.
The theory of optimal stopping has been widely studied in the fields of statistics, economics, and mathematical finance since the 1960's @cite_5 . It was not until very recently that optimal stopping theory started to find application in wireless networks. In @cite_9 , the tradeoff between the spectrum access opportunity and spectrum sensing overhead in cognitive radio systems is formulated as a finite-horizon optimal stopping problem, which is solved using backward induction. Likewise, a finite-horizon optimal stopping problem is formulated in @cite_4 to derive an optimal next-hop selection strategy in multi-hop ad hoc networks. The problem of maximizing the rate of return (MR) was applied to opportunistic scheduling in ad hoc networks in @cite_11 and to opportunistic spectrum access in cognitive radio networks in @cite_3 . Notably, the application of optimal stopping theory in wireless systems is still in its infancy. Our work in this paper is an attempt to introduce it to wireless random-access networks.
{ "abstract": [ "In order to adapt to time-varying wireless channels, various channel-adaptive schemes have been proposed to exploit inherent spatial diversity in mobile wireless ad hoc networks where there are usually alternate next-hop relays available at a given forwarding node. However, current schemes along this line are designed based on heuristics, implying room for performance enhancement. To seek a theoretical foundation for improving spatial diversity gain, we formulate the selection of the next-hop as a sequential decision problem and propose a general \"optimal stopping relaying (OSR)\" framework for designing such next-hop diversity schemes. As a particular example, assuming Rayleigh fading channels, we implement an OSR strategy to optimize information efficiency (IE) in a protocol stack consisting of greedy perimeter stateless routing (GPSR) and IEEE 802.11 MAC protocols. We present mathematical analysis of the proposed OSR together with other strategies in literature for a single forwarding node. In addition, we perform extensive simulations (using QualNet) to evaluate the end-to-end performance of these relaying strategies in a multi-hop network. Both the mathematical and simulation results demonstrate the superiority of OSR over other existing schemes.", "Radio spectrum resource is of fundamental importance for wireless communication. Recent reports show that most available spectrum has been allocated. While some of the spectrum bands (e.g., unlicensed band, GSM band) have seen increasingly crowded usage, most of the other spectrum resources are underutilized. This drives the emergence of open spectrum and dynamic spectrum access concepts, which allow unlicensed users equipped with cognitive radios to opportunistically access the spectrum not used by primary users. Cognitive radio has many advanced features, such as agilely sensing the existence of primary users and utilizing multiple spectrum bands simultaneously. 
However, in practice such capabilities are constrained by hardware cost. In this paper, we discuss how to conduct efficient spectrum management in ad hoc cognitive radio networks while taking the hardware constraints (e.g., single radio, partial spectrum sensing and spectrum aggregation limit) into consideration. A hardware-constrained cognitive MAC, HC-MAC, is proposed to conduct efficient spectrum sensing and spectrum access decision. We identify the issue of optimal spectrum sensing decision for a single secondary transmission pair, and formulate it as an optimal stopping problem. A decentralized MAC protocol is then proposed for the ad hoc cognitive radio networks. Simulation results are presented to demonstrate the effectiveness of our proposed protocol.", "The listen-Before-Talk (LBT) strategy has been prevalent in cognitive radio networks where secondary users opportunistically access under-utilized primary band. To minimize the amount of disruption from secondary users to primary signals, secondary users generally are required to detect the presence of the primary user reliably, and access the spectrum intelligently. The sensing time has to be long enough to achieve desirable detection performance. Weaker primary signals require longer sensing time, thereby reduce the secondary transmission opportunities. In this paper, we generalize the packet-level LBT strategy by allowing the secondary user to potentially transmit multiple packets after one sensing, and study the optimal control policy to determine the conditions under which the secondary user should sense the channel. We show that the optimal spectrum access control policy has a simple threshold-based structure, where the secondary user transmits consecutive packets until the estimated probability of the primary user being idle falls below a threshold, and senses the channel otherwise. 
The result applies to systems with both perfect and imperfect packet collision detection with the primary users.", "", "We consider distributed opportunistic scheduling (DOS) in wireless ad-hoc networks, where many links contend for the same channel using random access. In such networks, distributed opportunistic scheduling involves a process of joint channel probing and distributed scheduling. Due to channel fading, the link condition corresponding to a successful channel probing could be either good or poor. In the latter case, further channel probing, although at the cost of additional delay, may lead to better channel conditions and hence higher transmission rates. The desired tradeoff boils down to judiciously choosing the optimal stopping strategy for channel probing and the rate threshold. In this paper, we pursue a rigorous characterization of the optimal strategies from two perspectives, namely, a network-centric perspective and a user-centric perspective. We first consider DOS from a network-centric point of view, where links cooperate to maximize the overall network throughput. Using optimal stopping theory, we show that the optimal strategy turns out to be a pure threshold policy, where the rate threshold can be obtained by solving a fixed point equation. We further devise an iterative algorithm for computing the threshold. Next, we explore DOS from a user-centric perspective, where each link seeks to maximize its own throughput. We treat the problem of rate threshold selections for different links as a non-cooperative game. We explore the existence and uniqueness of the Nash equilibrium, and show that the Nash equilibrium can be approached by the best response strategy. We then develop an online stochastic iterative algorithm using local observations only, and establish its convergence. Finally, we observe that there is an efficiency loss in terms of the throughput at the Nash equilibrium, and introduce a pricing-based mechanism to mitigate the loss." 
], "cite_N": [ "@cite_4", "@cite_9", "@cite_3", "@cite_5", "@cite_11" ], "mid": [ "2159238092", "2157313704", "2147910082", "70227856", "2082330137" ] }
Multi-Round Contention in Wireless LANs with Multipacket Reception
In random-access wireless networks, such as IEEE 802.11 wireless local area networks (WLAN), stations share a common medium through contention-based medium access control (MAC). Most of what we know about WLAN is based on the conventional collision model, where packet collisions occur when two or more stations transmit at the same time [1], [2]. With advanced PHY-layer signal processing techniques, it is possible for an access point (AP) to detect multiple concurrently transmitted packets through, for example, multiuser detection (MUD) techniques [3], [4]. This new collision model, referred to as multi-packet reception (MPR), opens up new possibilities for drastically enhancing the capacity of WLANs. Our prior work in [5] shows that the throughput of WLANs scales super-linearly with the MPR capability of the channel. With MPR, up to M stations can transmit at the same time without causing collisions, where M is referred to as the MPR capability of the channel. An immediate question is how the MAC should be redesigned to fully utilize the advantages of MPR. In [5], [6], we derived the optimal transmission probability and backoff exponent that maximize the throughput of MPR WLAN. With the optimal transmission probability, system throughput is greatly enhanced compared with that in traditional single-packet reception (SPR) WLANs. One observation from our prior work, however, is that the MPR channel is still under-utilized from time to time even when the optimal transmission probability is adopted. In other words, the channel is not always fully occupied by M concurrent packet transmissions during the data transmission phase. This is because the current MAC protocols are based on a "single-round contention" framework. There is essentially only one contention round for each data transmission phase. For example, in the DCF RTS/CTS access mode, a data transmission phase follows immediately as long as there is one successful RTS contention. 
Similarly, in the DCF basic access mode, data packet transmission also serves the purpose of channel contention, which implies that there is only one round of contention for each data transmission. Due to the random-access nature, the number of stations contending for the channel at a time is a random variable. Hence, the channel is unavoidably under-utilized when fewer than M stations contend for the channel simultaneously. As such, enhancing the capacity of MPR WLANs beyond what is currently achievable remains a challenging problem. This paper proposes a novel multi-round contention random-access protocol to address the problem. With the multi-round contention framework, more contention rounds are executed before data transmission if the number of stations that have already won the channel contention is small. Intuitively, the more contention rounds, the more likely it is that the channel is fully packed with M concurrent packet transmissions in the data transmission phase. On the other hand, more contention rounds lead to higher channel-contention overhead. Finding the desired tradeoff between channel utilization and contention overhead boils down to deciding when to terminate the contention rounds and start data transmission. In this paper, we conduct a rigorous analysis to characterize the optimal strategy using the theory of optimal stopping [8]. The key contributions of this paper are summarized in the following.
• We show that the problem of finding the optimal stopping strategy that maximizes the system throughput is equivalent to the problem of maximizing the rate of return (MR), a subclass of optimal stopping problems.
• By exploiting the monotone nature of the problem, we prove that the optimal stopping strategy for multi-round contention is a simple threshold-based rule. 
Specifically, it is optimal to terminate the contention rounds as soon as the total number of stations that have succeeded in channel contention exceeds a certain threshold, regardless of the number of contention rounds that have already been executed.
• Based on the analysis, the maximum throughput that is achievable in MPR networks with multi-round contention is derived. In particular, network throughput is maximized when the stopping threshold and the transmission probability of stations are jointly optimized. Our results show that multi-round contention drastically enhances the channel utilization compared with networks with single-round contention, especially for small to moderate M, which is the case in most practical situations. This analysis complements our work in [5] that has focused on MPR WLANs with single-round contention.
• For practical implementation, we propose a multi-round contention protocol which only requires minor revisions to the current IEEE 802.11 DCF.
II. SYSTEM MODEL AND PROBLEM FORMULATION
A. Multi-round Contention and Problem of Maximizing Rate of Return
We consider a fully connected network with K mobile stations transmitting to an AP. The transmission of stations is coordinated by a random-access protocol. We assume that the AP has the capability to decode up to M simultaneous packet transmissions, be it contention packets or data packets. Interested readers are referred to Section V in [5] for a practical protocol to implement MPR in random access networks. A sketch of the multi-round contention mechanism is illustrated in Fig. 1. The precise model will be made concrete in the next subsection, where we propose a multi-round contention protocol as a minor amendment of the IEEE 802.11 RTS/CTS mechanism. In Fig. 1, the time axis is divided into contention rounds and data transmission slots. The period between the ends of two neighboring data transmission slots is referred to as a super round. 
Stations transmit a small contention packet with probability τ in each contention round. If there are no more than M stations contending for the channel in the same contention round, the contention is successful and these stations become winning stations. Let $\{X_1, X_2, \dots\}$ denote a sequence of random variables representing the number of winning stations in each contention round. Obviously, $0 \le X_i \le M$. For each $i = 1, 2, \dots$, after observing $X_1 = x_1, X_2 = x_2, \dots, X_n = x_n$, we may stop the contention and transmit $\sum_{i=1}^{n} x_i$ data packets in the data transmission slot if $\sum_{i=1}^{n} x_i \le M$; only M winning stations will be selected to transmit if $\sum_{i=1}^{n} x_i > M$. In other words, the number of transmitting stations is $y_n(x_1, \dots, x_n) = \min\left(\sum_{i=1}^{n} x_i, M\right)$. (1) Instead of stopping the contention at the nth round, we may also continue and observe $X_{n+1}$, hoping that $y_{n+1}(x_1, \dots, x_{n+1})$ will be much larger than $y_n$. Of course, this is at the risk of wasting more time on contention without getting a reasonably larger $y_{n+1}$ in return. A stopping rule φ determines the stopping time N based on the sequence of observations $X = (X_1, X_2, \dots)$. Note that N is random, as it is a function of the random variables X. Different realizations of the observations may lead to different stopping times. The system throughput can then be calculated as $S_\phi = \frac{E_X[\text{data payload transmitted in one super round}]}{E_X[\text{duration of a super round}]} = \frac{E_X[Y_N]}{E_X[T_N]}$, (2) where $Y_1, Y_2, \dots$ is a sequence of random variables with realizations $y_1, y_2, \dots$, and $T_N$ is the random variable representing the total amount of time spent to obtain a return of $Y_N$. Let C denote the class of stopping rules with $C = \{N : N \ge 1, E[T_N] < \infty\}$. (3) Our purpose is to find the optimal stopping rule $N^* \in C$ that maximizes the system throughput $\frac{E_X[Y_N]}{E_X[T_N]}$. In optimal stopping theory, this problem is referred to as the problem of MR. B. 
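As a concrete illustration of the rate-of-return objective $E_X[Y_N]/E_X[T_N]$ in (2), the following sketch estimates the throughput of a threshold stopping rule by Monte Carlo. The timing parameters and the winner distribution `win_dist` are illustrative placeholders, not the paper's model.

```python
import random

# Monte Carlo sketch of the renewal-reward throughput E[Y_N]/E[T_N] in (2).
# All parameters are hypothetical placeholders, not values from the paper.
def super_round(M, theta, win_dist, t_round, t_data, rng):
    """One super round under a threshold rule: contend until the number of
    winning stations reaches theta, then transmit min(total, M) packets."""
    wins, rounds = 0, 0
    while wins < theta:
        rounds += 1
        wins += win_dist(rng)          # winners observed in this round
    return min(wins, M), rounds * t_round + t_data

def throughput(M, theta, win_dist, t_round, t_data, n_super=20000, seed=1):
    """Estimate E[Y_N]/E[T_N] by averaging over many super rounds."""
    rng = random.Random(seed)
    y = t = 0.0
    for _ in range(n_super):
        pk, dt = super_round(M, theta, win_dist, t_round, t_data, rng)
        y += pk
        t += dt
    return y / t
```

For instance, with a degenerate `win_dist` that always yields one winner, `t_round = 1` and `t_data = 2`, a threshold of θ = 3 gives exactly 3 packets every 5 time units, i.e., a rate of 0.6.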
Multi-round Contention in IEEE 802.11 WLAN
Having introduced the general framework of multi-round contention, we now propose a multi-round contention protocol based on the IEEE 802.11 RTS/CTS access mode. Note that the problem formulated in the preceding subsection and the analysis in later sections are general and not restricted to the protocol proposed in this subsection. In IEEE 802.11, the transmission of stations is coordinated by an exponential backoff (EB) mechanism. The EB mechanism adaptively tunes the transmission probability of a station according to the traffic intensity of the network. It works as follows. At each packet transmission, a station sets its backoff timer by randomly choosing an integer within the range $[0, W - 1]$, where W is the size of the contention window. The backoff timer freezes when the channel is busy and is decreased by one following each time slot when the channel is idle. The station transmits a packet from its buffer once the backoff timer reaches zero. At the first transmission attempt of a packet, W is set to $W_0$, the minimum contention window. Each time the transmission is unsuccessful, W is multiplied by a backoff factor r. That is, the contention window size is $W_j = r^j W_0$ after j successive transmission failures. The multi-round contention protocol is illustrated in Fig. 2. A station transmits an RTS packet when its backoff timer reaches zero. Previous work in [1], [2], [5] has shown that the backoff process yields an equivalent transmission probability τ at which a station transmits in a generic (randomly chosen) time slot. When the number of stations, K, is large, it is reasonable to assume that the number of transmissions in a generic time slot follows a Poisson distribution with parameter λ = Kτ [5]. That is, $\Pr\{k \text{ stations transmit in a generic time slot}\} = \frac{\lambda^k}{k!} e^{-\lambda}$. (4) If no more than M stations transmit at the same time, then the contention is successful and these stations are marked as winning stations. 
Otherwise, a collision occurs and there are zero winning stations. From (4), it can be shown that the number of winning stations $X_i$ follows the distribution $\Pr\{X_i = k\} = \begin{cases} \frac{\lambda^k}{k!(e^\lambda - 1)} & 1 \le k \le M \\ \sum_{j=M+1}^{\infty} \frac{\lambda^j}{j!(e^\lambda - 1)} & k = 0 \\ 0 & \text{otherwise} \end{cases}$ (5) with the expectation being $E[X] = \frac{\lambda}{1 - e^{-\lambda}} \sum_{k=0}^{M-1} \frac{\lambda^k e^{-\lambda}}{k!}$. (6) After observing the outcome of the contention, the AP determines whether to stop the contention rounds according to the optimal stopping strategy. It keeps silent if it decides not to stop the contention rounds. According to IEEE 802.11 DCF, the other stations will then continue to count down after sensing the channel idle for a DIFS (DCF interframe space) time and contend for the channel when their counter values reach zero. If the AP decides to stop the contention rounds, it will randomly select, from all winning stations, at most M stations for data packet transmission. Note that if the total number of winning stations in this super round does not exceed M, then all of them will be selected. This decision is broadcast to all mobile stations through a CTS packet after a SIFS interval. Then, the selected winning stations send their data packets. After that, the AP responds with a group ACK, indicating which data packets have been received successfully. The stations that have contended but are not notified to transmit data by the CTS packet regard themselves as having encountered a collision, and consequently multiply their contention window by r and back off. Note that the collision can be either an actual one, which occurs when more than M stations transmit together in a contention round, or a virtual one, which occurs to winning stations that are not selected by the AP when the total number of winning stations exceeds M by the end of the last contention round. This protocol falls back to the traditional single-round contention protocol if the AP always terminates contention after the first successful contention round. 
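The winner distribution (5) and its closed-form mean (6) are straightforward to evaluate numerically. The sketch below is a cross-check (not from the paper): it builds the pmf and the mean and lets one confirm they agree, assuming Poisson(λ) transmission attempts per round.

```python
import math

# Numerical form of the winner pmf (5) and its closed-form mean (6),
# assuming Poisson(lam) transmission attempts per contention round.
def win_pmf(lam, M, tail=200):
    """Pr{X = k}: k winners for 1 <= k <= M; collisions lumped into k = 0."""
    z = math.exp(lam) - 1.0
    pmf = {k: lam**k / (math.factorial(k) * z) for k in range(1, M + 1)}
    # k = 0: more than M simultaneous attempts; the infinite tail is
    # truncated at `tail` terms, computed in log space to avoid overflow.
    pmf[0] = sum(math.exp(j * math.log(lam) - math.lgamma(j + 1))
                 for j in range(M + 1, tail)) / z
    return pmf

def mean_wins(lam, M):
    """Eq. (6): E[X] = lam/(1 - e^-lam) * sum_{k=0}^{M-1} lam^k e^-lam / k!."""
    s = sum(lam**k * math.exp(-lam) / math.factorial(k) for k in range(M))
    return lam / (1.0 - math.exp(-lam)) * s
```

The pmf sums to one (all attempt counts k ≥ 1 are covered), and its first moment reproduces (6) exactly.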
Under the analytical framework described in the last subsection, we regard one RTS contention, including the preceding idle slots and the succeeding interframe spaces, as one contention round, as illustrated in Fig. 2. Likewise, the "data transmission slot" contains the data packet transmission and the group ACK, together with the interframe spaces. Let $T_{RTS}$ and $T_{data}$ be the durations defined in Fig. 2, i.e., $T_{RTS} = RTS + DIFS$, (7) and $T_{data} = T_H + \frac{L}{R} + T_{ACK} + SIFS + DIFS$, (8) where $T_H$ denotes the transmission time of a packet header, L denotes the payload length of a packet, R denotes the data transmission rate, and $T_{ACK}$ denotes the time duration of a group ACK packet. The acronyms (i.e., RTS, CTS, SIFS, DIFS, ACK) represent the corresponding time durations specified in the IEEE 802.11 standard. If the contention phase stops at the Nth round, then $T_N = N T_{RTS} + \sum_{i=1}^{N} I_i \sigma + CTS + 2\,SIFS - DIFS + T_{data}$, (9) where σ is the length of an idle slot and $I_i$ is the number of idle slots preceding the RTS packet in the ith contention round. From (4), it can be seen that a slot is idle with probability $e^{-\lambda}$ when K is large. Therefore, $I_i$ follows a geometric distribution with mean value $m_I = E[I] = \frac{e^{-\lambda}}{1 - e^{-\lambda}}$. (10) The term $CTS + 2\,SIFS - DIFS$ in (9) is due to the fact that the duration of the last contention round is statistically different from the others. The system throughput defined in (2) can now be written as $S_\phi = \frac{E_X[Y_N]}{E_{X,I}[T_N]} = \frac{E_X[\min(\sum_{i=1}^{N} X_i, M)]}{E_{X,I}[N T_{RTS} + \sum_{i=1}^{N} I_i \sigma + B]} = \frac{E_X[\min(\sum_{i=1}^{N} X_i, M)]}{E_X[N(T_{RTS} + m_I \sigma) + B]}$ packets/second, (11) where $B = CTS + 2\,SIFS - DIFS + T_{data}$ is a constant invariant of the stopping criterion. Intuitively, the decision whether to stop contention at a certain round could be based on the number of contention rounds that have already taken place, the number of stations that have already won the contention, or a combination of both. 
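Equations (10) and (11) can be wired up directly. The helper below is a sketch with placeholder arguments; the paper's Table II timing values are not reproduced here, and the expectations $E[Y]$ and $E[N]$ are assumed to be supplied by the later analysis.

```python
import math

# Eq. (10): mean number of idle slots before an RTS. A slot is idle with
# probability e^-lam, so I is geometric with mean e^-lam / (1 - e^-lam).
def m_idle(lam):
    return math.exp(-lam) / (1.0 - math.exp(-lam))

# Eq. (11): S = E[min(sum X_i, M)] / (E[N] * (T_RTS + m_I * sigma) + B).
# E_Y and E_N are assumed given (e.g., from the later closed forms).
def throughput_eq11(E_Y, E_N, lam, T_RTS, sigma, B):
    return E_Y / (E_N * (T_RTS + m_idle(lam) * sigma) + B)
```

As a quick sanity check, λ = ln 2 makes a slot idle with probability 1/2, so $m_I = 1$ exactly.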
However, our analysis in Section IV reveals a somewhat surprising result: the optimal stopping rule is solely based on the number of winning stations, regardless of how many contention rounds have already been executed.
III. PRELIMINARY ON OPTIMAL STOPPING THEORY
Before deriving the optimal stopping rule in the next section, we introduce in this section some definitions and theorems that will be useful in our later discussions. Theorem 1 states that the problem of MR is equivalent to a stopping rule problem that aims to maximize the return $Y_N - \mu T_N$ for some μ, where $Y_N$ and $T_N$ are as defined in (2).
Theorem 1. (a) If $\sup_{N \in C} \frac{E_X[Y_N]}{E_X[T_N]} = \mu$ is attained at $N^* \in C$, then $\sup_{N \in C}(E_X[Y_N] - \mu E_X[T_N]) = 0$, and the supremum is attained at $N^*$. (b) Conversely, if $\sup_{N \in C}(E_X[Y_N] - \mu E_X[T_N]) = 0$ for some μ, then $\sup_{N \in C} \frac{E_X[Y_N]}{E_X[T_N]} = \mu$. Moreover, if $\sup_{N \in C}(E_X[Y_N] - \mu E_X[T_N]) = 0$ is attained at $N^* \in C$, then $N^*$ is optimal for maximizing $\frac{E_X[Y_N]}{E_X[T_N]}$.
Remark 1. μ is in fact the optimal rate of return, which is equal to $\sup_{N \in C} \frac{E_X[Y_N]}{E_X[T_N]}$.
Theorem 1 implies that to maximize the system throughput, we can alternatively solve a regular stopping rule problem that maximizes $Z_N$, where $Z_N = Y_N - \mu T_N$. It can be shown that the optimal stopping rule is the one that satisfies $N^* = \min\{n \ge 1 : Z_n \ge \sup_{m \ge n} E_X[Z_m \mid X_1 = x_1, \dots, X_n = x_n]\}$. (12) In other words, it is optimal to stop at a stage if the return at this stage is no less than the expected return of stopping at a future stage.
Definition 1 (One-stage look-ahead rule). The one-stage look-ahead (1-sla) rule is the one that stops if the return for stopping at the stage is at least as large as the expected return of continuing one stage and then stopping. Mathematically, the 1-sla rule is described by the stopping time $N_1 = \min\{n \ge 1 : Z_n \ge E_{X_{n+1}}[Z_{n+1} \mid X_1 = x_1, \dots, X_n = x_n]\}$. (13)
Remark 2. The 1-sla rule is not optimal in general. However, Definition 2 and Theorem 2 show that $N^* = N_1$ when some conditions are satisfied. Definition 2. 
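Theorem 1 can be sanity-checked on a toy instance (hypothetical, not from the paper): exactly one winner per contention round, unit round time, two units of data-slot overhead, and M = 3. The rate of return μ* then makes the excess return $\sup_\theta (E[Y] - \mu^* E[T])$ vanish exactly at the optimal threshold.

```python
# Toy deterministic check of Theorem 1: with exactly one winner per round,
# a threshold rule theta yields Y = min(theta, M) packets in
# T = theta * t_round + t_data time units. Parameters are hypothetical.
def ret(theta, M=3, t_round=1.0, t_data=2.0):
    return min(theta, M), theta * t_round + t_data

# Best rate of return over theta in {1, 2, 3}: 1/3, 2/4, 3/5 -> mu* = 0.6.
mu_star = max(ret(th)[0] / ret(th)[1] for th in range(1, 4))
# Theorem 1: the maximal excess return E[Y] - mu* E[T] is exactly zero,
# attained at the same theta that attains mu*.
excess = max(ret(th)[0] - mu_star * ret(th)[1] for th in range(1, 4))
```

Here the excess returns are −0.8, −0.4 and 0 for θ = 1, 2, 3, so the maximum is 0, attained at the ratio-optimal θ = 3, exactly as part (a) asserts.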
Let $A_n$ denote the event $\{Z_n \ge E[Z_{n+1} \mid X_1 = x_1, \dots, X_n = x_n]\}$. We say the stopping rule problem is monotone if $A_0 \subset A_1 \subset A_2 \subset \cdots$. In other words, the problem is monotone if, whenever the one-stage look-ahead rule calls for stopping at stage n, it also calls for stopping at all future stages no matter what the future observations turn out to be.
Theorem 2. If $\lim_{n \to \infty} Z_n = Z_\infty$, $E[\sup_n |Z_n|] < \infty$, and the stopping rule problem is monotone, then the 1-sla rule is optimal.
IV. OPTIMAL STOPPING RULE FOR MPR WLAN WITH MULTI-ROUND CONTENTION
In this section, we analyze the optimal stopping rule that maximizes the system throughput of MPR WLAN with multi-round contention. In what follows, Lemma 1 shows that the 1-sla rule is a threshold-based rule and that the stopping time is solely determined by the number of stations that have already won the contention. Furthermore, it is proved in Lemma 2 that the 1-sla rule is the optimal stopping rule for our particular problem of maximizing the system throughput of MPR WLAN.
Lemma 1. The 1-sla rule that maximizes $\frac{E_X[Y_N]}{E_X[T_N]}$ or, equivalently, $Y_N - \mu T_N$ is a threshold-based rule that stops at the nth contention round as soon as $\sum_{i=1}^{n} X_i \ge \theta$. When the number of stations K is large, θ is a fixed constant invariant with n.
Proof: Note that $Z_N = Y_N - \mu T_N = \min\left(\sum_{i=1}^{N} X_i, M\right) - \mu N T_{RTS} - \mu \sum_{i=1}^{N} I_i \sigma - \mu B = M - \left(M - \sum_{i=1}^{N} X_i\right)^+ - \mu N T_{RTS} - \mu \sum_{i=1}^{N} I_i \sigma - \mu B$, (14) where $(\cdot)^+$ is equal to the argument if the argument is positive, and zero otherwise. The 1-sla rule described in (13) can now be rewritten as (15), where $\theta_n = M - v_n$ and $v_n = \max\{u : u - E_{X_{n+1}}[(u - X_{n+1})^+] \le \mu(T_{RTS} + m_I \sigma)\}$. (16) When K is large, the distribution of $X_i$ is identical for all i (see (5)). In this case, $v_n$ and $\theta_n$ are invariant with n. If no confusion arises, the subscript n will be omitted hereafter. It is obvious from (15) that the 1-sla rule is a threshold-based rule with a constant threshold θ. Remark 3. 
Equality (a) in (15) is due to the fact that $u - E_{X_{n+1}}[(u - X_{n+1})^+]$ is an increasing function of u. The chain (15) reads
$N_1 = \min\{n \ge 1 : (M - \sum_{i=1}^{n} X_i)^+ - E_{X_{n+1}}[(M - \sum_{i=1}^{n} X_i - X_{n+1})^+ \mid X_1 = x_1, \dots, X_n = x_n] \le \mu(T_{RTS} + m_I \sigma)\}$
$= \min\{n \ge 1 : M - \sum_{i=1}^{n} X_i - E_{X_{n+1}}[(M - \sum_{i=1}^{n} X_i - X_{n+1})^+ \mid X_1 = x_1, \dots, X_n = x_n] \le \mu(T_{RTS} + m_I \sigma)\}$
$\stackrel{(a)}{=} \min\{n \ge 1 : M - \sum_{i=1}^{n} X_i \le v_n\} = \min\{n \ge 1 : \sum_{i=1}^{n} X_i \ge \theta_n\}$. (15)
Lemma 2. For MPR WLANs with multi-round contention, the stopping rule $N_1$ obtained by (15) is the optimal solution to Problem (12). That is, $N_1 = N^*$.
Proof: To prove Lemma 2, we note that the return function $Z_n = Y_n - \mu n(T_{RTS} + m_I \sigma) - \mu B$ has the following properties in our particular problem: $\lim_{n \to \infty} Z_n = Z_\infty = -\infty$, (17) and $E[\sup_n Z_n] \le M - \mu(T_{RTS} + m_I \sigma) - \mu B < \infty$. (18) Furthermore, it can be seen from (15) that the problem is monotone, because as long as the threshold θ is exceeded at a certain stage n and the 1-sla rule calls for stopping, the threshold will always be exceeded at all future stages regardless of the future observations of X. Therefore, the 1-sla rule is the optimal stopping rule according to Theorem 2.
Theorem 3. The optimal stopping rule that maximizes the throughput of MPR WLAN is a threshold-based rule described as follows: $N^* = \min\{n \ge 1 : \sum_{i=1}^{n} X_i \ge \theta\}$. (19)
Lemma 3. When K is large, for $1 \le s \le M$, $\Pr\{\sum_{i=1}^{n} X_i = s\} = \frac{\lambda^s}{s!(e^\lambda - 1)^n} \sum_{l=1}^{n} \binom{n}{l} \left(\sum_{j=M+1}^{\infty} \frac{\lambda^j}{j!} - 1\right)^{n-l} l^s$. (20)
Lemma 4. Given a threshold $\theta \in (0, M]$ and an attempt rate λ, $N^*$ obtained by the optimal stopping rule (19) has the distribution
$\Pr\{N^*(\lambda, \theta) = n\} = \begin{cases} \sum_{k=\theta}^{M} \frac{\lambda^k}{k!(e^\lambda - 1)} & n = 1 \\ \frac{\sum_{i=\theta}^{M} \frac{\lambda^i}{i!}}{(e^\lambda - 1)^n} \left(\sum_{i=M+1}^{\infty} \frac{\lambda^i}{i!}\right)^{n-1} + \frac{1}{(e^\lambda - 1)^n} \sum_{s=1}^{\theta-1} \sum_{i=\theta-s}^{M} \frac{\lambda^i}{i!} \frac{\lambda^s}{s!} \sum_{l=1}^{n-1} \binom{n-1}{l} \left(\sum_{j=M+1}^{\infty} \frac{\lambda^j}{j!} - 1\right)^{n-1-l} l^s & n > 1 \end{cases}$ (21)
Proof: When n = 1, $\Pr\{N^*(\lambda, \theta) = 1\} = \Pr\{X_1 \ge \theta\} = \sum_{k=\theta}^{M} \frac{\lambda^k}{k!(e^\lambda - 1)}$, (22) which proves the first half of (21). 
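The threshold in (16) can be found by a direct search, exploiting that $u - E[(u - X)^+] = E[\min(u, X)]$ is nondecreasing in u. The sketch below assumes the Poisson-based pmf (5) and a given nonnegative cost $c = \mu(T_{RTS} + m_I \sigma)$; μ itself is assumed supplied (in the paper it is the optimal rate of return from Theorem 1).

```python
import math

# Sketch of the 1-sla threshold (16): v = max{u : E[min(u, X)] <= c},
# theta = M - v, with X distributed as in (5). The cost c stands for
# mu * (T_RTS + m_I * sigma) and is assumed given and nonnegative.
def one_sla_theta(lam, M, c):
    z = math.exp(lam) - 1.0
    pmf = {k: lam**k / (math.factorial(k) * z) for k in range(1, M + 1)}
    pmf[0] = 1.0 - sum(pmf.values())           # collision probability
    def g(u):                                  # u - E[(u - X)^+] = E[min(u, X)]
        return u - sum(p * max(u - k, 0) for k, p in pmf.items())
    v = max(u for u in range(0, M + 1) if g(u) <= c)   # g(0) = 0 <= c
    return M - v
```

The two extremes behave as expected: a per-round cost exceeding E[X] drives the threshold down to 0 (stop immediately), while a zero cost drives it up to M.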
When n > 1, $\Pr\{N^*(\lambda, \theta) = n\} = \Pr\{\sum_{i=1}^{n-1} X_i < \theta, \sum_{i=1}^{n} X_i \ge \theta\} = \sum_{s=0}^{\theta-1} \Pr\{X_n \ge \theta - s \mid \sum_{i=1}^{n-1} X_i = s\} \Pr\{\sum_{i=1}^{n-1} X_i = s\} = \sum_{s=0}^{\theta-1} \Pr\{X \ge \theta - s\} \Pr\{\sum_{i=1}^{n-1} X_i = s\}$, (23) where the last equality is due to the fact that the sequence of $X_i$ is i.i.d. Substituting (20) into (23), the second half of (21) is obtained. From Lemma 4, we can derive $E[N^*(\lambda, \theta)]$ as
$E[N^*(\lambda, \theta)] = \sum_{n=1}^{\infty} n \Pr\{N^*(\lambda, \theta) = n\} = \sum_{k=\theta}^{M} \frac{\lambda^k}{k!(e^\lambda - 1)} \cdot \frac{1}{\left(1 - \sum_{k=M+1}^{\infty} \frac{\lambda^k}{k!(e^\lambda - 1)}\right)^2} + \sum_{n=1}^{\infty} \frac{n+1}{(e^\lambda - 1)^{n+1}} \sum_{l=1}^{n} \binom{n}{l} \left(\sum_{j=M+1}^{\infty} \frac{\lambda^j}{j!} - 1\right)^{n-l} \times \sum_{k=1}^{M} \frac{\lambda^k}{k!} \sum_{s=\max(\theta-k,1)}^{\theta-1} \frac{\lambda^s l^s}{s!}$. (24)
Lemma 5. When the contention phase is stopped according to the stopping rule (19), $\sum_{i=1}^{N^*(\lambda,\theta)} X_i$ follows the distribution (25): $\Pr\{\sum_{i=1}^{N^*(\lambda,\theta)} X_i = s\} = 0$ for $s < \theta$, while for $s \ge \theta$ the closed form follows by substituting (5) and (20) into (26).
Proof: For $s \ge \theta$, we have $\Pr\{\sum_{i=1}^{N^*(\lambda,\theta)} X_i = s\} = \sum_{n=1}^{\infty} \Pr\{\sum_{i=1}^{n} X_i = s, \sum_{i=1}^{n-1} X_i < \theta\} = \sum_{n=1}^{\infty} \sum_{t=0}^{\theta-1} \Pr\{X = s - t\} \Pr\{\sum_{i=1}^{n-1} X_i = t\} = \Pr\{X = s\} + \sum_{n=2}^{\infty} \sum_{t=0}^{\theta-1} \Pr\{X = s - t\} \Pr\{\sum_{i=1}^{n-1} X_i = t\}$. (26) Substituting (5) and (20) into (26), we get (25).
B. Throughput of MPR WLAN with Multi-round Contention
Given λ and θ, the system throughput is calculated as $S(\lambda, \theta) = \frac{E_X[Y_{N^*(\lambda,\theta)}]}{E_X[T_{N^*(\lambda,\theta)}]} = \frac{E_X[\min(\sum_{i=1}^{N^*(\lambda,\theta)} X_i, M)]}{E_X[N^*(\lambda, \theta)](T_{RTS} + m_I \sigma) + B}$ packets/sec, (27) where $E_X[N^*(\lambda, \theta)]$ is given by (24) and $E_X[\min(\sum_{i=1}^{N^*(\lambda,\theta)} X_i, M)]$ can be calculated as $E_X[\min(\sum_{i=1}^{N^*(\lambda,\theta)} X_i, M)] = \sum_{s=\theta}^{M} s \Pr\{\sum_{i=1}^{N^*(\lambda,\theta)} X_i = s\} + \sum_{s=M+1}^{\infty} M \Pr\{\sum_{i=1}^{N^*(\lambda,\theta)} X_i = s\}$. (28)
WLAN throughputs studied in previous papers can be regarded as special cases of (27). In particular, when θ = 1, (27) reduces to the throughput performance of MPR WLANs with single-round contention [5]. When M = 1 and θ = 1, (27) reduces to the throughput of traditional WLANs with single-packet reception [1].
From (16), it can be seen that there exists an optimal $\theta^*(\lambda)$ that maximizes the system throughput for a given λ (or, equivalently, a given distribution of X). With the analysis in this section, $\theta^*(\lambda)$ can be obtained by performing a simple line search instead of calculating it directly from (16). In addition, if we have the freedom to adjust the attempt rate λ as well, the maximum system throughput can be achieved by jointly optimizing λ and θ: $S^* = \max_{\lambda, \theta} S(\lambda, \theta)$. (29) If we blindly set θ = M without optimizing it, then we obtain a lower bound $B_L(\lambda) \triangleq S(\lambda, M) = \frac{M}{E_X[N^*(\lambda, M)](T_{RTS} + m_I \sigma) + B} \le S(\lambda, \theta^*(\lambda))$, (30) which yields $B_L^* = \max_{\lambda} B_L(\lambda) \le S^*$. (31) As we will show shortly, the gap between $B_L^*$ and $S^*$ is marginal in most cases. Nonetheless, $B_L(\lambda)$ is much easier to calculate than $S(\lambda, \theta)$. Moreover, to achieve $B_L^*$, we simply need to find the λ that minimizes $E_X[N^*(\lambda, M)]$. This is much less computationally involved than finding the right λ and θ to maximize $S(\lambda, \theta)$. Hence, $B_L^*$ serves as a good approximation of $S^*$ from both analytical and practical perspectives. As an illustration, the throughput S (in units of Mbps) is plotted against λ and θ in Fig. 3 for an IEEE 802.11a WLAN with M = 10 and a data transmission rate of 54 Mbps. Other system parameters are shown in Table II. It can be seen that for each attempt rate λ, there exists an optimal θ that maximizes the throughput S, and vice versa. In general, traditional single-round contention (i.e., setting θ = 1) does not yield as high a throughput as multi-round contention (i.e., setting θ > 1). Fixing λ = 6, we plot $S(\lambda, \theta)$ and $B_L(\lambda)$ in Fig. 4. In particular, $B_L(\lambda) = S(\lambda, 10)$ according to the definition. It can be seen that $B_L$ is close to the maximum value of $S(\lambda, \theta)$, which occurs when θ = 9. Furthermore, note that only 72% of the maximum throughput can be achieved when θ = 1, in which case the system reduces to one with single-round contention. 
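The line search for $\theta^*(\lambda)$ mentioned above can also be approximated by simulation. The sketch below estimates $S(\lambda, \theta)$ by Monte Carlo for each θ and keeps the best; the timing constants are illustrative placeholders (not the paper's Table II values), and idle-slot time is folded into the per-round cost for simplicity.

```python
import math, random

# Monte Carlo sketch of the line search for theta*(lam). Timing constants
# are hypothetical; idle-slot overhead is folded into t_round.
def poisson(lam, rng):
    """Knuth's Poisson sampler (product of uniforms)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def winners(lam, M, rng):
    """One contention round: k >= 1 attempts succeed only if k <= M."""
    k = 0
    while k == 0:                      # condition on at least one attempt
        k = poisson(lam, rng)
    return k if k <= M else 0          # collision yields zero winners

def estimate_S(lam, M, theta, t_round, B, n_super=4000, seed=7):
    rng = random.Random(seed)
    y = t = 0.0
    for _ in range(n_super):
        s, rounds = 0, 0
        while s < theta:               # threshold stopping rule (19)
            rounds += 1
            s += winners(lam, M, rng)
        y += min(s, M)
        t += rounds * t_round + B
    return y / t

def best_theta(lam, M, t_round, B):
    return max(range(1, M + 1),
               key=lambda th: estimate_S(lam, M, th, t_round, B))
```

With λ = 6, M = 10 and a modest per-round cost, a threshold well above 1 comfortably beats single-round contention (θ = 1), in line with the figures discussed above.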
Reducing the data transmission rate to 6 Mbps, we plot the curve again in Fig. 5. In this case, $B_L$ coincides with the maximum value of $S(\lambda, \theta)$. In Fig. 6, we plot the optimal threshold $\theta^*(\lambda^*)$ obtained from (29). One interesting observation is that $\theta^*$ is not necessarily close to M, especially for large M. This implies that in these cases the optimal strategy would rather transmit fewer than M packets than execute too many contention rounds.
VI. THROUGHPUT SCALING AND COMPARISON WITH SINGLE-ROUND CONTENTION
Our previous study in [5] has demonstrated MPR as a powerful capacity-enhancement technique in traditional WLANs with single-round contention. In particular, we have proved that the maximum throughput increases super-linearly with M, the MPR capability of the channel. In this section, we extend the study by investigating (i) how the maximum system throughput scales with the MPR capability M under multi-round contention; and (ii) how multi-round contention improves system performance compared with single-round contention. In the following figures, $B_L^*$ is obtained through analytical approaches, while $S^*$ is obtained through semi-analytical simulations. In Fig. 7, we plot the maximum throughput $S^*$ and its lower bound $B_L^*$ as a function of M when the system parameters are set as in Table II. It can be seen that the system throughput increases drastically with the MPR capability. Moreover, the lower bound $B_L^*$ is very close to the actual throughput $S^*$. The maximum normalized throughput with respect to M, i.e., $S^*/M$, is plotted in Fig. 8. For comparison, the maximum normalized throughput of single-round contention RTS/CTS access networks (derived in [5]) is also plotted. Three conclusions can be drawn from the figure. First, similar to the single-round contention case, $S^*/M$ increases with M in the multi-round contention case when M is larger than 4. In other words, multi-round contention preserves the super-linear throughput scaling. 
In practical systems, M is directly related to the cost (e.g., bandwidth in CDMA systems or the number of antennas in multi-antenna systems). Super-linear scaling of throughput implies that the achievable throughput per unit cost increases with M.
[Figure caption: $S^*$ and $B_L^*$ for IEEE 802.11a with L = 8184 bits and data transmission rate 54 Mbps.]
Second, multi-round contention significantly improves system throughput compared with single-round contention, especially for small to medium M (say M ≤ 20). The throughput improvement can be as high as 23%. This is because the channel is more likely to be "fully occupied" with packets with the multi-round contention MAC. Note that small to medium M is of particular interest for practical applications, where the multiuser detection capability at the receiver is typically not high. This provides a strong incentive for the deployment of a multi-round contention MAC in future wireless networks. Third, the gap between the normalized throughputs of multi-round contention and single-round contention networks diminishes when M grows (perhaps impractically) large. This is not surprising, however, as we have proved in [5] that the throughput penalty due to distributed random access diminishes to zero when M becomes large even with single-round contention. Thus, the optimal stopping strategy we have derived may turn out to stop the contention process after one contention round most of the time. Before leaving this section, note that we have assumed that the attempt rate λ remains constant for all contention rounds. In principle, the throughput $S^*$ can be further improved by allowing λ to vary from one contention round to another. For example, a smaller λ should be adopted if the number of winning stations is already close to the threshold θ, so as to reduce the probability of collision in the contention round. By doing so, however, the derivation of the optimal stopping rule would be much more involved. 
Fortunately, from the throughput upper bound that we derive in Appendix B, it can be seen that the potential throughput enhancement by varying λ across different slots is marginal.
VII. DISCUSSIONS AND VERIFICATION OF ANALYSIS
In this section, we verify the analysis through simulation. We also discuss the validity of the Poisson assumption adopted in Sections IV and V. In Fig. 9, we simulate the multi-round IEEE 802.11 WLAN described in Section II-B and Fig. 2 when there are K = 100 stations. In the figure, the MPR capability M varies from 1 to 40. We set the backoff exponent r = 2, the minimum contention window $W_0 = 16$, and the threshold θ = M. Other parameters are the same as in Table II. For each M, the simulation is run for 100,000 generic time slots after 5,000 slots of warm-up. For comparison, we also plot the analytical results $S(\lambda, \theta)$ by setting λ to be the average aggregate transmission probability obtained from the simulations. It can be seen from the figure that the simulation and analytical results almost overlap when M is relatively small. When M exceeds 30, the simulation result deviates slightly from the analysis. This is because for large M, each station tends to transmit with a higher probability τ under the exponential backoff scheme. In this case, λ = Kτ is not much smaller than K, and hence the Poisson assumption becomes less accurate.
VIII. CONCLUSIONS
In this paper, we have proposed a multi-round contention random-access protocol for WLANs with MPR capability. An optimal stopping rule is derived to strike the desired tradeoff between channel utilization and contention overhead. In particular, we prove that the one-stage look-ahead rule, which is a simple threshold-based rule, is optimal due to the special feature of the return function. The multi-round contention protocol significantly improves system throughput compared with conventional single-round contention protocols, especially for small to medium M. 
This is because the MPR channel is now more likely to be packed with as many packets as it can resolve. Furthermore, multi-round contention preserves super-linear throughput scaling, providing a strong incentive to deploy MPR in future WLANs.

In Sections IV and V, we have assumed that K is large enough so that the number of transmissions in a generic time slot follows a Poisson distribution. Our simulation in Fig. 9 shows that this assumption is very accurate when K is sufficiently larger than M. On the other hand, the Poisson assumption is less accurate when K is relatively small. In this case, the transmission attempts follow a binomial distribution that varies from one contention round to another, as the number of potential contenders decreases with the number of contention rounds. As a result, θ_n defined in (15) is no longer invariant with n, making the analysis of the 1-sla scheme much more complicated. In our future work, we will devise effective mechanisms to analyze multi-round contention MPR WLANs for small to medium K.

APPENDIX A
PROOF OF LEMMA 3

The proof is trivial for s = 0. For s > 0, we prove the lemma by induction. It is obvious that Lemma 3 holds when n = 1. Assuming that Lemma 3 holds when n = N, we show in (32) that it also holds for n = N + 1:

\begin{align}
&\Pr\Big\{\sum_{i=1}^{N+1} X_i = s\Big\} \nonumber\\
&= \sum_{s_N=1}^{s-1}\Pr\Big\{X_{N+1}=s-s_N \,\Big|\, \sum_{i=1}^{N} X_i = s_N\Big\}\Pr\Big\{\sum_{i=1}^{N} X_i = s_N\Big\}
 + \Pr\Big\{X_{N+1}=0 \,\Big|\, \sum_{i=1}^{N} X_i = s\Big\}\Pr\Big\{\sum_{i=1}^{N} X_i = s\Big\} \nonumber\\
&\quad + \Pr\Big\{X_{N+1}=s \,\Big|\, \sum_{i=1}^{N} X_i = 0\Big\}\Pr\Big\{\sum_{i=1}^{N} X_i = 0\Big\} \nonumber\\
&= \sum_{s_N=1}^{s-1}\frac{\lambda^{s-s_N}}{(s-s_N)!(e^{\lambda}-1)}\cdot\frac{\lambda^{s_N}}{s_N!(e^{\lambda}-1)^N}\sum_{l=1}^{N}\binom{N}{l}\Big(\sum_{j=M+1}^{\infty}\frac{\lambda^j}{j!}-1\Big)^{N-l} l^{s_N} \nonumber\\
&\quad + \sum_{j=M+1}^{\infty}\frac{\lambda^j}{j!(e^{\lambda}-1)}\cdot\frac{\lambda^{s}}{s!(e^{\lambda}-1)^N}\sum_{l=1}^{N}\binom{N}{l}\Big(\sum_{j=M+1}^{\infty}\frac{\lambda^j}{j!}-1\Big)^{N-l} l^{s}
 + \frac{\lambda^{s}}{s!(e^{\lambda}-1)}\cdot\frac{1}{(e^{\lambda}-1)^N}\Big(\sum_{j=M+1}^{\infty}\frac{\lambda^j}{j!}\Big)^{N} \nonumber\\
&= \frac{\lambda^{s}}{s!(e^{\lambda}-1)^{N+1}}\Bigg[\sum_{l=1}^{N}\binom{N}{l}\Big(\sum_{j=M+1}^{\infty}\frac{\lambda^j}{j!}-1\Big)^{N-l}\big((l+1)^s - l^s - 1\big)
 + \sum_{j=M+1}^{\infty}\frac{\lambda^j}{j!}\sum_{l=1}^{N}\binom{N}{l}\Big(\sum_{j=M+1}^{\infty}\frac{\lambda^j}{j!}-1\Big)^{N-l} l^{s}
 + \Big(\sum_{j=M+1}^{\infty}\frac{\lambda^j}{j!}\Big)^{N}\Bigg] \nonumber\\
&= \frac{\lambda^{s}}{s!(e^{\lambda}-1)^{N+1}}\sum_{l=1}^{N+1}\binom{N+1}{l}\Big(\sum_{j=M+1}^{\infty}\frac{\lambda^j}{j!}-1\Big)^{N+1-l} l^{s}. \tag{32}
\end{align}

APPENDIX B
MULTI-ROUND CONTENTION WITH CARRY-OVER: A THROUGHPUT UPPER BOUND

In our proposed scheme, a winning station may not be selected for data transmission when there are more than M winning stations by the end of the contention phase. The unselected stations regard themselves as having encountered virtual collisions and back off. In this appendix, we propose an alternative scheme where unselected winning stations are carried over to the next super round instead of being discarded. In other words, these stations are automatically categorized as winning stations without the need to contend for the channel again. This scheme is based on the ideal assumption that the AP can memorize the contention outcomes of the previous round. Hence, no "contention efforts" are wasted: all winning stations can eventually transmit without the need to contend again. As such, it is always optimal to wait until there are no fewer than M winning stations (including the carried-over ones) before data transmission. The system throughput is denoted by S_c, with the optimal λ that maximizes S_c being

\begin{equation}
\lambda_c^* = \arg\max_{\lambda} S_c = \arg\max_{\lambda} \frac{E[X]}{T_{RTS} + m_I\sigma}. \tag{34}
\end{equation}

It is not surprising that the optimal λ_c^* is simply the one that maximizes the number of winning stations per unit time during the contention phase. S_c(λ_c^*) serves as an upper bound on the throughput of the multi-round contention WLAN without carry-over. In other words, it puts a cap on the potential throughput enhancement achievable by varying λ across contention rounds. In Fig. 10, the throughput upper bound is plotted together with the maximum throughput S^* of the non-carry-over protocol. It shows that S^* can hardly be further improved when M is small, and at most by 11% when M is as large as 80.
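As an aside, the closed form in (32) can be sanity-checked numerically. The sketch below is our own illustration, not part of the paper: it takes each per-slot variable X_i to be Poisson(λ) conditioned on at least one arrival, collapsed to 0 whenever the count exceeds M (our reading of the contention model), builds the exact n-fold convolution, and compares it with the right-hand side of (32) for 1 ≤ s ≤ M.

```python
import math

def per_slot_pmf(lam, M, jmax=80):
    # X: Poisson(lam) conditioned on >= 1 attempt; counts above M
    # collapse to 0 (assumption: > M simultaneous transmissions are lost).
    norm = math.expm1(lam)  # e^lam - 1
    pmf = {0: sum(lam**j / math.factorial(j) for j in range(M + 1, jmax)) / norm}
    for s in range(1, M + 1):
        pmf[s] = lam**s / (math.factorial(s) * norm)
    return pmf

def convolve(p, q):
    # Exact convolution of two integer-valued pmfs given as dicts.
    out = {}
    for a, pa in p.items():
        for b, qb in q.items():
            out[a + b] = out.get(a + b, 0.0) + pa * qb
    return out

def lemma3_rhs(lam, M, n, s, jmax=80):
    # Closed form of Lemma 3 / equation (32), checked for 1 <= s <= M.
    B = sum(lam**j / math.factorial(j) for j in range(M + 1, jmax))
    norm = math.expm1(lam)
    total = sum(math.comb(n, l) * (B - 1)**(n - l) * l**s
                for l in range(1, n + 1))
    return lam**s / (math.factorial(s) * norm**n) * total

lam, M = 1.0, 3
pmf = per_slot_pmf(lam, M)
conv = pmf
for n in range(2, 4):                      # check n = 2 and n = 3
    conv = convolve(conv, pmf)
    for s in range(1, M + 1):
        assert abs(conv[s] - lemma3_rhs(lam, M, n, s)) < 1e-10
```

Note that the identity is only checked for s ≤ M; for larger s the per-slot truncation matters and the closed form no longer coincides with the convolution.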
6,495
1006.1919
2952193499
We introduce a new simulation platform called Insight, created to design and simulate cyber-attacks against large arbitrary target scenarios. Insight has surprisingly low hardware and configuration requirements, while making the simulation a realistic experience from the attacker's standpoint. The scenarios include a crowd of simulated actors: network devices, hardware devices, software applications, protocols, users, etc. A novel characteristic of this tool is the simulation of vulnerabilities (including 0-days) and exploits, allowing an attacker to compromise machines and use them as pivoting stones to continue the attack. A user can test and modify complex scenarios, with several interconnected networks, where the attacker has no initial connectivity with the objective of the attack. We give a concise description of this new technology, and of its possible uses in the security research field, such as pentesting training, the study of the impact of 0-day vulnerabilities, the evaluation of security countermeasures, and risk assessment.
In contrast, "high-interaction honeypots" and virtualization technologies (e.g., VMware, Xen, Qemu) execute native system and application code, but the price of this fidelity is quite high. For example, the RINSE approach @cite_9 is implemented on top of the iSSFNet network simulator, which runs on parallel machines to support real-time simulation of large-scale networks. All these solutions share the same principle of simulating almost every aspect of a real machine or real network, but they share similar problems too: expensive configuration costs and expensive hardware and software licenses. Moreover, most of these solutions are not fully compatible with standard network protections (e.g., firewalls, IDSs), suffering from a lack of integration among the security actors in complex cyber-attack scenarios.
{ "abstract": [ "NeSSi network security simulator is a novel network simulation tool which incorporates a variety of features relevant to network security distinguishing it from general-purpose network simulators. Its capabilities such as profile-based automated attack generation, traffic analysis and support for detection algorithm plug-ins allow it to be used for security research and evaluation purposes. NeSSi has been successfully used for testing intrusion detection algorithms, conducting network security analysis and developing overlay security frameworks. NeSSi is built upon the agent framework JIAC, resulting in a distributed and extensible architecture. In this paper, we provide an overview of the NeSSi architecture as well as its distinguishing features and briefly demonstrate its application to current security research projects." ], "cite_N": [ "@cite_9" ], "mid": [ "1971632589" ] }
Simulating Cyber-Attacks for Fun and Profit
Computer security has become a necessity in most of today's computer uses and practices; however, it is a wide topic and security issues can arise from almost everywhere: binary flaws (e.g., buffer overflows [17]), Web flaws (e.g., SQL injection, remote file inclusion), protocol flaws (e.g., TCP/IP flaws [3]), not to mention hardware, human, cryptographic and other well-known flaws. Although it may seem obvious, it is useless to secure a network with a hundred firewalls if the computers behind it are vulnerable to client-side attacks. The protection provided by an Intrusion Detection System (IDS) is worthless against new vulnerabilities and 0-day attacks. As networks have grown in size, they implement a wider variety of more complex configurations and include new devices (e.g., embedded devices) and technologies. This has created new flows of information and control, and therefore new attack vectors. As a result, the job of both black hat and white hat communities has become more difficult and challenging. The previous examples are just the tip of the iceberg; computer security is a complex field and it has to be approached with a global view, considering the whole picture simultaneously: network devices, hardware devices, software applications, protocols, users, etcetera. With that goal in mind, we introduce a new simulation platform called Insight, which has been created to design and simulate cyber-attacks against arbitrary target scenarios. In practice, the simulation of complex networks requires resolving the tension between the scalability and accuracy of the simulated subsystems, devices and data. This is a complex issue, and to find a satisfying solution for this trade-off we have adopted the following design restrictions:

1. Our goal is to have a simulator on a single desktop computer, running hundreds of simulated machines, with simulated traffic that is realistic only from the attacker's standpoint.

2.
Attacks within the simulator are not launched by real attackers in the wild (e.g., script-kiddies, worms, black hats). As a consequence, the simulation does not have to handle exploiting details such as stack overflows or heap overflows. Instead, attacks are executed from an attack framework by Insight users who know they are playing in a simulated environment.

To demonstrate our approach, Insight introduces a platform for executing attack experiments and tools for constructing these attacks. By providing this ability, we show that its users are able to design and adapt attack-related technologies, and have better tests to assess their quality. Attacks are executed from an attack framework which includes many information gathering and exploitation modules. Modules can be scripted, modified or even added. One of the major Insight features is the capability to simulate exploits. An exploit is a piece of code that attempts to compromise a computer system via a specific vulnerability. There are many ways to exploit security holes. If a programmer makes a mistake in a computer program, it is sometimes possible to circumvent security. Some common exploiting techniques are stack exploits, heap exploits, format string exploits, etc. Simulating these techniques in detail is very expensive. The main problem is to maintain the complete state (e.g., memory, stack, heap, CPU registers) for every simulated machine. From the attacker's point of view, an exploit can be modeled as a magic string sent to a target machine to unleash a hidden feature (e.g., reading files remotely) with a probabilistic result. This is a lightweight approach, and we have sacrificed some of the realism in order to support very large and complex scenarios. For example, 1,000 virtual machines and network devices (e.g., hubs, switches, IDS, firewalls) can be simulated on a single Windows desktop, each one running its own simulated OS, applications, vulnerabilities and file systems.
Certainly, taking into account available technologies, it is not feasible to use a complete virtualization server (e.g., VMware) running thousands of images simultaneously. As a result, the main design concept of our implementation is to focus on the attacker's point of view, and to simulate on demand. In particular, the simulator only generates information as requested by the attacker. With this on-demand processing, the main performance bottleneck becomes the ability of the attacker to request information from the scenario. Therefore, it is not necessary, for example, to simulate the complete TCP/IP packet traffic over the network if nobody is requesting that information. A more lightweight approach is to send data between network sockets by writing in the memory address space of the peer socket, leaving the full packet simulation as an option.

INSIGHT APPROACH & OVERVIEW

A diagram of the Insight general architecture is shown in Fig. 1. The Simulator subsystem is the main component. It performs all simulation tasks on the simulated machines, such as system call execution, memory management, interrupts, device I/O management, etcetera. At least one Simulator subsystem is required, but the architecture allows several ones, each running on a real computer (e.g., a Windows desktop). In this example, there are two simulation subsystems, but more could be added in order to support more virtual hosts. The simulation proceeds in a lightweight fashion, including copy-on-write file system optimizations (implemented in Insight, as we are going to see in §5.5). It means, for example, that not all system calls for all OS are supported by the simulation. Instead of implementing the whole universe of system calls, Insight handles a reduced and generic set of system calls, shared by all the simulated OS. Using this approach, a specific OS system call is mapped to an Insight syscall which works similarly to the original one.
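This mapping can be pictured with a toy dispatch table (our own sketch with hypothetical names, not Insight's actual API or syscall set): OS-specific call names are routed to a shared, generic implementation.

```python
# Generic Insight-side handlers (hypothetical); real behavior is richer.
def insight_open(path, mode="r"):
    return f"fd:{path}"          # placeholder file descriptor

def insight_write(fd, data):
    return len(data)             # pretend all bytes were written

# One mapping table per simulated OS family: different native names,
# same generic implementation.
SYSCALL_MAP = {
    "windows": {"CreateFileA": insight_open, "WriteFile": insight_write},
    "linux":   {"open": insight_open,        "write": insight_write},
}

def dispatch(vm_os, name, *args):
    """Route an OS-specific syscall to its generic Insight handler."""
    handler = SYSCALL_MAP[vm_os][name]
    return handler(*args)

fd = dispatch("windows", "CreateFileA", "C:/boot.ini")
n = dispatch("linux", "write", fd, b"hello")
assert n == 5
```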
For example, the Windows sockets API is based on the Berkeley sockets API model used in Berkeley UNIX, but both implementations are slightly different. Similarly, there are some instances where Insight sockets have to diverge from strict adherence to the Berkeley conventions, usually due to implementation difficulties in the simulated environment. In spite of this (and ignoring the differences between OS), all real-world socket system calls have been mapped to this unique simulated API. Of course, there are some system calls and management tasks closely related to the underlying OS which are not fully supported, such as the UNIX fork and signal syscalls, or the complete set of functions implemented by the Windows SDK. There is a trade-off between precision and efficiency, and the decision of which syscalls to implement was made with the objective of maintaining the precision of the simulation from the attacker's standpoint.

The exploitation of binary vulnerabilities is simulated with a probabilistic approach, keeping the attack model simple and lightweight, and avoiding tracking anomalous conditions (and their countermeasures), such as buffer overflows, format string vulnerabilities and exception handler overwriting, among other well-known vulnerabilities [1]. This probabilistic approach allows us to mimic the unpredictable behavior observed when an exploit is launched against a targeted machine. Let us assume that a simulated computer was initialized with an underlying vulnerability (e.g., it hosts a vulnerable OS). In this case, the exploit payload is replaced by a special ID or "magic string", which is sent to the attacked application using a pre-existing TCP communication channel. When the attacked application receives this ID, Insight will decide whether the exploit worked or not based on a probability distribution that depends on the exploit and the properties describing the simulated computer (e.g., OS, patches, open services).
If the exploit is successful, then Insight will grant control of the target computer through the agent abstraction, which will be described in §4. The probabilistic attack model is implemented by the Simulator subsystems, and it is supported by the Exploits Database, a special configuration file which stores the information related to the vulnerabilities. This file has an XML tree structure, and each entry holds all the information needed by the simulator to compute the probabilistic behavior of a given simulated exploit. For example, a given exploit succeeds against a clean XP SP2 with 83% probability if port 21 is open, but crashes the system if it is a SP1. We discuss the probability distribution, how to populate the exploits database, and the Insight attack model in the next sections.

Returning to the architecture layout shown in Fig. 1, all Simulator subsystems are coordinated by a unique Simulator Monitor, which deals with management and administrative operations, including administrative tasks (such as starting/stopping a simulator instance) and providing statistical information on their usage and performance. A set of Configuration Files defines the snapshot of a virtual Scenario. Similarly, a scenario snapshot defines the instantaneous status of the simulation, and involves a crowd of simulated actors: servers, workstations, applications, network devices (e.g., firewalls, routers or hubs) and their present status. Even users can be simulated with this approach, and this is especially interesting in client-side attack simulation, where we expect some careless users opening our crafted, poisoned e-mails. Finally, at the right bottom of the architecture diagram, we can see the Penetration Testing Framework, an external system which interacts with the simulated scenario in real time, sending system call requests through a communication channel implemented by the simulator.
This attack framework is a free, tailored version of the Impact solution (available from http://trials.coresecurity.com/); however, other attack tools are planned to be supported in the future (e.g., Metasploit [16]). Note that Insight supports the simulation of binary vulnerabilities; other kinds of vulnerabilities (e.g., client-side and SQL injections) will be implemented in future versions. The attacker actions are coded as Impact script files (using Python) called modules, which have been implemented using the attack framework SDK, as shown in the architecture diagram. The framework Python modules include several tools for common tasks (e.g., information gathering, exploits, import of scenarios). The attacks are executed in real time against a given simulated scenario; a simulation component can provide scenarios of thousands of computers with arbitrary configurations and topologies. Insight users can design new scenarios, and they have scripts to manage the creation and modification of the simulated components, and can therefore iterate, import and reproduce cyber-attack experiments.

THE SIMULATED ATTACK MODEL

One of the characteristics that distinguish the scenarios simulated by Insight is the ability to compromise machines, and use them as pivoting stones to build complex multi-step attacks. To compromise a machine means to install an agent that will be able to execute arbitrary system calls (syscalls) as a user of this system. The agent architecture is based on the solution called syscall proxy (see [5] for more details). The idea of syscall proxying is to build a sort of universal payload that allows an attacker to execute any system call on a compromised host. By installing a small payload (a thin syscall server) on a vulnerable machine, the attacker will be able to execute complex applications on his local host, with all system calls executed remotely. This syscall server is called an agent.
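The syscall-proxying loop can be sketched in a few lines. This is purely our own illustration with a hypothetical JSON-over-TCP wire format; Insight's actual protocol, marshalling and syscall set are richer. The client marshals a request, the agent (thin syscall server) executes it and returns the result.

```python
import json, socket, threading

AGENT_SYSCALLS = {  # the thin syscall server exposes a small call table
    "getpid": lambda: 4242,
    "add": lambda a, b: a + b,  # stand-in for a real syscall
}

def agent(sock):
    """Syscall server layer: unmarshal one request, execute, reply."""
    with sock:
        req = json.loads(sock.makefile("r").readline())
        result = AGENT_SYSCALLS[req["name"]](*req["args"])
        sock.sendall((json.dumps({"result": result}) + "\n").encode())

def proxy_call(port, name, *args):
    """Syscall client layer: forward the call to the remote agent."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall((json.dumps({"name": name, "args": list(args)}) + "\n").encode())
        return json.loads(s.makefile("r").readline())["result"]

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=lambda: agent(srv.accept()[0]), daemon=True).start()
assert proxy_call(port, "add", 2, 3) == 5
```

The point of the design is that the process driving the attack runs locally while every syscall it issues is executed on the compromised (here, simulated) host.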
In the Insight attack model, the use of syscall proxying introduces two additional layers between a process run by the attacker and the compromised OS. These layers are the syscall client layer and the syscall server layer. The syscall client layer runs on the attacker's Penetration Testing Framework. It acts as a link between the process running on the attacker's machine and the system services on a remote host simulated by Insight. This layer is responsible for forwarding each syscall argument and generating a proper request that the agent can understand. It is also responsible for sending this request to the agent and sending back the results to the calling process. The syscall server layer (i.e., the agent that runs on the simulated system) receives requests from the syscall client to execute specific syscalls using the OS services. After the syscall finishes, its results are marshalled and sent back to the client.

Probabilistic exploits

In the simulator security model, a vulnerability is a mechanism used to access an otherwise restricted communication channel. In this model, a real exploit payload is replaced by an ID or "magic string" which is sent to a simulated application. If this application is defined to be vulnerable (and some other requirements are fulfilled), then an agent will be installed on the computer hosting the vulnerable application. The simulated exploit payload includes the aforementioned magic string. When the Simulator subsystem receives this information, it looks up the string in the Exploits Database. If it is found, then the simulator will decide whether the exploit worked or not, and with what effect, based on a probability distribution that depends on the effective scenario information of that computer and the specific exploit. Suppose, for example, that the Penetration Testing Framework assumes (wrongly) that the attacked machine is a Red Hat Linux 8.0, but that machine is indeed a Windows system.
In this hypothetical situation, the exploit would fail with 100% probability. On the other hand, if the attacked machine is indeed running an affected version of Red Hat Linux 9.0, then the probability of success could be 75%, or whatever the exploits database determines.

Remote attack model overview

In Fig. 2 we can see the sequence of events which occurs when an attacker launches a remote exploit against a simulated machine. The rectangles at the top are the four principal components involved; the Penetration Testing Framework, the Simulator and the Exploits Database are the subsystems explained in Fig. 1. When an exploit is launched against a service running on a simulated machine, a connection is established between the Penetration Testing Framework and the service. Then, the simulated exploit payload is sent to the application. The targeted application reads the payload by running the system call read. Every time the syscall read is invoked, the Simulator subsystem analyzes whether a magic string is present in the data which has just been read. When a magic string is detected, the Simulator searches for it in the Exploits Database. If the exploit is found, a new agent is installed on the compromised machine. The exploit payload also includes the information that the Penetration Testing Framework knows about the attacked machine: OS version, system architecture, service packs, etcetera. All this information is used to compute the probabilistic function and allows the Simulator to decide whether the exploit should succeed or not.

Local attack model overview

Insight can also simulate local attacks: if an attacker gains control over a machine but does not have enough privileges to complete a specific action, a local attack can deploy a new agent with higher privileges. In Fig. 3 we can see the sequence of events which occurs when a local attack is launched against a given machine.
A running agent has to be present on the targeted machine in order to launch a local exploit. All local simulated attacks are executed by the Simulator subsystem identically: the Penetration Testing Framework will write the exploit magic string into the agent's standard input, using the write system call, and the Simulator will eventually detect the magic string by intercepting that system call. As in the previous example, the exploit magic string is searched in the database and a new agent (with higher privileges) is installed with some probability.

DETAILED DESCRIPTION

One of the most challenging issues in the Insight architecture is to resolve the tension between realism and performance. The goal was to have a simulator on a single desktop computer, running hundreds of simulated machines, with simulated traffic that is realistic from a penetration test point of view. But there is a trade-off between realism and performance, and we are going to discuss some of these problems and other architecture details in the following sections.

The Insight development library

New applications can be developed for the simulation platform using a minimal C standard library, a standardized collection of header files and library routines used to implement common operations such as input, output and string handling in the C programming language. This library, a partial libc, implements the most common functions (e.g., read, write, open), allowing any developer to implement his own services with the usual compilers and development tools (e.g., gcc, g++, MS Visual Studio). For example, a web server could be implemented, linked with the provided libc and plugged into the Insight simulated scenarios. The provided libc supports the most common system calls, but it is still incomplete and we were unable to compile complex open source applications.
In spite of this, some services (e.g., a small DNS) and network tools (e.g., ipconfig, netstat) have been included in the simulation platform, and new system calls are planned to be supported in the future.

Simulating sockets

A hierarchy of file descriptors has been developed, as shown in Fig. 4. File descriptors can refer (but are not limited) to files, directories, sockets, or pipes. At the top of the hierarchy, the tree root shows the descriptor object, which typically provides the operations for reading and writing data, closing and duplicating file descriptors, among other generic system calls. The simulated sockets implementation spans two kinds of supported socket subclasses:

1. SocketDirect. This variety of sockets is optimized for the simulation in one computer. SocketDirect is fast: as soon as a connection is established, the client keeps a file descriptor pointing directly to the server's descriptor. Routing is only executed during the connection, and the protocol control blocks (PCBs) are created as expected, but they are only used during connection establishment. Reading and writing operations between direct sockets are carried out using shared memory. Since both sockets can access the shared memory area like regular working memory, this is a very fast way of communication.

2. SocketReal. In some particular cases, we are interested in having full socket functionality. For example, the communication between Insight and the outside world is made using real sockets. As a result, this socket subclass wraps a real BSD socket of the underlying OS.

Support for routing and stateless firewalling was also implemented, supporting the simulation of attack payloads that connect back to the attacker, accept connections from the attacker or reuse the attack connection.

The exploits database

When an exploit is raised, Insight has to decide whether the attack is successful or not, depending on the environment conditions.
For example, to be successful, an exploit can require a specific service pack installed on the target machine, a specific library loaded in memory, or a particular open port, among other requirements. All these conditions vary over time, and they are basically unpredictable from the attacker's standpoint. As a result, the behavior of a given exploit has been modeled using a probabilistic approach. In order to determine the resulting behavior of the attack, Insight uses the Exploits Database shown in the architecture layout of Fig. 1. It has an XML tree structure. For example, if an exploit succeeds against a clean XP professional SP2 with 83% probability, and otherwise crashes the machine with 0.05% probability, this could be expressed as follows:

<database>
  <exploit id="sample exploit">
    <requirement type="system">
      <os arch="i386" name="windows" />
      <win>XP</win>
      <edition>professional</edition>
      <servicepack>2</servicepack>
    </requirement>
    <results>
      <agent chance="0.83" />
      <crash chance="0.05" what="os" />
      <reset chance="0.00" what="os" />
      <crash chance="0.00" what="application" />
      <reset chance="0.00" what="application" />
    </results>
  </exploit>
  <exploit> ... </exploit>
  <exploit> ... </exploit>
  ...
</database>

The conditions needed to install a new agent are described in the requirements section. It is possible to use several tags in this section; they specify the conditions which have influence on the execution of the exploit (e.g., OS required, a specific application running, an open port). The results section is a list of the relevant probabilities. In order, these are the chance of:

1. successfully installing an agent,
2. crashing the target machine,
3. resetting the target machine,
4. crashing the target application,
5. and the chance of resetting the target application.

To determine the result, we follow this procedure: processing the lines in order, for each positive probability, choose a random value between 0 and 1.
If the value is smaller than the chance attribute, the corresponding action is the result of the exploit. In this example, we draw a random number to see if an agent is installed. If the value is smaller than 0.83, an agent is installed and the execution of the exploit is finished. Otherwise, we draw a second number to see if the OS crashes. If the value is smaller than 0.05, the OS crashes and the attacked machine becomes useless; otherwise there is no visible result. Other possible results could be: raising an IDS alarm, writing some log in a network device (e.g., firewall, IDS or router), or capturing a session id, cookie, credential or password.

The exploits database allows us to model the probabilistic behavior of any exploit from the attacker's point of view, but how do we populate our database? A paranoid approach would be to assign a probability of success of 100% to every exploit. In that way, we would consider the case where an attacker can launch each exploit as many times as he wants, and will finally compromise the target machine with 100% probability (assuming the attack does not crash the system). A more realistic approach is to use statistics from real networks. Currently we are using the framework presented by Marcelo Picorelli [18] in order to populate the probabilities in the exploits database. This framework was originally implemented to assess and improve the quality of real exploits in QA environments. It allows us to perform over 500 real exploitation tests daily on several running configurations, spanning different target operating systems with their own setups and applications that add up to more than 160 OS configurations. In this context, a given exploit is executed against:

• All the available platforms
• All the available applications

All these tests are executed automatically using low-end hardware, VMware servers, OS images and snapshots.
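The result-drawing procedure described above (process the result entries in order; for each positive probability, draw a uniform value and compare it with the chance attribute) can be sketched as follows. The structures are hypothetical, but the action names and probabilities mirror the sample XML entry:

```python
import random

# Ordered results for the "sample exploit" entry of the exploits database.
RESULTS = [
    ("agent",             0.83),   # install an agent
    ("crash os",          0.05),   # crash the target machine
    ("reset os",          0.00),
    ("crash application", 0.00),
    ("reset application", 0.00),
]

def exploit_outcome(results, rng=random.random):
    # Process entries in order; the first successful draw decides.
    for action, chance in results:
        if chance > 0 and rng() < chance:
            return action
    return "no visible result"

# Deterministic draws for illustration: 0.9 misses the agent (0.9 >= 0.83),
# then 0.01 hits the OS crash (0.01 < 0.05).
draws = iter([0.9, 0.01])
assert exploit_outcome(RESULTS, rng=lambda: next(draws)) == "crash os"
```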
The testing framework has been designed to improve testing time and coverage, and we have modified it in order to collect statistical information on the exploitation test results.

Scheduler

The scheduler's main task is to assign the CPU resources to the different simulated actors (e.g., simulated machines and processes). The scheduling iterates over the machine-process-thread hierarchy as a tree (like a depth-first search), with each machine running its processes in round-robin. In a similar way, running a process means giving all its threads the order to run until a system call is needed. Obviously, depending on the state of each thread, they run, change state or finish execution. The central issue is that threads execute system calls and then (if possible) continue their activity until they finish or another system call is required. Insight threads are simulated within real threads of the underlying OS. Simulated machines and processes all run within one or several working processes (running hundreds of threads), and all of them are coordinated by a unique scheduler process called the master process. Thanks to this architecture, there is a very low loss of performance due to context switching.

File system

In order to handle thousands of files without wasting huge disk space, the file system simulation is accomplished by mounting shared file repositories. We refer to these repositories as template file systems. For example, all simulated Windows XP systems could share a file repository with the default installation provided by Microsoft. These shared templates have reading permission only. Thus, if a virtual machine needs to read or change a file, it will be copied within the local file system of the given machine. This technique is well known as copy-on-write. The fundamental idea is to allow multiple callers to ask for resources which are initially indistinguishable, giving them pointers to the same resource.
This function can be maintained until a caller tries to modify its copy of the resource, at which point a true private copy is created to prevent the changes from becoming visible to everyone else. All of this happens transparently to the callers. The primary advantage is that no private copy needs to be created if a caller never makes any modification. On the other hand, with the purpose of improving the simulator's performance, a file cache has been implemented: the simulator keeps the most recently accessed files (or blocks of files) in memory. In high-scale simulated scenarios, it is very common to have several machines doing the same task at (almost) the same time. If the data requested by these kinds of tasks are in the file system cache, the whole system performance improves, because fewer disk accesses are required, even in scenarios of hundreds or thousands of simulated machines.

PERFORMANCE ANALYSIS

To evaluate the performance of the simulator we ran a test including a scenario with an increasing number of complete LANs with 250 computers each, simultaneously emulated. The test only involves the execution of a network discovery on the complete LANs through a TCP connection to port 80. An original pen-testing module used for information gathering was executed with no modifications; this was a design goal of the simulator: to use real, unmodified attack modules when possible.

Table 2: Evolution of the system performance as the simulated scenario grows, running a network discovery module, connecting to a predefined port.

Performance of the simulator
LANs  Computers  Time (secs)  Syscalls/sec
1     250        80           356
2     500        173          236
3     750        305          175
4     1000       479          139

This benchmark was run on a single Intel Pentium D 2.67 GHz, 1.43GB RAM. We can observe the decrease of system calls processed per second as we increase the number of simulated computers, as Insight was run on a single real computer with limited resources.
Nevertheless, the simulation is efficient because system calls are required on demand by the connections of the module gathering the information of the networks through TCP connections.

APPLICATIONS

We have created a playground to experiment with cyber-attack scenarios which has several applications. The most important are:

Data collection and visualization. Having the complete network scenario on one computer allows an easy capture and log of system calls and network traffic. This information is useful for analyzing and debugging real pen-test tools and their behavior in complex scenarios. Some efforts have been made to visualize attack pivoting and network information gathering using the platform presented.

Pentest training. Our simulation tool is already being used in Pentest courses. It provides reproducible scenarios, where students can practice the different steps of a pentest: information gathering, attack and penetrate, privilege escalation, local information gathering and pivoting. The simulation allows the student to grasp the essence of pivoting. Setting up a real laboratory where pivoting makes sense is an expensive task, whereas our tool requires only one computer per student (and in case of a network or computer crash, the simulation environment can be easily reset). Configuring new scenarios, with more machines or more complex topologies, is easy, as a scenario wizard is provided. In Pentest classes with Insight, the teacher can check the logs to see if students used the right tools with the correct parameters. He can test the students' ability to plan, and see if they did not perform unnecessary actions. The teacher can also identify their weaknesses as pentesters and plan new exercises to work on these. The students can be evaluated: success, performance, stealth and quality of reports can be measured.

Worm Spreading Analysis.
The lightweight design of the platform, which allows simulating the socket/network behavior of thousands of computers, provides a good framework for research on worm infestation and spreading. It should be possible to develop very accurate applications to mimic worm behavior using the Insight C programming API. Abstract modeling [7] and high-fidelity discrete-event [27] studies are available, but no system-call-level recreation of attacks such as the one we propose in this future application of the platform. Attack Planning. It can be used as a flexible environment to develop and test attack planning algorithms used in automated penetration testing based on attack graphs [12]. Analysis of countermeasures. Duplicating the production configuration on a simulated staging environment that accurately mimics or mirrors the security aspects of an organization's network allows anticipating software/hardware changes and their impact on security. For example, you can answer questions like "Will the network avoid attack vector A if firewall rule R is added to the complex rule set S of firewall F?" Impact of 0-day vulnerabilities. The simulator can be used to study the impact of 0-days (vulnerabilities that have not been publicly disclosed) in your network. How is that possible? We do not know current 0-days... but we can model the existence of 0-day vulnerabilities based on statistics. In our security model, the specific details of the vulnerability are not needed to study its impact on the network, just that it may exist with a measurable probability. That information can be gathered from public vulnerability databases: the discovery date, exploit date, disclosure date and patch date are found in several public databases of vulnerabilities and exploits [6, 23, 22, 11]. The risk of a 0-day vulnerability is given by the probability of an attacker discovering and exploiting it. 
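As a sketch of this kind of statistical modeling (the function name, the independence assumption, and all numeric values here are illustrative, not taken from the paper's data), the probability that at least one currently undisclosed vulnerability in a scenario is already privately exploitable can be estimated from a per-vulnerability exploitation probability:

```python
# Hypothetical sketch: modeling 0-day risk from public statistics.
# `p_exploited` (probability that a single undisclosed vulnerability
# has already been discovered and exploited) would in practice be
# estimated from public databases of discovery/exploit/disclosure
# dates; the values below are illustrative assumptions.

def zero_day_risk(n_undisclosed: int, p_exploited: float) -> float:
    """Probability that at least one of `n_undisclosed` vulnerabilities
    is already exploitable, assuming independence between them."""
    return 1.0 - (1.0 - p_exploited) ** n_undisclosed

# E.g., ~9 vulnerabilities awaiting disclosure, each with an assumed
# 5% chance of having been privately discovered and exploited:
print(f"{zero_day_risk(9, 0.05):.3f}")  # → 0.370
```

Such a per-scenario risk figure could then drive the same probabilistic exploit machinery used for known vulnerabilities.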
Although we do not have data about the security underground, the probabilities given by public information are a lower-bound indicator. As shown in [10], the risk posed by a vulnerability exists before the discovery date, and augments as an exploit is made available and when the vulnerability is disclosed. The risk only diminishes as a patch becomes available and users apply the patches (and workarounds). The probability of discovery, and the probability of an exploit being developed, can be estimated as a function of the time before disclosure (see Fig. 5, taken from [10]). For Microsoft products, we have visibility of upcoming disclosures of vulnerabilities: every month (on Patch Tuesday) on average 9.40 patches are released (high and medium risk); based on those dates we estimate the probability that the vulnerabilities were discovered and exploited during the months before disclosure. CONCLUSION We have created a playground to experiment with cyberattack scenarios. The framework is based on a probabilistic attack model; that model is also used by attack planning tools developed in our lab. By making use of the syscall proxying technology, and simulating multiplatform agents, we were able to implement a simulation that is both realistic and lightweight, allowing the simulation of networks with thousands of hosts. The framework provides a global view of the scenarios. It is centered on the attacker's point of view, and designed to increase the size and complexity of simulated scenarios, while remaining realistic for the attacker. The value of this framework is given by its multiple applications, e.g., evaluating network security. If you are interested in using Insight, send us an email. We are trying to build a community using it as a common language for discussing information security scenarios and practices, and will strongly support new applications of this tool.
5,139
1006.1919
2952193499
We introduce a new simulation platform called Insight, created to design and simulate cyber-attacks against large arbitrary target scenarios. Insight has surprisingly low hardware and configuration requirements, while making the simulation a realistic experience from the attacker's standpoint. The scenarios include a crowd of simulated actors: network devices, hardware devices, software applications, protocols, users, etc. A novel characteristic of this tool is to simulate vulnerabilities (including 0-days) and exploits, allowing an attacker to compromise machines and use them as pivoting stones to continue the attack. A user can test and modify complex scenarios, with several interconnected networks, where the attacker has no initial connectivity with the objective of the attack. We give a concise description of this new technology, and its possible uses in the security research field, such as pentest training, study of the impact of 0-day vulnerabilities, evaluation of security countermeasures, and risk assessment.
Other interesting approaches to these problems include the framework developed by @cite_20 . While they focus on distributed denial of service (DDoS) attacks and defensive IDS analysis, we focus on offensive strategies to understand the scenarios and develop countermeasures. @cite_8 have also integrated @cite_13 and @cite_16 to create a flexible and very detailed network laboratory and simulation tool. The latter project has privileged accuracy and virtualization over scalability and performance.
{ "abstract": [ "The idea of VDE is very effective but straightforward simple and can be applied in very many configuration to provide several services. It is a sort of Swiss knife of emulated networks. It can be used as a general virtual private network as well as a support technology for mobility, a tool for network testing, a general reconfigurable overlay network, a layer for implementing privacy preserving technologies and many others. A prototype VDE has been implemented and released as free software under the GPL licence.", "User-mode Linux [2](UML) is the port of the Linux kernel to Linux. It implements a Linux virtual machine running on a Linux host. Its hardware is virtual, being constructed from resources provided by the host. UML can run essentially any application that can run on the host. The design and implementation of UML has been previously described[1]. This paper will describe the work that has taken place during the last year, including changes to what was described in the previous paper. This paper will also discuss new applications of UML involving integration of the virtual and host environments, along with other possibilties such as using UML as a clustering platform.", "Monitoring unused or dark IP addresses offers opportunities to significantly improve and expand knowledge of abuse activity without many of the problems associated with typical network intrusion detection and firewall systems. In this paper, we address the problem of designing and deploying a system for monitoring large unused address spaces such as class A telescopes with 16M IP addresses. We describe the architecture and implementation of the Internet Sink (iSink) system which measures packet traffic on unused IP addresses in an efficient, extensible and scalable fashion. In contrast to traditional intrusion detection systems or firewalls, iSink includes an active component that generates response packets to incoming traffic. 
This gives the iSink an important advantage in discriminating between different types of attacks (through examination of the response payloads). The key feature of iSink’s design that distinguishes it from other unused address space monitors is that its active response component is stateless and thus highly scalable. We report performance results of our iSink implementation in both controlled laboratory experiments and from a case study of a live deployment. Our results demonstrate the efficiency and scalability of our implementation as well as the important perspective on abuse activity that is afforded by its use.", "" ], "cite_N": [ "@cite_16", "@cite_13", "@cite_20", "@cite_8" ], "mid": [ "2130171629", "1591055603", "1532313454", "" ] }
Simulating Cyber-Attacks for Fun and Profit
Computer security has become a necessity in most of today's computer uses and practices; however, it is a wide topic, and security issues can arise from almost everywhere: binary flaws (e.g., buffer overflows [17]), Web flaws (e.g., SQL injection, remote file inclusion), protocol flaws (e.g., TCP/IP flaws [3]), not to mention hardware, human, cryptographic and other well-known flaws. Although it may seem obvious, it is useless to secure a network with a hundred firewalls if the computers behind it are vulnerable to client-side attacks. The protection provided by an Intrusion Detection System (IDS) is worthless against new vulnerabilities and 0-day attacks. As networks have grown in size, they implement a wider variety of more complex configurations and include new devices (e.g., embedded devices) and technologies. This has created new flows of information and control, and therefore new attack vectors. As a result, the job of both black hat and white hat communities has become more difficult and challenging. The previous examples are just the tip of the iceberg; computer security is a complex field, and it has to be approached with a global view, considering the whole picture simultaneously: network devices, hardware devices, software applications, protocols, users, etcetera. With that goal in mind, we are going to introduce a new simulation platform called Insight, which has been created to design and simulate cyber-attacks against arbitrary target scenarios. In practice, the simulation of complex networks requires resolving the tension between the scalability and accuracy of the simulated subsystems, devices and data. This is a complex issue, and to find a satisfying solution for this trade-off we have adopted the following design restrictions: 1. Our goal is to have a simulator on a single desktop computer, running hundreds of simulated machines, with simulated traffic realistic only from the attacker's standpoint. 2. 
Attacks within the simulator are not launched by real attackers in the wild (e.g., script kiddies, worms, black hats). As a consequence, the simulation does not have to handle exploiting details such as stack overflows or heap overflows. Instead, attacks are executed from an attack framework by Insight users who know they are playing in a simulated environment. To demonstrate our approach, Insight introduces a platform for executing attack experiments and tools for constructing these attacks. By providing this ability, we show that its users are able to design and adapt attack-related technologies, and have better tests to assess their quality. Attacks are executed from an attack framework which includes many information gathering and exploitation modules. Modules can be scripted, modified or even added. One of the major Insight features is the capability to simulate exploits. An exploit is a piece of code that attempts to compromise a computer system via a specific vulnerability. There are many ways to exploit security holes. If a computer programmer makes a programming mistake in a computer program, it is sometimes possible to circumvent security. Some common exploiting techniques are stack exploits, heap exploits, format string exploits, etc. Simulating these techniques in detail is very expensive. The main problem is to maintain the complete state (e.g., memory, stack, heap, CPU registers) for every simulated machine. From the attacker's point of view, an exploit can be modeled as a magic string sent to a target machine to unleash a hidden feature (e.g., reading files remotely) with a probabilistic result. This is a lightweight approach, and we have sacrificed some realism in order to support very large and complex scenarios. For example, 1,000 virtual machines and network devices (e.g., hubs, switches, IDS, firewalls) can be simulated on a single Windows desktop, each one running its own simulated OS, applications, vulnerabilities and file systems. 
Certainly, taking into account available technologies, it is not feasible to use a complete virtualization server (e.g., VMware) running thousands of images simultaneously. As a result, the main design concept of our implementation is to focus on the attacker's point of view, and to simulate on demand. In particular, the simulator only generates information as requested by the attacker. By performing this on-demand processing, the main performance bottleneck comes from the ability of the attacker to request information from the scenario. Therefore, it is not necessary, for example, to simulate the complete TCP/IP packet traffic over the network if nobody is requesting that information. A more lightweight approach is to send data between network sockets by writing in the memory address space of the peer socket, leaving the full packet simulation as an option. INSIGHT APPROACH & OVERVIEW A diagram of the Insight general architecture is shown in Fig. 1. The Simulator subsystem is the main component. It performs all simulation tasks on the simulated machines, such as system call execution, memory management, interrupts, device I/O management, etcetera. At least one Simulator subsystem is required, but the architecture allows several ones, each running on a real computer (e.g., a Windows desktop). In this example, there are two simulation subsystems, but more could be added in order to support more virtual hosts. The simulation proceeds in a lightweight fashion (including the copy-on-write file system optimizations also implemented in Insight, as we are going to see in §5.5). It means, for example, that not all system calls for all OS are supported by the simulation. Instead of implementing the whole universe of system calls, Insight handles a reduced and generic set of system calls, shared by all the simulated OS. Using this approach, a specific OS system call is mapped to an Insight syscall which works similarly to the original one. 
For example, the Windows sockets API is based on the Berkeley sockets API model used in Berkeley UNIX, but both implementations are slightly different. Similarly, there are some instances where Insight sockets have to diverge from strict adherence to the Berkeley conventions, usually due to implementation difficulties in the simulated environment. In spite of this (and ignoring the differences between OS), all socket system calls of the real world have been mapped to this unique simulated API. Of course, there are some system calls and management tasks closely related to the underlying OS which were not fully supported, such as the UNIX fork and signal syscalls, or the complete set of functions implemented by the Windows SDK. There is a trade-off between precision and efficiency, and the decision of which syscalls to implement was made with the objective of maintaining the precision of the simulation from the attacker's standpoint. The exploitation of binary vulnerabilities is simulated with a probabilistic approach, keeping the attack model simple and lightweight, and avoiding tracking anomalous conditions (and their countermeasures), such as buffer overflows, format string vulnerabilities and exception handler overwriting, among other well-known vulnerabilities [1]. This probabilistic approach allows us to mimic the unpredictable behavior when an exploit is launched against a targeted machine. Let us assume that a simulated computer was initialized with an underlying vulnerability (e.g., it hosts a vulnerable OS). In this case, the exploit payload is replaced by a special ID or "magic string", which is sent to the attacked application using a preexistent TCP communication channel. When the attacked application receives this ID, Insight will decide if the exploit worked or not based on a probability distribution that depends on the exploit and the properties describing the simulated computer (e.g., OS, patches, open services). 
If the exploit is successful, then Insight will grant control of the target computer through the agent abstraction, which will be described in §4. The probabilistic attack model is implemented by the Simulator subsystems, and it is supported by the Exploits Database, a special configuration file which stores the information related to the vulnerabilities. This file has an XML tree structure, and each entry has all the information needed by the simulator to compute the probabilistic behavior of a given simulated exploit. For example, a given exploit succeeds against a clean XP SP2 with 83% probability if port 21 is open, but crashes the system if it is an SP1. We are going to spend some time on the probability distribution, how to populate the exploits database, and the Insight attack model in the next sections. Returning to the architecture layout shown in Fig. 1, all Simulator subsystems are coordinated by a unique Simulator Monitor, which deals with management and administrative operations, including administrative tasks (such as starting/stopping a simulator instance) and providing statistical information on their usage and performance. A set of Configuration Files defines the snapshot of a virtual Scenario. Similarly, a scenario snapshot defines the instantaneous status of the simulation, and involves a crowd of simulated actors: servers, workstations, applications, network devices (e.g., firewalls, routers or hubs) and their present status. Even users can be simulated using this approach, and this is especially interesting in client-side attack simulation, where we expect some careless users opening our poisoned crafted e-mails. Finally, at the right bottom of the architecture diagram, we can see the Penetration Testing Framework, an external system which interacts with the simulated scenario in real time, sending system call requests through a communication channel implemented by the simulator. 
This attack framework is a free tailored version of the Impact solution (available from http://trials.coresecurity.com/); however, other attack tools are planned to be supported in the future (e.g., Metasploit [16]). Note that Insight supports simulation of binary vulnerabilities; other kinds of vulnerabilities (e.g., client-side and SQL injections) will be implemented in future versions. The attacker actions are coded as Impact script files (using Python) called modules, which have been implemented using the attack framework SDK, as shown in the architecture diagram. The framework Python modules include several tools for common tasks (e.g., information gathering, exploits, import scenarios). The attacks are executed in real time against a given simulated scenario; a simulation component can provide scenarios of thousands of computers with arbitrary configurations and topologies. Insight users can design new scenarios, and they have scripts to manage the creation and modification of the simulated components, and can therefore iterate, import and reproduce cyber-attack experiments. THE SIMULATED ATTACK MODEL One of the characteristics that distinguish the scenarios simulated by Insight is the ability to compromise machines, and use them as pivoting stones to build complex multi-step attacks. To compromise a machine means to install an agent that will be able to execute arbitrary system calls (syscalls) as a user of this system. The agent architecture is based on the solution called syscall proxying (see [5] for more details). The idea of syscall proxying is to build a sort of universal payload that allows an attacker to execute any system call on a compromised host. By installing a small payload (a thin syscall server) on a vulnerable machine, the attacker will be able to execute complex applications on his local host, with all system calls executed remotely. This syscall server is called an agent. 
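To illustrate the syscall proxying idea (this is a hypothetical minimal sketch, not Insight's actual wire protocol; the function names, the length-prefixed JSON format and the toy "syscalls" are ours), a thin server can receive marshalled syscall requests and execute them locally, while the client forwards each call and hands the result back to the calling process:

```python
# Hypothetical minimal sketch of syscall proxying: the wire format
# (length-prefixed JSON) and the supported calls are illustrative.
import json
import socket
import struct
import threading

def send_msg(sock, obj):
    data = json.dumps(obj).encode()
    sock.sendall(struct.pack("!I", len(data)) + data)

def recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        buf += sock.recv(n - len(buf))
    return buf

def recv_msg(sock):
    (size,) = struct.unpack("!I", recv_exact(sock, 4))
    return json.loads(recv_exact(sock, size))

def agent(sock):
    """Thin syscall server: executes requested calls on the 'remote' host."""
    handlers = {"getpid": lambda: 4242, "add": lambda a, b: a + b}  # toy syscalls
    while True:
        req = recv_msg(sock)
        if req["name"] == "exit":
            break
        send_msg(sock, {"result": handlers[req["name"]](*req["args"])})

def proxy_call(sock, name, *args):
    """Syscall client layer: marshals one call and returns its result."""
    send_msg(sock, {"name": name, "args": list(args)})
    return recv_msg(sock)["result"]

client, server = socket.socketpair()
threading.Thread(target=agent, args=(server,), daemon=True).start()
print(proxy_call(client, "add", 2, 3))  # → 5, executed "remotely" by the agent
send_msg(client, {"name": "exit", "args": []})
```

In the real system the client layer lives in the Penetration Testing Framework and the server is the agent installed by a successful exploit; the sketch only shows the marshalling round trip both layers perform.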
In the Insight attack model, the use of syscall proxying introduces two additional layers between a process run by the attacker and the compromised OS. These layers are the syscall client layer and the syscall server layer. The syscall client layer runs on the attacker's Penetration Testing Framework. It acts as a link between the process running on the attacker's machine and the system services on a remote host simulated by Insight. This layer is responsible for forwarding each syscall argument and generating a proper request that the agent can understand. It is also responsible for sending this request to the agent and sending back the results to the calling process. The syscall server layer (i.e., the agent that runs on the simulated system) receives requests from the syscall client to execute specific syscalls using the OS services. After the syscall finishes, its results are marshalled and sent back to the client. Probabilistic exploits In the simulator security model, a vulnerability is a mechanism used to access an otherwise restricted communication channel. In this model, a real exploit payload is replaced by an ID or "magic string" which is sent to a simulated application. If this application is defined to be vulnerable (and some other requirements are fulfilled), then an agent will be installed on the computer hosting the vulnerable application. The simulated exploit payload includes the aforementioned magic string. When the Simulator subsystem receives this information, it looks up the string in the Exploits Database. If it is found, then the simulator will decide if the exploit worked or not, and with what effect, based on a probability distribution that depends on the effective scenario information of that computer and the specific exploit. Suppose, for example, that the Penetration Testing Framework assumes (wrongly) that the attacked machine is a Red Hat Linux 8.0, but that machine is indeed a Windows system. 
In this hypothetical situation, the exploit would fail with 100% probability. On the other side, if the attacked machine is effectively running an affected version of Red Hat Linux 9.0, then the probability of success could be 75%, or as determined in the exploit database. Remote attack model overview In Fig. 2 we can see the sequence of events which occurs when an attacker launches a remote exploit against a simulated machine. The rectangles at the top are the four principal components involved: the Penetration Testing Framework, the Simulator and the Exploits Database are the subsystems explained in Fig. 1. When an exploit is launched against a service running in a simulated machine, a connection is established between the Penetration Testing Framework and the service. Then, the simulated exploit payload is sent to the application. The targeted application reads the payload by running the system call read. Every time the syscall read is invoked, the Simulator subsystem analyzes if a magic string is present in the data which has just been read. When a magic string is detected, the Simulator searches for it in the Exploits Database. If the exploit is found, a new agent is installed in the compromised machine. The exploit payload also includes the information that the Penetration Testing Framework knows about the attacked machine: OS version, system architecture, service packs, etcetera. All this information is used to compute the probabilistic function and allows the Simulator to decide whether the exploit should succeed or not. Local attack model overview Insight can also simulate local attacks: if an attacker gains control over a machine but does not have enough privileges to complete a specific action, a local attack can deploy a new agent with higher privileges. In Fig. 3 we can see the sequence of events which occurs when a local attack is launched against a given machine. 
A running agent has to be present on the targeted machine in order to launch a local exploit. All local simulated attacks are executed by the Simulator subsystem identically: the Penetration Testing Framework writes the exploit magic string into the agent's standard input, using the write system call, and the Simulator will eventually detect the magic string by intercepting that system call. As in the previous example, the exploit magic string is searched for in the database, and a new agent (with higher privileges) is installed with a probabilistic chance. DETAILED DESCRIPTION One of the most challenging issues in the Insight architecture is to resolve the tension between realism and performance. The goal was to have a simulator on a single desktop computer, running hundreds of simulated machines, with simulated traffic realistic from a penetration test point of view. But there is a trade-off between realism and performance, and we are going to discuss some of these problems and other architecture details in the following sections. The Insight development library New applications can be developed for the simulation platform using a minimal C standard library, a standardized collection of header files and library routines used to implement common operations such as input, output and string handling in the C programming language. This library, a partial libc, implements the most common functions (e.g., read, write, open), allowing any developer to implement his own services with the usual compilers and development tools (e.g., gcc, g++, MS Visual Studio). For example, a web server could be implemented, linked with the provided libc and plugged into the Insight simulated scenarios. The provided libc supports the most common system calls, but it is still incomplete and we were unable to compile complex open source applications. 
In spite of this, some services (e.g., a small DNS) and network tools (e.g., ipconfig, netstat) have been included in the simulation platform, and new system calls are planned to be supported in the future. Simulating sockets A hierarchy for file descriptors has been developed, as shown in Fig. 4. File descriptors can refer (but are not limited) to files, directories, sockets, or pipes. At the top of the hierarchy, the tree root shows the descriptor object which typically provides the operations for reading and writing data, and closing and duplicating file descriptors, among other generic system calls. The simulated sockets implementation spans two kinds of supported socket subclasses: 1. SocketDirect. This variety of sockets is optimized for simulation on one computer. Socket direct is fast: as soon as a connection is established, the client keeps a file descriptor pointing directly to the server's descriptor. Routing is only executed during the connection, and the protocol control blocks (PCBs) are created as expected, but they are only used during connection establishment. Reading and writing operations between direct sockets are carried out using shared memory. Since both sockets can access the shared memory area like regular working memory, this is a very fast way of communication. 2. SocketReal. In some particular cases, we are interested in having full socket functionality. For example, the communication between Insight and the outside world is made using real sockets. As a result, this socket subclass wraps a real BSD socket of the underlying OS. Support for routing and stateless firewalling was also implemented, supporting the simulation of attack payloads that connect back to the attacker, accept connections from the attacker, or reuse the attack connection. The exploits database When an exploit is raised, Insight has to decide whether the attack is successful or not depending on the environment conditions. 
For example, an exploit can require a specific service pack installed on the target machine to be successful, or a specific library loaded in memory, or a particular open port, among other requirements. All these conditions vary over time, and they are basically unpredictable from the attacker's standpoint. As a result, the behavior of a given exploit has been modeled using a probabilistic approach. In order to determine the resulting behavior of the attack, Insight uses the Exploits Database shown in the architecture layout of Fig. 1. It has an XML tree structure. For example, if an exploit succeeds against a clean XP Professional SP2 with 83% probability, or otherwise crashes the machine with 0.05% probability, this could be expressed as follows:

<database>
  <exploit id="sample exploit">
    <requirement type="system">
      <os arch="i386" name="windows" />
      <win>XP</win>
      <edition>professional</edition>
      <servicepack>2</servicepack>
    </requirement>
    <results>
      <agent chance="0.83" />
      <crash chance="0.05" what="os" />
      <reset chance="0.00" what="os" />
      <crash chance="0.00" what="application" />
      <reset chance="0.00" what="application" />
    </results>
  </exploit>
  <exploit> ... </exploit>
  <exploit> ... </exploit>
  ...
</database>

The conditions needed to install a new agent are described in the requirements section. It is possible to use several tags in this section; they specify the conditions which influence the execution of the exploit (e.g., the OS required, a specific application running, an open port). The results section is a list of the relevant probabilities. In order, these are the chance of: 1. successfully installing an agent, 2. crashing the target machine, 3. resetting the target machine, 4. crashing the target application, 5. and resetting the target application. To determine the result, we follow this procedure: processing the lines in order, for each positive probability, choose a random value between 0 and 1. 
If the value is smaller than the chance attribute, the corresponding action is the result of the exploit. In this example, we draw a random number to see if an agent is installed. If the value is smaller than 0.83, an agent is installed and the execution of the exploit is finished. Otherwise, we draw a second number to see if the OS crashes. If the value is smaller than 0.05, the OS crashes and the attacked machine becomes useless; otherwise there is no visible result. Other possible results could be: raising an IDS alarm, writing some log in a network device (e.g., firewall, IDS or router) or capturing a session id, cookie, credential or password. The exploits database allows us to model the probabilistic behavior of any exploit from the attacker's point of view, but how do we populate our database? A paranoid approach would be to assign a probability of success of 100% to every exploit. In that way, we would consider the case where an attacker can launch each exploit as many times as he wants, and will finally compromise the target machine with 100% probability (assuming the attack does not crash the system). A more realistic approach is to use statistics from real networks. Currently we are using the framework presented by Marcelo Picorelli [18] in order to populate the probabilities in the exploits database. This framework was originally implemented to assess and improve the quality of real exploits in QA environments. It allows us to perform over 500 real exploitation tests daily on several running configurations, spanning different target operating systems with their own setups and applications that add up to more than 160 OS configurations. In this context, a given exploit is executed against:
• All the available platforms
• All the available applications
All these tests are executed automatically using low-end hardware, VMware servers, OS images and snapshots. 
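The outcome-selection procedure described above can be sketched as follows (a hypothetical illustration: the element and attribute names follow the sample XML entry, but the function names are ours):

```python
# Hypothetical sketch of the exploit outcome selection: walk the
# <results> entries in order, drawing a random number for each
# positive probability; the first draw below its chance wins.
import random
import xml.etree.ElementTree as ET

SAMPLE = """
<exploit id="sample exploit">
  <results>
    <agent chance="0.83" />
    <crash chance="0.05" what="os" />
    <reset chance="0.00" what="os" />
    <crash chance="0.00" what="application" />
    <reset chance="0.00" what="application" />
  </results>
</exploit>
"""

def exploit_result(exploit_xml: str, rng=random.random) -> str:
    """Return 'agent', 'crash os', 'reset os', ... or 'no visible result'."""
    results = ET.fromstring(exploit_xml).find("results")
    for entry in results:
        chance = float(entry.get("chance"))
        action = (entry.tag + " " + entry.get("what", "")).strip()
        if chance > 0 and rng() < chance:
            return action
    return "no visible result"

print(exploit_result(SAMPLE))  # 'agent' with 83% probability
```

With this entry the possible outcomes are 'agent', 'crash os', or 'no visible result', since the remaining entries have zero probability and are never drawn.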
The testing framework has been designed to improve testing time and coverage, and we have modified it in order to collect statistical information on the exploitation test results. Scheduler The scheduler's main task is to assign the CPU resources to the different simulated actors (e.g., simulated machines and processes). The scheduling iterates over the machine-process-thread hierarchy as a tree (like a depth-first search), each machine running its processes in round-robin. In a similar way, running a process means giving all its threads the order to run until a system call is needed. Obviously, depending on the state of each thread, they run, change state or finish execution. The central issue is that threads execute system calls and then (if possible) continue their activity until they finish or another system call is required. Insight threads are simulated within real threads of the underlying OS. Simulated machines and processes all run within one or several working processes (running hundreds of threads), and all of them are coordinated by a unique scheduler process called the master process. Thanks to this architecture, there is a very low loss of performance due to context switching. File system In order to handle thousands of files without wasting huge disk space, the file system simulation is accomplished by mounting shared file repositories. We are going to refer to these repositories as template file systems. For example, all simulated Windows XP systems could share a file repository with the default installation provided by Microsoft. These shared templates have read permission only. Thus, if a virtual machine needs to change a file, the file is copied into the local file system of the given machine. This technique is well known as copy-on-write. The fundamental idea is to allow multiple callers to ask for resources which are initially indistinguishable, giving them pointers to the same resource. 
This sharing can be maintained until a caller tries to modify its copy of the resource, at which point a true private copy is created to prevent the changes from becoming visible to everyone else. All of this happens transparently to the callers. The primary advantage is that no private copy needs to be created if a caller never makes any modification. On the other hand, with the purpose of improving the simulator's performance, a file cache has been implemented: the simulator keeps the most recently accessed files (or blocks of files) in memory. In large-scale simulated scenarios, it is very common to have several machines doing the same task at (almost) the same time. If the data requested by these tasks are in the file system cache, the whole system performance improves, because fewer disk accesses are required, even in scenarios of hundreds or thousands of simulated machines. PERFORMANCE ANALYSIS To evaluate the performance of the simulator we ran a test including a scenario with an increasing number of complete LANs with 250 computers each, simultaneously emulated. The test only involves the execution of a network discovery on the complete LANs through a TCP connection to port 80. An original pen-testing module used for information gathering was executed with no modifications; using real, unmodified attack modules when possible was a design goal of the simulator.

Performance of the simulator:
LANs  Computers  Time (secs)  Syscalls/sec
1     250        80           356
2     500        173          236
3     750        305          175
4     1000       479          139

Table 2: Evolution of the system performance as the simulated scenario grows, running a network discovery module, connecting to a predefined port. This benchmark was run on a single Intel Pentium D 2.67Ghz, 1.43GB RAM. We can observe the decrease in system calls processed per second as we increase the number of simulated computers, since Insight was run on a single real computer with limited resources.
Nevertheless, the simulation is efficient because system calls are only executed on demand, as required by the connections of the module gathering information about the networks through TCP connections. APPLICATIONS We have created a playground to experiment with cyberattack scenarios which has several applications. The most important are: Data collection and visualization. Having the complete network scenario in one computer allows easy capture and logging of system calls and network traffic. This information is useful for analyzing and debugging real pen-test tools and their behavior in complex scenarios. Some efforts have been made to visualize attack pivoting and network information gathering using the platform presented. Pentest training. Our simulation tool is already being used in pentest courses. It provides reproducible scenarios, where students can practice the different steps of a pentest: information gathering, attack and penetrate, privilege escalation, local information gathering and pivoting. The simulation allows the student to grasp the essence of pivoting. Setting up a real laboratory where pivoting makes sense is an expensive task, whereas our tool requires only one computer per student (and in case of a network / computer crash, the simulation environment can be easily reset). Configuring new scenarios, with more machines or more complex topologies, is easy, as a scenario wizard is provided. In pentest classes with Insight, the teacher can check the logs to see if students used the right tools with the correct parameters. He can test the students' ability to plan, and see whether they performed unnecessary actions. The teacher can also identify their weaknesses as pentesters and plan new exercises to work on these. The students can be evaluated: success, performance, stealth and quality of reports can be measured. Worm Spreading Analysis.
The lightweight design of the platform, which allows the simulation of the socket/network behavior of thousands of computers, gives a good framework for research on worm infestation and spreading. It should be possible to develop very accurate applications to mimic worm behavior using the Insight C programming API. Abstract modeling [7] and high-fidelity discrete event [27] studies are available, but no system-call-level recreation of attacks like the one we propose in this future application of the platform. Attack Planning. The platform can be used as a flexible environment to develop and test attack planning algorithms used in automated penetration testing based on attack graphs [12]. Analysis of countermeasures. Duplicating the production configuration on a simulated staging environment that accurately mimics or mirrors the security aspects of an organization's network allows the anticipation of software/hardware changes and their impact on security. For example, you can answer questions like "Will the network avoid attack vector A if firewall rule R is added to the complex rule set S of firewall F?" Impact of 0-day vulnerabilities. The simulator can be used to study the impact of 0-days (vulnerabilities that have not been publicly disclosed) in your network. How is that possible? We do not know current 0-days... but we can model the existence of 0-day vulnerabilities based on statistics. In our security model, the specific details of the vulnerability are not needed to study the impact on the network, just that it may exist with a measurable probability. That information can be gathered from public vulnerability databases: the discovery date, exploit date, disclosure date and patch date are found in several public databases of vulnerabilities and exploits [6,23,22,11]. The risk of a 0-day vulnerability is given by the probability of an attacker discovering and exploiting it.
Although we do not have data about the security underground, the probabilities given by public information are a lower-bound indicator. As shown in [10], the risk posed by a vulnerability exists before the discovery date, and grows as an exploit is made available and when the vulnerability is disclosed. The risk only diminishes as a patch becomes available and users apply the patches (and workarounds). The probability of discovery, and the probability of an exploit being developed, can be estimated as a function of the time before disclosure (see Fig. 5, taken from [10]). For Microsoft products, we have visibility of upcoming disclosures of vulnerabilities: every month (on patch Tuesday) on average 9.40 patches are released (high and medium risk); based on those dates we estimate the probability that the vulnerabilities were discovered and exploited during the months before disclosure. CONCLUSION We have created a playground to experiment with cyberattack scenarios. The framework is based on a probabilistic attack model; that model is also used by attack planning tools developed in our lab. By making use of the proxy syscalls technology, and simulating multiplatform agents, we were able to implement a simulation that is both realistic and lightweight, allowing the simulation of networks with thousands of hosts. The framework provides a global view of the scenarios. It is centered on the attacker's point of view, and designed to increase the size and complexity of simulated scenarios, while remaining realistic for the attacker. The value of this framework is given by its multiple applications:
• Evaluate network security
If you are interested in using Insight, send us an email. We are trying to build a community using it as a common language for discussing information security scenarios and practices, and will strongly support new applications of this tool.
5,139
1006.1919
2952193499
We introduce a new simulation platform called Insight, created to design and simulate cyber-attacks against large arbitrary target scenarios. Insight has surprisingly low hardware and configuration requirements, while making the simulation a realistic experience from the attacker's standpoint. The scenarios include a crowd of simulated actors: network devices, hardware devices, software applications, protocols, users, etc. A novel characteristic of this tool is to simulate vulnerabilities (including 0-days) and exploits, allowing an attacker to compromise machines and use them as pivoting stones to continue the attack. A user can test and modify complex scenarios, with several interconnected networks, where the attacker has no initial connectivity with the objective of the attack. We give a concise description of this new technology, and its possible uses in the security research field, such as pentesting training, study of the impact of 0-day vulnerabilities, evaluation of security countermeasures, and risk assessment.
The Potemkin honeyfarm @cite_17 is another interesting prototype. It improves high-fidelity honeypot scalability by up to six times while still closely emulating the execution behavior of individual Internet hosts. Potemkin uses quite sophisticated on-demand techniques for instantiating hosts, including file system optimizations also implemented in Insight (as we will see in §5.5), but this approach focuses on attracting real attacks, and it shows the same honeypot limitations in reaching this goal. As an example, to capture e-mail viruses, a honeypot must possess an e-mail address, must be scripted to read mail (executing attachments like a naive user) and, most critically, real users must be influenced to add the honeypot to their address books. Passive malware (e.g., many spyware applications) may require a honeypot to generate explicit requests, and focused malware (e.g., targeting only financial institutions) may carefully select its victims and never touch a large-scale honeyfarm. In each of these cases there are partial solutions, and they require careful engineering to truly mimic the target environment.
{ "abstract": [ "A honeypot is a closely monitored network decoy serving several purposes: it can distract adversaries from more valuable machines on a network, provide early warning about new attack and exploitation trends, or allow in-depth examination of adversaries during and after exploitation of a honeypot. Deploying a physical honeypot is often time intensive and expensive as different operating systems require specialized hardware and every honeypot requires its own physical system. This paper presents Honeyd, a framework for virtual honeypots that simulates virtual computer systems at the network level. The simulated computer systems appear to run on unallocated network addresses. To deceive network fingerprinting tools, Honeyd simulates the networking stack of different operating systems and can provide arbitrary routing topologies and services for an arbitrary number of virtual systems. This paper discusses Honeyd's design and shows how the Honeyd framework helps in many areas of system security, e.g. detecting and disabling worms, distracting adversaries, or preventing the spread of spam email." ], "cite_N": [ "@cite_17" ], "mid": [ "1514368868" ] }
Simulating Cyber-Attacks for Fun and Profit
Computer security has become a necessity in most of today's computer uses and practices; however, it is a wide topic and security issues can arise from almost everywhere: binary flaws (e.g., buffer overflows [17]), Web flaws (e.g., SQL injection, remote file inclusion), protocol flaws (e.g., TCP/IP flaws [3]), not to mention hardware, human, cryptographic and other well-known flaws. Although it may seem obvious, it is useless to secure a network with a hundred firewalls if the computers behind it are vulnerable to client-side attacks. The protection provided by an Intrusion Detection System (IDS) is worthless against new vulnerabilities and 0-day attacks. As networks have grown in size, they implement a wider variety of more complex configurations and include new devices (e.g. embedded devices) and technologies. This has created new flows of information and control, and therefore new attack vectors. As a result, the job of both black hat and white hat communities has become more difficult and challenging. The previous examples are just the tip of the iceberg: computer security is a complex field and it has to be approached with a global view, considering the whole picture simultaneously: network devices, hardware devices, software applications, protocols, users, etcetera. With that goal in mind, we are going to introduce a new simulation platform called Insight, which has been created to design and simulate cyberattacks against arbitrary target scenarios. In practice, the simulation of complex networks requires resolving the tension between the scalability and accuracy of the simulated subsystems, devices and data. This is a complex issue, and to find a satisfying solution for this trade-off we have adopted the following design restrictions:
1. Our goal is to have a simulator on a single desktop computer, running hundreds of simulated machines, with simulated traffic that is realistic only from the attacker's standpoint.
2.
Attacks within the simulator are not launched by real attackers in the wild (e.g. script kiddies, worms, black hats). As a consequence, the simulation does not have to handle exploiting details such as stack overflows or heap overflows. Instead, attacks are executed from an attack framework by Insight users who know they are playing in a simulated environment. To demonstrate our approach, Insight introduces a platform for executing attack experiments and tools for constructing these attacks. By providing this ability, we show that its users are able to design and adapt attack-related technologies, and have better tests to assess their quality. Attacks are executed from an attack framework which includes many information gathering and exploitation modules. Modules can be scripted, modified or even added. One of the major Insight features is the capability to simulate exploits. An exploit is a piece of code that attempts to compromise a computer system via a specific vulnerability. There are many ways to exploit security holes. If a computer programmer makes a programming mistake in a computer program, it is sometimes possible to circumvent security. Some common exploiting techniques are stack exploits, heap exploits, format string exploits, etc. Simulating these techniques in detail is very expensive. The main problem is to maintain the complete state (e.g., memory, stack, heap, CPU registers) for every simulated machine. From the attacker's point of view, an exploit can be modeled as a magic string sent to a target machine to unleash a hidden feature (e.g., reading files remotely) with a probabilistic result. This is a lightweight approach, and we have sacrificed some of the realism in order to support very large and complex scenarios. For example, 1,000 virtual machines and network devices (e.g., hubs, switches, IDS, firewalls) can be simulated on a single Windows desktop, each one running its own simulated OS, applications, vulnerabilities and file systems.
Certainly, taking into account available technologies, it is not feasible to use a complete virtualization server (e.g., VMware) running thousands of images simultaneously. As a result, the main design concept of our implementation is to focus on the attacker's point of view, and to simulate on demand. In particular, the simulator only generates information as requested by the attacker. By performing this on-demand processing, the main performance bottleneck comes from the ability of the attacker to request information from the scenario. Therefore, it is not necessary, for example, to simulate the complete TCP/IP packet traffic over the network if nobody is requesting that information. A more lightweight approach is to send data between network sockets by writing in the memory address space of the peer socket, leaving the full packet simulation as an option. INSIGHT APPROACH & OVERVIEW A diagram of the Insight general architecture is shown in Fig. 1. The Simulator subsystem is the main component. It performs all simulation tasks on the simulated machines, such as system call execution, memory management, interrupts, device I/O management, etcetera. At least one Simulator subsystem is required, but the architecture allows several ones, each running on a real computer (e.g., a Windows desktop). In this example, there are two simulation subsystems, but more could be added in order to support more virtual hosts. The simulation proceeds in a lightweight fashion. This means, for example, that not all system calls of all OS are supported by the simulation. Instead of implementing the whole universe of system calls, Insight handles a reduced and generic set of system calls, shared by all the simulated OS. Using this approach, a specific OS system call is mapped to an Insight syscall which works similarly to the original one.
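The mapping of OS-specific syscalls onto a reduced generic set can be pictured with a small dispatch-table sketch. All names here (the registry, the alias table, the `sock_send` call) are illustrative assumptions, not Insight's real API; the point is only that many native entry points funnel into one generic handler.

```python
# Registry of generic Insight-style syscalls (names hypothetical).
GENERIC_SYSCALLS = {}

def insight_syscall(name):
    """Decorator registering a function as a generic simulated syscall."""
    def register(fn):
        GENERIC_SYSCALLS[name] = fn
        return fn
    return register

@insight_syscall("sock_send")
def sock_send(machine, fd, data):
    # A stand-in for the simulated send: report what would happen.
    return f"{machine}: sent {len(data)} bytes on fd {fd}"

# Per-OS aliases map native syscall names onto the generic set.
OS_ALIASES = {
    ("windows", "WSASend"): "sock_send",
    ("linux", "send"): "sock_send",
}

def dispatch(os_name, native_name, machine, *args):
    """Resolve a native syscall to its generic handler and run it."""
    generic = OS_ALIASES[(os_name, native_name)]
    return GENERIC_SYSCALLS[generic](machine, *args)

print(dispatch("windows", "WSASend", "xp-01", 4, b"GET /"))
# xp-01: sent 5 bytes on fd 4
```

Both the Windows and the Linux native names resolve to the same simulated implementation, which is the essence of the reduced syscall set described above.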
For example, the Windows sockets API is based on the Berkeley sockets API model used in Berkeley UNIX, but the two implementations are slightly different. Similarly, there are some instances where Insight sockets have to diverge from strict adherence to the Berkeley conventions, usually due to implementation difficulties in the simulated environment. In spite of this (and ignoring the differences between OS), all socket system calls of the real world have been mapped to this unique simulated API. Of course, there are some system calls and management tasks closely related to the underlying OS which were not fully supported, such as the UNIX fork and signal syscalls, or the complete set of functions implemented by the Windows SDK. There is a trade-off between precision and efficiency, and the decision of which syscalls to implement was made with the objective of maintaining the precision of the simulation from the attacker's standpoint. The exploitation of binary vulnerabilities is simulated with a probabilistic approach, keeping the attack model simple and lightweight, and avoiding tracking anomalous conditions (and their countermeasures), such as buffer overflows, format string vulnerabilities and exception handler overwriting, among other well-known vulnerabilities [1]. This probabilistic approach allows us to mimic the unpredictable behavior when an exploit is launched against a targeted machine. Let us assume that a simulated computer was initialized with an underlying vulnerability (e.g. it hosts a vulnerable OS). In this case, the exploit payload is replaced by a special ID or "magic string", which is sent to the attacked application using a preexistent TCP communication channel. When the attacked application receives this ID, Insight will decide if the exploit worked or not based on a probability distribution that depends on the exploit and the properties describing the simulated computer (e.g., OS, patches, open services).
If the exploit is successful, then Insight will grant control over the target computer through the agent abstraction, which will be described in §4. The probabilistic attack model is implemented by the Simulator subsystems, and it is supported by the Exploits Database, a special configuration file which stores the information related to the vulnerabilities. This file has an XML tree structure, and each entry holds all the information needed by the simulator to compute the probabilistic behavior of a given simulated exploit. For example, a given exploit succeeds against a clean XP SP2 with 83% probability if port 21 is open, but crashes the system if it is an SP1. We are going to spend some time on the probability distribution, how to populate the exploits database, and the Insight attack model in the next sections. Returning to the architecture layout shown in Fig. 1, all Simulator subsystems are coordinated by a unique Simulator Monitor, which deals with management and administrative operations (such as starting/stopping a simulator instance) and provides statistical information on their usage and performance. A set of Configuration Files defines the snapshot of a virtual Scenario. Similarly, a scenario snapshot defines the instantaneous status of the simulation, and involves a crowd of simulated actors: servers, workstations, applications, network devices (e.g. firewalls, routers or hubs) and their present status. Even users can be simulated using this approach, and this is especially interesting in client-side attack simulation, where we expect some careless users to open our crafted e-mails. Finally, at the right bottom of the architecture diagram, we can see the Penetration Testing Framework, an external system which interacts with the simulated scenario in real time, sending system call requests through a communication channel implemented by the simulator.
This attack framework is a free, tailored version of the Impact solution (available from http://trials.coresecurity.com/); however, other attack tools are planned to be supported in the future (e.g., Metasploit [16]). Insight supports the simulation of binary vulnerabilities; other kinds of vulnerabilities (e.g. client-side and SQL injections) will be implemented in future versions. The attacker actions are coded as Impact script files (using Python) called modules, which have been implemented using the attack framework SDK, as shown in the architecture diagram. The framework's Python modules include several tools for common tasks (e.g. information gathering, exploits, importing scenarios). The attacks are executed in real time against a given simulated scenario; a simulation component can provide scenarios of thousands of computers with arbitrary configurations and topologies. Insight users can design new scenarios, and they have scripts to manage the creation and modification of the simulated components, and can therefore iterate, import and reproduce cyber-attack experiments. THE SIMULATED ATTACK MODEL One of the characteristics that distinguish the scenarios simulated by Insight is the ability to compromise machines, and use them as pivoting stones to build complex multi-step attacks. To compromise a machine means to install an agent that will be able to execute arbitrary system calls (syscalls) as a user of this system. The agent architecture is based on the solution called syscall proxy (see [5] for more details). The idea of syscall proxying is to build a sort of universal payload that allows an attacker to execute any system call on a compromised host. By installing a small payload (a thin syscall server) on a vulnerable machine, the attacker will be able to execute complex applications on his local host, with all system calls executed remotely. This syscall server is called an agent.
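The syscall-proxying idea can be sketched in a few lines. This is a minimal illustration under our own assumptions (JSON over an in-memory channel, a single `read_file` syscall); the real agent protocol and marshalling format are not specified here, and all class and method names are hypothetical.

```python
import json

class Agent:
    """Thin syscall server running on the compromised (simulated) machine."""
    def __init__(self, files):
        self.files = files  # toy simulated file system: path -> contents

    def handle(self, raw_request):
        # Unmarshal the request, execute the syscall, marshal the result.
        req = json.loads(raw_request)
        if req["syscall"] == "read_file":
            return json.dumps({"ok": True, "data": self.files.get(req["path"], "")})
        return json.dumps({"ok": False, "error": "unsupported syscall"})

class SyscallClient:
    """Runs inside the attacker's framework; forwards syscalls to the agent."""
    def __init__(self, agent):
        self.agent = agent  # stands in for the network transport

    def read_file(self, path):
        request = json.dumps({"syscall": "read_file", "path": path})
        reply = json.loads(self.agent.handle(request))
        return reply["data"] if reply["ok"] else None

agent = Agent({"/etc/passwd": "root:x:0:0"})
client = SyscallClient(agent)
print(client.read_file("/etc/passwd"))  # root:x:0:0
```

The attacker-side process calls `read_file` as if it were local, while the execution happens on the remote agent, which is exactly the two-layer split described next.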
In the Insight attack model, the use of syscall proxying introduces two additional layers between a process run by the attacker and the compromised OS. These layers are the syscall client layer and the syscall server layer. The syscall client layer runs on the attacker's Penetration Testing Framework. It acts as a link between the process running on the attacker's machine and the system services on a remote host simulated by Insight. This layer is responsible for forwarding each syscall argument and generating a proper request that the agent can understand. It is also responsible for sending this request to the agent and sending back the results to the calling process. The syscall server layer (i.e. the agent that runs on the simulated system) receives requests from the syscall client to execute specific syscalls using the OS services. After the syscall finishes, its results are marshalled and sent back to the client. Probabilistic exploits In the simulator security model, a vulnerability is a mechanism used to access an otherwise restricted communication channel. In this model, a real exploit payload is replaced by an ID or "magic string" which is sent to a simulated application. If this application is defined to be vulnerable (and some other requirements are fulfilled), then an agent will be installed in the computer hosting the vulnerable application. The simulated exploit payload includes the aforementioned magic string. When the Simulator subsystem receives this information, it looks up the string in the Exploits Database. If it is found, then the simulator will decide if the exploit worked or not, and with what effect, based on a probability distribution that depends on the effective scenario information of that computer and the specific exploit. Suppose, for example, that the Penetration Testing Framework assumes (wrongly) that the attacked machine is running Red Hat Linux 8.0, but that machine is in fact a Windows system.
In this hypothetical situation, the exploit would fail with 100% probability. On the other side, if the attacked machine is effectively running an affected version of Red Hat Linux 9.0, then the probability of success could be 75%, or as determined in the exploits database. Remote attack model overview In Fig. 2 we can see the sequence of events which occurs when an attacker launches a remote exploit against a simulated machine. The rectangles at the top are the principal components involved: the Penetration Testing Framework, the Simulator and the Exploits Database are the subsystems explained in Fig. 1. When an exploit is launched against a service running in a simulated machine, a connection is established between the Penetration Testing Framework and the service. Then, the simulated exploit payload is sent to the application. The targeted application reads the payload by running the system call read. Every time the syscall read is invoked, the Simulator subsystem analyzes whether a magic string is present in the data which has just been read. When a magic string is detected, the Simulator searches for it in the Exploits Database. If the exploit is found, a new agent is installed in the compromised machine. The exploit payload also includes the information that the Penetration Testing Framework knows about the attacked machine: OS version, system architecture, service packs, etcetera. All this information is used to compute the probabilistic function and allows the Simulator to decide whether the exploit should succeed or not. Local attack model overview Insight can also simulate local attacks: if an attacker gains control over a machine but does not have enough privileges to complete a specific action, a local attack can deploy a new agent with higher privileges. In Fig. 3 we can see the sequence of events which occurs when a local attack is launched against a given machine.
A running agent has to be present in the targeted machine in order to launch a local exploit. All local simulated attacks are executed by the Simulator subsystem identically: the Penetration Testing Framework writes the exploit magic string into the agent's standard input, using the write system call, and the Simulator will eventually detect the magic string by intercepting that system call. In a similar way as in the previous example, the exploit magic string is searched for in the database and a new agent (with higher privileges) is installed with some probability. DETAILED DESCRIPTION One of the most challenging issues in the Insight architecture is to resolve the tension between realism and performance. The goal was to have a simulator on a single desktop computer, running hundreds of simulated machines, with simulated traffic that is realistic from a penetration test point of view. There is a trade-off between realism and performance, and we are going to discuss some of these problems and other architecture details in the following sections. The Insight development library New applications can be developed for the simulation platform using a minimal C standard library, a standardized collection of header files and library routines used to implement common operations such as input, output and string handling in the C programming language. This library, a partial libc, implements the most common functions (e.g., read, write, open), allowing any developer to implement his own services with the usual compilers and development tools (e.g., gcc, g++, MS Visual Studio). For example, a web server could be implemented, linked with the provided libc and plugged into the Insight simulated scenarios. The provided libc supports the most common system calls, but it is still incomplete and we were unable to compile complex open source applications.
In spite of this, some services (e.g., a small DNS) and network tools (e.g., ipconfig, netstat) have been included in the simulation platform, and new system calls are planned to be supported in the future. Simulating sockets A hierarchy for file descriptors has been developed as shown in Fig. 4. File descriptors can refer (but are not limited) to files, directories, sockets, or pipes. At the top of the hierarchy, the tree root shows the descriptor object, which typically provides the operations for reading and writing data, closing and duplicating file descriptors, among other generic system calls. The simulated sockets implementation spans two kinds of supported socket subclasses:
1. SocketDirect. This variety of sockets is optimized for the simulation in one computer. Socket direct is fast: as soon as a connection is established, the client keeps a file descriptor pointing directly to the server's descriptor. Routing is only executed during the connection and the protocol control blocks (PCBs) are created as expected, but they are only used during connection establishment. Reading and writing operations between direct sockets are carried out using shared memory. Since both sockets can access the shared memory area like regular working memory, this is a very fast way of communication.
2. SocketReal. In some particular cases, we are interested in having full socket functionality. For example, the communication between Insight and the outside world is made using real sockets. As a result, this socket subclass wraps a real BSD socket of the underlying OS.
Support for routing and stateless firewalling was also implemented, supporting the simulation of attack payloads that connect back to the attacker, accept connections from the attacker or reuse the attack connection. The exploits database When an exploit is launched, Insight has to decide whether the attack is successful or not depending on the environment conditions.
For example, an exploit can require a specific service pack installed on the target machine in order to be successful, a specific library loaded in memory, or a particular open port, among other requirements. All these conditions vary over time, and they are basically unpredictable from the attacker's standpoint. As a result, the behavior of a given exploit has been modeled using a probabilistic approach. In order to determine the resulting behavior of the attack, Insight uses the Exploits Database shown in the architecture layout of Fig. 1. It has an XML tree structure. For example, if an exploit succeeds against a clean XP professional SP2 with 83% probability, or otherwise crashes the machine with 5% probability, this could be expressed as follows:

<database>
  <exploit id="sample exploit">
    <requirement type="system">
      <os arch="i386" name="windows" />
      <win>XP</win>
      <edition>professional</edition>
      <servicepack>2</servicepack>
    </requirement>
    <results>
      <agent chance="0.83" />
      <crash chance="0.05" what="os" />
      <reset chance="0.00" what="os" />
      <crash chance="0.00" what="application" />
      <reset chance="0.00" what="application" />
    </results>
  </exploit>
  <exploit> ... </exploit>
  <exploit> ... </exploit>
  ...
</database>

The conditions needed to install a new agent are described in the requirements section. It is possible to use several tags in this section; they specify the conditions which have influence on the execution of the exploit (e.g., OS required, a specific application running, an open port). The results section is a list of the relevant probabilities. In order, these are the chance of:
1. successfully installing an agent,
2. crashing the target machine,
3. resetting the target machine,
4. crashing the target application,
5. and resetting the target application.
To determine the result, we follow this procedure: processing the lines in order, for each positive probability, choose a random value between 0 and 1.
If the value is smaller than the chance attribute, the corresponding action is the result of the exploit. In this example, we draw a random number to see if an agent is installed. If the value is smaller than 0.83, an agent is installed and the execution of the exploit is finished. Otherwise, we draw a second number to see if the OS crashes. If this value is smaller than 0.05, the OS crashes and the attacked machine becomes useless; otherwise there is no visible result. Other possible results could be: raising an IDS alarm, writing a log entry in a network device (e.g., a firewall, IDS, or router), or capturing a session id, cookie, credential, or password. The exploits database allows us to model the probabilistic behavior of any exploit from the attacker's point of view, but how do we populate our database? A paranoid approach would be to assign a probability of success of 100% to every exploit. In that way, we would consider the case where an attacker can launch each exploit as many times as he wants, and will finally compromise the target machine with 100% probability (assuming the attack does not crash the system). A more realistic approach is to use statistics from real networks. Currently we are using the framework presented by Marcelo Picorelli [18] to populate the probabilities in the exploits database. This framework was originally implemented to assess and improve the quality of real exploits in QA environments. It allows us to perform over 500 real exploitation tests daily on several running configurations, spanning different target operating systems with their own setups and applications, adding up to more than 160 OS configurations. In this context, a given exploit is executed against:
• All the available platforms
• All the available applications
All these tests are executed automatically using low-end hardware, VMware servers, OS images and snapshots.
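The result-selection procedure described earlier (process the outcomes in order; for each positive probability, draw a uniform random number) can be sketched in a few lines of Python. This is a minimal illustration: the outcome names and the `run_exploit` helper are ours, not Insight's actual API; the probabilities are taken from the sample XML.

```python
import random

# Ordered outcomes as they would be parsed from the <results> section of the
# sample exploit; names are illustrative, probabilities come from the XML.
OUTCOMES = [
    ("agent", 0.83),               # an agent is successfully installed
    ("crash_os", 0.05),            # the target OS crashes
    ("reset_os", 0.00),
    ("crash_application", 0.00),
    ("reset_application", 0.00),
]

def run_exploit(outcomes, rng=random.random):
    """Process outcomes in order; the first draw below its chance wins."""
    for name, chance in outcomes:
        if chance > 0 and rng() < chance:
            return name
    return "no_visible_result"
```

Passing a deterministic `rng` makes the behaviour reproducible, which is convenient when replaying attack scenarios.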
The testing framework has been designed to improve testing time and coverage, and we have modified it in order to collect statistical information on the exploitation test results.

Scheduler
The scheduler's main task is to assign the CPU resources to the different simulated actors (e.g., simulated machines and processes). The scheduling iterates over the machine-process-thread hierarchy as a tree (like a depth-first search), each machine running its processes in round-robin. In a similar way, running a process means giving each of its threads the order to run until a system call is needed. Obviously, depending on the state of each thread, they run, change state or finish execution. The central issue is that threads execute system calls and then (if possible) continue their activity until they finish or another system call is required. Insight threads are simulated within real threads of the underlying OS. Simulated machines and processes all run within one or several working processes (running hundreds of threads), and all of them are coordinated by a unique scheduler process called the master process. Thanks to this architecture, there is a very low loss of performance due to context switching.

File system
In order to handle thousands of files without wasting huge amounts of disk space, the file system simulation is accomplished by mounting shared file repositories. We refer to these repositories as template file systems. For example, all simulated Windows XP systems could share a file repository with the default installation provided by Microsoft. These shared templates have read permission only. Thus, if a virtual machine needs to change a file, the file is copied into the local file system of the given machine. This technique is well known as copy-on-write. The fundamental idea is to allow multiple callers to ask for resources which are initially indistinguishable, giving them pointers to the same resource.
This illusion can be maintained until a caller tries to modify its copy of the resource, at which point a true private copy is created to prevent the changes from becoming visible to everyone else. All of this happens transparently to the callers. The primary advantage is that no private copy needs to be created if a caller never makes any modification. In addition, to improve the simulator's performance, a file cache has been implemented: the simulator keeps the most recently accessed files (or blocks of files) in memory. In large-scale simulated scenarios, it is very common to have several machines doing the same task at (almost) the same time. If the data requested by these kinds of tasks are in the file system cache, the whole system performance improves, because fewer disk accesses are required, even in scenarios of hundreds or thousands of simulated machines.

PERFORMANCE ANALYSIS
To evaluate the performance of the simulator we ran a test including a scenario with an increasing number of complete LANs with 250 computers each, simultaneously emulated. The test only involves the execution of a network discovery on the complete LANs through a TCP connection to port 80. An original pen-testing module used for information gathering was executed with no modifications; using real, unmodified attack modules whenever possible was a design goal of the simulator.

Table 2: Evolution of the system performance as the simulated scenario grows, running a network discovery module, connecting to a predefined port. This benchmark was run on a single Intel Pentium D 2.67 GHz, 1.43 GB RAM.
LANs  Computers  Time (secs)  Syscalls/sec
1      250        80          356
2      500       173          236
3      750       305          175
4     1000       479          139

We can observe a decrease in system calls processed per second as we increase the number of simulated computers, since Insight was run on a single real computer with limited resources.
Nevertheless, the simulation is efficient because system calls are only required on demand by the connections of the module gathering the information of the networks through TCP connections.

APPLICATIONS
We have created a playground to experiment with cyberattack scenarios, which has several applications. The most important are:

Data collection and visualization. Having the complete network scenario in one computer allows an easy capture and log of system calls and network traffic. This information is useful for analyzing and debugging real pen-test tools and their behavior in complex scenarios. Some efforts have been made to visualize attack pivoting and network information gathering using the platform presented.

Pentest training. Our simulation tool is already being used in pentest courses. It provides reproducible scenarios, where students can practice the different steps of a pentest: information gathering, attack and penetrate, privilege escalation, local information gathering and pivoting. The simulation allows the student to grasp the essence of pivoting. Setting up a real laboratory where pivoting makes sense is an expensive task, whereas our tool requires only one computer per student (and in case of a network or computer crash, the simulation environment can easily be reset). Configuring new scenarios, with more machines or more complex topologies, is easy, as a scenario wizard is provided. In pentest classes with Insight, the teacher can check the logs to see if students used the right tools with the correct parameters. He can test the students' ability to plan and check that they did not perform unnecessary actions. The teacher can also identify their weaknesses as pentesters and plan new exercises to work on these. The students can be evaluated: success, performance, stealth and quality of reports can be measured.

Worm Spreading Analysis.
The lightweight design of the platform, which allows the simulation of the socket/network behavior of thousands of computers, gives a good framework for research on worm infestation and spreading. It should be possible to develop very accurate applications that mimic worm behavior using the Insight C programming API. Abstract modeling [7] and high-fidelity discrete-event [27] studies are available, but no system-call-level recreation of attacks such as we propose in this future application of the platform.

Attack Planning. The platform can be used as a flexible environment to develop and test attack planning algorithms used in automated penetration testing based on attack graphs [12].

Analysis of countermeasures. Duplicating the production configuration on a simulated staging environment that accurately mimics or mirrors the security aspects of an organization's network allows the anticipation of software/hardware changes and their impact on security. For example, you can answer questions like "Will the network avoid attack vector A if firewall rule R is added to the complex rule set S of firewall F?"

Impact of 0-day vulnerabilities. The simulator can be used to study the impact of 0-days (vulnerabilities that have not been publicly disclosed) in your network. How is that possible? We do not know current 0-days... but we can model the existence of 0-day vulnerabilities based on statistics. In our security model, the specific details of the vulnerability are not needed to study the impact on the network, just that it may exist with a measurable probability. That information can be gathered from public vulnerability databases: the discovery date, exploit date, disclosure date and patch date are found in several public databases of vulnerabilities and exploits [6, 23, 22, 11]. The risk of a 0-day vulnerability is given by the probability of an attacker discovering and exploiting it.
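This kind of estimate can be illustrated with a toy calculation. The per-vulnerability exploitation probability below is purely hypothetical; only the idea of combining monthly disclosure counts (roughly nine high/medium-risk patches per month for Microsoft products, as discussed in this section) with an exploitation probability comes from the text.

```python
# Toy 0-day risk estimate: if about nine high/medium-risk vulnerabilities
# are patched per month, and each had a (hypothetical) probability p of
# having been privately exploited before disclosure, the chance that at
# least one was usable as a 0-day is 1 - (1 - p)^n.
def prob_any_zero_day(vulns_per_month=9.40, p_exploited=0.05):
    n = round(vulns_per_month)   # expected number of vulnerabilities
    return 1 - (1 - p_exploited) ** n
```

With these made-up numbers the estimate is roughly 0.37, i.e., a sizeable chance that some vulnerability patched in a given month was a usable 0-day beforehand.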
Although we do not have data about the security underground, the probabilities given by public information are a lower-bound indicator. As shown in [10], the risk posed by a vulnerability exists before the discovery date, increases when an exploit is made available for the vulnerability, and increases again when the vulnerability is disclosed. The risk only diminishes as a patch becomes available and users apply the patches (and workarounds). The probability of discovery, and the probability of an exploit being developed, can be estimated as a function of the time before disclosure (see Fig. 5, taken from [10]). For Microsoft products, we have visibility of upcoming disclosures of vulnerabilities: every month (on patch Tuesday), on average 9.40 patches are released (high and medium risk); based on those dates we estimate the probability that the vulnerabilities were discovered and exploited during the months before disclosure.

CONCLUSION
We have created a playground to experiment with cyberattack scenarios. The framework is based on a probabilistic attack model; that model is also used by attack planning tools developed in our lab. By making use of the proxy syscalls technology, and simulating multiplatform agents, we were able to implement a simulation that is both realistic and lightweight, allowing the simulation of networks with thousands of hosts. The framework provides a global view of the scenarios. It is centered on the attacker's point of view, and designed to increase the size and complexity of simulated scenarios, while remaining realistic for the attacker. The value of this framework is given by its multiple applications:
• Evaluate network security
If you are interested in using Insight, send us an email. We are trying to build a community using it as a common language for discussing information security scenarios and practices, and will strongly support new applications of this tool.
5,139
1005.5462
2951076037
This paper provides a theoretical explanation of the clustering aspect of nonnegative matrix factorization (NMF). We prove that even without imposing orthogonality or sparsity constraints on the basis and/or coefficient matrix, NMF can still give clustering results, thus providing theoretical support for many works, e.g., [1] and [2], that show the superiority of the standard NMF as a clustering method.
@cite_10 provides a theoretical analysis of the equivalence between orthogonal NMF and @math -means clustering for both rectangular data matrices and symmetric matrices. However, as their proofs utilize the zero-gradient conditions, the hidden assumptions (setting the Lagrange multipliers to zero) are not made explicit there. Actually, it can easily be shown that their approach amounts to applying the KKT conditions to the unconstrained version of eq. . Thus there is no guarantee that minimizing eq. by using the zero-gradient conditions leads to a stationary point located in the nonnegative orthant, as required by the objective.
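To make the hidden assumption concrete, the KKT conditions for a standard NMF objective can be written out. This is a sketch: the concrete objective is elided above (the broken "eq." references), so the notation @math A \approx WH with @math W, H \ge 0 is assumed here.

```latex
% KKT conditions for  \min_{W,H \ge 0} J(W,H) = \|A - WH\|_F^2
% (notation assumed; the concrete objective is elided in the text above).
% With Lagrange multiplier matrices \Gamma, \Lambda \ge 0 for the
% constraints W \ge 0 and H \ge 0, stationarity reads
\nabla_W J = \Gamma , \qquad \nabla_H J = \Lambda ,
% together with complementary slackness
\Gamma \odot W = 0 , \qquad \Lambda \odot H = 0 .
% Setting \Gamma = \Lambda = 0 recovers the plain zero-gradient conditions
% \nabla_W J = 0 and \nabla_H J = 0; this is exactly the hidden assumption:
% nothing then forces the resulting stationary point to satisfy the
% nonnegativity constraints in the KKT sense.
```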
{ "abstract": [ "Currently, most research on nonnegative matrix factorization (NMF) focuses on 2-factor @math factorization. We provide a systematic analysis of 3-factor @math NMF. While unconstrained 3-factor NMF is equivalent to unconstrained 2-factor NMF, constrained 3-factor NMF brings new features to constrained 2-factor NMF. We study the orthogonality constraint because it leads to rigorous clustering interpretation. We provide new rules for updating @math and prove the convergence of these algorithms. Experiments on 5 datasets and a real world case study are performed to show the capability of bi-orthogonal 3-factor NMF on simultaneously clustering rows and columns of the input data matrix. We provide a new approach of evaluating the quality of clustering on words using class aggregate distribution and multi-peak distribution. We also provide an overview of various NMF extensions and examine their relationships." ], "cite_N": [ "@cite_10" ], "mid": [ "2043545458" ] }
0
1005.1934
2950160647
Probabilistic databases play a crucial role in the management and understanding of uncertain data. However, incorporating probabilities into the semantics of incomplete databases has posed many challenges, forcing systems to sacrifice modeling power, scalability, or restrict the class of relational algebra formulas under which they are closed. We propose an alternative approach where the underlying relational database always represents a single world, and an external factor graph encodes a distribution over possible worlds; Markov chain Monte Carlo (MCMC) inference is then used to recover this uncertainty to a desired level of fidelity. Our approach allows the efficient evaluation of arbitrary queries over probabilistic databases with arbitrary dependencies expressed by graphical models with structure that changes during inference. MCMC sampling provides efficiency by hypothesizing modifications to possible worlds rather than generating entire worlds from scratch. Queries are then run over the portions of the world that change, avoiding the onerous cost of running full queries over each sampled world. A significant innovation of this work is the connection between MCMC sampling and materialized view maintenance techniques: we find empirically that using view maintenance techniques is several orders of magnitude faster than naively querying each sampled world. We also demonstrate our system's ability to answer relational queries with aggregation, and demonstrate additional scalability through the use of parallelization.
Because early theoretical work on incomplete data focuses largely on algebras and representation systems (e.g., @cite_29 @cite_11 ), it was only natural to extend this line of thinking to probabilities @cite_8 @cite_25 @cite_2 @cite_16 @cite_24 benjelloun06uldbs . However, this extension is quite difficult, since the probabilities in query results must include expressions derived from the confidence values originally embedded in the database. Systems meeting these theoretical conditions must overcome a set of challenges that are often met at the expense of modeling power or understandability.
{ "abstract": [ "It is often desirable to represent in a database, entities whose properties cannot be deterministically classified. The authors develop a data model that includes probabilities associated with the values of the attributes. The notion of missing probabilities is introduced for partially specified probability distributions. This model offers a richer descriptive language allowing the database to more accurately reflect the uncertain real world. Probabilistic analogs to the basic relational operators are defined and their correctness is studied. A set of operators that have no counterpart in conventional relational systems is presented. >", "We study the problem of null values. By this we mean that an attribute is applicable but its value at present is unknown and also that an attribute is applicable but its value is arbitrary. We adopt the view that tuples denote statements of predicate logic about database relations. Then, a null value of the first kind, respectively second kind, corresponds to an existentially quantified variable, respectively universally quantified variable. For instance if r is a database relation without null values and X is a range declaration for r then the tuple (a, ∀,b, ∃) ∈ R is intended to mean “there exists an x ∈ X such that for all y ∈ X: (a,y,b,x) ∈ r”. We extend basic operations of the well-known relational algebra to relations with null values. Using formal notions of correctness and completeness (adapted from predicate logic) we show that our extensions are meaningful and natural. Furthermore we reexamine the generalized join within our framework. 
Finally we investigate the algebraic structure of the class of relations with null values under a partial ordering which can be interpreted as a kind of logical implication.", "", "Note: Chapter 6 Reference EPFL-CHAPTER-167070 Record created on 2011-06-22, modified on 2017-05-12", "We discuss, compare and relate some old and some new models for incomplete and probabilistic databases. We characterize the expressive power of c-tables over infinite domains and we introduce a new kind of result, algebraic completion, for studying less expressive models. By viewing probabilistic models as incompleteness models with additional probability information, we define completeness and closure under query languages of general probabilistic database models and we introduce a new such model, probabilistic c-tables, that is shown to be complete and closed under the relational algebra.", "", "ABSTRACT This paper concerns the semantics of Codd's relational model of data. Formulated are precise conditions that should be satisfied in a semantically meaningful extension of the usual relational operators, such as projection, selection, union, and join, from operators on relations to operators on tables with “null values” of various kinds allowed. These conditions require that the system be safe in the sense that no incorrect conclusion is derivable by using a specified subset Ω of the relational operators; and that it be complete in the sense that all valid conclusions expressible by relational expressions using operators in Ω are in fact derivable in this system. Two such systems of practical interest are shown. The first, based on the usual Codd's null values, supports projection and selection. 
The second, based on many different (“marked”) null values or variables allowed to appear in a table, is shown to correctly support projection, positive selection (with no negation occurring in the selection condition), union, and renaming of attributes, which allows for processing arbitrary conjunctive queries. A very desirable property enjoyed by this system is that all relational operators on tables are performed in exactly the same way as in the case of the usual relations. A third system, mainly of theoretical interest, supporting projection, selection, union, join, and renaming, is also discussed. Under a so-called closed world assumption, it can also handle the operator of difference. It is based on a device called a conditional table and is crucial to the proof of the correctness of the second system. All systems considered allow for relational expressions containing arbitrarily many different relation symbols, and no form of the universal relation assumption is required. Categories and Subject Descriptors: H.2.3 [Database Management]: Languages— query languages; H.2.4 [Database Management]: Systems— query processing General Terms: Theory" ], "cite_N": [ "@cite_8", "@cite_29", "@cite_24", "@cite_2", "@cite_16", "@cite_25", "@cite_11" ], "mid": [ "2125791539", "1486310914", "", "643516821", "2165211504", "", "1990391007" ] }
0
1005.1934
2950160647
Probabilistic databases play a crucial role in the management and understanding of uncertain data. However, incorporating probabilities into the semantics of incomplete databases has posed many challenges, forcing systems to sacrifice modeling power, scalability, or restrict the class of relational algebra formulas under which they are closed. We propose an alternative approach where the underlying relational database always represents a single world, and an external factor graph encodes a distribution over possible worlds; Markov chain Monte Carlo (MCMC) inference is then used to recover this uncertainty to a desired level of fidelity. Our approach allows the efficient evaluation of arbitrary queries over probabilistic databases with arbitrary dependencies expressed by graphical models with structure that changes during inference. MCMC sampling provides efficiency by hypothesizing modifications to possible worlds rather than generating entire worlds from scratch. Queries are then run over the portions of the world that change, avoiding the onerous cost of running full queries over each sampled world. A significant innovation of this work is the connection between MCMC sampling and materialized view maintenance techniques: we find empirically that using view maintenance techniques is several orders of magnitude faster than naively querying each sampled world. We also demonstrate our system's ability to answer relational queries with aggregation, and demonstrate additional scalability through the use of parallelization.
Although there is a vast body of work on probabilistic databases, graphical models have largely been ignored until recently. The work of @cite_21 @cite_19 casts query evaluation as inference in a graphical model, and BayesStore @cite_17 makes explicit use of Bayesian networks to represent uncertainty in the database. While expressive, generative Bayesian networks have difficulty representing the types of dependencies handled automatically in discriminative models @cite_4 , motivating a database approach to linear-chain conditional random fields @cite_13 . We, however, present a more general representation based on factor graphs, an umbrella framework for both Bayesian networks and conditional random fields. Perhaps more importantly, we directly address the problem of scalable query evaluation in these representations, with an MCMC sampler, whereas previous systems based on graphical models are severely restricted by this bottleneck. Furthermore, our approach can easily evaluate any relational algebra query without the need to close the graphical model under the semantics of each operator.
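A minimal sketch of the approach described here, MCMC over possible worlds with incremental ("view-maintenance"-style) query updates, might look as follows. Everything in it is illustrative: the tuple-independent factor model, the single-tuple flip proposal, and the COUNT query are our simplifications, not the actual system.

```python
import math
import random

def mcmc_count(tuples, log_factor, predicate, steps=10_000):
    """Estimate the expected COUNT of tuples satisfying `predicate` under a
    tuple-independent factor graph: tuple t is present with log-odds
    log_factor(t). Each MCMC step proposes flipping one tuple in or out,
    and the COUNT 'view' is updated incrementally instead of re-running
    the query over the whole sampled world."""
    world = {t: False for t in tuples}   # one concrete possible world
    count = 0                            # materialized COUNT view
    total = 0.0
    for _ in range(steps):
        t = random.choice(tuples)        # propose a single-tuple change
        delta = log_factor(t) * (-1 if world[t] else 1)
        # Metropolis test: accept with probability min(1, exp(delta))
        if math.log(random.random() + 1e-300) < delta:
            world[t] = not world[t]
            if predicate(t):             # incremental view maintenance
                count += 1 if world[t] else -1
        total += count
    return total / steps
```

With log-odds 0, every tuple is present half the time in the stationary distribution, so the estimate converges to half the number of matching tuples.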
{ "abstract": [ "We present conditional random fields , a framework for building probabilistic models to segment and label sequence data. Conditional random fields offer several advantages over hidden Markov models and stochastic grammars for such tasks, including the ability to relax strong independence assumptions made in those models. Conditional random fields also avoid a fundamental limitation of maximum entropy Markov models (MEMMs) and other discriminative Markov models based on directed graphical models, which can be biased towards states with few successor states. We present iterative parameter estimation algorithms for conditional random fields and compare the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.", "Probabilistic databases have received considerable attention recently due to the need for storing uncertain data produced by many real world applications. The widespread use of probabilistic databases is hampered by two limitations: (1) current probabilistic databases make simplistic assumptions about the data (e.g., complete independence among tuples) that make it difficult to use them in applications that naturally produce correlated data, and (2) most probabilistic databases can only answer a restricted subset of the queries that can be expressed using traditional query languages. We address both these limitations by proposing a framework that can represent not only probabilistic tuples, but also correlations that may be present among them. Our proposed framework naturally lends itself to the possible world semantics thus preserving the precise query semantics extant in current probabilistic databases. We develop an efficient strategy for query evaluation over such probabilistic databases by casting the query processing problem as an inference problem in an appropriately constructed probabilistic graphical model. 
We present several optimizations specific to probabilistic databases that enable efficient query evaluation. We validate our approach by presenting an experimental evaluation that illustrates the effectiveness of our techniques at answering various queries using real and synthetic datasets.", "There has been a recent surge in work in probabilistic databases, propelled in large part by the huge increase in noisy data sources --- from sensor data, experimental data, data from uncurated sources, and many others. There is a growing need for database management systems that can efficiently represent and query such data. In this work, we show how data characteristics can be leveraged to make the query evaluation process more efficient. In particular, we exploit what we refer to as shared correlations where the same uncertainties and correlations occur repeatedly in the data. Shared correlations occur mainly due to two reasons: (1) Uncertainty and correlations usually come from general statistics and rarely vary on a tuple-to-tuple basis; (2) The query evaluation procedure itself tends to re-introduce the same correlations. Prior work has shown that the query evaluation problem on probabilistic databases is equivalent to a probabilistic inference problem on an appropriately constructed probabilistic graphical model (PGM). We leverage this by introducing a new data structure, called the random variable elimination graph (rv-elim graph) that can be built from the PGM obtained from query evaluation. We develop techniques based on bisimulation that can be used to compress the rv-elim graph exploiting the presence of shared correlations in the PGM, the compressed rv-elim graph can then be used to run inference. 
We validate our methods by evaluating them empirically and show that even with a few shared correlations significant speed-ups are possible.", "", "Several real-world applications need to effectively manage and reason about large amounts of data that are inherently uncertain. For instance, pervasive computing applications must constantly reason about volumes of noisy sensory readings for a variety of reasons, including motion prediction and human behavior modeling. Such probabilistic data analyses require sophisticated machine-learning tools that can effectively model the complex spatio temporal correlation patterns present in uncertain sensory data. Unfortunately, to date, most existing approaches to probabilistic database systems have relied on somewhat simplistic models of uncertainty that can be easily mapped onto existing relational architectures: Probabilistic information is typically associated with individual data tuples, with only limited or no support for effectively capturing and reasoning about complex data correlations. In this paper, we introduce BayesStore, a novel probabilistic data management architecture built on the principle of handling statistical models and probabilistic inference tools as first-class citizens of the database system. Adopting a machine-learning view, BAYESSTORE employs concise statistical relational models to effectively encode the correlation patterns between uncertain data, and promotes probabilistic inference and statistical model manipulation as part of the standard DBMS operator repertoire to support efficient and sound query processing. We present BAYESSTORE's uncertainty model based on a novel, first-order statistical model, and we redefine traditional query processing operators, to manipulate the data and the probabilistic models of the database in an efficient manner. 
Finally, we validate our approach, by demonstrating the value of exploiting data correlations during query processing, and by evaluating a number of optimizations which significantly accelerate query processing." ], "cite_N": [ "@cite_4", "@cite_21", "@cite_19", "@cite_13", "@cite_17" ], "mid": [ "2147880316", "2120825705", "2114242687", "", "2114157818" ] }
0
1004.4796
1900622699
We discuss a programming language for real-time audio signal processing that is embedded in the functional language Haskell and uses the Low-Level Virtual Machine as back-end. With that framework we can code with the comfort and type safety of Haskell while achieving maximum efficiency of fast inner loops and full vectorisation. This way Haskell becomes a valuable alternative to special purpose signal processing languages.
Our goal is to make use of the elegance of programming for signal processing. Our work is driven by the experience that, today, compiled code cannot compete with traditional signal processing packages written in C. There has been a lot of progress in recent years, most notably the improved support for arrays without overhead, the elimination of temporary arrays (fusion), and the Data-Parallel Haskell project @cite_21 that aims at utilising multiple cores of modern processors for array-oriented data processing. However, there is still a considerable gap in performance between idiomatic code and idiomatic C code. A recent development is an LLVM backend for GHC, the Glasgow Haskell Compiler, that adds all of the low-level optimisations of LLVM to GHC. However, we still need some tuning of the high-level optimisation and support for processor vector types in order to catch up with our EDSL method.
{ "abstract": [ "If you want to program a parallel computer, a purely functional language like Haskell is a promising starting point. Since the language is pure, it is by-default safe for parallel evaluation, whereas imperative languages are by-default unsafe. But that doesn't make it easy! Indeed it has proved quite difficult to get robust, scalable performance increases through parallel functional programming, especially as the number of processors increases. A particularly promising and well-studied approach to employing large numbers of processors is to use data parallelism. Blelloch's pioneering work on NESL showed that it was possible to combine a rather flexible programming model (nested data parallelism) with a fast, scalable execution model (flat data parallelism). In this talk I will describe Data Parallel Haskell, which embodies nested data parallelism in a modern, general-purpose language, implemented in a state-of-the-art compiler, GHC. I will focus particularly on the vectorisation transformation, which transforms nested to flat data parallelism, and I hope to present performance numbers." ], "cite_N": [ "@cite_21" ], "mid": [ "2163496769" ] }
0
1004.4796
1900622699
We discuss a programming language for real-time audio signal processing that is embedded in the functional language Haskell and uses the Low-Level Virtual Machine as back-end. With that framework we can code with the comfort and type safety of Haskell while achieving maximum efficiency of fast inner loops and full vectorisation. This way Haskell becomes a valuable alternative to special purpose signal processing languages.
Another special-purpose language is ChucK @cite_3 . Distinguishing features of ChucK are the generalisation to many different rates and the possibility of programming while the program is running, that is, while the sound is playing. As explained in internal-parameter , we can already cope with control signals at different rates; however, the management of sample rates in general could be better if it were integrated into our framework for physical dimensions. Since the systems Hugs and GHC both have a fine interactive mode, Haskell can in principle also be used for live coding. However, this still requires better support by LLVM (shared libraries) and by our implementation.
{ "abstract": [ "In this paper, we describe ChucK - a programming language and programming model for writing precisely timed, concurrent audio synthesis and multimedia programs. Precise concurrent audio programming has been an unsolved (and ill-defined) problem. ChucK provides a concurrent programming model that solves this problem and significantly enhances designing, developing, and reasoning about programs with complex audio timing. ChucK employs a novel data-driven timing mechanism and a related time-based synchronization model, both implemented in a virtual machine. We show how these features enable precise, concurrent audio programming and provide a high degree of programmability in writing real-time audio and multimedia programs. As an extension, programmers can use this model to write code on-the-fly -- while the program is running. These features provide a powerful programming tool for building and experimenting with complex audio synthesis and multimedia programs." ], "cite_N": [ "@cite_3" ], "mid": [ "2124470842" ] }
0
1004.4371
2950912279
We exhibit a strong connection between cover times of graphs, Gaussian processes, and Talagrand's theory of majorizing measures. In particular, we show that the cover time of any graph @math is equivalent, up to universal constants, to the square of the expected maximum of the Gaussian free field on @math , scaled by the number of edges in @math . This allows us to resolve a number of open questions. We give a deterministic polynomial-time algorithm that computes the cover time to within an O(1) factor for any graph, answering a question of Aldous and Fill (1994). We also positively resolve the blanket time conjectures of Winkler and Zuckerman (1996), showing that for any graph, the blanket and cover times are within an O(1) factor. The best previous approximation factor for both these problems was @math for @math -vertex graphs, due to Kahn, Kim, Lovasz, and Vu (2000).
A fundamental bound of Matthews @cite_46 shows that @math where we recall that @math is the expected hitting time from @math to @math . Using the straightforward lower bound @math , this fact provides a deterministic @math -approximation to @math in @math -node graphs.
{ "abstract": [ "On donne des bornes superieures et inferieures sur la fonction generatrice des moments du temps pris par une chaine de Markov pour visiter au moins n des N sous-ensembles selectionnes de son espace d'etats" ], "cite_N": [ "@cite_46" ], "mid": [ "2068008593" ] }
Cover times, blanket times, and majorizing measures
Let G = (V, E) be a finite, connected graph, and consider the simple random walk on G. Writing τ_cov for the first time at which every vertex of G has been visited, let E_v τ_cov denote the expectation of this quantity when the random walk is started at some vertex v ∈ V. The following fundamental parameter is known as the cover time of G: t_cov(G) = max_{v∈V} E_v τ_cov. (1) We refer to the books [2,36] and the survey [37] for relevant background material. We also recall the discrete Gaussian free field (GFF) on the graph G. This is a centered Gaussian process {η_v}_{v∈V} with η_{v_0} = 0 for some fixed v_0 ∈ V. The process is characterized by the relation E(η_u − η_v)² = R_eff(u, v) for all u, v ∈ V, where R_eff denotes the effective resistance on G. Equivalently, the covariances E(η_u η_v) are given by the Green kernel of the random walk killed at v_0. (We refer to Sections 1.2 and 1.3 for background on electrical networks and Gaussian processes.) The next theorem represents one of the primary connections put forward in this work. We use the notation ≍ to denote equivalence up to a universal constant factor, where {η_v}_{v∈V} is the Gaussian free field on G. The utility of such a characterization will become clear soon. Despite being an intensively studied parameter of graphs, a number of basic questions involving the cover time have remained open. We now highlight two of these, whose resolution we discuss subsequently. The blanket time. For a node v ∈ V, let π(v) = deg(v)/(2|E|) denote the stationary measure of the random walk, and let N_v(t) be a random variable denoting the number of times the random walk has visited v up to time t. Now define τ•_bl(δ) to be the first time t ≥ 1 at which N_v(t) ≥ δ t π(v) (2) holds for all v ∈ V. In other words, τ•_bl(δ) is the first time at which all nodes have been visited at least a δ fraction as much as we expect at stationarity. 
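The quantity in (1) is easy to estimate by simulation; the following is a minimal sketch (all names are illustrative, not from the paper), assuming the simple discrete-time random walk on an adjacency-list graph:

```python
import random

def cover_time_estimate(adj, start, trials=2000, seed=0):
    """Monte Carlo estimate of E_start[tau_cov] for the simple random walk."""
    rng = random.Random(seed)
    n = len(adj)
    total = 0
    for _ in range(trials):
        visited = {start}
        v = start
        steps = 0
        while len(visited) < n:
            v = rng.choice(adj[v])  # one step of the simple random walk
            visited.add(v)
            steps += 1
        total += steps
    return total / trials

# 4-cycle C_4: the exact cover time of the n-cycle is n(n-1)/2, here 6,
# from any starting vertex.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
est = cover_time_estimate(cycle4, 0)
```

With 2000 trials the estimate should land close to the exact value 6; of course, as the paper stresses, such randomized simulation is exactly what a deterministic approximation of t_cov must avoid.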
Using the same notation as in (1), define the δ-blanket time as t•_bl(G, δ) = max_{v∈V} E_v τ•_bl(δ). (3) Clearly for δ ∈ (0, 1), we have t•_bl(G, δ) ≥ t_cov(G). Winkler and Zuckerman [54] made the following conjecture. Conjecture 1.1. For every 0 < δ < 1, there exists a C such that for every graph G, one has t•_bl(G, δ) ≤ C · t_cov(G). In other words, for every fixed δ ∈ (0, 1), one has t_cov(G) ≍ t•_bl(G, δ). Kahn, Kim, Lovász, and Vu [30] showed that for every fixed δ ∈ (0, 1), one can take C ≍ (log log n)² for n-node graphs, but whether there is a universal constant, independent of n, remained open for every value of δ > 0. In order to bound t•_bl(G, δ), we introduce the following stronger notion. Let τ_bl(δ) be the first time t ≥ 1 such that for every u, v ∈ V, we have N_u(t)/π(u) ≥ δ · N_v(t)/π(v), i.e. the first time at which all the values {N_u(t)/π(u)}_{u∈V} are within a factor of δ of one another. As in [30], we define the strong δ-blanket time as t_bl(G, δ) = max_{v∈V} E_v τ_bl(δ). Clearly one has t•_bl(G, δ) ≤ t_bl(G, δ) for every δ ∈ (0, 1). The second question we highlight is computational in nature: is there a quantity A(G) which can be computed deterministically, in time polynomial in |V|, such that A(G) ≍ t_cov(G)? It is crucial that one asks for a deterministic procedure, since a randomized algorithm can simply simulate the chain, and output the empirical mean of the observed times at which the graph is first covered. This is guaranteed to produce an accurate estimate with high probability in polynomial time, since the mean and standard deviation of τ_cov are O(|V|³) [6]. A result of Matthews [43] can be used to produce a deterministically computable bound which is within a log |V| factor of t_cov(G). Subsequently, [30] showed how one could compute a bound which lies within an O((log log |V|)²) factor of the cover time. 
Before we state our main theorem and resolve the preceding questions, we briefly review the γ₂ functional from Talagrand's theory of majorizing measures [48,50]. Majorizing measures and Gaussian processes. Consider a compact metric space (X, d). Let M₀ = 1 and M_k = 2^{2^k} for k ≥ 1. For a partition P of X and an element x ∈ X, we will write P(x) for the unique S ∈ P containing x. An admissible sequence {A_k}_{k≥0} of partitions of X is such that A_{k+1} is a refinement of A_k for k ≥ 0, and |A_k| ≤ M_k for all k ≥ 0. Talagrand defines the functional γ₂(X, d) = inf sup_{x∈X} Σ_{k≥0} 2^{k/2} diam(A_k(x)), (4) where the infimum is over all admissible sequences {A_k}. Consider now a Gaussian process {η_i}_{i∈I} over some index set I. This is a stochastic process such that every finite linear combination of the random variables is normally distributed. For the purposes of the present paper, one may assume that I is finite. We will assume that all Gaussian processes are centered, i.e. E(η_i) = 0 for all i ∈ I. The index set I carries a natural metric which assigns, for i, j ∈ I, d(i, j) = √(E|η_i − η_j|²). (5) The following result constitutes a primary consequence of the majorizing measures theory. Theorem (MM) (Majorizing measures theorem [48]). For any centered Gaussian process {η_i}_{i∈I}, γ₂(I, d) ≍ E sup{η_i : i ∈ I}. We remark that the upper bound of the preceding theorem, i.e. E sup{η_i : i ∈ I} ≤ C γ₂(I, d) for some constant C, goes back to work of Fernique [24,25]. Fernique formulated this result in the language of measures (from whence the name "majorizing measures" arises), while the formulation of γ₂ given in (4) is due to Talagrand. The fact that the two notions are related is non-trivial; we refer to [50, §2] for a thorough discussion of the connection between them. Commute times, hitting times, and cover times. In order to relate the majorizing measures theory to cover times of graphs, we recall the following natural metric. 
For any two nodes u, v ∈ V, use H(u, v) to denote the expected hitting time from u to v, i.e. the expected time for a random walk started at u to hit v. The expected commute time between two nodes u, v ∈ V is then defined by κ(u, v) = H(u, v) + H(v, u). It is immediate that κ(u, v) is a metric on any finite, connected graph. A well-known fact [11] is that κ(u, v) = 2|E| R_eff(u, v), where R_eff(u, v) is the effective resistance between u and v, when G is considered as an electrical network with unit conductances on the edges. We now restate our main result in terms of majorizing measures. For a metric d, we write √d for the metric given by √d(u, v) = √(d(u, v)). Theorem 1.2 (Cover times, blanket times, and majorizing measures). For any graph G = (V, E) and any 0 < δ < 1, we have t_cov(G) ≍ γ₂(V, √κ)² ≍ |E| · γ₂(V, √R_eff)² ≍_δ t_bl(G, δ), where ≍_δ denotes equivalence up to a constant depending on δ. Clearly this yields a positive resolution to Conjecture 1.1. Moreover, we prove the preceding theorem in the setting of general finite-state reversible Markov chains. See Theorem 1.9 for a statement of our most general theorem. We now address some additional consequences of the main theorem. First, observe that by combining Theorem 1.2 with Theorem (MM), we obtain Theorem 1.1. Theorem 1.3 (Cover times and the Gaussian free field). For any graph G = (V, E) and any 0 < δ < 1, we have t_cov(G) ≍ |E| · (E max_{v∈V} η_v)² ≍_δ t_bl(G, δ), where {η_v} is the Gaussian free field on G. In fact, in Section 2.2, we exhibit the following strong asymptotic upper bound. Theorem 1.4. For every graph G = (V, E), if t_hit(G) denotes the maximal hitting time in G, and {η_v}_{v∈V} is the Gaussian free field on G, then t_cov(G) ≤ (1 + C √(t_hit(G)/t_cov(G))) · |E| · (E sup_{v∈V} η_v)², where C > 0 is a universal constant. In Section 3, we prove the following theorem which, in conjunction with Theorem 1.2, resolves Question 1.2. Theorem 1.5. Let (X, d) be a finite metric space, with n = |X|. 
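The commute-time identity κ(u, v) = 2|E| R_eff(u, v) can be checked numerically: effective resistances come from the pseudoinverse of the combinatorial Laplacian, and hitting times from a linear system (harmonic except at the target). A minimal sketch, with helper names of my own choosing:

```python
import numpy as np

def laplacian(adj):
    """Combinatorial Laplacian D - A of an adjacency-list graph."""
    n = len(adj)
    L = np.zeros((n, n))
    for u, nbrs in adj.items():
        L[u, u] = len(nbrs)
        for v in nbrs:
            L[u, v] -= 1
    return L

def effective_resistance(Lpinv, u, v):
    """R_eff(u, v) = (e_u - e_v)^T L^+ (e_u - e_v)."""
    return Lpinv[u, u] + Lpinv[v, v] - 2 * Lpinv[u, v]

def hitting_time(adj, target):
    """Solve H(x) = 1 + mean_{y ~ x} H(y), with H(target) = 0."""
    n = len(adj)
    idx = [x for x in range(n) if x != target]
    A = np.eye(n - 1)
    b = np.ones(n - 1)
    for i, x in enumerate(idx):
        for y in adj[x]:
            if y != target:
                A[i, idx.index(y)] -= 1.0 / len(adj[x])
    return dict(zip(idx, np.linalg.solve(A, b)))

# Path graph 0-1-2 (|E| = 2): kappa(0, 2) should equal 2 * |E| * R_eff(0, 2).
path = {0: [1], 1: [0, 2], 2: [1]}
Lp = np.linalg.pinv(laplacian(path))
kappa = hitting_time(path, 2)[0] + hitting_time(path, 0)[2]
reff = effective_resistance(Lp, 0, 2)
```

On the path, H(0, 2) = H(2, 0) = 4, so κ(0, 2) = 8, matching 2 · 2 · R_eff(0, 2) = 2 · 2 · 2.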
If, for any two points x, y ∈ X, one can deterministically compute d(x, y) in time polynomial in n, then one can deterministically compute a number A(X, d) in polynomial time, for which A(X, d) ≍ γ₂(X, d). A "comparison theorem" follows immediately from Theorem 1.2, and the fact that γ₂(X, d) ≤ L γ₂(X, d′) whenever d ≤ L d′ (see (4)). Theorem 1.6 (Comparison theorem for cover times). Suppose G and G′ are two graphs on the same set of nodes V, and κ_G and κ_{G′} are the distances induced by the respective commute times. If there exists a number L ≥ 1 such that κ_G(u, v) ≤ L · κ_{G′}(u, v) for all u, v ∈ V, then t_cov(G) ≤ O(L) · t_cov(G′). Finally, our work implies that there is an extremely simple randomized algorithm for computing the cover time of a graph, up to constant factors. To this end, consider a graph G = (V, E) whose vertex set we take to be V = {1, 2, . . . , n}. Let D be the diagonal degree matrix, i.e. such that D_ii = deg(i) and D_ij = 0 for i ≠ j, and let A be the adjacency matrix of G. We define the following normalized Laplacian, L_G = (D − A)/tr(D). Let L⁺_G denote the Moore-Penrose pseudoinverse of L_G. Note that both L_G and L⁺_G are positive semi-definite. We have the following characterization. Theorem 1.7. For any connected graph G, it holds that t_cov(G) ≍ E ||L⁺_G g||²_∞, where g = (g₁, . . . , g_n) is an n-dimensional Gaussian, i.e. such that {g_i} are i.i.d. N(0,1) random variables. The preceding theorem yields an O(n^ω)-time randomized algorithm for approximating t_cov(G), where ω ∈ [2, 2.376) is the best-possible exponent for matrix multiplication [13]. Using the linear-system solvers of Spielman and Teng [47] (see also [45]), along with ideas from Spielman and Srivastava [46], we present an algorithm that runs in near-linear time in the number of edges of G and outputs a number A(G) such that t_cov(G) ≍ E[A(G)] ≍ (E A(G)²)^{1/2}. 
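Theorem 1.7 suggests an immediate Monte Carlo estimator: form the normalized Laplacian (D − A)/tr(D), and average ||L⁺ g||_∞² over Gaussian samples g. The sketch below (names mine) computes this proxy, which by the theorem is only guaranteed to be within universal constant factors of t_cov:

```python
import numpy as np

def cover_time_proxy(adj, samples=4000, seed=0):
    """Monte Carlo estimate of E || L_G^+ g ||_inf^2 as in Theorem 1.7."""
    rng = np.random.default_rng(seed)
    n = len(adj)
    A = np.zeros((n, n))
    for u, nbrs in adj.items():
        for v in nbrs:
            A[u, v] = 1
    D = np.diag(A.sum(axis=1))
    Lg = (D - A) / np.trace(D)          # normalized Laplacian
    Lp = np.linalg.pinv(Lg)             # Moore-Penrose pseudoinverse
    g = rng.standard_normal((samples, n))
    vals = np.abs(g @ Lp.T).max(axis=1) ** 2   # ||L^+ g||_inf^2 per sample
    return vals.mean()

# Complete graph K_4: the exact cover time is (n-1) H_{n-1} = 5.5 steps,
# so the proxy should be of the same order (constants are not 1 here).
k4 = {u: [v for v in range(4) if v != u] for u in range(4)}
proxy = cover_time_proxy(k4)
```

Note that the proxy is a constant-factor surrogate, not the cover time itself; on K_4 it comes out within a small constant factor of the true value 5.5.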
Preliminaries To begin, we introduce some fundamental notions from random walks and electrical networks. Electrical networks and random walks. A network is a finite, undirected graph G = (V, E), together with a set of non-negative conductances {c xy : x, y ∈ V } supported exactly on the edges of G, i.e. c xy > 0 ⇐⇒ xy ∈ E. The conductances are symmetric so that c xy = c yx for all x, y ∈ V . We will write c x = y∈V c xy and C = x∈V c x for the total conductance. We will often use the notation G(V ) for a network on the vertex set V . In this case, the associated conductances are implicit. In the few cases when there are multiple networks under consideration simultaneously, we will use the notation c G xy to refer to the conductances in G. Associated to such a network is the canonical discrete time random walk on G, whose transition probabilities are given by p xy = c xy /c x for all x, y ∈ V . It is easy to see that this defines the transition matrix of a reversible Markov chain on V , and that every finite-state reversible Markov chain arises in this way (see [2, §3.2]). The stationary measure of a vertex is precisely π(x) = c x /C. Associated to such an electrical network are the classical quantities C eff , R eff : V × V → R 0 which are referred to, respectively, as the effective conductance and effective resistance between pairs of nodes. We refer to [36,Ch. 9] for a discussion of the connection between electrical networks and the corresponding random walk. For now, it is useful to keep in mind the following fact [11]: For any x, y ∈ V , R eff (x, y) = κ(x, y) C ,(9) where the commute time κ is defined as before (6). For convenience, we will work exclusively with continuous-time Markov chains, where the transition rates between nodes are given by the probabilities p xy from the discrete chain. One way to realize the continuous-time chain is by making jumps according to the discrete-time chain, where the times spent between jumps are i.i.d. 
exponential random variables with mean 1. We refer to these random variables as the holding times. See [2, Ch. 2] for background and relevant definitions. Cover times, local times, and blanket times. We will now define various stopping times for the continuous-time random walk. First, we observe that if τ ⋆ cov is the first time at which the continuous-time random walk has visited every node of G, then for every vertex v, E v τ ⋆ cov = E v τ cov , where we recall that the latter quantity refers to the discrete-time chain. Thus we may also define the cover time with respect to the continuous-time chain, i.e. t cov (G) = max v∈V E v τ ⋆ cov . In fact, it will be far more convenient to work with the cover and return time defined as follows. Let {X t } t∈[0,∞) be the continuous-time chain, and define τ cov = inf {t > τ ⋆ cov : X t = X 0 } .(10) For concreteness, we define the cover and return time of G as t cov (G) = max v∈V E v τ cov , but the following fact shows that the choice of initial vertex is not of great importance for us (see [2,Ch. 5,Lem. 25]), 1 2 t cov (G) t cov (G) t cov (G) 3 min v∈V E v τ cov .(11) For a vertex v ∈ V and time t, we define the local time L v t by L v t = 1 c v t 0 1 {Xs=v} ds ,(12) where we recall that c v = u∈V c uv . For δ ∈ (0, 1), we define τ ⋆ bl (δ) as the first time t > 0 at which min u,v∈V L u t L v t δ. Furthermore, the continuous-time strong δ-blanket time is defined to be t ⋆ bl (G, δ) = max v∈V E v τ ⋆ bl (δ).(13) Asymptotic notation. For expressions A and B, we will use the notation A B to denote that A C · B for some constant C > 0. If we wish to stress that the constant C depends on some parameter, e.g. C = C(p), we will use the notation A p B. We use A ≍ B to denote the conjunction A B and B A, and we use the notation A ≍ p B similarly. Outline We first state our main theorem in full generality. We use only the language of effective resistances, since this is most natural in the context to follow. Theorem 1.9. 
For any network G = (V, E) and any 0 < δ < 1, t_cov(G) ≍ C · γ₂(V, √R_eff)² ≍_δ t_bl(G, δ) ≍_δ t⋆_bl(G, δ), where C is the total conductance of G. We now present an overview of our main arguments, and lay out the organization of the paper. Hints of a connection. First, it may help the reader to have some intuition about why cover times should be connected to Gaussian processes and particularly the theory of majorizing measures. A first hint goes back to work of Aldous [3], where it is shown that the hitting times of Markov chains are approximately distributed as exponential random variables. It is well-known that an exponential variable can be represented as the sum of the squares of two Gaussians. Observing that the cover time is just the maximum of all the hitting times, one might hope that the cover time can be related to the maximum of a family of Gaussians. This point of view is strengthened by some quantitative similarities. Let {η_i}_{i∈I} be a centered Gaussian process, and let d(i, j) be the natural metric on I from (5). The following two lemmas are central to the proof of the majorizing measures theorem (Theorem (MM)). We refer to [35, 50] for their utility in the majorizing measures theory. The next lemma follows directly from the definition of the Gaussian density; see, for instance, [42, Lem. 5]. Lemma 1.10 (Gaussian concentration). For every i, j ∈ I and α > 0, P(η_i − η_j > α) ≤ exp(−α²/(2 d(i, j)²)). The next result can be found in [35, Thm. 3.18]. Lemma 1.11 (Sudakov minoration). For every α > 0, if I′ ⊆ I is such that i, j ∈ I′ and i ≠ j implies d(i, j) ≥ α, then E sup_{i∈I′} η_i ≳ α √(log |I′|). Now, let G = (V, E) be a network, and consider the associated continuous-time random walk {X_t} with local times L^v_t. We define also the inverse local times τ_v(t) = inf{s : L^v_s > t}. 
An analog of the following lemma was proved in [30] for the discrete-time chain; the continuous-time version can be similarly proved, though we will not do so here, as it will not be used in the arguments to come. In interpreting the next lemma, it helps to recall that L^u_{τ_u(t)} = t. Lemma 1.12 (Concentration for local times). For all u, v ∈ V and any α > 0 and t ≥ 0, we have P_u(L^u_{τ_u(t)} − L^v_{τ_u(t)} ≥ α) ≤ exp(−α²/(4 t R_eff(u, v))), where P_u denotes the measure for the random walk started at u. Thus local times satisfy sub-Gaussian concentration, where now the distance d is replaced by √(t · R_eff). On the other side, the classical bound of Matthews [43] provides an analog to Lemma 1.11. Lemma 1.13 (Matthews bound). For every α > 0, if V′ ⊆ V is such that u, v ∈ V′ and u ≠ v implies H(u, v) ≥ α, then t_cov(G) ≥ α log(|V′| − 1). Of course the similar structure of these lemmas offers no formal connection, but merely a hint that something deeper may be happening. We now discuss a far more concrete connection between local times and Gaussian processes. The isomorphism theorems. The distribution of the local times of a Borel right process can be fully characterized by certain associated Gaussian processes; results of this flavor go by the name of Isomorphism Theorems. Several versions have been developed by Ray [44] and Knight [33], Dynkin [18,17], Marcus and Rosen [40,41], Eisenbaum [19] and Eisenbaum, Kaspi, Marcus, Rosen and Shi [20]. In what follows, we present the second Ray-Knight theorem in the special case of a continuous-time random walk. It first appeared in [20]; see also Theorem 8.2.2 of the book by Marcus and Rosen [42] (which contains a wealth of information on the connection between local times and Gaussian processes). It is easy to verify that the continuous-time random walk on a connected graph is indeed a recurrent strongly symmetric Borel right process. Theorem 1.14 (Generalized Second Ray-Knight Isomorphism Theorem). 
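The Matthews bound is easy to exercise numerically: compute all-pairs expected hitting times by solving the harmonic system for each target, then evaluate the lower bound α log(|V′| − 1) (here with V′ = V), alongside the classical Matthews upper bound t_hit · H_n. A sketch with illustrative names:

```python
import math
import numpy as np

def hitting_times(adj):
    """All-pairs expected hitting times H[u, v] for the simple random walk."""
    n = len(adj)
    H = np.zeros((n, n))
    for target in range(n):
        idx = [x for x in range(n) if x != target]
        A = np.eye(n - 1)
        for i, x in enumerate(idx):
            for y in adj[x]:
                if y != target:
                    A[i, idx.index(y)] -= 1.0 / len(adj[x])
        sol = np.linalg.solve(A, np.ones(n - 1))
        for i, x in enumerate(idx):
            H[x, target] = sol[i]
    return H

# 4-cycle C_4, whose exact cover time is 6.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
H = hitting_times(cycle4)
n = 4
t_hit = H.max()                                   # maximal hitting time = 4
upper = t_hit * sum(1.0 / k for k in range(1, n + 1))   # Matthews upper bound
alpha = min(H[u, v] for u in range(n) for v in range(n) if u != v)
lower = alpha * math.log(n - 1)                   # Lemma 1.13 with V' = V
```

On C_4 this gives lower ≈ 3.30 and upper ≈ 8.33, sandwiching the true cover time 6; the log |V| gap between such bounds is exactly what the deterministic γ₂-based approximation removes.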
Fix v 0 ∈ V and define the inverse local time, τ (t) = inf{s : L v 0 s > t}.(14) Let T 0 be the hitting time to v 0 and let Γ v 0 (x, y) = E x (L y T 0 ). Denote by η = {η x : x ∈ V } a mean zero Gaussian process with covariance Γ v 0 (x, y). Let P v 0 and P η be the measures on the processes {L x T 0 } and {η x }, respectively. Then under the measure P v 0 × P η , for any t > 0 L x τ (t) + 1 2 η 2 x : x ∈ V law = 1 2 (η x + √ 2t) 2 : x ∈ V .(15) Thus to every continuous-time random walk, we can associate a Gaussian process {η v } v∈V . As discussed in Section 2.4, we have the relationship d(u, v) = R eff (u, v), where d(u, v) = E |η u − η v | 2 . In particular, the process {η v } v∈V is the Gaussian free field on the network G. Using the Isomorphism Theorem in conjunction with concentration bounds for Gaussian processes, we already have enough machinery to prove the following upper bound in Section 2.1, t cov (G) t bl (G, δ) δ C [γ 2 (V, d)] 2 = C γ 2 (V, R eff ) 2 .(16) We also show how to prove a matching lower bound in terms of γ 2 , but for a slightly different notion of "blanket time." Thus (16) proves the first half of Theorem 1.9. The lower bound for cover times quite a bit more difficult to prove. Of course, the cover and return time relates to the event ∃v : L v τ (t) = 0 , and unfortunately the correspondence (15) seems too coarse to provide lower bounds on the probability of this event directly. To this end, we need to show that for the right value of t in Theorem 1.14, we often have η x ≈ − √ 2t for some x ∈ V . The main difficulty is that we will have to show that there is often a vertex x ∈ V with |η x + √ 2t| being much smaller than the standard deviation of η x . In doing so, we will use the full power of the majorizing measures theory, as well as the special structure of the Gaussian processes arising from the Isomorphism Theorem. The discrete Gaussian free field and a tree-like subprocess. 
In Section 2.4 (see (35)), we recall that the Gaussian processes arising from the Isomorphism Theorem are not arbitrary, but correspond to the Gaussian free field (GFF) associated with G. Special properties of such processes will be essential to our proof of Theorem 1.9. In particular, if we use R eff (v, S) to denote the effective resistance between a point v and a set of vertices S ⊆ V , then we have the relationship R eff (v, S) = dist L 2 (η v , aff({η w } w∈S )),(17) where aff(·) denotes the affine hull, and dist L 2 is the L 2 distance in the Hilbert space underlying the process {η v } v∈V . In Section 2.3, we prove a number of properties of the effective resistance metric (e.g. Foster's network theorem); combined with (17), this yields some properties unique to processes arising from a GFF. Next, in Section 3, we recall that one of the primary components of the majorizing measures theory is that every Gaussian process {η i } i∈I contains a "tree like" subprocess which controls E sup i∈I η i . After a preprocessing step that ensures our trees have a number of additional features, we use the structure of the GFF to select a representative subtree with very strong independence properties that will be essential to our analysis of cover times. Restructuring the randomness and a percolation argument. The majorizing measures theory is designed to control the first moment E sup i∈I η i of the supremum of Gaussian process. In analyzing (15) to prove a lower bound on the cover times, we actually need to employ a variant of the second moment method. The need for this, and a detailed discussion of how it proceeds, are presented at the beginning of Section 4. Towards this end, we want to associate events to the leaves of our "tree like" subprocess which can be thought of as "open events" in a percolation process on the tree. For general trees, it is known that the second moment method gives accurate estimates for the probability of having an open path to a leaf [38]. 
While our trees are not regular, they are "regularized" by the majorizing measure, and we do a somewhat standard analysis of such a process in Section 4.3. The real difficulty involves setting up the right filtration on the probability space corresponding to our tree so that the percolation argument yields the desired control on the cover times. This requires a delicate definition of the events associated to each edge, and the ensuing analysis forms the technical core of our argument in Section 4. Algorithmic issues. In order to complete the proof of Theorem 1.5 and thus resolve Question 1.2, we present a deterministic algorithm which computes an approximation to γ 2 (X, d) for any metric space (X, d). This is achieved in Section 3.3. While the algorithm is fairly elementary to describe, its analysis requires a number of tools from the majorizing measures theory. We remark that, in combination with Theorem 1.9, this yields the following result. Observe that for general reversible chains, the cover time is not necessarily bounded a polynomial in |V |, and thus even randomized simulation of the chain does not yield a polynomial-time algorithm for approximating t cov (G). Finally, in Section 4.5, we prove Theorems 1.7 and 1.8 in the setting of arbitrary reversible Markov chains, leading to a near-linear time randomized algorithm for computing cover times. Gaussian processes and local times We now discuss properties of the Gaussian processes arising from the isomorphism theorem (Theorem 1.14). In Section 2.1, we show that the isomorphism theorem, combined with concentration properties of Gaussian processes, is already enough to get strong control on blanket times and related quantities. In Section 2.3, we prove some geometric properties of the resistance metric on networks that will be crucial to our work on the cover time in Sections 3 and 4. 
Finally, in Section 2.4, we recall the definition of the Gaussian free field and show how the geometry of such a process relates to the geometry of the underlying resistance metric. The blanket time We first remark that the covariance matrix of the Gaussian process arising from the isomorphism theorem can be calculated explicitly in terms of the resistance metric on the network G(V ). Throughout this section, the process {η x } x∈V refers to the one resulting from Theorem 1.14 with v 0 ∈ V some fixed (but arbitrary) vertex, τ (t) refers to the inverse local time defined in (14), and T 0 is the hitting time to v 0 . Lemma 2.1. For every x, y ∈ V , Γ v 0 (x, y) = E x (L y T 0 ) = 1 2 (R eff (x, v 0 ) + R eff (v 0 , y) − R eff (x, y)) . In particular, E (η x − η y ) 2 = R eff (x, y). Proof. To prove the lemma, we use the cycle identity for hitting times (see, e.g., [36,Lem. 10.10]) which asserts that, H(x, v 0 ) + H(v 0 , y) + H(y, x) = H(x, y) + H(y, v 0 ) + H(v 0 , x).(18) Averaging both sides of (18) and recalling (9) yields H(x, v 0 ) + H(v 0 , y) + H(y, x) = C 2 [R eff (x, v 0 ) + R eff (v 0 , y) + R eff (x, y)] . Now, we subtract CR eff (x, y) = H(x, y) + H(y, x) from both sides, giving H(x, v 0 ) + H(v 0 , y) − H(x, y) = C 2 [R eff (x, v 0 , ) + R eff (v 0 , y) − R eff (x, y)] Finally, we conclude using the identity (see, e.g. [2, Ch 2., Lem. 9]), E x (L y T 0 ) = 1 C (H(x, v 0 ) + H(v 0 , y) − H(x, y)) . We now relate the blanket time of the random walk to the expected supremum of its associated Gaussian process. The following is a central facet of the theory of concentration of measure; see, for example, [34, Thm. 7.1, Eq. (7.4)]. Lemma 2.2. Consider a Gaussian process {η x : x ∈ V } and define σ = sup x∈V (E(η 2 x )) 1/2 . Then for α > 0, P sup x∈V η x − E sup x∈V η x > α 2 exp(−α 2 /2σ 2 ) . We are now ready to establish the upper bound on the strong blanket time t ⋆ bl (G, δ), for any fixed 0 < δ < 1. 
Note that this will naturally yield an upper bound on t bl (δ). Theorem 2.3. Consider a network G(V ) and its total conductance C = x∈V c x . For any fixed 0 < δ < 1, the blanket time t ⋆ bl (G, δ) of the random walk on G(V ) satisfies t ⋆ bl (G, δ) δ C · E sup x∈V η x 2 , where {η x } is the associated Gaussian process from Theorem 1.14. Proof. We first prove that for some A δ > 0 t ⋆ bl (δ) A δ C E sup x∈V η x 2 + sup x∈V E (η 2 x ) .(19) Fix a vertex v 0 ∈ V and consider the local times {L x τ (t) : x ∈ V }, where for t > 0, we write τ (t) = inf{s : L v 0 s > t}. Let σ = sup x∈V E(η 2 x ) and Λ = E sup x η x . Use {η L x } to denote the copy of the Gaussian process corresponding to the left-hand side of (15), and {η R x } to denote the i.i.d. process corresponding to the right-hand side. Fix β > 0, and set t = t(β) = β(Λ 2 + σ 2 ). By Theorem 1.14, we get that P min x L x τ (t) √ δt P inf x 1 2 (η R x + √ 2t) 2 1 + √ δ 2 t + P sup x 1 2 (η L x ) 2 1 − √ δ 2 t . Therefore, P min x L x τ (t) √ δt P inf x η R x −a δ √ t + P sup x |η L x | b δ √ t , where a δ = √ 2 − 1 + √ δ and b δ = 1 − √ δ. Applying Lemma 2.2, we obtain that if β > β 0 (δ) for some β 0 (δ) > 0, then P min x L x τ (t) √ δt 6 exp(−γ δ β) ,(20) where γ δ = 1 2 (a 2 δ ∧ b 2 δ ). On the other hand, we have P max x L x τ (t) t/ √ δ P max x 1 2 (η R x + √ 2t) 2 t/ √ δ = P max x η x a ′ δ √ t , where a ′ δ = 1/δ − 1. Applying Lemma 2.2 again for β > β 0 (δ), we get that P max x L x τ (t) t/ √ δ 2 exp(−γ ′ δ β) ,(21) where γ ′ δ = (a ′ δ ) 2 /2. Note that assuming min x L x τ (t) √ δt and max x L x τ (t) t/ √ δ, we have τ (t) = x c x L x τ (t) Ct/ √ δ as well as min x,y L x τ (t) /L y τ (t) δ. It then follows that τ ⋆ bl τ (t) Ct/ √ δ. Therefore, we can deduce that τ ⋆ bl Ct/ √ δ ⊂ min x L x τ (t) √ δt max x L x τ (t) t/ √ δ . Combined with (20) and (21), it yields that P(τ ⋆ bl Ct/ √ δ) 6 exp(−γ δ β) + 2 exp(−γ ′ δ β) . 
It then follows that t ⋆ bl A δ C(Λ 2 + σ 2 ) for some A δ > 0 which depends only on δ, establishing (19). It remains to prove that σ = O(Λ). To this end, let x * be such that Eη 2 x * = σ 2 . We have Λ E max(η v 0 , η x * ) = E max(0, η x * ) = σ √ 2π .(22) This completes the proof for the continuous-time case. Remark 1. An interesting question is the asymptotic behavior of δ-blanket time as δ → 1, namely the dependence on δ of A δ in (19). As implied in the proof, we can see that A δ 1 γ δ + 1 γ ′ δ 1 (1 − δ) 2 . These asymptotics are tight for the complete graph; see e.g. [54,Cor. 2]. We next extend the proof of the preceding theorem to the case of the discrete-time random walk. The next lemma contains the main estimate required for this extension. Lemma 2.4. Let G(V ) be a network and write γ 2 = γ 2 (V, √ R eff ). Then for all u 16, we have v∈V e −u·cvγ 2 2 e −u/8 . Proof. By definition of the γ 2 functional, we can choose a sequence of partitions A k with |A k | 2 2 k such that γ 2 1 2 sup v∈V k 0 2 k/2 diam(A k (v)) . For v ∈ V , let k v = min{k : {v} ∈ A k }. It is clear that R eff (u, v) 1/c v for all u = v and hence (diam(A kv−1 (v))) 2 1/c v . Therefore, we see that v∈V e −u·cvγ 2 2 = ∞ k=0 v:kv=k+1 e −u·cvγ 2 2 ∞ k=1 2 2 k+1 e −u2 k /4 e −u/8 , completing the proof. Theorem 2.5. Consider a network G(V ) and its total conductance C = x∈V c x . For any fixed 0 < δ < 1, the discrete blanket time t bl (G, δ) of the random walk on on G(V ) satisfies t bl (G, δ) δ C · E sup x∈V η x 2 , where {η x } is the associated Gaussian process from Theorem 1.14. Proof. We now consider the embedded discrete-time random walk of the continuous-time counterpart (i.e. the corresponding jump chain; see [2, Ch. 2]). Let N v t be such that c v · N v t is the number of visits to vertex v up to continuous time t, i.e. N v t is a discrete-time analog of the local time L v t . Fix a vertex v 0 ∈ V and consider the local times {L x τ (t) : x ∈ V }. 
Let σ = sup x∈V E(η 2 x ) and Λ = E sup x η x . Again, set t = β(Λ 2 + σ 2 ). Let τ bl (δ) denote the first time at which N x t δt C for every x ∈ V . Assuming that min x N x τ (t) δ 1/4 t and max x N x τ (t) t/δ 3/4 , we have τ (t) = x c x N x τ (t) Ct/δ 3/4 and thus min x N x τ (t) δτ (t)/C. It then follows that τ bl (δ) τ (t) Ct/δ 3/4 . Therefore, we deduce that τ bl (δ) Ct δ 3/4 ⊂ min x N x τ (t) δ 1/4 t max x N x τ (t) t/δ 3/4 . Therefore we have, P τ bl (δ) Ct δ 3/4 P min x L x τ (t) √ δt or max x L x τ (t) t/ √ δ + P ∀x : √ δt L x τ (t) t/ √ δ | min x N x τ (t) δ 1/4 t or max x N x τ (t) t/δ 3/4 . Note that we have already bounded the first term in (20) and (21). The second term can be bounded by a simple application of a large deviation inequality on the sum of i.i.d. exponential variables. Precisely, x∈V P √ δt L x τ (t) t/ √ δ | N x τ (t) δ 1/4 t or N x τ (t) t/δ 3/4 x∈V e −ã δ ·cxt for some constantã δ > 0 depending only on δ. Recall that Theorem (MM) implies E sup x η x ≍ γ 2 (V, √ R eff ). By (22), we see that σ √ 2πΛ. Altogether, we get that t ≍ Λ 2 ≍ β γ 2 (V, √ R eff ) 2 . Applying Lemma 2.4, we conclude that there existsβ 0 (δ) > 0 depending only on δ such that for all β β 0 (δ), we have P(τ bl (G, δ) Ct/δ 3/4 ) e −b δ β whereb δ is a constant depending only on δ. This immediately yields the desired upper bound on the blanket time for the discrete-time random walk. We next exhibit a lower bound on a variation of blanket time (considered in [30]). It is apparent that the lower bound on the cover time, which will be proved in Section 4, is an automatic lower bound on the blanket time. In what follows, though, we try to give a simple argument that can be regarded as a warm up. For the convenience of analysis, we consider the following notion. For 0 < ε < 1, define t * bl (G, ε) = max w∈V inf{s : P w (∀u, v ∈ V : L u t 2L v t ) > ε for all t s} .(23) Theorem 2.6. Consider a network G(V ) and its total conductance C = x∈V c x . 
For any fixed 0 < ε < 1, we have t * bl (G, ε) ≳ ε C · (E sup x∈V η x ) 2 . In order to prove Theorem 2.6, we will use the next simple lemma. We will also require this estimate in Section 4. Lemma 2.7. Let τ (t) be the inverse local time at vertex v 0 , as defined in (14). Let C be the total conductance and let D = max x,y∈V √ R eff (x, y). Then, for all β > 0 and t ≥ D 2 /β 2 , P v 0 (τ (t) ≤ βCt) ≤ 3β . Proof. We use P v to denote the measure on random walks started at a vertex v ∈ V , and we use E v similarly. Let p δ = min v {P v (τ (t) ≤ δCt)} for some δ > 0. Using the strong Markov property, we get that for all v ∈ V , P v (τ (t) ≥ kδCt) ≤ (1 − p δ ) k . In particular, E v τ (t) ≤ δCt/p δ . By Theorem 1.14, it follows easily that E v 0 τ (t) = Ct. Since E v τ (t) ≥ E v 0 (τ (t)), we deduce that p δ ≤ δ. Let u = u(δ) be such that P u (τ (t) ≤ δCt) = p δ . Let Y, Z be random variables with the law of τ (t), when the random walk is started at u and v 0 , respectively. Clearly, Y law = Z + T v 0 ,(24) where T v 0 is distributed as the hitting time of v 0 when the random walk is started at u, and T v 0 is independent of Z. Since R eff (u, v 0 ) ≤ D 2 , we have E u T v 0 ≤ CD 2 (by (9)), and this yields P u (T v 0 ≥ CD 2 /β) ≤ β. Using the assumption t ≥ D 2 /β 2 and (24), we conclude that P(Z ≤ βCt) ≤ P(Z ≤ 2βCt − CD 2 /β) ≤ P(Y ≤ 2βCt) + P(T v 0 ≥ CD 2 /β) ≤ p 2β + β ≤ 3β , as required. We are now ready to establish the lower bound on t * bl (G, ε). Proof of Theorem 2.6. We consider the associated Gaussian process as in the proof of Theorem 2.3. Let σ 2 = sup x∈V Eη 2 x and Λ = E sup x η x . Observe that the maximal hitting time is a simple lower bound on t * bl (G, ε) up to a constant depending only on ε. In light of Lemma 2.1, we see t * bl (G, ε) ≳ ε C · σ 2 . Therefore, we can assume in what follows Λ 2 ≥ 100 log(4/ε)ε −2 σ 2 .(25) Let t * = 1 2 Λ 2 . By Lemma 2.2, we get P inf x∈V 1 2 (η R x + √ 2t * ) 2 ≥ log(4/ε)σ 2 ≥ P | sup x∈V η R x − Λ| ≤ √ 2 log(4/ε) σ ≥ 1 − ε/2 . 
Applying Theorem 1.14, we obtain P inf x∈V L x τ (t * ) log(4/ε)σ 2 1 − ε 2 . By triangle inequality, we have D 2σ. Recalling the assumption (25), we can apply Lemma 2.7 and deduce that P(τ (t * ) εCt * /6) ε/2 . Writing t 0 = εCt * /6, we can then obtain that P inf x∈V L x t 0 log(4/ε)σ 2 , τ (t * ) t 0 1 − ε . Also, we see that sup x∈V L x t 0 εΛ 2 /12 whenever τ (t * ) t 0 . Using assumption (25) again, we conclude P v 0 (∃x, y ∈ V : L x t 0 > 2L y t 0 ) 1 − ε . This implies that t * bl (G, ε) t 0 , completing the proof. An asymptotically strong upper bound Finally, we show a strong upper bound for the asymptotics of t cov on a sequence of graphs {G n }, assuming t hit (G n ) = o(t cov (G n )). Theorem 2.8. For any graph G = (V, E) with v 0 ∈ V , let t hit (G) be the maximal hitting time in G and let {η v } v∈V be the GFF on G with η v 0 = 0. Then, for a universal constant C > 0, t cov (G) 1 + C t hit (G) t cov (G) · |E| · E sup v∈V η v 2 . Proof. Theorem 2.5 asserts that t cov (G) (E max v η v ) 2 ,(26) where denotes stochastic domination. Write σ 2 = max v Eη 2 v . Note that σ 2 corresponds to the diameter of V in the effective resistance metric, thus t hit (G) ≍ |E|σ 2 . Denote by S = v d v η 2 v , where d v is the degree of vertex v. By a generalized Hölder inequality and moment estimates for Gaussian variables (here we use that EX 6 = 15 for a standard Gaussian variable X), we obtain that ES 3 u,v,w d u d v d w E(η 2 u η 2 v η 2 w ) u,v,w d u d v d w E(η 6 u ) 1/3 E(η 6 v ) 1/3 E(η 6 w ) 1/3 15|E| 3 σ 6 . An application of Markov's inequality then yields P(S α|E|σ 2 ) 15 α 3 .(27)Write Q = v d v η v . Clearly, Q is a centered Gaussian with variance bounded by 4|E| 2 σ 2 and therefore, P(|Q| α|E|σ) 2e −α 2 /8 . For β > 0, let t = 1 2 (E max v η v + βσ) 2 . Noting τ (t) = v d v L v τ (t) and recalling the Isomorphism theorem (Theorem 1.14), we get that τ (t) 2|E|t + √ 2t 2 |Q| + 1 2 S . 
Combined with (27) and (28), we deduce that P(τ (t) 2|E|t + √ 2tβ|E|σ + β|E|σ 2 ) 12 (β − 2) 2 + 2e −β 2 /8 .(29) We now turn to bound the probability for τ cov > τ (t). Observe that on the event {τ cov > τ (t)}, there exists v ∈ V such that L v τ (t) = 0. It is clear that for all v ∈ V , we have P(η 2 v βσ 2 /2) 2e −β/4 . Since {η v } v∈V and {L v τ (t) } v∈V are two independent processes, we obtain P {τ cov > τ (t)} \ ∃v ∈ V : L v τ (t) + 1 2 η 2 v < βσ 2 /2 2e −β/4 .(30) On the other hand, we deduce from the concentration of Gaussian processes (Lemma 2.2) that P inf v ( √ 2t + η v ) 2 βσ/2 2e −β/8 . Applying Isomorphism theorem again and combined with (30), we get that P(τ cov > τ (t)) 4e −β/8 . Combined with (29), it follows that P(τ cov 2|E|t + √ 2tβ|E|σ + β|E|σ 2 ) 15 β 3 + 2e −β 2 /8 + 4e −β/8 . Since t = 1 2 (E max v η v + βσ) 2 , we can deduce that for some universal constant C 1 > 0, t cov (G) |E|(E sup v η v ) 2 + C 1 |E|(σ 2 + σE sup v η v ) . Recalling (26), we complete the proof. Geometry of the resistance metric We now discuss some relevant properties of the resistance metric on a network G(V ). Effective resistances and network reduction. For a subset S ⊆ V , define the quotient network G/S to have vertex set (V \ S) ∪ {v S }, where v S is a new vertex disjoint from V . The conductances in G/S are defined by c G/S xy = c xy if x, y / ∈ S and c v S x = y∈S c xy for x / ∈ S. Now, given v ∈ V and S ⊆ V , we put R eff (v, S) △ = R G/S eff (v, v S ),(31) where the latter effective resistance is computed in G/S. For two disjoint sets S, T ⊆ V , we define R eff (S, T ) △ = R G/S eff (v S , T ), and the resistance is defined to be 0 if S ∩ T = ∅. It is straightforward to check that R eff (S, T ) = R eff (T, S). The following network reduction lemma was discovered by Campbell [10] under the name "star-mesh transformation" (see also, e.g., [39, Ex. 2.47(d)]). We give a proof for completeness. Lemma 2.9. 
For a network G(V ) and a subset V ⊂ V , there exists a networkG(Ṽ ) such that for all u, v ∈ V , we havec v = c v and R G eff (u, v) = R eff (u, v) . We call G( V ) the reduced network. Furthermore, if V = V \ {x}, we then have the formulã c yz = c yz + c * ,x yz , where c * ,x yz = c xy c xz w∈Vx c xw .(32) Proof. Let P be the transition kernel of the discrete-time random walk {S t } on the network G and let P V be the transition kernel of the induced random walk on V , namely for u, v ∈ V P V (u, v) = P u (T + V = v) , where T + A △ = min{t 1 : S t ∈ A} for all A ⊆ V . In other words, P V is the chain watched in the subset V . We observe that P V is a reversible Markov chain on V (see, e.g., [2,36]). It is clear that the chain P V has the same invariant measure as that of P restricted to V , up to scaling by a constant. Therefore, there exists a (unique) network G( V ) corresponding to the Markov chain P V such thatc u = c u for all u ∈ V . We next show that the effective resistances are preserved in G( V ). To this end, we use the following identity relating effective resistance and the random walk (see, e.g., [39, Eq. (2.5)]), P v (T + v > T u ) = 1 c v R eff (u, v) ,(33) where T u = min{t 0 : S t = u}. Since P V is a watched chain on the subset V , we see that P V v (T + v > T u ) = P v (T + v > T u ) for all u, v ∈ V . This yields R G eff (u, v) = R eff (u, v) . To prove the second half of the lemma, we let G( V ) be the network defined by (32). A straightforward calculation yields that c v = c v − c xv + y∈Vx c * ,x vy = c v − c xv + y∈Vx c xv c xy z∈Vx c xz = c v . Let P G be the transition kernel for the random walk on the network G( V ). Then, P G (u, v) =c uṽ c u = c uv + cuxcxv y∈Vx cxy c u . On the other hand, the watched chain P V satisfies P V (u, v) = c uv c u + c ux c u c xv y∈Vx c xy . Altogether, we see that P G (u, v) = P V (u, v), completing the proof. Well-separated sets. 
The following result is an important property of the resistance metric, crucial for our analysis. Proposition 2.10. Consider a network G(V ) and its associated resistance metric (V, R eff ). Suppose that for some subset S ⊆ V , there is a partition S = B 1 ∪ B 2 ∪ · · · ∪ B m which satisfies the following properties. 1. For all i = 1, 2, . . . , m and for all x, y ∈ B i , we have R eff (x, y) ε/48. For all i = j ∈ {1, 2, . . . , m}, for all x ∈ B i and y ∈ B j , we have R eff (x, y) ε. Then there is a subset I ⊆ {1, 2, . . . , m} with |I| m/2 such that for all i ∈ I, R eff (B i , S \ B i ) ε/24. In order to prove Proposition 2.10, we need the following two ingredients. Lemma 2.11. Suppose the network H(W ) can be partitioned into two disjoint parts A and B such that for some ε > 0, and some vertices u ∈ A and v ∈ B, we have 1. R H eff (u, v) ε, and 2. R H eff (u, x) ε/12 for all x ∈ A, and R H eff (v, x) ε/12 for all x ∈ B. Then, R H eff (A, B) ε/6. R eff (x, y) = min f E(f ) , where E(f ) = 1 2 x,y f 2 (x, y)r xy , and the minimum is over all unit flows from x to y. Here, r xy = 1/c xy is the edge resistance for {x, y}. Suppose now that R H eff (A, B) < ε/6. Then there exists a unit flow f AB from set A to set B such that E(f AB ) < ε/6. For x ∈ A, let q x be the amount of flow sent out from vertex x in f AB and for x ∈ B, let q x be the amount of flow sent in to vertex x. Note that x∈A q x = x∈B q x = 1. Analogously, by assumption (2), there exist flows {f ux : x ∈ A} and {f xv : x ∈ B} such that f xy is a unit flow from x to y and E(f xy ) ε/12. We next build a flow f such that f = f AB + w∈A q w f uw + z∈B q z f zv . We see that f is indeed a unit flow from u to v. Furthermore, by Cauchy-Schwartz, E(f ) = 1 2 x,y f 2 (x, y)r xy = 1 2 x,y r xy f AB (x, y) + w∈A q w f uw (x, y) + z∈B q z f zv (x, y) 2 3 2 x,y r xy f 2 AB (x, y) + w∈A q w f 2 uw (x, y) + z∈B q z f 2 zv (x, y) = 3 E(f AB ) + w∈A q w E(f uw ) + z∈B q z E(f zv ) < ε . 
This contradicts assumption (1), completing the proof. Lemma 2.12. For any network G(V ), the following holds. If there is a subset S ⊆ V and a value ε > 0 such that R eff (u, v) ε for all u, v ∈ S, then there is a subset S ′ ⊆ S with |S ′ | |S|/2 such that for every v ∈ S ′ , R eff (v, S \ {v}) ε/4. Proof. Consider the reduced network G on the vertex set S, as defined in Lemma 2.9. Let the new conductances be denotedc xy for x, y ∈ S. By Lemma 2.9, our initial assumption that R eff (u, v) ε for all u, v ∈ S implies that R G eff (u, v) ε for all u, v ∈ S. Let n = |S|. Foster's Theorem [26] (see also [53]) states that 1 2 u =v∈S R G eff (u, v)c u,v = n − 1 . Combined with the fact that R G eff (u, v) ε, this yields 1 2 u =v∈Sc uv n ε . In particular, there exists a subset S ′ ⊆ S with |S ′ | n/2 such that for all v ∈ S ′ , u∈S\{v}c uv 4 ε . It follows that for every v ∈ S ′ , we have C G eff (v, S \ {v}) 4/ε, hence R eff (v, S \ {v}) = R G eff (v, S \ {v}) ε/4. Proof of Proposition 2.10. For each i ∈ {1, 2, . . . , m}, choose some v i ∈ B i . By assumption (2), R eff (v i , v j ) ε for i = j. Thus applying Lemma 2.12, we find a subset I ⊆ {1, 2, . . . , m} with |I| m/2 and such that for every i ∈ I, we have R eff (v i , {v 1 , . . . , v m } \ {v i }) ε/4 .(34) We claim that this subset I satisfies the conclusion of the proposition. To this end, fix i ∈ I, and letG be the quotient network formed by gluing {v 1 , . . . , v m } \ {v i } into a single vertexṽ. By (34), we have RG eff (v i ,ṽ) ε/4. Now let, B =   {ṽ} ∪ j =i B j   \ {v i } i∈I . Consider any x ∈B with x =ṽ. Then x ∈ B j for some j = i, hence by assumption (1), we conclude that, RG eff (x,ṽ) R eff (x, v j ) ε/48 . We may now apply Lemma 2.11 to the sets B i andB inG (with respective vertices v i andṽ) to conclude that RG eff (B i ,B) ε/24 . But the preceding line immediately yields, R eff (B i , S \ B i ) ε/24, finishing the proof. We end this section with the following simple lemma. Lemma 2.13. 
For any network G(V ), if A, B 1 , B 2 ⊆ V are disjoint, then R eff (A, B 1 ∪ B 2 ) R eff (A, B 1 ) · R eff (A, B 2 ) R eff (A, B 1 ) + R eff (A, B 2 ) . Proof. By considering the quotient graph, the lemma can be reduced to the case when A = {u}. Let {S t } be the discrete-time random walk on the network and define T B = min{t 0 : S t ∈ B} and T + B = min{t 1 : S t ∈ B} for B ⊆ V . It is clear that for a random walk started at u, we have P u (T + u > T B 1 ∪B 2 ) P u (T + u > T B 1 ) + P u (T + u > T B 2 ) . Combined with (33), this gives 1 R eff (u, B 1 ∪ B 2 ) 1 R eff (u, B 1 ) + 1 R eff (u, B 2 ) , yielding the desired inequality. The Gaussian free field We recall the graph Laplacian ∆ : ℓ 2 (V ) → ℓ 2 (V ) defined by ∆f (x) = c x f (x) − y c xy f (y). Consider a connected network G(V ). Fix a vertex v 0 ∈ V , and consider the random process X = {η v } v∈V , where η v 0 = 0, and X has density proportional to exp − 1 2 X , ∆X = exp − 1 4 u,v c uv |η u − η v | 2 .(35) The process X is called the Gaussian free field (GFF) associated with G. The next lemma is known, see, e.g., Theorem 9.20 of [28]. We include the proof for completeness. Lemma 2.14. For any connected network G(V ), if X = {η v } v∈V is the associated GFF, then for all u, v ∈ V , E (η u − η v ) 2 = R eff (u, v).(36) Proof. From (35), and the fact that the Laplacian is positive semi-definite, it is clear that X is a Gaussian process. Let Γ v 0 (u, v) = E u L v T 0 , where T 0 is the hitting time for v 0 as in Theorem 1.14. From Lemma 2.1, we have Γ v 0 (u, v) = 1 2 (R eff (v 0 , u) + R eff (v 0 , v) − R eff (u, v)) .(37) Let ∆ and Γ v 0 , respectively, be the matrices ∆ and Γ v 0 with the row and column corresponding to v 0 removed. Appealing to (35), if we can show that ∆ Γ v 0 = I, it follows that Γ v 0 is the covariance matrix for X . In this case, comparing (37) to E(η u η v ) = 1 2 Eη 2 u + Eη 2 v − E(η u − η v ) 2 and using η v 0 = 0, we see that (36) follows. 
In order to demonstrate ∆ Γ v 0 = I, we consider u, v such that v 0 / ∈ {u, v}. Conditioning on the first step of the walk from u gives, c u Γ v 0 (u, v) = c u E u L v T 0 = 1 {u=v} + w c uw E w L v T 0 = 1 {u=v} + w c uw Γ v 0 (v, w)(38) On the other hand, by definition of the Laplacian, (∆Γ v 0 )(u, v) = c u Γ v 0 (u, v) − w c uw Γ v 0 (v, w) = 1 {u=v} , where the latter equality is precisely (38). Thus ∆ Γ v 0 = I, completing the proof. A geometric identity. In what follows, for a set of points Y lying in some Hilbert space, we use aff(Y ) to denote their affine hull, i.e. the closure of { n i=1 α i y i : n 1, y i ∈ Y, n i=1 α i = 1}. Of course, when Y contains the origin, aff(Y ) is simply the linear span of Y . Lemma 2.15. For any network G(V ), if X = {η v } v∈V is the GFF associated with G, then for any w ∈ V and subset S ⊆ V , R eff (w, S) = dist L 2 (η w , aff({η u } u∈S )) . Proof. Since the statement of the lemma is invariant under translation, we may assume that the GFF is defined with respect to some v 0 ∈ S. In this case, by the definition in (35), the GFF for G/S has density proportional to exp   − 1 4   u,v / ∈S c uv |η u − η v | 2 + u / ∈S c v S u |η u | 2     , i.e. the GFF on G/S is precisely the initial Gaussian process X conditioned on the linear subspace A S = {η v = η v 0 = 0 : v ∈ S}. Using (31) and Lemma 2.14, we have R eff (w, S) = R G/S eff (w, v S ) = E |η w − η v 0 | 2 A S = E |η w | 2 A S . To compute the latter expectation, write η w = Y + Y ′ , where Y ′ ∈ span({η v } v∈S ) and E(Y Y ′ ) = 0. It follows immediately that dist L 2 (η w , aff({η u } u∈S )) = E[Y 2 ] = E |η w | 2 A S , completing the proof. Majorizing measures We now review the relevant parts of the majorizing measure theory. One is encouraged to consult the book [52] for further information. In Section 1, we saw Talagrand's γ 2 functional. 
For our purposes, it will be more convenient to work with a different value that is equivalent to the functional γ 2 , up to universal constants. In Section 3.2, we discuss separated trees, and prove a number of standard properties of such objects. In Section 3.3, we present a deterministic algorithm for computing γ 2 (X, d) for any finite metric space (X, d). Finally, in Section 3.4, we specialize the theory of Gaussian processes and trees to the case of GFFs. There, we will use the geometric properties proved in Sections 2.3 and 2.4. Before we begin, we attempt to give some rough intuition about the role of trees in the majorizing measures theory. A good reference for this material is [27]. A tree of subsets of X is a finite collection F of subsets with the property that for all A, B ∈ F, either A ∩ B = ∅, or A ⊆ B, or B ⊆ A. A set B is a child of A if B ⊆ A, B ≠ A, and C ∈ F, B ⊆ C ⊆ A =⇒ C = B or C = A. We assume that X ∈ F, and X is referred to as the root of the tree F. For each A ∈ F, we use N (A) to denote the number of children of A. A branch of F is a sequence A 1 ⊃ A 2 ⊃ · · · such that each A k+1 is a child of A k . A branch is maximal if it is not contained in a longer branch. We will assume additionally that every maximal branch terminates in a singleton set {x} for x ∈ X. Let {η x } x∈X be a centered Gaussian process with X finite, and let d(x, y) = (E (η x − η y ) 2 ) 1/2 . The basic premise of the tree interpretation of the majorizing measures theory is that one can assign a measure of "size" to any tree of subsets in X, and this size provides a lower bound on E sup x∈X η x . The majorizing measures theorem then claims that the value of the optimal such tree is within absolute constants of the expected supremum. The size of the tree (see (39)) can be defined using only the metric structure of (X, d), without reference to the underlying Gaussian process. Thus many of the theorems in this section are stated for general metric spaces. 
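In the application to networks, Lemma 2.14 identifies this canonical metric of the GFF with √ R eff . This can be checked numerically; the following is an illustrative sketch (not part of the paper; the small network and all variable names are ad hoc). The GFF pinned at v 0 has covariance given by the inverse of the Laplacian with the row and column of v 0 removed, while effective resistances come from the Laplacian pseudoinverse:

```python
import numpy as np

rng = np.random.default_rng(0)
n, v0 = 5, 0
# an ad hoc network: complete graph with random positive conductances
cond = {(i, j): rng.uniform(0.5, 2.0) for i in range(n) for j in range(i + 1, n)}

L = np.zeros((n, n))  # graph Laplacian
for (i, j), c in cond.items():
    L[i, j] -= c; L[j, i] -= c; L[i, i] += c; L[j, j] += c

# covariance of the GFF pinned at v0: invert L with row/column v0 removed
keep = [i for i in range(n) if i != v0]
Cov = np.zeros((n, n))
Cov[np.ix_(keep, keep)] = np.linalg.inv(L[np.ix_(keep, keep)])

Lp = np.linalg.pinv(L)  # pseudoinverse gives effective resistances
err = 0.0
for u in range(n):
    for v in range(n):
        e = np.zeros(n); e[u] += 1.0; e[v] -= 1.0
        r_eff = e @ Lp @ e                          # R_eff(u, v)
        d2 = Cov[u, u] + Cov[v, v] - 2 * Cov[u, v]  # E(eta_u - eta_v)^2
        err = max(err, abs(r_eff - d2))
print(err < 1e-9)  # prints True
```

On a connected network the two quantities agree to machine precision, which is exactly identity (36) of Lemma 2.14.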
The tree of subsets is meant to capture the structure of (X, d) at all scales simultaneously. In general, to obtain a multi-scale lower bound on the expected supremum of the process, one arranges so that the diameter of the subsets decreases exponentially as one goes down the tree, and all subsets at one level of the tree are separated by a constant fraction of their diameter (see Definitions 3.1 and 3.8 below). This allows a certain level of independence between different branches of the tree which is exploited in the lower bounds. Much of this section is devoted to proving that one can construct a near-optimal tree with a number of regularity properties that will be crucial to our approach in Section 4. Trees, measures, and functionals Let (X, d) be an arbitrary metric space. Definition 3.1. For values q ∈ N and α, β > 0, and r ≥ 2, a tree of subsets F in X is called a (q, r, α, β)-tree if to each A ∈ F, one can associate a number n(A) ∈ Z such that the following three conditions are satisfied. 1. If B is a child of A, then n(B) ≤ n(A) − q. 2. If B and B ′ are distinct children of A, then d(B, B ′ ) ≥ β r n(A)−1 . 3. diam(A) ≤ α r n(A) . We will refer to a (q, r, 4, 1 2 )-tree as simply a (q, r)-tree. The r-size of a tree of subsets F, written size r (F), is defined as the infimum of Σ k≥1 r n(A k ) √ log + N (A k ) (39) over all possible maximal branches A 1 ⊃ A 2 ⊃ · · · of F, where we use the notation log + x = log x for x > 0, and log + (0) = 0. To connect trees of subsets with the γ 2 functional, we recall the relationship with majorizing measures. The next result is from [51, Thm. 1.1]. Theorem 3.2. For every metric space (X, d), we have γ 2 (X, d) ≍ inf µ sup x∈X ∫ ∞ 0 (log 1/µ(B(x, ε))) 1/2 dε, where B(x, ε) is the closed ball of radius ε about x, and the infimum is over all finitely supported probability measures µ on X. We will also need the following theorem due to Talagrand (see Proposition 4.3 of [50] and also Theorem T5 of [27]). We will employ it now and also in Section 3.3. Theorem 3.3. There is a value r 0 ≥ 2 such that the following holds. Let (X, d) be a finite metric space, and r ≥ r 0 . 
Assume there is a family of functions {ϕ i : X → R + : i ∈ Z} such that the following conditions hold for some β > 0. 1. ϕ i (x) ϕ i−1 (x) for all i ∈ Z and x ∈ X. 2. If t 1 , t 2 , . . . , t N ∈ B(s, r j ) are such that d(t i , t i ′ ) r j−1 for i = i ′ , then ϕ j (s) βr j log N + min {ϕ j−2 (t i ) : i = 1, 2, . . . , N } . Under these conditions, γ 2 (X, d) r,β sup x∈X,i∈Z ϕ i (x). The preceding two theorems allow us to present the following connection between trees and γ 2 . Such a connection is well-known (see, e.g. [49]), but we record the proofs here for completeness, and for the precise quantitative bounds we will use in future sections. Lemma 3.4. There is a value r 0 2 such that for every finite metric space (X, d), and every r r 0 , we have γ 2 (X, d) r sup{size r (F) : F is a (1, r, 4, 1 2 )-tree in X} . Proof. First, for a subset S ⊆ X, let θ(S) = sup{size r (F) : F is a (1, r, 4, 1 2 )-tree in X} . Then define, for every i ∈ Z and x ∈ X, define ϕ i (x) = θ(B(x, 2r i )) . where B(x, R) is the closed ball of radius R about x ∈ X. We now wish to verify that the conditions of Theorem 3.3 hold for {ϕ i }. Condition (1) is immediate. Assume that r 8. Given t 1 , t 2 , . . . , t N as in condition (2) of Theorem 3.3, consider the set A = B(s, 2r j ) which has diameter bounded by 4r j , and the disjoint subset sets of A given by A i = B(t i , 2r j−2 ) which each have diameter bounded by 4r j−2 , and which satisfy d(A i , A j ) r j−1 /2 for i = j. We also have A i ⊆ A for each i ∈ {1, . . . , N }. Taking the tree of subsets with root A, n(A) = j, and children {A i } N i=1 , and in each A i a tree which achieves value at least θ(A i ) = θ(B(t i , 2r j−2 )) = ϕ j−2 (i), we see immediately that ϕ j (s) = θ(B(s, 2r j )) r j log N + min{ϕ j−2 (t i ) : i = 1, 2, . . . , N }, confirming condition (2) of Theorem 3.3. Applying the theorem, it follows that γ 2 (X, d) r θ(X), proving (40). We will need the upper bound (40) to hold for (2, r, 4, 1 2 )-trees. 
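To make the size functional of (39) concrete, here is a toy computation (an illustrative sketch, not from the paper; the tree, the chosen scales n(A), and the function names are all ad hoc). A tree of subsets is encoded by parent pointers, and size r (F) is the minimum over maximal branches of the branch weights r n(A) √ log + N (A):

```python
import math

# ad hoc tree of subsets: the root X splits into A1, A2; A1 splits into two singletons
parent = {"A1": "X", "A2": "X", "B1": "A1", "B2": "A1", "B3": "A2"}
n = {"X": 3, "A1": 1, "A2": 1, "B1": -1, "B2": -1, "B3": -1}  # the scales n(A)

children = {}
for c, p in parent.items():
    children.setdefault(p, []).append(c)

def maximal_branches(a):
    """Enumerate the maximal branches of the tree of subsets starting at a."""
    if a not in children:
        return [[a]]
    return [[a] + rest for c in children[a] for rest in maximal_branches(c)]

def size(r):
    """size_r(F): minimum over maximal branches of sum_k r^{n(A_k)} sqrt(log_+ N(A_k))."""
    def weight(branch):
        # log_+ vanishes at leaves (N = 0) and at sets with one child (log 1 = 0)
        return sum(r ** n[a] * math.sqrt(math.log(len(children[a])))
                   for a in branch if children.get(a))
    return min(weight(b) for b in maximal_branches("X"))

print(round(size(2.0), 4))  # the branch X, A2, B3 is cheapest: 8*sqrt(log 2)
```

The infimum is attained along the branch avoiding the binary split at A1, illustrating that size r rewards branches along which the tree does not ramify.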
Toward this end, we state a version of [49, Thm 3.1]. The theorem there is only proved for α = 1 and β = 1 2 , but it is straightforward to see that it works for all values α, β > 0 since the proof merely proceeds by choosing an appropriate subtree of the given tree; the values α and β are not used. Theorem 3.5. For every metric space (X, d), the following holds. For every α, β, r > 0 and q ∈ N, and for every (1, r, α, β)-tree F in X, there exists a (q, r, α, β)-tree F ′ in X such that size r (F) q · size r (F ′ ) . Combining Theorem 3.5 with Lemma 3.4 yields the following upper bound using (2, r)-trees. Corollary 3.6. There is a value r 0 2 such that for every finite metric space (X, d), and every r r 0 , we have γ 2 (X, d) r sup{size r (F) : F is a (2, r, 4, 1 2 )-tree in X} . Now we move onto a lower bound on γ 2 . Lemma 3.7. There is a value r 0 2 such that for every finite metric space (X, d), and every r r 0 , we have γ 2 (X, d) sup{size r (F) : F is a (1, r, 8, 1 6 )-tree} . Proof. We will show for any probability measure µ on X and any (1, r, 8, 1 6 )-tree F in X, we have size r (F) r sup x∈X ∞ 0 log 1 µ(B(x, ε)) 1/2 dε . The basic idea is that if A 1 , A 2 , . . . A k are children of A, in F, then the sets B(A i , 1 20 r n(A)−1 ) are disjoint by property (2) of Definition 3.1, where we write B(S, R) = {x ∈ X : d(x, S) R}. Thus one of these sets A i has µ(B(A i , 1 20 r n(A)−1 )) 1/N (A). Thus we may find a finite sequence of sets, starting with A (0) = X such that A (i+1) is a child A (i) and µ(B(A (i+1) , 1 20 r n(A (i) )−1 )) 1/N (A (i) ). Since every maximal branch in a tree of subsets terminates in a singleton, the sequence ends with some set A ′ = A (h) = {x}. By construction, we have µ(B(x, 1 20 r n(A ′ )−1 )) 1 N (A ′ ) . Thus, assuming r 40, r n(A ′ )−2 log + N (A ′ ) 1 20 r n(A ′ )−1 r n(A ′ )−2 1 log µ(B(x, ε)) dε .(42) Separated trees Let (X, d) be an arbitrary metric space. 
Consider a finite, connected, graph-theoretic tree T = (V, E) (i.e., a connected, acyclic graph) such that V ⊆ X, with a fixed root z ∈ V , and a mapping s : V → Z. Abusing notation, we will sometimes use T for the vertex set of T . For a vertex x ∈ T , we use T x to denote the subtree rooted at x, and we use Γ(x) to denote the set of children of x with respect to the root z (formally, these are precisely the neighbors of x in T whose unique path to the root z passes through x). Finally, we write ∆(x) = |Γ(x)| + 1 for all x ∈ T . Let L be the set of leaves of T . For any v ∈ T , let P(v) = {z, . . . , v} denote the set of nodes on the unique path from the root to v. For a pair of nodes u, v ∈ T , we use P(u, v) to denote the sequence of nodes on the unique path from u to v. If u is the parent of v, we write u = p(v), and in particular we write z = p(z). For any such pair (T , s) and r ≥ 2, we define the value of (T , s) by val r (T , s) = inf ℓ∈L Σ v∈P(ℓ) r s(v) √ log ∆(v).(43) The following definition will be central. Definition 3.8. For a value r ≥ 2, we say that the pair (T , s) is an r-separated tree in (X, d) if it satisfies the following conditions for all x ∈ T . 1. For all y ∈ Γ(x), s(y) ≤ s(x) − 2. 2. For all u, v ∈ Γ(x) with u ≠ v, we have d(x, T u ) ≥ 1 2 r s(x)−1 and d(T u , T v ) ≥ 1 2 r s(x)−1 . 3. diam(T x ) ≤ 4r s(x) . We remark that our separated tree is a slightly different version of the (2, r)-tree introduced in the preceding section. The main difference is that the nodes of our separated tree are points in the metric space X, whereas a node in a (2, r)-tree is a subset of X. Our definition is tailored for the application in Section 4. Not surprisingly, we have a similar version of the above theorem for separated trees. Theorem 3.9. For some r 0 ≥ 2 and every r ≥ r 0 , and any metric space (X, d), we have sup T val r (T , s) ≍ r γ 2 (X, d), where the supremum is over all r-separated trees in X. Theorem 3.9 follows from Corollary 3.6 and the following lemma. Lemma 3.10. 
Consider r ≥ 8 and any metric space (X, d). For any (2, r)-tree F, there is an r-separated tree T such that size r (F) = val r (T , s). Also, for any r-separated tree T , there is a (2, r)-tree F such that size r (F) ≥ val r (T , s) − r diam(X). Proof. We only prove the first half of the statement, since the second half can be obtained by reversing the construction. The additive factor −r diam(X) is due to the slight difference in the definitions of the value for a separated tree and the size for a (2, r)-tree (see (43) and (39)). Let F be a (2, r)-tree on (X, d). For each A ∈ F with N (A) ≥ 1, we select one child c(A) and an arbitrary point v A ∈ c(A). We now construct the separated tree T . Its vertex set is a subset of {v A : A ∈ F}: the root is v X , we set s(v A ) = n(A), and the children of v A are the nodes v B , where B ranges over the children of A in F other than c(A). Let us first verify that T is an r-separated tree. Condition (1) of Definition 3.8 holds because if y is a child of v A ∈ T , then y = v B for some child B of A (in F), which implies s(y) = n(B) ≤ n(A) − 2 = s(v A ) − 2. Secondly, if v A is a node with children v B 1 , v B 2 , . . . , v B k , then clearly by Definition 3.1, d(v A , T v B i ) ≥ d(c(A), B i ) ≥ 1 2 r s(v A )−1 , d(T v B i , T v B j ) ≥ d(B i , B j ) ≥ 1 2 r s(v A )−1 , verifying condition (2) of Definition 3.8. Thirdly, if x A ∈ T , then for any child x B of x A , we know B is a child of A, hence diam(T x B ) ≤ diam(B) ≤ 4r n(A) = 4r s(x A ) , using property (3) of a (2, r)-tree. This verifies condition (3) of Definition 3.8. Finally, observe that for every non-leaf node v A ∈ T , we have ∆(v A ) = |Γ(v A )| + 1 = N (A), and for leaves, we have log ∆(v A ) = log + N (A) = 0. It follows that val r (T , s) = size r (F), completing the proof. Additional structure We now observe that we can take our separated trees to have some additional properties. Say that an r-separated tree (T , s) is C-regular for some C ≥ 1, if it satisfies, for every v ∈ T \ L, ∆(v) ≤ exp C 2 r 2 4 s(z)−s(v) .(44) Lemma 3.11. 
For every C ≥ 1 and r ≥ 4, for every r-separated tree (T , s) in X, if val r (T , s) ≥ 4Cr s(z)+1 , then there is a C-regular r-separated tree (T ′ , s ′ ) in X with val r (T ′ , s ′ ) ≥ 1 2 val r (T , s). Proof. Consider the following operation on an r-separated tree (T , s). For x ∈ T \ L, consider a new r-separated tree (T ′ , s ′ ) = Φ x (T , s), which is defined as follows. Let u be the child of x and let S contain the remaining children such that val r (T u , s| T u ) ≥ val r (T v , s| T v ) for all v ∈ S ,(45) where T u is the subtree of T rooted at u and containing all its descendants, and s| T u is the restriction of s to the subtree T u . Consider the tree T ′ that results from deleting all the nodes in S, as well as the subtrees under them, and then contracting the edge (x, u). We also put s ′ (x) = s(u) and s ′ (y) = s(y) for all y ∈ T ′ . As long as there is a node x ∈ T \ L which violates (44) (for the current (T ′ , s ′ )), we iterate this procedure (namely, we replace (T ′ , s ′ ) by Φ x (T ′ , s ′ )). It is clear that we end with a C-regular tree (T ′ , s ′ ). Note that different choices of x at each stage will lead to different outcomes, but the following proof shows that all of them satisfy the required condition. It is also straightforward to verify that for any ℓ ∈ L ′ , we have Σ v∈P T ′ (ℓ) r s ′ (v) √ log ∆ T ′ (v) ≥ Σ v∈P T (ℓ) r s(v) √ log ∆ T (v) − Cr Σ v∈P T (ℓ) r s(v) 2 s(z)−s(v) ≥ Σ v∈P T (ℓ) r s(v) √ log ∆ T (v) − Cr s(z)+1 Σ ∞ k=0 2 2k r −2k ≥ Σ v∈P T (ℓ) r s(v) √ log ∆ T (v) − 2Cr s(z)+1 ≥ val r (T , s) − 2Cr s(z)+1 ≥ 1 2 val r (T , s), where in the second line we have used property (1) of Definition 3.8, in the third line, we have used r ≥ 4, and in the final line we have used our assumption that val r (T , s) ≥ 4Cr s(z)+1 . It remains to prove that val r (T , s) ≲ val r (T ′ , s ′ ). The issue here is that it is possible that L ′ ≠ L. However, by our choice of u at each stage (as in equation (45)), it is guaranteed that ℓ ∈ L ′ for a certain ℓ ∈ L such that val r (T , s) = Σ v∈P(ℓ) r s(v) √ log ∆(v). This completes the proof. 
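The contraction Φ x above admits a short sketch in code (illustrative only; the toy tree, the scales, and the function names are ad hoc and not from the paper). Here val computes the infimum of the path sums (43), and phi keeps the child u of x whose subtree has the largest value, deletes the remaining children together with their subtrees, and contracts the edge (x, u) by setting s(x) = s(u):

```python
import math

def val(tree, s, r, x="z"):
    """val_r of the subtree rooted at x: infimum over leaves of the path sums
    sum r^{s(v)} sqrt(log Delta(v)), where Delta(v) = (number of children) + 1."""
    kids = tree.get(x, [])
    here = r ** s[x] * math.sqrt(math.log(len(kids) + 1))
    return here if not kids else here + min(val(tree, s, r, c) for c in kids)

def phi(tree, s, x, r):
    """Phi_x: keep the child u of x with the largest subtree value, delete the
    rest with their subtrees, and contract (x, u) by setting s(x) = s(u)."""
    tree, s = {a: list(b) for a, b in tree.items()}, dict(s)
    u = max(tree[x], key=lambda c: val(tree, s, r, c))
    tree[x] = tree.pop(u, [])  # x inherits u's children; u's siblings are dropped
    s[x] = s[u]
    return tree, s

# ad hoc toy tree: root z with children a, b; the subtree under a carries two leaves
tree = {"z": ["a", "b"], "a": ["c", "d"]}
s = {"z": 6, "a": 4, "b": 4, "c": 2, "d": 2}
t2, s2 = phi(tree, s, "z", 2.0)  # the subtree under a is more valuable, so b is deleted
```

After the contraction, z has the former children of a, and s(z) has dropped to s(a), mirroring how iterating Φ x trades root-level branching for regularity while losing only a controlled amount of value.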
We next study the subtrees of separated trees. In what follows, we continue denoting by s| T ′ the restriction of s on T ′ for T ′ ⊆ T , and we use a subscript T ′ to refer to the subtree T ′ . Lemma 3.12. For every r-separated tree (T , s), there is a subtree T ′ ⊆ T such that (T ′ , s| T ′ ) is an r-separated tree satisfying the following conditions. 1. val r (T , s) ≍ val r (T ′ , s| T ′ ). For every v ∈ T ′ \ L T ′ , ∆ T ′ (v) = ∆(v). For every v ∈ T ′ \ L T ′ and w ∈ L T ′ ∩ T v , u∈P(v,w) r s(u) log ∆ T ′ (u) 1 2 r s(p(v)) log ∆ T ′ (p(v)).(46) Proof. We construct the subtree T ′ in the following way. We examine the vertices of v ∈ T in the breadth-first search order (that is, we order the vertices such that their distances to the root are non-decreasing). If v is not deleted yet and for some ℓ ∈ L ∩ T v , u∈P(v,ℓ) r s(u) log ∆ T (u) r s(p(v)) log ∆ T (p(v)) ,(47) we delete all the descendants of v. Let T ′ be the subtree obtained at the end of the process. It is clear that (T ′ , s| T ′ ) is a separated tree, and it remains to verify the required properties. By the construction of our subtree T ′ , we see that whenever a vertex is deleted, all its siblings are deleted. So for a node v ∈ T ′ \ L T ′ , all the children in T of v are preserved in T ′ , yielding property (2). Note that if v ∈ L T ′ \ L, there exists ℓ ∈ L ∩ T v such that (47) holds. Therefore, we see u∈P(z,v) r s(u) log ∆ T ′ (u) = u∈P(z,v)\{v} r s(u) log ∆ T (u) 1 2 u∈P(z,ℓ) r s(u) log ∆ T (u) 1 2 val r (T , s) . This verifies property (1) (noting that the reverse inequality is trivial). Take v ∈ T ′ \ L T ′ and w ∈ L T ′ ∩ T v . If w ∈ L, we see that (46) holds for v and w since (47) does not hold for v and ℓ = w (otherwise all the descendants of v have to be deleted and v will be a leaf node in T ′ ). If w ∈ L, there exists ℓ 0 ∈ L ∩ T w such that u∈P(w,ℓ 0 ) r s(u) log ∆ T (u) r s(p(w)) log ∆ T (p(w)) . Recall that (47) fails with ℓ = ℓ 0 . 
Altogether, we conclude that u∈P(v,w) r s(u) log ∆ T ′ (u) = u∈P(v,ℓ 0 ) r s(u) log ∆ T (u) − u∈P(w,ℓ 0 ) r s(u) log ∆ T (u) 1 2 u∈P(v,ℓ 0 ) r s(u) log ∆ T (u) 1 2 r s(p(v)) log ∆ T (p(v)) , establishing property (3) and completing the proof. Finally, we observe that separated trees are stable in the following sense. Lemma 3.13. Fix 0 < δ < 1. Suppose that (T , s) is an r-separated tree in X, and for every node v ∈ V , we delete all but ⌈δ · |Γ(v)|⌉ of its children. Denote by T ′ the induced tree on the connected component containing z(T ). Then (T ′ , s| T ′ ) is an r-separated tree and val r (T , s) ≍ δ val r (T ′ , s| T ′ ). Proof. It is clear that Properties (1), (2) and (3) of separated trees are preserved for the induced tree T ′ for s| T ′ . So (T ′ , s) is an r-separated tree. Furthermore, for every leaf ℓ of T ′ , v∈P(ℓ) r s(v) log ∆ T ′ (v) v∈P(ℓ) r s(v) log(1 + ⌈δ · |Γ(v)|⌉) c(δ) v∈P(ℓ) r s(v) log(1 + |Γ(v)|) c(δ)val r (T , s) , where c(δ) is a constant depending only on δ. It follows that val r (T ′ , s| T ′ ) c(δ)val r (T , s), completing the proof since the reverse direction is obvious. Computing an approximation to γ 2 deterministically We now present a deterministic algorithm for computing an approximation to γ 2 . Theorem 3.14. Let (X, d) be a finite metric space, with n = |X|. If, for any two points x, y ∈ X, one can compute d(x, y) in time polynomial in n, then one can compute a number A(X, d) in polynomial time, for which A(X, d) ≍ γ 2 (X, d). Proof. Fix r 16. First, let us assume that 1 d(x, y) r M for x = y ∈ X and some M ∈ N. Fix x 0 ∈ X. Our algorithm constructs functions ϕ 0 , ϕ 1 , . . . , ϕ M : X → R + . We will return the value A(X, d) = ϕ M (x 0 ). First put ϕ 1 (x) = ϕ 0 (x) = 0 for all x ∈ X. Next, we show how to construct ϕ j given ϕ 0 , ϕ 1 , . . . , ϕ j−1 . For x ∈ X and r 0, we use B(x, r) △ = {y ∈ X : d(x, y) r}. First, we construct a maximal 1 3 r j−1 net N j in X in the following way. Supposing that y 1 , . . . 
, y_k have already been chosen, let y_{k+1} be a point satisfying
ϕ_{j−2}(y_{k+1}) = max{ ϕ_{j−2}(y) : y ∈ X \ ∪_{i=1}^{k} B(y_i, (1/3) r^{j−1}) },
as long as some point of X \ ∪_{i=1}^{k} B(y_i, (1/3) r^{j−1}) remains. For x ∈ X, set g_j(x) = y_{min{k : d(x, y_k) ≤ (1/3) r^{j−1}}}. Now we define ϕ_j(x) for x ∈ X. Suppose that B(x, 2r^j) ∩ N_j = {y_{ℓ_1}, y_{ℓ_2}, . . . , y_{ℓ_h}}, with ℓ_1 < ℓ_2 < · · · < ℓ_h, and define:
I. ϕ_j(x) = ϕ_{j−1}(x) if B(g_j(x), 4r^j) \ B(g_j(x), (1/16) r^{j−2}) is empty.
II. Otherwise,
ϕ_j(x) = max{ max_{k≤h} [ r^j √(log k) + min_{i≤k} ϕ_{j−2}(y_{ℓ_i}) ], max{ ϕ_{j−1}(z) : z ∈ B(x, (1/3) r^{j−1}) } }. (48)
Now, we verify that {ϕ_j}_{j=0}^{M} satisfies the conditions of Theorem 3.3. The monotonicity condition (1) is satisfied by construction. We will now verify condition (2), starting with the following lemma.
Lemma 3.15. Suppose that s, t ∈ X satisfy d(s, t) ≤ r^j and that B(g_j(s), 4r^j) \ B(g_j(s), (1/16) r^{j−2}) is empty. Then ϕ_j(s) = ϕ_j(t).
Proof. We prove this by induction on j. Clearly it holds vacuously for j ≤ 2. Assume that it holds for ϕ_0, ϕ_1, . . . , ϕ_{j−1}, and j > 2. By the condition of the lemma and the fact that s ∈ B(g_j(s), (1/3) r^{j−1}), we have
d(s, g_j(s)) ≤ (1/16) r^{j−2}, (49)
which implies that B(s, 2r^j) \ B(s, (1/8) r^{j−2}) is also empty. Furthermore, we have g_j(s) = g_j(t), since otherwise d(g_j(s), g_j(t)) ≥ (1/3) r^{j−1}, and we would conclude that
2r^j ≥ d(g_j(t), s) ≥ d(g_j(s), g_j(t)) − d(s, g_j(s)) ≥ (1/3) r^{j−1} − (1/16) r^{j−2} ≥ (1/8) r^{j−1},
contradicting the fact that B(s, 2r^j) \ B(s, (1/8) r^{j−2}) is empty. It follows that
B(s, 2r^j) \ B(s, (1/8) r^{j−2}) = ∅ and B(t, 2r^j) \ B(t, (1/8) r^{j−2}) = ∅. (50)
Since g_j(s) = g_j(t), we conclude that both ϕ_j(s) and ϕ_j(t) are defined by case (I) above, hence
ϕ_j(s) = ϕ_{j−1}(s) and ϕ_j(t) = ϕ_{j−1}(t). (51)
So we are done by induction unless B(g_j(s), 4r^{j−1}) \ B(g_j(s), (1/16) r^{j−3}) is non-empty, in which case ϕ_{j−1}(s) and ϕ_{j−1}(t) are defined by case (II). But from (50) and d(s, t) ≤ r^j, we see that B(t, 2r^{j−1}) = B(s, 2r^{j−1}) and B(s, (1/3) r^{j−2}) = B(t, (1/3) r^{j−2}) as well.
This implies that ϕ_{j−1}(s) and ϕ_{j−1}(t) see the same maximization in (48), hence ϕ_{j−1}(s) = ϕ_{j−1}(t), and by (51) we are done. Now, let s, t_1, . . . , t_N ∈ X be as in condition (2), and let B(s, 2r^j) ∩ N_j = {y_{ℓ_1}, y_{ℓ_2}, . . . , y_{ℓ_h}} be such that ℓ_1 < ℓ_2 < · · · < ℓ_h. If B(g_j(s), 4r^j) \ B(g_j(s), (1/16) r^{j−2}) is empty, then N = 1, and Lemma 3.15 implies that ϕ_j(s) = ϕ_j(t_1) ≥ ϕ_{j−2}(t_1), where the latter inequality follows from monotonicity. Thus we may assume that ϕ_j(s) is defined by case (II). To every t_i, we can associate a distinct point g_j(t_i) ∈ B(s, 2r^j) ∩ N_j, and by construction we have ϕ_{j−2}(g_j(t_i)) ≥ ϕ_{j−2}(t_i), since ϕ_{j−2}(y_k) is decreasing as k increases. Using this property again in conjunction with the definition (48), we have
ϕ_j(s) ≥ r^j √(log N) + min{ϕ_{j−2}(y_{ℓ_i}) : i = 1, . . . , N} ≥ r^j √(log N) + min{ϕ_{j−2}(g_j(t_i)) : i = 1, . . . , N} ≥ r^j √(log N) + min{ϕ_{j−2}(t_i) : i = 1, . . . , N},
completing our verification of condition (2) of Theorem 3.3. Applying Theorem 3.3, we see that
γ_2(X, d) ≲ sup_{x∈X, i∈Z} ϕ_i(x) = ϕ_M(x_0) = A(X, d). (52)
To prove the matching lower bound, we first build a tree T whose vertex set is a subset of X × Z. The root of T is (x_0, M). In general, if (x, j) is already a vertex of T with j ≥ 1, then we add children to (x, j) according to the maximizer of (48). If ϕ_j(x) = ϕ_{j−1}(z) for some z ∈ B(x, (1/3) r^{j−1}), then we make (z, j−1) the only child of (x, j). Otherwise, we put the nodes (y_1, j−2), . . . , (y_h, j−2) as children of (x, j), where {y_i} ⊆ N_j are the nodes that achieve the maximum in (48). Let the pair (T′, s) be constructed from T in the following way. We replace every maximal path of the form (x, j_0), (x, j_0 − 1), . . . , (x, j_0 − k) by the vertex x and put s(x) = j_0 − k. It follows immediately by construction that
val_r(T′, s) ≥ ϕ_M(x_0) − r diam(X, d) ≳ ϕ_M(x_0), (53)
where the latter inequality follows from (52), since ϕ_M(x_0) ≳ γ_2(X, d) ≥ diam(X, d).
Note that the correction term of diam(X, d) in (53) is simply because of the use of Δ(v) = |Γ(v)| + 1 in the definition (43). We next build a (1, r, 8, 1/6)-tree F, which essentially captures the structure of the tree T. In general, the sets in F will be balls in X, with the node (x, j) ∈ T being associated with the set B(x, 4r^j) in F, which will have label n(B(x, 4r^j)) = j. We construct the (1, r, 8, 1/6)-tree F recursively. The root of F is B(x_0, 4r^M) (which is equal to X), and we define n(B(x_0, 4r^M)) = M. In general, if F contains the set B(x, 4r^j) corresponding to the node (x, j) ∈ T, and if (x, j) has children (y_1, j−2), (y_2, j−2), . . . , (y_h, j−2) ∈ T, we add the sets B(y_i, 4r^{j−2}) as children of B(x, 4r^j) in F, with n(B(y_i, 4r^{j−2})) = j − 2. Likewise, if (z, j−1) is the child of (x, j), then we add the set B(z, 4r^{j−1}) as the unique child of B(x, 4r^j) in F and put n(B(z, 4r^{j−1})) = j − 1. We continue in this manner until T is exhausted. We now verify that F is indeed a (1, r, 8, 1/6)-tree. First, note that if (z, j−1) is a child of (x, j) in T, then clearly B(z, 4r^{j−1}) ⊆ B(x, 4r^j), since this can only happen if d(x, z) ≤ (1/3) r^{j−1}. Also, if (y_1, j−2), . . . , (y_h, j−2) are the children of (x, j), then by the construction of the maps in (48), we have d(y_i, x) ≤ 2r^j, hence B(y_i, 4r^{j−2}) ⊆ B(x, 4r^j), recalling that r ≥ 16. Furthermore, for i ≠ k, since y_i, y_k ∈ N_j, we have d(y_i, y_k) ≥ (1/3) r^{j−1}, so B(y_i, 4r^{j−2}) ∩ B(y_k, 4r^{j−2}) = ∅, verifying that F is indeed a tree of subsets. In fact, we have the estimate
d(B(y_i, 4r^{j−2}), B(y_k, 4r^{j−2})) ≥ (1/3) r^{j−1} − 8r^{j−2} ≥ (1/6) r^{j−1} = (1/6) r^{n(B(x, 4r^j))−1},
using r ≥ 16. This verifies that property (2) of a (1, r, 8, 1/6)-tree is satisfied. Furthermore, property (1) of a (1, r, 8, 1/6)-tree follows immediately by construction.
Finally, to verify property (3), note that for any set in our tree of subsets F, corresponding to a node of the form (x, j) ∈ T , we have diam(B(x, 4r j )) 8r j and n(B(x, 4r j )) = j. By construction, we have val r (T ′ , s) size r (F) + r diam(X, d), and Lemma 3.7 yields γ 2 (X, d) size r (F) + diam(X, d) (using γ 2 (X, d) diam(X, d)). Combining this with (53) shows that γ 2 (X, d) val r (T ′ , s) ϕ M (x 0 ) = A(X, d). Together with (52), this shows that γ 2 (X, d) ≍ A(X, d). The only thing left is to remove the dependence of our running time on M . But since there are at most n 2 distinct distances in (X, d), only O(n 2 ) of the maps ϕ 0 , ϕ 1 , . . . , ϕ M are distinct. More precisely, suppose that there is no pair u, v ∈ X satisfying d(u, v) ∈ [r j−3 , r j+1 ] for some j ∈ Z. In that case, ϕ j (x) is defined by case (I) for all x ∈ X, and thus ϕ j ≡ ϕ j−1 . Obviously, we may skip computation of the intermediate non-distinct maps (and it is easy to see which maps to skip by precomputing the values of j such that there are u, v ∈ X with d(u, v) ∈ [r j−3 , r j+1 ].) Since there are only O(n 2 ) non-trivial values of j, this completes the proof. Tree-like properties of the Gaussian free field Finally, we consider how the resistance metric (and hence the Gaussian free field) allows us to obtain trees with special properties. Consider a network G(V ), and the associated metric space (V, √ R eff ). Let (T , s) be an r-separated tree in G. We say that (T , s) is strongly r-separated if, for every non-root node v ∈ T , we have the inequality R eff (v, T \ T v ) 1 20 r s(p(v))−1 ,(54) where p(v) denotes the parent of v in T . Lemma 3.16. For any network G(V ) and any r 96, let (T 0 , s) be an arbitrary r-separated tree on the space (V, √ R eff ). Then there is an induced strongly r-separated tree (T , s) such that |Γ T (v)| |Γ T 0 (v)|/2 for all v ∈ T \ L T . Furthermore val r (T , s) ≍ val r (T 0 , s).(55) Proof. Consider any non-leaf node v ∈ T 0 with children c 1 , . 
. . , c k , where k 1. If k = 1, let S v = {c 1 }. Otherwise, we wish to apply Proposition 2.10 to the sets {T c i } k i=1 . By property (2) of separated trees, we get that for all x ∈ T c i , y ∈ T c j with i = j R eff (x, y) 1 2 r s(v)−1 2 = 1 4 r 2(s(v)−1) . Combined with property (3) of separated trees, Proposition 2.10 yields that there exists a subset S v ⊆ {c 1 , . . . , c k } with |S v | k/2 such that for c ∈ S v , we have R eff (T c , T v \ (T c ∪ {v})) 1 4 r 2(s(v)−1) · 1 24 1 96 r 2(s(v)−1) . Applying Lemma 2.13 with A = T c , B 1 = T v \ (T c ∪ {v}) and B 2 = {v}, we get that R eff (T c , T v \ T c ) 1 100 r 2(s(v)−1) .(56) Next, consider the induced r-separated tree (T , s) that arises from deleting, for every non-leaf node v ∈ T 0 , all the children not in S v as well as all their descendants. It is clear that for all v ∈ T \ L T , we have |Γ T (v)| |Γ T 0 (v)|/2. Lemma 3.13 then yields that val r (T , s) ≍ val r (T 0 , s). It remains to verify that (T , s) is strongly r-separated. Define D 0 = 1 and for h 1, D h = D h−1 1 − D 2 h−1 r −4h . It is straightforward to verify that D h 1/2 for all h 0, since r 2. We now prove, by induction on the height of T , that for every node u at depth h 1 in T , R eff (u, T \ T u ) 1 10 r s(p(u))−1 D h−1 .(57) By the preceding remarks, this verifies (54), completing the proof of the lemma. Let z = z(T ) be the root, and let v be some child of z. Let u ∈ T v be a node at depth h in T v (and hence at depth h + 1 in T ). By (56), we have R eff (u, T \ T v ) R eff (T v , T \ T v ) 1 10 r s(p(v))−1 .(58) If u = v, then the preceding inequality yields (57). Otherwise, u = v, and h 1. 
By the induction hypothesis (57) applied to u and T v , we have R eff (u, T v \ T u ) 1 10 r s(p(u))−1 D h−1 .(59) Since u ∈ T v is a node at depth h, we get from property (1) of a separated tree that s(p(v)) s(p(u)) + 2h and therefore 1 10 r s(p(u))−1 D h−1 r −2h · 1 10 r s(p(v))−1 D h−1 .(60) Now, using (58) and (59), we apply Lemma 2.13 with A = {u}, B 1 = T v \ T u and B 2 = T \ T v , yielding R eff (u, T \ T u ) 1 10 r s(p(u))−1 D h−1 · 1 10 r s(p(v))−1 ( 1 10 r s(p(u))−1 D h−1 ) 2 + ( 1 10 r s(p(v))−1 ) 2 1 10 r s(p(u))−1 D h−1 1 1 + (D h−1 r −2h ) 2 1 10 r s(p(u))−1 D h−1 (1 − D 2 h−1 r −4h ), where the second transition follows from (60) and the third transition follows from the fact that (1 + x 2 ) −1/2 1 − x 2 . This completes the proof. Good trees inside the GFF. Consider a Gaussian free field {η x } x∈V corresponding to network G(V ) with the associated metric space (V, d), where d(x, y) = (E(η x − η y ) 2 ) 1/2 . Proposition 3.17. For some r 0 2 and any r r 0 and C 1, there exists a constant K = K(C, r) depending only on C and r such that the following holds. For an arbitrary Gaussian free field {η x } x∈V with γ 2 (V, d) K diam(V ), there exists an r-separated tree (T , s) with set of leaves L, such that the following properties hold. (a) val r (T , s) ≍ r,C γ 2 (X, d). (b) For every v ∈ V , dist L 2 (η v , aff({η u } u / ∈Tv )) 1 20 r s(p(v))−1 . (c) For every v ∈ V , ∆(v) exp C 2 r 2 4 s(z)−s(v) for all v ∈ T \ L. (d) For every v ∈ T \ L and w ∈ L ∩ T v , u∈P(v,w) r s(u) log ∆(u) 1 2 r s(p(v)) log ∆(p(v)). We call such a tree T a C-good r-separated tree. Proof. By definition of the GFF, we have d = √ R eff for some network G(V ). Applying Theorem 3.9, there exists an r-separated tree (T 0 , s 0 ) such that val r (T 0 , s 0 ) ≍ r γ 2 (V, d). Recalling property (3) of Definition 3.8 and the assumption that γ 2 (V, d) K diam(V ), we can then select K large enough such that the condition of Lemma 3.11 is satisfied for the separated tree (T 0 , s 0 ). 
Then applying Lemma 3.11, we can get a 2C-regular separated tree (T 1 , s 1 ) with val r (T 1 , s 1 ) ≍ r,C val r (T 0 , s 0 ). At this point, using Lemma 3.16, we obtain a C-regular strongly r-separated tree (T 2 , s 2 ) such that val r (T 2 , s 2 ) ≍ r γ 2 (V, d). That is to say, the tree (T 2 , s 2 ) satisfies properties (a) and (c). Furthermore, by Lemma 2.15, we see that property (b) holds for (T 2 , s 2 ) because it is equivalent to the strongly r-separated property (54). Finally, Lemma 3.12 implies that there exists a subtree T ⊆ T 2 with val r (T , s 2 | T ) ≍ r,C val r (T 2 , s 2 ) such that property (d) holds for T and properties (a) and (c) are preserved (note that by property (2) of Lemma 3.12, the degrees of non-leaf nodes are preserved). Observe that property (b) is preserved by taking subtrees. Writing s = s 2 | T , we conclude that the separated tree (T , s) satisfies all the required properties, completing the proof. The cover time We now turn to our main theorem. Theorem 4.1. For any network G(V ) with total conductance C = x∈V c x , we have t cov (G) ≍ C γ 2 (V, R eff ) 2 . Combined with Theorem 2.3, this also yields a positive answer to the strong conjecture of Winkler and Zuckerman [54]. t cov (G) ≍ C γ 2 (V, R eff ) 2 ≍ δ t bl (G, δ). For the remainder of this section, we denote S = γ 2 (V, R eff ).(61) It is clear that for all 0 < δ < 1, we have t cov (G) t bl (G, δ), and t bl (G, δ) δ CS 2 by Theorem 2.3. Thus, in order to prove the preceding corollary and Theorem 4.1, we need only show that t cov (G) CS 2 .(62) Let {W t } be the continuous-time random walk on G(V ), and let {L v t } v∈V be the local times, as defined in Section 2. Applying the isomorphism theorem (Theorem 1.14) with some fixed v 0 ∈ V , we have L x τ (t) + 1 2 η 2 x : x ∈ V law = 1 2 (η x + √ 2t) 2 : x ∈ V ,(63) for some associated Gaussian process {η x } x∈V . 
By Lemma 2.14, this process is a Gaussian free field, and we have for every x, y ∈ V , d(x, y) △ = E |η x − η y | 2 = R eff (x, y).(64) Let D = max x,y∈V d(x, y) be the diameter of the Gaussian process. Proof outline. Let {L > 0} be the event {L x τ (t) > 0 : x ∈ V }. Consider a set S ⊆ R V , and let S L and S R be the events corresponding to the left and right-hand sides of (63) falling into S. Our goal is to find such a set S so that for some t ≍ S 2 , we have P(S R ) − P(S L ∩ {L > 0}) c,(65) for some universal constant c > 0. In this case, with probability at least c, the set of uncovered vertices {v : L v τ (t) = 0} is non-empty. Using the fact that the inverse local time τ (t) is Ct with probability at least 1 − c/2, we will conclude that t cov (G) CS 2 . Thus we are left to give a lower bound on P(S R ) and an upper bound on P(S L ∩ {L > 0}). Since the structure of the local times process {L x t } conditioned on {L > 0} can be quite unwieldy, we will only use first moment bounds for the latter task. Calculating a lower bound on P(S R ) will require a significantly more delicate application of the second-moment method, but here we will be able to exploit the full power of Gaussian processes and the majorizing measures theory. Before defining the set S ⊆ R V , we describe it in broad terms. By (64) and Theorem (MM), we know that for some t 0 ≍ S 2 , we should have E inf x∈V η x = −E sup x∈V η x close to − √ 2t 0 . By Lemma 2.2, we know that the standard deviation of inf x∈V η x is O(D). Thus we can expect that with probability bounded away from 0, for the right choice of t 0 ≍ S 2 , some value on the right-hand side of (63) is O(D) for t = t 0 . Now, when E sup x∈V η x ≫ D, it is intuitively true that for t = εt 0 and ε > 0 small, there should be many points x ∈ V with η x ≈ − √ 2t. 
If these points have some level of independence, then we should expect that with probability bounded away from 0, there is some x ∈ V with |η_x + √(2t)| very small (much smaller than O(D)). Our set S will represent the existence of such a point. On the other hand, we will argue that if all the local times {L^x_{τ(t)}} are positive, then the probability for the left-hand side to have such a low value is small.
A tree-like sub-process
First, observe that by the commute time identity, t_cov(G) ≳ C max_{x,y∈V} R_eff(x, y) = CD². Thus in proving Theorem 4.1, we may assume that
S ≥ KD, (66)
for any universal constant K ≥ 1. In particular, by an application of Proposition 3.17, we can assume the existence of an r-separated tree (T, s) in (V, d), for some fixed r ≥ 128, with root z = v_0, and such that for some constant C ≥ 1 and θ = θ(C), properties (67), (70), (71), and (72) below are satisfied. We will choose C sufficiently large later, independent of any other parameters. For each u ∈ T, let h_u denote the height of u, where we order the tree so that h_z = 0, where z is the root. Recalling that L is the set of leaves of T, for each v ∈ L, let P(v) = {f_v(0), f_v(1), . . . , f_v(h_v)} be the set of nodes on the path from z = f_v(0) to v = f_v(h_v), where f_v(i) is the parent of f_v(i+1), for 0 ≤ i < h_v. First, we can require that for every v ∈ L,
σ_v ≥ (1/θ) S, (67)
where
χ_v(k) := r^{s(f_v(k))} √(log Δ(f_v(k))), (68)
σ_v := Σ_{k=0}^{h_v−1} χ_v(k). (69)
Furthermore, we can require that the tree T satisfies, for every v ∈ L,
Σ_{i=j+1}^{h_v−1} χ_v(i) ≥ C · 2^j · r^{s(f_v(j))}, (70)
as well as
Δ(f_v(k)) ≥ exp(C² r² 4^k). (71)
Finally, we require that for every v ∈ T,
dist_{L²}(η_v, aff({η_u}_{u∉T_v})) ≥ (1/20) r^{s(p(v))−1}. (72)
All these requirements are justified by Proposition 3.17.
The distinguishing event. For u, v ∈ L, we let h_uv be the height of the least common ancestor of u and v. Writing deg↓(u) for the number of children of u in T, we define
m_v := Π_{k=0}^{h_v−1} deg↓(f_v(k)) and m_uv := Π_{k=0}^{h_uv−1} deg↓(f_u(k)). (73)
First, we fix
ε = 1/(2^{10} rθ). (74)
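Since the products m_v and the induced leaf measure drive all of the first- and second-moment computations that follow, a small computational sketch may help fix the combinatorics. This is our own illustration, not part of the proof: the toy tree and helper names are invented, and we assume m_v = Π_{k=0}^{h_v−1} deg↓(f_v(k)), the product of downward degrees along the root-leaf path, which is consistent with the equal-split flow defining ν(u) = m_u^{−1} in Section 4.3.

```python
# Toy sketch of the path-degree products m_v and the leaf measure nu(v) = 1/m_v.
# The tree below and the helper names are illustrative inventions, under the
# assumption m_v = product of downward degrees along the root-leaf path.

tree = {            # node -> list of children
    "z": ["a", "b"],
    "a": ["u1", "u2", "u3"],
    "b": ["v1", "v2"],
}

def leaf_products(tree, root):
    """Return {leaf: m_leaf}, m_leaf = product of deg_down over the path."""
    out = {}
    def walk(node, prod):
        kids = tree.get(node, [])
        if not kids:
            out[node] = prod        # reached a leaf
        for k in kids:
            walk(k, prod * len(kids))
    walk(root, 1)
    return out

m = leaf_products(tree, "z")
print(m)                                   # {'u1': 6, 'u2': 6, 'u3': 6, 'v1': 4, 'v2': 4}
print(round(sum(1 / mv for mv in m.values()), 9))   # 1.0
```

The final check that Σ_u 1/m_u = 1 is exactly the flow argument of Section 4.3: each node splits its unit of incoming flow equally among its children, so the amount reaching a leaf u is 1/m_u.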
For every v ∈ L, consider the events E v (ε) = |η v − εS| 50 r s(p(v)) m −3/4 v .(75) Instead of arguing directly about the events E v (ε), we will couple them to leaf events of a "percolation" process on T . In particular, in Section 4.2, we will prove the following lemma. Lemma 4.3. For all v ∈ L, there exist events E v such that the following properties hold. 1. E v ⊆ E v (ε) = |η v − εS| 50 r s(p(v)) m −3/4 v . 2. P(E v ) 1 2 m −7/8 v . 3. P(E u ∩ E v ) m 1/8 uv (m u m v ) −7/8 . In Section 4.3, we will prove that for any events {E v } v∈L satisfying properties (2) and (3) of Lemma 4.3, we have P u∈L E u 1 8 .(76) Thus for t = 1 2 ε 2 S 2 , we have P ∃v ∈ V : 1 2 (η v + √ 2t) 2 50 2 r 2s(p(v)) m −3/2 v 1 8 .(77) In light of the discussion surrounding (65), the reader should think of S = s ∈ R V : s v 50 2 r 2s(p(v)) m −3/2 v for some v ∈ V , and then (77) gives the desired lower bound on P(S R ). We now turn to an upper bound on P(S L ∩ {L > 0}). The next lemma is proved in Section 4.4. Lemma 4.4. For t 1 2 ε 2 S 2 , P v∈L 0 < L v τ (t) 50 2 · r 2s(p(v)) m −3/2 v 1 16 .(78) From (78) and (77), we conclude that with probability at least 1/16, we must have L v τ (t) = 0 for some v ∈ V and t = 1 2 ε 2 S 2 , else (63) is violated. This implies that P v 0 τ cov τ ( 1 2 ε 2 S 2 ) 1 16 .(79) To finish our proof of (62) and complete the proof of Theorem 4.1, we will apply Lemma 2.7 with β = 1 96 . In particular, we may choose K = 96/ε in (66), and then applying Lemma 2.7 yields P τ ( 1 2 ε 2 S 2 ) C ε 2 S 2 192 1 32 . Combining this with (79) yields P v 0 τ cov C ε 2 S 2 192 1 16 . In particular, τ cov Cε 2 S 2 . This completes the proof of (62), and hence of Theorem 4.1. The coupling The present section is devoted to the proof of Lemma 4.3. Toward this end, we will try to find a leaf v ∈ L for which η v ≈ εS. 
As in Lemma 4.3(1), the level of closeness we desire is gauged according to a proper scale, r^{s(p(v))}, as well as to the number of other leaves we expect to see at this scale, which is represented roughly by m_v^{−3/4} (the value 3/4 is not essential here, and any other value in (1/2, 1) would suffice). Our goal is to find such a leaf by starting at the root of the tree, and arguing that some of its children should be somewhat close to the target εS. This closeness is achieved using the fact that, by the definition of an r-separated tree, the children are separated in the Gaussian distance, and thus exhibit some level of independence. We will continue in this manner inductively, arguing that the children which are somewhat close to the target have their own children which we can expect to be even closer, and so on. We aim to shrink these windows around the target more and more, so that they are small enough once we reach the leaves. There are a number of difficulties involved in executing this scheme. In particular, conditioning on the exact values of the children of the root could determine the entire process, making future levels moot. Thus we must first select a careful filtering which allows us to reserve some randomness for later levels. This is done in Section 4.2.1. Furthermore, the intermediate targets have to be arranged according to the variances along the root-leaf paths in our tree. This corresponds to the fact that, although we have a uniform lower bound on each σ_v (from (67)), the summation defining the σ_v's could put different weights on the various levels (recall (69)). The targets also have to take into account random "noise" from the filter described above, and thus the targets themselves must be random. This "window analysis" is performed in Section 4.2.2.
Restructuring the randomness
We know that η_z = 0, since z = v_0 is the root of T (and the starting point of the associated random walk).
Fix a depth-first ordering of T (one starts at the root and explores as far as possible along each branch before backtracking). Write u ≺ v if u is explored before v, and u ⪯ v if u ≺ v or u = v. For u ≠ z, we write u⁻ for the vertex preceding u in the DFS order. Let F = span({η_x : x ∈ T}). For a node v ∈ T, let F_v = span({η_u}_{u⪯v}) and F⁻_v = span({η_u}_{u≺v}). We next associate a centered Gaussian process {ξ_x : x ∈ T} to {η_x : x ∈ T} in the following inductive way. Define ξ_z = 0. Now, assuming we have defined ξ_u for u ≺ v, we define ξ_v by writing η_v = ζ_v + ξ_v, where ζ_v ∈ F_{v⁻} and ξ_v ⊥ F_{v⁻}. Observe that, by construction, {ξ_u}_{u⪯v} forms an orthogonal basis in L² for F_v. Applying (72), we have for all u ∈ T,
‖ξ_u‖₂ = dist_{L²}(η_u, span({η_w}_{w≺u})) ≥ dist_{L²}(η_u, span({η_w}_{w∉T_u})) ≥ (1/20) r^{s(p(u))−1}, (80)
where we used the fact that the span and the affine hull are the same since ξ_z = 0. For v ∈ L, define the subspaces
F_{v,k} = span({ξ_u : f_v(k) ≺ u ⪯ f_v(k+1)}),
F⁻_{v,k} = span({ξ_u : f_v(k) ≺ u ≺ f_v(k+1)}).
For 0 ≤ k ≤ h_v − 1, define inductively η̃_{v,0} = 0, and
η̃_{v,k+1} = η̃_{v,k} + proj_{F_{v,k}}(η_v). (81)
Note that the subspaces {F_{v,k}}_{k=0}^{h_v} are mutually orthogonal, and together they span F_v. Thus,
η̃_{v,h_v} = η_v. (82)
Furthermore, by the definition of the subspace F_{v,k}, we can decompose
η̃_{v,k+1} − η̃_{v,k} = ζ̃_{v,k} + ξ̃_{v,k}, (83)
where ζ̃_{v,k} ∈ F⁻_{v,k}, and ξ̃_{v,k} ⊥ F⁻_{v,k}. The next lemma states that ξ̃_{v,k} has variance at least comparable to that of ζ̃_{v,k}.
Lemma 4.5. For every v ∈ L and 0 ≤ k ≤ h_v − 1,
‖ζ̃_{v,k}‖₂ ≤ 8 r^{s(f_v(k))}, (84)
and,
(1/64) r^{s(f_v(k))−1} ≤ ‖ξ̃_{v,k}‖₂ ≤ 8 r^{s(f_v(k))}. (85)
Proof. Writing the telescoping sum, η_v = Σ_{j=0}^{h_v−1} (η_{f_v(j+1)} − η_{f_v(j)}), we see that
‖proj_{F_{v,k}}(η_v)‖₂ ≤ Σ_{j=k}^{h_v−1} ‖η_{f_v(j+1)} − η_{f_v(j)}‖₂ ≤ Σ_{j=k}^{h_v−1} 4r^{s(f_v(j))} ≤ 8 r^{s(f_v(k))}, (86)
where we used properties (1) and (3) of the separated tree, and have assumed r ≥ 2. Thus by orthogonality and (83), we have
‖ζ̃_{v,k}‖₂ ≤ ‖η̃_{v,k+1} − η̃_{v,k}‖₂ = ‖proj_{F_{v,k}}(η_v)‖₂ ≤ 8 r^{s(f_v(k))},
and precisely the same conclusion holds for ξ̃_{v,k}.
Next, we establish a lower bound on ‖ξ̃_{v,k}‖₂. From (81) and (83),
ξ̃_{v,k} = proj_{F_{v,k}}(η_v) − proj_{F⁻_{v,k}}(η_v) (87)
= Σ_{j=k}^{h_v−1} [ proj_{F_{v,k}}(η_{f_v(j+1)} − η_{f_v(j)}) − proj_{F⁻_{v,k}}(η_{f_v(j+1)} − η_{f_v(j)}) ]
= [ proj_{F_{v,k}}(η_{f_v(k+1)} − η_{f_v(k)}) − proj_{F⁻_{v,k}}(η_{f_v(k+1)} − η_{f_v(k)}) ] + Σ_{j=k+1}^{h_v−1} [ proj_{F_{v,k}}(η_{f_v(j+1)} − η_{f_v(j)}) − proj_{F⁻_{v,k}}(η_{f_v(j+1)} − η_{f_v(j)}) ].
Observe that the term in brackets is precisely proj_{F_{v,k}}(η_{f_v(k+1)}) − proj_{F⁻_{v,k}}(η_{f_v(k+1)}) = ξ_{f_v(k+1)}, since η_{f_v(k)} ⊥ F_{v,k}. In particular, we arrive at
‖ξ̃_{v,k}‖₂ ≥ ‖ξ_{f_v(k+1)}‖₂ − Σ_{j=k+1}^{h_v−1} ‖η_{f_v(j+1)} − η_{f_v(j)}‖₂ ≥ (1/32) r^{s(f_v(k))−1} − 2 r^{s(f_v(k+1))} ≥ (1/32) r^{s(f_v(k))−1} − 2 r^{s(f_v(k))−2} ≥ (1/64) r^{s(f_v(k))−1},
where the last two steps use property (1) of the separated tree and r ≥ 128. This proves the asserted lower bound, completing the proof.
Defining the events E_v
Recall that our goal now is to find many leaves v ∈ L with η_v ≈ εS. Now, writing
η_v = Σ_{k=0}^{h_v−1} proj_{F_{v,k}}(η_v) = Σ_{k=0}^{h_v−1} (ζ̃_{v,k} + ξ̃_{v,k}),
our "ideal" goal would be to hit a window around the target by getting the kth term of this sum close to
a_v(k) := εS χ_v(k)/σ_v,
for k = 0, 1, . . . , h_v − 1. We will use the variance of the ξ̃_{v,k} variables (recall Lemma 4.5) to lower bound the probability that some points get closer to the desired target. On the other hand, we will treat the ζ̃_{v,k} variables as noise which has to be bounded in absolute value. This noise cannot always be countered in a single level, but it can be countered on average along the path to the leaf; this is the content of (70). We will amortize this cost over future targets as follows. Let b_v(0) = 0 and for k = 0, 1, . . . , h_v − 2, define
ρ_v(k) = ζ̃_{v,k} + ξ̃_{v,k} − a_v(k) + b_v(k),
b_v(k+1) = Σ_{i=0}^{k} [ χ_v(k+1) / Σ_{ℓ=i+1}^{h_v−1} χ_v(ℓ) ] ρ_v(i).
Clearly ρ_v(0) = ζ̃_{v,0} + ξ̃_{v,0} − a_v(0) represents how much we miss our first target. A similar fact holds for the final target, as the next lemma argues; in between, the errors are spread out proportionally to the contribution to val_r(T, s) from each of the remaining levels (represented by the χ_v(k) values). Here b_v(k) represents the error that is meant to be absorbed at the kth level. Lemma 4.6.
For every v ∈ L, ρ_v(h_v − 1) = η_v − εS.
Proof. We have,
Σ_{k=0}^{h_v−2} b_v(k+1) = Σ_{k=0}^{h_v−2} Σ_{i=0}^{k} [ χ_v(k+1) / Σ_{ℓ=i+1}^{h_v−1} χ_v(ℓ) ] ρ_v(i) = Σ_{i=0}^{h_v−2} ρ_v(i) [ Σ_{k=i}^{h_v−2} χ_v(k+1) / Σ_{ℓ=i+1}^{h_v−1} χ_v(ℓ) ] = Σ_{i=0}^{h_v−2} ρ_v(i). (88)
Also note that
Σ_{k=0}^{h_v−1} ρ_v(k) = Σ_{k=0}^{h_v−1} (ζ̃_{v,k} + ξ̃_{v,k} − a_v(k) + b_v(k)) = η_v − εS + Σ_{k=0}^{h_v−1} b_v(k).
Combined with b_v(0) = 0 and (88), it follows that ρ_v(h_v − 1) = η_v − εS, completing the proof.
We now define the events
A_v(k) = {|ζ̃_{v,k}| ≤ εθχ_v(k)},
B_v(k) = {|ρ_v(k)| ≤ w_v(k)},
where, for 0 ≤ k ≤ h_v − 2, w_v(k) is selected so that
P(B_v(k) | ζ̃_{v,k} + b_v(k)) = deg↓(f_v(k))^{−1/8}. (89)
We emphasize that the window w_v(k) is not deterministic. For k = h_v − 1, we select w_v(k) so that
P(B_v(k) | ζ̃_{v,k} + b_v(k)) = deg↓(f_v(k))^{−1/8} m_v^{−3/4}. (90)
Remark 2. Here, w_v(k) can be thought of as representing the window size around the random target. The value of w_v(k) is chosen to make the probabilities in (89) and (90) exact, allowing us to couple seamlessly to the percolation process in Section 4.3. The key fact, proved in Lemma 4.7, is that the window sizes actually satisfy a deterministic upper bound, assuming that all the "good" events on the path from the root to f_v(k) occurred. Thus one should think of the true window size as the bounds specified in (94) and (95), while the random value is for the purpose of the coupling.
For 0 ≤ k ≤ ℓ ≤ h_v − 1, define
A_v(k, ℓ) := ∩_{i=k}^{ℓ} A_v(i) and B_v(k, ℓ) := ∩_{i=k}^{ℓ} B_v(i). (91)
Since ξ̃_{v,k} ∈ σ(F_{v,k} \ F⁻_{v,k}) (see, e.g., (87)), we see that the event B_v(k) is conditionally independent of σ(F⁻_{f_v(k+1)}) given the value of ζ̃_{v,k} + b_v(k).
This implies that for all events E 0 ∈ σ(F − fv (k+1) ) such that E 0 ∩ A v (0, k) ∩ B v (0, k − 1) = ∅, P (B v (k) | A v (0, k), B v (0, k − 1), E 0 ) = deg ↓ (f v (k)) −1/8 , if 0 k < h v − 1, deg ↓ (f v (k)) −1/8 m −3/4 v , if k = h v − 1.(92) Finally, for v ∈ L, we define the event E v = A v (0, h v − 1) ∩ B v (0, h v − 1) .(93) Window analysis. We will now show that our final window w v (h v − 1) is small enough. Observe that our choice of w v (k) is not deterministic. Nevertheless, we will give an absolute upper bound. The bound is essentially the natural one: For any node u in the tree, and any child v of u, the standard deviation of η u − η v is O(r s(u) ). This follows from property (3) of the r-separated tree (recall Definition 3.8). Lemma 4.7. For every v ∈ L and k = 0, 1, . . . , h v − 2, if A v (0, k) and B v (0, k − 1) hold then, w v (k) 50 r s(fv(k)) .(94)Furthermore, if A v (0, h v − 1) and B v (0, h v − 2) hold, then w v (h v − 1) 50 r s(fv(hv −1)) m −3/4 v . (95) Proof. For k = 0, we have ρ v (0) =ζ v,0 +ξ v,0 − a v (0). By (67), we have a v (0) = εSχ v (0)/σ v θεχ v (0) = θεr s(fv(0)) log ∆(f v (0)).(96) Furthermore, from Lemma 4.5, we know that for all k 0, 1 64 r s(fv(k))−1 ξ v,k 2 8 r s(fv(k)) .(97) Now, consider a value w > 0 such that w a v (0) + εθχ v (0) 2θεr s(fv(0)) log ∆(f v (0)) .(98) Using (97) and recalling the Gaussian density, we have P |ρ v (0)| w | A v (0) P |ρ v (0)| w |ζ v,0 = −εθχ v (0) = P |ξ v,0 − a v (0) − εθχ v (0)| w 1 2 w √ 2π 8r s(fv(0)) exp − 1 2 (128εrθ) 2 log ∆(f v (0)) = w 16 √ 2πr s(fv(0)) ∆(f v (0)) − 1 2 (128εrθ) 2 .(99) Recalling the assumption (71), we have log ∆(f v (0)) Cr 16 √ 2π2 10 r, by choosing C large enough. In particular, εθχ v (0) (16 √ 2π2 10 εθr)r s(fv(0)) = 16 √ 2πr s(fv(0)) , recalling (74). Thus setting w = 16 √ 2πr s(fv(0)) satisfies (98), and applying (99) we have P |ρ v (0)| 16 √ 2πr s(fv(0)) | A v (0) ∆(f v (0)) − 1 where we have used 1 2 (128εrθ) 2 = 1 128 , and ∆(f v (0)) 16 from (71). 
Therefore w_v(0) ≤ 16√(2π) r^{s(f_v(0))} ≤ 50 r^{s(f_v(0))}, recalling the definition of w_v(0) from (89). Now suppose that (94) holds for all k ≤ ℓ < h_v − 2, and consider the case k = ℓ + 1. If the events {B_v(j) : 0 ≤ j ≤ ℓ} hold, then
|ρ_v(j)| ≤ w_v(j) ≤ 50 r^{s(f_v(j))},
where the first inequality is from the definition of B_v(j), and the second is from the induction hypothesis. Using (70), it follows that
|b_v(k)| ≤ Σ_{i=0}^{k−1} [ χ_v(k) / Σ_{ℓ=i+1}^{h_v−1} χ_v(ℓ) ] |ρ_v(i)| ≤ (2/C) χ_v(k). (100)
Recall that ρ_v(k) = ζ̃_{v,k} + ξ̃_{v,k} − a_v(k) + b_v(k). Similarly to the k = 0 case, we obtain that for 0 < w ≤ 2θε r^{s(f_v(k))} √(log Δ(f_v(k))), we have,
P(|ρ_v(k)| ≤ w | A_v(i), B_v(i) for all 0 ≤ i < k, A_v(k)) ≥ P(|ξ̃_{v,k} − a_v(k) − εθχ_v(k) − (2/C) χ_v(k)| ≤ w) ≥ (1/2) [ w / (√(2π) 8 r^{s(f_v(k))}) ] Δ(f_v(k))^{−(1/2)(128r)²(εθ + C^{−1})²}.
Now, by choosing C ≥ 1024r, and recalling (74), we see that (1/2)(128r)²(εθ + C^{−1})² ≤ 1/32. Since Δ(f_v(k)) ≥ 16 (again, by (71)), we conclude that
P(|ρ_v(k)| ≤ 16√(2π) r^{s(f_v(k))} | A_v(i), B_v(i) for all 0 ≤ i < k, A_v(k)) ≥ deg↓(f_v(k))^{−1/8}.
This implies w_v(k) ≤ 16√(2π) r^{s(f_v(k))} ≤ 50 r^{s(f_v(k))}, where we recall once again the definition of w_v(k) from (89). An almost identical argument yields that w_v(h_v − 1) ≤ 50 r^{s(f_v(h_v−1))} m_v^{−3/4}, completing the proof.
Lemma 4.8. For every v ∈ L, we have E_v ⊆ E_v(ε).
Proof. This follows directly from Lemma 4.6, the identity (82) and the definition of B_v(k).
The first moment. We now give lower bounds on the probability of the event E_v.
Lemma 4.9. For every v ∈ L, P(E_v) ≥ (1/2) m_v^{−7/8}.
Proof. We have,
P(E_v) = Π_{k=0}^{h_v−1} P(A_v(k) | A_v(0, k−1), B_v(0, k−1)) P(B_v(k) | A_v(0, k), B_v(0, k−1)) = m_v^{−3/4} Π_{k=0}^{h_v−1} deg↓(f_v(k))^{−1/8} Π_{k=0}^{h_v−1} P(A_v(k) | A_v(0, k−1), B_v(0, k−1)) = m_v^{−7/8} Π_{k=0}^{h_v−1} P(A_v(k)), (101)
where the second line follows from (92), and the third line from the fact that A_v(k) is independent of {A_v(i), B_v(i) : 0 ≤ i < k}. Using (84), we have
P(A_v(k)) ≥ 1 − (2/√(2π)) ∫_{εθχ_v(k)}^{∞} exp(−x²/(128 r^{2s(f_v(k))})) dx ≥ 1 − 2 Δ(f_v(k))^{−(1/128)ε²θ²} ≥ 1 − 2 exp(−(1/128) 2^{−20} C² 4^k),
where we have used (71), the definition of ε (74), and χ_v(k) = r^{s(f_v(k))} √(log Δ(f_v(k))). Clearly, by choosing C a large enough constant, we have Π_{k=0}^{h_v−1} P(A_v(k)) ≥ 1/2, completing the proof.
The second moment. Finally, we bound the probability of E_u ∩ E_v for u ≠ v.
Lemma 4.10. For every u ≠ v ∈ L, P(E_u ∩ E_v) ≤ m_uv^{1/8} (m_u m_v)^{−7/8}.
Proof. Assume, without loss of generality, that u ≺ v ∈ L. It is clear from (101) that P(E_u) ≤ m_u^{−7/8}. Also, we have
P(E_v | E_u) ≤ P(A_v(0, h_v − 1), B_v(0, h_v − 1) | E_u) ≤ Π_{k=h_uv}^{h_v−1} P(B_v(k) | E_u, A_v(0, k), B_v(0, k−1)).
Now recall that E_u ∈ σ(F⁻_{f_v(h_uv+1)}) ⊂ σ(F⁻_{f_v(k+1)}) for all k ≥ h_uv. By (92), we obtain,
Π_{k=h_uv}^{h_v−1} P(B_v(k) | E_u, A_v(0, k), B_v(0, k−1)) = m_v^{−3/4} Π_{k=h_uv}^{h_v−1} deg↓(f_v(k))^{−1/8} = m_uv^{1/8} m_v^{−7/8}.
Altogether, we conclude that P(E_u ∩ E_v) = P(E_u) P(E_v | E_u) ≤ m_uv^{1/8} (m_u m_v)^{−7/8}, as required. The main coupling lemma, Lemma 4.3, is an immediate corollary of Lemmas 4.8, 4.9 and 4.10.
Tree-like percolation
Lemma 4.11 below yields (76). Its proof is a variant of the well-known second moment method for percolation on trees (see [38]). First, we define a measure ν on L via ν(u) = m_u^{−1}. Observe that ν is a probability measure on L, i.e.
Σ_{u∈L} ν(u) = 1. (102)
To see this, construct a unit flow from the root to the leaves, where each non-leaf node splits its incoming flow equally among its children. Clearly the amount that reaches a leaf u is precisely ν(u).
Lemma 4.11. Suppose that to each v ∈ L, we associate an event E_v such that the following bounds hold: P(E_v) ≥ (1/2) m_v^{−7/8} for every v ∈ L, and P(E_u ∩ E_v) ≤ m_uv^{1/8} (m_u m_v)^{−7/8} for every u ≠ v ∈ L. Then P(∪_{v∈L} E_v) ≥ 1/8.
Proof. Consider the random variable Z = Σ_{u∈L} m_u^{−1/8} 1_{E_u}. By the first assumption,
EZ ≥ (1/2) Σ_{u∈L} m_u^{−1/8} m_u^{−7/8} = 1/2,
where the last equality follows from (102). By assumption (2), we have
EZ² = Σ_{u,v∈L} (m_u m_v)^{−1/8} P(E_u ∩ E_v) ≤ Σ_{u,v∈L} m_uv^{1/8} (m_u m_v)^{−1}.
In order to estimate the second moment, we first fix u and sum over v. To be more precise, let L_h(u) = {v ∈ L : h_uv = h}, where we recall that h_u is the height of a node u, and h_uv is the height of the least common ancestor of u and v.
We can then partition L = ∪ h≥0 L h (u) and obtain for every u ∈ L, deg ↓ (f u (i)) 1/8 ν(L h (u)) . Recalling the flow representation of the measure ν, we see that ν(L h (u)) = deg ↓ (f u (h)) − 1 deg ↓ (f u (h)) h−1 i=0 deg ↓ (f u (i)) . Therefore, v∈L m 1/8 uv m −1 v = hu ℓ=0 deg ↓ (f u (h)) − 1 deg ↓ (f u (h)) h−1 i=0 deg ↓ (f u (i)) −7/8 hu ℓ=0 h−1 i=0 deg ↓ (f u (i)) −7/8 2 , where the last transition follows from (71), for C chosen sufficiently large. Applying the second moment method, we deduce that P (Z > 0) ≥ (EZ) 2 /EZ 2 ≥ 1/8 , completing the proof. The local times We now prove Lemma 4.4, in order to complete the analysis of the left-hand side of (63). Proof. Note that the random walk is at vertex v 0 at time τ (t). Hence, given that L v τ (t) > 0, the random walk contains at least one excursion which starts at v and ends at v 0 . Therefore, given that L v τ (t) > 0, we see that c v L v τ (t) stochastically dominates the random variable L = ∫ Tv 0 0 1 {Xt=v} dt , where X t is a random walk on the network started at v and T v 0 is the hitting time to v 0 . By definition, every time the random walk hits v, it takes an exponential time for the walk to leave. Also, the probability that the random walk would hit v 0 before returning to v can be related to the effective resistance (see, for example, [39]). Formally, when the random walk W t is at vertex v, it will wait until the Poisson clock σ with rate 1 rings and then move to a neighbor (possibly v itself) selected proportional to the edge conductance. Define T + v = min{t > σ : X t = v} . Then we have the continuous-time version of (33), P v (T + v > T v 0 ) = 1/(c v R eff (v, v 0 )) . By the strong Markov property, L follows the law of the sum of a geometric number of i.i.d. exponential variables. Thus L follows the law of an exponential variable with EL = c v R eff (v, v 0 ). Recalling property (72) of our separated tree T , we see that R eff (v, v 0 ) = E(η v − η v 0 ) 2 2 −10 r 2s(fv(hv−1))−2 .
Thus, P(0 < L v τ (t) ≤ 50 2 · r 2s(fv(hv−1)) m −3/2 v ) ≤ P(L ≤ c v · 50 2 · r 2s(fv(hv−1)) m −3/2 v ) ≤ 50 2 · r 2s(fv(hv−1)) m −3/2 v / R eff (v, v 0 ) ≤ 2 11 · 50 2 · r 2 m −3/2 v ≤ 1 16 m −1 v , where the last transition uses (71), with C chosen large enough, and m v ≥ exp(C 2 r 2 ). Therefore, we conclude that P( ∪ v∈L Ẽ v ) ≤ 1 16 v∈L m −1 v = 1 16 , where we used, from (102), the fact that v∈L m −1 v = 1, completing the proof. Additional applications We now prove a generalization of Theorem 1.7. Suppose that V = {1, 2, . . . , n}, and let G(V ) be a network with conductances {c ij }. We define real, symmetric n × n matrices D and A by D ij = c i if i = j, and D ij = 0 otherwise, and A ij = c ij . We write L G = (D − A)/tr(D) ,(103) and L + G for the pseudoinverse of L G . where g = (g 1 , . . . , g n ) is a standard n-dimensional Gaussian. Proof. If κ denotes the commute time in G, then the following formula is well-known (see, e.g. [32]), κ(i, j) = ⟨e i − e j , L + G (e i − e j )⟩ , where {e 1 , . . . , e n } are the standard basis vectors in R n . Using the fact that L + G is self-adjoint and positive semi-definite, this yields κ(i, j) = (L + G ) 1/2 e i − (L + G ) 1/2 e j 2 . Let g = (g 1 , . . . , g n ) ∈ R n be a standard n-dimensional Gaussian, and consider the Gaussian processes {η i : i = 1, . . . , n} where η i = ⟨g, (L + G ) 1/2 e i ⟩. One verifies that for all i, j ∈ V , E |η i − η j | 2 = (L + G ) 1/2 (e i − e j ) 2 = κ(i, j), thus by Theorem (MM), γ 2 (V, √ κ) ≍ E max i∈V η i = E max i∈V ⟨g, (L + G ) 1/2 e i ⟩ = E max i∈V ⟨(L + G ) 1/2 g, e i ⟩ ≍ E (L + G ) 1/2 g ∞ .(104) By Theorem 1.9, [γ 2 (V, √ κ)] 2 ≍ t cov (G). Finally, one can use Lemma 2.2 to conclude that (E (L + G ) 1/2 g ∞ ) 2 ≍ E (L + G ) 1/2 g 2 ∞ , completing the proof. Proof. In [46, §4], it is shown how to compute a k × n matrix Z, in expected time O(m(log m) O(1) ), with k = O(log n), and such that for every i, j ∈ V , κ(i, j) ≤ Z(e i − e j ) 2 ≤ 2κ(i, j).(105) We can associate the Gaussian processes {η i } i∈V , where η i = ⟨g, Ze i ⟩, and g is a standard k-dimensional Gaussian.
Letting d(i, j) = √ (E |η i − η j | 2 ), we see from (105) that √ κ ≤ d ≤ √ (2κ), and therefore γ 2 (V, √ κ) ≍ γ 2 (V, d). It follows (see (104)) that E Zg 2 ∞ ≍ E (L + G ) 1/2 g 2 ∞ ≍ t cov (G), where the last equivalence is the content of Theorem 4.13. The output of our algorithm is thus A(G) = Zg 2 ∞ , where g is a standard k-dimensional Gaussian vector. The fact that E[A(G)] ≍ (E[A(G) 2 ]) 1/2 follows from Lemma 2.2.
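The estimator above can be illustrated directly. The following is a minimal sketch, not the near-linear-time algorithm: it uses a dense pseudoinverse in place of the Spielman–Teng solvers, and a plain Johnson–Lindenstrauss projection in place of the machinery of [46]; the example graph and the sketch dimension k are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Small illustrative graph with unit conductances (an assumption for this sketch).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (3, 4), (4, 5), (5, 0)]
n, m = 6, len(edges)

B = np.zeros((m, n))                 # signed edge-vertex incidence matrix
for t, (i, j) in enumerate(edges):
    B[t, i], B[t, j] = 1.0, -1.0
L = B.T @ B                          # graph Laplacian, since B^T B = D - A
Lp = np.linalg.pinv(L)               # dense stand-in for fast Laplacian solvers

# JL-style sketch: since ||B Lp (e_i - e_j)||^2 = R_eff(i, j) and
# kappa = 2|E| R_eff, the rows of Q give E ||Z(e_i - e_j)||^2 = kappa(i, j).
k = 400                              # O(log n) suffices in theory; larger here for accuracy
Q = rng.standard_normal((k, m)) / np.sqrt(k)
Z = np.sqrt(2 * m) * (Q @ B @ Lp)    # k x n matrix playing the role of Z in [46]

def kappa(i, j):
    e = np.zeros(n); e[i], e[j] = 1.0, -1.0
    return 2 * m * (e @ Lp @ e)      # commute time via effective resistance

# The randomized cover-time estimator A(G) = ||Z g||_inf^2, g a k-dim Gaussian.
g = rng.standard_normal(k)
A_G = np.max((g @ Z) ** 2)
```

With k this large the sketched distances should track κ(i, j) up to small relative error, so by the theorem A(G) estimates t cov (G) up to universal constants in expectation.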
24,068
1004.4371
2950912279
We exhibit a strong connection between cover times of graphs, Gaussian processes, and Talagrand's theory of majorizing measures. In particular, we show that the cover time of any graph @math is equivalent, up to universal constants, to the square of the expected maximum of the Gaussian free field on @math , scaled by the number of edges in @math . This allows us to resolve a number of open questions. We give a deterministic polynomial-time algorithm that computes the cover time to within an O(1) factor for any graph, answering a question of Aldous and Fill (1994). We also positively resolve the blanket time conjectures of Winkler and Zuckerman (1996), showing that for any graph, the blanket and cover times are within an O(1) factor. The best previous approximation factor for both these problems was @math for @math -vertex graphs, due to Kahn, Kim, Lovasz, and Vu (2000).
Furthermore, for a few families of specific examples, the asymptotics of the cover time have been calculated more precisely. These include the work of Aldous @cite_37 for regular trees, Dembo, Peres, Rosen, and Zeitouni @cite_22 for the 2-dimensional discrete torus, and Cooper and Frieze @cite_45 for the giant component of various random graphs.
{ "abstract": [ "Abstract For simple random walk on a finite tree, the cover time is the time taken to visit every vertex. For the balanced b-ary tree of height m, the cover time is shown to be asymptotic to 2m^2 b^{m+1} (log b)/(b − 1) as m → ∞. On the uniform random labeled tree on n vertices, we give a convincing heuristic argument that the mean time to cover and return to the root is asymptotic to 6(2π)^{1/2} n^{3/2}, and prove a weak O(n^{3/2}) upper bound. The argument rests upon a recursive formula for cover time of trees generated by a simple branching process.", "We study the cover time of a random walk on the largest component of the random graph G_{n,p}. We determine its value up to a factor 1 + o(1) whenever np = c > 1, c = O(ln n). In particular, we show that the cover time is not monotone for c = Θ(ln n). We also determine the cover time of the k-cores, k ≥ 2. © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2008", "Let T(x, ε) denote the first hitting time of the disc of radius ε centered at x for Brownian motion on the two-dimensional torus T^2. We prove that sup_{x ∈ T^2} T(x, ε)/|log ε|^2 → 2/π as ε → 0. The same applies to Brownian motion on any smooth, compact connected, two-dimensional, Riemannian manifold with unit area and no boundary. As a consequence, we prove a conjecture, due to Aldous (1989), that the number of steps it takes a simple random walk to cover all points of the lattice torus is asymptotic to 4n^2 (log n)^2/π. Determining these asymptotics is an essential step toward analyzing the fractal structure of the set of uncovered sites before coverage is complete; so far, this structure was only studied non-rigorously in the physics literature. We also establish a conjecture, due to Kesten and Révész, that describes the asymptotics for the number of steps needed by simple random walk in Z^2 to cover the disc of radius n." ], "cite_N": [ "@cite_37", "@cite_45", "@cite_22" ], "mid": [ "1971599265", "1984070713", "2120076230" ] }
Cover times, blanket times, and majorizing measures
Let G = (V, E) be a finite, connected graph, and consider the simple random walk on G. Writing τ cov for the first time at which every vertex of G has been visited, let E v τ cov denote the expectation of this quantity when the random walk is started at some vertex v ∈ V . The following fundamental parameter is known as the cover time of G, t cov (G) = max v∈V E v τ cov .(1) We refer to the books [2,36] and the survey [37] for relevant background material. We also recall the discrete Gaussian free field (GFF) on the graph G. This is a centered Gaussian process {η v } v∈V with η v 0 = 0 for some fixed v 0 ∈ V . The process is characterized by the relation E (η u − η v ) 2 = R eff (u, v) for all u, v ∈ V , where R eff denotes the effective resistance on G. Equivalently, the covariances E(η u η v ) are given by the Green kernel of the random walk killed at v 0 . (We refer to Sections 1.2 and 1.3 for background on electrical networks and Gaussian processes.) The next theorem represents one of the primary connections put forward in this work. We use the notation ≍ to denote equivalence up to a universal constant factor. where {η v } v∈V is the Gaussian free field on G. The utility of such a characterization will become clear soon. Despite being an intensively studied parameter of graphs, a number of basic questions involving the cover time have remained open. We now highlight two of these, whose resolution we discuss subsequently. The blanket time. For a node v ∈ V , let π(v) = deg (v) 2|E| denote the stationary measure of the random walk, and let N v (t) be a random variable denoting the number of times the random walk has visited v up to time t. Now define τ • bl (δ) to be the first time t 1 at which N v (t) δt π(v)(2) holds for all v ∈ V . In other words, τ • bl (δ) is the first time at which all nodes have been visited at least a δ fraction as much as we expect at stationarity. 
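For intuition, the cover time in definition (1) can be estimated by direct simulation. Below is a minimal sketch for the n-cycle; the choice of graph, number of trials, and the comparison with the classical exact value n(n − 1)/2 for the cycle are illustrative assumptions, not part of this paper.

```python
import random

def cover_time_sample(n, rng):
    """One sample of tau_cov for simple random walk on the n-cycle, started at 0."""
    pos, visited, steps = 0, {0}, 0
    while len(visited) < n:
        pos = (pos + rng.choice((-1, 1))) % n    # step to a uniform neighbor
        visited.add(pos)
        steps += 1
    return steps

rng = random.Random(0)
n, trials = 12, 4000
estimate = sum(cover_time_sample(n, rng) for _ in range(trials)) / trials
exact = n * (n - 1) / 2    # classical cover time of the n-cycle, from any start
```

For n = 12 the exact value is 66, and the empirical mean over a few thousand trials should land within a few percent of it.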
Using the same notation as in (1), define the δ-blanket time as t • bl (G, δ) = max v∈V E v τ • bl (δ) .(3) Clearly for δ ∈ (0, 1), we have t • bl (G, δ) ≥ t cov (G). Winkler and Zuckerman [54] made the following conjecture. Conjecture 1.1. For every 0 < δ < 1, there exists a C such that for every graph G, one has t • bl (G, δ) ≤ C · t cov (G). In other words, for every fixed δ ∈ (0, 1), one has t cov (G) ≍ t • bl (G, δ). Kahn, Kim, Lovász, and Vu [30] showed that for every fixed δ ∈ (0, 1), one can take C ≍ (log log n) 2 for n-node graphs, but whether there is a universal constant, independent of n, remained open for every value of δ > 0. In order to bound t • bl (G, δ), we introduce the following stronger notion. Let τ bl (δ) be the first time t ≥ 1 such that for every u, v ∈ V , we have (N u (t)/π(u)) / (N v (t)/π(v)) ≥ δ, i.e. the first time at which all the values {N u (t)/π(u)} u∈V are within a factor of δ. As in [30], we define the strong δ-blanket time as t bl (G, δ) = max v∈V E v τ bl (δ). Clearly one has t • bl (G, δ) ≤ t bl (G, δ) for every δ ∈ (0, 1). The second question we highlight is computational in nature: is there a quantity A(G) which can be computed deterministically, in time polynomial in |V |, such that A(G) ≍ t cov (G)? It is crucial that one asks for a deterministic procedure, since a randomized algorithm can simply simulate the chain, and output the empirical mean of the observed times at which the graph is first covered. This is guaranteed to produce an accurate estimate with high probability in polynomial time, since the mean and standard deviation of τ cov are O(|V | 3 ) [6]. A result of Matthews [43] can be used to produce a deterministically computable bound which is within a log |V | factor of t cov (G). Subsequently, [30] showed how one could compute a bound which lies within an O((log log |V |) 2 ) factor of the cover time.
Before we state our main theorem and resolve the preceding questions, we briefly review the γ 2 functional from Talagrand's theory of majorizing measures [48,50]. Majorizing measures and Gaussian processes. Consider a compact metric space (X, d). Let M 0 = 1 and M k = 2 2 k for k 1. For a partition P of X and an element x ∈ X, we will write P (x) for the unique S ∈ P containing x. An admissible sequence {A k } k 0 of partitions of X is such that A k+1 is a refinement of A k for k 0, and |A k | M k for all n 0. Talagrand defines the functional γ 2 (X, d) = inf sup x∈X k 0 2 k/2 diam(A k (x)),(4) where the infimum is over all admissible sequences {A k }. Consider now a Gaussian process {η i } i∈I over some index set I. This is a stochastic process such that every finite linear combination of random variables is normally distributed. For the purposes of the present paper, one may assume that I is finite. We will assume that all Gaussian processes are centered, i.e. E(η i ) = 0 for all i ∈ I. The index set I carries a natural metric which assigns, for i, j ∈ I, d(i, j) = E |η i − η j | 2 .(5) The following result constitutes a primary consequence of the majorizing measures theory. Theorem (MM) (Majorizing measures theorem [48]). For any centered Gaussian process {η i } i∈I , γ 2 (I, d) ≍ E sup {η i : i ∈ I} . We remark that the upper bound of the preceding theorem, i.e. E sup {η i : i ∈ I} Cγ 2 (I, d) for some constant C, goes back to work of Fernique [24,25]. Fernique formulated this result in the language of measures (from whence the name "majorizing measures" arises), while the formulation of γ 2 given in (4) is due to Talagrand. The fact that the two notions are related is non-trivial; we refer to [50, §2] for a thorough discussion of the connection between them. Commute times, hitting times, and cover times. In order to relate the majorizing measure theory to cover times of graphs, we recall the following natural metric. 
For any two nodes u, v ∈ V , use H(u, v) to denote the expected hitting time from u to v, i.e. the expected time for a random walk started at u to hit v. The expected commute time between two nodes u, v ∈ V is then defined by κ(u, v) = H(u, v) + H(v, u). It is immediate that κ(u, v) is a metric on any finite, connected graph. A well-known fact [11] is that κ(u, v) = 2|E| R eff (u, v), where R eff (u, v) is the effective resistance between u and v, when G is considered as an electrical network with unit conductances on the edges. We now restate our main result in terms of majorizing measures. For a metric d, we write √ d for the distance √ d(u, v) = d(u, v). Theorem 1.2 (Cover times, blanket times, and majorizing measures). For any graph G = (V, E) and any 0 < δ < 1, we have t cov (G) ≍ γ 2 (V, √ κ) 2 = |E| · γ 2 (V, R eff ) 2 ≍ δ t bl (G, δ), where ≍ δ denotes equivalence up to a constant depending on δ. Clearly this yields a positive resolution to Conjecture 1.1. Moreover, we prove the preceding theorem in the setting of general finite-state reversible Markov chains. See Theorem 1.9 for a statement of our most general theorem. We now address some additional consequences of the main theorem. First, observe that by combining Theorem 1.2 with Theorem (MM), we obtain Theorem 1.1. Theorem 1.3 (Cover times and the Gaussian free field). For any graph G = (V, E) and any 0 < δ < 1, we have t cov (G) ≍ |E| E max v∈V η v 2 ≍ δ t bl (G, δ), where {η v } is the Gaussian free field on G. In fact, in Section 2.2, we exhibit the following strong asymptotic upper bound. Theorem 1.4. For every graph G = (V, E), if t hit (G) denotes the maximal hitting time in G, and {η v } v∈V is the Gaussian free field on G, then t cov (G) 1 + C t hit (G) t cov (G) · |E| · E sup v∈V η v 2 , where C > 0 is a universal constant. In Section 3, we prove the following theorem which, in conjunction with Theorem 1.2, resolves Question 1.2. Theorem 1.5. Let (X, d) be a finite metric space, with n = |X|. 
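The identity κ(u, v) = 2|E| · R eff (u, v) can be checked numerically: hitting times come from the first-step linear equations, and effective resistances from the Laplacian pseudoinverse. The 5-vertex example graph below is an illustrative assumption.

```python
import numpy as np

# Cycle on 5 vertices plus the chord {1, 3}, unit conductances.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
n, m = 5, len(edges)
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
deg = A.sum(axis=1)
P = A / deg[:, None]                       # transition matrix of the simple random walk
Lp = np.linalg.pinv(np.diag(deg) - A)      # pseudoinverse of the Laplacian

def hitting(u, v):
    """H(u, v): solve h(x) = 1 + sum_y P[x, y] h(y) for x != v, with h(v) = 0."""
    idx = [x for x in range(n) if x != v]
    h = np.linalg.solve(np.eye(n - 1) - P[np.ix_(idx, idx)], np.ones(n - 1))
    return h[idx.index(u)]

def r_eff(u, v):
    e = np.zeros(n); e[u], e[v] = 1.0, -1.0
    return e @ Lp @ e

kappa_03 = hitting(0, 3) + hitting(3, 0)   # commute time between vertices 0 and 3
```

The equality H(u, v) + H(v, u) = 2m · R eff (u, v) holds exactly for every pair, up to floating-point error.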
If, for any two points x, y ∈ X, one can deterministically compute d(x, y) in time polynomial in n, then one can deterministically compute a number A(X, d) in polynomial time, for which A(X, d) ≍ γ 2 (X, d). A "comparison theorem" follows immediately from Theorem 1.2, and the fact that γ 2 (X, d) ≤ Lγ 2 (X, d ′ ) whenever d ≤ Ld ′ (see (4)). Theorem 1.6 (Comparison theorem for cover times). Suppose G and G ′ are two graphs on the same set of nodes V , and κ G and κ G ′ are the distances induced by respective commute times. If there exists a number L ≥ 1 such that κ G (u, v) ≤ L · κ G ′ (u, v) for all u, v ∈ V , then t cov (G) ≤ O(L) · t cov (G ′ ) . Finally, our work implies that there is an extremely simple randomized algorithm for computing the cover time of a graph, up to constant factors. To this end, consider a graph G = (V, E) whose vertex set we take to be V = {1, 2, . . . , n}. Let D be the diagonal degree matrix, i.e. such that D ii = deg(i) and D ij = 0 for i ≠ j, and let A be the adjacency matrix of G. We define the following normalized Laplacian, L G = (D − A)/tr(D) . Let L + G denote the Moore-Penrose pseudoinverse of L G . Note that both L G and L + G are positive semi-definite. We have the following characterization. Theorem 1.7. For any connected graph G, it holds that t cov (G) ≍ E (L + G ) 1/2 g 2 ∞ , where g = (g 1 , . . . , g n ) is an n-dimensional Gaussian, i.e. such that {g i } are i.i.d. N(0,1) random variables. The preceding theorem yields an O(n ω )-time randomized algorithm for approximating t cov (G), where ω ∈ [2, 2.376) is the best-possible exponent for matrix multiplication [13]. Using the linear-system solvers of Spielman and Teng [47] (see also [45]), along with ideas from Spielman and Srivastava [46], we present an algorithm that runs in near-linear time in the number of edges of G, and outputs a number A(G) such that t cov (G) ≍ E [A(G)] ≍ (E A(G) 2 ) 1/2 .
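The two algebraic facts behind this characterization — that ⟨e i − e j , L + G (e i − e j )⟩ equals the commute time, and that the process η = (L + G ) 1/2 g satisfies E |η i − η j | 2 = κ(i, j) — can be verified on a small example. The graph below is an illustrative assumption, and the positive-semidefinite square root is taken by eigendecomposition.

```python
import numpy as np

# Illustrative graph: a path 0-1-2-3 plus the edge {0, 2}, unit conductances.
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
n, m = 4, len(edges)
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
D = np.diag(A.sum(axis=1))

L_G = (D - A) / np.trace(D)        # normalized Laplacian; tr(D) = 2|E| here
Lp = np.linalg.pinv(L_G)

# PSD square root of Lp via eigendecomposition.
w, V = np.linalg.eigh(Lp)
M = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def kappa(i, j):
    e = np.zeros(n); e[i] += 1.0; e[j] -= 1.0
    return e @ Lp @ e              # equals the commute time 2|E| R_eff(i, j)

# Monte Carlo estimate of E ||M g||_inf^2, the quantity equivalent to t_cov(G).
rng = np.random.default_rng(0)
samples = M @ rng.standard_normal((n, 2000))
estimate = np.mean(np.max(samples ** 2, axis=0))
```

Since pinv((D − A)/tr(D)) = tr(D) · pinv(D − A), the quadratic form in Lp reproduces 2|E| · R eff exactly, and ||M(e i − e j )||² = ⟨e i − e j , Lp(e i − e j )⟩ by construction.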
Preliminaries To begin, we introduce some fundamental notions from random walks and electrical networks. Electrical networks and random walks. A network is a finite, undirected graph G = (V, E), together with a set of non-negative conductances {c xy : x, y ∈ V } supported exactly on the edges of G, i.e. c xy > 0 ⇐⇒ xy ∈ E. The conductances are symmetric so that c xy = c yx for all x, y ∈ V . We will write c x = y∈V c xy and C = x∈V c x for the total conductance. We will often use the notation G(V ) for a network on the vertex set V . In this case, the associated conductances are implicit. In the few cases when there are multiple networks under consideration simultaneously, we will use the notation c G xy to refer to the conductances in G. Associated to such a network is the canonical discrete time random walk on G, whose transition probabilities are given by p xy = c xy /c x for all x, y ∈ V . It is easy to see that this defines the transition matrix of a reversible Markov chain on V , and that every finite-state reversible Markov chain arises in this way (see [2, §3.2]). The stationary measure of a vertex is precisely π(x) = c x /C. Associated to such an electrical network are the classical quantities C eff , R eff : V × V → R 0 which are referred to, respectively, as the effective conductance and effective resistance between pairs of nodes. We refer to [36,Ch. 9] for a discussion of the connection between electrical networks and the corresponding random walk. For now, it is useful to keep in mind the following fact [11]: For any x, y ∈ V , R eff (x, y) = κ(x, y) C ,(9) where the commute time κ is defined as before (6). For convenience, we will work exclusively with continuous-time Markov chains, where the transition rates between nodes are given by the probabilities p xy from the discrete chain. One way to realize the continuous-time chain is by making jumps according to the discrete-time chain, where the times spent between jumps are i.i.d. 
exponential random variables with mean 1. We refer to these random variables as the holding times. See [2, Ch. 2] for background and relevant definitions. Cover times, local times, and blanket times. We will now define various stopping times for the continuous-time random walk. First, we observe that if τ ⋆ cov is the first time at which the continuous-time random walk has visited every node of G, then for every vertex v, E v τ ⋆ cov = E v τ cov , where we recall that the latter quantity refers to the discrete-time chain. Thus we may also define the cover time with respect to the continuous-time chain, i.e. t cov (G) = max v∈V E v τ ⋆ cov . In fact, it will be far more convenient to work with the cover and return time defined as follows. Let {X t } t∈[0,∞) be the continuous-time chain, and define τ cov = inf {t > τ ⋆ cov : X t = X 0 } .(10) For concreteness, we define the cover and return time of G as t cov (G) = max v∈V E v τ cov , but the following fact shows that the choice of initial vertex is not of great importance for us (see [2,Ch. 5,Lem. 25]), 1 2 t cov (G) t cov (G) t cov (G) 3 min v∈V E v τ cov .(11) For a vertex v ∈ V and time t, we define the local time L v t by L v t = 1 c v t 0 1 {Xs=v} ds ,(12) where we recall that c v = u∈V c uv . For δ ∈ (0, 1), we define τ ⋆ bl (δ) as the first time t > 0 at which min u,v∈V L u t L v t δ. Furthermore, the continuous-time strong δ-blanket time is defined to be t ⋆ bl (G, δ) = max v∈V E v τ ⋆ bl (δ).(13) Asymptotic notation. For expressions A and B, we will use the notation A B to denote that A C · B for some constant C > 0. If we wish to stress that the constant C depends on some parameter, e.g. C = C(p), we will use the notation A p B. We use A ≍ B to denote the conjunction A B and B A, and we use the notation A ≍ p B similarly. Outline We first state our main theorem in full generality. We use only the language of effective resistances, since this is most natural in the context to follow. Theorem 1.9. 
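The continuous-time walk and its local times (12) can be simulated directly; by construction the occupation times satisfy Σ v c v L v t = t, which the sketch below checks. The 4-cycle and the time horizon T are illustrative assumptions.

```python
import random

random.seed(0)

# Simple random walk on the 4-cycle with unit conductances: c_xy = 1 on edges,
# so c_x = deg(x) = 2 and each jump is uniform over the two neighbors.
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
T = 50.0                                  # time horizon (illustrative)

t, x = 0.0, 0
occupation = {v: 0.0 for v in nbrs}       # total time spent at each vertex
while True:
    hold = random.expovariate(1.0)        # i.i.d. mean-1 exponential holding time
    if t + hold >= T:
        occupation[x] += T - t            # truncate the final holding period at T
        break
    occupation[x] += hold
    t += hold
    x = random.choice(nbrs[x])            # jump according to p_xy = c_xy / c_x

# Local times as in (12): L^v_T is the occupation time at v divided by c_v.
local_time = {v: occupation[v] / len(nbrs[v]) for v in nbrs}
```

The identity Σ v c v L v T = T holds path by path, not just in expectation, which is the form in which it is used later (e.g. in the proof of Theorem 2.3, where τ(t) = Σ x c x L x τ(t)).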
For any network G = (V, E) and any 0 < δ < 1, t cov (G) ≍ C [γ 2 (V, √ R eff )] 2 ≍ δ t bl (G, δ) ≍ δ t ⋆ bl (G, δ), where C is the total conductance of G. We now present an overview of our main arguments, and lay out the organization of the paper. Hints of a connection. First, it may help the reader to have some intuition about why cover times should be connected to the Gaussian processes and particularly the theory of majorizing measures. A first hint goes back to work of Aldous [3], where it is shown that the hitting times of Markov chains are approximately distributed as exponential random variables. It is well-known that an exponential variable can be represented as the sum of the squares of two Gaussians. Observing that the cover time is just the maximum of all the hitting times, one might hope that the cover time can be related to the maximum of a family of Gaussians. This point of view is strengthened by some quantitative similarities. Let {η i } i∈I be a centered Gaussian process, and let d(i, j) be the natural metric on I from (5). The following two lemmas are central to the proof of the majorizing measures theorem (Theorem (MM)). We refer to [35, 50] for their utility in the majorizing measures theory. The next lemma follows directly from the definition of the Gaussian density; see, for instance, [42, Lem. 5]. Lemma 1.10 (Gaussian concentration). For every i, j ∈ I, and α > 0, P (η i − η j > α) ≤ exp (−α 2 / (2 d(i, j) 2 )) . The next result can be found in [35, Thm. 3.18]. Lemma 1.11 (Sudakov minoration). For every α > 0, if I ′ ⊆ I is such that i, j ∈ I ′ and i ≠ j implies d(i, j) ≥ α, then E sup i∈I ′ η i ≳ α √ (log |I ′ |). Now, let G = (V, E) be a network, and consider the associated continuous-time random walk {X t } with local times L v t . We define also the inverse local times τ v (t) = inf{s : L v s > t}.
An analog of the following lemma was proved in [30] for the discrete-time chain; the continuous-time version can be similarly proved, though we will not do so here, as it will not be used in the arguments to come. In interpreting the next lemma, it helps to recall that L u τ u (t) = t. Lemma 1.12 (Concentration for local times) . For all u, v ∈ V and any α > 0 and t 0, we have P u L u τ u (t) − L v τ u (t) α exp −α 2 4tR eff (u, v) , where P u denotes the measure for the random walk started at u. Thus local times satisfy sub-gaussian concentration, where now the distance d is replaced by √ t · R eff . On the other side, the classical bound of Matthews [43] provides an analog to Lemma 1.11. Lemma 1.13 (Matthews bound). For every α > 0, if V ′ ⊆ V is such that u, v ∈ V ′ and u = v implies H(u, v) α, then t cov (G) α log(|V ′ | − 1). Of course the similar structure of these lemmas offers no formal connection, but merely a hint that something deeper may be happening. We now discuss a far more concrete connection between local times and Gaussian processes. The isomorphism theorems. The distribution of the local times for a Borel right process can be fully characterized by certain associated Gaussian processes; results of this flavor go by the name of Isomorphism Theorems. Several versions have been developed by Ray [44] and Knight [33], Dynkin [18,17], Marcus and Rosen [40,41], Eisenbaum [19] and Eisenbaum, Kaspi, Marcus, Rosen and Shi [20]. In what follows, we present the second Ray-Knight theorem in the special case of a continuous-time random walk. It first appeared in [20]; see also Theorem 8.2.2 of the book by Marcus and Rosen [42] (which contains a wealth of information on the connection between local times and Gaussian processes). It is easy to verify that the continuous-time random walk on a connected graph is indeed a recurrent strongly symmetric Borel right process. Theorem 1.14 (Generalized Second Ray-Knight Isomorphism Theorem). 
Fix v 0 ∈ V and define the inverse local time, τ (t) = inf{s : L v 0 s > t}.(14) Let T 0 be the hitting time to v 0 and let Γ v 0 (x, y) = E x (L y T 0 ). Denote by η = {η x : x ∈ V } a mean zero Gaussian process with covariance Γ v 0 (x, y). Let P v 0 and P η be the measures on the processes {L x T 0 } and {η x }, respectively. Then under the measure P v 0 × P η , for any t > 0 L x τ (t) + 1 2 η 2 x : x ∈ V law = 1 2 (η x + √ 2t) 2 : x ∈ V .(15) Thus to every continuous-time random walk, we can associate a Gaussian process {η v } v∈V . As discussed in Section 2.4, we have the relationship d(u, v) = √ (R eff (u, v)), where d(u, v) = √ (E |η u − η v | 2 ). In particular, the process {η v } v∈V is the Gaussian free field on the network G. Using the Isomorphism Theorem in conjunction with concentration bounds for Gaussian processes, we already have enough machinery to prove the following upper bound in Section 2.1, t cov (G) ≤ t bl (G, δ) ≲ δ C [γ 2 (V, d)] 2 = C [γ 2 (V, √ R eff )] 2 .(16) We also show how to prove a matching lower bound in terms of γ 2 , but for a slightly different notion of "blanket time." Thus (16) proves the first half of Theorem 1.9. The lower bound for cover times is quite a bit more difficult to prove. Of course, the cover and return time relates to the event ∃v : L v τ (t) = 0 , and unfortunately the correspondence (15) seems too coarse to provide lower bounds on the probability of this event directly. To this end, we need to show that for the right value of t in Theorem 1.14, we often have η x ≈ − √ 2t for some x ∈ V . The main difficulty is that we will have to show that there is often a vertex x ∈ V with |η x + √ 2t| being much smaller than the standard deviation of η x . In doing so, we will use the full power of the majorizing measures theory, as well as the special structure of the Gaussian processes arising from the Isomorphism Theorem. The discrete Gaussian free field and a tree-like subprocess.
In Section 2.4 (see (35)), we recall that the Gaussian processes arising from the Isomorphism Theorem are not arbitrary, but correspond to the Gaussian free field (GFF) associated with G. Special properties of such processes will be essential to our proof of Theorem 1.9. In particular, if we use R eff (v, S) to denote the effective resistance between a point v and a set of vertices S ⊆ V , then we have the relationship R eff (v, S) = dist L 2 (η v , aff({η w } w∈S )),(17) where aff(·) denotes the affine hull, and dist L 2 is the L 2 distance in the Hilbert space underlying the process {η v } v∈V . In Section 2.3, we prove a number of properties of the effective resistance metric (e.g. Foster's network theorem); combined with (17), this yields some properties unique to processes arising from a GFF. Next, in Section 3, we recall that one of the primary components of the majorizing measures theory is that every Gaussian process {η i } i∈I contains a "tree like" subprocess which controls E sup i∈I η i . After a preprocessing step that ensures our trees have a number of additional features, we use the structure of the GFF to select a representative subtree with very strong independence properties that will be essential to our analysis of cover times. Restructuring the randomness and a percolation argument. The majorizing measures theory is designed to control the first moment E sup i∈I η i of the supremum of Gaussian process. In analyzing (15) to prove a lower bound on the cover times, we actually need to employ a variant of the second moment method. The need for this, and a detailed discussion of how it proceeds, are presented at the beginning of Section 4. Towards this end, we want to associate events to the leaves of our "tree like" subprocess which can be thought of as "open events" in a percolation process on the tree. For general trees, it is known that the second moment method gives accurate estimates for the probability of having an open path to a leaf [38]. 
While our trees are not regular, they are "regularized" by the majorizing measure, and we do a somewhat standard analysis of such a process in Section 4.3. The real difficulty involves setting up the right filtration on the probability space corresponding to our tree so that the percolation argument yields the desired control on the cover times. This requires a delicate definition of the events associated to each edge, and the ensuing analysis forms the technical core of our argument in Section 4. Algorithmic issues. In order to complete the proof of Theorem 1.5 and thus resolve Question 1.2, we present a deterministic algorithm which computes an approximation to γ 2 (X, d) for any metric space (X, d). This is achieved in Section 3.3. While the algorithm is fairly elementary to describe, its analysis requires a number of tools from the majorizing measures theory. We remark that, in combination with Theorem 1.9, this yields the following result. Observe that for general reversible chains, the cover time is not necessarily bounded a polynomial in |V |, and thus even randomized simulation of the chain does not yield a polynomial-time algorithm for approximating t cov (G). Finally, in Section 4.5, we prove Theorems 1.7 and 1.8 in the setting of arbitrary reversible Markov chains, leading to a near-linear time randomized algorithm for computing cover times. Gaussian processes and local times We now discuss properties of the Gaussian processes arising from the isomorphism theorem (Theorem 1.14). In Section 2.1, we show that the isomorphism theorem, combined with concentration properties of Gaussian processes, is already enough to get strong control on blanket times and related quantities. In Section 2.3, we prove some geometric properties of the resistance metric on networks that will be crucial to our work on the cover time in Sections 3 and 4. 
Finally, in Section 2.4, we recall the definition of the Gaussian free field and show how the geometry of such a process relates to the geometry of the underlying resistance metric. The blanket time We first remark that the covariance matrix of the Gaussian process arising from the isomorphism theorem can be calculated explicitly in terms of the resistance metric on the network G(V ). Throughout this section, the process {η x } x∈V refers to the one resulting from Theorem 1.14 with v 0 ∈ V some fixed (but arbitrary) vertex, τ (t) refers to the inverse local time defined in (14), and T 0 is the hitting time to v 0 . Lemma 2.1. For every x, y ∈ V , Γ v 0 (x, y) = E x (L y T 0 ) = 1 2 (R eff (x, v 0 ) + R eff (v 0 , y) − R eff (x, y)) . In particular, E (η x − η y ) 2 = R eff (x, y). Proof. To prove the lemma, we use the cycle identity for hitting times (see, e.g., [36,Lem. 10.10]) which asserts that, H(x, v 0 ) + H(v 0 , y) + H(y, x) = H(x, y) + H(y, v 0 ) + H(v 0 , x).(18) Averaging both sides of (18) and recalling (9) yields H(x, v 0 ) + H(v 0 , y) + H(y, x) = C 2 [R eff (x, v 0 ) + R eff (v 0 , y) + R eff (x, y)] . Now, we subtract CR eff (x, y) = H(x, y) + H(y, x) from both sides, giving H(x, v 0 ) + H(v 0 , y) − H(x, y) = C 2 [R eff (x, v 0 , ) + R eff (v 0 , y) − R eff (x, y)] Finally, we conclude using the identity (see, e.g. [2, Ch 2., Lem. 9]), E x (L y T 0 ) = 1 C (H(x, v 0 ) + H(v 0 , y) − H(x, y)) . We now relate the blanket time of the random walk to the expected supremum of its associated Gaussian process. The following is a central facet of the theory of concentration of measure; see, for example, [34, Thm. 7.1, Eq. (7.4)]. Lemma 2.2. Consider a Gaussian process {η x : x ∈ V } and define σ = sup x∈V (E(η 2 x )) 1/2 . Then for α > 0, P sup x∈V η x − E sup x∈V η x > α 2 exp(−α 2 /2σ 2 ) . We are now ready to establish the upper bound on the strong blanket time t ⋆ bl (G, δ), for any fixed 0 < δ < 1. 
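Lemma 2.1 can be checked numerically: E x (L y T 0 ) equals the expected number of discrete visits to y before hitting v 0 (read off the fundamental matrix of the absorbing chain) divided by c y , and this should match the resistance formula. The 5-vertex graph below is an illustrative assumption.

```python
import numpy as np

# Cycle on 5 vertices plus the chord {1, 3}, unit conductances; v0 = 0.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
n = 5
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
deg = A.sum(axis=1)
P = A / deg[:, None]
Lp = np.linalg.pinv(np.diag(deg) - A)

v0 = 0
idx = [x for x in range(n) if x != v0]
# Fundamental matrix: N[x, y] = expected visits to y before hitting v0, started at x.
N = np.linalg.inv(np.eye(n - 1) - P[np.ix_(idx, idx)])

def r_eff(a, b):
    e = np.zeros(n); e[a] += 1.0; e[b] -= 1.0
    return e @ Lp @ e

def green(x, y):
    """E_x(L^y_{T_0}): each visit to y lasts a mean-1 exponential, and L^y divides by c_y."""
    return N[idx.index(x), idx.index(y)] / deg[y]
```

For every pair x, y the value green(x, y) agrees with (1/2)(R eff (x, v 0 ) + R eff (v 0 , y) − R eff (x, y)) up to floating-point error, exactly as the lemma states.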
Note that this will naturally yield an upper bound on t_bl(δ). Theorem 2.3. Consider a network G(V) and its total conductance C = Σ_{x∈V} c_x. For any fixed 0 < δ < 1, the blanket time t⋆_bl(G, δ) of the random walk on G(V) satisfies t⋆_bl(G, δ) ≲_δ C · (E sup_{x∈V} η_x)², where {η_x} is the associated Gaussian process from Theorem 1.14. Proof. We first prove that for some A_δ > 0, t⋆_bl(δ) ≤ A_δ C ((E sup_{x∈V} η_x)² + sup_{x∈V} E(η_x²)). (19) Fix a vertex v_0 ∈ V and consider the local times {L^x_{τ(t)} : x ∈ V}, where for t > 0 we write τ(t) = inf{s : L^{v_0}_s > t}. Let σ = sup_{x∈V} (E(η_x²))^{1/2} and Λ = E sup_x η_x. Use {η^L_x} to denote the copy of the Gaussian process corresponding to the left-hand side of (15), and {η^R_x} to denote the independent copy corresponding to the right-hand side. Fix β > 0, and set t = t(β) = β(Λ² + σ²). By Theorem 1.14, we get that P(min_x L^x_{τ(t)} ≤ √δ t) ≤ P(inf_x ½(η^R_x + √(2t))² ≤ ((1 + √δ)/2) t) + P(sup_x ½(η^L_x)² ≥ ((1 − √δ)/2) t). Therefore, P(min_x L^x_{τ(t)} ≤ √δ t) ≤ P(inf_x η^R_x ≤ −a_δ √t) + P(sup_x |η^L_x| ≥ b_δ √t), where a_δ = √2 − √(1 + √δ) and b_δ = √(1 − √δ). Applying Lemma 2.2, we obtain that if β ≥ β_0(δ) for some β_0(δ) > 0, then P(min_x L^x_{τ(t)} ≤ √δ t) ≤ 6 exp(−γ_δ β), (20) where γ_δ = ½(a_δ² ∧ b_δ²). On the other hand, we have P(max_x L^x_{τ(t)} ≥ t/√δ) ≤ P(max_x ½(η^R_x + √(2t))² ≥ t/√δ) = P(max_x η^R_x ≥ a′_δ √t), where a′_δ = √(2/√δ) − √2. Applying Lemma 2.2 again for β ≥ β_0(δ), we get that P(max_x L^x_{τ(t)} ≥ t/√δ) ≤ 2 exp(−γ′_δ β), (21) where γ′_δ = (a′_δ)²/2. Note that assuming min_x L^x_{τ(t)} ≥ √δ t and max_x L^x_{τ(t)} ≤ t/√δ, we have τ(t) = Σ_x c_x L^x_{τ(t)} ≤ Ct/√δ as well as min_{x,y} L^x_{τ(t)}/L^y_{τ(t)} ≥ δ. It then follows that τ⋆_bl ≤ τ(t) ≤ Ct/√δ. Therefore, we can deduce that {τ⋆_bl ≥ Ct/√δ} ⊂ {min_x L^x_{τ(t)} ≤ √δ t} ∪ {max_x L^x_{τ(t)} ≥ t/√δ}. Combined with (20) and (21), this yields P(τ⋆_bl ≥ Ct/√δ) ≤ 6 exp(−γ_δ β) + 2 exp(−γ′_δ β).
It then follows that t⋆_bl ≤ A_δ C(Λ² + σ²) for some A_δ > 0 which depends only on δ, establishing (19). It remains to prove that σ = O(Λ). To this end, let x* be such that E η_{x*}² = σ². We have Λ ≥ E max(η_{v_0}, η_{x*}) = E max(0, η_{x*}) = σ/√(2π). (22) This completes the proof for the continuous-time case. Remark 1. An interesting question is the asymptotic behavior of the δ-blanket time as δ → 1, namely the dependence on δ of A_δ in (19). As implied by the proof, we can see that A_δ ≲ 1/γ_δ + 1/γ′_δ ≲ 1/(1 − δ)². These asymptotics are tight for the complete graph; see, e.g., [54, Cor. 2]. We next extend the proof of the preceding theorem to the case of the discrete-time random walk. The next lemma contains the main estimate required for this extension. Lemma 2.4. Let G(V) be a network and write γ₂ = γ₂(V, √R_eff). Then for all u ≥ 16, we have Σ_{v∈V} e^{−u·c_v γ₂²} ≤ e^{−u/8}. Proof. By definition of the γ₂ functional, we can choose a sequence of partitions A_k with |A_k| ≤ 2^{2^k} such that γ₂ ≥ ½ sup_{v∈V} Σ_{k≥0} 2^{k/2} diam(A_k(v)). For v ∈ V, let k_v = min{k : {v} ∈ A_k}. It is clear that R_eff(u, v) ≥ 1/c_v for all u ≠ v, and hence (diam(A_{k_v−1}(v)))² ≥ 1/c_v. Therefore, we see that Σ_{v∈V} e^{−u·c_v γ₂²} = Σ_{k=0}^∞ Σ_{v : k_v = k+1} e^{−u·c_v γ₂²} ≤ Σ_{k=1}^∞ 2^{2^{k+1}} e^{−u 2^k/4} ≤ e^{−u/8}, completing the proof. Theorem 2.5. Consider a network G(V) and its total conductance C = Σ_{x∈V} c_x. For any fixed 0 < δ < 1, the discrete blanket time t_bl(G, δ) of the random walk on G(V) satisfies t_bl(G, δ) ≲_δ C · (E sup_{x∈V} η_x)², where {η_x} is the associated Gaussian process from Theorem 1.14. Proof. We now consider the embedded discrete-time random walk of the continuous-time counterpart (i.e., the corresponding jump chain; see [2, Ch. 2]). Let N^v_t be such that c_v · N^v_t is the number of visits to vertex v up to continuous time t, i.e., N^v_t is a discrete-time analog of the local time L^v_t. Fix a vertex v_0 ∈ V and consider the local times {L^x_{τ(t)} : x ∈ V}.
Let σ = sup_{x∈V} (E(η_x²))^{1/2} and Λ = E sup_x η_x. Again, set t = β(Λ² + σ²). Let τ_bl(δ) denote the first time at which N^x_t ≥ δt/C for every x ∈ V. Assuming that min_x N^x_{τ(t)} ≥ δ^{1/4} t and max_x N^x_{τ(t)} ≤ t/δ^{3/4}, we have τ(t) = Σ_x c_x N^x_{τ(t)} ≤ Ct/δ^{3/4}, and thus min_x N^x_{τ(t)} ≥ δ τ(t)/C. It then follows that τ_bl(δ) ≤ τ(t) ≤ Ct/δ^{3/4}. Therefore, we deduce that {τ_bl(δ) ≥ Ct/δ^{3/4}} ⊂ {min_x N^x_{τ(t)} ≤ δ^{1/4} t} ∪ {max_x N^x_{τ(t)} ≥ t/δ^{3/4}}. Therefore we have P(τ_bl(δ) ≥ Ct/δ^{3/4}) ≤ P(min_x L^x_{τ(t)} ≤ √δ t or max_x L^x_{τ(t)} ≥ t/√δ) + P(∀x : √δ t ≤ L^x_{τ(t)} ≤ t/√δ | min_x N^x_{τ(t)} ≤ δ^{1/4} t or max_x N^x_{τ(t)} ≥ t/δ^{3/4}). Note that we have already bounded the first term in (20) and (21). The second term can be bounded by a simple application of a large deviation inequality for sums of i.i.d. exponential variables. Precisely, Σ_{x∈V} P(√δ t ≤ L^x_{τ(t)} ≤ t/√δ | N^x_{τ(t)} ≤ δ^{1/4} t or N^x_{τ(t)} ≥ t/δ^{3/4}) ≤ Σ_{x∈V} e^{−ã_δ·c_x t} for some constant ã_δ > 0 depending only on δ. Recall that Theorem (MM) implies E sup_x η_x ≍ γ₂(V, √R_eff). By (22), we see that σ ≤ √(2π) Λ. Altogether, we get that t ≍ β Λ² ≍ β (γ₂(V, √R_eff))². Applying Lemma 2.4, we conclude that there exists β̃_0(δ) > 0 depending only on δ such that for all β ≥ β̃_0(δ), we have P(τ_bl(G, δ) ≥ Ct/δ^{3/4}) ≤ e^{−b̃_δ β}, where b̃_δ is a constant depending only on δ. This immediately yields the desired upper bound on the blanket time for the discrete-time random walk. We next exhibit a lower bound on a variation of the blanket time (considered in [30]). The lower bound on the cover time, which will be proved in Section 4, is automatically a lower bound on the blanket time. In what follows, though, we give a simple direct argument that can be regarded as a warm-up. For convenience of analysis, we consider the following notion. For 0 < ε < 1, define t*_bl(G, ε) = max_{w∈V} inf{s : P_w(∀u, v ∈ V : L^u_t ≤ 2L^v_t) > ε for all t ≥ s}. (23) Theorem 2.6. Consider a network G(V) and its total conductance C = Σ_{x∈V} c_x.
For any fixed 0 < ε < 1, we have t*_bl(G, ε) ≳_ε C · (E sup_{x∈V} η_x)². In order to prove Theorem 2.6, we will use the next simple lemma. We will also require this estimate in Section 4. Lemma 2.7. Let τ(t) be the inverse local time at vertex v_0, as defined in (14). Let C be the total conductance and let D = max_{x,y∈V} √(R_eff(x, y)). Then, for all β > 0 and t ≥ D²/β², P_{v_0}(τ(t) ≤ βCt) ≤ 3β. Proof. We use P_v to denote the measure on random walks started at a vertex v ∈ V, and we use E_v similarly. Let p_δ = min_v {P_v(τ(t) ≤ δCt)} for some δ > 0. Using the strong Markov property, we get that for all v ∈ V, P_v(τ(t) ≥ kδCt) ≤ (1 − p_δ)^k. In particular, E_v τ(t) ≤ δCt/p_δ. By Theorem 1.14, it follows easily that E_{v_0} τ(t) = Ct. Since E_v τ(t) ≥ E_{v_0}(τ(t)), we deduce that p_δ ≤ δ. Let u = u(δ) be such that P_u(τ(t) ≤ δCt) = p_δ. Let Y, Z be random variables with the law of τ(t) when the random walk is started at u and v_0, respectively. Clearly, Y =(law) Z + T_{v_0}, (24) where T_{v_0} is distributed as the hitting time of v_0 when the random walk is started at u, and T_{v_0} is independent of Z. Since R_eff(u, v_0) ≤ D², we have E_u T_{v_0} ≤ CD² (by (9)), and this yields P_u(T_{v_0} ≥ CD²/β) ≤ β. Using the assumption t ≥ D²/β² and (24), we conclude that P(Z ≤ βCt) ≤ P(Z ≤ 2βCt − CD²/β) ≤ P(Y ≤ 2βCt) + P(T_{v_0} ≥ CD²/β) ≤ p_{2β} + β ≤ 3β, as required. We are now ready to establish the lower bound on t*_bl(G, ε). Proof of Theorem 2.6. We consider the associated Gaussian process as in the proof of Theorem 2.3. Let σ = sup_{x∈V} (E η_x²)^{1/2} and Λ = E sup_x η_x. Observe that the maximal hitting time is a simple lower bound on t*_bl(G, ε), up to a constant depending only on ε. In light of Lemma 2.1, we see that t*_bl(G, ε) ≳_ε C · σ². Therefore, we can assume in what follows that Λ² ≥ 100 log(4/ε) ε^{−2} σ². (25) Let t* = ½Λ². By Lemma 2.2 (and the symmetry of the Gaussian process), we get P(inf_x ½(η^R_x + √(2t*))² ≤ log(4/ε) σ²) ≥ P(|sup_{x∈V} η^R_x − Λ| ≤ √(2 log(4/ε)) σ) ≥ 1 − ε/2.
Applying Theorem 1.14, we obtain P(inf_{x∈V} L^x_{τ(t*)} ≤ log(4/ε) σ²) ≥ 1 − ε/2. By the triangle inequality, we have D ≤ 2σ. Recalling assumption (25), we can apply Lemma 2.7 and deduce that P(τ(t*) ≤ εCt*/6) ≤ ε/2. Writing t_0 = εCt*/6, we then obtain that P(inf_{x∈V} L^x_{t_0} ≤ log(4/ε) σ² and τ(t*) ≥ t_0) ≥ 1 − ε. Also, we see that sup_{x∈V} L^x_{t_0} ≥ εΛ²/12 whenever τ(t*) ≥ t_0. Using assumption (25) again, we conclude that P_{v_0}(∃x, y ∈ V : L^x_{t_0} > 2L^y_{t_0}) ≥ 1 − ε. This implies that t*_bl(G, ε) ≥ t_0, completing the proof. An asymptotically strong upper bound Finally, we show a strong upper bound for the asymptotics of t_cov on a sequence of graphs {G_n}, assuming t_hit(G_n) = o(t_cov(G_n)). Theorem 2.8. For any graph G = (V, E) with v_0 ∈ V, let t_hit(G) be the maximal hitting time in G and let {η_v}_{v∈V} be the GFF on G with η_{v_0} = 0. Then, for a universal constant C > 0, t_cov(G) ≤ (1 + C √(t_hit(G)/t_cov(G))) · |E| · (E sup_{v∈V} η_v)². Proof. Theorem 2.5 asserts that t_cov(G) ≲ |E| · (E max_v η_v)². (26) Write σ² = max_v E η_v². Note that σ² is comparable to the diameter of V in the effective resistance metric, thus t_hit(G) ≍ |E|σ². Denote S = Σ_v d_v η_v², where d_v is the degree of vertex v. By a generalized Hölder inequality and moment estimates for Gaussian variables (here we use that EX⁶ = 15 for a standard Gaussian variable X), we obtain that ES³ ≤ Σ_{u,v,w} d_u d_v d_w E(η_u² η_v² η_w²) ≤ Σ_{u,v,w} d_u d_v d_w (E(η_u⁶))^{1/3} (E(η_v⁶))^{1/3} (E(η_w⁶))^{1/3} ≤ 15|E|³σ⁶. An application of Markov's inequality then yields P(S ≥ α|E|σ²) ≤ 15/α³. (27) Write Q = Σ_v d_v η_v. Clearly, Q is a centered Gaussian with variance bounded by 4|E|²σ², and therefore P(|Q| ≥ α|E|σ) ≤ 2e^{−α²/8}. (28) For β > 0, let t = ½(E max_v η_v + βσ)². Noting that τ(t) = Σ_v d_v L^v_{τ(t)} and recalling the isomorphism theorem (Theorem 1.14), we get that τ(t) ≼ 2|E|t + √(2t)|Q| + ½S, where ≼ denotes stochastic domination.
Combined with (27) and (28), we deduce that P(τ(t) ≥ 2|E|t + √(2t) β|E|σ + β|E|σ²) ≤ 12/(β − 2)² + 2e^{−β²/8}. (29) We now turn to bounding the probability that τ_cov > τ(t). Observe that on the event {τ_cov > τ(t)}, there exists v ∈ V such that L^v_{τ(t)} = 0. It is clear that for all v ∈ V, we have P(η_v² ≥ βσ²/2) ≤ 2e^{−β/4}. Since {η_v}_{v∈V} and {L^v_{τ(t)}}_{v∈V} are two independent processes, we obtain P({τ_cov > τ(t)} \ {∃v ∈ V : L^v_{τ(t)} + ½η_v² < βσ²/2}) ≤ 2e^{−β/4}. (30) On the other hand, we deduce from the concentration of Gaussian processes (Lemma 2.2) that P(inf_v (√(2t) + η_v)² ≤ βσ²/2) ≤ 2e^{−β/8}. Applying the isomorphism theorem again and combining with (30), we get that P(τ_cov > τ(t)) ≤ 4e^{−β/8}. Combined with (29), it follows that P(τ_cov ≥ 2|E|t + √(2t) β|E|σ + β|E|σ²) ≤ 15/β³ + 2e^{−β²/8} + 4e^{−β/8}. Since t = ½(E max_v η_v + βσ)², we can deduce that for some universal constant C_1 > 0, t_cov(G) ≤ |E| (E sup_v η_v)² + C_1 |E| (σ² + σ E sup_v η_v). Recalling (26), we complete the proof. Geometry of the resistance metric We now discuss some relevant properties of the resistance metric on a network G(V). Effective resistances and network reduction. For a subset S ⊆ V, define the quotient network G/S to have vertex set (V \ S) ∪ {v_S}, where v_S is a new vertex disjoint from V. The conductances in G/S are defined by c^{G/S}_{xy} = c_{xy} if x, y ∉ S, and c_{v_S x} = Σ_{y∈S} c_{xy} for x ∉ S. Now, given v ∈ V and S ⊆ V, we put R_eff(v, S) := R^{G/S}_eff(v, v_S), (31) where the latter effective resistance is computed in G/S. For two disjoint sets S, T ⊆ V, we define R_eff(S, T) := R^{G/S}_eff(v_S, T), and the resistance is defined to be 0 if S ∩ T ≠ ∅. It is straightforward to check that R_eff(S, T) = R_eff(T, S). The following network reduction lemma was discovered by Campbell [10] under the name "star-mesh transformation" (see also, e.g., [39, Ex. 2.47(d)]). We give a proof for completeness. Lemma 2.9.
For a network G(V) and a subset Ṽ ⊂ V, there exists a network G̃(Ṽ) such that for all u, v ∈ Ṽ, we have c̃_v = c_v and R^G̃_eff(u, v) = R_eff(u, v). We call G̃(Ṽ) the reduced network. Furthermore, if Ṽ = V \ {x}, then we have c̃_{yz} = c_{yz} + c^{*,x}_{yz}, where c^{*,x}_{yz} = c_{xy} c_{xz} / Σ_{w∈Ṽ} c_{xw}. (32) Proof. Let P be the transition kernel of the discrete-time random walk {S_t} on the network G, and let P_Ṽ be the transition kernel of the induced random walk on Ṽ, namely, for u, v ∈ Ṽ, P_Ṽ(u, v) = P_u(S_{T⁺_Ṽ} = v), where T⁺_A := min{t ≥ 1 : S_t ∈ A} for all A ⊆ V. In other words, P_Ṽ is the chain watched on the subset Ṽ. We observe that P_Ṽ is a reversible Markov chain on Ṽ (see, e.g., [2, 36]). It is clear that the chain P_Ṽ has the same invariant measure as that of P restricted to Ṽ, up to scaling by a constant. Therefore, there exists a (unique) network G̃(Ṽ) corresponding to the Markov chain P_Ṽ such that c̃_u = c_u for all u ∈ Ṽ. We next show that the effective resistances are preserved in G̃(Ṽ). To this end, we use the following identity relating effective resistance and the random walk (see, e.g., [39, Eq. (2.5)]): P_v(T⁺_v > T_u) = 1/(c_v R_eff(u, v)), (33) where T_u = min{t ≥ 0 : S_t = u}. Since P_Ṽ is a watched chain on the subset Ṽ, we see that P^Ṽ_v(T⁺_v > T_u) = P_v(T⁺_v > T_u) for all u, v ∈ Ṽ. This yields R^G̃_eff(u, v) = R_eff(u, v). To prove the second half of the lemma, let G̃(Ṽ) be the network defined by (32). A straightforward calculation yields c̃_v = c_v − c_{xv} + Σ_{y∈Ṽ} c^{*,x}_{vy} = c_v − c_{xv} + Σ_{y∈Ṽ} c_{xv} c_{xy} / Σ_{z∈Ṽ} c_{xz} = c_v. Let P_G̃ be the transition kernel for the random walk on the network G̃(Ṽ). Then P_G̃(u, v) = c̃_{uv}/c̃_u = (c_{uv} + c_{ux} c_{xv} / Σ_{y∈Ṽ} c_{xy}) / c_u. On the other hand, the watched chain P_Ṽ satisfies P_Ṽ(u, v) = c_{uv}/c_u + (c_{ux}/c_u)(c_{xv} / Σ_{y∈Ṽ} c_{xy}). Altogether, we see that P_G̃(u, v) = P_Ṽ(u, v), completing the proof. Well-separated sets.
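Formula (32) lends itself to a direct numerical sanity check. The sketch below (random conductances on a complete graph, our own arbitrary choice) eliminates one vertex by the star-mesh rule and confirms that all pairwise effective resistances among the surviving vertices are unchanged, as Lemma 2.9 asserts.

```python
import numpy as np

# Star-mesh elimination of a vertex x:  c~_yz = c_yz + c_xy * c_xz / sum_w c_xw.
rng = np.random.default_rng(1)
n = 5
c = np.triu(rng.uniform(0.5, 2.0, (n, n)), k=1)
c = c + c.T                                   # symmetric conductance matrix

def reff_matrix(cond):
    """All pairwise effective resistances via the Laplacian pseudoinverse."""
    L = np.diag(cond.sum(axis=1)) - cond
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp   # R_eff(u,v) = Lp_uu + Lp_vv - 2 Lp_uv

x = n - 1                                     # vertex to eliminate
keep = list(range(n - 1))
c_new = c[np.ix_(keep, keep)] + np.outer(c[keep, x], c[keep, x]) / c[x].sum()
np.fill_diagonal(c_new, 0.0)                  # discard self-loops

R_full = reff_matrix(c)[np.ix_(keep, keep)]
R_reduced = reff_matrix(c_new)
assert np.allclose(R_full, R_reduced)         # resistances are preserved
```

The rule is exactly the Schur complement of the Laplacian with respect to the eliminated row and column, which is why the resistances are preserved exactly rather than approximately.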
The following result is an important property of the resistance metric, crucial for our analysis. Proposition 2.10. Consider a network G(V) and its associated resistance metric (V, R_eff). Suppose that for some subset S ⊆ V, there is a partition S = B_1 ∪ B_2 ∪ ··· ∪ B_m which satisfies the following properties. 1. For all i = 1, 2, …, m and for all x, y ∈ B_i, we have R_eff(x, y) ≤ ε/48. 2. For all i ≠ j ∈ {1, 2, …, m}, for all x ∈ B_i and y ∈ B_j, we have R_eff(x, y) ≥ ε. Then there is a subset I ⊆ {1, 2, …, m} with |I| ≥ m/2 such that for all i ∈ I, R_eff(B_i, S \ B_i) ≥ ε/24. In order to prove Proposition 2.10, we need the following two ingredients. Lemma 2.11. Suppose the network H(W) can be partitioned into two disjoint parts A and B such that for some ε > 0 and some vertices u ∈ A and v ∈ B, we have 1. R^H_eff(u, v) ≥ ε, and 2. R^H_eff(u, x) ≤ ε/12 for all x ∈ A, and R^H_eff(v, x) ≤ ε/12 for all x ∈ B. Then R^H_eff(A, B) ≥ ε/6. Proof. Recall Thomson's principle, which characterizes the effective resistance as an energy minimum: R_eff(x, y) = min_f E(f), where E(f) = ½ Σ_{x,y} f²(x, y) r_{xy}, and the minimum is over all unit flows from x to y. Here, r_{xy} = 1/c_{xy} is the edge resistance for {x, y}. Suppose now that R^H_eff(A, B) < ε/6. Then there exists a unit flow f_{AB} from set A to set B such that E(f_{AB}) < ε/6. For x ∈ A, let q_x be the amount of flow sent out from vertex x in f_{AB}, and for x ∈ B, let q_x be the amount of flow sent into vertex x. Note that Σ_{x∈A} q_x = Σ_{x∈B} q_x = 1. Analogously, by assumption (2), there exist flows {f_{ux} : x ∈ A} and {f_{xv} : x ∈ B} such that f_{xy} is a unit flow from x to y and E(f_{xy}) ≤ ε/12. We next build the flow f = f_{AB} + Σ_{w∈A} q_w f_{uw} + Σ_{z∈B} q_z f_{zv}. We see that f is indeed a unit flow from u to v. Furthermore, by Cauchy–Schwarz, E(f) = ½ Σ_{x,y} r_{xy} (f_{AB}(x, y) + Σ_{w∈A} q_w f_{uw}(x, y) + Σ_{z∈B} q_z f_{zv}(x, y))² ≤ (3/2) Σ_{x,y} r_{xy} (f²_{AB}(x, y) + Σ_{w∈A} q_w f²_{uw}(x, y) + Σ_{z∈B} q_z f²_{zv}(x, y)) = 3 (E(f_{AB}) + Σ_{w∈A} q_w E(f_{uw}) + Σ_{z∈B} q_z E(f_{zv})) < ε.
This contradicts assumption (1), completing the proof. Lemma 2.12. For any network G(V), the following holds. If there is a subset S ⊆ V and a value ε > 0 such that R_eff(u, v) ≥ ε for all u ≠ v ∈ S, then there is a subset S′ ⊆ S with |S′| ≥ |S|/2 such that for every v ∈ S′, R_eff(v, S \ {v}) ≥ ε/4. Proof. Consider the reduced network G̃ on the vertex set S, as defined in Lemma 2.9. Let the new conductances be denoted c̃_{xy} for x, y ∈ S. By Lemma 2.9, our initial assumption that R_eff(u, v) ≥ ε for all u ≠ v ∈ S implies that R^G̃_eff(u, v) ≥ ε for all u ≠ v ∈ S. Let n = |S|. Foster's theorem [26] (see also [53]) states that ½ Σ_{u≠v∈S} R^G̃_eff(u, v) c̃_{uv} = n − 1. Combined with the fact that R^G̃_eff(u, v) ≥ ε, this yields ½ Σ_{u≠v∈S} c̃_{uv} ≤ n/ε. In particular, there exists a subset S′ ⊆ S with |S′| ≥ n/2 such that for all v ∈ S′, Σ_{u∈S\{v}} c̃_{uv} ≤ 4/ε. It follows that for every v ∈ S′, we have C^G̃_eff(v, S \ {v}) ≤ 4/ε, hence R_eff(v, S \ {v}) = R^G̃_eff(v, S \ {v}) ≥ ε/4. Proof of Proposition 2.10. For each i ∈ {1, 2, …, m}, choose some v_i ∈ B_i. By assumption (2), R_eff(v_i, v_j) ≥ ε for i ≠ j. Thus, applying Lemma 2.12, we find a subset I ⊆ {1, 2, …, m} with |I| ≥ m/2 such that for every i ∈ I, we have R_eff(v_i, {v_1, …, v_m} \ {v_i}) ≥ ε/4. (34) We claim that this subset I satisfies the conclusion of the proposition. To this end, fix i ∈ I, and let G̃ be the quotient network formed by gluing {v_1, …, v_m} \ {v_i} into a single vertex ṽ. By (34), we have R^G̃_eff(v_i, ṽ) ≥ ε/4. Now let B̃ = ({ṽ} ∪ ⋃_{j≠i} B_j) \ {v_j : j ≠ i}. Consider any x ∈ B̃ with x ≠ ṽ. Then x ∈ B_j for some j ≠ i, hence by assumption (1), we conclude that R^G̃_eff(x, ṽ) ≤ R_eff(x, v_j) ≤ ε/48. We may now apply Lemma 2.11 to the sets B_i and B̃ in G̃ (with respective vertices v_i and ṽ) to conclude that R^G̃_eff(B_i, B̃) ≥ ε/24. But the preceding line immediately yields R_eff(B_i, S \ B_i) ≥ ε/24, finishing the proof. We end this section with the following simple lemma. Lemma 2.13.
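Foster's theorem, the one external input to the proof of Lemma 2.12, can itself be checked numerically on any small network. A minimal sketch (random conductances on a complete graph, an arbitrary choice of ours):

```python
import numpy as np

# Foster's theorem: (1/2) * sum_{u != v} c_uv * R_eff(u, v) = n - 1
# for any connected n-vertex network.
rng = np.random.default_rng(0)
n = 6
c = np.triu(rng.uniform(0.5, 2.0, (n, n)), k=1)   # random conductances
c = c + c.T                                        # symmetric, complete graph

L = np.diag(c.sum(axis=1)) - c                     # graph Laplacian
Lplus = np.linalg.pinv(L)

def Reff(x, y):
    e = np.zeros(n); e[x], e[y] = 1.0, -1.0
    return e @ Lplus @ e

total = 0.5 * sum(c[u, v] * Reff(u, v)
                  for u in range(n) for v in range(n) if u != v)
assert abs(total - (n - 1)) < 1e-9                 # equals n - 1 exactly
```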
For any network G(V), if A, B_1, B_2 ⊆ V are disjoint, then R_eff(A, B_1 ∪ B_2) ≥ (R_eff(A, B_1) · R_eff(A, B_2)) / (R_eff(A, B_1) + R_eff(A, B_2)). Proof. By considering the quotient graph, the lemma can be reduced to the case where A = {u}. Let {S_t} be the discrete-time random walk on the network, and define T_B = min{t ≥ 0 : S_t ∈ B} and T⁺_B = min{t ≥ 1 : S_t ∈ B} for B ⊆ V. It is clear that for a random walk started at u, we have P_u(T⁺_u > T_{B_1∪B_2}) ≤ P_u(T⁺_u > T_{B_1}) + P_u(T⁺_u > T_{B_2}). Combined with (33), this gives 1/R_eff(u, B_1 ∪ B_2) ≤ 1/R_eff(u, B_1) + 1/R_eff(u, B_2), yielding the desired inequality. The Gaussian free field We recall the graph Laplacian ∆ : ℓ²(V) → ℓ²(V), defined by ∆f(x) = c_x f(x) − Σ_y c_{xy} f(y). Consider a connected network G(V). Fix a vertex v_0 ∈ V, and consider the random process X = {η_v}_{v∈V}, where η_{v_0} = 0 and X has density proportional to exp(−½⟨X, ∆X⟩) = exp(−¼ Σ_{u,v} c_{uv} |η_u − η_v|²). (35) The process X is called the Gaussian free field (GFF) associated with G. The next lemma is known; see, e.g., Theorem 9.20 of [28]. We include the proof for completeness. Lemma 2.14. For any connected network G(V), if X = {η_v}_{v∈V} is the associated GFF, then for all u, v ∈ V, E(η_u − η_v)² = R_eff(u, v). (36) Proof. From (35), and the fact that the Laplacian is positive semi-definite, it is clear that X is a Gaussian process. Let Γ_{v_0}(u, v) = E_u L^v_{T_0}, where T_0 is the hitting time of v_0 as in Theorem 1.14. From Lemma 2.1, we have Γ_{v_0}(u, v) = ½(R_eff(v_0, u) + R_eff(v_0, v) − R_eff(u, v)). (37) Let ∆̄ and Γ̄_{v_0}, respectively, be the matrices ∆ and Γ_{v_0} with the row and column corresponding to v_0 removed. Appealing to (35), if we can show that ∆̄ Γ̄_{v_0} = I, it follows that Γ̄_{v_0} is the covariance matrix of X. In this case, comparing (37) to E(η_u η_v) = ½(Eη_u² + Eη_v² − E(η_u − η_v)²) and using η_{v_0} = 0, we see that (36) follows.
In order to demonstrate ∆̄ Γ̄_{v_0} = I, we consider u, v such that v_0 ∉ {u, v}. Conditioning on the first step of the walk from u gives c_u Γ_{v_0}(u, v) = c_u E_u L^v_{T_0} = 1_{u=v} + Σ_w c_{uw} E_w L^v_{T_0} = 1_{u=v} + Σ_w c_{uw} Γ_{v_0}(v, w). (38) On the other hand, by definition of the Laplacian, (∆Γ_{v_0})(u, v) = c_u Γ_{v_0}(u, v) − Σ_w c_{uw} Γ_{v_0}(v, w) = 1_{u=v}, where the latter equality is precisely (38). Thus ∆̄ Γ̄_{v_0} = I, completing the proof. A geometric identity. In what follows, for a set of points Y lying in some Hilbert space, we use aff(Y) to denote their affine hull, i.e., the closure of {Σ^n_{i=1} α_i y_i : n ≥ 1, y_i ∈ Y, Σ^n_{i=1} α_i = 1}. Of course, when Y contains the origin, aff(Y) is simply the closed linear span of Y. Lemma 2.15. For any network G(V), if X = {η_v}_{v∈V} is the GFF associated with G, then for any w ∈ V and subset S ⊆ V, R_eff(w, S) = dist_{L²}(η_w, aff({η_u}_{u∈S}))². Proof. Since the statement of the lemma is invariant under translation, we may assume that the GFF is defined with respect to some v_0 ∈ S. In this case, by the definition in (35), the GFF for G/S has density proportional to exp(−¼ (Σ_{u,v∉S} c_{uv} |η_u − η_v|² + Σ_{u∉S} c_{v_S u} |η_u|²)), i.e., the GFF on G/S is precisely the initial Gaussian process X conditioned on the linear subspace A_S = {η_v = η_{v_0} = 0 : v ∈ S}. Using (31) and Lemma 2.14, we have R_eff(w, S) = R^{G/S}_eff(w, v_S) = E[|η_w − η_{v_0}|² | A_S] = E[|η_w|² | A_S]. To compute the latter expectation, write η_w = Y + Y′, where Y′ ∈ span({η_v}_{v∈S}) and E(YY′) = 0. It follows immediately that dist_{L²}(η_w, aff({η_u}_{u∈S}))² = E[Y²] = E[|η_w|² | A_S], completing the proof. Majorizing measures We now review the relevant parts of the majorizing measure theory. One is encouraged to consult the book [52] for further information. In Section 1, we saw Talagrand's γ₂ functional.
For our purposes, it will be more convenient to work with a different quantity that is equivalent to the functional γ₂, up to universal constants. In Section 3.2, we discuss separated trees and prove a number of standard properties of such objects. In Section 3.3, we present a deterministic algorithm for computing γ₂(X, d) for any finite metric space (X, d). Finally, in Section 3.4, we specialize the theory of Gaussian processes and trees to the case of GFFs. There, we will use the geometric properties proved in Sections 2.3 and 2.4. Before we begin, we attempt to give some rough intuition about the role of trees in the majorizing measures theory. A good reference for this material is [27]. A tree of subsets of X is a finite collection F of subsets with the property that for all A, B ∈ F, either A ∩ B = ∅, or A ⊆ B, or B ⊆ A. A set B is a child of A if B ⊆ A, B ≠ A, and C ∈ F, B ⊆ C ⊆ A =⇒ C = B or C = A. We assume that X ∈ F, and X is referred to as the root of the tree F. For each A ∈ F, we use N(A) to denote the number of children of A. A branch of F is a sequence A_1 ⊃ A_2 ⊃ ··· such that each A_{k+1} is a child of A_k. A branch is maximal if it is not contained in a longer branch. We will assume additionally that every maximal branch terminates in a singleton set {x} for x ∈ X. Let {η_x}_{x∈X} be a centered Gaussian process with X finite, and let d(x, y) = (E(η_x − η_y)²)^{1/2}. The basic premise of the tree interpretation of the majorizing measures theory is that one can assign a measure of "size" to any tree of subsets in X, and this size provides a lower bound on E sup_{x∈X} η_x. The majorizing measures theorem then asserts that the value of the optimal such tree is within absolute constants of the expected supremum. The size of the tree (see (39)) can be defined using only the metric structure of (X, d), without reference to the underlying Gaussian process. Thus many of the theorems in this section are stated for general metric spaces.
The tree of subsets is meant to capture the structure of (X, d) at all scales simultaneously. In general, to obtain a multi-scale lower bound on the expected supremum of the process, one arranges matters so that the diameter of the subsets decreases exponentially as one goes down the tree, and all subsets at one level of the tree are separated by a constant fraction of their diameter (see Definitions 3.1 and 3.8 below). This allows a certain level of independence between different branches of the tree, which is exploited in the lower bounds. Much of this section is devoted to proving that one can construct a near-optimal tree with a number of regularity properties that will be crucial to our approach in Section 4. Trees, measures, and functionals Let (X, d) be an arbitrary metric space. Definition 3.1. For values q ∈ N and α, β > 0, and r ≥ 2, a tree of subsets F in X is called a (q, r, α, β)-tree if to each A ∈ F, one can associate a number n(A) ∈ Z such that the following three conditions are satisfied. 1. If B is a child of A, then n(B) ≤ n(A) − q. 2. If B, B′ are distinct children of A, then d(B, B′) ≥ β r^{n(A)−1}. 3. diam(A) ≤ α r^{n(A)}. We will refer to a (q, r, 4, ½)-tree as simply a (q, r)-tree. The r-size of a tree of subsets F, written size_r(F), is defined as the infimum of Σ_{k≥1} r^{n(A_k)} √(log⁺ N(A_k)) (39) over all possible maximal branches A_1 ⊃ A_2 ⊃ ··· of F, where we use the notation log⁺ x = log x for x ≠ 0, and log⁺(0) = 0. To connect trees of subsets with the γ₂ functional, we recall the relationship with majorizing measures. The next result is from [51, Thm. 1.1]. Theorem 3.2. For every metric space (X, d), we have γ₂(X, d) ≍ inf sup_{x∈X} ∫₀^∞ (log 1/µ(B(x, ε)))^{1/2} dε, where B(x, ε) is the closed ball of radius ε about x, and the infimum is over all finitely supported probability measures µ on X. We will also need the following theorem due to Talagrand (see Proposition 4.3 of [50] and also Theorem T5 of [27]). We will employ it now and also in Section 3.3. Theorem 3.3. There is a value r_0 ≥ 2 such that the following holds. Let (X, d) be a finite metric space, and r ≥ r_0.
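To make (39) concrete, here is a minimal sketch of the size functional. The nested-tuple encoding of a tree of subsets is our own convention; only the scales n(A) and the child counts N(A) enter the computation.

```python
import math

# A tree of subsets encoded (our convention) as nested tuples (n_A, children),
# where n_A is the scale n(A) and children is a list of subtrees; a node with
# no children stands for a singleton terminating a maximal branch.
def size_r(tree, r):
    """inf over maximal branches of sum_k r^{n(A_k)} * sqrt(log+ N(A_k))."""
    n_A, children = tree
    N = len(children)
    here = r ** n_A * math.sqrt(math.log(N)) if N > 0 else 0.0  # log+ 1 = log+ 0 = 0
    if N == 0:
        return here
    return here + min(size_r(child, r) for child in children)

# A root at scale 2 with three leaf children at scale 0:
tree = (2, [(0, []), (0, []), (0, [])])
r = 4
# every maximal branch contributes r^2 * sqrt(log 3) at the root and 0 below
assert abs(size_r(tree, r) - 16 * math.sqrt(math.log(3))) < 1e-12
```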
Assume there is a family of functions {ϕ_i : X → R₊ : i ∈ Z} such that the following conditions hold for some β > 0. 1. ϕ_i(x) ≥ ϕ_{i−1}(x) for all i ∈ Z and x ∈ X. 2. If t_1, t_2, …, t_N ∈ B(s, r^j) are such that d(t_i, t_{i′}) ≥ r^{j−1} for i ≠ i′, then ϕ_j(s) ≥ β r^j √(log N) + min{ϕ_{j−2}(t_i) : i = 1, 2, …, N}. Under these conditions, γ₂(X, d) ≲_{r,β} sup_{x∈X, i∈Z} ϕ_i(x). The preceding two theorems allow us to present the following connection between trees and γ₂. Such a connection is well known (see, e.g., [49]), but we record the proofs here for completeness, and for the precise quantitative bounds we will use in future sections. Lemma 3.4. There is a value r_0 ≥ 2 such that for every finite metric space (X, d), and every r ≥ r_0, we have γ₂(X, d) ≲_r sup{size_r(F) : F is a (1, r, 4, ½)-tree in X}. (40) Proof. First, for a subset S ⊆ X, let θ(S) = sup{size_r(F) : F is a (1, r, 4, ½)-tree in S}. Then, for every i ∈ Z and x ∈ X, define ϕ_i(x) = θ(B(x, 2r^i)), where B(x, R) is the closed ball of radius R about x ∈ X. We now wish to verify that the conditions of Theorem 3.3 hold for {ϕ_i}. Condition (1) is immediate. Assume that r ≥ 8. Given t_1, t_2, …, t_N as in condition (2) of Theorem 3.3, consider the set A = B(s, 2r^j), which has diameter bounded by 4r^j, and the disjoint subsets of A given by A_i = B(t_i, 2r^{j−2}), which each have diameter bounded by 4r^{j−2} and which satisfy d(A_i, A_j) ≥ r^{j−1}/2 for i ≠ j. We also have A_i ⊆ A for each i ∈ {1, …, N}. Taking the tree of subsets with root A, n(A) = j, and children {A_i}^N_{i=1}, and inside each A_i a tree which achieves value at least θ(A_i) = θ(B(t_i, 2r^{j−2})) = ϕ_{j−2}(t_i), we see immediately that ϕ_j(s) = θ(B(s, 2r^j)) ≥ r^j √(log N) + min{ϕ_{j−2}(t_i) : i = 1, 2, …, N}, confirming condition (2) of Theorem 3.3. Applying the theorem, it follows that γ₂(X, d) ≲_r θ(X), proving (40). We will need the upper bound (40) to hold for (2, r, 4, ½)-trees.
Toward this end, we state a version of [49, Thm. 3.1]. The theorem there is only proved for α = 1 and β = ½, but it is straightforward to see that it works for all values α, β > 0, since the proof merely proceeds by choosing an appropriate subtree of the given tree; the values α and β are not used. Theorem 3.5. For every metric space (X, d), the following holds. For every α, β, r > 0 and q ∈ N, and for every (1, r, α, β)-tree F in X, there exists a (q, r, α, β)-tree F′ in X such that size_r(F) ≤ q · size_r(F′). Combining Theorem 3.5 with Lemma 3.4 yields the following upper bound using (2, r)-trees. Corollary 3.6. There is a value r_0 ≥ 2 such that for every finite metric space (X, d), and every r ≥ r_0, we have γ₂(X, d) ≲_r sup{size_r(F) : F is a (2, r, 4, ½)-tree in X}. Now we move on to a lower bound on γ₂. Lemma 3.7. There is a value r_0 ≥ 2 such that for every finite metric space (X, d), and every r ≥ r_0, we have γ₂(X, d) ≳ sup{size_r(F) : F is a (1, r, 8, 1/6)-tree}. Proof. We will show that for any probability measure µ on X and any (1, r, 8, 1/6)-tree F in X, we have size_r(F) ≲_r sup_{x∈X} ∫₀^∞ (log 1/µ(B(x, ε)))^{1/2} dε. The basic idea is that if A_1, A_2, …, A_k are the children of A in F, then the sets B(A_i, (1/20) r^{n(A)−1}) are disjoint by property (2) of Definition 3.1, where we write B(S, R) = {x ∈ X : d(x, S) ≤ R}. Thus one of these sets A_i has µ(B(A_i, (1/20) r^{n(A)−1})) ≤ 1/N(A). Thus we may find a finite sequence of sets, starting with A^(0) = X, such that A^(i+1) is a child of A^(i) and µ(B(A^(i+1), (1/20) r^{n(A^(i))−1})) ≤ 1/N(A^(i)). Since every maximal branch in a tree of subsets terminates in a singleton, the sequence ends with some set A′ = A^(h) = {x}. By construction, we have µ(B(x, (1/20) r^{n(A′)−1})) ≤ 1/N(A′). Thus, assuming r ≥ 40, r^{n(A′)−2} √(log⁺ N(A′)) ≤ ∫_{r^{n(A′)−2}}^{(1/20) r^{n(A′)−1}} (log 1/µ(B(x, ε)))^{1/2} dε. (42) Separated trees Let (X, d) be an arbitrary metric space.
Consider a finite, connected, graph-theoretic tree T = (V, E) (i.e., a connected, acyclic graph) such that V ⊆ X, with a fixed root z ∈ V and a mapping s : V → Z. Abusing notation, we will sometimes use T for the vertex set of T. For a vertex x ∈ T, we use T_x to denote the subtree rooted at x, and we use Γ(x) to denote the set of children¹ of x with respect to the root z. Finally, we write ∆(x) = |Γ(x)| + 1 for all x ∈ T. Let L be the set of leaves of T. For any v ∈ T, let P(v) = {z, …, v} denote the set of nodes on the unique path from the root to v. For a pair of nodes u, v ∈ T, we use P(u, v) to denote the sequence of nodes on the unique path from u to v. If u is the parent of v, we write u = p(v), and in particular we write z = p(z). For any such pair (T, s) and r ≥ 2, we define the value of (T, s) by val_r(T, s) = inf_{ℓ∈L} Σ_{v∈P(ℓ)} r^{s(v)} √(log ∆(v)). (43) The following definition will be central. ¹ Formally, these are precisely the neighbors of x in T whose unique path to the root z passes through x. Definition 3.8. For a value r ≥ 2, we say that the pair (T, s) is an r-separated tree in (X, d) if it satisfies the following conditions for all x ∈ T. 1. For all y ∈ Γ(x), s(y) ≤ s(x) − 2. 2. For all distinct u, v ∈ Γ(x), we have d(x, T_u) ≥ ½ r^{s(x)−1} and d(T_u, T_v) ≥ ½ r^{s(x)−1}. 3. diam(T_x) ≤ 4r^{s(x)}. We remark that our separated tree is a slightly different version of the (2, r)-tree introduced in the preceding section. The main difference is that the nodes of our separated tree are points in the metric space X, whereas a node in a (2, r)-tree is a subset of X. Our definition is tailored for the application in Section 4. Not surprisingly, we have a similar version of the above theorem for separated trees. Theorem 3.9. For some r_0 ≥ 2 and every r ≥ r_0, and any metric space (X, d), we have sup_T val_r(T, s) ≍_r γ₂(X, d), where the supremum is over all r-separated trees in X. Theorem 3.9 follows from Corollary 3.6 and the following lemma. Lemma 3.10.
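Like the size functional, the value (43) of a pair (T, s) depends only on the scales s(v) and the degrees ∆(v) along root-leaf paths, so it is easy to compute. A minimal sketch, again with a nested-tuple encoding of our own choosing:

```python
import math

# A rooted tree encoded (our convention) as nested tuples (s_v, children);
# Delta(v) = |Gamma(v)| + 1, and
# val_r(T, s) = inf over leaves l of sum_{v in P(l)} r^{s(v)} * sqrt(log Delta(v)).
def val_r(node, r):
    s_v, children = node
    term = r ** s_v * math.sqrt(math.log(len(children) + 1))
    if not children:                  # leaf: log Delta = log 1 = 0, path ends
        return term
    return term + min(val_r(child, r) for child in children)

# Root at scale 3 with two children at scale 1; one child has two leaf
# children at scale -1, the other child is itself a leaf.
tree = (3, [(1, [(-1, []), (-1, [])]), (1, [])])
r = 4
best = min(
    r**3 * math.sqrt(math.log(3)) + r**1 * math.sqrt(math.log(3)),  # via the first child
    r**3 * math.sqrt(math.log(3)) + 0.0,                            # via the leaf child
)
assert abs(val_r(tree, r) - best) < 1e-12
```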
Consider r ≥ 8 and any metric space (X, d). For any (2, r)-tree F, there is an r-separated tree T such that size_r(F) = val_r(T, s). Also, for any r-separated tree T, there is a (2, r)-tree F such that size_r(F) ≥ val_r(T, s) − r diam(X). Proof. We only prove the first half of the statement, since the second half can be obtained by reversing the construction. The additive factor −r diam(X) is due to the slight difference between the definition of the value of a separated tree and that of the size of a (2, r)-tree (see (43) and (39)). Let F be a (2, r)-tree on (X, d). For each A ∈ F with N(A) ≥ 1, we select one child c(A) and an arbitrary point v_A ∈ c(A). We now construct the separated tree T; its vertex set is a subset of {v_A : A ∈ F}. Let us first verify that T is an r-separated tree. Condition (1) of Definition 3.8 holds because if y is a child of v_A ∈ T, then y = v_B for some child B of A (in F), which implies s(y) = n(B) ≤ n(A) − 2 = s(v_A) − 2. Secondly, if v_A is a node with children v_{B_1}, v_{B_2}, …, v_{B_k}, then clearly, by Definition 3.1, d(v_A, T_{v_{B_i}}) ≥ d(c(A), B_i) ≥ ½ r^{s(v_A)−1} and d(T_{v_{B_i}}, T_{v_{B_j}}) ≥ d(B_i, B_j) ≥ ½ r^{s(v_A)−1}, verifying condition (2) of Definition 3.8. Thirdly, if v_A ∈ T, then for any child v_B of v_A, we know that B is a child of A, hence diam(T_{v_B}) ≤ diam(B) ≤ 4r^{n(B)} = 4r^{s(v_B)}, using property (3) of a (q, r)-tree. This verifies condition (3) of Definition 3.8. Finally, observe that for every non-leaf node v_A ∈ T, we have ∆(v_A) = |Γ(v_A)| + 1 = N(A), and for leaves, we have log ∆(v_A) = log⁺ N(A) = 0. It follows that val_r(T, s) = size_r(F), completing the proof. Additional structure We now observe that we can take our separated trees to have some additional properties. Say that an r-separated tree (T, s) is C-regular for some C ≥ 1 if it satisfies, for every v ∈ T \ L, ∆(v) ≤ exp(C² r² 4^{s(z)−s(v)}). (44) Lemma 3.11.
For every C 1 and r 4, for every r-separated tree (T , s) in X, if val r (T , s) 4Cr s(z)+1 , then there is a C-regular r-separated tree (T ′ , s ′ ) in X with Proof. Consider the following operation on an r-separated tree (T , s). For x ∈ T \ L, consider a new r-separated tree (T ′ , s ′ ) = Φ x (T , s), which is defined as follows. Let u be the child of x and let S contain the remaining children such that val r (T u , s| Tu ) val r (T v , s| Tv ) for all v ∈ S ,(45) where T u is the subtree of T rooted at u and containing all its descendants, and s| Tu is the restriction of s on the subtree T u . Consider the tree T ′ that results from deleting all the nodes in S, as well as the subtrees under them, and then contracting the edge (x, u). We also put s ′ (x) = s(u) and s ′ (y) = s(y) for all y ∈ T ′ . As long as there is a node x ∈ T \ L which violates (44) (for the current (T ′ , s ′ )), we iterate this procedure (namely, we replace (T ′ , s ′ ) by Φ x (T ′ , s ′ )). It is clear that we end with a C-regular tree (T ′ , s ′ ). Note that different choices of x at each stage will lead to different outcomes, but the following proof shows that all of them satisfy the required condition. It is also straightforward to verify that for any ℓ ∈ L ′ , we have v∈P T ′ (ℓ) r s ′ (v) log ∆ T ′ (v) v∈P T (ℓ) r s(v) log ∆ T (v) − Cr v∈P T (ℓ) r s(v) 2 s(z)−s(v) v∈P T (ℓ) r s(v) log ∆ T (v) − Cr s(z)+1 ∞ k=0 2 2k r −2k v∈P T (ℓ) r s(v) log ∆ T (v) − 2Cr s(z)+1 val r (T , s) − 2Cr s(z)+1 1 2 val r (T , s). where in the second line we have used property (1) of Definition 3.8, in the third line, we have used r 4, and in the final line we have used our assumption that val r (T , s) 4Cr s(z)+1 . It remains to prove that val r (T , s) val r (T ′ , s ′ ). The issue here is that it is possible L ′ L. However, by our choice of u at each stage (as in equation (45)), it is guaranteed that ℓ ∈ L ′ for a certain ℓ ∈ L such that val r (T , s) = v∈P(ℓ) r s(v) log ∆(v). This completes the proof. 
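The pruning arguments in Lemmas 3.10 and 3.11 repeatedly compare the values val_r of subtrees. As an illustrative aside (not part of the original text), the quantity val_r(T, s) = inf over leaves ℓ of Σ_{v∈P(ℓ)} r^{s(v)} √(log Δ(v)) from (43) admits a simple bottom-up computation; the sketch below assumes the tree is given as a child-adjacency dictionary, uses natural logarithm, and all names (val_r, children, s) are illustrative.

```python
import math

def val_r(children, s, r, root):
    """Compute val_r(T, s): the minimum over leaves l of
    sum over v on the root-to-l path of r**s(v) * sqrt(log(Delta(v))),
    where Delta(v) = |children(v)| + 1.  A leaf has Delta = 1, so it
    contributes sqrt(log 1) = 0 to its own path."""
    def best(v):
        # contribution of v itself on any root-to-leaf path through v
        here = r ** s[v] * math.sqrt(math.log(len(children[v]) + 1))
        if not children[v]:        # leaf: the path ends here
            return here
        # inf over leaves = take the cheapest continuation downward
        return here + min(best(c) for c in children[v])
    return best(root)

# Tiny example: root z with children a (a leaf) and b, where b has one leaf child c.
children = {'z': ['a', 'b'], 'a': [], 'b': ['c'], 'c': []}
s = {'z': 2, 'a': 0, 'b': 0, 'c': -2}
# Path z->a costs 2**2 * sqrt(log 3); path z->b->c additionally pays sqrt(log 2) at b,
# so the infimum in (43) is attained at the leaf a.
```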
We next study the subtrees of separated trees. In what follows, we continue denoting by s| T ′ the restriction of s on T ′ for T ′ ⊆ T , and we use a subscript T ′ to refer to the subtree T ′ . Lemma 3.12. For every r-separated tree (T , s), there is a subtree T ′ ⊆ T such that (T ′ , s| T ′ ) is an r-separated tree satisfying the following conditions. 1. val r (T , s) ≍ val r (T ′ , s| T ′ ). For every v ∈ T ′ \ L T ′ , ∆ T ′ (v) = ∆(v). For every v ∈ T ′ \ L T ′ and w ∈ L T ′ ∩ T v , u∈P(v,w) r s(u) log ∆ T ′ (u) 1 2 r s(p(v)) log ∆ T ′ (p(v)).(46) Proof. We construct the subtree T ′ in the following way. We examine the vertices of v ∈ T in the breadth-first search order (that is, we order the vertices such that their distances to the root are non-decreasing). If v is not deleted yet and for some ℓ ∈ L ∩ T v , u∈P(v,ℓ) r s(u) log ∆ T (u) r s(p(v)) log ∆ T (p(v)) ,(47) we delete all the descendants of v. Let T ′ be the subtree obtained at the end of the process. It is clear that (T ′ , s| T ′ ) is a separated tree, and it remains to verify the required properties. By the construction of our subtree T ′ , we see that whenever a vertex is deleted, all its siblings are deleted. So for a node v ∈ T ′ \ L T ′ , all the children in T of v are preserved in T ′ , yielding property (2). Note that if v ∈ L T ′ \ L, there exists ℓ ∈ L ∩ T v such that (47) holds. Therefore, we see u∈P(z,v) r s(u) log ∆ T ′ (u) = u∈P(z,v)\{v} r s(u) log ∆ T (u) 1 2 u∈P(z,ℓ) r s(u) log ∆ T (u) 1 2 val r (T , s) . This verifies property (1) (noting that the reverse inequality is trivial). Take v ∈ T ′ \ L T ′ and w ∈ L T ′ ∩ T v . If w ∈ L, we see that (46) holds for v and w since (47) does not hold for v and ℓ = w (otherwise all the descendants of v have to be deleted and v will be a leaf node in T ′ ). If w ∈ L, there exists ℓ 0 ∈ L ∩ T w such that u∈P(w,ℓ 0 ) r s(u) log ∆ T (u) r s(p(w)) log ∆ T (p(w)) . Recall that (47) fails with ℓ = ℓ 0 . 
Altogether, we conclude that u∈P(v,w) r s(u) log ∆ T ′ (u) = u∈P(v,ℓ 0 ) r s(u) log ∆ T (u) − u∈P(w,ℓ 0 ) r s(u) log ∆ T (u) 1 2 u∈P(v,ℓ 0 ) r s(u) log ∆ T (u) 1 2 r s(p(v)) log ∆ T (p(v)) , establishing property (3) and completing the proof. Finally, we observe that separated trees are stable in the following sense. Lemma 3.13. Fix 0 < δ < 1. Suppose that (T , s) is an r-separated tree in X, and for every node v ∈ V , we delete all but ⌈δ · |Γ(v)|⌉ of its children. Denote by T ′ the induced tree on the connected component containing z(T ). Then (T ′ , s| T ′ ) is an r-separated tree and val r (T , s) ≍ δ val r (T ′ , s| T ′ ). Proof. It is clear that Properties (1), (2) and (3) of separated trees are preserved for the induced tree T ′ for s| T ′ . So (T ′ , s) is an r-separated tree. Furthermore, for every leaf ℓ of T ′ , v∈P(ℓ) r s(v) log ∆ T ′ (v) v∈P(ℓ) r s(v) log(1 + ⌈δ · |Γ(v)|⌉) c(δ) v∈P(ℓ) r s(v) log(1 + |Γ(v)|) c(δ)val r (T , s) , where c(δ) is a constant depending only on δ. It follows that val r (T ′ , s| T ′ ) c(δ)val r (T , s), completing the proof since the reverse direction is obvious. Computing an approximation to γ 2 deterministically We now present a deterministic algorithm for computing an approximation to γ 2 . Theorem 3.14. Let (X, d) be a finite metric space, with n = |X|. If, for any two points x, y ∈ X, one can compute d(x, y) in time polynomial in n, then one can compute a number A(X, d) in polynomial time, for which A(X, d) ≍ γ 2 (X, d). Proof. Fix r 16. First, let us assume that 1 d(x, y) r M for x = y ∈ X and some M ∈ N. Fix x 0 ∈ X. Our algorithm constructs functions ϕ 0 , ϕ 1 , . . . , ϕ M : X → R + . We will return the value A(X, d) = ϕ M (x 0 ). First put ϕ 1 (x) = ϕ 0 (x) = 0 for all x ∈ X. Next, we show how to construct ϕ j given ϕ 0 , ϕ 1 , . . . , ϕ j−1 . For x ∈ X and r 0, we use B(x, r) △ = {y ∈ X : d(x, y) r}. First, we construct a maximal 1 3 r j−1 net N j in X in the following way. Supposing that y 1 , . . . 
, y k have already been chosen, let y k+1 be a point satisfying ϕ j−2 (y k+1 ) = max ϕ j−2 (y) : y ∈ X \ k i=1 B x, 1 3 r j−1 , as long as there exists some point of X \ k i=1 B(x, 1 3 r j−1 ) remaining. For x ∈ X, set g j (x) = y min{k:d(x,y k ) 1 3 r j−1 } . Now we define ϕ j (x) for x ∈ X. Suppose that B(x, 2r j ) ∩ N j = {y ℓ 1 , y ℓ 2 , . . . , y ℓ h }, with ℓ 1 ℓ 2 · · · ℓ h , and define 1 16 r j−2 ) is empty. I. ϕ j (x) = ϕ j−1 (x) if B(g j (x), 4r j ) \ B(g j (x), II. Otherwise, ϕ j (x) = max max k h r j log k + min i k ϕ j−2 (y ℓ i ) , max{ϕ j−1 (z) : z ∈ B(x, 1 3 r j−1 )} . (48) Now, we verify that {ϕ j } M j=0 satisfies the conditions of Theorem 3.3. The monotonicity condition (1) is satisfied by construction. We will now verify condition (2), starting with the following lemma. Proof. We prove this by induction on j. Clearly it holds vacuously for j 2. Assume that it holds for ϕ 0 , ϕ 1 , . . . , ϕ j−1 and j 2. By the condition of the lemma and the fact that s ∈ B(g j (s), 1 3 r j−1 ), we have d(s, g j (s)) 1 16 r j−2 ,(49) which implies that B(s, 2r j ) \ B(s, 1 8 r j−2 ) is also empty. Furthermore, we have g j (s) = g j (t), since otherwise d(g j (s), g j (t)) 1 3 r j−1 , and we would conclude that 2r j d(g j (t), s) d(g j (s), g j (t)) − d(s, g j (s)) 1 3 r j−1 − 1 16 r j−2 1 8 r j−1 , contradicting the fact that B(s, 2r j ) \ B(s, 1 8 r j−2 ) is empty. It follows that B(s, 2r j ) \ B(s, 1 8 r j−2 ) = ∅ and B(t, 2r j ) \ B(t, 1 8 r j−2 ) = ∅ .(50) Since g j (s) = g j (t), we conclude that both ϕ j (s) and ϕ j (t) are defined by case (I) above, hence ϕ j (s) = ϕ j−1 (s) and ϕ j (t) = ϕ j−1 (t) . So we are done by induction unless B(g j (s), 4r j−1 ) \ B(g j (s), 1 16 r j−3 ) is non-empty, in which case ϕ j−1 (s) and ϕ j−1 (t) are defined by case (II). But from (50) and d(s, t) r j , we see that B(t, 2r j−1 ) = B(s, 2r j−1 ) and B(s, 1 3 r j−2 ) = B(t, 1 3 r j−2 ) as well. 
This implies that ϕ j−1 (s) and ϕ j−1 (t) see the same maximization in (48), hence ϕ j−1 (s) = ϕ j−1 (t) and by (51) we are done. Now, let s, t 1 , . . . , t N ∈ X be as in condition (2), and let B(s, 2r j ) ∩ N j = {y ℓ 1 , y ℓ 2 , . . . , y ℓ h } be such that ℓ 1 ℓ 2 · · · ℓ h . If B(g j (s), 4r j ) \ B(g j (s), 1 16 r j−1 ) is empty, then N = 1, and Lemma 3.15 implies that ϕ j (s) = ϕ j (t 1 ) ϕ j−2 (t 1 ), where the latter inequality follows from monotonicity. Thus we may assume that ϕ j (s) is defined by case (II). To every t i , we can associate a distinct point g j (t i ) ∈ B(s, 2r j ) ∩ N j , and by construction we have ϕ j−2 (g j (t i )) ϕ j−2 (t i ), since ϕ j−2 (y k ) is decreasing as k increases. Using this property again in conjunction with the definition (48), we have ϕ j (s) r j log N + min{ϕ j−2 (y ℓ i ) : i = 1, . . . , N } r j log N + min{ϕ j−2 (g j (t i )) : i = 1, . . . , N } r j log N + min{ϕ j−2 (t i ) : i = 1, . . . , N }, completing our verification of condition (2) of Theorem 3.3. Applying Theorem 3.3, we see that γ 2 (X, d) sup x∈X,i∈Z ϕ i (x) = ϕ M (x 0 ) = A(X, d).(52) To prove the matching lower bound, we first build a tree T whose vertex set is a subset of X × Z. The root of T is (x 0 , M ). In general, if (x, j) is already a vertex of T with j 1, then we add children to (x, j) according to the maximizer of (48). If ϕ j (x) = ϕ j−1 (z), then we make (z, j − 1) the only child of (x, j). Otherwise, we put the nodes (y 1 , j − 2), . . . , (y h , j − 2) as children of (x, j), where {y i } ⊆ N j are the nodes that achieve the maximum in (48). Let the pair (T ′ , s) be a constructed in the following way from T . We replace every maximal path of the form (x, j 0 ), (x, j 0 − 1), . . . , (x, j 0 − k) by the vertex x and put s(x) = j 0 − k. It follows immediately by construction that val r (T ′ , s) ϕ M (x 0 ) + r diam(X, d) ϕ M (x 0 ),(53) where the latter inequality follows from (52), since ϕ M (x 0 ) γ 2 (X, d) diam(X, d). 
Note that the correction term of diam(X, d) in (53) is simply because of the use of ∆(v) = |Γ(v)| + 1 in the definition (43). We next build a (1, r, 8, 1 16 )-tree F, which essentially captures the structure of the tree T . In general, the sets in F will be balls in X, with the node (x, j) ∈ T being associated with the set B(x, 4r j ) in F, which will have label n(B(x, 4r j )) = j. We construct the (1, r, 8, 1 16 )-tree F recursively. The root of F is B(x 0 , 4r M ) (which is equal to X), and we define n(B(x, 4r j )) = M . In general, if F contains the set B(x, 4r j ) corresponding to the node (x, j) ∈ T , and if (x, j) has children (y 1 , j − 2), (y 2 , j − 2), . . . , (y h , j − 2) ∈ T , we add the sets B(y i , 4r j−2 ) as children of B(x, 4r j ) in F, with n(B(y i , 4r j−2 )) = j − 2. Likewise, if (z, j − 1) is the child of (x, j), then we add the set B(z, 4r j−1 ) as the unique child of B(x, 4r j ) in F and put n(B(z, 4r j−1 )) = j − 1. We continue in this manner until T is exhausted. We now verify that F is indeed a (1, r, 8, 1 6 )-tree. First, note that if (z, j − 1) is a child of (x, j) in T , then clearly B(z, 4r j−1 ) ⊆ B(x, 4r j ) since this can only happen if d(x, z) 1 3 r j−1 . Also, if (y 1 , j − 2), . . . , (y h , j − 2) are the children of (x, j), then by the construction of the maps in (48), we have d(y i , x) 2r j , hence B(y i , 4r j−2 ) ⊆ B(x, 4r j ), recalling that r 16. Furthermore, for i = k, since y i , y k ∈ N j , we have d(y i , y k ) 1 3 r j−1 , so B(y i , 4r j−2 ) ∩ B(y k , 4r j−2 ) = ∅, verifying that F is indeed a tree of subsets. In fact, we have the estimate d B(y i , 4r j−2 ), B(y k , 4r j−2 ) 1 3 r j−1 − 8r j−2 1 6 r j−1 = 1 6 r n(B(x,4r j ))−1 , using r 16. This verifies that property (2) of a (1, r, 1, 1 6 )-tree is satisfied. Furthermore, property (1) of a (1, r, 8, 1 6 )-tree follows immediately by construction. 
Finally, to verify property (3), note that for any set in our tree of subsets F, corresponding to a node of the form (x, j) ∈ T , we have diam(B(x, 4r j )) 8r j and n(B(x, 4r j )) = j. By construction, we have val r (T ′ , s) size r (F) + r diam(X, d), and Lemma 3.7 yields γ 2 (X, d) size r (F) + diam(X, d) (using γ 2 (X, d) diam(X, d)). Combining this with (53) shows that γ 2 (X, d) val r (T ′ , s) ϕ M (x 0 ) = A(X, d). Together with (52), this shows that γ 2 (X, d) ≍ A(X, d). The only thing left is to remove the dependence of our running time on M . But since there are at most n 2 distinct distances in (X, d), only O(n 2 ) of the maps ϕ 0 , ϕ 1 , . . . , ϕ M are distinct. More precisely, suppose that there is no pair u, v ∈ X satisfying d(u, v) ∈ [r j−3 , r j+1 ] for some j ∈ Z. In that case, ϕ j (x) is defined by case (I) for all x ∈ X, and thus ϕ j ≡ ϕ j−1 . Obviously, we may skip computation of the intermediate non-distinct maps (and it is easy to see which maps to skip by precomputing the values of j such that there are u, v ∈ X with d(u, v) ∈ [r j−3 , r j+1 ].) Since there are only O(n 2 ) non-trivial values of j, this completes the proof. Tree-like properties of the Gaussian free field Finally, we consider how the resistance metric (and hence the Gaussian free field) allows us to obtain trees with special properties. Consider a network G(V ), and the associated metric space (V, √ R eff ). Let (T , s) be an r-separated tree in G. We say that (T , s) is strongly r-separated if, for every non-root node v ∈ T , we have the inequality R eff (v, T \ T v ) 1 20 r s(p(v))−1 ,(54) where p(v) denotes the parent of v in T . Lemma 3.16. For any network G(V ) and any r 96, let (T 0 , s) be an arbitrary r-separated tree on the space (V, √ R eff ). Then there is an induced strongly r-separated tree (T , s) such that |Γ T (v)| |Γ T 0 (v)|/2 for all v ∈ T \ L T . Furthermore val r (T , s) ≍ val r (T 0 , s).(55) Proof. Consider any non-leaf node v ∈ T 0 with children c 1 , . 
. . , c k , where k 1. If k = 1, let S v = {c 1 }. Otherwise, we wish to apply Proposition 2.10 to the sets {T c i } k i=1 . By property (2) of separated trees, we get that for all x ∈ T c i , y ∈ T c j with i = j R eff (x, y) 1 2 r s(v)−1 2 = 1 4 r 2(s(v)−1) . Combined with property (3) of separated trees, Proposition 2.10 yields that there exists a subset S v ⊆ {c 1 , . . . , c k } with |S v | k/2 such that for c ∈ S v , we have R eff (T c , T v \ (T c ∪ {v})) 1 4 r 2(s(v)−1) · 1 24 1 96 r 2(s(v)−1) . Applying Lemma 2.13 with A = T c , B 1 = T v \ (T c ∪ {v}) and B 2 = {v}, we get that R eff (T c , T v \ T c ) 1 100 r 2(s(v)−1) .(56) Next, consider the induced r-separated tree (T , s) that arises from deleting, for every non-leaf node v ∈ T 0 , all the children not in S v as well as all their descendants. It is clear that for all v ∈ T \ L T , we have |Γ T (v)| |Γ T 0 (v)|/2. Lemma 3.13 then yields that val r (T , s) ≍ val r (T 0 , s). It remains to verify that (T , s) is strongly r-separated. Define D 0 = 1 and for h 1, D h = D h−1 1 − D 2 h−1 r −4h . It is straightforward to verify that D h 1/2 for all h 0, since r 2. We now prove, by induction on the height of T , that for every node u at depth h 1 in T , R eff (u, T \ T u ) 1 10 r s(p(u))−1 D h−1 .(57) By the preceding remarks, this verifies (54), completing the proof of the lemma. Let z = z(T ) be the root, and let v be some child of z. Let u ∈ T v be a node at depth h in T v (and hence at depth h + 1 in T ). By (56), we have R eff (u, T \ T v ) R eff (T v , T \ T v ) 1 10 r s(p(v))−1 .(58) If u = v, then the preceding inequality yields (57). Otherwise, u = v, and h 1. 
By the induction hypothesis (57) applied to u and T v , we have R eff (u, T v \ T u ) 1 10 r s(p(u))−1 D h−1 .(59) Since u ∈ T v is a node at depth h, we get from property (1) of a separated tree that s(p(v)) s(p(u)) + 2h and therefore 1 10 r s(p(u))−1 D h−1 r −2h · 1 10 r s(p(v))−1 D h−1 .(60) Now, using (58) and (59), we apply Lemma 2.13 with A = {u}, B 1 = T v \ T u and B 2 = T \ T v , yielding R eff (u, T \ T u ) 1 10 r s(p(u))−1 D h−1 · 1 10 r s(p(v))−1 ( 1 10 r s(p(u))−1 D h−1 ) 2 + ( 1 10 r s(p(v))−1 ) 2 1 10 r s(p(u))−1 D h−1 1 1 + (D h−1 r −2h ) 2 1 10 r s(p(u))−1 D h−1 (1 − D 2 h−1 r −4h ), where the second transition follows from (60) and the third transition follows from the fact that (1 + x 2 ) −1/2 1 − x 2 . This completes the proof. Good trees inside the GFF. Consider a Gaussian free field {η x } x∈V corresponding to network G(V ) with the associated metric space (V, d), where d(x, y) = (E(η x − η y ) 2 ) 1/2 . Proposition 3.17. For some r 0 2 and any r r 0 and C 1, there exists a constant K = K(C, r) depending only on C and r such that the following holds. For an arbitrary Gaussian free field {η x } x∈V with γ 2 (V, d) K diam(V ), there exists an r-separated tree (T , s) with set of leaves L, such that the following properties hold. (a) val r (T , s) ≍ r,C γ 2 (X, d). (b) For every v ∈ V , dist L 2 (η v , aff({η u } u / ∈Tv )) 1 20 r s(p(v))−1 . (c) For every v ∈ V , ∆(v) exp C 2 r 2 4 s(z)−s(v) for all v ∈ T \ L. (d) For every v ∈ T \ L and w ∈ L ∩ T v , u∈P(v,w) r s(u) log ∆(u) 1 2 r s(p(v)) log ∆(p(v)). We call such a tree T a C-good r-separated tree. Proof. By definition of the GFF, we have d = √ R eff for some network G(V ). Applying Theorem 3.9, there exists an r-separated tree (T 0 , s 0 ) such that val r (T 0 , s 0 ) ≍ r γ 2 (V, d). Recalling property (3) of Definition 3.8 and the assumption that γ 2 (V, d) K diam(V ), we can then select K large enough such that the condition of Lemma 3.11 is satisfied for the separated tree (T 0 , s 0 ). 
Then applying Lemma 3.11, we can get a 2C-regular separated tree (T 1 , s 1 ) with val r (T 1 , s 1 ) ≍ r,C val r (T 0 , s 0 ). At this point, using Lemma 3.16, we obtain a C-regular strongly r-separated tree (T 2 , s 2 ) such that val r (T 2 , s 2 ) ≍ r γ 2 (V, d). That is to say, the tree (T 2 , s 2 ) satisfies properties (a) and (c). Furthermore, by Lemma 2.15, we see that property (b) holds for (T 2 , s 2 ) because it is equivalent to the strongly r-separated property (54). Finally, Lemma 3.12 implies that there exists a subtree T ⊆ T 2 with val r (T , s 2 | T ) ≍ r,C val r (T 2 , s 2 ) such that property (d) holds for T and properties (a) and (c) are preserved (note that by property (2) of Lemma 3.12, the degrees of non-leaf nodes are preserved). Observe that property (b) is preserved by taking subtrees. Writing s = s 2 | T , we conclude that the separated tree (T , s) satisfies all the required properties, completing the proof. The cover time We now turn to our main theorem. Theorem 4.1. For any network G(V ) with total conductance C = x∈V c x , we have t cov (G) ≍ C γ 2 (V, R eff ) 2 . Combined with Theorem 2.3, this also yields a positive answer to the strong conjecture of Winkler and Zuckerman [54]. t cov (G) ≍ C γ 2 (V, R eff ) 2 ≍ δ t bl (G, δ). For the remainder of this section, we denote S = γ 2 (V, R eff ).(61) It is clear that for all 0 < δ < 1, we have t cov (G) t bl (G, δ), and t bl (G, δ) δ CS 2 by Theorem 2.3. Thus, in order to prove the preceding corollary and Theorem 4.1, we need only show that t cov (G) CS 2 .(62) Let {W t } be the continuous-time random walk on G(V ), and let {L v t } v∈V be the local times, as defined in Section 2. Applying the isomorphism theorem (Theorem 1.14) with some fixed v 0 ∈ V , we have L x τ (t) + 1 2 η 2 x : x ∈ V law = 1 2 (η x + √ 2t) 2 : x ∈ V ,(63) for some associated Gaussian process {η x } x∈V . 
By Lemma 2.14, this process is a Gaussian free field, and we have for every x, y ∈ V , d(x, y) △ = E |η x − η y | 2 = R eff (x, y).(64) Let D = max x,y∈V d(x, y) be the diameter of the Gaussian process. Proof outline. Let {L > 0} be the event {L x τ (t) > 0 : x ∈ V }. Consider a set S ⊆ R V , and let S L and S R be the events corresponding to the left and right-hand sides of (63) falling into S. Our goal is to find such a set S so that for some t ≍ S 2 , we have P(S R ) − P(S L ∩ {L > 0}) c,(65) for some universal constant c > 0. In this case, with probability at least c, the set of uncovered vertices {v : L v τ (t) = 0} is non-empty. Using the fact that the inverse local time τ (t) is Ct with probability at least 1 − c/2, we will conclude that t cov (G) CS 2 . Thus we are left to give a lower bound on P(S R ) and an upper bound on P(S L ∩ {L > 0}). Since the structure of the local times process {L x t } conditioned on {L > 0} can be quite unwieldy, we will only use first moment bounds for the latter task. Calculating a lower bound on P(S R ) will require a significantly more delicate application of the second-moment method, but here we will be able to exploit the full power of Gaussian processes and the majorizing measures theory. Before defining the set S ⊆ R V , we describe it in broad terms. By (64) and Theorem (MM), we know that for some t 0 ≍ S 2 , we should have E inf x∈V η x = −E sup x∈V η x close to − √ 2t 0 . By Lemma 2.2, we know that the standard deviation of inf x∈V η x is O(D). Thus we can expect that with probability bounded away from 0, for the right choice of t 0 ≍ S 2 , some value on the right-hand side of (63) is O(D) for t = t 0 . Now, when E sup x∈V η x ≫ D, it is intuitively true that for t = εt 0 and ε > 0 small, there should be many points x ∈ V with η x ≈ − √ 2t. 
If these points have some level of independence, then we should expect that with probability bounded away from 0, there is some x ∈ V with |η x − √ 2t| very small (much smaller than O(D)). Our set S will represent the existence of such a point. On the other hand, we will argue that if all the local times {L x τ (t) } are positive, then the probability for the left-hand side to have such a low value is small. A tree-like sub-process First, observe that by the commute time identity, t cov (G) C max x,y∈V R eff (x, y) = CD 2 . Thus in proving Theorem 4.1, we may assume that S KD ,(66) for any universal constant K 1. In particular, by an application of Proposition 3.17, we can assume the existence of an r-separated tree (T , s) in (V, d), for some fixed r 128, with root z = v 0 , and such that for some constant C 1 and θ = θ(C), properties (67), (70), (71), and (72) below are satisfied. We will choose C sufficiently large later, independent of any other parameters. For each u ∈ T , let h u denote the height of u, where we order the tree so that h z = 0, where z is the root. Recalling that L is the set of leaves of T , for each v ∈ L, let P(v) = {f v (0), f v (1), . . . , f v (h v )} be the set of nodes on the path from z = f v (0) to v = f v (h v ), where f v (i) is the parent of f v (i + 1), for 0 i < h. First, we can require that for every v ∈ L, σ v 1 θ S,(67) where χ v (k) △ = r s(fv(k)) log ∆(f v (k)) ,(68)σ v △ = hv−1 k=0 χ v (k).(69) Furthermore, we can require that the tree T satisfies, for every v ∈ V , hv−1 i=j+1 χ v (i) C · 2 j · r s(fv(j)) ,(70) as well as ∆(f v (k)) exp(C 2 r 2 4 k ) . Finally, we require that for every v ∈ T , dist L 2 (η v , aff({η u } u / ∈Tv )) 1 20 r s(p(v))−1 .(72) All these requirements are justified by Proposition 3.17. The distinguishing event. For u, v ∈ L, we let h uv be the height of the least common ancestor of u and v. We deg ↓ (f u (k)) . First, we fix ε = 1 2 10 rθ . 
For every v ∈ L, consider the events E v (ε) = |η v − εS| 50 r s(p(v)) m −3/4 v .(75) Instead of arguing directly about the events E v (ε), we will couple them to leaf events of a "percolation" process on T . In particular, in Section 4.2, we will prove the following lemma. Lemma 4.3. For all v ∈ L, there exist events E v such that the following properties hold. 1. E v ⊆ E v (ε) = |η v − εS| 50 r s(p(v)) m −3/4 v . 2. P(E v ) 1 2 m −7/8 v . 3. P(E u ∩ E v ) m 1/8 uv (m u m v ) −7/8 . In Section 4.3, we will prove that for any events {E v } v∈L satisfying properties (2) and (3) of Lemma 4.3, we have P u∈L E u 1 8 .(76) Thus for t = 1 2 ε 2 S 2 , we have P ∃v ∈ V : 1 2 (η v + √ 2t) 2 50 2 r 2s(p(v)) m −3/2 v 1 8 .(77) In light of the discussion surrounding (65), the reader should think of S = s ∈ R V : s v 50 2 r 2s(p(v)) m −3/2 v for some v ∈ V , and then (77) gives the desired lower bound on P(S R ). We now turn to an upper bound on P(S L ∩ {L > 0}). The next lemma is proved in Section 4.4. Lemma 4.4. For t 1 2 ε 2 S 2 , P v∈L 0 < L v τ (t) 50 2 · r 2s(p(v)) m −3/2 v 1 16 .(78) From (78) and (77), we conclude that with probability at least 1/16, we must have L v τ (t) = 0 for some v ∈ V and t = 1 2 ε 2 S 2 , else (63) is violated. This implies that P v 0 τ cov τ ( 1 2 ε 2 S 2 ) 1 16 .(79) To finish our proof of (62) and complete the proof of Theorem 4.1, we will apply Lemma 2.7 with β = 1 96 . In particular, we may choose K = 96/ε in (66), and then applying Lemma 2.7 yields P τ ( 1 2 ε 2 S 2 ) C ε 2 S 2 192 1 32 . Combining this with (79) yields P v 0 τ cov C ε 2 S 2 192 1 16 . In particular, τ cov Cε 2 S 2 . This completes the proof of (62), and hence of Theorem 4.1. The coupling The present section is devoted to the proof of Lemma 4.3. Toward this end, we will try to find a leaf v ∈ L for which η v ≈ εS. 
As in Lemma 4.3(1), the level of closeness we desire is gauged according to a proper scale, r s(p(v)) , as well as to the number of other leaves we expect to see at this scale, which is represented roughly by m −3/4 v (the value 3/4 is not essential here, and any other value in (1/2, 1) would suffice). Our goal is to find a such a leaf by starting at the root of the tree, and arguing that some of its children should be somewhat close to the target εS. This closeness is achieved using the fact that, by definition of an r-separated tree, the children are separated in the Gaussian distance, and thus exhibit some level of independence. We will continue in this manner inductively, arguing that the children which are somewhat close to the target have their own children which we could expect to be even closer, and so on. We aim to shrink these windows around the target more and more so they are small enough once we reach the leaves. There are a number of difficulties involved in executing this scheme. In particular, conditioning on the exact values of the children of the root could determine the entire process, making future levels moot. Thus we must first select a careful filtering which allows us to reserve some randomness for later levels. This is done in Section 4.2.1. Furthermore, the intermediate targets have to be arranged according to the variances along the root-leaf paths in our tree. This corresponds to the fact that, although we have a uniform lower bound on each σ v (from (67)), the summation defining the σ v 's could put different weights on the various levels (recall (69)). The targets also have to take into account random "noise" from the filter described above, and thus the targets themselves must be random. This "window analysis" is performed in Section 4.2.2. Restructuring the randomness We know that η z = 0, since z = v 0 is the root of T (and the starting point of the associated random walk). 
Fix a depth-first ordering of T (one starts at the root and explores as far as possible along each branch before backtracking). Write u ≺ v if u is explored before v, and u v if u ≺ v or u = v. For u = z, we write u − for the vertex preceding u in the DFS order. Let F = span ({η x : x ∈ T }). For a node v ∈ T , let F v = span({η u } u v ) and F − v = span({η u } u≺v ). We next associate a centered Gaussian process {ξ x : x ∈ T } to {η x : x ∈ T } in the following inductive way. Define ξ z = 0. Now, assuming we have defined ξ u for u ≺ v, we define ξ v by writing η v = ζ v + ξ v , where ζ v ∈ F v − and ξ v ⊥ F v − . Observe that, by construction, {ξ u } u v forms an orthogonal basis in L 2 for F v . Applying (72), we have for all u ∈ T , ξ u 2 = dist L 2 (η u , span ({η w } w≺u )) dist L 2 (η u , span ({η w } w / ∈Tu )) 1 20 r s(p(u))−1 ,(80) where we used the fact that the span and the affine hull are the same since ξ z = 0. For v ∈ L, define the subspaces F v,k = span ({ξ u : f v (k) ≺ u f v (k + 1)}) , F − v,k = span ({ξ u : f v (k) ≺ u ≺ f v (k + 1)}) . For 0 k h v − 1, define inductivelyη v,0 = 0, and η v,k+1 =η v,k + proj F v,k (η v ).(81) Note that the subspaces {F v,k } hv k=0 are mutually orthogonal, and together they span F v . Thus, η v,hv = η v .(82) Furthermore, by the definition of the subspace F v,k , we can decomposẽ η v,k+1 −η v,k =ζ v,k +ξ v,k ,(83)whereζ v,k ∈ F − v,k , andξ v,k ⊥ F − v,k . The next lemma states thatξ v,k has at least comparable variance toζ v,k . ζ v,k 2 8 r s(fv(k)) ,(84) and, 1 64 r s(fv(k))−1 ξ v,k 2 8 r s(fv(k)) . Proof. Writing the telescoping sum, η v = hv−1 j=0 η fv(j+1) − η fv(j) , we see that proj F v,k (η v ) 2 hv−1 j=k η fv (j+1) − η fv(j) 2 hv−1 j=k 4r s(fv(j)) 8 r s(fv(k)) ,(86) where we used properties (1) and (3) of the separated tree, and have assumed r 2. Thus by orthogonality and (83), we have ζ v,k 2 η v,k+1 −η v,k 2 = proj F v,k (η v ) 2 8 r s(fv(k)) , and precisely the same conclusion holds forξ v,k . 
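The decomposition η_v = ζ_v + ξ_v defined above is sequential Gram–Schmidt orthogonalization in L²: following the DFS order, each variable is split into its projection onto the span of the previously explored variables and an orthogonal remainder. A minimal sketch (not from the original text) of this restructuring, representing jointly Gaussian variables by rows of a matrix so that L² inner products become Euclidean ones; the function name and matrix shapes are illustrative.

```python
import numpy as np

def sequential_decompose(H):
    """H: (m, k) matrix whose rows represent centered Gaussian variables
    eta_0..eta_{m-1} in some fixed (e.g. DFS) order, with L2 inner product
    <eta_u, eta_v> = H[u] @ H[v].  Returns (Z, Xi) with H[i] = Z[i] + Xi[i],
    where Z[i] (the zeta part) is the projection of H[i] onto
    span(H[0..i-1]) and Xi[i] (the xi part) is orthogonal to that span."""
    m, k = H.shape
    basis = []                     # orthonormal basis of the growing span
    Z = np.zeros_like(H)
    Xi = np.zeros_like(H)
    for i in range(m):
        proj = np.zeros(k)
        for b in basis:
            proj += (H[i] @ b) * b
        Z[i] = proj                # component explained by earlier variables
        Xi[i] = H[i] - proj        # fresh randomness reserved at step i
        nrm = np.linalg.norm(Xi[i])
        if nrm > 1e-12:
            basis.append(Xi[i] / nrm)
    return Z, Xi
```

By construction the Xi rows form an orthogonal system spanning the same space as the rows of H, mirroring the statement that {ξ_u}_{u ⪯ v} is an orthogonal basis of F_v.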
Next, we establish a lower bound on ξ v,k 2 . From (81) and (83), ξ v,k = proj F v,k (η v ) − proj F − v,k (η v ) (87) = hv−1 j=k proj F v,k (η fv(j+1) − η fv(j) ) − proj F − v,k (η fv(j+1) − η fv(j) ) = proj F v,k (η fv (k+1) − η fv (k) ) − proj F − v,k (η fv (k+1) − η fv(k) ) + hv−1 j=k+1 proj F v,k (η fv (j+1) − η fv(j) ) − proj F − v,k (η fv (j+1) − η fv(j) ) . Observe that the term in brackets is precisely proj F v,k (η fv (k+1) ) − proj F − v,k (η fv(k+1) ) = ξ fv(k+1) , since η fv(k) ⊥ F v,k . In particular, we arrive at ξ v,k 2 ξ fv(k+1) 2 − hv−1 j=k+1 η fv(j+1) − η fv(j) 2 1 32 r s(fv(k))−1 − 2 r s(fv(k+1)) Defining the events E v Recall that our goal now is to find many leaves v ∈ L with η v ≈ εS. Now, writing η v = hv−1 k=0 proj F v,k (η v ) = hv−1 k=0 (ζ v,k +ξ v,k ), our "ideal" goal would be to hit a window around the target by getting the kth term of this sum close to a v (k) △ = εS χ v (k) σ v , for k = 0, 1, . . . , h v − 1. We will use the variance of theξ v,k variables (recall Lemma 4.5) to lower bound the probability that some points get closer to the desired target. On the other hand, we will treat theζ v,k variables as noise which has to be bounded in absolute value. This noise cannot always be countered in a single level, but it can be countered on average along the path to the leaf; this is the content of (70). We will amortize this cost over future targets as follows. Let b v (0) = 0 and for k = 0, 1, . . . , h v − 2, define ρ v (k) =ζ v,k +ξ v,k − a v (k) + b v (k) , b v (k + 1) = k i=0 χ v (k + 1) hv−1 ℓ=i+1 χ v (ℓ) ρ v (i). Clearly ρ v (0) =ζ v,0 +ξ v,0 − a v (0) represents how much we miss our first target. A similar fact holds for the final target, as the next lemma argues; in between, the errors are spread out proportional to the contribution to val r (T , s) for each of the the remaining levels (represented by the χ v (k) values). Here b v (k) represents the error that is meant to be absorbed in the k-th level. Lemma 4.6. 
For every v ∈ L, ρ v (h v − 1) = η v − εS. Proof. We have, hv−2 k=0 b v (k + 1) = hv−2 k=0 k i=0 χ v (k + 1) hv−1 ℓ=i+1 χ v (ℓ) ρ v (i) = hv−2 i=0 ρ v (i) hv −2 k=i χ v (k + 1) hv−1 ℓ=i+1 χ v (ℓ) = hv−2 i=0 ρ v (i) . (88) Also note that hv−1 k=0 ρ v (k) = hv−1 k=0 (ζ v,k +ξ v,k − a v (k) + b v (k)) = η v − εS + hv−1 k=0 b v (k) . Combined with b v (0) = 0 and (88), it follows that ρ v (h v − 1) = η v − εS, completing the proof. We now define the events A v (k) = {|ζ v,k | εθχ v (k)} , B v (k) = {|ρ v (k)| w v (k)} , where, for 0 k h v − 2, w v (k) is selected so that P B v (k) |ζ v,k + b v (k) = deg ↓ (f v (k)) −1/8 .(89) We emphasize that the windown w v (k) is not deterministic. And, for k = h v − 1, we select w v (k) so that P(B v (k) |ζ v,k + b v (k)) = deg ↓ (f v (k)) −1/8 m −3/4 v ,(90) Remark 2. Here, w v (k) can be thought to represent the window size around the random target. The value of w v (k) is chosen to make the probabilities in (89) and (90) exact, allowing us to couple seamlessly to the percolation process in Section 4.3. The key fact, proved in Lemma 4.7, is that the window sizes actually satisfy a deterministic upper bound, assuming that all the "good" events on the path from the root to f v (k) occurred. Thus one should think of the true window size as the bounds specified in (94) and (95), while the random value is for the purpose of the coupling. For 0 k ℓ h v − 1, define A v (k, ℓ) △ = ℓ i=k A v (i) and B v (k, ℓ) △ = ℓ i=k B v (i).(91) Sinceξ v,k ∈ σ(F v,k \ F − v,k ) (see, e.g. (87)), we see that the event B v (k) is conditionally independent of σ(F − fv (k+1) ) given the value ofζ v,k + b v (k). 
This implies that for all events E_0 ∈ σ(F⁻_{f_v(k+1)}) such that E_0 ∩ A_v(0, k) ∩ B_v(0, k − 1) ≠ ∅,

P(B_v(k) | A_v(0, k), B_v(0, k − 1), E_0) = deg↓(f_v(k))^{−1/8} if 0 ≤ k < h_v − 1, and = deg↓(f_v(k))^{−1/8} m_v^{−3/4} if k = h_v − 1.   (92)

Finally, for v ∈ L, we define the event

E_v = A_v(0, h_v − 1) ∩ B_v(0, h_v − 1).   (93)

Window analysis. We will now show that our final window w_v(h_v − 1) is small enough. Observe that our choice of w_v(k) is not deterministic. Nevertheless, we will give an absolute upper bound. The bound is essentially the natural one: for any node u in the tree, and any child v of u, the standard deviation of η_u − η_v is O(r^{s(u)}). This follows from property (3) of the r-separated tree (recall Definition 3.8).

Lemma 4.7. For every v ∈ L and k = 0, 1, …, h_v − 2, if A_v(0, k) and B_v(0, k − 1) hold, then

w_v(k) ≤ 50 r^{s(f_v(k))}.   (94)

Furthermore, if A_v(0, h_v − 1) and B_v(0, h_v − 2) hold, then

w_v(h_v − 1) ≤ 50 r^{s(f_v(h_v−1))} m_v^{−3/4}.   (95)

Proof. For k = 0, we have ρ_v(0) = ζ_{v,0} + ξ_{v,0} − a_v(0). By (67), we have

a_v(0) = εS χ_v(0)/σ_v ≤ θε χ_v(0) = θε r^{s(f_v(0))} log ∆(f_v(0)).   (96)

Furthermore, from Lemma 4.5, we know that for all k ≥ 0,

(1/64) r^{s(f_v(k))−1} ≤ ||ξ_{v,k}||_2 ≤ 8 r^{s(f_v(k))}.   (97)

Now, consider a value w > 0 such that

w ≤ a_v(0) + εθ χ_v(0) ≤ 2θε r^{s(f_v(0))} log ∆(f_v(0)).   (98)

Using (97) and recalling the Gaussian density, we have

P(|ρ_v(0)| ≤ w | A_v(0)) ≥ P(|ρ_v(0)| ≤ w | ζ_{v,0} = −εθ χ_v(0)) = P(|ξ_{v,0} − a_v(0) − εθ χ_v(0)| ≤ w) ≥ (1/2) · [w / (√(2π) 8 r^{s(f_v(0))})] · exp(−(1/2)(128εrθ)² log ∆(f_v(0))) = [w / (16 √(2π) r^{s(f_v(0))})] ∆(f_v(0))^{−(1/2)(128εrθ)²}.   (99)

Recalling the assumption (71), we have log ∆(f_v(0)) ≥ Cr ≥ 16 √(2π) 2^{10} r, by choosing C large enough. In particular, εθ χ_v(0) ≥ (16 √(2π) 2^{10} εθ r) r^{s(f_v(0))} = 16 √(2π) r^{s(f_v(0))}, recalling (74). Thus setting w = 16 √(2π) r^{s(f_v(0))} satisfies (98), and applying (99) we have

P(|ρ_v(0)| ≤ 16 √(2π) r^{s(f_v(0))} | A_v(0)) ≥ ∆(f_v(0))^{−1/8},

where we have used (1/2)(128εrθ)² = 1/128, and ∆(f_v(0)) ≥ 16 from (71).
Therefore w_v(0) ≤ 16 √(2π) r^{s(f_v(0))} ≤ 50 r^{s(f_v(0))}, recalling the definition of w_v(0) from (89). Now suppose that (94) holds for all k ≤ ℓ < h_v − 2, and consider the case k = ℓ + 1. If the events {B_v(j) : 0 ≤ j ≤ ℓ} hold, then |ρ_v(j)| ≤ w_v(j) ≤ 50 r^{s(f_v(j))}, where the first inequality is from the definition of B_v(j), and the second is from the induction hypothesis. Using (70), it follows that

|b_v(k)| ≤ Σ_{i=0}^{k−1} [χ_v(k) / Σ_{ℓ=i+1}^{h_v−1} χ_v(ℓ)] |ρ_v(i)| ≤ (2/C) χ_v(k).   (100)

Recall that ρ_v(k) = ζ_{v,k} + ξ_{v,k} − a_v(k) + b_v(k). Similar to the k = 0 case, we obtain that for 0 < w ≤ 2θε r^{s(f_v(k))} log ∆(f_v(k)), we have

P(|ρ_v(k)| ≤ w | A_v(i), B_v(i) for all 0 ≤ i < k, A_v(k)) ≥ P(|ξ_{v,k} − a_v(k) − εθ χ_v(k) − (2/C) χ_v(k)| ≤ w) ≥ (1/2) · [w / (√(2π) 8 r^{s(f_v(k))})] ∆(f_v(k))^{−(1/2)(128r)²(εθ+C^{−1})²}.

Now, by choosing C ≥ 1024r, and recalling (74), we see that (1/2)(128r)²(εθ + C^{−1})² ≤ 1/32. Since ∆(f_v(k)) ≥ 16 (again, by (71)), we conclude that

P(|ρ_v(k)| ≤ 16 √(2π) r^{s(f_v(k))} | A_v(i), B_v(i) for all 0 ≤ i < k, A_v(k)) ≥ deg↓(f_v(k))^{−1/8}.

This implies w_v(k) ≤ 16 √(2π) r^{s(f_v(k))} ≤ 50 r^{s(f_v(k))}, where we recall once again the definition of w_v(k) from (89). An almost identical argument yields that w_v(h_v − 1) ≤ 50 r^{s(f_v(h_v−1))} m_v^{−3/4}.

Proof. This follows directly from Lemma 4.6, the identity (82) and the definition of B_v(k).

The first moment. We now give lower bounds on the probability of the event E_v.

Lemma 4.9. For every v ∈ L, P(E_v) ≥ (1/2) m_v^{−7/8}.

Proof. We have

P(E_v) = Π_{k=0}^{h_v−1} P(A_v(k) | A_v(0, k−1), B_v(0, k−1)) P(B_v(k) | A_v(0, k), B_v(0, k−1)) = m_v^{−3/4} Π_{k=0}^{h_v−1} deg↓(f_v(k))^{−1/8} Π_{k=0}^{h_v−1} P(A_v(k) | A_v(0, k−1), B_v(0, k−1)) = m_v^{−7/8} Π_{k=0}^{h_v−1} P(A_v(k)),   (101)

where the second line follows from (92), and the third line from the fact that A_v(k) is independent of {A_v(i), B_v(i) : 0 ≤ i < k}. Using (84), we have

P(A_v(k)) ≥ 1 − (2/√(2π)) ∫_{εθχ_v(k)}^{∞} exp(−x²/(128 r^{2s(f_v(k))})) dx ≥ 1 − 2 ∆(f_v(k))^{−(1/128)ε²θ²} ≥ 1 − 2 exp(−(1/128) 2^{−20} C² 4^k),
where we have used (71), the definition of ε (74), and χ_v(k) = r^{s(f_v(k))} log ∆(f_v(k)). Clearly, by choosing C a large enough constant, we have Π_{k=0}^{h_v−1} P(A_v(k)) ≥ 1/2, completing the proof.

The second moment. Finally, we bound the probability of E_u ∩ E_v for u ≠ v.

Proof. Assume, without loss of generality, that u ≺ v ∈ L. It is clear from (101) that P(E_u) ≤ m_u^{−7/8}. Also, we have

P(E_v | E_u) ≤ P(A_v(0, h_v − 1), B_v(0, h_v − 1) | E_u) ≤ Π_{k=h_uv}^{h_v−1} P(B_v(k) | E_u, A_v(0, k), B_v(0, k − 1)).

Now recall that E_u ∈ σ(F⁻_{f_v(h_uv+1)}) ⊂ σ(F⁻_{f_v(k+1)}) for all k ≥ h_uv. By (92), we obtain

Π_{k=h_uv}^{h_v−1} P(B_v(k) | E_u, A_v(0, k), B_v(0, k−1)) = m_v^{−3/4} Π_{k=h_uv}^{h_v−1} deg↓(f_v(k))^{−1/8} = m_uv^{1/8} m_v^{−7/8}.

Altogether, we conclude that P(E_u ∩ E_v) = P(E_u) P(E_v | E_u) ≤ m_uv^{1/8} (m_u m_v)^{−7/8}, as required.

The main coupling lemma, Lemma 4.3, is an immediate corollary of Lemmas 4.8, 4.9 and 4.10.

Tree-like percolation. Lemma 4.11 below yields (76). Its proof is a variant of the well-known second moment method for percolation in trees (see [38]). First, we define a measure ν on L via ν(u) = m_u^{−1}. Observe that ν is a probability measure on L, i.e., Σ_{u∈L} ν(u) = 1. To see this, construct a unit flow from the root to the leaves, where each non-leaf node splits its incoming flow equally among its children. Clearly the amount that reaches a leaf u is precisely ν(u).

Lemma 4.11. Suppose that to each v ∈ L, we associate an event E_v such that the following bounds hold. where the last equality follows from (102). By assumption (2), we have

EZ² = Σ_{u,v∈L} (m_u m_v)^{−1/8} P(E_u ∩ E_v) ≤ Σ_{u,v∈L} m_uv^{1/8} (m_u m_v)^{−1}.

In order to estimate the second moment, we first fix u and sum over v. To be more precise, let L_h(u) = {v ∈ L : h_uv = h}, where we recall that h_u is the height of a node u, and h_uv is the height of the least-common ancestor of u and v.
We can then partition L = ∪_{h≥0} L_h(u) and obtain for every u ∈ L,

Σ_{v∈L} m_uv^{1/8} m_v^{−1} = Σ_{h≥0} [Π_{i=0}^{h−1} deg↓(f_u(i))]^{1/8} ν(L_h(u)).

Recalling the flow representation of the measure ν, we see that

ν(L_h(u)) = [deg↓(f_u(h)) − 1] / [deg↓(f_u(h)) Π_{i=0}^{h−1} deg↓(f_u(i))].

Therefore,

Σ_{v∈L} m_uv^{1/8} m_v^{−1} = Σ_{h=0}^{h_u} [deg↓(f_u(h)) − 1]/deg↓(f_u(h)) · [Π_{i=0}^{h−1} deg↓(f_u(i))]^{−7/8} ≤ Σ_{h=0}^{h_u} [Π_{i=0}^{h−1} deg↓(f_u(i))]^{−7/8} ≤ 2,

where the last transition follows from (71), for C chosen sufficiently large. Applying the second moment method, we deduce that P(Z > 0) ≥ (EZ)²/EZ² ≥ 1/8, completing the proof.

The local times. We now prove Lemma 4.4, in order to complete the analysis of the left-hand side of (63).

Proof. Note that the random walk is at vertex v_0 at time τ(t). Hence, given that L^v_{τ(t)} > 0, the random walk contains at least one excursion which starts at v and ends at v_0. Therefore, given that L^v_{τ(t)} > 0, we see that c_v L^v_{τ(t)} stochastically dominates the random variable

L = ∫_0^{T_{v_0}} 1_{{X_t = v}} dt,

where X_t is a random walk on the network started at v and T_{v_0} is the hitting time to v_0. By definition, every time the random walk hits v, it takes an exponential time for the walk to leave. Also, the probability that the random walk would hit v_0 before returning to v can be related to the effective resistance (see, for example, [39]). Formally, when the random walk W_t is at vertex v, it will wait until the Poisson clock σ with rate 1 rings and then move to a neighbor (possibly v itself) selected proportionally to the edge conductances. Define T⁺_v = min{t > σ : X_t = v}. Then we have the continuous-time version of (33),

P_v(T⁺_v > T_{v_0}) = 1/(c_v R_eff(v, v_0)).

By the strong Markov property, L follows the law of the sum of a geometric number of i.i.d. exponential variables. Thus L follows the law of an exponential variable with EL = c_v R_eff(v, v_0). Recalling property (72) of our separated tree T, we see that

R_eff(v, v_0) = E(η_v − η_{v_0})² ≥ 2^{−10} r^{2s(f_v(h_v−1))−2}.
Thus,

P(0 < L^v_{τ(t)} ≤ 50² · r^{2s(f_v(h_v−1))} m_v^{−3/2}) ≤ P(L ≤ c_v · 50² · r^{2s(f_v(h_v−1))} m_v^{−3/2}) ≤ 50² · r^{2s(f_v(h_v−1))} m_v^{−3/2} / R_eff(v, v_0) ≤ 2^{11} · 50² · r² m_v^{−3/2} ≤ (1/16) m_v^{−1},

where the last transition uses (71) for C chosen large enough, and m_v ≥ exp(C² r²). Therefore, we conclude that

P(∪_{v∈L} Ẽ_v) ≤ (1/16) Σ_{v∈L} m_v^{−1} = 1/16,

where we used, from (102), the fact that Σ_{v∈L} m_v^{−1} = 1, completing the proof.

Additional applications. We now prove a generalization of Theorem 1.7. Suppose that V = {1, 2, …, n}, and let G(V) be a network with conductances {c_ij}. We define real, symmetric n × n matrices D and A by D_ij = c_i if i = j and 0 otherwise, and A_ij = c_ij. We write

L_G = (D − A)/tr(D),   (103)

and L_G^+ for the pseudoinverse of L_G. where g = (g_1, …, g_n) is a standard n-dimensional Gaussian.

Proof. If κ denotes the commute time in G, then the following formula is well-known (see, e.g., [32]):

κ(i, j) = ⟨e_i − e_j, L_G^+(e_i − e_j)⟩,

where {e_1, …, e_n} are the standard basis vectors in R^n. Using the fact that L_G^+ is self-adjoint and positive semi-definite, this yields κ(i, j) = ||(L_G^+)^{1/2} e_i − (L_G^+)^{1/2} e_j||². Let g = (g_1, …, g_n) ∈ R^n be a standard n-dimensional Gaussian, and consider the Gaussian processes {η_i : i = 1, …, n} where η_i = ⟨g, (L_G^+)^{1/2} e_i⟩. One verifies that for all i, j ∈ V,

E|η_i − η_j|² = ||(L_G^+)^{1/2}(e_i − e_j)||² = κ(i, j),

thus by Theorem (MM),

γ_2(V, √κ) ≍ E max_{i∈V} η_i = E max_{i∈V} ⟨g, (L_G^+)^{1/2} e_i⟩ = E max_{i∈V} ⟨(L_G^+)^{1/2} g, e_i⟩ ≍ E ||(L_G^+)^{1/2} g||_∞.   (104)

By Theorem 1.9, [γ_2(V, √κ)]² ≍ t_cov(G). Finally, one can use Lemma 2.2 to conclude that (E ||(L_G^+)^{1/2} g||_∞)² ≍ E ||(L_G^+)^{1/2} g||²_∞, completing the proof.

Proof. In [46, §4], it is shown how to compute a k × n matrix Z, in expected time O(m(log m)^{O(1)}), with k = O(log n), and such that for every i, j ∈ V,

κ(i, j) ≤ ||Z(e_i − e_j)||² ≤ 2κ(i, j).   (105)

We can associate the Gaussian processes {η_i}_{i∈V}, where η_i = ⟨g, Z e_i⟩, and g is a standard k-dimensional Gaussian.
Letting d(i, j) = (E|η_i − η_j|²)^{1/2}, we see from (105) that √κ ≤ d ≤ √(2κ); therefore γ_2(V, √κ) ≍ γ_2(V, d). It follows (see (104)) that

E ||Zg||²_∞ ≍ E ||(L_G^+)^{1/2} g||²_∞ ≍ t_cov(G),

where the last equivalence is the content of Theorem 4.13. The output of our algorithm is thus A(G) = ||Zg||²_∞, where g is a standard k-dimensional Gaussian vector. The fact that E[A(G)] ≍ (E[A(G)²])^{1/2} follows from Lemma 2.2.
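The identities above are easy to check numerically. The following sketch (my own numpy illustration on a toy network, not the authors' near-linear-time algorithm, which uses the sketch matrix Z of [46]) computes κ(i, j) = ⟨e_i − e_j, L_G⁺(e_i − e_j)⟩ with L_G as in (103), and forms the Gaussian statistic ||(L_G⁺)^{1/2} g||_∞²; using the PSD square root of L_G⁺ is my reading of the normalization that makes E|η_i − η_j|² equal κ(i, j) exactly.

```python
import numpy as np

def laplacian_pinv(A):
    """L_G^+ for L_G = (D - A)/tr(D), cf. (103); A is the symmetric conductance matrix."""
    D = np.diag(A.sum(axis=1))              # D_ii = c_i, total conductance at vertex i
    return np.linalg.pinv((D - A) / np.trace(D))

def commute_times(A):
    """kappa(i, j) = <e_i - e_j, L_G^+ (e_i - e_j)>, the commute-time identity."""
    Lp = laplacian_pinv(A)
    n = A.shape[0]
    kappa = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e = np.zeros(n)
            e[i] += 1.0
            e[j] -= 1.0
            kappa[i, j] = e @ Lp @ e
    return kappa

def sqrt_psd(M):
    """Symmetric PSD square root via eigendecomposition."""
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.T

# Path on three vertices with unit conductances: tr(D) = 2m = 4, so
# kappa(0, 2) = 2m * R_eff(0, 2) = 4 * 2 = 8 and kappa(0, 1) = 4 * 1 = 4.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
kappa = commute_times(A)

# Gaussian statistic: with half = (L_G^+)^{1/2}, the process eta = half @ g satisfies
# E|eta_i - eta_j|^2 = kappa(i, j); its squared sup-norm tracks t_cov up to constants.
rng = np.random.default_rng(0)
half = sqrt_psd(laplacian_pinv(A))
statistic = np.max(np.abs(half @ rng.standard_normal(3))) ** 2
```

A single draw of `statistic` is of course only an unbiased-in-spirit probe; the theorem concerns its expectation over g.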
arXiv:1003.2822
Signals comprised of a stream of short pulses appear in many applications including bioimaging and radar. The recent finite rate of innovation framework has paved the way to low rate sampling of such pulses by noticing that only a small number of parameters per unit time are needed to fully describe these signals. Unfortunately, for high rates of innovation, existing sampling schemes are numerically unstable. In this paper we propose a general sampling approach which leads to stable recovery even in the presence of many pulses. We begin by deriving a condition on the sampling kernel which allows perfect reconstruction of periodic streams from the minimal number of samples. We then design a compactly supported class of filters satisfying this condition. The periodic solution is extended to finite and infinite streams and is shown to be numerically stable even for a large number of pulses. High noise robustness is also demonstrated when the delays are sufficiently separated. Finally, we process ultrasound imaging data using our techniques and show that substantial rate reduction with respect to traditional ultrasound sampling schemes can be achieved.
In this section we explore the relationship between our approach and previously developed solutions @cite_20 @cite_4 @cite_19 @cite_5 .
Abstracts of the cited works:

@cite_19 (MAG ID 2103300762): "Consider the problem of sampling signals which are not bandlimited, but still have a finite number of degrees of freedom per unit of time, such as, for example, nonuniform splines or piecewise polynomials, and call the number of degrees of freedom per unit of time the rate of innovation. Classical sampling theory does not enable a perfect reconstruction of such signals since they are not bandlimited. Recently, it was shown that, by using an adequate sampling kernel and a sampling rate greater or equal to the rate of innovation, it is possible to reconstruct such signals uniquely. These sampling schemes, however, use kernels with infinite support, and this leads to complex and potentially unstable reconstruction algorithms. In this paper, we show that many signals with a finite rate of innovation can be sampled and perfectly reconstructed using physically realizable kernels of compact support and a local reconstruction algorithm. The class of kernels that we can use is very rich and includes functions satisfying Strang-Fix conditions, exponential splines and functions with rational Fourier transform. This last class of kernels is quite general and includes, for instance, any linear electric circuit. We, thus, show with an example how to estimate a signal of finite rate of innovation at the output of an RC circuit. The case of noisy measurements is also analyzed, and we present a novel algorithm that reduces the effect of noise by oversampling."

@cite_5 (MAG ID 2108634960): "The problem of sampling signals that are not admissible within the classical Shannon framework has received much attention in the recent past. Typically, these signals have a parametric representation with a finite number of degrees of freedom per time unit. It was shown that, by choosing suitable sampling kernels, the parameters can be computed by employing high-resolution spectral estimation techniques. In this letter, we propose a simple acquisition and reconstruction method within the framework of multichannel sampling. In the proposed approach, an infinite stream of nonuniformly-spaced Dirac impulses can be sampled and accurately reconstructed provided that there is at most one Dirac impulse per sampling period. The reconstruction algorithm has a low computational complexity, and the parameters are computed on the fly. The processing delay is minimal: just the sampling period. We propose sampling circuits using inexpensive passive devices such as resistors and capacitors. We also show how the approach can be extended to sample piecewise-constant signals with a minimal change in the system configuration. We provide some simulation results to confirm the theoretical findings."

@cite_4 (MAG ID 2149213383): "Sparse sampling of continuous-time sparse signals is addressed. In particular, it is shown that sampling at the rate of innovation is possible, in some sense applying Occam's razor to the sampling of sparse signals. The noisy case is analyzed and solved, proposing methods reaching the optimal performance given by the Cramer-Rao bounds. Finally, a number of applications have been discussed where sparsity can be taken advantage of. The comprehensive coverage given in this article should lead to further research in sparse sampling, as well as new applications. One main application to use the theory presented in this article is ultra-wide band (UWB) communications."

@cite_20 (MAG ID 2158537680): "The authors consider classes of signals that have a finite number of degrees of freedom per unit of time and call this number the rate of innovation. Examples of signals with a finite rate of innovation include streams of Diracs (e.g., the Poisson process), nonuniform splines, and piecewise polynomials. Even though these signals are not bandlimited, we show that they can be sampled uniformly at (or above) the rate of innovation using an appropriate kernel and then be perfectly reconstructed. Thus, we prove sampling theorems for classes of signals and kernels that generalize the classic "bandlimited and sinc kernel" case. In particular, we show how to sample and reconstruct periodic and finite-length streams of Diracs, nonuniform splines, and piecewise polynomials using sinc and Gaussian kernels. For infinite-length signals with finite local rate of innovation, we show local sampling and reconstruction based on spline kernels. The key in all constructions is to identify the innovative part of a signal (e.g., time instants and weights of Diracs) using an annihilating or locator filter: a device well known in spectral analysis and error-correction coding. This leads to standard computational procedures for solving the sampling problem, which we show through experimental results. Applications of these new sampling results can be found in signal processing, communications systems, and biological systems."
Innovation Rate Sampling of Pulse Streams with Application to Ultrasound Imaging
Sampling is the process of representing a continuous-time signal by discrete-time coefficients, while retaining the important signal features. The well-known Shannon-Nyquist theorem states that the minimal sampling rate required for perfect reconstruction of bandlimited signals is twice the maximal frequency. This result has since been generalized to minimal rate sampling schemes for signals lying in arbitrary subspaces [1], [2]. Recently, there has been growing interest in sampling of signals consisting of a stream of short pulses, where the pulse shape is known. Such signals have a finite number of degrees of freedom per unit time, also known as the Finite Rate of Innovation (FRI) property [3]. This interest is motivated by applications such as digital processing of neuronal signals, bio-imaging, image processing and ultrawideband (UWB) communications, where such signals are present in abundance. Our work is motivated by the possible application of this model in ultrasound imaging, where echoes of the transmit pulse are reflected off scatterers within the tissue, and form a stream of pulses signal at the receiver. The time-delays and amplitudes of the echoes indicate the position and strength of the various scatterers, respectively. Therefore, determining these parameters from low rate samples of the received signal is an important problem. Reducing the rate allows more efficient processing which can translate to power and size reduction of the ultrasound imaging system. Our goal is to design a minimal rate single-channel sampling and reconstruction scheme for pulse streams that is stable even in the presence of many pulses. Since the set of FRI signals does not form a subspace, classic subspace schemes cannot be directly used to design low-rate sampling schemes. Mathematically, such FRI signals conform with a broader model of signals lying in a union of subspaces [4]- [9]. 
Although the minimal sampling rate required for such settings has been derived, no generic sampling scheme exists for the general problem. Nonetheless, some special cases have been treated in previous work, including streams of pulses. A stream of pulses can be viewed as a parametric signal, uniquely defined by the time-delays of the pulses and their amplitudes. Efficient sampling of periodic impulse streams, having L impulses in each period, was proposed in [3], [10]. The heart of the solution is to obtain a set of Fourier series coefficients, which then converts the problem of determining the time-delays and amplitudes to that of finding the frequencies and amplitudes of a sum of sinusoids. The latter is a standard problem in spectral analysis [11] which can be solved using conventional methods, such as the annihilating filter approach, as long as the number of samples is at least 2L. This result is intuitive since there are 2L degrees of freedom in each period: L time-delays and L amplitudes. Periodic streams of pulses are mathematically convenient to analyze, however not very practical. In contrast, finite streams of pulses are prevalent in applications such as ultrasound imaging. The first treatment of finite Dirac streams appears in [3], in which a Gaussian sampling kernel was proposed. The time-delays and amplitudes are then estimated from the Gaussian tails. This method and its improvement [12] are numerically unstable for high rates of innovation, since they rely on the Gaussian tails which take on small values. The work in [13] introduced a general family of polynomial and exponential reproducing kernels, which can be used to solve FRI problems. Specifically, B-spline and E-spline sampling kernels which satisfy the reproduction condition are proposed. This method treats streams of Diracs, differentiated Diracs, and short pulses with compact support. However, the proposed sampling filters result in poor reconstruction results for large L. 
To the best of our knowledge, a numerically stable sampling and reconstruction scheme for high-order problems has not yet been reported. Infinite streams of pulses arise in applications such as UWB communications, where the communicated data changes frequently. Using spline filters [13], and under certain limitations on the signal, the infinite stream can be divided into a sequence of separate finite problems. The individual finite cases may be treated using methods for the finite setting, at the expense of a sampling rate above the critical rate, and suffer from the same instability issues. In addition, the constraints that are cast on the signal become more and more stringent as the number of pulses per unit time grows. In a recent work [14] the authors propose a sampling and reconstruction scheme for L = 1; however, our interest here is in high values of L. Another related work [7] proposes a semi-periodic model, where the pulse time-delays do not change from period to period, but the amplitudes vary. This is a hybrid case in which the number of degrees of freedom in the time-delays is finite, but there is an infinite number of degrees of freedom in the amplitudes. Therefore, the proposed recovery scheme generally requires an infinite number of samples. This differs from the periodic and finite cases we discuss in this paper, which have a finite number of degrees of freedom and, consequently, require only a finite number of samples. In this paper we study sampling of signals consisting of a stream of pulses, covering the three different cases: periodic, finite and infinite streams of pulses. The criteria we consider for designing such systems are: a) minimal sampling rate which allows perfect reconstruction, b) numerical stability (with sufficiently separated time delays), and c) minimal restrictions on the number of pulses per sampling period. We begin by treating periodic pulse streams.
For this setting, we develop a general sampling scheme for arbitrary pulse shapes which allows us to determine the times and amplitudes of the pulses from a minimal number of samples. As we show, previous work [3] is a special case of our extended results. In contrast to the infinite time-support of the filters in [3], we develop a compactly supported class of filters which satisfy our mathematical condition. This class of filters consists of a sum of sinc functions in the frequency domain. We therefore refer to such functions as Sum of Sincs (SoS). To the best of our knowledge, this is the first class of finite support filters that solve the periodic case. As we discuss in detail in Section V, these filters are related to exponential reproducing kernels, introduced in [13]. The compact support of the SoS filters is the key to extending the periodic solution to the finite stream case. Generalizing the SoS class, we design a sampling and reconstruction scheme which perfectly reconstructs a finite stream of pulses from a minimal number of samples, as long as the pulse shape has compact support. Our reconstruction is numerically stable for both small values of L and a large number of pulses, e.g., L = 100. In contrast, Gaussian sampling filters [3], [12] are unstable for L > 9, and we show in simulations that B-splines and E-splines [13] exhibit large estimation errors for L ≥ 5. In addition, we demonstrate substantial improvement in noise robustness even for low values of L. Our advantage stems from the fact that we propose compactly supported filters on the one hand, while staying within the regime of Fourier coefficients reconstruction on the other hand. Extending our results to the infinite setting, we consider an infinite stream consisting of pulse bursts, where each burst contains a large number of pulses. The stability of our method allows us to reconstruct even a large number of closely spaced pulses, which cannot be treated using existing solutions [13].
In addition, the constraints cast on the structure of the signal are independent of L (the number of pulses in each burst), in contrast to previous work, and therefore similar sampling schemes may be used for different values of L. Finally, we show that our sampling scheme requires lower sampling rate for L ≥ 3. As an application, we demonstrate our sampling scheme on real ultrasound imaging data acquired by GE healthcare's ultrasound system. We obtain high accuracy estimation while reducing the number of samples by two orders of magnitude in comparison with current imaging techniques. The remainder of the paper is organized as follows. In Section II we present the periodic signal model, and derive a general sampling scheme. The SoS class is then developed and demonstrated via simulations. The extension to the finite case is presented in Section III, followed by simulations showing the advantages of our method in high order problems and noisy settings. In Section IV, we treat infinite streams of pulses. Section V explores the relationship of our work to previous methods. Finally, in Section VI, we demonstrate our algorithm on real ultrasound imaging data. II. PERIODIC STREAM OF PULSES A. Problem Formulation Throughout the paper we denote matrices and vectors by bold font, with lowercase letters corresponding to vectors and uppercase letters to matrices. The nth element of a vector a is written as a n , and A ij denotes the ijth element of a matrix A. Superscripts (·) * , (·) T and (·) H represent complex conjugation, transposition and conjugate transposition, respectively. The Moore-Penrose pseudo-inverse of a matrix A is written as A † . The continuous-time Fourier transform (CTFT) of a continuous-time signal x (t) ∈ L 2 is defined by X (ω) = ∞ −∞ x (t) e −jωt dt, and x (t) , y (t) = ∞ −∞ x * (t) y (t) dt,(1) denotes the inner product between two L 2 signals. 
Consider a τ-periodic stream of pulses, defined as

x(t) = Σ_{m∈Z} Σ_{l=1}^{L} a_l h(t − t_l − mτ),   (2)

where h(t) is a known pulse shape, τ is the known period, and {t_l, a_l}_{l=1}^{L}, t_l ∈ [0, τ), a_l ∈ C, l = 1…L, are the unknown delays and amplitudes. Our goal is to sample x(t) and reconstruct it from a minimal number of samples. Since the signal has 2L degrees of freedom, we expect the minimal number of samples to be 2L. We are primarily interested in pulses which have small time-support. Direct uniform sampling of 2L samples of the signal will result in many zero samples, since the probability for a sample to hit a pulse is very low. Therefore, we must construct a more sophisticated sampling scheme. Define the periodic continuation of h(t) as f(t) = Σ_{m∈Z} h(t − mτ). Using Poisson's summation formula [15], f(t) may be written as

f(t) = (1/τ) Σ_{k∈Z} H(2πk/τ) e^{j2πkt/τ},   (3)

where H(ω) denotes the CTFT of the pulse h(t). Substituting (3) into (2), we obtain

x(t) = Σ_{l=1}^{L} a_l f(t − t_l) = Σ_{k∈Z} (1/τ) H(2πk/τ) [Σ_{l=1}^{L} a_l e^{−j2πkt_l/τ}] e^{j2πkt/τ} = Σ_{k∈Z} X[k] e^{j2πkt/τ},   (4)

where we denoted

X[k] = (1/τ) H(2πk/τ) Σ_{l=1}^{L} a_l e^{−j2πkt_l/τ}.   (5)

The expansion in (4) is the Fourier series representation of the τ-periodic signal x(t), with Fourier coefficients given by (5). Following [3], we now show that once 2L or more Fourier coefficients of x(t) are known, we may use conventional tools from spectral analysis to determine the unknowns {t_l, a_l}_{l=1}^{L}. The method by which the Fourier coefficients are obtained will be presented in subsequent sections. Define a set K of M consecutive indices such that H(2πk/τ) ≠ 0, ∀k ∈ K. We assume such a set exists, which is usually the case for short time-support pulses h(t). Denote by H the M × M diagonal matrix with kth entry (1/τ)H(2πk/τ), and by V(t) the M × L matrix with klth element e^{−j2πkt_l/τ}, where t = {t_1, …, t_L} is the vector of the unknown delays.
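The coefficient formula (5) is easy to verify numerically. In the sketch below (my own illustration; the Gaussian pulse and all parameter values are arbitrary choices for the demo, not from the paper), a Riemann-sum approximation of X[k] = (1/τ)∫_0^τ x(t) e^{−j2πkt/τ} dt is compared against the closed form (5):

```python
import numpy as np

# Demo parameters (arbitrary choices):
tau = 1.0
sigma = 0.02                                  # Gaussian pulse width, << tau
t_l = np.array([0.3, 0.7])                    # delays t_l
a_l = np.array([1.0, -0.5])                   # amplitudes a_l

def h(t):
    """Gaussian pulse, chosen because its CTFT is known in closed form."""
    return np.exp(-t**2 / (2 * sigma**2))

def H(w):
    """CTFT of h(t)."""
    return sigma * np.sqrt(2 * np.pi) * np.exp(-sigma**2 * w**2 / 2)

# One period of x(t); the periodization in (2) is truncated to m in {-1, 0, 1},
# which is accurate because the pulse is much narrower than tau.
N = 8192
t = np.arange(N) * tau / N
x = sum(a * h(t - d - m * tau) for a, d in zip(a_l, t_l) for m in (-1, 0, 1))

# Compare the numerical Fourier coefficient with (5) for a few k.
errs = []
for k in range(-4, 5):
    X_num = np.sum(x * np.exp(-2j * np.pi * k * t / tau)) / N
    X_form = H(2 * np.pi * k / tau) * np.sum(a_l * np.exp(-2j * np.pi * k * t_l / tau)) / tau
    errs.append(abs(X_num - X_form))
max_err = max(errs)
```

The periodic trapezoid rule is spectrally accurate here, so the two sides agree to roundoff.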
In addition, denote by a the length-L vector whose lth element is a_l, and by x the length-M vector whose kth element is X[k]. We may then write (5) in matrix form as

x = HV(t)a.   (6)

Since H is invertible by construction, we define y = H^{−1}x, which satisfies

y = V(t)a.   (7)

The matrix V is a Vandermonde matrix and therefore has full column rank [11], [16] as long as M ≥ L and the time-delays are distinct, i.e., t_i ≠ t_j for all i ≠ j. Writing the expression for the kth element of the vector y in (7) explicitly:

y_k = Σ_{l=1}^{L} a_l e^{−j2πkt_l/τ}.

Evidently, given the vector x, (7) is a standard problem of finding the frequencies and amplitudes of a sum of L complex exponentials (see [11] for a review of this topic). This problem may be solved as long as |K| = M ≥ 2L. The annihilating filter approach used extensively by Vetterli et al. [3], [10] is one way of recovering the frequencies, and is thoroughly described in the literature [3], [10], [11]. This method can solve the problem using the critical number of samples M = 2L, as opposed to other techniques such as MUSIC [17], [18] and ESPRIT [19] which require oversampling. Since we are interested in minimal-rate sampling, we use the annihilating filter throughout the paper. B. Obtaining The Fourier Series Coefficients. As we have seen, given the vector of M ≥ 2L Fourier series coefficients x, we may use standard tools from spectral analysis to determine the set {t_l, a_l}_{l=1}^{L}. In practice, however, the signal is sampled in the time domain, and therefore we do not have direct access to samples of x. Our goal is to design a single-channel sampling scheme which allows us to determine x from time-domain samples. In contrast to previous work [3], [10] which focused on a low-pass sampling filter, in this section we derive a general condition on the sampling kernel allowing us to obtain the vector x.
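As an illustration of the annihilating-filter step, the following numpy sketch (my own, not the authors' code) recovers {t_l, a_l} from M = 2L noiseless values y_k of (7): the delays come from the roots of the annihilating polynomial, and the amplitudes from the Vandermonde system.

```python
import numpy as np

def annihilating_filter(y, L, tau):
    """Recover {t_l, a_l} from y_k = sum_l a_l exp(-2j*pi*k*t_l/tau), k = 0..M-1, M >= 2L."""
    M = len(y)
    # Annihilating filter h with h_0 = 1: sum_{i=0}^{L} h_i y[k-i] = 0 for k = L..M-1.
    T = np.array([[y[k - i] for i in range(1, L + 1)] for k in range(L, M)])
    h = np.linalg.lstsq(T, -y[L:M], rcond=None)[0]
    # Roots of z^L + h_1 z^{L-1} + ... + h_L are u_l = exp(-2j*pi*t_l/tau).
    roots = np.roots(np.concatenate(([1.0 + 0j], h)))
    t = np.sort(np.mod(-np.angle(roots) * tau / (2 * np.pi), tau))
    # Amplitudes from the Vandermonde system y = V(t) a, cf. (7).
    V = np.exp(-2j * np.pi * np.outer(np.arange(M), t) / tau)
    a = np.linalg.lstsq(V, y, rcond=None)[0]
    return t, a

# Demo with L = 3 pulses and the critical M = 2L samples (values invented):
tau = 1.0
t_true = np.array([0.2, 0.45, 0.8])
a_true = np.array([1.0, 0.5, 2.0])
k = np.arange(2 * len(t_true))
y = np.exp(-2j * np.pi * np.outer(k, t_true) / tau) @ a_true
t_hat, a_hat = annihilating_filter(y, len(t_true), tau)
```

In noisy settings one would instead oversample (M > 2L) and solve the same system in the least-squares sense.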
For the sake of clarity we confine ourselves to uniform sampling; the results extend in a straightforward manner to nonuniform sampling as well. Consider sampling the signal x(t) uniformly with sampling kernel s*(−t) and sampling period T, as depicted in Fig. 1. The samples are given by

c[n] = ∫_{−∞}^{∞} x(t) s*(t − nT) dt = ⟨s(t − nT), x(t)⟩.   (9)

Substituting (4) into (9) we have

c[n] = Σ_{k∈Z} X[k] ∫_{−∞}^{∞} e^{j2πkt/τ} s*(t − nT) dt = Σ_{k∈Z} X[k] e^{j2πknT/τ} ∫_{−∞}^{∞} e^{j2πkt/τ} s*(t) dt = Σ_{k∈Z} X[k] e^{j2πknT/τ} S*(2πk/τ),   (10)

where S(ω) is the CTFT of s(t). Choosing any filter s(t) which satisfies

S(ω) = 0 for ω = 2πk/τ, k ∉ K; S(ω) ≠ 0 for ω = 2πk/τ, k ∈ K; S(ω) arbitrary otherwise,   (11)

we can rewrite (10) as

c[n] = Σ_{k∈K} X[k] e^{j2πknT/τ} S*(2πk/τ).   (12)

In contrast to (10), the sum in (12) is finite. Note that (11) implies that any real filter meeting this condition will satisfy k ∈ K ⇒ −k ∈ K, and in addition S(2πk/τ) = S*(−2πk/τ), due to the conjugate symmetry of real filters. Defining the M × M diagonal matrix S whose kth entry is S*(2πk/τ) for all k ∈ K, and the length-N vector c whose nth element is c[n], we may write (12) as

c = V(−t_s)Sx,   (13)

where t_s = {nT : n = 0…N−1}, and V is defined as in (6) with a different parameter −t_s and dimensions N × M. The matrix S is invertible by construction. Since V is Vandermonde, it is left invertible as long as N ≥ M. Therefore,

x = S^{−1}V^†(−t_s)c.   (14)

In the special case where N = M and T = τ/N, the recovery in (14) becomes

x = S^{−1}DFT{c},   (15)

i.e., the vector x is obtained by applying the Discrete Fourier Transform (DFT) on the sample vector, followed by a correction matrix related to the sampling filter. The idea behind this sampling scheme is that each sample is actually a linear combination of the elements of x. The sampling kernel s(t) is designed to pass the coefficients X[k], k ∈ K while suppressing all other coefficients X[k], k ∉ K.
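The forward model (13) and the recovery (14) can be checked with a few lines of linear algebra. The sketch below (all values invented for the demo) builds V(−t_s), applies an arbitrary invertible diagonal S, and recovers x exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
tau = 1.0
K = np.arange(-3, 4)               # index set K, so M = 7
M = len(K)
N = M                              # critical number of samples
T = tau / N                        # sampling period

# Arbitrary Fourier coefficients X[k] and nonzero filter values S*(2*pi*k/tau):
x_true = rng.standard_normal(M) + 1j * rng.standard_normal(M)
S_vals = rng.uniform(1.0, 2.0, M)

# Forward model (13): c = V(-t_s) S x, with V(-t_s)[n, k] = exp(2j*pi*k*n*T/tau).
V = np.exp(2j * np.pi * np.outer(np.arange(N) * T, K) / tau)
c = V @ (S_vals * x_true)

# Recovery (14): x = S^{-1} V^dagger(-t_s) c.
x_rec = np.linalg.pinv(V) @ c / S_vals
```

With N = M and T = τ/N the columns of V are orthogonal, which is exactly why (14) collapses to the DFT-based formula (15).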
This is exactly what the condition in (11) means. This sampling scheme guarantees that each sample is a linearly independent combination of the elements of x; therefore, the linear system of equations in (13) has full column rank, which allows us to solve for the vector x. We summarize this result in the following theorem.

Theorem 1. Consider the τ-periodic stream of pulses of order L:

x(t) = Σ_{m∈Z} Σ_{l=1}^{L} a_l h(t − t_l − mτ).

Choose a set K of consecutive indices for which H(2πk/τ) ≠ 0, ∀k ∈ K. Then the samples

c[n] = ⟨s(t − nT), x(t)⟩, n = 0 … N − 1,

uniquely determine the signal x(t) for any s(t) satisfying condition (11), as long as N ≥ |K| ≥ 2L.

In order to extend Theorem 1 to nonuniform sampling, we only need to substitute the nonuniform sampling times in the vector t_s in (14). Theorem 1 presents a general single-channel sampling scheme. One special case of this framework is the one proposed by Vetterli et al. in [3], in which s*(−t) = B sinc(−Bt), where B = M/τ and N ≥ M ≥ 2L. In this case s(t) is an ideal low-pass filter of bandwidth B with

S(ω) = (1/√(2π)) rect(ω/(2πB)). (16)

Clearly, (16) satisfies the general condition in (11) with K = {−⌊M/2⌋, …, ⌊M/2⌋} and S(2πk/τ) = 1/√(2π), ∀k ∈ K. Note that since this filter is real valued it must satisfy k ∈ K ⇒ −k ∈ K, i.e., the indices come in pairs except for k = 0. Since k = 0 is part of the set K, in this case the cardinality M = |K| must be odd, so that N ≥ M ≥ 2L + 1 samples are required, rather than the minimal rate N ≥ 2L. The ideal low-pass filter is bandlimited, and therefore has infinite time-support, so that it cannot be extended to finite and infinite streams of pulses. In the next section we propose a class of non-bandlimited sampling kernels which exploit the additional degrees of freedom in condition (11) and have compact support in the time domain. The compact support allows us to extend this class to finite and infinite streams, as we show in Sections III and IV, respectively. C.
Compactly Supported Sampling Kernels

Consider the following SoS class, which consists of a sum of sincs in the frequency domain:

G(ω) = (τ/√(2π)) Σ_{k∈K} b_k sinc(ω/(2π/τ) − k), (17)

where b_k ≠ 0 for k ∈ K. The filter in (17) is real valued if and only if k ∈ K ⇒ −k ∈ K and b_k = b*_{−k} for all k ∈ K. Since each sinc in the sum satisfies

sinc(ω/(2π/τ) − k) = 1 for ω = 2πk′/τ, k′ = k, and 0 for ω = 2πk′/τ, k′ ≠ k, (18)

the filter G(ω) satisfies (11) by construction. Switching to the time domain,

g(t) = rect(t/τ) Σ_{k∈K} b_k e^{j2πkt/τ}, (19)

which is clearly a time-compact filter with support τ. The SoS class in (19) may be extended to

G(ω) = (τ/√(2π)) Σ_{k∈K} b_k φ(ω/(2π/τ) − k), (20)

where b_k ≠ 0 for k ∈ K, and φ(ω) is any function satisfying

φ(ω) = 1 for ω = 0; 0 for |ω| ∈ N; arbitrary otherwise. (21)

This more general structure allows for smooth versions of the rect function, which is important when practically implementing analog filters. The function g(t) represents a class of filters determined by the parameters {b_k}_{k∈K}. These degrees of freedom offer a filter-design tool where the free parameters {b_k}_{k∈K} may be optimized for different goals, e.g., parameters which result in a feasible analog filter. In Theorem 2 below, we show how to choose {b_k} to minimize the mean-squared error (MSE) in the presence of noise. Determining the parameters {b_k}_{k∈K} may also be viewed from a more empirical point of view. The impulse response of any analog filter having support τ may be written in terms of a windowed Fourier series as

Φ(t) = rect(t/τ) Σ_{k∈Z} β_k e^{j2πkt/τ}. (22)

Confining ourselves to filters which satisfy β_k ≠ 0 for k ∈ K, we may truncate the series and choose

b_k = β_k for k ∈ K, and b_k = 0 for k ∉ K, (23)

as the parameters of g(t) in (19). With this choice, g(t) can be viewed as an approximation to Φ(t).
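Condition (11) for the SoS kernel can be verified numerically. The sketch below uses an illustrative set K and symmetric Hamming-type coefficients (our own choice, not the paper's exact window), evaluates the CTFT of g(t) by a trapezoid rule over its support, and checks that on the grid ω = 2πk′/τ the transform is (τ/√(2π)) b_{k′} inside K and zero outside; the 1/√(2π) CTFT convention follows (16).

```python
import numpy as np

tau = 1.0
K = np.arange(-2, 3)                                    # consecutive indices
b = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(5) / 4)  # symmetric, all nonzero

# g(t) = rect(t/tau) * sum_{k in K} b_k e^{j2*pi*k*t/tau}, sampled on a fine grid.
t = np.linspace(-tau / 2, tau / 2, 200001)
g = (b[None, :] * np.exp(2j * np.pi * np.outer(t, K) / tau)).sum(axis=1)

def ctft_at(kp):
    # (1/sqrt(2*pi)) * integral of g(t) e^{-j*w*t} dt at w = 2*pi*kp/tau (trapezoid rule)
    vals = g * np.exp(-2j * np.pi * kp * t / tau)
    dt = t[1] - t[0]
    return (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dt / np.sqrt(2 * np.pi)

for kp in K:            # inside K: G(2*pi*k/tau) = (tau/sqrt(2*pi)) * b_k
    assert abs(ctft_at(kp) - tau * b[kp + 2] / np.sqrt(2 * np.pi)) < 1e-6
for kp in (-4, 3, 7):   # outside K, but still on the 2*pi/tau grid: zero
    assert abs(ctft_at(kp)) < 1e-6
print("condition (11) holds at all checked grid points")
```

The check works for any nonzero choice of the b_k, which is exactly the design freedom the text describes.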
Notice that there is an inherent tradeoff here: using more coefficients will result in a better approximation of the analog filter, but in turn will require more samples, since the number of samples N must be greater than the cardinality of the set K. To demonstrate the filter g(t), we first choose K = {−p, …, p} and set all coefficients {b_k} to one, resulting in

g(t) = rect(t/τ) Σ_{k=−p}^{p} e^{j2πkt/τ} = rect(t/τ) D_p(2πt/τ), (24)

where the Dirichlet kernel D_p(t) is defined by

D_p(t) = Σ_{k=−p}^{p} e^{jkt} = sin((p + 1/2)t) / sin(t/2). (25)

The resulting filter, for p = 10 and τ = 1 sec, is depicted in Fig. 2. This filter is also optimal in an MSE sense for the case h(t) = δ(t), as we show in Theorem 2. In Fig. 3 we plot g(t) for the case in which the b_k's are chosen as a length-M symmetric Hamming window:

b_k = 0.54 − 0.46 cos(2π(k + ⌊M/2⌋)/M), k ∈ K. (26)

Notice that in both cases the coefficients satisfy b_k = b*_{−k}, and therefore the resulting filters are real valued. In the presence of noise, the choice of {b_k}_{k∈K} will affect the performance. Consider the case in which digital noise is added to the samples c, so that y = c + w, with w denoting a white Gaussian noise vector. Using (13),

y = V(−t_s) B x + w, (27)

where B is a diagonal matrix having {b_k} on its diagonal. To choose the optimal B we assume that the {a_l} are uncorrelated with variance σ_a², independent of {t_l}, and that the {t_l} are uniformly distributed in [0, τ). Since the noise is added to the samples after filtering, increasing the filter's amplification will always reduce the MSE. Therefore, the filter's energy must be normalized, and we do so by adding the constraint Tr(B*B) = 1. Under these assumptions, we have the following theorem: Theorem 2.
The minimal MSE of a linear estimator of x from the noisy samples y in (27) is achieved by choosing the coefficients

|b_i|² = (σ²/N)(√(N/(λσ²)) − 1/|h̃_i|²) for λ ≤ |h̃_i|⁴N/σ², and |b_i|² = 0 for λ > |h̃_i|⁴N/σ², (28)

where h̃_k = H(2πk/τ)σ_a√L/τ, arranged in increasing order of |h̃_k|,

√λ = (|K| − m)√(N/σ²) / (N/σ² + Σ_{i=m+1}^{|K|} 1/|h̃_i|²), (29)

and m is the smallest index for which λ ≤ |h̃_{m+1}|⁴N/σ².

Proof: See the Appendix.

An important consequence of Theorem 2 is the following corollary.

Corollary 1. If |h̃_k|² = |h̃_ℓ|² for all k, ℓ ∈ K, then the optimal coefficients are |b_i|² = 1/|K| for all i ∈ K.

Proof: It is evident from (28) that if |h̃_k| = |h̃_ℓ| then |b_k| = |b_ℓ|. To satisfy the trace constraint Tr(B*B) = 1, λ cannot be chosen such that all b_i = 0. Therefore, |b_i|² = 1/|K| for all i ∈ K.

From Corollary 1 it follows that when h(t) = δ(t), the optimal choice of coefficients is b_k = b_j for all k and j. We therefore use this choice when simulating noisy settings in the next section. Our sampling scheme for the periodic case consists of sampling kernels having compact support in the time domain. In the next section we exploit the compact support of our filter and extend the results to the finite stream case. We will show that our sampling and reconstruction scheme offers a numerically stable solution with high noise robustness. As a first demonstration, consider a τ-periodic stream of pulses with the Gaussian pulse shape

h(t) = (1/√(2πσ²)) exp(−t²/2σ²), (30)

with parameter σ = 7·10⁻³ and period τ = 1. The time-delays and amplitudes were chosen randomly. In order to demonstrate near-critical sampling we choose the set of indices K = {−L, …, L} with cardinality M = |K| = 11. We filter x(t) with g(t) of (26). The filter output is sampled uniformly N times, with sampling period T = τ/N, where N = M = 11. The sampling process is depicted in Fig. 4. The vector x is obtained using (14), and the delays and amplitudes are determined by the annihilating-filter method. Reconstruction results are depicted in Fig. 5.
The estimation and reconstruction are both exact to numerical precision. Analog filtering operations are carried out by discrete approximations over a fine grid; the analog signal and filters are mimicked by high-rate digital signals. Since the sampling rate which constructs the fine grid is 2-3 orders of magnitude higher than the final sampling rate 1/T, the simulations reflect the analog results very well. We next examine the performance in the presence of noise. Samples were taken uniformly with sampling period T = τ/N, and we choose g(t) given by (24). As explained earlier, only the values of the filter at the points 2πk/τ, k ∈ K, affect the samples (see (11)). Since the values of the filter at the relevant points coincide and are equal to one for both the low-pass filter of [3] and g*(−t), the resulting samples for the two settings are identical. Therefore, we present results for our method only, and note that the exact same results are obtained using the approach of [3]. In our setup, white Gaussian noise (AWGN) with variance σ_n² is added to the samples, where we define the SNR as

SNR = (1/N)‖c‖₂² / σ_n², (31)

with c denoting the clean samples. In our experiments the noise variance is set to give the desired SNR. The simulation consists of 1000 experiments for each SNR, where in each experiment a new noise vector is created. We choose t = τ·(1/3, 2/3)ᵀ and a = τ·(1, 1)ᵀ, where these vectors remain constant throughout the experiments. We define the error in time-delay estimation as the average of ‖t − t̂‖₂², where t and t̂ denote the true and estimated time-delays, respectively, sorted in increasing order. The error in amplitudes is similarly defined by ‖a − â‖₂². In Fig. 6 we show the error as a function of SNR for both delay and amplitude estimation. Estimation of the time-delays is the main interest in the FRI literature, due to the special nonlinear methods required for delay recovery.
Once the delays are known, the standard least-squares method is typically used to recover the amplitudes; therefore, we focus on delay estimation in the sequel. Finally, for the same setting, we can improve reconstruction accuracy at the expense of oversampling, as illustrated in Fig. 7. Here we show recovery performance for oversampling factors of 1, 2, 4 and 8. The oversampling was exploited using the total least-squares method, followed by Cadzow's iterative denoising (both described in detail in [10]).

III. FINITE STREAM OF PULSES

A. Extension of SoS Class

Consider now a finite stream of pulses, defined as

x̃(t) = Σ_{l=1}^{L} a_l h(t − t_l), t_l ∈ [0, τ), a_l ∈ R, l = 1 … L, (32)

where, as in Section II, h(t) is a known pulse shape, and {t_l, a_l}_{l=1}^{L} are the unknown delays and amplitudes. The time-delays {t_l}_{l=1}^{L} are restricted to lie in a finite time interval [0, τ). Since there are only 2L degrees of freedom, we wish to design a sampling and reconstruction method which perfectly reconstructs x̃(t) from 2L samples. In this section we assume that the pulse h(t) has finite support R, i.e.,

h(t) = 0, ∀|t| ≥ R/2. (33)

This is a rather weak condition, since our primary interest is in very short pulses which have wide, or even infinite, frequency support, and therefore cannot be sampled efficiently using classical sampling results for bandlimited signals. We now investigate the structure of the samples taken in the periodic case, and design a sampling kernel for the finite setting which obtains precisely the same samples c[n] as in the periodic case. In the periodic setting, the resulting samples are given by (10).
Using g(t) of (19) as the sampling kernel we have

c[n] = ⟨g(t − nT), x(t)⟩ = Σ_{m∈Z} Σ_{l=1}^{L} a_l ∫_{−∞}^{∞} h(t − t_l − mτ) g*(t − nT) dt = Σ_{m∈Z} Σ_{l=1}^{L} a_l ∫_{−∞}^{∞} h(t) g*(t − (nT − t_l − mτ)) dt = Σ_{m∈Z} Σ_{l=1}^{L} a_l φ(nT − t_l − mτ), (34)

where we defined

φ(ϑ) = ⟨g(t − ϑ), h(t)⟩. (35)

Since g(t) in (19) vanishes for all |t| > τ/2 and h(t) satisfies (33), the support of φ(t) is (R + τ), i.e.,

φ(t) = 0 for all |t| ≥ (R + τ)/2. (36)

Using this property, the summation in (34) will be over nonzero values for indices m satisfying

|nT − t_l − mτ| < (R + τ)/2. (37)

Sampling within the window [0, τ), i.e., nT ∈ [0, τ), and noting that the time-delays lie in the interval t_l ∈ [0, τ), l = 1 … L, (37) implies

(R + τ)/2 > |nT − t_l − mτ| ≥ |m|τ − |nT − t_l| > (|m| − 1)τ. (38)

Here we used the triangle inequality and the fact that |nT − t_l| < τ in our setting. Therefore,

|m| < R/(2τ) + 3/2 ⇒ |m| ≤ ⌈R/(2τ) + 3/2⌉ − 1 ≜ r, (39)

i.e., the elements of the sum in (34) vanish for all m except the values in (39). Consequently, the infinite sum in (34) reduces to a finite sum over |m| ≤ r, so that (34) becomes

c[n] = Σ_{m=−r}^{r} Σ_{l=1}^{L} a_l φ(nT − t_l − mτ) = Σ_{m=−r}^{r} Σ_{l=1}^{L} a_l ∫_{−∞}^{∞} h(t − t_l) g*(t − nT + mτ) dt = Σ_{m=−r}^{r} ⟨g(t − nT + mτ), Σ_{l=1}^{L} a_l h(t − t_l)⟩, (40)

where in the last equality we used the linearity of the inner product. Defining a function which consists of (2r + 1) periods of g(t),

g_r(t) = Σ_{m=−r}^{r} g(t + mτ), (41)

we conclude that

c[n] = ⟨g_r(t − nT), x̃(t)⟩. (42)

Therefore, the samples c[n] can be obtained by filtering the aperiodic signal x̃(t) with the filter g*_r(−t) prior to sampling. This filter has compact support equal to (2r + 1)τ. Since the finite-setting samples (42) are identical to those of the periodic case (34), recovery of the delays and amplitudes is performed exactly as in the periodic setting. We summarize this result in the following theorem. Theorem 3.
Consider the finite stream of pulses given by

x̃(t) = Σ_{l=1}^{L} a_l h(t − t_l), t_l ∈ [0, τ), a_l ∈ R,

where h(t) has finite support R. Choose a set K of consecutive indices for which H(2πk/τ) ≠ 0, ∀k ∈ K. Then the N samples

c[n] = ⟨g_r(t − nT), x̃(t)⟩, n = 0 … N − 1, nT ∈ [0, τ),

where r is defined in (39), and g_r(t) is compactly supported and defined by (41) (based on the filter g(t) in (17)), uniquely determine the signal x̃(t) as long as N ≥ |K| ≥ 2L.

If, for example, the support R of h(t) satisfies R ≤ τ, then we obtain from (39) that r = 1. Therefore, the filter in this case consists of 3 periods of g(t):

g_3p(t) ≜ g_r(t)|_{r=1} = g(t − τ) + g(t) + g(t + τ). (43)

Practical implementation of the filter may be carried out using delay-lines. The relation of this scheme to previous approaches will be investigated in Section V. In a first simulation, perfect reconstruction is achieved, as can be seen in Fig. 8; the estimation is exact to numerical precision. 2) High Order Problems: The same simulation was carried out with L = 20 diracs. The results are shown in Fig. 9. Here again, the reconstruction is perfect even for large L. 3) Noisy Case: We now consider the performance of our method in the presence of noise. In addition, we compare our performance to the B-spline and E-spline methods proposed in [13], and to the Gaussian sampling kernel of [3]. We examine four scenarios, in which the signal consists of L = 2, 3, 5, 20 diracs. (Due to the computational complexity of calculating the time-domain expression for high-order E-splines, these functions were simulated up to order 9, which allows for L = 5 pulses.) In our setup, the time-delays are equally distributed in the window [0, τ), with τ = 1, and remain constant throughout the experiments. All amplitudes are set to one. The SNR of each method is computed with respect to its own samples; in other words, σ_n in (31) is method-dependent, and is determined by the desired SNR and the samples of the specific technique.
Hard thresholding was implemented in order to improve the spline methods, as suggested by the authors in [13]. The threshold was chosen to be 3σ n , where σ n is the standard deviation of the AWGN. For the Gaussian sampling kernel the parameter σ was optimized and took on the value of σ = 0.25, 0.28, 0.32, 0.9, respectively. The results are given in Fig. 10. For L = 2 all methods are stable, where E-splines exhibit better performance than B-splines, and Gaussian and SoS approaches demonstrate the lowest errors. As the value of L grows, the advantage of the SoS filter becomes more prominent, where for L ≥ 5, the performance of Gaussian and both spline methods deteriorate and have errors approaching the order of τ . In contrast, the SoS filter retains its performance nearly unchanged even up to L = 20, where the B-spline and Gaussian methods are unstable. The improved version of the Gaussian approach presented in [12] would not perform better in this high order case, since it fails for L > 9, as noted by the authors. A comparison of our approach to previous methods will be detailed in Section V. IV. INFINITE STREAM OF PULSES We now consider the case of an infinite stream of pulses z(t) = l∈Z a l h(t − t l ), t l , a l ∈ R.(44) We assume that the infinite signal has a bursty character, i.e., the signal has two distinct phases: a) bursts of maximal duration τ containing at most L pulses, and b) quiet phases between bursts. For the sake of clarity we begin with the case h(t) = δ(t). For this choice the filter g * r (−t) in (41) reduces to g * 3p (−t) of (43). Since the filter g * 3p (−t) has compact support 3τ we are assured that the current burst cannot influence samples taken 3τ /2 seconds before or after it. In the finite case we have confined ourselves to sampling within the interval [0, τ ). Similarly, here, we assume that the samples are taken during the burst duration. 
Therefore, if the minimal spacing between any two consecutive bursts is 3τ /2, then we are guaranteed that each sample taken during the burst is influenced by one burst only, as depicted in Fig. 11. Consequently, the infinite problem can be reduced to a sequential solution of local distinct finite order problems, as in Section III. Here the compact support of our filter comes into play, allowing us to apply local reconstruction methods. τ 1st burst 2nd burst g 3p (t) filter support = 3τ t −0.5τ 1.5τ 2.5τ 3.5τ Fig. 11. Bursty signal z(t). Spacing of 3τ /2 between bursts ensures that the influence of the current burst ends before taking the samples of the next burst. This is due to the finite support, 3τ of the sampling kernel g * 3p (−t). In the above argument we assume we know the locations of the bursts, since we must acquire samples from within the burst duration. Samples outside the burst duration are contaminated by energy from adjacent bursts. Nonetheless, knowledge of burst locations is available in many applications such as synchronized communication where the receiver knows when to expect the bursts, or in radar or imaging scenarios where the transmitter is itself the receiver. We now state this result in a theorem. Theorem 4. Consider a signal z(t) which is a stream of bursts consisting of delayed and weighted diracs. The maximal burst duration is τ , and the maximal number of pulses within each burst is L. Then, the samples given by c[n] = g 3p (t − nT ), z(t) , n ∈ Z where g 3p (t) is defined by (43), are a sufficient characterization of z(t) as long as the spacing between two adjacent bursts is greater than 3τ /2, and the burst locations are known. Extending this result to a general pulse h(t) is quite straightforward, as long as h(t) is compactly supported with support R, and we filter with g * r (−t) as defined in (41) with the appropriate r from (39). 
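The number of periods r and the resulting filter support can be sketched numerically. Here we read (38)-(39) as r = ⌈R/(2τ) + 3/2⌉ − 1 (our reading of the garbled expression, consistent with the special case R ≤ τ ⇒ r = 1 and the three-period filter g_3p quoted in the text); the concrete R and τ values are illustrative.

```python
import math

def num_periods(R, tau):
    # r of (39): smallest integer bound on |m| from |m| < R/(2*tau) + 3/2
    return math.ceil(R / (2 * tau) + 1.5) - 1

def support(R, tau):
    # g_r(t) of (41) spans (2r + 1) periods of length tau
    return (2 * num_periods(R, tau) + 1) * tau

assert num_periods(0.3, 1.0) == 1      # R <= tau  ->  r = 1, i.e. g_3p
assert num_periods(1.0, 1.0) == 1
assert support(1.0, 1.0) == 3.0        # support 3*tau, as stated for g_3p
assert num_periods(2.5, 1.0) == 2      # wider pulses need more periods
print("r for R = tau:", num_periods(1.0, 1.0))
```

This also makes the burst-spacing requirement concrete: the filter support (2r + 1)τ determines how far apart bursts must be.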
If we can choose a set K of consecutive indices for which H(2πk/τ) ≠ 0, ∀k ∈ K, and we are guaranteed that the minimal spacing between two adjacent bursts is greater than ((2r + 1)τ + R)/2, then the above theorem holds.

A. Periodic Case

The work in [3] was the first to address efficient sampling of pulse streams, e.g., streams of diracs. Their approach for solving the periodic case was ideal low-pass filtering, followed by uniform sampling, which allowed them to obtain the Fourier series coefficients of the signal. These coefficients are then processed by the annihilating filter to obtain the unknown time-delays and amplitudes. In Section II, we derived a general condition on the sampling kernel (11) under which recovery is guaranteed. The low-pass filter of [3] is a special case of this result. The noise robustness of both the low-pass approach and our more general method is high as long as the pulses are well separated, since reconstruction from Fourier series coefficients is stable in this case. Both approaches achieve the minimal number of samples. The low-pass filter is bandlimited and consequently has infinite time-support; therefore, this sampling scheme is unsuitable for finite and infinite streams of pulses. The SoS class introduced in Section II consists of compactly supported filters, which is crucial to enable the extension of our results to finite and infinite streams of pulses. A comparison between the two methods is shown in Table I.

B. Finite Pulse Stream

The authors of [3] proposed a Gaussian sampling kernel for sampling finite streams of diracs. The Gaussian method is numerically unstable, as mentioned in [12], since the samples are multiplied by a rapidly diverging or decaying exponent. Therefore, this approach is unsuitable for L ≥ 6. Modifications proposed in [12] exhibit better performance and stability; however, these methods require substantial oversampling, and still exhibit instability for L > 9.
In [13] the family of polynomial reproducing kernels was introduced as sampling filters for the model (32). B-splines were proposed as a specific example. The B-spline sampling filter enables obtaining moments of the signal, rather than Fourier coefficients. The moments are then processed with the same annihilating filter used in previous methods. However, as mentioned by the authors, this approach is unstable for high values of L. This is due to the fact that in contrast to the estimation of Fourier coefficients, estimating high order moments is unstable, since unstable weighting of the samples is carried out during the process. Another general family introduced in [13] for the finite model is the class of exponential reproducing kernels. As a specific case, the authors propose E-spline sampling kernels. The CTFT of an E-spline of order N + 1 is described byβ α (ω) = N n=0 1 − e αn−jω jω − α n ,(45) where α = (α 0 , α 1 , . . . , α N ) are free parameters. In order to use E-splines as sampling kernels for pulse streams, the authors propose a specific structure on the α's, α n = α 0 + nλ. Choosing exponents having a non-vanishing real part results in unstable weighting, as in the B-spline case. However, choosing the special case of pure imaginary exponents in the E-splines, already suggested by the authors, results in a reconstruction method based on Fourier coefficients, which demonstrates an interesting relation to our method. The Fourier coefficients are obtained by applying a matrix consisting of the exponent spanning coefficients {c m,n }, (see [13]), instead of our Vandermonde matrix relation (14). With this specific choice of parameters the E-spline function satisfies (11). Interestingly, with a proper choice of spanning coefficients, it can be shown that the SoS class can reproduce exponentials with frequencies {2πk/τ } k∈K , and therefore satisfies the general exponential reproduction property of [13]. 
However, the SoS filter proposes a new sampling scheme which has substantial advantages over existing methods including E-splines. The first advantage is in the presence of noise, where both methods have the following structure: y = Ax + w,(46) where w is the noise vector. While the Fourier coefficients vector x is common to both approaches, the linear transformation A is method dependent, and therefore the sample vector y is different. In our approach with g(t) of (24), A is the DFT matrix, which for any order L has a condition number of 1. However, in the case of E-splines the transformation matrix A consists of the E-spline exponential spanning coefficients, which has a much higher condition number, e.g., above 100 for L = 5. Consequently, some Fourier coefficients will have much higher values of noise than others. This scenario of high variance between noise levels of the samples is known to deteriorate the performance of spectral analysis methods [11], the annihilating filter being one of them. This explains our simulations which show that the SoS filter outperforms the E-spline approach in the presence of noise. When the E-spline coefficients α are pure imaginary, it can be easily shown that (45) becomes a multiplication of shifted sincs. This is in contrast to the SoS filter which consists of a sum of sincs in the frequency domain. Since multiplication in the frequency domain translates to convolution in the time domain, it is clear that the support of the E-spline grows with its order, and in turn with the order of the problem L. In contrast, the support of the SoS filter remains unchanged. This observation becomes important when examining the infinite case. The constraint on the signal in [13] is that no more than L pulses be in any interval of length LP T , P being the support of the filter, and T the sampling period. Since P grows linearly with L, the constraint cast on the infinite stream becomes more stringent, quadratically with L. 
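The conditioning argument above can be illustrated numerically. The DFT matrix arising from g(t) of (24) has condition number 1 for any size; for contrast we also show a generic Vandermonde matrix with real nodes, which is badly conditioned. The real-node matrix is our own illustration of how non-DFT Vandermonde systems degrade; it does not reproduce the E-spline condition numbers quoted in the text.

```python
import numpy as np

N = 11
n = np.arange(N)
A_dft = np.exp(-2j * np.pi * np.outer(n, n) / N)   # (unscaled) DFT matrix
print("cond(DFT, N=11):", np.linalg.cond(A_dft))   # equals 1 up to round-off

nodes = np.linspace(0.1, 1.0, N)                   # hypothetical real nodes
A_vand = np.vander(nodes, N, increasing=True)      # Vandermonde on those nodes
print("cond(real Vandermonde, N=11):", np.linalg.cond(A_vand))
```

A condition number of 1 means the noise power is spread evenly across the recovered Fourier coefficients, which is what keeps the subsequent annihilating-filter step stable.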
On the other hand, the constraint on the infinite stream using the SoS filter is independent of L. We showed in simulations that typically for L ≥ 5 the estimation errors, using both B-spline and E-spline sampling kernels, become very large. In contrast, our approach leads to stable reconstruction even for very high values of L, e.g., L = 100. In addition, even for low values of L, we showed in simulations that although the E-spline method has improved performance over B-splines, the SoS reconstruction method outperforms both spline approaches. A comparison is described in Table II.

C. Infinite Streams

The work in [13] addressed the infinite stream case with h(t) = δ(t). They proposed filtering the signal with a polynomial-reproducing sampling kernel prior to sampling. If the signal has at most L diracs within any interval of duration LPT, where P denotes the support of the sampling filter and T the sampling period, then the samples are a sufficient characterization of the signal. This condition allows one to divide the infinite stream into a sequence of finite-case problems. In our approach, the quiet phases of 1.5τ between the bursts of length τ enable the reduction to the finite case. Since the infinite solution is based on the finite one, our method is advantageous in terms of stability in high-order problems and noise robustness. However, we do have an additional requirement of quiet phases between the bursts. Regarding the sampling rate, the number of degrees of freedom of the signal per unit time, also known as the rate of innovation, is ρ = 2L/(2.5τ), which is the critical sampling rate. Our sampling rate is 2L/τ, and therefore we oversample by a factor of 2.5. In the same scenario, the method in [13] would require a sampling rate of LP/(2.5τ), i.e., oversampling by a factor of P/2. Properties of polynomial reproducing kernels imply that P ≥ 2L; therefore, for any L ≥ 3, our method exhibits more efficient sampling.
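The rate comparison above is simple arithmetic, sketched here for concreteness (rates in units of 1/τ; bursts of length τ with L pulses and quiet phases of 1.5τ, so the rate of innovation is ρ = 2L/(2.5τ)):

```python
def factors(L, P=None):
    """Oversampling factors relative to the rate of innovation rho = 2L/2.5."""
    P = 2 * L if P is None else P      # minimal kernel support P = 2L for [13]
    rho = 2 * L / 2.5                  # innovations per tau
    sos = (2 * L) / rho                # SoS rate 2L/tau  -> always 2.5
    spline = (L * P / 2.5) / rho       # rate LP/(2.5*tau) -> P/2
    return sos, spline

assert factors(5) == (2.5, 5.0)        # P/2 = L when P = 2L
# SoS is more efficient whenever P/2 > 2.5, i.e. for any L >= 3:
assert all(factors(L)[1] > factors(L)[0] for L in range(3, 101))
print(factors(5))
```

The SoS factor is a constant 2.5, while the polynomial-kernel factor grows linearly with L.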
A table comparing the various features is shown in Table III. Recent work [14] presented a low-complexity method for reconstructing streams of pulses (both infinite and finite cases) consisting of diracs. However, the basic assumption of this method is that there is at most one dirac per sampling period. This means we must have prior knowledge of a lower limit on the spacing between two consecutive deltas in order to guarantee correct reconstruction. In some cases such a limit may not exist; even if it does, it will usually force us to sample at a much higher rate than the critical one.

VI. APPLICATION - ULTRASOUND IMAGING

An interesting application of our framework is ultrasound imaging. In ultrasonic imaging, an acoustic pulse is transmitted into the scanned tissue. The pulse is reflected due to changes in acoustic impedance which occur, for example, at the boundaries between two different tissues. At the receiver, the echoes are recorded, where the time-of-arrival and power of the echo indicate the scatterer's location and strength, respectively. Accurate estimation of tissue boundaries and scatterer locations allows for reliable detection of certain illnesses, and is therefore of major clinical importance. The location of the boundaries is often more important than the power of the reflection. This stream of pulses is finite since the pulse energy decays within the tissue. We now demonstrate our method on real 1-dimensional (1D) ultrasound data. The multiple-echo signal recorded at the receiver can be modeled as a finite stream of pulses, as in (32). The unknown time-delays correspond to the locations of the various scatterers, whereas the amplitudes correspond to their reflection coefficients. The pulse shape in this case is the Gaussian defined in (30), due to the physical characteristics of the electro-acoustic transducer (mechanical damping).
We assume the received pulse shape is known, either by assuming it is unchanged through propagation, by physically modeling ultrasonic wave propagation, or by prior estimation of the received pulse. A full investigation of mismatch in the pulse shape is left for future research. In our setting, a phantom consisting of uniformly spaced pins, mimicking point scatterers, was scanned by GE Healthcare's Vivid-i portable ultrasound imaging system [20], [21], using a 3S-RS probe. We use the data recorded by a single element in the probe, which is modeled as a 1D stream of pulses. The center frequency of the probe is f_c = 1.7021 MHz. The width of the transmitted Gaussian pulse in this case is σ = 3·10⁻⁷ sec, and the depth of imaging is R_max = 0.16 m, corresponding to a time window of τ = 2.08·10⁻⁴ sec. In this experiment, all filtering and sampling operations are carried out digitally in simulation. The analog filter required by the sampling scheme is replaced by a lengthy finite impulse response (FIR) filter. Since the sampling frequency of the element in the system is f_s = 20 MHz, which is more than 5 times higher than the Nyquist rate, the recorded data represents the continuous signal reliably. Consequently, digital filtering of the high-rate sampled data vector (4160 samples), followed by proper decimation, mimics the original analog sampling scheme with high accuracy. The recorded signal is depicted in Fig. 12. The band-pass ultrasonic signal is demodulated to base-band, i.e., envelope detection is performed, before it is passed to the recovery process. We carried out our sampling and reconstruction scheme on this data. We set L = 4, looking for the four strongest echoes. Since the data is corrupted by strong noise, we oversampled the signal, obtaining twice the minimal number of samples. In addition, hard-thresholding of the samples was implemented, where we set the threshold to 10 percent of the maximal value.
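As a quick consistency check of the imaging parameters quoted above: a depth R_max = 0.16 m and a two-way time window τ = 2.08·10⁻⁴ sec imply a propagation speed c = 2R_max/τ, which should be close to the nominal speed of sound in soft tissue (~1540 m/s; the nominal value is our assumption, not stated in the text).

```python
R_max = 0.16            # imaging depth [m]
tau = 2.08e-4           # two-way time window [s]
c = 2 * R_max / tau     # implied round-trip propagation speed [m/s]
print("implied c =", round(c, 1), "m/s")
assert abs(c - 1540) < 5    # close to nominal soft-tissue speed of sound
```

The same relation converts the estimated time-delays into scatterer depths, which is how a location error translates into millimeters.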
We obtained N = 17 samples by decimating the output of the lengthy FIR digital filter imitating g*_3p(−t) from (43), where the coefficients {b_k} were all set to one. In Fig. 13a the reconstructed signal is depicted versus the full demodulated signal using all 4160 samples. Clearly, the time-delays were estimated with high precision. The amplitudes were estimated as well; however, the amplitude of the second pulse has a large error, probably due to the large noise values present in its vicinity. As mentioned earlier, though, the exact locations of the scatterers are often more important than the accurate reflection coefficients. We carried out the same experiment, only now oversampling by a factor of 4, resulting in N = 33 samples; here no hard-thresholding is required. The results are depicted in Fig. 13b, and are very similar to our previous results. In both simulations, the estimation error in the pulse location is around 0.1 mm. Current ultrasound imaging technology operates on the high-rate sampled data, e.g., f_s = 20 MHz in our setting. Since there are usually 100 different elements in a single ultrasonic probe, each sampled at a very high rate, data throughput becomes very high and imposes high computational complexity on the system, limiting its capabilities. Therefore, there is a demand for lowering the sampling rate, which in turn will reduce the complexity of reconstruction. Exploiting the parametric point of view, our sampling scheme can significantly reduce both the sampling and the processing rate.

VII. CONCLUSIONS

We presented efficient sampling and reconstruction schemes for streams of pulses. For the case of a periodic stream of pulses, we derived a general condition on the sampling kernel which allows a single-channel uniform sampling scheme. Previous work [3] is a special case of this general result. We then proposed a class of filters, satisfying the condition, with compact support. Exploiting the compact support of the filters, we constructed a new sampling scheme for the case of a finite stream of pulses.
Simulations show this method exhibits better performance than previous techniques [3], [13], in terms of stability in high order problems, and noise robustness. An extension to an infinite stream of pulses was also presented. The compact support of the filter allows for local reconstruction, and thus lowers the complexity of the problem. Finally, we demonstrated the advantage of our approach in reducing the sampling and processing rate of ultrasound imaging, by applying our techniques to real ultrasound data. APPENDIX PROOF OF THEOREM 2 The MSE of the optimal linear estimator of the vector x from the measurement vector y is known to be [22] MSE = Tr {R xx } − Tr R xy R −1 yy R yx . The covariance matrices in our case are R xy = R xx B * V * (48) R yy = VBR xx B * V * + σ 2 I,(49) where we used (27), and the fact that R ww = σ 2 I since w is a white Gaussian noise vector. Under our assumptions on {t l } and {a l }, denoting h k = H(2πk/τ ), and using (5) (R xx ) k,k ′ = E X[k]X * [k ′ ] = 1 τ 2 h k h k ′ L l=1 L l ′ =1 E a l a * l ′ e −j 2π τ (ktl−k ′ t l ′ ) = σ 2 a τ 2 h k h k ′ L l=1 E e −j 2π τ (k−k ′ )tl = σ 2 a τ 2 h k h k ′ L l=1 τ 0 1 τ e −j 2π τ (k−k ′ )tl dt = σ 2 a τ 2 L|h k | 2 δ k,k ′ .(50) Denoting byH a diagonal matrix with kth element |h k | 2 = |h k | 2 σ 2 a L/τ 2 we have R xx =H.(51) Since the first term of (47) is independent of B, minimizing the MSE with respect to B is equivalent to maximizing the second term in (47). Substituting (48),(49) and (51) into this term, the optimal B is a solution to Using the matrix inversion formula [23], (VBHB * V * + σ 2 I) −1 = 1 σ 2 I − VB σ 2H−1 + B * V * VB −1 B * V * .(53) It is easy to verify from the definition of V in (13) that (V * V) ik = N −1 l=0 e j 2π N l(k−i) = N δ k,i .(54) Therefore, the objective in (52) equals Tr N σ 2H B * I − B σ 2 NH −1 + B * B −1 B * BH = |K| i=1 |h i | 2 1 − σ 2 /N |b i | 2 |h i | 2 + σ 2 /N(55) where we used the fact that B andH are diagonal. 
We can now find the optimal B by maximizing (55), which is equivalent to minimizing the negative term: min B |K| i=1 |h i | 2 1 + |b i | 2 |h i | 2 N/σ 2 , s.t. |K| i=1 |b i | 2 = 1.(56) Denoting β i = |b i | 2 , (56) becomes a convex optimization problem: min βi |K| i=1 |h i | 2 1 + β i |h i | 2 N/σ 2(57) subject to β i ≥ 0 (58) |K| i=1 β i = 1.(59) To solve (57) subject to (58) and (59), we form the Lagrangian: L = |K| i=1 |h i | 2 1 + β i |h i | 2 N/σ 2 + λ   |K| i=1 β i − 1   − |K| i=1 µ i β i(60) where from the Karush-Kuhn-Tucker (KKT) conditions [24], µ i ≥ 0 and µ i β i = 0. Differentiating (60) with respect to β i and equating to 0 |h i | 4 N/σ 2 (1 + β i |h i | 2 N/σ 2 ) 2 + µ i = λ,(61) so that λ > 0, sinceh i > 0 by construction of H (see Theorem 1). If λ > |h i | 4 N/σ 2 then µ i > 0, and therefore, β i = 0 from KKT. If λ ≤ |h i | 4 N/σ 2 then from (61) µ i = 0 and β i = σ 2 N N λσ 2 − 1 |h i | 2 .(62) The optimal β i is therefore β i =      σ 2 N N λσ 2 − 1 |hi| 2 λ ≤ |h i | 4 N/σ 2 0 λ > |h i | 4 N/σ 2(63) where λ > 0 is chosen to satisfy (59). Note that from (63), if β i = 0 and i < j, then β j = 0 as well, since |h i | are in an increasing order. We now show that there is a unique λ that satisfies (59). Define the function G(λ) = |K| i=1 β i (λ) − 1,(64) so that λ is a root of G(λ). Since the |h i |'s are in an increasing order, |h |K| | = max i |h i |. It is clear from (63) that G(λ) is monotonically decreasing for 0 < λ ≤ |h |K| | 4 N/σ 2 . In addition, G(λ) = −1 for λ > |h |K| | 4 N/σ 2 , and G(λ) > 0 for λ → 0. Thus, there is a unique λ for which (59) is satisfied. Substituting (63) into (59), and denoting by m the smallest index for which λ ≤ |h m+1 | 4 N/σ 2 , we have √ λ = (|K| − m) N/σ 2 N/σ 2 + |K| i=m+1 1/|h i | 2 ,(65) completing the proof of the theorem.
9,762
1003.2822
2138787877
Signals comprised of a stream of short pulses appear in many applications including bioimaging and radar. The recent finite rate of innovation framework, has paved the way to low rate sampling of such pulses by noticing that only a small number of parameters per unit time are needed to fully describe these signals. Unfortunately, for high rates of innovation, existing sampling schemes are numerically unstable. In this paper we propose a general sampling approach which leads to stable recovery even in the presence of many pulses. We begin by deriving a condition on the sampling kernel which allows perfect reconstruction of periodic streams from the minimal number of samples. We then design a compactly supported class of filters, satisfying this condition. The periodic solution is extended to finite and infinite streams and is shown to be numerically stable even for a large number of pulses. High noise robustness is also demonstrated when the delays are sufficiently separated. Finally, we process ultrasound imaging data using our techniques and show that substantial rate reduction with respect to traditional ultrasound sampling schemes can be achieved.
The work in @cite_20 was the first to address efficient sampling of pulse streams, e.g., diracs. Their approach for solving the periodic case was ideal lowpass filtering, followed by uniform sampling, which allowed to obtain the Fourier series coefficients of the signal. These coefficients are then processed by the annihilating filter to obtain the unknown time-delays and amplitudes. In , we derived a general condition on the sampling kernel , under which recovery is guaranteed. The lowpass filter of @cite_20 is a special case of this result. The noise robustness of both the lowpass approach and our more general method is high as long as the pulses are well separated, since reconstruction from Fourier series coefficients is stable in this case. Both approaches achieve the minimal number of samples.
{ "abstract": [ "The authors consider classes of signals that have a finite number of degrees of freedom per unit of time and call this number the rate of innovation. Examples of signals with a finite rate of innovation include streams of Diracs (e.g., the Poisson process), nonuniform splines, and piecewise polynomials. Even though these signals are not bandlimited, we show that they can be sampled uniformly at (or above) the rate of innovation using an appropriate kernel and then be perfectly reconstructed. Thus, we prove sampling theorems for classes of signals and kernels that generalize the classic \"bandlimited and sinc kernel\" case. In particular, we show how to sample and reconstruct periodic and finite-length streams of Diracs, nonuniform splines, and piecewise polynomials using sinc and Gaussian kernels. For infinite-length signals with finite local rate of innovation, we show local sampling and reconstruction based on spline kernels. The key in all constructions is to identify the innovative part of a signal (e.g., time instants and weights of Diracs) using an annihilating or locator filter: a device well known in spectral analysis and error-correction coding. This leads to standard computational procedures for solving the sampling problem, which we show through experimental results. Applications of these new sampling results can be found in signal processing, communications systems, and biological systems." ], "cite_N": [ "@cite_20" ], "mid": [ "2158537680" ] }
Innovation Rate Sampling of Pulse Streams with Application to Ultrasound Imaging
Sampling is the process of representing a continuous-time signal by discrete-time coefficients, while retaining the important signal features. The well-known Shannon-Nyquist theorem states that the minimal sampling rate required for perfect reconstruction of bandlimited signals is twice the maximal frequency. This result has since been generalized to minimal rate sampling schemes for signals lying in arbitrary subspaces [1], [2]. Recently, there has been growing interest in sampling of signals consisting of a stream of short pulses, where the pulse shape is known. Such signals have a finite number of degrees of freedom per unit time, also known as the Finite Rate of Innovation (FRI) property [3]. This interest is motivated by applications such as digital processing of neuronal signals, bio-imaging, image processing and ultrawideband (UWB) communications, where such signals are present in abundance. Our work is motivated by the possible application of this model in ultrasound imaging, where echoes of the transmit pulse are reflected off scatterers within the tissue, and form a stream of pulses signal at the receiver. The time-delays and amplitudes of the echoes indicate the position and strength of the various scatterers, respectively. Therefore, determining these parameters from low rate samples of the received signal is an important problem. Reducing the rate allows more efficient processing which can translate to power and size reduction of the ultrasound imaging system. Our goal is to design a minimal rate single-channel sampling and reconstruction scheme for pulse streams that is stable even in the presence of many pulses. Since the set of FRI signals does not form a subspace, classic subspace schemes cannot be directly used to design low-rate sampling schemes. Mathematically, such FRI signals conform with a broader model of signals lying in a union of subspaces [4]- [9]. 
Although the minimal sampling rate required for such settings has been derived, no generic sampling scheme exists for the general problem. Nonetheless, some special cases have been treated in previous work, including streams of pulses. A stream of pulses can be viewed as a parametric signal, uniquely defined by the time-delays of the pulses and their amplitudes. Efficient sampling of periodic impulse streams, having L impulses in each period, was proposed in [3], [10]. The heart of the solution is to obtain a set of Fourier series coefficients, which then converts the problem of determining the time-delays and amplitudes to that of finding the frequencies and amplitudes of a sum of sinusoids. The latter is a standard problem in spectral analysis [11] which can be solved using conventional methods, such as the annihilating filter approach, as long as the number of samples is at least 2L. This result is intuitive since there are 2L degrees of freedom in each period: L time-delays and L amplitudes. Periodic streams of pulses are mathematically convenient to analyze, however not very practical. In contrast, finite streams of pulses are prevalent in applications such as ultrasound imaging. The first treatment of finite Dirac streams appears in [3], in which a Gaussian sampling kernel was proposed. The time-delays and amplitudes are then estimated from the Gaussian tails. This method and its improvement [12] are numerically unstable for high rates of innovation, since they rely on the Gaussian tails which take on small values. The work in [13] introduced a general family of polynomial and exponential reproducing kernels, which can be used to solve FRI problems. Specifically, B-spline and E-spline sampling kernels which satisfy the reproduction condition are proposed. This method treats streams of Diracs, differentiated Diracs, and short pulses with compact support. However, the proposed sampling filters result in poor reconstruction results for large L. 
To the best of our knowledge, a numerically stable sampling and reconstruction scheme for high order problems has not yet been reported. Infinite streams of pulses arise in applications such as UWB communications, where the communicated data changes frequently. Using spline filters [13], and under certain limitations on the signal, the infinite stream can be divided into a sequence of separate finite problems. The individual finite cases may be treated using methods for the finite setting, at the expense of above critical sampling rate, and suffer from the same instability issues. In addition, the constraints that are cast on the signal become more and more stringent as the number of pulses per unit time grows. In a recent work [14] the authors propose a sampling and reconstruction scheme for L = 1, however, our interest here is in high values of L. Another related work [7] proposes a semi-periodic model, where the pulse time-delays do not change from period to period, but the amplitudes vary. This is a hybrid case in which the number of degrees of freedom in the time-delays is finite, but there is an infinite number of degrees of freedom in the amplitudes. Therefore, the proposed recovery scheme generally requires an infinite number of samples. This differs from the periodic and finite cases we discuss in this paper which have a finite number of degrees of freedom and, consequently, require only a finite number of samples. In this paper we study sampling of signals consisting of a stream of pulses, covering the three different cases: periodic, finite and infinite streams of pulses. The criteria we consider for designing such systems are: a) Minimal sampling rate which allows perfect reconstruction, b) numerical stability (with sufficiently separated time delays), and c) minimal restrictions on the number of pulses per sampling period. We begin by treating periodic pulse streams. 
For this setting, we develop a general sampling scheme for arbitrary pulse shapes which allows to determine the times and amplitudes of the pulses, from a minimal number of samples. As we show, previous work [3] is a special case of our extended results. In contrast to the infinite time-support of the filters in [3], we develop a compactly supported class of filters which satisfy our mathematical condition. This class of filters consists of a sum of sinc functions in the frequency domain. We therefore refer to such functions as Sum of Sincs (SoS). To the best of our knowledge, this is the first class of finite support filters that solve the periodic case. As we discuss in detail in Section V, these filters are related to exponential reproducing kernels, introduced in [13]. The compact support of the SoS filters is the key to extending the periodic solution to the finite stream case. Generalizing the SoS class, we design a sampling and reconstruction scheme which perfectly reconstructs a finite stream of pulses from a minimal number of samples, as long as the pulse shape has compact support. Our reconstruction is numerically stable for both small values of L and large number of pulses, e.g., L = 100. In contrast, Gaussian sampling filters [3], [12] are unstable for L > 9, and we show in simulations that B-splines and E-splines [13] exhibit large estimation errors for L ≥ 5. In addition, we demonstrate substantial improvement in noise robustness even for low values of L. Our advantage stems from the fact that we propose compactly supported filters on the one hand, while staying within the regime of Fourier coefficients reconstruction on the other hand. Extending our results to the infinite setting, we consider an infinite stream consisting of pulse bursts, where each burst contains a large number of pulses. The stability of our method allows to reconstruct even a large number of closely spaced pulses, which cannot be treated using existing solutions [13]. 
In addition, the constraints cast on the structure of the signal are independent of L (the number of pulses in each burst), in contrast to previous work, and therefore similar sampling schemes may be used for different values of L. Finally, we show that our sampling scheme requires lower sampling rate for L ≥ 3. As an application, we demonstrate our sampling scheme on real ultrasound imaging data acquired by GE healthcare's ultrasound system. We obtain high accuracy estimation while reducing the number of samples by two orders of magnitude in comparison with current imaging techniques. The remainder of the paper is organized as follows. In Section II we present the periodic signal model, and derive a general sampling scheme. The SoS class is then developed and demonstrated via simulations. The extension to the finite case is presented in Section III, followed by simulations showing the advantages of our method in high order problems and noisy settings. In Section IV, we treat infinite streams of pulses. Section V explores the relationship of our work to previous methods. Finally, in Section VI, we demonstrate our algorithm on real ultrasound imaging data. II. PERIODIC STREAM OF PULSES A. Problem Formulation Throughout the paper we denote matrices and vectors by bold font, with lowercase letters corresponding to vectors and uppercase letters to matrices. The nth element of a vector a is written as a n , and A ij denotes the ijth element of a matrix A. Superscripts (·) * , (·) T and (·) H represent complex conjugation, transposition and conjugate transposition, respectively. The Moore-Penrose pseudo-inverse of a matrix A is written as A † . The continuous-time Fourier transform (CTFT) of a continuous-time signal x (t) ∈ L 2 is defined by X (ω) = ∞ −∞ x (t) e −jωt dt, and x (t) , y (t) = ∞ −∞ x * (t) y (t) dt,(1) denotes the inner product between two L 2 signals. 
Consider a τ -periodic stream of pulses, defined as x(t) = m∈Z L l=1 a l h(t − t l − mτ ),(2) where h(t) is a known pulse shape, τ is the known period, and {t l , a l } L l=1 , t l ∈ [0, τ ), a l ∈ C, l = 1 . . . L are the unknown delays and amplitudes. Our goal is to sample x(t) and reconstruct it, from a minimal number of samples. Since the signal has 2L degrees of freedom, we expect the minimal number of samples to be 2L. We are primarily interested in pulses which have small time-support. Direct uniform sampling of 2L samples of the signal will result in many zero samples, since the probability for the sample to hit a pulse is very low. Therefore, we must construct a more sophisticated sampling scheme. Define the periodic continuation of h(t) as f (t) = m∈Z h(t − mτ ). Using Poisson's summation formula [15], f (t) may be written as f (t) = 1 τ k∈Z H 2πk τ e j2πkt/τ ,(3) where H(ω) denotes the CTFT of the pulse h(t). Substituting (3) into (2) we obtain x(t) = L l=1 a l f (t − t l ) = k∈Z 1 τ H 2πk τ L l=1 a l e −j2πktl/τ e j2πkt/τ = k∈Z X[k]e j2πkt/τ ,(4) where we denoted X[k] = 1 τ H 2πk τ L l=1 a l e −j2πktl/τ . The expansion in (4) is the Fourier series representation of the τ -periodic signal x(t) with Fourier coefficients given by (5). Following [3], we now show that once 2L or more Fourier coefficients of x(t) are known, we may use conventional tools from spectral analysis to determine the unknowns {t l , a l } L l=1 . The method by which the Fourier coefficients are obtained will be presented in subsequent sections. Define a set K of M consecutive indices such that H 2πk τ = 0, ∀k ∈ K. We assume such a set exists, which is usually the case for short time-support pulses h(t). Denote by H the M × M diagonal matrix with kth entry 1 τ H 2πk τ , and by V(t) the M × L matrix with klth element e −j2πktl/τ , where t = {t 1 , . . . , t L } is the vector of the unknown delays. 
In addition denote by a the length-L vector whose lth element is a l , and by x the length-M vector whose kth element is X[k]. We may then write (5) in matrix form as x = HV(t)a.(6) Since H is invertible by construction we define y = H −1 x, which satisfies y = V(t)a.(7) The matrix V is a Vandermonde matrix and therefore has full column rank [11], [16] as long as M ≥ L and the time-delays are distinct, i.e., t i = t j for all i = j. Writing the expression for the kth element of the vector y in (7) explicitly: y k = L l=1 a l e −j2πktl/τ . Evidently, given the vector x, (7) is a standard problem of finding the frequencies and amplitudes of a sum of L complex exponentials (see [11] for a review of this topic). This problem may be solved as long as |K| = M ≥ 2L. The annihilating filter approach used extensively by Vetterli et al. [3], [10] is one way of recovering the frequencies, and is thoroughly described in the literature [3], [10], [11]. This method can solve the problem using the critical number of samples M = 2L, as opposed to other techniques such as MUSIC [17], [18] and ESPRIT [19] which require oversampling. Since we are interested in minimal-rate sampling, we use the annihilating filter throughout the paper. B. Obtaining The Fourier Series Coefficients As we have seen, given the vector of M ≥ 2L Fourier series coefficients x, we may use standard tools from spectral analysis to determine the set {t l , a l } L l=1 . In practice, however, the signal is sampled in the time domain, and therefore we do not have direct access to samples of x. Our goal is to design a single-channel sampling scheme which allows to determine x from time-domain samples. In contrast to previous work [3], [10] which focused on a low-pass sampling filter, in this section we derive a general condition on the sampling kernel allowing to obtain the vector x. 
For the sake of clarity we confine ourselves to uniform sampling, the results extend in a straightforward manner to nonuniform sampling as well. Consider sampling the signal x(t) uniformly with sampling kernel s * (−t) and sampling period T , as depicted in Fig. 1. The samples are given by s * (−t) x(t) c[n] t = nTc[n] = ∞ −∞ x(t)s * (t − nT )dt = s(t − nT ), x(t) .(9) Substituting (4) into (9) we have c[n] = k∈Z X[k] ∞ −∞ e j2πkt/τ s * (t − nT )dt = k∈Z X[k]e j2πknT /τ ∞ −∞ e j2πkt/τ s * (t)dt = k∈Z X[k]e j2πknT /τ S * (2πk/τ ),(10) where S(ω) is the CTFT of s(t). Choosing any filter s(t) which satisfies S(ω) =          0 ω = 2πk/τ, k / ∈ K nonzero ω = 2πk/τ, k ∈ K arbitrary otherwise,(11) we can rewrite (10) as c[n] = k∈K X[k]e j2πknT /τ S * (2πk/τ ).(12) In contrast to (10), the sum in (12) is finite. Note that (11) implies that any real filter meeting this condition will satisfy k ∈ K ⇒ −k ∈ K, and in addition S(2πk/τ ) = S * (−2πk/τ ), due to the conjugate symmetry of real filters. Defining the M × M diagonal matrix S whose kth entry is S * (2πk/τ ) for all k ∈ K, and the length-N vector c whose nth element is c[n], we may write (12) as c = V(−t s )Sx(13) where t s = {nT : n = 0 . . . N − 1}, and V is defined as in (6) with a different parameter −t s and dimensions N × M . The matrix S is invertible by construction. Since V is Vandermonde, it is left invertible as long as N ≥ M . Therefore, x = S −1 V † (−t s )c.(14) In the special case where N = M and T = τ /N , the recovery in (14) becomes: x = S −1 DFT{c},(15) i.e., the vector x is obtained by applying the Discrete Fourier Transform (DFT) on the sample vector, followed by a correction matrix related to the sampling filter. The idea behind this sampling scheme is that each sample is actually a linear combination of the elements of x. The sampling kernel s(t) is designed to pass the coefficients X[k], k ∈ K while suppressing all other coefficients X[k], k / ∈ K. 
This is exactly what the condition in (11) means. This sampling scheme guarantees that each sample combination is linearly independent of the others. Therefore, the linear system of equations in (13) has full column rank which allows to solve for the vector x. We summarize this result in the following theorem. Theorem 1. Consider the τ -periodic stream of pulses of order L: x(t) = m∈Z L l=1 a l h(t − t l − mτ ). Choose a set K of consecutive indices for which H(2πk/τ ) = 0, ∀k ∈ K. Then the samples c[n] = s(t − nT ), x(t) , n = 0 . . . N − 1, uniquely determine the signal x(t) for any s(t) satisfying condition (11), as long as N ≥ |K| ≥ 2L. In order to extend Theorem 1 to nonuniform sampling, we only need to substitute the nonuniform sampling times in the vector t s in (14). Theorem 1 presents a general single channel sampling scheme. One special case of this framework is the one proposed by Vetterli et al. in [3] in which s * (−t) = B sinc(−Bt), where B = M/τ and N ≥ M ≥ 2L. In this case s(t) is an ideal low-pass filter of bandwidth B with S(ω) = 1 √ 2π rect ω 2πB .(16) Clearly, (16) satisfies the general condition in (11) with K = {−⌊M/2⌋, . . . , ⌊M/2⌋} and S 2πk τ = 1 √ 2π , ∀k ∈ K. Note that since this filter is real valued it must satisfy k ∈ K ⇒ −k ∈ K, i.e., the indices come in pairs except for k = 0. Since k = 0 is part of the set K, in this case the cardinality M = |K| must be odd valued so that N ≥ M ≥ 2L + 1 samples, rather than the minimal rate N ≥ 2L. The ideal low-pass filter is bandlimited, and therefore has infinite time-support, so that it cannot be extended to finite and infinite streams of pulses. In the next section we propose a class of non-bandlimited sampling kernels, which exploit the additional degrees of freedom in condition (11), and have compact support in the time domain. The compact support allows to extend this class to finite and infinite streams, as we show in Sections III and IV, respectively. C. 
Compactly Supported Sampling Kernels Consider the following SoS class which consists of a sum of sincs in the frequency domain: where b k = 0, k ∈ K. The filter in (17) is real valued if and only if k ∈ K ⇒ −k ∈ K and b k = b * −k for all k ∈ K. Since for each sinc in the sum G(ω) = τ √ 2π k∈K b k sinc ω 2π/τ − k(17)sinc ω 2π/τ − k =    1 ω = 2πk ′ /τ, k ′ = k 0 ω = 2πk ′ /τ, k ′ = k,(18) the filter G(ω) satisfies (11) by construction. Switching to the time domain g(t) = rect t τ k∈K b k e j2πkt/τ ,(19) which is clearly a time compact filter with support τ . The SoS class in (19) may be extended to G(ω) = τ √ 2π k∈K b k φ ω 2π/τ − k(20) where b k = 0, k ∈ K, and φ(ω) is any function satisfying: φ (ω) =          1 ω = 0 0 |ω| ∈ N arbitrary otherwise.(21) This more general structure allows for smooth versions of the rect function, which is important when practically implementing analog filters. The function g(t) represents a class of filters determined by the parameters {b k } k∈K . These degrees of freedom offer a filter design tool where the free parameters {b k } k∈K may be optimized for different goals, e.g., parameters which will result in a feasible analog filter. In Theorem 2 below, we show how to choose {b k } to minimize the mean-squared error (MSE) in the presence of noise. Determining the parameters {b k } k∈K may be viewed from a more empirical point of view. The impulse response of any analog filter having support τ may be written in terms of a windowed Fourier series as Φ(t) = rect t τ k∈Z β k e j2πkt/τ .(22) Confining ourselves to filters which satisfy β k = 0, k ∈ K, we may truncate the series and choose: b k =    β k k ∈ K 0 k / ∈ K(23) as the parameters of g(t) in (19). With this choice, g(t) can be viewed as an approximation to Φ(t). 
Notice that there is an inherent tradeoff here: using more coefficients will result in a better approximation of the analog filter, but in turn will require more samples, since the number of samples N must be greater than the cardinality of the set K. The reconstruction is exact to numerical precision. To demonstrate the filter g(t) we first choose K = {−p, . . . , p} and set all coefficients {b k } to one, resulting in g(t) = rect t τ p k=−p e j2πkt/τ = rect t τ D p (2πt/τ ),(24) where the Dirichlet kernel D p (t) is defined by D p (t) = p k=−p e jkt = sin p + 1 2 t sin(t/2) .(25) The resulting filter for p = 10 and τ = 1 sec, is depicted in Fig. 2. This filter is also optimal in an MSE sense for the case h(t) = δ(t), as we show in Theorem 2. In Fig. 3 we plot g(t) for the case in which the b k 's are chosen as a length-M symmetric Hamming window: b k = 0.54 − 0.46 cos 2π k + ⌊M/2⌋ M , k ∈ K.(26) Notice that in both cases the coefficients satisfy b k = b * −k , and therefore, the resulting filters are real valued. In the presence of noise, the choice of {b k } k∈K will effect the performance. Consider the case in which digital noise is added to the samples c, so that y = c + w, with w denoting a white Gaussian noise vector. Using (13), y = V(−t s )Bx + w(27) where B is a diagonal matrix, having {b k } on its diagonal. To choose the optimal B we assume that the {a l } are uncorrelated with variance σ 2 a , independent of {t l }, and that {t l } are uniformly distributed in [0, τ ). Since the noise is added to the samples after filtering, increasing the filter's amplification will always reduce the MSE. Therefore, the filter's energy must be normalized, and we do so by adding the constraint Tr(B * B) = 1. Under these assumptions, we have the following theorem: Theorem 2. 
The minimal MSE of a linear estimator of x from the noisy samples y in (27) is achieved by choosing the coefficients |b i | 2 =      σ 2 N N λσ 2 − 1 |hi| 2 λ ≤ |h i | 4 N/σ 2 0 λ > |h i | 4 N/σ 2 (28) whereh k = H(2πk/τ )σ a √ L/τ and are arranged in an increasing order of |h k |, √ λ = (|K| − m) N/σ 2 N/σ 2 + |K| i=m+1 1/|h i | 2 ,(29) and m is the smallest index for which λ ≤ |h m+1 | 4 N/σ 2 . Proof: See the Appendix. An important consequence of Theorem 2 is the following corollary. Corollary 1. If |h k | 2 = |h ℓ | 2 , ∀k, ℓ ∈ K then the optimal coefficients are |b i | 2 = 1/|K|, ∀k ∈ K. Proof: It is evident from (28) that if |h k | = |h ℓ | then |b k | = |b ℓ |. To satisfy the trace constraint Tr(B * B) = 1, λ cannot be chosen such that all b i = 0. Therefore, |b i | 2 = 1/|K| for all i ∈ K. From Corollary 1 it follows that when h(t) = δ(t), the optimal choice of coefficients is b k = b j for all k and j. We therefore use this choice when simulating noisy settings in the next section. Our sampling scheme for the periodic case consists of sampling kernels having compact support in the time domain. In the next section we exploit the compact support of our filter, and extend the results to the finite stream case. We will show that our sampling and reconstruction scheme offers a numerically stable solution, with high noise robustness. h(t) = 1 √ 2πσ 2 exp(−t 2 /2σ 2 ),(30) with parameter σ = 7 · 10 −3 , and period τ = 1. The time-delays and amplitudes were chosen randomly. In order to demonstrate near-critical sampling we choose the set of indices K = {−L, . . . , L} with cardinality M = |K| = 11. We filter x(t) with g(t) of (26). The filter output is sampled uniformly N times, with sampling period T = τ /N , where N = M = 11. The sampling process is depicted in Fig. 4. The vector x is obtained using (14), and the delays and amplitudes are determined by the annihilating filter method. Reconstruction results are depicted in Fig. 5. 
The estimation and reconstruction are both exact to numerical precision. Analog filtering operations are carried out by discrete approximations over a fine grid. The analog signal and filters are mimicked by high rate digital signals. Since the sampling rate which constructs the fine grid is between 2-3 orders of magnitude higher than the final sampling rate T , the simulations reflect very well the analog results. samples were taken, sampled uniformly with sampling period T = τ /N . We choose g(t) given by (24). As explained earlier, only the values of the filter at points 2πk/τ, k ∈ K affect the samples (see (11)). Since the values of the filter at the relevant points coincide and are equal to one for the low-pass filter [3] and g * (−t), the resulting samples for both settings are identical. Therefore, we present results for our method only, and state that the exact same results are obtained using the approach of [3]. In our setup white Gaussian noise (AWGN) with variance σ 2 n is added to the samples, where we define the SNR as: SNR = 1 N c 2 2 σ 2 n ,(31) with c denoting the clean samples. In our experiments the noise variance is set to give the desired SNR. The simulation consists of 1000 experiments for each SNR, where in each experiment a new noise vector is created. We choose t = τ · (1/3 2/3) T and a = τ · (1 1) T , where these vectors remain constant throughout the experiments. We define the error in time-delay estimation as the average of t −t 2 2 , where t andt denote the true and estimated time-delays, respectively, sorted in increasing order. The error in amplitudes is similarly defined by a −â 2 2 . In Fig. 6 we show the error as a function of SNR for both delay and amplitude estimation. Estimation of the time-delays is the main interest in FRI literature, due to special nonlinear methods required for delay recovery. 
Once the delays are known, the standard least-squares method is typically used to recover the amplitudes; therefore, we focus on delay estimation in the sequel. Finally, for the same setting, reconstruction accuracy can be improved at the expense of oversampling, as illustrated in Fig. 7, which shows recovery performance for oversampling factors of 1, 2, 4 and 8. The oversampling was exploited using the total least-squares method, followed by Cadzow's iterative denoising (both described in detail in [10]).

III. FINITE STREAM OF PULSES

A. Extension of SoS Class

Consider now a finite stream of pulses, defined as

$$\tilde x(t) = \sum_{l=1}^{L} a_l h(t - t_l), \quad t_l \in [0, \tau), \; a_l \in \mathbb{R}, \; l = 1, \ldots, L, \tag{32}$$

where, as in Section II, h(t) is a known pulse shape, and $\{t_l, a_l\}_{l=1}^L$ are the unknown delays and amplitudes. The time-delays $\{t_l\}_{l=1}^L$ are restricted to lie in a finite time interval [0, τ). Since there are only 2L degrees of freedom, we wish to design a sampling and reconstruction method which perfectly reconstructs $\tilde x(t)$ from 2L samples. In this section we assume that the pulse h(t) has finite support R, i.e.,

$$h(t) = 0, \quad \forall\, |t| \geq R/2. \tag{33}$$

This is a rather weak condition, since our primary interest is in very short pulses which have wide, or even infinite, frequency support, and therefore cannot be sampled efficiently using classical sampling results for bandlimited signals. We now investigate the structure of the samples taken in the periodic case, and design a sampling kernel for the finite setting which obtains precisely the same samples c[n] as in the periodic case. In the periodic setting, the resulting samples are given by (10).
Using g(t) of (19) as the sampling kernel we have

$$c[n] = \langle g(t - nT), x(t) \rangle = \sum_{m \in \mathbb{Z}} \sum_{l=1}^{L} a_l \int_{-\infty}^{\infty} h(t - t_l - m\tau)\, g^*(t - nT)\,dt = \sum_{m \in \mathbb{Z}} \sum_{l=1}^{L} a_l\, \varphi(nT - t_l - m\tau), \tag{34}$$

where we defined

$$\varphi(\vartheta) = \langle g(t - \vartheta), h(t) \rangle. \tag{35}$$

Since g(t) in (19) vanishes for all |t| > τ/2 and h(t) satisfies (33), the support of φ(t) is (R + τ), i.e.,

$$\varphi(t) = 0, \quad \forall\, |t| \geq (R + \tau)/2. \tag{36}$$

Using this property, the summation in (34) is over nonzero values only for indices m satisfying

$$|nT - t_l - m\tau| < (R + \tau)/2. \tag{37}$$

Sampling within the window [0, τ), i.e., nT ∈ [0, τ), and noting that the time-delays lie in the interval $t_l \in [0, \tau)$, $l = 1, \ldots, L$, (37) implies

$$(R + \tau)/2 > |nT - t_l - m\tau| \geq |m|\tau - |nT - t_l| > (|m| - 1)\tau. \tag{38}$$

Here we used the triangle inequality and the fact that |nT − t_l| < τ in our setting. Therefore,

$$|m| < \frac{R}{2\tau} + \frac{3}{2} \;\Rightarrow\; |m| \leq \left\lceil \frac{R}{2\tau} + \frac{3}{2} \right\rceil - 1 \triangleq r, \tag{39}$$

i.e., the elements of the sum in (34) vanish for all m except the values in (39). Consequently, the infinite sum in (34) reduces to a finite sum over |m| ≤ r, so that (34) becomes

$$c[n] = \sum_{m=-r}^{r} \sum_{l=1}^{L} a_l\, \varphi(nT - t_l - m\tau) = \sum_{m=-r}^{r} \sum_{l=1}^{L} a_l \int_{-\infty}^{\infty} h(t - t_l)\, g^*(t - nT + m\tau)\,dt = \left\langle \sum_{m=-r}^{r} g(t - nT + m\tau),\; \sum_{l=1}^{L} a_l h(t - t_l) \right\rangle, \tag{40}$$

where in the last equality we used the linearity of the inner product. Defining a function which consists of (2r + 1) periods of g(t),

$$g_r(t) = \sum_{m=-r}^{r} g(t + m\tau), \tag{41}$$

we conclude that

$$c[n] = \langle g_r(t - nT), \tilde x(t) \rangle. \tag{42}$$

Therefore, the samples c[n] can be obtained by filtering the aperiodic signal $\tilde x(t)$ with the filter $g_r^*(-t)$ prior to sampling. This filter has compact support equal to (2r + 1)τ. Since the finite setting samples (42) are identical to those of the periodic case (34), recovery of the delays and amplitudes is performed exactly as in the periodic setting. We summarize this result in the following theorem. Theorem 3.
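The index bound (39) and the extended kernel (41) are straightforward to compute. The sketch below uses a hypothetical box-shaped g(t) only to illustrate the periodic extension; nothing here depends on the actual SoS filter.

```python
import math

def num_shifts(R, tau):
    """r from eq. (39): r = ceil(R/(2*tau) + 3/2) - 1."""
    return math.ceil(R / (2 * tau) + 1.5) - 1

def extended_kernel(g, tau, r):
    """g_r(t) = sum_{m=-r}^{r} g(t + m*tau), eq. (41); support (2r+1)*tau."""
    return lambda t: sum(g(t + m * tau) for m in range(-r, r + 1))

# R <= tau gives r = 1, i.e. the three-period filter g_3p of (43)
print(num_shifts(0.5, 1.0))  # prints 1
```

For a box kernel of width τ/2 the extension `extended_kernel(box, tau, 1)` simply replicates the kernel at −τ, 0, and τ, matching the construction of $g_{3p}$.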
Consider the finite stream of pulses given by

$$\tilde x(t) = \sum_{l=1}^{L} a_l h(t - t_l), \quad t_l \in [0, \tau), \; a_l \in \mathbb{R},$$

where h(t) has finite support R. Choose a set K of consecutive indices for which $H(2\pi k/\tau) \neq 0$ for all $k \in K$. Then the N samples

$$c[n] = \langle g_r(t - nT), \tilde x(t) \rangle, \quad n = 0, \ldots, N - 1, \; nT \in [0, \tau),$$

where r is defined in (39) and $g_r(t)$ is compactly supported and defined by (41) (based on the filter g(t) in (17)), uniquely determine the signal $\tilde x(t)$ as long as N ≥ |K| ≥ 2L.

If, for example, the support R of h(t) satisfies R ≤ τ, then we obtain from (39) that r = 1. The filter in this case consists of 3 periods of g(t):

$$g_{3p}(t) \triangleq g_r(t)\big|_{r=1} = g(t - \tau) + g(t) + g(t + \tau). \tag{43}$$

Practical implementation of the filter may be carried out using delay-lines. The relation of this scheme to previous approaches will be investigated in Section V.

1) Diracs: We first simulated the finite scheme on a stream of Diracs. Perfect reconstruction is achieved, as can be seen in Fig. 8; the estimation is exact to numerical precision. 2) High Order Problems: The same simulation was carried out with L = 20 Diracs. The results are shown in Fig. 9. Here again, the reconstruction is perfect even for large L. 3) Noisy Case: We now consider the performance of our method in the presence of noise, and compare it to the B-spline and E-spline methods proposed in [13], and to the Gaussian sampling kernel [3]. We examine 4 scenarios, in which the signal consists of L = 2, 3, 5, 20 Diracs.¹ In our setup, the time-delays are equally distributed in the window [0, τ), with τ = 1, and remain constant throughout the experiments. All amplitudes are set to one. For each method, the SNR is defined with respect to its own clean samples; in other words, σ_n in (31) is method-dependent, and is determined by the desired SNR and the samples of the specific technique.

¹ Due to the computational complexity of calculating the time-domain expression for high-order E-splines, these functions were simulated up to order 9, which allows for L = 5 pulses.
Hard thresholding was implemented in order to improve the spline methods, as suggested by the authors in [13]. The threshold was chosen to be 3σ_n, where σ_n is the standard deviation of the AWGN. For the Gaussian sampling kernel, the parameter σ was optimized and took on the values σ = 0.25, 0.28, 0.32, 0.9, respectively. The results are given in Fig. 10. For L = 2 all methods are stable, where E-splines exhibit better performance than B-splines, and the Gaussian and SoS approaches demonstrate the lowest errors. As the value of L grows, the advantage of the SoS filter becomes more prominent: for L ≥ 5, the performance of the Gaussian and both spline methods deteriorates, with errors approaching the order of τ. In contrast, the SoS filter retains its performance nearly unchanged even up to L = 20, where the B-spline and Gaussian methods are unstable. The improved version of the Gaussian approach presented in [12] would not perform better in this high-order case, since it fails for L > 9, as noted by the authors. A comparison of our approach to previous methods will be detailed in Section V.

IV. INFINITE STREAM OF PULSES

We now consider the case of an infinite stream of pulses

$$z(t) = \sum_{l \in \mathbb{Z}} a_l h(t - t_l), \quad t_l, a_l \in \mathbb{R}. \tag{44}$$

We assume that the infinite signal has a bursty character, i.e., the signal has two distinct phases: a) bursts of maximal duration τ containing at most L pulses, and b) quiet phases between bursts. For the sake of clarity we begin with the case h(t) = δ(t). For this choice, the filter $g_r^*(-t)$ in (41) reduces to $g_{3p}^*(-t)$ of (43). Since the filter $g_{3p}^*(-t)$ has compact support 3τ, we are assured that the current burst cannot influence samples taken 3τ/2 seconds before or after it. In the finite case we confined ourselves to sampling within the interval [0, τ); similarly, here, we assume that the samples are taken during the burst duration.
Therefore, if the minimal spacing between any two consecutive bursts is 3τ/2, then we are guaranteed that each sample taken during a burst is influenced by that burst only, as depicted in Fig. 11. Consequently, the infinite problem can be reduced to a sequential solution of local, distinct finite-order problems, as in Section III. Here the compact support of our filter comes into play, allowing us to apply local reconstruction methods.

[Fig. 11: Bursty signal z(t). Spacing of 3τ/2 between bursts ensures that the influence of the current burst ends before taking the samples of the next burst, due to the finite support 3τ of the sampling kernel g*_{3p}(−t).]

In the above argument we assume we know the locations of the bursts, since we must acquire samples from within the burst duration. Samples outside the burst duration are contaminated by energy from adjacent bursts. Nonetheless, knowledge of burst locations is available in many applications, such as synchronized communication, where the receiver knows when to expect the bursts, or radar and imaging scenarios, where the transmitter is itself the receiver. We now state this result in a theorem.

Theorem 4. Consider a signal z(t) which is a stream of bursts consisting of delayed and weighted Diracs. The maximal burst duration is τ, and the maximal number of pulses within each burst is L. Then, the samples

$$c[n] = \langle g_{3p}(t - nT), z(t) \rangle, \quad n \in \mathbb{Z},$$

where $g_{3p}(t)$ is defined by (43), are a sufficient characterization of z(t) as long as the spacing between two adjacent bursts is greater than 3τ/2, and the burst locations are known.

Extending this result to a general pulse h(t) is quite straightforward, as long as h(t) is compactly supported with support R, and we filter with $g_r^*(-t)$ as defined in (41) with the appropriate r from (39).
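The separation condition of Theorem 4 is easy to check programmatically; the burst start times below are hypothetical illustrations.

```python
def bursts_separable(burst_starts, tau):
    """Check the condition of Theorem 4: each burst lasts at most tau, and the
    quiet phase between the end of one burst and the start of the next must
    exceed 3*tau/2 (the kernel g_3p has support 3*tau, so the influence of a
    burst dies out before the next burst is sampled)."""
    starts = sorted(burst_starts)
    return all(nxt - (cur + tau) > 1.5 * tau
               for cur, nxt in zip(starts, starts[1:]))

print(bursts_separable([0.0, 3.0, 6.0], tau=1.0))  # quiet phases of 2*tau: separable
```

A failing check means two bursts would leak energy into each other's samples, and the local finite-order reduction no longer applies.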
If we can choose a set K of consecutive indices for which $H(2\pi k/\tau) \neq 0$ for all $k \in K$, and we are guaranteed that the minimal spacing between two adjacent bursts is greater than ((2r + 1)τ + R)/2, then the above theorem holds.

V. COMPARISON TO PREVIOUS APPROACHES

A. Periodic Case

The work in [3] was the first to address efficient sampling of pulse streams, e.g., streams of Diracs. Their approach for solving the periodic case was ideal lowpass filtering followed by uniform sampling, which allowed them to obtain the Fourier series coefficients of the signal. These coefficients are then processed by the annihilating filter to obtain the unknown time-delays and amplitudes. In Section II, we derived a general condition (11) on the sampling kernel under which recovery is guaranteed; the lowpass filter of [3] is a special case of this result. The noise robustness of both the lowpass approach and our more general method is high as long as the pulses are well separated, since reconstruction from Fourier series coefficients is stable in this case. Both approaches achieve the minimal number of samples. However, the lowpass filter is bandlimited and consequently has infinite time-support, so this sampling scheme is unsuitable for finite and infinite streams of pulses. The SoS class introduced in Section II consists of compactly supported filters, which is crucial to enable the extension of our results to finite and infinite streams of pulses. A comparison between the two methods is shown in Table I.

B. Finite Pulse Stream

The authors of [3] proposed a Gaussian sampling kernel for sampling finite streams of Diracs. The Gaussian method is numerically unstable, as mentioned in [12], since the samples are multiplied by a rapidly diverging or decaying exponent. Therefore, this approach is unsuitable for L ≥ 6. Modifications proposed in [12] exhibit better performance and stability; however, these methods require substantial oversampling, and still exhibit instability for L > 9.
In [13], the family of polynomial reproducing kernels was introduced as sampling filters for the model (32), with B-splines proposed as a specific example. The B-spline sampling filter enables obtaining moments of the signal, rather than Fourier coefficients. The moments are then processed with the same annihilating filter used in previous methods. However, as mentioned by the authors, this approach is unstable for high values of L. This is because, in contrast to the estimation of Fourier coefficients, estimating high-order moments is unstable, since unstable weighting of the samples is carried out during the process.

Another general family introduced in [13] for the finite model is the class of exponential reproducing kernels. As a specific case, the authors propose E-spline sampling kernels. The CTFT of an E-spline of order N + 1 is

$$\hat\beta_{\alpha}(\omega) = \prod_{n=0}^{N} \frac{1 - e^{\alpha_n - j\omega}}{j\omega - \alpha_n}, \tag{45}$$

where $\alpha = (\alpha_0, \alpha_1, \ldots, \alpha_N)$ are free parameters. In order to use E-splines as sampling kernels for pulse streams, the authors propose a specific structure on the α's: $\alpha_n = \alpha_0 + n\lambda$. Choosing exponents having a non-vanishing real part results in unstable weighting, as in the B-spline case. However, choosing the special case of purely imaginary exponents, already suggested by the authors, results in a reconstruction method based on Fourier coefficients which demonstrates an interesting relation to our method. The Fourier coefficients are obtained by applying a matrix consisting of the exponential spanning coefficients $\{c_{m,n}\}$ (see [13]), instead of our Vandermonde relation (14). With this specific choice of parameters, the E-spline function satisfies (11). Interestingly, with a proper choice of spanning coefficients, it can be shown that the SoS class can reproduce exponentials with frequencies $\{2\pi k/\tau\}_{k \in K}$, and therefore satisfies the general exponential reproduction property of [13].
However, the SoS filter leads to a new sampling scheme with substantial advantages over existing methods, including E-splines. The first advantage is in the presence of noise, where both methods have the structure

$$y = Ax + w, \tag{46}$$

where w is the noise vector. While the Fourier coefficient vector x is common to both approaches, the linear transformation A is method-dependent, and therefore the sample vector y differs. In our approach, with g(t) of (24), A is the DFT matrix, which for any order L has a condition number of 1. In the case of E-splines, the transformation matrix A consists of the E-spline exponential spanning coefficients, which has a much higher condition number, e.g., above 100 for L = 5. Consequently, some Fourier coefficients will have much higher noise levels than others. Such high variance between the noise levels of the samples is known to deteriorate the performance of spectral analysis methods [11], the annihilating filter being one of them. This explains our simulations, which show that the SoS filter outperforms the E-spline approach in the presence of noise.

When the E-spline coefficients α are purely imaginary, it can easily be shown that (45) becomes a multiplication of shifted sincs. This is in contrast to the SoS filter, which consists of a sum of sincs in the frequency domain. Since multiplication in the frequency domain translates to convolution in the time domain, the support of the E-spline grows with its order, and in turn with the order of the problem L; in contrast, the support of the SoS filter remains unchanged. This observation becomes important when examining the infinite case. The constraint on the signal in [13] is that no more than L pulses lie in any interval of length LPT, with P the support of the filter and T the sampling period. Since P grows linearly with L, the constraint cast on the infinite stream becomes more stringent quadratically with L.
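The conditioning claim can be verified directly: a DFT matrix is unitary up to a constant factor, so its condition number is 1, while a generic coefficient matrix can be far worse. In the sketch below, a real-node Vandermonde matrix merely stands in for an ill-conditioned transformation; it is not the E-spline coefficient matrix itself.

```python
import numpy as np

N = 11
n, k = np.meshgrid(np.arange(N), np.arange(N))
# DFT-type matrix arising with g(t) of (24): sqrt(N) times a unitary matrix
F = np.exp(-2j * np.pi * n * k / N)
print(np.linalg.cond(F))          # ~1, regardless of N

# a generic transformation (here a real-node Vandermonde, as a stand-in)
V = np.vander(np.linspace(0.5, 1.5, N), N, increasing=True)
print(np.linalg.cond(V))          # many orders of magnitude larger
```

A condition number of 1 means the noise power is spread evenly across the recovered Fourier coefficients, which is exactly the regime in which the annihilating filter behaves well.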
On the other hand, the constraint on the infinite stream using the SoS filter is independent of L. We showed in simulations that typically for L ≥ 5 the estimation errors, using both B-spline and E-spline sampling kernels, become very large. In contrast, our approach leads to stable reconstruction even for very high values of L, e.g., L = 100. In addition, even for low values of L, our simulations show that although the E-spline method improves on B-splines, the SoS reconstruction method outperforms both spline approaches. A comparison is described in Table II.

C. Infinite Streams

The work in [13] addressed the infinite stream case with h(t) = δ(t). They proposed filtering the signal with a polynomial reproducing sampling kernel prior to sampling. If the signal has at most L Diracs within any interval of duration LPT, where P denotes the support of the sampling filter and T the sampling period, then the samples are a sufficient characterization of the signal. This condition allows dividing the infinite stream into a sequence of finite-case problems. In our approach, the quiet phases of 1.5τ between the bursts of length τ enable the reduction to the finite case. Since the infinite solution is based on the finite one, our method is advantageous in terms of stability in high-order problems and noise robustness; however, we do have the additional requirement of quiet phases between the bursts.

Regarding the sampling rate, the number of degrees of freedom of the signal per unit time, also known as the rate of innovation, is ρ = 2L/(2.5τ), which is the critical sampling rate. Our sampling rate is 2L/τ, and therefore we oversample by a factor of 2.5. In the same scenario, the method in [13] would require a sampling rate of LP/(2.5τ), i.e., oversampling by a factor of P/2. Properties of polynomial reproducing kernels imply that P ≥ 2L; therefore, for any L ≥ 3, our method exhibits more efficient sampling.
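The rate comparison can be made concrete. The sketch below assumes the minimal spline support P = 2L (only P ≥ 2L is guaranteed in general) and reports oversampling factors relative to the rate of innovation ρ = 2L/(2.5τ):

```python
def oversampling_factors(L, tau=1.0):
    """Oversampling factors relative to rho = 2L/(2.5*tau) for the bursty
    infinite model: SoS filter vs. a polynomial reproducing kernel with the
    minimal support P = 2L (an assumption; the paper only guarantees P >= 2L)."""
    rho = 2 * L / (2.5 * tau)
    sos = (2 * L / tau) / rho             # always 2.5, independent of L
    P = 2 * L
    spline = (L * P / (2.5 * tau)) / rho  # equals P/2 = L under this assumption
    return sos, spline

for L in (3, 5, 20):
    print(L, oversampling_factors(L))
```

The SoS factor stays at 2.5 while the spline factor grows at least linearly with L, which is the content of the "more efficient sampling for L ≥ 3" claim.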
A table comparing the various features is shown in Table III. Recent work [14] presented a low-complexity method for reconstructing streams of pulses (both infinite and finite cases) consisting of Diracs. However, the basic assumption of this method is that there is at most one Dirac per sampling period. This means we must have prior knowledge of a lower limit on the spacing between two consecutive deltas in order to guarantee correct reconstruction. In some cases such a limit may not exist; even if it does, it will usually force us to sample at a much higher rate than the critical one.

VI. APPLICATION - ULTRASOUND IMAGING

An interesting application of our framework is ultrasound imaging. In ultrasonic imaging, an acoustic pulse is transmitted into the scanned tissue. The pulse is reflected due to changes in acoustic impedance which occur, for example, at the boundaries between two different tissues. At the receiver, the echoes are recorded, where the time-of-arrival and power of each echo indicate the scatterer's location and strength, respectively. Accurate estimation of tissue boundaries and scatterer locations allows for reliable detection of certain illnesses, and is therefore of major clinical importance; the location of the boundaries is often more important than the power of the reflection. This stream of pulses is finite, since the pulse energy decays within the tissue. We now demonstrate our method on real 1-dimensional (1D) ultrasound data. The multiple-echo signal recorded at the receiver can be modeled as a finite stream of pulses, as in (32). The unknown time-delays correspond to the locations of the various scatterers, whereas the amplitudes correspond to their reflection coefficients. The pulse shape in this case is the Gaussian defined in (30), due to the physical characteristics of the electro-acoustic transducer (mechanical damping).
We assume the received pulse shape is known, either by assuming it is unchanged through propagation, by physically modeling ultrasonic wave propagation, or by prior estimation of the received pulse. Full investigation of mismatch in the pulse shape is left for future research. In our setting, a phantom consisting of uniformly spaced pins, mimicking point scatterers, was scanned by GE Healthcare's Vivid-i portable ultrasound imaging system [20], [21], using a 3S-RS probe. We use the data recorded by a single element in the probe, which is modeled as a 1D stream of pulses. The center frequency of the probe is f_c = 1.7021 MHz, the width of the transmitted Gaussian pulse is σ = 3 · 10⁻⁷ sec, and the depth of imaging is R_max = 0.16 m, corresponding to a time window of τ = 2.08 · 10⁻⁴ sec. In this experiment all filtering and sampling operations are carried out digitally in simulation; the analog filter required by the sampling scheme is replaced by a lengthy finite impulse response (FIR) filter. Since the sampling frequency of the element in the system is f_s = 20 MHz, more than 5 times the Nyquist rate, the recorded data represents the continuous signal reliably. Consequently, digital filtering of the high-rate sampled data vector (4160 samples) followed by proper decimation mimics the original analog sampling scheme with high accuracy. The recorded signal is depicted in Fig. 12. The band-pass ultrasonic signal is demodulated to baseband, i.e., envelope detection is performed, before being fed into the reconstruction process. We carried out our sampling and reconstruction scheme on this data, setting L = 4, i.e., looking for the 4 strongest echoes. Since the data is corrupted by strong noise, we oversampled the signal, obtaining twice the minimal number of samples. In addition, hard-thresholding of the samples was implemented, with the threshold set to 10 percent of the maximal value.
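The digital mimicking of the analog scheme, filtering the high-rate data and then decimating, can be sketched as follows. The signal, kernel, and decimation factor below are toy stand-ins, not the ultrasound data or the actual FIR filter.

```python
import numpy as np

def sample_via_decimation(x_hr, g_hr, D):
    """Approximate c[n] = <g(t - nT), x(t)> on a fine grid: correlate the
    high-rate signal with the sampled (real-valued) kernel, then keep every
    D-th value, mimicking analog filtering followed by low-rate sampling."""
    c_hr = np.correlate(x_hr, g_hr, mode="same")  # discrete approximation of the integral
    return c_hr[::D]

x = np.zeros(40)
x[8] = 1.0                                   # single unit impulse on the fine grid
c = sample_via_decimation(x, np.ones(1), D=4)
print(c)
```

In the paper's experiment the same idea is applied with a long FIR approximation of $g_{3p}^*(-t)$ and a decimation factor matching the target low rate.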
We obtained N = 17 samples by decimating the output of the lengthy FIR digital filter imitating $g_{3p}^*(-t)$ from (43), where the coefficients $\{b_k\}$ were all set to one. In Fig. 13a the reconstructed signal is depicted against the full demodulated signal using all 4160 samples. Clearly, the time-delays were estimated with high precision. The amplitudes were estimated as well; however, the amplitude of the second pulse has a large error, probably due to the large noise values present in its vicinity. As mentioned earlier, though, the exact locations of the scatterers are often more important than the accurate reflection coefficients. We carried out the same experiment with oversampling by a factor of 4, resulting in N = 33 samples; here no hard-thresholding is required. The results, depicted in Fig. 13b, are very similar to our previous results. In both simulations, the estimation error in the pulse location is around 0.1 mm.

Current ultrasound imaging technology operates on the high-rate sampled data, e.g., f_s = 20 MHz in our setting. Since there are usually 100 different elements in a single ultrasonic probe, each sampled at a very high rate, data throughput becomes very high and imposes high computational complexity on the system, limiting its capabilities. There is therefore a demand for lowering the sampling rate, which in turn will reduce the complexity of reconstruction. Exploiting the parametric point of view, our sampling scheme addresses this demand.

VII. CONCLUSIONS

We presented efficient sampling and reconstruction schemes for streams of pulses. For the case of a periodic stream of pulses, we derived a general condition on the sampling kernel which allows a single-channel uniform sampling scheme; previous work [3] is a special case of this general result. We then proposed a class of filters satisfying the condition, with compact support. Exploiting the compact support of the filters, we constructed a new sampling scheme for the case of a finite stream of pulses.
Simulations show that this method exhibits better performance than previous techniques [3], [13], in terms of stability in high-order problems and noise robustness. An extension to an infinite stream of pulses was also presented. The compact support of the filter allows for local reconstruction, and thus lowers the complexity of the problem. Finally, we demonstrated the advantage of our approach in reducing the sampling and processing rates of ultrasound imaging, by applying our techniques to real ultrasound data.

APPENDIX: PROOF OF THEOREM 2

The MSE of the optimal linear estimator of the vector x from the measurement vector y is known to be [22]

$$\mathrm{MSE} = \mathrm{Tr}\{R_{xx}\} - \mathrm{Tr}\{R_{xy} R_{yy}^{-1} R_{yx}\}. \tag{47}$$

The covariance matrices in our case are

$$R_{xy} = R_{xx} B^* V^*, \tag{48}$$
$$R_{yy} = V B R_{xx} B^* V^* + \sigma^2 I, \tag{49}$$

where we used (27), and the fact that $R_{ww} = \sigma^2 I$ since w is a white Gaussian noise vector. Under our assumptions on $\{t_l\}$ and $\{a_l\}$, denoting $h_k = H(2\pi k/\tau)$, and using (5),

$$(R_{xx})_{k,k'} = E\left[X[k] X^*[k']\right] = \frac{1}{\tau^2} h_k h_{k'}^* \sum_{l=1}^{L} \sum_{l'=1}^{L} E\left[a_l a_{l'}^*\right] e^{-j\frac{2\pi}{\tau}(k t_l - k' t_{l'})} = \frac{\sigma_a^2}{\tau^2} h_k h_{k'}^* \sum_{l=1}^{L} E\left[e^{-j\frac{2\pi}{\tau}(k - k') t_l}\right] = \frac{\sigma_a^2}{\tau^2} h_k h_{k'}^* \sum_{l=1}^{L} \int_0^\tau \frac{1}{\tau}\, e^{-j\frac{2\pi}{\tau}(k - k') t}\,dt = \frac{\sigma_a^2}{\tau^2} L |h_k|^2 \delta_{k,k'}. \tag{50}$$

Denoting by $\tilde H$ a diagonal matrix with kth element $|\tilde h_k|^2 = |h_k|^2 \sigma_a^2 L / \tau^2$, we have

$$R_{xx} = \tilde H. \tag{51}$$

Since the first term of (47) is independent of B, minimizing the MSE with respect to B is equivalent to maximizing the second term in (47). Substituting (48), (49) and (51) into this term, the optimal B is a solution to

$$\max_B \; \mathrm{Tr}\left\{ \tilde H B^* V^* \left(V B \tilde H B^* V^* + \sigma^2 I\right)^{-1} V B \tilde H \right\} \quad \text{s.t.} \quad \mathrm{Tr}(B^* B) = 1. \tag{52}$$

Using the matrix inversion formula [23],

$$\left(V B \tilde H B^* V^* + \sigma^2 I\right)^{-1} = \frac{1}{\sigma^2}\left[I - V B \left(\sigma^2 \tilde H^{-1} + B^* V^* V B\right)^{-1} B^* V^*\right]. \tag{53}$$

It is easy to verify from the definition of V in (13) that

$$(V^* V)_{ik} = \sum_{l=0}^{N-1} e^{j\frac{2\pi}{N} l (k - i)} = N \delta_{k,i}. \tag{54}$$

Therefore, the objective in (52) equals

$$\mathrm{Tr}\left\{ \frac{N}{\sigma^2} \tilde H B^* \left[I - B\left(\frac{\sigma^2}{N} \tilde H^{-1} + B^* B\right)^{-1} B^*\right] B \tilde H \right\} = \sum_{i=1}^{|K|} |\tilde h_i|^2 \left(1 - \frac{\sigma^2/N}{|b_i|^2 |\tilde h_i|^2 + \sigma^2/N}\right), \tag{55}$$

where we used the fact that B and $\tilde H$ are diagonal.
We can now find the optimal B by maximizing (55), which is equivalent to minimizing the negative term:

$$\min_B \sum_{i=1}^{|K|} \frac{|\tilde h_i|^2}{1 + |b_i|^2 |\tilde h_i|^2 N/\sigma^2}, \quad \text{s.t.} \quad \sum_{i=1}^{|K|} |b_i|^2 = 1. \tag{56}$$

Denoting $\beta_i = |b_i|^2$, (56) becomes a convex optimization problem:

$$\min_{\beta_i} \sum_{i=1}^{|K|} \frac{|\tilde h_i|^2}{1 + \beta_i |\tilde h_i|^2 N/\sigma^2} \tag{57}$$

subject to

$$\beta_i \geq 0, \tag{58}$$
$$\sum_{i=1}^{|K|} \beta_i = 1. \tag{59}$$

To solve (57) subject to (58) and (59), we form the Lagrangian

$$\mathcal{L} = \sum_{i=1}^{|K|} \frac{|\tilde h_i|^2}{1 + \beta_i |\tilde h_i|^2 N/\sigma^2} + \lambda\left(\sum_{i=1}^{|K|} \beta_i - 1\right) - \sum_{i=1}^{|K|} \mu_i \beta_i, \tag{60}$$

where, from the Karush-Kuhn-Tucker (KKT) conditions [24], $\mu_i \geq 0$ and $\mu_i \beta_i = 0$. Differentiating (60) with respect to $\beta_i$ and equating to 0,

$$\frac{|\tilde h_i|^4 N/\sigma^2}{\left(1 + \beta_i |\tilde h_i|^2 N/\sigma^2\right)^2} + \mu_i = \lambda, \tag{61}$$

so that λ > 0, since $|\tilde h_i| > 0$ by construction of $\tilde H$ (see Theorem 1). If $\lambda > |\tilde h_i|^4 N/\sigma^2$ then $\mu_i > 0$, and therefore $\beta_i = 0$ from the KKT conditions. If $\lambda \leq |\tilde h_i|^4 N/\sigma^2$ then from (61) $\mu_i = 0$ and

$$\beta_i = \frac{\sigma^2}{N}\left(\sqrt{\frac{N}{\lambda \sigma^2}} - \frac{1}{|\tilde h_i|^2}\right). \tag{62}$$

The optimal $\beta_i$ is therefore

$$\beta_i = \begin{cases} \frac{\sigma^2}{N}\left(\sqrt{\frac{N}{\lambda\sigma^2}} - \frac{1}{|\tilde h_i|^2}\right), & \lambda \leq |\tilde h_i|^4 N/\sigma^2 \\ 0, & \lambda > |\tilde h_i|^4 N/\sigma^2 \end{cases} \tag{63}$$

where λ > 0 is chosen to satisfy (59). Note from (63) that if $\beta_i = 0$ and $j < i$, then $\beta_j = 0$ as well, since the $|\tilde h_i|$ are arranged in increasing order. We now show that there is a unique λ that satisfies (59). Define the function

$$G(\lambda) = \sum_{i=1}^{|K|} \beta_i(\lambda) - 1, \tag{64}$$

so that λ is a root of G(λ). Since the $|\tilde h_i|$'s are in increasing order, $|\tilde h_{|K|}| = \max_i |\tilde h_i|$. It is clear from (63) that G(λ) is monotonically decreasing for $0 < \lambda \leq |\tilde h_{|K|}|^4 N/\sigma^2$. In addition, $G(\lambda) = -1$ for $\lambda > |\tilde h_{|K|}|^4 N/\sigma^2$, and $G(\lambda) > 0$ for $\lambda \to 0$. Thus, there is a unique λ for which (59) is satisfied. Substituting (63) into (59), and denoting by m the smallest index for which $\lambda \leq |\tilde h_{m+1}|^4 N/\sigma^2$, we have

$$\sqrt{\lambda} = \frac{(|K| - m)\sqrt{N/\sigma^2}}{N/\sigma^2 + \sum_{i=m+1}^{|K|} 1/|\tilde h_i|^2}, \tag{65}$$

completing the proof of the theorem.
9,762
1003.2822
2138787877
Signals comprised of a stream of short pulses appear in many applications including bioimaging and radar. The recent finite rate of innovation framework, has paved the way to low rate sampling of such pulses by noticing that only a small number of parameters per unit time are needed to fully describe these signals. Unfortunately, for high rates of innovation, existing sampling schemes are numerically unstable. In this paper we propose a general sampling approach which leads to stable recovery even in the presence of many pulses. We begin by deriving a condition on the sampling kernel which allows perfect reconstruction of periodic streams from the minimal number of samples. We then design a compactly supported class of filters, satisfying this condition. The periodic solution is extended to finite and infinite streams and is shown to be numerically stable even for a large number of pulses. High noise robustness is also demonstrated when the delays are sufficiently separated. Finally, we process ultrasound imaging data using our techniques and show that substantial rate reduction with respect to traditional ultrasound sampling schemes can be achieved.
The authors of @cite_20 proposed a Gaussian sampling kernel for sampling finite streams of Diracs. The Gaussian method is numerically unstable, as mentioned in @cite_9 , since the samples are multiplied by a rapidly diverging or decaying exponent. Therefore, this approach is unsuitable for @math . Modifications proposed in @cite_9 exhibit better performance and stability. However, these methods require substantial oversampling, and still exhibit instability for @math .
{ "abstract": [ "Recently, it was shown that it is possible to develop exact sampling schemes for a large class of parametric nonbandlimited signals, namely certain signals of finite rate of innovation. A common feature of such signals is that they have a finite number of degrees of freedom per unit of time and can be reconstructed from a finite number of uniform samples. In order to prove sampling theorems, considered the case of deterministic, noiseless signals and developed algebraic methods that lead to perfect reconstruction. However, when noise is present, many of those schemes can become ill-conditioned. In this paper, we revisit the problem of sampling and reconstruction of signals with finite rate of innovation and propose improved, more robust methods that have better numerical conditioning in the presence of noise and yield more accurate reconstruction. We analyze, in detail, a signal made up of a stream of Diracs and develop algorithmic tools that will be used as a basis in all constructions. While some of the techniques have been already encountered in the spectral estimation framework, we further explore preconditioning methods that lead to improved resolution performance in the case when the signal contains closely spaced components. For classes of periodic signals, such as piecewise polynomials and nonuniform splines, we propose novel algebraic approaches that solve the sampling problem in the Laplace domain, after appropriate windowing. Building on the results for periodic signals, we extend our analysis to finite-length signals and develop schemes based on a Gaussian kernel, which avoid the problem of ill-conditioning by proper weighting of the data matrix. Our methods use structured linear systems and robust algorithmic solutions, which we show through simulation results.", "The authors consider classes of signals that have a finite number of degrees of freedom per unit of time and call this number the rate of innovation. 
Examples of signals with a finite rate of innovation include streams of Diracs (e.g., the Poisson process), nonuniform splines, and piecewise polynomials. Even though these signals are not bandlimited, we show that they can be sampled uniformly at (or above) the rate of innovation using an appropriate kernel and then be perfectly reconstructed. Thus, we prove sampling theorems for classes of signals and kernels that generalize the classic \"bandlimited and sinc kernel\" case. In particular, we show how to sample and reconstruct periodic and finite-length streams of Diracs, nonuniform splines, and piecewise polynomials using sinc and Gaussian kernels. For infinite-length signals with finite local rate of innovation, we show local sampling and reconstruction based on spline kernels. The key in all constructions is to identify the innovative part of a signal (e.g., time instants and weights of Diracs) using an annihilating or locator filter: a device well known in spectral analysis and error-correction coding. This leads to standard computational procedures for solving the sampling problem, which we show through experimental results. Applications of these new sampling results can be found in signal processing, communications systems, and biological systems." ], "cite_N": [ "@cite_9", "@cite_20" ], "mid": [ "2098662489", "2158537680" ] }
Innovation Rate Sampling of Pulse Streams with Application to Ultrasound Imaging
Sampling is the process of representing a continuous-time signal by discrete-time coefficients, while retaining the important signal features. The well-known Shannon-Nyquist theorem states that the minimal sampling rate required for perfect reconstruction of bandlimited signals is twice the maximal frequency. This result has since been generalized to minimal rate sampling schemes for signals lying in arbitrary subspaces [1], [2]. Recently, there has been growing interest in sampling of signals consisting of a stream of short pulses, where the pulse shape is known. Such signals have a finite number of degrees of freedom per unit time, also known as the Finite Rate of Innovation (FRI) property [3]. This interest is motivated by applications such as digital processing of neuronal signals, bio-imaging, image processing and ultrawideband (UWB) communications, where such signals are present in abundance. Our work is motivated by the possible application of this model in ultrasound imaging, where echoes of the transmit pulse are reflected off scatterers within the tissue, and form a stream of pulses signal at the receiver. The time-delays and amplitudes of the echoes indicate the position and strength of the various scatterers, respectively. Therefore, determining these parameters from low rate samples of the received signal is an important problem. Reducing the rate allows more efficient processing which can translate to power and size reduction of the ultrasound imaging system. Our goal is to design a minimal rate single-channel sampling and reconstruction scheme for pulse streams that is stable even in the presence of many pulses. Since the set of FRI signals does not form a subspace, classic subspace schemes cannot be directly used to design low-rate sampling schemes. Mathematically, such FRI signals conform with a broader model of signals lying in a union of subspaces [4]- [9]. 
Although the minimal sampling rate required for such settings has been derived, no generic sampling scheme exists for the general problem. Nonetheless, some special cases have been treated in previous work, including streams of pulses. A stream of pulses can be viewed as a parametric signal, uniquely defined by the time-delays of the pulses and their amplitudes. Efficient sampling of periodic impulse streams, having L impulses in each period, was proposed in [3], [10]. The heart of the solution is to obtain a set of Fourier series coefficients, which then converts the problem of determining the time-delays and amplitudes to that of finding the frequencies and amplitudes of a sum of sinusoids. The latter is a standard problem in spectral analysis [11] which can be solved using conventional methods, such as the annihilating filter approach, as long as the number of samples is at least 2L. This result is intuitive since there are 2L degrees of freedom in each period: L time-delays and L amplitudes. Periodic streams of pulses are mathematically convenient to analyze, however not very practical. In contrast, finite streams of pulses are prevalent in applications such as ultrasound imaging. The first treatment of finite Dirac streams appears in [3], in which a Gaussian sampling kernel was proposed. The time-delays and amplitudes are then estimated from the Gaussian tails. This method and its improvement [12] are numerically unstable for high rates of innovation, since they rely on the Gaussian tails which take on small values. The work in [13] introduced a general family of polynomial and exponential reproducing kernels, which can be used to solve FRI problems. Specifically, B-spline and E-spline sampling kernels which satisfy the reproduction condition are proposed. This method treats streams of Diracs, differentiated Diracs, and short pulses with compact support. However, the proposed sampling filters result in poor reconstruction results for large L. 
To the best of our knowledge, a numerically stable sampling and reconstruction scheme for high order problems has not yet been reported. Infinite streams of pulses arise in applications such as UWB communications, where the communicated data changes frequently. Using spline filters [13], and under certain limitations on the signal, the infinite stream can be divided into a sequence of separate finite problems. The individual finite cases may be treated using methods for the finite setting, at the expense of above critical sampling rate, and suffer from the same instability issues. In addition, the constraints that are cast on the signal become more and more stringent as the number of pulses per unit time grows. In a recent work [14] the authors propose a sampling and reconstruction scheme for L = 1, however, our interest here is in high values of L. Another related work [7] proposes a semi-periodic model, where the pulse time-delays do not change from period to period, but the amplitudes vary. This is a hybrid case in which the number of degrees of freedom in the time-delays is finite, but there is an infinite number of degrees of freedom in the amplitudes. Therefore, the proposed recovery scheme generally requires an infinite number of samples. This differs from the periodic and finite cases we discuss in this paper which have a finite number of degrees of freedom and, consequently, require only a finite number of samples. In this paper we study sampling of signals consisting of a stream of pulses, covering the three different cases: periodic, finite and infinite streams of pulses. The criteria we consider for designing such systems are: a) Minimal sampling rate which allows perfect reconstruction, b) numerical stability (with sufficiently separated time delays), and c) minimal restrictions on the number of pulses per sampling period. We begin by treating periodic pulse streams. 
For this setting, we develop a general sampling scheme for arbitrary pulse shapes which allows to determine the times and amplitudes of the pulses, from a minimal number of samples. As we show, previous work [3] is a special case of our extended results. In contrast to the infinite time-support of the filters in [3], we develop a compactly supported class of filters which satisfy our mathematical condition. This class of filters consists of a sum of sinc functions in the frequency domain. We therefore refer to such functions as Sum of Sincs (SoS). To the best of our knowledge, this is the first class of finite support filters that solve the periodic case. As we discuss in detail in Section V, these filters are related to exponential reproducing kernels, introduced in [13]. The compact support of the SoS filters is the key to extending the periodic solution to the finite stream case. Generalizing the SoS class, we design a sampling and reconstruction scheme which perfectly reconstructs a finite stream of pulses from a minimal number of samples, as long as the pulse shape has compact support. Our reconstruction is numerically stable for both small values of L and large number of pulses, e.g., L = 100. In contrast, Gaussian sampling filters [3], [12] are unstable for L > 9, and we show in simulations that B-splines and E-splines [13] exhibit large estimation errors for L ≥ 5. In addition, we demonstrate substantial improvement in noise robustness even for low values of L. Our advantage stems from the fact that we propose compactly supported filters on the one hand, while staying within the regime of Fourier coefficients reconstruction on the other hand. Extending our results to the infinite setting, we consider an infinite stream consisting of pulse bursts, where each burst contains a large number of pulses. The stability of our method allows to reconstruct even a large number of closely spaced pulses, which cannot be treated using existing solutions [13]. 
In addition, the constraints cast on the structure of the signal are independent of L (the number of pulses in each burst), in contrast to previous work, and therefore similar sampling schemes may be used for different values of L. Finally, we show that our sampling scheme requires lower sampling rate for L ≥ 3. As an application, we demonstrate our sampling scheme on real ultrasound imaging data acquired by GE healthcare's ultrasound system. We obtain high accuracy estimation while reducing the number of samples by two orders of magnitude in comparison with current imaging techniques. The remainder of the paper is organized as follows. In Section II we present the periodic signal model, and derive a general sampling scheme. The SoS class is then developed and demonstrated via simulations. The extension to the finite case is presented in Section III, followed by simulations showing the advantages of our method in high order problems and noisy settings. In Section IV, we treat infinite streams of pulses. Section V explores the relationship of our work to previous methods. Finally, in Section VI, we demonstrate our algorithm on real ultrasound imaging data. II. PERIODIC STREAM OF PULSES A. Problem Formulation Throughout the paper we denote matrices and vectors by bold font, with lowercase letters corresponding to vectors and uppercase letters to matrices. The nth element of a vector a is written as a n , and A ij denotes the ijth element of a matrix A. Superscripts (·) * , (·) T and (·) H represent complex conjugation, transposition and conjugate transposition, respectively. The Moore-Penrose pseudo-inverse of a matrix A is written as A † . The continuous-time Fourier transform (CTFT) of a continuous-time signal x (t) ∈ L 2 is defined by X (ω) = ∞ −∞ x (t) e −jωt dt, and x (t) , y (t) = ∞ −∞ x * (t) y (t) dt,(1) denotes the inner product between two L 2 signals. 
Consider a $\tau$-periodic stream of pulses, defined as
$$x(t) = \sum_{m \in \mathbb{Z}} \sum_{l=1}^{L} a_l h(t - t_l - m\tau), \qquad (2)$$
where $h(t)$ is a known pulse shape, $\tau$ is the known period, and $\{t_l, a_l\}_{l=1}^{L}$, $t_l \in [0, \tau)$, $a_l \in \mathbb{C}$, $l = 1 \ldots L$, are the unknown delays and amplitudes. Our goal is to sample $x(t)$ and reconstruct it from a minimal number of samples. Since the signal has $2L$ degrees of freedom, we expect the minimal number of samples to be $2L$. We are primarily interested in pulses which have small time-support. Direct uniform sampling of $2L$ samples of the signal will result in many zero samples, since the probability for a sample to hit a pulse is very low. Therefore, we must construct a more sophisticated sampling scheme. Define the periodic continuation of $h(t)$ as $f(t) = \sum_{m \in \mathbb{Z}} h(t - m\tau)$. Using Poisson's summation formula [15], $f(t)$ may be written as
$$f(t) = \frac{1}{\tau} \sum_{k \in \mathbb{Z}} H\left(\frac{2\pi k}{\tau}\right) e^{j 2\pi k t/\tau}, \qquad (3)$$
where $H(\omega)$ denotes the CTFT of the pulse $h(t)$. Substituting (3) into (2) we obtain
$$x(t) = \sum_{l=1}^{L} a_l f(t - t_l) = \sum_{k \in \mathbb{Z}} \left( \frac{1}{\tau} H\left(\frac{2\pi k}{\tau}\right) \sum_{l=1}^{L} a_l e^{-j 2\pi k t_l/\tau} \right) e^{j 2\pi k t/\tau} = \sum_{k \in \mathbb{Z}} X[k] e^{j 2\pi k t/\tau}, \qquad (4)$$
where we denoted
$$X[k] = \frac{1}{\tau} H\left(\frac{2\pi k}{\tau}\right) \sum_{l=1}^{L} a_l e^{-j 2\pi k t_l/\tau}. \qquad (5)$$
The expansion in (4) is the Fourier series representation of the $\tau$-periodic signal $x(t)$ with Fourier coefficients given by (5). Following [3], we now show that once $2L$ or more Fourier coefficients of $x(t)$ are known, we may use conventional tools from spectral analysis to determine the unknowns $\{t_l, a_l\}_{l=1}^{L}$. The method by which the Fourier coefficients are obtained will be presented in subsequent sections. Define a set $K$ of $M$ consecutive indices such that $H(2\pi k/\tau) \neq 0$, $\forall k \in K$. We assume such a set exists, which is usually the case for short time-support pulses $h(t)$. Denote by $H$ the $M \times M$ diagonal matrix with $k$th entry $\frac{1}{\tau} H(2\pi k/\tau)$, and by $V(t)$ the $M \times L$ matrix with $kl$th element $e^{-j 2\pi k t_l/\tau}$, where $t = \{t_1, \ldots, t_L\}$ is the vector of the unknown delays.
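As a quick numerical sketch of the Fourier coefficients in (5), the following computes $X[k]$ for a periodic Dirac stream, where $H(\omega) = 1$; the delays, amplitudes, and $\tau$ below are arbitrary illustrative values, not taken from the paper's experiments.

```python
import numpy as np

# Illustrative values (not from the paper): a tau-periodic stream of
# L = 3 Diracs, so H(2*pi*k/tau) = 1 for every k.
tau = 1.0
t_l = np.array([0.21, 0.56, 0.83])   # delays in [0, tau)
a_l = np.array([1.0, -0.7, 0.4])     # amplitudes
L = len(t_l)

# M = 2L + 1 consecutive indices around zero.
K = np.arange(-L, L + 1)

# Eq. (5) with H = 1:  X[k] = (1/tau) * sum_l a_l * exp(-j*2*pi*k*t_l/tau)
X = (1.0 / tau) * np.exp(-2j * np.pi * np.outer(K, t_l) / tau) @ a_l

# Sanity check: X[0] is simply the sum of the amplitudes divided by tau.
print(np.isclose(X[K == 0][0], a_l.sum() / tau))  # True
```

With these $2L + 1 = 7$ coefficients in hand, the spectral-analysis step described next has enough information to recover all $2L$ unknowns.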
In addition, denote by $a$ the length-$L$ vector whose $l$th element is $a_l$, and by $x$ the length-$M$ vector whose $k$th element is $X[k]$. We may then write (5) in matrix form as
$$x = H V(t) a. \qquad (6)$$
Since $H$ is invertible by construction, we define $y = H^{-1} x$, which satisfies
$$y = V(t) a. \qquad (7)$$
The matrix $V$ is a Vandermonde matrix and therefore has full column rank [11], [16] as long as $M \geq L$ and the time-delays are distinct, i.e., $t_i \neq t_j$ for all $i \neq j$. Writing the expression for the $k$th element of the vector $y$ in (7) explicitly:
$$y_k = \sum_{l=1}^{L} a_l e^{-j 2\pi k t_l/\tau}.$$
Evidently, given the vector $x$, (7) is a standard problem of finding the frequencies and amplitudes of a sum of $L$ complex exponentials (see [11] for a review of this topic). This problem may be solved as long as $|K| = M \geq 2L$. The annihilating filter approach used extensively by Vetterli et al. [3], [10] is one way of recovering the frequencies, and is thoroughly described in the literature [3], [10], [11]. This method can solve the problem using the critical number of samples $M = 2L$, as opposed to other techniques such as MUSIC [17], [18] and ESPRIT [19] which require oversampling. Since we are interested in minimal-rate sampling, we use the annihilating filter throughout the paper. B. Obtaining The Fourier Series Coefficients As we have seen, given the vector of $M \geq 2L$ Fourier series coefficients $x$, we may use standard tools from spectral analysis to determine the set $\{t_l, a_l\}_{l=1}^{L}$. In practice, however, the signal is sampled in the time domain, and therefore we do not have direct access to samples of $x$. Our goal is to design a single-channel sampling scheme which allows us to determine $x$ from time-domain samples. In contrast to previous work [3], [10], which focused on a low-pass sampling filter, in this section we derive a general condition on the sampling kernel allowing us to obtain the vector $x$.
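To make the annihilating-filter step concrete, here is a minimal noiseless sketch: given the $2L + 1$ values $y_k$ of (7), it solves for the filter that annihilates the exponentials, takes the roots of that filter to obtain the delays, and then solves the Vandermonde system for the amplitudes. The specific delays and amplitudes are assumed purely for illustration.

```python
import numpy as np

# Hypothetical ground truth for the demonstration.
tau = 1.0
t_true = np.array([0.21, 0.56, 0.83])
a_true = np.array([1.0, -0.7, 0.4])
L = len(t_true)

# y_k = sum_l a_l exp(-j*2*pi*k*t_l/tau) for k = -L..L  (Eq. (7)).
k = np.arange(-L, L + 1)
V = np.exp(-2j * np.pi * np.outer(k, t_true) / tau)
y = V @ a_true
M = len(y)

# Annihilation equations: y[m] + sum_{i=1}^{L} h_i y[m-i] = 0, with h_0 = 1.
A = np.array([[y[m - i] for i in range(1, L + 1)] for m in range(L, M)])
b = -y[L:M]
h = np.concatenate(([1.0], np.linalg.lstsq(A, b, rcond=None)[0]))

# Roots of the annihilating filter are u_l = exp(-j*2*pi*t_l/tau).
u = np.roots(h)
t_est = np.sort(np.mod(-np.angle(u) * tau / (2 * np.pi), tau))

# Amplitudes: least-squares solution of the Vandermonde system (7).
V_est = np.exp(-2j * np.pi * np.outer(k, t_est) / tau)
a_est = np.real(np.linalg.lstsq(V_est, y, rcond=None)[0])

print(np.allclose(t_est, np.sort(t_true), atol=1e-6))  # True
```

In the noiseless case the recovery is exact to numerical precision, consistent with the critical-rate claim $M = 2L$ (here $M = 2L + 1$ for convenience).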
For the sake of clarity we confine ourselves to uniform sampling; the results extend in a straightforward manner to nonuniform sampling as well. Consider sampling the signal $x(t)$ uniformly with sampling kernel $s^*(-t)$ and sampling period $T$, as depicted in Fig. 1. The samples are given by
$$c[n] = \int_{-\infty}^{\infty} x(t)\, s^*(t - nT)\,dt = \langle s(t - nT), x(t) \rangle. \qquad (9)$$
Substituting (4) into (9) we have
$$c[n] = \sum_{k \in \mathbb{Z}} X[k] \int_{-\infty}^{\infty} e^{j 2\pi k t/\tau} s^*(t - nT)\,dt = \sum_{k \in \mathbb{Z}} X[k]\, e^{j 2\pi k n T/\tau} \int_{-\infty}^{\infty} e^{j 2\pi k t/\tau} s^*(t)\,dt = \sum_{k \in \mathbb{Z}} X[k]\, e^{j 2\pi k n T/\tau} S^*(2\pi k/\tau), \qquad (10)$$
where $S(\omega)$ is the CTFT of $s(t)$. Choosing any filter $s(t)$ which satisfies
$$S(\omega) = \begin{cases} 0 & \omega = 2\pi k/\tau,\ k \notin K \\ \text{nonzero} & \omega = 2\pi k/\tau,\ k \in K \\ \text{arbitrary} & \text{otherwise}, \end{cases} \qquad (11)$$
we can rewrite (10) as
$$c[n] = \sum_{k \in K} X[k]\, e^{j 2\pi k n T/\tau} S^*(2\pi k/\tau). \qquad (12)$$
In contrast to (10), the sum in (12) is finite. Note that (11) implies that any real filter meeting this condition will satisfy $k \in K \Rightarrow -k \in K$, and in addition $S(2\pi k/\tau) = S^*(-2\pi k/\tau)$, due to the conjugate symmetry of real filters. Defining the $M \times M$ diagonal matrix $S$ whose $k$th entry is $S^*(2\pi k/\tau)$ for all $k \in K$, and the length-$N$ vector $c$ whose $n$th element is $c[n]$, we may write (12) as
$$c = V(-t_s) S x, \qquad (13)$$
where $t_s = \{nT : n = 0 \ldots N - 1\}$, and $V$ is defined as in (6) with a different parameter $-t_s$ and dimensions $N \times M$. The matrix $S$ is invertible by construction. Since $V$ is Vandermonde, it is left invertible as long as $N \geq M$. Therefore,
$$x = S^{-1} V^{\dagger}(-t_s)\, c. \qquad (14)$$
In the special case where $N = M$ and $T = \tau/N$, the recovery in (14) becomes
$$x = S^{-1}\, \mathrm{DFT}\{c\}, \qquad (15)$$
i.e., the vector $x$ is obtained by applying the Discrete Fourier Transform (DFT) to the sample vector, followed by a correction matrix related to the sampling filter. The idea behind this sampling scheme is that each sample is actually a linear combination of the elements of $x$. The sampling kernel $s(t)$ is designed to pass the coefficients $X[k]$, $k \in K$, while suppressing all other coefficients $X[k]$, $k \notin K$.
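A toy sketch of the recovery step (15) for the critical case $N = M$, $T = \tau/N$: the Fourier coefficients and filter values below are randomly chosen for illustration, and with the unnormalized DFT convention used here a factor $N$ appears and is divided out.

```python
import numpy as np

# Assumed setup for illustration: M = N = 2L + 1 with L = 2.
tau = 1.0
L = 2
K = np.arange(-L, L + 1)
M = N = len(K)

rng = np.random.default_rng(0)
x_coef = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # X[k], k in K
S_vals = 0.5 + rng.random(M)   # S(2*pi*k/tau), nonzero on K as in Eq. (11)

# Samples per Eq. (12) with T = tau/N:
# c[n] = sum_{k in K} X[k] * exp(j*2*pi*k*n/N) * conj(S(2*pi*k/tau))
n = np.arange(N)
c = np.exp(2j * np.pi * np.outer(n, K) / N) @ (x_coef * np.conj(S_vals))

# DFT of c isolates N * X[k] * conj(S(2*pi*k/tau)); divide both factors out.
C = np.array([np.sum(c * np.exp(-2j * np.pi * kk * n / N)) for kk in K])
x_rec = C / (N * np.conj(S_vals))

print(np.allclose(x_rec, x_coef))  # True
```

This illustrates why condition (11) matters: each sample is a linear combination of only the $M$ coefficients indexed by $K$, so the linear system is square and invertible at the critical rate.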
This is exactly what the condition in (11) means. This sampling scheme guarantees that each sample combination is linearly independent of the others. Therefore, the linear system of equations in (13) has full column rank, which allows us to solve for the vector $x$. We summarize this result in the following theorem. Theorem 1. Consider the $\tau$-periodic stream of pulses of order $L$: $x(t) = \sum_{m \in \mathbb{Z}} \sum_{l=1}^{L} a_l h(t - t_l - m\tau)$. Choose a set $K$ of consecutive indices for which $H(2\pi k/\tau) \neq 0$, $\forall k \in K$. Then the samples $c[n] = \langle s(t - nT), x(t) \rangle$, $n = 0 \ldots N - 1$, uniquely determine the signal $x(t)$ for any $s(t)$ satisfying condition (11), as long as $N \geq |K| \geq 2L$. In order to extend Theorem 1 to nonuniform sampling, we only need to substitute the nonuniform sampling times in the vector $t_s$ in (14). Theorem 1 presents a general single-channel sampling scheme. One special case of this framework is the one proposed by Vetterli et al. in [3], in which $s^*(-t) = B\,\mathrm{sinc}(-Bt)$, where $B = M/\tau$ and $N \geq M \geq 2L$. In this case $s(t)$ is an ideal low-pass filter of bandwidth $B$ with
$$S(\omega) = \frac{1}{\sqrt{2\pi}}\, \mathrm{rect}\left(\frac{\omega}{2\pi B}\right). \qquad (16)$$
Clearly, (16) satisfies the general condition in (11) with $K = \{-\lfloor M/2 \rfloor, \ldots, \lfloor M/2 \rfloor\}$ and $S(2\pi k/\tau) = \frac{1}{\sqrt{2\pi}}$, $\forall k \in K$. Note that since this filter is real valued it must satisfy $k \in K \Rightarrow -k \in K$, i.e., the indices come in pairs except for $k = 0$. Since $k = 0$ is part of the set $K$, in this case the cardinality $M = |K|$ must be odd valued, so that $N \geq M \geq 2L + 1$ samples are required, rather than the minimal rate $N \geq 2L$. The ideal low-pass filter is bandlimited, and therefore has infinite time-support, so that it cannot be extended to finite and infinite streams of pulses. In the next section we propose a class of non-bandlimited sampling kernels, which exploit the additional degrees of freedom in condition (11), and have compact support in the time domain. The compact support allows us to extend this class to finite and infinite streams, as we show in Sections III and IV, respectively. C.
Compactly Supported Sampling Kernels Consider the following SoS class, which consists of a sum of sincs in the frequency domain:
$$G(\omega) = \frac{\tau}{\sqrt{2\pi}} \sum_{k \in K} b_k\, \mathrm{sinc}\left(\frac{\omega}{2\pi/\tau} - k\right), \qquad (17)$$
where $b_k \neq 0$, $k \in K$. The filter in (17) is real valued if and only if $k \in K \Rightarrow -k \in K$ and $b_k = b^*_{-k}$ for all $k \in K$. Since for each sinc in the sum
$$\mathrm{sinc}\left(\frac{\omega}{2\pi/\tau} - k\right) = \begin{cases} 1 & \omega = 2\pi k'/\tau,\ k' = k \\ 0 & \omega = 2\pi k'/\tau,\ k' \neq k, \end{cases} \qquad (18)$$
the filter $G(\omega)$ satisfies (11) by construction. Switching to the time domain,
$$g(t) = \mathrm{rect}\left(\frac{t}{\tau}\right) \sum_{k \in K} b_k\, e^{j 2\pi k t/\tau}, \qquad (19)$$
which is clearly a time-compact filter with support $\tau$. The SoS class in (19) may be extended to
$$G(\omega) = \frac{\tau}{\sqrt{2\pi}} \sum_{k \in K} b_k\, \phi\left(\frac{\omega}{2\pi/\tau} - k\right), \qquad (20)$$
where $b_k \neq 0$, $k \in K$, and $\phi(\omega)$ is any function satisfying
$$\phi(\omega) = \begin{cases} 1 & \omega = 0 \\ 0 & |\omega| \in \mathbb{N} \\ \text{arbitrary} & \text{otherwise}. \end{cases} \qquad (21)$$
This more general structure allows for smooth versions of the rect function, which is important when practically implementing analog filters. The function $g(t)$ represents a class of filters determined by the parameters $\{b_k\}_{k \in K}$. These degrees of freedom offer a filter design tool where the free parameters $\{b_k\}_{k \in K}$ may be optimized for different goals, e.g., parameters which will result in a feasible analog filter. In Theorem 2 below, we show how to choose $\{b_k\}$ to minimize the mean-squared error (MSE) in the presence of noise. Determining the parameters $\{b_k\}_{k \in K}$ may also be viewed from a more empirical point of view. The impulse response of any analog filter having support $\tau$ may be written in terms of a windowed Fourier series as
$$\Phi(t) = \mathrm{rect}\left(\frac{t}{\tau}\right) \sum_{k \in \mathbb{Z}} \beta_k\, e^{j 2\pi k t/\tau}. \qquad (22)$$
Confining ourselves to filters which satisfy $\beta_k \neq 0$, $k \in K$, we may truncate the series and choose
$$b_k = \begin{cases} \beta_k & k \in K \\ 0 & k \notin K \end{cases} \qquad (23)$$
as the parameters of $g(t)$ in (19). With this choice, $g(t)$ can be viewed as an approximation to $\Phi(t)$.
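A direct time-domain evaluation of the SoS kernel (19) can be sketched as follows; $\tau$, $p$, and the all-ones coefficient choice (which yields the Dirichlet-kernel case of Eq. (24)) are illustrative assumptions.

```python
import numpy as np

def sos_kernel(t, tau, b):
    """g(t) = rect(t/tau) * sum_k b_k exp(j*2*pi*k*t/tau), K = {-p,...,p}.

    The boundary |t| = tau/2 is treated as outside the support, which is
    immaterial for sampling (a measure-zero set)."""
    p = (len(b) - 1) // 2
    k = np.arange(-p, p + 1)
    rect = (np.abs(t) < tau / 2).astype(float)
    return rect * (np.exp(2j * np.pi * np.outer(np.atleast_1d(t), k) / tau) @ b)

tau = 1.0
p = 10
b = np.ones(2 * p + 1)          # all-ones coefficients (Dirichlet case)
tt = np.linspace(-tau, tau, 5)
g = sos_kernel(tt, tau, b)

# b_k = b_{-k}* (real, symmetric coefficients), so g(t) is real valued:
print(np.allclose(g.imag, 0))   # True
```

Note that `sos_kernel` vanishes outside $[-\tau/2, \tau/2]$ by construction, which is exactly the compact-support property exploited in Sections III and IV.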
Notice that there is an inherent tradeoff here: using more coefficients will result in a better approximation of the analog filter, but in turn will require more samples, since the number of samples N must be greater than the cardinality of the set K. The reconstruction is exact to numerical precision. To demonstrate the filter g(t) we first choose K = {−p, . . . , p} and set all coefficients {b k } to one, resulting in g(t) = rect t τ p k=−p e j2πkt/τ = rect t τ D p (2πt/τ ),(24) where the Dirichlet kernel D p (t) is defined by D p (t) = p k=−p e jkt = sin p + 1 2 t sin(t/2) .(25) The resulting filter for p = 10 and τ = 1 sec, is depicted in Fig. 2. This filter is also optimal in an MSE sense for the case h(t) = δ(t), as we show in Theorem 2. In Fig. 3 we plot g(t) for the case in which the b k 's are chosen as a length-M symmetric Hamming window: b k = 0.54 − 0.46 cos 2π k + ⌊M/2⌋ M , k ∈ K.(26) Notice that in both cases the coefficients satisfy b k = b * −k , and therefore, the resulting filters are real valued. In the presence of noise, the choice of {b k } k∈K will effect the performance. Consider the case in which digital noise is added to the samples c, so that y = c + w, with w denoting a white Gaussian noise vector. Using (13), y = V(−t s )Bx + w(27) where B is a diagonal matrix, having {b k } on its diagonal. To choose the optimal B we assume that the {a l } are uncorrelated with variance σ 2 a , independent of {t l }, and that {t l } are uniformly distributed in [0, τ ). Since the noise is added to the samples after filtering, increasing the filter's amplification will always reduce the MSE. Therefore, the filter's energy must be normalized, and we do so by adding the constraint Tr(B * B) = 1. Under these assumptions, we have the following theorem: Theorem 2. 
The minimal MSE of a linear estimator of x from the noisy samples y in (27) is achieved by choosing the coefficients |b i | 2 =      σ 2 N N λσ 2 − 1 |hi| 2 λ ≤ |h i | 4 N/σ 2 0 λ > |h i | 4 N/σ 2 (28) whereh k = H(2πk/τ )σ a √ L/τ and are arranged in an increasing order of |h k |, √ λ = (|K| − m) N/σ 2 N/σ 2 + |K| i=m+1 1/|h i | 2 ,(29) and m is the smallest index for which λ ≤ |h m+1 | 4 N/σ 2 . Proof: See the Appendix. An important consequence of Theorem 2 is the following corollary. Corollary 1. If |h k | 2 = |h ℓ | 2 , ∀k, ℓ ∈ K then the optimal coefficients are |b i | 2 = 1/|K|, ∀k ∈ K. Proof: It is evident from (28) that if |h k | = |h ℓ | then |b k | = |b ℓ |. To satisfy the trace constraint Tr(B * B) = 1, λ cannot be chosen such that all b i = 0. Therefore, |b i | 2 = 1/|K| for all i ∈ K. From Corollary 1 it follows that when h(t) = δ(t), the optimal choice of coefficients is b k = b j for all k and j. We therefore use this choice when simulating noisy settings in the next section. Our sampling scheme for the periodic case consists of sampling kernels having compact support in the time domain. In the next section we exploit the compact support of our filter, and extend the results to the finite stream case. We will show that our sampling and reconstruction scheme offers a numerically stable solution, with high noise robustness. h(t) = 1 √ 2πσ 2 exp(−t 2 /2σ 2 ),(30) with parameter σ = 7 · 10 −3 , and period τ = 1. The time-delays and amplitudes were chosen randomly. In order to demonstrate near-critical sampling we choose the set of indices K = {−L, . . . , L} with cardinality M = |K| = 11. We filter x(t) with g(t) of (26). The filter output is sampled uniformly N times, with sampling period T = τ /N , where N = M = 11. The sampling process is depicted in Fig. 4. The vector x is obtained using (14), and the delays and amplitudes are determined by the annihilating filter method. Reconstruction results are depicted in Fig. 5. 
The estimation and reconstruction are both exact to numerical precision. Analog filtering operations are carried out by discrete approximations over a fine grid. The analog signal and filters are mimicked by high rate digital signals. Since the sampling rate which constructs the fine grid is between 2-3 orders of magnitude higher than the final sampling rate T , the simulations reflect very well the analog results. samples were taken, sampled uniformly with sampling period T = τ /N . We choose g(t) given by (24). As explained earlier, only the values of the filter at points 2πk/τ, k ∈ K affect the samples (see (11)). Since the values of the filter at the relevant points coincide and are equal to one for the low-pass filter [3] and g * (−t), the resulting samples for both settings are identical. Therefore, we present results for our method only, and state that the exact same results are obtained using the approach of [3]. In our setup white Gaussian noise (AWGN) with variance σ 2 n is added to the samples, where we define the SNR as: SNR = 1 N c 2 2 σ 2 n ,(31) with c denoting the clean samples. In our experiments the noise variance is set to give the desired SNR. The simulation consists of 1000 experiments for each SNR, where in each experiment a new noise vector is created. We choose t = τ · (1/3 2/3) T and a = τ · (1 1) T , where these vectors remain constant throughout the experiments. We define the error in time-delay estimation as the average of t −t 2 2 , where t andt denote the true and estimated time-delays, respectively, sorted in increasing order. The error in amplitudes is similarly defined by a −â 2 2 . In Fig. 6 we show the error as a function of SNR for both delay and amplitude estimation. Estimation of the time-delays is the main interest in FRI literature, due to special nonlinear methods required for delay recovery. 
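The noise level for a target SNR in Eq. (31) can be set with a small utility like the following; it assumes the SNR is specified in dB, which is the usual convention in such experiments but is not stated explicitly in the text.

```python
import numpy as np

def noise_std_for_snr(c, snr_db):
    """Return sigma_n such that (1/N)*||c||^2 / sigma_n^2 equals the
    target SNR (Eq. (31)), with the SNR given in dB (an assumption)."""
    snr_lin = 10 ** (snr_db / 10)
    return np.sqrt(np.mean(np.abs(c) ** 2) / snr_lin)

c = np.ones(11)                      # toy clean sample vector
sigma_n = noise_std_for_snr(c, 20.0)
print(round(sigma_n, 3))             # 0.1
```

Noisy samples would then be generated as `c + sigma_n * rng.standard_normal(len(c))`, one fresh noise vector per experiment, as in the 1000-trial setup described above.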
Once the delays are known, the standard least-squares method is typically used to recover the amplitudes, therefore, we focus on delay estimation in the sequel. Finally, for the same setting we can improve reconstruction accuracy at the expense of oversampling, as illustrated in Fig. 7. Here we show recovery performance for oversampling factors of 1, 2, 4 and 8. The oversampling was exploited using the total least-squares method, followed by Cadzow's iterative denoising (both described in detail in [10]). III. FINITE STREAM OF PULSES A. Extension of SoS Class Consider now a finite stream of pulses, defined as x(t) = L l=1 a l h(t − t l ), t l ∈ [0, τ ), a l ∈ R, l = 1 . . . L,(32) where, as in Section II, h(t) is a known pulse shape, and {t l , a l } L l=1 are the unknown delays and amplitudes. The time-delays {t l } L l=1 are restricted to lie in a finite time interval [0, τ ). Since there are only 2L degrees of freedom, we wish to design a sampling and reconstruction method which perfectly reconstructsx(t) from 2L samples. In this section we assume that the pulse h(t) has finite support R, i.e., h(t) = 0, ∀|t| ≥ R/2.(33) This is a rather weak condition, since our primary interest is in very short pulses which have wide, or even infinite, frequency support, and therefore cannot be sampled efficiently using classical sampling results for bandlimited signals. We now investigate the structure of the samples taken in the periodic case, and design a sampling kernel for the finite setting which obtains precisely the same samples c[n], as in the periodic case. In the periodic setting, the resulting samples are given by (10). 
Using $g(t)$ of (19) as the sampling kernel we have
$$c[n] = \langle g(t - nT), x(t) \rangle = \sum_{m \in \mathbb{Z}} \sum_{l=1}^{L} a_l \int_{-\infty}^{\infty} h(t - t_l - m\tau)\, g^*(t - nT)\,dt = \sum_{m \in \mathbb{Z}} \sum_{l=1}^{L} a_l \int_{-\infty}^{\infty} h(t)\, g^*\big(t - (nT - t_l - m\tau)\big)\,dt = \sum_{m \in \mathbb{Z}} \sum_{l=1}^{L} a_l\, \varphi(nT - t_l - m\tau), \qquad (34)$$
where we defined
$$\varphi(\vartheta) = \langle g(t - \vartheta), h(t) \rangle. \qquad (35)$$
Since $g(t)$ in (19) vanishes for all $|t| > \tau/2$ and $h(t)$ satisfies (33), the support of $\varphi(t)$ is $(R + \tau)$, i.e.,
$$\varphi(t) = 0 \quad \text{for all } |t| \geq (R + \tau)/2. \qquad (36)$$
Using this property, the summation in (34) will be over nonzero values only for indices $m$ satisfying
$$|nT - t_l - m\tau| < (R + \tau)/2. \qquad (37)$$
Sampling within the window $[0, \tau)$, i.e., $nT \in [0, \tau)$, and noting that the time-delays lie in the interval $t_l \in [0, \tau)$, $l = 1 \ldots L$, (37) implies
$$(R + \tau)/2 > |nT - t_l - m\tau| \geq |m|\tau - |nT - t_l| > (|m| - 1)\tau. \qquad (38)$$
Here we used the triangle inequality and the fact that $|nT - t_l| < \tau$ in our setting. Therefore,
$$|m| < \frac{R/\tau + 3}{2} \;\Rightarrow\; |m| \leq \left\lceil \frac{R/\tau + 3}{2} \right\rceil - 1 \triangleq r, \qquad (39)$$
i.e., the elements of the sum in (34) vanish for all $m$ but the values in (39). Consequently, the infinite sum in (34) reduces to a finite sum over $|m| \leq r$, so that (34) becomes
$$c[n] = \sum_{m=-r}^{r} \sum_{l=1}^{L} a_l\, \varphi(nT - t_l - m\tau) = \sum_{m=-r}^{r} \sum_{l=1}^{L} a_l \int_{-\infty}^{\infty} h(t - t_l)\, g^*(t - nT + m\tau)\,dt = \sum_{m=-r}^{r} \Big\langle g(t - nT + m\tau), \sum_{l=1}^{L} a_l h(t - t_l) \Big\rangle, \qquad (40)$$
where in the last equality we used the linearity of the inner product. Defining a function which consists of $(2r + 1)$ periods of $g(t)$,
$$g_r(t) = \sum_{m=-r}^{r} g(t + m\tau), \qquad (41)$$
we conclude that
$$c[n] = \langle g_r(t - nT), x(t) \rangle. \qquad (42)$$
Therefore, the samples $c[n]$ can be obtained by filtering the aperiodic signal $x(t)$ with the filter $g^*_r(-t)$ prior to sampling. This filter has compact support equal to $(2r + 1)\tau$. Since the finite-setting samples (42) are identical to those of the periodic case (34), recovery of the delays and amplitudes is performed exactly as in the periodic setting. We summarize this result in the following theorem. Theorem 3.
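The periodic extension $g_r(t)$ of (41) can be formed numerically by summing shifted copies of the SoS kernel; the sketch below uses arbitrary illustrative values of $\tau$ and the coefficients, with $r = 1$ (the value appropriate when the pulse support satisfies $R \leq \tau$).

```python
import numpy as np

def g_sos(t, tau, b):
    """SoS kernel of Eq. (19): rect(t/tau) * sum_k b_k exp(j*2*pi*k*t/tau).
    Real-valued here since b is real and symmetric."""
    p = (len(b) - 1) // 2
    k = np.arange(-p, p + 1)
    rect = (np.abs(t) < tau / 2).astype(float)
    return rect * np.real(
        np.exp(2j * np.pi * np.outer(np.atleast_1d(t), k) / tau) @ b)

def g_r(t, tau, b, r):
    """Eq. (41): (2r+1)-period extension, compact support (2r+1)*tau."""
    return sum(g_sos(t + m * tau, tau, b) for m in range(-r, r + 1))

tau = 1.0
b = np.ones(5)    # K = {-2,...,2}, illustrative all-ones coefficients
r = 1             # sufficient when the pulse support R <= tau

# g_r coincides with g on the central period and repeats it on both sides.
t0 = np.array([0.0])
print(np.isclose(g_r(t0, tau, b, r)[0], g_sos(t0, tau, b)[0]))  # True
```

For $r = 1$ this is exactly the three-period filter $g_{3p}(t) = g(t - \tau) + g(t) + g(t + \tau)$ used in the finite and bursty settings.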
Consider the finite stream of pulses given by
$$x(t) = \sum_{l=1}^{L} a_l h(t - t_l), \qquad t_l \in [0, \tau),\ a_l \in \mathbb{R},$$
where $h(t)$ has finite support $R$. Choose a set $K$ of consecutive indices for which $H(2\pi k/\tau) \neq 0$, $\forall k \in K$. Then, the $N$ samples given by $c[n] = \langle g_r(t - nT), x(t) \rangle$, $n = 0 \ldots N - 1$, $nT \in [0, \tau)$, where $r$ is defined in (39) and $g_r(t)$ is compactly supported and defined by (41) (based on the filter $g(t)$ in (17)), uniquely determine the signal $x(t)$ as long as $N \geq |K| \geq 2L$. If, for example, the support $R$ of $h(t)$ satisfies $R \leq \tau$, then we obtain from (39) that $r = 1$. Therefore, the filter in this case would consist of 3 periods of $g(t)$:
$$g_{3p}(t) \triangleq g_r(t)\big|_{r=1} = g(t - \tau) + g(t) + g(t + \tau). \qquad (43)$$
Practical implementation of the filter may be carried out using delay-lines. The relation of this scheme to previous approaches will be investigated in Section V. Perfect reconstruction is achieved, as can be seen in Fig. 8. The estimation is exact to numerical precision. 2) High Order Problems: The same simulation was carried out with L = 20 diracs. The results are shown in Fig. 9. Here again, the reconstruction is perfect even for large L. 3) Noisy Case: We now consider the performance of our method in the presence of noise. In addition, we compare our performance to the B-spline and E-spline methods proposed in [13], and to the Gaussian sampling kernel [3]. We examine 4 scenarios, in which the signal consists of L = 2, 3, 5, 20 diracs 1 . In our setup, the time-delays are equally distributed in the window [0, τ), with τ = 1, and remain constant throughout the experiments. All amplitudes are set to one. 1 Due to computational complexity of calculating the time-domain expression for high order E-splines, the functions were simulated up to order 9, which allows for L = 5 pulses. For each method, the noise is added to that method's own samples; in other words, σ n in (31) is method-dependent, and is determined by the desired SNR and the samples of the specific technique.
Hard thresholding was implemented in order to improve the spline methods, as suggested by the authors in [13]. The threshold was chosen to be 3σ n , where σ n is the standard deviation of the AWGN. For the Gaussian sampling kernel the parameter σ was optimized and took on the value of σ = 0.25, 0.28, 0.32, 0.9, respectively. The results are given in Fig. 10. For L = 2 all methods are stable, where E-splines exhibit better performance than B-splines, and Gaussian and SoS approaches demonstrate the lowest errors. As the value of L grows, the advantage of the SoS filter becomes more prominent, where for L ≥ 5, the performance of Gaussian and both spline methods deteriorate and have errors approaching the order of τ . In contrast, the SoS filter retains its performance nearly unchanged even up to L = 20, where the B-spline and Gaussian methods are unstable. The improved version of the Gaussian approach presented in [12] would not perform better in this high order case, since it fails for L > 9, as noted by the authors. A comparison of our approach to previous methods will be detailed in Section V. IV. INFINITE STREAM OF PULSES We now consider the case of an infinite stream of pulses z(t) = l∈Z a l h(t − t l ), t l , a l ∈ R.(44) We assume that the infinite signal has a bursty character, i.e., the signal has two distinct phases: a) bursts of maximal duration τ containing at most L pulses, and b) quiet phases between bursts. For the sake of clarity we begin with the case h(t) = δ(t). For this choice the filter g * r (−t) in (41) reduces to g * 3p (−t) of (43). Since the filter g * 3p (−t) has compact support 3τ we are assured that the current burst cannot influence samples taken 3τ /2 seconds before or after it. In the finite case we have confined ourselves to sampling within the interval [0, τ ). Similarly, here, we assume that the samples are taken during the burst duration. 
Therefore, if the minimal spacing between any two consecutive bursts is 3τ /2, then we are guaranteed that each sample taken during the burst is influenced by one burst only, as depicted in Fig. 11. Consequently, the infinite problem can be reduced to a sequential solution of local distinct finite order problems, as in Section III. Here the compact support of our filter comes into play, allowing us to apply local reconstruction methods. τ 1st burst 2nd burst g 3p (t) filter support = 3τ t −0.5τ 1.5τ 2.5τ 3.5τ Fig. 11. Bursty signal z(t). Spacing of 3τ /2 between bursts ensures that the influence of the current burst ends before taking the samples of the next burst. This is due to the finite support, 3τ of the sampling kernel g * 3p (−t). In the above argument we assume we know the locations of the bursts, since we must acquire samples from within the burst duration. Samples outside the burst duration are contaminated by energy from adjacent bursts. Nonetheless, knowledge of burst locations is available in many applications such as synchronized communication where the receiver knows when to expect the bursts, or in radar or imaging scenarios where the transmitter is itself the receiver. We now state this result in a theorem. Theorem 4. Consider a signal z(t) which is a stream of bursts consisting of delayed and weighted diracs. The maximal burst duration is τ , and the maximal number of pulses within each burst is L. Then, the samples given by c[n] = g 3p (t − nT ), z(t) , n ∈ Z where g 3p (t) is defined by (43), are a sufficient characterization of z(t) as long as the spacing between two adjacent bursts is greater than 3τ /2, and the burst locations are known. Extending this result to a general pulse h(t) is quite straightforward, as long as h(t) is compactly supported with support R, and we filter with g * r (−t) as defined in (41) with the appropriate r from (39). 
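The burst-spacing condition of Theorem 4 is easy to check programmatically. In the sketch below the burst intervals are hypothetical, and the spacing is measured from the end of one burst to the start of the next (one natural reading of "spacing between bursts").

```python
# Check the spacing condition of Theorem 4 for h(t) = delta(t): adjacent
# bursts must be separated by more than 3*tau/2 so that the support-3*tau
# filter g_3p never mixes samples from two bursts. Burst intervals are
# hypothetical (start, end) pairs in seconds, sorted in time.
def spacing_ok(bursts, tau):
    return all(nxt[0] - cur[1] > 1.5 * tau
               for cur, nxt in zip(bursts, bursts[1:]))

tau = 1.0
print(spacing_ok([(0.0, 0.8), (2.5, 3.3), (5.2, 6.0)], tau))  # True
print(spacing_ok([(0.0, 0.8), (2.0, 2.9)], tau))              # False
```

When the check passes, each burst can be handed to the finite-stream recovery of Section III as an independent local problem.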
If we can choose a set K of consecutive indices for which H(2πk/τ) ≠ 0, ∀k ∈ K, and we are guaranteed that the minimal spacing between two adjacent bursts is greater than ((2r + 1)τ + R)/2, then the above theorem holds.

A. Periodic Case

The work in [3] was the first to address efficient sampling of pulse streams, e.g., Diracs. Their approach for solving the periodic case was ideal lowpass filtering, followed by uniform sampling, which made it possible to obtain the Fourier series coefficients of the signal. These coefficients are then processed by the annihilating filter to obtain the unknown time-delays and amplitudes. In Section II, we derived a general condition on the sampling kernel (11), under which recovery is guaranteed. The lowpass filter of [3] is a special case of this result. The noise robustness of both the lowpass approach and our more general method is high as long as the pulses are well separated, since reconstruction from Fourier series coefficients is stable in this case. Both approaches achieve the minimal number of samples. The lowpass filter is bandlimited and consequently has infinite time-support. Therefore, this sampling scheme is unsuitable for finite and infinite streams of pulses. The SoS class introduced in Section II consists of compactly supported filters, which is crucial to enable the extension of our results to finite and infinite streams of pulses. A comparison between the two methods is shown in Table I.

B. Finite Pulse Stream

The authors of [3] proposed a Gaussian sampling kernel for sampling finite streams of Diracs. The Gaussian method is numerically unstable, as mentioned in [12], since the samples are multiplied by a rapidly diverging or decaying exponent. Therefore, this approach is unsuitable for L ≥ 6. Modifications proposed in [12] exhibit better performance and stability. However, these methods require substantial oversampling, and still exhibit instability for L > 9.
In [13] the family of polynomial reproducing kernels was introduced as sampling filters for the model (32); B-splines were proposed as a specific example. The B-spline sampling filter enables obtaining moments of the signal, rather than Fourier coefficients. The moments are then processed with the same annihilating filter used in previous methods. However, as mentioned by the authors, this approach is unstable for high values of L. This is because, in contrast to the estimation of Fourier coefficients, estimating high order moments is unstable, since unstable weighting of the samples is carried out during the process. Another general family introduced in [13] for the finite model is the class of exponential reproducing kernels. As a specific case, the authors propose E-spline sampling kernels. The CTFT of an E-spline of order N + 1 is described by

\hat{\beta}_\alpha(\omega) = \prod_{n=0}^{N} \frac{1 - e^{\alpha_n - j\omega}}{j\omega - \alpha_n},   (45)

where α = (α_0, α_1, ..., α_N) are free parameters. In order to use E-splines as sampling kernels for pulse streams, the authors propose a specific structure on the α's: α_n = α_0 + nλ. Choosing exponents having a non-vanishing real part results in unstable weighting, as in the B-spline case. However, choosing the special case of pure imaginary exponents in the E-splines, already suggested by the authors, results in a reconstruction method based on Fourier coefficients, which demonstrates an interesting relation to our method. The Fourier coefficients are obtained by applying a matrix consisting of the exponential spanning coefficients {c_{m,n}} (see [13]), instead of our Vandermonde matrix relation (14). With this specific choice of parameters the E-spline function satisfies (11). Interestingly, with a proper choice of spanning coefficients, it can be shown that the SoS class can reproduce exponentials with frequencies {2πk/τ}_{k∈K}, and therefore satisfies the general exponential reproduction property of [13].
However, the SoS filter gives rise to a new sampling scheme which has substantial advantages over existing methods, including E-splines. The first advantage is in the presence of noise, where both methods have the following structure:

y = Ax + w,   (46)

where w is the noise vector. While the Fourier coefficients vector x is common to both approaches, the linear transformation A is method dependent, and therefore the sample vector y is different. In our approach with g(t) of (24), A is the DFT matrix, which for any order L has a condition number of 1. In the case of E-splines, however, the transformation matrix A consists of the E-spline exponential spanning coefficients, and has a much higher condition number, e.g., above 100 for L = 5. Consequently, some Fourier coefficients will have much higher noise levels than others. This scenario of high variance between the noise levels of the samples is known to deteriorate the performance of spectral analysis methods [11], the annihilating filter being one of them. This explains why our simulations show that the SoS filter outperforms the E-spline approach in the presence of noise. When the E-spline coefficients α are pure imaginary, it can easily be shown that (45) becomes a multiplication of shifted sincs. This is in contrast to the SoS filter, which consists of a sum of sincs in the frequency domain. Since multiplication in the frequency domain translates to convolution in the time domain, it is clear that the support of the E-spline grows with its order, and in turn with the order of the problem L. In contrast, the support of the SoS filter remains unchanged. This observation becomes important when examining the infinite case. The constraint on the signal in [13] is that no more than L pulses lie in any interval of length LPT, P being the support of the filter, and T the sampling period. Since P grows linearly with L, the constraint cast on the infinite stream becomes more stringent, quadratically with L.
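The conditioning gap described above can be illustrated numerically. The DFT part is exact; the Vandermonde matrix with clustered nodes is only a hypothetical stand-in for an ill-conditioned coefficient matrix (the exact E-spline matrix of [13] differs):

```python
import numpy as np

M = 11                               # e.g., M = 2L + 1 Fourier coefficients, L = 5
F = np.fft.fft(np.eye(M))            # (unnormalized) DFT matrix
print(np.linalg.cond(F))             # ~1: a scaled unitary matrix, perfectly conditioned

# hypothetical stand-in for an ill-conditioned coefficient matrix: a complex
# Vandermonde matrix with closely clustered nodes on the unit circle
nodes = np.exp(1j * np.linspace(0.1, 0.5, M))
A = np.vander(nodes, M, increasing=True)
print(np.linalg.cond(A))             # many orders of magnitude larger than 1
```

A condition number of 1 means the noise power is spread evenly across the recovered Fourier coefficients, which is exactly the property the annihilating filter benefits from.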
On the other hand, the constraint on the infinite stream using the SoS filter is independent of L. We showed in simulations that typically for L ≥ 5 the estimation errors, using both B-spline and E-spline sampling kernels, become very large. In contrast, our approach leads to stable reconstruction even for very high values of L, e.g., L = 100. In addition, even for low values of L we showed in simulations that although the E-spline method improves upon B-splines, the SoS reconstruction method outperforms both spline approaches. A comparison is described in Table II.

C. Infinite Streams

The work in [13] addressed the infinite stream case with h(t) = δ(t). They proposed filtering the signal with a polynomial reproducing sampling kernel prior to sampling. If the signal has at most L Diracs within any interval of duration LPT, where P denotes the support of the sampling filter and T the sampling period, then the samples are a sufficient characterization of the signal. This condition allows dividing the infinite stream into a sequence of finite case problems. In our approach, the quiet phases of 1.5τ between the bursts of length τ enable the reduction to the finite case. Since the infinite solution is based on the finite one, our method is advantageous in terms of stability in high order problems and noise robustness. However, we do have an additional requirement of quiet phases between the bursts. Regarding the sampling rate, the number of degrees of freedom of the signal per unit time, also known as the rate of innovation, is ρ = 2L/2.5τ, which is the critical sampling rate. Our sampling rate is 2L/τ, and therefore we oversample by a factor of 2.5. In the same scenario, the method in [13] would require a sampling rate of LP/2.5τ, i.e., oversampling by a factor of P/2. Properties of polynomial reproducing kernels imply that P ≥ 2L; therefore for any L ≥ 3, our method exhibits more efficient sampling.
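The rate comparison above amounts to the following arithmetic (L = 5 and P = 2L are illustrative choices):

```python
# one burst of at most L pulses per 2.5*tau seconds (tau burst + 1.5*tau quiet)
L, tau = 5, 1.0
P = 2 * L                          # polynomial reproducing kernels: support P >= 2L

rho = 2 * L / (2.5 * tau)          # rate of innovation (critical sampling rate)
rate_sos = 2 * L / tau             # SoS scheme: 2L samples per burst of duration tau
rate_spline = L * P / (2.5 * tau)  # scheme of [13]

print(rate_sos / rho)              # 2.5: fixed oversampling factor, independent of L
print(rate_spline / rho)           # P/2 = 5.0: grows linearly with L
```

Since P ≥ 2L, the spline scheme's oversampling factor P/2 is at least L, so the SoS scheme's fixed factor of 2.5 wins for every L ≥ 3.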
A table comparing the various features is shown in Table III. Recent work [14] presented a low complexity method for reconstructing streams of pulses (both infinite and finite cases) consisting of Diracs. However, the basic assumption of this method is that there is at most one Dirac per sampling period. This means we must have prior knowledge of a lower limit on the spacing between two consecutive deltas in order to guarantee correct reconstruction. In some cases such a limit may not exist; even if it does, it will usually force us to sample at a much higher rate than the critical one.

VI. APPLICATION - ULTRASOUND IMAGING

An interesting application of our framework is ultrasound imaging. In ultrasonic imaging an acoustic pulse is transmitted into the scanned tissue. The pulse is reflected due to changes in acoustic impedance which occur, for example, at the boundaries between two different tissues. At the receiver, the echoes are recorded, where the time-of-arrival and power of each echo indicate the scatterer's location and strength, respectively. Accurate estimation of tissue boundaries and scatterer locations allows for reliable detection of certain illnesses, and is therefore of major clinical importance. The location of the boundaries is often more important than the power of the reflection. This stream of pulses is finite since the pulse energy decays within the tissue. We now demonstrate our method on real 1-dimensional (1D) ultrasound data. The multiple echo signal which is recorded at the receiver can be modeled as a finite stream of pulses, as in (32). The unknown time-delays correspond to the locations of the various scatterers, whereas the amplitudes correspond to their reflection coefficients. The pulse shape in this case is a Gaussian, defined in (30), due to the physical characteristics of the electro-acoustic transducer (mechanical damping).
We assume the received pulse-shape is known, either by assuming it is unchanged through propagation, through physically modeling ultrasonic wave propagation, or by prior estimation of the received pulse. Full investigation of mismatch in the pulse shape is left for future research. In our setting, a phantom consisting of uniformly spaced pins, mimicking point scatterers, was scanned by GE Healthcare's Vivid-i portable ultrasound imaging system [20], [21], using a 3S-RS probe. We use the data recorded by a single element in the probe, which is modeled as a 1D stream of pulses. The center frequency of the probe is f_c = 1.7021 MHz. The width of the transmitted Gaussian pulse in this case is σ = 3·10^{-7} sec, and the depth of imaging is R_max = 0.16 m, corresponding to a time window of τ = 2.08·10^{-4} sec. In this experiment all filtering and sampling operations are carried out digitally in simulation. The analog filter required by the sampling scheme is replaced by a lengthy Finite Impulse Response (FIR) filter. Since the sampling frequency of the element in the system is f_s = 20 MHz, more than 5 times higher than the Nyquist rate, the recorded data represents the continuous signal reliably. Consequently, digital filtering of the high-rate sampled data vector (4160 samples) followed by proper decimation mimics the original analog sampling scheme with high accuracy. The recorded signal is depicted in Fig. 12. The band-pass ultrasonic signal is demodulated to base-band, i.e., envelope-detection is performed, before being fed into the reconstruction process. We carried out our sampling and reconstruction scheme on the aforementioned data. We set L = 4, looking for the strongest 4 echoes. Since the data is corrupted by strong noise we over-sampled the signal, obtaining twice the minimal number of samples. In addition, hard-thresholding of the samples was implemented, where we set the threshold to 10 percent of the maximal value.
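A quick consistency check of the acquisition numbers above, together with the sample counts N = 17 and N = 33 reported for the experiment:

```python
f_s = 20e6                       # element sampling rate [Hz]
tau = 2.08e-4                    # imaging time window [s]
n_highrate = round(f_s * tau)    # samples at the current high rate
print(n_highrate)                # 4160

for n_fri in (17, 33):           # sample counts used in the experiment
    print(n_highrate / n_fri)    # reduction by roughly two orders of magnitude
```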
We obtained N = 17 samples by decimating the output of the lengthy FIR digital filter imitating g_{3p}^*(−t) from (43), where the coefficients {b_k} were all set to one. In Fig. 13a the reconstructed signal is depicted vs. the full demodulated signal using all 4160 samples. Clearly, the time-delays were estimated with high precision. The amplitudes were estimated as well; however, the amplitude of the second pulse has a large error, probably due to the large noise values present in its vicinity. As mentioned earlier, though, the exact locations of the scatterers are often more important than the accurate reflection coefficients. We carried out the same experiment, only now oversampling by a factor of 4, resulting in N = 33 samples. Here no hard-thresholding is required. The results are depicted in Fig. 13b, and are very similar to our previous results. In both simulations, the estimation error in the pulse location is around 0.1 mm. Current ultrasound imaging technology operates on the high-rate sampled data, e.g., f_s = 20 MHz in our setting. Since there are usually 100 different elements in a single ultrasonic probe, each sampled at a very high rate, data throughput becomes very high, and imposes high computational complexity on the system, limiting its capabilities. Therefore, there is a demand for lowering the sampling rate, which in turn will reduce the complexity of reconstruction. Exploiting the parametric point of view, our sampling scheme achieves such a rate reduction.

VII. CONCLUSIONS

We presented efficient sampling and reconstruction schemes for streams of pulses. For the case of a periodic stream of pulses, we derived a general condition on the sampling kernel which allows a single-channel uniform sampling scheme. Previous work [3] is a special case of this general result. We then proposed a class of filters, satisfying the condition, with compact support. Exploiting the compact support of the filters, we constructed a new sampling scheme for the case of a finite stream of pulses.
Simulations show this method exhibits better performance than previous techniques [3], [13], in terms of stability in high order problems and noise robustness. An extension to an infinite stream of pulses was also presented. The compact support of the filter allows for local reconstruction, and thus lowers the complexity of the problem. Finally, we demonstrated the advantage of our approach in reducing the sampling and processing rate of ultrasound imaging, by applying our techniques to real ultrasound data.

APPENDIX
PROOF OF THEOREM 2

The MSE of the optimal linear estimator of the vector x from the measurement vector y is known to be [22]

MSE = \mathrm{Tr}\{R_{xx}\} - \mathrm{Tr}\{R_{xy} R_{yy}^{-1} R_{yx}\}.   (47)

The covariance matrices in our case are

R_{xy} = R_{xx} B^* V^*,   (48)
R_{yy} = V B R_{xx} B^* V^* + \sigma^2 I,   (49)

where we used (27), and the fact that R_{ww} = \sigma^2 I since w is a white Gaussian noise vector. Under our assumptions on {t_l} and {a_l}, denoting h_k = H(2\pi k/\tau), and using (5),

(R_{xx})_{k,k'} = E\{X[k] X^*[k']\}
 = \frac{1}{\tau^2} h_k h_{k'}^* \sum_{l=1}^{L} \sum_{l'=1}^{L} E\{a_l a_{l'}^*\} e^{-j\frac{2\pi}{\tau}(k t_l - k' t_{l'})}
 = \frac{\sigma_a^2}{\tau^2} h_k h_{k'}^* \sum_{l=1}^{L} E\{e^{-j\frac{2\pi}{\tau}(k-k') t_l}\}
 = \frac{\sigma_a^2}{\tau^2} h_k h_{k'}^* \sum_{l=1}^{L} \int_0^\tau \frac{1}{\tau} e^{-j\frac{2\pi}{\tau}(k-k') t} \, dt
 = \frac{\sigma_a^2}{\tau^2} L |h_k|^2 \delta_{k,k'}.   (50)

Denoting by \tilde{H} the diagonal matrix with kth element |\tilde{h}_k|^2 = |h_k|^2 \sigma_a^2 L/\tau^2, we have

R_{xx} = \tilde{H}.   (51)

Since the first term of (47) is independent of B, minimizing the MSE with respect to B is equivalent to maximizing the second term in (47). Substituting (48), (49) and (51) into this term, the optimal B is a solution to

\max_B \; \mathrm{Tr}\left\{\tilde{H} B^* V^* \left(V B \tilde{H} B^* V^* + \sigma^2 I\right)^{-1} V B \tilde{H}\right\}, \quad \text{s.t. } \sum_{i=1}^{|K|} |b_i|^2 = 1.   (52)

Using the matrix inversion formula [23],

\left(V B \tilde{H} B^* V^* + \sigma^2 I\right)^{-1} = \frac{1}{\sigma^2}\left[I - V B \left(\sigma^2 \tilde{H}^{-1} + B^* V^* V B\right)^{-1} B^* V^*\right].   (53)

It is easy to verify from the definition of V in (13) that

(V^* V)_{ik} = \sum_{l=0}^{N-1} e^{j\frac{2\pi}{N} l(k-i)} = N \delta_{k,i}.   (54)

Therefore, the objective in (52) equals

\mathrm{Tr}\left\{\frac{N}{\sigma^2} \tilde{H} B^* \left[I - B\left(\frac{\sigma^2}{N}\tilde{H}^{-1} + B^* B\right)^{-1} B^*\right] B \tilde{H}\right\} = \sum_{i=1}^{|K|} |\tilde{h}_i|^2 \left(1 - \frac{\sigma^2/N}{|b_i|^2 |\tilde{h}_i|^2 + \sigma^2/N}\right),   (55)

where we used the fact that B and \tilde{H} are diagonal.
We can now find the optimal B by maximizing (55), which is equivalent to minimizing the negative term:

\min_B \sum_{i=1}^{|K|} \frac{|\tilde{h}_i|^2}{1 + |b_i|^2 |\tilde{h}_i|^2 N/\sigma^2}, \quad \text{s.t. } \sum_{i=1}^{|K|} |b_i|^2 = 1.   (56)

Denoting β_i = |b_i|^2, (56) becomes a convex optimization problem:

\min_{\beta_i} \sum_{i=1}^{|K|} \frac{|\tilde{h}_i|^2}{1 + \beta_i |\tilde{h}_i|^2 N/\sigma^2}   (57)

subject to

\beta_i \geq 0,   (58)
\sum_{i=1}^{|K|} \beta_i = 1.   (59)

To solve (57) subject to (58) and (59), we form the Lagrangian

L = \sum_{i=1}^{|K|} \frac{|\tilde{h}_i|^2}{1 + \beta_i |\tilde{h}_i|^2 N/\sigma^2} + \lambda\left(\sum_{i=1}^{|K|} \beta_i - 1\right) - \sum_{i=1}^{|K|} \mu_i \beta_i,   (60)

where from the Karush-Kuhn-Tucker (KKT) conditions [24], μ_i ≥ 0 and μ_i β_i = 0. Differentiating (60) with respect to β_i and equating to 0,

\frac{|\tilde{h}_i|^4 N/\sigma^2}{\left(1 + \beta_i |\tilde{h}_i|^2 N/\sigma^2\right)^2} + \mu_i = \lambda,   (61)

so that λ > 0, since |\tilde{h}_i| > 0 by construction of \tilde{H} (see Theorem 1). If λ > |\tilde{h}_i|^4 N/\sigma^2 then μ_i > 0, and therefore β_i = 0 from the KKT conditions. If λ ≤ |\tilde{h}_i|^4 N/\sigma^2 then from (61) μ_i = 0 and

\beta_i = \frac{\sigma^2}{N}\left(\sqrt{\frac{N}{\lambda\sigma^2}} - \frac{1}{|\tilde{h}_i|^2}\right).   (62)

The optimal β_i is therefore

\beta_i = \begin{cases} \frac{\sigma^2}{N}\left(\sqrt{\frac{N}{\lambda\sigma^2}} - \frac{1}{|\tilde{h}_i|^2}\right) & \lambda \leq |\tilde{h}_i|^4 N/\sigma^2 \\ 0 & \lambda > |\tilde{h}_i|^4 N/\sigma^2, \end{cases}   (63)

where λ > 0 is chosen to satisfy (59). Note that from (63), if β_i = 0 and j < i, then β_j = 0 as well, since the |\tilde{h}_i| are in increasing order. We now show that there is a unique λ that satisfies (59). Define the function

G(\lambda) = \sum_{i=1}^{|K|} \beta_i(\lambda) - 1,   (64)

so that λ is a root of G(λ). Since the |\tilde{h}_i|'s are in increasing order, |\tilde{h}_{|K|}| = \max_i |\tilde{h}_i|. It is clear from (63) that G(λ) is monotonically decreasing for 0 < λ ≤ |\tilde{h}_{|K|}|^4 N/\sigma^2. In addition, G(λ) = −1 for λ > |\tilde{h}_{|K|}|^4 N/\sigma^2, and G(λ) > 0 for λ → 0. Thus, there is a unique λ for which (59) is satisfied. Substituting (63) into (59), and denoting by m the smallest index for which λ ≤ |\tilde{h}_{m+1}|^4 N/\sigma^2, we have

\sqrt{\lambda} = \frac{(|K| - m)\sqrt{N/\sigma^2}}{N/\sigma^2 + \sum_{i=m+1}^{|K|} 1/|\tilde{h}_i|^2},   (65)

completing the proof of the theorem.
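The waterfilling solution (63), with λ chosen by bisection on the monotonically decreasing G(λ) of (64), can be sketched as follows; the values of |h̃_i| (sorted increasingly), N and σ² are hypothetical:

```python
import numpy as np

h = np.array([0.3, 0.5, 0.8, 1.0, 1.2])   # |h~_i|, sorted increasingly (hypothetical)
N, sigma2 = 16, 0.1

def beta(lam):
    # the waterfilling rule (63)
    b = (sigma2 / N) * (np.sqrt(N / (lam * sigma2)) - 1.0 / h**2)
    return np.where(lam <= h**4 * N / sigma2, np.maximum(b, 0.0), 0.0)

# G(lam) = sum_i beta_i(lam) - 1 is monotonically decreasing; bisect for its root
lo, hi = 1e-12, float(np.max(h**4) * N / sigma2)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if beta(mid).sum() > 1.0 else (lo, mid)

b_opt = beta(0.5 * (lo + hi))
print(b_opt, b_opt.sum())   # nonnegative weights, more power on larger |h~_i|, summing to 1
```

Consistent with the remark after (63), the zero weights (if any) occur at the smallest |h̃_i|, and the resulting β_i are nondecreasing in i.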
Signals comprised of a stream of short pulses appear in many applications including bioimaging and radar. The recent finite rate of innovation framework has paved the way to low rate sampling of such pulses by noticing that only a small number of parameters per unit time are needed to fully describe these signals. Unfortunately, for high rates of innovation, existing sampling schemes are numerically unstable. In this paper we propose a general sampling approach which leads to stable recovery even in the presence of many pulses. We begin by deriving a condition on the sampling kernel which allows perfect reconstruction of periodic streams from the minimal number of samples. We then design a compactly supported class of filters satisfying this condition. The periodic solution is extended to finite and infinite streams and is shown to be numerically stable even for a large number of pulses. High noise robustness is also demonstrated when the delays are sufficiently separated. Finally, we process ultrasound imaging data using our techniques and show that substantial rate reduction with respect to traditional ultrasound sampling schemes can be achieved.
Innovation Rate Sampling of Pulse Streams with Application to Ultrasound Imaging
Sampling is the process of representing a continuous-time signal by discrete-time coefficients, while retaining the important signal features. The well-known Shannon-Nyquist theorem states that the minimal sampling rate required for perfect reconstruction of bandlimited signals is twice the maximal frequency. This result has since been generalized to minimal rate sampling schemes for signals lying in arbitrary subspaces [1], [2]. Recently, there has been growing interest in sampling of signals consisting of a stream of short pulses, where the pulse shape is known. Such signals have a finite number of degrees of freedom per unit time, also known as the Finite Rate of Innovation (FRI) property [3]. This interest is motivated by applications such as digital processing of neuronal signals, bio-imaging, image processing and ultrawideband (UWB) communications, where such signals are present in abundance. Our work is motivated by the possible application of this model in ultrasound imaging, where echoes of the transmit pulse are reflected off scatterers within the tissue, and form a stream-of-pulses signal at the receiver. The time-delays and amplitudes of the echoes indicate the position and strength of the various scatterers, respectively. Therefore, determining these parameters from low rate samples of the received signal is an important problem. Reducing the rate allows more efficient processing, which can translate to power and size reduction of the ultrasound imaging system. Our goal is to design a minimal rate single-channel sampling and reconstruction scheme for pulse streams that is stable even in the presence of many pulses. Since the set of FRI signals does not form a subspace, classic subspace schemes cannot be directly used to design low-rate sampling schemes. Mathematically, such FRI signals conform with a broader model of signals lying in a union of subspaces [4]-[9].
Although the minimal sampling rate required for such settings has been derived, no generic sampling scheme exists for the general problem. Nonetheless, some special cases have been treated in previous work, including streams of pulses. A stream of pulses can be viewed as a parametric signal, uniquely defined by the time-delays of the pulses and their amplitudes. Efficient sampling of periodic impulse streams, having L impulses in each period, was proposed in [3], [10]. The heart of the solution is to obtain a set of Fourier series coefficients, which converts the problem of determining the time-delays and amplitudes into that of finding the frequencies and amplitudes of a sum of sinusoids. The latter is a standard problem in spectral analysis [11] which can be solved using conventional methods, such as the annihilating filter approach, as long as the number of samples is at least 2L. This result is intuitive since there are 2L degrees of freedom in each period: L time-delays and L amplitudes. Periodic streams of pulses are mathematically convenient to analyze, but not very practical. In contrast, finite streams of pulses are prevalent in applications such as ultrasound imaging. The first treatment of finite Dirac streams appears in [3], in which a Gaussian sampling kernel was proposed. The time-delays and amplitudes are then estimated from the Gaussian tails. This method and its improvement [12] are numerically unstable for high rates of innovation, since they rely on the Gaussian tails, which take on small values. The work in [13] introduced a general family of polynomial and exponential reproducing kernels, which can be used to solve FRI problems. Specifically, B-spline and E-spline sampling kernels which satisfy the reproduction condition are proposed. This method treats streams of Diracs, differentiated Diracs, and short pulses with compact support. However, the proposed sampling filters result in poor reconstruction for large L.
To the best of our knowledge, a numerically stable sampling and reconstruction scheme for high order problems has not yet been reported. Infinite streams of pulses arise in applications such as UWB communications, where the communicated data changes frequently. Using spline filters [13], and under certain limitations on the signal, the infinite stream can be divided into a sequence of separate finite problems. The individual finite cases may be treated using methods for the finite setting, at the expense of a sampling rate above the critical one, and suffer from the same instability issues. In addition, the constraints cast on the signal become more and more stringent as the number of pulses per unit time grows. In a recent work [14] the authors propose a sampling and reconstruction scheme for L = 1; our interest here, however, is in high values of L. Another related work [7] proposes a semi-periodic model, where the pulse time-delays do not change from period to period, but the amplitudes vary. This is a hybrid case in which the number of degrees of freedom in the time-delays is finite, but there is an infinite number of degrees of freedom in the amplitudes. Therefore, the proposed recovery scheme generally requires an infinite number of samples. This differs from the periodic and finite cases we discuss in this paper, which have a finite number of degrees of freedom and, consequently, require only a finite number of samples. In this paper we study sampling of signals consisting of a stream of pulses, covering three different cases: periodic, finite and infinite streams of pulses. The criteria we consider for designing such systems are: a) minimal sampling rate which allows perfect reconstruction, b) numerical stability (with sufficiently separated time delays), and c) minimal restrictions on the number of pulses per sampling period. We begin by treating periodic pulse streams.
For this setting, we develop a general sampling scheme for arbitrary pulse shapes which makes it possible to determine the times and amplitudes of the pulses from a minimal number of samples. As we show, previous work [3] is a special case of our extended results. In contrast to the infinite time-support of the filters in [3], we develop a compactly supported class of filters which satisfy our mathematical condition. This class of filters consists of a sum of sinc functions in the frequency domain; we therefore refer to such functions as Sum of Sincs (SoS). To the best of our knowledge, this is the first class of finite support filters that solve the periodic case. As we discuss in detail in Section V, these filters are related to the exponential reproducing kernels introduced in [13]. The compact support of the SoS filters is the key to extending the periodic solution to the finite stream case. Generalizing the SoS class, we design a sampling and reconstruction scheme which perfectly reconstructs a finite stream of pulses from a minimal number of samples, as long as the pulse shape has compact support. Our reconstruction is numerically stable both for small values of L and for a large number of pulses, e.g., L = 100. In contrast, Gaussian sampling filters [3], [12] are unstable for L > 9, and we show in simulations that B-splines and E-splines [13] exhibit large estimation errors for L ≥ 5. In addition, we demonstrate substantial improvement in noise robustness even for low values of L. Our advantage stems from the fact that we propose compactly supported filters on the one hand, while staying within the regime of Fourier coefficient reconstruction on the other. Extending our results to the infinite setting, we consider an infinite stream consisting of pulse bursts, where each burst contains a large number of pulses. The stability of our method makes it possible to reconstruct even a large number of closely spaced pulses, which cannot be treated using existing solutions [13].
In addition, the constraints cast on the structure of the signal are independent of L (the number of pulses in each burst), in contrast to previous work, and therefore similar sampling schemes may be used for different values of L. Finally, we show that our sampling scheme requires a lower sampling rate for L ≥ 3. As an application, we demonstrate our sampling scheme on real ultrasound imaging data acquired by GE Healthcare's ultrasound system. We obtain high accuracy estimation while reducing the number of samples by two orders of magnitude in comparison with current imaging techniques. The remainder of the paper is organized as follows. In Section II we present the periodic signal model, and derive a general sampling scheme. The SoS class is then developed and demonstrated via simulations. The extension to the finite case is presented in Section III, followed by simulations showing the advantages of our method in high order problems and noisy settings. In Section IV, we treat infinite streams of pulses. Section V explores the relationship of our work to previous methods. Finally, in Section VI, we demonstrate our algorithm on real ultrasound imaging data.

II. PERIODIC STREAM OF PULSES

A. Problem Formulation

Throughout the paper we denote matrices and vectors by bold font, with lowercase letters corresponding to vectors and uppercase letters to matrices. The nth element of a vector a is written as a_n, and A_{ij} denotes the ijth element of a matrix A. Superscripts (·)^*, (·)^T and (·)^H represent complex conjugation, transposition and conjugate transposition, respectively. The Moore-Penrose pseudo-inverse of a matrix A is written as A^†. The continuous-time Fourier transform (CTFT) of a continuous-time signal x(t) ∈ L_2 is defined by

X(\omega) = \int_{-\infty}^{\infty} x(t) e^{-j\omega t} \, dt,

and

\langle x(t), y(t) \rangle = \int_{-\infty}^{\infty} x^*(t) y(t) \, dt   (1)

denotes the inner product between two L_2 signals.
Consider a τ-periodic stream of pulses, defined as

x(t) = \sum_{m \in \mathbb{Z}} \sum_{l=1}^{L} a_l h(t - t_l - m\tau),   (2)

where h(t) is a known pulse shape, τ is the known period, and {t_l, a_l}_{l=1}^{L}, t_l ∈ [0, τ), a_l ∈ ℂ, l = 1, ..., L, are the unknown delays and amplitudes. Our goal is to sample x(t) and reconstruct it from a minimal number of samples. Since the signal has 2L degrees of freedom, we expect the minimal number of samples to be 2L. We are primarily interested in pulses which have small time-support. Direct uniform sampling of 2L samples of the signal will result in many zero samples, since the probability of a sample hitting a pulse is very low. Therefore, we must construct a more sophisticated sampling scheme. Define the periodic continuation of h(t) as f(t) = \sum_{m \in \mathbb{Z}} h(t - m\tau). Using Poisson's summation formula [15], f(t) may be written as

f(t) = \frac{1}{\tau} \sum_{k \in \mathbb{Z}} H\!\left(\frac{2\pi k}{\tau}\right) e^{j 2\pi k t/\tau},   (3)

where H(ω) denotes the CTFT of the pulse h(t). Substituting (3) into (2) we obtain

x(t) = \sum_{l=1}^{L} a_l f(t - t_l) = \sum_{k \in \mathbb{Z}} \left(\frac{1}{\tau} H\!\left(\frac{2\pi k}{\tau}\right) \sum_{l=1}^{L} a_l e^{-j 2\pi k t_l/\tau}\right) e^{j 2\pi k t/\tau} = \sum_{k \in \mathbb{Z}} X[k] e^{j 2\pi k t/\tau},   (4)

where we denoted

X[k] = \frac{1}{\tau} H\!\left(\frac{2\pi k}{\tau}\right) \sum_{l=1}^{L} a_l e^{-j 2\pi k t_l/\tau}.   (5)

The expansion in (4) is the Fourier series representation of the τ-periodic signal x(t), with Fourier coefficients given by (5). Following [3], we now show that once 2L or more Fourier coefficients of x(t) are known, we may use conventional tools from spectral analysis to determine the unknowns {t_l, a_l}_{l=1}^{L}. The method by which the Fourier coefficients are obtained will be presented in subsequent sections. Define a set K of M consecutive indices such that H(2πk/τ) ≠ 0, ∀k ∈ K. We assume such a set exists, which is usually the case for short time-support pulses h(t). Denote by H the M × M diagonal matrix with kth entry (1/τ)H(2πk/τ), and by V(t) the M × L matrix with klth element e^{-j 2πk t_l/τ}, where t = {t_1, ..., t_L} is the vector of the unknown delays.
In addition denote by a the length-L vector whose lth element is a_l, and by x the length-M vector whose kth element is X[k]. We may then write (5) in matrix form as

x = HV(t)a.    (6)

Since H is invertible by construction, we define y = H^{−1}x, which satisfies

y = V(t)a.    (7)

The matrix V is a Vandermonde matrix and therefore has full column rank [11], [16] as long as M ≥ L and the time-delays are distinct, i.e., t_i ≠ t_j for all i ≠ j. Writing the kth element of the vector y in (7) explicitly:

y_k = Σ_{l=1}^{L} a_l e^{−j2πkt_l/τ}.    (8)

Evidently, given the vector x, (7) is a standard problem of finding the frequencies and amplitudes of a sum of L complex exponentials (see [11] for a review of this topic). This problem may be solved as long as |K| = M ≥ 2L. The annihilating filter approach used extensively by Vetterli et al. [3], [10] is one way of recovering the frequencies, and is thoroughly described in the literature [3], [10], [11]. This method can solve the problem using the critical number of samples M = 2L, as opposed to other techniques such as MUSIC [17], [18] and ESPRIT [19] which require oversampling. Since we are interested in minimal-rate sampling, we use the annihilating filter throughout the paper.

B. Obtaining The Fourier Series Coefficients

As we have seen, given the vector of M ≥ 2L Fourier series coefficients x, we may use standard tools from spectral analysis to determine the set {t_l, a_l}_{l=1}^{L}. In practice, however, the signal is sampled in the time domain, and therefore we do not have direct access to samples of x. Our goal is to design a single-channel sampling scheme which allows us to determine x from time-domain samples. In contrast to previous work [3], [10], which focused on a low-pass sampling filter, in this section we derive a general condition on the sampling kernel allowing us to obtain the vector x.
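The spectral-analysis step referenced above can be made concrete. The following sketch is our own illustration (function and variable names are not from the paper): it recovers the delays via an annihilating filter obtained as a null vector of a small Toeplitz system, followed by polynomial root-finding, and the amplitudes via least squares on the Vandermonde system (7).

```python
import numpy as np

def annihilating_filter_recovery(y, L, tau):
    """Recover {t_l, a_l} from M >= 2L consecutive coefficients
    y[k] = sum_l a_l * exp(-2j*pi*k*t_l/tau), k = 0..M-1, as in eq. (7)."""
    M = len(y)
    # Rows k = L..M-1 of the convolution sum_{i=0}^{L} A[i]*y[k-i] = 0
    T = np.array([y[k - np.arange(L + 1)] for k in range(L, M)])
    # The annihilating filter A is a right null vector of T
    _, _, Vh = np.linalg.svd(T)
    A = Vh[-1].conj()
    u = np.roots(A)                               # roots u_l = exp(-2j*pi*t_l/tau)
    t = np.mod(-np.angle(u) * tau / (2 * np.pi), tau)
    # Amplitudes from the Vandermonde system y = V(t) a
    V = np.exp(-2j * np.pi * np.outer(np.arange(M), t) / tau)
    a = np.linalg.lstsq(V, y, rcond=None)[0].real
    order = np.argsort(t)
    return t[order], a[order]

# Critical number of coefficients: M = 2L = 4 for L = 2 pulses
tau, t_true, a_true = 1.0, np.array([0.2, 0.7]), np.array([1.0, 0.5])
y = np.exp(-2j * np.pi * np.outer(np.arange(4), t_true) / tau) @ a_true
t_hat, a_hat = annihilating_filter_recovery(y, 2, tau)
```

With noiseless coefficients the recovery is exact to numerical precision, matching the critical-sampling property of the annihilating filter noted in the text.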
For the sake of clarity we confine ourselves to uniform sampling; the results extend in a straightforward manner to nonuniform sampling as well. Consider sampling the signal x(t) uniformly with sampling kernel s^*(−t) and sampling period T, as depicted in Fig. 1. The samples are given by

c[n] = ∫_{−∞}^{∞} x(t) s^*(t − nT) dt = ⟨s(t − nT), x(t)⟩.    (9)

Substituting (4) into (9) we have

c[n] = Σ_{k∈Z} X[k] ∫_{−∞}^{∞} e^{j2πkt/τ} s^*(t − nT) dt = Σ_{k∈Z} X[k] e^{j2πknT/τ} ∫_{−∞}^{∞} e^{j2πkt/τ} s^*(t) dt = Σ_{k∈Z} X[k] e^{j2πknT/τ} S^*(2πk/τ),    (10)

where S(ω) is the CTFT of s(t). Choosing any filter s(t) which satisfies

S(ω) = 0 for ω = 2πk/τ, k ∉ K; S(ω) nonzero for ω = 2πk/τ, k ∈ K; S(ω) arbitrary otherwise,    (11)

we can rewrite (10) as

c[n] = Σ_{k∈K} X[k] e^{j2πknT/τ} S^*(2πk/τ).    (12)

In contrast to (10), the sum in (12) is finite. Note that (11) implies that any real filter meeting this condition will satisfy k ∈ K ⇒ −k ∈ K, and in addition S(2πk/τ) = S^*(−2πk/τ), due to the conjugate symmetry of real filters. Defining the M × M diagonal matrix S whose kth diagonal entry is S^*(2πk/τ) for all k ∈ K, and the length-N vector c whose nth element is c[n], we may write (12) as

c = V(−t_s) S x,    (13)

where t_s = {nT : n = 0…N − 1}, and V is defined as in (6) with the parameter −t_s and dimensions N × M. The matrix S is invertible by construction. Since V is Vandermonde, it is left invertible as long as N ≥ M. Therefore,

x = S^{−1} V^†(−t_s) c.    (14)

In the special case where N = M and T = τ/N, the recovery in (14) becomes

x = S^{−1} DFT{c},    (15)

i.e., the vector x is obtained by applying the Discrete Fourier Transform (DFT) to the sample vector, followed by a correction matrix related to the sampling filter. The idea behind this sampling scheme is that each sample is actually a linear combination of the elements of x. The sampling kernel s(t) is designed to pass the coefficients X[k], k ∈ K, while suppressing all other coefficients X[k], k ∉ K.
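A minimal numerical sketch of (13)-(14) follows (our code; the all-ones choice for S^*(2πk/τ) is an assumption corresponding to a kernel that passes the indices in K unweighted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions: M Fourier indices in K, and N = M uniform samples over one period
tau, N = 1.0, 5
K = np.arange(-2, 3)
ts = np.arange(N) * tau / N                      # t_s = {nT}, with T = tau/N

# S*(2*pi*k/tau): all-ones here (an assumed kernel passing K unweighted)
Sdiag = np.ones(len(K), dtype=complex)

x_true = rng.standard_normal(len(K)) + 1j * rng.standard_normal(len(K))

# Samples per eq. (13): c = V(-t_s) S x, with V(-t_s)[n, k] = exp(+2j*pi*k*nT/tau)
V = np.exp(2j * np.pi * np.outer(ts, K) / tau)
c = V @ (Sdiag * x_true)

# Recovery per eq. (14): x = S^{-1} V^dagger(-t_s) c
x_rec = (np.linalg.pinv(V) @ c) / Sdiag
```

Since N = M here, V is square and invertible, so the pseudo-inverse reduces to an ordinary inverse and the recovery corresponds to the DFT form in (15).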
This is exactly what the condition in (11) means. This sampling scheme guarantees that the sample combinations are linearly independent of one another. Therefore, the linear system of equations in (13) has full column rank, which allows us to solve for the vector x. We summarize this result in the following theorem.

Theorem 1. Consider the τ-periodic stream of pulses of order L: x(t) = Σ_{m∈Z} Σ_{l=1}^{L} a_l h(t − t_l − mτ). Choose a set K of consecutive indices for which H(2πk/τ) ≠ 0, ∀k ∈ K. Then the samples c[n] = ⟨s(t − nT), x(t)⟩, n = 0…N − 1, uniquely determine the signal x(t) for any s(t) satisfying condition (11), as long as N ≥ |K| ≥ 2L.

In order to extend Theorem 1 to nonuniform sampling, we only need to substitute the nonuniform sampling times in the vector t_s in (14). Theorem 1 presents a general single-channel sampling scheme. One special case of this framework is the one proposed by Vetterli et al. in [3], in which s^*(−t) = B sinc(−Bt), where B = M/τ and N ≥ M ≥ 2L. In this case s(t) is an ideal low-pass filter of bandwidth B with

S(ω) = (1/√(2π)) rect(ω/(2πB)).    (16)

Clearly, (16) satisfies the general condition in (11) with K = {−⌊M/2⌋, …, ⌊M/2⌋} and S(2πk/τ) = 1/√(2π), ∀k ∈ K. Note that since this filter is real valued it must satisfy k ∈ K ⇒ −k ∈ K, i.e., the indices come in pairs except for k = 0. Since k = 0 is part of the set K, in this case the cardinality M = |K| must be odd, so that N ≥ M ≥ 2L + 1 samples are needed, rather than the minimal rate N ≥ 2L. The ideal low-pass filter is bandlimited, and therefore has infinite time-support, so that it cannot be extended to finite and infinite streams of pulses. In the next section we propose a class of non-bandlimited sampling kernels, which exploit the additional degrees of freedom in condition (11), and have compact support in the time domain. The compact support allows us to extend this class to finite and infinite streams, as we show in Sections III and IV, respectively.

C. Compactly Supported Sampling Kernels

Consider the following SoS class, which consists of a sum of sincs in the frequency domain:

G(ω) = (τ/√(2π)) Σ_{k∈K} b_k sinc(ω/(2π/τ) − k),    (17)

where b_k ≠ 0, k ∈ K. The filter in (17) is real valued if and only if k ∈ K ⇒ −k ∈ K and b_k = b^*_{−k} for all k ∈ K. Since each sinc in the sum satisfies

sinc(ω/(2π/τ) − k) = 1 if ω = 2πk′/τ with k′ = k, and 0 if ω = 2πk′/τ with k′ ≠ k,    (18)

the filter G(ω) satisfies (11) by construction. Switching to the time domain,

g(t) = rect(t/τ) Σ_{k∈K} b_k e^{j2πkt/τ},    (19)

which is clearly a time-compact filter with support τ. The SoS class in (19) may be extended to

G(ω) = (τ/√(2π)) Σ_{k∈K} b_k φ(ω/(2π/τ) − k),    (20)

where b_k ≠ 0, k ∈ K, and φ(ω) is any function satisfying

φ(ω) = 1 for ω = 0; φ(ω) = 0 for |ω| ∈ N; φ(ω) arbitrary otherwise.    (21)

This more general structure allows for smooth versions of the rect function, which is important when practically implementing analog filters. The function g(t) represents a class of filters determined by the parameters {b_k}_{k∈K}. These degrees of freedom offer a filter design tool in which the free parameters {b_k}_{k∈K} may be optimized for different goals, e.g., parameters which result in a feasible analog filter. In Theorem 2 below, we show how to choose {b_k} to minimize the mean-squared error (MSE) in the presence of noise. Determining the parameters {b_k}_{k∈K} may also be viewed from a more empirical point of view. The impulse response of any analog filter having support τ may be written in terms of a windowed Fourier series as

Φ(t) = rect(t/τ) Σ_{k∈Z} β_k e^{j2πkt/τ}.    (22)

Confining ourselves to filters which satisfy β_k ≠ 0, k ∈ K, we may truncate the series and choose

b_k = β_k for k ∈ K, and b_k = 0 for k ∉ K,    (23)

as the parameters of g(t) in (19). With this choice, g(t) can be viewed as an approximation to Φ(t).
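The time-domain kernel (19) is straightforward to evaluate. The sketch below (our code; the all-ones coefficient choice is an assumed example) verifies the compact support and the standard fact that for b_k ≡ 1 the exponential sum collapses to a ratio of sines (the Dirichlet kernel).

```python
import numpy as np

def sos_kernel(t, b, K, tau):
    """g(t) = rect(t/tau) * sum_{k in K} b_k * exp(2j*pi*k*t/tau), eq. (19)."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    rect = (np.abs(t) < tau / 2).astype(float)
    return rect * (np.exp(2j * np.pi * np.outer(t, K) / tau) @ b)

p, tau = 10, 1.0
K = np.arange(-p, p + 1)
b = np.ones(len(K))                    # all-ones coefficients (assumed example)

# Inside the support, g(t) equals the Dirichlet kernel sum:
# sum_{k=-p}^{p} exp(2j*pi*k*t/tau) = sin((p + 1/2)*2*pi*t/tau) / sin(pi*t/tau)
t = np.linspace(0.01, 0.45, 45)
g = sos_kernel(t, b, K, tau)
D = np.sin((p + 0.5) * 2 * np.pi * t / tau) / np.sin(np.pi * t / tau)
```

The symmetric coefficients satisfy b_k = b^*_{−k}, so the kernel is real valued, and any evaluation outside |t| < τ/2 returns zero, reflecting the support-τ property.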
Notice that there is an inherent tradeoff here: using more coefficients results in a better approximation of the analog filter, but in turn requires more samples, since the number of samples N must be greater than the cardinality of the set K. The reconstruction is exact to numerical precision. To demonstrate the filter g(t), we first choose K = {−p, …, p} and set all coefficients {b_k} to one, resulting in

g(t) = rect(t/τ) Σ_{k=−p}^{p} e^{j2πkt/τ} = rect(t/τ) D_p(2πt/τ),    (24)

where the Dirichlet kernel D_p(t) is defined by

D_p(t) = Σ_{k=−p}^{p} e^{jkt} = sin((p + 1/2)t) / sin(t/2).    (25)

The resulting filter for p = 10 and τ = 1 sec is depicted in Fig. 2. This filter is also optimal in an MSE sense for the case h(t) = δ(t), as we show in Theorem 2. In Fig. 3 we plot g(t) for the case in which the b_k's are chosen as a length-M symmetric Hamming window:

b_k = 0.54 − 0.46 cos(2π(k + ⌊M/2⌋)/M), k ∈ K.    (26)

Notice that in both cases the coefficients satisfy b_k = b^*_{−k}, and therefore the resulting filters are real valued. In the presence of noise, the choice of {b_k}_{k∈K} will affect the performance. Consider the case in which digital noise is added to the samples c, so that y = c + w, with w denoting a white Gaussian noise vector. Using (13),

y = V(−t_s) B x + w,    (27)

where B is a diagonal matrix with {b_k} on its diagonal. To choose the optimal B we assume that the {a_l} are uncorrelated with variance σ_a², independent of {t_l}, and that {t_l} are uniformly distributed in [0, τ). Since the noise is added to the samples after filtering, increasing the filter's amplification will always reduce the MSE. Therefore, the filter's energy must be normalized, and we do so by adding the constraint Tr(B^*B) = 1. Under these assumptions, we have the following theorem: Theorem 2.
The minimal MSE of a linear estimator of x from the noisy samples y in (27) is achieved by choosing the coefficients

|b_i|² = (σ²/N)( √(N/(λσ²)) − 1/|h̃_i|² ) if λ ≤ |h̃_i|⁴ N/σ², and |b_i|² = 0 if λ > |h̃_i|⁴ N/σ²,    (28)

where h̃_k = H(2πk/τ)·σ_a√L/τ, the |h̃_k| are arranged in increasing order,

√λ = (|K| − m)√(N/σ²) / ( N/σ² + Σ_{i=m+1}^{|K|} 1/|h̃_i|² ),    (29)

and m is the smallest index for which λ ≤ |h̃_{m+1}|⁴ N/σ².

Proof: See the Appendix.

An important consequence of Theorem 2 is the following corollary.

Corollary 1. If |h̃_k|² = |h̃_ℓ|², ∀k, ℓ ∈ K, then the optimal coefficients are |b_i|² = 1/|K|, ∀i ∈ K.

Proof: It is evident from (28) that if |h̃_k| = |h̃_ℓ| then |b_k| = |b_ℓ|. To satisfy the trace constraint Tr(B^*B) = 1, λ cannot be chosen such that all b_i = 0. Therefore, |b_i|² = 1/|K| for all i ∈ K.

From Corollary 1 it follows that when h(t) = δ(t), the optimal choice of coefficients is b_k = b_j for all k and j. We therefore use this choice when simulating noisy settings in the next section. Our sampling scheme for the periodic case consists of sampling kernels having compact support in the time domain. In the next section we exploit the compact support of our filter and extend the results to the finite stream case. We will show that our sampling and reconstruction scheme offers a numerically stable solution with high noise robustness.

D. Simulations

To demonstrate the method, the pulse is taken to be the Gaussian

h(t) = (1/√(2πσ²)) exp(−t²/2σ²),    (30)

with parameter σ = 7·10⁻³, and period τ = 1. The time-delays and amplitudes were chosen randomly. In order to demonstrate near-critical sampling we choose the set of indices K = {−L, …, L} with cardinality M = |K| = 11. We filter x(t) with g(t) whose coefficients are given by (26). The filter output is sampled uniformly N times, with sampling period T = τ/N, where N = M = 11. The sampling process is depicted in Fig. 4. The vector x is obtained using (14), and the delays and amplitudes are determined by the annihilating filter method. Reconstruction results are depicted in Fig. 5.
The estimation and reconstruction are both exact to numerical precision. Analog filtering operations are carried out by discrete approximations over a fine grid; the analog signal and filters are mimicked by high-rate digital signals. Since the sampling rate which constructs the fine grid is 2-3 orders of magnitude higher than the final sampling rate T, the simulations reflect the analog results very well. In the noisy setting, samples were taken uniformly with sampling period T = τ/N. We choose g(t) given by (24). As explained earlier, only the values of the filter at the points 2πk/τ, k ∈ K, affect the samples (see (11)). Since the values of the filter at the relevant points coincide and are equal to one for both the low-pass filter of [3] and g^*(−t), the resulting samples for the two settings are identical. Therefore, we present results for our method only, and note that the exact same results are obtained using the approach of [3]. In our setup, white Gaussian noise (AWGN) with variance σ_n² is added to the samples, where we define the SNR as

SNR = ((1/N)‖c‖₂²) / σ_n²,    (31)

with c denoting the clean samples. In our experiments the noise variance is set to give the desired SNR. The simulation consists of 1000 experiments for each SNR, where in each experiment a new noise vector is created. We choose t = τ·(1/3, 2/3)^T and a = τ·(1, 1)^T, and these vectors remain constant throughout the experiments. We define the error in time-delay estimation as the average of ‖t − t̂‖₂², where t and t̂ denote the true and estimated time-delays, respectively, sorted in increasing order. The error in amplitudes is similarly defined by ‖a − â‖₂². In Fig. 6 we show the error as a function of SNR for both delay and amplitude estimation. Estimation of the time-delays is the main interest in the FRI literature, due to the special nonlinear methods required for delay recovery.
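For reproducing such experiments, the noise standard deviation for a target SNR per (31) can be set as follows (a small helper of ours; expressing the target in dB is our assumption, not stated in the text):

```python
import numpy as np

def noise_std_for_snr(c, snr_db):
    """sigma_n such that SNR = ((1/N)*||c||^2) / sigma_n^2 matches the target.
    The dB convention SNR_dB = 10*log10(SNR) is our assumption."""
    snr = 10.0 ** (snr_db / 10.0)
    return np.sqrt(np.mean(np.abs(c) ** 2) / snr)

c = np.ones(11)                        # toy clean-sample vector
sigma_n = noise_std_for_snr(c, 20.0)   # 20 dB target
```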
Once the delays are known, the standard least-squares method is typically used to recover the amplitudes; we therefore focus on delay estimation in the sequel. Finally, for the same setting we can improve reconstruction accuracy at the expense of oversampling, as illustrated in Fig. 7. Here we show recovery performance for oversampling factors of 1, 2, 4 and 8. The oversampling was exploited using the total least-squares method, followed by Cadzow's iterative denoising (both described in detail in [10]).

III. FINITE STREAM OF PULSES

A. Extension of SoS Class

Consider now a finite stream of pulses, defined as

x̄(t) = Σ_{l=1}^{L} a_l h(t − t_l), t_l ∈ [0, τ), a_l ∈ R, l = 1…L,    (32)

where, as in Section II, h(t) is a known pulse shape and {t_l, a_l}_{l=1}^{L} are the unknown delays and amplitudes. The time-delays {t_l}_{l=1}^{L} are restricted to lie in a finite time interval [0, τ). Since there are only 2L degrees of freedom, we wish to design a sampling and reconstruction method which perfectly reconstructs x̄(t) from 2L samples. In this section we assume that the pulse h(t) has finite support R, i.e.,

h(t) = 0, ∀|t| ≥ R/2.    (33)

This is a rather weak condition, since our primary interest is in very short pulses which have wide, or even infinite, frequency support, and therefore cannot be sampled efficiently using classical sampling results for bandlimited signals. We now investigate the structure of the samples taken in the periodic case, and design a sampling kernel for the finite setting which obtains precisely the same samples c[n] as in the periodic case. In the periodic setting, the resulting samples are given by (10).
Using g(t) of (19) as the sampling kernel we have

c[n] = ⟨g(t − nT), x(t)⟩ = Σ_{m∈Z} Σ_{l=1}^{L} a_l ∫_{−∞}^{∞} h(t − t_l − mτ) g^*(t − nT) dt = Σ_{m∈Z} Σ_{l=1}^{L} a_l φ(nT − t_l − mτ),    (34)

where we defined

φ(ϑ) = ⟨g(t − ϑ), h(t)⟩.    (35)

Since g(t) in (19) vanishes for all |t| > τ/2 and h(t) satisfies (33), the support of φ(t) is (R + τ), i.e.,

φ(t) = 0 for all |t| ≥ (R + τ)/2.    (36)

Using this property, the summation in (34) will be over nonzero values only for indices m satisfying

|nT − t_l − mτ| < (R + τ)/2.    (37)

Sampling within the window [0, τ), i.e., nT ∈ [0, τ), and noting that the time-delays lie in the interval t_l ∈ [0, τ), l = 1…L, (37) implies

(R + τ)/2 > |nT − t_l − mτ| ≥ |m|τ − |nT − t_l| > (|m| − 1)τ.    (38)

Here we used the triangle inequality and the fact that |nT − t_l| < τ in our setting. Therefore,

|m| < R/(2τ) + 3/2 ⇒ |m| ≤ ⌈R/(2τ) + 3/2⌉ − 1 ≜ r,    (39)

i.e., the elements of the sum in (34) vanish for all m except the values in (39). Consequently, the infinite sum in (34) reduces to a finite sum over |m| ≤ r, so that (34) becomes

c[n] = Σ_{m=−r}^{r} Σ_{l=1}^{L} a_l φ(nT − t_l − mτ) = Σ_{m=−r}^{r} Σ_{l=1}^{L} a_l ∫_{−∞}^{∞} h(t − t_l) g^*(t − nT + mτ) dt = Σ_{m=−r}^{r} ⟨g(t − nT + mτ), Σ_{l=1}^{L} a_l h(t − t_l)⟩,    (40)

where in the last equality we used the linearity of the inner product. Defining a function which consists of (2r + 1) periods of g(t),

g_r(t) = Σ_{m=−r}^{r} g(t + mτ),    (41)

we conclude that

c[n] = ⟨g_r(t − nT), x̄(t)⟩.    (42)

Therefore, the samples c[n] can be obtained by filtering the aperiodic signal x̄(t) with the filter g_r^*(−t) prior to sampling. This filter has compact support equal to (2r + 1)τ. Since the finite-setting samples (42) are identical to those of the periodic case (34), recovery of the delays and amplitudes is performed exactly as in the periodic setting. We summarize this result in the following theorem. Theorem 3.
Consider the finite stream of pulses given by x̄(t) = Σ_{l=1}^{L} a_l h(t − t_l), t_l ∈ [0, τ), a_l ∈ R, where h(t) has finite support R. Choose a set K of consecutive indices for which H(2πk/τ) ≠ 0, ∀k ∈ K. Then the N samples given by c[n] = ⟨g_r(t − nT), x̄(t)⟩, n = 0…N − 1, nT ∈ [0, τ), where r is defined in (39) and g_r(t) is compactly supported and defined by (41) (based on the filter g(t) in (17)), uniquely determine the signal x̄(t) as long as N ≥ |K| ≥ 2L.

If, for example, the support R of h(t) satisfies R ≤ τ, then we obtain from (39) that r = 1. Therefore, the filter in this case consists of 3 periods of g(t):

g_{3p}(t) ≜ g_r(t)|_{r=1} = g(t − τ) + g(t) + g(t + τ).    (43)

Practical implementation of the filter may be carried out using delay-lines. The relation of this scheme to previous approaches will be investigated in Section V. Perfect reconstruction is achieved, as can be seen in Fig. 8; the estimation is exact to numerical precision. 2) High Order Problems: The same simulation was carried out with L = 20 diracs. The results are shown in Fig. 9. Here again, the reconstruction is perfect even for large L. 3) Noisy Case: We now consider the performance of our method in the presence of noise. In addition, we compare our performance to the B-spline and E-spline methods proposed in [13], and to the Gaussian sampling kernel of [3]. We examine 4 scenarios, in which the signal consists of L = 2, 3, 5, 20 diracs.¹ In our setup, the time-delays are equally distributed in the window [0, τ), with τ = 1, and remain constant throughout the experiments. All amplitudes are set to one. The SNR of each method is computed with respect to its own samples; in other words, σ_n in (31) is method-dependent, and is determined by the desired SNR and the samples of the specific technique.

¹ Due to the computational complexity of calculating the time-domain expression for high order E-splines, these functions were simulated up to order 9, which allows for L = 5 pulses.
Hard thresholding was implemented in order to improve the spline methods, as suggested by the authors in [13]. The threshold was chosen to be 3σ_n, where σ_n is the standard deviation of the AWGN. For the Gaussian sampling kernel the parameter σ was optimized, taking the values σ = 0.25, 0.28, 0.32, 0.9, respectively. The results are given in Fig. 10. For L = 2 all methods are stable, where E-splines exhibit better performance than B-splines, and the Gaussian and SoS approaches demonstrate the lowest errors. As the value of L grows, the advantage of the SoS filter becomes more prominent: for L ≥ 5, the performance of the Gaussian and both spline methods deteriorates, with errors approaching the order of τ. In contrast, the SoS filter retains its performance nearly unchanged even up to L = 20, where the B-spline and Gaussian methods are unstable. The improved version of the Gaussian approach presented in [12] would not perform better in this high order case, since it fails for L > 9, as noted by the authors. A comparison of our approach to previous methods will be detailed in Section V.

IV. INFINITE STREAM OF PULSES

We now consider the case of an infinite stream of pulses

z(t) = Σ_{l∈Z} a_l h(t − t_l), t_l, a_l ∈ R.    (44)

We assume that the infinite signal has a bursty character, i.e., the signal has two distinct phases: a) bursts of maximal duration τ containing at most L pulses, and b) quiet phases between bursts. For the sake of clarity we begin with the case h(t) = δ(t). For this choice the filter g_r^*(−t) in (41) reduces to g_{3p}^*(−t) of (43). Since the filter g_{3p}^*(−t) has compact support 3τ, we are assured that the current burst cannot influence samples taken 3τ/2 seconds before or after it. In the finite case we confined ourselves to sampling within the interval [0, τ). Similarly, here, we assume that the samples are taken during the burst duration.
Therefore, if the minimal spacing between any two consecutive bursts is 3τ/2, then we are guaranteed that each sample taken during the burst is influenced by one burst only, as depicted in Fig. 11. Consequently, the infinite problem can be reduced to a sequential solution of local distinct finite order problems, as in Section III. Here the compact support of our filter comes into play, allowing us to apply local reconstruction methods.

Fig. 11. Bursty signal z(t). A spacing of 3τ/2 between bursts ensures that the influence of the current burst ends before taking the samples of the next burst. This is due to the finite support, 3τ, of the sampling kernel g_{3p}^*(−t).

In the above argument we assume we know the locations of the bursts, since we must acquire samples from within the burst duration. Samples outside the burst duration are contaminated by energy from adjacent bursts. Nonetheless, knowledge of burst locations is available in many applications, such as synchronized communication, where the receiver knows when to expect the bursts, or radar and imaging scenarios, where the transmitter is itself the receiver. We now state this result in a theorem.

Theorem 4. Consider a signal z(t) which is a stream of bursts consisting of delayed and weighted diracs. The maximal burst duration is τ, and the maximal number of pulses within each burst is L. Then the samples given by c[n] = ⟨g_{3p}(t − nT), z(t)⟩, n ∈ Z, where g_{3p}(t) is defined by (43), are a sufficient characterization of z(t) as long as the spacing between two adjacent bursts is greater than 3τ/2, and the burst locations are known.

Extending this result to a general pulse h(t) is quite straightforward, as long as h(t) is compactly supported with support R, and we filter with g_r^*(−t) as defined in (41) with the appropriate r from (39).
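The choice of r in (39) and the periodization in (41) reduce to a few lines of code; the sketch below uses our own helper names, and any concrete g(t) passed in is only an assumed example for illustration.

```python
import math

def finite_case_r(R, tau):
    """r = ceil(R/(2*tau) + 3/2) - 1, eq. (39); the kernel support is (2r+1)*tau."""
    return math.ceil(R / (2.0 * tau) + 1.5) - 1

def g_r(t, g, r, tau):
    """g_r(t) = sum_{m=-r}^{r} g(t + m*tau), eq. (41), for a callable g."""
    return sum(g(t + m * tau) for m in range(-r, r + 1))

# For any R <= tau we get r = 1, i.e., the three-period kernel g_3p of (43)
r = finite_case_r(0.8, 1.0)
```

The periodized kernel simply replicates g over 2r + 1 periods, which is what makes a delay-line implementation natural.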
If we can choose a set K of consecutive indices for which H(2πk/τ) ≠ 0, ∀k ∈ K, and we are guaranteed that the minimal spacing between two adjacent bursts is greater than ((2r + 1)τ + R)/2, then the above theorem holds.

V. COMPARISON TO PREVIOUS METHODS

A. Periodic Case

The work in [3] was the first to address efficient sampling of pulse streams, e.g., streams of diracs. Their approach for solving the periodic case was ideal lowpass filtering, followed by uniform sampling, which allowed them to obtain the Fourier series coefficients of the signal. These coefficients are then processed by the annihilating filter to obtain the unknown time-delays and amplitudes. In Section II, we derived a general condition on the sampling kernel (11) under which recovery is guaranteed. The lowpass filter of [3] is a special case of this result. The noise robustness of both the lowpass approach and our more general method is high as long as the pulses are well separated, since reconstruction from Fourier series coefficients is stable in this case. Both approaches achieve the minimal number of samples. However, the lowpass filter is bandlimited and consequently has infinite time-support; therefore, this sampling scheme is unsuitable for finite and infinite streams of pulses. The SoS class introduced in Section II consists of compactly supported filters, which is crucial to enable the extension of our results to finite and infinite streams of pulses. A comparison between the two methods is shown in Table I.

B. Finite Pulse Stream

The authors of [3] proposed a Gaussian sampling kernel for sampling finite streams of diracs. The Gaussian method is numerically unstable, as mentioned in [12], since the samples are multiplied by a rapidly diverging or decaying exponent. Therefore, this approach is unsuitable for L ≥ 6. Modifications proposed in [12] exhibit better performance and stability; however, these methods require substantial oversampling, and still exhibit instability for L > 9.
In [13] the family of polynomial reproducing kernels was introduced as sampling filters for the model (32); B-splines were proposed as a specific example. The B-spline sampling filter enables obtaining moments of the signal, rather than Fourier coefficients. The moments are then processed with the same annihilating filter used in previous methods. However, as mentioned by the authors, this approach is unstable for high values of L. This is because, in contrast to the estimation of Fourier coefficients, estimating high order moments is unstable, since unstable weighting of the samples is carried out during the process. Another general family introduced in [13] for the finite model is the class of exponential reproducing kernels. As a specific case, the authors propose E-spline sampling kernels. The CTFT of an E-spline of order N + 1 is described by

β̂_α(ω) = Π_{n=0}^{N} (1 − e^{α_n − jω}) / (jω − α_n),    (45)

where α = (α_0, α_1, …, α_N) are free parameters. In order to use E-splines as sampling kernels for pulse streams, the authors propose a specific structure on the α's: α_n = α_0 + nλ. Choosing exponents having a non-vanishing real part results in unstable weighting, as in the B-spline case. However, choosing the special case of purely imaginary exponents in the E-splines, already suggested by the authors, results in a reconstruction method based on Fourier coefficients, which exhibits an interesting relation to our method. The Fourier coefficients are obtained by applying a matrix consisting of the exponential spanning coefficients {c_{m,n}} (see [13]), instead of our Vandermonde matrix relation (14). With this specific choice of parameters the E-spline function satisfies (11). Interestingly, with a proper choice of spanning coefficients, it can be shown that the SoS class can reproduce exponentials with frequencies {2πk/τ}_{k∈K}, and therefore satisfies the general exponential reproduction property of [13].
However, the SoS filter provides a new sampling scheme which has substantial advantages over existing methods, including E-splines. The first advantage is in the presence of noise, where both methods have the following structure:

y = Ax + w,    (46)

where w is the noise vector. While the Fourier coefficient vector x is common to both approaches, the linear transformation A is method dependent, and therefore the sample vector y differs between them. In our approach with g(t) of (24), A is the DFT matrix, which for any order L has a condition number of 1. In the case of E-splines, however, the transformation matrix A consists of the E-spline exponential spanning coefficients, and it has a much higher condition number, e.g., above 100 for L = 5. Consequently, some Fourier coefficients will have much higher noise levels than others. This scenario of high variance among the noise levels of the samples is known to deteriorate the performance of spectral analysis methods [11], the annihilating filter being one of them. This explains our simulations, which show that the SoS filter outperforms the E-spline approach in the presence of noise. When the E-spline coefficients α are purely imaginary, it can easily be shown that (45) becomes a product of shifted sincs. This is in contrast to the SoS filter, which consists of a sum of sincs in the frequency domain. Since multiplication in the frequency domain translates to convolution in the time domain, it is clear that the support of the E-spline grows with its order, and in turn with the order of the problem L. In contrast, the support of the SoS filter remains unchanged. This observation becomes important when examining the infinite case. The constraint on the signal in [13] is that no more than L pulses lie in any interval of length LPT, P being the support of the filter and T the sampling period. Since P grows linearly with L, the constraint cast on the infinite stream becomes more stringent quadratically in L.
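The conditioning claim above can be checked numerically; the small sketch below (our own) confirms that the square DFT matrix arising from g(t) of (24) has a condition number of 1, since its columns are orthogonal with equal norm.

```python
import numpy as np

# N x N DFT matrix: F[n, k] = exp(-2j*pi*n*k/N). Its columns are orthogonal
# with norm sqrt(N), so all singular values are equal and cond(F) = 1.
N = 16
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)
cond = np.linalg.cond(F)
```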
On the other hand, the constraint on the infinite stream using the SoS filter is independent of L. We showed in simulations that typically for L ≥ 5 the estimation errors, using both B-spline and E-spline sampling kernels, become very large. In contrast, our approach leads to stable reconstruction even for very high values of L, e.g., L = 100. In addition, even for low values of L we showed in simulations that although the E-spline method has improved performance over B-splines, the SoS reconstruction method outperforms both spline approaches. A comparison is described in Table II.

C. Infinite Streams

The work in [13] addressed the infinite stream case with h(t) = δ(t). They proposed filtering the signal with a polynomial reproducing sampling kernel prior to sampling. If the signal has at most L diracs within any interval of duration LPT, where P denotes the support of the sampling filter and T the sampling period, then the samples are a sufficient characterization of the signal. This condition allows the infinite stream to be divided into a sequence of finite-case problems. In our approach, the quiet phases of 1.5τ between the bursts of length τ enable the reduction to the finite case. Since the infinite solution is based on the finite one, our method is advantageous in terms of stability in high order problems and noise robustness. However, we do have the additional requirement of quiet phases between the bursts. Regarding the sampling rate, the number of degrees of freedom of the signal per unit time, also known as the rate of innovation, is ρ = 2L/(2.5τ), which is the critical sampling rate. Our sampling rate is 2L/τ, and therefore we oversample by a factor of 2.5. In the same scenario, the method in [13] would require a sampling rate of LP/(2.5τ), i.e., oversampling by a factor of P/2. Properties of polynomial reproducing kernels imply that P ≥ 2L; therefore, for any L ≥ 3, our method exhibits more efficient sampling.
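The rate comparison above reduces to simple arithmetic; the following helper (our own names) computes the oversampling factors of both schemes relative to the rate of innovation ρ = 2L/(2.5τ):

```python
def oversampling_factors(L, P, tau=1.0):
    """Oversampling factors relative to the rate of innovation rho = 2L/(2.5*tau).
    P is the support of the polynomial reproducing kernel of [13]."""
    rho = 2 * L / (2.5 * tau)          # critical rate for bursts of L pulses
    sos_rate = 2 * L / tau             # our scheme: factor 2.5
    spline_rate = L * P / (2.5 * tau)  # scheme of [13]: factor P/2
    return sos_rate / rho, spline_rate / rho
```

With P ≥ 2L, the factor P/2 is at least L, so the SoS factor of 2.5 is smaller whenever L ≥ 3, as stated in the text.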
A table comparing the various features is shown in Table III. Recent work [14] presented a low-complexity method for reconstructing streams of pulses (both infinite and finite cases) consisting of diracs. However, the basic assumption of this method is that there is at most one dirac per sampling period. This means we must have prior knowledge of a lower limit on the spacing between two consecutive deltas in order to guarantee correct reconstruction. In some cases such a limit may not exist; even if it does, it will usually force us to sample at a much higher rate than the critical one.

VI. APPLICATION - ULTRASOUND IMAGING

An interesting application of our framework is ultrasound imaging. In ultrasonic imaging, an acoustic pulse is transmitted into the scanned tissue. The pulse is reflected due to changes in acoustic impedance which occur, for example, at the boundaries between two different tissues. At the receiver, the echoes are recorded, where the time-of-arrival and power of each echo indicate the scatterer's location and strength, respectively. Accurate estimation of tissue boundaries and scatterer locations allows for reliable detection of certain illnesses, and is therefore of major clinical importance. The location of the boundaries is often more important than the power of the reflection. This stream of pulses is finite since the pulse energy decays within the tissue. We now demonstrate our method on real 1-dimensional (1D) ultrasound data. The multiple-echo signal recorded at the receiver can be modeled as a finite stream of pulses, as in (32). The unknown time-delays correspond to the locations of the various scatterers, whereas the amplitudes correspond to their reflection coefficients. The pulse shape in this case is the Gaussian of (30), due to the physical characteristics of the electro-acoustic transducer (mechanical damping).
We assume the received pulse-shape is known, either by assuming it is unchanged through propagation, through physically modeling ultrasonic wave propagation, or by prior estimation of the received pulse. A full investigation of mismatch in the pulse shape is left for future research. In our setting, a phantom consisting of uniformly spaced pins, mimicking point scatterers, was scanned by GE Healthcare's Vivid-i portable ultrasound imaging system [20], [21], using a 3S-RS probe. We use the data recorded by a single element in the probe, which is modeled as a 1D stream of pulses. The center frequency of the probe is f_c = 1.7021 MHz, the width of the transmitted Gaussian pulse is σ = 3 · 10^−7 sec, and the depth of imaging is R_max = 0.16 m, corresponding to a time window of 2.08 · 10^−4 sec. In this experiment all filtering and sampling operations are carried out digitally in simulation. The analog filter required by the sampling scheme is replaced by a lengthy Finite Impulse Response (FIR) filter. Since the sampling frequency of the element in the system is f_s = 20 MHz, which is more than 5 times higher than the Nyquist rate, the recorded data represents the continuous signal reliably. Consequently, digital filtering of the high-rate sampled data vector (4160 samples) followed by proper decimation mimics the original analog sampling scheme with high accuracy. The recorded signal is depicted in Fig. 12. The band-pass ultrasonic signal is demodulated to base-band, i.e., envelope-detection is performed, before it is inserted into the process. We carried out our sampling and reconstruction scheme on the aforementioned data. We set L = 4, looking for the strongest 4 echoes. Since the data is corrupted by strong noise we over-sampled the signal, obtaining twice the minimal number of samples. In addition, hard-thresholding of the samples was implemented, where we set the threshold to 10 percent of the maximal value.
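A quick sanity check of the acquisition numbers quoted above (the values are taken from the text; approximating the Nyquist rate as twice the center frequency is our simplifying assumption):

```python
# Sanity-check the quoted ultrasound acquisition parameters.
f_s = 20e6          # element sampling frequency [Hz]
f_c = 1.7021e6      # probe center frequency [Hz]
T_window = 2.08e-4  # imaging time window [s]

# Number of high-rate samples in one imaging window.
n_samples = round(f_s * T_window)
print(n_samples)          # 4160, as quoted in the text

# Ratio of sampling rate to (approximate) Nyquist rate 2*f_c.
print(f_s / (2 * f_c))    # ~5.9, i.e. "more than 5 times higher"
```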
We obtained N = 17 samples by decimating the output of the lengthy FIR digital filter imitating g*_{3p}(−t) from (43), where the coefficients {b_k} were all set to one. In Fig. 13a the reconstructed signal is depicted vs. the full demodulated signal using all 4160 samples. Clearly, the time-delays were estimated with high precision. The amplitudes were estimated as well; however, the amplitude of the second pulse has a large error. This is probably due to the large values of noise present in its vicinity. However, as mentioned earlier, the exact locations of the scatterers are often more important than the accurate reflection coefficients. We carried out the same experiment, only now oversampling by a factor of 4, resulting in N = 33 samples. Here no hard-thresholding is required. The results are depicted in Fig. 13b, and are very similar to our previous results. In both simulations, the estimation error in the pulse location is around 0.1 mm. Current ultrasound imaging technology operates on the high-rate sampled data, e.g., f_s = 20 MHz in our setting. Since there are usually 100 different elements in a single ultrasonic probe, each sampled at a very high rate, data throughput becomes very high and imposes high computational complexity on the system, limiting its capabilities. Therefore, there is a demand for lowering the sampling rate, which in turn will reduce the complexity of reconstruction. Exploiting the parametric point of view, our sampling scheme addresses this demand. VII. CONCLUSIONS We presented efficient sampling and reconstruction schemes for streams of pulses. For the case of a periodic stream of pulses, we derived a general condition on the sampling kernel which allows a single-channel uniform sampling scheme. Previous work [3] is a special case of this general result. We then proposed a class of filters, satisfying the condition, with compact support. Exploiting the compact support of the filters, we constructed a new sampling scheme for the case of a finite stream of pulses.
Simulations show this method exhibits better performance than previous techniques [3], [13], in terms of stability in high order problems and noise robustness. An extension to an infinite stream of pulses was also presented. The compact support of the filter allows for local reconstruction, and thus lowers the complexity of the problem. Finally, we demonstrated the advantage of our approach in reducing the sampling and processing rate of ultrasound imaging, by applying our techniques to real ultrasound data. APPENDIX PROOF OF THEOREM 2 The MSE of the optimal linear estimator of the vector x from the measurement vector y is known to be [22] MSE = Tr{R_xx} − Tr{R_xy R_yy^{−1} R_yx}. (47) The covariance matrices in our case are R_xy = R_xx B* V*, (48) R_yy = V B R_xx B* V* + σ² I, (49) where we used (27), and the fact that R_ww = σ² I since w is a white Gaussian noise vector. Under our assumptions on {t_l} and {a_l}, denoting h_k = H(2πk/τ) and using (5), (R_xx)_{k,k'} = E[X[k] X*[k']] = (1/τ²) h_k h*_{k'} Σ_{l=1}^{L} Σ_{l'=1}^{L} E[a_l a*_{l'}] e^{−j(2π/τ)(k t_l − k' t_{l'})} = (σ_a²/τ²) h_k h*_{k'} Σ_{l=1}^{L} E[e^{−j(2π/τ)(k−k') t_l}] = (σ_a²/τ²) h_k h*_{k'} Σ_{l=1}^{L} ∫_0^τ (1/τ) e^{−j(2π/τ)(k−k') t} dt = (σ_a²/τ²) L |h_k|² δ_{k,k'}. (50) Denoting by H̃ a diagonal matrix with kth element |h̃_k|² = |h_k|² σ_a² L/τ², we have R_xx = H̃. (51) Since the first term of (47) is independent of B, minimizing the MSE with respect to B is equivalent to maximizing the second term in (47). Substituting (48), (49) and (51) into this term, the optimal B is a solution to max_B Tr{H̃ B* V* (V B H̃ B* V* + σ² I)^{−1} V B H̃} s.t. Tr(B* B) = 1. (52) Using the matrix inversion formula [23], (V B H̃ B* V* + σ² I)^{−1} = (1/σ²) [I − V B (σ² H̃^{−1} + B* V* V B)^{−1} B* V*]. (53) It is easy to verify from the definition of V in (13) that (V* V)_{ik} = Σ_{l=0}^{N−1} e^{j(2π/N) l (k−i)} = N δ_{k,i}. (54) Therefore, the objective in (52) equals (N/σ²) Tr{H̃ B* [I − B ((σ²/N) H̃^{−1} + B* B)^{−1} B*] B H̃} = Σ_{i=1}^{|K|} |h̃_i|² [1 − (σ²/N) / (|b_i|² |h̃_i|² + σ²/N)], (55) where we used the fact that B and H̃ are diagonal. We can now find the optimal B by maximizing (55), which is equivalent to minimizing the negative term: min_B Σ_{i=1}^{|K|} |h̃_i|² / (1 + |b_i|² |h̃_i|² N/σ²), s.t. Σ_{i=1}^{|K|} |b_i|² = 1. (56) Denoting β_i = |b_i|², (56) becomes a convex optimization problem: min_{β_i} Σ_{i=1}^{|K|} |h̃_i|² / (1 + β_i |h̃_i|² N/σ²) (57) subject to β_i ≥ 0 (58) and Σ_{i=1}^{|K|} β_i = 1. (59) To solve (57) subject to (58) and (59), we form the Lagrangian: L = Σ_{i=1}^{|K|} |h̃_i|² / (1 + β_i |h̃_i|² N/σ²) + λ (Σ_{i=1}^{|K|} β_i − 1) − Σ_{i=1}^{|K|} μ_i β_i, (60) where from the Karush-Kuhn-Tucker (KKT) conditions [24], μ_i ≥ 0 and μ_i β_i = 0. Differentiating (60) with respect to β_i and equating to 0, (|h̃_i|⁴ N/σ²) / (1 + β_i |h̃_i|² N/σ²)² + μ_i = λ, (61) so that λ > 0, since h̃_i > 0 by construction of H̃ (see Theorem 1). If λ > |h̃_i|⁴ N/σ² then μ_i > 0, and therefore β_i = 0 from the KKT conditions. If λ ≤ |h̃_i|⁴ N/σ² then from (61) μ_i = 0 and β_i = (σ²/N) [√(N/(λσ²)) − 1/|h̃_i|²]. (62) The optimal β_i is therefore β_i = (σ²/N) [√(N/(λσ²)) − 1/|h̃_i|²] for λ ≤ |h̃_i|⁴ N/σ², and β_i = 0 for λ > |h̃_i|⁴ N/σ², (63) where λ > 0 is chosen to satisfy (59). Note that from (63), if β_i = 0 and j < i, then β_j = 0 as well, since the |h̃_i| are in increasing order. We now show that there is a unique λ that satisfies (59). Define the function G(λ) = Σ_{i=1}^{|K|} β_i(λ) − 1, (64) so that λ is a root of G(λ). Since the |h̃_i| are in increasing order, |h̃_{|K|}| = max_i |h̃_i|. It is clear from (63) that G(λ) is monotonically decreasing for 0 < λ ≤ |h̃_{|K|}|⁴ N/σ². In addition, G(λ) = −1 for λ > |h̃_{|K|}|⁴ N/σ², and G(λ) > 0 for λ → 0. Thus, there is a unique λ for which (59) is satisfied. Substituting (63) into (59), and denoting by m the smallest index for which λ ≤ |h̃_{m+1}|⁴ N/σ², we have √λ = (|K| − m) √(N/σ²) / (N/σ² + Σ_{i=m+1}^{|K|} 1/|h̃_i|²), (65) completing the proof of the theorem.
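The closed-form water-filling solution (62)-(65) is straightforward to implement. The sketch below is our own minimal illustration (the function name and test values are assumptions); it takes the |h̃_i| sorted in increasing order, as in the proof, and returns the optimal β_i = |b_i|²:

```python
import math

def optimal_weights(h_abs, N, sigma2):
    """Water-filling solution of (62)-(65) for the optimal beta_i = |b_i|^2.

    h_abs: values |h~_i|, assumed sorted in increasing order.
    Returns beta with sum(beta) == 1 (the trace constraint)."""
    K = len(h_abs)
    for m in range(K):  # m = number of coefficients set to zero
        # Water level from (65) for this candidate m.
        S = sum(1.0 / h**2 for h in h_abs[m:])
        sqrt_lam = (K - m) * math.sqrt(N / sigma2) / (N / sigma2 + S)
        lam = sqrt_lam**2
        # Valid iff every kept coefficient satisfies lam <= |h~_i|^4 N/sigma^2;
        # by the increasing order it suffices to check the smallest kept one.
        if lam <= h_abs[m]**4 * N / sigma2:
            return [0.0] * m + [
                (sigma2 / N) * (math.sqrt(N / (lam * sigma2)) - 1.0 / h**2)
                for h in h_abs[m:]
            ]
    raise RuntimeError("no feasible water level found")

beta = optimal_weights([0.5, 1.0, 2.0], N=11, sigma2=0.1)
print(sum(beta))   # 1.0: the trace constraint (59) is met
```

When all |h̃_i| are equal, the routine returns the uniform weights 1/|K|, in agreement with Corollary 1.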
9,762
1003.2822
2138787877
Signals comprised of a stream of short pulses appear in many applications including bioimaging and radar. The recent finite rate of innovation framework has paved the way to low rate sampling of such pulses by noticing that only a small number of parameters per unit time are needed to fully describe these signals. Unfortunately, for high rates of innovation, existing sampling schemes are numerically unstable. In this paper we propose a general sampling approach which leads to stable recovery even in the presence of many pulses. We begin by deriving a condition on the sampling kernel which allows perfect reconstruction of periodic streams from the minimal number of samples. We then design a compactly supported class of filters satisfying this condition. The periodic solution is extended to finite and infinite streams, and is shown to be numerically stable even for a large number of pulses. High noise robustness is also demonstrated when the delays are sufficiently separated. Finally, we process ultrasound imaging data using our techniques and show that substantial rate reduction with respect to traditional ultrasound sampling schemes can be achieved.
Another general family introduced in @cite_19 for the finite model is the class of exponential reproducing kernels. As a specific case, the authors propose E-spline sampling kernels. The CTFT of an E-spline of order @math is given by a product of first-order factors, where @math are free parameters. In order to use E-splines as sampling kernels for pulse streams, the authors propose a specific structure on the @math 's, @math . Choosing exponents having a non-vanishing real part results in unstable weighting, as in the B-spline case. However, choosing the special case of pure imaginary exponents in the E-splines, already suggested by the authors, results in a reconstruction method based on Fourier coefficients, which exhibits an interesting relation to our method. The Fourier coefficients are obtained by applying a matrix consisting of the exponent spanning coefficients @math (see @cite_19 ), instead of our Vandermonde matrix relation. With this specific choice of parameters, the E-spline function satisfies our condition on the sampling kernel.
{ "abstract": [ "Consider the problem of sampling signals which are not bandlimited, but still have a finite number of degrees of freedom per unit of time, such as, for example, nonuniform splines or piecewise polynomials, and call the number of degrees of freedom per unit of time the rate of innovation. Classical sampling theory does not enable a perfect reconstruction of such signals since they are not bandlimited. Recently, it was shown that, by using an adequate sampling kernel and a sampling rate greater or equal to the rate of innovation, it is possible to reconstruct such signals uniquely . These sampling schemes, however, use kernels with infinite support, and this leads to complex and potentially unstable reconstruction algorithms. In this paper, we show that many signals with a finite rate of innovation can be sampled and perfectly reconstructed using physically realizable kernels of compact support and a local reconstruction algorithm. The class of kernels that we can use is very rich and includes functions satisfying Strang-Fix conditions, exponential splines and functions with rational Fourier transform. This last class of kernels is quite general and includes, for instance, any linear electric circuit. We, thus, show with an example how to estimate a signal of finite rate of innovation at the output of an RC circuit. The case of noisy measurements is also analyzed, and we present a novel algorithm that reduces the effect of noise by oversampling" ], "cite_N": [ "@cite_19" ], "mid": [ "2103300762" ] }
Innovation Rate Sampling of Pulse Streams with Application to Ultrasound Imaging
Sampling is the process of representing a continuous-time signal by discrete-time coefficients, while retaining the important signal features. The well-known Shannon-Nyquist theorem states that the minimal sampling rate required for perfect reconstruction of bandlimited signals is twice the maximal frequency. This result has since been generalized to minimal rate sampling schemes for signals lying in arbitrary subspaces [1], [2]. Recently, there has been growing interest in sampling of signals consisting of a stream of short pulses, where the pulse shape is known. Such signals have a finite number of degrees of freedom per unit time, also known as the Finite Rate of Innovation (FRI) property [3]. This interest is motivated by applications such as digital processing of neuronal signals, bio-imaging, image processing and ultrawideband (UWB) communications, where such signals are present in abundance. Our work is motivated by the possible application of this model in ultrasound imaging, where echoes of the transmit pulse are reflected off scatterers within the tissue, and form a stream of pulses signal at the receiver. The time-delays and amplitudes of the echoes indicate the position and strength of the various scatterers, respectively. Therefore, determining these parameters from low rate samples of the received signal is an important problem. Reducing the rate allows more efficient processing which can translate to power and size reduction of the ultrasound imaging system. Our goal is to design a minimal rate single-channel sampling and reconstruction scheme for pulse streams that is stable even in the presence of many pulses. Since the set of FRI signals does not form a subspace, classic subspace schemes cannot be directly used to design low-rate sampling schemes. Mathematically, such FRI signals conform with a broader model of signals lying in a union of subspaces [4]- [9]. 
Although the minimal sampling rate required for such settings has been derived, no generic sampling scheme exists for the general problem. Nonetheless, some special cases have been treated in previous work, including streams of pulses. A stream of pulses can be viewed as a parametric signal, uniquely defined by the time-delays of the pulses and their amplitudes. Efficient sampling of periodic impulse streams, having L impulses in each period, was proposed in [3], [10]. The heart of the solution is to obtain a set of Fourier series coefficients, which then converts the problem of determining the time-delays and amplitudes to that of finding the frequencies and amplitudes of a sum of sinusoids. The latter is a standard problem in spectral analysis [11], which can be solved using conventional methods, such as the annihilating filter approach, as long as the number of samples is at least 2L. This result is intuitive since there are 2L degrees of freedom in each period: L time-delays and L amplitudes. Periodic streams of pulses are mathematically convenient to analyze; however, they are not very practical. In contrast, finite streams of pulses are prevalent in applications such as ultrasound imaging. The first treatment of finite Dirac streams appears in [3], in which a Gaussian sampling kernel was proposed. The time-delays and amplitudes are then estimated from the Gaussian tails. This method and its improvement [12] are numerically unstable for high rates of innovation, since they rely on the Gaussian tails, which take on small values. The work in [13] introduced a general family of polynomial and exponential reproducing kernels, which can be used to solve FRI problems. Specifically, B-spline and E-spline sampling kernels which satisfy the reproduction condition are proposed. This method treats streams of Diracs, differentiated Diracs, and short pulses with compact support. However, the proposed sampling filters result in poor reconstruction results for large L.
To the best of our knowledge, a numerically stable sampling and reconstruction scheme for high order problems has not yet been reported. Infinite streams of pulses arise in applications such as UWB communications, where the communicated data changes frequently. Using spline filters [13], and under certain limitations on the signal, the infinite stream can be divided into a sequence of separate finite problems. The individual finite cases may be treated using methods for the finite setting, at the expense of a sampling rate above the critical rate, and they suffer from the same instability issues. In addition, the constraints that are cast on the signal become more and more stringent as the number of pulses per unit time grows. In a recent work [14] the authors propose a sampling and reconstruction scheme for L = 1; however, our interest here is in high values of L. Another related work [7] proposes a semi-periodic model, where the pulse time-delays do not change from period to period, but the amplitudes vary. This is a hybrid case in which the number of degrees of freedom in the time-delays is finite, but there is an infinite number of degrees of freedom in the amplitudes. Therefore, the proposed recovery scheme generally requires an infinite number of samples. This differs from the periodic and finite cases we discuss in this paper, which have a finite number of degrees of freedom and, consequently, require only a finite number of samples. In this paper we study sampling of signals consisting of a stream of pulses, covering three different cases: periodic, finite and infinite streams of pulses. The criteria we consider for designing such systems are: a) a minimal sampling rate which allows perfect reconstruction, b) numerical stability (with sufficiently separated time delays), and c) minimal restrictions on the number of pulses per sampling period. We begin by treating periodic pulse streams.
For this setting, we develop a general sampling scheme for arbitrary pulse shapes which allows us to determine the times and amplitudes of the pulses from a minimal number of samples. As we show, previous work [3] is a special case of our extended results. In contrast to the infinite time-support of the filters in [3], we develop a compactly supported class of filters which satisfy our mathematical condition. This class of filters consists of a sum of sinc functions in the frequency domain. We therefore refer to such functions as Sum of Sincs (SoS). To the best of our knowledge, this is the first class of finite support filters that solve the periodic case. As we discuss in detail in Section V, these filters are related to the exponential reproducing kernels introduced in [13]. The compact support of the SoS filters is the key to extending the periodic solution to the finite stream case. Generalizing the SoS class, we design a sampling and reconstruction scheme which perfectly reconstructs a finite stream of pulses from a minimal number of samples, as long as the pulse shape has compact support. Our reconstruction is numerically stable both for small values of L and for a large number of pulses, e.g., L = 100. In contrast, Gaussian sampling filters [3], [12] are unstable for L > 9, and we show in simulations that B-splines and E-splines [13] exhibit large estimation errors for L ≥ 5. In addition, we demonstrate substantial improvement in noise robustness even for low values of L. Our advantage stems from the fact that we propose compactly supported filters on the one hand, while staying within the regime of Fourier coefficient reconstruction on the other. Extending our results to the infinite setting, we consider an infinite stream consisting of pulse bursts, where each burst contains a large number of pulses. The stability of our method allows us to reconstruct even a large number of closely spaced pulses, which cannot be treated using existing solutions [13].
In addition, the constraints cast on the structure of the signal are independent of L (the number of pulses in each burst), in contrast to previous work, and therefore similar sampling schemes may be used for different values of L. Finally, we show that our sampling scheme requires a lower sampling rate for L ≥ 3. As an application, we demonstrate our sampling scheme on real ultrasound imaging data acquired by GE Healthcare's ultrasound system. We obtain high accuracy estimation while reducing the number of samples by two orders of magnitude in comparison with current imaging techniques. The remainder of the paper is organized as follows. In Section II we present the periodic signal model, and derive a general sampling scheme. The SoS class is then developed and demonstrated via simulations. The extension to the finite case is presented in Section III, followed by simulations showing the advantages of our method in high order problems and noisy settings. In Section IV, we treat infinite streams of pulses. Section V explores the relationship of our work to previous methods. Finally, in Section VI, we demonstrate our algorithm on real ultrasound imaging data. II. PERIODIC STREAM OF PULSES A. Problem Formulation Throughout the paper we denote matrices and vectors by bold font, with lowercase letters corresponding to vectors and uppercase letters to matrices. The nth element of a vector a is written as a_n, and A_ij denotes the ijth element of a matrix A. Superscripts (·)*, (·)^T and (·)^H represent complex conjugation, transposition and conjugate transposition, respectively. The Moore-Penrose pseudo-inverse of a matrix A is written as A†. The continuous-time Fourier transform (CTFT) of a continuous-time signal x(t) ∈ L₂ is defined by X(ω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt, and ⟨x(t), y(t)⟩ = ∫_{−∞}^{∞} x*(t) y(t) dt (1) denotes the inner product between two L₂ signals.
Consider a τ-periodic stream of pulses, defined as x(t) = Σ_{m∈Z} Σ_{l=1}^{L} a_l h(t − t_l − mτ), (2) where h(t) is a known pulse shape, τ is the known period, and {t_l, a_l}_{l=1}^{L}, t_l ∈ [0, τ), a_l ∈ C, l = 1, ..., L, are the unknown delays and amplitudes. Our goal is to sample x(t) and reconstruct it from a minimal number of samples. Since the signal has 2L degrees of freedom, we expect the minimal number of samples to be 2L. We are primarily interested in pulses which have small time-support. Direct uniform sampling of 2L samples of the signal will result in many zero samples, since the probability for a sample to hit a pulse is very low. Therefore, we must construct a more sophisticated sampling scheme. Define the periodic continuation of h(t) as f(t) = Σ_{m∈Z} h(t − mτ). Using Poisson's summation formula [15], f(t) may be written as f(t) = (1/τ) Σ_{k∈Z} H(2πk/τ) e^{j2πkt/τ}, (3) where H(ω) denotes the CTFT of the pulse h(t). Substituting (3) into (2) we obtain x(t) = Σ_{l=1}^{L} a_l f(t − t_l) = Σ_{k∈Z} (1/τ) H(2πk/τ) [Σ_{l=1}^{L} a_l e^{−j2πkt_l/τ}] e^{j2πkt/τ} = Σ_{k∈Z} X[k] e^{j2πkt/τ}, (4) where we denoted X[k] = (1/τ) H(2πk/τ) Σ_{l=1}^{L} a_l e^{−j2πkt_l/τ}. (5) The expansion in (4) is the Fourier series representation of the τ-periodic signal x(t) with Fourier coefficients given by (5). Following [3], we now show that once 2L or more Fourier coefficients of x(t) are known, we may use conventional tools from spectral analysis to determine the unknowns {t_l, a_l}_{l=1}^{L}. The method by which the Fourier coefficients are obtained will be presented in subsequent sections. Define a set K of M consecutive indices such that H(2πk/τ) ≠ 0, ∀k ∈ K. We assume such a set exists, which is usually the case for short time-support pulses h(t). Denote by H the M × M diagonal matrix with kth entry (1/τ) H(2πk/τ), and by V(t) the M × L matrix with klth element e^{−j2πkt_l/τ}, where t = {t_1, ..., t_L} is the vector of the unknown delays.
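The Fourier series representation (4)-(5) can be verified numerically. The following sketch (our own illustration, using a Gaussian pulse with CTFT H(ω) = exp(−σ²ω²/2) and illustrative delays and amplitudes) compares a truncated version of the direct definition (2) with the truncated Fourier series built from the coefficients (5):

```python
import cmath
import math

tau = 1.0
sigma = 0.05                 # pulse width (illustrative)
delays = [0.3, 0.7]          # t_l
amps = [1.0, -0.5]           # a_l

def h(t):                    # Gaussian pulse; its CTFT is H(w) = exp(-sigma^2 w^2 / 2)
    return math.exp(-t**2 / (2 * sigma**2)) / math.sqrt(2 * math.pi * sigma**2)

def H(w):
    return math.exp(-sigma**2 * w**2 / 2)

def X(k):                    # Fourier coefficient, eq. (5)
    return (1 / tau) * H(2 * math.pi * k / tau) * sum(
        a * cmath.exp(-2j * math.pi * k * tl / tau) for a, tl in zip(amps, delays))

def x_direct(t, periods=3):  # eq. (2), truncated to a few periods
    return sum(a * h(t - tl - m * tau)
               for m in range(-periods, periods + 1)
               for a, tl in zip(amps, delays))

def x_fourier(t, kmax=25):   # eq. (4), truncated Fourier series
    return sum(X(k) * cmath.exp(2j * math.pi * k * t / tau)
               for k in range(-kmax, kmax + 1)).real

print(abs(x_direct(0.31) - x_fourier(0.31)))   # tiny truncation error only
```

Because H(2πk/τ) decays quickly for the Gaussian, 25 harmonics already reproduce the signal to high accuracy.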
In addition, denote by a the length-L vector whose lth element is a_l, and by x the length-M vector whose kth element is X[k]. We may then write (5) in matrix form as x = H V(t) a. (6) Since H is invertible by construction, we define y = H^{−1} x, which satisfies y = V(t) a. (7) The matrix V is a Vandermonde matrix and therefore has full column rank [11], [16] as long as M ≥ L and the time-delays are distinct, i.e., t_i ≠ t_j for all i ≠ j. Writing the expression for the kth element of the vector y in (7) explicitly: y_k = Σ_{l=1}^{L} a_l e^{−j2πkt_l/τ}. Evidently, given the vector x, (7) is a standard problem of finding the frequencies and amplitudes of a sum of L complex exponentials (see [11] for a review of this topic). This problem may be solved as long as |K| = M ≥ 2L. The annihilating filter approach used extensively by Vetterli et al. [3], [10] is one way of recovering the frequencies, and is thoroughly described in the literature [3], [10], [11]. This method can solve the problem using the critical number of samples M = 2L, as opposed to other techniques such as MUSIC [17], [18] and ESPRIT [19], which require oversampling. Since we are interested in minimal-rate sampling, we use the annihilating filter throughout the paper. B. Obtaining the Fourier Series Coefficients As we have seen, given the vector of M ≥ 2L Fourier series coefficients x, we may use standard tools from spectral analysis to determine the set {t_l, a_l}_{l=1}^{L}. In practice, however, the signal is sampled in the time domain, and therefore we do not have direct access to samples of x. Our goal is to design a single-channel sampling scheme which allows us to determine x from time-domain samples. In contrast to previous work [3], [10] which focused on a low-pass sampling filter, in this section we derive a general condition on the sampling kernel allowing us to obtain the vector x.
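The annihilating filter step described above can be sketched in a few lines. This is our own minimal illustration for the critical case M = 2L (the function name and example values are assumptions, not from the paper): the filter coefficients are found from a small Toeplitz system, the delays from the roots u_l = e^{−j2πt_l/τ}, and the amplitudes from the Vandermonde system (7):

```python
import numpy as np

def annihilating_filter(y, L, tau):
    """Recover {t_l, a_l} from y[k] = sum_l a_l e^{-j 2 pi k t_l / tau},
    k = 0..2L-1 (critical number of values M = 2L)."""
    y = np.asarray(y, dtype=complex)
    # Annihilation equations: y[k] + sum_{i=1}^{L} h_i y[k-i] = 0, k = L..2L-1.
    A = np.array([[y[k - i] for i in range(1, L + 1)] for k in range(L, 2 * L)])
    h = np.concatenate(([1.0], np.linalg.solve(A, -y[L:2 * L])))
    u = np.roots(h)                               # u_l = e^{-j 2 pi t_l / tau}
    t = np.mod(-np.angle(u) * tau / (2 * np.pi), tau)
    # Amplitudes from the Vandermonde system y[k] = sum_l a_l u_l^k.
    V = np.vander(u, N=2 * L, increasing=True).T  # shape (2L, L)
    a, *_ = np.linalg.lstsq(V, y, rcond=None)
    order = np.argsort(t)
    return t[order], a[order].real

# Example with L = 2 pulses (illustrative true values).
tau, t_true, a_true = 1.0, np.array([0.2, 0.6]), np.array([1.0, 0.5])
y = [sum(a * np.exp(-2j * np.pi * k * tl / tau) for a, tl in zip(a_true, t_true))
     for k in range(4)]
t_est, a_est = annihilating_filter(y, L=2, tau=tau)
print(t_est, a_est)   # recovers t_true and a_true
```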
For the sake of clarity we confine ourselves to uniform sampling; the results extend in a straightforward manner to nonuniform sampling as well. Consider sampling the signal x(t) uniformly with sampling kernel s*(−t) and sampling period T, as depicted in Fig. 1. The samples are given by c[n] = ∫_{−∞}^{∞} x(t) s*(t − nT) dt = ⟨s(t − nT), x(t)⟩. (9) Substituting (4) into (9) we have c[n] = Σ_{k∈Z} X[k] ∫_{−∞}^{∞} e^{j2πkt/τ} s*(t − nT) dt = Σ_{k∈Z} X[k] e^{j2πknT/τ} ∫_{−∞}^{∞} e^{j2πkt/τ} s*(t) dt = Σ_{k∈Z} X[k] e^{j2πknT/τ} S*(2πk/τ), (10) where S(ω) is the CTFT of s(t). Choosing any filter s(t) which satisfies S(ω) = 0 for ω = 2πk/τ, k ∉ K; S(ω) nonzero for ω = 2πk/τ, k ∈ K; and S(ω) arbitrary otherwise, (11) we can rewrite (10) as c[n] = Σ_{k∈K} X[k] e^{j2πknT/τ} S*(2πk/τ). (12) In contrast to (10), the sum in (12) is finite. Note that (11) implies that any real filter meeting this condition will satisfy k ∈ K ⇒ −k ∈ K, and in addition S(2πk/τ) = S*(−2πk/τ), due to the conjugate symmetry of real filters. Defining the M × M diagonal matrix S whose kth entry is S*(2πk/τ) for all k ∈ K, and the length-N vector c whose nth element is c[n], we may write (12) as c = V(−t_s) S x, (13) where t_s = {nT : n = 0, ..., N − 1}, and V is defined as in (6) with the parameter −t_s and dimensions N × M. The matrix S is invertible by construction. Since V is Vandermonde, it is left invertible as long as N ≥ M. Therefore, x = S^{−1} V†(−t_s) c. (14) In the special case where N = M and T = τ/N, the recovery in (14) becomes x = S^{−1} DFT{c}, (15) i.e., the vector x is obtained by applying the Discrete Fourier Transform (DFT) to the sample vector, followed by a correction matrix related to the sampling filter. The idea behind this sampling scheme is that each sample is actually a linear combination of the elements of x. The sampling kernel s(t) is designed to pass the coefficients X[k], k ∈ K, while suppressing all other coefficients X[k], k ∉ K.
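The linear recovery of x from the time-domain samples can be checked numerically. The sketch below (our own illustration with L = 2 Diracs, K = {−2, ..., 2}, and arbitrary nonzero filter values on K, all assumptions on our part) simulates the samples via (12)-(13) and inverts the system as in (14):

```python
import numpy as np

tau, L = 1.0, 2
t_true = np.array([0.25, 0.8])
a_true = np.array([1.0, -0.7])
K = np.arange(-2, 3)            # |K| = M = 5 >= 2L
M = len(K)
N, T = M, tau / M               # critical case N = M, T = tau/N

# Fourier coefficients, eq. (5), for h(t) = delta(t), i.e. H(w) = 1.
x = np.array([np.sum(a_true * np.exp(-2j * np.pi * k * t_true / tau))
              for k in K]) / tau

# Arbitrary nonzero filter values S*(2 pi k / tau) on K, per condition (11).
S = np.diag([1.0, 0.5, 2.0, 0.5, 1.0]).astype(complex)

# V(-t_s) from eq. (13): element (n, k) is e^{j 2 pi k n T / tau}.
V = np.exp(2j * np.pi * np.outer(np.arange(N) * T, K) / tau)

c = V @ S @ x                                     # time-domain samples, eq. (13)
x_rec = np.linalg.inv(S) @ np.linalg.pinv(V) @ c  # recovery, eq. (14)
print(np.allclose(x_rec, x))                      # True
```

Once x is recovered, the delays and amplitudes follow from the annihilating filter step of the previous subsection.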
This is exactly what the condition in (11) means. This sampling scheme guarantees that each sample combination is linearly independent of the others. Therefore, the linear system of equations in (13) has full column rank, which allows us to solve for the vector x. We summarize this result in the following theorem. Theorem 1. Consider the τ-periodic stream of pulses of order L: x(t) = Σ_{m∈Z} Σ_{l=1}^{L} a_l h(t − t_l − mτ). Choose a set K of consecutive indices for which H(2πk/τ) ≠ 0, ∀k ∈ K. Then the samples c[n] = ⟨s(t − nT), x(t)⟩, n = 0, ..., N − 1, uniquely determine the signal x(t) for any s(t) satisfying condition (11), as long as N ≥ |K| ≥ 2L. In order to extend Theorem 1 to nonuniform sampling, we only need to substitute the nonuniform sampling times in the vector t_s in (14). Theorem 1 presents a general single-channel sampling scheme. One special case of this framework is the one proposed by Vetterli et al. in [3], in which s*(−t) = B sinc(−Bt), where B = M/τ and N ≥ M ≥ 2L. In this case s(t) is an ideal low-pass filter of bandwidth B with S(ω) = (1/√(2π)) rect(ω/(2πB)). (16) Clearly, (16) satisfies the general condition in (11) with K = {−⌊M/2⌋, ..., ⌊M/2⌋} and S(2πk/τ) = 1/√(2π), ∀k ∈ K. Note that since this filter is real valued, it must satisfy k ∈ K ⇒ −k ∈ K, i.e., the indices come in pairs except for k = 0. Since k = 0 is part of the set K, in this case the cardinality M = |K| must be odd valued, so that N ≥ M ≥ 2L + 1 samples are required, rather than the minimal rate N ≥ 2L. The ideal low-pass filter is bandlimited, and therefore has infinite time-support, so that it cannot be extended to finite and infinite streams of pulses. In the next section we propose a class of non-bandlimited sampling kernels, which exploit the additional degrees of freedom in condition (11), and have compact support in the time domain. The compact support allows us to extend this class to finite and infinite streams, as we show in Sections III and IV, respectively. C.
Compactly Supported Sampling Kernels Consider the following SoS class, which consists of a sum of sincs in the frequency domain: G(ω) = (τ/√(2π)) Σ_{k∈K} b_k sinc(ω/(2π/τ) − k), (17) where b_k ≠ 0, k ∈ K. The filter in (17) is real valued if and only if k ∈ K ⇒ −k ∈ K and b_k = b*_{−k} for all k ∈ K. Since for each sinc in the sum, sinc(ω/(2π/τ) − k) = 1 for ω = 2πk'/τ with k' = k, and 0 for ω = 2πk'/τ with k' ≠ k, (18) the filter G(ω) satisfies (11) by construction. Switching to the time domain, g(t) = rect(t/τ) Σ_{k∈K} b_k e^{j2πkt/τ}, (19) which is clearly a time-compact filter with support τ. The SoS class in (19) may be extended to G(ω) = (τ/√(2π)) Σ_{k∈K} b_k φ(ω/(2π/τ) − k), (20) where b_k ≠ 0, k ∈ K, and φ(ω) is any function satisfying φ(ω) = 1 for ω = 0, φ(ω) = 0 for |ω| ∈ N, and arbitrary otherwise. (21) This more general structure allows for smooth versions of the rect function, which is important when practically implementing analog filters. The function g(t) represents a class of filters determined by the parameters {b_k}_{k∈K}. These degrees of freedom offer a filter design tool where the free parameters {b_k}_{k∈K} may be optimized for different goals, e.g., parameters which will result in a feasible analog filter. In Theorem 2 below, we show how to choose {b_k} to minimize the mean-squared error (MSE) in the presence of noise. Determining the parameters {b_k}_{k∈K} may also be viewed from a more empirical point of view. The impulse response of any analog filter having support τ may be written in terms of a windowed Fourier series as Φ(t) = rect(t/τ) Σ_{k∈Z} β_k e^{j2πkt/τ}. (22) Confining ourselves to filters which satisfy β_k ≠ 0, k ∈ K, we may truncate the series and choose b_k = β_k for k ∈ K, and b_k = 0 for k ∉ K, (23) as the parameters of g(t) in (19). With this choice, g(t) can be viewed as an approximation to Φ(t).
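The defining property of the SoS kernel, that G(2πk/τ) is nonzero exactly on K and zero at all other multiples of 2π/τ, can be verified numerically from the time-domain form (19). The sketch below (our own illustration with K = {−3, ..., 3} and all b_k = 1) evaluates the CTFT of g(t) by numerical integration over its support:

```python
import numpy as np

tau = 1.0
K = np.arange(-3, 4)
b = np.ones(len(K))          # b_k = 1 for all k in K (Dirichlet case)

# g(t) on its support [-tau/2, tau/2], eq. (19).
t = np.linspace(-tau / 2, tau / 2, 20001)
gt = np.exp(2j * np.pi * np.outer(t, K) / tau) @ b

def G(w):
    """Numerical CTFT of g at frequency w (trapezoidal rule on the support)."""
    f = gt * np.exp(-1j * w * t)
    return np.sum((f[:-1] + f[1:]) / 2) * (t[1] - t[0])

# Condition (11): G(2 pi k / tau) = tau * b_k for k in K,
# and zero at every other multiple of 2 pi / tau.
print(abs(G(2 * np.pi * 2 / tau) - tau))   # ~0  (k = 2 is in K, b_2 = 1)
print(abs(G(2 * np.pi * 5 / tau)))         # ~0  (k = 5 is outside K)
```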
Notice that there is an inherent tradeoff here: using more coefficients will result in a better approximation of the analog filter, but in turn will require more samples, since the number of samples N must be greater than the cardinality of the set K. To demonstrate the filter g(t), we first choose K = {−p, ..., p} and set all coefficients {b_k} to one, resulting in g(t) = rect(t/τ) Σ_{k=−p}^{p} e^{j2πkt/τ} = rect(t/τ) D_p(2πt/τ), (24) where the Dirichlet kernel D_p(t) is defined by D_p(t) = Σ_{k=−p}^{p} e^{jkt} = sin((p + 1/2)t) / sin(t/2). (25) The resulting filter for p = 10 and τ = 1 sec is depicted in Fig. 2. This filter is also optimal in an MSE sense for the case h(t) = δ(t), as we show in Theorem 2. In Fig. 3 we plot g(t) for the case in which the b_k's are chosen as a length-M symmetric Hamming window: b_k = 0.54 − 0.46 cos(2π(k + ⌊M/2⌋)/M), k ∈ K. (26) Notice that in both cases the coefficients satisfy b_k = b*_{−k}, and therefore the resulting filters are real valued. In the presence of noise, the choice of {b_k}_{k∈K} will affect the performance. Consider the case in which digital noise is added to the samples c, so that y = c + w, with w denoting a white Gaussian noise vector. Using (13), y = V(−t_s) B x + w, (27) where B is a diagonal matrix having {b_k} on its diagonal. To choose the optimal B we assume that the {a_l} are uncorrelated with variance σ_a², independent of {t_l}, and that {t_l} are uniformly distributed in [0, τ). Since the noise is added to the samples after filtering, increasing the filter's amplification will always reduce the MSE. Therefore, the filter's energy must be normalized, and we do so by adding the constraint Tr(B*B) = 1. Under these assumptions, we have the following theorem: Theorem 2.
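The closed-form identity (25) for the Dirichlet kernel is easy to confirm numerically. This short check (our own illustration; the function names are assumptions) compares the exponential sum with the sine ratio at a point where sin(t/2) ≠ 0:

```python
import cmath
import math

def dirichlet_sum(p, t):
    """D_p(t) as the exponential sum in eq. (25)."""
    return sum(cmath.exp(1j * k * t) for k in range(-p, p + 1)).real

def dirichlet_closed(p, t):
    """D_p(t) in closed form: sin((p + 1/2) t) / sin(t / 2)."""
    return math.sin((p + 0.5) * t) / math.sin(t / 2)

p = 10
print(dirichlet_sum(p, 0.37) - dirichlet_closed(p, 0.37))   # ~0
```

At t = 0 the closed form is a removable 0/0 singularity with limit 2p + 1, the value of the exponential sum there.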
The minimal MSE of a linear estimator of $\mathbf{x}$ from the noisy samples $\mathbf{y}$ in (27) is achieved by choosing the coefficients

$$|b_i|^2 = \begin{cases} \dfrac{\sigma^2}{N}\left(\sqrt{\dfrac{N}{\lambda\sigma^2}} - \dfrac{1}{|\tilde h_i|^2}\right) & \lambda \le |\tilde h_i|^4 N/\sigma^2 \\[2ex] 0 & \lambda > |\tilde h_i|^4 N/\sigma^2, \end{cases} \qquad (28)$$

where $\tilde h_k = H(2\pi k/\tau)\,\sigma_a\sqrt{L}/\tau$ and the $|\tilde h_k|$ are arranged in increasing order,

$$\sqrt{\lambda} = \frac{(|K| - m)\sqrt{N/\sigma^2}}{N/\sigma^2 + \sum_{i=m+1}^{|K|} 1/|\tilde h_i|^2}, \qquad (29)$$

and $m$ is the smallest index for which $\lambda \le |\tilde h_{m+1}|^4 N/\sigma^2$.

Proof: See the Appendix.

An important consequence of Theorem 2 is the following corollary.

Corollary 1. If $|\tilde h_k|^2 = |\tilde h_\ell|^2$ for all $k, \ell \in K$, then the optimal coefficients are $|b_i|^2 = 1/|K|$ for all $i \in K$.

Proof: It is evident from (28) that if $|\tilde h_k| = |\tilde h_\ell|$ then $|b_k| = |b_\ell|$. To satisfy the trace constraint $\mathrm{Tr}(\mathbf{B}^*\mathbf{B}) = 1$, $\lambda$ cannot be chosen such that all $b_i = 0$. Therefore, $|b_i|^2 = 1/|K|$ for all $i \in K$.

From Corollary 1 it follows that when $h(t) = \delta(t)$, the optimal choice of coefficients is $b_k = b_j$ for all $k$ and $j$. We therefore use this choice when simulating noisy settings in the next section.

Our sampling scheme for the periodic case consists of sampling kernels having compact support in the time domain. In the next section we exploit the compact support of our filter and extend the results to the finite stream case. We will show that our sampling and reconstruction scheme offers a numerically stable solution with high noise robustness.

As a first demonstration, consider a periodic stream of Gaussian pulses

$$h(t) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp(-t^2/2\sigma^2), \qquad (30)$$

with parameter $\sigma = 7\cdot 10^{-3}$ and period $\tau = 1$. The time-delays and amplitudes were chosen randomly. In order to demonstrate near-critical sampling we choose the set of indices $K = \{-L,\dots,L\}$ with cardinality $M = |K| = 11$. We filter $x(t)$ with $g(t)$ whose coefficients are given by (26). The filter output is sampled uniformly $N$ times, with sampling period $T = \tau/N$, where $N = M = 11$. The sampling process is depicted in Fig. 4. The vector $\mathbf{x}$ is obtained using (14), and the delays and amplitudes are determined by the annihilating filter method. Reconstruction results are depicted in Fig. 5.
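The annihilating-filter step mentioned above can be sketched as follows. This is a toy, self-contained illustration (not the paper's code): we start directly from noiseless moments $s[k] = \sum_l a_l u_l^k$ with $u_l = e^{-j2\pi t_l/\tau}$ — in the paper these Fourier coefficients are obtained from the samples via (14) — and recover the delays from the roots of the annihilating polynomial, using the same $\mathbf{t} = \tau\cdot(1/3,\,2/3)^T$ as in the simulations:

```python
import numpy as np

tau, L = 1.0, 2
t_true = np.array([1 / 3, 2 / 3]) * tau
a_true = np.array([1.0, 1.0])
u = np.exp(-2j * np.pi * t_true / tau)

# 2L+1 consecutive moments s[k] = sum_l a_l u_l^k, k = -L..L
k = np.arange(-L, L + 1)
s = (a_true[None, :] * u[None, :] ** k[:, None]).sum(axis=1)

# Annihilating filter: find h with sum_i h[i] s[m-i] = 0 for all valid m.
# Build the (L+1) x (L+1) Toeplitz system from the moment sequence (0-based indices).
S = np.array([[s[m - i] for i in range(L + 1)] for m in range(L, 2 * L + 1)])
_, _, Vh = np.linalg.svd(S)
h_ann = Vh[-1].conj()                  # null vector = annihilating-filter coefficients
roots = np.roots(h_ann)                # the polynomial roots are the u_l's
t_est = np.sort((-np.angle(roots) * tau / (2 * np.pi)) % tau)

assert np.allclose(t_est, np.sort(t_true), atol=1e-8)
```

In this noiseless setting the recovery is exact; with noise, a taller Toeplitz system built from more moments is solved in a total least-squares sense, optionally after Cadzow denoising.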
The estimation and reconstruction are both exact to numerical precision. Analog filtering operations are carried out by discrete approximations over a fine grid; the analog signal and filters are mimicked by high-rate digital signals. Since the sampling rate which constructs the fine grid is two to three orders of magnitude higher than the final sampling rate $1/T$, the simulations reflect the analog results very well.

In the noisy setting, samples were taken uniformly with sampling period $T = \tau/N$. We choose $g(t)$ given by (24). As explained earlier, only the values of the filter at the points $2\pi k/\tau$, $k \in K$, affect the samples (see (11)). Since the values of the filter at the relevant points coincide and are equal to one for both the low-pass filter of [3] and $g^*(-t)$, the resulting samples for the two settings are identical. Therefore, we present results for our method only, and note that the exact same results are obtained using the approach of [3]. In our setup, white Gaussian noise (AWGN) with variance $\sigma_n^2$ is added to the samples, where we define the SNR as

$$\mathrm{SNR} = \frac{\frac{1}{N}\|\mathbf{c}\|_2^2}{\sigma_n^2}, \qquad (31)$$

with $\mathbf{c}$ denoting the clean samples. In our experiments the noise variance is set to give the desired SNR. The simulation consists of 1000 experiments for each SNR, where in each experiment a new noise vector is created. We choose $\mathbf{t} = \tau\cdot(1/3\ \ 2/3)^T$ and $\mathbf{a} = \tau\cdot(1\ \ 1)^T$, where these vectors remain constant throughout the experiments. We define the error in time-delay estimation as the average of $\|\mathbf{t} - \hat{\mathbf{t}}\|_2^2$, where $\mathbf{t}$ and $\hat{\mathbf{t}}$ denote the true and estimated time-delays, respectively, sorted in increasing order. The error in amplitudes is similarly defined by $\|\mathbf{a} - \hat{\mathbf{a}}\|_2^2$. In Fig. 6 we show the error as a function of SNR for both delay and amplitude estimation. Estimation of the time-delays is the main interest in the FRI literature, due to the special nonlinear methods required for delay recovery.
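Setting the noise variance for a desired SNR simply inverts (31); a minimal helper (illustrative, with a toy clean-sample vector and the helper name chosen here, not taken from the paper):

```python
import numpy as np

def noise_std_for_snr(c, snr_db):
    # Invert Eq. (31): SNR = (1/N)||c||^2 / sigma_n^2, with the SNR given in dB
    snr_lin = 10.0 ** (snr_db / 10.0)
    return np.sqrt(np.sum(np.abs(c) ** 2) / (len(c) * snr_lin))

c = np.ones(11)                         # toy clean samples with (1/N)||c||^2 = 1
sigma_n = noise_std_for_snr(c, snr_db=20.0)
assert np.isclose(sigma_n, 0.1)         # SNR = 100  =>  sigma_n^2 = 0.01
# a noisy realization: y = c + sigma_n * rng.standard_normal(len(c))
```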
Once the delays are known, the standard least-squares method is typically used to recover the amplitudes; we therefore focus on delay estimation in the sequel. Finally, for the same setting we can improve reconstruction accuracy at the expense of oversampling, as illustrated in Fig. 7. Here we show recovery performance for oversampling factors of 1, 2, 4 and 8. The oversampling was exploited using the total least-squares method, followed by Cadzow's iterative denoising (both described in detail in [10]).

III. FINITE STREAM OF PULSES

A. Extension of SoS Class

Consider now a finite stream of pulses, defined as

$$\tilde x(t) = \sum_{l=1}^{L} a_l h(t - t_l), \quad t_l \in [0,\tau),\ a_l \in \mathbb{R},\ l = 1,\dots,L, \qquad (32)$$

where, as in Section II, $h(t)$ is a known pulse shape, and $\{t_l, a_l\}_{l=1}^{L}$ are the unknown delays and amplitudes. The time-delays $\{t_l\}_{l=1}^{L}$ are restricted to lie in a finite time interval $[0,\tau)$. Since there are only $2L$ degrees of freedom, we wish to design a sampling and reconstruction method which perfectly reconstructs $\tilde x(t)$ from $2L$ samples. In this section we assume that the pulse $h(t)$ has finite support $R$, i.e.,

$$h(t) = 0, \quad \forall |t| \ge R/2. \qquad (33)$$

This is a rather weak condition, since our primary interest is in very short pulses which have wide, or even infinite, frequency support, and therefore cannot be sampled efficiently using classical sampling results for bandlimited signals. We now investigate the structure of the samples taken in the periodic case, and design a sampling kernel for the finite setting which obtains precisely the same samples $c[n]$ as in the periodic case. In the periodic setting, the resulting samples are given by (10).
Using $g(t)$ of (19) as the sampling kernel, we have

$$c[n] = \langle g(t - nT), x(t)\rangle = \sum_{m\in\mathbb{Z}}\sum_{l=1}^{L} a_l \int_{-\infty}^{\infty} h(t - t_l - m\tau)\,g^*(t - nT)\,dt = \sum_{m\in\mathbb{Z}}\sum_{l=1}^{L} a_l \int_{-\infty}^{\infty} h(t)\,g^*\big(t - (nT - t_l - m\tau)\big)\,dt = \sum_{m\in\mathbb{Z}}\sum_{l=1}^{L} a_l\,\varphi(nT - t_l - m\tau), \qquad (34)$$

where we defined

$$\varphi(\vartheta) = \langle g(t - \vartheta),\, h(t)\rangle. \qquad (35)$$

Since $g(t)$ in (19) vanishes for all $|t| > \tau/2$ and $h(t)$ satisfies (33), the support of $\varphi(t)$ is $(R+\tau)$, i.e.,

$$\varphi(t) = 0 \quad \text{for all } |t| \ge (R+\tau)/2. \qquad (36)$$

Using this property, the summation in (34) is over nonzero values only for indices $m$ satisfying

$$|nT - t_l - m\tau| < (R+\tau)/2. \qquad (37)$$

Sampling within the window $[0,\tau)$, i.e., $nT \in [0,\tau)$, and noting that the time-delays lie in the interval $t_l \in [0,\tau)$, $l = 1,\dots,L$, (37) implies

$$(R+\tau)/2 > |nT - t_l - m\tau| \ge |m|\tau - |nT - t_l| > (|m| - 1)\tau. \qquad (38)$$

Here we used the triangle inequality and the fact that $|nT - t_l| < \tau$ in our setting. Therefore,

$$|m| < \frac{R}{2\tau} + \frac{3}{2} \;\Rightarrow\; |m| \le \left\lceil \frac{R}{2\tau} + \frac{3}{2}\right\rceil - 1 \stackrel{\triangle}{=} r, \qquad (39)$$

i.e., the elements of the sum in (34) vanish for all $m$ but the values in (39). Consequently, the infinite sum in (34) reduces to a finite sum over $|m| \le r$, so that (34) becomes

$$c[n] = \sum_{m=-r}^{r}\sum_{l=1}^{L} a_l\,\varphi(nT - t_l - m\tau) = \sum_{m=-r}^{r}\sum_{l=1}^{L} a_l \int_{-\infty}^{\infty} h(t - t_l)\,g^*(t - nT + m\tau)\,dt = \left\langle \sum_{m=-r}^{r} g(t - nT + m\tau),\ \sum_{l=1}^{L} a_l h(t - t_l)\right\rangle, \qquad (40)$$

where in the last equality we used the linearity of the inner product. Defining a function which consists of $(2r+1)$ periods of $g(t)$,

$$g_r(t) = \sum_{m=-r}^{r} g(t + m\tau), \qquad (41)$$

we conclude that

$$c[n] = \langle g_r(t - nT),\, \tilde x(t)\rangle. \qquad (42)$$

Therefore, the samples $c[n]$ can be obtained by filtering the aperiodic signal $\tilde x(t)$ with the filter $g_r^*(-t)$ prior to sampling. This filter has compact support equal to $(2r+1)\tau$. Since the finite-setting samples (42) are identical to those of the periodic case (34), recovery of the delays and amplitudes is performed exactly as in the periodic setting. We summarize this result in the following theorem.

Theorem 3.
Consider the finite stream of pulses given by

$$\tilde x(t) = \sum_{l=1}^{L} a_l h(t - t_l), \quad t_l \in [0,\tau),\ a_l \in \mathbb{R},$$

where $h(t)$ has finite support $R$. Choose a set $K$ of consecutive indices for which $H(2\pi k/\tau) \neq 0$ for all $k \in K$. Then, the $N$ samples given by

$$c[n] = \langle g_r(t - nT),\, \tilde x(t)\rangle, \quad n = 0,\dots,N-1,\ nT \in [0,\tau),$$

where $r$ is defined in (39) and $g_r(t)$ is compactly supported and defined by (41) (based on the filter $g(t)$ in (17)), uniquely determine the signal $\tilde x(t)$ as long as $N \ge |K| \ge 2L$.

If, for example, the support $R$ of $h(t)$ satisfies $R \le \tau$, then we obtain from (39) that $r = 1$. The filter in this case therefore consists of 3 periods of $g(t)$:

$$g_{3p}(t) \stackrel{\triangle}{=} g_r(t)\big|_{r=1} = g(t - \tau) + g(t) + g(t + \tau). \qquad (43)$$

Practical implementation of the filter may be carried out using delay lines. The relation of this scheme to previous approaches will be investigated in Section V.

Perfect reconstruction is achieved, as can be seen in Fig. 8. The estimation is exact to numerical precision.

2) High Order Problems: The same simulation was carried out with $L = 20$ Diracs. The results are shown in Fig. 9. Here again, the reconstruction is perfect even for large $L$.

3) Noisy Case: We now consider the performance of our method in the presence of noise. In addition, we compare our performance to the B-spline and E-spline methods proposed in [13], and to the Gaussian sampling kernel [3]. We examine 4 scenarios, in which the signal consists of $L = 2, 3, 5, 20$ Diracs. (Due to the computational complexity of calculating the time-domain expression for high-order E-splines, these functions were simulated only up to order 9, which allows for $L = 5$ pulses.) In our setup, the time-delays are equally distributed in the window $[0,\tau)$, with $\tau = 1$, and remain constant throughout the experiments. All amplitudes are set to one. The noise is added to each method's own clean samples; in other words, $\sigma_n$ in (31) is method-dependent, and is determined by the desired SNR and the samples of the specific technique.
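The index bound behind the choice of $r$ translates into a one-line helper (an illustrative sketch, reading (39) as $|m| < R/(2\tau) + 3/2$, which is what the chain (38) yields and which is consistent with $r = 1$ for $R \le \tau$):

```python
import math

def support_radius(R, tau):
    # Eq. (39): |m| < R/(2 tau) + 3/2  =>  r = ceil(R/(2 tau) + 3/2) - 1
    return math.ceil(R / (2 * tau) + 1.5) - 1

assert support_radius(R=1.0, tau=1.0) == 1   # R <= tau  =>  r = 1 (3 periods, Eq. (43))
assert support_radius(R=0.2, tau=1.0) == 1
assert support_radius(R=3.0, tau=1.0) == 2   # wider pulses require more periods of g(t)
# the resulting filter g_r(t) = sum_{m=-r}^{r} g(t + m tau) has support (2r+1) tau
```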
Hard thresholding was implemented in order to improve the spline methods, as suggested by the authors in [13]. The threshold was chosen to be $3\sigma_n$, where $\sigma_n$ is the standard deviation of the AWGN. For the Gaussian sampling kernel, the parameter $\sigma$ was optimized and took on the values $\sigma = 0.25, 0.28, 0.32, 0.9$, respectively. The results are given in Fig. 10. For $L = 2$ all methods are stable, where E-splines exhibit better performance than B-splines, and the Gaussian and SoS approaches demonstrate the lowest errors. As the value of $L$ grows, the advantage of the SoS filter becomes more prominent: for $L \ge 5$, the performance of the Gaussian and both spline methods deteriorates, with errors approaching the order of $\tau$. In contrast, the SoS filter retains its performance nearly unchanged even up to $L = 20$, where the B-spline and Gaussian methods are unstable. The improved version of the Gaussian approach presented in [12] would not perform better in this high-order case, since it fails for $L > 9$, as noted by the authors. A comparison of our approach to previous methods will be detailed in Section V.

IV. INFINITE STREAM OF PULSES

We now consider the case of an infinite stream of pulses

$$z(t) = \sum_{l\in\mathbb{Z}} a_l h(t - t_l), \quad t_l, a_l \in \mathbb{R}. \qquad (44)$$

We assume that the infinite signal has a bursty character, i.e., the signal has two distinct phases: a) bursts of maximal duration $\tau$ containing at most $L$ pulses, and b) quiet phases between bursts. For the sake of clarity we begin with the case $h(t) = \delta(t)$. For this choice, the filter $g_r^*(-t)$ in (41) reduces to $g_{3p}^*(-t)$ of (43). Since the filter $g_{3p}^*(-t)$ has compact support $3\tau$, we are assured that the current burst cannot influence samples taken $3\tau/2$ seconds before or after it. In the finite case we confined ourselves to sampling within the interval $[0,\tau)$; similarly, here, we assume that the samples are taken during the burst duration.
Therefore, if the minimal spacing between any two consecutive bursts is $3\tau/2$, then we are guaranteed that each sample taken during the burst is influenced by one burst only, as depicted in Fig. 11. Consequently, the infinite problem can be reduced to a sequential solution of local, distinct finite-order problems, as in Section III. Here the compact support of our filter comes into play, allowing us to apply local reconstruction methods.

[Fig. 11. Bursty signal $z(t)$. Spacing of $3\tau/2$ between bursts ensures that the influence of the current burst ends before taking the samples of the next burst. This is due to the finite support, $3\tau$, of the sampling kernel $g_{3p}^*(-t)$.]

In the above argument we assume we know the locations of the bursts, since we must acquire samples from within the burst duration. Samples outside the burst duration are contaminated by energy from adjacent bursts. Nonetheless, knowledge of burst locations is available in many applications, such as synchronized communication, where the receiver knows when to expect the bursts, or radar and imaging scenarios, where the transmitter is itself the receiver. We now state this result in a theorem.

Theorem 4. Consider a signal $z(t)$ which is a stream of bursts consisting of delayed and weighted Diracs. The maximal burst duration is $\tau$, and the maximal number of pulses within each burst is $L$. Then, the samples given by

$$c[n] = \langle g_{3p}(t - nT),\, z(t)\rangle, \quad n \in \mathbb{Z},$$

where $g_{3p}(t)$ is defined by (43), are a sufficient characterization of $z(t)$ as long as the spacing between two adjacent bursts is greater than $3\tau/2$, and the burst locations are known.

Extending this result to a general pulse $h(t)$ is quite straightforward, as long as $h(t)$ is compactly supported with support $R$, and we filter with $g_r^*(-t)$ as defined in (41) with the appropriate $r$ from (39).
If we can choose a set $K$ of consecutive indices for which $H(2\pi k/\tau) \neq 0$ for all $k \in K$, and we are guaranteed that the minimal spacing between two adjacent bursts is greater than $\left((2r+1)\tau + R\right)/2$, then the above theorem holds.

V. COMPARISON TO PREVIOUS METHODS

A. Periodic Case

The work in [3] was the first to address efficient sampling of pulse streams, e.g., streams of Diracs. Their approach for solving the periodic case was ideal lowpass filtering, followed by uniform sampling, which allows one to obtain the Fourier series coefficients of the signal. These coefficients are then processed by the annihilating filter to obtain the unknown time-delays and amplitudes. In Section II, we derived a general condition (11) on the sampling kernel under which recovery is guaranteed; the lowpass filter of [3] is a special case of this result. The noise robustness of both the lowpass approach and our more general method is high as long as the pulses are well separated, since reconstruction from Fourier series coefficients is stable in this case. Both approaches achieve the minimal number of samples.

The lowpass filter is bandlimited and consequently has infinite time support. Therefore, this sampling scheme is unsuitable for finite and infinite streams of pulses. The SoS class introduced in Section II consists of compactly supported filters, a property which is crucial for extending our results to finite and infinite streams of pulses. A comparison between the two methods is shown in Table I.

B. Finite Pulse Stream

The authors of [3] proposed a Gaussian sampling kernel for sampling finite streams of Diracs. The Gaussian method is numerically unstable, as mentioned in [12], since the samples are multiplied by a rapidly diverging or decaying exponent. Therefore, this approach is unsuitable for $L \ge 6$. Modifications proposed in [12] exhibit better performance and stability; however, these methods require substantial oversampling, and still exhibit instability for $L > 9$.
In [13], the family of polynomial reproducing kernels was introduced as sampling filters for the model (32), with B-splines proposed as a specific example. The B-spline sampling filter enables obtaining moments of the signal, rather than Fourier coefficients. The moments are then processed with the same annihilating filter used in previous methods. However, as mentioned by the authors, this approach is unstable for high values of $L$. This is due to the fact that, in contrast to the estimation of Fourier coefficients, estimating high-order moments is unstable, since unstable weighting of the samples is carried out during the process.

Another general family introduced in [13] for the finite model is the class of exponential reproducing kernels. As a specific case, the authors propose E-spline sampling kernels. The CTFT of an E-spline of order $N+1$ is described by

$$\hat\beta_{\boldsymbol\alpha}(\omega) = \prod_{n=0}^{N} \frac{1 - e^{\alpha_n - j\omega}}{j\omega - \alpha_n}, \qquad (45)$$

where $\boldsymbol\alpha = (\alpha_0, \alpha_1, \dots, \alpha_N)$ are free parameters. In order to use E-splines as sampling kernels for pulse streams, the authors impose a specific structure on the $\alpha$'s, namely $\alpha_n = \alpha_0 + n\lambda$. Choosing exponents having a non-vanishing real part results in unstable weighting, as in the B-spline case. However, choosing the special case of pure imaginary exponents in the E-splines, already suggested by the authors, results in a reconstruction method based on Fourier coefficients, which exhibits an interesting relation to our method. The Fourier coefficients are obtained by applying a matrix consisting of the exponential spanning coefficients $\{c_{m,n}\}$ (see [13]), instead of our Vandermonde relation (14). With this specific choice of parameters, the E-spline function satisfies (11). Interestingly, with a proper choice of spanning coefficients, it can be shown that the SoS class can reproduce exponentials with frequencies $\{2\pi k/\tau\}_{k\in K}$, and therefore satisfies the general exponential reproduction property of [13].
However, the SoS filter leads to a new sampling scheme which has substantial advantages over existing methods, including E-splines. The first advantage concerns the presence of noise, where both methods have the following structure:

$$\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{w}, \qquad (46)$$

where $\mathbf{w}$ is the noise vector. While the Fourier coefficient vector $\mathbf{x}$ is common to both approaches, the linear transformation $\mathbf{A}$ is method-dependent, and therefore the sample vector $\mathbf{y}$ differs. In our approach with $g(t)$ of (24), $\mathbf{A}$ is the DFT matrix, which has a condition number of 1 for any order $L$. In the case of E-splines, by contrast, the transformation matrix $\mathbf{A}$ consists of the E-spline exponential spanning coefficients, which has a much higher condition number, e.g., above 100 for $L = 5$. Consequently, some Fourier coefficients will have much higher noise levels than others. This scenario of high variance between the noise levels of the samples is known to deteriorate the performance of spectral analysis methods [11], the annihilating filter being one of them. This explains our simulations, which show that the SoS filter outperforms the E-spline approach in the presence of noise.

When the E-spline coefficients $\boldsymbol\alpha$ are pure imaginary, it can easily be shown that (45) becomes a multiplication of shifted sincs. This is in contrast to the SoS filter, which consists of a sum of sincs in the frequency domain. Since multiplication in the frequency domain translates to convolution in the time domain, it is clear that the support of the E-spline grows with its order, and in turn with the order of the problem $L$; in contrast, the support of the SoS filter remains unchanged. This observation becomes important when examining the infinite case. The constraint on the signal in [13] is that no more than $L$ pulses lie in any interval of length $LPT$, with $P$ the support of the filter and $T$ the sampling period. Since $P$ grows linearly with $L$, the constraint cast on the infinite stream becomes more stringent, quadratically with $L$.
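The condition-number claim for our choice of $\mathbf{A}$ is immediate to check numerically (a standalone sanity check; the E-spline spanning-coefficient matrix itself is not reproduced here):

```python
import numpy as np

N = 11
n, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
A = np.exp(-2j * np.pi * n * k / N) / np.sqrt(N)   # unitary N x N DFT matrix

# A unitary matrix has all singular values equal to 1, so cond(A) = 1 for any N:
# the noise level is spread evenly over all recovered Fourier coefficients.
assert np.isclose(np.linalg.cond(A), 1.0)
assert np.allclose(A.conj().T @ A, np.eye(N), atol=1e-12)
```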
On the other hand, the constraint on the infinite stream using the SoS filter is independent of $L$. We showed in simulations that typically for $L \ge 5$ the estimation errors, using both B-spline and E-spline sampling kernels, become very large. In contrast, our approach leads to stable reconstruction even for very high values of $L$, e.g., $L = 100$. In addition, even for low values of $L$, we showed in simulations that although the E-spline method improves on B-splines, the SoS reconstruction method outperforms both spline approaches. A comparison is described in Table II.

C. Infinite Streams

The work in [13] addressed the infinite stream case with $h(t) = \delta(t)$. They proposed filtering the signal with a polynomial reproducing sampling kernel prior to sampling. If the signal has at most $L$ Diracs within any interval of duration $LPT$, where $P$ denotes the support of the sampling filter and $T$ the sampling period, then the samples are a sufficient characterization of the signal. This condition allows one to divide the infinite stream into a sequence of finite-case problems. In our approach, the quiet phases of $1.5\tau$ between the bursts of length $\tau$ enable the reduction to the finite case. Since the infinite solution is based on the finite one, our method is advantageous in terms of stability in high-order problems and noise robustness. However, we do have an additional requirement of quiet phases between the bursts.

Regarding the sampling rate, the number of degrees of freedom of the signal per unit time, also known as the rate of innovation, is $\rho = 2L/(2.5\tau)$, which is the critical sampling rate. Our sampling rate is $2L/\tau$, and therefore we oversample by a factor of 2.5. In the same scenario, the method in [13] would require a sampling rate of $LP/(2.5\tau)$, i.e., oversampling by a factor of $P/2$. Properties of polynomial reproducing kernels imply that $P \ge 2L$; therefore, for any $L \ge 3$, our method exhibits more efficient sampling.
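The rate bookkeeping of the last paragraph can be tabulated directly (a toy calculation; taking $P = 2L$, the minimal kernel support, makes the spline-side figure a best case):

```python
def oversampling_factors(L, tau=1.0):
    # bursty model of Section IV: burst of length tau + quiet phase of 1.5 * tau
    rho = 2 * L / (2.5 * tau)           # rate of innovation (critical rate)
    sos_rate = 2 * L / tau              # SoS sampling rate
    P = 2 * L                           # minimal support of a polynomial reproducing kernel
    spline_rate = L * P / (2.5 * tau)   # rate required by the scheme of [13]
    return sos_rate / rho, spline_rate / rho

sos, spline = oversampling_factors(L=5)
assert sos == 2.5 and spline == 5.0     # oversampling factors: 2.5 vs P/2 = L
# for every L >= 3 the fixed SoS factor beats the growing spline factor
assert all(oversampling_factors(L)[0] < oversampling_factors(L)[1] for L in range(3, 50))
```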
A table comparing the various features is shown in Table III.

Recent work [14] presented a low-complexity method for reconstructing streams of pulses (both infinite and finite cases) consisting of Diracs. However, the basic assumption of this method is that there is at most one Dirac per sampling period. This means we must have prior knowledge of a lower limit on the spacing between two consecutive deltas in order to guarantee correct reconstruction. In some cases such a limit may not exist; even if it does, it will usually force us to sample at a much higher rate than the critical one.

VI. APPLICATION - ULTRASOUND IMAGING

An interesting application of our framework is ultrasound imaging. In ultrasonic imaging, an acoustic pulse is transmitted into the scanned tissue. The pulse is reflected due to changes in acoustic impedance, which occur, for example, at the boundaries between two different tissues. At the receiver, the echoes are recorded, where the time-of-arrival and power of each echo indicate the scatterer's location and strength, respectively. Accurate estimation of tissue boundaries and scatterer locations allows for reliable detection of certain illnesses, and is therefore of major clinical importance. The location of the boundaries is often more important than the power of the reflection. This stream of pulses is finite, since the pulse energy decays within the tissue.

We now demonstrate our method on real one-dimensional (1D) ultrasound data. The multiple-echo signal recorded at the receiver can be modeled as a finite stream of pulses, as in (32). The unknown time-delays correspond to the locations of the various scatterers, whereas the amplitudes correspond to their reflection coefficients. The pulse shape in this case is the Gaussian defined in (30), due to the physical characteristics of the electro-acoustic transducer (mechanical damping).
We assume the received pulse shape is known, either by assuming it is unchanged through propagation, by physically modeling ultrasonic wave propagation, or by prior estimation of the received pulse. A full investigation of mismatch in the pulse shape is left for future research. In our setting, a phantom consisting of uniformly spaced pins, mimicking point scatterers, was scanned by GE Healthcare's Vivid-i portable ultrasound imaging system [20], [21], using a 3S-RS probe. We use the data recorded by a single element in the probe, which is modeled as a 1D stream of pulses. The center frequency of the probe is $f_c = 1.7021$ MHz. The width of the transmitted Gaussian pulse in this case is $\sigma = 3\cdot 10^{-7}$ sec, and the depth of imaging is $R_{\max} = 0.16$ m, corresponding to a time window of $\tau = 2.08\cdot 10^{-4}$ sec.

In this experiment, all filtering and sampling operations are carried out digitally in simulation. The analog filter required by the sampling scheme is replaced by a lengthy finite impulse response (FIR) filter. Since the sampling frequency of the element in the system is $f_s = 20$ MHz, which is more than 5 times higher than the Nyquist rate, the recorded data represents the continuous signal reliably. Consequently, digital filtering of the high-rate sampled data vector (4160 samples) followed by proper decimation mimics the original analog sampling scheme with high accuracy. The recorded signal is depicted in Fig. 12. The band-pass ultrasonic signal is demodulated to baseband, i.e., envelope detection is performed, before it is input to the recovery process.

We carried out our sampling and reconstruction scheme on the aforementioned data. We set $L = 4$, looking for the 4 strongest echoes. Since the data is corrupted by strong noise, we oversampled the signal, obtaining twice the minimal number of samples. In addition, hard thresholding of the samples was implemented, with the threshold set to 10 percent of the maximal value.
We obtained $N = 17$ samples by decimating the output of the lengthy FIR digital filter imitating $g_{3p}^*(-t)$ from (43), where the coefficients $\{b_k\}$ were all set to one. In Fig. 13a the reconstructed signal is depicted against the full demodulated signal using all 4160 samples. Clearly, the time-delays were estimated with high precision. The amplitudes were estimated as well; however, the amplitude of the second pulse has a large error, probably due to the large noise values present in its vicinity. As mentioned earlier, though, the exact locations of the scatterers are often more important than the accurate reflection coefficients. We carried out the same experiment, only now oversampling by a factor of 4, resulting in $N = 33$ samples. Here no hard thresholding is required. The results are depicted in Fig. 13b, and are very similar to our previous results. In both simulations, the estimation error in the pulse location is around 0.1 mm.

Current ultrasound imaging technology operates on the high-rate sampled data, e.g., $f_s = 20$ MHz in our setting. Since there are usually 100 different elements in a single ultrasonic probe, each sampled at a very high rate, the data throughput becomes very high and imposes high computational complexity on the system, limiting its capabilities. Therefore, there is a demand for lowering the sampling rate, which in turn will reduce the complexity of reconstruction. Exploiting the parametric point of view, our sampling scheme allows a significant reduction of this rate.

VII. CONCLUSIONS

We presented efficient sampling and reconstruction schemes for streams of pulses. For the case of a periodic stream of pulses, we derived a general condition on the sampling kernel which allows a single-channel uniform sampling scheme. Previous work [3] is a special case of this general result. We then proposed a class of filters satisfying the condition, with compact support. Exploiting the compact support of the filters, we constructed a new sampling scheme for the case of a finite stream of pulses.
Simulations show that this method exhibits better performance than previous techniques [3], [13], in terms of stability in high-order problems and noise robustness. An extension to an infinite stream of pulses was also presented. The compact support of the filter allows for local reconstruction, and thus lowers the complexity of the problem. Finally, we demonstrated the advantage of our approach in reducing the sampling and processing rates of ultrasound imaging, by applying our techniques to real ultrasound data.

APPENDIX
PROOF OF THEOREM 2

The MSE of the optimal linear estimator of the vector $\mathbf{x}$ from the measurement vector $\mathbf{y}$ is known to be [22]

$$\mathrm{MSE} = \mathrm{Tr}\{\mathbf{R}_{xx}\} - \mathrm{Tr}\left\{\mathbf{R}_{xy}\mathbf{R}_{yy}^{-1}\mathbf{R}_{yx}\right\}. \qquad (47)$$

The covariance matrices in our case are

$$\mathbf{R}_{xy} = \mathbf{R}_{xx}\mathbf{B}^*\mathbf{V}^* \qquad (48)$$

$$\mathbf{R}_{yy} = \mathbf{V}\mathbf{B}\mathbf{R}_{xx}\mathbf{B}^*\mathbf{V}^* + \sigma^2\mathbf{I}, \qquad (49)$$

where we used (27) and the fact that $\mathbf{R}_{ww} = \sigma^2\mathbf{I}$, since $\mathbf{w}$ is a white Gaussian noise vector. Under our assumptions on $\{t_l\}$ and $\{a_l\}$, denoting $h_k = H(2\pi k/\tau)$ and using (5),

$$(\mathbf{R}_{xx})_{k,k'} = E\left[X[k]X^*[k']\right] = \frac{1}{\tau^2} h_k h_{k'}^* \sum_{l=1}^{L}\sum_{l'=1}^{L} E\left[a_l a_{l'}^*\, e^{-j\frac{2\pi}{\tau}(kt_l - k't_{l'})}\right] = \frac{\sigma_a^2}{\tau^2} h_k h_{k'}^* \sum_{l=1}^{L} E\left[e^{-j\frac{2\pi}{\tau}(k-k')t_l}\right] = \frac{\sigma_a^2}{\tau^2} h_k h_{k'}^* \sum_{l=1}^{L} \int_0^\tau \frac{1}{\tau}\, e^{-j\frac{2\pi}{\tau}(k-k')t}\,dt = \frac{\sigma_a^2}{\tau^2}\, L\, |h_k|^2\, \delta_{k,k'}. \qquad (50)$$

Denoting by $\tilde{\mathbf{H}}$ a diagonal matrix with $k$th element $|\tilde h_k|^2 = |h_k|^2\sigma_a^2 L/\tau^2$, we have

$$\mathbf{R}_{xx} = \tilde{\mathbf{H}}. \qquad (51)$$

Since the first term of (47) is independent of $\mathbf{B}$, minimizing the MSE with respect to $\mathbf{B}$ is equivalent to maximizing the second term in (47). Substituting (48), (49) and (51) into this term, the optimal $\mathbf{B}$ is a solution to

$$\max_{\mathbf{B}}\ \mathrm{Tr}\left\{\tilde{\mathbf{H}}\mathbf{B}^*\mathbf{V}^*\left(\mathbf{V}\mathbf{B}\tilde{\mathbf{H}}\mathbf{B}^*\mathbf{V}^* + \sigma^2\mathbf{I}\right)^{-1}\mathbf{V}\mathbf{B}\tilde{\mathbf{H}}\right\} \quad \text{s.t. } \mathrm{Tr}(\mathbf{B}^*\mathbf{B}) = 1. \qquad (52)$$

Using the matrix inversion formula [23],

$$\left(\mathbf{V}\mathbf{B}\tilde{\mathbf{H}}\mathbf{B}^*\mathbf{V}^* + \sigma^2\mathbf{I}\right)^{-1} = \frac{1}{\sigma^2}\left[\mathbf{I} - \mathbf{V}\mathbf{B}\left(\sigma^2\tilde{\mathbf{H}}^{-1} + \mathbf{B}^*\mathbf{V}^*\mathbf{V}\mathbf{B}\right)^{-1}\mathbf{B}^*\mathbf{V}^*\right]. \qquad (53)$$

It is easy to verify from the definition of $\mathbf{V}$ in (13) that

$$(\mathbf{V}^*\mathbf{V})_{ik} = \sum_{l=0}^{N-1} e^{j\frac{2\pi}{N}l(k-i)} = N\delta_{k,i}. \qquad (54)$$

Therefore, the objective in (52) equals

$$\mathrm{Tr}\left\{\frac{N}{\sigma^2}\tilde{\mathbf{H}}\mathbf{B}^*\left[\mathbf{I} - \mathbf{B}\left(\frac{\sigma^2}{N}\tilde{\mathbf{H}}^{-1} + \mathbf{B}^*\mathbf{B}\right)^{-1}\mathbf{B}^*\right]\mathbf{B}\tilde{\mathbf{H}}\right\} = \sum_{i=1}^{|K|} |\tilde h_i|^2\left(1 - \frac{\sigma^2/N}{|b_i|^2|\tilde h_i|^2 + \sigma^2/N}\right), \qquad (55)$$

where we used the fact that $\mathbf{B}$ and $\tilde{\mathbf{H}}$ are diagonal.
We can now find the optimal $\mathbf{B}$ by maximizing (55), which is equivalent to minimizing the negative term:

$$\min_{\mathbf{B}} \sum_{i=1}^{|K|} \frac{|\tilde h_i|^2}{1 + |b_i|^2|\tilde h_i|^2 N/\sigma^2}, \quad \text{s.t. } \sum_{i=1}^{|K|} |b_i|^2 = 1. \qquad (56)$$

Denoting $\beta_i = |b_i|^2$, (56) becomes a convex optimization problem:

$$\min_{\beta_i} \sum_{i=1}^{|K|} \frac{|\tilde h_i|^2}{1 + \beta_i|\tilde h_i|^2 N/\sigma^2} \qquad (57)$$

subject to

$$\beta_i \ge 0 \qquad (58)$$

$$\sum_{i=1}^{|K|} \beta_i = 1. \qquad (59)$$

To solve (57) subject to (58) and (59), we form the Lagrangian

$$\mathcal{L} = \sum_{i=1}^{|K|} \frac{|\tilde h_i|^2}{1 + \beta_i|\tilde h_i|^2 N/\sigma^2} + \lambda\left(\sum_{i=1}^{|K|}\beta_i - 1\right) - \sum_{i=1}^{|K|}\mu_i\beta_i, \qquad (60)$$

where, from the Karush-Kuhn-Tucker (KKT) conditions [24], $\mu_i \ge 0$ and $\mu_i\beta_i = 0$. Differentiating (60) with respect to $\beta_i$ and equating to 0,

$$\frac{|\tilde h_i|^4 N/\sigma^2}{\left(1 + \beta_i|\tilde h_i|^2 N/\sigma^2\right)^2} + \mu_i = \lambda, \qquad (61)$$

so that $\lambda > 0$, since $|\tilde h_i| > 0$ by construction of $\tilde{\mathbf{H}}$ (see Theorem 1). If $\lambda > |\tilde h_i|^4 N/\sigma^2$ then $\mu_i > 0$, and therefore $\beta_i = 0$ from the KKT conditions. If $\lambda \le |\tilde h_i|^4 N/\sigma^2$ then, from (61), $\mu_i = 0$ and

$$\beta_i = \frac{\sigma^2}{N}\left(\sqrt{\frac{N}{\lambda\sigma^2}} - \frac{1}{|\tilde h_i|^2}\right). \qquad (62)$$

The optimal $\beta_i$ is therefore

$$\beta_i = \begin{cases} \dfrac{\sigma^2}{N}\left(\sqrt{\dfrac{N}{\lambda\sigma^2}} - \dfrac{1}{|\tilde h_i|^2}\right) & \lambda \le |\tilde h_i|^4 N/\sigma^2 \\[2ex] 0 & \lambda > |\tilde h_i|^4 N/\sigma^2, \end{cases} \qquad (63)$$

where $\lambda > 0$ is chosen to satisfy (59). Note that from (63), if $\beta_i = 0$ and $j < i$, then $\beta_j = 0$ as well, since the $|\tilde h_i|$ are in increasing order. We now show that there is a unique $\lambda$ that satisfies (59). Define the function

$$G(\lambda) = \sum_{i=1}^{|K|} \beta_i(\lambda) - 1, \qquad (64)$$

so that $\lambda$ is a root of $G(\lambda)$. Since the $|\tilde h_i|$'s are in increasing order, $|\tilde h_{|K|}| = \max_i |\tilde h_i|$. It is clear from (63) that $G(\lambda)$ is monotonically decreasing for $0 < \lambda \le |\tilde h_{|K|}|^4 N/\sigma^2$. In addition, $G(\lambda) = -1$ for $\lambda > |\tilde h_{|K|}|^4 N/\sigma^2$, and $G(\lambda) > 0$ as $\lambda \to 0$. Thus, there is a unique $\lambda$ for which (59) is satisfied. Substituting (63) into (59), and denoting by $m$ the smallest index for which $\lambda \le |\tilde h_{m+1}|^4 N/\sigma^2$, we have

$$\sqrt{\lambda} = \frac{(|K| - m)\sqrt{N/\sigma^2}}{N/\sigma^2 + \sum_{i=m+1}^{|K|} 1/|\tilde h_i|^2}, \qquad (65)$$

completing the proof of the theorem.
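The closed-form waterfilling (62)-(65) can be implemented and checked in a few lines (an illustrative, 0-based-index sketch under the theorem's assumptions; `optimal_b_squared` is a name chosen here, not from the paper):

```python
import numpy as np

def optimal_b_squared(h_abs, N, sigma2):
    """Waterfilling of Eqs. (62)-(65): |h~_i| in increasing order, sum beta_i = 1."""
    h = np.sort(np.asarray(h_abs, dtype=float))
    K, c = len(h), N / sigma2
    for m in range(K):                                # m = number of zeroed coefficients
        sqrt_lam = (K - m) * np.sqrt(c) / (c + np.sum(1.0 / h[m:] ** 2))      # Eq. (65)
        lam = sqrt_lam ** 2
        # valid level: kept indices satisfy lam <= |h~_i|^4 N/sigma^2, dropped ones violate it
        if lam <= c * h[m] ** 4 and (m == 0 or lam > c * h[m - 1] ** 4):
            beta = np.zeros(K)
            beta[m:] = (sigma2 / N) * (np.sqrt(c / lam) - 1.0 / h[m:] ** 2)   # Eq. (63)
            return beta
    raise RuntimeError("no valid waterfilling level found")

# Corollary 1: equal |h~_k|  =>  uniform coefficients 1/|K|
beta = optimal_b_squared(np.ones(11), N=11, sigma2=0.1)
assert np.allclose(beta, 1 / 11) and abs(beta.sum() - 1) < 1e-12

# a much weaker mode gets switched off entirely
beta = optimal_b_squared([0.1, 1.0, 1.0], N=3, sigma2=1.0)
assert beta[0] == 0 and np.allclose(beta[1:], 0.5)
```

The search over $m$ mirrors the uniqueness argument for $G(\lambda)$: exactly one water level is consistent with both branch conditions of (63).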
9,762
1003.2822
2138787877
Signals comprised of a stream of short pulses appear in many applications including bioimaging and radar. The recent finite rate of innovation framework, has paved the way to low rate sampling of such pulses by noticing that only a small number of parameters per unit time are needed to fully describe these signals. Unfortunately, for high rates of innovation, existing sampling schemes are numerically unstable. In this paper we propose a general sampling approach which leads to stable recovery even in the presence of many pulses. We begin by deriving a condition on the sampling kernel which allows perfect reconstruction of periodic streams from the minimal number of samples. We then design a compactly supported class of filters, satisfying this condition. The periodic solution is extended to finite and infinite streams and is shown to be numerically stable even for a large number of pulses. High noise robustness is also demonstrated when the delays are sufficiently separated. Finally, we process ultrasound imaging data using our techniques and show that substantial rate reduction with respect to traditional ultrasound sampling schemes can be achieved.
When the E-spline coefficients @math are pure imaginary, it can be easily shown that becomes a multiplication of shifted sincs. This is in contrast to the SoS filter which consists of a sum of sincs in the frequency domain. Since multiplication in the frequency domain translates to convolution in the time domain, it is clear that the support of the E-spline grows with its order, and in turn with the order of the problem @math . In contrast, the support of the SoS filter remains unchanged. This observation becomes important when examining the infinite case. The constraint on the signal in @cite_19 is that no more than @math pulses be in any interval of length @math , @math being the support of the filter, and @math the sampling period. Since @math grows linearly with @math , the constraint cast on the infinite stream becomes more stringent, quadratically with @math . On the other hand, the constraint on the infinite stream using the SoS filter is independent of @math .
Abstract of the cited work @cite_19: Consider the problem of sampling signals which are not bandlimited, but still have a finite number of degrees of freedom per unit of time, such as, for example, nonuniform splines or piecewise polynomials, and call the number of degrees of freedom per unit of time the rate of innovation. Classical sampling theory does not enable a perfect reconstruction of such signals since they are not bandlimited. Recently, it was shown that, by using an adequate sampling kernel and a sampling rate greater than or equal to the rate of innovation, it is possible to reconstruct such signals uniquely. These sampling schemes, however, use kernels with infinite support, and this leads to complex and potentially unstable reconstruction algorithms. In this paper, we show that many signals with a finite rate of innovation can be sampled and perfectly reconstructed using physically realizable kernels of compact support and a local reconstruction algorithm. The class of kernels that we can use is very rich and includes functions satisfying Strang-Fix conditions, exponential splines and functions with rational Fourier transform. This last class of kernels is quite general and includes, for instance, any linear electric circuit. We thus show with an example how to estimate a signal of finite rate of innovation at the output of an RC circuit. The case of noisy measurements is also analyzed, and we present a novel algorithm that reduces the effect of noise by oversampling.
Innovation Rate Sampling of Pulse Streams with Application to Ultrasound Imaging
Sampling is the process of representing a continuous-time signal by discrete-time coefficients, while retaining the important signal features. The well-known Shannon-Nyquist theorem states that the minimal sampling rate required for perfect reconstruction of bandlimited signals is twice the maximal frequency. This result has since been generalized to minimal-rate sampling schemes for signals lying in arbitrary subspaces [1], [2]. Recently, there has been growing interest in sampling of signals consisting of a stream of short pulses, where the pulse shape is known. Such signals have a finite number of degrees of freedom per unit time, also known as the Finite Rate of Innovation (FRI) property [3]. This interest is motivated by applications such as digital processing of neuronal signals, bio-imaging, image processing and ultrawideband (UWB) communications, where such signals are present in abundance. Our work is motivated by the possible application of this model in ultrasound imaging, where echoes of the transmit pulse are reflected off scatterers within the tissue and form a stream-of-pulses signal at the receiver. The time-delays and amplitudes of the echoes indicate the position and strength of the various scatterers, respectively. Therefore, determining these parameters from low-rate samples of the received signal is an important problem. Reducing the rate allows more efficient processing, which can translate to power and size reduction of the ultrasound imaging system. Our goal is to design a minimal-rate single-channel sampling and reconstruction scheme for pulse streams that is stable even in the presence of many pulses. Since the set of FRI signals does not form a subspace, classic subspace schemes cannot be directly used to design low-rate sampling schemes. Mathematically, such FRI signals conform to a broader model of signals lying in a union of subspaces [4]-[9].
Although the minimal sampling rate required for such settings has been derived, no generic sampling scheme exists for the general problem. Nonetheless, some special cases have been treated in previous work, including streams of pulses. A stream of pulses can be viewed as a parametric signal, uniquely defined by the time-delays of the pulses and their amplitudes. Efficient sampling of periodic impulse streams, having L impulses in each period, was proposed in [3], [10]. The heart of the solution is to obtain a set of Fourier series coefficients, which converts the problem of determining the time-delays and amplitudes to that of finding the frequencies and amplitudes of a sum of sinusoids. The latter is a standard problem in spectral analysis [11], which can be solved using conventional methods, such as the annihilating filter approach, as long as the number of samples is at least 2L. This result is intuitive, since there are 2L degrees of freedom in each period: L time-delays and L amplitudes. Periodic streams of pulses are mathematically convenient to analyze, but not very practical. In contrast, finite streams of pulses are prevalent in applications such as ultrasound imaging. The first treatment of finite Dirac streams appears in [3], in which a Gaussian sampling kernel was proposed. The time-delays and amplitudes are then estimated from the Gaussian tails. This method and its improvement [12] are numerically unstable for high rates of innovation, since they rely on the Gaussian tails, which take on small values. The work in [13] introduced a general family of polynomial and exponential reproducing kernels which can be used to solve FRI problems. Specifically, B-spline and E-spline sampling kernels satisfying the reproduction condition are proposed. This method treats streams of Diracs, differentiated Diracs, and short pulses with compact support. However, the proposed sampling filters result in poor reconstruction for large L.
To the best of our knowledge, a numerically stable sampling and reconstruction scheme for high-order problems has not yet been reported. Infinite streams of pulses arise in applications such as UWB communications, where the communicated data changes frequently. Using spline filters [13], and under certain limitations on the signal, the infinite stream can be divided into a sequence of separate finite problems. The individual finite cases may be treated using methods for the finite setting, at the expense of an above-critical sampling rate, and they suffer from the same instability issues. In addition, the constraints cast on the signal become increasingly stringent as the number of pulses per unit time grows. In a recent work [14] the authors propose a sampling and reconstruction scheme for L = 1; our interest here, however, is in high values of L. Another related work [7] proposes a semi-periodic model, where the pulse time-delays do not change from period to period, but the amplitudes vary. This is a hybrid case in which the number of degrees of freedom in the time-delays is finite, but there is an infinite number of degrees of freedom in the amplitudes. Therefore, the proposed recovery scheme generally requires an infinite number of samples. This differs from the periodic and finite cases we discuss in this paper, which have a finite number of degrees of freedom and, consequently, require only a finite number of samples. In this paper we study sampling of signals consisting of a stream of pulses, covering three different cases: periodic, finite and infinite streams of pulses. The criteria we consider for designing such systems are: a) minimal sampling rate which allows perfect reconstruction, b) numerical stability (with sufficiently separated time-delays), and c) minimal restrictions on the number of pulses per sampling period. We begin by treating periodic pulse streams.
For this setting, we develop a general sampling scheme for arbitrary pulse shapes which allows us to determine the times and amplitudes of the pulses from a minimal number of samples. As we show, previous work [3] is a special case of our extended results. In contrast to the infinite time-support of the filters in [3], we develop a compactly supported class of filters which satisfy our mathematical condition. This class of filters consists of a sum of sinc functions in the frequency domain; we therefore refer to such functions as Sum of Sincs (SoS). To the best of our knowledge, this is the first class of finite-support filters that solves the periodic case. As we discuss in detail in Section V, these filters are related to the exponential reproducing kernels introduced in [13]. The compact support of the SoS filters is the key to extending the periodic solution to the finite stream case. Generalizing the SoS class, we design a sampling and reconstruction scheme which perfectly reconstructs a finite stream of pulses from a minimal number of samples, as long as the pulse shape has compact support. Our reconstruction is numerically stable both for small values of L and for large numbers of pulses, e.g., L = 100. In contrast, Gaussian sampling filters [3], [12] are unstable for L > 9, and we show in simulations that B-splines and E-splines [13] exhibit large estimation errors for L ≥ 5. In addition, we demonstrate substantial improvement in noise robustness even for low values of L. Our advantage stems from the fact that we propose compactly supported filters on the one hand, while staying within the regime of Fourier-coefficient reconstruction on the other. Extending our results to the infinite setting, we consider an infinite stream consisting of pulse bursts, where each burst contains a large number of pulses. The stability of our method allows us to reconstruct even a large number of closely spaced pulses, which cannot be treated using existing solutions [13].
In addition, the constraints cast on the structure of the signal are independent of L (the number of pulses in each burst), in contrast to previous work, and therefore similar sampling schemes may be used for different values of L. Finally, we show that our sampling scheme requires a lower sampling rate for L ≥ 3. As an application, we demonstrate our sampling scheme on real ultrasound imaging data acquired by GE Healthcare's ultrasound system. We obtain high estimation accuracy while reducing the number of samples by two orders of magnitude in comparison with current imaging techniques. The remainder of the paper is organized as follows. In Section II we present the periodic signal model and derive a general sampling scheme. The SoS class is then developed and demonstrated via simulations. The extension to the finite case is presented in Section III, followed by simulations showing the advantages of our method in high-order problems and noisy settings. In Section IV, we treat infinite streams of pulses. Section V explores the relationship of our work to previous methods. Finally, in Section VI, we demonstrate our algorithm on real ultrasound imaging data.

II. PERIODIC STREAM OF PULSES

A. Problem Formulation

Throughout the paper we denote matrices and vectors by bold font, with lowercase letters corresponding to vectors and uppercase letters to matrices. The nth element of a vector a is written as $a_n$, and $A_{ij}$ denotes the ijth element of a matrix A. Superscripts $(\cdot)^*$, $(\cdot)^T$ and $(\cdot)^H$ represent complex conjugation, transposition and conjugate transposition, respectively. The Moore-Penrose pseudo-inverse of a matrix A is written as $A^\dagger$. The continuous-time Fourier transform (CTFT) of a continuous-time signal $x(t) \in L_2$ is defined by $X(\omega) = \int_{-\infty}^{\infty} x(t) e^{-j\omega t}\,dt$, and

$$\langle x(t), y(t)\rangle = \int_{-\infty}^{\infty} x^*(t)\, y(t)\,dt \qquad (1)$$

denotes the inner product between two $L_2$ signals.
Consider a τ-periodic stream of pulses, defined as

$$x(t) = \sum_{m\in\mathbb{Z}} \sum_{l=1}^{L} a_l\, h(t - t_l - m\tau), \qquad (2)$$

where h(t) is a known pulse shape, τ is the known period, and $\{t_l, a_l\}_{l=1}^{L}$, $t_l \in [0,\tau)$, $a_l \in \mathbb{C}$, $l = 1, \ldots, L$, are the unknown delays and amplitudes. Our goal is to sample x(t) and reconstruct it from a minimal number of samples. Since the signal has 2L degrees of freedom, we expect the minimal number of samples to be 2L. We are primarily interested in pulses which have small time-support. Direct uniform sampling of 2L samples of the signal will result in many zero samples, since the probability of a sample hitting a pulse is very low. Therefore, we must construct a more sophisticated sampling scheme. Define the periodic continuation of h(t) as $f(t) = \sum_{m\in\mathbb{Z}} h(t - m\tau)$. Using Poisson's summation formula [15], f(t) may be written as

$$f(t) = \frac{1}{\tau} \sum_{k\in\mathbb{Z}} H\!\left(\frac{2\pi k}{\tau}\right) e^{j2\pi k t/\tau}, \qquad (3)$$

where H(ω) denotes the CTFT of the pulse h(t). Substituting (3) into (2) we obtain

$$x(t) = \sum_{l=1}^{L} a_l f(t - t_l) = \sum_{k\in\mathbb{Z}} \frac{1}{\tau} H\!\left(\frac{2\pi k}{\tau}\right) \sum_{l=1}^{L} a_l e^{-j2\pi k t_l/\tau}\, e^{j2\pi k t/\tau} = \sum_{k\in\mathbb{Z}} X[k]\, e^{j2\pi k t/\tau}, \qquad (4)$$

where we denoted

$$X[k] = \frac{1}{\tau} H\!\left(\frac{2\pi k}{\tau}\right) \sum_{l=1}^{L} a_l e^{-j2\pi k t_l/\tau}. \qquad (5)$$

The expansion in (4) is the Fourier series representation of the τ-periodic signal x(t) with Fourier coefficients given by (5). Following [3], we now show that once 2L or more Fourier coefficients of x(t) are known, we may use conventional tools from spectral analysis to determine the unknowns $\{t_l, a_l\}_{l=1}^{L}$. The method by which the Fourier coefficients are obtained will be presented in subsequent sections. Define a set K of M consecutive indices such that $H(2\pi k/\tau) \neq 0$, $\forall k \in K$. We assume such a set exists, which is usually the case for short time-support pulses h(t). Denote by H the M × M diagonal matrix with kth entry $\frac{1}{\tau} H\!\left(\frac{2\pi k}{\tau}\right)$, and by V(t) the M × L matrix with klth element $e^{-j2\pi k t_l/\tau}$, where $t = \{t_1, \ldots, t_L\}$ is the vector of the unknown delays.
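The Fourier-series relation (5) can be checked numerically. The following is a sketch (all parameter values are our toy choices, not from the paper), using a Gaussian pulse whose CTFT is known in closed form; the FFT of one finely sampled period approximates the Fourier series coefficients of x(t):

```python
import numpy as np

# Toy periodic stream: Gaussian pulses (parameters are assumptions for illustration).
tau = 1.0
sigma = 0.05
t_l = np.array([0.3, 0.7])   # delays
a_l = np.array([1.0, -0.5])  # amplitudes

def h(t):
    # Unit-area Gaussian pulse.
    return np.exp(-t**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

def H(w):
    # Its CTFT under the convention X(w) = int x(t) e^{-jwt} dt.
    return np.exp(-sigma**2 * w**2 / 2)

# Build one period of x(t) from (2), truncating the periodic continuation.
Ng = 2048
t = np.arange(Ng) * tau / Ng
x = sum(a * h(t - tl - m * tau) for a, tl in zip(a_l, t_l) for m in range(-3, 4))

# Numerical Fourier series coefficients: FFT of one period divided by Ng.
X_num = np.fft.fft(x) / Ng

# Closed form (5): X[k] = (1/tau) H(2*pi*k/tau) sum_l a_l exp(-2j*pi*k*t_l/tau).
k = np.array([-2, -1, 0, 1, 2])
X_cf = (1 / tau) * H(2 * np.pi * k / tau) * (np.exp(-2j * np.pi * np.outer(k, t_l) / tau) @ a_l)
X_num_k = X_num[k % Ng]
```

Since the Gaussian spectrum decays extremely fast, aliasing in the FFT approximation is negligible and the two computations agree to machine precision.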
In addition, denote by a the length-L vector whose lth element is $a_l$, and by x the length-M vector whose kth element is X[k]. We may then write (5) in matrix form as

$$\mathbf{x} = \mathbf{H}\mathbf{V}(\mathbf{t})\mathbf{a}. \qquad (6)$$

Since H is invertible by construction, we define $\mathbf{y} = \mathbf{H}^{-1}\mathbf{x}$, which satisfies

$$\mathbf{y} = \mathbf{V}(\mathbf{t})\mathbf{a}. \qquad (7)$$

The matrix V is a Vandermonde matrix and therefore has full column rank [11], [16] as long as M ≥ L and the time-delays are distinct, i.e., $t_i \neq t_j$ for all $i \neq j$. Writing the kth element of the vector y in (7) explicitly,

$$y_k = \sum_{l=1}^{L} a_l e^{-j2\pi k t_l/\tau}.$$

Evidently, given the vector y, (7) is a standard problem of finding the frequencies and amplitudes of a sum of L complex exponentials (see [11] for a review of this topic). This problem may be solved as long as |K| = M ≥ 2L. The annihilating filter approach used extensively by Vetterli et al. [3], [10] is one way of recovering the frequencies, and is thoroughly described in the literature [3], [10], [11]. This method can solve the problem using the critical number of samples M = 2L, as opposed to other techniques such as MUSIC [17], [18] and ESPRIT [19] which require oversampling. Since we are interested in minimal-rate sampling, we use the annihilating filter throughout the paper.

B. Obtaining The Fourier Series Coefficients

As we have seen, given the vector of M ≥ 2L Fourier series coefficients x, we may use standard tools from spectral analysis to determine the set $\{t_l, a_l\}_{l=1}^{L}$. In practice, however, the signal is sampled in the time domain, and therefore we do not have direct access to samples of x. Our goal is to design a single-channel sampling scheme which allows us to determine x from time-domain samples. In contrast to previous work [3], [10] which focused on a low-pass sampling filter, in this section we derive a general condition on the sampling kernel allowing us to obtain the vector x.
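The annihilating-filter recovery described above can be sketched in a few lines (our illustration, not the authors' code; the L = 2 toy setup is assumed): given M = 2L noiseless values $y_k = \sum_l a_l e^{-j2\pi k t_l/\tau}$, the roots of the annihilating polynomial yield the delays, and a Vandermonde least-squares solve yields the amplitudes.

```python
import numpy as np

def annihilating_recovery(y, L, tau):
    """Recover {t_l, a_l} from y[k] = sum_l a_l exp(-2j*pi*k*t_l/tau), k = 0..M-1
    (a sketch of the annihilating filter method; not the authors' implementation)."""
    M = len(y)
    assert M >= 2 * L
    # Annihilation equations: y[k] + sum_{i=1}^{L} A[i] y[k-i] = 0 for k = L..M-1.
    T = np.array([[y[k - i] for i in range(1, L + 1)] for k in range(L, M)])
    rhs = -np.asarray([y[k] for k in range(L, M)])
    A = np.concatenate(([1.0], np.linalg.lstsq(T, rhs, rcond=None)[0]))
    # Roots of the annihilating polynomial are u_l = exp(-2j*pi*t_l/tau).
    u = np.roots(A)
    t = np.sort(np.mod(-np.angle(u) * tau / (2 * np.pi), tau))
    # Amplitudes from the Vandermonde system y = V(t) a.
    V = np.exp(-2j * np.pi * np.outer(np.arange(M), t) / tau)
    a = np.linalg.lstsq(V, y, rcond=None)[0]
    return t, a

# Toy example with L = 2 pulses and the critical M = 2L = 4 values.
tau = 1.0
t_true = np.array([0.3, 0.8])
a_true = np.array([1.0, 0.5])
k = np.arange(4)
y = np.exp(-2j * np.pi * np.outer(k, t_true) / tau) @ a_true
t_est, a_est = annihilating_recovery(y, L=2, tau=tau)
```

In the noiseless case the delays and amplitudes are recovered exactly, illustrating why M = 2L values suffice.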
For the sake of clarity we confine ourselves to uniform sampling; the results extend in a straightforward manner to nonuniform sampling as well. Consider sampling the signal x(t) uniformly with sampling kernel $s^*(-t)$ and sampling period T, as depicted in Fig. 1. The samples are given by

$$c[n] = \int_{-\infty}^{\infty} x(t)\, s^*(t - nT)\,dt = \langle s(t - nT), x(t)\rangle. \qquad (9)$$

Substituting (4) into (9) we have

$$c[n] = \sum_{k\in\mathbb{Z}} X[k] \int_{-\infty}^{\infty} e^{j2\pi k t/\tau} s^*(t - nT)\,dt = \sum_{k\in\mathbb{Z}} X[k]\, e^{j2\pi k n T/\tau}\, S^*(2\pi k/\tau), \qquad (10)$$

where S(ω) is the CTFT of s(t). Choosing any filter s(t) which satisfies

$$S(\omega) = \begin{cases} 0 & \omega = 2\pi k/\tau,\ k \notin K \\ \text{nonzero} & \omega = 2\pi k/\tau,\ k \in K \\ \text{arbitrary} & \text{otherwise}, \end{cases} \qquad (11)$$

we can rewrite (10) as

$$c[n] = \sum_{k\in K} X[k]\, e^{j2\pi k n T/\tau}\, S^*(2\pi k/\tau). \qquad (12)$$

In contrast to (10), the sum in (12) is finite. Note that (11) implies that any real filter meeting this condition will satisfy $k \in K \Rightarrow -k \in K$, and in addition $S(2\pi k/\tau) = S^*(-2\pi k/\tau)$, due to the conjugate symmetry of real filters. Defining the M × M diagonal matrix S whose kth entry is $S^*(2\pi k/\tau)$ for all $k \in K$, and the length-N vector c whose nth element is c[n], we may write (12) as

$$\mathbf{c} = \mathbf{V}(-\mathbf{t}_s)\mathbf{S}\mathbf{x}, \qquad (13)$$

where $\mathbf{t}_s = \{nT : n = 0, \ldots, N-1\}$, and V is defined as in (6) with the parameter $-\mathbf{t}_s$ and dimensions N × M. The matrix S is invertible by construction. Since V is Vandermonde, it is left invertible as long as N ≥ M. Therefore,

$$\mathbf{x} = \mathbf{S}^{-1}\mathbf{V}^{\dagger}(-\mathbf{t}_s)\,\mathbf{c}. \qquad (14)$$

In the special case where N = M and T = τ/N, the recovery in (14) becomes

$$\mathbf{x} = \mathbf{S}^{-1}\,\mathrm{DFT}\{\mathbf{c}\}, \qquad (15)$$

i.e., the vector x is obtained by applying the Discrete Fourier Transform (DFT) to the sample vector, followed by a correction matrix related to the sampling filter. The idea behind this sampling scheme is that each sample is actually a linear combination of the elements of x. The sampling kernel s(t) is designed to pass the coefficients X[k], k ∈ K, while suppressing all other coefficients X[k], k ∉ K.
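As a numerical sanity check of (12)-(14), the following sketch (toy values of K, x, and the filter samples are our assumptions) generates samples via (12) and recovers x via (14):

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 1.0
M = 5                      # |K|; here K = {-2, ..., 2} (toy choice)
N = M                      # critical number of samples
T = tau / N
K = np.arange(M) - M // 2

# Unknown Fourier coefficients X[k], k in K, drawn at random.
x = rng.standard_normal(M) + 1j * rng.standard_normal(M)

# Arbitrary nonzero filter values S(2*pi*k/tau), k in K, per condition (11).
S_vals = rng.uniform(0.5, 2.0, M) * np.exp(1j * rng.uniform(0, 2 * np.pi, M))
S = np.diag(np.conj(S_vals))   # diagonal entries S*(2*pi*k/tau)

# Samples c[n] = sum_{k in K} X[k] e^{j 2 pi k n T / tau} S*(2 pi k / tau), eq. (12).
n = np.arange(N)
V = np.exp(2j * np.pi * np.outer(n * T, K) / tau)   # V(-t_s), size N x M
c = V @ S @ x

# Recovery via eq. (14): x = S^{-1} V^dagger(-t_s) c.
x_rec = np.linalg.solve(S, np.linalg.pinv(V) @ c)
```

With N = M and T = τ/N, the matrix V is (up to scaling) a DFT matrix, so the system is perfectly conditioned and x is recovered exactly, consistent with (15).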
This is exactly what the condition in (11) means. This sampling scheme guarantees that each sample is a linearly independent combination of the elements of x. Therefore, the linear system of equations in (13) has full column rank, which allows us to solve for the vector x. We summarize this result in the following theorem.

Theorem 1. Consider the τ-periodic stream of pulses of order L,

$$x(t) = \sum_{m\in\mathbb{Z}} \sum_{l=1}^{L} a_l\, h(t - t_l - m\tau).$$

Choose a set K of consecutive indices for which $H(2\pi k/\tau) \neq 0$, $\forall k \in K$. Then the samples $c[n] = \langle s(t - nT), x(t)\rangle$, $n = 0, \ldots, N-1$, uniquely determine the signal x(t) for any s(t) satisfying condition (11), as long as N ≥ |K| ≥ 2L.

In order to extend Theorem 1 to nonuniform sampling, we only need to substitute the nonuniform sampling times in the vector $\mathbf{t}_s$ in (14). Theorem 1 presents a general single-channel sampling scheme. One special case of this framework is the one proposed by Vetterli et al. in [3], in which $s^*(-t) = B\,\mathrm{sinc}(-Bt)$, where B = M/τ and N ≥ M ≥ 2L. In this case s(t) is an ideal low-pass filter of bandwidth B with

$$S(\omega) = \frac{1}{\sqrt{2\pi}}\,\mathrm{rect}\!\left(\frac{\omega}{2\pi B}\right). \qquad (16)$$

Clearly, (16) satisfies the general condition in (11) with $K = \{-\lfloor M/2\rfloor, \ldots, \lfloor M/2\rfloor\}$ and $S(2\pi k/\tau) = \frac{1}{\sqrt{2\pi}}$, $\forall k \in K$. Note that since this filter is real valued it must satisfy $k \in K \Rightarrow -k \in K$, i.e., the indices come in pairs except for k = 0. Since k = 0 is part of the set K, the cardinality M = |K| must be odd, so that N ≥ M ≥ 2L + 1 samples are required, rather than the minimal rate N ≥ 2L. The ideal low-pass filter is bandlimited, and therefore has infinite time-support, so that it cannot be extended to finite and infinite streams of pulses. In the next section we propose a class of non-bandlimited sampling kernels which exploit the additional degrees of freedom in condition (11) and have compact support in the time domain. The compact support allows us to extend this class to finite and infinite streams, as we show in Sections III and IV, respectively.

C. Compactly Supported Sampling Kernels

Consider the following SoS class, which consists of a sum of sincs in the frequency domain:

$$G(\omega) = \frac{\tau}{\sqrt{2\pi}} \sum_{k\in K} b_k\, \mathrm{sinc}\!\left(\frac{\omega}{2\pi/\tau} - k\right), \qquad (17)$$

where $b_k \neq 0$, $k \in K$. The filter in (17) is real valued if and only if $k \in K \Rightarrow -k \in K$ and $b_k = b_{-k}^*$ for all $k \in K$. Since each sinc in the sum satisfies

$$\mathrm{sinc}\!\left(\frac{\omega}{2\pi/\tau} - k\right) = \begin{cases} 1 & \omega = 2\pi k'/\tau,\ k' = k \\ 0 & \omega = 2\pi k'/\tau,\ k' \neq k, \end{cases} \qquad (18)$$

the filter G(ω) satisfies (11) by construction. Switching to the time domain,

$$g(t) = \mathrm{rect}\!\left(\frac{t}{\tau}\right) \sum_{k\in K} b_k\, e^{j2\pi k t/\tau}, \qquad (19)$$

which is clearly a time-compact filter with support τ. The SoS class in (19) may be extended to

$$G(\omega) = \frac{\tau}{\sqrt{2\pi}} \sum_{k\in K} b_k\, \phi\!\left(\frac{\omega}{2\pi/\tau} - k\right), \qquad (20)$$

where $b_k \neq 0$, $k \in K$, and φ(ω) is any function satisfying

$$\phi(\omega) = \begin{cases} 1 & \omega = 0 \\ 0 & |\omega| \in \mathbb{N} \\ \text{arbitrary} & \text{otherwise}. \end{cases} \qquad (21)$$

This more general structure allows for smooth versions of the rect function, which is important when practically implementing analog filters. The function g(t) represents a class of filters determined by the parameters $\{b_k\}_{k\in K}$. These degrees of freedom offer a filter design tool where the free parameters $\{b_k\}_{k\in K}$ may be optimized for different goals, e.g., parameters which result in a feasible analog filter. In Theorem 2 below, we show how to choose $\{b_k\}$ to minimize the mean-squared error (MSE) in the presence of noise. Determining the parameters $\{b_k\}_{k\in K}$ may also be viewed from a more empirical point of view. The impulse response of any analog filter having support τ may be written in terms of a windowed Fourier series as

$$\Phi(t) = \mathrm{rect}\!\left(\frac{t}{\tau}\right) \sum_{k\in\mathbb{Z}} \beta_k\, e^{j2\pi k t/\tau}. \qquad (22)$$

Confining ourselves to filters which satisfy $\beta_k \neq 0$, $k \in K$, we may truncate the series and choose

$$b_k = \begin{cases} \beta_k & k \in K \\ 0 & k \notin K \end{cases} \qquad (23)$$

as the parameters of g(t) in (19). With this choice, g(t) can be viewed as an approximation of Φ(t).
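The time-domain SoS filter (19) is straightforward to evaluate directly. The sketch below (our illustration; parameter values are toy choices) builds g(t) and, for all-ones coefficients, checks it against the closed-form Dirichlet kernel that the paper derives for that choice:

```python
import numpy as np

def sos_filter(t, b, K, tau):
    """Time-domain SoS filter g(t) = rect(t/tau) * sum_{k in K} b_k e^{j2*pi*k*t/tau}, eq. (19)."""
    t = np.asarray(t, dtype=float)
    rect = (np.abs(t) < tau / 2).astype(float)   # rect(t/tau): 1 on (-tau/2, tau/2)
    return rect * sum(bk * np.exp(2j * np.pi * k * t / tau) for bk, k in zip(b, K))

tau, p = 1.0, 10
K = np.arange(-p, p + 1)
b = np.ones(len(K))          # all-ones coefficients (conjugate symmetric -> real filter)
t = np.linspace(-1.5 * tau, 1.5 * tau, 2001)
g = sos_filter(t, b, K, tau)

# For b_k = 1 the filter reduces to rect(t/tau) times the Dirichlet kernel
# D_p(2*pi*t/tau) = sin((p + 1/2) * 2*pi*t/tau) / sin(pi*t/tau).
with np.errstate(invalid="ignore", divide="ignore"):
    Dp = np.sin((p + 0.5) * 2 * np.pi * t / tau) / np.sin(np.pi * t / tau)
Dp = np.where(np.isclose(np.sin(np.pi * t / tau), 0.0), 2 * p + 1.0, Dp)  # limit at t = 0
g_closed = (np.abs(t) < tau / 2).astype(float) * Dp
```

The filter is real valued (conjugate-symmetric coefficients), vanishes outside its support of length τ, and matches the closed form everywhere on the grid.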
Notice that there is an inherent tradeoff here: using more coefficients results in a better approximation of the analog filter, but in turn requires more samples, since the number of samples N must be greater than the cardinality of the set K. To demonstrate the filter g(t), we first choose K = {−p, …, p} and set all coefficients $\{b_k\}$ to one, resulting in

$$g(t) = \mathrm{rect}\!\left(\frac{t}{\tau}\right) \sum_{k=-p}^{p} e^{j2\pi k t/\tau} = \mathrm{rect}\!\left(\frac{t}{\tau}\right) D_p(2\pi t/\tau), \qquad (24)$$

where the Dirichlet kernel $D_p(t)$ is defined by

$$D_p(t) = \sum_{k=-p}^{p} e^{jkt} = \frac{\sin\!\left(\left(p + \frac{1}{2}\right)t\right)}{\sin(t/2)}. \qquad (25)$$

The resulting filter for p = 10 and τ = 1 sec is depicted in Fig. 2. This filter is also optimal in an MSE sense for the case h(t) = δ(t), as we show in Theorem 2. In Fig. 3 we plot g(t) for the case in which the $b_k$'s are chosen as a length-M symmetric Hamming window:

$$b_k = 0.54 - 0.46\cos\!\left(2\pi\,\frac{k + \lfloor M/2\rfloor}{M}\right), \quad k \in K. \qquad (26)$$

Notice that in both cases the coefficients satisfy $b_k = b_{-k}^*$, and therefore the resulting filters are real valued. In the presence of noise, the choice of $\{b_k\}_{k\in K}$ will affect the performance. Consider the case in which digital noise is added to the samples c, so that y = c + w, with w denoting a white Gaussian noise vector. Using (13),

$$\mathbf{y} = \mathbf{V}(-\mathbf{t}_s)\mathbf{B}\mathbf{x} + \mathbf{w}, \qquad (27)$$

where B is a diagonal matrix having $\{b_k\}$ on its diagonal. To choose the optimal B we assume that the $\{a_l\}$ are uncorrelated with variance $\sigma_a^2$, independent of $\{t_l\}$, and that $\{t_l\}$ are uniformly distributed in [0, τ). Since the noise is added to the samples after filtering, increasing the filter's amplification will always reduce the MSE. Therefore, the filter's energy must be normalized, and we do so by adding the constraint $\mathrm{Tr}(\mathbf{B}^*\mathbf{B}) = 1$. Under these assumptions, we have the following theorem.

Theorem 2.
The minimal MSE of a linear estimator of x from the noisy samples y in (27) is achieved by choosing the coefficients

$$|b_i|^2 = \begin{cases} \dfrac{\sigma^2}{N}\left(\sqrt{\dfrac{N}{\lambda\sigma^2}} - \dfrac{1}{|\tilde h_i|^2}\right) & \lambda \leq |\tilde h_i|^4 N/\sigma^2 \\ 0 & \lambda > |\tilde h_i|^4 N/\sigma^2, \end{cases} \qquad (28)$$

where $\tilde h_k = H(2\pi k/\tau)\,\sigma_a\sqrt{L}/\tau$, arranged in increasing order of $|\tilde h_k|$,

$$\sqrt{\lambda} = \frac{(|K| - m)\sqrt{N/\sigma^2}}{N/\sigma^2 + \sum_{i=m+1}^{|K|} 1/|\tilde h_i|^2}, \qquad (29)$$

and m is the smallest index for which $\lambda \leq |\tilde h_{m+1}|^4 N/\sigma^2$. Proof: See the Appendix.

An important consequence of Theorem 2 is the following corollary.

Corollary 1. If $|\tilde h_k|^2 = |\tilde h_\ell|^2$ for all $k, \ell \in K$, then the optimal coefficients are $|b_i|^2 = 1/|K|$, $\forall i \in K$. Proof: It is evident from (28) that if $|\tilde h_k| = |\tilde h_\ell|$ then $|b_k| = |b_\ell|$. To satisfy the trace constraint $\mathrm{Tr}(\mathbf{B}^*\mathbf{B}) = 1$, λ cannot be chosen such that all $b_i = 0$. Therefore, $|b_i|^2 = 1/|K|$ for all $i \in K$.

From Corollary 1 it follows that when h(t) = δ(t), the optimal choice of coefficients is $b_k = b_j$ for all k and j. We therefore use this choice when simulating noisy settings in the next section. Our sampling scheme for the periodic case consists of sampling kernels having compact support in the time domain. In the next section we exploit the compact support of our filter and extend the results to the finite stream case. We will show that our sampling and reconstruction scheme offers a numerically stable solution with high noise robustness. As an example of the periodic scheme, consider a stream of pulses with the Gaussian pulse shape

$$h(t) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-t^2/2\sigma^2}, \qquad (30)$$

with parameter σ = 7·10⁻³ and period τ = 1. The time-delays and amplitudes were chosen randomly. In order to demonstrate near-critical sampling we choose the set of indices K = {−L, …, L} with cardinality M = |K| = 11. We filter x(t) with g(t) of (26). The filter output is sampled uniformly N times, with sampling period T = τ/N, where N = M = 11. The sampling process is depicted in Fig. 4. The vector x is obtained using (14), and the delays and amplitudes are determined by the annihilating filter method. Reconstruction results are depicted in Fig. 5.
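The water-filling solution of Theorem 2 can be sketched as follows. This is our reading of (28)-(29), with toy parameter values; treat it as illustrative, not as the authors' implementation:

```python
import numpy as np

def optimal_sos_energies(h_tilde, N, sigma2):
    """Water-filling sketch of eqs. (28)-(29): returns |b_i|^2 for the |h~_k|
    sorted in increasing magnitude, zeroing the weakest channels."""
    h = np.sort(np.abs(np.asarray(h_tilde, dtype=float)))
    K = len(h)
    for m in range(K):
        # Water level from (29), with the sum over the |K| - m strongest channels.
        sqrt_lam = (K - m) * np.sqrt(N / sigma2) / (N / sigma2 + np.sum(1.0 / h[m:] ** 2))
        lam = sqrt_lam ** 2
        if lam <= h[m] ** 4 * N / sigma2:   # smallest m satisfying the threshold in (28)
            b2 = np.zeros(K)
            b2[m:] = (sigma2 / N) * (np.sqrt(N / sigma2) / sqrt_lam - 1.0 / h[m:] ** 2)
            return b2
    raise RuntimeError("no feasible water level found")

# Corollary 1: equal |h~_k| gives uniform energies 1/|K|.
b2_uniform = optimal_sos_energies(np.ones(5), N=10, sigma2=1.0)
# A much weaker channel receives zero weight; the trace constraint still holds.
b2_mixed = optimal_sos_energies([0.1, 1.0, 1.0, 1.0, 1.0], N=10, sigma2=1.0)
```

By construction the returned energies sum to one, matching the normalization $\mathrm{Tr}(\mathbf{B}^*\mathbf{B}) = 1$.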
The estimation and reconstruction are both exact to numerical precision. Analog filtering operations are carried out by discrete approximations over a fine grid; the analog signal and filters are mimicked by high-rate digital signals. Since the sampling rate which constructs the fine grid is 2-3 orders of magnitude higher than the final sampling rate, the simulations reflect the analog results very well. Samples were taken uniformly with sampling period T = τ/N, and we choose g(t) given by (24). As explained earlier, only the values of the filter at the points 2πk/τ, k ∈ K, affect the samples (see (11)). Since the values of the filter at the relevant points coincide and equal one for both the low-pass filter [3] and g*(−t), the resulting samples for the two settings are identical. Therefore, we present results for our method only, and note that the exact same results are obtained using the approach of [3]. In our setup, white Gaussian noise (AWGN) with variance $\sigma_n^2$ is added to the samples, where we define the SNR as

$$\mathrm{SNR} = \frac{\tfrac{1}{N}\|\mathbf{c}\|_2^2}{\sigma_n^2}, \qquad (31)$$

with c denoting the clean samples. In our experiments the noise variance is set to give the desired SNR. The simulation consists of 1000 experiments for each SNR, where in each experiment a new noise vector is created. We choose $\mathbf{t} = \tau\cdot(1/3,\ 2/3)^T$ and $\mathbf{a} = \tau\cdot(1,\ 1)^T$, and these vectors remain constant throughout the experiments. We define the error in time-delay estimation as the average of $\|\mathbf{t} - \hat{\mathbf{t}}\|_2^2$, where $\mathbf{t}$ and $\hat{\mathbf{t}}$ denote the true and estimated time-delays, respectively, sorted in increasing order. The error in amplitudes is similarly defined by $\|\mathbf{a} - \hat{\mathbf{a}}\|_2^2$. In Fig. 6 we show the error as a function of SNR for both delay and amplitude estimation. Estimation of the time-delays is the main interest in the FRI literature, due to the special nonlinear methods required for delay recovery.
Once the delays are known, the standard least-squares method is typically used to recover the amplitudes; we therefore focus on delay estimation in the sequel. Finally, for the same setting, we can improve reconstruction accuracy at the expense of oversampling, as illustrated in Fig. 7. Here we show recovery performance for oversampling factors of 1, 2, 4 and 8. The oversampling was exploited using the total least-squares method, followed by Cadzow's iterative denoising (both described in detail in [10]).

III. FINITE STREAM OF PULSES

A. Extension of SoS Class

Consider now a finite stream of pulses, defined as

$$\tilde x(t) = \sum_{l=1}^{L} a_l\, h(t - t_l), \quad t_l \in [0, \tau),\ a_l \in \mathbb{R},\ l = 1, \ldots, L, \qquad (32)$$

where, as in Section II, h(t) is a known pulse shape, and $\{t_l, a_l\}_{l=1}^{L}$ are the unknown delays and amplitudes. The time-delays $\{t_l\}_{l=1}^{L}$ are restricted to lie in a finite time interval [0, τ). Since there are only 2L degrees of freedom, we wish to design a sampling and reconstruction method which perfectly reconstructs $\tilde x(t)$ from 2L samples. In this section we assume that the pulse h(t) has finite support R, i.e.,

$$h(t) = 0, \quad \forall\, |t| \geq R/2. \qquad (33)$$

This is a rather weak condition, since our primary interest is in very short pulses which have wide, or even infinite, frequency support, and therefore cannot be sampled efficiently using classical sampling results for bandlimited signals. We now investigate the structure of the samples taken in the periodic case, and design a sampling kernel for the finite setting which obtains precisely the same samples c[n] as in the periodic case. In the periodic setting, the resulting samples are given by (10).
Using g(t) of (19) as the sampling kernel, we have

$$c[n] = \langle g(t - nT), x(t)\rangle = \sum_{m\in\mathbb{Z}} \sum_{l=1}^{L} a_l \int_{-\infty}^{\infty} h(t - t_l - m\tau)\, g^*(t - nT)\,dt = \sum_{m\in\mathbb{Z}} \sum_{l=1}^{L} a_l\, \varphi(nT - t_l - m\tau), \qquad (34)$$

where we defined

$$\varphi(\vartheta) = \langle g(t - \vartheta), h(t)\rangle. \qquad (35)$$

Since g(t) in (19) vanishes for all |t| > τ/2 and h(t) satisfies (33), the support of φ(t) is (R + τ), i.e.,

$$\varphi(t) = 0 \quad \text{for all } |t| \geq (R + \tau)/2. \qquad (36)$$

Using this property, the summation in (34) is over nonzero values only for indices m satisfying

$$|nT - t_l - m\tau| < (R + \tau)/2. \qquad (37)$$

Sampling within the window [0, τ), i.e., nT ∈ [0, τ), and noting that the time-delays lie in the interval $t_l \in [0, \tau)$, $l = 1, \ldots, L$, (37) implies

$$(R + \tau)/2 > |nT - t_l - m\tau| \geq |m|\tau - |nT - t_l| > (|m| - 1)\tau. \qquad (38)$$

Here we used the triangle inequality and the fact that $|nT - t_l| < \tau$ in our setting. Therefore,

$$|m| < \frac{R/\tau + 3}{2} \;\Rightarrow\; |m| \leq \left\lceil \frac{R/\tau + 3}{2} \right\rceil - 1 \triangleq r, \qquad (39)$$

i.e., the elements of the sum in (34) vanish for all m except the values in (39). Consequently, the infinite sum in (34) reduces to a finite sum over |m| ≤ r, so that (34) becomes

$$c[n] = \sum_{m=-r}^{r} \sum_{l=1}^{L} a_l\, \varphi(nT - t_l - m\tau) = \sum_{m=-r}^{r} \sum_{l=1}^{L} a_l \int_{-\infty}^{\infty} h(t - t_l)\, g^*(t - nT + m\tau)\,dt = \sum_{m=-r}^{r} \left\langle g(t - nT + m\tau),\ \sum_{l=1}^{L} a_l h(t - t_l) \right\rangle, \qquad (40)$$

where in the last equality we used the linearity of the inner product. Defining a function which consists of (2r + 1) periods of g(t),

$$g_r(t) = \sum_{m=-r}^{r} g(t + m\tau), \qquad (41)$$

we conclude that

$$c[n] = \langle g_r(t - nT), \tilde x(t)\rangle. \qquad (42)$$

Therefore, the samples c[n] can be obtained by filtering the aperiodic signal $\tilde x(t)$ with the filter $g_r^*(-t)$ prior to sampling. This filter has compact support (2r + 1)τ. Since the finite-setting samples (42) are identical to those of the periodic case (34), recovery of the delays and amplitudes is performed exactly as in the periodic setting. We summarize this result in the following theorem.

Theorem 3.
Consider the finite stream of pulses

$$\tilde x(t) = \sum_{l=1}^{L} a_l\, h(t - t_l), \quad t_l \in [0, \tau),\ a_l \in \mathbb{R},$$

where h(t) has finite support R. Choose a set K of consecutive indices for which $H(2\pi k/\tau) \neq 0$, $\forall k \in K$. Then the N samples given by

$$c[n] = \langle g_r(t - nT), \tilde x(t)\rangle, \quad n = 0, \ldots, N-1,\ nT \in [0, \tau),$$

where r is defined in (39) and $g_r(t)$ is compactly supported and defined by (41) (based on the filter g(t) in (17)), uniquely determine the signal $\tilde x(t)$ as long as N ≥ |K| ≥ 2L.

If, for example, the support R of h(t) satisfies R ≤ τ, then we obtain from (39) that r = 1. Therefore, the filter in this case consists of 3 periods of g(t):

$$g_{3p}(t) \triangleq g_r(t)\big|_{r=1} = g(t - \tau) + g(t) + g(t + \tau). \qquad (43)$$

Practical implementation of the filter may be carried out using delay-lines. The relation of this scheme to previous approaches will be investigated in Section V.

Perfect reconstruction is achieved, as can be seen in Fig. 8; the estimation is exact to numerical precision. 2) High Order Problems: The same simulation was carried out with L = 20 diracs. The results are shown in Fig. 9. Here again, the reconstruction is perfect even for large L. 3) Noisy Case: We now consider the performance of our method in the presence of noise. In addition, we compare our performance to the B-spline and E-spline methods proposed in [13], and to the Gaussian sampling kernel [3]. We examine four scenarios, in which the signal consists of L = 2, 3, 5, 20 diracs. (Due to the computational complexity of calculating the time-domain expression for high-order E-splines, these functions were simulated up to order 9, which allows for L = 5 pulses.) In our setup, the time-delays are equally distributed in the window [0, τ), with τ = 1, and remain constant throughout the experiments. All amplitudes are set to one. The SNR is computed from the samples of each method; in other words, $\sigma_n$ in (31) is method-dependent, and is determined by the desired SNR and the samples of the specific technique.
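The number of periods r in (39) and the aggregated filter $g_r$ of (41) are easy to compute; a small sketch (function names and the rect stand-in for g(t) are ours):

```python
import numpy as np

def num_periods(R, tau):
    """r of eq. (39): r = ceil((R/tau + 3)/2) - 1."""
    return int(np.ceil((R / tau + 3) / 2)) - 1

def g_r(t, g, tau, R):
    """Aggregated filter g_r(t) = sum_{m=-r}^{r} g(t + m*tau), eq. (41)."""
    r = num_periods(R, tau)
    return sum(g(t + m * tau) for m in range(-r, r + 1))

tau = 1.0
r_short = num_periods(0.5, tau)   # pulse support R <= tau  ->  r = 1, the g_3p of (43)
r_long = num_periods(1.5, tau)    # wider pulse             ->  r = 2

# Periodizing a support-tau stand-in for g(t) (a plain rect, purely illustrative):
rect = lambda t: np.where(np.abs(np.asarray(t, dtype=float)) < tau / 2, 1.0, 0.0)
val = float(g_r(0.25, rect, tau, R=0.5))   # a point inside the window [0, tau)
```

This reproduces the R ≤ τ case of Theorem 3, where exactly three shifted copies of g(t) are summed.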
Hard thresholding was implemented in order to improve the spline methods, as suggested by the authors in [13]. The threshold was chosen to be $3\sigma_n$, where $\sigma_n$ is the standard deviation of the AWGN. For the Gaussian sampling kernel, the parameter σ was optimized and took on the values σ = 0.25, 0.28, 0.32, 0.9, respectively. The results are given in Fig. 10. For L = 2 all methods are stable, where E-splines exhibit better performance than B-splines, and the Gaussian and SoS approaches demonstrate the lowest errors. As the value of L grows, the advantage of the SoS filter becomes more prominent: for L ≥ 5, the performance of the Gaussian and both spline methods deteriorates, with errors approaching the order of τ. In contrast, the SoS filter retains its performance nearly unchanged even up to L = 20, where the B-spline and Gaussian methods are unstable. The improved version of the Gaussian approach presented in [12] would not perform better in this high-order case, since it fails for L > 9, as noted by the authors. A comparison of our approach to previous methods will be detailed in Section V.

IV. INFINITE STREAM OF PULSES

We now consider the case of an infinite stream of pulses,

$$z(t) = \sum_{l\in\mathbb{Z}} a_l\, h(t - t_l), \quad t_l, a_l \in \mathbb{R}. \qquad (44)$$

We assume that the infinite signal has a bursty character, i.e., the signal has two distinct phases: a) bursts of maximal duration τ containing at most L pulses, and b) quiet phases between bursts. For the sake of clarity we begin with the case h(t) = δ(t). For this choice the filter $g_r^*(-t)$ in (41) reduces to $g_{3p}^*(-t)$ of (43). Since the filter $g_{3p}^*(-t)$ has compact support 3τ, we are assured that the current burst cannot influence samples taken 3τ/2 seconds before or after it. In the finite case we confined ourselves to sampling within the interval [0, τ); similarly, here, we assume that the samples are taken during the burst duration.
Therefore, if the minimal spacing between any two consecutive bursts is 3τ/2, then we are guaranteed that each sample taken during a burst is influenced by that burst only, as depicted in Fig. 11. Consequently, the infinite problem reduces to a sequential solution of local, distinct finite-order problems, as in Section III. Here the compact support of our filter comes into play, allowing us to apply local reconstruction methods. Fig. 11. Bursty signal z(t). Spacing of 3τ/2 between bursts ensures that the influence of the current burst ends before taking the samples of the next burst, due to the finite support, 3τ, of the sampling kernel $g_{3p}^*(-t)$. In the above argument we assume we know the locations of the bursts, since we must acquire samples from within the burst duration; samples outside the burst duration are contaminated by energy from adjacent bursts. Nonetheless, knowledge of burst locations is available in many applications, such as synchronized communication, where the receiver knows when to expect the bursts, or radar and imaging scenarios, where the transmitter is itself the receiver. We now state this result in a theorem. Theorem 4. Consider a signal z(t) which is a stream of bursts consisting of delayed and weighted diracs. The maximal burst duration is τ, and the maximal number of pulses within each burst is L. Then, the samples given by $c[n] = \langle g_{3p}(t - nT), z(t) \rangle$, $n \in \mathbb{Z}$, where $g_{3p}(t)$ is defined by (43), are a sufficient characterization of z(t) as long as the spacing between two adjacent bursts is greater than 3τ/2, and the burst locations are known. Extending this result to a general pulse h(t) is straightforward, as long as h(t) is compactly supported with support R, and we filter with $g_r^*(-t)$ as defined in (41) with the appropriate r from (39).
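Under the conditions of Theorem 4, reconstruction can proceed burst by burst. A minimal sketch of the segmentation step, with hypothetical burst start times assumed known at the receiver:

```python
import numpy as np

tau = 1.0                         # maximal burst duration
T = 0.1                           # sampling period inside a burst
burst_starts = [0.0, 2.5, 5.0]    # hypothetical; gaps between bursts exceed 1.5*tau

def burst_sample_times(t0):
    # Samples are acquired only inside the burst window [t0, t0 + tau),
    # where they are unaffected by neighboring bursts (filter support 3*tau).
    n = np.arange(int(round(tau / T)))
    return t0 + n * T

# Each window is then handed to the finite-stream solver of Section III.
windows = [burst_sample_times(t0) for t0 in burst_starts]
```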
If we can choose a set $\mathcal{K}$ of consecutive indices for which $H(2\pi k/\tau) \neq 0$, $\forall k \in \mathcal{K}$, and we are guaranteed that the minimal spacing between two adjacent bursts is greater than $((2r + 1)\tau + R)/2$, then the above theorem holds. A. Periodic Case The work in [3] was the first to address efficient sampling of pulse streams, e.g., diracs. Their approach for solving the periodic case was ideal lowpass filtering, followed by uniform sampling, which allowed them to obtain the Fourier series coefficients of the signal. These coefficients are then processed by the annihilating filter to obtain the unknown time-delays and amplitudes. In Section II, we derived a general condition on the sampling kernel (11), under which recovery is guaranteed. The lowpass filter of [3] is a special case of this result. The noise robustness of both the lowpass approach and our more general method is high as long as the pulses are well separated, since reconstruction from Fourier series coefficients is stable in this case. Both approaches achieve the minimal number of samples. The lowpass filter is bandlimited and consequently has infinite time support. Therefore, this sampling scheme is unsuitable for finite and infinite streams of pulses. The SoS class introduced in Section II consists of compactly supported filters, which is crucial for extending our results to finite and infinite streams of pulses. A comparison between the two methods is shown in Table I. B. Finite Pulse Stream The authors of [3] proposed a Gaussian sampling kernel for sampling finite streams of diracs. The Gaussian method is numerically unstable, as mentioned in [12], since the samples are multiplied by a rapidly diverging or decaying exponent. Therefore, this approach is unsuitable for L ≥ 6. Modifications proposed in [12] exhibit better performance and stability. However, these methods require substantial oversampling, and still exhibit instability for L > 9.
In [13] the family of polynomial reproducing kernels was introduced as sampling filters for the model (32); B-splines were proposed as a specific example. The B-spline sampling filter enables obtaining moments of the signal, rather than Fourier coefficients. The moments are then processed with the same annihilating filter used in previous methods. However, as mentioned by the authors, this approach is unstable for high values of L. This is because, in contrast to the estimation of Fourier coefficients, estimating high-order moments is unstable, since unstable weighting of the samples is carried out during the process. Another general family introduced in [13] for the finite model is the class of exponential reproducing kernels. As a specific case, the authors propose E-spline sampling kernels. The CTFT of an E-spline of order N + 1 is described by
$\hat{\beta}_{\alpha}(\omega) = \prod_{n=0}^{N} \frac{1 - e^{\alpha_n - j\omega}}{j\omega - \alpha_n},$ (45)
where $\alpha = (\alpha_0, \alpha_1, \ldots, \alpha_N)$ are free parameters. In order to use E-splines as sampling kernels for pulse streams, the authors propose a specific structure on the α's: $\alpha_n = \alpha_0 + n\lambda$. Choosing exponents with a non-vanishing real part results in unstable weighting, as in the B-spline case. However, choosing the special case of purely imaginary exponents in the E-splines, already suggested by the authors, results in a reconstruction method based on Fourier coefficients, which exhibits an interesting relation to our method. The Fourier coefficients are obtained by applying a matrix consisting of the exponential spanning coefficients $\{c_{m,n}\}$ (see [13]), instead of our Vandermonde matrix relation (14). With this specific choice of parameters, the E-spline function satisfies (11). Interestingly, with a proper choice of spanning coefficients, it can be shown that the SoS class can reproduce exponentials with frequencies $\{2\pi k/\tau\}_{k \in \mathcal{K}}$, and therefore satisfies the general exponential reproduction property of [13].
However, the SoS filter gives rise to a new sampling scheme with substantial advantages over existing methods, including E-splines. The first advantage concerns the presence of noise, where both methods have the following structure:
$\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{w},$ (46)
where w is the noise vector. While the Fourier coefficients vector x is common to both approaches, the linear transformation A is method-dependent, and therefore the sample vector y is different. In our approach, with g(t) of (24), A is the DFT matrix, which for any order L has a condition number of 1. In the case of E-splines, however, the transformation matrix A consists of the E-spline exponential spanning coefficients, and has a much higher condition number, e.g., above 100 for L = 5. Consequently, some Fourier coefficients will suffer from much higher noise levels than others. This scenario of high variance between the noise levels of the samples is known to deteriorate the performance of spectral analysis methods [11], the annihilating filter being one of them. This explains our simulations, which show that the SoS filter outperforms the E-spline approach in the presence of noise. When the E-spline coefficients α are purely imaginary, it can easily be shown that (45) becomes a multiplication of shifted sincs. This is in contrast to the SoS filter, which consists of a sum of sincs in the frequency domain. Since multiplication in the frequency domain translates to convolution in the time domain, it is clear that the support of the E-spline grows with its order, and in turn with the order of the problem L. In contrast, the support of the SoS filter remains unchanged. This observation becomes important when examining the infinite case. The constraint on the signal in [13] is that no more than L pulses lie in any interval of length LPT, P being the support of the filter and T the sampling period. Since P grows linearly with L, the constraint cast on the infinite stream becomes more stringent, quadratically with L.
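The conditioning claim for the DFT side of (46) is easy to verify numerically: the DFT matrix $\mathbf{F}$ satisfies $\mathbf{F}\mathbf{F}^H = N\mathbf{I}$, so all its singular values coincide and its condition number is exactly 1, independent of the problem order:

```python
import numpy as np

# With the SoS kernel of (24), the samples-to-Fourier-coefficients map A in
# (46) is a DFT matrix. Since F @ F.conj().T = N * I, every singular value
# equals sqrt(N), so the condition number is 1 for any size N.
N = 16
F = np.fft.fft(np.eye(N))       # N x N DFT matrix
cond = np.linalg.cond(F)        # ratio of largest to smallest singular value
```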
On the other hand, the constraint on the infinite stream using the SoS filter is independent of L. We showed in simulations that typically for L ≥ 5 the estimation errors, using both B-spline and E-spline sampling kernels, become very large. In contrast, our approach leads to stable reconstruction even for very high values of L, e.g., L = 100. In addition, even for low values of L we showed in simulations that although the E-spline method improves upon B-splines, the SoS reconstruction method outperforms both spline approaches. A comparison is described in Table II. C. Infinite Streams The work in [13] addressed the infinite stream case with h(t) = δ(t). They proposed filtering the signal with a polynomial reproducing sampling kernel prior to sampling. If the signal has at most L diracs within any interval of duration LPT, where P denotes the support of the sampling filter and T the sampling period, then the samples are a sufficient characterization of the signal. This condition allows one to divide the infinite stream into a sequence of finite-case problems. In our approach, the quiet phases of 1.5τ between the bursts of length τ enable the reduction to the finite case. Since the infinite solution is based on the finite one, our method is advantageous in terms of stability in high-order problems and noise robustness. However, we do have an additional requirement of quiet phases between the bursts. Regarding the sampling rate, the number of degrees of freedom of the signal per unit time, also known as the rate of innovation, is ρ = 2L/(2.5τ), which is the critical sampling rate. Our sampling rate is 2L/τ, and therefore we oversample by a factor of 2.5. In the same scenario, the method in [13] would require a sampling rate of LP/(2.5τ), i.e., oversampling by a factor of P/2. Properties of polynomial reproducing kernels imply that P ≥ 2L; therefore, for any L ≥ 3 our method exhibits more efficient sampling.
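The rate comparison can be verified with a few lines of arithmetic, assuming the bursty model above and the best-case bound P = 2L for polynomial reproducing kernels (both assumptions are taken from the text):

```python
# Rate of innovation rho = 2L/(2.5*tau): bursts of L pulses of duration tau,
# each followed by a quiet phase of 1.5*tau, i.e., 2L parameters per 2.5*tau.
tau = 1.0
for L in [2, 3, 5, 20]:
    rho = 2 * L / (2.5 * tau)
    sos_rate = 2 * L / tau             # SoS scheme
    P = 2 * L                          # best case for [13]: P >= 2L
    spline_rate = L * P / (2.5 * tau)  # scheme of [13]
    # SoS oversamples by 2.5; [13] oversamples by P/2 = L, worse once L >= 3.
```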
A table comparing the various features is shown in Table III. Recent work [14] presented a low-complexity method for reconstructing streams of pulses (both infinite and finite cases) consisting of diracs. However, the basic assumption of this method is that there is at most one dirac per sampling period. This means we must have prior knowledge of a lower limit on the spacing between two consecutive deltas in order to guarantee correct reconstruction. In some cases such a limit may not exist; even if it does, it will usually force us to sample at a much higher rate than the critical one. VI. APPLICATION - ULTRASOUND IMAGING An interesting application of our framework is ultrasound imaging. In ultrasonic imaging, an acoustic pulse is transmitted into the scanned tissue. The pulse is reflected due to changes in acoustic impedance, which occur, for example, at the boundaries between two different tissues. At the receiver, the echoes are recorded, where the time-of-arrival and power of each echo indicate the scatterer's location and strength, respectively. Accurate estimation of tissue boundaries and scatterer locations allows for reliable detection of certain illnesses, and is therefore of major clinical importance. The location of the boundaries is often more important than the power of the reflection. This stream of pulses is finite, since the pulse energy decays within the tissue. We now demonstrate our method on real 1-dimensional (1D) ultrasound data. The multiple-echo signal recorded at the receiver can be modeled as a finite stream of pulses, as in (32). The unknown time-delays correspond to the locations of the various scatterers, whereas the amplitudes correspond to their reflection coefficients. The pulse shape in this case is a Gaussian, defined in (30), due to the physical characteristics of the electro-acoustic transducer (mechanical damping).
We assume the received pulse shape is known, either by assuming it is unchanged through propagation, by physically modeling ultrasonic wave propagation, or by prior estimation of the received pulse. A full investigation of mismatch in the pulse shape is left for future research. In our setting, a phantom consisting of uniformly spaced pins, mimicking point scatterers, was scanned by GE Healthcare's Vivid-i portable ultrasound imaging system [20], [21], using a 3S-RS probe. We use the data recorded by a single element in the probe, which is modeled as a 1D stream of pulses. The center frequency of the probe is f_c = 1.7021 MHz. The width of the transmitted Gaussian pulse in this case is σ = 3 · 10^{-7} sec, and the depth of imaging is R_max = 0.16 m, corresponding to a time window of τ = 2.08 · 10^{-4} sec. In this experiment all filtering and sampling operations are carried out digitally in simulation. The analog filter required by the sampling scheme is replaced by a lengthy finite impulse response (FIR) filter. Since the sampling frequency of the element in the system is f_s = 20 MHz, which is more than 5 times higher than the Nyquist rate, the recorded data represents the continuous signal reliably. Consequently, digital filtering of the high-rate sampled data vector (4160 samples), followed by proper decimation, mimics the original analog sampling scheme with high accuracy. The recorded signal is depicted in Fig. 12. The band-pass ultrasonic signal is demodulated to base-band, i.e., envelope detection is performed, before being input to the reconstruction process. We carried out our sampling and reconstruction scheme on the aforementioned data. We set L = 4, looking for the 4 strongest echoes. Since the data is corrupted by strong noise, we oversampled the signal, obtaining twice the minimal number of samples. In addition, hard-thresholding of the samples was implemented, with the threshold set to 10 percent of the maximal value.
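As a sanity check of the stated time window, a round trip to depth R_max at the speed of sound takes 2R_max/c; taking c ≈ 1540 m/s (a typical soft-tissue value, assumed here since the text does not state it) reproduces τ = 2.08 · 10^{-4} sec:

```python
# Round-trip time to the maximal imaging depth: tau = 2 * R_max / c.
# The speed of sound c = 1540 m/s is an assumed typical soft-tissue value,
# not stated in the text.
R_max = 0.16            # m
c = 1540.0              # m/s (assumption)
tau = 2 * R_max / c     # ~2.08e-4 s, matching the stated time window
```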
We obtained N = 17 samples by decimating the output of the lengthy FIR digital filter imitating $g_{3p}^*(-t)$ from (43), where the coefficients $\{b_k\}$ were all set to one. In Fig. 13a the reconstructed signal is depicted vs. the full demodulated signal using all 4160 samples. Clearly, the time-delays were estimated with high precision. The amplitudes were estimated as well; however, the amplitude of the second pulse has a large error, probably due to the large noise values present in its vicinity. As mentioned earlier, though, the exact locations of the scatterers are often more important than accurate reflection coefficients. We carried out the same experiment, now oversampling by a factor of 4, resulting in N = 33 samples. Here no hard-thresholding is required. The results are depicted in Fig. 13b, and are very similar to our previous results. In both simulations, the estimation error in the pulse location is around 0.1 mm. Current ultrasound imaging technology operates on the high-rate sampled data, e.g., f_s = 20 MHz in our setting. Since there are usually 100 different elements in a single ultrasonic probe, each sampled at a very high rate, data throughput becomes very high, imposing high computational complexity on the system and limiting its capabilities. Therefore, there is a demand for lowering the sampling rate, which in turn will reduce the complexity of reconstruction. Exploiting the parametric point of view, our sampling scheme offers such a rate reduction. VII. CONCLUSIONS We presented efficient sampling and reconstruction schemes for streams of pulses. For the case of a periodic stream of pulses, we derived a general condition on the sampling kernel which allows a single-channel uniform sampling scheme. Previous work [3] is a special case of this general result. We then proposed a class of filters, satisfying the condition, with compact support. Exploiting the compact support of the filters, we constructed a new sampling scheme for the case of a finite stream of pulses.
Simulations show that this method exhibits better performance than previous techniques [3], [13], in terms of stability in high-order problems and noise robustness. An extension to an infinite stream of pulses was also presented. The compact support of the filter allows for local reconstruction, and thus lowers the complexity of the problem. Finally, we demonstrated the advantage of our approach in reducing the sampling and processing rate of ultrasound imaging, by applying our techniques to real ultrasound data. APPENDIX PROOF OF THEOREM 2 The MSE of the optimal linear estimator of the vector x from the measurement vector y is known to be [22]
$\mathrm{MSE} = \mathrm{Tr}\{\mathbf{R}_{xx}\} - \mathrm{Tr}\{\mathbf{R}_{xy}\mathbf{R}_{yy}^{-1}\mathbf{R}_{yx}\}.$ (47)
The covariance matrices in our case are
$\mathbf{R}_{xy} = \mathbf{R}_{xx}\mathbf{B}^*\mathbf{V}^*,$ (48)
$\mathbf{R}_{yy} = \mathbf{V}\mathbf{B}\mathbf{R}_{xx}\mathbf{B}^*\mathbf{V}^* + \sigma^2\mathbf{I},$ (49)
where we used (27), and the fact that $\mathbf{R}_{ww} = \sigma^2\mathbf{I}$ since w is a white Gaussian noise vector. Under our assumptions on $\{t_l\}$ and $\{a_l\}$, denoting $h_k = H(2\pi k/\tau)$, and using (5),
$(\mathbf{R}_{xx})_{k,k'} = E\{X[k]X^*[k']\} = \frac{1}{\tau^2} h_k h_{k'}^* \sum_{l=1}^{L}\sum_{l'=1}^{L} E\{a_l a_{l'}^*\}\, e^{-j\frac{2\pi}{\tau}(k t_l - k' t_{l'})} = \frac{\sigma_a^2}{\tau^2} h_k h_{k'}^* \sum_{l=1}^{L} E\{e^{-j\frac{2\pi}{\tau}(k-k') t_l}\} = \frac{\sigma_a^2}{\tau^2} h_k h_{k'}^* \sum_{l=1}^{L} \int_0^\tau \frac{1}{\tau} e^{-j\frac{2\pi}{\tau}(k-k')t}\,dt = \frac{\sigma_a^2}{\tau^2} L |h_k|^2 \delta_{k,k'}.$ (50)
Denoting by $\tilde{\mathbf{H}}$ a diagonal matrix with kth element $|\tilde{h}_k|^2 = |h_k|^2 \sigma_a^2 L/\tau^2$, we have
$\mathbf{R}_{xx} = \tilde{\mathbf{H}}.$ (51)
Since the first term of (47) is independent of B, minimizing the MSE with respect to B is equivalent to maximizing the second term in (47). Substituting (48), (49) and (51) into this term, the optimal B is a solution to
$\max_{\mathbf{B}}\ \mathrm{Tr}\left\{\tilde{\mathbf{H}}\mathbf{B}^*\mathbf{V}^* \left(\mathbf{V}\mathbf{B}\tilde{\mathbf{H}}\mathbf{B}^*\mathbf{V}^* + \sigma^2\mathbf{I}\right)^{-1} \mathbf{V}\mathbf{B}\tilde{\mathbf{H}}\right\}.$ (52)
Using the matrix inversion formula [23],
$\left(\mathbf{V}\mathbf{B}\tilde{\mathbf{H}}\mathbf{B}^*\mathbf{V}^* + \sigma^2\mathbf{I}\right)^{-1} = \frac{1}{\sigma^2}\left[\mathbf{I} - \mathbf{V}\mathbf{B}\left(\sigma^2\tilde{\mathbf{H}}^{-1} + \mathbf{B}^*\mathbf{V}^*\mathbf{V}\mathbf{B}\right)^{-1}\mathbf{B}^*\mathbf{V}^*\right].$ (53)
It is easy to verify from the definition of V in (13) that
$(\mathbf{V}^*\mathbf{V})_{ik} = \sum_{l=0}^{N-1} e^{j\frac{2\pi}{N}l(k-i)} = N\delta_{k,i}.$ (54)
Therefore, the objective in (52) equals
$\mathrm{Tr}\left\{\frac{N}{\sigma^2}\tilde{\mathbf{H}}\mathbf{B}^*\left[\mathbf{I} - \mathbf{B}\left(\frac{\sigma^2}{N}\tilde{\mathbf{H}}^{-1} + \mathbf{B}^*\mathbf{B}\right)^{-1}\mathbf{B}^*\right]\mathbf{B}\tilde{\mathbf{H}}\right\} = \sum_{i=1}^{|\mathcal{K}|} |\tilde{h}_i|^2 \left(1 - \frac{\sigma^2/N}{|b_i|^2|\tilde{h}_i|^2 + \sigma^2/N}\right),$ (55)
where we used the fact that B and $\tilde{\mathbf{H}}$ are diagonal.
We can now find the optimal B by maximizing (55), which is equivalent to minimizing the negative term:
$\min_{\mathbf{B}} \sum_{i=1}^{|\mathcal{K}|} \frac{|\tilde{h}_i|^2}{1 + |b_i|^2|\tilde{h}_i|^2 N/\sigma^2}, \quad \text{s.t.}\ \sum_{i=1}^{|\mathcal{K}|} |b_i|^2 = 1.$ (56)
Denoting $\beta_i = |b_i|^2$, (56) becomes a convex optimization problem:
$\min_{\beta_i} \sum_{i=1}^{|\mathcal{K}|} \frac{|\tilde{h}_i|^2}{1 + \beta_i|\tilde{h}_i|^2 N/\sigma^2}$ (57)
subject to
$\beta_i \geq 0,$ (58)
$\sum_{i=1}^{|\mathcal{K}|} \beta_i = 1.$ (59)
To solve (57) subject to (58) and (59), we form the Lagrangian
$\mathcal{L} = \sum_{i=1}^{|\mathcal{K}|} \frac{|\tilde{h}_i|^2}{1 + \beta_i|\tilde{h}_i|^2 N/\sigma^2} + \lambda\left(\sum_{i=1}^{|\mathcal{K}|}\beta_i - 1\right) - \sum_{i=1}^{|\mathcal{K}|}\mu_i\beta_i,$ (60)
where, from the Karush-Kuhn-Tucker (KKT) conditions [24], $\mu_i \geq 0$ and $\mu_i\beta_i = 0$. Differentiating (60) with respect to $\beta_i$ and equating to 0,
$\frac{|\tilde{h}_i|^4 N/\sigma^2}{\left(1 + \beta_i|\tilde{h}_i|^2 N/\sigma^2\right)^2} + \mu_i = \lambda,$ (61)
so that $\lambda > 0$, since $\tilde{h}_i > 0$ by construction of $\tilde{\mathbf{H}}$ (see Theorem 1). If $\lambda > |\tilde{h}_i|^4 N/\sigma^2$ then $\mu_i > 0$, and therefore $\beta_i = 0$ from the KKT conditions. If $\lambda \leq |\tilde{h}_i|^4 N/\sigma^2$ then from (61) $\mu_i = 0$ and
$\beta_i = \frac{\sigma^2}{N}\left(\sqrt{\frac{N}{\lambda\sigma^2}} - \frac{1}{|\tilde{h}_i|^2}\right).$ (62)
The optimal $\beta_i$ is therefore
$\beta_i = \begin{cases} \frac{\sigma^2}{N}\left(\sqrt{\frac{N}{\lambda\sigma^2}} - \frac{1}{|\tilde{h}_i|^2}\right) & \lambda \leq |\tilde{h}_i|^4 N/\sigma^2 \\ 0 & \lambda > |\tilde{h}_i|^4 N/\sigma^2, \end{cases}$ (63)
where $\lambda > 0$ is chosen to satisfy (59). Note from (63) that if $\beta_i = 0$ and $j < i$, then $\beta_j = 0$ as well, since the $|\tilde{h}_i|$ are in increasing order. We now show that there is a unique λ satisfying (59). Define the function
$G(\lambda) = \sum_{i=1}^{|\mathcal{K}|}\beta_i(\lambda) - 1,$ (64)
so that λ is a root of $G(\lambda)$. Since the $|\tilde{h}_i|$'s are in increasing order, $|\tilde{h}_{|\mathcal{K}|}| = \max_i |\tilde{h}_i|$. It is clear from (63) that $G(\lambda)$ is monotonically decreasing for $0 < \lambda \leq |\tilde{h}_{|\mathcal{K}|}|^4 N/\sigma^2$. In addition, $G(\lambda) = -1$ for $\lambda > |\tilde{h}_{|\mathcal{K}|}|^4 N/\sigma^2$, and $G(\lambda) > 0$ for $\lambda \to 0$. Thus, there is a unique λ for which (59) is satisfied. Substituting (63) into (59), and denoting by m the smallest index for which $\lambda \leq |\tilde{h}_{m+1}|^4 N/\sigma^2$, we have
$\sqrt{\lambda} = \frac{(|\mathcal{K}| - m)\sqrt{N/\sigma^2}}{N/\sigma^2 + \sum_{i=m+1}^{|\mathcal{K}|} 1/|\tilde{h}_i|^2},$ (65)
completing the proof of the theorem.
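The closed-form solution (63), with λ chosen via (65), can be implemented directly. The sketch below searches over the split index m, assuming the $|\tilde{h}_i|$ are given in increasing order; the test values of $|\tilde{h}_i|$, N and σ² are hypothetical:

```python
import numpy as np

def optimal_beta(h_tilde, N, sigma2):
    """Solve (57)-(59): h_tilde holds |h~_i|, sorted in increasing order."""
    K = len(h_tilde)
    for m in range(K):  # m = number of coefficients forced to zero
        active = h_tilde[m:]
        S = np.sum(1.0 / active**2)
        sqrt_lam = (K - m) * np.sqrt(N / sigma2) / (N / sigma2 + S)  # eq. (65)
        lam = sqrt_lam**2
        # Feasibility of the split: lam <= |h~_{m+1}|^4 N / sigma^2, which
        # also guarantees beta >= 0 on the active set.
        if lam <= active[0]**4 * N / sigma2:
            beta = np.zeros(K)
            beta[m:] = (sigma2 / N) * (np.sqrt(N / (lam * sigma2))
                                       - 1.0 / active**2)            # eq. (63)
            return beta, lam
    raise RuntimeError("no feasible split found")

# Hypothetical magnitudes |h~_i| (increasing), N = 10 samples, sigma^2 = 1.
beta, lam = optimal_beta(np.array([0.5, 1.0, 2.0]), N=10, sigma2=1.0)
```

The returned β sums to one, and on the active set the stationarity condition (61) holds with μ_i = 0, which can be checked directly.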
9,762
1003.2822
2138787877
Signals comprised of a stream of short pulses appear in many applications including bioimaging and radar. The recent finite rate of innovation framework, has paved the way to low rate sampling of such pulses by noticing that only a small number of parameters per unit time are needed to fully describe these signals. Unfortunately, for high rates of innovation, existing sampling schemes are numerically unstable. In this paper we propose a general sampling approach which leads to stable recovery even in the presence of many pulses. We begin by deriving a condition on the sampling kernel which allows perfect reconstruction of periodic streams from the minimal number of samples. We then design a compactly supported class of filters, satisfying this condition. The periodic solution is extended to finite and infinite streams and is shown to be numerically stable even for a large number of pulses. High noise robustness is also demonstrated when the delays are sufficiently separated. Finally, we process ultrasound imaging data using our techniques and show that substantial rate reduction with respect to traditional ultrasound sampling schemes can be achieved.
The work in @cite_19 addressed the infinite stream case, with @math . They proposed filtering the signal with a polynomial reproducing sampling kernel prior to sampling. If the signal has at most @math diracs within any interval of duration @math , where @math denotes the support of the sampling filter and @math the sampling period, then the samples are a sufficient characterization of the signal. This condition allows to divide the infinite stream into a sequence of finite case problems. In our approach the quiet phases of @math between the bursts of length @math enable the reduction to the finite case.
{ "abstract": [ "Consider the problem of sampling signals which are not bandlimited, but still have a finite number of degrees of freedom per unit of time, such as, for example, nonuniform splines or piecewise polynomials, and call the number of degrees of freedom per unit of time the rate of innovation. Classical sampling theory does not enable a perfect reconstruction of such signals since they are not bandlimited. Recently, it was shown that, by using an adequate sampling kernel and a sampling rate greater or equal to the rate of innovation, it is possible to reconstruct such signals uniquely . These sampling schemes, however, use kernels with infinite support, and this leads to complex and potentially unstable reconstruction algorithms. In this paper, we show that many signals with a finite rate of innovation can be sampled and perfectly reconstructed using physically realizable kernels of compact support and a local reconstruction algorithm. The class of kernels that we can use is very rich and includes functions satisfying Strang-Fix conditions, exponential splines and functions with rational Fourier transform. This last class of kernels is quite general and includes, for instance, any linear electric circuit. We, thus, show with an example how to estimate a signal of finite rate of innovation at the output of an RC circuit. The case of noisy measurements is also analyzed, and we present a novel algorithm that reduces the effect of noise by oversampling" ], "cite_N": [ "@cite_19" ], "mid": [ "2103300762" ] }
Innovation Rate Sampling of Pulse Streams with Application to Ultrasound Imaging
Sampling is the process of representing a continuous-time signal by discrete-time coefficients, while retaining the important signal features. The well-known Shannon-Nyquist theorem states that the minimal sampling rate required for perfect reconstruction of bandlimited signals is twice the maximal frequency. This result has since been generalized to minimal rate sampling schemes for signals lying in arbitrary subspaces [1], [2]. Recently, there has been growing interest in sampling of signals consisting of a stream of short pulses, where the pulse shape is known. Such signals have a finite number of degrees of freedom per unit time, also known as the Finite Rate of Innovation (FRI) property [3]. This interest is motivated by applications such as digital processing of neuronal signals, bio-imaging, image processing and ultrawideband (UWB) communications, where such signals are present in abundance. Our work is motivated by the possible application of this model in ultrasound imaging, where echoes of the transmit pulse are reflected off scatterers within the tissue, and form a stream of pulses signal at the receiver. The time-delays and amplitudes of the echoes indicate the position and strength of the various scatterers, respectively. Therefore, determining these parameters from low rate samples of the received signal is an important problem. Reducing the rate allows more efficient processing which can translate to power and size reduction of the ultrasound imaging system. Our goal is to design a minimal rate single-channel sampling and reconstruction scheme for pulse streams that is stable even in the presence of many pulses. Since the set of FRI signals does not form a subspace, classic subspace schemes cannot be directly used to design low-rate sampling schemes. Mathematically, such FRI signals conform with a broader model of signals lying in a union of subspaces [4]- [9]. 
Although the minimal sampling rate required for such settings has been derived, no generic sampling scheme exists for the general problem. Nonetheless, some special cases have been treated in previous work, including streams of pulses. A stream of pulses can be viewed as a parametric signal, uniquely defined by the time-delays of the pulses and their amplitudes. Efficient sampling of periodic impulse streams, having L impulses in each period, was proposed in [3], [10]. The heart of the solution is to obtain a set of Fourier series coefficients, which then converts the problem of determining the time-delays and amplitudes to that of finding the frequencies and amplitudes of a sum of sinusoids. The latter is a standard problem in spectral analysis [11] which can be solved using conventional methods, such as the annihilating filter approach, as long as the number of samples is at least 2L. This result is intuitive since there are 2L degrees of freedom in each period: L time-delays and L amplitudes. Periodic streams of pulses are mathematically convenient to analyze, however not very practical. In contrast, finite streams of pulses are prevalent in applications such as ultrasound imaging. The first treatment of finite Dirac streams appears in [3], in which a Gaussian sampling kernel was proposed. The time-delays and amplitudes are then estimated from the Gaussian tails. This method and its improvement [12] are numerically unstable for high rates of innovation, since they rely on the Gaussian tails which take on small values. The work in [13] introduced a general family of polynomial and exponential reproducing kernels, which can be used to solve FRI problems. Specifically, B-spline and E-spline sampling kernels which satisfy the reproduction condition are proposed. This method treats streams of Diracs, differentiated Diracs, and short pulses with compact support. However, the proposed sampling filters result in poor reconstruction results for large L. 
To the best of our knowledge, a numerically stable sampling and reconstruction scheme for high-order problems has not yet been reported. Infinite streams of pulses arise in applications such as UWB communications, where the communicated data changes frequently. Using spline filters [13], and under certain limitations on the signal, the infinite stream can be divided into a sequence of separate finite problems. The individual finite cases may be treated using methods for the finite setting, at the expense of a sampling rate above the critical one, and suffer from the same instability issues. In addition, the constraints cast on the signal become more and more stringent as the number of pulses per unit time grows. In recent work [14], the authors propose a sampling and reconstruction scheme for L = 1; our interest here, however, is in high values of L. Another related work [7] proposes a semi-periodic model, where the pulse time-delays do not change from period to period, but the amplitudes vary. This is a hybrid case in which the number of degrees of freedom in the time-delays is finite, but there is an infinite number of degrees of freedom in the amplitudes. Therefore, the proposed recovery scheme generally requires an infinite number of samples. This differs from the periodic and finite cases we discuss in this paper, which have a finite number of degrees of freedom and, consequently, require only a finite number of samples. In this paper we study sampling of signals consisting of a stream of pulses, covering three different cases: periodic, finite and infinite streams of pulses. The criteria we consider for designing such systems are: a) minimal sampling rate which allows perfect reconstruction, b) numerical stability (with sufficiently separated time-delays), and c) minimal restrictions on the number of pulses per sampling period. We begin by treating periodic pulse streams.
For this setting, we develop a general sampling scheme for arbitrary pulse shapes which allows to determine the times and amplitudes of the pulses, from a minimal number of samples. As we show, previous work [3] is a special case of our extended results. In contrast to the infinite time-support of the filters in [3], we develop a compactly supported class of filters which satisfy our mathematical condition. This class of filters consists of a sum of sinc functions in the frequency domain. We therefore refer to such functions as Sum of Sincs (SoS). To the best of our knowledge, this is the first class of finite support filters that solve the periodic case. As we discuss in detail in Section V, these filters are related to exponential reproducing kernels, introduced in [13]. The compact support of the SoS filters is the key to extending the periodic solution to the finite stream case. Generalizing the SoS class, we design a sampling and reconstruction scheme which perfectly reconstructs a finite stream of pulses from a minimal number of samples, as long as the pulse shape has compact support. Our reconstruction is numerically stable for both small values of L and large number of pulses, e.g., L = 100. In contrast, Gaussian sampling filters [3], [12] are unstable for L > 9, and we show in simulations that B-splines and E-splines [13] exhibit large estimation errors for L ≥ 5. In addition, we demonstrate substantial improvement in noise robustness even for low values of L. Our advantage stems from the fact that we propose compactly supported filters on the one hand, while staying within the regime of Fourier coefficients reconstruction on the other hand. Extending our results to the infinite setting, we consider an infinite stream consisting of pulse bursts, where each burst contains a large number of pulses. The stability of our method allows to reconstruct even a large number of closely spaced pulses, which cannot be treated using existing solutions [13]. 
In addition, the constraints cast on the structure of the signal are independent of L (the number of pulses in each burst), in contrast to previous work, and therefore similar sampling schemes may be used for different values of L. Finally, we show that our sampling scheme requires lower sampling rate for L ≥ 3. As an application, we demonstrate our sampling scheme on real ultrasound imaging data acquired by GE healthcare's ultrasound system. We obtain high accuracy estimation while reducing the number of samples by two orders of magnitude in comparison with current imaging techniques. The remainder of the paper is organized as follows. In Section II we present the periodic signal model, and derive a general sampling scheme. The SoS class is then developed and demonstrated via simulations. The extension to the finite case is presented in Section III, followed by simulations showing the advantages of our method in high order problems and noisy settings. In Section IV, we treat infinite streams of pulses. Section V explores the relationship of our work to previous methods. Finally, in Section VI, we demonstrate our algorithm on real ultrasound imaging data. II. PERIODIC STREAM OF PULSES A. Problem Formulation Throughout the paper we denote matrices and vectors by bold font, with lowercase letters corresponding to vectors and uppercase letters to matrices. The nth element of a vector a is written as a n , and A ij denotes the ijth element of a matrix A. Superscripts (·) * , (·) T and (·) H represent complex conjugation, transposition and conjugate transposition, respectively. The Moore-Penrose pseudo-inverse of a matrix A is written as A † . The continuous-time Fourier transform (CTFT) of a continuous-time signal x (t) ∈ L 2 is defined by X (ω) = ∞ −∞ x (t) e −jωt dt, and x (t) , y (t) = ∞ −∞ x * (t) y (t) dt,(1) denotes the inner product between two L 2 signals. 
Consider a $\tau$-periodic stream of pulses, defined as
$x(t) = \sum_{m\in\mathbb{Z}} \sum_{l=1}^{L} a_l\, h(t - t_l - m\tau)$, (2)
where $h(t)$ is a known pulse shape, $\tau$ is the known period, and $\{t_l, a_l\}_{l=1}^{L}$, $t_l \in [0,\tau)$, $a_l \in \mathbb{C}$, $l = 1,\ldots,L$, are the unknown delays and amplitudes. Our goal is to sample $x(t)$ and reconstruct it from a minimal number of samples. Since the signal has $2L$ degrees of freedom, we expect the minimal number of samples to be $2L$. We are primarily interested in pulses which have small time-support. Direct uniform sampling of $2L$ samples of the signal will result in many zero samples, since the probability of a sample hitting a pulse is very low. Therefore, we must construct a more sophisticated sampling scheme. Define the periodic continuation of $h(t)$ as $f(t) = \sum_{m\in\mathbb{Z}} h(t - m\tau)$. Using Poisson's summation formula [15], $f(t)$ may be written as
$f(t) = \frac{1}{\tau} \sum_{k\in\mathbb{Z}} H\!\left(\frac{2\pi k}{\tau}\right) e^{j2\pi kt/\tau}$, (3)
where $H(\omega)$ denotes the CTFT of the pulse $h(t)$. Substituting (3) into (2) we obtain
$x(t) = \sum_{l=1}^{L} a_l f(t - t_l) = \sum_{k\in\mathbb{Z}} \frac{1}{\tau} H\!\left(\frac{2\pi k}{\tau}\right) \left( \sum_{l=1}^{L} a_l e^{-j2\pi k t_l/\tau} \right) e^{j2\pi kt/\tau} = \sum_{k\in\mathbb{Z}} X[k] e^{j2\pi kt/\tau}$, (4)
where we denoted
$X[k] = \frac{1}{\tau} H\!\left(\frac{2\pi k}{\tau}\right) \sum_{l=1}^{L} a_l e^{-j2\pi k t_l/\tau}$. (5)
The expansion in (4) is the Fourier series representation of the $\tau$-periodic signal $x(t)$ with Fourier coefficients given by (5). Following [3], we now show that once $2L$ or more Fourier coefficients of $x(t)$ are known, we may use conventional tools from spectral analysis to determine the unknowns $\{t_l, a_l\}_{l=1}^{L}$. The method by which the Fourier coefficients are obtained will be presented in subsequent sections. Define a set $\mathcal{K}$ of $M$ consecutive indices such that $H(2\pi k/\tau) \neq 0$, $\forall k \in \mathcal{K}$. We assume such a set exists, which is usually the case for short time-support pulses $h(t)$. Denote by $\mathbf{H}$ the $M \times M$ diagonal matrix with $k$th entry $\frac{1}{\tau} H(2\pi k/\tau)$, and by $\mathbf{V}(\mathbf{t})$ the $M \times L$ matrix with $kl$th element $e^{-j2\pi k t_l/\tau}$, where $\mathbf{t} = \{t_1, \ldots, t_L\}$ is the vector of unknown delays.
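As a concrete illustration of (2)-(5), the following sketch builds the Fourier coefficients $X[k]$ for a Gaussian pulse and checks that the truncated series (4) reproduces the time-domain signal (2). The pulse width, delays, and amplitudes are hypothetical example values, not the paper's simulation setup:

```python
import numpy as np

# Eq. (2)-(5): a tau-periodic stream of Gaussian pulses and its Fourier
# coefficients X[k] = (1/tau) * H(2*pi*k/tau) * sum_l a_l exp(-j2pi k t_l/tau).
tau = 1.0
sigma = 0.02                      # Gaussian width (illustrative value)
t_l = np.array([0.3, 0.7])        # delays (ground truth for the demo)
a_l = np.array([1.0, 0.5])        # amplitudes

def h(t):                          # pulse shape
    return np.exp(-t**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

def H(w):                          # CTFT of h(t)
    return np.exp(-sigma**2 * w**2 / 2)

k = np.arange(-200, 201)
X = (1 / tau) * H(2 * np.pi * k / tau) * (a_l[None, :]
    * np.exp(-2j * np.pi * np.outer(k, t_l) / tau)).sum(axis=1)   # eq. (5)

# Check: the truncated Fourier series (4) matches the time-domain sum (2).
t = 0.31
x_series = np.real(np.sum(X * np.exp(2j * np.pi * k * t / tau)))
x_direct = sum(a_l[l] * h(t - t_l[l] - m * tau)
               for l in range(2) for m in range(-2, 3))
assert np.isclose(x_series, x_direct, rtol=1e-6)
```

The truncation at $|k| \leq 200$ is harmless here because the Gaussian's CTFT decays rapidly, so the omitted coefficients are negligible.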
In addition, denote by $\mathbf{a}$ the length-$L$ vector whose $l$th element is $a_l$, and by $\mathbf{x}$ the length-$M$ vector whose $k$th element is $X[k]$. We may then write (5) in matrix form as
$\mathbf{x} = \mathbf{H}\mathbf{V}(\mathbf{t})\mathbf{a}$. (6)
Since $\mathbf{H}$ is invertible by construction, we define $\mathbf{y} = \mathbf{H}^{-1}\mathbf{x}$, which satisfies
$\mathbf{y} = \mathbf{V}(\mathbf{t})\mathbf{a}$. (7)
The matrix $\mathbf{V}$ is a Vandermonde matrix and therefore has full column rank [11], [16] as long as $M \geq L$ and the time-delays are distinct, i.e., $t_i \neq t_j$ for all $i \neq j$. Writing the $k$th element of the vector $\mathbf{y}$ in (7) explicitly:
$y_k = \sum_{l=1}^{L} a_l e^{-j2\pi k t_l/\tau}$. (8)
Evidently, given the vector $\mathbf{x}$, (7) is a standard problem of finding the frequencies and amplitudes of a sum of $L$ complex exponentials (see [11] for a review of this topic). This problem may be solved as long as $|\mathcal{K}| = M \geq 2L$. The annihilating filter approach used extensively by Vetterli et al. [3], [10] is one way of recovering the frequencies, and is thoroughly described in the literature [3], [10], [11]. This method can solve the problem using the critical number of samples $M = 2L$, as opposed to other techniques such as MUSIC [17], [18] and ESPRIT [19], which require oversampling. Since we are interested in minimal-rate sampling, we use the annihilating filter throughout the paper. B. Obtaining the Fourier Series Coefficients As we have seen, given the vector of $M \geq 2L$ Fourier series coefficients $\mathbf{x}$, we may use standard tools from spectral analysis to determine the set $\{t_l, a_l\}_{l=1}^{L}$. In practice, however, the signal is sampled in the time domain, and therefore we do not have direct access to samples of $\mathbf{x}$. Our goal is to design a single-channel sampling scheme that allows recovery of $\mathbf{x}$ from time-domain samples. In contrast to previous work [3], [10], which focused on a low-pass sampling filter, in this section we derive a general condition on the sampling kernel that allows us to obtain the vector $\mathbf{x}$.
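The recovery step behind (7)-(8) can be sketched as follows. The annihilating-filter construction is standard [3], [10], [11]; the parameter values below are illustrative, not the paper's experiments:

```python
import numpy as np

# Annihilating-filter sketch for eq. (8): recover {t_l, a_l} from M = 2L
# values y_k = sum_l a_l exp(-j 2*pi*k*t_l/tau).
tau, L = 1.0, 2
t_true = np.array([1/3, 2/3]) * tau       # illustrative delays
a_true = np.array([1.0, 1.0])             # illustrative amplitudes

M = 2 * L
k = np.arange(M)
y = (a_true[None, :] * np.exp(-2j * np.pi * np.outer(k, t_true) / tau)).sum(axis=1)

# The filter A(z) = 1 + h_1 z^{-1} + ... + h_L z^{-L} annihilates y:
# sum_{i=0}^{L} h_i y_{k-i} = 0 for k = L, ..., M-1 (with h_0 = 1).
T = np.array([[y[kk - i] for i in range(1, L + 1)] for kk in range(L, M)])
h_coef = np.linalg.solve(T, -y[L:M])

# The roots of A(z) are u_l = exp(-j 2*pi*t_l/tau), which yield the delays.
roots = np.roots(np.concatenate(([1.0], h_coef)))
t_est = np.sort(np.mod(-np.angle(roots) * tau / (2 * np.pi), tau))

# With the delays known, the amplitudes follow from the Vandermonde system (7).
V = np.exp(-2j * np.pi * np.outer(k, t_est) / tau)
a_est = np.real(np.linalg.lstsq(V, y, rcond=None)[0])

assert np.allclose(t_est, np.sort(t_true), atol=1e-7)
assert np.allclose(a_est, a_true, atol=1e-7)
```

This uses exactly $M = 2L$ coefficients, the critical number mentioned above.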
For the sake of clarity we confine ourselves to uniform sampling; the results extend in a straightforward manner to nonuniform sampling as well. Consider sampling the signal $x(t)$ uniformly with sampling kernel $s^*(-t)$ and sampling period $T$, as depicted in Fig. 1. The samples are given by
$c[n] = \int_{-\infty}^{\infty} x(t) s^*(t - nT)\,dt = \langle s(t - nT), x(t) \rangle$. (9)
Substituting (4) into (9) we have
$c[n] = \sum_{k\in\mathbb{Z}} X[k] \int_{-\infty}^{\infty} e^{j2\pi kt/\tau} s^*(t - nT)\,dt = \sum_{k\in\mathbb{Z}} X[k] e^{j2\pi knT/\tau} \int_{-\infty}^{\infty} e^{j2\pi kt/\tau} s^*(t)\,dt = \sum_{k\in\mathbb{Z}} X[k] e^{j2\pi knT/\tau} S^*(2\pi k/\tau)$, (10)
where $S(\omega)$ is the CTFT of $s(t)$. Choosing any filter $s(t)$ which satisfies
$S(\omega) = \begin{cases} 0 & \omega = 2\pi k/\tau,\ k \notin \mathcal{K} \\ \text{nonzero} & \omega = 2\pi k/\tau,\ k \in \mathcal{K} \\ \text{arbitrary} & \text{otherwise}, \end{cases}$ (11)
we can rewrite (10) as
$c[n] = \sum_{k\in\mathcal{K}} X[k] e^{j2\pi knT/\tau} S^*(2\pi k/\tau)$. (12)
In contrast to (10), the sum in (12) is finite. Note that (11) implies that any real filter meeting this condition will satisfy $k \in \mathcal{K} \Rightarrow -k \in \mathcal{K}$, and in addition $S(2\pi k/\tau) = S^*(-2\pi k/\tau)$, due to the conjugate symmetry of real filters. Defining the $M \times M$ diagonal matrix $\mathbf{S}$ whose $k$th entry is $S^*(2\pi k/\tau)$ for all $k \in \mathcal{K}$, and the length-$N$ vector $\mathbf{c}$ whose $n$th element is $c[n]$, we may write (12) as
$\mathbf{c} = \mathbf{V}(-\mathbf{t}_s)\mathbf{S}\mathbf{x}$, (13)
where $\mathbf{t}_s = \{nT : n = 0, \ldots, N-1\}$, and $\mathbf{V}$ is defined as in (6) with the parameter $-\mathbf{t}_s$ and dimensions $N \times M$. The matrix $\mathbf{S}$ is invertible by construction. Since $\mathbf{V}$ is Vandermonde, it is left invertible as long as $N \geq M$. Therefore,
$\mathbf{x} = \mathbf{S}^{-1}\mathbf{V}^{\dagger}(-\mathbf{t}_s)\mathbf{c}$. (14)
In the special case where $N = M$ and $T = \tau/N$, the recovery in (14) becomes
$\mathbf{x} = \mathbf{S}^{-1}\,\mathrm{DFT}\{\mathbf{c}\}$, (15)
i.e., the vector $\mathbf{x}$ is obtained by applying the Discrete Fourier Transform (DFT) to the sample vector, followed by a correction matrix related to the sampling filter. The idea behind this sampling scheme is that each sample is a linear combination of the elements of $\mathbf{x}$. The sampling kernel $s(t)$ is designed to pass the coefficients $X[k]$, $k \in \mathcal{K}$, while suppressing all other coefficients $X[k]$, $k \notin \mathcal{K}$.
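A minimal numerical sketch of (12)-(15) follows. The filter values $S(2\pi k/\tau)$ are arbitrary nonzero stand-ins for condition (11), and the $1/N$ factor in the correction reflects numpy's unnormalized DFT convention rather than anything specific to the paper:

```python
import numpy as np

# Eq. (12)-(15): with N = M samples and T = tau/N, the coefficient vector x
# is recovered from the samples by a DFT plus a diagonal filter correction.
rng = np.random.default_rng(0)
tau, p = 1.0, 5
k = np.arange(-p, p + 1)            # index set K
N = M = len(k)                      # N = M = 11, T = tau/N
x = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # coefficients X[k]
S = rng.uniform(0.5, 2.0, M)        # S(2*pi*k/tau): nonzero on K, per (11)

n = np.arange(N)
c = (x * np.conj(S) * np.exp(2j * np.pi * np.outer(n, k) / N)).sum(axis=1)  # (12)

# Recovery (15): DFT of the samples, then divide out N * S*(2*pi*k/tau);
# negative indices k map to DFT bins k mod N.
C = np.fft.fft(c)
x_rec = C[np.mod(k, N)] / (N * np.conj(S))
assert np.allclose(x_rec, x)
```

Each sample $c[n]$ is indeed a linear combination of the elements of $\mathbf{x}$, and the DFT exactly inverts that combination in the critically sampled case.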
This is exactly what the condition in (11) means. This sampling scheme guarantees that the samples are linearly independent combinations of the elements of $\mathbf{x}$. Therefore, the linear system of equations in (13) has full column rank, which allows us to solve for the vector $\mathbf{x}$. We summarize this result in the following theorem. Theorem 1. Consider the $\tau$-periodic stream of pulses of order $L$: $x(t) = \sum_{m\in\mathbb{Z}} \sum_{l=1}^{L} a_l h(t - t_l - m\tau)$. Choose a set $\mathcal{K}$ of consecutive indices for which $H(2\pi k/\tau) \neq 0$, $\forall k \in \mathcal{K}$. Then the samples $c[n] = \langle s(t - nT), x(t) \rangle$, $n = 0, \ldots, N-1$, uniquely determine the signal $x(t)$ for any $s(t)$ satisfying condition (11), as long as $N \geq |\mathcal{K}| \geq 2L$. In order to extend Theorem 1 to nonuniform sampling, we only need to substitute the nonuniform sampling times in the vector $\mathbf{t}_s$ in (14). Theorem 1 presents a general single-channel sampling scheme. One special case of this framework is the one proposed by Vetterli et al. in [3], in which $s^*(-t) = B\,\mathrm{sinc}(-Bt)$, where $B = M/\tau$ and $N \geq M \geq 2L$. In this case $s(t)$ is an ideal low-pass filter of bandwidth $B$ with
$S(\omega) = \frac{1}{\sqrt{2\pi}}\,\mathrm{rect}\!\left(\frac{\omega}{2\pi B}\right)$. (16)
Clearly, (16) satisfies the general condition in (11) with $\mathcal{K} = \{-\lfloor M/2 \rfloor, \ldots, \lfloor M/2 \rfloor\}$ and $S(2\pi k/\tau) = \frac{1}{\sqrt{2\pi}}$, $\forall k \in \mathcal{K}$. Note that since this filter is real valued it must satisfy $k \in \mathcal{K} \Rightarrow -k \in \mathcal{K}$, i.e., the indices come in pairs except for $k = 0$. Since $k = 0$ is part of the set $\mathcal{K}$, in this case the cardinality $M = |\mathcal{K}|$ must be odd, so that $N \geq M \geq 2L + 1$ samples are required, rather than the minimal rate $N \geq 2L$. The ideal low-pass filter is bandlimited, and therefore has infinite time-support, so that it cannot be extended to finite and infinite streams of pulses. In the next section we propose a class of non-bandlimited sampling kernels which exploit the additional degrees of freedom in condition (11), and have compact support in the time domain. The compact support allows us to extend this class to finite and infinite streams, as we show in Sections III and IV, respectively. C.
Compactly Supported Sampling Kernels Consider the SoS class, which consists of a sum of sincs in the frequency domain:
$G(\omega) = \frac{\tau}{\sqrt{2\pi}} \sum_{k\in\mathcal{K}} b_k\, \mathrm{sinc}\!\left(\frac{\omega}{2\pi/\tau} - k\right)$, (17)
where $b_k \neq 0$, $k \in \mathcal{K}$. The filter in (17) is real valued if and only if $k \in \mathcal{K} \Rightarrow -k \in \mathcal{K}$ and $b_k = b_{-k}^*$ for all $k \in \mathcal{K}$. Since each sinc in the sum satisfies
$\mathrm{sinc}\!\left(\frac{\omega}{2\pi/\tau} - k\right) = \begin{cases} 1 & \omega = 2\pi k'/\tau,\ k' = k \\ 0 & \omega = 2\pi k'/\tau,\ k' \neq k, \end{cases}$ (18)
the filter $G(\omega)$ satisfies (11) by construction. Switching to the time domain,
$g(t) = \mathrm{rect}\!\left(\frac{t}{\tau}\right) \sum_{k\in\mathcal{K}} b_k e^{j2\pi kt/\tau}$, (19)
which is clearly a time-compact filter with support $\tau$. The SoS class in (19) may be extended to
$G(\omega) = \frac{\tau}{\sqrt{2\pi}} \sum_{k\in\mathcal{K}} b_k\, \phi\!\left(\frac{\omega}{2\pi/\tau} - k\right)$, (20)
where $b_k \neq 0$, $k \in \mathcal{K}$, and $\phi(\omega)$ is any function satisfying
$\phi(\omega) = \begin{cases} 1 & \omega = 0 \\ 0 & |\omega| \in \mathbb{N} \\ \text{arbitrary} & \text{otherwise}. \end{cases}$ (21)
This more general structure allows for smooth versions of the rect function, which is important when practically implementing analog filters. The function $g(t)$ represents a class of filters determined by the parameters $\{b_k\}_{k\in\mathcal{K}}$. These degrees of freedom offer a filter design tool in which the free parameters $\{b_k\}_{k\in\mathcal{K}}$ may be optimized for different goals, e.g., parameters which result in a feasible analog filter. In Theorem 2 below, we show how to choose $\{b_k\}$ to minimize the mean-squared error (MSE) in the presence of noise. Determining the parameters $\{b_k\}_{k\in\mathcal{K}}$ may also be viewed from a more empirical point of view. The impulse response of any analog filter having support $\tau$ may be written in terms of a windowed Fourier series as
$\Phi(t) = \mathrm{rect}\!\left(\frac{t}{\tau}\right) \sum_{k\in\mathbb{Z}} \beta_k e^{j2\pi kt/\tau}$. (22)
Confining ourselves to filters which satisfy $\beta_k \neq 0$, $k \in \mathcal{K}$, we may truncate the series and choose
$b_k = \begin{cases} \beta_k & k \in \mathcal{K} \\ 0 & k \notin \mathcal{K} \end{cases}$ (23)
as the parameters of $g(t)$ in (19). With this choice, $g(t)$ can be viewed as an approximation to $\Phi(t)$.
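The construction of $g(t)$ in (19) can be sketched directly, here with Hamming-window coefficients as in (26); $p$ and $\tau$ are example values, and the frequency-domain check is stated up to the CTFT normalization constant:

```python
import numpy as np

# SoS kernel (19): g(t) = rect(t/tau) * sum_k b_k exp(j 2*pi*k*t/tau),
# with Hamming-window coefficients b_k as in eq. (26).
tau, p = 1.0, 10
k = np.arange(-p, p + 1)
M = len(k)
b = 0.54 - 0.46 * np.cos(2 * np.pi * (k + M // 2) / M)   # eq. (26)

def g(t):
    t = np.asarray(t, dtype=float)
    inside = np.abs(t) <= tau / 2                         # rect(t/tau)
    phases = np.exp(2j * np.pi * np.outer(t, k) / tau)
    return np.where(inside, (phases * b).sum(axis=1), 0.0)

# Compact support: g vanishes outside [-tau/2, tau/2].
assert np.all(g(np.array([-0.6, 0.7, 1.5]) * tau) == 0)

# Property (18): on the grid omega = 2*pi*k'/tau the kernel's transform
# picks out b_{k'} alone (up to the CTFT normalization constant).
q = np.arange(64)
t_grid = -tau / 2 + q * tau / 64
for k_test in (0, 3, -7):
    G = np.mean(g(t_grid) * np.exp(-2j * np.pi * k_test * t_grid / tau)) * tau
    assert np.isclose(G, tau * b[k_test + p])
```

The 64-point grid makes the check exact, since the integrand is a finite sum of Fourier modes and the grid resolves all of them without aliasing.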
Notice that there is an inherent tradeoff here: using more coefficients results in a better approximation of the analog filter, but in turn requires more samples, since the number of samples $N$ must be at least the cardinality of the set $\mathcal{K}$. To demonstrate the filter $g(t)$ we first choose $\mathcal{K} = \{-p, \ldots, p\}$ and set all coefficients $\{b_k\}$ to one, resulting in
$g(t) = \mathrm{rect}\!\left(\frac{t}{\tau}\right) \sum_{k=-p}^{p} e^{j2\pi kt/\tau} = \mathrm{rect}\!\left(\frac{t}{\tau}\right) D_p(2\pi t/\tau)$, (24)
where the Dirichlet kernel $D_p(t)$ is defined by
$D_p(t) = \sum_{k=-p}^{p} e^{jkt} = \frac{\sin\!\left(\left(p + \frac{1}{2}\right)t\right)}{\sin(t/2)}$. (25)
The resulting filter for $p = 10$ and $\tau = 1$ sec is depicted in Fig. 2. This filter is also optimal in an MSE sense for the case $h(t) = \delta(t)$, as we show in Theorem 2. In Fig. 3 we plot $g(t)$ for the case in which the $b_k$'s are chosen as a length-$M$ symmetric Hamming window:
$b_k = 0.54 - 0.46 \cos\!\left(2\pi \frac{k + \lfloor M/2 \rfloor}{M}\right)$, $k \in \mathcal{K}$. (26)
Notice that in both cases the coefficients satisfy $b_k = b_{-k}^*$, and therefore the resulting filters are real valued. In the presence of noise, the choice of $\{b_k\}_{k\in\mathcal{K}}$ will affect the performance. Consider the case in which digital noise is added to the samples $\mathbf{c}$, so that $\mathbf{y} = \mathbf{c} + \mathbf{w}$, with $\mathbf{w}$ denoting a white Gaussian noise vector. Using (13),
$\mathbf{y} = \mathbf{V}(-\mathbf{t}_s)\mathbf{B}\mathbf{x} + \mathbf{w}$, (27)
where $\mathbf{B}$ is a diagonal matrix having $\{b_k\}$ on its diagonal. To choose the optimal $\mathbf{B}$ we assume that the $\{a_l\}$ are uncorrelated with variance $\sigma_a^2$, independent of $\{t_l\}$, and that $\{t_l\}$ are uniformly distributed in $[0, \tau)$. Since the noise is added to the samples after filtering, increasing the filter's amplification will always reduce the MSE. Therefore, the filter's energy must be normalized, and we do so by adding the constraint $\mathrm{Tr}(\mathbf{B}^*\mathbf{B}) = 1$. Under these assumptions, we have the following theorem: Theorem 2.
The minimal MSE of a linear estimator of $\mathbf{x}$ from the noisy samples $\mathbf{y}$ in (27) is achieved by choosing the coefficients
$|b_i|^2 = \begin{cases} \frac{\sigma^2}{N}\left( \sqrt{\frac{N}{\lambda\sigma^2}} - \frac{1}{|\tilde{h}_i|^2} \right) & \lambda \leq |\tilde{h}_i|^4 N/\sigma^2 \\ 0 & \lambda > |\tilde{h}_i|^4 N/\sigma^2, \end{cases}$ (28)
where $\tilde{h}_k = H(2\pi k/\tau)\,\sigma_a \sqrt{L}/\tau$ and the indices are arranged in increasing order of $|\tilde{h}_k|$,
$\sqrt{\lambda} = \frac{(|\mathcal{K}| - m)\sqrt{N/\sigma^2}}{N/\sigma^2 + \sum_{i=m+1}^{|\mathcal{K}|} 1/|\tilde{h}_i|^2}$, (29)
and $m$ is the smallest index for which $\lambda \leq |\tilde{h}_{m+1}|^4 N/\sigma^2$. Proof: See the Appendix. An important consequence of Theorem 2 is the following corollary. Corollary 1. If $|\tilde{h}_k|^2 = |\tilde{h}_\ell|^2$, $\forall k, \ell \in \mathcal{K}$, then the optimal coefficients are $|b_k|^2 = 1/|\mathcal{K}|$, $\forall k \in \mathcal{K}$. Proof: It is evident from (28) that if $|\tilde{h}_k| = |\tilde{h}_\ell|$ then $|b_k| = |b_\ell|$. To satisfy the trace constraint $\mathrm{Tr}(\mathbf{B}^*\mathbf{B}) = 1$, $\lambda$ cannot be chosen such that all $b_i = 0$. Therefore, $|b_k|^2 = 1/|\mathcal{K}|$ for all $k \in \mathcal{K}$. From Corollary 1 it follows that when $h(t) = \delta(t)$, the optimal choice of coefficients is $b_k = b_j$ for all $k$ and $j$. We therefore use this choice when simulating noisy settings in the next section. Our sampling scheme for the periodic case consists of sampling kernels having compact support in the time domain. In the next section we exploit the compact support of our filter and extend the results to the finite stream case. We will show that our sampling and reconstruction scheme offers a numerically stable solution with high noise robustness. To demonstrate the periodic scheme, consider a stream of Gaussian pulses
$h(t) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp(-t^2/2\sigma^2)$, (30)
with parameter $\sigma = 7 \cdot 10^{-3}$ and period $\tau = 1$. The time-delays and amplitudes were chosen randomly. In order to demonstrate near-critical sampling we choose the set of indices $\mathcal{K} = \{-L, \ldots, L\}$ with cardinality $M = |\mathcal{K}| = 11$. We filter $x(t)$ with $g(t)$ of (26). The filter output is sampled uniformly $N$ times, with sampling period $T = \tau/N$, where $N = M = 11$. The sampling process is depicted in Fig. 4. The vector $\mathbf{x}$ is obtained using (14), and the delays and amplitudes are determined by the annihilating filter method. Reconstruction results are depicted in Fig. 5.
The estimation and reconstruction are both exact to numerical precision. Analog filtering operations are carried out by discrete approximations over a fine grid; the analog signal and filters are mimicked by high-rate digital signals. Since the sampling rate which constructs the fine grid is 2-3 orders of magnitude higher than the final sampling rate $T$, the simulations reflect the analog results very well. Samples were taken uniformly with sampling period $T = \tau/N$. We choose $g(t)$ given by (24). As explained earlier, only the values of the filter at the points $2\pi k/\tau$, $k \in \mathcal{K}$, affect the samples (see (11)). Since the values of the filter at the relevant points coincide and are equal to one for both the low-pass filter of [3] and $g^*(-t)$, the resulting samples for both settings are identical. Therefore, we present results for our method only, and note that the exact same results are obtained using the approach of [3]. In our setup, white Gaussian noise (AWGN) with variance $\sigma_n^2$ is added to the samples, where we define the SNR as
$\mathrm{SNR} = \frac{\frac{1}{N}\|\mathbf{c}\|_2^2}{\sigma_n^2}$, (31)
with $\mathbf{c}$ denoting the clean samples. In our experiments the noise variance is set to give the desired SNR. The simulation consists of 1000 experiments for each SNR, where in each experiment a new noise vector is created. We choose $\mathbf{t} = \tau \cdot (1/3,\ 2/3)^T$ and $\mathbf{a} = \tau \cdot (1,\ 1)^T$, where these vectors remain constant throughout the experiments. We define the error in time-delay estimation as the average of $\|\mathbf{t} - \hat{\mathbf{t}}\|_2^2$, where $\mathbf{t}$ and $\hat{\mathbf{t}}$ denote the true and estimated time-delays, respectively, sorted in increasing order. The error in amplitudes is similarly defined by $\|\mathbf{a} - \hat{\mathbf{a}}\|_2^2$. In Fig. 6 we show the error as a function of SNR for both delay and amplitude estimation. Estimation of the time-delays is the main interest in the FRI literature, due to the special nonlinear methods required for delay recovery.
Once the delays are known, the standard least-squares method is typically used to recover the amplitudes; therefore, we focus on delay estimation in the sequel. Finally, for the same setting we can improve reconstruction accuracy at the expense of oversampling, as illustrated in Fig. 7. Here we show recovery performance for oversampling factors of 1, 2, 4 and 8. The oversampling was exploited using the total least-squares method, followed by Cadzow's iterative denoising (both described in detail in [10]). III. FINITE STREAM OF PULSES A. Extension of SoS Class Consider now a finite stream of pulses, defined as
$\tilde{x}(t) = \sum_{l=1}^{L} a_l h(t - t_l)$, $t_l \in [0, \tau)$, $a_l \in \mathbb{R}$, $l = 1, \ldots, L$, (32)
where, as in Section II, $h(t)$ is a known pulse shape and $\{t_l, a_l\}_{l=1}^{L}$ are the unknown delays and amplitudes. The time-delays $\{t_l\}_{l=1}^{L}$ are restricted to lie in a finite time interval $[0, \tau)$. Since there are only $2L$ degrees of freedom, we wish to design a sampling and reconstruction method which perfectly reconstructs $\tilde{x}(t)$ from $2L$ samples. In this section we assume that the pulse $h(t)$ has finite support $R$, i.e.,
$h(t) = 0, \quad \forall\, |t| \geq R/2$. (33)
This is a rather weak condition, since our primary interest is in very short pulses which have wide, or even infinite, frequency support, and therefore cannot be sampled efficiently using classical sampling results for bandlimited signals. We now investigate the structure of the samples taken in the periodic case, and design a sampling kernel for the finite setting which obtains precisely the same samples $c[n]$ as in the periodic case. In the periodic setting, the resulting samples are given by (10).
Using $g(t)$ of (19) as the sampling kernel we have
$c[n] = \langle g(t - nT), x(t) \rangle = \sum_{m\in\mathbb{Z}} \sum_{l=1}^{L} a_l \int_{-\infty}^{\infty} h(t - t_l - m\tau)\, g^*(t - nT)\,dt = \sum_{m\in\mathbb{Z}} \sum_{l=1}^{L} a_l\, \varphi(nT - t_l - m\tau)$, (34)
where we defined
$\varphi(\vartheta) = \langle g(t - \vartheta), h(t) \rangle$. (35)
Since $g(t)$ in (19) vanishes for all $|t| > \tau/2$ and $h(t)$ satisfies (33), the support of $\varphi(t)$ is $(R + \tau)$, i.e.,
$\varphi(t) = 0 \quad \text{for all } |t| \geq (R + \tau)/2$. (36)
Using this property, the summation in (34) is over nonzero values only for indices $m$ satisfying
$|nT - t_l - m\tau| < (R + \tau)/2$. (37)
Sampling within the window $[0, \tau)$, i.e., $nT \in [0, \tau)$, and noting that the time-delays lie in the interval $t_l \in [0, \tau)$, $l = 1, \ldots, L$, (37) implies
$(R + \tau)/2 > |nT - t_l - m\tau| \geq |m|\tau - |nT - t_l| > (|m| - 1)\tau$. (38)
Here we used the triangle inequality and the fact that $|nT - t_l| < \tau$ in our setting. Therefore,
$|m| < \frac{R/\tau + 3}{2} \;\Rightarrow\; |m| \leq \left\lceil \frac{R/\tau + 3}{2} \right\rceil - 1 \triangleq r$, (39)
i.e., the elements of the sum in (34) vanish for all $m$ except the values in (39). Consequently, the infinite sum in (34) reduces to a finite sum over $|m| \leq r$, so that (34) becomes
$c[n] = \sum_{m=-r}^{r} \sum_{l=1}^{L} a_l\, \varphi(nT - t_l - m\tau) = \sum_{m=-r}^{r} \sum_{l=1}^{L} a_l \int_{-\infty}^{\infty} h(t - t_l)\, g^*(t - nT + m\tau)\,dt = \sum_{m=-r}^{r} \left\langle g(t - nT + m\tau), \sum_{l=1}^{L} a_l h(t - t_l) \right\rangle$, (40)
where in the last equality we used the linearity of the inner product. Defining a function which consists of $(2r + 1)$ periods of $g(t)$,
$g_r(t) = \sum_{m=-r}^{r} g(t + m\tau)$, (41)
we conclude that
$c[n] = \langle g_r(t - nT), \tilde{x}(t) \rangle$. (42)
Therefore, the samples $c[n]$ can be obtained by filtering the aperiodic signal $\tilde{x}(t)$ with the filter $g_r^*(-t)$ prior to sampling. This filter has compact support equal to $(2r + 1)\tau$. Since the finite-setting samples (42) are identical to those of the periodic case (34), recovery of the delays and amplitudes is performed exactly as in the periodic setting. We summarize this result in the following theorem. Theorem 3.
Consider the finite stream of pulses given by $\tilde{x}(t) = \sum_{l=1}^{L} a_l h(t - t_l)$, $t_l \in [0, \tau)$, $a_l \in \mathbb{R}$, where $h(t)$ has finite support $R$. Choose a set $\mathcal{K}$ of consecutive indices for which $H(2\pi k/\tau) \neq 0$, $\forall k \in \mathcal{K}$. Then, the $N$ samples given by $c[n] = \langle g_r(t - nT), \tilde{x}(t) \rangle$, $n = 0, \ldots, N-1$, $nT \in [0, \tau)$, where $r$ is defined in (39) and $g_r(t)$ is compactly supported and defined by (41) (based on the filter $g(t)$ in (17)), uniquely determine the signal $\tilde{x}(t)$ as long as $N \geq |\mathcal{K}| \geq 2L$. If, for example, the support $R$ of $h(t)$ satisfies $R \leq \tau$, then we obtain from (39) that $r = 1$. Therefore, the filter in this case consists of 3 periods of $g(t)$:
$g_{3p}(t) \triangleq g_r(t)\big|_{r=1} = g(t - \tau) + g(t) + g(t + \tau)$. (43)
Practical implementation of the filter may be carried out using delay-lines. The relation of this scheme to previous approaches will be investigated in Section V. Perfect reconstruction is achieved, as can be seen in Fig. 8. The estimation is exact to numerical precision. 2) High Order Problems: The same simulation was carried out with L = 20 diracs. The results are shown in Fig. 9. Here again, the reconstruction is perfect even for large L. 3) Noisy Case: We now consider the performance of our method in the presence of noise. In addition, we compare our performance to the B-spline and E-spline methods proposed in [13], and to the Gaussian sampling kernel [3]. We examine 4 scenarios, in which the signal consists of $L = 2, 3, 5, 20$ diracs.¹ In our setup, the time-delays are equally distributed in the window $[0, \tau)$ with $\tau = 1$, and remain constant throughout the experiments. All amplitudes are set to one. ¹Due to the computational complexity of calculating the time-domain expression for high-order E-splines, the functions were simulated up to order 9, which allows for L = 5 pulses. The SNR in each experiment is defined with respect to the samples of each method; in other words, $\sigma_n$ in (31) is method-dependent, and is determined by the desired SNR and the samples of the specific technique.
Hard thresholding was implemented in order to improve the spline methods, as suggested by the authors in [13]. The threshold was chosen to be 3σ n , where σ n is the standard deviation of the AWGN. For the Gaussian sampling kernel the parameter σ was optimized and took on the value of σ = 0.25, 0.28, 0.32, 0.9, respectively. The results are given in Fig. 10. For L = 2 all methods are stable, where E-splines exhibit better performance than B-splines, and Gaussian and SoS approaches demonstrate the lowest errors. As the value of L grows, the advantage of the SoS filter becomes more prominent, where for L ≥ 5, the performance of Gaussian and both spline methods deteriorate and have errors approaching the order of τ . In contrast, the SoS filter retains its performance nearly unchanged even up to L = 20, where the B-spline and Gaussian methods are unstable. The improved version of the Gaussian approach presented in [12] would not perform better in this high order case, since it fails for L > 9, as noted by the authors. A comparison of our approach to previous methods will be detailed in Section V. IV. INFINITE STREAM OF PULSES We now consider the case of an infinite stream of pulses z(t) = l∈Z a l h(t − t l ), t l , a l ∈ R.(44) We assume that the infinite signal has a bursty character, i.e., the signal has two distinct phases: a) bursts of maximal duration τ containing at most L pulses, and b) quiet phases between bursts. For the sake of clarity we begin with the case h(t) = δ(t). For this choice the filter g * r (−t) in (41) reduces to g * 3p (−t) of (43). Since the filter g * 3p (−t) has compact support 3τ we are assured that the current burst cannot influence samples taken 3τ /2 seconds before or after it. In the finite case we have confined ourselves to sampling within the interval [0, τ ). Similarly, here, we assume that the samples are taken during the burst duration. 
Therefore, if the minimal spacing between any two consecutive bursts is $3\tau/2$, then we are guaranteed that each sample taken during a burst is influenced by that burst only, as depicted in Fig. 11. Consequently, the infinite problem can be reduced to a sequential solution of local, distinct, finite-order problems, as in Section III. Here the compact support of our filter comes into play, allowing us to apply local reconstruction methods. Fig. 11. Bursty signal z(t). Spacing of $3\tau/2$ between bursts ensures that the influence of the current burst ends before the samples of the next burst are taken, due to the finite support, $3\tau$, of the sampling kernel $g_{3p}^*(-t)$. In the above argument we assume we know the locations of the bursts, since we must acquire samples from within the burst duration. Samples outside the burst duration are contaminated by energy from adjacent bursts. Nonetheless, knowledge of burst locations is available in many applications, such as synchronized communication, where the receiver knows when to expect the bursts, or radar and imaging scenarios, where the transmitter is itself the receiver. We now state this result in a theorem. Theorem 4. Consider a signal $z(t)$ which is a stream of bursts consisting of delayed and weighted diracs. The maximal burst duration is $\tau$, and the maximal number of pulses within each burst is $L$. Then, the samples given by $c[n] = \langle g_{3p}(t - nT), z(t) \rangle$, $n \in \mathbb{Z}$, where $g_{3p}(t)$ is defined by (43), are a sufficient characterization of $z(t)$ as long as the spacing between two adjacent bursts is greater than $3\tau/2$ and the burst locations are known. Extending this result to a general pulse $h(t)$ is straightforward, as long as $h(t)$ is compactly supported with support $R$ and we filter with $g_r^*(-t)$ as defined in (41), with the appropriate $r$ from (39).
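The bookkeeping of (39) and (41) can be sketched as follows. The kernel `g` below is a toy support-$\tau$ stand-in (a triangle, not the SoS filter), which is sufficient here because (39) and (41) only depend on the support of $g$:

```python
import math
import numpy as np

# Eq. (39) + (41): given the pulse support R, compute the number of periods
# r and build g_r(t) from a support-tau kernel g(t).
tau = 1.0

def num_periods(R, tau):
    # eq. (39): r = ceil((R/tau + 3)/2) - 1
    return math.ceil((R / tau + 3) / 2) - 1

def g(t):                      # toy stand-in with support tau (not the SoS g)
    t = np.asarray(t, dtype=float)
    return np.maximum(0.0, 1 - 2 * np.abs(t) / tau)

def g_r(t, r):                 # eq. (41): (2r+1) periods of g
    return sum(g(t + m * tau) for m in range(-r, r + 1))

# For R <= tau we get r = 1, i.e. the 3-period filter g_3p of (43),
# whose support is (2r+1)*tau = 3*tau.
r = num_periods(R=0.5 * tau, tau=tau)
assert r == 1
assert g_r(np.array([1.6 * tau]), r)[0] == 0.0      # outside support 3*tau/2
assert g_r(np.array([1.0 * tau]), r)[0] == g(0.0)   # peak of the shifted period
```

The same construction applies verbatim with the SoS kernel (19) in place of the toy `g`.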
If we can choose a set $\mathcal{K}$ of consecutive indices for which $H(2\pi k/\tau) \neq 0$, $\forall k \in \mathcal{K}$, and we are guaranteed that the minimal spacing between two adjacent bursts is greater than $((2r+1)\tau + R)/2$, then the above theorem holds. A. Periodic Case The work in [3] was the first to address efficient sampling of pulse streams, e.g., diracs. Their approach to the periodic case was ideal lowpass filtering followed by uniform sampling, which allowed the Fourier series coefficients of the signal to be obtained. These coefficients are then processed by the annihilating filter to obtain the unknown time-delays and amplitudes. In Section II, we derived a general condition on the sampling kernel (11) under which recovery is guaranteed. The lowpass filter of [3] is a special case of this result. The noise robustness of both the lowpass approach and our more general method is high as long as the pulses are well separated, since reconstruction from Fourier series coefficients is stable in this case. Both approaches achieve the minimal number of samples. The lowpass filter is bandlimited and consequently has infinite time-support. Therefore, this sampling scheme is unsuitable for finite and infinite streams of pulses. The SoS class introduced in Section II consists of compactly supported filters, and this compact support is crucial for extending our results to finite and infinite streams of pulses. A comparison between the two methods is shown in Table I. B. Finite Pulse Stream The authors of [3] proposed a Gaussian sampling kernel for sampling finite streams of diracs. The Gaussian method is numerically unstable, as mentioned in [12], since the samples are multiplied by a rapidly diverging or decaying exponent. Therefore, this approach is unsuitable for L ≥ 6. Modifications proposed in [12] exhibit better performance and stability. However, these methods require substantial oversampling, and still exhibit instability for L > 9.
In [13] the family of polynomial reproducing kernels was introduced as sampling filters for the model (32); B-splines were proposed as a specific example. The B-spline sampling filter enables obtaining moments of the signal, rather than Fourier coefficients. The moments are then processed with the same annihilating filter used in previous methods. However, as mentioned by the authors, this approach is unstable for high values of L. This is due to the fact that, in contrast to the estimation of Fourier coefficients, estimating high-order moments is unstable, since unstable weighting of the samples is carried out during the process. Another general family introduced in [13] for the finite model is the class of exponential reproducing kernels. As a specific case, the authors propose E-spline sampling kernels. The CTFT of an E-spline of order $N+1$ is described by
$\hat{\beta}_{\boldsymbol{\alpha}}(\omega) = \prod_{n=0}^{N} \frac{1 - e^{\alpha_n - j\omega}}{j\omega - \alpha_n}$, (45)
where $\boldsymbol{\alpha} = (\alpha_0, \alpha_1, \ldots, \alpha_N)$ are free parameters. In order to use E-splines as sampling kernels for pulse streams, the authors propose a specific structure on the $\alpha$'s: $\alpha_n = \alpha_0 + n\lambda$. Choosing exponents having a non-vanishing real part results in unstable weighting, as in the B-spline case. However, choosing the special case of pure imaginary exponents in the E-splines, already suggested by the authors, results in a reconstruction method based on Fourier coefficients, which demonstrates an interesting relation to our method. The Fourier coefficients are obtained by applying a matrix consisting of the exponential spanning coefficients $\{c_{m,n}\}$ (see [13]), instead of our Vandermonde matrix relation (14). With this specific choice of parameters the E-spline function satisfies (11). Interestingly, with a proper choice of spanning coefficients, it can be shown that the SoS class can reproduce exponentials with frequencies $\{2\pi k/\tau\}_{k\in\mathcal{K}}$, and therefore satisfies the general exponential reproduction property of [13].
However, the SoS filter proposes a new sampling scheme which has substantial advantages over existing methods including E-splines. The first advantage is in the presence of noise, where both methods have the following structure: y = Ax + w,(46) where w is the noise vector. While the Fourier coefficients vector x is common to both approaches, the linear transformation A is method dependent, and therefore the sample vector y is different. In our approach with g(t) of (24), A is the DFT matrix, which for any order L has a condition number of 1. However, in the case of E-splines the transformation matrix A consists of the E-spline exponential spanning coefficients, which has a much higher condition number, e.g., above 100 for L = 5. Consequently, some Fourier coefficients will have much higher values of noise than others. This scenario of high variance between noise levels of the samples is known to deteriorate the performance of spectral analysis methods [11], the annihilating filter being one of them. This explains our simulations which show that the SoS filter outperforms the E-spline approach in the presence of noise. When the E-spline coefficients α are pure imaginary, it can be easily shown that (45) becomes a multiplication of shifted sincs. This is in contrast to the SoS filter which consists of a sum of sincs in the frequency domain. Since multiplication in the frequency domain translates to convolution in the time domain, it is clear that the support of the E-spline grows with its order, and in turn with the order of the problem L. In contrast, the support of the SoS filter remains unchanged. This observation becomes important when examining the infinite case. The constraint on the signal in [13] is that no more than L pulses be in any interval of length LP T , P being the support of the filter, and T the sampling period. Since P grows linearly with L, the constraint cast on the infinite stream becomes more stringent, quadratically with L. 
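The conditioning claim for the DFT case can be checked directly; this is a generic numerical fact about the DFT matrix, not a reproduction of the paper's E-spline experiment:

```python
import numpy as np

# With g(t) of (24), the transformation A in y = Ax + w is a DFT matrix
# (cf. eq. (15)), whose condition number is 1 for any size: noise is not
# amplified unevenly across the Fourier coefficients.
for N in (5, 11, 41):
    n = np.arange(N)
    F = np.exp(-2j * np.pi * np.outer(n, n) / N)   # unnormalized DFT matrix
    assert np.isclose(np.linalg.cond(F), 1.0)
```

Since $F = \sqrt{N}\,U$ with $U$ unitary, all singular values equal $\sqrt{N}$, hence the condition number is exactly 1.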
On the other hand, the constraint on the infinite stream using the SoS filter is independent of L. We showed in simulations that typically for L ≥ 5 the estimation errors, using both B-spline and E-spline sampling kernels, become very large. In contrast, our approach leads to stable reconstruction even for very high values of L, e.g., L = 100. In addition, even for low values of L we showed in simulations that although the E-spline method has improved performance over B-splines, the SoS reconstruction method outperforms both spline approaches. A comparison is described in Table II. C. Infinite Streams The work in [13] addressed the infinite stream case with h(t) = δ(t). They proposed filtering the signal with a polynomial reproducing sampling kernel prior to sampling. If the signal has at most L diracs within any interval of duration LPT, where P denotes the support of the sampling filter and T the sampling period, then the samples are a sufficient characterization of the signal. This condition allows the infinite stream to be divided into a sequence of finite-case problems. In our approach, the quiet phases of 1.5τ between the bursts of length τ enable the reduction to the finite case. Since the infinite solution is based on the finite one, our method is advantageous in terms of stability in high-order problems and noise robustness. However, we do have an additional requirement of quiet phases between the bursts. Regarding the sampling rate, the number of degrees of freedom of the signal per unit time, also known as the rate of innovation, is ρ = 2L/(2.5τ), which is the critical sampling rate. Our sampling rate is 2L/τ, and therefore we oversample by a factor of 2.5. In the same scenario, the method in [13] would require a sampling rate of LP/(2.5τ), i.e., oversampling by a factor of P/2. Properties of polynomial reproducing kernels imply that P ≥ 2L; therefore for any L ≥ 3, our method exhibits more efficient sampling.
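The rate comparison above is simple arithmetic and can be verified directly; `P = 2 * L` encodes the minimal kernel support stated in the text:

```python
# Sampling-rate comparison for bursty infinite streams: bursts of L pulses
# of duration tau, separated by quiet phases of 1.5*tau.
tau = 1.0
for L in (3, 5, 20):
    rho = 2 * L / (2.5 * tau)       # rate of innovation (critical rate)
    ours = 2 * L / tau              # rate of the SoS scheme
    P = 2 * L                       # minimal support of a polynomial
                                    # reproducing kernel (P >= 2L)
    theirs = L * P / (2.5 * tau)    # rate required by the approach of [13]
    assert abs(ours / rho - 2.5) < 1e-9       # fixed oversampling factor 2.5
    assert abs(theirs / rho - P / 2) < 1e-9   # oversampling factor P/2 >= L
    assert ours < theirs                      # more efficient for L >= 3
```

For L = 3 the competing factor is already P/2 = 3 > 2.5, and the gap grows linearly with L.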
A table comparing the various features is shown in Table III. Recent work [14] presented a low complexity method for reconstructing streams of pulses (both infinite and finite cases) consisting of Diracs. However, the basic assumption of this method is that there is at most one Dirac per sampling period. This means we must have prior knowledge about a lower limit on the spacing between two consecutive deltas in order to guarantee correct reconstruction. In some cases such a limit may not exist; even if it does, it will usually force us to sample at a much higher rate than the critical one. VI. APPLICATION - ULTRASOUND IMAGING An interesting application of our framework is ultrasound imaging. In ultrasonic imaging an acoustic pulse is transmitted into the scanned tissue. The pulse is reflected due to changes in acoustic impedance which occur, for example, at the boundaries between two different tissues. At the receiver, the echoes are recorded, where the time-of-arrival and power of the echo indicate the scatterer's location and strength, respectively. Accurate estimation of tissue boundaries and scatterer locations allows for reliable detection of certain illnesses, and is therefore of major clinical importance. The location of the boundaries is often more important than the power of the reflection. This stream of pulses is finite since the pulse energy decays within the tissue. We now demonstrate our method on real 1-dimensional (1D) ultrasound data. The multiple-echo signal which is recorded at the receiver can be modeled as a finite stream of pulses, as in (32). The unknown time-delays correspond to the locations of the various scatterers, whereas the amplitudes correspond to their reflection coefficients. The pulse shape in this case is the Gaussian defined in (30), due to the physical characteristics of the electro-acoustic transducer (mechanical damping).
We assume the received pulse shape is known, either by assuming it is unchanged through propagation, through physically modeling ultrasonic wave propagation, or by prior estimation of the received pulse. A full investigation of mismatch in the pulse shape is left for future research. In our setting, a phantom consisting of uniformly spaced pins, mimicking point scatterers, was scanned by GE Healthcare's Vivid-i portable ultrasound imaging system [20], [21], using a 3S-RS probe. We use the data recorded by a single element in the probe, which is modeled as a 1D stream of pulses. The center frequency of the probe is f_c = 1.7021 MHz. The width of the transmitted Gaussian pulse in this case is σ = 3 · 10^-7 sec, and the depth of imaging is R_max = 0.16 m, corresponding to a time window of τ = 2.08 · 10^-4 sec. In this experiment all filtering and sampling operations are carried out digitally in simulation. The analog filter required by the sampling scheme is replaced by a lengthy Finite Impulse Response (FIR) filter. Since the sampling frequency of the element in the system is f_s = 20 MHz, which is more than 5 times higher than the Nyquist rate, the recorded data represents the continuous signal reliably. Consequently, digital filtering of the high-rate sampled data vector (4160 samples) followed by proper decimation mimics the original analog sampling scheme with high accuracy. The recorded signal is depicted in Fig. 12. The band-pass ultrasonic signal is demodulated to base-band, i.e., envelope detection is performed, before being inserted into the process. We carried out our sampling and reconstruction scheme on the aforementioned data. We set L = 4, looking for the strongest 4 echoes. Since the data is corrupted by strong noise we oversampled the signal, obtaining twice the minimal number of samples. In addition, hard-thresholding of the samples was implemented, where we set the threshold to 10 percent of the maximal value.
We obtained N = 17 samples by decimating the output of the lengthy FIR digital filter imitating $g^*_{3p}(-t)$ from (43), where the coefficients {b_k} were all set to one. In Fig. 13a the reconstructed signal is depicted vs. the full demodulated signal using all 4160 samples. Clearly, the time-delays were estimated with high precision. The amplitudes were estimated as well; however, the amplitude of the second pulse has a large error. This is probably due to the large values of noise present in its vicinity. However, as mentioned earlier, the exact locations of the scatterers are often more important than the accurate reflection coefficients. We carried out the same experiment, only now oversampling by a factor of 4, resulting in N = 33 samples. Here no hard-thresholding is required. The results are depicted in Fig. 13b, and are very similar to our previous results. In both simulations, the estimation error in the pulse location is around 0.1 mm. Current ultrasound imaging technology operates on the high-rate sampled data, e.g., f_s = 20 MHz in our setting. Since there are usually 100 different elements in a single ultrasonic probe, each sampled at a very high rate, data throughput becomes very high and imposes high computational complexity on the system, limiting its capabilities. Therefore, there is a demand for lowering the sampling rate, which in turn will reduce the complexity of reconstruction. Exploiting the parametric point of view, our sampling scheme achieves such a rate reduction. VII. CONCLUSIONS We presented efficient sampling and reconstruction schemes for streams of pulses. For the case of a periodic stream of pulses, we derived a general condition on the sampling kernel which allows a single-channel uniform sampling scheme. Previous work [3] is a special case of this general result. We then proposed a class of filters, satisfying the condition, with compact support. Exploiting the compact support of the filters, we constructed a new sampling scheme for the case of a finite stream of pulses.
Simulations show this method exhibits better performance than previous techniques [3], [13], in terms of stability in high order problems and noise robustness. An extension to an infinite stream of pulses was also presented. The compact support of the filter allows for local reconstruction, and thus lowers the complexity of the problem. Finally, we demonstrated the advantage of our approach in reducing the sampling and processing rate of ultrasound imaging, by applying our techniques to real ultrasound data. APPENDIX PROOF OF THEOREM 2 The MSE of the optimal linear estimator of the vector x from the measurement vector y is known to be [22] $\mathrm{MSE} = \mathrm{Tr}\{R_{xx}\} - \mathrm{Tr}\{R_{xy}R_{yy}^{-1}R_{yx}\}$. (47) The covariance matrices in our case are $R_{xy} = R_{xx}B^*V^*$ (48) and $R_{yy} = VBR_{xx}B^*V^* + \sigma^2 I$, (49) where we used (27), and the fact that $R_{ww} = \sigma^2 I$ since w is a white Gaussian noise vector. Under our assumptions on $\{t_l\}$ and $\{a_l\}$, denoting $h_k = H(2\pi k/\tau)$ and using (5), $(R_{xx})_{k,k'} = E\{X[k]X^*[k']\} = \frac{1}{\tau^2}h_k h^*_{k'}\sum_{l=1}^{L}\sum_{l'=1}^{L}E\{a_l a^*_{l'}\}e^{-j\frac{2\pi}{\tau}(kt_l - k't_{l'})} = \frac{\sigma_a^2}{\tau^2}h_k h^*_{k'}\sum_{l=1}^{L}E\{e^{-j\frac{2\pi}{\tau}(k-k')t_l}\} = \frac{\sigma_a^2}{\tau^2}h_k h^*_{k'}\sum_{l=1}^{L}\int_0^\tau \frac{1}{\tau}e^{-j\frac{2\pi}{\tau}(k-k')t}\,dt = \frac{\sigma_a^2}{\tau^2}L|h_k|^2\delta_{k,k'}$. (50) Denoting by $\tilde{H}$ a diagonal matrix with $k$th element $|\tilde{h}_k|^2 = |h_k|^2\sigma_a^2 L/\tau^2$, we have $R_{xx} = \tilde{H}$. (51) Since the first term of (47) is independent of B, minimizing the MSE with respect to B is equivalent to maximizing the second term in (47). Substituting (48), (49) and (51) into this term, the optimal B is a solution to $\max_B \mathrm{Tr}\{\tilde{H}B^*V^*(VB\tilde{H}B^*V^* + \sigma^2 I)^{-1}VB\tilde{H}\}$ subject to $\mathrm{Tr}(B^*B) = 1$. (52) Using the matrix inversion formula [23], $(VB\tilde{H}B^*V^* + \sigma^2 I)^{-1} = \frac{1}{\sigma^2}\left[I - VB(\sigma^2\tilde{H}^{-1} + B^*V^*VB)^{-1}B^*V^*\right]$. (53) It is easy to verify from the definition of V in (13) that $(V^*V)_{ik} = \sum_{l=0}^{N-1}e^{j\frac{2\pi}{N}l(k-i)} = N\delta_{k,i}$. (54) Therefore, the objective in (52) equals $\mathrm{Tr}\left\{\frac{N}{\sigma^2}\tilde{H}B^*\left[I - B\left(\frac{\sigma^2}{N}\tilde{H}^{-1} + B^*B\right)^{-1}B^*\right]B\tilde{H}\right\} = \sum_{i=1}^{|K|}|\tilde{h}_i|^2\left(1 - \frac{\sigma^2/N}{|b_i|^2|\tilde{h}_i|^2 + \sigma^2/N}\right)$, (55) where we used the fact that B and $\tilde{H}$ are diagonal.
We can now find the optimal B by maximizing (55), which is equivalent to minimizing the negative term: $\min_B \sum_{i=1}^{|K|} \frac{|\tilde{h}_i|^2}{1 + |b_i|^2|\tilde{h}_i|^2 N/\sigma^2}$, s.t. $\sum_{i=1}^{|K|}|b_i|^2 = 1$. (56) Denoting $\beta_i = |b_i|^2$, (56) becomes a convex optimization problem: $\min_{\beta_i}\sum_{i=1}^{|K|}\frac{|\tilde{h}_i|^2}{1+\beta_i|\tilde{h}_i|^2 N/\sigma^2}$ (57) subject to $\beta_i \ge 0$ (58) and $\sum_{i=1}^{|K|}\beta_i = 1$. (59) To solve (57) subject to (58) and (59), we form the Lagrangian: $\mathcal{L} = \sum_{i=1}^{|K|}\frac{|\tilde{h}_i|^2}{1+\beta_i|\tilde{h}_i|^2N/\sigma^2} + \lambda\left(\sum_{i=1}^{|K|}\beta_i - 1\right) - \sum_{i=1}^{|K|}\mu_i\beta_i$, (60) where from the Karush-Kuhn-Tucker (KKT) conditions [24], $\mu_i \ge 0$ and $\mu_i\beta_i = 0$. Differentiating (60) with respect to $\beta_i$ and equating to 0, $\frac{|\tilde{h}_i|^4 N/\sigma^2}{(1+\beta_i|\tilde{h}_i|^2N/\sigma^2)^2} + \mu_i = \lambda$, (61) so that $\lambda > 0$, since $\tilde{h}_i > 0$ by construction of H (see Theorem 1). If $\lambda > |\tilde{h}_i|^4N/\sigma^2$ then $\mu_i > 0$, and therefore $\beta_i = 0$ from the KKT conditions. If $\lambda \le |\tilde{h}_i|^4N/\sigma^2$ then from (61) $\mu_i = 0$ and $\beta_i = \frac{\sigma^2}{N}\left(\sqrt{\frac{N}{\lambda\sigma^2}} - \frac{1}{|\tilde{h}_i|^2}\right)$. (62) The optimal $\beta_i$ is therefore $\beta_i = \begin{cases}\frac{\sigma^2}{N}\left(\sqrt{\frac{N}{\lambda\sigma^2}} - \frac{1}{|\tilde{h}_i|^2}\right) & \lambda \le |\tilde{h}_i|^4N/\sigma^2\\ 0 & \lambda > |\tilde{h}_i|^4N/\sigma^2\end{cases}$ (63) where $\lambda > 0$ is chosen to satisfy (59). Note that from (63), if $\beta_i = 0$ and $j < i$, then $\beta_j = 0$ as well, since the $|\tilde{h}_i|$ are in increasing order. We now show that there is a unique λ that satisfies (59). Define the function $G(\lambda) = \sum_{i=1}^{|K|}\beta_i(\lambda) - 1$, (64) so that λ is a root of $G(\lambda)$. Since the $|\tilde{h}_i|$'s are in increasing order, $|\tilde{h}_{|K|}| = \max_i|\tilde{h}_i|$. It is clear from (63) that $G(\lambda)$ is monotonically decreasing for $0 < \lambda \le |\tilde{h}_{|K|}|^4N/\sigma^2$. In addition, $G(\lambda) = -1$ for $\lambda > |\tilde{h}_{|K|}|^4N/\sigma^2$, and $G(\lambda) > 0$ for $\lambda \to 0$. Thus, there is a unique λ for which (59) is satisfied. Substituting (63) into (59), and denoting by m the smallest index for which $\lambda \le |\tilde{h}_{m+1}|^4N/\sigma^2$, we have $\sqrt{\lambda} = \frac{(|K|-m)\sqrt{N/\sigma^2}}{N/\sigma^2 + \sum_{i=m+1}^{|K|}1/|\tilde{h}_i|^2}$, (65) completing the proof of the theorem.
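The closed-form solution (63) together with (65) can be sketched in a few lines; the following is our own illustration (not code from the paper), with invented example values, searching for the active-set size m that makes λ consistent:

```python
import numpy as np

# Waterfilling-like coefficient choice: indices with small |h~_i| get beta_i = 0,
# the rest share the unit budget sum(beta_i) = 1 of eq. (59).
def optimal_beta(h_abs, N, sigma2):
    h = np.sort(np.abs(np.asarray(h_abs, dtype=float)))   # increasing |h~_i|
    K = len(h)
    for m in range(K):                                    # first m entries are zero
        active = h[m:]
        sqrt_lam = (K - m) * np.sqrt(N / sigma2) / (
            N / sigma2 + np.sum(1.0 / active ** 2))       # eq. (65)
        lam = sqrt_lam ** 2
        ok_hi = lam <= h[m] ** 4 * N / sigma2             # lambda <= |h~_{m+1}|^4 N/sigma^2
        ok_lo = (m == 0) or (lam > h[m - 1] ** 4 * N / sigma2)
        if ok_hi and ok_lo:
            beta = np.zeros(K)
            beta[m:] = (sigma2 / N) * (
                np.sqrt(N / (lam * sigma2)) - 1.0 / active ** 2)   # eq. (63)
            return beta
    raise ValueError("no consistent active set found")

beta = optimal_beta([0.5, 1.0, 2.0], N=16, sigma2=0.01)
print(beta.sum())   # satisfies the budget constraint (59): sums to 1
```

At high SNR all indices are active and the β_i are nearly equal, consistent with Corollary 1 for flat |h̃_k|.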
9,762
1003.2822
2138787877
Signals comprised of a stream of short pulses appear in many applications including bioimaging and radar. The recent finite rate of innovation framework has paved the way to low rate sampling of such pulses by noticing that only a small number of parameters per unit time are needed to fully describe these signals. Unfortunately, for high rates of innovation, existing sampling schemes are numerically unstable. In this paper we propose a general sampling approach which leads to stable recovery even in the presence of many pulses. We begin by deriving a condition on the sampling kernel which allows perfect reconstruction of periodic streams from the minimal number of samples. We then design a compactly supported class of filters, satisfying this condition. The periodic solution is extended to finite and infinite streams and is shown to be numerically stable even for a large number of pulses. High noise robustness is also demonstrated when the delays are sufficiently separated. Finally, we process ultrasound imaging data using our techniques and show that substantial rate reduction with respect to traditional ultrasound sampling schemes can be achieved.
Regarding the sampling rate, the number of degrees of freedom of the signal per unit time, also known as the rate of innovation, is @math , which is the critical sampling rate. Our sampling rate is @math and therefore we oversample by a factor of @math . In the same scenario, the method in @cite_19 would require a sampling rate of @math , i.e., oversampling by a factor of @math . Properties of polynomial reproducing kernels imply that @math , therefore for any @math , our method exhibits more efficient sampling. A table comparing the various features is shown in Table .
{ "abstract": [ "Consider the problem of sampling signals which are not bandlimited, but still have a finite number of degrees of freedom per unit of time, such as, for example, nonuniform splines or piecewise polynomials, and call the number of degrees of freedom per unit of time the rate of innovation. Classical sampling theory does not enable a perfect reconstruction of such signals since they are not bandlimited. Recently, it was shown that, by using an adequate sampling kernel and a sampling rate greater or equal to the rate of innovation, it is possible to reconstruct such signals uniquely . These sampling schemes, however, use kernels with infinite support, and this leads to complex and potentially unstable reconstruction algorithms. In this paper, we show that many signals with a finite rate of innovation can be sampled and perfectly reconstructed using physically realizable kernels of compact support and a local reconstruction algorithm. The class of kernels that we can use is very rich and includes functions satisfying Strang-Fix conditions, exponential splines and functions with rational Fourier transform. This last class of kernels is quite general and includes, for instance, any linear electric circuit. We, thus, show with an example how to estimate a signal of finite rate of innovation at the output of an RC circuit. The case of noisy measurements is also analyzed, and we present a novel algorithm that reduces the effect of noise by oversampling" ], "cite_N": [ "@cite_19" ], "mid": [ "2103300762" ] }
Innovation Rate Sampling of Pulse Streams with Application to Ultrasound Imaging
Sampling is the process of representing a continuous-time signal by discrete-time coefficients, while retaining the important signal features. The well-known Shannon-Nyquist theorem states that the minimal sampling rate required for perfect reconstruction of bandlimited signals is twice the maximal frequency. This result has since been generalized to minimal rate sampling schemes for signals lying in arbitrary subspaces [1], [2]. Recently, there has been growing interest in sampling of signals consisting of a stream of short pulses, where the pulse shape is known. Such signals have a finite number of degrees of freedom per unit time, also known as the Finite Rate of Innovation (FRI) property [3]. This interest is motivated by applications such as digital processing of neuronal signals, bio-imaging, image processing and ultrawideband (UWB) communications, where such signals are present in abundance. Our work is motivated by the possible application of this model in ultrasound imaging, where echoes of the transmit pulse are reflected off scatterers within the tissue, and form a stream of pulses signal at the receiver. The time-delays and amplitudes of the echoes indicate the position and strength of the various scatterers, respectively. Therefore, determining these parameters from low rate samples of the received signal is an important problem. Reducing the rate allows more efficient processing which can translate to power and size reduction of the ultrasound imaging system. Our goal is to design a minimal rate single-channel sampling and reconstruction scheme for pulse streams that is stable even in the presence of many pulses. Since the set of FRI signals does not form a subspace, classic subspace schemes cannot be directly used to design low-rate sampling schemes. Mathematically, such FRI signals conform with a broader model of signals lying in a union of subspaces [4]- [9]. 
Although the minimal sampling rate required for such settings has been derived, no generic sampling scheme exists for the general problem. Nonetheless, some special cases have been treated in previous work, including streams of pulses. A stream of pulses can be viewed as a parametric signal, uniquely defined by the time-delays of the pulses and their amplitudes. Efficient sampling of periodic impulse streams, having L impulses in each period, was proposed in [3], [10]. The heart of the solution is to obtain a set of Fourier series coefficients, which then converts the problem of determining the time-delays and amplitudes into that of finding the frequencies and amplitudes of a sum of sinusoids. The latter is a standard problem in spectral analysis [11] which can be solved using conventional methods, such as the annihilating filter approach, as long as the number of samples is at least 2L. This result is intuitive since there are 2L degrees of freedom in each period: L time-delays and L amplitudes. Periodic streams of pulses are mathematically convenient to analyze, but not very practical. In contrast, finite streams of pulses are prevalent in applications such as ultrasound imaging. The first treatment of finite Dirac streams appears in [3], in which a Gaussian sampling kernel was proposed. The time-delays and amplitudes are then estimated from the Gaussian tails. This method and its improvement [12] are numerically unstable for high rates of innovation, since they rely on the Gaussian tails, which take on small values. The work in [13] introduced a general family of polynomial and exponential reproducing kernels which can be used to solve FRI problems. Specifically, B-spline and E-spline sampling kernels which satisfy the reproduction condition are proposed. This method treats streams of Diracs, differentiated Diracs, and short pulses with compact support. However, the proposed sampling filters result in poor reconstruction results for large L.
To the best of our knowledge, a numerically stable sampling and reconstruction scheme for high order problems has not yet been reported. Infinite streams of pulses arise in applications such as UWB communications, where the communicated data changes frequently. Using spline filters [13], and under certain limitations on the signal, the infinite stream can be divided into a sequence of separate finite problems. The individual finite cases may be treated using methods for the finite setting, at the expense of an above-critical sampling rate, and suffer from the same instability issues. In addition, the constraints that are cast on the signal become more and more stringent as the number of pulses per unit time grows. A recent work [14] proposes a sampling and reconstruction scheme for L = 1; however, our interest here is in high values of L. Another related work [7] proposes a semi-periodic model, where the pulse time-delays do not change from period to period, but the amplitudes vary. This is a hybrid case in which the number of degrees of freedom in the time-delays is finite, but there is an infinite number of degrees of freedom in the amplitudes. Therefore, the proposed recovery scheme generally requires an infinite number of samples. This differs from the periodic and finite cases we discuss in this paper, which have a finite number of degrees of freedom and, consequently, require only a finite number of samples. In this paper we study sampling of signals consisting of a stream of pulses, covering three different cases: periodic, finite and infinite streams of pulses. The criteria we consider for designing such systems are: a) minimal sampling rate which allows perfect reconstruction, b) numerical stability (with sufficiently separated time-delays), and c) minimal restrictions on the number of pulses per sampling period. We begin by treating periodic pulse streams.
For this setting, we develop a general sampling scheme for arbitrary pulse shapes which allows us to determine the times and amplitudes of the pulses from a minimal number of samples. As we show, previous work [3] is a special case of our extended results. In contrast to the infinite time-support of the filters in [3], we develop a compactly supported class of filters which satisfy our mathematical condition. This class of filters consists of a sum of sinc functions in the frequency domain. We therefore refer to such functions as Sum of Sincs (SoS). To the best of our knowledge, this is the first class of finite-support filters that solve the periodic case. As we discuss in detail in Section V, these filters are related to the exponential reproducing kernels introduced in [13]. The compact support of the SoS filters is the key to extending the periodic solution to the finite stream case. Generalizing the SoS class, we design a sampling and reconstruction scheme which perfectly reconstructs a finite stream of pulses from a minimal number of samples, as long as the pulse shape has compact support. Our reconstruction is numerically stable for both small values of L and a large number of pulses, e.g., L = 100. In contrast, Gaussian sampling filters [3], [12] are unstable for L > 9, and we show in simulations that B-splines and E-splines [13] exhibit large estimation errors for L ≥ 5. In addition, we demonstrate substantial improvement in noise robustness even for low values of L. Our advantage stems from the fact that we propose compactly supported filters on the one hand, while staying within the regime of Fourier coefficient reconstruction on the other. Extending our results to the infinite setting, we consider an infinite stream consisting of pulse bursts, where each burst contains a large number of pulses. The stability of our method allows us to reconstruct even a large number of closely spaced pulses, which cannot be treated using existing solutions [13].
In addition, the constraints cast on the structure of the signal are independent of L (the number of pulses in each burst), in contrast to previous work, and therefore similar sampling schemes may be used for different values of L. Finally, we show that our sampling scheme requires a lower sampling rate for L ≥ 3. As an application, we demonstrate our sampling scheme on real ultrasound imaging data acquired by GE Healthcare's ultrasound system. We obtain high accuracy estimation while reducing the number of samples by two orders of magnitude in comparison with current imaging techniques. The remainder of the paper is organized as follows. In Section II we present the periodic signal model, and derive a general sampling scheme. The SoS class is then developed and demonstrated via simulations. The extension to the finite case is presented in Section III, followed by simulations showing the advantages of our method in high order problems and noisy settings. In Section IV, we treat infinite streams of pulses. Section V explores the relationship of our work to previous methods. Finally, in Section VI, we demonstrate our algorithm on real ultrasound imaging data. II. PERIODIC STREAM OF PULSES A. Problem Formulation Throughout the paper we denote matrices and vectors by bold font, with lowercase letters corresponding to vectors and uppercase letters to matrices. The $n$th element of a vector $\mathbf{a}$ is written as $a_n$, and $A_{ij}$ denotes the $ij$th element of a matrix $\mathbf{A}$. Superscripts $(\cdot)^*$, $(\cdot)^T$ and $(\cdot)^H$ represent complex conjugation, transposition and conjugate transposition, respectively. The Moore-Penrose pseudo-inverse of a matrix $\mathbf{A}$ is written as $\mathbf{A}^\dagger$. The continuous-time Fourier transform (CTFT) of a continuous-time signal $x(t)\in L_2$ is defined by $X(\omega) = \int_{-\infty}^{\infty} x(t)e^{-j\omega t}\,dt$, and $\langle x(t), y(t)\rangle = \int_{-\infty}^{\infty} x^*(t)y(t)\,dt$ (1) denotes the inner product between two $L_2$ signals.
Consider a τ-periodic stream of pulses, defined as $x(t) = \sum_{m\in\mathbb{Z}}\sum_{l=1}^{L} a_l h(t - t_l - m\tau)$, (2) where h(t) is a known pulse shape, τ is the known period, and $\{t_l, a_l\}_{l=1}^{L}$, $t_l\in[0,\tau)$, $a_l\in\mathbb{C}$, $l = 1\ldots L$, are the unknown delays and amplitudes. Our goal is to sample x(t) and reconstruct it from a minimal number of samples. Since the signal has 2L degrees of freedom, we expect the minimal number of samples to be 2L. We are primarily interested in pulses which have small time-support. Direct uniform sampling of 2L samples of the signal will result in many zero samples, since the probability for a sample to hit a pulse is very low. Therefore, we must construct a more sophisticated sampling scheme. Define the periodic continuation of h(t) as $f(t) = \sum_{m\in\mathbb{Z}} h(t - m\tau)$. Using Poisson's summation formula [15], f(t) may be written as $f(t) = \frac{1}{\tau}\sum_{k\in\mathbb{Z}} H\!\left(\frac{2\pi k}{\tau}\right) e^{j2\pi kt/\tau}$, (3) where H(ω) denotes the CTFT of the pulse h(t). Substituting (3) into (2) we obtain $x(t) = \sum_{l=1}^{L} a_l f(t - t_l) = \sum_{k\in\mathbb{Z}} \frac{1}{\tau} H\!\left(\frac{2\pi k}{\tau}\right) \sum_{l=1}^{L} a_l e^{-j2\pi k t_l/\tau}\, e^{j2\pi kt/\tau} = \sum_{k\in\mathbb{Z}} X[k] e^{j2\pi kt/\tau}$, (4) where we denoted $X[k] = \frac{1}{\tau} H\!\left(\frac{2\pi k}{\tau}\right) \sum_{l=1}^{L} a_l e^{-j2\pi k t_l/\tau}$. (5) The expansion in (4) is the Fourier series representation of the τ-periodic signal x(t) with Fourier coefficients given by (5). Following [3], we now show that once 2L or more Fourier coefficients of x(t) are known, we may use conventional tools from spectral analysis to determine the unknowns $\{t_l, a_l\}_{l=1}^{L}$. The method by which the Fourier coefficients are obtained will be presented in subsequent sections. Define a set K of M consecutive indices such that $H\!\left(\frac{2\pi k}{\tau}\right) \ne 0$, $\forall k\in K$. We assume such a set exists, which is usually the case for short time-support pulses h(t). Denote by H the M × M diagonal matrix with kth entry $\frac{1}{\tau}H\!\left(\frac{2\pi k}{\tau}\right)$, and by V(t) the M × L matrix with klth element $e^{-j2\pi k t_l/\tau}$, where $\mathbf{t} = \{t_1, \ldots, t_L\}$ is the vector of the unknown delays.
In addition denote by a the length-L vector whose lth element is $a_l$, and by x the length-M vector whose kth element is X[k]. We may then write (5) in matrix form as $\mathbf{x} = \mathbf{H}\mathbf{V}(\mathbf{t})\mathbf{a}$. (6) Since H is invertible by construction, we define $\mathbf{y} = \mathbf{H}^{-1}\mathbf{x}$, which satisfies $\mathbf{y} = \mathbf{V}(\mathbf{t})\mathbf{a}$. (7) The matrix V is a Vandermonde matrix and therefore has full column rank [11], [16] as long as M ≥ L and the time-delays are distinct, i.e., $t_i \ne t_j$ for all $i \ne j$. Writing the expression for the kth element of the vector y in (7) explicitly: $y_k = \sum_{l=1}^{L} a_l e^{-j2\pi k t_l/\tau}$. (8) Evidently, given the vector x, (7) is a standard problem of finding the frequencies and amplitudes of a sum of L complex exponentials (see [11] for a review of this topic). This problem may be solved as long as $|K| = M \ge 2L$. The annihilating filter approach used extensively by Vetterli et al. [3], [10] is one way of recovering the frequencies, and is thoroughly described in the literature [3], [10], [11]. This method can solve the problem using the critical number of samples M = 2L, as opposed to other techniques such as MUSIC [17], [18] and ESPRIT [19], which require oversampling. Since we are interested in minimal-rate sampling, we use the annihilating filter throughout the paper. B. Obtaining the Fourier Series Coefficients As we have seen, given the vector of M ≥ 2L Fourier series coefficients x, we may use standard tools from spectral analysis to determine the set $\{t_l, a_l\}_{l=1}^{L}$. In practice, however, the signal is sampled in the time domain, and therefore we do not have direct access to samples of x. Our goal is to design a single-channel sampling scheme which allows us to determine x from time-domain samples. In contrast to previous work [3], [10], which focused on a low-pass sampling filter, in this section we derive a general condition on the sampling kernel allowing us to obtain the vector x.
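The annihilating-filter step mentioned above is a classical spectral-analysis tool; a minimal sketch of it (our own illustration with invented delays and amplitudes, not the paper's code) recovers the delays from M = 2L values of (8):

```python
import numpy as np

# y_k = sum_l a_l exp(-j 2 pi k t_l / tau): the delays appear as the roots
# u_l = exp(-j 2 pi t_l / tau) of the annihilating polynomial.
tau, L = 1.0, 2
t_true = np.array([0.2, 0.6])
a_true = np.array([1.0, 0.5])
k = np.arange(2 * L)                                          # M = 2L coefficients
y = (a_true * np.exp(-2j * np.pi * np.outer(k, t_true) / tau)).sum(axis=1)

# Annihilating filter h of length L+1 with h[0] = 1: the convolution (h * y)[k]
# vanishes for k >= L, giving L linear equations in h[1..L].
A = np.array([[y[i - m] for m in range(1, L + 1)] for i in range(L, 2 * L)])
h = np.concatenate(([1.0], np.linalg.solve(A, -y[L:2 * L])))
roots = np.roots(h)                                           # roots are the u_l
t_est = np.sort((-np.angle(roots) * tau / (2 * np.pi)) % tau)
print(t_est)   # close to [0.2, 0.6]
```

Once the delays are known, the amplitudes follow from a linear least-squares solve of the Vandermonde system (7).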
For the sake of clarity we confine ourselves to uniform sampling; the results extend in a straightforward manner to nonuniform sampling as well. Consider sampling the signal x(t) uniformly with sampling kernel $s^*(-t)$ and sampling period T, as depicted in Fig. 1. The samples are given by $c[n] = \int_{-\infty}^{\infty} x(t)s^*(t - nT)\,dt = \langle s(t - nT), x(t)\rangle$. (9) Substituting (4) into (9) we have $c[n] = \sum_{k\in\mathbb{Z}} X[k] \int_{-\infty}^{\infty} e^{j2\pi kt/\tau} s^*(t - nT)\,dt = \sum_{k\in\mathbb{Z}} X[k] e^{j2\pi knT/\tau} \int_{-\infty}^{\infty} e^{j2\pi kt/\tau} s^*(t)\,dt = \sum_{k\in\mathbb{Z}} X[k] e^{j2\pi knT/\tau} S^*(2\pi k/\tau)$, (10) where S(ω) is the CTFT of s(t). Choosing any filter s(t) which satisfies $S(\omega) = \begin{cases} 0 & \omega = 2\pi k/\tau,\ k\notin K\\ \text{nonzero} & \omega = 2\pi k/\tau,\ k\in K\\ \text{arbitrary} & \text{otherwise,}\end{cases}$ (11) we can rewrite (10) as $c[n] = \sum_{k\in K} X[k] e^{j2\pi knT/\tau} S^*(2\pi k/\tau)$. (12) In contrast to (10), the sum in (12) is finite. Note that (11) implies that any real filter meeting this condition will satisfy $k\in K \Rightarrow -k\in K$, and in addition $S(2\pi k/\tau) = S^*(-2\pi k/\tau)$, due to the conjugate symmetry of real filters. Defining the M × M diagonal matrix S whose kth entry is $S^*(2\pi k/\tau)$ for all $k\in K$, and the length-N vector c whose nth element is c[n], we may write (12) as $\mathbf{c} = \mathbf{V}(-\mathbf{t}_s)\mathbf{S}\mathbf{x}$, (13) where $\mathbf{t}_s = \{nT : n = 0\ldots N-1\}$, and V is defined as in (6) with the parameter $-\mathbf{t}_s$ and dimensions N × M. The matrix S is invertible by construction. Since V is Vandermonde, it is left invertible as long as N ≥ M. Therefore, $\mathbf{x} = \mathbf{S}^{-1}\mathbf{V}^{\dagger}(-\mathbf{t}_s)\mathbf{c}$. (14) In the special case where N = M and T = τ/N, the recovery in (14) becomes $\mathbf{x} = \mathbf{S}^{-1}\mathrm{DFT}\{\mathbf{c}\}$, (15) i.e., the vector x is obtained by applying the Discrete Fourier Transform (DFT) to the sample vector, followed by a correction matrix related to the sampling filter. The idea behind this sampling scheme is that each sample is actually a linear combination of the elements of x. The sampling kernel s(t) is designed to pass the coefficients X[k], k ∈ K while suppressing all other coefficients X[k], k ∉ K.
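The recovery step (13)-(14) can be verified numerically; the following is our own sketch (not the paper's code), with an arbitrary invertible diagonal S standing in for the sampled filter response:

```python
import numpy as np

# Forward model c = V(-t_s) S x; x is then recovered via the pseudo-inverse of
# V and the inverse of the diagonal S, as in eq. (14).
M = 5
K = np.arange(-2, 3)                                   # index set K, |K| = M
tau = 1.0
N = M
T = tau / N
rng = np.random.default_rng(0)
x = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # stand-in for X[k]
S = np.diag(1.0 + 0.1 * np.arange(M)).astype(complex)      # nonzero S*(2 pi k/tau)
n = np.arange(N)
V = np.exp(2j * np.pi * np.outer(n * T, K) / tau)          # (V(-t_s))_{nk}
c = V @ S @ x                                              # time-domain samples
x_rec = np.linalg.solve(S, np.linalg.pinv(V) @ c)          # eq. (14)
print(np.allclose(x_rec, x))
```

With N = M and T = τ/N, V is (up to column ordering) a DFT matrix, which is the special case (15).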
This is exactly what the condition in (11) means. This sampling scheme guarantees that each sample combination is linearly independent of the others. Therefore, the linear system of equations in (13) has full column rank, which allows us to solve for the vector x. We summarize this result in the following theorem. Theorem 1. Consider the τ-periodic stream of pulses of order L: $x(t) = \sum_{m\in\mathbb{Z}}\sum_{l=1}^{L} a_l h(t - t_l - m\tau)$. Choose a set K of consecutive indices for which $H(2\pi k/\tau)\ne 0$, $\forall k\in K$. Then the samples $c[n] = \langle s(t - nT), x(t)\rangle$, $n = 0\ldots N-1$, uniquely determine the signal x(t) for any s(t) satisfying condition (11), as long as $N \ge |K| \ge 2L$. In order to extend Theorem 1 to nonuniform sampling, we only need to substitute the nonuniform sampling times in the vector $\mathbf{t}_s$ in (14). Theorem 1 presents a general single-channel sampling scheme. One special case of this framework is the one proposed by Vetterli et al. in [3], in which $s^*(-t) = B\,\mathrm{sinc}(-Bt)$, where B = M/τ and N ≥ M ≥ 2L. In this case s(t) is an ideal low-pass filter of bandwidth B with $S(\omega) = \frac{1}{\sqrt{2\pi}}\mathrm{rect}\!\left(\frac{\omega}{2\pi B}\right)$. (16) Clearly, (16) satisfies the general condition in (11) with $K = \{-\lfloor M/2\rfloor, \ldots, \lfloor M/2\rfloor\}$ and $S\!\left(\frac{2\pi k}{\tau}\right) = \frac{1}{\sqrt{2\pi}}$, $\forall k\in K$. Note that since this filter is real valued it must satisfy $k\in K \Rightarrow -k\in K$, i.e., the indices come in pairs except for k = 0. Since k = 0 is part of the set K, in this case the cardinality M = |K| must be odd, so that $N \ge M \ge 2L + 1$ samples are required, rather than the minimal rate N ≥ 2L. The ideal low-pass filter is bandlimited, and therefore has infinite time-support, so that it cannot be extended to finite and infinite streams of pulses. In the next section we propose a class of non-bandlimited sampling kernels, which exploit the additional degrees of freedom in condition (11), and have compact support in the time domain. The compact support allows us to extend this class to finite and infinite streams, as we show in Sections III and IV, respectively. C.
Compactly Supported Sampling Kernels Consider the following SoS class, which consists of a sum of sincs in the frequency domain: $G(\omega) = \frac{\tau}{\sqrt{2\pi}}\sum_{k\in K} b_k\,\mathrm{sinc}\!\left(\frac{\omega}{2\pi/\tau} - k\right)$, (17) where $b_k \ne 0$, $k\in K$. The filter in (17) is real valued if and only if $k\in K \Rightarrow -k\in K$ and $b_k = b^*_{-k}$ for all $k\in K$. Since for each sinc in the sum $\mathrm{sinc}\!\left(\frac{\omega}{2\pi/\tau} - k\right) = \begin{cases} 1 & \omega = 2\pi k'/\tau,\ k' = k\\ 0 & \omega = 2\pi k'/\tau,\ k' \ne k,\end{cases}$ (18) the filter G(ω) satisfies (11) by construction. Switching to the time domain, $g(t) = \mathrm{rect}\!\left(\frac{t}{\tau}\right)\sum_{k\in K} b_k e^{j2\pi kt/\tau}$, (19) which is clearly a time-compact filter with support τ. The SoS class in (19) may be extended to $G(\omega) = \frac{\tau}{\sqrt{2\pi}}\sum_{k\in K} b_k\,\phi\!\left(\frac{\omega}{2\pi/\tau} - k\right)$, (20) where $b_k \ne 0$, $k\in K$, and φ(ω) is any function satisfying $\phi(\omega) = \begin{cases} 1 & \omega = 0\\ 0 & |\omega|\in\mathbb{N}\\ \text{arbitrary} & \text{otherwise.}\end{cases}$ (21) This more general structure allows for smooth versions of the rect function, which is important when practically implementing analog filters. The function g(t) represents a class of filters determined by the parameters $\{b_k\}_{k\in K}$. These degrees of freedom offer a filter design tool where the free parameters $\{b_k\}_{k\in K}$ may be optimized for different goals, e.g., parameters which will result in a feasible analog filter. In Theorem 2 below, we show how to choose $\{b_k\}$ to minimize the mean-squared error (MSE) in the presence of noise. Determining the parameters $\{b_k\}_{k\in K}$ may also be viewed from a more empirical point of view. The impulse response of any analog filter having support τ may be written in terms of a windowed Fourier series as $\Phi(t) = \mathrm{rect}\!\left(\frac{t}{\tau}\right)\sum_{k\in\mathbb{Z}}\beta_k e^{j2\pi kt/\tau}$. (22) Confining ourselves to filters which satisfy $\beta_k \ne 0$, $k\in K$, we may truncate the series and choose $b_k = \begin{cases}\beta_k & k\in K\\ 0 & k\notin K\end{cases}$ (23) as the parameters of g(t) in (19). With this choice, g(t) can be viewed as an approximation to Φ(t).
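The time-domain kernel (19) is easy to evaluate directly; the following sketch is our own illustration (not the paper's code) of the special case with all coefficients equal to one, which reduces to the windowed Dirichlet kernel:

```python
import numpy as np

# g(t) = rect(t/tau) * sum_{k in K} b_k exp(j 2 pi k t / tau), K = {-p, ..., p}.
def sos_kernel(t, tau, p, b=None):
    k = np.arange(-p, p + 1)
    if b is None:
        b = np.ones(2 * p + 1)                        # all b_k = 1: Dirichlet case
    window = (np.abs(t) <= tau / 2).astype(float)     # rect(t/tau), support tau
    return window * np.real(
        (b * np.exp(2j * np.pi * np.outer(t, k) / tau)).sum(axis=1))

tau, p = 1.0, 10
g = sos_kernel(np.array([-1.0, 0.0, 1.0]), tau, p)
print(g)   # zero outside [-tau/2, tau/2]; peak value 2p + 1 = 21 at t = 0
```

Passing a Hamming-windowed coefficient vector for `b`, as in (26), yields the smoother variant plotted in Fig. 3; in both cases the support remains exactly τ.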
Notice that there is an inherent tradeoff here: using more coefficients results in a better approximation of the analog filter, but in turn requires more samples, since the number of samples N must be greater than the cardinality of the set $\mathcal{K}$.

To demonstrate the filter g(t), we first choose $\mathcal{K} = \{-p, \ldots, p\}$ and set all coefficients $\{b_k\}$ to one, resulting in

$g(t) = \mathrm{rect}\!\left(\frac{t}{\tau}\right) \sum_{k=-p}^{p} e^{j2\pi kt/\tau} = \mathrm{rect}\!\left(\frac{t}{\tau}\right) D_p(2\pi t/\tau), \quad (24)$

where the Dirichlet kernel $D_p(t)$ is defined by

$D_p(t) = \sum_{k=-p}^{p} e^{jkt} = \frac{\sin\!\left(\left(p + \frac{1}{2}\right)t\right)}{\sin(t/2)}. \quad (25)$

The resulting filter for p = 10 and τ = 1 sec is depicted in Fig. 2. This filter is also optimal in an MSE sense for the case h(t) = δ(t), as we show in Theorem 2. In Fig. 3 we plot g(t) for the case in which the $b_k$'s are chosen as a length-M symmetric Hamming window:

$b_k = 0.54 - 0.46\cos\!\left(2\pi\frac{k + \lfloor M/2\rfloor}{M}\right), \quad k \in \mathcal{K}. \quad (26)$

Notice that in both cases the coefficients satisfy $b_k = b_{-k}^*$, and therefore the resulting filters are real valued.

In the presence of noise, the choice of $\{b_k\}_{k\in\mathcal{K}}$ affects the performance. Consider the case in which digital noise is added to the samples c, so that y = c + w, with w denoting a white Gaussian noise vector. Using (13),

$\mathbf{y} = \mathbf{V}(-\mathbf{t}_s)\mathbf{B}\mathbf{x} + \mathbf{w}, \quad (27)$

where B is a diagonal matrix having $\{b_k\}$ on its diagonal. To choose the optimal B, we assume that the $\{a_l\}$ are uncorrelated with variance $\sigma_a^2$, independent of $\{t_l\}$, and that the $\{t_l\}$ are uniformly distributed in $[0, \tau)$. Since the noise is added to the samples after filtering, increasing the filter's amplification will always reduce the MSE; therefore, the filter's energy must be normalized, which we do by adding the constraint $\mathrm{Tr}(\mathbf{B}^*\mathbf{B}) = 1$. Under these assumptions, we have the following theorem.

Theorem 2.
The minimal MSE of a linear estimator of x from the noisy samples y in (27) is achieved by choosing the coefficients

$|b_i|^2 = \begin{cases} \frac{\sigma^2}{N}\left(\sqrt{\frac{N}{\lambda\sigma^2}} - \frac{1}{|\tilde h_i|^2}\right) & \lambda \leq |\tilde h_i|^4 N/\sigma^2 \\ 0 & \lambda > |\tilde h_i|^4 N/\sigma^2, \end{cases} \quad (28)$

where $\tilde h_k = H(2\pi k/\tau)\,\sigma_a\sqrt{L}/\tau$ and the $|\tilde h_k|$ are arranged in increasing order,

$\sqrt{\lambda} = \frac{(|\mathcal{K}| - m)\sqrt{N/\sigma^2}}{N/\sigma^2 + \sum_{i=m+1}^{|\mathcal{K}|} 1/|\tilde h_i|^2}, \quad (29)$

and m is the smallest index for which $\lambda \leq |\tilde h_{m+1}|^4 N/\sigma^2$.

Proof: See the Appendix.

An important consequence of Theorem 2 is the following corollary.

Corollary 1. If $|\tilde h_k|^2 = |\tilde h_\ell|^2$, $\forall k, \ell \in \mathcal{K}$, then the optimal coefficients are $|b_i|^2 = 1/|\mathcal{K}|$, $\forall i \in \mathcal{K}$.

Proof: It is evident from (28) that if $|\tilde h_k| = |\tilde h_\ell|$ then $|b_k| = |b_\ell|$. To satisfy the trace constraint $\mathrm{Tr}(\mathbf{B}^*\mathbf{B}) = 1$, λ cannot be chosen such that all $b_i = 0$. Therefore, $|b_i|^2 = 1/|\mathcal{K}|$ for all $i \in \mathcal{K}$.

From Corollary 1 it follows that when h(t) = δ(t), the optimal choice of coefficients is $b_k = b_j$ for all k and j. We therefore use this choice when simulating noisy settings in the next section.

Our sampling scheme for the periodic case consists of sampling kernels having compact support in the time domain. In the next section we exploit the compact support of our filter and extend the results to the finite-stream case. We will show that our sampling and reconstruction scheme offers a numerically stable solution with high noise robustness.

To demonstrate the periodic scheme, consider a stream of pulses with Gaussian pulse shape

$h(t) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp(-t^2/2\sigma^2), \quad (30)$

with parameter σ = 7·10⁻³ and period τ = 1. The time-delays and amplitudes were chosen randomly. In order to demonstrate near-critical sampling we choose the set of indices $\mathcal{K} = \{-L, \ldots, L\}$ with cardinality $M = |\mathcal{K}| = 11$. We filter x(t) with g(t) of (26). The filter output is sampled uniformly N times, with sampling period T = τ/N, where N = M = 11. The sampling process is depicted in Fig. 4. The vector x is obtained using (14), and the delays and amplitudes are determined by the annihilating filter method. Reconstruction results are depicted in Fig. 5.
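The periodic pipeline described above can be sketched end to end in a few lines. The sketch below (numpy assumed; toy values, h(t) = δ(t) so that H ≡ 1, and the analog filtering abstracted away by working directly with the noiseless Fourier coefficients, which by (13)-(14) the samples determine) recovers the delays with the standard annihilating-filter method and the amplitudes by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
tau, L = 1.0, 2
t_true = np.sort(rng.uniform(0.05, 0.95, L))       # unknown delays
a_true = np.array([1.0, 0.7])                      # unknown amplitudes
K = np.arange(-L, L + 1)                           # 2L+1 consecutive indices
N = len(K)
T = tau / N

# Fourier coefficients of the dirac stream (h = delta, so H(2 pi k / tau) = 1)
X = (1 / tau) * np.array([np.sum(a_true * np.exp(-2j * np.pi * k * t_true / tau))
                          for k in K])

# the samples satisfy c = V(-t_s) x, cf. (13); recover x by solving the system
t_s = np.arange(N) * T
V = np.exp(2j * np.pi * np.outer(t_s, K) / tau)    # full column rank by Theorem 1
c = V @ X                                          # simulated noiseless samples
X_hat = np.linalg.solve(V, c)

# annihilating filter: find A = [1, A_1, ..., A_L] with sum_i A_i X[k-i] = 0
rows = [[X_hat[j - i] for i in range(1, L + 1)] for j in range(L, 2 * L + 1)]
A = np.linalg.lstsq(np.array(rows), -X_hat[L:2 * L + 1], rcond=None)[0]
u = np.roots(np.concatenate(([1.0 + 0j], A)))      # roots are e^{-j 2 pi t_l / tau}
t_hat = np.sort((-tau * np.angle(u) / (2 * np.pi)) % tau)
assert np.allclose(t_hat, t_true, atol=1e-7)

# amplitudes by least squares on X[k] = (1/tau) sum_l a_l e^{-j 2 pi k t_l / tau}
U = np.exp(-2j * np.pi * np.outer(K, t_hat) / tau) / tau
a_hat = np.linalg.lstsq(U, X_hat, rcond=None)[0].real
assert np.allclose(np.sort(a_hat), np.sort(a_true), atol=1e-6)
```

For a general pulse shape the only change is dividing each recovered coefficient by H(2πk/τ), which is nonzero on K by assumption.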
The estimation and reconstruction are both exact to numerical precision. Analog filtering operations are carried out by discrete approximations over a fine grid; the analog signal and filters are mimicked by high-rate digital signals. Since the sampling rate which constructs the fine grid is two to three orders of magnitude higher than the final sampling rate T, the simulations reflect the analog results very well.

Samples were taken uniformly with sampling period T = τ/N. We choose g(t) given by (24). As explained earlier, only the values of the filter at the points $2\pi k/\tau$, $k \in \mathcal{K}$, affect the samples (see (11)). Since the values of the filter at the relevant points coincide and are equal to one for both the low-pass filter of [3] and $g^*(-t)$, the resulting samples for both settings are identical. Therefore, we present results for our method only, and note that the exact same results are obtained using the approach of [3].

In our setup, white Gaussian noise (AWGN) with variance $\sigma_n^2$ is added to the samples, where we define the SNR as

$\mathrm{SNR} = \frac{\frac{1}{N}\|\mathbf{c}\|_2^2}{\sigma_n^2}, \quad (31)$

with c denoting the clean samples. In our experiments the noise variance is set to give the desired SNR. The simulation consists of 1000 experiments for each SNR, where in each experiment a new noise vector is created. We choose $\mathbf{t} = \tau\cdot(1/3,\ 2/3)^T$ and $\mathbf{a} = \tau\cdot(1,\ 1)^T$, and these vectors remain constant throughout the experiments. We define the error in time-delay estimation as the average of $\|\mathbf{t} - \hat{\mathbf{t}}\|_2^2$, where t and t̂ denote the true and estimated time-delays, respectively, sorted in increasing order. The error in amplitudes is similarly defined by $\|\mathbf{a} - \hat{\mathbf{a}}\|_2^2$. In Fig. 6 we show the error as a function of SNR for both delay and amplitude estimation. Estimation of the time-delays is the main interest in FRI literature, due to the special nonlinear methods required for delay recovery.
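Setting the noise variance from a desired SNR via (31) is a one-line computation; the sketch below (numpy assumed, with stand-in complex samples) shows the calibration and verifies that the realized SNR matches the target.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 11
c = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # stand-in clean samples

snr_db = 20.0                                # desired SNR in dB
snr_lin = 10 ** (snr_db / 10)
# (31): SNR = (||c||^2 / N) / sigma_n^2  =>  sigma_n^2 = ||c||^2 / (N * SNR)
sigma_n2 = np.sum(np.abs(c) ** 2) / (N * snr_lin)
w = np.sqrt(sigma_n2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = c + w                                    # noisy samples used in the experiments

achieved = (np.sum(np.abs(c) ** 2) / N) / sigma_n2
assert np.isclose(10 * np.log10(achieved), snr_db)
```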
Once the delays are known, the standard least-squares method is typically used to recover the amplitudes; we therefore focus on delay estimation in the sequel. Finally, for the same setting, we can improve reconstruction accuracy at the expense of oversampling, as illustrated in Fig. 7. Here we show recovery performance for oversampling factors of 1, 2, 4 and 8. The oversampling was exploited using the total least-squares method, followed by Cadzow's iterative denoising (both described in detail in [10]).

III. FINITE STREAM OF PULSES

A. Extension of SoS Class

Consider now a finite stream of pulses, defined as

$\tilde x(t) = \sum_{l=1}^{L} a_l h(t - t_l), \quad t_l \in [0, \tau),\ a_l \in \mathbb{R},\ l = 1, \ldots, L, \quad (32)$

where, as in Section II, h(t) is a known pulse shape and $\{t_l, a_l\}_{l=1}^L$ are the unknown delays and amplitudes. The time-delays $\{t_l\}_{l=1}^L$ are restricted to lie in a finite time interval $[0, \tau)$. Since there are only 2L degrees of freedom, we wish to design a sampling and reconstruction method which perfectly reconstructs $\tilde x(t)$ from 2L samples. In this section we assume that the pulse h(t) has finite support R, i.e.,

$h(t) = 0, \quad \forall |t| \geq R/2. \quad (33)$

This is a rather weak condition, since our primary interest is in very short pulses which have wide, or even infinite, frequency support, and therefore cannot be sampled efficiently using classical sampling results for bandlimited signals. We now investigate the structure of the samples taken in the periodic case, and design a sampling kernel for the finite setting which obtains precisely the same samples c[n] as in the periodic case. In the periodic setting, the resulting samples are given by (10).
Using g(t) of (19) as the sampling kernel, we have

$c[n] = \langle g(t - nT), x(t)\rangle = \sum_{m\in\mathbb{Z}}\sum_{l=1}^{L} a_l \int_{-\infty}^{\infty} h(t - t_l - m\tau)\, g^*(t - nT)\,dt = \sum_{m\in\mathbb{Z}}\sum_{l=1}^{L} a_l\, \varphi(nT - t_l - m\tau), \quad (34)$

where we defined

$\varphi(\vartheta) = \langle g(t - \vartheta), h(t)\rangle. \quad (35)$

Since g(t) in (19) vanishes for all $|t| > \tau/2$ and h(t) satisfies (33), the support of $\varphi(t)$ is at most $R + \tau$, i.e.,

$\varphi(t) = 0 \quad \text{for all } |t| \geq (R + \tau)/2. \quad (36)$

Using this property, the summation in (34) runs over nonzero values only for indices m satisfying

$|nT - t_l - m\tau| < (R + \tau)/2. \quad (37)$

Sampling within the window $[0, \tau)$, i.e., $nT \in [0, \tau)$, and noting that the time-delays lie in the interval $t_l \in [0, \tau)$, $l = 1, \ldots, L$, (37) implies

$(R + \tau)/2 > |nT - t_l - m\tau| \geq |m|\tau - |nT - t_l| > (|m| - 1)\tau. \quad (38)$

Here we used the triangle inequality and the fact that $|nT - t_l| < \tau$ in our setting. Therefore,

$|m| < \frac{R/\tau + 3}{2} \;\Rightarrow\; |m| \leq \left\lceil \frac{R/\tau + 3}{2} \right\rceil - 1 \triangleq r, \quad (39)$

i.e., the elements of the sum in (34) vanish for all m except the values in (39). Consequently, the infinite sum in (34) reduces to a finite sum over $|m| \leq r$, so that (34) becomes

$c[n] = \sum_{m=-r}^{r}\sum_{l=1}^{L} a_l\, \varphi(nT - t_l - m\tau) = \sum_{m=-r}^{r} \left\langle g(t - nT + m\tau),\ \sum_{l=1}^{L} a_l h(t - t_l) \right\rangle, \quad (40)$

where in the last equality we used the linearity of the inner product. Defining a function which consists of 2r + 1 periods of g(t),

$g_r(t) = \sum_{m=-r}^{r} g(t + m\tau), \quad (41)$

we conclude that

$c[n] = \langle g_r(t - nT), \tilde x(t)\rangle. \quad (42)$

Therefore, the samples c[n] can be obtained by filtering the aperiodic signal $\tilde x(t)$ with the filter $g_r^*(-t)$ prior to sampling. This filter has compact support equal to $(2r + 1)\tau$. Since the finite-setting samples (42) are identical to those of the periodic case (34), recovery of the delays and amplitudes is performed exactly as in the periodic setting. We summarize this result in the following theorem.

Theorem 3.
Consider the finite stream of pulses

$\tilde x(t) = \sum_{l=1}^{L} a_l h(t - t_l), \quad t_l \in [0, \tau),\ a_l \in \mathbb{R},$

where h(t) has finite support R. Choose a set $\mathcal{K}$ of consecutive indices for which $H(2\pi k/\tau) \neq 0$, $\forall k \in \mathcal{K}$. Then the N samples

$c[n] = \langle g_r(t - nT), \tilde x(t)\rangle, \quad n = 0, \ldots, N - 1,\ nT \in [0, \tau),$

where r is defined in (39), and $g_r(t)$ is compactly supported and defined by (41) (based on the filter g(t) in (17)), uniquely determine the signal $\tilde x(t)$ as long as $N \geq |\mathcal{K}| \geq 2L$.

If, for example, the support R of h(t) satisfies $R \leq \tau$, then we obtain from (39) that r = 1. The filter in this case consists of three periods of g(t):

$g_{3p}(t) \triangleq g_r(t)\big|_{r=1} = g(t - \tau) + g(t) + g(t + \tau). \quad (43)$

Practical implementation of the filter may be carried out using delay-lines. The relation of this scheme to previous approaches will be investigated in Section V.

Perfect reconstruction is achieved, as can be seen in Fig. 8. The estimation is exact to numerical precision.

2) High Order Problems: The same simulation was carried out with L = 20 diracs. The results are shown in Fig. 9. Here again, the reconstruction is perfect even for large L.

3) Noisy Case: We now consider the performance of our method in the presence of noise. In addition, we compare our performance to the B-spline and E-spline methods proposed in [13], and to the Gaussian sampling kernel [3]. We examine four scenarios, in which the signal consists of L = 2, 3, 5, 20 diracs.¹ In our setup, the time-delays are equally distributed in the window $[0, \tau)$ with τ = 1, and remain constant throughout the experiments. All amplitudes are set to one. In each scenario, noise is added to the samples; in other words, $\sigma_n$ in (31) is method dependent, and is determined by the desired SNR and the samples of the specific technique.

¹ Due to the computational complexity of calculating the time-domain expression for high-order E-splines, these functions were simulated only up to order 9, which allows for L = 5 pulses.
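The support bookkeeping of Theorem 3 is mechanical; the sketch below (numpy assumed, all-ones coefficients as a toy choice) computes r from (39) for a few pulse supports and checks that the periodized kernel (41) indeed has support (2r+1)τ.

```python
import numpy as np

def r_of(R, tau):
    # (39): r = ceil((R/tau + 3) / 2) - 1; pulses with R <= tau give r = 1
    return int(np.ceil((R / tau + 3) / 2)) - 1

assert r_of(0.5, 1.0) == 1 and r_of(1.0, 1.0) == 1 and r_of(3.0, 1.0) == 2

tau = 1.0
K = np.arange(-2, 3)

def g(t):
    # one period of the SoS filter (19), all-ones coefficients
    inside = (np.abs(t) <= tau / 2)
    return inside * sum(np.exp(2j * np.pi * k * t / tau) for k in K).real

def g_r(t, r):
    # (41): g_r(t) = sum_{m=-r}^{r} g(t + m tau), support (2r + 1) tau
    return sum(g(t + m * tau) for m in range(-r, r + 1))

# for r = 1 (the g_3p case of (43)) the support is exactly [-1.5 tau, 1.5 tau]
t = np.linspace(-3 * tau, 3 * tau, 6001)
supp = t[np.abs(g_r(t, 1)) > 1e-12]
assert supp.min() >= -1.5 * tau - 1e-9 and supp.max() <= 1.5 * tau + 1e-9
```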
Hard thresholding was implemented in order to improve the spline methods, as suggested by the authors in [13]. The threshold was chosen to be $3\sigma_n$, where $\sigma_n$ is the standard deviation of the AWGN. For the Gaussian sampling kernel, the parameter σ was optimized and took on the values σ = 0.25, 0.28, 0.32, 0.9, respectively.

The results are given in Fig. 10. For L = 2 all methods are stable, where E-splines exhibit better performance than B-splines, and the Gaussian and SoS approaches demonstrate the lowest errors. As the value of L grows, the advantage of the SoS filter becomes more prominent: for L ≥ 5, the performance of the Gaussian and both spline methods deteriorates, with errors approaching the order of τ. In contrast, the SoS filter retains its performance nearly unchanged even up to L = 20, where the B-spline and Gaussian methods are unstable. The improved version of the Gaussian approach presented in [12] would not perform better in this high-order case, since it fails for L > 9, as noted by the authors. A comparison of our approach to previous methods is detailed in Section V.

IV. INFINITE STREAM OF PULSES

We now consider the case of an infinite stream of pulses

$z(t) = \sum_{l\in\mathbb{Z}} a_l h(t - t_l), \quad t_l, a_l \in \mathbb{R}. \quad (44)$

We assume that the infinite signal has a bursty character, i.e., the signal has two distinct phases: (a) bursts of maximal duration τ containing at most L pulses, and (b) quiet phases between bursts. For the sake of clarity we begin with the case h(t) = δ(t). For this choice, the filter $g_r^*(-t)$ in (41) reduces to $g_{3p}^*(-t)$ of (43). Since the filter $g_{3p}^*(-t)$ has compact support 3τ, we are assured that the current burst cannot influence samples taken 3τ/2 seconds before or after it. In the finite case we confined ourselves to sampling within the interval $[0, \tau)$; similarly, here we assume that the samples are taken during the burst duration.
Therefore, if the minimal spacing between any two consecutive bursts is 3τ/2, then we are guaranteed that each sample taken during a burst is influenced by that burst only, as depicted in Fig. 11. Consequently, the infinite problem can be reduced to a sequential solution of local, distinct, finite-order problems, as in Section III. Here the compact support of our filter comes into play, allowing us to apply local reconstruction methods.

[Fig. 11. Bursty signal z(t). Spacing of 3τ/2 between bursts ensures that the influence of the current burst ends before taking the samples of the next burst, due to the finite support 3τ of the sampling kernel $g_{3p}^*(-t)$.]

In the above argument we assume that we know the locations of the bursts, since we must acquire samples from within the burst duration; samples outside the burst duration are contaminated by energy from adjacent bursts. Nonetheless, knowledge of burst locations is available in many applications, such as synchronized communication, where the receiver knows when to expect the bursts, or radar and imaging scenarios, where the transmitter is itself the receiver. We now state this result in a theorem.

Theorem 4. Consider a signal z(t) which is a stream of bursts consisting of delayed and weighted diracs. The maximal burst duration is τ, and the maximal number of pulses within each burst is L. Then the samples

$c[n] = \langle g_{3p}(t - nT), z(t)\rangle, \quad n \in \mathbb{Z},$

where $g_{3p}(t)$ is defined by (43), are a sufficient characterization of z(t) as long as the spacing between two adjacent bursts is greater than 3τ/2 and the burst locations are known.

Extending this result to a general pulse h(t) is straightforward, as long as h(t) is compactly supported with support R and we filter with $g_r^*(-t)$ as defined in (41), with the appropriate r from (39).
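The decoupling condition of Theorem 4 amounts to simple interval arithmetic. A small sketch (numpy assumed; the burst start times are hypothetical illustration values) checks whether a given burst layout satisfies the 3τ/2 spacing bound, which is what licenses solving each burst as an independent finite problem.

```python
import numpy as np

tau = 1.0
# g_3p has support 3 tau, so a sample at time s only "sees" diracs with
# |s - t_l| < 1.5 tau; bursts whose gaps exceed 1.5 tau therefore decouple.

def decoupled(burst_starts, tau):
    # gap between the end of one burst (duration <= tau) and the next start
    starts = np.sort(np.asarray(burst_starts, float))
    gaps = starts[1:] - (starts[:-1] + tau)
    return bool(np.all(gaps > 1.5 * tau))

assert decoupled([0.0, 3.0, 6.5], tau)       # gaps of 2.0 and 2.5 exceed 1.5 tau
assert not decoupled([0.0, 2.0], tau)        # gap of 1.0 violates the bound

# a sample inside the first burst window [0, tau) vs a dirac of the next burst:
s, t_next = 0.9, 3.2
assert abs(s - t_next) >= 1.5 * tau          # outside the kernel's reach
```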
If we can choose a set $\mathcal{K}$ of consecutive indices for which $H(2\pi k/\tau) \neq 0$, $\forall k \in \mathcal{K}$, and we are guaranteed that the minimal spacing between two adjacent bursts is greater than $((2r + 1)\tau + R)/2$, then the above theorem holds.

A. Periodic Case

The work in [3] was the first to address efficient sampling of pulse streams, e.g., diracs. Their approach for solving the periodic case was ideal lowpass filtering followed by uniform sampling, which allows one to obtain the Fourier-series coefficients of the signal. These coefficients are then processed by the annihilating filter to obtain the unknown time-delays and amplitudes. In Section II we derived a general condition (11) on the sampling kernel under which recovery is guaranteed; the lowpass filter of [3] is a special case of this result. The noise robustness of both the lowpass approach and our more general method is high as long as the pulses are well separated, since reconstruction from Fourier-series coefficients is stable in this case. Both approaches achieve the minimal number of samples. However, the lowpass filter is bandlimited and consequently has infinite time support; therefore, this sampling scheme is unsuitable for finite and infinite streams of pulses. The SoS class introduced in Section II consists of compactly supported filters, which is crucial in enabling the extension of our results to finite and infinite streams of pulses. A comparison between the two methods is shown in Table I.

B. Finite Pulse Stream

The authors of [3] proposed a Gaussian sampling kernel for sampling finite streams of diracs. The Gaussian method is numerically unstable, as mentioned in [12], since the samples are multiplied by a rapidly diverging or decaying exponent; therefore, this approach is unsuitable for L ≥ 6. Modifications proposed in [12] exhibit better performance and stability; however, these methods require substantial oversampling, and still exhibit instability for L > 9.
In [13], the family of polynomial reproducing kernels was introduced as sampling filters for the model (32), with B-splines proposed as a specific example. The B-spline sampling filter enables obtaining moments of the signal, rather than Fourier coefficients. The moments are then processed with the same annihilating filter used in previous methods. However, as mentioned by the authors, this approach is unstable for high values of L. This is due to the fact that, in contrast to the estimation of Fourier coefficients, estimating high-order moments is unstable, since unstable weighting of the samples is carried out during the process.

Another general family introduced in [13] for the finite model is the class of exponential reproducing kernels. As a specific case, the authors propose E-spline sampling kernels. The CTFT of an E-spline of order N + 1 is

$\hat\beta_{\boldsymbol\alpha}(\omega) = \prod_{n=0}^{N} \frac{1 - e^{\alpha_n - j\omega}}{j\omega - \alpha_n}, \quad (45)$

where $\boldsymbol\alpha = (\alpha_0, \alpha_1, \ldots, \alpha_N)$ are free parameters. In order to use E-splines as sampling kernels for pulse streams, the authors propose a specific structure on the α's: $\alpha_n = \alpha_0 + n\lambda$. Choosing exponents having a non-vanishing real part results in unstable weighting, as in the B-spline case. However, choosing the special case of purely imaginary exponents in the E-splines, already suggested by the authors, results in a reconstruction method based on Fourier coefficients, which demonstrates an interesting relation to our method. The Fourier coefficients are obtained by applying a matrix consisting of the exponential spanning coefficients $\{c_{m,n}\}$ (see [13]), instead of our Vandermonde relation (14). With this specific choice of parameters, the E-spline function satisfies (11). Interestingly, with a proper choice of spanning coefficients, it can be shown that the SoS class can reproduce exponentials with frequencies $\{2\pi k/\tau\}_{k\in\mathcal{K}}$, and therefore satisfies the general exponential reproduction property of [13].
However, the SoS filter yields a new sampling scheme with substantial advantages over existing methods, including E-splines. The first advantage is in the presence of noise, where both methods have the following structure:

$\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{w}, \quad (46)$

where w is the noise vector. While the Fourier coefficient vector x is common to both approaches, the linear transformation A is method dependent, and therefore the sample vector y is different. In our approach with g(t) of (24), A is the DFT matrix, which has a condition number of 1 for any order L. In the case of E-splines, however, the transformation matrix A consists of the E-spline exponential spanning coefficients, which has a much higher condition number, e.g., above 100 for L = 5. Consequently, some Fourier coefficients will have much higher noise levels than others. Such high variance between the noise levels of the samples is known to deteriorate the performance of spectral-analysis methods [11], the annihilating filter being one of them. This explains our simulations, which show that the SoS filter outperforms the E-spline approach in the presence of noise.

When the E-spline coefficients α are purely imaginary, it can easily be shown that (45) becomes a product of shifted sincs. This is in contrast to the SoS filter, which consists of a sum of sincs in the frequency domain. Since multiplication in the frequency domain translates to convolution in the time domain, the support of the E-spline grows with its order, and in turn with the order of the problem L; in contrast, the support of the SoS filter remains unchanged. This observation becomes important when examining the infinite case. The constraint on the signal in [13] is that no more than L pulses lie in any interval of length LPT, with P the support of the filter and T the sampling period. Since P grows linearly with L, the constraint cast on the infinite stream becomes more stringent, quadratically in L.
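The conditioning argument is easy to reproduce numerically. A sketch (numpy assumed; the second node set is an arbitrary illustrative perturbation, not the actual E-spline spanning matrix) contrasts the perfectly conditioned DFT-type Vandermonde matrix of our scheme with a Vandermonde matrix whose nodes drift off the uniform grid:

```python
import numpy as np

N = 16
# DFT-type Vandermonde: nodes are the N-th roots of unity, as in our relation (14)
dft_nodes = np.exp(2j * np.pi * np.arange(N) / N)
V_dft = np.vander(dft_nodes, N, increasing=True)
assert np.isclose(np.linalg.cond(V_dft), 1.0)    # V*V = N I, so cond = 1 for any N

# a Vandermonde matrix with non-uniform, clustered nodes loses conditioning fast
bad_nodes = np.exp(2j * np.pi * (np.arange(N) / N) ** 1.5)
V_bad = np.vander(bad_nodes, N, increasing=True)
assert np.linalg.cond(V_bad) > 100               # noise amplification varies widely
```

A condition number of 1 means the noise power is spread evenly across the recovered Fourier coefficients, which is precisely what the annihilating filter prefers.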
On the other hand, the constraint on the infinite stream using the SoS filter is independent of L. We showed in simulations that, typically, for L ≥ 5 the estimation errors using both B-spline and E-spline sampling kernels become very large. In contrast, our approach leads to stable reconstruction even for very high values of L, e.g., L = 100. In addition, even for low values of L, our simulations show that although the E-spline method improves upon B-splines, the SoS reconstruction method outperforms both spline approaches. A comparison is described in Table II.

C. Infinite Streams

The work in [13] addressed the infinite-stream case with h(t) = δ(t). They proposed filtering the signal with a polynomial reproducing sampling kernel prior to sampling. If the signal has at most L diracs within any interval of duration LPT, where P denotes the support of the sampling filter and T the sampling period, then the samples are a sufficient characterization of the signal. This condition allows one to divide the infinite stream into a sequence of finite-case problems. In our approach, the quiet phases of 1.5τ between the bursts of length τ enable the reduction to the finite case. Since the infinite solution is based on the finite one, our method is advantageous in terms of stability in high-order problems and noise robustness; however, we do have the additional requirement of quiet phases between the bursts.

Regarding the sampling rate, the number of degrees of freedom of the signal per unit time, also known as the rate of innovation, is ρ = 2L/(2.5τ), which is the critical sampling rate. Our sampling rate is 2L/τ, and therefore we oversample by a factor of 2.5. In the same scenario, the method in [13] would require a sampling rate of LP/(2.5τ), i.e., oversampling by a factor of P/2. Properties of polynomial reproducing kernels imply that P ≥ 2L; therefore, for any L ≥ 3, our method exhibits more efficient sampling.
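The rate comparison above can be sketched as a short calculation (numpy assumed; the kernel support is taken at its minimum P = 2L, the most favorable case for the competing scheme):

```python
import numpy as np

def sos_oversampling(L, tau=1.0):
    rho = 2 * L / (2.5 * tau)          # rate of innovation of the bursty signal
    rate = 2 * L / tau                 # SoS sampling rate
    return rate / rho                  # = 2.5, independent of L

def spline_oversampling(L, tau=1.0):
    P = 2 * L                          # kernel support satisfies P >= 2L
    rate = L * P / (2.5 * tau)         # rate required by the scheme of [13]
    rho = 2 * L / (2.5 * tau)
    return rate / rho                  # = P / 2, i.e., at least L

for L in (3, 5, 20, 100):
    assert np.isclose(sos_oversampling(L), 2.5)            # constant factor
    assert spline_oversampling(L) >= sos_oversampling(L)   # SoS wins for L >= 3
```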
A table comparing the various features is shown in Table III.

Recent work [14] presented a low-complexity method for reconstructing streams of pulses (both infinite and finite cases) consisting of diracs. However, the basic assumption of this method is that there is at most one dirac per sampling period. This means we must have prior knowledge of a lower limit on the spacing between two consecutive deltas in order to guarantee correct reconstruction. In some cases such a limit may not exist; even if it does, it will usually force us to sample at a much higher rate than the critical one.

VI. APPLICATION - ULTRASOUND IMAGING

An interesting application of our framework is ultrasound imaging. In ultrasonic imaging, an acoustic pulse is transmitted into the scanned tissue. The pulse is reflected due to changes in acoustic impedance, which occur, for example, at the boundaries between two different tissues. At the receiver, the echoes are recorded, where the time-of-arrival and power of each echo indicate the scatterer's location and strength, respectively. Accurate estimation of tissue boundaries and scatterer locations allows for reliable detection of certain illnesses, and is therefore of major clinical importance; the location of the boundaries is often more important than the power of the reflection. This stream of pulses is finite, since the pulse energy decays within the tissue.

We now demonstrate our method on real one-dimensional (1D) ultrasound data. The multiple-echo signal recorded at the receiver can be modeled as a finite stream of pulses, as in (32). The unknown time-delays correspond to the locations of the various scatterers, whereas the amplitudes correspond to their reflection coefficients. The pulse shape in this case is the Gaussian defined in (30), due to the physical characteristics of the electro-acoustic transducer (mechanical damping).
We assume the received pulse shape is known, either by assuming it is unchanged through propagation, through physical modeling of ultrasonic wave propagation, or by prior estimation of the received pulse. A full investigation of mismatch in the pulse shape is left for future research.

In our setting, a phantom consisting of uniformly spaced pins, mimicking point scatterers, was scanned by GE Healthcare's Vivid-i portable ultrasound imaging system [20], [21], using a 3S-RS probe. We use the data recorded by a single element in the probe, which is modeled as a 1D stream of pulses. The center frequency of the probe is $f_c = 1.7021$ MHz, the width of the transmitted Gaussian pulse is σ = 3·10⁻⁷ sec, and the depth of imaging is $R_{\max} = 0.16$ m, corresponding to a time window of τ = 2.08·10⁻⁴ sec.

In this experiment, all filtering and sampling operations are carried out digitally in simulation. The analog filter required by the sampling scheme is replaced by a lengthy finite impulse response (FIR) filter. Since the sampling frequency of the element in the system is $f_s = 20$ MHz, more than 5 times the Nyquist rate, the recorded data represents the continuous signal reliably. Consequently, digital filtering of the high-rate sampled data vector (4160 samples) followed by proper decimation mimics the original analog sampling scheme with high accuracy. The recorded signal is depicted in Fig. 12. The band-pass ultrasonic signal is demodulated to baseband, i.e., envelope detection is performed, before it is inserted into the process.

We carried out our sampling and reconstruction scheme on the aforementioned data. We set L = 4, looking for the strongest four echoes. Since the data is corrupted by strong noise, we oversampled the signal, obtaining twice the minimal number of samples. In addition, hard thresholding of the samples was implemented, where we set the threshold to 10 percent of the maximal value.
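The fraction-of-maximum hard thresholding used here is a one-liner; a minimal sketch (numpy assumed, illustrative sample values) of the pre-processing step:

```python
import numpy as np

def hard_threshold(c, frac=0.1):
    # zero out samples whose magnitude falls below frac * max|c|
    thr = frac * np.max(np.abs(c))
    return np.where(np.abs(c) >= thr, c, 0.0)

c = np.array([0.02, 0.5, -1.0, 0.05, 0.3])
assert np.allclose(hard_threshold(c), [0.0, 0.5, -1.0, 0.0, 0.3])
```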
We obtained N = 17 samples by decimating the output of the lengthy FIR digital filter imitating $g_{3p}^*(-t)$ from (43), where the coefficients $\{b_k\}$ were all set to one. In Fig. 13a the reconstructed signal is depicted versus the full demodulated signal using all 4160 samples. Clearly, the time-delays were estimated with high precision. The amplitudes were estimated as well; however, the amplitude of the second pulse has a large error, probably due to the large noise values present in its vicinity. As mentioned earlier, though, the exact locations of the scatterers are often more important than the accurate reflection coefficients. We carried out the same experiment, now oversampling by a factor of 4, resulting in N = 33 samples; here no hard thresholding is required. The results are depicted in Fig. 13b, and are very similar to our previous results. In both simulations, the estimation error in the pulse location is around 0.1 mm.

Current ultrasound imaging technology operates on the high-rate sampled data, e.g., $f_s = 20$ MHz in our setting. Since there are usually around 100 different elements in a single ultrasonic probe, each sampled at a very high rate, the data throughput becomes very high and imposes high computational complexity on the system, limiting its capabilities. There is therefore a demand for lowering the sampling rate, which in turn will reduce the complexity of reconstruction. Exploiting the parametric point of view, our sampling scheme achieves such a rate reduction.

VII. CONCLUSIONS

We presented efficient sampling and reconstruction schemes for streams of pulses. For the case of a periodic stream of pulses, we derived a general condition on the sampling kernel which allows a single-channel uniform sampling scheme. Previous work [3] is a special case of this general result. We then proposed a class of filters, satisfying the condition, with compact support. Exploiting the compact support of the filters, we constructed a new sampling scheme for the case of a finite stream of pulses.
Simulations show that this method exhibits better performance than previous techniques [3], [13], in terms of stability in high-order problems and noise robustness. An extension to an infinite stream of pulses was also presented. The compact support of the filter allows for local reconstruction, and thus lowers the complexity of the problem. Finally, we demonstrated the advantage of our approach in reducing the sampling and processing rates of ultrasound imaging, by applying our techniques to real ultrasound data.

APPENDIX
PROOF OF THEOREM 2

The MSE of the optimal linear estimator of the vector x from the measurement vector y is known to be [22]

$\mathrm{MSE} = \mathrm{Tr}\{\mathbf{R}_{xx}\} - \mathrm{Tr}\{\mathbf{R}_{xy}\mathbf{R}_{yy}^{-1}\mathbf{R}_{yx}\}. \quad (47)$

The covariance matrices in our case are

$\mathbf{R}_{xy} = \mathbf{R}_{xx}\mathbf{B}^*\mathbf{V}^*, \quad (48)$

$\mathbf{R}_{yy} = \mathbf{V}\mathbf{B}\mathbf{R}_{xx}\mathbf{B}^*\mathbf{V}^* + \sigma^2\mathbf{I}, \quad (49)$

where we used (27) and the fact that $\mathbf{R}_{ww} = \sigma^2\mathbf{I}$, since w is a white Gaussian noise vector. Under our assumptions on $\{t_l\}$ and $\{a_l\}$, denoting $h_k = H(2\pi k/\tau)$ and using (5),

$(\mathbf{R}_{xx})_{k,k'} = E\left[X[k]X^*[k']\right] = \frac{1}{\tau^2} h_k h_{k'}^* \sum_{l=1}^{L}\sum_{l'=1}^{L} E\left[a_l a_{l'}^*\right] e^{-j\frac{2\pi}{\tau}(k t_l - k' t_{l'})} = \frac{\sigma_a^2}{\tau^2} h_k h_{k'}^* \sum_{l=1}^{L} E\left[e^{-j\frac{2\pi}{\tau}(k - k')t_l}\right] = \frac{\sigma_a^2}{\tau^2} h_k h_{k'}^* \sum_{l=1}^{L} \int_0^\tau \frac{1}{\tau}\, e^{-j\frac{2\pi}{\tau}(k - k')t}\,dt = \frac{\sigma_a^2}{\tau^2} L |h_k|^2 \delta_{k,k'}. \quad (50)$

Denoting by $\tilde{\mathbf{H}}$ the diagonal matrix with kth element $|\tilde h_k|^2 = |h_k|^2\sigma_a^2 L/\tau^2$, we have

$\mathbf{R}_{xx} = \tilde{\mathbf{H}}. \quad (51)$

Since the first term of (47) is independent of B, minimizing the MSE with respect to B is equivalent to maximizing the second term in (47). Substituting (48), (49) and (51) into this term, the optimal B is a solution to

$\max_{\mathbf{B}}\ \mathrm{Tr}\left\{\tilde{\mathbf{H}}\mathbf{B}^*\mathbf{V}^*\left(\mathbf{V}\mathbf{B}\tilde{\mathbf{H}}\mathbf{B}^*\mathbf{V}^* + \sigma^2\mathbf{I}\right)^{-1}\mathbf{V}\mathbf{B}\tilde{\mathbf{H}}\right\} \quad \text{s.t. } \mathrm{Tr}(\mathbf{B}^*\mathbf{B}) = 1. \quad (52)$

Using the matrix inversion formula [23],

$\left(\mathbf{V}\mathbf{B}\tilde{\mathbf{H}}\mathbf{B}^*\mathbf{V}^* + \sigma^2\mathbf{I}\right)^{-1} = \frac{1}{\sigma^2}\left[\mathbf{I} - \mathbf{V}\mathbf{B}\left(\sigma^2\tilde{\mathbf{H}}^{-1} + \mathbf{B}^*\mathbf{V}^*\mathbf{V}\mathbf{B}\right)^{-1}\mathbf{B}^*\mathbf{V}^*\right]. \quad (53)$

It is easy to verify from the definition of V in (13) that

$(\mathbf{V}^*\mathbf{V})_{ik} = \sum_{l=0}^{N-1} e^{j\frac{2\pi}{N}l(k-i)} = N\delta_{k,i}. \quad (54)$

Therefore, the objective in (52) equals

$\mathrm{Tr}\left\{\frac{N}{\sigma^2}\tilde{\mathbf{H}}\mathbf{B}^*\left[\mathbf{I} - \mathbf{B}\left(\frac{\sigma^2}{N}\tilde{\mathbf{H}}^{-1} + \mathbf{B}^*\mathbf{B}\right)^{-1}\mathbf{B}^*\right]\mathbf{B}\tilde{\mathbf{H}}\right\} = \sum_{i=1}^{|\mathcal{K}|} |\tilde h_i|^2\left(1 - \frac{\sigma^2/N}{|b_i|^2|\tilde h_i|^2 + \sigma^2/N}\right), \quad (55)$

where we used the fact that B and $\tilde{\mathbf{H}}$ are diagonal.
We can now find the optimal B by maximizing (55), which is equivalent to minimizing the negative term:

$\min_{\mathbf{B}} \sum_{i=1}^{|\mathcal{K}|} \frac{|\tilde h_i|^2}{1 + |b_i|^2|\tilde h_i|^2 N/\sigma^2}, \quad \text{s.t. } \sum_{i=1}^{|\mathcal{K}|} |b_i|^2 = 1. \quad (56)$

Denoting $\beta_i = |b_i|^2$, (56) becomes the convex optimization problem

$\min_{\{\beta_i\}} \sum_{i=1}^{|\mathcal{K}|} \frac{|\tilde h_i|^2}{1 + \beta_i|\tilde h_i|^2 N/\sigma^2} \quad (57)$

subject to

$\beta_i \geq 0, \quad (58)$

$\sum_{i=1}^{|\mathcal{K}|} \beta_i = 1. \quad (59)$

To solve (57) subject to (58) and (59), we form the Lagrangian

$\mathcal{L} = \sum_{i=1}^{|\mathcal{K}|} \frac{|\tilde h_i|^2}{1 + \beta_i|\tilde h_i|^2 N/\sigma^2} + \lambda\left(\sum_{i=1}^{|\mathcal{K}|}\beta_i - 1\right) - \sum_{i=1}^{|\mathcal{K}|}\mu_i\beta_i, \quad (60)$

where, from the Karush-Kuhn-Tucker (KKT) conditions [24], $\mu_i \geq 0$ and $\mu_i\beta_i = 0$. Differentiating (60) with respect to $\beta_i$ and equating to 0,

$\frac{|\tilde h_i|^4 N/\sigma^2}{\left(1 + \beta_i|\tilde h_i|^2 N/\sigma^2\right)^2} + \mu_i = \lambda, \quad (61)$

so that λ > 0, since $|\tilde h_i| > 0$ by construction of $\tilde{\mathbf{H}}$ (see Theorem 1). If $\lambda > |\tilde h_i|^4 N/\sigma^2$, then $\mu_i > 0$ and therefore $\beta_i = 0$ from the KKT conditions. If $\lambda \leq |\tilde h_i|^4 N/\sigma^2$, then from (61) $\mu_i = 0$ and

$\beta_i = \frac{\sigma^2}{N}\left(\sqrt{\frac{N}{\lambda\sigma^2}} - \frac{1}{|\tilde h_i|^2}\right). \quad (62)$

The optimal $\beta_i$ is therefore

$\beta_i = \begin{cases} \frac{\sigma^2}{N}\left(\sqrt{\frac{N}{\lambda\sigma^2}} - \frac{1}{|\tilde h_i|^2}\right) & \lambda \leq |\tilde h_i|^4 N/\sigma^2 \\ 0 & \lambda > |\tilde h_i|^4 N/\sigma^2, \end{cases} \quad (63)$

where λ > 0 is chosen to satisfy (59). Note that from (63), if $\beta_i = 0$ and $j < i$, then $\beta_j = 0$ as well, since the $|\tilde h_i|$ are in increasing order.

We now show that there is a unique λ satisfying (59). Define the function

$G(\lambda) = \sum_{i=1}^{|\mathcal{K}|} \beta_i(\lambda) - 1, \quad (64)$

so that λ is a root of G(λ). Since the $|\tilde h_i|$'s are in increasing order, $|\tilde h_{|\mathcal{K}|}| = \max_i |\tilde h_i|$. It is clear from (63) that G(λ) is monotonically decreasing for $0 < \lambda \leq |\tilde h_{|\mathcal{K}|}|^4 N/\sigma^2$, that $G(\lambda) = -1$ for $\lambda > |\tilde h_{|\mathcal{K}|}|^4 N/\sigma^2$, and that $G(\lambda) > 0$ as $\lambda \to 0$. Thus, there is a unique λ for which (59) is satisfied. Substituting (63) into (59), and denoting by m the smallest index for which $\lambda \leq |\tilde h_{m+1}|^4 N/\sigma^2$, we have

$\sqrt{\lambda} = \frac{(|\mathcal{K}| - m)\sqrt{N/\sigma^2}}{N/\sigma^2 + \sum_{i=m+1}^{|\mathcal{K}|} 1/|\tilde h_i|^2}, \quad (65)$

completing the proof of the theorem.
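The water-filling solution (63)/(65) is straightforward to implement and check numerically. The sketch below (numpy assumed; toy values for |h̃_i|, N and σ²) computes the KKT point, verifies the constraints (58)-(59), confirms by random sampling that no feasible point attains a lower objective (57) (the problem is convex), and reproduces Corollary 1.

```python
import numpy as np

def waterfill(h, N, sigma2):
    """beta_i = |b_i|^2 from (63)/(65); h holds |h~_i| sorted increasingly."""
    h = np.asarray(h, float)
    Kc, Ns = len(h), N / sigma2
    for m in range(Kc):                          # smallest m making (63) feasible
        sqrt_lam = (Kc - m) * np.sqrt(Ns) / (Ns + np.sum(1.0 / h[m:] ** 2))
        lam = sqrt_lam ** 2
        if lam <= h[m] ** 4 * Ns:                # h[m] plays the role of |h~_{m+1}|
            beta = np.zeros(Kc)
            beta[m:] = (1.0 / Ns) * (np.sqrt(Ns / lam) - 1.0 / h[m:] ** 2)
            return beta
    raise RuntimeError("no feasible m found")

def objective(beta, h, N, sigma2):
    # (57): sum_i |h~_i|^2 / (1 + beta_i |h~_i|^2 N / sigma^2)
    h2 = np.asarray(h, float) ** 2
    return np.sum(h2 / (1.0 + beta * h2 * N / sigma2))

h = np.array([0.2, 0.5, 1.0, 1.5, 2.0])
N, sigma2 = 11, 0.1
beta = waterfill(h, N, sigma2)
assert np.isclose(beta.sum(), 1.0) and np.all(beta >= 0)   # (58)-(59) hold

# the KKT point is not beaten by random feasible points (convex problem)
rng = np.random.default_rng(0)
best = objective(beta, h, N, sigma2)
for _ in range(200):
    trial = rng.random(len(h))
    trial /= trial.sum()
    assert objective(trial, h, N, sigma2) >= best - 1e-12

# Corollary 1: equal |h~_i| gives the flat solution beta_i = 1/|K|
assert np.allclose(waterfill(np.ones(5), N, sigma2), 0.2)
```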
9,762
1003.2822
2138787877
Signals comprised of a stream of short pulses appear in many applications including bioimaging and radar. The recent finite rate of innovation framework has paved the way to low-rate sampling of such pulses by noticing that only a small number of parameters per unit time are needed to fully describe these signals. Unfortunately, for high rates of innovation, existing sampling schemes are numerically unstable. In this paper we propose a general sampling approach which leads to stable recovery even in the presence of many pulses. We begin by deriving a condition on the sampling kernel which allows perfect reconstruction of periodic streams from the minimal number of samples. We then design a compactly supported class of filters satisfying this condition. The periodic solution is extended to finite and infinite streams and is shown to be numerically stable even for a large number of pulses. High noise robustness is also demonstrated when the delays are sufficiently separated. Finally, we process ultrasound imaging data using our techniques and show that substantial rate reduction with respect to traditional ultrasound sampling schemes can be achieved.
Recent work @cite_5 presented a low-complexity method for reconstructing streams of pulses (both infinite and finite cases) consisting of Diracs. However, the basic assumption of this method is that there is at most one Dirac per sampling period. This means that prior knowledge of a lower limit on the spacing between two consecutive deltas is required in order to guarantee correct reconstruction. In some cases such a limit may not exist; even if it does, it will usually force us to sample at a much higher rate than the critical one.
{ "abstract": [ "The problem of sampling signals that are not admissible within the classical Shannon framework has received much attention in the recent past. Typically, these signals have a parametric representation with a finite number of degrees of freedom per time unit. It was shown that, by choosing suitable sampling kernels, the parameters can be computed by employing high-resolution spectral estimation techniques. In this letter, we propose a simple acquisition and reconstruction method within the framework of multichannel sampling. In the proposed approach, an infinite stream of nonuniformly-spaced Dirac impulses can be sampled and accurately reconstructed provided that there is at most one Dirac impulse per sampling period. The reconstruction algorithm has a low computational complexity, and the parameters are computed on the fly. The processing delay is minimal just the sampling period. We propose sampling circuits using inexpensive passive devices such as resistors and capacitors. We also show how the approach can be extended to sample piecewise-constant signals with a minimal change in the system configuration. We provide some simulation results to confirm the theoretical findings." ], "cite_N": [ "@cite_5" ], "mid": [ "2108634960" ] }
Innovation Rate Sampling of Pulse Streams with Application to Ultrasound Imaging
Sampling is the process of representing a continuous-time signal by discrete-time coefficients, while retaining the important signal features. The well-known Shannon-Nyquist theorem states that the minimal sampling rate required for perfect reconstruction of bandlimited signals is twice the maximal frequency. This result has since been generalized to minimal rate sampling schemes for signals lying in arbitrary subspaces [1], [2]. Recently, there has been growing interest in sampling of signals consisting of a stream of short pulses, where the pulse shape is known. Such signals have a finite number of degrees of freedom per unit time, also known as the Finite Rate of Innovation (FRI) property [3]. This interest is motivated by applications such as digital processing of neuronal signals, bio-imaging, image processing and ultrawideband (UWB) communications, where such signals are present in abundance. Our work is motivated by the possible application of this model in ultrasound imaging, where echoes of the transmit pulse are reflected off scatterers within the tissue, and form a stream of pulses signal at the receiver. The time-delays and amplitudes of the echoes indicate the position and strength of the various scatterers, respectively. Therefore, determining these parameters from low rate samples of the received signal is an important problem. Reducing the rate allows more efficient processing which can translate to power and size reduction of the ultrasound imaging system. Our goal is to design a minimal rate single-channel sampling and reconstruction scheme for pulse streams that is stable even in the presence of many pulses. Since the set of FRI signals does not form a subspace, classic subspace schemes cannot be directly used to design low-rate sampling schemes. Mathematically, such FRI signals conform with a broader model of signals lying in a union of subspaces [4]- [9]. 
Although the minimal sampling rate required for such settings has been derived, no generic sampling scheme exists for the general problem. Nonetheless, some special cases have been treated in previous work, including streams of pulses. A stream of pulses can be viewed as a parametric signal, uniquely defined by the time-delays of the pulses and their amplitudes. Efficient sampling of periodic impulse streams, having L impulses in each period, was proposed in [3], [10]. The heart of the solution is to obtain a set of Fourier series coefficients, which then converts the problem of determining the time-delays and amplitudes to that of finding the frequencies and amplitudes of a sum of sinusoids. The latter is a standard problem in spectral analysis [11] which can be solved using conventional methods, such as the annihilating filter approach, as long as the number of samples is at least 2L. This result is intuitive since there are 2L degrees of freedom in each period: L time-delays and L amplitudes. Periodic streams of pulses are mathematically convenient to analyze, however not very practical. In contrast, finite streams of pulses are prevalent in applications such as ultrasound imaging. The first treatment of finite Dirac streams appears in [3], in which a Gaussian sampling kernel was proposed. The time-delays and amplitudes are then estimated from the Gaussian tails. This method and its improvement [12] are numerically unstable for high rates of innovation, since they rely on the Gaussian tails which take on small values. The work in [13] introduced a general family of polynomial and exponential reproducing kernels, which can be used to solve FRI problems. Specifically, B-spline and E-spline sampling kernels which satisfy the reproduction condition are proposed. This method treats streams of Diracs, differentiated Diracs, and short pulses with compact support. However, the proposed sampling filters result in poor reconstruction results for large L. 
To the best of our knowledge, a numerically stable sampling and reconstruction scheme for high order problems has not yet been reported. Infinite streams of pulses arise in applications such as UWB communications, where the communicated data changes frequently. Using spline filters [13], and under certain limitations on the signal, the infinite stream can be divided into a sequence of separate finite problems. The individual finite cases may be treated using methods for the finite setting, at the expense of above critical sampling rate, and suffer from the same instability issues. In addition, the constraints that are cast on the signal become more and more stringent as the number of pulses per unit time grows. In a recent work [14] the authors propose a sampling and reconstruction scheme for L = 1, however, our interest here is in high values of L. Another related work [7] proposes a semi-periodic model, where the pulse time-delays do not change from period to period, but the amplitudes vary. This is a hybrid case in which the number of degrees of freedom in the time-delays is finite, but there is an infinite number of degrees of freedom in the amplitudes. Therefore, the proposed recovery scheme generally requires an infinite number of samples. This differs from the periodic and finite cases we discuss in this paper which have a finite number of degrees of freedom and, consequently, require only a finite number of samples. In this paper we study sampling of signals consisting of a stream of pulses, covering the three different cases: periodic, finite and infinite streams of pulses. The criteria we consider for designing such systems are: a) Minimal sampling rate which allows perfect reconstruction, b) numerical stability (with sufficiently separated time delays), and c) minimal restrictions on the number of pulses per sampling period. We begin by treating periodic pulse streams. 
For this setting, we develop a general sampling scheme for arbitrary pulse shapes which allows to determine the times and amplitudes of the pulses, from a minimal number of samples. As we show, previous work [3] is a special case of our extended results. In contrast to the infinite time-support of the filters in [3], we develop a compactly supported class of filters which satisfy our mathematical condition. This class of filters consists of a sum of sinc functions in the frequency domain. We therefore refer to such functions as Sum of Sincs (SoS). To the best of our knowledge, this is the first class of finite support filters that solve the periodic case. As we discuss in detail in Section V, these filters are related to exponential reproducing kernels, introduced in [13]. The compact support of the SoS filters is the key to extending the periodic solution to the finite stream case. Generalizing the SoS class, we design a sampling and reconstruction scheme which perfectly reconstructs a finite stream of pulses from a minimal number of samples, as long as the pulse shape has compact support. Our reconstruction is numerically stable for both small values of L and large number of pulses, e.g., L = 100. In contrast, Gaussian sampling filters [3], [12] are unstable for L > 9, and we show in simulations that B-splines and E-splines [13] exhibit large estimation errors for L ≥ 5. In addition, we demonstrate substantial improvement in noise robustness even for low values of L. Our advantage stems from the fact that we propose compactly supported filters on the one hand, while staying within the regime of Fourier coefficients reconstruction on the other hand. Extending our results to the infinite setting, we consider an infinite stream consisting of pulse bursts, where each burst contains a large number of pulses. The stability of our method allows to reconstruct even a large number of closely spaced pulses, which cannot be treated using existing solutions [13]. 
In addition, the constraints cast on the structure of the signal are independent of L (the number of pulses in each burst), in contrast to previous work, and therefore similar sampling schemes may be used for different values of L. Finally, we show that our sampling scheme requires lower sampling rate for L ≥ 3. As an application, we demonstrate our sampling scheme on real ultrasound imaging data acquired by GE healthcare's ultrasound system. We obtain high accuracy estimation while reducing the number of samples by two orders of magnitude in comparison with current imaging techniques. The remainder of the paper is organized as follows. In Section II we present the periodic signal model, and derive a general sampling scheme. The SoS class is then developed and demonstrated via simulations. The extension to the finite case is presented in Section III, followed by simulations showing the advantages of our method in high order problems and noisy settings. In Section IV, we treat infinite streams of pulses. Section V explores the relationship of our work to previous methods. Finally, in Section VI, we demonstrate our algorithm on real ultrasound imaging data. II. PERIODIC STREAM OF PULSES A. Problem Formulation Throughout the paper we denote matrices and vectors by bold font, with lowercase letters corresponding to vectors and uppercase letters to matrices. The nth element of a vector a is written as a n , and A ij denotes the ijth element of a matrix A. Superscripts (·) * , (·) T and (·) H represent complex conjugation, transposition and conjugate transposition, respectively. The Moore-Penrose pseudo-inverse of a matrix A is written as A † . The continuous-time Fourier transform (CTFT) of a continuous-time signal x (t) ∈ L 2 is defined by X (ω) = ∞ −∞ x (t) e −jωt dt, and x (t) , y (t) = ∞ −∞ x * (t) y (t) dt,(1) denotes the inner product between two L 2 signals. 
Consider a τ -periodic stream of pulses, defined as x(t) = m∈Z L l=1 a l h(t − t l − mτ ),(2) where h(t) is a known pulse shape, τ is the known period, and {t l , a l } L l=1 , t l ∈ [0, τ ), a l ∈ C, l = 1 . . . L are the unknown delays and amplitudes. Our goal is to sample x(t) and reconstruct it, from a minimal number of samples. Since the signal has 2L degrees of freedom, we expect the minimal number of samples to be 2L. We are primarily interested in pulses which have small time-support. Direct uniform sampling of 2L samples of the signal will result in many zero samples, since the probability for the sample to hit a pulse is very low. Therefore, we must construct a more sophisticated sampling scheme. Define the periodic continuation of h(t) as f (t) = m∈Z h(t − mτ ). Using Poisson's summation formula [15], f (t) may be written as f (t) = 1 τ k∈Z H 2πk τ e j2πkt/τ ,(3) where H(ω) denotes the CTFT of the pulse h(t). Substituting (3) into (2) we obtain x(t) = L l=1 a l f (t − t l ) = k∈Z 1 τ H 2πk τ L l=1 a l e −j2πktl/τ e j2πkt/τ = k∈Z X[k]e j2πkt/τ ,(4) where we denoted X[k] = 1 τ H 2πk τ L l=1 a l e −j2πktl/τ . The expansion in (4) is the Fourier series representation of the τ -periodic signal x(t) with Fourier coefficients given by (5). Following [3], we now show that once 2L or more Fourier coefficients of x(t) are known, we may use conventional tools from spectral analysis to determine the unknowns {t l , a l } L l=1 . The method by which the Fourier coefficients are obtained will be presented in subsequent sections. Define a set K of M consecutive indices such that H 2πk τ = 0, ∀k ∈ K. We assume such a set exists, which is usually the case for short time-support pulses h(t). Denote by H the M × M diagonal matrix with kth entry 1 τ H 2πk τ , and by V(t) the M × L matrix with klth element e −j2πktl/τ , where t = {t 1 , . . . , t L } is the vector of the unknown delays. 
In addition denote by a the length-L vector whose lth element is a l , and by x the length-M vector whose kth element is X[k]. We may then write (5) in matrix form as x = HV(t)a.(6) Since H is invertible by construction we define y = H −1 x, which satisfies y = V(t)a.(7) The matrix V is a Vandermonde matrix and therefore has full column rank [11], [16] as long as M ≥ L and the time-delays are distinct, i.e., t i = t j for all i = j. Writing the expression for the kth element of the vector y in (7) explicitly: y k = L l=1 a l e −j2πktl/τ . Evidently, given the vector x, (7) is a standard problem of finding the frequencies and amplitudes of a sum of L complex exponentials (see [11] for a review of this topic). This problem may be solved as long as |K| = M ≥ 2L. The annihilating filter approach used extensively by Vetterli et al. [3], [10] is one way of recovering the frequencies, and is thoroughly described in the literature [3], [10], [11]. This method can solve the problem using the critical number of samples M = 2L, as opposed to other techniques such as MUSIC [17], [18] and ESPRIT [19] which require oversampling. Since we are interested in minimal-rate sampling, we use the annihilating filter throughout the paper. B. Obtaining The Fourier Series Coefficients As we have seen, given the vector of M ≥ 2L Fourier series coefficients x, we may use standard tools from spectral analysis to determine the set {t l , a l } L l=1 . In practice, however, the signal is sampled in the time domain, and therefore we do not have direct access to samples of x. Our goal is to design a single-channel sampling scheme which allows to determine x from time-domain samples. In contrast to previous work [3], [10] which focused on a low-pass sampling filter, in this section we derive a general condition on the sampling kernel allowing to obtain the vector x. 
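The recovery pipeline described above — form $\mathbf{x} = \mathbf{H}\mathbf{V}(\mathbf{t})\mathbf{a}$, invert H, then find the exponentials via the annihilating filter — can be sketched in numpy. This is a hedged sketch for the Dirac case $h(t) = \delta(t)$ (so $H(\omega) \equiv 1$) with index set $K = \{0, \ldots, 2L-1\}$; function names are mine:

```python
import numpy as np

def fourier_coeffs(t, a, tau, L):
    """X[k] of eq. (5) for h(t) = delta(t), k = 0..2L-1: x = H V(t) a."""
    k = np.arange(2 * L)
    V = np.exp(-2j * np.pi * np.outer(k, t) / tau)    # Vandermonde V(t)
    return (1.0 / tau) * V @ a                        # H = (1/tau) I for Diracs

def annihilating_filter_recovery(x, tau, L):
    """Recover {t_l, a_l} from 2L Fourier coefficients: the roots of the
    annihilating filter are u_l = exp(-j 2 pi t_l / tau)."""
    y = tau * np.asarray(x, dtype=complex)            # y = H^{-1} x, eq. (7)
    # Solve sum_{i=0}^{L} h_i y[k-i] = 0 for k = L..2L-1, with h_0 = 1:
    A = np.array([[y[k - i] for i in range(1, L + 1)] for k in range(L, 2 * L)])
    h = np.concatenate(([1.0], np.linalg.solve(A, -y[L:2 * L])))
    roots = np.roots(h)                               # u_l
    t = np.mod(-np.angle(roots) * tau / (2 * np.pi), tau)
    # amplitudes by least squares on the Vandermonde system y = V(t) a
    V = np.exp(-2j * np.pi * np.outer(np.arange(2 * L), t) / tau)
    a = np.linalg.lstsq(V, y, rcond=None)[0].real
    order = np.argsort(t)
    return t[order], a[order]
```

As stated in the text, $M = 2L$ coefficients suffice here, whereas subspace methods such as MUSIC or ESPRIT would require oversampling.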
For the sake of clarity we confine ourselves to uniform sampling, the results extend in a straightforward manner to nonuniform sampling as well. Consider sampling the signal x(t) uniformly with sampling kernel s * (−t) and sampling period T , as depicted in Fig. 1. The samples are given by s * (−t) x(t) c[n] t = nTc[n] = ∞ −∞ x(t)s * (t − nT )dt = s(t − nT ), x(t) .(9) Substituting (4) into (9) we have c[n] = k∈Z X[k] ∞ −∞ e j2πkt/τ s * (t − nT )dt = k∈Z X[k]e j2πknT /τ ∞ −∞ e j2πkt/τ s * (t)dt = k∈Z X[k]e j2πknT /τ S * (2πk/τ ),(10) where S(ω) is the CTFT of s(t). Choosing any filter s(t) which satisfies S(ω) =          0 ω = 2πk/τ, k / ∈ K nonzero ω = 2πk/τ, k ∈ K arbitrary otherwise,(11) we can rewrite (10) as c[n] = k∈K X[k]e j2πknT /τ S * (2πk/τ ).(12) In contrast to (10), the sum in (12) is finite. Note that (11) implies that any real filter meeting this condition will satisfy k ∈ K ⇒ −k ∈ K, and in addition S(2πk/τ ) = S * (−2πk/τ ), due to the conjugate symmetry of real filters. Defining the M × M diagonal matrix S whose kth entry is S * (2πk/τ ) for all k ∈ K, and the length-N vector c whose nth element is c[n], we may write (12) as c = V(−t s )Sx(13) where t s = {nT : n = 0 . . . N − 1}, and V is defined as in (6) with a different parameter −t s and dimensions N × M . The matrix S is invertible by construction. Since V is Vandermonde, it is left invertible as long as N ≥ M . Therefore, x = S −1 V † (−t s )c.(14) In the special case where N = M and T = τ /N , the recovery in (14) becomes: x = S −1 DFT{c},(15) i.e., the vector x is obtained by applying the Discrete Fourier Transform (DFT) on the sample vector, followed by a correction matrix related to the sampling filter. The idea behind this sampling scheme is that each sample is actually a linear combination of the elements of x. The sampling kernel s(t) is designed to pass the coefficients X[k], k ∈ K while suppressing all other coefficients X[k], k / ∈ K. 
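The linear step from time-domain samples to Fourier coefficients, $\mathbf{c} = \mathbf{V}(-\mathbf{t}_s)\mathbf{S}\mathbf{x}$ and $\mathbf{x} = \mathbf{S}^{-1}\mathbf{V}^\dagger(-\mathbf{t}_s)\mathbf{c}$ of (13)-(14), can be sketched as follows. A minimal numpy sketch under the uniform-sampling convention of the text; the function name and argument layout are mine:

```python
import numpy as np

def coeffs_from_samples(c, K, S_conj, tau, T):
    """Recover x via eq. (14): x = S^{-1} V^dagger(-t_s) c.

    c      : N time-domain samples c[n]
    K      : the index set (array of integers k)
    S_conj : S*(2 pi k / tau) for k in K (diagonal of the matrix S)
    """
    n = np.arange(len(c))
    # V(-t_s) has (n,k) element exp(+j 2 pi k nT / tau), see eq. (13)
    V = np.exp(2j * np.pi * np.outer(n * T, np.asarray(K)) / tau)
    return np.linalg.pinv(V) @ np.asarray(c, dtype=complex) / np.asarray(S_conj)
```

For $N = M$ and $T = \tau/N$ the pseudo-inverse reduces to a (scaled) DFT of the sample vector followed by the diagonal correction, as in (15).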
This is exactly what the condition in (11) means. This sampling scheme guarantees that each sample combination is linearly independent of the others. Therefore, the linear system of equations in (13) has full column rank which allows to solve for the vector x. We summarize this result in the following theorem. Theorem 1. Consider the τ -periodic stream of pulses of order L: x(t) = m∈Z L l=1 a l h(t − t l − mτ ). Choose a set K of consecutive indices for which H(2πk/τ ) = 0, ∀k ∈ K. Then the samples c[n] = s(t − nT ), x(t) , n = 0 . . . N − 1, uniquely determine the signal x(t) for any s(t) satisfying condition (11), as long as N ≥ |K| ≥ 2L. In order to extend Theorem 1 to nonuniform sampling, we only need to substitute the nonuniform sampling times in the vector t s in (14). Theorem 1 presents a general single channel sampling scheme. One special case of this framework is the one proposed by Vetterli et al. in [3] in which s * (−t) = B sinc(−Bt), where B = M/τ and N ≥ M ≥ 2L. In this case s(t) is an ideal low-pass filter of bandwidth B with S(ω) = 1 √ 2π rect ω 2πB .(16) Clearly, (16) satisfies the general condition in (11) with K = {−⌊M/2⌋, . . . , ⌊M/2⌋} and S 2πk τ = 1 √ 2π , ∀k ∈ K. Note that since this filter is real valued it must satisfy k ∈ K ⇒ −k ∈ K, i.e., the indices come in pairs except for k = 0. Since k = 0 is part of the set K, in this case the cardinality M = |K| must be odd valued so that N ≥ M ≥ 2L + 1 samples, rather than the minimal rate N ≥ 2L. The ideal low-pass filter is bandlimited, and therefore has infinite time-support, so that it cannot be extended to finite and infinite streams of pulses. In the next section we propose a class of non-bandlimited sampling kernels, which exploit the additional degrees of freedom in condition (11), and have compact support in the time domain. The compact support allows to extend this class to finite and infinite streams, as we show in Sections III and IV, respectively. C. 
Compactly Supported Sampling Kernels Consider the following SoS class, which consists of a sum of sincs in the frequency domain:
$$G(\omega) = \frac{\tau}{\sqrt{2\pi}}\sum_{k\in K}b_k\,\mathrm{sinc}\left(\frac{\omega}{2\pi/\tau} - k\right), \quad (17)$$
where $b_k \neq 0$, $k \in K$. The filter in (17) is real valued if and only if $k \in K \Rightarrow -k \in K$ and $b_k = b_{-k}^*$ for all $k \in K$. Since for each sinc in the sum
$$\mathrm{sinc}\left(\frac{\omega}{2\pi/\tau} - k\right) = \begin{cases}1 & \omega = 2\pi k'/\tau,\ k' = k\\ 0 & \omega = 2\pi k'/\tau,\ k' \neq k,\end{cases} \quad (18)$$
the filter G(ω) satisfies (11) by construction. Switching to the time domain,
$$g(t) = \mathrm{rect}\left(\frac{t}{\tau}\right)\sum_{k\in K}b_k e^{j2\pi kt/\tau}, \quad (19)$$
which is clearly a time-compact filter with support τ. The SoS class in (19) may be extended to
$$G(\omega) = \frac{\tau}{\sqrt{2\pi}}\sum_{k\in K}b_k\,\phi\left(\frac{\omega}{2\pi/\tau} - k\right), \quad (20)$$
where $b_k \neq 0$, $k \in K$, and φ(ω) is any function satisfying
$$\phi(\omega) = \begin{cases}1 & \omega = 0\\ 0 & |\omega| \in \mathbb{N}\\ \text{arbitrary} & \text{otherwise}.\end{cases} \quad (21)$$
This more general structure allows for smooth versions of the rect function, which is important when practically implementing analog filters. The function g(t) represents a class of filters determined by the parameters $\{b_k\}_{k\in K}$. These degrees of freedom offer a filter design tool where the free parameters $\{b_k\}_{k\in K}$ may be optimized for different goals, e.g., parameters which will result in a feasible analog filter. In Theorem 2 below, we show how to choose $\{b_k\}$ to minimize the mean-squared error (MSE) in the presence of noise. Determining the parameters $\{b_k\}_{k\in K}$ may also be viewed from a more empirical point of view. The impulse response of any analog filter having support τ may be written in terms of a windowed Fourier series as
$$\Phi(t) = \mathrm{rect}\left(\frac{t}{\tau}\right)\sum_{k\in\mathbb{Z}}\beta_k e^{j2\pi kt/\tau}. \quad (22)$$
Confining ourselves to filters which satisfy $\beta_k \neq 0$, $k \in K$, we may truncate the series and choose
$$b_k = \begin{cases}\beta_k & k \in K\\ 0 & k \notin K\end{cases} \quad (23)$$
as the parameters of g(t) in (19). With this choice, g(t) can be viewed as an approximation to Φ(t).
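The time-domain SoS kernel of (19) is straightforward to evaluate; a minimal numpy sketch (the function name is mine):

```python
import numpy as np

def sos_kernel(t, b, K, tau):
    """g(t) = rect(t/tau) * sum_{k in K} b_k exp(j 2 pi k t / tau), eq. (19)."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    inside = (np.abs(t) < tau / 2).astype(float)          # rect window
    return inside * (np.asarray(b) @ np.exp(2j * np.pi * np.outer(K, t) / tau))
```

With $K = \{-p, \ldots, p\}$ and all $b_k = 1$ this equals $\mathrm{rect}(t/\tau)\,D_p(2\pi t/\tau)$, so $g(0) = 2p+1$, and the kernel is real valued since $b_k = b_{-k}^*$.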
Notice that there is an inherent tradeoff here: using more coefficients will result in a better approximation of the analog filter, but in turn will require more samples, since the number of samples N must be greater than the cardinality of the set K. The reconstruction is exact to numerical precision. To demonstrate the filter g(t) we first choose K = {−p, . . . , p} and set all coefficients {b k } to one, resulting in g(t) = rect t τ p k=−p e j2πkt/τ = rect t τ D p (2πt/τ ),(24) where the Dirichlet kernel D p (t) is defined by D p (t) = p k=−p e jkt = sin p + 1 2 t sin(t/2) .(25) The resulting filter for p = 10 and τ = 1 sec, is depicted in Fig. 2. This filter is also optimal in an MSE sense for the case h(t) = δ(t), as we show in Theorem 2. In Fig. 3 we plot g(t) for the case in which the b k 's are chosen as a length-M symmetric Hamming window: b k = 0.54 − 0.46 cos 2π k + ⌊M/2⌋ M , k ∈ K.(26) Notice that in both cases the coefficients satisfy b k = b * −k , and therefore, the resulting filters are real valued. In the presence of noise, the choice of {b k } k∈K will effect the performance. Consider the case in which digital noise is added to the samples c, so that y = c + w, with w denoting a white Gaussian noise vector. Using (13), y = V(−t s )Bx + w(27) where B is a diagonal matrix, having {b k } on its diagonal. To choose the optimal B we assume that the {a l } are uncorrelated with variance σ 2 a , independent of {t l }, and that {t l } are uniformly distributed in [0, τ ). Since the noise is added to the samples after filtering, increasing the filter's amplification will always reduce the MSE. Therefore, the filter's energy must be normalized, and we do so by adding the constraint Tr(B * B) = 1. Under these assumptions, we have the following theorem: Theorem 2. 
The minimal MSE of a linear estimator of x from the noisy samples y in (27) is achieved by choosing the coefficients |b i | 2 =      σ 2 N N λσ 2 − 1 |hi| 2 λ ≤ |h i | 4 N/σ 2 0 λ > |h i | 4 N/σ 2 (28) whereh k = H(2πk/τ )σ a √ L/τ and are arranged in an increasing order of |h k |, √ λ = (|K| − m) N/σ 2 N/σ 2 + |K| i=m+1 1/|h i | 2 ,(29) and m is the smallest index for which λ ≤ |h m+1 | 4 N/σ 2 . Proof: See the Appendix. An important consequence of Theorem 2 is the following corollary. Corollary 1. If |h k | 2 = |h ℓ | 2 , ∀k, ℓ ∈ K then the optimal coefficients are |b i | 2 = 1/|K|, ∀k ∈ K. Proof: It is evident from (28) that if |h k | = |h ℓ | then |b k | = |b ℓ |. To satisfy the trace constraint Tr(B * B) = 1, λ cannot be chosen such that all b i = 0. Therefore, |b i | 2 = 1/|K| for all i ∈ K. From Corollary 1 it follows that when h(t) = δ(t), the optimal choice of coefficients is b k = b j for all k and j. We therefore use this choice when simulating noisy settings in the next section. Our sampling scheme for the periodic case consists of sampling kernels having compact support in the time domain. In the next section we exploit the compact support of our filter, and extend the results to the finite stream case. We will show that our sampling and reconstruction scheme offers a numerically stable solution, with high noise robustness. h(t) = 1 √ 2πσ 2 exp(−t 2 /2σ 2 ),(30) with parameter σ = 7 · 10 −3 , and period τ = 1. The time-delays and amplitudes were chosen randomly. In order to demonstrate near-critical sampling we choose the set of indices K = {−L, . . . , L} with cardinality M = |K| = 11. We filter x(t) with g(t) of (26). The filter output is sampled uniformly N times, with sampling period T = τ /N , where N = M = 11. The sampling process is depicted in Fig. 4. The vector x is obtained using (14), and the delays and amplitudes are determined by the annihilating filter method. Reconstruction results are depicted in Fig. 5. 
The estimation and reconstruction are both exact to numerical precision. Analog filtering operations are carried out by discrete approximations over a fine grid. The analog signal and filters are mimicked by high rate digital signals. Since the sampling rate which constructs the fine grid is between 2-3 orders of magnitude higher than the final sampling rate T , the simulations reflect very well the analog results. samples were taken, sampled uniformly with sampling period T = τ /N . We choose g(t) given by (24). As explained earlier, only the values of the filter at points 2πk/τ, k ∈ K affect the samples (see (11)). Since the values of the filter at the relevant points coincide and are equal to one for the low-pass filter [3] and g * (−t), the resulting samples for both settings are identical. Therefore, we present results for our method only, and state that the exact same results are obtained using the approach of [3]. In our setup white Gaussian noise (AWGN) with variance σ 2 n is added to the samples, where we define the SNR as: SNR = 1 N c 2 2 σ 2 n ,(31) with c denoting the clean samples. In our experiments the noise variance is set to give the desired SNR. The simulation consists of 1000 experiments for each SNR, where in each experiment a new noise vector is created. We choose t = τ · (1/3 2/3) T and a = τ · (1 1) T , where these vectors remain constant throughout the experiments. We define the error in time-delay estimation as the average of t −t 2 2 , where t andt denote the true and estimated time-delays, respectively, sorted in increasing order. The error in amplitudes is similarly defined by a −â 2 2 . In Fig. 6 we show the error as a function of SNR for both delay and amplitude estimation. Estimation of the time-delays is the main interest in FRI literature, due to special nonlinear methods required for delay recovery. 
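The SNR convention in (31) translates into a noise standard deviation as follows. A small sketch; treating `snr` as a linear (not dB) quantity is an assumption on my part:

```python
import numpy as np

def noise_std_for_snr(c, snr):
    """sigma_n such that SNR = (1/N) ||c||_2^2 / sigma_n^2, per eq. (31)."""
    c = np.asarray(c)
    return np.sqrt(np.sum(np.abs(c) ** 2) / (len(c) * snr))
```

The noisy samples are then `c + rng.normal(scale=noise_std_for_snr(c, snr), size=len(c))` for a generator `rng`, matching the AWGN setup of the experiments.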
Once the delays are known, the standard least-squares method is typically used to recover the amplitudes, therefore, we focus on delay estimation in the sequel. Finally, for the same setting we can improve reconstruction accuracy at the expense of oversampling, as illustrated in Fig. 7. Here we show recovery performance for oversampling factors of 1, 2, 4 and 8. The oversampling was exploited using the total least-squares method, followed by Cadzow's iterative denoising (both described in detail in [10]). III. FINITE STREAM OF PULSES A. Extension of SoS Class Consider now a finite stream of pulses, defined as x(t) = L l=1 a l h(t − t l ), t l ∈ [0, τ ), a l ∈ R, l = 1 . . . L,(32) where, as in Section II, h(t) is a known pulse shape, and {t l , a l } L l=1 are the unknown delays and amplitudes. The time-delays {t l } L l=1 are restricted to lie in a finite time interval [0, τ ). Since there are only 2L degrees of freedom, we wish to design a sampling and reconstruction method which perfectly reconstructsx(t) from 2L samples. In this section we assume that the pulse h(t) has finite support R, i.e., h(t) = 0, ∀|t| ≥ R/2.(33) This is a rather weak condition, since our primary interest is in very short pulses which have wide, or even infinite, frequency support, and therefore cannot be sampled efficiently using classical sampling results for bandlimited signals. We now investigate the structure of the samples taken in the periodic case, and design a sampling kernel for the finite setting which obtains precisely the same samples c[n], as in the periodic case. In the periodic setting, the resulting samples are given by (10). 
Using g(t) of (19) as the sampling kernel we have c[n] = g(t − nT ), x(t) = m∈Z L l=1 a l ∞ −∞ h(t − t l − mτ )g * (t − nT )dt = m∈Z L l=1 a l ∞ −∞ h(t)g * (t − (nT − t l − mτ )) dt = m∈Z L l=1 a l ϕ(nT − t l − mτ ),(34) where we defined ϕ(ϑ) = g(t − ϑ), h(t) .(35) Since g(t) in (19) vanishes for all |t| > τ /2 and h(t) satisfies (33), the support of ϕ(t) is (R + τ ), i.e., ϕ(t) = 0 for all |t| ≥ (R + τ )/2.(36) Using this property, the summation in (34) will be over nonzero values for indices m satisfying |nT − t l − mτ | < (R + τ )/2.(37) Sampling within the window [0, τ ), i.e., nT ∈ [0, τ ), and noting that the time-delays lie in the interval t l ∈ [0, τ ), l = 1 . . . L, (37) implies that (R + τ )/2 > |nT − t l − mτ | ≥ |m|τ − |nT − t l | > (|m| − 1)τ.(38) Here we used the triangle inequality and the fact that |nT − t l | < τ in our setting. Therefore, |m| < R/τ + 3 2 ⇒ |m| ≤ R/τ + 3 2 − 1 △ = r,(39) i.e., the elements of the sum in (34) vanish for all m but the values in (39). Consequently, the infinite sum in (34) reduces to a finite sum over m ≤ |r| so that (34) becomes c[n] = r m=−r L l=1 a l ϕ(nT − t l − mτ ) = r m=−r L l=1 a l ∞ −∞ h(t − t l )g * (t − nT + mτ )dt = r m=−r g(t − nT + mτ ), L l=1 a l h(t − t l ) ,(40) where in the last equality we used the linearity of the inner product. Defining a function which consists of (2r + 1) periods of g(t): g r (t) = r m=−r g(t + mτ ),(41) we conclude that c[n] = g r (t − nT ),x(t) .(42) Therefore, the samples c[n] can be obtained by filtering the aperiodic signalx(t) with the filter g * r (−t) prior to sampling. This filter has compact support equal to (2r + 1)τ . Since the finite setting samples (42) are identical to those of the periodic case (34), recovery of the delays and amplitudes is performed exactly the same as in the periodic setting. We summarize this result in the following theorem. Theorem 3. 
Consider the finite stream of pulses given by: x(t) = L l=1 a l h(t − t l ), t l ∈ [0, τ ), a l ∈ R, where h(t) has finite support R. Choose a set K of consecutive indices for which H(2πk/τ ) = 0, ∀k ∈ K. Then, N samples given by: c[n] = g r (t − nT ),x(t) , n = 0 . . . N − 1, nT ∈ [0, τ ), where r is defined in (39), and g r (t) is compactly supported and defined by (41) (based on the filter g(t) in (17)), uniquely determine the signalx(t) as long as N ≥ |K| ≥ 2L. If, for example, the support R of h(t) satisfies R ≤ τ then we obtain from (39) that r = 1. Therefore, the filter in this case would consist of 3 periods of g(t): g 3p (t) △ = g r (t) r=1 = g(t − τ ) + g(t) + g(t + τ ).(43) Practical implementation of the filter may be carried out using delay-lines. The relation of this scheme to previous approaches will be investigated in Section V. . Perfect reconstruction is achieved as can be seen in Fig. 8. The estimation is exact to numerical precision. 2) High Order Problems: The same simulation was carried out with L = 20 diracs. The results are shown in Fig. 9. Here again, the reconstruction is perfect even for large L. 3) Noisy Case: We now consider the performance of our method in the presence of noise. In addition, we compare our performance to the B-spline and E-spline methods proposed in [13], and to the Gaussian sampling kernel [3]. We examine 4 scenarios, in which the signal consists of L = 2, 3, 5, 20 diracs 1 . In our setup, the time-delays are equally distributed in the window [0, τ ), with τ = 1, and remain constant throughout the experiments. All amplitudes are set to one. 1 Due to computational complexity of calculating the time-domain expression for high order E-splines, the functions were simulated up to order 9, which allows for L = 5 pulses. samples. In other words, σ n in (31) is method-dependent, and is determined by the desired SNR and the samples of the specific technique. 
Hard thresholding was implemented in order to improve the spline methods, as suggested by the authors in [13]. The threshold was chosen as 3σ_n, where σ_n is the standard deviation of the AWGN. For the Gaussian sampling kernel, the parameter σ was optimized, taking the values σ = 0.25, 0.28, 0.32, 0.9, respectively. The results are given in Fig. 10. For L = 2 all methods are stable: E-splines exhibit better performance than B-splines, and the Gaussian and SoS approaches demonstrate the lowest errors. As the value of L grows, the advantage of the SoS filter becomes more prominent; for L ≥ 5, the performance of the Gaussian and both spline methods deteriorates, with errors approaching the order of τ. In contrast, the SoS filter retains its performance nearly unchanged even up to L = 20, where the B-spline and Gaussian methods are unstable. The improved version of the Gaussian approach presented in [12] would not perform better in this high order case, since it fails for L > 9, as noted by the authors. A comparison of our approach to previous methods will be detailed in Section V.

IV. INFINITE STREAM OF PULSES

We now consider the case of an infinite stream of pulses

z(t) = Σ_{l∈Z} a_l h(t − t_l),  t_l, a_l ∈ R. (44)

We assume that the infinite signal has a bursty character, i.e., the signal has two distinct phases: a) bursts of maximal duration τ containing at most L pulses, and b) quiet phases between bursts. For the sake of clarity we begin with the case h(t) = δ(t). For this choice the filter g_r*(−t) in (41) reduces to g_3p*(−t) of (43). Since the filter g_3p*(−t) has compact support 3τ, we are assured that the current burst cannot influence samples taken 3τ/2 seconds before or after it. In the finite case we confined ourselves to sampling within the interval [0, τ); similarly, here, we assume that the samples are taken during the burst duration.
Therefore, if the minimal spacing between any two consecutive bursts is 3τ/2, then we are guaranteed that each sample taken during a burst is influenced by that burst only, as depicted in Fig. 11. Consequently, the infinite problem can be reduced to a sequential solution of local, distinct finite-order problems, as in Section III. Here the compact support of our filter comes into play, allowing us to apply local reconstruction methods.

Fig. 11. Bursty signal z(t). Spacing of 3τ/2 between bursts ensures that the influence of the current burst ends before taking the samples of the next burst. This is due to the finite support, 3τ, of the sampling kernel g_3p*(−t).

In the above argument we assume we know the locations of the bursts, since we must acquire samples from within the burst duration. Samples outside the burst duration are contaminated by energy from adjacent bursts. Nonetheless, knowledge of burst locations is available in many applications, such as synchronized communication, where the receiver knows when to expect the bursts, or radar and imaging scenarios, where the transmitter is itself the receiver. We now state this result in a theorem.

Theorem 4. Consider a signal z(t) which is a stream of bursts consisting of delayed and weighted diracs. The maximal burst duration is τ, and the maximal number of pulses within each burst is L. Then, the samples given by

c[n] = ⟨g_3p(t − nT), z(t)⟩,  n ∈ Z,

where g_3p(t) is defined by (43), are a sufficient characterization of z(t) as long as the spacing between two adjacent bursts is greater than 3τ/2, and the burst locations are known.

Extending this result to a general pulse h(t) is quite straightforward, as long as h(t) is compactly supported with support R, and we filter with g_r*(−t) as defined in (41) with the appropriate r from (39).
If we can choose a set K of consecutive indices for which H(2πk/τ) ≠ 0, ∀k ∈ K, and we are guaranteed that the minimal spacing between two adjacent bursts is greater than ((2r + 1)τ + R)/2, then the above theorem holds.

V. COMPARISON TO PREVIOUS APPROACHES

A. Periodic Case

The work in [3] was the first to address efficient sampling of pulse streams, e.g., streams of diracs. Their approach for solving the periodic case was ideal lowpass filtering, followed by uniform sampling, which allowed them to obtain the Fourier series coefficients of the signal. These coefficients are then processed by the annihilating filter to obtain the unknown time delays and amplitudes. In Section II, we derived a general condition on the sampling kernel (11) under which recovery is guaranteed. The lowpass filter of [3] is a special case of this result. The noise robustness of both the lowpass approach and our more general method is high as long as the pulses are well separated, since reconstruction from Fourier series coefficients is stable in this case. Both approaches achieve the minimal number of samples. However, the lowpass filter is bandlimited and consequently has infinite time support, so this sampling scheme is unsuitable for finite and infinite streams of pulses. The SoS class introduced in Section II consists of compactly supported filters, which is crucial for extending our results to finite and infinite streams of pulses. A comparison between the two methods is shown in Table I.

B. Finite Pulse Stream

The authors of [3] proposed a Gaussian sampling kernel for sampling finite streams of diracs. The Gaussian method is numerically unstable, as mentioned in [12], since the samples are multiplied by a rapidly diverging or decaying exponent. Therefore, this approach is unsuitable for L ≥ 6. Modifications proposed in [12] exhibit better performance and stability; however, these methods require substantial oversampling, and still exhibit instability for L > 9.
In [13], the family of polynomial reproducing kernels was introduced as sampling filters for the model (32); B-splines were proposed as a specific example. The B-spline sampling filter enables obtaining moments of the signal, rather than Fourier coefficients. The moments are then processed with the same annihilating filter used in previous methods. However, as mentioned by the authors, this approach is unstable for high values of L. This is because, in contrast to the estimation of Fourier coefficients, estimating high order moments is unstable, since unstable weighting of the samples is carried out during the process. Another general family introduced in [13] for the finite model is the class of exponential reproducing kernels. As a specific case, the authors propose E-spline sampling kernels. The CTFT of an E-spline of order N + 1 is described by

β̂_α(ω) = Π_{n=0}^{N} (1 − e^{α_n − jω}) / (jω − α_n), (45)

where α = (α_0, α_1, …, α_N) are free parameters. In order to use E-splines as sampling kernels for pulse streams, the authors propose a specific structure on the α's: α_n = α_0 + nλ. Choosing exponents with a non-vanishing real part results in unstable weighting, as in the B-spline case. However, choosing the special case of purely imaginary exponents in the E-splines, already suggested by the authors, results in a reconstruction method based on Fourier coefficients which demonstrates an interesting relation to our method. The Fourier coefficients are obtained by applying a matrix consisting of the exponential spanning coefficients {c_{m,n}} (see [13]), instead of our Vandermonde matrix relation (14). With this specific choice of parameters, the E-spline function satisfies (11). Interestingly, with a proper choice of spanning coefficients, it can be shown that the SoS class can reproduce exponentials with frequencies {2πk/τ}_{k∈K}, and therefore satisfies the general exponential reproduction property of [13].
However, the SoS filter gives rise to a new sampling scheme which has substantial advantages over existing methods, including E-splines. The first advantage concerns the presence of noise, where both methods share the following structure:

y = Ax + w, (46)

where w is the noise vector. While the Fourier coefficients vector x is common to both approaches, the linear transformation A is method-dependent, and therefore the sample vector y is different. In our approach, with g(t) of (24), A is the DFT matrix, which for any order L has a condition number of 1. In the case of E-splines, however, the transformation matrix A consists of the E-spline exponential spanning coefficients, which has a much higher condition number, e.g., above 100 for L = 5. Consequently, some Fourier coefficients will suffer from much higher noise levels than others. Such high variance between the noise levels of the samples is known to degrade the performance of spectral analysis methods [11], the annihilating filter being one of them. This explains our simulations, which show that the SoS filter outperforms the E-spline approach in the presence of noise.

When the E-spline coefficients α are purely imaginary, it can easily be shown that (45) becomes a product of shifted sincs. This is in contrast to the SoS filter, which consists of a sum of sincs in the frequency domain. Since multiplication in the frequency domain translates to convolution in the time domain, it is clear that the support of the E-spline grows with its order, and in turn with the order of the problem L. In contrast, the support of the SoS filter remains unchanged. This observation becomes important when examining the infinite case. The constraint on the signal in [13] is that no more than L pulses lie in any interval of length LPT, with P the support of the filter and T the sampling period. Since P grows linearly with L, the constraint cast on the infinite stream becomes more stringent quadratically with L.
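The claim about the DFT matrix can be checked directly: any scalar multiple of a unitary matrix has all singular values equal, hence condition number 1. A quick numpy check (the size N = 16 is an arbitrary choice of ours):

```python
import numpy as np

N = 16
n = np.arange(N)
# Unnormalized DFT matrix: sqrt(N) times a unitary matrix,
# so all its singular values equal sqrt(N) and its condition number is 1.
F = np.exp(-2j * np.pi * np.outer(n, n) / N)
cond = np.linalg.cond(F)
```

Building the E-spline spanning-coefficient matrix the same way and inspecting np.linalg.cond would expose the much larger conditioning reported above.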
On the other hand, the constraint on the infinite stream using the SoS filter is independent of L. We showed in simulations that typically for L ≥ 5 the estimation errors, using both B-spline and E-spline sampling kernels, become very large. In contrast, our approach leads to stable reconstruction even for very high values of L, e.g., L = 100. In addition, even for low values of L, we showed in simulations that although the E-spline method improves on B-splines, the SoS reconstruction method outperforms both spline approaches. A comparison is described in Table II.

C. Infinite Streams

The work in [13] addressed the infinite stream case with h(t) = δ(t). They proposed filtering the signal with a polynomial reproducing sampling kernel prior to sampling. If the signal has at most L diracs within any interval of duration LPT, where P denotes the support of the sampling filter and T the sampling period, then the samples are a sufficient characterization of the signal. This condition allows the infinite stream to be divided into a sequence of finite-case problems. In our approach, the quiet phases of 1.5τ between the bursts of length τ enable the reduction to the finite case. Since the infinite solution is based on the finite one, our method is advantageous in terms of stability in high order problems and noise robustness. However, we do have the additional requirement of quiet phases between the bursts. Regarding the sampling rate, the number of degrees of freedom of the signal per unit time, also known as the rate of innovation, is ρ = 2L/(2.5τ), which is the critical sampling rate. Our sampling rate is 2L/τ, and therefore we oversample by a factor of 2.5. In the same scenario, the method in [13] would require a sampling rate of LP/(2.5τ), i.e., oversampling by a factor of P/2. Properties of polynomial reproducing kernels imply that P ≥ 2L; therefore, for any L ≥ 3, our method exhibits more efficient sampling.
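The rate comparison above is simple arithmetic; a small sketch (function name and interface are ours, for illustration) makes the scaling explicit:

```python
def oversampling_factors(L, P=None):
    # Oversampling factors relative to the rate of innovation rho = 2L/(2.5 tau),
    # for bursts of length tau separated by quiet phases of 1.5 tau.
    if P is None:
        P = 2 * L  # minimal support of a polynomial reproducing kernel: P >= 2L
    rho = 2 * L / 2.5                    # degrees of freedom per unit tau
    sos_factor = (2 * L) / rho           # SoS sampling rate 2L/tau -> always 2.5
    spline_factor = (L * P / 2.5) / rho  # spline rate LP/(2.5 tau) -> P/2
    return sos_factor, spline_factor
```

With the minimal P = 2L, the spline factor P/2 = L already exceeds 2.5 for every L ≥ 3, matching the claim in the text.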
A table comparing the various features is shown in Table III. Recent work [14] presented a low complexity method for reconstructing streams of pulses (both infinite and finite cases) consisting of diracs. However, the basic assumption of this method is that there is at most one dirac per sampling period. This means we must have prior knowledge of a lower limit on the spacing between two consecutive deltas in order to guarantee correct reconstruction. In some cases such a limit may not exist; even if it does, it will usually force us to sample at a much higher rate than the critical one.

VI. APPLICATION: ULTRASOUND IMAGING

An interesting application of our framework is ultrasound imaging. In ultrasonic imaging, an acoustic pulse is transmitted into the scanned tissue. The pulse is reflected due to changes in acoustic impedance, which occur, for example, at the boundaries between two different tissues. At the receiver, the echoes are recorded; the time of arrival and power of an echo indicate the scatterer's location and strength, respectively. Accurate estimation of tissue boundaries and scatterer locations allows for reliable detection of certain illnesses, and is therefore of major clinical importance. The location of the boundaries is often more important than the power of the reflection. The stream of pulses is finite since the pulse energy decays within the tissue. We now demonstrate our method on real 1-dimensional (1D) ultrasound data. The multiple-echo signal recorded at the receiver can be modeled as a finite stream of pulses, as in (32). The unknown time delays correspond to the locations of the various scatterers, whereas the amplitudes correspond to their reflection coefficients. The pulse shape in this case is the Gaussian defined in (30), due to the physical characteristics of the electro-acoustic transducer (mechanical damping).
We assume the received pulse shape is known, either by assuming it is unchanged through propagation, by physically modeling ultrasonic wave propagation, or by prior estimation of the received pulse. A full investigation of mismatch in the pulse shape is left for future research. In our setting, a phantom consisting of uniformly spaced pins, mimicking point scatterers, was scanned by GE Healthcare's Vivid-i portable ultrasound imaging system [20], [21], using a 3S-RS probe. We use the data recorded by a single element in the probe, which is modeled as a 1D stream of pulses. The center frequency of the probe is f_c = 1.7021 MHz. The width of the transmitted Gaussian pulse in this case is σ = 3 · 10⁻⁷ sec, and the depth of imaging is R_max = 0.16 m, corresponding to a time window of τ = 2.08 · 10⁻⁴ sec. In this experiment all filtering and sampling operations are carried out digitally in simulation. The analog filter required by the sampling scheme is replaced by a lengthy finite impulse response (FIR) filter. Since the sampling frequency of the element in the system is f_s = 20 MHz, more than 5 times higher than the Nyquist rate, the recorded data represents the continuous signal reliably. Consequently, digital filtering of the high-rate sampled data vector (4160 samples) followed by proper decimation mimics the original analog sampling scheme with high accuracy. The recorded signal is depicted in Fig. 12. The band-pass ultrasonic signal is demodulated to base-band, i.e., envelope detection is performed, before it enters the reconstruction process. We carried out our sampling and reconstruction scheme on this data, setting L = 4, i.e., looking for the strongest 4 echoes. Since the data is corrupted by strong noise, we oversampled the signal, obtaining twice the minimal number of samples. In addition, hard thresholding of the samples was implemented, with the threshold set to 10 percent of the maximal value.
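The digital emulation of the analog sampling described above amounts to FIR filtering, decimation, and hard thresholding. A minimal numpy sketch of this pipeline; the signal, the filter taps, and the decimation factor below are placeholders of ours, not the recorded data or the actual g_3p filter:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4160)   # placeholder for the 20 MHz recorded trace
fir = np.ones(33) / 33          # placeholder FIR approximating g_3p*(-t)
decim = 245                     # decimation: 4160 high-rate samples -> 17

# Digital filtering of the high-rate data followed by decimation
# mimics the analog filter-then-sample scheme.
filtered = np.convolve(x, fir, mode="same")
c = filtered[::decim]

# Hard thresholding: zero out samples below 10% of the maximal magnitude.
c_thr = np.where(np.abs(c) >= 0.1 * np.abs(c).max(), c, 0.0)
```

The surviving samples c_thr then enter the annihilating-filter recovery exactly as in the simulated experiments.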
We obtained N = 17 samples by decimating the output of the lengthy FIR digital filter imitating g_3p*(−t) from (43), where the coefficients {b_k} were all set to one. In Fig. 13a the reconstructed signal is depicted versus the full demodulated signal using all 4160 samples. Clearly, the time delays were estimated with high precision. The amplitudes were estimated as well; however, the amplitude of the second pulse has a large error, probably due to the large noise values present in its vicinity. As mentioned earlier, though, the exact locations of the scatterers are often more important than the accurate reflection coefficients. We carried out the same experiment, only now oversampling by a factor of 4, resulting in N = 33 samples. Here no hard thresholding is required. The results are depicted in Fig. 13b, and are very similar to our previous results. In both simulations, the estimation error in the pulse location is around 0.1 mm.

Current ultrasound imaging technology operates on the high-rate sampled data, e.g., f_s = 20 MHz in our setting. Since there are usually around 100 different elements in a single ultrasonic probe, each sampled at a very high rate, the data throughput becomes very high and imposes high computational complexity on the system, limiting its capabilities. Therefore, there is a demand for lowering the sampling rate, which in turn will reduce the complexity of reconstruction. Exploiting the parametric point of view, our sampling scheme addresses this demand by substantially reducing the number of samples required for reconstruction.

VII. CONCLUSIONS

We presented efficient sampling and reconstruction schemes for streams of pulses. For the case of a periodic stream of pulses, we derived a general condition on the sampling kernel which allows a single-channel uniform sampling scheme. Previous work [3] is a special case of this general result. We then proposed a class of filters, satisfying the condition, with compact support. Exploiting the compact support of the filters, we constructed a new sampling scheme for the case of a finite stream of pulses.
Simulations show that this method exhibits better performance than previous techniques [3], [13], in terms of stability in high order problems and noise robustness. An extension to an infinite stream of pulses was also presented. The compact support of the filter allows for local reconstruction, and thus lowers the complexity of the problem. Finally, we demonstrated the advantage of our approach in reducing the sampling and processing rate of ultrasound imaging, by applying our techniques to real ultrasound data.

APPENDIX: PROOF OF THEOREM 2

The MSE of the optimal linear estimator of the vector x from the measurement vector y is known to be [22]

MSE = Tr{R_xx} − Tr{R_xy R_yy^{−1} R_yx}. (47)

The covariance matrices in our case are

R_xy = R_xx B* V*, (48)
R_yy = V B R_xx B* V* + σ² I, (49)

where we used (27), and the fact that R_ww = σ² I since w is a white Gaussian noise vector. Under our assumptions on {t_l} and {a_l}, denoting h_k = H(2πk/τ) and using (5),

(R_xx)_{k,k′} = E{X[k] X*[k′]}
= (1/τ²) h_k h*_{k′} Σ_{l=1}^{L} Σ_{l′=1}^{L} E{a_l a*_{l′}} e^{−j(2π/τ)(k t_l − k′ t_{l′})}
= (σ_a²/τ²) h_k h*_{k′} Σ_{l=1}^{L} E{e^{−j(2π/τ)(k − k′) t_l}}
= (σ_a²/τ²) h_k h*_{k′} Σ_{l=1}^{L} ∫_0^τ (1/τ) e^{−j(2π/τ)(k − k′) t} dt
= (σ_a²/τ²) L |h_k|² δ_{k,k′}. (50)

Denoting by H̃ a diagonal matrix with kth element |h̃_k|² = |h_k|² σ_a² L/τ², we have

R_xx = H̃. (51)

Since the first term of (47) is independent of B, minimizing the MSE with respect to B is equivalent to maximizing the second term in (47). Substituting (48), (49) and (51) into this term, the optimal B is a solution to

max_B Tr{H̃ B* V* (V B H̃ B* V* + σ² I)^{−1} V B H̃}. (52)

Using the matrix inversion formula [23],

(V B H̃ B* V* + σ² I)^{−1} = (1/σ²) [I − V B (σ² H̃^{−1} + B* V* V B)^{−1} B* V*]. (53)

It is easy to verify from the definition of V in (13) that

(V* V)_{ik} = Σ_{l=0}^{N−1} e^{j(2π/N) l (k − i)} = N δ_{k,i}. (54)

Therefore, the objective in (52) equals

Tr{(N/σ²) H̃ B* [I − B ((σ²/N) H̃^{−1} + B* B)^{−1} B*] B H̃} = Σ_{i=1}^{|K|} |h̃_i|² (1 − (σ²/N) / (|b_i|² |h̃_i|² + σ²/N)), (55)

where we used the fact that B and H̃ are diagonal.
We can now find the optimal B by maximizing (55), which is equivalent to minimizing the negative term:

min_B Σ_{i=1}^{|K|} |h̃_i|² / (1 + |b_i|² |h̃_i|² N/σ²),  s.t. Σ_{i=1}^{|K|} |b_i|² = 1. (56)

Denoting β_i = |b_i|², (56) becomes a convex optimization problem:

min_{β_i} Σ_{i=1}^{|K|} |h̃_i|² / (1 + β_i |h̃_i|² N/σ²) (57)

subject to

β_i ≥ 0, (58)
Σ_{i=1}^{|K|} β_i = 1. (59)

To solve (57) subject to (58) and (59), we form the Lagrangian

L = Σ_{i=1}^{|K|} |h̃_i|² / (1 + β_i |h̃_i|² N/σ²) + λ (Σ_{i=1}^{|K|} β_i − 1) − Σ_{i=1}^{|K|} μ_i β_i, (60)

where, from the Karush-Kuhn-Tucker (KKT) conditions [24], μ_i ≥ 0 and μ_i β_i = 0. Differentiating (60) with respect to β_i and equating to 0,

|h̃_i|⁴ (N/σ²) / (1 + β_i |h̃_i|² N/σ²)² + μ_i = λ, (61)

so that λ > 0, since |h̃_i| > 0 by construction of H̃ (see Theorem 1). If λ > |h̃_i|⁴ N/σ², then μ_i > 0, and therefore β_i = 0 from the KKT conditions. If λ ≤ |h̃_i|⁴ N/σ², then from (61) μ_i = 0 and

β_i = (σ²/N) (√(N/(λσ²)) − 1/|h̃_i|²). (62)

The optimal β_i is therefore

β_i = (σ²/N) (√(N/(λσ²)) − 1/|h̃_i|²)  if λ ≤ |h̃_i|⁴ N/σ²,  and  β_i = 0  if λ > |h̃_i|⁴ N/σ², (63)

where λ > 0 is chosen to satisfy (59). Note that from (63), if β_i = 0 and j < i, then β_j = 0 as well, since the |h̃_i| are in increasing order. We now show that there is a unique λ that satisfies (59). Define the function

G(λ) = Σ_{i=1}^{|K|} β_i(λ) − 1, (64)

so that λ is a root of G(λ). Since the |h̃_i|'s are in increasing order, |h̃_{|K|}| = max_i |h̃_i|. It is clear from (63) that G(λ) is monotonically decreasing for 0 < λ ≤ |h̃_{|K|}|⁴ N/σ². In addition, G(λ) = −1 for λ > |h̃_{|K|}|⁴ N/σ², and G(λ) > 0 for λ → 0. Thus, there is a unique λ for which (59) is satisfied. Substituting (63) into (59), and denoting by m the smallest index for which λ ≤ |h̃_{m+1}|⁴ N/σ², we have

√λ = (|K| − m) √(N/σ²) / (N/σ² + Σ_{i=m+1}^{|K|} 1/|h̃_i|²), (65)

completing the proof of the theorem.
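As a numerical sanity check, the unique λ and the resulting β_i of (63) can be found by bisection on G(λ), exploiting its monotonicity. The function below and its test values are our own illustration, not part of the proof:

```python
import numpy as np

def optimal_beta(h_tilde, N, sigma2, iters=200):
    # Water-filling solution (63): beta_i = (sigma2/N)(sqrt(N/(lam*sigma2)) - 1/|h_i|^2)
    # for lam <= |h_i|^4 N/sigma2, and 0 otherwise; lam is chosen so sum(beta) = 1.
    h2 = np.sort(np.abs(h_tilde)) ** 2  # |h_i|^2 in increasing order

    def beta(lam):
        return np.maximum((sigma2 / N) * (np.sqrt(N / (lam * sigma2)) - 1.0 / h2), 0.0)

    # G(lam) = sum(beta(lam)) - 1 is decreasing: G > 0 near 0, G = -1 beyond hi.
    lo, hi = 1e-12, h2.max() ** 2 * N / sigma2
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if beta(mid).sum() > 1.0 else (lo, mid)
    return beta(0.5 * (lo + hi))

beta_opt = optimal_beta(np.array([0.5, 1.0, 2.0]), N=10, sigma2=1.0)
```

As expected from (62), larger |h̃_i| receive larger weights β_i, and indices with λ > |h̃_i|⁴N/σ² are switched off entirely.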
9,762
1001.4603
1848085800
Frameshift mutations in protein-coding DNA sequences produce a drastic change in the resulting protein sequence, which prevents classic protein alignment methods from revealing the proteins' common origin. Moreover, when a large number of substitutions are additionally involved in the divergence, the homology detection becomes difficult even at the DNA level. To cope with this situation, we propose a novel method to infer distant homology relations of two proteins, that accounts for frameshift and point mutations that may have affected the coding sequences. We design a dynamic programming alignment algorithm over memory-efficient graph representations of the complete set of putative DNA sequences of each protein, with the goal of determining the two putative DNA sequences which have the best scoring alignment under a powerful scoring system designed to reflect the most probable evolutionary process. This allows us to uncover evolutionary information that is not captured by traditional alignment methods, which is confirmed by biologically significant examples.
A non-statistical approach for analyzing the homology and the "genetic semihomology" in protein sequences was presented in @cite_4 @cite_5 . Instead of using a statistically computed scoring matrix, amino acid similarities are scored according to the complexity of the substitution process at the DNA level, depending on the number and type (transition/transversion) of nucleotide changes that are necessary for replacing one amino acid by the other. This ensures a differentiated treatment of amino acid substitutions at different positions of the protein sequence, thus avoiding possible rough approximations that result from scoring them equally, based on a classic scoring matrix. The main drawback of this approach is that it was not designed to cope with frameshift mutations.
{ "abstract": [ "The non-statistical, non-Markovian model of protein mutational variability is described. There are presented the essential features of the algorithm of genetic semihomology and some examples of its application. The results of genetic semihomology approach are compared with the results obtained by using statistical algorithms and matrices which are assumed in widely used programs such as ClustalW, FASTA, MultAlin and BLAST. The aim of the new algorithm elaboration is to improve the accuracy of the results of protein sequence comparison, avoid the wrong assumptions and misinterpretation of the results, and increase the amount of information available from such study. © 2000 Elsevier Science Ireland Ltd. All rights reserved.", "Abstract A new algorithm for analysis of the homology and genetic semihomology in protein sequence is described. It assumes the close relation between the compared amino acids and their codons in related proteins. The algorithm is based on the network of the genetic relationship between amino acids and, thus differs from the commonly used statistical matrices. The results obtained by using this method are more comprehensive than used at present, and reflect the actual mechanism of protein differentiation and evolution. They concern: (1) location of homologous and semihomologous sites in compared proteins; (2) precise estimation of insertion deletion gaps in non-homologous fragments; (3) analysis of internal homology and semihomology; (4) precise location of domains in multidomain proteins; (5) estimation of genetic code of non-homologous fragments; (6) construction of genetic probes; (7) studies on differentiation processes among related proteins; (8) estimation of the degree of relationship among related proteins; (9) studies on the evolution mechanism within homologous protein families and (10) confirmation of actual relationship of sequences showing low degree of homology." 
], "cite_N": [ "@cite_5", "@cite_4" ], "mid": [ "2107751960", "2045258636" ] }
Back-translation for discovering distant protein homologies
In protein-coding DNA sequences, frameshift mutations (insertions or deletions of one or more bases) can alter the translation reading frame, affecting all the amino acids encoded from that point forward. Thus, frameshifts produce a drastic change in the resulting protein sequence, preventing any similarity from being visible at the amino acid level. When the coding DNA sequence is relatively well conserved, the similarity remains detectable at the DNA level, by DNA sequence alignment, as reported in several papers, including [1][2][3][4]. However, the divergence often involves additional base substitutions. It has been shown [5][6][7] that, in coding DNA, there is a base compositional bias among codon positions, which no longer holds when the translation reading frame is changed. Hence, after a reading frame change, a coding sequence is likely to undergo base substitutions leading to a composition that complies with this bias. Amongst these substitutions, synonymous mutations (usually occurring at the third position of the codon) are more likely to be accepted by natural selection, since they are silent with respect to the gene's product. If, over a long evolutionary time, a large number of codons in one or both sequences are affected by these changes, the sequences may be altered to such an extent that the common origin becomes difficult to observe by direct DNA comparison. In this paper, we address the problem of finding distant protein homologies, in particular when the primary cause of the divergence is a frameshift. We achieve this by computing the best alignment of DNA sequences that encode the target proteins. This approach relies on the idea that synonymous mutations cause mismatches in the DNA alignments that can be avoided when all the sequences with the same translation are explored, instead of just the known coding DNA sequences. This allows the algorithm to search for an alignment by dealing only with non-synonymous mutations and gaps.
We designed and implemented an efficient method for aligning putative coding DNA sequences, which builds expressive alignments between hypothetical nucleotide sequences that can provide some information about the common ancestral sequence, if such a sequence exists. We perform the analysis on memory-efficient graph representations of the complete set of putative DNA sequences for each protein, described in Section 3.1. The proposed method, presented in Section 3.2, consists of a dynamic programming alignment algorithm that computes the two putative DNA sequences that have the best scoring alignment under an appropriate scoring system (Section 3.3) designed to reflect the actual evolution process from a codon-oriented perspective. While the idea of finding protein relations by frameshifted DNA alignments is not entirely new, as we show in a brief related-work overview in Section 2, Section 4, which presents tests performed on artificial data, demonstrates the efficiency of our scoring system for distant sequences. Furthermore, we validate our method on several pairs of sequences known to be encoded by overlapping genes, and on some published examples of frameshifts resulting in functional proteins. We briefly present these experiments in Section 5, along with a study of a protein family whose members present high dissimilarity on a certain interval. The paper is concluded in Section 6.

Our approach to distant protein relation discovery

The problem of inferring homologies between distantly related proteins, whose divergence is the result of frameshifts and point mutations, is approached in this paper by determining the best pairwise alignment between two DNA sequences that encode the proteins. Given two proteins P_A and P_B, the objective is to find a pair of DNA sequences, D_A and D_B, such that translation(D_A) = P_A and translation(D_B) = P_B, which produce the best pairwise alignment under a given scoring system.
The alignment algorithm (described in Section 3.2) incorporates a gap penalty that limits the number of frameshifts allowed in an alignment, to comply with the observed frequency of frameshifts in a coding sequence's evolution. The scoring system (Section 3.3) is based on possible mutational patterns of the sequences. This leads to reducing the false positive rate and focusing on alignments that are more likely to be biologically significant.

Data structures

An explicit enumeration and pairwise alignment of all the putative DNA sequences is not an option, since their number increases exponentially with the protein's length. Therefore, we represent the protein's "back-translation" (set of possible source DNAs) as a directed acyclic graph, whose size depends linearly on the length of the protein, and where a path represents one putative sequence. As illustrated in Figure 1(a), the graph is organized as a sequence of length 3n, where n is the length of the protein sequence. At each position i in the graph, there is a group of nodes, each representing a possible nucleotide that can appear at position i in at least one of the putative coding sequences. Two nodes at consecutive positions are linked by arcs if and only if they are either consecutive nucleotides of the same codon, or they are respectively the third and the first base of two consecutive codons. No other arcs exist in the graph. Note that in the implementation, the number of nodes is reduced by using the IUPAC nucleotide codes. If the amino acids composing a protein sequence are non-ambiguous, only 4 extra nucleotide symbols (R, Y, H and N) are necessary for their back-translation. In this condensed representation, the number of ramifications in the graph is substantially reduced, as illustrated by Figure 1. More precisely, the only amino acids with ramifications in their back-translation are R, L and S, each encoded by 6 codons with different prefixes.
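The condensed representation can be sketched by grouping each amino acid's codons by their two-base prefix and encoding the possible third bases with one IUPAC symbol. The codon table below is a partial one we supply for illustration, covering only the amino acids of the YSH example plus R:

```python
# Partial standard-genetic-code table (our illustration; Y, S, H, R only).
CODONS = {
    'Y': ['TAT', 'TAC'],
    'S': ['TCT', 'TCC', 'TCA', 'TCG', 'AGT', 'AGC'],
    'H': ['CAT', 'CAC'],
    'R': ['CGT', 'CGC', 'CGA', 'CGG', 'AGA', 'AGG'],
}

# IUPAC ambiguity codes for the base sets occurring in these back-translations.
IUPAC = {frozenset('T'): 'T', frozenset('C'): 'C', frozenset('A'): 'A',
         frozenset('G'): 'G', frozenset('TC'): 'Y', frozenset('AG'): 'R',
         frozenset('TCAG'): 'N'}

def branches(aa):
    """Condensed branches for one amino acid: one string per 2-base codon prefix."""
    by_prefix = {}
    for codon in CODONS[aa]:
        by_prefix.setdefault(codon[:2], set()).add(codon[2])
    return sorted(prefix + IUPAC[frozenset(thirds)]
                  for prefix, thirds in by_prefix.items())

def back_translate(protein):
    """One list of condensed branches per amino acid; only R, L, S ramify."""
    return [branches(aa) for aa in protein]

print(back_translate('YSH'))  # [['TAY'], ['AGY', 'TCN'], ['CAY']]
```

This reproduces the condensed graph of Figure 1(b): a single branch per amino acid except for S (and likewise R and L), which splits into two prefix branches.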
Alignment algorithm

We use a dynamic programming method, similar to the Smith-Waterman algorithm, extended to the data structures described in Section 3.1 and equipped with gap-related restrictions. Given the input graphs G_A and G_B obtained by back-translating proteins P_A and P_B, the algorithm finds the best scoring local alignment between two DNA sequences comprised in the back-translation graphs (illustrated in Figure 2). The alignment is built by filling each entry M[i, j, (α_A, α_B)] of a dynamic programming matrix M, where i and j are positions of the first and second graph respectively, and (α_A, α_B) is a pair of nodes that can be found in G_A at position i, and in G_B at position j, respectively. An example is given in Figure 3. The dynamic programming algorithm begins with a classic local alignment initialization (0 at the top and left borders), followed by the recursion step described in equation (1).

Fig. 1. Example of fully represented (a) and condensed (b) back-translation graph for the amino acid sequence YSH.

Fig. 2. Alignment example. A path (corresponding to a putative DNA sequence) was chosen from each graph so that the match/mismatch ratio is maximized.

The partial alignment score of each cell M[i, j, (α_A, α_B)] is computed as the maximum of 6 types of values:

(a) 0 (similarly to the classic Smith-Waterman algorithm, only non-negative scores are considered for local alignments).

(b) the substitution score of symbols (α_A, α_B), denoted score(α_A, α_B), added to the score of the best partial alignment ending in M[i − 1, j − 1], provided that the partially aligned paths contain α_A on position i and α_B on position
j respectively; this condition is ensured by restricting the entries of M [i − 1, j −1] to those labeled with symbols that precede α A and α B in the graphs. (c) the cost singleGapP enalty of a frameshift (gap of size 1 or extension of a gap of size 1) in the first sequence, added to the score of the best partial alignment that ends in a cell M [i, j − 1, (α A , β B )], provided that β B precedes α B in the second graph; this case is considered only if the number of allowed frameshifts on the current path is not exceeded, or a gap of size 1 is extended. (d) the cost of a frameshift in the second sequence, added to a partial alignment score defined as above. (e) the cost tripleGapP enalty of removing an entire codon from the first sequence, added to the score of the best partial alignment ending in a cell T A C G A G C T C T T G T C T T A T T G A G T T T C A T A C C T G T C G G G C T C C G T G C A T G T C T T T A G G G C G T G A T A C G T G C C T C C T T T C i j (α A , α B ) M [i, j] is a "cell" of MM [i, j − 3, (α A , β B ) ]. (f) the cost of removing an entire codon from the second sequence, added to the score of the best partial alignment ending in a cell M [i − 3, j, (β A , α B )] We adopted a non-monotonic gap penalty function, which favors insertions and deletions of full codons, and does not allow a large number of frameshifts -very rare events, usually eliminated by natural selection. As can be seen in equation (1), two particular kinds of gaps are considered: i) frameshifts -gaps of size 1 or 2, with high penalty, whose number in a local alignment can be limited, and ii) codon skips -gaps of size 3 which correspond to the insertion or deletion of a whole codon. 
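A minimal sketch of recursion (1), ignoring the paper's frameshift-count limit and using placeholder penalties, might look as follows. The `(nodes, arcs)` graph encoding and the helper `linear` (which builds the degenerate graph of a fixed DNA string, for testing) are illustrative assumptions, not the published implementation:

```python
# Smith-Waterman-style local alignment over two back-translation graphs, each
# given as (nodes, arcs): nodes[i] is the set of nucleotides at position i,
# and arcs contains (i, x, y) when node x at position i links to y at i + 1.

def linear(s):
    """Degenerate back-translation graph of a fixed DNA string (for testing)."""
    return [{c} for c in s], {(i, s[i], s[i + 1]) for i in range(len(s) - 1)}

def align(gA, gB, score=lambda a, b: 2 if a == b else -1,
          single_gap=-4, triple_gap=-2):
    nodesA, arcsA = gA
    nodesB, arcsB = gB

    def pred(nodes, arcs, i, a):
        # graph predecessors of node (i, a); a dummy entry at the left border
        return [b for b in nodes[i - 1] if (i - 1, b, a) in arcs] if i > 0 else [None]

    M, best = {}, 0
    for i in range(len(nodesA)):
        for j in range(len(nodesB)):
            for aA in nodesA[i]:
                for aB in nodesB[j]:
                    cands = [0]                                        # (a)
                    for bA in pred(nodesA, arcsA, i, aA):              # (b)
                        for bB in pred(nodesB, arcsB, j, aB):
                            cands.append(M.get((i - 1, j - 1, bA, bB), 0)
                                         + score(aA, aB))
                    for bB in pred(nodesB, arcsB, j, aB):              # (c)
                        cands.append(M.get((i, j - 1, aA, bB), 0) + single_gap)
                    for bA in pred(nodesA, arcsA, i, aA):              # (d)
                        cands.append(M.get((i - 1, j, bA, aB), 0) + single_gap)
                    if j >= 3:                                         # (e)
                        for bB in nodesB[j - 3]:
                            cands.append(M.get((i, j - 3, aA, bB), 0) + triple_gap)
                    if i >= 3:                                         # (f)
                        for bA in nodesA[i - 3]:
                            cands.append(M.get((i - 3, j, bA, aB), 0) + triple_gap)
                    M[i, j, aA, aB] = v = max(cands)
                    best = max(best, v)
    return best
```

On the degenerate graphs of two identical strings this recovers twice the match score per position, and cases (e)/(f) let a whole-codon insertion or deletion cost a single triple_gap instead of three frameshift penalties.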
M[i, j, (α_A, α_B)] = max of:   (1)
  (a) 0
  (b) M[i − 1, j − 1, (β_A, β_B)] + score(α_A, α_B),  β_k ∈ pred(α_k)
  (c) M[i, j − 1, (α_A, β_B)] + singleGapPenalty,  β_B ∈ pred(α_B)
  (d) M[i − 1, j, (β_A, α_B)] + singleGapPenalty,  β_A ∈ pred(α_A)
  (e) M[i, j − 3, (α_A, β_B)] + tripleGapPenalty,  j ≥ 3
  (f) M[i − 3, j, (β_A, α_B)] + tripleGapPenalty,  i ≥ 3

Translation-dependent scoring function

In this section, we present a new translation-dependent scoring system suitable for our alignment algorithm. The scoring scheme we designed incorporates information about possible mutational patterns for coding sequences, based on a codon substitution model, with the aim of filtering out alignments between sequences that are unlikely to have common origins. Mutation rates have been shown to vary within genomes, under the influence of several factors, including neighboring bases [15]. Consequently, a model where all base mismatches are equally penalized is oversimplified and ignores possibly precious information about the context of the substitution. With the aim of retracing the sequence's evolution and revealing which base substitutions are more likely to occur within a given codon, our scoring system targets triplets (α, p, a), where α is a nucleotide, p is its position in the codon, and a is the amino acid encoded by that codon, thus differentiating the various contexts of a substitution. There are 99 valid triplets out of the total of 240 hypothetical combinations. Pairwise alignment scores are computed for all possible pairs of valid triplets (t_1, t_2) = ((α_1, p_1, a_1), (α_2, p_2, a_2)) as a classic log-odds ratio:

score(t_1, t_2) = λ log(f_{t_1 t_2} / b_{t_1 t_2})   (2)

where f_{t_1 t_2} is the frequency of the t_1 ↔ t_2 substitution in related sequences, and b_{t_1 t_2} = p(t_1) p(t_2) is the background probability.
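The count of 99 valid triplets can be verified by enumerating, over the standard genetic code, the (nucleotide, position, amino acid) combinations realized by at least one codon, with stop codons excluded. This check is ours, not part of the paper:

```python
# Enumerate valid (nucleotide, codon position, amino acid) triplets using the
# standard genetic code (NCBI translation table 1, bases in TCAG order).
BASES = 'TCAG'
AMINO = 'FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG'

CODON_TABLE = {}
i = 0
for b1 in BASES:
    for b2 in BASES:
        for b3 in BASES:
            CODON_TABLE[b1 + b2 + b3] = AMINO[i]
            i += 1

# a triplet (alpha, p, a) is valid if some codon of amino acid a has base
# alpha at position p (1-based); stop codons ('*') are excluded
valid = {(c[p], p + 1, aa)
         for c, aa in CODON_TABLE.items() if aa != '*'
         for p in range(3)}
```

The hypothetical total of 240 is simply 4 nucleotides x 3 positions x 20 amino acids; the enumeration confirms that only 99 of these combinations actually occur.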
In order to obtain the foreground probabilities f_{t_i t_j}, we consider the following scenario: two proteins are encoded on the same DNA sequence, on different reading frames; at some point, the sequence was duplicated and the two copies diverged independently; we assume that the two coding sequences undergo, in their independent evolution, synonymous and non-synonymous point mutations, or full codon insertions and removals. The insignificant amount of available real data that fits our hypothesis does not allow a classical, statistical computation of the foreground and background probabilities. Therefore, instead of doing statistics on real data directly, we rely on codon frequency tables and codon substitution models. We assume that codon substitutions in our scenario can be modeled by the Markov model presented in [16], which specifies the relative instantaneous substitution rate from codon i to codon j, for all i ≠ j, as:

Q_ij =
  0       if i or j is a stop codon, or if i → j requires more than 1 nucleotide substitution,
  π_j     if i → j is a synonymous transversion,
  π_j κ   if i → j is a synonymous transition,
  π_j ω   if i → j is a nonsynonymous transversion,
  π_j κω  if i → j is a nonsynonymous transition.

Here, the parameter ω represents the nonsynonymous/synonymous rate ratio, κ the transition/transversion rate ratio, and π_j the equilibrium frequency of codon j. As in all Markov models of sequence evolution, absolute rates are found by normalizing the relative rates to a mean rate of 1 at equilibrium, that is, by enforcing Σ_i Σ_{j≠i} π_i Q_ij = 1, and completing the instantaneous rate matrix Q by defining Q_ii = −Σ_{j≠i} Q_ij, giving a form in which the transition probability matrix is calculated as P(θ) = e^{θQ} [18]. Evolutionary times θ are measured in expected number of nucleotide substitutions per codon. With this codon substitution model, f_{t_i t_j} can be deduced in several steps.
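A toy instantiation of this rate matrix, restricted for illustration to a 4-codon universe with made-up values of κ, ω and π, might look like this; the truncated-series `expm` stands in for a proper matrix exponential:

```python
# Toy codon rate matrix Q following the definition above, then P(theta) = e^{theta Q}.
import numpy as np

CODON_SET = ['TTT', 'TTC', 'TTA', 'TTG']
AA = {'TTT': 'F', 'TTC': 'F', 'TTA': 'L', 'TTG': 'L'}
PI = {c: 0.25 for c in CODON_SET}       # made-up equilibrium frequencies
KAPPA, OMEGA = 2.0, 0.5                 # made-up rate ratios
TRANSITIONS = {frozenset('TC'), frozenset('AG')}

def relative_rate(ci, cj):
    diff = [(a, b) for a, b in zip(ci, cj) if a != b]
    if len(diff) != 1:
        return 0.0              # identical, or more than one substitution away
    rate = PI[cj]
    if frozenset(diff[0]) in TRANSITIONS:
        rate *= KAPPA           # transition
    if AA[ci] != AA[cj]:
        rate *= OMEGA           # nonsynonymous
    return rate

pi = np.array([PI[c] for c in CODON_SET])
Q = np.array([[relative_rate(ci, cj) for cj in CODON_SET] for ci in CODON_SET])
Q /= (pi[:, None] * Q).sum()            # mean rate 1 at equilibrium
np.fill_diagonal(Q, -Q.sum(axis=1))     # rows of a generator sum to 0

def expm(A, terms=40):
    # truncated Taylor series of the matrix exponential (fine for small A)
    P, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        P += term
    return P

P = expm(0.5 * Q)   # transition probabilities at theta = 0.5 subst. per codon
```

Because Q is a proper generator after normalization, each row of P(θ) is a probability distribution over target codons.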
Basically, we first need to identify all pairs of codons with a common subsequence that have a perfect semi-global alignment (for instance, codons CAT and ATG satisfy this condition, having the common subsequence AT; this example is further explained below). We then assume that the codons from each pair undergo independent evolution, according to the codon substitution model. For the resulting codons, we compute, based on all possible original codon pairs, p((α_i, p_i, c_i), (α_j, p_j, c_j)): the probability that nucleotide α_i, situated at position p_i of codon c_i, and nucleotide α_j, situated at position p_j of codon c_j, have a common origin (equation (5)). From these, we can immediately compute, as shown by equation (6), p((α_i, p_i, a_i), (α_j, p_j, a_j)), corresponding in fact to the foreground probabilities f_{t_i t_j}, where t_i = (α_i, p_i, a_i) and t_j = (α_j, p_j, a_j). In the following, p(c_1 →_θ c_2) stands for the probability of the event "codon c_1 mutates into codon c_2 in the evolutionary time θ", and is given by P_{c_1,c_2}(θ). c_1[interval_1] ≡ c_2[interval_2] states that codon c_1 restricted to the positions given by interval_1 is a sequence identical to c_2 restricted to interval_2. This is equivalent to having a word w obtained by "merging" the two codons. For instance, if c_1 = CAT and c_2 = ATG, with their common substring being placed in interval_1 = [2..3] and interval_2 = [1..2] respectively, w is CATG. Finally, p(c_1[interval_1] ≡ c_2[interval_2]) is the probability to have c_1 and c_2 in the relation described above, which we compute as the probability of the word w obtained by "merging" the two codons. This function should be symmetric, it should depend on the codon distribution, and the probabilities of all the words w of a given length should sum to 1.
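The enumeration of overlapping codon placements (the "merging" step above) can be sketched as follows; the function name `merges` and the offset convention are illustrative assumptions:

```python
# For two codons, enumerate the merged words obtained by sliding c2 against c1
# at every nonzero offset and keeping placements where the overlapping bases
# agree (a perfect semi-global alignment of the two codons).

def merges(c1, c2):
    out = []
    for off in (-2, -1, 1, 2):          # offset of c2 relative to c1
        start, end = min(0, off), max(3, off + 3)
        w, ok = [], True
        for p in range(start, end):
            a = c1[p] if 0 <= p < 3 else None
            b = c2[p - off] if 0 <= p - off < 3 else None
            if a and b and a != b:      # overlapping bases disagree
                ok = False
                break
            w.append(a or b)
        if ok:
            out.append(''.join(w))
    return out
```

For the paper's example, sliding ATG one position to the right of CAT makes the overlap AT agree, yielding the merged word CATG; codon pairs with no agreeing overlap produce no merged word at all.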
However, since we consider the case where the same DNA sequence is translated on two different reading frames, one of the two translated sequences will have an atypical composition. Consequently, the probability of a word w is computed as if the sequence had the known codon composition when translated on the reading frame imposed by the first codon, or on the one imposed by the second. This hypothesis can be formalized as:

p(w) = p(w on rf1 OR w on rf2) = p_rf1(w) + p_rf2(w) − p_rf1(w) · p_rf2(w)   (4)

where p_rf1(w) and p_rf2(w) are the probabilities of the word w in the reading frame imposed by the position of the first and second codon, respectively. These are computed as the products of the probabilities of the codons and codon pieces that compose the word w in the established reading frame. In the previous example, the probabilities of w = CATG in the first and second reading frame are:

p_rf1(CATG) = p(CAT) · p(G**) = p(CAT) · Σ_{c : c starts with G} p(c)
p_rf2(CATG) = p(**C) · p(ATG) = Σ_{c : c ends with C} p(c) · p(ATG)

The values of p((α_i, p_i, c_i), (α_j, p_j, c_j)) are computed as:

Σ_{c′_i, c′_j : c′_i[interval_i] ≡ c′_j[interval_j], p_i ∈ interval_i, p_j ∈ interval_j}  p(c′_i[interval_i] ≡ c′_j[interval_j]) · p(c′_i →_θ c_i) · p(c′_j →_θ c_j)   (5)

from which obtaining the foreground probabilities is straightforward:

f_{t_i t_j} = p((α_i, p_i, a_i), (α_j, p_j, a_j)) = Σ_{c_i encodes a_i, c_j encodes a_j} p((α_i, p_i, c_i), (α_j, p_j, c_j))   (6)

The background probabilities of (t_i, t_j), b_{t_i t_j}, can simply be expressed as the probability of the two symbols appearing independently in the sequences:

b_{t_i t_j} = b_{(α_i, p_i, a_i), (α_j, p_j, a_j)} = Σ_{c_i encodes a_i, c_j encodes a_j} π_{c_i} π_{c_j}   (7)

Substitution matrix for ambiguous symbols

From matrices built as explained above, the versions that use IUPAC ambiguity codes for nucleotides (as proposed in the final paragraph of Section 3.1) can be computed: the score of pairing two ambiguous symbols is the maximum
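The frame-dependent word probabilities of equation (4) can be sketched as follows, assuming a toy uniform codon frequency table in place of a real one; `lead` denotes how many bases of the word complete an incomplete leading codon in the chosen frame:

```python
# Word probabilities in a fixed reading frame, with partial codons at the
# edges marginalized over the unseen bases (equation (4)).
from itertools import product

FREQ = {''.join(p): 1 / 64 for p in product('ACGT', repeat=3)}  # toy table

def partial_prefix(s):   # probability that a codon starts with s
    return sum(f for c, f in FREQ.items() if c.startswith(s))

def partial_suffix(s):   # probability that a codon ends with s
    return sum(f for c, f in FREQ.items() if c.endswith(s))

def p_rf(w, lead):
    # probability of word w in the frame where its first `lead` bases end an
    # incomplete codon, followed by full codons, then a partial trailing codon
    p = partial_suffix(w[:lead]) if lead else 1.0
    i = lead
    while i + 3 <= len(w):
        p *= FREQ[w[i:i + 3]]
        i += 3
    if i < len(w):
        p *= partial_prefix(w[i:])
    return p

def p_word(w, lead1, lead2):
    # inclusion-exclusion over the two reading frames, as in equation (4)
    p1, p2 = p_rf(w, lead1), p_rf(w, lead2)
    return p1 + p2 - p1 * p2
```

With the uniform table, p_rf("CATG", 0) multiplies p(CAT) by the marginal probability of codons starting with G, exactly mirroring the worked p_rf1(CATG) example above.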
over all substitution scores for all pairs of nucleotides from the respective sets.

Score evaluation

The score significance is estimated according to the Gumbel distribution, where the parameters λ and K are computed with the method described in [19,20]. Since the forward alignment and the reverse complementary alignment are two independent cases with different score distributions, two parameter pairs, (λ_fw, K_fw) and (λ_rc, K_rc), are computed and used in practice.

To validate the translation-dependent scoring system designed in the previous section, we tested it on an artificial data set consisting of 96 pairs of protein sequences of average length 300. Each pair was obtained by translating a randomly generated DNA sequence on two different reading frames. Both sequences in each pair were then mutated independently, according to codon mutation probability matrices corresponding to each of the evolutionary times 0.01, 0.1, 0.3, 0.5, 0.7, 1.0, 1.5 and 2.0 (measured in average number of mutations per codon). To this data set we applied four variants of alignment algorithms: i) classic alignment of DNA sequences using classic base substitution scores and affine gap penalties; ii) classic alignment of DNA sequences using the translation-dependent scoring scheme designed in Section 3.3; iii) alignment of back-translation graphs (Section 3.2) using classic base substitution scores and affine gap penalties; iv) alignment of back-translation graphs using the translation-dependent scoring scheme. For the tests involving translation-dependent scores, we used scoring functions corresponding to evolutionary times from 0.30 to 1.00. Table 1 briefly shows the e-values of the scores obtained with each setup when aligning sequence pairs at various evolutionary distances.
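Once λ and K are available, the Gumbel-based significance estimate reduces to the standard Karlin-Altschul formula; a minimal sketch, with placeholder parameter values rather than the ones produced by the method of [19,20]:

```python
# Karlin-Altschul e-value of a local alignment score S for sequences of
# lengths m and n: E = K * m * n * exp(-lambda * S). Parameter values here
# are illustrative placeholders.
import math

def evalue(score, m, n, K, lam):
    return K * m * n * math.exp(-lam * score)

def pvalue(score, m, n, K, lam):
    # probability of at least one alignment scoring >= score by chance
    return 1.0 - math.exp(-evalue(score, m, n, K, lam))
```

In practice one such pair (λ, K) would be used for forward alignments and another for reverse-complementary ones, as described above.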
While all variants perform well for highly similar sequences, we can clearly see the ability of the translation-dependent scores to help the algorithm build significant alignments between sequences that underwent important changes. The resulting alignments reveal that, even after many mutations, the translation-dependent scores manage to recover large parts of the original shared sequence, by correctly aligning most positions. On the other hand, with classic match/mismatch scores, the algorithm usually fails to find these common zones. Moreover, due to the large number of mismatches, the alignment has a low score, comparable to scores that can be obtained for randomly chosen sequences. This makes it difficult to establish whether the alignment is biologically meaningful or was obtained by chance. This issue is solved by the translation-dependent scores through uneven substitution penalties, set according to the codon mutation models. We conclude that the usage of translation-dependent scores makes the algorithm more robust: able to detect common origins even after the sequences underwent many modifications, and also able to filter out alignments where the nucleotide pairs match by pure chance and not due to evolutionary relations.

Experimental results

Tests on known overlapping and frameshifted genes

We tested the method on pairs of proteins known to be encoded by overlapping genes in viral genomes (phage ΦX174 and Influenza A) and in E. coli plasmids, as well as on the newly identified overlapping genes yaaW and htgA from E. coli K12 [21]. In all cases, we obtained perfect identification of gene overlaps with simple substitution scores and with translation-dependent scoring matrices corresponding to low evolutionary distances (at most 1 mutation per codon). Translation-dependent scoring matrices of higher evolutionary distances favor, in some (rare) cases, substitutions instead of matches within the alignment.
This is a natural consequence of increasing the codons' chance to mutate, and it illustrates the importance of choosing a score matrix corresponding to the real evolutionary distance. Our method was also able to detect, directly on the protein sequences, the frameshifts resulting in functional proteins reported in [1][2][3][4].

New divergence scenarios for orthologous proteins

In this section we discuss the application of our method to the FMR1NB (Fragile X mental retardation 1 neighbor protein) family. The Ensembl database [22] provides 23 members of this family, from mammalian species including human, mouse, dog and cow. Their multiple alignment, provided by Ensembl, shows high dissimilarity on the first part (approximately 100 amino acids) and good conservation on the rest of the sequence. We applied our alignment algorithm to proteins from several organisms for which the complete sequence is available. We performed our experiments with translation-dependent scoring matrices corresponding to 0.3, 0.5 and 0.7 mutations per codon. Given that, in our scenario (presented in Section 3.3), the divergence applies to two reading frames, this implies an overall mutation rate of 0.6, 1.0 and 1.4 mutations per codon respectively. Thus, the mutation rate per base reflected by our scores is less than 0.5, which is approximately the nucleotide substitution rate for mouse relative to human [23]. The number of allowed frameshifts was limited to 3. The gap penalties were set in all cases to −20 for codon indels, −20 for size 1 gaps, and −5 for the extension of size 1 gaps (size 1 and size 2 gaps correspond to frameshifts). These choices were made so that the penalty for codon indels is higher than the average penalty for 3 substitutions.

Fig. 4. Human and mouse FMR1NB proteins, aligned using a translation-dependent matrix of evolutionary distance 0.7 (the sign of each substitution score appears on the fourth row).
The size 4 gap corresponds to a frameshift that corrects the reading frame. Figure 4 presents a fragment of the alignment obtained on the FMR1NB proteins of human (gene ID ENSG00000176988) and mouse (gene ID ENSMUSG00000062170). The algorithm finds a frameshift near the 100th amino acid, managing to align the initial part of the proteins at the DNA level. Similar frameshifted alignments are obtained for human vs. cow and human vs. dog, while alignments between proteins of primates do not contain frameshifts. The consistency of the frameshift position in these alignments supports the evidence of a frameshift event that might have occurred in the primate lineage. If confirmed, this frameshift would have modified the first topological domain and the first transmembrane domain of the product protein. Interestingly, the FMR1NB gene occurs near the Fragile X mental retardation 1 gene (FMR1), involved in the corresponding genetic disease [24].

Conclusions

In this paper, we addressed the problem of finding distant protein homologies, in particular those affected by frameshift events, from a codon evolution perspective. We search for common protein origins by implicitly aligning all their putative coding DNA sequences, stored in efficient data structures called back-translation graphs. Our approach relies on a dynamic programming alignment algorithm for these graphs, which involves a non-monotonic gap penalty that handles frameshifts and full codon indels differently. We designed a powerful translation-dependent scoring function for nucleotide pairs, based on codon substitution models, whose purpose is to reflect the expected dynamics of coding DNA sequences. The method was shown to perform better than classic alignment on artificial data, obtained by independently mutating, according to a codon substitution model, coding sequences translated with a frameshift.
Moreover, it successfully detected published frameshift mutation cases resulting in functional proteins. We then described an experiment involving homologous mammalian proteins that show little conservation at the amino acid level over a large region, and provided possible frameshifted alignments obtained with our method that may explain the divergence. As illustrated by this example, the proposed method should make it possible to better explain a high divergence between homologous proteins and help establish new homology relations between genes of unknown origin. An implementation of our method is available at http://bioinfo.lifl.fr/path/.
4,526
1001.4603
1848085800
Frameshift mutations in protein-coding DNA sequences produce a drastic change in the resulting protein sequence, which prevents classic protein alignment methods from revealing the proteins' common origin. Moreover, when a large number of substitutions are additionally involved in the divergence, the homology detection becomes difficult even at the DNA level. To cope with this situation, we propose a novel method to infer distant homology relations of two proteins, that accounts for frameshift and point mutations that may have affected the coding sequences. We design a dynamic programming alignment algorithm over memory-efficient graph representations of the complete set of putative DNA sequences of each protein, with the goal of determining the two putative DNA sequences which have the best scoring alignment under a powerful scoring system designed to reflect the most probable evolutionary process. This allows us to uncover evolutionary information that is not captured by traditional alignment methods, which is confirmed by biologically significant examples.
Regarding frameshift mutation discovery, many studies @cite_23 @cite_16 @cite_10 @cite_17 preferred the plain BLAST @cite_22 @cite_8 alignment approach: BLASTN on DNA and mRNA, or BLASTX on mRNA and proteins, applicable only when the DNA sequences are sufficiently similar. BLASTX programs, although capable of insightful results thanks to the six-frame translations, have the limitation of not being able to transparently handle frameshifts that occur inside the sequence, for example by reconstructing an alignment from pieces obtained on different reading frames.
{ "abstract": [ "A new approach to rapid sequence comparison, basic local alignment search tool (BLAST), directly approximates alignments that optimize a measure of local similarity, the maximal segment pair (MSP) score. Recent mathematical results on the stochastic properties of MSP scores allow an analysis of the performance of this method as well as the statistical significance of alignments it generates. The basic algorithm is simple and robust; it can be implemented in a number of ways and applied in a variety of contexts including straight-forward DNA and protein sequence database searches, motif searches, gene identification searches, and in the analysis of multiple regions of similarity in long DNA sequences. In addition to its flexibility and tractability to mathematical analysis, BLAST is an order of magnitude faster than existing sequence comparison tools of comparable sensitivity.", "The BLAST programs are widely used tools for searching protein and DNA databases for sequence similarities. For protein comparisons, a variety of definitional, algorithmic and statistical refinements described here permits the execution time of the BLAST programs to be decreased substantially while enhancing their sensitivity to weak similarities. A new criterion for triggering the extension of word hits, combined with a new heuristic for generating gapped alignments, yields a gapped BLAST program that runs at approximately three times the speed of the original. In addition, a method is introduced for automatically combining statistically significant alignments produced by BLAST into a position-specific score matrix, and searching the database using this matrix. The resulting Position-Specific Iterated BLAST (PSIBLAST) program runs at approximately the same speed per iteration as gapped BLAST, but in many cases is much more sensitive to weak but biologically relevant sequence similarities. 
PSI-BLAST is used to uncover several new and interesting members of the BRCT superfamily.", "Frameshift mutations are generally considered to be deleterious and of little importance for the evolution of novel gene functions. However, by screening an exhaustive set of vertebrate gene families, we found that, when a second transcript encoding the original gene product compensates for this mutation, frameshift mutations can be retained for millions of years and enable new gene functions to be acquired.", "Genomic duplication, followed by divergence, contributes to organismal evolution. Several mechanisms, such as exon shuffling and alternative splicing, are responsible for novel gene functions, but they generate homologous domains and do not usually lead to drastic innovation. Major novelties can potentially be introduced by frameshift mutations and this idea can explain the creation of novel proteins. Here, we employ a strategy using simulated protein sequences and identify 470 human and 108 mouse frameshift events that originate new gene segments. No obvious interspecies overlap was observed, suggesting high rates of acquisition of evolutionary events. This inference is supported by a deficiency of TpA dinucleotides in the protein-coding sequences, which decreases the occurrence of translational termination, even on the complementary strand. Increased usage of the TGA codon as the termination signal in newer genes also supports our inference. This suggests that tolerated frameshift changes are a prevalent mechanism for the rapid emergence of new genes and that protein-coding sequences can be derived from existing or ancestral exons rather than from events that result in noncoding sequences becoming exons.", "Background Efforts to gather genomic evidence for the processes of gene evolution are ongoing, and are closely coupled to improved gene annotation methods. 
Such annotation is complicated by the occurrence of disrupted mRNAs (dmRNAs), harbouring frameshifts and premature stop codons, which can be considered indicators of decay into pseudogenes.", "Motivation: The recent release of the draft sequence of the chimpanzee genome is an invaluable resource for finding genome-wide genetic differences that might explain phenotypic differences between humans and chimpanzees. Results: In this paper, we describe a simple procedure to identify potential human-specific frameshift mutations that occurred after the divergence of human and chimpanzee. The procedure involves collecting human coding exons bearing insertions or deletions compared with the chimpanzee genome and identification of homologs from other species, in support of the mutations being human-specific. Using this procedure, we identified nine genes, BASE, DNAJB3, FLJ33674, HEJ1, NTSR2, RPL13AP, SCGB1D4, WBSCR27 and ZCCHC13, that show human-specific alterations including truncations of the C-terminus. In some cases, the frameshift mutation results in gene inactivation or decay. In other cases, the altered protein seems to be functional. This study demonstrates that even the unfinished chimpanzee genome sequence can be useful in identifying modification of genes that are specific to the human lineage and, therefore, could potentially be relevant to the study of the acquisition of human-specific traits. Availability: Contact: [email protected]" ], "cite_N": [ "@cite_22", "@cite_8", "@cite_23", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "2055043387", "2158714788", "2160385629", "2011880232", "2145247212", "2159808435" ] }
Back-translation for discovering distant protein homologies
In protein-coding DNA sequences, frameshift mutations (insertions or deletions of one or more bases) can alter the translation reading frame, affecting all the amino acids encoded from that point forward. Thus, frameshifts produce a drastic change in the resulting protein sequence, preventing any similarity from being visible at the amino acid level. When the coding DNA sequence is relatively well conserved, the similarity remains detectable at the DNA level by DNA sequence alignment, as reported in several papers, including [1][2][3][4]. However, the divergence often involves additional base substitutions. It has been shown [5][6][7] that, in coding DNA, there is a base compositional bias among codon positions which does not apply when the translation reading frame is changed. Hence, after a reading frame change, a coding sequence is likely to undergo base substitutions leading to a composition that complies with this bias. Amongst these substitutions, synonymous mutations (usually occurring at the third position of the codon) are more likely to be accepted by natural selection, since they are silent with respect to the gene's product. If, over a long evolutionary time, a large number of codons in one or both sequences are affected by these changes, the sequences may be altered to such an extent that the common origin becomes difficult to observe by direct DNA comparison. In this paper, we address the problem of finding distant protein homologies, in particular when the primary cause of the divergence is a frameshift. We achieve this by computing the best alignment of DNA sequences that encode the target proteins. This approach relies on the idea that synonymous mutations cause mismatches in DNA alignments that can be avoided when all the sequences with the same translation are explored, instead of just the known coding DNA sequences. This allows the algorithm to search for an alignment by dealing only with non-synonymous mutations and gaps.
We designed and implemented an efficient method for aligning putative coding DNA sequences, which builds expressive alignments between hypothetical nucleotide sequences that can provide information about the common ancestral sequence, if such a sequence exists. We perform the analysis on memory-efficient graph representations of the complete set of putative DNA sequences for each protein, described in Section 3.1. The proposed method, presented in Section 3.2, consists of a dynamic programming alignment algorithm that computes the two putative DNA sequences that have the best-scoring alignment under an appropriate scoring system (Section 3.3) designed to reflect the actual evolutionary process from a codon-oriented perspective. While the idea of finding protein relations by frameshifted DNA alignments is not entirely new, as we show in the brief related-work overview of Section 2, the tests on artificial data presented in Section 4 demonstrate the efficiency of our scoring system for distant sequences. Furthermore, we validate our method on several pairs of sequences known to be encoded by overlapping genes, and on some published examples of frameshifts resulting in functional proteins. We briefly present these experiments in Section 5, along with a study of a protein family whose members present high dissimilarity over a certain interval. The paper is concluded in Section 6.

Our approach to distant protein relation discovery

The problem of inferring homologies between distantly related proteins, whose divergence is the result of frameshifts and point mutations, is approached in this paper by determining the best pairwise alignment between two DNA sequences that encode the proteins. Given two proteins P_A and P_B, the objective is to find a pair of DNA sequences, D_A and D_B, such that translation(D_A) = P_A and translation(D_B) = P_B, which produce the best pairwise alignment under a given scoring system.
The alignment algorithm (described in Section 3.2) incorporates a gap penalty that limits the number of frameshifts allowed in an alignment, to comply with the observed frequency of frameshifts in a coding sequence's evolution. The scoring system (Section 3.3) is based on possible mutational patterns of the sequences. This reduces the false positive rate and focuses the search on alignments that are more likely to be biologically significant.

Data structures

An explicit enumeration and pairwise alignment of all the putative DNA sequences is not an option, since their number increases exponentially with the protein's length. Therefore, we represent the protein's "back-translation" (the set of possible source DNA sequences) as a directed acyclic graph whose size depends linearly on the length of the protein, and in which each path represents one putative sequence. As illustrated in Figure 1(a), the graph is organized as a sequence of length 3n, where n is the length of the protein sequence. At each position i in the graph there is a group of nodes, each representing a nucleotide that can appear at position i in at least one of the putative coding sequences. Two nodes at consecutive positions are linked by an arc if and only if they are either consecutive nucleotides of the same codon, or respectively the third and the first base of two consecutive codons. No other arcs exist in the graph. Note that in the implementation, the number of nodes is reduced by using the IUPAC nucleotide codes. If the amino acids composing a protein sequence are non-ambiguous, only four extra nucleotide symbols (R, Y, H and N) are necessary for their back-translation. In this condensed representation, the number of ramifications in the graph is substantially reduced, as illustrated by Figure 1. More precisely, the only amino acids whose back-translations contain ramifications are R, L and S, each encoded by 6 codons with different prefixes.
Alignment algorithm

We use a dynamic programming method, similar to the Smith-Waterman algorithm, extended to the data structures described in Section 3.1 and equipped with gap-related restrictions. Given the input graphs G_A and G_B obtained by back-translating proteins P_A and P_B, the algorithm finds the best-scoring local alignment between two DNA sequences comprised in the back-translation graphs (illustrated in Figure 2). The alignment is built by filling each entry M[i, j, (α_A, α_B)] of a dynamic programming matrix M, where i and j are positions of the first and second graph respectively, and (α_A, α_B) is a pair of nodes that can be found in G_A at position i and in G_B at position j, respectively. An example is given in Figure 3.

Fig. 1. Example of fully represented (a) and condensed (b) back-translation graph for the amino acid sequence YSH.
Fig. 2. Alignment example. A path (corresponding to a putative DNA sequence) was chosen from each graph so that the match/mismatch ratio is maximized.
Fig. 3. A cell M[i, j, (α_A, α_B)] of the dynamic programming matrix M.

The dynamic programming algorithm begins with a classic local alignment initialization (0 at the top and left borders), followed by the recursion step described in equation (1). The partial alignment score of each cell M[i, j, (α_A, α_B)] is computed as the maximum of 6 types of values:
(a) 0 (similarly to the classic Smith-Waterman algorithm, only non-negative scores are considered for local alignments);
(b) the substitution score of symbols (α_A, α_B), denoted score(α_A, α_B), added to the score of the best partial alignment ending in M[i − 1, j − 1], provided that the partially aligned paths contain α_A at position i and α_B at position j respectively; this condition is ensured by restricting the entries of M[i − 1, j − 1] to those labeled with symbols that precede α_A and α_B in the graphs;
(c) the cost singleGapPenalty of a frameshift (gap of size 1, or extension of a gap of size 1) in the first sequence, added to the score of the best partial alignment that ends in a cell M[i, j − 1, (α_A, β_B)], provided that β_B precedes α_B in the second graph; this case is considered only if the number of allowed frameshifts on the current path is not exceeded, or if a gap of size 1 is extended;
(d) the cost of a frameshift in the second sequence, added to a partial alignment score defined as above;
(e) the cost tripleGapPenalty of removing an entire codon from the first sequence, added to the score of the best partial alignment ending in a cell M[i, j − 3, (α_A, β_B)];
(f) the cost of removing an entire codon from the second sequence, added to the score of the best partial alignment ending in a cell M[i − 3, j, (β_A, α_B)].

We adopted a non-monotonic gap penalty function, which favors insertions and deletions of full codons, and does not allow a large number of frameshifts, these being very rare events, usually eliminated by natural selection. As can be seen in equation (1), two particular kinds of gaps are considered: i) frameshifts, gaps of size 1 or 2 with a high penalty, whose number in a local alignment can be limited; and ii) codon skips, gaps of size 3 which correspond to the insertion or deletion of a whole codon.
M[i, j, (α_A, α_B)] = max of:
(a) 0;
(b) M[i − 1, j − 1, (β_A, β_B)] + score(α_A, α_B), with β_k ∈ pred(α_k);
(c) M[i, j − 1, (α_A, β_B)] + singleGapPenalty, with β_B ∈ pred(α_B);
(d) M[i − 1, j, (β_A, α_B)] + singleGapPenalty, with β_A ∈ pred(α_A);
(e) M[i, j − 3, (α_A, β_B)] + tripleGapPenalty, with j ≥ 3;
(f) M[i − 3, j, (β_A, α_B)] + tripleGapPenalty, with i ≥ 3.    (1)
Translation-dependent scoring function. In this section, we present a new translation-dependent scoring system suitable for our alignment algorithm. The scoring scheme we designed incorporates information about possible mutational patterns for coding sequences, based on a codon substitution model, with the aim of filtering out alignments between sequences that are unlikely to have common origins. Mutation rates have been shown to vary within genomes, under the influence of several factors, including neighboring bases [15]. Consequently, a model where all base mismatches are equally penalized is oversimplified, and ignores possibly precious information about the context of the substitution. With the aim of retracing the sequence's evolution and revealing which base substitutions are more likely to occur within a given codon, our scoring system targets pairs of triplets (α, p, a), where α is a nucleotide, p is its position in the codon, and a is the amino acid encoded by that codon, thus differentiating the various contexts of a substitution. There are 99 valid triplets out of the total of 240 hypothetical combinations. Pairwise alignment scores are computed for all possible pairs of valid triplets (t_1, t_2) = ((α_1, p_1, a_1), (α_2, p_2, a_2)) as a classic log-odds ratio:
score(t_1, t_2) = λ log(f_{t_1 t_2} / b_{t_1 t_2})    (2)
where f_{t_1 t_2} is the frequency of the t_1 ↔ t_2 substitution in related sequences, and b_{t_1 t_2} = p(t_1) p(t_2) is the background probability.
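The recursion in equation (1) can be sketched as follows; this is a simplified illustration assuming plain DNA strings (no graph branching, no predecessor sets) and omitting the frameshift-count limit, so it only demonstrates the non-monotonic gap scheme with size-1 and size-3 gaps:

```python
def align_score(a, b, score, single_gap=-11, triple_gap=-20):
    # Simplified Smith-Waterman variant with the paper's two gap kinds:
    # size-1 gaps (frameshifts) and size-3 gaps (codon skips).
    # Sketch only: aligns plain strings, not back-translation graphs.
    n, m = len(a), len(b)
    M = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cand = [0,                                    # case (a)
                    M[i-1][j-1] + score(a[i-1], b[j-1]),  # case (b)
                    M[i][j-1] + single_gap,               # case (c)
                    M[i-1][j] + single_gap]               # case (d)
            if j >= 3:
                cand.append(M[i][j-3] + triple_gap)       # case (e)
            if i >= 3:
                cand.append(M[i-3][j] + triple_gap)       # case (f)
            M[i][j] = max(cand)
            best = max(best, M[i][j])
    return best
```

In the full algorithm each cell additionally carries the node pair (α_A, α_B), and cases (b)-(d) are restricted to predecessor nodes in the graphs.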
In order to obtain the foreground probabilities f_{t_i t_j}, we consider the following scenario: two proteins are encoded on the same DNA sequence, on different reading frames; at some point, the sequence was duplicated and the two copies diverged independently; we assume that the two coding sequences undergo, in their independent evolution, synonymous and non-synonymous point mutations, or full codon insertions and removals. The insignificant amount of available real data that fits our hypothesis does not allow a classical, statistical computation of the foreground and background probabilities. Therefore, instead of doing statistics on real data directly, we rely on codon frequency tables and codon substitution models. We assume that codon substitutions in our scenario can be modeled by the Markov model presented in [16], which specifies the relative instantaneous substitution rate from codon i to codon j, for all i ≠ j, as:
Q_ij = 0 if i or j is a stop codon, or if i → j requires more than 1 nucleotide substitution;
Q_ij = π_j if i → j is a synonymous transversion;
Q_ij = π_j κ if i → j is a synonymous transition;
Q_ij = π_j ω if i → j is a nonsynonymous transversion;
Q_ij = π_j κω if i → j is a nonsynonymous transition.
Here, the parameter ω represents the nonsynonymous/synonymous rate ratio, κ the transition/transversion rate ratio, and π_j the equilibrium frequency of codon j. As in all Markov models of sequence evolution, absolute rates are found by normalizing the relative rates to a mean rate of 1 at equilibrium, that is, by enforcing Σ_i Σ_{j≠i} π_i Q_ij = 1, and completing the instantaneous rate matrix Q by defining Q_ii = −Σ_{j≠i} Q_ij, which gives a form in which the transition probability matrix is calculated as P(θ) = e^{θQ} [18]. Evolutionary times θ are measured in expected number of nucleotide substitutions per codon. With this codon substitution model, f_{t_i t_j} can be deduced in several steps.
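The step P(θ) = e^{θQ} can be illustrated with a toy rate matrix; the sketch below (an assumption-laden stand-in for the 61×61 codon matrix built from π_j, κ and ω) computes the matrix exponential by a truncated Taylor series:

```python
import math

def transition_probs(Q, theta, terms=40):
    # P(theta) = exp(theta * Q) via truncated Taylor series, for a small
    # rate matrix Q given as a list of lists. Toy illustration only; real
    # codon models use dedicated matrix-exponential routines.
    n = len(Q)
    P = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in P]
    for k in range(1, terms):
        # term <- term * (theta * Q) / k, accumulated into P
        term = [[sum(term[i][l] * theta * Q[l][j] for l in range(n)) / k
                 for j in range(n)] for i in range(n)]
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

Q = [[-1.0, 1.0], [1.0, -1.0]]   # toy reversible 2-state rates
P = transition_probs(Q, 0.5)     # closed form: P[0][0] = (1 + e^{-1}) / 2
```

Each row of P(θ) sums to 1, as required of a transition probability matrix.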
Basically, we first need to identify all pairs of codons with a common subsequence, i.e. that have a perfect semi-global alignment (for instance, codons CAT and ATG satisfy this condition, having the common subsequence AT; this example is further developed below). We then assume that the codons of each pair undergo independent evolution, according to the codon substitution model. For the resulting codons, we compute, based on all possible original codon pairs, p((α_i, p_i, c_i), (α_j, p_j, c_j)): the probability that nucleotide α_i, situated at position p_i of codon c_i, and nucleotide α_j, situated at position p_j of codon c_j, have a common origin (equation (5)). From these, we can immediately compute, as shown by equation (6), p((α_i, p_i, a_i), (α_j, p_j, a_j)), corresponding in fact to the foreground probabilities f_{t_i t_j}, where t_i = (α_i, p_i, a_i) and t_j = (α_j, p_j, a_j). In the following, p(c_1 →_θ c_2) stands for the probability of the event "codon c_1 mutates into codon c_2 in the evolutionary time θ", and is given by P_{c_1, c_2}(θ). The notation c_1[interval_1] ≡ c_2[interval_2] states that codon c_1 restricted to the positions given by interval_1 is a sequence identical to c_2 restricted to interval_2. This is equivalent to having a word w obtained by "merging" the two codons. For instance, if c_1 = CAT and c_2 = ATG, with their common substring placed at interval_1 = [2..3] and interval_2 = [1..2] respectively, w is CATG. Finally, p(c_1[interval_1] ≡ c_2[interval_2]) is the probability of having c_1 and c_2 in the relation described above, which we compute as the probability of the word w obtained by "merging" the two codons. This function should be symmetric, it should depend on the codon distribution, and the probabilities of all the words w of a given length should sum to 1.
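The codon-merging relation described above, together with the reading-frame word probability it feeds into, can be sketched as follows under illustrative assumptions: a toy four-codon universe with uniform frequencies stands in for a real codon frequency table, and "common subsequence" is taken to mean a suffix of one codon matching a prefix of the other:

```python
# Toy codon universe and assumed uniform equilibrium frequencies;
# a real implementation would use a full 61-codon frequency table.
CODONS = ["CAT", "ATG", "GGC", "GAA"]
p = {c: 1.0 / len(CODONS) for c in CODONS}

def merge(c1, c2):
    """Merged word w if a suffix of c1 equals a prefix of c2 (length 2, then 1)."""
    for k in (2, 1):
        if c1[3 - k:] == c2[:k]:
            return c1 + c2[k:]
    return None

def p_rf1(w):   # frame of the first codon: a full codon, then a codon prefix
    return p[w[:3]] * sum(p[c] for c in CODONS if c.startswith(w[3:]))

def p_rf2(w):   # frame of the second codon: a codon suffix, then a full codon
    return sum(p[c] for c in CODONS if c.endswith(w[:-3])) * p[w[-3:]]

def p_word(w):  # inclusion-exclusion over the two reading frames
    return p_rf1(w) + p_rf2(w) - p_rf1(w) * p_rf2(w)

w = merge("CAT", "ATG")   # "CATG": common substring "AT" at [2..3] / [1..2]
```

With the uniform toy frequencies, p_rf1("CATG") = 0.25 · 0.5 and p_rf2("CATG") = 0.25 · 0.25, combined by inclusion-exclusion.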
However, since we consider the case where the same DNA sequence is translated on two different reading frames, one of the two translated sequences would have an atypical composition. Consequently, the probability of a word w is computed as if the sequence had the known codon composition when translated on the reading frame imposed by the first codon, or on the one imposed by the second. This hypothesis can be formalized as:
p(w) = p(w on rf1 OR w on rf2) = p_rf1(w) + p_rf2(w) − p_rf1(w) · p_rf2(w)    (4)
where p_rf1(w) and p_rf2(w) are the probabilities of the word w in the reading frame imposed by the position of the first and second codon, respectively. These are computed as the products of the probabilities of the codons and codon pieces that compose the word w in the established reading frame. In the previous example, the probabilities of w = CATG in the first and second reading frame are:
p_rf1(CATG) = p(CAT) · p(G**) = p(CAT) · Σ_{c starts with G} p(c)
p_rf2(CATG) = p(**C) · p(ATG) = (Σ_{c ends with C} p(c)) · p(ATG)
The values of p((α_i, p_i, c_i), (α_j, p_j, c_j)) are computed as:
p((α_i, p_i, c_i), (α_j, p_j, c_j)) = Σ_{c′_i, c′_j : c′_i[interval_i] ≡ c′_j[interval_j], p_i ∈ interval_i, p_j ∈ interval_j} p(c′_i[interval_i] ≡ c′_j[interval_j]) · p(c′_i →_θ c_i) · p(c′_j →_θ c_j)    (5)
from which obtaining the foreground probabilities is straightforward:
f_{t_i t_j} = p((α_i, p_i, a_i), (α_j, p_j, a_j)) = Σ_{c_i encodes a_i, c_j encodes a_j} p((α_i, p_i, c_i), (α_j, p_j, c_j))    (6)
The background probability of (t_i, t_j), b_{t_i t_j}, can simply be expressed as the probability of the two symbols appearing independently in the sequences:
b_{t_i t_j} = b_{(α_i, p_i, a_i), (α_j, p_j, a_j)} = Σ_{c_i encodes a_i, c_j encodes a_j} π_{c_i} π_{c_j}    (7)
Substitution matrix for ambiguous symbols. From matrices built as explained above, the versions that use IUPAC ambiguity codes for nucleotides (as proposed in the final paragraph of Section 3.1) can be computed: the score of pairing two ambiguous symbols is the maximum
over all substitution scores for the pairs of nucleotides from the respective sets. Score evaluation. The score significance is estimated according to the Gumbel distribution, whose parameters λ and K are computed with the method described in [19,20]. Since the forward alignment and the reverse complementary alignment are two independent cases with different score distributions, two parameter pairs, (λ_fw, K_fw) and (λ_rc, K_rc), are computed and used in practice. To validate the translation-dependent scoring system designed in the previous section, we tested it on an artificial data set consisting of 96 pairs of protein sequences of average length 300. Each pair was obtained by translating a randomly generated DNA sequence on two different reading frames. Both sequences of each pair were then mutated independently, according to codon mutation probability matrices corresponding to the evolutionary times 0.01, 0.1, 0.3, 0.5, 0.7, 1.0, 1.5 and 2.0 (measured in average number of mutations per codon). To this data set we applied four variants of alignment algorithms: i) classic alignment of DNA sequences using classic base substitution scores and affine gap penalties; ii) classic alignment of DNA sequences using the translation-dependent scoring scheme designed in Section 3.3; iii) alignment of back-translation graphs (Section 3.2) using classic base substitution scores and affine gap penalties; iv) alignment of back-translation graphs using the translation-dependent scoring scheme. For the tests involving translation-dependent scores, we used scoring functions corresponding to evolutionary times from 0.30 to 1.00. Table 1 briefly shows the e-values of the scores obtained with each setup when aligning sequence pairs at various evolutionary distances.
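The Gumbel-based significance estimate can be sketched with the standard Karlin-Altschul e-value formula; the λ and K values in the example below are placeholders, not the fitted parameters of the method:

```python
import math

def evalue(S, m, n, lam, K):
    # Expected number of chance local alignments with score >= S between
    # sequences of lengths m and n under the Gumbel model: E = K*m*n*e^(-lam*S).
    return K * m * n * math.exp(-lam * S)

# In practice two (lam, K) pairs would be fitted and used: one for forward
# alignments and one for reverse-complement alignments, since their score
# distributions differ. Parameter values here are illustrative placeholders.
e = evalue(50, 300, 300, lam=0.27, K=0.04)
```

Higher scores yield exponentially smaller e-values, which is what makes the e-value columns of Table 1 comparable across setups.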
While all variants perform well for highly similar sequences, we can clearly see the ability of the translation-dependent scores to help the algorithm build significant alignments between sequences that underwent important changes. The resulting alignments reveal that, even after many mutations, the translation-dependent scores manage to recover large parts of the original shared sequence, by correctly aligning most positions. On the other hand, with classic match/mismatch scores, the algorithm usually fails to find these common zones. Moreover, due to the large number of mismatches, the alignment has a low score, comparable to scores that can be obtained for randomly chosen sequences. This makes it difficult to establish whether the alignment is biologically meaningful or was obtained by chance. This issue is solved by the translation-dependent scores through uneven substitution penalties, in accordance with the codon mutation models. We conclude that the usage of translation-dependent scores makes the algorithm more robust: able to detect the common origins even after the sequences underwent many modifications, and also able to filter out alignments where the nucleotide pairs match by pure chance and not due to evolutionary relations. Experimental results. Tests on known overlapping and frameshifted genes. We tested the method on pairs of proteins known to be encoded by overlapping genes in viral genomes (phage ΦX174 and Influenza A) and in E. coli plasmids, as well as on the newly identified overlapping genes yaaW and htgA from E. coli K12 [21]. In all cases, we obtained perfect identification of the gene overlaps with simple substitution scores and with translation-dependent scoring matrices corresponding to low evolutionary distances (at most 1 mutation per codon). Translation-dependent scoring matrices of higher evolutionary distances favor, in some (rare) cases, substitutions instead of matches within the alignment.
This is a natural consequence of increasing the codon's chance to mutate, and it illustrates the importance of choosing a score matrix corresponding to the real evolutionary distance. Our method was also able to detect, directly on the protein sequences, the frameshifts resulting in functional proteins reported in [1][2][3][4]. New divergence scenarios for orthologous proteins. In this section we discuss the application of our method to the FMR1NB (Fragile X mental retardation 1 neighbor protein) family. The Ensembl database [22] provides 23 members of this family, from mammalian species including human, mouse, dog and cow. Their multiple alignment, provided by Ensembl, shows high dissimilarity on the first part (approximately 100 amino acids), and good conservation on the rest of the sequence. We applied our alignment algorithm to proteins from several organisms for which the complete sequence is available. We performed our experiments with translation-dependent scoring matrices corresponding to 0.3, 0.5 and 0.7 mutations per codon. Given that, in our scenario (presented in Section 3.3), the divergence applies to two reading frames, this implies an overall mutation rate of 0.6, 1.0 and 1.4 mutations per codon, respectively. Thus, the mutation rate per base reflected by our scores is less than 0.5, which is approximately the nucleotide substitution rate for mouse relative to human [23]. The number of allowed frameshifts was limited to 3. The gap penalties were set in all cases to −20 for codon indels, −20 for size-1 gaps and −5 for the extension of size-1 gaps (size-1 and size-2 gaps correspond to frameshifts). These choices were made so that the penalty for codon indels is higher than the average penalty for 3 substitutions.
Fig. 4. Human and mouse FMR1NB proteins, aligned using a translation-dependent matrix of evolutionary distance 0.7 (the sign of each substitution score appears on the fourth row).
The size-4 gap corresponds to a frameshift that corrects the reading frame. Figure 4 presents a fragment of the alignment obtained for the FMR1NB proteins of human (gene ID ENSG00000176988) and mouse (gene ID ENSMUSG00000062170). The algorithm finds a frameshift near the 100th amino acid, managing to align the initial part of the proteins at the DNA level. Similar frameshifted alignments are obtained for human vs. cow and human vs. dog, while alignments between proteins of primates do not contain frameshifts. The consistency of the frameshift position in these alignments supports the evidence of a frameshift event that might have occurred in the primate lineage. If confirmed, this frameshift would have modified the first topological domain and the first transmembrane domain of the product protein. Interestingly, the FMR1NB gene occurs near the Fragile X mental retardation 1 gene (FMR1), involved in the corresponding genetic disease [24]. Conclusions. In this paper, we addressed the problem of finding distant protein homologies, in particular those affected by frameshift events, from a codon evolution perspective. We search for protein common origins by implicitly aligning all their putative coding DNA sequences, stored in efficient data structures called back-translation graphs. Our approach relies on a dynamic programming alignment algorithm for these graphs, which involves a non-monotonic gap penalty that handles frameshifts and full codon indels differently. We designed a powerful translation-dependent scoring function for nucleotide pairs, based on codon substitution models, whose purpose is to reflect the expected dynamics of coding DNA sequences. The method was shown to perform better than classic alignment on artificial data, obtained by independently mutating, according to a codon substitution model, coding sequences translated with a frameshift.
Moreover, it successfully detected published frameshift mutation cases resulting in functional proteins. We then described an experiment involving homologous mammalian proteins that show little conservation at the amino acid level on a large region, and provided possible frameshifted alignments obtained with our method that may explain the divergence. As illustrated by this example, the proposed method should help to better explain a high divergence of homologous proteins and to establish new homology relations between genes with unknown origins. An implementation of our method is available at http://bioinfo.lifl.fr/path/.
4,526
1001.4603
1848085800
Frameshift mutations in protein-coding DNA sequences produce a drastic change in the resulting protein sequence, which prevents classic protein alignment methods from revealing the proteins' common origin. Moreover, when a large number of substitutions are additionally involved in the divergence, homology detection becomes difficult even at the DNA level. To cope with this situation, we propose a novel method to infer distant homology relations between two proteins that accounts for frameshift and point mutations that may have affected the coding sequences. We design a dynamic programming alignment algorithm over memory-efficient graph representations of the complete set of putative DNA sequences of each protein, with the goal of determining the two putative DNA sequences which have the best scoring alignment under a powerful scoring system designed to reflect the most probable evolutionary process. This allows us to uncover evolutionary information that is not captured by traditional alignment methods, which is confirmed by biologically significant examples.
An interesting approach for handling frameshifts at the protein level was developed in @cite_9 . Several substitution matrices were designed for aligning amino acids encoded on different reading frames, based on nucleotide pair matches between respective codons. This idea has the advantage of being easy to use with any classic protein alignment tool. However, it lacks flexibility in gap positioning.
{ "abstract": [ "The protein sequence database was analyzed for evidence that some distinct sequence families might be distantly related in evolution by changes in frame of translation. Sequences were compared using special amino acid substitution matrices for the alternate frames of translation. The statistical significance of alignment scores was computed in the true database and in shuffled versions of the database that preserve any potential codon bias. The comparison of results from these two databases provides a very sensitive method for detecting remote relationships. We find a weak but measurable relatedness within the database as a whole, supporting the notion that some proteins may have evolved from others through changes in frame of translation. We also quantify residual homology in the ordinary sense within a database of generally unrelated sequences. Proteins 1999;37:" ], "cite_N": [ "@cite_9" ], "mid": [ "2082309149" ] }
Back-translation for discovering distant protein homologies
In protein-coding DNA sequences, frameshift mutations (insertions or deletions of one or more bases) can alter the translation reading frame, affecting all the amino acids encoded from that point forward. Thus, frameshifts produce a drastic change in the resulting protein sequence, preventing any similarity from being visible at the amino acid level. When the coding DNA sequence is relatively well conserved, the similarity remains detectable at the DNA level by DNA sequence alignment, as reported in several papers, including [1][2][3][4]. However, the divergence often involves additional base substitutions. It has been shown [5][6][7] that, in coding DNA, there is a base compositional bias among codon positions, which does not apply when the translation reading frame is changed. Hence, after a reading frame change, a coding sequence is likely to undergo base substitutions leading to a composition that complies with this bias. Amongst these substitutions, synonymous mutations (usually occurring at the third position of the codon) are more likely to be accepted by natural selection, since they are silent with respect to the gene's product. If, over a long evolutionary time, a large number of codons in one or both sequences are affected by these changes, the sequences may be altered to such an extent that the common origin becomes difficult to observe by direct DNA comparison. In this paper, we address the problem of finding distant protein homologies, in particular when the primary cause of the divergence is a frameshift. We achieve this by computing the best alignment of DNA sequences that encode the target proteins. This approach relies on the idea that synonymous mutations cause mismatches in the DNA alignments that can be avoided when all the sequences with the same translation are explored, instead of just the known coding DNA sequences. This allows the algorithm to search for an alignment dealing only with non-synonymous mutations and gaps.
We designed and implemented an efficient method for aligning putative coding DNA sequences, which builds expressive alignments between hypothetical nucleotide sequences that can provide some information about the common ancestral sequence, if such a sequence exists. We perform the analysis on memory-efficient graph representations of the complete set of putative DNA sequences for each protein, described in Section 3.1. The proposed method, presented in Section 3.2, consists of a dynamic programming alignment algorithm that computes the two putative DNA sequences that have the best scoring alignment under an appropriate scoring system (Section 3.3) designed to reflect the actual evolution process from a codon-oriented perspective. While the idea of finding protein relations by frameshifted DNA alignments is not entirely new, as we show in the brief related work overview of Section 2, Section 4, presenting tests performed on artificial data, demonstrates the efficiency of our scoring system for distant sequences. Furthermore, we validate our method on several pairs of sequences known to be encoded by overlapping genes, and on some published examples of frameshifts resulting in functional proteins. We briefly present these experiments in Section 5, along with a study of a protein family whose members present high dissimilarity on a certain interval. The paper is concluded in Section 6. Our approach to distant protein relation discovery. The problem of inferring homologies between distantly related proteins, whose divergence is the result of frameshifts and point mutations, is approached in this paper by determining the best pairwise alignment between two DNA sequences that encode the proteins. Given two proteins P_A and P_B, the objective is to find a pair of DNA sequences, D_A and D_B, such that translation(D_A) = P_A and translation(D_B) = P_B, which produce the best pairwise alignment under a given scoring system.
The alignment algorithm (described in Section 3.2) incorporates a gap penalty that limits the number of frameshifts allowed in an alignment, to comply with the observed frequency of frameshifts in a coding sequence's evolution. The scoring system (Section 3.3) is based on possible mutational patterns of the sequences. This leads to reducing the false positive rate and focusing on alignments that are more likely to be biologically significant. Data structures An explicit enumeration and pairwise alignment of all the putative DNA sequences is not an option, since their number increases exponentially with the protein's length 1 . Therefore, we represent the protein's "back-translation" (set of possible source DNAs) as a directed acyclic graph, whose size depends linearly on the length of the protein, and where a path represents one putative sequence. As illustrated in Figure 1(a), the graph is organized as a sequence of length 3n where n is the length of the protein sequence. At each position i in the graph, there is a group of nodes, each representing a possible nucleotide that can appear at position i in at least one of the putative coding sequences. Two nodes at consecutive positions are linked by arcs if and only if they are either consecutive nucleotides of the same codon, or they are respectively the third and the first base of two consecutive codons. No other arcs exist in the graph. Note that in the implementation, the number of nodes is reduced by using the IUPAC nucleotide codes. If the amino acids composing a protein sequence are non-ambiguous, only 4 extra nucleotide symbols -R, Y , H and N -are necessary for their back-translation. In this condensed representation, the number of ramifications in the graph is substantially reduced, as illustrated by Figure 1. More precisely, the only amino acids with ramifications in their back-translation are amino acids R, L and S, each encoded by 6 codons with different prefixes. 
Alignment algorithm We use a dynamic programming method, similar to the Smith-Waterman algorithm, extended to data structures described in Section 3.1 and equipped with gap related restrictions. Given the input graphs G A and G B obtained by back-translating proteins P A and P B , the algorithm finds the best scoring local alignment between two DNA sequences comprised in the back-translation graphs (illustrated in Figure 2). The alignment is built by filling each entry M [i, j, (α A , α B )] of a dynamic programming matrix M , where i and j are positions of the first and second graph respectively, and (α A , α B ) is a pair of nodes that can be found in G A at position i, and in G B at position j, respectively. An example is given in Figure 3. The dynamic programming algorithm begins with a classic local alignment initialization (0 at the top and left borders), followed by the recursion step described in equation (1). The partial alignment score from each cell M [i, j, (α A , α B )] is computed as the maximum of 6 types of values: (a) 0 (similarly to the classic Smith-Waterman algorithm, only non-negative scores are considered for local alignments). (b) the substitution score of symbols (α A , α B ), denoted score(α A , α B ), added to the score of the best partial alignment ending in M [i − 1, j − 1], provided that the partially aligned paths contain α A on position i and α B on position YSH Back-translation T A C T T A C G A C G T C T C A C T (a) T A Y T C N A G Y C A Y (b) Fig. 1. Example of fully represented (a) and condensed (b) back-translation graph for the amino acid sequence YSH. AGN: QET: C C A C G T G G A C G T A A C T C A A G G A A G A C A C G T Alignment C C A C G T G G A C G T A A C T C A A G G A A G A C A C G T Fig. 2. Alignment example. A path (corresponding to a putative DNA sequence) was chosen from each graph so that the match/mismatch ratio is maximized. 
j respectively; this condition is ensured by restricting the entries of M [i − 1, j −1] to those labeled with symbols that precede α A and α B in the graphs. (c) the cost singleGapP enalty of a frameshift (gap of size 1 or extension of a gap of size 1) in the first sequence, added to the score of the best partial alignment that ends in a cell M [i, j − 1, (α A , β B )], provided that β B precedes α B in the second graph; this case is considered only if the number of allowed frameshifts on the current path is not exceeded, or a gap of size 1 is extended. (d) the cost of a frameshift in the second sequence, added to a partial alignment score defined as above. (e) the cost tripleGapP enalty of removing an entire codon from the first sequence, added to the score of the best partial alignment ending in a cell T A C G A G C T C T T G T C T T A T T G A G T T T C A T A C C T G T C G G G C T C C G T G C A T G T C T T T A G G G C G T G A T A C G T G C C T C C T T T C i j (α A , α B ) M [i, j] is a "cell" of MM [i, j − 3, (α A , β B ) ]. (f) the cost of removing an entire codon from the second sequence, added to the score of the best partial alignment ending in a cell M [i − 3, j, (β A , α B )] We adopted a non-monotonic gap penalty function, which favors insertions and deletions of full codons, and does not allow a large number of frameshifts -very rare events, usually eliminated by natural selection. As can be seen in equation (1), two particular kinds of gaps are considered: i) frameshifts -gaps of size 1 or 2, with high penalty, whose number in a local alignment can be limited, and ii) codon skips -gaps of size 3 which correspond to the insertion or deletion of a whole codon. 
M [i, j, (α A , α B )] = max                0 (a) M [i − 1, j − 1, (β A , β B )] + score(α A , α B ), β k ∈ pred(α k ); (b) (M [i, j − 1, (α A , β B )] + singleGapP enalty) , β B ∈ pred(α B ); (c) (M [i − 1, j, (β A , α B )] + singleGapP enalty) , β A ∈ pred(α A ); (d) (M [i, j − 3, (α A , β B )] + tripleGapP enalty) , j ≥ 3 (e) (M [i − 3, j, (β A , α B )] + tripleGapP enalty) , i ≥ 3 (f)(1) Translation-dependent scoring function In this section, we present a new translation-dependent scoring system suitable for our alignment algorithm. The scoring scheme we designed incorporates information about possible mutational patterns for coding sequences, based on a codon substitution model, with the aim of filtering out alignments between sequences that are unlikely to have common origins. Mutation rates have been shown to vary within genomes, under the influence of several factors, including neighbor bases [15]. Consequently, a model where all base mismatches are equally penalized is oversimplified, and ignores possibly precious information about the context of the substitution. With the aim of retracing the sequence's evolution and revealing which base substitutions are more likely to occur within a given codon, our scoring system targets pairs of triplets (α, p, a), were α is a nucleotide, p is its position in the codon, and a is the amino acid encoded by that codon, thus differentiating various contexts of a substitution. There are 99 valid triplets out of the total of 240 hypothetical combinations. Pairwise alignment scores are computed for all possible pairs of valid triplets (t 1 , t 2 ) = ((α 1 , p 1 , a 1 ), (α 2 , p 2 , a 2 )) as a classic log-odds ratio: score(t 1 , t 2 ) = λ log f t1t2 b t1t2 (2) where f t1t2 is the frequency of the t 1 ↔ t 2 substitution in related sequences, and b t1t2 = p(t 1 )p(t 2 ) is the background probability. 
In order to obtain the foreground probabilities f titj , we will consider the following scenario: two proteins are encoded on the same DNA sequence, on different reading frames; at some point, the sequence was duplicated and the two copies diverged independently; we assume that the two coding sequences undergo, in their independent evolution, synonymous and non-synonymous point mutations, or full codon insertions and removals. The insignificant amount of available real data that fits our hypothesis does not allow classical, statistical computation of the foreground and background probabilities. Therefore, instead of doing statistics on real data directly, we will rely on codon frequency tables and codon substitution models. We assume that codon substitutions in our scenarios can be modeled by a Markov model presented in [16] 2 which specifies the relative instantaneous substitution rate from codon i to codon j as: Q ij =                0 if i or j is a stop codon, or if i → j requires more than 1 nucleotide substitution, π j if i → j is a synonymous transversion, π j κ if i → j is a synonymous transition, π j ω if i → j is a nonsynonymous transversion, π j κω if i → j is a nonsynonymous transition. for all i = j. Here, the parameter ω represents the nonsynonymous-synonymous rate ratio, κ the transition-transversion rate ratio, and π j the equilibrium frequency of codon j. As in all Markov models of sequence evolution, absolute rates are found by normalizing the relative rates to a mean rate of 1 at equilibrium, that is, by enforcing i j =i π i Q ij = 1 and completing the instantaneous rate matrix Q by defining Q ii = − j =i Q ij to give a form in which the transition probability matrix is calculated as P (θ) = e θQ [18]. Evolutionary times θ are measured in expected number of nucleotide substitutions per codon. With this codon substitution model, f titj can be deduced in several steps. 
Basically, we first need to identify all pairs of codons with a common subsequence that have a perfect semi-global alignment (for instance, codons CAT and ATG satisfy this condition, having the common subsequence AT; this example is further explained below). We then assume that the codons from each pair undergo independent evolution, according to the codon substitution model. For the resulting codons, we compute, based on all possible original codon pairs, p((α_i, p_i, c_i), (α_j, p_j, c_j)): the probability that nucleotide α_i, situated on position p_i of codon c_i, and nucleotide α_j, situated on position p_j of codon c_j, have a common origin (equation (5)). From these, we can immediately compute, as shown by equation (6), p((α_i, p_i, a_i), (α_j, p_j, a_j)), corresponding in fact to the foreground probabilities f_{t_i t_j}, where t_i = (α_i, p_i, a_i) and t_j = (α_j, p_j, a_j). In the following, p(c_1 →_θ c_2) stands for the probability of the event "codon c_1 mutates into codon c_2 in the evolutionary time θ", and is given by P_{c_1,c_2}(θ). c_1[interval_1] ≡ c_2[interval_2] states that codon c_1 restricted to the positions given by interval_1 is a sequence identical to c_2 restricted to interval_2. This is equivalent to having a word w obtained by "merging" the two codons. For instance, if c_1 = CAT and c_2 = ATG, with their common substring being placed in interval_1 = [2..3] and interval_2 = [1..2] respectively, w is CATG. Finally, p(c_1[interval_1] ≡ c_2[interval_2]) is the probability to have c_1 and c_2 in the relation described above, which we compute as the probability of the word w obtained by "merging" the two codons. This function should be symmetric, it should depend on the codon distribution, and the probabilities of all words w of a given length should sum to 1.
However, since we consider the case where the same DNA sequence is translated on two different reading frames, one of the two translated sequences would have an atypical composition. Consequently, the probability of a word w is computed as if the sequence had the known codon composition when translated on the reading frame imposed by the first codon, or on the one imposed by the second. This hypothesis can be formalized as:
p(w) = p(w on rf1 OR w on rf2) = p_rf1(w) + p_rf2(w) − p_rf1(w) · p_rf2(w) (4)
where p_rf1(w) and p_rf2(w) are the probabilities of the word w in the reading frame imposed by the position of the first and second codon, respectively. These are computed as the products of the probabilities of the codons and codon pieces that compose the word w in the established reading frame. In the previous example, the probabilities of w = CATG in the first and second reading frame are:
p_rf1(CATG) = p(CAT) · p(G**) = p(CAT) · Σ_{c : c starts with G} p(c)
p_rf2(CATG) = p(**C) · p(ATG) = (Σ_{c : c ends with C} p(c)) · p(ATG)
The values of p((α_i, p_i, c_i), (α_j, p_j, c_j)) are computed as:
Σ_{c′_i, c′_j : c′_i[interval_i] ≡ c′_j[interval_j], p_i ∈ interval_i, p_j ∈ interval_j} p(c′_i[interval_i] ≡ c′_j[interval_j]) · p(c′_i →_θ c_i) · p(c′_j →_θ c_j) (5)
from which obtaining the foreground probabilities is straightforward:
f_{t_i t_j} = p((α_i, p_i, a_i), (α_j, p_j, a_j)) = Σ_{c_i encodes a_i, c_j encodes a_j} p((α_i, p_i, c_i), (α_j, p_j, c_j)) (6)
The background probabilities of (t_i, t_j), b_{t_i t_j}, can simply be expressed as the probability of the two symbols appearing independently in the sequences:
b_{t_i t_j} = b_{(α_i, p_i, a_i), (α_j, p_j, a_j)} = Σ_{c_i encodes a_i, c_j encodes a_j} π_{c_i} π_{c_j} (7)

Substitution matrix for ambiguous symbols
From matrices built as explained above, versions that use IUPAC ambiguity codes for nucleotides (as proposed in the final paragraph of 3.1) can be computed: the score of pairing two ambiguous symbols is the maximum
over all substitution scores for all pairs of nucleotides from the respective sets.

Score evaluation
The score significance is estimated according to the Gumbel distribution, where the parameters λ and K are computed with the method described in [19,20]. Since the forward alignment and the reverse complementary alignment are two independent cases with different score distributions, two parameter pairs, λ_fw, K_fw and λ_rc, K_rc, are computed and used in practice. To validate the translation-dependent scoring system designed in the previous section, we tested it on an artificial data set consisting of 96 pairs of protein sequences of average length 300. Each pair was obtained by translating a randomly generated DNA sequence on two different reading frames. Both sequences in each pair were then mutated independently, according to codon mutation probability matrices corresponding to each of the evolutionary times 0.01, 0.1, 0.3, 0.5, 0.7, 1.0, 1.5, 2.0 (measured in average number of mutations per codon). To this data set we applied four variants of alignment algorithms: i) classic alignment of DNA sequences using classic base substitution scores and affine gap penalties; ii) classic alignment of DNA sequences using the translation-dependent scoring scheme designed in Section 3.3; iii) alignment of back-translation graphs (Section 3.2) using classic base substitution scores and affine gap penalties; iv) alignment of back-translation graphs using a translation-dependent scoring scheme. For the tests involving translation-dependent scores, we used scoring functions corresponding to evolutionary times from 0.30 to 1.00. Table 1 briefly shows the e-values of the scores obtained with each setup when aligning sequence pairs with various evolutionary distances.
While all variants perform well for highly similar sequences, we can clearly deduce the ability of the translation-dependent scores to help the algorithm build significant alignments between sequences that underwent important changes. The resulting alignments reveal that, even after many mutations, the translation-dependent scores manage to recover large parts of the original shared sequence, by correctly aligning most positions. On the other hand, with classic match/mismatch scores, the algorithm usually fails to find these common zones. Moreover, due to the large number of mismatches, the alignment has a low score, comparable to scores that can be obtained for randomly chosen sequences. This makes it difficult to establish whether the alignment is biologically meaningful or was obtained by chance. This issue is solved by the translation-dependent scores through uneven substitution penalties, set according to the codon mutation models. We conclude that the usage of translation-dependent scores makes the algorithm more robust: able to detect common origins even after the sequences underwent many modifications, and also able to filter out alignments where the nucleotide pairs match by pure chance and not due to evolutionary relations.

Experimental results
Tests on known overlapping and frameshifted genes
We tested the method on pairs of proteins known to be encoded by overlapping genes in viral genomes (phage ΦX174 and Influenza A) and in E. coli plasmids, as well as on the newly identified overlapping genes yaaW and htgA from E. coli K12 [21]. In all cases, we obtained perfect identification of gene overlaps with simple substitution scores and with translation-dependent scoring matrices corresponding to low evolutionary distances (at most 1 mutation per codon). Translation-dependent scoring matrices of higher evolutionary distances favor, in some (rare) cases, substitutions instead of matches within the alignment.
This is a natural consequence of increasing the codon's chance to mutate, and it illustrates the importance of choosing a score matrix corresponding to the real evolutionary distance. Our method was also able to detect, directly on the protein sequences, the frameshifts resulting in functional proteins reported in [1][2][3][4].

New divergence scenarios for orthologous proteins
In this section we discuss the application of our method to the FMR1NB (Fragile X mental retardation 1 neighbor protein) family. The Ensembl database [22] provides 23 members of this family, from mammalian species, including human, mouse, dog and cow. Their multiple alignment, provided by Ensembl, shows high dissimilarity on the first part (approximately 100 amino acids) and good conservation on the rest of the sequence. We applied our alignment algorithm to proteins from several organisms for which the complete sequence is available. We performed our experiments with translation-dependent scoring matrices corresponding to 0.3, 0.5 and 0.7 mutations per codon. Given that, in our scenario (presented in Section 3.3), the divergence applies to two reading frames, this implies an overall mutation rate of 0.6, 1.0 and 1.4 mutations per codon, respectively. Thus, the mutation rate per base reflected by our scores is less than 0.5, which is approximately the nucleotide substitution rate for mouse relative to human [23]. The number of allowed frameshifts was limited to 3. The gap penalties were set in all cases to -20 for codon indels, -20 for size 1 gaps and -5 for the extension of size 1 gaps (size 1 and size 2 gaps correspond to frameshifts). These choices were made so that the penalty for codon indels is higher than the average penalty for 3 substitutions.

Fig. 4. Human and mouse FMR1NB proteins, aligned using a translation-dependent matrix of evolutionary distance 0.7 (the sign of each substitution score appears on the fourth row).
The size 4 gap corresponds to a frameshift that corrects the reading frame. Figure 4 presents a fragment of the alignment obtained on the FMR1NB proteins of human (gene ID ENSG00000176988) and mouse (gene ID ENSMUSG00000062170). The algorithm finds a frameshift near the 100th amino acid, managing to align the initial part of the proteins at the DNA level. Similar frameshifted alignments are obtained for human vs. cow and human vs. dog, while alignments between proteins of primates do not contain frameshifts. The consistency of the frameshift position in these alignments supports the evidence of a frameshift event that might have occurred in the primate lineage. If confirmed, this frameshift would have modified the first topological domain and the first transmembrane domain of the product protein. Interestingly, the FMR1NB gene occurs near the Fragile X mental retardation 1 gene (FMR1), involved in the corresponding genetic disease [24].

Conclusions
In this paper, we addressed the problem of finding distant protein homologies, in particular affected by frameshift events, from a codon evolution perspective. We search for protein common origins by implicitly aligning all their putative coding DNA sequences, stored in efficient data structures called back-translation graphs. Our approach relies on a dynamic programming alignment algorithm for these graphs, which involves a non-monotonic gap penalty that handles frameshifts and full codon indels differently. We designed a powerful translation-dependent scoring function for nucleotide pairs, based on codon substitution models, whose purpose is to reflect the expected dynamics of coding DNA sequences. The method was shown to perform better than classic alignment on artificial data, obtained by mutating independently, according to a codon substitution model, coding sequences translated with a frameshift.
Moreover, it successfully detected published frameshift mutation cases resulting in functional proteins. We then described an experiment involving homologous mammalian proteins that show little conservation at the amino acid level over a large region, and provided possible frameshifted alignments obtained with our method that may explain the divergence. As illustrated by this example, the proposed method should make it possible to better explain the high divergence of homologous proteins and to help establish new homology relations between genes with unknown origins. An implementation of our method is available at http://bioinfo.lifl.fr/path/.
4,526
1001.3850
2952519855
Several different "hat games" have recently received a fair amount of attention. Typically, in a hat game, one or more players are required to correctly guess their hat colour when given some information about other players' hat colours. Some versions of these games have been motivated by research in complexity theory and have ties to well-known research problems in coding theory, and some variations have led to interesting new research. In this paper, we review Ebert's Hat Game, which garnered a considerable amount of publicity in the late 90's and early 00's, and the Hats-on-a-line Game. Then we introduce a new hat game which is a "hybrid" of these two games and provide an optimal strategy for playing the new game. The optimal strategy is quite simple, but the proof involves an interesting combinatorial argument.
A few years prior to the introduction of Ebert's Hat Game, in 1994, a similar game was described by Aspnes, Beigel, Furst and Rudich @cite_7 . In their version of the game, players are not allowed to pass, and the objective is for a majority of the players to guess correctly. For the three-player game, it is easy to describe a strategy that will succeed with probability @math , just as in Ebert's game: Alice votes the opposite of Bob's hat colour; Bob votes the opposite of Charlie's hat colour; and Charlie votes the opposite of Alice's hat colour.
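The three-player cyclic strategy described above can be checked exhaustively. This small sketch (the function name cyclic_strategy is ours) confirms the 3/4 success probability:

```python
from itertools import product

def cyclic_strategy(hats):
    """Alice guesses the opposite of Bob's colour, Bob of Charlie's, Charlie of Alice's."""
    a, b, c = hats
    return (1 - b, 1 - c, 1 - a)

# No passing is allowed: the group wins when a majority of guesses is correct.
# The three events a!=b, b!=c, c!=a are either all false (monochromatic hats)
# or exactly two are true, so the group wins in 6 of the 8 configurations.
wins = sum(
    sum(g == h for g, h in zip(cyclic_strategy(hats), hats)) >= 2
    for hats in product((0, 1), repeat=3)
)
```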
{ "abstract": [ "We consider the problem of approximating a Boolean functionf∶ 0,1 n → 0,1 by the sign of an integer polynomialp of degreek. For us, a polynomialp(x) predicts the value off(x) if, wheneverp(x)≥0,f(x)=1, and wheneverp(x)<0,f(x)=0. A low-degree polynomialp is a good approximator forf if it predictsf at almost all points. Given a positive integerk, and a Boolean functionf, we ask, “how good is the best degreek approximation tof?” We introduce a new lower bound technique which applies to any Boolean function. We show that the lower bound technique yields tight bounds in the casef is parity. Minsky and Papert [10] proved that a perceptron cannot compute parity; our bounds indicate exactly how well a perceptron canapproximate it. As a consequence, we are able to give the first correct proof that, for a random oracleA, PP A is properly contained in PSPACE A . We are also able to prove the old AC0 exponential-size lower bounds in a new way. This allows us to prove the new result that an AC0 circuit with one majority gate cannot approximate parity. Our proof depends only on basic properties of integer polynomials." ], "cite_N": [ "@cite_7" ], "mid": [ "1990538552" ] }
Yet Another Hat Game
In this introduction, we review two popular hat games and mention some related work. In Section 2, we introduce our new game and give a complete solution for it. In Section 3, we make some brief comments. • If a player's hat colour could not result in a bad configuration, then that player passes. Strategies for more players are based on this idea of specifying certain appropriately chosen bad configurations and then using a similar strategy as in the 3-player game. The bad configurations are obtained using Hamming codes, which are perfect single error correcting codes. For every integer m ≥ 2, there is a Hamming code of length n = 2^m − 1 containing 2^(2^m − m − 1) = 2^(n−m) codewords. In a Hamming code, every non-codeword can be changed into exactly one codeword by changing one entry. (This property allows the Hamming code to correct any single error that occurs during transmission.) If the configuration of hats is not a codeword, then there is a unique position i such that changing entry i creates a codeword. Player i will therefore guess correctly and every other player will pass. If the configuration of hats is a codeword, then everyone will guess incorrectly. Thus the group wins if and only if the configuration of hats is not a codeword. Since there are 2^(n−m) codewords and 2^n configurations in total, the success probability is 1 − 2^(−m) = 1 − 1/(n + 1). It can be proven fairly easily that this success probability is optimal, and can be attained only when a perfect 1-error correcting code exists. More generally, any strategy for this hat game on an arbitrary number n of players is "equivalent" to a covering code of length n, and thus optimal strategies (for any number of players) are known if and only if optimal covering codes are known (see [8] for additional information).

Hats-on-a-line
Another popular hat game has n players standing in a line. Hats of two colours (gray and brown) are distributed randomly to each player.
Each player P_i (1 ≤ i ≤ n) can only see the hats worn by players P_{i+1}, . . . , P_n (i.e., the players "ahead of" P_i in the line). Each player is required to guess their hat colour, and they guess in the order P_1, . . . , P_n. The objective is to maximise the number of correct guesses [3,2]. Clearly the first player's guess will be correct with probability 50%, no matter what her strategy is. However, a simple strategy can be devised in which players P_2, . . . , P_n always guess correctly by making use of information gleaned from prior guesses. As before, suppose that 0 corresponds to gray and 1 corresponds to brown. Let c_i denote the colour of player P_i's hat, 1 ≤ i ≤ n. Here is the strategy:
• P_1 knows the values c_2, . . . , c_n (she can see the hats belonging to P_2, . . . , P_n). P_1 provides as her guess the value g_1 = Σ_{i=2}^{n} c_i mod 2.
• P_2 hears the value g_1 provided by P_1, and P_2 knows the values c_3, . . . , c_n. Therefore P_2 can compute c_2 = g_1 − Σ_{i=3}^{n} c_i mod 2. P_2's guess is c_2, which is correct.
• For any player P_j with j ≥ 2, P_j hears the values g_1, c_2, . . . , c_{j−1} provided by P_1, . . . , P_{j−1} respectively, and P_j knows the values c_{j+1}, . . . , c_n. Therefore P_j can compute c_j = g_1 − Σ_{i∈{2,...,n}\{j}} c_i mod 2. P_j's guess is c_j, which is correct.
It is not hard to see that the same strategy can be applied for an arbitrary number of colours, q, where q > 1. The colours are named 0, . . . , q − 1 and all computations are performed modulo q. If this is done, then P_1 has probability 1/q of guessing correctly, and the remaining n − 1 players will always guess correctly. Clearly this is optimal.

A New Hats-on-a-line Game
When the second author gave a talk to high school students about Ebert's Hat Game, one student asked about sequential voting. It is attractive to consider sequential voting especially in the context of the Hats-on-a-line Game, but in that game the objective is different than in Ebert's game.
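The arithmetic strategy for the original Hats-on-a-line Game is easy to simulate. In this sketch (play_line is an illustrative name), the first player announces the sum mod q of the hats she sees, and every later player recovers his own colour exactly:

```python
def play_line(hats, q=2):
    """Hats-on-a-line strategy: P1 announces the sum mod q of the hats she sees;
    each later player subtracts what he has heard and what he sees ahead."""
    n = len(hats)
    guesses = [sum(hats[1:]) % q]          # g_1: correct only with probability 1/q
    for j in range(1, n):
        heard = sum(guesses[1:]) % q       # earlier players' guesses, all correct
        seen = sum(hats[j + 1:]) % q       # hats ahead of player j
        guesses.append((guesses[0] - heard - seen) % q)
    return guesses
```

By induction, every entry of guesses after the first equals the corresponding hat colour, matching the claim that only P_1 can be wrong.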
A natural "hybrid" game would allow sequential voting, but retain the same objective as in Ebert's game. So we consider the following new hats-on-a-line game, specified as follows:
• hats of q > 1 colours are distributed randomly;
• visual information is restricted to the hats-on-a-line scenario;
• sequential voting occurs in the order P_1, . . . , P_n with abstentions allowed; and
• the objective is that at least one player guesses correctly and no player guesses incorrectly.
We'll call this game the New Hats-on-a-line Game. First, we observe that it is sufficient to consider strategies where only one player makes a guess. If the first player to guess is incorrect, then any subsequent guesses are irrelevant because the players have already lost the game. On the other hand, if the first player to guess is correct, then the players will win if all the later players pass. We consider the simple strategy presented in Table 3, which we term the Gray Strategy: assume that gray is one of the hat colours; for each player P_i (1 ≤ i ≤ n), when it is player P_i's turn, if he can see at least one gray hat, he passes; otherwise, he guesses "gray". The Gray Strategy can be applied for any number of colours (assuming that gray is one of the colours, of course!). It is easy to analyse the success probability of the Gray Strategy:
Theorem 2.1. The success probability of the Gray Strategy for the New Hats-on-a-line Game with q hat colours and n players is 1 − ((q − 1)/q)^n.
Proof. The probability that P_1 sees no gray hat is ((q − 1)/q)^{n−1}. In this case, her guess of "gray" is correct with probability 1/q. If P_1 passes, then there is at least one gray hat among the remaining n − 1 players. Let j = max{i : P_i has a gray hat}. Then players P_1, . . . , P_{j−1} will pass and player P_j will correctly guess "gray". So the group wins if player P_1 passes. Overall, the probability of winning is
(1/q) × ((q − 1)/q)^{n−1} + 1 × (1 − ((q − 1)/q)^{n−1}) = 1 − ((q − 1)/q)^n.
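Theorem 2.1 can also be verified by brute force for small n and q. In this sketch (gray_strategy_wins is an illustrative name), colour 0 plays the role of gray:

```python
from fractions import Fraction
from itertools import product

def gray_strategy_wins(n, q):
    """Exact success probability of the Gray Strategy, by exhausting all q^n
    configurations: the first player who sees no gray ahead guesses gray."""
    wins = 0
    for hats in product(range(q), repeat=n):
        # players pass while a gray hat is visible ahead; player n-1 sees nobody,
        # so some player always guesses, and the group wins iff that hat is gray
        guesser = next(i for i in range(n) if 0 not in hats[i + 1:])
        wins += hats[guesser] == 0
    return Fraction(wins, q ** n)
```

The exhaustive count matches the closed form 1 − ((q − 1)/q)^n for every small case tried.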
The main purpose of this section is to show that the Gray Strategy is an optimal strategy. (By the term "optimal", we mean that the strategy has the maximum possible probability of success, where the maximum is computed over all possible strategies allowed by the game.) We'll do two simple special cases before proceeding to the general proof. (The proof of the general case is independent of these two proofs, but the proofs of the special cases are still of interest due to their simplicity.) We first show that the Gray Strategy is optimal if q = 2. In this proof and all other proofs in this section, we can restrict our attention without loss of generality to deterministic strategies.
Theorem 2.2. The maximum success probability for any strategy for the New Hats-on-a-line Game with two hat colours and n players is 1 − 2^{−n}.
Proof. The proof is by induction on n. For n = 1, the result is trivial, as any guess by P_1 is correct with probability 1/2. So we can assume n > 1. Suppose there are c configurations of n − 1 hats for which player P_1 guesses a colour. We consider two cases:
case 1: c ≥ 1. There are c cases where P_1's guess is correct with probability 1/2. Therefore the probability of an incorrect guess by P_1 is (1/2) × (c/2^{n−1}) ≥ 1/2^n.
case 2: c = 0. Since player P_1 always passes, the game reduces to an (n − 1)-player game, in which the probability of winning is at most 1 − 2^{−(n−1)}, by induction.
Considering both cases, we see that the probability of winning is at most max{1 − 2^{−n}, 1 − 2^{−(n−1)}} = 1 − 2^{−n}.
We observe that the above proof holds even when every player has complete visual information, as the restricted visual information in the hats-on-a-line model is not used in the proof. We next prove optimality for the two-player game for an arbitrary number of hat colours, as follows.
Theorem 2.3. The maximum success probability for any strategy for the New Hats-on-a-line Game with q hat colours and two players is 1 − (q − 1)/q^2 = (2q − 1)/q^2.
Proof.
Suppose that player P_1 guesses her hat colour for r out of the q possible colours for P_2's hat that she might see. Any guess she makes is correct with probability 1/q. We distinguish two cases:
case 1: r = q. If r = q, then the overall success probability is 1/q.
case 2: r < q. In this case, player P_1 passes with probability (q − r)/q. Given that P_1 passes, P_2 knows that his hat is one of q − r equally possible colours, so his guess will be correct with probability 1/(q − r). Therefore the overall success probability is
(1/q) × (r/q) + (1/(q − r)) × ((q − r)/q) = r/q^2 + 1/q.
To maximise this quantity, we take r = q − 1. This yields a success probability of (2q − 1)/q^2. Case 2 yields the optimal strategy because (2q − 1)/q^2 > 1/q when q > 1.

The Main Theorem
Based on the partial results proven above, it is tempting to conjecture that the maximum success probability is 1 − ((q − 1)/q)^n, for any integers n > 1 and q > 1. In fact, we will prove that this is always the case. The proof is done in two steps. A strategy is defined to be restricted if any guess made by any player other than the first player is always correct. First, we show that any optimal strategy must be a restricted strategy. Then we prove optimality of the Gray Strategy by considering only restricted strategies. In all of our proofs, we denote the colour of P_i's hat by c_i, 1 ≤ i ≤ n. The n-tuple (c_1, . . . , c_n) is the configuration of hats.
Lemma 2.4. Any optimal strategy for the New Hats-on-a-line Game is a restricted strategy.
Proof. Suppose S is an optimal strategy for the New Hats-on-a-line Game that is not restricted. If player P_1 passes, then the outcome of the game is determined by the (n − 1)-tuple (c_2, . . . , c_n), which is known to P_1. Since P_1 knows the strategies of all the players, she can determine exactly which (n − 1)-tuples will lead to incorrect guesses by a later player. Denote this set of (n − 1)-tuples by F. Because S is not restricted, it follows that F ≠ ∅.
We create a new strategy S′ by modifying S as follows:
1. If (c_2, . . . , c_n) ∈ F, then P_1 guesses an arbitrary colour (e.g., P_1 could guess "gray").
2. If (c_2, . . . , c_n) ∉ F, then proceed as in S.
It is easy to see that S′ is a restricted strategy. The strategies S and S′ differ only in what happens for configurations (c_1, . . . , c_n) where (c_2, . . . , c_n) ∈ F. When (c_2, . . . , c_n) ∈ F, S′ will guess correctly with probability 1/q. On the other hand, S always results in an incorrect guess when (c_2, . . . , c_n) ∈ F. Because |F| ≥ 1, the success probability of S′ is greater than the success probability of S. This contradicts the optimality of S, and the desired result follows.
Now we proceed to the second part of the proof.
Lemma 2.5. The maximum success probability for any restricted strategy for the New Hats-on-a-line Game with q hat colours and n players is 1 − ((q − 1)/q)^n.
Proof. Suppose an optimal restricted strategy S is being used. Let A denote the set of (n − 1)-tuples (c_2, . . . , c_n) for which P_1 guesses; let B denote the set of (n − 1)-tuples for which P_1 passes and P_2 guesses (correctly); and let C denote the set of (n − 1)-tuples for which P_1 and P_2 both pass. Clearly every (n − 1)-tuple is in exactly one of A, B, or C, so
|A| + |B| + |C| = q^{n−1}. (1)
Now construct A′ (B′, C′, resp.) from A (B, C, resp.) by deleting the first coordinate (i.e., the value c_2) from each (n − 1)-tuple. A′, B′ and C′ are treated as multisets. We make some simple observations:
(i) B′ ∩ C′ = ∅. This is because P_2's strategy is determined by the (n − 2)-tuple (c_3, . . . , c_n).
(ii) For each (c_3, . . . , c_n) ∈ B′, there are precisely q − 1 occurrences of (c_3, . . . , c_n) ∈ A′. This follows because player P_2 can be guaranteed to guess correctly only when his hat colour is determined uniquely.
(iii) A′ ∩ C′ = ∅. This follows from the optimality of the strategy S.
(The existence of an (n − 1)-tuple (c_2, . . . , c_n) ∈ A such that (c_3, . . . , c_n) ∈ C′ contradicts the optimality of S: P_1 should pass, for this configuration will eventually lead to a correct guess by a later player.) We now define a restricted strategy S′ for the (n − 1)-player game with players P_2, . . . , P_n (here P_2 is the "first" player). The strategy is obtained by modifying S, as follows:
1. P_2 guesses (arbitrarily) if (c_3, . . . , c_n) ∈ A′ ∪ B′ and P_2 passes if (c_3, . . . , c_n) ∈ C′. (This is well-defined in view of the three preceding observations.)
2. P_3, . . . , P_n proceed exactly as in strategy S.
Since the set of (n − 2)-tuples for which P_2 passes is the same in both of the strategies S and S′, it follows that P_3, . . . , P_n only make correct guesses in S′, and therefore S′ is restricted. Let β_n denote the maximum number of (n − 1)-tuples for which the first player passes in an optimal restricted strategy. We will prove that
β_n ≤ q^{n−1} − (q − 1)^{n−1}. (2)
This is true for n = 2, since β_2 ≤ 1. Now we proceed by induction on n. We will use a few equations and inequalities. First, from (ii), it is clear that
|A| ≥ (q − 1)|B|. (3)
Next, because S′ is a restricted strategy for n − 1 players, we have
|C| ≤ qβ_{n−1}. (4)
Finally, from the optimality of S, it must be the case that
|B| + |C| = β_n. (5)
Applying (1), (3), (4) and (5), we have
β_n = |B| + |C| = q^{n−1} − |A| ≤ q^{n−1} − (q − 1)|B| = q^{n−1} − (q − 1)(β_n − |C|) ≤ q^{n−1} − (q − 1)β_n + q(q − 1)β_{n−1},
from which we obtain
β_n ≤ q^{n−2} + (q − 1)β_{n−1}.
Applying the induction assumption, we see that
β_n ≤ q^{n−2} + (q − 1)(q^{n−2} − (q − 1)^{n−2}) = q^{n−1} − (q − 1)^{n−1},
showing that (2) is true. Finally, using (2), the success probability of S is computed to be
≤ 1/q + (β_n / q^{n−1}) × (1 − 1/q) ≤ 1/q + ((q^{n−1} − (q − 1)^{n−1}) / q^{n−1}) × (1 − 1/q) = 1 − ((q − 1)/q)^n.
Summarizing, we have proven our main theorem.
Theorem 2.6.
The Gray Strategy for the New Hats-on-a-line Game with q hat colours and n players is optimal.
Proof. This is an immediate consequence of Theorem 2.1 and Lemmas 2.4 and 2.5.

Comments
It is interesting to compare Ebert's Hat Game, the Hats-on-a-line Game and the New Hats-on-a-line Game. The optimal solutions to Ebert's game are easily shown to be equivalent to covering codes. There are many open problems concerning these combinatorial structures, so the optimal solution to Ebert's game is not known in general. The optimal solution to the Hats-on-a-line Game is a simple arithmetic strategy, and it is obvious that the strategy is optimal. We have introduced the New Hats-on-a-line Game as a hybrid of the two preceding games. The optimal strategy is very simple, but the proof of optimality is a rather delicate combinatorial proof by induction. This game does not seem to have any connection to combinatorial structures such as covering codes. The analyses of these three games utilize different techniques. At the present time, there does not appear to be any kind of unified approach that is appropriate for understanding these games and/or other types of hat games.
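As an independent sanity check on Theorem 2.3, the two-player game with q = 2 is small enough to search every deterministic strategy pair exhaustively. All names below are illustrative; P_2 sees no hats, so his strategy is a function of P_1's announcement only:

```python
from itertools import product

MOVES = ["pass", 0, 1]   # two colours: 0 (gray) and 1

def wins(s1, s2):
    """Count winning configurations: at least one guess made, none incorrect.
    s1 maps P2's visible hat to P1's action; s2 maps P1's announcement to P2's."""
    total = 0
    for hats in product((0, 1), repeat=2):
        a1 = s1[hats[1]]
        a2 = s2[a1]
        guesses = [(g, h) for g, h in ((a1, hats[0]), (a2, hats[1])) if g != "pass"]
        total += bool(guesses) and all(g == h for g, h in guesses)
    return total

# exhaust all 9 strategies for P1 and all 27 for P2
best = max(
    wins(dict(zip((0, 1), t1)), dict(zip(MOVES, t2)))
    for t1 in product(MOVES, repeat=2)
    for t2 in product(MOVES, repeat=3)
)
```

The maximum over all strategy pairs is 3 of the 4 configurations, i.e. 3/4, which matches (2q − 1)/q^2 for q = 2 and is achieved by the Gray Strategy.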
3,181