diff --git "a/4dE4T4oBgHgl3EQfbgxF/content/tmp_files/2301.05073v1.pdf.txt" "b/4dE4T4oBgHgl3EQfbgxF/content/tmp_files/2301.05073v1.pdf.txt" new file mode 100644--- /dev/null +++ "b/4dE4T4oBgHgl3EQfbgxF/content/tmp_files/2301.05073v1.pdf.txt" @@ -0,0 +1,2636 @@ +Gradient TRIX +CHRISTOPH LENZEN, CISPA Helmholtz Center for Information Security, Germany +SHREYAS SRINIVAS, CISPA Helmholtz Center for Information Security, Germany and Saarbrucken +Graduate School for Computer Science, Saarland University, Germany +Gradient clock synchronization (GCS) algorithms minimize the worst-case clock offset between the nodes in a +distributed network of diameter 𝐷 and size 𝑛. They achieve optimal offsets of Θ(log 𝐷) locally, i.e. between +adjacent nodes [18], and Θ(𝐷) globally [2]. As demonstrated in [3], this is a highly promising approach +for improved clocking schemes for large-scale synchronous Systems-on-Chip (SoC). Unfortunately, in large +systems, faults hinder their practical use. State of the art fault-tolerant GCS [4] has a drawback that is fatal in +this setting: It relies on node and edge replication. For 𝑓 = 1, this translates to at least 16-fold edge replication +and high degree nodes, far from the optimum of 2𝑓 + 1 = 3 for tolerating up to 𝑓 faulty neighbors. +In this work, we present a self-stabilizing GCS algorithm for a grid-like directed graph with optimal node in- +and out-degrees of 3 that tolerates 1 faulty in-neighbor. If nodes fail with independent probability 𝑝 ∈ π‘œ(π‘›βˆ’1/2), +it achieves asymptotically optimal local skew of Θ(log 𝐷) with probability 1 βˆ’ π‘œ(1); this holds under general +worst-case assumptions on link delay and clock speed variations, provided they change slowly relative to the +speed of the system. The failure probability is the largest possible ensuring that with probabity 1 βˆ’ π‘œ(1) for +each node at most one in-neighbor fails. As modern hardware is clocked at gigahertz speeds and the algorithm +can simultaneously sustain a constant number of arbitrary changes due to faults in each clock cycle, this +results in sufficient robustness to dramatically increase the size of reliable synchronously clocked SoCs. +CCS Concepts: β€’ Hardware β†’ Very large scale integration design; Very large scale integration design; +β€’ Computing methodologies β†’ Distributed algorithms; +Additional Key Words and Phrases: Clock Synchronisation, Fault Tolerance, VLSI, Self-Stabilisation +1 +INTRODUCTION +In their seminal work from 2004 [7], Fan and Lynch introduced the task of Gradient Clock Synchro- +nization (GCS). In a network of nodes that synchronize their clocks, it requires to minimize the +worst-case clock offset between neighbors. Two key insights motivate minimizing this local skew: +β€’ In many applications the skew between adjacent nodes is the appropriate measure of quality. +β€’ The global skew, the maximum clock offset between any pair of nodes in the network, grows +linearly with the diameter 𝐷 of the network [2]. +Defying the intuition of many, Fan and Lynch proved a lower bound of Ξ©(log 𝐷/log log 𝐷) on the +local skew. Follow-up work then established that this bound was very close to the mark: the best +local skew that can be achieved is Θ(log 𝐷) [18]. This exponential gap between global and local +skew strongly suggests better scalability of systems employing this approach to synchronization. +Yet, more than a decade after these results have been published, we know of no efforts to apply +these techniques in products. This is not for want of demand! 
To drive this point home, consider +the case of clocking synchronous hardware. Conceptually speaking, state of the art hardware +that operates synchronously distributes a clock signal from a single source using a tree network, +see e.g. [9, 21]. However, for any tree spanning a square grid, there will be adjacent grid points +whose distance in the tree is proportional to the side length of the grid [8]. Hence, the worst-case +local skew on a computer chip clocked by a clock tree must grow linearly with the side length +of the chip [2]. Indeed, these theoretical results are reflected in the reality of hardware suppliers. +Modern systems gave up on maintaining globally synchronous operation, instead communicating +Authors’ addresses: Christoph Lenzen, lenzen@cispa.de, CISPA Helmholtz Center for Information Security, SaarbrΓΌcken, +Germany; Shreyas Srinivas, shreyas.srinivas@cispa.de, CISPA Helmholtz Center for Information Security, SaarbrΓΌcken, +Germany and Saarbrucken Graduate School for Computer Science, Saarland University, SaarbrΓΌcken, Germany. +arXiv:2301.05073v1 [cs.DC] 12 Jan 2023 + +2 +Christoph Lenzen and Shreyas Srinivas +asynchronously between multiple clock islands [1]. This comes at a steep cost, both in terms of +communication latency [12] and ease of design. +So, which obstacle prevents application? At least in the above setting, neither large hidden +constants nor an overly complex algorithm get in the way. On the contrary, recent work demon- +strates that implementation effort is easily managable and pays off already for moderately-sized +systems [3]. Instead, the main obstacle are faults. To see that this is the key issue, recall that +today’s hardware comprises an enourmous number of individual components. Recent off-the-shelf +hardware has transistor counts beyond the 10 billion mark [22], requiring either incredibly low fault +rates or some degree of fault-tolerance. In a system composed of multiple clock islands that interact +asynchronously, these islands are canonical choices for fault-containment regions. Thus, one can +get away with using a clocking scheme in each island that cannot sustain faults, interpreting a +fault of the clocking subsystem as a fault of the respective island. In contrast, when the clocking +subsystems of the islands interact with each other via a clock synchronization algorithm, we must +ensure that a clock fault in a single island does not bring down the entire system! +Fault-Tolerant Clocking. When clocking hardware, high connectivity networks are not scalable. This +limits the number of concurrent faults that can be sustained, as tolerating up to 𝑓 faults requires +a node connectivity of 2𝑓 + 1. In [4], this bound is matched asymptotically by augmenting an +arbitrary network such that the GCS algorithm from [18] is simulated in a fault-tolerant way. Here, +augmentation means to replace each node in the original network by a clique of size 3𝑓 + 1 and +each edge by a biclique. The clique then synchronizes internally using the classic Lynch-Welch +algorithm [10], and the resulting local outputs are interpreted as (an approximation of) a joint +cluster clock on which the (non-tolerant) GCS algorithm from [18] is simulated. +Unfortunately, this approach is impractical due to the large overhead in terms of edges. Leaving +asymptotics aside – the edge overhead compared to the original graph is Θ(𝑓 2) rather than 𝑂(𝑓 ) – +even for the important special case of 𝑓 = 1 node degrees will be at least 15. 
This is a far cry from +the simplicity of current distribution techniques, and factor 5 beyond the minimum node degree +of 2𝑓 + 1 = 3. What might look like a β€œmoderate constant” to a theoretician will not only cause a +headache to the engineer trying to route all of these edges with few layers and precise timing, it +will also substantially increase communication delay uncertainty. This, in turn, directly translates +into an increased skew, placing the break-even point with prior art beyond relevant limits. +In summary, it is essential to get as close as possible to the minimum required connectivity. This +train of thought led to the study of fault-tolerant clock distribution in low-degree networks [6, 20]. +Both of these works have in common that they assume that the clock signal is generated at a central +location. This enables these approaches to achieve self-stabilization and tolerance to isolated faults +with very simple pulse forwarding schemes. The basic idea is to propagate the signal from layer to +layer, having each node wait for two nodes signaling a clock pulse before locally generating and +forwarding their own pulse. Moreover, it is assumed that in absence of faults delays are changing +only slowly over time. Thus, matching the input frequency to the expected delay between grid +layers results in clock pulses that are well-synchronized between adjacent layers. +The above works differ in the grid structure they use (Figure 1) and the skew bounds they provide: +β€’ Denoting by 𝑑 βˆ’ 𝑒 and 𝑑 the minimum and maximum end-to-end communication delay, in a +grid of width 𝐷 [6] bounds the local skew by 𝑑 + 𝑂(𝑒2𝐷/𝑑). Since in practice 𝑑 ≫ 𝑒, this is a +non-trivial bound. Unfortunately, the fact that 𝑑 ≫ 𝑒 also means that this bound is too large +for applications. Even worse, for each fault this bound increases by 𝑑. +β€’ In [20], each fault adds at most 𝑒 to the local skew. Observe that the used grid also has the +minimum required connectivity, as each node has only 3 incoming and outgoing edges each. + +Gradient TRIX +3 +0 +𝑒 +2𝑒 +𝑑 +𝑑 +𝑑 +π‘‘βˆ’π‘’ +π‘‘βˆ’π‘’ +π‘‘βˆ’π‘’ +𝑑 +𝑑 +𝑑 +Fig. 1. TRIX [20] (top) and HEX [6] (bottom) grids. TRIX uses the naive pulse forwarding scheme of waiting +for the second copy of each pulse before forwarding it. We see how the TRIX grid can accumulate a skew of +Θ(𝑒𝐷). In the HEX grid, each node waits for two copies of a pulse from in-neighbours. However, 2 of the 4 +in-neighbors are on the same layer, causing a skew of 𝑑 if a neighbor on the preceding layer crashes. +Alas, these advantages come at the expense of poor scaling of worst-case skews with the +number of layers: on layer β„“, adjacent nodes may pulse up to 𝑒ℓ time apart. +Note that in order to tolerate failure of an arbitrary component, also the clock source has to +be replicated and the replicas to be synchronized in a fault-tolerant and self-stabilizing manner. +However, here one can employ techniques for fully connected networks [11, 19]; using them in a +single location for 𝑓 = 1 does not constitute a scalability issue. +In light of the above, in this work we ask the question +β€œCan a small local skew be achieved in a fault-tolerant way at minimal connectivity?” +Our Contribution +We provide a positive answer to the above question for the special case of 𝑓 = 1. This is achieved by +using the same grid as in [20], but with a different rule for forwarding pulses. Our novel algorithm +is designed as a discrete and fault-tolerant counterpart to the GCS algorithm from [18]. 
+Making this work requires substantial conceptual innovation and technical novelty. On the +conceptual level, our algorithm simulates a discretized variant of the (non-fault-tolerant!) GCS +algorithm from [18] on an arbitrary base graph of minimum degree 2. In more detail, each copy of +the graph, referred to as layer, represents a β€œtime step” of the GCS algorithm. For each node, there +is an edge from its copy on a given layer to the copies of itself and its neighbors on the next layer. +The forwarded pulses along these edges serve two very different functions: +β€’ The pulse messages sent to copies of neigbhors correspond to the GCS algorithm’s messages +for estimating clock offsets to neighbors. +β€’ The pulse messages sent between copies of the same node convey its local time from one of +its copies to the next. +Note that this turns a permanently faulty node in the grid into a simulated node being faulty in a +single time step only. This is of vital importance, because it enables us to rely on the self-stabilization +properties of the GCS algorithm from [18]. These are implicitly shown in [13]; we prove them +explicitly in the different setting of this work. +However, by itself this does not guarantee bounded skew between correct nodes, since we +also need to contain the effect of such a β€œtransient” fault on the state of the simulated algorithm. + +4 +Christoph Lenzen and Shreyas Srinivas +Otherwise, a fault would increase skews arbitrarily, effectively corrupting downstream nodes: at +any given node, the smallest or largest time at which a pulse from neighbors on the preceding +layer is received could be determined by a faulty node. We can overcome this issue if there is at +most one faulty in-neighbor. The key observation to controlling the impact of a faulty node on the +pulse time lies in that it can indeed affect only one of three times: the smallest or largest time at +which a pulse from copies of neighbors on the previous layer is received, or the time at which the +pulse from the copy of the node itself is received. In particular, the median of these three times lies +within the interval spanned by the correct in-neighbors’ pulse times. By imposing the additional +constraint to always tie the time at which a pulse is generated closely to this median, we can limit +the local impact of a fault on skews. +In summary, we seek to simultaneously simulate a time-discrete variant of the GCS algorithm +from [18], while also guaranteeing that pulse forwarding times are, up to a sufficiently small +deviation, identical to median reception times plus a fixed offset. Unfortunately, no existing GCS +algorithm that achieves a small local skew [14, 15, 17, 18] can be used for this purpose as-is, since +their decision rules are in conflict with the above β€œstick to the median” requirement. +As our main technical contribution, we resolve this conflict, simultaneously adapting the resulting +algorithm to the discrete setting. To do so, we determine suitably weakened discrete variants of the +slow and fast conditions introduced in [15]. In essence, we allow that a simulated node whose pulse +time is ahead all of its neighbors’ pulse times to delay its next pulse by the difference to the fastest +neighbor; an analogous rule applies to nodes pulsing later than all of their neighbors. 
From the +perspective of the GCS algorithm in [18], this constitutes a potentially arbitrarily large clock β€œjump,” +which we leverage to implement the stick-to-the-median requirement despite the arbitrary changes +in timing faulty nodes may apply to their pulse messages. To prevent uncontrolled oscillatory +behavior arising from adjacent nodes β€œjumping” in opposite directions, we introduce an additional +condition, which we refer to as jump condition. Essentially, it slightly reduces how large jumps +are to avoid that uncertainty in message delays and local clock speeds cause nodes to β€œoverswing,” +potentially resulting in arbitrarily large skews, cf. Figure 5. +Turning so many knobs at once meant that it was not clear that such a scheme would work. +Indeed, bounding the skew of this novel algorithm turned out to be highly challenging, as jumps +that delay pulses rather than speeding them up invalidate the fundamental assumption that clocks +progress at rate at least 1 present in all prior work [14, 15, 17, 18]. As a result, the main technical +hurdle and contribution turned out to be proving a bound on the local skew Lβ„“ between neighbors +in the same layer β„“ for the fault-free case. +Theorem 2. If there are no faults, then Lβ„“ ≀ 4πœ…(2 + log 𝐷) for all β„“ ∈ N. +Here, choosing the input clock frequency to be 1/(2𝑑) results in πœ… ∈ Θ(𝑒 + (πœ— βˆ’ 1)𝑑), where +it is assumed that local clocks run at rates between 1 and πœ— > 1. All of our results require that +𝑑 ≫ 𝑒 + (πœ— βˆ’ 1)𝑑, or equivalently, that the local skew remains small compared to 𝑑. Note that if +this condition does not hold, we are outside the parameter range of interest: then skews become +large compared to the length of a clock cycle under ideal conditions and clock frequency has to be +reduced substantially. +To address faults, we bound by how much faults can affect timing. Due to the aforementioned +stick to the median rule, we can bound the local impact of a fault on timing in terms of the local +skew. However, applying this argument repeatedly, skews would grow exponentially in the number +of faults. While tolerating a constant number of faults is certainly better than tolerating none, this +is unsatisfactory, since the requirement of one faulty in-neighbor holds with probability 1 βˆ’ π‘œ(1) +for a fairly high independent probability of 𝑝 ∈ π‘œ(1/βˆšπ‘›). Given that the topology we are most + +Gradient TRIX +5 +interested in is roughly a square grid, i.e., there are roughly βˆšπ‘› layers, the naive approach outlined +above does not result in a non-trivial bound on the skew unless 𝑝 is very close to 1/𝑛. +We provide an improved analysis exploiting that our base graph is almost a line. Hence, the 𝑑-hop +neighborhood grows linearly with 𝑑 and hence the number of nodes in layers β„“β€² ∈ [β„“ βˆ’ 𝑛1/12, β„“] +that affect the pulse time of a node in layer β„“ is in Θ(𝑛1/6). Thus, if nodes fail with probability +𝑝 ∈ π‘œ(1/βˆšπ‘›), the probability that there are more than 2 faulty nodes within distance 𝑛1/12 that +affect a given node is π‘œ(1/𝑛). Intuitively, this buys enough time for the self-stabilization properties +of the simulated algorithm to reduce its local skew again before it spirals out of control. +Theorem 5. With probability 1 βˆ’ π‘œ(1), Lβ„“ ∈ 𝑂(πœ… log 𝐷) for all β„“ ∈ N. +The final step is to extend this bound on the local skew within a layer to one that includes +adjacent nodes in different layers. 
As we propagate pulses layer by layer, we cannot hope to match +pulse times of the π‘˜-th pulse between different layers. Instead, we match the input period to the +nominal time a pulse spends on each layer. This works neatly so long as there are no changes in +message delay, clock speed, and behavior of faulty nodes between consecutive pulses. +Theorem 7. If faulty nodes do not change the timing of their output pulses, then L ∈ 𝑂(πœ… log 𝐷) +with probability 1 βˆ’ π‘œ(1). +To a large extent, this strong assumption is justified in our specific context. Clock speeds of +modern systems are in the gigahertz range, and the amount of change in timing that occurs within a +single clock cycle is much smaller than over the lifetime of a system [23]. Similarly, the by far most +common timing faults are stuck-at faults, i.e., the signal observed by downstream nodes remains +constant logical 0 or 1, and broken connections. From the point of view of the receiving node, this +is equivalent to an early or late pulse, respectively, without any change between pulses. +Of course, timing will still change slowly, the above benign faults will occur at some point, before +which the nodes worked correctly, and some faults may be more severe. Using once more that +faulty nodes’ impact on timing is bounded by the local skew, the bound from Theorem 7 extends to +a constant number of arbitrary faults in each pulse alongside small changes in delays and hardware +clock speeds. +Corollary 7. With probability 1βˆ’π‘œ(1), L ∈ 𝑂(πœ… log 𝐷) even when in each pulse (i) a constant number +of faulty nodes change their output behavior and timing, (ii) link delays vary by up to π‘›βˆ’1/2𝑒 log 𝐷, +and (iii) hardware clock speeds vary by up to π‘›βˆ’1/2(πœ— βˆ’ 1) log 𝐷. +Finally, if all else fails, we can fall back on the ability of the pulse progation algorithm to +recover from arbitrary transient faults. In constrast to the simulated GCS algorithm, achieving +self-stabilization of the pulse propagation scheme itself is straightforward due to the directionality +of the propagation. +Theorem 6. The pulse propagation algorithm can be implemented in a self-stabilizing way. It +stabilizes within 𝑂(βˆšπ‘›) pulses. +In light of these results, we view this work as a major step towards simultaneously achieving +high performance and strong robustness in the practical setting of clock distribution in hardware. +In alignment with the theoretical question motivating this work, we achieve an asymptotically +optimal local skew at the minimum possible node degree under the assumption of node failures +with probability π‘œ(π‘›βˆ’1/2). +Organization of this Article. In Section 2, we discuss the system model, introduce the graph on +which we run our synchronization algorithm, and motivate our modeling choices, including its non- +standard aspects. We then present a simplified version of the algorithm that better highlights the + +6 +Christoph Lenzen and Shreyas Srinivas +conceptual approach in Section 3; the full algorithm and its equivalence without faulty predecessors +is shown in Appendix B. We follow with the formal derivation of the skew bounds in Section 4. +2 +MODELING +The model we use is non-standard, as it is tailored to the specific setting outlined in the introduction. +Accordingly, we will emphasize and discuss model choices where this seems prudent. +Setting. Recall that our goal is to provide a synchronized clock signal to a large System-on-Chip. 
+Physically, this means that we need to provide the clock signal to a rectangular area; for simplicity, +we will assume it to be square. We want to supply a uniform grid of nodes in the square area with +this signal, which then will serve as roots of relatively small local clock trees supplying the low-level +components with the clock signal. If these trees contribute a maximum clock skew of Ξ” and the +skew between adjacent grid points is at most L, the triangle inequality guarantees a worst-case +skew of L + 2Ξ” between adjacent components of the System-on-Chip. The local clock trees can be +designed using standard methodology. Therefore, in the following we will focus exclusively on the +grid of their roots. +A key assumption we make is that communication delay between correct adjacent nodes changes +only slowly with time. This enables us to generate synchronized pulses at all grid nodes by matching +the input frequency with the (inverse) propagation time between consecutive layers. This is justified +for two reasons: +β€’ The dominant sources of uncertainty in propagation delay are inaccuracies in component +fabrication, aging, and temperature and frequency variations that are slow relative to the +time it takes to propagate an input clock pulse across even a large System-on-Chip [23] +β€’ Changing delays of all links between a pair of adjacent layers by up to 𝛿 increases skew +bounds by at most 𝛿, cf. Lemma 22. +In order to generate sufficiently synchronized pulses at the nodes of layer 0, a straightforward +solution is to use a redundant path, i.e., a path of 3-cliques in which adjacent cliques are fully +bipartitely connected, to propagate pulses from the clock reference along an edge of the chip. As +we show in Corollary 6, this results in input pulses of small enough local skew. For each clique, +one of the nodes will be the layer-0 node providing its output pulse to close-by nodes of layer 1. +In a perfect grid, all layers would consist of a path. Unfortunately, this results in the issue that the +endpoints of the path, lacking one neighbor, would have only two adjacent nodes in the preceding +and subsequent layer. A naive solution is to insert a additional edges between the boundary nodes, +turning the layer into a cycle and the entire graph into a cylinder (with some special treatment +of layer 0). However, realizing such a solution on the square would result in far too long edges +between boundary nodes or require to, essentially, replicate each layer, effectively doubling the +number of nodes and edges in the graph. +Instead, we choose to replicate the boundary nodes only, which then provides the β€œmissing” input +to the next layer. Note that this increases the degree of the nodes next to the boundary nodes by +one. We cope with this by a general analysis allowing for the layers to be copies of an arbitrary base +graph of minimum degree 2. In Figure 2 and Figure 3, we show the base graph and the connectivity +of nodes between adjacent layers of our synchronization network in our assumed setting. +Network Graph. We are given a simple connected base graph 𝐻 = (𝑉, 𝐸) of minimum degree 2 and +diameter 𝐷 ∈ N>0. For 𝑣,𝑀 ∈ 𝑉 , denote by 𝑑(𝑣,𝑀) ≀ 𝐷 the distance from 𝑣 to 𝑀 in 𝐻. To derive the +graph 𝐺 = (𝑉𝐺, 𝐸𝐺) we use for synchronization, for each β„“ ∈ N we create a copy 𝑉ℓ of 𝑉 . Denoting +by (𝑣, β„“) the copy of 𝑣 ∈ 𝑉 in 𝑉ℓ, we define 𝐸ℓ := {((𝑣, β„“), (𝑀, β„“ + 1)) | {𝑣,𝑀} ∈ 𝐸 ∨ 𝑣 = 𝑀}. We now +obtain 𝐺 by setting 𝑉𝐺 := οΏ½ +β„“ ∈N 𝑉ℓ and 𝐸𝐺 := οΏ½ +β„“ ∈N 𝐸ℓ. 
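To make the construction concrete, the following minimal sketch (illustrative only; the function name and the adjacency-list representation are ours, not the paper's) builds the layered edge set from a base graph H given as an adjacency list.

    # Illustrative sketch: construct the layered synchronization graph G from a
    # base graph H, following the definition above. The copy (v, l) of node v on
    # layer l has outgoing edges to (v, l+1) and to (w, l+1) for every {v, w} in E.

    def layered_edges(base_adj, num_layers):
        """base_adj: dict mapping each v in V to the set of its neighbors in H.
        Returns E_G restricted to layers 0, ..., num_layers - 1."""
        edges = set()
        for layer in range(num_layers - 1):
            for v, neighbors in base_adj.items():
                edges.add(((v, layer), (v, layer + 1)))      # copy of v itself
                for w in neighbors:
                    edges.add(((v, layer), (w, layer + 1)))  # copies of v's neighbors
        return edges

    # Example (hypothetical encoding): the base graph of Figure 2 is a path whose
    # end nodes are replicated, e.g. base_adj = {"a'": {"a"}, "a": {"a'", "b"},
    # "b": {"a", "c"}, ...}, so that every node has degree at least 2.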
That is, for each layer ℓ ∈ N we have a copy of v ∈ V, which has outgoing edges to the copies of itself and all its neighbors on layer ℓ + 1.¹ Since G is a DAG, we refer to out-neighbors as successors and in-neighbors as predecessors.
Fig. 2. Base graph H used in this work. Rather than using a cycle, which would result in a TRIX grid, we replicate the end nodes of a line to ensure a minimum degree of 2. Alternatively, one could use a line and exploit that the probability that one of the O(√n) boundary nodes fails is o(1).
Fig. 3. Layer structure of G resulting from our choice of H. Most nodes have in- and out-degree 3, some 4.
Fault Model. An unknown subset F ⊂ V_G is Byzantine faulty, meaning that these nodes may violate the protocol arbitrarily. Edge faults are mapped to node faults, i.e., if edge ((v, ℓ), (w, ℓ + 1)) is faulty, we instead consider (v, ℓ) faulty. We impose the constraint that no node has two faulty predecessors. Formally, for all ℓ ∈ N and v ∈ V, |({(v, ℓ)} ∪ ⋃_{{v,w}∈E} {(w, ℓ)}) ∩ F| ≤ 1. When analyzing the system under random faults, we will assume that each node fails independently with probability p ∈ o(1/√n), which ensures that the above constraint is met with probability 1 − o(1). In addition, we impose the restriction that at most a constant number of such faulty nodes change their timing behavior between consecutive pulses.
Communication. Each node has the ability to broadcast pulse messages on its outgoing edges. If node v_ℓ ∈ V_ℓ broadcasts at time t_{v,ℓ}, its successors receive its message at a (potentially different) time from [t_{v,ℓ} + d − u, t_{v,ℓ} + d]. The maximum end-to-end delay d includes any delay caused by computation. Typically, the delay uncertainty u is much smaller than d. As discussed above, we assume delays to be static, i.e., each edge e = ((v, ℓ), (w, ℓ + 1)) has an unknown, but fixed associated delay δ_e ∈ [d − u, d] applied to each pulse sent from (v, ℓ) to (w, ℓ + 1).
Note that faulty nodes can send pulses at arbitrary times, without being required to broadcast; even if physical node implementations disallow point-to-point communication, edge faults could still result in this behavior.
Local Clocks and Computations. Each node is able to approximately measure the progress of time by means of a local time reference. We model this by node (v, ℓ) having query access to a hardware clock H_{v,ℓ}: R_{≥0} → R_{≥0} satisfying
∀t < t′ ∈ R_{≥0}: t′ − t ≤ H_{v,ℓ}(t′) − H_{v,ℓ}(t) ≤ ϑ(t′ − t)
for some ϑ > 1. No known phase relation is assumed between the hardware clocks. The algorithm will use them exclusively to measure how much time passes between local events. Analogous to delays, we assume that hardware clock speeds are static. This is justified in the same way as for delays.
¹This is an abuse of notation, since in a (roughly) square grid of n := |V_G| nodes, we have Θ(√n) layers. Since n, i.e., the size of the grid, will only play a role when making probabilistic statements, we opted for this more convenient notation.
Computations are deterministic. However, in addition to receiving a message, the hardware clock reaching a time value previously determined by the algorithm can also trigger computations and possibly broadcasting a pulse.
Output and Skew. The goal of the algorithm is to synchronize the pulses generated by correct nodes.
+We assume that correct nodes on layer 0 generate well-synchronized pulses at times π‘‘π‘˜ +𝑣,0 for π‘˜ ∈ N>0 +at a frequency we control. In Appendix A, we discuss how to realize this assumption in detail. All +other correct nodes generate pulses π‘‘π‘˜ +𝑣,β„“, π‘˜ ∈ N>0, based on the pulse messages received from their +predecessors. +Our measure of quality is the worst-case local skew the algorithm guarantees. We define the local +skew as the largest offset between the π‘˜-th pulses of adjacent nodes on the same layer or pulses π‘˜ +and π‘˜ + 1 of adjacent nodes on layers β„“ and β„“ + 1, whichever is larger. Formally, for β„“ ∈ N we define +Lβ„“ := sup +π‘˜ ∈N +max +{𝑣,𝑀}∈𝐸 +(𝑣,β„“),(𝑀,β„“)βˆ‰πΉ +{|π‘‘π‘˜ +𝑣,β„“ βˆ’ π‘‘π‘˜ +𝑀,β„“|}, +Lβ„“,β„“+1 := sup +π‘˜ ∈N +max +((𝑣,β„“),(𝑀,β„“+1)) βˆˆπΈβ„“ +(𝑣,β„“),(𝑀,β„“+1)βˆ‰πΉ +{|π‘‘π‘˜ +𝑣,β„“ βˆ’ π‘‘π‘˜+1 +𝑀,β„“+1|}, +and L := supβ„“ ∈N max{Lβ„“, Lβ„“,β„“+1}. This deviates from the standard definition of the local skew: +β€’ The definition is adjusted to pulse synchronization, which can be viewed as an essentially +equivalent time-discrete variant of clock synchronization [16]. +β€’ Between consecutive layers, we synchronize consecutive pulses. After initialization, which is +complete once the first pulse propagated through the (in practice finite) grid, this is equivalent +to a layer-dependent index shift of pulse numbers. +3 +ALGORITHM +In this section, we discuss the pulse forwarding algorithm. We provide a simplified version of the +algorithm that behaves identical so long as the predecessors of the executing node are correct. The +full algorithm needs to handle the possibility that faulty nodes send multiple messages or none at +all. This complicates bookkeeping and loop control, distracting from the principles underlying the +algorithm’s operation. Accordingly, we defer the full algorithm to Appendix B, where we show the +equivalence to the simplified variant when there are no faulty predecessors. +3.1 +Simplified Pulse Forwarding Algorithm +The algorithm proceeds in iterations corresponding to pulses. In each iteration, node (𝑣, β„“) +(1) timestamps the arrival times of the pulses of its predecessors using its hardware clock, +(2) determines a correction value C𝑣,β„“ based on these timestamps, and +(3) forwards the pulse Ξ› βˆ’ 𝑑 βˆ’ C𝑣,β„“ time after receiving the pulse from π‘£β„“βˆ’1, measured by its +hardware clock. +If all reception times are close to each other, then C𝑣,β„“ will be small. Recalling that messages are +in transit for roughly 𝑑 time, this translates to Ξ› being the nominal time for a pulse to propagate +from layer β„“ βˆ’ 1 to layer β„“. We need to choose Ξ› large enough such that the above sequence can be +always realized. That is, we need to consider how far apart the reception times of messages from +the previous layer can be, and ensure that Ξ› βˆ’ 𝑑 exceeds this value plus the resulting C𝑣,β„“. +Assuming that this precondition holds, Algorithm 1 implements the above approach. In each +loop iteration, it initializes three reception times to ∞: +β€’ 𝐻own, which stores the arrival time of the pulse from (𝑣, β„“ βˆ’ 1). From the perspective of the +simulated GCS algorithm, this reflects the state of the node 𝑣 ∈ 𝑉 simulated by (𝑣, β„“), β„“ ∈ N. +β€’ 𝐻min, which stores the minimum arrival time of a pulse from a neighbor π‘€β„“βˆ’1, 𝑀 β‰  𝑣. This +corresponds to the first pulse received from a neighbor 𝑀 of 𝑣 in 𝐺 in this iteration. 
• H_max, which stores the maximum arrival time of a pulse from a neighbor w_{ℓ−1}, w ≠ v. This corresponds to the last pulse received from a neighbor w of v in G in this iteration.

Algorithm 1 Simplified pseudocode for discrete GCS at node (v, ℓ), ℓ > 0. As shown in Lemma 29, this code is equivalent to Algorithm 3 in the absence of faults. The parameters Λ and κ will be determined later, based on the analysis.

    loop
        H_own, H_min, H_max := ∞
        do
            if received pulse from (v, ℓ − 1) then
                H_own := H_{v,ℓ}(t)
            if received pulse from first (w, ℓ − 1), {v, w} ∈ E, then
                H_min := H_{v,ℓ}(t)
            if received pulse from last (w, ℓ − 1), {v, w} ∈ E, then
                H_max := H_{v,ℓ}(t)
        until H_own, H_min, H_max < ∞
        C_{v,ℓ} := min_{s∈N}{max{H_own − H_max + 4sκ, H_own − H_min − 4sκ}} − κ/2
        if C_{v,ℓ} < 0 then
            C_{v,ℓ} := min{H_own − H_min − κ/2 + 2κ, 0}
        else if C_{v,ℓ} > ϑκ then
            C_{v,ℓ} := max{H_own − H_max − κ/2 − κ, ϑκ}
        wait until H_{v,ℓ}(t) = H_own + Λ − d − C_{v,ℓ}
        broadcast pulse

The do-until loop fills these variables with the correct values. At the heart of the algorithm lies the computation of C_{v,ℓ}. If there were no faults, one could always compute
Δ := min_{s∈N}{max{H_own − H_max + 4sκ, H_own − H_min − 4sκ}} − κ/2
and then choose the closest value from the range [0, ϑκ], i.e., set
C_{v,ℓ} := Δ if Δ ∈ [0, ϑκ], C_{v,ℓ} := 0 if Δ < 0, and C_{v,ℓ} := ϑκ if Δ > ϑκ.
To get intuition on this choice, observe that min_{x∈R}{max{H_own − H_min − x, H_own − H_max + x}} is attained when H_own − H_max + x = H_own − H_min − x, which is equivalent to x = (H_max − H_min)/2, i.e., H_own − Δ = (H_max + H_min)/2. If timing were perfectly accurate, the reception times of the pulse messages could serve as exact proxies for the actual pulse forwarding times of the nodes on layer ℓ − 1. In iteration k, this would mean to generate the pulse at (v, ℓ) faster if (v, ℓ − 1) generated its pulse later than the average of min_{{v,w}∈E}{t^k_{w,ℓ−1}} and max_{{v,w}∈E}{t^k_{w,ℓ−1}}. Thus, any (v, ℓ) for which t^k_{v,ℓ−1} − min_{{v,w}∈E}{t^k_{w,ℓ−1}} > max_{{v,w}∈E}{t^k_{w,ℓ−1}} − t^k_{v,ℓ−1} would choose C_{v,ℓ} > 0, attempting to reduce max_{{v,w}∈E}{|t^k_{v,ℓ} − t^k_{w,ℓ}|} compared to max_{{v,w}∈E}{|t^k_{v,ℓ−1} − t^k_{w,ℓ−1}|}. This can be viewed as trying to reduce the local skew by a greedy strategy.
Unfortunately, this naive strategy fails to account for inaccuracies due to message delay uncertainty and drifting hardware clocks. Nonetheless, we follow this strategy up to deviations of O(κ). The additional terms serve the following purposes:
• Considering only discrete choices for x ∈ 4κN rather than arbitrary x ∈ R is the key ingredient that makes the algorithmic approach succeed, cf. [15]. Essentially, this is necessary because there is no way to determine t^k_{v,ℓ−1} − t^k_{w,ℓ−1} precisely. Discretizing observed skews in units of κ ∈ Θ(u + (ϑ − 1)(Λ − d)) enables a delicate strategy that alternates between overestimating skews to locally generate the next pulse earlier for the sake of "catching up" with others and underestimating skews to "wait" for others to catch up.
• Subtracting κ/2 accounts for errors in measuring skews, which are caused by uncertainty in message delay and hardware clock speed.
• To limit the damage that a faulty predecessor of (v, ℓ) can do, we ensure that (v, ℓ) generates its pulse without too large of a deviation from the median of t^k_{v,ℓ−1}, min_{{v,w}∈E}{t^k_{w,ℓ−1}}, and max_{{v,w}∈E}{t^k_{w,ℓ−1}} (plus the nominal offset of Λ). This is achieved by permitting corrections C_{v,ℓ} < 0 if (v, ℓ − 1) clearly generated its pulse earlier than min_{{v,w}∈E}{t^k_{w,ℓ−1}} and C_{v,ℓ} > ϑκ if it clearly generated its pulse later than max_{{v,w}∈E}{t^k_{w,ℓ−1}}, respectively.
To further motivate the last point, recall that there can be at most one fault among the predecessors of (v, ℓ). A single faulty predecessor can affect only one of the three values H_own, H_min, and H_max: it can control H_own arbitrarily, cause H_min to be smaller than the minimum reception time from a correct node (w, ℓ − 1), {v, w} ∈ E, or cause H_max to exceed the maximum reception time from correct nodes (w, ℓ − 1), {v, w} ∈ E. Hence, ensuring that pulses are generated with only a small offset relative to median{H_own, H_min, H_max} + Λ − d indeed limits the damage that a fault can do.
Achieving all of the desired properties is non-trivial, leading to the fairly involved choice of C_{v,ℓ}. It can be viewed as simultaneously implementing relaxed fast and slow conditions (as introduced in [15]), an additional jump condition required to make the GCS algorithm work under these relaxed fast and slow conditions, and the requirement to stick close to the median of predecessors' pulse times. In Section 4.2, we specify the (relaxed) slow and fast conditions, as well as the jump condition, and show that the algorithm implements them. Lemmas 19 and 20 show that the algorithm also enforces that pulse times deviate little from the time interval spanned by correct predecessors (offset by Λ).
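To make this choice of C_{v,ℓ} concrete, here is a minimal sketch (illustrative only; it mirrors the simplified Algorithm 1 and is not meant as the hardware implementation). It uses that the inner maximum is a convex function of s, so the discrete minimum is attained at an integer next to (H_max − H_min)/(8κ).

    # Illustrative sketch of the correction rule of Algorithm 1.
    import math

    def correction(H_own, H_min, H_max, kappa, vartheta):
        # Delta = min over integer s >= 0 of
        #   max(H_own - H_max + 4*s*kappa, H_own - H_min - 4*s*kappa) - kappa/2.
        # The unconstrained minimizer is s* = (H_max - H_min) / (8*kappa) >= 0,
        # so evaluating the two neighboring integers suffices.
        s_star = (H_max - H_min) / (8 * kappa)
        candidates = {0, math.floor(s_star), math.ceil(s_star)}
        delta = min(
            max(H_own - H_max + 4 * s * kappa, H_own - H_min - 4 * s * kappa)
            for s in candidates
        ) - kappa / 2
        if delta < 0:
            # (v, l-1) pulsed clearly earlier than its earliest neighbor:
            # permit a negative correction, i.e., delay the own pulse.
            return min(H_own - H_min - kappa / 2 + 2 * kappa, 0)
        if delta > vartheta * kappa:
            # (v, l-1) pulsed clearly later than its latest neighbor:
            # permit a correction beyond vartheta*kappa, i.e., pulse earlier.
            return max(H_own - H_max - kappa / 2 - kappa, vartheta * kappa)
        return delta

    # The node then broadcasts once its hardware clock reads
    # H_own + Lambda - d - correction(H_own, H_min, H_max, kappa, vartheta).

Note how the two out-of-range branches keep the generated pulse close to the median of the three reception times, as discussed above.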
There is some freedom in the choice of parameters. For simplicity, we fix a good choice of κ and note that d must satisfy a lower bound B ∈ O(sup_{ℓ∈N}{L_ℓ} + κ). Observe that this constraint simply means that the skew bounds are useful, as a skew that is of similar size as the maximum end-to-end delay requires slowing the system down substantially. Finally, Λ must be at least d + O(sup_{ℓ∈N}{L_ℓ}), which due to the previous constraint holds e.g. for the choice Λ = 2d. Formally, for a sufficiently large constant C,²
κ := 2(u + (1 − 1/ϑ)(Λ − d)),   (1)
Λ ≥ Cϑ(sup_{ℓ∈N}{L_ℓ} + u) + d, and   (2)
d ≥ C(ϑ(sup_{ℓ∈N}{L_ℓ} + u) + κ).   (3)
²We do not attempt to optimize constants in this work.
Complete Algorithm. The complete algorithm cannot wait for messages from all predecessors to determine when to send its pulse, as a faulty node not sending its pulse then would deadlock all its descendants. As discussed above, the hardware clock time of the next pulse does not deviate much from median{H_own, H_min, H_max} + Λ − d, but does depend on max{H_min, H_own, H_max} in some cases. However, we will prove that L_{ℓ−1} is small enough such that all pulse messages from correct nodes will be received in time. Hence, it is sufficient to wait until median{H_own, H_min, H_max} + ϑL_{ℓ−1} (or later) according to H_{v,ℓ}. Provided that Λ − d is large enough, this implies that any message for computing C_{v,ℓ} that is missing is due to a fault; in fact, at the point in time when this becomes clear, C_{v,ℓ} is already determined, regardless of how late the message would arrive.
The complete algorithm differs from Algorithm 1 by covering the case that a signal does not arrive in time. Intuitively, one can treat the respective message arrival time (H_own or H_max; H_min is not possible) as ∞, while allowing such an ∞ to cancel out in subtraction:
• If H_own = ∞, then C_{v,ℓ} ∈ H_own − H_max − O(κ), and (v, ℓ) will generate its pulse at local time H_own + Λ − d − C_{v,ℓ} ∈ H_max + Λ − d + O(κ).
• If H_max = ∞ and H_own ≥ H_min, then C_{v,ℓ} ∈ H_own − H_min ± Θ(κ) and (v, ℓ) will generate its pulse at local time H_own + Λ − d − C_{v,ℓ} ∈ H_min + Λ − d ± O(κ).
• If H_max = ∞ and H_own < H_min, then C_{v,ℓ} ∈ [0, 2κ] and (v, ℓ) will generate its pulse at local time H_own + Λ − d − C_{v,ℓ} ∈ H_own + Λ − d − O(κ).
Note that in all cases, the pulse is generated with an offset of Λ − d − Θ(κ) from the median reception time. The complete algorithm follows the above intuition, leveraging the fact that there is no need to wait indefinitely to determine that the third signal is late, and is given in Appendix B.
Last, but not least, it is of interest to make the pulse forwarding algorithm self-stabilizing [5]. Due to the design choice of propagating the clock signal from a single source along a DAG, this will immediately translate to the overall scheme being self-stabilizing, so long as the clock generation is self-stabilizing, too. This is straightforward, because one can assume that the signals from the previous layer are already well-synchronized. Thus, all that nodes need to do is to detect when all but possibly one (faulty) pulse signal arrive in close temporal proximity to determine when to clear their memory and start a new iteration of the main loop. In Appendix B.1, we sketch how this can be achieved using standard techniques.
4 ANALYSIS
We now analyze the pulse propagation scheme under the assumption that layer 0 generates well-synchronized pulses. We discuss a suitable method for achieving this in Appendix A. Our analysis proceeds along the following lines:
(1) We show that, if the local skew is small enough compared to Λ, i.e., Equation (2) holds, all correct nodes execute their iterations as intended. That is, each correct node on layer ℓ > 0 receives the k-th pulses of its correct predecessors in its k-th loop iteration. This is deferred to Appendix B. We then proceed under the assumption that this holds true, which will be justified retroactively once we establish that the local skew is bounded.
(2) Since delays and hardware clock speeds are (approximated as being) static, any (substantial) change in relative timing of consecutive pulses is due to faulty nodes. Thus, the task of bounding the local skew reduces to bounding the intra-layer skew L_ℓ for a single pulse, since such a bound must take into account the full variability introduced by faulty nodes. This reasoning is deferred to Appendix C.
(3) Based on potential functions, we analyze L_ℓ in the absence of faults. The results entail not only bounded skew, but also that the potentials recover if they become unexpectedly large.
(4) We show that faulty nodes have limited impact on the potentials. From this and the above recovery property, we conclude that skews behave favorably also when there are faults.
As stated above, the first two steps of our line of reasoning are deferred to the appendix. The main challenge is to bound L_ℓ for a single pulse. Due to the first step, we know that the k-th pulse at correct nodes depends only on the k-th pulses of their predecessors (Lemma 28). Therefore, in the following fix k and denote the k-th pulse time of correct (v, ℓ) ∈ V_G by t_{v,ℓ}.
Recall that for v, w ∈ V, we denote by d(v, w) their distance in the base graph H. Our analysis is built around the following potential functions.
Definition 1 (Potential Functions). Let v, w ∈ V and s, ℓ ∈ N. We define
ψ^s_{v,w}(ℓ) := t_{v,ℓ} − t_{w,ℓ} − 4sκ d(v, w),
Ψ^s(ℓ) := max_{v,w∈V}{ψ^s_{v,w}(ℓ)},
ξ^s_{v,w}(ℓ) := t_{v,ℓ} − t_{w,ℓ} − (4s − 2)κ d(v, w), and
Ξ^s(ℓ) := max_{v,w∈V}{ξ^s_{v,w}(ℓ)}.
Bounding Ψ^s(ℓ) readily translates to bounding L_ℓ.
Observation 1. If for s, ℓ ∈ N and some Ψ^s ∈ R_{≥0} it holds that Ψ^s(ℓ) ≤ Ψ^s, then L_ℓ ≤ Ψ^s + 4sκ.
Proof. Fix k ∈ N and suppose that {v, w} ∈ E maximizes |t_{v,ℓ} − t_{w,ℓ}|. W.l.o.g., assume that t_{v,ℓ} ≥ t_{w,ℓ}. Since {v, w} ∈ E, we have that d(v, w) = 1. Hence,
|t_{v,ℓ} − t_{w,ℓ}| = t_{v,ℓ} − t_{w,ℓ} = ψ^s_{v,w}(ℓ) + 4sκ ≤ Ψ^s(ℓ) + 4sκ ≤ Ψ^s + 4sκ.
Since k ∈ N is arbitrary, it follows that L_ℓ ≤ Ψ^s + 4sκ. □
In summary, the goal of our analysis will be to bound Ψ^s(ℓ) by a small value for some s satisfying 4sκ ∈ O(u log D).
We first study the behavior of the algorithm if there are no faults. Accordingly, this will be tacitly assumed in all statements of this section, with the exception of Section 4.4. Note that by Lemma 29, this means that we may also tacitly assume that Algorithm 1 is run by all nodes in layers ℓ ∈ N_{>0}. In Section 4.4, we will then bound the impact of faulty layers on the potential.
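As a concrete reading of Definition 1, here is a minimal sketch (an analysis-side helper, not part of the algorithm; the function names are ours) that evaluates the potentials for one layer, given the pulse times t[v] of that layer and the base-graph distances dist[v][w] in H.

    # Illustrative helpers for Definition 1: evaluate the potentials of a layer.
    # t: dict mapping v to the pulse time t_{v,l}; dist: dict of dicts with d(v, w).

    def psi(t, dist, s, kappa, v, w):
        # psi^s_{v,w}(l) = t_{v,l} - t_{w,l} - 4*s*kappa*d(v, w)
        return t[v] - t[w] - 4 * s * kappa * dist[v][w]

    def xi(t, dist, s, kappa, v, w):
        # xi^s_{v,w}(l) = t_{v,l} - t_{w,l} - (4*s - 2)*kappa*d(v, w)
        return t[v] - t[w] - (4 * s - 2) * kappa * dist[v][w]

    def Psi(t, dist, s, kappa):
        # Psi^s(l): maximum of psi over all node pairs
        return max(psi(t, dist, s, kappa, v, w) for v in t for w in t)

    def Xi(t, dist, s, kappa):
        # Xi^s(l): maximum of xi over all node pairs
        return max(xi(t, dist, s, kappa, v, w) for v in t for w in t)

    # By Observation 1, Psi(t, dist, s, kappa) <= B for a layer implies that its
    # intra-layer skew is at most B + 4*s*kappa.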
4.1 Basic Statements
We first show three basic lemmas. The first relates the local reception times of pulses to the actual sending times, bounding the error by κ.
Lemma 1. For (v, ℓ) ∈ V_ℓ, ℓ ∈ N_{>0}, set t_min := min_{{v,w}∈E}{t_{w,ℓ−1}} and t_max := max_{{v,w}∈E}{t_{w,ℓ−1}}. Then
t_{v,ℓ−1} − t_max − κ ≤ H_own − H_max − κ/2 ≤ t_{v,ℓ−1} − t_max and
t_{v,ℓ−1} − t_min − κ ≤ H_own − H_min − κ/2 ≤ t_{v,ℓ−1} − t_min.
Proof. We prove the first inequality; the second is shown analogously. Let t′_{v,ℓ−1} and t′_max denote the times when the pulse messages sent at time t_{v,ℓ−1} and t_max are received at v_ℓ, respectively. From the bounds on message delays, it follows that
t_{v,ℓ−1} + d − u ≤ t′_{v,ℓ−1} ≤ t_{v,ℓ−1} + d and t_max + d − u ≤ t′_max ≤ t_max + d.
Thus,
t_{v,ℓ−1} − t_max − u ≤ t′_{v,ℓ−1} − t′_max ≤ t_{v,ℓ−1} − t_max + u.
Using the bounds on hardware clock rates, we get that
|t′_{v,ℓ−1} − t′_max − (H_own − H_max)| ≤ (ϑ − 1)|t′_{v,ℓ−1} − t′_max| ≤ (ϑ − 1)(|t_{v,ℓ−1} − t_max| + u).
Applying Equation (2), we infer that
|t_{v,ℓ−1} − t_max − (H_own − H_max)| ≤ |t′_{v,ℓ−1} − t′_max − (H_own − H_max)| + u
  ≤ (ϑ − 1)|t_{v,ℓ−1} − t_max| + ϑu
  ≤ (ϑ − 1)L_{ℓ−1} + ϑu
  ≤ (ϑ − 1)((Λ − d)/ϑ − u) + ϑu
  = (1 − 1/ϑ)(Λ − d) + u.
Finally, using Equation (1), we conclude that
t_{v,ℓ−1} − t_max − κ ≤ t_{v,ℓ−1} − t_max − 2((1 − 1/ϑ)(Λ − d) + u) ≤ H_own − H_max − κ/2 ≤ t_{v,ℓ−1} − t_max. □
The second lemma shows that corrections are not too large.
Lemma 2. For all v ∈ V and ℓ ∈ N_{>0}, C_{v,ℓ} ≤ Λ − d.
Proof. Abbreviate
Δ = min_{s∈N}{max{H_own − H_max + 4sκ, H_own − H_min − 4sκ}} − κ/2.
We distinguish three cases.
• Δ < 0. Then Algorithm 1 sets
C_{v,ℓ} ≤ min{H_own − H_min − κ/2 + 2κ, 0} ≤ 0.
As Λ ≥ d by Equation (2), the claim of the lemma holds in this case.
• 0 ≤ Δ ≤ ϑκ. Then, using the notation of Lemma 1,
C_{v,ℓ} = Δ = min_{s∈N}{max{H_own − H_max + 4sκ, H_own − H_min − 4sκ}} − κ/2
  ≤ min_{s∈N}{max{t_{v,ℓ−1} − t_max + 4sκ, t_{v,ℓ−1} − t_min − 4sκ}}
  ≤ min_{s∈N}{max{L_{ℓ−1} + 4sκ, L_{ℓ−1} − 4sκ}}
  = L_{ℓ−1},
which is smaller than Λ − d by Equation (2).
• Δ > ϑκ. Note that then
ϑκ < Δ ≤ max{H_own − H_max, H_own − H_min} − κ/2 = H_own − H_max − κ/2,
as H_max ≥ H_min. Therefore, applying Lemma 1,
C_{v,ℓ} = max{H_own − H_max − κ/2 − κ, ϑκ} ≤ H_own − H_max − κ/2 ≤ t_{v,ℓ−1} − t_max ≤ L_{ℓ−1} < Λ − d. □
The third lemma bounds the time difference between the pulses of (v, ℓ − 1) and (v, ℓ).
Lemma 3. For all v ∈ V and ℓ ∈ N_{>0} it holds that
d − u + (Λ − d − C_{v,ℓ})/ϑ ≤ t_{v,ℓ} − t_{v,ℓ−1} ≤ Λ − C_{v,ℓ}.
Proof. Let t′_{v,ℓ−1} denote the time at which (v, ℓ) receives the pulse sent by (v, ℓ − 1) at time t_{v,ℓ−1}. Inspecting the code of Algorithm 1, we see that
H_{v,ℓ}(t_{v,ℓ}) = H_own + Λ − d − C_{v,ℓ} = H_{v,ℓ}(t′_{v,ℓ−1}) + Λ − d − C_{v,ℓ}.
Since C_{v,ℓ} ≤ Λ − d by Lemma 2, it follows that H_{v,ℓ}(t_{v,ℓ}) ≥ H_{v,ℓ}(t′_{v,ℓ−1}) and hence t_{v,ℓ} ≥ t′_{v,ℓ−1}. Using the bounds on message delays and hardware clock speeds, we get that
t_{v,ℓ} − t_{v,ℓ−1} = t_{v,ℓ} − t′_{v,ℓ−1} + t′_{v,ℓ−1} − t_{v,ℓ−1} ≤ H_{v,ℓ}(t_{v,ℓ}) − H_{v,ℓ}(t′_{v,ℓ−1}) + d = Λ − C_{v,ℓ}
and
t_{v,ℓ} − t_{v,ℓ−1} = t_{v,ℓ} − t′_{v,ℓ−1} + t′_{v,ℓ−1} − t_{v,ℓ−1} ≥ (H_{v,ℓ}(t_{v,ℓ}) − H_{v,ℓ}(t′_{v,ℓ−1}))/ϑ + d − u = (Λ − d − C_{v,ℓ})/ϑ + d − u,
which can be rearranged into the claimed inequalities. □
4.2 The Slow, Fast, and Jump Conditions
The key to bounding the local skew without faults is to find the right balance between two conflicting goals: choosing C_{v,ℓ} large enough to "catch up" to predecessors w_{ℓ−1} ≠ v_{ℓ−1} that generated their pulse earlier than v_{ℓ−1}, but small enough to "wait" for predecessors w_{ℓ−1} ≠ v_{ℓ−1} that generated their pulse later than v_{ℓ−1}. The following condition captures what we need regarding the latter.
Definition 2 (Slow Condition). For all s ∈ N, correct layers ℓ − 1 ∈ N, and v_ℓ ∈ V_ℓ \ F, we require the slow condition SC(s) := SC-1(s) ∨ SC-2(s) ∨ SC-3 to hold, where
SC-1(s): C_{v,ℓ}/ϑ ≤ t_{v,ℓ−1} − max_{{v,w}∈E}{t_{w,ℓ−1}} + 4sκ,
SC-2(s): C_{v,ℓ}/ϑ ≤ t_{v,ℓ−1} − min_{{v,w}∈E}{t_{w,ℓ−1}} − 4sκ,
SC-3: C_{v,ℓ} ≤ 0.
+ +Gradient TRIX +15 +This can be viewed as a variant of the slow condition from [15], adjusted to our setting by +quantifying by how much 𝑣ℓ may safely shift the timing of its pulse. The main conceptual difference +to [15] is that we relax the slow condition by adding SC-3. In what follows, we drop 𝑠 from the +notation when it is clear from context. +Lemma 4. For all 𝑠 ∈ N and (𝑣, β„“) ∈ 𝑉ℓ, β„“ ∈ N>0, SC(𝑠) holds at (𝑣, β„“). +Proof. Using Lemma 29, we prove the claim for Algorithm 1. Set 𝑑min := min{𝑣,𝑀}∈𝐸{𝑑𝑀,β„“βˆ’1} +and 𝑑max := min{𝑣,𝑀}∈𝐸{𝑑𝑀,β„“βˆ’1}. If C𝑣,β„“ ≀ 0, SC-3 is trivially satisfied. Hence, assume that C𝑣,β„“ > 0. +Abbreviate +Ξ” = min +𝑠 ∈N {max{𝐻own βˆ’ 𝐻max + 4π‘ πœ…, 𝐻own βˆ’ 𝐻min βˆ’ 4π‘ πœ…}} βˆ’ πœ… +2 += max{𝐻own βˆ’ 𝐻max + 4𝑠minπœ…, 𝐻own βˆ’ 𝐻min βˆ’ 4𝑠minπœ…} βˆ’ πœ… +2, +where 𝑠min ∈ N is an index for which the minimum is attained. +If Ξ” ≀ πœ—πœ…, then C𝑣,β„“ = Ξ”. Otherwise, +C𝑣,β„“ = max +οΏ½ +𝐻own βˆ’ 𝐻max βˆ’ πœ… +2 βˆ’ πœ…,πœ—πœ… +οΏ½ +≀ max{Ξ”,πœ—πœ…} = Ξ”. +Either way, we get that C𝑣,β„“/πœ— < C𝑣,β„“ ≀ Ξ”. +We distinguish two cases. +β€’ 𝐻own βˆ’ 𝐻max + 4𝑠minπœ… β‰₯ 𝐻own βˆ’ 𝐻min βˆ’ 4𝑠minπœ…. Then for 𝑠 ∈ N, 𝑠 β‰₯ 𝑠min, by Lemma 1 we have +that +Ξ” ≀ 𝐻own βˆ’ 𝐻max + 4𝑠minπœ… βˆ’ πœ… +2 ≀ 𝑑𝑣,β„“βˆ’1 βˆ’ 𝑑max + 4π‘ πœ…, +i.e., SC-1 holds. Now consider 𝑠 ∈ N, 𝑠 < 𝑠min. Since 𝐻own βˆ’π»max βˆ’πœ—π‘’ + 4π‘ πœ… < 𝐻own βˆ’π»max βˆ’ +πœ—π‘’ + 4𝑠minπœ… ≀ Ξ”, but the minimum is attained at index 𝑠min, we must have that +Ξ” ≀ 𝐻own βˆ’ 𝐻min βˆ’ 4π‘ πœ… βˆ’ πœ… +2 ≀ 𝑑𝑣,β„“βˆ’1 βˆ’ 𝑑min βˆ’ 4π‘ πœ…, +where the second step again applies Lemma 1. Thus, SC-2 holds. +β€’ 𝐻own βˆ’ 𝐻max + 4𝑠minπœ… < 𝐻own βˆ’ 𝐻min βˆ’ 4𝑠minπœ…. In this case, we analogously infer that SC-1 +holds for 𝑠 > 𝑠min and SC-2 holds for 𝑠 ≀ 𝑠min. +β–‘ +The fast condition is the counterpart to Definition 2 addressing the need to β€œcatch up” to neighbors +that are ahead. +Definition 3 (Fast Condition). For all 𝑠 ∈ N>0, correct layers β„“ βˆ’ 1 ∈ N>0, and 𝑣ℓ ∈ 𝑉ℓ \ 𝐹, we +require the fast condition FC(𝑠) := FC-1(𝑠) ∨ FC-2(𝑠) ∨ FC-3 to hold, where +FC-1(𝑠) : C𝑣,β„“ β‰₯ 𝑑𝑣,β„“βˆ’1 βˆ’ max +{𝑣,𝑀}∈𝐸{𝑑𝑀,β„“βˆ’1} + (4𝑠 βˆ’ 2)πœ… + πœ… +FC-2(𝑠) : C𝑣,β„“ β‰₯ 𝑑𝑣,β„“βˆ’1 βˆ’ +min +{𝑣,𝑀}∈𝐸{𝑑𝑀,β„“βˆ’1} βˆ’ (4𝑠 βˆ’ 2)πœ… + πœ… +FC-3: C𝑣,β„“ β‰₯ πœ…. +This can be viewed as a variant of the fast condition from [15], adjusted to our setting by +quantifying by how much 𝑣ℓ may safely shift the timing of its pulse. The main conceptual difference +to [15] is that we relax the fast condition by adding FC-3. +In addition, note that there is an additive term of πœ… that does not change sign. Its purpose is to +account for the fact that our simulation of the GCS algorithm from [18] operates in discrete time +steps corresponding to the layers. The continuous versions of the GCS algorithm in [14, 15, 18] +can choose this term arbitrarily small. In contrast, we need it to exceed the maximum error in +time measurement accumulated in a step. We remark that, in principle, one could choose this term + +16 +Christoph Lenzen and Shreyas Srinivas +𝑣 +𝑑𝑣 βˆ’ 4π‘ πœ…π‘‘(𝑣,𝑀) +𝑀 +𝑑𝑀 βˆ’ (4𝑠 βˆ’ 2)πœ…π‘‘(𝑣,𝑀) +Fig. 4. Slow condition (left) and fast condition (right). SC(𝑠) is tailored to ensuring that maxπ‘€βˆˆπ‘‰ {πœ“π‘ π‘£,𝑀(β„“)} +(the length of the green arrow) cannot grow quickly. Nodes 𝑀 with C𝑀,β„“ ≀ 0 (SC-3 holds) cannot apply a +correction pushing them below the red line. 
If C𝑀,β„“ > 0, then both SC-1 and SC-2 will ensure that there is a +neighbor π‘₯ of 𝑀 such that the offset of 𝑑𝑀,β„“βˆ’1 βˆ’ C𝑀,β„“/πœ— to the black line does not exceed the one of 𝑑π‘₯,β„“βˆ’1. +In other words, SC ensures that the blue arrows indicating C𝑀,β„“/πœ— do not reach below the red line. This +means that any increase of maxπ‘€βˆˆπ‘‰ {πœ“π‘ π‘£,𝑀(β„“)} is caused by delay and clock speed variation, which in turn is +bounded by πœ…/2 per layer. Similarly, FC(𝑠) is tailored to ensuring that maxπ‘£βˆˆπ‘‰ {πœ‰π‘ π‘£,𝑀(β„“)} (the length of the +green arrow), if positive, decreases by at least πœ…/2. To ensure this, C𝑀,β„“ (indicated by blue arrows) must be +large enough to reach below the red line. This is achieved by FC(𝑠) having an additional β€œslack” term of πœ…, +which overcomes the β€œloss” of πœ…/2 due to uncertainty. +different from πœ…. However, since both need to meet the same lower bound of 𝑒 + (1 βˆ’ 1/πœ—)(Ξ› βˆ’ 𝑑), +there is no asymptotic gain in introducing a separate parameter. +Lemma 5. For all 𝑠 ∈ N and (𝑣, β„“) ∈ 𝑉ℓ, β„“ ∈ N>0, FC(𝑠) holds at (𝑣, οΏ½οΏ½οΏ½). +Proof. Using Lemma 29, we prove the claim for Algorithm 1. Set 𝑑min := min{𝑣,𝑀}∈𝐸{𝑑𝑀,β„“βˆ’1} and +𝑑max := min{𝑣,𝑀}∈𝐸{𝑑𝑀,β„“βˆ’1}. If C𝑣,β„“ β‰₯ πœ—πœ…, trivially FC-3 is satisfied. Hence, assume that C𝑣,β„“ < πœ—πœ…. +Abbreviate +Ξ” = min +𝑠 ∈N {max{𝐻own βˆ’ 𝐻max + 4π‘ πœ…, 𝐻own βˆ’ 𝐻min βˆ’ 4π‘ πœ…}} βˆ’ πœ… +2 += max{𝐻own βˆ’ 𝐻max + 4𝑠minπœ…, 𝐻own βˆ’ 𝐻min βˆ’ 4𝑠minπœ…} βˆ’ πœ… +2, +where 𝑠min ∈ N is an index for which the minimum is attained. +If Ξ” β‰₯ 0, then C𝑣,β„“ = Ξ”. Otherwise, +C𝑣,β„“ = min +οΏ½ +𝐻own βˆ’ 𝐻min βˆ’ πœ… +2 + 2πœ…, 0 +οΏ½ +β‰₯ Ξ”. +Either way, we get that C𝑣,β„“ β‰₯ Ξ”. +For 𝑠 ∈ N, 𝑠 ≀ 𝑠min, by Lemma 1 and Equation (1) it holds that +Ξ” β‰₯ 𝐻own βˆ’ 𝐻max + 4π‘ πœ… βˆ’ πœ… +2 β‰₯ 𝑑𝑣,β„“βˆ’1 βˆ’ 𝑑max + (4𝑠 βˆ’ 2)πœ… + πœ…, +proving that FC-1 holds. For 𝑠 ∈ N, 𝑠 > 𝑠min, by Lemma 1 and Equation (1) we get that +Ξ” β‰₯ 𝐻own βˆ’ 𝐻min βˆ’ 4(𝑠 βˆ’ 1)πœ… βˆ’ πœ… +2 β‰₯ 𝑑𝑣,β„“βˆ’1 βˆ’ 𝑑min βˆ’ (4𝑠 βˆ’ 2)πœ… + πœ…, +showing that FC-2 holds. +β–‘ +Our relaxation of the slow and fast conditions adds a substantial complication. From the per- +spective of the time-continuous variant of the algorithm in [15], we now allow for arbitrarily large +clock β€œjumps,” rather than bounded clock rates. In our discrete version, the rate bound from [15] +corresponds to C𝑣,β„“ ∈ [0,πœ—πœ…]. Without this additional constraint, the slow and fast conditions are +insufficient to bound skews. + +Gradient TRIX +17 +𝑣ℓ+2 +𝑣ℓ+1 +𝑣ℓ +𝑣ℓ+2 +𝑣ℓ+1 +𝑣ℓ +Fig. 5. On the left, it is shown how skews increase without JC. While SC(0) disallows that (𝑣, β„“) speeds up +its pulse by more than the equivalent of (𝑣, β„“ βˆ’ 1) matching the earliest pulse of any (𝑀, β„“ βˆ’ 1), {𝑣,𝑀} ∈ 𝐸, +FC permits that a node (𝑣, β„“) with slow (𝑣, β„“ βˆ’ 1) to β€œovershoot,” i.e., C𝑣,β„“ (shown as blue arrow) gets large. +This results in an amplifying oscillatory behavior. On the right, the same scenario is shown with JC in effect. +JC forces the corrections to stop πœ… before the earliest or latest neighbor, respectively, resulting in a dampened +oscillation. +This is illustrated in Figure 5, showing an execution that satisfies SC and FC, but suffers from +skews that grow without bound. The key issue is that adjacent nodes could β€œjump” in opposite +directions, resulting in an oscillatory behavior in which measurement errors accumulate indefinitely. 
+To avoid this kind of behavior, we add an additional condition that β€œdampens” such oscillations, yet +limits by how much a faulty predecessor can cause an increase in skew. +Definition 4 (Jump Condition). For all correct layers β„“ βˆ’ 1 ∈ N>0 and 𝑣ℓ ∈ 𝑉ℓ \ 𝐹, we require the +jump condition JC := JC-1 ∨ JC-2 ∨ JC-3 to hold, where +JC-1: πœ… < C𝑣,β„“ +πœ— +≀ 𝑑𝑣,β„“βˆ’1 βˆ’ max +{𝑣,𝑀}∈𝐸{𝑑𝑀,β„“βˆ’1} βˆ’ πœ… +JC-2: 0 > C𝑣,β„“ β‰₯ 𝑑𝑣,β„“βˆ’1 βˆ’ +min +{𝑣,𝑀}∈𝐸{𝑑𝑀,β„“βˆ’1} + πœ… +JC-3: 0 ≀ C𝑣,β„“ +πœ— +≀ πœ…. +Lemma 6. Suppose that layer β„“ βˆ’ 1 ∈ N and 𝑣ℓ ∈ 𝑉ℓ are correct. Then JC holds at 𝑣ℓ. +Proof. Using Lemma 29, we prove the claim for Algorithm 1. Set 𝑑min := min{𝑣,𝑀}∈𝐸{𝑑𝑀,β„“βˆ’1} and +𝑑max := min{𝑣,𝑀}∈𝐸{𝑑𝑀,β„“βˆ’1}. We distinguish three cases. +β€’ 0 ≀ C𝑣,β„“ ≀ πœ—πœ…. Then JC-3 is satisfied trivially. + +18 +Christoph Lenzen and Shreyas Srinivas +β€’ C𝑣,β„“ < 0. By Lemma 1 and Equation (1), then +C𝑣,β„“ = 𝐻own βˆ’ 𝐻min βˆ’ πœ… +2 + 2πœ… β‰₯ 𝑑𝑣,β„“βˆ’1 βˆ’ 𝑑min + πœ…, +i.e., JC-2 holds. +β€’ C𝑣,β„“ > πœ—πœ…. By Lemma 1, then +C𝑣,β„“ = 𝐻own βˆ’ 𝐻max βˆ’ πœ… +2 βˆ’ πœ… ≀ 𝑑𝑣,β„“βˆ’1 βˆ’ 𝑑max βˆ’ πœ…, +i.e., JC-3 holds. +β–‘ +4.3 +Bounding Ψ𝑠 in the Absence of Faults +With the conditions established, we are ready to study how Ψ𝑠 (β„“) evolves in the fault-free setting. +The main technical challenge in bounding Ψ𝑠 lies in performing the induction step from 𝑠 βˆ’ 1 ∈ N +to 𝑠. We will argue that for Ψ𝑠 ( Β―β„“ ) to be large for some Β―β„“, Ξžπ‘  (β„“ ) must have been large for some +β„“ < Β―β„“, with an additive term growing with Β―β„“ βˆ’ β„“. +Theorem 1. For 𝑠 ∈ N>0 and layers β„“ ≀ Β―β„“, it holds that +Ψ𝑠 ( Β―β„“ ) ≀ max +οΏ½ +0, Ξžπ‘  (β„“ ) βˆ’ ( Β―β„“ βˆ’ β„“ + 1)πœ… +οΏ½ ++ ( Β―β„“ βˆ’ β„“ ) Β· πœ… +2 . +Proof strategy. Intuitively, we intend to argue that if Ψ𝑠 ( Β―β„“ ) is large, so must be Ξžπ‘  (β„“ ). Tracing +back the cause for this, we show that in every step, we have that Ξžπ‘  (β„“ βˆ’ 1) is larger than Ξžπ‘  (β„“) by at +least πœ…/2. Since Ξžπ‘  ( Β―β„“ ) β‰₯ Ψ𝑠 ( Β―β„“ ), as πœ“π‘  +𝑣,𝑀(β„“) β‰₯ πœ‰π‘  +𝑣,𝑀(β„“) for all 𝑣, 𝑀, 𝑠, and β„“, this yields the claim. To +formalize that Ξžπ‘  (β„“) must have been decreasing steadily, we seek to show that the minimal layer β„“ +for which there are nodes 𝑣ℓ,𝑀ℓ ∈ 𝑉 satisfying that πœ‰π‘  +𝑣ℓ,𝑀ℓ (β„“) is large enough is β„“. To this end, we +identify nodes 𝑀 and 𝑣 – either 𝑀ℓ and 𝑣ℓ themselves or neighbors of them – which cause the large +skew on layer β„“ by a having a large skew on layer β„“ βˆ’ 1. This is done based on SC(𝑠) and FC(𝑠), +with JC kicking in for the special case that 𝑀 = 𝑣ℓ and 𝑣 = 𝑀ℓ. +A key obstacle is that if 𝑀 is a neighbor of 𝑀ℓ, this results in a larger difference in skew than if 𝑣 +is a neighbor of 𝑣ℓ, namely 4π‘ πœ… versus (4𝑠 βˆ’ 2)πœ…. Thus, when 𝑀 is closer to 𝑣ℓ than 𝑀ℓ, we β€œlose” 2πœ… +relative to the skew bound on layer β„“. For 𝑑(𝑣 Β―β„“,𝑀 Β―β„“) many steps, we can compensate for this based +on the initial skew between 𝑣 Β―β„“ and 𝑀 Β―β„“, but not more. To address this, essentially we need to show +that for any additional steps β€œtowards” 𝑣ℓ there will be a corresponding step β€œaway” from 𝑣ℓ, on +which we β€œgain” additional 2πœ… relative to the skew bound on the layer β„“. +If corrections were always positive, this would be straightforward: Steps towards 𝑣ℓ would also +be steps towards 𝑣 Β―β„“, and upon 𝑀ℓ = 𝑣 Β―β„“ we would reach a contradiction to the skew bounds shown. +Unfortunately, negative corrections foreclose this simple argument. 
To address this, we introduce a +third β€œprover” node 𝑝ℓ, where 𝑝 Β―β„“ = 𝑣 Β―β„“, which never increases its distance to 𝑀ℓ; if 𝑝ℓ performs a +negative correction, then 𝑝 is a neighbor of 𝑝ℓ that is closer to 𝑀ℓ. We then can infer that 𝑝 β‰  𝑀 +from the skew bounds. +A major complication this approach faces is the special case 𝑝 = 𝑀ℓ and 𝑀 = 𝑝ℓ. Again, JC kicks +in to show that we have sufficiently large skew between 𝑝 and 𝑀. However, now 𝑝 lies β€œbehind” 𝑀 +from the perspective of 𝑣. A later reversal of this situation by repeating the case that 𝑝 = 𝑀ℓ and +𝑀 = 𝑝ℓ results in 𝑀 being farther away from 𝑣ℓ, yet 𝑑(𝑝,𝑀) = 𝑑(𝑝ℓ,𝑀ℓ). The proof covers this case +by adding an additional (4𝑠 βˆ’ 2)πœ… to the skew bound if the above situation occured an odd number +of times. +Finally, we seek to avoid the case that 𝑣 = 𝑝ℓ and 𝑝 = 𝑣ℓ for analogous reasons. Fortunately, +here we can exploit that the skew bound between 𝑣ℓ and 𝑀ℓ is stronger than the one between 𝑝ℓ +and 𝑀ℓ, meaning that we can simply choose 𝑝 = 𝑣 instead in this situation. In the proof, we do so + +Gradient TRIX +19 +whenever 𝑣 lies on the path connecting 𝑝ℓ and 𝑀ℓ that we maintain to keep track of hop counts in +the construction. +β–‘ +Proof of Theorem 1. Assume towards a contradiction that the statement of Theorem 1 is false +for minimal Β―β„“, i.e., there are 𝑣 Β―β„“ and 𝑀 Β―β„“ such that +πœ“π‘  +𝑣 Β―β„“,𝑀 Β―β„“ > ( Β―β„“ βˆ’ β„“ ) Β· πœ… +2 +(4) +and +πœ“π‘  +𝑣 Β―β„“,𝑀 Β―β„“ > Ξžπ‘  (β„“ ) βˆ’ ( Β―β„“ βˆ’ β„“ ) Β· πœ… +2 βˆ’ πœ… +(5) +and there is no smaller Β―β„“β€² for which this applies for some pair of nodes. +Let β„“ ∈ [β„“, Β―β„“] be minimal such that are 𝑣ℓ, 𝑝ℓ,𝑀ℓ ∈ 𝑉 , a path 𝑄ℓ in 𝐻 from 𝑝ℓ to 𝑣ℓ, and a path 𝑃ℓ +in 𝐻 from 𝑝ℓ to 𝑀ℓ with the following properties: +(P1) 𝑀ℓ β‰  𝑝ℓ. +(P2) 𝑀ℓ β‰  𝑣ℓ. +(P3) +𝑑𝑝ℓ,β„“ βˆ’ 𝑑𝑀ℓ,β„“ βˆ’ 4π‘ πœ…|𝑃ℓ| β‰₯ πœ“π‘  +𝑣 Β―β„“,𝑀 Β―β„“ ( Β―β„“ ) βˆ’ ( Β―β„“ βˆ’ β„“) Β· πœ… +2 > 0. +(P4) Denote by |𝑃ℓ| and |𝑄ℓ| the length of 𝑃ℓ and 𝑄ℓ, respectively. With the shorthand +Ξ”β„“ := +οΏ½ +|𝑃ℓ| + |𝑄ℓ| βˆ’ 1 +if 𝑃ℓ and 𝑄ℓ have the same first edge +|𝑃ℓ| + |𝑄ℓ| +else, +it holds that +𝑑𝑣ℓ,β„“ βˆ’ 𝑑𝑀ℓ,β„“ βˆ’ (4𝑠 βˆ’ 2)πœ…Ξ”β„“ β‰₯ πœ“π‘  +𝑣 Β―β„“,𝑀 Β―β„“ ( Β―β„“ ) + ( Β―β„“ βˆ’ β„“) Β· πœ… +2 + 2πœ…|𝑃ℓ|. +(P5) If 𝑣ℓ ∈ 𝑃ℓ, then 𝑝ℓ = 𝑣ℓ. +To see that such an index must indeed exist, let +β€’ 𝑝 Β―β„“ := 𝑣 Β―β„“, +β€’ 𝑃 Β―β„“ be a shortest path in 𝐻 from 𝑝 Β―β„“ to 𝑀 Β―β„“, and +β€’ 𝑄 Β―β„“ := (𝑝 Β―β„“) = (𝑣 Β―β„“), i.e., the 0-length path from 𝑝 Β―β„“ to 𝑣 Β―β„“. +This choice satisfies +β€’ (P1) and (P2), because Ψ𝑠 +𝑣 Β―β„“,𝑀 Β―β„“ ( Β―β„“ ) β‰  0 implies that 𝑣 Β―β„“ β‰  𝑀 Β―β„“; +β€’ (P4), because +𝑑𝑣 Β―β„“,Β―β„“ βˆ’ 𝑑𝑀 Β―β„“,Β―β„“ βˆ’ (4𝑠 βˆ’ 2)πœ…Ξ”β„“ = 𝑑𝑣 Β―β„“,Β―β„“ βˆ’ 𝑑𝑀 Β―β„“,Β―β„“ βˆ’ (4𝑠 βˆ’ 2)πœ…|𝑃 Β―β„“| = πœ“π‘£ Β―β„“,𝑀 Β―β„“ + 2πœ…|𝑃 Β―β„“|; and +β€’ (P3) and (P5), because 𝑝 Β―β„“ = 𝑣 Β―β„“ (i.e., 𝑑𝑝 Β―β„“,Β―β„“ = 𝑑𝑣 Β―β„“,Β―β„“ and Δ¯ℓ = |𝑃 Β―β„“|) and (P4) holds. +Corollary 2 proves that in fact β„“ = β„“. Note that +𝑑(𝑣ℓ,𝑀ℓ) ≀ +οΏ½ +|𝑃ℓ| + |𝑄ℓ| βˆ’ 2 +if 𝑃ℓ and 𝑄ℓ share the first edge +|𝑃ℓ| + |𝑄ℓ| +else +≀ Ξ”β„“ + +20 +Christoph Lenzen and Shreyas Srinivas +and that |𝑃ℓ| β‰₯ 1 due to (P1). Therefore, (P4) yields that +Ξžπ‘  (β„“ ) β‰₯ 𝑑𝑣ℓ,β„“ βˆ’ 𝑑𝑀ℓ,β„“ βˆ’ (4𝑠 βˆ’ 2)πœ…π‘‘(𝑣ℓ,𝑀ℓ) +β‰₯ 𝑑𝑣ℓ,β„“ βˆ’ 𝑑𝑀ℓ,β„“ βˆ’ (4𝑠 βˆ’ 2)πœ…Ξ”β„“ +β‰₯ πœ“π‘  +𝑣 Β―β„“,𝑀 Β―β„“ ( Β―β„“ ) + ( Β―β„“ βˆ’ β„“) Β· πœ… +2 + 2πœ…|𝑃ℓ| +β‰₯ πœ“π‘  +𝑣 Β―β„“,𝑀 Β―β„“ ( Β―β„“ ) + ( Β―β„“ βˆ’ β„“) Β· πœ… +2 + 2πœ…, +contradicting Equation (5) and completing the proof. 
+β–‘ +The remainder of Section 4.3 is dedicated to proving Corollary 2, which is the missing step in +the proof of Theorem 1. To this end, until the end of Section 4.3 we consider the setting of the +proof of Theorem 1 and assume for contradiction that β„“ > β„“. We take note of some straightforward +implications. +Observation 2. For any fixed index β„“, we have the following implications: +β€’ (P3) β‡’ (P1) +β€’ (P4) β‡’ (P2) +β€’ (𝑣ℓ = π‘β„“βˆ§ (P4)) β‡’ (P3). +Moreover, +πœ“π‘  +𝑣ℓ,𝑀ℓ (β„“) βˆ’ ( Β―β„“ βˆ’ β„“) Β· πœ… +2 > 0. +Proof. We prove each implication separately. +β€’ From (P3), 𝑑𝑝ℓ,β„“ βˆ’ 𝑑𝑀ℓ,β„“ > 4π‘ πœ…|𝑃ℓ| β‰₯ 0. This implies 𝑑𝑝ℓ,β„“ > 𝑑𝑀ℓ,β„“ and hence 𝑀ℓ β‰  𝑝ℓ, i.e., (P1). +β€’ Note that Ξ”β„“ β‰₯ 0, |𝑃ℓ| β‰₯ 0, and 4𝑠 βˆ’ 2 > 0. Hence, (P4) and Equation (4) imply that +𝑑𝑣ℓ,β„“ βˆ’ 𝑑𝑀ℓ,β„“ β‰₯ πœ“π‘  +𝑣ℓ,𝑀ℓ ( Β―β„“ ) > 0. +It follows that 𝑀ℓ β‰  𝑣ℓ, i.e., (P2). +β€’ If 𝑣ℓ = 𝑝ℓ, then 𝑑𝑣ℓ,β„“ = 𝑑𝑝ℓ,β„“, |𝑄ℓ| = 0, and Ξ”β„“ = |𝑃ℓ|. Thus, (P4) implies that +𝑑𝑝ℓ,β„“ βˆ’ 𝑑𝑀ℓ,β„“ βˆ’ (4𝑠 βˆ’ 2)πœ…|𝑃ℓ| β‰₯ πœ“π‘  +𝑣ℓ,𝑀ℓ (¯𝑙) + ( Β―β„“ βˆ’ β„“) Β· πœ… +2 + 2πœ…|𝑃ℓ| β‰₯ πœ“π‘  +𝑣ℓ,𝑀ℓ (¯𝑙) βˆ’ ( Β―β„“ βˆ’ β„“) Β· πœ… +2 + 2πœ…|𝑃ℓ|, +which can be rearranged to yield (P3). +β–‘ +A Step in the Construction. We now identify nodes that are suitable for taking the role of 𝑣ℓ, 𝑝ℓ, and +𝑀ℓ on layer β„“ βˆ’ 1. These are either the nodes themselves or neighbors of them in 𝐻, where FC(𝑠), +SC(𝑠), and JC serve to relate respective pulse times. +Lemma 7. There is a node 𝑣 ∈ 𝑉 such that +𝑑𝑣ℓ,β„“βˆ’1 βˆ’ C𝑣ℓ,β„“ ≀ 𝑑𝑣,β„“βˆ’1 βˆ’ (4𝑠 βˆ’ 2)πœ…Ξ”π‘£ βˆ’ πœ…, +where +Δ𝑣 = + + +0 +and 𝑣 = 𝑣ℓ, +βˆ’1 +and {𝑣, 𝑣ℓ} is the last edge of 𝑄ℓ or the first edge of 𝑃ℓ, or +1 +and {𝑣ℓ, 𝑣} ∈ 𝐸. +Proof. By Lemma 5, 𝑣ℓ obeys the fast condition. Thus one of three things is true for 𝑣ℓ. +β€’ FC-1(𝑠) holds. In this case, let 𝑣 = arg max{π‘₯,𝑣ℓ }∈𝐸{𝑑π‘₯,β„“βˆ’1} and bound +𝑑𝑣ℓ,β„“βˆ’1 βˆ’ C𝑣ℓ,β„“ ≀ +max +{π‘₯,𝑣ℓ }∈𝐸 +οΏ½ +𝑑π‘₯,β„“βˆ’1 +οΏ½ +βˆ’ (4𝑠 βˆ’ 2)πœ… βˆ’ πœ… = 𝑑𝑣,β„“βˆ’1 βˆ’ (4𝑠 βˆ’ 2)πœ… βˆ’ πœ…, +i.e., the claim of the lemma holds with Δ𝑣 = 1. + +Gradient TRIX +21 +β€’ FC-2(𝑠) holds. In this case, let {𝑣, 𝑣ℓ} be the last edge of 𝑄ℓ if |𝑄ℓ| β‰  0 or the first edge of 𝑃ℓ +otherwise; the latter is feasible, because then 𝑣ℓ = 𝑝ℓ, and |𝑃ℓ| β‰  0 due to (P1). We get that +𝑑𝑣ℓ,β„“βˆ’1 βˆ’ C𝑣ℓ,β„“ ≀ +min +{π‘₯,𝑣ℓ }∈𝐸 +οΏ½ +𝑑π‘₯,β„“βˆ’1 +οΏ½ ++ (4𝑠 βˆ’ 2)πœ… βˆ’ πœ… ≀ 𝑑𝑣,β„“βˆ’1 + (4𝑠 βˆ’ 2)πœ… βˆ’ πœ…. +Thus, the claim of the lemma holds with Δ𝑣 = βˆ’1. +β€’ FC-3 holds. In this case, +𝑑𝑣ℓ,β„“βˆ’1 βˆ’ C𝑣ℓ,β„“ ≀ 𝑑𝑣ℓ,β„“βˆ’1 βˆ’ πœ…, +i.e., the claim of the lemma holds with Δ𝑣 = 0. +β–‘ +Lemma 8. There is a node 𝑀 ∈ 𝑉 such that +𝑑𝑀ℓ,β„“βˆ’1 βˆ’ C𝑀ℓ,β„“ +πœ— +β‰₯ 𝑑𝑀,β„“βˆ’1 + 4π‘ πœ…Ξ”π‘€, +where +Δ𝑀 = + + +0 +and 𝑀 = 𝑀ℓ, +βˆ’1 +and {𝑀,𝑀ℓ} is the last edge of 𝑃ℓ, or +1 +and {𝑀ℓ,𝑀} ∈ 𝐸. +Proof. By Lemma 4, 𝑀ℓ satisfies SC. We make a case distinction based on which one of SC-1, +SC-2, and SC-3 applies. +β€’ SC-1(𝑠) holds. Let {𝑀,𝑀ℓ} be the last edge of 𝑃ℓ; by (P1), |𝑃ℓ| β‰  0, i.e., this edge exists. Then +𝑑𝑀ℓ,β„“βˆ’1 βˆ’ C𝑀ℓ,𝑙 +πœ— +β‰₯ +max +{π‘₯,𝑀ℓ }∈𝐸{𝑑π‘₯,β„“βˆ’1} βˆ’ 4π‘ πœ… β‰₯ 𝑑𝑀,β„“βˆ’1 βˆ’ 4π‘ πœ…, +i.e., the claim of the lemma holds with Δ𝑀 = βˆ’1. +β€’ SC-2(𝑠) holds. In this case, let 𝑀 = arg min{π‘₯,𝑣ℓ }∈𝐸{𝑑π‘₯,β„“βˆ’1} and bound +𝑑𝑀ℓ,β„“βˆ’1 βˆ’ C𝑀ℓ, β„“ +πœ— +β‰₯ +min +{π‘₯,𝑀ℓ }∈𝐸{𝑑π‘₯,β„“βˆ’1} + 4π‘ πœ… = 𝑑𝑀,β„“βˆ’1 + 4π‘ πœ…. +Thus, the lemma holds with Δ𝑀 = 1. +β€’ SC-3 holds. 
Then +𝑑𝑀ℓ,β„“βˆ’1 βˆ’ C𝑀,β„“ β‰₯ 𝑑𝑀ℓ,β„“βˆ’1, +i.e., the claim of the lemma holds with Δ𝑀 = 0. +β–‘ +Lemma 9. There is a node 𝑝 ∈ 𝑉 such that +𝑑𝑝ℓ,β„“βˆ’1 βˆ’ C𝑝ℓ,β„“ ≀ +οΏ½ +𝑑𝑝,β„“βˆ’1 +and 𝑝 = 𝑝ℓ, or +𝑑𝑝,β„“βˆ’1 βˆ’ πœ… +and {𝑝ℓ, 𝑝} is the first edge of 𝑃ℓ. +Proof. If C𝑝ℓ,β„“ β‰₯ 0, the claim holds with 𝑝 = 𝑝ℓ. Hence, suppose that C𝑝ℓ,β„“ < 0. Let {𝑝ℓ, 𝑝} be +the first edge of 𝑃ℓ; such an edge exists, as by (P1) we have that 𝑝ℓ β‰  𝑀ℓ and hence |𝑃ℓ| β‰  0. By +Lemma 6, 𝑝ℓ satisfies JC. As C𝑝ℓ,β„“ < 0, JC-2 must apply. We conclude that +C𝑝ℓ,β„“ β‰₯ 𝑑𝑝ℓ,β„“βˆ’1 βˆ’ +min +{π‘₯,𝑝ℓ }∈𝐸 +οΏ½ +𝑑π‘₯,β„“βˆ’1 +οΏ½ ++ πœ… β‰₯ 𝑑𝑝ℓ,β„“βˆ’1 βˆ’ 𝑑𝑝,β„“βˆ’1 + πœ…. +Rearranging terms, the desired inequality follows. +β–‘ + +22 +Christoph Lenzen and Shreyas Srinivas +In the following, let (𝑣, 𝑝,𝑀) be the triple of nodes guaranteed by Lemmas 7 to 9. Denote by β—¦ +concatenation of paths, by prefix(𝑅,π‘₯) the prefix of path 𝑅 ending at node π‘₯ ∈ 𝑅, and by suffix(𝑅,π‘₯) +the suffix of path 𝑅 starting at node π‘₯ ∈ 𝑅. Let +𝑝′ = +οΏ½ +𝑣 +if 𝑣 lies on suffix(𝑃ℓ, 𝑝), +𝑝 +else, +𝑃 := +οΏ½ +prefix(𝑃ℓ,𝑀) +if 𝑀 lies on 𝑃ℓ, +𝑃ℓ β—¦ (𝑀ℓ,𝑀) +else, +𝑃 β€² := +οΏ½ +suffix(𝑃, 𝑝′) +if 𝑝′ lies on 𝑃, +(𝑝′,𝑀) +else, +𝑄 := +οΏ½ +prefix(𝑄ℓ, 𝑣) +if 𝑣 lies on 𝑄ℓ, +𝑄ℓ β—¦ {𝑣ℓ, 𝑣} +else, +𝑄 β€² := +οΏ½ +suffix(𝑄, 𝑝′) +if 𝑝′ lies on 𝑄, +(𝑝′, 𝑝ℓ) β—¦ 𝑄 +else. +For notational convenience, in analogy to Ξ”β„“ we also define +Ξ” := +οΏ½ +|𝑃 β€²| + |𝑄 β€²| βˆ’ 1 +if 𝑃 β€² and 𝑄 β€² have the same first edge +|𝑃 β€²| + |𝑄 β€²| +else. +We will show that this construction satisfies properties (P1) to (P5) for layer β„“ βˆ’ 1 with π‘£β„“βˆ’1 = 𝑣, +π‘β„“βˆ’1 = 𝑝′, π‘€β„“βˆ’1 = 𝑀, π‘ƒβ„“βˆ’1 = 𝑃 β€², and π‘„β„“βˆ’1 = 𝑄 β€²; this will constitute the desired contradiction. +However, we first point out that indeed 𝑃 β€² and 𝑄 β€² are paths in 𝐻 from 𝑝′ to 𝑀 and 𝑣, respectively. +To this end, we first cover the special case that 𝑝′ does not lie on 𝑃. +Observation 3. If 𝑝′ does not lie on 𝑃, then 𝑝′ = 𝑀ℓ and either 𝑀 = 𝑝ℓ or 𝑝′ = 𝑣. +Proof. By Lemma 9, 𝑝 lies on the first edge of 𝑃ℓ. Hence, if 𝑝′ = 𝑝, 𝑝′ lies on 𝑃 unless prefix(𝑃ℓ,𝑀) +does not contain this edge. By Lemma 8, this can only happen if the first edge of 𝑃ℓ is also the last +edge, i.e., 𝑃ℓ = (𝑝ℓ,𝑀ℓ) = (𝑀, 𝑝′). +It remains to consider the case that 𝑝′ β‰  𝑝, i.e., 𝑝′ = 𝑣. Again, we use that all edges but the last of +𝑃ℓ are also contained in 𝑃 by Lemma 8. Thus, 𝑝′ = 𝑣 = 𝑀ℓ. +β–‘ +Observation 4. 𝑃 β€² is a path in 𝐻 from 𝑝′ to 𝑀 and 𝑄 β€² is a path in 𝐻 from 𝑝′ to 𝑣. +Proof. To show that 𝑃 β€² is a path from 𝑝′ to 𝑀, note that by Lemma 8, 𝑃 is a path in 𝐻, which +by definition ends at 𝑀. Thus, if 𝑃 β€² = suffix(𝑃, 𝑝′), 𝑃 β€² is a path from 𝑝′ to 𝑀 in 𝐻. Otherwise, by +Observation 3, 𝑝′ = 𝑀ℓ, and {𝑝′,𝑀} = {𝑀ℓ,𝑀} ∈ 𝐸 by Lemma 8. +To show that 𝑄 β€² is a path from 𝑝′ to 𝑣, note that by Lemma 7, 𝑄 is a path in 𝐻, which by definition +ends at 𝑣. If 𝑝′ = 𝑝, by Lemma 9 𝑄 β€² is also a path in 𝐻, which by definition begins at 𝑝′ and has +the same endpoint as 𝑄, which is 𝑣. On the other hand, if 𝑝′ = 𝑣, suffix(𝑄, 𝑝′) = suffix(𝑄, 𝑣) = (𝑣), +which is the 0-length path from 𝑝′ = 𝑣 to itself. +β–‘ +Proving the Properties. To prove Corollary 2, we establish that the tuple (𝑣, 𝑝′,𝑀, 𝑃 β€²,𝑄′) satisfies +properties (P1) to (P5) for layer β„“ βˆ’ 1, contradicting the minimality of β„“. By Observation 4, indeed 𝑃 β€² +and 𝑄 β€² are paths from 𝑣 to 𝑀 and 𝑝′, respectively. 
In the following, we will repeatedly use this fact +and the property that {π‘₯β„“,π‘₯} ∈ 𝐸 for π‘₯ ∈ {𝑣,𝑀, 𝑝} whenever π‘₯ β‰  π‘₯β„“, without explicitly invoking +Observation 4 and Lemmas 7 to 9. +We first rule out the special case that 𝑣 = 𝑀ℓ and 𝑀 = 𝑣ℓ. + +Gradient TRIX +23 +Lemma 10. The case that 𝑣 = 𝑀ℓ and 𝑀 = 𝑣ℓ is not possible. +Proof. Assume towards a contradiction that 𝑣 = 𝑀ℓ and 𝑀 = 𝑣ℓ. We use (P4), Lemma 3, and +Lemma 8 to bound +βˆ’C𝑀,β„“ β‰₯ 𝑑𝑀,β„“ βˆ’ 𝑑𝑀,β„“βˆ’1 βˆ’ Ξ› += 𝑑𝑣ℓ,β„“ βˆ’ (𝑑𝑀,β„“βˆ’1 βˆ’ 4π‘ πœ…) βˆ’ Ξ› βˆ’ 4π‘ πœ… +β‰₯ 𝑑𝑣ℓ,β„“ βˆ’ +οΏ½ +𝑑𝑀ℓ,β„“βˆ’1 βˆ’ C𝑀ℓ,β„“ +πœ— +οΏ½ +βˆ’ Ξ› βˆ’ 4π‘ πœ… += 𝑑𝑣ℓ,β„“ βˆ’ +οΏ½ +𝑑𝑀ℓ,β„“βˆ’1 + 𝑑 βˆ’ 𝑒 + Ξ› βˆ’ 𝑑 βˆ’ C𝑀ℓ,β„“ +πœ— +οΏ½ +βˆ’ πœ… +2 βˆ’ 4π‘ πœ… +β‰₯ 𝑑𝑣ℓ,β„“ βˆ’ 𝑑𝑀ℓ,β„“ βˆ’ 4π‘ πœ… βˆ’ πœ… +2 +β‰₯ πœ“π‘£ Β―β„“,𝑀 Β―β„“ ( Β―β„“ ) βˆ’ ( Β―β„“ βˆ’ β„“ + 1)πœ… +2 +> 0. +Thus, by JC, it holds that +𝑑𝑀,β„“βˆ’1 ≀ 𝑑𝑀ℓ,β„“βˆ’1 + C𝑀,β„“ βˆ’ πœ…. +Note that by (P1), |𝑃ℓ| β‰  0 and hence |𝑃ℓ|, Ξ”β„“ β‰₯ 1. Thus, by (P4) and Equation (4) +𝑑𝑣ℓ,β„“ βˆ’ 𝑑𝑀ℓ,β„“ βˆ’ 4π‘ πœ… β‰₯ πœ“π‘  +𝑣 Β―β„“,𝑀 Β―β„“ ( Β―β„“ ) + ( Β―β„“ βˆ’ β„“)πœ… +2 β‰₯ πœ“π‘  +𝑣 Β―β„“,𝑀 Β―β„“ ( Β―β„“ ) > 0. +We distinguish two cases. +β€’ C𝑀ℓ,β„“ ≀ πœ—πœ…. Then by Lemma 3 +4π‘ πœ… < 𝑑𝑣ℓ,β„“ βˆ’ 𝑑𝑀ℓ,β„“ += 𝑑𝑀,β„“ βˆ’ 𝑑𝑀ℓ,β„“ +≀ 𝑑𝑀,β„“βˆ’1 βˆ’ C𝑀,β„“ βˆ’ +οΏ½ +𝑑𝑀ℓ,β„“βˆ’1 βˆ’ C𝑀ℓ,β„“ +πœ— +οΏ½ ++ 𝑒 + +οΏ½ +1 βˆ’ 1 +πœ— +οΏ½ +(Ξ› βˆ’ 𝑑) +≀ 𝑒 + +οΏ½ +1 βˆ’ 1 +πœ— +οΏ½ +(Ξ› βˆ’ 𝑑) +< πœ…, +which is a contradiction, because 𝑠 β‰₯ 1. +β€’ C𝑀ℓ,β„“ > πœ—πœ…. By JC, it follows that +𝑑𝑀ℓ,β„“βˆ’1 β‰₯ 𝑑𝑀,β„“βˆ’1 + C𝑀ℓ,β„“ +πœ— ++ πœ…, +yielding by Lemma 3 that +𝑑𝑣,β„“βˆ’1 βˆ’ 𝑑𝑀,β„“βˆ’1 = 𝑑𝑀ℓ,β„“βˆ’1 βˆ’ 𝑑𝑀,β„“βˆ’1 +β‰₯ 𝑑𝑀,β„“βˆ’1 + C𝑀ℓ,β„“ +πœ— ++ πœ… βˆ’ (𝑑𝑀ℓ,β„“βˆ’1 + C𝑀,β„“ βˆ’ πœ…) += 𝑑𝑣ℓ,β„“βˆ’1 βˆ’ C𝑣ℓ,β„“ βˆ’ +οΏ½ +𝑑𝑀ℓ,β„“βˆ’1 βˆ’ C𝑀ℓ,β„“ +πœ— +οΏ½ ++ 2πœ… +β‰₯ 𝑑𝑣ℓ,β„“ βˆ’ 𝑑𝑀ℓ,β„“ + 2πœ… βˆ’ πœ… +2 +> 𝑑𝑣ℓ,β„“ βˆ’ 𝑑𝑀ℓ,β„“ + πœ… +2 . + +24 +Christoph Lenzen and Shreyas Srinivas +Recall that by (P1), |𝑃ℓ| β‰  0 and hence |𝑃ℓ|, Ξ”β„“ β‰₯ 1. Moreover, 𝑑(𝑣,𝑀) = 𝑑(𝑀ℓ,𝑀) ≀ 1, since +by Lemma 8 𝑀 is either 𝑀ℓ or a neighbor of 𝑀ℓ. Therefore, (P4) implies that +πœ“π‘  +𝑣,𝑀(β„“ βˆ’ 1) = 𝑑𝑣,β„“βˆ’1 βˆ’ 𝑑𝑀,β„“βˆ’1 βˆ’ 4π‘ πœ…π‘‘(𝑣,𝑀) +> 𝑑𝑣ℓ,β„“ βˆ’ 𝑑𝑀ℓ,β„“ βˆ’ 4π‘ πœ…|𝑃ℓ| + πœ… +2 +β‰₯ πœ“π‘  +𝑣 Β―β„“,𝑀 Β―β„“ ( Β―β„“ ) + ( Β―β„“ βˆ’ (β„“ βˆ’ 1))πœ… +2 . +Thus, 𝑣 and 𝑀 satisfy Equation (4) and Equation (5) with index Β―β„“ replaced by index β„“ βˆ’ 1 < Β―β„“, +contradicting the minimality of Β―β„“. +β–‘ +Next, we prove a helper lemma relating 𝑑𝑀ℓ,β„“ and 𝑑𝑀,β„“βˆ’1 by a stronger bound than Lemma 8 for +the special case that 𝑝′ = 𝑀ℓ and 𝑀 = 𝑝ℓ. This follows similar reasoning as the previous lemma. +However, it does not yield an immediate contradiction, as we need to rely on the weaker bound +provided by (P3). +Lemma 11. If 𝑝′ = 𝑀ℓ and 𝑀 = 𝑝ℓ, then +𝑑𝑀ℓ,β„“ βˆ’ 𝑑𝑀,β„“βˆ’1 > 𝑑 βˆ’ 𝑒 + Ξ› βˆ’ 𝑑 +πœ— +. +Proof. We use (P3) and Lemma 3 to bound +βˆ’C𝑀,β„“ β‰₯ 𝑑𝑀,β„“ βˆ’ 𝑑𝑀,β„“βˆ’1 βˆ’ Ξ› += 𝑑𝑝ℓ,β„“ βˆ’ (𝑑𝑀,β„“βˆ’1 βˆ’ 4π‘ πœ…) βˆ’ Ξ› βˆ’ 4π‘ πœ… +β‰₯ 𝑑𝑝ℓ,β„“ βˆ’ +οΏ½ +𝑑𝑀ℓ,β„“βˆ’1 βˆ’ C𝑀ℓ,β„“ +πœ— +οΏ½ +βˆ’ Ξ› βˆ’ 4π‘ πœ… += 𝑑𝑝ℓ,β„“ βˆ’ +οΏ½ +𝑑𝑀ℓ,β„“βˆ’1 + 𝑑 βˆ’ 𝑒 + Ξ› βˆ’ 𝑑 βˆ’ C𝑀ℓ,β„“ +πœ— +οΏ½ +βˆ’ πœ… +2 βˆ’ 4π‘ πœ… +β‰₯ 𝑑𝑝ℓ,β„“ βˆ’ 𝑑𝑀ℓ,β„“ βˆ’ 4π‘ πœ… βˆ’ πœ… +2 +β‰₯ πœ“π‘£ Β―β„“,𝑀 Β―β„“ ( Β―β„“ ) βˆ’ ( Β―β„“ βˆ’ β„“ + 1)πœ… +2 +> 0. +Thus, by JC, it holds that +𝑑𝑀,β„“βˆ’1 ≀ 𝑑𝑀ℓ,β„“βˆ’1 βˆ’ πœ…. +We distinguish two cases. +β€’ C𝑀ℓ,β„“ ≀ πœ—πœ…. 
Then +𝑑𝑀ℓ,β„“ βˆ’ 𝑑𝑀,β„“βˆ’1 β‰₯ 𝑑𝑀ℓ,β„“ βˆ’ 𝑑𝑀ℓ,β„“βˆ’1 + πœ… +β‰₯ 𝑑 βˆ’ 𝑒 + Ξ› βˆ’ 𝑑 βˆ’ C𝑀ℓ,β„“ +πœ— ++ πœ… +β‰₯ 𝑑 βˆ’ 𝑒 + Ξ› βˆ’ 𝑑 +πœ— +. +β€’ C𝑀ℓ,β„“ > πœ—πœ…. By JC, it follows that +𝑑𝑀ℓ,β„“βˆ’1 β‰₯ 𝑑𝑀,β„“βˆ’1 + C𝑀ℓ,β„“ +πœ— ++ πœ…, + +Gradient TRIX +25 +yielding that +𝑑𝑀ℓ,β„“ βˆ’ 𝑑𝑀,β„“βˆ’1 β‰₯ 𝑑𝑀ℓ,β„“ βˆ’ 𝑑𝑀ℓ,β„“βˆ’1 + C𝑀ℓ,β„“ +πœ— ++ πœ… +> 𝑑 βˆ’ 𝑒 + Ξ› βˆ’ 𝑑 +πœ— +. +β–‘ +Using Lemma 11, we establish (P4) for the special case of 𝑝′ = 𝑀ℓ and 𝑀 = 𝑝ℓ. Note that this +entails that 𝑀 is closer to 𝑝ℓ, yet 𝑃 is not shorter than 𝑃ℓ. This is accounted for by the case distinction +in the definition of Ξ”β„“, which covers the difference. +Lemma 12. If 𝑝′ = 𝑀ℓ and 𝑀 = 𝑝ℓ, then (P4) holds for 𝑣, 𝑝′, 𝑀, |𝑃 β€²|, |𝑄 β€²|, and layer β„“ βˆ’ 1. +Proof. Denote by Δ𝑣 ∈ {βˆ’1, 0, 1} the value such that +𝑑𝑣ℓ,β„“βˆ’1 βˆ’ C𝑣ℓ,β„“ ≀ 𝑑𝑣,β„“βˆ’1 βˆ’ (4𝑠 βˆ’ 2)πœ…Ξ”π‘£ βˆ’ πœ… +according to Lemma 7. By Lemmas 3 and 11, +𝑑𝑣,β„“βˆ’1 βˆ’ 𝑑𝑀,β„“βˆ’1 +> 𝑑𝑣,β„“βˆ’1 βˆ’ 𝑑𝑀ℓ,β„“ + 𝑑 βˆ’ 𝑒 + Ξ› βˆ’ 𝑑 +πœ— +β‰₯ 𝑑𝑣ℓ,β„“βˆ’1 βˆ’ C𝑣ℓ,β„“ + (4𝑠 βˆ’ 2)πœ…Ξ”π‘£ + πœ… βˆ’ 𝑑𝑀ℓ,β„“ + 𝑑 βˆ’ 𝑒 + Ξ› βˆ’ 𝑑 +πœ— +β‰₯ 𝑑𝑣ℓ,β„“ βˆ’ Ξ› + (4𝑠 βˆ’ 2)πœ…Ξ”π‘£ + πœ… βˆ’ 𝑑𝑀ℓ,β„“ + 𝑑 βˆ’ 𝑒 + Ξ› βˆ’ 𝑑 +πœ— +=𝑑𝑣ℓ,β„“ βˆ’ 𝑑𝑀ℓ,β„“ + (4𝑠 βˆ’ 2)πœ…Ξ”π‘£ + πœ… +2 +β‰₯ (4𝑠 βˆ’ 2)πœ…(Ξ”β„“ + Δ𝑣) +πœ“π‘  +𝑣 Β―β„“,𝑀 Β―β„“ ( Β―β„“ ) + ( Β―β„“ βˆ’ (β„“ βˆ’ 1))πœ… +2 + πœ…π‘ |𝑃ℓ|. +We claim that Ξ” ≀ Ξ”β„“ + Δ𝑣. Note that plugging this into the above inequality yields +𝑑𝑣,β„“βˆ’1 βˆ’ 𝑑𝑀,β„“βˆ’1 β‰₯ (4𝑠 βˆ’ 2)πœ…Ξ” +πœ“π‘  +𝑣 Β―β„“,𝑀 Β―β„“ ( Β―β„“ ) + ( Β―β„“ βˆ’ (β„“ βˆ’ 1))πœ… +2 + πœ…π‘ |𝑃 β€²|, +i.e., (P4) for 𝑣, 𝑝′, 𝑀, |𝑃 β€²|, |𝑄 β€²|, and layer β„“ βˆ’ 1, as desired. Therefore, proving the above claim will +complete the proof. +To show the claim, we first note that 𝑃 β€² = (𝑝′,𝑀) = (𝑀ℓ, 𝑝ℓ). Since 𝑀 = 𝑝ℓ, by Lemma 8 we also +have that 𝑃ℓ = (𝑝ℓ,𝑀ℓ) = (𝑀, 𝑝′). In particular, |𝑃ℓ| = |𝑃 β€²|. We distinguish two cases. +β€’ 𝑃ℓ and 𝑄ℓ share the first edge. It follows that |𝑄ℓ| β‰₯ 2, as otherwise 𝑣ℓ = 𝑀ℓ, contradicting +(P2). If 𝑣 = 𝑝′, then +|𝑄 β€²| = |𝑄| = |(𝑣)| = 0 ≀ |𝑄ℓ| βˆ’ 2 ≀ |𝑄ℓ| + Δ𝑣 βˆ’ 1. +Otherwise, the first edge of𝑄 is the first edge of𝑄ℓ and thus 𝑃ℓ. This edge is {𝑝ℓ,𝑀ℓ} = {𝑝ℓ, 𝑝′}. +Hence, |𝑄 β€²| = | suffix(𝑄, 𝑝′)| ≀ |𝑄| βˆ’ 1 = |𝑄ℓ| + Δ𝑣 βˆ’ 1. Either way, we get that +Ξ” ≀ |𝑃 β€²| + |𝑄 β€²| ≀ |𝑃ℓ| + |𝑄ℓ| + Δ𝑣 βˆ’ 1 = Ξ”β„“ + Δ𝑣. +β€’ 𝑃ℓ and 𝑄ℓ do not share the first edge, but 𝑃 β€² and 𝑄 β€² do. Then +Ξ” = |𝑃 β€²| + |𝑄 β€²| βˆ’ 1 ≀ |𝑃ℓ| + |𝑄ℓ| + Δ𝑣 βˆ’ 1 = Ξ”β„“ + Δ𝑣. +β€’ 𝑃ℓ and 𝑄ℓ do not share the first edge and neither do 𝑃 β€² and 𝑄 β€². As the first (and only) edge of +𝑃 β€² is {𝑝′,𝑀} = {𝑝′, 𝑝ℓ}, this entails that 𝑄 β€² = suffix(𝑄, 𝑝′). We distinguish two subcases. +– | suffix(𝑄, 𝑝′)| ≀ |𝑄| βˆ’ 1. Then +Ξ” = |𝑃 β€²| + |𝑄 β€²| ≀ |𝑃ℓ| + |𝑄| βˆ’ 1 ≀ |𝑃ℓ| + |𝑄ℓ| + Δ𝑣 = Ξ”β„“ + Δ𝑣. + +26 +Christoph Lenzen and Shreyas Srinivas +– | suffix(𝑄, 𝑝′)| = |𝑄| and 𝑣ℓ β‰  𝑀. Then 𝑝′ is the last node on 𝑄, i.e., 𝑣 = 𝑝′. As by Observa- +tion 4 𝑄 β€² is a path from 𝑝′ to 𝑣, it follows that |𝑄 β€²| = 0 < |𝑄ℓ|. We conclude that +Ξ” = |𝑃 β€²| + |𝑄 β€²| ≀ |𝑃ℓ| + |𝑄ℓ| + Δ𝑣 = Ξ”β„“ + Δ𝑣. +– | suffix(𝑄, 𝑝′)| = |𝑄| and 𝑣ℓ = 𝑀. As 𝑀 = 𝑝ℓ and 𝑝′ = 𝑣 = 𝑀ℓ as in the previous subcase, +this contradicts Lemma 10. +β–‘ +Before proceeding to the case that 𝑣 β‰  𝑀ℓ or 𝑀 β‰  𝑣ℓ, we prove another helper statement ruling +out the specific case that 𝑣 β‰  𝑝′ = 𝑀. +Lemma 13. It is not possible that 𝑣 β‰  𝑝′ = 𝑀. +Proof. Assume towards a contradiction that 𝑣 β‰  𝑝′ = 𝑀. Thus, 𝑝′ = 𝑝. 
Lemmas 8 and 9 yield +that +𝑑𝑀ℓ,β„“βˆ’1 βˆ’ C𝑀ℓ,β„“ +πœ— +β‰₯ 𝑑𝑀,β„“βˆ’1 βˆ’ 4π‘ πœ… and +𝑑𝑝ℓ,β„“βˆ’1 βˆ’ C𝑝ℓ,β„“ ≀ 𝑑𝑝′,β„“βˆ’1. +Using (P3) and Lemma 3, it follows that +0 = 𝑑𝑝′,β„“βˆ’1 βˆ’ 𝑑𝑀,β„“βˆ’1 +β‰₯ 𝑑𝑝ℓ,β„“βˆ’1 βˆ’ C𝑝ℓ,β„“ βˆ’ +οΏ½ +𝑑𝑀ℓ,β„“βˆ’1 βˆ’ C𝑀ℓ,β„“ +πœ— +οΏ½ +βˆ’ 4π‘ πœ… +β‰₯ 𝑑𝑝ℓ,β„“ βˆ’ 𝑑𝑀ℓ,β„“ βˆ’ 4π‘ πœ… βˆ’ πœ… +2 +β‰₯ πœ“π‘  +𝑣 Β―β„“,𝑀 Β―β„“ ( Β―β„“ ) βˆ’ ( Β―β„“ βˆ’ (β„“ βˆ’ 1))πœ… +2 +> 0, +arriving at the desired contradiction. +β–‘ +We now establish (P4) for the case that 𝑣 β‰  𝑀ℓ or 𝑀 β‰  𝑣ℓ. +Lemma 14. If 𝑝′ β‰  𝑀ℓ or 𝑀 β‰  𝑝ℓ, then (P4) holds for 𝑣, 𝑝′, 𝑀, |𝑃 β€²|, |𝑄 β€²|, and layer β„“ βˆ’ 1. +Proof. Denote by Δ𝑀, Δ𝑣 ∈ {βˆ’1, 0, 1} the values such that +𝑑𝑀ℓ,β„“βˆ’1 βˆ’ C𝑀ℓ,β„“ β‰₯ 𝑑𝑀,β„“βˆ’1 + 4π‘ πœ…Ξ”π‘€ +𝑑𝑣ℓ,β„“βˆ’1 βˆ’ C𝑣ℓ,β„“ ≀ 𝑑𝑣,β„“βˆ’1 βˆ’ (4𝑠 βˆ’ 2)πœ…Ξ”π‘£ βˆ’ πœ… +according to Lemmas 7 and 8. Using (P4) and Lemma 3, we bound +𝑑𝑣,β„“βˆ’1 βˆ’ 𝑑𝑀,β„“βˆ’1 +β‰₯ 𝑑𝑣ℓ,β„“βˆ’1 βˆ’ C𝑣,β„“ +πœ— ++ (4𝑠 βˆ’ 2)πœ…Ξ”π‘£ + πœ… βˆ’ (𝑑𝑀ℓ,β„“βˆ’1 βˆ’ C𝑀,β„“ βˆ’ 4π‘ πœ…Ξ”π‘€) +β‰₯ 𝑑𝑣ℓ,β„“ + (4𝑠 βˆ’ 2)πœ…Ξ”π‘£ + πœ… βˆ’ 𝑑𝑀ℓ,β„“ + 4π‘ πœ…Ξ”π‘€ βˆ’ πœ… +2 +β‰₯ (4𝑠 βˆ’ 2)πœ…(Ξ”β„“ + Δ𝑣 + Δ𝑀) + ( Β―β„“ βˆ’ (β„“ βˆ’ 1))πœ… +2 + πœ…π‘  (|𝑃ℓ| + Δ𝑀) +β‰₯ (4𝑠 βˆ’ 2)πœ…(Ξ”β„“ + Δ𝑣 + Δ𝑀) + ( Β―β„“ βˆ’ (β„“ βˆ’ 1))πœ… +2 + πœ…π‘ |𝑃 β€²|, +where the last step exploits that |𝑃 β€²| = | suffix(𝑃, 𝑝′)| ≀ |𝑃| ≀ |𝑃ℓ| + Δ𝑀. We claim that Ξ” ≀ +Ξ”β„“ + Δ𝑣 + Δ𝑀. Proving this claim will complete the proof, as by the above inequality then +𝑑𝑣,β„“βˆ’1 βˆ’ 𝑑𝑀,β„“βˆ’1 β‰₯ (4𝑠 βˆ’ 2)πœ…Ξ” + ( Β―β„“ βˆ’ (β„“ βˆ’ 1))πœ… +2 + πœ…π‘ |𝑃 β€²|, + +Gradient TRIX +27 +i.e., (P4) for 𝑣, 𝑝′, 𝑀, |𝑃 β€²|, |𝑄 β€²|, and layer β„“ βˆ’ 1. +By Observation 3 and the prerequisites of the lemma, 𝑃 β€² = suffix(𝑃, 𝑝′) or 𝑝′ = 𝑣 = 𝑀ℓ. To cover +the possibility that 𝑃 β€² = suffix(𝑃, 𝑝′), we distinguish several cases: +β€’ 𝑝′ = 𝑝ℓ. Then 𝑃 β€² = 𝑃 and 𝑄 β€² = 𝑄, as 𝑝′ is the first node of both 𝑃 and 𝑄. Hence, +|𝑃 β€²| + |𝑄 β€²| = |𝑃| + |𝑄| ≀ |𝑃ℓ| + |𝑄ℓ| + Δ𝑀 + Δ𝑣. +We distinguish three subcases. +– 𝑃ℓ and 𝑄ℓ do not share their first edge. Then +Ξ” ≀ |𝑃 β€²| + |𝑄 β€²| = |𝑃ℓ| + |𝑄ℓ| + Δ𝑀 + Δ𝑣 = Ξ”β„“ + Δ𝑀 + Δ𝑣. +– 𝑃ℓ, 𝑄ℓ, and 𝑄 β€² share the same first edge. By Lemma 13, 𝑀 β‰  𝑝′. Therefore, 𝑃 β€² = 𝑃 β‰  (𝑝′), +which means that 𝑃ℓ and 𝑃 β€² have the same first edge, too. Thus, 𝑄 β€² and 𝑃 β€² have the same +first edge as well, and +Ξ” = |𝑃 β€²| + |𝑄 β€²| βˆ’ 1 = |𝑃ℓ| + |𝑄ℓ| βˆ’ 1 + Δ𝑀 + Δ𝑣 = Ξ”β„“ + Δ𝑀 + Δ𝑣. +– 𝑃ℓ and 𝑄ℓ have the same first edge, but 𝑄 β€² does not. Since 𝑄ℓ β‰  (𝑝ℓ), we have that 𝑣ℓ β‰  𝑝ℓ. +By (P5), this implies that 𝑣ℓ βˆ‰ 𝑃ℓ. In particular, 𝑣ℓ cannot be part of the first edge of 𝑄ℓ and +|𝑄ℓ| β‰₯ 2. As 𝑝′ = 𝑝ℓ, 𝑄 and 𝑄 β€² both start with 𝑝′. Therefore, 𝑄 β€² is a prefix of 𝑄ℓ. However, +𝑄ℓ has the same first edge as 𝑃ℓ, while 𝑄 β€² does not. Thus, |𝑄 β€²| = 0 ≀ |𝑄ℓ| + Δ𝑣 βˆ’ 1. We +conclude that +Ξ” = |𝑃 β€²| + |𝑄 β€²| ≀ |𝑃ℓ| + |𝑄ℓ| βˆ’ 1 + Δ𝑀 + Δ𝑣 = Ξ”β„“ + Δ𝑀 + Δ𝑣. +β€’ 𝑣 = 𝑝′ β‰  𝑝ℓ. Then 𝑄 β€² = (𝑝′). Moreover, by the prerequisites of the lemma, 𝑃 β€² = suffix(𝑃, 𝑝′). +Since 𝑝′ β‰  𝑝ℓ, we have that | suffix(𝑃, 𝑝′)| ≀ |𝑃| βˆ’ 1. By construction, |𝑄 β€²| ≀ |𝑄| + 1. Overall, +Ξ” ≀ |𝑃 β€²| + |𝑄 β€²| ≀ |𝑃| βˆ’ 1 + |𝑄| + 1 = |𝑃ℓ| + Δ𝑀 + |𝑄ℓ| + Δ𝑣 = Ξ”β„“ + Δ𝑀 + Δ𝑣. +β€’ 𝑣 β‰  𝑝′ β‰  𝑝ℓ. Thus, 𝑝′ = 𝑝 and by Lemma 9 {𝑝ℓ, 𝑝′} is the first edge of 𝑃ℓ. Hence, |𝑃 β€²| = +| suffix(𝑃, 𝑝′)| ≀ |𝑃| βˆ’ 1 ≀ |𝑃ℓ| + Δ𝑀 βˆ’ 1. We distinguish two subcases. +– 𝑃ℓ and 𝑄ℓ do not share their first edge. Then +Ξ” ≀ |𝑃 β€²| + |𝑄 β€²| ≀ |𝑃ℓ| + Δ𝑀 + |𝑄ℓ| + Δ𝑣 = Ξ”β„“ + Δ𝑀 + Δ𝑣. 
+– 𝑃ℓ and 𝑄ℓ share their first edge. As 𝑣 β‰  𝑝′, 𝑄 has the same first edge as 𝑄ℓ, i.e., {𝑝ℓ, 𝑝′}. +Hence, |𝑄 β€²| = | suffix(𝑄, 𝑝′| = |𝑄| βˆ’ 1 ≀ |𝑄ℓ| + Δ𝑣 βˆ’ 1. We conclude that +Ξ” ≀ |𝑃 β€²| + |𝑄 β€²| ≀ |𝑃ℓ| + Δ𝑀 + |𝑄ℓ| + Δ𝑣 βˆ’ 2 < Ξ”β„“ + Δ𝑀 + Δ𝑣. +It remains to consider the case that 𝑃 β€² β‰  suffix(𝑃, 𝑝′) and 𝑝′ = 𝑣 = 𝑀ℓ. Then |𝑄 β€²| = (𝑣) and +|𝑃 β€²| = |(𝑝′,𝑀)| = 1, implying that Ξ” = 1. By (P2), 𝑣ℓ β‰  𝑀ℓ = 𝑣. If Δ𝑣 = 1, then +Ξ” = 1 ≀ Ξ”β„“ ≀ Ξ”β„“ + Δ𝑀 + Δ𝑣. +By Lemma 7, the remaining case is that Δ𝑣 = βˆ’1 and {𝑣, 𝑣ℓ} is the last edge of 𝑄ℓ or the first edge +of 𝑃ℓ. By Lemma 10, it is impossible that 𝑣 = 𝑀ℓ, so this edge must be the last one of 𝑄ℓ and distinct +from the first one of 𝑃ℓ. Moreover, by the prerequisites of the lemma, 𝑝ℓ β‰  𝑀, so it must hold that +|𝑃ℓ| β‰₯ 2. Overall, either +β€’ |𝑄ℓ| β‰₯ 2 and +Ξ” = 1 ≀ |𝑃ℓ| + |𝑄ℓ| βˆ’ 3 ≀ Ξ”β„“ βˆ’ 2 = Ξ”β„“ + Δ𝑀 + Δ𝑣, or +β€’ |𝑄ℓ| = 1 and 𝑄ℓ and 𝑃ℓ do not share the first edge, yielding +Ξ” = 1 ≀ |𝑃ℓ| + |𝑄ℓ| βˆ’ 2 = Ξ”β„“ βˆ’ 2 = Ξ”β„“ + Δ𝑀 + Δ𝑣. +β–‘ +Corollary 1. (P4) and (P2) hold for 𝑣, 𝑝′, 𝑀, |𝑃 β€²|, |𝑄 β€²|, and layer β„“ βˆ’ 1. + +28 +Christoph Lenzen and Shreyas Srinivas +Proof. Follows from Lemma 12, Lemma 14, and Observation 2. +β–‘ +It remains to prove (P3). +Lemma 15. (P3) holds for 𝑣, 𝑝′, |𝑃 β€²|, and layer β„“ βˆ’ 1. +Proof. If 𝑣 = 𝑝′, the statement readily follows from Corollary 1 and Observation 2. Therefore, +assume that 𝑣 β‰  𝑝′ and hence 𝑝′ = 𝑝 in the following. Denote by Δ𝑀 ∈ {βˆ’1, 0, 1} the value such +that +𝑑𝑀ℓ,β„“βˆ’1 βˆ’ C𝑀ℓ,β„“ β‰₯ 𝑑𝑀,β„“βˆ’1 + 4π‘ πœ…Ξ”π‘€ +𝑑𝑝ℓ,β„“βˆ’1 βˆ’ C𝑝ℓ,β„“ ≀ 𝑑𝑝′,β„“βˆ’1 +according to Lemmas 8 and 9. +Using (P3) and Lemma 3, it follows that +𝑑𝑝′,β„“βˆ’1 βˆ’ 𝑑𝑀,β„“βˆ’1 β‰₯ 𝑑𝑝ℓ,β„“βˆ’1 βˆ’ C𝑝ℓ,β„“ βˆ’ +οΏ½ +𝑑𝑀ℓ,β„“βˆ’1 βˆ’ C𝑀ℓ,β„“ +πœ— +οΏ½ ++ Δ𝑀4π‘ πœ… +β‰₯ 𝑑𝑝ℓ,β„“ βˆ’ 𝑑𝑀ℓ,β„“ + 4π‘ πœ…Ξ”π‘€ βˆ’ πœ… +2 +β‰₯ 4π‘ πœ…(|𝑃ℓ| + Δ𝑀) +πœ“π‘  +𝑣 Β―β„“,𝑀 Β―β„“ ( Β―β„“ ) βˆ’ ( Β―β„“ βˆ’ (β„“ βˆ’ 1))πœ… +2 . +If 𝑃 β€² = suffix(𝑃, 𝑝′), then |𝑃 β€²| ≀ |𝑃| ≀ |𝑃ℓ| + Δ𝑀 and (P3) for 𝑣, 𝑝′, |𝑃 β€²|, and layer β„“ βˆ’ 1 readily +follows from the above inequality. +Otherwise, by the assumption that 𝑣 β‰  𝑝′ and Observation 3, it holds that 𝑝′ = 𝑀ℓ and 𝑀 = 𝑝ℓ, +and |𝑃 β€²| = |𝑃ℓ|. Using Lemmas 3 and 11 together with (P3), we arrive at +𝑑𝑝′,β„“βˆ’1 βˆ’ 𝑑𝑀,β„“βˆ’1 β‰₯ 𝑑𝑝′,β„“βˆ’1 βˆ’ 𝑑𝑀ℓ,β„“ + +οΏ½ +𝑑 βˆ’ 𝑒 + Ξ› βˆ’ 𝑑 +πœ— +οΏ½ +β‰₯ 𝑑𝑝ℓ,β„“βˆ’1 βˆ’ C𝑝ℓ,β„“ βˆ’ 𝑑𝑀ℓ,β„“ + +οΏ½ +𝑑 βˆ’ 𝑒 + Ξ› βˆ’ 𝑑 +πœ— +οΏ½ +β‰₯ 𝑑𝑝ℓ,β„“ βˆ’ 𝑑𝑀ℓ,β„“ βˆ’ πœ… +2 +β‰₯ 4π‘ πœ…|𝑃ℓ| +πœ“π‘  +𝑣 Β―β„“,𝑀 Β―β„“ ( Β―β„“ ) βˆ’ ( Β―β„“ βˆ’ (β„“ βˆ’ 1))πœ… +2 +β‰₯ 4π‘ πœ…|𝑃 β€²| +πœ“π‘  +𝑣 Β―β„“,𝑀 Β―β„“ ( Β―β„“ ) βˆ’ ( Β―β„“ βˆ’ (β„“ βˆ’ 1))πœ… +2, +i.e., (P3) for 𝑣, 𝑝′, |𝑃 β€²|, and layer β„“ βˆ’ 1. +β–‘ +Finally, using these results it is not hard to show that (P5) is satisfied as well. +Lemma 16. (P5) holds for 𝑣, 𝑝′, |𝑃 β€²|, and layer β„“ βˆ’ 1. +Proof. Suppose that 𝑣 lies on 𝑃 β€². By Corollary 1, 𝑣 β‰  𝑀. Thus, if 𝑃 β€² = (𝑝′,𝑀), 𝑣 = 𝑝′, i.e., (P5) +holds for 𝑣, 𝑝′, |𝑃 β€²|, and layer β„“ βˆ’ 1. +Otherwise, 𝑃 β€² = suffix(𝑃, 𝑝′), implying that 𝑣 lies on suffix(𝑃, 𝑝′). As 𝑣 β‰  𝑀, this implies that 𝑣 +lies on 𝑃ℓ. Assuming for contradiction that 𝑣 β‰  𝑝′ = 𝑝, by Lemma 9 we have that prefix(𝑃, 𝑝′) = +prefix(𝑃ℓ, 𝑝′), which equals either (𝑝ℓ) = (𝑝′) or (𝑝ℓ, 𝑝′). Thus, the above entails that 𝑣 actually lies +on suffix(𝑃ℓ, 𝑝′) = suffix(𝑃ℓ, 𝑝). As then 𝑝′ = 𝑣, this is a contradiction and we must indeed have +that 𝑝′ = 𝑣. +β–‘ +Corollary 2. In the proof of Theorem 1, it must hold that β„“ = β„“. 
+ +Gradient TRIX +29 +Proof. Assuming for contradiction that β„“ > β„“, Corollary 1, Lemmas 15 and 16, and Observation 2 +show that layer β„“ βˆ’ 1 also satisfies the properties (P1) to (P5) for some π‘£β„“βˆ’1, π‘β„“βˆ’1,π‘€β„“βˆ’1, and paths +π‘ƒβ„“βˆ’1, π‘„β„“βˆ’1, contradicting the minimality of β„“. +β–‘ +Bounding Skews. With our machinery for bounding Ψ𝑠 in place, it remains to perform the induction +on 𝑠 ∈ N>0 to wrap things up. To anchor the induction at 𝑠 = 1, we exploit that Ξ¨1(β„“) ≀ Ξ1(β„“)+2πœ…π·. +Lemma 17. +Ξ¨1(β„“) ≀ +οΏ½ +Ξ1(0) +if β„“ < 4Ξ1(0)/πœ… +4πœ…π· +else. +Proof. Recall that πœ… = 2(𝑒 + (1 βˆ’ 1/πœ—)(Ξ› βˆ’ 𝑑)). Note that Ξ1(β„“) ≀ Ξ¨1(β„“) + 2πœ…π· for all β„“ ∈ N. By +Theorem 1, we thus have for any β„“ ≀ Β―β„“ that +Ξ¨1( Β―β„“ ) ≀ max +οΏ½ +0, Ξ1(β„“ ) βˆ’ ( Β―β„“ βˆ’ β„“ + 1)πœ… +οΏ½ ++ ( Β―β„“ βˆ’ β„“ )πœ… +2 +≀ max +οΏ½ +0, Ξ¨1(β„“ ) + 2πœ…π· βˆ’ ( Β―β„“ βˆ’ β„“ + 1)πœ… +οΏ½ ++ ( Β―β„“ βˆ’ β„“ )πœ… +2 . +In particular, we have that +Ξ¨1(β„“) ≀ +οΏ½ +max +οΏ½ +4πœ…π·, Ξ1(0) +οΏ½ +if β„“ < 8𝐷 +max +οΏ½ +4πœ…π·, Ξ¨1(β„“ βˆ’ 8𝐷) βˆ’ 2πœ…π· +οΏ½ +else. +By induction on π‘˜ ∈ N, we thus have that +Ξ¨1(β„“) ≀ max{4πœ…π·, Ξ1(0) βˆ’ 2π‘˜πœ…π·} +for all β„“ ∈ [8π‘˜π·, 8(π‘˜ + 1)𝐷). The claim of the lemma follows by noting that β„“ β‰₯ 4Ξ1(0)/πœ… results in +π‘˜ β‰₯ Ξ1(0)/(2πœ…π·). +β–‘ +Note that this lemma shows that Ξ¨1 self-stabilizes [5] within 𝑂(Ξ1(0)/πœ…) layers. +We remark that a more careful analysis reveals a bound on Ξ¨1(β„“) that converges to 2πœ…π·. We +confine ourselves to stating this result for the small input skew that we guarantee. +Corollary 3. If L0 ≀ 4πœ…, then Ξ¨1(β„“) ≀ 2πœ…π· for all β„“ ∈ N. +Proof. Note that +Ξ1(0) = max +𝑣,π‘€βˆˆπ‘‰{𝑑𝑣,0 βˆ’ 𝑑𝑀,0 βˆ’ 2πœ…π‘‘(𝑣,𝑀)} ≀ max +𝑣,π‘€βˆˆπ‘‰{(L0 βˆ’ 2πœ…)𝑑(𝑣,𝑀)} ≀ (L0 βˆ’ 2πœ…)𝐷 ≀ 2πœ…π·. +By replacing 8𝐷 with 4𝐷 in the induction from the proof of Lemma 17, we get that +Ξ¨1(β„“) ≀ +οΏ½ +max +οΏ½ +2πœ…π·, Ξ1(0) +οΏ½ +if β„“ < 4𝐷 +max +οΏ½ +2πœ…π·, Ξ¨1(β„“ βˆ’ 4𝐷) +οΏ½ +else, +implying a uniform bound of Ξ¨1(β„“) ≀ 2πœ…π· for all β„“ ∈ N. +β–‘ +For the sake of completeness, we also infer that supβ„“ ∈N{Ξ¨0(β„“)}, also referred to as the global +skew in the literature, is in 𝑂(𝑒 + (1 βˆ’ 1/πœ—)(Ξ› βˆ’π‘‘)). Provided that Ξ› ∈ 𝑂(𝑑 +𝑒/(πœ— βˆ’ 1)), this bound +is asymptotically optimal [2]. +Corollary 4. If L0 ≀ 4πœ…, then Ξ¨0(β„“) ≀ 6πœ…π· ∈ 𝑂(𝑒 + (1 βˆ’ 1/πœ—)(Ξ› βˆ’ 𝑑)) for all β„“ ∈ N. +Proof. Follows from Corollary 3, the fact that Ξ¨0(β„“) ≀ Ξ¨1(β„“) + 4πœ…π·, and the choice of πœ…. +β–‘ +In order to bound the local skew, we now turn to attention to Ψ𝑠 (β„“) for 𝑠 > 1. + +30 +Christoph Lenzen and Shreyas Srinivas +Lemma 18. For some 𝑠 ∈ N, 𝑠 > 0, suppose that Ξ¨π‘ βˆ’1(β„“) ≀ Ξ¨π‘ βˆ’1 for all β„“ ∈ N. Then +Ψ𝑠 (β„“) ≀ +οΏ½ +Ξžπ‘  (0) + Ξ¨π‘ βˆ’1 +2 +if β„“ < Ξ¨π‘ βˆ’1/πœ… +Ξ¨π‘ βˆ’1 +2 +else. +Proof. Recall that πœ… = 2(𝑒 + (1 βˆ’ 1/πœ—)(Ξ› βˆ’ 𝑑)). For β„“ < Ξ¨π‘ βˆ’1/πœ…, by Theorem 1 with Β―β„“ = β„“ and +β„“ = 0 we have that +Ψ𝑠 (β„“) ≀ Ξžπ‘  (0) + πœ…β„“ +2 ≀ Ξžπ‘  (0) + Ξ¨π‘ βˆ’1 +2 +. +Note that Ξžπ‘  (β„“) ≀ Ξ¨π‘ βˆ’1(β„“) ≀ Ξ¨π‘ βˆ’1 for all β„“ ∈ N. Thus, for β„“ β‰₯ Ξ¨π‘ βˆ’1/πœ… by Theorem 1 with Β―β„“ = β„“ +and β„“ = β„“ βˆ’ βŒŠΞ¨π‘ βˆ’1/πœ…βŒ‹ we have that +Ψ𝑠 (β„“) ≀ max +οΏ½ +0, Ξžπ‘  +οΏ½ +β„“ βˆ’ +οΏ½ Ξ¨π‘ βˆ’1 +πœ… +οΏ½οΏ½ +βˆ’ +οΏ½οΏ½ Ξ¨π‘ βˆ’1 +πœ… +οΏ½ ++ 1 +οΏ½ +πœ… +οΏ½ ++ +οΏ½ Ξ¨π‘ βˆ’1 +πœ… +οΏ½ πœ… +2 ≀ Ξ¨π‘ βˆ’1 +2 +. 
+β–‘ +Using this lemma, we can bound the local skew by 𝑂(πœ…(1 + log 𝐷)) = 𝑂((𝑒 + (1 βˆ’ 1/πœ—)(Ξ› βˆ’ +𝑑))(1 + log 𝐷)). +Theorem 2. If there are no faults, then Lβ„“ ≀ 4πœ…(2 + log 𝐷) for all β„“ ∈ N. +Proof. By Lemma 27, L0 ≀ 4πœ…. By Corollary 3, Ξ¨1(β„“) ≀ 2πœ…π· for all β„“ ∈ N. By the assumption +that L0 ≀ 4πœ…, for all 𝑠 > 1 we have that +Ξžπ‘  (0) = max +𝑣,π‘€βˆˆπ‘‰{𝑑𝑣,0 βˆ’ 𝑑𝑀,0 βˆ’ (4𝑠 βˆ’ 2)πœ…π‘‘(𝑣,𝑀)} ≀ max +𝑣,π‘€βˆˆπ‘‰{(L0 βˆ’ 6πœ…)𝑑(𝑣,𝑀)} = 0. +Hence, inductive use of Lemma 18 yields that Ψ𝑠 (β„“) ≀ 22βˆ’π‘ πœ…π·. In particular, Ψ⌊log π·βŒ‹ ≀ 8πœ…. The +claim now follows by Observation 1. +β–‘ +Moreover, in addition we obtain the following self-stabilization property. +Theorem 3. If for 𝑠,𝑠′ ∈ N, 𝑠 ≀ 𝑠′, we have that Ψ𝑠 (β„“) ≀ Ψ𝑠 for all β„“ β‰₯ β„“ ∈ N, then for β„“ β‰₯ β„“ +Lβ„“ ≀ +οΏ½ +4π‘ πœ… + Ψ𝑠 +if β„“ ≀ β„“ < β„“ + 2Ψ𝑠/πœ… and +4π‘ β€²πœ… + +Ψ𝑠 +2π‘ β€²βˆ’π‘  +if β„“ β‰₯ β„“ + 2Ψ𝑠/πœ…. +Proof. Inductive use3 of Lemma 18 yields for 𝑠′ β‰₯ 𝑠 and β„“ β‰₯ β„“ + �𝑠′ +𝜎=𝑠+1 Ψ𝑠/(2πœŽβˆ’π‘ πœ…) that +Ψ𝑠′ ≀ +Ψ𝑠 +2π‘ β€²βˆ’π‘  . +Since the sum forms a geometric series, this in particular applies to all β„“ β‰₯ β„“ + 2Ψ𝑠/πœ…. The claim +now follows by applying Observation 1. +β–‘ +4.4 +Bounding Skews in the Presence of Faults +To analyze how skews evolve with faults, we relate the setting with faults to the bounds we have +for a fault-free system. The key property the algorithm guarantees is that, up to an additive 2πœ…, the +pulse time is within the interval spanned by the correct predecessors’ pulse times plus Ξ›. We first +show this for the case that for some node (𝑣, β„“), (𝑣, β„“ βˆ’ 1) is faulty. +3As is, the lemma applies only if β„“ = 0. However, the algorithm and hence all statements are invariant under shifting indices +by β„“. + +Gradient TRIX +31 +Lemma 19. Suppose that the only faulty predecessor of (𝑣, β„“) ∈ 𝑉ℓ, β„“ > 0, is (𝑣, β„“ βˆ’ 1). Denote +𝑑min := +min +{𝑣,𝑀}∈𝐸{𝑑𝑀,β„“βˆ’1} and +𝑑max := max +{𝑣,𝑀}∈𝐸{𝑑𝑀,β„“βˆ’1}. +Then +𝑑min + Ξ› βˆ’ 2πœ… ≀ 𝑑𝑣,β„“ ≀ 𝑑max + Ξ› + 2πœ…. +Proof. By the assumption of the lemma, for all {𝑣,𝑀} ∈ 𝐸, (𝑀, β„“ βˆ’ 1) βˆ‰ 𝐹. We have that +𝐻own βˆ’ 𝐻max = min +𝑠 ∈N {𝐻own βˆ’ 𝐻max + 4π‘ πœ…} +≀ min +𝑠 ∈N {max{𝐻own βˆ’ 𝐻max + 4π‘ πœ…, 𝐻own βˆ’ 𝐻min βˆ’ 4π‘ πœ…}} +≀ max{𝐻own βˆ’ 𝐻max, 𝐻own βˆ’ 𝐻min} += 𝐻own βˆ’ 𝐻min. +Hence, abbreviating +Ξ” = min +𝑠 ∈N {max{𝐻own βˆ’ 𝐻max + 4π‘ πœ…, 𝐻own βˆ’ 𝐻min βˆ’ 4π‘ πœ…}} βˆ’ πœ… +2, +it holds that +𝐻own βˆ’ 𝐻max βˆ’ πœ… +2 ≀ Ξ” ≀ 𝐻own βˆ’ 𝐻min βˆ’ πœ… +2 . +Taking into account the adjustments in case Ξ” βˆ‰ [0,πœ—πœ…] and using that 𝐻min ≀ 𝐻max we get that +𝐻own βˆ’ 𝐻max βˆ’ 3πœ… +2 ≀ C𝑣,β„“ ≀ 𝐻own βˆ’ 𝐻min + 3πœ… +2 . +Therefore, the local time 𝐻𝑣,β„“ (𝑑𝑣,β„“) = 𝐻own + Ξ› βˆ’ 𝑑 βˆ’ C𝑣,β„“ at which (𝑣, β„“) generates its pulse satisfies +𝐻min + Ξ› βˆ’ 𝑑 βˆ’ 3πœ… +2 ≀ 𝐻𝑣,β„“ (𝑑𝑣,β„“) ≀ 𝐻max + Ξ› βˆ’ 𝑑 + 3πœ… +2 . +If 𝐻min > 𝐻𝑣,β„“ (𝑑𝑣,β„“), we have that +𝑑min βˆ’ 𝑑𝑣,β„“ ≀ 𝐻min βˆ’ 𝐻𝑣,β„“ (𝑑𝑣,β„“). +Applying the lower bound of 𝑑 βˆ’ 𝑒 on message delay and Equation (1), we get that +𝑑𝑣,β„“ β‰₯ 𝑑min + 𝑑 βˆ’ 𝑒 + Ξ› βˆ’ 𝑑 βˆ’ 3πœ… +2 > 𝑑min + Ξ› βˆ’ 2πœ…. +If 𝐻min ≀ 𝐻𝑣,β„“ (𝑑𝑣,β„“), the bounds on message delays and hardware clock drift together with Equa- +tion (1) yield that +𝑑𝑣,β„“ β‰₯ 𝑑min + 𝑑 βˆ’ 𝑒 + Ξ› βˆ’ 𝑑 βˆ’ 3πœ…/2 +πœ— +> 𝑑min + Ξ› βˆ’ 3πœ… +2 βˆ’ 𝑒 βˆ’ +οΏ½ +1 βˆ’ 1 +πœ— +οΏ½ +(Ξ› βˆ’ 𝑑) += 𝑑min + Ξ› βˆ’ 2πœ…. 
+Concerning the upper bound on 𝑑𝑣,β„“, note that because 𝑑𝑣,β„“ is increasing in 𝐻𝑣,β„“ (𝑑𝑣,β„“), to bound +𝑑𝑣,β„“ from above we may assume that +𝐻𝑣,β„“ (𝑑𝑣,β„“) = 𝐻max + Ξ› βˆ’ 𝑑 + 3πœ… +2 > 𝐻max, +where the last step uses Equation (2). In this case, +𝑑𝑣,β„“ βˆ’ 𝑑max ≀ 𝐻𝑣,β„“ (𝑑𝑣,β„“) βˆ’ 𝐻max + 𝑑 ≀ Ξ› + 3πœ… +2 < Ξ› + 2πœ…. +β–‘ + +32 +Christoph Lenzen and Shreyas Srinivas +Similar reasoning covers the case that for some (𝑣, β„“) ∈ 𝑉ℓ and {𝑣,𝑀} ∈ 𝐸, (𝑀, β„“ βˆ’ 1) is faulty. +Lemma 20. Suppose that for (𝑣, β„“) ∈ 𝑉ℓ, β„“ > 0, (𝑣, β„“ βˆ’ 1) is not faulty, and at most one predecessor is +faulty. Denoting +𝑑min := +min +((𝑀,β„“βˆ’1),(𝑣,β„“)) βˆˆπΈβ„“βˆ’1 +(𝑀,β„“βˆ’1)βˆ‰πΉ +{𝑑𝑀,β„“βˆ’1} and +𝑑max := +max +((𝑀,β„“βˆ’1),(𝑣,β„“)) βˆˆπΈβ„“βˆ’1 +(𝑀,β„“βˆ’1)βˆ‰πΉ +{𝑑𝑀,β„“βˆ’1}, +then +𝑑min + Ξ› βˆ’ 2πœ… ≀ 𝑑𝑣,β„“ ≀ 𝑑max + Ξ›. +Proof. By Lemma 3, C𝑣,β„“ β‰₯ 0 implies that +𝑑𝑣,β„“ βˆ’ 𝑑min ≀ 𝑑𝑣,β„“ βˆ’ 𝑑𝑣,β„“βˆ’1 ≀ Ξ›, +while C𝑣,β„“ ≀ πœ—πœ… yields that +𝑑𝑣,β„“ βˆ’ 𝑑max β‰₯ 𝑑𝑣,β„“ βˆ’ 𝑑𝑣,β„“βˆ’1 β‰₯ 𝑑 βˆ’ 𝑒 + Ξ› βˆ’ 𝑑 +πœ— +βˆ’ πœ… β‰₯ Ξ› βˆ’ 2πœ…. +It remains to show the upper bound on 𝑑𝑣,β„“ if C𝑣,β„“ < 0 and the lower bound if C𝑣,β„“ > πœ—πœ…. +Consider first the case that C𝑣,β„“ < 0. Accordingly, +C𝑣,β„“ = 𝐻own βˆ’ 𝐻min βˆ’ πœ… +2 + 2πœ… > 𝐻own βˆ’ 𝐻min. +It follows that +𝐻𝑣,β„“ (𝑑𝑣,β„“) = 𝐻own + Ξ› βˆ’ 𝑑 βˆ’ C𝑣,β„“ ≀ 𝐻min + Ξ› βˆ’ 𝑑. +Noting that the reception time of the first message from a predecessor is bounded from above by +the reception time of the message from a correct predecessor, we conclude that +𝑑𝑣,β„“ ≀ 𝑑min + Ξ›. +Now consider the case that C𝑣,β„“ > πœ—πœ…. Consequently, +C𝑣,β„“ = 𝐻own βˆ’ 𝐻max βˆ’ πœ… +2 βˆ’ πœ… > 𝐻own βˆ’ 𝐻max βˆ’ πœ—π‘’. +It follows that the local time 𝐻 at which (𝑣, β„“) generates its pulse satisfies that +𝐻 = 𝐻own + Ξ› βˆ’ 𝑑 βˆ’ C𝑣,β„“ β‰₯ 𝐻max + Ξ› βˆ’ 𝑑 + πœ—π‘’. +Noting that the reception time of the latest message from a predecessor is bounded from below by +the reception time of the latest message from a correct predecessor, by Equation (1) we conclude +that +𝑑𝑣,β„“ β‰₯ 𝑑max + 𝑑 + Ξ› βˆ’ 𝑑 +πœ— +> 𝑑𝑣,β„“βˆ’1 + Ξ› βˆ’ πœ…. +β–‘ +Corollary 5. Denote +𝑑min := +min +((𝑀,β„“βˆ’1),(𝑣,β„“)) βˆˆπΈβ„“βˆ’1 +(𝑀,β„“βˆ’1)βˆ‰πΉ +{𝑑𝑀,β„“βˆ’1} and +𝑑max := +max +((𝑀,β„“βˆ’1),(𝑣,β„“)) βˆˆπΈβ„“βˆ’1 +(𝑀,β„“βˆ’1)βˆ‰πΉ +{𝑑𝑀,β„“βˆ’1}. +Then +𝑑min + Ξ› βˆ’ 2πœ… ≀ 𝑑𝑣,β„“ ≀ 𝑑max + Ξ› + 2πœ…. + +Gradient TRIX +33 +Proof. Immediate from Lemmas 19 and 20 and the assumption that no node has more than one +faulty predecessor. +β–‘ +Using this result, we can bound the impact of a fault in layer β„“ βˆ’ 1 on successors via the skew +bounds of close-by nodes on layer β„“ βˆ’ 1; we exploit that all bounds we show would in fact also +apply to the faulty node if it was correct. +Lemma 21. Suppose for a node (𝑣, β„“) ∈ 𝑉ℓ, β„“ > 0, that one of its predecessors is faulty. Moreover, +assume that in an execution that differs only in that the faulty predecessor of (𝑣, β„“) is correct, it holds +that max{𝑣,𝑀}∈𝐸{|𝑑𝑣,β„“βˆ’1 βˆ’ 𝑑𝑀,β„“βˆ’1|} ≀ 𝐡. Then in the execution with the predecessor being faulty, the +pulse time of (𝑣, β„“) differs by at most 2𝐡 + 4πœ…. +Proof. Denote by standard variables values in the execution without the predecessor being +faulty and by primed variables values in the one where it is. 
In particular, for node (𝑣, β„“) ∈ 𝑉ℓ \ 𝐹 +𝑑min := +min +((𝑀,β„“βˆ’1),(𝑣,β„“)) βˆˆπΈβ„“βˆ’1{𝑑𝑀,β„“βˆ’1}, +𝑑max := +max +((𝑀,β„“βˆ’1),(𝑣,β„“)) βˆˆπΈβ„“βˆ’1 +{𝑑𝑀,β„“βˆ’1}, +𝑑 β€² +min := +min +((𝑀,β„“βˆ’1),(𝑣,β„“)) βˆˆπΈβ„“βˆ’1 +(𝑀,β„“βˆ’1)βˆ‰πΉ +{𝑑𝑀,β„“βˆ’1}, and +𝑑 β€² +max := +max +((𝑀,β„“βˆ’1),(𝑣,β„“)) βˆˆπΈβ„“βˆ’1 +(𝑀,β„“βˆ’1)βˆ‰πΉ +{𝑑𝑀,β„“βˆ’1}. +denote the earliest and latest pulsing times of (correct) predecessors without and with faults on +layer β„“ βˆ’ 1, respectively. +Observe that +𝑑𝑣,β„“βˆ’1 βˆ’ 𝐡 ≀ 𝑑min ≀ 𝑑minβ€² ≀ 𝑑maxβ€² ≀ 𝑑max ≀ 𝑑𝑣,β„“βˆ’1 + 𝐡. +Hence, Corollary 5 (applied to both executions) shows that +𝑑𝑣,β„“βˆ’1 βˆ’ 𝐡 βˆ’ 2πœ… ≀ 𝑑𝑣,β„“ ≀ 𝑑𝑣,β„“βˆ’1 + 𝐡 + 2πœ… and +𝑑𝑣,β„“βˆ’1 βˆ’ 𝐡 βˆ’ 2πœ… ≀ 𝑑 β€² +𝑣,β„“ ≀ 𝑑𝑣,β„“βˆ’1 + 𝐡 + 2πœ…. +β–‘ +Finally, we observe that such a β€œtime shift” propagates without further increase, so long as there +are no faults. However, a subtlety here is that this is only true for our bounds on timing: a change +in timing might leave more time for drift of the local clock to accumulate; since our worst-case +bounds include the maximum time error that can possibly be accumulated from drift (so long as +local skews do not become exceedingly large), this is already accounted for in the bound provided +by Lemma 3. Hence, we obtain the following generalized variant of Lemma 3. +Lemma 22. Suppose that for 𝑣 ∈ 𝑉 and β„“ ∈ N>0 the predecessors of (𝑣, β„“) are correct. If we shift the +pulse times of these predecessors by at most 𝛿 ∈ R, where Equation (2) still holds for the shifted times, +then +𝑑 βˆ’ 𝑒 + Ξ› βˆ’ 𝑑 βˆ’ C𝑣,β„“ +πœ— +βˆ’ 𝛿 ≀ 𝑑 β€² +𝑣,β„“ βˆ’ 𝑑𝑣,β„“βˆ’1 ≀ Ξ› βˆ’ C𝑣,β„“ + 𝛿, +where 𝑑 β€² +𝑣,β„“ denotes the pulse time of (𝑣, β„“) in the execution with the shifts applied. +Proof. Pulse times are increasing as functions of pulse times of predecessors. Therefore, in +order to maximize or minimize 𝑑 β€² +𝑣,β„“, we need to maximize or minimize the predecessors’ pulse times, +respectively. Shifting all predecessors’ pulse times uniformly by 𝛿 also shifts 𝑑 β€² +𝑣,β„“ by 𝛿 relative to +𝑑𝑣,β„“. The statement now follows analogously to the proof of Lemma 3, carrying the uniform shift +through all inequalities. +β–‘ + +34 +Christoph Lenzen and Shreyas Srinivas +With these tools in place, we can conclude that skews do not grow arbitrarily in the face of faults. +Theorem 4. If there are at most 𝑓 faulty nodes in the system and none in layer 0, then Lβ„“ ∈ +𝑂(5𝑓 πœ… log 𝐷). +Proof. We prove by induction on the number 𝑖 ≀ 𝑓 of layers β„“ > 0 with faults that the skew is +bounded by 𝐡𝑖 := 4πœ…(2 + log 𝐷)5𝑖 �𝑖 +𝑗=0 5βˆ’π‘— ∈ 𝑂(5𝑓 πœ… log 𝐷). By Corollary 6, L0 ≀ πœ…/2 < 4πœ…. Thus, +if there are no faults in layers β„“ > 0, by Theorem 2 we have that Lβ„“ ≀ 𝐡0 := 4πœ…(2 + log 𝐷) for all +β„“ ∈ N. +Assume that we completed step 𝑖 ∈ N and that ℓ𝑖+1 is the next layer where faults need to be +added. Then we have that for all β„“ ≀ ℓ𝑖+1 that Lβ„“β€² ≀ 𝐡𝑖 = 4πœ…(2 + log 𝐷)5𝑓 �𝑖 +𝑗=0 5βˆ’π‘— both before and +after adding the faults on layer 𝑖 + 1. By Lemma 21, it follows that pulsing times on layer ℓ𝑖+1 + 1 do +not change by more than 2𝐡𝑖 + 4πœ… due to the addition of faults. By Lemma 22, this extends to all +bounds4 we compute on pulse times in layers β„“ > ℓ𝑖+1. Since 𝐷 β‰₯ 1 and thus log 𝐷 β‰₯ 0, we get that +the local skew in step 𝑖 + 1 is bounded by +5𝐡𝑖 + 4πœ… = 4πœ…(2 + log 𝐷)5𝑖+1 +π‘–βˆ‘οΈ +𝑗=0 +5βˆ’π‘— + 4πœ… ≀ 4πœ…(2 + log 𝐷)5𝑖+1 +𝑖+1 +βˆ‘οΈ +𝑗=0 +5βˆ’π‘— = 𝐡𝑖+1. 
+β–‘ +Bounding Skews with Uniform Fault Distribution +The bound in Theorem 4, which is exponential in 𝑓 , seems to suggest that the system can only +support a very small number of faults or the local skew explodes. However, we have not yet +taken into account that the starting point of our entire approach is the assumption that faults are +sufficiently sparse, meaning that it is highly unlikely that many of them cluster together in a way +that causes an exponential pile-up of local skew. This enables the self-stabilization properties of +the algorithm to prevent such a build-up altogether. +In the following, assume that each node fails uniformly and independently with probability +π‘œ(π‘›βˆ’1/2). This is the largest probability of error we can support while guaranteeing that no node +has more than one faulty predecessor with probability 1 βˆ’ π‘œ(1). A key observation is that this +entails that within a fairly large distance of 𝑛1/12, no node has more than a constant number of +faulty nodes that can influence it. We now formalize and show this claim. +Definition 5 (Distance-𝛿 Ancestors). For node (𝑣, β„“) ∈ 𝑉ℓ and 𝛿 ∈ N, its distance-𝛿 ancestors are +all nodes (𝑀, β„“β€²) ∈ 𝑉𝐺 \ {(𝑣, β„“)} such that there is a (directed) path of length at most 𝛿 from (𝑀, β„“β€²) +to (𝑣, β„“) in 𝐺. +Definition 6 (Distance-𝛿 π‘˜-faulty). Node (𝑣, β„“) ∈ 𝑉ℓ, β„“ ∈ N>0 is distance-𝛿 π‘˜-faulty if π‘˜ ∈ N is +minimal such that there are at most π‘˜ faulty nodes among the distance-((π‘˜ + 1)𝛿) ancestors of +(𝑣, β„“). +Observation 5. Suppose that 𝛿 ≀ 𝑛1/12. If nodes fail independently with probability 𝑝 ∈ π‘œ(1/βˆšπ‘›), +then with probability 1 βˆ’ π‘œ(1) all nodes are distance-𝛿 π‘˜-faulty for π‘˜ ≀ 2. +Proof. In order to be distance-𝛿 π‘˜-faulty for π‘˜ > 2, a node must have at least 3 faults among +its distance-(3𝛿) ancestors. The number of these ancestors is bounded by (3𝛿)2 ∈ 𝑂(𝑛1/6). Since +𝑝 ∈ π‘œ(1/βˆšπ‘›), the probability for this to happen is bounded by 𝑂(𝑝3�𝑛1/6 +3 +οΏ½) = 𝑂(𝑝3βˆšπ‘›) βŠ‚ π‘œ(1/𝑛). +The claim follows by applying a union bound over all 𝑛 nodes. +β–‘ +4Due to drifting hardware clocks, this does not apply to the pulse times themselves. However, we rely on Lemma 3 to prove +our bounds in the absence of faults, and this is covered by Lemma 22. + +Gradient TRIX +35 +We can exploit this to control how much skews grow as the result of faults much better. +Lemma 23. Suppose that Ψ𝑠 (β„“) ≀ 𝐡𝑠,β„“ and Lβ„“ ≀ 𝐡 for all layers β„“ β‰₯ β„“ and 𝑠 ∈ N, where β„“, β„“ ∈ N, if +there are no faults in these layers. If no node in a layer β„“ β‰₯ β„“ has more than 2 faulty nodes among its +distance-(β„“ βˆ’ β„“ ) ancestors, then Ψ𝑠 (β„“) ≀ 𝐡𝑠,β„“ + 12𝐡 + 24πœ… for all β„“ β‰₯ β„“. +Proof. We examine by how much adding faults on layers β„“ β‰₯ Β―β„“ might affect pulsing times. For +β„“ β‰₯ Β―β„“ and (𝑣, β„“) ∈ 𝑉ℓ, denote by 𝑓𝑣,β„“ ∈ {0, 1, 2} the number of faulty distance-(β„“ βˆ’ Β―β„“) ancestors +of (𝑣, β„“). For 𝑓𝑣,β„“ = 0, there is no change in 𝑑𝑣,β„“. For 𝑓𝑣,β„“ > 0, consider two cases. If (𝑣, β„“) has no +faulty predecessor, then by Lemma 22, 𝑑𝑣,β„“ is changed at most by the maximum shift that any +of its predecessors undergoes. On the other hand, if (𝑣, β„“) does have a faulty predecessor, then +𝑓𝑣,β„“ > 𝑓𝑀,β„“βˆ’1 for all correct predecessors of (𝑣, β„“). Thus, by Lemma 21 we can bound shifts by 𝐡𝑓𝑣,β„“ , +where 𝐡0 := 0 and 𝐡𝑓 +1 := 2(𝐡 + 𝐡𝑓 ) + 4πœ…. +By assumption, 𝑓𝑣,β„“ ≀ 2 and hence the maximum shift is bounded by 𝐡2 = 6𝐡 + 12πœ…. 
We conclude +that Ψ𝑠 (β„“) ≀ 𝐡𝑠,β„“ + 2𝐡2 = 𝐡𝑠,β„“ + 12𝐡 + 24πœ…, as claimed. +β–‘ +Together with Lemma 23, Observation 5 shows that skews do not increase by more than a +constant factor within 𝑛1/12 layers. However, we need to handle a total of Θ(βˆšπ‘›) layers. To this end, +we slice up the task into chunks of 𝑛1/12 layers and leverage the self-stabilization properties of the +algorithm. For simplicity, in the following we assume that 𝑛1/12 is integer. As we prove asymptotic +bounds, this does not affect the results. +Definition 7 (Slices). Slice 𝑖 ∈ N>0 consists of layers β„“ ∈ [(𝑖 βˆ’ 1)𝑛1/12,𝑖𝑛1/12 βˆ’ 1]. +Note that there are no more than 𝑛5/12 slices, because the nodes are arranged in square grid. Due +to the duplication of nodes on layer 0 and the boundary nodes on layers β„“ > 0, the number of slices +is actually 𝑛5/12 βˆ’ Θ(1). +As our next step towards a probabilistic skew bound, we prove that if the local skew remains +bounded, then for levels 𝑠 that are not too large, Ψ𝑠 remains almost as small as without faults. First, +we show a loose bound that naively accumulates shifts slice by slice. +Lemma 24. Suppose that +β€’ L0 ≀ 4πœ…, +β€’ each node is distance-𝑛1/12 π‘˜-faulty for π‘˜ ≀ 2, and +β€’ Lβ„“ ≀ 𝐡 for all β„“ ∈ N. +Then for each 𝑠 ∈ N and layer β„“ in slice 𝑖 ∈ N>0, we have that +Ψ𝑠 (β„“) ≀ 22βˆ’π‘ πœ…π· + 𝑖(12𝐡 + 24πœ…). +for all β„“ ∈ N. +Proof. Assume first that there are no faults. In this case, analogously to the proof of Theorem 2, +we get that Ψ𝑠 (β„“) ≀ 22βˆ’π‘ πœ…π· for all β„“ ∈ N. Now we β€œadd” faults inductively slice by slice, by +Lemma 23 each time increasing the bound on Ψ𝑠 (β„“) by 12𝐡 + 24πœ… for all slices 𝑗 β‰₯ 𝑖. +β–‘ +For larger values of 𝑠, 22βˆ’π‘ πœ…π· β‰ͺ 𝑛1/12, meaning that this naive bound is insufficient to show +that Ψ𝑠 (β„“) does not increase much compared to the fault-free setting. However, we can take things +much further by leveraging Theorem 3. +Lemma 25. Suppose that +β€’ L0 ≀ 4πœ…, +β€’ each node is distance-𝑛1/12 π‘˜-faulty for π‘˜ ≀ 2, and +β€’ Lβ„“ ≀ 𝐡 ∈ π‘œ(𝑛1/12πœ…/log 𝐷) for all β„“ ∈ N. + +36 +Christoph Lenzen and Shreyas Srinivas +Then for5 𝑠 ∈ N>0, 𝑠 ≀ log 𝐷 βˆ’ log(𝐡/πœ…) βˆ’ 2 log log 𝐷, it holds that +Ψ𝑠 (β„“) ≀ Ψ𝑠 ∈ (1 + π‘œ(1))22βˆ’π‘ πœ…π·. +Proof. Note that 𝐷 ∈ Θ(𝑛1/2) and hence log log 𝐷 ∈ πœ”(1). Accordingly, the prerequisites of the +lemma ensure that 𝑛5/12(𝐡 + πœ…) ∈ π‘œ(πœ…π·/log 𝐷) and 𝐡 + πœ… ∈ π‘œ(Ξ¨π‘ βˆ’1/log 𝐷). Hence, we may fix a +suitable πœ€ ∈ π‘œ(1) such that +𝑛5/12(12𝐡 + 24πœ…) ≀ +πœ€ +log 𝐷 Β· 2πœ…π· and +οΏ½οΏ½ Ξ¨π‘ βˆ’1 +𝑛5/12πœ… +οΏ½ ++ 1 +οΏ½ +(12𝐡 + 24πœ…) ≀ +πœ€ +4 log 𝐷 Β· Ξ¨π‘ βˆ’1. +We claim that if 𝑛 is sufficiently large such that πœ€ ≀ 1, we have that +Ψ𝑠 (β„“) ≀ Ψ𝑠 := 22βˆ’π‘ πœ…π· Β· +οΏ½ +1 + +πœ€π‘  +log 𝐷 +οΏ½ +, +which we show by induction on 𝑠 ∈ N>0. +For the base case of 𝑠 = 1, note that there are no more than 𝑛5/12 slices, yielding by Lemma 24 +that +Ξ¨1(β„“) ≀ 2πœ…π· + 𝑛5/12(12𝐡 + 24πœ…) ≀ +οΏ½ +1 + +πœ€ +log 𝐷 +οΏ½ +2πœ…π·, +i.e., indeed Ξ¨1(β„“) ≀ Ξ¨1. +Now assume that the claim holds for 𝑠 βˆ’ 1 ∈ N>0. Then, by Lemma 24 and the induction +hypothesis, for layers β„“ in slices 𝑖 ≀ ⌈(Ξ¨π‘ βˆ’1/(𝑛1/12πœ…)βŒ‰, we have that +Ψ𝑠 (β„“) ≀ 22βˆ’π‘ πœ…π· + +οΏ½ Ξ¨π‘ βˆ’1 +𝑛1/12πœ… +οΏ½ +(12𝐡 + 24πœ…) < Ξ¨π‘ βˆ’1 +2 ++ +οΏ½οΏ½ Ξ¨π‘ βˆ’1 +𝑛1/12πœ… +οΏ½ ++ 1 +οΏ½ +(12𝐡 + 24πœ…). 
+For a layer β„“ in a slice 𝑖 > ⌈(Ξ¨π‘ βˆ’1/(𝑛1/12πœ…)βŒ‰, assume first that we add only faults in slices 𝑗 < +𝑖 βˆ’ ⌈(Ξ¨π‘ βˆ’1/(𝑛1/12πœ…)βŒ‰. Hence, we can apply Lemma 18, shifting layer indices such that β€œlayer 0” is +the first layer of slice 𝑖 βˆ’ ⌈(Ξ¨π‘ βˆ’1/(𝑛1/12πœ…)βŒ‰. In this setting, we thus have that Ψ𝑠 (β„“) ≀ Ξ¨π‘ βˆ’1 +2 . We now +apply Lemma 23 inductively to slices 𝑗 ∈ [π‘–βˆ’βŒˆ(Ξ¨π‘ βˆ’1/(𝑛1/12πœ…)βŒ‰,𝑖], adding in total (βŒˆΞ¨π‘ βˆ’1/(𝑛1/12πœ…)βŒ‰+ +1)(12𝐡 + 24πœ…) to the bound, i.e., +Ψ𝑠 (β„“) ≀ Ξ¨π‘ βˆ’1 +2 ++ +οΏ½οΏ½ Ξ¨π‘ βˆ’1 +𝑛1/12πœ… +οΏ½ ++ 1 +οΏ½ +(12𝐡 + 24πœ…) +≀ +οΏ½1 +2 + +οΏ½ +πœ€ +4 log 𝐷 +οΏ½οΏ½ +Ξ¨π‘ βˆ’1 += +οΏ½1 +2 + +οΏ½ +πœ€ +4 log 𝐷 +οΏ½οΏ½ +22βˆ’(π‘ βˆ’1)πœ…π· Β· +οΏ½ +1 + πœ€(𝑠 βˆ’ 1) +log 𝐷 +οΏ½ += 22βˆ’π‘ πœ…π· Β· +οΏ½ +1 + πœ€(𝑠 βˆ’ 1/2) +log 𝐷 ++ +πœ€2 +2 log2 𝐷 +οΏ½ +≀ 22βˆ’π‘ πœ…π· Β· +οΏ½ +1 + +πœ€π‘  +log 𝐷 +οΏ½ +, +where the last step assumes that 𝑛 is large enough so that πœ€ ≀ 1. +β–‘ +5If 𝐷 = 1, we assume the upper bound on 𝑠 to be negative and the claim is vacuously true. Note that we are making an +asymptotic statement in 𝑛 and that 𝐷 grows with 𝑛, so this case is actually of no concern here. + +Gradient TRIX +37 +Our goal is to bound Ψ⌊log π·βŒ‹ by 𝑂(πœ… log 𝐷), since by Observation 1 this implies a bound of +𝑂(πœ… log 𝐷) on the local skew. Thus, we will use the above lemma with 𝐡 ∈ 𝑂(πœ… log 𝐷), which +gets us within 𝑂(log log 𝐷) levels of our β€œtarget” level ⌊log π·βŒ‹. To bridge this remaining gap, we +exploit that the time required for stabilizing the remaining 𝑂(log log 𝐷) levels after a fault-induced +increase of skews takes only log𝑂 (1) 𝐷 = log𝑂 (1) 𝑛 βŠ‚ π‘œ(𝑛1/12) layers, since the involved potentials +are bounded by π‘œ(πœ…π‘›1/12). +Lemma 26. Suppose that +β€’ L0 ≀ 4πœ… and +β€’ each node is distance-𝑛1/12 π‘˜-faulty for π‘˜ ≀ 2. +Then Lβ„“ ∈ 𝑂(πœ… log 𝐷). +Proof. Assume towards a contradiction that the claim is false, and let Β―β„“ ∈ N>0 be minimal such +that Lβ„“ is too large. Hence, for layers β„“ < Β―β„“, we may assume that Lβ„“ ≀ πΆπœ… log 𝐷 for a sufficiently +large constant 𝐢. +Consider 𝑠 = ⌊log 𝐷 βˆ’ log(𝐡/πœ…) βˆ’ 2 log log 𝐷 βˆ’ logπΆβŒ‹ βˆ’ 5. By Lemma 25, for all β„“ ∈ N, β„“ < Β―β„“ it +holds that +Ψ𝑠 (β„“) ∈ Ψ𝑠 := (1 + π‘œ(1))22βˆ’π‘ πœ…π· βŠ† +οΏ½1 +4 + π‘œ(1) +οΏ½ +log3 𝐷, +which for sufficiently large 𝑛 is smaller than ⌊log3 π·βŒ‹/2. In fact, this bound also applies to layer Β―β„“, +since the pulsing times of nodes on layer Β―β„“ depend only on the behavior of nodes on layer Β―β„“ βˆ’ 1 and +the delays of messages sent to nodes on layer Β―β„“. +Now assume that 𝑛 is sufficiently large. This ensures that log3 𝐷 ≀ 𝑛1/12, implying by the +prerequisites of the lemma that each node is distance-(log3 𝐷) π‘˜-faulty for π‘˜ ≀ 2. Consider adjacent +correct nodes (𝑣, β„“), (𝑀, β„“) ∈ 𝑉ℓ \ 𝐹 for any β„“ ∈ N, β„“ ≀ Β―β„“, and {𝑣,𝑀} ∈ 𝐸. We first show that +distance-(log3 𝐷) 0-faulty nodes satisfy that +𝑑𝑣,β„“ βˆ’ 𝑑𝑀,β„“ ∈ (4 + π‘œ(1))πœ…(2 + log 𝐷) βŠ‚ 𝑂(πœ… log 𝐷). +(6) +Since faults that are not among the ancestry of a node cannot affect its pulse time, this follows by +applying Theorem 3 with β„“ = β„“ βˆ’ ⌊(log3 𝐷)βŒ‹ ≀ β„“ βˆ’ 2Ψ𝑠 and 𝑠′ := ⌊log π·βŒ‹. +To extend this to distance-(log3 𝐷) π‘˜-faulty nodes for π‘˜ ∈ {1, 2}, we show by induction on +π‘˜ ∈ {0, 1, 2} that such nodes have their pulse time shifted by no more than 𝑂(πœ… log 𝐷) relative to +an execution in which they are distance-(log3 𝐷) 0-faulty. 
The base case of π‘˜ = 0 is trivial. +To perform the step from π‘˜ βˆ’ 1 ∈ {0, 1} to π‘˜, assume towards a contradiction that there is a +node (𝑣, β„“) with a larger shift, on some minimal layer. Now consider a distance-(log3 𝐷) π‘˜-faulty +node (𝑣, β„“) ∈ 𝑉ℓ \ 𝐹, β„“ ≀ Β―β„“, whose predecessors are all correct. There must be a distance-(log3 𝐷) +ancestor of (𝑣, β„“) that is faulty, since otherwise (𝑣, β„“) would be distance-(log3 𝐷) 0-faulty. Let 𝑑 +be the minimal distance in which there is a faulty ancestor of (𝑣, β„“). Then all ancestors of (𝑣, β„“) +in distance 𝑑 are distance-(log3 𝐷) π‘˜β€²-faulty for π‘˜β€² < π‘˜, as otherwise (𝑣, β„“) would be π‘˜β€²-faulty for +some π‘˜β€² > π‘˜. +Consider an ancestor of (𝑣, β„“) in distance 𝑑 βˆ’ 1. If its predecessors are all correct, by the induction +hypothesis and Lemma 22 their pulse time is shifted by 𝑂(πœ… log 𝐷) relative to an execution in which +they are distance distance-(log3 𝐷) 0-faulty. If there is a faulty predecessor, we infer this from the +induction hypothesis, Equation (6), and Lemma 21.6 If 𝑑 > 1, we now inductively apply Lemma 22 +until having extended this bound to all ancestors of (𝑣, β„“) within distance 𝑑 βˆ’ 1 and finally (𝑣, β„“) +itself. This is a contradiction to (𝑣, β„“) violating the claimed bound on the shift. +6Here the constants in the 𝑂-notation change, while Lemma 22 maintains the bound used in its prerequisites. Since we +perform only two inductive steps, we do not need to keep track of how much the constants increase. + +38 +Christoph Lenzen and Shreyas Srinivas +We conclude that indeed shifts are bounded by 𝑂(πœ… log 𝐷). From this and Equation (6), it im- +mediately follows that L Β―β„“ ∈ 𝑂(πœ… log 𝐷). As 𝐢 is sufficiently large, for sufficiently large 𝑛 this is a +contradiction. We conclude that Lβ„“ ∈ 𝑂(πœ… log 𝐷) for all β„“ ∈ N, as claimed. +β–‘ +Putting these results together, we arrive the desired bound on the local skew. +Theorem 5. With probability 1 βˆ’ π‘œ(1), Lβ„“ ∈ 𝑂(πœ… log 𝐷) for all β„“ ∈ N. +Proof. By Corollary 6, with probability 1 βˆ’ π‘œ(1) it holds that L0 ≀ πœ…/2. By Observation 5, with +probability 1 βˆ’ π‘œ(1) each node is distance-𝑛1/12 π‘˜-faulty for π‘˜ ≀ 2. By a union bound, both events +occur concurrently with probability 1 βˆ’ π‘œ(1). Hence, the claim follows by applying Lemma 26. +β–‘ + +Gradient TRIX +39 +REFERENCES +[1] B. Bailey. Clocks Getting Skewed Up, March 2022. https://semiengineering.com/clocks-getting-skewed-up/. +[2] S. Biaz and J. L. Welch. Closed Form Bounds for Clock Synchronization under Simple Uncertainty Assumptions. +Information Processing Letters, 80:151–157, 2001. +[3] J. Bund, M. FΓΌgger, C. Lenzen, M. Medina, and W. Rosenbaum. PALS: Plesiochronous and Locally Synchronous Systems. +In International Symposium on Asynchronous Circuits and Systems (ASYNC), pages 36–43, 2020. +[4] J. Bund, C. Lenzen, and W. Rosenbaum. Fault Tolerant Gradient Clock Synchronization. In Symposium on Principles of +Distributed (PODC), pages 357–365, 2019. +[5] E. W. Dijkstra. Self-stabilizing systems in spite of distributed control. Communications of the ACM, 17(11):943–644, +1974. +[6] D. Dolev, M. FΓΌgger, C. Lenzen, M. Perner, and U. Schmid. HEX: Scaling Honeycombs is Easier than Scaling Clock +Trees. Journal of Computer and System Sciences, 82(5):929–956, 2016. +[7] R. Fan and N. Lynch. Gradient Clock Synchronization. In Symposium on Principles of Distributed Computing (PODC), +pages 320–327, 2004. +[8] A. L. Fisher and H. T. Kung. 
Synchronizing Large VLSI Processor Arrays. Transactions on Computers, 34(8):734–740, +1985. +[9] E. G. Friedman. Clock Distribution Networks in Synchronous Digital Integrated Circuits. Proceedings of the IEEE, +89(5):665–692, 2001. +[10] Jennifer Lundelius Welch and Nancy Lynch. A new fault-tolerant algorithm for clock synchronization. Information +and Computation, 77(1):1–36, 1988. +[11] P. Khanchandani and C. Lenzen. Self-Stabilizing Byzantine Clock Synchronization with Optimal Precision. Theory of +Computing Systems, 2018. +[12] D. J. Kinniment. Synchronization and Arbitration in Digital Systems. Wiley Publishing, 2008. +[13] F. Kuhn, C. Lenzen, T. Locher, and R. Oshman. Optimal Gradient Clock Synchronization in Dynamic Networks. CoRR, +abs/1005.2894, 2010. +[14] F. Kuhn, C. Lenzen, T. Locher, and R. Oshman. Optimal Gradient Clock Synchronization in Dynamic Networks. +Symposium on Principles of distributed computing (PODC), 2010. +[15] F. Kuhn and R. Oshman. Gradient Clock Synchronization Using Reference Broadcasts. In Conference on Principles of +Distributed Systems (OPODIS), pages 204–218, 2009. +[16] Clock Synchronisation and Adversarial Fault Tolerance, 2021, retrieved on 04 Jan 2023. https://www.mpi-inf.mpg.de/ +fileadmin/inf/d1/teaching/summer21/csaft/reading-material-ch09.pdf. +[17] C. Lenzen, T. Locher, and R. Wattenhofer. Clock Synchronization with Bounded Global and Local Skew. In Symposium +on Foundations of Computer Science (FOCS), pages 509–518, 2008. +[18] C. Lenzen, T. Locher, and R. Wattenhofer. Tight Bounds for Clock Synchronization. Journal of the ACM, 57(2), 2010. +[19] C. Lenzen and J. Rybicki. Self-Stabilising Byzantine Clock Synchronisation Is Almost as Easy as Consensus. Journal of +the ACM, 66(5), 2019. +[20] C. Lenzen and B. Wiederhake. TRIX: Low-Skew Pulse Propagation for Fault-Tolerant Hardware, 2020. https://arxiv. +org/abs/2010.01415. +[21] R. Shelar. Routing with Constraints for Post-Grid Clock Distribution in Microprocessors. IEEE Transactions on +Computer-Aided Design of Integrated Circuits and Systems, 29(2):245–249, 2010. +[22] Transistor Count, retreived Oct 2022. https://en.wikipedia.org/wiki/Transistor_count. +[23] T. Xanthopoulos, editor. Clocking in Modern VLSI Systems. Springer US, 2009. + +40 +Christoph Lenzen and Shreyas Srinivas +A +GENERATING SYNCHRONIZED INPUTS +In this appendix we describe a method for generating well synchronised pulses at layer 0, at a rate +of roughly one pulse per Ξ› time units. There are several ways of approaching this task, but even +when aiming for a fault-tolerant solution, this is an easy problem. The reason is that we merely +need to maintain a small local skew on a line topology, with no alternative propagation paths to +neighboring nodes. +Since our goal is to handle an independent probability of 𝑝 ∈ π‘œ(π‘›βˆ’1/2) of node failures, in fact +we can simply exploit that at most βˆšπ‘› nodes are required on layer 0. We provide a trivial scheme +that is suitable for our specific setting of the base graph 𝐺 being a line (with replicated endpoints). +Algorithm 2 Pulse forwarding algorithm for nodes (𝑖, 0), 𝑖 ∈ {1, . . . , 𝐷}; node (0, 0) is the clock +source. The parameter Ξ› is as described in Algorithm 3. +𝐻 := ∞ +loop +do +if received pulse from (𝑖 βˆ’ 1, 0) then +𝐻 := 𝐻𝑖,0(𝑑) +until 𝐻𝑖,0(𝑑) = 𝐻 + Ξ› βˆ’ 𝑑 +broadcast pulse to (𝑖 + 1, 0) and successors on layer 1. +Lemma 27. For π‘˜ ∈ N, assume that the clock source at node (0, 0) generates its π‘˜-th pulse at time +(π‘˜ βˆ’ 1)Ξ›. 
If all nodes on layer 0 are correct, the scheme given in the above algorithm generates pulses +with local skew L0 ≀ πœ…/2 and π‘‘π‘˜ +𝑖,0 ∈ [(π‘˜ + 𝑖 βˆ’ 1)Ξ› βˆ’ π‘–πœ…/2, (π‘˜ + 𝑖 βˆ’ 1)Ξ›]. Moreover, it stabilizes after +transient faults within time 𝐷Λ. +Proof. Consider first the case that there are no transient faults. We prove the statement by +induction on 𝑖 ∈ N, where the base case is covered by the assumptions on node 0. +For the step from 𝑖 βˆ’ 1 ∈ N to 𝑖, we perform an induction over the pulse number π‘˜ ∈ N>0. +The induction hypothesis is that pulses 1, . . . ,π‘˜ βˆ’ 1 have been generated in accordance with the +claim of the lemma and the first π‘˜ βˆ’ 1 loop iterations at node 𝑖 have been completed by the time +the π‘˜-th pulse message from node 𝑖 βˆ’ 1 arrives. Note that we can use π‘˜ = 0 as base case for this +induction, for which the claim is vacuously true. For the step from π‘˜ βˆ’ 1 ∈ N to π‘˜, denote by +𝑑 β€² +π‘–βˆ’1,π‘˜ ∈ [π‘‘π‘–βˆ’1,π‘˜ + 𝑑 βˆ’ 𝑒,π‘‘π‘–βˆ’1,π‘˜ + 𝑑] the reception time of the pulse message from node (0,𝑖 βˆ’ 1) at +node (0,𝑖). By the bounds on hardware clock rates, Equation (1), and the induction hypothesis of +the induction on 𝑖, node (0,𝑖) generates its π‘˜-th pulse at time +𝑑𝑖,π‘˜ ∈ +οΏ½ +π‘‘π‘–βˆ’1,π‘˜ + 𝑑 βˆ’ 𝑒 + Ξ› βˆ’ 𝑑 +πœ— +,π‘‘π‘–βˆ’1,π‘˜ + Ξ› +οΏ½ +βŠ† +οΏ½ +π‘‘π‘–βˆ’1,π‘˜ + Ξ› βˆ’ πœ… +2,π‘‘π‘–βˆ’1,π‘˜ + Ξ› +οΏ½ +βŠ† +οΏ½ +(π‘˜ + 𝑖 βˆ’ 1)Ξ› βˆ’ π‘–πœ… +2 , (π‘˜ + 𝑖 βˆ’ 1)Ξ› +οΏ½ +, +unless it receives another pulse message from (𝑖 βˆ’ 1, 0) before doing so. This, however, is not the +case, since we assume that message delays and hardware clock rates do not vary over time, entailing +that these reception times lie Ξ› time apart.7 +7Note that a separation of Ξ› βˆ’ 𝑑 time would suffice. The slack of 𝑑 means that small changes in timing between pulses are +unproblematic, which we exploit in Corollary 7. + +Gradient TRIX +41 +It remains to show the claimed bound on stabilization time. To this end, observe that the only +state information that nodes maintain is 𝐻. On reception of a pulse message, this state is overwritten. +This will remove spurious state from the system. +We would like to argue that the above induction can therefore be performed as-is, meaning that +the system has stabilized by the time each node has generated its first pulse. However, there is a +subtlety: it could happen that a spurious message that is still in transit at time 0 overwrites the +state of node (1, 0) after it received the first message from (0, 0). Node (1, 0) then behaves as if the +first message of (0, 0) arrived later, at the exact same time as the spurious message. Because also +such a spurious message is delivered within at most 𝑑 time, we can re-interpret this as a longer +delay of still at most 𝑑 of the first message sent by node (0, 0). Note that this modification reduces +the difference between the reception times of the first and second pulse from node (0, 0) at node +(1, 0) by up to 𝑒, but the separation remains at least Ξ› βˆ’ 𝑒 β‰₯ Ξ› βˆ’ 𝑑, i.e., the second message is not +received before (1, 0) generates its first pulse. We can apply the same scheme to nodes 2, . . . , 𝐷, +resulting in the desired bound on the stabilization time. +β–‘ +Corollary 6. L0 ≀ πœ…/2 with probability 1 βˆ’ π‘œ(1). It is self-stabilizing with stabilization time Λ𝐷. +We remark that for a general base graph 𝐺, ensuring a small local skew is non-trivial. 
However, so long as |V| is small enough that faults on layer 0 occur with probability o(1), one is free to fall back on a non-fault-tolerant GCS algorithm. This achieves L_0 ∈ O(κ log D), which does not increase the asymptotic local skew bound of the pulse forwarding scheme.

B FULL PULSE FORWARDING ALGORITHM

Algorithm 3 Discrete GCS at node (v, ℓ), ℓ > 0. The parameters Λ and κ will be determined later, based on the analysis.
  loop
    H_min, H_own, H_max := ∞
    for {v, w} ∈ E do
      r_w := 0
    do
      if received pulse from (v, ℓ − 1) and H_own = ∞ then
        H_own := H_{v,ℓ}(t)
      if for some {v, w} ∈ E received pulse from (w, ℓ − 1) and r_w = 0 then
        if r_{w′} = 0 for all {v, w′} ∈ E then
          H_min := H_{v,ℓ}(t)
        r_w := 1
        if r_{w′} = 1 for all {v, w′} ∈ E then
          H_max := H_{v,ℓ}(t)
    until H_min < ∞ and H_{v,ℓ}(t) ≥ min{H_max + κ/2 + ϑκ, 2H_own − H_min + 2κ}
    if H_{v,ℓ}(t) = H_max + κ/2 + ϑκ then
      wait until H_{v,ℓ}(t) = H_max + 3κ/2 + Λ − d
    else
      C_{v,ℓ} := min_{s∈ℕ}{max{H_own − H_max + 4sκ, H_own − H_min − 4sκ}} − κ/2
      if C_{v,ℓ} < 0 then
        C_{v,ℓ} := min{H_own − H_min + 3κ/2, 0}
      else if C_{v,ℓ} > ϑκ then
        C_{v,ℓ} := max{H_own − H_max − 3κ/2, ϑκ}
      wait until H_{v,ℓ}(t) = H_own + Λ − d − C_{v,ℓ}
    broadcast pulse

A basic requirement for the algorithm to work correctly is that (v, ℓ) receives the k-th pulses of all correct predecessors within its k-th iteration of the main loop of Algorithm 3.

Lemma 28. For all k ∈ ℕ and (v, ℓ) ∈ V_ℓ, ℓ > 0, node (v, ℓ) receives the k-th pulses of all correct predecessors within its k-th iteration of the main loop of Algorithm 3.

Proof. We show by induction on ℓ ∈ ℕ_{>0} and k ∈ ℕ_{>0} that (v, ℓ) broadcasts its k-th pulse after receiving the k-th pulse from all correct (w, ℓ − 1) satisfying ((w, ℓ − 1), (v, ℓ)) ∈ E, but before receiving the (k + 1)-th pulse from such a node. Moreover, for all k ≥ 2, t^k_{v,ℓ} − t^{k−1}_{v,ℓ} = Λ.
For the induction on ℓ, we use ℓ = 0 as base case, requiring only that nodes generate pulses at frequency 1/Λ. As delays and clock speeds do not change, this holds true. For the step from ℓ − 1 ∈ ℕ to ℓ, we perform the induction on k. Suppose that the claim holds for all k′ < k ∈ ℕ_{>0} and consider the k-th iteration of the outer loop at (v, ℓ).
• The inner loop terminated because H_{v,ℓ}(t) = H_max + κ/2 + ϑκ. Then a message from each node (w, ℓ − 1), {v, w} ∈ E, has been received in the current loop iteration. By the induction hypotheses for layer ℓ − 1 and pulse k − 1, respectively, for correct such nodes this is the k-th pulse message.
We need to show that the k-th message from (v, ℓ − 1) is received in time; the induction hypothesis guarantees that it is not received too early. As the minimum degree of G is 2, at least one node (w, ℓ − 1), {v, w} ∈ E, is correct. If (v, ℓ − 1) is correct, too, it sent its pulse message at the latest at time t_{w,ℓ−1} + L_{ℓ−1}. By the bounds on message delay and clock speed, this message is received at a local time

    H ≤ H_max + ϑ(L_{ℓ−1} + u) ≤ H_max + Λ − d < H_{v,ℓ}(t^k_{v,ℓ}).

• The inner loop terminated because H_{v,ℓ}(t) = 2H_own − H_min + 2κ. As H_min < ∞, also H_own < ∞.
Using that H_min ≤ H_max, we get that

    Δ := min_{s∈ℕ}{max{H_own − H_max + 4sκ, H_own − H_min − 4sκ}} − κ/2
       ≤ max{H_own − H_max, H_own − H_min} − κ/2
       ≤ H_own − H_min − κ/2

and hence C_{v,ℓ} ≤ H_own − H_min + 3κ/2 ≤ 3κ/2. It follows that

    H_{v,ℓ}(t^k_{v,ℓ}) ≥ max{H_min, H_own} + Λ − d − 3κ/2.

We distinguish two subcases.
– (v, ℓ − 1) is correct. Then by the bounds on message delay and clock speed, for each correct (w, ℓ − 1), {v, w} ∈ E, its k-th pulse message is received at a local time

    H ≤ H_own + ϑ(L_{ℓ−1} + u) ≤ H_own + Λ − d − 3κ/2 < H_{v,ℓ}(t^k_{v,ℓ}),

where the last step uses Equation (2).
– (v, ℓ − 1) is faulty, implying that all (w, ℓ − 1), {v, w} ∈ E, are correct. Then by the bounds on message delay and clock speed, for each correct (w, ℓ − 1), {v, w} ∈ E, its k-th pulse message is received at a local time

    H ≤ H_min + Λ − d − 3κ/2 < H_{v,ℓ}(t^k_{v,ℓ}),

where we use that in order to guarantee that Λ − d ≥ ϑ(2L_{ℓ−1} + u) (i.e., Equation (2)), this must also hold in an execution that differs only in that (v, ℓ − 1) is correct; in such an execution, we have that

    max_{{v,w}∈E}{t_{w,ℓ−1}} − min_{{v,w}∈E}{t_{w,ℓ−1}} ≤ max_{{v,w}∈E}{t_{w,ℓ−1}} − t_{v,ℓ−1} + t_{v,ℓ−1} − min_{{v,w}∈E}{t_{w,ℓ−1}} ≤ 2L_{ℓ−1}.

It remains to show that (v, ℓ) generates its pulse before receiving a (k + 1)-th pulse message from a correct predecessor. We distinguish two cases.
• (v, ℓ − 1) is not faulty. Then the earliest local time H at which (v, ℓ) has received a k-th pulse from a correct predecessor is bounded from below by

    H ≥ H_own − ϑ(L_{ℓ−1} + u).

As delays and clock speeds do not change, the earliest message reception time for a (k + 1)-th pulse from a correct predecessor is Λ time later. Hence, it is sufficient to show that H_{v,ℓ}(t^k_{v,ℓ}) ≤ H + Λ. We distinguish four subcases.
– The inner loop terminated because H_{v,ℓ}(t) = H_max + κ/2 + ϑκ and at local time H_min a message from a correct predecessor (w, ℓ − 1), {v, w} ∈ E, was received by (v, ℓ). Thus,

    H_own + ϑ(L_{ℓ−1} + u) + 2κ ≥ 2H_own − H_min + 2κ ≥ H_max + κ/2 + ϑκ

and, by Equation (3),

    H_{v,ℓ}(t^k_{v,ℓ}) = H_max + 3κ/2 + Λ − d
                      ≤ H_own + ϑ(L_{ℓ−1} + u) + 2κ + Λ − d
                      ≤ H_own − ϑ(L_{ℓ−1} + u) + Λ
                      ≤ H + Λ.

– The inner loop terminated because H_{v,ℓ}(t) = H_max + κ/2 + ϑκ and at local time H_max a message from a correct predecessor (w, ℓ − 1), {v, w} ∈ E, was received by (v, ℓ). Therefore,

    H_own + ϑ(L_{ℓ−1} + u) ≥ H_max

and, by Equation (3),

    H_{v,ℓ}(t^k_{v,ℓ}) = H_max + 3κ/2 + Λ − d
                      ≤ H_own + ϑ(L_{ℓ−1} + u) + 3κ/2 + Λ − d
                      ≤ H_own − ϑ(L_{ℓ−1} + u) + Λ
                      ≤ H + Λ.

– The inner loop terminated because H_{v,ℓ}(t) = 2H_own − H_min + 2κ and C_{v,ℓ} ≥ 0. By Equation (3), then

    H_{v,ℓ}(t^k_{v,ℓ}) = H_own + Λ − d − C_{v,ℓ} ≤ H_own + Λ − d ≤ H + Λ.

– The inner loop terminated because H_{v,ℓ}(t) = 2H_own − H_min + 2κ and C_{v,ℓ} < 0. Then

    C_{v,ℓ} = H_own − H_min + 3κ/2

and

    H_{v,ℓ}(t^k_{v,ℓ}) = H_own + Λ − d − C_{v,ℓ} = H_min − 3κ/2 + Λ − d.

Since H_min is bounded from above by the earliest local reception time of a message from a correct node (w, ℓ − 1), {v, w} ∈ E, we have that

    H_min ≤ H_own + ϑ(L_{ℓ−1} + u).
By Equation (3), we conclude that

    H_{v,ℓ}(t^k_{v,ℓ}) ≤ H_own + ϑ(L_{ℓ−1} + u) − 3κ/2 + Λ − d < H + Λ.

• (v, ℓ − 1) is faulty. Then H = H_min. Checking all cases in a similar fashion, we see that

    H_{v,ℓ}(t^k_{v,ℓ}) ≤ H_max + 3κ/2 + Λ − d.

Using that Equation (3) must also apply in an execution where (v, ℓ − 1) is not faulty and hence max_{{v,w}∈E}{t_{w,ℓ−1}} − min_{{v,w}∈E}{t_{w,ℓ−1}} ≤ 2L_{ℓ−1}, it follows that

    H_{v,ℓ}(t^k_{v,ℓ}) ≤ H_max + 3κ/2 + Λ − d
                      ≤ H_min + 2ϑ(L_{ℓ−1} + u) + 3κ/2 + Λ − d
                      ≤ H_min + Λ
                      ≤ H + Λ.    □

We are now ready to show that Algorithm 3 is equivalent to Algorithm 1 in the absence of faults.

Lemma 29. Suppose that (v, ℓ) ∈ V_ℓ, ℓ > 0, and the predecessors of (v, ℓ) are correct. Then running Algorithm 1 instead of Algorithm 3 results in the same pulse times at node (v, ℓ).

Proof. Assume towards a contradiction that the claim is false. Denote by t^k_{v,ℓ} and (t^k_{v,ℓ})′ the pulse times of Algorithm 1 and Algorithm 3 in executions with identical delays, clock speeds, and behavior of faulty nodes. W.l.o.g., let t^k_{v,ℓ} be minimal with the property that t^k_{v,ℓ} ≠ (t^k_{v,ℓ})′.
Consider the k-th loop iteration of Algorithm 3 at node (v, ℓ). We distinguish cases according to why the inner loop terminated.
• The inner loop terminated because H_{v,ℓ}(t) = H_max + κ/2 + ϑκ. Then in Algorithm 1, we have that

    H_own ≥ H_max + κ/2 + ϑκ,

implying that

    Δ := min_{s∈ℕ}{max{H_own − H_max + 4sκ, H_own − H_min − 4sκ}} − κ/2
       ≥ min_{s∈ℕ}{H_own − H_max + 4sκ} − κ/2
       ≥ H_own − H_max − κ/2
       ≥ ϑκ.

Hence, Algorithm 1 computes

    C_{v,ℓ} = H_own − H_max − 3κ/2

and generates its k-th pulse at local time

    H_{v,ℓ}(t^k_{v,ℓ}) = H_own + Λ − d − C_{v,ℓ} = H_max + 3κ/2 + Λ − d = H_{v,ℓ}((t^k_{v,ℓ})′),

a contradiction.
• The inner loop terminated because H_{v,ℓ}(t) = 2H_own − H_min + 2κ. As H_min < ∞, also H_own < ∞ for Algorithm 3. We distinguish two subcases.
– In Algorithm 1, we have

    Δ := min_{s∈ℕ}{max{H_own − H_max + 4sκ, H_own − H_min − 4sκ}} − κ/2 < 0.

Then the same holds in Algorithm 3, as there H_max is either identical to that of Algorithm 1 or equal to ∞. Hence, both algorithms compute C_{v,ℓ} = min{H_own − H_min + 3κ/2, 0} and subsequently H_{v,ℓ}(t^k_{v,ℓ}) = H_own + Λ − d − C_{v,ℓ} = H_{v,ℓ}((t^k_{v,ℓ})′), a contradiction.
– In Algorithm 1, we have

    Δ := min_{s∈ℕ}{max{H_own − H_max + 4sκ, H_own − H_min − 4sκ}} − κ/2 ≥ 0.

Let s_min ∈ ℕ be such that

    Δ = max{H_own − H_max + 4s_min κ, H_own − H_min − 4s_min κ} − κ/2.

If Δ = H_own − H_min − 4s_min κ − κ/2, the fact that H_own and H_min are identical in both algorithms, while H_max is either also identical or −∞ in Algorithm 3, again leads to the contradiction H_{v,ℓ}(t^k_{v,ℓ}) = H_{v,ℓ}((t^k_{v,ℓ})′). Hence, suppose that Δ = H_own − H_max + 4s_min κ − κ/2 in Algorithm 1. Therefore,

    0 ≤ Δ = H_own − H_max + 4s_min κ − κ/2
          ≤ max{H_own − H_max + 4(s_min − 1)κ, H_own − H_min − 4(s_min − 1)κ} − κ/2
          = H_own − H_min − 4(s_min − 1)κ − κ/2.

Thus,

    2H_own − H_min + 2κ ≥ H_own + 4s_min κ − 3κ/2 ≥ H_max − κ ≥ H_max − κ/2 − ϑκ.

This is a contradiction, as then the inner loop in Algorithm 3 would have terminated at an earlier time. □
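Since both Lemma 28 and Lemma 29 reason extensively about the correction value C_{v,ℓ}, the following Python sketch spells out how it can be evaluated once H_own, H_min, and H_max are all finite, i.e., once all predecessors have been heard. The clamping thresholds are taken from the pseudocode of Algorithm 3; the closed-form search over s ∈ ℕ, the function and parameter names, and the example values are our own illustrative choices.

import math

def correction(h_own, h_min, h_max, kappa, theta):
    """Clock correction C_{v,l} as in Algorithm 3, assuming all three reception
    times are finite (a sketch; the variable names are ours)."""
    def val(s):
        return max(h_own - h_max + 4 * s * kappa, h_own - h_min - 4 * s * kappa)

    # The first argument of the max increases with s, the second decreases, so the
    # minimum over integers s >= 0 is attained next to their crossing point.
    s_star = (h_max - h_min) / (8 * kappa)
    base = max(0, math.floor(s_star))
    delta = min(val(s) for s in {0, base, base + 1}) - kappa / 2

    # Clamping as in the last lines of Algorithm 3.
    if delta < 0:
        return min(h_own - h_min + 3 * kappa / 2, 0)
    if delta > theta * kappa:
        return max(h_own - h_max - 3 * kappa / 2, theta * kappa)
    return delta

# The node then waits until its hardware clock reads H_own + Λ - d - C and broadcasts.
print(correction(h_own=10.0, h_min=9.5, h_max=10.5, kappa=0.2, theta=1.01))

Because the pointwise maximum inside the minimization is a convex piecewise-linear function of s, inspecting only the integers adjacent to the crossing point (and s = 0) suffices, so no unbounded search over ℕ is needed.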
B.1 Self-Stabilization
Making Algorithm 3 self-stabilizing follows standard techniques. Accordingly, we confine ourselves to a brief high-level discussion of how this is achieved.

Theorem 6. The pulse propagation algorithm can be implemented in a self-stabilizing way. It stabilizes within O(√n) pulses.

Proof sketch. The key observation is that self-stabilization can proceed layer by layer, where Corollary 6 shows that layer 0 stabilizes fast enough. Thus, we can assume that the correct nodes of the previous layer generate pulses at a stable frequency of 1/Λ satisfying the skew bounds obtained in the analysis.
This allows each node to make sure that the timing of its listening loop aligns with the pulse signals from the previous layer: from all but one predecessor, the pulse signals must be received while the inner loop is running. Moreover, the inner loop will terminate within Λ time. Instead of restarting the inner loop based on the node's own generated pulse, we can start a loop iteration upon receiving the first pulse after a quiet period of, say, Λ/10 (where overly frequent pulses from a faulty predecessor are filtered out). As such a quiet period must occur by Equation (2), this will align the loop correctly with the k-th pulses of correct predecessors for some k ∈ ℕ.
Once the inner loop terminates, we look for the next quiet period, start a new instance of the inner loop on reception of the next pulse from a predecessor, and so on. Whenever the inner loop terminates correctly, i.e., not due to a timeout, we also compute the time to generate the next pulse as in Algorithm 3. However, we do not wait until the pulse is generated before being willing to start a new instance of the inner loop. This way, we ensure that we do not miss the first pulse message of a correct predecessor for pulse k + 1 in case the inner loop for pulse k was misaligned. □
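To illustrate the realignment rule from the proof sketch, the following Python sketch processes a timestamped stream of received pulses and reports when a stabilizing node would open a new instance of the inner loop. The per-sender filter threshold of Λ/2, the tuple format, and all names are our own assumptions; the proof sketch only prescribes filtering overly frequent pulses and waiting for a quiet period of Λ/10.

def iteration_starts(arrivals, lam):
    """Return the times at which a node starts a new inner-loop instance:
    pulses arriving within lam/2 of the same sender's previous accepted pulse
    are ignored, and an iteration opens with the first remaining pulse after a
    quiet period of lam/10."""
    last_from = {}   # last accepted reception time per sender
    accepted = []
    for time, sender in sorted(arrivals):
        if sender not in last_from or time - last_from[sender] >= lam / 2:
            last_from[sender] = time
            accepted.append(time)
    starts, last_time = [], None
    for time in accepted:
        if last_time is None or time - last_time >= lam / 10:
            starts.append(time)
        last_time = time
    return starts

# Example: genuine bursts from predecessors a, b, c around t = 0 and t = 10,
# plus a faulty predecessor x firing rapidly; only the two bursts open iterations.
arrivals = [(0.0, "a"), (0.1, "b"), (0.2, "c"), (0.25, "x"), (0.3, "x"), (0.35, "x"),
            (10.0, "a"), (10.1, "b"), (10.2, "c"), (10.3, "x")]
print(iteration_starts(arrivals, lam=10.0))  # -> [0.0, 10.0]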
C OBTAINING THE FINAL SKEW BOUNDS
Recall that our model assumes that message delays and clock speeds do not vary. If the behavior of faulty nodes is static, i.e., the timing of their output pulse messages is identical in each pulse as well, a stable input frequency of 1/Λ results in repeating the exact same message pattern with the same timing every Λ time. We can exploit this to bound L_{ℓ,ℓ+1} in terms of L_ℓ.

Theorem 7. If faulty nodes do not change the timing of their output pulses, then L ∈ O(κ log D) with probability 1 − o(1).

Proof. By Corollary 5, for correct (v, ℓ + 1) ∈ V_{ℓ+1}, ℓ ∈ ℕ,

    min_{((w,ℓ),(v,ℓ+1))∈E_ℓ, (w,ℓ)∉F} {t^k_{w,ℓ}} + Λ − 2κ ≤ t^k_{v,ℓ+1} ≤ max_{((w,ℓ),(v,ℓ+1))∈E_ℓ, (w,ℓ)∉F} {t^k_{w,ℓ}} + Λ + 2κ.

Because the behavior of faulty nodes does not change between pulses, a simple induction shows that t^{k+1}_{x,ℓ′} = t^k_{x,ℓ′} + Λ for all correct nodes (x, ℓ′) ∈ V_{ℓ′}, ℓ′ ∈ ℕ. In particular,

    min_{((w,ℓ),(v,ℓ+1))∈E_ℓ, (w,ℓ)∉F} {t^{k+1}_{w,ℓ}} − 2κ ≤ t^k_{v,ℓ+1} ≤ max_{((w,ℓ),(v,ℓ+1))∈E_ℓ, (w,ℓ)∉F} {t^{k+1}_{w,ℓ}} + 2κ.

By Theorem 5, L_ℓ ∈ O(κ log D). Note that this bound applies uniformly over all executions. Thus, even if (v, ℓ) is faulty, using that its neighbors are within distance 2 of each other, it holds that

    max_{((w,ℓ),(v,ℓ+1))∈E_ℓ, (w,ℓ)∉F} {t^{k+1}_{w,ℓ}} − min_{((w,ℓ),(v,ℓ+1))∈E_ℓ, (w,ℓ)∉F} {t^{k+1}_{w,ℓ}} ∈ O(κ log D),

by virtue of comparing to an execution in which (v, ℓ) is correct. As (v, ℓ + 1) was an arbitrary correct node, the claim of the theorem follows. □

It remains to argue that some variation can be sustained.

Corollary 7. With probability 1 − o(1), L ∈ O(κ log D) even when in each pulse (i) a constant number of faulty nodes change their output behavior and timing, (ii) link delays vary by up to n^{−1/2} u log D, and (iii) hardware clock speeds vary by up to n^{−1/2}(ϑ − 1) log D.

Proof. The maximum length of a directed path in H is bounded by 2√n: at most D ≤ √n hops in layer 0, followed by at most √n links from layer to layer. Thus, accumulating all changes in timing due to link delay and clock speed variation along a path results in a deviation of O((u + (ϑ − 1)(Λ − d)) log D) = O(κ log D). This is trivial for layer 0 and applies to pulse propagation through the layers as well, because our respective analysis relies on Corollary 5 and Lemma 22. In order to take into account a constant number of faulty nodes with arbitrary behavior, we reason analogously to the proof of Theorem 4, i.e., we rely on Corollary 5 as well. □
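For readers who want to see the arithmetic behind the accumulation bound in the proof of Corollary 7, the following short calculation makes it explicit. It reads the per-pulse clock-speed variation as acting over the roughly Λ − d of local time that a node waits per hop, and it assumes, consistent with the analysis, that κ = Ω(u + (ϑ − 1)(Λ − d)); both readings are our interpretation rather than statements from the proof.

\begin{align*}
\text{hops on a directed path in } H &\le 2\sqrt{n},\\
\text{per-hop timing change} &\le n^{-1/2}\,u\log D + n^{-1/2}(\vartheta-1)(\Lambda-d)\log D,\\
\text{accumulated deviation} &\le 2\sqrt{n}\cdot n^{-1/2}\bigl(u+(\vartheta-1)(\Lambda-d)\bigr)\log D
 = 2\bigl(u+(\vartheta-1)(\Lambda-d)\bigr)\log D \in O(\kappa\log D).
\end{align*}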