Dataset columns: id (string, 6–26 chars), chapter (string, 36 classes), section (string, 3–5 chars), title (string, 3–27 chars), source_file (string, 13–29 chars), question_markdown (string, 17–6.29k chars), answer_markdown (string, 3–6.76k chars), code_blocks (list, 0–9 items), has_images (bool, 2 classes), image_refs (list, 0–7 items).
16-16.2-2
16
16.2
16.2-2
docs/Chap16/16.2.md
Give a dynamic-programming solution to the $0$-$1$ knapsack problem that runs in $O(nW)$ time, where $n$ is the number of items and $W$ is the maximum weight of items that the thief can put in his knapsack.
Suppose we know that a particular item of weight $w$ is in the solution. Then we must solve the subproblem on $n - 1$ items with maximum weight $W - w$. Thus, to take a bottom-up approach we must solve the $0$-$1$ knapsack problem for all items and all maximum weights no larger than $W$. We'll build an $(n + 1)$-by-$(W + 1)$ table of values where the rows are indexed by item and the columns are indexed by total weight. (The first row and the first column of the table are dummies.) For row $i$, column $j$, we decide whether or not it is advantageous to include item $i$ in the knapsack by comparing the total value of a knapsack containing items $1$ through $i - 1$ with maximum weight $j$, and the total value of a knapsack containing items $1$ through $i - 1$ with maximum weight $j - i.weight$ together with item $i$. To solve the problem, we simply examine the $(n, W)$ entry of the table to determine the maximum value we can achieve. To read off the items we include, start with entry $(n, W)$ and proceed as follows: if entry $(i, j)$ equals entry $(i - 1, j)$, don't include item $i$, and examine entry $(i - 1, j)$ next; if entry $(i, j)$ doesn't equal entry $(i - 1, j)$, include item $i$ and examine entry $(i - 1, j - i.weight)$ next. See the algorithm below for the construction of the table:

```cpp
0-1-KNAPSACK(n, W)
    Initialize an (n + 1) by (W + 1) table K
    for i = 1 to n
        K[i, 0] = 0
    for j = 1 to W
        K[0, j] = 0
    for i = 1 to n
        for j = 1 to W
            if j < i.weight
                K[i, j] = K[i - 1, j]
            else
                K[i, j] = max(K[i - 1, j], K[i - 1, j - i.weight] + i.value)
```
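As a concrete illustration of the table construction and the read-off step described above, here is a minimal, self-contained C++ sketch; the item weights, values, and capacity in `main` are made-up example data, not taken from the text.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    // Hypothetical example data: weights, values, and capacity W.
    vector<int> weight = {2, 3, 4, 5};
    vector<int> value  = {3, 4, 5, 6};
    int n = (int)weight.size(), W = 8;

    // K[i][j] = best value achievable using items 1..i with weight limit j.
    vector<vector<int>> K(n + 1, vector<int>(W + 1, 0));
    for (int i = 1; i <= n; ++i)
        for (int j = 0; j <= W; ++j) {
            K[i][j] = K[i - 1][j];                      // skip item i
            if (j >= weight[i - 1])                     // or take item i if it fits
                K[i][j] = max(K[i][j], K[i - 1][j - weight[i - 1]] + value[i - 1]);
        }

    // Read off the chosen items by walking back through the table.
    vector<int> chosen;
    for (int i = n, j = W; i >= 1; --i)
        if (K[i][j] != K[i - 1][j]) {                   // item i was included
            chosen.push_back(i);
            j -= weight[i - 1];
        }

    cout << "max value = " << K[n][W] << ", items:";
    for (int i : chosen) cout << ' ' << i;
    cout << '\n';                                       // max value = 10, items: 4 2
    return 0;
}
```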
[ { "lang": "cpp", "code": "0-1-KNAPSACK(n, W)\n Initialize an (n + 1) by (W + 1) table K\n for i = 1 to n\n K[i, 0] = 0\n for j = 1 to W\n K[0, j] = 0\n for i = 1 to n\n for j = 1 to W\n if j < i.weight\n K[i, j] = K[i - 1, j]\n else\n K[i, j] = max(K[i - 1, j], K[i - 1, j - i.weight] + i.value)" } ]
false
[]
16-16.2-3
16
16.2
16.2-3
docs/Chap16/16.2.md
Suppose that in a $0$-$1$ knapsack problem, the order of the items when sorted by increasing weight is the same as their order when sorted by decreasing value. Give an efficient algorithm to find an optimal solution to this variant of the knapsack problem, and argue that your algorithm is correct.
Suppose that an optimal solution includes an item with value $v_1$ and weight $w_1$ but leaves out an item with value $v_2$ and weight $w_2$, where $w_1 > w_2$ and $v_1 < v_2$. Then we could substitute the second item for the first and obtain a strictly better solution, a contradiction. Therefore an optimal solution always prefers the items of greatest value, which are also the items of least weight: sort the items by decreasing value (equivalently, increasing weight) and take them in that order for as long as they fit, which takes $O(n \lg n)$ time.
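A minimal sketch of this greedy rule, assuming the items are given as (weight, value) pairs; the data in `main` is hypothetical.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    // Hypothetical items (weight, value): lighter items are also more valuable.
    vector<pair<int, int>> items = {{1, 50}, {2, 40}, {4, 30}, {7, 20}};
    int W = 7;

    // Sort by decreasing value (equivalently, increasing weight).
    sort(items.begin(), items.end(),
         [](auto& a, auto& b) { return a.second > b.second; });

    int total = 0;
    for (auto& [w, v] : items) {
        if (w > W) break;        // heavier items cannot fit either
        W -= w;                  // take the most valuable remaining item that fits
        total += v;
    }
    cout << "total value = " << total << '\n';   // total value = 120
    return 0;
}
```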
[]
false
[]
16-16.2-4
16
16.2
16.2-4
docs/Chap16/16.2.md
Professor Gekko has always dreamed of inline skating across North Dakota. He plans to cross the state on highway U.S. 2, which runs from Grand Forks, on the eastern border with Minnesota, to Williston, near the western border with Montana. The professor can carry two liters of water, and he can skate $m$ miles before running out of water. (Because North Dakota is relatively flat, the professor does not have to worry about drinking water at a greater rate on uphill sections than on flat or downhill sections.) The professor will start in Grand Forks with two full liters of water. His official North Dakota state map shows all the places along U.S. 2 at which he can refill his water and the distances between these locations. The professor's goal is to minimize the number of water stops along his route across the state. Give an efficient method by which he can determine which water stops he should make. Prove that your strategy yields an optimal solution, and give its running time.
The greedy strategy, in which the professor always skates as far as he can before stopping for water, solves this problem optimally: the first stop is at the furthest water stop from the starting position that is less than or equal to $m$ miles away. The problem exhibits optimal substructure, since once we have chosen a first stopping point $p$, we solve the subproblem assuming we are starting at $p$; combining these two plans yields an optimal solution for the usual cut-and-paste reasons. Now we must show that this greedy approach in fact yields a first stopping point which is contained in some optimal solution. Let $O$ be any optimal solution which has the professor stop at positions $o_1, o_2, \dots, o_k$. Let $g_1$ denote the furthest stopping point we can reach from the starting point. Then we may replace $o_1$ by $g_1$ to create a modified solution $G$, since $g_1 \ge o_1$ implies $o_2 - g_1 \le o_2 - o_1 \le m$. In other words, we can actually make it to the positions in $G$ without running out of water. Since $G$ has the same number of stops, we conclude that $g_1$ is contained in some optimal solution. Therefore the greedy strategy works. Assuming the stops are listed in order along the highway (as on the map), a single left-to-right pass suffices, so the running time is $O(n)$, where $n$ is the number of water stops.
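A sketch of this greedy strategy, assuming the stop positions are given in increasing order of distance from the start; the mile markers, range $m$, and trip length in `main` are hypothetical.

```cpp
#include <iostream>
#include <vector>
using namespace std;

int main() {
    // Hypothetical mile markers of water stops (sorted), range m, and trip length.
    vector<double> stop = {3, 7, 12, 14, 20, 25};
    double m = 10, goal = 28, pos = 0;

    vector<double> chosen;
    size_t i = 0;
    while (pos + m < goal) {                    // can't reach the end yet
        double best = -1;
        while (i < stop.size() && stop[i] <= pos + m)
            best = stop[i++];                   // furthest stop still within range
        if (best < 0) { cout << "stuck\n"; return 0; }
        chosen.push_back(best);
        pos = best;
    }
    cout << "stops:";
    for (double s : chosen) cout << ' ' << s;   // stops: 7 14 20
    cout << '\n';                               // single left-to-right pass: O(n)
    return 0;
}
```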
[]
false
[]
16-16.2-5
16
16.2
16.2-5
docs/Chap16/16.2.md
Describe an efficient algorithm that, given a set $\\{x_1, x_2, \ldots, x_n\\}$ of points on the real line, determines the smallest set of unit-length closed intervals that contains all of the given points. Argue that your algorithm is correct.
Consider the leftmost interval. It does no good for it to extend any further left than the leftmost point, but we know that it must contain the leftmost point. So, we may take its left endpoint to be exactly the leftmost point. Then, we just remove every point that is within a unit distance of the leftmost point, since all such points are covered by this single interval, and repeat until all points are covered. Since at each step there is a clearly optimal choice for where to put the leftmost interval, the final solution is optimal. Sorting the points first, the whole procedure takes $O(n \lg n)$ time.
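A short sketch of this greedy placement, assuming the points are given unsorted, so we sort first and then make each interval's left endpoint the leftmost uncovered point; the points in `main` are hypothetical.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<double> x = {0.5, 1.2, 1.4, 3.0, 3.9, 5.1};   // hypothetical points
    sort(x.begin(), x.end());

    vector<double> left;                 // left endpoints of the chosen unit intervals
    size_t i = 0;
    while (i < x.size()) {
        double start = x[i];             // leftmost uncovered point starts an interval
        left.push_back(start);
        while (i < x.size() && x[i] <= start + 1.0)
            ++i;                         // skip all points covered by [start, start + 1]
    }
    for (double l : left) cout << '[' << l << ", " << l + 1 << "] ";
    cout << '\n';                        // [0.5, 1.5] [3, 4] [5.1, 6.1]
    return 0;
}
```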
[]
false
[]
16-16.2-6
16
16.2
16.2-6 $\star$
docs/Chap16/16.2.md
Show how to solve the fractional knapsack problem in $O(n)$ time.
First compute the value density of each item, defined to be its worth divided by its weight. We use a recursive approach as follows: find the item of median value density, which can be done in linear time as shown in Chapter 9. Then sum the weights of all items whose value density exceeds the median and call this sum $M$. If $M$ exceeds $W$, then we know that the solution to the fractional knapsack problem lies in taking items from among this collection; in other words, we're now solving the fractional knapsack problem on an input of size $n / 2$. On the other hand, if $M$ doesn't exceed $W$, then we take all of the high-density items and solve the fractional knapsack problem on the input of $n / 2$ low-density items, with maximum weight $W - M$. Let $T(n)$ denote the runtime of the algorithm. Since we can solve the problem in constant time when there is only one item, the recurrence for the runtime is $T(n) = T(n / 2) + cn$ with $T(1) = d$, which gives a runtime of $O(n)$.
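The following C++ sketch mirrors this recursion. It uses `std::nth_element` (expected linear time) as a stand-in for the worst-case linear-time SELECT of Chapter 9, splits the items around the median density, and handles items of exactly median density separately; the item data in `main` is hypothetical.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

struct Item { double value, weight; };

// Maximum value obtainable from items[lo..hi) with capacity W, taking items
// fractionally.  nth_element replaces the worst-case linear SELECT, so this
// sketch runs in expected O(n) time.
double fractionalKnapsack(vector<Item>& items, int lo, int hi, double W) {
    if (W <= 0 || lo >= hi) return 0;
    if (hi - lo == 1) {                          // single item: take what fits
        double frac = min(1.0, W / items[lo].weight);
        return frac * items[lo].value;
    }
    // Median item by value density v/w.
    int mid = lo + (hi - lo) / 2;
    auto denser = [](const Item& a, const Item& b) {
        return a.value / a.weight > b.value / b.weight;
    };
    nth_element(items.begin() + lo, items.begin() + mid, items.begin() + hi, denser);
    double d = items[mid].value / items[mid].weight;

    // [lo, g): strictly denser than d, [g, e): density equal to d, [e, hi): less dense.
    int g = (int)(partition(items.begin() + lo, items.begin() + hi,
                  [&](const Item& a) { return a.value / a.weight > d; }) - items.begin());
    int e = (int)(partition(items.begin() + g, items.begin() + hi,
                  [&](const Item& a) { return a.value / a.weight == d; }) - items.begin());

    double wG = 0, vG = 0, wE = 0, vE = 0;
    for (int i = lo; i < g; ++i) { wG += items[i].weight; vG += items[i].value; }
    for (int i = g; i < e; ++i)  { wE += items[i].weight; vE += items[i].value; }

    if (wG >= W)                       // answer lies entirely among the denser items
        return fractionalKnapsack(items, lo, g, W);
    if (wG + wE >= W)                  // fill the remaining capacity at density d
        return vG + (W - wG) * d;
    // take everything of density >= d, recurse on the low-density items
    return vG + vE + fractionalKnapsack(items, e, hi, W - wG - wE);
}

int main() {
    // Hypothetical data, not from the text.
    vector<Item> items = {{60, 10}, {100, 20}, {120, 30}};
    cout << fractionalKnapsack(items, 0, (int)items.size(), 50) << '\n';  // prints 240
    return 0;
}
```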
[]
false
[]
16-16.2-7
16
16.2
16.2-7
docs/Chap16/16.2.md
Suppose you are given two sets $A$ and $B$, each containing $n$ positive integers. You can choose to reorder each set however you like. After reordering, let $a_i$ be the $i$th element of set $A$, and let $b_i$ be the $i$th element of set $B$. You then receive a payoff of $\prod_{i = 1}^n a_i^{b_i}$. Give an algorithm that will maximize your payoff. Prove that your algorithm maximizes the payoff, and state its running time.
Since an identical permutation of both sets doesn't affect the product, suppose that $A$ is sorted in ascending order. Then we will prove that the product is maximized when $B$ is also sorted in ascending order. To see this, suppose not; that is, there is some $i < j$ so that $a_i < a_j$ and $b_i > b_j$. Consider only the contribution to the product from the indices $i$ and $j$, which is $a_i^{b_i}a_j^{b_j}$. If we were to swap $b_i$ and $b_j$, that contribution would become $a_i^{b_j}a_j^{b_i}$, which is larger than the previous expression because it differs from it by a factor of $\left(\frac{a_j}{a_i}\right)^{b_i - b_j}$, which is bigger than one. So, we couldn't have maximized the product with this ordering on $B$. The algorithm is therefore to sort both $A$ and $B$ into ascending order and pair them index by index, which takes $O(n \lg n)$ time.
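A short sketch of this pairing: sort both sets ascending and evaluate the payoff through its logarithm to keep the numbers manageable. The values in `main` are hypothetical.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    // Hypothetical data, not from the text.
    vector<double> A = {2, 5, 3}, B = {1, 4, 2};

    // Pair the largest a with the largest b, etc.: sort both ascending.
    sort(A.begin(), A.end());
    sort(B.begin(), B.end());

    // log(prod a_i^{b_i}) = sum b_i * log(a_i), which avoids overflow.
    double logPayoff = 0;
    for (size_t i = 0; i < A.size(); ++i)
        logPayoff += B[i] * log(A[i]);
    cout << "payoff = " << exp(logPayoff) << '\n';   // 2^1 * 3^2 * 5^4 = 11250
    return 0;
}
```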
[]
false
[]
16-16.3-1
16
16.3
16.3-1
docs/Chap16/16.3.md
Explain why, in the proof of Lemma 16.2, if $x.freq = b.freq$, then we must have $a.freq = b.freq = x.freq = y.freq$.
If we have that $x.freq = b.freq$, then we know that $b$ is tied for lowest frequency. In particular, it means that there are at least two things with lowest frequency, so $y.freq = x.freq$. Also, since $x.freq \le a.freq \le b.freq = x.freq$, we must have $a.freq = x.freq$.
[]
false
[]
16-16.3-2
16
16.3
16.3-2
docs/Chap16/16.3.md
Prove that a binary tree that is not full cannot correspond to an optimal prefix code.
Let $T$ be a binary tree that is not full. $T$ represents a binary prefix code for a file composed of characters from alphabet $C$, where, for each $c \in C$, $f(c)$ is the number of occurrences of $c$ in the file. The cost of tree $T$, or the number of bits in the encoding, is $\sum_{c \in C} d_T(c) \cdot f(c)$, where $d_T(c)$ is the depth of character $c$ in tree $T$. Let $N$ be a node of greatest depth that has exactly one child. If $N$ is the root of $T$, $N$ can be removed and the depth of each node reduced by one, yielding a tree representing the same alphabet with a lower cost. This means the original code was not optimal. Otherwise, let $M$ be the parent of $N$, let $T_1$ be the (possibly non-existent) sibling of $N$, and let $T_2$ be the subtree rooted at the child of $N$. Replace $M$ by $N$, making $T_1$ and $T_2$ the children of $N$. If $T_1$ is empty, repeat the process. We have a new prefix code of lower cost, so the original was not optimal.
[]
false
[]
16-16.3-3
16
16.3
16.3-3
docs/Chap16/16.3.md
What is an optimal Huffman code for the following set of frequencies, based on the first $8$ Fibonacci numbers? $$a:1 \quad b:1 \quad c:2 \quad d:3 \quad e:5 \quad f:8 \quad g:13 \quad h:21$$ Can you generalize your answer to find the optimal code when the frequencies are the first $n$ Fibonacci numbers?
$$ \begin{array}{c|l} a & 1111111 \\\\ b & 1111110 \\\\ c & 111110 \\\\ d & 11110 \\\\ e & 1110 \\\\ f & 110 \\\\ g & 10 \\\\ h & 0 \end{array} $$ **GENERALIZATION** In what follows we use $a_i$ to denote the $i$-th Fibonacci number. To avoid any confusion we stress that we consider Fibonacci's sequence beginning $1$, $1$, i.e. $a_1 = a_2 = 1$. Let us consider a set of $n$ symbols $\Sigma = \\{c_i ~|~ 1 \le i \le n \\}$ such that for each $i$ we have $c_i.freq = a_i$. We shall prove that the Huffman code for this set of symbols given by the run of algorithm HUFFMAN from CLRS is the following code: - $code(c_n) = 0$ - $code(c_{i - 1}) = 1code(c_i)$ for $3 \le i \le n$ (i.e. we take the code for symbol $c_i$ and add $1$ to the beginning) - $code(c_1) = 1^{n - 1}$ By $code(c)$ we mean the codeword assigned to the symbol $c$ by the run of HUFFMAN($\Sigma$) for any $c \in \Sigma$. First we state two technical claims which can be easily proven using the proper induction. Following good manners of our field we leave the proofs to the reader :-) - (CLAIM 1) $ (\forall k \in \mathbb{N}) ~ \sum\limits_{i = 1}^{k} a_i = a_{k + 2} - 1$ - (CLAIM 2) Let $z$ be an inner node of tree $T$ constructed by the algorithm HUFFMAN. Then $z.freq$ is the sum of the frequencies of all leaves of the subtree of $T$ rooted in $z$. Consider tree $T_n$ inductively defined by - $T_2.left = c_2$, $T_2.right = c_1$ and $T_2.freq = c_1.freq + c_2.freq = 2$ - $(\forall i; 3 \le i \le n) ~ T_i.left = c_i$, $T_i.right = T_{i - 1}$ and $T_i.freq = c_i.freq + T_{i - 1}.freq$ We shall prove that $T_n$ is the tree produced by the run of HUFFMAN($\Sigma$). **KEY CLAIM:** $T_{i + 1}$ is exactly the node $z$ constructed in the $i$-th run of the for-cycle of HUFFMAN($\Sigma$), and the content of the priority queue $Q$ just after the $i$-th run of the for-cycle is exactly $Q = (c_{i + 2}, T_{i + 1}, c_{i + 3}, \dots, c_n)$ with $c_{i + 2}$ being the minimal element, for each $1 \le i < n$. (Since we prefer not to overload our formal notation we just note that for $i = n - 1$ we claim that $Q = (T_n)$, and our notation grasps this fact in a sense.) **PROOF OF KEY CLAIM** by induction on $i$. - For $i = 1$ we see that the characters with lowest frequencies are exactly $c_1$ and $c_2$, thus obviously the algorithm HUFFMAN($\Sigma$) constructs $T_2$ in the first run of its for-cycle. Also it is obvious that just after this run of the for-cycle we have $Q = (c_3, T_{2}, c_4, \dots, c_n)$. - For $2 \le i < n$ we suppose that our claim is true for all $j < i$ and prove the claim for $i$. Since the claim is true for $i - 1$, we know that just before the $i$-th execution of the for-cycle we have the following content of the priority queue: $Q=(c_{i + 1}, T_i, c_{i + 2}, \dots, c_n)$. Thus line 5 of HUFFMAN extracts $c_{i + 1}$ and sets $z.left = c_{i + 1}$, and line 6 of HUFFMAN extracts $T_i$ and sets $z.right = T_i$. Now we can see that indeed $z$ is exactly $T_{i + 1}$. Using (CLAIM 2) and observing the way $T_{i + 1}$ is defined we get that $z.freq = T_{i + 1}.freq = \sum\limits_{j=1}^{i + 1} a_j$. Thus using (CLAIM 1) one can see that $a_{i + 2} < T_{i + 1}.freq < a_{i + 3}$. Therefore for the content of the priority queue $Q$ just after the $i$-th execution of the for-cycle we have $Q=(c_{i + 2}, T_{i + 1}, c_{i + 3}, \dots, c_n)$. **KEY CLAIM** tells us that just after the last execution of the for-cycle we have $Q = (T_n)$, and therefore line 9 of HUFFMAN returns $T_n$ as the result.
One can easily see that the code given in the beginning is exactly the code which corresponds to the code-tree $T_n$.
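As a sanity check on the claimed codeword lengths ($n - 1$, $n - 1$, $n - 2$, $\ldots$, $2$, $1$), here is a small C++ sketch that runs the standard Huffman construction on the first $8$ Fibonacci frequencies and prints each leaf's depth; tie-breaking may differ from the hand analysis, but the lengths come out as above.

```cpp
#include <iostream>
#include <queue>
#include <vector>
using namespace std;

int main() {
    const int n = 8;
    vector<long long> f(n);                       // first n Fibonacci numbers
    f[0] = f[1] = 1;
    for (int i = 2; i < n; ++i) f[i] = f[i - 1] + f[i - 2];

    // Nodes 0..n-1 are the leaves; internal nodes are appended as they are made.
    vector<long long> freq(f);
    vector<int> lchild(n, -1), rchild(n, -1);
    priority_queue<pair<long long, int>,
                   vector<pair<long long, int>>,
                   greater<pair<long long, int>>> q;
    for (int i = 0; i < n; ++i) q.push({f[i], i});

    while (q.size() > 1) {                        // the HUFFMAN loop
        auto [fx, x] = q.top(); q.pop();
        auto [fy, y] = q.top(); q.pop();
        int z = (int)freq.size();
        freq.push_back(fx + fy);
        lchild.push_back(x);
        rchild.push_back(y);
        q.push({fx + fy, z});
    }

    // Depth of each leaf equals the length of its codeword.
    vector<int> depth(freq.size(), 0);
    vector<int> stk = {q.top().second};           // start from the root
    while (!stk.empty()) {
        int u = stk.back(); stk.pop_back();
        if (lchild[u] != -1) {
            depth[lchild[u]] = depth[rchild[u]] = depth[u] + 1;
            stk.push_back(lchild[u]);
            stk.push_back(rchild[u]);
        }
    }
    for (int i = 0; i < n; ++i)                   // prints lengths 7 7 6 5 4 3 2 1
        cout << "freq " << f[i] << ": codeword length " << depth[i] << '\n';
    return 0;
}
```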
[]
false
[]
16-16.3-4
16
16.3
16.3-4
docs/Chap16/16.3.md
Prove that we can also express the total cost of a tree for a code as the sum, over all internal nodes, of the combined frequencies of the two children of the node.
Let $T$ be a full binary tree with $n$ leaves. We proceed by induction on the number of leaves in $T$. When $n = 2$ (the case $n = 1$ is trivially true), there are two leaves $x$ and $y$ with the same parent $z$, and the cost of $T$ is $$ \begin{aligned} B(T) & = f(x) d_T(x) + f(y) d_T(y) \\\\ & = f(x) + f(y) & \text{since $d_T(x) = d_T(y) = 1$} \\\\ & = f(\text{child}_1\text{ of }z) + f(\text{child}_2\text{ of }z). \end{aligned} $$ Thus, the statement of the theorem is true. Now suppose $n > 2$ and also suppose that the theorem is true for trees on $n - 1$ leaves. Let $c_1$ and $c_2$ be two sibling leaves in $T$ with the same parent $p$. Letting $T'$ be the tree obtained by deleting $c_1$ and $c_2$, so that $p$ becomes a leaf of $T'$ with frequency $f(p) = f(c_1) + f(c_2)$, by induction we know that $$ \begin{aligned} B(T') & = \sum_{\text{leaves } l'\in T'} f(l')d_{T'}(l') \\\\ & = \sum_{\text{internal nodes } i'\in T'} f(\text{child}_1\text{ of }i') + f(\text{child}_2\text{ of }i'). \end{aligned} $$ Using this information, we calculate the cost of $T$: $$ \begin{aligned} B(T) & = \sum_{\text{leaves }l \in T} f(l)d_T(l) \\\\ & = \sum_{l \ne c_1, c_2} f(l)d_T(l) + f(c_1)(d_T(c_1) - 1) + f(c_2)(d_T(c_2) - 1) + f(c_1) + f(c_2) \\\\ & = B(T') + f(c_1) + f(c_2) \\\\ & = \sum_{\text{internal nodes }i'\in T'} f(\text{child}_1\text{ of }i') + f(\text{child}_2\text{ of }i') + f(c_1) + f(c_2) \\\\ & = \sum_{\text{internal nodes }i\in T} f(\text{child}_1\text{ of }i) + f(\text{child}_2\text{ of }i), \end{aligned} $$ where the last equality holds because the internal nodes of $T$ are the internal nodes of $T'$ together with $p$, whose children are $c_1$ and $c_2$. Thus the statement is true.
[]
false
[]
16-16.3-5
16
16.3
16.3-5
docs/Chap16/16.3.md
Prove that if we order the characters in an alphabet so that their frequencies are monotonically decreasing, then there exists an optimal code whose codeword lengths are monotonically increasing.
**Little formal-mathematical note here:** We are required to prove the existence of an optimal code with some property. Therefore we are required also to show that some optimal code exists. It is trivial in this case, since we know that the code produced by a run of Huffman's algorithm produces one such code for us. However, it is good to be aware of this. Proving just the implication "if a code is optimal then it has the desired property" doesn't suffice. OK, now we are ready to prove the already mentioned implication "if a code is optimal then it has the desired property". The main idea of our proof is that if the code violates the desired property, then we find two symbols which violate the property and 'fix the code'. For the formal proof we go as follows. Suppose that we have an alphabet $C = \{a_1, \ldots, a_n\}$ where the characters are written in monotonically decreasing order of frequency, i.e. $a_1.freq \ge a_2.freq \ge \ldots \ge a_n.freq$. Let us consider an optimal code $B$ for $C$. Let us denote the codeword for the character $c \in C$ in the code $B$ by $cw_B(c)$. W.l.o.g. we can assume that for any $i$ such that $a_i.freq = a_{i + 1}.freq$ it holds that $|cw_B(a_i)| \le |cw_B(a_{i + 1})|$. This assumption can be made since for any $a_i.freq = a_{i + 1}.freq$ for which $|cw_B(a_i)| > |cw_B(a_{i + 1})|$ we can simply swap the codewords for $a_i$ and $a_{i + 1}$ and obtain a code with the desired property and the same cost as the cost of $B$. We prove that $B$ has the desired property, i.e., its codeword lengths are monotonically increasing. We proceed by contradiction. If the lengths of the codewords are not monotonically increasing, then there exists an index $i$ such that $|cw_B(a_i)| > |cw_B(a_{i + 1})|$. Using our assumptions on $C$ and $B$ we get that $a_i.freq > a_{i + 1}.freq$. Define a new code $B'$ for $C$ such that for $a_j$ with $j \ne i$ and $j \ne i + 1$ we keep $cw_{B'}(a_j) = cw_B(a_j)$, and we swap the codewords for $a_i$ and $a_{i + 1}$, i.e. we set $cw_{B'}(a_i) = cw_{B}(a_{i + 1})$ and $cw_{B'}(a_{i + 1}) = cw_{B}(a_{i})$. Now compare the costs of the codes $B$ and $B'$. It holds that $$ \begin{aligned} cost(B') &= cost(B) - (|cw_B(a_i)|(a_i.freq) + |cw_B(a_{i + 1})|(a_{i + 1}.freq)) \\\\ &+ (|cw_B(a_i)|(a_{i + 1}.freq) + |cw_B(a_{i + 1})|(a_{i}.freq)) \\\\ &= cost(B) + |cw_B(a_i)|(a_{i + 1}.freq - a_i.freq) + |cw_B(a_{i + 1})|(a_i.freq - a_{i + 1}.freq) \end{aligned} $$ For better readability now denote $a_i.freq - a_{i + 1}.freq = \phi$. Since $a_i.freq > a_{i + 1}.freq$, we get $\phi > 0$ and we can write $$ cost(B') = cost(B) - \phi|cw_B(a_i)| + \phi|cw_B(a_{i + 1})| = cost(B) - \phi(|cw_B(a_i)| - |cw_B(a_{i + 1})|) $$ Since $|cw_B(a_i)| > |cw_B(a_{i + 1})|$, we get $|cw_B(a_i)| - |cw_B(a_{i + 1})| > 0$. Thus $\phi(|cw_B(a_i)| - |cw_B(a_{i + 1})|) > 0$, which implies $cost(B') < cost(B)$. Therefore the code $B$ is not optimal, a contradiction. Therefore, we conclude that the codeword lengths of $B$ are monotonically increasing and the proof is complete. **Note:** For those not familiar with mathematical parlance, w.l.o.g. means without loss of generality.
[]
false
[]
16-16.3-6
16
16.3
16.3-6
docs/Chap16/16.3.md
Suppose we have an optimal prefix code on a set $C = \\{0, 1, \ldots, n - 1 \\}$ of characters and we wish to transmit this code using as few bits as possible. Show how to represent any optimal prefix code on $C$ using only $2n - 1 + n \lceil \lg n \rceil$ bits. ($\textit{Hint:}$ Use $2n - 1$ bits to specify the structure of the tree, as discovered by a walk of the tree.)
First observe that any full binary tree with $n$ leaves has exactly $2n - 1$ nodes. We can encode the structure of our full binary tree by performing a preorder traversal of $T$: for each node that we record in the traversal, write a $0$ if it is an internal node and a $1$ if it is a leaf node. Since we know the tree to be full, this uniquely determines its structure and uses $2n - 1$ bits. Next, note that we can encode any character of $C$ in $\lceil \lg n \rceil$ bits. Since there are $n$ characters, we can encode them in order of appearance in our preorder traversal using $n\lceil \lg n \rceil$ bits, for a total of $2n - 1 + n\lceil \lg n \rceil$ bits.
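A sketch of the encoding just described: emit one structure bit per node during a preorder walk ($0$ = internal, $1$ = leaf) and then append each leaf's character in $\lceil \lg n \rceil$ bits, in the order the leaves are visited. The small example tree in `main` is hypothetical.

```cpp
#include <cmath>
#include <iostream>
#include <string>
#include <vector>
using namespace std;

struct Node {
    int ch = -1;                  // character index for a leaf, -1 for internal
    Node *left = nullptr, *right = nullptr;
};

// Preorder walk: 0 for an internal node, 1 for a leaf; collects leaves in order.
void walk(const Node* t, string& shape, vector<int>& leaves) {
    if (t->left == nullptr) {     // full tree: no left child means it's a leaf
        shape += '1';
        leaves.push_back(t->ch);
    } else {
        shape += '0';
        walk(t->left, shape, leaves);
        walk(t->right, shape, leaves);
    }
}

int main() {
    // Hypothetical code tree on C = {0, 1, 2}: 0 -> "0", 1 -> "10", 2 -> "11".
    Node l0, l1, l2, inner, root;
    l0.ch = 0; l1.ch = 1; l2.ch = 2;
    inner.left = &l1; inner.right = &l2;
    root.left = &l0;  root.right = &inner;

    int n = 3, bits = (int)ceil(log2(n));
    string shape;
    vector<int> leaves;
    walk(&root, shape, leaves);

    string chars;
    for (int c : leaves)                         // each character in ceil(lg n) bits
        for (int b = bits - 1; b >= 0; --b)
            chars += ((c >> b) & 1) ? '1' : '0';

    cout << "structure (" << shape.size() << " bits): " << shape << '\n';  // 01011
    cout << "characters (" << chars.size() << " bits): " << chars << '\n'; // 000110
    return 0;
}
```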
[]
false
[]
16-16.3-7
16
16.3
16.3-7
docs/Chap16/16.3.md
Generalize Huffman's algorithm to ternary codewords (i.e., codewords using the symbols $0$, $1$, and $2$), and prove that it yields optimal ternary codes.
Instead of grouping together the two nodes of lowest frequency at each step, we group together the three of lowest frequency, so that the final result is a ternary tree. (If $n$ is even, first add a dummy character of frequency $0$ so that every merge can combine exactly three nodes and the last merge leaves a single root.) The analysis of optimality is almost identical to the binary case: we are placing the symbols of lowest frequency lower down in the final tree, so they will have longer codewords than the more frequently occurring symbols.
[]
false
[]
16-16.3-8
16
16.3
16.3-8
docs/Chap16/16.3.md
Suppose that a data file contains a sequence of $8$-bit characters such that all $256$ characters are about equally common: the maximum character frequency is less than twice the minimum character frequency. Prove that Huffman coding in this case is no more efficient than using an ordinary $8$-bit fixed-length code.
For any $2$ characters, the sum of their frequencies exceeds the frequency of any other character, so initially Huffman coding makes $128$ small trees with $2$ leaves each. At the next stage, no internal node has a label which is more than twice that of any other, so we are in the same setup as before. Continuing in this fashion, Huffman coding builds a complete binary tree of height $\lg 256 = 8$, which is no more efficient than ordinary $8$-bit length codes.
[]
false
[]
16-16.3-9
16
16.3
16.3-9
docs/Chap16/16.3.md
Show that no compression scheme can expect to compress a file of randomly chosen $8$-bit characters by even a single bit. ($\textit{Hint:}$ Compare the number of possible files with the number of possible encoded files.)
If every possible character is equally likely, then, when constructing the Huffman code, we will end up with a complete binary tree of depth $\lg 256 = 8$. This means that every character, regardless of what it is, will be represented using $8$ bits. This is exactly as many bits as were originally used to represent those characters, so the total length of the file will not decrease at all.
[]
false
[]
16-16.4-1
16
16.4
16.4-1
docs/Chap16/16.4.md
Show that $(S, \mathcal I_k)$ is a matroid, where $S$ is any finite set and $\mathcal I_k$ is the set of all subsets of $S$ of size at most $k$, where $k \le |S|$.
The first condition, that $S$ is a finite set, is a given. To prove the second condition, we assume that $k \ge 0$; this gets us that $\mathcal I_k$ is nonempty. Also, to prove the hereditary property, suppose $A \in \mathcal I_k$; this means that $|A| \le k$. Then, if $B \subseteq A$, we have $|B| \le |A| \le k$, so $B \in \mathcal I_k$. Lastly, we prove the exchange property by letting $A, B \in \mathcal I_k$ be such that $|A| < |B|$. Then, we can pick any element $x \in B \backslash A$, and $$|A \cup \\{x\\}| = |A| + 1 \le |B| \le k,$$ so we can extend $A$ to $A \cup \\{x\\} \in \mathcal I_k$.
[]
false
[]
16-16.4-2
16
16.4
16.4-2 $\star$
docs/Chap16/16.4.md
Given an $m \times n$ matrix $T$ over some field (such as the reals), show that $(S, \mathcal I)$ is a matroid, where $S$ is the set of columns of $T$ and $A \in \mathcal I$ if and only if the columns in $A$ are linearly independent.
Let $c_1, \dots, c_n$ be the columns of $T$. Suppose $C = \\{c_{i1}, \dots, c_{ik}\\}$ is dependent. Then there exist scalars $d_1, \dots, d_k$, not all zero, such that $\sum_{j = 1}^k d_jc_{ij} = 0$. By adding columns to $C$ and assigning them coefficient $0$ in the sum, we see that any superset of $C$ is also dependent. By contrapositive, any subset of an independent set must be independent. Now suppose that $A$ and $B$ are two independent sets of columns with $|A| > |B|$. If we couldn't add any column of $A$ to $B$ whilst preserving independence, then it must be the case that every column of $A$ is a linear combination of the columns of $B$. But then the span of $B$, which has dimension at most $|B| < |A|$, would contain the $|A|$ linearly independent columns of $A$, which is impossible. Therefore, our independence system must satisfy the exchange property, so it is in fact a matroid.
[]
false
[]
16-16.4-3
16
16.4
16.4-3 $\star$
docs/Chap16/16.4.md
Show that if $(S, \mathcal I)$ is a matroid, then $(S, \mathcal I')$ is a matroid, where $\mathcal I' = \\{A': S - A'$ contains some maximal $A \in \mathcal I\\}$. That is, the maximal independent sets of $(S, \mathcal I')$ are just the complements of the maximal independent sets of $(S, \mathcal I)$.
Condition one of being a matroid is still satisfied because the base set hasn't changed. Next we show that $\mathcal I'$ is nonempty. Let $A$ be any maximal element of $\mathcal I$; then we have that $S - A \in \mathcal I'$, because $S - (S - A) = A$, which is maximal in $\mathcal I$. Next we show the hereditary property: suppose that $B \subseteq A \in \mathcal I'$. Then there exists some maximal $A' \in \mathcal I$ so that $S - A \supseteq A'$; however, $S - B \supseteq S - A \supseteq A'$, so $B \in \mathcal I'$. Last, we prove the exchange property. That is, if we have $B, A \in \mathcal I'$ and $|B| < |A|$, we can find an element $x$ in $A - B$ to add to $B$ so that it stays independent. We will split into two cases: - The first case is that $|A - B| = 1$. Let $x \in A - B$ be the only element in $A - B$. Since $|A| > |B|$ and $|A - B| = 1$, it follows that in this case $B \subset A$. We extend $B$ by $x$ and we have $B \cup \\{x\\} = A \in \mathcal I'$. - The second case is if the first case does not hold. Let $C$ be a maximal independent set of $\mathcal I$ contained in $S - A$. Pick an arbitrary set of size $|C| - 1$ from some maximal independent set contained in $S - B$, and call it $D$. Since $D$ is a subset of a maximal independent set, it is also independent, and so, by the exchange property, there is some $y \in C - D$ so that $D \cup \\{y\\}$ is a maximal independent set in $\mathcal I$. Then, we select $x$ to be any element other than $y$ in $A - B$. Then, $S - (B \cup \\{x\\})$ will still contain $D \cup \\{y\\}$. This means that $B \cup \\{x\\}$ is independent in $\mathcal I'$.
[]
false
[]
16-16.4-4
16
16.4
16.4-4 $\star$
docs/Chap16/16.4.md
Let $S$ be a finite set and let $S_1, S_2, \ldots, S_k$ be a partition of $S$ into nonempty disjoint subsets. Define the structure $(S, \mathcal I)$ by the condition that $\mathcal I = \\{A: \mid A \cap S_i \mid \le 1$ for $i = 1, 2, \ldots, k\\}$. Show that $(S, \mathcal I)$ is a matroid. That is, the set of all sets $A$ that contain at most one member of each subset in the partition determines the independent sets of a matroid.
Suppose $X \subseteq Y$ and $Y \in \mathcal I$. Then $(X \cap S_i) \subseteq (Y \cap S_i)$ for all $i$, so $$|X \cap S_i| \le |Y \cap S_i| \le 1$$ for all $1 \le i \le k$. Therefore $\mathcal I$ is closed under inclusion (and it is nonempty, since $\emptyset \in \mathcal I$). Now let $A, B \in \mathcal I$ with $|A| > |B|$. Then there must exist some $j$ such that $|A \cap S_j| = 1$ but $|B \cap S_j| = 0$. Let $a \in A \cap S_j$. Then $a \notin B$ and $|(B \cup \\{a\\}) \cap S_j| = 1$. Since $$|(B \cup \\{a\\}) \cap S_i| = |B \cap S_i| \le 1$$ for all $i \ne j$, we must have $B \cup \\{a\\} \in \mathcal I$. Therefore $(S, \mathcal I)$ is a matroid.
[]
false
[]
16-16.4-5
16
16.4
16.4-5
docs/Chap16/16.4.md
Show how to transform the weight function of a weighted matroid problem, where the desired optimal solution is a _minimum-weight_ maximal independent subset, to make it a standard weighted-matroid problem. Argue carefully that your transformation is correct.
Suppose that $W$ is the largest weight of any single element. Then define the new weight function $w_2(x) = 1 + W - w(x)$. This assigns a strictly positive weight to every element, and we will show that any maximal independent set that has maximum weight with respect to $w_2$ has minimum weight with respect to $w$. Recall Theorem 16.6, which we will use: all maximal independent sets of a matroid have the same size, which we denote by $s$. Suppose that $M_1$ and $M_2$ are maximal independent sets such that $M_1$ has maximum weight with respect to $w_2$ and $M_2$ has minimum weight with respect to $w$. Then we need to show that $w(M_1) = w(M_2)$. Suppose not, to obtain a contradiction; then, by the minimality of $M_2$, $w(M_1) > w(M_2)$. Since $w(M) = (1 + W)s - w_2(M)$ for every maximal independent set $M$, rewriting both sides in terms of $w_2$ gives $$w_2(M_2) - (1 + W)s > w_2(M_1) - (1 + W)s,$$ so $$w_2(M_2) > w_2(M_1).$$ This, however, contradicts the maximality of $M_1$ with respect to $w_2$. So we must have $w(M_1) = w(M_2)$: a maximal independent set of largest weight with respect to $w_2$ also has the smallest weight with respect to $w$.
[]
false
[]
16-16.5-1
16
16.5
16.5-1
docs/Chap16/16.5.md
Solve the instance of the scheduling problem given in Figure 16.7, but with each penalty $w_i$ replaced by $80 - w_i$.
$$ \begin{array}{c|ccccccc} a_i & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\\\ \hline d_i & 4 & 2 & 4 & 3 & 1 & 4 & 6 \\\\ w_i & 10 & 20 & 30 & 40 & 50 & 60 & 70 \end{array} $$ We begin by greedily constructing an independent set of tasks, considering the tasks in order of decreasing penalty, i.e., adding first the tasks that are most costly to leave incomplete. So, we add tasks $7, 6, 5, 4, 3$. Then, in order to schedule task $1$ or $2$ we would have to leave incomplete more important tasks, so both are left late. So, our final schedule is $\langle 5, 3, 4, 6, 7, 1, 2 \rangle$, with a total penalty of only $w_1 + w_2 = 30$.
[]
false
[]
16-16.5-2
16
16.5
16.5-2
docs/Chap16/16.5.md
Show how to use property 2 of Lemma 16.12 to determine in time $O(|A|)$ whether or not a given set $A$ of tasks is independent.
We provide pseudocode which captures the main ideas of the algorithm: count how many tasks have each deadline, take prefix sums to obtain $N_t(A)$ for $t = 0, 1, \ldots, n$, and then check property 2 of Lemma 16.12, i.e., that $N_t(A) \le t$ for each $t$. This takes $O(|A|)$ time.

```cpp
IS-INDEPENDENT(A)
    n = A.length
    let Nts[0..n] be an array filled with 0s
    for each a in A
        if a.deadline >= n
            Nts[n] = Nts[n] + 1
        else
            Nts[a.deadline] = Nts[a.deadline] + 1
    for i = 1 to n
        Nts[i] = Nts[i] + Nts[i - 1]
        // at this moment, Nts[i] holds the value of N_i(A)
    for i = 1 to n
        if Nts[i] > i
            return false
    return true
```
[ { "lang": "cpp", "code": "IS-INDEPENDENT(A)\n n = A.length\n let Nts[0..n] be an array filled with 0s\n for each a in A\n if a.deadline >= n\n Nts[n] = Nts[n] + 1\n else\n Nts[d] = Nts[d] + 1\n for i = 1 to n\n Nts[i] = Nts[i] + Nts[i - 1]\n // at this moment, Nts[i] holds value of N_i(A)\n for i = 1 to n\n if Nts[i] > i\n return false\n return true" } ]
false
[]
16-16-1
16
16-1
16-1
docs/Chap16/Problems/16-1.md
Consider the problem of making change for $n$ cents using the fewest number of coins. Assume that each coin's value is an integer. **a.** Describe a greedy algorithm to make change consisting of quarters, dimes, nickels, and pennies. Prove that your algorithm yields an optimal solution. **b.** Suppose that the available coins are in the denominations that are powers of $c$, i.e., the denominations are $c^0, c^1, \ldots, c^k$ for some integers $c > 1$ and $k \ge 1$. Show that the greedy algorithm always yields an optimal solution. **c.** Give a set of coin denominations for which the greedy algorithm does not yield an optimal solution. Your set should include a penny so that there is a solution for every value of $n$. **d.** Give an $O(nk)$-time algorithm that makes change for any set of $k$ different coin denominations, assuming that one of the coins is a penny.
**a.** Always give the highest denomination coin that you can without going over. Then, repeat this process until the amount of remaining change drops to $0$. **b.** Given an optimal solution $(x_0, x_1, \dots, x_k)$, where $x_i$ indicates the number of coins of denomination $c^i$, we will first show that we must have $x_i < c$ for every $i < k$. Suppose that we had some $x_i \ge c$; then we could decrease $x_i$ by $c$ and increase $x_{i + 1}$ by $1$. This collection of coins has the same value and has $c - 1$ fewer coins, so the original solution must have been non-optimal. This configuration of coins is exactly the same as you would get if you kept greedily picking the largest coin possible. This is because, to get a total value of $V$, you would pick $x_k = \lfloor V c^{-k} \rfloor$ and, for $i < k$, $x_i = \lfloor (V \mod c^{i + 1})c^{-i} \rfloor$. This is the only solution with fewer than $c$ coins of every denomination but the largest, because the coin amounts are a base-$c$ representation of $V \mod c^k$. **c.** Let the coin denominations be $\\{1, 3, 4\\}$, and the value to make change for be $6$. The greedy solution would result in the collection of coins $\\{1, 1, 4\\}$, but the optimal solution would be $\\{3, 3\\}$. **d.** Use a dynamic-programming algorithm $\text{MAKE-CHANGE}(S, v)$ that computes, for each amount from $1$ to $n$, the fewest coins needed and which coin was used last, and then reads off the coins (a sketch is given below). Since the first for loop runs $n$ times, the inner for loop runs $k$ times, and the later while loop runs at most $n$ times, the total running time is $O(nk)$.
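Since the $\text{MAKE-CHANGE}$ procedure is only referred to and not reproduced above, the following C++ sketch shows one way the described $O(nk)$ dynamic program can look: a table of minimum coin counts indexed by amount, plus a table recording the last coin used so the change can be read off with a final while-style loop. The denominations and amount in `main` are the ones from part (c).

```cpp
#include <iostream>
#include <vector>
using namespace std;

int main() {
    // Denominations (must include a penny) and the target amount.
    vector<int> denom = {1, 3, 4};
    int n = 6;

    const int INF = 1e9;
    vector<int> best(n + 1, INF), last(n + 1, -1);
    best[0] = 0;
    for (int v = 1; v <= n; ++v)            // n iterations ...
        for (int d : denom)                 // ... times k denominations: O(nk)
            if (d <= v && best[v - d] + 1 < best[v]) {
                best[v] = best[v - d] + 1;
                last[v] = d;                // remember which coin achieved the optimum
            }

    cout << best[n] << " coins:";
    for (int v = n; v > 0; v -= last[v])    // read off the coins used
        cout << ' ' << last[v];
    cout << '\n';                           // prints "2 coins: 3 3"
    return 0;
}
```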
[]
false
[]
16-16-2
16
16-2
16-2
docs/Chap16/Problems/16-2.md
Suppose you are given a set $S = \\{a_1, a_2, \ldots, a_n\\}$ of tasks, where task $a_i$ requires $p_i$ units of processing time to complete, once it has started. You have one computer on which to run these tasks, and the computer can run only one task at a time. Let $c_i$ be the **_completion time_** of task $a_i$ , that is, the time at which task $a_i$ completes processing. Your goal is to minimize the average completion time, that is, to minimize $(1 / n) \sum_{i = 1}^n c_i$. For example, suppose there are two tasks, $a_1$ and $a_2$, with $p_1 = 3$ and $p_2 = 5$, and consider the schedule in which $a_2$ runs first, followed by $a_1$. Then $c_2 = 5$, $c_1 = 8$, and the average completion time is $(5 + 8) / 2 = 6.5$. If task $a_1$ runs first, however, then $c_1 = 3$, $c_2 = 8$, and the average completion time is $(3 + 8) / 2 = 5.5$. **a.** Give an algorithm that schedules the tasks so as to minimize the average completion time. Each task must run non-preemptively, that is, once task $a_i$ starts, it must run continuously for $p_i$ units of time. Prove that your algorithm minimizes the average completion time, and state the running time of your algorithm. **b.** Suppose now that the tasks are not all available at once. That is, each task cannot start until its **_release time_** $r_i$. Suppose also that we allow **_preemption_**, so that a task can be suspended and restarted at a later time. For example, a task $a_i$ with processing time $p_i = 6$ and release time $r_i = 1$ might start running at time $1$ and be preempted at time $4$. It might then resume at time $10$ but be preempted at time $11$, and it might finally resume at time $13$ and complete at time $15$. Task $a_i$ has run for a total of $6$ time units, but its running time has been divided into three pieces. In this scenario, $a_i$'s completion time is $15$. Give an algorithm that schedules the tasks so as to minimize the average completion time in this new scenario. Prove that your algorithm minimizes the average completion time, and state the running time of your algorithm.
**a.** Order the tasks by processing time from smallest to largest and run them in that order. To see that this greedy solution is optimal, first observe that the problem exhibits optimal substructure: if we run the first task in an optimal solution, then we obtain an optimal solution by running the remaining tasks in a way which minimizes the average completion time. Let $O$ be an optimal solution. Let $a$ be the task which has the smallest processing time and let $b$ be the first task run in $O$. Let $G$ be the solution obtained by switching the order in which we run $a$ and $b$ in $O$. This amounts to reducing the completion time of $a$ and the completion times of all tasks scheduled between $a$ and $b$ by the difference in the processing times of $a$ and $b$. Since all other completion times remain the same, the average completion time of $G$ is less than or equal to the average completion time of $O$, proving that the greedy solution gives an optimal solution. This has runtime $O(n\lg n)$ because we must first sort the elements. **b.** Without loss of generality we may assume that every task is a unit-time task. Apply the same strategy as in part (a), except that this time, if a task which we would like to add next to the schedule isn't allowed to run yet, we must skip over it. Since there could be many tasks of short processing time which have late release times, the runtime becomes $O(n^2)$, since we might have to spend $O(n)$ time deciding which task to add next at each step.
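A sketch of the part (a) strategy: sort by processing time and accumulate completion times. The processing times in `main` are the two-task example from the problem statement ($p_1 = 3$, $p_2 = 5$), so the expected output is the $5.5$ computed there.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<double> p = {5, 3};              // processing times from the example
    sort(p.begin(), p.end());               // shortest processing time first

    double t = 0, sum = 0;
    for (double pi : p) {
        t += pi;                            // completion time of this task
        sum += t;
    }
    cout << "average completion time = " << sum / p.size() << '\n';  // 5.5
    return 0;
}
```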
[]
false
[]
16-16-3
16
16-3
16-3
docs/Chap16/Problems/16-3.md
**a.** The **_incidence matrix_** for an undirected graph $G = (V, E)$ is a $|V| \times |E|$ matrix $M$ such that $M_{ve} = 1$ if edge $e$ is incident on vertex $v$, and $M_{ve} = 0$ otherwise. Argue that a set of columns of $M$ is linearly independent over the field of integers modulo $2$ if and only if the corresponding set of edges is acyclic. Then, use the result of Exercise 16.4-2 to provide an alternate proof that $(E, \mathcal I)$ of part (a) is a matroid. **b.** Suppose that we associate a nonnegative weight $w(e)$ with each edge in an undirected graph $G = (V, E)$. Give an efficient algorithm to find an acyclic subset of $E$ of maximum total weight. **c.** Let $G(V, E)$ be an arbitrary directed graph, and let $(E, \mathcal I)$ be defined so that $A \in \mathcal I$ if and only if $A$ does not contain any directed cycles. Give an example of a directed graph $G$ such that the associated system $(E, \mathcal I)$ is not a matroid. Specify which defining condition for a matroid fails to hold. **d.** The **_incidence matrix_** for a directed graph $G = (V, E)$ with no self-loops is a $|V| \times |E|$ matrix $M$ such that $M_{ve} = -1$ if edge $e$ leaves vertex $v$, $M_{ve} = 1$ if edge $e$ enters vertex $v$, and $M_{ve} = 0$ otherwise. Argue that if a set of columns of $M$ is linearly independent, then the corresponding set of edges does not contain a directed cycle. **e.** Exercise 16.4-2 tells us that the set of linearly independent sets of columns of any matrix $M$ forms a matroid. Explain carefully why the results of parts (d) and (e) are not contradictory. How can there fail to be a perfect correspondence between the notion of a set of edges being acyclic and the notion of the associated set of columns of the incidence matrix being linearly independent?
**a.** First, suppose that a set of columns is not linearly independent over $\mathbb F_2$. Then there is some subset of those columns, say $S$, so that a linear combination of $S$ is $0$. However, over $\mathbb F_2$, since the only two elements are $1$ and $0$, a linear combination is a sum over some subset. Suppose that this subset is $S'$; note that it has to be nonempty because of the linear dependence. Now, consider the set of edges that these columns correspond to. Since the columns have total incidence $0$ with each vertex in $\mathbb F_2$, each vertex meets an even number of these edges. So, if we consider the subgraph on these edges, then every vertex has an even degree. Also, since our $S'$ was nonempty, some component has an edge. Restrict our attention to any such component. Since this component is connected and has all even vertex degrees, it contains an Euler circuit, which is a cycle. Now, suppose that our graph had some subset of edges which was a cycle. Then, the degree of any vertex with respect to this set of edges is even, so, when we add the corresponding columns, we will get a zero column in $\mathbb F_2$. Since sets of linearly independent columns form a matroid, by Exercise 16.4-2, the acyclic sets of edges form a matroid as well. **b.** One simple approach is to repeatedly take the highest-weight edge that doesn't complete a cycle. Another way to phrase this is to run Kruskal's algorithm (see Chapter 23) on the graph with negated edge weights. **c.** Consider the digraph on the vertex set $\\{1, 2, 3\\}$ with the edges $(1, 2), (2, 1), (2, 3), (3, 2), (3, 1)$, where $(u, v)$ indicates there is an edge from $u$ to $v$. Then, consider the two acyclic subsets of edges $B = \\{(3, 1), (3, 2), (2, 1)\\}$ and $A = \\{(1, 2), (2, 3)\\}$. Adding any edge in $B - A$ to $A$ will create a cycle. So, the exchange property is violated. **d.** Suppose that the graph contained a directed cycle consisting of the edges corresponding to columns $S$. Then, since each vertex that is involved in this cycle has exactly as many edges going out of it as going into it, the entry of each vertex's row in the sum of these columns is zero, because the outgoing edges count negative and the incoming edges count positive. This means that the sum of the columns in $S$ is zero, so the columns were not linearly independent. **e.** There is no contradiction because we didn't show that not containing a directed cycle means that the columns are linearly independent; so there is no perfect correspondence between the sets of linearly independent columns (which we know to form a matroid) and the acyclic sets of edges (which we know not to form a matroid).
[]
false
[]
16-16-4
16
16-4
16-4
docs/Chap16/Problems/16-4.md
Consider the following algorithm for the problem from Section 16.5 of scheduling unit-time tasks with deadlines and penalties. Let all $n$ time slots be initially empty, where time slot $i$ is the unit-length slot of time that finishes at time $i$. We consider the tasks in order of monotonically decreasing penalty. When considering task $a_j$, if there exists a time slot at or before $a_j$'s deadline $d_j$ that is still empty, assign $a_j$ to the latest such slot, filling it. If there is no such slot, assign task $a_j$ to the latest of the as yet unfilled slots. **a.** Argue that this algorithm always gives an optimal answer. **b.** Use the fast disjoint-set forest presented in Section 21.3 to implement the algorithm efficiently. Assume that the set of input tasks has already been sorted into monotonically decreasing order by penalty. Analyze the running time of your implementation.
**a.** Let $O$ be an optimal solution. If $a_j$ is scheduled before its deadline, we can always swap it with whichever activity is scheduled at its deadline without changing the penalty. If it is scheduled after its deadline but $a_j.deadline \le j$, then there must exist a task from among the first $j$ with penalty less than that of $a_j$. We can then swap $a_j$ with this task to reduce the overall penalty incurred. Since $O$ is optimal, this can't happen. Finally, if $a_j$ is scheduled after its deadline and $a_j.deadline > j$, we can swap $a_j$ with any other late task without increasing the penalty incurred. Since the problem exhibits the greedy-choice property as well, this greedy strategy always yields an optimal solution. **b.** Assume that $\text{MAKE-SET}(x)$ returns a pointer to the element $x$, which is now in its own set. Our disjoint sets will be collections of elements which have been scheduled at contiguous times. We'll use this structure to quickly find the next available time to schedule a task. Store attributes $x.low$ and $x.high$ at the representative $x$ of each disjoint set. These give the earliest and latest times of a scheduled task in the block. Assume that $\text{UNION}(x, y)$ maintains this attribute. This can be done in constant time, so it won't affect the asymptotics. Note that the attribute is well-defined under the union operation because we only union two blocks if they are contiguous. Without loss of generality we may assume that task $a_1$ has the greatest penalty, task $a_2$ has the second greatest penalty, and so on, and that they are given to us in the form of an array $A$ where $A[i] = a_i$. We will maintain an array $D$ such that $D[i]$ contains a pointer to the task scheduled at time $i$. We may assume that the size of $D$ is at most $n$, since a task with deadline later than $n$ can't possibly be scheduled on time. There are at most $3n$ total $\text{MAKE-SET}$, $\text{UNION}$, and $\text{FIND-SET}$ operations, since each kind of operation occurs at most $n$ times, so by Theorem 21.14 the runtime is $O(n\alpha(n))$.

```cpp
SCHEDULING-VARIATIONS(A)
    let D[1..n] be a new array
    for i = 1 to n
        a[i].time = a[i].deadline
        if D[a[i].deadline] != NIL
            y = FIND-SET(D[a[i].deadline])
            a[i].time = y.low - 1
        x = MAKE-SET(a[i])
        D[a[i].time] = x
        x.low = x.high = a[i].time
        if D[a[i].time - 1] != NIL
            UNION(D[a[i].time - 1], D[a[i].time])
        if D[a[i].time + 1] != NIL
            UNION(D[a[i].time], D[a[i].time + 1])
```
[ { "lang": "cpp", "code": "SCHEDULING-VARIATIONS(A)\n let D[1..n] be a new array\n for i = 1 to n\n a[i].time = a[i].deadline\n if D[a[i].deadline] != NIL\n y = FIND-SET(D[a[i].deadline])\n a[i].time = y.low - 1\n x = MAKE-SET(a[i])\n D[a[i].time] = x\n x.low = x.high = a[i].time\n if D[a[i].time - 1] != NIL\n UNION(D[a[i].time - 1], D[a[i].time])\n if D[a[i].time + 1] != NIL\n UNION(D[a[i].time], D[a[i].time + 1])" } ]
false
[]
16-16-5
16
16-5
16-5
docs/Chap16/Problems/16-5.md
Modern computers use a cache to store a small amount of data in a fast memory. Even though a program may access large amounts of data, by storing a small subset of the main memory in the **_cache_**—a small but faster memory—overall access time can greatly decrease. When a computer program executes, it makes a sequence $\langle r_1, r_2, \ldots, r_n \rangle$ of $n$ memory requests, where each request is for a particular data element. For example, a program that accesses 4 distinct elements $\\{a, b, c, d\\}$ might make the sequence of requests $\langle d, b, d, b, d, a, c, d, b, a, c, b \rangle$. Let $k$ be the size of the cache. When the cache contains $k$ elements and the program requests the $(k + 1)$st element, the system must decide, for this and each subsequent request, which $k$ elements to keep in the cache. More precisely, for each request $r_i$, the cache-management algorithm checks whether element $r_i$ is already in the cache. If it is, then we have a **_cache hit_**; otherwise, we have a cache miss. Upon a **_cache miss_**, the system retrieves $r_i$ from the main memory, and the cache-management algorithm must decide whether to keep $r_i$ in the cache. If it decides to keep $r_i$ and the cache already holds $k$ elements, then it must evict one element to make room for $r_i$ . The cache-management algorithm evicts data with the goal of minimizing the number of cache misses over the entire sequence of requests. Typically, caching is an on-line problem. That is, we have to make decisions about which data to keep in the cache without knowing the future requests. Here, however, we consider the off-line version of this problem, in which we are given in advance the entire sequence of $n$ requests and the cache size $k$, and we wish to minimize the total number of cache misses. We can solve this off-line problem by a greedy strategy called **_furthest-in-future_**, which chooses to evict the item in the cache whose next access in the request sequence comes furthest in the future. **a.** Write pseudocode for a cache manager that uses the furthest-in-future strategy. The input should be a sequence $\langle r_1, r_2, \ldots, r_n \rangle$ of requests and a cache size $k$, and the output should be a sequence of decisions about which data element (if any) to evict upon each request. What is the running time of your algorithm? **b.** Show that the off-line caching problem exhibits optimal substructure. **c.** Prove that furthest-in-future produces the minimum possible number of cache misses.
**a.** Suppose there are $m$ distinct elements that could be requested. There may be some room for improvement in terms of keeping track of the furthest-in-future element at each position. Maintain a doubly linked circular list with a node for each possible cache element, and an array indexed by the possible cache requests so that index $i$ holds a pointer to the node in the linked list corresponding to request $i$. Then, starting with the elements in an arbitrary order, process the sequence $\langle r_1, \dots, r_n \rangle$ from right to left. Upon processing a request, move the node corresponding to that request to the beginning of the linked list, and make a note, in another array of length $n$, of the element at the end of the linked list; this element is tied for furthest-in-future at that position. Then, just scan left to right through the sequence, each time checking which elements are currently in the cache. Checking whether an element is in the cache or not can be done in constant time with a direct-address table. If an element needs to be evicted, evict the furthest-in-future one noted earlier. This algorithm will take time $O(n + m)$ and use additional space $O(m + n)$. In the degenerate case that $m > n$, we could restrict our attention to the possible cache requests that actually happen, so we have a solution that is $O(n)$ both in time and in additional space required. **b.** Index the subproblems $c[i, S]$ by a number $i \in [n]$ and a subset $S \in \binom{[m]}{k}$, where $c[i, S]$ denotes the lowest number of misses that can be achieved with an initial cache of $S$ starting after index $i$. Then, $$c[i, S] = \min_{x \in S} (c[i + 1, \\{r_i\\} \cup (S - \\{x\\})] + (1 - \chi_{\\{r_i\\}}(x))),$$ which means that $x$ is the element that is removed from the cache, unless it is the current element being accessed, in which case there is no cost of eviction. **c.** Each time we need to bring something new into the cache, we can pick which entry to evict. We need to show that there is an exchange property. That is, if we are at round $i$ and need to evict someone, suppose we evict $x$. Then, if we were to instead evict the furthest-in-future element $y$, we would have no more evictions than before. To see this: since we evicted $x$, we will have to evict someone else once we get to the next request for $x$, whereas, if we had used the other strategy, we wouldn't have had to evict anyone until we got to the next request for $y$. This is a point later in time than when we had to evict someone to put $x$ back into the cache, so we could, at reloading $y$, just evict the element we would have evicted when we evicted someone to reload $x$. This causes the same number of misses, unless there was an access to that element (the one that would have been evicted at reloading $x$) at some point in between when $x$ and $y$ were needed, in which case furthest-in-future is better.
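For illustration, here is a straightforward furthest-in-future simulation in C++; it scans forward to find each cached element's next use rather than doing the $O(n + m)$ bookkeeping described above, so it is quadratic, but it shows the eviction rule itself. The request sequence in `main` is the one from the problem statement; the cache size $k = 3$ is made up.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    // Request sequence from the problem statement; cache size k = 3 is hypothetical.
    vector<char> r = {'d','b','d','b','d','a','c','d','b','a','c','b'};
    size_t k = 3;

    vector<char> cache;
    int misses = 0;
    for (size_t i = 0; i < r.size(); ++i) {
        if (find(cache.begin(), cache.end(), r[i]) != cache.end())
            continue;                                     // cache hit
        ++misses;
        if (cache.size() < k) { cache.push_back(r[i]); continue; }

        // Evict the cached element whose next use lies furthest in the future.
        size_t evict = 0, farthest = 0;
        for (size_t c = 0; c < cache.size(); ++c) {
            size_t next = i + 1;
            while (next < r.size() && r[next] != cache[c]) ++next;
            if (next > farthest) { farthest = next; evict = c; }
        }
        cout << "request " << r[i] << ": evict " << cache[evict] << '\n';
        cache[evict] = r[i];
    }
    cout << "total misses = " << misses << '\n';
    return 0;
}
```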
[]
false
[]
17-17.1-1
17
17.1
17.1-1
docs/Chap17/17.1.md
If the set of stack operations included a $\text{MULTIPUSH}$ operation, which pushes $k$ items onto the stack, would the $O(1)$ bound on the amortized cost of stack operations continue to hold?
No. The amortized cost of a stack operation would no longer be $O(1)$: a single $\text{MULTIPUSH}$ of $k$ items takes $\Theta(k)$ time, so a sequence of $n$ $\text{MULTIPUSH}$ operations, each pushing $k$ elements, takes $\Theta(nk)$ time, which gives an amortized cost of $\Theta(k)$ per operation.
[]
false
[]
17-17.1-2
17
17.1
17.1-2
docs/Chap17/17.1.md
Show that if a $\text{DECREMENT}$ operation were included in the $k$-bit counter example, $n$ operations could cost as much as $\Theta(nk)$ time.
The argument that a given bit is flipped only on every other (or every fourth, eighth, $\ldots$) operation no longer holds. For example, suppose the counter holds the value $2^{k - 1}$, whose binary representation is $10 \cdots 0$. Alternating $\text{DECREMENT}$ and $\text{INCREMENT}$ operations then flip all $k$ bits on every operation ($10 \cdots 0 \leftrightarrow 01 \cdots 1$), so a sequence of $n$ operations costs $\Theta(nk)$ time.
[]
false
[]
17-17.1-3
17
17.1
17.1-3
docs/Chap17/17.1.md
Suppose we perform a sequence of $n$ operations on a data structure in which the $i$th operation costs $i$ if $i$ is an exact power of $2$, and $1$ otherwise. Use aggregate analysis to determine the amortized cost per operation.
Let $n$ be arbitrary, and let $c(i)$ be the cost of operation $i$. The operations whose indices are exact powers of $2$ and at most $n$ are $2^0, 2^1, \ldots, 2^{\lfloor\lg n\rfloor}$, and every other operation costs $1$. Then we have $$ \begin{aligned} \sum_{i = 1}^n c(i) & \le \sum_{j = 0}^{\lfloor\lg n\rfloor} 2^j + n \\\\ & = 2^{\lfloor\lg n\rfloor + 1} - 1 + n \\\\ & \le 2n - 1 + n \\\\ & \le 3n \in O(n). \end{aligned} $$ To find the average, we divide by $n$, and the amortized cost per operation is $O(1)$.
[]
false
[]
17-17.2-1
17
17.2
17.2-1
docs/Chap17/17.2.md
Suppose we perform a sequence of stack operations on a stack whose size never exceeds $k$. After every $k$ operations, we make a copy of the entire stack for backup purposes. Show that the cost of $n$ stack operations, including copying the stack, is $O(n)$ by assigning suitable amortized costs to the various stack operations.
For every stack operation, we charge twice. - First, we charge the actual cost of the stack operation. - Second, we charge the cost of copying an element later on. Since the size of the stack never exceeds $k$, and there are always $k$ operations between backups, the credit accumulated between two consecutive backups is enough to pay for copying the at most $k$ elements on the stack. Therefore, the amortized cost of each operation is constant, and the cost of the $n$ operations is $O(n)$.
[]
false
[]
17-17.2-2
17
17.2
17.2-2
docs/Chap17/17.2.md
Redo Exercise 17.1-3 using an accounting method of analysis.
Let $c_i =$ cost of $i$th operation. $$ c_i = \begin{cases} i & \text{if $i$ is an exact power of $2$}, \\\\ 1 & \text{otherwise}. \end{cases} $$ Charge $3$ (amortized cost $\hat c_i$) for each operation. - If $i$ is not an exact power of $2$, pay $\$1$, and store $\$2$ as credit. - If $i$ is an exact power of $2$, pay $\$i$, using stored credit. $$ \begin{array}{cccc} \text{Operation} & \text{Amortized cost} & \text{Actual cost} & \text{Credit remaining} \\\\ \hline 1 & 3 & 1 & 2 \\\\ 2 & 3 & 2 & 3 \\\\ 3 & 3 & 1 & 5 \\\\ 4 & 3 & 4 & 4 \\\\ 5 & 3 & 1 & 6 \\\\ 6 & 3 & 1 & 8 \\\\ 7 & 3 & 1 & 10 \\\\ 8 & 3 & 8 & 5 \\\\ 9 & 3 & 1 & 7 \\\\ 10 & 3 & 1 & 9 \\\\ \vdots & \vdots & \vdots & \vdots \end{array} $$ Since the amortized cost is $\$3$ per operation, $\sum\limits_{i = 1}^n \hat c_i = 3n$. We know from Exercise 17.1-3 that $\sum\limits_{i = 1}^n c_i \le 3n$. Then we have $$\sum_{i = 1}^n \hat c_i \ge \sum_{i = 1}^n c_i \Rightarrow \text{credit} = \text{amortized cost} - \text{actual cost} \ge 0.$$ Since the amortized cost of each operation is $O(1)$, and the amount of credit never goes negative, the total cost of $n$ operations is $O(n)$.
[]
false
[]
17-17.2-3
17
17.2
17.2-3
docs/Chap17/17.2.md
Suppose we wish not only to increment a counter but also to reset it to zero (i.e., make all bits in it $0$). Counting the time to examine or modify a bit as $\Theta(1)$, show how to implement a counter as an array of bits so that any sequence of $n$ $\text{INCREMENT}$ and $\text{RESET}$ operations takes time $O(n)$ on an initially zero counter. ($\textit{Hint:}$ Keep a pointer to the high-order $1$.)
(Removed)
[]
false
[]
17-17.3-1
17
17.3
17.3-1
docs/Chap17/17.3.md
Suppose we have a potential function $\Phi$ such that $\Phi(D_i) \ge \Phi(D_0)$ for all $i$, but $\Phi(D_0) \ne 0$. Show that there exists a potential function $\Phi'$ such that $\Phi'(D_0) = 0$, $\Phi'(D_i) \ge 0$ for all $i \ge 1$, and the amortized costs using $\Phi'$ are the same as the amortized costs using $\Phi$.
Define the potential function $\Phi'(D_i) = \Phi(D_i) - \Phi(D_0)$ for all $i \ge 1$. Then $$\Phi'(D_0) = \Phi(D_0) - \Phi(D_0) = 0,$$ and $$\Phi'(D_i) = \Phi(D_i) - \Phi(D_0) \ge 0.$$ The amortized cost is $$ \begin{aligned} \hat c_i' & = c_i + \Phi'(D_i) - \Phi'(D_{i - 1}) \\\\ & = c_i + (\Phi(D_i) - \Phi(D_0)) - (\Phi(D_{i - 1}) - \Phi(D_0)) \\\\ & = c_i + \Phi(D_i) - \Phi(D_{i - 1}) \\\\ & = \hat c_i. \end{aligned} $$
[]
false
[]
17-17.3-2
17
17.3
17.3-2
docs/Chap17/17.3.md
Redo Exercise 17.1-3 using a potential method of analysis.
Define the potential function $\Phi(D_0) = 0$, and $\Phi(D_i) = 2i - 2^{1 + \lfloor \lg i \rfloor}$ for $i > 0$. For operation $1$, $$\hat c_1 = c_1 + \Phi(D_1) - \Phi(D_0) = 1 + (2 - 2) - 0 = 1.$$ For operation $i$ ($i > 1$), if $i$ is not a power of $2$, then $\lfloor \lg i \rfloor = \lfloor \lg(i - 1) \rfloor$, so $$\hat c_i = c_i + \Phi(D_i) - \Phi(D_{i - 1}) = 1 + 2i - 2^{1 + \lfloor \lg i \rfloor} - (2(i - 1) - 2^{1 + \lfloor \lg(i - 1) \rfloor}) = 3.$$ If $i = 2^j$ for some $j \in \mathbb N$, then $$\hat c_i = c_i + \Phi(D_i) - \Phi(D_{i - 1}) = i + 2i - 2^{1 + j} - (2(i - 1) - 2^{1 + j - 1}) = i + 2i - 2i - 2i + 2 + i = 2.$$ Thus, the amortized cost is at most $3$ per operation, so the total cost of $n$ operations is $O(n)$.
[]
false
[]
17-17.3-3
17
17.3
17.3-3
docs/Chap17/17.3.md
Consider an ordinary binary min-heap data structure with $n$ elements supporting the instructions $\text{INSERT}$ and $\text{EXTRACT-MIN}$ in $O(\lg n)$ worst-case time. Give a potential function $\Phi$ such that the amortized cost of $\text{INSERT}$ is $O(\lg n)$ and the amortized cost of $\text{EXTRACT-MIN}$ is $O(1)$, and show that it works.
Let the potential function be $\Phi(D) = \sum_{i = 1}^n \lg i$, where $n$ is the number of elements in the min-heap (scaled, if necessary, by the constant hidden in the $O(\lg n)$ cost of a heap operation). An $\text{INSERT}$ into a heap of size $n$ increases the potential by $\lg(n + 1)$, so its amortized cost is still $O(\lg n)$. An $\text{EXTRACT-MIN}$ decreases the potential by $\lg n$, which pays for its $O(\lg n)$ actual cost, so its amortized cost is $O(1)$.
[]
false
[]
17-17.3-4
17
17.3
17.3-4
docs/Chap17/17.3.md
What is the total cost of executing $n$ of the stack operations $\text{PUSH}$, $\text{POP}$, and $\text{MULTIPOP}$, assuming that the stack begins with $s_0$ objects and finishes with $s_n$ objects?
Let $\Phi$ be the potential function that returns the number of elements in the stack. We know that for this potential function, we have amortized cost $2$ for the $\text{PUSH}$ operation and amortized cost $0$ for the $\text{POP}$ and $\text{MULTIPOP}$ operations. The total amortized cost is $$\sum_{i = 1}^n \hat c_i = \sum_{i = 1}^n c_i + \Phi(D_n) - \Phi(D_0).$$ Using the potential function and the known amortized costs, we can rewrite the equation as $$ \begin{aligned} \sum_{i = 1}^n c_i & = \sum_{i = 1}^n \hat c_i + \Phi(D_0) - \Phi(D_n) \\\\ & = \sum_{i = 1}^n \hat c_i + s_0 - s_n \\\\ & \le 2n + s_0 - s_n, \end{aligned} $$ which gives us a total cost of $O(n + (s_0 - s_n))$. If $s_n \ge s_0$, then this is $O(n)$; that is, if the stack grows, then the work done is limited by the number of operations. (Note that it does not matter here that the potential may go below the starting potential. The condition $\Phi(D_n) \ge \Phi(D_0)$ for all $n$ is only required to have $\sum_{i = 1}^n \hat c_i \ge \sum_{i = 1}^n c_i$, but we do not need that to hold in this application.)
[]
false
[]
17-17.3-5
17
17.3
17.3-5
docs/Chap17/17.3.md
Suppose that a counter begins at a number with $b$ $1$s in its binary representation, rather than at $0$. Show that the cost of performing $n$ $\text{INCREMENT}$ operations is $O(n)$ if $n = \Omega(b)$. (Do not assume that $b$ is constant.)
Let $\Phi(D_i)$ be the number of $1$s in the counter after the $i$th operation, so $\Phi(D_0) = b$ and $\Phi(D_n) = x$ for some $x \ge 0$. With this potential, each $\text{INCREMENT}$ has amortized cost at most $2$: it sets at most one bit to $1$, and every bit it resets to $0$ is paid for by the potential that bit releases. Therefore, $$ \begin{aligned} \sum_{i = 1}^n c_i & = \sum_{i = 1}^n \hat c_i - \Phi(D_n) + \Phi(D_0) \\\\ & \le 2n - x + b \\\\ & \le 2n + b \\\\ & = O(n), \end{aligned} $$ where the last step uses $n = \Omega(b)$, i.e., $b = O(n)$.
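As an illustrative check of this bound (not part of the original exercise; the width $k$, the value $b$, and the number of increments $n$ below are arbitrary assumptions), the following C++ sketch starts a counter with $b$ low-order $1$ bits, performs $n$ increments, and verifies that the total number of bit flips stays below $2n + b$.

```cpp
#include <cassert>
#include <iostream>
#include <vector>

int main() {
    const int k = 32;             // counter width (assumed)
    const int b = 20;             // number of 1s in the starting value (assumed)
    const long long n = 1000000;  // number of INCREMENT operations (assumed)

    std::vector<int> bit(k, 0);
    for (int i = 0; i < b; ++i) bit[i] = 1;  // start with b low-order 1s

    long long flips = 0;  // total actual cost = number of bit flips
    for (long long op = 0; op < n; ++op) {
        int i = 0;
        while (i < k && bit[i] == 1) {  // clear trailing 1s
            bit[i] = 0;
            ++flips;
            ++i;
        }
        if (i < k) {  // set the first 0 bit
            bit[i] = 1;
            ++flips;
        }
    }
    assert(flips <= 2 * n + b);  // matches the O(n) bound when n = Omega(b)
    std::cout << "total flips = " << flips << " <= " << 2 * n + b << '\n';
}
```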
[]
false
[]
17-17.3-6
17
17.3
17.3-6
docs/Chap17/17.3.md
Show how to implement a queue with two ordinary stacks (Exercise 10.1-6) so that the amortized cost of each $\text{ENQUEUE}$ and each $\text{DEQUEUE}$ operation is $O(1)$.
We'll use the accounting method for the analysis. Assign an amortized cost of $4$ to the $\text{ENQUEUE}$ operation and $0$ to the $\text{DEQUEUE}$ operation. Recall the implementation of 10.1-6 where we enqueue by pushing onto the top of stack 1, and dequeue by popping from stack 2. If stack 2 is empty, then we must pop every element from stack 1 and push it onto stack 2 before popping the top element from stack 2. Each $\text{ENQUEUE}$ pays $1$ for its own push and stores $3$ credits on the element. Before an element can be dequeued, it must be moved to stack 2; this move (a pop from stack 1 followed by a push onto stack 2) may happen well before the element is actually dequeued, but it happens at most once, and it is paid for by $2$ of the element's credits. The final pop from stack 2 costs $1$, which is exactly the remaining credit on the element. Since every operation's amortized cost is $O(1)$ and the stored credit never goes negative, the amortized cost per operation is $O(1)$.
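A minimal C++ sketch of this two-stack queue (the class and method names are mine, not from the exercise): each element is pushed at most twice and popped at most twice, which is exactly what the credit argument charges for.

```cpp
#include <cassert>
#include <stack>

// Queue built from two stacks: ENQUEUE pushes onto `in`; DEQUEUE pops from
// `out`, first moving everything from `in` to `out` if `out` is empty.
class TwoStackQueue {
    std::stack<int> in, out;
public:
    void enqueue(int x) { in.push(x); }

    int dequeue() {
        if (out.empty()) {
            // Each element is moved from `in` to `out` at most once overall,
            // so the total work over n operations is O(n).
            while (!in.empty()) {
                out.push(in.top());
                in.pop();
            }
        }
        assert(!out.empty());  // caller must not dequeue from an empty queue
        int x = out.top();
        out.pop();
        return x;
    }
};

int main() {
    TwoStackQueue q;
    q.enqueue(1); q.enqueue(2); q.enqueue(3);
    assert(q.dequeue() == 1);
    q.enqueue(4);
    assert(q.dequeue() == 2);
    assert(q.dequeue() == 3);
    assert(q.dequeue() == 4);
}
```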
[]
false
[]
17-17.3-7
17
17.3
17.3-7
docs/Chap17/17.3.md
Design a data structure to support the following two operations for a dynamic multiset $S$ of integers, which allows duplicate values: $\text{INSERT}(S, x)$ inserts $x$ into $S$. $\text{DELETE-LARGER-HALF}(S)$ deletes the largest $\lceil |S| / 2 \rceil$ elements from $S$. Explain how to implement this data structure so that any sequence of $m$ $\text{INSERT}$ and $\text{DELETE-LARGER-HALF}$ operations runs in $O(m)$ time. Your implementation should also include a way to output the elements of $S$ in $O(|S|)$ time.
We'll store all our elements in an array, and if it ever becomes too full, we copy them into an array of twice the length. To delete the larger half, first find the element $m$ with order statistic $\lfloor |S| / 2 \rfloor$ using the linear-time selection algorithm presented in Section 9.3. Then scan through the array and copy the elements that are less than or equal to $m$ (copying at most $\lfloor |S| / 2 \rfloor$ of them in case of ties with $m$) into an array of half the size. Since the delete-larger-half operation takes time $O(|S|)$ and reduces the number of elements by $\lceil |S| / 2 \rceil = \Omega(|S|)$, we can make it run in amortized constant time by choosing a potential function that is linear in $|S|$. Since an insertion only increases $|S|$ by one, only a constant amount of work goes toward increasing the potential, so the amortized cost of an insertion is still constant. To output all the elements, just iterate through the array and output each one, which takes $O(|S|)$ time.
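A possible C++ sketch of this structure (identifiers are mine): `std::nth_element` stands in for the selection algorithm of Section 9.3 (it is only average-case linear, whereas SELECT from Section 9.3 is worst-case linear), and INSERT simply appends, with the vector performing the doubling.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Multiset supporting INSERT and DELETE-LARGER-HALF in O(m) total time
// for any sequence of m operations.
struct HalfSet {
    std::vector<int> a;  // unordered storage

    void insert(int x) { a.push_back(x); }  // amortized O(1)

    // Remove the largest ceil(|S|/2) elements, keeping the smallest floor(|S|/2).
    void deleteLargerHalf() {
        std::size_t keep = a.size() / 2;  // floor(|S|/2)
        if (keep < a.size()) {
            // Partition around the element of rank `keep` (0-indexed); the
            // `keep` smallest elements end up in positions 0..keep-1.
            std::nth_element(a.begin(), a.begin() + keep, a.end());
            a.resize(keep);
        }
    }

    const std::vector<int>& elements() const { return a; }  // output in O(|S|)
};
```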
[]
false
[]
17-17.4-1
17
17.4
17.4-1
docs/Chap17/17.4.md
Suppose that we wish to implement a dynamic, open-address hash table. Why might we consider the table to be full when its load factor reaches some value $\alpha$ that is strictly less than $1$? Describe briefly how to make insertion into a dynamic, open-address hash table run in such a way that the expected value of the amortized cost per insertion is $O(1)$. Why is the expected value of the actual cost per insertion not necessarily $O(1)$ for all insertions?
By Theorems 11.6-11.8, the expected cost of performing insertions and searches in an open-address hash table approaches infinity as the load factor approaches $1$; for any load factor bounded away from $1$, however, the expected time per operation is bounded by a constant. That is why we declare the table full at some $\alpha$ strictly less than $1$. To make the expected amortized cost of an insertion $O(1)$, use table doubling as in Section 17.4: when the load factor reaches $\alpha$, allocate a new table of twice the size and rehash every element into it. The expected value of the actual cost per insertion is not necessarily $O(1)$ for every insertion, because an individual insertion may trigger this expansion, which copies all of the current elements into the larger table and therefore takes time linear in the number of elements stored.
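The following C++ sketch is one possible illustration of this scheme (all names, the probing strategy, and the choice $\alpha = 1/2$ are assumptions, not part of the exercise): the table doubles whenever the load factor would reach $1/2$, so a single insertion can cost $\Theta(n)$, but the expected amortized cost per insertion stays $O(1)$.

```cpp
#include <cstddef>
#include <optional>
#include <vector>

// Dynamic open-address hash table with linear probing, load factor kept below 1/2.
class OpenAddressSet {
    std::vector<std::optional<long long>> slot;
    std::size_t used = 0;

    static std::size_t hashOf(long long key, std::size_t m) {
        return static_cast<std::size_t>(key) & (m - 1);  // m is a power of two
    }

    void rehash(std::size_t newSize) {  // Theta(n) actual cost when it happens
        std::vector<std::optional<long long>> old = std::move(slot);
        slot.assign(newSize, std::nullopt);
        used = 0;
        for (const auto& s : old)
            if (s) insertNoGrow(*s);
    }

    void insertNoGrow(long long key) {
        std::size_t i = hashOf(key, slot.size());
        while (slot[i]) {
            if (*slot[i] == key) return;       // already present
            i = (i + 1) & (slot.size() - 1);   // linear probing
        }
        slot[i] = key;
        ++used;
    }

public:
    OpenAddressSet() : slot(8, std::nullopt) {}

    void insert(long long key) {
        if (2 * (used + 1) > slot.size())  // keep load factor below 1/2
            rehash(2 * slot.size());
        insertNoGrow(key);
    }

    bool contains(long long key) const {
        std::size_t i = hashOf(key, slot.size());
        while (slot[i]) {
            if (*slot[i] == key) return true;
            i = (i + 1) & (slot.size() - 1);
        }
        return false;
    }
};
```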
[]
false
[]
17-17.4-2
17
17.4
17.4-2
docs/Chap17/17.4.md
Show that if $\alpha_{i - 1} \ge 1 / 2$ and the $i$th operation on a dynamic table is $\text{TABLE-DELETE}$, then the amortized cost of the operation with respect to the potential function $\text{(17.6)}$ is bounded above by a constant.
If $\alpha_{i - 1} \ge 1 / 2$, $\text{TABLE-DELETE}$ cannot **_contract_**, so $c_i = 1$ and $size_i = size_{i - 1}$. - **Case 1:** if $\alpha_i \ge 1 / 2$, $$ \begin{aligned} \hat c_i & = c_i + \Phi_i - \Phi_{i - 1} \\\\ & = 1 + (2 \cdot num_i - size_i) - (2 \cdot num_{i - 1} - size_{i - 1}) \\\\ & = 1 + (2 \cdot (num_{i - 1} - 1) - size_{i - 1}) - (2 \cdot num_{i - 1} - size_{i - 1}) \\\\ & = -1. \end{aligned} $$ - **Case 2:** if $\alpha_i < 1 / 2$, $$ \begin{aligned} \hat c_i & = c_i + \Phi_i - \Phi_{i - 1} \\\\ & = 1 + (size_i / 2 - num_i) - (2 \cdot num_{i - 1} - size_{i - 1}) \\\\ & = 1 + (size_{i - 1} / 2 - (num_{i - 1} - 1)) - (2 \cdot num_{i - 1} - size_{i - 1}) \\\\ & = 2 + \frac{3}{2} size_{i - 1} - 3 \cdot num_{i - 1} \\\\ & = 2 + \frac{3}{2} size_{i - 1} - 3\alpha_{i - 1} size_{i - 1} \\\\ & \le 2 + \frac{3}{2} size_{i - 1} - \frac{3}{2} size_{i - 1} \\\\ & = 2. \end{aligned} $$
[]
false
[]
17-17.4-3
17
17.4
17.4-3
docs/Chap17/17.4.md
Suppose that instead of contracting a table by halving its size when its load factor drops below $1 / 4$, we contract it by multiplying its size by $2 / 3$ when its load factor drops below $1 / 3$. Using the potential function $$\Phi(T) = | 2 \cdot T.num - T.size |,$$ show that the amortized cost of a $\text{TABLE-DELETE}$ that uses this strategy is bounded above by a constant.
If $1 / 3 < \alpha_i \le 1 / 2$, $$ \begin{aligned} \hat c_i & = c_i + \Phi_i - \Phi_{i - 1} \\\\ & = 1 + (size_i - 2 \cdot num_i) - (size_i - 2 \cdot (num_i + 1)) \\\\ & = 3. \end{aligned} $$ If the $i$th operation does trigger a contraction, $$ \begin{aligned} \frac{1}{3} size_{i - 1} & = num_i + 1 \\\\ size_{i - 1} & = 3 (num_i + 1) \\\\ size_{i} & = \frac{2}{3} size_{i - 1} = 2 (num_i + 1). \end{aligned} $$ $$ \begin{aligned} \hat c_i & = c_i + \Phi_i - \Phi_{i - 1} \\\\ & = (num_i + 1) + [2 \cdot (num_i + 1) - 2 \cdot num_i] - [3 \cdot (num_i + 1) - 2 \cdot (num_i + 1)] \\\\ & = 2. \end{aligned} $$
[]
false
[]
17-17-1
17
17-1
17-1
docs/Chap17/Problems/17-1.md
Chapter 30 examines an important algorithm called the fast Fourier transform, or $\text{FFT}$. The first step of the $\text{FFT}$ algorithm performs a **_bit-reversal permutation_** on an input array $A[0..n - 1]$ whose length is $n = 2^k$ for some nonnegative integer $k$. This permutation swaps elements whose indices have binary representations that are the reverse of each other. We can express each index $a$ as a $k$-bit sequence $\langle a_{k - 1}, a_{k - 2}, \ldots, a_0 \rangle$, where $a = \sum_{i = 0}^{k - 1} a_i 2^i$. We define $$\text{rev}\_k(\langle a_{k - 1}, a_{k - 2}, \ldots, a_0 \rangle) = \langle a_0, a_1, \ldots, a_{k - 1} \rangle;$$ thus, $$\text{rev}\_k(a) = \sum_{i = 0}^{k - 1} a_{k - i - 1} 2^i.$$ For example, if $n = 16$ (or, equivalently, $k = 4$), then $\text{rev}_k(3) = 12$, since the $4$-bit representation of $3$ is $0011$, which when reversed gives $1100$, the $4$-bit representation of $12$. **a.** Given a function $\text{rev}_k$ that runs in $\Theta(k)$ time, write an algorithm to perform the bit-reversal permutation on an array of length $n = 2^k$ in $O(nk)$ time. We can use an algorithm based on an amortized analysis to improve the running time of the bit-reversal permutation. We maintain a "bit-reversed counter" and a procedure $\text{BIT-REVERSED-INCREMENT}$ that, when given a bit-reversed-counter value $a$, produces $\text{rev}_k(\text{rev}_k(a) + 1)$. If $k = 4$, for example, and the bit-reversed counter starts at $0$, then successive calls to $\text{BIT-REVERSED-INCREMENT}$ produce the sequence $$0000, 1000, 0100, 1100, 0010, 1010, \ldots = 0, 8, 4, 12, 2, 10, \ldots.$$ **b.** Assume that the words in your computer store $k$-bit values and that in unit time, your computer can manipulate the binary values with operations such as shifting left or right by arbitrary amounts, bitwise-$\text{AND}$, bitwise-$\text{OR}$, etc. Describe an implementation of the $\text{BIT-REVERSED-INCREMENT}$ procedure that allows the bit-reversal permutation on an $n$-element array to be performed in a total of $O(n)$ time. **c.** Suppose that you can shift a word left or right by only one bit in unit time. Is it still possible to implement an $O(n)$-time bit-reversal permutation?
**a.** Initialize a second array of length $n$ to all trues. Then, going through the indices of the original array in any order, if the corresponding entry in the second array is true, swap the element at the current index with the element at the bit-reversed position, and set the entry in the second array corresponding to the bit-reversed index to false. Since we call $\text{rev}_k$, which takes $\Theta(k)$ time, at most $n$ times, the total runtime is $O(nk)$.

**b.** Doing a bit-reversed increment is the same thing as adding a one at the leftmost position, with all carries propagating to the right instead of the left.

```cpp
BIT-REVERSED-INCREMENT(a)
    let m be a 1 followed by k - 1 0s
    while m bitwise-AND a is not zero
        a = a bitwise-XOR m
        shift m right by 1
    return a bitwise-OR m
```

By an analysis symmetric to that of the binary counter (just look at the problem in a mirror), $\text{BIT-REVERSED-INCREMENT}$ takes constant amortized time. So, to perform the bit-reversal permutation, keep a normal binary counter and a bit-reversed counter; at each step, swap the array elements at the two positions the counters indicate, then increment both. Do not swap, however, if that pair of elements has already been swapped, which can be tracked in an auxiliary array. Since each of the $n$ steps takes amortized $O(1)$ time, the whole permutation takes $O(n)$ time.

**c.** Yes. The $\text{BIT-REVERSED-INCREMENT}$ procedure given in the previous part only shifts by a single bit at a time, never by arbitrary amounts, so the bit-reversal permutation still runs in $O(n)$ time.
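Here is a runnable C++ rendering of parts (a)-(b) as a hedged sketch (the function names and the trick of swapping only when the running index is smaller than its bit-reversed partner are my own choices): `bitReversedIncrement` mirrors the pseudocode above, and `bitReversalPermute` uses it to permute an array of length $n = 2^k$ in $O(n)$ amortized word operations.

```cpp
#include <cassert>
#include <cstdint>
#include <numeric>
#include <utility>
#include <vector>

// Bit-reversed increment for a k-bit counter: clear leading 1s (from the
// most significant bit down), then set the first 0 encountered.
std::uint32_t bitReversedIncrement(std::uint32_t a, int k) {
    std::uint32_t m = 1u << (k - 1);  // a 1 followed by k-1 zeros
    while (m & a) {
        a ^= m;   // clear this bit
        m >>= 1;  // the carry moves toward the low-order end
    }
    return a | m;  // m == 0 exactly when the counter wraps around to 0
}

// Apply the bit-reversal permutation: swap A[i] with A[rev_k(i)] once per pair.
void bitReversalPermute(std::vector<int>& A, int k) {
    std::uint32_t rev = 0;  // invariant: rev == rev_k(i)
    for (std::uint32_t i = 0; i < A.size(); ++i) {
        if (i < rev) std::swap(A[i], A[rev]);  // each pair is swapped only once
        rev = bitReversedIncrement(rev, k);
    }
}

int main() {
    int k = 3;
    std::vector<int> A(1 << k);
    std::iota(A.begin(), A.end(), 0);  // A = 0..7
    bitReversalPermute(A, k);
    std::vector<int> expected = {0, 4, 2, 6, 1, 5, 3, 7};
    assert(A == expected);
}
```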
[ { "lang": "cpp", "code": "BIT-REVERSED-INCREMENT(a)\n let m be a 1 followed by k - 1 0s\n while m bitwise-AND is not zero\n a = a bitwise-XOR m\n shift m right by 1\n m bitwise-OR a" } ]
false
[]
17-17-2
17
17-2
17-2
docs/Chap17/Problems/17-2.md
Binary search of a sorted array takes logarithmic search time, but the time to insert a new element is linear in the size of the array. We can improve the time for insertion by keeping several sorted arrays. Specifically, suppose that we wish to support $\text{SEARCH}$ and $\text{INSERT}$ on a set of $n$ elements. Let $k = \lceil \lg(n + 1) \rceil$, and let the binary representation of $n$ be $\langle n_{k - 1}, n_{k - 2}, \ldots, n_0 \rangle$. We have $k$ sorted arrays $A_0, A_1, \ldots, A_{k - 1}$, where for $i = 0, 1, \ldots, k - 1$, the length of array $A_i$ is $2^i$. Each array is either full or empty, depending on whether $n_i = 1$ or $n_i = 0$, respectively. The total number of elements held in all $k$ arrays is therefore $\sum_{i = 0}^{k - 1} n_i 2^i = n$. Although each individual array is sorted, elements in different arrays bear no particular relationship to each other. **a.** Describe how to perform the $\text{SEARCH}$ operation for this data structure. Analyze its worst-case running time. **b.** Describe how to perform the $\text{INSERT}$ operation. Analyze its worst-case and amortized running times. **c.** Discuss how to implement $\text{DELETE}$.
**a.** We linearly go through the lists and binary search each one, since we don't know the relationship between one list and another. In the worst case, every list is actually used. Since list $i$ has length $2^i$ and is sorted, we can search it in $O(i)$ time. Since $i$ varies from $0$ to $O(\lg n)$, the runtime of $\text{SEARCH}$ is $O(\lg^2 n)$.

**b.** To insert, we put the new element into $A_0$ and update the lists accordingly. In the worst case, we must combine lists $A_0, A_1, \dots, A_{m - 1}$ into list $A_m$. Since merging two sorted lists can be done in time linear in the total length of the lists, the time this takes is $O(2^m)$. In the worst case, this takes time $O(n)$, since $m$ could equal $k$. We'll use the accounting method to analyze the amortized cost. Assign a cost of $\lg n$ to each insertion. Thus, each item carries $\lg n$ credit to pay for its later merges as additional items are inserted. Since an individual item can only be merged into a larger list and there are only $\lg n$ lists, the credit pays for all future costs the item might incur. Thus, the amortized cost is $O(\lg n)$.

**c.** Find the smallest $m$ such that $n_m \ne 0$ in the binary representation of $n$. If the item to be deleted is not in list $A_m$, remove it from its list and swap in an item from $A_m$, arbitrarily. This can be done in $O(\lg n)$ time, since we may need to search a list to find the element to be deleted. Now simply break list $A_m$ into lists $A_0, A_1, \dots, A_{m - 1}$ by index. Since the lists are already sorted, the runtime comes entirely from making the splits, which takes $O(m)$ time. In the worst case, this is $O(\lg n)$.
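A C++ sketch of SEARCH and INSERT under the layout described above (the container shape and names are assumptions): `levels[i]` is either empty or holds exactly $2^i$ sorted keys, and INSERT merges upward exactly like adding $1$ to a binary counter.

```cpp
#include <algorithm>
#include <cstddef>
#include <iterator>
#include <vector>

// levels[i] is either empty or a sorted array of exactly 2^i keys.
using Levels = std::vector<std::vector<int>>;

// SEARCH: binary-search each nonempty level; O(lg^2 n) worst-case total.
bool contains(const Levels& levels, int key) {
    for (const std::vector<int>& arr : levels) {
        if (!arr.empty() && std::binary_search(arr.begin(), arr.end(), key))
            return true;
    }
    return false;
}

// INSERT: carry the new key upward while levels are full, like incrementing
// a binary counter. Worst case O(n), amortized O(lg n).
void insert(Levels& levels, int key) {
    std::vector<int> carry = {key};  // a sorted run of 2^i keys
    std::size_t i = 0;
    while (i < levels.size() && !levels[i].empty()) {
        std::vector<int> merged;
        std::merge(levels[i].begin(), levels[i].end(),
                   carry.begin(), carry.end(), std::back_inserter(merged));
        levels[i].clear();
        carry = std::move(merged);
        ++i;
    }
    if (i == levels.size()) levels.emplace_back();
    levels[i] = std::move(carry);
}
```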
[]
false
[]
17-17-3
17
17-3
17-3
docs/Chap17/Problems/17-3.md
Consider an ordinary binary search tree augmented by adding to each node $x$ the attribute $x.size$ giving the number of keys stored in the subtree rooted at $x$. Let $\alpha$ be a constant in the range $1 / 2 \le \alpha < 1$. We say that a given node $x$ is **_$\alpha$-balanced_** if $x.left.size \le \alpha \cdot x.size$ and $x.right.size \le \alpha \cdot x.size$. The tree as a whole is **_$\alpha$-balanced_** if every node in the tree is $\alpha$-balanced. The following amortized approach to maintaining weight-balanced trees was suggested by G. Varghese. **a.** A $1 / 2$-balanced tree is, in a sense, as balanced as it can be. Given a node $x$ in an arbitrary binary search tree, show how to rebuild the subtree rooted at $x$ so that it becomes $1 / 2$-balanced. Your algorithm should run in time $\Theta(x.size)$, and it can use $O(x.size)$ auxiliary storage. **b.** Show that performing a search in an $n$-node $\alpha$-balanced binary search tree takes $O(\lg n)$ worst-case time. For the remainder of this problem, assume that the constant $\alpha$ is strictly greater than $1 / 2$. Suppose that we implement $\text{INSERT}$ and $\text{DELETE}$ as usual for an $n$-node binary search tree, except that after every such operation, if any node in the tree is no longer $\alpha$-balanced, then we "rebuild" the subtree rooted at the highest such node in the tree so that it becomes $1 / 2$-balanced. We shall analyze this rebuilding scheme using the potential method. For a node $x$ in a binary search tree $T$, we define $$\Delta(x) = |x.left.size - x.right.size|,$$ and we define the potential of $T$ as $$\Phi(T) = c \sum_{x \in T: \Delta(x) \ge 2} \Delta(x),$$ where $c$ is a sufficiently large constant that depends on $\alpha$. **c.** Argue that any binary search tree has nonnegative potential and that a $1 / 2$-balanced tree has potential $0$. **d.** Suppose that $m$ units of potential can pay for rebuilding an $m$-node subtree. How large must $c$ be in terms of $\alpha$ in order for it to take $O(1)$ amortized time to rebuild a subtree that is not $\alpha$-balanced? **e.** Show that inserting a node into or deleting a node from an $n$-node $\alpha$-balanced tree costs $O(\lg n)$ amortized time.
**a.** Since we have $O(x.size)$ auxiliary space, we will take the tree rooted at $x$ and write down an inorder traversal of the tree into the extra space. This takes linear time because the traversal visits each node at most three times: once when passing to its left child, once when the node's value is output and we pass to the right child, and once when passing back up to the parent. Then, once the inorder traversal is written down, we can convert it back to a binary tree by selecting the median of the list to be the root and recursing on the two halves of the list that remain on either side. Since we can index into the middle element of a list in constant time, we have the recurrence $$T(n) = 2T(n / 2) + 1,$$ whose solution is linear. Since both trees come from the same underlying inorder traversal, the result is a $\text{BST}$ because the original was. Also, since the root at each point was selected so that half the elements are larger and half the elements are smaller, it is a $1 / 2$-balanced tree.

**b.** Suppose the tree is $\alpha$-balanced and has $n$ nodes. If $x$ is a node at depth $i$, then applying the $\alpha$-balance condition along the path from the root gives $$x.size \le \alpha^i \cdot n.$$ A node at depth equal to the height $h$ has $x.size \ge 1$, so $1 \le \alpha^h n$, i.e., $$h \le \log_{1 / \alpha} n = \frac{\lg n}{\lg(1 / \alpha)} = O(\lg n),$$ since $\alpha < 1$ is a constant. A search follows a simple path downward from the root, so it takes $O(h) = O(\lg n)$ worst-case time.

**c.** The potential function is a sum of terms $\Delta(x)$, each of which is the absolute value of a quantity, so, being a sum of nonnegative values, it is nonnegative regardless of the input $\text{BST}$. If the tree is $1 / 2$-balanced, then for every node $x$ we have $\Delta(x) \le 1$, so the sum defining the potential has no nonzero terms and the potential is $0$.

**d.** Suppose the subtree rooted at $x$ has $m$ nodes and is not $\alpha$-balanced, say $x.left.size > \alpha \cdot m$ (the other case is symmetric). For the rebuild to take $O(1)$ amortized time, the potential stored at $x$ must pay for the $m$ units of rebuilding work, i.e., we need $c \cdot \Delta(x) \ge m$. Now $$ \begin{aligned} \Delta(x) & = x.left.size - x.right.size \\\\ & \ge \alpha \cdot m - ((1 - \alpha) m - 1) \\\\ & = (2\alpha - 1)m + 1, \end{aligned} $$ so it suffices that $$c \ge \frac{m}{(2\alpha - 1)m + 1},$$ which holds for every $m$ provided $$c \ge \frac{1}{2\alpha - 1}$$ (note that $\alpha > 1 / 2$, so $2\alpha - 1 > 0$).

**e.** Suppose that our tree is $\alpha$-balanced. Then, we know that performing a search takes time $O(\lg n)$. So, we perform that search and insert the element that we need to insert or delete the element we found. Since we changed only one position, we have changed the $\Delta$ value only for the ancestors of the node that we inserted or deleted, and each of those $O(\lg n)$ values changes by at most $1$, so the insertion or deletion itself increases the potential by at most $O(\lg n)$. The tree may now have stopped being $\alpha$-balanced; if so, we can restore the balance properties by rebuilding at the unbalanced nodes, starting at the lowest such node and working up.
Since each rebuild takes only amortized constant time by part (d), and at most $O(\lg n)$ subtrees become unbalanced, the total time to rebalance every such subtree is $O(\lg n)$ amortized; together with the $O(\lg n)$ search and the $O(\lg n)$ potential increase caused by the insertion or deletion itself, the amortized cost of the whole operation is $O(\lg n)$.
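For part (a), here is a short C++ sketch (the node layout and all names are assumptions): flatten the subtree with an inorder walk, then rebuild by repeatedly taking the median, which yields a $1/2$-balanced tree in $\Theta(x.size)$ time using $O(x.size)$ extra space.

```cpp
#include <cstddef>
#include <vector>

struct Node {
    int key;
    Node* left = nullptr;
    Node* right = nullptr;
    std::size_t size = 1;  // number of nodes in this subtree
};

// Step 1: write the subtree's nodes into `out` in sorted (inorder) order.
void flatten(Node* x, std::vector<Node*>& out) {
    if (x == nullptr) return;
    flatten(x->left, out);
    out.push_back(x);
    flatten(x->right, out);
}

// Step 2: rebuild a 1/2-balanced BST from nodes[lo..hi) by rooting at the median.
Node* build(std::vector<Node*>& nodes, std::size_t lo, std::size_t hi) {
    if (lo >= hi) return nullptr;
    std::size_t mid = lo + (hi - lo) / 2;
    Node* root = nodes[mid];
    root->left = build(nodes, lo, mid);
    root->right = build(nodes, mid + 1, hi);
    root->size = hi - lo;
    return root;
}

// Rebuild the subtree rooted at x; returns the new subtree root.
Node* rebuildHalfBalanced(Node* x) {
    std::vector<Node*> nodes;
    flatten(x, nodes);
    return build(nodes, 0, nodes.size());
}
```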
[]
false
[]
17-17-4
17
17-4
17-4
docs/Chap17/Problems/17-4.md
There are four basic operations on red-black trees that perform **_structural modifications_**: node insertions, node deletions, rotations, and color changes. We have seen that $\text{RB-INSERT}$ and $\text{RB-DELETE}$ use only $O(1)$ rotations, node insertions, and node deletions to maintain the red-black properties, but they may make many more color changes. **a.** Describe a legal red-black tree with $n$ nodes such that calling $\text{RB-INSERT}$ to add the $(n + 1)$st node causes $\Omega(\lg n)$ color changes. Then describe a legal red-black tree with $n$ nodes for which calling $\text{RB-DELETE}$ on a particular node causes $\Omega(\lg n)$ color changes. Although the worst-case number of color changes per operation can be logarithmic, we shall prove that any sequence of $m$ $\text{RB-INSERT}$ and $\text{RB-DELETE}$ operations on an initially empty red-black tree causes $O(m)$ structural modifications in the worst case. Note that we count each color change as a structural modification. **b.** Some of the cases handled by the main loop of the code of both $\text{RB-INSERT-FIXUP}$ and $\text{RB-DELETE-FIXUP}$ are **_terminating_**: once encountered, they cause the loop to terminate after a constant number of additional operations. For each of the cases of $\text{RB-INSERT-FIXUP}$ and $\text{RB-DELETE-FIXUP}$, specify which are terminating and which are not. ($\textit{Hint:}$ Look at Figures 13.5, 13.6, and 13.7.) We shall first analyze the structural modifications when only insertions are performed. Let $T$ be a red-black tree, and define $\Phi(T)$ to be the number of red nodes in $T$. Assume that $1$ unit of potential can pay for the structural modifications performed by any of the three cases of $\text{RB-INSERT-FIXUP}$. **c.** Let $T'$ be the result of applying Case 1 of $\text{RB-INSERT-FIXUP}$ to $T$. Argue that $\Phi(T') = \Phi(T) - 1$. **d.** When we insert a node into a red-black tree using $\text{RB-INSERT}$, we can break the operation into three parts. List the structural modifications and potential changes resulting from lines 1–16 of $\text{RB-INSERT}$, from nonterminating cases of $\text{RB-INSERT-FIXUP}$, and from terminating cases of $\text{RB-INSERT-FIXUP}$. **e.** Using part (d), argue that the amortized number of structural modifications performed by any call of $\text{RB-INSERT}$ is $O(1)$. We now wish to prove that there are $O(m)$ structural modifications when there are both insertions and deletions. Let us define, for each node $x$, $$ w(x) = \begin{cases} 0 & \text{ if } x \text{ is red}, \\\\ 1 & \text{ if } x \text{ is black and has no red children}, \\\\ 0 & \text{ if } x \text{ is black and has one red children}, \\\\ 2 & \text{ if } x \text{ is black and has two red children}. \end{cases} $$ Now we redefine the potential of a red-black tree $T$ as $$\Phi(T) = \sum_{x \in T} w(x),$$ and let $T'$ be the tree that results from applying any nonterminating case of $\text{RB-INSERT-FIXUP}$ or $\text{RB-DELETE-FIXUP}$ to $T$. **f.** Show that $\Phi(T') \le \Phi(T) - 1$ for all nonterminating cases of $\text{RB-INSERT-FIXUP}$. Argue that the amortized number of structural modifications performed by any call of $\text{RB-INSERT-FIXUP}$ is $O(1)$. **g.** Show that $\Phi(T') \le \Phi(T) - 1$ for all nonterminating cases of $\text{RB-DELETE-FIXUP}$. Argue that the amortized number of structural modifications performed by any call of $\text{RB-DELETE-FIXUP}$ is $O(1)$. 
**h.** Complete the proof that in the worst case, any sequence of $m$ $\text{RB-INSERT}$ and $\text{RB-DELETE}$ operations performs $O(m)$ structural modifications.
**a.** If we insert a node into a complete binary search tree whose lowest level is all red, then there will be $\Omega(\lg n)$ instances of case 1 required to switch the colors all the way up the tree. If we delete a node from an all-black, complete binary tree then this also requires $\Omega(\lg n)$ time because there will be instances of case 2 at each iteration of the **while** loop.

**b.** For $\text{RB-INSERT}$, cases 2 and 3 are terminating. For $\text{RB-DELETE}$, cases 1 and 3 are terminating.

**c.** After applying case 1, $z$'s parent and uncle have been changed to black and $z$'s grandparent is changed to red. Thus, there is a net loss of one red node, so $\Phi(T') = \Phi(T) − 1$.

**d.** For case 1, there is a single decrease in the number of red nodes, and thus a decrease in the potential function. However, a single call to $\text{RB-INSERT-FIXUP}$ could result in $\Omega(\lg n)$ instances of case 1. For cases 2 and 3, the colors stay the same and each performs a rotation.

**e.** Since each instance of case 1 requires a specific node to be red, it can't decrease the number of red nodes by more than $\Phi(T)$. Therefore the potential function is always non-negative. Any insert can increase the number of red nodes by at most $1$, and one unit of potential can pay for any structural modifications of any of the 3 cases. Note that in the worst case, the call to $\text{RB-INSERT}$ has to perform $k$ case-1 operations, where $k$ is equal to $\Phi(T_i) − \Phi(T_{i − 1})$. Thus, the total amortized cost is bounded above by $2(\Phi(T_n) − \Phi(T_0)) \le n$, so the amortized cost of each insert is $O(1)$.

**f.** In case 1 of $\text{RB-INSERT}$, we reduce the number of black nodes with two red children by $1$ and we at most increase the number of black nodes with no red children by $1$, leaving a net loss of at most $1$ to the potential function. In our new potential function, $\Phi(T_n) − \Phi(T_0) \le n$. Since one unit of potential pays for each operation and the terminating cases cause constant structural changes, the total amortized cost is $O(n)$ making the amortized cost of each $\text{RB-INSERT-FIXUP}$ $O(1)$.

**g.** In case 2 of $\text{RB-DELETE}$, we reduce the number of black nodes with two red children by $1$, thereby reducing the potential function by $2$. Since the change in potential is at least negative $1$, it pays for the structural modifications. Since the other cases cause constant structural changes, the total amortized cost is $O(n)$ making the amortized cost of each $\text{RB-DELETE-FIXUP}$ $O(1)$.

**h.** As described above, whether we insert or delete in any of the cases, the potential function always pays for the changes made if they're nonterminating. If they're terminating then they already take constant time, so the amortized cost of any operation in a sequence of $m$ inserts and deletes is $O(1)$, making the total amortized cost $O(m)$.
[]
false
[]
17-17-5
17
17-5
17-5
docs/Chap17/Problems/17-5.md
A **_self-organizing list_** is a linked list of $n$ elements, in which each element has a unique key. When we search for an element in the list, we are given a key, and we want to find an element with that key. A self-organizing list has two important properties: 1. To find an element in the list, given its key, we must traverse the list from the beginning until we encounter the element with the given key. If that element is the $k$th element from the start of the list, then the cost to find the element is $k$. 2. We may reorder the list elements after any operation, according to a given rule with a given cost. We may choose any heuristic we like to decide how to reorder the list. Assume that we start with a given list of $n$ elements, and we are given an access sequence $\sigma = \langle \sigma_1, \sigma_2, \ldots, \sigma_m \rangle$ of keys to find, in order. The cost of the sequence is the sum of the costs of the individual accesses in the sequence. Out of the various possible ways to reorder the list after an operation, this problem focuses on transposing adjacent list elements-switching their positions in the list—with a unit cost for each transpose operation. You will show, by means of a potential function, that a particular heuristic for reordering the list, move-to-front, entails a total cost no worse than $4$ times that of any other heuristic for maintaining the list order—even if the other heuristic knows the access sequence in advance! We call this type of analysis a **_competitive analysis_**. For a heuristic $H$ and a given initial ordering of the list, denote the access cost of sequence $\sigma$ by $C_H(\sigma)$ Let $m$ be the number of accesses in $\sigma$. **a.** Argue that if heuristic $\text H$ does not know the access sequence in advance, then the worst-case cost for $\text H$ on an access sequence $\sigma$ is $C_H(\sigma) = \Omega(mn)$. With the **_move-to-front_** heuristic, immediately after searching for an element $x$, we move $x$ to the first position on the list (i.e., the front of the list). Let $\text{rank}\_L(x)$ denote the rank of element $x$ in list $L$, that is, the position of $x$ in list $L$. For example, if $x$ is the fourth element in $L$, then $\text{rank}\_L(x) = 4$. Let $c_i$ denote the cost of access $\sigma_i$ using the move-to-front heuristic, which includes the cost of finding the element in the list and the cost of moving it to the front of the list by a series of transpositions of adjacent list elements. **b.** Show that if $\sigma_i$ accesses element $x$ in list $L$ using the move-to-front heuristic, then $c_i = 2 \cdot \text{rank}\_L(x) - 1$. Now we compare move-to-front with any other heuristic $\text H$ that processes an access sequence according to the two properties above. Heuristic $\text H$ may transpose elements in the list in any way it wants, and it might even know the entire access sequence in advance. Let $L_i$ be the list after access $\sigma_i$ using move-to-front, and let $L_i^\*$ be the list after access $\sigma_i$ using heuristic $\text H$. We denote the cost of access $\sigma_i$ by $c_i$ for move-to-front and by $c_i^\*$ for heuristic $\text H$. Suppose that heuristic $\text H$ performs $t_i^\*$ transpositions during access $\sigma_i$. **c.** In part (b), you showed that $c_i = 2 \cdot \text{rank}\_{L_{i - 1}}(x) - 1$. Now show that $c_i^\* = \text{rank}\_{L_{i - 1}^\*}(x) + t_i^\*$. 
We define an **_inversion_** in list $L_i$ as a pair of elements $y$ and $z$ such that $y$ precedes $z$ in $L_i$ and $z$ precedes $y$ in list $L_i^\*$. Suppose that list $L_i$ has $q_i$ inversions after processing the access sequence $\langle \sigma_1, \sigma_2, \ldots, \sigma_i \rangle$. Then, we define a potential function $\Phi$ that maps $L_i$ to a real number by $\Phi(L_i) = 2q_i$. For example, if $L_i$ has the elements $\langle e, c, a, d, b \rangle$ and $L_i^\*$ has the elements $\langle c, a, b, d, e \rangle$, then $L_i$ has 5 inversions $((e, c), (e, a), (e, d), (e, b), (d, b))$, and so $\Phi(L_i) = 10$. Observe that $\Phi(L_i) \ge 0$ for all $i$ and that, if move-to-front and heuristic $\text H$ start with the same list $L_0$, then $\Phi(L_0) = 0$. **d.** Argue that a transposition either increases the potential by $2$ or decreases the potential by $2$. Suppose that access $\sigma_i$ finds the element $x$. To understand how the potential changes due to $\sigma_i$, let us partition the elements other than $x$ into four sets, depending on where they are in the lists just before the $i$th access: - Set $A$ consists of elements that precede $x$ in both $L_{i - 1}$ and $L_{i - 1}^\*$. - Set $B$ consists of elements that precede $x$ in $L_{i - 1}$ and follow $x$ in $L_{i - 1}^\*$. - Set $C$ consists of elements that follow $x$ in $L_{i - 1}$ and precede $x$ in $L_{i - 1}^\*$. - Set $D$ consists of elements that follow $x$ in both $L_{i - 1}$ and $L_{i - 1}^\*$. **e.** Argue that $\text{rank}\_{L_{i - 1}}(x) = |A| + |B| + 1$ and $\text{rank}\_{L_{i - 1}^\*}(x) = |A| + |C| + 1$. **f.** Show that access $\sigma_i$ causes a change in potential of $$\Phi(L_i) - \Phi(L_{i - 1}) \le 2(|A| - |B| + t_i^*),$$ where, as before, heuristic $\text H$ performs $t_i^\*$ transpositions during access $\sigma_i$. Define the amortized cost $\hat c_i$ of access $\sigma_i$ by $\hat c_i = c_i + \Phi(L_i) - \Phi(L_{i - 1})$. **g.** Show that the amortized cost $\hat c_i$ of access $\sigma_i$ is bounded from above by $4c_i^\*$. **h.** Conclude that the cost $C_{\text{MTF}}(\sigma)$ of access sequence $\sigma$ with move-to-front is at most $4$ times the cost $C_H(\sigma)$ of $\sigma$ withany other heuristic $\text H$, assuming that both heuristics start with the same list.
**a.** Since the heuristic is picked in advance, given the sequence of requests made so far, we can simulate what ordering the heuristic will produce and then pick our next request to be whatever element will have ended up in the last position of the list. Continuing until all $m$ requests have been made, the cost of this access sequence is $mn$, so $C_H(\sigma) = \Omega(mn)$.

**b.** The cost of finding an element is $\text{rank}\_L(x)$, and since it needs to be transposed with each of the elements before it, of which there are $\text{rank}\_L(x) - 1$, the total cost is $2 \cdot \text{rank}\_L(x) - 1$.

**c.** Regardless of the heuristic used, we first need to locate the element, which sits wherever it was left after the previous step, costing $\text{rank}\_{L_{i - 1}^\*}(x)$. After that, by definition, heuristic $\text H$ makes $t_i^\*$ transpositions, so $c^\*\_i = \text{rank}\_{L_{i - 1}^\*}(x) + t_i^\*$.

**d.** Suppose we transpose adjacent elements $y$ and $z$, where $y$ is to the left. There are two cases. If $y$ precedes $z$ in $L_i^\*$, then the transposition has just created an inversion, increasing the potential by $2$. If $z$ precedes $y$ in $L_i^\*$, then the transposition has just removed an inversion, decreasing the potential by $2$. In both cases, whether or not there is an inversion between $y$ or $z$ and any other element is unchanged, since the transposition only changed the relative order of those two elements.

**e.** By definition, $A$ and $B$ are the only two of the four sets containing elements that precede $x$ in $L_{i - 1}$; since there are $|A| + |B|$ elements preceding it, its rank in $L_{i - 1}$ is $|A| + |B| + 1$. Similarly, the two sets containing elements that precede $x$ in $L^\*\_{i - 1}$ are $A$ and $C$, so in $L^\*\_{i - 1}$, $x$ has rank $|A| + |C| + 1$.

**f.** From part (d), a transposition increases the potential by $2$ when it puts a pair of elements out of their relative order in $L_i^\*$ and decreases it by $2$ when it puts a pair into that order. In particular, the $t_i^\*$ transpositions performed by heuristic $\text H$ increase the potential by at most $2 t_i^\*$. Beyond those, we must account for the inversions created or destroyed when move-to-front moves $x$ to the front of the list. Since this move only changes the relative order of $x$ with respect to the elements that preceded it in $L_{i - 1}$, only the sets $A$ and $B$ matter. For an element of $A$, moving $x$ in front of it creates an inversion, since that element precedes $x$ in $L^\*\_{i - 1}$; for an element of $B$, moving $x$ in front of it removes an inversion. Hence $$\Phi(L_i) - \Phi(L_{i - 1}) \le 2(|A| - |B| + t_i^\*).$$

**g.** $$ \begin{aligned} \hat c_i & \le 2(|A| + |B| + 1) - 1 + 2(|A| - |B| + t_i^\*) \\\\ & = 4|A| + 1 + 2 t_i^\* \\\\ & \le 4(|A| + |C| + 1 + t_i^\*) \\\\ & = 4 c_i^\*. \end{aligned} $$

**h.** We showed that the amortized cost of each access under the move-to-front heuristic is at most four times the cost of that access under any other heuristic. Since the potential is always nonnegative and starts at $0$, the total (real) cost is at most the total amortized cost, so $$C_{\text{MTF}}(\sigma) \le \sum_{i = 1}^m \hat c_i \le 4 \sum_{i = 1}^m c_i^\* = 4 C_H(\sigma).$$
[]
false
[]
18-18.1-1
18
18.1
18.1-1
docs/Chap18/18.1.md
Why don't we allow a minimum degree of $t = 1$?
According to the definition, minimum degree $t$ means every node other than the root must have at least $t - 1$ keys, and every internal node other than the root thus has at least $t$ children. So, when $t = 1$, every node other than the root would be allowed to have $t - 1 = 0$ keys, and every internal node other than the root would be allowed to have just $t = 1$ child. A node with no keys (or an internal node with a single child) stores no information and does not divide the key range, so the tree could grow arbitrarily tall without holding any additional keys, and the height bound $h \le \log_t \frac{n + 1}{2}$ is meaningless for $t = 1$. That is why the minimum degree must be at least $2$.
[]
false
[]
18-18.1-2
18
18.1
18.1-2
docs/Chap18/18.1.md
For what values of $t$ is the tree of Figure 18.1 a legal B-tree?
According to property 5 of B-tree, every node other than the root must have at least $t - 1$ keys and may contain at most $2t - 1$ keys. In Figure 18.1, the number of keys of each node (except the root) is either $2$ or $3$. So to make it a legal B-tree, we need to guarantee that $t - 1 \le 2 \text{ and } 2 t - 1 \ge 3$, which yields $2 \le t \le 3$. So $t$ can be $2$ or $3$.
[]
false
[]
18-18.1-3
18
18.1
18.1-3
docs/Chap18/18.1.md
Show all legal B-trees of minimum degree $2$ that represent $\\{1, 2, 3, 4, 5\\}$.
We know that every node except the root must have at least $t - 1 = 1$ key, and every node may contain at most $2t - 1 = 3$ keys. Also remember that all the leaves stay at the same depth. In particular, a single node holding all five keys is not legal, since $5 > 3$. Thus, there are $4$ possible legal B-trees:

- $$| 2 |$$
  $$\swarrow \quad \searrow$$
  $$| 1 | \qquad\qquad | 3, 4, 5 |$$

- $$| 3 |$$
  $$\swarrow \quad \searrow$$
  $$| 1, 2 | \qquad\qquad | 4, 5 |$$

- $$| 4 |$$
  $$\swarrow \quad \searrow$$
  $$| 1, 2, 3 | \qquad\qquad | 5 |$$

- $$| 2, 4 |$$
  $$\swarrow \quad \downarrow \quad \searrow$$
  $$| 1 | \qquad | 3 | \qquad | 5 |$$
[]
false
[]
18-18.1-4
18
18.1
18.1-4
docs/Chap18/18.1.md
As a function of the minimum degree $t$, what is the maximum number of keys that can be stored in a B-tree of height $h$?
$$ \begin{aligned} n & = (1 + 2t + (2t) ^ 2 + \cdots + (2t) ^ {h}) \cdot (2t - 1) \\\\ & = (2t)^{h + 1} - 1. \end{aligned} $$
[]
false
[]
18-18.1-5
18
18.1
18.1-5
docs/Chap18/18.1.md
Describe the data structure that would result if each black node in a red-black tree were to absorb its red children, incorporating their children with its own.
After absorbing each red node into its black parent, each black node may contain $1$, $2$ ($1$ red child), or $3$ ($2$ red children) keys, and all leaves of the resulting tree have the same depth, according to property 5 of red-black trees (for each node, all paths from the node to descendant leaves contain the same number of black nodes). Therefore, a red-black tree will become a B-tree with minimum degree $t = 2$, i.e., a 2-3-4 tree.
[]
false
[]
18-18.2-1
18
18.2
18.2-1
docs/Chap18/18.2.md
Show the results of inserting the keys $$F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, D, Z, E$$ in order into an empty B-tree with minimum degree $2$. Draw only the configurations of the tree just before some node must split, and also draw the final configuration.
(Omit!)
[]
false
[]
18-18.2-2
18
18.2
18.2-2
docs/Chap18/18.2.md
Explain under what circumstances, if any, redundant $\text{DISK-READ}$ or $\text{DISK-WRITE}$ operations occur during the course of executing a call to $\text{B-TREE-INSERT}$. (A redundant $\text{DISK-READ}$ is a $\text{DISK-READ}$ for a page that is already in memory. A redundant $\text{DISK-WRITE}$ writes to disk a page of information that is identical to what is already stored there.)
In order to insert the key into a full child node but without its parent being full, we need the following operations:

- $\text{DISK-READ}$: Key placement
- $\text{DISK-WRITE}$: Split nodes
- $\text{DISK-READ}$: Get to the parent
- $\text{DISK-WRITE}$: Fill parent

If both were full, we'd have to do the same, but instead of the final step, repeat the above to split the parent node and write into the child nodes. With both considerations in mind, there should never be a redundant $\text{DISK-READ}$ or $\text{DISK-WRITE}$ on a $\text{B-TREE-INSERT}$.
[]
false
[]
18-18.2-3
18
18.2
18.2-3
docs/Chap18/18.2.md
Explain how to find the minimum key stored in a B-tree and how to find the predecessor of a given key stored in a B-tree.
- Finding the minimum in a B-tree is quite similar to finding a minimum in a binary search tree. We need to find the left most leaf for the given root, and return the first key.
  - **PRE:** $x$ is a node on the B-tree $T$. The top level call is $\text{B-TREE-FIND-MIN}(T.root)$.
  - **POST:** $\text{FCTVAL}$ is the minimum key stored in the subtree rooted at $x$.

  ```cpp
  B-TREE-FIND-MIN(x)
      if x == NIL            // T is empty
          return NIL
      else if x.leaf         // x is leaf
          return x.key[1]    // return the minimum key of x
      else
          DISK-READ(x.c[1])
          return B-TREE-FIND-MIN(x.c[1])
  ```

- Finding the predecessor of a given key $x.key_i$ is according to the following rules:
  1. If $x$ is not a leaf, return the maximum key in the $i$-th child of $x$, which is also the maximum key of the subtree rooted at $x.c_i$.
  2. If $x$ is a leaf and $i > 1$, return the $(i - 1)$st key of $x$, i.e., $x.key_{i - 1}$.
  3. Otherwise, look for the last node $y$ (from the bottom up) and $j > 0$, such that $x.key_i$ is the leftmost key in $y.c_j$; if $j = 1$, return $\text{NIL}$ since $x.key_i$ is the minimum key in the tree; otherwise we return $y.key_{j - 1}$.
  - **PRE:** $x$ is a node on the B-tree $T$. $i$ is the index of the key.
  - **POST:** $\text{FCTVAL}$ is the predecessor of $x.key_i$.

  ```cpp
  B-TREE-FIND-PREDECESSOR(x, i)
      if !x.leaf
          DISK-READ(x.c[i])
          return B-TREE-FIND-MAX(x.c[i])
      else if i > 1              // x is a leaf and i > 1
          return x.key[i - 1]
      else
          z = x
          while true
              if z.p == NIL      // z is root
                  return NIL     // z.key[i] is the minimum key in T; no predecessor
              y = z.p
              j = 1
              DISK-READ(y.c[1])
              while y.c[j] != z
                  j = j + 1
                  DISK-READ(y.c[j])
              if j == 1
                  z = y
              else
                  return y.key[j - 1]
  ```

- Finding the maximum is symmetric to finding the minimum.
  - **PRE:** $x$ is a node on the B-tree $T$. The top level call is $\text{B-TREE-FIND-MAX}(T.root)$.
  - **POST:** $\text{FCTVAL}$ is the maximum key stored in the subtree rooted at $x$.

  ```cpp
  B-TREE-FIND-MAX(x)
      if x == NIL              // T is empty
          return NIL
      else if x.leaf           // x is leaf
          return x.key[x.n]    // return the maximum key of x
      else
          DISK-READ(x.c[x.n + 1])
          return B-TREE-FIND-MAX(x.c[x.n + 1])
  ```
[ { "lang": "cpp", "code": " B-TREE-FIND-MIN(x)\n if x == NIL // T is empty\n return NIL\n else if x.leaf // x is leaf\n return x.key[1] // return the minimum key of x\n else\n DISK-READ(x.c[1])\n return B-TREE-FIND-MIN(x.c[1])" }, { "lang": "cpp", "code": " B-TREE-FIND-PREDECESSOR(x, i)\n if !x.leaf\n DISK-READ(x.c[i])\n return B-TREE-FIND-MAX(x.c[i])\n else if i > 1 // x is a leaf and i > 1\n return x.key[i - 1]\n else\n z = x\n while true\n if z.p == NIL // z is root\n return NIL // z.key[i] is the minimum key in T; no predecessor\n y = z.p\n j = 1\n DISK-READ(y.c[1])\n while y.c[j] != x\n j = j + 1\n DISK-READ(y.c[j])\n if j == 1\n z = y\n else\n return y.key[j - 1]" }, { "lang": "cpp", "code": " B-TREE-FIND-MAX(x)\n if x == NIL // T is empty\n return NIL\n else if x.leaf // x is leaf\n return x.[x.n] // return the maximum key of x\n else\n DISK-READ(x.c[x.n + 1])\n return B-TREE-FIND-MIN(x.c[x.n + 1])" } ]
false
[]
18-18.2-4
18
18.2
18.2-4 $\star$
docs/Chap18/18.2.md
Suppose that we insert the keys $\\{1, 2, \ldots, n\\}$ into an empty B-tree with minimum degree 2. How many nodes does the final B-tree have?
The final tree can have as many as $n - 1$ nodes. Unless $n = 1$ there cannot ever be $n$ nodes, since we only ever insert a key into a non-empty node, so there will always be at least one node with $2$ keys. Next observe that we will never have more than one key in a node which is not on the right spine of our B-tree. This is because every key we insert is larger than all keys stored in the tree, so it will be inserted into the right spine of the tree. Nodes not on the right spine are a result of splits, and since $t = 2$, every split results in child nodes with one key each. The fewest possible number of nodes occurs when every node in the right spine has $3$ keys. In this case, $n = 2h + 2^{h + 1} - 1$ where $h$ is the height of the B-tree, and the number of nodes is $2^{h + 1} - 1$. Asymptotically these are the same, so the number of nodes is $\Theta(n)$.
[]
false
[]
18-18.2-5
18
18.2
18.2-5
docs/Chap18/18.2.md
Since leaf nodes require no pointers to children, they could conceivably use a different (larger) $t$ value than internal nodes for the same disk page size. Show how to modify the procedures for creating and inserting into a B-tree to handle this variation.
You would modify the insertion procedure by, in $\text{B-TREE-INSERT}$, checking whether the node is a leaf, and if it is, only splitting it when there are twice as many keys stored as an internal node would be allowed to hold (that is, using the larger leaf-specific value of $t$ to decide fullness). Also, if an element needs to be inserted into a full leaf, we would split the leaf into two separate leaves, each of which doesn't have too many keys stored in it.
[]
false
[]
18-18.2-6
18
18.2
18.2-6
docs/Chap18/18.2.md
Suppose that we were to implement $\text{B-TREE-SEARCH}$ to use binary search rather than linear search within each node. Show that this change makes the CPU time required $O(\lg n)$, independently of how $t$ might be chosen as a function of $n$.
As in the $\text{TREE-SEARCH}$ procedure for binary search trees, the nodes encountered during the recursion form a simple path downward from the root of the tree. Thus, the $\text{B-TREE-SEARCH}$ procedure visits $O(h) = O(\log_t n)$ nodes along the path, where $h$ is the height of the B-tree and $n$ is the number of keys in the B-tree, and we know that $h \le \log_t \frac{n + 1}{2}$. Since the number of keys in each node is at most $2t - 1$, a binary search within each node takes $O(\lg t)$ time. So the total time is $$ \begin{aligned} O(\lg t \cdot \log_t n) & = O(\lg t \cdot \frac{\lg n}{\lg t}) & \text{by changing the base of the logarithm} \\\\ & = O(\lg n). \end{aligned} $$ Thus, the CPU time required is $O(\lg n)$, independently of how $t$ is chosen as a function of $n$.
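A sketch in C++ of the within-node binary search (the node layout and names are assumptions, not the book's): replacing the linear scan of $\text{B-TREE-SEARCH}$ by `std::lower_bound` over the node's keys gives $O(\lg t)$ work per node and hence $O(\lg n)$ CPU time overall.

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

struct BTreeNode {
    std::vector<int> keys;             // sorted, at most 2t - 1 keys
    std::vector<BTreeNode*> children;  // empty if the node is a leaf
    bool leaf() const { return children.empty(); }
};

// Returns (node, index) of the key, or (nullptr, 0) if it is absent.
std::pair<const BTreeNode*, std::size_t> bTreeSearch(const BTreeNode* x, int k) {
    while (x != nullptr) {
        // O(lg t) binary search instead of the linear scan in B-TREE-SEARCH.
        auto it = std::lower_bound(x->keys.begin(), x->keys.end(), k);
        std::size_t i = static_cast<std::size_t>(it - x->keys.begin());
        if (it != x->keys.end() && *it == k)
            return {x, i};
        if (x->leaf())
            return {nullptr, 0};
        x = x->children[i];  // corresponds to DISK-READ(x.c[i+1]) in the book
    }
    return {nullptr, 0};
}
```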
[]
false
[]
18-18.2-7
18
18.2
18.2-7
docs/Chap18/18.2.md
Suppose that disk hardware allows us to choose the size of a disk page arbitrarily, but that the time it takes to read the disk page is $a + bt$, where $a$ and $b$ are specified constants and $t$ is the minimum degree for a B-tree using pages of the selected size. Describe how to choose $t$ so as to minimize (approximately) the B-tree search time. Suggest an optimal value of $t$ for the case in which $a = 5$ milliseconds and $b = 10$ microseconds.
A search costs about $\log_t n$ page reads, each taking $a + bt$, so we want to minimize $$\log_t n \cdot (a + bt) = \lg n \cdot \frac{a + bt}{\lg t} \propto \frac{a + bt}{\ln t}.$$ $$\frac{\partial}{\partial t} \left( \frac{a + bt}{\ln t} \right) = - \frac{a + bt - bt \ln t}{t \ln^2 t},$$ so the minimum is attained where $$a + bt = bt \ln t,$$ whose solution is $$t = e^{W \left( \frac{a}{be} \right) + 1},$$ where $W$ is the Lambert W function. With $a = 5$ milliseconds $= 5000$ microseconds and $b = 10$ microseconds, the equation becomes $5000 + 10t = 10t \ln t$, i.e., $t(\ln t - 1) = 500$, giving $t = e^{W(500 / e) + 1} \approx 129$, so we should choose $t \approx 130$.
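A small C++ check of this calculation (purely illustrative; the search range of $t$ is an arbitrary assumption): it evaluates the per-search cost factor $(a + bt) / \ln t$ in microseconds for $a = 5$ ms and $b = 10$ $\mu$s and reports the integer $t$ that minimizes it.

```cpp
#include <cmath>
#include <iostream>

int main() {
    const double a = 5000.0;  // 5 milliseconds, expressed in microseconds
    const double b = 10.0;    // 10 microseconds per unit of t
    int bestT = 2;
    double bestCost = (a + b * bestT) / std::log(bestT);
    for (int t = 3; t <= 1000; ++t) {
        double cost = (a + b * t) / std::log(t);  // proportional to search time
        if (cost < bestCost) {
            bestCost = cost;
            bestT = t;
        }
    }
    std::cout << "optimal t ~ " << bestT << '\n';  // prints 129 with these constants
}
```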
[]
false
[]
18-18.3-1
18
18.3
18.3-1
docs/Chap18/18.3.md
Show the results of deleting $C$, $P$, and $V$, in order, from the tree of Figure 18.8(f).
- Figure 18.8(f) ![](../img/18.3-1-1.png)
- delete $C$ ![](../img/18.3-1-2.png)
- delete $P$ ![](../img/18.3-1-3.png)
- delete $V$ ![](../img/18.3-1-4.png)
[]
true
[ "../img/18.3-1-1.png", "../img/18.3-1-2.png", "../img/18.3-1-3.png", "../img/18.3-1-4.png" ]
18-18.3-2
18
18.3
18.3-2
docs/Chap18/18.3.md
Write pseudocode for $\text{B-TREE-DELETE}$.
The algorithm $\text{B-TREE-DELETE}(x, k)$ is a recursive procedure which deletes key $k$ from the B-tree rooted at node $x$, following the case analysis of Section 18.3: if $k$ is in a leaf $x$, it is simply removed; if $k$ is in an internal node $x$, it is replaced by its predecessor or successor (computed by the functions $\text{PREDECESSOR}(k, x)$ and $\text{SUCCESSOR}(k, x)$, which return the predecessor and successor of $k$ in the B-tree rooted at $x$), whichever lies in a child with at least $t$ keys, and that key is then deleted recursively; if both children adjacent to $k$ have only $t - 1$ keys, they are merged together with $k$ and the deletion recurses into the merged node; finally, if $k$ is not in the internal node $x$, the procedure descends into the child that must contain $k$, first giving that child at least $t$ keys by borrowing a key from an immediate sibling or merging it with one. The full pseudocode is omitted because it is already unwieldy; the cases where $k$ is the last key in a node are handled symmetrically, simply using the left sibling as opposed to the right sibling and making the appropriate modifications to the indexing in the **for** loops.
[]
false
[]
18-18-1
18
18-1
18-1
docs/Chap18/Problems/18-1.md
Consider implementing a stack in a computer that has a relatively small amount of fast primary memory and a relatively large amount of slower disk storage. The operations $\text{PUSH}$ and $\text{POP}$ work on single-word values. The stack we wish to support can grow to be much larger than can fit in memory, and thus most of it must be stored on disk. A simple, but inefficient, stack implementation keeps the entire stack on disk. We maintain in - memory a stack pointer, which is the disk address of the top element on the stack. If the pointer has value $p$, the top element is the $(p \mod m)$th word on page $\lfloor p / m \rfloor$ of the disk, where $m$ is the number of words per page. To implement the $\text{PUSH}$ operation, we increment the stack pointer, read the appropriate page into memory from disk, copy the element to be pushed to the appropriate word on the page, and write the page back to disk. A $\text{POP}$ operation is similar. We decrement the stack pointer, read in the appropriate page from disk, and return the top of the stack. We need not write back the page, since it was not modified. Because disk operations are relatively expensive, we count two costs for any implementation: the total number of disk accesses and the total CPU time. Any disk access to a page of $m$ words incurs charges of one disk access and $\Theta(m)$ CPU time. **a.** Asymptotically, what is the worst-case number of disk accesses for $n$ stack operations using this simple implementation? What is the CPU time for $n$ stack operations? (Express your answer in terms of $m$ and $n$ for this and subsequent parts.) Now consider a stack implementation in which we keep one page of the stack in memory. (We also maintain a small amount of memory to keep track of which page is currently in memory.) We can perform a stack operation only if the relevant disk page resides in memory. If necessary, we can write the page currently in memory to the disk and read in the new page from the disk to memory. If the relevant disk page is already in memory, then no disk accesses are required. **b.** What is the worst-case number of disk accesses required for $n$ $\text{PUSH}$ operations? What is the CPU time? **c.** What is the worst-case number of disk accesses required for $n$ stack operations? What is the CPU time? Suppose that we now implement the stack by keeping two pages in memory (in addition to a small number of words for bookkeeping). **d.** Describe how to manage the stack pages so that the amortized number of disk accesses for any stack operation is $O(1 / m)$ and the amortized CPU time for any stack operation is $O(1)$.
**a.** We will have to make a disk access for each stack operation, so $n$ operations require $\Theta(n)$ disk accesses in the worst case. Since each of these disk operations takes $\Theta(m)$ CPU time, the CPU time for $n$ stack operations is $\Theta(mn)$.

**b.** Since only every $m$th push starts a new page, the number of disk operations is approximately $n / m$, and the CPU runtime is $\Theta(n)$, since both the contribution from the cost of the disk accesses and the actual running of the push operations are $\Theta(n)$.

**c.** If we make a sequence of pushes until it just spills over onto the second page, then alternate popping and pushing many times, each of those pops and pushes triggers a disk access, so the asymptotic number of disk accesses and the CPU time are of the same order as in part (a).

**d.** We define the potential of the stack to be the absolute value of the difference between the current size of the stack and the most recently passed multiple of $m$. This potential function means that the initial stack, which has size $0$ (a multiple of $m$), has potential zero. Also, each stack operation either increases or decreases the potential by one. For us to have to load a new page from disk and write an old one to disk, we would need to be at least $m$ positions away from the most recently visited multiple of $m$, because we would have had to just cross a page boundary. This loading and storing of a page costs one disk access and $\Theta(m)$ (real) CPU time, but it coincides with a drop in the potential function of order $\Theta(m)$. So, the amortized cost per stack operation is $O(1)$ CPU time and $O(1 / m)$ disk accesses.
[]
false
[]
18-18-2
18
18-2
18-2
docs/Chap18/Problems/18-2.md
The **_join_** operation takes two dynamic sets $S'$ and $S''$ and an element $x$ such that for any $x' \in S'$ and $x'' \in S''$, we have $x'.key < x.key < x''.key$. It returns a set $S = S' \cup \\{x\\} \cup S''$. The **_split_** operation is like an "inverse" join: given a dynamic set $S$ and an element $x \in S$, it creates a set $S'$ that consists of all elements in $S - \\{x\\}$ whose keys are less than $x.key$ and a set $S''$ that consists of all elements in $S - \\{x\\}$ whose keys are greater than $x.key$. In this problem, we investigate how to implement these operations on 2-3-4 trees. We assume for convenience that elements consist only of keys and that all key values are distinct. **a.** Show how to maintain, for every node $x$ of a 2-3-4 tree, the height of the subtree rooted at $x$ as an attribute $x.height$. Make sure that your implementation does not affect the asymptotic running times of searching, insertion, and deletion. **b.** Show how to implement the join operation. Given two 2-3-4 trees $T'$ and $T''$ and a key $k$, the join operation should run in $O(1 + |h' - h''|)$ time, where $h'$ and $h''$ are the heights of $T'$ and $T''$, respectively. **c.** Consider the simple path $p$ from the root of a 2-3-4 tree $T$ to a given key $k$, the set $S'$ of keys in $T$ that are less than $k$, and the set $S''$ of keys in $T$ that are greater than $k$. Show that $p$ breaks $S'$ into a set of trees $\\{T'\_0, T'\_1, \ldots, T'\_m\\}$ and a set of keys $\\{k'\_1, k'\_2, \ldots, k'\_m\\}$, where, for $i = 1, 2, \ldots, m$, we have $y < k'\_i < z$ for any keys $y \in T'\_{i - 1}$ and $z \in T'\_i$. What is the relationship between the heights of $T'\_{i - 1}$ and $T'\_i$? Describe how $p$ breaks $S''$ into sets of trees and keys. **d.** Show how to implement the split operation on $T$. Use the join operation to assemble the keys in $S'$ into a single 2-3-4 tree $T'$ and the keys in $S''$ into a single 2-3-4 tree $T''$. The running time of the split operation should be $O(\lg n)$, where $n$ is the number of keys in $T$. ($\textit{Hint:}$ The costs for joining should telescope.)
**a.** For insertion it will suffice to explain how to update height when we split a node. Suppose node $x$ is split into nodes $y$ and $z$, and the median of $x$ is merged into node $w$. The height of $w$ remains unchanged unless $x$ was the root (in which case $w.height = x.height + 1$). The height of $y$ or $z$ will often change. We set $$y.height = \max_i y.c_i.height + 1$$ and $$z.height = \max_i z.c_i.height + 1.$$ Each update takes $O(t)$. Since a call to $\text{B-TREE-INSERT}$ makes at most $h$ splits where $h$ is the height of the tree, the total time it takes to update heights is $O(th)$, preserving the asymptotic running time of insert. For deletion the situation is even simpler. The only time the height changes is when the root's children are merged, leaving an empty root node to be deleted. In this case, we update the height of the new root to be the (old) height of the root minus $1$.

**b.** Without loss of generality, assume $h' \ge h''$. We essentially wish to merge $T''$ into $T'$ at a node of height $h''$ using node $x$. To do this, find the node at depth $h' - h''$ on the right spine of $T'$. Add $x$ as a key to this node, and $T''$ as the additional child. If it should happen that the node was already full, perform a split operation.

**c.** Let $x_i$ be the node encountered after $i$ steps on path $p$. Let $l_i$ be the index of the largest key stored in $x_i$ which is less than or equal to $k$. We take $k_i' = x_i.key_{l_i}$ and $T'\_{i - 1}$ to be the tree whose root node consists of the keys in $x_i$ which are less than $x_i.key_{l_i}$, and all of their children. In general, $T'\_{i - 1}.height \ge T'\_i.height$. For $S''$, we take a similar approach. The keys will be those in nodes passed on $p$ which are immediately greater than $k$, and the trees will be rooted at a node consisting of the larger keys, with the associated subtrees. When we reach the node which contains $k$, we don't assign a key, but we do assign a tree.

**d.** Let $T_1$ and $T_2$ be empty trees. Consider the path $p$ from the root of $T$ to $k$. Suppose we have reached node $x_i$. We join tree $T'\_{i - 1}$ to $T_1$, then insert $k'\_i$ into $T_1$. We join $T''\_{i - 1}$ to $T_2$ and insert $k''\_i$ into $T_2$. Once we have encountered the node which contains $k$ at $x_m.key_k$, join $x_m.c_k$ with $T_1$ and $x_m.c_{k + 1}$ with $T_2$. We will perform at most $2$ join operations and $1$ insert operation for each level of the tree. By the runtime determined in part (b), the cost of joining a tree $T'$ to $T_1$ (or $T''$ to $T_2$, respectively) is proportional to the height difference $T'.height - T_1.height$, and since the heights of successive trees that are joined are nondecreasing, we get a telescoping sum of heights. The first tree has height $h$, where $h$ is the height of $T$, and the last tree has height $0$. Thus, the runtime is $$O(2(h + h)) = O(\lg n).$$
[]
false
[]
19-19.1-191
19
19.1
19.1
docs/Chap19/19.1.md
There is no exercise in this section.
There is no exercise in this section.
[]
false
[]
19-19.2-1
19
19.2
19.2-1
docs/Chap19/19.2.md
Show the Fibonacci heap that results from calling $\text{FIB-HEAP-EXTRACT-MIN}$ on the Fibonacci heap shown in Figure 19.4(m).
(Omit!)
[]
false
[]
19-19.3-1
19
19.3
19.3-1
docs/Chap19/19.3.md
Suppose that a root $x$ in a Fibonacci heap is marked. Explain how $x$ came to be a marked root. Argue that it doesn't matter to the analysis that $x$ is marked, even though it is not a root that was first linked to another node and then lost one child.
$x$ came to be a marked root because at some point it had been a marked child of $H.min$, which was removed by a $\text{FIB-HEAP-EXTRACT-MIN}$ operation. See Figure 19.4 for an example, where the node with key $18$ became a marked root. The fact that $x$ is marked does not force the analysis to account for any additional actual work. This is because the only time that markedness is checked is in line 3 of $\text{CASCADING-CUT}$, and that line is only ever run on nodes whose parent is non-$\text{NIL}$. Since every root has $\text{NIL}$ as its parent, line 3 of $\text{CASCADING-CUT}$ will never be run on this marked root. The mark does make the potential function larger than it needs to be, but the extra potential that was paid in to raise it will simply never be used up later.
[]
false
[]
19-19.3-2
19
19.3
19.3-2
docs/Chap19/19.3.md
Justify the $O(1)$ amortized time of $\text{FIB-HEAP-DECREASE-KEY}$ as an average cost per operation by using aggregate analysis.
Recall that the actual cost of $\text{FIB-HEAP-DECREASE-KEY}$ is $O(c)$, where $c$ is the number of calls made to $\text{CASCADING-CUT}$. If $c_i$ is the number of calls made on the $i$th key decrease, then the total time of $n$ calls to $\text{FIB-HEAP-DECREASE-KEY}$ is $\sum_{i = 1}^n O(c_i)$. Next observe that every recursive call to $\text{CASCADING-CUT}$ moves a node to the root list, and a call made on a root terminates in $O(1)$ time. Since no roots ever become children during the course of these calls, each node can be moved to the root list at most once, so we must have $\sum_{i = 1}^n c_i = O(n)$. Therefore the aggregate cost of the $n$ operations is $O(n)$, so the average, or amortized, cost per operation is $O(1)$.
[]
false
[]
19-19.4-1
19
19.4
19.4-1
docs/Chap19/19.4.md
Professor Pinocchio claims that the height of an $n$-node Fibonacci heap is $O(\lg n)$. Show that the professor is mistaken by exhibiting, for any positive integer $n$, a sequence of Fibonacci-heap operations that creates a Fibonacci heap consisting of just one tree that is a linear chain of $n$ nodes.
- **Initialize:** insert $3$ numbers, then extract-min. Consolidation leaves a single chain of two nodes.
- **Iteration:** insert $3$ numbers, of which at least two are less than the root of the current chain, then extract-min.

The smallest newly inserted number will be extracted, and the remaining two numbers will form a tree whose root has degree $1$; since that root is less than the root of the old chain, consolidation merges the old chain into the newly created tree as a child of its root. Finally, we delete the node which contains the largest of the $3$ inserted numbers; the cut leaves the new root with the old chain as its only child, so each iteration extends the chain by one node (see the sketch below).
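A hedged pseudocode sketch of this construction; the procedure name and the particular key values are illustrative only, and $\text{FIB-HEAP-DELETE}$ is the textbook delete (decrease the key to $-\infty$, then extract-min).

```cpp
BUILD-CHAIN(n)
    // builds a Fibonacci heap whose single tree is a chain of n >= 2 nodes
    H = an empty Fibonacci heap
    insert three keys a < b < c into H
    FIB-HEAP-EXTRACT-MIN(H)        // a leaves; b-c is now a chain of 2 nodes
    while H.n < n
        r = key of the root of the current chain
        insert three keys x < y < z into H, chosen so that x < y < r
        FIB-HEAP-EXTRACT-MIN(H)    // removes x; consolidation hangs the chain below y
        FIB-HEAP-DELETE(H, z)      // cutting z leaves y with the chain as its only child
    return H
```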
[]
false
[]
19-19.4-2
19
19.4
19.4-2
docs/Chap19/19.4.md
Suppose we generalize the cascading-cut rule to cut a node $x$ from its parent as soon as it loses its $k$th child, for some integer constant $k$. (The rule in Section 19.3 uses $k = 2$.) For what values of $k$ is $D(n) = O(\lg n)$?
Following the proof of Lemma 19.1, if $x$ is any node of a Fibonacci heap with $x.degree = m$ and children $y_1, y_2, \ldots, y_m$, then $y_1.degree \ge 0$ and $y_i.degree \ge i - k$. Thus, if $s_m$ denotes the fewest nodes possible in a tree whose root has degree $m$, then we have $s_0 = 1, s_1 = 2, \ldots, s_{k - 1} = k$ and, in general, $s_m = k + \sum_{i = 0}^{m - k} s_i$. Let

$$
f_m =
\begin{cases}
0 & m = 0, \\\\
1 & 0 < m < k, \\\\
f_{m - 1} + f_{m - k} & m \ge k.
\end{cases}
$$

Let $\alpha$ be a root of $x^k - x^{k - 1} = 1$, i.e., of $x^{k - 1}(x - 1) = 1$. We'll show by induction that $f_{m + k} \ge \alpha^m$. For the base cases:

$$
\begin{aligned}
f_k & = f_{k - 1} + f_0 = 1 + 0 = 1 = \alpha^0 \\\\
f_{k + 1} & = f_k + f_1 = 1 + 1 \ge \alpha^1 \\\\
& \vdots \\\\
f_{k + k} & = f_{2k - 1} + f_k = f_{2k - 1} + 1 \ge 1 + \alpha^{k - 1} = \alpha^k.
\end{aligned}
$$

In general, we have

$$f_{m + k} = f_{m + k - 1} + f_m \ge \alpha^{m - 1} + \alpha^{m - k} = \alpha^{m - k}(\alpha^{k - 1} + 1) = \alpha^m.$$

Next we show that $f_{m + k} = 1 + \sum_{i = 0}^m f_i$. The base case is clear, since $f_k = f_0 + 1 = 0 + 1$. For the induction step, we have

$$f_{m + k} = f_{m - 1 + k} + f_m = 1 + \sum_{i = 0}^{m - 1} f_i + f_m = 1 + \sum_{i = 0}^m f_i.$$

Observe that $s_i \ge f_{i + k}$ for $0 \le i < k$. Again by induction, for $m \ge k$ we have

$$s_m = k + \sum_{i = 0}^{m - k} s_i \ge k + \sum_{i = 0}^{m - k} f_{i + k} = k + \sum_{i = k}^m f_i = 1 + \sum_{i = 0}^m f_i = f_{m + k}.$$

So in general, $s_m \ge f_{m + k}$. Putting it all together, we have

$$
\begin{aligned}
size(x) & \ge s_m \\\\
& \ge k + \sum_{i = k}^m s_{i - k} \\\\
& \ge k + \sum_{i = k}^m f_i \\\\
& \ge k + \sum_{i = 0}^m f_i - \sum_{i = 0}^{k - 1} f_i \\\\
& = k + \sum_{i = 0}^m f_i - (k - 1) \\\\
& = 1 + \sum_{i = 0}^m f_i \\\\
& = f_{m + k} \\\\
& \ge \alpha^m.
\end{aligned}
$$

Taking logarithms of $n \ge size(x) \ge \alpha^m$, we have

$$\log_\alpha n \ge m.$$

In other words, provided that $\alpha$ is a constant greater than $1$, we have a logarithmic bound on the maximum degree. For any integer constant $k \ge 2$, the equation $\alpha^{k - 1}(\alpha - 1) = 1$ has a constant root $\alpha > 1$, so $D(n) = O(\log_\alpha n) = O(\lg n)$ for every constant $k \ge 2$.
[]
false
[]
19-19-1
19
19-1
19-1
docs/Chap19/Problems/19-1.md
Professor Pisano has proposed the following variant of the $\text{FIB-HEAP-DELETE}$ procedure, claiming that it runs faster when the node being deleted is not the node pointed to by $H.min$.

```cpp
PISANO-DELETE(H, x)
    if x == H.min
        FIB-HEAP-EXTRACT-MIN(H)
    else y = x.p
        if y != NIL
            CUT(H, x, y)
            CASCADING-CUT(H, y)
        add x's child list to the root list of H
        remove x from the root list of H
```

**a.** The professor's claim that this procedure runs faster is based partly on the assumption that line 7 can be performed in $O(1)$ actual time. What is wrong with this assumption?

**b.** Give a good upper bound on the actual time of $\text{PISANO-DELETE}$ when $x$ is not $H.min$. Your bound should be in terms of $x.degree$ and the number $c$ of calls to the $\text{CASCADING-CUT}$ procedure.

**c.** Suppose that we call $\text{PISANO-DELETE}(H, x)$, and let $H'$ be the Fibonacci heap that results. Assuming that node $x$ is not a root, bound the potential of $H'$ in terms of $x.degree$, $c$, $t(H)$, and $m(H)$.

**d.** Conclude that the amortized time for $\text{PISANO-DELETE}$ is asymptotically no better than for $\text{FIB-HEAP-DELETE}$, even when $x \ne H.min$.
**a.** It can take actual time proportional to the number of children that $x$ had, because for each child, when placing it in the root list, its parent pointer needs to be updated to be $\text{NIL}$ instead of $x$.

**b.** Line 7 takes actual time bounded by $x.degree$, since updating each of the children of $x$ only takes constant time. So, if $c$ is the number of cascading cuts that are done, the actual cost is $O(c + x.degree)$.

**c.** We examine the number of trees in the root list and the number of marked nodes of the resulting Fibonacci heap $H'$ to upper-bound its potential. The number of trees increases by the number of children $x$ had ($= x.degree$), due to line 7 of $\text{PISANO-DELETE}(H, x)$. The number of marked nodes in $H'$ is calculated as follows. The first $c - 1$ recursive calls out of the $c$ calls to $\text{CASCADING-CUT}$ each unmark a marked node (line 4 of $\text{CUT}$, invoked by line 5 of $\text{CASCADING-CUT}$). The final, $c$th call to $\text{CASCADING-CUT}$ marks an unmarked node (line 4 of $\text{CASCADING-CUT}$), and therefore the total change in the number of marked nodes is $-(c - 1) + 1 = -c + 2$. Therefore, the potential of $H'$ is $$\Phi(H') \le t(H) + x.degree + 2(m(H) - c + 2).$$

**d.** The amortized time is $$\Theta(x.degree) = \Theta(\lg n),$$ which is the same asymptotic time that was required for the original deletion method.
[ { "lang": "cpp", "code": "> PISANO-DELETE(H, x)\n> if x == H.min\n> FIB-HEAP-EXTRACT-MIN(H)\n> else y = x.p\n> if y != NIL\n> CUT(H, x, y)\n> CASCADING-CUT(H, y)\n> add x's child list to the root list of H\n> remove x from the root list of H\n>" } ]
false
[]
19-19-2
19
19-2
19-2
docs/Chap19/Problems/19-2.md
The **_binomial tree_** $B_k$ is an ordered tree (see Section B.5.2) defined recursively. As shown in Figure 19.6(a), the binomial tree $B_0$ consists of a single node. The binomial tree $B_k$ consists of two binomial trees $B_{k - 1}$ that are linked together so that the root of one is the leftmost child of the root of the other. Figure 19.6(b) shows the binomial trees $B_0$ through $B_4$. **a.** Show that for the binomial tree $B_k$, 1. there are $2^k$ nodes, 2. the height of the tree is $k$, 3. there are exactly $\binom{k}{i}$ nodes at depth $i$ for $i = 0, 1, \ldots, k$, and 4. the root has degree $k$, which is greater than that of any other node; moreover, as Figure 19.6\(c\) shows, if we number the children of the root from left to right by $k - 1, k - 2, \ldots, 0$, then child $i$ is the root of a subtree $B_i$. A **_binomial heap_** $H$ is a set of binomial trees that satisfies the following properties: 1. Each node has a $key$ (like a Fibonacci heap). 2. Each binomial tree in $H$ obeys the min-heap property. 3. For any nonnegative integer $k$, there is at most one binomial tree in $H$ whose root has degree $k$. **b.** Suppose that a binomial heap $H$ has a total of $n$ nodes. Discuss the relationship between the binomial trees that $H$ contains and the binary representation of $n$. Conclude that $H$ consists of at most $\lfloor \lg n \rfloor + 1$ binomial trees. Suppose that we represent a binomial heap as follows. The left-child, right-sibling scheme of Section 10.4 represents each binomial tree within a binomial heap. Each node contains its key; pointers to its parent, to its leftmost child, and to the sibling immediately to its right (these pointers are $\text{NIL}$ when appropriate); and its degree (as in Fibonacci heaps, how many children it has). The roots form a singly linked root list, ordered by the degrees of the roots (from low to high), and we access the binomial heap by a pointer to the first node on the root list. **c.** Complete the description of how to represent a binomial heap (i.e., name the attributes, describe when attributes have the value $\text{NIL}$, and define how the root list is organized), and show how to implement the same seven operations on binomial heaps as this chapter implemented on Fibonacci heaps. Each operation should run in $O(\lg n)$ worst-case time, where $n$ is the number of nodes in the binomial heap (or in the case of the $\text{UNION}$ operation, in the two binomial heaps that are being united). The $\text{MAKE-HEAP}$ operation should take constant time. **d.** Suppose that we were to implement only the mergeable-heap operations on a Fibonacci heap (i.e., we do not implement the $\text{DECREASE-KEY}$ or $\text{DELETE}$ operations). How would the trees in a Fibonacci heap resemble those in a binomial heap? How would they differ? Show that the maximum degree in an $n$-node Fibonacci heap would be at most $\lfloor \lg n\rfloor$. **e.** Professor McGee has devised a new data structure based on Fibonacci heaps. A McGee heap has the same structure as a Fibonacci heap and supports just the mergeable-heap operations. The implementations of the operations are the same as for Fibonacci heaps, except that insertion and union consolidate the root list as their last step. What are the worst-case running times of operations on McGee heaps?
**a.**

1. $B_k$ consists of two binomial trees $B_{k - 1}$.
2. The height of one of the two $B_{k - 1}$'s is increased by $1$.
3. For $i = 0$, $\binom{k}{0} = 1$ and only the root is at depth $0$. Suppose that in $B_{k - 1}$ the number of nodes at depth $i$ is $\binom{k - 1}{i}$; then in $B_k$ the number of nodes at depth $i$ is $\binom{k - 1}{i} + \binom{k - 1}{i - 1} = \binom{k}{i}$.
4. The degree of the root increases by $1$.

**b.** Let $n.b$ denote the binary expansion of $n$. The fact that we can have at most one of each binomial tree corresponds to the fact that we can have at most $1$ as any digit of $n.b$. Since each binomial tree has a size which is a power of $2$, the binomial trees required to represent $n$ nodes are uniquely determined. We include $B_k$ if and only if the $k$th position of $n.b$ is $1$. Since the binary representation of $n$ has at most $\lfloor \lg n \rfloor + 1$ digits, this also bounds the number of trees which can be used to represent $n$ nodes.

**c.** Given a node $x$, let $x.key$, $x.p$, $x.c$, and $x.s$ represent the attributes key, parent, left-most child, and sibling to the right, respectively. The pointer attributes have value $\text{NIL}$ when no such node exists. The root list will be stored in a singly linked list.

- **MAKE-HEAP:** initialize an empty list for the root list and return a pointer to the head of the list, which contains $\text{NIL}$. This takes constant time. To insert: let $x$ be a node with key $k$, to be inserted. Scan the root list to find the first $m$ such that $B_m$ is not one of the trees in the binomial heap. If there is no $B_0$, simply create a single root node $x$. Otherwise, union $x, B_0, B_1, \ldots, B_{m - 1}$ into a $B_m$ tree. Remove all root nodes of the unioned trees from the root list, and update it with the new root. Since each join operation is logarithmic in the height of the tree, the total time is $O(\lg n)$. $\text{MINIMUM}$ just scans the root list and returns the minimum in $O(\lg n)$, since the root list has size at most $O(\lg n)$.
- **EXTRACT-MIN:** finds and deletes the minimum, then splits the tree $B_m$ which contained the minimum into its component binomial trees $B_0, B_1, \ldots, B_{m - 1}$ in $O(\lg n)$ time. Finally, it unions each of these with any existing trees of the same size in $O(\lg n)$ time.
- **UNION:** suppose we have two binomial heaps consisting of trees $B_{i_1}, B_{i_2}, \ldots, B_{i_k}$ and $B_{j_1}, B_{j_2}, \ldots, B_{j_m}$ respectively. Simply union corresponding trees of the same size between the two heaps, then do another check and join any newly created trees which have caused additional duplicates. Note: we will perform at most one union on any fixed size of binomial tree, so the total running time is still logarithmic in $n$, where we assume that $n$ is the sum of the sizes of the trees which we are unioning.
- **DECREASE-KEY:** simply swap the node whose key was decreased up the tree until it satisfies the min-heap property. This method requires that we swap the node with its parent along with all their satellite data in a brute-force manner to avoid updating the $p$ attributes of the siblings of the node. When the data stored in each node is large, we may want to update $p$ instead, which, however, will increase the running time bound to $O(\lg^2 n)$.
- **DELETE:** note that every binomial tree consists of two copies of a smaller binomial tree, so we can write the procedure recursively. If the tree is a single node, simply delete it.
If we wish to delete from $B_k$, first split the tree into its constituent copies of $B_{k - 1}$, and recursively call delete on the copy of $B_{k - 1}$ which contains $x$. If this results in two binomial trees of the same size, simply union them.

**d.** The Fibonacci heap will look like a binomial heap, except that multiple copies of a given binomial tree will be allowed. Since the only trees which will appear are binomial trees and $B_k$ has $2^k$ nodes, we must have $2^k \le n$, which implies $k \le \lfloor \lg n \rfloor$. Since the largest degree in any binomial tree occurs at the root, and in $B_k$ the root has degree $k$, this also bounds the largest degree of a node.

**e.** $\text{INSERT}$ and $\text{UNION}$ will no longer have amortized $O(1)$ running time because $\text{CONSOLIDATE}$ has runtime $O(\lg n)$. Even if no nodes are consolidated, the runtime is dominated by the check that all degrees are distinct. Since calling $\text{UNION}$ on a heap and a single node is the same as insertion, it must also have runtime $O(\lg n)$. The other operations remain unchanged.
[]
false
[]
19-19-3
19
19-3
19-3
docs/Chap19/Problems/19-3.md
We wish to augment a Fibonacci heap $H$ to support two new operations without changing the amortized running time of any other Fibonacci-heap operations. **a.** The operation $\text{FIB-HEAP-CHANGE-KEY}(H, x, k)$ changes the key of node $x$ to the value $k$. Give an efficient implementation of $\text{FIB-HEAP-CHANGE-KEY}$, and analyze the amortized running time of your implementation for the cases in which $k$ is greater than, less than, or equal to $x.key$. **b.** Give an efficient implementation of $\text{FIB-HEAP-PRUNE}(H, r)$, which deletes $q = \min(r, H.n)$ nodes from $H$. You may choose any $q$ nodes to delete. Analyze the amortized running time of your implementation. ($\textit{Hint:}$ You may need to modify the data structure and potential function.)
**a.** If $k < x.key$, just run the decrease-key procedure. If $k > x.key$, delete $x$ from the heap and then insert $x$ again with the new key. If $k = x.key$, nothing needs to change. For the first and last cases the amortized time is $O(1)$, and for the middle case the amortized time is $O(\lg n)$ (see the sketch below).

**b.** Suppose that we also add to the potential function a term proportional to the size of the structure. This term only increases when we do an insertion, and then only by a constant amount, so there aren't any worries about this addition raising the amortized cost of any other operation. Once we've made this modification to the potential function, we also modify the heap itself by maintaining a doubly linked list through all of the leaf nodes in the heap. To prune, we pick any leaf node, remove it from its parent's child list, and remove it from the list of leaves. We repeat this $\min(r, H.n)$ times. This causes the potential to drop by an amount proportional to the number of nodes removed, which is on the order of the actual cost, since each deletion from the linked lists takes only constant time. So the amortized time is $O(1)$.
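A minimal sketch of part (a); the procedure name matches the problem statement, and it is assumed to reuse the textbook $\text{FIB-HEAP-DECREASE-KEY}$, $\text{FIB-HEAP-DELETE}$, and $\text{FIB-HEAP-INSERT}$ procedures.

```cpp
FIB-HEAP-CHANGE-KEY(H, x, k)
    if k < x.key
        FIB-HEAP-DECREASE-KEY(H, x, k)   // amortized O(1)
    elseif k > x.key
        FIB-HEAP-DELETE(H, x)            // amortized O(lg n)
        x.key = k                        // reinsert x as a fresh node
        x.degree = 0
        x.child = NIL
        x.mark = FALSE
        FIB-HEAP-INSERT(H, x)            // amortized O(1)
    // if k == x.key there is nothing to do
```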
[]
false
[]
19-19-4
19
19-4
19-4
docs/Chap19/Problems/19-4.md
Chapter 18 introduced the 2-3-4 tree, in which every internal node (other than possibly the root) has two, three, or four children and all leaves have the same depth. In this problem, we shall implement **_2-3-4 heaps_**, which support the mergeable-heap operations. The 2-3-4 heaps differ from 2-3-4 trees in the following ways. In 2-3-4 heaps, only leaves store keys, and each leaf $x$ stores exactly one key in the attribute $x.key$. The keys in the leaves may appear in any order. Each internal node $x$ contains a value $x.small$ that is equal to the smallest key stored in any leaf in the subtree rooted at $x$. The root $r$ contains an attribute $r.height$ that gives the height of the tree. Finally, 2-3-4 heaps are designed to be kept in main memory, so that disk reads and writes are not needed. Implement the following 2-3-4 heap operations. In parts (a)–(e), each operation should run in $O(\lg n)$ time on a 2-3-4 heap with $n$ elements. The $\text{UNION}$ operation in part (f) should run in $O(\lg n)$ time, where $n$ is the number of elements in the two input heaps. **a.** $\text{MINIMUM}$, which returns a pointer to the leaf with the smallest key. **b.** $\text{DECREASE-KEY}$, which decreases the key of a given leaf $x$ to a given value $k \le x.key$. **c.** $\text{INSERT}$, which inserts leaf $x$ with key $k$. **d.** $\text{DELETE}$, which deletes a given leaf $x$. **e.** $\text{EXTRACT-MIN}$, which extracts the leaf with the smallest key. **f.** $\text{UNION}$, which unites two 2-3-4 heaps, returning a single 2-3-4 heap and destroying the input heaps.
**a.** Traverse a path from root to leaf as follows: At a given node, examine the attribute $x.small$ in each child-node of the current node. Proceed to the child node which minimizes this attribute. If the children of the current node are leaves, then simply return a pointer to the child node with smallest key. Since the height of the tree is $O(\lg n)$ and the number of children of any node is at most $4$, this has runtime $O(\lg n)$. **b.** Decrease the key of $x$, then traverse the simple path from $x$ to the root by following the parent pointers. At each node $y$ encountered, check the attribute $y.small$. If $k < y.small$, set $y.small = k$. Otherwise do nothing and continue on the path. **c.** Insert works the same as in a B-tree, except that at each node it is assumed that the node to be inserted is 'smaller' than every key stored at that node, so the runtime is inherited. If the root is split, we update the height of the tree. When we reach the final node before the leaves, simply insert the new node as the leftmost child of that node. **d.** As with $\text{B-TREE-DELETE}$, we'll want to ensure that the tree satisfies the properties of being a 2-3-4 tree after deletion, so we'll need to check that we're never deleting a leaf which only has a single sibling. This is handled in much the same way as in chapter 18. We can imagine that dummy keys are stored in all the internal nodes, and carry out the deletion process in exactly the same way as done in exercise 18.3-2, with the added requirement that we update the height stored in the root if we merge the root with its child nodes. **e.** $\text{EXTRACT-MIN}$ simply locates the minimum as done in part (a), then deletes it as in part (d). **f.** This can be done by implementing the join operation, as in Problem 18-2 (b).
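As an illustration of part (a), a minimal pseudocode sketch; the procedure name, the child attributes $c_1, c_2, \ldots$, and the convention that a leaf's $small$ attribute equals its $key$ are assumptions made for the sketch.

```cpp
2-3-4-HEAP-MINIMUM(H)
    // returns a pointer to the leaf with the smallest key
    x = H.root
    while x is not a leaf
        y = x.c_1
        for each child c of x           // at most 4 children per node
            if c.small < y.small        // for leaves, small is taken to be key
                y = c
        x = y
    return x
```

Each level costs only a constant number of comparisons, so the running time is proportional to the height, $O(\lg n)$.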
[]
false
[]
20-20.1-1
20
20.1
20.1-1
docs/Chap20/20.1.md
Modify the data structures in this section to support duplicate keys.
To modify these structure to allow for multiple elements, instead of just storing a bit in each of the entries, we can store the head of a linked list representing how many elements of that value that are contained in the structure, with a $\text{NIL}$ value to represent having no elements of that value.
[]
false
[]
20-20.1-2
20
20.1
20.1-2
docs/Chap20/20.1.md
Modify the data structures in this section to support keys that have associated satellite data.
All operations will remain the same, except instead of the leaves of the tree being an array of integers, they will be an array of nodes, each of which stores $x.key$ in addition to whatever additional satellite data you wish.
[]
false
[]
20-20.1-3
20
20.1
20.1-3
docs/Chap20/20.1.md
Observe that, using the structures in this section, the way we find the successor and predecessor of a value $x$ does not depend on whether $x$ is in the set at the time. Show how to find the successor of $x$ in a binary search tree when $x$ is not stored in the tree.
To find the successor of a given key $k$ in a binary search tree, call the procedure $\text{SUCC}(k, T.root)$, sketched below. Note that this will return $\text{NIL}$ if there is no entry in the tree with a larger key.
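The procedure itself is not written out above; a minimal sketch follows, assuming the usual $key$, $left$, and $right$ attributes.

```cpp
SUCC(k, x)
    // returns the node with the smallest key greater than k in the subtree
    // rooted at x, or NIL if there is no such node
    if x == NIL
        return NIL
    if x.key <= k
        return SUCC(k, x.right)
    else
        y = SUCC(k, x.left)
        if y != NIL
            return y
        else
            return x
```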
[]
false
[]
20-20.1-4
20
20.1
20.1-4
docs/Chap20/20.1.md
Suppose that instead of superimposing a tree of degree $\sqrt u$, we were to superimpose a tree of degree $u^{1 / k}$, where $k > 1$ is a constant. What would be the height of such a tree, and how long would each of the operations take?
The new tree would have height $k$. $\text{INSERT}$ would take $O(k)$, $\text{MINIMUM}$, $\text{MAXIMUM}$, $\text{SUCCESSOR}$, $\text{PREDECESSOR}$, and $\text{DELETE}$ would take $O(ku^{1 / k})$.
[]
false
[]
20-20.2-1
20
20.2
20.2-1
docs/Chap20/20.2.md
Write pseudocode for the procedures $\text{PROTO-vEB-MAXIMUM}$ and $\text{PROTO-vEB-PREDECESSOR}$.
The procedures $\text{PROTO-vEB-MAXIMUM}$ and $\text{PROTO-vEB-PREDECESSOR}$ are symmetric to the textbook's $\text{PROTO-vEB-MINIMUM}$ and $\text{PROTO-vEB-SUCCESSOR}$, with the roles of $0$ and $1$ (and of minimum and maximum) exchanged; sketches are given below.
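Hedged sketches of the two procedures, obtained by mirroring $\text{PROTO-vEB-MINIMUM}$ and $\text{PROTO-vEB-SUCCESSOR}$; the helpers $high$, $low$, and $index$ are the ones defined in the chapter.

```cpp
PROTO-vEB-MAXIMUM(V)
    if V.u == 2
        if V.A[1] == 1
            return 1
        elseif V.A[0] == 1
            return 0
        else return NIL
    else
        max-cluster = PROTO-vEB-MAXIMUM(V.summary)
        if max-cluster == NIL
            return NIL
        else
            offset = PROTO-vEB-MAXIMUM(V.cluster[max-cluster])
            return index(max-cluster, offset)
```

```cpp
PROTO-vEB-PREDECESSOR(V, x)
    if V.u == 2
        if x == 1 and V.A[0] == 1
            return 0
        else return NIL
    else
        offset = PROTO-vEB-PREDECESSOR(V.cluster[high(x)], low(x))
        if offset != NIL
            return index(high(x), offset)
        else
            pred-cluster = PROTO-vEB-PREDECESSOR(V.summary, high(x))
            if pred-cluster == NIL
                return NIL
            else
                offset = PROTO-vEB-MAXIMUM(V.cluster[pred-cluster])
                return index(pred-cluster, offset)
```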
[]
false
[]
20-20.2-2
20
20.2
20.2-2
docs/Chap20/20.2.md
Write pseudocode for $\text{PROTO-vEB-DELETE}$. It should update the appropriate summary bit by scanning the related bits within the cluster. What is the worst-case running time of your procedure?
```cpp PROTO-vEB-DELETE(V, x) if V.u == 2 V.A[x] = 0 else PROTO-vEB-DELETE(V.cluster[high(x)], low(x)) inCluster = false for i = 0 to sqrt(u) - 1 if PROTO-vEB-MEMBER(V.cluster[high(x)], low(i)) inCluster = true break if inCluster == false PROTO-vEB-DELETE(V.summary, high(x)) ``` When we delete a key, we need to check membership of all keys of that cluster to know how to update the summary structure. There are $\sqrt u$ of these, and each membership takes $O(\lg\lg u)$ time to check. With the recursive calls, recurrence for running time is $$T(u) = T(\sqrt u) + O(\sqrt u\lg\lg u).$$ We make the substitution $m = \lg u$ and $S(m) = T(2^m)$. Then we apply the Master Theorem, using case 3, to solve the recurrence. Substituting back, we find that the runtime is $T(u) = O(\sqrt u\lg\lg u)$.
[ { "lang": "cpp", "code": "PROTO-vEB-DELETE(V, x)\n if V.u == 2\n V.A[x] = 0\n else\n PROTO-vEB-DELETE(V.cluster[high(x)], low(x))\n inCluster = false\n for i = 0 to sqrt(u) - 1\n if PROTO-vEB-MEMBER(V.cluster[high(x)], low(i))\n inCluster = true\n break\n if inCluster == false\n PROTO-vEB-DELETE(V.summary, high(x))" } ]
false
[]
20-20.2-3
20
20.2
20.2-3
docs/Chap20/20.2.md
Add the attribute $n$ to each $\text{proto-vEB}$ structure, giving the number of elements currently in the set it represents, and write pseudocode for $\text{PROTO-vEB-DELETE}$ that uses the attribute $n$ to decide when to reset summary bits to $0$. What is the worst-case running time of your procedure? What other procedures need to change because of the new attribute? Do these changes affect their running times?
We keep the procedure from the previous exercise, but immediately after the else we insert a check of whether $n = 1$ for the cluster we are about to recurse into. If it isn't, we continue as usual; if it is, then the element being deleted is the only one in that cluster, so we can just delete it, clear the corresponding summary bit, and be done immediately, skipping the membership scan. This has the upside that it can sometimes save up to $\lg\lg u$ levels of work. The downside is that the number of elements in the set could be as large as $u$, in which case $\lg u$ bits are needed to store $n$. The only other procedure that needs to change is $\text{PROTO-vEB-INSERT}$, which must now also increment $n$ at each level it touches; this takes constant extra time per level, so its running time is unaffected.
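One way to realize this idea is sketched below, under the assumption that every $\text{proto-vEB}$ structure carries the new attribute $n$ and that $\text{PROTO-vEB-INSERT}$ keeps it up to date; the counter check replaces the membership scan of the previous exercise.

```cpp
PROTO-vEB-DELETE(V, x)
    V.n = V.n - 1
    if V.u == 2
        V.A[x] = 0
    else
        PROTO-vEB-DELETE(V.cluster[high(x)], low(x))
        if V.cluster[high(x)].n == 0
            // the cluster just became empty, so clear its bit in the summary
            PROTO-vEB-DELETE(V.summary, high(x))
```

In the worst case both recursive calls are made at every level, so this variant satisfies $T(u) \le 2T(\sqrt u) + O(1) = O(\lg u)$.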
[]
false
[]
20-20.2-4
20
20.2
20.2-4
docs/Chap20/20.2.md
Modify the $\text{proto-vEB}$ structure to support duplicate keys.
The array $A$ found in a proto van Emde Boas structure of size $2$ should now store integers instead of just bits; all other parts of the structure remain the same. The integer stores the number of duplicates of that value. The modifications to insert, delete, minimum, successor, etc. are minor: only the base cases need to be updated.
[]
false
[]
20-20.2-5
20
20.2
20.2-5
docs/Chap20/20.2.md
Modify the $\text{proto-vEB}$ structure to support keys that have associated satellite data.
The only modification necessary would be for the $u = 2$ trees. They would need to also include a length two array that had pointers to the corresponding satellite data which would be populated in case the corresponding entry in $A$ were $1$.
[]
false
[]
20-20.2-6
20
20.2
20.2-6
docs/Chap20/20.2.md
Write pseudocode for a procedure that creates a $\text{proto-vEB}(u)$ structure.
This algorithm recursively allocates proper space and appropriately initializes attributes for a proto van Emde Boas structure of size $u$. ```cpp MAKE-PROTO-vEB(u) allocate a new vEB tree V V.u = u if u == 2 let A be an array of size 2 V.A[1] = V.A[0] = 0 else V.summary = MAKE-PROTO-vEB(sqrt(u)) for i = 0 to sqrt(u) - 1 V.cluster[i] = MAKE-PROTO-vEB(sqrt(u)) ```
[ { "lang": "cpp", "code": "MAKE-PROTO-vEB(u)\n allocate a new vEB tree V\n V.u = u\n if u == 2\n let A be an array of size 2\n V.A[1] = V.A[0] = 0\n else\n V.summary = MAKE-PROTO-vEB(sqrt(u))\n for i = 0 to sqrt(u) - 1\n V.cluster[i] = MAKE-PROTO-vEB(sqrt(u))" } ]
false
[]
20-20.2-7
20
20.2
20.2-7
docs/Chap20/20.2.md
Argue that if line 9 of $\text{PROTO-vEB-MINIMUM}$ is executed, then the $\text{proto-vEB}$ structure is empty.
For line 9 to be executed, the query on the summary data must also have returned $\text{NIL}$. This could have happened either through line 9 or through line 6. Eventually, though, it must happen at line 6, so there must be some chain of summarizations of $V$ that leads to an empty $u = 2$ $\text{proto-vEB}$ structure. However, a summary has an entry equal to $1$ if any of the corresponding entries in the structure below it is $1$. This means that there are no entries at all in $V$, and so $V$ is empty.
[]
false
[]
20-20.2-8
20
20.2
20.2-8
docs/Chap20/20.2.md
Suppose that we designed a $\text{proto-vEB}$ structure in which each _cluster_ array had only $u^{1 / 4}$ elements. What would the running times of each operation be?
There are $u^{3 / 4}$ clusters in each $\text{proto-vEB}$ structure, each with universe size $u^{1 / 4}$, and the summary structure has universe size $u^{3 / 4}$.

- **MEMBER:** $$T(u) = T(u^{1 / 4}) + O(1) = \Theta(\log_4\lg u) = \Theta(\lg\lg u).$$
- **MINIMUM/MAXIMUM/INSERT:** these recurse into both a cluster and the summary, so $$T(u) = T(u^{1 / 4}) + T(u^{3 / 4}) + O(1) = \Theta(\lg u).$$
- **SUCCESSOR/PREDECESSOR/DELETE:** $$T(u) = T(u^{1 / 4}) + T(u^{3 / 4}) + \Theta(\lg u^{1 / 4}) = \Theta(\lg u \lg\lg u).$$
[]
false
[]
20-20.3-1
20
20.3
20.3-1
docs/Chap20/20.3.md
Modify vEB trees to support duplicate keys.
To support duplicate keys, for each $u = 2$ vEB tree, instead of storing just a bit in each of the entries of its array, it should store an integer representing how many elements of that value the vEB contains.
[]
false
[]
20-20.3-2
20
20.3
20.3-2
docs/Chap20/20.3.md
Modify vEB trees to support keys that have associated satellite data.
For any key which is a minimum on some vEB, we'll need to store its satellite data with the min value since the key doesn't appear in the subtree. The rest of the satellite data will be stored alongside the keys of the vEB trees of size $2$. Explicitly, for each non-summary vEB tree, store a pointer in addition to min. If min is $\text{NIL}$, the pointer should also point to $\text{NIL}$. Otherwise, the pointer should point to the satellite data associated with that minimum. In a size $2$ vEB tree, we'll have two additional pointers, which will each point to the minimum's and maximum's satellite data, or $\text{NIL}$ if these don't exist. In the case where $\min = \max$, the pointers will point to the same data.
[]
false
[]
20-20.3-3
20
20.3
20.3-3
docs/Chap20/20.3.md
Write pseudocode for a procedure that creates an empty van Emde Boas tree.
We define the procedure for any $u$ that is a power of $2$. If $u = 2$, create a vEB tree with $u = 2$ and $min = max = \text{NIL}$ (a base-case vEB tree stores nothing else). If $u = 2^k > 2$, create an empty vEB tree called $summary$ with universe size $2^{\lceil k / 2 \rceil}$, and an array called $cluster$ of length $2^{\lceil k / 2 \rceil}$ with each entry initialized to an empty vEB tree with universe size $2^{\lfloor k / 2 \rfloor}$. Lastly, set the $min$ and $max$ attributes to $\text{NIL}$. A pseudocode sketch is given below.
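A minimal sketch of such a procedure; the name and the helpers $\text{upper-sqrt}(u) = 2^{\lceil (\lg u) / 2 \rceil}$ and $\text{lower-sqrt}(u) = 2^{\lfloor (\lg u) / 2 \rfloor}$ are illustrative.

```cpp
CREATE-EMPTY-vEB-TREE(u)
    // u is assumed to be a power of 2
    allocate a new vEB tree V
    V.u = u
    V.min = NIL
    V.max = NIL
    if u > 2
        V.summary = CREATE-EMPTY-vEB-TREE(upper-sqrt(u))
        for i = 0 to upper-sqrt(u) - 1
            V.cluster[i] = CREATE-EMPTY-vEB-TREE(lower-sqrt(u))
    return V
```

This takes $\Theta(u)$ time overall, consistent with the premise of Exercise 20.3-6.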
[]
false
[]
20-20.3-4
20
20.3
20.3-4
docs/Chap20/20.3.md
What happens if you call $\text{VEB-TREE-INSERT}$ with an element that is already in the vEB tree? What happens if you call $\text{VEB-TREE-DELETE}$ with an element that is not in the vEB tree? Explain why the procedures exhibit the behavior that they do. Show how to modify vEB trees and their operations so that we can check in constant time whether an element is present.
Suppose that $x$ is already in $V$ and we call $\text{INSERT}$. Then we can't satisfy lines 1, 3, 6, or 10, so we will enter the else case on line 9 every time until we reach the base case. If $x$ is already in the base-case tree, then we won't change anything. If $x$ is stored in a min attribute of a vEB tree that isn't base-case, however, we will insert a duplicate of it in some base-case tree. Now suppose we call $\text{DELETE}$ when $x$ isn't in $V$. If there is only a single element in $V$, lines 1 through 3 will delete it, regardless of what element it is. To enter the elseif of line 4, $x$ can't be equal to $0$ or $1$ and the vEB tree must be of size $2$. In this case, we delete the max element, regardless of what it is. Since the recursive call always puts us in this case, we always delete an element we shouldn't. To avoid these issues, keep an updated auxiliary array $A$ with $u$ elements. Set $A[i] = 0$ if $i$ is not in the tree, and $1$ if it is. Since we can perform constant-time updates to this array, it won't affect the runtime of any of our operations. When inserting $x$, check first to be sure $A[x] = 0$. If it's not, simply return. If it is, set $A[x] = 1$ and proceed with insert as usual. When deleting $x$, check if $A[x] = 1$. If it isn't, simply return. If it is, set $A[x] = 0$ and proceed with delete as usual, as sketched below.
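Minimal wrapper sketches for this scheme; the wrapper names are illustrative, and $A$ is the auxiliary membership array described above.

```cpp
GUARDED-vEB-TREE-INSERT(V, A, x)
    if A[x] == 0
        A[x] = 1
        VEB-TREE-INSERT(V, x)
```

```cpp
GUARDED-vEB-TREE-DELETE(V, A, x)
    if A[x] == 1
        A[x] = 0
        VEB-TREE-DELETE(V, x)
```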
[]
false
[]
20-20.3-5
20
20.3
20.3-5
docs/Chap20/20.3.md
Suppose that instead of $\sqrt[\uparrow]u$ clusters, each with universe size $\sqrt[\downarrow]u$, we constructed vEB trees to have $u^{1 / k}$ clusters, each with universe size $u^{1 - 1 / k}$, where $k > 1$ is a constant. If we were to modify the operations appropriately, what would be their running times? For the purpose of analysis, assume that $u^{1 / k}$ and $u^{1 - 1 / k}$ are always integers.
Analogously to the analysis of recurrence $\text{(20.4)}$, each operation on such a vEB tree makes at most one recursive call, either on the summary, which has universe size $u^{1 / k}$, or on a single cluster, which has universe size $u^{1 - 1 / k}$, plus $O(1)$ additional work. Hence $$T(u) \le T(u^c) + O(1),$$ where $c = \max(1 / k, 1 - 1 / k) < 1$ is a constant. Letting $m = \lg u$ and $S(m) = T(2^m)$, the recurrence becomes $$S(m) \le S(cm) + O(1).$$ Since the problem size shrinks by the constant factor $c$ at each level, $S(m) = O(\lg m)$, and substituting back gives $T(u) = O(\lg\lg u)$, just as in the original case where we took square roots. (The constant hidden in the $O$-notation grows with $k$, since the number of levels is $\lg m / \lg(1 / c)$.)
[]
false
[]
20-20.3-6
20
20.3
20.3-6
docs/Chap20/20.3.md
Creating a vEB tree with universe size $u$ requires $O(u)$ time. Suppose we wish to explicitly account for that time. What is the smallest number of operations $n$ for which the amortized time of each operation in a vEB tree is $O(\lg\lg u)$?
Set $n = u / \lg\lg u$. Then performing $n$ operations takes $c(u + n\lg\lg u)$ time for some constant $c$. Using aggregate amortized analysis, we divide by $n$ to see that the amortized cost of each operation is $c(u / n + \lg\lg u) = c(\lg\lg u + \lg\lg u) = O(\lg\lg u)$. Thus we need $n \ge u / \lg\lg u$ operations.
[]
false
[]
20-20-1
20
20-1
20-1
docs/Chap20/Problems/20-1.md
This problem explores the space requirements for van Emde Boas trees and suggests a way to modify the data structure to make its space requirement depend on the number $n$ of elements actually stored in the tree, rather than on the universe size $u$. For simplicity, assume that $\sqrt u$ is always an integer. **a.** Explain why the following recurrence characterizes the space requirement $P(u)$ of a van Emde Boas tree with universe size $u$: $$P(u) = (\sqrt u + 1) P(\sqrt u) + \Theta(\sqrt u). \tag{20.5}$$ **b.** Prove that recurrence $\text{(20.5)}$ has the solution $P(u) = O(u)$. In order to reduce the space requirements, let us define a **_reduced-space van Emde Boas tree_**, or **_RS-vEB tree_**, as a vEB tree $V$ but with the following changes: - The attribute $V.cluster$, rather than being stored as a simple array of pointers to vEB trees with universe size $\sqrt u$, is a hash table (see Chapter 11) stored as a dynamic table (see Section 17.4). Corresponding to the array version of $V.cluster$, the hash table stores pointers to RS-vEB trees with universe size $\sqrt u$. To find the $i$th cluster, we look up the key $i$ in the hash table, so that we can find the $i$th cluster by a single search in the hash table. - The hash table stores only pointers to nonempty clusters. A search in the hash table for an empty cluster returns $\text{NIL}$, indicating that the cluster is empty. - The attribute $V.summary$ is $\text{NIL}$ if all clusters are empty. Otherwise, $V.summary$ points to an RS-vEB tree with universe size $\sqrt u$. Because the hash table is implemented with a dynamic table, the space it requires is proportional to the number of nonempty clusters. When we need to insert an element into an empty RS-vEB tree, we create the RS-vEB tree by calling the following procedure, where the parameter u is the universe size of the RS-vEB tree: ```cpp CREATE-NEW-RS-vEB-TREE(u) allocate a new vEB tree V V.u = u V.min = NIL V.max = NIL V.summary = NIL create V.cluster as an empty dynamic hash table return V ``` **c.** Modify the $\text{VEB-TREE-INSERT}$ procedure to produce pseudocode for the procedure $\text{RS-VEB-TREE-INSERT}(V, x)$, which inserts $x$ into the RS-vEB tree $V$, calling $\text{CREATE-NEW-RS-VEB-TREE}$ as appropriate. **d.** Modify the $\text{VEB-TREE-SUCCESSOR}$ procedure to produce pseudocode for the procedure $\text{RS-VEB-TREE-SUCCESSOR}(V, x)$, which returns the successor of $x$ in RS-vEB tree $V$, or $\text{NIL}$ if $x$ has no successor in $V$. **e.** Prove that, under the assumption of simple uniform hashing, your $\text{RS-VEBTREE-INSERT}$ and $\text{RS-VEB-TREE-SUCCESSOR}$ procedures run in $O(\lg\lg u)$ expected time. **f.** Assuming that elements are never deleted from a vEB tree, prove that the space requirement for the RS-vEB tree structure is $O(n)$, where $n$ is the number of elements actually stored in the RS-vEB tree. **g.** RS-vEB trees have another advantage over vEB trees: they require less time to create. How long does it take to create an empty RS-vEB tree?
**a.** Let's look at what has to be stored for a vEB tree. Each vEB tree contains one vEB tree of size $\sqrt[\uparrow]u$ (the summary) and $\sqrt[\uparrow]u$ vEB trees of size $\sqrt[\downarrow]u$ (the clusters). It also stores three numbers, each of order $O(u)$, so they need $\Theta(\lg u)$ space each. Lastly, it needs to store $\sqrt u$ many pointers to the cluster vEB trees. We'll combine these last two contributions, which are $\Theta(\lg u)$ and $\Theta(\sqrt u)$ respectively, into a single term that is $\Theta(\sqrt u)$. This gets us the recurrence $$P(u) = P(\sqrt[\uparrow]u) + \sqrt[\uparrow]u P(\sqrt[\downarrow]u) + \Theta(\sqrt u).$$ Then, since $u = 2^{2m}$ for some integer $m$ (which follows from the assumption that $\sqrt u$ is an integer), this equation becomes

$$
\begin{aligned}
P(u) & = (1 + 2^m)P(2^m) + \Theta(\sqrt u) \\\\
& = (1 + \sqrt u)P(\sqrt u) + \Theta(\sqrt u)
\end{aligned}
$$

as desired.

**b.** We recall from our solution to problem 3-6.e (it seems like so long ago now) that, given a number $n$, a bound on the number of times that we need to take the square root before the value falls below $2$ is $\lg\lg n$. So, if we just unroll the recurrence, we get that $$P(u) \le \Big(\prod_{i = 1}^{\lg\lg u}(u^{1 / 2^i} + 1) \Big) P(2) + \sum_{i = 1}^{\lg\lg u} \Theta(u^{1 / 2^i})(u^{1 / 2^i} + 1).$$ The first product has a highest power of $u$ corresponding to always multiplying the first terms of each binomial. The power in this term is equal to $\sum_{i = 1}^{\lg\lg u} \frac{1}{2^i}$, which is a partial sum of a geometric series whose sum is $1$. This means that the first term is $o(u)$. The order of the $i$th term in the summation appearing in the formula is $u^{2 / 2^i}$. In particular, for $i = 1$ it is $O(u)$, and for any $i > 1$ we have that $2 / 2^i < 1$, so those terms will be $o(u)$. Putting it all together, the largest term appearing is $O(u)$, and so $P(u)$ is $O(u)$.

**c.** For this problem we just use the version written for normal vEB trees, with minor modifications. That is, since there may be entries in $cluster$ that do not yet exist, and $summary$ may not yet have been initialized, just before we try to access either of them we check whether it has been created; if it hasn't, we create it then, calling $\text{CREATE-NEW-RS-vEB-TREE}$ as appropriate.

**d.** As in the previous part, we just wait until just before either of the two structures that may not have been allocated is about to be used, and allocate it if need be.

**e.** Since the initializations performed take only constant time each, those modifications don't ruin the desired runtime bounds that the original algorithms already had. So, our procedures for parts \(c\) and (d) run in $O(\lg\lg u)$ expected time.

**f.** As mentioned in the errata, this part should instead be changed to $O(n\lg n)$ space. When we are adding an element, we may have to add an entry to a dynamic hash table, which means that a constant amount of extra space is needed. If we are adding an element to that table, we also have to add an element to the RS-vEB tree in the summary, but the entry that we add in the cluster will be a constant-size RS-vEB tree. We can charge the cost of that addition to the summary structure to the minimum-element entry that we just created in the cluster table. Since each insertion adds at least one new min entry somewhere, this amortization means that storing the new entry takes only a constant amount of additional space.

**g.** It only takes a constant amount of time to create an empty RS-vEB tree.
This is immediate, since the only dependence on $u$ in $\text{CREATE-NEW-RS-vEB-TREE}(u)$ is on line 2, where $V.u$ is initialized, and this only takes a constant amount of time. Since nothing else in the procedure depends on $u$, it must take a constant amount of time.
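Returning to part \(c\), a minimal sketch of $\text{RS-VEB-TREE-INSERT}$ under the lazy-allocation scheme described above; the hash-table helpers ($\text{HASH-TABLE-SEARCH}$, $\text{HASH-TABLE-INSERT}$) and the $\text{upper-sqrt}$/$\text{lower-sqrt}$ helpers are assumptions, and the control flow otherwise mirrors the textbook $\text{VEB-TREE-INSERT}$.

```cpp
RS-VEB-TREE-INSERT(V, x)
    if V.min == NIL
        V.min = x                       // empty-tree insert
        V.max = x
    else
        if x < V.min
            exchange x with V.min
        if V.u > 2
            if V.summary == NIL
                V.summary = CREATE-NEW-RS-vEB-TREE(upper-sqrt(V.u))
            C = HASH-TABLE-SEARCH(V.cluster, high(x))
            if C == NIL
                C = CREATE-NEW-RS-vEB-TREE(lower-sqrt(V.u))
                HASH-TABLE-INSERT(V.cluster, high(x), C)
            if C.min == NIL
                RS-VEB-TREE-INSERT(V.summary, high(x))
                C.min = low(x)          // inserting into an empty cluster is O(1)
                C.max = low(x)
            else
                RS-VEB-TREE-INSERT(C, low(x))
        if x > V.max
            V.max = x
```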
[ { "lang": "cpp", "code": "> CREATE-NEW-RS-vEB-TREE(u)\n> allocate a new vEB tree V\n> V.u = u\n> V.min = NIL\n> V.max = NIL\n> V.summary = NIL\n> create V.cluster as an empty dynamic hash table\n> return V\n>" } ]
false
[]
20-20-2
20
20-2
20-2
docs/Chap20/Problems/20-2.md
This problem investigates D. Willard's "$y$-fast tries" which, like van Emde Boas trees, perform each of the operations $\text{MEMBER}$, $\text{MINIMUM}$, $\text{MAXIMUM}$, $\text{PREDECESSOR}$, and $\text{SUCCESSOR}$ on elements drawn from a universe with size $u$ in $O(\lg\lg u)$ worst-case time. The $\text{INSERT}$ and $\text{DELETE}$ operations take $O(\lg\lg u)$ amortized time. Like reduced-space van Emde Boas trees (see Problem 20-1), $y$-fast tries use only $O(n)$ space to store $n$ elements. The design of $y$-fast tries relies on perfect hashing (see Section 11.5). As a preliminary structure, suppose that we create a perfect hash table containing not only every element in the dynamic set, but every prefix of the binary representation of every element in the set. For example, if $u = 16$, so that $\lg u = 4$, and $x = 13$ is in the set, then because the binary representation of $13$ is $1101$, the perfect hash table would contain the strings $1$, $11$, $110$, and $1101$. In addition to the hash table, we create a doubly linked list of the elements currently in the set, in increasing order. **a.** How much space does this structure require? **b.** Show how to perform the $\text{MINIMUM}$ and $\text{MAXIMUM}$ operations in $O(1)$ time; the $\text{MEMBER}$, $\text{PREDECESSOR}$, and $\text{SUCCESSOR}$ operations in $O(\lg\lg u)$ time; and the $\text{INSERT}$ and $\text{DELETE}$ operations in $O(\lg u)$ time. To reduce the space requirement to $O(n)$, we make the following changes to the data structure: - We cluster the $n$ elements into $n / \lg u$ groups of size $\lg u$. (Assume for now that $\lg u$ divides $n$.) The first group consists of the $\lg u$ smallest elements in the set, the second group consists of the next $\lg u$ smallest elements, and so on. - We designate a "representative" value for each group. The representative of the $i$th group is at least as large as the largest element in the $i$th group, and it is smaller than every element of the $(i + 1)$st group. (The representative of the last group can be the maximum possible element $u - 1$.) Note that a representative might be a value not currently in the set. - We store the $\lg u$ elements of each group in a balanced binary search tree, such as a red-black tree. Each representative points to the balanced binary search tree for its group, and each balanced binary search tree points to its group's representative. The perfect hash table stores only the representatives, which are also stored in a doubly linked list in increasing order. We call this structure a **_$y$-fast trie_**. **c.** Show that a $y$-fast trie requires only $O(n)$ space to store $n$ elements. **d.** Show how to perform the $\text{MINIMUM}$ and $\text{MAXIMUM}$ operations in $O(\lg\lg u)$ time with a $y$-fast trie. **e.** Show how to perform the $\text{MEMBER}$ operation in $O(\lg\lg u)$ time. **f.** Show how to perform the $\text{PREDECESSOR}$ and $\text{SUCCESSOR}$ operations in $O(\lg\lg u)$ time. **g.** Explain why the $\text{INSERT}$ and $\text{DELETE}$ operations take $\Omega(\lg\lg u)$ time. **h.** Show how to relax the requirement that each group in a $y$-fast trie has exactly $\lg u$ elements to allow $\text{INSERT}$ and $\text{DELETE}$ to run in $O(\lg\lg u)$ amortized time without affecting the asymptotic running times of the other operations.
**a.** By 11.5, the perfect hash table uses $O(m)$ space to store m elements. In a universe of size $u$, each element contributes $\lg u$ entries to the hash table, so the requirement is $O(n\lg u)$. Since the linked list requires $O(n)$, the total space requirement is $O(n\lg u)$. **b.** $\text{MINIMUM}$ and $\text{MAXIMUM}$ are easy. We just examine the first and last elements of the associated doubly linked list. $\text{MEMBER}$ can actually be performed in $O(1)$, since we are simply checking membership in a perfect hash table. $\text{PREDECESSOR}$ and $\text{SUCCESSOR}$ are a bit more complicated. Assume that we have a binary tree in which we store all the elements and their prefixes. When we query the hash table for an element, we get a pointer to that element's location in the binary search tree, if the element is in the tree, and $\text{NIL}$ otherwise. Moreover, assume that every leaf node comes with a pointer to its position in the doubly linked list. Let $x$ be the number whose successor we seek. Begin by performing a binary search of the prefixes in the hash table to find the longest hashed prefix $y$ which matches a prefix of $x$. This takes $O(\lg\lg u)$ since we can check if any prefix is in the hash table in $O(1)$. Observe that $y$ can have at most one child in the BST, because if it had both children then one of these would share a longer prefix with $x$. If the left child is missing, have the left child pointer point to the largest labeled leaf node in the BST which is less than $y$. If the right child is missing, use its pointer to point to the successor of $y$. If $y$ is a leaf node then $y = x$, so we simply follow the pointer to $x$ in the doubly linked list, in $O(1)$, and its successor is the next element on the list. If $y$ is not a leaf node, we follow its predecessor or successor node, depending on which we need. This gives us $O(1)$ access to the proper element, so the total runtime is $O(\lg\lg u)$. $\text{INSERT}$ and $\text{DELETE}$ must take $O(\lg u)$ since we need to insert one entry into the hash table for each of their bits and update the pointers. **c.** The doubly linked list has less than $n$ elements, while the binary search trees contains $n$ nodes, thus a $y$-fast trie requires $O(n)$ space. **d.** $\text{MINIMUM}$: Find the minimum representative in the doubly linked list in $\Theta(1)$, then find the minimum element in the binary search tree in $O(\lg\lg u)$. **e.** Find the smallest representative greater than $k$ with binary searching in $\Theta(\lg\lg u)$, find the element in the binary search tree in $O(\lg\lg u)$. **f.** If we can find the largest representative greater than or equal to $x$, we can determine which binary tree contains the predecessor or successor of $x$. To do this, just call $\text{PREDECESSOR}$ or $\text{SUCCESSOR}$ on $x$ to locate the appropriate tree in $O(\lg\lg u)$. Since the tree has height $\lg u$, we can find the predecessor or successor in $O(\lg\lg u)$. **g.** Same as **_e_**, we need to find the cluster in $\Theta(\lg\lg u)$, then the operations in the binary search tree takes $O(\lg\lg u)$. **h.** We can relax the requirements and only impose the condition that each group has at least $\frac{1}{2}\lg u$ elements and at most $2\lg u$ elements. - If a red-black tree is too big, we split it in half at the median. - If a red-black tree is too small, we merge it with a neighboring tree. - If this causes the merged tree to become too large, we split it at the median. 
- If a tree splits, we create a new representative. - If two trees merge, we delete the lost representative. Any split or merge takes $O(\lg u)$ since we have to insert or delete an element in the data structure storing our representatives, which by part (b) takes $O(\lg u)$. However, we only split a tree after at least $\lg u$ insertions, since the size of one of the red-black trees needs to increase from $\lg u$ to $2\lg u$ and we only merge two trees after at least $(1 / 2)\lg u$ deletions, because the size of the merging tree needs to have decreased from $\lg u$ to $(1 / 2)\lg u$. Thus, the amortized cost of the merges, splits, and updates to representatives is $O(1)$ per insertion or deletion, so the amortized cost is $O(\lg\lg u)$ as desired.
[]
false
[]
21-21.1-1
21
21.1
21.1-1
docs/Chap21/21.1.md
Suppose that $\text{CONNECTED-COMPONENTS}$ is run on the undirected graph $G = (V, E)$, where $V = \\{a, b, c, d, e, f, g, h, i, j, k\\}$ and the edges of $E$ are processed in the order $(d, i)$, $(f, k)$, $(g, i)$, $(b, g)$, $(a, h)$, $(i, j)$, $(d, k)$, $(b, j)$, $(d, f)$, $(g, j)$, $(a, e)$. List the vertices in each connected component after each iteration of lines 3–5.
$$ \begin{array}{c|lllllllllll} \text{Edge processed} & \\\\ \hline initial & \\{a\\} & \\{b\\} & \\{c\\} & \\{d\\} & \\{e\\} & \\{f\\} & \\{g\\} & \\{h\\} & \\{i\\} & \\{j\\} & \\{k\\} \\\\ (d, i) & \\{a\\} & \\{b\\} & \\{c\\} & \\{d, i\\} & \\{e\\} & \\{f\\} & \\{g\\} & \\{h\\} & & \\{j\\} & \\{k\\} \\\\ (f, k) & \\{a\\} & \\{b\\} & \\{c\\} & \\{d, i\\} & \\{e\\} & \\{f, k\\} & \\{g\\} & \\{h\\} & & \\{j\\} & \\\\ (g, i) & \\{a\\} & \\{b\\} & \\{c\\} & \\{d, i, g\\} & \\{e\\} & \\{f, k\\} & & \\{h\\} & & \\{j\\} & \\\\ (b, g) & \\{a\\} & \\{b, d, i, g\\} & \\{c\\} & & \\{e\\} & \\{f, k\\} & & \\{h\\} & & \\{j\\} & \\\\ (a, h) & \\{a, h\\} & \\{b, d, i, g\\} & \\{c\\} & & \\{e\\} & \\{f, k\\} & & & & \\{j\\} & \\\\ (i, j) & \\{a, h\\} & \\{b, d, i, g, j\\} & \\{c\\} & & \\{e\\} & \\{f, k\\} & & & & & \\\\ (d, k) & \\{a, h\\} & \\{b, d, i, g, j, f, k\\} & \\{c\\} & & \\{e\\} & & & & & & \\\\ (b, j) & \\{a, h\\} & \\{b, d, i, g, j, f, k\\} & \\{c\\} & & \\{e\\} & & & & & & \\\\ (d, f) & \\{a, h\\} & \\{b, d, i, g, j, f, k\\} & \\{c\\} & & \\{e\\} & & & & & & \\\\ (g, j) & \\{a, h\\} & \\{b, d, i, g, j, f, k\\} & \\{c\\} & & \\{e\\} & & & & & & \\\\ (a, e) & \\{a, h, e\\} & \\{b, d, i, g, j, f, k\\} & \\{c\\} & & & & & & & & \end{array} $$ So, the connected components that we are left with are $\\{a, h, e\\}$, $\\{b, d, i, g, j, f, k\\}$, and $\\{c\\}$.
[]
false
[]
21-21.1-2
21
21.1
21.1-2
docs/Chap21/21.1.md
Show that after all edges are processed by $\text{CONNECTED-COMPONENTS}$, two vertices are in the same connected component if and only if they are in the same set.
First suppose that two vertices are in the same connected component. Then there exists a path of edges connecting them. If two vertices are connected by a single edge, then they are put into the same set when that edge is processed. At some point during the algorithm every edge of the path will be processed, so all vertices on the path will be in the same set, including the endpoints. Now suppose two vertices $u$ and $v$ wind up in the same set. Since every vertex starts off in its own set, some sequence of edges in $G$ must have resulted in eventually combining the sets containing $u$ and $v$. From among these, there must be a path of edges from $u$ to $v$, implying that $u$ and $v$ are in the same connected component.
[]
false
[]
21-21.1-3
21
21.1
21.1-3
docs/Chap21/21.1.md
During the execution of $\text{CONNECTED-COMPONENTS}$ on an undirected graph $G = (V, E)$ with $k$ connected components, how many times is $\text{FIND-SET}$ called? How many times is $\text{UNION}$ called? Express your answers in terms of $|V|$, $|E|$, and $k$.
$\text{FIND-SET}$ is called twice on line 4, which is executed once per edge in the graph, so $\text{FIND-SET}$ is called $2|E|$ times. Since we start with $|V|$ sets and end with $k$, and each call to $\text{UNION}$ reduces the number of sets by one, we must make $|V| - k$ calls to $\text{UNION}$.
[]
false
[]
21-21.2-1
21
21.2
21.2-1
docs/Chap21/21.2.md
Write pseudocode for $\text{MAKE-SET}$, $\text{FIND-SET}$, and $\text{UNION}$ using the linked-list representation and the weighted-union heuristic. Make sure to specify the attributes that you assume for set objects and list objects.
The three procedures below follow the English description and use the linked-list representation with the weighted-union heuristic: each set object stores $head$, $tail$, and $size$ attributes, each element node stores its key, a pointer to its set object, and a pointer to the next element, and $\text{UNION}$ appends the shorter list onto the longer one.

```cpp
MAKE-SET(x)
// Assume x is a pointer to a node contains .key .set .next
    Create a node S contains .head .tail .size
    x.set = S
    x.next = NIL
    S.head = x
    S.tail = x
    S.size = 1
    return S
```

```cpp
FIND-SET(x)
    return x.set.head
```

```cpp
UNION(x, y)
    S1 = x.set
    S2 = y.set
    if S1.size >= S2.size
        S1.tail.next = S2.head
        z = S2.head
        while z != NIL // z.next is incorrect, so S2.tail node should not be updated
            z.set = S1
            z = z.next
        S1.tail = S2.tail
        S1.size = S1.size + S2.size // Update the size of set
        return S1
    else
        same procedure as above
        change x to y
        change S1 to S2
```
[ { "lang": "cpp", "code": "MAKE-SET(x)\n// Assume x is a pointer to a node contains .key .set .next\n Create a node S contains .head .tail .size\n x.set = S\n x.next = NIL\n S.head = x\n S.tail = x\n S.size = 1\n return S" }, { "lang": "cpp", "code": "FIND-SET(x)\n return x.set.head" }, { "lang": "cpp", "code": "UNION(x, y)\n S1 = x.set\n S2 = y.set\n if S1.size >= S2.size\n S1.tail.next = S2.head\n z = S2.head\n while z != NIL // z.next is incorrect, so S2.tail node should not be updated\n z.set = S1\n z = z.next\n S1.tail = S2.tail\n S1.size = S1.size + S2.size // Update the size of set\n return S1\n else\n same procedure as above\n change x to y\n change S1 to S2" } ]
false
[]
21-21.2-2
21
21.2
21.2-2
docs/Chap21/21.2.md
Show the data structure that results and the answers returned by the $\text{FIND-SET}$ operations in the following program. Use the linked-list representation with the weighted-union heuristic. ```cpp for i = 1 to 16 MAKE-SET(x[i]) for i = 1 to 15 by 2 UNION(x[i], x[i + 1]) for i = 1 to 13 by 4 UNION(x[i], x[i + 2]) UNION(x[1], x[5]) UNION(x[11], x[13]) UNION(x[1], x[10]) FIND-SET(x[2]) FIND-SET(x[9]) ``` Assume that if the sets containing $x_i$ and $x_j$ have the same size, then the operation $\text{UNION}(x_i, x_j)$ appends $x_j$'s list onto $x_i$'s list.
Originally we have $16$ sets, each containing $x_i$. In the following, we'll replace $x_i$ by $i$. After the **for** loop in line 3 we have: $$\\{1,2\\}, \\{3, 4\\}, \\{5, 6\\}, \\{7, 8\\}, \\{9, 10\\}, \\{11, 12\\}, \\{13, 14\\}, \\{15, 16\\}.$$ After the **for** loop on line 5 we have $$\\{1, 2, 3, 4\\}, \\{5, 6, 7, 8\\}, \\{9, 10, 11, 12\\}, \\{13, 14, 15, 16\\}.$$ Line 7 results in: $$\\{1, 2, 3, 4, 5, 6, 7, 8\\}, \\{9, 10, 11, 12\\}, \\{13, 14, 15, 16\\}.$$ Line 8 results in: $$\\{1, 2, 3, 4, 5, 6, 7, 8\\}, \\{9, 10, 11, 12, 13, 14, 15, 16\\}.$$ Line 9 results in: $$\\{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16\\}.$$ $\text{FIND-SET}(x_2)$ and $\text{FIND-SET}(x_9)$ each return pointers to $x_1$. ```cpp MAKE-SET-WU(x) L = MAKE-SET(x) L.size = 1 return L ``` ```cpp UNION-WU(x, y) L1 = x.set L2 = y.set if L1.size ≥ L2.size L = UNION(x, y) else L = UNION(y, x) L.size = L1.size + L2.size return L ```
[ { "lang": "cpp", "code": "> for i = 1 to 16\n> MAKE-SET(x[i])\n> for i = 1 to 15 by 2\n> UNION(x[i], x[i + 1])\n> for i = 1 to 13 by 4\n> UNION(x[i], x[i + 2])\n> UNION(x[1], x[5])\n> UNION(x[11], x[13])\n> UNION(x[1], x[10])\n> FIND-SET(x[2])\n> FIND-SET(x[9])\n>" }, { "lang": "cpp", "code": "MAKE-SET-WU(x)\n L = MAKE-SET(x)\n L.size = 1\n return L" }, { "lang": "cpp", "code": "UNION-WU(x, y)\n L1 = x.set\n L2 = y.set\n if L1.size ≥ L2.size\n L = UNION(x, y)\n else L = UNION(y, x)\n L.size = L1.size + L2.size\n return L" } ]
false
[]
21-21.2-3
21
21.2
21.2-3
docs/Chap21/21.2.md
Adapt the aggregate proof of Theorem 21.1 to obtain amortized time bounds of $O(1)$ for $\text{MAKE-SET}$ and $\text{FIND-SET}$ and $O(\lg n)$ for $\text{UNION}$ using the linked-list representation and the weighted-union heuristic.
During the proof of Theorem 21.1, we concluded that the time for the $n$ $\text{UNION}$ operations to run was at most $O(n\lg n)$. This means that each of them took an amortized time of at most $O(\lg n)$. Also, since there is only a constant amount of actual work in performing $\text{MAKE-SET}$ and $\text{FIND-SET}$ operations, and none of their cost needs to be used to offset the cost of the $\text{UNION}$ operations, they both have $O(1)$ amortized runtime.
[]
false
[]