12-12.4-5
12
12.4
12.4-5 $\star$
docs/Chap12/12.4.md
Consider $\text{RANDOMIZED-QUICKSORT}$ operating on a sequence of $n$ distinct input numbers. Prove that for any constant $k > 0$, all but $O(1 / n^k)$ of the $n!$ input permutations yield an $O(n\lg n)$ running time.
Let $A(n)$ denote the probability that, when quicksorting a list of length $n$, some pivot is selected to not be in the middle $n^{1 - k / 2}$ of the numbers. A single pivot fails to do so with probability $\frac{1}{n^{k / 2}}$. If the two subproblems have sizes $n_1$ and $n_2$ with $n_1 + n_2 = n - 1$, then $$A(n) \le \frac{1}{n^{k / 2}} + A(n_1) + A(n_2).$$ Since the recursion depth is bounded by $O(\lg n)$, let $\\{a_{i, j}\\}_i$ be the sizes of the subproblems remaining at depth $j$; unrolling the recurrence gives $$A(n) \le \frac{1}{n^{k / 2}} + \sum_j\sum_i \frac{1}{a_{i, j}^{k / 2}}.$$
[]
false
[]
12-12-1
12
12-1
12-1
docs/Chap12/Problems/12-1.md
Equal keys pose a problem for the implementation of binary search trees. **a.** What is the asymptotic performance of $\text{TREE-INSERT}$ when used to insert $n$ items with identical keys into an initially empty binary search tree? We propose to improve $\text{TREE-INSERT}$ by testing before line 5 to determine whether $z.key = x.key$ and by testing before line 11 to determine whether $z.key = y.key$. If equality holds, we implement one of the following strategies. For each strategy, find the asymptotic performance of inserting $n$ items with identical keys into an initially empty binary search tree. (The strategies are described for line 5, in which we compare the keys of $z$ and $x$. Substitute $y$ for $x$ to arrive at the strategies for line 11.) **b.** Keep a boolean flag $x.b$ at node $x$, and set $x$ to either $x.left$ or $x.right$ based on the value of $x.b$, which alternates between $\text{FALSE}$ and $\text{TRUE}$ each time we visit $x$ while inserting a node with the same key as $x$. **c.** Keep a list of nodes with equal keys at $x$, and insert $z$ into the list. **d.** Randomly set $x$ to either $x.left$ or $x.right$. (Give the worst-case performance and informally derive the expected running time.)
**a.** Each insertion adds the element as the right child of the bottommost node, because the inequality on line 11 always evaluates to false. The tree degenerates into a chain, so the runtime is $\sum_{i = 1}^n i \in \Theta(n^2)$. **b.** This strategy results in the two child subtrees of any node differing in size by at most one. This means that the height is $\Theta(\lg n)$, so the total runtime is $\sum_{i = 1}^n \lg n \in \Theta(n\lg n)$ (a sketch of this strategy appears below). **c.** This takes only linear time, since the tree itself has height $0$ and a single insertion into a list can be done in constant time. **d.** - **Worst-case:** every random choice is to the right (or all to the left), which results in the same behavior as part (a): $\Theta(n^2)$. - **Expected running time:** when choosing randomly, we pick left roughly half the time, so the tree is roughly balanced and its depth is roughly $\lg n$, giving $\Theta(n\lg n)$.
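Not part of the original solution: a minimal C++ sketch of strategy (b), where the node layout and the name `insertEqual` are invented for illustration. On a tie we alternate the direction and flip the flag, which keeps the sizes of the two subtrees within one of each other.

```cpp
struct Node {
    int key;
    bool b = false;                       // alternates on each equal-key visit
    Node *left = nullptr, *right = nullptr;
};

// Insert z into the subtree rooted at root, alternating sides on equal keys.
Node *insertEqual(Node *root, Node *z) {
    if (root == nullptr) return z;
    Node **link;
    if (z->key < root->key)      link = &root->left;
    else if (z->key > root->key) link = &root->right;
    else {                                // equal keys: alternate sides
        link = root->b ? &root->right : &root->left;
        root->b = !root->b;
    }
    *link = insertEqual(*link, z);
    return root;
}
```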
[]
false
[]
12-12-2
12
12-2
12-2
docs/Chap12/Problems/12-2.md
Given two strings $a = a_0a_1 \ldots a_p$ and $b = b_0b_1 \ldots b_q$, where each $a_i$ and each $b_j$ is in some ordered set of characters, we say that string $a$ is **_lexicographically less than_** string $b$ if either 1. there exists an integer $j$, where $0 \le j \le \min(p, q)$, such that $a_i = b_i$ for all $i = 0, 1, \ldots j - 1$ and $a_j < b_j$, or 2. $p < q$ and $a_i = b_i$ for all $i = 0, 1, \ldots, p$. For example, if $a$ and $b$ are bit strings, then $10100 < 10110$ by rule 1 (letting $j = 3$) and $10100 < 101000$ by rule 2. This ordering is similar to that used in English-language dictionaries. The **_radix tree_** data structure shown in Figure 12.5 stores the bit strings $1011, 10, 011, 100$, and $0$. When searching for a key $a = a_0a_1 \ldots a_p$, we go left at a node of depth $i$ if $a_i = 0$ and right if $a_i = 1$. Let $S$ be a set of distinct bit strings whose lengths sum to $n$. Show how to use a radix tree to sort $S$ lexicographically in $\Theta(n)$ time. For the example in Figure 12.5, the output of the sort should be the sequence $0, 011, 10, 100, 1011$.
(Removed)
[]
false
[]
12-12-3
12
12-3
12-3
docs/Chap12/Problems/12-3.md
In this problem, we prove that the average depth of a node in a randomly built binary search tree with $n$ nodes is $O(\lg n)$. Although this result is weaker than that of Theorem 12.4, the technique we shall use reveals a surprising similarity between the building of a binary search tree and the execution of $\text{RANDOMIZED-QUICKSORT}$ from Section 7.3. We define the **_total path length_** $P(T)$ of a binary tree $T$ as the sum, over all nodes $x$ in $T$, of the depth of node $x$, which we denote by $d(x, T)$. **a.** Argue that the average depth of a node in $T$ is $$\frac{1}{n} \sum_{x \in T} d(x, T) = \frac{1}{n} P(T).$$ Thus, we wish to show that the expected value of $P(T)$ is $O(n\lg n)$. **b.** Let $T_L$ and $T_R$ denote the left and right subtrees of tree $T$, respectively. Argue that if $T$ has $n$ nodes, then $$P(T) = P(T_L) + P(T_R) + n - 1.$$ **c.** Let $P(n)$ denote the average total path length of a randomly built binary search tree with n nodes. Show that $$P(n) = \frac{1}{n} \sum_{i = 0}^{n - 1} (P(i) + P(n - i - 1) + n - 1).$$ **d.** Show how to rewrite $P(n)$ as $$P(n) = \frac{2}{n} \sum_{k = 1}^{n - 1} P(k) + \Theta(n).$$ **e.** Recalling the alternative analysis of the randomized version of quicksort given in Problem 7-3, conclude that $P(n) = O(n\lg n)$. At each recursive invocation of quicksort, we choose a random pivot element to partition the set of elements being sorted. Each node of a binary search tree partitions the set of elements that fall into the subtree rooted at that node. **f.** Describe an implementation of quicksort in which the comparisons to sort a set of elements are exactly the same as the comparisons to insert the elements into a binary search tree. (The order in which comparisons are made may differ, but the same comparisons must occur.)
(Removed)
[]
false
[]
12-12-4
12
12-4
12-4
docs/Chap12/Problems/12-4.md
Let $b_n$ denote the number of different binary trees with $n$ nodes. In this problem, you will find a formula for $b_n$, as well as an asymptotic estimate. **a.** Show that $b_0 = 1$ and that, for $n \ge 1$, $$b_n = \sum_{k = 0}^{n - 1} b_k b_{n - 1 - k}.$$ **b.** Referring to Problem 4-4 for the definition of a generating function, let $B(x)$ be the generating function $$B(x) = \sum_{n = 0}^\infty b_n x^n.$$ Show that $B(x) = xB(x)^2 + 1$, and hence one way to express $B(x)$ in closed form is $$B(x) = \frac{1}{2x} (1 - \sqrt{1 - 4x}).$$ The **_Taylor expansion_** of $f(x)$ around the point $x = a$ is given by $$f(x) = \sum_{k = 0}^\infty \frac{f^{(k)}(a)}{k!} (x - a)^k,$$ where $f^{(k)}(x)$ is the $k$th derivative of $f$ evaluated at $x$. **c.** Show that $$b_n = \frac{1}{n + 1} \binom{2n}{n}$$ (the $n$th **_Catalan number_**) by using the Taylor expansion of $\sqrt{1 - 4x}$ around $x = 0$. (If you wish, instead of using the Taylor expansion, you may use the generalization of the binomial expansion (C.4) to nonintegral exponents $n$, where for any real number $n$ and for any integer $k$, we interpret $\binom{n}{k}$ to be $n(n - 1) \cdots (n - k + 1) / k!$ if $k \ge 0$, and $0$ otherwise.) **d.** Show that $$b_n = \frac{4^n}{\sqrt{\pi}n^{3 / 2}} (1 + O(1 / n)).$$
**a.** Every binary tree with $n \ge 1$ nodes consists of a root, a left subtree with $k$ nodes, and a right subtree with $n - 1 - k$ nodes, for some $0 \le k \le n - 1$; summing over $k$ gives the recurrence. The empty tree is the unique tree with $0$ nodes, so $b_0 = 1$. **b.** $$ \begin{aligned} B(x)^2 & = (b_0 x^0 + b_1 x^1 + b_2 x^2 + \cdots)^2 \\\\ & = b_0^2 x^0 + (b_0 b_1 + b_1 b_0) x^1 + (b_0 b_2 + b_1 b_1 + b_2 b_0) x^2 + \cdots \\\\ & = \sum_{k = 0}^0 b_k b_{0 - k} x^0 + \sum_{k = 0}^1 b_k b_{1 - k} x^1 + \sum_{k = 0}^2 b_k b_{2 - k} x^2 + \cdots \end{aligned} $$ $$ \begin{aligned} xB(x)^2 + 1 & = 1 + \sum_{k = 0}^0 b_k b_{1 - 1 - k} x^1 + \sum_{k = 0}^1 b_k b_{2 - 1 - k} x^2 + \sum_{k = 0}^2 b_k b_{3 - 1 - k} x^3 + \cdots \\\\ & = 1 + b_1 x^1 + b_2 x^2 + b_3 x^3 + \cdots \\\\ & = b_0 x^0 + b_1 x^1 + b_2 x^2 + b_3 x^3 + \cdots \\\\ & = \sum_{n = 0}^\infty b_n x^n \\\\ & = B(x). \end{aligned} $$ To verify the closed form, substitute it back: $$ \begin{aligned} x B(x)^2 + 1 & = x \cdot \frac{1}{4x^2} (1 + 1 - 4x - 2\sqrt{1 - 4x}) + 1 \\\\ & = \frac{1}{4x} (2 - 2\sqrt{1 - 4x}) - 1 + 1 \\\\ & = \frac{1}{2x} (1 - \sqrt{1 - 4x}) \\\\ & = B(x). \end{aligned} $$ **c.** Let $f(x) = \sqrt{1 - 4x}$. Differentiating repeatedly, the magnitude of the constant in front of $f^{(k)}(0)$ is $$ \begin{aligned} 2 \cdot (1 \cdot 2) \cdot (3 \cdot 2) \cdot (5 \cdot 2) \cdots & = 2^k \cdot \prod_{i = 0}^{k - 2} (2i + 1) \\\\ & = 2^k \cdot \frac{(2(k - 1))!}{2^{k - 1}(k - 1)!} \\\\ & = \frac{2(2(k - 1))!}{(k - 1)!}. \end{aligned} $$ $$f(x) = 1 - 2x - 2x^2 - 4 x^3 - 10x^4 - 28x^5 - \cdots.$$ The magnitude of the coefficient of $x^k$ is $\frac{2(2(k - 1))!}{k!(k - 1)!}$. $$ \begin{aligned} B(x) & = \frac{1}{2x}(1 - f(x)) \\\\ & = 1 + x + 2x^2 + 5x^3 + 14x^4 + \cdots \\\\ & = \sum_{n = 0}^\infty \frac{(2n)!}{(n + 1)!n!} x^n \\\\ & = \sum_{n = 0}^\infty \frac{1}{n + 1} \frac{(2n)!}{n!n!} x^n \\\\ & = \sum_{n = 0}^\infty \frac{1}{n + 1} \binom{2n}{n} x^n. \end{aligned} $$ $$b_n = \frac{1}{n + 1} \binom{2n}{n}.$$ **d.** By Stirling's approximation, $$ \begin{aligned} b_n & = \frac{1}{n + 1} \frac{(2n)!}{n!n!} \\\\ & \approx \frac{1}{n + 1} \frac{\sqrt{4 \pi n}(2n / e)^{2n}}{2 \pi n (n / e)^{2n}} \\\\ & = \frac{1}{n + 1} \frac{4^n}{\sqrt{\pi n} } \\\\ & = (\frac{1}{n} + (\frac{1}{n + 1} - \frac{1}{n})) \frac{4^n}{\sqrt{\pi n}} \\\\ & = (\frac{1}{n} - \frac{1}{n^2 + n}) \frac{4^n}{\sqrt{\pi n}} \\\\ & = \frac{1}{n} (1 - \frac{1}{n + 1}) \frac{4^n}{\sqrt{\pi n}} \\\\ & = \frac{4^n}{\sqrt{\pi}n^{3 / 2}} (1 + O(1 / n)). \end{aligned} $$
[]
false
[]
13-13.1-1
13
13.1
13.1-1
docs/Chap13/13.1.md
In the style of Figure 13.1(a), draw the complete binary search tree of height $3$ on the keys $\\{1, 2, \ldots, 15\\}$. Add the $\text{NIL}$ leaves and color the nodes in three different ways such that the black-heights of the resulting red-black trees are $2$, $3$, and $4$.
- Complete binary search tree of height $3$: ![](../img/13.1-1-1.png) - Red-black tree with black-height $2$: ![](../img/13.1-1-2.png) - Red-black tree with black-height $3$: ![](../img/13.1-1-3.png) - Red-black tree with black-height $4$: ![](../img/13.1-1-4.png)
[]
true
[ "../img/13.1-1-1.png", "../img/13.1-1-2.png", "../img/13.1-1-3.png", "../img/13.1-1-4.png" ]
13-13.1-2
13
13.1
13.1-2
docs/Chap13/13.1.md
Draw the red-black tree that results after $\text{TREE-INSERT}$ is called on the tree in Figure 13.1 with key $36$. If the inserted node is colored red, is the resulting tree a red-black tree? What if it is colored black?
- If the inserted node is colored red, the tree doesn't satisfy property 4, because $35$ will be the parent of $36$ and both are colored red. - If the inserted node is colored black, the tree doesn't satisfy property 5, because there will be two paths from node $38$ to $T.nil$ which contain different numbers of black nodes. We don't draw the invalid trees; instead, we draw the corrected red-black tree: ![](../img/13.1-2-1.png)
[]
true
[ "../img/13.1-2-1.png" ]
13-13.1-3
13
13.1
13.1-3
docs/Chap13/13.1.md
Let us define a **_relaxed red-black tree_** as a binary search tree that satisfies red-black properties 1, 3, 4, and 5. In other words, the root may be either red or black. Consider a relaxed red-black tree $T$ whose root is red. If we color the root of $T$ black but make no other changes to $T$, is the resulting tree a red-black tree?
Yes, it is. - Property 1 is trivially satisfied, since only one node is changed and it is not changed to some mysterious third color. - Property 3 is trivially satisfied, since no new leaves are introduced. - Property 4 is satisfied: no red node is introduced, and recoloring the root black cannot create a red node with a red child. - Property 5 is satisfied: the only paths whose number of black nodes changes are those starting at the root, and all of them increase by exactly $1$, so they remain equal.
[]
false
[]
13-13.1-4
13
13.1
13.1-4
docs/Chap13/13.1.md
Suppose that we "absorb" every red node in a red-black tree into its black parent, so that the children of the red node become children of the black parent. (Ignore what happens to the keys.) What are the possible degrees of a black node after all its red children are absorbed? What can you say about the depths of the leaves of the resulting tree?
The degree of a node in a rooted tree is the number of its children (see Section B.5.2). Given this definition, the possible degrees are $0$ (for a leaf), and $2$, $3$, or $4$, depending on whether the black node absorbed zero, one, or two red children. Since only red nodes are removed and every simple path from the root to a leaf contains the same number of black nodes (property 5), all leaves of the resulting tree have the same depth, and each leaf's depth shrinks by a factor of at most $2$.
[]
false
[]
13-13.1-5
13
13.1
13.1-5
docs/Chap13/13.1.md
Show that the longest simple path from a node $x$ in a red-black tree to a descendant leaf has length at most twice that of the shortest simple path from node $x$ to a descendant leaf.
Suppose we have the longest simple path $(a_1, a_2, \dots, a_s)$ and the shortest simple path $(b_1, b_2, \dots, b_t)$. By property 5, they have equal numbers of black nodes. By property 4, neither contains a repeated red node, so at most $\left\lfloor \frac{s - 1}{2} \right\rfloor$ of the nodes in the longest path are red. This means that at least $\left\lceil \frac{s + 1}{2} \right\rceil$ are black, so $t \ge \left\lceil \frac{s + 1}{2} \right\rceil$. Therefore, if, by way of contradiction, we had $s > 2t$, then $t \ge \left\lceil \frac{s + 1}{2} \right\rceil \ge \left\lceil \frac{2t + 2}{2} \right\rceil = t + 1$, a contradiction.
[]
false
[]
13-13.1-6
13
13.1
13.1-6
docs/Chap13/13.1.md
What is the largest possible number of internal nodes in a red-black tree with black-height $k$? What is the smallest possible number?
- The largest is a complete tree of height $2k$ whose levels alternate between black and red nodes, which has $2^{2k} - 1$ internal nodes. - The smallest is a complete tree of height $k$ with all black nodes, which has $2^k - 1$ internal nodes.
[]
false
[]
13-13.1-7
13
13.1
13.1-7
docs/Chap13/13.1.md
Describe a red-black tree on $n$ keys that realizes the largest possible ratio of red internal nodes to black internal nodes. What is this ratio? What tree has the smallest possible ratio, and what is the ratio?
- The largest ratio is $2$, achieved when every black internal node has two red children. - The smallest ratio is $0$, achieved when every internal node is black.
[]
false
[]
13-13.2-1
13
13.2
13.2-1
docs/Chap13/13.2.md
Write pseudocode for $\text{RIGHT-ROTATE}$.
```cpp RIGHT-ROTATE(T, y) x = y.left y.left = x.right if x.right != T.nil x.right.p = y x.p = y.p if y.p == T.nil T.root = x else if y == y.p.right y.p.right = x else y.p.left = x x.right = y y.p = x ```
[ { "lang": "cpp", "code": "RIGHT-ROTATE(T, y)\n x = y.left\n y.left = x.right\n if x.right != T.nil\n x.right.p = y\n x.p = y.p\n if y.p == T.nil\n T.root = x\n else if y == y.p.right\n y.p.right = x\n else y.p.left = x\n x.right = y\n y.p = x" } ]
false
[]
13-13.2-2
13
13.2
13.2-2
docs/Chap13/13.2.md
Argue that in every $n$-node binary search tree, there are exactly $n - 1$ possible rotations.
Every node can rotate with its parent, only the root does not have a parent, therefore there are $n - 1$ possible rotations.
[]
false
[]
13-13.2-3
13
13.2
13.2-3
docs/Chap13/13.2.md
Let $a$, $b$, and $c$ be arbitrary nodes in subtrees $\alpha$, $\beta$, and $\gamma$, respectively, in the left tree of Figure 13.2. How do the depths of $a$, $b$, and $c$ change when a left rotation is performed on node $x$ in the figure?
- $a$: increase by $1$. - $b$: unchanged. - $c$: decrease by $1$.
[]
false
[]
13-13.2-4
13
13.2
13.2-4
docs/Chap13/13.2.md
Show that any arbitrary $n$-node binary search tree can be transformed into any other arbitrary $n$-node binary search tree using $O(n)$ rotations. ($\textit{Hint:}$ First show that at most $n - 1$ right rotations suffice to transform the tree into a right-going chain.)
Consider transforming an arbitrary $n$-node binary search tree into a right-going chain as follows: let the root and all successive right children of the root form the initial chain. For any node $x$ which is a left child of a node on the chain, a single right rotation on the parent of $x$ adds that node to the chain without removing any elements from it. Thus, we can convert any binary search tree into a right-going chain with at most $n − 1$ right rotations. Let $r_1, r_2, \dots, r_k$ be the sequence of rotations required to convert some binary search tree $T_1$ into a right-going chain, and let $s_1, s_2, \dots, s_m$ be the sequence of rotations required to convert some other binary search tree $T_2$ to that chain. Then $k < n$ and $m < n$, and we can convert $T_1$ to $T_2$ by performing the sequence $r_1, r_2, \dots, r_k, s_m', s_{m - 1}', \dots, s_1'$, where $s_i'$ is the rotation opposite to $s_i$. Since $k + m < 2n$, the number of rotations required is $O(n)$.
[]
false
[]
13-13.2-5
13
13.2
13.2-5 $\star$
docs/Chap13/13.2.md
We say that a binary search tree $T_1$ can be **_right-converted_** to binary search tree $T_2$ if it is possible to obtain $T_2$ from $T_1$ via a series of calls to $\text{RIGHT-ROTATE}$. Give an example of two trees $T_1$ and $T_2$ such that $T_1$ cannot be right-converted to $T_2$. Then, show that if a tree $T_1$ can be right-converted to $T_2$, it can be right-converted using $O(n^2)$ calls to $\text{RIGHT-ROTATE}$.
We can use $O(n)$ calls to rotate the node which is the root in $T_2$ to $T_1$'s root, then use the same operation in the two subtrees. There are $n$ nodes, therefore the upper bound is $O(n^2)$.
[]
false
[]
13-13.3-1
13
13.3
13.3-1
docs/Chap13/13.3.md
In line 16 of $\text{RB-INSERT}$, we set the color of the newly inserted node $z$ to red. Observe that if we had chosen to set $z$'s color to black, then property 4 of a red-black tree would not be violated. Why didn't we choose to set $z$'s color to black?
If we chose to set the color of $z$ to black then we would be violating property 5 of being a red-black tree. Because any path from the root to a leaf under $z$ would have one more black node than the paths to the other leaves.
[]
false
[]
13-13.3-2
13
13.3
13.3-2
docs/Chap13/13.3.md
Show the red-black trees that result after successively inserting the keys $41, 38, 31, 12, 19, 8$ into an initially empty red-black tree.
- insert $41$: ![](../img/13.3-2-1.png) - insert $38$: ![](../img/13.3-2-2.png) - insert $31$: ![](../img/13.3-2-3.png) - insert $12$: ![](../img/13.3-2-4.png) - insert $19$: ![](../img/13.3-2-5.png) - insert $8$: ![](../img/13.3-2-6.png)
[]
true
[ "../img/13.3-2-1.png", "../img/13.3-2-2.png", "../img/13.3-2-3.png", "../img/13.3-2-4.png", "../img/13.3-2-5.png", "../img/13.3-2-6.png" ]
13-13.3-3
13
13.3
13.3-3
docs/Chap13/13.3.md
Suppose that the black-height of each of the subtrees $\alpha, \beta, \gamma, \delta, \epsilon$ in Figures 13.5 and 13.6 is $k$. Label each node in each figure with its black-height to verify that the indicated transformation preserves property 5.
(Removed)
[]
false
[]
13-13.3-4
13
13.3
13.3-4
docs/Chap13/13.3.md
Professor Teach is concerned that $\text{RB-INSERT-FIXUP}$ might set $T.nil.color$ to $\text{RED}$, in which case the test in line 1 would not cause the loop to terminate when $z$ is the root. Show that the professor's concern is unfounded by arguing that $\text{RB-INSERT-FIXUP}$ never sets $T.nil.color$ to $\text{RED}$.
First observe that $\text{RB-INSERT-FIXUP}$ only modifies the child of a node if it is already $\text{RED}$, so we will never modify a child which is set to $T.nil$. We just need to check that the parent of the root is never set to $\text{RED}$. Since the root and the parent of the root are automatically black, if $z$ is at depth less than $2$, the **while** loop will be broken. We only modify colors of nodes at most two levels above $z$, so the only case we need to worry about is when $z$ is at depth $2$. In this case we risk modifying the root to be $\text{RED}$, but this is handled in line 16. When $z$ is updated, it will be either the root or a child of the root. Either way, the root and the parent of the root are still $\text{BLACK}$, so the **while** condition is violated, making it impossible to modify $T.nil$ to be $\text{RED}$.
[]
false
[]
13-13.3-5
13
13.3
13.3-5
docs/Chap13/13.3.md
Consider a red-black tree formed by inserting $n$ nodes with $\text{RB-INSERT}$. Argue that if $n > 1$, the tree has at least one red node.
- **Case 1:** $z$ and $z.p.p$ are $\text{RED}$; if the loop terminates here, $z$ cannot be the root, so $z$ remains $\text{RED}$ after the fixup. - **Case 2:** $z$ and $z.p$ are $\text{RED}$, and after the rotation $z.p$ cannot be the root, so $z.p$ remains $\text{RED}$ after the fixup. - **Case 3:** $z$ is $\text{RED}$ and $z$ cannot be the root, so $z$ remains $\text{RED}$ after the fixup. Therefore, there is always at least one red node.
[]
false
[]
13-13.3-6
13
13.3
13.3-6
docs/Chap13/13.3.md
Suggest how to implement $\text{RB-INSERT}$ efficiently if the representation for red-black trees includes no storage for parent pointers.
Use a stack to record the simple path from the root down to the inserted node; the node's parent is then the top element of the stack, its grandparent the next entry, and so on. In the fixup: - **Case 1:** we pop $z.p$ and $z.p.p$. - **Case 2:** we pop $z.p$ and $z.p.p$, then push $z.p.p$ and $z$. - **Case 3:** we pop $z.p$, $z.p.p$ and $z.p.p.p$, then push $z.p$. A sketch of the descent appears below.
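A minimal C++ sketch of the descent, assuming parent-free nodes; the names `Node` and `descend` are invented here. After the loop, the top of the stack plays the role of $z.p$, the entry below it of $z.p.p$, and the fixup cases pop and push entries as described above.

```cpp
#include <stack>

struct Node {
    int key;
    bool red = true;
    Node *left = nullptr, *right = nullptr;   // no parent pointer
};

// Record the search path from the root down to the insertion point of z.
std::stack<Node*> descend(Node *root, Node *z) {
    std::stack<Node*> path;                   // path.top() will be z's parent
    Node *x = root;
    while (x != nullptr) {
        path.push(x);
        x = (z->key < x->key) ? x->left : x->right;
    }
    return path;
}
```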
[]
false
[]
13-13.4-1
13
13.4
13.4-1
docs/Chap13/13.4.md
Argue that after executing $\text{RB-DELETE-FIXUP}$, the root of the tree must be black.
- **Case 1:** transforms into case 2, 3, or 4. - **Case 2:** if the loop terminates, the root of the subtree (the new $x$) is set to black. - **Case 3:** transforms into case 4. - **Case 4:** the root (the new $x$) is set to black.
[]
false
[]
13-13.4-2
13
13.4
13.4-2
docs/Chap13/13.4.md
Argue that if in $\text{RB-DELETE}$ both $x$ and $x.p$ are red, then property 4 is restored by the call to $\text{RB-DELETE-FIXUP}(T, x)$.
Suppose that both $x$ and $x.p$ are red in $\text{RB-DELETE}$. This can only happen in the else-case of line 9. Since we are deleting from a red-black tree, the other child of $y.p$, which becomes $x$'s sibling in the call to $\text{RB-TRANSPLANT}$ on line 14, must be black, so $x$ is the only red child of $x.p$. The **while**-loop condition of $\text{RB-DELETE-FIXUP}(T, x)$ is immediately violated, so we simply set $x.color = \text{BLACK}$, restoring property 4.
[]
false
[]
13-13.4-3
13
13.4
13.4-3
docs/Chap13/13.4.md
In Exercise 13.3-2, you found the red-black tree that results from successively inserting the keys $41, 38, 31, 12, 19, 8$ into an initially empty tree. Now show the red-black trees that result from the successive deletion of the keys in the order $8, 12, 19, 31, 38, 41$.
- initial: ![](../img/13.4-3-1.png) - delete $8$: ![](../img/13.4-3-2.png) - delete $12$: ![](../img/13.4-3-3.png) - delete $19$: ![](../img/13.4-3-4.png) - delete $31$: ![](../img/13.4-3-5.png) - delete $38$: ![](../img/13.4-3-6.png) - delete $41$: ![](../img/13.4-3-7.png)
[]
true
[ "../img/13.4-3-1.png", "../img/13.4-3-2.png", "../img/13.4-3-3.png", "../img/13.4-3-4.png", "../img/13.4-3-5.png", "../img/13.4-3-6.png", "../img/13.4-3-7.png" ]
13-13.4-4
13
13.4
13.4-4
docs/Chap13/13.4.md
In which lines of the code for $\text{RB-DELETE-FIXUP}$ might we examine or modify the sentinel $T.nil$?
When the node $y$ in $\text{RB-DELETE}$ has no children, we get $x = T.nil$, so line 2 of $\text{RB-DELETE-FIXUP}$ examines the sentinel. When the root node is deleted, $x = T.nil$ and the root at that moment is $x$, so line 23 of $\text{RB-DELETE-FIXUP}$ sets $x$ (that is, the sentinel) to black.
[]
false
[]
13-13.4-5
13
13.4
13.4-5
docs/Chap13/13.4.md
In each of the cases of Figure 13.7, give the count of black nodes from the root of the subtree shown to each of the subtrees $\alpha, \beta, \ldots, \zeta$, and verify that each count remains the same after the transformation. When a node has a $color$ attribute $c$ or $c'$, use the notation $\text{count}\(c\)$ or $\text{count}(c')$ symbolically in your count.
Our count will include the root (if it is black). - **Case 1:** For each subtree, it is $2$ both before and after. - **Case 2:** - For $\alpha$ and $\beta$, it is $1 + \text{count}\(c\)$ in both cases. - For the rest of the subtrees, it goes from $2 + \text{count}\(c\)$ to $1 + \text{count}\(c\)$. This decrease in the count for the other subtrees is handled by $x$ then representing an additional black. - **Case 3:** - For $\epsilon$ and $\zeta$, it is $2 + \text{count}\(c\)$ both before and after. - For all the other subtrees, it is $1 + \text{count}\(c\)$ both before and after. - **Case 4:** - For $\alpha$ and $\beta$, it goes from $1 + \text{count}\(c\)$ to $2 + \text{count}\(c\)$. - For $\gamma$ and $\delta$, it is $1 + \text{count}\(c\) + \text{count}(c')$ both before and after. - For $\epsilon$ and $\zeta$, it is $1 + \text{count}\(c\)$ both before and after. The increase in the count for $\alpha$ and $\beta$ is because $x$ previously represented an extra black.
[]
false
[]
13-13.4-6
13
13.4
13.4-6
docs/Chap13/13.4.md
Professors Skelton and Baron are concerned that at the start of case 1 of $\text{RB-DELETE-FIXUP}$, the node $x.p$ might not be black. If the professors are correct, then lines 5–6 are wrong. Show that $x.p$ must be black at the start of case 1, so that the professors have nothing to worry about.
At the start of case 1 we have set $w$ to be the sibling of $x$. We check on line 4 that $w.color == red$, which means that the parent of $x$ and $w$ cannot be red. Otherwise property 4 is violated. Thus, their concerns are unfounded.
[]
false
[]
13-13.4-7
13
13.4
13.4-7
docs/Chap13/13.4.md
Suppose that a node $x$ is inserted into a red-black tree with $\text{RB-INSERT}$ and then is immediately deleted with $\text{RB-DELETE}$. Is the resulting red-black tree the same as the initial red-black tree? Justify your answer.
No, the red-black tree will not necessarily be the same. - Example 1: - initial: ![](../img/13.4-7-1.png) - insert $1$: ![](../img/13.4-7-2.png) - delete $1$: ![](../img/13.4-7-3.png) - Example 2: - initial: ![](../img/13.4-7-4.png) - insert $1$: ![](../img/13.4-7-5.png) - delete $1$: ![](../img/13.4-7-6.png)
[]
true
[ "../img/13.4-7-1.png", "../img/13.4-7-2.png", "../img/13.4-7-3.png", "../img/13.4-7-4.png", "../img/13.4-7-5.png", "../img/13.4-7-6.png" ]
13-13-1
13
13-1
13-1
docs/Chap13/Problems/13-1.md
During the course of an algorithm, we sometimes find that we need to maintain past versions of a dynamic set as it is updated. We call such a set **_persistent_**. One way to implement a persistent set is to copy the entire set whenever it is modified, but this approach can slow down a program and also consume much space. Sometimes, we can do much better. Consider a persistent set $S$ with the operations $\text{INSERT}$, $\text{DELETE}$, and $\text{SEARCH}$, which we implement using binary search trees as shown in Figure 13.8(a). We maintain a separate root for every version of the set. In order to insert the key $5$ into the set, we create a new node with key $5$. This node becomes the left child of a new node with key $7$, since we cannot modify the existing node with key $7$. Similarly, the new node with key $7$ becomes the left child of a new node with key $8$ whose right child is the existing node with key $10$. The new node with key $8$ becomes, in turn, the right child of a new root $r'$ with key $4$ whose left child is the existing node with key $3$. We thus copy only part of the tree and share some of the nodes with the original tree, as shown in Figure 13.8(b). Assume that each tree node has the attributes $key$, $left$, and $right$ but no parent. (See also Exercise 13.3-6.) **a.** For a general persistent binary search tree, identify the nodes that we need to change to insert a key $k$ or delete a node $y$. **b.** Write a procedure $\text{PERSISTENT-TREE-INSERT}$ that, given a persistent tree $T$ and a key $k$ to insert, returns a new persistent tree $T'$ that is the result of inserting $k$ into $T$. **c.** If the height of the persistent binary search tree $T$ is $h$, what are the time and space requirements of your implementation of $\text{PERSISTENT-TREE-INSERT}$? (The space requirement is proportional to the number of new nodes allocated.) **d.** Suppose that we had included the parent attribute in each node. In this case, $\text{PERSISTENT-TREE-INSERT}$ would need to perform additional copying. Prove that $\text{PERSISTENT-TREE-INSERT}$ would then require $\Omega(n)$ time and space, where $n$ is the number of nodes in the tree. **e.** Show how to use red-black trees to guarantee that the worst-case running time and space are $O(\lg n)$ per insertion or deletion.
(Removed)
[]
false
[]
13-13-2
13
13-2
13-2
docs/Chap13/Problems/13-2.md
The **_join_** operation takes two dynamic sets $S_1$ and $S_2$ and an element $x$ such that for any $x_1 \in S_1$ and $x_2 \in S_2$, we have $x_1.key \le x.key \le x_2.key$. It returns a set $S = S_1 \cup \\{x\\} \cup S_2$. In this problem, we investigate how to implement the join operation on red-black trees. **a.** Given a red-black tree $T$, let us store its black-height as the new attribute $T.bh$. Argue that $\text{RB-INSERT}$ and $\text{RB-DELETE}$ can maintain the $bh$ attribute without requiring extra storage in the nodes of the tree and without increasing the asymptotic running times. Show that while descending through $T$, we can determine the black-height of each node we visit in $O(1)$ time per node visited. We wish to implement the operation $\text{RB-JOIN}(T_1, x, T_2)$, which destroys $T_1$ and $T_2$ and returns a red-black tree $T = T_1 \cup \\{x\\} \cup T_2$. Let $n$ be the total number of nodes in $T_1$ and $T_2$. **b.** Assume that $T_1.bh \ge T_2.bh$. Describe an $O(\lg n)$-time algorithm that finds a black node $y$ in $T_1$ with the largest key from among those nodes whose black-height is $T_2.bh$. **c.** Let $T_y$ be the subtree rooted at $y$. Describe how $T_y \cup \\{x\\} \cup T_2$ can replace $T_y$ in $O(1)$ time without destroying the binary-search-tree property. **d.** What color should we make $x$ so that red-black properties 1, 3, and 5 are maintained? Describe how to enforce properties 2 and 4 in $O(\lg n)$ time. **e.** Argue that no generality is lost by making the assumption in part (b). Describe the symmetric situation that arises when $T_1.bh \le T_2.bh$. **f.** Argue that the running time of $\text{RB-JOIN}$ is $O(\lg n)$.
**a.** - Initialize: $bh = 0$. - $\text{RB-INSERT}$: if in the last step the root is red, we increase $bh$ by $1$. - $\text{RB-DELETE}$: if $x$ is the root, we decrease $bh$ by $1$. - Each node: while descending a simple path, decrease $bh$ by $1$ each time we pass a black node; the current value is the black-height of the node being visited, so it takes $O(1)$ time per node. **b.** Move to the right child if the node has a right child, otherwise move to the left child. If the node is black, we decrease $bh$ by $1$. Repeat until $bh = T_2.bh$; the black node reached is $y$. **c.** The time complexity is $O(1)$. ```cpp RB-JOIN'(T[y], x, T[2]) TRANSPLANT(T[y], x) x.left = T[y] x.right = T[2] T[y].parent = x T[2].parent = x ``` **d.** Red. Call $\text{RB-INSERT-FIXUP}(T_1, x)$. The time complexity is $O(\lg n)$. **e.** The same: if $T_1.bh \le T_2.bh$, we can use the above algorithm symmetrically, descending the leftmost path of $T_2$ instead. **f.** $O(1) + O(\lg n) = O(\lg n)$.
[ { "lang": "cpp", "code": "RB-JOIN'(T[y], x, T[2])\n TRANSPLANT(T[y], x)\n x.left = T[y]\n x.right = T[2]\n T[y].parent = x\n T[2].parent = x" } ]
false
[]
13-13-3
13
13-3
13-3
docs/Chap13/Problems/13-3.md
An **_AVL tree_** is a binary search tree that is **_height balanced_**: for each node $x$, the heights of the left and right subtrees of $x$ differ by at most $1$. To implement an AVL tree, we maintain an extra attribute in each node: $x.h$ is the height of node $x$. As for any other binary search tree $T$, we assume that $T.root$ points to the root node. **a.** Prove that an AVL tree with $n$ nodes has height $O(\lg n)$. ($\textit{Hint:}$ Prove that an AVL tree of height $h$ has at least $F_h$ nodes, where $F_h$ is the $h$th Fibonacci number.) **b.** To insert into an AVL tree, we first place a node into the appropriate place in binary search tree order. Afterward, the tree might no longer be height balanced. Specifically, the heights of the left and right children of some node might differ by $2$. Describe a procedure $\text{BALANCE}(x)$, which takes a subtree rooted at $x$ whose left and right children are height balanced and have heights that differ by at most $2$, i.e., $|x.right.h - x.left.h| \le 2$, and alters the subtree rooted at $x$ to be height balanced. ($\textit{Hint:}$ Use rotations.) **c.** Using part (b), describe a recursive procedure $\text{AVL-INSERT}(x, z)$ that takes a node $x$ within an AVL tree and a newly created node $z$ (whose key has already been filled in), and adds $z$ to the subtree rooted at $x$, maintaining the property that $x$ is the root of an AVL tree. As in $\text{TREE-INSERT}$ from Section 12.3, assume that $z.key$ has already been filled in and that $z.left = \text{NIL}$ and $z.right = \text{NIL}$; also assume that $z.h = 0$. Thus, to insert the node $z$ into the AVL tree $T$, we call $\text{AVL-INSERT}(T.root, z)$. **d.** Show that $\text{AVL-INSERT}$, run on an $n$-node AVL tree, takes $O(\lg n)$ time and performs $O(1)$ rotations.
**a.** Let $T(h)$ denote the minimum number of nodes in an AVL tree of height $h$. A tree of height $h$ has a child of height $h - 1$, and by the AVL property the other child has height at least $h - 2$. Taking the other child as small as allowed, we have that $$T(h) \ge T(h - 1) + T(h - 2) + 1,$$ where the $+1$ counts the root node. We get the inequality in the opposite direction by taking trees that achieve the minimum number of nodes at heights $h - 1$ and $h - 2$ and joining them under a new root. So, we have that $$T(h) = T(h - 1) + T(h - 2) + 1, \text{ where } T(0) = 0, T(1) = 1.$$ Dropping the $+1$ only decreases the right-hand side, so $T(h)$ is bounded below by the Fibonacci recurrence with the same initial conditions, i.e., $T(h) \ge F_h$. So, recalling equation $\text{(3.25)}$, we have that $$\Big\lfloor \frac{\phi^h}{\sqrt 5} + \frac{1}{2} \Big\rfloor = F_h \le T(h) \le n.$$ Rearranging for $h$, we have $$ \begin{aligned} \frac{\phi^h}{\sqrt 5} - \frac{1}{2} & \le n \\\\ \phi^h & \le \sqrt 5(n + \frac{1}{2}) \\\\ h & \le \frac{\lg \sqrt 5 + \lg(n + \frac{1}{2})}{\lg\phi} \in O(\lg n). \end{aligned} $$ **b.** Let $\text{UNBAL}(x)$ denote $x.left.h - x.right.h$. Then the algorithm $\text{BALANCE}$ below does what is desired. Note that because we rotate a single element at a time, the value of $\text{UNBAL}(x)$ can change by at most $2$ in each step, and it must eventually change as the subtree that was shorter gains elements. The recursive calls to the children fix any violation of the AVL property that the rotations may have caused. ```cpp BALANCE(x) while |UNBAL(x)| > 1 if UNBAL(x) > 0 RIGHT-ROTATE(T, x) else LEFT-ROTATE(T, x) BALANCE(x.left) BALANCE(x.right) ``` **c.** The algorithm $\text{AVL-INSERT}(x, z)$ given after part (d) correctly maintains the BST property by the way we search for the correct spot to insert $z$. It also maintains the AVL property: after inserting the element, it checks all of the ancestors of $z$, since those are the only nodes whose subtrees changed, fixes any violation, and updates the height attribute of every node for which it may have changed. **d.** Both **while** loops run for only $O(h) = O(\lg n)$ iterations. Also, only a single rotation will occur in the second **while** loop, because performing it brings the height of the subtree rooted there back down to what it was before the insertion; all of its ancestors then have unchanged heights, so no further rebalancing is required. ```cpp AVL-INSERT(x, z) w = x while w != NIL y = w if z.key > y.key w = w.right else w = w.left if z.key > y.key y.right = z if y.left == NIL y.h = 1 else y.left = z if y.right == NIL y.h = 1 while y != x y.h = 1 + max(y.left.h, y.right.h) if y.left.h > y.right.h + 1 RIGHT-ROTATE(T, y) if y.right.h > y.left.h + 1 LEFT-ROTATE(T, y) y = y.p ```
[ { "lang": "cpp", "code": "BALANCE(x)\n while |UNBAL(x)| > 1\n if UNBAL(x) > 0\n RIGHT-ROTATE(T, x)\n else\n LEFT-ROTATE(T, x)\n BALANCE(x.left)\n BALANCE(x.right)" }, { "lang": "cpp", "code": "AVL-INSERT(x, z)\n w = x\n while w != NIL\n y = w\n if z.key > y.key\n w = w.right\n else w = w.left\n if z.key > y.key\n y.right = z\n if y.left == NIL\n y.h = 1\n else\n y.left = z\n if y.right == NIL\n y.h = 1\n while y != x\n y.h = 1 + max(y.left.h, y.right.h)\n if y.left.h > y.right.h + 1\n RIGHT-ROTATE(T, y)\n if y.right.h > y.left.h + 1\n LEFT-ROTATE(T, y)\n y = y.p" } ]
false
[]
13-13-4
13
13-4
13-4
docs/Chap13/Problems/13-4.md
If we insert a set of $n$ items into a binary search tree, the resulting tree may be horribly unbalanced, leading to long search times. As we saw in Section 12.4, however, randomly built binary search trees tend to be balanced. Therefore, one strategy that, on average, builds a balanced tree for a fixed set of items would be to randomly permute the items and then insert them in that order into the tree. What if we do not have all the items at once? If we receive the items one at a time, can we still randomly build a binary search tree out of them? We will examine a data structure that answers this question in the affirmative. A **_treap_** is a binary search tree with a modified way of ordering the nodes. Figure 13.9 shows an example. As usual, each node $x$ in the tree has a key value $x.key$. In addition, we assign $x.priority$, which is a random number chosen independently for each node. We assume that all priorities are distinct and also that all keys are distinct. The nodes of the treap are ordered so that the keys obey the binary-search-tree property and the priorities obey the min-heap order property: - If $v$ is a left child of $u$, then $v.key < u.key$. - If $v$ is a right child of $u$, then $v.key > u.key$. - If $v$ is a child of $u$, then $v.priority > u.priority$. (This combination of properties is why the tree is called a "treap": it has features of both a binary search tree and a heap.) It helps to think of treaps in the following way. Suppose that we insert nodes $x_1, x_2, \ldots,x_n$, with associated keys, into a treap. Then the resulting treap is the tree that would have been formed if the nodes had been inserted into a normal binary search tree in the order given by their (randomly chosen) priorities, i.e., $x_i.priority < x_j.priority$ means that we had inserted $x_i$ before $x_j$. **a.** Show that given a set of nodes $x_1, x_2, \ldots, x_n$, with associated keys and priorities, all distinct, the treap associated with these nodes is unique. **b.** Show that the expected height of a treap is $\Theta(\lg n)$, and hence the expected time to search for a value in the treap is $\Theta(\lg n)$. Let us see how to insert a new node into an existing treap. The first thing we do is assign to the new node a random priority. Then we call the insertion algorithm, which we call $\text{TREAP-INSERT}$, whose operation is illustrated in Figure 13.10. **c.** Explain how $\text{TREAP-INSERT}$ works. Explain the idea in English and give pseudocode. ($\textit{Hint:}$ Execute the usual binary-search-tree insertion procedure and then perform rotations to restore the min-heap order property.) **d.** Show that the expected running time of $\text{TREAP-INSERT}$ is $\Theta(\lg n)$. $\text{TREAP-INSERT}$ performs a search and then a sequence of rotations. Although these two operations have the same expected running time, they have different costs in practice. A search reads information from the treap without modifying it. In contrast, a rotation changes parent and child pointers within the treap. On most computers, read operations are much faster than write operations. Thus we would like $\text{TREAP-INSERT}$ to perform few rotations. We will show that the expected number of rotations performed is bounded by a constant. In order to do so, we will need some definitions, which Figure 13.11 depicts. The **_left spine_** of a binary search tree $T$ is the simple path from the root to the node with the smallest key. 
In other words, the left spine is the simple path from the root that consists of only left edges. Symmetrically, the **_right spine_** of $T$ is the simple path from the root consisting of only right edges. The **_length_** of a spine is the number of nodes it contains. **e.** Consider the treap $T$ immediately after $\text{TREAP-INSERT}$ has inserted node $x$. Let $C$ be the length of the right spine of the left subtree of $x$. Let $D$ be the length of the left spine of the right subtree of $x$. Prove that the total number of rotations that were performed during the insertion of $x$ is equal to $C + D$. We will now calculate the expected values of $C$ and $D$. Without loss of generality, we assume that the keys are $1, 2, \ldots, n$ since we are comparing them only to one another. For nodes $x$ and $y$ in treap $T$, where $y \ne x$, let $k = x.key$ and $i = y.key$. We define indicator random variables $$X\_{ik} = \text{I\\{\$y\$ is in the right spine of the left subtree of \$x\$\\}}.$$ **f.** Show that $X_{ik} = 1$ if and only if $y.priority > x.priority$, $y.key < x.key$, and, for every $z$ such that $y.key < z.key < x.key$, we have $y.priority < z.priority$. **g.** Show that $$ \begin{aligned} \Pr\\{X_{ik} = 1\\} & = \frac{(k - i - 1)!}{(k - i + 1)!} \\\\ & = \frac{1}{(k - i + 1)(k - i)}. \\\\ \end{aligned} $$ **h.** Show that $$ \begin{aligned} \text E[C] & = \sum_{j = 1}^{k - 1} \frac{1}{j(j + 1)} \\\\ & = 1 - \frac{1}{k}. \end{aligned} $$ **i.** Use a symmetry argument to show that $$\text E[D] = 1 - \frac{1}{n - k + 1}.$$ **j.** Conclude that the expected number of rotations performed when inserting a node into a treap is less than $2$.
**a.** The node with the smallest priority must be the root, and the root's key partitions the remaining nodes into the left and right subtrees by key. Within each subset, the node with smallest priority must in turn be the root of that subtree, so by induction the treap for a given set of nodes is unique. **b.** Since the priorities are assigned independently at random, every ordering of the nodes by priority is equally likely, and the treap is exactly the binary search tree obtained by inserting the nodes in that order. Hence a treap is, in essence, a randomly built binary search tree, and its expected height is $\Theta(\lg n)$. **c.** First insert the node as usual with the binary-search-tree insertion procedure. Then rotate the new node up (a right rotation when it is a left child, a left rotation when it is a right child) until its parent no longer has a larger priority; a sketch appears below. **d.** Each rotation takes $\Theta(1)$ time and there are at most $h$ rotations, so the expected running time is $\Theta(\lg n)$. **e.** Each left rotation performed during the insertion increases $C$ by $1$, and each right rotation increases $D$ by $1$; since $x$ starts as a leaf with $C = D = 0$, the total number of rotations is $C + D$. **f.** The first two conditions are immediate. For the third, if some $z$ with $y.key < z.key < x.key$ had $z.priority < y.priority$, the min-heap property would place $z$ above $y$ and cut $y$ off from the right spine of $x$'s left subtree. **g.** By part (f), $X_{ik} = 1$ exactly when, among the $k - i + 1$ elements with keys $i, i + 1, \ldots, k$, the element $x$ has the smallest priority and $y$ has the second smallest; hence $$\Pr\\{X_{ik} = 1\\} = \frac{(k - i - 1)!}{(k - i + 1)!} = \frac{1}{(k - i + 1)(k - i)}.$$ **h.** Substituting $j = k - i$, $$ \begin{aligned} \text E[C] & = \sum_{i = 1}^{k - 1} \frac{1}{(k - i + 1)(k - i)} \\\\ & = \sum_{j = 1}^{k - 1} \frac{1}{j(j + 1)} \\\\ & = \sum_{j = 1}^{k - 1} \Big(\frac{1}{j} - \frac{1}{j + 1}\Big) \\\\ & = 1 - \frac{1}{k}. \end{aligned} $$ **i.** By symmetry (reverse the order of the keys, exchanging the roles of the left and right spines), $$\text E[D] = 1 - \frac{1}{n - k + 1}.$$ **j.** By part (e), the number of rotations is $C + D$. By linearity of expectation, $\text E[C + D] = 2 - \frac{1}{k} - \frac{1}{n - k + 1} < 2$ for any choice of $k$.
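A sketch of $\text{TREAP-INSERT}$ as described in part (c), in C++; the recursive formulation, node layout, and use of `std::rand()` for priorities are choices made for this sketch, not part of the original text. We insert by key as in an ordinary BST, then rotations on the way back up restore the min-heap order on priorities.

```cpp
#include <cstdlib>

struct Node {
    int key, priority;
    Node *left = nullptr, *right = nullptr;
};

Node *rotateRight(Node *y) { Node *x = y->left;  y->left  = x->right; x->right = y; return x; }
Node *rotateLeft (Node *x) { Node *y = x->right; x->right = y->left;  y->left  = x; return y; }

// Ordinary BST insertion by key; a rotation at each level fixes heap order.
Node *treapInsert(Node *root, int key) {
    if (root == nullptr)
        return new Node{key, std::rand()};
    if (key < root->key) {
        root->left = treapInsert(root->left, key);
        if (root->left->priority < root->priority)    // heap order violated
            root = rotateRight(root);
    } else {
        root->right = treapInsert(root->right, key);
        if (root->right->priority < root->priority)
            root = rotateLeft(root);
    }
    return root;
}
```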
[]
false
[]
14-14.1-1
14
14.1
14.1-1
docs/Chap14/14.1.md
Show how $\text{OS-SELECT}(T.root, 10)$ operates on the red-black tree $T$ of Figure 14.1.
- $26: r = 13, i = 10$, go left. - $17: r = 8, i = 10$, go right. - $21: r = 3, i = 2$, go left. - $19: r = 1, i = 2$, go right. - $20: r = 1, i = 1$, choose $20$.
[]
false
[]
14-14.1-2
14
14.1
14.1-2
docs/Chap14/14.1.md
Show how $\text{OS-RANK}(T, x)$ operates on the red-black tree $T$ of Figure 14.1 and the node $x$ with $x.key = 35$.
- $35: r = 1$. - $38: r = 1$. - $30: r = r + 2 = 3$. - $41: r = 3$. - $26: r = r + 13 = 16$.
[]
false
[]
14-14.1-3
14
14.1
14.1-3
docs/Chap14/14.1.md
Write a nonrecursive version of $\text{OS-SELECT}$.
```cpp OS-SELECT(x, i) r = x.left.size + 1 while r != i if i < r x = x.left else x = x.right i = i - r r = x.left.size + 1 return x ```
[ { "lang": "cpp", "code": "OS-SELECT(x, i)\n r = x.left.size + 1\n while r != i\n if i < r\n x = x.left\n else x = x.right\n i = i - r\n r = x.left.size + 1\n return x" } ]
false
[]
14-14.1-4
14
14.1
14.1-4
docs/Chap14/14.1.md
Write a recursive procedure $\text{OS-KEY-RANK}(T, k)$ that takes as input an order-statistic tree $T$ and a key $k$ and returns the rank of $k$ in the dynamic set represented by $T$. Assume that the keys of $T$ are distinct.
The recursion should descend to subtrees rooted at the child nodes, so we phrase the procedure on a node $x$; the initial call is $\text{OS-KEY-RANK}(T.root, k)$. ```cpp OS-KEY-RANK(x, k) if k == x.key return x.left.size + 1 else if k < x.key return OS-KEY-RANK(x.left, k) else return x.left.size + 1 + OS-KEY-RANK(x.right, k) ```
[ { "lang": "cpp", "code": "OS-KEY-RANK(x, k)\n if k == x.key\n return x.left.size + 1\n else if k < x.key\n return OS-KEY-RANK(x.left, k)\n else return x.left.size + 1 + OS-KEY-RANK(x.right, k)" } ]
false
[]
14-14.1-5
14
14.1
14.1-5
docs/Chap14/14.1.md
Given an element $x$ in an $n$-node order-statistic tree and a natural number $i$, how can we determine the $i$th successor of $x$ in the linear order of the tree in $O(\lg n)$ time?
The desired result is $\text{OS-SELECT}(T, \text{OS-RANK}(T, x) + i)$. This has runtime $O(h)$, which by the properties of red black trees, is $O(\lg n)$.
[]
false
[]
14-14.1-6
14
14.1
14.1-6
docs/Chap14/14.1.md
Observe that whenever we reference the size attribute of a node in either $\text{OS-SELECT}$ or $\text{OS-RANK}$, we use it only to compute a rank. Accordingly, suppose we store in each node its rank in the subtree of which it is the root. Show how to maintain this information during insertion and deletion. (Remember that these two operations can cause rotations.)
First perform the usual BST insertion procedure on $z$, the node to be inserted. Then add $1$ to the rank of every node on the path from the root to $z$ such that $z$ is in the left subtree of that node. Since the added node is a leaf and has no subtrees, its own rank is $1$. Throughout, a node's rank within its own subtree is $left.size + 1$. When a left rotation is performed on $x$, its rank within its subtree remains the same, and the rank of $x.right$ increases by the rank of $x$. If we perform a right rotation on a node $y$, its rank decreases by $y.left.rank$, and the rank of $y.left$ remains unchanged. For deletion of $z$, decrement the rank of every node on the path from the root to $z$ such that $z$ is in the left subtree of that node. For any rotations, use the same rules as before.
[]
false
[]
14-14.1-7
14
14.1
14.1-7
docs/Chap14/14.1.md
Show how to use an order-statistic tree to count the number of inversions (see Problem 2-4) in an array of size $n$ in time $O(n\lg n)$.
Building a red-black tree takes $O(n\lg n)$ time, so we count inversions while building the tree: each time we $\text{INSERT}$ element $A[i]$, use $\text{OS-RANK}$ to compute its rank $r$ among the $i$ elements inserted so far; the $i - r$ previously inserted elements larger than $A[i]$ each form an inversion with it. Summing these counts over all insertions gives the total (a sketch appears below).
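To make the counting idea concrete, here is a sketch using a size-augmented but unbalanced BST; the names are invented, and a balanced (red-black) tree would be needed to actually guarantee the $O(n\lg n)$ bound. Every time the descent branches left, the current node and its right subtree are previously inserted elements larger than the new one, i.e., inversions with it.

```cpp
#include <cstdio>

struct Node {
    int key, size = 1;
    Node *left = nullptr, *right = nullptr;
    Node(int k) : key(k) {}
};

int sz(Node *x) { return x ? x->size : 0; }

// Insert key; add to inv the number of existing elements greater than key.
Node *insertCount(Node *x, int key, long long &inv) {
    if (x == nullptr) return new Node(key);
    x->size++;
    if (key < x->key) {
        inv += 1 + sz(x->right);          // x and its right subtree are larger
        x->left = insertCount(x->left, key, inv);
    } else {
        x->right = insertCount(x->right, key, inv);
    }
    return x;
}

int main() {
    int a[] = {2, 3, 8, 6, 1};            // the array from Problem 2-4
    long long inv = 0;
    Node *root = nullptr;
    for (int v : a) root = insertCount(root, v, inv);
    std::printf("%lld\n", inv);           // prints 5
}
```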
[]
false
[]
14-14.1-8
14
14.1
14.1-8 $\star$
docs/Chap14/14.1.md
Consider $n$ chords on a circle, each defined by its endpoints. Describe an $O(n\lg n)$-time algorithm to determine the number of pairs of chords that intersect inside the circle. (For example, if the $n$ chords are all diameters that meet at the center, then the correct answer is $\binom{n}{2}$.) Assume that no two chords share an endpoint.
Sort the endpoints in clockwise order and assign each a unique label; for chord $i$, call its two endpoints $u_i$ and $v_i$ with $u_i < v_i$. Sweep through the endpoints in clockwise order: when we meet $u_i$, add it to the order-statistic tree; when we meet $v_i$, use the tree to count how many endpoints currently stored are larger than $u_i$ (this is the number of open chords intersecting chord $i$), then remove $u_i$. Each of the $2n$ steps costs $O(\lg n)$, for $O(n\lg n)$ total (a sketch appears below).
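A compact C++ sketch of the sweep; the rank queries are served here by a binary indexed tree rather than an order-statistic tree, a deliberate substitution that gives the same $O(n\lg n)$ bound. The input conventions (endpoints pre-sorted clockwise and numbered $1 \ldots 2n$, with `chord[p-1]` giving the chord id of endpoint $p$) are assumptions of this sketch.

```cpp
#include <vector>
using std::vector;

struct BIT {                              // binary indexed tree over positions
    vector<int> t;
    BIT(int n) : t(n + 1, 0) {}
    void add(int i, int d) { for (; i < (int)t.size(); i += i & -i) t[i] += d; }
    int prefix(int i) const { int s = 0; for (; i > 0; i -= i & -i) s += t[i]; return s; }
};

long long countIntersections(const vector<int> &chord, int n) {
    vector<int> firstSeen(n, 0);          // position of u_i; 0 = not seen yet
    BIT open(2 * n);
    long long total = 0;
    for (int p = 1; p <= 2 * n; p++) {
        int c = chord[p - 1];
        if (firstSeen[c] == 0) {          // endpoint u_i: open the chord
            open.add(p, 1);
            firstSeen[c] = p;
        } else {                          // endpoint v_i: count chords opened
            int u = firstSeen[c];         // strictly inside (u_i, v_i)
            total += open.prefix(p) - open.prefix(u);
            open.add(u, -1);              // close the chord
        }
    }
    return total;
}
```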
[]
false
[]
14-14.2-1
14
14.2
14.2-1
docs/Chap14/14.2.md
Show, by adding pointers to the nodes, how to support each of the dynamic-set queries $\text{MINIMUM}$, $\text{MAXIMUM}$, $\text{SUCCESSOR}$, and $\text{PREDECESSOR}$ in $O(1)$ worst-case time on an augmented order-statistic tree. The asymptotic performance of other operations on order-statistic trees should not be affected.
- **MINIMUM:** A pointer points to the minimum node; if that node is deleted, move the pointer to its successor. - **MAXIMUM:** Similar to $\text{MINIMUM}$. - **SUCCESSOR:** Every node records its successor; insertion and deletion update these pointers just as in a linked list. - **PREDECESSOR:** Similar to $\text{SUCCESSOR}$.
[]
false
[]
14-14.2-2
14
14.2
14.2-2
docs/Chap14/14.2.md
Can we maintain the black-heights of nodes in a red-black tree as attributes in the nodes of the tree without affecting the asymptotic performance of any of the red-black tree operations? Show how, or argue why not. How about maintaining the depths of nodes?
Since the black-height of a node depends only on the black-height and color of its children, Theorem 14.1 implies that we can maintain the attribute without affecting the asymptotic performance of the other red-black tree operations. The same is not true for maintaining the depths of nodes: if we delete the root of a tree, we could have to update the depths of $O(n)$ nodes, making the $\text{DELETE}$ operation asymptotically slower than before.
[]
false
[]
14-14.2-3
14
14.2
14.2-3 $\star$
docs/Chap14/14.2.md
Let $\otimes$ be an associative binary operator, and let $a$ be an attribute maintained in each node of a red-black tree. Suppose that we want to include in each node $x$ an additional attribute $f$ such that $x.f = x_1.a \otimes x_2.a \otimes \cdots \otimes x_m.a$, where $x_1, x_2, \ldots ,x_m$ is the inorder listing of nodes in the subtree rooted at $x$. Show how to update the $f$ attributes in $O(1)$ time after a rotation. Modify your argument slightly to apply it to the $size$ attributes in order-statistic trees.
A rotation changes the node sets only of the subtrees rooted at the two nodes that move, so only their $f$ attributes must be recomputed, each in $O(1)$ time via $x.f = x.left.f \otimes x.a \otimes x.right.f$. For a left rotation on $x$ with $y = x.right$: afterward, the subtree rooted at $y$ contains exactly the nodes previously rooted at $x$, so set $y.f = x.f$ and then recompute $x.f$ from its new children; a right rotation is symmetric. The $size$ attribute fits the same pattern with $\otimes = +$ and $a = 1$ at every node: set $y.size = x.size$, then $x.size = x.left.size + x.right.size + 1$ (a sketch appears below).
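A minimal C++ sketch of the $O(1)$ update after $\text{LEFT-ROTATE}(T, x)$, assuming every node's `left`/`right` point at a sentinel (never null) whose `f` holds the identity of the operator; `op` and the node layout are illustrative choices.

```cpp
struct Node {
    int a, f;
    Node *left, *right;   // point at a sentinel node, never nullptr
};

int op(int u, int v) { return u + v; }   // any associative operator

// Called after the pointers of the left rotation are in place; y was x->right.
void fixupAfterLeftRotate(Node *x, Node *y) {
    y->f = x->f;          // y's subtree now holds exactly x's old node set
    x->f = op(op(x->left->f, x->a), x->right->f);
}
```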
[]
false
[]
14-14.2-4
14
14.2
14.2-4 $\star$
docs/Chap14/14.2.md
We wish to augment red-black trees with an operation $\text{RB-ENUMERATE}(x, a, b)$ that outputs all the keys $k$ such that $a \le k \le b$ in a red-black tree rooted at $x$. Describe how to implement $\text{RB-ENUMERATE}$ in $\Theta(m+\lg n)$ time, where $m$ is the number of keys that are output and $n$ is the number of internal nodes in the tree. ($\textit{Hint:}$ You do not need to add new attributes to the red-black tree.)
- $\Theta(\lg n)$: Find the node with the smallest key that is greater than or equal to $a$. - $\Theta(m)$: Using the $O(1)$ successor pointers of Exercise 14.2-1, repeatedly follow successors, outputting each key until one exceeds $b$; this outputs the $m$ keys.
[]
false
[]
14-14.3-1
14
14.3
14.3-1
docs/Chap14/14.3.md
Write pseudocode for $\text{LEFT-ROTATE}$ that operates on nodes in an interval tree and updates the $max$ attributes in $O(1)$ time.
Add two lines at the end of $\text{LEFT-ROTATE}$ from Section 13.2, after the pointer updates (here $y$ was $x.right$ before the rotation): ```cpp y.max = x.max x.max = max(x.high, x.left.max, x.right.max) ```
[ { "lang": "cpp", "code": " y.max = x.max\n x.max = max(x.high, x.left.max, x.right.max)" } ]
false
[]
14-14.3-2
14
14.3
14.3-2
docs/Chap14/14.3.md
Rewrite the code for $\text{INTERVAL-SEARCH}$ so that it works properly when all intervals are open.
```cpp INTERVAL-SEARCH(T, i) x = T.root while x != T.nil and i does not overlap x.int if x.left != T.nil and x.left.max > i.low x = x.left else x = x.right return x ```
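For clarity, the only change relative to the closed-interval test is that both comparisons become strict; a tiny C++ helper (names assumed) makes this explicit:

```cpp
struct Interval { double low, high; };

// Open intervals (low, high) overlap iff each starts before the other ends.
bool overlapsOpen(const Interval &a, const Interval &b) {
    return a.low < b.high && b.low < a.high;
}
```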
[ { "lang": "cpp", "code": "INTERVAL-SEARCH(T, i)\n x = T.root\n while x != T.nil and i does not overlap x.int\n if x.left != T.nil and x.left.max > i.low\n x = x.left\n else x = x.right\n return x" } ]
false
[]
14-14.3-3
14
14.3
14.3-3
docs/Chap14/14.3.md
Describe an efficient algorithm that, given an interval $i$ , returns an interval overlapping $i$ that has the minimum low endpoint, or $T.nil$ if no such interval exists.
Consider the usual interval search, but instead of breaking out of the loop as soon as we find an overlap, keep track of the overlapping interval with the minimum low endpoint seen so far and continue down the tree. After the loop terminates, return the stored interval, or $T.nil$ if no overlap was found.
[]
false
[]
14-14.3-4
14
14.3
14.3-4
docs/Chap14/14.3.md
Given an interval tree $T$ and an interval $i$, describe how to list all intervals in $T$ that overlap $i$ in $O(\min(n, k \lg n))$ time, where $k$ is the number of intervals in the output list. ($\textit{Hint:}$ One simple method makes several queries, modifying the tree between queries. A slightly more complicated method does not modify the tree.)
```cpp INTERVALS-SEARCH(T, x, i) let list be an empty array if i overlaps x.int list.APPEND(x) if x.left != T.nil and x.left.max ≥ i.low list = list.APPEND(INTERVALS-SEARCH(T, x.left, i)) if x.right != T.nil and x.int.low ≤ i.high and x.right.max ≥ i.low list = list.APPEND(INTERVALS-SEARCH(T, x.right, i)) return list ```
[ { "lang": "cpp", "code": "INTERVALS-SEARCH(T, x, i)\n let list be an empty array\n if i overlaps x.int\n list.APPEND(x)\n if x.left != T.nil and x.left.max ≥ i.low\n list = list.APPEND(INTERVALS-SEARCH(T, x.left, i))\n if x.right != T.nil and x.int.low ≤ i.high and x.right.max ≥ i.low\n list = list.APPEND(INTERVALS-SEARCH(T, x.right, i))\n return list" } ]
false
[]
14-14.3-5
14
14.3
14.3-5
docs/Chap14/14.3.md
Suggest modifications to the interval-tree procedures to support the new operation $\text{INTERVAL-SEARCH-EXACTLY}(T, i)$, where $T$ is an interval tree and $i$ is an interval. The operation should return a pointer to a node $x$ in $T$ such that $x.int.low = i.low$ and $x.int.high = i.high$, or $T.nil$ if $T$ contains no such node. All operations, including $\text{INTERVAL-SEARCH-EXACTLY}$, should run in $O(\lg n)$ time on an $n$-node interval tree.
Search for nodes which has exactly the same low value. ```cpp INTERVAL-SEARCH-EXACTLY(T, i) x = T.root while x != T.nil and i not exactly overlap x if i.high > x.max x = T.nil else if i.low < x.low x = x.left else if i.low > x.low x = x.right else x = T.nil return x ```
[ { "lang": "cpp", "code": "INTERVAL-SEARCH-EXACTLY(T, i)\n x = T.root\n while x != T.nil and i not exactly overlap x\n if i.high > x.max\n x = T.nil\n else if i.low < x.low\n x = x.left\n else if i.low > x.low\n x = x.right\n else x = T.nil\n return x" } ]
false
[]
14-14.3-6
14
14.3
14.3-6
docs/Chap14/14.3.md
Show how to maintain a dynamic set $Q$ of numbers that supports the operation $\text{MIN-GAP}$, which gives the magnitude of the difference of the two closest numbers in $Q$. For example, if $Q = \\{1, 5, 9, 15, 18, 22 \\}$, then $\text{MIN-GAP}(Q)$ returns $18 - 15 = 3$, since $15$ and $18$ are the two closest numbers in $Q$. Make the operations $\text{INSERT}$, $\text{DELETE}$, $\text{SEARCH}$, and $\text{MIN-GAP}$ as efficient as possible, and analyze their running times.
Store the elements in a red-black tree, where the key of each node is the number itself. The auxiliary attributes stored at a node $x$ are the minimum gap between elements in the subtree rooted at $x$, the maximum value in the subtree rooted at $x$, and the minimum value in the subtree rooted at $x$; the min gap of an empty subtree is $\infty$. Since we can determine the attributes of a node $x$ using only the key at $x$ and the attributes of $x.left$ and $x.right$, Theorem 14.1 implies that we can maintain the values in all nodes during insertion and deletion without asymptotically affecting their $O(\lg n)$ performance (the recomputation is sketched below). $\text{SEARCH}$ is the usual $O(\lg n)$ tree search. For $\text{MIN-GAP}$, just read the min gap at the root, in constant time.
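A sketch of the constant-time recomputation the argument relies on, in C++; the field names and the convention that an absent child contributes $\min = +\infty$, $\max = -\infty$, $\text{minGap} = +\infty$ are choices of this sketch.

```cpp
#include <algorithm>
#include <limits>

const int INF = std::numeric_limits<int>::max();

struct Node {
    int key;
    int min, max, minGap;    // attributes of the subtree rooted at this node
    Node *left, *right;
};

int nMin(Node *x) { return x ? x->min : INF; }
int nMax(Node *x) { return x ? x->max : -INF; }
int nGap(Node *x) { return x ? x->minGap : INF; }

// Recompute x's attributes from its key and its children's attributes.
void pull(Node *x) {
    x->min = std::min(nMin(x->left), x->key);
    x->max = std::max(nMax(x->right), x->key);
    x->minGap = std::min(nGap(x->left), nGap(x->right));
    if (x->left)  x->minGap = std::min(x->minGap, x->key - x->left->max);
    if (x->right) x->minGap = std::min(x->minGap, x->right->min - x->key);
}
```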
[]
false
[]
14-14.3-7
14
14.3
14.3-7 $\star$
docs/Chap14/14.3.md
VLSI databases commonly represent an integrated circuit as a list of rectangles. Assume that each rectangle is rectilinearly oriented (sides parallel to the $x$- and $y$-axes), so that we represent a rectangle by its minimum and maximum $x$ and $y$-coordinates. Give an $O(n\lg n)$-time algorithm to decide whether or not a set of $n$ rectangles so represented contains two rectangles that overlap. Your algorithm need not report all intersecting pairs, but it must report that an overlap exists if one rectangle entirely covers another, even if the boundary lines do not intersect. ($\textit{Hint:}$ Move a "sweep" line across the set of rectangles.)
Let $L$ be the set of left $x$-coordinates of the rectangles and $R$ the set of right $x$-coordinates, and sort both in $O(n\lg n)$ time. Sweep with a pointer into each sorted set, maintaining an interval tree $T$ of the $y$-extents of the rectangles currently intersected by the sweep line. If the pointer into $L$ is smaller, call interval search on $T$ for the up-down interval corresponding to that left-hand side; if $T$ contains an interval that intersects the up-down bounds of this rectangle, there is an intersection, so stop. Otherwise add this interval to $T$ and increment the pointer into $L$. If the pointer into $R$ is smaller, remove the up-down interval that the right-hand side corresponds to and increment the pointer into $R$. Since all the interval-tree operations used run in time $O(\lg n)$ and we call them at most $3n$ times, the runtime is $O(n\lg n)$.
[]
false
[]
14-14-1
14
14-1
14-1
docs/Chap14/Problems/14-1.md
Suppose that we wish to keep track of a **_point of maximum overlap_** in a set of intervals—a point with the largest number of intervals in the set that overlap it. **a.** Show that there will always be a point of maximum overlap that is an endpoint of one of the segments. **b.** Design a data structure that efficiently supports the operations $\text{INTERVAL-INSERT}$, $\text{INTERVAL-DELETE}$, and $\text{FIND-POM}$, which returns a point of maximum overlap. ($\textit{Hint:}$ Keep a red-black tree of all the endpoints. Associate a value of $+1$ with each left endpoint, and associate a value of $-1$ with each right endpoint. Augment each node of the tree with some extra information to maintain the point of maximum overlap.)
(Removed)
[]
false
[]
14-14-2
14
14-2
14-2
docs/Chap14/Problems/14-2.md
We define the **_Josephus problem_** as follows. Suppose that $n$ people form a circle and that we are given a positive integer $m \le n$. Beginning with a designated first person, we proceed around the circle, removing every $m$th person. After each person is removed, counting continues around the circle that remains. This process continues until we have removed all $n$ people. The order in which the people are removed from the circle defines the **_$(n, m)$-Josephus permutation_** of the integers $1, 2, \ldots, n$. For example, the $(7, 3)$-Josephus permutation is $\langle 3, 6, 2, 7, 5, 1, 4 \rangle$. **a.** Suppose that $m$ is a constant. Describe an $O(n)$-time algorithm that, given an integer $n$, outputs the $(n, m)$-Josephus permutation. **b.** Suppose that $m$ is not a constant. Describe an $O(n\lg n)$-time algorithm that, given integers $n$ and $m$, outputs the $(n, m)$-Josephus permutation.
(Removed)
[]
false
[]
15-15.1-1
15
15.1
15.1-1
docs/Chap15/15.1.md
Show that equation $\text{(15.4)}$ follows from equation $\text{(15.3)}$ and the initial condition $T(0) = 1$.
- For $n = 0$, this holds since $2^0 = 1$.
- For $n > 0$, substituting into the recurrence, we have

$$
\begin{aligned}
T(n) & = 1 + \sum_{j = 0}^{n - 1} 2^j \\\\
& = 1 + (2^n - 1) \\\\
& = 2^n.
\end{aligned}
$$
[]
false
[]
15-15.1-2
15
15.1
15.1-2
docs/Chap15/15.1.md
Show, by means of a counterexample, that the following "greedy" strategy does not always determine an optimal way to cut rods. Define the **_density_** of a rod of length $i$ to be $p_i / i$, that is, its value per inch. The greedy strategy for a rod of length $n$ cuts off a first piece of length $i$, where $1 \le i \le n$, having maximum density. It then continues by applying the greedy strategy to the remaining piece of length $n - i$.
The counterexample:

$$
\begin{array}{c|cccc}
\text{length $i$} & 1 & 2 & 3 & 4 \\\\
\hline
\text{price $p_i$} & 1 & 20 & 33 & 36 \\\\
p_i / i & 1 & 10 & 11 & 9
\end{array}
$$

For $n = 4$, the greedy strategy first cuts off a piece of length $3$ (the maximum density $11$), leaving a piece of length $1$, for total revenue $33 + 1 = 34$, whereas cutting two pieces of length $2$ yields $20 + 20 = 40$. So the greedy strategy is suboptimal.
[]
false
[]
15-15.1-3
15
15.1
15.1-3
docs/Chap15/15.1.md
Consider a modification of the rod-cutting problem in which, in addition to a price $p_i$ for each rod, each cut incurs a fixed cost of $c$. The revenue associated with a solution is now the sum of the prices of the pieces minus the costs of making the cuts. Give a dynamic-programming algorithm to solve this modified problem.
We can modify the $\text{BOTTOM-UP-CUT-ROD}$ algorithm from Section 15.1 as follows:

```cpp
MODIFIED-CUT-ROD(p, n, c)
    let r[0..n] be a new array
    r[0] = 0
    for j = 1 to n
        q = p[j]
        for i = 1 to j - 1
            q = max(q, p[i] + r[j - i] - c)
        r[j] = q
    return r[n]
```

We need to account for the cost $c$ on every candidate solution that actually makes a cut. Line 5 initializes the candidate revenue $q$ to $p[j]$, the price for selling the rod of length $j$ uncut; the inner loop of line 6 runs only to $j - 1$, so the no-cut case is not counted twice; and line 7 subtracts $c$ from every candidate revenue that involves a cut.
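A runnable C++ transcription of the procedure, for anyone who wants to experiment; the price table in `main` is the sample table from Section 15.1, and the cut-cost values are made up for the example:

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// Bottom-up rod cutting where every cut costs an extra c.
// p[i] is the price of a rod of length i (p[0] unused).
int modified_cut_rod(const std::vector<int>& p, int n, int c) {
    std::vector<int> r(n + 1, 0);
    for (int j = 1; j <= n; ++j) {
        int q = p[j];                         // sell length j uncut
        for (int i = 1; i < j; ++i)           // first piece has length i < j
            q = std::max(q, p[i] + r[j - i] - c);
        r[j] = q;
    }
    return r[n];
}

int main() {
    // CLRS sample price table for lengths 1..10.
    std::vector<int> p = {0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30};
    std::cout << modified_cut_rod(p, 7, 0) << '\n';   // 18: same as plain rod cutting
    std::cout << modified_cut_rod(p, 7, 3) << '\n';   // every cut now costs 3
}
```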
[ { "lang": "cpp", "code": "MODIFIED-CUT-ROD(p, n, c)\n let r[0..n] be a new array\n r[0] = 0\n for j = 1 to n\n q = p[j]\n for i = 1 to j - 1\n q = max(q, p[i] + r[j - i] - c)\n r[j] = q\n return r[n]" } ]
false
[]
15-15.1-4
15
15.1
15.1-4
docs/Chap15/15.1.md
Modify $\text{MEMOIZED-CUT-ROD}$ to return not only the value but the actual solution, too.
```cpp
MEMOIZED-CUT-ROD(p, n)
    let r[0..n] and s[0..n] be new arrays
    for i = 0 to n
        r[i] = -∞
    (val, s) = MEMOIZED-CUT-ROD-AUX(p, n, r, s)
    print "The optimal value is" val "and the cuts are at" s
    j = n
    while j > 0
        print s[j]
        j = j - s[j]
```

```cpp
MEMOIZED-CUT-ROD-AUX(p, n, r, s)
    if r[n] ≥ 0
        return (r[n], s)
    if n == 0
        q = 0
    else q = -∞
        for i = 1 to n
            (val, s) = MEMOIZED-CUT-ROD-AUX(p, n - i, r, s)
            if q < p[i] + val
                q = p[i] + val
                s[n] = i
    r[n] = q
    return (q, s)
```
[ { "lang": "cpp", "code": "MEMOIZED-CUT-ROD(p, n)\n let r[0..n] and s[0..n] be new arrays\n for i = 0 to n\n r[i] = -∞\n (val, s) = MEMOIZED-CUT-ROD-AUX(p, n, r, s)\n print \"The optimal value is\" val \"and the cuts are at\" s\n j = n\n while j > 0\n print s[j]\n j = j - s[j]" }, { "lang": "cpp", "code": "MEMOIZED-CUT-ROD-AUX(p, n, r, s)\n if r[n] ≥ 0\n return r[n]\n if n == 0\n q = 0\n else q = -∞\n for i = 1 to n\n (val, s) = MEMOIZED-CUT-ROD-AUX(p, n - i, r, s)\n if q < p[i] + val\n q = p[i] + val\n s[n] = i\n r[n] = q\n return (q, s)" } ]
false
[]
15-15.1-5
15
15.1
15.1-5
docs/Chap15/15.1.md
The Fibonacci numbers are defined by recurrence $\text{(3.22)}$. Give an $O(n)$-time dynamic-programming algorithm to compute the nth Fibonacci number. Draw the subproblem graph. How many vertices and edges are in the graph?
```cpp
FIBONACCI(n)
    let fib[0..n] be a new array
    fib[0] = 0
    fib[1] = 1
    for i = 2 to n
        fib[i] = fib[i - 1] + fib[i - 2]
    return fib[n]
```

There are $n + 1$ vertices in the subproblem graph, i.e., $v_0, v_1, \dots, v_n$.

- $v_0$ and $v_1$ each have no leaving edges.
- Each of $v_2, v_3, \dots, v_n$ has $2$ leaving edges.

Thus, there are $2n - 2$ edges in the subproblem graph.
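A minimal runnable C++ version of the same bottom-up computation:

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

// Bottom-up Fibonacci: Theta(n) time, one table entry per subproblem vertex.
std::uint64_t fibonacci(int n) {
    std::vector<std::uint64_t> fib(std::max(n + 1, 2));
    fib[0] = 0;
    fib[1] = 1;
    for (int i = 2; i <= n; ++i)
        fib[i] = fib[i - 1] + fib[i - 2];
    return fib[n];
}

int main() {
    std::cout << fibonacci(10) << '\n';   // prints 55
}
```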
[ { "lang": "cpp", "code": "FIBONACCI(n)\n let fib[0..n] be a new array\n fib[0] = 1\n fib[1] = 1\n for i = 2 to n\n fib[i] = fib[i - 1] + fib[i - 2]\n return fib[n]" } ]
false
[]
15-15.2-1
15
15.2
15.2-1
docs/Chap15/15.2.md
Find an optimal parenthesization of a matrix-chain product whose sequence of dimensions is $\langle 5, 10, 3, 12, 5, 50, 6 \rangle$.
$$((5 \times 10)(10 \times 3))(((3 \times 12)(12 \times 5))((5 \times 50)(50 \times 6))).$$
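One way to check this parenthesization is to run the $\text{MATRIX-CHAIN-ORDER}$ recurrence on the dimension sequence; below is a compact C++ sketch (variable names follow Section 15.2):

```cpp
#include <iostream>
#include <limits>
#include <vector>

// m[i][j] = minimum scalar multiplications to compute A_i..A_j,
// where A_i is a p[i-1] x p[i] matrix; s[i][j] records the best split.
int main() {
    std::vector<long long> p = {5, 10, 3, 12, 5, 50, 6};
    int n = static_cast<int>(p.size()) - 1;   // number of matrices
    std::vector<std::vector<long long>> m(n + 1, std::vector<long long>(n + 1, 0));
    std::vector<std::vector<int>> s(n + 1, std::vector<int>(n + 1, 0));
    for (int len = 2; len <= n; ++len)
        for (int i = 1; i + len - 1 <= n; ++i) {
            int j = i + len - 1;
            m[i][j] = std::numeric_limits<long long>::max();
            for (int k = i; k < j; ++k) {
                long long q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j];
                if (q < m[i][j]) { m[i][j] = q; s[i][j] = k; }
            }
        }
    std::cout << m[1][n] << '\n';   // 2010 scalar multiplications for this chain
    std::cout << s[1][n] << '\n';   // outermost split: A_1..A_s times A_{s+1}..A_n
}
```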
[]
false
[]
15-15.2-2
15
15.2
15.2-2
docs/Chap15/15.2.md
Give a recursive algorithm $\text{MATRIX-CHAIN-MULTIPLY}(A, s, i, j)$ that actually performs the optimal matrix-chain multiplication, given the sequence of matrices $\langle A_1, A_2, \ldots ,A_n \rangle$, the $s$ table computed by $\text{MATRIX-CHAIN-ORDER}$, and the indices $i$ and $j$. (The initial call would be $\text{MATRIX-CHAIN-MULTIPLY}(A, s, 1, n)$.)
```cpp
MATRIX-CHAIN-MULTIPLY(A, s, i, j)
    if i == j
        return A[i]
    if i + 1 == j
        return A[i] * A[j]
    b = MATRIX-CHAIN-MULTIPLY(A, s, i, s[i, j])
    c = MATRIX-CHAIN-MULTIPLY(A, s, s[i, j] + 1, j)
    return b * c
```
[ { "lang": "cpp", "code": "MATRIX-CHAIN-MULTIPLY(A, s, i, j)\n if i == j\n return A[i]\n if i + 1 == j\n return A[i] * A[j]\n b = MATRIX-CHAIN-MULTIPLY(A, s, i, s[i, j])\n c = MATRIX-CHAIN-MULTIPLY(A, s, s[i, j] + 1, j)\n return b * c" } ]
false
[]
15-15.2-3
15
15.2
15.2-3
docs/Chap15/15.2.md
Use the substitution method to show that the solution to the recurrence $\text{(15.6)}$ is $\Omega(2^n)$.
Suppose $P(k) \ge c2^k$ for all $1 \le k < n$. Substituting into recurrence $\text{(15.6)}$,

$$
\begin{aligned}
P(n) & = \sum_{k = 1}^{n - 1} P(k)P(n - k) \\\\
& \ge \sum_{k = 1}^{n - 1} c2^k \cdot c2^{n - k} \\\\
& = c^2 (n - 1) 2^n \\\\
& \ge c2^n,
\end{aligned}
$$

where the last step holds whenever $c(n - 1) \ge 1$, that is, for all $n \ge 1 + 1/c$. Choosing $c = 1/4$ makes the inductive step valid for all $n \ge 5$, and one checks directly that $P(1) = 1$, $P(2) = 1$, $P(3) = 2$, and $P(4) = 5$ all satisfy $P(n) \ge 2^n/4$. Hence $P(n) = \Omega(2^n)$.
[]
false
[]
15-15.2-4
15
15.2
15.2-4
docs/Chap15/15.2.md
Describe the subproblem graph for matrix-chain multiplication with an input chain of length $n$. How many vertices does it have? How many edges does it have, and which edges are they?
The vertices of the subproblem graph are the ordered pairs $v_{ij}$, where $i \le j$.

- If $i = j$, the vertex $v_{ij}$ has no leaving edges.
- If $i < j$, then for each $k$ such that $i \le k < j$, the subproblem graph contains the edges $(v_{ij}, v_{ik})$ and $(v_{ij}, v_{k + 1, j})$. These edges indicate that to solve the subproblem of optimally parenthesizing the product $A_i \cdots A_j$, we need to solve the subproblems of optimally parenthesizing the products $A_i \cdots A_k$ and $A_{k + 1} \cdots A_j$.

The number of vertices is

$$\sum_{i = 1}^n \sum_{j = i}^n 1 = \frac{n(n + 1)}{2},$$

and since each vertex $v_{ij}$ has $2(j - i)$ leaving edges, the number of edges is

$$\sum_{i = 1}^n \sum_{j = i}^n 2(j - i) = \frac{(n - 1)n(n + 1)}{3}.$$
[]
false
[]
15-15.2-5
15
15.2
15.2-5
docs/Chap15/15.2.md
Let $R(i, j)$ be the number of times that table entry $m[i, j]$ is referenced while computing other table entries in a call of $\text{MATRIX-CHAIN-ORDER}$. Show that the total number of references for the entire table is $$\sum_{i = 1}^n \sum_{j = i}^n R(i, j) = \frac{n^3 - n}{3}.$$ ($\textit{Hint:}$ You may find equation $\text{(A.3)}$ useful.)
We count the number of times that we reference a different entry in $m$ than the one we are computing, that is, $2$ times the number of times that line 10 runs. $$ \begin{aligned} \sum_{l = 2}^n \sum_{i = 1}^{n - l + 1} \sum_{k = i}^{i + l - 2} 2 & = \sum_{l = 2}^n \sum_{i = 1}^{n - l + 1} 2(l - 1) \\\\ & = \sum_{l = 2}^n 2(l - 1)(n - l + 1) \\\\ & = \sum_{l = 1}^{n - 1} 2l(n - l) \\\\ & = 2n \sum_{l = 1}^{n - 1} l - 2 \sum_{l = 1}^{n - 1} l^2 \\\\ & = n^2(n - 1) - 2 \cdot \frac{(n - 1)n(2n - 1)}{6} \\\\ & = n^3 - n^2 - \frac{2n^3 - 3n^2 + n}{3} \\\\ & = \frac{n^3 - n}{3}. \end{aligned} $$
[]
false
[]
15-15.2-6
15
15.2
15.2-6
docs/Chap15/15.2.md
Show that a full parenthesization of an $n$-element expression has exactly $n - 1$ pairs of parentheses.
We proceed by induction on the number of matrices. A single matrix has no pairs of parentheses. Assume that a full parenthesization of an $n$-element expression has exactly $n − 1$ pairs of parentheses. Given a full parenthesization of an $(n + 1)$-element expression, there must exist some $k$ such that we first multiply $B = A_1 \cdots A_k$ in some way, then multiply $C = A_{k + 1} \cdots A_{n + 1}$ in some way, then multiply $B$ and $C$. By our induction hypothesis, we have $k − 1$ pairs of parentheses for the full parenthesization of $B$ and $n + 1 − k − 1$ pairs of parentheses for the full parenthesization of $C$. Adding these together, plus the pair of outer parentheses for the entire expression, yields $k - 1 + n + 1 - k - 1 + 1 = (n + 1) - 1$ parentheses, as desired.
[]
false
[]
15-15.3-1
15
15.3
15.3-1
docs/Chap15/15.3.md
Which is a more efficient way to determine the optimal number of multiplications in a matrix-chain multiplication problem: enumerating all the ways of parenthesizing the product and computing the number of multiplications for each, or running $\text{RECURSIVE-MATRIX-CHAIN}$? Justify your answer.
Running $\text{RECURSIVE-MATRIX-CHAIN}$ is asymptotically more efficient than enumerating all the ways of parenthesizing the product and computing the number of multiplications for each. Consider the treatment of subproblems by each approach:

1. For each possible place to split the matrix chain, the enumeration approach finds all ways to parenthesize the left half, finds all ways to parenthesize the right half, and looks at all possible combinations of the left half with the right half. The amount of work to look at each combination of left and right half subproblem results is thus the product of the number of ways to parenthesize the left half and the number of ways to parenthesize the right half.
2. For each possible place to split the matrix chain, $\text{RECURSIVE-MATRIX-CHAIN}$ finds the best way to parenthesize the left half, finds the best way to parenthesize the right half, and combines just those two results. Thus the amount of work to combine the left and right half subproblem results is $O(1)$.
[]
false
[]
15-15.3-2
15
15.3
15.3-2
docs/Chap15/15.3.md
Draw the recursion tree for the $\text{MERGE-SORT}$ procedure from Section 2.3.1 on an array of $16$ elements. Explain why memoization fails to speed up a good divide-and-conquer algorithm such as $\text{MERGE-SORT}$.
The recursion tree is drawn below.

![](../img/15.3-2.png)

The $\text{MERGE-SORT}$ procedure performs at most a single call to any pair of indices of the array that is being sorted. In other words, the subproblems do not overlap and therefore memoization will not improve the running time.
[]
true
[ "../img/15.3-2.png" ]
15-15.3-3
15
15.3
15.3-3
docs/Chap15/15.3.md
Consider a variant of the matrix-chain multiplication problem in which the goal is to parenthesize the sequence of matrices so as to maximize, rather than minimize, the number of scalar multiplications. Does this problem exhibit optimal substructure?
Yes, this problem also exhibits optimal substructure. If we know that we need the subproduct $(A_l \cdot A_r)$, then we should still find the most expensive way to compute it — otherwise, we could do better by substituting in the most expensive way.
[]
false
[]
15-15.3-4
15
15.3
15.3-4
docs/Chap15/15.3.md
As stated, in dynamic programming we first solve the subproblems and then choose which of them to use in an optimal solution to the problem. Professor Capulet claims that we do not always need to solve all the subproblems in order to find an optimal solution. She suggests that we can find an optimal solution to the matrix-chain multiplication problem by always choosing the matrix $A_k$ at which to split the subproduct $A_i A_{i + 1} \cdots A_j$ (by selecting $k$ to minimize the quantity $p_{i - 1} p_k p_j$) _before_ solving the subproblems. Find an instance of the matrix-chain multiplication problem for which this greedy approach yields a suboptimal solution.
Suppose that we are given matrices $A_1$, $A_2$, $A_3$, and $A_4$ with dimensions such that $$p_0, p_1, p_2, p_3, p_4 = 1000, 100, 20, 10, 1000.$$ Then $p_0 p_k p_4$ is minimized when $k = 3$, so we need to solve the subproblem of multiplying $A_1 A_2 A_3$, and also $A_4$ which is solved automatically. By her algorithm, this is solved by splitting at $k = 2$. Thus, the full parenthesization is $(((A_1A_2)A_3)A_4)$. This requires $$1000 \cdot 100 \cdot 20 + 1000 \cdot 20 \cdot 10 + 1000 \cdot 10 \cdot 1000 = 12200000$$ scalar multiplications. On the other hand, suppose we had fully parenthesized the matrices to multiply as $((A_1(A_2A_3))A_4)$. Then we would only require $$100 \cdot 20 \cdot 10 + 1000 \cdot 100 \cdot 10 + 1000 \cdot 10 \cdot 1000 = 11020000$$ scalar multiplications, which is fewer than Professor Capulet's method. Therefore her greedy approach yields a suboptimal solution.
[]
false
[]
15-15.3-5
15
15.3
15.3-5
docs/Chap15/15.3.md
Suppose that in the rod-cutting problem of Section 15.1, we also had limit $l_i$ on the number of pieces of length $i$ that we are allowed to produce, for $i = 1, 2, \ldots, n$. Show that the optimal-substructure property described in Section 15.1 no longer holds.
The optimal substructure property doesn't hold because the number of pieces of length $i$ used on one side of the cut affects the number allowed on the other. That is, there is information about the particular solution on one side of the cut that changes what is allowed on the other. To make this more concrete, suppose the rod has length $4$, the limits are $l_1 = 2$ and $l_2 = l_3 = l_4 = 1$, and each piece is worth the same regardless of length. If we make our first cut in the middle, then the optimal solution for each of the two leftover rods of length $2$ is to cut it in the middle; but combining these two optimal subproblem solutions uses four pieces of length $1$, exceeding the limit $l_1 = 2$.
[]
false
[]
15-15.3-6
15
15.3
15.3-6
docs/Chap15/15.3.md
Imagine that you wish to exchange one currency for another. You realize that instead of directly exchanging one currency for another, you might be better off making a series of trades through other currencies, winding up with the currency you want. Suppose that you can trade $n$ different currencies, numbered $1, 2, \ldots, n$, where you start with currency $1$ and wish to wind up with currency $n$. You are given, for each pair of currencies $i$ and $j$ , an exchange rate $r_{ij}$, meaning that if you start with $d$ units of currency $i$ , you can trade for $dr_{ij}$ units of currency $j$. A sequence of trades may entail a commission, which depends on the number of trades you make. Let $c_k$ be the commission that you are charged when you make $k$ trades. Show that, if $c_k = 0$ for all $k = 1, 2, \ldots, n$, then the problem of finding the best sequence of exchanges from currency $1$ to currency $n$ exhibits optimal substructure. Then show that if commissions $c_k$ are arbitrary values, then the problem of finding the best sequence of exchanges from currency $1$ to currency $n$ does not necessarily exhibit optimal substructure.
First we assume that the commission is always zero. Let $k$ denote a currency which appears in an optimal sequence $s$ of trades to go from currency $1$ to currency $n$, let $p_k$ denote the first part of this sequence, which changes currencies from $1$ to $k$, and let $q_k$ denote the rest of the sequence. Then $p_k$ and $q_k$ are both optimal sequences for changing from $1$ to $k$ and from $k$ to $n$ respectively. To see this, suppose that $p_k$ wasn't optimal but that $p_k'$ was. Then by changing currencies according to the sequence $p_k'q_k$ we would have a sequence of changes which is better than $s$, a contradiction since $s$ was optimal. The same argument applies to $q_k$.

Now suppose that the commissions can take on arbitrary values, and for the counterexample measure a sequence of trades by its cost, namely the sum of the $r$-values of the trades made plus the commission (the smaller the cost, the better the sequence). Suppose we have currencies $1$ through $5$, and $r_{12} = r_{23} = r_{34} = r_{45} = 2$, $r_{13} = r_{35} = 6$, and all other exchanges are such that $r_{ij} = 100$. Let $c_1 = 0$, $c_2 = 1$, and $c_k = 10$ for $k \ge 3$. The optimal solution in this setup is to change $1$ to $3$, then $3$ to $5$, for a total cost of $6 + 6 + c_2 = 13$. An optimal solution for changing $1$ to $3$ involves changing $1$ to $2$ then $2$ to $3$, for a cost of $2 + 2 + c_2 = 5$, and an optimal solution for changing $3$ to $5$ is to change $3$ to $4$ then $4$ to $5$, also for a cost of $2 + 2 + c_2 = 5$. However, combining these optimal solutions to subproblems means making four exchanges overall, and the total cost of the combined sequence is $2 + 2 + 2 + 2 + c_4 = 18$, which is not optimal.
[]
false
[]
15-15.4-1
15
15.4
15.4-1
docs/Chap15/15.4.md
Determine an $\text{LCS}$ of $\langle 1, 0, 0, 1, 0, 1, 0, 1 \rangle$ and $\langle 0, 1, 0, 1, 1, 0, 1, 1, 0 \rangle$.
$\langle 1, 0, 0, 1, 1, 0 \rangle$ or $\langle 1, 0, 1, 0, 1, 0 \rangle$.
[]
false
[]
15-15.4-2
15
15.4
15.4-2
docs/Chap15/15.4.md
Give pseudocode to reconstruct an $\text{LCS}$ from the completed $c$ table and the original sequences $X = \langle x_1, x_2, \ldots, x_m \rangle$ and $Y = \langle y_1, y_2, \ldots, y_n \rangle$ in $O(m + n)$ time, without using the $b$ table.
```cpp
PRINT-LCS(c, X, Y, i, j)
    if c[i, j] == 0
        return
    if X[i] == Y[j]
        PRINT-LCS(c, X, Y, i - 1, j - 1)
        print X[i]
    else if c[i - 1, j] > c[i, j - 1]
        PRINT-LCS(c, X, Y, i - 1, j)
    else
        PRINT-LCS(c, X, Y, i, j - 1)
```
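A runnable C++ sketch of the same idea, building the $c$ table and then walking it back without a $b$ table; the iterative walk collects the LCS in reverse, so it is reversed before printing (the strings are the sample pair from Section 15.4):

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Build the c table, then reconstruct one LCS from c alone in O(m + n) steps.
int main() {
    std::string X = "ABCBDAB", Y = "BDCABA";
    int m = X.size(), n = Y.size();
    std::vector<std::vector<int>> c(m + 1, std::vector<int>(n + 1, 0));
    for (int i = 1; i <= m; ++i)
        for (int j = 1; j <= n; ++j)
            c[i][j] = (X[i - 1] == Y[j - 1]) ? c[i - 1][j - 1] + 1
                                             : std::max(c[i - 1][j], c[i][j - 1]);
    // Walk back from c[m][n], exactly as PRINT-LCS does.
    std::string lcs;
    for (int i = m, j = n; i > 0 && j > 0; ) {
        if (X[i - 1] == Y[j - 1]) { lcs += X[i - 1]; --i; --j; }
        else if (c[i - 1][j] > c[i][j - 1]) --i;
        else --j;
    }
    std::cout << std::string(lcs.rbegin(), lcs.rend()) << '\n';  // e.g. "BCBA"
}
```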
[ { "lang": "cpp", "code": "PRINT-LCS(c, X, Y, i, j)\n if c[i, j] == 0\n return\n if X[i] == Y[j]\n PRINT-LCS(c, X, Y, i - 1, j - 1)\n print X[i]\n else if c[i - 1, j] > c[i, j - 1]\n PRINT-LCS(c, X, Y, i - 1, j)\n else\n PRINT-LCS(c, X, Y, i, j - 1)" } ]
false
[]
15-15.4-3
15
15.4
15.4-3
docs/Chap15/15.4.md
Give a memoized version of $\text{LCS-LENGTH}$ that runs in $O(mn)$ time.
```cpp
MEMOIZED-LCS-LENGTH(X, Y, i, j)   // assumes every entry of c is initialized to -1
    if c[i, j] > -1
        return c[i, j]
    if i == 0 or j == 0
        return c[i, j] = 0
    if x[i] == y[j]
        return c[i, j] = MEMOIZED-LCS-LENGTH(X, Y, i - 1, j - 1) + 1
    return c[i, j] = max(MEMOIZED-LCS-LENGTH(X, Y, i - 1, j), MEMOIZED-LCS-LENGTH(X, Y, i, j - 1))
```

Since each of the $O(mn)$ entries of $c$ is filled in at most once and each takes $O(1)$ work beyond the recursive calls, the running time is $O(mn)$.
[ { "lang": "cpp", "code": "MEMOIZED-LCS-LENGTH(X, Y, i, j)\n if c[i, j] > -1\n return c[i, j]\n if i == 0 or j == 0\n return c[i, j] = 0\n if x[i] == y[j]\n return c[i, j] = LCS-LENGTH(X, Y, i - 1, j - 1) + 1\n return c[i, j] = max(LCS-LENGTH(X, Y, i - 1, j), LCS-LENGTH(X, Y, i, j - 1))" } ]
false
[]
15-15.4-4
15
15.4
15.4-4
docs/Chap15/15.4.md
Show how to compute the length of an $\text{LCS}$ using only $2 \cdot \min(m, n)$ entries in the $c$ table plus $O(1)$ additional space. Then show how to do the same thing, but using $\min(m, n)$ entries plus $O(1)$ additional space.
Since we only use the previous row of the $c$ table to compute the current row, we compute as normal, but when we go to compute row $k$, we free row $k - 2$ since we will never need it again to compute the length. To use even less space, observe that to compute $c[i, j]$, all we need are the entries $c[i − 1, j]$, $c[i − 1, j − 1]$, and $c[i, j − 1]$. Thus, we can free up entry-by-entry those from the previous row which we will never need again, reducing the space requirement to $\min(m, n)$. Computing the next entry from the three that it depends on takes $O(1)$ time and space.
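A sketch of the $\min(m, n) + O(1)$-entry version in C++, keeping a single row and carrying the would-be diagonal entry in a scalar; the strings in `main` are just sample inputs:

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// LCS length using one row of min(m, n) + 1 entries plus O(1) extra variables.
int lcs_length(std::string X, std::string Y) {
    if (X.size() < Y.size()) std::swap(X, Y);   // make Y the shorter string
    std::vector<int> row(Y.size() + 1, 0);
    for (std::size_t i = 1; i <= X.size(); ++i) {
        int diag = 0;                           // c[i-1][j-1], carried by hand
        for (std::size_t j = 1; j <= Y.size(); ++j) {
            int above = row[j];                 // c[i-1][j], about to be overwritten
            row[j] = (X[i - 1] == Y[j - 1]) ? diag + 1
                                            : std::max(above, row[j - 1]);
            diag = above;
        }
    }
    return row[Y.size()];
}

int main() {
    std::cout << lcs_length("ABCBDAB", "BDCABA") << '\n';   // prints 4
}
```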
[]
false
[]
15-15.4-5
15
15.4
15.4-5
docs/Chap15/15.4.md
Give an $O(n^2)$-time algorithm to find the longest monotonically increasing subsequence of a sequence of $n$ numbers.
Given a list of numbers $L$, make a copy of $L$ called $L'$ and then sort $L'$. The $\text{LCS}$ machinery we use is the following:

```cpp
PRINT-LCS(c, X, Y)
    n = c[X.length, Y.length]
    let s[1..n] be a new array
    i = X.length
    j = Y.length
    while i > 0 and j > 0
        if x[i] == y[j]
            s[n] = x[i]
            n = n - 1
            i = i - 1
            j = j - 1
        else if c[i - 1, j] ≥ c[i, j - 1]
            i = i - 1
        else j = j - 1
    for i = 1 to s.length
        print s[i]
```

```cpp
MEMO-LCS-LENGTH-AUX(X, Y, c, b)
    m = |X|
    n = |Y|
    if c[m, n] != 0 or m == 0 or n == 0
        return c[m, n]
    if x[m] == y[n]
        b[m, n] = ↖
        c[m, n] = MEMO-LCS-LENGTH-AUX(X[1..m - 1], Y[1..n - 1], c, b) + 1
    else if MEMO-LCS-LENGTH-AUX(X[1..m - 1], Y, c, b) ≥ MEMO-LCS-LENGTH-AUX(X, Y[1..n - 1], c, b)
        b[m, n] = ↑
        c[m, n] = MEMO-LCS-LENGTH-AUX(X[1..m - 1], Y, c, b)
    else
        b[m, n] = ←
        c[m, n] = MEMO-LCS-LENGTH-AUX(X, Y[1..n - 1], c, b)
    return c[m, n]
```

```cpp
MEMO-LCS-LENGTH(X, Y)
    let c[0..|X|, 0..|Y|] and b[0..|X|, 0..|Y|] be new tables, with every entry of c initialized to 0
    MEMO-LCS-LENGTH-AUX(X, Y, c, b)
    return c and b
```

Then, just run the $\text{LCS}$ algorithm on these two lists. The longest common subsequence must be monotone increasing because it is a subsequence of $L'$, which is sorted. It is also the longest monotone increasing subsequence because being a subsequence of $L'$ only adds the restriction that the subsequence must be monotone increasing. Since $|L| = |L'| = n$, and sorting $L$ can be done in $o(n^2)$ time, the final running time will be $O(|L||L'|) = O(n^2)$.
[ { "lang": "cpp", "code": "PRINT-LCS(c, X, Y)\n n = c[X.length, Y.length]\n let s[1..n] be a new array\n i = X.length\n j = Y.length\n while i > 0 and j > 0\n if x[i] == y[j]\n s[n] = x[i]\n n = n - 1\n i = i - 1\n j = j - 1\n else if c[i - 1, j] ≥ c[i, j - 1]\n i = i - 1\n else j = j - 1\n for i = 1 to s.length\n print s[i]" }, { "lang": "cpp", "code": "MEMO-LCS-LENGTH-AUX(X, Y, c, b)\n m = |X|\n n = |Y|\n if c[m, n] != 0 or m == 0 or n == 0\n return\n if x[m] == y[n]\n b[m, n] = ↖\n c[m, n] = MEMO-LCS-LENGTH-AUX(X[1..m - 1], Y[1..n - 1], c, b) + 1\n else if MEMO-LCS-LENGTH-AUX(X[1..m - 1], Y, c, b) ≥ MEMO-LCS-LENGTH-AUX(X, Y[1..n - 1], c, b)\n b[m, n] = ↑\n c[m, n] = MEMO-LCS-LENGTH-AUX(X[1..m - 1], Y, c, b)\n else\n b[m, n] = ←\n c[m, n] = MEMO-LCS-LENGTH-AUX(X, Y[1..n - 1], c, b)" }, { "lang": "cpp", "code": "MEMO-LCS-LENGTH(X, Y)\n let c[1..|X|, 1..|Y|] and b[1..|X|, 1..|Y|] be new tables\n MEMO-LCS-LENGTH-AUX(X, Y, c, b)\n return c and b" } ]
false
[]
15-15.4-6
15
15.4
15.4-6 $\star$
docs/Chap15/15.4.md
Give an $O(n\lg n)$-time algorithm to find the longest monotonically increasing subsequence of a sequence of $n$ numbers. ($\textit{Hint:}$ Observe that the last element of a candidate subsequence of length $i$ is at least as large as the last element of a candidate subsequence of length $i - 1$. Maintain candidate subsequences by linking them through the input sequence.)
The algorithm $\text{LONG-MONOTONIC}(A)$ returns the longest monotonically increasing subsequence of $A$, where $A$ has length $n$. The algorithm works as follows: a new array $B$ will be created such that $B[i]$ contains the last value of a longest monotonically increasing subsequence of length $i$. A new array $C$ will be such that $C[i]$ contains the monotonically increasing subsequence of length $i$ with smallest last element seen so far. To analyze the runtime, observe that the entries of $B$ are in sorted order, so the largest index $j$ with $B[j] < A[i]$ can be found by binary search in $O(\lg n)$ time. Every other step of the loop body takes constant time; in particular, $C[j + 1] = C[j]$ must be implemented by linking to $C[j]$ (e.g., with a predecessor pointer, as the hint suggests) rather than by copying the list. The total run-time is therefore $O(n\lg n)$.

```cpp
LONG-MONOTONIC(A)
    let B[1..n] be a new array where every value = ∞
    let C[1..n] be a new array
    L = 1
    for i = 1 to n
        if A[i] < B[1]
            B[1] = A[i]
            C[1].head.key = A[i]
        else
            let j be the largest index of B such that B[j] < A[i]   // binary search on B
            B[j + 1] = A[i]
            C[j + 1] = C[j]       // link, don't copy
            INSERT(C[j + 1], A[i])
            if j + 1 > L
                L = L + 1
    print C[L]
```
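A runnable C++ sketch of this $O(n\lg n)$ strategy; it stores indices rather than lists and links candidates through a `pred` array, which is one way to realize the "link, don't copy" point above (the input in `main` is made up):

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// O(n lg n) longest (weakly) increasing subsequence with reconstruction.
// tails[j] = index into A of the smallest possible last element of a
// candidate subsequence of length j + 1; pred links candidates through A.
std::vector<int> longest_monotonic(const std::vector<int>& A) {
    std::vector<int> tails, pred(A.size(), -1);
    for (int i = 0; i < (int)A.size(); ++i) {
        // First candidate whose tail value exceeds A[i] (binary search).
        auto cmp = [&](int idx, int v) { return A[idx] <= v; };
        auto it = std::lower_bound(tails.begin(), tails.end(), A[i], cmp);
        if (it != tails.begin()) pred[i] = *(it - 1);   // link to shorter candidate
        if (it == tails.end()) tails.push_back(i); else *it = i;
    }
    std::vector<int> out;
    for (int i = tails.empty() ? -1 : tails.back(); i != -1; i = pred[i])
        out.push_back(A[i]);
    std::reverse(out.begin(), out.end());
    return out;
}

int main() {
    for (int v : longest_monotonic({5, 1, 8, 2, 9, 3, 10}))
        std::cout << v << ' ';                          // prints: 1 2 3 10
}
```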
[ { "lang": "cpp", "code": "LONG-MONOTONIC(A)\n let B[1..n] be a new array where every value = ∞\n let C[1..n] be a new array\n L = 1\n for i = 1 to n\n if A[i] < B[1]\n B[1] = A[i]\n C[1].head.key = A[i]\n else\n let j be the largest index of B such that B[j] < A[i]\n B[j + 1] = A[i]\n C[j + 1] = C[j]\n INSERT(C[j + 1], A[i])\n if j + 1 > L\n L = L + 1\n print C[L]" } ]
false
[]
15-15.5-1
15
15.5
15.5-1
docs/Chap15/15.5.md
Write pseudocode for the procedure $\text{CONSTRUCT-OPTIMAL-BST}(root)$ which, given the table $root$, outputs the structure of an optimal binary search tree. For the example in Figure 15.10, your procedure should print out the structure $$ \begin{aligned} & \text{$k_2$ is the root} \\\\ & \text{$k_1$ is the left child of $k_2$} \\\\ & \text{$d_0$ is the left child of $k_1$} \\\\ & \text{$d_1$ is the right child of $k_1$} \\\\ & \text{$k_5$ is the right child of $k_2$} \\\\ & \text{$k_4$ is the left child of $k_5$} \\\\ & \text{$k_3$ is the left child of $k_4$} \\\\ & \text{$d_2$ is the left child of $k_3$} \\\\ & \text{$d_3$ is the right child of $k_3$} \\\\ & \text{$d_4$ is the right child of $k_4$} \\\\ & \text{$d_5$ is the right child of $k_5$} \end{aligned} $$ corresponding to the optimal binary search tree shown in Figure 15.9(b).
The initial call is $\text{CONSTRUCT-OPTIMAL-BST}(root, 1, n, 0)$. When the range of keys is empty (that is, $i > j$), the subtree is the dummy key $d_j$, which must be printed too.

```cpp
CONSTRUCT-OPTIMAL-BST(root, i, j, last)
    if i > j                      // empty range of keys: print the dummy key
        if j < last
            print "d_" + j + " is the left child of k_" + last
        else
            print "d_" + j + " is the right child of k_" + last
        return
    r = root[i, j]
    if last == 0
        print "k_" + r + " is the root"
    else if j < last
        print "k_" + r + " is the left child of k_" + last
    else
        print "k_" + r + " is the right child of k_" + last
    CONSTRUCT-OPTIMAL-BST(root, i, r - 1, r)
    CONSTRUCT-OPTIMAL-BST(root, r + 1, j, r)
```
[ { "lang": "cpp", "code": "CONSTRUCT-OPTIMAL-BST(root, i, j, last)\n if i == j\n return\n if last == 0\n print root[i, j] + \"is the root\"\n else if j < last\n print root[i, j] + \"is the left child of\" + last\n else\n print root[i, j] + \"is the right child of\" + last\n CONSTRUCT-OPTIMAL-BST(root, i, root[i, j] - 1, root[i, j])\n CONSTRUCT-OPTIMAL-BST(root, root[i, j] + 1, j, root[i, j])" } ]
false
[]
15-15.5-2
15
15.5
15.5-2
docs/Chap15/15.5.md
Determine the cost and structure of an optimal binary search tree for a set of $n = 7$ keys with the following probabilities $$ \begin{array}{c|cccccccc} i & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\\\ \hline p_i & & 0.04 & 0.06 & 0.08 & 0.02 & 0.10 & 0.12 & 0.14 \\\\ q_i & 0.06 & 0.06 & 0.06 & 0.06 & 0.05 & 0.05 & 0.05 & 0.05 \end{array} $$
Cost is $3.12$. $$ \begin{aligned} & \text{$k_5$ is the root} \\\\ & \text{$k_2$ is the left child of $k_5$} \\\\ & \text{$k_1$ is the left child of $k_2$} \\\\ & \text{$d_0$ is the left child of $k_1$} \\\\ & \text{$d_1$ is the right child of $k_1$} \\\\ & \text{$k_3$ is the right child of $k_2$} \\\\ & \text{$d_2$ is the left child of $k_3$} \\\\ & \text{$k_4$ is the right child of $k_3$} \\\\ & \text{$d_3$ is the left child of $k_4$} \\\\ & \text{$d_4$ is the right child of $k_4$} \\\\ & \text{$k_7$ is the right child of $k_5$} \\\\ & \text{$k_6$ is the left child of $k_7$} \\\\ & \text{$d_5$ is the left child of $k_6$} \\\\ & \text{$d_6$ is the right child of $k_6$} \\\\ & \text{$d_7$ is the right child of $k_7$} \end{aligned} $$
[]
false
[]
15-15.5-3
15
15.5
15.5-3
docs/Chap15/15.5.md
Suppose that instead of maintaining the table $w[i, j]$, we computed the value of $w(i, j)$ directly from equation $\text{(15.12)}$ in line 9 of $\text{OPTIMAL-BST}$ and used this computed value in line 11. How would this change affect the asymptotic running time of $\text{OPTIMAL-BST}$?
Computing $w(i, j)$ from the equation is $\Theta(j - i)$, since the loop below on lines 10-14 is also $\Theta(j - i)$, it wouldn't affect the asymptotic running time of $\text{OPTIMAL-BST}$ which would stay $\Theta(n^3)$.
[]
false
[]
15-15.5-4
15
15.5
15.5-4 $\star$
docs/Chap15/15.5.md
Knuth [212] has shown that there are always roots of optimal subtrees such that $root[i, j - 1] \le root[i, j] \le root[i + 1, j]$ for all $1 \le i < j \le n$. Use this fact to modify the $\text{OPTIMAL-BST}$ procedure to run in $\Theta(n^2)$ time.
Change the **for** loop of line 10 in $\text{OPTIMAL-BST}$ to

```cpp
for r = root[i, j - 1] to root[i + 1, j]
```

Knuth's result implies that it is sufficient to check only these values, because the optimal root of the subtree over keys $k_i, \ldots, k_j$ always lies in this range. The time spent within the **for** loop of line 6 is now $\Theta(n)$, because the ranges of $r$ in the new **for** loop of line 10 overlap only at their endpoints. To see this, suppose we have fixed $l$ and $i$. On one iteration of the **for** loop of line 6, the upper bound on $r$ is $root[i + 1, j] = root[i + 1, i + l - 1]$. When we increment $i$ by $1$ we increase $j$ by $1$. However, the lower bound on $r$ for the next iteration subtracts this, so the lower bound on the next iteration is $root[i + 1, j + 1 - 1] = root[i + 1, j]$. Thus, the total time spent in the **for** loop of line 6 is $\Theta(n)$. Since we iterate the outer **for** loop of line 5 $n$ times, the total runtime is $\Theta(n^2)$.
[ { "lang": "cpp", "code": " for r = r[i, j - 1] to r[i + 1, j]" } ]
false
[]
15-15-1
15
15-1
15-1
docs/Chap15/Problems/15-1.md
Suppose that we are given a directed acyclic graph $G = (V, E)$ with real-valued edge weights and two distinguished vertices $s$ and $t$ . Describe a dynamic-programming approach for finding a longest weighted simple path from $s$ to $t$. What does the subproblem graph look like? What is the efficiency of your algorithm?
First, we clarify the question. We define the weight of a path to be the sum of the weights of the edges in the path. Then, our task is to find the path from $s$ to $t$ that has the largest weight.

$\textbf{Dynamic-programming approach}$

From a given vertex, we try all outgoing edges and take the max over the weights of the resulting paths. That is, for some vertex $v$,

$$\text{LONGEST}(G, v, t) = \max_{(v, v') \in E} \\{w(v, v') + \text{LONGEST}(G, v', t)\\},$$

where $w: E \rightarrow \mathbb{R}$ is a function mapping edges to their weights. The natural base case is that $\text{LONGEST}(G, t, t) = 0$. Applying this recurrence relation along with memoization gives a dynamic-programming solution.

$\textbf{Subproblem graph}$

The subproblem graph is simply the subgraph of our original graph that consists of vertices and edges that lie on some path from $s$ to $t$. This is because the initial problem asks for the longest weighted path from $s$ to $t$, and then this request asks for the longest weighted path from each of $s$'s neighbors to $t$, and so on.

$\textbf{Algorithm efficiency}$

In the worst case (for example, consider a graph that "starts with $s$ and ends with $t$", meaning that every vertex lies on a path from $s$ to $t$), we have $|V|$ distinct subproblems. In this case, we also explore all edges (we explore the edge $(v, w)$ in the subproblem $\text{LONGEST}(G, v, t)$). So, our time complexity is $O(|V| + |E|)$.

$\textbf{Verifying overlapping subproblems and optimal substructure}$

In order to use DP, we must verify that (a) there are overlapping subproblems and (b) we have optimal substructure. For (a), consider a graph that has the following subgraph: $\\{(a, b), (a, c), (b, d), (c, d)\\}$ (imagine that $s$ has an edge to $a$ and $d$ has an edge to $t$). Then, we will ask for the solution to $\text{LONGEST}(G, d, t)$ twice (once from $b$ and once from $c$). For (b), suppose we have the longest weight simple path from $s$ to $t$. Let $v$ be the first vertex after $s$ in our path. Suppose $v \rightsquigarrow t$ is not the longest weight simple path from $v$ to $t$. Then, there is a longer simple path $v \rightsquigarrow t$. If it included $s$, then we have a path $s \rightsquigarrow v$ and a path $v \rightsquigarrow s$, so our graph is not acyclic. Thus, this longer path does not include $s$. So, we can construct a longer weight simple path from $s$ to $t$ than our original path by going from $s$ to $v$ and then taking the longer weight path from $v$ to $t$. We conclude that our original path was not the longest weight simple path from $s$ to $t$, a contradiction.
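A minimal C++ sketch of the memoized recurrence on an adjacency-list DAG; the graph in `main` is made up, and `NEG_INF` marks vertices from which $t$ is unreachable:

```cpp
#include <algorithm>
#include <iostream>
#include <limits>
#include <vector>

const long long NEG_INF = std::numeric_limits<long long>::min() / 2;

struct Edge { int to; long long w; };

// Memoized longest-path weight from v to t in a DAG (NEG_INF if t unreachable).
long long longest(int v, int t, const std::vector<std::vector<Edge>>& adj,
                  std::vector<long long>& memo, std::vector<bool>& seen) {
    if (v == t) return 0;
    if (seen[v]) return memo[v];
    seen[v] = true;
    long long best = NEG_INF;
    for (const Edge& e : adj[v]) {
        long long sub = longest(e.to, t, adj, memo, seen);
        if (sub > NEG_INF) best = std::max(best, e.w + sub);
    }
    return memo[v] = best;
}

int main() {
    // Small made-up DAG: 0 -> 1 -> 3 and 0 -> 2 -> 3, heavier through 2.
    std::vector<std::vector<Edge>> adj(4);
    adj[0] = {{1, 1}, {2, 1}};
    adj[1] = {{3, 1}};
    adj[2] = {{3, 5}};
    std::vector<long long> memo(4, NEG_INF);
    std::vector<bool> seen(4, false);
    std::cout << longest(0, 3, adj, memo, seen) << '\n';   // prints 6
}
```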
[]
false
[]
15-15-10
15
15-10
15-10
docs/Chap15/Problems/15-10.md
Your knowledge of algorithms helps you obtain an exciting job with the Acme Computer Company, along with a $\$10,000$ signing bonus. You decide to invest this money with the goal of maximizing your return at the end of 10 years. You decide to use the Amalgamated Investment Company to manage your investments. Amalgamated Investments requires you to observe the following rules. It offers $n$ different investments, numbered $1$ through $n$. In each year $j$, investment $i$ provides a return rate of $r_{ij}$ . In other words, if you invest $d$ dollars in investment $i$ in year $j$, then at the end of year $j$ , you have $dr_{ij}$ dollars. The return rates are guaranteed, that is, you are given all the return rates for the next 10 years for each investment. You make investment decisions only once per year. At the end of each year, you can leave the money made in the previous year in the same investments, or you can shift money to other investments, by either shifting money between existing investments or moving money to a new investement. If you do not move your money between two consecutive years, you pay a fee of $f_1$ dollars, whereas if you switch your money, you pay a fee of $f_2$ dollars, where $f_2 > f_1$. **a.** The problem, as stated, allows you to invest your money inmultiple investments in each year. Prove that there exists an optimal investment strategy that, in each year, puts all the money into a single investment. (Recall that an optimal investment strategy maximizes the amount of money after 10 years and is not concerned with any other objectives, such as minimizing risk.) **b.** Prove that the problem of planning your optimal investment strategy exhibits optimal substructure. **c.** Design an algorithm that plans your optimal investment strategy. What is the running time of your algorithm? **d.** Suppose that Amalgamated Investments imposed the additional restriction that, at any point, you can have no more than $\$15,000$ in any one investment. Show that the problem of maximizing your income at the end of 10 years no longer exhibits optimal substructure.
**a.** Without loss of generality, suppose that there exists an optimal solution $S$ which involves investing $d_1$ dollars into investment $k$ and $d_2$ dollars into investment $m$ in year $1$. Further, suppose in this optimal solution, you don't move your money for the first $j$ years. If $r_{k1} + r_{k2} + \ldots + r_{kj} > r_{m1} + r_{m2} + \ldots + r_{mj}$, then we can perform the usual cut-and-paste maneuver and instead invest $d_1 + d_2$ dollars into investment $k$ for $j$ years. Keeping all other investments the same, this results in a strategy which is at least as profitable as $S$, but has reduced the number of different investments in a given span of years by $1$. Continuing in this way, we can reduce the optimal strategy to consist of only a single investment each year.

**b.** If a particular investment strategy is the year-one plan for an optimal investment strategy, then after year one we must solve one of two kinds of optimal subproblem: either we maintain the strategy for an additional year, not incurring the money-moving fee, or we move the money, which amounts to solving the problem where we ignore all information from year $1$. Thus, the problem exhibits optimal substructure.

**c.** The algorithm works as follows: we build tables $I$ and $R$ of size $10$ such that $I[k]$ tells which investment should be made (with all the money) in year $k$, and $R[k]$ gives the total return on the investment strategy in years $k$ through $10$.

```cpp
INVEST(d, n)
    let I[1..10] and R[1..10] be new tables
    for k = 10 downto 1
        q = 1
        for i = 1 to n
            if r[i, k] > r[q, k]      // q now holds the investment which looks best for year k
                q = i
        if k == 10                    // final year: no subsequent transition fee
            R[k] = d * r[q, k]
            I[k] = q
        else if R[k + 1] + d * r[I[k + 1], k] - f[1] > R[k + 1] + d * r[q, k] - f[2]
            // revenue is greater when the money is not moved
            R[k] = R[k + 1] + d * r[I[k + 1], k] - f[1]
            I[k] = I[k + 1]
        else
            R[k] = R[k + 1] + d * r[q, k] - f[2]
            I[k] = q
    return I as an optimal strategy with return R[1]
```

Each of the $10$ years scans the $n$ investments once, so with the number of years fixed the running time is $O(n)$.

**d.** The previous investment strategy was independent of the amount of money you started with. When there is a cap on the amount you can invest, the amount you have to invest in the next year becomes relevant. If we know the year-one strategy of an optimal investment, and we know that we need to move money after the first year, we're left with the problem of investing a different initial amount of money, so we'd have to solve a subproblem for every possible initial amount of money. Since there is no bound on the returns, there's also no bound on the number of subproblems we need to solve.
[ { "lang": "cpp", "code": "INVEST(d, n)\n let I[1..10] and R[1..10] be new tables\n for k = 10 downto 1\n q = 1\n for i = 1 to n\n if r[i, k] > r[q, k] // i now holds the investment which looks best for a given year\n q = i\n if R[k + 1] + dr_{I[k + 1]k} - f[1] > R[k + 1] + dr[q, k] - f[2] // If revenue is greater when money is not moved\n R[k] = R[k + 1] + dr_{I[k + 1]k} - f[1]\n I[k] = I[k + 1]\n else\n R[k] = R[k + 1] + dr[q, k] - f[2]\n I[k] = q\n return I as an optimal stategy with return R[1]" } ]
false
[]
15-15-11
15
15-11
15-11
docs/Chap15/Problems/15-11.md
The Rinky Dink Company makes machines that resurface ice rinks. The demand for such products varies from month to month, and so the company needs to develop a strategy to plan its manufacturing given the fluctuating, but predictable, demand. The company wishes to design a plan for the next $n$ months. For each month $i$, the company knows the demand $d_i$, that is, the number of machines that it will sell. Let $D = \sum_{i = 1}^n d_i$ be the total demand over the next $n$ months. The company keeps a full-time staff who provide labor to manufacture up to $m$ machines per month. If the company needs to make more than $m$ machines in a given month, it can hire additional, part-time labor, at a cost that works out to $c$ dollars per machine. Furthermore, if, at the end of a month, the company is holding any unsold machines, it must pay inventory costs. The cost for holding $j$ machines is given as a function $h(j)$ for $j = 1, 2, \ldots, D$, where $h(j) \ge 0$ for $1 \le j \le D$ and $h(j) \le h(j + 1)$ for $1 \le j \le D - 1$. Give an algorithm that calculates a plan for the company that minimizes its costs while fulfilling all the demand. The running time should be polynomial in $n$ and $D$.
Our subproblems will be indexed by an integer $i \in [n]$ and another integer $j \in [D]$. $i$ indicates how many months have passed; that is, we restrict ourselves to only caring about $(d_i, \dots, d_n)$. $j$ indicates how many machines we have in stock initially. The recurrence tries every possible number of machines to produce in month $i$, from $0$ up to $D$, charging $c$ for each machine beyond the $m$ that the full-time staff can build, plus the holding cost $h$ of whatever inventory remains after meeting demand $d_i$, plus the optimal cost of the remaining months. Since the index space has size $O(nD)$ and computing a particular subproblem takes the minimum over at most $D + 1$ options, the total runtime is $O(nD^2)$.
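A sketch of this $O(nD^2)$ recurrence in C++, sweeping months from last to first; the demand vector, capacity `m`, per-machine cost `c`, and holding function `h` are made-up example data:

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// best[j] = minimum cost for the remaining months when we currently hold
// j machines; processed backwards one month at a time.
int main() {
    std::vector<int> d = {3, 2, 6, 1};                  // monthly demand
    int m = 4, c = 5;                                   // free capacity, $/extra machine
    auto h = [](int j) { return 2 * j; };               // nondecreasing holding cost
    int n = (int)d.size(), D = 0;
    for (int x : d) D += x;

    const int INF = 1000000000;
    std::vector<int> best(D + 1, 0);                    // after month n: nothing left to pay
    for (int i = n - 1; i >= 0; --i) {
        std::vector<int> cur(D + 1, INF);
        for (int j = 0; j <= D; ++j)                    // j machines in stock
            for (int make = 0; j + make <= D; ++make) { // produce `make` this month
                if (j + make < d[i]) continue;          // must meet this month's demand
                int left = j + make - d[i];
                int cost = c * std::max(0, make - m) + h(left) + best[left];
                cur[j] = std::min(cur[j], cost);
            }
        best = cur;
    }
    std::cout << best[0] << '\n';                       // start with empty stock
}
```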
[]
false
[]
15-15-12
15
15-12
15-12
docs/Chap15/Problems/15-12.md
Suppose that you are the general manager for a major-league baseball team. During the off-season, you need to sign some free-agent players for your team. The team owner has given you a budget of $\$X$ to spend on free agents. You are allowed to spend less than $\$X$ altogether, but the owner will fire you if you spend any more than $\$X$. You are considering $N$ different positions, and for each position, $P$ free-agent players who play that position are available. Because you do not want to overload your roster with too many players at any position, for each position you may sign at most one free agent who plays that position. (If you do not sign any players at a particular position, then you plan to stick with the players you already have at that position.) To determine how valuable a player is going to be, you decide to use a sabermetric statistic known as "$\text{VORP}$", or "value over replacement player". A player with a higher $\text{VORP}$ is more valuable than a player with a lower $\text{VORP}$. A player with a higher $\text{VORP}$ is not necessarily more expensive to sign than a player with a lower $\text{VORP}$, because factors other than a player's value determine how much it costs to sign him. For each available free-agent player, you have three pieces of information: - the player's position, - the amount of money it will cost to sign the player, and - the player's $\text{VORP}$. Devise an algorithm that maximizes the total $\text{VORP}$ of the players you sign while spending no more than $\$X$ altogether. You may assume that each player signs for a multiple of $100,000$. Your algorithm should output the total $\text{VORP}$ of the players you sign, the total amount of money you spend, and a list of which players you sign. Analyze the running time and space requirement of your algorithm.
We build an $(N + 1) \times (X + 1)$ table $B$ of optimal $\text{VORP}$ values, together with an $N \times X$ table $S$ recording which player (if any) is signed at each position for each remaining budget; here the budget is measured in units of $\$100,000$. Writing $cost[i, k]$ and $vorp[i, k]$ for the cost and $\text{VORP}$ of the $k$th free agent at position $i$, the runtime of the algorithm is $O(NXP)$ and the space is $O(NX)$.

```cpp
BASEBALL(N, X, P)
    initialize a table B of size (N + 1) by (X + 1)
    initialize a table S of size N by X
    for i = 0 to N
        B[i, 0] = 0
    for j = 1 to X
        B[0, j] = 0
    for i = 1 to N
        for j = 1 to X
            q = B[i - 1, j]            // best VORP if we sign no one at position i
            p = 0
            for k = 1 to P
                if j ≥ cost[i, k]
                    t = B[i - 1, j - cost[i, k]] + vorp[i, k]
                    if t > q
                        q = t
                        p = k
            B[i, j] = q
            S[i, j] = p                // player signed at position i with budget j (0 = none)
    print("The total VORP is", B[N, X], "and the players are:")
    j = X
    C = 0
    for i = N downto 1                 // walk the choices back through the table
        k = S[i, j]
        if k != 0
            print("sign player", k, "at position", i)
            j = j - cost[i, k]
            C = C + cost[i, k]
    print("The total cost is", C)
```
[ { "lang": "cpp", "code": "BASEBALL(N, X, P)\n initialize a table B of size (N + 1) by (X + 1)\n initialize an array P of length N\n for i = 0 to N\n B[i, 0] = 0\n for j = 1 to X\n B[0, j] = 0\n for i = 1 to N\n for j = 1 to X\n if j < i.cost\n B[i, j] = B[i - 1, j]\n q = B[i - 1, j]\n p = 0\n for k = 1 to P\n if j >= i.cost\n t = B[i - 1, j - i.cost] + i.value\n if t > q\n q = t\n p = k\n B[i, j] = q\n P[i] = p\n print(\"The total VORP is\", B[N, X], \"and the players are:\")\n i = N\n j = X\n C = 0\n for k = 1 to N // prints the players from the table\n if B[i, j] != B[i - 1, j]\n print(P[i])\n j = j - i.cost\n C = C + i.cost\n i = i - 1\n print(\"The total cost is\", C)" } ]
false
[]
15-15-2
15
15-2
15-2
docs/Chap15/Problems/15-2.md
A **_palindrome_** is a nonempty string over some alphabet that reads the same forward and backward. Examples of palindromes are all strings of length $1$, $\text{civic}$, $\text{racecar}$, and $\text{aibohphobia}$ (fear of palindromes). Give an efficient algorithm to find the longest palindrome that is a subsequence of a given input string. For example, given the input $\text{character}$, your algorithm should return $\text{carac}$. What is the running time of your algorithm?
Let $A[1..n]$ denote the array which contains the given word. First note that for a palindrome to be a subsequence, we must be able to divide the input word at some position $i$, and then solve the longest common subsequence problem on $A[1..i]$ and the reversal of $A[i + 1..n]$, possibly adding in an extra letter to account for palindromes with a central letter. Indeed, a common subsequence of $A[1..i]$ and the reversed suffix, followed by its own reversal (and the optional central letter), reads the same forward and backward. Since there are $n$ places at which we could split the input word and the $\text{LCS}$ problem takes time $O(n^2)$, we can solve the palindrome problem in time $O(n^3)$.
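A runnable C++ sketch of this split-and-LCS reduction; it tries every split for even-length palindromes and every center letter for odd-length ones, for $O(n^3)$ total:

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// LCS length of two strings, O(|a| * |b|).
int lcs(const std::string& a, const std::string& b) {
    std::vector<std::vector<int>> c(a.size() + 1, std::vector<int>(b.size() + 1, 0));
    for (std::size_t i = 1; i <= a.size(); ++i)
        for (std::size_t j = 1; j <= b.size(); ++j)
            c[i][j] = (a[i - 1] == b[j - 1]) ? c[i - 1][j - 1] + 1
                                             : std::max(c[i - 1][j], c[i][j - 1]);
    return c[a.size()][b.size()];
}

// Longest palindromic subsequence length via the split-and-LCS idea above.
int longest_palindrome_subseq(const std::string& s) {
    int n = s.size(), best = (n > 0) ? 1 : 0;
    for (int i = 1; i < n; ++i) {                            // even: split before index i
        std::string pre = s.substr(0, i);
        std::string suf(s.rbegin(), s.rbegin() + (n - i));   // s[i..n) reversed
        best = std::max(best, 2 * lcs(pre, suf));
    }
    for (int c = 0; c < n; ++c) {                            // odd: s[c] is the center
        std::string pre = s.substr(0, c);
        std::string suf(s.rbegin(), s.rbegin() + (n - c - 1));
        best = std::max(best, 2 * lcs(pre, suf) + 1);
    }
    return best;
}

int main() {
    std::cout << longest_palindrome_subseq("character") << '\n';  // prints 5 ("carac")
}
```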
[]
false
[]
15-15-3
15
15-3
15-3
docs/Chap15/Problems/15-3.md
In the **_euclidean traveling-salesman problem_**, we are given a set of $n$ points in the plane, and we wish to find the shortest closed tour that connects all n points. Figure 15.11(a) shows the solution to a $7$-point problem. The general problem is NP-hard, and its solution is therefore believed to require more than polynomial time (see Chapter 34). J. L. Bentley has suggested that we simplify the problem by restricting our attention to **_bitonic tours_**, that is, tours that start at the leftmost point, go strictly rightward to the rightmost point, and then go strictly leftward back to the starting point. Figure 15.11(b) shows the shortest bitonic tour of the same $7$ points. In this case, a polynomial-time algorithm is possible. Describe an $O(n^2)$-time algorithm for determining an optimal bitonic tour. You may assume that no two points have the same $x$-coordinate and that all operations on real numbers take unit time. ($\textit{Hint:}$ Scan left to right, maintaining optimal possibilities for the two parts of the tour.)
First sort all the points based on their $x$-coordinate. To index our subproblems, we give the rightmost point of both the path going out to the left and the path going out to the right. The desired result is then the subproblem indexed by $v$, where $v$ is the rightmost point. Suppose by symmetry that we are further along on the left-going path, so that the left-going path ends at the $i$th point and the right-going path ends at the $j$th point. If $i > j + 1$, then the cost must be the distance from the $(i - 1)$st point to the $i$th point plus the solution to the subproblem obtained by replacing $i$ with $i - 1$. There can be at most $O(n^2)$ of these subproblems, and solving each requires considering only a constant number of cases. The other possibility is that $j \le i \le j + 1$. In this case, we consider, for every $k$ from $1$ to $j$, the subproblem where we replace $i$ with $k$, plus the cost from the $k$th point to the $i$th point, and take the minimum over all of them. This case requires considering $O(n)$ options, but there are only $O(n)$ such subproblems. So, the final runtime is $O(n^2)$.
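This DP is more commonly written with a table $b[i][j]$ for the two partial paths; below is a C++ sketch of that standard $O(n^2)$ formulation, under the assumption that the points in `main` (which are made up) are already sorted by $x$-coordinate:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// b[i][j] (i <= j) is the shortest total length of two disjoint paths that
// start at point 0, end at points i and j, and together visit points 0..j.
struct Pt { double x, y; };

double dist(const Pt& a, const Pt& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

int main() {
    std::vector<Pt> p = {{0,0}, {1,6}, {2,3}, {5,2}, {6,5}, {7,1}, {8,4}};
    int n = (int)p.size();
    std::vector<std::vector<double>> b(n, std::vector<double>(n, 1e18));
    b[0][1] = dist(p[0], p[1]);
    for (int j = 2; j < n; ++j) {
        for (int i = 0; i < j - 1; ++i)
            b[i][j] = b[i][j - 1] + dist(p[j - 1], p[j]);   // extend the path ending at j-1
        b[j - 1][j] = 1e18;
        for (int k = 0; k < j - 1; ++k)                     // point j attaches to some earlier k
            b[j - 1][j] = std::min(b[j - 1][j], b[k][j - 1] + dist(p[k], p[j]));
    }
    double tour = b[n - 2][n - 1] + dist(p[n - 2], p[n - 1]);  // close the tour
    std::printf("%.4f\n", tour);
}
```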
[]
false
[]
15-15-4
15
15-4
15-4
docs/Chap15/Problems/15-4.md
Consider the problem of neatly printing a paragraph with a monospaced font (all characters having the same width) on a printer. The input text is a sequence of $n$ words of lengths $l_1, l_2, \ldots, l_n$, measured in characters. We want to print this paragraph neatly on a number of lines that hold a maximum of $M$ characters each. Our criterion of "neatness" is as follows. If a given line contains words $i$ through $j$, where $i \le j$ , and we leave exactly one space between words, the number of extra space characters at the end of the line is $M - j + i - \sum_{k = i}^j l_k$, which must be nonnegative so that the words fit on the line. We wish to minimize the sum, over all lines except the last, of the cubes of the numbers of extra space characters at the ends of lines. Give a dynamic-programming algorithm to print a paragraph of $n$ words neatly on a printer. Analyze the running time and space requirements of your algorithm.
First observe that the problem exhibits optimal substructure in the following way: suppose we know that an optimal solution has $k$ words on the first line. Then we must solve the subproblem of printing neatly words $l_{k + 1}, \dots, l_n$. We build a table of optimal solutions to solve the problem using dynamic programming. If $n - 1 + \sum_{k = 1}^n l_k \le M$, then put all words on a single line for an optimal solution. In the following algorithm $\text{PRINT-NEATLY}(n)$, $C[k]$ contains the cost of printing neatly words $l_k$ through $l_n$. We can determine the cost of an optimal solution upon termination by examining $C[1]$. The entry $P[k]$ contains the position of the last word which should appear on the first line of the optimal solution of words $l_k, \dots, l_n$. Thus, to obtain the optimal way to place the words, we make $l_{P[1]}$ the last word on the first line, $l_{P[P[1] + 1]}$ the last word on the second line, and so on.

```cpp
PRINT-NEATLY(n)
    let P[1..n] and C[1..n] be new tables
    for k = n downto 1
        if Σ_{i = k}^{n} l_i + (n - k) ≤ M     // words k..n fit on one line
            C[k] = 0
            P[k] = n
        else
            q = ∞
            for j = 1 to n - k                 // words k..k+j-1 go on the first line
                cost = Σ_{m = 0}^{j - 1} l_{k + m} + (j - 1)
                if cost ≤ M and (M - cost)^3 + C[k + j] < q
                    q = (M - cost)^3 + C[k + j]
                    P[k] = k + j - 1
            C[k] = q
```
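A runnable C++ version of this recurrence (the word lengths and `M` are made up, and every word is assumed to fit on a line by itself); it treats the last line as free, which is equivalent to the fits-on-one-line base case above:

```cpp
#include <iostream>
#include <vector>

// Word lengths l[0..n-1], line width M; C[k] = min cost of printing words
// k..n-1, P[k] = index of the last word on the first line of that solution.
int main() {
    std::vector<int> l = {3, 5, 2, 7, 4, 3};
    int M = 12, n = (int)l.size();
    const long long INF = (long long)1e18;
    std::vector<long long> C(n + 1, 0);
    std::vector<int> P(n, 0);
    for (int k = n - 1; k >= 0; --k) {
        long long q = INF;
        int chars = -1;                        // length of words k..k+j-1 plus spaces
        for (int j = 1; k + j <= n; ++j) {
            chars += l[k + j - 1] + 1;
            if (chars > M) break;              // line overflows: stop growing it
            long long extras = M - chars;
            long long cand = (k + j == n) ? 0  // last line costs nothing
                                          : extras * extras * extras + C[k + j];
            if (cand < q) { q = cand; P[k] = k + j - 1; }
        }
        C[k] = q;
    }
    std::cout << "cost " << C[0] << '\n';
    for (int k = 0; k < n; k = P[k] + 1)       // report the chosen line breaks
        std::cout << "words " << k << ".." << P[k] << '\n';
}
```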
[ { "lang": "cpp", "code": "PRINT-NEATLY(n)\n let P[1..n] and C[1..n] be new tables\n for k = n downto 1\n if sum_{i = k}^n l_i + n - k < M\n C[k] = 0\n q = ∞\n for j = 1 to n - k\n cost = sum_{m = 1}^j l_{k + m} + m - 1\n if cost < M and (M - cost)^3 + C[k + m + 1] < q\n q = (M - cost)^3 + C[k + m + 1]\n P[k] = k + j\n C[k] = q" } ]
false
[]
15-15-5
15
15-5
15-5
docs/Chap15/Problems/15-5.md
In order to transform one source string of text $x[1..m]$ to a target string $y[1..n]$, we can perform various transformation operations. Our goal is, given $x$ and $y$, to produce a series of transformations that change $x$ to $y$. We use an array $z$—assumed to be large enough to hold all the characters it will need—to hold the intermediate results. Initially, $z$ is empty, and at termination, we should have $z[j] = y[j]$ for $j = 1, 2, \ldots, n$. We maintain current indices $i$ into $x$ and $j$ into $z$, and the operations are allowed to alter $z$ and these indices. Initially, $i = j = 1$. We are required to examine every character in $x$ during the transformation, which means that at the end of the sequence of transformation operations, we must have $i = m + 1$. We may choose from among six transformation operations: **Copy** a character from $x$ to $z$ by setting $z[j] = x[i]$ and then incrementing both $i$ and $j$. This operation examines $x[i]$. **Replace** a character from $x$ by another character $c$, by setting $z[j] = c$, and then incrementing both $i$ and $j$. This operation examines $x[i]$. **Delete** a character from $x$ by incrementing $i$ but leaving $j$ alone. This operation examines $x[i]$. **Insert** the character $c$ into $z$ by setting $z[j] = c$ and then incrementing $j$, but leaving $i$ alone. This operation examines no characters of $x$. **Twiddle** (i.e., exchange) the next two characters by copying them from $x$ to $z$ but in the opposite order; we do so by setting $z[j] = x[i + 1]$ and $z[j + 1] = x[i]$ and then setting $i = i + 2$ and $j = j + 2$. This operation examines $x[i]$ and $x[i + 1]$. **Kill** the remainder of $x$ by setting $i = m + 1$. This operation examines all characters in $x$ that have not yet been examined. This operation, if performed, must be the final operation. As an example, one way to transform the source string $\text{algorithm}$ to the target string $\text{altruistic}$ is to use the following sequence of operations, where the underlined characters are $x[i]$ and $z[j]$ after the operation: $$ \begin{array}{lll} \text{Operation} & x & z \\\\ \hline \textit{initial strings} & \underline algorithm & \text{\textunderscore} \\\\ \text{copy} & a\underline lgorithm & a\text{\textunderscore} \\\\ \text{copy} & al\underline gorithm & al\text{\textunderscore} \\\\ \text{replace by $t$} & alg\underline orithm & alt\text{\textunderscore} \\\\ \text{delete} & algo\underline rithm & alt\text{\textunderscore} \\\\ \text{copy} & algor\underline ithm & altr\text{\textunderscore} \\\\ \text{insert $u$} & algor\underline ithm & altru\text{\textunderscore} \\\\ \text{insert $i$} & algor\underline ithm & altrui\text{\textunderscore} \\\\ \text{insert $s$} & algor\underline ithm & altruis\text{\textunderscore} \\\\ \text{twiddle} & algorit\underline hm & altruisti\text{\textunderscore} \\\\ \text{insert $c$} & algorit\underline hm & altruistic\text{\textunderscore} \\\\ \text{kill} & algorithm\text{\textunderscore} & altruistic\text{\textunderscore} \end{array} $$ Note that there are several other sequences of transformation operations that transform $\text{algorithm}$ to $\text{altruistic}$. Each of the transformation operations has an associated cost. The cost of an operation depends on the specific application, but we assume that each operation's cost is a constant that is known to us. 
We also assume that the individual costs of the copy and replace operations are less than the combined costs of the delete and insert operations; otherwise, the copy and replace operations would not be used. The cost of a given sequence of transformation operations is the sum of the costs of the individual operations in the sequence. For the sequence above, the cost of transforming $\text{algorithm}$ to $\text{altruistic}$ is $$\text{($3 \cdot$ cost(copy)) + cost(replace) + cost(delete) + ($4 \cdot$ cost(insert)) + cost(twiddle) + cost(kill)}.$$

**a.** Given two sequences $x[1..m]$ and $y[1..n]$ and a set of transformation-operation costs, the **_edit distance_** from $x$ to $y$ is the cost of the least expensive operation sequence that transforms $x$ to $y$. Describe a dynamic-programming algorithm that finds the edit distance from $x[1..m]$ to $y[1..n]$ and prints an optimal operation sequence. Analyze the running time and space requirements of your algorithm.

The edit-distance problem generalizes the problem of aligning two DNA sequences (see, for example, Setubal and Meidanis [310, Section 3.2]). There are several methods for measuring the similarity of two DNA sequences by aligning them. One such method to align two sequences $x$ and $y$ consists of inserting spaces at arbitrary locations in the two sequences (including at either end) so that the resulting sequences $x'$ and $y'$ have the same length but do not have a space in the same position (i.e., for no position $j$ are both $x'[j]$ and $y'[j]$ a space). Then we assign a "score" to each position. Position $j$ receives a score as follows:

- $+1$ if $x'[j] = y'[j]$ and neither is a space,
- $-1$ if $x'[j] \ne y'[j]$ and neither is a space,
- $-2$ if either $x'[j]$ or $y'[j]$ is a space.

The score for the alignment is the sum of the scores of the individual positions. For example, given the sequences $x = \text{GATCGGCAT}$ and $y = \text{CAATGTGAATC}$, one alignment is

$$
\begin{array}{cccccccccccc}
\text G & & \text A & \text T & \text C & \text G & & \text G & \text C & \text A & \text T & \\\\
\text C & \text A & \text A & \text T & & \text G & \text T & \text G & \text A & \text A & \text T & \text C \\\\
- & * & + & + & * & + & * & + & - & + & + & *
\end{array}
$$

A $+$ under a position indicates a score of $+1$ for that position, a $-$ indicates a score of $-1$, and a $*$ indicates a score of $-2$, so that this alignment has a total score of $6 \cdot 1 - 2 \cdot 1 - 4 \cdot 2 = -4$.

**b.** Explain how to cast the problem of finding an optimal alignment as an edit distance problem using a subset of the transformation operations copy, replace, delete, insert, twiddle, and kill.
**a.** We will index our subproblems by two integers, $1 \le i \le m + 1$ and $1 \le j \le n + 1$. We let $i$ indicate the leftmost element of $x$ we have not yet processed and $j$ indicate the leftmost position of $z$ (equivalently, of $y$) we have not yet produced. The call $\text{EDIT}(x, y, 1, 1)$ returns the edit distance. Memoizing the recursion over the $O(mn)$ pairs $(i, j)$, each of which is computed from a constant number of other entries, gives an $O(mn)$-time, $O(mn)$-space algorithm, and an optimal operation sequence can be printed by retracing which option attained each minimum.

```cpp
EDIT(x, y, i, j)
    m = x.length
    n = y.length
    if i == m + 1                  // x is exhausted: insert the rest of y
        return (n - j + 1) · cost(insert)
    if j == n + 1                  // z is complete: delete or kill the rest of x
        return min{(m - i + 1) · cost(delete), cost(kill)}
    initialize o1, ..., o5 to ∞
    if x[i] == y[j]
        o1 = cost(copy) + EDIT(x, y, i + 1, j + 1)
    o2 = cost(replace) + EDIT(x, y, i + 1, j + 1)
    o3 = cost(delete) + EDIT(x, y, i + 1, j)
    o4 = cost(insert) + EDIT(x, y, i, j + 1)
    if i < m and j < n and x[i] == y[j + 1] and x[i + 1] == y[j]
        o5 = cost(twiddle) + EDIT(x, y, i + 2, j + 2)
    return min_{k ∈ [5]} {o_k}
```

**b.** We will set

$$\text{cost(delete)} = \text{cost(insert)} = 2,$$
$$\text{cost(copy)} = -1,$$
$$\text{cost(replace)} = 1,$$

and

$$\text{cost(twiddle)} = \text{cost(kill)} = \infty.$$

Then a minimum-cost translation of the first string into the second corresponds to an optimal alignment. We view

- a $\text{copy}$ or a $\text{replace}$ as incrementing a pointer into both strings,
- an $\text{insert}$ as putting a space at the current position of the pointer in the first string, and
- a $\text{delete}$ operation as putting a space in the current position in the second string.

Since $\text{twiddle}$s and $\text{kill}$s have infinite costs, we will have neither of them in a minimal cost solution. The score of the optimal alignment is the negative of the minimum cost of a sequence of edits.
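A runnable C++ sketch of the part-(b) reduction, computing the table bottom-up instead of recursively; twiddle and kill are omitted because their cost is infinite under these settings, and the strings are the DNA sequences from the alignment example:

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Edit distance with costs copy = -1, replace = 1, delete = insert = 2;
// the optimal alignment score is the negative of this minimum cost.
int main() {
    const int COPY = -1, REPLACE = 1, DEL = 2, INS = 2;
    std::string x = "GATCGGCAT", y = "CAATGTGAATC";
    int m = x.size(), n = y.size();
    // e[i][j] = min cost to turn x[i..m) into y[j..n).
    std::vector<std::vector<int>> e(m + 1, std::vector<int>(n + 1, 0));
    for (int i = m; i >= 0; --i)
        for (int j = n; j >= 0; --j) {
            if (i == m) { e[i][j] = (n - j) * INS; continue; }
            if (j == n) { e[i][j] = (m - i) * DEL; continue; }
            int best = REPLACE + e[i + 1][j + 1];
            if (x[i] == y[j]) best = std::min(best, COPY + e[i + 1][j + 1]);
            best = std::min(best, DEL + e[i + 1][j]);
            best = std::min(best, INS + e[i][j + 1]);
            e[i][j] = best;
        }
    std::cout << "min cost " << e[0][0]
              << ", alignment score " << -e[0][0] << '\n';
}
```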
[ { "lang": "cpp", "code": "EDIT(x, y, i, j)\n let m = x.length\n let n = y.length\n if i == m\n return (n - j)cost(insert)\n if j == n\n return min{(m - i)cost(delete), cost(kill)}\n initialize o1, ..., o5 to ∞\n if x[i] == y[j]\n o1 = cost(copy) + EDIT(x, y, i + 1, j + 1)\n o2 = cost(replace) + EDIT(x, y, i + 1, j + 1)\n o3 = cost(delete) + EDIT(x, y, i + 1, j)\n o4 = cost(insert) + EDIT(x, y, i, j + 1)\n if i < m - 1 and j < n - 1\n if x[i] == y[j + 1] and x[i + 1] == y[j]\n o5 = cost(twiddle) + EDIT(x, y, i + 2, j + 2)\n return min_{i ∈ [5]}{o_i}" } ]
false
[]
15-15-6
15
15-6
15-6
docs/Chap15/Problems/15-6.md
Professor Stewart is consulting for the president of a corporation that is planning a company party. The company has a hierarchical structure; that is, the supervisor relation forms a tree rooted at the president. The personnel office has ranked each employee with a conviviality rating, which is a real number. In order to make the party fun for all attendees, the president does not want both an employee and his or her immediate supervisor to attend. Professor Stewart is given the tree that describes the structure of the corporation, using the left-child, right-sibling representation described in Section 10.4. Each node of the tree holds, in addition to the pointers, the name of an employee and that employee's conviviality ranking. Describe an algorithm to make up a guest list that maximizes the sum of the conviviality ratings of the guests. Analyze the running time of your algorithm.
The problem exhibits optimal substructure in the following way: If the root $r$ is included in an optimal solution, then we must solve the optimal subproblems rooted at the grandchildren of $r$. If $r$ is not included, then we must solve the optimal subproblems on trees rooted at the children of $r$. The dynamic programming algorithm to solve this problem works as follows: We make a table $C$ indexed by vertices which tells us the optimal conviviality ranking of a guest list obtained from the subtree with root at that vertex. We also make a table $G$ such that $G[i]$ tells us the guest list we would use when vertex $i$ is at the root. Let $T$ be the tree of guests. To solve the problem, we need to examine the guest list stored at $G[T.root]$. First solve the problem at each leaf $L$. If the conviviality ranking at $L$ is positive, $G[L] = \\{L\\}$ and $C[L] = L.conviv$. Otherwise $G[L] = \emptyset$ and $C[L] = 0$. Iteratively solve the subproblems located at parents of nodes at which the subproblem has been solved. In general for a node $x$, $$C[x] = \max(\sum_{y\text{ is a child of } x} C[y], x.conviv + \sum_{y\text{ is a grandchild of } x} C[y]).$$ The runtime is $O(n)$ since each node appears in at most two of the sums (because each node has at most 1 parent and 1 grandparent) and each node is solved once.
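A compact way to realize this recurrence is to compute, for every node in one post-order pass, the pair (best value if this node attends, best value if it does not); the child and grandchild sums above then fold into the second component. The following is a hedged Python sketch, assuming a hypothetical `Node` class in the left-child, right-sibling representation; the field names are illustrative.

```python
class Node:
    def __init__(self, conviv):
        self.conviv = conviv        # conviviality rating
        self.left_child = None      # leftmost child
        self.right_sibling = None   # next sibling to the right

def plan_party(x):
    """Return (best value if x attends, best value if x does not attend)."""
    if x is None:
        return (0, 0)
    with_x, without_x = x.conviv, 0
    child = x.left_child
    while child is not None:
        c_with, c_without = plan_party(child)
        with_x += c_without                  # x attends: children must not
        without_x += max(c_with, c_without)  # x absent: children choose freely
        child = child.right_sibling
    return (with_x, without_x)

# The optimal total rating is max(plan_party(president)); the guest list
# itself can be recovered by remembering which branch of each max was taken.
```

Each node is visited exactly once, so this runs in the same $O(n)$ time as the table-based description.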
[]
false
[]
15-15-7
15
15-7
15-7
docs/Chap15/Problems/15-7.md
We can use dynamic programming on a directed graph $G = (V, E)$ for speech recognition. Each edge $(u, v) \in E$ is labeled with a sound $\sigma(u, v)$ from a finite set $\Sigma$ of sounds. The labeled graph is a formal model of a person speaking a restricted language. Each path in the graph starting from a distinguished vertex $v_0 \in V$ corresponds to a possible sequence of sounds produced by the model. We define the label of a directed path to be the concatenation of the labels of the edges on that path.

**a.** Describe an efficient algorithm that, given an edge-labeled graph $G$ with distinguished vertex $v_0$ and a sequence $s = \langle \sigma_1, \sigma_2, \ldots, \sigma_k \rangle$ of sounds from $\Sigma$, returns a path in $G$ that begins at $v_0$ and has $s$ as its label, if any such path exists. Otherwise, the algorithm should return $\text{NO-SUCH-PATH}$. Analyze the running time of your algorithm. ($\textit{Hint:}$ You may find concepts from Chapter 22 useful.)

Now, suppose that every edge $(u, v) \in E$ has an associated nonnegative probability $p(u, v)$ of traversing the edge $(u, v)$ from vertex $u$ and thus producing the corresponding sound. The sum of the probabilities of the edges leaving any vertex equals $1$. The probability of a path is defined to be the product of the probabilities of its edges. We can view the probability of a path beginning at $v_0$ as the probability that a "random walk" beginning at $v_0$ will follow the specified path, where we randomly choose which edge to take leaving a vertex $u$ according to the probabilities of the available edges leaving $u$.

**b.** Extend your answer to part (a) so that if a path is returned, it is a _most probable path_ starting at $v_0$ and having label $s$. Analyze the running time of your algorithm.
**a.** Our substructure will consist of trying to find suffixes of $s$ of length one less, starting at all the edges leaving $v_0$ with label $\sigma_1$. If any of them has a solution, then there is a solution. If none do, then there is none. See the algorithm $\text{VITERBI}$ for details.

```cpp
VITERBI(G, s, v[0])
    if s.length = 0
        return v[0]
    for edges(v[0], v[1]) in V for some v[1]
        if sigma(v[0], v[1]) = sigma[1]
            res = VITERBI(G, (sigma[2], ..., sigma[k]), v[1])
            if res != NO-SUCH-PATH
                return (v[0], res)
    return NO-SUCH-PATH
```

Since the subproblems are indexed by a suffix of $s$ (of which there are only $k + 1$) and a vertex in the graph, there are at most $O(k|V|)$ different possible arguments, each solved once if we memoize. Since each run may require testing an edge going to every other vertex, and each iteration of the **for** loop takes at most a constant amount of time other than the call to $\text{VITERBI}$, the final runtime is $O(k|V|^2)$.

**b.** For this modification, we will need to try all the possible edges leaving $v_0$ instead of stopping as soon as we find one that works. The substructure is very similar. We'll make it so that instead of just returning the sequence, we'll have the algorithm also return the probability of that maximum probability sequence, calling the fields seq and prob respectively. See the algorithm $\text{PROB-VITERBI}$.

Since the subproblems are indexed by the same parameters, we will call it with at most $O(k|V|)$ different possible arguments. Since each run may require testing an edge going to every other vertex, and each iteration of the **for** loop takes at most a constant amount of time other than the call to $\text{PROB-VITERBI}$, the final runtime is $O(k|V|^2)$.

```cpp
PROB-VITERBI(G, s, v[0])
    if s.length = 0
        sols.seq = v[0]
        sols.prob = 1
        return sols
    sols.seq = NO-SUCH-PATH
    sols.prob = 0
    for edges(v[0], v[1]) in V for some v[1]
        if sigma(v[0], v[1]) = sigma[1]
            res = PROB-VITERBI(G, (sigma[2], ..., sigma[k]), v[1])
            if p(v[0], v[1]) * res.prob > sols.prob
                sols.prob = p(v[0], v[1]) * res.prob
                sols.seq = (v[0], res.seq)
    return sols
```
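The claimed $O(k|V|^2)$ bound presumes the recursion is memoized on the pair (current vertex, position in $s$). Below is a hedged Python sketch of part (b) under an assumed adjacency-list representation `graph[u] = [(v, sound, prob), ...]`; the names are illustrative. Part (a) follows by asking whether the returned probability is nonzero after setting every edge probability to $1$.

```python
from functools import lru_cache

def most_probable_path(graph, v0, s):
    """graph: dict mapping u -> list of (v, sound, prob) edges.
    Returns (prob, path) for a most probable path labeled s from v0,
    or (0.0, None) if no such path exists."""
    k = len(s)

    @lru_cache(maxsize=None)
    def best(u, i):
        # Best way to produce the suffix s[i:] starting from vertex u.
        if i == k:
            return (1.0, (u,))
        best_prob, best_path = 0.0, None
        for v, sound, p in graph.get(u, ()):
            if sound == s[i]:
                sub_prob, sub_path = best(v, i + 1)
                if p * sub_prob > best_prob:
                    best_prob, best_path = p * sub_prob, (u,) + sub_path
        return (best_prob, best_path)

    return best(v0, 0)
```

There are $O(k|V|)$ memoized states, each scanning at most $|V|$ outgoing edges, giving the $O(k|V|^2)$ bound (ignoring the cost of building the path tuples, which a parent-pointer table would avoid).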
[ { "lang": "cpp", "code": "VITERBI(G, s, v[0])\n if s.length = 0\n return v[0]\n for edges(v[0], v[1]) in V for some v[1]\n if sigma(v[0], v[1]) = sigma[1]\n res = VITERBI(G, (sigma[2], ..., sigma[k]), v[1])\n if res != NO-SUCH-PATH\n return (v[0], res)\n return NO-SUCH-PATH" }, { "lang": "cpp", "code": "PROB-VITERBI(G, s, v[0])\n if s.length = 0\n return v[0]\n sols.seq = NO-SUCH-PATH\n sols.prob = 0\n for edges(v[0], v[1]) in V for some v[1]\n if sigma(v[0], v[1]) = sigma[1]\n res = PROB-VITERBI(G, (sigma[2], ..., sigma[k]), v[1])\n if p(v[0], v[1]) * res.prob ≥ sols.prob\n sols.prob = p(v[0], v[1]) * res.prob and sols.seq = v[0], res.seq\n return sols" } ]
false
[]
15-15-8
15
15-8
15-8
docs/Chap15/Problems/15-8.md
We are given a color picture consisting of an $m \times n$ array $A[1..m, 1..n]$ of pixels, where each pixel specifies a triple of red, green, and blue (RGB) intensities. Suppose that we wish to compress this picture slightly. Specifically, we wish to remove one pixel from each of the $m$ rows, so that the whole picture becomes one pixel narrower. To avoid disturbing visual effects, however, we require that the pixels removed in two adjacent rows be in the same or adjacent columns; the pixels removed form a "seam" from the top row to the bottom row where successive pixels in the seam are adjacent vertically or diagonally. **a.** Show that the number of such possible seams grows at least exponentially in $m$, assuming that $n > 1$. **b.** Suppose now that along with each pixel $A[i, j]$, we have calculated a real-valued disruption measure $d[i, j]$, indicating how disruptive it would be to remove pixel $A[i, j]$. Intuitively, the lower a pixel's disruption measure, the more similar the pixel is to its neighbors. Suppose further that we define the disruption measure of a seam to be the sum of the disruption measures of its pixels. Give an algorithm to find a seam with the lowest disruption measure. How efficient is your algorithm?
**a.** If $n > 1$ then for every choice of pixel at a given row, we have at least $2$ choices of pixel in the next row to add to the seam ($3$ if we're not in column $1$ or $n$). Thus the total number of possibilities is bounded below by $2^m$.

**b.** We create a table $D[1..m, 1..n]$ such that $D[i, j]$ stores the disruption of an optimal seam ending at position $[i, j]$, which started in row $1$. We also create a table $S[i, j]$ which stores the list of ordered pairs indicating which pixels were used to create the optimal seam ending at position $(i, j)$.

To find the solution to the problem, we look for the column $k$ minimizing $D[m, k]$ in row $m$ of table $D$, and use the list of pixels stored at $S[m, k]$ to determine the optimal seam. To simplify the algorithm $\text{SEAM}(A)$, let $\text{MIN}(a, b, c)$ be the function which returns $−1$ if $a$ is the minimum, $0$ if $b$ is the minimum, and $1$ if $c$ is the minimum value from among $a$, $b$, and $c$. The time complexity of the algorithm is $O(mn)$, provided the list updates share structure with the previous row's lists; storing just one predecessor per cell and backtracking, as in the sketch below, achieves the same bound directly.

```cpp
SEAM(A)
    let D[1..m, 1..n] be a table with zeros
    let S[1..m, 1..n] be a table with empty lists
    for i = 1 to n
        S[1, i] = (1, i)
        D[1, i] = d_{1i}
    for i = 2 to m
        for j = 1 to n
            if j == 1    // left-edge case
                if D[i - 1, j] < D[i - 1, j + 1]
                    D[i, j] = D[i - 1, j] + d_{ij}
                    S[i, j] = S[i - 1, j].insert(i, j)
                else
                    D[i, j] = D[i - 1, j + 1] + d_{ij}
                    S[i, j] = S[i - 1, j + 1].insert(i, j)
            else if j == n    // right-edge case
                if D[i - 1, j - 1] < D[i - 1, j]
                    D[i, j] = D[i - 1, j - 1] + d_{ij}
                    S[i, j] = S[i - 1, j - 1].insert(i, j)
                else
                    D[i, j] = D[i - 1, j] + d_{ij}
                    S[i, j] = S[i - 1, j].insert(i, j)
            else
                x = MIN(D[i - 1, j - 1], D[i - 1, j], D[i - 1, j + 1])
                D[i, j] = D[i - 1, j + x] + d_{ij}
                S[i, j] = S[i - 1, j + x].insert(i, j)
    q = 1
    for j = 1 to n
        if D[m, j] < D[m, q]
            q = j
    print(S[m, q])
```
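Here is the same recurrence written bottom-up in Python; instead of storing whole pixel lists it keeps one predecessor column per cell and backtracks at the end, which makes the $O(mn)$ time and space bounds immediate. A hedged sketch, assuming `d` is given as an $m \times n$ list of lists of disruption measures.

```python
def min_seam(d):
    """d: m x n grid of disruption measures.
    Returns (cost, seam), where seam is a list of (row, col) pixels."""
    m, n = len(d), len(d[0])
    D = [row[:] for row in d]            # D[i][j] = cheapest seam ending at (i, j)
    parent = [[0] * n for _ in range(m)] # column used in the previous row
    for i in range(1, m):
        for j in range(n):
            lo, hi = max(j - 1, 0), min(j + 1, n - 1)
            k = min(range(lo, hi + 1), key=lambda t: D[i - 1][t])
            D[i][j] = D[i - 1][k] + d[i][j]
            parent[i][j] = k
    j = min(range(n), key=lambda t: D[m - 1][t])
    cost, seam = D[m - 1][j], []
    for i in range(m - 1, -1, -1):       # walk the parent pointers upward
        seam.append((i, j))
        j = parent[i][j]
    return cost, list(reversed(seam))
```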
[ { "lang": "cpp", "code": "SEAM(A)\n let D[1..m, 1..n] be a table with zeros\n let S[1..m, 1..n] be a table with empty lists\n for i = 1 to n\n S[1, i] = (1, i)\n D[1, i] = d_{1i}\n for i = 2 to m\n for j = 1 to n\n if j == 1 // left-edge case\n if D[i - 1, j] < D[i - 1, j + 1]\n D[i, j] = D[i - 1, j] + d_{ij}\n S[i, j] = S[i - 1, j].insert(i, j)\n else\n D[i, j] = D[i - 1, j + 1] + d_{ij}\n S[i, j] = S[i - 1, j + 1].insert(i, j)\n else if j == n // right-edge case\n if D[i - 1, j - 1] < D[i - 1, j]\n D[i, j] = D[i - 1, j - 1] + d_{ij}\n S[i, j] = S[i - 1, j - 1].insert(i, j)\n else\n D[i, j] = D[i - 1, j] + d_{ij}\n S[i, j] = S[i - 1, j].insert(i, j)\n x = MIN(D[i - 1, j - 1], D[i - 1, j], D[i - 1, j + 1])\n D[i, j] = D[i - 1, j + x]\n S[i, j] = S[i - 1, j + x].insert(i, j)\n q = 1\n for j = 1 to n\n if D[m, j] < D[m, q]\n q = j\n print(S[m, q])" } ]
false
[]
15-15-9
15
15-9
15-9
docs/Chap15/Problems/15-9.md
A certain string-processing language allows a programmer to break a string into two pieces. Because this operation copies the string, it costs $n$ time units to break a string of $n$ characters into two pieces. Suppose a programmer wants to break a string into many pieces. The order in which the breaks occur can affect the total amount of time used. For example, suppose that the programmer wants to break a $20$-character string after characters $2$, $8$, and $10$ (numbering the characters in ascending order from the left-hand end, starting from $1$). If she programs the breaks to occur in left-to-right order, then the first break costs $20$ time units, the second break costs $18$ time units (breaking the string from characters $3$ to $20$ at character $8$), and the third break costs $12$ time units, totaling $50$ time units. If she programs the breaks to occur in right-to-left order, however, then the first break costs $20$ time units, the second break costs $10$ time units, and the third break costs $8$ time units, totaling $38$ time units. In yet another order, she could break first at $8$ (costing $20$), then break the left piece at $2$ (costing $8$), and finally the right piece at $10$ (costing $12$), for a total cost of $40$.

Design an algorithm that, given the numbers of characters after which to break, determines a least-cost way to sequence those breaks. More formally, given a string $S$ with $n$ characters and an array $L[1..m]$ containing the break points, compute the lowest cost for a sequence of breaks, along with a sequence of breaks that achieves this cost.
The subproblems will be indexed by contiguous subarrays $L[i..j]$ of the array of break points, together with the endpoints $l$ and $r$ of the piece of string those breaks lie in. For each subproblem we try making each possible break first, and take the one with cheapest cost. Since there are at most $m$ breaks to try for each subproblem, and at most $O(m^2)$ subproblems, the dependence on $m$ is $O(m^3)$. Also, since each of the additions is of a number that is $O(nm)$, each iteration of the **for** loop may take time $O(\lg n + \lg m)$, so (assuming $m \le n$) the final runtime is $O(m^3\lg n)$.

The given algorithm returns $(cost, seq)$, where $cost$ is the cost of the cheapest sequence and $seq$ is the sequence of breaks to make; $\text{CUT-STRING}(L, i, j, l, r)$ solves the subproblem of making the breaks $L[i..j]$ inside the piece of string running from position $l$ to position $r$.

```cpp
CUT-STRING(L, i, j, l, r)
    if i > j    // no breaks remain in this piece
        return (0, [])
    minCost = ∞
    for k = i to j
        (leftCost, leftSeq) = CUT-STRING(L, i, k - 1, l, L[k])
        (rightCost, rightSeq) = CUT-STRING(L, k + 1, j, L[k], r)
        if r - l + leftCost + rightCost < minCost
            minCost = r - l + leftCost + rightCost
            minSeq = [L[k]] + leftSeq + rightSeq
    return (minCost, minSeq)
```

Sample call:

```cpp
L = [2, 8, 10]
n = 20
CUT-STRING(L, 1, 3, 0, n)
```
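To meet the $O(m^3)$ subproblem bound, the recursion must be memoized on $(i, j)$; once sentinels $L[0] = 0$ and $L[m + 1] = n$ are added, the endpoints $l$ and $r$ are determined by $(i, j)$ and need not be part of the key. A hedged Python sketch (the function and variable names are illustrative):

```python
from functools import lru_cache

def min_break_cost(breaks, n):
    """breaks: sorted list of break positions; n: string length.
    Returns (cost, order), where order lists the breaks in an optimal sequence."""
    L = [0] + list(breaks) + [n]          # sentinels: L[0] = 0, L[m + 1] = n

    @lru_cache(maxsize=None)
    def best(i, j):
        # Cheapest way to make breaks L[i..j] inside the piece (L[i - 1], L[j + 1]).
        if i > j:
            return (0, ())
        length = L[j + 1] - L[i - 1]      # cost of whichever break is made first
        best_cost, best_order = float('inf'), ()
        for k in range(i, j + 1):
            left_cost, left_order = best(i, k - 1)
            right_cost, right_order = best(k + 1, j)
            if length + left_cost + right_cost < best_cost:
                best_cost = length + left_cost + right_cost
                best_order = (L[k],) + left_order + right_order
        return (best_cost, best_order)

    return best(1, len(breaks))

print(min_break_cost([2, 8, 10], 20))  # (38, (10, 2, 8)): cost 38, as in the problem statement
```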
[ { "lang": "cpp", "code": "CUT-STRING(L, i, j, l, r)\n if l == r\n return (0, [])\n minCost = ∞\n for k = i to j\n if l + r + CUT-STRING(L, i, k, l, L[k]).cost + CUT-STRING(L, k, j, L[k], j).cost < minCost\n minCost = r - l + CUT-STRING(L, i, k, l, L[k]).cost + CUT-STRING(L, k + 1, j, L[k], j).cost\n minSeq = L[k] + CUT-STRING(L, i, k, l, L[k]) + CUT-STRING(L, i, k + 1, l, L[k])\n return (minCost, minSeq)" } ]
false
[]
16-16.1-1
16
16.1
16.1-1
docs/Chap16/16.1.md
Give a dynamic-programming algorithm for the activity-selection problem, based on recurrence $\text{(16.2)}$. Have your algorithm compute the sizes $c[i, j]$ as defined above and also produce the maximum-size subset of mutually compatible activities. Assume that the inputs have been sorted as in equation $\text{(16.1)}$. Compare the running time of your solution to the running time of $\text{GREEDY-ACTIVITY-SELECTOR}$.
```cpp DYNAMIC-ACTIVITY-SELECTOR(s, f, n) let c[0..n + 1, 0..n + 1] and act[0..n + 1, 0..n + 1] be new tables for i = 0 to n c[i, i] = 0 c[i, i + 1] = 0 c[n + 1, n + 1] = 0 for l = 2 to n + 1 for i = 0 to n - l + 1 j = i + l c[i, j] = 0 k = j - 1 while f[i] < f[k] if f[i] ≤ s[k] and f[k] ≤ s[j] and c[i, k] + c[k, j] + 1 > c[i, j] c[i, j] = c[i, k] + c[k, j] + 1 act[i, j] = k k = k - 1 print "A maximum size set of mutually compatible activities has size" c[0, n + 1] print "The set contains" PRINT-ACTIVITIES(c, act, 0, n + 1) ``` ```cpp PRINT-ACTIVITIES(c, act, i, j) if c[i, j] > 0 k = act[i, j] print k PRINT-ACTIVITIES(c, act, i, k) PRINT-ACTIVITIES(c, act, k, j) ``` - $\text{GREEDY-ACTIVITY-SELECTOR}$ runs in $\Theta(n)$ time and - $\text{DYNAMIC-ACTIVITY-SELECTOR}$ runs in $O(n^3)$ time.
[ { "lang": "cpp", "code": "DYNAMIC-ACTIVITY-SELECTOR(s, f, n)\n let c[0..n + 1, 0..n + 1] and act[0..n + 1, 0..n + 1] be new tables\n for i = 0 to n\n c[i, i] = 0\n c[i, i + 1] = 0\n c[n + 1, n + 1] = 0\n for l = 2 to n + 1\n for i = 0 to n - l + 1\n j = i + l\n c[i, j] = 0\n k = j - 1\n while f[i] < f[k]\n if f[i] ≤ s[k] and f[k] ≤ s[j] and c[i, k] + c[k, j] + 1 > c[i, j]\n c[i, j] = c[i, k] + c[k, j] + 1\n act[i, j] = k\n k = k - 1\n print \"A maximum size set of mutually compatible activities has size\" c[0, n + 1]\n print \"The set contains\"\n PRINT-ACTIVITIES(c, act, 0, n + 1)" }, { "lang": "cpp", "code": "PRINT-ACTIVITIES(c, act, i, j)\n if c[i, j] > 0\n k = act[i, j]\n print k\n PRINT-ACTIVITIES(c, act, i, k)\n PRINT-ACTIVITIES(c, act, k, j)" } ]
false
[]
16-16.1-2
16
16.1
16.1-2
docs/Chap16/16.1.md
Suppose that instead of always selecting the first activity to finish, we instead select the last activity to start that is compatible with all previously selected activities. Describe how this approach is a greedy algorithm, and prove that it yields an optimal solution.
This becomes exactly the same as the original problem if we imagine time running in reverse: selecting the last activity to start that is compatible with those already chosen corresponds, in the reversed instance, to selecting the first activity to finish. It therefore produces an optimal solution for essentially the same reasons. It is greedy because we make the locally best-looking choice at each step.
[]
false
[]
16-16.1-3
16
16.1
16.1-3
docs/Chap16/16.1.md
Not just any greedy approach to the activity-selection problem produces a maximum-size set of mutually compatible activities. Give an example to show that the approach of selecting the activity of least duration from among those that are compatible with previously selected activities does not work. Do the same for the approaches of always selecting the compatible activity that overlaps the fewest other remaining activities and always selecting the compatible remaining activity with the earliest start time.
As a counterexample to the optimality of greedily selecting the shortest, suppose our activity times are $\\{(1, 9), (8, 11), (10, 20)\\}$. Then, picking the shortest first, we have to eliminate the other two, whereas if we picked the other two instead, we would have two tasks, not one.

As a counterexample to the optimality of greedily selecting the task that conflicts with the fewest remaining activities, suppose the activity times are $\\{(−1, 1), (2, 5), (0, 3), (0, 3), (0, 3), (4, 7), (6, 9), (8, 11), (8, 11), (8, 11), (10, 12)\\}$. Then, by this greedy strategy, we would first pick $(4, 7)$ since it has only two conflicts. However, doing so would mean that we would not be able to pick the only optimal solution of $(−1, 1)$, $(2, 5)$, $(6, 9)$, $(10, 12)$.

As a counterexample to the optimality of greedily selecting the earliest start times, suppose our activity times are $\\{(1, 10), (2, 3), (4, 5)\\}$. If we pick the earliest start time, we will only have a single activity, $(1, 10)$, whereas the optimal solution would be to pick the two other activities.
[]
false
[]
16-16.1-4
16
16.1
16.1-4
docs/Chap16/16.1.md
Suppose that we have a set of activities to schedule among a large number of lecture halls, where any activity can take place in any lecture hall. We wish to schedule all the activities using as few lecture halls as possible. Give an efficient greedy algorithm to determine which activity should use which lecture hall. (This problem is also known as the **_interval-graph coloring problem_**. We can create an interval graph whose vertices are the given activities and whose edges connect incompatible activities. The smallest number of colors required to color every vertex so that no two adjacent vertices have the same color corresponds to finding the fewest lecture halls needed to schedule all of the given activities.)
Maintain a set $F$ of free (but already used) lecture halls and a set $B$ of currently busy lecture halls. Sort the classes by start time. For each new start time you encounter, remove a lecture hall from $F$, schedule the class in that room, and add the lecture hall to $B$. If $F$ is empty, add a new, unused lecture hall to $F$. When a class finishes, remove its lecture hall from $B$ and add it to $F$.

This is optimal for the following reason: suppose we have just started using the $m$th lecture hall for the first time. This only happens when every lecture hall used before is in $B$. But this means that there are $m$ classes occurring simultaneously, so it is necessary to have $m$ distinct lecture halls in use.
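With a min-heap of busy halls keyed by finish time, the whole procedure runs in $O(n\lg n)$. A hedged Python sketch, assuming activities are given as $(start, finish)$ pairs occupying the half-open interval $[start, finish)$:

```python
import heapq

def assign_halls(activities):
    """activities: list of (start, finish) pairs.
    Returns (number of halls used, hall index for each activity)."""
    order = sorted(range(len(activities)), key=lambda i: activities[i][0])
    busy = []                       # min-heap of (finish_time, hall): the set B
    free = []                       # stack of free hall indices: the set F
    halls_used = 0
    assignment = [None] * len(activities)
    for i in order:                 # process classes by start time
        s, f = activities[i]
        while busy and busy[0][0] <= s:   # classes that have already finished
            _, hall = heapq.heappop(busy)
            free.append(hall)
        if not free:                # F is empty: open a brand-new hall
            free.append(halls_used)
            halls_used += 1
        hall = free.pop()
        assignment[i] = hall
        heapq.heappush(busy, (f, hall))
    return halls_used, assignment
```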
[]
false
[]
16-16.1-5
16
16.1
16.1-5
docs/Chap16/16.1.md
Consider a modification to the activity-selection problem in which each activity $a_i$ has, in addition to a start and finish time, a value $v_i$. The objective is no longer to maximize the number of activities scheduled, but instead to maximize the total value of the activities scheduled. That is, we wish to choose a set $A$ of compatible activities such that $\sum_{a_k \in A} v_k$ is maximized. Give a polynomial-time algorithm for this problem.
An easy and straightforward solution is to run a dynamic program based on equation $\text{(16.2)}$, where the "$1$" in the second case is replaced with "$v_k$". The subproblems are still indexed by a pair of activities, and each calculation requires taking the maximum over some set of size $\le |S_{ij}| \in O(n)$, so the total runtime is bounded by $O(n^3)$.

However, with a little more cunning, we can be more efficient and give an algorithm that runs in $O(n\log n)$.

**INPUT:** $n$ activities with values.

**IDEA OF ALGORITHM:**

1. Sort the input vector of activities according to their finish times in ascending order. Let us denote the activities in this sorted vector by $(a_0, a_1, \dots, a_{n - 1})$.
2. For each $0 \le i < n$ construct a partial solution $S_i$. By a partial solution $S_i$, we mean a solution to the problem but considering only activities with indexes lower than or equal to $i$. Remember the value of each partial solution.
3. Clearly $S_0 = \\{a_0\\}$.
4. We can construct $S_{i + 1}$ as follows. The possible values of $S_{i + 1}$ are either $S_i$ or the solution obtained by joining the activity $a_{i + 1}$ with the partial solution $S_j$, where $j < i + 1$ is the index of the activity such that $a_j$ is compatible with $a_{i + 1}$ but $a_{j + 1}$ is not compatible with $a_{i + 1}$. Pick whichever of these two possible solutions has the greater value. Ties can be resolved arbitrarily.
5. Therefore we can construct the partial solutions in the order $S_0, S_1, \dots, S_{n - 1}$, using (3) for $S_0$ and (4) for all the others.
6. Give $S_{n - 1}$ as the solution for the problem.

**ANALYSIS OF TIME COMPLEXITY:**

- Sorting of activities can be done in $O(n\log n)$ time.
- Finding the value of $S_0$ is in $O(1)$.
- Any $S_{i + 1}$ can be found in $O(\log n)$. This is thanks to the fact that we have properly sorted the activities: for each $i + 1$ we can find the proper $j$ in $O(\log n)$ using binary search. Once we have the proper $j$, the rest can be done in $O(1)$.
- Therefore, constructing all the $S_i$'s takes $O(n\log n)$ time.

**IMPLEMENTATION DETAILS:**

- Use dynamic programming.
- It is important not to remember too much for each $S_i$. Do not construct the $S_i$'s directly (you can end up in $\Omega(n^2)$ time if you do so). For each $S_i$ it is sufficient to remember:
  - its value,
  - whether or not it includes the activity $a_i$,
  - the value of $j$ (from (4)).
- Using this information obtained by the run of the described algorithm, you can reconstruct the solution in $O(n)$ time, which does not violate the final time complexity.

**PROOF OF CORRECTNESS:** (sketched)

- Clearly, $S_0 = \\{a_0\\}.$
- For $S_{i + 1}$ we argue by (4). The partial solution $S_{i + 1}$ either includes the activity $a_{i + 1}$ or doesn't include it; there is no third way.
    - If it does not include $a_{i + 1}$, then clearly $S_{i + 1} = S_i$.
    - If it includes $a_{i + 1}$, then $S_{i + 1}$ consists of $a_{i + 1}$ and a partial solution which uses all activities compatible with $a_{i + 1}$ with indexes lower than $i + 1$. Since the activities are sorted according to their finish times, the activities with indexes $j$ and lower are compatible and the activities with index $j + 1$ and higher up to $i + 1$ are not compatible. We do not consider any of the other activities for $S_{i + 1}$. Therefore setting $S_{i + 1} = \\{a_{i + 1}\\} \cup S_j$ gives the correct answer in this case. The fact that we need $S_j$ and not some other solution for activities with indexes up to $j$ can easily be shown by the standard cut-and-paste argument.
- Since for $S_{n - 1}$ we consider all of the activities, it is actually the solution of the problem.
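The algorithm described above is, in essence, weighted interval scheduling with binary search; the following hedged Python sketch computes the values of the partial solutions $S_i$ (the set itself can be recovered by remembering which branch of each max was taken, as noted in the implementation details).

```python
import bisect

def max_value_schedule(activities):
    """activities: list of (start, finish, value) triples.
    Returns the maximum total value of a set of compatible activities."""
    acts = sorted(activities, key=lambda a: a[1])   # step 1: sort by finish time
    finishes = [a[1] for a in acts]
    n = len(acts)
    S = [0] * (n + 1)               # S[i] = value of the best solution over acts[:i]
    for i in range(1, n + 1):
        s, f, v = acts[i - 1]
        # step 4: binary-search for the activities finishing no later than s
        j = bisect.bisect_right(finishes, s, 0, i - 1)
        S[i] = max(S[i - 1],        # exclude activity i
                   v + S[j])        # or include it, joined with S_j
    return S[n]

print(max_value_schedule([(1, 3, 5), (2, 5, 6), (4, 6, 5)]))  # 10: take the first and third
```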
[]
false
[]
16-16.2-1
16
16.2
16.2-1
docs/Chap16/16.2.md
Prove that the fractional knapsack problem has the greedy-choice property.
Let $I$ be the following instance of the knapsack problem: Let $n$ be the number of items, let $v_i$ be the value of the $i$th item, let $w_i$ be the weight of the $i$th item and let $W$ be the capacity. Assume the items have been ordered in increasing order by $v_i / w_i$. Let $s = (s_1, s_2, \ldots, s_n)$ be a solution, where $s_i$ is the amount (by weight) of item $i$ that we take. The greedy algorithm works by assigning $s_n = \min(w_n, W)$, and then continuing by solving the subproblem

$$I' = (n - 1, \\{v_1, v_2, \ldots, v_{n - 1}\\}, \\{w_1, w_2, \ldots, w_{n - 1}\\}, W - s_n)$$

until it either reaches the state $W = 0$ or $n = 0$.

We need to show that this strategy always gives an optimal solution, i.e., that some optimal solution makes the greedy choice. Let $s_1, s_2, \ldots, s_n$ be an optimal solution to $I$ with $s_n < \min(w_n, W)$. If $s_i = 0$ for all $i < n$, we can simply increase $s_n$ and improve the solution. Otherwise, let $i$ be the smallest index such that $s_i > 0$. By decreasing $s_i$ by $\delta = \min(s_i, \min(w_n, W) - s_n) > 0$ and increasing $s_n$ by the same amount $\delta$, we get a feasible solution that is at least as good, since $v_n / w_n \ge v_i / w_i$. Repeating this exchange yields an optimal solution with $s_n = \min(w_n, W)$. Hence the problem has the greedy-choice property.
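For reference, the greedy strategy used in the proof looks like this in Python (a hedged sketch; items are assumed to be given as $(value, weight)$ pairs):

```python
def fractional_knapsack(items, W):
    """items: list of (value, weight) pairs; W: capacity.
    Returns the maximum total value achievable."""
    total = 0.0
    # Greedy choice: consider items in decreasing order of value density v / w.
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if W <= 0:
            break
        take = min(w, W)            # take as much of this item as fits
        total += v * take / w
        W -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```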
[]
false
[]