Columns: id (string, length 6–26) · chapter (string, 36 classes) · section (string, length 3–5) · title (string, length 3–27) · source_file (string, length 13–29) · question_markdown (string, length 17–6.29k) · answer_markdown (string, length 3–6.76k) · code_blocks (list, length 0–9) · has_images (bool, 2 classes) · image_refs (list, length 0–7)
08-8-6
08
8-6
8-6
docs/Chap08/Problems/8-6.md
The problem of merging two sorted lists arises frequently. We have seen a procedure for it as the subroutine $\text{MERGE}$ in Section 2.3.1. In this problem, we will prove a lower bound of $2n - 1$ on the worst-case number of comparisons required to merge two sorted lists, each containing $n$ items. First we will show a lower bound of $2n - o(n)$ comparisons by using a decision tree. **a.** Given $2n$ numbers, compute the number of possible ways to divide them into two sorted lists, each with $n$ numbers. **b.** Using a decision tree and your answer to part (a), show that any algorithm that correctly merges two sorted lists must perform at least $2n - o(n)$ comparisons. Now we will show a slightly tighter $2n - 1$ bound. **c.** Show that if two elements are consecutive in the sorted order and from different lists, then they must be compared. **d.** Use your answer to the previous part to show a lower bound of $2n - 1$ comparisons for merging two sorted lists.
**a.** There are $\binom{2n}{n}$ ways to divide $2n$ numbers into two sorted lists, each with $n$ numbers. **b.** A decision tree for merging must have at least $\binom{2n}{n}$ reachable leaves, one for each division from part (a), so its height $h$ satisfies $\binom{2n}{n} \le 2^h$. By Exercise C.1-13, $\binom{2n}{n} = \frac{2^{2n}}{\sqrt{\pi n}}(1 + O(1 / n))$, so $$ \begin{aligned} h & \ge \lg \binom{2n}{n} \\\\ & = \lg \frac{2^{2n}}{\sqrt{\pi n}} + \lg(1 + O(1 / n)) \\\\ & = 2n - \frac{1}{2}\lg(\pi n) + O(1) \\\\ & = 2n - o(n). \end{aligned} $$ **c.** Suppose two elements $a$ and $b$ are consecutive in the sorted order and come from different lists. Every other element is either smaller than both or larger than both, so no comparison involving a third element can distinguish the order of $a$ and $b$: an adversary could swap their values, and every comparison not between $a$ and $b$ themselves would give the same answer. Hence any correct merging algorithm must compare $a$ with $b$ directly. **d.** Let list $A = 1, 3, 5, \ldots, 2n - 1$ and $B = 2, 4, 6, \ldots, 2n$. Consecutive elements of the merged output alternate between the two lists, so by part (c) we must compare $1$ with $2$, $2$ with $3$, $3$ with $4$, and so on, up until we compare $2n - 1$ with $2n$. This amounts to a total of $2n - 1$ comparisons.
[]
false
[]
08-8-7
08
8-7
8-7
docs/Chap08/Problems/8-7.md
A **_compare-exchange_** operation on two array elements $A[i]$ and $A[j]$, where $i < j$, has the form ```cpp COMPARE-EXCHANGE(A, i, j) if A[i] > A[j] exchange A[i] with A[j] ``` After the compare-exchange operation, we know that $A[i] \le A[j]$. An **_oblivious compare-exchange algorithm_** operates solely by a sequence of prespecified compare-exchange operations. The indices of the positions compared in the sequence must be determined in advance, and although they can depend on the number of elements being sorted, they cannot depend on the values being sorted, nor can they depend on the result of any prior compare-exchange operation. For example, here is insertion sort expressed as an oblivious compare-exchange algorithm: ```cpp INSERTION-SORT(A) for j = 2 to A.length for i = j - 1 downto 1 COMPARE-EXCHANGE(A, i, i + 1) ``` The **_0-1 sorting lemma_** provides a powerful way to prove that an oblivious compare-exchange algorithm produces a sorted result. It states that if an oblivious compare-exchange algorithm correctly sorts all input sequences consisting of only $0$s and $1$s, then it correctly sorts all inputs containing arbitrary values. You will prove the $0$-$1$ sorting lemma by proving its contrapositive: if an oblivious compare-exchange algorithm fails to sort an input containing arbitrary values, then it fails to sort some $0$-$1$ input. Assume that an oblivious compare-exchange algorithm $\text X$ fails to correctly sort the array $A[1..n]$. Let $A[p]$ be the smallest value in $A$ that algorithm $\text X$ puts into the wrong location, and let $A[q]$ be the value that algorithm $\text X$ moves to the location into which $A[p]$ should have gone. Define an array $B[1..n]$ of $0$s and $1$s as follows: $$ B[i] = \begin{cases} 0 & \text{if $A[i] \le A[p]$}, \\\\ 1 & \text{if $A[i] > A[p]$}. \end{cases} $$ **a.** Argue that $A[q] > A[p]$, so that $B[p] = 0$ and $B[q] = 1$. **b.** To complete the proof of the $0$-$1$ sorting lemma, prove that algorithm $\text X$ fails to sort array $B$ correctly. Now you will use the $0$-$1$ sorting lemma to prove that a particular sorting algorithm works correctly. The algorithm, **_columnsort_**, works on a rectangular array of $n$ elements. The array has $r$ rows and $s$ columns (so that $n = rs$), subject to three restrictions: - $r$ must be even, - $s$ must be a divisor of $r$, and - $r \ge 2 s^2$. When columnsort completes, the array is sorted in **_column-major order_**: reading down the columns, from left to right, the elements monotonically increase. Columnsort operates in eight steps, regardless of the value of $n$. The odd steps are all the same: sort each column individually. Each even step is a fixed permutation. Here are the steps: 1. Sort each column. 2. Transpose the array, but reshape it back to $r$ rows and $s$ columns. In other words, turn the leftmost column into the top $r / s$ rows, in order; turn the next column into the next $r / s$ rows, in order; and so on. 3. Sort each column. 4. Perform the inverse of the permutation performed in step 2. 5. Sort each column. 6. Shift the top half of each column into the bottom half of the same column, and shift the bottom half of each column into the top half of the next column to the right. Leave the top half of the leftmost column empty. Shift the bottom half of the last column into the top half of a new rightmost column, and leave the bottom half of this new column empty. 7. Sort each column. 8. Perform the inverse of the permutation performed in step 6. 
Figure 8.5 shows an example of the steps of columnsort with $r = 6$ and $s = 3$. (Even though this example violates the requirement that $r \ge 2s^2$, it happens to work.) **c.** Argue that we can treat columnsort as an oblivious compare-exchange algorithm, even if we do not know what sorting method the odd steps use. Although it might seem hard to believe that columnsort actually sorts, you will use the $0$-$1$ sorting lemma to prove that it does. The $0$-$1$ sorting lemma applies because we can treat columnsort as an oblivious compare-exchange algorithm. A couple of definitions will help you apply the $0$-$1$ sorting lemma. We say that an area of an array is **_clean_** if we know that it contains either all $0$s or all $1$s. Otherwise, the area might contain mixed $0$s and $1$s, and it is **_dirty_**. From here on, assume that the input array contains only $0$s and $1$s, and that we can treat it as an array with $r$ rows and $s$ columns. **d.** Prove that after steps 1–3, the array consists of some clean rows of $0$s at the top, some clean rows of $1$s at the bottom, and at most $s$ dirty rows between them. **e.** Prove that after step 4, the array, read in column-major order, starts with a clean area of $0$s, ends with a clean area of $1$s, and has a dirty area of at most $s^2$ elements in the middle. **f.** Prove that steps 5–8 produce a fully sorted $0$-$1$ output. Conclude that columnsort correctly sorts all inputs containing arbitrary values. **g.** Now suppose that $s$ does not divide $r$. Prove that after steps 1–3, the array consists of some clean rows of $0$s at the top, some clean rows of $1$s at the bottom, and at most $2s - 1$ dirty rows between them. How large must $r$ be, compared with $s$, for columnsort to correctly sort when $s$ does not divide $r$? **h.** Suggest a simple change to step 1 that allows us to maintain the requirement that $r \ge 2s^2$ even when $s$ does not divide $r$, and prove that with your change, columnsort correctly sorts.
(Removed)
[ { "lang": "cpp", "code": "> COMPARE-EXCHANGE(A, i, j)\n> if A[i] > A[j]\n> exchange A[i] with A[j]\n>" }, { "lang": "cpp", "code": "> INSERTION-SORT(A)\n> for j = 2 to A.length\n> for i = j - 1 downto 1\n> COMPARE-EXCHANGE(A, i, i + 1)\n>" } ]
false
[]
09-9.1-1
09
9.1
9.1-1
docs/Chap09/9.1.md
Show that the second smallest of $n$ elements can be found with $n + \lceil \lg n \rceil - 2$ comparisons in the worst case. ($\textit{Hint:}$ Also find the smallest element.)
We can compare the elements in a tournament fashion: split them into pairs, compare each pair, and then compare the winners in the same fashion, keeping track of the "matches" each potential winner has participated in. The smallest element is selected in $n - 1$ matches. At this point, we know that the second smallest element is one of the at most $\lceil \lg n \rceil$ elements that lost directly to the smallest: any other element lost to some element other than the minimum and is therefore larger than at least two elements, so it cannot be second smallest. In another $\lceil \lg n \rceil - 1$ comparisons we can find the smallest of those, for a total of $n + \lceil \lg n \rceil - 2$ comparisons.
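A sketch of the tournament in the book's pseudocode; the procedure name and the per-element $beaten$ lists are ours, not from the text:

```cpp
SECOND-SMALLEST(A, n)
    for i = 1 to n
        A[i].beaten = empty list      // elements that lost directly to A[i]
    W = the list A[1], ..., A[n]      // contenders still unbeaten
    while W has more than one element // n - 1 comparisons over all rounds
        W' = empty list
        for each consecutive pair (x, y) in W
            if x < y
                append y to x.beaten, and x to W'
            else append x to y.beaten, and y to W'
        if W has odd length
            move the unpaired last element of W to W'   // a bye this round
        W = W'
    min = the one element of W
    // min.beaten holds at most ⌈lg n⌉ elements; a linear scan finds their
    // minimum, the second smallest overall, in ⌈lg n⌉ - 1 more comparisons
    return the minimum of min.beaten
```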
[]
false
[]
09-9.1-2
09
9.1
9.1-2 $\star$
docs/Chap09/9.1.md
Prove the lower bound of $\lceil 3n / 2 \rceil - 2$ comparisons in the worst case to find both the maximum and minimum of $n$ numbers. ($\textit{Hint:}$ Consider how many numbers are potentially either the maximum or minimum, and investigate how a comparison affects these counts.)
Every element except the maximum must lose at least one comparison (or it could still be the maximum), and every element except the minimum must win at least one, so $2n - 2$ such "units of information" must be gathered. An adversary can answer the comparisons so that only a comparison between two elements that have never been compared before yields $2$ units, while every other comparison yields at most $1$. Since at most $\lfloor n / 2 \rfloor$ comparisons involve two previously uncompared elements, any algorithm needs at least $\lfloor n / 2 \rfloor + (2n - 2 - 2\lfloor n / 2 \rfloor) = 2n - 2 - \lfloor n / 2 \rfloor = \lceil 3n / 2 \rceil - 2$ comparisons. The bound is met by pairing the elements: compare the two elements of the first pair to initialize the minimum and maximum, then for each later pair compare its two elements, the smaller against the current minimum, and the larger against the current maximum; a leftover element is compared against both. If $n$ is odd, there are $$ \begin{aligned} 1 + \frac{3(n-3)}{2} + 2 & = \frac{3n}{2} - \frac{3}{2} \\\\ & = \bigg(\bigg\lceil \frac{3n}{2} \bigg\rceil - \frac{1}{2}\bigg) - \frac{3}{2} \\\\ & = \bigg\lceil \frac{3n}{2} \bigg\rceil - 2 \end{aligned} $$ comparisons. If $n$ is even, there are $$ \begin{aligned} 1 + \frac{3(n - 2)}{2} & = \frac{3n}{2} - 2 \\\\ & = \bigg\lceil \frac{3n}{2} \bigg\rceil - 2 \end{aligned} $$ comparisons.
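A sketch of the pairing scheme in the book's pseudocode (the procedure name $\text{MIN-AND-MAX}$ is ours; assumes $n \ge 2$):

```cpp
MIN-AND-MAX(A, n)
    if A[1] < A[2]                 // one comparison initializes both
        min = A[1], max = A[2]
    else min = A[2], max = A[1]
    i = 3
    while i + 1 <= n               // three comparisons per remaining pair
        if A[i] < A[i + 1]
            small = A[i], large = A[i + 1]
        else small = A[i + 1], large = A[i]
        if small < min
            min = small
        if large > max
            max = large
        i = i + 2
    if i == n                      // leftover element when n is odd: two comparisons
        if A[n] < min
            min = A[n]
        if A[n] > max
            max = A[n]
    return (min, max)
```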
[]
false
[]
09-9.2-1
09
9.2
9.2-1
docs/Chap09/9.2.md
Show that $\text{RANDOMIZED-SELECT}$ never makes a recursive call to a $0$-length array.
A call on a $0$-length subarray would mean that the third argument is smaller than the second. If the bad call is made on line 8, it is $\text{RANDOMIZED-SELECT}(A, p, q - 1, i)$, so we would need $q - 1 < p$, i.e., $q = p$ and hence $k = q - p + 1 = 1$. However, to be executing line 8 we would need $i < k = 1$, contradicting the assumption that $i \ge 1$. The other possibility is that the bad recursive call occurs on line 9, $\text{RANDOMIZED-SELECT}(A, q + 1, r, i - k)$, which would require $q = r$ and hence $k = q - p + 1 = r - p + 1$, the number of elements in $A[p..r]$. To be executing line 9, we need $i > k$, which would mean the call asked for the $i$th smallest element of an array with fewer than $i$ elements. A sensible original call never does this, and the recursive calls preserve the invariant $1 \le i \le r - p + 1$.
[]
false
[]
09-9.2-2
09
9.2
9.2-2
docs/Chap09/9.2.md
Argue that the indicator random variable $X_k$ and the value $T(\max(k - 1, n - k))$ are independent.
The probability that $X_k$ is equal to $1$ is unchanged when we know the max of $k - 1$ and $n - k$. In other words, $\Pr\\{X_k = a \mid \max(k - 1, n - k) = m\\} = \Pr\\{X_k = a\\}$ for $a = 0, 1$ and $m = k - 1, n - k$ so $X_k$ and $\max(k - 1, n - k)$ are independent. By C.3-5, so are $X_k$ and $T(\max(k - 1, n - k))$.
[]
false
[]
09-9.2-3
09
9.2
9.2-3
docs/Chap09/9.2.md
Write an iterative version of $\text{RANDOMIZED-SELECT}$.
```cpp PARTITION(A, p, r) x = A[r] i = p - 1 for k = p to r - 1 if A[k] < x i = i + 1 swap A[i] with A[k] i = i + 1 swap A[i] with A[r] return i ``` ```cpp RANDOMIZED-PARTITION(A, p, r) x = RANDOM(p, r) swap A[x] with A[r] return PARTITION(A, p, r) ``` ```cpp RANDOMIZED-SELECT(A, p, r, i) while true if p == r return A[p] q = RANDOMIZED-PARTITION(A, p, r) k = q - p + 1 if i == k return A[q] if i < k r = q - 1 else p = q + 1 i = i - k ```
[ { "lang": "cpp", "code": "PARTITION(A, p, r)\n x = A[r]\n i = p\n for k = p - 1 to r\n if A[k] < x\n i = i + 1\n swap A[i] with A[k]\n i = i + 1\n swap A[i] with A[r]\n return i" }, { "lang": "cpp", "code": "RANDOMIZED-PARTITION(A, p, r)\n x = RANDOM(p - 1, r)\n swap A[x] with A[r]\n return PARTITION(A, p, r)" }, { "lang": "cpp", "code": "RANDOMIZED-SELECT(A, p, r, i)\n while true\n if p == r\n return A[p]\n q = RANDOMIZED-PARTITION(A, p, r)\n k = q - p + 1\n if i == k\n return A[q]\n if i < k\n r = q - 1\n else\n p = q + 1\n i = i - k" } ]
false
[]
09-9.2-4
09
9.2
9.2-4
docs/Chap09/9.2.md
Suppose we use $\text{RANDOMIZED-SELECT}$ to select the minimum element of the array $A = \langle 3, 2, 9, 0, 7, 5, 4, 8, 6, 1 \rangle$. Describe a sequence of partitions that results in a worst-case performance of $\text{RANDOMIZED-SELECT}$.
We get worst-case performance when the pivot chosen is always the largest element of the remaining subarray, so that each partition eliminates only the pivot itself. For the example, the sequence of pivots would be $\langle 9, 8, 7, 6, 5, 4, 3, 2, 1, 0 \rangle$.
[]
false
[]
09-9.3-1
09
9.3
9.3-1
docs/Chap09/9.3.md
In the algorithm $\text{SELECT}$, the input elements are divided into groups of $5$. Will the algorithm work in linear time if they are divided into groups of $7$? Argue that $\text{SELECT}$ does not run in linear time if groups of $3$ are used.
It will still work if they are divided into groups of $7$: the median of medians is greater than or equal to the medians of about half of the $\lceil n / 7 \rceil$ groups, and each such group contributes $4$ elements no larger than it, so it is greater than roughly $4n / 14$ of the elements. Similarly, it is less than roughly $4n / 14$ of the elements. So, we never call the procedure recursively on more than $10n / 14$ elements, giving $T(n) \le T(n / 7) + T(10n / 14) + O(n)$, which we can show is linear by substitution. We guess $T(n) < cn$ for $n < k$. Then, for $m \ge k$, $$ \begin{aligned} T(m) & \le T(m / 7) + T(10m / 14) + O(m) \\\\ & \le cm(1 / 7 + 10 / 14) + O(m) \\\\ & = \frac{6}{7}cm + O(m), \end{aligned} $$ so as long as the constant hidden in the $O$-notation is less than $c / 7$, we have $T(m) \le cm$, as desired. Suppose now that we use groups of size $3$ instead. For similar reasons (each group now contributes only $2$ elements on the guaranteed side), the best recurrence we can get is $T(n) = T(\lceil n / 3 \rceil) + T(4n / 6) + \Theta(n) \ge T(n / 3) + T(2n / 3) + \Theta(n)$. We show by substitution that $T(n) = \Omega(n\lg n)$, guessing $T(m) \ge cm\lg m$: $$ \begin{aligned} T(m) & \ge c(m / 3)\lg (m / 3) + c(2m / 3) \lg (2m / 3) + \Theta(m) \\\\ & = cm\lg m - cm(\lg 3 - 2 / 3) + \Theta(m) \\\\ & \ge cm\lg m \end{aligned} $$ for a suitably small constant $c > 0$; therefore, $T(n)$ grows more quickly than linear.
[]
false
[]
09-9.3-2
09
9.3
9.3-2
docs/Chap09/9.3.md
Analyze $\text{SELECT}$ to show that if $n \ge 140$, then at least $\lceil n / 4 \rceil$ elements are greater than the median-of-medians $x$ and at least $\lceil n / 4 \rceil$ elements are less than $x$.
The analysis of $\text{SELECT}$ shows that at least $3n / 10 - 6$ elements are greater than the median-of-medians $x$, and at least $3n / 10 - 6$ elements are less than $x$. Since $\lceil n / 4 \rceil \le n / 4 + 1$, it suffices to have $$ \begin{aligned} \frac{3n}{10} - 6 & \ge \frac{n}{4} + 1 \\\\ 12n - 240 & \ge 10n + 40 \\\\ n & \ge 140. \end{aligned} $$
[]
false
[]
09-9.3-3
09
9.3
9.3-3
docs/Chap09/9.3.md
Show how quicksort can be made to run in $O(n\lg n)$ time in the worst case, assuming that all elements are distinct.
We can modify quicksort to run in $O(n\lg n)$ worst-case time by choosing the pivot to be the exact median, found with the worst-case linear-time $\text{SELECT}$ algorithm of Section 9.3. Then the pivot is guaranteed to split the array evenly, the time to find the median is of the same order as the partitioning, and the running time satisfies $T(n) = 2T(n / 2) + \Theta(n) = \Theta(n\lg n)$.
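A sketch in the book's pseudocode; the name $\text{MEDIAN-QUICKSORT}$ is ours, and $\text{SELECT}$ is the worst-case linear-time procedure of Section 9.3:

```cpp
MEDIAN-QUICKSORT(A, p, r)
    if p < r
        m = ⌊(p + r) / 2⌋                // the index the median should occupy
        x = SELECT(A, p, r, m - p + 1)   // exact median of A[p..r] in Θ(r - p)
        exchange A[r] with the element equal to x   // make the median the pivot
        q = PARTITION(A, p, r)           // q == m: a perfectly balanced split
        MEDIAN-QUICKSORT(A, p, q - 1)
        MEDIAN-QUICKSORT(A, q + 1, r)
```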
[]
false
[]
09-9.3-4
09
9.3
9.3-4 $\star$
docs/Chap09/9.3.md
Suppose that an algorithm uses only comparisons to find the $i$th smallest element in a set of $n$ elements. Show that it can also find the $i - 1$ smaller elements and $n - i$ larger elements without performing additional comparisons.
Let $x$ be the position of the $i$th smallest element. Create a graph with $n$ vertices and draw a directed edge from vertex $i$ to vertex $j$ if the $i$th and $j$th elements of the array are compared in the algorithm and we discover that $A[i] \ge A[j]$. Observe that $A[j]$ is one of the $i - 1$ smaller elements if there exists a path from $x$ to $j$ in the graph, and $A[j]$ is one of the $n - i$ larger elements if there exists a path from $j$ to $x$ in the graph. Every vertex $j \ne x$ must lie on a path to or from $x$, because otherwise the comparisons performed would be consistent with $A[j]$ being either smaller or larger than $A[x]$, and the algorithm could not know that $A[x]$ is the $i$th smallest. Moreover, since the elements form a set and are therefore distinct, no vertex $j \ne x$ can lie on both a path to $x$ and a path from $x$, for that would force $A[x] \le A[j] \le A[x]$, i.e., $A[j] = A[x]$. Thus the comparisons already performed determine, for every $j$, whether $A[j]$ is among the $i - 1$ smaller or the $n - i$ larger elements, with no additional comparisons.
[]
false
[]
09-9.3-5
09
9.3
9.3-5
docs/Chap09/9.3.md
Suppose that you have a "black-box" worst-case linear-time median subroutine. Give a simple, linear-time algorithm that solves the selection problem for an arbitrary order statistic.
To use it, find the median with the black box, partition the array around it, and recurse on one side: - If $i$ is less than half the length of the current subarray, recurse on the first half. - If $i$ equals the position of the median, return the element coming from the median-finding black box. - If $i$ is more than half the length, subtract the number of elements up to and including the median from $i$, and then recurse on the second half. Each step does linear work and halves the subarray, so the running time is $T(n) = T(n / 2) + O(n) = O(n)$. A sketch follows.
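A sketch in the book's pseudocode, assuming the black box is called as $\text{MEDIAN}(A, p, r)$ and $\text{PARTITION}$ is the procedure from Section 7.1:

```cpp
SELECT-WITH-MEDIAN(A, p, r, i)
    if p == r
        return A[p]
    x = MEDIAN(A, p, r)                  // black-box worst-case linear-time median
    exchange A[r] with the element equal to x
    q = PARTITION(A, p, r)               // the median lands in the middle position
    k = q - p + 1
    if i == k
        return A[q]
    elseif i < k
        return SELECT-WITH-MEDIAN(A, p, q - 1, i)
    else return SELECT-WITH-MEDIAN(A, q + 1, r, i - k)
```

Each recursive call is on at most half of the elements, so $T(n) \le T(n / 2) + O(n) = O(n)$.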
[]
false
[]
09-9.3-6
09
9.3
9.3-6
docs/Chap09/9.3.md
The $k$th **_quantiles_** of an $n$-element set are the $k - 1$ order statistics that divide the sorted set into $k$ equal-sized sets (to within $1$). Give an $O(n\lg k)$-time algorithm to list the $k$th quantiles of a set.
Pre-calculate the positions of the quantiles in $O(k)$. Use the $O(n)$ select algorithm to find the element at the middle quantile position; this pivot divides the elements into two sets, and we recurse on each set to find the remaining positions. Since the recursion depth is $\lceil \lg k \rceil$ and the subproblems at each depth involve $O(n)$ elements in total, the total running time is $O(n\lg k)$. ```cpp PARTITION(A, p, r) x = A[r] i = p - 1 for k = p to r - 1 if A[k] < x i = i + 1 swap A[i] with A[k] i = i + 1 swap A[i] with A[r] return i ``` ```cpp RANDOMIZED-PARTITION(A, p, r) x = RANDOM(p, r) swap A[x] with A[r] return PARTITION(A, p, r) ``` ```cpp RANDOMIZED-SELECT(A, p, r, i) while true if p == r return p, A[p] q = RANDOMIZED-PARTITION(A, p, r) k = q - p + 1 if i == k return q, A[q] if i < k r = q - 1 else p = q + 1 i = i - k ``` ```cpp k-QUANTILES-SUB(A, p, r, pos, f, e, quantiles) if f + 1 > e return mid = (f + e) / 2 q, val = RANDOMIZED-SELECT(A, p, r, pos[mid]) quantiles[mid] = val k = q - p + 1 for i = mid + 1 to e - 1 pos[i] = pos[i] - k k-QUANTILES-SUB(A, p, q - 1, pos, f, mid, quantiles) k-QUANTILES-SUB(A, q + 1, r, pos, mid + 1, e, quantiles) ``` ```cpp k-QUANTILES(A, k) num = ⌊A.length / k⌋ mod = A.length mod k let pos[1..k] be a new array with each pos[i] = num for i = 1 to mod pos[i] = pos[i] + 1 for i = 2 to k pos[i] = pos[i] + pos[i - 1] let quantiles[1..k] be a new array k-QUANTILES-SUB(A, 1, A.length, pos, 1, k + 1, quantiles) return quantiles ```
[ { "lang": "cpp", "code": "PARTITION(A, p, r)\n x = A[r]\n i = p\n for k = p to r\n if A[k] < x\n i = i + 1\n swap A[i] with A[k]\n i = i + 1\n swap a[i] with a[r]\n return i" }, { "lang": "cpp", "code": "RANDOMIZED-PARTITION(A, p, r)\n x = RANDOM(p, r)\n swap A[x] with A[r]\n return PARTITION(A, p, r)" }, { "lang": "cpp", "code": "RANDOMIZED-SELECT(A, p, r, i)\n while true\n if p == r\n return p, A[p]\n q = RANDOMIZED-PARTITION(A, p, r)\n k = q - p + 1\n if i == k\n return q, A[q]\n if i < k\n r = q\n else\n p = q + 1\n i = i - k" }, { "lang": "cpp", "code": "k-QUANTITLES-SUB(A, p, r, pos, f, e, quantiles)\n if f + 1 > e\n return\n mid = (f + e) / 2\n q, val = RANDOMIZED-SELECT(A, p, r, pos[mid])\n quantiles[mid] = val\n k = q - p + 1\n for i = mid + 1 to e\n pos[i] = pos[i] - k\n k-QUANTILES-SUB(A, q + 1, r, pos, mid + 1, e, quantiles)" }, { "lang": "cpp", "code": "k-QUANTITLES(A, k)\n num = A.size() / k\n mod = A.size() % k\n pos = num[1..k]\n for i = 1 to mod\n pos[i] = pos[i] + 1\n for i = 1 to k\n pos[i] = pos[i] + pos[i - 1]\n quantiles = [1..k]\n k-QUANTITLES-SUB(A, 0, A.length, pos, 0, pos.size(), quantiles)\n return quantiles" } ]
false
[]
09-9.3-7
09
9.3
9.3-7
docs/Chap09/9.3.md
Describe an $O(n)$-time algorithm that, given a set $S$ of $n$ distinct numbers and a positive integer $k \le n$, determines the $k$ numbers in $S$ that are closest to the median of $S$.
Find the median $m$ in $O(n)$ time; create a new array whose $i$th element is $|A[i] - m|$, the absolute difference between the original value and the median; find the $k$th smallest number of the new array in $O(n)$ time; the desired values are then the elements whose absolute difference from the median is at most that $k$th smallest difference (breaking ties arbitrarily so that exactly $k$ are reported). Each step is $O(n)$, so the whole algorithm is as well. A sketch follows.
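A sketch in the book's pseudocode; the procedure name is ours, and $\text{SELECT}$ is the linear-time selection algorithm:

```cpp
K-CLOSEST-TO-MEDIAN(A, n, k)
    m = SELECT(A, 1, n, ⌈n / 2⌉)     // the (lower) median, Θ(n)
    let D[1..n] be a new array
    for i = 1 to n
        D[i] = |A[i] - m|            // distance of each element to the median
    d = SELECT(D, 1, n, k)           // kth smallest distance, Θ(n)
    result = empty list
    for i = 1 to n
        if |A[i] - m| <= d and result has fewer than k elements
            append A[i] to result    // the cap on k handles ties in distance
    return result
```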
[]
false
[]
09-9.3-8
09
9.3
9.3-8
docs/Chap09/9.3.md
Let $X[1..n]$ and $Y[1..n]$ be two arrays, each containing $n$ numbers already in sorted order. Give an $O(\lg n)$-time algorithm to find the median of all $2n$ elements in arrays $X$ and $Y$.
Without loss of generality, assume $n$ is a power of $2$. If $X[n / 2] < Y[n / 2]$, then the elements $X[1..n / 2]$ are all smaller than the lower median of the combined arrays and the elements $Y[n / 2 + 1..n]$ are all larger than it, so we may discard both halves; the lower median of the $n$ elements that remain is the lower median of all $2n$. The other case is symmetric. Each step halves $n$, so the running time satisfies $T(n) = T(n / 2) + \Theta(1) = \Theta(\lg n)$. ```cpp MEDIAN(X, Y, n) if n == 1 return min(X[1], Y[1]) if X[n / 2] < Y[n / 2] return MEDIAN(X[n / 2 + 1..n], Y[1..n / 2], n / 2) return MEDIAN(X[1..n / 2], Y[n / 2 + 1..n], n / 2) ```
[ { "lang": "cpp", "code": "MEDIAN(X, Y, n)\n if n == 1\n return min(X[1], Y[1])\n if X[n / 2] < Y[n / 2]\n return MEDIAN(X[n / 2 + 1..n], Y[1..n / 2], n / 2)\n return MEDIAN(X[1..n / 2], Y[n / 2 + 1..n], n / 2)" } ]
false
[]
09-9.3-9
09
9.3
9.3-9
docs/Chap09/9.3.md
Professor Olay is consulting for an oil company, which is planning a large pipeline running east to west through an oil field of $n$ wells. The company wants to connect a spur pipeline from each well directly to the main pipeline along a shortest route (either north or south), as shown in Figure 9.2. Given the $x$- and $y$-coordinates of the wells, how should the professor pick the optimal location of the main pipeline, which would be the one that minimizes the total length of the spurs? Show how to determine the optimal location in linear time.
- If $n$ is odd, we pick the $y$-coordinate of the main pipeline to be the median of the $y$-coordinates of the wells. - If $n$ is even, we pick the $y$-coordinate of the pipeline to be anything between the $y$-coordinates of the wells whose $y$-coordinates have order statistics $\lfloor (n + 1) / 2 \rfloor$ and $\lceil (n + 1) / 2 \rceil$. These can all be found in linear time using the $\text{SELECT}$ algorithm from this section. The median is optimal because if strictly more wells lie on one side of the pipeline than on the other, sliding the pipeline toward the majority side strictly decreases the total spur length; an optimal location therefore has at most half of the wells strictly on each side, which is precisely the condition satisfied by the choices above.
[]
false
[]
09-9-1
09
9-1
9-1
docs/Chap09/Problems/9-1.md
Given a set of $n$ numbers, we wish to find the $i$ largest in sorted order using a comparison-based algorithm. Find the algorithm that implements each of the following methods with the best asymptotic worst-case running time, and analyze the running times of the algorithms in terms of $n$ and $i$ . **a.** Sort the numbers, and list the $i$ largest. **b.** Build a max-priority queue from the numbers, and call $\text{EXTRACT-MAX}$ $i$ times. **c.** Use an order-statistic algorithm to find the $i$th largest number, partition around that number, and sort the $i$ largest numbers.
**a.** The running time of sorting the numbers is $O(n\lg n)$, and the running time of listing the $i$ largest is $O(i)$. Therefore, the total running time is $O(n\lg n + i)$. **b.** The running time of building a max-priority queue (using a heap) from the numbers is $O(n)$, and the running time of each call $\text{EXTRACT-MAX}$ is $O(\lg n)$. Therefore, the total running time is $O(n + i\lg n)$. **c.** The running time of finding and partitioning around the $i$th largest number is $O(n)$, and the running time of sorting the $i$ largest numbers is $O(i\lg i)$. Therefore, the total running time is $O(n + i\lg i)$.
[]
false
[]
09-9-2
09
9-2
9-2
docs/Chap09/Problems/9-2.md
For $n$ distinct elements $x_1, x_2, \ldots, x_n$ with positive weights $w_1, w_2, \ldots, w_n$ such that $\sum_{i = 1}^n w_i = 1$, the **_weighted (lower) median_** is the element $x_k$ satisfying $$\sum_{x_i < x_k} w_i < \frac{1}{2}$$ and $$\sum_{x_i > x_k} w_i \le \frac{1}{2}.$$ For example, if the elements are $0.1, 0.35, 0.05, 0.1, 0.15, 0.05, 0.2$ and each element equals its weight (that is, $w_i = x_i$ for $i = 1, 2, \ldots, 7$), then the median is $0.1$, but the weighted median is $0.2$. **a.** Argue that the median of $x_1, x_2, \ldots, x_n$ is the weighted median of the $x_i$ with weights $w_i = 1 / n$ for $i = 1, 2, \ldots, n$. **b.** Show how to compute the weighted median of $n$ elements in $O(n\lg n)$ worstcase time using sorting. **c.** Show how to compute the weighted median in $\Theta(n)$ worst-case time using a linear-time median algorithm such as $\text{SELECT}$ from Section 9.3. The **_post-office location problem_** is defined as follows. We are given $n$ points $p_1, p_2, \ldots, p_n$ with associated weights $w_1, w_2, \ldots, w_n$. We wish to find a point $p$ (not necessarily one of the input points) that minimizes the sum $\sum_{i = 1}^n w_i d(p, p_i)$, where $d(a, b)$ is the distance between points $a$ and $b$. **d.** Argue that the weighted median is a best solution for the $1$-dimensional postoffice location problem, in which points are simply real numbers and the distance between points $a$ and $b$ is $d(a, b) = |a - b|$. **e.** Find the best solution for the $2$-dimensional post-office location problem, in which the points are $(x,y)$ coordinate pairs and the distance between points $a = (x_1, y_1)$ and $b = (x_2, y_2)$ is the **_Manhattan distance_** given by $d(a, b) = |x_1 - x_2| + |y_1 - y_2|$.
**a.** Let $m_k$ be the number of $x_i$ smaller than $x_k$. When weights of $1 / n$ are assigned to each $x_i$, we have $\sum_{x_i < x_k} w_i = m_k / n$ and $\sum_{x_i > x_k} w_i = (n - m_k - 1) / n$. The only value of $m_k$ which makes these sums $< 1 / 2$ and $\le 1 / 2$ respectively is $m_k = \lceil n / 2 \rceil - 1$, and the corresponding $x_k$ is precisely the (lower) median, since it has $\lceil n / 2 \rceil - 1$ of the $x_i$'s smaller than it and $\lfloor n / 2 \rfloor$ larger. **b.** First use mergesort to sort the $x_i$'s by value in $O(n\lg n)$ time. Let $S_i$ be the sum of the weights of the first $i$ elements of this sorted array and note that it is $O(1)$ to update $S_i$. Compute $S_1, S_2, \ldots$ until you reach $k$ such that $S_{k - 1} < 1 / 2$ and $S_k \ge 1 / 2$. The weighted median is $x_k$. **c.** We modify $\text{SELECT}$ to do this in linear time. Let $x$ be the median of medians. Compute $\sum_{x_i < x} w_i$ and $\sum_{x_i > x} w_i$ and check whether $x$ is the weighted median. If not, recurse on the collection of smaller or larger elements known to contain the weighted median, accounting for the weight of the discarded elements. This doesn't change the recurrence, so the running time is $\Theta(n)$; a sketch follows. **d.** Let $p$ be a minimizer, and suppose that $p$ is not the weighted median $p_m$. Let $\epsilon$ satisfy $0 < |\epsilon| < \min_i |p - p_i|$, where we don't include $i$ in the minimum if $p = p_i$; choose $\epsilon > 0$ if $p < p_m$ and $\epsilon < 0$ otherwise. Then $$\sum_{i = 1}^n w_id(p + \epsilon, p_i) = \sum_{i = 1}^n w_id(p, p_i) + \epsilon\left(\sum_{p_i < p} w_i - \sum_{p_i > p} w_i \right) < \sum_{i = 1}^n w_id(p, p_i),$$ because the weighted-median conditions make the parenthesized difference take the sign opposite to that of $\epsilon$. This contradicts the minimality of $p$, so the weighted median is a best solution. **e.** Observe that $$\sum_{i = 1}^n w_id(p, p_i) = \sum_{i = 1}^n w_i |p_x - (p_i)_x| + \sum_{i = 1}^n w_i|p_y - (p_i)_y|.$$ It suffices to minimize each sum separately, which we can do since we choose $p_x$ and $p_y$ individually. By part (d), we simply take $p = (p_x, p_y)$ such that $p_x$ is the weighted median of the $x$-coordinates of the $p_i$'s and $p_y$ is the weighted median of the $y$-coordinates of the $p_i$'s.
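A sketch of part (c) in the book's pseudocode. The name $\text{WEIGHTED-MEDIAN}$ and the weight-folding trick are ours: when a side is discarded, its total weight is folded into the pivot so the subproblem still has total weight $1$ and the same weighted median.

```cpp
WEIGHTED-MEDIAN(X, W, p, r)
    if p == r
        return X[p]
    x = SELECT(X, p, r, ⌈(r - p + 1) / 2⌉)   // ordinary median, Θ(r - p)
    partition X[p..r] around x, permuting W alongside; let q be x's position
    WL = W[p] + W[p + 1] + ... + W[q - 1]    // weight strictly below x
    WR = W[q + 1] + ... + W[r]               // weight strictly above x
    if WL < 1 / 2 and WR <= 1 / 2
        return x                             // x is the weighted median
    elseif WL >= 1 / 2                       // weighted median lies below x
        W[q] = W[q] + WR                     // fold the discarded upper weight into x
        return WEIGHTED-MEDIAN(X, W, p, q)
    else                                     // WR > 1 / 2: weighted median lies above x
        W[q] = W[q] + WL
        return WEIGHTED-MEDIAN(X, W, q, r)
```

Each call does $\Theta(r - p)$ work and recurses on about half the elements, so the total time is $\Theta(n)$.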
[]
false
[]
09-9-3
09
9-3
9-3
docs/Chap09/Problems/9-3.md
We showed that the worst-case number $T(n)$ of comparisons used by $\text{SELECT}$ to select the $i$th order statistic from $n$ numbers satisfies $T(n) = \Theta(n)$, but the constant hidden by the $\Theta$-notation is rather large. When $i$ is small relative to $n$, we can implement a different procedure that uses $\text{SELECT}$ as a subroutine but makes fewer comparisons in the worst case. **a.** Describe an algorithm that uses $U_i(n)$ comparisons to find the $i$th smallest of $n$ elements, where $$ U_i(n) = \begin{cases} T(n) & \text{if $i \ge n / 2$}, \\\\ \lfloor n / 2 \rfloor + U_i(\lceil n / 2 \rceil) + T(2i) & \text{otherwise}. \end{cases} $$ ($\textit{Hint:}$ Begin with $\lfloor n / 2 \rfloor$ disjoint pairwise comparisons, and recurse on the set containing the smaller element from each pair.) **b.** Show that, if $i < n / 2$, then $U_i(n) = n + O(T(2i)\lg(n / i))$. **c.** Show that if $i$ is a constant less than $n / 2$, then $U_i(n) = n + O(\lg n)$. **d.** Show that if $i = n / k$ for $k \ge 2$, then $U_i(n) = n + O(T(2n / k)\lg k)$.
(Removed)
[]
false
[]
09-9-4
09
9-4
9-4
docs/Chap09/Problems/9-4.md
In this problem, we use indicator random variables to analyze the $\text{RANDOMIZED-SELECT}$ procedure in a manner akin to our analysis of $\text{RANDOMIZED-QUICKSORT}$ in Section 7.4.2. As in the quicksort analysis, we assume that all elements are distinct, and we rename the elements of the input array $A$ as $z_1, z_2, \ldots, z_n$, where $z_i$ is the $i$th smallest element. Thus, the call $\text{RANDOMIZED-SELECT}(A, 1, n, k)$ returns $z_k$. For $1 \le i < j \le n$, let $$X_{ijk} = \text{I \\{$z_i$ is compared with $z_j$ sometime during the execution of the algorithm to find $z_k$\\}}.$$ **a.** Give an exact expression for $\text E[X_{ijk}]$. ($\textit{Hint:}$ Your expression may have different values, depending on the values of $i$, $j$, and $k$.) **b.** Let $X_k$ denote the total number of comparisons between elements of array $A$ when finding $z_k$. Show that $$\text E[X_k] \le 2 \Bigg (\sum_{i = 1}^{k}\sum_{j = k}^n \frac{1}{j - i + 1} + \sum_{j = k + 1}^{n} \frac{j - k - 1}{j - k + 1} + \sum_{i = 1}^{k-2} \frac{k - i - 1}{k - i + 1} \Bigg).$$ **c.** Show that $\text E[X_k] \le 4n$. **d.** Conclude that, assuming all elements of array $A$ are distinct, $\text{RANDOMIZED-SELECT}$ runs in expected time $O(n)$.
(Removed)
[]
false
[]
10-10.1-1
10
10.1
10.1-1
docs/Chap10/10.1.md
Using Figure 10.1 as a model, illustrate the result of each operation in the sequence $\text{PUSH}(S, 4)$, $\text{PUSH}(S, 1)$, $\text{PUSH}(S, 3)$, $\text{POP}(S)$, $\text{PUSH}(S, 8)$, and $\text{POP}(S)$ on an initially empty stack $S$ stored in array $S[1..6]$.
$$ \begin{array}{l|ccc} \text{PUSH($S, 4$)} & 4 & & \\\\ \text{PUSH($S, 1$)} & 4 & 1 & \\\\ \text{PUSH($S, 3$)} & 4 & 1 & 3 \\\\ \text{POP($S$)} & 4 & 1 & \\\\ \text{PUSH($S, 8$)} & 4 & 1 & 8 \\\\ \text{POP($S$)} & 4 & 1 & \end{array} $$
[]
false
[]
10-10.1-2
10
10.1
10.1-2
docs/Chap10/10.1.md
Explain how to implement two stacks in one array $A[1..n]$ in such a way that neither stack overflows unless the total number of elements in both stacks together is $n$. The $\text{PUSH}$ and $\text{POP}$ operations should run in $O(1)$ time.
The first stack starts at $1$ and grows up towards $n$, while the second starts from $n$ and grows down towards $1$. Overflow occurs only when a push is attempted while the two stack tops are adjacent, i.e., when the two stacks together already hold all $n$ positions. All four operations are $O(1)$; a sketch follows.
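A sketch in the book's pseudocode; $top1$ and $top2$ are our names for the two stack pointers, initialized to $0$ and $A.length + 1$:

```cpp
PUSH1(A, x)
    if A.top1 + 1 == A.top2
        error "overflow"         // the stacks meet: n elements are stored
    A.top1 = A.top1 + 1
    A[A.top1] = x

PUSH2(A, x)
    if A.top1 + 1 == A.top2
        error "overflow"
    A.top2 = A.top2 - 1
    A[A.top2] = x

POP1(A)
    if A.top1 == 0
        error "underflow"
    A.top1 = A.top1 - 1
    return A[A.top1 + 1]

POP2(A)
    if A.top2 == A.length + 1
        error "underflow"
    A.top2 = A.top2 + 1
    return A[A.top2 - 1]
```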
[]
false
[]
10-10.1-3
10
10.1
10.1-3
docs/Chap10/10.1.md
Using Figure 10.2 as a model, illustrate the result of each operation in the sequence $\text{ENQUEUE}(Q, 4)$, $\text{ENQUEUE}(Q ,1)$, $\text{ENQUEUE}(Q, 3)$, $\text{DEQUEUE}(Q)$, $\text{ENQUEUE}(Q, 8)$, and $\text{DEQUEUE}(Q)$ on an initially empty queue $Q$ stored in array $Q[1..6]$.
$$ \begin{array}{l|cccc} \text{ENQUEUE($Q, 4$)} & 4 & & & \\\\ \text{ENQUEUE($Q, 1$)} & 4 & 1 & & \\\\ \text{ENQUEUE($Q, 3$)} & 4 & 1 & 3 & \\\\ \text{DEQUEUE($Q$)} & & 1 & 3 & \\\\ \text{ENQUEUE($Q, 8$)} & & 1 & 3 & 8 \\\\ \text{DEQUEUE($Q$)} & & & 3 & 8 \end{array} $$
[]
false
[]
10-10.1-4
10
10.1
10.1-4
docs/Chap10/10.1.md
Rewrite $\text{ENQUEUE}$ and $\text{DEQUEUE}$ to detect underflow and overflow of a queue.
To detect underflow and overflow of a queue, we can implement $\text{QUEUE-EMPTY}$ and $\text{QUEUE-FULL}$ first. ```cpp QUEUE-EMPTY(Q) if Q.head == Q.tail return true else return false ``` ```cpp QUEUE-FULL(Q) if Q.head == Q.tail + 1 or (Q.head == 1 and Q.tail == Q.length) return true else return false ``` ```cpp ENQUEUE(Q, x) if QUEUE-FULL(Q) error "overflow" else Q[Q.tail] = x if Q.tail == Q.length Q.tail = 1 else Q.tail = Q.tail + 1 ``` ```cpp DEQUEUE(Q) if QUEUE-EMPTY(Q) error "underflow" else x = Q[Q.head] if Q.head == Q.length Q.head = 1 else Q.head = Q.head + 1 return x ```
[ { "lang": "cpp", "code": "QUEUE-EMPTY(Q)\n if Q.head == Q.tail\n return true\n else return false" }, { "lang": "cpp", "code": "QUEUE-FULL(Q)\n if Q.head == Q.tail + 1 or (Q.head == 1 and Q.tail == Q.length)\n return true\n else return false" }, { "lang": "cpp", "code": "ENQUEUE(Q, x)\n if QUEUE-FULL(Q)\n error \"overflow\"\n else\n Q[Q.tail] = x\n if Q.tail == Q.length\n Q.tail = 1\n else Q.tail = Q.tail + 1" }, { "lang": "cpp", "code": "DEQUEUE(Q)\n if QUEUE-EMPTY(Q)\n error \"underflow\"\n else\n x = Q[Q.head]\n if Q.head == Q.length\n Q.head = 1\n else Q.head = Q.head + 1\n return x" } ]
false
[]
10-10.1-5
10
10.1
10.1-5
docs/Chap10/10.1.md
Whereas a stack allows insertion and deletion of elements at only one end, and a queue allows insertion at one end and deletion at the other end, a **_deque_** (double-ended queue) allows insertion and deletion at both ends. Write four $O(1)$-time procedures to insert elements into and delete elements from both ends of a deque implemented by an array.
The procedures $\text{QUEUE-EMPTY}$ and $\text{QUEUE-FULL}$ are implemented in Exercise 10.1-4. ```cpp HEAD-ENQUEUE(Q, x) if QUEUE-FULL(Q) error "overflow" else if Q.head == 1 Q.head = Q.length else Q.head = Q.head - 1 Q[Q.head] = x ``` ```cpp TAIL-ENQUEUE(Q, x) if QUEUE-FULL(Q) error "overflow" else Q[Q.tail] = x if Q.tail == Q.length Q.tail = 1 else Q.tail = Q.tail + 1 ``` ```cpp HEAD-DEQUEUE(Q) if QUEUE-EMPTY(Q) error "underflow" else x = Q[Q.head] if Q.head == Q.length Q.head = 1 else Q.head = Q.head + 1 return x ``` ```cpp TAIL-DEQUEUE(Q) if QUEUE-EMPTY(Q) error "underflow" else if Q.tail == 1 Q.tail = Q.length else Q.tail = Q.tail - 1 x = Q[Q.tail] return x ```
[ { "lang": "cpp", "code": "HEAD-ENQUEUE(Q, x)\n if QUEUE-FULL(Q)\n error \"overflow\"\n else\n if Q.head == 1\n Q.head = Q.length\n else Q.head = Q.head - 1\n Q[Q.head] = x" }, { "lang": "cpp", "code": "TAIL-ENQUEUE(Q, x)\n if QUEUE-FULL(Q)\n error \"overflow\"\n else\n Q[Q.tail] = x\n if Q.tail == Q.length\n Q.tail = 1\n else Q.tail = Q.tail + 1" }, { "lang": "cpp", "code": "HEAD-DEQUEUE(Q)\n if QUEUE-EMPTY(Q)\n error \"underflow\"\n else\n x = Q[Q.head]\n if Q.head == Q.length\n Q.head = 1\n else Q.head = Q.head + 1\n return x" }, { "lang": "cpp", "code": "TAIL-DEQUEUE(Q)\n if QUEUE-EMPTY(Q)\n error \"underflow\"\n else\n if Q.tail == 1\n Q.tail = Q.length\n else Q.tail = Q.tail - 1\n x = Q[Q.tail]\n return x" } ]
false
[]
10-10.1-6
10
10.1
10.1-6
docs/Chap10/10.1.md
Show how to implement a queue using two stacks. Analyze the running time of the queue operations.
- $\text{ENQUEUE}$: $\Theta(1)$. - $\text{DEQUEUE}$: worst case $O(n)$, amortized $\Theta(1)$. Let the two stacks be $A$ and $B$. $\text{ENQUEUE}$ pushes elements on $B$. $\text{DEQUEUE}$ pops elements from $A$. If $A$ is empty, the contents of $B$ are transferred to $A$ by popping them out of $B$ and pushing them into $A$; that way they appear in reverse order and are popped in the original order. A single $\text{DEQUEUE}$ can take $\Theta(n)$ time, but only when $A$ is empty, and each element is transferred at most once; over any sequence of operations the total time is linear in the number of operations, not in the length of the queue. A sketch follows.
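A sketch in the book's pseudocode, with the two stacks stored as $Q.A$ and $Q.B$:

```cpp
ENQUEUE(Q, x)
    PUSH(Q.B, x)                      // Θ(1)

DEQUEUE(Q)
    if STACK-EMPTY(Q.A)
        if STACK-EMPTY(Q.B)
            error "underflow"
        while !STACK-EMPTY(Q.B)       // one transfer reverses the order
            PUSH(Q.A, POP(Q.B))
    return POP(Q.A)                   // each element moves from B to A at most
                                      // once, so m operations take O(m) time
```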
[]
false
[]
10-10.1-7
10
10.1
10.1-7
docs/Chap10/10.1.md
Show how to implement a stack using two queues. Analyze the running time of the stack operations.
- $\text{PUSH}$: $\Theta(1)$. - $\text{POP}$: $\Theta(n)$. We keep two queues and mark one of them as active. $\text{PUSH}$ enqueues the element on the active queue. $\text{POP}$ dequeues all but one element of the active queue and enqueues them on the inactive one; the roles of the queues are then reversed, and the one element left behind is returned. The $\text{PUSH}$ operation is $\Theta(1)$, but the $\text{POP}$ operation is $\Theta(n)$, where $n$ is the number of elements in the stack. A sketch follows.
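A sketch in the book's pseudocode, with the queues stored as $S.active$ and $S.inactive$:

```cpp
PUSH(S, x)
    ENQUEUE(S.active, x)                         // Θ(1)

POP(S)
    if QUEUE-EMPTY(S.active)
        error "underflow"
    while S.active has more than one element     // Θ(n) transfer
        ENQUEUE(S.inactive, DEQUEUE(S.active))
    x = DEQUEUE(S.active)                        // the most recently pushed element
    exchange S.active with S.inactive            // the queues swap roles
    return x
```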
[]
false
[]
10-10.2-1
10
10.2
10.2-1
docs/Chap10/10.2.md
Can you implement the dynamic-set operation $\text{INSERT}$ on a singly linked list in $O(1)$ time? How about $\text{DELETE}$?
- $\text{INSERT}$: can be implemented in constant time by prepending the element to the list. ```cpp LIST-INSERT(L, x) x.next = L.head L.head = x ``` - $\text{DELETE}$: we cannot find the predecessor in $O(1)$, but we can copy the value from the successor into the element we want to delete and then delete the successor in $O(1)$ time, as sketched below. This solution is not good when the elements are large objects, since the whole object must be copied, and it does not apply to the last element, which has no successor.
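A sketch of the copy-from-successor trick; it assumes $x$ is not the last element (deleting the last element of a singly linked list requires a linear scan for its predecessor):

```cpp
LIST-DELETE(L, x)
    y = x.next        // the successor, assumed to exist
    x.key = y.key     // overwrite x's contents with y's
    x.next = y.next   // unlink y; the node y can now be freed
```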
[ { "lang": "cpp", "code": " LIST-INSERT(L, x)\n x.next = L.head\n L.head = x" } ]
false
[]
10-10.2-2
10
10.2
10.2-2
docs/Chap10/10.2.md
Implement a stack using a singly linked list $L$. The operations $\text{PUSH}$ and $\text{POP}$ should still take $O(1)$ time.
```cpp STACK-EMPTY(L) if L.head == NIL return true else return false ``` - $\text{PUSH}$: adds an element in the beginning of the list. ```cpp PUSH(L, x) x.next = L.head L.head = x ``` - $\text{POP}$: removes the first element from the list. ```cpp POP(L) if STACK-EMPTY(L) error "underflow" else x = L.head L.head = L.head.next return x ```
[ { "lang": "cpp", "code": "STACK-EMPTY(L)\n if L.head == NIL\n return true\n else return false" }, { "lang": "cpp", "code": " PUSH(L, x)\n x.next = L.head\n L.head = x" }, { "lang": "cpp", "code": " POP(L)\n if STACK-EMPTY(L)\n error \"underflow\"\n else\n x = L.head\n L.head = L.head.next\n return x" } ]
false
[]
10-10.2-3
10
10.2
10.2-3
docs/Chap10/10.2.md
Implement a queue by a singly linked list $L$. The operations $\text{ENQUEUE}$ and $\text{DEQUEUE}$ should still take $O(1)$ time.
```cpp QUEUE-EMPTY(L) if L.head == NIL return true else return false ``` - $\text{ENQUEUE}$: inserts an element at the end of the list. In this case we need to keep track of the last element of the list. We can do that by maintaining a tail pointer $L.tail$. ```cpp ENQUEUE(L, x) if QUEUE-EMPTY(L) L.head = x else L.tail.next = x L.tail = x x.next = NIL ``` - $\text{DEQUEUE}$: removes an element from the beginning of the list. ```cpp DEQUEUE(L) if QUEUE-EMPTY(L) error "underflow" else x = L.head if L.head == L.tail L.tail = NIL L.head = L.head.next return x ```
[ { "lang": "cpp", "code": "QUEUE-EMPTY(L)\n if L.head == NIL\n return true\n else return false" }, { "lang": "cpp", "code": " ENQUEUE(L, x)\n if QUEUE-EMPTY(L)\n L.head = x\n else L.tail.next = x\n L.tail = x\n x.next = NIL" }, { "lang": "cpp", "code": " DEQUEUE(L)\n if QUEUE-EMPTY(L)\n error \"underflow\"\n else\n x = L.head\n if L.head == L.tail\n L.tail = NIL\n L.head = L.head.next\n return x" } ]
false
[]
10-10.2-4
10
10.2
10.2-4
docs/Chap10/10.2.md
As written, each loop iteration in the $\text{LIST-SEARCH}'$ procedure requires two tests: one for $x \ne L.nil$ and one for $x.key \ne k$. Show how to eliminate the test for $x \ne L.nil$ in each iteration.
```cpp LIST-SEARCH'(L, k) x = L.nil.next L.nil.key = k while x.key != k x = x.next return x ```
[ { "lang": "cpp", "code": "LIST-SEARCH'(L, k)\n x = L.nil.next\n L.nil.key = k\n while x.key != k\n x = x.next\n return x" } ]
false
[]
10-10.2-5
10
10.2
10.2-5
docs/Chap10/10.2.md
Implement the dictionary operations $\text{INSERT}$, $\text{DELETE}$, and $\text{SEARCH}$ using singly linked, circular lists. What are the running times of your procedures?
- $\text{INSERT}$: $O(1)$. ```cpp LIST-INSERT''(L, x) x.next = L.nil.next L.nil.next = x ``` - $\text{DELETE}$: $O(n)$. ```cpp LIST-DELETE''(L, x) prev = L.nil while prev.next != x if prev.next == L.nil error "element not exist" prev = prev.next prev.next = x.next ``` - $\text{SEARCH}$: $O(n)$. ```cpp LIST-SEARCH''(L, k) x = L.nil.next while x != L.nil and x.key != k x = x.next return x ```
[ { "lang": "cpp", "code": " LIST-INSERT''(L, x)\n x.next = L.nil.next\n L.nil.next = x" }, { "lang": "cpp", "code": " LIST-DELETE''(L, x)\n prev = L.nil\n while prev.next != x\n if prev.next == L.nil\n error \"element not exist\"\n prev = prev.next\n prev.next = x.next" }, { "lang": "cpp", "code": " LIST-SEARCH''(L, k)\n x = L.nil.next\n while x != L.nil and x.key != k\n x = x.next\n return x" } ]
false
[]
10-10.2-6
10
10.2
10.2-6
docs/Chap10/10.2.md
The dynamic-set operation $\text{UNION}$ takes two disjoint sets $S_1$ and $S_2$ as input, and it returns a set $S = S_1 \cup S_2$ consisting of all the elements of $S_1$ and $S_2$. The sets $S_1$ and $S_2$ are usually destroyed by the operation. Show how to support $\text{UNION}$ in $O(1)$ time using a suitable list data structure.
If both sets are doubly linked lists, we simply link the last element of the first list to the first element of the second. If the implementation uses sentinels, we need to splice out one of them. ```cpp LIST-UNION(L[1], L[2]) L[2].nil.next.prev = L[1].nil.prev L[1].nil.prev.next = L[2].nil.next L[2].nil.prev.next = L[1].nil L[1].nil.prev = L[2].nil.prev ```
[ { "lang": "cpp", "code": "LIST-UNION(L[1], L[2])\n L[2].nil.next.prev = L[1].nil.prev\n L[1].nil.prev.next = L[2].nil.next\n L[2].nil.prev.next = L[1].nil\n L[1].nil.prev = L[2].nil.prev" } ]
false
[]
10-10.2-7
10
10.2
10.2-7
docs/Chap10/10.2.md
Give a $\Theta(n)$-time nonrecursive procedure that reverses a singly linked list of $n$ elements. The procedure should use no more than constant storage beyond that needed for the list itself.
```cpp LIST-REVERSE(L) p[1] = NIL p[2] = L.head while p[2] != NIL p[3] = p[2].next p[2].next = p[1] p[1] = p[2] p[2] = p[3] L.head = p[1] ```
[ { "lang": "cpp", "code": "LIST-REVERSE(L)\n p[1] = NIL\n p[2] = L.head\n while p[2] != NIL\n p[3] = p[2].next\n p[2].next = p[1]\n p[1] = p[2]\n p[2] = p[3]\n L.head = p[1]" } ]
false
[]
10-10.2-8
10
10.2
10.2-8 $\star$
docs/Chap10/10.2.md
Explain how to implement doubly linked lists using only one pointer value $x.np$ per item instead of the usual two ($next$ and $prev$). Assume all pointer values can be interpreted as $k$-bit integers, and define $x.np$ to be $x.np = x.next \text{ XOR } x.prev$, the $k$-bit "exclusive-or" of $x.next$ and $x.prev$. (The value $\text{NIL}$ is represented by $0$.) Be sure to describe what information you need to access the head of the list. Show how to implement the $\text{SEARCH}$, $\text{INSERT}$, and $\text{DELETE}$ operations on such a list. Also show how to reverse such a list in $O(1)$ time.
```cpp LIST-SEARCH(L, k) prev = NIL x = L.head while x != NIL and x.key != k next = prev XOR x.np prev = x x = next return x ``` ```cpp LIST-INSERT(L, x) x.np = NIL XOR L.tail if L.tail != NIL L.tail.np = (L.tail.np XOR NIL) XOR x // tail.prev XOR x if L.head == NIL L.head = x L.tail = x ``` ```cpp LIST-DELETE(L, x) y = L.head prev = NIL while y != NIL next = prev XOR y.np if y != x prev = y y = next else if prev != NIL prev.np = (prev.np XOR y) XOR next // prev.prev XOR next else L.head = next if next != NIL next.np = prev XOR (y XOR next.np) // prev XOR next.next else L.tail = prev ``` ```cpp LIST-REVERSE(L) tmp = L.head L.head = L.tail L.tail = tmp ```
[ { "lang": "cpp", "code": "LIST-SEARCH(L, k)\n prev = NIL\n x = L.head\n while x != NIL and x.key != k\n next = prev XOR x.np\n prev = x\n x = next\n return x" }, { "lang": "cpp", "code": "LIST-INSERT(L, x)\n x.np = NIL XOR L.tail\n if L.tail != NIL\n L.tail.np = (L.tail.np XOR NIL) XOR x // tail.prev XOR x\n if L.head == NIL\n L.head = x\n L.tail = x" }, { "lang": "cpp", "code": "LIST-DELETE(L, x)\n y = L.head\n prev = NIL\n while y != NIL\n next = prev XOR y.np\n if y != x\n prev = y\n y = next\n else\n if prev != NIL\n prev.np = (prev.np XOR y) XOR next // prev.prev XOR next\n else L.head = next\n if next != NIL\n next.np = prev XOR (y XOR next.np) // prev XOR next.next\n else L.tail = prev" }, { "lang": "cpp", "code": "LIST-REVERSE(L)\n tmp = L.head\n L.head = L.tail\n L.tail = tmp" } ]
false
[]
10-10.3-1
10
10.3
10.3-1
docs/Chap10/10.3.md
Draw a picture of the sequence $\langle 13, 4, 8, 19, 5, 11 \rangle$ stored as a doubly linked list using the multiple-array representation. Do the same for the single-array representation.
- A multiple-array representation with $L = 2$, $$ \begin{array}{|r|c|c|c|c|c|c|c|} \hline index & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\\\ \hline next & & 3 & 4 & 5 & 6 & 7 & \diagup \\\\ \hline key & & 13 & 4 & 8 & 19 & 5 & 11 \\\\ \hline prev & & \diagup & 2 & 3 & 4 & 5 & 6 \\\\ \hline \end{array} $$ - A single-array version with $L = 1$ (each object occupies three consecutive slots: $key$, $next$, $prev$), $$ \begin{array}{|r|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline index & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 \\\\ \hline A & 13 & 4 & \diagup & 4 & 7 & 1 & 8 & 10 & 4 & 19 & 13 & 7 & 5 & 16 & 10 & 11 & \diagup & 13 \\\\ \hline \end{array} $$
[]
false
[]
10-10.3-2
10
10.3
10.3-2
docs/Chap10/10.3.md
Write the procedures $\text{ALLOCATE-OBJECT}$ and $\text{FREE-OBJECT}$ for a homogeneous collection of objects implemented by the single-array representation.
```cpp ALLOCATE-OBJECT() if free == NIL error "out of space" else x = free free = A[x + 1] return x ``` ```cpp FREE-OBJECT(x) A[x + 1] = free free = x ```
[ { "lang": "cpp", "code": "ALLOCATE-OBJECT()\n if free == NIL\n error \"out of space\"\n else x = free\n free = A[x + 1]\n return x" }, { "lang": "cpp", "code": "FREE-OBJECT(x)\n A[x + 1] = free\n free = x" } ]
false
[]
10-10.3-3
10
10.3
10.3-3
docs/Chap10/10.3.md
Why don't we need to set or reset the $prev$ attributes of objects in the implementation of the $\text{ALLOCATE-OBJECT}$ and $\text{FREE-OBJECT}$ procedures?
We implement $\text{ALLOCATE-OBJECT}$ and $\text{FREE-OBJECT}$ to manage the currently unused objects in the free list so that they can be reused. The free list acts like a stack: to maintain it, we only remember the pointer to its first object and set the $next$ attributes of its objects. The $prev$ attribute of a free object is never examined, so setting or resetting it would be wasted work; it has no impact on the resulting free list.
[]
false
[]
10-10.3-4
10
10.3
10.3-4
docs/Chap10/10.3.md
It is often desirable to keep all elements of a doubly linked list compact in storage, using, for example, the first $m$ index locations in the multiple-array representation. (This is the case in a paged, virtual-memory computing environment.) Explain how to implement the procedures $\text{ALLOCATE-OBJECT}$ and $\text{FREE-OBJECT}$ so that the representation is compact. Assume that there are no pointers to elements of the linked list outside the list itself. ($\textit{Hint:}$ Use the array implementation of a stack.)
We keep the allocated objects compact in positions $1, \ldots, n$ and the free positions as a stack $F$ occupying positions $n + 1, \ldots, m$, so that the last used position is $F.top - 1$. $\text{ALLOCATE-OBJECT}$ pops a free position; $\text{FREE-OBJECT}(x)$ moves the object stored in the last used position $p$ into the freed slot $x$, redirects $p$'s neighbors to point at $x$, and pushes $p$ onto $F$. ```cpp ALLOCATE-OBJECT() if STACK-EMPTY(F) error "out of space" else x = POP(F) return x ``` ```cpp FREE-OBJECT(x) p = F.top - 1 p.prev.next = x p.next.prev = x x.key = p.key x.prev = p.prev x.next = p.next PUSH(F, p) ```
[ { "lang": "cpp", "code": "ALLOCATE-OBJECT()\n if STACK-EMPTY(F)\n error \"out of space\"\n else x = POP(F)\n return x" }, { "lang": "cpp", "code": "FREE-OBJECT(x)\n p = F.top - 1\n p.prev.next = x\n p.next.prev = x\n x.key = p.key\n x.prev = p.prev\n x.next = p.next\n PUSH(F, p)" } ]
false
[]
10-10.3-5
10
10.3
10.3-5
docs/Chap10/10.3.md
Let $L$ be a doubly linked list of length $n$ stored in arrays $key$, $prev$, and $next$ of length $m$. Suppose that these arrays are managed by $\text{ALLOCATE-OBJECT}$ and $\text{FREE-OBJECT}$ procedures that keep a doubly linked free list $F$. Suppose further that of the $m$ items, exactly $n$ are on list $L$ and $m - n$ are on the free list. Write a procedure $\text{COMPACTIFY-LIST}(L, F)$ that, given the list $L$ and the free list $F$, moves the items in $L$ so that they occupy array positions $1, 2, \ldots, n$ and adjusts the free list $F$ so that it remains correct, occupying array positions $n + 1, n + 2, \ldots, m$. The running time of your procedure should be $\Theta(n)$, and it should use only a constant amount of extra space. Argue that your procedure is correct.
We represent the combination of arrays $key$, $prev$, and $next$ by a multiple-array $A$. Each object of $A$ is either in list $L$ or in the free list $F$, but not in both. The procedure $\text{COMPACTIFY-LIST}$ transposes the first object of $L$ with the object in position $1$ of $A$, the second object of $L$ with position $2$, and so on, until the list $L$ is exhausted; there are $n$ transpositions, each taking $O(1)$ time and constant extra space, so the running time is $\Theta(n)$. ```cpp COMPACTIFY-LIST(L, F) TRANSPOSE(A[L.head], A[1]) if F.head == 1 F.head = L.head L.head = 1 l = A[L.head].next i = 2 while l != NIL TRANSPOSE(A[l], A[i]) if F == i F = l l = A[l].next i = i + 1 ``` ```cpp TRANSPOSE(a, b) SWAP(a.prev.next, b.prev.next) SWAP(a.prev, b.prev) SWAP(a.next.prev, b.next.prev) SWAP(a.next, b.next) ```
[ { "lang": "cpp", "code": "COMPACTIFY-LIST(L, F)\n TRANSPOSE(A[L.head], A[1])\n if F.head == 1\n F.head = L.head\n L.head = 1\n l = A[L.head].next\n i = 2\n while l != NIL\n TRANSPOSE(A[l], A[i])\n if F == i\n F = l\n l = A[l].next\n i = i + 1" }, { "lang": "cpp", "code": "TRANSPOSE(a, b)\n SWAP(a.prev.next, b.prev.next)\n SWAP(a.prev, b.prev)\n SWAP(a.next.prev, b.next.prev)\n SWAP(a.next, b.next)" } ]
false
[]
10-10.4-1
10
10.4
10.4-1
docs/Chap10/10.4.md
Draw the binary tree rooted at index $6$ that is represented by the following attributes: $$ \begin{array}{cccc} \text{index} & key & left & right \\\\ \hline 1 & 12 & 7 & 3 \\\\ 2 & 15 & 8 & \text{NIL} \\\\ 3 & 4 & 10 & \text{NIL} \\\\ 4 & 10 & 5 & 9 \\\\ 5 & 2 & \text{NIL} & \text{NIL} \\\\ 6 & 18 & 1 & 4 \\\\ 7 & 7 & \text{NIL} & \text{NIL} \\\\ 8 & 14 & 6 & 2 \\\\ 9 & 21 & \text{NIL} & \text{NIL} \\\\ 10 & 5 & \text{NIL} & \text{NIL} \end{array} $$
![](../img/10.4-1.png)
[]
true
[ "../img/10.4-1.png" ]
10-10.4-2
10
10.4
10.4-2
docs/Chap10/10.4.md
Write an $O(n)$-time recursive procedure that, given an $n$-node binary tree, prints out the key of each node in the tree.
```cpp PRINT-BINARY-TREE(T) PRINT-BINARY-TREE-AUX(T.root) PRINT-BINARY-TREE-AUX(x) if x != NIL PRINT-BINARY-TREE-AUX(x.left) print x.key PRINT-BINARY-TREE-AUX(x.right) ```
[ { "lang": "cpp", "code": "PRINT-BINARY-TREE(T)\n PRINT-BINARY-TREE-AUX(T.root)\n\nPRINT-BINARY-TREE-AUX(x)\n if node != NIL\n PRINT-BINARY-TREE-AUX(x.left)\n print x.key\n PRINT-BINARY-TREE-AUX(x.right)" } ]
false
[]
10-10.4-3
10
10.4
10.4-3
docs/Chap10/10.4.md
Write an $O(n)$-time nonrecursive procedure that, given an $n$-node binary tree, prints out the key of each node in the tree. Use a stack as an auxiliary data structure.
```cpp PRINT-BINARY-TREE(T, S) PUSH(S, T.root) while !STACK-EMPTY(S) x = S[S.top] while x != NIL // store all nodes on the path towards the leftmost leaf PUSH(S, x.left) x = S[S.top] POP(S) // S has NIL on its top, so pop it if !STACK-EMPTY(S) // print this nodes, leap to its in-order successor x = POP(S) print x.key PUSH(S, x.right) ```
[ { "lang": "cpp", "code": "PRINT-BINARY-TREE(T, S)\n PUSH(S, T.root)\n while !STACK-EMPTY(S)\n x = S[S.top]\n while x != NIL // store all nodes on the path towards the leftmost leaf\n PUSH(S, x.left)\n x = S[S.top]\n POP(S) // S has NIL on its top, so pop it\n if !STACK-EMPTY(S) // print this nodes, leap to its in-order successor\n x = POP(S)\n print x.key\n PUSH(S, x.right)" } ]
false
[]
10-10.4-4
10
10.4
10.4-4
docs/Chap10/10.4.md
Write an $O(n)$-time procedure that prints all the keys of an arbitrary rooted tree with $n$ nodes, where the tree is stored using the left-child, right-sibling representation.
```cpp PRINT-LCRS-TREE(T) PRINT-LCRS-NODE(T.root) PRINT-LCRS-NODE(x) if x != NIL print x.key child = x.left-child while child != NIL PRINT-LCRS-NODE(child) child = child.right-sibling ```
[ { "lang": "cpp", "code": "PRINT-LCRS-TREE(T)\n x = T.root\n if x != NIL\n print x.key\n lc = x.left-child\n if lc != NIL\n PRINT-LCRS-TREE(lc)\n rs = lc.right-sibling\n while rs != NIL\n PRINT-LCRS-TREE(rs)\n rs = rs.right-sibling" } ]
false
[]
10-10.4-5
10
10.4
10.4-5 $\star$
docs/Chap10/10.4.md
Write an $O(n)$-time nonrecursive procedure that, given an $n$-node binary tree, prints out the key of each node. Use no more than constant extra space outside of the tree itself and do not modify the tree, even temporarily, during the procedure.
```cpp PRINT-KEY(T) prev = NIL x = T.root while x != NIL if prev == x.parent print x.key prev = x if x.left x = x.left else if x.right x = x.right else x = x.parent else if prev == x.left and x.right != NIL prev = x x = x.right else prev = x x = x.parent ```
[ { "lang": "cpp", "code": "PRINT-KEY(T)\n prev = NIL\n x = T.root\n while x != NIL\n if prev = x.parent\n print x.key\n prev = x\n if x.left\n x = x.left\n else\n if x.right\n x = x.right\n else \n x = x.parent\n else if prev == x.left and x.right != NIL\n prev = x\n x = x.right\n else\n prev = x\n x = x.parent" } ]
false
[]
10-10.4-6
10
10.4
10.4-6 $\star$
docs/Chap10/10.4.md
The left-child, right-sibling representation of an arbitrary rooted tree uses three pointers in each node: _left-child_, _right-sibling_, and _parent_. From any node, its parent can be reached and identified in constant time and all its children can be reached and identified in time linear in the number of children. Show how to use only two pointers and one boolean value in each node so that the parent of a node or all of its children can be reached and identified in time linear in the number of children.
Use the boolean to mark the last sibling: each node keeps a _left-child_ pointer and a second pointer that points to its right sibling, except in the last sibling, where it points back to the parent and the boolean is set. All children of a node are then reached by following _left-child_ once and then sibling pointers, and the parent of a node is reached by following sibling pointers until the boolean indicates the parent pointer; both take time linear in the number of children of the parent. A sketch follows.
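A sketch of the representation; the field names $next$ and $last\text{-}sibling$ are ours:

```cpp
// Each node stores: key, left-child, next, and a boolean last-sibling.
// next points to the right sibling, except in the last sibling,
// where it points to the parent and last-sibling is true.

PARENT(x)
    while x.last-sibling == false
        x = x.next               // walk right along the siblings
    return x.next                // the last sibling's next is the parent

PRINT-CHILDREN(x)
    c = x.left-child
    if c == NIL
        return
    while true
        print c.key
        if c.last-sibling
            return               // c.next would be the parent; stop here
        c = c.next
```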
[]
false
[]
10-10-1
10
10-1
10-1
docs/Chap10/Problems/10-1.md
For each of the four types of lists in the following table, what is the asymptotic worst-case running time for each dynamic-set operation listed? $$ \begin{array}{l|c|c|c|c|} & \text{unsorted, singly linked} & \text{sorted, singly linked} & \text{unsorted, doubly linked} & \text{sorted, doubly linked} \\\\ \hline \text{SEARCH($L, k$)} & & & & \\\\ \hline \text{INSERT($L, x$)} & & & & \\\\ \hline \text{DELETE($L, x$)} & & & & \\\\ \hline \text{SUCCESSOR($L, x$)} & & & & \\\\ \hline \text{PREDECESSOR($L, x$)} & & & & \\\\ \hline \text{MINIMUM($L$)} & & & & \\\\ \hline \text{MAXIMUM($L$)} & & & & \\\\ \hline \end{array} $$
$$ \begin{array}{l|c|c|c|c|} & \text{unsorted, singly linked} & \text{sorted, singly linked} & \text{unsorted, doubly linked} & \text{sorted, doubly linked} \\\\ \hline \text{SEARCH($L, k$)} & \Theta(n) & \Theta(n) & \Theta(n) & \Theta(n) \\\\ \hline \text{INSERT($L, x$)} & \Theta(1) & \Theta(n) & \Theta(1) & \Theta(n) \\\\ \hline \text{DELETE($L, x$)} & \Theta(n) & \Theta(n) & \Theta(1) & \Theta(1) \\\\ \hline \text{SUCCESSOR($L, x$)} & \Theta(n) & \Theta(1) & \Theta(n) & \Theta(1) \\\\ \hline \text{PREDECESSOR($L, x$)} & \Theta(n) & \Theta(n) & \Theta(n) & \Theta(1) \\\\ \hline \text{MINIMUM($L$)} & \Theta(n) & \Theta(1) & \Theta(n) & \Theta(1) \\\\ \hline \text{MAXIMUM($L$)} & \Theta(n) & \Theta(n) & \Theta(n) & \Theta(1) \\\\ \hline \end{array} $$
[]
false
[]
10-10-2
10
10-2
10-2
docs/Chap10/Problems/10-2.md
A **_mergeable heap_** supports the following operations: $\text{MAKE-HEAP}$ (which creates an empty mergeable heap), $\text{INSERT}$, $\text{MINIMUM}$, $\text{EXTRACT-MIN}$, and $\text{UNION}$. Show how to implement mergeable heaps using linked lists in each of the following cases. Try to make each operation as efficient as possible. Analyze the running time of each operation in terms of the size of the dynamic set(s) being operated on. **a.** Lists are sorted. **b.** Lists are unsorted. **c.** Lists are unsorted, and dynamic sets to be merged are disjoint.
In all three cases, $\text{MAKE-HEAP}$ simply creates a new list $L$, sets $L.head = \text{NIL}$, and returns $L$ in constant time. Assume lists are doubly linked. To realize a linked list as a heap, we imagine the usual array implementation of a binary heap, where the children of the $i$th element are $2i$ and $2i + 1$. **a.** To insert, we perform a linear scan to see where to insert an element such that the list remains sorted. This takes linear time. The first element in the list is the minimum element, and we can find it in constant time. $\text{EXTRACT-MIN}$ returns the first element of the list, then deletes it. Union performs a merge operation between the two sorted lists, interleaving their entries such that the resulting list is sorted. This takes time linear in the sum of the lengths of the two lists. **b.** To insert an element $x$ into the heap, begin linearly scanning the list until the first instance of an element $y$ which is strictly larger than $x$. If no such larger element exists, simply insert $x$ at the end of the list. If $y$ does exist, replace $y$ by $x$. This maintains the min-heap property because $x \le y$ and $y$ was smaller than each of its children, so $x$ must be as well. Moreover, $x$ is larger than its parent because $y$ was the first element in the list to exceed $x$. Now insert $y$, starting the scan at the node following $x$. Since we check each node at most once, the time is linear in the size of the list. To get the minimum element, return the key of the head of the list in constant time. To extract the minimum element, we first call $\text{MINIMUM}$. Next, we'll replace the key of the head of the list by the key of the second smallest element $y$ in the list. We'll take the key stored at the end of the list and use it to replace the key of $y$. Finally, we'll delete the last element of the list, and call $\text{MIN-HEAPIFY}$ on the list. To implement this with linked lists, we need to step through the list to get from element $i$ to element $2i$. We omit this detail from the code, but we'll consider it for runtime analysis. Since the value of $i$ on which $\text{MIN-HEAPIFY}$ is called is always increasing and we never need to step through elements multiple times, the runtime is linear in the length of the list. ```cpp EXTRACT-MIN(L) min = MINIMUM(L) linearly scan for the second smallest element, located in position i L.head.key = L[i] L[i].key = L[L.length].key DELETE(L, L[L.length]) MIN-HEAPIFY(L[i], i) return min ``` ```cpp MIN-HEAPIFY(L[i], i) l = L[2i].key r = L[2i + 1].key p = L[i].key smallest = i if L[2i] != NIL and l < p smallest = 2i if L[2i + 1] != NIL and r < L[smallest] smallest = 2i + 1 if smallest != i exchange L[i] with L[smallest] MIN-HEAPIFY(L[smallest], smallest) ``` Union is implemented below, where we assume $A$ and $B$ are the two list representations of heaps to be merged. The runtime is again linear in the lengths of the lists to be merged. ```cpp UNION(A, B) if A.head == NIL return B x = A.head while B.head != NIL if B.head.key ≤ x.key INSERT(B, x.key) x.key = B.head.key DELETE(B, B.head) x = x.next return A ``` **c.** Since the algorithms in part (b) didn't depend on the elements being distinct, we can use the same ones.
[ { "lang": "cpp", "code": "EXTRACT-MIN(L)\n min = MINIMIM(L)\n linearly scan for the second smallest element, located in position i\n L.head.key = L[i]\n L[i].key = L[L.length].key\n DELETE(L, L[L.length])\n MIN-HEAPIFY(L[i], i)\n return min" }, { "lang": "cpp", "code": "MIN-HEAPIFY(L[i], i)\n l = L[2i].key\n r = L[2i + 1].key\n p = L[i].key\n smallest = i\n if L[2i] != NIL and l < p\n smallest = 2i\n if L[2i + 1] != NIL and r < L[smallest]\n smallest = 2i + 1\n if smallest != i\n exchange L[i] with L[smallest]\n MIN-HEAPIFY(L[smallest], smallest])" }, { "lang": "cpp", "code": "UNION(A, B)\n if A.head == NIL\n return B\n x = A.head\n while B.head != NIL\n if B.head.key ≤ x.key\n INSERT(B, x.key)\n x.key = B.head.key\n DELETE(B, B.head)\n x = x.next\n return A" } ]
false
[]
10-10-3
10
10-3
10-3
docs/Chap10/Problems/10-3.md
Exercise 10.3-4 asked how we might maintain an $n$-element list compactly in the first $n$ positions of an array. We shall assume that all keys are distinct and that the compact list is also sorted, that is, $key[i] < key[next[i]]$ for all $i = 1, 2, \ldots, n$ such that $next[i] \ne \text{NIL}$. We will also assume that we have a variable $L$ that contains the index of the first element on the list. Under these assumptions, you will show that we can use the following randomized algorithm to search the list in $O(\sqrt n)$ expected time. ```cpp COMPACT-LIST-SEARCH(L, n, k) i = L while i != NIL and key[i] < k j = RANDOM(1, n) if key[i] < key[j] and key[j] ≤ k i = j if key[i] == k return i i = next[i] if i == NIL or key[i] > k return NIL else return i ``` If we ignore lines 3–7 of the procedure, we have an ordinary algorithm for searching a sorted linked list, in which index $i$ points to each position of the list in turn. The search terminates once the index $i$ "falls off" the end of the list or once $key[i] \ge k$. In the latter case, if $key[i] = k$, clearly we have found a key with the value $k$. If, however, $key[i] > k$, then we will never find a key with the value $k$, and so terminating the search was the right thing to do. Lines 3–7 attempt to skip ahead to a randomly chosen position $j$. Such a skip benefits us if $key[j]$ is larger than $key[i]$ and no larger than $k$; in such a case, $j$ marks a position in the list that $i$ would have to reach during an ordinary list search. Because the list is compact, we know that any choice of $j$ between $1$ and $n$ indexes some object in the list rather than a slot on the free list. Instead of analyzing the performance of $\text{COMPACT-LIST-SEARCH}$ directly, we shall analyze a related algorithm, $\text{COMPACT-LIST-SEARCH}'$, which executes two separate loops. This algorithm takes an additional parameter $t$ which determines an upper bound on the number of iterations of the first loop. ```cpp COMPACT-LIST-SEARCH'(L, n, k, t) i = L for q = 1 to t j = RANDOM(1, n) if key[i] < key[j] and key[j] ≤ k i = j if key[i] == k return i while i != NIL and key[i] < k i = next[i] if i == NIL or key[i] > k return NIL else return i ``` To compare the execution of the algorithms $\text{COMPACT-LIST-SEARCH}(L, n, k)$ and $\text{COMPACT-LIST-SEARCH}'(L, n, k, t)$, assume that the sequence of integers returned by the calls of $\text{RANDOM}(1, n)$ is the same for both algorithms. **a.** Suppose that $\text{COMPACT-LIST-SEARCH}(L, n, k)$ takes $t$ iterations of the **while** loop of lines 2–8. Argue that $\text{COMPACT-LIST-SEARCH}'(L, n, k, t)$ returns the same answer and that the total number of iterations of both the **for** and **while** loops within $\text{COMPACT-LIST-SEARCH}'$ is at least $t$. In the call $\text{COMPACT-LIST-SEARCH}'(L, n, k, t)$, let $X_t$ be the random variable that describes the distance in the linked list (that is, through the chain of $next$ pointers) from position $i$ to the desired key $k$ after $t$ iterations of the **for** loop of lines 2–7 have occurred. **b.** Argue that the expected running time of $\text{COMPACT-LIST-SEARCH}'(L, n, k, t)$ is $O(t + \text E[X_t])$. **c.** Show that $\text E[X_t] \le \sum_{r = 1}^n (1 - r / n)^t$. ($\textit{Hint:}$ Use equation $\text{(C.25)}$.) **d.** Show that $\sum_{r = 0}^{n - 1} r^t \le n^{t + 1} / (t + 1)$. **e.** Prove that $\text E[X_t] \le n / (t + 1)$. **f.** Show that $\text{COMPACT-LIST-SEARCH}'(L, n, k, t)$ runs in $O(t + n / t)$ expected time. 
**g.** Conclude that $\text{COMPACT-LIST-SEARCH}$ runs in $O(\sqrt n)$ expected time. **h.** Why do we assume that all keys are distinct in $\text{COMPACT-LIST-SEARCH}$? Argue that random skips do not necessarily help asymptotically when the list contains repeated key values.
**a.** Each iteration of the **while** loop of lines 2–8 in the original algorithm consists of a possible random jump (lines 3–6) followed by one normal step through the linked list. Since both algorithms see the same sequence of values from $\text{RANDOM}(1, n)$, if the original algorithm takes $t$ iterations, then $\text{COMPACT-LIST-SEARCH}'(L, n, k, t)$ performs the same random jumps during its $t$ **for**-loop iterations and then finishes the remaining normal steps in its **while** loop, so it reaches the same position and returns the same answer. Together, its **for** and **while** loops perform at least $t$ iterations.

**b.** The for loop on lines 2–7 will get run exactly $t$ times, each of which is constant runtime. After that, the while loop on lines 8–9 will be run exactly $X_t$ times. So, the total runtime is $O(t + \text E[X_t])$.

**c.** Using equation $\text{(C.25)}$, we have that $\text E[X_t] = \sum_{i = 1}^\infty \Pr\\{X_t \ge i\\}$. So, we need to show that $\Pr\\{X_t \ge i\\} \le (1 - i / n)^t$. For $X_t$ to be at least $i$, each of the $t$ random choices must land on an element that is either at least $i$ positions before the desired element or past it; otherwise, from that iteration on, the remaining distance to the desired element would be less than $i$. There are $n - i$ such elements out of the total $n$ elements that we were picking from, so a single choice lands among them with probability $(n - i) / n = 1 - i / n$. Since the selections are independent, the probability that all $t$ of them do is $(1 - i / n)^t$, as desired. Lastly, we can note that since the linked list has length $n$, the probability that $X_t$ is greater than $n$ is equal to zero.

**d.** Since $t > 0$, the function $f(x) = x^t$ is increasing, so $\lfloor x \rfloor^t \le f(x)$. So,

$$\sum_{r = 0}^{n - 1} r^t = \int_0^n \lfloor r \rfloor^t dr \le \int_0^n f(r)dr = \frac{n^{t + 1}}{t + 1}.$$

**e.**

$$
\begin{aligned}
\text E[X_t] & \le \sum_{r = 1}^n (1 - r / n)^t & \text{from part (c)} \\\\
& = \sum_{r = 1}^n \frac{(n - r)^t}{n^t} \\\\
& = \frac{1}{n^t} \sum_{r = 1}^n (n - r)^t,
\end{aligned}
$$

and

$$
\begin{aligned}
\sum_{r = 1}^n (n - r)^t & = (n - 1)^t + (n - 2)^t + \cdots + 1^t + 0^t \\\\
& = \sum_{r = 0}^{n - 1} r^t.
\end{aligned}
$$

So,

$$
\begin{aligned}
\text E[X_t] & \le \frac{1}{n^t} \sum_{r = 0}^{n - 1} r^t \\\\
& \le \frac{1}{n^t} \cdot \frac{n^{t + 1}}{t + 1} & \text{from part (d)} \\\\
& = \frac{n}{t + 1}.
\end{aligned}
$$

**f.** We just put together parts (b) and (e) to get that it runs in time $O(t + n / (t + 1))$. But this is the same as $O(t + n / t)$.

**g.** By part (a), whatever number of iterations $t$ the original $\text{COMPACT-LIST-SEARCH}$ takes, $\text{COMPACT-LIST-SEARCH}'(L, n, k, t)$ returns the same answer using at least as many loop iterations, so the expected running time of $\text{COMPACT-LIST-SEARCH}$ is bounded by that of $\text{COMPACT-LIST-SEARCH}'$. Choosing $t = \sqrt n$ makes the latter $O(\sqrt n + n / \sqrt n) = O(\sqrt n)$, so the original list-search algorithm runs in $O(\sqrt n)$ expected time as well.

**h.** If the keys are not distinct, we may randomly select an element that is further along the list than our current position, yet fail to jump to it because it has the same key as the element we are currently at (the test $key[i] < key[j]$ fails). The analysis then breaks when we try to bound the probability that $X_t \ge i$, so random skips do not necessarily help asymptotically.
[ { "lang": "cpp", "code": "> COMPACT-LIST-SEARCH(L, n, k)\n> i = L\n> while i != NIL and key[i] < k\n> j = RANDOM(1, n)\n> if key[i] < key[j] and key[j] ≤ k\n> i = j\n> if key[i] == k\n> return i\n> i = next[i]\n> if i == NIL or key[i] > k\n> return NIL\n> else return i\n>" }, { "lang": "cpp", "code": "> COMPACT-LIST-SEARCH'(L, n, k, t)\n> i = L\n> for q = 1 to t\n> j = RANDOM(1, n)\n> if key[i] < key[j] and key[j] ≤ k\n> i = j\n> if key[i] == k\n> return i\n> while i != NIL and key[i] < k\n> i = next[i]\n> if i == NIL or key[i] > k\n> return NIL\n> else return i\n>" } ]
false
[]
11-11.1-1
11
11.1
11.1-1
docs/Chap11/11.1.md
Suppose that a dynamic set $S$ is represented by a direct-address table $T$ of length $m$. Describe a procedure that finds the maximum element of $S$. What is the worst-case performance of your procedure?
As the dynamic set $S$ is represented by the direct-address table $T$, for each key $k$ in $S$, slot $k$ in $T$ points to its element. If there is no element with key $k$ in $S$, then $T[k] = \text{NIL}$. Using this property, we can find the maximum element of $S$ by traversing down from the highest slot to seek the first non-$\text{NIL}$ one.

```cpp
MAXIMUM(S)
    return TABLE-MAXIMUM(T, m - 1)
```

```cpp
TABLE-MAXIMUM(T, l)
    if l < 0
        return NIL
    else if DIRECT-ADDRESS-SEARCH(T, l) != NIL
        return l
    else return TABLE-MAXIMUM(T, l - 1)
```

The $\text{TABLE-MAXIMUM}$ procedure steps down one slot at a time, approaching the answer linearly. In the worst case, where $S$ is empty, $\text{TABLE-MAXIMUM}$ examines all $m$ slots. Therefore, the worst-case performance of $\text{MAXIMUM}$ is $O(m)$, where $m$ is the length of the direct-address table $T$.
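As a sanity check, here is a minimal runnable C++ sketch of the same scan (the names `DirectTable` and `table_maximum` are ours, and the table is modeled as an array of optional values; an illustration, not the book's pseudocode):

```cpp
#include <iostream>
#include <optional>
#include <vector>

// Direct-address table: slot k holds the element with key k, or nothing.
using DirectTable = std::vector<std::optional<int>>;

// Scan downward from the highest slot; the first occupied slot is the maximum.
// Worst case (S empty): examines all m slots, hence O(m).
std::optional<int> table_maximum(const DirectTable& T) {
    for (int k = static_cast<int>(T.size()) - 1; k >= 0; --k)
        if (T[k].has_value())
            return T[k];
    return std::nullopt;  // S is empty
}

int main() {
    DirectTable T(10);
    T[2] = 2; T[5] = 5; T[7] = 7;
    std::cout << table_maximum(T).value() << '\n';  // prints 7
}
```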
[ { "lang": "cpp", "code": "MAXIMUM(S)\n return TABLE-MAXIMUM(T, m - 1)" }, { "lang": "cpp", "code": "TABLE-MAXIMUM(T, l)\n if l < 0\n return NIL\n else if DIRECT-ADDRESS-SEARCH(T, l) != NIL\n return l\n else return TABLE-MAXIMUM(T, l - 1)" } ]
false
[]
11-11.1-2
11
11.1
11.1-2
docs/Chap11/11.1.md
A **_bit vector_** is simply an array of bits ($0$s and $1$s). A bit vector of length $m$ takes much less space than an array of $m$ pointers. Describe how to use a bit vector to represent a dynamic set of distinct elements with no satellite data. Dictionary operations should run in $O(1)$ time.
Using the bit vector data structure, we can represent keys less than $m$ by a string of $m$ bits, denoted by $V[0..m - 1]$, in which each position occupied by the bit $1$ corresponds to a key in the set $S$. If the set contains no element with key $k$, then $V[k] = 0$. For instance, we can store the set $\\{2, 4, 6, 10, 16\\}$ in a bit vector of length $20$ ($0$-indexed, ordered from left to right):

$$00101010001000001000$$

```cpp
BITMAP-SEARCH(V, k)
    if V[k] != 0
        return k
    else return NIL
```

```cpp
BITMAP-INSERT(V, x)
    V[x] = 1
```

```cpp
BITMAP-DELETE(V, x)
    V[x] = 0
```

Each of these operations takes only $O(1)$ time.
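A minimal runnable C++ sketch of the bit-vector dictionary (the `BitmapSet` name and the `std::vector<bool>` representation are our implementation choices):

```cpp
#include <cassert>
#include <vector>

// Bit-vector dictionary for distinct keys in {0, ..., m - 1}; each
// dictionary operation touches a single bit, hence O(1) time.
struct BitmapSet {
    std::vector<bool> v;                        // packed: ~1 bit per key
    explicit BitmapSet(int m) : v(m, false) {}
    void insert(int k)       { v[k] = true;  }  // BITMAP-INSERT
    void remove(int k)       { v[k] = false; }  // BITMAP-DELETE
    bool search(int k) const { return v[k]; }   // BITMAP-SEARCH
};

int main() {
    BitmapSet s(20);
    for (int k : {2, 4, 6, 10, 16}) s.insert(k);   // the example set above
    assert(s.search(10) && !s.search(11));
    s.remove(10);
    assert(!s.search(10));
}
```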
[ { "lang": "cpp", "code": "BITMAP-SEARCH(V, k)\n if V[k] != 0\n return k\n else return NIL" }, { "lang": "cpp", "code": "BITMAP-INSERT(V, x)\n V[x] = 1" }, { "lang": "cpp", "code": "BITMAP-DELETE(V, x)\n V[x] = 0" } ]
false
[]
11-11.1-3
11
11.1
11.1-3
docs/Chap11/11.1.md
Suggest how to implement a direct-address table in which the keys of stored elements do not need to be distinct and the elements can have satellite data. All three dictionary operations ($\text{INSERT}$, $\text{DELETE}$, and $\text{SEARCH}$) should run in $O(1)$ time. (Don't forget that $\text{DELETE}$ takes as an argument a pointer to an object to be deleted, not a key.)
Assuming that fetching an element should return the satellite data of all the stored elements, we can have each key map to a doubly linked list. - $\text{INSERT}$: appends the element to the list in constant time - $\text{DELETE}$: removes the element from the linked list in constant time (the element contains pointers to the previous and next element) - $\text{SEARCH}$: returns the first element, which is a node in a linked list, in constant time
[]
false
[]
11-11.1-4
11
11.1
11.1-4 $\star$
docs/Chap11/11.1.md
We wish to implement a dictionary by using direct addressing on a _huge_ array. At the start, the array entries may contain garbage, and initializing the entire array is impractical because of its size. Describe a scheme for implementing a direct-address dictionary on a huge array. Each stored object should use $O(1)$ space; the operations $\text{SEARCH}$, $\text{INSERT}$, and $\text{DELETE}$ should take $O(1)$ time each; and initializing the data structure should take $O(1)$ time. ($\textit{Hint:}$ Use an additional array, treated somewhat like a stack whose size is the number of keys actually stored in the dictionary, to help determine whether a given entry in the huge array is valid or not.)
The additional data structure will be a stack $S$. Initially, set $S$ to be empty, and do nothing to initialize the huge array. Each object stored in the huge array will have two parts: the key value, and a pointer to an element of $S$, which contains a pointer back to the object in the huge array. - To insert $x$, push an element $y$ to the stack which contains a pointer to position $x$ in the huge array. Update position $A[x]$ in the huge array $A$ to contain a pointer to $y$ in $S$. - To search for $x$, go to position $x$ of $A$ and go to the location stored there. If that location is an element of $S$ which contains a pointer to $A[x]$, then we know $x$ is in $A$. Otherwise, $x \notin A$. - To delete $x$, invalidate the element of $S$ which is pointed to by $A[x]$. Because there may be "holes" in $S$ now, we need to pop an item from $S$, move it to the position of the "hole", and update the pointer in $A$ accordingly. Each of these takes $O(1)$ time and there are at most as many elements in $S$ as there are valid elements in $A$.
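Here is a C++ sketch of this scheme (all names are ours). One caveat: `std::vector` zero-initializes its storage, so this model does not literally skip initialization; it only demonstrates the certificate test and the $O(1)$ operations:

```cpp
#include <cassert>
#include <vector>

// Direct-address dictionary on a "huge" array A plus a stack S of
// certificates. Key k is present iff A[k] indexes a live stack entry
// that points back to k; garbage in A can never pass this test.
struct HugeDict {
    std::vector<size_t> A;  // huge array; conceptually uninitialized
    std::vector<size_t> S;  // stack of the keys actually stored

    explicit HugeDict(size_t universe) : A(universe) {}

    bool search(size_t k) const {
        return A[k] < S.size() && S[A[k]] == k;
    }
    void insert(size_t k) {
        if (search(k)) return;
        A[k] = S.size();
        S.push_back(k);          // push a certificate pointing back to A[k]
    }
    void remove(size_t k) {
        if (!search(k)) return;
        size_t hole = A[k];
        S[hole] = S.back();      // move the last certificate into the hole
        A[S[hole]] = hole;       // ... and repair its back-pointer
        S.pop_back();
    }
};

int main() {
    HugeDict d(1000000);
    d.insert(42); d.insert(7);
    assert(d.search(42) && !d.search(43));
    d.remove(42);
    assert(!d.search(42) && d.search(7));
}
```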
[]
false
[]
11-11.2-1
11
11.2
11.2-1
docs/Chap11/11.2.md
Suppose we use a hash function $h$ to hash $n$ distinct keys into an array $T$ of length $m$. Assuming simple uniform hashing, what is the expected number of collisions? More precisely, what is the expected cardinality of $\\{\\{k, l\\}: k \ne l \text{ and } h(k) = h(l)\\}$?
Under the assumption of simple uniform hashing, we will use linearity of expectation to compute this.

Suppose that all the keys are totally ordered $\\{k_1, \dots, k_n\\}$. Let $X_i$ be the number of keys $k_j$ with $j > i$ such that $h(k_j) = h(k_i)$, that is, the number of times that key $k_i$ collides with keys hashed after it. By linearity of expectation,

$$\text E[X_i] = \sum_{j > i} \Pr\\{h(k_j) = h(k_i)\\} = \sum_{j > i} 1 / m = \frac{n - i}{m}.$$

Every colliding pair $\\{k, l\\}$ is counted exactly once, at the earlier of its two keys, so the expected number of collisions is

$$\sum_{i = 1}^n \frac{n - i}{m} = \frac{n^2 - n(n + 1) / 2}{m} = \frac{n^2 - n}{2m}.$$
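A quick Monte Carlo check of this formula in C++ (the parameters are arbitrary; with $n = 50$ and $m = 100$ the prediction is $n(n - 1) / 2m = 12.25$):

```cpp
#include <iostream>
#include <random>
#include <vector>

// Empirically estimate E[#collisions] under simulated simple uniform
// hashing and compare it with n(n - 1) / (2m).
int main() {
    const int n = 50, m = 100, trials = 10000;
    std::mt19937 gen(1);
    std::uniform_int_distribution<int> slot(0, m - 1);

    double total = 0;
    for (int t = 0; t < trials; ++t) {
        std::vector<int> count(m, 0);
        for (int i = 0; i < n; ++i)
            ++count[slot(gen)];
        for (int c : count)
            total += c * (c - 1) / 2;   // colliding pairs within one slot
    }
    std::cout << "empirical: " << total / trials
              << ", predicted: " << double(n) * (n - 1) / (2 * m) << '\n';
}
```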
[]
false
[]
11-11.2-2
11
11.2
11.2-2
docs/Chap11/11.2.md
Demonstrate what happens when we insert the keys $5, 28, 19, 15, 20, 33, 12, 17, 10$ into a hash table with collisions resolved by chaining. Let the table have $9$ slots, and let the hash function be $h(k) = k \mod 9$.
Let us number our slots $0, 1, \dots, 8$. Then our resulting hash table will look like following: $$ \begin{array}{c|l} h(k) & \text{keys} \\\\ \hline 0 \mod 9 & \\\\ 1 \mod 9 & 10 \to 19 \to 28 \\\\ 2 \mod 9 & 20 \\\\ 3 \mod 9 & 12 \\\\ 4 \mod 9 & \\\\ 5 \mod 9 & 5 \\\\ 6 \mod 9 & 33 \to 15 \\\\ 7 \mod 9 & \\\\ 8 \mod 9 & 17 \end{array} $$
[]
false
[]
11-11.2-3
11
11.2
11.2-3
docs/Chap11/11.2.md
Professor Marley hypothesizes that he can obtain substantial performance gains by modifying the chaining scheme to keep each list in sorted order. How does the professor's modification affect the running time for successful searches, unsuccessful searches, insertions, and deletions?
- Successful searches: no difference, $\Theta(1 + \alpha)$. - Unsuccessful searches: faster but still $\Theta(1 + \alpha)$. - Insertions: same as successful searches, $\Theta(1 + \alpha)$. - Deletions: same as before if we use doubly linked lists, $\Theta(1)$.
[]
false
[]
11-11.2-4
11
11.2
11.2-4
docs/Chap11/11.2.md
Suggest how to allocate and deallocate storage for elements within the hash table itself by linking all unused slots into a free list. Assume that one slot can store a flag and either one element plus a pointer or two pointers. All dictionary and free-list operations should run in $O(1)$ expected time. Does the free list need to be doubly linked, or does a singly linked free list suffice?
The flag in each slot of the hash table will be $1$ if the slot contains an element, and $0$ if it is free. The free list must be doubly linked.

- Search is unmodified, so it has expected time $O(1)$.
- To insert an element $x$, first check if $T[h(x.key)]$ is free. If it is, remove $T[h(x.key)]$ from the free list, change its flag to $1$, and store $x$ there. If it wasn't free to begin with, simply insert $x.key$ at the start of the list stored there.
- To delete, first check if $x.prev$ and $x.next$ are $\text{NIL}$. If they are, then the list will be empty upon deletion of $x$, so insert $T[h(x.key)]$ into the free list, update the flag of $T[h(x.key)]$ to $0$, and delete $x$ from the list it's stored in. Since deletion of an element from the middle of a singly linked list isn't $O(1)$, we must use a doubly linked free list.
- All other operations are $O(1)$.
[]
false
[]
11-11.2-5
11
11.2
11.2-5
docs/Chap11/11.2.md
Suppose that we are storing a set of $n$ keys into a hash table of size $m$. Show that if the keys are drawn from a universe $U$ with $|U| > nm$, then $U$ has a subset of size $n$ consisting of keys that all hash to the same slot, so that the worst-case searching time for hashing with chaining is $\Theta(n)$.
Suppose that each of some $m - 1$ of the slots has at most $n - 1$ keys of $U$ hashing to it. Then the remaining slot must have at least

$$|U| - (m - 1)(n - 1) > nm - (m - 1)(n - 1) = n + m - 1 \ge n$$

keys hashing to it; thus $U$ has a subset of size $n$ whose keys all hash to the same slot, and searching a chain of $n$ such keys takes $\Theta(n)$ time in the worst case.
[]
false
[]
11-11.2-6
11
11.2
11.2-6
docs/Chap11/11.2.md
Suppose we have stored $n$ keys in a hash table of size $m$, with collisions resolved by chaining, and that we know the length of each chain, including the length $L$ of the longest chain. Describe a procedure that selects a key uniformly at random from among the keys in the hash table and returns it in expected time $O(L \cdot (1 + 1 / \alpha))$.
Choose one of the $m$ spots in the hash table at random. Let $n_k$ denote the number of elements stored at $T[k]$. Next pick a number $x$ from $1$ to $L$ uniformly at random. If $x \le n_k$, then return the $x$th element on the list. Otherwise, repeat this process. Any element in the hash table will be selected with probability $1 / mL$, so we return any key with equal probability. Let $X$ be the random variable measuring the time spent until we stop, and let $p$ be the probability that we return on a given attempt. Then $E[X] = p(1 + \alpha) + (1 − p)(1 + E[X])$, since we'd expect to take $1 + \alpha$ steps to reach an element on the list, and since we know how many elements are on each list, a failed attempt is detected right away. Solving gives $E[X] = \alpha + 1 / p$. The probability that a given attempt returns is $p = n / mL = \alpha / L$. Therefore, we have

$$
\begin{aligned}
E[X] & = \alpha + L / \alpha \\\\
& = L(\alpha/L + 1/\alpha) \\\\
& = O(L(1 + 1/\alpha))
\end{aligned}
$$

since $\alpha \le L$.
[]
false
[]
11-11.3-1
11
11.3
11.3-1
docs/Chap11/11.3.md
Suppose we wish to search a linked list of length $n$, where each element contains a key $k$ along with a hash value $h(k)$. Each key is a long character string. How might we take advantage of the hash values when searching the list for an element with a given key?
If every element also contains the hash of its long character string, then when searching the list for an element with a given key, we first compare each node's stored hash value with the hash of the desired key, and move on to the next node whenever the two disagree. Since comparing two hash values takes constant time while comparing two long strings takes time proportional to their length, this speeds up the search by a factor proportional to the length of the strings: we perform a full string comparison only when the hash values agree.
[]
false
[]
11-11.3-2
11
11.3
11.3-2
docs/Chap11/11.3.md
Suppose that we hash a string of $r$ characters into $m$ slots by treating it as a radix-128 number and then using the division method. We can easily represent the number $m$ as a 32-bit computer word, but the string of $r$ characters, treated as a radix-128 number, takes many words. How can we apply the division method to compute the hash value of the character string without using more than a constant number of words of storage outside the string itself?
```cpp
    sum = 0
    for i = 1 to r
        sum = (sum * 128 + s[i]) % m
```

Use `sum` as the hash value. By Horner's rule, this computes the radix-128 value of the string modulo $m$; reducing modulo $m$ after each character keeps `sum` within a single word.
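A runnable C++ version of the same loop (the function name and the `uint64_t` type are our choices; since $m$ fits in a 32-bit word, `sum * 128 + c` never overflows 64 bits):

```cpp
#include <cstdint>
#include <iostream>
#include <string>

// Division-method hash of a string viewed as a radix-128 number,
// computed by Horner's rule with a reduction mod m at every step,
// so only the single word `sum` is stored outside the string.
std::uint64_t radix128_hash(const std::string& s, std::uint64_t m) {
    std::uint64_t sum = 0;
    for (unsigned char c : s)
        sum = (sum * 128 + c) % m;   // sum < m <= 2^32 at all times
    return sum;
}

int main() {
    std::cout << radix128_hash("hash me", 701) << '\n';
}
```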
[ { "lang": "cpp", "code": " sum = 0\n for i = 1 to r\n sum = (sum * 128 + s[i]) % m" } ]
false
[]
11-11.3-3
11
11.3
11.3-3
docs/Chap11/11.3.md
Consider a version of the division method in which $h(k) = k \mod m$, where $m = 2^p - 1$ and $k$ is a character string interpreted in radix $2^p$. Show that if we can derive string $x$ from string $y$ by permuting its characters, then $x$ and $y$ hash to the same value. Give an example of an application in which this property would be undesirable in a hash function.
We will show that each string hashes to the sum of its digits $\mod 2^p − 1$, by induction on the length of the string.

- Base case: if the string is a single character, then the value of that character is the value of $k$, which is then taken $\mod m$.
- Inductive step: let $w = w_1w_2$ where $|w_1| \ge 1$ and $|w_2| = 1$. Since $2^p \equiv 1 \pmod{2^p - 1}$,

$$h(w) = (h(w_1) \cdot 2^p + h(w_2)) \mod (2^p − 1) = (h(w_1) + h(w_2)) \mod (2^p − 1).$$

Since $h(w_1)$ was the sum of all but the last digit $\mod m$, and we are adding the last digit $\mod m$, we have the desired conclusion.

Permuting the characters of a string does not change its digit sum, so if $x$ is a permutation of $y$, then $h(x) = h(y)$. This property is undesirable, for example, in a dictionary or spell-checker application, where anagrams such as "stop" and "pots" would always collide.
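A small C++ check of this permutation invariance, using $p = 7$ (radix $128$, modulus $2^7 - 1 = 127$); the function name is ours:

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// h(k) = k mod (2^p - 1), with the string read in radix 2^p. Since
// 2^p ≡ 1 (mod 2^p - 1), h reduces to the character sum mod 2^p - 1,
// so permuting the characters cannot change the hash value.
std::uint64_t radix_hash_mersenne(const std::string& s, int p) {
    const std::uint64_t m = (1ULL << p) - 1;
    std::uint64_t h = 0;
    for (unsigned char c : s)
        h = ((h << p) + c) % m;      // Horner step in radix 2^p
    return h;
}

int main() {
    // Anagrams always collide, which is what makes this hash undesirable
    // for, e.g., a dictionary of English words.
    assert(radix_hash_mersenne("stop", 7) == radix_hash_mersenne("pots", 7));
    assert(radix_hash_mersenne("stop", 7) == radix_hash_mersenne("tops", 7));
}
```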
[]
false
[]
11-11.3-4
11
11.3
11.3-4
docs/Chap11/11.3.md
Consider a hash table of size $m = 1000$ and a corresponding hash function $h(k) = \lfloor m (kA \mod 1) \rfloor$ for $A = (\sqrt 5 - 1) / 2$. Compute the locations to which the keys $61$, $62$, $63$, $64$, and $65$ are mapped.
- $h(61) = \lfloor 1000(61 \cdot \frac{\sqrt 5 - 1}{2} \mod 1) \rfloor = 700$. - $h(62) = \lfloor 1000(62 \cdot \frac{\sqrt 5 - 1}{2} \mod 1) \rfloor = 318$. - $h(63) = \lfloor 1000(63 \cdot \frac{\sqrt 5 - 1}{2} \mod 1) \rfloor = 936$. - $h(64) = \lfloor 1000(64 \cdot \frac{\sqrt 5 - 1}{2} \mod 1) \rfloor = 554$. - $h(65) = \lfloor 1000(65 \cdot \frac{\sqrt 5 - 1}{2} \mod 1) \rfloor = 172$.
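These values can be reproduced with a few lines of C++ (double precision is accurate enough for these five keys; an exact implementation would use fixed-point arithmetic instead of floating point):

```cpp
#include <cmath>
#include <iostream>

// Multiplication method: h(k) = floor(m * (k * A mod 1)), A = (sqrt(5) - 1) / 2.
int main() {
    const double A = (std::sqrt(5.0) - 1.0) / 2.0;
    const int m = 1000;
    for (int k = 61; k <= 65; ++k) {
        double frac = k * A - std::floor(k * A);            // k * A mod 1
        std::cout << "h(" << k << ") = "
                  << static_cast<int>(m * frac) << '\n';    // 700 318 936 554 172
    }
}
```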
[]
false
[]
11-11.3-5
11
11.3
11.3-5 $\star$
docs/Chap11/11.3.md
Define a family $\mathcal H$ of hash functions from a finite set $U$ to a finite set $B$ to be **_$\epsilon$-universal_** if for all pairs of distinct elements $k$ and $l$ in $U$, $$\Pr\\{h(k) = h(l)\\} \le \epsilon,$$ where the probability is over the choice of the hash function $h$ drawn at random from the family $\mathcal H$. Show that an $\epsilon$-universal family of hash functions must have $$\epsilon \ge \frac{1}{|B|} - \frac{1}{|U|}.$$
As a simplifying assumption, assume that $|B|$ divides $|U|$; it's just a bit messier if it doesn't divide evenly.

Suppose, for the sake of contradiction, that $\epsilon < \frac{1}{|B|} - \frac{1}{|U|}$. Then for every pair of distinct elements $k, \ell \in U$, the number $n_{k, \ell}$ of hash functions in $\mathcal H$ that collide on those two elements satisfies $n_{k, \ell} \le \epsilon|\mathcal H| < \frac{|\mathcal H|}{|B|} - \frac{|\mathcal H|}{|U|}$. Summing over all pairs of distinct elements of $U$, of which there are fewer than $|U|^2 / 2$, the total number of colliding (pair, hash function) combinations is less than $\frac{|\mathcal H||U|^2}{2|B|} - \frac{|\mathcal H||U|}{2}$.

On the other hand, any particular hash function partitions $U$ into $|B|$ buckets, and the number of colliding pairs is minimized when all buckets have the same size $|U| / |B|$, so each hash function has at least $|B|\binom{|U| / |B|}{2} = |B|\frac{|U|^2 - |U||B|}{2|B|^2} = \frac{|U|^2}{2|B|} - \frac{|U|}{2}$ colliding pairs. Summing over all hash functions, there are at least $|\mathcal H| \left(\frac{|U|^2}{2|B|} - \frac{|U|}{2}\right)$ colliding combinations in total. This meets or exceeds the strict upper bound just derived, a contradiction, and so we must have the desired restriction on $\epsilon$.
[]
false
[]
11-11.3-6
11
11.3
11.3-6 $\star$
docs/Chap11/11.3.md
Let $U$ be the set of $n$-tuples of values drawn from $\mathbb Z_p$, and let $B = \mathbb Z_p$, where $p$ is prime. Define the hash function $h_b: U \rightarrow B$ for $b \in \mathbb Z_p$ on an input $n$-tuple $\langle a_0, a_1, \ldots, a_{n - 1} \rangle$ from $U$ as $$h_b(\langle a_0, a_1, \ldots, a_{n - 1} \rangle) = \Bigg(\sum_{j = 0}^{n - 1} a_jb^j \Bigg) \mod p,$$ and let $\mathcal{H} = \\{h_b : b \in \mathbb Z_p\\}$. Argue that $\mathcal H$ is $((n - 1) / p)$-universal according to the definition of $\epsilon$-universal in Exercise 11.3-5. ($\textit{Hint:}$ See Exercise 31.4-4.)
Fix distinct $n$-tuples $x = \langle x_0, x_1, \ldots, x_{n - 1} \rangle$ and $y = \langle y_0, y_1, \ldots, y_{n - 1} \rangle$ in $U$. Then $h_b(x) = h_b(y)$ holds exactly when

$$\sum_{j = 0}^{n - 1} (x_j - y_j) b^j \equiv 0 \pmod p.$$

The left-hand side is a nonzero polynomial in $b$ of degree at most $n - 1$ over $\mathbb Z_p$, so by Exercise 31.4-4 it has at most $n - 1$ roots. Since $b$ is chosen uniformly from the $p$ elements of $\mathbb Z_p$,

$$\Pr\\{h_b(x) = h_b(y)\\} \le \frac{n - 1}{p},$$

and therefore $\mathcal H$ is $((n - 1) / p)$-universal.
[]
false
[]
11-11.4-1
11
11.4
11.4-1
docs/Chap11/11.4.md
Consider inserting the keys $10, 22, 31, 4, 15, 28, 17, 88, 59$ into a hash table of length $m = 11$ using open addressing with the auxiliary hash function $h'(k) = k$. Illustrate the result of inserting these keys using linear probing, using quadratic probing with $c_1 = 1$ and $c_2 = 3$, and using double hashing with $h_1(k) = k$ and $h_2(k) = 1 + (k \mod (m - 1))$.
We use $T_t$ to denote the state of the table after the $(t + 1)$st key has been inserted. Each insertion starts with $i = 0$; upon a collision, we increment $i$ (up to $i = m - 1 = 10$) until an empty slot is found.

- **Linear probing**:

    $$
    \begin{array}{r|ccccccccc}
    h(k, i) = (k + i) \mod 11 & T_0 & T_1 & T_2 & T_3 & T_4 & T_5 & T_6 & T_7 & T_8 \\\\
    \hline
     0 \mod 11 &    & 22 & 22 & 22 & 22 & 22 & 22 & 22 & 22 \\\\
     1 \mod 11 &    &    &    &    &    &    &    & 88 & 88 \\\\
     2 \mod 11 &    &    &    &    &    &    &    &    &    \\\\
     3 \mod 11 &    &    &    &    &    &    &    &    &    \\\\
     4 \mod 11 &    &    &    & 4  & 4  & 4  & 4  & 4  & 4  \\\\
     5 \mod 11 &    &    &    &    & 15 & 15 & 15 & 15 & 15 \\\\
     6 \mod 11 &    &    &    &    &    & 28 & 28 & 28 & 28 \\\\
     7 \mod 11 &    &    &    &    &    &    & 17 & 17 & 17 \\\\
     8 \mod 11 &    &    &    &    &    &    &    &    & 59 \\\\
     9 \mod 11 &    &    & 31 & 31 & 31 & 31 & 31 & 31 & 31 \\\\
    10 \mod 11 & 10 & 10 & 10 & 10 & 10 & 10 & 10 & 10 & 10
    \end{array}
    $$

- **Quadratic probing**, it will look identical until there is a collision on inserting the fifth element:

    $$
    \begin{array}{r|ccccccccc}
    h(k, i) = (k + i + 3i^2) \mod 11 & T_0 & T_1 & T_2 & T_3 & T_4 & T_5 & T_6 & T_7 & T_8 \\\\
    \hline
     0 \mod 11 &    & 22 & 22 & 22 & 22 & 22 & 22 & 22 & 22 \\\\
     1 \mod 11 &    &    &    &    &    &    &    &    &    \\\\
     2 \mod 11 &    &    &    &    &    &    &    & 88 & 88 \\\\
     3 \mod 11 &    &    &    &    &    &    & 17 & 17 & 17 \\\\
     4 \mod 11 &    &    &    & 4  & 4  & 4  & 4  & 4  & 4  \\\\
     5 \mod 11 &    &    &    &    &    &    &    &    &    \\\\
     6 \mod 11 &    &    &    &    &    & 28 & 28 & 28 & 28 \\\\
     7 \mod 11 &    &    &    &    &    &    &    &    & 59 \\\\
     8 \mod 11 &    &    &    &    & 15 & 15 & 15 & 15 & 15 \\\\
     9 \mod 11 &    &    & 31 & 31 & 31 & 31 & 31 & 31 & 31 \\\\
    10 \mod 11 & 10 & 10 & 10 & 10 & 10 & 10 & 10 & 10 & 10
    \end{array}
    $$

    Note that inserting $59$ takes three probes: slots $4$ (at $i = 0$) and $8$ (at $i = 1$) are already occupied, and the free slot $7$ is found at $i = 2$, since $(59 + 2 + 3 \cdot 2^2) \mod 11 = 73 \mod 11 = 7$.

- **Double hashing**:

    $$
    \begin{array}{r|ccccccccc}
    h(k, i) = (k + i(1 + k \mod 10)) \mod 11 & T_0 & T_1 & T_2 & T_3 & T_4 & T_5 & T_6 & T_7 & T_8 \\\\
    \hline
     0 \mod 11 &    & 22 & 22 & 22 & 22 & 22 & 22 & 22 & 22 \\\\
     1 \mod 11 &    &    &    &    &    &    &    &    &    \\\\
     2 \mod 11 &    &    &    &    &    &    &    &    & 59 \\\\
     3 \mod 11 &    &    &    &    &    &    & 17 & 17 & 17 \\\\
     4 \mod 11 &    &    &    & 4  & 4  & 4  & 4  & 4  & 4  \\\\
     5 \mod 11 &    &    &    &    & 15 & 15 & 15 & 15 & 15 \\\\
     6 \mod 11 &    &    &    &    &    & 28 & 28 & 28 & 28 \\\\
     7 \mod 11 &    &    &    &    &    &    &    & 88 & 88 \\\\
     8 \mod 11 &    &    &    &    &    &    &    &    &    \\\\
     9 \mod 11 &    &    & 31 & 31 & 31 & 31 & 31 & 31 & 31 \\\\
    10 \mod 11 & 10 & 10 & 10 & 10 & 10 & 10 & 10 & 10 & 10
    \end{array}
    $$
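All three tables can be verified mechanically. Here is a small C++ sketch (names are ours) that replays the nine insertions under each probe function and prints the final slot contents, with $-1$ marking an empty slot:

```cpp
#include <functional>
#include <iostream>
#include <vector>

// Insert the keys with the given probe function h(k, i) and print the table.
void run(const char* name, const std::function<int(int, int)>& h) {
    const int m = 11;
    std::vector<int> T(m, -1);
    for (int k : {10, 22, 31, 4, 15, 28, 17, 88, 59})
        for (int i = 0; i < m; ++i) {
            int j = h(k, i) % m;
            if (T[j] == -1) { T[j] = k; break; }
        }
    std::cout << name << ":";
    for (int j = 0; j < m; ++j) std::cout << ' ' << T[j];
    std::cout << '\n';
}

int main() {
    run("linear   ", [](int k, int i) { return k + i; });
    run("quadratic", [](int k, int i) { return k + i + 3 * i * i; });
    run("double   ", [](int k, int i) { return k + i * (1 + k % 10); });
}
```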
[]
false
[]
11-11.4-2
11
11.4
11.4-2
docs/Chap11/11.4.md
Write pseudocode for $\text{HASH-DELETE}$ as outlined in the text, and modify $\text{HASH-INSERT}$ to handle the special value $\text{DELETED}$.
```cpp
HASH-DELETE(T, k)
    i = 0
    repeat
        j = h(k, i)
        if T[j] == k
            T[j] = DELETED
            return j
        else i = i + 1
    until T[j] == NIL or i == m
    error "element not found"
```

With $\text{HASH-DELETE}$ implemented this way, $\text{HASH-INSERT}$ needs to be modified to treat $\text{DELETED}$ slots as empty ones.

```cpp
HASH-INSERT(T, k)
    i = 0
    repeat
        j = h(k, i)
        if T[j] == NIL or T[j] == DELETED
            T[j] = k
            return j
        else i = i + 1
    until i == m
    error "hash table overflow"
```
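A runnable C++ sketch of open addressing with the $\text{DELETED}$ sentinel, using linear probing for concreteness (the struct name and the negative sentinel values are our choices; keys are assumed non-negative):

```cpp
#include <cassert>
#include <vector>

struct OpenTable {
    enum { NIL = -1, DELETED = -2 };   // sentinels; real keys are >= 0
    std::vector<int> T;
    explicit OpenTable(int m) : T(m, NIL) {}
    int h(int k, int i) const { return (k + i) % (int)T.size(); }

    int insert(int k) {                       // may reuse DELETED slots
        for (int i = 0; i < (int)T.size(); ++i) {
            int j = h(k, i);
            if (T[j] == NIL || T[j] == DELETED) { T[j] = k; return j; }
        }
        return -1;                            // hash table overflow
    }
    int search(int k) const {
        for (int i = 0; i < (int)T.size(); ++i) {
            int j = h(k, i);
            if (T[j] == k) return j;
            if (T[j] == NIL) return -1;       // only NIL stops the probe
        }
        return -1;
    }
    bool remove(int k) {
        int j = search(k);
        if (j == -1) return false;
        T[j] = DELETED;                       // tombstone, not NIL
        return true;
    }
};

int main() {
    OpenTable t(11);
    t.insert(4);
    t.insert(15);                // collides with 4, lands in slot 5
    t.remove(4);
    assert(t.search(15) == 5);   // search must step over the tombstone
}
```

Had `remove` stored $\text{NIL}$ instead of $\text{DELETED}$, the final search would stop at slot $4$ and miss key $15$.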
[ { "lang": "cpp", "code": "HASH-DELETE(T, k)\n i = 0\n repeat\n j = h(k, i)\n if T[j] == k\n T[j] = DELETE\n return j\n else i = i + 1\n until T[j] == NIL or i == m\n error \"element not exist\"" }, { "lang": "cpp", "code": "HASH-INSERT(T, k)\n i = 0\n repeat\n j = h(k, i)\n if T[j] == NIL or T[j] == DELETE\n T[j] = k\n return j\n else i = i + 1\n until i == m\n error \"hash table overflow\"" } ]
false
[]
11-11.4-3
11
11.4
11.4-3
docs/Chap11/11.4.md
Consider an open-address hash table with uniform hashing. Give upper bounds on the expected number of probes in an unsuccessful search and on the expected number of probes in a successful search when the load factor is $3 / 4$ and when it is $7 / 8$.
- $\alpha = 3 / 4$, - unsuccessful: $\frac{1}{1 - \frac{3}{4}} = 4$ probes, - successful: $\frac{1}{\frac{3}{4}} \ln\frac{1}{1-\frac{3}{4}} \approx 1.848$ probes. - $\alpha = 7 / 8$, - unsuccessful: $\frac{1}{1 - \frac{7}{8}} = 8$ probes, - successful: $\frac{1}{\frac{7}{8}} \ln\frac{1}{1 - \frac{7}{8}} \approx 2.377$ probes.
[]
false
[]
11-11.4-4
11
11.4
11.4-4 $\star$
docs/Chap11/11.4.md
Suppose that we use double hashing to resolve collisions—that is, we use the hash function $h(k, i) = (h_1(k) + ih_2(k)) \mod m$. Show that if $m$ and $h_2(k)$ have greatest common divisor $d \ge 1$ for some key $k$, then an unsuccessful search for key $k$ examines $(1/d)$th of the hash table before returning to slot $h_1(k)$. Thus, when $d = 1$, so that $m$ and $h_2(k)$ are relatively prime, the search may examine the entire hash table. ($\textit{Hint:}$ See Chapter 31.)
Let $d = \gcd(m, h_2(k))$. Since $d \mid h_2(k)$, the number $l = m \cdot h_2(k) / d = (h_2(k) / d) \cdot m$ is a multiple of $m$, and since $d \mid m$ it is also $(m / d) \cdot h_2(k)$, a multiple of $h_2(k)$; indeed $l = \text{lcm}(m, h_2(k))$. Hence

$$(i + m / d) \cdot h_2(k) \mod m = (ih_2(k) + l) \mod m = ih_2(k) \mod m,$$

so the probe offsets $ih_2(k) \mod m$ are periodic with period $m / d$. An unsuccessful search therefore examines only the $m / d = (1 / d) \cdot m$ distinct slots $(h_1(k) + ih_2(k)) \mod m$ for $i = 0, 1, \ldots, m / d - 1$ before returning to slot $h_1(k)$.
[]
false
[]
11-11.4-5
11
11.4
11.4-5 $\star$
docs/Chap11/11.4.md
Consider an open-address hash table with a load factor $\alpha$. Find the nonzero value $\alpha$ for which the expected number of probes in an unsuccessful search equals twice the expected number of probes in a successful search. Use the upper bounds given by Theorems 11.6 and 11.8 for these expected numbers of probes.
$$ \begin{aligned} \frac{1}{1 - \alpha} & = 2 \cdot \frac{1}{\alpha} \ln\frac{1}{1 - \alpha} \\\\ \alpha & = 0.71533. \end{aligned} $$
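The root can be found numerically, for example by bisection in C++ (the bracketing interval is chosen by checking the sign of $f$ at both endpoints):

```cpp
#include <cmath>
#include <iostream>

// Solve 1/(1 - a) = (2/a) * ln(1/(1 - a)) for a in (0, 1) by bisection.
int main() {
    auto f = [](double a) {
        return 1.0 / (1.0 - a) - (2.0 / a) * std::log(1.0 / (1.0 - a));
    };
    double lo = 0.5, hi = 0.99;      // f(0.5) < 0 < f(0.99)
    for (int it = 0; it < 60; ++it) {
        double mid = 0.5 * (lo + hi);
        (f(mid) < 0 ? lo : hi) = mid;
    }
    std::cout << lo << '\n';         // prints ~0.71533
}
```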
[]
false
[]
11-11.5-1
11
11.5
11.5-1 $\star$
docs/Chap11/11.5.md
Suppose that we insert $n$ keys into a hash table of size $m$ using open addressing and uniform hashing. Let $p(n, m)$ be the probability that no collisions occur. Show that $p(n, m) \le e^{-n(n - 1) / 2m}$. ($\textit{Hint:}$ See equation $\text{(3.12)}$.) Argue that when $n$ exceeds $\sqrt m$, the probability of avoiding collisions goes rapidly to zero.
$$
\begin{aligned}
p(n, m) & = \frac{m}{m} \cdot \frac{m - 1}{m} \cdots \frac{m - n + 1}{m} \\\\
        & = \frac{m \cdot (m - 1) \cdots (m - n + 1)}{m^n}.
\end{aligned}
$$

$$
\begin{aligned}
(m - i) \cdot (m - n + i) & = (m - \frac{n}{2} + \frac{n}{2} - i) \cdot (m - \frac{n}{2} - \frac{n}{2} + i) \\\\
& = (m - \frac{n}{2})^2 - (i - \frac{n}{2})^2 \\\\
& \le (m - \frac{n}{2})^2.
\end{aligned}
$$

$$
\begin{aligned}
p(n, m) & \le \frac{m \cdot (m - \frac{n}{2})^{n - 1}}{m^n} \\\\
& = (1 - \frac{n}{2m}) ^ {n - 1}.
\end{aligned}
$$

Based on equation $\text{(3.12)}$, $e^x \ge 1 + x$,

$$
\begin{aligned}
p(n, m) & \le (e^{-n / 2m})^{n - 1} \\\\
& = e^{-n(n - 1) / 2m}.
\end{aligned}
$$

When $n$ exceeds $\sqrt m$, the exponent $n(n - 1) / 2m$ is $\Omega(1)$ and grows quadratically in $n$, so the upper bound $e^{-n(n - 1) / 2m}$, and with it the probability of avoiding collisions, goes rapidly to zero.
[]
false
[]
11-11-1
11
11-1
11-1
docs/Chap11/Problems/11-1.md
Suppose that we use an open-addressed hash table of size $m$ to store $n \le m / 2$ items. **a.** Assuming uniform hashing, show that for $i = 1, 2, \ldots, n$, the probability is at most $2^{-k}$ that the $i$th insertion requires strictly more than $k$ probes. **b.** Show that for $i = 1, 2, \ldots, n$, the probability is $O(1 / n^2)$ that the $i$th insertion requires more than $2\lg n$ probes. Let the random variable $X_i$ denote the number of probes required by the $i$th insertion. You have shown in part (b) that $\Pr\\{X_i > 2\lg n\\} = O(1 / n^2)$. Let the random variable $X = \max_{1 \le i \le n} X_i$ denote the maximum number of probes required by any of the $n$ insertions. **c.** Show that $\Pr\\{X > 2\lg n\\} = O(1 / n)$. **d.** Show that the expected length $\text E[X]$ of the longest probe sequence is $O(\lg n)$.
(Removed)
[]
false
[]
11-11-2
11
11-2
11-2
docs/Chap11/Problems/11-2.md
Suppose that we have a hash table with $n$ slots, with collisions resolved by chaining, and suppose that $n$ keys are inserted into the table. Each key is equally likely to be hashed to each slot. Let $M$ be the maximum number of keys in any slot after all the keys have been inserted. Your mission is to prove an $O(\lg n / \lg\lg n)$ upper bound on $\text E[M]$, the expected value of $M$. **a.** Argue that the probability $Q_k$ that exactly $k$ keys hash to a particular slot is given by $$Q_k = \bigg(\frac{1}{n} \bigg)^k \bigg(1 - \frac{1}{n} \bigg)^{n - k} \binom{n}{k}.$$ **b.** Let $P_k$ be the probability that $M = k$, that is, the probability that the slot containing the most keys contains $k$ keys. Show that $P_k \le n Q_k$. **c.** Use Stirling's approximation, equation $\text{(3.18)}$, to show that $Q_k < e^k / k^k$. **d.** Show that there exists a constant $c > 1$ such that $Q_{k_0} < 1 / n^3$ for $k_0 = c\lg n / \lg\lg n$. Conclude that $P_k < 1 / n^2$ for $k \ge k_0 = c\lg n / \lg\lg n$. **e.** Argue that $$\text E[M] \le \Pr\bigg\\{M > \frac{c\lg n}{\lg\lg n}\bigg\\} \cdot n + \Pr\bigg\\{M \le \frac{c\lg n}{\lg\lg n}\bigg\\} \cdot \frac{c\lg n}{\lg\lg n}.$$ Conclude that $\text E[M] = O(\lg n / \lg\lg n)$.
(Removed)
[]
false
[]
11-11-3
11
11-3
11-3
docs/Chap11/Problems/11-3.md
Suppose that we are given a key $k$ to search for in a hash table with positions $0, 1, \ldots, m - 1$, and suppose that we have a hash function $h$ mapping the key space into the set $\\{0, 1, \ldots, m - 1\\}$. The search scheme is as follows: 1. Compute the value $j = h(k)$, and set $i = 0$. 2. Probe in position $j$ for the desired key $k$. If you find it, or if this position is empty, terminate the search. 3. Set $i = i + 1$. If $i$ now equals $m$, the table is full, so terminate the search. Otherwise, set $j = (i + j) \mod m$, and return to step 2. Assume that $m$ is a power of $2$. **a.** Show that this scheme is an instance of the general "quadratic probing" scheme by exhibiting the appropriate constants $c_1$ and $c_2$ for equation $\text{(11.5)}$. **b.** Prove that this algorithm examines every table position in the worst case.
(Removed)
[]
false
[]
11-11-4
11
11-4
11-4
docs/Chap11/Problems/11-4.md
Let $\mathcal H$ be a class of hash functions in which each hash function $h \in \mathcal H$ maps the universe $U$ of keys to $\\{0, 1, \ldots, m - 1 \\}$. We say that $\mathcal H$ is **_k-universal_** if, for every fixed sequence of $k$ distinct keys $\langle x^{(1)}, x^{(2)}, \ldots, x^{(k)} \rangle$ and for any $h$ chosen at random from $\mathcal H$, the sequence $\langle h(x^{(1)}), h(x^{(2)}), \ldots, h(x^{(k)}) \rangle$ is equally likely to be any of the $m^k$ sequences of length $k$ with elements drawn from $\\{0, 1, \ldots, m - 1 \\}$. **a.** Show that if the family $\mathcal H$ of hash functions is $2$-universal, then it is universal. **b.** Suppose that the universe $U$ is the set of $n$-tuples of values drawn from $\mathbb Z_p = \\{0, 1, \ldots, p - 1\\}$, where $p$ is prime. Consider an element $x = \langle x_0, x_1, \ldots, x_{n - 1} \rangle \in U$. For any $n$-tuple $a = \langle a_0, a_1, \ldots, a_{n - 1} \rangle \in U$, define the hash function $h_a$ by $$h_a(x) = \Bigg(\sum_{j = 0}^{n - 1} a_j x_j \Bigg) \mod p.$$ Let $\mathcal H = \\{h_a\\}$. Show that $\mathcal H$ is universal, but not $2$-universal. ($\textit{Hint:}$ Find a key for which all hash functions in $\mathcal H$ produce the same value.) **c.** Suppose that we modify $\mathcal H$ slightly from part (b): for any $a \in U$ and for any $b \in \mathbb Z_p$, define $$h'_{ab}(x) = \Bigg(\sum\_{j = 0}^{n - 1} a_j x_j + b \Bigg) \mod p$$ and $\mathcal h' = \\{h'_{ab}\\}$. Argue that $\mathcal H'$ is $2$-universal. ($\textit{Hint:}$ Consider fixed $n$-tuples $x \in U$ and $y \in U$, with $x_i \ne y_i$ for some $i$. What happens to $h'\_{ab}(x)$ and $h'\_{ab}(y)$ as $a_i$ and $b$ range over $\mathbb Z_p$?) **d.** Suppose that Alice and Bob secretly agree on a hash function $h$ form $2$-universal family $\mathcal H$ of hash functions. Each $h \in \mathcal H$ maps from a universe of keys $u$ to $\mathbb Z_p$, where $p$ is aprime. Later, Alice sends a message $m$ to Bob over the Internet, where $m \in U$. She authenticates this message to Bob by also sending an authentication tag $t = h(m)$, and Bob checks that the pair $(m, t)$ he receives indeed satisfies $t = h(m)$. Suppose that an adversary intercepts $(m, t)$ en route and tries to fool Bob by replacing the pair $(m, t)$ with a different pair $(m', t')$. Argue that the probability that the adversary succeeds in fooling Bob into accepting $(m', t')$ is at most $1 / p$, no matter how much computing power the adversary has, and even if the adversary knows the family $\mathcal H$ of hash functions used.
**a.** Apply the definition of $k$-universality with $k = 2$: for any pair of distinct keys, the pair of hash values is equally likely to be any of the $m^2$ pairs, so the number of hash functions for which $h(k) = h(l)$ is $\frac{m}{m^2}|\mathcal H| = \frac{1}{m}|\mathcal H|$; therefore the family is universal.

**b.** $\mathcal H$ is universal: if $x \ne y$, pick an index $i$ with $x_i \ne y_i$. For any fixed choice of the other coordinates of $a$, exactly one value of $a_i$ satisfies $\sum_j a_j(x_j - y_j) \equiv 0 \pmod p$, so exactly $p^{n - 1}$ of the $p^n$ functions collide on $x$ and $y$, giving collision probability $1 / p$. However, for $x = \langle 0, 0, \ldots, 0 \rangle$ we have $h_a(x) = 0$ for every $a$, so the pair $\langle h_a(x), h_a(y) \rangle$ always has first component $0$ and cannot be uniform over all $p^2$ pairs; hence $\mathcal H$ is not $2$-universal.

**c.** Let $x, y \in U$ be fixed, distinct $n$-tuples with $x_i \ne y_i$ for some $i$, and fix all the $a_j$ with $j \ne i$. As $a_i$ and $b$ range over $\mathbb Z_p$, the map $(a_i, b) \mapsto \langle h'\_{ab}(x), h'\_{ab}(y) \rangle$ is a bijection: the difference $h'\_{ab}(x) - h'\_{ab}(y)$ equals a constant plus $a_i(x_i - y_i)$, which takes each value of $\mathbb Z_p$ exactly once as $a_i$ varies, and then $b$ shifts both components through all of $\mathbb Z_p$. Thus $\langle h'\_{ab}(x), h'\_{ab}(y) \rangle$ is equally likely to be any of the $p^2$ pairs, so $\mathcal H'$ is $2$-universal.

**d.** The adversary must output $(m', t')$ with $m' \ne m$, and succeeds only if $t' = h(m')$. Since $\mathcal H$ is $2$-universal, the pair $\langle h(m), h(m') \rangle$ is uniform over $\mathbb Z_p^2$; hence, even conditioned on the observed value $t = h(m)$, the value $h(m')$ is uniformly distributed over the $p$ elements of $\mathbb Z_p$. Whatever $t'$ the adversary outputs, $\Pr\\{h(m') = t'\\} = 1 / p$, no matter how much computing power the adversary has, and even if the adversary knows the family $\mathcal H$.
[]
false
[]
12-12.1-1
12
12.1
12.1-1
docs/Chap12/12.1.md
For the set of $\\{ 1, 4, 5, 10, 16, 17, 21 \\}$ of keys, draw binary search trees of heights $2$, $3$, $4$, $5$, and $6$.
- $height = 2$: ![](../img/12.1-1-1.png) - $height = 3$: ![](../img/12.1-1-2.png) - $height = 4$: ![](../img/12.1-1-3.png) - $height = 5$: ![](../img/12.1-1-4.png) - $height = 6$: ![](../img/12.1-1-5.png)
[]
true
[ "../img/12.1-1-1.png", "../img/12.1-1-2.png", "../img/12.1-1-3.png", "../img/12.1-1-4.png", "../img/12.1-1-5.png" ]
12-12.1-2
12
12.1
12.1-2
docs/Chap12/12.1.md
What is the difference between the binary-search-tree property and the min-heap property (see page 153)? Can the min-heap property be used to print out the keys of an $n$-node tree in sorted order in $O(n)$ time? Show how, or explain why not.
- The binary-search-tree property guarantees that all nodes in the left subtree are smaller, and all nodes in the right subtree are larger. - The min-heap property only guarantees the general child-larger-than-parent relation, but doesn't distinguish between left and right children. For this reason, the min-heap property can't be used to print out the keys in sorted order in linear time because we have no way of knowing which subtree contains the next smallest element.
[]
false
[]
12-12.1-3
12
12.1
12.1-3
docs/Chap12/12.1.md
Give a nonrecursive algorithm that performs an inorder tree walk. ($\textit{Hint:}$ An easy solution uses a stack as an auxiliary data structure. A more complicated, but elegant, solution uses no stack but assumes that we can test two pointers for equality.)
```cpp INORDER-TREE-WALK(T) let S be an empty stack current = T.root done = 0 while !done if current != NIL PUSH(S, current) current = current.left else if !S.EMPTY() current = POP(S) print current current = current.right else done = 1 ```
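For comparison, a runnable C++ rendering of the same stack-based walk (the node layout and names are ours):

```cpp
#include <iostream>
#include <stack>

struct Node { int key; Node *left = nullptr, *right = nullptr; };

// Iterative inorder walk: slide down left spines, pushing nodes; on pop,
// visit the node and continue in its right subtree.
void inorder(Node* root) {
    std::stack<Node*> s;
    Node* cur = root;
    while (cur != nullptr || !s.empty()) {
        while (cur != nullptr) {
            s.push(cur);
            cur = cur->left;
        }
        cur = s.top(); s.pop();
        std::cout << cur->key << ' ';
        cur = cur->right;
    }
}

int main() {
    Node a{1}, b{2}, c{3};
    b.left = &a;                  //     2
    b.right = &c;                 //    / \
    inorder(&b);                  //   1   3
    std::cout << '\n';            // prints "1 2 3"
}
```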
[ { "lang": "cpp", "code": "INORDER-TREE-WALK(T)\n let S be an empty stack\n current = T.root\n done = 0\n while !done\n if current != NIL\n PUSH(S, current)\n current = current.left\n else\n if !S.EMPTY()\n current = POP(S)\n print current\n current = current.right\n else done = 1" } ]
false
[]
12-12.1-4
12
12.1
12.1-4
docs/Chap12/12.1.md
Give recursive algorithms that perform preorder and postorder tree walks in $\Theta(n)$ time on a tree of $n$ nodes.
```cpp PREORDER-TREE-WALK(x) if x != NIL print x.key PREORDER-TREE-WALK(x.left) PREORDER-TREE-WALK(x.right) ``` ```cpp POSTORDER-TREE-WALK(x) if x != NIL POSTORDER-TREE-WALK(x.left) POSTORDER-TREE-WALK(x.right) print x.key ```
[ { "lang": "cpp", "code": "PREORDER-TREE-WALK(x)\n if x != NIL\n print x.key\n PREORDER-TREE-WALK(x.left)\n PREORDER-TREE-WALK(x.right)" }, { "lang": "cpp", "code": "POSTORDER-TREE-WALK(x)\n if x != NIL\n POSTORDER-TREE-WALK(x.left)\n POSTORDER-TREE-WALK(x.right)\n print x.key" } ]
false
[]
12-12.1-5
12
12.1
12.1-5
docs/Chap12/12.1.md
Argue that since sorting $n$ elements takes $\Omega(n\lg n)$ time in the worst case in the comparison model, any comparison-based algorithm for constructing a binary search tree from an arbitrary list of $n$ elements takes $\Omega(n\lg n)$ time in the worst case.
Assume, for the sake of contradiction, that some comparison-based algorithm constructs a binary search tree from an arbitrary list of $n$ elements in $o(n\lg n)$ time in the worst case. Since an inorder tree walk outputs the keys in sorted order in $\Theta(n)$ time, we could then sort $n$ elements in $o(n\lg n)$ time, which contradicts the fact that sorting $n$ elements takes $\Omega(n\lg n)$ time in the worst case in the comparison model.
[]
false
[]
12-12.2-1
12
12.2
12.2-1
docs/Chap12/12.2.md
Suppose that we have numbers between $1$ and $1000$ in a binary search tree, and we want to search for the number $363$. Which of the following sequences could _not_ be the sequence of nodes examined? **a.** $2, 252, 401, 398, 330, 344, 397, 363$. **b.** $924, 220, 911, 244, 898, 258, 362, 363$. **c.** $925, 202, 911, 240, 912, 245, 363$. **d.** $2, 399, 387, 219, 266, 382, 381, 278, 363$. **e.** $935, 278, 347, 621, 299, 392, 358, 363$.
- **c.** could not be the sequence of nodes explored because we take the left child from the $911$ node, and yet somehow manage to get to the $912$ node which cannot belong the left subtree of $911$ because it is greater. - **e.** is also impossible because we take the right subtree on the $347$ node and yet later come across the $299$ node.
[]
false
[]
12-12.2-2
12
12.2
12.2-2
docs/Chap12/12.2.md
Write recursive versions of $\text{TREE-MINIMUM}$ and $\text{TREE-MAXIMUM}$.
```cpp TREE-MINIMUM(x) if x.left != NIL return TREE-MINIMUM(x.left) else return x ``` ```cpp TREE-MAXIMUM(x) if x.right != NIL return TREE-MAXIMUM(x.right) else return x ```
[ { "lang": "cpp", "code": "TREE-MINIMUM(x)\n if x.left != NIL\n return TREE-MINIMUM(x.left)\n else return x" }, { "lang": "cpp", "code": "TREE-MAXIMUM(x)\n if x.right != NIL\n return TREE-MAXIMUM(x.right)\n else return x" } ]
false
[]
12-12.2-3
12
12.2
12.2-3
docs/Chap12/12.2.md
Write the $\text{TREE-PREDECESSOR}$ procedure.
```cpp TREE-PREDECESSOR(x) if x.left != NIL return TREE-MAXIMUM(x.left) y = x.p while y != NIL and x == y.left x = y y = y.p return y ```
[ { "lang": "cpp", "code": "TREE-PREDECESSOR(x)\n if x.left != NIL\n return TREE-MAXIMUM(x.left)\n y = x.p\n while y != NIL and x == y.left\n x = y\n y = y.p\n return y" } ]
false
[]
12-12.2-4
12
12.2
12.2-4
docs/Chap12/12.2.md
Professor Bunyan thinks he has discovered a remarkable property of binary search trees. Suppose that the search for key $k$ in a binary search tree ends up in a leaf. Consider three sets: $A$, the keys to the left of the search path; $B$, the keys on the search path; and $C$, the keys to the right of the search path. Professor Bunyan claims that any three keys $a \in A$, $b \in B$, and $c \in C$ must satisfy $a \le b \le c$. Give a smallest possible counterexample to the professor's claim.
Consider the binary search tree with root $5$, whose right child is $8$, where $8$ has left child $7$ and right child $9$. Searching for $9$ ends at the leaf $9$, and then $A = \\{7\\}$, $B = \\{5, 8, 9\\}$ and $C = \\{\\}$. So, since $7 > 5$, taking $a = 7$ and $b = 5$ breaks the professor's claim.
[]
false
[]
12-12.2-5
12
12.2
12.2-5
docs/Chap12/12.2.md
Show that if a node in a binary search tree has two children, then its successor has no left child and its predecessor has no right child.
Suppose the node $x$ has two children. Then its successor is the minimum element of the BST rooted at $x.right$. If it had a left child, then it wouldn't be the minimum element, so it must not have a left child. Similarly, the predecessor must be the maximum element of the left subtree, so it cannot have a right child.
[]
false
[]
12-12.2-6
12
12.2
12.2-6
docs/Chap12/12.2.md
Consider a binary search tree $T$ whose keys are distinct. Show that if the right subtree of a node $x$ in $T$ is empty and $x$ has a successor $y$, then $y$ is the lowest ancestor of $x$ whose left child is also an ancestor of $x$. (Recall that every node is its own ancestor.)
First we establish that $y$ must be an ancestor of $x$. If $y$ weren't an ancestor of $x$, then let $z$ denote the first common ancestor of $x$ and $y$. By the binary-search-tree property, $x < z < y$, so $y$ cannot be the successor of $x$. Next observe that $y.left$ must be an ancestor of $x$ because if it weren't, then $y.right$ would be an ancestor of $x$, implying that $x > y$. Finally, suppose that $y$ is not the lowest ancestor of $x$ whose left child is also an ancestor of $x$. Let $z$ denote this lowest ancestor. Then $z$ must be in the left subtree of $y$, which implies $z < y$, contradicting the fact that $y$ is the successor of $x$.
[]
false
[]
12-12.2-7
12
12.2
12.2-7
docs/Chap12/12.2.md
An alternative method of performing an inorder tree walk of an $n$-node binary search tree finds the minimum element in the tree by calling $\text{TREE-MINIMUM}$ and then making $n - 1$ calls to $\text{TREE-SUCCESSOR}$. Prove that this algorithm runs in $\Theta(n)$ time.
To show this bound on the runtime, we will show that using this procedure, we traverse each edge at most twice. This will suffice because the number of edges in a tree is one less than the number of vertices.

Consider a vertex of the BST, say $x$. The edge between $x.p$ and $x$ gets used when successor is called on $x.p$ and gets used again when it is called on the largest element in the subtree rooted at $x$. These are the only two times that edge can be used, apart from the edges traversed during the initial call to $\text{TREE-MINIMUM}$, each of which is likewise traversed at most twice overall. So, the runtime is $O(n)$. We trivially get that the runtime is $\Omega(n)$ because that is the size of the output.
[]
false
[]
12-12.2-8
12
12.2
12.2-8
docs/Chap12/12.2.md
Prove that no matter what node we start at in a height-$h$ binary search tree, $k$ successive calls to $\text{TREE-SUCCESSOR}$ take $O(k + h)$ time.
Suppose $x$ is the starting node and $y$ is the ending node. The overall route traversed by the $k$ successive calls consists of the at most $2h$ edges on the tree path between $x$ and $y$, plus the edges of the subtrees hanging off that path that contain the $k$ output nodes; as in Exercise 12.2-7, each of these edges is visited at most twice. Therefore the $k$ calls take $O(k + h)$ time.
[]
false
[]
12-12.2-9
12
12.2
12.2-9
docs/Chap12/12.2.md
Let $T$ be a binary search tree whose keys are distinct, let $x$ be a leaf node, and let $y$ be its parent. Show that $y.key$ is either the smallest key in $T$ larger than $x.key$ or the largest key in $T$ smaller than $x.key$.
- If $x = y.left$, then calling $\text{TREE-SUCCESSOR}$ on $x$ results in no iterations of the **while** loop, and so returns $y$; thus $y.key$ is the smallest key in $T$ larger than $x.key$.
- If $x = y.right$, then calling $\text{TREE-PREDECESSOR}$ on $x$ (see Exercise 12.2-3) runs its **while** loop no times, and so returns $y$; thus $y.key$ is the largest key in $T$ smaller than $x.key$.
[]
false
[]
12-12.3-1
12
12.3
12.3-1
docs/Chap12/12.3.md
Give a recursive version of the $\text{TREE-INSERT}$ procedure.
```cpp RECURSIVE-TREE-INSERT(T, z) if T.root == NIL T.root = z else INSERT(NIL, T.root, z) ``` ```cpp INSERT(p, x, z) if x == NIL z.p = p if z.key < p.key p.left = z else p.right = z else if z.key < x.key INSERT(x, x.left, z) else INSERT(x, x.right, z) ```
[ { "lang": "cpp", "code": "RECURSIVE-TREE-INSERT(T, z)\n if T.root == NIL\n T.root = z\n else INSERT(NIL, T.root, z)" }, { "lang": "cpp", "code": "INSERT(p, x, z)\n if x == NIL\n z.p = p\n if z.key < p.key\n p.left = z\n else p.right = z\n else if z.key < x.key\n INSERT(x, x.left, z)\n else INSERT(x, x.right, z)" } ]
false
[]
12-12.3-2
12
12.3
12.3-2
docs/Chap12/12.3.md
Suppose that we construct a binary search tree by repeatedly inserting distinct values into the tree. Argue that the number of nodes examined in searching for a value in the tree is one plus the number of nodes examined when the value was first inserted into the tree.
When the value was first inserted into the tree, the insertion procedure examined every node on the path from the root down to the value's eventual parent. A later search for the value examines that same path plus the node containing the value itself, which did not yet exist at insertion time. Hence searching examines one more node than inserting did.
[]
false
[]
12-12.3-3
12
12.3
12.3-3
docs/Chap12/12.3.md
We can sort a given set of $n$ numbers by first building a binary search tree containing these numbers (using $\text{TREE-INSERT}$ repeatedly to insert the numbers one by one) and then printing the numbers by an inorder tree walk. What are the worst-case and best-case running times for this sorting algorithm?
- The worst-case is that the tree formed has height $n$ because we were inserting them in already sorted order. This will result in a runtime of $\Theta(n^2)$.
- The best-case is that the tree formed is approximately balanced. This will mean that the height doesn't exceed $O(\lg n)$. Note that it can't have a smaller height, because a complete binary tree of height $h$ only has $\Theta(2^h)$ elements. This will result in a runtime of $O(n\lg n)$. We showed $\Omega(n\lg n)$ in Exercise [12.1-5](../12.1/#121-5).
[]
false
[]
12-12.3-4
12
12.3
12.3-4
docs/Chap12/12.3.md
Is the operation of deletion "commutative" in the sense that deleting $x$ and then $y$ from a binary search tree leaves the same tree as deleting $y$ and then $x$? Argue why it is or give a counterexample.
No; here is a counterexample. In the tree below, $B < A < C < D$: the root is $A$, with left child $B$ and right child $D$, and $C$ is the left child of $D$.

- Delete $A$ first (its successor $C$ takes its place), then delete $B$:

    ```
        A            C          C
       / \          / \          \
      B   D        B   D          D
         /
        C
    ```

- Delete $B$ first, then delete $A$ (its only child $D$ takes its place):

    ```
        A          A            D
       / \          \          /
      B   D          D        C
         /          /
        C          C
    ```

The two orders of deletion leave different trees, so deletion is not commutative.
[ { "lang": "", "code": " A C C\n / \\ / \\ \\\n B D B D D\n /\n C" }, { "lang": "", "code": " A A D\n / \\ \\ /\n B D D C\n / /\n C C" } ]
false
[]
12-12.3-5
12
12.3
12.3-5
docs/Chap12/12.3.md
Suppose that instead of each node $x$ keeping the attribute $x.p$, pointing to $x$'s parent, it keeps $x.succ$, pointing to $x$'s successor. Give pseudocode for $\text{SEARCH}$, $\text{INSERT}$, and $\text{DELETE}$ on a binary search tree $T$ using this representation. These procedures should operate in time $O(h)$, where $h$ is the height of the tree $T$. ($\textit{Hint:}$ You may wish to implement a subroutine that returns the parent of a node.)
We don't need to change $\text{SEARCH}$. We have to implement $\text{PARENT}$, which facilitates the other operations.

```cpp
PARENT(T, x)
    if x == T.root
        return NIL
    y = TREE-MAXIMUM(x).succ
    if y == NIL
        y = T.root
    else
        if y.left == x
            return y
        y = y.left
    while y.right != x
        y = y.right
    return y
```

```cpp
INSERT(T, z)
    y = NIL
    x = T.root
    pred = NIL
    while x != NIL
        y = x
        if z.key < x.key
            x = x.left
        else
            pred = x
            x = x.right
    if y == NIL
        T.root = z
        z.succ = NIL
    else if z.key < y.key
        y.left = z
        z.succ = y
        if pred != NIL
            pred.succ = z
    else
        y.right = z
        z.succ = y.succ
        y.succ = z
```

We modify $\text{TRANSPLANT}$ a bit, since nodes no longer store the parent pointer $p$.

```cpp
TRANSPLANT(T, u, v)
    p = PARENT(T, u)
    if p == NIL
        T.root = v
    else if u == p.left
        p.left = v
    else
        p.right = v
```

Also, we have to implement $\text{TREE-PREDECESSOR}$, which helps us easily find the predecessor in line 2 of $\text{DELETE}$.

```cpp
TREE-PREDECESSOR(T, x)
    if x.left != NIL
        return TREE-MAXIMUM(x.left)
    y = T.root
    pred = NIL
    while y != NIL
        if y.key == x.key
            break
        if y.key < x.key
            pred = y
            y = y.right
        else
            y = y.left
    return pred
```

```cpp
DELETE(T, z)
    pred = TREE-PREDECESSOR(T, z)
    if pred != NIL
        pred.succ = z.succ
    if z.left == NIL
        TRANSPLANT(T, z, z.right)
    else if z.right == NIL
        TRANSPLANT(T, z, z.left)
    else
        y = TREE-MINIMUM(z.right)
        if PARENT(T, y) != z
            TRANSPLANT(T, y, y.right)
            y.right = z.right
        TRANSPLANT(T, z, y)
        y.left = z.left
```

Therefore, all five algorithms are still $O(h)$ despite the increase in the hidden constant factor.
[ { "lang": "cpp", "code": "PARENT(T, x)\n if x == T.root\n return NIL\n y = TREE-MAXIMUM(x).succ\n if y == NIL\n y = T.root\n else\n if y.left == x\n return y\n y = y.left\n while y.right != x\n y = y.right\n return y" }, { "lang": "cpp", "code": "INSERT(T, z)\n y = NIL\n x = T.root\n pred = NIL\n while x != NIL\n y = x\n if z.key < x.key\n x = x.left\n else\n pred = x\n x = x.right\n if y == NIL\n T.root = z\n z.succ = NIL\n else if z.key < y.key\n y.left = z\n z.succ = y\n if pred != NIL\n pred.succ = z\n else\n y.right = z\n z.succ = y.succ\n y.succ = z" }, { "lang": "cpp", "code": "TRANSPLANT(T, u, v)\n p = PARENT(T, u)\n if p == NIL\n T.root = v\n else if u == p.left\n p.left = v\n else\n p.right = v" }, { "lang": "cpp", "code": "TREE-PREDECESSOR(T, x)\n if x.left != NIL\n return TREE-MAXIMUM(x.left)\n y = T.root\n pred = NIL\n while y != NIL\n if y.key == x.key\n break\n if y.key < x.key\n pred = y\n y = y.right\n else\n y = y.left\n return pred" }, { "lang": "cpp", "code": "DELETE(T, z)\n pred = TREE-PREDECESSOR(T, z)\n pred.succ = z.succ\n if z.left == NIL\n TRANSPLANT(T, z, z.right)\n else if z.right == NIL\n TRANSPLANT(T, z, z.left)\n else\n y = TREE-MIMIMUM(z.right)\n if PARENT(T, y) != z\n TRANSPLANT(T, y, y.right)\n y.right = z.right\n TRANSPLANT(T, z, y)\n y.left = z.left" } ]
false
[]
12-12.3-6
12
12.3
12.3-6
docs/Chap12/12.3.md
When node $z$ in $\text{TREE-DELETE}$ has two children, we could choose node $y$ as its predecessor rather than its successor. What other changes to $\text{TREE-DELETE}$ would be necessary if we did so? Some have argued that a fair strategy, giving equal priority to predecessor and successor, yields better empirical performance. How might $\text{TREE-DELETE}$ be changed to implement such a fair strategy?
Update line 5 so that $y$ is set equal to $\text{TREE-MAXIMUM}(z.left)$ and lines 6-12 so that every $y.left$ and $z.left$ is replaced with $y.right$ and $z.right$ and vice versa. To implement the fair strategy, we could randomly decide each time $\text{TREE-DELETE}$ is called whether or not to use the predecessor or successor.
[]
false
[]
12-12.4-1
12
12.4
12.4-1
docs/Chap12/12.4.md
Prove equation $\text{(12.3)}$.
Consider all possible positions of the largest element of a size-$4$ subset of $\\{1, 2, \ldots, n + 3\\}$. Suppose it is in position $i + 4$ for some $i \le n − 1$. Then there are $i + 3$ positions from which we can select the remaining three elements of the subset, giving $\binom{i + 3}{3}$ such subsets. Since subsets with different largest elements are distinct, summing over $i$ counts every size-$4$ subset exactly once, which yields

$$\sum_{i = 0}^{n - 1}\binom{i + 3}{3} = \binom{n + 3}{4}.$$
[]
false
[]
12-12.4-2
12
12.4
12.4-2
docs/Chap12/12.4.md
Describe a binary search tree on n nodes such that the average depth of a node in the tree is $\Theta(\lg n)$ but the height of the tree is $\omega(\lg n)$. Give an asymptotic upper bound on the height of an $n$-node binary search tree in which the average depth of a node is $\Theta(\lg n)$.
To keep the average depth low but maximize height, the desired tree will be a complete binary search tree, but with a chain of length $c(n)$ hanging down from one of the leaf nodes. Let $k = \lg(n − c(n))$ be the height of the complete binary search tree. Then the average depth is approximately given by

$$\frac{1}{n} \left[\sum_{i = 1}^{n - c(n)} \lg i + (k + 1) + (k + 2) + \cdots + (k + c(n))\right] \approx \lg(n - c(n)) + \frac{c(n)^2}{2n}.$$

The upper bound is given by the largest $c(n)$ such that $\lg(n − c(n)) + \frac{c(n)^2}{2n} = \Theta(\lg n)$ and $c(n) = \omega(\lg n)$. One function which works is $\sqrt n$.
[]
false
[]
12-12.4-3
12
12.4
12.4-3
docs/Chap12/12.4.md
Show that the notion of a randomly chosen binary search tree on $n$ keys, where each binary search tree of $n$ keys is equally likely to be chosen, is different from the notion of a randomly built binary search tree given in this section. ($\textit{Hint:}$ List the possibilities when $n = 3$.)
Suppose we have the elements $\\{1, 2, 3\\}$. If we build a tree by inserting a uniformly random ordering of the keys, each resulting tree occurs with probability some multiple of $\frac{1}{6}$; for instance, the balanced tree with root $2$ arises from the two orderings $\langle 2, 1, 3 \rangle$ and $\langle 2, 3, 1 \rangle$, so it has probability $\frac{1}{3}$. However, there are only five distinct binary search trees on the key set $\\{1, 2, 3\\}$, so under the uniform model each occurs with probability $\frac{1}{5}$. The two probability distributions are therefore different.
[]
false
[]
12-12.4-4
12
12.4
12.4-4
docs/Chap12/12.4.md
Show that the function $f(x) = 2^x$ is convex.
The second derivative is $2^x\ln^2 2$, which is always positive, so the function is convex.
[]
false
[]