id | chapter | section | title | source_file | question_markdown | answer_markdown | code_blocks | has_images | image_refs |
---|---|---|---|---|---|---|---|---|---|
01-1.1-1
|
01
|
1.1
|
1.1-1
|
docs/Chap01/1.1.md
|
Give a real-world example that requires sorting or a real-world example that requires computing a convex hull.
|
- Sorting: listing the restaurants on NTU street in ascending order of price.
- Convex hull: computing the diameter of a set of points (the farthest pair of points must both lie on the convex hull).
|
[] | false |
[] |
01-1.1-2
|
01
|
1.1
|
1.1-2
|
docs/Chap01/1.1.md
|
Other than speed, what other measures of efficiency might one use in a real-world setting?
|
Memory efficiency and coding efficiency (how easy the algorithm is to implement and maintain).
|
[] | false |
[] |
01-1.1-3
|
01
|
1.1
|
1.1-3
|
docs/Chap01/1.1.md
|
Select a data structure that you have seen previously, and discuss its strengths and limitations.
|
Linked-list:
- Strengths: insertion and deletion.
- Limitations: random access.
|
[] | false |
[] |
01-1.1-4
|
01
|
1.1
|
1.1-4
|
docs/Chap01/1.1.md
|
How are the shortest-path and traveling-salesman problems given above similar? How are they different?
|
- Similar: both ask for a path of shortest total distance.
- Different: the traveling-salesman problem has more constraints: the tour must visit every vertex exactly once and return to its starting point.
|
[] | false |
[] |
01-1.1-5
|
01
|
1.1
|
1.1-5
|
docs/Chap01/1.1.md
|
Come up with a real-world problem in which only the best solution will do. Then come up with one in which a solution that is "approximately" the best is good enough.
|
- Best: finding the GCD of two positive integers.
- Approximately: numerically solving differential equations.
|
[] | false |
[] |
01-1.2-1
|
01
|
1.2
|
1.2-1
|
docs/Chap01/1.2.md
|
Give an example of an application that requires algorithmic content at the application level, and discuss the function of the algorithms involved.
|
Driving navigation. At the application level, the navigation system runs a shortest-path algorithm over the road network to compute a route from the current position to the destination.
|
[] | false |
[] |
01-1.2-2
|
01
|
1.2
|
1.2-2
|
docs/Chap01/1.2.md
|
Suppose we are comparing implementations of insertion sort and merge sort on the same machine. For inputs of size $n$, insertion sort runs in $8n^2$ steps, while merge sort runs in $64n\lg n$ steps. For which values of $n$ does insertion sort beat merge sort?
|
$$
\begin{aligned}
8n^2 & < 64n\lg n \\\\
n & < 8\lg n \\\\
2^n & < n^8 \\\\
2 \le n & \le 43.
\end{aligned}
$$
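As a quick numerical check, the following short C++ sketch (ours, not part of the original solution) evaluates both step counts and confirms that insertion sort wins exactly for $2 \le n \le 43$:
```cpp
#include <cmath>
#include <cstdio>

// Compare 8n^2 (insertion sort) with 64 n lg n (merge sort) and
// report the values of n for which insertion sort takes fewer steps.
int main() {
    for (int n = 2; n <= 50; ++n) {
        double insertion = 8.0 * n * n;
        double merge = 64.0 * n * std::log2(n);
        if (insertion < merge)
            std::printf("n = %d: insertion sort wins (%g < %g)\n",
                        n, insertion, merge);
    }
    return 0;
}
```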
|
[] | false |
[] |
01-1.2-3
|
01
|
1.2
|
1.2-3
|
docs/Chap01/1.2.md
|
What is the smallest value of $n$ such that an algorithm whose running time is $100n^2$ runs faster than an algorithm whose running time is $2^n$ on the same machine?
|
$$
\begin{aligned}
100n^2 & < 2^n \\\\
n & \ge 15.
\end{aligned}
$$
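The threshold can be confirmed by a direct search; the C++ sketch below (ours; `1ULL << n` computes $2^n$ exactly for $n \le 62$) prints $15$:
```cpp
#include <cstdio>

// Find the smallest n with 100n^2 < 2^n.
int main() {
    for (int n = 1; n <= 62; ++n) {
        if (100.0 * n * n < static_cast<double>(1ULL << n)) {
            std::printf("smallest n = %d\n", n);  // prints n = 15
            break;
        }
    }
    return 0;
}
```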
|
[] | false |
[] |
01-1-1
|
01
|
1-1
|
1-1
|
docs/Chap01/Problems/1-1.md
|
For each function $f(n)$ and time $t$ in the following table, determine the largest size $n$ of a problem that can be solved in time $t$, assuming that the algorithm to solve the problem takes $f(n)$ microseconds.
|
$$
\begin{array}{cccccccc}
& \text{1 second} & \text{1 minute} & \text{1 hour} & \text{1 day} & \text{1 month} & \text{1 year} & \text{1 century} \\\\
\hline
\lg n & 2^{10^6} & 2^{6 \times 10^7} & 2^{3.6 \times 10^9} & 2^{8.64 \times 10^{10}} & 2^{2.59 \times 10^{12}} & 2^{3.15 \times 10^{13}} & 2^{3.15 \times 10^{15}} \\\\
\sqrt n & 10^{12} & 3.6 \times 10^{15} & 1.3 \times 10^{19} & 7.46 \times 10^{21} & 6.72 \times 10^{24} & 9.95 \times 10^{26} & 9.95 \times 10^{30} \\\\
n & 10^6 & 6 \times 10^7 & 3.6 \times 10^9 & 8.64 \times 10^{10} & 2.59 \times 10^{12} & 3.15 \times 10^{13} & 3.15 \times 10^{15} \\\\
n\lg n & 6.24 \times 10^4 & 2.8 \times 10^6 & 1.33 \times 10^8 & 2.76 \times 10^9 & 7.19 \times 10^{10} & 7.98 \times 10^{11} & 6.86 \times 10^{13} \\\\
n^2 & 1000 & 7745 & 60000 & 293938 & 1609968 & 5615692 & 56156922 \\\\
n^3 & 100 & 391 & 1532 & 4420 & 13736 & 31593 & 146645 \\\\
2^n & 19 & 25 & 31 & 36 & 41 & 44 & 51 \\\\
n! & 9 & 11 & 12 & 13 & 15 & 16 & 17
\end{array}
$$
|
[] | false |
[] |
02-2.1-1
|
02
|
2.1
|
2.1-1
|
docs/Chap02/2.1.md
|
Using Figure 2.2 as a model, illustrate the operation of $\text{INSERTION-SORT}$ on the array $A = \langle 31, 41, 59, 26, 41, 58 \rangle$.
|

The operation of $\text{INSERTION-SORT}$ on the array $A = \langle 31, 41, 59, 26, 41, 58 \rangle$. Array indices appear above the rectangles, and values stored in the array positions appear within the rectangles.
(a)-(e) are iterations of the for loop of lines 1-8.
In each iteration, the black rectangle holds the key taken from $A[i]$, which is compared with the values in shaded rectangles to its left in the test of line 5. Dotted arrows show array values moved one position to the right in line 6, and solid arrows indicate where the key moves to in line 8.
(f) is the final sorted array.
The state of array $A$ (the first line is the initial array; each subsequent line is the state after one iteration):
$$
\begin{aligned}
A & = \langle 31, 41, 59, 26, 41, 58 \rangle \\\\
A & = \langle 31, 41, 59, 26, 41, 58 \rangle \\\\
A & = \langle 31, 41, 59, 26, 41, 58 \rangle \\\\
A & = \langle 26, 31, 41, 59, 41, 58 \rangle \\\\
A & = \langle 26, 31, 41, 41, 59, 58 \rangle \\\\
A & = \langle 26, 31, 41, 41, 58, 59 \rangle
\end{aligned}
$$
|
[] | true |
[
"../img/2.1-1-1.png"
] |
02-2.1-2
|
02
|
2.1
|
2.1-2
|
docs/Chap02/2.1.md
|
Rewrite the $\text{INSERTION-SORT}$ procedure to sort into nonincreasing instead of nondecreasing order.
|
```cpp
INSERTION-SORT(A)
for j = 2 to A.length
key = A[j]
i = j - 1
while i > 0 and A[i] < key
A[i + 1] = A[i]
i = i - 1
A[i + 1] = key
```
|
[
{
"lang": "cpp",
"code": "INSERTION-SORT(A)\n for j = 2 to A.length\n key = A[j]\n i = j - 1\n while i > 0 and A[i] < key\n A[i + 1] = A[i]\n i = i - 1\n A[i + 1] = key"
}
] | false |
[] |
02-2.1-3
|
02
|
2.1
|
2.1-3
|
docs/Chap02/2.1.md
|
Consider the **_searching problem_**:
**Input**: A sequence of $n$ numbers $A = \langle a_1, a_2, \ldots, a_n \rangle$ and a value $v$.
**Output:** An index $i$ such that $v = A[i]$ or the special value $\text{NIL}$ if $v$ does not appear in $A$.
Write pseudocode for **_linear search_**, which scans through the sequence, looking for $v$. Using a loop invariant, prove that your algorithm is correct. Make sure that your loop invariant fulfills the three necessary properties.
|
```cpp
LINEAR-SEARCH(A, v)
for i = 1 to A.length
if A[i] == v
return i
return NIL
```
**Loop invariant:** At the start of each iteration of the **for** loop, the subarray $A[1..i - 1]$ consists of elements that are different from $v$.
**Initialization:** Before the first loop iteration ($i = 1$), the subarray is the empty array, so the proof is trivial.
**Maintenance:** During each loop iteration, we compare $v$ with $A[i]$. If they are the same, we return $i$, which is the correct result. Otherwise, we continue to the next iteration of the loop. At the end of each loop iteration, we know the subarray $A[1..i]$ does not contain $v$, so the loop invariant holds true. Incrementing $i$ for the next iteration of the **for** loop then preserves the loop invariant.
**Termination:** The loop terminates when $i > A.length = n$. Since $i$ increases by $1$, we must have $i = n + 1$ at that time. Substituting $n + 1$ for $i$ in the wording of the loop invariant, we have that the subarray $A[1..n]$ consists of elements that are different from $v$. Since $A[1..n]$ is the entire array, we conclude that no element of the array equals $v$, so returning $\text{NIL}$ is correct. Hence the algorithm is correct.
|
[
{
"lang": "cpp",
"code": "LINEAR-SEARCH(A, v)\n for i = 1 to A.length\n if A[i] == v\n return i\n return NIL"
}
] | false |
[] |
02-2.1-4
|
02
|
2.1
|
2.1-4
|
docs/Chap02/2.1.md
|
Consider the problem of adding two $n$-bit binary integers, stored in two $n$-element arrays $A$ and $B$. The sum of the two integers should be stored in binary form in an $(n + 1)$-element array $C$. State the problem formally and write pseudocode for adding the two integers.
|
**Input:** An array of booleans $A = \langle a_1, a_2, \ldots, a_n \rangle$ and an array of booleans $B = \langle b_1, b_2, \ldots, b_n \rangle$, each representing an integer stored in binary format (each digit is a number, either $0$ or $1$, **least-significant digit first**) and each of length $n$.
**Output:** An array $C = \langle c_1, c_2, \ldots, c_{n + 1} \rangle$ such that $C' = A' + B'$ where $A'$, $B'$ and $C'$ are the integers, represented by $A$, $B$ and $C$.
```cpp
ADD-BINARY(A, B)
carry = 0
for i = 1 to A.length
sum = A[i] + B[i] + carry
C[i] = sum % 2 // remainder
carry = sum / 2 // quotient
C[A.length + 1] = carry
return C
```
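A runnable C++ translation of the pseudocode (a sketch we add for illustration; it uses 0-indexed vectors in place of the 1-indexed pseudocode arrays, and the function name `addBinary` is ours):
```cpp
#include <vector>

// Add two n-bit binary integers stored least-significant digit first.
// Returns the (n + 1)-element array C with C' = A' + B'.
std::vector<int> addBinary(const std::vector<int>& A,
                           const std::vector<int>& B) {
    std::size_t n = A.size();
    std::vector<int> C(n + 1);
    int carry = 0;
    for (std::size_t i = 0; i < n; ++i) {
        int sum = A[i] + B[i] + carry;
        C[i] = sum % 2;   // remainder
        carry = sum / 2;  // quotient
    }
    C[n] = carry;
    return C;
}
```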
|
[
{
"lang": "cpp",
"code": "ADD-BINARY(A, B)\n carry = 0\n for i = 1 to A.length\n sum = A[i] + B[i] + carry\n C[i] = sum % 2 // remainder\n carry = sum / 2 // quotient\n C[A.length + 1] = carry\n return C"
}
] | false |
[] |
02-2.2-1
|
02
|
2.2
|
2.2-1
|
docs/Chap02/2.2.md
|
Express the function $n^3 / 1000 - 100n^2 - 100n + 3$ in terms of $\Theta$-notation.
|
$\Theta(n^3)$.
|
[] | false |
[] |
02-2.2-2
|
02
|
2.2
|
2.2-2
|
docs/Chap02/2.2.md
|
Consider sorting $n$ numbers stored in array $A$ by first finding the smallest element of $A$ and exchanging it with the element in $A[1]$. Then find the second smallest element of $A$, and exchange it with $A[2]$. Continue in this manner for the first $n - 1$ elements of $A$. Write pseudocode for this algorithm, which is known as **_selection sort_**. What loop invariant does this algorithm maintain? Why does it need to run for only the first $n - 1$ elements, rather than for all $n$ elements? Give the best-case and worst-case running times of selection sort in $\Theta$-notation.
|
- Pseudocode:
```cpp
n = A.length
for i = 1 to n - 1
minIndex = i
for j = i + 1 to n
if A[j] < A[minIndex]
minIndex = j
swap(A[i], A[minIndex])
```
- Loop invariant:
At the start of each iteration of the outer **for** loop, the subarray $A[1..i - 1]$ consists of the $i - 1$ smallest elements of array $A$, in sorted order.
- Why does it need to run for only the first $n - 1$ elements, rather than for all $n$ elements?
After $n - 1$ iterations, the subarray $A[1..n - 1]$ consists of the $n - 1$ smallest elements of array $A$, in sorted order. Therefore, $A[n]$ is already the largest element.
- Running time: $\Theta(n^2)$ in both the best and worst case, since the nested loops always run in full regardless of the input.
|
[
{
"lang": "cpp",
"code": " n = A.length\n for i = 1 to n - 1\n minIndex = i\n for j = i + 1 to n\n if A[j] < A[minIndex]\n minIndex = j\n swap(A[i], A[minIndex])"
}
] | false |
[] |
02-2.2-3
|
02
|
2.2
|
2.2-3
|
docs/Chap02/2.2.md
|
Consider linear search again (see Exercise 2.1-3). How many elements of the input sequence need to be checked on the average, assuming that the element being searched for is equally likely to be any element in the array? How about in the worst case? What are the average-case and worst-case running times of linear search in $\Theta$-notation? Justify your answers.
|
If the element is present in the sequence, half of the elements are likely to be checked before it is found in the average case. In the worst case, all of them will be checked. That is, $n / 2$ checks for the average case and $n$ for the worst case. Both of them are $\Theta(n)$.
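More precisely, if the target is present and equally likely to be at each of the $n$ positions, the expected number of elements checked is
$$\sum_{i = 1}^n i \cdot \frac{1}{n} = \frac{1}{n} \cdot \frac{n(n + 1)}{2} = \frac{n + 1}{2},$$
which is roughly $n / 2$ and is $\Theta(n)$.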
|
[] | false |
[] |
02-2.2-4
|
02
|
2.2
|
2.2-4
|
docs/Chap02/2.2.md
|
How can we modify almost any algorithm to have a good best-case running time?
|
We can modify almost any algorithm to have a good best-case running time by adding a special case: check whether the input already matches a precomputed instance and, if so, return the precomputed answer immediately. For example, a sorting algorithm can first check in $\Theta(n)$ time whether its input is already sorted and return it unchanged if so, giving a $\Theta(n)$ best case.
|
[] | false |
[] |
02-2.3-1
|
02
|
2.3
|
2.3-1
|
docs/Chap02/2.3.md
|
Using Figure 2.4 as a model, illustrate the operation of merge sort on the array $A = \langle 3, 41, 52, 26, 38, 57, 9, 49 \rangle$.
|
$$[3] \quad [41] \quad [52] \quad [26] \quad [38] \quad [57] \quad [9] \quad [49]$$
$$\downarrow$$
$$[3|41] \quad [26|52] \quad [38|57] \quad [9|49]$$
$$\downarrow$$
$$[3|26|41|52] \quad [9|38|49|57]$$
$$\downarrow$$
$$[3|9|26|38|41|49|52|57]$$
|
[] | false |
[] |
02-2.3-2
|
02
|
2.3
|
2.3-2
|
docs/Chap02/2.3.md
|
Rewrite the $\text{MERGE}$ procedure so that it does not use sentinels, instead stopping once either array $L$ or $R$ has had all its elements copied back to $A$ and then copying the remainder of the other array back into $A$.
|
```cpp
MERGE(A, p, q, r)
n1 = q - p + 1
n2 = r - q
let L[1..n1] and R[1..n2] be new arrays
for i = 1 to n1
L[i] = A[p + i - 1]
for j = 1 to n2
R[j] = A[q + j]
i = 1
j = 1
for k = p to r
if i > n1
A[k] = R[j]
j = j + 1
else if j > n2
A[k] = L[i]
i = i + 1
else if L[i] ≤ R[j]
A[k] = L[i]
i = i + 1
else
A[k] = R[j]
j = j + 1
```
|
[
{
"lang": "cpp",
"code": "MERGE(A, p, q, r)\n n1 = q - p + 1\n n2 = r - q\n let L[1..n1] and R[1..n2] be new arrays\n for i = 1 to n1\n L[i] = A[p + i - 1]\n for j = 1 to n2\n R[j] = A[q + j]\n i = 1\n j = 1\n for k = p to r\n if i > n1\n A[k] = R[j]\n j = j + 1\n else if j > n2\n A[k] = L[i]\n i = i + 1\n else if L[i] ≤ R[j]\n A[k] = L[i]\n i = i + 1\n else\n A[k] = R[j]\n j = j + 1"
}
] | false |
[] |
02-2.3-3
|
02
|
2.3
|
2.3-3
|
docs/Chap02/2.3.md
|
Use mathematical induction to show that when $n$ is an exact power of $2$, the solution of the recurrence
$$
T(n) =
\begin{cases}
2 & \text{if } n = 2, \\\\
2T(n / 2) + n & \text{if } n = 2^k, \text{for } k > 1
\end{cases}
$$
is $T(n) = n\lg n$.
|
- Base case
For $n = 2^1$, $T(n) = 2\lg 2 = 2$.
- Inductive step
  Suppose the claim holds for $n = 2^k$, i.e., $T(2^k) = 2^k \lg 2^k = 2^k k$.
For $n = 2^{k + 1}$,
$$
\begin{aligned}
T(n) & = 2T(2^{k + 1} / 2) + 2^{k + 1} \\\\
& = 2T(2^k) + 2^{k + 1} \\\\
& = 2 \cdot 2^kk + 2^{k + 1} \\\\
& = 2^{k + 1}(k + 1) \\\\
& = 2^{k + 1} \lg 2^{k + 1} \\\\
& = n\lg n.
\end{aligned}
$$
By P.M.I., $T(n) = n\lg n$, when $n$ is an exact power of $2$.
|
[] | false |
[] |
02-2.3-4
|
02
|
2.3
|
2.3-4
|
docs/Chap02/2.3.md
|
We can express insertion sort as a recursive procedure as follows. In order to sort $A[1..n]$, we recursively sort $A[1..n - 1]$ and then insert $A[n]$ into the sorted array $A[1..n - 1]$. Write a recurrence for the running time of this recursive version of insertion sort.
|
It takes $\Theta(n)$ time in the worst case to insert $A[n]$ into the sorted array $A[1..n - 1]$. Therefore, the recurrence
$$
T(n) = \begin{cases}
\Theta(1) & \text{if } n = 1, \\\\
T(n - 1) + \Theta(n) & \text{if } n > 1.
\end{cases}
$$
The solution of the recurrence is $\Theta(n^2)$.
|
[] | false |
[] |
02-2.3-5
|
02
|
2.3
|
2.3-5
|
docs/Chap02/2.3.md
|
Referring back to the searching problem (see Exercise 2.1-3), observe that if the sequence $A$ is sorted, we can check the midpoint of the sequence against $v$ and eliminate half of the sequence from further consideration. The **_binary search_** algorithm repeats this procedure, halving the size of the remaining portion of the sequence each time. Write pseudocode, either iterative or recursive, for binary search. Argue that the worst-case running time of binary search is $\Theta(\lg n)$.
|
- Iterative:
```cpp
ITERATIVE-BINARY-SEARCH(A, v, low, high)
while low ≤ high
mid = floor((low + high) / 2)
if v == A[mid]
return mid
else if v > A[mid]
low = mid + 1
else high = mid - 1
return NIL
```
- Recursive:
```cpp
RECURSIVE-BINARY-SEARCH(A, v, low, high)
if low > high
return NIL
mid = floor((low + high) / 2)
if v == A[mid]
return mid
else if v > A[mid]
return RECURSIVE-BINARY-SEARCH(A, v, mid + 1, high)
else return RECURSIVE-BINARY-SEARCH(A, v, low, mid - 1)
```
Each comparison of $v$ with the middle element eliminates half of the remaining elements, so the search continues on a range of half the size.
The recurrence
$$
T(n) = \begin{cases}
\Theta(1) & \text{if } n = 1, \\\\
T(n / 2) + \Theta(1) & \text{if } n > 1.
\end{cases}
$$
The solution of the recurrence is $T(n) = \Theta(\lg n)$.
|
[
{
"lang": "cpp",
"code": " ITERATIVE-BINARY-SEARCH(A, v, low, high)\n while low ≤ high\n mid = floor((low + high) / 2)\n if v == A[mid]\n return mid\n else if v > A[mid]\n low = mid + 1\n else high = mid - 1\n return NIL"
},
{
"lang": "cpp",
"code": " RECURSIVE-BINARY-SEARCH(A, v, low, high)\n if low > high\n return NIL\n mid = floor((low + high) / 2)\n if v == A[mid]\n return mid\n else if v > A[mid]\n return RECURSIVE-BINARY-SEARCH(A, v, mid + 1, high)\n else return RECURSIVE-BINARY-SEARCH(A, v, low, mid - 1)"
}
] | false |
[] |
02-2.3-6
|
02
|
2.3
|
2.3-6
|
docs/Chap02/2.3.md
|
Observe that the **while** loop of lines 5–7 of the $\text{INSERTION-SORT}$ procedure in Section 2.1 uses a linear search to scan (backward) through the sorted subarray $A[i..j - 1]$. Can we use a binary search (see Exercise 2.3-5) instead to improve the overall worst-case running time of insertion sort to $\Theta(n\lg n)$?
|
Each time the **while** loop of lines 5-7 of $\text{INSERTION-SORT}$ scans backward through the sorted subarray $A[1..j - 1]$. The loop not only searches for the proper place for $A[j]$, but it also moves each of the array elements that are bigger than $A[j]$ one position to the right (line 6). These movements take $\Theta(j)$ time in the worst case, which occurs when all the $j - 1$ elements preceding $A[j]$ are larger than $A[j]$. Using binary search reduces the search to $\Theta(\lg j)$ time, but this is still dominated by the $\Theta(j)$ time for moving elements.
Therefore, we can't improve the overall worst-case running time of insertion sort to $\Theta(n\lg n)$, as the summation below shows.
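Summing the worst-case shifting cost over all iterations makes this explicit:
$$\sum_{j = 2}^n \Theta(j) = \Theta(n^2),$$
so the total remains quadratic even if every search were free.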
|
[] | false |
[] |
02-2.3-7
|
02
|
2.3
|
2.3-7 $\star$
|
docs/Chap02/2.3.md
|
Describe a $\Theta(n\lg n)$-time algorithm that, given a set $S$ of $n$ integers and another integer $x$, determines whether or not there exist two elements in $S$ whose sum is exactly $x$.
|
First, sort $S$ into an array $A$, which takes $\Theta(n\lg n)$ time.
Then, for each element $A[i]$, $i = 1, \dots, n$, search $A[i + 1..n]$ for $x - A[i]$ by binary search, which takes $O(\lg n)$ time per element; see the sketch below.
- If $x - A[i]$ is found, the desired pair exists and we can return it;
- otherwise, continue with the next iteration.
The time complexity of the algorithm is $\Theta(n\lg n) + n \cdot O(\lg n) = \Theta(n\lg n)$.
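A runnable C++ sketch of this algorithm (the function name `hasPairWithSum` is ours, not from the text):
```cpp
#include <algorithm>
#include <vector>

// Returns true if two elements of S sum to exactly x.
// Sorting costs Theta(n lg n); each binary search costs O(lg n).
bool hasPairWithSum(std::vector<int> S, int x) {  // S by value: we sort a copy
    std::sort(S.begin(), S.end());
    for (std::size_t i = 0; i < S.size(); ++i) {
        // Search only the suffix after i, so an element is not paired with itself.
        if (std::binary_search(S.begin() + i + 1, S.end(), x - S[i]))
            return true;
    }
    return false;
}
```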
|
[] | false |
[] |
02-2-1
|
02
|
2-1
|
2-1
|
docs/Chap02/Problems/2-1.md
|
Although merge sort runs in $\Theta(n\lg n)$ worst-case time and insertion sort runs in $\Theta(n^2)$ worst-case time, the constant factors in insertion sort can make it faster in practice for small problem sizes on many machines. Thus, it makes sense to **_coarsen_** the leaves of the recursion by using insertion sort within merge sort when subproblems become sufficiently small. Consider a modification to merge sort in which $n / k$ sublists of length $k$ are sorted using insertion sort and then merged using the standard merging mechanism, where $k$ is a value to be determined.
**a.** Show that insertion sort can sort the $n / k$ sublists, each of length $k$, in $\Theta(nk)$ worst-case time.
**b.** Show how to merge the sublists in $\Theta(n\lg(n / k))$ worst-case time.
**c.** Given that the modified algorithm runs in $\Theta(nk + n\lg(n / k))$ worst-case time, what is the largest value of $k$ as a function of $n$ for which the modified algorithm has the same running time as standard merge sort, in terms of $\Theta$-notation?
**d.** How should we choose $k$ in practice?
|
**a.** The worst-case time to sort a list of length $k$ by insertion sort is $\Theta(k^2)$. Therefore, sorting $n / k$ sublists, each of length $k$ takes $\Theta(k^2 \cdot n / k) = \Theta(nk)$ worst-case time.
**b.** We have $n / k$ sorted sublists each of length $k$. To merge these $n / k$ sorted sublists to a single sorted list of length $n$, we have to take $2$ sublists at a time and continue to merge them. The process can be visualized as a tree with $\lg(n / k)$ levels and we compare $n$ elements in each level. Therefore, the worst-case time to merge the sublists is $\Theta(n\lg(n / k))$.
**c.** The modified algorithm has the same running time as standard merge sort when $\Theta(nk + n\lg(n / k)) = \Theta(n\lg n)$. The largest such $k$ is $\Theta(\lg n)$: assume $k = \Theta(\lg n)$,
$$
\begin{aligned}
\Theta(nk + n\lg(n / k))
& = \Theta(nk + n\lg n - n\lg k) \\\\
& = \Theta(n\lg n + n\lg n - n\lg(\lg n)) \\\\
& = \Theta(2n\lg n - n\lg(\lg n)) \\\\
& = \Theta(n\lg n).
\end{aligned}
$$
**d.** In practice, choose $k$ to be the largest sublist length on which insertion sort is faster than merge sort, determined empirically on the target machine.
|
[] | false |
[] |
02-2-2
|
02
|
2-2
|
2-2
|
docs/Chap02/Problems/2-2.md
|
Bubblesort is a popular, but inefficient, sorting algorithm. It works by repeatedly swapping adjacent elements that are out of order.
```cpp
BUBBLESORT(A)
for i = 1 to A.length - 1
for j = A.length downto i + 1
if A[j] < A[j - 1]
exchange A[j] with A[j - 1]
```
**a.** Let $A'$ denote the output of $\text{BUBBLESORT}(A)$ To prove that $\text{BUBBLESORT}$ is correct, we need to prove that it terminates and that
$$A'[1] \le A'[2] \le \cdots \le A'[n], \tag{2.3}$$
where $n = A.length$. In order to show that $\text{BUBBLESORT}$ actually sorts, what else do we need to prove?
The next two parts will prove inequality $\text{(2.3)}$.
**b.** State precisely a loop invariant for the **for** loop in lines 2–4, and prove that this loop invariant holds. Your proof should use the structure of the loop invariant proof presented in this chapter.
**c.** Using the termination condition of the loop invariant proved in part (b), state a loop invariant for the **for** loop in lines 1–4 that will allow you to prove inequality $\text{(2.3)}$. Your proof should use the structure of the loop invariant proof presented in this chapter.
**d.** What is the worst-case running time of bubblesort? How does it compare to the running time of insertion sort?
|
**a.** $A'$ consists of the elements in $A$ but in sorted order.
**b.** **Loop invariant:** At the start of each iteration of the **for** loop of lines 2-4, the subarray $A[j..n]$ consists of the elements originally in $A[j..n]$ before entering the loop but possibly in a different order and the first element $A[j]$ is the smallest among them.
**Initialization:** Initially the subarray contains only the last element $A[n]$, which is trivially the smallest element of the subarray.
**Maintenance:** In every step we compare $A[j]$ with $A[j - 1]$ and make $A[j - 1]$ the smallest among them. After the iteration, the length of the subarray increases by one and the first element is the smallest of the subarray.
**Termination:** The loop terminates when $j = i$. According to the statement of loop invariant, $A[i]$ is the smallest among $A[i..n]$ and $A[i..n]$ consists of the elements originally in $A[i..n]$ before entering the loop.
**c.** **Loop invariant:** At the start of each iteration of the **for** loop of lines 1-4, the subarray $A[1..i − 1]$ consists of the $i - 1$ smallest elements in $A[1..n]$ in sorted order. $A[i..n]$ consists of the $n - i + 1$ remaining elements in $A[1..n]$.
**Initialization:** Initially the subarray $A[1..i − 1]$ is empty, so the invariant holds trivially.
**Maintenance:** From part (b), after the execution of the inner loop, $A[i]$ will be the smallest element of the subarray $A[i..n]$. And at the beginning of the outer loop, $A[1..i − 1]$ consists of elements that are smaller than the elements of $A[i..n]$, in sorted order. So, after the execution of the outer loop, subarray $A[1..i]$ will consist of elements that are smaller than the elements of $A[i + 1..n]$, in sorted order.
**Termination:** The loop terminates when $i = A.length$. At that point the array $A[1..n]$ consists of all elements in sorted order.
**d.** The $i$th iteration of the **for** loop of lines 1-4 will cause $n − i$ iterations of the **for** loop of lines 2-4, each with constant time execution, so the worst-case running time of bubble sort is $\Theta(n^2)$ which is same as the worst-case running time of insertion sort.
|
[
{
"lang": "cpp",
"code": "> BUBBLESORT(A)\n> for i = 1 to A.length - 1\n> for j = A.length downto i + 1\n> if A[j] < A[j - 1]\n> exchange A[j] with A[j - 1]\n>"
}
] | false |
[] |
02-2-3
|
02
|
2-3
|
2-3
|
docs/Chap02/Problems/2-3.md
|
The following code fragment implements Horner's rule for evaluating a polynomial
$$
\begin{aligned}
P(x) & = \sum_{k = 0}^n a_k x^k \\\\
& = a_0 + x(a_1 + x (a_2 + \cdots + x(a_{n - 1} + x a_n) \cdots)),
\end{aligned}
$$
given the coefficients $a_0, a_1, \ldots, a_n$ and a value of $x$:
```cpp
y = 0
for i = n downto 0
y = a[i] + x * y
```
**a.** In terms of $\Theta$-notation, what is the running time of this code fragment for Horner's rule?
**b.** Write pseudocode to implement the naive polynomial-evaluation algorithm that computes each term of the polynomial from scratch. What is the running time of this algorithm? How does it compare to Horner's rule?
**c.** Consider the following loop invariant: At the start of each iteration of the **for** loop of lines 2-3,
$$y = \sum_{k = 0}^{n - (i + 1)} a_{k + i + 1} x^k.$$
Interpret a summation with no terms as equaling $0$. Following the structure of the loop invariant proof presented in this chapter, use this loop invariant to show that, at termination, $y = \sum_{k = 0}^n a_k x^k$.
**d.** Conclude by arguing that the given code fragment correctly evaluates a polynomial characterized by the coefficients $a_0, a_1, \ldots, a_n$.
|
**a.** $\Theta(n)$.
**b.**
```cpp
NAIVE-HORNER()
y = 0
for k = 0 to n
temp = 1
for i = 1 to k
temp = temp * x
y = y + a[k] * temp
```
The running time is $\Theta(n^2)$, because of the nested loop. It is obviously slower.
**c.** **Initialization:** Before the first iteration, $i = n$, so the summation has no terms, which implies $y = 0$, matching line 1.
**Maintenance:** Assuming the invariant holds at the start of the $i$th iteration, at the end of that iteration we have
$$
\begin{aligned}
y & = a_i + x \sum_{k = 0}^{n - (i + 1)} a_{k + i + 1} x^k \\\\
& = a_i x^0 + \sum_{k = 0}^{n - i - 1} a_{k + i + 1} x^{k + 1} \\\\
& = a_i x^0 + \sum_{k = 1}^{n - i} a_{k + i} x^k \\\\
& = \sum_{k = 0}^{n - i} a_{k + i} x^k.
\end{aligned}
$$
**Termination:** The loop terminates at $i = -1$. Substituting $i = -1$ into the invariant, we get
$$y = \sum_{k = 0}^{n - i - 1} a_{k + i + 1} x^k = \sum_{k = 0}^n a_k x^k.$$
**d.** By the loop invariant, at termination $y = \sum_{k = 0}^n a_k x^k$, so the code fragment correctly evaluates the polynomial characterized by the coefficients $a_0, a_1, \ldots, a_n$.
|
[
{
"lang": "cpp",
"code": "> y = 0\n> for i = n downto 0\n> y = a[i] + x * y\n>"
},
{
"lang": "cpp",
"code": "NAIVE-HORNER()\n y = 0\n for k = 0 to n\n temp = 1\n for i = 1 to k\n temp = temp * x\n y = y + a[k] * temp"
}
] | false |
[] |
02-2-4
|
02
|
2-4
|
2-4
|
docs/Chap02/Problems/2-4.md
|
Let $A[1..n]$ be an array of $n$ distinct numbers. If $i < j$ and $A[i] > A[j]$, then the pair $(i, j)$ is called an **_inversion_** of $A$.
**a.** List the five inversions in the array $\langle 2, 3, 8, 6, 1 \rangle$.
**b.** What array with elements from the set $\\{1, 2, \ldots, n\\}$ has the most inversions? How many does it have?
**c.** What is the relationship between the running time of insertion sort and the number of inversions in the input array? Justify your answer.
**d.** Give an algorithm that determines the number of inversions in any permutation of $n$ elements in $\Theta(n\lg n)$ worst-case time. ($\textit{Hint:}$ Modify merge sort).
|
**a.** $(1, 5)$, $(2, 5)$, $(3, 4)$, $(3, 5)$, $(4, 5)$.
**b.** The array $\langle n, n - 1, \dots, 1 \rangle$ has the most inversions $(n - 1) + (n - 2) + \cdots + 1 = n(n - 1) / 2$.
**c.** The running time of insertion sort is $\Theta(n + d)$, where $d$ is the number of inversions. Let $I(i)$ denote the number of $j < i$ such that $A[j] > A[i]$. Then $\sum_{i = 1}^n I(i)$ equals the number of inversions in $A$.
Now consider the **while** loop on lines 5-7 of the insertion sort algorithm. The loop executes once for each element of $A$ that has index less than $j$ and is larger than $A[j]$; thus, it executes $I(j)$ times. We reach this **while** loop once for each iteration of the **for** loop, so the number of constant-time steps of insertion sort is $\sum_{j = 1}^n I(j)$, which is exactly the inversion number of $A$.
**d.** We'll call our algorithm $\text{COUNT-INVERSIONS}$ for modified merge sort. In addition to sorting $A$, it will also keep track of the number of inversions.
$\text{COUNT-INVERSIONS}(A, p, r)$ sorts $A[p..r]$ and returns the number of inversions in the elements of $A[p..r]$, so $left$ and $right$ track the number of inversions of the form $(i, j)$ where $i$ and $j$ are both in the same half of $A$.
$\text{MERGE-INVERSIONS}(A, p, q, r)$ returns the number of inversions of the form $(i, j)$ where $i$ is in the first half of the array and $j$ is in the second half. Summing these up gives the total number of inversions in $A$. The runtime of the modified algorithm is $\Theta(n\lg n)$, which is same as merge sort since we only add an additional constant-time operation to some of the iterations in some of the loops.
```cpp
COUNT-INVERSIONS(A, p, r)
if p < r
q = floor((p + r) / 2)
left = COUNT-INVERSIONS(A, p, q)
right = COUNT-INVERSIONS(A, q + 1, r)
inversions = MERGE-INVERSIONS(A, p, q, r) + left + right
return inversions
```
```cpp
MERGE-INVERSIONS(A, p, q, r)
n1 = q - p + 1
n2 = r - q
let L[1..n1 + 1] and R[1..n2 + 1] be new arrays
for i = 1 to n1
L[i] = A[p + i - 1]
for j = 1 to n2
R[j] = A[q + j]
L[n1 + 1] = ∞
R[n2 + 1] = ∞
i = 1
j = 1
inversions = 0
for k = p to r
if L[i] <= R[j]
A[k] = L[i]
i = i + 1
else
inversions = inversions + n1 - i + 1
A[k] = R[j]
j = j + 1
return inversions
```
|
[
{
"lang": "cpp",
"code": "COUNT-INVERSIONS(A, p, r)\n if p < r\n q = floor((p + r) / 2)\n left = COUNT-INVERSIONS(A, p, q)\n right = COUNT-INVERSIONS(A, q + 1, r)\n inversions = MERGE-INVERSIONS(A, p, q, r) + left + right\n return inversions"
},
{
"lang": "cpp",
"code": "MERGE-INVERSIONS(A, p, q, r)\n n1 = q - p + 1\n n2 = r - q\n let L[1..n1 + 1] and R[1..n2 + 1] be new arrays\n for i = 1 to n1\n L[i] = A[p + i - 1]\n for j = 1 to n2\n R[j] = A[q + j]\n L[n1 + 1] = ∞\n R[n2 + 1] = ∞\n i = 1\n j = 1\n inversions = 0\n for k = p to r\n if L[i] <= R[j]\n A[k] = L[i]\n i = i + 1\n else\n inversions = inversions + n1 - i + 1\n A[k] = R[j]\n j = j + 1\n return inversions"
}
] | false |
[] |
03-3.1-1
|
03
|
3.1
|
3.1-1
|
docs/Chap03/3.1.md
|
Let $f(n)$ and $g(n)$ be asymptotically nonnegative functions. Using the basic definition of $\Theta$-notation, prove that $\max(f(n), g(n)) = \Theta(f(n) + g(n))$.
|
For asymptotically nonnegative functions $f(n)$ and $g(n)$, we know that
$$
\begin{aligned}
\exists n_1, n_2: & f(n) \ge 0 & \text{for} \, n > n_1 \\\\
& g(n) \ge 0 & \text{for} \, n > n_2.
\end{aligned}
$$
Let $n_0 = \max(n_1, n_2)$. Then the following inequalities hold for $n > n_0$:
$$
\begin{aligned}
f(n) & \le \max(f(n), g(n)) \\\\
g(n) & \le \max(f(n), g(n)) \\\\
(f(n) + g(n))/2 & \le \max(f(n), g(n)) \\\\
\max(f(n), g(n)) & \le f(n) + g(n).
\end{aligned}
$$
Combining the last two inequalities, we get
$$0 \le \frac{f(n) + g(n)}{2} \le \max{(f(n), g(n))} \le f(n) + g(n).$$
which is the definition of $\max(f(n), g(n)) = \Theta(f(n) + g(n))$ with $c_1 = \frac{1}{2}$ and $c_2 = 1$.
|
[] | false |
[] |
03-3.1-2
|
03
|
3.1
|
3.1-2
|
docs/Chap03/3.1.md
|
Show that for any real constants $a$ and $b$, where $b > 0$,
$$(n + a)^b = \Theta(n^b). \tag{3.2}$$
|
For real constants $a$ and $b > 0$, note that for all $n \ge 2|a|$ we have $|a| \le n / 2$, and hence
$$\frac{n}{2} \le n + a \le 2n.$$
Because $b > 0$, raising to the $b$th power preserves these inequalities:
$$2^{-b} n^b \le (n + a)^b \le 2^b n^b \quad \text{for all } n \ge 2|a|.$$
Taking $c_1 = 2^{-b}$, $c_2 = 2^b$, and $n_0 = \max(2|a|, 1)$ satisfies the definition of $\Theta$-notation.
$$\implies (n + a)^b = \Theta(n^b).$$
|
[] | false |
[] |
03-3.1-3
|
03
|
3.1
|
3.1-3
|
docs/Chap03/3.1.md
|
Explain why the statement, "The running time of algorithm $A$ is at least $O(n^2)$," is meaningless.
|
$T(n)$: running time of algorithm $A$. We just care about the upper bound and the lower bound of $T(n)$.
The statement: $T(n)$ is at least $O(n^2)$.
- Upper bound: Because "$T(n)$ is at least $O(n^2)$", there's no information about the upper bound of $T(n)$.
- Lower bound: Assume $f(n) = O(n^2)$; then the statement says $T(n) \ge f(n)$ for some such $f$, but $f(n)$ could be any function that grows no faster than $n^2$, e.g., a constant or $n$. So the statement gives no information about the lower bound of $T(n)$ either.
Therefore, the statement, "The running time of algorithm $A$ is at least $O(n^2)$," is meaningless.
|
[] | false |
[] |
03-3.1-4
|
03
|
3.1
|
3.1-4
|
docs/Chap03/3.1.md
|
Is $2^{n + 1} = O(2^n)$? Is $2^{2n} = O(2^n)$?
|
- True. Note that $2^{n + 1} = 2 \times 2^n$. We can choose $c \ge 2$ and $n_0 = 0$, such that $0 \le 2^{n + 1} \le c \times 2^n$ for all $n \ge n_0$. By definition, $2^{n + 1} = O(2^n)$.
- False. Note that $2^{2n} = 2^n \times 2^n = 4^n$. For any constant $c$, the inequality $4^n \le c \times 2^n$ would require $2^n \le c$, which fails for all $n > \lg c$. Hence no $c$ and $n_0$ exist such that $0 \le 2^{2n} \le c \times 2^n$ for all $n \ge n_0$.
|
[] | false |
[] |
03-3.1-5-1
|
03
|
3.1
|
3.1-5
|
docs/Chap03/3.1.md
|
Prove Theorem 3.1.
|
The theorem states:
> For any two functions $f(n)$ and $g(n)$, we have $f(n) = \Theta(g(n))$ if and only if $f(n) = O(g(n))$ and $f(n) = \Omega(g(n))$.
From $f = \Theta(g(n))$, we have that
$$0 \le c_1 g(n) \le f(n) \le c_2g(n) \text{ for } n > n_0.$$
We can pick the constants from here and use them in the definitions of $O$ and $\Omega$ to show that both hold.
From $f(n) = \Omega(g(n))$ and $f(n) = O(g(n))$, we have that
$$
\begin{aligned}
& 0 \le c_3g(n) \le f(n) & \text{ for all } n \ge n_1 \\\\
\text{and } & 0 \le f(n) \le c_4g(n) & \text{ for all } n \ge n_2.
\end{aligned}
$$
If we let $n_3 = \max(n_1, n_2)$ and merge the inequalities, we get
$$0 \le c_3g(n) \le f(n) \le c_4g(n) \text{ for all } n > n_3.$$
This is the definition of $\Theta$.
|
[] | false |
[] |
03-3.1-5-2
|
03
|
3.1
|
3.1-5
|
docs/Chap03/3.1.md
|
For any two functions $f(n)$ and $g(n)$, we have $f(n) = \Theta(g(n))$ if and only if $f(n) = O(g(n))$ and $f(n) = \Omega(g(n))$.
|
The theorem states:
> For any two functions $f(n)$ and $g(n)$, we have $f(n) = \Theta(g(n))$ if and only if $f(n) = O(g(n))$ and $f(n) = \Omega(g(n))$.
From $f = \Theta(g(n))$, we have that
$$0 \le c_1 g(n) \le f(n) \le c_2g(n) \text{ for } n > n_0.$$
We can pick the constants from here and use them in the definitions of $O$ and $\Omega$ to show that both hold.
From $f(n) = \Omega(g(n))$ and $f(n) = O(g(n))$, we have that
$$
\begin{aligned}
& 0 \le c_3g(n) \le f(n) & \text{ for all } n \ge n_1 \\\\
\text{and } & 0 \le f(n) \le c_4g(n) & \text{ for all } n \ge n_2.
\end{aligned}
$$
If we let $n_3 = \max(n_1, n_2)$ and merge the inequalities, we get
$$0 \le c_3g(n) \le f(n) \le c_4g(n) \text{ for all } n > n_3.$$
This is the definition of $\Theta$.
|
[] | false |
[] |
03-3.1-6
|
03
|
3.1
|
3.1-6
|
docs/Chap03/3.1.md
|
Prove that the running time of an algorithm is $\Theta(g(n))$ if and only if its worst-case running time is $O(g(n))$ and its best-case running time is $\Omega(g(n))$.
|
If $T_w$ is the worst-case running time and $T_b$ is the best-case running time, we know that
$$
\begin{aligned}
& 0 \le c_1g(n) \le T_b(n) & \text{ for } n > n_b \\\\
\text{and } & 0 \le T_w(n) \le c_2g(n) & \text{ for } n > n_w.
\end{aligned}
$$
Combining them we get
$$0 \le c_1g(n) \le T_b(n) \le T_w(n) \le c_2g(n) \text{ for } n > \max(n_b, n_w).$$
Since the running time is bound between $T_b$ and $T_w$ and the above is the definition of the $\Theta$-notation, proved.
|
[] | false |
[] |
03-3.1-7
|
03
|
3.1
|
3.1-7
|
docs/Chap03/3.1.md
|
Prove that $o(g(n)) \cap \omega(g(n))$ is the empty set.
|
Suppose for contradiction that some function $f(n)$ belongs to $o(g(n)) \cap \omega(g(n))$.
We know that for any $c_1 > 0$, $c_2 > 0$,
$$
\begin{aligned}
& \exists n_1 > 0: 0 \le f(n) < c_1g(n) \text{ for all } n \ge n_1 \\\\
\text{and } & \exists n_2 > 0: 0 \le c_2g(n) < f(n) \text{ for all } n \ge n_2.
\end{aligned}
$$
If we pick $n_0 = \max(n_1, n_2)$ and let $c_1 = c_2$, then for all $n \ge n_0$ we get
$$c_1g(n) < f(n) < c_1g(n),$$
which is a contradiction. Hence no such $f(n)$ exists, and the intersection is the empty set.
|
[] | false |
[] |
03-3.1-8
|
03
|
3.1
|
3.1-8
|
docs/Chap03/3.1.md
|
We can extend our notation to the case of two parameters $n$ and $m$ that can go to infinity independently at different rates. For a given function $g(n, m)$ we denote $O(g(n, m))$ the set of functions:
$$
\begin{aligned}
O(g(n, m)) = \\{f(n, m):
& \text{ there exist positive constants } c, n_0, \text{ and } m_0 \\\\
& \text{ such that } 0 \le f(n, m) \le cg(n, m) \\\\
& \text{ for all } n \ge n_0 \text{ or } m \ge m_0.\\}
\end{aligned}
$$
Give corresponding definitions for $\Omega(g(n, m))$ and $\Theta(g(n, m))$.
|
$$
\begin{aligned}
\Omega(g(n, m)) = \\{ f(n, m):
& \text{ there exist positive constants $c$, $n_0$, and $m_0$ such that } \\\\
& \text{ $0 \le cg(n, m) \le f(n, m)$ for all $n \ge n_0$ and $m \ge m_0$}.\\}
\end{aligned}
$$
$$
\begin{aligned}
\Theta(g(n, m)) = \\{ f(n, m):
& \text{ there exist positive constants $c_1$, $c_2$, $n_0$, and $m_0$ such that } \\\\
& \text{ $0 \le c_1 g(n, m) \le f(n, m) \le c_2 g(n, m)$ for all $n \ge n_0$ and $m \ge m_0$}.\\}
\end{aligned}
$$
|
[] | false |
[] |
03-3.2-1
|
03
|
3.2
|
3.2-1
|
docs/Chap03/3.2.md
|
Show that if $f(n)$ and $g(n)$ are monotonically increasing functions, then so are the functions $f(n) + g(n)$ and $f(g(n))$, and if $f(n)$ and $g(n)$ are in addition nonnegative, then $f(n) \cdot g(n)$ is monotonically increasing.
|
$$
\begin{aligned}
f(m) & \le f(n) \quad \text{ for } m \le n \\\\
g(m) & \le g(n) \quad \text{ for } m \le n, \\\\
\to f(m) + g(m) & \le f(n) + g(n),
\end{aligned}
$$
which proves the first function.
Then
$$f(g(m)) \le f(g(n)) \text{ for } m \le n.$$
This is true, since $g(m) \le g(n)$ and $f(n)$ is monotonically increasing.
If both functions are nonnegative, then we can multiply the two inequalities and we get
$$f(m) \cdot g(m) \le f(n) \cdot g(n).$$
|
[] | false |
[] |
03-3.2-2
|
03
|
3.2
|
3.2-2
|
docs/Chap03/3.2.md
|
Prove equation $\text{(3.16)}$.
|
$$
\begin{aligned}
a^{\log_b c} = a^\frac{\log_a c}{\log_a b} = (a^{\log_a c})^{\frac{1}{\log_a b}} = c^{\log_b a}
\end{aligned}
$$
|
[] | false |
[] |
03-3.2-3
|
03
|
3.2
|
3.2-3
|
docs/Chap03/3.2.md
|
Prove equation $\text{(3.19)}$. Also prove that $n! = \omega(2^n)$ and $n! = o(n^n)$.
$$\lg(n!) = \Theta(n\lg n) \tag{3.19}$$
|
We can use **Stirling's approximation** to prove these three equations.
For equation $\text{(3.19)}$,
$$
\begin{aligned}
\lg(n!)
& = \lg\Bigg(\sqrt{2\pi n}\Big(\frac{n}{e}\Big)^n\Big(1 + \Theta(\frac{1}{n})\Big)\Bigg) \\\\
& = \lg\sqrt{2\pi n } + \lg\Big(\frac{n}{e}\Big)^n + \lg\Big(1+\Theta(\frac{1}{n})\Big) \\\\
& = \Theta(\lg n) + n\lg{\frac{n}{e}} + \lg\Big(\Theta(1) + \Theta(\frac{1}{n})\Big) \\\\
& = \Theta(\lg n) + \Theta(n\lg n) + \Theta(\frac{1}{n}) \\\\
& = \Theta(n\lg n).
\end{aligned}
$$
For $n! = \omega(2^n)$,
$$
\begin{aligned}
\lim_{n \to \infty} \frac{2^n}{n!}
& = \lim_{n \to \infty} \frac{2^n}{\sqrt{2\pi n} \left(\frac{n}{e}\right)^n \left(1 + \Theta\left(\frac{1}{n}\right)\right)} \\\\
& = \lim_{n \to \infty} \frac{1}{\sqrt{2\pi n} \left(1 + \Theta\left(\frac{1}{n}\right)\right)} \left(\frac{2e}{n}\right)^n \\\\
& \le \lim_{n \to \infty} \left(\frac{2e}{n}\right)^n \\\\
& \le \lim_{n \to \infty} \frac{1}{2^n} = 0,
\end{aligned}
$$
where the last step holds for $n > 4e$.
For $n! = o(n^n)$,
$$
\begin{aligned}
\lim_{n \to \infty} \frac{n^n}{n!}
& = \lim_{n \to \infty} \frac{n^n}{\sqrt{2\pi n} \left(\frac{n}{e}\right)^n \left(1 + \Theta\left(\frac{1}{n}\right)\right)} \\\\
& = \lim_{n \to \infty} \frac{e^n}{\sqrt{2\pi n} \left(1 + \Theta\left(\frac{1}{n}\right)\right)} \\\\
& = \lim_{n \to \infty} \Theta\Big(\frac{1}{\sqrt n}\Big)e^n \\\\
& \ge \lim_{n \to \infty} \frac{e^n}{c\sqrt n} & \text{(for some constant $c > 0$)}\\\\
& \ge \lim_{n \to \infty} \frac{e^n}{cn} \\\\
& = \infty.
\end{aligned}
$$
|
[] | false |
[] |
03-3.2-4
|
03
|
3.2
|
3.2-4 $\star$
|
docs/Chap03/3.2.md
|
Is the function $\lceil \lg n \rceil!$ polynomially bounded? Is the function $\lceil \lg\lg n \rceil!$ polynomially bounded?
|
Proving that a function $f(n)$ is polynomially bounded is equivalent to proving that $\lg(f(n)) = O(\lg n)$ for the following reasons.
- If $f$ is polynomially bounded, then there exist constants $c$, $k$, $n_0$ such that for all $n \ge n_0$, $f(n) \le cn^k$. Hence, $\lg(f(n)) \le \lg c + k\lg n = O(\lg n)$.
- Conversely, if $\lg(f(n)) = O(\lg n)$, then $\lg(f(n)) \le c\lg n$ for some constant $c$ and all sufficiently large $n$, so $f(n) \le 2^{c\lg n} = n^c$, and $f$ is polynomially bounded.
In the following proofs, we will make use of the following two facts:
1. $\lg(n!) = \Theta(n\lg n)$
2. $\lceil \lg n \rceil = \Theta(\lg n)$
$\lceil \lg n \rceil!$ is not polynomially bounded because
$$
\begin{aligned}
\lg(\lceil \lg n \rceil!)
& = \Theta(\lceil \lg n \rceil \lg \lceil \lg n \rceil) \\\\
& = \Theta(\lg n\lg\lg n) \\\\
& = \omega(\lg n) \\\\
& \ne O(\lg n).
\end{aligned}
$$
$\lceil \lg\lg n \rceil!$ is polynomially bounded because
$$
\begin{aligned}
\lg(\lceil \lg\lg n \rceil!)
& = \Theta(\lceil \lg\lg n \rceil \lg \lceil \lg\lg n \rceil) \\\\
& = \Theta(\lg\lg n\lg\lg\lg n) \\\\
& = o((\lg\lg n)^2) \\\\
& = o(\lg^2(\lg n)) \\\\
& = o(\lg n) \\\\
& = O(\lg n).
\end{aligned}
$$
The last step above follows from the property that any polylogarithmic function grows more slowly than any positive polynomial function, i.e., that for constants $a, b > 0$, we have $\lg^b n = o(n^a)$. Substitute $\lg n$ for $n$, $2$ for $b$, and $1$ for $a$, giving $\lg^2(\lg n) = o(\lg n)$.
Therefore, $\lg(\lceil \lg\lg n \rceil!) = O(\lg n)$, and so $\lceil \lg\lg n \rceil!$ is polynomially bounded.
|
[] | false |
[] |
03-3.2-5
|
03
|
3.2
|
3.2-5 $\star$
|
docs/Chap03/3.2.md
|
Which is asymptotically larger: $\lg(\lg^\*n)$ or $\lg^\*(\lg n)$?
|
We have $\lg^\* 2^n = 1 + \lg^\* n$,
$$
\begin{aligned}
\lim_{n \to \infty} \frac{\lg(\lg^\*n)}{\lg^\*(\lg n)}
& = \lim_{n \to \infty} \frac{\lg(\lg^\* 2^n)}{\lg^\*(\lg 2^n)} \\\\
& = \lim_{n \to \infty} \frac{\lg(1 + \lg^\* n)}{\lg^\* n} \\\\
& = \lim_{m \to \infty} \frac{\lg(1 + m)}{m} \quad (m = \lg^\* n) \\\\
& = \lim_{m \to \infty} \frac{1}{(1 + m)\ln 2} \\\\
& = 0.
\end{aligned}
$$
Therefore, we have that $\lg^*(\lg n)$ is asymptotically larger.
|
[] | false |
[] |
03-3.2-6
|
03
|
3.2
|
3.2-6
|
docs/Chap03/3.2.md
|
Show that the golden ratio $\phi$ and its conjugate $\hat \phi$ both satisfy the equation $x^2 = x + 1$.
|
$$
\begin{aligned}
\phi^2 & = \Bigg(\frac{1 + \sqrt 5}{2}\Bigg)^2 = \frac{6 + 2\sqrt 5}{4} = 1 + \frac{1 + \sqrt 5}{2} = 1 + \phi \\\\
\hat\phi^2 & = \Bigg(\frac{1 - \sqrt 5}{2}\Bigg)^2 = \frac{6 - 2\sqrt 5}{4} = 1 + \frac{1 - \sqrt 5}{2} = 1 + \hat\phi.
\end{aligned}
$$
|
[] | false |
[] |
03-3.2-7
|
03
|
3.2
|
3.2-7
|
docs/Chap03/3.2.md
|
Prove by induction that the $i$th Fibonacci number satisfies the equality
$$F_i = \frac{\phi^i - \hat\phi^i}{\sqrt 5},$$
where $\phi$ is the golden ratio and $\hat\phi$ is its conjugate.
|
- Base case
For $i = 0$,
$$
\begin{aligned}
\frac{\phi^0 - \hat\phi^0}{\sqrt 5}
& = \frac{1 - 1}{\sqrt 5} \\\\
& = 0 \\\\
& = F_0.
\end{aligned}
$$
For $i = 1$,
$$
\begin{aligned}
\frac{\phi^1 - \hat\phi^1}{\sqrt 5}
& = \frac{(1 + \sqrt 5) - (1 - \sqrt 5)}{2 \sqrt 5} \\\\
& = 1 \\\\
& = F_1.
\end{aligned}
$$
- Assume
- $F_{i - 1} = (\phi^{i - 1} - \hat\phi^{i - 1}) / \sqrt 5$ and
- $F_{i - 2} = (\phi^{i - 2} - \hat\phi^{i - 2}) / \sqrt 5$,
$$
\begin{aligned}
F_i & = F_{i - 1} + F_{i - 2} \\\\
& = \frac{\phi^{i - 1} - \hat\phi^{i - 1}}{\sqrt 5} + \frac{\phi^{i - 2} - \hat\phi^{i - 2}}{\sqrt 5} \\\\
& = \frac{\phi^{i - 2}(\phi + 1) - \hat\phi^{i - 2}(\hat\phi + 1)}{\sqrt 5} \\\\
& = \frac{\phi^{i - 2}\phi^2 - \hat\phi^{i - 2}\hat\phi^2}{\sqrt 5} \\\\
& = \frac{\phi^i - \hat\phi^i}{\sqrt 5}.
\end{aligned}
$$
|
[] | false |
[] |
03-3.2-8
|
03
|
3.2
|
3.2-8
|
docs/Chap03/3.2.md
|
Show that $k\ln k = \Theta(n)$ implies $k = \Theta(n / \lg n)$.
|
From the symmetry of $\Theta$,
$$k\ln k = \Theta(n) \Rightarrow n = \Theta(k\ln k).$$
Let's find $\ln n$,
$$\ln n = \Theta(\ln(k\ln k)) = \Theta(\ln k + \ln\ln k) = \Theta(\ln k).$$
Let's divide the two,
$$\frac{n}{\ln n} = \frac{\Theta(k\ln k)}{\Theta(\ln k)} = \Theta\Big({\frac{k\ln k}{\ln k}}\Big) = \Theta(k).$$
|
[] | false |
[] |
03-3-1
|
03
|
3-1
|
3-1
|
docs/Chap03/Problems/3-1.md
|
Let
$$p(n) = \sum_{i = 0}^d a_i n^i,$$
where $a_d > 0$, be a degree-$d$ polynomial in $n$, and let $k$ be a constant. Use the definitions of the asymptotic notations to prove the following properties.
**a.** If $k \ge d$, then $p(n) = O(n^k)$.
**b.** If $k \le d$, then $p(n) = \Omega(n^k)$.
**c.** If $k = d$, then $p(n) = \Theta(n^k)$.
**d.** If $k > d$, then $p(n) = o(n^k)$.
**e.** If $k < d$, then $p(n) = \omega(n^k)$.
|
Let's first show that $p(n) = O(n^d)$. We need to pick $c = a_d + b$ such that
$$\sum\limits_{i = 0}^d a_i n^i = a_d n^d + a_{d - 1}n^{d - 1} + \cdots + a_1n + a_0 \le cn^d.$$
When we divide by $n^d$, we get
$$c = a_d + b \ge a_d + \frac{a_{d - 1}}n + \frac{a_{d - 2}}{n^2} + \cdots + \frac{a_0}{n^d}.$$
and
$$b \ge \frac{a_{d - 1}}n + \frac{a_{d - 2}}{n^2} + \cdots + \frac{a_0}{n^d}.$$
If we choose $b = 1$, then we can choose $n_0$,
$$n_0 = \max(da_{d - 1}, d\sqrt{a_{d - 2}}, \ldots, d\sqrt[d]{a_0}).$$
Now we have $n_0$ and $c$, such that
$$p(n) \le cn^d \quad \text{for } n \ge n_0,$$
which is the definition of $O(n^d)$.
By choosing $b = -1$ we can prove the $\Omega(n^d)$ inequality and thus the $\Theta(n^d)$ inequality.
Proving the other statements is very similar: for $k \ge d$ we have $n^d = O(n^k)$, for $k \le d$ we have $n^d = \Omega(n^k)$, and the strict $o$ and $\omega$ bounds for $k > d$ and $k < d$ follow by the same term-by-term estimates.
|
[] | false |
[] |
03-3-2
|
03
|
3-2
|
3-2
|
docs/Chap03/Problems/3-2.md
|
Indicate for each pair of expressions $(A, B)$ in the table below, whether $A$ is $O$, $o$, $\Omega$, $\omega$, or $\Theta$ of $B$. Assume that $k \ge 1$, $\epsilon > 0$, and $c > 1$ are constants. Your answer should be in the form of the table with "yes" or "no" written in each box.
|
$$
\begin{array}{ccccccc}
A & B & O & o & \Omega & \omega & \Theta \\\\
\hline
\lg^k n & n^\epsilon & yes & yes & no & no & no \\\\
n^k & c^n & yes & yes & no & no & no \\\\
\sqrt n & n^{\sin n} & no & no & no & no & no \\\\
2^n & 2^{n / 2} & no & no & yes & yes & no \\\\
n^{\lg c} & c^{\lg n} & yes & no & yes & no & yes \\\\
\lg(n!) & \lg(n^n) & yes & no & yes & no & yes
\end{array}
$$
|
[] | false |
[] |
03-3-3
|
03
|
3-3
|
3-3
|
docs/Chap03/Problems/3-3.md
|
**a.** Rank the following functions by order of growth; that is, find an arrangement $g_1, g_2, \ldots , g_{30}$ of the functions $g_1 = \Omega(g_2), g_2 = \Omega(g_3), \ldots, g_{29} = \Omega(g_{30})$. Partition your list into equivalence classes such that functions $f(n)$ and $g(n)$ are in the same class if and only if $f(n) = \Theta(g(n))$.
$$
\begin{array}{cccccc}
\lg(\lg^{^\*}n) \quad & \quad 2^{\lg^\*n} \quad & \quad (\sqrt 2)^{\lg n} \quad & \quad n^2 \quad & \quad n! \quad & \quad (\lg n)! \\\\
(\frac{3}{2})^n \quad & \quad n^3 \quad & \quad \lg^2 n \quad & \quad \lg(n!) \quad & \quad 2^{2^n} \quad & \quad n^{1/\lg n} \\\\
\lg\lg n \quad & \quad \lg^\* n \quad & \quad n\cdot 2^n \quad & \quad n^{\lg\lg n} \quad & \quad \lg n \quad & \quad 1 \\\\
2^{\lg n} \quad & \quad (\lg n)^{\lg n} \quad & \quad e^n \quad & \quad 4^{\lg n} \quad & \quad (n + 1)! \quad & \quad \sqrt{\lg n} \\\\
\lg^\*(\lg n) \quad & \quad 2^{\sqrt{2\lg n}} \quad & \quad n \quad & \quad 2^n \quad & \quad n\lg n \quad & \quad 2^{2^{n + 1}}
\end{array}
$$
**b.** Give an example of a single nonnegative function $f(n)$ such that for all functions $g_i(n)$ in part (a), $f(n)$ is neither $O(g_i(n))$ nor $\Omega(g_i(n))$.
|
**a.** The functions are ranked below from fastest-growing to slowest-growing; functions on the same line belong to the same $\Theta$-equivalence class.
$$
\begin{array}{ll}
2^{2^{n + 1}} & \\\\
2^{2^n} & \\\\
(n + 1)! & \\\\
n! & \\\\
e^n & \\\\
n\cdot 2^n & \\\\
2^n & \\\\
(3 / 2)^n & \\\\
(\lg n)^{\lg n} = n^{\lg\lg n} & \\\\
(\lg n)! & \\\\
n^3 & \\\\
n^2 = 4^{\lg n} & \\\\
n\lg n \text{ and } \lg(n!) & \\\\
n = 2^{\lg n} & \\\\
(\sqrt 2)^{\lg n}(= \sqrt n) & \\\\
2^{\sqrt{2\lg n}} & \\\\
\lg^2 n & \\\\
\lg n & \\\\
\sqrt{\lg n} & \\\\
\lg\lg n & \\\\
2^{\lg^\*n} & \\\\
\lg^\*n \text{ and } \lg^\*(\lg n) & \\\\
\lg(\lg^\*n) & \\\\
n^{1 / \lg n}(= 2) \text{ and } 1 &
\end{array}
$$
**b.** For example,
$$
f(n) =
\begin{cases} 2^{2^{n + 2}} & \text{if $n$ is even}, \\\\
0 & \text{if $n$ is odd}.
\end{cases}
$$
For all functions $g_i(n)$ in part (a), this $f(n)$ is neither $O(g_i(n))$ nor $\Omega(g_i(n))$, because it oscillates between $0$ and a value exceeding every $g_i(n)$.
|
[] | false |
[] |
03-3-4
|
03
|
3-4
|
3-4
|
docs/Chap03/Problems/3-4.md
|
Let $f(n)$ and $g(n)$ by asymptotically positive functions. Prove or disprove each of the following conjectures.
**a.** $f(n) = O(g(n))$ implies $g(n) = O(f(n))$.
**b.** $f(n) + g(n) = \Theta(\min(f(n), g(n)))$.
**c.** $f(n) = O(g(n))$ implies $\lg(f(n)) = O(\lg(g(n)))$, where $\lg(g(n)) \ge 1$ and $f(n) \ge 1$ for all sufficiently large $n$.
**d.** $f(n) = O(g(n))$ implies $2^{f(n)} = O(2^{g(n)})$.
**e.** $f(n) = O((f(n))^2)$.
**f.** $f(n) = O(g(n))$ implies $g(n) = \Omega(f(n))$.
**g.** $f(n) = \Theta(f(n / 2))$.
**h.** $f(n) + o(f(n)) = \Theta(f(n))$.
|
**a.** Disprove, $n = O(n^2)$, but $n^2 \ne O(n)$.
**b.** Disprove, $n^2 + n \ne \Theta(\min(n^2, n)) = \Theta(n)$.
**c.** Prove: because $f(n) \ge 1$ for all $n$ beyond a certain $n_0$,
$$
\begin{aligned}
\exists c, n_0: \forall n \ge n_0, 0 \le f(n) \le cg(n) \\\\
\Rightarrow 0 \le \lg f(n) \le \lg (cg(n)) = \lg c + \lg g(n).
\end{aligned}
$$
We need to prove that
$$\lg f(n) \le d\lg g(n).$$
We can find $d$,
$$d = \frac{\lg c + \lg g(n)}{\lg g(n)} = \frac{\lg c}{\lg g(n)} + 1 \le \lg c + 1,$$
where the last step is valid, because $\lg g(n) \ge 1$.
**d.** Disprove, because $2n = O(n)$, but $2^{2n} = 4^n \ne O(2^n)$.
**e.** Prove, under the assumption that $f(n) \ge 1$ for all sufficiently large $n$ (which holds for running times): then $0 \le f(n) \le cf^2(n)$ with $c = 1$. If $f(n) < 1$ is allowed, the claim can fail (e.g., $f(n) = 1 / n$), but we don't care about that case here.
**f.** Prove, from the first, we know that $0 \le f(n) \le cg(n)$ and we need to prove that $0 \le df(n) \le g(n)$, which is straightforward with $d = 1 / c$.
**g.** Disprove: pick $f(n) = 2^n$. We would need to show that
$$\exists c_1, c_2, n_0: \forall n \ge n_0, 0 \le c_1 \cdot 2^{n / 2} \le 2^n \le c_2 \cdot 2^{n / 2},$$
which is untrue, since $2^n / 2^{n / 2} = 2^{n / 2}$ is unbounded and no constant $c_2$ can satisfy the right-hand inequality.
**h.** Prove, let $g(n) = o(f(n))$. Then
$$\exists c, n_0: \forall n \ge n_0, 0 \le g(n) < cf(n).$$
We need to prove that
$$\exists c_1, c_2, n_0: \forall n \ge n_0, 0 \le c_1f(n) \le f(n) + g(n) \le c_2f(n).$$
Thus, if we pick $c_1 = 1$ and $c_2 = c + 1$, it holds.
|
[] | false |
[] |
03-3-5
|
03
|
3-5
|
3-5
|
docs/Chap03/Problems/3-5.md
|
Some authors define $\Omega$ in a slightly different way than we do; let's use ${\Omega}^{\infty}$ (read "omega infinity") for this alternative definition. We say that $f(n) = {\Omega}^{\infty}(g(n))$ if there exists a positive constant $c$ such that $f(n) \ge cg(n) \ge 0$ for infinitely many integers $n$.
**a.** Show that for any two functions $f(n)$ and $g(n)$ that are asymptotically nonnegative, either $f(n) = O(g(n))$ or $f(n) = {\Omega}^{\infty}(g(n))$ or both, whereas this is not true if we use $\Omega$ in place of ${\Omega}^{\infty}$.
**b.** Describe the potential advantages and disadvantages of using ${\Omega}^{\infty}$ instead of $\Omega$ to characterize the running times of programs.
Some authors also define $O$ in a slightly different manner; let's use $O'$ for the alternative definition. We say that $f(n) = O'(g(n))$ if and only if $|f(n)| = O(g(n))$.
**c.** What happens to each direction of the "if and only if" in Theorem 3.1 if we substitute $O'$ for $O$ but we still use $\Omega$?
Some authors define $\tilde O$ (read "soft-oh") to mean $O$ with logarithmic factors ignored:
$$
\begin{aligned}
\tilde{O}(g(n)) =
\\{f(n): & \text{ there exist positive constants $c$, $k$, and $n_0$ such that } \\\\
& \text{ $0 \le f(n) \le cg(n) \lg^k(n)$ for all $n \ge n_0$ }.\\}
\end{aligned}
$$
**d.** Define $\tilde\Omega$ and $\tilde\Theta$ in a similar manner. Prove the corresponding analog to Theorem 3.1.
|
**a.** We have
$$
f(n) =
\begin{cases}
O(g(n)) \text{ and } {\Omega}^{\infty}(g(n)) & \text{if $f(n) = \Theta(g(n))$}, \\\\
O(g(n)) & \text{if $0 \le f(n) \le cg(n)$}, \\\\
{\Omega}^{\infty}(g(n)) & \text{if $0 \le cg(n) \le f(n)$, for infinitely many integers $n$}.
\end{cases}
$$
If there are only finitely many integers $n$ such that $f(n) \ge cg(n) \ge 0$, then $0 \le f(n) \le cg(n)$ for all sufficiently large $n$, i.e., $f(n) = O(g(n))$.
Obviously, this does not hold when we use $\Omega$ in place of ${\Omega}^{\infty}$.
**b.**
- Advantage: by part (a), every pair of asymptotically nonnegative functions can be compared, since at least one of $O$ and ${\Omega}^{\infty}$ always applies.
- Disadvantage: ${\Omega}^{\infty}$ is a weaker statement than $\Omega$, so the lower bounds it gives are less precise.
**c.** For any two functions $f(n)$ and $g(n)$, if $f(n) = \Theta(g(n))$ then $f(n) = O'(g(n))$ and $f(n) = \Omega(g(n))$.
However, the converse direction need not hold.
**d.** We have
$$
\begin{aligned}
\tilde\Omega(g(n)) = \\{f(n):
& \text{ there exist positive constants $c$, $k$, and $n_0$ such that } \\\\
& \text{ $0 \le cg(n)\lg^k(n) \le f(n)$ for all $n \ge n_0$}.\\}
\end{aligned}
$$
$$
\begin{aligned}
\tilde{\Theta}(g(n)) = \\{f(n):
& \text{ there exist positive constants $c_1$, $c_2$, $k_1$, $k_2$, and $n_0$ such that } \\\\
& \text{ $0\le c_1 g(n) \lg^{k_1}(n) \le f(n)\le c_2g (n) \lg^{k_2}(n)$ for all $n\ge n_0$.}\\}
\end{aligned}
$$
For any two functions $f(n)$ and $g(n)$, we have $f(n) = \tilde\Theta(g(n))$ if and only if $f(n) = \tilde O(g(n))$ and $f(n) = \tilde\Omega(g(n))$.
|
[] | false |
[] |
03-3-6
|
03
|
3-6
|
3-6
|
docs/Chap03/Problems/3-6.md
|
We can apply the iteration operator $^\*$ used in the $\lg^\*$ function to any monotonically increasing function $f(n)$ over the reals. For a given constant $c \in \mathbb R$, we define the iterated function ${f_c}^\*$ by ${f_c}^\*(n) = \min \\{i \ge 0 : f^{(i)}(n) \le c \\}$ which need not be well defined in all cases. In other words, the quantity ${f_c}^\*(n)$ is the number of iterated applications of the function $f$ required to reduce its argument down to $c$ or less.
|
For each of the following functions $f(n)$ and constants $c$, give as tight a bound as possible on ${f_c}^\*(n)$.
$$
\begin{array}{ccl}
f(n) & c & {f_c}^\* \\\\
\hline
n - 1 & 0 & \Theta(n) \\\\
\lg n & 1 & \Theta(\lg^\*{n}) \\\\
n / 2 & 1 & \Theta(\lg n) \\\\
n / 2 & 2 & \Theta(\lg n) \\\\
\sqrt n & 2 & \Theta(\lg\lg n) \\\\
\sqrt n & 1 & \text{does not converge} \\\\
n^{1 / 3} & 2 & \Theta(\log_3{\lg n}) \\\\
n / \lg n & 2 & \omega(\lg\lg n), o(\lg n)
\end{array}
$$
|
[] | false |
[] |
04-4.1-1
|
04
|
4.1
|
4.1-1
|
docs/Chap04/4.1.md
|
What does $\text{FIND-MAXIMUM-SUBARRAY}$ return when all elements of $A$ are negative?
|
It will return a single-element subarray containing the greatest (i.e., least negative) element of $A$, since adding any further element would only decrease the sum.
|
[] | false |
[] |
04-4.1-2
|
04
|
4.1
|
4.1-2
|
docs/Chap04/4.1.md
|
Write pseudocode for the brute-force method of solving the maximum-subarray problem. Your procedure should run in $\Theta(n^2)$ time.
|
```cpp
BRUTE-FORCE-FIND-MAXIMUM-SUBARRAY(A)
n = A.length
max-sum = -∞
for l = 1 to n
sum = 0
for h = l to n
sum = sum + A[h]
if sum > max-sum
max-sum = sum
low = l
high = h
return (low, high, max-sum)
```
|
[
{
"lang": "cpp",
"code": "BRUTE-FORCE-FIND-MAXIMUM-SUBARRAY(A)\n n = A.length\n max-sum = -∞\n for l = 1 to n\n sum = 0\n for h = l to n\n sum = sum + A[h]\n if sum > max-sum\n max-sum = sum\n low = l\n high = h\n return (low, high, max-sum)"
}
] | false |
[] |
04-4.1-3
|
04
|
4.1
|
4.1-3
|
docs/Chap04/4.1.md
|
Implement both the brute-force and recursive algorithms for the maximum-subarray problem on your own computer. What problem size $n_0$ gives the crossover point at which the recursive algorithm beats the brute-force algorithm? Then, change the base case of the recursive algorithm to use the brute-force algorithm whenever the problem size is less than $n_0$. Does that change the crossover point?
|
On my computer, $n_0$ is $37$.
If the algorithm is modified to use divide-and-conquer when $n \ge 37$ and the brute-force approach when $n$ is smaller, the performance at the crossover point almost doubles. The performance at $n_0 - 1$ stays the same, though (or even gets worse, because of the added overhead).
What I find interesting is that if we set $n_0 = 20$ and use the mixed approach on $40$ elements, it is still faster than either pure algorithm.
|
[] | false |
[] |
04-4.1-4
|
04
|
4.1
|
4.1-4
|
docs/Chap04/4.1.md
|
Suppose we change the definition of the maximum-subarray problem to allow the result to be an empty subarray, where the sum of the values of an empty subarray is $0$. How would you change any of the algorithms that do not allow empty subarrays to permit an empty subarray to be the result?
|
If the original algorithm would return a negative sum, return the empty subarray (with sum $0$) instead.
|
[] | false |
[] |
04-4.1-5
|
04
|
4.1
|
4.1-5
|
docs/Chap04/4.1.md
|
Use the following ideas to develop a nonrecursive, linear-time algorithm for the maximum-subarray problem. Start at the left end of the array, and progress toward the right, keeping track of the maximum subarray seen so far. Knowing a maximum subarray $A[1..j]$, extend the answer to find a maximum subarray ending at index $j + 1$ by using the following observation: a maximum subarray $A[i..j + 1]$, is either a maximum subarray of $A[1..j]$ or a subarray $A[i..j + 1]$, for some $1 \le i \le j + 1$. Determine a maximum subarray of the form $A[i..j + 1]$ in constant time based on knowing a maximum subarray ending at index $j$.
|
```cpp
ITERATIVE-FIND-MAXIMUM-SUBARRAY(A)
n = A.length
max-sum = -∞
sum = -∞
for j = 1 to n
currentHigh = j
if sum > 0
sum = sum + A[j]
else
currentLow = j
sum = A[j]
if sum > max-sum
max-sum = sum
low = currentLow
high = currentHigh
return (low, high, max-sum)
```
|
[
{
"lang": "cpp",
"code": "ITERATIVE-FIND-MAXIMUM-SUBARRAY(A)\n n = A.length\n max-sum = -∞\n sum = -∞\n for j = 1 to n\n currentHigh = j\n if sum > 0\n sum = sum + A[j]\n else\n currentLow = j\n sum = A[j]\n if sum > max-sum\n max-sum = sum\n low = currentLow\n high = currentHigh\n return (low, high, max-sum)"
}
] | false |
[] |
04-4.2-1
|
04
|
4.2
|
4.2-1
|
docs/Chap04/4.2.md
|
Use Strassen's algorithm to compute the matrix product
$$
\begin{pmatrix}
1 & 3 \\\\
7 & 5
\end{pmatrix}
\begin{pmatrix}
6 & 8 \\\\
4 & 2
\end{pmatrix}
.
$$
Show your work.
|
The ten $S$ matrices (here scalars, since the blocks are $1 \times 1$) are
$$
\begin{array}{ll}
S_1 = 6 & S_6 = 8 \\\\
S_2 = 4 & S_7 = -2 \\\\
S_3 = 12 & S_8 = 6 \\\\
S_4 = -2 & S_9 = -6 \\\\
S_5 = 6 & S_{10} = 14.
\end{array}
$$
The products are
$$
\begin{aligned}
P_1 & = 1 \cdot 6 = 6 \\\\
P_2 & = 4 \cdot 2 = 8 \\\\
P_3 & = 6 \cdot 12 = 72 \\\\
P_4 & = -2 \cdot 5 = -10 \\\\
P_5 & = 6 \cdot 8 = 48 \\\\
P_6 & = -2 \cdot 6 = -12 \\\\
P_7 & = -6 \cdot 14 = -84.
\end{aligned}
$$
The four entries of the product are
$$
\begin{aligned}
C_{11} & = 48 + (-10) - 8 + (-12) = 18 \\\\
C_{12} & = 6 + 8 = 14 \\\\
C_{21} & = 72 + (-10) = 62 \\\\
C_{22} & = 48 + 6 - 72 - (-84) = 66.
\end{aligned}
$$
The result is
$$
\begin{pmatrix}
18 & 14 \\\\
62 & 66
\end{pmatrix}
.
$$
|
[] | false |
[] |
04-4.2-2
|
04
|
4.2
|
4.2-2
|
docs/Chap04/4.2.md
|
Write pseudocode for Strassen's algorithm.
|
```cpp
STRASSEN(A, B)
n = A.rows
if n == 1
return a[1, 1] * b[1, 1]
let C be a new n × n matrix
A[1, 1] = A[1..n / 2][1..n / 2]
A[1, 2] = A[1..n / 2][n / 2 + 1..n]
A[2, 1] = A[n / 2 + 1..n][1..n / 2]
A[2, 2] = A[n / 2 + 1..n][n / 2 + 1..n]
B[1, 1] = B[1..n / 2][1..n / 2]
B[1, 2] = B[1..n / 2][n / 2 + 1..n]
B[2, 1] = B[n / 2 + 1..n][1..n / 2]
B[2, 2] = B[n / 2 + 1..n][n / 2 + 1..n]
S[1] = B[1, 2] - B[2, 2]
S[2] = A[1, 1] + A[1, 2]
S[3] = A[2, 1] + A[2, 2]
S[4] = B[2, 1] - B[1, 1]
S[5] = A[1, 1] + A[2, 2]
S[6] = B[1, 1] + B[2, 2]
S[7] = A[1, 2] - A[2, 2]
S[8] = B[2, 1] + B[2, 2]
S[9] = A[1, 1] - A[2, 1]
S[10] = B[1, 1] + B[1, 2]
P[1] = STRASSEN(A[1, 1], S[1])
P[2] = STRASSEN(S[2], B[2, 2])
P[3] = STRASSEN(S[3], B[1, 1])
P[4] = STRASSEN(A[2, 2], S[4])
P[5] = STRASSEN(S[5], S[6])
P[6] = STRASSEN(S[7], S[8])
P[7] = STRASSEN(S[9], S[10])
C[1..n / 2][1..n / 2] = P[5] + P[4] - P[2] + P[6]
C[1..n / 2][n / 2 + 1..n] = P[1] + P[2]
C[n / 2 + 1..n][1..n / 2] = P[3] + P[4]
C[n / 2 + 1..n][n / 2 + 1..n] = P[5] + P[1] - P[3] - P[7]
return C
```
|
[
{
"lang": "cpp",
"code": "STRASSEN(A, B)\n n = A.rows\n if n == 1\n return a[1, 1] * b[1, 1]\n let C be a new n × n matrix\n A[1, 1] = A[1..n / 2][1..n / 2]\n A[1, 2] = A[1..n / 2][n / 2 + 1..n]\n A[2, 1] = A[n / 2 + 1..n][1..n / 2]\n A[2, 2] = A[n / 2 + 1..n][n / 2 + 1..n]\n B[1, 1] = B[1..n / 2][1..n / 2]\n B[1, 2] = B[1..n / 2][n / 2 + 1..n]\n B[2, 1] = B[n / 2 + 1..n][1..n / 2]\n B[2, 2] = B[n / 2 + 1..n][n / 2 + 1..n]\n S[1] = B[1, 2] - B[2, 2]\n S[2] = A[1, 1] + A[1, 2]\n S[3] = A[2, 1] + A[2, 2]\n S[4] = B[2, 1] - B[1, 1]\n S[5] = A[1, 1] + A[2, 2]\n S[6] = B[1, 1] + B[2, 2]\n S[7] = A[1, 2] - A[2, 2]\n S[8] = B[2, 1] + B[2, 2]\n S[9] = A[1, 1] - A[2, 1]\n S[10] = B[1, 1] + B[1, 2]\n P[1] = STRASSEN(A[1, 1], S[1])\n P[2] = STRASSEN(S[2], B[2, 2])\n P[3] = STRASSEN(S[3], B[1, 1])\n P[4] = STRASSEN(A[2, 2], S[4])\n P[5] = STRASSEN(S[5], S[6])\n P[6] = STRASSEN(S[7], S[8])\n P[7] = STRASSEN(S[9], S[10])\n C[1..n / 2][1..n / 2] = P[5] + P[4] - P[2] + P[6]\n C[1..n / 2][n / 2 + 1..n] = P[1] + P[2]\n C[n / 2 + 1..n][1..n / 2] = P[3] + P[4]\n C[n / 2 + 1..n][n / 2 + 1..n] = P[5] + P[1] - P[3] - P[7]\n return C"
}
] | false |
[] |
04-4.2-3
|
04
|
4.2
|
4.2-3
|
docs/Chap04/4.2.md
|
How would you modify Strassen's algorithm to multiply $n \times n$ matrices in which $n$ is not an exact power of $2$? Show that the resulting algorithm runs in time $\Theta(n^{\lg7})$.
|
We can pad each matrix with zeroes, embedding it in an $m \times m$ matrix where $m$ is the smallest power of $2$ with $m \ge n$. Since $m < 2n$, the running time remains $\Theta(m^{\lg 7}) = \Theta(n^{\lg 7})$.
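A minimal sketch of the padding step (the helper names are mine): after multiplying the padded matrices with Strassen's algorithm, the top-left $n \times n$ block of the result is the desired product.
```cpp
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Smallest power of 2 that is >= n (so m < 2n for n >= 1).
int nextPowerOfTwo(int n) {
    int m = 1;
    while (m < n)
        m *= 2;
    return m;
}

// Copy A (n x n) into the upper-left corner of an m x m zero matrix.
Matrix padToPowerOfTwo(const Matrix& A) {
    int n = A.size();
    int m = nextPowerOfTwo(n);
    Matrix P(m, std::vector<double>(m, 0.0));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            P[i][j] = A[i][j];
    return P;
}
```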
|
[] | false |
[] |
04-4.2-4
|
04
|
4.2
|
4.2-4
|
docs/Chap04/4.2.md
|
What is the largest $k$ such that if you can multiply $3 \times 3$ matrices using $k$ multiplications (not assuming commutativity of multiplication), then you can multiply $n \times n$ matrices in time $o(n^{\lg 7})$? What would the running time of this algorithm be?
|
Assume $n = 3^m$ for some $m$. Then, using block matrix multiplication, we obtain the recursive running time $T(n) = kT(n / 3) + O(1)$.
By the master theorem, the largest $k$ satisfying $\log_3 k < \lg 7$ is $k = 21$.
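Explicitly, the threshold is
$$\log_3 k < \lg 7 \iff k < 3^{\lg 7} = 7^{\lg 3} \approx 21.85,$$
so $k = 21$ is the largest such integer, and the resulting running time is $\Theta(n^{\log_3 21}) = O(n^{2.78}) = o(n^{\lg 7})$, since $\lg 7 \approx 2.81$.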
|
[] | false |
[] |
04-4.2-5
|
04
|
4.2
|
4.2-5
|
docs/Chap04/4.2.md
|
V. Pan has discovered a way of multiplying $68 \times 68$ matrices using $132464$ multiplications, a way of multiplying $70 \times 70$ matrices using $143640$ multiplications, and a way of multiplying $72 \times 72$ matrices using $155424$ multiplications. Which method yields the best asymptotic running time when used in a divide-and-conquer matrix-multiplication algorithm? How does it compare to Strassen's algorithm?
|
**Analyzing Pan's Methods**
Pan has introduced three methods for divide-and-conquer matrix multiplication, each with different parameters. We will analyze the recurrence relations, compute the exponents using the Master Theorem, and compare the resulting asymptotic running times to Strassen’s algorithm.
**Method 1:**
- **Recurrence Relation:**
$$
T(n) = 132{,}464 \cdot T\left(\frac{n}{68}\right) + \Theta(n^2)
$$
- **Parameters:** $a = 132{,}464$, $b = 68$
- **Applying Master Theorem:**
$$
\log_b a = \frac{\log 132{,}464}{\log 68} \approx 2.7951284873613815
$$
- **Asymptotic Running Time:** $O(n^{2.7951284873613815})$
**Method 2:**
- **Recurrence Relation:**
$$
T(n) = 143{,}640 \cdot T\left(\frac{n}{70}\right) + \Theta(n^2)
$$
- **Parameters:** $a = 143{,}640$, $b = 70$
- **Applying Master Theorem:**
$$
\log_b a = \frac{\log 143{,}640}{\log 70} \approx 2.795122689748337
$$
- **Asymptotic Running Time:** $O(n^{2.795122689748337})$
**Method 3:**
- **Recurrence Relation:**
$$
T(n) = 155{,}424 \cdot T\left(\frac{n}{72}\right) + \Theta(n^2)
$$
- **Parameters:** $a = 155{,}424$, $b = 72$
- **Applying Master Theorem:**
$$
\log_b a = \frac{\log 155{,}424}{\log 72} \approx 2.795147391093449
$$
- **Asymptotic Running Time:** $O(n^{2.795147391093449})$
**Comparison to Strassen's Algorithm**
Strassen’s algorithm has an asymptotic running time of $O(n^{2.807})$, derived from $\log_2 7 \approx 2.807$.
**Which Method is Best?**
Now, let's compare the exponents:
- **Method 1:** $\log_b a = 2.7951284873613815$
- **Method 2:** $\log_b a = 2.795122689748337$
- **Method 3:** $\log_b a = 2.795147391093449$
Since **smaller exponents lead to more efficient algorithms**, **Method 2** is the best because it has the smallest exponent, $2.795122689748337$.
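These exponents are easy to reproduce; a small self-contained check (the method data is from the problem statement):
```cpp
#include <cmath>
#include <cstdio>

int main() {
    // {multiplications, block size} for Pan's three methods
    const double methods[3][2] = {{132464, 68}, {143640, 70}, {155424, 72}};
    for (const auto& m : methods)
        std::printf("log_%.0f %.0f = %.13f\n", m[1], m[0],
                    std::log(m[0]) / std::log(m[1]));
    std::printf("lg 7      = %.13f\n", std::log2(7.0));
    return 0;
}
```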
|
[] | false |
[] |
04-4.2-6
|
04
|
4.2
|
4.2-6
|
docs/Chap04/4.2.md
|
How quickly can you multiply a $kn \times n$ matrix by an $n \times kn$ matrix, using Strassen's algorithm as a subroutine? Answer the same question with the order of the input matrices reversed.
|
- $(kn \times n)(n \times kn)$ produces a $kn \times kn$ matrix, which decomposes into $k^2$ multiplications of $n \times n$ matrices, so using Strassen's algorithm as a subroutine takes $\Theta(k^2 n^{\lg 7})$ time.
- $(n \times kn)(kn \times n)$ produces an $n \times n$ matrix, using $k$ multiplications of $n \times n$ matrices and $k - 1$ additions of $n \times n$ matrices, for $\Theta(kn^{\lg 7} + kn^2) = \Theta(kn^{\lg 7})$ time.
|
[] | false |
[] |
04-4.2-7
|
04
|
4.2
|
4.2-7
|
docs/Chap04/4.2.md
|
Show how to multiply the complex numbers $a + bi$ and $c + di$ using only three multiplications of real numbers. The algorithm should take $a$, $b$, $c$ and $d$ as input and produce the real component $ac - bd$ and the imaginary component $ad + bc$ separately.
|
The three products are
$$
\begin{aligned}
A & = (a + b)(c + d) = ac + ad + bc + bd \\\\
B & = ac \\\\
C & = bd.
\end{aligned}
$$
The result is
$$(B - C) + (A - B - C)i.$$
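A self-contained C++ sketch of this three-multiplication scheme, checked against the direct four-multiplication formula:
```cpp
#include <cassert>
#include <cstdio>

// Multiply (a + bi)(c + di) using only three real multiplications.
void complexMult3(double a, double b, double c, double d,
                  double& re, double& im) {
    double A = (a + b) * (c + d);  // ac + ad + bc + bd
    double B = a * c;
    double C = b * d;
    re = B - C;      // ac - bd
    im = A - B - C;  // ad + bc
}

int main() {
    double re, im;
    complexMult3(1, 2, 3, 4, re, im);  // (1 + 2i)(3 + 4i) = -5 + 10i
    assert(re == 1.0 * 3 - 2.0 * 4 && im == 1.0 * 4 + 2.0 * 3);
    std::printf("%g + %gi\n", re, im);
    return 0;
}
```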
|
[] | false |
[] |
04-4.3-1
|
04
|
4.3
|
4.3-1
|
docs/Chap04/4.3.md
|
Show that the solution of $T(n) = T(n - 1) + n$ is $O(n^2)$.
|
We guess $T(n) \le cn^2$,
$$
\begin{aligned}
T(n) & \le c(n - 1)^2 + n \\\\
& = cn^2 - 2cn + c + n \\\\
& = cn^2 + n(1 - 2c) + c \\\\
& \le cn^2,
\end{aligned}
$$
where the last step holds for $c \ge 1$ and $n \ge 1$.
|
[] | false |
[] |
04-4.3-2
|
04
|
4.3
|
4.3-2
|
docs/Chap04/4.3.md
|
Show that the solution of $T(n) = T(\lceil n / 2 \rceil) + 1$ is $O(\lg n)$.
|
We guess $T(n) \le c\lg(n - a)$,
$$
\begin{aligned}
T(n) & \le c\lg(\lceil n / 2 \rceil - a) + 1 \\\\
& \le c\lg((n + 1) / 2 - a) + 1 \\\\
& = c\lg((n + 1 - 2a) / 2) + 1 \\\\
& = c\lg(n + 1 - 2a) - c\lg 2 + 1 & (c \ge 1) \\\\
& \le c\lg(n + 1 - 2a) & (a \ge 1) \\\\
& \le c\lg(n - a),
\end{aligned}
$$
|
[] | false |
[] |
04-4.3-3
|
04
|
4.3
|
4.3-3
|
docs/Chap04/4.3.md
|
We saw that the solution of $T(n) = 2T(\lfloor n / 2 \rfloor) + n$ is $O(n\lg n)$. Show that the solution of this recurrence is also $\Omega(n\lg n)$. Conclude that the solution is $\Theta(n\lg n)$.
|
First, we guess $T(n) \le cn\lg n$,
$$
\begin{aligned}
T(n) & \le 2c\lfloor n / 2 \rfloor\lg{\lfloor n / 2 \rfloor} + n \\\\
& \le cn\lg(n / 2) + n \\\\
& = cn\lg n - cn\lg 2 + n \\\\
& = cn\lg n + (1 - c)n \\\\
& \le cn\lg n,
\end{aligned}
$$
where the last step holds for $c \ge 1$.
Next, we guess $T(n) \ge c(n + a)\lg(n + a)$,
$$
\begin{aligned}
T(n) & \ge 2c(\lfloor n / 2 \rfloor + a)\lg(\lfloor n / 2 \rfloor + a) + n \\\\
& \ge 2c((n - 1) / 2 + a)(\lg((n - 1) / 2 + a)) + n \\\\
& = 2c\frac{n - 1 + 2a}{2}\lg\frac{n - 1 + 2a}{2} + n \\\\
& = c(n - 1 + 2a)\lg(n - 1 + 2a) - c(n - 1 + 2a)\lg 2 + n \\\\
& = c(n - 1 + 2a)\lg(n - 1 + 2a) + (1 - c)n - (2a - 1)c & (0 \le c < 1, n \ge \frac{(2a - 1)c}{1 - c}) \\\\
& \ge c(n - 1 + 2a)\lg(n - 1 + 2a) & (a \ge 1) \\\\
& \ge c(n + a)\lg(n + a),
\end{aligned}
$$
|
[] | false |
[] |
04-4.3-4
|
04
|
4.3
|
4.3-4
|
docs/Chap04/4.3.md
|
Show that by making a different inductive hypothesis, we can overcome the difficulty with the boundary condition $T(1) = 1$ for recurrence $\text{(4.19)}$ without adjusting the boundary conditions for the inductive proof.
|
We guess $T(n) \le cn\lg n + n$,
$$
\begin{aligned}
T(n) & \le 2(c\lfloor n / 2 \rfloor\lg{\lfloor n / 2 \rfloor} + \lfloor n / 2 \rfloor) + n \\\\
& \le 2c(n / 2)\lg(n / 2) + 2(n / 2) + n \\\\
& = cn\lg(n / 2) + 2n \\\\
& = cn\lg n - cn\lg{2} + 2n \\\\
& = cn\lg n + (2 - c)n \\\\
& \le cn\lg n + n,
\end{aligned}
$$
where the last step holds for $c \ge 1$.
This time, the boundary condition is
$$T(1) = 1 \le c \cdot 1 \cdot \lg 1 + 1 = 0 + 1 = 1.$$
|
[] | false |
[] |
04-4.3-5-1
|
04
|
4.3
|
4.3-5
|
docs/Chap04/4.3.md
|
Show that $\Theta(n\lg n)$ is the solution to the "exact" recurrence $\text{(4.3)}$ for merge sort.
|
The recurrence is
> $$T(n) = T(\lceil n / 2 \rceil) + T(\lfloor n / 2 \rfloor) + \Theta(n) \tag{4.3}$$
To show $\Theta$ bound, separately show $O$ and $\Omega$ bounds.
- For $O(n\lg n)$, we guess $T(n) \le c(n - 2)\lg(n - 2) - 2c$,
$$
\begin{aligned}
T(n) & \le c(\lceil n / 2 \rceil - 2)\lg(\lceil n / 2 \rceil - 2) - 2c + c(\lfloor n / 2 \rfloor - 2)\lg(\lfloor n / 2 \rfloor - 2) - 2c + dn \\\\
& \le c(n / 2 + 1 - 2)\lg(n / 2 + 1 - 2) + c(n / 2 - 2)\lg(n / 2 - 2) - 4c + dn \\\\
& \le c(n / 2 - 1)\lg(n / 2 - 1) + c(n / 2 - 1)\lg(n / 2 - 1) - 4c + dn \\\\
& = c\frac{n - 2}{2}\lg\frac{n - 2}{2} + c\frac{n - 2}{2}\lg\frac{n - 2}{2} - 4c + dn \\\\
& = c(n - 2)\lg\frac{n - 2}{2} - 4c + dn \\\\
& = c(n - 2)\lg(n - 2) - c(n - 2) - 4c + dn \\\\
& = c(n - 2)\lg(n - 2) + (d - c)n + 2c - 4c \\\\
& \le c(n - 2)\lg(n - 2) - 2c,
\end{aligned}
$$
where the last step holds for $c > d$.
- For $\Omega(n\lg n)$, we guess $T(n) \ge c(n + 2)\lg (n + 2) + 2c$,
$$
\begin{aligned}
T(n) & \ge c(\lceil n / 2 \rceil + 2)\lg(\lceil n / 2 \rceil + 2) + 2c + c(\lfloor n / 2 \rfloor + 2)\lg(\lfloor n / 2 \rfloor + 2) + 2c + dn \\\\
& \ge c(n / 2 + 2)\lg(n / 2 + 2) + 2c + c(n / 2 - 1 + 2)\lg(n / 2 - 1 + 2) + 2c + dn \\\\
& \ge c(n / 2 + 1)\lg(n / 2 + 1) + c(n / 2 + 1)\lg(n / 2 + 1) + 4c + dn \\\\
& \ge c\frac{n + 2}{2}\lg\frac{n + 2}{2} + c\frac{n + 2}{2}\lg\frac{n + 2}{2} + 4c + dn \\\\
& = c(n + 2)\lg\frac{n + 2}{2} + 4c + dn \\\\
& = c(n + 2)\lg(n + 2) - c(n + 2) + 4c + dn \\\\
& = c(n + 2)\lg(n + 2) + (d - c)n - 2c + 4c \\\\
& \ge c(n + 2)\lg(n + 2) + 2c,
\end{aligned}
$$
where the last step holds for $d > c$.
|
[] | false |
[] |
04-4.3-5-2
|
04
|
4.3
|
4.3-5
|
docs/Chap04/4.3.md
|
$$T(n) = T(\lceil n / 2 \rceil) + T(\lfloor n / 2 \rfloor) + \Theta(n) \tag{4.3}$$
|
The recurrence is
> $$T(n) = T(\lceil n / 2 \rceil) + T(\lfloor n / 2 \rfloor) + \Theta(n) \tag{4.3}$$
To show $\Theta$ bound, separately show $O$ and $\Omega$ bounds.
- For $O(n\lg n)$, we guess $T(n) \le c(n - 2)\lg(n - 2) - 2c$,
$$
\begin{aligned}
T(n) & \le c(\lceil n / 2 \rceil - 2)\lg(\lceil n / 2 \rceil - 2) - 2c + c(\lfloor n / 2 \rfloor - 2)\lg(\lfloor n / 2 \rfloor - 2) - 2c + dn \\\\
& \le c(n / 2 + 1 - 2)\lg(n / 2 + 1 - 2) + c(n / 2 - 2)\lg(n / 2 - 2) - 4c + dn \\\\
& \le c(n / 2 - 1)\lg(n / 2 - 1) + c(n / 2 - 1)\lg(n / 2 - 1) - 4c + dn \\\\
& = c\frac{n - 2}{2}\lg\frac{n - 2}{2} + c\frac{n - 2}{2}\lg\frac{n - 2}{2} - 4c + dn \\\\
& = c(n - 2)\lg\frac{n - 2}{2} - 4c + dn \\\\
& = c(n - 2)\lg(n - 2) - c(n - 2) - 4c + dn \\\\
& = c(n - 2)\lg(n - 2) + (d - c)n + 2c - 4c \\\\
& \le c(n - 2)\lg(n - 2) - 2c,
\end{aligned}
$$
where the last step holds for $c > d$.
- For $\Omega(n\lg n)$, we guess $T(n) \ge c(n + 2)\lg (n + 2) + 2c$,
$$
\begin{aligned}
T(n) & \ge c(\lceil n / 2 \rceil + 2)\lg(\lceil n / 2 \rceil + 2) + 2c + c(\lfloor n / 2 \rfloor + 2)\lg(\lfloor n / 2 \rfloor + 2) + 2c + dn \\\\
& \ge c(n / 2 + 2)\lg(n / 2 + 2) + 2c + c(n / 2 - 1 + 2)\lg(n / 2 - 1 + 2) + 2c + dn \\\\
& \ge c(n / 2 + 1)\lg(n / 2 + 1) + c(n / 2 + 1)\lg(n / 2 + 1) + 4c + dn \\\\
& \ge c\frac{n + 2}{2}\lg\frac{n + 2}{2} + c\frac{n + 2}{2}\lg\frac{n + 2}{2} + 4c + dn \\\\
& = c(n + 2)\lg\frac{n + 2}{2} + 4c + dn \\\\
& = c(n + 2)\lg(n + 2) - c(n + 2) + 4c + dn \\\\
& = c(n + 2)\lg(n + 2) + (d - c)n - 2c + 4c \\\\
& \ge c(n + 2)\lg(n + 2) + 2c,
\end{aligned}
$$
where the last step holds for $d > c$.
|
[] | false |
[] |
04-4.3-6
|
04
|
4.3
|
4.3-6
|
docs/Chap04/4.3.md
|
Show that the solution to $T(n) = 2T(\lfloor n / 2 \rfloor + 17) + n$ is $O(n\lg n)$.
|
We guess $T(n) \le c(n - a)\lg(n - a)$,
$$
\begin{aligned}
T(n) & \le 2c(\lfloor n / 2 \rfloor + 17 - a)\lg(\lfloor n / 2 \rfloor + 17 - a) + n \\\\
& \le 2c(n / 2 + 17 - a)\lg(n / 2 + 17 - a) + n \\\\
& = c(n + 34 - 2a)\lg\frac{n + 34 - 2a}{2} + n \\\\
& = c(n + 34 - 2a)\lg(n + 34 - 2a) - c(n + 34 - 2a) + n & (c \ge 2, n \ge 4a - 68) \\\\
& \le c(n + 34 - 2a)\lg(n + 34 - 2a) & (a \ge 34) \\\\
& \le c(n - a)\lg(n - a).
\end{aligned}
$$
|
[] | false |
[] |
04-4.3-7
|
04
|
4.3
|
4.3-7
|
docs/Chap04/4.3.md
|
Using the master method in Section 4.5, you can show that the solution to the recurrence $T(n) = 4T(n / 3) + n$ is $T(n) = \Theta(n^{\log_3 4})$. Show that a substitution proof with the assumption $T(n) \le cn^{\log_3 4}$ fails. Then show how to subtract off a lower-order term to make the substitution proof work.
|
We guess $T(n) \le cn^{\log_3 4}$ first,
$$
\begin{aligned}
T(n) & \le 4c(n / 3)^{\log_3 4} + n \\\\
& = cn^{\log_3 4} + n.
\end{aligned}
$$
We are stuck here, since the leftover $+n$ term cannot be absorbed.
We guess $T(n) \le cn^{\log_3 4} - dn$ again,
$$
\begin{aligned}
T(n) & \le 4(c(n / 3)^{\log_3 4} - dn / 3) + n \\\\
& = 4(cn^{\log_3 4} / 4 - dn / 3) + n \\\\
& = cn^{\log_3 4} - \frac{4}{3}dn + n \\\\
& \le cn^{\log_3 4} - dn,
\end{aligned}
$$
where the last step holds for $d \ge 3$.
|
[] | false |
[] |
04-4.3-8
|
04
|
4.3
|
4.3-8
|
docs/Chap04/4.3.md
|
Using the master method in Section 4.5, you can show that the solution to the recurrence $T(n) = 4T(n / 2) + n$ is $T(n) = \Theta(n^2)$. Show that a substitution proof with the assumption $T(n) \le cn^2$ fails. Then show how to subtract off a lower-order term to make the substitution proof work.
|
First, let's try the guess $T(n) \le cn^2$. Then, we have
$$
\begin{aligned}
T(n) &= 4T(n / 2) + n \\\\
&\le 4c(n / 2)^2 + n \\\\
&= cn^2 + n.
\end{aligned}
$$
We can't proceed any further from the inequality above to conclude $T(n) \le cn^2$.
Alternatively, let us try the guess
$$T(n) \le cn^2 - cn,$$
which subtracts off a lower-order term. Now we have
$$
\begin{aligned}
T(n) &= 4T(n / 2) + n \\\\
&= 4\left(c(n / 2)^2 - c(n / 2)\right) + n \\\\
&= 4c(n / 2)^2 - 4c(n / 2) + n \\\\
&= cn^2 + (1 - 2c)n \\\\
&\le cn^2,
\end{aligned}
$$
where the last step holds for $c \ge 1 / 2$.
|
[] | false |
[] |
04-4.3-9
|
04
|
4.3
|
4.3-9
|
docs/Chap04/4.3.md
|
Solve the recurrence $T(n) = 3T(\sqrt n) + \log n$ by making a change of variables. Your solution should be asymptotically tight. Do not worry about whether values are integral.
|
First,
$$
\begin{aligned}
T(n) & = 3T(\sqrt n) + \lg n & \text{ let } m = \lg n \\\\
T(2^m) & = 3T(2^{m / 2}) + m \\\\
S(m) & = 3S(m / 2) + m.
\end{aligned}
$$
Now we guess $S(m) \le cm^{\lg 3} + dm$,
$$
\begin{aligned}
S(m) & \le 3\Big(c(m / 2)^{\lg 3} + d(m / 2)\Big) + m \\\\
& \le cm^{\lg 3} + (\frac{3}{2}d + 1)m & (d \le -2) \\\\
& \le cm^{\lg 3} + dm.
\end{aligned}
$$
Then we guess $S(m) \ge cm^{\lg 3} + dm$,
$$
\begin{aligned}
S(m) & \ge 3\Big(c(m / 2)^{\lg 3} + d(m / 2)\Big) + m \\\\
& \ge cm^{\lg 3} + (\frac{3}{2}d + 1)m & (d \ge -2) \\\\
& \ge cm^{\lg 3} + dm.
\end{aligned}
$$
Thus,
$$
\begin{aligned}
S(m) & = \Theta(m^{\lg 3}) \\\\
T(n) & = \Theta(\lg^{\lg 3}{n}).
\end{aligned}
$$
|
[] | false |
[] |
04-4.4-1
|
04
|
4.4
|
4.4-1
|
docs/Chap04/4.4.md
|
Use a recursion tree to determine a good asymptotic upper bound on the recurrence $T(n) = 3T(\lfloor n / 2 \rfloor) + n$. Use the substitution method to verify your answer.
|
- The subproblem size for a node at depth $i$ is $n / 2^i$.
Thus, the tree has $\lg n + 1$ levels and $3^{\lg n} = n^{\lg 3}$ leaves.
The total cost over all nodes at depth $i$, for $i = 0, 1, 2, \ldots, \lg n - 1$, is $3^i(n / 2^i) = (3 / 2)^i n$.
$$
\begin{aligned}
T(n) & = n + \frac{3}{2}n + \Big(\frac{3}{2}\Big)^2 n + \cdots + \Big(\frac{3}{2}\Big)^{\lg n - 1} n + \Theta(n^{\lg 3}) \\\\
& = \sum_{i = 0}^{\lg n - 1} \Big(\frac{3}{2}\Big)^i n + \Theta(n^{\lg 3}) \\\\
& = \frac{(3 / 2)^{\lg n} - 1}{(3 / 2) - 1}n + \Theta(n^{\lg 3}) \\\\
& = 2[(3 / 2)^{\lg n} - 1]n + \Theta(n^{\lg 3}) \\\\
& = 2[n^{\lg(3 / 2)} - 1]n + \Theta(n^{\lg 3}) \\\\
& = 2[n^{\lg 3 - 1} - 1]n + \Theta(n^{\lg 3}) \\\\
& = 2n^{\lg 3} - 2n + \Theta(n^{\lg 3}) \\\\
& = O(n^{\lg 3}).
\end{aligned}
$$
- We guess $T(n) \le cn^{\lg 3} - dn$,
$$
\begin{aligned}
T(n) & = 3T(\lfloor n / 2 \rfloor) + n \\\\
& \le 3 \cdot (c(n / 2)^{\lg 3} - d(n / 2)) + n \\\\
& = (3 / 2^{\lg 3})cn^{\lg 3} - (3d / 2)n + n \\\\
& = cn^{\lg 3} + (1 - 3d / 2)n \\\\
& \le cn^{\lg 3} - dn,
\end{aligned}
$$
where the last step holds for $d \ge 2$.
|
[] | false |
[] |
04-4.4-2
|
04
|
4.4
|
4.4-2
|
docs/Chap04/4.4.md
|
Use a recursion tree to determine a good asymptotic upper bound on the recurrence $T(n) = T(n / 2) + n^2$. Use the substitution method to verify your answer.
|
- The subproblem size for a node at depth $i$ is $n / 2^i$.
Thus, the tree has $\lg n + 1$ levels and $1^{\lg n} = 1$ leaf.
The total cost over all nodes at depth $i$, for $i = 0, 1, 2, \ldots, \lg n - 1$, is $1^i (n / 2^i)^2 = (1 / 4)^i n^2$.
$$
\begin{aligned}
T(n) & = \sum_{i = 0}^{\lg n - 1} \Big(\frac{1}{4}\Big)^i n^2 + \Theta(1) \\\\
& < \sum_{i = 0}^\infty \Big(\frac{1}{4}\Big)^i n^2 + \Theta(1) \\\\
& = \frac{1}{1 - (1 / 4)} n^2 + \Theta(1) \\\\
& = \Theta(n^2).
\end{aligned}
$$
- We guess $T(n) \le cn^2$,
$$
\begin{aligned}
T(n) & \le c(n / 2)^2 + n^2 \\\\
& = cn^2 / 4 + n^2 \\\\
& = (c / 4 + 1)n^2 \\\\
& \le cn^2,
\end{aligned}
$$
where the last step holds for $c \ge 4 / 3$.
|
[] | false |
[] |
04-4.4-3
|
04
|
4.4
|
4.4-3
|
docs/Chap04/4.4.md
|
Use a recursion tree to determine a good asymptotic upper bound on the recurrence $T(n) = 4T(n / 2 + 2) + n$. Use the substitution method to verify your answer.
|
- The subproblem size for a node at depth $i$ is $n / 2^i$.
Thus, the tree has $\lg n + 1$ levels and $4^{\lg n} = n^2$ leaves.
The total cost over all nodes at depth $i$, for $i = 0, 1, 2, \ldots, \lg n - 1$, is $4^i(n / 2^i + 2) = 2^i n + 2 \cdot 4^i$.
$$
\begin{aligned}
T(n) & = \sum_{i = 0}^{\lg n - 1} (2^i n + 2 \cdot 4^i) + \Theta(n^2) \\\\
& = \sum_{i = 0}^{\lg n - 1} 2^i n + \sum_{i = 0}^{\lg n - 1} 2 \cdot 4^i + \Theta(n^2) \\\\
& = \frac{2^{\lg n} - 1}{2 - 1}n + 2 \cdot \frac{4^{\lg n} - 1}{4 - 1} + \Theta(n^2) \\\\
& = (2^{\lg n} - 1)n + \frac{2}{3} (4^{\lg n} - 1) + \Theta(n^2) \\\\
& = (n - 1)n + \frac{2}{3}(n^2 - 1) + \Theta(n^2) \\\\
& = \Theta(n^2).
\end{aligned}
$$
- We guess $T(n) \le c(n^2 - dn)$,
$$
\begin{aligned}
T(n) & = 4T(n / 2 + 2) + n \\\\
& \le 4c[(n / 2 + 2)^2 - d(n / 2 + 2)] + n \\\\
& = 4c(n^2 / 4 + 2n + 4 - dn / 2 - 2d) + n \\\\
& = cn^2 + 8cn + 16c - 2cdn - 8cd + n \\\\
& = cn^2 - cdn + 8cn + 16c - cdn - 8cd + n \\\\
& = c(n^2 - dn) - (cd - 8c - 1)n - (d - 2) \cdot 8c \\\\
& \le c(n^2 - dn),
\end{aligned}
$$
where the last step holds for $d \ge 2$ and $cd - 8c - 1 \ge 0$; for example, $c = 1$ and $d = 9$.
|
[] | false |
[] |
04-4.4-4
|
04
|
4.4
|
4.4-4
|
docs/Chap04/4.4.md
|
Use a recursion tree to determine a good asymptotic upper bound on the recurrence $T(n) = 2T(n - 1) + 1$. Use the substitution method to verify your answer.
|
- The subproblem size for a node at depth $i$ is $n - i$.
Thus, the tree has $n + 1$ levels ($i = 0, 1, 2, \dots, n$) and $2^n$ leaves.
The total cost over all nodes at depth $i$, for $i = 0, 1, 2, \ldots, n - 1$, is $2^i$.
The $n$-th level has $2^n$ leaves each with cost $\Theta(1)$, so the total cost of the $n$-th level is $\Theta(2^n)$.
Adding the costs of all the levels of the recursion tree we get the following:
$$
\begin{aligned}
T(n) & = \sum_{i = 0}^{n - 1} 2^i + \Theta(2^n)\\\\
& = \frac{2^n - 1}{2 - 1} + \Theta(2^n)\\\\
& = 2^n - 1 + \Theta(2^n)\\\\
& = \Theta(2^n).
\end{aligned}
$$
- We guess $T(n) \le c2^n - d$,
$$
\begin{aligned}
T(n) & \le 2(c2^{n - 1} - d) + 1 \\\\
& = c2^n - 2d + 1 \\\\
& \le c2^n - d
\end{aligned}
$$
where the last step holds for $d \ge 1$. Thus, $T(n) = O(2^n)$.
|
[] | false |
[] |
04-4.4-5
|
04
|
4.4
|
4.4-5
|
docs/Chap04/4.4.md
|
Use a recursion tree to determine a good asymptotic upper bound on the recurrence $T(n) = T(n - 1) + T(n / 2) + n$. Use the substitution method to verify your answer.
|
This is a curious one. The tree makes it look like it is exponential in the worst case. The tree is not full (not a complete binary tree of height $n$), but it is not polynomial either. It's easy to show $O(2^n)$ and $\Omega(n^2)$.
To justify that this is a pretty tight upper bound, we'll show that no polynomial bound can work. If we assume $T(n) \le cn^k$, then when we substitute into the recurrence, the new coefficient for $n^k$ can be as high as $c(1 + \frac{1}{2^k})$, which is bigger than $c$ regardless of how we choose $c$.
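A quick check of that claim: substituting $T(n) \le cn^k$ into the recurrence gives
$$T(n) \le c(n - 1)^k + c(n / 2)^k + n = c\Big(1 + \frac{1}{2^k}\Big)n^k - O(n^{k - 1}) + n,$$
and the leading coefficient $c(1 + 2^{-k})$ exceeds $c$ for every choice of $c$, so the induction cannot close for any polynomial bound.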
- We guess $T(n) \le c2^n - 4n$,
$$
\begin{aligned}
T(n) & \le c2^{n - 1} - 4(n - 1) + c2^{n / 2} - 4n / 2 + n \\\\
& = c(2^{n - 1} + 2^{n / 2}) - 5n + 4 \\\\
& \le c(2^{n - 1} + 2^{n / 2}) - 4n & (n \ge 4) \\\\
& \le c(2^{n - 1} + 2^{n - 1}) - 4n & (n \ge 2) \\\\
& \le c2^n - 4n \\\\ & = O(2^n).
\end{aligned}
$$
- We guess $T(n) \ge cn^2$,
$$
\begin{aligned}
T(n) & \ge c(n - 1)^2 + c(n / 2)^2 + n \\\\
& = cn^2 - 2cn + c + cn^2 / 4 + n \\\\
& = (5 / 4)cn^2 + (1 - 2c)n + c \\\\
& \ge cn^2 + (1 - 2c)n + c & (c \le 1 / 2) \\\\
& \ge cn^2 \\\\
& = \Omega(n^2).
\end{aligned}
$$
|
[] | false |
[] |
04-4.4-6
|
04
|
4.4
|
4.4-6
|
docs/Chap04/4.4.md
|
Argue that the solution to the recurrence $T(n) = T(n / 3) + T(2n / 3) + cn$, where $c$ is a constant, is $\Omega(n\lg n)$ by appealing to the recursion tree.
|
We know that the cost at each level of the tree is $cn$ by examining the tree in figure 4.6. To find a lower bound on the cost of the algorithm, we need a lower bound on the height of the tree.
The shortest simple path from root to leaf is found by following the leftmost child at each node. Since we divide by $3$ at each step, we see that this path has length $\log_3 n$. Therefore, the cost of the algorithm is
$$cn(\log_3 n + 1) \ge cn\log_3 n = \frac{c}{\lg 3} n\lg n = \Omega(n\lg n).$$
|
[] | false |
[] |
04-4.4-7
|
04
|
4.4
|
4.4-7
|
docs/Chap04/4.4.md
|
Draw the recursion tree for $T(n) = 4T(\lfloor n / 2 \rfloor) + cn$, where $c$ is a constant, and provide a tight asymptotic bound on its solution. Verify your answer with the substitution method.
|
- The subproblem size for a node at depth $i$ is $n / 2^i$.
Thus, the tree has $\lg n + 1$ levels and $4^{\lg n} = n^{\lg 4} = n^2$ leaves.
The total cost over all nodes at depth $i$, for $i = 0, 1, 2, \ldots, \lg n - 1$, is $4^i(cn / 2^i) = 2^icn$.
$$
\begin{aligned}
T(n) & = \sum_{i = 0}^{\lg n - 1} 2^icn + \Theta(n^2) \\\\
& = \frac{2^{\lg n} - 1}{2 - 1}cn + \Theta(n^2) \\\\
& = \Theta(n^2).
\end{aligned}
$$
- For $O(n^2)$, we guess $T(n) \le dn^2 - cn$,
$$
\begin{aligned}
T(n) & \le 4d(n / 2)^2 - 4c(n / 2) + cn \\\\
& = dn^2 - cn.
\end{aligned}
$$
- For $\Omega(n^2)$, we guess $T(n) \ge dn^2 - cn$,
$$
\begin{aligned}
T(n) & \ge 4d(n / 2)^2 - 4c(n / 2) + cn \\\\
& = dn^2 - cn.
\end{aligned}
$$
|
[] | false |
[] |
04-4.4-8
|
04
|
4.4
|
4.4-8
|
docs/Chap04/4.4.md
|
Use a recursion tree to give an asymptotically tight solution to the recurrence $T(n) = T(n - a) + T(a) + cn$, where $a \ge 1$ and $c > 0$ are constants.
|
- The tree has $n / a + 1$ levels.
The total cost over all nodes at depth $i$, for $i = 0, 1, 2, \ldots, n / a - 1$, is $c(n - ia)$.
$$
\begin{aligned}
T(n) & = \sum_{i = 0}^{n / a - 1} c(n - ia) + (n / a)T(a) \\\\
& = \sum_{i = 0}^{n / a - 1} cn - \sum_{i = 0}^{n / a - 1} cia + \Theta(n) \\\\
& = \frac{cn^2}{a} - \frac{cn^2}{2a} + \Theta(n) \\\\
& = \frac{cn^2}{2a} + \Theta(n) \\\\
& = \Theta(n^2).
\end{aligned}
$$
- For $O(n^2)$, we guess $T(n) \le cn^2$,
$$
\begin{aligned}
T(n) & \le c(n - a)^2 + T(a) + cn \\\\
& = cn^2 - c(2a - 1)n + ca^2 + T(a) \\\\
& \le cn^2,
\end{aligned}
$$
where the last step holds for $a \ge 1$ and all sufficiently large $n$, since then $c(2a - 1)n \ge cn \ge ca^2 + T(a)$.
- For $\Omega(n^2)$, we guess $T(n) \ge dn^2$ for a suitable constant $d > 0$,
$$
\begin{aligned}
T(n) & \ge d(n - a)^2 + T(a) + cn \\\\
& \ge dn^2 - 2adn + cn \\\\
& = dn^2 + (c - 2ad)n \\\\
& \ge dn^2,
\end{aligned}
$$
where the last step holds for $0 < d \le c / (2a)$.
|
[] | false |
[] |
04-4.4-9
|
04
|
4.4
|
4.4-9
|
docs/Chap04/4.4.md
|
Use a recursion tree to give an asymptotically tight solution to the recurrence $T(n) = T(\alpha n) + T((1 - \alpha)n) + cn$, where $\alpha$ is a constant in the range $0 < \alpha < 1$, and $c > 0$ is also a constant.
|
We can assume that $0 < \alpha \le 1 / 2$, since otherwise we can let $\beta = 1 - \alpha$ and solve it for $\beta$.
Thus, the shallowest leaf is at depth $\log_{1 / \alpha} n$, the deepest at depth $\log_{1 / (1 - \alpha)} n$, and each full level costs exactly $cn$. Counting only the $\log_{1 / \alpha} n$ full levels and guessing that the leaves contribute $\Theta(n)$,
$$
\begin{aligned}
T(n) & = \sum_{i = 0}^{\log_{1 / \alpha} n} cn + \Theta(n) \\\\
& = cn\log_{1 / \alpha} n + \Theta(n) \\\\
& = \Theta(n\lg n).
\end{aligned}
$$
We can also show $T(n) = \Theta(n\lg n)$ by substitution.
To prove the upper bound, we guess that $T(n) \le dn\lg n$ for a constant $d > 0$,
$$
\begin{aligned}
T(n) & = T(\alpha n) + T((1 - \alpha)n) + cn \\\\
& \le d\alpha n\lg(\alpha n) + d(1 - \alpha)n\lg((1 - \alpha)n) + cn \\\\
& = d\alpha n\lg\alpha + d\alpha n\lg n + d(1 - \alpha)n\lg(1 - \alpha) + d(1 - \alpha)n\lg n + cn \\\\
& = dn\lg n + dn(\alpha \lg\alpha + (1 - \alpha) \lg(1 - \alpha)) + cn \\\\
& \le dn\lg n,
\end{aligned}
$$
where the last step holds when $d \ge \frac{-c}{\alpha\lg\alpha + (1 - \alpha)\lg(1 - \alpha)}$.
We can achieve this result by solving the inequality
$$
\begin{aligned}
dn\lg n + dn(\alpha \lg\alpha + (1 - \alpha) \lg(1 - \alpha)) + cn & \le dn\lg n \\\\
\implies dn(\alpha \lg\alpha + (1 - \alpha) \lg(1 - \alpha)) + cn & \le 0 \\\\
\implies d(\alpha \lg\alpha + (1 - \alpha) \lg(1 - \alpha)) & \le -c \\\\
\implies d & \ge \frac{-c}{\alpha\lg\alpha + (1 - \alpha)\lg(1 - \alpha)},
\end{aligned}
$$
To prove the lower bound, we guess that $T(n) \ge dn\lg n$ for a constant $d > 0$,
$$
\begin{aligned}
T(n) & = T(\alpha n) + T((1 - \alpha)n) + cn \\\\
& \ge d\alpha n\lg(\alpha n) + d(1 - \alpha)n\lg((1 - \alpha)n) + cn \\\\
& = d\alpha n\lg\alpha + d\alpha n\lg n + d(1 - \alpha)n\lg(1 - \alpha) + d(1 - \alpha)n\lg n + cn \\\\
& = dn\lg n + dn(\alpha \lg\alpha + (1 - \alpha) \lg(1 - \alpha)) + cn \\\\
& \ge dn\lg n,
\end{aligned}
$$
where the last step holds when $0 < d \le \frac{-c}{\alpha\lg\alpha + (1 - \alpha)\lg(1 - \alpha)}$.
We can achieve this result by solving the inequality
$$
\begin{aligned}
dn\lg n + dn(\alpha \lg\alpha + (1 - \alpha) \lg(1 - \alpha)) + cn & \ge dn\lg n \\\\
\implies dn(\alpha \lg\alpha + (1 - \alpha) \lg(1 - \alpha)) + cn & \ge 0 \\\\
\implies d(\alpha \lg\alpha + (1 - \alpha) \lg(1 - \alpha)) & \ge -c \\\\
\implies 0 < d & \le \frac{-c}{\alpha\lg\alpha + (1 - \alpha)\lg(1 - \alpha)},
\end{aligned}
$$
Therefore, $T(n) = \Theta(n\lg n)$.
|
[] | false |
[] |
04-4.5-1
|
04
|
4.5
|
4.5-1
|
docs/Chap04/4.5.md
|
Use the master method to give tight asymptotic bounds for the following recurrences:
**a.** $T(n) = 2T(n / 4) + 1$.
**b.** $T(n) = 2T(n / 4) + \sqrt n$.
**c.** $T(n) = 2T(n / 4) + n$.
**d.** $T(n) = 2T(n / 4) + n^2$.
|
**a.** $\Theta(n^{\log_4 2}) = \Theta(\sqrt n)$.
**b.** $\Theta(n^{\log_4 2}\lg n) = \Theta(\sqrt n\lg n)$.
**c.** $\Theta(n)$.
**d.** $\Theta(n^2)$.
|
[] | false |
[] |
04-4.5-2
|
04
|
4.5
|
4.5-2
|
docs/Chap04/4.5.md
|
Professor Caesar wishes to develop a matrix-multiplication algorithm that is asymptotically faster than Strassen's algorithm. His algorithm will use the divide-and-conquer method, dividing each matrix into pieces of size $n / 4 \times n / 4$, and the divide and combine steps together will take $\Theta(n^2)$ time. He needs to determine how many subproblems his algorithm has to create in order to beat Strassen's algorithm. If his algorithm creates $a$ subproblems, then the recurrence for the running time $T(n)$ becomes $T(n) = aT(n / 4) + \Theta(n^2)$. What is the largest integer value of $a$ for which Professor Caesar's algorithm would be asymptotically faster than Strassen's algorithm?
|
Strassen's algorithm has running time of $\Theta(n^{\lg 7})$.
The largest integer $a$ such that $\log_4 a < \lg 7$ is $a = 48$.
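Explicitly,
$$\log_4 a < \lg 7 \iff a < 4^{\lg 7} = \big(2^{\lg 7}\big)^2 = 7^2 = 49,$$
so $a = 48$ works, while $a = 49$ would match Strassen's exponent exactly. The resulting running time is $\Theta(n^{\log_4 48}) = O(n^{2.7925})$.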
|
[] | false |
[] |
04-4.5-3
|
04
|
4.5
|
4.5-3
|
docs/Chap04/4.5.md
|
Use the master method to show that the solution to the binary-search recurrence $T(n) = T(n / 2) + \Theta(1)$ is $T(n) = \Theta(\lg n)$. (See exercise 2.3-5 for a description of binary search.)
|
$$
\begin{aligned}
a & = 1, b = 2, \\\\
f(n) & = \Theta(n^{\lg 1}) = \Theta(1), \\\\
T(n) & = \Theta(\lg n).
\end{aligned}
$$
|
[] | false |
[] |
04-4.5-4
|
04
|
4.5
|
4.5-4
|
docs/Chap04/4.5.md
|
Can the master method be applied to the recurrence $T(n) = 4T(n / 2) + n^2\lg n$? Why or why not? Give an asymptotic upper bound for this recurrence.
|
With $a = 4$, $b = 2$, we have $f(n) = n^2\lg n \ne O(n^{2 - \epsilon}) \ne \Omega(n^{2 + \epsilon})$, so we cannot apply the master method.
We guess $T(n) \le cn^2\lg^2 n$; substituting $T(n / 2) \le c(n / 2)^2\lg^2(n / 2)$ into the recurrence yields
$$
\begin{aligned}
T(n) & = 4T(n / 2) + n^2\lg n \\\\
& \le 4c(n / 2)^2\lg^2(n / 2) + n^2\lg n \\\\
& = cn^2\lg(n / 2)\lg n - cn^2\lg(n / 2)\lg 2 + n^2\lg n \\\\
& = cn^2\lg^2 n - cn^2\lg n\lg 2 - cn^2\lg(n / 2)\lg 2 + n^2\lg n \\\\
& = cn^2\lg^2 n + (1 - c\lg 2)n^2\lg n - cn^2\lg(n / 2)\lg 2 & (c \ge 1/\lg 2) \\\\
& \le cn^2\lg^2 n - cn^2\lg(n / 2)\lg 2 \\\\
& \le cn^2\lg^2 n.
\end{aligned}
$$
Exercise 4.6-2 is the general case for this.
|
[] | false |
[] |
04-4.5-5
|
04
|
4.5
|
4.5-5 $\star$
|
docs/Chap04/4.5.md
|
Consider the regularity condition $af(n / b) \le cf(n)$ for some constant $c < 1$, which is part of case 3 of the master theorem. Give an example of constants $a \ge 1$ and $b > 1$ and a function $f(n)$ that satisfies all the conditions in case 3 of the master theorem, except the regularity condition.
|
$a = 1$, $b = 2$ and $f(n) = n(2 - \cos n)$.
If we try to verify the regularity condition along the subsequence $n = 2\pi(2k + 1)$ for integer $k \ge 0$, where $\cos n = 1$ and $\cos(n / 2) = -1$, the condition $af(n / b) \le cf(n)$ becomes
$$
\begin{aligned}
\frac{n}{2}\Big(2 - \cos\frac{n}{2}\Big) & \le cn(2 - \cos n) \\\\
\frac{3n}{2} & \le cn \\\\
\frac{3}{2} & \le c.
\end{aligned}
$$
This forces $c \ge 3 / 2$, but case 3 requires $c < 1$.
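A quick numerical sanity check of the ratio $af(n / b) / f(n) = f(n / 2) / f(n)$ along that subsequence (illustrative only):
```cpp
#include <cmath>
#include <cstdio>

double f(double n) { return n * (2.0 - std::cos(n)); }

int main() {
    const double pi = std::acos(-1.0);
    for (int k = 1; k <= 4; ++k) {
        double n = 2.0 * pi * (2 * k + 1);  // cos n = 1, cos(n / 2) = -1
        std::printf("n = %9.3f  f(n/2) / f(n) = %.6f\n", n, f(n / 2.0) / f(n));
    }
    return 0;
}
```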
|
[] | false |
[] |
04-4.6-1
|
04
|
4.6
|
4.6-1 $\star$
|
docs/Chap04/4.6.md
|
Give a simple and exact expression for $n_j$ in equation $\text{(4.27)}$ for the case in which $b$ is a positive integer instead of an arbitrary real number.
|
We state that $\forall{j \ge 0}, n_j = \left \lceil \frac{n}{b^j} \right \rceil$.
Indeed, for $j = 0$ we have from the recurrence's base case that $n_0 = n = \left \lceil \frac{n}{b^0} \right \rceil$.
Now, suppose $n_{j - 1} = \left \lceil \frac{n}{b^{j - 1}} \right \rceil$ for some $j > 0$. By definition, $n_j = \left \lceil \frac{n_{j - 1}}{b} \right \rceil$.
It follows from the induction hypothesis that $n_j = \left \lceil \frac{\left \lceil \frac{n}{b^{j - 1}} \right \rceil}{b} \right \rceil$.
Since $b$ is a positive integer, equation $\text{(3.4)}$ implies that $\left \lceil \frac{\left \lceil \frac{n}{b^{j - 1}} \right \rceil}{b} \right \rceil = \left \lceil \frac{n}{b^j} \right \rceil$.
Therefore, $n_j = \left \lceil \frac{n}{b^j} \right \rceil$.
P.S. $n_j$ is obtained by shifting the base $b$ representation $j$ positions to the right, and adding $1$ if any of the $j$ least significant positions are non-zero.
|
[] | false |
[] |
04-4.6-2
|
04
|
4.6
|
4.6-2 $\star$
|
docs/Chap04/4.6.md
|
Show that if $f(n) = \Theta(n^{\log_b a}\lg^k{n})$, where $k \ge 0$, then the master recurrence has solution $T(n) = \Theta(n^{\log_b a}\lg^{k + 1}n)$. For simplicity, confine your analysis to exact powers of $b$.
|
$$
\begin{aligned}
g(n) & = \sum_{j = 0}^{\log_b n - 1} a^j f(n / b^j) \\\\
f(n / b^j) & = \Theta\Big((n / b^j)^{\log_b a} \lg^k(n / b^j) \Big) \\\\
g(n) & = \Theta\Big(\sum_{j = 0}^{\log_b n - 1}a^j\big(\frac{n}{b^j}\big)^{\log_b a}\lg^k\big(\frac{n}{b^j}\big)\Big) \\\\
& = \Theta(A) \\\\
A & = \sum_{j = 0}^{\log_b n - 1} a^j \big(\frac{n}{b^j}\big)^{\log_b a}\lg^k\frac{n}{b^j} \\\\
& = n^{\log_b a} \sum_{j = 0}^{\log_b n - 1}\Big(\frac{a}{b^{\log_b a}}\Big)^j\lg^k\frac{n}{b^j} \\\\
& = n^{\log_b a}\sum_{j = 0}^{\log_b n - 1}\lg^k\frac{n}{b^j} \\\\
& = n^{\log_b a} B \\\\
\lg^k\frac{n}{d} & = (\lg n - \lg d)^k = \lg^k{n} - o(\lg^k{n}) \\\\
B & = \sum_{j = 0}^{\log_b n - 1}\lg^k\frac{n}{b^j} \\\\
& = \sum_{j = 0}^{\log_b n - 1}\Big(\lg^k{n} - o(\lg^k{n})\Big) \\\\
& = \log_b n\lg^k{n} - \log_b n \cdot o(\lg^k{n}) \\\\
& = \Theta(\log_b n\lg^k{n}) \\\\
& = \Theta(\lg^{k + 1}{n}) \\\\
g(n) & = \Theta(A) \\\\
& = \Theta(n^{\log_b a}B) \\\\
& = \Theta(n^{\log_b a}\lg^{k + 1}{n}).
\end{aligned}
$$
|
[] | false |
[] |
04-4.6-3
|
04
|
4.6
|
4.6-3 $\star$
|
docs/Chap04/4.6.md
|
Show that case 3 of the master method is overstated, in the sense that the regularity condition $af(n / b) \le cf(n)$ for some constant $c < 1$ implies that there exists a constant $\epsilon > 0$ such that $f(n) = \Omega(n^{\log_b a + \epsilon})$.
|
$$
\begin{aligned}
af(n / b) & \le cf(n) \\\\
\Rightarrow f(n / b) & \le \frac{c}{a} f(n) \\\\
\Rightarrow f(n) & \le \frac{c}{a} f(bn) \\\\
& = \frac{c}{a} \left(\frac{c}{a} f(b^2n)\right) \\\\
& = \frac{c}{a} \left(\frac{c}{a}\left(\frac{c}{a} f(b^3n)\right)\right) \\\\
& = \left(\frac{c}{a}\right)^i f(b^i n) \\\\
\Rightarrow f(b^i n) & \ge \left(\frac{a}{c}\right)^i f(n).
\end{aligned}
$$
Let $n = 1$, then we have
$$f(b^i) \ge \left(\frac{a}{c}\right)^i f(1) \quad (*).$$
Let $b^i = n \Rightarrow i = \log_b n$, then substitute back into equation $(*)$,
$$
\begin{aligned}
f(n) & \ge \left(\frac{a}{c}\right)^{\log_b n} f(1) \\\\
& \ge n^{\log_b \frac{a}{c}} f(1) \\\\
& \ge n^{\log_b a + \epsilon} & \text{ where $\epsilon > 0$ because $\frac{a}{c} > a$ (recall that $c < 1$)} \\\\
& = \Omega(n^{\log_b a + \epsilon}).
\end{aligned}
$$
|
[] | false |
[] |
04-4-1
|
04
|
4-1
|
4-1
|
docs/Chap04/Problems/4-1.md
|
Give asymptotic upper and lower bound for $T(n)$ in each of the following recurrences. Assume that $T(n)$ is constant for $n \le 2$. Make your bounds as tight as possible, and justify your answers.
**a.** $T(n) = 2T(n / 2) + n^4$.
**b.** $T(n) = T(7n / 10) + n$.
**c.** $T(n) = 16T(n / 4) + n^2$.
**d.** $T(n) = 7T(n / 3) + n^2$.
**e.** $T(n) = 7T(n / 2) + n^2$.
**f.** $T(n) = 2T(n / 4) + \sqrt n$.
**g.** $T(n) = T(n - 2) + n^2$.
|
**a.** By master theorem, $T(n) = \Theta(n^4)$.
**b.** By master theorem, $T(n) = \Theta(n)$.
**c.** By master theorem, $T(n) = \Theta(n^2\lg n)$.
**d.** By master theorem, $T(n) = \Theta(n^2)$.
**e.** By master theorem, $T(n) = \Theta(n^{\lg 7})$.
**f.** By master theorem, $T(n) = \Theta(\sqrt n \lg n)$.
**g.** Let $d = n \bmod 2$,
$$
\begin{aligned}
T(n) & = \sum_{j = 1}^{j = n / 2} (2j + d)^2 \\\\
& = \sum_{j = 1}^{n / 2} 4j^2 + 4jd + d^2 \\\\
& = \frac{n(n + 2)(n + 1)}{6} + \frac{n(n + 2)d}{2} + \frac{d^2n}{2} \\\\
& = \Theta(n^3).
\end{aligned}
$$
|
[] | false |
[] |
04-4-2
|
04
|
4-2
|
4-2
|
docs/Chap04/Problems/4-2.md
|
Throughout this book, we assume that parameter passing during procedure calls takes constant time, even if an $N$-element array is being passed. This assumption is valid in most systems because a pointer to the array is passed, not the array itself. This problem examines the implications of three parameter-passing strategies:
1. An array is passed by pointer. Time $= \Theta(1)$.
2. An array is passed by copying. Time $= \Theta(N)$, where $N$ is the size of the array.
3. An array is passed by copying only the subrage that might be accessed by the called procedure. Time $= \Theta(q - p + 1)$ if the subarray $A[p..q]$ is passed.
**a.** Consider the recursive binary search algorithm for finding a number in a sorted array (see Exercise 2.3-5). Give recurrences for the worst-case running times of binary search when arrays are passed using each of the three methods above, and give good upper bounds on the solutions of the recurrences. Let $N$ be the size of the original problems and $n$ be the size of a subproblem.
**b.** Redo part (a) for the $\text{MERGE-SORT}$ algorithm from Section 2.3.1.
|
**a.**
1. $T(n) = T(n / 2) + c = \Theta(\lg n)$. (master method)
2. $\Theta(n\lg n)$.
$$
\begin{aligned}
T(n) & = T(n / 2) + cN \\\\
& = 2cN + T(n / 4) \\\\
& = 3cN + T(n / 8) \\\\
& = \sum_{i = 0}^{\lg n - 1} cN \\\\
& = cN\lg n \\\\
& = \Theta(n\lg n).
\end{aligned}
$$
3. $T(n) = T(n / 2) + cn = \Theta(n)$. (master method)
**b.**
1. $T(n) = 2T(n / 2) + cn = \Theta(n\lg n)$. (master method)
2. $\Theta(n^2)$.
$$
\begin{aligned}
T(n) & = 2T(n / 2) + cn + 2N \\\\
& = cn + 2N + 2c(n / 2) + 4N + 4T(n / 4) \\\\
& = cn + 2N + cn + 4N + cn + 8N + 8T(n / 8) \\\\
& = \sum_{i = 0}^{\lg n - 1}(cn + 2^{i + 1}N) \\\\
& = \sum_{i = 0}^{\lg n - 1}cn + 2N\sum_{i = 0}^{\lg n - 1}2^i \\\\
& = cn\lg n + 2N(n - 1) = \Theta(nN) \\\\
& = \Theta(n^2).
\end{aligned}
$$
3. $\Theta(n\lg n)$.
$$
\begin{aligned}
T(n) & = 2T(n / 2) + cn + 2n / 2 \\\\
& = 2T(n / 2) + (c + 1)n \\\\
& = \Theta(n\lg n).
\end{aligned}
$$
|
[] | false |
[] |
04-4-3
|
04
|
4-3
|
4-3
|
docs/Chap04/Problems/4-3.md
|
Give asymptotic upper and lower bounds for $T(n)$ in each of the following recurrences. Assume that $T(n)$ is constant for sufficiently small $n$. Make your bounds as tight as possible, and justify your answers.
**a.** $T(n) = 4T(n / 3) + n\lg n$.
**b.** $T(n) = 3T(n / 3) + n / \lg n$.
**c.** $T(n) = 4T(n / 2) + n^2\sqrt n$.
**d.** $T(n) = 3T(n / 3 - 2) + n / 2$.
**e.** $T(n) = 2T(n / 2) + n / \lg n$.
**f.** $T(n) = T(n / 2) + T(n / 4) + T(n / 8) + n$.
**g.** $T(n) = T(n - 1) + 1 / n$.
**h.** $T(n) = T(n - 1) + \lg n$.
**i.** $T(n) = T(n - 2) + 1 / \lg n$.
**j.** $T(n) = \sqrt nT(\sqrt n) + n$
|
**a.** By master theorem, $T(n) = \Theta(n^{\log_3 4})$.
**b.**
By the recursion-tree method, we can guess that $T(n) = \Theta(n\log_3\log_3 n)$.
We start by proving the upper bound.
Suppose $k < n \implies T(k) \le ck \log_3\log_3 k - k$, where we subtract a lower order term to strengthen our induction hypothesis.
It follows that
$$
\begin{aligned}
T(n) & \le 3 \Big(c \frac{n}{3} \log_3\log_3 \frac{n}{3} - \frac{n}{3}\Big) + \frac{n}{\lg n} \\\\
& = c n \log_3\log_3 \frac{n}{3} - n + \frac{n}{\lg n} \\\\
& \le c n \log_3\log_3 n - n,
\end{aligned}
$$
where the last step holds for sufficiently large $c$ and $n$, because $cn\big(\log_3\log_3 n - \log_3\log_3\frac{n}{3}\big) = cn\log_3\frac{\log_3 n}{\log_3 n - 1} = \Theta(n / \lg n)$ eventually dominates $\frac{n}{\lg n}$.
The lower bound can proved analogously.
**c.** By master theorem, $T(n) = \Theta(n^{2.5})$.
**d.** It is $\Theta(n\lg n)$. The subtraction occurring inside the argument to $T$ won't change the asymptotics of the solution, that is, for large $n$ the division is so much more of a change than the subtraction that it is the only part that matters. Once we drop that subtraction, the solution comes from the master theorem.
**e.** By the same reasoning as part (b), the function is $O(n\lg n)$ and $\Omega(n^{1 - \epsilon})$ for every $\epsilon$ and so is $\tilde O(n)$, see [Problem 3-5](../../../Chap03/Problems/3-5).
**f.** We guess $T(n) \le cn$,
$$
\begin{aligned}
T(n) & = T(n / 2) + T(n / 4) + T(n / 8) + n \\\\
& \le \frac{7}{8}cn + n \le cn. \\\\
\end{aligned}
$$
where the last step holds for $c \ge 8$.
**g.** Recall that $\chi_A$ denotes the indicator function of $A$. We see that the sum is
$$T(0) + \sum_{j = 1}^n \frac{1}{j} = T(0) + \int_1^{n + 1}\sum_{j = 1}^{n + 1} \frac{\chi_{j, j + 1}(x)}{j}dx.$$
Since $\frac{1}{x}$ is monotonically decreasing, we have that for every $i \in \mathbb Z^+$,
$$\text{sup}\_{x \in (i, i + 1)} \sum\_{j = 1}^{n + 1} \frac{\chi_{j, j + 1}(x)}{j} - \frac{1}{x} = \frac{1}{i} - \frac{1}{i + 1} = \frac{1}{i(i + 1)}.$$
Our expression for $T(n)$ becomes
$$T(n) = T(0) + \int_1^{n + 1} \Big(\frac{1}{x} + O(\frac{1}{\lfloor x \rfloor(\lfloor x \rfloor + 1)})\Big)dx.$$
We deal with the error term by first chopping out the constant amount between $1$ and $2$ and then bounding the error term by $O(\frac{1}{x(x - 1)})$, which has an antiderivative (by the method of partial fractions) that is $O(\frac{1}{n})$,
$$
\begin{aligned}
T(n) & = \int_1^{n + 1} \frac{dx}{x} + O(\frac{1}{n}) \\\\
& = \lg n + T(0) + \frac{1}{2} + O(\frac{1}{n}).
\end{aligned}
$$
This gets us our final answer of $T(n) = \Theta(\lg n)$.
**h.** We see that we explicitly have
$$
\begin{aligned}
T(n) & = T(0) + \sum_{j = 1}^n \lg j \\\\
& = T(0) + \int_1^{n + 1} \sum_{j = 1}^{n + 1} \chi_{(j, j + 1)}(x) \lg j dx.
\end{aligned}
$$
Similarly to above, we will relate this sum to the integral of $\lg x$.
$$\text{sup}\_{x \in (i, i + 1)} \sum\_{j = 1}^{n + 1} \chi\_{(j, j + 1)}(x) \lg j - \lg x = \lg(j + 1) - \lg j = \lg \Big(\frac{j + 1}{j}\Big).$$
Therefore,
$$
\begin{aligned}
T(n) & \le \int_1^n \big(\lg(x + 2) + \lg x - \lg(x + 1)\big)dx \\\\
& = \Big(1 + O\Big(\frac{1}{\lg n}\Big)\Big)\Theta(n\lg n) = \Theta(n\lg n).
\end{aligned}
$$
**i.** See the approach used in the previous two parts, we will get $T(n) = \Theta(\frac{n}{\lg n})$.
**j.** Let $i$ be the smallest value such that $n^{\frac{1}{2^i}} < 2$. We recall from a previous problem (3-6.e) that $i = \Theta(\lg\lg n)$. Expanding the recurrence, each level of the expansion contributes $n$, so
$$
\begin{aligned}
T(n) & = n^{1 - \frac{1}{2^i}}T(2) + in \\\\
& = \Theta(n\lg\lg n).
\end{aligned}
$$
|
[] | false |
[] |
04-4-4
|
04
|
4-4
|
4-4
|
docs/Chap04/Problems/4-4.md
|
This problem develops properties of the Fibonacci numbers, which are defined by recurrence $\text{(3.22)}$. We shall use the technique of generating functions to solve the Fibonacci recurrence. Define the **_generating function_** (or **_formal power series_**) $\mathcal F$ as
$$
\begin{aligned}
\mathcal F(z)
& = \sum_{i = 0}^{\infty} F_iz^i \\\\
& = 0 + z + z^2 + 2z^3 + 3z^4 + 5z^5 + 8z^6 + 13z^7 + 21z^8 + \cdots,
\end{aligned}
$$
where $F_i$ is the $i$th Fibonacci number.
**a.** Show that $\mathcal F(z) = z + z\mathcal F(z) + z^2\mathcal F(z)$.
**b.** Show that
$$
\begin{aligned}
\mathcal F(z)
& = \frac{z}{1 - z - z^2} \\\\
& = \frac{z}{(1 - \phi z)(1 - \hat\phi z)} \\\\
& = \frac{1}{\sqrt 5}\Big(\frac{1}{1 - \phi z} - \frac{1}{1 - \hat{\phi} z}\Big),
\end{aligned}
$$
where
$\phi = \frac{1 + \sqrt 5}{2} = 1.61803\ldots$
and
$\hat\phi = \frac{1 - \sqrt 5}{2} = -0.61803\ldots$
**c.** Show that
$$\mathcal F(z) = \sum_{i = 0}^{\infty}\frac{1}{\sqrt 5}(\phi^i - \hat{\phi}^i)z^i.$$
**d.** Use part \(c\) to prove that $F_i = \phi^i / \sqrt 5$ for $i > 0$, rounded to the nearest integer. ($\textit{Hint:}$ Observe that $|\hat{\phi}| < 1$.)
|
**a.**
$$
\begin{aligned} z + z\mathcal F(z) + z^2\mathcal F(z)
& = z + z\sum_{i = 0}^{\infty} F_iz^i + z^2\sum_{i = 0}^{\infty}F_i z^i \\\\
& = z + \sum_{i = 1}^{\infty} F_{i - 1}z^i + \sum_{i = 2}^{\infty}F_{i - 2} z^i \\\\
& = z + F_1z + \sum_{i = 2}^{\infty}(F_{i - 1} + F_{i - 2})z^i \\\\
& = z + F_1z + \sum_{i = 2}^{\infty}F_iz^i \\\\
& = \mathcal F(z).
\end{aligned}
$$
**b.** Note that $\phi - \hat\phi = \sqrt 5$, $\phi + \hat\phi = 1$ and $\phi\hat\phi = - 1$.
$$
\begin{aligned}
\mathcal F(z) & = \frac{\mathcal F(z)(1 - z - z^2)}{1 - z - z^2} \\\\
& = \frac{\mathcal F(z) - z\mathcal F(z) - z^2\mathcal F(z) - z + z}{1 - z - z^2} \\\\
& = \frac{\mathcal F(z) - \mathcal F(z) + z}{1 - z - z^2} \\\\
& = \frac{z}{1 - z - z^2} \\\\
& = \frac{z}{1 - (\phi + \hat\phi)z + \phi\hat\phi z^2} \\\\
& = \frac{z}{(1 - \phi z)(1 - \hat\phi z)} \\\\
& = \frac{\sqrt 5 z}{\sqrt 5 (1 - \phi z)(1 - \hat\phi z)} \\\\
& = \frac{(\phi - \hat\phi)z + 1 - 1}{\sqrt 5 (1 - \phi z)(1 - \hat\phi z)} \\\\
& = \frac{(1 - \hat\phi z) - (1 - \phi z)}{\sqrt 5 (1 - \phi z)(1 - \hat\phi z)} \\\\
& = \frac{1}{\sqrt 5}\Big(\frac{1}{1 - \phi z} - \frac{1}{1 - \hat\phi z}\Big).
\end{aligned}
$$
**c.** We have $\frac{1}{1 - x} = \sum_{k = 0}^{\infty}x^k$, when $|x| < 1$, thus
$$
\begin{aligned}
\mathcal F(n) & = \frac{1}{\sqrt 5}\Big(\frac{1}{1 - \phi z} - \frac{1}{1 - \hat\phi z}\Big) \\\\
& = \frac{1}{\sqrt 5}\Big(\sum_{i = 0}^{\infty}\phi^i z^i - \sum_{i = 0}^{\infty}\hat{\phi}^i z^i\Big) \\\\
& = \sum_{i = 0}^{\infty}\frac{1}{\sqrt 5}(\phi^i - \hat{\phi}^i) z^i.
\end{aligned}
$$
**d.** $\mathcal F(z) = \sum_{i = 0}^{\infty}\alpha_i z^i$ where $\alpha_i = \frac{\phi^i - \hat{\phi}^i}{\sqrt 5}$. From this follows that $\alpha_i = F_i$, that is
$$F_i = \frac{\phi^i - \hat{\phi}^i}{\sqrt 5} = \frac{\phi^i}{\sqrt 5} - \frac{\hat{\phi}^i}{\sqrt 5},$$
Since $|\hat\phi| < 1$ and $\sqrt 5 > 2$, we have $|\hat{\phi}^i / \sqrt 5| \le 1 / \sqrt 5 < 1 / 2$ for all $i \ge 0$, so $F_i$ equals $\phi^i / \sqrt 5$ rounded to the nearest integer.
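A quick numerical sanity check of part (d) (illustrative only):
```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double phi = (1.0 + std::sqrt(5.0)) / 2.0;
    long long a = 0, b = 1;  // F_0 and F_1
    for (int i = 1; i <= 20; ++i) {
        long long rounded = std::llround(std::pow(phi, i) / std::sqrt(5.0));
        std::printf("F_%-2d = %6lld   round(phi^%d / sqrt 5) = %6lld\n",
                    i, b, i, rounded);
        long long next = a + b;  // advance the Fibonacci recurrence
        a = b;
        b = next;
    }
    return 0;
}
```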
|
[] | false |
[] |
04-4-5
|
04
|
4-5
|
4-5
|
docs/Chap04/Problems/4-5.md
|
Professor Diogenes has $n$ supposedly identical integrated-circuit chips that in principle are capable of testing each other. The professor's test jig accommodates two chips at a time. When the jig is loaded, each chip tests the other and reports whether it is good or bad. A good chip always reports accurately whether the other chip is good or bad, but the professor cannot trust the answer of a bad chip. Thus, the four possible outcomes of a test are as follows:
$$
\begin{array}{lll}
\text{Chip $A$ says} & \text{Chip $B$ says} & \text{Conclusion} \\\\
\hline
\text{$B$ is good} & \text{$A$ is good} & \text{both are good, or both are bad} \\\\
\text{$B$ is good} & \text{$A$ is bad} & \text{at least one is bad} \\\\
\text{$B$ is bad} & \text{$A$ is good} & \text{at least one is bad} \\\\
\text{$B$ is bad} & \text{$A$ is bad} & \text{at least one is bad}
\end{array}
$$
**a.** Show that if more than $n / 2$ chips are bad, the professor cannot necessarily determine which chips are good using any strategy based on this kind of pairwise test. Assume that the bad chips can conspire to fool the professor.
**b.** Consider the problem of finding a single good chip from among $n$ chips, assuming that more than $n / 2$ of the chips are good. Show that $\lfloor n / 2 \rfloor$ pairwise tests are sufficient to reduce the problem to one of nearly half the size.
**c.** Show that the good chips can be identified with $\Theta(n)$ pairwise tests, assuming that more than $n / 2$ chips are good. Give and solve the recurrence that describes the number of tests.
|
**a.** Let's say that there are $g < n / 2$ good chips and $n - g$ bad chips.
From this assumption, we can always find a set $G$ of good chips and a set $B$ of bad chips of equal size $g$, since $n - g \ge g$.
Now, assume that the chips in $B$ conspire to fool the professor as follows:
"for any test made by the professor, chips in $B$ declare chips in $B$ as 'good' and chips in $G$ as 'bad'."
Since the chips in $G$ always report correct answers, the reports produced by $G$ and by $B$ are perfectly symmetric: each set vouches for its own members and denounces the other's. No sequence of pairwise tests can therefore distinguish the good chips from the bad ones.
**b.**
Generalize the original problem to: "Assume there are more good chips than bad chips."
**Algorithm:**
1. Pairwise test them, and leave the last one alone if the number of chips is odd.
- If the report says at least one of them is bad, throw both chips away;
- otherwise, throw one away from each pair.
2. Recursively find one good chip among the remaining chips (a one-round sketch in C++ appears after the explanation below). The recursion ends when the number of remaining chips is $1$ or $2$.
- If there is only $1$ chip left, then it is the good chip we desire.
- If there are $2$ chips left, we make a pairwise test between them. If the report says both are good, we can conclude that both are good chips. Otherwise, one is good and the other is bad and we throw both away. The chip we left alone at step $1$ is a good chip.
**Explanation:**
1. If the number of chips is odd, from assumption we know the number of good chips must be greater than the number of bad chips. We randomly leave one chip alone from the chips, in which good chips are not less than bad chips.
2. Chip pairs that do not say each other is good either have one bad chip or have two bad chips, throwing them away doesn't change the fact that good chips are not less than bad chips.
3. The remaining chip pairs are either both good chips or bad chips, after throwing one chip away in every those pairs, we have reduced the size of the problem to at most half of the original problem size.
4. If the number of good chips is $n$ ($n > 1$) more than that of bad chips, we just throw away the chip we left alone when the number of chips is odd. In this case, the number of good chips is at least one more than that of bad chips, and we can eventually find a good chip as our algorithm claims.
5. If the number of good chips is exactly one more than that of bad chips, there are $2$ cases.
- We left alone the good chip, and remaining chips are one half good and one half bad. In this case, all the chips will be thrown away eventually. And the chip left alone is the one we desire.
- We left alone the bad chip, there are more good chips than bad chips in the remaining chips. In this case, we can recursively find a good chip in the remaining chips and the left bad chip will be thrown away at the end.
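A one-round C++ sketch of the reduction in part (b), under a simplified adversary in which bad chips always lie (the argument above covers arbitrary conspiracies; `good[i]` is the ground truth, used only to simulate the test jig):
```cpp
#include <cstddef>
#include <utility>
#include <vector>

// One round: pairwise-test the chips and keep at most half of them,
// preserving the invariant that good chips outnumber bad ones.
std::pair<std::vector<int>, int>  // (survivors, chip set aside or -1)
reduceOnce(std::vector<int> chips, const std::vector<bool>& good) {
    int spare = -1;
    if (chips.size() % 2 == 1) {  // odd count: set one chip aside
        spare = chips.back();
        chips.pop_back();
    }
    std::vector<int> kept;
    for (std::size_t i = 0; i + 1 < chips.size(); i += 2) {
        int x = chips[i], y = chips[i + 1];
        // A good chip reports its partner's true status; a bad one lies.
        bool xSaysGood = good[x] ? good[y] : !good[y];
        bool ySaysGood = good[y] ? good[x] : !good[x];
        if (xSaysGood && ySaysGood)
            kept.push_back(x);  // "both good" report: keep one of the pair
        // otherwise at least one chip is bad: discard both
    }
    return {kept, spare};
}
```
The full algorithm applies `reduceOnce` recursively and resolves the base cases of $1$ or $2$ remaining chips exactly as in the case analysis above.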
**c.** As the solution provided in (b), we can find one good chip in
$$T(n) \le T(\lceil n / 2 \rceil) + \lfloor n / 2 \rfloor.$$
By the master theorem, we have $T(n) = O(n)$. After finding a good chip, we can identify all good chips with that good chip we just found in $n - 1$ tests, so the total number of tests is
$$O(n) + n - 1 = \Theta(n).$$
|
[] | false |
[] |
04-4-6
|
04
|
4-6
|
4-6
|
docs/Chap04/Problems/4-6.md
|
An $m \times n$ array $A$ of real numbers is a **Monge array** if for all $i$, $j$, $k$, and $l$ such that $1 \le i < k \le m$ and $1 \le j < l \le n$, we have
$$A[i, j] + A[k, l] \le A[i, l] + A[k, j].$$
In other words, whenever we pick two rows and two columns of a Monge array and consider the four elements at the intersections of the rows and columns, the sum of the upper-left and lower-right elements is less than or equal to the sum of the lower-left and upper-right elements. For example, the following array is Monge:
$$
\begin{matrix}
10 & 17 & 13 & 28 & 23 \\\\
17 & 22 & 16 & 29 & 23 \\\\
24 & 28 & 22 & 34 & 24 \\\\
11 & 13 & 6 & 17 & 7 \\\\
45 & 44 & 32 & 37 & 23 \\\\
36 & 33 & 19 & 21 & 6 \\\\
75 & 66 & 51 & 53 & 34
\end{matrix}
$$
**a.** Prove that an array is Monge if and only if for all $i = 1, 2, \ldots, m - 1$, and $j = 1, 2, \ldots, n - 1$ we have
$$A[i, j] + A[i + 1,j + 1] \le A[i, j + 1] + A[i + 1, j].$$
($\textit{Hint:}$ For the "if" part, use induction seperately on rows and columns.)
**b.** The following array is not Monge. Change one element in order to make it Monge. ($\textit{Hint:}$ Use part (a).)
$$
\begin{matrix}
37 & 23 & 22 & 32 \\\\
21 & 6 & 7 & 10 \\\\
53 & 34 & 30 & 31 \\\\
32 & 13 & 9 & 6 \\\\
43 & 21 & 15 & 8
\end{matrix}
$$
**c.** Let $f(i)$ be the index of the column containing the leftmost minimum element of row $i$. Prove that $f(1) \le f(2) \le \cdots \le f(m)$ for any $m \times n$ Monge array.
**d.** Here is a description of a divide-and-conquer algorithm that computes the leftmost minimum element in each row of an $m \times n$ Monge array $A$:
Construct a submatrix $A'$ of $A$ consisting of the even-numbered rows of $A$. Recursively determine the leftmost minimum for each row in $A'$. Then compute the leftmost minimum in the odd-numbered rows of $A$.
Explain how to compute the leftmost minimum in the odd-numbered rows of $A$ (given that the leftmost minimum of the even-numbered rows is known) in $O(m + n)$ time.
**e.** Write the recurrence describing the running time of the algorithm described in part (d). Show that its solution is $O(m + n\log m)$.
|
**a.** The "only if" part is trivial, it follows form the definition of Monge array.
As for the "if" part, let's first prove that
$$
\begin{aligned}
A[i, j] + A[i + 1, j + 1] & \le A[i, j + 1] + A[i + 1, j] \\\\
\Rightarrow A[i, j] + A[k, j + 1] & \le A[i, j + 1] + A[k, j],
\end{aligned}
$$
where $i < k$.
Let's prove it by induction. The base case of $k = i + 1$ is given. As for the inductive step, we assume it holds for $k = i + n$ and we want to prove it for $k + 1 = i + n + 1$. If we add the given to the assumption, we get
$$
\begin{aligned}
A[i, j] + A[k, j + 1] & \le A[i, j + 1] + A[k, j] & \text{(assumption)} \\\\
A[k, j] + A[k + 1, j + 1] & \le A[k, j + 1] + A[k + 1, j] & \text{(given)} \\\\
\Rightarrow A[i, j] + A[k, j + 1] + A[k, j] + A[k + 1, j + 1] & \le A[i, j + 1] + A[k, j] + A[k, j + 1] + A[k + 1, j] \\\\
\Rightarrow A[i, j] + A[k + 1, j + 1] & \le A[i, j + 1] + A[k + 1, j]
\end{aligned}
$$
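Part (a) makes Monge-ness checkable in $\Theta(mn)$ time by testing only adjacent $2 \times 2$ windows; a minimal sketch:
```cpp
#include <cstddef>
#include <vector>

// Check the Monge property using only adjacent rows and columns,
// which part (a) shows is sufficient.
bool isMonge(const std::vector<std::vector<int>>& A) {
    std::size_t m = A.size(), n = A[0].size();
    for (std::size_t i = 0; i + 1 < m; ++i)
        for (std::size_t j = 0; j + 1 < n; ++j)
            if (A[i][j] + A[i + 1][j + 1] > A[i][j + 1] + A[i + 1][j])
                return false;
    return true;
}
```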
**b.**
$$
\begin{matrix}
37 & 23 & \mathbf{24} & 32 \\\\
21 & 6 & 7 & 10 \\\\
53 & 34 & 30 & 31 \\\\
32 & 13 & 9 & 6 \\\\
43 & 21 & 15 & 8 \\\\
\end{matrix}
$$
**c.** Let $a_i$ and $b_j$ be the leftmost minimal elements on rows $a$ and $b$ and let's assume that $i > j$. Then we have
$$A[j, a] + A[i, b] \le A[i, a] + A[j, b].$$
But
$$
\begin{aligned}
A[j, a] \ge A[i, a] & (a_i \text{ is minimal}) \\\\
A[i, b] \ge A[j, b] & (b_j \text{ is minimal}) \\\\
\end{aligned}
$$
Which implies that
$$
\begin{aligned}
A[j, a] + A[i, b] & \ge A[i, a] + A[j, b] \\\\
A[j, a] + A[i, b] & = A[i, a] + A[j, b]
\end{aligned}
$$
Which in turn implies that either:
$$
\begin{aligned}
A[j, b] < A[i, b] & \Rightarrow A[i, a] > A[j, a] \Rightarrow a_i \text{ is not minimal} \\\\
A[j, b] = A[i, b] & \Rightarrow b_j \text{ is not the leftmost minimal}
\end{aligned}
$$
**d.** If $\mu_i$ is the index of the $i$-th row's leftmost minimum, then we have
$$\mu_{i - 1} \le \mu_i \le \mu_{i + 1}.$$
For $i = 2k + 1$, $k \ge 0$, finding $\mu_i$ takes $\mu_{i + 1} - \mu_{i - 1} + 1$ steps at most, since we only need to compare with those numbers. Thus
$$
\begin{aligned}
T(m, n) & = \sum_{i = 0}^{m / 2 - 1} (\mu_{2i + 2} - \mu_{2i} + 1) \\\\
& = \sum_{i = 0}^{m / 2 - 1} \mu_{2i + 2} - \sum_{i = 0}^{m / 2 - 1}\mu_{2i} + m / 2 \\\\
& = \sum_{i = 1}^{m / 2} \mu_{2i} - \sum_{i = 0}^{m / 2 - 1}\mu_{2i} + m / 2 \\\\ &= \mu_m - \mu_0 + m / 2 \\\\
& = n + m / 2 \\\\
& = O(m + n).
\end{aligned}
$$
**e.** The divide time is $O(1)$, the conquer part is $T(m / 2)$ and the merge part is $O(m + n)$. Thus,
$$
\begin{aligned}
T(m) & = T(m / 2) + cn + dm \\\\
& = cn + dm + cn + dm / 2 + cn + dm / 4 + \cdots \\\\
& = \sum_{i = 0}^{\lg m - 1}cn + \sum_{i = 0}^{\lg m - 1}\frac{dm}{2^i} \\\\
& = cn\lg m + dm\sum_{i = 0}^{\lg m - 1}\frac{1}{2^i} \\\\
& < cn\lg m + 2dm \\\\
& = O(n\lg m + m).
\end{aligned}
$$
|
[] | false |
[] |
05-5.1-1
|
05
|
5.1
|
5.1-1
|
docs/Chap05/5.1.md
|
Show that the assumption that we are always able to determine which candidate is best in line 4 of procedure $\text{HIRE-ASSISTANT}$ implies that we know a total order on the ranks of the candidates.
|
A total order is a partial order that is a total relation $(\forall a, b \in A:aRb \text{ or } bRa)$.
A relation is a partial order if it is reflexive, antisymmetric and transitive.
Consider the relation "is as good as or better than".
- **Reflexive:** This is a bit trivial, but everybody is as good as or better than themselves.
- **Transitive:** If $A$ is better than $B$ and $B$ is better than $C$, then $A$ is better than $C$.
- **Antisymmetric:** If $A$ is better than $B$, then $B$ is not better than $A$.
So far we have a partial order.
Since we assume we can compare any two candidates, then comparison must be a total relation and thus we have a total order.
|
[] | false |
[] |
05-5.1-2
|
05
|
5.1
|
5.1-2 $\star$
|
docs/Chap05/5.1.md
|
Describe an implementation of the procedure $\text{RANDOM}(a, b)$ that only makes calls to $\text{RANDOM}(0, 1)$. What is the expected running time of your procedure, as a function of $a$ and $b$?
|
Since $b - a$ can be arbitrary, we need $k = \lceil \lg(b - a + 1) \rceil$ bits to represent the $b - a + 1$ possible offsets $0, 1, \ldots, b - a$. We call $\text{RANDOM}(0, 1)$ $k$ times to assemble a $k$-bit number; if that number exceeds $b - a$, it is out of range and we try again, otherwise we return $a$ plus the offset.
```cpp
RANDOM(a, b)
    range = b - a
    bits = ceil(log(2, range + 1))
    result = 0
    for i = 0 to bits - 1
        r = RANDOM(0, 1)
        result = result + (r << i)
    if result > range
        return RANDOM(a, b)
    else return a + result
```
The expected number of calls to $\text{RANDOM}(a, b)$ before one succeeds is $\frac{2^k}{b - a + 1}$, and each call uses $\text{RANDOM}(0, 1)$ exactly $k$ times, so the expected running time is $\Theta(\frac{2^k}{b - a + 1} \cdot k)$ with $k = \lceil \lg(b - a + 1) \rceil$.
Since $2^k < 2(b - a + 1)$, the expected number of calls is less than $2$, so the expected running time is $O(k) = O(\lg(b - a))$.
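For concreteness, a runnable Python sketch of the same rejection-sampling idea, where `random01` is a hypothetical stand-in for $\text{RANDOM}(0, 1)$:

```python
import random

def random01():
    """Hypothetical stand-in for RANDOM(0, 1): a fair bit."""
    return random.randint(0, 1)

def random_ab(a, b):
    """Uniform integer in [a, b] built from fair bits by rejection
    sampling, using k = ceil(lg(b - a + 1)) bits per attempt."""
    k = (b - a).bit_length()  # equals ceil(lg(b - a + 1))
    while True:
        r = 0
        for _ in range(k):
            r = (r << 1) | random01()
        if r <= b - a:        # in range: accept
            return a + r

print(random_ab(3, 10))       # some integer in [3, 10]
```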
|
[
{
"lang": "cpp",
"code": "RANDOM(a, b)\n range = b - a\n bits = ceil(log(2, range))\n result = 0\n for i = 0 to bits - 1\n r = RANDOM(0, 1)\n result = result + r << i\n if result > range\n return RANDOM(a, b)\n else return a + result"
}
] | false |
[] |
05-5.1-3
|
05
|
5.1
|
5.1-3 $\star$
|
docs/Chap05/5.1.md
|
Suppose that you want to output $0$ with probability $1 / 2$ and $1$ with probability $1 / 2$. At your disposal is a procedure $\text{BIASED-RANDOM}$, that outputs either $0$ or $1$. It outputs $1$ with some probability $p$ and $0$ with probability $1 - p$, where $0 < p < 1$, but you do not know what $p$ is. Give an algorithm that uses $\text{BIASED-RANDOM}$ as a subroutine, and returns an unbiased answer, returning $0$ with probability $1 / 2$ and $1$ with probability $1 / 2$. What is the expected running time of your algorithm as a function of $p$?
|
There are 4 outcomes when we call $\text{BIASED-RANDOM}$ twice, i.e., $00$, $01$, $10$, $11$.
The strategy is as following:
- $00$ or $11$: call $\text{BIASED-RANDOM}$ twice again
- $01$: output $0$
- $10$: output $1$
We can calculate the probability of each outcome:
- $\Pr\\{00 \text{ or } 11\\} = p^2 + (1 - p)^2$
- $\Pr\\{01\\} = (1 - p)p$
- $\Pr\\{10\\} = p(1 - p)$
Since $\Pr\\{01\\} = \Pr\\{10\\}$ and these are the only outcomes that return a value, the procedure returns $0$ and $1$ each with probability $1 / 2$.
The pseudo code is as follow:
```cpp
UNBIASED-RANDOM
while true
x = BIASED-RANDOM
y = BIASED-RANDOM
if x != y
return x
```
This is von Neumann's trick: $01$ and $10$ occur with equal probability, so returning on the first unequal pair yields a fair bit, while the outcomes $00$ and $11$, whose probabilities need not match, are simply discarded and retried.
We can view each iteration as a Bernoulli trial, where "success" means that the iteration returns a value.
$$
\begin{aligned}
\Pr\\{\text{success}\\}
& = \Pr\\{0\text{ is returned}\\} + \Pr\\{1\text{ is returned}\\} \\\\
& = 2p(1 - p).
\end{aligned}
$$
The expected number of trials before success is $1 / (2p(1 - p))$. Thus, the expected running time of $\text{UNBIASED-RANDOM}$ is $\Theta(1 / (2p(1 - p)))$.
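A runnable Python sketch of the same idea, where `biased_random` is a hypothetical stand-in for $\text{BIASED-RANDOM}$ and the bias `P` is arbitrary (the algorithm never uses its value):

```python
import random

P = 0.3  # hypothetical unknown bias

def biased_random():
    """Hypothetical stand-in for BIASED-RANDOM: 1 with probability P."""
    return 1 if random.random() < P else 0

def unbiased_random():
    """Return a fair bit: 01 and 10 are equally likely, so the first
    differing pair decides; 00 and 11 are discarded."""
    while True:
        x, y = biased_random(), biased_random()
        if x != y:
            return x

# Empirical check: the mean should be close to 0.5 regardless of P.
print(sum(unbiased_random() for _ in range(100000)) / 100000)
```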
|
[
{
"lang": "cpp",
"code": "UNBIASED-RANDOM\n while true\n x = BIASED-RANDOM\n y = BIASED-RANDOM\n if x != y\n return x"
}
] | false |
[] |
05-5.2-1
|
05
|
5.2
|
5.2-1
|
docs/Chap05/5.2.md
|
In $\text{HIRE-ASSISTANT}$, assuming that the candidates are presented in a random order, what is the probability that you hire exactly one time? What is the probability you hire exactly $n$ times?
|
You will hire exactly one time if the best candidate comes first. There are $(n - 1)!$ orderings with the best candidate first, so the probability that you hire exactly once is $\frac{(n - 1)!}{n!} = \frac{1}{n}$.
You will hire exactly $n$ times if the candidates are presented in increasing order of quality. There is exactly one such ordering, so the probability that you hire exactly $n$ times is $\frac{1}{n!}$.
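Both probabilities are easy to confirm empirically; a minimal Monte Carlo sketch in Python (the helper names are illustrative):

```python
import random
from math import factorial

def hires(ranks):
    """Number of hires HIRE-ASSISTANT makes on this ordering."""
    best, count = 0, 0
    for r in ranks:
        if r > best:          # strictly better than everyone so far: hire
            best, count = r, count + 1
    return count

n, trials = 5, 200000
ranks = list(range(1, n + 1))
one, all_n = 0, 0
for _ in range(trials):
    random.shuffle(ranks)
    h = hires(ranks)
    one += (h == 1)
    all_n += (h == n)
print(one / trials, 1 / n)               # both ~0.2
print(all_n / trials, 1 / factorial(n))  # both ~0.0083
```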
|
[] | false |
[] |
# CLRS Solutions QA
A compact Q&A dataset distilled from the community-maintained CLRS solutions project. Each row contains:
- the exercise question (markdown),
- the answer (markdown),
- book chapter/section metadata,
- optional code blocks (language-tagged),
- optional image references (relative paths from the source repo).
This set is useful for building retrieval, RAG, tutoring, and evaluation pipelines for classic algorithms & data structures topics.
⚠️ Attribution: This dataset is derived from the open-source repository walkccc/CLRS (MIT license). Credit belongs to @walkccc and all contributors. This packaging only restructures their content into a machine-friendly format.
## Contents & Stats

- Split(s): `train`
- Rows: ~1,016
- Source: parsed from markdown files in walkccc/CLRS (third-edition exercises/solutions)

Note: a small number of rows reference images present in the original repo (`docs/img/...`). This dataset includes the image references (paths) as metadata; the actual image files are not bundled here.
Also available (human-readable copies):

```python
from datasets import load_dataset

# JSONL
ds_json = load_dataset(
    "json",
    data_files="hf://datasets/Siddharth899/clrs-qa/data/train.jsonl.gz",
    token=True,  # needed if the repo is private
)

# CSV
ds_csv = load_dataset(
    "csv",
    data_files="hf://datasets/Siddharth899/clrs-qa/data/train.csv.gz",
    token=True,
)
```
## Data Fields

| Field | Type | Description |
|---|---|---|
| `id` | string | Stable row id composed from chapter/section/title (e.g., `02-2.3-5`). |
| `chapter` | string | Chapter number as a zero-padded string (e.g., `"02"`). |
| `section` | string | Section identifier as in the source (e.g., `"2.3"` or `"2-1"`). |
| `title` | string | Exercise/problem label (e.g., `"2.3-5"` or `"2-1"`). |
| `source_file` | string | Original markdown relative path in the source repo. |
| `question_markdown` | string | Exercise prompt in markdown. |
| `answer_markdown` | string | Solution/answer in markdown (often includes LaTeX). |
| `code_blocks` | list of objects `{lang, code}` | Zero or more language-tagged code snippets extracted from the answer. |
| `has_images` | bool | Whether this item references images. |
| `image_refs` | list[string] | Relative paths to referenced images in the original repo. |
Example `code_blocks` entry:

```json
[
  {"lang": "cpp", "code": "INSERTION-SORT(A)\n ..."},
  {"lang": "python", "code": "def merge(...):\n ..."}
]
```
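For example, assuming the repo id above and the field names from the table, one might filter for rows whose answers carry code:

```python
from datasets import load_dataset

ds = load_dataset("Siddharth899/clrs-qa", split="train")

# Keep only rows whose answers include at least one code snippet.
with_code = ds.filter(lambda row: len(row["code_blocks"]) > 0)
print(len(with_code))

# Inspect the first such row's snippets.
for block in with_code[0]["code_blocks"]:
    print(block["lang"])
    print(block["code"])
```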
## Data Construction

- Source: walkccc/CLRS
- License upstream: MIT
- Method: a small script parses chapter/section markdown files, extracts headings, prompts, answers, fenced code blocks, and image references, and emits JSONL → uploaded to the Hub (Parquet auto-materialized). A rough sketch of the extraction step appears after the list below.

Known quirks:

- Some answers are brief/telegraphic (mirroring the original).
- Image references point to paths in the upstream repo; not all images are bundled here.
- Math is plain markdown with LaTeX snippets (`$...$`, `$$...$$`); rendering depends on your viewer.
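A rough, hypothetical sketch of the extraction step (the actual packaging script may differ):

```python
import re

FENCE = re.compile(r"`{3}(\w+)?\n(.*?)`{3}", re.DOTALL)
IMAGE = re.compile(r"!\[[^\]]*\]\(([^)]+)\)")

def parse_answer(md):
    """Pull language-tagged code blocks and image references out of
    one answer's markdown."""
    code_blocks = [{"lang": lang or "", "code": code.rstrip("\n")}
                   for lang, code in FENCE.findall(md)]
    image_refs = IMAGE.findall(md)
    return code_blocks, image_refs
```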
## License

- This dataset (packaging): MIT
- Upstream content: MIT (from walkccc/CLRS)

You must preserve the original MIT license notice and attribute @walkccc and contributors when using this dataset.

```
MIT License
Copyright (c) walkccc
... (see upstream repository for the full license text)
```

Additionally, include attribution similar to:

> "Portions of the content are derived from walkccc/CLRS (MIT). © The respective contributors."
## Citation

If you use this dataset, please cite both the dataset and the upstream project.

Dataset (this repo):

```bibtex
@misc{clrs_qa_dataset_2025,
  title        = {CLRS Solutions QA (walkccc-derived)},
  author       = {Siddharth899},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/datasets/Siddharth899/clrs-qa}},
  note         = {Derived from walkccc/CLRS (MIT)}
}
```

Upstream CLRS solutions:

```bibtex
@misc{walkccc_clrs,
  title        = {Solutions to Introduction to Algorithms (Third Edition)},
  author       = {walkccc and contributors},
  howpublished = {\url{https://github.com/walkccc/CLRS}},
  license      = {MIT}
}
```
## Contact & Maintenance

- Maintainer of this dataset packaging: @Siddharth899
- Issues / requests: open an issue on the HF dataset repo.