id | chapter | section | title | source_file | question_markdown | answer_markdown | code_blocks | has_images | image_refs |
---|---|---|---|---|---|---|---|---|---|
33-33.1-1
|
33
|
33.1
|
33.1-1
|
docs/Chap33/33.1.md
|
Prove that if $p_1 \times p_2$ is positive, then vector $p_1$ is clockwise from vector $p_2$ with respect to the origin $(0, 0)$ and that if this cross product is negative, then $p_1$ is counterclockwise from $p_2$.
|
(Omit!)
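Though the proof itself is omitted here, a minimal Python sketch of the cross-product test the statement concerns may help; the function names are illustrative, not from the text:

```python
# Points are (x, y) tuples; the origin of the comparison is (0, 0).

def cross(p1, p2):
    """Cross product p1 x p2 = x1*y2 - x2*y1."""
    return p1[0] * p2[1] - p1[1] * p2[0]

def direction(p1, p2):
    """Classify p1 relative to p2 with respect to the origin,
    following the sign convention the exercise asks about."""
    c = cross(p1, p2)
    if c > 0:
        return "p1 is clockwise from p2"
    if c < 0:
        return "p1 is counterclockwise from p2"
    return "p1 and p2 are collinear with the origin"

print(direction((1, 0), (0, 1)))  # cross = 1 > 0, so (1, 0) is clockwise from (0, 1)
```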
|
[] | false |
[] |
33-33.1-2
|
33
|
33.1
|
33.1-2
|
docs/Chap33/33.1.md
|
Professor van Pelt proposes that only the $x$-dimension needs to be tested in line 1 of ON-SEGMENT. Show why the professor is wrong.
|
$(0, 0), (5, 5), (10, 0)$.
|
[] | false |
[] |
33-33.1-3
|
33
|
33.1
|
33.1-3
|
docs/Chap33/33.1.md
|
The **_polar angle_** of a point $p_1$ with respect to an origin point $p_0$ is the angle of the vector $p_1 - p_0$ in the usual polar coordinate system. For example, the polar angle of $(3, 5)$ with respect to $(2, 4)$ is the angle of the vector $(1, 1)$, which is $45$ degrees or $\pi / 4$ radians. The polar angle of $(3, 3)$ with respect to $(2, 4)$ is the angle of the vector $(1, -1)$, which is $315$ degrees or $7\pi / 4$ radians. Write pseudocode to sort a sequence $\langle p_1, p_2, \ldots, p_n \rangle$ of $n$ points according to their polar angles with respect to a given origin point $p_0$. Your procedure should take $O(n\lg n)$ time and use cross products to compare angles.
|
(Omit!)
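The pseudocode is omitted in the source; as one possible illustration, here is a Python sketch (helper names are mine, not from the text) that sorts by polar angle about $p_0$ using only cross-product comparisons: points are first split by half-plane, then compared within a half-plane by the sign of a cross product, so no angles are ever computed.

```python
from functools import cmp_to_key

def sort_by_polar_angle(p0, points):
    """Sort points by polar angle about p0 in [0, 2*pi); O(n lg n) overall."""
    def half(p):
        # 0 for angles in [0, pi), 1 for angles in [pi, 2*pi).
        dx, dy = p[0] - p0[0], p[1] - p0[1]
        return 0 if dy > 0 or (dy == 0 and dx > 0) else 1

    def compare(a, b):
        ha, hb = half(a), half(b)
        if ha != hb:
            return ha - hb
        ax, ay = a[0] - p0[0], a[1] - p0[1]
        bx, by = b[0] - p0[0], b[1] - p0[1]
        c = ax * by - ay * bx
        # Positive cross product: a has the smaller angle, so a comes first.
        return 0 if c == 0 else (-1 if c > 0 else 1)

    return sorted(points, key=cmp_to_key(compare))

# Polar angles about (2, 4): (3, 5) -> 45, (1, 4) -> 180, (3, 3) -> 315 degrees.
print(sort_by_polar_angle((2, 4), [(3, 3), (1, 4), (3, 5)]))  # [(3, 5), (1, 4), (3, 3)]
```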
|
[] | false |
[] |
33-33.1-4
|
33
|
33.1
|
33.1-4
|
docs/Chap33/33.1.md
|
Show how to determine in $O(n^2 \lg n)$ time whether any three points in a set of $n$ points are collinear.
|
Based on Exercise 33.1-3: for each point $p_i$, treat $p_i$ as $p_0$ and sort the other points according to their polar angles mod $\pi$. Then scan linearly to see whether two adjacent points have the same polar angle. The total time is $O(n \cdot n\lg n) = O(n^2 \lg n)$.
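A minimal Python sketch of this $O(n^2 \lg n)$ scan, under the assumption that the input points are distinct; directions are folded into the upper half-plane (angles mod $\pi$) and compared with cross products only:

```python
from functools import cmp_to_key

def has_collinear_triple(points):
    """True if some three of the (distinct) points are collinear; O(n^2 lg n)."""
    for i, (x0, y0) in enumerate(points):
        dirs = []
        for j, (x, y) in enumerate(points):
            if j == i:
                continue
            dx, dy = x - x0, y - y0
            if dy < 0 or (dy == 0 and dx < 0):   # fold into the upper half-plane
                dx, dy = -dx, -dy
            dirs.append((dx, dy))
        # Sort directions by angle; equal angles (mod pi) end up adjacent.
        dirs.sort(key=cmp_to_key(lambda a, b: -(a[0] * b[1] - a[1] * b[0])))
        for k in range(len(dirs) - 1):
            a, b = dirs[k], dirs[k + 1]
            if a[0] * b[1] - a[1] * b[0] == 0:   # same direction mod pi
                return True
    return False

print(has_collinear_triple([(0, 0), (1, 1), (5, 0), (2, 2)]))  # True
print(has_collinear_triple([(0, 0), (1, 0), (0, 1), (2, 3)]))  # False
```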
|
[] | false |
[] |
33-33.1-5-1
|
33
|
33.1
|
33.1-5
|
docs/Chap33/33.1.md
|
A **_polygon_** is a piecewise-linear, closed curve in the plane. That is, it is a curve ending on itself that is formed by a sequence of straight-line segments, called the **_sides_** of the polygon. A point joining two consecutive sides is a **_vertex_** of the polygon. If the polygon is **_simple_**, as we shall generally assume, it does not cross itself. The set of points in the plane enclosed by a simple polygon forms the **_interior_** of the polygon, the set of points on the polygon itself forms its **_boundary_**, and the set of points surrounding the polygon forms its **_exterior_**. A simple polygon is convex if, given any two points on its boundary or in its interior, all points on the line segment drawn between them are contained in the polygon's boundary or interior. A vertex of a convex polygon cannot be expressed as a convex combination of any two distinct points on the boundary or in the interior of the polygon.
|
> Professor Amundsen proposes the following method to determine whether a sequence $\langle p_0, p_1, \ldots, p_{n - 1} \rangle$ of $n$ points forms the consecutive vertices of a convex polygon. Output "yes" if the set $\\{\angle p_i p_{i + 1} p_{i + 2}: i = 0, 1, \ldots, n - 1\\}$, where subscript addition is performed modulo $n$, does not contain both left turns and right turns; otherwise, output "no." Show that although this method runs in linear time, it does not always produce the correct answer. Modify the professor's method so that it always produces the correct answer in linear time.
A line: if all the points are collinear, then no angle $\angle p_i p_{i + 1} p_{i + 2}$ is a left turn or a right turn, so the professor's method outputs "yes" even though the points do not form a convex polygon at all.
|
[] | false |
[] |
33-33.1-5-2
|
33
|
33.1
|
33.1-5
|
docs/Chap33/33.1.md
|
Professor Amundsen proposes the following method to determine whether a sequence $\langle p_0, p_1, \ldots, p_{n - 1} \rangle$ of $n$ points forms the consecutive vertices of a convex polygon. Output "yes" if the set $\\{\angle p_i p_{i + 1} p_{i + 2}: i = 0, 1, \ldots, n - 1\\}$, where subscript addition is performed modulo $n$, does not contain both left turns and right turns; otherwise, output "no." Show that although this method runs in linear time, it does not always produce the correct answer. Modify the professor's method so that it always produces the correct answer in linear time.
|
A line: if all the points are collinear, then no angle $\angle p_i p_{i + 1} p_{i + 2}$ is a left turn or a right turn, so the professor's method outputs "yes" even though the points do not form a convex polygon at all.
|
[] | false |
[] |
33-33.1-6
|
33
|
33.1
|
33.1-6
|
docs/Chap33/33.1.md
|
Given a point $p_0 = (x_0, y_0)$, the **_right horizontal ray_** from $p_0$ is the set of points ${ p_i = (x_i, y_i) : x_i \ge x_0 ~\text{and}~ y_i = y_0 }$, that is, it is the set of points due right of $p_0$ along with $p_0$ itself. Show how to determine whether a given right horizontal ray from $p_0$ intersects a line segment $\overline{p_1 p_2}$ in $O(1)$ time by reducing the problem to that of determining whether two line segments intersect.
|
After translating so that $p_0$ is the origin (replace each $p_i$ by $p_i - p_0$), the right horizontal ray is the nonnegative $x$-axis, and it intersects $\overline{p_1 p_2}$ if and only if either
$p_1.y = p_2.y = 0$ and $\max(p_1.x, p_2.x) \ge 0$,
or
$\text{sign}(p_1.y) \ne \text{sign}(p_2.y)$ and the segment's $x$-intercept satisfies $\displaystyle \frac{p_1.y \cdot p_2.x - p_1.x \cdot p_2.y}{p_1.y - p_2.y} \ge 0$.
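A minimal Python sketch of this test, assuming $p_1$ and $p_2$ have already been translated by $-p_0$; the sign comparison avoids the division:

```python
def ray_hits_segment(p1, p2):
    """True if the nonnegative x-axis (the right horizontal ray from the
    origin) intersects segment p1-p2; points are (x, y) tuples already
    translated so that p0 is the origin."""
    (x1, y1), (x2, y2) = p1, p2
    if y1 == 0 and y2 == 0:                      # segment lies on the ray's line
        return max(x1, x2) >= 0
    if (y1 > 0 and y2 > 0) or (y1 < 0 and y2 < 0):
        return False                             # segment does not straddle y = 0
    # x-intercept = (y1*x2 - x1*y2) / (y1 - y2); compare its sign without dividing.
    num, den = y1 * x2 - x1 * y2, y1 - y2
    return num == 0 or (num > 0) == (den > 0)

print(ray_hits_segment((3, 1), (3, -1)))    # True: crosses the ray at x = 3
print(ray_hits_segment((-3, 1), (-3, -1)))  # False: crosses the x-axis at x = -3
```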
|
[] | false |
[] |
33-33.1-7
|
33
|
33.1
|
33.1-7
|
docs/Chap33/33.1.md
|
One way to determine whether a point $p_0$ is in the interior of a simple, but not necessarily convex, polygon $P$ is to look at any ray from $p_0$ and check that the ray intersects the boundary of $P$ an odd number of times but that $p_0$ itself is not on the boundary of $P$. Show how to compute in $\Theta(n)$ time whether a point $p_0$ is in the interior of an $n$-vertex polygon $P$. (Hint: Use Exercise 33.1-6. Make sure your algorithm is correct when the ray intersects the polygon boundary at a vertex and when the ray overlaps a side of the polygon.)
|
Based on Exercise 33.1-6: replace each vertex $p_i$ by $p_i - p_0$, so the right horizontal ray from $p_0$ becomes the nonnegative $x$-axis, then apply that ray/segment test to each of the $n$ sides and count the intersections.
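A standard even-odd (ray-casting) sketch in Python; this is a minimal illustration that does not separately treat the degenerate cases the hint mentions ($p_0$ exactly on the boundary, or the ray overlapping a side), which need extra care:

```python
def point_in_polygon(p0, poly):
    """Even-odd test: True if p0 lies inside the simple polygon poly,
    given as a list of (x, y) vertices in order. Runs in Theta(n)."""
    x, y = p0
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                        # side straddles the ray's line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                             # crossing lies to the right of p0
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon((2, 2), square))  # True
print(point_in_polygon((5, 2), square))  # False
```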
|
[] | false |
[] |
33-33.1-8
|
33
|
33.1
|
33.1-8
|
docs/Chap33/33.1.md
|
Show how to compute the area of an $n$-vertex simple, but not necessarily convex, polygon in $\Theta(n)$ time. (See Exercise 33.1-5 for definitions pertaining to polygons.)
|
Half of the sum of the cross products of ${\overline{p_1 p_i}, \overline{p_1 p_{i + 1}} ~|~ i \in [2, n - 1] }$.
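In code, this is the usual fan/shoelace computation; a minimal Python sketch assuming the vertices are given in order around the polygon:

```python
def polygon_area(poly):
    """Area of a simple polygon with vertices p_1, ..., p_n in order:
    half the (absolute) sum of cross products of p_1->p_i and p_1->p_{i+1}."""
    x1, y1 = poly[0]
    total = 0
    for i in range(1, len(poly) - 1):
        ax, ay = poly[i][0] - x1, poly[i][1] - y1
        bx, by = poly[i + 1][0] - x1, poly[i + 1][1] - y1
        total += ax * by - ay * bx
    return abs(total) / 2

print(polygon_area([(0, 0), (4, 0), (4, 3), (0, 3)]))  # 12.0
```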
|
[] | false |
[] |
33-33.2-1
|
33
|
33.2
|
33.2-1
|
docs/Chap33/33.2.md
|
Show that a set of $n$ line segments may contain $\Theta(n ^ 2)$ intersections.
|
A star: take $n$ segments that all pass near a common center but pairwise cross at distinct points; every pair of segments intersects, giving $\binom{n}{2} = \Theta(n^2)$ intersections.
|
[] | false |
[] |
33-33.2-2
|
33
|
33.2
|
33.2-2
|
docs/Chap33/33.2.md
|
Given two segments $a$ and $b$ that are comparable at $x$, show how to determine in $O(1)$ time which of $a \succeq_x b$ or $b \succeq_x a$ holds. Assume that neither segment is vertical.
|
(Omit!)
|
[] | false |
[] |
33-33.2-3
|
33
|
33.2
|
33.2-3
|
docs/Chap33/33.2.md
|
Professor Mason suggests that we modify $\text{ANY-SEGMENTS-INTERSECT}$ so that instead of returning upon finding an intersection, it prints the segments that intersect and continues on to the next iteration of the **for** loop. The professor calls the resulting procedure $\text{PRINT-INTERSECTING-SEGMENTS}$ and claims that it prints all intersections, from left to right, as they occur in the set of line segments. Professor Dixon disagrees, claiming that Professor Mason's idea is incorrect. Which professor is right? Will $\text{PRINT-INTERSECTING-SEGMENTS}$ always find the leftmost intersection first? Will it always find all the intersections?
|
Professor Dixon is right. $\text{PRINT-INTERSECTING-SEGMENTS}$ neither always finds the leftmost intersection first nor always finds all the intersections: the sweep only compares segments that become adjacent in the sweep-line status at endpoint events, so an intersection between segments that never become adjacent at such an event can be missed, and the intersections that are detected need not be reported in left-to-right order.
|
[] | false |
[] |
33-33.2-4
|
33
|
33.2
|
33.2-4
|
docs/Chap33/33.2.md
|
Give an $O(n\lg n)$-time algorithm to determine whether an n-vertex polygon is simple.
|
Same as $\text{ANY-SEGMENTS-INTERSECT}$.
|
[] | false |
[] |
33-33.2-5
|
33
|
33.2
|
33.2-5
|
docs/Chap33/33.2.md
|
Give an $O(n\lg n)$-time algorithm to determine whether two simple polygons with a total of $n$ vertices intersect.
|
Same as $\text{ANY-SEGMENTS-INTERSECT}$.
|
[] | false |
[] |
33-33.2-6
|
33
|
33.2
|
33.2-6
|
docs/Chap33/33.2.md
|
A **_disk_** consists of a circle plus its interior and is represented by its center point and radius. Two disks intersect if they have any point in common. Give an $O(n\lg n)$- time algorithm to determine whether any two disks in a set of $n$ intersect.
|
Same as $\text{ANY-SEGMENTS-INTERSECT}$.
|
[] | false |
[] |
33-33.2-7
|
33
|
33.2
|
33.2-7
|
docs/Chap33/33.2.md
|
Given a set of $n$ line segments containing a total of $k$ intersections, show how to output all $k$ intersections in $O((n + k)\lg n)$ time.
|
Treat the intersection points as event points.
|
[] | false |
[] |
33-33.2-8
|
33
|
33.2
|
33.2-8
|
docs/Chap33/33.2.md
|
Argue that $\text{ANY-SEGMENTS-INTERSECT}$ works correctly even if three or more segments intersect at the same point.
|
(Omit!)
|
[] | false |
[] |
33-33.2-9
|
33
|
33.2
|
33.2-9
|
docs/Chap33/33.2.md
|
Show that $\text{ANY-SEGMENTS-INTERSECT}$ works correctly in the presence of vertical segments if we treat the bottom endpoint of a vertical segment as if it were a left endpoint and the top endpoint as if it were a right endpoint. How does your answer to Exercise 33.2-2 change if we allow vertical segments?
|
(Omit!)
|
[] | false |
[] |
33-33.3-1
|
33
|
33.3
|
33.3-1
|
docs/Chap33/33.3.md
|
Prove that in the procedure $\text{GRAHAM-SCAN}$, points $p_1$ and $p_m$ must be vertices of $\text{CH}(Q)$.
|
To see this, note that $p_1$ and $p_m$ are the points with the lowest and highest polar angle with respect to $p_0$. By symmetry, we may just show it for $p_1$ and we would also have it for $p_m$ just by reflecting the set of points across a vertical line.
For a contradiction, suppose that the convex hull doesn't contain $p_1$. Then, let $p$ be the point of the convex hull that has the lowest polar angle with respect to $p_0$. If $p$ is on the line from $p_0$ to $p_1$, we could replace it with $p_1$ and still have a convex hull, meaning we didn't start with the convex hull.
If $p$ is not on that line, then there is no way that the given convex hull contains $p_1$, again contradicting the fact that we had selected a convex hull.
|
[] | false |
[] |
33-33.3-2
|
33
|
33.3
|
33.3-2
|
docs/Chap33/33.3.md
|
Consider a model of computation that supports addition, comparison, and multiplication and for which there is a lower bound of $\Omega(n\lg n)$ to sort $n$ numbers. Prove that $\Omega(n\lg n)$ is a lower bound for computing, in order, the vertices of the convex hull of a set of $n$ points in such a model.
|
Let our $n$ numbers be $a_1, a_2, \dots, a_n$ and $f$ be a strictly convex function, such as $e^x$. Let $p_i = (a_i, f(a_i))$. Compute the convex hull of $p_1, p_2, \dots, p_n$. Then every point is in the convex hull. We can recover the numbers themselves by looking at the $x$-coordinates of the points in the order returned by the convex-hull algorithm, which will necessarily be a cyclic shift of the numbers in increasing order, so we can recover the proper order in linear time.
In an algorithm such as $\text{GRAHAM-SCAN}$ which starts with the point with minimum $y$-coordinate, the order returned actually gives the numbers in increasing order.
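A small Python sketch of the reduction; `convex_hull` stands for any routine (for example, Graham's scan) that returns the hull vertices in counterclockwise order, and the numbers are assumed distinct:

```python
def sort_via_convex_hull(nums, convex_hull):
    """Sort distinct numbers using a convex-hull routine plus linear extra work."""
    points = [(a, a * a) for a in nums]   # lift onto the strictly convex curve y = x^2
    hull = convex_hull(points)            # every lifted point is a hull vertex
    xs = [x for x, _ in hull]
    i = xs.index(min(xs))                 # the CCW order is a cyclic shift of sorted order
    return xs[i:] + xs[:i]
```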
|
[] | false |
[] |
33-33.3-3
|
33
|
33.3
|
33.3-3
|
docs/Chap33/33.3.md
|
Given a set of points $Q$, prove that the pair of points farthest from each other must be vertices of $\text{CH}(Q)$.
|
Suppose that $p$ and $q$ are the two points farthest apart. Toward a contradiction, suppose, without loss of generality, that $p$ is not a vertex of the convex hull. Construct the circle centered at $q$ that has $p$ on its boundary. If any vertex of the convex hull were outside this circle, then that vertex and $q$ would be farther apart than $p$ and $q$, a contradiction. So all of the vertices of the convex hull lie inside or on the circle, which means the sides of the convex hull are line segments contained in the closed disk. The only way such a segment can contain $p$, a point on the circle, is if $p$ is an endpoint of the segment, i.e., a vertex; but we supposed that $p$ wasn't a vertex of the convex hull, giving us our contradiction.
|
[] | false |
[] |
33-33.3-4
|
33
|
33.3
|
33.3-4
|
docs/Chap33/33.3.md
|
For a given polygon $P$ and a point $q$ on its boundary, the **_shadow_** of $q$ is the set of points $r$ such that the segment $\overline{qr}$ is entirely on the boundary or in the interior of $P$. As Figure 33.10 illustrates, a polygon $P$ is **_star-shaped_** if there exists a point $p$ in the interior of $P$ that is in the shadow of every point on the boundary of $P$. The set of all such points $p$ is called the **_kernel_** of $P$. Given an $n$-vertex, star-shaped polygon $P$ specified by its vertices in counterclockwise order, show how to compute $\text{CH}(P)$ in $O(n)$ time.
|
We simply run $\text{GRAHAM-SCAN}$ but without sorting the points, so the runtime becomes $O(n)$. To prove this, we'll prove the following loop invariant: At the start of each iteration of the for loop of lines 7-10, stack $S$ consists of, from bottom to top, exactly the vertices of $\text{CH}(Q_{i - 1})$. The proof is quite similar to the proof of correctness. The invariant holds the first time we execute line 7 for the same reasons outlined in the section. At the start of the $i$th iteration, $S$ contains $\text{CH}(Q_{i - 1})$. Let $p_j$ be the top point on $S$ after executing the while loop of lines 8-9, but before $p_i$ is pushed, and let $p_k$ be the point just below $p_j$ on $S$. At this point, $S$ contains $\text{CH}(Q_j)$ in counterclockwise order from bottom to top. Thus, when we push $p_i$, $S$ contains exactly the vertices of $\text{CH}(Q_j \cup \\{p_i\\})$.
We now show that this is the same set of points as $\text{CH}(Q_i)$. Let $p_t$ be any point that was popped from $S$ during iteration $i$ and $p_r$ be the point just below $p_t$ on stack $S$ at the time $p_t$ was popped. Let $p$ be a point in the kernel of $P$. Since the angle $\angle p_r p_t p_i$ makes a non-left turn and $P$ is star-shaped, $p_t$ must be in the interior or on the boundary of the triangle formed by $p_r$, $p_i$, and $p$. Thus, $p_t$ is not in the convex hull of $Q_i$, so we have $\text{CH}(Q_i - \\{p_t\\}) = \text{CH}(Q_i)$. Applying this equality repeatedly for each point removed from $S$ in the while loop of lines 8-9, we have $\text{CH}(Q_j \cup \\{p_i\\}) = \text{CH}(Q_i)$.
When the loop terminates, the loop invariant implies that $S$ consists of exactly the vertices of $\text{CH}(Q_m)$ in counterclockwise order, proving correctness.
|
[] | false |
[] |
33-33.3-5
|
33
|
33.3
|
33.3-5
|
docs/Chap33/33.3.md
|
In the **_on-line convex-hull problem_**, we are given the set $Q$ of $n$ points one point at a time. After receiving each point, we compute the convex hull of the points seen so far. Obviously, we could run Graham's scan once for each point, with a total running time of $O(n^2\lg n)$. Show how to solve the on-line convex-hull problem in a total of $O(n^2)$ time.
|
Suppose that we have a convex hull computed from the previous stage $\\{q_0, q_1, \dots, q_m\\}$, and we want to add a new vertex $p$ and keep track of how the convex hull should change.
First, process the vertices in a clockwise manner, and look for the first time that we would have to make a non-left turn to get to $p$. This tells us where to start cutting vertices out of the convex hull. To find the upper bound on the vertices that we need to cut out, turn around and process the vertices in a counterclockwise manner, and see the first time that we would need to make a non-right turn.
Then, we just remove the vertices that lie between these two positions and replace them with $p$. There is one last case to consider, which is when we end up passing ourselves when we do our clockwise sweep.
Then we remove no vertices and add $p$ in between the two vertices that we found in the two sweeps. Since for each vertex we add we are only considering each point in the previous step's convex hull twice, the runtime is $O(nh) = O(n^2)$, where $h$ is the number of points in the convex hull.
|
[] | false |
[] |
33-33.3-6
|
33
|
33.3
|
33.3-6 $\star$
|
docs/Chap33/33.3.md
|
Show how to implement the incremental method for computing the convex hull of $n$ points so that it runs in $O(n\lg n)$ time.
|
(Omit!)
|
[] | false |
[] |
33-33.4-1
|
33
|
33.4
|
33.4-1
|
docs/Chap33/33.4.md
|
Professor Williams comes up with a scheme that allows the closest-pair algorithm to check only $5$ points following each point in array $Y'$. The idea is always to place points on line $l$ into set $P_L$. Then, there cannot be pairs of coincident points on line $l$ with one point in $P_L$ and one in $P_R$. Thus, at most $6$ points can reside in the $\delta \times 2\delta$ rectangle. What is the flaw in the professor's scheme?
|
In particular, when we select line $l$, we may be unable to perform an even split of the vertices. So, we don't necessarily have that both the left set of points and right set of points have fallen to roughly half. For example, suppose that the points are all arranged on a vertical line; then, when we recurse on the left set of points, we haven't reduced the problem size at all, let alone by a factor of two. There is also the issue in this setup that you may end up asking about a set of size less than two when looking at the right set of points.
|
[] | false |
[] |
33-33.4-2
|
33
|
33.4
|
33.4-2
|
docs/Chap33/33.4.md
|
Show that it actually suffices to check only the points in the $5$ array positions following each point in the array $Y'$.
|
Since we only care about the shortest distance, the distance $\delta'$ must be strictly less than $\delta$. The picture in Figure 33.11(b) only illustrates the case of a nonstrict inequality. If we exclude the possibility of points whose $x$-coordinate differs from $l$ by exactly $\delta$, then at most $6$ points can be placed in the $\delta \times 2\delta$ rectangle, so it suffices to check only the points in the $5$ array positions following each point in the array $Y'$.
|
[] | false |
[] |
33-33.4-3
|
33
|
33.4
|
33.4-3
|
docs/Chap33/33.4.md
|
We can define the distance between two points in ways other than euclidean. In the plane, the **_$L_m$-distance_** between points $p_1$ and $p_2$ is given by the expression $(|x_1 - x_2|^m + |y_1 - y_2|^m)^{1 / m}$. Euclidean distance, therefore, is $L_2$-distance. Modify the closest-pair algorithm to use the $L_1$-distance, which is also known as the **_Manhattan distance_**.
|
In the analysis of the algorithm, most of it goes through just based on the triangle inequality. The only main point of difference is in looking at the number of points that can be fit into a $\delta \times 2\delta$ rectangle. In particular, we can cram in two more points than the eight shown into the rectangle by placing points at the centers of the two squares that the rectangle breaks into. This means that we need to consider points up to $9$ away in $Y'$ instead of $7$ away. This has no impact on the asymptotics of the algorithm and it is the only correction to the algorithm that is needed if we switch from $L_2$ to $L_1$.
|
[] | false |
[] |
33-33.4-4
|
33
|
33.4
|
33.4-4
|
docs/Chap33/33.4.md
|
Given two points $p_1$ and $p_2$ in the plane, the $L_\infty$-distance between them is given by $\max(|x_1 - x_2|, |y_1 - y_2|)$. Modify the closest-pair algorithm to use the $L_\infty$-distance.
|
We can simply run the divide and conquer algorithm described in the section, modifying the brute force search for $|P| \le 3$ and the check against the next $7$ points in $Y'$ to use the $L_\infty$-distance. Since the $L_\infty$-distance between two points is never greater than the Euclidean distance, there can be at most $8$ points in the $\delta \times 2\delta$ rectangle which we need to examine in order to determine whether the closest pair is in that box. Thus, the modified algorithm is still correct and has the same runtime.
|
[] | false |
[] |
33-33.4-5
|
33
|
33.4
|
33.4-5
|
docs/Chap33/33.4.md
|
Suppose that $\Omega(n)$ of the points given to the closest-pair algorithm are covertical. Show how to determine the sets $P_L$ and $P_R$ and how to determine whether each point of $Y$ is in $P_L$ or $P_R$ so that the running time for the closest-pair algorithm remains $O(n\lg n)$.
|
We select the line $l$ so that the split is roughly even, and then we won't run into any issue if we just send an arbitrary subset of the vertices that lie on the line to one side or the other.
Since the analysis of the algorithm allows elements of both $P_L$ and $P_R$ to lie on the line, we still have correctness if we do this. Determining which points of $Y$ belong to which set is made easier if we send to $P_L$ the lowest of the covertical points, however many are needed, and to $P_R$ the higher ones. Then, just from a point's position in $Y$, we know whether it belongs to $P_L$ or to $P_R$.
|
[] | false |
[] |
33-33.4-6
|
33
|
33.4
|
33.4-6
|
docs/Chap33/33.4.md
|
Suggest a change to the closest-pair algorithm that avoids presorting the $Y$ array but leaves the running time as $O(n\lg n)$. ($\textit{Hint:}$ Merge sorted arrays $Y_L$ and $Y_R$ to form the sorted array $Y$.)
|
In addition to returning the distance of the closest pair, modify the algorithm to also return the points passed to it, sorted by $y$-coordinate, as $Y$. To do this, merge the arrays $Y_L$ and $Y_R$ returned by each of its recursive calls. If we are at the base case, when $n \le 3$, simply use insertion sort to sort the elements by $y$-coordinate directly. Since each merge takes linear time, this doesn't affect the recursive equation for the runtime.
|
[] | false |
[] |
33-33-1
|
33
|
33-1
|
33-1
|
docs/Chap33/Problems/33-1.md
|
Given a set $Q$ of points in the plane, we define the **_convex layers_** of $Q$ inductively. The first convex layer of $Q$ consists of those points in $Q$ that are vertices of $\text{CH}(Q)$. For $i > 1$, define $Q_i$ to consist of the points of $Q$ with all points in convex layers $1, 2, \dots, i - 1$ removed. Then, the $i$th convex layer of $Q$ is $\text{CH}(Q_i)$ if $Q_i \ne \emptyset$ and is undefined otherwise.
**a.** Give an $O(n^2)$-time algorithm to find the convex layers of a set of $n$ points.
**b.** Prove that $\Omega(n\lg n)$ time is required to compute the convex layers of a set of $n$ points with any model of computation that requires $\Omega(n\lg n)$ time to sort $n$ real numbers.
|
(Omit!)
|
[] | false |
[] |
33-33-2
|
33
|
33-2
|
33-2
|
docs/Chap33/Problems/33-2.md
|
Let $Q$ be a set of $n$ points in the plane. We say that point $(x, y)$ **_dominates_** point $(x', y')$ if $x \ge x'$ and $y \ge y'$. A point in $Q$ that is dominated by no other points in $Q$ is said to be **_maximal_**. Note that $Q$ may contain many maximal points, which can be organized into **_maximal layers_** as follows. The first maximal layer $L_1$ is the set of maximal points of $Q$. For $i > 1$, the $i$th maximal layer $L_i$ is the set of maximal points in $Q - \bigcup_{j = 1}^{i - 1} L_j$.
Suppose that $Q$ has $k$ nonempty maximal layers, and let $y_i$ be the $y$-coordinate of the leftmost point in $L_i$ for $i = 1, 2, \dots, k$. For now, assume that no two points in $Q$ have the same $x$- or $y$-coordinate.
**a.** Show that $y_1 > y_2 > \cdots > y_k$.
Consider a point $(x, y)$ that is to the left of any point in $Q$ and for which $y$ is distinct from the $y$-coordinate of any point in $Q$. Let $Q' = Q \cup \\{(x, y)\\}$.
**b.** Let $j$ be the minimum index such that $y_j < y$, unless $y < y_k$, in which case we let $j = k + 1$. Show that the maximal layers of $Q'$ are as follows:
- If $j \le k$, then the maximal layers of $Q'$ are the same as the maximal layers of $Q$, except that $L_j$ also includes $(x, y)$ as its new leftmost point.
- If $j = k + 1$, then the first $k$ maximal layers of $Q'$ are the same as for $Q$, but in addition, $Q'$ has a nonempty $(k + 1)$st maximal layer: $L_{k + 1} = \\{(x, y)\\}$.
**c.** Describe an $O(n\lg n)$-time algorithm to compute the maximal layers of a set $Q$ of $n$ points. ($\textit{Hint:}$ Move a sweep line from right to left.)
**d.** Do any difficulties arise if we now allow input points to have the same $x$- or $y$-coordinate? Suggest a way to resolve such problems.
|
(Omit!)
|
[] | false |
[] |
33-33-3
|
33
|
33-3
|
33-3
|
docs/Chap33/Problems/33-3.md
|
A group of $n$ Ghostbusters is battling $n$ ghosts. Each Ghostbuster carries a proton pack, which shoots a stream at a ghost, eradicating it. A stream goes in a straight line and terminates when it hits the ghost. The Ghostbusters decide upon the following strategy. They will pair off with the ghosts, forming $n$ Ghostbuster-ghost pairs, and then simultaneously each Ghostbuster will shoot a stream at his chosen ghost. As we all know, it is very dangerous to let streams cross, and so the Ghostbusters must choose pairings for which no streams will cross.
Assume that the position of each Ghostbuster and each ghost is a fixed point in the plane and that no three positions are collinear.
**a.** Argue that there exists a line passing through one Ghostbuster and one ghost such that the number of Ghostbusters on one side of the line equals the number of ghosts on the same side. Describe how to find such a line in $O(n\lg n)$ time.
**b.** Give an $O(n^2\lg n)$-time algorithm to pair Ghostbusters with ghosts in such a way that no streams cross.
|
(Omit!)
|
[] | false |
[] |
33-33-4
|
33
|
33-4
|
33-4
|
docs/Chap33/Problems/33-4.md
|
Professor Charon has a set of $n$ sticks, which are piled up in some configuration. Each stick is specified by its endpoints, and each endpoint is an ordered triple giving its $(x, y, z)$ coordinates. No stick is vertical. He wishes to pick up all the sticks, one at a time, subject to the condition that he may pick up a stick only if there is no other stick on top of it.
**a.** Give a procedure that takes two sticks $a$ and $b$ and reports whether $a$ is above, below, or unrelated to $b$.
**b.** Describe an efficient algorithm that determines whether it is possible to pick up all the sticks, and if so, provides a legal order in which to pick them up.
|
(Omit!)
|
[] | false |
[] |
33-33-5
|
33
|
33-5
|
33-5
|
docs/Chap33/Problems/33-5.md
|
Consider the problem of computing the convex hull of a set of points in the plane that have been drawn according to some known random distribution. Sometimes, the number of points, or size, of the convex hull of $n$ points drawn from such a distribution has expectation $O(n^{1 - \epsilon})$ for some constant $\epsilon > 0$. We call such a distribution **_sparse-hulled_**. Sparse-hulled distributions include the following:
- Points drawn uniformly from a unit-radius disk. The convex hull has expected size $\Theta(n^{1 / 3})$.
- Points drawn uniformly from the interior of a convex polygon with $k$ sides, for any constant $k$. The convex hull has expected size $\Theta(\lg n)$.
- Points drawn according to a two-dimensional normal distribution. The convex hull has expected size $\Theta(\sqrt{\lg n})$.
**a.** Given two convex polygons with $n_1$ and $n_2$ vertices respectively, show how to compute the convex hull of all $n_1 + n_2$ points in $O(n_1 + n_2)$ time. (The polygons may overlap.)
**b.** Show how to compute the convex hull of a set of $n$ points drawn independently according to a sparse-hulled distribution in $O(n)$ average-case time. ($\textit{Hint:}$ Recursively find the convex hulls of the first $n / 2$ points and the second $n / 2$ points, and then combine the results.)
|
(Omit!)
|
[] | false |
[] |
34-34.1-1
|
34
|
34.1
|
34.1-1
|
docs/Chap34/34.1.md
|
Define the optimization problem $\text{LONGEST-PATH-LENGTH}$ as the relation that associates each instance of an undirected graph and two vertices with the number of edges in a longest simple path between the two vertices. Define the decision problem $\text{LONGEST-PATH}$ $= \\{\langle G, u, v, k\rangle: G = (V, E)$ is an undirected graph, $u, v \in V, k \ge 0$ is an integer, and there exists a simple path from $u$ to $v$ in $G$ consisting of at least $k$ edges $\\}$. Show that the optimization problem $\text{LONGEST-PATH-LENGTH}$ can be solved in polynomial time if and only if $\text{LONGEST-PATH} \in P$.
|
Showing that $\text{LONGEST-PATH-LENGTH}$ being polynomial implies that $\text{LONGEST-PATH}$ is polynomial is trivial, because we can just compute the length of the longest path and reject the instance of $\text{LONGEST-PATH}$ if and only if $k$ is larger than the number we computed as the length of the longest path.
Since we know that the number of edges in the longest path is between $0$ and $|E|$, we can perform a binary search for its length. That is, we construct an instance of $\text{LONGEST-PATH}$ with the given parameters along with $k = \frac{|E|}{2}$. If we hear yes, we know that the length of the longest path is somewhere above the halfway point. If we hear no, we know it is somewhere below. Since each time we are halving the possible range, we have that the procedure can require $O(\lg |E|)$ many steps. However, running a polynomial-time subroutine $\lg n$ many times still gets us a polynomial-time procedure, since we know that with this procedure we will never be feeding the output of one call of $\text{LONGEST-PATH}$ into the next.
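A short Python sketch of this binary search; `longest_path_decision` stands for the assumed polynomial-time decider for $\text{LONGEST-PATH}$, and we assume some simple path from $u$ to $v$ exists:

```python
def longest_path_length(G, u, v, num_edges, longest_path_decision):
    """Number of edges in a longest simple u-v path, found with
    O(lg E) calls to the LONGEST-PATH decision procedure."""
    lo, hi = 0, num_edges                      # the answer lies in [0, |E|]
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if longest_path_decision(G, u, v, mid):
            lo = mid                           # some path has at least mid edges
        else:
            hi = mid - 1                       # no path has mid or more edges
    return lo
```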
|
[] | false |
[] |
34-34.1-2
|
34
|
34.1
|
34.1-2
|
docs/Chap34/34.1.md
|
Give a formal definition for the problem of finding the longest simple cycle in an undirected graph. Give a related decision problem. Give the language corresponding to the decision problem.
|
The problem $\text{LONGEST-SIMPLE-CYCLE}$ is the relation that associates each instance of a graph with the longest simple cycle contained in that graph. The decision problem is, given $k$, to determine whether or not the instance graph has a simple cycle of length at least $k$. If yes, output $1$. Otherwise, output $0$. The language corresponding to the decision problem is the set of all $\langle G, k\rangle$ such that $G = (V, E)$ is an undirected graph, $k \ge 0$ is an integer, and there exists a simple cycle in $G$ consisting of at least $k$ edges.
|
[] | false |
[] |
34-34.1-3
|
34
|
34.1
|
34.1-3
|
docs/Chap34/34.1.md
|
Give a formal encoding of directed graphs as binary strings using an adjacency-matrix representation. Do the same using an adjacency-list representation. Argue that the two representations are polynomially related.
|
A formal encoding of the adjacency matrix representation is to first encode an integer $n$ in the usual binary encoding representing the number of vertices. Then, there will be $n^2$ bits following. The value of bit $m$ will be $1$ if there is an edge from vertex $\lfloor m / n \rfloor$ to vertex $(m \mod n)$, and $0$ if there is not such an edge.
An encoding of the adjacency list representation is a bit more finessed. We'll be using a different encoding of integers, call it $g(n)$. In particular, we will place a $0$ immediately after every bit in the usual representation. Since this only doubles the length of the encoding, it is still polynomially related. Also, the reason we will be using this encoding is because any sequence of integers encoded in this way cannot contain the string $11$ and must contain at least one $0$. Suppose that we have a vertex with edges going to the vertices indexed by $i_1, i_2, i_3, \dots, i_k$. Then, the encoding corresponding to that vertex is $g(i_1)11g(i_2)11 \dots 11g(i_k)1111$. Then, the encoding of the entire graph will be the concatenation of all the encodings of the vertices. As we are reading through, since we used this encoding of the indices of the vertices, we won’t ever be confused about where each of the vertex indices ends or when we are moving on to the next vertex's list.
To go from the list to matrix representation, we can read off all the adjacent vertices, store them, sort them, and then output a row of the adjacency matrix. Since there is some small constant amount of space for the adjacency list representation for each vertex in the graph, the size of the encoding blows up by at most a factor of $O(n)$, which means that the size of the encoding overall is at most squared.
To go in the other direction, it is just a matter of keeping track of the positions in a given row that have $1$'s, encoding those numerical values in the way described, and doing this for each row. Since we are only increasing the size of the encoding by a factor of at most $O(\lg n)$ (which happens in the dense graph case), we have that both of them are polynomially related.
|
[] | false |
[] |
34-34.1-4
|
34
|
34.1
|
34.1-4
|
docs/Chap34/34.1.md
|
Is the dynamic-programming algorithm for the 0-1 knapsack problem that is asked for in Exercise 16.2-2 a polynomial-time algorithm? Explain your answer.
|
This isn't a polynomial-time algorithm. Recall that the algorithm from Exercise 16.2-2 had running time $\Theta(nW)$ where $W$ was the maximum weight supported by the knapsack. Consider an encoding of the problem. There is a polynomial encoding of each item by giving the binary representation of its index, worth, and weight, represented as some binary string of length $a = \Omega(n)$. We then encode $W$, in polynomial time. This will have length $\Theta(\lg W) = b$. The solution to this problem of length $a + b$ is found in time $\Theta(nW) = \Theta(a \cdot 2^b)$. Thus, the algorithm is actually exponential.
|
[] | false |
[] |
34-34.1-5
|
34
|
34.1
|
34.1-5
|
docs/Chap34/34.1.md
|
Show that if an algorithm makes at most a constant number of calls to polynomial-time subroutines and performs an additional amount of work that also takes polynomial time, then it runs in polynomial time. Also show that a polynomial number of calls to polynomial-time subroutines may result in an exponential-time algorithm.
|
(Omit!)
|
[] | false |
[] |
34-34.1-6
|
34
|
34.1
|
34.1-6
|
docs/Chap34/34.1.md
|
Show that the class $P$, viewed as a set of languages, is closed under union, intersection, concatenation, complement, and Kleene star. That is, if $L_1, L_2 \in P$, then $L_1 \cup L_2 \in P$, $L_1 \cap L_2 \in P$, $L_1L_2 \in P$, $\bar L_1 \in P$, and $L_1^\* \in P$.
|
(Omit!)
|
[] | false |
[] |
34-34.2-1
|
34
|
34.2
|
34.2-1
|
docs/Chap34/34.2.md
|
Consider the language $\text{GRAPH-ISOMORPHISM}$ $= \\{\langle G_1, G_2 \rangle: G_1$ and $G_2$ are isomorphic graphs$\\}$. Prove that $\text{GRAPH-ISOMORPHISM} \in \text{NP}$ by describing a polynomial-time algorithm to verify the language.
|
(Omit!)
|
[] | false |
[] |
34-34.2-2
|
34
|
34.2
|
34.2-2
|
docs/Chap34/34.2.md
|
Prove that if $G$ is an undirected bipartite graph with an odd number of vertices, then $G$ is nonhamiltonian.
|
Let $G = (V, E)$ be a bipartite graph with bipartition $V = X \cup Y$, where $X \cap Y = \emptyset$. Assume for contradiction that:
1. The total number of vertices $n = |X| + |Y|$ is **odd**.
2. $G$ contains a **Hamiltonian cycle** $C$.
Since $G$ is bipartite, the cycle $C$ must alternate between vertices in $X$ and $Y$. Starting at a vertex in $X$, the cycle proceeds to $Y$, then back to $X$, and so on. To return to the starting vertex in $X$, the cycle must consist of an **even** number of steps. However, $n$ is odd, making it impossible to alternate perfectly and return to the starting point.
This contradiction implies that $G$ cannot have a Hamiltonian cycle when $n$ is odd.
|
[] | false |
[] |
34-34.2-3
|
34
|
34.2
|
34.2-3
|
docs/Chap34/34.2.md
|
Show that if $\text{HAM-CYCLE} \in P$, then the problem of listing the vertices of a hamiltonian cycle, in order, is polynomial-time solvable.
|
We can use the following Lemma:
**Lemma:** Let $G$ be a Hamiltonian graph. If every edge in $G$ is **essential**, meaning it must appear in **every** Hamiltonian cycle of $G$, then $G$ itself must be a Hamiltonian cycle.
**Proof of Lemma (by Contradiction):** Let $C$ be a Hamiltonian cycle in $G$. Assume $G \neq C$, meaning there exists at least one edge $e \in G$ that does not belong to $C$. Thus, $e$ is **non-essential**. This contradicts the assumption that every edge in $G$ is **essential**.
**Proof:** Assume $\text{HAM-CYCLE} \in P$, meaning there exists a polynomial-time algorithm $A$ that decides whether a given graph contains a Hamiltonian cycle.
Let $G = (V, E)$ be a graph with a Hamiltonian cycle. We begin by setting $G^\ast = G$.
For each edge $e \in E$, we perform the following steps:
1. Remove $e$ from $G^\ast$, resulting in a subgraph $G'$.
2. Use algorithm $A$ to check if $G'$ still contains a Hamiltonian cycle:
- **Case 1: $e$ is essential.** If removing $e$ from $G^\ast$ causes $G'$ to lose its Hamiltonian cycle, then $e$ must be part of every Hamiltonian cycle in $G^\ast$. In this case, $G^\ast$ remains unchanged.
- **Case 2: $e$ is non-essential.** If $G'$ still contains a Hamiltonian cycle, then there exist Hamiltonian cycles in $G^\ast$ that do not include $e$. In this case, we safely remove $e$ from $G^\ast$, so we update $G^\ast = G'$.
The iterations ensure that $G^\ast$ remains a subgraph of $G$ that still contains a Hamiltonian cycle. Every edge that remains in the final graph must be essential for the last $G^\ast$. Thus, by the lemma above, $G^\ast$ must be a Hamiltonian cycle. Since $G^\ast$ is a subgraph of $G$, it is also a Hamiltonian cycle of $G$.
Algorithm $A$ runs in polynomial time, and since there are at most $O(n^2)$ edges in $G$, the total number of calls to $A$ is at most $O(n^2)$. Thus, the running time of the iterations is polynomial.
Finally, listing the vertices of the Hamiltonian cycle in order can be done in polynomial time. This involves starting from an arbitrary vertex in $G^\ast$ and following the Hamiltonian cycle in $G^\ast$. Regardless of whether the graph is stored with an adjacency list or adjacency matrix, this will take polynomial time.
Thus, the problem of listing the vertices of a Hamiltonian cycle in order can be solved in polynomial time.
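A Python sketch of this edge-pruning procedure; `has_ham_cycle` is the assumed polynomial-time decider for $\text{HAM-CYCLE}$, the graph has at least $3$ vertices, and edges are unordered pairs:

```python
def extract_ham_cycle(vertices, edges, has_ham_cycle):
    """List the vertices of a Hamiltonian cycle in order, assuming one exists."""
    kept = set(frozenset(e) for e in edges)
    for e in list(kept):
        trial = kept - {e}
        if has_ham_cycle(vertices, trial):   # e is non-essential: drop it
            kept = trial
    # Every remaining edge is essential, so `kept` is exactly one Hamiltonian
    # cycle; walk it starting from an arbitrary vertex.
    adj = {v: [] for v in vertices}
    for u, v in (tuple(e) for e in kept):
        adj[u].append(v)
        adj[v].append(u)
    start = next(iter(vertices))
    cycle, prev, cur = [start], None, start
    while True:
        nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
        if nxt == start:
            return cycle
        cycle.append(nxt)
        prev, cur = cur, nxt
```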
|
[] | false |
[] |
34-34.2-4
|
34
|
34.2
|
34.2-4
|
docs/Chap34/34.2.md
|
Prove that the class $\text{NP}$ of languages is closed under union, intersection, concatenation, and Kleene star. Discuss the closure of $\text{NP}$ under complement.
|
(Omit!)
|
[] | false |
[] |
34-34.2-5
|
34
|
34.2
|
34.2-5
|
docs/Chap34/34.2.md
|
Show that any language in $\text{NP}$ can be decided by an algorithm running in time $2^{O(n^k)}$ for some constant $k$.
|
(Omit!)
|
[] | false |
[] |
34-34.2-6
|
34
|
34.2
|
34.2-6
|
docs/Chap34/34.2.md
|
A **_hamiltonian path_** in a graph is a simple path that visits every vertex exactly once. Show that the language $\text{HAM-PATH}$ $= \\{\langle G, u, v \rangle:$ there is a hamiltonian path from $u$ to $v$ in graph $G\\}$ belongs to $\text{NP}$.
|
(Omit!)
|
[] | false |
[] |
34-34.2-7
|
34
|
34.2
|
34.2-7
|
docs/Chap34/34.2.md
|
Show that the hamiltonian-path problem from Exercise 34.2-6 can be solved in polynomial time on directed acyclic graphs. Give an efficient algorithm for the problem.
|
(Omit!)
|
[] | false |
[] |
34-34.2-8
|
34
|
34.2
|
34.2-8
|
docs/Chap34/34.2.md
|
Let $\phi$ be a boolean formula constructed from the boolean input variables $x_1, x_2, \dots, x_k$, negations ($\neg$), ANDs ($\wedge$), ORs ($\vee$), and parentheses. The formula $\phi$ is a **_tautology_** if it evaluates to $1$ for every assignment of $1$ and $0$ to the input variables. Define $\text{TAUTOLOGY}$ as the language of boolean formulas that are tautologies. Show that $\text{TAUTOLOGY} \in \text{co-NP}$.
|
(Omit!)
|
[] | false |
[] |
34-34.2-9
|
34
|
34.2
|
34.2-9
|
docs/Chap34/34.2.md
|
Prove that $\text P \subseteq \text{co-NP}$.
|
(Omit!)
|
[] | false |
[] |
34-34.2-10
|
34
|
34.2
|
34.2-10
|
docs/Chap34/34.2.md
|
Prove that if $\text{NP} \ne \text{co-NP}$, then $\text P \ne \text{NP}$.
|
(Omit!)
|
[] | false |
[] |
34-34.2-11
|
34
|
34.2
|
34.2-11
|
docs/Chap34/34.2.md
|
Let $G$ be a connected, undirected graph with at least $3$ vertices, and let $G^3$ be the graph obtained by connecting all pairs of vertices that are connected by a path in $G$ of length at most $3$. Prove that $G^3$ is hamiltonian. ($\textit{Hint:}$ Construct a spanning tree for $G$, and use an inductive argument.)
|
(Omit!)
|
[] | false |
[] |
34-34.3-1
|
34
|
34.3
|
34.3-1
|
docs/Chap34/34.3.md
|
Verify that the circuit in Figure 34.8(b) is unsatisfiable.
|
(Omit!)
|
[] | false |
[] |
34-34.3-2
|
34
|
34.3
|
34.3-2
|
docs/Chap34/34.3.md
|
Show that the $\le_\text P$ relation is a transitive relation on languages. That is, show that if $L_1 \le_\text P L_2$ and $L_2 \le_\text P L_3$, then $L_1 \le_\text P L_3$.
|
(Omit!)
|
[] | false |
[] |
34-34.3-3
|
34
|
34.3
|
34.3-3
|
docs/Chap34/34.3.md
|
Prove that $L \le_\text P \bar L$ if and only if $\bar L \le_\text P L$.
|
(Omit!)
|
[] | false |
[] |
34-34.3-4
|
34
|
34.3
|
34.3-4
|
docs/Chap34/34.3.md
|
Show that we could have used a satisfying assignment as a certificate in an alternative proof of Lemma 34.5. Which certificate makes for an easier proof?
|
(Omit!)
|
[] | false |
[] |
34-34.3-5
|
34
|
34.3
|
34.3-5
|
docs/Chap34/34.3.md
|
The proof of Lemma 34.6 assumes that the working storage for algorithm A occupies a contiguous region of polynomial size. Where in the proof do we exploit this assumption? Argue that this assumption does not involve any loss of generality.
|
(Omit!)
|
[] | false |
[] |
34-34.3-6
|
34
|
34.3
|
34.3-6
|
docs/Chap34/34.3.md
|
A language $L$ is **_complete_** for a language class $C$ with respect to polynomial-time reductions if $L \in C$ and $L' \le_\text P L$ for all $L' \in C$. Show that $\emptyset$ and $\\{0, 1\\}^\*$ are the only languages in $\text P$ that are not complete for $\text P$ with respect to polynomial-time reductions.
|
(Omit!)
|
[] | false |
[] |
34-34.3-7
|
34
|
34.3
|
34.3-7
|
docs/Chap34/34.3.md
|
Show that, with respect to polynomial-time reductions (see Exercise 34.3-6), $L$ is complete for $\text{NP}$ if and only if $\bar L$ is complete for $\text{co-NP}$.
|
(Omit!)
|
[] | false |
[] |
34-34.3-8
|
34
|
34.3
|
34.3-8
|
docs/Chap34/34.3.md
|
The reduction algorithm $F$ in the proof of Lemma 34.6 constructs the circuit $C = f(x)$ based on knowledge of $x$, $A$, and $k$. Professor Sartre observes that the string $x$ is input to $F$, but only the existence of $A$, $k$, and the constant factor implicit in the $O(n^k)$ running time is known to $F$ (since the language $L$ belongs to $\text{NP}$), not their actual values. Thus, the professor concludes that $F$ can't possibly construct the circuit $C$ and that the language $\text{CIRCUIT-SAT}$ is not necessarily $\text{NP-hard}$. Explain the flaw in the professor's reasoning.
|
(Omit!)
|
[] | false |
[] |
34-34.4-1
|
34
|
34.4
|
34.4-1
|
docs/Chap34/34.4.md
|
Consider the straightforward (nonpolynomial-time) reduction in the proof of Theorem 34.9. Describe a circuit of size $n$ that, when converted to a formula by this method, yields a formula whose size is exponential in $n$.
|
(Omit!)
|
[] | false |
[] |
34-34.4-2
|
34
|
34.4
|
34.4-2
|
docs/Chap34/34.4.md
|
Show the $\text{3-CNF}$ formula that results when we use the method of Theorem 34.10 on the formula $\text{(34.3)}$.
|
(Omit!)
|
[] | false |
[] |
34-34.4-3
|
34
|
34.4
|
34.4-3
|
docs/Chap34/34.4.md
|
Professor Jagger proposes to show that $\text{SAT} \le_\text P \text{3-CNF-SAT}$ by using only the truth-table technique in the proof of Theorem 34.10, and not the other steps. That is, the professor proposes to take the boolean formula $\phi$, form a truth table for its variables, derive from the truth table a formula in $\text{3-DNF}$ that is equivalent to $\neg\phi$, and then negate and apply DeMorgan's laws to produce a $\text{3-CNF}$ formula equivalent to $\phi$. Show that this strategy does not yield a polynomial-time reduction.
|
(Omit!)
|
[] | false |
[] |
34-34.4-4
|
34
|
34.4
|
34.4-4
|
docs/Chap34/34.4.md
|
Show that the problem of determining whether a boolean formula is a tautology is complete for $\text{co-NP}$. ($\textit{Hint:}$ See Exercise 34.3-7.)
|
(Omit!)
|
[] | false |
[] |
34-34.4-5
|
34
|
34.4
|
34.4-5
|
docs/Chap34/34.4.md
|
Show that the problem of determining the satisfiability of boolean formulas in disjunctive normal form is polynomial-time solvable.
|
(Omit!)
|
[] | false |
[] |
34-34.4-6
|
34
|
34.4
|
34.4-6
|
docs/Chap34/34.4.md
|
Suppose that someone gives you a polynomial-time algorithm to decide formula satisfiability. Describe how to use this algorithm to find satisfying assignments in polynomial time.
|
(Omit!)
|
[] | false |
[] |
34-34.4-7
|
34
|
34.4
|
34.4-7
|
docs/Chap34/34.4.md
|
Let $\text{2-CNF-SAT}$ be the set of satisfiable boolean formulas in $\text{CNF}$ with exactly $2$ literals per clause. Show that $\text{2-CNF-SAT} \in P$. Make your algorithm as efficient as possible. ($\textit{Hint:}$ Observe that $x \vee y$ is equivalent to $\neg x \to y$. Reduce $\text{2-CNF-SAT}$ to an efficiently solvable problem on a directed graph.)
|
(Omit!)
|
[] | false |
[] |
34-34.5-1
|
34
|
34.5
|
34.5-1
|
docs/Chap34/34.5.md
|
The **_subgraph-isomorphism problem_** takes two undirected graphs $G_1$ and $G_2$, and it asks whether $G_1$ is isomorphic to a subgraph of $G_2$. Show that the subgraph-isomorphism problem is $\text{NP-complete}$.
|
(Omit!)
|
[] | false |
[] |
34-34.5-2
|
34
|
34.5
|
34.5-2
|
docs/Chap34/34.5.md
|
Given an integer $m \times n$ matrix $A$ and an integer $m$-vector $b$, the **_0-1 integer-programming problem_** asks whether there exists an integer $n$-vector $x$ with elements in the set $\\{0, 1\\}$ such that $Ax \le b$. Prove that 0-1 integer programming is $\text{NP-complete}$. ($\textit{Hint:}$ Reduce from $\text{3-CNF-SAT}$.)
|
We begin by showing that 0-1 Integer Programming (0-1 IP) is in $\text{NP}$ and then prove that it is $\text{NP-hard}$ by reducing the $\text{3-CNF-SAT}$ problem to it.
Given an assignment to the variables of a 0-1 Integer Programming problem, it is straightforward to check if the constraints are satisfied in polynomial time. We simply plug the values into the inequalities and verify that each constraint holds. Therefore, 0-1 Integer Programming is in $\text{NP}$.
Next, we prove that 0-1 Integer Programming is $\text{NP-hard}$ by reducing $\text{3-CNF-SAT}$ to it. Let $S$ be a 3-CNF formula with $m$ clauses $C_1, \dots, C_m$ and $n$ variables $v_1, \dots, v_n$. The $\text{3-CNF-SAT}$ problem asks whether there exists a truth assignment to the variables such that every clause is satisfied.
For each clause in $S$, we can represent the clause as an inequality. A clause $C_i$ in the formula is a disjunction of three literals, each being either a variable $v_i$ or its negation $\neg v_i$. For example, a clause $v_i \vee \neg v_j \vee v_k$ is satisfied if at least one of the literals evaluates to true. We can convert each such clause into an inequality for an integer program.
Let $x_i$ represent the boolean variable $v_i$, and for negated variables, let $1 - x_j$ represent the boolean variable $\neg v_j$. Thus, $x_i$ is $1$ if $v_i$ is true, and $0$ otherwise. The clause $v_i \vee \neg v_j \vee v_k$ can be written as:
$$
x_i + (1 - x_j) + x_k \ge 1.
$$
This inequality guarantees that at least one of $x_i$, $1 - x_j$, or $x_k$ equals $1$, which corresponds to the clause being satisfied. Rewriting it in the $\le$ form required by the problem, we get:
$$
-x_i + x_j - x_k \le 0.
$$
This is the inequality we need for the integer-programming formulation.
Now, to express the system of inequalities as a matrix inequality, we let $x = (x_1, x_2, \dots, x_n)$ be the vector of variables representing the boolean values of the original variables $v_1, v_2, \dots, v_n$. For the clause $C_i = v_i \vee \neg v_j \vee v_k$, the corresponding inequality $-x_i + x_j - x_k \le 0$ can be represented in matrix form as:
$$
\begin{pmatrix}
0 & \dots & -1 & \dots & 1 & \dots & -1 & \dots & 0
\end{pmatrix}
\begin{pmatrix}
x_1 \\\\
x_2 \\\\
\vdots \\\\
x_n
\end{pmatrix}
\le 0.
$$
In this row vector, the positions corresponding to $x_i$, $x_j$, and $x_k$ are assigned the coefficients $-1$, $+1$, and $-1$, respectively, while all other positions are assigned $0$. In general, a clause with $r$ negated literals contributes a row with $-1$ in the positions of its positive literals, $+1$ in the positions of its negated literals, and right-hand side $r - 1$. Stacking one such row per clause, the system for the entire 3-CNF formula $S$ can be written as:
$$
A x \le b,
$$
where $A$ is the $m \times n$ matrix whose rows are these coefficient vectors and $b$ is the vector of corresponding right-hand sides.
Thus, we have reduced the $\text{3-CNF-SAT}$ problem to a 0-1 Integer Programming problem, and since $\text{3-CNF-SAT}$ is $\text{NP-complete}$, it follows that 0-1 Integer Programming is $\text{NP-hard}$.
Since 0-1 Integer Programming is both in $\text{NP}$ and $\text{NP-hard}$, it is $\text{NP-complete}$.
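A small Python sketch of the clause-to-row translation described above (a clause is given as three nonzero integers, $+i$ for $x_i$ and $-i$ for $\neg x_i$, with variables $1$-indexed); the function name is illustrative:

```python
def cnf_to_ip(clauses, n):
    """Build (A, b) so that x in {0,1}^n satisfies A x <= b exactly when
    every clause of the 3-CNF formula is satisfied."""
    A, b = [], []
    for clause in clauses:
        row, neg = [0] * n, 0
        for lit in clause:
            if lit > 0:
                row[lit - 1] -= 1      # positive literal x_i contributes -x_i
            else:
                row[-lit - 1] += 1     # negated literal contributes +x_j
                neg += 1
        A.append(row)
        b.append(neg - 1)              # right-hand side is (#negated literals) - 1
    return A, b

# (x1 OR NOT x2 OR x3) AND (NOT x1 OR NOT x2 OR NOT x3)
A, b = cnf_to_ip([[1, -2, 3], [-1, -2, -3]], 3)
print(A, b)  # [[-1, 1, -1], [1, 1, 1]] [0, 2]
```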
|
[] | false |
[] |
34-34.5-3
|
34
|
34.5
|
34.5-3
|
docs/Chap34/34.5.md
|
The integer **_linear-programming problem_** is like the 0-1 integer-programming problem given in Exercise 34.5-2, except that the values of the vector $x$ may be any integers rather than just $0$ or $1$. Assuming that the 0-1 integer-programming problem is $\text{NP-hard}$, show that the integer linear-programming problem is $\text{NP-complete}$.
|
We know 0-1 integer programming is NP-complete. To prove integer linear programming is NP-hard, we reduce from the 0-1 integer programming problem.
Given a 0-1 integer program:
$$
A x \le b, \quad x_i \in \\{0, 1\\},
$$
Convert it into an integer linear program without predefined bounds on the variables:
1. Add constraints to ensure $x_i \le 1$:
Append the $n \times n$ identity matrix $I$ and the vector $\mathbf{1}$ to get:
$$
\begin{pmatrix}
A \\\ I
\end{pmatrix}x
\le
\begin{pmatrix}
b \\\ \mathbf{1}
\end{pmatrix}.
$$
2. Add constraints to ensure $x_i \ge 0$:
Append $-I$ and $\mathbf{0}$:
$$
\begin{pmatrix}
A \\\ I \\\ -I
\end{pmatrix}x
\le
\begin{pmatrix}
b \\\ \mathbf{1} \\\ \mathbf{0}
\end{pmatrix}.
$$
Now, each $x_i$ must be an integer satisfying $0 \le x_i \le 1$. Thus, any integral solution to this system corresponds exactly to a solution of the original 0-1 problem. Since 0-1 integer programming is NP-complete, integer linear programming is NP-hard.
|
[] | false |
[] |
34-34.5-4
|
34
|
34.5
|
34.5-4
|
docs/Chap34/34.5.md
|
Show how to solve the subset-sum problem in polynomial time if the target value $t$ is expressed in unary.
|
(Omit!)
|
[] | false |
[] |
34-34.5-5
|
34
|
34.5
|
34.5-5
|
docs/Chap34/34.5.md
|
The **_set-partition problem_** takes as input a set $S$ of numbers. The question is whether the numbers can be partitioned into two sets $A$ and $\bar A = S - A$ such that $\sum_{x \in A} x = \sum_{x \in \bar A} x$. Show that the set-partition problem is $\text{NP-complete}$.
|
(Omit!)
|
[] | false |
[] |
34-34.5-6
|
34
|
34.5
|
34.5-6
|
docs/Chap34/34.5.md
|
Show that the hamiltonian-path problem is $\text{NP-complete}$.
|
(Omit!)
|
[] | false |
[] |
34-34.5-7
|
34
|
34.5
|
34.5-7
|
docs/Chap34/34.5.md
|
The **_longest-simple-cycle problem_** is the problem of determining a simple cycle (no repeated vertices) of maximum length in a graph. Formulate a related decision problem, and show that the decision problem is $\text{NP-complete}$.
|
(Omit!)
|
[] | false |
[] |
34-34.5-8
|
34
|
34.5
|
34.5-8
|
docs/Chap34/34.5.md
|
In the **_half 3-CNF satisfiability_** problem, we are given a $\text{3-CNF}$ formula $\phi$ with $n$ variables and $m$ clauses, where $m$ is even. We wish to determine whether there exists a truth assignment to the variables of $\phi$ such that exactly half the clauses evaluate to $0$ and exactly half the clauses evaluate to $1$. Prove that the half $\text{3-CNF}$ satisfiability problem is $\text{NP-complete}$.
|
(Omit!)
|
[] | false |
[] |
34-34-1
|
34
|
34-1
|
34-1
|
docs/Chap34/Problems/34-1.md
|
An **_independent set_** of a graph $G = (V, E)$ is a subset $V' \subseteq V$ of vertices such that each edge in $E$ is incident on at most one vertex in $V'$. The **_independent-set problem_** is to find a maximum-size independent set in $G$.
**a.** Formulate a related decision problem for the independent-set problem, and prove that it is $\text{NP-complete}$. ($\textit{Hint:}$ Reduce from the clique problem.)
**b.** Suppose that you are given a "black-box" subroutine to solve the decision problem you defined in part (a). Give an algorithm to find an independent set of maximum size. The running time of your algorithm should be polynomial in $|V|$ and $|E|$, counting queries to the black box as a single step.
Although the independent-set decision problem is $\text{NP-complete}$, certain special cases are polynomial-time solvable.
**c.** Give an efficient algorithm to solve the independent-set problem when each vertex in $G$ has degree $2$. Analyze the running time, and prove that your algorithm works correctly.
**d.** Give an efficient algorithm to solve the independent-set problem when $G$ is bipartite. Analyze the running time, and prove that your algorithm works correctly. ($\text{Hint:}$ Use the results of Section 26.3.)
|
(Omit!)
|
[] | false |
[] |
34-34-2
|
34
|
34-2
|
34-2
|
docs/Chap34/Problems/34-2.md
|
Bonnie and Clyde have just robbed a bank. They have a bag of money and want to divide it up. For each of the following scenarios, either give a polynomial-time algorithm, or prove that the problem is $\text{NP-complete}$. The input in each case is a list of the $n$ items in the bag, along with the value of each.
**a.** The bag contains $n$ coins, but only $2$ different denominations: some coins are worth $x$ dollars, and some are worth $y$ dollars. Bonnie and Clyde wish to divide the money exactly evenly.
**b.** The bag contains $n$ coins, with an arbitrary number of different denominations, but each denomination is a nonnegative integer power of $2$, i.e., the possible denominations are $1$ dollar, $2$ dollars, $4$ dollars, etc. Bonnie and Clyde wish to divide the money exactly evenly.
**c.** The bag contains $n$ checks, which are, in an amazing coincidence, made out to "Bonnie or Clyde." They wish to divide the checks so that they each get the exact same amount of money.
**d.** The bag contains $n$ checks as in part (c), but this time Bonnie and Clyde are willing to accept a split in which the difference is no larger than $100$ dollars.
|
(Omit!)
|
[] | false |
[] |
34-34-3
|
34
|
34-3
|
34-3
|
docs/Chap34/Problems/34-3.md
|
Mapmakers try to use as few colors as possible when coloring countries on a map, as long as no two countries that share a border have the same color. We can model this problem with an undirected graph $G = (V, E)$ in which each vertex represents a country and vertices whose respective countries share a border are adjacent. Then, a **_$k$-coloring_** is a function $c: V \to \\{1, 2, \dots, k \\}$ such that $c(u) \ne c(v)$ for every edge $(u, v) \in E$. In other words, the numbers $1, 2, \dots, k$ represent the $k$ colors, and adjacent vertices must have different colors. The **_graph-coloring problem_** is to determine the minimum number of colors needed to color a given graph.
**a.** Give an efficient algorithm to determine a $2$-coloring of a graph, if one exists.
**b.** Cast the graph-coloring problem as a decision problem. Show that your decision problem is solvable in polynomial time if and only if the graph-coloring problem is solvable in polynomial time.
**c.** Let the language $\text{3-COLOR}$ be the set of graphs that can be $3$-colored. Show that if $\text{3-COLOR}$ is $\text{NP-complete}$, then your decision problem from part (b) is $\text{NP-complete}$.
To prove that $\text{3-COLOR}$ is $\text{NP-complete}$, we use a reduction from $\text{3-CNF-SAT}$. Given a formula $\phi$ of $m$ clauses on $n$ variables $x_1, x_2, \dots, x_n$, we construct a graph $G = (V, E)$ as follows. The set $V$ consists of a vertex for each variable, a vertex for the negation of each variable, $5$ vertices for each clause, and $3$ special vertices: $\text{TRUE}$, $\text{FALSE}$, and $\text{RED}$. The edges of the graph are of two types: "literal" edges that are independent of the clauses and "clause" edges that depend on the clauses. The literal edges form a triangle on the special vertices and also form a triangle on $x_i, \neg x_i$, and $\text{RED}$ for $i = 1, 2, \dots, n$.
**d.** Argue that in any $3$-coloring $c$ of a graph containing the literal edges, exactly one of a variable and its negation is colored $c(\text{TRUE})$ and the other is colored $c(\text{FALSE})$. Argue that for any truth assignment for $\phi$, there exists a $3$-coloring of the graph containing just the literal edges.
The widget shown in Figure 34.20 helps to enforce the condition corresponding to a clause $(x \vee y \vee z)$. Each clause requires a unique copy of the $5$ vertices that are heavily shaded in the figure; they connect as shown to the literals of the clause and the special vertex $\text{TRUE}$.
**e.** Argue that if each of $x$, $y$, and $z$ is colored $c(\text{TRUE})$ or $c(\text{FALSE})$, then the widget is $3$-colorable if and only if at least one of $x$, $y$, or $z$ is colored $c(\text{TRUE})$.
**f.** Complete the proof that $\text{3-COLOR}$ is $\text{NP-complete}$.
|
(Omit!)
|
[] | false |
[] |
34-34-4
|
34
|
34-4
|
34-4
|
docs/Chap34/Problems/34-4.md
|
Suppose that we have one machine and a set of $n$ tasks $a_1, a_2, \dots, a_n$, each of which requires time on the machine. Each task $a_j$ requires $t_j$ time units on the machine (its processing time), yields a profit of $p_j$, and has a deadline $d_j$. The machine can process only one task at a time, and task $a_j$ must run without interruption for $t_j$ consecutive time units. If we complete task $a_j$ by its deadline $d_j$, we receive a profit $p_j$, but if we complete it after its deadline, we receive no profit. As an optimization problem, we are given the processing times, profits, and deadlines for a set of $n$ tasks, and we wish to find a schedule that completes all the tasks and returns the greatest amount of profit. The processing times, profits, and deadlines are all nonnegative numbers.
**a.** State this problem as a decision problem.
**b.** Show that the decision problem is $\text{NP-complete}$.
**c.** Give a polynomial-time algorithm for the decision problem, assuming that all processing times are integers from $1$ to $n$. ($\textit{Hint:}$ Use dynamic programming.)
**d.** Give a polynomial-time algorithm for the optimization problem, assuming that all processing times are integers from $1$ to $n$.
|
(Omit!)
|
[] | false |
[] |
35-35.1-1
|
35
|
35.1
|
35.1-1
|
docs/Chap35/35.1.md
|
Give an example of a graph for which $\text{APPROX-VERTEX-COVER}$ always yields a suboptimal solution.
|
(Omit!)
|
[] | false |
[] |
35-35.1-2
|
35
|
35.1
|
35.1-2
|
docs/Chap35/35.1.md
|
Prove that the set of edges picked in line 4 of $\text{APPROX-VERTEX-COVER}$ forms a maximal matching in the graph $G$.
|
(Omit!)
|
[] | false |
[] |
35-35.1-3
|
35
|
35.1
|
35.1-3 $\star$
|
docs/Chap35/35.1.md
|
Professor Bündchen proposes the following heuristic to solve the vertex-cover problem. Repeatedly select a vertex of highest degree, and remove all of its incident edges. Give an example to show that the professor's heuristic does not have an approximation ratio of $2$. ($\textit{Hint:}$ Try a bipartite graph with vertices of uniform degree on the left and vertices of varying degree on the right.)
|
Consider a bipartite graph with left part $L$ and right part $R$ such that $L$ has $5$ vertices of degrees $(5, 5, 5, 5, 5)$ and $R$ has $11$ vertices of degrees $(5, 4, 4, 3, 2, 2, 1, 1, 1, 1, 1)$ (the graph is easy to draw and the figure is omitted here).
Clearly there exists a vertex cover of size $5$ (the left vertices). The idea is to show that, with suitable tie-breaking, the proposed heuristic chooses all $11$ vertices of the right part, giving an approximation ratio of $11 / 5 > 2$.
1. After choosing the first vertex in $R$, the degrees on $L$ decrease to $(4, 4, 4, 4, 4)$.
2. After choosing the second vertex in $R$, the degrees on $L$ decrease to $(4, 3, 3, 3, 3)$.
3. After choosing the third vertex in $R$, the degrees on $L$ decrease to $(3, 3, 2, 2, 2)$.
4. After choosing the fourth vertex in $R$, the degrees on $L$ decrease to $(2, 2, 2, 2, 1)$.
5. After choosing the fifth vertex in $R$, the degrees on $L$ decrease to $(2, 2, 1, 1, 1)$.
6. After choosing the sixth vertex in $R$, the degrees on $L$ decrease to $(1, 1, 1, 1, 1)$.
Now every remaining vertex, on either side, has degree $1$, so the heuristic may choose the $5$ remaining vertices of $R$ as well, for a total of $11$ vertices against an optimal cover of size $5$.
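For concreteness, the following sketch simulates the heuristic on one explicit adjacency that is consistent with the degree sequences above (the original figure is omitted, so this particular wiring is only one possible choice). With ties broken in favor of $R$, it selects all $11$ right vertices:

```python
# One adjacency consistent with the degree sequences above (an assumption,
# since the original figure is omitted): r1..r11 are the right vertices.
edges = {
    "r1": ["l1", "l2", "l3", "l4", "l5"],   # degree 5
    "r2": ["l2", "l3", "l4", "l5"],         # degree 4
    "r3": ["l1", "l2", "l3", "l4"],         # degree 4
    "r4": ["l1", "l2", "l5"],               # degree 3
    "r5": ["l1", "l3"],                     # degree 2
    "r6": ["l4", "l5"],                     # degree 2
    "r7": ["l1"], "r8": ["l2"], "r9": ["l3"], "r10": ["l4"], "r11": ["l5"],
}
E = {(l, r) for r, ls in edges.items() for l in ls}

cover = []
while E:
    deg = {}
    for l, r in E:                           # degrees in the remaining graph
        deg[l] = deg.get(l, 0) + 1
        deg[r] = deg.get(r, 0) + 1
    # highest degree, breaking ties in favor of the right part (worst case)
    v = max(deg, key=lambda x: (deg[x], x.startswith("r")))
    cover.append(v)
    E = {e for e in E if v not in e}

print(len(cover), cover)   # 11 vertices, versus the optimal cover {l1, ..., l5}
```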
|
[] | false |
[] |
35-35.1-4
|
35
|
35.1
|
35.1-4
|
docs/Chap35/35.1.md
|
Give an efficient greedy algorithm that finds an optimal vertex cover for a tree in linear time.
|
(Omit!)
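The solution is omitted above. For reference, here is a sketch of one bottom-up greedy that finds a minimum vertex cover of a tree in linear time (it may differ from the solution the text intends): root the tree, process the vertices in post-order, and add a vertex to the cover whenever the edge to one of its children is still uncovered.

```python
from collections import defaultdict

def tree_vertex_cover(edges):
    """Minimum vertex cover of a (nonempty) tree given as a list of edges."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    root = edges[0][0]
    parent, order, stack = {root: None}, [], [root]
    while stack:                       # iterative DFS; `order` is a preorder
        u = stack.pop()
        order.append(u)
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                stack.append(w)
    cover = set()
    for u in reversed(order):          # children are handled before their parents
        p = parent[u]
        if p is not None and u not in cover and p not in cover:
            cover.add(p)               # edge (u, p) is still uncovered: take the parent
    return cover

print(tree_vertex_cover([(0, 1), (1, 2), (1, 3), (3, 4), (3, 5)]))  # {1, 3}
```

Each vertex and edge is handled a constant number of times, so the whole procedure runs in $O(n)$ time.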
|
[] | false |
[] |
35-35.1-5
|
35
|
35.1
|
35.1-5
|
docs/Chap35/35.1.md
|
From the proof of Theorem 34.12, we know that the vertex-cover problem and the $\text{NP-complete}$ clique problem are complementary in the sense that an optimal vertex cover is the complement of a maximum-size clique in the complement graph. Does this relationship imply that there is a polynomial-time approximation algorithm with a constant approximation ratio for the clique problem? Justify your answer.
|
(Omit!)
|
[] | false |
[] |
35-35.2-1
|
35
|
35.2
|
35.2-1
|
docs/Chap35/35.2.md
|
Suppose that a complete undirected graph $G = (V, E)$ with at least $3$ vertices has a cost function $c$ that satisfies the triangle inequality. Prove that $c(u, v) \ge 0$ for all $u, v \in V$.
|
Let $u, v \in V$. Since $G$ has at least $3$ vertices, we can choose a vertex $w$ distinct from both $u$ and $v$. Then by the triangle inequality we have
$$
\begin{aligned}
c(w, u) + c(u, v) &\ge c(w, v) \\\\
c(w, v) + c(v, u) &\ge c(w, u).
\end{aligned}
$$
Adding the above inequalities together gives
$$
\cancel{c(w, u)} + \cancel{c(w, v)} + c(u, v) + c(v, u) \ge \cancel{c(w, v)} + \cancel{c(w, u)}.
$$
Since $G$ is undirected we have $c(u, v) = c(v, u)$, and so the above inequality gives $2c(u, v) \ge 0$ from which the result follows.
|
[] | false |
[] |
35-35.2-2
|
35
|
35.2
|
35.2-2
|
docs/Chap35/35.2.md
|
Show how in polynomial time we can transform one instance of the traveling-salesman problem into another instance whose cost function satisfies the triangle inequality. The two instances must have the same set of optimal tours. Explain why such a polynomial-time transformation does not contradict Theorem 35.3, assuming that $\text P \ne \text{NP}$.
|
(Omit!)
|
[] | false |
[] |
35-35.2-3
|
35
|
35.2
|
35.2-3
|
docs/Chap35/35.2.md
|
Consider the following **_closest-point heuristic_** for building an approximate traveling-salesman tour whose cost function satisfies the triangle inequality. Begin with a trivial cycle consisting of a single arbitrarily chosen vertex. At each step, identify the vertex $u$ that is not on the cycle but whose distance to any vertex on the cycle is minimum. Suppose that the vertex on the cycle that is nearest $u$ is vertex $v$. Extend the cycle to include $u$ by inserting $u$ just after $v$. Repeat until all vertices are on the cycle. Prove that this heuristic returns a tour whose total cost is not more than twice the cost of an optimal tour.
|
(Omit!)
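The proof is omitted above. Purely as an illustration of the heuristic itself (not of the approximation bound), the construction described in the exercise can be sketched as follows, using Euclidean distances for concreteness; the naive $O(n^3)$ scan below could be reduced to $O(n^2)$ by caching each outside vertex's distance to the cycle.

```python
import math

def closest_point_tour(points):
    """Closest-point heuristic: grow a cycle by repeatedly inserting the
    outside vertex closest to the cycle, just after its nearest cycle vertex."""
    cycle = [0]                               # trivial cycle on an arbitrary vertex
    remaining = set(range(1, len(points)))
    while remaining:
        u, pos = min(
            ((u, i) for u in remaining for i in range(len(cycle))),
            key=lambda t: math.dist(points[t[0]], points[cycle[t[1]]]),
        )
        cycle.insert(pos + 1, u)              # insert u just after its nearest vertex v
        remaining.remove(u)
    return cycle

print(closest_point_tour([(0, 0), (0, 2), (3, 0), (3, 2), (1, 1)]))
```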
|
[] | false |
[] |
35-35.2-4
|
35
|
35.2
|
35.2-4
|
docs/Chap35/35.2.md
|
In the **_bottleneck traveling-salesman problem_**, we wish to find the hamiltonian cycle that minimizes the cost of the most costly edge in the cycle. Assuming that the cost function satisfies the triangle inequality, show that there exists a polynomial-time approximation algorithm with approximation ratio $3$ for this problem. ($\textit{Hint:}$ Show recursively that we can visit all the nodes in a bottleneck spanning tree, as discussed in Problem 23-3, exactly once by taking a full walk of the tree and skipping nodes, but without skipping more than two consecutive intermediate nodes. Show that the costliest edge in a bottleneck spanning tree has a cost that is at most the cost of the costliest edge in a bottleneck hamiltonian cycle.)
|
(Omit!)
|
[] | false |
[] |
35-35.2-5
|
35
|
35.2
|
35.2-5
|
docs/Chap35/35.2.md
|
Suppose that the vertices for an instance of the traveling-salesman problem are points in the plane and that the cost $c(u, v)$ is the euclidean distance between points $u$ and $v$. Show that an optimal tour never crosses itself.
|
(Omit!)
|
[] | false |
[] |
35-35.3-1
|
35
|
35.3
|
35.3-1
|
docs/Chap35/35.3.md
|
Consider each of the following words as a set of letters: $\\{\text{arid}$, $\text{dash}$, $\text{drain}$, $\text{heard}$, $\text{lost}$, $\text{nose}$, $\text{shun}$, $\text{slate}$, $\text{snare}$, $\text{thread}\\}$. Show which set cover $\text{GREEDY-SET-COVER}$ produces when we break ties in favor of the word that appears first in the dictionary.
|
(Omit!)
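The worked trace is omitted above. The following sketch simply runs the greedy choice with the stated tie-breaking rule (the alphabetically first word wins ties) and prints the cover it selects:

```python
words = ["arid", "dash", "drain", "heard", "lost", "nose",
         "shun", "slate", "snare", "thread"]
sets = {w: set(w) for w in words}
uncovered = set().union(*sets.values())       # the ground set of letters

cover = []
while uncovered:
    # most newly covered letters; max() keeps the alphabetically first on ties
    best = max(sorted(words), key=lambda w: len(sets[w] & uncovered))
    cover.append(best)
    uncovered -= sets[best]

print(cover)   # expected: ['thread', 'lost', 'drain', 'shun']
```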
|
[] | false |
[] |
35-35.3-2
|
35
|
35.3
|
35.3-2
|
docs/Chap35/35.3.md
|
Show that the decision version of the set-covering problem is $\text{NP-complete}$ by reducing it from the vertex-cover problem.
|
(Omit!)
|
[] | false |
[] |
35-35.3-3
|
35
|
35.3
|
35.3-3
|
docs/Chap35/35.3.md
|
Show how to implement $\text{GREEDY-SET-COVER}$ in such a way that it runs in time $O\Big(\sum_{S \in \mathcal F} |S|\Big)$.
|
(Omit!)
|
[] | false |
[] |
35-35.3-4
|
35
|
35.3
|
35.3-4
|
docs/Chap35/35.3.md
|
Show that the following weaker form of Theorem 35.4 is trivially true:
$$|\mathcal C| \le |\mathcal C^\*| \max\\{|S|: S \in \mathcal F\\}.$$
|
(Omit!)
|
[] | false |
[] |
35-35.3-5
|
35
|
35.3
|
35.3-5
|
docs/Chap35/35.3.md
|
$\text{GREEDY-SET-COVER}$ can return a number of different solutions, depending on how we break ties in line 4. Give a procedure $\text{BAD-SET-COVER-INSTANCE}(n)$ that returns an $n$-element instance of the set-covering problem for which, depending on how we break ties in line 4, $\text{GREEDY-SET-COVER}$ can return a number of different solutions that is exponential in $n$.
|
(Omit!)
|
[] | false |
[] |
35-35.4-1
|
35
|
35.4
|
35.4-1
|
docs/Chap35/35.4.md
|
Show that even if we allow a clause to contain both a variable and its negation, randomly setting each variable to $1$ with probability $1 / 2$ and to $0$ with probability $1 / 2$ still yields a randomized $8 / 7$-approximation algorithm.
|
(Omit!)
|
[] | false |
[] |
35-35.4-2
|
35
|
35.4
|
35.4-2
|
docs/Chap35/35.4.md
|
The **_MAX-CNF satisfiability problem_** is like the $\text{MAX-3-CNF}$ satisfiability problem, except that it does not restrict each clause to have exactly $3$ literals. Give a randomized $2$-approximation algorithm for the $\text{MAX-CNF}$ satisfiability problem.
|
(Omit!)
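The solution is omitted above. One standard approach (a sketch, assuming every clause is nonempty): set each variable to true or false independently with probability $1 / 2$. A clause fails only if all of its literals are false, which happens with probability at most $1 / 2$, so the expected number of satisfied clauses is at least $m / 2 \ge \text{OPT} / 2$.

```python
import random

def random_max_cnf(clauses, n_vars):
    """Randomized 2-approximation for MAX-CNF (sketch): literals are nonzero
    integers, +i for x_i and -i for NOT x_i."""
    assignment = {i: random.random() < 0.5 for i in range(1, n_vars + 1)}
    satisfied = sum(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )
    return assignment, satisfied

# (x1 OR x2) AND (NOT x1) AND (x2 OR NOT x3)
print(random_max_cnf([[1, 2], [-1], [2, -3]], 3))
```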
|
[] | false |
[] |
35-35.4-3
|
35
|
35.4
|
35.4-3
|
docs/Chap35/35.4.md
|
In the $\text{MAX-CUT}$ problem, we are given an unweighted undirected graph $G = (V, E)$. We define a cut $(S, V - S)$ as in Chapter 23 and the **_weight_** of a cut as the number of edges crossing the cut. The goal is to find a cut of maximum weight. Suppose that for each vertex $v$, we randomly and independently place $v$ in $S$ with probability $1 / 2$ and in $V - S$ with probability $1 / 2$. Show that this algorithm is a randomized $2$-approximation algorithm.
|
$\gdef\Ex{\mathbf{E}}$
$\gdef\Prob{\mathbf{P}}$
$\gdef\opt{\texttt{opt}}$
We first rewrite the algorithm for clarity.
```cpp
APPROX-MAX-CUT(G)
for each v in V
flip a fair coin
if heads
add v to S
else
add v to V - S
```
This algorithm clearly runs in linear time. For each edge $(u, v) \in E$, define the event $A_{uv}$ to be the event that edge
$(u, v)$ crosses the cut $(S, V - S)$, and let $1_{A_{uv}}$ be the indicator random variable for $A_{uv}$.
The event $A_{uv}$ occurs if and only if the vertices $u$ and $v$ are placed in different sets during the main loop in $\text{APPROX-MAX-CUT}$. Hence,
$$
\begin{aligned}
\Prob \\{A_{uv}\\}
&= \Prob \\{u \in S \wedge v \in V - S\\} + \Prob \\{u \in V - S \wedge v \in S\\} \\\\
&= \frac{1}{2}\cdot\frac{1}{2} + \frac{1}{2}\cdot\frac{1}{2} \\\\
&= \frac{1}{2}.
\end{aligned}
$$
Let $\opt$ denote the cost of a maximum cut in $G$, and let $c$ denote the weight of the cut $(S, V - S)$ produced by $\text{APPROX-MAX-CUT}$. Clearly $c = \sum_{(u, v) \in E} 1_{A_{uv}}$. Also, note that $\opt \le |E|$ (with equality if and only if $G$ is bipartite). Hence,
$$
\begin{aligned}
\Ex[c]
&= \Ex\left[\sum_{(u, v) \in E} 1_{A_{uv}}\right]\\\\
&= \sum_{(u, v)\in E}\Ex[1_{A_{uv}}]\\\\
&= \sum_{(u, v)\in E}\Prob\\{A_{uv}\\}\\\\
&= \frac{1}{2}|E|\\\\
&\ge \frac{1}{2}\opt
\end{aligned}
$$
Hence $\frac{\opt}{\Ex[c]} \le \frac{|E|}{|E| / 2} = 2$, and so $\text{APPROX-MAX-CUT}$ is a randomized $2$-approximation algorithm.
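As a quick empirical check of the expectation computed above (illustration only, on a hypothetical random graph), averaging the cut weight over many random choices of $S$ should give a value close to $|E| / 2$:

```python
import itertools
import random

V = list(range(8))
E = [e for e in itertools.combinations(V, 2) if random.random() < 0.5]

trials = 20000
total = 0
for _ in range(trials):
    S = {v for v in V if random.random() < 0.5}       # the coin flips of the algorithm
    total += sum((u in S) != (v in S) for u, v in E)  # edges crossing (S, V - S)

print(total / trials, len(E) / 2)   # these two numbers should be close
```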
|
[
{
"lang": "cpp",
"code": "APPROX-MAX-CUT(G)\n for each v in V\n flip a fair coin\n if heads\n add v to S\n else\n add v to V - S"
}
] | false |
[] |
35-35.4-4
|
35
|
35.4
|
35.4-4
|
docs/Chap35/35.4.md
|
Show that the constraints in line $\text{(35.19)}$ are redundant in the sense that if we remove them from the linear program in lines $\text{(35.17)}–\text{(35.20)}$, any optimal solution to the resulting linear program must satisfy $x(v) \le 1$ for each $v \in V$.
|
(Omit!)
|
[] | false |
[] |
35-35.5-1
|
35
|
35.5
|
35.5-1
|
docs/Chap35/35.5.md
|
Prove equation $\text{(35.23)}$. Then show that after executing line 5 of $\text{EXACT-SUBSET-SUM}$, $L_i$ is a sorted list containing every element of $P_i$ whose value is not more than $t$.
|
(Omit!)
|
[] | false |
[] |
35-35.5-2
|
35
|
35.5
|
35.5-2
|
docs/Chap35/35.5.md
|
Using induction on $i$, prove inequality $\text{(35.26)}$.
|
(Omit!)
|
[] | false |
[] |