id | chapter | section | title | source_file | question_markdown | answer_markdown | code_blocks | has_images | image_refs
---|---|---|---|---|---|---|---|---|---
26-26-1 | 26 | 26-1 | 26-1 | docs/Chap26/Problems/26-1.md |
An $n \times n$ **_grid_** is an undirected graph consisting of $n$ rows and $n$ columns of vertices, as shown in Figure 26.11. We denote the vertex in the $i$th row and the $j$th column by $(i, j)$. All vertices in a grid have exactly four neighbors, except for the boundary vertices, which are the points $(i, j)$ for which $i = 1$, $i = n$, $j = 1$, or $j = n$.
Given $m \le n^2$ starting points $(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)$ in the grid, the **_escape problem_** is to determine whether or not there are $m$ vertex-disjoint paths from the starting points to any $m$ different points on the boundary. For example, the grid in Figure 26.11(a) has an escape, but the grid in Figure 26.11(b) does not.
**a.** Consider a flow network in which vertices, as well as edges, have capacities. That is, the total positive flow entering any given vertex is subject to a capacity constraint. Show that determining the maximum flow in a network with edge and vertex capacities can be reduced to an ordinary maximum-flow problem on a flow network of comparable size.
**b.** Describe an efficient algorithm to solve the escape problem, and analyze its running time.
|
**a.** This problem is identical to exercise 26.1-7.
**b.** Construct a vertex-capacitated flow network from the instance of the escape problem: give the network a vertex of unit capacity for each intersection of grid lines, and a bidirectional edge of unit capacity for each pair of vertices that are adjacent in the grid. Then, put a unit-capacity edge going from $s$ to each of the starting points, and a unit-capacity edge going from each vertex on the boundary of the grid to $t$. A solution to this flow problem corresponds to a solution to the escape problem: because every edge has unit capacity, every augmenting path carries a unit of flow, so the flow through the grid decomposes into the paths taken. These are the escape paths whenever the total flow is equal to $m$ (we know it cannot be greater than $m$ by looking at the cut which has $s$ by itself). And if the maximum flow is less than $m$, we know that the escape problem is not solvable, because otherwise we could construct a flow with value $m$ from the list of vertex-disjoint paths that the people escaped along.
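One way this construction could look in runnable form is sketched below (Python; the names `max_flow` and `has_escape` are illustrative and a plain Edmonds-Karp routine stands in for "a maximum-flow algorithm"). Each grid vertex is split into an in/out pair so that the unit vertex capacity from part (a) is enforced by an ordinary edge capacity.
```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp on a residual-capacity map cap[u][v]; cap is modified in place."""
    total = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:            # BFS for an augmenting path
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)     # bottleneck residual capacity
        for u, v in path:
            cap[u][v] -= b
            cap[v][u] += b
        total += b

def has_escape(n, starts):
    """Decide the escape problem on an n x n grid with 1-indexed starting points."""
    IN, OUT = 0, 1
    cap = defaultdict(lambda: defaultdict(int))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            cap[(i, j, IN)][(i, j, OUT)] = 1                   # unit vertex capacity
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # unit grid edges, both directions
                if 1 <= i + di <= n and 1 <= j + dj <= n:
                    cap[(i, j, OUT)][(i + di, j + dj, IN)] = 1
            if i in (1, n) or j in (1, n):                     # boundary vertices reach the sink
                cap[(i, j, OUT)]['t'] = 1
    for (x, y) in starts:
        cap['s'][(x, y, IN)] = 1                               # one unit per starting point
    return max_flow(cap, 's', 't') == len(starts)

print(has_escape(3, [(2, 2), (1, 1)]))   # True: both points can reach the boundary disjointly
```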
26-26-2 | 26 | 26-2 | 26-2 | docs/Chap26/Problems/26-2.md |
A **_path cover_** of a directed graph $G = (V, E)$ is a set $P$ of vertex-disjoint paths such that every vertex in $V$ is included in exactly one path in $P$. Paths may start and end anywhere, and they may be of any length, including $0$. A **_minimum path cover_** of $G$ is a path cover containing the fewest possible paths.
**a.** Give an efficient algorithm to find a minimum path cover of a directed acyclic graph $G = (V, E)$. ($\textit{Hint:}$ Assuming that $V = \\{1, 2, \ldots, n\\}$, construct the graph $G' = (V', E')$, where
$$
\begin{aligned}
V' & = \\{x_0, x_1, \ldots, x_n\\} \cup \\{y_0, y_1, \ldots, y_n\\}, \\\\
E' & = \\{(x_0, x_i): i \in V\\} \cup \\{(y_i, y_0): i \in V\\} \cup \\{(x_i, y_j): (i, j) \in E\\},
\end{aligned}
$$
and run a maximum-flow algorithm.)
**b.** Does your algorithm work for directed graphs that contain cycles? Explain.
|
**a.** Set up the graph $G'$ as defined in the problem, give each edge capacity $1$, and run a maximum-flow algorithm. I claim that if we take $(i, j)$ to be an edge in our path cover whenever $(x_i, y_j)$ carries flow $1$ in the maximum flow, then the result is a minimum path cover. First observe that no vertex appears twice in the same path. If it did, then we would have $f(x_i, y_j) = f(x_k, y_j) = 1$ for some $i \ne k$. However, this contradicts conservation of flow, since the capacity leaving $y_j$ is only $1$. Moreover, since the capacity from $s$ to $x_i$ is $1$, we can never have two edges of the form $(x_i, y_j)$ and $(x_i, y_k)$ for $k \ne j$. We ensure every vertex is included in some path by asserting that if no chosen edge $(x_j, y_i)$ or $(x_i, y_j)$ exists for any $j$, then vertex $i$ is on a path by itself. Thus, we are guaranteed to obtain a path cover. If there are $k$ paths in a cover of $n$ vertices, then they consist of $n - k$ edges in total. Given a path cover, we can recover a flow by assigning edge $(x_i, y_j)$ flow $1$ if and only if $(i, j)$ is an edge in one of the paths in the cover. Suppose that the maximum-flow algorithm yields a cover with $k$ paths, and hence flow $n - k$, but a minimum path cover uses strictly fewer than $k$ paths. Then it must use strictly more than $n - k$ edges, so we can recover a flow which is larger than the one previously found, contradicting the fact that the previous flow was maximum. Thus, we find a minimum path cover. Since the maximum flow in this network corresponds to a maximum matching in the bipartite graph on $\\{x_1, \ldots, x_n\\} \cup \\{y_1, \ldots, y_n\\}$, section 26.3 tells us that we can find it in $O(VE)$ time.
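As a concrete illustration of part (a), the sketch below (Python; `min_path_cover` and `augment` are illustrative names) uses the equivalent bipartite-matching view of the unit-capacity flow on $G'$ — a simple augmenting-path matching stands in for the maximum-flow computation — and then stitches the chosen edges into paths.
```python
def min_path_cover(n, adj):
    """Minimum path cover of a DAG with vertices 1..n and successor lists adj[u]."""
    match_right = {}                 # y_j -> x_i : edge (i, j) chosen, i.e. j follows i

    def augment(u, seen):
        # Standard augmenting-path search for bipartite matching.
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match_right or augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    for u in range(1, n + 1):
        augment(u, set())

    nxt = {u: v for v, u in match_right.items()}   # successor of u on its path
    has_pred = set(match_right)                    # vertices that have a predecessor
    paths = []
    for u in range(1, n + 1):
        if u not in has_pred:                      # u starts a path
            path = [u]
            while path[-1] in nxt:
                path.append(nxt[path[-1]])
            paths.append(path)
    return paths                                   # number of paths = n - |matching|

# Example: edges 1 -> 2 -> 3 plus an isolated vertex 4 give two paths.
print(min_path_cover(4, {1: [2], 2: [3], 3: [], 4: []}))   # [[1, 2, 3], [4]]
```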
**b.** This doesn't work for directed graphs which contain cycles. To see this, consider the graph on $\\{1, 2, 3, 4\\}$ which contains edges $(1, 2)$, $(2, 3)$, $(3, 1)$, and $(4, 3)$. The desired output would be the single path $4$, $3$, $1$, $2$, but the flow which assigns edges $(x_1, y_2)$, $(x_2, y_3)$, and $(x_3, y_1)$ flow $1$ is maximum; it selects the cycle $1, 2, 3$, which is not a set of vertex-disjoint paths covering every vertex.
26-26-3 | 26 | 26-3 | 26-3 | docs/Chap26/Problems/26-3.md |
Professor Gore wants to open up an algorithmic consulting company. He has identified $n$ important subareas of algorithms (roughly corresponding to different portions of this textbook), which he represents by the set $A = \\{A_1, A_2, \ldots, A_n\\}$. In each subarea $A_k$, he can hire an expert in that area for $c_k$ dollars. The consulting company has lined up a set $J = \\{J_1, J_2, \ldots, J_m\\}$ of potential jobs. In order to perform job $J_i$, the company needs to have hired experts in a subset $R_i \subseteq A$ of subareas. Each expert can work on multiple jobs simultaneously. If the company chooses to accept job $J_i$, it must have hired experts in all subareas in $R_i$, and it will take in revenue of $p_i$ dollars.
Professor Gore's job is to determine which subareas to hire experts in and which jobs to accept in order to maximize the net revenue, which is the total income from jobs accepted minus the total cost of employing the experts.
Consider the following flow network $G$. It contains a source vertex $s$, vertices $A_1, A_2, \ldots, A_n$, vertices $J_1, J_2, \ldots, J_m$, and a sink vertex $t$. For $k = 1, 2, \ldots, n$, the flow network contains an edge $(s, A_k)$ with capacity $c(s, A_k) = c_k$, and for $i = 1, 2, \ldots, m$, the flow network contains an edge $(J_i, t)$ with capacity $c(J_i, t) = p_i$. For $k = 1, 2, \ldots, n$ and $i = 1, 2, \ldots, m$, if $A_k \in R_i$, then $G$ contains an edge $(A_k, J_i)$ with capacity $c(A_k, J_i) = \infty$.
**a.** Show that if $J_i \in T$ for a finite-capacity cut $(S, T)$ of $G$, then $A_k \in T$ for each $A_k \in R_i$.
**b.** Show how to determine the maximum net revenue from the capacity of a minimum cut of $G$ and the given $p_i$ values.
**c.** Give an efficient algorithm to determine which jobs to accept and which experts to hire. Analyze the running time of your algorithm in terms of $m$, $n$, and $r = \sum_{i = 1}^m |R_i|$.
|
**a.** Suppose towards a contradiction that there were some $J_i \in T$ and some $A_k \in R_i$ so that $A_k \notin T$. By the definition of the flow network, there is an edge of infinite capacity going from $A_k$ to $J_i$ because $A_k \in R_i$, and this edge crosses the given cut (from $A_k \in S$ to $J_i \in T$). The capacity of the cut is therefore infinite, contradicting the given fact that the cut has finite capacity.
**b.** By part (a), in any finite-capacity cut $(S, T)$ of $G$, every job $J_i \in T$ has all of its required experts in $T$ as well, so "accept exactly the jobs on the sink side and hire exactly the experts on the sink side" is always a feasible plan. The edges crossing such a cut are exactly the edges $(J_i, t)$ with $J_i \in S$ and the edges $(s, A_k)$ with $A_k \in T$, so the plan's net revenue is
$$\sum_{J_i \in T} p_i - \sum_{A_k \in T} c_k = \sum_{i = 1}^m p_i - \Big(\sum_{J_i \in S} p_i + \sum_{A_k \in T} c_k\Big) = \sum_{i = 1}^m p_i - c(S, T).$$
Conversely, any feasible choice of jobs and experts yields a finite-capacity cut by placing the accepted jobs and hired experts in $T$ and everything else (together with $s$) in $S$. Therefore the maximum net revenue equals $\sum_{i = 1}^m p_i$ minus the capacity of a minimum cut of $G$.
**c.** Build the flow network $G$ described in the problem, compute a maximum flow with the $O(V^3)$ relabel-to-front algorithm described in section 26.5, and read off a minimum cut $(S, T)$ from the residual network (the vertices reachable from $s$ form $S$). By part (b), accepting the jobs in $T$ and hiring the experts in $T$ maximizes the net revenue. The network has $2 + m + n$ vertices and $m + n + r$ edges, so constructing it takes $O(m + n + r)$ time and the flow computation takes $O((2 + m + n)^3)$ time; the total running time is $O((m + n)^3 + r)$, which is cubic in $\max(m, n)$ since $r \le mn$.
26-26-4 | 26 | 26-4 | 26-4 | docs/Chap26/Problems/26-4.md |
Let $G = (V, E)$ be a flow network with source $s$, sink $t$, and integer capacities. Suppose that we are given a maximum flow in $G$.
**a.** Suppose that we increase the capacity of a single edge $(u, v) \in E$ by $1$. Give an $O(V + E)$-time algorithm to update the maximum flow.
**b.** Suppose that we decrease the capacity of a single edge $(u, v) \in E$ by $1$. Give an $O(V + E)$-time algorithm to update the maximum flow.
|
**a.** If there exists a minimum cut on which $(u, v)$ doesn't lie, then increasing $c(u, v)$ doesn't change the capacity of that cut, so the maximum flow can't be increased and there will exist no augmenting path in the residual network. Otherwise $(u, v)$ crosses every minimum cut, and we can possibly increase the flow by $1$ (but by no more, since raising one capacity by $1$ raises the capacity of any cut, and hence of the minimum cut, by at most $1$). Perform one iteration of Ford-Fulkerson: if there exists an augmenting path, it will be found and used on this iteration. Since the edge capacities are integers, the flow values are all integral, and since the flow strictly increases by an integral amount each time, a single iteration of the while loop of line 3 of Ford-Fulkerson will increase the flow by $1$, after which the flow is maximum. To find an augmenting path we use a BFS, which runs in $O(V + E') = O(V + E)$ time.
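A sketch of the part (a) update, assuming the capacities and the current flow are kept in dictionaries keyed by edge and that `adj` lists each vertex's neighbors in both directions (so residual back edges are visible); the helper names are illustrative:
```python
from collections import deque

def augment_once(c, f, adj, s, t):
    """One BFS over the residual network plus one unit-augmentation: O(V + E)."""
    resid = lambda a, b: c.get((a, b), 0) - f.get((a, b), 0) > 0 or f.get((b, a), 0) > 0
    parent = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent and resid(u, v):
                parent[v] = u
                q.append(v)
    if t not in parent:
        return False                          # flow was already maximum
    v = t
    while parent[v] is not None:
        u = parent[v]
        if c.get((u, v), 0) > f.get((u, v), 0):
            f[u, v] = f.get((u, v), 0) + 1    # push one unit forward
        else:
            f[v, u] -= 1                      # or cancel one unit of opposite flow
        v = u
    return True

def increase_capacity(c, f, adj, s, t, u, v):
    """Part (a): after c(u, v) grows by 1, at most one augmentation is possible."""
    c[u, v] += 1
    augment_once(c, f, adj, s, t)

# Tiny example: path s -> a -> t; raising c(s, a) from 1 to 2 lets one more unit through.
c = {('s', 'a'): 1, ('a', 't'): 2}
f = {('s', 'a'): 1, ('a', 't'): 1}
adj = {'s': ['a'], 'a': ['s', 't'], 't': ['a']}
increase_capacity(c, f, adj, 's', 't', 's', 'a')
print(f)   # {('s', 'a'): 2, ('a', 't'): 2}
```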
**b.** If the edge's flow was already at least $1$ below capacity, then nothing changes. Otherwise, find a path from $s$ to $t$ which contains $(u, v)$ and along which every edge carries at least one unit of flow, using BFS in $O(V + E)$ time, and decrease the flow of every edge on that path by $1$. This decreases the total flow by $1$. Then run one iteration of the while loop of Ford-Fulkerson in $O(V + E)$ time. By the argument given in part (a), everything is integer valued and flow strictly increases, so we will either find no augmenting path, or will increase the flow by $1$ and then terminate.
26-26-5 | 26 | 26-5 | 26-5 | docs/Chap26/Problems/26-5.md |
Let $G = (V, E)$ be a flow network with source $s$, sink $t$, and an integer capacity $c(u, v)$ on each edge $(u, v) \in E$. Let $C = \max_{(u, v) \in E} c(u, v)$.
**a.** Argue that a minimum cut of $G$ has capacity at most $C|E|$.
**b.** For a given number $K$, show how to find an augmenting path of capacity at least $K$ in $O(E)$ time, if such a path exists.
We can use the following modification of $\text{FORD-FULKERSON-METHOD}$ to compute a maximum flow in $G$:
```cpp
MAX-FLOW-BY-SCALING(G, s, t)
    C = max_{(u, v) ∈ E} c(u, v)
    initialize flow f to 0
    K = 2^{floor(lg C)}
    while K ≥ 1
        while there exists an augmenting path p of capacity at least K
            augment flow f along p
        K = K / 2
    return f
```
**c.** Argue that $\text{MAX-FLOW-BY-SCALING}$ returns a maximum flow.
**d.** Show that the capacity of a minimum cut of the residual network $G_f$ is at most $2K|E|$ each time line 4 is executed.
**e.** Argue that the inner **while** loop of lines 5–6 executes $O(E)$ times for each value of $K$.
**f.** Conclude that $\text{MAX-FLOW-BY-SCALING}$ can be implemented so that it runs in $O(E^2\lg C)$ time.
|
**a.** Since the capacity of a cut is the sum of the capacity of the edges going from a vertex on one side to a vertex on the other, it is less than or equal to the sum of the capacities of all of the edges. Since each of the edges has a capacity that is $\le C$, if we were to replace the capacity of each edge with $C$, we would only be potentially increasing the sum of the capacities of all the edges. After so changing the capacities of the edges, the sum of the capacities of all the edges is equal to $C|E|$, potentially an overestimate of the original capacity of any cut, and so of the minimum cut.
**b.** Since the capacity of a path is equal to the minimum of the capacities of the edges along that path, any edge in the residual network that has capacity less than $K$ cannot be used in such an augmenting path. Conversely, so long as all the edges on a path have capacity at least $K$, the capacity of the augmenting path, if one is found, will be at least $K$. This means that all that needs to be done is to remove from the residual network those edges whose capacity is less than $K$ and then run BFS on what remains, which takes $O(E)$ time.
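In code, the restricted BFS might look like the following sketch (Python; the names and the dictionary representation of residual capacities are assumptions, not from the text):
```python
from collections import deque

def augmenting_path_at_least_K(resid, adj, s, t, K):
    """Part (b): BFS that ignores residual edges of capacity < K, so any path found
    has capacity at least K.  resid[(u, v)] is the residual capacity of edge (u, v)."""
    parent = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent and resid.get((u, v), 0) >= K:
                parent[v] = u
                q.append(v)
    if t not in parent:
        return None                     # no augmenting path of capacity >= K exists
    path, v = [], t
    while parent[v] is not None:
        path.append((parent[v], v))
        v = parent[v]
    return path[::-1]

# Example: s->a (3), a->t (2), s->t (1); with K = 2 only the path through a qualifies.
resid = {('s', 'a'): 3, ('a', 't'): 2, ('s', 't'): 1}
adj = {'s': ['a', 't'], 'a': ['t'], 't': []}
print(augmenting_path_at_least_K(resid, adj, 's', 't', 2))   # [('s', 'a'), ('a', 't')]
```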
**c.** $K$ starts out as a power of $2$, and each iteration of the while loop on line 4 decreases it by a factor of two until it is less than $1$, so there will be some iteration of that loop during which $K = 1$. During this iteration, the loop on line 5 uses any augmenting paths of capacity at least $1$. Since the original capacities are all integers, the residual capacities at each step are integers too, which means that no augmenting path can have capacity less than $1$. So, once the algorithm terminates, there are no augmenting paths of capacity at least $1$, and hence no augmenting paths at all, so the flow is maximum.
**d.** Each time line 4 is executed we know that there is no augmenting path of capacity at least $2K$. To see this fact on the initial execution of line 4, we just note that $2K = 2 \cdot 2^{\lfloor \lg C \rfloor} > 2 \cdot 2^{\lg C - 1} = 2^{\lg C} = C$; since an augmenting path is limited by the capacity of the smallest edge it contains, and all the edges have capacity at most $C$, no augmenting path has capacity greater than $C < 2K$. On subsequent executions of line 4, the inner loop of lines 5–6 during the previous iteration of the outer loop has already used up every augmenting path of capacity at least the previous value of $K$, namely $2K$, and only ended once there were none left.
Since any augmenting path must have capacity less than $2K$, we can look at each augmenting path $p$ and assign to it an edge $e_p$ which is any edge whose capacity is tied for smallest among all the edges along that path. Then, removing all of the edges $e_p$ would disconnect $t$ from $s$ in the residual network, since every possible augmenting path goes through one of those edges. We know that there are at most $|E|$ of them since they form a subset of the edges. We also know that each of them has capacity less than $2K$, since each is a smallest-capacity edge of an augmenting path whose capacity is less than $2K$. So, the total capacity of this cut is at most $2K|E|$.
**e.** Each time that the inner while loop runs, it adds an amount of flow that is at least $K$, since that is the capacity of the augmenting path. We also know that before we start that while loop, there is a cut of capacity at most $2K|E|$, so the most flow we could possibly add over all its iterations is $2K|E|$. Combining these two facts, the most iterations possible is $\frac{2K|E|}{K} = 2|E| \in O(|E|)$.
**f.** We only execute the outer **while** loop $O(\lg C)$ times, since $\lg(2^{\lfloor \lg C \rfloor}) \le \lg C$. The inner **while** loop only runs $O(|E|)$ times for each value of $K$ by the previous part. Finally, every time the inner **while** loop runs, the search it performs can be done in time $O(|E|)$ by part (b). Putting it all together, the runtime is $O(|E|^2\lg C)$.
26-26-6 | 26 | 26-6 | 26-6 | docs/Chap26/Problems/26-6.md |
In this problem, we describe a faster algorithm, due to Hopcroft and Karp, for finding a maximum matching in a bipartite graph. The algorithm runs in $O(\sqrt V E)$ time. Given an undirected, bipartite graph $G = (V, E)$, where $V = L \cup R$ and all edges have exactly one endpoint in $L$, let $M$ be a matching in $G$. We say that a simple path $P$ in $G$ is an **_augmenting path_** with respect to $M$ if it starts at an unmatched vertex in $L$, ends at an unmatched vertex in $R$, and its edges belong alternately to $M$ and $E - M$. (This definition of an augmenting path is related to, but different from, an augmenting path in a flow network.) In this problem, we treat a path as a sequence of edges, rather than as a sequence of vertices. A shortest augmenting path with respect to a matching $M$ is an augmenting path with a minimum number of edges.
Given two sets $A$ and $B$, the **_symmetric difference_** $A \oplus B$ is defined as $(A - B) \cup (B - A)$, that is, the elements that are in exactly one of the two sets.
**a.** Show that if $M$ is a matching and $P$ is an augmenting path with respect to $M$, then the symmetric difference $M \oplus P$ is a matching and $|M \oplus P| = |M| + 1$. Show that if $P_1, P_2, \ldots, P_k$ are vertex-disjoint augmenting paths with respect to $M$, then the symmetric difference $M \oplus (P_1 \cup P_2 \cup \cdots \cup P_k)$ is a matching with cardinality $|M| + k$.
The general structure of our algorithm is the following:
```cpp
HOPCROFT-KARP(G)
    M = Ø
    repeat
        let P = {P[1], P[2], ..., P[k]} be a maximal set of vertex-disjoint shortest augmenting paths with respect to M
        M = M ⨁ (P[1] ∪ P[2] ∪ ... ∪ P[k])
    until P == Ø
    return M
```
The remainder of this problem asks you to analyze the number of iterations in the algorithm (that is, the number of iterations in the **repeat** loop) and to describe an implementation of line 3.
**b.** Given two matchings $M$ and $M^\*$ in $G$, show that every vertex in the graph $G' = (V, M \oplus M^\*)$ has degree at most $2$. Conclude that $G'$ is a disjoint union of simple paths or cycles. Argue that edges in each such simple path or cycle belong alternately to $M$ or $M^\*$. Prove that if $|M| \le |M^\*|$, then $M \oplus M^\*$ contains at least $|M^\*| - |M|$ vertex-disjoint augmenting paths with respect to $M$.
Let $l$ be the length of a shortest augmenting path with respect to a matching $M$, and let $P_1, P_2, \ldots, P_k$ be a maximal set of vertex-disjoint augmenting paths of length $l$ with respect to $M$. Let $M' = M \oplus (P_1 \cup \cdots \cup P_k)$, and suppose that $P$ is a shortest augmenting path with respect to $M'$.
**c.** Show that if $P$ is vertex-disjoint from $P_1, P_2, \ldots, P_k$ , then $P$ has more than $l$ edges.
**d.** Now suppose that $P$ is not vertex-disjoint from $P_1, P_2, \ldots, P_k$ . Let $A$ be the set of edges $(M \oplus M') \oplus P$. Show that $A = (P_1 \cup P_2 \cup \cdots \cup P_k) \oplus P$ and that $|A| \ge (k + 1)l$. Conclude that $P$ has more than $l$ edges.
**e.** Prove that if a shortest augmenting path with respect to $M$ has $l$ edges, the size of the maximum matching is at most $|M| + |V| / (l + 1)$.
**f.** Show that the number of **repeat** loop iterations in the algorithm is at most $2\sqrt{|V|}$. ($\textit{Hint:}$ By how much can $M$ grow after iteration number $\sqrt{|V|}$?)
**g.** Give an algorithm that runs in $O(E)$ time to find a maximal set of vertex-disjoint shortest augmenting paths $P_1, P_2, \ldots, P_k$ for a given matching $M$. Conclude that the total running time of $\text{HOPCROFT-KARP}$ is $O(\sqrt V E)$.
|
**a.** Suppose $M$ is a matching and $P$ is an augmenting path with respect to $M$. Then $P$ consists of $k$ edges in $M$, and $k + 1$ edges not in $M$. This is because the first edge of $P$ touches an unmatched vertex in $L$, so it cannot be in $M$. Similarly, the last edge in $P$ touches an unmatched vertex in $R$, so the last edge cannot be in $M$. Since the edges alternate being in or not in $M$, there must be exactly one more edge not in $M$ than in $M$. This implies that
$$|M \oplus P| = |M| + |P| - 2k = |M| + 2k + 1 - 2k = |M| + 1,$$
since we must remove each edge of $M$ which is in $P$ from both $M$ and $P$. Now suppose $P_1, P_2, \ldots, P_k$ are vertex-disjoint augmenting paths with respect to $M$. Let $k_i$ be the number of edges in $P_i$ which are in $M$, so that $|P_i| = 2k_i + 1$. Then we have
$$|M \oplus (P_1 \cup P_2 \cup \cdots \cup P_k)| = |M| + |P_1| + \cdots + |P_k| - 2k_1 - 2k_2 - \cdots - 2k_k = |M| + k.$$
To see that we in fact get a matching, suppose that there was some vertex $v$ which had at least $2$ incident edges $e$ and $e'$. They cannot both come from $M$, since $M$ is a matching. They cannot both come from $P$ since $P$ is simple and every other edge of $P$ is removed. Thus, $e \in M$ and $e' \in P \backslash M$. However, if $e \in M$ then $e \in P$, so $e \notin M \oplus P$, a contradiction. A similar argument gives the case of $M \oplus (P_1 \cup \cdots \cup P_k)$.
**b.** Suppose some vertex in $G'$ has degree at least $3$. Since the edges of $G'$ come from $M \oplus M^\*$, at least $2$ of these edges come from the same matching. However, a matching never contains two edges with the same endpoint, so this is impossible. Thus every vertex has degree at most $2$, so $G'$ is a disjoint union of simple paths and cycles. If edge $(u, v)$ is followed by edge $(z, w)$ in a simple path or cycle then we must have $v = z$. Since two edges with the same endpoint cannot appear in a matching, they must belong alternately to $M$ and $M^\*$. Since edges alternate, every cycle has the same number of edges in each matching and every path has at most one more edge in one matching than in the other. Thus, if $|M| \le |M^\*|$ there must be at least $|M^\*| - |M|$ vertex-disjoint augmenting paths with respect to $M$.
**c.** Every vertex matched by $M$ must be incident with some edge in $M'$. Since $P$ is augmenting with respect to $M'$, the left endpoint of the first edge of $P$ is not touched by any edge in $M'$. In particular, $P$ starts at a vertex in $L$ which is unmatched by $M$, since every vertex matched by $M$ is incident with an edge in $M'$. Since $P$ is vertex-disjoint from $P_1, P_2, \ldots, P_k$, any edge of $P$ which is in $M'$ must in fact be in $M$, and any edge of $P$ which is not in $M'$ cannot be in $M$. Since $P$ has edges alternately in $M'$ and $E - M'$, $P$ must in fact have edges alternately in $M$ and $E - M$. Finally, the last edge of $P$ must be incident to a vertex in $R$ which is unmatched by $M'$. Any vertex unmatched by $M'$ is also unmatched by $M$, so $P$ is an augmenting path for $M$. $P$ must have length at least $l$ since $l$ is the length of the shortest augmenting path with respect to $M$. If $P$ had length exactly $l$, then this would contradict the fact that $P_1 \cup \cdots \cup P_k$ is a maximal set of vertex-disjoint paths of length $l$, because we could add $P$ to the set. Thus $P$ has more than $l$ edges.
**d.** Any edge in $M \oplus M'$ is in exactly one of $M$ or $M'$. Thus, the only possible contributing edges from $M'$ are from $P_1 \cup \cdots \cup P_k$. An edge from $M$ can contribute if and only if it is not in exactly one of $M$ and $P_1 \cup \cdots \cup P_k$, which means it must be in both. Thus, the edges from $M$ are redundant so $M \oplus M' = (P_1 \cup \cdots \cup P_k)$ which implies $A = (P_1 \cup \cdots \cup P_k) \oplus P$.
Now we'll show that $P$ is edge disjoint from each $P_i$. Suppose that an edge $e$ of $P$ is also an edge of $P_i$ for some $i$. Since $P$ is an augmenting path with respect to $M'$ either $e \in M'$ or $e \in E - M'$. Suppose $e \in M'$. Since $P$ is also augmenting with respect to $M$, we must have $e \in M$. However, if $e$ is in $M$ and $M'$, then $e$ cannot be in any of the $P_i$'s by the definition of $M'$. Now suppose $e \in E - M'$. Then $e \in E - M$ since $P$ is augmenting with respect to $M$. Since $e$ is an edge of $P_i$, $e \in E - M'$ implies that $e \in M$, a contradiction.
Since $P$ has edges alternately in $M'$ and $E - M'$ and is edge-disjoint from $P_1 \cup \cdots \cup P_k$, $P$ is also an augmenting path for $M$, which implies $|P| \ge l$. Since $P$ and the paths $P_1, \ldots, P_k$ are pairwise edge-disjoint, $|A| = |P_1| + \cdots + |P_k| + |P| \ge (k + 1)l$.
**e.** Suppose $M^\*$ is a matching with strictly more than $|M| + |V| / (l + 1)$ edges. By part (b) there are strictly more than $|V| / (l + 1)$ vertex-disjoint augmenting paths with respect to $M$. Each one of these contains at least $l$ edges, so it is incident on at least $l + 1$ vertices. Since the paths are vertex-disjoint, they are incident on strictly more than $(l + 1) \cdot |V| / (l + 1) = |V|$ distinct vertices, a contradiction. Thus, the size of the maximum matching is at most $|M| + |V| / (l + 1)$.
**f.** Consider what happens after iteration number $\sqrt{|V|}$. Let $M^\*$ be a maximum matching in $G$. Then $|M^\*| \ge |M|$, so by part (b), $M \oplus M^\*$ contains at least $|M^\*| - |M|$ vertex-disjoint augmenting paths with respect to $M$. By parts \(c\) and (d), after $\sqrt{|V|}$ iterations every augmenting path with respect to $M$ has length at least $\sqrt{|V|}$, so each of these paths uses more than $\sqrt{|V|}$ vertices. Since they are vertex-disjoint, there can be at most $\sqrt{|V|}$ such paths, so $|M^\*| - |M| \le \sqrt{|V|}$. Each remaining iteration increases $|M|$ by at least $1$, so only $\sqrt{|V|}$ additional iterations of the repeat loop can occur, and there are at most $2\sqrt{|V|}$ iterations in total.
**g.** For each unmatched vertex in $L$ we can perform a modified $\text{BFS}$ to find the length of the shortest path to an unmatched vertex in $R$. Modify the $\text{BFS}$ to ensure that we only traverse an edge if it causes the path to alternate between an edge in $M$ and an edge in $E - M$. The first time an unmatched vertex in $R$ is reached we know the length $k$ of a shortest augmenting path.
We can use this to stop our search early if at any point we have traversed more than that number of edges. To find disjoint paths, start at the vertices of $R$ which were found at distance $k$ in the $\text{BFS}$. Run a $\text{DFS}$ backwards from these, which maintains the property that the next vertex we pick has distance one fewer, and the edges alternate between being in $M$ and $E - M$. As we build up a path, mark the vertices as used so that we never traverse them again. This takes $O(E)$, so by part (f) the total runtime is $O(\sqrt VE)$.
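Putting parts (a)–(g) together, a runnable sketch of the whole algorithm could look like the following (Python; the names `hopcroft_karp`, `match_l`, `match_r` are illustrative). Each pass of the outer loop does one BFS layering and one DFS sweep, i.e. $O(E)$ work, matching the analysis above.
```python
from collections import deque

INF = float('inf')

def hopcroft_karp(left, adj):
    """Maximum matching in a bipartite graph; adj[u] lists the right-neighbors of u."""
    match_l = {u: None for u in left}
    match_r = {}

    def bfs():
        # Layer the left vertices by shortest alternating distance from an unmatched vertex.
        dist, q, found = {}, deque(), INF
        for u in left:
            if match_l[u] is None:
                dist[u] = 0
                q.append(u)
        while q:
            u = q.popleft()
            if dist[u] >= found:
                continue
            for v in adj[u]:
                w = match_r.get(v)
                if w is None:
                    found = min(found, dist[u] + 1)   # reached an unmatched right vertex
                elif w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return dist, found

    def dfs(u, dist, found):
        # Walk only along shortest alternating layers; mark dead ends so they are not revisited.
        for v in adj[u]:
            w = match_r.get(v)
            if w is None:
                ok = (dist[u] + 1 == found)
            else:
                ok = (dist.get(w) == dist[u] + 1) and dfs(w, dist, found)
            if ok:
                match_l[u], match_r[v] = v, u
                return True
        dist[u] = INF
        return False

    matching = 0
    while True:
        dist, found = bfs()
        if found == INF:
            return matching, match_l
        for u in left:
            if match_l[u] is None and dfs(u, dist, found):
                matching += 1

# Example: L = {1, 2}; the maximum matching has size 2.
print(hopcroft_karp([1, 2], {1: ['a', 'b'], 2: ['a']})[0])   # 2
```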
27-27.1-1 | 27 | 27.1 | 27.1-1 | docs/Chap27/27.1.md |
Suppose that we spawn $\text{P-FIB}(n - 2)$ in line 4 of $\text{P-FIB}$, rather than calling it as is done in the code. What is the impact on the asymptotic work, span, and parallelism?
|
(Removed)
27-27.1-2 | 27 | 27.1 | 27.1-2 | docs/Chap27/27.1.md |
Draw the computation dag that results from executing $\text{P-FIB}(5)$. Assuming that each strand in the computation takes unit time, what are the work, span, and parallelism of the computation? Show how to schedule the dag on 3 processors using greedy scheduling by labeling each strand with the time step in which it is executed.
|
- Work: $T_1 = 29$.
- Span: $T_\infty = 10$.
- Parallelism: $T_1 / T_\infty \approx 2.9$.
27-27.1-3 | 27 | 27.1 | 27.1-3 | docs/Chap27/27.1.md |
Prove that a greedy scheduler achieves the following time bound, which is slightly stronger than the bound proven in Theorem 27.1:
$$T_P \le \frac{T_1 - T_\infty}{P} + T_\infty. \tag{27.5}$$
|
Suppose that there are $x$ incomplete steps in a run of the program. Since each of these steps causes at least one unit of work to be done, at most $T_1 - x$ units of work are done in the complete steps. Then, we suppose by contradiction that the number of complete steps is strictly greater than $\lfloor (T_1 - x) / P \rfloor$. Then, we have that the total amount of work done during the complete steps is at least
$$P \cdot (\lfloor (T_1 - x) / P \rfloor + 1) = P \lfloor (T_1 - x) / P \rfloor + P = (T_1 - x) - ((T_1 - x) \bmod P) + P > T_1 - x.$$
This is a contradiction because there are only $T_1 - x$ units of work done during complete steps, which is less than the amount we would be doing. Notice that since $T_\infty$ is a bound on the total number of both kinds of steps, it is a bound on the number of incomplete steps, $x$, so,
$$T_P \le \lfloor (T_1 - x) / P \rfloor + x \le \lfloor (T_1 - T_\infty) / P \rfloor + T_\infty.$$
The second inequality comes from noting that the middle expression, as a function of $x$, is monotonically increasing (for $P \ge 1$), and so is bounded by its value at the largest possible $x$, namely $T_\infty$.
27-27.1-4 | 27 | 27.1 | 27.1-4 | docs/Chap27/27.1.md |
Construct a computation dag for which one execution of a greedy scheduler can take nearly twice the time of another execution of a greedy scheduler on the same number of processors. Describe how the two executions would proceed.
|
Consider the following computation dag: let vertex $u$ have degree $k$, with each of its $k$ children beginning a vertical chain of $m$ vertices, and let a vertex $v$ begin a separate vertical chain of $m$ vertices. Assume that this is executed on $k$ processors. In one execution, one strand from each of the $k$ left chains is executed concurrently on every time step, and then the $m$ strands on the right are executed one at a time. If each strand takes unit time to execute, then the total computation takes $2m$ time. On the other hand, suppose that on each time step of the computation, $k - 1$ strands from the left (descendants of $u$) are executed, and one from the right (a descendant of $v$) is executed. If each strand takes unit time to execute, the total computation takes $m + m / k$. Thus, the ratio of times is $2m / (m + m / k) = 2 / (1 + 1 / k)$. As $k$ gets large, this approaches $2$ as desired.
27-27.1-5 | 27 | 27.1 | 27.1-5 | docs/Chap27/27.1.md |
Professor Karan measures her deterministic multithreaded algorithm on $4$, $10$, and $64$ processors of an ideal parallel computer using a greedy scheduler. She claims that the three runs yielded $T_4 = 80$ seconds, $T_{10} = 42$ seconds, and $T_{64} = 10$ seconds. Argue that the professor is either lying or incompetent. ($\textit{Hint:}$ Use the work law $\text{(27.2)}$, the span law $\text{(27.3)}$, and inequality $\text{(27.5)}$ from Exercise 27.1-3.)
|
(Removed)
27-27.1-6 | 27 | 27.1 | 27.1-6 | docs/Chap27/27.1.md |
Give a multithreaded algorithm to multiply an $n \times n$ matrix by an $n$-vector that achieves $\Theta(n^2 / \lg n)$ parallelism while maintaining $\Theta(n^2)$ work.
|
(Removed)
27-27.1-7 | 27 | 27.1 | 27.1-7 | docs/Chap27/27.1.md |
Consider the following multithreaded pseudocode for transposing an $n \times n$ matrix $A$ in place:
```cpp
P-TRANSPOSE(A)
n = A.rows
parallel for j = 2 to n
parallel for i = 1 to j - 1
exchange a[i, j] with a[j, i]
```
Analyze the work, span, and parallelism of this algorithm.
|
(Removed)
27-27.1-8 | 27 | 27.1 | 27.1-8 | docs/Chap27/27.1.md |
Suppose that we replace the **parallel for** loop in line 3 of $\text{P-TRANSPOSE}$ (see Exercise 27.1-7) with an ordinary **for** loop. Analyze the work, span, and parallelism of the resulting algorithm.
|
(Removed)
27-27.1-9 | 27 | 27.1 | 27.1-9 | docs/Chap27/27.1.md |
For how many processors do the two versions of the chess programs run equally fast, assuming that $T_P = T_1 / P + T_\infty$?
|
(Removed)
27-27.2-1 | 27 | 27.2 | 27.2-1 | docs/Chap27/27.2.md |
Draw the computation dag for computing $\text{P-SQUARE-MATRIX-MULTIPLY}$ on $2 \times 2$ matrices, labeling how the vertices in your diagram correspond to strands in the execution of the algorithm. Use the convention that spawn and call edges point downward, continuation edges point horizontally to the right, and return edges point upward. Assuming that each strand takes unit time, analyze the work, span, and parallelism of this computation.
|
(Omit!)
27-27.2-2 | 27 | 27.2 | 27.2-2 | docs/Chap27/27.2.md |
Repeat Exercise 27.2-1 for $\text{P-MATRIX-MULTIPLY-RECURSIVE}$.
|
(Omit!)
27-27.2-3 | 27 | 27.2 | 27.2-3 | docs/Chap27/27.2.md |
Give pseudocode for a multithreaded algorithm that multiplies two $n \times n$ matrices with work $\Theta(n^3)$ but span only $\Theta(\lg n)$. Analyze your algorithm.
|
(Removed)
27-27.2-4 | 27 | 27.2 | 27.2-4 | docs/Chap27/27.2.md |
Give pseudocode for an efficient multithreaded algorithm that multiplies a $p \times q$ matrix by a $q \times r$ matrix. Your algorithm should be highly parallel even if any of $p$, $q$, and $r$ are $1$. Analyze your algorithm.
|
(Removed)
27-27.2-5 | 27 | 27.2 | 27.2-5 | docs/Chap27/27.2.md |
Give pseudocode for an efficient multithreaded algorithm that transposes an $n \times n$ matrix in place by using divide-and-conquer to divide the matrix recursively into four $n / 2 \times n / 2$ submatrices. Analyze your algorithm.
|
```cpp
P-MATRIX-TRANSPOSE(A)
n = A.rows
if n == 1
return
partition A into n / 2 ✕ n / 2 submatrices A11, A12, A21, A22
spawn P-MATRIX-TRANSPOSE(A11)
spawn P-MATRIX-TRANSPOSE(A12)
spawn P-MATRIX-TRANSPOSE(A21)
P-MATRIX-TRANSPOSE(A22)
sync
// exchange A12 with A21
parallel for i = 1 to n / 2
parallel for j = 1 + n / 2 to n
exchange A[i, j] with A[i + n / 2, j - n / 2]
```
- span: $T(n) = T(n / 2) + O(\lg n) = O(\lg^2 n)$.
- work: $T(n) = 4T(n / 2) + O(n^2) = O(n^2\lg n)$.
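A serial Python sketch of the same recursion is given below (the name `transpose_in_place` is illustrative, indices are 0-based, and $n$ is assumed to be a power of $2$); the spawns become ordinary recursive calls and the parallel loops become nested serial loops.
```python
def transpose_in_place(A, r=0, c=0, n=None):
    """Transpose the n x n submatrix of A whose top-left corner is (r, c), in place."""
    if n is None:
        n = len(A)
    if n == 1:
        return
    h = n // 2
    transpose_in_place(A, r,     c,     h)      # A11
    transpose_in_place(A, r,     c + h, h)      # A12
    transpose_in_place(A, r + h, c,     h)      # A21
    transpose_in_place(A, r + h, c + h, h)      # A22
    for i in range(r, r + h):                   # exchange the (transposed) A12 and A21 blocks
        for j in range(c + h, c + n):
            A[i][j], A[i + h][j - h] = A[i + h][j - h], A[i][j]

# Example
M = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
transpose_in_place(M)
print(M)   # rows become columns
```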
27-27.2-6 | 27 | 27.2 | 27.2-6 | docs/Chap27/27.2.md |
Give pseudocode for an efficient multithreaded implementation of the Floyd-Warshall algorithm (see Section 25.2), which computes shortest paths between all pairs of vertices in an edge-weighted graph. Analyze your algorithm.
|
(Removed)
27-27.3-1 | 27 | 27.3 | 27.3-1 | docs/Chap27/27.3.md |
Explain how to coarsen the base case of $\text{P-MERGE}$.
|
Replace the base-case check with a test of whether $n_1 < k$ for some base-case size $k$. And instead of just copying over the particular element of $T$ to the right spot in $A$, call a serial merge (or sort) on the remaining segments of $T$ and copy the result into the right spots in $A$.
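One way the coarsened base case could look is sketched below (Python; `SERIAL_CUTOFF` is an illustrative threshold). Since the two ranges of $T$ are already sorted, a plain serial merge into $A$ suffices here.
```python
SERIAL_CUTOFF = 64        # illustrative base-case size k

def p_merge_base_case(T, p1, r1, p2, r2, A, p3):
    """Coarsened base case: once the subproblem has fewer than SERIAL_CUTOFF elements,
    serially merge T[p1..r1] and T[p2..r2] into A starting at index p3 (inclusive indices)."""
    i, j, k = p1, p2, p3
    while i <= r1 and j <= r2:
        if T[i] <= T[j]:
            A[k] = T[i]; i += 1
        else:
            A[k] = T[j]; j += 1
        k += 1
    while i <= r1:
        A[k] = T[i]; i += 1; k += 1
    while j <= r2:
        A[k] = T[j]; j += 1; k += 1

# Example: T[0..2] and T[3..5] are each sorted.
T = [1, 4, 9, 2, 3, 8]
A = [None] * 6
p_merge_base_case(T, 0, 2, 3, 5, A, 0)
print(A)   # [1, 2, 3, 4, 8, 9]
```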
27-27.3-2 | 27 | 27.3 | 27.3-2 | docs/Chap27/27.3.md |
Instead of finding a median element in the larger subarray, as $\text{P-MERGE}$ does, consider a variant that finds a median element of all the elements in the two sorted subarrays using the result of Exercise 9.3-8. Give pseudocode for an efficient multithreaded merging procedure that uses this median-finding procedure. Analyze your algorithm.
|
By a slight modification of exercise 9.3-8 we can find the median of all elements in two sorted arrays of total length $n$ in $O(\lg n)$ time. We'll modify $\text{P-MERGE}$ to use this fact. Let $\text{MEDIAN}(T, p_1, r_1, p_2, r_2)$ be the function which returns a pair, $q$, where $q.pos$ is the position of the median of all the elements of $T$ which lie between positions $p_1$ and $r_1$, and between positions $p_2$ and $r_2$, and $q.arr$ is $1$ if the position is between $p_1$ and $r_1$, and $2$ otherwise.
```cpp
P-MEDIAN-MERGE(T, p[1], r[1], p[2], r[2], A, p[3])
n[1] = r[1] - p[1] + 1
n[2] = r[2] - p[2] + 1
if n[1] < n[2] // ensure that n[1] ≥ n[2]
exchange p[1] with p[2]
exchange r[1] with r[2]
exchange n[1] with n[2]
if n[1] == 0 // both empty?
return
q = MEDIAN(T, p[1], r[1], p[2], r[2])
if q.arr == 1
q[2] = BINARY-SEARCH(T[q.pos], T, p[2], r[2])
q[3] = p[3] + q.pos - p[1] + q[2] - p[2]
A[q[3]] = T[q.pos]
spawn P-MEDIAN-MERGE(T, p[1], q.pos - 1, p[2], q[2] - 1, A, p[3])
P-MEDIAN-MERGE(T, q.pos + 1, r[1], q[2] + 1, r[2], A, p[3])
sync
else
q[2] = BINARY-SEARCH(T[q.pos], T, p[1], r[1])
q[3] = p[3] + q.pos - p[2] + q[2] - p[1]
A[q[3]] = T[q.pos]
spawn P-MEDIAN-MERGE(T, p[1], q[2] - 1, p[2], q.pos - 1, A, p[3])
P-MEDIAN-MERGE(T, q[2] + 1, r[1], q.pos + 1, r[2], A, p[3])
sync
```
The work is characterized by the recurrence $T_1(n) = O(\lg n) + 2T_1(n / 2)$, whose solution tells us that $T_1(n) = O(n)$. The work is at least $\Omega(n)$ since we need to examine each element, so the work is $\Theta(n)$. The span satisfies the recurrence
$$
\begin{aligned}
T_\infty(n) & = O(\lg n) + O(\lg n / 2) + T_\infty(n / 2) \\\\
& = O(\lg n) + T_\infty(n / 2) \\\\
& = \Theta(\lg^2 n),
\end{aligned}
$$
by exercise 4.6-2.
27-27.3-3 | 27 | 27.3 | 27.3-3 | docs/Chap27/27.3.md |
Give an efficient multithreaded algorithm for partitioning an array around a pivot, as is done by the $\text{PARTITION}$ procedure on page 171. You need not partition the array in place. Make your algorithm as parallel as possible. Analyze your algorithm. ($\textit{Hint:}$ You may need an auxiliary array and may need to make more than one pass over the input elements.)
|
Suppose that there are $c$ different processors, the array has length $n$, and we use its last element as the pivot. Break the entries before the last element into chunks of size $\lceil \frac{n}{c} \rceil$ and give one chunk to each processor. Each processor counts the number of elements in its chunk that are less than the pivot. We then compute all of the running (prefix) sums of these counts. This can be done by placing the per-chunk counts at the leaves of an almost balanced binary tree and letting each internal node be the sum of its two children; computing these sums takes time $\lg(\min\\{c, n\\})$, the depth of the tree. From there, we can compute the prefix sums for all of the chunks, also in logarithmic time, by keeping track of the sum over all left cousins of each internal node: the left-cousin value of a node is found by adding its left sibling's sum value to the left-cousin value of its parent, with the root's left-cousin value initialized to $0$. This again takes time proportional to the depth of the tree, so $\lg(\min\\{c, n\\})$. The prefix sum at a chunk is the index at which that chunk's elements that are less than the pivot should be written; the chunk's elements that are greater than or equal to the pivot are written analogously, starting at the total number of smaller elements (the sum at the root) plus the prefix sum of the greater-or-equal counts, which can be computed the same way. Each processor then writes out its chunk in time $O(\frac{n}{c})$. By doing this procedure, the total work is just $O(n)$, and the span is $O(\lg n)$, and so the parallelization is $O(\frac{n}{\lg n})$.
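A serial Python simulation of this scheme is sketched below (illustrative names; the loops over chunks stand in for work the processors would do concurrently, and the simple running-sum scan stands in for the tree-based prefix computation).
```python
def partition_by_chunks(A, num_chunks=4):
    """Partition A around its last element using per-chunk counts and prefix sums.
    Returns the partitioned array (not in place) and the pivot's final index."""
    pivot = A[-1]
    body = A[:-1]
    size = -(-len(body) // num_chunks) or 1                 # ceil(len / num_chunks)
    chunks = [body[i:i + size] for i in range(0, len(body), size)]

    less_counts = [sum(1 for x in ch if x < pivot) for ch in chunks]   # parallel counting
    geq_counts = [len(ch) - lc for ch, lc in zip(chunks, less_counts)]

    def exclusive_prefix(counts):                           # stands in for the tree-based scan
        out, running = [], 0
        for cnt in counts:
            out.append(running)
            running += cnt
        return out, running

    less_off, total_less = exclusive_prefix(less_counts)
    geq_off, _ = exclusive_prefix(geq_counts)

    out = [None] * len(A)
    for ch, lo, go in zip(chunks, less_off, geq_off):       # parallel scatter
        li, gi = lo, total_less + 1 + go                    # region after the pivot slot
        for x in ch:
            if x < pivot:
                out[li] = x; li += 1
            else:
                out[gi] = x; gi += 1
    out[total_less] = pivot                                 # pivot lands at its final index
    return out, total_less

print(partition_by_chunks([9, 3, 7, 1, 8, 2, 5]))           # ([3, 1, 2, 5, 9, 7, 8], 3)
```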
27-27.3-4 | 27 | 27.3 | 27.3-4 | docs/Chap27/27.3.md |
Give a multithreaded version of $\text{RECURSIVE-FFT}$ on page 911. Make your implementation as parallel as possible. Analyze your algorithm.
|
```cpp
P-RECURSIVE-FFT(a)
n = a.length
if n == 1
return a
w[n] = e^{2 * π * i / n}
a(0) = [a[0], a[2]..a[n - 2]]
a(1) = [a[1], a[3]..a[n - 1]]
y(0) = spawn P-RECURSIVE-FFT(a[0])
y(1) = P-RECURSIVE-FFT(a[1])
sync
    parallel for k = 0 to n / 2 - 1
        w = w[n]^k
        y[k] = y[k](0) + w * y[k](1)
        y[k + n / 2] = y[k](0) - w * y[k](1)
return y
```
$\text{P-RECURSIVE-FFT}$ is parallelized over the two recursive calls, and the **parallel for** works because each iteration computes its own twiddle factor $w_n^k$ and so touches an independent set of variables. The span satisfies $T_\infty(n) = T_\infty(n / 2) + \Theta(\lg n) = \Theta(\lg^2 n)$ and the work satisfies $T_1(n) = 2T_1(n / 2) + \Theta(n) = \Theta(n\lg n)$, giving a parallelization of $\Theta(n / \lg n)$.
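A serial Python sketch of the recursion (the name `recursive_fft` is illustrative and $n$ is assumed to be a power of $2$); computing the twiddle factor as $w_n^k$ inside the loop is exactly what makes the iterations independent of one another.
```python
import cmath

def recursive_fft(a):
    """Serial sketch of P-RECURSIVE-FFT; len(a) must be a power of 2."""
    n = len(a)
    if n == 1:
        return list(a)
    wn = cmath.exp(2j * cmath.pi / n)
    y0 = recursive_fft(a[0::2])        # even-indexed coefficients (spawned in the parallel version)
    y1 = recursive_fft(a[1::2])        # odd-indexed coefficients
    y = [0] * n
    for k in range(n // 2):            # parallel for in the multithreaded version
        w = wn ** k                    # each iteration computes its own twiddle factor
        y[k] = y0[k] + w * y1[k]
        y[k + n // 2] = y0[k] - w * y1[k]
    return y

print([round(abs(v), 6) for v in recursive_fft([1, 1, 1, 1])])   # [4.0, 0.0, 0.0, 0.0]
```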
27-27.3-5 | 27 | 27.3 | 27.3-5 $\star$ | docs/Chap27/27.3.md |
Give a multithreaded version of $\text{RANDOMIZED-SELECT}$ on page 216. Make your implementation as parallel as possible. Analyze your algorithm. ($\textit{Hint:}$ Use the partitioning algorithm from Exercise 27.3-3.)
|
Randomly pick a pivot element and swap it with the last element, so that the array is in the correct format for running the procedure described in exercise 27.3-3. Run the partition procedure from that exercise. As an intermediate step in that procedure, we compute the number of elements less than the pivot ($T.root.sum$), so keep track of that value after the end of the partition. Then, if it is less than $k$, recurse on the subarray that was greater than or equal to the pivot, decreasing the order statistic of the element to be selected by $T.root.sum$. If it is larger than the order statistic of the element to be selected, then leave that order statistic unchanged and recurse on the subarray that was formed to be less than the pivot. A lot of the analysis in section 9.2 still applies, except replacing the time needed for partitioning with the runtime of the algorithm from exercise 27.3-3. The work is unchanged from the serial case because when $c = 1$, the algorithm reduces to the serial algorithm for partitioning. For span, the $O(n)$ term in the equation halfway down page 218 can be replaced with an $O(\lg n)$ term. It can be seen with the substitution method that the solution to this is logarithmic
$$E[T(n)] \le \frac{2}{n} \sum_{k = \lfloor n / 2 \rfloor}^{n - 1} C\lg k + O(\lg n) \le O(\lg n).$$
So, the total span of this algorithm will still just be $O(\lg n)$.
27-27.3-6 | 27 | 27.3 | 27.3-6 $\star$ | docs/Chap27/27.3.md |
Show how to multithread $\text{SELECT}$ from Section 9.3. Make your implementation as parallel as possible. Analyze your algorithm.
|
Let $\text{MEDIAN}(A)$ denote a brute force method which returns the median element of the array $A$. We will only use this to find the median of small arrays, in particular, those of size at most $5$, so it will always run in constant time. We also let $A[i..j]$ denote the array whose elements are $A[i], A[i + 1], \ldots, A[j]$. The function $\text{P-PARTITION}(A, x)$ is a multithreaded function which partitions $A$ around the input element $x$ and returns the number of elements in $A$ which are less than or equal to $x$. Using a parallel **for** loop, its span is logarithmic in the number of elements in $A$. The work is the same as the serialization, which is $\Theta(n)$ according to section 9.3. The span satisfies the recurrence
$$
\begin{aligned}
T_\infty(n) & = \Theta(\lg(n / 5)) + T_\infty(n / 5) + \Theta(\lg n) + T_\infty(7n / 10 + 6) \\\\
& \le \Theta(\lg n) + T_\infty(n / 5) + T_\infty(7n / 10 + 6).
\end{aligned}
$$
Using the substitution method we can show that $T_\infty(n) = O(n^\epsilon)$ for some $\epsilon < 1$. In particular, $\epsilon = 0.9$ works. This gives a parallelization of $\Omega(n^{0.1})$.
```cpp
P-SELECT(A, i)
    n = A.length
    if n == 1
        return A[1]
    let T[1..ceil(n / 5)] be a new array
    parallel for j = 0 to floor(n / 5) - 1
        T[j + 1] = MEDIAN(A[5 * j + 1..5 * j + 5])
    if n mod 5 != 0
        T[ceil(n / 5)] = MEDIAN(A[5 * floor(n / 5) + 1..n])
    x = P-SELECT(T, ceil(n / 10))
    k = P-PARTITION(A, x)
    if k == i
        return x
    else if i < k
        return P-SELECT(A[1..k - 1], i)
    else
        return P-SELECT(A[k + 1..n], i - k)
```
27-27-1 | 27 | 27-1 | 27-1 | docs/Chap27/Problems/27-1.md |
Consider the following multithreaded algorithm for performing pairwise addition on $n$-element arrays $A[1..n]$ and $B[1..n]$, storing the sums in $C[1..n]$:
```cpp
SUM-ARRAYS(A, B, C)
parallel for i = 1 to A.length
C[i] = A[i] + B[i]
```
**a.** Rewrite the parallel loop in $\text{SUM-ARRAYS}$ using nested parallelism (**spawn** and **sync**) in the manner of $\text{MAT-VEC-MAIN-LOOP}$. Analyze the parallelism of your implementation.
Consider the following alternative implementation of the parallel loop, which contains a value $\textit{grain-size}$ to be specified:
```cpp
SUM-ARRAYS'(A, B, C)
n = A.length
grain-size = ? // to be determined
r = ceil(n / grain-size)
for k = 0 to r - 1
spawn ADD-SUBARRAY(A, B, C, k * grain-size + 1, min((k + 1) * grain-size, n))
sync
```
```cpp
ADD-SUBARRAY(A, B, C, i, j)
for k = i to j
C[k] = A[k] + B[k]
```
**b.** Suppose that we set $\textit{grain-size} = 1$. What is the parallelism of this implementation?
**c.** Give a formula for the span of $\text{SUM-ARRAYS}'$ in terms of $n$ and $\textit{grain-size}$. Derive the best value for $\textit{grain-size}$ to maximize parallelism.
|
**a.** See the recursive $\text{SUM-ARRAYS}(A, B, C)$ below. Its work satisfies $T_1(n) = 2T_1(n / 2) + \Theta(1) = \Theta(n)$ and its span satisfies $T_\infty(n) = T_\infty(n / 2) + \Theta(1) = \Theta(\lg n)$, so the parallelism is $\Theta(n / \lg n)$.
**b.** If $\textit{grain-size}$ is $1$, each call of $\text{ADD-SUBARRAY}$ just sums a single pair of numbers. Since the **for** loop on line 4 runs $n$ times and the spawns are issued serially, both the span and the work are $O(n)$. So, the parallelism is just $O(1)$.
```cpp
SUM-ARRAYS(A, B, C)
n = floor(A.length / 2)
if n == 0
C[1] = A[1] + B[1]
else
spawn SUM-ARRAYS(A[1..n], B[1..n], C[1..n])
        SUM-ARRAYS(A[n + 1..A.length], B[n + 1..A.length], C[n + 1..A.length])
        sync
```
**c.** Let $g$ be the grain-size. The runtime of the function that spawns all the other functions is $\left\lceil \frac{n}{g} \right\rceil$. The runtime of any particular spawned task is $g$. So, we want to minimize
$$\frac{n}{g} + g.$$
To do this we pull out our freshman calculus hat, take the derivative with respect to $g$, and set it to zero:
$$0 = 1 − \frac{n}{g^2}.$$
Solving this gives $g = \sqrt n$. This minimizes the quantity and makes the span $O(n / g + g) = O(\sqrt n)$, resulting in a parallelism of $O(\sqrt n)$.
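A tiny numerical check of this choice (illustrative Python, constant factors ignored): evaluating $\lceil n / g \rceil + g$ over all grain sizes shows the minimum is attained at $g \approx \sqrt n$.
```python
from math import ceil, isqrt

def span(n, g):
    return ceil(n / g) + g   # serial spawning loop plus one serial ADD-SUBARRAY

n = 10**4
best = min(range(1, n + 1), key=lambda g: span(n, g))
print(best, isqrt(n), span(n, best))   # 100 100 200
```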
27-27-2 | 27 | 27-2 | 27-2 | docs/Chap27/Problems/27-2.md |
The $\text{P-MATRIX-MULTIPLY-RECURSIVE}$ procedure has the disadvantage that it must allocate a temporary matrix $T$ of size $n \times n$, which can adversely affect the constants hidden by the $\Theta$-notation. The $\text{P-MATRIX-MULTIPLY-RECURSIVE}$ procedure does have high parallelism, however. For example, ignoring the constants in the $\Theta$-notation, the parallelism for multiplying $1000 \times 1000$ matrices comes to approximately $1000^3 / 10^2 = 10^7$, since $\lg 1000 \approx 10$. Most parallel computers have far fewer than 10 million processors.
**a.** Describe a recursive multithreaded algorithm that eliminates the need for the temporary matrix $T$ at the cost of increasing the span to $\Theta(n)$. ($\textit{Hint:}$ Compute $C = C + AB$ following the general strategy of $\text{P-MATRIX-MULTIPLY-RECURSIVE}$, but initialize $C$ in parallel and insert a sync in a judiciously chosen location.)
**b.** Give and solve recurrences for the work and span of your implementation.
**c.** Analyze the parallelism of your implementation. Ignoring the constants in the $\Theta$-notation, estimate the parallelism on $1000 \times 1000$ matrices. Compare with the parallelism of $\text{P-MATRIX-MULTIPLY-RECURSIVE}$.
|
(Removed)
27-27-3 | 27 | 27-3 | 27-3 | docs/Chap27/Problems/27-3.md |
**a.** Parallelize the $\text{LU-DECOMPOSITION}$ procedure on page 821 by giving pseudocode for a multithreaded version of this algorithm. Make your implementation as parallel as possible, and analyze its work, span, and parallelism.
**b.** Do the same for $\text{LUP-DECOMPOSITION}$ on page 824.
**c.** Do the same for $\text{LUP-SOLVE}$ on page 817.
**d.** Do the same for a multithreaded algorithm based on equation $\text{(28.13)}$ for inverting a symmetric positive-definite matrix.
|
**a.** For the algorithm $\text{LU-DECOMPOSITION}(A)$ on page 821, the inner **for** loops can be parallelized, since they never update values that are read on later runs of those loops. However, the outermost **for** loop cannot be parallelized because across iterations of it the changes to the matrices from previous runs are used to affect the next. This means that the span will be $\Theta(n \lg n)$, work will still be $\Theta(n^3)$ and, so, the parallelization will be $\Theta(\frac{n^3}{n\lg n}) = \Theta(\frac{n^2}{\lg n})$.
**b.** The **for** loop on lines 7-10 takes the max of a set of values while recording the index at which that max occurs. This **for** loop can therefore be replaced with a $\lg n$-span parallelized procedure in which we arrange the $n$ elements into the leaves of an almost balanced binary tree, and we let each internal node be the max of its two children; the span is then just the depth of this tree. This procedure can gracefully scale with the number of processors, and even if its cost is $\Theta(n\lg n)$ it will be less than the $\Theta(n^2)$ work that comes later. The **for** loop on lines 14-15 and the implicit **for** loop on line 15 have no concurrent editing, and so can be made parallel with a span of $\lg n$. While the **for** loop on lines 18-19 can be made parallel, the one containing it cannot without creating data races. Therefore, the total span of the naive parallelized algorithm will be $\Theta(n^2\lg n)$, with a work of $\Theta(n^3)$. So, the parallelization will be $\Theta(\frac{n}{\lg n})$. Not as parallelized as part (a), but still a significant improvement.
**c.** We can parallelize the computing of the sums on lines 4 and 6, but cannot also parallelize the **for** loops containing them without creating an issue of concurrently modifying data that we are reading. This means that the span will be $\Theta(n\lg n)$, the work will still be $\Theta(n^2)$, and so the parallelization will be $\Theta(\frac{n}{\lg n})$.
**d.** The recurrence governing the amount of work of implementing this procedure is given by
$$I(n) \le 2I(n / 2) + 4M(n / 2) + O(n^2).$$
However, the two inversions that we need to do are independent, and the span of parallelized matrix multiply is just $O(\lg n)$. Also, the $n^2$ work of having to take a transpose and subtract and add matrices has a span of only $O(\lg n)$. Therefore, the span satisfies the recurrence
$$I_\infty(n) \le I_\infty(n / 2) + O(\lg n).$$
This recurrence has the solution $I_\infty(n) \in \Theta(\lg^2 n)$ by exercise 4.6-2, and this is therefore the span of the inversion algorithm obtained from the procedure detailed on page 830. This makes its parallelization equal to $\Theta(M(n) / \lg^2 n)$, where $M(n)$ is the time to compute matrix products.
27-27-4 | 27 | 27-4 | 27-4 | docs/Chap27/Problems/27-4.md |
A **_$\otimes$-reduction_** of an array $x[1 \ldots n]$, where $\otimes$ is an associative operator, is the value
$$y = x[1] \otimes x[2] \otimes \cdots \otimes x[n].$$
The following procedure computes the $\otimes$-reduction of a subarray $x[i \ldots j]$ serially.
```cpp
REDUCE(x, i, j)
y = x[i]
for k = i + 1 to j
y = y ⊗ x[k]
return y
```
**a.** Use nested parallelism to implement a multithreaded algorithm $\text{P-REDUCE}$, which performs the same function with $\Theta(n)$ work and $\Theta(\lg n)$ span. Analyze your algorithm.
A related problem is that of computing a **_$\otimes$-prefix computation_**, sometimes called a **_$\otimes$-scan_**, on an array $x[1 \ldots n]$, where $\otimes$ is once again an associative operator. The $\otimes$-scan produces the array $y[1 \ldots n]$ given by
$$
\begin{aligned}
y[1] & = x[1], \\\\
y[2] & = x[1] \otimes x[2], \\\\
y[3] & = x[1] \otimes x[2] \otimes x[3], \\\\
& \vdots \\\\
y[n] & = x[1] \otimes x[2] \otimes x[3] \otimes \cdots \otimes x[n],
\end{aligned}
$$
that is, all prefixes of the array $x$ "summed" using $\otimes$ operator. The following serial procedure $\text{SCAN}$ performs a $\otimes$-prefix computation:
```cpp
SCAN(x)
n = x.length
let y[1..n] be a new array
y[1] = x[1]
for i = 2 to n
y[i] = y[i - 1] ⊗ x[i]
return y
```
Unfortunately, multithreading $\text{SCAN}$ is not straightforward. For example, changing the **for** loop to a **parallel for** loop would create races, since each iteration of the loop body depends on the previous iteration. The following procedure $\text{P-SCAN-1}$ performs the $\otimes$-prefix computation in parallel, albeit inefficiently.
```cpp
P-SCAN-1(x)
n = x.length
let y[1..n] be a new array
P-SCAN-1-AUX(x, y, 1, n)
return y
```
```cpp
P-SCAN-1-AUX(x, y, i, j)
parallel for l = i to j
y[l] = P-REDUCE(x, 1, l)
```
**b.** Analyze the work, span, and parallelism of $\text{P-SCAN-1}$.
By using nested parallelism, we can obtain a more efficient $\otimes$-prefix computation:
```cpp
P-SCAN-2(x)
n = x.length
let y[1..n] be a new array
P-SCAN-2-AUX(x, y, 1, n)
return y
```
```cpp
P-SCAN-2-AUX(x, y, i, j)
if i == j
y[i] = x[i]
else k = floor((i + j) / 2)
spawn P-SCAN-2-AUX(x, y, i, k)
P-SCAN-2-AUX(x, y, k + 1, j)
sync
parallel for l = k + 1 to j
y[l] = y[k] ⊗ y[l]
```
**c.** Argue that $\text{P-SCAN-2}$ is correct, and analyze its work, span, and parallelism.
We can improve on both $\text{P-SCAN-1}$ and $\text{P-SCAN-2}$ by performing the $\otimes$-prefix computation in two distinct passes over the data. On the first pass, we gather the terms for various contiguous subarrays of $x$ into a temporary array $t$, and on the second pass we use the terms in $t$ to compute the final result $y$. The following pseudocode implements this strategy, but certain expressions have been omitted:
```cpp
P-SCAN-3(x)
n = x.length
let y[1..n] and t[1..n] be new arrays
y[1] = x[1]
if n > 1
P-SCAN-UP(x, t, 2, n)
P-SCAN-DOWN(x[1], x, t, y, 2, n)
return y
```
```cpp
P-SCAN-UP(x, t, i, j)
if i == j
return x[i]
else
k = floor((i + j) / 2)
t[k] = spawn P-SCAN-UP(x, t, i, k)
right = P-SCAN-UP(x, t, k + 1, j)
sync
return ____ // fill in the blank
```
```cpp
P-SCAN-DOWN(v, x, t, y, i, j)
if i == j
y[i] = v ⊗ x[i]
else
k = floor((i + j) / 2)
spawn P-SCAN-DOWN(____, x, t, y, i, k) // fill in the blank
P-SCAN-DOWN(____, x, t, y, k + 1, j) // fill in the blank
sync
```
**d.** Fill in the three missing expressions in line 8 of $\text{P-SCAN-UP}$ and lines 5 and 6 of $\text{P-SCAN-DOWN}$. Argue that with expressions you supplied, $\text{P-SCAN-3}$ is correct. ($\textit{Hint:}$ Prove that the value $v$ passed to $\text{P-SCAN-DOWN}(v, x, t, y, i, j)$ satisfies $v = x[1] \otimes x[2] \otimes \cdots \otimes x[i - 1]$.)
**e.** Analyze the work, span, and parallelism of $\text{P-SCAN-3}$.
|
(Removed)
|
[
{
"lang": "cpp",
"code": "> REDUCE(x, i, j)\n> y = x[i]\n> for k = i + 1 to j\n> y = y ⊗ x[k]\n> return y\n>"
},
{
"lang": "cpp",
"code": "> SCAN(x)\n> n = x.length\n> let y[1..n] be a new array\n> y[1] = x[1]\n> for i = 2 to n\n> y[i] = y[i - 1] ⊗ x[i]\n> return y\n>"
},
{
"lang": "cpp",
"code": "> P-SCAN-1(x)\n> n = x.length\n> let y[1..n] be a new array\n> P-SCAN-1-AUX(x, y, 1, n)\n> return y\n>"
},
{
"lang": "cpp",
"code": "> P-SCAN-1-AUX(x, y, i, j)\n> parallel for l = i to j\n> y[l] = P-REDUCE(x, 1, l)\n>"
},
{
"lang": "cpp",
"code": "> P-SCAN-2(x)\n> n = x.length\n> let y[1..n] be a new array\n> P-SCAN-2-AUX(x, y, 1, n)\n> return y\n>"
},
{
"lang": "cpp",
"code": "> P-SCAN-2-AUX(x, y, i, j)\n> if i == j\n> y[i] = x[i]\n> else k = floor((i + j) / 2)\n> spawn P-SCAN-2-AUX(x, y, i, k)\n> P-SCAN-2-AUX(x, y, k + 1, j)\n> sync\n> parallel for l = k + 1 to j\n> y[l] = y[k] ⊗ y[l]\n>"
},
{
"lang": "cpp",
"code": "> P-SCAN-3(x)\n> n = x.length\n> let y[1..n] and t[1..n] be new arrays\n> y[1] = x[1]\n> if n > 1\n> P-SCAN-UP(x, t, 2, n)\n> P-SCAN-DOWN(x[1], x, t, y, 2, n)\n> return y\n>"
},
{
"lang": "cpp",
"code": "> P-SCAN-UP(x, t, i, j)\n> if i == j\n> return x[i]\n> else\n> k = floor((i + j) / 2)\n> t[k] = spawn P-SCAN-UP(x, t, i, k)\n> right = P-SCAN-UP(x, t, k + 1, j)\n> sync\n> return ____ // fill in the blank\n>"
},
{
"lang": "cpp",
"code": "> P-SCAN-DOWN(v, x, t, y, i, j)\n> if i == j\n> y[i] = v ⊗ x[i]\n> else\n> k = floor((i + j) / 2)\n> spawn P-SCAN-DOWN(____, x, t, y, i, k) // fill in the blank\n> P-SCAN-DOWN(____, x, t, y, k + 1, j) // fill in the blank\n> sync\n>"
}
] | false |
[] |
27-27-5
|
27
|
27-5
|
27-5
|
docs/Chap27/Problems/27-5.md
|
Computational science is replete with algorithms that require the entries of an array to be filled in with values that depend on the values of certain already computed neighboring entries, along with other information that does not change over the course of the computation. The pattern of neighboring entries does not change during the computation and is called a **_stencil_**. For example, Section 15.4 presents a stencil algorithm to compute a longest common subsequence, where the value in entry $c[i, j]$ depends only on the values in $c[i - 1, j]$, $c[i, j - 1]$, and $c[i - 1, j - 1]$, as well as the elements $x_i$ and $y_j$ within the two sequences given as inputs. The input sequences are fixed, but the algorithm fills in the two-dimensional array $c$ so that it computes entry $c[i, j]$ after computing all three entries $c[i - 1, j]$, $c[i, j - 1]$, and $c[i - 1, j - 1]$.
In this problem, we examine how to use nested parallelism to multithread a simple stencil calculation on an $n \times n$ array $A$ in which, of the values in $A$, the value placed into entry $A[i, j]$ depends only on values in $A[i' , j']$, where $i' \le i$ and $j' \le j$ (and of course, $i' \ne i$ or $j' \ne j$). In other words, the value in an entry depends only on values in entries that are above it and/or to its left, along with static information outside of the array. Furthermore, we assume throughout this problem that once we have filled in the entries upon which $A[i, j]$ depends, we can fill in $A[i, j]$ in $\Theta(1)$ time (as in the $\text{LCS-LENGTH}$ procedure of Section 15.4).
We can partition the $n \times n$ array $A$ into four $n / 2 \times n / 2$ subarrays as follows:
$$
A =
\begin{pmatrix}
A_{11} & A_{12} \\\\
A_{21} & A_{22} \tag{27.11}
\end{pmatrix}
.
$$
Observe now that we can fill in subarray $A_{11}$ recursively, since it does not depend on the entries of the other three subarrays. Once $A_{11}$ is complete, we can continue to fill in $A_{12}$ and $A_{21}$ recursively in parallel, because although they both depend on $A_{11}$ , they do not depend on each other. Finally, we can fill in $A_{22}$ recursively.
**a.** Give multithreaded pseudocode that performs this simple stencil calculation using a divide-and-conquer algorithm $\text{SIMPLE-STENCIL}$ based on the decomposition $\text{(27.11)}$ and the discussion above. (Don't worry about the details of the base case, which depends on the specific stencil.) Give and solve recurrences for the work and span of this algorithm in terms of $n$. What is the parallelism?
**b.** Modify your solution to part (a) to divide an $n \times n$ array into nine $n / 3 \times n / 3$ subarrays, again recursing with as much parallelism as possible. Analyze this algorithm. How much more or less parallelism does this algorithm have compared with the algorithm from part (a)?
**c.** Generalize your solutions to parts (a) and (b) as follows. Choose an integer $b \ge 2$. Divide an $n \times n$ array into $b^2$ subarrays, each of size $n / b \times n / b$, recursing with as much parallelism as possible. In terms of $n$ and $b$, what are the work, span, and parallelism of your algorithm? Argue that, using this approach, the parallelism must be $o(n)$ for any choice of $b \ge 2$. ($\textit{Hint:}$ For this last argument, show that the exponent of $n$ in the parallelism is strictly less than $1$ for any choice of $b \ge 2$.)
**d.** Give pseudocode for a multithreaded algorithm for this simple stencil calculation that achieves $\Theta(n\lg n)$ parallelism. Argue using notions of work and span that the problem, in fact, has $\Theta(n)$ inherent parallelism. As it turns out, the divide-and-conquer nature of our multithreaded pseudocode does not let us achieve this maximal parallelism.
|
(Removed)
|
[] | false |
[] |
27-27-6
|
27
|
27-6
|
27-6
|
docs/Chap27/Problems/27-6.md
|
Just as with ordinary serial algorithms, we sometimes want to implement randomized multithreaded algorithms. This problem explores how to adapt the various performance measures in order to handle the expected behavior of such algorithms. It also asks you to design and analyze a multithreaded algorithm for randomized quicksort.
**a.** Explain how to modify the work law $\text{(27.2)}$, span law $\text{(27.3)}$, and greedy scheduler bound $\text{(27.4)}$ to work with expectations when $T_P$, $T_1$, and $T_\infty$ are all random variables.
**b.** Consider a randomized multithreaded algorithm for which $1\%$ of the time we have $T_1 = 10^4$ and $T_{10,000} = 1$, but for $99\%$ of the time we have $T_1 = T_{10,000} = 10^9$. Argue that the **_speedup_** of a randomized multithreaded algorithm should be defined as $\text E[T_1]/\text E[T_P]$, rather than $\text E[T_1 / T_P]$.
**c.** Argue that the **_parallelism_** of a randomized multithreaded algorithm should be defined as the ratio $\text E[T_1] / \text E[T_\infty]$.
**d.** Multithread the $\text{RANDOMIZED-QUICKSORT}$ algorithm on page 179 by using nested parallelism. (Do not parallelize $\text{RANDOMIZED-PARTITION}$.) Give the pseudocode for your $\text{P-RANDOMIZED-QUICKSORT}$ algorithm.
**e.** Analyze your multithreaded algorithm for randomized quicksort. ($\textit{Hint:}$ Review the analysis of $\text{RANDOMIZED-SELECT}$ on page 216.)
|
**a.**
$$
\begin{aligned}
\text E[T_P] & \ge \text E[T_1] / P \\\\
\text E[T_P] & \ge \text E[T_\infty] \\\\
\text E[T_P] & \le \text E[T_1]/P + \text E[T_\infty].
\end{aligned}
$$
**b.**
$$\text E[T_1] \approx \text E[T_{10,000}] \approx 9.9 \times 10^8, \text{ so } \text E[T_1] / \text E[T_{10,000}] \approx 1.$$
$$\text E[T_1 / T_{10,000}] = 0.01 \cdot 10^4 + 0.99 \cdot 1 = 100.99.$$
The first ratio correctly reflects that on $10,000$ processors the algorithm is essentially never faster in expectation, while the second misleadingly suggests a $100$-fold speedup. Therefore the speedup should be defined as $\text E[T_1] / \text E[T_P]$.
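For a quick numerical sanity check of these figures (nothing beyond the 1%/99% mixture stated in the problem is assumed):

```python
# E[T_1], E[T_10000], and the two candidate speedup definitions for the
# distribution given above.
p = 0.01
E_T1    = p * 1e4 + (1 - p) * 1e9          # E[T_1]
E_TP    = p * 1.0 + (1 - p) * 1e9          # E[T_10000]
E_ratio = p * (1e4 / 1.0) + (1 - p) * 1.0  # E[T_1 / T_10000]

print(E_T1 / E_TP)  # about 1.0: essentially no speedup in expectation
print(E_ratio)      # 100.99: misleadingly suggests a large speedup
```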
**c.** Same as the above.
**d.**
```cpp
P-RANDOMIZED-QUICKSORT(A, p, r)
    if p < r
        q = RANDOMIZED-PARTITION(A, p, r)
        spawn P-RANDOMIZED-QUICKSORT(A, p, q - 1)
        P-RANDOMIZED-QUICKSORT(A, q + 1, r)
        sync
```
**e.** The work is the same as for serial randomized quicksort, so $\text E[T_1] = O(n\lg n)$. Since $\text{RANDOMIZED-PARTITION}$ is not parallelized, a call on $n$ elements contributes $\Theta(n)$ serial work to the critical path, and the subproblem sizes shrink geometrically in expectation (as in the analysis of $\text{RANDOMIZED-SELECT}$), so
$$
\begin{aligned}
\text E[T_1] & = O(n\lg n) \\\\
\text E[T_\infty] & = O(n) \\\\
\text E[T_1] / \text E[T_\infty] & = O(\lg n).
\end{aligned}
$$
|
[
{
"lang": "cpp",
"code": "RANDOMIZED-QUICKSORT(A, p, r)\n if p < r\n q = RANDOM-PARTITION(A, p, r)\n spawn RANDOMIZED-QUICKSORT(A, p, q - 1)\n RANDOMIZED-QUICKSORT(A, q + 1, r)\n sync"
}
] | false |
[] |
28-28.1-1
|
28
|
28.1
|
28.1-1
|
docs/Chap28/28.1.md
|
Solve the equation
$$
\begin{pmatrix}
1 & 0 & 0 \\\\
4 & 1 & 0 \\\\
-6 & 5 & 1
\end{pmatrix}
\begin{pmatrix}
x_1 \\\\
x_2 \\\\
x_3
\end{pmatrix}
=
\begin{pmatrix}
3 \\\\
14 \\\\
-7
\end{pmatrix}
$$
by using forward substitution.
|
$$
\begin{pmatrix}
3 \\\\
14 - 4 \cdot 3 \\\\
-7 - 5 \cdot (14 - 4 \cdot 3) - (-6) \cdot 3
\end{pmatrix}
=
\begin{pmatrix}
3 \\\\
2 \\\\
1
\end{pmatrix}
.
$$
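A minimal Python sketch of forward substitution (the function name is ours, not from the text) reproduces this computation:

```python
# Forward substitution for a unit lower-triangular system Lx = b.
def forward_substitute(L, b):
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        # L[i][i] == 1, so no division is needed.
        x[i] = b[i] - sum(L[i][j] * x[j] for j in range(i))
    return x

L = [[1, 0, 0], [4, 1, 0], [-6, 5, 1]]
b = [3, 14, -7]
print(forward_substitute(L, b))  # [3.0, 2.0, 1.0]
```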
|
[] | false |
[] |
28-28.1-2
|
28
|
28.1
|
28.1-2
|
docs/Chap28/28.1.md
|
Find an $\text{LU}$ decomposition of the matrix
$$
\begin{pmatrix}
4 & -5 & 6 \\\\
8 & -6 & 7 \\\\
12 & -7 & 12
\end{pmatrix}
.
$$
|
$$
\begin{pmatrix}
4 & -5 & 6 \\\\
8 & -6 & 7 \\\\
12 & -7 & 12
\end{pmatrix}
=
\begin{pmatrix}
1 & 0 & 0 \\\\
2 & 1 & 0 \\\\
3 & 2 & 1
\end{pmatrix}
\begin{pmatrix}
4 & -5 & 6 \\\\
0 & 4 & -5 \\\\
0 & 0 & 4
\end{pmatrix}
.
$$
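One can verify the factorization numerically, for instance with numpy (a check we add here; it is not part of the exercise):

```python
import numpy as np

L = np.array([[1, 0, 0], [2, 1, 0], [3, 2, 1]])
U = np.array([[4, -5, 6], [0, 4, -5], [0, 0, 4]])
A = np.array([[4, -5, 6], [8, -6, 7], [12, -7, 12]])
print(np.array_equal(L @ U, A))  # True
```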
|
[] | false |
[] |
28-28.1-3
|
28
|
28.1
|
28.1-3
|
docs/Chap28/28.1.md
|
Solve the equation
$$
\begin{pmatrix}
1 & 5 & 4 \\\\
2 & 0 & 3 \\\\
5 & 8 & 2
\end{pmatrix}
\begin{pmatrix}
x_1 \\\\
x_2 \\\\
x_3
\end{pmatrix}
=
\begin{pmatrix}
12 \\\\
9 \\\\
5
\end{pmatrix}
$$
by using an $\text{LUP}$ decomposition.
|
We have
$$
\begin{aligned}
A & =
\begin{pmatrix}
1 & 5 & 4 \\\\
2 & 0 & 3 \\\\
5 & 8 & 2
\end{pmatrix}
, \\\\
b & =
\begin{pmatrix}
12 \\\\
9 \\\\
5
\end{pmatrix}
,
\end{aligned}
$$
and we wish to solve for the unknown $x$. The $\text{LUP}$ decomposition is
$$
\begin{aligned}
L & =
\begin{pmatrix}
1 & 0 & 0 \\\\
0.2 & 1 & 0 \\\\
0.4 & -\frac{3.2}{3.4} & 1
\end{pmatrix}
, \\\\
U & =
\begin{pmatrix}
5 & 8 & 2 \\\\
0 & 3.4 & 3.6 \\\\
0 & 0 & 2.2 + \frac{11.52}{3.4}
\end{pmatrix}
, \\\\
P & =
\begin{pmatrix}
0 & 0 & 1 \\\\
1 & 0 & 0 \\\\
0 & 1 & 0
\end{pmatrix}
.
\end{aligned}
$$
Using forward substitution, we solve $Ly = Pb$ for $y$:
$$
\begin{pmatrix}
1 & 0 & 0 \\\\
0.2 & 1 & 0 \\\\
0.4 & -\frac{3.2}{3.4} & 1
\end{pmatrix}
\begin{pmatrix}
y_1 \\\\
y_2 \\\\
y_3
\end{pmatrix}
=
\begin{pmatrix}
5 \\\\
12 \\\\
9
\end{pmatrix}
,
$$
obtaining
$$
y =
\begin{pmatrix}
5 \\\\
11 \\\\
7 + \frac{35.2}{3.4}
\end{pmatrix}
$$
by computing first $y_1$, then $y_2$, and finally $y_3$. Using back substitution, we solve $Ux = y$ for $x$:
$$
\begin{pmatrix}
5 & 8 & 2 \\\\
0 & 3.4 & 3.6 \\\\
0 & 0 & 2.2 + \frac{11.52}{3.4}
\end{pmatrix}
\begin{pmatrix}
x_1 \\\\
x_2 \\\\
x_3
\end{pmatrix}
=
\begin{pmatrix}
5 \\\\
11 \\\\
7 + \frac{35.2}{3.4}
\end{pmatrix}
,
$$
thereby obtaining the desired answer
$$
x =
\begin{pmatrix}
-\frac{3}{19} \\\\
-\frac{1}{19} \\\\
\frac{59}{19}
\end{pmatrix}
$$
by computing first $x_3$, then $x_2$, and finally $x_1$.
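As a numerical sanity check (ours, not part of the exercise), solving the original system directly gives the same vector:

```python
import numpy as np

A = np.array([[1, 5, 4], [2, 0, 3], [5, 8, 2]], dtype=float)
b = np.array([12, 9, 5], dtype=float)
x = np.linalg.solve(A, b)
print(x)                                      # approx. [-0.1579 -0.0526  3.1053]
print(np.allclose(x, [-3/19, -1/19, 59/19]))  # True
```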
|
[] | false |
[] |
28-28.1-4
|
28
|
28.1
|
28.1-4
|
docs/Chap28/28.1.md
|
Describe the $\text{LUP}$ decomposition of a diagonal matrix.
|
The $\text{LUP}$ decomposition of a diagonal matrix $D$ is given by $L = I$, $U = D$, and $P = I$, where $I$ is the identity matrix, since then $PD = LU$ reduces to $D = D$.
|
[] | false |
[] |
28-28.1-5
|
28
|
28.1
|
28.1-5
|
docs/Chap28/28.1.md
|
Describe the $\text{LUP}$ decomposition of a permutation matrix $A$, and prove that it is unique.
|
(Omit!)
|
[] | false |
[] |
28-28.1-6
|
28
|
28.1
|
28.1-6
|
docs/Chap28/28.1.md
|
Show that for all $n \ge 1$, there exists a singular $n \times n$ matrix that has an $\text{LU}$ decomposition.
|
The zero matrix always has an $\text{LU}$ decomposition by taking $L$ to be any unit lower-triangular matrix and $U$ to be the zero matrix, which is upper triangular.
|
[] | false |
[] |
28-28.1-7
|
28
|
28.1
|
28.1-7
|
docs/Chap28/28.1.md
|
In $\text{LU-DECOMPOSITION}$, is it necessary to perform the outermost **for** loop iteration when $k = n$? How about in $\text{LUP-DECOMPOSITION}$?
|
For $\text{LU-DECOMPOSITION}$, it is indeed necessary. If we didn't run line 6 of the outermost **for** loop, $u_{nn}$ would be left at its initial value of $0$ instead of being set equal to $a_{nn}$. This can clearly produce incorrect results, because the $\text{LU}$ decomposition of any non-singular matrix must have both $L$ and $U$ with nonzero determinant. However, if $u_{nn} = 0$, the determinant of $U$ will be $0$ by problem D.2-2.
For $\text{LUP-DECOMPOSITION}$, the iteration of the outermost **for** loop that occurs with $k = n$ will not change the final answer. Since $\pi$ would have to be a permutation on a single element, it cannot be non-trivial, and the **for** loop on line 16 will not run at all.
|
[] | false |
[] |
28-28.2-1
|
28
|
28.2
|
28.2-1
|
docs/Chap28/28.2.md
|
Let $M(n)$ be the time to multiply two $n \times n$ matrices, and let $S(n)$ denote the time required to square an $n \times n$ matrix. Show that multiplying and squaring matrices have essentially the same difficulty: an $M(n)$-time matrix-multiplication algorithm implies an $O(M(n))$-time squaring algorithm, and an $S(n)$-time squaring algorithm implies an $O(S(n))$-time matrix-multiplication algorithm.
|
Showing that being able to multiply matrices in time $M(n)$ implies being able to square matrices in time $M(n)$ is trivial because squaring a matrix is just multiplying it by itself.
The trickier direction is showing that being able to square matrices in time $S(n)$ implies being able to multiply matrices in time $O(S(n))$.
As we do this, we impose the regularity condition that $S(2n) \in O(S(n))$. Suppose that we are trying to multiply the matrices $A$ and $B$, that is, find $AB$. Then, define the matrix
$$
C =
\begin{pmatrix}
I & A \\\\
0 & B
\end{pmatrix}
$$
Then, we can find $C^2$ in time $S(2n) \in O(S(n))$. Since
$$
C^2 =
\begin{pmatrix}
I & A + AB \\\\
0 & B^2
\end{pmatrix}
$$
Then we can just take the upper right quarter of $C^2$ and subtract $A$ from it to obtain the desired result. Apart from the squaring, we've only done work that is $O(n^2)$. Since $S(n)$ is $\Omega(n^2)$ anyway, the total amount of work we've done is $O(S(n))$.
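Here is a small numpy sketch of this reduction (the helper name is ours, and an ordinary matrix product stands in for the assumed squaring routine):

```python
import numpy as np

def multiply_via_squaring(A, B):
    """Compute AB using one squaring of the 2n x 2n block matrix C."""
    n = A.shape[0]
    C = np.block([[np.eye(n),        A],
                  [np.zeros((n, n)), B]])
    C2 = C @ C              # stand-in for the assumed squaring routine S(2n)
    return C2[:n, n:] - A   # upper-right block of C^2 is A + AB

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
print(np.allclose(multiply_via_squaring(A, B), A @ B))  # True
```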
|
[] | false |
[] |
28-28.2-2
|
28
|
28.2
|
28.2-2
|
docs/Chap28/28.2.md
|
Let $M(n)$ be the time to multiply two $n \times n$ matrices, and let $L(n)$ be the time to compute the LUP decomposition of an $n \times n$ matrix. Show that multiplying matrices and computing LUP decompositions of matrices have essentially the same difficulty: an $M(n)$-time matrix-multiplication algorithm implies an $O(M(n))$-time LUP-decomposition algorithm, and an $L(n)$-time LUP-decomposition algorithm implies an $O(L(n))$-time matrix-multiplication algorithm.
|
Let $A$ be an $n \times n$ matrix. Without loss of generality we'll assume $n = 2^k$, and impose the regularity condition that $L(n / 2) \le cL(n)$ where $c < 1 / 2$ and $L(n)$ is the time it takes to find an LUP decomposition of an $n \times n$ matrix. First, decompose $A$ as
$$
A =
\begin{pmatrix}
A_1 \\\\
A_2
\end{pmatrix}
$$
where $A_1$ is $n / 2$ by $n$. Let $A_1 = L_1U_1P_1$ be an LUP decomposition of $A_1$, where $L_1$ is $n / 2$ by $n / 2$, $U_1$ is $n / 2$ by $n$, and $P_1$ is $n$ by $n$. Perform a block decomposition of $U_1$ and $A_2P_1^{-1}$ as $U_1 = [\overline U_1|B]$ and $A_2P_1^{-1} = [C|D]$ where $\overline U_1$ and $C$ are $n / 2$ by $n / 2$ matrices. Since we assume that $A$ is nonsingular, $\overline U_1$ must also be nonsingular.
Set $F = D - C\overline U_1^{-1}B$. Then we have
$$
A =
\begin{pmatrix}
L_1 & 0 \\\\
C\overline U_1^{-1} & I_{n / 2}
\end{pmatrix}
\begin{pmatrix}
\overline U_1 & B \\\\
0 & F
\end{pmatrix}
P_1.
$$
Now let $F = L_2U_2P_2$ be an LUP decomposition of $F$, and let $\overline P = \begin{pmatrix} I_{n / 2} & 0 \\\\ 0 & P_2 \end{pmatrix}$. Then we may write
$$
A =
\begin{pmatrix}
L_1 & 0 \\\\
C\overline U_1^{-1} & L_2
\end{pmatrix}
\begin{pmatrix}
\overline U_1 & BP_2^{-1} \\\\
0 & U_2
\end{pmatrix}
\overline P P_1.
$$
This is an LUP decomposition of $A$. To achieve it, we computed two LUP decompositions of half size, a constant number of matrix multiplications, and a constant number of matrix inversions. Since matrix inversion and multiplication are computationally equivalent, we conclude that the runtime is $O(M(n))$.
|
[] | false |
[] |
28-28.2-3
|
28
|
28.2
|
28.2-3
|
docs/Chap28/28.2.md
|
Let $M(n)$ be the time to multiply two $n \times n$ matrices, and let $D(n)$ denote the time required to find the determinant of an $n \times n$ matrix. Show that multiplying matrices and computing the determinant have essentially the same difficulty: an $M(n)$-time matrix-multiplication algorithm implies an $O(M(n))$-time determinant algorithm, and a $D(n)$-time determinant algorithm implies an $O(D(n))$-time matrix-multiplication algorithm.
|
(Omit!)
|
[] | false |
[] |
28-28.2-4
|
28
|
28.2
|
28.2-4
|
docs/Chap28/28.2.md
|
Let $M(n)$ be the time to multiply two $n \times n$ boolean matrices, and let $T(n)$ be the time to find the transitive closure of an $n \times n$ boolean matrix. (See Section 25.2.) Show that an $M(n)$-time boolean matrix-multiplication algorithm implies an $O(M(n)\lg n)$-time transitive-closure algorithm, and a $T(n)$-time transitive-closure algorithm implies an $O(T(n))$-time boolean matrix-multiplication algorithm.
|
Suppose we can multiply boolean matrices in $M(n)$ time, where we assume this means that if we're multiplying boolean matrices $A$ and $B$, then $(AB)\_{ij} = (a\_{i1} \wedge b_{1j}) \vee \dots \vee (a_{in} \wedge b_{nj})$. To find the transitive closure of an $n \times n$ boolean matrix $A$ we just need to find the $n$th boolean power of $I \vee A$, where $I$ is the identity matrix; including $I$ accounts for paths of length less than $n$, including the trivial length-$0$ paths. We can do this by repeated squaring: compute $(I \vee A)^2$, then $((I \vee A)^2)^2$, and so on. This requires only $\lceil \lg n \rceil$ multiplications, so the transitive closure can be computed in $O(M(n)\lg n)$.
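A small Python sketch of this direction (the helper names are ours; an integer matrix product stands in for the assumed $M(n)$-time boolean multiplication routine):

```python
import numpy as np

def boolean_product(X, Y):
    # (XY)_{ij} = OR_k (x_{ik} AND y_{kj}); an integer product stands in
    # for the assumed boolean matrix multiplication.
    return (X.astype(int) @ Y.astype(int)) > 0

def transitive_closure(A):
    n = A.shape[0]
    M = A.astype(bool) | np.eye(n, dtype=bool)  # I OR A, to keep short paths
    k = 1
    while k < n:                                # ceil(lg n) squarings
        M = boolean_product(M, M)
        k *= 2
    return M

A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=bool)
print(transitive_closure(A).astype(int))  # [[1 1 1], [0 1 1], [0 0 1]]
```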
For the other direction, first view $A$ and $B$ as adjacency matrices, and impose the regularity condition $T(3n) = O(T(n))$, where $T(n)$ is the time to compute the transitive closure of a graph on $n$ vertices. We will define a new graph whose transitive closure matrix contains the boolean product of $A$ and $B$. Start by placing $3n$ vertices down, labeling them $1, 2, \dots, n, 1', 2', \dots, n', 1'', 2'', \dots, n''$.
Connect vertex $i$ to vertex $j'$ if and only if $A_{ij} = 1$. Connect vertex $j'$ to vertex $k''$ if and only if $B_{jk} = 1$. In the resulting graph, the only way to get from the first set of $n$ vertices to the third set is to first take an edge which "looks like" an edge in $A$, then take an edge which "looks like" an edge in $B$. In particular, the transitive closure of this graph is:
$$
\begin{pmatrix}
I & A & AB \\\\
0 & I & B \\\\
0 & 0 & I
\end{pmatrix}
.
$$
Since the graph is only of size $3n$, computing its transitive closure can be done in $O(T(3n)) = O(T(n))$ by the regularity condition. Therefore multiplying matrices and finding the transitive closure are equally hard.
|
[] | false |
[] |
28-28.2-5
|
28
|
28.2
|
28.2-5
|
docs/Chap28/28.2.md
|
Does the matrix-inversion algorithm based on Theorem 28.2 work when matrix elements are drawn from the field of integers modulo $2$? Explain.
|
It does not necessarily work over the field of two elements. The problem comes in applying Theorem D.6 to conclude that $A^{\text T}A$ is positive-definite. In the proof of that theorem we obtain that $||Ax||^2 \ge 0$, with equality only if every entry of $Ax$ is zero. This second part is not true over the field with two elements; all that would be required is that there be an even number of ones in $Ax$. This means that we can only say that $A^{\text T}A$ is positive semi-definite, instead of the positive-definiteness that the algorithm requires.
|
[] | false |
[] |
28-28.2-6
|
28
|
28.2
|
28.2-6 $\star$
|
docs/Chap28/28.2.md
|
Generalize the matrix-inversion algorithm of Theorem 28.2 to handle matrices of complex numbers, and prove that your generalization works correctly. ($\textit{Hint:}$ Instead of the transpose of $A$, use the **_conjugate transpose_** $A^\*$, which you obtain from the transpose of $A$ by replacing every entry with its complex conjugate. Instead of symmetric matrices, consider **_Hermitian_** matrices, which are matrices $A$ such that $A = A^\*$.)
|
We may again assume that our matrix is a power of $2$, this time with complex entries. For the moment we assume our matrix $A$ is Hermitian and positive-definite. The proof goes through exactly as before, with matrix transposes replaced by conjugate transposes, and using the fact that Hermitian positive-definite matrices are invertible. Finally, we need to justify that we can obtain the same asymptotic running time for matrix multiplication as for matrix inversion when $A$ is invertible, but not Hermitian positive-definite.
For any nonsingular matrix $A$, the matrix $A^\*A$ is Hermitian and positive-definite, since for any $x$ we have $x^\*A^\*Ax = \langle Ax, Ax \rangle > 0$ by the definition of inner product. To invert $A$, we first compute $(A^\*A)^{-1} = A^{-1}(A^\*)^{-1}$. Then we need only multiply this result on the right by $A^\*$. Each of these steps takes $O(M(n))$ time, so we can invert any nonsingular matrix with complex entries in $O(M(n))$ time.
|
[] | false |
[] |
28-28.3-1
|
28
|
28.3
|
28.3-1
|
docs/Chap28/28.3.md
|
Prove that every diagonal element of a symmetric positive-definite matrix is positive.
|
To see this, let $e_i$ be the vector that is all $0$s except for a $1$ in the $i$th
position. Then, we consider the quantity $e_i^{\text T}Ae_i$ for every $i$. The product $Ae_i$ pulls out the $i$th column of $A$ as a column vector, and multiplying that on the left by $e_i^{\text T}$ pulls out the $i$th entry of this column vector, which means that the quantity $e_i^{\text T}Ae_i$ is exactly the value of $A_{i, i}$.
So, we have that by positive definiteness, since $e_i$ is nonzero, that quantity must be positive. Since we do this for every $i$, we have that every entry along the diagonal must be positive.
|
[] | false |
[] |
28-28.3-2
|
28
|
28.3
|
28.3-2
|
docs/Chap28/28.3.md
|
Let
$$
A =
\begin{pmatrix}
a & b \\\\
b & c
\end{pmatrix}
$$
be a $2 \times 2$ symmetrix positive-definite matrix. Prove that its determinant $ac - b^2$ is positive by "completing the square" in a manner similar to that used in the proof of Lemma 28.5.
|
Let $x = -by / a$. Since $A$ is positive-definite, we have
$$
\begin{aligned}
0 & <
\begin{pmatrix} x & y \end{pmatrix} A
\begin{pmatrix} x \\\\ y \end{pmatrix} \\\\
& =
\begin{pmatrix} x & y \end{pmatrix}
\begin{pmatrix} ax + by \\\\ bx + cy \end{pmatrix} \\\\
& = ax^2 + 2bxy + cy^2 \\\\
& = cy^2 - \frac{b^2y^2}{a} \\\\
& = (c - b^2 / a)y^2.
\end{aligned}
$$
Thus, $c - b^2 / a > 0$, which implies $ac - b^2 > 0$, since $a > 0$.
|
[] | false |
[] |
28-28.3-3
|
28
|
28.3
|
28.3-3
|
docs/Chap28/28.3.md
|
Prove that the maximum element in a symmetric positive-definite matrix lies on the diagonal.
|
Suppose for a contradiction that some element $a_{ij}$ with $i \ne j$ is a largest element of the matrix. We will use $e_i$ to denote the vector that is all zeroes except for having a $1$ at position $i$. Then, we consider the value $(e_i - e_j)^{\text T} A(e_i - e_j)$. When we compute $A(e_i - e_j)$ we get a vector which is column $i$ minus column $j$. Then, multiplying that on the left by $(e_i - e_j)^{\text T}$ gives the $i$th entry minus the $j$th entry of that vector. So,
$$
\begin{aligned}
(e_i - e_j)^{\text T} A(e_i - e_j)
& = a_{ii} - a_{ij} - a_{ji} + a_{jj} \\\\
& = a_{ii} + a_{jj} - 2a_{ij} \le 0
\end{aligned}
$$
where we used symmetry to get that $a_{ij} = a_{ji}$, and the inequality holds because $a_{ii} \le a_{ij}$ and $a_{jj} \le a_{ij}$. This contradicts the fact that $A$ is positive-definite. So, our assumption that there was an off-diagonal element tied for largest must have been false.
|
[] | false |
[] |
28-28.3-4
|
28
|
28.3
|
28.3-4
|
docs/Chap28/28.3.md
|
Prove that the determinant of each leading submatrix of a symmetrix positive-definite matrix is positive.
|
The claim clearly holds for matrices of size $1$ because the single entry in the matrix is positive and the only leading submatrix is the matrix itself. Now suppose
the claim holds for matrices of size $n$, and let $A$ be an $(n + 1) \times (n + 1)$ symmetric positive-definite matrix. We can write $A$ as
$$
A =
\left[
\begin{array}{ccc|c}
& & & \\\\
& A' & & w \\\\
& & & \\\\
\hline
& v & & c
\end{array}
\right]
.
$$
Then $A'$ is clearly symmetric, and for any $x$ we have $x^{\text T} A'x = \begin{pmatrix} x & 0 \end{pmatrix} A \begin{pmatrix} x \\\\ 0 \end{pmatrix} > 0$, so $A'$ is positive-definite. By our induction hypothesis, every leading submatrix of $A'$ has positive determinant, so we are left only to show that $A$ has positive determinant. By Theorem D.4, the determinant of $A$ is equal to the determinant of the matrix
$$
B =
\left[
\begin{array}{c|ccc}
c & & v & \\\\
\hline
& & & \\\\
w & & A' & \\\\
& & &
\end{array}
\right]
.
$$
Theorem D.4 also tells us that the determinant is unchanged if we add a multiple of one column of a matrix to another. Since $0 < e_{n + 1}^{\text T} Ae_{n + 1} = c$, we can use multiples of the first column to zero out every entry in the first row other than $c$. Specifically, the determinant of $B$ is the same as the determinant of the matrix obtained in this way, which looks like
$$
C =
\left[
\begin{array}{c|ccc}
c & & 0 & \\\\
\hline
& & & \\\\
w & & A'' & \\\\
& & &
\end{array}
\right]
.
$$
By definition, $\det(A) = c\det(A'')$. By our induction hypothesis, $\det(A'') > 0$. Since $c > 0$ as well, we conclude that $\det(A) > 0$, which completes the proof.
|
[] | false |
[] |
28-28.3-5
|
28
|
28.3
|
28.3-5
|
docs/Chap28/28.3.md
|
Let $A_k$ denote the $k$th leading submatrix of a symmetric positive-definite matrix $A$. Prove that $\text{det}(A_k) / \text{det}(A_{k - 1})$ is the $k$th pivot during $\text{LU}$ decomposition, where, by convention, $\text{det}(A_0) = 1$.
|
When we do an LU decomposition of a positive-definite symmetric matrix, we never need to permute the rows. This means that the pivot value used in the first step is the entry in the upper left corner. For the case $k = 1$ the claim holds because we defined $\det(A_0) = 1$, giving $a_{11} = \det(A_1) / \det(A_0)$. When diagonalizing a matrix, the product of the pivot values used gives the determinant of the matrix. So, the determinant of $A_k$ is the product of the $k$th pivot value with all the previous pivot values. By induction, the product of all the previous values is $\det(A_{k - 1})$. So, if $x$ is the $k$th pivot value, $\det(A_k) = x\det(A_{k - 1})$, giving us the desired result that the $k$th pivot value is $\det(A_k) / \det(A_{k - 1})$.
|
[] | false |
[] |
28-28.3-6
|
28
|
28.3
|
28.3-6
|
docs/Chap28/28.3.md
|
Find the function of the form
$$F(x) = c_1 + c_2x\lg x + c_3 e^x$$
that is the best least-squares fit to the data points
$$(1, 1), (2, 1), (3, 3), (4, 8).$$
|
First we form the $A$ matrix
$$
A =
\begin{pmatrix}
1 & 0 & e \\\\
1 & 2 & e^2 \\\\
1 & 3\lg 3 & e^3 \\\\
1 & 8 & e^4
\end{pmatrix}
.
$$
We compute the pseudoinverse, then multiply it by $y$, to obtain the coefficient vector
$$
c =
\begin{pmatrix}
0.411741 \\\\
-0.20487 \\\\
0.16546
\end{pmatrix}
.
$$
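The same fit can be reproduced with an off-the-shelf least-squares routine (a sketch we add here; the printed coefficients should come out close to the values above):

```python
import numpy as np

xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = np.array([1.0, 1.0, 3.0, 8.0])
# Columns correspond to the basis functions 1, x lg x, e^x.
A = np.column_stack([np.ones_like(xs), xs * np.log2(xs), np.exp(xs)])
c, *_ = np.linalg.lstsq(A, ys, rcond=None)
print(c)  # roughly [ 0.41, -0.20,  0.17]
```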
|
[] | false |
[] |
28-28.3-7
|
28
|
28.3
|
28.3-7
|
docs/Chap28/28.3.md
|
Show that the pseudoinverse $A^+$ satisfies the following four equations:
$$
\begin{aligned}
AA^+A & = A, \\\\
A^+AA^+ & = A^+, \\\\
(AA^+)^{\text T} & = AA^+, \\\\
(A^+A)^{\text T} & = A^+A.
\end{aligned}
$$
|
$$
\begin{aligned}
AA^+A & = A((A^{\text T}A)^{-1}A^{\text T})A \\\\
& = A(A^{\text T}A)^{-1}(A^{\text T}A) \\\\
& = A,
\end{aligned}
$$
$$
\begin{aligned}
A^+AA^+ & = ((A^{\text T}A)^{-1}A^{\text T})A((A^{\text T}A)^{-1}A^{\text T}) \\\\
& = (A^{\text T}A)^{-1}(A^{\text T}A)(A^{\text T}A)^{-1}A^{\text T} \\\\
& = (A^{\text T}A)^{-1}A^{\text T} \\\\
& = A^+,
\end{aligned}
$$
$$
\begin{aligned}
(AA^+)^{\text T}
& = (A(A^{\text T}A)^{-1}A^{\text T})^{\text T} \\\\
& = A((A^{\text T}A)^{-1})^{\text T}A^{\text T} \\\\
& = A((A^{\text T}A)^{\text T})^{-1}A^{\text T} \\\\
& = A(A^{\text T}A)^{-1}A^{\text T} \\\\
& = AA^+,
\end{aligned}
$$
$$
\begin{aligned}
(A^+A)^{\text T}
& = ((A^{\text T}A)^{-1}A^{\text T}A)^{\text T} \\\\
& = ((A^{\text T}A)^{-1}(A^{\text T}A))^{\text T} \\\\
& = I^{\text T} \\\\
& = I \\\\
& = (A^{\text T}A)^{-1}(A^{\text T}A) \\\\
& = A^+A.
\end{aligned}
$$
|
[] | false |
[] |
28-28-1
|
28
|
28-1
|
28-1
|
docs/Chap28/Problems/28-1.md
|
Consider the tridiagonal matrix
$$
A =
\begin{pmatrix}
1 & -1 & 0 & 0 & 0 \\\\
-1 & 2 & -1 & 0 & 0 \\\\
0 & -1 & 2 & -1 & 0 \\\\
0 & 0 & -1 & 2 & -1 \\\\
0 & 0 & 0 & -1 & 2
\end{pmatrix}
.
$$
**a.** Find an $\text{LU}$ decomposition of $A$.
**b.** Solve the equation $Ax = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 \end{pmatrix}^{\text T}$ by using forward and back substitution.
**c.** Find the inverse of $A$.
**d.** Show how, for any $n \times n$ symmetric positive-definite, tridiagonal matrix $A$ and any $n$-vector $b$, to solve the equation $Ax = b$ in $O(n)$ time by performing an $\text{LU}$ decomposition. Argue that any method based on forming $A^{-1}$ is asymptotically more expensive in the worst case.
**e.** Show how, for any $n \times n$ nonsingular, tridiagonal matrix $A$ and any $n$-vector $b$, to solve the equation $Ax = b$ in $O(n)$ time by performing an $\text{LUP}$ decomposition.
|
**a.**
$$
\begin{aligned}
L & =
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\\\
-1 & 1 & 0 & 0 & 0 \\\\
0 & -1 & 1 & 0 & 0 \\\\
0 & 0 & -1 & 1 & 0 \\\\
0 & 0 & 0 & -1 & 1
\end{pmatrix}
, \\\\
U & =
\begin{pmatrix}
1 & -1 & 0 & 0 & 0 \\\\
0 & 1 & -1 & 0 & 0 \\\\
0 & 0 & 1 & -1 & 0 \\\\
0 & 0 & 0 & 1 & -1 \\\\
0 & 0 & 0 & 0 & 1
\end{pmatrix}
, \\\\
P & =
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\\\
0 & 1 & 0 & 0 & 0 \\\\
0 & 0 & 1 & 0 & 0 \\\\
0 & 0 & 0 & 1 & 0 \\\\
0 & 0 & 0 & 0 & 1
\end{pmatrix}
.
\end{aligned}
$$
**b.** We first do forward substitution on $Ly = b$ to obtain that
$$
y =
\begin{pmatrix}
1 \\\\
2 \\\\
3 \\\\
4 \\\\
5
\end{pmatrix}
.
$$
By back substitution on $Ux = y$, we have that
$$
x =
\begin{pmatrix}
15 \\\\
14 \\\\
12 \\\\
9 \\\\
5
\end{pmatrix}
.
$$
**c.** We will set $Ax = e_i$ for each $i$, where $e_i$ is the vector that is all zeroes except for a one in the $i$th position. Then, we will just concatenate all of these solutions together to get the desired inverse.
$$
\begin{array}{|c|c|}
\hline
\text{equation} & \text{solution} \\\\
\hline
Ax_1 = e_1 & x_1 = \begin{pmatrix} 5 \\\\ 4 \\\\ 3 \\\\ 2 \\\\ 1 \end{pmatrix} \\\\
\hline
Ax_2 = e_2 & x_2 = \begin{pmatrix} 4 \\\\ 4 \\\\ 3 \\\\ 2 \\\\ 1 \end{pmatrix} \\\\
\hline
Ax_3 = e_3 & x_3 = \begin{pmatrix} 3 \\\\ 3 \\\\ 3 \\\\ 2 \\\\ 1 \end{pmatrix} \\\\
\hline
Ax_4 = e_4 & x_4 = \begin{pmatrix} 2 \\\\ 2 \\\\ 2 \\\\ 2 \\\\ 1 \end{pmatrix} \\\\
\hline
Ax_5 = e_5 & x_5 = \begin{pmatrix} 1 \\\\ 1 \\\\ 1 \\\\ 1 \\\\ 1 \end{pmatrix} \\\\
\hline
\end{array}
$$
Thus,
$$
A^{-1} =
\begin{pmatrix}
5 & 4 & 3 & 2 & 1 \\\\
4 & 4 & 3 & 2 & 1 \\\\
3 & 3 & 3 & 2 & 1 \\\\
2 & 2 & 2 & 2 & 1 \\\\
1 & 1 & 1 & 1 & 1
\end{pmatrix}
.
$$
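As a quick numerical check of parts (b) and (c) (a numpy sketch we add here; not part of the problem):

```python
import numpy as np

A = np.array([[ 1, -1,  0,  0,  0],
              [-1,  2, -1,  0,  0],
              [ 0, -1,  2, -1,  0],
              [ 0,  0, -1,  2, -1],
              [ 0,  0,  0, -1,  2]], dtype=float)
print(np.linalg.solve(A, np.ones(5)))  # [15. 14. 12. 9. 5.]
print(np.linalg.inv(A))                # entries are 6 - max(i, j) for 1-indexed i, j
```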
**d.** When performing the LU decomposition, we only need to take the max over at most two different rows, so the loop on line 7 of $\text{LUP-DECOMPOSITION}$ drops to $O(1)$. There are only some constant number of nonzero entries in each row, so the loop on line 14 can also be reduced to $O(1)$. Lastly, there are only some constant number of nonzero entries of the form $a_{ik}$ and $a_{kj}$. Since the square of a constant is also a constant, the nested **for** loops on lines 16-19 also take only $O(1)$ time to run. Since the **for** loops on lines 3 and 5 both run $O(n)$ times and take $O(1)$ time each (provided we are careful not to iterate over the many zero entries in the matrix), the total runtime can be brought down to $O(n)$.
Since for a Tridiagonal matrix, it will only ever have finitely many nonzero entries in any row, we can do both the forward and back substitution each in time only $O(n)$.
Since the asymptotics of performing the LU decomposition on a positive definite tridiagonal matrix is $O(n)$, and this decomposition can be used to solve the equation in time $O(n)$, the total time for solving it with this method is $O(n)$. However, to simply record the inverse of the tridiagonal matrix would take time $O(n^2)$ since there are that many entries, so, any method based on computing the inverse of the matrix would take time $\Omega(n^2)$ which is clearly slower than the previous method.
**e.** The runtime of our LUP decomposition algorithm drops to $O(n)$ because, as before, we know there are only ever a constant number of nonzero entries in each row and column. Once we have an LUP decomposition, we also know that both $L$ and $U$ in that decomposition have only a constant number of non-zero entries in each row and column. This means that when we perform the forward and backward substitution, we only spend a constant amount of time per entry in $x$, and so it only takes $O(n)$ time.
|
[] | false |
[] |
28-28-2
|
28
|
28-2
|
28-2
|
docs/Chap28/Problems/28-2.md
|
A practical method for interpolating a set of points with a curve is to use **cubic splines**. We are given a set $\\{(x_i, y_i): i = 0, 1, \ldots, n\\}$ of $n + 1$ point-value pairs, where $x_0 < x_1 < \cdots < x_n$. We wish to fit a piecewise-cubic curve (spline) $f(x)$ to the points. That is, the curve $f(x)$ is made up of $n$ cubic polynomials $f_i(x) = a_i + b_ix + c_ix^2 + d_ix^3$ for $i = 0, 1, \ldots, n - 1$, where if $x$ falls in the range $x_i \le x \le x_{i + 1}$, then the value of the curve is given by $f(x) = f_i(x - x_i)$. The points $x_i$ at which the cubic polynomials are "pasted" together are called **knots**. For simplicity, we shall assume that $x_i = i$ for $i = 0, 1, \ldots, n$.
To ensure continuity of $f(x)$, we require that
$$
\begin{aligned}
f(x_i) & = f_i(0) = y_i, \\\\
f(x_{i + 1}) & = f_i(1) = y_{i + 1}
\end{aligned}
$$
for $i = 0, 1, \ldots, n - 1$. To ensure that $f(x)$ is sufficiently smooth, we also insist that the first derivative be continuous at each knot:
$$f'(x_{i + 1}) = f'\_i(1) = f'\_{i + 1}(0)$$
for $i = 0, 1, \ldots, n - 2$.
**a.** Suppose that for $i = 0, 1, \ldots, n$, we are given not only the point-value pairs $\\{(x_i, y_i)\\}$ but also the first derivatives $D_i = f'(x_i)$ at each knot. Express each coefficient $a_i$, $b_i$, $c_i$ and $d_i$ in terms of the values $y_i$, $y_{i + 1}$, $D_i$, and $D_{i + 1}$. (Remember that $x_i = i$.) How quickly can we compute the $4n$ coefficients from the point-value pairs and first derivatives?
The question remains of how to choose the first derivatives of $f(x)$ at the knots. One method is to require the second derivatives to be continuous at the knots:
$$f''(x_{i + 1}) = f''\_i(1) = f''\_{i + 1}(0)$$
for $i = 0, 1, \ldots, n - 2$. At the first and last knots, we assume that $f''(x_0) = f''\_0(0) = 0$ and $f''(x_n) = f''_{n - 1}(1) = 0$; these assumptions make $f(x)$ a **_natural_** cubic spline.
**b.** Use the continuity constraints on the second derivative to show that for $i = 1, 2, \ldots, n - 1$,
$$D_{i - 1} + 4D_i + D_{i + 1} = 3(y_{i + 1} - y_{i - 1}). \tag{28.21}$$
**c.** Show that
$$
\begin{aligned}
2D_0 + D_1 & = 3(y_1 - y_0), & \text{(28.22)} \\\\
D_{n - 1} + 2D_n & = 3(y_n - y_{n - 1}). & \text{(28.23)}
\end{aligned}
$$
**d.** Rewrite equations $\text{(28.21)}$–$\text{(28.23)}$ as a matrix equation involving the vector $D = \langle D_0, D_1, \ldots, D_n \rangle$ of unknowns. What attributes does the matrix in your equation have?
**e.** Argue that a natural cubic spline can interpolate a set of $n + 1$ point-value pairs in $O(n)$ time (see Problem 28-1).
**f.** Show how to determine a natural cubic spline that interpolates a set of $n + 1$ points $(x_i, y_i)$ satisfying $x_0 < x_1 < \cdots < x_n$, even when $x_i$ is not necessarily equal to $i$. What matrix equation must your method solve, and how quickly does your algorithm run?
|
**a.** We have $a_i = f_i(0) = y_i$ and $b_i = f_i'(0) = f'(x_i) = D_i$. Since $f_i(1) = a_i + b_i + c_i + d_i$ and $f_i'(1) = b_i + 2c_i + 3d_i$, we have $d_i = D_{i + 1} - 2y_{i + 1} + 2y_i + D_i$ which implies $c_i = 3y_{i + 1} - 3y_i - D_{i + 1} - 2D_i$. Since each coefficient can be computed in constant time from the known values, we can compute the $4n$ coefficients in linear time.
**b.** By the continuity constraints, we have $f_i''(1) = f_{i + 1}''(0)$ which implies that $2c_i + 6d_i = 2c_{i + 1}$, or $c_i + 3d_i = c_{i + 1}$. Using our equations from above, this is equivalent to
$$D_i + 2D_{i + 1} + 3y_i - 3y_{i + 1} = 3y_{i + 2} - 3y_{i + 1} - D_{i + 2} - 2D_{i + 1}.$$
Rearranging gives the desired equation
$$D_i + 4D_{i + 1} + D_{i + 2} = 3(y_{i + 2} - y_i).$$
**c.** The condition on the left endpoint tells us that $f_0''(0) = 0$, which implies $2c_0 = 0$. By part (a), this means $3(y_1 − y_0) = 2D_0 + D_1$. The condition on the right endpoint tells us that $f_{n - 1}''(1) = 0$, which implies $c_{n - 1} + 3d_{n - 1} = 0$. By part (a), this means $3(y_n - y_{n - 1}) = D_{n - 1} + 2D_n$.
**d.** The matrix equation has the form $AD = Y$, where $A$ is symmetric and tridiagonal. It looks like this:
$$
\begin{pmatrix}
2 & 1 & 0 & 0 & \cdots & 0 \\\\
1 & 4 & 1 & 0 & \cdots & 0 \\\\
0 & \ddots & \ddots & \ddots & \cdots & \vdots \\\\
\vdots & \cdots & 1 & 4 & 1 & 0 \\\\
0 & \cdots & 0 & 1 & 4 & 1 \\\\
0 & \cdots & 0 & 0 & 1 & 2 \\\\
\end{pmatrix}
\begin{pmatrix}
D_0 \\\\
D_1 \\\\
D_2 \\\\
\vdots \\\\
D_{n - 1} \\\\
D_n
\end{pmatrix}
=
\begin{pmatrix}
3(y_1 - y_0) \\\\
3(y_2 - y_0) \\\\
3(y_3 - y_1) \\\\
\vdots \\\\
3(y_n - y_{n - 2}) \\\\
3(y_n - y_{n - 1})
\end{pmatrix}
.
$$
**e.** Since the matrix is symmetric and tridiagonal, Problem 28-1 (e) tells us that we can solve the equation in $O(n)$ time by performing an LUP decomposition. By part (a), once we know each $D_i$ we can compute each $f_i$ in $O(n)$ time.
**f.** For the general case of solving the nonuniform natural cubic spline problem, we require that $f(x_{i + 1}) = f_i(x_{i + 1} − x_i) = y_{i + 1}$, $f'(x_{i + 1}) = f_i'(x_{i + 1} - x_i) = f_{i + 1}'(0)$ and $f''(x_{i + 1}) = f_i''(x_{i + 1} - x_i) = f_{i + 1}''(0)$. We can still solve for each of $a_i$, $b_i$, $c_i$ and $d_i$ in terms of $y_i$, $y_{i + 1}$, $D_i$ and $D_{i + 1}$, so we still get a tridiagonal matrix equation. The solution will be slightly messier, but ultimately it is solved just like the simpler case, in $O(n)$ time.
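For concreteness, here is a short Python sketch of the uniform case $x_i = i$ (function names are ours): it assembles the tridiagonal system from parts (b)-(d), solves for $D$, and recovers each cubic's coefficients with the formulas from part (a). A dense solver is used for brevity; an $O(n)$ tridiagonal solver, as in Problem 28-1, would be used in practice.

```python
import numpy as np

def natural_spline(y):
    """Return (a_i, b_i, c_i, d_i) for each piece of the natural spline."""
    n = len(y) - 1
    M = np.zeros((n + 1, n + 1))
    r = np.zeros(n + 1)
    M[0, 0], M[0, 1], r[0] = 2, 1, 3 * (y[1] - y[0])
    for i in range(1, n):
        M[i, i - 1:i + 2] = [1, 4, 1]
        r[i] = 3 * (y[i + 1] - y[i - 1])
    M[n, n - 1], M[n, n], r[n] = 1, 2, 3 * (y[n] - y[n - 1])
    D = np.linalg.solve(M, r)  # O(n) with a tridiagonal solver
    return [(y[i],                                         # a_i
             D[i],                                         # b_i
             3 * (y[i + 1] - y[i]) - D[i + 1] - 2 * D[i],  # c_i
             D[i + 1] + D[i] - 2 * (y[i + 1] - y[i]))      # d_i
            for i in range(n)]

print(natural_spline([0.0, 1.0, 0.0, 1.0]))
```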
|
[] | false |
[] |
29-29.1-1
|
29
|
29.1
|
29.1-1
|
docs/Chap29/29.1.md
|
If we express the linear program in $\text{(29.24)}$–$\text{(29.28)}$ in the compact notation of $\text{(29.19)}$–$\text{(29.21)}$, what are $n$, $m$, $A$, $b$, and $c$?
|
$$
\begin{aligned}
n = m & = 3, \\\\
A & =
\begin{pmatrix}
1 & 1 & -1 \\\\
-1 & -1 & 1 \\\\
1 & -2 & 2
\end{pmatrix}
, \\\\
b & =
\begin{pmatrix}
7 \\\\
-7 \\\\
4
\end{pmatrix}
, \\\\
c & =
\begin{pmatrix}
2 \\\\
-3 \\\\
3
\end{pmatrix}
.
\end{aligned}
$$
|
[] | false |
[] |
29-29.1-2
|
29
|
29.1
|
29.1-2
|
docs/Chap29/29.1.md
|
Give three feasible solutions to the linear program in $\text{(29.24)}$–$\text{(29.28)}$. What is the objective value of each one?
|
1. $(x_1, x_2, x_3) = (6, 1, 0)$ with objective value $9$.
2. $(x_1, x_2, x_3) = (5, 2, 0)$ with objective value $4$.
3. $(x_1, x_2, x_3) = (4, 3, 0)$ with objective value $-1$.
|
[] | false |
[] |
29-29.1-3
|
29
|
29.1
|
29.1-3
|
docs/Chap29/29.1.md
|
For the slack form in $\text{(29.38)}$–$\text{(29.41)}$, what are $N$, $B$, $A$, $b$, $c$, and $v$?
|
$$
\begin{aligned}
N & = \\{1, 2, 3\\}, \\\\
B & = \\{4, 5, 6\\}, \\\\
A & =
\begin{pmatrix}
1 & 1 & -1 \\\\
-1 & -1 & 1 \\\\
1 & -2 & 2
\end{pmatrix}
, \\\\
b & =
\begin{pmatrix}
7 \\\\
-7 \\\\
4
\end{pmatrix}
, \\\\
c & =
\begin{pmatrix}
2 \\\\
-3 \\\\
3
\end{pmatrix}
, \\\\
v & = 0.
\end{aligned}
$$
|
[] | false |
[] |
29-29.1-4
|
29
|
29.1
|
29.1-4
|
docs/Chap29/29.1.md
|
Convert the following linear program into standard form:
$$
\begin{array}{lrcrcrcrl}
\text{minimize} & 2x_1 & + & 7x_2 & + & x_3 & & \\\\
\text{subject to} & \\\\
& x_1 & & & - & x_3 & = & 7 \\\\
& 3x_1 & + & x_2 & & & \ge & 24 \\\\
& & & x_2 & & & \ge & 0 \\\\
& & & & & x_3 & \le & 0 & .
\end{array}
$$
|
$$
\begin{array}{lrcrcrcrcrl}
\text{maximize} & -2x_1 & + & 2x_2 & - & 7x_3 & + & x_4 & & \\\\
\text{subject to} & \\\\
& -x_1 & + & x_2 & & & - & x_4 & \le & -7 \\\\
& x_1 & - & x_2 & & & + & x_4 & \le & 7 \\\\
& -3x_1 & + & 3x_2 & - & x_3 & & & \le & -24 \\\\
& & x_1, x_2, x_3, x_4 & & & & & & \ge & 0 & .
\end{array}
$$
|
[] | false |
[] |
29-29.1-5
|
29
|
29.1
|
29.1-5
|
docs/Chap29/29.1.md
|
Convert the following linear program into slack form:
$$
\begin{array}{lrcrcrcrl}
\text{maximize} & 2x_1 & & & - & 6x_3 \\\\
\text{subject to} & \\\\
& x_1 & + & x_2 & - & x_3 & \le & 7 \\\\
& 3x_1 & - & x_2 & & & \ge & 8 \\\\
& -x_1 & + & 2x_2 & + & 2x_3 & \ge & 0 \\\\
& & x_1, x_2, x_3 & & & & \ge & 0 & .
\end{array}
$$
What are the basic and nonbasic variables?
|
First, we will multiply the second and third inequalities by minus one to make it so that they are all $\le$ inequalities. We will introduce the three new variables $x_4$, $x_5$, $x_6$, and perform the usual procedure for rewriting in slack form
$$
\begin{array}{rcrcrcrcr}
x_4 & = & 7 & - & x_1 & - & x_2 & + & x_3 \\\\
x_5 & = & -8 & + & 3x_1 & - & x_2 \\\\
x_6 & = & & - & x_1 & + & 2x_2 & + & 2x_3 \\\\
x_1, x_2, x_3, x_4, x_5, x_6 & \ge & 0 & ,
\end{array}
$$
where we are still trying to maximize $2x_1 - 6x_3$. The basic variables are $x_4$, $x_5$, $x_6$ and the nonbasic variables are $x_1$, $x_2$, $x_3$.
|
[] | false |
[] |
29-29.1-6
|
29
|
29.1
|
29.1-6
|
docs/Chap29/29.1.md
|
Show that the following linear program is infeasible:
$$
\begin{array}{lrcrcrl}
\text{minimize} & 3x_1 & - & 2x_2 \\\\
\text{subject to} & \\\\
& x_1 & + & x_2 & \le & 2 \\\\
& -2x_1 & - & 2x_2 & \le & -10 \\\\
& & x_1, x_2 & & \ge & 0 & .
\end{array}
$$
|
By dividing the second constraint by $2$ and adding it to the first, we obtain $0 \le -3$, which is impossible. Therefore the linear program is infeasible.
|
[] | false |
[] |
29-29.1-7
|
29
|
29.1
|
29.1-7
|
docs/Chap29/29.1.md
|
Show that the following linear program is unbounded:
$$
\begin{array}{lrcrcrl}
\text{maximize} & x_1 & - & x_2 \\\\
\text{subject to} & \\\\
& -2x_1 & + & x_2 & \le & -1 \\\\
& -x_1 & - & 2x_2 & \le & -2 \\\\
& & x_1, x_2 & & \ge & 0 & .
\end{array}
$$
|
For any number $r > 1$, we can set $x_1 = 2r$ and $x_2 = r$. Then, the constraints become
$$
\begin{array}{rcrcrl}
-2(2r) & + & r = -3r & \le & -1 \\\\
-2r & - & 2r = -4r & \le & -2 \\\\
& 2r, r & & \ge & 0 & .
\end{array}
$$
All of these inequalities are clearly satisfied because of our initial restriction in selecting $r$. Now, looking at the objective function, its value is $2r - r = r$. So, since we can select $r$ to be arbitrarily large and still satisfy all of the constraints, we can achieve an arbitrarily large value of the objective function, and the linear program is unbounded.
|
[] | false |
[] |
29-29.1-8
|
29
|
29.1
|
29.1-8
|
docs/Chap29/29.1.md
|
Suppose that we have a general linear program with $n$ variables and $m$ constraints, and suppose that we convert it into standard form. Give an upper bound on the number of variables and constraints in the resulting linear program.
|
In the worst case, we have to introduce $2$ variables for every variable to ensure that we have nonnegativity constraints, so the resulting program will have $2n$ variables. If each of the $m$ constraints (those which are not non-negativity constraints on variables) is an equality, we would have to double the number of constraints to create inequalities, resulting in $2m$ constraints. Changing minimization to maximization and greater-than signs to less-than signs doesn't affect the number of variables or constraints, so the upper bound is $2n$ on variables and $2m + 2n$ on constraints, where the $2n$ accounts for the at most $2n$ new non-negativity inequalities.
|
[] | false |
[] |
29-29.1-9
|
29
|
29.1
|
29.1-9
|
docs/Chap29/29.1.md
|
Give an example of a linear program for which the feasible region is not bounded, but the optimal objective value is finite.
|
Consider the linear program where we want to maximize $x_1 - x_2$ subject to the constraints $x_1 - x_2 \le 1$ and $x_1, x_2 \ge 0$. Clearly the objective value can never be greater than one, and it is easy to achieve the optimal value of $1$ by setting $x_1 = 1$ and $x_2 = 0$. This feasible region is unbounded because for any number $r$, we could set $x_1 = x_2 = r$, and that would be feasible because the difference of the two would be zero, which is $\le 1$.
If we further wanted there to be a single solution achieving the finite optimal value, we could add the requirement that $x_1 \le 1$.
|
[] | false |
[] |
29-29.2-1
|
29
|
29.2
|
29.2-1
|
docs/Chap29/29.2.md
|
Put the single-pair shortest-path linear program from $\text{(29.44)}$–$\text{(29.46)}$ into standard form.
|
The objective is already in normal form. However, some of the constraints are equality constraints instead of $\le$ constraints. This means that we need to rewrite them as a pair of inequality constraints, the overlap of whose solutions is just the case where we have equality. We also need to deal with the fact that most of the variables can be negative. To do that, we will introduce variables for the negative part and positive part, each of which must be nonnegative, and we'll just be sure to subtract the negative part. $d_s$ need not be changed in this way since it can never be negative, as we are not assuming the existence of negative-weight cycles.
$$
\begin{aligned}
d_v^+ - d_v^- - d_u^+ + d_u^- \le w(u, v) \text{ for every edge } (u, v) \\\\
d_s \le 0
\end{aligned}
$$
|
[] | false |
[] |
29-29.2-2
|
29
|
29.2
|
29.2-2
|
docs/Chap29/29.2.md
|
Write out explicitly the linear program corresponding to finding the shortest path from node $s$ to node $y$ in Figure 24.2(a).
|
$$
\begin{array}{ll}
\text{maximize} & d_y \\\\
\text{subject to} & \\\\
& d_t \le d_s + 3 \\\\
& d_x \le d_t + 6 \\\\
& d_y \le d_s + 5 \\\\
& d_y \le d_t + 2 \\\\
& d_z \le d_x + 2 \\\\
& d_t \le d_y + 1 \\\\
& d_x \le d_y + 4 \\\\
& d_z \le d_y + 1 \\\\
& d_s \le d_z + 1 \\\\
& d_x \le d_z + 7 \\\\
& d_s = 0.
\end{array}
$$
|
[] | false |
[] |
29-29.2-3
|
29
|
29.2
|
29.2-3
|
docs/Chap29/29.2.md
|
In the single-source shortest-paths problem, we want to find the shortest-path weights from a source vertex $s$ to all vertices $v \in V$. Given a graph $G$, write a linear program for which the solution has the property that $d_v$ is the shortest-path weight from $s$ to $v$ for each vertex $v \in V$.
|
We will follow a similar idea to the way to when we were finding the shortest path between two particular vertices.
$$
\begin{array}{ll}
\text{maximize} & \sum_{v \in V} d_v \\\\
\text{subject to} & \\\\
& d_v \le d_u + w(u, v) \text{ for each edge } (u, v) \\\\
& d_s = 0.
\end{array}
$$
The first type of constraint makes sure that we never say that a vertex is further away than it would be if we just took the edge corresponding to that constraint. Also, since we are trying to maximize all of the variables, we will make it so that there is no slack anywhere, and so all the $d_v$ values will correspond to lengths of shortest paths to $v$. This is because the only thing holding back the variables is the information about relaxing along the edges, which is what determines shortest paths.
|
[] | false |
[] |
29-29.2-4
|
29
|
29.2
|
29.2-4
|
docs/Chap29/29.2.md
|
Write out explicitly the linear program corresponding to finding the maximum flow in Figure 26.1(a).
|
$$
\begin{array}{lll}
\text{maximize} & f_{sv_1} + f_{sv_2} \\\\
\text{subject to} & \\\\
& f_{sv_1} & \le 16 \\\\
& f_{sv_2} & \le 14 \\\\
& f_{v_1v_3} & \le 12 \\\\
& f_{v_2v_1} & \le 4 \\\\
& f_{v_2v_4} & \le 14 \\\\
& f_{v_3v_2} & \le 9 \\\\
& f_{v_3t} & \le 20 \\\\
& f_{v_4v_3} & \le 7 \\\\
& f_{v_4t} & \le 4 \\\\
& f_{sv_1} + f_{v_2v_1} & = f_{v_1v_3} \\\\
& f_{sv_2} + f_{v_3v_2} & = f_{v_2v_1} + f_{v_2v_4} \\\\
& f_{v_1v_3} + f_{v_4v_3} & = f_{v_3v_2} + f_{v_3t} \\\\
& f_{v_2v_4} & = f_{v_4v_3} + f_{v_4t} \\\\
& f_{uv} & \ge 0 \text{ for } u, v \in \\{s, v_1, v_2, v_3, v_4, t\\}.
\end{array}
$$
|
[] | false |
[] |
29-29.2-5
|
29
|
29.2
|
29.2-5
|
docs/Chap29/29.2.md
|
Rewrite the linear program for maximum flow $\text{(29.47)}$–$\text{(29.50)}$ so that it uses only $O(V + E)$ constraints.
|
All we need to do to bring the number of constraints down from $O(V^2)$ to $O(V + E)$ is to replace the way we index the flows. Instead of indexing by a pair of vertices, we will index by an edge. This won't change anything about the analysis because between pairs of vertices that don't have an edge between them, there definitely won't be any flow. Also, it brings the number of constraints of the first and third type down to $O(E)$, and the number of constraints of the second kind stays at $O(V)$.
$$
\begin{array}{lll}
\text{maximize} & \sum_{\text{edges $e$ leaving $s$}} f_e - \sum_{\text{edges $e$ entering $s$}} f_e \\\\
\text{subject to} & \\\\
& f_{(u, v)} \le c(u, v) \text{ for each edge } (u, v) \\\\
& \sum_{\text{edges $e$ leaving $u$}} f_e - \sum_{\text{edges $e$ entering $u$}} f_e = 0 \text{ for each vertex } u \in V - \\{s, t\\} \\\\
& f_e \ge 0 \text{ for each edge } e.
\end{array}
$$
|
[] | false |
[] |
29-29.2-6
|
29
|
29.2
|
29.2-6
|
docs/Chap29/29.2.md
|
Write a linear program that, given a bipartite graph $G = (V, E)$ solves the maximum-bipartite-matching problem.
|
Recall from section 26.3 that we can solve the maximum-bipartite-matching problem by viewing it as a network flow problem, where we append a source $s$ and sink $t$, each connected to every vertex is $L$ and $R$ respectively by an edge with capacity $1$, and we give every edge already in the bipartite graph capacity $1$. The integral maximum flows are in correspondence with maximum bipartite matchings. In this setup, the linear programming problem to solve is as follows:
$$
\begin{aligned}
\text{maximize} & \sum_{v \in L} f_{sv} \\\\
\text{subject to} & \\\\
& f_{(u, v)} \le 1 \text{ for each } u, v \in \\{s\\} \cup L \cup R \cup \\{t\\} = V \\\\
& \sum_{v \in V} f_{vu} = \sum_{v \in V} f_{uv} \text{ for each } u \in L \cup R \\\\
& f_{uv} \ge 0 \text{ for each } u, v \in V
\end{aligned}
$$
|
[] | false |
[] |
29-29.2-7
|
29
|
29.2
|
29.2-7
|
docs/Chap29/29.2.md
|
In the **_minimum-cost multicommodity-flow problem_**, we are given directed graph $G = (V, E)$ in which each edge $(u, v) \in E$ has a nonnegative capacity $c(u, v) \ge 0$ and a cost $a(u, v)$. As in the multicommodity-flow problem, we are given $k$ different commodities, $K_1, K_2, \ldots, K_k$, where we specify commodity $i$ by the triple $K_i = (s_i, t_i, d_i)$. We define the flow $f_i$ for commodity $i$ and the aggregate flow $f_{uv}$ on edge $(u, v)$ as in the multicommodity-flow problem. A feasible flow is one in which the aggregate flow on each edge $(u, v)$ is no more than the capacity of edge $(u, v)$. The cost of a flow is $\sum_{u, v \in V} a(u, v)f_{uv}$, and the goal is to find the feasible flow of minimum cost. Express this problem as a linear program.
|
As in the minimum-cost flow problem, we have constraints for the edge capacities, for the conservation of flow, and for nonnegativity. The difference is that whereas before we required exactly $d$ units to flow, we now require that, for each commodity, the right amount of that commodity flows. The conservation equalities will be applied to each different type of commodity independently. If we superscript $f$, it will denote the type of commodity the flow is describing; if we do not superscript it, it will denote the aggregate flow.
We want to minimize
$$\sum_{u, v \in V} a(u, v) f_{uv}.$$
The capacity constraints are that
$$\sum_{i \in [k]} f_{uv}^i \le c(u, v) \text{ for each edge } (u, v).$$
The conservation constraints are that for every $i \in [k]$, for every $u \in V \backslash \\{s_i, t_i\\}$.
$$\sum_{v \in V} f_{uv}^i = \sum_{v \in V} f_{vu}^i.$$
Now, the constraints that correspond to requiring a certain amount of flow are that for every $i \in [k]$.
$$\sum_{v \in V} f_{s_i, v}^i - \sum_{v \in V} f_{v, s_i}^i = d_i.$$
Now, we put in the constraint that makes sure what we called the aggregate flow is actually the aggregate flow, so, for every $u, v \in V$,
$$f_{uv} = \sum_{i \in [k]} f_{uv}^i.$$
Finally, we get to the fact that all flows are nonnegative, for every $u, v \in V$,
$$f_{uv} \ge 0.$$
|
[] | false |
[] |
29-29.3-1-1
|
29
|
29.3
|
29.3-1
|
docs/Chap29/29.3.md
|
Complete the proof of Lemma 29.4 by showing that it must be the case that $c = c'$ and $v = v'$.
|
We subtract equation $\text{(29.81)}$ from equation $\text{(29.79)}$.
> $$z = v + \sum_{j \in N} c_j x_j, \tag{29.79}$$
> $$z = v' + \sum_{j \in N} c_j' x_j. \tag{29.81}$$
Thus we have,
$$
\begin{aligned}
0 & = v - v' + \sum_{j \in N} (c_j - c_j') x_j. \\\\
\sum_{j \in N} c_j' x_j & = v - v' + \sum_{j \in N} c_j x_j.
\end{aligned}
$$
By Lemma 29.3, we have $c_j = c_j'$ for every $j$ and $v = v'$ since $v - v' = 0$.
|
[] | false |
[] |
29-29.3-1-2
|
29
|
29.3
|
29.3-1
|
docs/Chap29/29.3.md
|
$$z = v + \sum_{j \in N} c_j x_j, \tag{29.79}$$
|
We subtract equation $\text{(29.81)}$ from equation $\text{(29.79)}$.
> $$z = v + \sum_{j \in N} c_j x_j, \tag{29.79}$$
> $$z = v' + \sum_{j \in N} c_j' x_j. \tag{29.81}$$
Thus we have,
$$
\begin{aligned}
0 & = v - v' + \sum_{j \in N} (c_j - c_j') x_j. \\\\
\sum_{j \in N} c_j' x_j & = v - v' + \sum_{j \in N} c_j x_j.
\end{aligned}
$$
By Lemma 29.3, we have $c_j = c_j'$ for every $j$ and $v = v'$ since $v - v' = 0$.
|
[] | false |
[] |
29-29.3-1-3
|
29
|
29.3
|
29.3-1
|
docs/Chap29/29.3.md
|
$$z = v' + \sum_{j \in N} c_j' x_j. \tag{29.81}$$
|
We subtract equation $\text{(29.81)}$ from equation $\text{(29.79)}$.
> $$z = v + \sum_{j \in N} c_j x_j, \tag{29.79}$$
> $$z = v' + \sum_{j \in N} c_j' x_j. \tag{29.81}$$
Thus we have,
$$
\begin{aligned}
0 & = v - v' + \sum_{j \in N} (c_j - c_j') x_j. \\\\
\sum_{j \in N} c_j' x_j & = v - v' + \sum_{j \in N} c_j x_j.
\end{aligned}
$$
By Lemma 29.3, we have $c_j = c_j'$ for every $j$ and $v = v'$ since $v - v' = 0$.
|
[] | false |
[] |
29-29.3-2
|
29
|
29.3
|
29.3-2
|
docs/Chap29/29.3.md
|
Show that the call to $\text{PIVOT}$ in line 12 of $\text{SIMPLEX}$ never decreases the value of $v$.
|
The only time $v$ is updated in $\text{PIVOT}$ is line 14, so it will suffice to show that $c_e \hat b_e \ge 0$. Prior to making the call to $\text{PIVOT}$, we choose an index $e$ such that $c_e > 0$, and this is unchanged in $\text{PIVOT}$. We set $\hat b_e$ in line 3 to be $b_l / a_{le}$.
The loop invariant proved in Lemma 29.2 tells us that $b_l \ge 0$. The if-condition of line 6 of $\text{SIMPLEX}$ tells us that $\delta_i$ is noninfinite only when $a_{ie} > 0$, and we choose $l$ to minimize $\delta_l$, so we must have $a_{le} > 0$. Thus, $c_e \hat b_e \ge 0$, which implies $v$ can never decrease.
|
[] | false |
[] |
29-29.3-3
|
29
|
29.3
|
29.3-3
|
docs/Chap29/29.3.md
|
Prove that the slack form given to the $\text{PIVOT}$ procedure and the slack form that the procedure returns are equivalent.
|
To show that the two slack forms are equivalent, we will show both that they have equal objective functions, and their sets of feasible solutions are equal.
First, we'll check that their sets of feasible solutions are equal. Basically all we do to the constraints when we pivot is take the non-basic variable $x_e$ and solve the equation corresponding to the basic variable $x_l$ for $x_e$. We then take that expression and replace $x_e$ in all the other constraints with it. Since each of these algebraic operations is valid, the resulting system is algebraically equivalent to the original.
Next, we'll see that the objective functions are equal. We decrease each $c_j$ by $c_e \hat a_{ej}$, which is to say that we substitute, for the non-basic variable we are making basic, the expression it was shown to equal once it became basic.
Since the slack form returned by $\text{PIVOT}$, has the same feasible region and an equal objective function, it is equivalent to the original slack form passed in.
|
[] | false |
[] |
29-29.3-4
|
29
|
29.3
|
29.3-4
|
docs/Chap29/29.3.md
|
Suppose we convert a linear program $(A, b, c)$ in standard form to slack form. Show that the basic solution is feasible if and only if $b_i \ge 0$ for $i = 1, 2, \ldots, m$.
|
First suppose that the basic solution is feasible. We set each $x_i = 0$ for $1 \le i \le n$, so we have $x_{n + i} = b_i - \sum_{j = 1}^n a_{ij}x_j = b_i$ as a satisfied constraint. Since we also require $x_{n + i} \ge 0$ for all $1 \le i \le m$, this implies $b_i \ge 0$.
Now suppose $b_i \ge 0$ for all $i$. In the basic solution we set $x_i = 0$ for $1 \le i \le n$ which satisfies the nonnegativity constraints. We set $x_{n + i} = b_i$ for $1 \le i \le m$ which satisfies the other constraint equations, and also the nonnegativity constraints on the basic variables since $b_i \ge 0$. Thus, every constraint is satisfied, so the basic solution is feasible.
|
[] | false |
[] |
29-29.3-5
|
29
|
29.3
|
29.3-5
|
docs/Chap29/29.3.md
|
Solve the following linear program using $\text{SIMPLEX}$:
$$
\begin{array}{lrcrcrl}
\text{maximize} & 18x_1 & + & 12.5x_2 \\\\
\text{subject to} & \\\\
& x_1 & + & x_2 & \le & 20 \\\\
& x_1 & & & \le & 12 \\\\
& & & x_2 & \le & 16 \\\\
& & x_1, x_2 & & \ge & 0 & .
\end{array}
$$
|
First, we rewrite the linear program into its slack form
$$
\begin{array}{lrl}
\text{maximize} & 18x_1 + 12.5x_2 \\\\
\text{subject to} & \\\\
& x_3 & = 20 - x_1 - x_2 \\\\
& x_4 & = 12 - x_1 \\\\
& x_5 & = 16 - x_2 \\\\
& x_1, x_2, x_3, x_4, x_5 & \ge 0.
\end{array}
$$
The nonbasic variables $x_1$ and $x_2$ both have positive coefficients in the objective function, so we first choose $x_1$ as the entering variable. The equation for $x_4$ is the most restrictive, limiting $x_1$ by $12$, so $x_4$ leaves the basis and we substitute $x_1 = 12 - x_4$, which gives
$$
\begin{array}{rcrcrcrl}
z & = & 216 & + & 12.5x_2 & - & 18x_4 \\\\
x_1 & = & 12 & & & - & x_4 \\\\
x_3 & = & 8 & - & x_2 & + & x_4 \\\\
x_5 & = & 16 & - & x_2 & & & .
\end{array}
$$
Now $x_2$ is the only nonbasic variable with a positive coefficient in the objective function. The equation for $x_3$ limits $x_2$ by $8$, which is more restrictive than the limit of $16$ from $x_5$, so $x_3$ leaves and we substitute $x_2 = 8 - x_3 + x_4$, which gives
$$
\begin{array}{rcrcrcrl}
z & = & 316 & - & 12.5x_3 & - & 5.5x_4 \\\\
x_1 & = & 12 & & & - & x_4 \\\\
x_2 & = & 8 & - & x_3 & + & x_4 \\\\
x_5 & = & 8 & + & x_3 & - & x_4 & .
\end{array}
$$
We now stop since no more non-basic variables appear in the objective with a positive coefficient. Our solution is $(12, 8, 0, 0, 8)$ and has a value of $316$. Going back to the standard form we started with, we just disregard the values of $x_3$ through $x_5$ and have the solution that $x_1 = 12$ and $x_2 = 8$. We can check that this is both feasible and has the objective achieve $316$.
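As a quick numerical sanity check (not part of the $\text{SIMPLEX}$ trace), the same optimum can be reproduced with an off-the-shelf LP solver; this sketch assumes SciPy is available, and since `scipy.optimize.linprog` minimizes, the objective is negated.

```python
from scipy.optimize import linprog

c = [-18, -12.5]                    # maximize 18*x1 + 12.5*x2 as a minimization
A_ub = [[1, 1], [1, 0], [0, 1]]     # x1 + x2 <= 20, x1 <= 12, x2 <= 16
b_ub = [20, 12, 16]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)   # variables default to >= 0
print(res.x, -res.fun)                   # expect approximately [12. 8.] and 316.0
```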
|
[] | false |
[] |
29-29.3-6
|
29
|
29.3
|
29.3-6
|
docs/Chap29/29.3.md
|
Solve the following linear program using $\text{SIMPLEX}$:
$$
\begin{array}{lrcrcrl}
\text{maximize} & 5x_1 & - & 3x_2 \\\\
\text{subject to} & \\\\
& x_1 & - & x_2 & \le & 1 \\\\
& 2x_1 & + & x_2 & \le & 2 \\\\
& & x_1, x_2 & & \ge & 0 & .
\end{array}
$$
|
First, we convert the linear program into its slack form
$$
\begin{array}{rcrcrcrl}
z & = & & & 5x_1 & - & 3x_2 \\\\
x_3 & = & 1 & - & x_1 & + & x_2 \\\\
x_4 & = & 2 & - & 2x_1 & - & x_2 & .
\end{array}
$$
The nonbasic variables are $x_1$ and $x_2$. Of these, only $x_1$ has a positive coefficient in the objective function, so we must choose $x_e = x_1$. Both equations limit $x_1$ by $1$, so we'll choose the first one to rewrite $x_1$ with. Using $x_1 = 1 − x_3 + x_2$ we obtain the new system
$$
\begin{array}{rcrcrcrl}
z & = & 5 & - & 5x_3 & + & 2x_2 & \\\\
x_1 & = & 1 & - & x_3 & + & x_2 & \\\\
x_4 & = & & & 2x_3 & - & 3x_2 & .
\end{array}
$$
Now $x_2$ is the only nonbasic variable with positive coefficient in the objective function, so we set $x_e = x_2$. The last equation limits $x_2$ by $0$ which is most restrictive, so we set $x_2 = \frac{2}{3} x_3 − \frac{1}{3} x_4$. Rewriting, our new system becomes
$$
\begin{array}{rcrcrcrl}
z & = & 5 & - & \frac{11}{3} x_3 & - & \frac{2}{3} x_4 \\\\
x_1 & = & 1 & - & \frac{1}{3} x_3 & - & \frac{1}{3} x_4 \\\\
x_2 & = & & & \frac{2}{3} x_3 & - & \frac{1}{3} x_4 & .
\end{array}
$$
Every nonbasic variable now has negative coefficient in the objective function, so we take the basic solution $(x_1, x_2, x_3, x_4) = (1, 0, 0, 0)$. The objective value this achieves is $5$.
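The same kind of numerical cross-check as before applies here, again assuming SciPy is available and negating the objective for `linprog`.

```python
from scipy.optimize import linprog

c = [-5, 3]                          # maximize 5*x1 - 3*x2 as a minimization
A_ub = [[1, -1], [2, 1]]             # x1 - x2 <= 1, 2*x1 + x2 <= 2
b_ub = [1, 2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.x, -res.fun)               # expect approximately [1. 0.] and 5.0
```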
|
[] | false |
[] |
29-29.3-7
|
29
|
29.3
|
29.3-7
|
docs/Chap29/29.3.md
|
Solve the following linear program using $\text{SIMPLEX}$:
$$
\begin{array}{lrcrcrcrl}
\text{minimize} & x_1 & + & x_2 & + & x_3 \\\\
\text{subject to} & \\\\
& 2x_1 & + & 7.5x_2 & + & 3x_3 & \ge & 10000 \\\\
& 20x_1 & + & 5x_2 & + & 10x_3 & \ge & 30000 \\\\
& & x_1, x_2, x_3 & & & & \ge & 0 & .
\end{array}
$$
|
First, we convert this equation to the slack form. Doing so doesn't change the objective, but the constraints become
$$
\begin{array}{rcrcrcrcr}
z & = & & - & x_1 & - & x_2 & - & x_3 \\\\
x_4 & = & -10000 & + & 2x_1 & + & 7.5x_2 & + & 3x_3 \\\\
x_5 & = & -30000 & + & 20x_1 & + & 5x_2 & + & 10x_3 \\\\
x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
Also, since the objective is to minimize a given function, we'll change it over to maximizing the negative of that function. In particular maximize $−x_1 − x_2 − x_3$. Now, we note that the initial basic solution is not feasible, because it would leave $x_4$ and $x_5$ being negative. This means that finding an initial solution requires using the method of section 29.5. The auxiliary linear program in slack form is
$$
\begin{array}{rcrcrcrcrcr}
z & = & & - & x_0 \\\\
x_4 & = & -10000 & + & x_0 & + & 2x_1 & + & 7.5x_2 & + & 3x_3 \\\\
x_5 & = & -30000 & + & x_0 & + & 20x_1 & + & 5x_2 & + & 10x_3 \\\\
x_0, x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
We choose $x_0$ as the entering variable and $x_5$ as the leaving variable, since it is the basic variable whose value in the basic solution is most negative. After pivoting, we have the slack form
$$
\begin{array}{rcrcrcrcrcr}
z & = & -30000 & + & 20x_1 & + & 5x_2 & + & 10x_3 & - & x_5 \\\\
x_0 & = & 30000 & - & 20x_1 & - & 5x_2 & - & 10x_3 & + & x_5 \\\\
x_4 & = & 20000 & - & 18x_1 & + & 2.5x_2 & - & 7x_3 & + & x_5 \\\\
x_0, x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
The associated basic solution is feasible, so now we just need to repeatedly call $\text{PIVOT}$ until we obtain an optimal solution to $L_{aux}$. We'll choose $x_2$ as our entering variable. This gives
$$
\begin{array}{rcrcrcrcrcr}
z & = & & - & x_0 \\\\
x_2 & = & 6000 & - & 0.2x_0 & - & 4x_1 & - & 2x_3 & + & 0.2x_5 \\\\
x_4 & = & 35000 & - & 0.5x_0 & - & 28x_1 & - & 12x_3 & + & 1.5x_5 \\\\
x_0, x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
This slack form is the final solution to the auxiliary problem. Since this solution has $x_0 = 0$, we know that our initial problem was feasible. Furthermore, since $x_0 = 0$, we can just remove it from the set of constraints. We then restore the original objective function, with appropriate substitutions made to include only the nonbasic variables. This yields
$$
\begin{array}{rcrcrcrcr}
z & = & -6000 & + & 3x_1 & + & x_3 & - & 0.2x_5 \\\\
x_2 & = & 6000 & - & 4x_1 & - & 2x_3 & + & 0.2x_5 \\\\
x_4 & = & 35000 & - & 28x_1 & - & 12x_3 & + & 1.5x_5 \\\\
x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
This slack form has a feasible basic solution, and we can return it to $\text{SIMPLEX}$. We choose $x_1$ as our entering variable. This gives
$$
\begin{array}{rcrcrcrcr}
z & = & -2250 & - & \frac{2}{7} x_3 & - & \frac{3}{28} x_4 & - & \frac{11}{280} x_5 \\\\
x_1 & = & 1250 & - & \frac{3}{7} x_3 & - & \frac{1}{28} x_4 & + & \frac{15}{280} x_5 \\\\
x_2 & = & 1000 & - & \frac{2}{7} x_3 & + & \frac{4}{28} x_4 & - & \frac{4}{280} x_5 \\\\
x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
At this point, all coefficients in the objective function are negative, so the basic solution is an optimal solution. This solution is $(x_1, x_2, x_3) = (1250, 1000, 0)$, which gives the value $2250$ for the original minimization objective.
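A numerical cross-check of this answer, assuming SciPy is available; the $\ge$ constraints are negated into $\le$ form for `linprog`, which already minimizes.

```python
from scipy.optimize import linprog

c = [1, 1, 1]                                  # minimize x1 + x2 + x3
A_ub = [[-2, -7.5, -3], [-20, -5, -10]]        # negated >= constraints
b_ub = [-10000, -30000]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.x, res.fun)     # expect approximately [1250. 1000. 0.] and 2250.0
```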
|
[] | false |
[] |
29-29.3-8
|
29
|
29.3
|
29.3-8
|
docs/Chap29/29.3.md
|
In the proof of Lemma 29.5, we argued that there are at most $\binom{m + n}{n}$ ways to choose a set $B$ of basic variables. Give an example of a linear program in which there are strictly fewer than $\binom{m + n}{n}$ ways to choose the set $B$.
|
Consider the simple program
$$
\begin{array}{rcrcrl}
z & = & & - & x_1 \\\\
x_2 & = & 1 & - & x_1 & .
\end{array}
$$
In this case, we have $m = n = 1$, so $\binom{m + n}{n} = \binom{2}{1} = 2$, however, since the only coefficients of the objective function are negative, we can't make any other choices for basic variable. We must immediately terminate with the basic solution $(x_1, x_2) = (0, 1)$, which is optimal.
|
[] | false |
[] |
29-29.4-1
|
29
|
29.4
|
29.4-1
|
docs/Chap29/29.4.md
|
Formulate the dual of the linear program given in Exercise 29.3-5.
|
By just transposing $A$, swapping $b$ and $c$, and switching the maximization to a minimization, we want to minimize $20y_1 + 12y_2 + 16y_3$ subject to the constraints
$$
\begin{aligned}
y_1 + y_2 & \ge 18 \\\\
y_1 + y_3 & \ge 12.5 \\\\
y_1, y_2, y_3 & \ge 0
\end{aligned}
$$
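A quick check, assuming SciPy is available, that this dual and the primal of Exercise 29.3-5 attain the same optimal value, as the duality theorem predicts (both constraint systems are rewritten in $\le$ form for `linprog`).

```python
from scipy.optimize import linprog

# Primal: maximize 18*x1 + 12.5*x2, written as a minimization.
primal = linprog([-18, -12.5], A_ub=[[1, 1], [1, 0], [0, 1]], b_ub=[20, 12, 16])
# Dual: minimize 20*y1 + 12*y2 + 16*y3, with the >= constraints negated.
dual = linprog([20, 12, 16], A_ub=[[-1, -1, 0], [-1, 0, -1]], b_ub=[-18, -12.5])
print(-primal.fun, dual.fun)         # both should be 316.0
```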
|
[] | false |
[] |
29-29.4-2
|
29
|
29.4
|
29.4-2
|
docs/Chap29/29.4.md
|
Suppose that we have a linear program that is not in standard form. We could produce the dual by first converting it to standard form, and then taking the dual. It would be more convenient, however, to be able to produce the dual directly. Explain how we can directly take the dual of an arbitrary linear program.
|
By working through each aspect of putting a general linear program into standard form, as outlined on page 852, we can show how to deal with transforming each into the dual individually. If the problem is a minimization instead of a maximization, replace $c_j$ by $−c_j$ in $\text{(29.84)}$. If there is a lack of nonnegativity constraint on $x_j$ we duplicate the $j$th column of $A$, which corresponds to duplicating the $j$th row of $A^{\text T}$. If there is an equality constraint for $b_i$, we convert it to two inequalities by duplicating then negating the $i$th column of $A^{\text T}$, duplicating then negating the $i$th entry of $b$, and adding an extra $y_i$ variable. We handle a greater-than-or-equal-to constraint $\sum_{j = 1}^n a_{ij}x_j \ge b_i$ by negating the $i$th column of $A^{\text T}$ and negating $b_i$. Then we solve the dual problem of minimizing $b^{\text T}y$ subject to $A^{\text T}y \ge c$ and $y \ge 0$.
|
[] | false |
[] |
29-29.4-3
|
29
|
29.4
|
29.4-3
|
docs/Chap29/29.4.md
|
Write down the dual of the maximum-flow linear program, as given in lines $\text{(29.47)}$–$\text{(29.50)}$ on page 860. Explain how to interpret this formulation as a minimum-cut problem.
|
First, we'll convert the linear program for maximum flow described in equation $\text{(29.47)}$-$\text{(29.50)}$ into standard form. The objective function says that $c$ is a vector indexed by a pair of vertices, and it is positive one if $s$ is the first index and negative one if $s$ is the second index (zero if it is both). Next, we'll modify the constraints by switching the equalities over into inequalities to get
$$
\begin{array}{rcll}
f_{uv} & \le & c(u, v) & \text{ for each } u, v \in V \\\\
\sum_{u \in V} f_{vu} & \le & \sum_{u \in V} f_{uv} & \text{ for each } v \in V - \\{s, t\\} \\\\
\sum_{u \in V} f_{vu} & \ge & \sum_{u \in V} f_{uv} & \text{ for each } v \in V - \\{s, t\\} \\\\
f_{uv} & \ge & 0 & \text{ for each } u, v \in V
\end{array}
$$
Then, we'll convert all but the last set of the inequalities to be $\le$ by multiplying the third line by $-1$.
$$
\begin{array}{rcll}
f_{uv} & \le & c(u, v) & \text{ for each } u, v \in V \\\\
\sum_{u \in V} f_{vu} & \le & \sum_{u \in V} f_{uv} & \text{ for each } v \in V - \\{s, t\\} \\\\
\sum_{u \in V} -f_{vu} & \le & \sum_{u \in V} -f_{uv} & \text{ for each } v \in V - \\{s, t\\} \\\\
f_{uv} & \ge & 0 & \text{ for each } u, v \in V
\end{array}
$$
Finally, we'll bring all the variables over to the left to get
$$
\begin{array}{rcll}
f_{uv} & \le & c(u, v) & \text{ for each } u, v \in V \\\\
\sum_{u \in V} f_{vu} - \sum_{u \in V} f_{uv} & \le & 0 & \text{ for each } v \in V - \\{s, t\\} \\\\
\sum_{u \in V} -f_{vu} - \sum_{u \in V} -f_{uv} & \le & 0 & \text{ for each } v \in V - \\{s, t\\} \\\\
f_{uv} & \ge & 0 & \text{ for each } u, v \in V
\end{array}
$$
Now, we can finally write down our $A$ and $b$. $A$ will be a $(|V|^2 + 2|V| − 4) \times |V|^2$ matrix built from smaller matrices $A_1$ and $A_2$ which correspond to the three types of constraints that we have (of course, not counting the non-negativity constraints). We will let $g(u, v)$ be any bijective mapping from $V \times V$ to $[|V|^2]$. We'll also let $h$ be any bijection from $V - \\{s, t\\}$ to $[|V| - 2]$.
$$
A =
\begin{pmatrix}
A_1 \\\\
A_2 \\\\
-A_2
\end{pmatrix},
$$
where $A_1$ is defined as having its row $g(u, v)$ be all zeroes except for having the value $1$ at the $g(u, v)$th entry. We define $A_2$ to have its row $h(u)$ be equal to $1$ at all columns $j$ for which $j = g(v, u)$ for some $v$ and equal to $-1$ at all columns $j$ for which $j = g(u, v)$ for some $v$. Lastly, we mention that $b$ is defined as having its $j$th entry be equal to $c(u, v)$ if $j = g(u, v)$ and zero if $j > |V|^2$.
Now that we have placed the linear program in standard form, we can take its dual. We want to minimize $\sum_{i = 1}^{|V|^2 + 2|V| - 4} b_iy_i$ given the constraints that all the $y$ values are non-negative, and $A^{\text T} y \ge c$. This dual can be read as a (fractional) minimum-cut problem: the dual variable attached to the capacity constraint on $(u, v)$ says how much of edge $(u, v)$ crosses the cut, so the objective is the total capacity crossing the cut, and an optimal $0$–$1$ solution corresponds to a minimum cut, whose capacity equals the maximum flow by the max-flow min-cut theorem.
|
[] | false |
[] |
29-29.4-4
|
29
|
29.4
|
29.4-4
|
docs/Chap29/29.4.md
|
Write down the dual of the minimum-cost-flow linear program, as given in lines $\text{(29.51)}$–$\text{(29.52)}$ on page 862. Explain how to interpret this problem in terms of graphs and flows.
|
First we need to put the linear programming problem into standard form, as follows:
$$
\begin{array}{lrcrl}
\text{maximize} & \sum_{(u, v) \in E} -a(u, v) f_{uv} \\\\
\text{subject to} & \\\\
& f_{uv} & \le & c(u, v) & \text{ for each } u, v \in V \\\\
& \sum_{v \in V} f_{vu} - \sum_{v \in V} f_{uv} & \le & 0 & \text{ for each } u \in V - \\{s, t\\} \\\\
& \sum_{v \in V} f_{uv} - \sum_{v \in V} f_{vu} & \le & 0 & \text{ for each } u \in V - \\{s, t\\} \\\\
& \sum_{v \in V} f_{sv} - \sum_{v \in V} f_{vs} & \le & d \\\\
& \sum_{v \in V} f_{vs} - \sum_{v \in V} f_{sv} & \le & -d \\\\
& f_{uv} & \ge & 0 & .
\end{array}
$$
We now formulate the dual problem. Let the vertices be denoted $v_1, v_2, \dots, v_n, s, t$ and the edges be $e_1, e_2, \dots, e_k$. Then we have $b_i = c(e_i)$ for $1 \le i \le k$, $b_i = 0$ for $k + 1 \le i \le k + 2n$, $b_{k + 2n + 1} = d$, and $b_{k + 2n + 2} = −d$. We also have $c_i = −a(e_i)$ for $1 \le i \le k$. For notation, let $e_j.left$ denote the tail of edge $e_j$ and $e_j.right$ denote the head (identified with their indices among $1, \dots, n$). Let $\chi_s(e_j) = 1$ if $e_j$ enters $s$, set it equal to $-1$ if $e_j$ leaves $s$, and set it equal to $0$ if $e_j$ is not incident with $s$. The dual problem is:
$$
\begin{array}{ll}
\text{minimize} & \sum_{i = 1}^k c(e_i)y_i + dy_{k + 2n + 1} - dy_{k + 2n + 2} \\\\
\text{subject to} & \\\\
& y_j + y_{k + e_j.right} - y_{k + e_j.left} - y_{k + n + e_j.right} + y_{k + n + e_j.left} - \chi_s(e_j) y_{k + 2n + 1} + \chi_s(e_j) y_{k + 2n + 2} \ge -a(e_j),
\end{array}
$$
where $j$ runs between $1$ and $k$. There is one constraint equation for each edge $e_j$.
|
[] | false |
[] |
29-29.4-5
|
29
|
29.4
|
29.4-5
|
docs/Chap29/29.4.md
|
Show that the dual of the dual of a linear program is the primal linear program.
|
Suppose that our original linear program is in standard form for some $A$, $b$, $c$. Then, the dual of this is to minimize $\sum_{i = 1}^m b_iy_i$ subject to $A^{\text T} y \ge c$ and $y \ge 0$. This can be rewritten as wanting to maximize $\sum_{i = 1}^m (−b_i)y_i$ subject to $(−A)^{\text T} y \le −c$ and $y \ge 0$. Since this is in standard form, we can take its dual easily: it is to minimize $\sum_{j = 1}^n (−c_j)x_j$ subject to $(−A)x \ge −b$ and $x \ge 0$. This is the same as maximizing $\sum_{j = 1}^n c_jx_j$ subject to $Ax \le b$, which was the original linear program.
|
[] | false |
[] |
29-29.4-6
|
29
|
29.4
|
29.4-6
|
docs/Chap29/29.4.md
|
Which result from Chapter 26 can be interpreted as weak duality for the maximum-flow problem?
|
Corollary 26.5 from Chapter 26, which states that the value of any flow in a flow network is bounded from above by the capacity of any cut, can be interpreted as weak duality for the maximum-flow problem.
|
[] | false |
[] |
29-29.5-1
|
29
|
29.5
|
29.5-1
|
docs/Chap29/29.5.md
|
Give detailed pseudocode to implement lines 5 and 14 of $\text{INITIALIZE-SIMPLEX}$.
|
For line 5, first let $(N, B, A, b, c, v)$ be the result of calling $\text{PIVOT}$ on $L_{aux}$ using $x_0$ as the entering variable. Then repeatedly call $\text{PIVOT}$ until an optimal solution to $L_{aux}$ is obtained, and assign the result to $(N, B, A, b, c, v)$. For line 14, to remove $x_0$ from the constraints, set $a_{i, 0} = 0$ for all $i \in B$, and set $N = N \backslash \\{0\\}$. To restore the original objective function of $L$, for each $j \in N$ and each $i \in B$, set $c_j = c_j − c_ia_{ij}$.
|
[] | false |
[] |
29-29.5-2
|
29
|
29.5
|
29.5-2
|
docs/Chap29/29.5.md
|
Show that when the main loop of $\text{SIMPLEX}$ is run by $\text{INITIALIZE-SIMPLEX}$, it can never return "unbounded."
|
In order to enter line 10 of $\text{INITIALIZE-SIMPLEX}$ and begin iterating the main loop of $\text{SIMPLEX}$, we must have recovered a basic solution which is feasible for $L_{aux}$. Since $x_0 \ge 0$ and the objective function is $−x_0$, the objective value associated to this solution (or any feasible solution) must be nonpositive. Since the goal is to maximize, we have an upper bound of $0$ on the objective value, so $L_{aux}$ is not unbounded. By Lemma 29.2, $\text{SIMPLEX}$ correctly determines whether or not the input linear program is unbounded. Since $L_{aux}$ is not unbounded, "unbounded" can never be returned by $\text{SIMPLEX}$.
|
[] | false |
[] |
29-29.5-3
|
29
|
29.5
|
29.5-3
|
docs/Chap29/29.5.md
|
Suppose that we are given a linear program $L$ in standard form, and suppose that for both $L$ and the dual of $L$, the basic solutions associated with the initial slack forms are feasible. Show that the optimal objective value of $L$ is $0$.
|
Since $L$ is in standard form, the objective function has no constant term; it is entirely given by $\sum_{i = 1}^n c_ix_i$, which is zero at the basic solution of the initial slack form, where every nonbasic variable is set to $0$. The same thing goes for its dual. Since the basic solutions of the primal and the dual are both feasible and give their objective functions the same value, by the corollary to the weak duality theorem (Corollary 29.9) that common value, namely $0$, must be the optimal objective value of $L$.
|
[] | false |
[] |
29-29.5-4
|
29
|
29.5
|
29.5-4
|
docs/Chap29/29.5.md
|
Suppose that we allow strict inequalities in a linear program. Show that in this case, the fundamental theorem of linear programming does not hold.
|
Consider the linear program in which we wish to maximize $x_1$ subject to the constraint $x_1 < 1$ and $x_1 \ge 0$. This has no optimal solution, but it is clearly bounded and has feasible solutions. Thus, the Fundamental theorem of linear programming does not hold in the case of strict inequalities.
|
[] | false |
[] |
29-29.5-5
|
29
|
29.5
|
29.5-5
|
docs/Chap29/29.5.md
|
Solve the following linear program using $\text{SIMPLEX}$:
$$
\begin{array}{lrcrcrl}
\text{maximize} & x_1 & + & 3x_2 \\\\
\text{subject to} & \\\\
& x_1 & - & x_2 & \le & 8 \\\\
& -x_1 & - & x_2 & \le & -3 \\\\
& -x_1 & + & 4x_2 & \le & 2 \\\\
& & x_1, x_2 & & \ge & 0 & .
\end{array}
$$
|
The initial basic solution isn't feasible, so we will need to form the auxiliary linear program,
$$
\begin{array}{lrcrcrcrl}
\text{maximize} & -x_0 \\\\
\text{subject to} & \\\\
& -x_0 & + & x_1 & - & x_2 & \le & 8 \\\\
& -x_0 & - & x_1 & - & x_2 & \le & -3 \\\\
& -x_0 & - & x_1 & + & 4x_2 & \le & 2 \\\\
& & x_0, x_1, x_2 & & & & \ge & 0 & .
\end{array}
$$
Writing this linear program in slack form,
$$
\begin{array}{rcrcrcrcr}
z & = & & - & x_0 \\\\
x_3 & = & 8 & + & x_0 & - & x_1 & + & x_2 \\\\
x_4 & = & -3 & + & x_0 & + & x_1 & + & x_2 \\\\
x_5 & = & 2 & + & x_0 & + & x_1 & - & 4x_2 \\\\
x_0, x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
Next we make one call to $\text{PIVOT}$ where $x_0$ is the entering variable and $x_4$ is the leaving variable.
$$
\begin{array}{rcrcrcrcr}
z & = & -3 & + & x_1 & + & x_2 & - & x_4 \\\\
x_0 & = & 3 & - & x_1 & - & x_2 & + & x_4 \\\\
x_3 & = & 11 & - & 2x_1 & & & + & x_4 \\\\
x_5 & = & 5 & & & - & 5x_2 & + & x_4 \\\\
x_0, x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
The basic solution is feasible, so we repeatedly call $\text{PIVOT}$ to get the optimal solution to $L_{aux}$. We'll choose $x_1$ to be our entering variable and $x_0$ to be the leaving variable. This gives
$$
\begin{array}{rcrcrcrcr}
z & = & & & -x_0 \\\\
x_1 & = & 3 & - & x_0 & - & x_2 & + & x_4 \\\\
x_3 & = & 5 & + & 2x_0 & + & 2x_2 & - & x_4 \\\\
x_5 & = & 5 & & & - & 5x_2 & + & x_4 \\\\
x_0, x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
The basic solution is now optimal for $L_{aux}$, so we return this slack form to $\text{SIMPLEX}$, set $x_0 = 0$, and update the objective function which yields
$$
\begin{array}{rcrcrcr}
z & = & 3 & + & 2x_2 & + & x_4 \\\\
x_1 & = & 3 & - & x_2 & + & x_4 \\\\
x_3 & = & 5 & + & 2x_2 & - & x_4 \\\\
x_5 & = & 5 & - & 5x_2 & + & x_4 \\\\
x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
We'll choose $x_2$ as our entering variable, which makes $x_5$ our leaving variable. $\text{PIVOT}$ then gives,
$$
\begin{array}{rcrcrcr}
z & = & 5 & + & (7 / 5)x_4 & - & (2 / 5)x_5 \\\\
x_1 & = & 2 & + & (4 / 5)x_4 & + & (1 / 5)x_5 \\\\
x_2 & = & 1 & + & (1 / 5)x_4 & - & (1 / 5)x_5 \\\\
x_3 & = & 7 & - & (3 / 5)x_4 & - & (2 / 5)x_5 \\\\
x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
We'll choose $x_4$ as our entering variable, which makes $x_3$ our leaving variable. $\text{PIVOT}$ then gives,
$$
\begin{array}{rcrcrcr}
z & = & (64 / 3) & - & (7 / 3)x_3 & - & (4 / 3)x_5 \\\\
x_1 & = & (34 / 3) & - & (4 / 3)x_3 & - & (1 / 3)x_5 \\\\
x_2 & = & (10 / 3) & - & (1 / 3)x_3 & - & (1 / 3)x_5 \\\\
x_4 & = & (35 / 3) & - & (5 / 3)x_3 & - & (2 / 3)x_5 \\\\
x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
Now all coefficients in the objective function are negative, so the basic solution is the optimal solution. It is $(x_1, x_2) = (34 / 3, 10 / 3)$.
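A numerical cross-check of the two-phase computation above, assuming SciPy is available; `linprog` minimizes, so the objective is negated.

```python
from scipy.optimize import linprog

c = [-1, -3]                              # maximize x1 + 3*x2 as a minimization
A_ub = [[1, -1], [-1, -1], [-1, 4]]
b_ub = [8, -3, 2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.x, -res.fun)    # expect approximately [11.333 3.333] and 21.333 (= 64/3)
```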
|
[] | false |
[] |
29-29.5-6
|
29
|
29.5
|
29.5-6
|
docs/Chap29/29.5.md
|
Solve the following linear program using $\text{SIMPLEX}$:
$$
\begin{array}{lrcrcrl}
\text{maximize} & x_1 & - & 2x_2 \\\\
\text{subject to} & \\\\
& x_1 & + & 2x_2 & \le & 4 \\\\
& -2x_1 & - & 6x_2 & \le & -12 \\\\
& & & x_2 & \le & 1 \\\\
& & x_1, x_2 & & \ge & 0 & .
\end{array}
$$
|
The initial basic solution isn't feasible, so we will need to form the auxiliary linear program,
$$
\begin{array}{lrcrcrcrl}
\text{maximize} & -x_0 \\\\
\text{subject to} & \\\\
& -x_0 & + & x_1 & + & 2x_2 & \le & 4 \\\\
& -x_0 & - & 2x_1 & - & 6x_2 & \le & -12 \\\\
& -x_0 & & & + & x_2 & \le & 1 \\\\
& & x_0, x_1, x_2 & & & & \ge & 0 & .
\end{array}
$$
Writing this linear program in slack form,
$$
\begin{array}{rcrcrcrcr}
z & = & & - & x_0 \\\\
x_3 & = & 4 & + & x_0 & - & x_1 & - & 2x_2 \\\\
x_4 & = & -12 & + & x_0 & + & 2x_1 & + & 6x_2 \\\\
x_5 & = & 1 & + & x_0 & & & - & x_2 \\\\
x_0, x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
Next we make one call to $\text{PIVOT}$ where $x_0$ is the entering variable and $x_4$ is the leaving variable.
$$
\begin{array}{rcrcrcrcr}
z & = & -12 & + & 2x_1 & + & 6x_2 & - & x_4 \\\\
x_0 & = & 12 & - & 2x_1 & - & 6x_2 & + & x_4 \\\\
x_3 & = & 16 & - & 3x_1 & - & 8x_2 & + & x_4 \\\\
x_5 & = & 13 & - & 2x_1 & - & 7x_2 & + & x_4 \\\\
x_0, x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
The basic solution is $(x_0, x_1, x_2, x_3, x_4, x_5) = (12, 0, 0, 16, 0, 13)$ which is feasible for the auxiliary program. Now we need to run $\text{SIMPLEX}$ to find the optimal objective value to $L_{aux}$. Let $x_1$ be our next entering variable. It is most constrained by $x_3$, which will be our leaving variable. After $\text{PIVOT}$, the new linear program is
$$
\begin{array}{rcrcrcrcr}
z & = & -(4 / 3) & + & (2 / 3)x_2 & - & (2 / 3)x_3 & - & (1 / 3) x_4 \\\\
x_0 & = & (4 / 3) & - & (2 / 3)x_2 & + & (2 / 3)x_3 & + & (1 / 3) x_4 \\\\
x_1 & = & (16 / 3) & - & (8 / 3)x_2 & - & (1 / 3)x_3 & + & (1 / 3) x_4 \\\\
x_5 & = & (7 / 3) & - & (5 / 3)x_2 & + & (2 / 3)x_3 & + & (1 / 3) x_4 \\\\
x_0, x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
The coefficient of $x_2$ in the objective function is still positive, so we pivot once more with $x_2$ as the entering variable. The equation for $x_5$ is the most restrictive, limiting $x_2$ by $7 / 5$, so $x_5$ is the leaving variable. After $\text{PIVOT}$, the new linear program is
$$
\begin{array}{rcrcrcrcr}
z & = & -(2 / 5) & - & (2 / 5)x_3 & - & (1 / 5)x_4 & - & (2 / 5) x_5 \\\\
x_0 & = & (2 / 5) & + & (2 / 5)x_3 & + & (1 / 5)x_4 & + & (2 / 5) x_5 \\\\
x_1 & = & (8 / 5) & - & (7 / 5)x_3 & - & (1 / 5)x_4 & + & (8 / 5) x_5 \\\\
x_2 & = & (7 / 5) & + & (2 / 5)x_3 & + & (1 / 5)x_4 & - & (3 / 5) x_5 \\\\
x_0, x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
Every coefficient in the objective function is now negative, so we take the basic solution $(x_0, x_1, x_2, x_3, x_4, x_5) = (2 / 5, 8 / 5, 7 / 5, 0, 0, 0)$, which is also optimal for the auxiliary program. Since $x_0 \ne 0$, the original linear program must be infeasible.
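Assuming SciPy is available, the solver agrees that this program is infeasible (status code 2).

```python
from scipy.optimize import linprog

res = linprog([-1, 2],                         # maximize x1 - 2*x2 as a minimization
              A_ub=[[1, 2], [-2, -6], [0, 1]],
              b_ub=[4, -12, 1])
print(res.success, res.status)                 # expect False and 2 (infeasible)
```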
|
[] | false |
[] |
29-29.5-7
|
29
|
29.5
|
29.5-7
|
docs/Chap29/29.5.md
|
Solve the following linear program using $\text{SIMPLEX}$:
$$
\begin{array}{lrcrcrl}
\text{maximize} & x_1 & + & 3x_2 \\\\
\text{subject to} & \\\\
& -x_1 & + & x_2 & \le & -1 \\\\
& -x_1 & - & x_2 & \le & -3 \\\\
& -x_1 & + & 4x_2 & \le & 2 \\\\
& & x_1, x_2 & & \ge & 0 & .
\end{array}
$$
|
The initial basic solution isn't feasible, so we will need to form the auxiliary linear program,
$$
\begin{array}{lrcrcrcrl}
\text{maximize} & -x_0 \\\\
\text{subject to} & \\\\
& -x_0 & - & x_1 & + & x_2 & \le & -1 \\\\
& -x_0 & - & x_1 & - & x_2 & \le & -3 \\\\
& -x_0 & - & x_1 & + & 4x_2 & \le & 2 \\\\
& & x_0, x_1, x_2 & & & & \ge & 0 & .
\end{array}
$$
Writing this linear program in slack form,
$$
\begin{array}{rcrcrcrcr}
z & = & & - & x_0 \\\\
x_3 & = & -1 & + & x_0 & + & x_1 & - & x_2 \\\\
x_4 & = & -3 & + & x_0 & + & x_1 & + & x_2 \\\\
x_5 & = & 2 & + & x_0 & + & x_1 & - & 4x_2 \\\\
x_0, x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
Next we make one call to $\text{PIVOT}$ where $x_0$ is the entering variable and $x_4$ is the leaving variable.
$$
\begin{array}{rcrcrcrcr}
z & = & -3 & + & x_1 & + & x_2 & - & x_4 \\\\
x_0 & = & 3 & - & x_1 & - & x_2 & + & x_4 \\\\
x_3 & = & 2 & & & - & 2x_2 & + & x_4 \\\\
x_5 & = & 5 & & & - & 5x_2 & + & x_4 \\\\
x_0, x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
Let $x_1$ be our entering variable. Then $x_0$ is our leaving variable, and we have
$$
\begin{array}{rcrcrcrcr}
z & = & & - & x_0 \\\\
x_1 & = & 3 & - & x_0 & - & x_2 & + & x_4 \\\\
x_3 & = & 2 & & & - & 2x_2 & + & x_4 \\\\
x_5 & = & 5 & & & - & 5x_2 & + & x_4 \\\\
x_0, x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
The basic solution is feasible, and optimal for $L_{aux}$, so we return this and run $\text{SIMPLEX}$. Updating the objective function and setting $x_0 = 0$ gives
$$
\begin{array}{rcrcrcr}
z & = & 3 & + & 2x_2 & + & x_4 \\\\
x_1 & = & 3 & - & x_2 & + & x_4 \\\\
x_3 & = & 2 & - & 2x_2 & + & x_4 \\\\
x_5 & = & 5 & - & 5x_2 & + & x_4 \\\\
x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
We'll choose $x_2$ as our entering variable, which makes $x_3$ our leaving variable. This gives
$$
\begin{array}{rcrcrcr}
z & = & 5 & - & x_3 & + & 2x_4 \\\\
x_1 & = & 2 & + & (1 / 2)x_3 & + & (1 / 2)x_4 \\\\
x_2 & = & 1 & - & (1 / 2)x_3 & + & (1 / 2)x_4 \\\\
x_5 & = & & & (5 / 2)x_3 & - & (3 / 2)x_4 \\\\
x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
Next we use $x_4$ as our entering variable, which makes $x_5$ our leaving variable. This gives
$$
\begin{array}{rcrcrcr}
z & = & 5 & + & (7 / 3)x_3 & - & (4 / 3)x_5 \\\\
x_1 & = & 2 & + & (4 / 3)x_3 & - & (1 / 3)x_5 \\\\
x_2 & = & 1 & + & (1 / 3)x_3 & - & (1 / 3)x_5 \\\\
x_4 & = & & & (5 / 3)x_3 & - & (2 / 3)x_5 \\\\
x_1, x_2, x_3, x_4, x_5 & \ge & 0 & .
\end{array}
$$
Finally, we would like to choose $x_3$ as our entering variable, but every coefficient on $x_3$ is positive, so $\text{SIMPLEX}$ returns that the linear program is unbounded.
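Assuming SciPy is available and its default backend reports unboundedness via status code 3, this conclusion can be checked numerically as well.

```python
from scipy.optimize import linprog

res = linprog([-1, -3],                         # maximize x1 + 3*x2 as a minimization
              A_ub=[[-1, 1], [-1, -1], [-1, 4]],
              b_ub=[-1, -3, 2])
print(res.success, res.status)                  # expect False and 3 (unbounded)
```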
|
[] | false |
[] |
29-29.5-8
|
29
|
29.5
|
29.5-8
|
docs/Chap29/29.5.md
|
Solve the linear program given in $\text{(29.6)}$–$\text{(29.10)}$.
|
We first put the linear program in standard form. Since $\text{(29.6)}$–$\text{(29.10)}$ asks us to minimize $x_1 + x_2 + x_3 + x_4$, we negate the objective function,
$$
\begin{array}{lrcrcrcrcrl}
\text{maximize} & -x_1 & - & x_2 & - & x_3 & - & x_4 \\\\
\text{subject to} & \\\\
& 2x_1 & - & 8x_2 & & & - & 10x_4 & \le & -50 \\\\
& -5x_1 & - & 2x_2 & & & & & \le & -100 \\\\
& -3x_1 & + & 5x_2 & - & 10x_3 & + & 2x_4 & \le & -25 \\\\
& & x_1, x_2, x_3, x_4 & & & & & & \ge & 0 & .
\end{array}
$$
The initial basic solution isn't feasible, so we will need to form the auxiliary linear program.
$$
\begin{array}{rcrcrcrcrcrrcl}
z & = & & - & x_0 \\\\
x_5 & = & -50 & + & x_0 & - & 2x_1 & + & 8x_2 & & & + & 10x_4 \\\\
x_6 & = & -100 & + & x_0 & + & 5x_1 & + & 2x_2 \\\\
x_7 & = & -25 & + & x_0 & + & 3x_1 & - & 5x_2 & + & 10x_3 & - & 2x_4 \\\\
x_0, x_1, x_2, x_3, x_4, x_5, x_6, x_7 & \ge & 0 & .
\end{array}
$$
The index of the minimum $b_i$ is $2$, so we take $x_0$ to be our entering variable and $x_6$ to be our leaving variable. The call to $\text{PIVOT}$ on line 8 yields
$$
\begin{array}{rcrcrcrcrcrrcl}
z & = & -100 & + & 5x_1 & + & 2x_2 & & & & & - & x_6 \\\\
x_0 & = & 100 & - & 5x_1 & - & 2x_2 & & & & & + & x_6 \\\\
x_5 & = & 50 & - & 7x_1 & + & 6x_2 & & & + & 10x_4 & + & x_6 \\\\
x_7 & = & 75 & - & 2x_1 & - & 7x_2 & + & 10x_3 & - & 2x_4 & + & x_6 \\\\
x_0, x_1, x_2, x_3, x_4, x_5, x_6, x_7 & \ge & 0 & .
\end{array}
$$
Next we'll take $x_2$ to be our entering variable. The equation for $x_7$ is the most restrictive, limiting $x_2$ by $75 / 7$ (the equation for $x_0$ only limits it by $50$, and the coefficient of $x_2$ in the equation for $x_5$ is positive), so $x_7$ is our leaving variable. The call to $\text{PIVOT}$ yields
$$
\begin{array}{rcrcrcrcrcrcr}
z & = & -550 / 7 & + & (31 / 7)x_1 & + & (20 / 7)x_3 & - & (4 / 7)x_4 & - & (5 / 7)x_6 & - & (2 / 7)x_7 \\\\
x_0 & = & 550 / 7 & - & (31 / 7)x_1 & - & (20 / 7)x_3 & + & (4 / 7)x_4 & + & (5 / 7)x_6 & + & (2 / 7)x_7 \\\\
x_2 & = & 75 / 7 & - & (2 / 7)x_1 & + & (10 / 7)x_3 & - & (2 / 7)x_4 & + & (1 / 7)x_6 & - & (1 / 7)x_7 \\\\
x_5 & = & 800 / 7 & - & (61 / 7)x_1 & + & (60 / 7)x_3 & + & (58 / 7)x_4 & + & (13 / 7)x_6 & - & (6 / 7)x_7 \\\\
x_0, x_1, x_2, x_3, x_4, x_5, x_6, x_7 & \ge & 0 & .
\end{array}
$$
The work gets rather messy, but continuing in this way, $\text{INITIALIZE-SIMPLEX}$ does eventually drive $x_0$ to $0$ and give a feasible slack form for the original constraints, and after running the simplex method on it with the restored objective we find that $(x_1, x_2, x_3, x_4) = (2050 / 111, 425 / 111, 0, 625 / 111)$ is an optimal solution to the original linear programming problem, with objective value $3100 / 111$.
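A numerical cross-check of this answer, assuming SciPy is available; `linprog` minimizes, which matches the original objective of $\text{(29.6)}$, and the $\ge$ constraints are negated into $\le$ form.

```python
from scipy.optimize import linprog

c = [1, 1, 1, 1]
A_ub = [[2, -8, 0, -10],
        [-5, -2, 0, 0],
        [-3, 5, -10, 2]]
b_ub = [-50, -100, -25]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.x, res.fun)   # expect approximately [18.468 3.829 0. 5.631] and 27.928 (= 3100/111)
```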
|
[] | false |
[] |
29-29.5-9
|
29
|
29.5
|
29.5-9
|
docs/Chap29/29.5.md
|
Consider the following $1$-variable linear program, which we call $P$:
$$
\begin{array}{lrcrl}
\text{maximize} & tx \\\\
\text{subject to} & rx & \le & s \\\\
& x & \ge & 0 & ,
\end{array}
$$
where $r$, $s$, and $t$ are arbitrary real numbers. Let $D$ be the dual of $P$.
State for which values of $r$, $s$, and $t$ you can assert that
1. Both $P$ and $D$ have optimal solutions with finite objective values.
2. $P$ is feasible, but $D$ is infeasible.
3. $D$ is feasible, but $P$ is infeasible.
4. Neither $P$ nor $D$ is feasible.
|
1. One option is that $r = 0$, $s \ge 0$ and $t \le 0$. Another is that $r > 0$ with $s$ non-negative and $t$ non-positive: then $x = 0$ is optimal for $P$ and $y = 0$ is optimal for $D$, so both optimal objective values are finite (namely $0$).
2. We will split into two cases based on $r$. If $r = 0$, then this happens exactly when $t$ is positive and $s$ is non-negative: $P$ is feasible (and in fact unbounded), while the dual constraint $ry \ge t$ becomes $0 \ge t$, which cannot be satisfied. The other possible case is that $r$ is negative and $t$ is positive. Because $r$ is negative, we can make $rx$ as small as we want, so $P$ is feasible no matter what $s$ is; however, for $y \ge 0$ we can never make $ry$ positive, so it can never be $\ge t$, and $D$ is infeasible.
3. Again, we split into two possible cases for $r$. If $r = 0$, then it is when $t$ is non-positive and $s$ is negative: the constraint $0 \le s$ of $P$ fails, while the dual constraint $0 \ge t$ holds for every $y \ge 0$. The other possible case is that $r$ is positive and $s$ is negative. Since $r$ is positive, $rx$ is always non-negative, so it cannot be $\le s$ and $P$ is infeasible. But since $r$ is positive, we can make $ry$ as big as we want, in particular greater than or equal to $t$, so $D$ is feasible.
4. This happens if we have $r = 0$, $t$ positive, and $s$ negative. If $r$ is nonzero, then we can always either make $rx$ arbitrarily small (when $r < 0$) or make $ry$ arbitrarily large (when $r > 0$), so either the primal or the dual would be feasible.
|
[] | false |
[] |
29-29-1
|
29
|
29-1
|
29-1
|
docs/Chap29/Problems/29-1.md
|
Given a set of $m$ linear inequalities on $n$ variables $x_1, x_2, \dots, x_n$, the **_linear-inequality feasibility problem_** asks whether there is a setting of the variables that simultaneously satisfies each of the inequalities.
**a.** Show that if we have an algorithm for linear programming, we can use it to solve a linear-inequality feasibility problem. The number of variables and constraints that you use in the linear-programming problem should be polynomial in $n$ and $m$.
**b.** Show that if we have an algorithm for the linear-inequality feasibility problem, we can use it to solve a linear-programming problem. The number of variables and linear inequalities that you use in the linear-inequality feasibility problem should be polynomial in $n$ and $m$, the number of variables and constraints in the linear program.
|
**a.** We just let the linear inequalities that we need to satisfy be our set of constraints in the linear program. We let our function to maximize just be a constant. The solver for linear programs would fail to detect any feasible solution if the linear constraints were not feasible. If the linear programming solver returns any solution at all, we know that the linear constraints are feasible.
**b.** Suppose that we are trying to solve the linear program in standard form with some particular $A$, $b$, $c$. That is, we want to maximize $\sum_{j = 1}^n c_jx_j$ subject to $Ax \le b$ and all entries of the $x$ vector are non-negative. Now, consider the dual program, that is, we want to minimize $\sum_{i = 1}^m b_iy_i$ subject to $A^{\text T} y \ge c$ and all the entries in the $y$ vector are non-negative. We know by Corollary 29.9 that if $x$ and $y$ are feasible solutions to their respective problems and their objective values are equal, then they are both optimal solutions.
We can force their objective functions to be equal. To do this, let $c_k$ be some nonzero entry in the $c$ vector. If there are no nonzero entries, then the function we are trying to optimize is just the zero function, and the problem is exactly a feasibility question, so we would be done. Then, we add two linear inequalities to require $x_k = \frac{1}{c_k} \Big(\sum_{i = 1}^m b_iy_i - \sum_{j \ne k} c_jx_j \Big)$, which is equivalent to requiring $\sum_{j = 1}^n c_jx_j = \sum_{i = 1}^m b_iy_i$. This will require that whatever values the variables take, their objective functions will be equal. Lastly, we just throw these in with the inequalities we already had. So, the constraints will be:
$$
\begin{aligned}
Ax & \le b \\\\
A^{\text T} y & \ge c \\\\
x_k & \le \frac{1}{c_k} \Bigg(\sum_{i = 1}^m b_iy_i - \sum_{j \ne k} c_jx_j \Bigg) \\\\
x_k & \ge \frac{1}{c_k} \Bigg(\sum_{i = 1}^m b_iy_i - \sum_{j \ne k} c_jx_j \Bigg) \\\\
x_1, x_2, \dots, x_n, y_1, y_2, \dots, y_m & \ge 0.
\end{aligned}
$$
We have a number of variables equal to $n + m$ and a number of constraints equal to $2 + 2n + 2m$, so both are polynomial in $n$ and $m$. Also, any assignment of variables which satisfy all of these constraints will be a feasible solution to both the problem and its dual that cause the respective objective functions to take the same value, and so, must be an optimal solution to both the original problem and its dual. This of course assumes that the linear inequality feasibility solver doesn't merely say that the inequalities are satisfiable, but actually returns a satisfying assignment.
Lastly, it is necessary to note that if there is some optimal solution $x$, then, we can obtain an optimal solution for the dual that makes the objective functions equal by theorem 29.10. This ensures that the two constraints we added to force the objectives of the primal and the dual to be equal don't cause us to change the optimal solution to the linear program.
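Below is a small sketch of part (b) in Python, using the linear program of Exercise 29.3-5 as the concrete data and assuming SciPy and NumPy are available; `linprog` called with a zero objective merely plays the role of the hypothetical feasibility solver, and the equation $c^{\text T} x = b^{\text T} y$ is passed as a single equality constraint rather than as the pair of inequalities above.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([20.0, 12.0, 16.0])
c = np.array([18.0, 12.5])
m, n = A.shape

# Variables are z = (x, y), all nonnegative (the default bounds in linprog).
A_ub = np.block([[A, np.zeros((m, m))],        # A x <= b
                 [np.zeros((n, n)), -A.T]])    # A^T y >= c
b_ub = np.concatenate([b, -c])
A_eq = np.concatenate([c, -b]).reshape(1, -1)  # c^T x - b^T y = 0
b_eq = [0.0]

res = linprog(np.zeros(n + m), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(res.x[:n])   # any feasible x here is optimal for the primal: expect [12. 8.]
```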
|
[] | false |
[] |
29-29-2
|
29
|
29-2
|
29-2
|
docs/Chap29/Problems/29-2.md
|
**_Complementary slackness_** describes a relationship between the values of primal variables and dual constraints and between the values of dual variables and primal constraints. Let $\bar x$ be a feasible solution to the primal linear program given in $\text{(29.16)–(29.18)}$, and let $\bar y$ be a feasible solution to the dual linear program given in $\text{(29.83)–(29.85)}$. Complementary slackness states that the following conditions are necessary and sufficient for $\bar x$ and $\bar y$ to be optimal:
$$\sum_{i = 1}^m a_{ij}\bar y_i = c_j \text{ or } \bar x_j = 0 \text{ for } j = 1, 2, \dots, n$$
and
$$\sum_{j = 1}^n a_{ij}\bar x_j = b_i \text{ or } \bar y_i = 0 \text{ for } i = 1, 2, \dots, m.$$
**a.** Verify that complementary slackness holds for the linear program in lines $\text{(29.53)–(29.57)}$.
**b.** Prove that complementary slackness holds for any primal linear program and its corresponding dual.
**c.** Prove that a feasible solution $\bar x$ to a primal linear program given in lines $\text{(29.16)–(29.18)}$ is optimal if and only if there exist values $\bar y = (\bar y_1, \bar y_2, \dots, \bar y_m)$ such that
1. $\bar y$ is a feasible solution to the dual linear program given in $\text{(29.83)–(29.85)}$,
2. $\sum_{i = 1}^m a_{ij}\bar y_i = c_j$ for all $j$ such that $\bar x_j > 0$, and
3. $\bar y_i = 0$ for all $i$ such that $\sum_{j = 1}^n a_{ij}\bar x_j < b_i$.
|
**a.** An optimal solution to the LP program given in $\text{(29.53)}$-$\text{(29.57)}$ is $(x_1, x_2, x_3) = (8, 4, 0)$. An optimal solution to the dual is $(y_1, y_2, y_3) = (0, 1 / 6, 2 / 3)$. It is then straightforward to verify that the equations hold.
**b.** First suppose that complementary slackness holds. Then the optimal objective value of the primal problem is, if it exists,
$$
\begin{aligned}
\sum_{k = 1}^n c_kx_k
& = \sum_{k = 1}^n \sum_{i = 1}^m a_{ik}y_ix_k \\\\
& = \sum_{i = 1}^m \sum_{k = 1}^n a_{ik}x_ky_i \\\\
& = \sum_{i = 1}^m b_iy_i,
\end{aligned}
$$
which is precisely the objective value of the dual solution. (If any $x_k$ is $0$, then those terms drop out of the sum, so we can safely replace $c_k$ by whatever we like in them; this is why the first equality only needs the first complementary-slackness condition, and the last equality similarly uses the second condition.) Since the objective values are equal, they must be optimal. An identical argument shows that if an optimal solution exists for the dual problem then any feasible solution for the primal problem which satisfies the second equality of complementary slackness must also be optimal.
Now suppose that $x$ and $y$ are optimal solutions, but that complementary slackness fails. In other words, there exists some $j$ such that $x_j \ne 0$ but $\sum_{i = 1}^m a_{ij}y_i > c_j$, or there exists some $i$ such that $y_i \ne 0$ but $\sum_{j = 1}^n a_{ij}x_j < b_i$. In the first case we have
$$
\begin{aligned}
\sum_{k = 1}^n c_kx_k
& < \sum_{k = 1}^n \sum_{i = 1}^m a_{ik}y_ix_k \\\\
& = \sum_{i = 1}^m \sum_{k = 1}^n a_{ik}x_ky_i \\\\
& \le \sum_{i = 1}^m b_iy_i.
\end{aligned}
$$
This implies that the objective value of the primal solution is strictly less than the objective value of the dual solution, which contradicts Theorem 29.10, since both $x$ and $y$ are assumed optimal. The argument for the second case is identical. Thus, $x$ and $y$ are optimal solutions if and only if complementary slackness holds.
**c.** This follows immediately from part (b). If $x$ is feasible and $y$ satisfies conditions 1, 2, and 3, then complementary slackness holds, so $x$ and $y$ are optimal. On the other hand, if $x$ is optimal, then the dual linear program must have an optimal solution $y$ as well, according to Theorem 29.10. Optimal solutions are feasible, and by part (b), $x$ and $y$ satisfy complementary slackness. Thus, conditions 1, 2, and 3 hold.
|
[] | false |
[] |
29-29-3
|
29
|
29-3
|
29-3
|
docs/Chap29/Problems/29-3.md
|
An **_integer linear-programming problem_** is a linear-programming problem with the additional constraint that the variables $x$ must take on integral values. Exercise 34.5-3 shows that just determining whether an integer linear program has a feasible solution is NP-hard, which means that there is no known polynomial-time algorithm for this problem.
**a.** Show that weak duality (Lemma 29.8) holds for an integer linear program.
**b.** Show that duality (Theorem 29.10) does not always hold for an integer linear program.
**c.** Given a primal linear program in standard form, let us define $P$ to be the optimal objective value for the primal linear program, $D$ to be the optimal objective value for its dual, $IP$ to be the optimal objective value for the integer version of the primal (that is, the primal with the added constraint that the variables take on integer values), and $ID$ to be the optimal objective value for the integer version of the dual. Assuming that both the primal integer program and the dual integer program are feasible and bounded, show that
$$IP \le P = D \le ID.$$
|
**a.** The proof for weak duality goes through identically. Nowhere in it does it use the integrality of the solutions.
**b.** Consider the linear program given in standard form by $A = (1)$, $b = (\frac{1}{2})$ and $c = (2)$, with the added integrality constraint. The highest objective value we can get is $0$, since $x = 0$ is the only integer value that $x$ can take. Now consider the dual, in which we are trying to minimize $\frac{y}{2}$ subject to the constraint $y \ge 2$ with $y$ integral. This is minimized at $y = 2$, so the smallest value we can get is $1$.
Since we have just exhibited an integer linear program whose optimal value differs from that of its dual, the duality theorem does not hold for integer linear programs.
**c.** The first inequality comes from looking at the fact that by adding the restriction that the solution must be integer valued, we obtain a set of feasible solutions that is a subset of the feasible solutions of the original primal linear program. Since, to get $IP$, we are taking the max over a subset of the things we are taking a max over to get $P$, we must get a number that is no larger. The third inequality is similar, except since we are taking min over a subset, the inequality goes the other way. The middle equality is given by Theorem 29.10.
|
[] | false |
[] |
29-29-4
|
29
|
29-4
|
29-4
|
docs/Chap29/Problems/29-4.md
|
Let $A$ be an $m \times n$ matrix and $c$ be an $n$-vector. Then Farkas's lemma states that exactly one of the systems
$$
\begin{aligned}
Ax & \le 0, \\\\
c^Tx & > 0
\end{aligned}
$$
and
$$
\begin{aligned}
A^Ty & = c, \\\\
y & \ge 0
\end{aligned}
$$
is solvable, where $x$ is an $n$-vector and $y$ is an $m$-vector. Prove Farkas's lemma.
|
Suppose that both systems are solvable, let $x$ denote a solution to the first system, and $y$ denote a solution to the second. Taking transposes we have $x^{\text T}A^{\text T} \le 0^{\text T}$. Right-multiplying by $y \ge 0$ gives $x^{\text T}c = x^{\text T}A^{\text T}y \le 0$, which is a contradiction to the fact that $c^{\text T}x > 0$. Thus, both systems cannot be simultaneously solved. Now suppose that the second system fails. Consider the following linear program:
$$\text{maximize } 0^{\text T}y \text{ subject to } A^{\text T}y = c \text{ and } y \ge 0,$$
and its corresponding dual program
$$\text{minimize } -c^{\text T}x \text{ subject to } Ax \le 0.$$
Since the second system fails, the primal is infeasible. However, the dual is always feasible by taking $x = 0$. If there were a finite solution to the dual, then duality says there would also be a finite solution to the primal. Thus, the dual must be unbounded, so there exist vectors $x$ with $Ax \le 0$ which make $−c^{\text T}x$ arbitrarily small; in particular, there is some $x$ with $Ax \le 0$ and $c^{\text T}x > 0$, so the first system is solvable. Hence exactly one of the two systems is always solvable.
|
[] | false |
[] |