Dataset Viewer

id | text | source | format
---|---|---|---|
algo_notes_block_0 | \cleardoublepage
\lecture{1}{2022-09-23}{I'm rambling a bit}{}
\chapterafterlecture{Introduction} | algo_notes | latex |
algo_notes_block_1 | \begin{parag}{Definition: Algorithm}
An \important{algorithm} is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output.
\end{parag} | algo_notes | latex |
algo_notes_block_2 | \begin{parag}{Definition: Instance}
Given a problem, an \important{instance} is a set of precise inputs. | algo_notes | latex |
algo_notes_block_3 | \begin{subparag}{Remark}
Note that for a problem that expects a number $n$ as an input, ``a positive integer'' is not an instance, whereas 232 would be one.
\end{subparag}
\end{parag}
\begin{filecontents*}[overwrite]{Lecture01/NaiveArithmetic.code}
ans = 0
for i = 1, 2, ..., n
ans = ans + i
return ans
\end{filecontents*}
\begin{filecontents*}[overwrite]{Lecture01/CleverArithmetic.code}
return n*(n + 1)/2
\end{filecontents*} | algo_notes | latex |
algo_notes_block_4 | \begin{parag}{Example: Arithmetic series}
Let's say that, given $n$, we want to compute $\sum_{i=1}^{n} i$. There are multiple ways to do so. | algo_notes | latex |
algo_notes_block_5 | \begin{subparag}{Naive algorithm}
The first algorithm that could come to mind is to compute the sum:
\importcode{Lecture01/NaiveArithmetic.code}{pseudo}
This algorithm is very space efficient, since it only stores 2 numbers. However, it has a time-complexity of $\Theta\left(n\right)$ elementary operations, which is not very great.
\end{subparag} | algo_notes | latex |
algo_notes_block_6 | \begin{subparag}{Clever algorithm}
A much better way is to simply use the arithmetic partial series formula, yielding:
\importcode{Lecture01/CleverArithmetic.code}{pseudo}
This algorithm is both very efficient in space and in time. This shows that the first algorithm we think of is not necessarily the best, and that sometimes we can really improve it.
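As a quick personal check, here is a small Python sketch of the two procedures above (the file name and the \texttt{python} language tag are my own additions, not from the course):
\begin{filecontents*}[overwrite]{Lecture01/ArithmeticCheck.code}
# Sketch of the two pseudocode snippets above, assuming n >= 0.
def naive_sum(n):
    ans = 0
    for i in range(1, n + 1):  # i = 1, 2, ..., n
        ans = ans + i
    return ans

def clever_sum(n):
    return n * (n + 1) // 2    # arithmetic series formula

# Both agree, but the first does Theta(n) additions while the second is Theta(1).
assert all(naive_sum(n) == clever_sum(n) for n in range(100))
\end{filecontents*}
\importcode{Lecture01/ArithmeticCheck.code}{python}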
\end{subparag}
\end{parag} | algo_notes | latex |
algo_notes_block_7 | \begin{parag}{Complexity analysis}
We want to analyse algorithm complexities and, to do so, we need a model. We will consider that any primitive operation (basically any line in pseudocode) takes a constant amount of time. Different lines may take different times, but these times do not depend on the size of the input.
When we only have primitive operations, we basically only need to compute the number of times each line is executed, and then look how it behaves asymptotically.
We will mainly consider worst-case behaviour since it gives a guaranteed upper bound and, for some algorithms, the worst case occurs often. Also, the average case is often as bad as the worst-case. | algo_notes | latex |
algo_notes_block_8 | \begin{subparag}{Remark}
When comparing asymptotic behaviour, we have to be careful about the fact that this is asymptotic. In other words, some algorithms which behave less well when $n$ is very large might be better when $n$ is very small.
As a personal remark, it makes me think about galactic algorithms. There are some algorithms which would be better for some tasks on very large inputs, but those inputs would have to be so large that the algorithm will never be used in practice (for instance, numbers with more bits than there are atoms in the universe). My favourite example is an algorithm which does a 1729-dimensional Fourier transform to multiply two numbers.
\end{subparag}
\end{parag} | algo_notes | latex |
algo_notes_block_9 | \begin{parag}{Personal note: Definitions}
We say that $f\left(x\right) \in O\left(g\left(x\right)\right)$, or more informally $f\left(x\right) = O\left(g\left(x\right)\right)$, read ``$f$ is big-O of $g$'', if there exist an $M \in \mathbb{R}_+$ and an $x_0 \in \mathbb{R}$ such that: \[\left|f\left(x\right)\right| \leq M \left|g\left(x\right)\right|, \mathspace \forall x \geq x_0\]
This leads to many other definitions:
\begin{itemize}
\item We say that $f\left(x\right) \in \Omega\left(g\left(x\right)\right)$ when $g\left(x\right) \in O\left(f\left(x\right)\right)$.
\item We say that $f\left(x\right) \in \Theta\left(g\left(x\right)\right)$ when $f\left(x\right) \in O\left(g\left(x\right)\right)$ and $f\left(x\right) \in \Omega\left(g\left(x\right)\right)$. Functions belonging to $\Theta\left(g\left(x\right)\right)$ represent an equivalence class.
\item We say that $f\left(x\right) \in o\left(g\left(x\right)\right)$ when $f\left(x\right) \in O\left(g\left(x\right)\right)$ but $f\left(x\right) \not\in \Theta\left(g\left(x\right)\right)$.
\item We say that $f\left(x\right) \in \omega\left(g\left(x\right)\right)$ when $f\left(x\right) \in \Omega\left(g\left(x\right)\right)$ but $f\left(x\right) \not\in \Theta\left(g\left(x\right)\right)$.
\end{itemize}
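As a small personal example of how the constants can be chosen, take $f\left(x\right) = 3x^2 + 5x$ and $g\left(x\right) = x^2$. For all $x \geq 1$ we have $5x \leq 5x^2$, thus:
\[\left|3x^2 + 5x\right| \leq 8\left|x^2\right|, \mathspace \forall x \geq 1\]
so $M = 8$ and $x_0 = 1$ show that $3x^2 + 5x \in O\left(x^2\right)$. Conversely, $\left|x^2\right| \leq \left|3x^2 + 5x\right|$ for all $x \geq 0$, so $x^2 \in O\left(3x^2 + 5x\right)$, i.e. $3x^2 + 5x \in \Omega\left(x^2\right)$. Together, this gives $3x^2 + 5x \in \Theta\left(x^2\right)$.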
\end{parag} | algo_notes | latex |
algo_notes_block_10 | \begin{parag}{Personal note: Intuition}
We can have the following intuition:
\begin{itemize}
\item $f\left(x\right) \in O\left(g\left(x\right)\right)$ means that $f$ grows slower than (or as fast as) $g$ when $x \to \infty$.
\item $f\left(x\right) \in \Omega\left(g\left(x\right)\right)$ means that $f$ grows faster than (or as fast as) $g$ when $x \to \infty$.
\item $f\left(x\right) \in \Theta\left(g\left(x\right)\right)$ means that $f$ grows exactly as fast as $g$ when $x \to \infty$.
\item $f\left(x\right) \in o\left(g\left(x\right)\right)$ means that $f$ grows strictly slower than $g$ when $x \to \infty$.
\item $f\left(x\right) \in \omega\left(g\left(x\right)\right)$ means that $f$ grows strictly faster than $g$ when $x \to \infty$.
\end{itemize}
The following theorem can also help the intuition.
\end{parag} | algo_notes | latex |
algo_notes_block_11 | \begin{parag}{Personal note: Theorem}
Let $f$ and $g$ be two functions, such that the following limit exists or diverges:
\[\lim_{x \to \infty} \frac{\left|f\left(x\right)\right|}{\left|g\left(x\right)\right|} = \ell \in \mathbb{R} \cup \left\{\infty\right\}\]
We can draw the following conclusions, depending on the value of $\ell $:
\begin{itemize}
\item If $\ell = 0$, then $f\left(x\right) \in o\left(g\left(x\right)\right)$.
\item If $\ell = \infty$, then $f\left(x\right) \in \omega\left(g\left(x\right)\right)$.
\item If $\ell \in \mathbb{R}^*$, then $f\left(x\right) \in \Theta\left(g\left(x\right)\right)$.
\end{itemize} | algo_notes | latex |
algo_notes_block_12 | \begin{subparag}{Proof}
We will only prove the third point, the other two are left as exercises to the reader.
First, we can see that $\ell > 0$, since $\ell \neq 0$ by hypothesis and since $\frac{\left|f\left(x\right)\right|}{\left|g\left(x\right)\right|} \geq 0$ for all $x$.
We can apply the definition of the limit with $\epsilon = \frac{\ell}{2} > 0$ (it is valid for all $\epsilon > 0$, so in particular for this one): there exists an $x_0 \in \mathbb{R}$ such that, for all $x \geq x_0$, we have:
\autoeq{\left|\frac{\left|f\left(x\right)\right|}{\left|g\left(x\right)\right|} - \ell \right| \leq \epsilon = \frac{\ell }{2} \iff -\frac{\ell }{2} \leq \frac{\left|f\left(x\right)\right|}{\left|g\left(x\right)\right|} - \ell \leq \frac{\ell }{2} \iff \frac{\ell }{2} \left|g\left(x\right)\right| \leq \left|f\left(x\right)\right| \leq \frac{3 \ell }{2} \left|g\left(x\right)\right|}
since $\left|g\left(x\right)\right| > 0$.
Since $\frac{\ell }{2} \left|g\left(x\right)\right| \leq \left|f\left(x\right)\right|$ for $x \geq x_0$, we get that $f \in \Omega\left(g\left(x\right)\right)$. Also, since $\left|f\left(x\right)\right| \leq \frac{3 \ell }{2} \left|g\left(x\right)\right|$ for $x \geq x_0$, we get that $f \in O\left(g\left(x\right)\right)$.
We can indeed conclude that $f \in \Theta\left(g\left(x\right)\right)$.
\qed
\end{subparag} | algo_notes | latex |
algo_notes_block_13 | \begin{subparag}{Example}
Let $a \in \mathbb{R}$ and $b \in \mathbb{R}_+$. Let us compute the following ratio:
\[\lim_{n \to \infty} \frac{\left|\left(n + a\right)^b\right|}{\left|n^b\right|} = \lim_{n \to \infty} \left|\left(1 + \frac{a}{n}\right)^{b}\right| = 1\]
which allows us to conclude that $\left(n + a\right)^b \in \Theta\left(n^b\right)$.
\end{subparag} | algo_notes | latex |
algo_notes_block_14 | \begin{subparag}{Side note: Link with series}
You can go read my Analyse 1 notes on my GitHub (in French) if you want more information, but there is an interesting link with series that we can make here.
You can convince yourself that if $a_n \in \Theta\left(b_n\right)$, then $\sum_{n = 1}^{\infty} \left|a_n\right|$ and $\sum_{n = 1}^{\infty} \left|b_n\right|$ have the same nature (they either both converge or both diverge). Indeed, this hypothesis yields that there exist $C_1, C_2 \in \mathbb{R}_+$ and an $n_0 \in \mathbb{N}$ such that, for all $n \geq n_0$:
\[0 \leq C_1 \left|b_n\right| \leq \left|a_n\right| \leq C_2 \left|b_n\right| \implies C_1 \sum_{n = 1}^{\infty} \left|b_n\right| \leq \sum_{n = 1}^{\infty} \left|a_n\right| \leq C_2 \sum_{n = 1}^{\infty} \left|b_n\right|\]
by the comparison criteria.
Also, we know very well the convergence of the series $\sum_{n = 1}^{\infty} \left|\frac{1}{n^p}\right|$ (which converges for all $p > 1$, and diverges otherwise). Combined with Taylor expansions, this allows us to determine very easily the convergence of many series.
For instance, let us consider $\sum_{n = 1}^{\infty} \left(\cos\left(\frac{1}{n}\right) - 1\right)$. We can compute the following limit:
\autoeq{\lim_{n \to \infty} \frac{\left|\cos\left(\frac{1}{n}\right) - 1\right|}{\frac{1}{n^p}} = \lim_{n \to \infty} \left|n^p \left(1 - \frac{\left(\frac{1}{n}\right)^2}{2} + \epsilon\left(\frac{1}{n^4}\right) - 1\right)\right| = \lim_{n \to \infty} \left|-\frac{n^p}{2n^2} + n^p \epsilon\left(\frac{1}{n^4}\right)\right|= \frac{1}{2}}
for $p = 2$. In other words, our series has the same nature as $\sum_{n=1}^{\infty} \left|\frac{1}{n^2}\right|$, which converges. This allows us to conclude that $\sum_{n = 1}^{\infty} \left(\cos\left(\frac{1}{n}\right) - 1\right)$ converges absolutely.
You can see how powerful those tools are.
\end{subparag}
\end{parag} | algo_notes | latex |
algo_notes_block_15 | \begin{parag}{Definition: The sorting problem}
For the \important{sorting problem}, we take a sequence of $n$ numbers $\left(a_1, \ldots, a_n\right)$ as input, and we want to output a reordering of those numbers $\left(a_1', \ldots, a_n'\right)$ such that $a_1' \leq \ldots \leq a_n'$. | algo_notes | latex |
algo_notes_block_16 | \begin{subparag}{Example}
Given the input $\left(5, 2, 4, 6, 1, 3\right)$, a correct output is $\left(1, 2, 3, 4, 5, 6\right)$.
\end{subparag} | algo_notes | latex |
algo_notes_block_17 | \begin{subparag}{Personal note: Remark}
It is important to have the same numbers at the start and at the end. Otherwise, this would allow algorithms such as the Stalin sort (removing all elements which are not in order, leading to a complexity of $\Theta\left(n\right)$), or the Nagasaki sort (clearing the list, leading to a complexity of $\Theta\left(1\right)$).
These are more jokes than real algorithms; here is where I found the Nagasaki sort:
\begin{center}
\small \url{
\end{center}
\end{subparag}
\end{parag} | algo_notes | latex |
algo_notes_block_18 | \begin{parag}{Definition: In place algorithm}
An algorithm solving the sorting problem is said to be \important{in place} when the numbers are rearranged within the array (with at most a constant number of variables outside the array at any time).
\end{parag} | algo_notes | latex |
algo_notes_block_19 | \begin{parag}{Loop invariant}
We will see algorithms which we will need to prove correct. One of the methods to do so is to use a loop invariant: a property that stays true at every iteration of a loop. The idea is very similar to induction.
To use a loop invariant, we need to do three steps. In the \important{initialization}, we show that the invariant is true prior to the first iteration of the loop. In the \important{maintenance}, we show that, if the invariant is true before an iteration, then it remains true before the next iteration. Finally, in the \important{termination}, we use the invariant when the loop terminates to show that our algorithm works.
\end{parag} | algo_notes | latex |
algo_notes_block_20 | \section{Insertion sort}
\begin{filecontents*}[overwrite]{Lecture01/InsertionSort.code}
for j = 2 to n:
key = a[j]
// Insert a[j] into the sorted sequence.
i = j - 1
while i > 0 and a[i] > key
a[i + 1] = a[i]
i = i - 1
a[i+1] = key
\end{filecontents*} | algo_notes | latex |
algo_notes_block_21 | \begin{parag}{Insertion sort}
The idea of \important{insertion sort} is to sort the sequence iteratively: we insert each element at its right place in the already-sorted prefix.
This algorithm can be formulated as:
\importcode{Lecture01/InsertionSort.code}{pseudo}
We can see that this algorithm is in place.
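As a personal sketch, here is the same procedure written in Python on a 0-indexed list (the file name and the \texttt{python} language tag are mine, not from the course):
\begin{filecontents*}[overwrite]{Lecture01/InsertionSortSketch.code}
# Sketch of the insertion sort pseudocode above, on a 0-based Python list.
def insertion_sort(a):
    for j in range(1, len(a)):       # j = 2 to n in the 1-based pseudocode
        key = a[j]
        i = j - 1
        # Shift the sorted prefix a[0..j-1] to the right until key fits.
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i = i - 1
        a[i + 1] = key
    return a

assert insertion_sort([5, 2, 4, 6, 1, 3]) == [1, 2, 3, 4, 5, 6]
\end{filecontents*}
\importcode{Lecture01/InsertionSortSketch.code}{python}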
\end{parag}
\lecture{2}{2022-09-26}{Teile und herrsche}{} | algo_notes | latex |
algo_notes_block_22 | \begin{parag}{Proof}
Let us prove that insertion sort works by using a loop invariant.
We take as an invariant that at the start of each iteration of the outer for loop, the subarray \texttt{a[1\ldots (j-1)]} consists of the elements originally in \texttt{a[1\ldots (j-1)]} but in sorted order.
\begin{enumerate}
\item Before the first iteration of the loop, we have $j = 2$. Thus, the subarray consists only of \texttt{a[1]}, which is trivially sorted.
\item We assume the invariant holds at the beginning of an iteration $j = k$. The body of our inner while loop works by moving the elements \texttt{a[k-1]}, \texttt{a[k-2]}, and so on one step to the right, until it finds the proper position for \texttt{a[k]}, at which point it inserts the value of \texttt{a[k]}. Thus, at the end of the loop, the subarray \texttt{a[1\ldots k]} consists of the elements originally in \texttt{a[1\ldots k]} in a sorted order.
\item The loop terminates when $j = n + 1$. Thus, the loop invariant implies that \texttt{a[1\ldots n]} contains the original elements in sorted order.
\end{enumerate}
\end{parag} | algo_notes | latex |
algo_notes_block_23 | \begin{parag}{Complexity analysis}
We can see that the first line is executed $n$ times, and the lines which do not belong to the inner loop are executed $n-1$ times (the first line of a loop is executed one time more than its body, since we need to do a last comparison before knowing we can exit the loop). We only need to compute how many times the inner loop is executed every iteration.
In the best case, the array is already sorted, meaning that the inner loop is never entered. This leads to $T\left(n\right) = \Theta\left(n\right)$ complexity, where $T\left(n\right)$ is the number of operations required by the algorithm.
In the worst case, the array is sorted in reverse order, meaning that the first line of the inner loop is executed $j$ times. Thus, our complexity is given by:
\[T\left(n\right) = \Theta\left(\sum_{j=2}^{n} j\right) = \Theta\left(\frac{n\left(n+1\right)}{2} - 1\right) = \Theta\left(n^2\right)\]
As mentioned in the previous course, we mainly have to keep in mind the worst case scenario.
\end{parag}
\chapter{Divide and conquer} | algo_notes | latex |
algo_notes_block_24 | \begin{parag}{Divide-and-conquer}
We will use a powerful algorithmic approach: recursively divide the problem into smaller subproblems.
We first \important{divide} the problem into $a$ subproblems of size $\frac{n}{b}$ that are smaller instances of the same problem. We then \important{conquer} the subproblems by solving them recursively (and if the subproblems are small enough, let's say of size less than $c$ for some constant $c$, we can just solve them by brute force). Finally, we \important{combine} the subproblem solutions to give a solution to the original problem.
This gives us the following cost function:
\begin{functionbypart}{T\left(n\right)}
\Theta\left(1\right), \mathspace \text{if } n \leq c \\
a T\left(\frac{n}{b}\right) + D\left(n\right) + C\left(n\right), \mathspace \text{otherwise}
\end{functionbypart}
where $D\left(n\right)$ is the time to divide and $C\left(n\right)$ the time to combine solutions.
\end{parag}
\begin{filecontents*}[overwrite]{Lecture02/MergeSort.code}
MergeSort(A, p, r):
// p is the beginning index and r is the end index; they represent the section of the array we try to sort.
if p < r: // base case
q = floor((p + r)/2) // divide
MergeSort(A, p, q) // conquer
MergeSort(A, q+1, r) // conquer
Merge(A, p, q, r) // combine
\end{filecontents*}
\begin{filecontents*}[overwrite]{Lecture02/MergeMergeSort.code}
Merge(A, p, q, r):
// p is the beginning index of the first subarray, q is the beginning index of the second subarray and r is the end of the second subarray
n1 = q - p + 1 // number of elements in first subarray
n2 = r - q // number of elements in second subarray
let L[1...(n1+1)] and R[1...(n2+1)] be new arrays
for i = 1 to n1:
L[i] = A[p + i - 1]
for j = 1 to n2:
R[j] = A[q + j]
L[n1 + 1] = infinity
L[n2 + 1] = infinity
// Merge the two created subarrays.
i = 1
j = 1
for k = p to r:
// Since both subarrays are sorted, the next element is one of L[i] or R[j]
if L[i] <= R[j]:
A[k] = L[i]
i = i + 1
else:
A[k] = R[j]
j = j + 1
\end{filecontents*} | algo_notes | latex |
algo_notes_block_25 | \begin{parag}{Merge sort}
Merge sort is a divide and conquer algorithm:
\importcode{Lecture02/MergeSort.code}{pseudo}
For it to be efficient, we need to have an efficient merge procedure. Note that merging two sorted subarrays is rather easy: if we have two sorted piles of cards and we want to merge them, we only have to iteratively take the smaller of the two top cards; there cannot be any smaller card coming later, since the piles are sorted. This gives us the following algorithm:
\importcode{Lecture02/MergeMergeSort.code}{pseudo}
We can see that this algorithm is not in place, making it require more memory than insertion sort. | algo_notes | latex |
algo_notes_block_26 | \begin{subparag}{Remark}
The Professor put the following video on the slides, and I like it very much, so here it is (reading the comments, the dancers say ``teile und herrsche'', which means ``divide and conquer''):
\begin{center}
\url{
\end{center}
\end{subparag}
\end{parag}
\lecture{3}{2022-09-30}{Trees which grow in the wrong direction}{} | algo_notes | latex |
algo_notes_block_27 | \begin{parag}{Theorem: Correctness of Merge-Sort}
Assuming that the implementation of the \texttt{merge} procedure is correct, \texttt{mergeSort(A, p, r)} correctly sorts the numbers in $A\left[p\ldots r\right]$. | algo_notes | latex |
algo_notes_block_28 | \begin{subparag}{Proof}
Let's do a proof by induction on $n = r -p$.
\begin{itemize}[left=0pt]
\item When $n = 0$, we have $r = p$, and thus $A\left[p \ldots r\right]$ is trivially sorted.
\item We suppose our statement is true for all $n \in \left\{0, \ldots, k-1\right\}$ for some $k$, and we want to prove it for $n = k$.
By the inductive hypothesis, both \texttt{mergeSort(A, p, q)} and \texttt{mergeSort(A, q+1, r)} successfully sort the two subarrays. Therefore, a correct merge procedure will successfully sort $A\left[p\ldots r\right]$ as required.
\end{itemize}
\end{subparag}
\end{parag} | algo_notes | latex |
algo_notes_block_29 | \begin{parag}{Complexity analysis}
Let's analyse the complexity for merge sort.
Modifying the complexity for divide and conquer, we get:
\begin{functionbypart}{T\left(n\right)}
\Theta\left(1\right), \mathspace \text{if } n = 1\\
2T\left(\frac{n}{2}\right) + \Theta\left(n\right), \mathspace \text{otherwise}
\end{functionbypart}
Let's first try to guess the solution of this recurrence. We can set $\Theta\left(n\right) = c\cdot n$ for some $c$, leading to:
\autoeq{T\left(n\right) = 2T\left(\frac{n}{2}\right) + c\cdot n = 2\left(2T\left(\frac{n}{4}\right) + c \frac{n}{2}\right) + c n = 4T\left(\frac{n}{4}\right) + 2cn = 4\left(2T\left(\frac{n}{8}\right) + c \frac{n}{4}\right) + 2cn = 8T\left(\frac{n}{8}\right) + 3cn}
Thus, it seems that, continuing this enough times, we get:
\[T\left(n\right) = n T\left(1\right) + \log_2\left(n\right) c n \implies T\left(n\right) = \Theta\left(n \log\left(n\right)\right)\]
We still need to prove that this is true. We can do this by induction, and this is then named the substitution method. | algo_notes | latex |
algo_notes_block_30 | \begin{subparag}{Proof: Upper bound}
We want to show that there exists a constant $a > 0$ such that $T\left(n\right) \leq an\log\left(n\right)$ for all $n \geq 2$ (meaning that $T\left(n\right) = O\left(n \log\left(n\right)\right)$), by induction on $n$.
\begin{itemize}[left=0pt]
\item For any constant $n \in \left\{2, 3, 4\right\}$, $T\left(n\right)$ has a constant value; selecting $a$ larger than this value will satisfy the base cases when $n \in \left\{2, 3, 4\right\}$.
\item We assume that our statement is true for all $n \in \left\{2, 3, \ldots, k-1\right\}$ and we want to prove it for $n = k$:
\autoeq{T\left(n\right) = 2T\left(\frac{n}{2}\right) + cn \over{\leq}{IH} 2 \frac{an}{2} \log\left(\frac{n}{2}\right) + cn = an \log\left(\frac{n}{2}\right) + cn = an\log\left(n\right) - an + cn \leq an\log\left(n\right)}
if we select $a \geq c$.
\end{itemize}
We can thus select $a$ to be a positive constant so that both the base case and the inductive step hold.
\end{subparag} | algo_notes | latex |
algo_notes_block_31 | \begin{subparag}{Proof: Lower bound}
We want to show that there exists a constant $b > 0$ such that $T\left(n\right) \geq bn\log\left(n\right)$ for all $n \geq 1$ (meaning that $T\left(n\right) = \Omega\left(n \log\left(n\right)\right)$), by induction on $n$.
\begin{itemize}[left=0pt]
\item For $n = 1$, $T\left(n\right) = c$ and $b n \log\left(n\right) = 0$, so the base case is satisfied for any $b$.
\item We assume that our statement is true for all $n \in \left\{1, \ldots, k-1\right\}$ and we want to prove it for $n = k$:
\autoeq{T\left(n\right) = 2T\left(\frac{n}{2}\right) + cn \over{\geq}{IH} 2 \frac{bn}{2} \log\left(\frac{n}{2}\right) + cn = bn \log\left(\frac{n}{2}\right) + cn = bn\log\left(n\right) - bn + cn \geq bn\log\left(n\right)}
selecting $b \leq c$.
\end{itemize}
We can thus select $b$ to be a positive constant so that both the base case and the inductive step hold.
\end{subparag} | algo_notes | latex |
algo_notes_block_32 | \begin{subparag}{Proof: Conclusion}
Since $T\left(n\right) = O\left(n \log\left(n\right)\right)$ and $T\left(n\right) = \Omega\left(n \log\left(n\right)\right)$, we have proven that $T\left(n\right) = \Theta\left(n\log\left(n\right)\right)$.
\end{subparag} | algo_notes | latex |
algo_notes_block_33 | \begin{subparag}{Remark}
The real recurrence relation for merge-sort is:
\begin{functionbypart}{T\left(n\right)}
c, \mathspace \text{if } n = 1 \\
T\left(\left\lfloor \frac{n}{2} \right\rfloor \right) + T\left(\left\lceil \frac{n}{2} \right\rceil \right) + c \cdot n, \mathspace \text{otherwise}
\end{functionbypart}
Note that we are allowed to take the same $c$ everywhere since we are considering the worst-case, and thus we can take the maximum of the two constants supposed to be there, and call it $c$.
Anyhow, in our proof, we did not consider floor and ceiling functions. Indeed, they make calculations really messy but don't change the final asymptotic result. Thus, when analysing recurrences, we simply assume for simplicity that all divisions evaluate to an integer.
\end{subparag}
\end{parag} | algo_notes | latex |
algo_notes_block_34 | \begin{parag}{Remark}
We have to be careful when using asymptotic notations with induction. For instance, if we know that $T\left(n\right) = 4T\left(\frac{n}{4}\right) + n$, and we want to prove that $T\left(n\right) = O\left(n\right)$, then we cannot just do:
\[T\left(n\right) \over{\leq}{IH} 4c \frac{n}{4} + n = cn + n = n\left(c+1\right) = O\left(n\right)\]
Indeed, we have to clearly state that we want to prove that $T\left(n\right) \leq nc$, and then prove it for the exact constant $c$ during the inductive step. The proof above is wrong and, in fact, $T\left(n\right) \neq O\left(n\right)$.
\end{parag} | algo_notes | latex |
algo_notes_block_35 | \begin{parag}{Other proof: Tree}
Another way of guessing the complexity of merge sort, which works for many recurrences, is thinking of the entire recurrence tree. A recurrence tree is a tree (really?) where each node corresponds to the cost of a subproblem. We can thus sum the costs within each level of the tree to obtain a set of per-level costs, and then sum all the per-level costs to determine the total cost of all levels of recursion.
For merge sort, we can draw the following tree:
\imagehere[0.8]{Lecture03/MergeSortComplexityTree.png}
We can observe that on any level, the amount of work sums up to $cn$. Since there are $\log_2\left(n\right)$ levels, we can guess that $T\left(n\right) = cn \log_2\left(n\right) = \Theta\left(n \log\left(n\right)\right)$. To prove it formally, we would again need to use induction (the substitution method).
\end{parag} | algo_notes | latex |
algo_notes_block_36 | \begin{parag}{Tree: Other example}
Let's do another example for a tree, but for a recurrence to which the master theorem (which we will see in the next lecture) does not apply: we take $T\left(n\right) = T\left(\frac{n}{3}\right) + T\left(\frac{2n}{3}\right) + cn$. The tree looks like:
\imagehere[0.7]{Lecture03/TreeOtherExample.png}
Again, we notice that every level contributes around $cn$, and we have at least $\log_3\left(n\right)$ full levels. Therefore, it seems reasonable to say that $an\log_3\left(n\right) \leq T\left(n\right) \leq bn\log_{\frac{3}{2}}\left(n\right)$ and thus $T\left(n\right) = \Theta\left(n\log\left(n\right)\right)$.
\end{parag}
\lecture{4}{2022-10-03}{Master theorem}{} | algo_notes | latex |
algo_notes_block_37 | \begin{parag}{Example}
Let's look at the following recurrence:
\[T\left(n\right) = T\left(\frac{n}{4}\right) + T\left(\frac{3}{4}n\right) + 1\]
We want to show it is $\Theta\left(n\right)$. | algo_notes | latex |
algo_notes_block_38 | \begin{subparag}{Upper bound}
Let's prove that there exists a $b$ such that $T\left(n\right) \leq bn$. We consider the base case to be correct, by choosing $b$ to be large enough.
Let's do the inductive step. We get:
\autoeq{T\left(n\right) = T\left(\frac{1}{4}n\right) + T\left(\frac{3}{4}n\right) + 1 \leq b \frac{1}{4} n + b \frac{3}{4} n + 1 = bn + 1}
But we wanted $bn$, so it proves nothing. We could consider that our guess is wrong, or do another proof.
\end{subparag} | algo_notes | latex |
algo_notes_block_39 | \begin{subparag}{Upper bound (better)}
Let's now instead use the stronger induction hypothesis that $T\left(n\right) \leq bn - b'$. This gives us:
\autoeq{T\left(n\right) = T\left(\frac{1}{4}n\right) + T\left(\frac{3}{4}n\right) + 1 \leq b \frac{1}{4} n - b' + b \frac{3}{4} n - b' + 1 = bn - b' + \left(1 - b'\right) \leq bn - b'}
as long as $b' \geq 1$.
Thus, taking $b$ such that the base case works, we have proven that $T\left(n\right) \leq bn - b' \leq bn$, and thus $T\left(n\right) \in O\left(n\right)$. We needed to make our claim stronger for it to work, and this is something that is often needed.
\end{subparag}
\end{parag} | algo_notes | latex |
algo_notes_block_40 | \begin{parag}{Master theorem}
Let $a \geq 1$ and $b > 1$ be constants. Also, let $T\left(n\right)$ be a function defined on the nonnegative integers by the following recurrence:
\[T\left(n\right) = aT\left(\frac{n}{b}\right) + f\left(n\right)\]
Then, $T\left(n\right)$ has the following asymptotic bounds:
\begin{enumerate}
\item If $f\left(n\right) = O\left(n^{\log_b\left(a\right) - \epsilon}\right)$ for some constant $\epsilon > 0$, then $T\left(n\right) = \Theta\left(n^{\log_b\left(a\right)}\right)$.
\item If $f\left(n\right) = \Theta\left(n^{\log_b\left(a\right)}\right)$, then $T\left(n\right) = \Theta\left(n^{\log_b\left(a\right)} \log\left(n\right)\right)$.
\item If $f\left(n\right) = \Omega\left(n^{\log_b\left(a\right) + \epsilon}\right)$ for some constant $\epsilon > 0$, and if $a f\left(\frac{n}{b}\right) \leq c f\left(n\right)$ for some constant $c < 1$ and all sufficiently large $n$, then $T\left(n\right)= \Theta\left(f\left(n\right)\right)$. Note that the second condition holds for most functions.
\end{enumerate} | algo_notes | latex |
algo_notes_block_41 | \begin{subparag}{Example}
Let us consider the case for merge sort, thus $T\left(n\right) = 2T\left(\frac{n}{2}\right) + cn$. We get $a = b = 2$, so $\log_b\left(a\right) = 1$ and:
\[f\left(n\right) = \Theta\left(n^1\right) = \Theta\left(n^{\log_b\left(a\right)}\right)\]
This means that we are in the second case, telling us:
\[T\left(n\right) = \Theta\left(n^{\log_b\left(a\right)} \log\left(n\right)\right) = \Theta\left(n \log\left(n\right)\right)\]
\end{subparag} | algo_notes | latex |
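\begin{subparag}{Personal note: More examples}
As additional (personal) examples of the other two cases: for $T\left(n\right) = 4T\left(\frac{n}{2}\right) + n$, we have $a = 4$ and $b = 2$, so $n^{\log_b\left(a\right)} = n^2$ and $f\left(n\right) = n = O\left(n^{2 - \epsilon}\right)$ for $\epsilon = 1$; this is the first case, giving $T\left(n\right) = \Theta\left(n^2\right)$. For $T\left(n\right) = 2T\left(\frac{n}{2}\right) + n^2$, we have $n^{\log_b\left(a\right)} = n$ and $f\left(n\right) = n^2 = \Omega\left(n^{1 + \epsilon}\right)$ for $\epsilon = 1$; moreover, $2f\left(\frac{n}{2}\right) = \frac{n^2}{2} \leq \frac{1}{2} f\left(n\right)$, so the third case applies and $T\left(n\right) = \Theta\left(n^2\right)$.
\end{subparag}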
algo_notes_block_42 | \begin{subparag}{Tree}
To learn this theorem, we only need to get the intuition of why it works, and to be able to reconstruct it. To do so, we can draw a tree. The depth of this tree is $\log_b\left(n\right)$, and there are $a^{\log_b\left(n\right)} = n^{\log_b\left(a\right)}$ leaves. If a node does $f\left(n\right)$ work, then its $a$ children together do $af\left(\frac{n}{b}\right)$ work.
\begin{enumerate}[left=0pt]
\item If $f$ grows slowly, a parent does less work than all its children combined. This means that most of the work is done at the leaves. Thus, the only thing that matters is the number of leaves of the tree: $n^{\log_b\left(a\right)}$.
\item If $f$ grows such that every child contributes exactly the same as their parents, then every level does the same work. Since we have $n^{\log_b\left(a\right)}$ leaves which each contribute a constant amount of work, the last level adds up to $c\cdot n^{\log_b\left(a\right)}$ work, and thus every level adds up to this value. We have $\log_b\left(n\right)$ levels, meaning that we have a total work of $c n^{\log_b\left(a\right)} \log_b\left(n\right)$.
\item If $f$ grows fast, then a parent does more work than all its children combined. This means that all the work is done at the root and, thus, that all that matters is $f\left(n\right)$.
\end{enumerate}
\end{subparag}
\end{parag} | algo_notes | latex |
algo_notes_block_43 | \begin{parag}{Application}
Let's use a modified version of merge sort in order to count the number of inversions in an array $A$ (an inversion is a pair $i < j$ such that $A\left[j\right] < A\left[i\right]$, where $A$ never has twice the same value).
The idea is that we can just add a return value to merge sort: the number of inversions. In the trivial case $n = 0$, there is no inversion. For the recursive part, we can just add the number of inversions of the two sub-cases and the number of inversions we get from the merge procedure (which is the complicated part).
For the merge procedure, since the two subarrays are sorted, we notice that if the element we are considering from the first subarray is greater than the one we are considering from the second subarray, then we need to add $\left(q - i + 1\right)$ (the number of elements remaining in the first subarray) to our current count.
\svghere{Lecture04/CountInversions.svg}
This solution is $\Theta\left(n \log\left(n\right)\right)$ and thus much better than the trivial $\Theta\left(n^2\right)$ double for-loop solution. | algo_notes | latex |
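\begin{subparag}{Personal note: Sketch}
Here is a small Python sketch of this modified merge sort (my own code, not from the course; it copies the subarrays instead of working in place, for clarity):
\begin{filecontents*}[overwrite]{Lecture04/CountInversionsSketch.code}
# Returns (sorted copy of a, number of inversions in a).
def count_inversions(a):
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, inv_left = count_inversions(a[:mid])
    right, inv_right = count_inversions(a[mid:])
    merged, inv_split = [], 0
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i = i + 1
        else:
            # All remaining elements of left are greater than right[j]:
            # each of them forms an inversion with it.
            inv_split = inv_split + (len(left) - i)
            merged.append(right[j])
            j = j + 1
    merged = merged + left[i:] + right[j:]
    return merged, inv_left + inv_right + inv_split

assert count_inversions([2, 4, 1, 3, 5])[1] == 3  # inversions: (2,1), (4,1), (4,3)
\end{filecontents*}
\importcode{Lecture04/CountInversionsSketch.code}{python}
\end{subparag}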
algo_notes_block_44 | \begin{subparag}{Remark}
We can notice that there are at most $\frac{n\left(n-1\right)}{2}$ inversions (in a reverse-sorted array). It may seem surprising that our algorithm manages to count up to this value with a smaller complexity. This comes from the fact that, sometimes, we add much more than 1 at the same time in the merge procedure.
\end{subparag}
\end{parag}
\lecture{5}{2022-10-07}{Fast matrix multiplication}{} | algo_notes | latex |
algo_notes_block_45 | \begin{parag}{Maximum subarray problem}
We have an array of values representing stock prices, and we want to find when we should have bought and when we should have sold (retrospectively, so this is no investment advice). We want to buy when the price is as low as possible and sell when it is as high as possible. Note that we cannot just take the all-time minimum and the all-time maximum, since the maximum could be before the minimum.
Let's switch our perspective by instead considering the array of changes: the difference between the price on day $i$ and on day $i-1$. We then want to find the contiguous subarray that has the maximum sum; this is named the \important{maximum subarray problem}. In other words, we want to find $i \leq j$ such that $A\left[i\ldots j\right]$ has the biggest sum possible. For instance, for $A = \left[1, -4, 3, -4\right]$, we have $i = j = 3$, and the sum is 3.
The brute-force solution, in which we compute each sum incrementally, has a runtime of $\Theta\left(\binom{n}{2}\right) = \Theta\left(n^2\right)$, which is not great.
Let's now instead use a divide-and-conquer method. Only the merge procedure is complicated: we must not miss solutions that cross the midpoint. However, if we know that we want a subarray which crosses the midpoint, we can find the best $i$ in the left part so that the subarray from $i$ to the midpoint has maximum sum (which takes linear time), and the best $j$ so that the subarray from the midpoint to $j$ has maximum sum (which also takes linear time). This means that we get three candidate subarrays: one that is only in the left part, one that crosses the midpoint and one that is only in the right part. These cover all possible subarrays, and we can just take the best one amongst those three.
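As a personal sketch of this divide-and-conquer approach in Python (the file name and \texttt{python} tag are mine; the array contains the daily changes):
\begin{filecontents*}[overwrite]{Lecture05/MaxSubarraySketch.code}
# Returns the maximum sum of a non-empty contiguous subarray of a[lo..hi].
def max_subarray(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo == hi:                              # base case: a single element
        return a[lo]
    mid = (lo + hi) // 2
    best_left = max_subarray(a, lo, mid)      # best subarray entirely on the left
    best_right = max_subarray(a, mid + 1, hi) # best subarray entirely on the right
    # Best subarray crossing the midpoint: extend left from mid, right from mid+1.
    s, left_part = 0, float("-inf")
    for i in range(mid, lo - 1, -1):
        s = s + a[i]
        left_part = max(left_part, s)
    s, right_part = 0, float("-inf")
    for j in range(mid + 1, hi + 1):
        s = s + a[j]
        right_part = max(right_part, s)
    return max(best_left, best_right, left_part + right_part)

assert max_subarray([1, -4, 3, -4]) == 3
\end{filecontents*}
\importcode{Lecture05/MaxSubarraySketch.code}{python}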
We get that the divide step is $\Theta\left(1\right)$, the conquer step solves two problems each of size $\frac{n}{2}$, and the merge time takes linear time. Thus, we have the exact same recurrence relation as for merge sort, giving us a time complexity of $\Theta\left(n\log\left(n\right)\right)$. | algo_notes | latex |
algo_notes_block_46 | \begin{subparag}{Remark}
We will make a $\Theta\left(n\right)$ algorithm to solve this problem in the third exercise series.
\end{subparag}
\end{parag} | algo_notes | latex |
algo_notes_block_47 | \begin{parag}{Problem}
We want to multiply quickly two numbers. The regular algorithm seen in primary school is $O\left(n^2\right)$, but we think that we may be able to go faster.
We are given two integers $a, b$ with $n$ bits each (they are given to us through arrays of bits), and we want to output $a\cdot b$. This can be important for cryptography for instance.
\end{parag} | algo_notes | latex |
algo_notes_block_48 | \begin{parag}{Fast multiplication}
We want to use a divide and conquer strategy.
Let's say we have an array of values $a_0, \ldots, a_{n-1}$ giving us $a$, and an array of values $b_0, \ldots, b_{n-1}$ giving us $b$ (we will use base 10 here, but it works for any base):
\[a = \sum_{i=0}^{n-1} a_i 10^{i}, \mathspace b = \sum_{i=0}^{n-1} b_i 10^{i}\]
Let's divide our numbers in the middle. We get four numbers $a_L$, $a_H$, $b_L$ and $b_H$, defined as:
\[a_L = \sum_{i=0}^{\frac{n}{2} - 1} a_i 10^{i}, \mathspace a_H = \sum_{i=\frac{n}{2}}^{n-1} a_i 10^{i - \frac{n}{2}}, \mathspace b_L = \sum_{i=0}^{\frac{n}{2} - 1} b_i 10^{i}, \mathspace b_H = \sum_{i=\frac{n}{2}}^{n-1} b_i 10^{i - \frac{n}{2}}\]
We can represent this geometrically:
\svghere[0.25]{Lecture05/FastMultiplicationBitsSplitted.svg}
We get the following relations:
\[a = a_L + 10^{\frac{n}{2}} a_H, \mathspace b = b_{L} + 10^{\frac{n}{2}} b_H\]
Thus, the multiplication is given by:
\[ab = \left(a_L + 10^{\frac{n}{2}}a_H\right) \left(b_L + 10^{\frac{n}{2}}b_H\right) = a_L b_L + 10^{\frac{n}{2}} \left(a_H b_L + b_H a_L\right) + 10^n a_H b_H\]
This gives us a recursive algorithm. We compute $a_L b_L$, $a_H b_L$, $a_L b_H$ and $a_H b_H$ recursively. We can then do the corresponding shifts, and finally add everything up. | algo_notes | latex |
algo_notes_block_49 | \begin{subparag}{Complexity analysis}
The recurrence of this algorithm is given by:
\[T\left(n\right) = 4T\left(\frac{n}{2}\right) + n\]
since addition takes a linear time.
However, this solves to $T\left(n\right) = \Theta\left(n^2\right)$ by the master theorem \frownie.
\end{subparag}
\end{parag} | algo_notes | latex |
algo_notes_block_50 | \begin{parag}{Karatsuba algorithm}
Karatsuba, a mathematician, realised that we do not need 4 multiplications. Indeed, let's compute the following value:
\[\left(a_L + a_H\right)\left(b_L + b_H\right) = a_L b_L + a_H b_H + a_H b_L + b_H a_L\]
This means that, having computed $a_L b_L$ and $a_H b_H$, we can extract $a_H b_L + b_H a_L$ from the product hereinabove by computing:
\[\left(a_L + a_H\right)\left(b_L + b_H\right) - a_L b_L - a_H b_H = a_H b_L + b_H a_L\]
Thus, compared to what we did before, we this time only need three multiplications: $\left(a_L + a_H\right)\left(b_L + b_H\right)$, $a_L b_L$ and $a_H b_H$. | algo_notes | latex |
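As a personal sketch, this gives the following Python procedure (my own code; Python multiplies big integers natively, so this is only illustrative, and the file name and \texttt{python} tag are mine):
\begin{filecontents*}[overwrite]{Lecture05/KaratsubaSketch.code}
# Karatsuba multiplication of two nonnegative integers, splitting in base 10.
def karatsuba(a, b):
    if a < 10 or b < 10:                      # small numbers: multiply directly
        return a * b
    m = max(len(str(a)), len(str(b))) // 2
    a_high, a_low = divmod(a, 10 ** m)
    b_high, b_low = divmod(b, 10 ** m)
    low = karatsuba(a_low, b_low)             # a_L * b_L
    high = karatsuba(a_high, b_high)          # a_H * b_H
    # (a_L + a_H)(b_L + b_H) - a_L b_L - a_H b_H = a_H b_L + a_L b_H
    mid = karatsuba(a_low + a_high, b_low + b_high) - low - high
    return low + mid * 10 ** m + high * 10 ** (2 * m)

assert karatsuba(1234, 5678) == 1234 * 5678
\end{filecontents*}
\importcode{Lecture05/KaratsubaSketch.code}{python}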
algo_notes_block_51 | \begin{subparag}{Complexity analysis}
The recurrence of this algorithm is given by:
\[T\left(n\right) = 3T\left(\frac{n}{2}\right) + n\]
This solves to $T\left(n\right) = \Theta\left(n^{\log_2\left(3\right)}\right)$, which is better than the primary school algorithm.
Note that we are cheating a bit on the complexity, since computing $\left(a_L + a_H\right)\left(b_L + b_H\right)$ actually costs $T\left(\frac{n}{2} + 1\right)$. However, as mentioned in the last lesson, we don't really care about floor and ceiling functions (nor this $+1$).
\end{subparag}
\end{parag} | algo_notes | latex |
algo_notes_block_52 | \begin{parag}{Remark}
Note that, in most of the cases, we are working with 64-bit numbers which can be multiplied in constant time on a 64-bit CPU. The algorithm above is in fact really useful for huge numbers (in cryptography for instance).
\end{parag} | algo_notes | latex |
algo_notes_block_53 | \begin{parag}{Problem}
We are given two $n \times n$ matrices, $A = \left(a_{ij}\right)$ and $B = \left(b_{ij}\right)$, and we want to output an $n \times n$ matrix $C = \left(c_{ij}\right)$ such that $C = AB$.
Basically, when computing the value of $c_{ij}$, we compute the dot-product of the $i$\Th row of $A$ and the $j$\Th column of $B$. | algo_notes | latex |
algo_notes_block_54 | \begin{subparag}{Example}
For instance, for $n = 2$:
\autoeq{\begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix} = \begin{pmatrix} a_{11} b_{11} + a_{12} b_{21} & a_{11} b_{12} + a_{12} b_{22} \\ a_{21} b_{11} + a_{22} b_{21} & a_{21} b_{12} + a_{22} b_{22}\end{pmatrix}}
\end{subparag}
\end{parag}
\begin{filecontents*}[overwrite]{Lecture05/naiveMatrixMultiplication.code}
let C be a new nxn matrix
for i = 1 to n
    for j = 1 to n
        c[i][j] = 0
        for k = 1 to n
            c[i][j] = c[i][j] + a[i][k]*b[k][j]
\end{filecontents*} | algo_notes | latex |
algo_notes_block_55 | \begin{parag}{Naive algorithm}
The naive algorithm is:
\importcode{Lecture05/naiveMatrixMultiplication.code}{pseudo} | algo_notes | latex |
algo_notes_block_56 | \begin{subparag}{Complexity}
There are three nested for-loops, so we get a runtime of $\Theta\left(n^3\right)$.
\end{subparag}
\end{parag} | algo_notes | latex |
algo_notes_block_57 | \begin{parag}{Divide and conquer}
We can realise that, when multiplying matrices, this is like multiplying submatrices. If we have $A$ and $B$ being two $n \times n$ matrices, then we can split them into submatrices and get:
\[\begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix} = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix} \]
where those elements are matrices.
In other words, we do have that:
\[C_{11} = A_{11} B_{11} + A_{12} B_{21}\]
and similarly for all other elements. | algo_notes | latex |
algo_notes_block_58 | \begin{subparag}{Complexity}
Since we are splitting our multiplication into 8 matrix multiplications that each need the multiplication of two $\frac{n}{2} \times \frac{n}{2}$ matrices, we get the following recurrence relation:
\[T\left(n\right) = 8T\left(\frac{n}{2}\right) + n^2\]
since adding two matrices takes $O\left(n^2\right)$ time.
The master theorem tells us that we have $T\left(n\right) = \Theta\left(n^3\right)$, which is no improvement.
\end{subparag}
\end{parag} | algo_notes | latex |
algo_notes_block_59 | \begin{parag}{Strassen's algorithm}
Strassen realised that we only need to perform 7 recursive multiplications of $\frac{n}{2} \times \frac{n}{2}$ rather than $8$. This gives us the recurrence:
\[T\left(n\right) = 7T\left(\frac{n}{2}\right) + \Theta\left(n^2\right)\]
where the $\Theta\left(n^2\right)$ comes from additions, subtractions and copying some matrices.
This solves to $T\left(n\right) = \Theta\left(n^{\log_2\left(7\right)}\right)$ by the master theorem, which is better!
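As a personal reminder (these explicit formulas are not derived in these notes; they are the standard ones found in the literature), one valid choice of the seven products is:
\[M_1 = \left(A_{11} + A_{22}\right)\left(B_{11} + B_{22}\right), \mathspace M_2 = \left(A_{21} + A_{22}\right)B_{11}, \mathspace M_3 = A_{11}\left(B_{12} - B_{22}\right), \mathspace M_4 = A_{22}\left(B_{21} - B_{11}\right)\]
\[M_5 = \left(A_{11} + A_{12}\right)B_{22}, \mathspace M_6 = \left(A_{21} - A_{11}\right)\left(B_{11} + B_{12}\right), \mathspace M_7 = \left(A_{12} - A_{22}\right)\left(B_{21} + B_{22}\right)\]
which are then combined as:
\[C_{11} = M_1 + M_4 - M_5 + M_7, \mathspace C_{12} = M_3 + M_5, \mathspace C_{21} = M_2 + M_4, \mathspace C_{22} = M_1 - M_2 + M_3 + M_6\]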
\end{parag} | algo_notes | latex |
algo_notes_block_60 | \begin{parag}{Remark}
Strassen was the first to beat the $\Theta\left(n^3\right)$ bound, and since then algorithms with better and better complexity have been found (even though the best ones currently known are galactic algorithms).
\end{parag}
\cleardoublepage
\lecture{6}{2022-10-10}{Heap sort}{}
\chapterafterlecture{Great data structures yield great algorithms} | algo_notes | latex |
algo_notes_block_61 | \begin{parag}{Nearly-complete binary tree}
A binary tree of depth $d$ is nearly complete if the first $d-1$ levels are full, and, at level $d$, if a node is present, then all nodes to its left must also be present. | algo_notes | latex |
algo_notes_block_62 | \begin{subparag}{Terminology}
The size of a tree is its number of vertices.
\end{subparag} | algo_notes | latex |
algo_notes_block_63 | \begin{subparag}{Example}
For instance, the tree on the left is a nearly-complete binary tree of depth $3$, but not the one on the right:
\imagehere{Lecture06/NearlyCompleteBinaryTreeExample.png}
Both binary trees are of size 10.
\end{subparag}
\end{parag} | algo_notes | latex |
algo_notes_block_64 | \begin{parag}{Heap}
A \important{heap} (or max-heap) is a nearly-complete binary tree such that, for every node $i$, the key (value stored at that node) of its children is less than or equal to its key. | algo_notes | latex |
algo_notes_block_65 | \begin{subparag}{Examples}
For instance, the nearly complete binary tree of depth 3 of the left is a max-heap, but not the one on the right:
\imagehere{Lecture06/MaxHeapExample.png}
\end{subparag} | algo_notes | latex |
algo_notes_block_66 | \begin{subparag}{Observations}
We notice that the maximum number is necessarily at the root. Also, along any path from a leaf up to the root, the keys form a non-decreasing sequence.
\end{subparag} | algo_notes | latex |
algo_notes_block_67 | \begin{subparag}{Remark 1}
We can define the min-heap to be like the max-heap, but the property each node follows is that the key of its children is greater than or equal to its key.
\end{subparag} | algo_notes | latex |
algo_notes_block_68 | \begin{subparag}{Remark 2}
We must not confuse heaps and binary-search trees (which we will define later), which are very similar but have a more restrictive property.
\end{subparag}
\end{parag} | algo_notes | latex |
algo_notes_block_69 | \begin{parag}{Height}
The height of a node is defined to be the number of edges on a \textit{longest} simple path from the node \textit{down} to a leaf. | algo_notes | latex |
algo_notes_block_70 | \begin{subparag}{Example}
For instance, in the following picture, the node holding 10 has height 1, the node holding 14 has height 2, and the one holding a 2 has height 0.
\imagehere[0.7]{Lecture06/MaxHeapExample2.png}
\end{subparag} | algo_notes | latex |
algo_notes_block_71 | \begin{subparag}{Remark}
We note that, if we have $n$ nodes, we can bound the height $h$ of any node:
\[h \leq \log_2\left(n\right)\]
Also, we notice that the height of the root is the largest height of a node from the tree. This is defined to be the \important{height of the heap}. We notice it is thus $\Theta\left(\log_2\left(n\right)\right)$.
\end{subparag}
\end{parag} | algo_notes | latex |
algo_notes_block_72 | \begin{parag}{Storing a heap}
We will store a heap in an array, layer by layer: we take the first layer and store it in the array, then we take the next layer and store it right after, and we continue this way until the end.
Let's consider that we store our numbers in an array with indices starting at 1. The children of a node $A\left[i\right]$ are stored in $A\left[2i\right]$ and $A\left[2i + 1\right]$. Also, if $i > 1$, the parent of the node $A\left[i\right]$ is $A\left[\left\lfloor \frac{i}{2} \right\rfloor \right]$.
Using this method, we realise that we do not need a pointer to the left and right elements for each node. | algo_notes | latex |
algo_notes_block_73 | \begin{subparag}{Example}
For instance, let's consider again the following tree, but considering the index of each node:
\imagehere[0.7]{Lecture06/MaxHeapExample2-Array.png}
This would be stored in memory as:
\[A = \left[16, 14, 10, 8, 7, 9, 3, 2, 4, 1\right]\]
Then, the left child of $i = 3$ is $\text{left}\left(i\right) = 2\cdot 3 = 6$, and its right child is $\text{right}\left(i\right) = 2\cdot 3 + 1 = 7$, as expected.
\end{subparag}
\end{parag}
\begin{filecontents*}[overwrite]{Lecture06/maxHeapify.code}
procedure maxHeapify(A, i, n):
l = left(i) // 2i
r = right(i) // 2i + 1
if l <= n and A[l] > A[i] // don't want an overflow, so check l <= n
largest = l
else
largest = i
if r <= n and A[r] > A[largest]
largest = r
if largest != i
swap(A, i, largest) // swap A[i] and A[largest]
maxHeapify(A, largest, n)
\end{filecontents*} | algo_notes | latex |
algo_notes_block_74 | \begin{parag}{Max heapify}
To manipulate a heap, we need to \important{max-heapify}. Given an $i$ such that the subtrees of $i$ are heaps (this condition is important), this algorithm ensures that the subtree rooted at $i$ is a heap (satisfying the heap property). The only violation we could have is the root of one of the subtrees being larger than the node $i$.
So, to fix our tree, we compare $A\left[i\right], A\left[\texttt{left}\left(i\right)\right]$ and $A\left[\texttt{right}\left(i\right)\right]$. If necessary, we swap $A\left[i\right]$ with the largest of the two children to get the heap property. This could break the previous sub-heap, so we need to continue this process, comparing and swapping down the heap, until the subtree rooted at $i$ is a max-heap. We could write this algorithm in pseudocode as:
\importcode{Lecture06/maxHeapify.code}{pseudo} | algo_notes | latex |
algo_notes_block_75 | \begin{subparag}{Complexity}
Asymptotically, we do a constant amount of work per level as we go down from node $i$, so at most proportionally to its height, yielding a complexity of $O\left(\text{height}\left(i\right)\right)$.
Also, we are working in place: apart from the $\Theta\left(n\right)$ array itself, we only use a constant amount of extra space.
\end{subparag} | algo_notes | latex |
algo_notes_block_76 | \begin{subparag}{Remark}
This procedure is the main primitive we have to work with heaps, it is really important.
\end{subparag}
\end{parag}
\begin{filecontents*}[overwrite]{Lecture06/buildMaxHeap.code}
procedure buildMaxHeap(A, n)
for i = floor(n/2) downto 1
maxHeapify(A, i, n)
\end{filecontents*} | algo_notes | latex |
algo_notes_block_77 | \begin{parag}{Building a heap}
To make a heap from an unordered array $A$ of length $n$, we can use the following \texttt{buildMaxHeap} procedure:
\importcode{Lecture06/buildMaxHeap.code}{pseudo}
The idea is that the nodes with index strictly larger than $\left\lfloor \frac{n}{2} \right\rfloor $ are leaves (no such node $i$ can have a left child, since it would be at index $2i > n$), and leaves are trivial subheaps. Then, we can build increasingly higher heaps through the \texttt{maxHeapify} procedure. Note that we cannot loop in the other direction (from 1 to $\left\lfloor \frac{n}{2} \right\rfloor $), since \texttt{maxHeapify} does not create a heap when the subtrees are not heaps. | algo_notes | latex |
algo_notes_block_78 | \begin{subparag}{Complexity}
We can use the fact that \texttt{maxHeapify} is $O\left(\text{height}\left(i\right)\right)$ to compute the complexity of our new procedure. We can note that there are approximately $2^\ell $ nodes at the $\ell $\Th level (this is not really true for the last level, but it is not important) and since we have $\ell + h \approx \log_2\left(n\right)$ (the sum of the height and the level of a given node is approximately the height of our tree), we get that we have approximately $n\left(h\right) = 2^{\log_2\left(n\right) - h} = n2^{-h}$ nodes at height $h$. This yields:
\[T\left(n\right) = \sum_{h=0}^{\log_2\left(n\right)} n\left(h\right) O\left(h\right) = \sum_{h=0}^{\log_2\left(n\right)} \frac{n}{2^h} O\left(h\right) = O\left(n\sum_{h=0}^{\log_2\left(n\right)} \frac{h}{2^h}\right)\]
However, we notice that:
\[\sum_{h=0}^{\infty} \frac{h}{2^h} = \frac{\frac{1}{2}}{\left(1 - \frac{1}{2}\right)^2} = 2\]
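As a personal side note, this identity follows from differentiating the geometric series: for $\left|x\right| < 1$,
\[\sum_{h=0}^{\infty} x^h = \frac{1}{1-x} \implies \sum_{h=1}^{\infty} h x^{h-1} = \frac{1}{\left(1-x\right)^2} \implies \sum_{h=0}^{\infty} h x^h = \frac{x}{\left(1-x\right)^2}\]
and we then evaluate at $x = \frac{1}{2}$.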
Thus, we get that our function is bounded by $O\left(n\right)$.
\end{subparag} | algo_notes | latex |
algo_notes_block_79 | \begin{subparag}{Correctness}
To prove the correctness of this algorithm, we can use a loop invariant: at the start of every iteration of the for loop, each node $i+1, \ldots, n$ is the root of a max-heap.
\begin{enumerate}[left=0pt]
\item At the start, each node $\left\lfloor \frac{n}{2} \right\rfloor + 1, \ldots, n $ is a leaf, which is the root of a trivial max-heap. Since, at the start, $i = \left\lfloor \frac{n}{2} \right\rfloor $, the initialisation of the loop invariant is true.
\item We notice that both children of the node $i$ are indexed higher than $i$ and thus, by the loop invariant, they are both roots of max-heaps. This means that the \texttt{maxHeapify} procedure makes node $i$ a max-heap root. This means that the invariant stays true after each iteration.
\item The loop terminates when $i = 0$. By our loop invariant, each node (notably node 1) is the root of a max-heap.
\end{enumerate}
\end{subparag}
\end{parag}
\begin{filecontents*}[overwrite]{Lecture06/Heapsort.code}
procedure heapsort(A, n)
buildMaxHeap(A, n)
for i = n downto 2
swap(A, 1, i) // swap A[1] and A[i]
maxHeapify(A, 1, i-1)
\end{filecontents*} | algo_notes | latex |
algo_notes_block_80 | \begin{parag}{Heapsort}
Now that we built our heap, we can use it to sort our array:
\importcode{Lecture06/Heapsort.code}{pseudo}
A max-heap is only useful for one thing: getting the maximum element. When we get it, we swap it into its right place. We can then max-heapify the new tree (without the element we put in the right place), and start again. | algo_notes | latex |
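As a personal sketch, here is the whole pipeline (\texttt{maxHeapify}, \texttt{buildMaxHeap} and \texttt{heapsort}) in Python on a 0-indexed list (my own code; the index formulas are shifted by one compared to the 1-based pseudocode, and the file name and \texttt{python} tag are mine):
\begin{filecontents*}[overwrite]{Lecture06/HeapsortSketch.code}
def max_heapify(a, i, n):
    l, r = 2 * i + 1, 2 * i + 2          # children of i in a 0-based array
    largest = i
    if l < n and a[l] > a[largest]:
        largest = l
    if r < n and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, n)       # repair the subtree we may have broken

def build_max_heap(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):  # all internal nodes, bottom-up
        max_heapify(a, i, n)

def heapsort(a):
    build_max_heap(a)
    for end in range(len(a) - 1, 0, -1):
        a[0], a[end] = a[end], a[0]      # move the current maximum to its place
        max_heapify(a, 0, end)           # repair the heap on a[0..end-1]
    return a

values = [16, 4, 10, 14, 7, 9, 3, 2, 8, 1]
assert heapsort(values) == sorted([16, 4, 10, 14, 7, 9, 3, 2, 8, 1])
\end{filecontents*}
\importcode{Lecture06/HeapsortSketch.code}{python}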
algo_notes_block_81 | \begin{subparag}{Complexity}
We run the heap repair (which runs in $O\left(\log\left(n\right)\right)$) $O\left(n\right)$ times, thus our algorithm has complexity $O\left(n\log_2\left(n\right)\right)$.
It is interesting to see that, here, the good complexity comes from a really good data structure. This is basically like selection sort (which repeatedly finds the biggest element and puts it at the end, running in $O\left(n^2\right)$), but where we access the maximum in constant time.
\end{subparag} | algo_notes | latex |
algo_notes_block_82 | \begin{subparag}{Remark}
We can note that, unlike Merge Sort, this sorting algorithm is in place.
\end{subparag}
\end{parag}
\lecture{7}{2022-10-14}{Queues, stacks and linked list}{} | algo_notes | latex |
algo_notes_block_83 | \begin{parag}{Definition: Priority queue}
A priority queue maintains a dynamic set $S$ of elements, where each element has a key (an associated value that regulates its importance). This is a more constraining data structure than an array, since we cannot access an arbitrary element.
We want to have the following operations:
\begin{itemize}[left=0pt]
\item \texttt{Insert(S, x)}: inserts the element $x$ into $S$.
\item \texttt{Maximum(S)}: Returns the element of $S$ with the largest key.
\item \texttt{Extract-Max(S)}: removes and returns the element of $S$ with the largest key.
\item \texttt{Increase-Key(S, x, k)}: increases the value of element $x$'s key to $k$, assuming that $k$ is greater than its current key value.
\end{itemize} | algo_notes | latex |
algo_notes_block_84 | \begin{subparag}{Usage}
Priority queues have many usages; the most important one for us will be in Dijkstra's algorithm, which we will see later in this course.
\end{subparag}
\end{parag}
\begin{filecontents*}[overwrite]{Lecture07/PriorityQueueHeapIncreaseKey.code}
procedure HeapIncreaseKey(A, i, key)
if key < A[i]:
error "new key is smaller than current key"
A[i] = key
while i > 1 and A[Parent(i)] < A[i]
exchange A[i] with A[Parent(i)]
i = Parent(i)
\end{filecontents*}
\begin{filecontents*}[overwrite]{Lecture07/PriorityQueueHeapInsert.code}
procedure HeapInsert(A, key, n)
n = n + 1 // this is more complex in real life, but this is not important here
A[n] = -infinity
HeapIncreaseKey(A, n, key)
\end{filecontents*} | algo_notes | latex |
algo_notes_block_85 | \begin{parag}{Using a heap}
Let us try to implement a priority queue using a heap. | algo_notes | latex |
algo_notes_block_86 | \begin{subparag}{Maximum}
Since we are using a heap, we have two procedures for free. \texttt{Maximum(S)} simply returns the root. This is $\Theta\left(1\right)$.
For \texttt{Extract-Max(S)}, we can move the last element of the array to the root and run \texttt{Max-Heapify} on the root (like what we do with heap-sort, but without needing to put the root to the last element of the heap).
\imagehere[0.5]{Lecture07/PriorityQueueExtractMaxHeap.png}
\texttt{Extract-Max} runs in the same time as applying \texttt{Max-Heapify} on the root, and thus in $O\left(\log\left(n\right)\right)$.
\end{subparag} | algo_notes | latex |
algo_notes_block_87 | \begin{subparag}{Increase key}
To implement \texttt{Increase-Key}, after having changed the key of our element, we can make it go up until its parent has a bigger key than it.
\importcode{Lecture07/PriorityQueueHeapIncreaseKey.code}{pseudo}
This looks a lot like max-heapify, and it is thus $O\left(\log\left(n\right)\right)$.
Note that if we wanted to implement \texttt{Decrease-Key}, we could just run \texttt{Max-Heapify} on the element we modified.
\end{subparag} | algo_notes | latex |
algo_notes_block_88 | \begin{subparag}{Insert}
To insert a new key into the heap, we can increment the heap size, insert a new node in the last position in the heap with the key $-\infty$, and increase the $-\infty$ value to \texttt{key} using \texttt{Heap-Increase-Key}.
\importcode{Lecture07/PriorityQueueHeapInsert.code}{pseudo}
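As a personal sketch, here are both procedures on a 0-indexed Python list that already satisfies the max-heap property (my own code; in this convention the parent of index $i$ is $\left\lfloor \frac{i-1}{2} \right\rfloor $, and the file name and \texttt{python} tag are mine):
\begin{filecontents*}[overwrite]{Lecture07/PriorityQueueSketch.code}
import math

def heap_increase_key(a, i, key):
    if key < a[i]:
        raise ValueError("new key is smaller than current key")
    a[i] = key
    while i > 0 and a[(i - 1) // 2] < a[i]:
        a[i], a[(i - 1) // 2] = a[(i - 1) // 2], a[i]   # float the key up
        i = (i - 1) // 2

def heap_insert(a, key):
    a.append(-math.inf)                  # new leaf with key minus infinity
    heap_increase_key(a, len(a) - 1, key)

h = [16, 14, 10, 8, 7, 9, 3, 2, 4, 1]    # a valid max-heap
heap_insert(h, 15)
assert h[0] == 16 and h[1] == 15
\end{filecontents*}
\importcode{Lecture07/PriorityQueueSketch.code}{python}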
\end{subparag}
\end{parag} | algo_notes | latex |
algo_notes_block_89 | \begin{parag}{Remark}
We can make min-priority queues with min-heaps similarly.
\end{parag} | algo_notes | latex |
algo_notes_block_90 | \begin{parag}{Introduction}
We realised that the heap was really great because it led to very efficient algorithms. So, let's look at more great data structures.
\end{parag} | algo_notes | latex |
algo_notes_block_91 | \begin{parag}{Definition: Stack}
A stack is a data structure where we can insert (\texttt{Push(S, x)}) and delete elements (\texttt{Pop(S)}). This is known as a last-in, first-out (LIFO), meaning that the element we get by using the \texttt{Pop} procedure is the one that was inserted the most recently. | algo_notes | latex |
algo_notes_block_92 | \begin{subparag}{Intuition}
This is really like a stack: we put elements over one another, and then we can only take elements back from the top.
\end{subparag} | algo_notes | latex |
algo_notes_block_93 | \begin{subparag}{Usage}
Stacks are everywhere in computing: a computer has a stack and that's how it operates.
Another usage is to check whether an expression with parentheses, brackets and curly brackets is well-parenthesised. Indeed, we can go through all characters of our expression. When we get an opening character (\texttt{(}, \texttt{[} or \texttt{\{}), we push it onto our stack. When we get a closing character (\texttt{)}, \texttt{]} or \texttt{\}}), we pop an element from the stack and verify that both characters correspond. At the end, the stack must be empty.
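As a personal sketch of this idea in Python (my own code, using a Python list as the stack; the file name and \texttt{python} tag are mine):
\begin{filecontents*}[overwrite]{Lecture07/WellParenthesisedSketch.code}
def well_parenthesised(expr):
    matching = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in expr:
        if ch in "([{":
            stack.append(ch)             # push the opening character
        elif ch in matching:
            if not stack or stack.pop() != matching[ch]:
                return False             # closing character does not correspond
    return not stack                     # every opening character must be closed

assert well_parenthesised("f(a[i]) + {b[j] * (c - d)}")
assert not well_parenthesised("(a[)]")
\end{filecontents*}
\importcode{Lecture07/WellParenthesisedSketch.code}{python}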
\end{subparag}
\end{parag}
\begin{filecontents*}[overwrite]{Lecture07/StackPush.code}
procedure Push(S, x):
S.top = S.top + 1
S[S.top] = x
\end{filecontents*}
\begin{filecontents*}[overwrite]{Lecture07/StackPop.code}
procedure Pop(S):
if StackEmpty(S)
error "underflow"
S.top = S.top - 1
return S[S.top + 1]
\end{filecontents*} | algo_notes | latex |
algo_notes_block_94 | \begin{parag}{Stack implementation}
A good way to implement a stack is using an array.
We have an array of size $n$, and a pointer \texttt{S.top} to the last element (some space in the array can be unused). | algo_notes | latex |
algo_notes_block_95 | \begin{subparag}{Empty}
To know if our stack is empty, we can simply return \texttt{S.top == 0}. This definitely has a complexity of $O\left(1\right)$.
\end{subparag} | algo_notes | latex |
algo_notes_block_96 | \begin{subparag}{Push}
To push an element in our array, we can do:
\importcode{Lecture07/StackPush.code}{pseudo}
Note that, in reality, we would need to verify that we have the space to add one more element, so as not to overflow the array.
We can notice that this is executed in constant time.
\end{subparag} | algo_notes | latex |
algo_notes_block_97 | \begin{subparag}{Pop}
Popping an element is very similar to pushing:
\importcode{Lecture07/StackPop.code}{pseudo}
We can notice that this is also done in constant time.
\end{subparag}
\end{parag} | algo_notes | latex |
algo_notes_block_98 | \begin{parag}{Queue}
A queue is a data structure where we can insert elements (\texttt{Enqueue(Q, x)}) and delete elements (\texttt{Dequeue(Q)}). This is known as a first-in, first-out (FIFO), meaning that the element we get by using the \texttt{Dequeue} procedure is the one that was inserted the least recently. | algo_notes | latex |
algo_notes_block_99 | \begin{subparag}{Intuition}
This is really like a queue in real life: the people that get out of the queue first are those who have been there the longest.
\end{subparag} | algo_notes | latex |