*Dataset preview header (condensed): each row carries a `text` field (87 to 777k characters) plus `meta.*` columns (`hexsha`, `size`, `ext`, `lang`; the `max_stars`, `max_issues`, and `max_forks` repo path, name, head hexsha, licenses, counts, and event datetimes; `avg_line_length`, `max_line_length`, `alphanum_fraction`, `converted`, `num_tokens`; the language-model fields `lm_name`, `lm_label`, `lm_q1_score`, `lm_q2_score`, and `lm_q1q2_score`), together with `text_lang`, `text_lang_conf`, and `label` columns.*
```python
# Shabat, IUM, spring 2019, Algebra-2 course, exercises 5.2 and 5.3
# Generic code for displaying multiplication tables
import itertools as it
from IPython.core.display import Latex, Math
import sympy as sp
import pandas as pd
from tabulate import tabulate
```
```python
# sp.init_printing()
```
```python
x = sp.symbols("x")
```
```python
def zero_polynomial(p):
F_p = sp.FF(p)
return sp.Poly([], x, domain=F_p)
def iter_all_polynomials(p, deg):
F_p = sp.FF(p)
all_scalars = [F_p(n) for n in range(p)]
powers = [x**n for n in range(deg + 1)]
for coeff_tuple in it.product(all_scalars, repeat=deg + 1):
if coeff_tuple[0] == 0:
continue
yield sp.Poly(coeff_tuple, x, domain=F_p)
def expr_product(exprs):
res = exprs[0].as_expr()
for expr in exprs[1:]:
res *= expr.as_expr()
return res
def iter_all_polynomials_with_factorings(p, max_deg):
table = {}
for f in iter_all_polynomials(p, 1):
table[f] = (f,)
yield f, (f,)
for deg in range(2, max_deg + 1):
for f, f_factors in list(table.items()):
for g, g_factors in list(table.items()):
h = f * g
if h.degree() == deg:
table[h] = (*f_factors, *g_factors)
for f in iter_all_polynomials(p, deg):
f_factors = table.setdefault(f, (f, ))
yield f, f_factors
def return_dict(generator):
def func(*args, **kwargs):
return dict(generator(*args, **kwargs))
return func
@return_dict
def as_table(polynomials_with_factorings):
for f, f_factors in polynomials_with_factorings:
yield f.as_expr(), expr_product(f_factors)
def sympy_to_latex(v):
return "$" + sp.latex(v) + "$"
display(pd.Series(as_table(iter_all_polynomials_with_factorings(2, 3))))
display(pd.Series(as_table(iter_all_polynomials_with_factorings(3, 2))))
```
x x
x + 1 x + 1
x**2 x**2
x**2 + 1 (x + 1)**2
x**2 + x x*(x + 1)
x**2 + x + 1 x**2 + x + 1
x**3 x**3
x**3 + 1 (x + 1)*(x**2 + x + 1)
x**3 + x x*(x + 1)**2
x**3 + x + 1 x**3 + x + 1
x**3 + x**2 x**2*(x + 1)
x**3 + x**2 + 1 x**3 + x**2 + 1
x**3 + x**2 + x x*(x**2 + x + 1)
x**3 + x**2 + x + 1 (x + 1)**3
dtype: object
x x
x + 1 x + 1
x - 1 x - 1
-x -x
1 - x 1 - x
-x - 1 -x - 1
x**2 x**2
x**2 + 1 x**2 + 1
x**2 - 1 (1 - x)*(-x - 1)
x**2 + x -x*(-x - 1)
x**2 + x + 1 (1 - x)**2
x**2 + x - 1 x**2 + x - 1
x**2 - x -x*(1 - x)
x**2 - x + 1 (-x - 1)**2
x**2 - x - 1 x**2 - x - 1
-x**2 -x**2
1 - x**2 (-x - 1)*(x - 1)
-x**2 - 1 -x**2 - 1
-x**2 + x x*(1 - x)
-x**2 + x + 1 -x**2 + x + 1
-x**2 + x - 1 (-x - 1)*(x + 1)
-x**2 - x x*(-x - 1)
-x**2 - x + 1 -x**2 - x + 1
-x**2 - x - 1 (1 - x)*(x - 1)
dtype: object
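As a quick, optional sanity check (not part of the original exercise), SymPy can factor polynomials over $\mathbb{F}_p$ directly, so a few rows of the tables above can be verified independently; the sample polynomials below are an arbitrary choice.
```python
# Cross-check a few table entries against SymPy's built-in factorization over F_p.
import sympy as sp

x = sp.symbols("x")

samples = [(2, x**3 + 1), (2, x**3 + x), (3, x**2 - 1)]
for p, f in samples:
    lc, factors = sp.Poly(f, x, domain=sp.FF(p)).factor_list()
    print(p, f, "->", lc, factors)
```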
```python
def elements_and_multiplication_table(p, f):
f = sp.Poly(f, domain=sp.FF(p))
deg = f.degree()
elements = list(it.chain([zero_polynomial(p)], *[iter_all_polynomials(p, elem_deg) for elem_deg in range(deg)]))
return [g.as_expr() for g in elements], [[(g * h % f).as_expr() for g in elements] for h in elements]
def as_matrix(e, m):
return pd.DataFrame(m, columns=list(map(sympy_to_latex, e)), index=list(map(sympy_to_latex, e))).applymap(sympy_to_latex)
e, m = elements_and_multiplication_table(2, x**3 + x**2 + 1)
display(as_matrix(e, m))
e, m = elements_and_multiplication_table(3, x**2 - x - 1)
display(as_matrix(e, m))
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>$0$</th>
<th>$1$</th>
<th>$x$</th>
<th>$x + 1$</th>
<th>$x^{2}$</th>
<th>$x^{2} + 1$</th>
<th>$x^{2} + x$</th>
<th>$x^{2} + x + 1$</th>
</tr>
</thead>
<tbody>
<tr>
<th>$0$</th>
<td>$0$</td>
<td>$0$</td>
<td>$0$</td>
<td>$0$</td>
<td>$0$</td>
<td>$0$</td>
<td>$0$</td>
<td>$0$</td>
</tr>
<tr>
<th>$1$</th>
<td>$0$</td>
<td>$1$</td>
<td>$x$</td>
<td>$x + 1$</td>
<td>$x^{2}$</td>
<td>$x^{2} + 1$</td>
<td>$x^{2} + x$</td>
<td>$x^{2} + x + 1$</td>
</tr>
<tr>
<th>$x$</th>
<td>$0$</td>
<td>$x$</td>
<td>$x^{2}$</td>
<td>$x^{2} + x$</td>
<td>$x^{2} + 1$</td>
<td>$x^{2} + x + 1$</td>
<td>$1$</td>
<td>$x + 1$</td>
</tr>
<tr>
<th>$x + 1$</th>
<td>$0$</td>
<td>$x + 1$</td>
<td>$x^{2} + x$</td>
<td>$x^{2} + 1$</td>
<td>$1$</td>
<td>$x$</td>
<td>$x^{2} + x + 1$</td>
<td>$x^{2}$</td>
</tr>
<tr>
<th>$x^{2}$</th>
<td>$0$</td>
<td>$x^{2}$</td>
<td>$x^{2} + 1$</td>
<td>$1$</td>
<td>$x^{2} + x + 1$</td>
<td>$x + 1$</td>
<td>$x$</td>
<td>$x^{2} + x$</td>
</tr>
<tr>
<th>$x^{2} + 1$</th>
<td>$0$</td>
<td>$x^{2} + 1$</td>
<td>$x^{2} + x + 1$</td>
<td>$x$</td>
<td>$x + 1$</td>
<td>$x^{2} + x$</td>
<td>$x^{2}$</td>
<td>$1$</td>
</tr>
<tr>
<th>$x^{2} + x$</th>
<td>$0$</td>
<td>$x^{2} + x$</td>
<td>$1$</td>
<td>$x^{2} + x + 1$</td>
<td>$x$</td>
<td>$x^{2}$</td>
<td>$x + 1$</td>
<td>$x^{2} + 1$</td>
</tr>
<tr>
<th>$x^{2} + x + 1$</th>
<td>$0$</td>
<td>$x^{2} + x + 1$</td>
<td>$x + 1$</td>
<td>$x^{2}$</td>
<td>$x^{2} + x$</td>
<td>$1$</td>
<td>$x^{2} + 1$</td>
<td>$x$</td>
</tr>
</tbody>
</table>
</div>
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>$0$</th>
<th>$1$</th>
<th>$-1$</th>
<th>$x$</th>
<th>$x + 1$</th>
<th>$x - 1$</th>
<th>$- x$</th>
<th>$1 - x$</th>
<th>$- x - 1$</th>
</tr>
</thead>
<tbody>
<tr>
<th>$0$</th>
<td>$0$</td>
<td>$0$</td>
<td>$0$</td>
<td>$0$</td>
<td>$0$</td>
<td>$0$</td>
<td>$0$</td>
<td>$0$</td>
<td>$0$</td>
</tr>
<tr>
<th>$1$</th>
<td>$0$</td>
<td>$1$</td>
<td>$-1$</td>
<td>$x$</td>
<td>$x + 1$</td>
<td>$x - 1$</td>
<td>$- x$</td>
<td>$1 - x$</td>
<td>$- x - 1$</td>
</tr>
<tr>
<th>$-1$</th>
<td>$0$</td>
<td>$-1$</td>
<td>$1$</td>
<td>$- x$</td>
<td>$- x - 1$</td>
<td>$1 - x$</td>
<td>$x$</td>
<td>$x - 1$</td>
<td>$x + 1$</td>
</tr>
<tr>
<th>$x$</th>
<td>$0$</td>
<td>$x$</td>
<td>$- x$</td>
<td>$x + 1$</td>
<td>$1 - x$</td>
<td>$1$</td>
<td>$- x - 1$</td>
<td>$-1$</td>
<td>$x - 1$</td>
</tr>
<tr>
<th>$x + 1$</th>
<td>$0$</td>
<td>$x + 1$</td>
<td>$- x - 1$</td>
<td>$1 - x$</td>
<td>$-1$</td>
<td>$x$</td>
<td>$x - 1$</td>
<td>$- x$</td>
<td>$1$</td>
</tr>
<tr>
<th>$x - 1$</th>
<td>$0$</td>
<td>$x - 1$</td>
<td>$1 - x$</td>
<td>$1$</td>
<td>$x$</td>
<td>$- x - 1$</td>
<td>$-1$</td>
<td>$x + 1$</td>
<td>$- x$</td>
</tr>
<tr>
<th>$- x$</th>
<td>$0$</td>
<td>$- x$</td>
<td>$x$</td>
<td>$- x - 1$</td>
<td>$x - 1$</td>
<td>$-1$</td>
<td>$x + 1$</td>
<td>$1$</td>
<td>$1 - x$</td>
</tr>
<tr>
<th>$1 - x$</th>
<td>$0$</td>
<td>$1 - x$</td>
<td>$x - 1$</td>
<td>$-1$</td>
<td>$- x$</td>
<td>$x + 1$</td>
<td>$1$</td>
<td>$- x - 1$</td>
<td>$x$</td>
</tr>
<tr>
<th>$- x - 1$</th>
<td>$0$</td>
<td>$- x - 1$</td>
<td>$x + 1$</td>
<td>$x - 1$</td>
<td>$1$</td>
<td>$- x$</td>
<td>$1 - x$</td>
<td>$x$</td>
<td>$-1$</td>
</tr>
</tbody>
</table>
</div>
*Source: `notebooks/finite_field_multiplication.ipynb` from the `tekhnus/misc` repository (head `cf4c6e29434c546e3c29f24f7bb16a0ac65005f5`, Unlicense license); 18,456 bytes, 3,661 tokens.*
# Generating C Code for the Scalar Wave Equation in Cartesian Coordinates
## Authors: Zach Etienne & Thiago Assumpção
### Formatting improvements courtesy Brandon Clark
## This module generates the C Code for the Scalarwave in Cartesian coordinates and sets up either monochromatic plane wave or spherical Gaussian [Initial Data](https://en.wikipedia.org/wiki/Initial_value_problem).
**Notebook Status:** <font color='green'><b> Validated </b></font>
**Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented below ([right-hand-side expressions](#code_validation1); [initial data expressions](#code_validation2)). In addition, all expressions have been validated against a trusted code (the [original SENR/NRPy+ code](https://bitbucket.org/zach_etienne/nrpy)).
### NRPy+ Source Code for this module:
* [ScalarWave/ScalarWave_RHSs.py](../edit/ScalarWave/ScalarWave_RHSs.py)
* [ScalarWave/InitialData.py](../edit/ScalarWave/InitialData.py)
## Introduction:
### Problem Statement
We wish to numerically solve the scalar wave equation as an [initial value problem](https://en.wikipedia.org/wiki/Initial_value_problem) in Cartesian coordinates:
$$\partial_t^2 u = c^2 \nabla^2 u \text{,}$$
where $u$ (the amplitude of the wave) is a function of time and space: $u = u(t,x,y,...)$ (spatial dimension as-yet unspecified) and $c$ is the wave speed, subject to some initial condition
$$u(0,x,y,...) = f(x,y,...)$$
and suitable spatial boundary conditions.
As described in the next section, we will find it quite useful to define
$$v(t,x,y,...) = \partial_t u(t,x,y,...).$$
In this way, the second-order PDE is reduced to a set of two coupled first-order PDEs
\begin{align}
\partial_t u &= v \\
\partial_t v &= c^2 \nabla^2 u.
\end{align}
We will use NRPy+ to generate efficient C codes capable of generating both initial data $u(0,x,y,...) = f(x,y,...)$; $v(0,x,y,...)=g(x,y,...)$, as well as finite-difference expressions for the right-hand sides of the above expressions. These expressions are needed within the *Method of Lines* to "integrate" the solution forward in time.
### The Method of Lines
Once we have initial data, we "evolve it forward in time", using the [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html). In short, the Method of Lines enables us to handle
1. the **spatial derivatives** of an initial value problem PDE using **standard finite difference approaches**, and
2. the **temporal derivatives** of an initial value problem PDE using **standard strategies for solving ordinary differential equations (ODEs)**, so long as the initial value problem PDE can be written in the form
$$\partial_t \vec{f} = \mathbf{M}\ \vec{f},$$
where $\mathbf{M}$ is an $N\times N$ matrix filled with differential operators that act on the $N$-element column vector $\vec{f}$. $\mathbf{M}$ may not contain $t$ or time derivatives explicitly; only *spatial* partial derivatives are allowed to appear inside $\mathbf{M}$. The scalar wave equation as written in the [previous module](Tutorial-ScalarWave.ipynb)
\begin{equation}
\partial_t
\begin{bmatrix}
u \\
v
\end{bmatrix}=
\begin{bmatrix}
0 & 1 \\
c^2 \nabla^2 & 0
\end{bmatrix}
\begin{bmatrix}
u \\
v
\end{bmatrix}
\end{equation}
satisfies this requirement.
Thus we can treat the spatial derivatives $\nabla^2 u$ of the scalar wave equation using **standard finite-difference approaches**, and the temporal derivatives $\partial_t u$ and $\partial_t v$ using **standard approaches for solving ODEs**. In [the next module](Tutorial-Start_to_Finish-ScalarWave.ipynb), we will apply the highly robust [explicit Runge-Kutta fourth-order scheme](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) (RK4), used widely for numerically solving ODEs, to "march" (integrate) the solution vector $\vec{f}$ forward in time from its initial value ("initial data").
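To make the Method of Lines concrete before turning to NRPy+'s code generation, here is a minimal NumPy sketch (purely illustrative, not NRPy+ output; the periodic grid, resolution, and time step are assumptions made for this example) that evolves the coupled system $\partial_t u = v$, $\partial_t v = c^2 \partial_x^2 u$ in one spatial dimension with centered finite differences in space and RK4 in time:
```python
# Illustrative Method-of-Lines sketch (NumPy only; not generated by NRPy+).
import numpy as np

def rhs(state, dx, c=1.0):
    """Right-hand sides of the first-order system: du/dt = v, dv/dt = c^2 u_xx (periodic grid)."""
    u, v = state
    u_xx = (np.roll(u, -1) - 2.0*u + np.roll(u, 1)) / dx**2  # centered 2nd-order stencil
    return np.array([v, c**2 * u_xx])

def rk4_step(state, dt, dx):
    """One classical fourth-order Runge-Kutta step."""
    k1 = rhs(state, dx)
    k2 = rhs(state + 0.5*dt*k1, dx)
    k3 = rhs(state + 0.5*dt*k2, dx)
    k4 = rhs(state + dt*k3, dx)
    return state + dt*(k1 + 2.0*k2 + 2.0*k3 + k4)/6.0

x = np.linspace(0.0, 2.0*np.pi, 200, endpoint=False)
dx = x[1] - x[0]
state = np.array([np.sin(x) + 2.0, -np.cos(x)])  # u(0,x) and v(0,x) for u = sin(x - t) + 2
for _ in range(200):                             # march forward in time
    state = rk4_step(state, dt=0.2*dx, dx=dx)
```
NRPy+ automates the `rhs` part of this sketch, generating optimized finite-difference C kernels rather than NumPy array operations.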
### Basic Algorithm
The basic algorithm for solving the scalar wave equation [initial value problem](https://en.wikipedia.org/wiki/Initial_value_problem), based on the Method of Lines (see section above) is outlined below, with NRPy+-based components highlighted in <font color='green'>green</font>. We will review how NRPy+ generates these core components in this module.
1. Allocate memory for gridfunctions, including temporary storage for the RK4 time integration.
1. <font color='green'>Set gridfunction values to initial data.</font>
1. Evolve the system forward in time using RK4 time integration. At each RK4 substep, do the following:
1. <font color='green'>Evaluate scalar wave RHS expressions.</font>
1. Apply boundary conditions.
**We refer to the right-hand side of the equation $\partial_t \vec{f} = \mathbf{M}\ \vec{f}$ as the RHS. In this case, we refer to $\mathbf{M}\ \vec{f}$ as the "scalar wave RHSs".** In the following sections we will
1. Use NRPy+ to cast the scalar wave RHS expressions -- in finite difference form -- into highly efficient C code,
1. first in one spatial dimension with fourth-order finite differences,
1. and then in three spatial dimensions with tenth-order finite differences.
1. Use NRPy+ to generate monochromatic plane-wave initial data for the scalar wave equation, where the wave propagates in an arbitrary direction.
As for the $\nabla^2 u$ term, spatial derivatives are handled in NRPy+ via [finite differencing](https://en.wikipedia.org/wiki/Finite_difference).
We will sample the solution $\{u,v\}$ at discrete, uniformly spaced points in space and time. For simplicity, let's assume that we consider the wave equation in one spatial dimension. Then the solution at any sampled point in space and time is given by
$$u^n_i = u(t_n,x_i) = u(t_0 + n \Delta t, x_0 + i \Delta x),$$
where $\Delta t$ and $\Delta x$ represent the temporal and spatial resolution, respectively. $v^n_i$ is sampled at the same points in space and time.
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
1. [Step 1](#initializenrpy): Initialize core NRPy+ modules
1. [Step 2](#rhss1d): Scalar Wave RHSs in One Spatial Dimension, Fourth-Order Finite Differencing
1. [Step 2.a](#ccode1d): C-code output example: Scalar wave RHSs with 4th order finite difference stencils
1. [Step 3](#rhss3d): Scalar Wave RHSs in Three Spatial Dimensions
1. [Step 3.a](#code_validation1): Code Validation against `ScalarWave.ScalarWave_RHSs` NRPy+ module
1. [Step 3.b](#ccode3d): C-code output example: Scalar wave RHSs with 10th order finite difference stencils and SIMD enabled
1. [Step 4](#id): Setting up Initial Data for the Scalar Wave Equation
1. [Step 4.a](#planewave): The Monochromatic Plane-Wave Solution
1. [Step 4.b](#sphericalgaussian): The Spherical Gaussian Solution (*Courtesy Thiago Assumpção*)
1. [Step 5](#code_validation2): Code Validation against `ScalarWave.InitialData` NRPy+ module
1. [Step 6](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='initializenrpy'></a>
# Step 1: Initialize core NRPy+ modules \[Back to [top](#toc)\]
$$\label{initializenrpy}$$
Let's start by importing all the needed modules from NRPy+:
```python
# Step P1: Import needed NRPy+ core modules:
import NRPy_param_funcs as par # NRPy+: Parameter interface
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import grid as gri # NRPy+: Functions having to do with numerical grids
import finite_difference as fin # NRPy+: Finite difference C code generation module
from outputC import lhrh # NRPy+: Core C code output module
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
```
<a id='rhss1d'></a>
# Step 2: Scalar Wave RHSs in One Spatial Dimension \[Back to [top](#toc)\]
$$\label{rhss1d}$$
To minimize complication, we will first restrict ourselves to solving the wave equation in one spatial dimension, so
\begin{align}
\partial_t u &= v \\
\partial_t v &= c^2 \nabla^2 u \\
&= c^2 \partial_x^2 u.
\end{align}
We will construct SymPy expressions of the right-hand sides of $u$ and $v$ using [NRPy+ finite-difference notation](Tutorial-Finite_Difference_Derivatives.ipynb) to represent the derivative, so that finite-difference C-code kernels can be easily constructed.
Extension of this operator to higher spatial dimensions when using NRPy+ is straightforward, as we will see below.
```python
# Step P2: Define the C parameter wavespeed. The `wavespeed`
# variable is a proper SymPy variable, so it can be
# used in below expressions. In the C code, it acts
# just like a usual parameter, whose value is
# specified in the parameter file.
thismodule = "ScalarWave"
wavespeed = par.Cparameters("REAL",thismodule,"wavespeed", 1.0)
# Step 1: Set the spatial dimension parameter, and then read
# the parameter as DIM.
par.set_parval_from_str("grid::DIM", 1)
DIM = par.parval_from_str("grid::DIM")
# Step 2: Register gridfunctions that are needed as input
# to the scalar wave RHS expressions.
uu, vv = gri.register_gridfunctions("EVOL", ["uu", "vv"])
# Step 3: Declare the rank-2 indexed expression \partial_{ij} u,
# which is symmetric about interchange of indices i and j
# Derivative variables like these must have an underscore
# in them, so the finite difference module can parse the
# variable name properly.
uu_dDD = ixp.declarerank2("uu_dDD", "sym01")
# Step 4: Define right-hand sides for the evolution.
uu_rhs = vv
vv_rhs = 0
for i in range(DIM):
vv_rhs += wavespeed*wavespeed*uu_dDD[i][i]
vv_rhs = sp.simplify(vv_rhs)
```
<a id='ccode1d'></a>
## Step 2.a: C-code output example: Scalar wave RHSs with 4th order finite difference stencils \[Back to [top](#toc)\]
$$\label{ccode1d}$$
As was discussed in [the finite difference section of the tutorial](Tutorial-Finite_Difference_Derivatives.ipynb), NRPy+ approximates derivatives using [finite-difference methods](https://en.wikipedia.org/wiki/Finite_difference). The second derivative $\partial_x^2$, accurate to fourth order in the uniform grid spacing $\Delta x$ (from fitting the unique 4th-degree polynomial to 5 sample points of $u$), is given by
\begin{equation}
\left[\partial_x^2 u(t,x)\right]_j = \frac{1}{(\Delta x)^2}
\left(
-\frac{1}{12} \left(u_{j+2} + u_{j-2}\right)
+ \frac{4}{3} \left(u_{j+1} + u_{j-1}\right)
- \frac{5}{2} u_j \right)
+ \mathcal{O}\left((\Delta x)^4\right).
\end{equation}
```python
# Step 5: Set the finite differencing order to 4.
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", 4)
# Step 6: Generate C code for scalarwave evolution equations,
# print output to the screen (standard out, or stdout).
fin.FD_outputC("stdout",
[lhrh(lhs=gri.gfaccess("rhs_gfs", "uu"), rhs=uu_rhs),
lhrh(lhs=gri.gfaccess("rhs_gfs", "vv"), rhs=vv_rhs)])
```
{
/*
* NRPy+ Finite Difference Code Generation, Step 1 of 2: Read from main memory and compute finite difference stencils:
*/
/*
* Original SymPy expression:
* "const double uu_dDD00 = invdx0**2*(-5*uu/2 + 4*uu_i0m1/3 - uu_i0m2/12 + 4*uu_i0p1/3 - uu_i0p2/12)"
*/
const double uu_i0m2 = in_gfs[IDX2(UUGF, i0-2)];
const double uu_i0m1 = in_gfs[IDX2(UUGF, i0-1)];
const double uu = in_gfs[IDX2(UUGF, i0)];
const double uu_i0p1 = in_gfs[IDX2(UUGF, i0+1)];
const double uu_i0p2 = in_gfs[IDX2(UUGF, i0+2)];
const double vv = in_gfs[IDX2(VVGF, i0)];
const double FDPart1_Rational_5_2 = 5.0/2.0;
const double FDPart1_Rational_1_12 = 1.0/12.0;
const double FDPart1_Rational_4_3 = 4.0/3.0;
const double uu_dDD00 = ((invdx0)*(invdx0))*(FDPart1_Rational_1_12*(-uu_i0m2 - uu_i0p2) + FDPart1_Rational_4_3*(uu_i0m1 + uu_i0p1) - FDPart1_Rational_5_2*uu);
/*
* NRPy+ Finite Difference Code Generation, Step 2 of 2: Evaluate SymPy expressions and write to main memory:
*/
/*
* Original SymPy expressions:
* "[rhs_gfs[IDX2(UUGF, i0)] = vv,
* rhs_gfs[IDX2(VVGF, i0)] = uu_dDD00*wavespeed**2]"
*/
rhs_gfs[IDX2(UUGF, i0)] = vv;
rhs_gfs[IDX2(VVGF, i0)] = uu_dDD00*((wavespeed)*(wavespeed));
}
**Success!** Notice that indeed NRPy+ was able to compute the spatial derivative operator,
\begin{equation}
\left[\partial_x^2 u(t,x)\right]_j \approx \frac{1}{(\Delta x)^2}
\left(
-\frac{1}{12} \left(u_{j+2} + u_{j-2}\right)
+ \frac{4}{3} \left(u_{j+1} + u_{j-1}\right)
- \frac{5}{2} u_j \right),
\end{equation}
correctly (this is easiest to read in the "Original SymPy expressions" comment block at the top of the C output).
As NRPy+ is designed to generate codes in arbitrary coordinate systems, instead of sticking with Cartesian notation for 3D coordinates, $x,y,z$, we instead adopt $x_0,x_1,x_2$ for our coordinate labels. Thus you will notice the appearance of `invdx0`$=1/\Delta x_0$, where $\Delta x_0$ is the (uniform) grid spacing in the zeroth, or $x_0$, direction. In this case $x_0$ represents the $x$ direction.
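As a quick numerical sanity check (not part of the NRPy+ output), the stencil coefficients $-1/12$, $4/3$, $-5/2$ can be applied to a smooth test function such as $\sin(x)$ and compared against its exact second derivative $-\sin(x)$; the step size and evaluation point below are arbitrary choices.
```python
# Verify the 4th-order second-derivative stencil numerically on u(x) = sin(x).
import numpy as np

dx = 0.1
xj = 0.7    # arbitrary evaluation point
u = np.sin
approx = (-(u(xj - 2*dx) + u(xj + 2*dx))/12.0
          + 4.0*(u(xj - dx) + u(xj + dx))/3.0
          - 2.5*u(xj)) / dx**2
print(approx - (-np.sin(xj)))   # roughly 1e-6, consistent with an O(dx^4) error
```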
<a id='rhss3d'></a>
# Step 3: Scalar Wave RHSs in Three Spatial Dimensions \[Back to [top](#toc)\]
$$\label{rhss3d}$$
Let's next repeat the same process, only this time for the scalar wave equation in **3 spatial dimensions** (3D).
```python
# Step 1: Define the C parameter wavespeed. The `wavespeed`
# variable is a proper SymPy variable, so it can be
# used in below expressions. In the C code, it acts
# just like a usual parameter, whose value is
# specified in the parameter file.
wavespeed = par.Cparameters("REAL", thismodule, "wavespeed", 1.0)
# Step 2: Set the spatial dimension parameter
# to *THREE* this time, and then read
# the parameter as DIM.
par.set_parval_from_str("grid::DIM", 3)
DIM = par.parval_from_str("grid::DIM")
# Step 3a: Reset gridfunctions registered in 1D case above,
# to avoid NRPy+ throwing an error about double-
# registering gridfunctions, which is not allowed.
gri.glb_gridfcs_list = []
# Step 3b: Register gridfunctions that are needed as input
# to the scalar wave RHS expressions.
uu, vv = gri.register_gridfunctions("EVOL", ["uu", "vv"])
# Step 4: Declare the rank-2 indexed expression \partial_{ij} u,
# which is symmetric about interchange of indices i and j
# Derivative variables like these must have an underscore
# in them, so the finite difference module can parse the
# variable name properly.
uu_dDD = ixp.declarerank2("uu_dDD", "sym01")
# Step 5: Define right-hand sides for the evolution.
uu_rhs = vv
vv_rhs = 0
for i in range(DIM):
vv_rhs += wavespeed*wavespeed*uu_dDD[i][i]
# Step 6: Simplify the expression for c^2 \nabla^2 u (a.k.a., vv_rhs):
vv_rhs = sp.simplify(vv_rhs)
```
<a id='code_validation1'></a>
## Step 3.a: Validate SymPy expressions against `ScalarWave.ScalarWave_RHSs` NRPy+ module \[Back to [top](#toc)\]
$$\label{code_validation1}$$
Here, as a code validation check, we verify agreement in the SymPy expressions for the RHSs of the three-spatial-dimension Scalar Wave equation (i.e., `uu_rhs` and `vv_rhs`) between
1. this tutorial and
2. the [NRPy+ ScalarWave.ScalarWave_RHSs](../edit/ScalarWave/ScalarWave_RHSs.py) module.
```python
# Step 10: We already have SymPy expressions for uu_rhs and vv_rhs in
# terms of other SymPy variables. Even if we reset the list
# of NRPy+ gridfunctions, these *SymPy* expressions for
# uu_rhs and vv_rhs *will remain unaffected*.
#
# Here, we will use the above-defined uu_rhs and vv_rhs to
# validate against the same expressions in the
# ScalarWave/ScalarWave_RHSs.py module,
# to ensure consistency between this tutorial
# (historically speaking, the tutorial was written first)
# and the ScalarWave_RHSs.py module itself.
#
# Reset the list of gridfunctions, as registering a gridfunction
# twice will spawn an error.
gri.glb_gridfcs_list = []
# Step 11: Call the ScalarWave_RHSs() function from within the
# ScalarWave/ScalarWave_RHSs.py module,
# which should do exactly the same as in Steps 1-10 above.
import ScalarWave.ScalarWave_RHSs as swrhs
swrhs.ScalarWave_RHSs()
# Step 12: Consistency check between the tutorial notebook above
# and the ScalarWave_RHSs() function from within the
# ScalarWave/ScalarWave_RHSs.py module.
print("Consistency check between ScalarWave tutorial and NRPy+ module:")
print("uu_rhs - swrhs.uu_rhs = "+str(sp.simplify(uu_rhs - swrhs.uu_rhs))+"\t\t (should be zero)")
print("vv_rhs - swrhs.vv_rhs = "+str(sp.simplify(vv_rhs - swrhs.vv_rhs))+"\t\t (should be zero)")
```
Consistency check between ScalarWave tutorial and NRPy+ module:
uu_rhs - swrhs.uu_rhs = 0 (should be zero)
vv_rhs - swrhs.vv_rhs = 0 (should be zero)
<a id='ccode3d'></a>
## Step 3.b: C-code output example: Scalar wave RHSs with 10th order finite difference stencils and SIMD enabled \[Back to [top](#toc)\]
$$\label{ccode3d}$$
Next we'll output the above expressions as C code, using the [NRPy+ finite-differencing C code kernel generation infrastructure](Tutorial-Finite_Difference_Derivatives.ipynb). This code will represent spatial derivatives as 10th-order finite differences and output the C code with [SIMD](https://en.wikipedia.org/wiki/SIMD) enabled. ([Common-subexpression elimination](https://en.wikipedia.org/wiki/Common_subexpression_elimination) is enabled by default.)
```python
# Step 7: Set the finite differencing order to 10.
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", 10)
# Step 8: Generate C code for scalarwave evolution equations,
# print output to the screen (standard out, or stdout).
fin.FD_outputC("stdout",
[lhrh(lhs=gri.gfaccess("rhs_gfs","uu"),rhs=uu_rhs),
lhrh(lhs=gri.gfaccess("rhs_gfs","vv"),rhs=vv_rhs)], params="enable_SIMD=True")
```
{
/*
* NRPy+ Finite Difference Code Generation, Step 1 of 2: Read from main memory and compute finite difference stencils:
*/
/*
* Original SymPy expressions:
* "[const REAL_SIMD_ARRAY uu_dDD00 = invdx0**2*(-5269*uu/1800 + 5*uu_i0m1_i1_i2/3 - 5*uu_i0m2_i1_i2/21 + 5*uu_i0m3_i1_i2/126 - 5*uu_i0m4_i1_i2/1008 + uu_i0m5_i1_i2/3150 + 5*uu_i0p1_i1_i2/3 - 5*uu_i0p2_i1_i2/21 + 5*uu_i0p3_i1_i2/126 - 5*uu_i0p4_i1_i2/1008 + uu_i0p5_i1_i2/3150),
* const REAL_SIMD_ARRAY uu_dDD11 = invdx1**2*(-5269*uu/1800 + 5*uu_i0_i1m1_i2/3 - 5*uu_i0_i1m2_i2/21 + 5*uu_i0_i1m3_i2/126 - 5*uu_i0_i1m4_i2/1008 + uu_i0_i1m5_i2/3150 + 5*uu_i0_i1p1_i2/3 - 5*uu_i0_i1p2_i2/21 + 5*uu_i0_i1p3_i2/126 - 5*uu_i0_i1p4_i2/1008 + uu_i0_i1p5_i2/3150),
* const REAL_SIMD_ARRAY uu_dDD22 = invdx2**2*(-5269*uu/1800 + 5*uu_i0_i1_i2m1/3 - 5*uu_i0_i1_i2m2/21 + 5*uu_i0_i1_i2m3/126 - 5*uu_i0_i1_i2m4/1008 + uu_i0_i1_i2m5/3150 + 5*uu_i0_i1_i2p1/3 - 5*uu_i0_i1_i2p2/21 + 5*uu_i0_i1_i2p3/126 - 5*uu_i0_i1_i2p4/1008 + uu_i0_i1_i2p5/3150)]"
*/
const REAL_SIMD_ARRAY uu_i0_i1_i2m5 = ReadSIMD(&in_gfs[IDX4(UUGF, i0,i1,i2-5)]);
const REAL_SIMD_ARRAY uu_i0_i1_i2m4 = ReadSIMD(&in_gfs[IDX4(UUGF, i0,i1,i2-4)]);
const REAL_SIMD_ARRAY uu_i0_i1_i2m3 = ReadSIMD(&in_gfs[IDX4(UUGF, i0,i1,i2-3)]);
const REAL_SIMD_ARRAY uu_i0_i1_i2m2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0,i1,i2-2)]);
const REAL_SIMD_ARRAY uu_i0_i1_i2m1 = ReadSIMD(&in_gfs[IDX4(UUGF, i0,i1,i2-1)]);
const REAL_SIMD_ARRAY uu_i0_i1m5_i2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0,i1-5,i2)]);
const REAL_SIMD_ARRAY uu_i0_i1m4_i2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0,i1-4,i2)]);
const REAL_SIMD_ARRAY uu_i0_i1m3_i2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0,i1-3,i2)]);
const REAL_SIMD_ARRAY uu_i0_i1m2_i2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0,i1-2,i2)]);
const REAL_SIMD_ARRAY uu_i0_i1m1_i2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0,i1-1,i2)]);
const REAL_SIMD_ARRAY uu_i0m5_i1_i2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0-5,i1,i2)]);
const REAL_SIMD_ARRAY uu_i0m4_i1_i2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0-4,i1,i2)]);
const REAL_SIMD_ARRAY uu_i0m3_i1_i2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0-3,i1,i2)]);
const REAL_SIMD_ARRAY uu_i0m2_i1_i2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0-2,i1,i2)]);
const REAL_SIMD_ARRAY uu_i0m1_i1_i2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0-1,i1,i2)]);
const REAL_SIMD_ARRAY uu = ReadSIMD(&in_gfs[IDX4(UUGF, i0,i1,i2)]);
const REAL_SIMD_ARRAY uu_i0p1_i1_i2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0+1,i1,i2)]);
const REAL_SIMD_ARRAY uu_i0p2_i1_i2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0+2,i1,i2)]);
const REAL_SIMD_ARRAY uu_i0p3_i1_i2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0+3,i1,i2)]);
const REAL_SIMD_ARRAY uu_i0p4_i1_i2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0+4,i1,i2)]);
const REAL_SIMD_ARRAY uu_i0p5_i1_i2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0+5,i1,i2)]);
const REAL_SIMD_ARRAY uu_i0_i1p1_i2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0,i1+1,i2)]);
const REAL_SIMD_ARRAY uu_i0_i1p2_i2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0,i1+2,i2)]);
const REAL_SIMD_ARRAY uu_i0_i1p3_i2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0,i1+3,i2)]);
const REAL_SIMD_ARRAY uu_i0_i1p4_i2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0,i1+4,i2)]);
const REAL_SIMD_ARRAY uu_i0_i1p5_i2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0,i1+5,i2)]);
const REAL_SIMD_ARRAY uu_i0_i1_i2p1 = ReadSIMD(&in_gfs[IDX4(UUGF, i0,i1,i2+1)]);
const REAL_SIMD_ARRAY uu_i0_i1_i2p2 = ReadSIMD(&in_gfs[IDX4(UUGF, i0,i1,i2+2)]);
const REAL_SIMD_ARRAY uu_i0_i1_i2p3 = ReadSIMD(&in_gfs[IDX4(UUGF, i0,i1,i2+3)]);
const REAL_SIMD_ARRAY uu_i0_i1_i2p4 = ReadSIMD(&in_gfs[IDX4(UUGF, i0,i1,i2+4)]);
const REAL_SIMD_ARRAY uu_i0_i1_i2p5 = ReadSIMD(&in_gfs[IDX4(UUGF, i0,i1,i2+5)]);
const REAL_SIMD_ARRAY vv = ReadSIMD(&in_gfs[IDX4(VVGF, i0,i1,i2)]);
const double tmpFDPart1_NegativeOne_ = -1.0;
const REAL_SIMD_ARRAY FDPart1_NegativeOne_ = ConstSIMD(tmpFDPart1_NegativeOne_);
const double tmpFDPart1_Rational_1_3150 = 1.0/3150.0;
const REAL_SIMD_ARRAY FDPart1_Rational_1_3150 = ConstSIMD(tmpFDPart1_Rational_1_3150);
const double tmpFDPart1_Rational_5269_1800 = 5269.0/1800.0;
const REAL_SIMD_ARRAY FDPart1_Rational_5269_1800 = ConstSIMD(tmpFDPart1_Rational_5269_1800);
const double tmpFDPart1_Rational_5_1008 = 5.0/1008.0;
const REAL_SIMD_ARRAY FDPart1_Rational_5_1008 = ConstSIMD(tmpFDPart1_Rational_5_1008);
const double tmpFDPart1_Rational_5_126 = 5.0/126.0;
const REAL_SIMD_ARRAY FDPart1_Rational_5_126 = ConstSIMD(tmpFDPart1_Rational_5_126);
const double tmpFDPart1_Rational_5_21 = 5.0/21.0;
const REAL_SIMD_ARRAY FDPart1_Rational_5_21 = ConstSIMD(tmpFDPart1_Rational_5_21);
const double tmpFDPart1_Rational_5_3 = 5.0/3.0;
const REAL_SIMD_ARRAY FDPart1_Rational_5_3 = ConstSIMD(tmpFDPart1_Rational_5_3);
const REAL_SIMD_ARRAY FDPart1_0 = MulSIMD(FDPart1_Rational_5269_1800, uu);
const REAL_SIMD_ARRAY uu_dDD00 = MulSIMD(MulSIMD(invdx0, invdx0), FusedMulAddSIMD(FDPart1_Rational_5_126, AddSIMD(uu_i0m3_i1_i2, uu_i0p3_i1_i2), FusedMulAddSIMD(FDPart1_Rational_5_3, AddSIMD(uu_i0m1_i1_i2, uu_i0p1_i1_i2), FusedMulSubSIMD(FDPart1_Rational_1_3150, AddSIMD(uu_i0m5_i1_i2, uu_i0p5_i1_i2), FusedMulAddSIMD(FDPart1_Rational_5_1008, AddSIMD(uu_i0m4_i1_i2, uu_i0p4_i1_i2), FusedMulAddSIMD(FDPart1_Rational_5_21, AddSIMD(uu_i0m2_i1_i2, uu_i0p2_i1_i2), FDPart1_0))))));
const REAL_SIMD_ARRAY uu_dDD11 = MulSIMD(MulSIMD(invdx1, invdx1), FusedMulAddSIMD(FDPart1_Rational_5_126, AddSIMD(uu_i0_i1m3_i2, uu_i0_i1p3_i2), FusedMulAddSIMD(FDPart1_Rational_5_3, AddSIMD(uu_i0_i1m1_i2, uu_i0_i1p1_i2), FusedMulSubSIMD(FDPart1_Rational_1_3150, AddSIMD(uu_i0_i1m5_i2, uu_i0_i1p5_i2), FusedMulAddSIMD(FDPart1_Rational_5_1008, AddSIMD(uu_i0_i1m4_i2, uu_i0_i1p4_i2), FusedMulAddSIMD(FDPart1_Rational_5_21, AddSIMD(uu_i0_i1m2_i2, uu_i0_i1p2_i2), FDPart1_0))))));
const REAL_SIMD_ARRAY uu_dDD22 = MulSIMD(MulSIMD(invdx2, invdx2), FusedMulAddSIMD(FDPart1_Rational_5_126, AddSIMD(uu_i0_i1_i2m3, uu_i0_i1_i2p3), FusedMulAddSIMD(FDPart1_Rational_5_3, AddSIMD(uu_i0_i1_i2m1, uu_i0_i1_i2p1), FusedMulSubSIMD(FDPart1_Rational_1_3150, AddSIMD(uu_i0_i1_i2m5, uu_i0_i1_i2p5), FusedMulAddSIMD(FDPart1_Rational_5_1008, AddSIMD(uu_i0_i1_i2m4, uu_i0_i1_i2p4), FusedMulAddSIMD(FDPart1_Rational_5_21, AddSIMD(uu_i0_i1_i2m2, uu_i0_i1_i2p2), FDPart1_0))))));
/*
* NRPy+ Finite Difference Code Generation, Step 2 of 2: Evaluate SymPy expressions and write to main memory:
*/
/*
* Original SymPy expressions:
* "[const REAL_SIMD_ARRAY __RHS_exp_0 = vv,
* const REAL_SIMD_ARRAY __RHS_exp_1 = wavespeed**2*(uu_dDD00 + uu_dDD11 + uu_dDD22)]"
*/
const REAL_SIMD_ARRAY __RHS_exp_0 = vv;
const REAL_SIMD_ARRAY __RHS_exp_1 = MulSIMD(MulSIMD(wavespeed, wavespeed), AddSIMD(uu_dDD00, AddSIMD(uu_dDD11, uu_dDD22)));
WriteSIMD(&rhs_gfs[IDX4(UUGF, i0, i1, i2)], __RHS_exp_0);
WriteSIMD(&rhs_gfs[IDX4(VVGF, i0, i1, i2)], __RHS_exp_1);
}
<a id='id'></a>
# Step 4: Setting up Initial Data for the Scalar Wave Equation \[Back to [top](#toc)\]
$$\label{id}$$
<a id='planewave'></a>
## Step 4.a: The Monochromatic Plane-Wave Solution \[Back to [top](#toc)\]
$$\label{planewave}$$
The solution to the scalar wave equation for a monochromatic (single-wavelength) wave traveling in the $\hat{k}$ direction is
$$u(\vec{x},t) = f(\hat{k}\cdot\vec{x} - c t),$$
where $\hat{k}$ is a unit vector. We choose $f(\hat{k}\cdot\vec{x} - c t)$ to take the form
$$
f(\hat{k}\cdot\vec{x} - c t) = \sin\left(\hat{k}\cdot\vec{x} - c t\right) + 2,
$$
where we add the $+2$ to ensure that the exact solution never crosses through zero. In places where the exact solution passes through zero, the relative error (i.e., the most common error measure used to check that the numerical solution converges to the exact solution) is undefined. Also, $f(\hat{k}\cdot\vec{x} - c t)$ plus a constant is still a solution to the wave equation.
```python
# Step 1: Set parameters defined in other modules
xx = gri.xx # Sets the Cartesian coordinates xx[0]=x; xx[1]=y; xx[2]=z
# Step 2: Declare free parameters intrinsic to these initial data
time = par.Cparameters("REAL", thismodule, "time",0.0)
kk = par.Cparameters("REAL", thismodule, ["kk0", "kk1", "kk2"],[1.0,1.0,1.0])
# Step 3: Normalize the k vector
kk_norm = sp.sqrt(kk[0]**2 + kk[1]**2 + kk[2]**2)
# Step 4: Compute k.x
dot_product = sp.sympify(0)
for i in range(DIM):
dot_product += xx[i]*kk[i]
dot_product /= kk_norm
# Step 5: Set initial data for uu and vv, where vv_ID = \partial_t uu_ID.
uu_ID_PlaneWave = sp.sin(dot_product - wavespeed*time)+2
vv_ID_PlaneWave = sp.diff(uu_ID_PlaneWave, time)
```
Next we verify that $f(\hat{k}\cdot\vec{x} - c t)$ satisfies the wave equation, by computing
$$\left(c^2 \nabla^2 - \partial_t^2 \right)\ f\left(\hat{k}\cdot\vec{x} - c t\right),$$
and confirming the result is exactly zero.
```python
sp.simplify(wavespeed**2*(sp.diff(uu_ID_PlaneWave,xx[0],2) +
sp.diff(uu_ID_PlaneWave,xx[1],2) +
sp.diff(uu_ID_PlaneWave,xx[2],2))
- sp.diff(uu_ID_PlaneWave,time,2))
```
$\displaystyle 0$
<a id='sphericalgaussian'></a>
## Step 4.b: The Spherical Gaussian Solution \[Back to [top](#toc)\]
$$\label{sphericalgaussian}$$
Here we will implement the spherical Gaussian solution, which consists of ingoing and outgoing wave fronts:
\begin{align}
u(r,t) &= u_{\rm out}(r,t) + u_{\rm in}(r,t) + 1,\ \ \text{where}\\
u_{\rm out}(r,t) &=\frac{r-ct}{r} \exp\left[\frac{-(r-ct)^2}{2 \sigma^2}\right] \\
u_{\rm in}(r,t) &=\frac{r+ct}{r} \exp\left[\frac{-(r+ct)^2}{2 \sigma^2}\right] \\
\end{align}
where $c$ is the wavespeed, and $\sigma$ is the width of the Gaussian (i.e., the "standard deviation").
```python
# Step 1: Set parameters defined in other modules
xx = gri.xx # Sets the Cartesian coordinates xx[0]=x; xx[1]=y; xx[2]=z
# Step 2: Declare free parameters intrinsic to these initial data
time = par.Cparameters("REAL", thismodule, "time",0.0)
sigma = par.Cparameters("REAL", thismodule, "sigma",3.0)
# Step 4: Compute r
r = sp.sympify(0)
for i in range(DIM):
r += xx[i]**2
r = sp.sqrt(r)
# Step 5: Set initial data for uu and vv, where vv_ID = \partial_t uu_ID.
uu_ID_SphericalGaussianOUT = +(r - wavespeed*time)/r * sp.exp( -(r - wavespeed*time)**2 / (2*sigma**2) )
uu_ID_SphericalGaussianIN = +(r + wavespeed*time)/r * sp.exp( -(r + wavespeed*time)**2 / (2*sigma**2) )
uu_ID_SphericalGaussian = uu_ID_SphericalGaussianOUT + uu_ID_SphericalGaussianIN + sp.sympify(1)
vv_ID_SphericalGaussian = sp.diff(uu_ID_SphericalGaussian, time)
```
Since the wave equation is linear, both the outgoing and ingoing waves must satisfy the wave equation, which implies that their sum also satisfies the wave equation.
Next we verify that $u(r,t)$ satisfies the wave equation, by checking that
$$\left(c^2 \nabla^2 - \partial_t^2 \right) u_{\rm out}(r,t)$$
and
$$\left(c^2 \nabla^2 - \partial_t^2 \right) u_{\rm in}(r,t)$$
are separately zero. We check the two pieces separately because SymPy has difficulty simplifying the combined expression.
```python
print(sp.simplify(wavespeed**2*(sp.diff(uu_ID_SphericalGaussianOUT,xx[0],2) +
sp.diff(uu_ID_SphericalGaussianOUT,xx[1],2) +
sp.diff(uu_ID_SphericalGaussianOUT,xx[2],2))
- sp.diff(uu_ID_SphericalGaussianOUT,time,2)) )
print(sp.simplify(wavespeed**2*(sp.diff(uu_ID_SphericalGaussianIN,xx[0],2) +
sp.diff(uu_ID_SphericalGaussianIN,xx[1],2) +
sp.diff(uu_ID_SphericalGaussianIN,xx[2],2))
- sp.diff(uu_ID_SphericalGaussianIN,time,2)))
```
0
0
<a id='code_validation2'></a>
# Step 5: Code Validation against `ScalarWave.InitialData` NRPy+ module \[Back to [top](#toc)\]
$$\label{code_validation2}$$
As a code validation check, we will verify agreement in the SymPy expressions for plane-wave initial data for the Scalar Wave equation between
1. this tutorial and
2. the NRPy+ [ScalarWave.InitialData](../edit/ScalarWave/InitialData.py) module.
```python
# We just defined SymPy expressions for uu_ID and vv_ID in
# terms of other SymPy variables. Here, we will use the
# above-defined uu_ID and vv_ID to validate against the
# same expressions in the ScalarWave/InitialData.py
# module, to ensure consistency between this tutorial
# (historically speaking, the tutorial was written first)
# and the PlaneWave ID module itself.
#
# Step 6: Call the InitialData(Type="PlaneWave") function from within the
# ScalarWave/InitialData.py module,
# which should do exactly the same as in Steps 1-5 above.
import sys                                # standard library; needed for sys.exit() below
import ScalarWave.InitialData as swid
swid.InitialData(Type="PlaneWave")
# Step 7: Consistency check between the tutorial notebook above
# and the PlaneWave option from within the
# ScalarWave/InitialData.py module.
print("Consistency check between ScalarWave tutorial and NRPy+ module: PlaneWave Case")
if sp.simplify(uu_ID_PlaneWave - swid.uu_ID) != 0:
print("TEST FAILED: uu_ID_PlaneWave - swid.uu_ID = "+str(sp.simplify(uu_ID_PlaneWave - swid.uu_ID))+"\t\t (should be zero)")
sys.exit(1)
if sp.simplify(vv_ID_PlaneWave - swid.vv_ID) != 0:
print("TEST FAILED: vv_ID_PlaneWave - swid.vv_ID = "+str(sp.simplify(vv_ID_PlaneWave - swid.vv_ID))+"\t\t (should be zero)")
sys.exit(1)
print("TESTS PASSED!")
# Step 8: Consistency check between the tutorial notebook above
# and the SphericalGaussian option from within the
# ScalarWave/InitialData.py module.
swid.InitialData(Type="SphericalGaussian")
print("Consistency check between ScalarWave tutorial and NRPy+ module: SphericalGaussian Case")
if sp.simplify(uu_ID_SphericalGaussian - swid.uu_ID) != 0:
print("TEST FAILED: uu_ID_SphericalGaussian - swid.uu_ID = "+str(sp.simplify(uu_ID_SphericalGaussian - swid.uu_ID))+"\t\t (should be zero)")
sys.exit(1)
if sp.simplify(vv_ID_SphericalGaussian - swid.vv_ID) != 0:
print("TEST FAILED: vv_ID_SphericalGaussian - swid.vv_ID = "+str(sp.simplify(vv_ID_SphericalGaussian - swid.vv_ID))+"\t\t (should be zero)")
sys.exit(1)
print("TESTS PASSED!")
```
Consistency check between ScalarWave tutorial and NRPy+ module: PlaneWave Case
TESTS PASSED!
Consistency check between ScalarWave tutorial and NRPy+ module: SphericalGaussian Case
TESTS PASSED!
<a id='latex_pdf_output'></a>
# Step 6: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-ScalarWave.pdf](Tutorial-ScalarWave.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```python
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-ScalarWave")
```
Created Tutorial-ScalarWave.tex, and compiled LaTeX file to PDF file
Tutorial-ScalarWave.pdf
*Source: `Tutorial-ScalarWave.ipynb` from the `Harmohit-Singh/nrpytutorial` repository (head `81e6fe09c6882a2d95e1d0ea57f465fc7eda41e1`, BSD-2-Clause license); 45,642 bytes, 10,682 tokens; 66 stars, 14 issues, 30 forks.*
# Fun applications
```python
import numpy as np   # used below for np.mean and np.log
import pandas as pd
import pandas_datareader
import datetime
import matplotlib.pyplot as plt
plt.style.use('seaborn')
```
# Friday effect in the stock market?
Let's investigate if there is a "Friday effect" on the stock market. That is, do stock prices on average increase more on Fridays than they do on other days of the week?
**Reading in data:**
```python
start = datetime.datetime(2014,1,1)
end = datetime.datetime(2018,1,1)
```
```python
firms = []
for i,stock_name in enumerate(['IBM','AAPL', 'TSLA']):
firm_stock = pandas_datareader.iex.daily.IEXDailyReader(stock_name, start, end).read()
firm_stock['firm'] = stock_name
firms.append(firm_stock)
```
```python
stocks = pd.concat(firms)
```
```python
stocks.sample(3)
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>open</th>
<th>high</th>
<th>low</th>
<th>close</th>
<th>volume</th>
<th>firm</th>
</tr>
<tr>
<th>date</th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>2014-09-02</th>
<td>275.5000</td>
<td>284.8900</td>
<td>274.3000</td>
<td>284.120</td>
<td>9852351</td>
<td>TSLA</td>
</tr>
<tr>
<th>2017-10-24</th>
<td>338.8000</td>
<td>342.8000</td>
<td>336.1600</td>
<td>337.340</td>
<td>4491672</td>
<td>TSLA</td>
</tr>
<tr>
<th>2014-09-18</th>
<td>93.8859</td>
<td>94.2728</td>
<td>93.5451</td>
<td>93.757</td>
<td>37299435</td>
<td>AAPL</td>
</tr>
</tbody>
</table>
</div>
```python
# convert index from type 'O' to 'datetime'
stocks.index = pd.to_datetime(stocks.index)
```
```python
# sort dataframe
stocks.sort_values(['firm','date'], inplace=True)
```
```python
# create the weekday variable
stocks['weekday'] = stocks.index.weekday
stocks['year'] = stocks.index.year
```
**First differences:** Here we must avoid the temptation to use `df.var.diff(1)`, which will take differences across different firms. Instead, we have to use `df.groupby('firm').var.diff(1)` to ensure that we stay within the firm. This will produce a NaN as the first value for each firm.
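To see the pitfall in a tiny toy example (invented numbers, not the stock dataset): a plain `diff(1)` mixes the last row of one firm with the first row of the next, while the grouped version stays within each firm.
```python
# Toy illustration of the groupby-diff pitfall (hypothetical data).
import pandas as pd

toy = pd.DataFrame({'firm': ['A', 'A', 'B', 'B'],
                    'close': [10.0, 11.0, 100.0, 98.0]})
print(toy.close.diff(1).values)                  # [nan  1. 89. -2.] -> the 89. is spurious
print(toy.groupby('firm').close.diff(1).values)  # [nan  1. nan -2.]
```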
```python
stocks['diff_close'] = stocks.groupby('firm').close.diff(1)
```
## Analysis based on raw averages
```python
ax = stocks.groupby('weekday')['close'].mean().plot(kind='line', style='-o');
ax.set_ylabel('Avg. closing price');
ax.set_xticks(range(5));
```
**Mean**
```python
ax = stocks.groupby('weekday')['diff_close'].mean().plot(kind='bar');
ax.set_ylabel('Avg. difference in closing price');
```
**Median**
```python
# could there be outliers? then the median is better than the mean
ax = stocks.groupby('weekday')['diff_close'].median().plot(kind='bar');
ax.set_ylabel('Median difference in closing price');
```
**By firm**
```python
# effects by firms?
stocks.groupby(['firm','weekday']).diff_close.median().unstack().plot(kind='bar');
```
**Summing up so far:** It seems that companies differ, but there is a tendency toward larger price increases on Mondays... surprisingly (maybe we'd expect people to be angry and pessimistic Monday morning?)
**Question:** What about quantities? If they differ over the week, it's not so surprising that prices differ as well.
```python
# Avg. quantity by day by firm
stocks.groupby(['firm','weekday']).volume.mean().unstack().plot(kind='bar');
```
Damn, Apple's volume is so much higher, we cannot compare these directly. We can't even look at deviations from the mean, since even those are larger. However, *percent deviations from the mean* might make sense.
```python
# transformation frunction
def pct_deviation_from_mean(x):
m = np.mean(x)
dev = (x-m)/m
return dev
# one-step split-apply-combine
stocks['vol_pct_dev_from_mean'] = stocks.groupby('firm')['volume'].transform(pct_deviation_from_mean)
# plot
ax = stocks.groupby(['firm','weekday'])['vol_pct_dev_from_mean'].mean().unstack().plot(kind='bar');
ax.set_ylabel('Volume (pct. deviation from firm avg.)');
```
## Analysis based on regressions
Let's run the regression
\\[ \Delta p_{jt} = \beta_0 + \text{weekday dummies} + \text{firm fixed effect} + \text{year fixed effects} + \text{error} \\]
The weekday fixed effects will be so that 0 = Monday (left out, the reference category), 1 = Tuesday, ..., 4 = Friday. Thus, we test for the Friday effect by examining the p-value for the 4th weekday, denoted `C(weekday)[T.4]`.
***Note*** We will not expect regressions in any of your work.
```python
import statsmodels.formula.api as sm
res = sm.ols(formula = 'diff_close~C(weekday)+C(firm)+C(year)', data = stocks).fit()
res.summary()
```
<table class="simpletable">
<caption>OLS Regression Results</caption>
<tr>
<th>Dep. Variable:</th> <td>diff_close</td> <th> R-squared: </th> <td> 0.001</td>
</tr>
<tr>
<th>Model:</th> <td>OLS</td> <th> Adj. R-squared: </th> <td> -0.002</td>
</tr>
<tr>
<th>Method:</th> <td>Least Squares</td> <th> F-statistic: </th> <td> 0.4364</td>
</tr>
<tr>
<th>Date:</th> <td>Wed, 05 Jun 2019</td> <th> Prob (F-statistic):</th> <td> 0.916</td>
</tr>
<tr>
<th>Time:</th> <td>23:02:51</td> <th> Log-Likelihood: </th> <td> -7351.5</td>
</tr>
<tr>
<th>No. Observations:</th> <td> 2700</td> <th> AIC: </th> <td>1.472e+04</td>
</tr>
<tr>
<th>Df Residuals:</th> <td> 2690</td> <th> BIC: </th> <td>1.478e+04</td>
</tr>
<tr>
<th>Df Model:</th> <td> 9</td> <th> </th> <td> </td>
</tr>
<tr>
<th>Covariance Type:</th> <td>nonrobust</td> <th> </th> <td> </td>
</tr>
</table>
<table class="simpletable">
<tr>
<td></td> <th>coef</th> <th>std err</th> <th>t</th> <th>P>|t|</th> <th>[0.025</th> <th>0.975]</th>
</tr>
<tr>
<th>Intercept</th> <td> 0.2209</td> <td> 0.250</td> <td> 0.883</td> <td> 0.377</td> <td> -0.270</td> <td> 0.711</td>
</tr>
<tr>
<th>C(weekday)[T.1]</th> <td> -0.0966</td> <td> 0.227</td> <td> -0.426</td> <td> 0.670</td> <td> -0.541</td> <td> 0.348</td>
</tr>
<tr>
<th>C(weekday)[T.2]</th> <td> -0.2176</td> <td> 0.226</td> <td> -0.961</td> <td> 0.337</td> <td> -0.662</td> <td> 0.226</td>
</tr>
<tr>
<th>C(weekday)[T.3]</th> <td> -0.2673</td> <td> 0.228</td> <td> -1.171</td> <td> 0.242</td> <td> -0.715</td> <td> 0.180</td>
</tr>
<tr>
<th>C(weekday)[T.4]</th> <td> -0.2515</td> <td> 0.228</td> <td> -1.102</td> <td> 0.271</td> <td> -0.699</td> <td> 0.196</td>
</tr>
<tr>
<th>C(firm)[T.IBM]</th> <td> -0.1015</td> <td> 0.174</td> <td> -0.583</td> <td> 0.560</td> <td> -0.443</td> <td> 0.240</td>
</tr>
<tr>
<th>C(firm)[T.TSLA]</th> <td> 0.0265</td> <td> 0.174</td> <td> 0.152</td> <td> 0.879</td> <td> -0.315</td> <td> 0.368</td>
</tr>
<tr>
<th>C(year)[T.2015]</th> <td> -0.0292</td> <td> 0.222</td> <td> -0.131</td> <td> 0.895</td> <td> -0.465</td> <td> 0.406</td>
</tr>
<tr>
<th>C(year)[T.2016]</th> <td> -0.0047</td> <td> 0.222</td> <td> -0.021</td> <td> 0.983</td> <td> -0.440</td> <td> 0.431</td>
</tr>
<tr>
<th>C(year)[T.2017]</th> <td> 0.1677</td> <td> 0.222</td> <td> 0.755</td> <td> 0.451</td> <td> -0.268</td> <td> 0.604</td>
</tr>
</table>
<table class="simpletable">
<tr>
<th>Omnibus:</th> <td>512.737</td> <th> Durbin-Watson: </th> <td> 1.920</td>
</tr>
<tr>
<th>Prob(Omnibus):</th> <td> 0.000</td> <th> Jarque-Bera (JB): </th> <td>8797.757</td>
</tr>
<tr>
<th>Skew:</th> <td>-0.396</td> <th> Prob(JB): </th> <td> 0.00</td>
</tr>
<tr>
<th>Kurtosis:</th> <td>11.808</td> <th> Cond. No. </th> <td> 7.55</td>
</tr>
</table><br/><br/>Warnings:<br/>[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Want $\LaTeX$? No problem!
```python
res.summary().as_latex()
```
'\\begin{center}\n\\begin{tabular}{lclc}\n\\toprule\n\\textbf{Dep. Variable:} & diff_close & \\textbf{ R-squared: } & 0.001 \\\\\n\\textbf{Model:} & OLS & \\textbf{ Adj. R-squared: } & -0.002 \\\\\n\\textbf{Method:} & Least Squares & \\textbf{ F-statistic: } & 0.4364 \\\\\n\\textbf{Date:} & Wed, 05 Jun 2019 & \\textbf{ Prob (F-statistic):} & 0.916 \\\\\n\\textbf{Time:} & 23:02:51 & \\textbf{ Log-Likelihood: } & -7351.5 \\\\\n\\textbf{No. Observations:} & 2700 & \\textbf{ AIC: } & 1.472e+04 \\\\\n\\textbf{Df Residuals:} & 2690 & \\textbf{ BIC: } & 1.478e+04 \\\\\n\\textbf{Df Model:} & 9 & \\textbf{ } & \\\\\n\\bottomrule\n\\end{tabular}\n\\begin{tabular}{lcccccc}\n & \\textbf{coef} & \\textbf{std err} & \\textbf{t} & \\textbf{P$>$$|$t$|$} & \\textbf{[0.025} & \\textbf{0.975]} \\\\\n\\midrule\n\\textbf{Intercept} & 0.2209 & 0.250 & 0.883 & 0.377 & -0.270 & 0.711 \\\\\n\\textbf{C(weekday)[T.1]} & -0.0966 & 0.227 & -0.426 & 0.670 & -0.541 & 0.348 \\\\\n\\textbf{C(weekday)[T.2]} & -0.2176 & 0.226 & -0.961 & 0.337 & -0.662 & 0.226 \\\\\n\\textbf{C(weekday)[T.3]} & -0.2673 & 0.228 & -1.171 & 0.242 & -0.715 & 0.180 \\\\\n\\textbf{C(weekday)[T.4]} & -0.2515 & 0.228 & -1.102 & 0.271 & -0.699 & 0.196 \\\\\n\\textbf{C(firm)[T.IBM]} & -0.1015 & 0.174 & -0.583 & 0.560 & -0.443 & 0.240 \\\\\n\\textbf{C(firm)[T.TSLA]} & 0.0265 & 0.174 & 0.152 & 0.879 & -0.315 & 0.368 \\\\\n\\textbf{C(year)[T.2015]} & -0.0292 & 0.222 & -0.131 & 0.895 & -0.465 & 0.406 \\\\\n\\textbf{C(year)[T.2016]} & -0.0047 & 0.222 & -0.021 & 0.983 & -0.440 & 0.431 \\\\\n\\textbf{C(year)[T.2017]} & 0.1677 & 0.222 & 0.755 & 0.451 & -0.268 & 0.604 \\\\\n\\bottomrule\n\\end{tabular}\n\\begin{tabular}{lclc}\n\\textbf{Omnibus:} & 512.737 & \\textbf{ Durbin-Watson: } & 1.920 \\\\\n\\textbf{Prob(Omnibus):} & 0.000 & \\textbf{ Jarque-Bera (JB): } & 8797.757 \\\\\n\\textbf{Skew:} & -0.396 & \\textbf{ Prob(JB): } & 0.00 \\\\\n\\textbf{Kurtosis:} & 11.808 & \\textbf{ Cond. No. } & 7.55 \\\\\n\\bottomrule\n\\end{tabular}\n%\\caption{OLS Regression Results}\n\\end{center}\n\nWarnings: \\newline\n [1] Standard Errors assume that the covariance matrix of the errors is correctly specified.'
**Conclusion:** The Friday dummy is not statistically significantly different from zero (p = 0.271 > 0.05). Thus, we conclude that there is no extra growth in prices on Fridays compared to other days.
# The relationship between income and CO2 emissions
What is the relationship between GDP and emissions? Do the rich pollute more than the poor? Do fast-growing countries start polluting more?
## Downloading data
We will use the World Bank as the source of our data. See more here: https://data.worldbank.org/indicator/EN.ATM.CO2E.PC?locations=US-CN-DK-RU&view=chart
```python
from pandas_datareader import wb
```
```python
countries = ['DK','ZA','US','GB','CN','IN','BR','CA','RU','KR','VN','SE','DE','FR','BG','IT','PK','ID','MX','PL']
```
```python
co2 = wb.download(indicator='EN.ATM.CO2E.KT', country=countries, start=1970, end=2017) # alternatively, "EN.ATM.CO2E.PC" is per cap.
pop = wb.download(indicator='SP.POP.TOTL', country=countries, start=1970, end=2017)
gdp = wb.download(indicator='NY.GDP.PCAP.KD', country=countries, start=1970, end=2017)
```
```python
# merge datasets
both = pd.merge(co2, gdp,how='outer',left_index=True,right_index=True)
both = pd.merge(both,pop,how='outer',left_index=True,right_index=True)
# process
both.reset_index(inplace=True)
both.year = both.year.astype(int) # datatype: object -> integer
both = both.rename(columns={'EN.ATM.CO2E.KT':'co2_tot', 'NY.GDP.PCAP.KD':'gdp', 'SP.POP.TOTL':'population'})
both['co2'] = both.co2_tot / both.population # per capita based on KT number
# sort by year
both = both.sort_values(['country','year'])
# drop rows with missings in any variable
both = both.dropna()
```
## Overview: Across-country comparisons
First we look at the average emissions over time. The US and Canada have the highest per-capita emissions. Denmark is somewhere in the middle, but worse than Sweden :(
```python
ax = both.groupby('country').co2.mean().plot(kind='bar')
ax.set_ylabel('Avg. annual emissions per capita (kt CO2/cap.)');
```
How about the annualized growth rates? Recall that these are calculated using the formula
\\[ \text{annual growth} = \left( \frac{y_{t_1}}{y_{t_0}} \right) ^ {\frac{1}{t_1 - t_0}} - 1. \\]
For example, if emissions go from 100 to 150 over 10 years, the annualized growth rate is $(150/100)^{1/10} - 1 \approx 4.1\%$.
```python
def annual_growth(x):
x_last = x.values[-1]
x_first = x.values[0]
num_years = len(x)
growth_annualized = (x_last/x_first)**(1/num_years) - 1.0
return growth_annualized
ax = both.groupby('country')['co2'].agg(annual_growth).plot(kind='bar')
ax.set_ylabel('Annual growth from first to last year');
```
Now we see that many developing countries (China, Vietnam, Korea, India) stand out with massive increases in CO2 emissions. For most of the developed countries, we see modest decreases in emissions.
Next, we will zoom in on the developments over time for each country. We start with one big graph that is impossible to read.
```python
fig = plt.figure()
ax = plt.subplot(111)
both.set_index('year').groupby('country')['co2_tot'].plot(kind='line', legend=True, ax=ax);
ax.set_ylabel('Total CO2 emissions (kt)')
box = ax.get_position() # find plot coordinates
ax.set_position([box.x0, box.y0 + box.height * 0.1,box.width, box.height * 0.9]) # shrink height by 10% at bottom
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.15),ncol=5); # Put a legend below current axis
```
That graph is extremely hard to read so let's use a `FacetGrid` from the plotting package `seaborn`:
```python
import seaborn as sns # a package full of nice things! Google it...
```
```python
by_var = 'country'
y_var = 'co2'
g = sns.FacetGrid(both, col=by_var, hue=by_var, col_wrap=4)
g = g.map(plt.plot, 'year', y_var) # draw the upper line
g = g.map(plt.fill_between, 'year', y_var, alpha=0.2).set_titles("{col_name} "+by_var) # draw the underlying area
g = g.set_titles("{col_name}") # # Control the title of each facet
```
**Tentative conclusion:** We have seen strong increases in emissions in many developing countries and minor falls in emissions in some of the more developed countries. Initially, this might indicate that there is no strong relationship between CO2 and GDP. However, to say anything more precise we need to relate emissions directly to GDP, which we do next.
## GDP and CO2
The above is across countries, what is the relationship between avg. GDP and avg. emissions. What about within a country? How do changes in GDP correlate with changes in CO2?
### Across-country comparisons
First, we analyze the problem by focusing on comparisons that go across countries. To do this, we must collapse the data within a country, e.g. by taking the average emissions over time.
```python
(both.groupby('country').mean()
.apply(np.log)
.plot(kind='scatter', x='gdp', y='co2'));
```
This graph seems to indicate that there is indeed a relationship between GDP and CO2: countries with a higher GDP per capita tend to have higher CO2 emissions per capita. It even looks like one log point in GDP translates into nearly a full log point in CO2.
### Within-country comparisons
It is possible that the across-country comparison above is invalid. There might be an omitted variable. It might be that large economies are located where it is cold and thus have to spend much effort to heat houses, which burns CO2. Or it might be that, for any *given* country, increasing GDP will *all else equal* lead to lower CO2, maybe because the country can now afford to invest in green tech.
To explore whether that is the case, we can focus on *within-country variation* rather than *across country variation*. One example of what is meant by this is that we can focus on the relationship between observed changes in GDP and changes in CO2.
#### Average differences
First, we compute for each country (using a `groupby`), the average one-year growth rate for that country, using the function `pct_change()`.
```python
ax = both.groupby('country').agg(lambda x : x.pct_change().mean()).plot(kind='scatter',x='gdp',y='co2')
ax.set_xlabel('Avg. one-year growth in GDP');
ax.set_ylabel('Avg. one-year growth in CO2 emissions');
```
Just as a test, let us make sure that the *average log differences* give approximately the same picture:
```python
ax = both.groupby('country').agg(lambda x: (np.log(x)).diff().mean()).plot(kind='scatter',x='gdp',y='co2')
ax.set_xlabel('Avg. one-year log-difference in GDP');
ax.set_ylabel('Avg. one-year log-difference in CO2');
```
**Partial conclusion:** From these graphs, we draw the same conclusion: there is a positive relationship between changes in GDP and changes in CO2. *However*, we do note that for sufficiently low growth in GDP (less than about 2%), CO2 emissions are actually *falling*. Taken at face value, this indicates that it is possible to have positive growth while still reducing emissions. But the relationship is monotonic (at least as far as our data can tell): higher GDP growth goes together with higher growth in emissions.
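To put a rough number on that threshold, here is a small sketch (it assumes the `both` DataFrame and the imports from earlier in the notebook, and the linear fit is purely illustrative):
```python
# Rough estimate of the GDP-growth rate at which average CO2 growth turns positive,
# based on a simple linear fit through the country-level averages.
avg_growth = both.groupby('country').agg(lambda x: x.pct_change().mean())
slope, intercept = np.polyfit(avg_growth['gdp'], avg_growth['co2'], 1)
print(f'CO2 growth crosses zero at a GDP growth rate of roughly {-intercept/slope:.2%}')
```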
#### Pooled differences
Instead of taking the average (over time) change in GDP and in CO2 within each country, we can also just plot all the annual differences.
```python
logdiffs = both.set_index('country').groupby(level=0).transform(lambda x : (np.log(x)).diff(1) ).reset_index()
```
```python
logdiffs.plot(kind='scatter', x='gdp', y='co2');
```
Again, there appears to be a positive relationship. However, we may worry that it is driven by *outliers*, so we would like to see where the bulk of the data lies. For this, we can use the `seaborn` plot `jointplot`, which simultaneously indicates (with color intensity and marginal histograms) what the data density looks like. We would like to draw our conclusions mainly from regions where we have a fair amount of data.
```python
h = sns.jointplot(x=logdiffs.gdp, y=logdiffs.co2, kind='hex')
h.set_axis_labels('log change in GDP', 'log change in CO2');
```
### Regression model
In many settings, a linear regression can be a great way of summarizing the dataset.
We will estimate the following models
\begin{align}
(0): \log \text{CO}_2 &= \beta_0 + \beta_1 \log \text{GDP} + \text{error} \\
(1): \log \text{CO}_2 &= \beta_0 + \beta_1 \log \text{GDP} + \text{time trend} + \text{error} \\
(2): \log \text{CO}_2 &= \beta_0 + \beta_1 \log \text{GDP} + \text{time trend} + \text{country fixed effects} + \text{error} \\
(3): \Delta \log \text{CO}_2 &= \tilde{\beta}_0 + \beta_1 \Delta \log \text{GDP} + \text{error} \\
(4): \Delta \log \text{CO}_2 &= \tilde{\beta}_0 + \beta_1 \Delta \log \text{GDP} + \text{country fixed effects} + \text{error}.
\end{align}
In model (0), we are simply comparing all observations. If there is a general tendency over time that is unrelated to GDP but that has affected emissions, then we are not accounting for it. That is done by (1). However, there might still be omitted variables specific to each country that affect the emissions and affect the GDP and thus cause endogeneity (e.g. temperature / distance from equator). Model (2) takes this into account.
The models (3) and (4) are first-difference specifications. In principle, model (3) should be equivalent to (2) if it is correctly specified. In practice, the two can differ due to a number of misspecifications. For example, the country fixed effects may be slowly changing over time, perhaps due to slow, gradual changes in temperature (and thus in heating needs), preferences or productivity. In that case, focusing on short-run movements in first differences may be more accurate. On the other hand, if there are a lot of short-run fluctuations, then fixed effects are perhaps closer to the truth. In practice, these are questions that depend on the specific deviations. We will simply estimate both and check whether they are close.
```python
res0 = sm.ols(formula='np.log(co2)~1+np.log(gdp)', data=both).fit()
res1 = sm.ols(formula='np.log(co2)~1+np.log(gdp)+C(year)', data=both).fit()
res2 = sm.ols(formula='np.log(co2)~1+np.log(gdp)+C(year)+C(country)', data=both).fit()
res3 = sm.ols(formula='co2~1+gdp', data=logdiffs).fit() # in this dataframe, variables are already logged
res4 = sm.ols(formula='co2~1+gdp+C(country)', data=logdiffs).fit() # in this dataframe, variables are already logged
print(f'Baseline: {res0.params["np.log(gdp)"] : 8.4f}')
print(f'Year FE: {res1.params["np.log(gdp)"] : 8.4f}')
print(f'Year+Country FE: {res2.params["np.log(gdp)"] : 8.4f}')
print(f'FD: {res3.params["gdp"] : 8.4f}')
print(f'FD + country FE: {res4.params["gdp"] : 8.4f}')
```
Baseline: 0.6255
Year FE: 0.6281
Year+Country FE: 0.8482
FD: 0.9055
FD + country FE: 0.8637
Note how easy it is to add e.g. country fixed effects to a model: simply include `C(country)` in the formula, as in `co2~gdp+C(country)`.
**Plotting** the estimates in a cool graph:
```python
res = pd.DataFrame(data = [
    ['Baseline', res0.params["np.log(gdp)"], res0.bse["np.log(gdp)"]],
['Year FE', res1.params["np.log(gdp)"], res1.bse["np.log(gdp)"]],
['Year+country FE', res2.params["np.log(gdp)"], res2.bse["np.log(gdp)"]],
['FD', res3.params["gdp"], res3.bse["gdp"]],
['FD+country FE', res4.params["gdp"], res4.bse["gdp"]],
], columns=['Model','estimate','se'])
```
```python
fig, ax = plt.subplots()
x_pos = range(res.shape[0])
ax.bar(x_pos, res.estimate, yerr=res.se, alpha=0.5, ecolor='black', capsize=5);
ax.set_xticks(x_pos);
ax.set_xticklabels(res.Model);
ax.set_ylabel('Estimate');
ax.set_xlabel('Model');
```
### Conclusion
**Overall:** Unfortunately, our investigations document a clear correlation between GDP and CO2 emissions. We estimate an elasticity between 0.63 and 0.91, although we put more confidence in the three models that account for country fixed effects, where the estimate ranges between 0.85 and 0.91. This is quite a high elasticity, implying that CO2 emissions move almost one-for-one with economic growth.
| 70dba7ec1f2a04feab120addea7ef94f9a6662d0 | 398,396 | ipynb | Jupyter Notebook | 07/Fun_applications.ipynb | mariusgruenewald/lectures-2019 | 36812db370dfe7229be2df88b5020940394e54c0 | [
"MIT"
]
| 14 | 2019-01-11T09:47:18.000Z | 2019-08-25T05:45:18.000Z | 07/Fun_applications.ipynb | mariusgruenewald/lectures-2019 | 36812db370dfe7229be2df88b5020940394e54c0 | [
"MIT"
]
| 10 | 2019-01-09T19:32:09.000Z | 2020-03-02T15:51:44.000Z | 07/Fun_applications.ipynb | mariusgruenewald/lectures-2019 | 36812db370dfe7229be2df88b5020940394e54c0 | [
"MIT"
]
| 31 | 2019-02-11T09:23:44.000Z | 2020-01-13T10:54:42.000Z | 334.224832 | 101,716 | 0.919229 | true | 7,547 | Qwen/Qwen-72B | 1. YES
2. YES | 0.787931 | 0.885631 | 0.697817 | __label__eng_Latn | 0.902755 | 0.459593 |
```python
from __future__ import print_function
import math
import sisl
import numpy as np
import matplotlib.pyplot as plt
from functools import partial
%matplotlib inline
```
One of the key ideas behind `sisl` is to let the user interact directly with a DFT Hamiltonian.
In this example we will highlight a unique implementation in TBtrans which enables ***any*** kind of user intervention.
The idea is a transformation of the Green function calculation from:
\begin{equation}
\mathbf G^{-1}(E) = \mathbf S (E + i\eta) - \mathbf H - \sum_i\boldsymbol\Sigma_i
\end{equation}
to
\begin{equation}
\mathbf G^{-1}(E) = \mathbf S (E + i\eta) - \mathbf H - \delta\mathbf H - \sum_i\boldsymbol\Sigma_i - \delta\boldsymbol \Sigma
\end{equation}
where $\delta\mathbf H$ and $\delta\boldsymbol\Sigma$ can be of any type, i.e. complex and/or real.
The only (important!) difference between $\delta\mathbf H$ and $\delta\boldsymbol \Sigma$ is that the former enters the calculation of bond-currents, while the latter does not.
Since TBtrans by itself does not allow complex Hamiltonians, the above is a way to alleviate this restriction. One thing this may be used for is applying magnetic fields.
In the following we will use Peierls substitution on a square tight-binding model:
\begin{equation}
\mathbf H(\Phi) = \mathbf H(0)e^{i \Phi \delta x^- \cdot \delta y^+ / 2},
\end{equation}
where $\Phi$ is the magnetic flux (in proper units), $\delta x^- = x_j - x_i$ and $\delta y^+=y_j + y_i$, with $i$ and $j$ being atomic indices. If you are interested in this substitution, please consult the literature.
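Before wiring this into `sisl`, here is a minimal stand-alone sketch (plain `numpy`; the hopping value and coordinates are made up for illustration) of how a single hopping element picks up the Peierls phase:
```python
import numpy as np

def peierls_hopping(t0, xyz_i, xyz_j, phi):
    # attach the Peierls phase exp(i * phi * (x_j - x_i) * (y_j + y_i) / 2)
    # to the bare hopping element t0 between sites i and j
    dx = xyz_j[0] - xyz_i[0]
    yp = xyz_j[1] + xyz_i[1]
    return t0 * np.exp(0.5j * phi * dx * yp)

# a hop along x between two neighbouring sites at y = 1 (made-up coordinates)
print(peierls_hopping(-1.0, np.array([0., 1., 0.]), np.array([1., 1., 0.]), 0.1))
```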
---
First create a square lattice and define the on-site and nearest neighbour couplings
```python
square = sisl.Geometry([0,0,0], sisl.Atom(1, R=1.0), sc=sisl.SuperCell(1, nsc=[3, 3, 1]))
on, nn = 4, -1
```
```python
H_minimal = sisl.Hamiltonian(square)
H_minimal.construct([[0.1, 1.1], [on, nn]])
H_elec = H_minimal.tile(100, 1).tile(2, 0)
H_elec.set_nsc([3, 1, 1])
H_elec.write('ELEC.nc')
```
```python
H = H_elec.tile(50, 0)
# Make a constriction
geom = H.geom.translate( -H.geom.center(what='xyz') )
# This constriction is based on an example in the kwant project (called qhe). We however make a slight modification.
# Remove some atoms, this will create a constriction of 100 - 40 * 2 = 20 Ang with a Gaussian edge profile
remove = (np.abs(geom.xyz[:, 1]) > 50 - 37.5 * np.exp( -(geom.xyz[:, 0] / 12) **2 )).nonzero()[0]
# To reduce computations we find the atoms in the constriction such that we can
# limit the calculation region.
device = (np.abs(geom.remove(remove).xyz[:, 0]) < .6).nonzero()[0]
geom.remove(remove).write('test.xyz')
# Pretty print a range of atoms that is the smallest device region
print(sisl.utils.list2str(device))
H = H.remove(remove)
H.write('DEVICE.nc')
```
The above printed list of atoms should be inserted in the `RUN.fdf` in the `TBT.Atoms.Device`. This is important when one is *only* interested in the transmission, and does not care about the density of states. You are always encouraged to select the minimal device region to 1) speed up computations and 2) drastically reduce memory requirements.
In this regard you should read in the TBtrans manual about the additional flag `TBT.Atoms.Device.Connect` (you may try, as an additional exercise to set this flag to true and check the difference from the previous calculation).
Now we have $\mathbf H(0)$ with *no* phases due to magnetic fields. As the magnetic field is changing the Hamiltonian, and thus enters the bond-current calculations, we have to use the $\delta\mathbf H$ term (and *not* $\delta\boldsymbol\Sigma$).
The first thing we need to calculate is $\delta x^- \cdot\delta y^+$.
Since we already have the Hamiltonian we can utilize the connections by looping the coupling elements (in a sparse matrix/graph this is called *edges*). This is *much* cheaper than trying to figure out which atoms are neighbouring.
```python
device = H.geom
xy = sisl.Hamiltonian(device)
for ia in device:
# Get all connecting elements (excluding it-self)
edges = H.edges(ia)
# Calculate the vector between edges and ia:
# xyz[edges, :] - xyz[ia, :]
Rij = device.Rij(ia, edges)
# Now calculate the product:
# (xj - xi) * (yj + yi)
# Notice that we correct to +yi by adding it twice
xy[ia, edges] = Rij[:, 0] * (Rij[:, 1] + 2 * device.xyz[ia, 1])
xy[ia, ia] = 0.
xy.finalize() # this is only because it will speed up the following calculations
```
Now we have the coupling dependent phase factor, $\delta x^-\cdot\delta y^+$, and all we need to calculate is $\delta\mathbf H$ that transforms $\mathbf H(0) \to \mathbf H(\Phi)$.
This is done easily as a `Hamiltonian` allows basic element wise operations, i.e. `+`, `-`, `*`, `/` and `**` (the power function).
Your task is to insert the correct mathematical equation below, such that `dH` contains $\delta \mathbf H$ for $\mathbf H(\Phi) = \mathbf H(0) + \delta\mathbf H$.
To help you I have inserted the exponential function. To finalize the equation, you need three terms: `nn`, `xy` and `rec_phi`.
<!-- dH = math.e ** (0.5j / rec_phi * xy) * nn - nn-->
```python
rec_phis = np.arange(1, 51, 4)
print('Calculating (of {}):'.format(len(rec_phis)), end='')
for i, rec_phi in enumerate(rec_phis):
print(' {}'.format(i+1), end=',')
# Calculate H(Phi)
# TODO: insert the correct mathematical equation below to calculate the correct phase
#dH = math.e ** (0.5j ... ) ...
with sisl.get_sile('M_{}.dH.nc'.format(rec_phi), mode='w') as fh:
fh.write_delta(dH)
```
## Exercises
- Calculate all physical quantities for all different applied magnetic fields.
Before running the calculations, search the manual on how to save the self-energies (*HINT* out-of-core). By default, TBtrans calculates the self-energies as they are needed. However, if one has the same electrodes, same $k$-grid and same $E$-points for several different runs (as in this case) one can with benefit calculate the self-energies *once*, and then reuse them in subsequent calculations.
To ease the calculation of all the magnetic fields, a script `run.sh` is located in this directory to help you. Carefully read it to infer which option specifies the $\delta \mathbf H$ term.
Since this example has 14 different setups, each with 51 energy points, it will take some time. Around 30 seconds for the first (includes self-energy calculation), and around 10 seconds for all subsequent setups. So be patient. :)
- Secondly, read in all output into the workbook in a list.
- **TIME (*HARD*)**:
Choose an energy-point for $B=0$ and $B>0$ such that you have a finite transmission ($T>0$). Next, extract the bond-currents and calculate the transmission using the law of current conservation (Kirchhoffs 1st law: $T_{\mathrm{in}} \equiv T_{\mathrm{out}}$). *HINT*: select a line of atoms and calculate the bond-currents flowing between this line and its neighbouring line of atoms. Assert that the transmission is the same using both methods.
```python
# Create short-hand function
gs = sisl.get_sile
# No magnetic field
tbt0 = gs('siesta.TBT.nc')
# All magnetic fields in increasing order
tbts = [gs('M_{}/siesta.TBT.nc'.format(rec_phi)) for rec_phi in rec_phis]
```
- Plot the transmission function for all applied fields in the full energy range.
```python
# Do trick with easy plotting utility
E = tbt0.E[:]
Eplot = partial(plt.plot, E)
for rec_phi, tbt in zip(rec_phis, tbts):
Eplot(tbt.transmission(), '--', label='{}'.format(rec_phi));
Eplot(tbt0.transmission(), 'k');
plt.xlim([E.min(), E.max()]); plt.ylim([0, None]);
plt.xlabel('Energy [eV]'); plt.ylabel('Transmisson'); plt.legend(bbox_to_anchor=(1, 1), loc=2);
```
- Make a contour plot of the transmission vs. $\Phi$ and $E$
```python
T = np.stack([tbt.transmission() for tbt in tbts])
T[T < 1e-9] = 0 # small numbers are difficult to interpolate (they tend to blow up), so remove them
plt.contourf(rec_phis, E, T.T, 20); plt.colorbar(label='Transmission');
plt.ylabel('Energy [eV]'); plt.xlabel(r'$1/\Phi$');
```
- **TIME** Choose a given magnetic field and create a different set of constriction widths and plot $T(E, \Phi)$ for different widths, fix $\Phi$ at a fairly large value (copy codes in `In [4-7]` and adapt), you may decide the Gaussian profile and change if you want.
```python
```
```python
```
## Learned lessons
- Advanced construction of geometries by removing subsets of atoms (both for Hamiltonian and geometries)
- Creation of $\delta\mathbf H$ terms. Note that *exactly* the same method is used for the $\delta\boldsymbol\Sigma$ terms; the only difference is how it is specified in the fdf-file (`TBT.dH` vs. `TBT.dSE`). Please look in the manual and figure out what the difference between the two methods is.
- Supplying fdf-flags to TBtrans on the command line to override flags in the input files.
- Inform TBtrans to store the self-energies on disk to re-use them in later calculations.
- Inform TBtrans to make all output into a sub-folder (and if it does not exist, create it)
- Adding complex valued Hamiltonians, we have not uncovered everything as the $\delta$ files may contain $k$-resolved and/or $E$-resolved $\delta$-terms for full control, but the principles are the same. Search the documentation for `deltancSileTBtrans`.
```python
```
| 2f5b5d7002d8974f9ef69a139a3ffa96a95d3337 | 12,858 | ipynb | Jupyter Notebook | ts-tbt-sisl-tutorial-master/TB_07/run.ipynb | rwiuff/QuantumTransport | 5367ca2130b7cf82fefd4e2e7c1565e25ba68093 | [
"MIT"
]
| 1 | 2021-09-25T14:05:45.000Z | 2021-09-25T14:05:45.000Z | ts-tbt-sisl-tutorial-master/TB_07/run.ipynb | rwiuff/QuantumTransport | 5367ca2130b7cf82fefd4e2e7c1565e25ba68093 | [
"MIT"
]
| 1 | 2020-03-31T03:17:38.000Z | 2020-03-31T03:17:38.000Z | ts-tbt-sisl-tutorial-master/TB_07/run.ipynb | rwiuff/QuantumTransport | 5367ca2130b7cf82fefd4e2e7c1565e25ba68093 | [
"MIT"
]
| 2 | 2020-01-27T10:27:51.000Z | 2020-06-17T10:18:18.000Z | 42.86 | 457 | 0.611837 | true | 2,576 | Qwen/Qwen-72B | 1. YES
2. YES | 0.859664 | 0.805632 | 0.692573 | __label__eng_Latn | 0.993811 | 0.44741 |
# Why You Should Hedge Beta and Sector Exposures (Part II)
by Jonathan Larkin and Maxwell Margenot
Part of the Quantopian Lecture Series:
* [www.quantopian.com/lectures](https://www.quantopian.com/lectures)
* [github.com/quantopian/research_public](https://github.com/quantopian/research_public)
Notebook released under the Creative Commons Attribution 4.0 License.
---
In the first lecture on [Why You Should Hedge Beta and Sector Exposure](quantopian.com/lectures/why-you-should-hedge-beta-and-sector-exposures-part-i), we covered the information coefficient (IC) and effective breadth, providing yet more reasons to make as many independent bets as possible. Here we expand upon the concepts detailed there by decomposing portfolios of varying numbers of securities to further explore the effects of systematic risk.
```python
import numpy as np
import matplotlib.pyplot as plt
```
## Hedging Beta and Sector Risk is Good for Allocators (Which is Good for You!)
Let's work from two basic beliefs:
- You would like someone to fund your algorithm
- The institution that funds your algorithm is not going to allocate 100% of its money to you. In other words, your algorithm is one in a portfolio of algorithms.
The implication of the second belief is subtle. Why should it matter that your high Sharpe algo is part of a portfolio? The key to understanding the importance of this and what it has to do with beta and sector exposure is the following mathematical result:
**In a portfolio, stock specific risk can be diversified out while common factor risk cannot.**
<div class="alert alert-warning">
<b>TL;DR:</b> Beta and sector exposure are **common factors**, i.e., they are among a handful of risk characteristics that are shared among all stocks. Risk exposure to common factors does not diversify away in a portfolio of algos. An allocator will not be able to make a large allocation to you if your algo presents common factor risk. The combination of many algos with modest common factor risk can lead to overwhelming common factor risk at the portfolio level. Allocators do not like this. If you want to get a large capital allocation, you must have low beta and sector exposure consistently over time.
</div>
# Foundations
### Single Stock Risk Decomposition
To build intuition, let's posit a single factor model:
$$r_i = \alpha_i + \beta_i r_m + \epsilon_i$$
where $\alpha_i$ is the intercept, $\epsilon_i$ is the error, and $r_m$ is the market return. This is the [Capital Asset Pricing Model (CAPM)](https://www.quantopian.com/lectures/the-capital-asset-pricing-model-and-arbitrage-pricing-theory), which posits that the returns to a stock can be attributable to its beta-weighted exposure to the market and a return which is idiosyncratic to that stock. Two important assumptions here are that the $\epsilon_i$s are uncorrelated to the market and each other across stocks. See the [Lecture on Beta Hedging](https://www.quantopian.com/lectures/beta-hedging) for more background.
In this case, the "risk", as measured by the variance, for an individual stock is:
$$\sigma_i^2 = \beta_i^2 \sigma_m^2 + \sigma_{\epsilon_i}^2$$
A stock's variance is broken into **common risk**, $\beta_i^2\sigma_m^2$, and **specific risk**, $\sigma_{\epsilon_i}^2$. **Common risk** is risk in the stock driven by market risk which is common among all stocks proportionate to the stock's beta. **Specific risk** is the risk that is unique to that individual stock.
Let's look at two examples and decompose the risk into the percent due to common factor risk.
```python
def stock_risk(beta, market_vol, idio_vol):
common_risk = (beta**2)*(market_vol**2)
specific_risk = idio_vol**2
total_risk = common_risk + specific_risk
return total_risk, common_risk/total_risk
```
We take two separate stocks, each with different market beta exposures and idiosyncratic volatility.
```python
# Betas
b1 = 1.2
b2 = 1.1
# Market volatility
market_vol = 0.15
# Idiosyncratic volatilities
idio_vol_1 = 0.10
idio_vol_2 = 0.07
```
```python
total_1, pct_common_1 = stock_risk(b1, market_vol, idio_vol_1)
total_2, pct_common_2 = stock_risk(b2, market_vol, idio_vol_2)
print "Stock 1 risk (annualized standard deviation): %0.4f " % np.sqrt(total_1)
print "Stock 1: percent of total risk due to common risk: %0.4f " % pct_common_1
print "\nStock 2 risk (annualized standard deviation): %0.4f " % np.sqrt(total_2)
print "Stock 2: percent of total risk due to common risk: %0.4f " % pct_common_2
```
Stock 1 risk (annualized standard deviation): 0.2059
Stock 1: percent of total risk due to common risk: 0.7642
Stock 2 risk (annualized standard deviation): 0.1792
Stock 2: percent of total risk due to common risk: 0.8475
This is just looking at the breakdown of the risk associated with each individual stock. We can combine these into a portfolio to see how their combined volatility is affected by common factor risk.
### Two Stock Portfolio Risk Decomposition
Now let's imagine you have a two stock portfolio with percentage weights $w_1$ and $w_2$. The risk of the portfolio (derived below), $\Pi$, under the one-factor model is then:
$$\sigma_{\Pi}^2 = \overbrace{\sigma_m^2\left( w_1^2\beta_1^2 + w_2^2\beta_2^2 + 2w_1w_2\beta_1\beta_2 \right)}^{\text{common risk}} + \overbrace{w_1^2\epsilon_1^2 + w_2^2 \epsilon_2^2}^{\text{specific risk}}$$
This is the simplest possible example of portfolio factor risk, one factor and two assets, yet we can already use it to gain intuition about portfolio risk and hedging.
```python
# The weights for each security in our portfolio
w1 = 0.5
w2 = 0.5
```
```python
def two_stocks_one_factor(w1, w2, b1, b2, market_vol, idio_vol_1, idio_vol_2):
common_risk = (market_vol**2)*(w1*w1*b1*b1 + w2*w2*b2*b2 + 2*w1*w2*b1*b2)
specific_risk = w1*w1*idio_vol_1**2 + w2*w2*idio_vol_2**2
total_risk = common_risk + specific_risk
return total_risk, common_risk/total_risk
```
The risk for a two stock, equally-weighted, long-only portfolio:
```python
total, pct_common = two_stocks_one_factor(w1, w2, b1, b2, market_vol, idio_vol_1, idio_vol_2)
print "Portfolio risk (annualized standard deviation): %0.4f " % np.sqrt(total)
print "Percent of total risk due to common risk: %0.4f" % pct_common
```
Portfolio risk (annualized standard deviation): 0.1830
Percent of total risk due to common risk: 0.8887
The astute reader will notice that the proportion of risk due to common factor risk is **larger for the portfolio** than the weighted average of the common risk proportions of the two components. To repeat the key point in this lecture: **In a portfolio, stock specific risk diversifies while common factor risk does not.**
The risk for a two stock, beta-hedged long-short portfolio:
```python
w2 = -w1*b1/b2 # set weight 2 such that the portfolio has zero beta
total, pct_common = two_stocks_one_factor(w1, w2, b1, b2, market_vol, idio_vol_1, idio_vol_2)
print "Portfolio risk (annualized standard deviation): %0.4f " % np.sqrt(total)
print "Percent of total risk due to common risk: %0.4f" % pct_common
```
Portfolio risk (annualized standard deviation): 0.0629
Percent of total risk due to common risk: 0.0000
Note that we eliminated **all** the common risk with a perfect beta hedge.
# Portfolio Risk
If $X$ is a column vector of n random variables, $X_1,\dots,X_n$, and $c$ is a column vector of coefficients (constants), then the [variance of the weighted sum](https://en.wikipedia.org/wiki/Variance) $c'X$ is
$$\text{Var}(c'X) = c'\Sigma c$$
where $\Sigma$ is the covariance matrix of the $X$'s.
In our application, $c$ is our stock weight vector $w$ and $\Sigma$ is the covariance matrix of stock returns.
$$\sigma_{\Pi}^2 = w' \Sigma w$$
Just as we decompose the single stock risk above, we can decompose the covariance matrix to separate *common risk* and *specific risk*
$$\Sigma = BFB' + D$$
Thus
$$\sigma_{\Pi}^2 = w'(BFB' + D)w$$
$$\sigma_{\Pi}^2 = w'BFB'w + w'Dw$$
Which for the two stock portfolio above works out to
\begin{equation}
\sigma_{\Pi}^2 =
\overbrace{
\begin{bmatrix} w_1 & w_2 \end{bmatrix}
\begin{bmatrix} \beta_{1} \\ \beta_{2} \end{bmatrix}
\sigma_m^2
\begin{bmatrix} \beta_{1} & \beta_{2} \end{bmatrix}
\begin{bmatrix} w_1 \\ w_2 \end{bmatrix}
}^{\text{common risk}}
+ \overbrace{\begin{bmatrix} w_1 & w_2 \end{bmatrix}
\begin{bmatrix} \epsilon_1^2 & 0\\ 0 & \epsilon_2^2 \end{bmatrix}
\begin{bmatrix} w_1 \\ w_2 \end{bmatrix}}^{\text{specific risk}}
\end{equation}
If you work through this matrix multiplication, you get the stated result above
$$\sigma_{\Pi}^2 = \overbrace{\sigma_m^2\left( w_1^2\beta_1^2 + w_2^2\beta_2^2 + 2w_1w_2\beta_1\beta_2 \right)}^{\text{common risk}} + \overbrace{w_1^2\epsilon_1^2 + w_2^2 \epsilon_2^2}^{\text{specific risk}}$$
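As a quick numerical sanity check of this identity, here is a sketch (reusing the betas and volatilities defined above, with equal weights) showing that the matrix form and the expanded formula agree:
```python
# Sanity check: w'BFB'w + w'Dw matches the expanded two-stock formula
w = np.array([0.5, 0.5])
B = np.array([b1, b2])
F = market_vol**2                          # one factor, so F is a scalar
D = np.diag([idio_vol_1**2, idio_vol_2**2])

common = np.dot(w, B) * F * np.dot(B, w)   # w'BFB'w
specific = np.dot(w, np.dot(D, w))         # w'Dw

total_formula, _ = two_stocks_one_factor(0.5, 0.5, b1, b2, market_vol, idio_vol_1, idio_vol_2)
print np.isclose(common + specific, total_formula)   # should print True
```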
### Multi-Factor Models
Of course, we can expand the CAPM to include *additional* risk factors besides market beta. We could posit that there are in total $m$ risks which are *common* to all stocks.
$$r_i = \alpha_i + \beta_{1,i} f_1 + \dots + \beta_{m,i} f_m + \epsilon_i$$
or more concisely
$$r_i = \alpha_i + \sum_{j=1}^m \beta_{j,i} f_j + \epsilon_i$$
or, considering all stocks, $i$, from 1 to N, even more concisely, for a given period $t$,
$$r = \alpha + Bf + \epsilon$$
where $r$ is the Nx1 column vector of returns, $B$ is the Nx$m$ matrix of factor betas, $f$ is the Nx1 column of factor returns, and $\epsilon$ is the Nx1 column vector of idiosyncratic returns. Finally,
$$\sigma_{\Pi}^2 = w'BFB'w + w'Dw$$
where $B$ is the Nx$m$ matrix of factor betas, $F$ is the $m$x$m$ covariance matrix of factor returns, and $D$ is a NxN matrix with the $\epsilon_i$'s on diagonal, and zeros everywhere else.
With this result, *assuming we had a suitable risk model giving us the matrices $B$, $F$, and $D$*, we could calculate our portfolio risk and the proportion of risk coming from common risk.
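To make this concrete, here is a minimal sketch of the decomposition, assuming we are handed matrices $B$, $F$ and $D$ from some risk model (the numbers below are made up purely for illustration):
```python
def portfolio_common_specific(w, B, F, D):
    """Total risk and fraction of variance from common factors, given weights w,
    factor loadings B (N x m), factor covariance F (m x m) and diagonal D (N x N)."""
    common = np.dot(w, np.dot(B, np.dot(F, np.dot(B.T, w))))   # w'BFB'w
    specific = np.dot(w, np.dot(D, w))                         # w'Dw
    total_var = common + specific
    return np.sqrt(total_var), common / total_var

# toy example: 3 stocks, 2 factors (all numbers made up for illustration)
w = np.array([0.5, 0.3, 0.2])
B = np.array([[1.1, 0.2], [0.9, -0.1], [1.0, 0.5]])
F = np.array([[0.15**2, 0.0], [0.0, 0.05**2]])
D = np.diag([0.08**2, 0.06**2, 0.10**2])
print portfolio_common_specific(w, B, F, D)
```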
Likewise, just as we set $w_2$ above in the two stock case to the value that neutralized the exposure to the single factor $\beta$, in the multi-factor case we could use the factor betas matrix $B$ to construct a portfolio which is neutral to **all** common factors. **A portfolio which is neutral to all common factors has zero common factor risk.**
# Portfolios of Algos
Even without a risk model, we can get some intuition as to how the risk of a portfolio of algos looks.
What does a resulting portfolio of algos look like when the individual algos have non-zero common risk? Taking some inspiration from a recent journal article [The Dangers of Diversification](http://www.iijournals.com/doi/abs/10.3905/jpm.2017.43.2.013?journalCode=jpm) by Garvey, Kahn, and Savi, imagine that each algo has a certain *budget of common risk* it can take. This budget is defined as the percent common risk of total risk in the algo.
In the first case, we assume that all algos have this same budget (and use all the budget!) and the correlation between their common risks is 1.0. This is similar to the case of a single-factor model.
```python
def portfolio_risk_decomposition(budget=0.2, correl=1.0, algo_count=2, algo_total_risk=0.04):
N = algo_count
algo_common_risk = budget*(algo_total_risk**2)
algo_idio_risk = algo_total_risk**2 - algo_common_risk
w = 1./N
covar = correl*algo_common_risk
common_risk = N*w*w*algo_common_risk + (N*N - N)*w*w*covar
idio_risk = algo_idio_risk*w
total_risk = common_risk + idio_risk
return total_risk, common_risk/total_risk
```
```python
a, b = portfolio_risk_decomposition(budget=0.2, algo_count=20, correl=1.0, algo_total_risk=0.04)
print "Portfolio total risk: %.4f " % np.sqrt(a)
print "Portfolio percent of common risk: %.4f " % b
```
Portfolio total risk: 0.0196
Portfolio percent of common risk: 0.8333
```python
algos = np.linspace(1,20)
plt.plot(
algos,
portfolio_risk_decomposition(budget=0.2, correl=1.0, algo_count=algos)[1]
)
plt.plot(
algos,
portfolio_risk_decomposition(budget=0.4, correl=1.0, algo_count=algos)[1]
)
plt.ylim([0,1]);
plt.title('Percent of Portfolio Risk due to Common Risk')
plt.xlabel('Number of Algos in Portfolio')
plt.ylabel('Percent of Portfolio of Algos Risk due to Common Risk')
plt.legend(
['20% Single Algo Common Risk Budget', '40% Single Algo Common Risk Budget']
);
```
From this plot, you can see that from the allocator's perspective, a "small" budget that allows for 20% of individual algo total risk to be driven by common risk leads to a 20-algo portfolio **with 83%** of its risk driven by common risk! Ideally an allocator wants you to have **zero common factor risk**.
<div class="alert alert-warning">
<b>TL;DR:</b> Even if you can't predict portfolio risk and don't have a risk model to decompose risk, you can form a portfolio with **zero common risk** by hedging the beta exposure to common factors. The most important common factors in the US Equity market are market beta and sector beta. Hedge your beta and be sector neutral if you want a large allocation from any allocator.
</div>
*This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. ("Quantopian"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.*
| aadae3f8a16821431515251675caa1567fb2e6be | 72,760 | ipynb | Jupyter Notebook | docs/memo/notebooks/lectures/Why_Hedge_II/notebook.ipynb | hebpmo/zipline | 396469b29e7e0daea4fe1e8a1c18f6c7eeb92780 | [
"Apache-2.0"
]
| 4 | 2018-11-17T20:04:53.000Z | 2021-12-10T14:47:30.000Z | docs/memo/notebooks/lectures/Why_Hedge_II/notebook.ipynb | t330883522/zipline | 396469b29e7e0daea4fe1e8a1c18f6c7eeb92780 | [
"Apache-2.0"
]
| null | null | null | docs/memo/notebooks/lectures/Why_Hedge_II/notebook.ipynb | t330883522/zipline | 396469b29e7e0daea4fe1e8a1c18f6c7eeb92780 | [
"Apache-2.0"
]
| 3 | 2018-11-17T20:04:50.000Z | 2020-03-01T11:11:41.000Z | 148.489796 | 52,902 | 0.866575 | true | 3,935 | Qwen/Qwen-72B | 1. YES
2. YES | 0.682574 | 0.810479 | 0.553212 | __label__eng_Latn | 0.992715 | 0.123626 |
```python
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
import matplotlib.pyplot as plt
from PIL import Image
import sympy as spp
imgx=800
imgy=800
imgz=800
image=Image.new("RGB",(imgx,imgy))
image.putpixel((100,100),(255,255,255))
xa=-2
xb=2
ya=-2
yb=2
maxit=30
h=1e-6
eps=1e-3
@interact(k=[2,3,4,5,6,7,8,9],R=[1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100],G=[1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100],B=[1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100])
def Q(k,R,G,B):
    # Newton fractal for f(z) = z**k - 2k: each pixel is coloured by the number
    # of Newton iterations needed to converge; R, G, B scale the colour channels.
    k, R, G, B = int(k), int(R), int(G), int(B)
    def f(z):
        return z**k-k-k
    for y in range(imgy):
        zy=y*(yb-ya)/(imgy-1)+ya
        for x in range(imgx):
            zx=x*(xb-xa)/(imgx-1)+xa
            z=complex(zx,zy)
            for i in range(maxit):
                # numerical derivative and one Newton step
                dz=(f(z+complex(h,h))-f(z))/complex(h,h)
                z0=z-f(z)/dz
                if abs(z0-z)<eps:
                    break
                z=z0
            # colour the pixel by the iteration count
            r=i*R
            g=i*G
            b=i*B
            image.putpixel((x,y),(r,g,b))
    return image
```
| e3da9aac0a1d01a8ead27e251384608b8ffc88af | 2,156 | ipynb | Jupyter Notebook | Codigo interactivo.ipynb | mruizm4/Galeria-de-Fractales | aaea6cce6f7b227ab2c5fca1dbcb43ffe58fe64e | [
"Apache-2.0"
]
| null | null | null | Codigo interactivo.ipynb | mruizm4/Galeria-de-Fractales | aaea6cce6f7b227ab2c5fca1dbcb43ffe58fe64e | [
"Apache-2.0"
]
| 4 | 2021-06-08T21:45:27.000Z | 2022-03-12T00:34:40.000Z | Codigointeractivo.ipynb | mruizm4/Galeria-de-Fractales | aaea6cce6f7b227ab2c5fca1dbcb43ffe58fe64e | [
"Apache-2.0"
]
| null | null | null | 28.368421 | 237 | 0.451763 | true | 463 | Qwen/Qwen-72B | 1. YES
2. YES | 0.92079 | 0.699254 | 0.643866 | __label__yue_Hant | 0.209582 | 0.334248 |
```python
import mahotas as mh
import numpy as np
from sklearn.datasets import load_digits
import matplotlib.pyplot as plt
from tpot import TPOT
from sklearn.cross_validation import train_test_split
import dautil as dl
```
```python
context = dl.nb.Context('extracting_texture')
lr = dl.nb.LatexRenderer(chapter=11, start=5, context=context)
lr.render(r' C_{\Delta x, \Delta y}(i,j)=\sum_{p=1}^n\sum_{q=1}^m\begin{cases} 1, & \text{if }I(p,q)=i\text{ and }I(p+\Delta x,q+\Delta y)=j \\ 0, & \text{otherwise}\end{cases}')
lr.render(r'''\begin{align}
Angular \text{ } 2nd \text{ } Moment &= \sum_{i} \sum_{j} p[i,j]^{2}\\
Contrast &= \sum_{n=0}^{Ng-1} n^{2} \left \{ \sum_{i=1}^{Ng} \sum_{j=1}^{Ng} p[i,j] \right \} \text{, where } |i-j|=n\\
Correlation &= \frac{\sum_{i=1}^{Ng} \sum_{j=1}^{Ng}(ij)p[i,j] - \mu_x \mu_y}{\sigma_x \sigma_y} \\
Entropy &= -\sum_{i}\sum_{j} p[i,j] log(p[i,j])\\
\end{align}''')
```
```python
digits = load_digits()
# Compute Haralick texture features for every image and stack them
# next to the raw 8x8 pixel features.
haralick = np.array([mh.features.haralick(img.astype(np.uint8)).ravel()
                     for img in digits.images])
X = np.hstack([digits.data, haralick])
X_train, X_test, y_train, y_test = train_test_split(
    X, digits.target, train_size=0.75)
```
```python
tpot = TPOT(generations=6, population_size=101,
random_state=46, verbosity=2)
tpot.fit(X_train, y_train)
```
```python
%matplotlib inline
context = dl.nb.Context('extracting_texture')
dl.nb.RcWidget(context)
```
```python
print('Score {:.2f}'.format(tpot.score(X_train, y_train, X_test, y_test)))
dl.plotting.img_show(plt.gca(), digits.images[0])
plt.title('Original Image')
plt.figure()
dl.plotting.img_show(plt.gca(), digits.data[0].reshape((8, 8)))
plt.title('Core Features')
plt.figure()
dl.plotting.img_show(plt.gca(), mh.features.haralick(
digits.images[0].astype(np.uint8)))
plt.title('Haralick Features')
```
```python
```
| cbd4648121a23411022c9adebf78d5c3dec92cf0 | 3,626 | ipynb | Jupyter Notebook | Module2/Python_Data_Analysis_code/Chapter 11/extracting_texture.ipynb | vijaysharmapc/Python-End-to-end-Data-Analysis | a00f2d5d1547993e000b2551ec6a1360240885ba | [
"MIT"
]
| 119 | 2016-08-24T20:12:01.000Z | 2022-03-23T03:59:30.000Z | Module2/Python_Data_Analysis_code/Chapter 11/extracting_texture.ipynb | vijaysharmapc/Python-End-to-end-Data-Analysis | a00f2d5d1547993e000b2551ec6a1360240885ba | [
"MIT"
]
| 3 | 2016-10-18T03:49:11.000Z | 2020-11-03T12:41:29.000Z | Module2/Python_Data_Analysis_code/Chapter 11/extracting_texture.ipynb | vijaysharmapc/Python-End-to-end-Data-Analysis | a00f2d5d1547993e000b2551ec6a1360240885ba | [
"MIT"
]
| 110 | 2016-08-19T01:57:35.000Z | 2022-02-18T17:02:17.000Z | 26.467153 | 208 | 0.538334 | true | 629 | Qwen/Qwen-72B | 1. YES
2. YES | 0.888759 | 0.79053 | 0.702591 | __label__eng_Latn | 0.155757 | 0.470685 |
<a href="https://colab.research.google.com/github/lmcanavals/algorithmic_complexity/blob/main/03_01_divide_and_conquer.ipynb" target="_parent"></a>
# Divide and Conquer
## Master Theorem
$$ T(n) = aT(\frac{n}{b}) + O(n^k) $$
So...
$$
T(n) = \begin{equation}
\left\{
\begin{aligned}
O(n^k) && a < b^k\\
O(n^k\,log\,n) && a = b^k\\
O(n^{log_b\,a}) && a > b^k
\end{aligned}
\right.
\end{equation}
$$
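As a quick worked example of reading off the case: merge sort satisfies $T(n) = 2T(\frac{n}{2}) + O(n)$, so $a = 2$, $b = 2$, $k = 1$. Since $a = b^k$, the middle case applies and $T(n) = O(n\,log\,n)$.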
## Max element in list
```python
import random
a = list(range(30))
random.shuffle(a)
print(a)
```
[13, 17, 9, 10, 5, 22, 15, 24, 14, 21, 23, 1, 11, 0, 12, 27, 7, 8, 20, 6, 25, 19, 29, 2, 4, 16, 3, 18, 26, 28]
### Brute Force
```python
def bfmax(a):
m = a[0]
for i in range(1, len(a)):
if a[i] > m:
m = a[i]
return m
```
```python
res = bfmax(a)
assert res == 29
print(res)
```
29
$ O(n) $
### Divide and conquer
```python
def dcmax(a, i, f):
if i == f:
return a[i]
mid = (i + f) // 2
maxleft = dcmax(a, i, mid)
maxright = dcmax(a, mid + 1, f)
return maxleft if maxleft > maxright else maxright
```
In C/C++:
```c++
expr ? a : b
```
In Python:
```python
a if expr else b
```
```python
res = dcmax(a, 0, len(a) - 1)
assert res == 29
print(res)
```
29
$$ T(n) = 2T(\frac{n}{2}) + O(1) $$
so we have:
$$
a = 2\\
b = 2\\
k = 0
$$
Then using the master theorem:
$$
T(n) = n^{\log_2{2}}\\
T(n) = n\\
T(n) \implies O(n)
$$
```python
bigboy = list(range(10000))
random.shuffle(bigboy)
%timeit bfmax(bigboy)
%timeit dcmax(bigboy, 0, 9999)
```
1000 loops, best of 5: 769 µs per loop
100 loops, best of 5: 4.51 ms per loop
## Sumarize list elements
```python
def _sumarize(a, i, f):
if i == f:
return a[i]
else:
mid = (i + f) // 2
s1 = _sumarize(a, i, mid)
s2 = _sumarize(a, mid + 1, f)
return s1 + s2
def sumarize(a):
return _sumarize(a, 0, len(a) - 1)
```
```python
res = sumarize(a)
assert res == sum(a)
res
```
435
## Matrix Multiplication
for matrices where $n = 2^k,\; k \in \mathbb{N}$
### Classic way
```python
def matmul(a, b):
n = len(a)
c = [[0]*n for _ in range(n)]
for i in range(n):
for j in range(n):
accum = 0
for k in range(n):
accum += a[i][k] * b[k][j]
c[i][j] = accum
return c
```
```python
import numpy as np
```
```python
n = 8
a = np.array(list(range(n*n))).reshape((n, n))
b = np.array(list(range(n*n))).reshape((n, n))
print(a)
print(b)
```
[[ 0 1 2 3 4 5 6 7]
[ 8 9 10 11 12 13 14 15]
[16 17 18 19 20 21 22 23]
[24 25 26 27 28 29 30 31]
[32 33 34 35 36 37 38 39]
[40 41 42 43 44 45 46 47]
[48 49 50 51 52 53 54 55]
[56 57 58 59 60 61 62 63]]
[[ 0 1 2 3 4 5 6 7]
[ 8 9 10 11 12 13 14 15]
[16 17 18 19 20 21 22 23]
[24 25 26 27 28 29 30 31]
[32 33 34 35 36 37 38 39]
[40 41 42 43 44 45 46 47]
[48 49 50 51 52 53 54 55]
[56 57 58 59 60 61 62 63]]
```python
c = matmul(a, b)
np.array(c)
```
array([[ 1120, 1148, 1176, 1204, 1232, 1260, 1288, 1316],
[ 2912, 3004, 3096, 3188, 3280, 3372, 3464, 3556],
[ 4704, 4860, 5016, 5172, 5328, 5484, 5640, 5796],
[ 6496, 6716, 6936, 7156, 7376, 7596, 7816, 8036],
[ 8288, 8572, 8856, 9140, 9424, 9708, 9992, 10276],
[10080, 10428, 10776, 11124, 11472, 11820, 12168, 12516],
[11872, 12284, 12696, 13108, 13520, 13932, 14344, 14756],
[13664, 14140, 14616, 15092, 15568, 16044, 16520, 16996]])
### Divide and conquer way
```python
def dcmatmul(a, b, c, ri, rf, ci, cf):
n = len(a)
if ri == rf:
accum = 0
for k in range(n):
accum += a[ri][k] * b[k][cf]
c[ri][cf] = accum
else:
rmid = (ri + rf) // 2
cmid = (ci + cf) // 2
dcmatmul(a, b, c, ri, rmid, ci, cmid)
dcmatmul(a, b, c, rmid+1, rf, ci, cmid)
dcmatmul(a, b, c, ri, rmid, cmid +1, cf)
dcmatmul(a, b, c, rmid+1, rf, cmid + 1, cf)
```
```python
n = 8
a = np.array(list(range(n*n))).reshape((n, n))
b = np.array(list(range(n*n))).reshape((n, n))
c = np.zeros((n, n))
dcmatmul(a, b, c, 0, n-1, 0, n-1)
print(c)
```
[[ 1120. 1148. 1176. 1204. 1232. 1260. 1288. 1316.]
[ 2912. 3004. 3096. 3188. 3280. 3372. 3464. 3556.]
[ 4704. 4860. 5016. 5172. 5328. 5484. 5640. 5796.]
[ 6496. 6716. 6936. 7156. 7376. 7596. 7816. 8036.]
[ 8288. 8572. 8856. 9140. 9424. 9708. 9992. 10276.]
[10080. 10428. 10776. 11124. 11472. 11820. 12168. 12516.]
[11872. 12284. 12696. 13108. 13520. 13932. 14344. 14756.]
[13664. 14140. 14616. 15092. 15568. 16044. 16520. 16996.]]
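A quick way to estimate the cost of this divide-and-conquer version: the recursion splits the $n \times n$ output into quadrants until it reaches single entries, and each of the $n^2$ base cases does $O(n)$ work for its inner product, so the total work is $n^2 \cdot O(n) = O(n^3)$, the same order as the classic triple loop.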
```python
import pdb
def dcmatmul(a, b, c, ri, rf, ci, cf):
n = len(a)
#pdb.set_trace()
if ri == rf and ci == cf:
accum = 0
for k in range(n):
accum += a[ri][k] * b[k][cf]
c[ri][cf] = accum
elif ri == rf:
cmid = (ci + cf) // 2
dcmatmul(a, b, c, ri, rf, ci, cmid)
dcmatmul(a, b, c, ri, rf, cmid + 1, cf)
elif ci == cf:
rmid = (ri + rf) // 2
dcmatmul(a, b, c, ri, rmid, ci, cf)
dcmatmul(a, b, c, rmid+1, rf, ci, cf)
else:
rmid = (ri + rf) // 2
cmid = (ci + cf) // 2
dcmatmul(a, b, c, ri, rmid, ci, cmid)
dcmatmul(a, b, c, rmid+1, rf, ci, cmid)
dcmatmul(a, b, c, ri, rmid, cmid +1, cf)
dcmatmul(a, b, c, rmid+1, rf, cmid + 1, cf)
```
```python
a = np.array(list(range(9))).reshape(3, 3)
b = np.array(list(range(9))).reshape(3, 3)
c = matmul(a, b)
np.array(c)
```
array([[ 15, 18, 21],
[ 42, 54, 66],
[ 69, 90, 111]])
```python
c = np.zeros((3, 3))
dcmatmul(a, b, c, 0, 2, 0, 2)
c
```
array([[ 15., 18., 21.],
[ 42., 54., 66.],
[ 0., 90., 111.]])
```python
```
| 927ca24efe4c87cbb81abd6a3874ba14b0a51858 | 16,550 | ipynb | Jupyter Notebook | 03_01_divide_and_conquer.ipynb | ronaldo91929-glic/Complejidad-Algoritmica-Curso | f690f0b4c00e7a5fd9b3316e4afd8fa7ac9bb421 | [
"CC0-1.0"
]
| 17 | 2021-04-11T04:25:02.000Z | 2022-03-28T22:20:58.000Z | 03_01_divide_and_conquer.ipynb | ronaldo91929-glic/Complejidad-Algoritmica-Curso | f690f0b4c00e7a5fd9b3316e4afd8fa7ac9bb421 | [
"CC0-1.0"
]
| 7 | 2021-10-01T22:27:37.000Z | 2021-11-27T01:32:03.000Z | 03_01_divide_and_conquer.ipynb | ronaldo91929-glic/Complejidad-Algoritmica-Curso | f690f0b4c00e7a5fd9b3316e4afd8fa7ac9bb421 | [
"CC0-1.0"
]
| 30 | 2021-04-12T15:43:14.000Z | 2022-03-15T18:07:19.000Z | 25.658915 | 254 | 0.379517 | true | 2,719 | Qwen/Qwen-72B | 1. YES
2. YES | 0.941654 | 0.857768 | 0.807721 | __label__krc_Cyrl | 0.102959 | 0.714939 |
<a href="https://colab.research.google.com/github/G750cloud/20MA573/blob/master/Hw2(2).ipynb" target="_parent"></a>
```
Problem 1
```
$f(x+h)=f(x)+f'(x)h+\frac{1}{2}f''(x)h^2+\frac{1}{6}f'''(x)h^3+\frac{1}{24}f''''(x)h^4+O(h^5)$ (1)
$f(x-h)=f(x)-f'(x)h+\frac{1}{2}f''(x)h^2-\frac{1}{6}f'''(x)h^3+\frac{1}{24}f''''(x)h^4-O(h^5)$ (2)
$f(x+2h)=f(x)+f'(x)2h+f''(x)2h^2+\frac{4}{3}f'''(x)h^3+\frac{2}{3}f''''(x)h^4+O(h^5)$ (3)
$f(x-2h)=f(x)-f'(x)2h+f''(x)2h^2-\frac{4}{3}f'''(x)h^3+\frac{2}{3}f''''(x)h^4-O(h^5)$ (4)
So let us form $\frac{2}{3}\big((1)-(2)\big)-\frac{1}{12}\big((3)-(4)\big)$; we get:
$O(h^4)=\left|f'(x)-\frac{\frac{2}{3}(f(x+h)-f(x-h))-\frac{1}{12}(f(x+2h)-f(x-2h))}{h}\right|$
So we obtain a (fourth-order) finite difference approximation of $f'(x)$.
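As a quick numerical check of this approximation (a sketch; it uses $f=\sin$ at $x=1$, chosen only for illustration):
```python
import numpy as np

def d1_five_point(f, x, h):
    # five-point central difference derived above
    return (2/3*(f(x + h) - f(x - h)) - 1/12*(f(x + 2*h) - f(x - 2*h))) / h

x = 1.0
for h in [0.1, 0.05, 0.025]:
    err = abs(d1_five_point(np.sin, x, h) - np.cos(x))
    print(h, err)   # the error should shrink by roughly a factor of 16 each time h is halved
```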
```
Problem 2
```
(1)
Since $f(x)$ is an even function, $f(x)=f(-x)$ for all $x$. Differentiating both sides gives $f'(x)=-f'(-x)$, and evaluating at $x=0$ gives $f'(0)=-f'(0)$.
That is, $f'(0)=0$.
(2)
Expand $f(x)$ at $0$ by Taylor's theorem:
$f(0+h)=f(0)+f'(0)h+\frac{1}{2}f''(0)h^2+\frac{1}{6}f'''(0)h^3+O(h^4)$
That is: $f(h)=\frac{1}{2}f''(0)h^2+O(h^4)$
So $O(h^2)=\left|\frac{2f(h)}{h^2}-f''(0)\right|$
(3)
$f(h)=\frac{1}{2}f''(0)h^2+\frac{1}{24}f''''(0)h^4+O(h^6)$
$f(2h)=f''(0)2h^2+\frac{2h^4}{3}f''''(0)+O(h^6)$
Since we require $f''(0)-\frac{C_1 f(h)+C_2 f(2h)}{h^2}=O(h^2)$
So we can get the system of equations:
\begin{equation}
\left\{
\begin{array}{lr}
 1-\frac{C_1}{2}-2C_2=0 &  \\
 \frac{C_1}{24}+\frac{2C_2}{3}=0 &
\end{array}
\right.
\end{equation}
So $C_1=\frac{8}{3},\; C_2=-\frac{1}{6}$.
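As a quick numerical check of these coefficients (a sketch; it uses $f(x)=\cos x - 1$, an even function with $f(0)=0$ so that it matches the expansion used above, and compares against $f''(0)=-1$):
```python
import numpy as np

C1, C2 = 8/3, -1/6
f = lambda x: np.cos(x) - 1.0      # even test function with f(0) = 0 and f''(0) = -1

for h in [0.1, 0.05, 0.025]:
    approx = (C1*f(h) + C2*f(2*h)) / h**2
    print(h, abs(approx + 1.0))    # approximation error; should shrink rapidly as h decreases
```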
(4)
Since $f(x)$ is an odd function, all even-order derivatives of $f$ vanish at zero (its Taylor expansion at zero contains only odd powers).
So we are able to use the same approach as above to find $f''(x)$ efficiently.
| 5293e7cc631dd10c21f62d53c2490dd59190a744 | 3,967 | ipynb | Jupyter Notebook | Hw2(2).ipynb | G750cloud/20MA573 | 6450c6a69542b9e1de37db2215cedfba0ba68621 | [
"MIT"
]
| null | null | null | Hw2(2).ipynb | G750cloud/20MA573 | 6450c6a69542b9e1de37db2215cedfba0ba68621 | [
"MIT"
]
| null | null | null | Hw2(2).ipynb | G750cloud/20MA573 | 6450c6a69542b9e1de37db2215cedfba0ba68621 | [
"MIT"
]
| null | null | null | 30.282443 | 222 | 0.37333 | true | 866 | Qwen/Qwen-72B | 1. YES
2. YES | 0.822189 | 0.880797 | 0.724182 | __label__eng_Latn | 0.202551 | 0.520849 |
# Biostat823 Assignment2
**Number theory and a Google recruitment puzzle**
Find the first 10-digit prime in the decimal expansion of 17π.
The first 5 digits in the decimal expansion of π are 14159. The first 4-digit prime in the decimal expansion of π are 4159. You are asked to find the first 10-digit prime in the decimal expansion of 17π. First solve sub-problems (divide and conquer):
- Write a function to generate an arbitrary large expansion of a mathematical expression like π. Hint: You can use the standard library `decimal` or the 3rd party library `sympy` to do this
- Write a function to check if a number is prime. Hint: See Sieve of Eratosthenes
- Write a function to generate sliding windows of a specified width from a long iterable (e.g. a string representation of a number)
Write unit tests for each of these three functions. You are encouraged, but not required, to try [test-driven development](https://en.wikipedia.org/wiki/Test-driven_development).
Now use these helper functions to write the function that you need.
Write a unit test for this final function, given that the first 10-digit prime in the expansion e is 7427466391. Finally, solve the given problem.
This assignment can be found in my github blog (named as Biostat 823 Assignment2): https://ashleyhmy.github.io/BIOS823_blog/
```python
import sympy as sym
from sympy import pi
import math
import unittest
```
```python
# Write a function to generate an arbitrarily large expansion of a mathematical expression
def num_expansion(expr, args):
    """
    Generate an arbitrarily large expansion of a mathematical expression like pi or e; returns the numeric expression.
    expr is the mathematical expression to be expanded, e.g. expr = sym.exp(1) (use the sympy package to get the symbolic expression for e)
    args is the number of significant digits required, e.g. args = 5
    Example, to get the expansion of e with 5 significant digits:
>>> num_expansion(expr, 5)
2.7183
"""
num = expr.evalf(args)
return num
```
```python
# Write a function to check if a number is a prime
def is_prime(num):
"""
Take input num to check if num is a prime, if num is a prime return True,
if num is not a prime return False.
Example:
    >>> is_prime(17)
    True
"""
if num<2:
return False
if num==2:
return True
if num>2 and num%2 == 0:
return False
for i in range(3, 1 + math.floor(math.sqrt(num)), 2):
if num%i == 0:
return False
return True
```
```python
# Write a function to generate sliding windows of a sequence
def get_window(seq, digits):
"""
    seq is the input, a list of numbers
    digits is the size of the window
    Example:
    seq = [1,2,3,4,5,6]
    digits = 2
    >>> list(get_window(seq, digits))
[[1,2], [2,3], [3,4], [4,5], [5,6]]
"""
num_of_chunk = int(len(seq)-digits + 1)
for i in range(0, num_of_chunk):
yield seq[i:i+digits]
```
```python
# Use unittest package to test the three functions above (num_expansion, is_prime, get_window)
class TestFunctions(unittest.TestCase):
def test_num_expansion(self):
result = pi.evalf(5)
self.assertEqual(num_expansion(pi, 5), result)
def test_is_prime(self):
self.assertEqual(is_prime(17), True)
self.assertEqual(is_prime(10), False)
def test_window(self):
seq = [1,2,3,4]
self.assertEqual(list(get_window(seq, 3)), [[1,2,3],[2,3,4]])
if __name__ == "__main__":
unittest.main(argv=[''], verbosity =2, exit=False)
```
test_is_prime (__main__.TestFunctions) ... ok
test_num_expansion (__main__.TestFunctions) ... ok
test_window (__main__.TestFunctions) ... ok
----------------------------------------------------------------------
Ran 3 tests in 0.002s
OK
```python
# Now use the three helper functions above to write a function that finds the first prime with the required number of digits
def get_first_prime(expr, args, digits):
"""
    The input expr is the mathematical expression to be expanded
    The input args is the number of significant digits required from the expression
    The input digits is the number of digits in the prime
Example:
>>>get_first_prime(pi, 200, 10)
5926535897
"""
num1 = num_expansion(expr, args)
str1 = str(num1)
list1 = str1.split('.')
lst = [''.join(list1[0:2])]
num = lst[0]
seq = [int(a) for a in str(num)]
    windows = list(get_window(seq, digits))
prime_lst = []
for win in windows:
str1 = ''.join(map(str, win))
num_to_check = int(str1)
if is_prime(num_to_check) == True:
prime_lst.append(num_to_check)
prime = prime_lst[0]
return prime
```
```python
# test get_first_prime() function
class TestFunctions(unittest.TestCase):
'''To check whether the output of get_first_prime() function is equal to 7427466391
'''
def test_get_first_prime(self):
expr = sym.exp(1)
self.assertEqual(get_first_prime(expr, 200, 10), 7427466391)
if __name__ == "__main__":
unittest.main(argv=[''], verbosity =2, exit=False)
```
test_get_first_prime (__main__.TestFunctions) ... ok
----------------------------------------------------------------------
Ran 1 test in 0.025s
OK
```python
# Find the first 10-digit prime in the decimal expansion of 17*pi
get_first_prime(pi*17, 50, 10)
print("The first 10 digit prime of 17\u03C0 is", get_first_prime(pi*17, 50, 10))
```
The first 10 digit prime of 17π is 8649375157
| 989b37ec0f2d29e43074b64b29df83279ff0cbbe | 8,543 | ipynb | Jupyter Notebook | _notebooks/2021-09-17-823-Assignment2.ipynb | AshleyHMY/BIOS823_blog | ced31b0dcee7e4820852ab45077a551e65ccf433 | [
"Apache-2.0"
]
| null | null | null | _notebooks/2021-09-17-823-Assignment2.ipynb | AshleyHMY/BIOS823_blog | ced31b0dcee7e4820852ab45077a551e65ccf433 | [
"Apache-2.0"
]
| null | null | null | _notebooks/2021-09-17-823-Assignment2.ipynb | AshleyHMY/BIOS823_blog | ced31b0dcee7e4820852ab45077a551e65ccf433 | [
"Apache-2.0"
]
| null | null | null | 31.758364 | 259 | 0.539038 | true | 1,421 | Qwen/Qwen-72B | 1. YES
2. YES | 0.868827 | 0.872347 | 0.757919 | __label__eng_Latn | 0.959441 | 0.599231 |
```python
# HIDDEN
from datascience import *
from prob140 import *
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
%matplotlib inline
import math
from scipy import stats
from sympy import *
init_printing()
```
## Beta Densities with Integer Parameters ##
In the previous section we learned how to work with joint densities, but many of the joint density functions seemed to appear out of nowhere. For example, we checked that the function
$$
f(x, y) = 120x(y-x)(1-y), ~~~~ 0 < x < y < 1
$$
is a joint density, but there was no clue where it came from. In this section we will find its origin and go on to develop an important family of densities on the unit interval.
### Order Statistics of IID Uniform $(0, 1)$ Variables ###
Let $U_1, U_2, \ldots, U_n$ be i.i.d. uniform on $(0, 1)$. Imagine each $U_i$ as the position of a dart thrown at the unit interval. The graph below shows the positions of five such darts, each shown as a star.
```python
# NO CODE
np.random.seed(17) #make plot deterministic
plt.plot([0, 1], [0, 0], color='k', lw=2)
y = 1 - np.ones(5)
x = stats.uniform.rvs(size=5)
order_stats = np.sort(x)
plt.scatter(x, y, marker='*', color='r', s=100)
plt.text(0, -0.0007, r'0', size=16)
plt.text(0.98, -0.0007, r'1', size=16)
plt.xlim(0, 1)
plt.yticks([])
plt.xticks([])
plt.title('Five IID Uniform (0, 1) Variables');
```
Based on the graph above, can you tell which star corresponds to $U_1$? You can't, because $U_1$ could be any of the five stars. So also you can't identify any of the five variables $U_1, U_2, U_3, U_4, U_5$.
What you *can* see, however, is the list of $U_i$'s *sorted in increasing order*. You can see the value of the minimum, the second on the sorted list, the third, the fourth, and finally the fifth which is the maximum.
These are called the *order statistics* of $U_1, U_2, U_3, U_4, U_5$, and are denoted $U_{(1)}, U_{(2)}, U_{(3)}, U_{(4)}, U_{(5)}$.
Remember that because the $U_i$'s are independent random variables with densities, there can't be ties: the chance that two of them are equal is 0.
```python
# NO CODE
plt.plot([0, 1], [0, 0], color='k', lw=2)
order_stats = np.sort(x)
plt.scatter(x, y, marker='*', color='r', s=100)
u_labels = make_array('$U_{(1)}$', '$U_{(2)}$', '$U_{(3)}$', '$U_{(4)}$', '$U_{(5)}$')
for i in range(5):
plt.text(order_stats[i], -0.0007, u_labels[i], size=16)
plt.text(0, -0.0007, r'0', size=16)
plt.text(0.98, -0.0007, r'1', size=16)
plt.xlim(0, 1)
plt.yticks([])
plt.xticks([])
plt.title('Order Statistics of the Five IID Uniform (0, 1) Variables');
```
In general for $1 \le k \le n$, the *$k$th order statistic* of $U_1, U_2, \ldots, U_n$ is the $k$th value when the $U_i$'s are sorted in increasing order. This can also be thought of as the $k$th *ranked* value when the minimum has rank 1. It is denoted $U_{(k)}$.
### Joint Density of Two Order Statistics ###
Let $n = 5$ as above and let's try to work out the joint density of $U_{(2)}$ and $U_{(4)}$. That's the joint density of the second and fourth values on the sorted list.
The graph below shows the event $\{U_{(2)} \in dx, U_{(4)} \in dy\}$ for values $x$ and $y$ such that $0 < x < y < 1$.
```python
# NO CODE
plt.plot([0, 1], [0, 0], color='k', lw=2)
y = 1 - np.ones(5)
x = make_array(0.1, 0.3, 0.45, 0.7, 0.9)
plt.scatter(x, y, marker='*', color='r', s=100)
plt.plot([0.28, 0.32], [0, 0], color='gold', lw=2)
plt.text(0.28, -0.0007, r'$dx$', size=16)
plt.plot([0.68, 0.72], [0, 0], color='gold', lw=2)
plt.text(0.68, -0.0007, r'$dy$', size=16)
plt.text(0, -0.0007, r'0', size=16)
plt.text(0.98, -0.0007, r'1', size=16)
plt.xlim(0, 1)
plt.yticks([])
plt.xticks([])
plt.title('$n = 5$; $\{ U_{(2)} \in dx, U_{(4)} \in dy \}$');
```
To find $P(U_{(2)} \in dx, U_{(4)} \in dy)$, notice that for this event to occur:
- one of $U_1, U_2, U_3, U_4, U_5$ must be in $(0, x)$
- one must be in $dx$
- one must be in $(x, y)$
- one must be in $dy$
- one must be in $(y, 1)$
You can think of each of the five independent uniform $(0, 1)$ variables as a multinomial trial. It can land in any of the five intervals above, independently of the others and with the same chance as the others.
The chances are given by
$$
\begin{align*}
&P(U \in (0, x)) = x, ~~ P(U \in dx) \sim 1dx, ~~ P(U \in (x, y)) = (y-x)\\
&P(U \in dy) \sim 1dy, ~~ P(U \in (y, 1)) = 1-y
\end{align*}
$$
where $U$ is any uniform $(0, 1)$ random variable.
Apply the multinomial formula to get
$$
\begin{align*}
P(U_{(2)} \in dx, U_{(4)} \in dy) ~ &\sim ~
\frac{5!}{1!1!1!1!1!} x^1 (1dx)^1 (y-x)^1 (1dy)^1 (1-y)^1 \\
&\sim ~ 120x(y-x)(1-y)dxdy
\end{align*}
$$
and therefore the joint density of $U_{(2)}$ and $U_{(4)}$ is given by
$$
f(x, y) = 120x(y-x)(1-y), ~~~ 0 < x < y < 1
$$
This solves the mystery of how the formula arises.
But it also does much more. The *marginal* densities of the order statistics of i.i.d. uniform $(0, 1)$ variables form a family that is important in data science.
### The Density of $U_{(k)}$ ###
Let $U_{(k)}$ be the $k$th order statistic of $U_1, U_2, \ldots, U_n$. We will find the density of $U_{(k)}$ by following the same general process that we followed to find the joint density above.
The graph below displays the event $\{ U_{(k)} \in dx \}$. For the event to occur,
- One of the variables $U_1, U_2, \ldots, U_n$ has to be in $dx$.
- Of the remaining $n-1$ variables, $k-1$ must have values in $(0, x)$ and the rest in $(x, 1)$.
```python
# NO CODE
plt.plot([0, 1], [0, 0], color='k', lw=2)
plt.scatter(0.4, 0, marker='*', color='r', s=100)
plt.plot([0.38, 0.42], [0, 0], color='gold', lw=2)
plt.text(0.38, -0.0007, r'$dx$', size=16)
plt.text(0.1, 0.001, '$k-1$ stars', size=16)
plt.text(0.1, 0.0005, 'in $(0, x)$', size=16)
plt.text(0.6, 0.001, '$n-k$ stars', size=16)
plt.text(0.6, 0.0005, 'in $(x, 1)$', size=16)
plt.text(0, -0.0007, r'0', size=16)
plt.text(0.98, -0.0007, r'1', size=16)
plt.xlim(0, 1)
plt.yticks([])
plt.xticks([])
plt.title('$\{ U_{(k)} \in dx \}$');
```
Apply the multinomial formula again.
$$
P(U_{(k)} \in dx) ~ \sim ~
\frac{n!}{(k-1)! 1! (n-k)!} x^{k-1} (1dx)^1 (1-x)^{n-k}
$$
Therefore the density of $U_{(k)}$ is given by
$$
f_{U_{(k)}} (x) = \frac{n!}{(k-1)!(n-k)!} x^{k-1}(1-x)^{n-k}, ~~~ 0 < x < 1
$$
For consistency, let's rewrite the exponents slightly so that each ends with $-1$:
$$
f_{U_{(k)}} (x) = \frac{n!}{(k-1)!((n-k+1)-1)!} x^{k-1}(1-x)^{(n-k+1)-1}, ~~~ 0 < x < 1
$$
Because $1 \le k \le n$, we know that $n-k+1$ is a positive integer. Since $n$ is an arbitrary positive integer, so is $n-k+1$.
### Beta Densities ###
We have shown that if $r$ and $s$ are any two positive integers, then the function
$$
f(x) ~ = ~ \frac{(r+s-1)!}{(r-1)!(s-1)!} x^{r-1}(1-x)^{s-1}, ~~~ 0 < x < 1
$$
is a probability density function. This is called the *beta density with parameters $r$ and $s$*.
By the derivation above, **the $k$th order statistic $U_{(k)}$ of $n$ i.i.d. uniform $(0, 1)$ random variables has the beta density with parameters $k$ and $n-k+1$.**
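Here is a quick simulation check of this result (a sketch, using `np`, `plt` and `stats` imported at the top of the notebook): compare the empirical distribution of $U_{(2)}$ for $n = 5$ with the beta $(2, 4)$ density.
```python
# empirical check: 2nd order statistic of 5 i.i.d. uniforms vs. the beta (2, 4) density
samples = np.sort(stats.uniform.rvs(size=(100000, 5)), axis=1)[:, 1]  # U_(2)
plt.hist(samples, bins=50, density=True, alpha=0.5)
x = np.arange(0, 1.01, 0.01)
plt.plot(x, stats.beta.pdf(x, 2, 4), lw=2)
plt.title('$U_{(2)}$ for $n = 5$: empirical vs. beta $(2, 4)$ density');
```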
The shape of the density is determined by the two factors that involve $x$. All the factorials are just parts of the constant that make the density integrate to 1.
Notice that the uniform $(0, 1)$ density is the same as the beta density with parameters $r = 1$ and $s = 1$. The uniform $(0, 1)$ density is a member of the *beta family*.
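Here is a small simulation to see the result in action, assuming `np` and `stats` are available as in the plotting cells of this section: the second order statistic of five i.i.d. uniform $(0, 1)$ variables should behave like a beta $(2, 4)$ random variable.
```python
# Simulation: U_(2) of five i.i.d. uniform (0, 1) variables vs. the beta (2, 4) distribution.
u = np.random.random(size=(100000, 5))
u_2 = np.sort(u, axis=1)[:, 1]                  # second order statistic of each row
u_2.mean(), stats.beta.mean(2, 4)               # the two values should be close
```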
The graph below shows some beta density curves. As you would expect, the beta $(3, 3)$ density is symmetric about 0.5.
```python
# NO CODE
```python
x = np.arange(0, 1.01, 0.01)
for i in np.arange(1, 6, 1):
plt.plot(x, stats.beta.pdf(x, i, 6-i), lw=2)
plt.title('Beta $(i, 6-i)$ densities for $1 \leq i \leq 5$');
```
By choosing the parameters appropriately, you can create beta densities that put much of their mass near a prescribed value. That is one of the reasons beta densities are used to model *random proportions*. For example, if you think that the probability that an email is spam is most likely in the 60% to 90% range, but might be lower, you might model your belief by choosing the density that peaks at around 0.75 in the graph above.
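For example, the curve in the graph above that peaks near 0.75 is the beta $(4, 2)$ density; a short calculation, using the same `stats` module, locates that peak numerically.
```python
# The beta (4, 2) density, one of the curves plotted above, peaks at 0.75.
grid = np.linspace(0, 1, 1001)
grid[np.argmax(stats.beta.pdf(grid, 4, 2))]     # 0.75 (up to floating-point precision)
```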
The calculation below shows you how to get started on the process of picking parameters so that the beta density with those parameters has properties that reflect your beliefs.
### The Beta Integral ###
The beta density integrates to 1, and hence for all positive integers $r$ and $s$ we have
$$
\int_0^1 x^{r-1}(1-x)^{s-1}dx ~ = ~ \frac{(r-1)!(s-1)!}{(r+s-1)!}
$$
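As a numerical spot-check with a couple of arbitrarily chosen integer parameters (assuming `scipy` is available, as in the plots above), the two sides of this identity agree:
```python
# Spot-check of the beta integral for a couple of (r, s) pairs chosen for illustration.
from math import factorial
from scipy import integrate

for r, s in [(2, 3), (4, 2)]:
    left, _ = integrate.quad(lambda t: t**(r - 1) * (1 - t)**(s - 1), 0, 1)
    right = factorial(r - 1) * factorial(s - 1) / factorial(r + s - 1)
    print(r, s, left, right)
```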
Thus probability theory makes short work of an otherwise laborious integral. Also, we can now find the expectation of a random variable with a beta density.
Let $X$ have the beta $(r, s)$ density for two positive integer parameters $r$ and $s$. Then
$$
\begin{align*}
E(X) &= \int_0^1 x \frac{(r+s-1)!}{(r-1)!(s-1)!} x^{r-1}(1-x)^{s-1}dx \\ \\
&= \frac{(r+s-1)!}{(r-1)!(s-1)!} \int_0^1 x^r(1-x)^{s-1}dx \\ \\
&= \frac{(r+s-1)!}{(r-1)!(s-1)!} \cdot \frac{r!(s-1)!}{(r+s)!} ~~~~~~~ \text{(beta integral for parameters } r+1 \text{ and } s\text{)}\\ \\
&= \frac{r}{r+s}
\end{align*}
$$
You can follow the same method to find $E(X^2)$ and hence $Var(X)$.
The formula for the expectation allows you to pick parameters corresponding to your belief about the random proportion being modeled by $X$. For example, if you think the proportion is likely to be somewhere around 0.4, you might start by trying out a beta prior with $r = 2$ and $s = 3$.
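Indeed, with $r = 2$ and $s = 3$ the expectation is $2/5 = 0.4$; `scipy`'s beta distribution gives the same value (a quick check using the `stats` module from above).
```python
# The mean of the beta (2, 3) distribution equals r/(r+s) = 0.4.
r, s = 2, 3
stats.beta.mean(r, s), r / (r + s)
```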
You will have noticed that the form of the beta density looks rather like the binomial formula. Indeed, we used the binomial formula to derive the beta density. Later in the course you will see another close relation between the beta and the binomial. These properties make the beta family one of the most widely used families of densities in machine learning.
```python
```
| 5eda1e295bffc6e7d575d3cd635cc58f7ba9738a | 83,130 | ipynb | Jupyter Notebook | notebooks/Chapter_17/04_Beta_Densities_with_Integer_Parameters.ipynb | ifengji/textbook | e8819ed52367fdf124cef66146daeb5f842a2bf0 | [
"MIT"
]
| null | null | null | notebooks/Chapter_17/04_Beta_Densities_with_Integer_Parameters.ipynb | ifengji/textbook | e8819ed52367fdf124cef66146daeb5f842a2bf0 | [
"MIT"
]
| null | null | null | notebooks/Chapter_17/04_Beta_Densities_with_Integer_Parameters.ipynb | ifengji/textbook | e8819ed52367fdf124cef66146daeb5f842a2bf0 | [
"MIT"
]
| null | null | null | 195.140845 | 37,104 | 0.887153 | true | 3,426 | Qwen/Qwen-72B | 1. YES
2. YES | 0.654895 | 0.849971 | 0.556642 | __label__eng_Latn | 0.991438 | 0.131595 |
```python
import numpy
import sympy
from matplotlib import pyplot
%matplotlib inline
```
```python
# Set the font family and size to use for Matplotlib figures.
pyplot.rcParams['font.family'] = 'serif'
pyplot.rcParams['font.size'] = 16
sympy.init_printing()
```
```python
# Set parameters.
nx = 101
L = 25.0
dx = L / (nx - 1)
dt = 0.001
nt = int(8/60/dt)
Vmax = 90.0
𝜌max = 100
x = numpy.linspace(0.0, L, num=nx)
```
```python
# u0 = numpy.ones(nx)
# for i in range(len(u0)):
# if u0[i] == 1:
# u0[i] = 10
# mask = numpy.where(numpy.logical_and(x >= 2.0, x <= 4.2))
# u0[mask] = 50.0
```
```python
def rho(x):
    # Initial condition: density 10 everywhere, with a denser patch of 50
    # on the stretch 2 <= x <= 4.2.
    rho = numpy.full_like(x, 10.0)
    mask = numpy.where(numpy.logical_and(x >= 2.0, x <= 4.2))
    rho[mask] = 50.0
    return rho
```
```python
rho0 = rho(x)
print(rho0)
```
[10. 10. 10. 10. 10. 10. 10. 10. 50. 50. 50. 50. 50. 50. 50. 50. 50. 10.
10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10.
10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10.
10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10.
10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10.
10. 10. 10. 10. 10. 10. 10. 10. 10. 10. 10.]
```python
def flux(rho, Vmax, 𝜌max):
    # Traffic flux F = rho * V(rho), with speed V = Vmax * (1 - rho / rho_max).
    F = rho * Vmax * (1 - rho / 𝜌max)
    return F
```python
def ftbs(rho0, nt, dt, dx, bc_value, *args):
    # Integrate the conservation law with a forward-time, backward-space scheme.
    rho_hist = [rho0.copy()]
    rho = rho0.copy()
    for n in range(nt):
        F = flux(rho, *args)
        rho[1:] = rho[1:] - dt / dx * (F[1:] - F[:-1])
        rho[0] = bc_value  # fixed value at the left (inflow) boundary
        rho_hist.append(rho.copy())
    return rho_hist
```
```python
rho_hist = ftbs(rho0, nt, dt, dx, rho0[0], Vmax, 𝜌max)
print(rho_hist)
```
[array([10., 10., 10., 10., 10., 10., 10., 10., 50., 50., 50., 50., 50.,
50., 50., 50., 50., 10., 10., 10., 10., 10., 10., 10., 10., 10.,
10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.,
10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.,
10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.,
10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.,
10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.,
10., 10., 10., 10., 10., 10., 10., 10., 10., 10.]), array([10. , 10. , 10. , 10. , 10. , 10. , 10. , 10. , 44.24,
50. , 50. , 50. , 50. , 50. , 50. , 50. , 50. , 15.76,
10. , 10. , 10. , 10. , 10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. , 10. , 10. , 10. , 10. ,
10. , 10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 38.59943936, 49.88056064,
50. , 50. , 50. , 50. , 50. ,
50. , 50. , 19.98055936, 11.53944064, 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 33.30734138, 49.41270998,
49.99994864, 50. , 50. , 50. , 50. ,
50. , 50. , 23.2247599 , 13.62041276, 10.43482735,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 28.55046285, 48.41083019,
49.99870697, 50. , 50. , 50. , 50. ,
50. , 50. , 25.80564844, 15.80403194, 11.30577001,
10.12454961, 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 24.44676036, 46.76362433,
49.98961532, 49.99999999, 50. , 50. , 50. ,
50. , 50. , 27.91296837, 17.90642325, 12.48613514,
10.4586588 , 10.03581444, 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 21.03744537, 44.45064618,
49.95190885, 49.99999961, 50. , 50. , 50. ,
50. , 50. , 29.66918145, 19.85820177, 13.84438774,
11.02107819, 10.15684091, 10.01030994, 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 18.29723182, 41.54172291,
49.84105399, 49.99999128, 50. , 50. , 50. ,
50. , 50. , 31.15721331, 21.64087072, 15.27970881,
11.78473916, 10.40207644, 10.05242269, 10.00296888, 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 16.15546766, 38.1810399 ,
49.58359212, 49.99990033, 50. , 50. , 50. ,
50. , 50. , 32.4353955 , 23.25795329, 16.72423907,
12.70240631, 10.78939824, 10.15255087, 10.01720172, 10.00085501,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 14.51909619, 34.56028751,
49.0813402 , 49.99927611, 50. , 50. , 50. ,
50. , 50. , 33.54605069, 24.72179152, 18.1359402 ,
13.72420954, 11.3162971 , 10.33380335, 10.05609956, 10.0055618 ,
10.00024624, 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 13.29111651, 30.88645218,
48.22619337, 49.99623794, 50. , 50. , 50. ,
50. , 50. , 34.52068751, 26.04751087, 19.49072994,
14.80642234, 11.96608233, 10.61092518, 10.13568845, 10.02010545,
10.00177701, 10.00007092, 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 12.38226817, 27.35048028,
46.92234061, 49.98491099, 49.99999995, 50. , 50. ,
50. , 50. , 35.38328032, 27.25031631, 20.77626771,
15.91440279, 12.71484987, 10.98863834, 10.27127928, 10.05332853,
10.00705416, 10.00056226, 10.00002042, 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 11.71660566, 24.10294547,
45.10963709, 49.95081265, 49.99999913, 50. , 50. ,
50. , 50. , 36.1524149 , 28.34435492, 21.98759002,
17.02248786, 13.53692601, 11.46277245, 10.47462496, 10.11584365,
10.02037112, 10.00243175, 10.00017647, 10.00000588, 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 11.23283148, 21.24108641,
42.78136667, 49.86472502, 49.99999042, 50. , 50. ,
50. , 50. , 36.84273511, 29.34231578, 23.12421136,
18.11264416, 14.40826806, 12.02279616, 10.75231948, 10.21841001,
10.04782039, 10.00759681, 10.00082597, 10.00005501, 10.00000169,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.88324756, 18.80814073,
39.99148747, 49.6771997 , 49.99992454, 50. , 50. ,
50. , 50. , 37.46594414, 30.25537045, 24.18825651,
19.17282777, 15.30815257, 12.65458403, 11.10552417, 10.37031012,
10.0967867 , 10.01917318, 10.00277577, 10.00027705, 10.00001705,
10.00000049, 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.63168072, 16.80226308,
36.84954513, 49.31696166, 49.99954942, 50. , 50. ,
50. , 50. , 38.03151334, 31.09326268, 25.18328105,
20.19547418, 16.2197076 , 13.34274496, 11.53068473, 10.57814557,
10.17510151, 10.04149347, 10.00749693, 10.00099665, 10.00009193,
10.00000526, 10.00000014, 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.45119314, 15.1902737 ,
33.50458615, 48.69607714, 49.99786987, 50. , 50. ,
50. , 50. , 38.54719416, 31.86445483, 26.11353842,
21.17625897, 17.12978182, 14.07227126, 12.02076666, 10.84524538,
10.29008527, 10.07986841, 10.01728194, 10.00286853, 10.00035248,
10.00003022, 10.00000161, 10.00000004, 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.32198239, 13.92166582,
30.12196003, 47.72264267, 49.99174911, 49.99999998, 50. ,
50. , 50. , 39.0193945 , 32.57628728, 26.98353259,
22.11314097, 18.02849733, 14.82953288, 12.56660036, 11.1716669 ,
10.44770233, 10.14013089, 10.03528495, 10.00701855, 10.00107708,
10.00012303, 10.00000985, 10.00000049, 10.00000001, 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.22962468, 12.93994984,
26.85852502, 46.31882225, 49.97307848, 49.99999974, 50. ,
50. , 50. , 39.45345981, 33.23513073, 27.79775182,
23.00564985, 18.9086976 , 15.60275806, 13.15807191, 11.55463505,
10.65198361, 10.22806058, 10.06541437, 10.01515497, 10.00278805,
10.00039779, 10.00004245, 10.00000319, 10.00000015, 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.16368259, 12.19030227,
23.84201498, 44.43970579, 49.92429724, 49.99999713, 50. ,
50. , 50. , 39.85388605, 33.84652352, 28.56051615,
23.85436827, 19.76540453, 16.38216227, 13.78503859, 11.98922137,
10.90477672, 10.34880736, 10.11208464, 10.0296151 , 10.00634892,
10.00108616, 10.00014479, 10.0000145 , 10.00000103, 10.00000005,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.11663846, 11.62381008,
21.15881596, 42.08774188, 49.81301713, 49.9999765 , 50. ,
50. , 50. , 40.22448311, 34.41529175, 29.27589615,
24.6605616 , 20.59533666, 17.15986397, 14.43795049, 12.46908647,
11.20579861, 10.5064175 , 10.17986801, 10.05332426, 10.01304657,
10.00260169, 10.0004159 , 10.00005202, 10.0000049 , 10.00000033,
10.00000001, 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.08309556, 11.19918801,
18.85151096, 39.31858563, 49.58776921, 49.99985063, 50. ,
50. , 50. , 40.56850174, 34.94565239, 29.94767541,
25.42591677, 21.39650618, 17.92968908, 15.10821588, 12.98716285,
11.55291274, 10.70352829, 10.27310748, 10.08966263, 10.02463692,
10.00560923, 10.00104538, 10.00015682, 10.00001847, 10.00000165,
10.0000001 , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.05918889, 10.88290551,
16.92452225, 36.23649691, 49.17764757, 49.99923887, 50. ,
50. , 50. , 40.88873311, 35.44130119, 30.57933983,
26.15236021, 22.16789363, 18.68693102, 15.78837021, 13.5362115 ,
11.94253499, 10.9412513 , 10.39555536, 10.14225517, 10.04333757,
10.01108713, 10.00235966, 10.00041272, 10.00005832, 10.00000649,
10.00000055, 10.00000003, 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.04215511, 10.64846879,
15.35434678, 32.98010535, 48.49811965, 49.99680432, 50. ,
50. , 50. , 41.18758778, 35.90548709, 31.17408262,
26.84193258, 22.90919203, 19.42810674, 16.47210857, 14.10923163,
12.37008097, 11.21922605, 10.55008963, 10.21471521, 10.07175975,
10.02036894, 10.00487275, 10.00097342, 10.00016038, 10.00002142,
10.00000226, 10.00000018, 10.00000001, 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.03002083, 10.4753579 ,
14.10074858, 29.70178524, 47.46340344, 49.98868404, 49.99999996,
50. , 50. , 41.46715877, 36.34107516, 31.73481815,
27.4967039 , 23.62060887, 20.15073035, 17.15423242, 14.69973153,
12.83038976, 11.53580149, 10.73853883, 10.31037965, 10.11278349,
10.03515245, 10.00933424, 10.00209634, 10.00039453, 10.00006144,
10.00000778, 10.00000078, 10.00000006, 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.02137808, 10.34791105,
13.11636069, 26.54552582, 46.00330392, 49.96552094, 49.9999995 ,
50. , 50. , 41.72927254, 36.75059981, 32.26420046,
28.11871718, 24.30271506, 20.85311419, 17.8305484 , 15.30188376,
13.31808526, 11.8882942 , 10.96162279, 10.43207272, 10.16939017,
10.05746884, 10.01676575, 10.00418056, 10.00088464, 10.00015737,
10.00002323, 10.0000028 , 10.00000027, 10.00000002, 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.01522284, 10.25430366,
12.35357358, 23.62848005, 44.08040432, 49.90802033, 49.99999522,
50. , 50. , 41.9755303 , 37.13630983, 32.76464359,
28.70995221, 24.95633076, 21.53419999, 18.49774599, 15.91059211,
13.82785821, 12.2732755 , 11.21899678, 10.58192623, 10.24447397,
10.08961078, 10.02847736, 10.00780415, 10.00183381, 10.00036682,
10.00006186, 10.00000868, 10.000001 , 10.00000009, 10.00000001,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.01083949, 10.18568036,
11.76869254, 21.03001317, 41.70290869, 49.78190098, 49.99996476,
50. , 50. , 42.20734191, 37.50020651, 33.23834234,
29.27230326, 25.58244 , 22.19341903, 19.15327084, 16.52149671,
14.35466821, 12.68685034, 11.50937447, 10.76127221, 10.34065628,
10.13402512, 10.0460578 , 10.01375533, 10.00355306, 10.0007893 ,
10.00014969, 10.000024 , 10.00000321, 10.00000035, 10.00000003,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00771814, 10.13544989,
11.3239227 , 18.78947133, 38.92940241, 49.53424202, 49.99979352,
50. , 50. , 42.42595378, 37.84407605, 33.68729233,
29.80756654, 26.18212737, 22.83057838, 19.79520396, 17.13093885,
14.8938744 , 13.12490359, 11.83070009, 10.97061043, 10.46012511,
10.19318179, 10.07133536, 10.02305149, 10.00649068, 10.00158522,
10.0003339 , 10.0000602 , 10.0000092 , 10.00000117, 10.00000012,
10.00000001, 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00549553, 10.09872898,
10.98788646, 16.91120065, 35.86386212, 49.0938137 , 49.99901257,
50. , 50. , 42.63247201, 38.16951717, 34.11330893,
30.31743431, 26.75653144, 23.44576947, 20.422152 , 17.73590257,
15.44130794, 13.58330124, 12.18034188, 11.20964249, 10.60451555,
10.26943364, 10.1063111 , 10.03694084, 10.01125843, 10.00299785,
10.00069427, 10.00013902, 10.00002388, 10.00000348, 10.00000043,
10.00000004, 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00391293, 10.07191273,
10.73528732, 15.37372575, 32.64172442, 48.3773805 , 49.99605635,
50. , 50. , 42.8278817 , 38.47796465, 34.51804479,
30.80349349, 27.30681102, 24.03929546, 21.03315096, 18.33394497,
15.99330031, 14.05804315, 12.55528401, 11.47735755, 10.77484004,
10.36488299, 10.15306974, 10.0568837 , 10.01865051, 10.00537647,
10.00135767, 10.00029893, 10.00005704, 10.00000936, 10.00000131,
10.00000015, 10.00000001, 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00278606, 10.05234735,
10.54616315, 14.13986609, 29.41011553, 47.30214389, 49.98657799,
49.99999994, 50. , 43.01306311, 38.77070951, 34.90300589,
31.26722722, 27.83412089, 24.61161393, 21.62758355, 18.92312348,
16.54668001, 14.54537025, 12.95230099, 11.77215157, 10.97146914,
10.48126857, 10.213677 , 10.08451258, 10.02965127, 10.00919825,
10.00251499, 10.00060384, 10.00012671, 10.00002309, 10.00000363,
10.00000049, 10.00000005, 10.00000001, 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.0019837 , 10.03808353,
10.4050082 , 13.16550436, 26.30689443, 45.80215019, 49.9603763 ,
49.99999929, 50. , 43.18880534, 39.04891635, 35.26956605,
31.71001838, 28.33959481, 25.16329215, 22.20510947, 19.50192514,
17.09874852, 15.0418324 , 13.36810542, 12.09196258, 11.19415731,
10.61988265, 10.29007391, 10.12157328, 10.04542878, 10.01508586,
10.00443948, 10.00115423, 10.00026412, 10.00005293, 10.00000923,
10.00000139, 10.00000018, 10.00000002, 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00141241, 10.02769199,
10.29991919, 12.40596445, 23.44339388, 43.84468148, 49.89694295,
49.99999364, 50. , 43.35581789, 39.31363825, 35.61898007,
32.13315426, 28.82433382, 25.69497221, 22.76560745, 20.06920101,
17.64724373, 15.54432448, 13.7994654 , 12.43440754, 11.44210416,
10.78152343, 10.38397843, 10.16985175, 10.06731261, 10.02381801,
10.00750489, 10.00210032, 10.00052047, 10.00011376, 10.00002182,
10.00000365, 10.00000053, 10.00000007, 10.00000001, 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00100564, 10.02012622,
10.22183882, 11.81993878, 20.89438444, 41.44216611, 49.76058458,
49.9999554 , 50. , 43.51474045, 39.56582967, 35.95239554,
32.53783185, 29.2893986 , 26.20734414, 23.30912739, 20.62410714,
18.19029692, 16.05009833, 14.24329187, 12.79690966, 11.7140395 ,
10.96648269, 10.49680337, 10.23109331, 10.09675634, 10.03633018,
10.01220123, 10.00365665, 10.00097545, 10.00023089, 10.0000483 ,
10.00000888, 10.00000143, 10.0000002 , 10.00000002, 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00071602, 10.01462095,
10.1639213 , 11.37143267, 18.69629565, 38.6561249 , 49.49713946,
49.99974905, 50. , 43.66615138, 39.80635762, 36.27086335,
32.9251635 , 29.7358047 , 26.70112541, 23.83585134, 21.16605258,
18.72638779, 16.55675734, 14.69669874, 13.17680957, 12.00832093,
11.17456522, 10.62959682, 10.30692153, 10.13528684, 10.05370397,
10.01914615, 10.006117 , 10.00174759, 10.00044532, 10.00010088,
10.00002023, 10.00000357, 10.00000055, 10.00000007, 10.00000001,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00050981, 10.0106171 ,
10.12101876, 11.03034364, 16.85221614, 35.59166664, 49.03478918,
49.99883873, 50. , 43.81057487, 40.03601136, 36.5753472 ,
33.29618256, 30.16451995, 27.17704521, 24.34606176, 21.69465444,
19.25430006, 17.06223931, 15.1570401 , 13.57145696, 12.3230341 ,
11.40513337, 10.78300816, 10.39876413, 10.18444439, 10.07714433,
10.02908974, 10.00986821, 10.00300586, 10.00082036, 10.00020008,
10.00004346, 10.00000837, 10.00000142, 10.00000021, 10.00000003,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00036299, 10.0077066 ,
10.0892754 , 10.77222714, 15.34072539, 32.38343483, 48.2907828 ,
49.99548486, 49.99999999, 43.94848722, 40.25551087, 36.86673204,
33.65184893, 30.57646346, 27.6358328 , 24.84011598, 22.20969938,
19.77307977, 17.56479216, 15.62192889, 13.97828178, 12.65608814,
11.65716926, 10.95727955, 10.50779169, 10.2457185 , 10.1079457 ,
10.04291108, 10.01540131, 10.0049819 , 10.00144975, 10.00037872,
10.00008857, 10.00001848, 10.00000342, 10.00000056, 10.00000008,
10.00000001, 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00025845, 10.00559186,
10.06581207, 10.57765515, 14.12553514, 29.17611582, 47.1840638 ,
49.98496781, 49.99999992, 44.08032212, 40.46551421, 37.14583161,
33.99305447, 30.97250568, 28.07820873, 25.31842562, 22.71111114,
20.28199701, 18.06294577, 16.08924153, 14.39484645, 13.00530099,
11.92934696, 11.15226041, 10.63487348, 10.32048467, 10.14744886,
10.06160573, 10.02331835, 10.00798192, 10.00246693, 10.00068717,
10.00017213, 10.00003866, 10.00000776, 10.00000138, 10.00000022,
10.00000003, 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00018401, 10.00405595,
10.04848413, 10.43143002, 13.16381658, 26.10408074, 45.65152704,
49.95642243, 49.99999911, 44.20647543, 40.66662401, 37.41339523,
34.32062804, 31.35346932, 28.50487854, 25.78144027, 23.19892317,
20.7805115 , 18.5554824 , 16.55711231, 14.81888116, 13.36847185,
12.22010768, 11.36744051, 10.78055221, 10.40994737, 10.19699168,
10.08626395, 10.03433341, 10.01239709, 10.00405504, 10.00119972,
10.00032046, 10.0000771 , 10.00001666, 10.00000322, 10.00000055,
10.00000008, 10.00000001, 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00013102, 10.00294089,
10.03569721, 10.32180321, 12.41225422, 23.27487871, 43.66394637,
49.88835609, 49.99999227, 44.32730917, 40.85939314, 37.67011379,
34.63534035, 31.72013077, 28.91652825, 26.22963447, 23.67325587,
21.26824234, 19.04140744, 17.02392089, 15.24830495, 13.74343969,
12.52773276, 11.60199631, 10.94503777, 10.51509323, 10.25785762,
10.11804062, 10.04926686, 10.01871106, 10.00645706, 10.002022 ,
10.00057368, 10.00014719, 10.00003407, 10.00000709, 10.00000132,
10.00000022, 10.00000003, 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00009328, 10.00213168,
10.02626795, 10.2397729 , 11.83077982, 20.75989523, 41.23723486,
49.74387689, 49.9999474 , 44.44315508, 41.04432972, 37.91662523,
34.93790837, 32.07322184, 29.31382123, 26.66349746, 24.13429772,
21.74494153, 19.51992144, 17.48827532, 15.68123561, 14.12812797,
12.85041029, 11.85484546, 11.12821796, 10.63665726, 10.33122568,
10.15811871, 10.06903228, 10.02750365, 10.0099851 , 10.00329916,
10.00099078, 10.00027002, 10.00006665, 10.00001486, 10.00000298,
10.00000054, 10.00000009, 10.00000001, 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00006642, 10.00154464,
10.01931917, 10.17848796, 11.38442918, 18.59303491, 38.43572323,
49.46768325, 49.99971124, 44.55431778, 41.22190154, 38.15351933,
35.22899954, 32.41343181, 29.69739625, 27.08352515, 24.58228965,
22.21047112, 19.99039413, 17.94899271, 16.11599086, 14.52057698,
13.18629321, 12.12470434, 11.32968341, 10.77510359, 10.41812574,
10.20766857, 10.09461632, 10.03944946, 10.01502808, 10.00522439,
10.00165556, 10.00047759, 10.00012522, 10.00002977, 10.0000064 ,
10.00000124, 10.00000022, 10.00000003, 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00004729, 10.00111892,
10.01420144, 10.13276069, 11.04390334, 16.77588144, 35.36612938,
48.98726636, 49.99869114, 44.66107741, 41.39253995, 38.38134206,
35.5092356 , 32.74140955, 30.06786627, 27.49021391, 25.0175123 ,
22.66478361, 20.45234084, 18.40507895, 16.55108328, 14.91896516,
13.53354759, 12.41014479, 11.54876272, 10.9306204 , 10.51940192,
10.2678061 , 10.12705234, 10.05531089, 10.02205665, 10.00804714,
10.00268329, 10.00081684, 10.0002267 , 10.00005726, 10.00001313,
10.00000273, 10.00000051, 10.00000009, 10.00000001, 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00003367, 10.00081029,
10.01043439, 10.09867835, 10.78535385, 15.28643395, 32.16323859,
48.22001802, 49.99499888, 44.76369215, 41.55664333, 38.6005995 ,
35.77919615, 33.05776564, 30.42581791, 27.88405576, 25.4402755 ,
23.10790532, 20.90540122, 18.8557085 , 16.98521086, 15.32162147,
13.89039092, 12.70964702, 11.78456428, 11.10312798, 10.63568625,
10.33955269, 10.16738935, 10.07592533, 10.03162461, 10.01208036,
10.00422787, 10.00135436, 10.00039666, 10.00010606, 10.00002584,
10.00000573, 10.00000115, 10.00000021, 10.00000003, 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00002397, 10.00058663,
10.00766304, 10.07329876, 10.58977666, 14.08850945, 28.9704648 ,
47.08608382, 49.98359296, 44.86240017, 41.7145802 , 38.81176131,
36.0394219 , 33.36307459, 30.77181139, 28.26553475, 25.85090978,
23.5399222 , 21.34932054, 19.30020487, 17.41724473, 15.72703056,
14.25512107, 13.02164662, 12.03602104, 11.29229763, 10.76738341,
10.42379947, 10.21665819, 10.10218685, 10.04436606, 10.01770603,
10.00648893, 10.00218187, 10.00067247, 10.00018975, 10.00004895,
10.00001152, 10.00000247, 10.00000048, 10.00000008, 10.00000001,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00001707, 10.00042459,
10.00562525, 10.0544148 , 10.44226389, 13.13979947, 25.91984711,
45.52458222, 49.95302667, 44.95742095, 41.8666919 , 39.01526392,
36.2904177 , 33.65787696, 31.10638081, 28.63512422, 26.24975946,
23.96096797, 21.78393312, 19.73802243, 17.8462154 , 16.13383234,
14.62613683, 13.34457472, 12.30193533, 11.49757872, 10.91466654,
10.52127827, 10.27583728, 10.1350232 , 10.06098795, 10.02537816,
10.00971848, 10.00342217, 10.00110716, 10.00032877, 10.0000895 ,
10.0000223 , 10.00000507, 10.00000105, 10.0000002 , 10.00000003,
10.00000001, 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00001215, 10.00030722,
10.00412757, 10.04037395, 10.33125685, 12.3976951 , 23.11609288,
43.50921439, 49.8809289 , 45.04895238, 42.01329505, 39.21151334,
36.53265529, 33.94268138, 31.43003471, 28.99328488, 26.63717709,
24.37121412, 22.20914799, 20.16872951, 18.27129828, 16.54081755,
15.00195121, 13.67689074, 12.58102129, 11.71823123, 11.07748319,
10.63254051, 10.34581997, 10.17536938, 10.08225786, 10.03562271,
10.01422649, 10.00523521, 10.00177385, 10.00055294, 10.00015841,
10.00004165, 10.00001004, 10.00000221, 10.00000044, 10.00000008,
10.00000001, 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00000865, 10.00022224,
10.00302737, 10.0299408 , 10.24787174, 11.82286204, 20.62781294,
41.0590034 , 49.72931087, 45.13714768, 42.15468355, 39.4008877 ,
36.76657584, 34.21796657, 31.74325677, 29.34046337, 27.01351898,
24.77086154, 22.62493643, 20.59199319, 18.69179921, 16.94692051,
15.38119872, 14.01710819, 12.87194342, 11.9533611 , 11.25556977,
10.7579449 , 10.42738562, 10.22413934, 10.10898762, 10.04903384,
10.02038476, 10.00782407, 10.00277063, 10.00090455, 10.00027203,
10.00007528, 10.00001914, 10.00000446, 10.00000095, 10.00000019,
10.00000003, 10.00000001, 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00000616, 10.00016073,
10.00221953, 10.02219293, 10.18532559, 11.38100582, 18.48664638,
38.24098117, 49.44178554, 45.2220143 , 42.29112952, 39.58373959,
36.99259224, 34.4841832 , 32.04650671, 29.67709138, 27.37914166,
25.16013356, 23.03132127, 21.00756552, 19.10714047, 17.35120997,
15.76263794, 14.36381376, 13.17335011, 12.20195603, 11.44847269,
10.89765375, 10.52117614, 10.28219756, 10.14201322, 10.06626642,
10.02862853, 10.01144027, 10.00422583, 10.00144196, 10.0004542 ,
10.00013194, 10.00003531, 10.00000869, 10.00000196, 10.00000041,
10.00000008, 10.00000001, 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00000439, 10.00011621,
10.00162661, 10.01644234, 10.13846525, 11.04339211, 16.69263946,
35.16364021, 48.94511902, 45.30307746, 42.42288065, 39.76039805,
37.21109127, 34.74175569, 32.34022117, 30.00358507, 27.73439908,
25.5392702 , 23.42836768, 21.41527137, 19.51684752, 17.75287896,
16.14515043, 14.71568067, 13.48390185, 12.46291998, 11.65557395,
11.05163736, 10.62767872, 10.35033223, 10.18217222, 10.0880247 ,
10.03945539, 10.01638801, 10.00630318, 10.00224366, 10.00073867,
10.00022475, 10.00006314, 10.00001636, 10.0000039 , 10.00000086,
10.00000017, 10.00000003, 10.00000001, 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00000312, 10.00008401,
10.00119163, 10.01217638, 10.1033907 , 10.78662335, 15.22298614,
31.96229451, 48.15670175, 45.37849137, 42.55014661, 39.93117021,
37.42243553, 34.99108389, 32.62481474, 30.32034483, 28.07964043,
25.90852335, 23.81617531, 21.81499761, 19.92053667, 18.15123419,
16.52773692, 15.07147713, 13.80229389, 12.73510501, 11.87611943,
11.21968515, 10.74721575, 10.42923152, 10.23027994, 10.11504761,
10.05342106, 10.02302678, 10.00920679, 10.00341267, 10.00117209,
10.00037276, 10.00010968, 10.00002983, 10.00000749, 10.00000173,
10.00000037, 10.00000007, 10.00000001, 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00000222, 10.00006071,
10.00087264, 10.0090133 , 10.07715893, 10.59204146, 14.04129257,
28.77959977, 46.99764189, 45.4431495 , 42.67305771, 40.09634188,
37.62696519, 35.23254465, 32.90068094, 30.62775518, 28.41520844,
26.26815285, 24.19487165, 22.20668367, 20.31790378, 18.54568536,
16.90951156, 15.43007066, 14.12727388, 13.0173395 , 12.10924776,
11.40142233, 10.87994146, 10.51946425, 10.28710564, 10.14809127,
10.07113213, 10.03177197, 10.01318534, 10.00508112, 10.00181734,
10.00060296, 10.00018545, 10.00005283, 10.00001392, 10.00000339,
10.00000076, 10.00000016, 10.00000003, 10.00000001, 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00000158, 10.00004387,
10.0006388 , 10.00666908, 10.05755412, 10.44499571, 13.10544177,
25.74579605, 45.40899346, 45.48545214, 42.79156682, 40.25617598,
37.82499956, 35.46649334, 33.16819325, 30.92618495, 28.74143813,
26.61842328, 24.56460634, 22.59031329, 20.70871403, 18.93573482,
17.28969502, 15.79042918, 14.45765475, 13.30845242, 12.3540187 ,
11.59633027, 11.02584507, 10.62146565, 10.35335023, 10.18790961,
10.09323563, 10.04309311, 10.01853528, 10.0074146 , 10.00275723,
10.00095269, 10.00030569, 10.00009102, 10.00002513, 10.00000642,
10.00000152, 10.00000033, 10.00000007, 10.00000001, 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00000113, 10.00003169,
10.00046746, 10.00493252, 10.042911 , 10.33411348, 12.37323799,
22.96320479, 43.36711281, 45.48294582, 42.90525614, 40.41090613,
38.01683826, 35.69326519, 33.4277061 , 31.21598748, 29.05865584,
26.95960132, 24.92554637, 22.96590732, 21.09279266, 19.32096772,
17.66760675, 16.15161964, 14.7923236 , 13.60729331, 12.60943958,
11.80376926, 11.18475953, 10.73552883, 10.42962656, 10.23523414,
10.12040591, 10.05750955, 10.02560249, 10.01061632, 10.00409838,
10.00147237, 10.00049202, 10.00015285, 10.00004411, 10.00001181,
10.00000293, 10.00000067, 10.00000014, 10.00000003, 10.00000001,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.0000008 , 10.00002289,
10.00034196, 10.00364667, 10.03197973, 10.25064241, 11.80584439,
20.49797906, 40.89393764, 45.39801673, 43.01300995, 40.56072132,
38.20276186, 35.91317657, 33.67955585, 31.497501 , 29.36717861,
27.29195359, 25.27787208, 23.33351751, 21.47001675, 19.70104271,
18.04265702, 16.51280476, 15.13024705, 13.91274835, 12.87448925,
12.02300218, 11.35637459, 10.86180172, 10.51644329, 10.29075388,
10.15332943, 10.07558342, 10.03478217, 10.01493038, 10.0059752 ,
10.00222861, 10.00077436, 10.00025053, 10.00007542, 10.00002111,
10.00000549, 10.00000132, 10.0000003 , 10.00000006, 10.00000001,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00000057, 10.00001653,
10.00025007, 10.00269496, 10.02382345, 10.18789004, 11.36945993,
18.37965164, 38.05912172, 45.17574509, 43.11251315, 40.70573635,
38.3830313 , 36.12652613, 33.92406174, 31.77104906, 29.66731367,
27.61574498, 25.62177374, 23.69322116, 21.84030791, 20.07568344,
18.41433887, 16.87323847, 15.47047378, 14.22375257, 13.14813899,
12.25321791, 11.5402533 , 11.00028933, 10.61419296, 10.3550966 ,
10.19268797, 10.0979102 , 10.04651672, 10.02064414, 10.00855362,
10.00330752, 10.00119317, 10.00040139, 10.00012585, 10.00003675,
10.00000999, 10.00000252, 10.00000059, 10.00000013, 10.00000003,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00000041, 10.00001193,
10.00018281, 10.00199086, 10.01774046, 10.1407639 , 11.03579222,
16.60675174, 34.97297904, 44.746225 , 43.1995037 , 40.84594145,
38.55788555, 36.33359567, 34.16152679, 32.03694096, 29.9593582 ,
27.93123725, 25.95744869, 24.04511647, 22.20362584, 20.4446707 ,
18.78222023, 17.2322605 , 15.81213485, 14.53929869, 13.4293702 ,
12.49355355, 11.73585051, 11.15086047, 10.72314467, 10.42881225,
10.23914138, 10.12510704, 10.06129132, 10.02808919, 10.01203442,
10.00481817, 10.00180206, 10.00062942, 10.00020521, 10.00006241,
10.0000177 , 10.00000467, 10.00000115, 10.00000026, 10.00000006,
10.00000001, 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10.00000029, 10.00000861,
10.0001336 , 10.00147015, 10.01320569, 10.10540335, 10.78181505,
15.15559009, 31.77150742, 44.03267185, 43.26662425, 40.98112158,
38.72753625, 36.53465072, 34.39223864, 32.29547228, 30.2435992 ,
28.23868792, 26.28509903, 24.38931861, 22.55996261, 20.80783537,
19.14593645, 17.58929042, 16.15444219, 14.85844306, 13.71718885,
12.74311477, 11.94253231, 11.31325829, 10.84344127, 10.51235937,
10.2933105 , 10.1577994 , 10.07962743, 10.03764072, 10.01665587,
10.00689601, 10.00267063, 10.00096713, 10.00032738, 10.00010354,
10.00003057, 10.00000842, 10.00000216, 10.00000052, 10.00000011,
10.00000002, 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
[Cell output: a sequence of NumPy arrays of length 101, one per snapshot. Each array sits at a baseline of 10.0 with a single peak of up to roughly 43 that shifts to the right, broadens, and decays back toward the baseline from one array to the next.]
10.00002529, 10.00001035, 10.0000041 , 10.00000157, 10.00000058,
10.00000021, 10.00000007, 10.00000002, 10.00000001, 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10.00000001,
10.00000007, 10.00000056, 10.00000435, 10.00003175, 10.00021288,
10.00129418, 10.00708154, 10.03483234, 10.15419883, 10.61283372,
12.14426669, 16.20986127, 23.32562895, 29.86908693, 32.57538982,
32.71658872, 31.9841387 , 31.01849875, 29.99660228, 28.96510972,
27.93692903, 26.91617265, 25.90466132, 24.90368566, 23.91451245,
22.9385477 , 21.97740399, 21.03294104, 20.10729882, 19.20292742,
18.32261263, 17.46949372, 16.64706754, 15.8591711 , 15.10993284,
14.40368118, 13.74479937, 13.13751841, 12.5856467 , 12.09224731,
11.65929025, 11.2873257 , 10.97523872, 10.72014855, 10.51750054,
10.36136313, 10.24489583, 10.16091084, 10.10242807, 10.06313066,
10.03766073, 10.02174049, 10.01214331, 10.00656268, 10.00343174,
10.00173643, 10.00085024, 10.0004029 , 10.00018478, 10.00008202,
10.00003524, 10.00001465, 10.0000059 , 10.0000023 , 10.00000087,
10.00000032, 10.00000011, 10.00000004, 10.00000001, 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10.00000001,
10.00000005, 10.00000042, 10.00000326, 10.00002386, 10.00016072,
10.00098277, 10.00541496, 10.0268443 , 10.11990251, 10.48201331,
11.71841433, 15.16124222, 21.77672389, 28.76652065, 32.20949798,
32.69894608, 32.07721653, 31.14711283, 30.14002099, 29.1175002 ,
28.096454 , 27.082075 , 26.07646146, 25.08094839, 24.09677205,
23.12527805, 22.16800126, 21.22670966, 20.3034379 , 19.40051736,
18.52060252, 17.66669081, 16.84213089, 16.05061224, 15.29612696,
14.58289324, 13.91522934, 13.29736912, 12.73321532, 12.22603728,
11.77813456, 11.39050576, 11.06257775, 10.7920576 , 10.57496026,
10.4058367 , 10.27818422, 10.18497582, 10.11921567, 10.0744249 ,
10.04498683, 10.02632211, 10.01490613, 10.00816953, 10.00433334,
10.00222465, 10.00110546, 10.00053173, 10.00024759, 10.00011161,
10.00004871, 10.00002058, 10.00000842, 10.00000333, 10.00000128,
10.00000047, 10.00000017, 10.00000006, 10.00000002, 10.00000001,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10.00000004, 10.00000031, 10.00000244, 10.00001793, 10.0001213 ,
10.00074603, 10.00413859, 10.02067514, 10.09315091, 10.37851005,
11.37212503, 14.25497549, 20.27485527, 27.52202704, 31.72580672,
32.63711429, 32.15605546, 31.2702515 , 30.28037567, 29.26747631,
28.2537255 , 27.24575245, 26.24603734, 25.25599276, 24.27683716,
23.30986251, 22.35653137, 21.41852516, 20.49777873, 19.5965105 ,
18.71724938, 17.86285631, 17.03653593, 16.24183216, 15.48259931,
14.76293877, 14.08709054, 13.45926991, 12.88344378, 12.36304975,
11.90067402, 11.49772113, 11.15412505, 10.86816123, 10.6364159 ,
10.45394713, 10.31463379, 10.21166442, 10.13808258, 10.08729341,
10.05345234, 10.03169276, 10.01819224, 10.01010911, 10.00543799,
10.0028319 , 10.00142777, 10.00069696, 10.00032943, 10.00015078,
10.00006683, 10.00002868, 10.00001192, 10.0000048 , 10.00000187,
10.00000071, 10.00000026, 10.00000009, 10.00000003, 10.00000001,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10.00000003, 10.00000023, 10.00000183, 10.00001347, 10.00009153,
10.00056611, 10.00316159, 10.01591409, 10.07230758, 10.29681115,
11.09222597, 13.48311408, 18.85601409, 26.16005718, 31.10907946,
32.52019947, 32.21702706, 31.38688114, 30.41739205, 29.41498255,
28.408752 , 27.40723163, 26.41341919, 25.42884731, 24.45473192,
23.49231891, 22.54300388, 21.60838681, 20.69030801, 19.79087851,
18.91250731, 18.05792418, 17.23019418, 16.43271832, 15.66921271,
14.94365709, 14.26020224, 13.62302631, 13.03613342, 12.50309435,
12.02674104, 11.60884175, 11.24980053, 10.94843693, 10.70190331,
10.50578189, 10.35437057, 10.24112451, 10.1591815 , 10.10187949,
10.06318142, 10.03795285, 10.02207796, 10.01243623, 10.00678301,
10.00358238, 10.00183214, 10.00090743, 10.00043527, 10.00020223,
10.000091 , 10.00003967, 10.00001675, 10.00000685, 10.00000271,
10.00000104, 10.00000039, 10.00000014, 10.00000005, 10.00000002,
10.00000001, 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10.00000002, 10.00000017, 10.00000137, 10.00001011, 10.00006905,
10.00042943, 10.00241413, 10.01224225, 10.05608417, 10.23245245,
10.867124 , 12.83391916, 17.54728774, 24.71427924, 30.34776588,
32.33543504, 32.25551358, 31.49565178, 30.55070128, 29.55993442,
28.56153257, 27.56653592, 26.5786362 , 25.59954041, 24.63048096,
23.67266607, 22.72743 , 21.79629624, 20.88101565, 19.98359723,
19.10633556, 18.2518346 , 17.42302446, 16.62316633, 15.85583871,
15.12489629, 14.43439171, 13.78845017, 13.19108901, 12.64598015,
12.1561627 , 11.72372724, 11.34950947, 11.03284476, 10.77144028,
10.56141219, 10.39750818, 10.2734966 , 10.182663 , 10.11832861,
10.07430347, 10.04520949, 10.0266465 , 10.01521185, 10.00841074,
10.00450404, 10.00233617, 10.00117374, 10.00057125, 10.00026934,
10.00012304, 10.00005445, 10.00002335, 10.0000097 , 10.00000391,
10.00000152, 10.00000058, 10.00000021, 10.00000007, 10.00000003,
10.00000001, 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10.00000002, 10.00000013, 10.00000102, 10.0000076 , 10.00005207,
10.00032564, 10.00184255, 10.00941227, 10.04346848, 10.18184159,
10.68685092, 12.29368725, 16.36598722, 23.22456015, 29.43639935,
32.06841062, 32.26570138, 31.59481056, 30.67981282, 29.70221024,
28.71205435, 27.72368465, 26.74171609, 25.76810013, 24.8041092 ,
23.85092373, 22.90982243, 21.98225723, 21.06989449, 20.1746463 ,
19.2986982 , 18.44453361, 17.61495244, 16.81307962, 16.04235732,
15.30651308, 14.6094944 , 13.95536002, 13.34811952, 12.79151689,
12.28876231, 11.84222856, 11.45314397, 11.12132833, 10.84502684,
10.62089251, 10.44414672, 10.30891237, 10.20867391, 10.13678721,
10.08695218, 10.05357604, 10.03198784, 10.01850331, 10.01036888,
10.00562899, 10.00296046, 10.0015085 , 10.00074476, 10.00035629,
10.00016517, 10.0000742 , 10.00003231, 10.00001363, 10.00000557,
10.00000221, 10.00000085, 10.00000032, 10.00000011, 10.00000004,
10.00000001, 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10.00000001, 10.0000001 , 10.00000077, 10.0000057 , 10.00003926,
10.00024685, 10.0014057 , 10.0072325 , 10.03366677, 10.14210237,
10.54298755, 11.84815964, 15.32011808, 21.73299866, 28.37777433,
31.70365943, 32.2403698 , 31.6820949 , 30.8040799 , 29.84164053,
28.86028925, 27.87869199, 26.90268485, 25.93455406, 24.97564179,
24.02711233, 23.09019514, 22.16627551, 21.25694003, 20.36400888,
19.48956375, 18.63597271, 17.80591029, 17.00236898, 16.22865664,
15.48837262, 14.78535387, 14.12358148, 13.5070389 , 12.93951603,
12.42436074, 11.96418963, 11.56058449, 11.21381593, 10.92264576,
10.68426037, 10.49437184, 10.34749324, 10.23735581, 10.15740117,
10.10126453, 10.06317148, 10.03819859, 10.0223844 , 10.01271075,
10.00699381, 10.00372892, 10.00192664, 10.00096471, 10.00046817,
10.00022021, 10.0001004 , 10.00004437, 10.00001901, 10.0000079 ,
10.00000318, 10.00000124, 10.00000047, 10.00000017, 10.00000006,
10.00000002, 10.00000001, 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10.00000001, 10.00000007, 10.00000057, 10.00000428, 10.0000296 ,
10.00018707, 10.00107196, 10.00555456, 10.02605759, 10.11094153,
10.42852133, 11.48350517, 14.40979072, 20.27978362, 27.18436453,
31.225707 , 32.17070399, 31.75460316, 30.92265563, 29.97799452,
29.00618974, 28.03156562, 27.06156617, 26.09892906, 25.14510393,
24.20125293, 23.26856327, 22.34835855, 21.44215023, 20.55167139,
19.67890485, 18.82610846, 17.99583623, 17.19095225, 16.41463257,
15.67034827, 14.96182183, 14.29294746, 13.66766659, 13.08979177,
12.56277784, 12.08944876, 11.67170135, 11.31022181, 11.00426331,
10.75153634, 10.54825402, 10.38934913, 10.2688435 , 10.18031448,
10.11737961, 10.07411973, 10.04538167, 10.02693544, 10.01549554,
10.00863988, 10.00466908, 10.00244566, 10.00124174, 10.00061117,
10.00029163, 10.00013491, 10.00006051, 10.00002632, 10.0000111 ,
10.00000454, 10.0000018 , 10.00000069, 10.00000026, 10.00000009,
10.00000003, 10.00000001, 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10.00000001, 10.00000005, 10.00000043, 10.00000321, 10.00002231,
10.00014172, 10.00081711, 10.00426368, 10.02015505, 10.08653682,
10.33767511, 11.18693159, 13.62910416, 18.89964537, 25.87850758,
30.62062208, 32.04617898, 31.80864171, 31.03443765, 30.11096281,
29.14968345, 28.18230497, 27.21838087, 26.26125106, 25.31252073,
24.3733671 , 23.44494296, 22.52851538, 21.62552526, 20.73762318,
19.86669796, 19.01490219, 18.18467417, 17.37875386, 16.60018842,
15.85232137, 15.13875802, 14.46329827, 13.82982805, 13.24216185,
12.70383351, 12.21784022, 11.78635625, 11.41044746, 11.08983006,
10.82272425, 10.60584812, 10.43457738, 10.30326358, 10.20566769,
10.1354374 , 10.08654875, 10.05364587, 10.03224315, 10.01878848,
10.01061372, 10.00581248, 10.00308595, 10.00158845, 10.00079277,
10.00038365, 10.00018004, 10.00008194, 10.00003616, 10.00001548,
10.00000643, 10.00000259, 10.00000101, 10.00000038, 10.00000014,
10.00000005, 10.00000002, 10.00000001, 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10.00000004, 10.00000032, 10.00000241, 10.00001681,
10.00010733, 10.0006226 , 10.00327113, 10.01557973, 10.06744437,
10.26573081, 10.94700694, 12.96810018, 17.61944935, 24.4911192 ,
29.87799209, 31.85458482, 31.83955069, 31.13799899, 30.24013529,
29.29066629, 28.33089899, 27.37314614, 26.42154477, 25.47791708,
24.54347675, 23.61935123, 22.70675647, 21.80706732, 20.92185635,
20.05292306, 19.20231961, 18.37237339, 17.56570454, 16.78523459,
16.03418092, 15.31603007, 14.63448179, 13.99335506, 13.39644822,
12.84734884, 12.34919544, 11.90440374, 11.51438305, 11.179282 ,
10.89781165, 10.66719308, 10.48326185, 10.34073316, 10.2335965 ,
10.15557749, 10.10058961, 10.06310529, 10.03840052, 10.02266096,
10.01296718, 10.00719495, 10.0038711 , 10.00201971, 10.00102192,
10.00050148, 10.00023868, 10.00011019, 10.00004935, 10.00002144,
10.00000903, 10.00000369, 10.00000146, 10.00000056, 10.00000021,
10.00000008, 10.00000003, 10.00000001, 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10.00000003, 10.00000024, 10.00000181, 10.00001266,
10.00008126, 10.0004742 , 10.00250839, 10.01203569, 10.05252286,
10.20886215, 10.75377377, 12.4145114 , 16.45714779, 23.05904978,
28.99308402, 31.58228402, 31.84151567, 31.23150238, 30.36497311,
29.42899359, 28.47732336, 27.52587462, 26.5798333 , 25.64131748,
24.71160404, 23.79180582, 22.88309352, 21.98678044, 21.10436549,
20.23756339, 19.38833053, 18.55888812, 17.75174086, 16.96968822,
16.21582333, 15.49351332, 14.80635338, 14.15808603, 13.55247771,
12.99314699, 12.48334439, 12.02569268, 11.62190878, 11.27254156,
10.9767704 , 10.73231192, 10.53547227, 10.38135863, 10.26423034,
10.17793766, 10.11637541, 10.07387869, 10.04550646, 10.02719049,
10.01575775, 10.00885694, 10.00482824, 10.00255287, 10.00130927,
10.00065136, 10.00031437, 10.0001472 , 10.00006687, 10.00002947,
10.00001261, 10.00000523, 10.00000211, 10.00000082, 10.00000031,
10.00000011, 10.00000004, 10.00000001, 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10.00000002, 10.00000018, 10.00000136, 10.00000954,
10.0000615 , 10.00036104, 10.00192257, 10.00929233, 10.04087196,
10.16398354, 10.59872761, 11.95516104, 15.42198212, 21.62154049,
27.96879661, 31.2148023 , 31.80738152, 31.3125958 , 30.48477311,
29.56446888, 28.62153685, 27.67657314, 26.73613773, 25.80274582,
24.87777124, 23.96232512, 23.05753935, 22.16467031, 21.2851475 ,
20.42060517, 19.57290856, 18.74417727, 17.93680499, 17.15347282,
16.39715208, 15.67109063, 14.97877588, 14.32386615, 13.71008241,
13.14105403, 12.62011659, 12.15006752, 11.7328963 , 11.3695189 ,
11.05955748, 10.80121188, 10.59126371, 10.42523469, 10.29769107,
10.20265259, 10.13404011, 10.08608864, 10.05366547, 10.0324607 ,
10.01904861, 10.01084376, 10.0059883 , 10.00320812, 10.00166741,
10.00084084, 10.00041142, 10.00019534, 10.00009 , 10.00004024,
10.00001746, 10.00000736, 10.00000301, 10.00000119, 10.00000046,
10.00000017, 10.00000006, 10.00000002, 10.00000001, 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ]), array([10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10.00000002, 10.00000013, 10.00000102, 10.00000718,
10.00004654, 10.00027477, 10.00147286, 10.00717013, 10.03178273,
10.1286182 , 10.47471502, 11.57697926, 14.5156085 , 20.21645201,
26.81692967, 30.73783732, 31.72849729, 31.37828755, 30.59862313,
29.69682954, 28.76347677, 27.82524126, 26.89047657, 25.96222518,
25.04200061, 24.13092801, 23.23010777, 22.34074412, 21.46420133,
20.60203741, 19.75603082, 18.92820409, 18.12084426, 17.33651797,
16.57807742, 15.8486521 , 15.15161952, 14.49054755, 13.86910014,
13.29089963, 12.75934222, 12.27736966, 11.84721012, 11.47011314,
11.14611586, 10.87388482, 10.65067633, 10.47244348, 10.33409169,
10.22985248, 10.15371734, 10.09986066, 10.06298703, 10.0385611 ,
10.0229088 , 10.01320587, 10.00738638, 10.00400872, 10.00211111,
10.00107888, 10.00053509, 10.00025757, 10.00012034, 10.00005458,
10.00002402, 10.00001027, 10.00000426, 10.00000171, 10.00000067,
10.00000025, 10.00000009, 10.00000003, 10.00000001, 10. ,
10. , 10. , 10. , 10. , 10. ,
10. , 10. , 10. , 10. , 10. ,
10. ])]
```python
def ftbs1(rho0, nt, dt, dx, bc_value, *args):
    """Forward-time/backward-space update. bc_value fixes the left boundary;
    the remaining *args are forwarded to flux(), which is assumed to be
    defined earlier in the notebook."""
    rho = rho0.copy()
    for n in range(1, nt):
        F = flux(rho, *args)                       # flux at every grid point
        rho[1:] = rho[1:] - dt / dx * (F[1:] - F[:-1])
        rho[0] = bc_value                          # fixed value at the left boundary
    return rho
rho = ftbs1(rho0, nt, dt, dx, rho0[0], Vmax, 𝜌max)
print(rho)
```
```python
```
| d77f23324c6007dcbc87c551d1f2398a3029ed32 | 231,635 | ipynb | Jupyter Notebook | hw2/hw2/Untitled.ipynb | YinfengDing/MAE6286 | 41dc302762fc54ed1c8c9ff0621bd5f3c8e5d7f0 | [
"Apache-2.0"
]
| 2 | 2021-09-21T15:19:06.000Z | 2021-09-21T15:19:08.000Z | hw2/hw2/Untitled.ipynb | YinfengDing/MAE6286 | 41dc302762fc54ed1c8c9ff0621bd5f3c8e5d7f0 | [
"Apache-2.0"
]
| null | null | null | hw2/hw2/Untitled.ipynb | YinfengDing/MAE6286 | 41dc302762fc54ed1c8c9ff0621bd5f3c8e5d7f0 | [
"Apache-2.0"
]
| null | null | null | 80.624782 | 1,952 | 0.368718 | true | 84,129 | Qwen/Qwen-72B | 1. YES
2. YES | 0.887205 | 0.626124 | 0.5555 | __label__krc_Cyrl | 0.869214 | 0.128943 |
# How To Write A Hydro Code
Michael Zingale
There are _many_ methods for solving the equations of hydrodynamics. We will make some choices right from the start:
* We will consider **finite-volume methods**. These are popular in astrophysics because they are based on the integral form of the conservative equations and properly conserve mass, momentum, and energy.
* We will consider an **Eulerian** grid: the grid is fixed and the fluid moves through it.
* We will be **explicit in time**: the new solution depends only on the previous state.
* We will look at a simple 2nd order **method-of-lines** integration. We do this for simplicity here, and will point out where things are commonly done differently. This scheme has a much simpler spatial reconstruction than methods that do characteristic tracing and relies on an integrator (like a Runge-Kutta method) to advance in time.
* We will work in 1-d.
* We won't cover in detail how to write a Riemann solver (that's a math exercise as much as anything else and beyond the scope of this notebook).
* We'll assume a gamma-law equation of state—this is often not the case in astrophysics.
Much more in-depth details and derivations are given in my hydro notes available online: https://github.com/Open-Astrophysics-Bookshelf/numerical_exercises
For a greater variety of methods, in 2-d, see the pyro code: https://github.com/python-hydro/pyro2 (ref: [Harpole et al. JOSS](http://joss.theoj.org/papers/10.21105/joss.01265))
## Overview
We'll focus on the Euler equations. In 1-d, these are:
\begin{align*}
\frac{\partial \rho}{\partial t} + \frac{\partial (\rho u)}{\partial x} & = 0 \\
\frac{\partial (\rho u)}{\partial t} + \frac{\partial (\rho u^2 + p)}{\partial x} &= 0 \\
\frac{\partial (\rho E)}{\partial t} + \frac{\partial (u(\rho E + p))}{\partial x} &= 0 \\
\end{align*}
This is a set of (hyperbolic) partial differential equations. To close the system, we need an equation of state, relating the specific internal energy, $e$, to the pressure:
\begin{align*}
e &= E - \frac{1}{2}u^2 \\
p &= \rho e (\gamma - 1)
\end{align*}
To solve these, we need to discretize the equations in both space and time. We'll use grid-based methods (in addition to the finite-volume method we'll consider, this can include finite-difference and finite-element methods).
Our system of equations can be expressed in conservative form:
$$ \frac{\partial U}{\partial t} + \frac{\partial F(U)}{\partial x} = 0$$
where $U = (\rho, \rho u, \rho E)^\intercal$ and
$$
F(U) = \left ( \begin{array}{c} \rho u \\ \rho u^2 + p \\ u (\rho E + p) \end{array} \right )$$
In a finite-volume method, we store the state of the fluid in discrete volumes in space, and we can refer to this discretized state with an index.  To see this, we integrate the conservation law system in space over a volume $[x_{i-1/2},x_{i+1/2}]$:
$$\frac{\partial \langle U\rangle_i}{\partial t} = - \frac{F_{i+1/2} - F_{i-1/2}}{\Delta x}$$
This is the form of the equations we will solve. Here, $\langle U\rangle_i$ represents the average state of the fluid in a volume:
$$\langle U\rangle_i = \frac{1}{\Delta x} \int_{x_{i-1/2}}^{x_{i+1/2}} U(x) dx$$
Visually, we usually think of this grid as:
The state on the grid represents an instance in time. We evolve the state by computing the fluxes through the volumes. These fluxes tell us how much the state changes in each volume over some small timestep, $\Delta t$.
Our code will have the following structure:
* Create our numerical grid
* Set the initial conditions
* Main timestep evolution loop
* Compute the timestep
* Loop to advance one step (count depends on the number of stages in the integrator)
* Reconstruct the state to interfaces
* Solve Riemann problem to find the fluxes through the interface
* Do a conservative update of the state to the stage
* Output
## Grid
We'll manage our 1-d grid via a class `FVGrid`. We will divide the domain into a number of zones (or volumes) that will store the state. To implement boundary conditions, we traditionally use ghost cells--extra cells added to each end of the domain. We'll consider a grid that looks like this:
We'll use the names `lo` and `hi` to refer to the first and last zone in our domain. The domain boundaries are the bold lines shown above, and beyond that, on each end, we have ghost cells.
The main information we need to set up the grid is the number of zones in the interior and the number of ghost cells.
```python
import numpy as np
```
To make life easier, we'll have a simple class with indices that we use to index the fluid state arrays. We can pass this around and be sure that we are always accessing the correct fluid state.
```python
class FluidVars:
"""A simple container that holds the integer indicies we will use to
refer to the different fluid components"""
def __init__(self, gamma=1.4, C=0.8):
self.nvar = 3
# conserved variables
self.urho = 0
self.umx = 1
self.uener = 2
# primitive variables
self.qrho = 0
self.qu = 1
self.qp = 2
# EOS gamma
self.gamma = gamma
# CFL number
self.C = C
```
This is the main class for managing the finite-volume grid. In addition to holding coordinate information and knowing the bounds of the domain, it also can fill the ghost cells and give you a scratch array that lives on the same grid.
```python
class FVGrid:
"""The main finite-volume grid class for holding our fluid state."""
def __init__(self, nx, ng, xmin=0.0, xmax=1.0):
self.xmin = xmin
self.xmax = xmax
self.ng = ng
self.nx = nx
self.lo = ng
self.hi = ng+nx-1
# physical coords -- cell-centered
self.dx = (xmax - xmin)/(nx)
self.x = xmin + (np.arange(nx+2*ng)-ng+0.5)*self.dx
def scratch_array(self, nc=1):
""" return a scratch array dimensioned for our grid """
return np.squeeze(np.zeros((self.nx+2*self.ng, nc), dtype=np.float64))
def fill_BCs(self, atmp):
""" fill all ghost cells with zero-gradient boundary conditions """
if atmp.ndim == 2:
for n in range(atmp.shape[-1]):
atmp[0:self.lo, n] = atmp[self.lo, n]
atmp[self.hi+1:, n] = atmp[self.hi, n]
else:
atmp[0:self.lo] = atmp[self.lo]
atmp[self.hi+1:] = atmp[self.hi]
```
## Reconstruction
We need to use the cell-averages to figure out what the fluid state is on the interfaces. We'll _reconstruct_ the cell-averages as piecewise lines that give us the same average in the zone. We then follow these lines to the interfaces to define the left and right state at each interface.
Usually we work in terms of the **primitive variables**, $q = (\rho, u, p)$. So we first write a routine to do the algebraic transformation from conservative to primitive variables:
\begin{align}
\rho &= \rho \\
u &= \frac{(\rho u)}{\rho} \\
p &= \left ( (\rho E) - \frac{1}{2} \frac{(\rho u)^2}{\rho}\right )(\gamma - 1)
\end{align}
```python
def cons_to_prim(grid, U):
"""take a conservative state U and return the corresponding primitive
variable state as a new array."""
v = FluidVars()
q = grid.scratch_array(nc=v.nvar)
q[:, v.qrho] = U[:, v.urho]
q[:, v.qu] = U[:, v.umx]/U[:, v.urho]
rhoe = U[:, v.uener] - 0.5*q[:, v.qrho]*q[:, v.qu]**2
q[:, v.qp] = rhoe*(v.gamma - 1.0)
return q
```
Next we need a routine to create the interface states.  Here we'll construct a slope for each zone, $\Delta q$, based on the average state in the neighboring zones.  This gives us a line representing the value of the fluid state as a function of position in each zone:
$$q_i(x) = \langle q\rangle_i + \frac{\Delta q_i}{\Delta x} (x - x_i)$$
Note that there is a unique $q_i(x)$ for each zone—this is usually called _piecewise linear reconstruction_. By design, the average of $q_i(x)$ over the zone is the cell-average, so it is conservative.
We use this equation for a line to find the fluid state right at the interface. For zone $i$, the line $q_i(x)$ gives you the right state on the left interface, $q_{i-1/2,R}$, and the left state on the right interface, $q_{i+1/2,L}$. Visually this looks like:
There's one additional wrinkle—2nd order codes tend to produce oscillations near discontinuities, so we usually need to _limit_ the slopes, $\Delta q_i$, so we don't introduce new minima or maxima in the evolution. We'll use the minmod limiter:
\begin{equation}
\left . \frac{\partial a}{\partial x} \right |_i = \mathtt{minmod} \left (
\frac{a_i - a_{i-1}}{\Delta x}, \frac{a_{i+1} - a_i}{\Delta x} \right )
\end{equation}
with
\begin{equation}
\mathtt{minmod}(a,b) = \left \{
\begin{array}{ll}
a & \mathit{if~} |a| < |b| \mathrm{~and~} a\cdot b > 0 \\
b & \mathit{if~} |b| < |a| \mathrm{~and~} a\cdot b > 0 \\
0 & \mathit{otherwise}
\end{array}
\right .
\end{equation}
```python
def states(grid, U):
v = FluidVars()
q = cons_to_prim(grid, U)
# construct the slopes
dq = grid.scratch_array(nc=v.nvar)
for n in range(v.nvar):
dl = grid.scratch_array()
dr = grid.scratch_array()
dl[grid.lo-1:grid.hi+2] = q[grid.lo:grid.hi+3,n] - q[grid.lo-1:grid.hi+2,n]
dr[grid.lo-1:grid.hi+2] = q[grid.lo-1:grid.hi+2,n] - q[grid.lo-2:grid.hi+1,n]
# these where's do a minmod()
d1 = np.where(np.fabs(dl) < np.fabs(dr), dl, dr)
dq[:, n] = np.where(dl*dr > 0.0, d1, 0.0)
# now make the states
q_l = grid.scratch_array(nc=v.nvar)
q_l[grid.lo:grid.hi+2, :] = q[grid.lo-1:grid.hi+1, :] + 0.5*dq[grid.lo-1:grid.hi+1, :]
q_r = grid.scratch_array(nc=v.nvar)
q_r[grid.lo:grid.hi+2, :] = q[grid.lo:grid.hi+2, :] - 0.5*dq[grid.lo:grid.hi+2, :]
return q_l, q_r
```
## Riemann problem and conservative update
After doing our reconstruction, we are left with a left and right state on an interface. To find the unique fluid state on the interface, we solve a _Riemann problem_,
$$q_{i+1/2} = \mathcal{R}(q_{i+1/2,L},q_{i+1/2,R})$$
We could spend an entire day talking about how to solve the Riemann problem.  We'll just summarize things here.
At each interface, we have a left and right state. Information about the jump across this interface will be carried away from the interface by the 3 hydrodynamic waves ($u$ and $u\pm c$).
The solution to the Riemann problem that we need is the state on the interface--with that we can evaluate the flux through the interface.
To solve the Riemann problem, we need to know how much each variable changes across each of the three waves. To complicate matters, the left and right waves can be either shocks or rarefactions. The middle wave ($u$) is always a contact discontinuity (and of our primitive variables, only $\rho$ jumps across it).
For a gamma-law gas, we can write down analytic expressions for the change in the primitive variables across both a rarefaction and shock.  We can then solve these to find the state in between the left and right waves (the star state) and then compute the wave speeds.
Finally, we can find the solution on the interface by determining which region we are in.
We'll use an exact Riemann solver to find the solution on the interface.  There's a lot of algebra involved in finding the expressions for the jumps across the waves and the wave speeds, which we'll skip (but see my notes).  Instead we'll just use this solver to give us the state.
Once we have the interface state, we can compute the fluxes using this state:
```python
def cons_flux(state, v):
""" given an interface state, return the conservative flux"""
flux = np.zeros((v.nvar), dtype=np.float64)
flux[v.urho] = state.rho * state.u
flux[v.umx] = flux[v.urho] * state.u + state.p
flux[v.uener] = (0.5 * state.rho * state.u**2 +
state.p/(v.gamma - 1.0) + state.p) * state.u
return flux
```
```python
import riemann_exact as re
help(re)
```
Help on module riemann_exact:
NAME
riemann_exact
DESCRIPTION
An exact Riemann solver for the Euler equations with a gamma-law
gas. The left and right states are stored as State objects. We then
create a RiemannProblem object with the left and right state:
> rp = RiemannProblem(left_state, right_state)
Next we solve for the star state:
> rp.find_star_state()
Finally, we sample the solution to find the interface state, which
is returned as a State object:
> q_int = rp.sample_solution()
CLASSES
builtins.object
RiemannProblem
State
class RiemannProblem(builtins.object)
| RiemannProblem(left_state, right_state, gamma=1.4)
|
| a class to define a Riemann problem. It takes a left
| and right state. Note: we assume a constant gamma
|
| Methods defined here:
|
| __init__(self, left_state, right_state, gamma=1.4)
| Initialize self. See help(type(self)) for accurate signature.
|
| find_star_state(self, p_min=0.001, p_max=1000.0)
| root find the Hugoniot curve to find ustar, pstar
|
| rarefaction_solution(self, sgn, state)
| return the interface solution considering a rarefaction wave
|
| sample_solution(self)
| given the star state (ustar, pstar), find the state on the interface
|
| shock_solution(self, sgn, state)
| return the interface solution considering a shock
|
| u_hugoniot(self, p, side)
| define the Hugoniot curve, u(p).
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
class State(builtins.object)
| State(p=1.0, u=0.0, rho=1.0)
|
| a simple object to hold a primitive variable state
|
| Methods defined here:
|
| __init__(self, p=1.0, u=0.0, rho=1.0)
| Initialize self. See help(type(self)) for accurate signature.
|
| __str__(self)
| Return str(self).
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
FUNCTIONS
cons_flux(state, v)
given an interface state, return the conservative flux
FILE
/home/zingale/classes/how_to_write_a_hydro_code/riemann_exact.py
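To make the usage concrete, here is a small sketch that exercises the API exactly as documented above (the left/right values are arbitrary, Sod-like numbers chosen only for illustration):
```python
# Sketch: solving a single Riemann problem with the documented API.
left = re.State(rho=1.0, u=0.0, p=1.0)
right = re.State(rho=0.125, u=0.0, p=0.1)

rp = re.RiemannProblem(left, right, gamma=1.4)
rp.find_star_state()            # find (ustar, pstar) in the star region
q_int = rp.sample_solution()    # the State on the interface

print(q_int)                    # State defines __str__, so this prints rho, u, p
```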
For a method-of-lines approach, we want to just compute the righthand side, $A = -\partial F/\partial x$. Then we will turn our PDE into an ODE for time:
$$\frac{\partial \langle U\rangle_i}{\partial t} = -A_i = - \frac{F_{i+1/2} - F_{i-1/2}}{\Delta x}$$
We can then use any ODE integration method, like Runge-Kutta to solve the system.
This routine will take the conserved state, $U$, construct the left and right states at all interfaces, solve the Riemann problem to get the unique state on the boundary, and then compute the advective term and return it.
```python
def make_flux_divergence(grid, U):
v = FluidVars()
# get the states
q_l, q_r = states(grid, U)
# now solve the Riemann problem
flux = grid.scratch_array(nc=v.nvar)
for i in range(grid.lo, grid.hi+2):
sl = re.State(rho=q_l[i,v.qrho], u=q_l[i,v.qu], p=q_l[i,v.qp])
sr = re.State(rho=q_r[i,v.qrho], u=q_r[i,v.qu], p=q_r[i,v.qp])
rp = re.RiemannProblem(sl, sr, gamma=v.gamma)
rp.find_star_state()
q_int = rp.sample_solution()
flux[i, :] = cons_flux(q_int, v)
A = grid.scratch_array(nc=v.nvar)
for n in range(v.nvar):
A[grid.lo:grid.hi+1, n] = (flux[grid.lo:grid.hi+1, n] -
flux[grid.lo+1:grid.hi+2, n])/grid.dx
return A
```
## Timestep
Explicit hydro codes have a restriction on the size of the timestep. We cannot allow information to move more than one zone per step. For the hydro equations, the speeds at which information travels are $u$ and $u \pm c$, so we use the largest speed here to compute the timestep.
```python
def timestep(grid, U):
v = FluidVars()
# compute the sound speed
q = cons_to_prim(grid, U)
c = grid.scratch_array()
c[grid.lo:grid.hi+1] = np.sqrt(v.gamma *
q[grid.lo:grid.hi+1,v.qp] /
q[grid.lo:grid.hi+1,v.qrho])
dt = v.C * grid.dx / (np.abs(q[grid.lo:grid.hi+1, v.qu]) +
c[grid.lo:grid.hi+1]).max()
return dt
```
## Main driver
This is the main driver.  The initial conditions are passed in as a separate function; below we'll define one that sets up the standard Sod problem.
This does 2nd-order RK (the explicit midpoint method) for the integration, and requires that we compute the advection terms twice to advance the solution by $\Delta t$.  The update looks like:
\begin{align*}
U^\star &= U^n + \frac{\Delta t}{2} A(U^n) \\
U^{n+1} &= U^n + \Delta t A(U^\star)
\end{align*}
```python
def mol_solve(nx, tmax=1.0, init_cond=None):
"""Perform 2nd order MOL integration of the Euler equations.
You need to pass in a function foo(grid) that returns the
initial conserved fluid state."""
grid = FVGrid(nx, 2)
v = FluidVars()
U = init_cond(grid)
t = 0.0
while t < tmax:
dt = timestep(grid, U)
if t + dt > tmax:
dt = tmax - t
grid.fill_BCs(U)
k1 = make_flux_divergence(grid, U)
U_tmp = grid.scratch_array(nc=v.nvar)
for n in range(v.nvar):
U_tmp[:, n] = U[:, n] + 0.5 * dt * k1[:, n]
grid.fill_BCs(U_tmp)
k2 = make_flux_divergence(grid, U_tmp)
for n in range(v.nvar):
U[:, n] += dt * k2[:, n]
t += dt
return grid, U
```
## Example: Sod's problem
The Sod problem is a standard test problem, consisting of a left and a right state separated by an initial discontinuity.  As time evolves, a rightward-moving shock and contact discontinuity and a leftward-moving rarefaction form.
One reason this problem is so popular is that you can find the exact solution (it's just the Riemann problem) and compare the performance of your code to the exact solution.
```python
def sod(grid):
v = FluidVars()
U = grid.scratch_array(nc=v.nvar)
# setup initial conditions -- this is Sod's problem
rho_l = 1.0
u_l = 0.0
p_l = 1.0
rho_r = 0.125
u_r = 0.0
p_r = 0.1
idx_l = grid.x < 0.5
idx_r = grid.x >= 0.5
U[idx_l, v.urho] = rho_l
U[idx_l, v.umx] = rho_l * u_l
U[idx_l, v.uener] = p_l/(v.gamma - 1.0) + 0.5 * rho_l * u_l**2
U[idx_r, v.urho] = rho_r
U[idx_r, v.umx] = rho_r * u_r
U[idx_r, v.uener] = p_r/(v.gamma - 1.0) + 0.5 * rho_r * u_r**2
return U
```
```python
g, U = mol_solve(128, tmax=0.2, init_cond=sod)
```
```python
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 100
plt.rcParams['figure.figsize'] = [8, 6]
```
```python
sod = np.genfromtxt("sod-exact.out", skip_header=2, names=True)
```
```python
v = FluidVars()
plt.scatter(g.x, U[:,v.urho], marker="x", color="C0")
plt.plot(sod["x"], sod["rho"], color="C1")
```
## Exercises
1. Run the problem without limiting the slopes to see how it compares
2. Try a higher-order Runge-Kutta time integration method to see how the solution changes
3. Implement periodic boundary conditions and create a new set of initial conditions that just puts a low amplitude Gaussian pulse—this will create an acoustic wave that propagates through the domain. (A sketch of a periodic ghost-cell fill is given below.)
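For exercise 3, here is a minimal sketch of a periodic ghost-cell fill, assuming the same `FVGrid` layout as above (the Gaussian pulse initial condition is left as part of the exercise):
```python
def fill_BCs_periodic(grid, atmp):
    """Sketch of a periodic fill: the ghost cells on each side copy the
    interior cells from the opposite end of the domain."""
    if atmp.ndim == 2:
        for n in range(atmp.shape[-1]):
            atmp[0:grid.lo, n] = atmp[grid.hi-grid.ng+1:grid.hi+1, n]
            atmp[grid.hi+1:, n] = atmp[grid.lo:grid.lo+grid.ng, n]
    else:
        atmp[0:grid.lo] = atmp[grid.hi-grid.ng+1:grid.hi+1]
        atmp[grid.hi+1:] = atmp[grid.lo:grid.lo+grid.ng]
```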
```python
```
| 6fa6b132d2eef4d0c13c8ec41155e56fa550a998 | 45,688 | ipynb | Jupyter Notebook | write_a_hydrocode.ipynb | python-hydro/how_to_write_a_hydro_code | 337a066405b47f9a31c8dd00ec833c3ba6742c3c | [
"BSD-3-Clause"
]
| 24 | 2019-07-17T15:38:13.000Z | 2021-08-13T22:55:37.000Z | write_a_hydrocode.ipynb | andreuva/how_to_write_a_hydro_code | 337a066405b47f9a31c8dd00ec833c3ba6742c3c | [
"BSD-3-Clause"
]
| null | null | null | write_a_hydrocode.ipynb | andreuva/how_to_write_a_hydro_code | 337a066405b47f9a31c8dd00ec833c3ba6742c3c | [
"BSD-3-Clause"
]
| 7 | 2019-07-18T04:10:45.000Z | 2021-12-02T12:09:38.000Z | 47.345078 | 13,144 | 0.642029 | true | 5,740 | Qwen/Qwen-72B | 1. YES
2. YES | 0.752013 | 0.743168 | 0.558872 | __label__eng_Latn | 0.982811 | 0.136776 |
```python
import numpy as np
import matplotlib.pyplot as plt
```
# Reducing the error: the midpoint formula and the trapezoid formula
We saw that the error of Riemann sums (strictly speaking, we should call the ones that use the initial point "Cauchy sums")
tends to zero because the oscillation of the function decreases as the length of the interval under consideration decreases.
Is it possible (in analogy with central differences) to obtain formulas that converge faster?
In general (that is, for functions that are merely continuous) this is not possible,
since the oscillation $\omega$ is the only control mechanism we have.
But, following the general principle of the course, "more derivatives = better convergence",
let us look for methods that give smaller errors if we assume the function is differentiable.
We start with functions that are differentiable once.
## Error, with derivatives
The first thing to do is to estimate the error of the formula we already have, assuming that $f$ is differentiable.
We start with the Cauchy sums, where $x_k = c_k$:
e _ {n,k} & = h \cdot \left| f(c_k) - \int_0^1 f(c_k + th) \, dt \right|
= h \cdot \left| \int_0^1 \big[ f(c_k) - f(c_k + th) \big] \, dt \right| \\
& = h \cdot \left| \int_0^1 f'(\xi_t)(-th) \, dt \right|
= h^2 \cdot \left| \int_0^1 f'(\xi_t) t \, dt \right| \\
& \leq h^2 \cdot \int_0^1 \max \bigl| f'(\xi) \bigr| t \, dt
= h^2 \cdot \max \bigl| f'(\xi) \bigr| \cdot \int_0^1 t \, dt
= h^2 \cdot \max \bigl| f'(\xi) \bigr| \cdot \frac{1}{2}
\end{align}$$
where we used the mean value theorem for derivatives: $f(y) - f(x) = f'(\xi)(y - x)$ for some $\xi \in (x,y)$.
Adding up all the $e _ {n,k}$, the error $E_n$ will then be at most
$$
\def\maxhalf{\frac{\max \bigl| f'(\xi) \bigr|}{2}}
E_n
\leq \sum _ {k=0}^{n-1} e _ {n,k}
\leq n \cdot h^2 \maxhalf
\leq h \cdot (b - a) \maxhalf.
$$
Thus, the error of the "Cauchy formula" decreases linearly with $h$.
Note: we can also obtain this estimate from the previous one together with the following relation:
the oscillation of $f$ on an interval is always at most the maximum of the absolute value of the derivative $f'$
on that same interval, times the length of the interval.
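Before improving the formula, here is a quick numerical sanity check of this first-order behaviour (a sketch; the helper below is not part of the course code and simply re-implements the left-endpoint sum):
```python
# Sketch: the left-endpoint ("Cauchy") sum for the integral of e^x on [0,1]
# (exact value e - 1) has an error that decays like h = 1/n.
def cauchy_sum(f, a, b, n):
    h = (b - a) / n
    xs = a + h * np.arange(n)          # left endpoint of each subinterval
    return np.sum(f(xs)) * h

exact = np.e - 1
for n in [10, 100, 1000, 10000]:
    print(n, abs(cauchy_sum(np.exp, 0, 1, n) - exact))
# the error shrinks by roughly a factor of 10 each time n grows by 10 -- first order in h
```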
## How can we reduce the error?
To reduce the error, we can bet on two fronts.
Either we make the errors $e _ {n,k}$ cancel each other out, or we reduce the $e _ {n,k}$ directly.
The first strategy depends heavily on the particular function, so let us look for another method.
In short, we would like to reduce the error
$$\big(\text{Estimate of the integral of $f$ on the interval $[c_k, d_k]$}\big) - \int_{c_k}^{d_k} f(u) \, du.$$
Inspired by the central difference formula, we may suspect that if we evaluate $f$ at the middle of the interval,
instead of at its edge, the error may be smaller.
That is, we will use $f\left(\frac{c_k + d_k}{2}\right)$ instead of $f(c_k)$ as the estimate of $f$.
Thus, instead of computing $S_n$, we will compute
$$M_n = \sum_{k=0}^{n-1} f \left(\frac{c_k + d_k}{2} \right) \cdot h.$$
This formula is known as the **midpoint formula**.
### Exercise:
- Implement the midpoint formula (a sketch of one possible implementation appears after the cell below).
- Redo the graphs for the sine and Gaussian functions.
```python
def midpoint(f,a,b,n=100):
"""Calcula uma aproximação da integral de $f$ no intervalo $[a,b]$, com $n$ pontos pela fórmula do ponto médio."""
### Resposta aqui
```
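A sketch of one possible implementation follows (named differently so it does not overwrite the exercise cell above):
```python
# Sketch of one possible midpoint implementation (one of many valid answers).
def midpoint_sketch(f, a, b, n=100):
    h = (b - a) / n
    mids = a + h * (np.arange(n) + 0.5)   # midpoint of every subinterval
    return np.sum(f(mids)) * h

# Quick check: the integral of sin on [0, pi] is exactly 2.
print(midpoint_sketch(np.sin, 0, np.pi, 100))
```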
```python
# Use this cell to compute the errors
def f(x): return np.sin(x)
def g(x): return np.exp(-x**2)

# Suggested values of $n$ to use
ns = np.array([int(x) for x in np.logspace(1,4)])

### Answer here
```
```python
def glinha(x):
return -2*x*np.exp(-x**2)
K_g = abs(glinha(np.pi) - glinha(0))
```
```python
plt.loglog(ns, abs(F_m - true_F), '.', label='sine')
plt.loglog(ns, abs(G_m - true_G), '+', label='Gaussian')
plt.loglog(ns, np.pi**2/(24*ns**2)*2, label='sine estimate')
plt.loglog(ns, np.pi**2/(24*ns**2)*K_g, label='Gaussian estimate')
plt.xlabel('Number of steps')
plt.ylabel('Error')
plt.title('Midpoint method for the integral')
plt.legend(loc=0)
plt.show()
```
### Exercise: error estimate
- Suppose that $f$ is **twice** differentiable, and estimate the error committed by the midpoint formula.
To take full advantage of the symmetry, introduce $m_k = \frac{c_k + d_k}{2}$
and make a change of variables to the interval $[-1,1]$ (instead of $[0,1]$, which is not symmetric!).
Give your answer here
| c8b1645bf0680087ca0ddc2ad4c43e7d954abbc2 | 34,492 | ipynb | Jupyter Notebook | comp-cientifica-I-2018-2/semana-7/raw_files/.ipynb_checkpoints/Semana7-Parte2-PontoMedio-checkpoint.ipynb | mirandagil/university-courses | e70ce5262555e84cffb13e53e139e7eec21e8907 | [
"MIT"
]
| 1 | 2019-12-23T16:39:01.000Z | 2019-12-23T16:39:01.000Z | comp-cientifica-I-2018-2/semana-7/raw_files/Semana7-Parte2-PontoMedio.ipynb | mirandagil/university-courses | e70ce5262555e84cffb13e53e139e7eec21e8907 | [
"MIT"
]
| null | null | null | comp-cientifica-I-2018-2/semana-7/raw_files/Semana7-Parte2-PontoMedio.ipynb | mirandagil/university-courses | e70ce5262555e84cffb13e53e139e7eec21e8907 | [
"MIT"
]
| null | null | null | 172.46 | 27,688 | 0.873304 | true | 1,551 | Qwen/Qwen-72B | 1. YES
2. YES | 0.679179 | 0.851953 | 0.578628 | __label__por_Latn | 0.995692 | 0.182677 |
```python
%matplotlib inline
```
```python
import numpy as np
import numpy.polynomial.polynomial as p
import matplotlib.pyplot as plt
from turtle import *
from itertools import groupby
import re
from Crypto.Util import number
from sympy.ntheory import factorint
import time
```
# Basic Algebra Exercise
## Functions, Polynomials, Complex Numbers. Applications of Abstract Algebra
### Problem 1. Polynomial Interpolation
We know that if we have a set of $n$ data points with coordinates $(x_1; y_1), (x_2; y_2), \dots, (x_n; y_n)$, we can try to figure out what function may have generated these points.
Please note that **our assumptions about the data** will lead us to choosing one function over another. This means that our results are as good as our data and assumptions. Therefore, it's extremely important that we write down our assumptions (which sometimes can be difficult as we sometimes don't realize we're making them). It will be better for our readers if they know what those assumptions and models are.
In this case, we'll state two assumptions:
1. The points in our dataset are generated by a polynomial function
2. The points are very precise, there is absolutely no error in them. This means that the function should pass **through every point**
This method is called *polynomial interpolation* (*"polynomial"* captures assumption 1 and *"interpolation"* captures assumption 2).
It can be proved (look at [Wikipedia](https://en.wikipedia.org/wiki/Polynomial_interpolation) for example) that if we have $n$ data points, there is only one polynomial of degree $n-1$ which passes through them. In "math speak": "the vector spaces of $n$ points and polynomials of degree $n-1$ are isomorphic (there exists a bijection mapping one to the other)".
There are a lot of ways to do interpolation. We can also write the function ourselves if we want but this requires quite a lot more knowledge than we already covered in this course. So we'll use a function which does this for us. `numpy.polyfit()` is one such function. It accepts three main parameters (there are others as well, but they are optional): a list of $x$ coordinates, a list of $y$ coordinates, and a polynomial degree.
Let's say we have these points:
```python
points = np.array([(0, 0), (1, 0.8), (2, 0.9), (3, 0.1), (4, -0.8), (5, -1.0)])
```
First, we need to "extract" the coordinates:
```python
x = points[:, 0]
y = points[:, 1]
```
Then, we need to calculate the interpolating polynomial. For the degree, we'll set $n-1$:
```python
coefficients = np.polyfit(x, y, len(points) - 1)
poly = np.poly1d(coefficients)
```
After that, we need to plot the function. To do this, we'll create a range of $x$ values and evaluate the polynomial at each value:
```python
plot_x = np.linspace(np.min(x), np.max(x), 1000)
plot_y = poly(plot_x)
```
Finally, we need to plot the result. We'll plot both the fitting polynomial curve (using `plt.plot()`) and the points (using `plt.scatter`). It's also nice to have different colors to make the line stand out from the points.
```python
plt.plot(plot_x, plot_y, c = "green")
plt.scatter(x, y)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
```
Don't forget to label the axes!
Your task now is to wrap the code in a function. It should accept a list of points, the polynomial degree, min and max value of $x$ used for plotting. We'll use this function to try some other cases.
```python
def interpolate_polynomial(points, degree, min_x, max_x):
"""
Interpolates a polynomial of the specified degree through the given points and plots it
points - a list of points (x, y) to plot
degree - the polynomial degree
min_x, max_x - range of x values used to plot the interpolating polynomial
"""
x = points[:, 0]
y = points[:, 1]
coefficients = np.polyfit(x, y, degree)
poly = np.poly1d(coefficients)
plot_x = np.linspace(min_x, max_x, 1000)
plot_y = poly(plot_x)
plt.plot(plot_x, plot_y, c = "green")
plt.scatter(x, y)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Degree=" + str(degree))
plt.show()
```
```python
points = np.array([(0, 0), (1, 0.8), (2, 0.9), (3, 0.1), (4, -0.8), (5, -1.0)])
interpolate_polynomial(points, len(points) - 1, np.min(points[:, 0]), np.max(points[:, 0]))
```
We see this is a very nice fit. This is expected, of course. Let's try to expand our view a little. Let's try to plot other values of $x$, further than the original ones. This is **extrapolation**.
```python
interpolate_polynomial(points, len(points) - 1, -5, 10)
```
Hmmm... it seems our polynomial goes a little wild outside the original range. This is to show how **extrapolation can be quite dangerous**.
Let's try a lower polynomial degree now. We used 4, how about 3, 2 and 1?
**Note:** We can add titles to every plot so that we know what exactly we're doing. The title may be passed as an additional parameter to our function.
```python
interpolate_polynomial(points, 3, np.min(points[:, 0]), np.max(points[:, 0]))
interpolate_polynomial(points, 2, np.min(points[:, 0]), np.max(points[:, 0]))
interpolate_polynomial(points, 1, np.min(points[:, 0]), np.max(points[:, 0]))
```
We see the fitting curves (or line in the last case) struggle more and more and they don't pass through every point. This breaks our assumptions but it can be very useful.
Okay, one more thing. How about increasing the degree? Let's try 5, 7 and 10. Python might complain a little, just ignore it, everything is fine... sort of :).
```python
interpolate_polynomial(points, 5, np.min(points[:, 0]), np.max(points[:, 0]))
interpolate_polynomial(points, 7, np.min(points[:, 0]), np.max(points[:, 0]))
interpolate_polynomial(points, 10, np.min(points[:, 0]), np.max(points[:, 0]))
```
Those graphs look pretty much the same. But that's the point exactly. I'm being quite sneaky here. Let's try to expand our view once again and see what our results really look like.
```python
interpolate_polynomial(points, 5, -10, 10)
interpolate_polynomial(points, 7, -10, 10)
interpolate_polynomial(points, 10, -10, 10)
```
Now we see there are very wild differences. Even though the first two plots look quite similar, look at the $y$ values - they're quite different.
So, these are the dangers of interpolation. Use a too high degree, and you get "the polynomial wiggle". These are all meant to represent **the same** data points but they look insanely different. Here's one more comparison.
```python
interpolate_polynomial(points, len(points) - 1, -2, 7)
interpolate_polynomial(points, len(points) + 1, -2, 7)
```
Now we can see what big difference even a small change in degree can make. This is why we have to choose our interpolating functions very carefully. Generally, a lower degree means a simpler function, which is to be preferred. See [Occam's razor](https://en.wikipedia.org/wiki/Occam%27s_razor).
And also, **we need to be very careful about our assumptions**.
```python
points = np.array([(-5, 0.03846), (-4, 0.05882), (-3, 0.1), (-2, 0.2), (-1, 0.5), (0, 1), (1, 0.5), (2, 0.2), (3, 0.1), (4, 0.05882), (5, 0.03846)])
interpolate_polynomial(points, len(points) - 1, np.min(points[:, 0]), np.max(points[:, 0]))
```
This one definitely looks strange. Even stranger, if we remove the outermost points... ($x = \pm 5$), we get this
```python
points = np.array([(-4, 0.05882), (-3, 0.1), (-2, 0.2), (-1, 0.5), (0, 1), (1, 0.5), (2, 0.2), (3, 0.1), (4, 0.05882)])
interpolate_polynomial(points, len(points) - 1, np.min(points[:, 0]), np.max(points[:, 0]))
```
This is because the generating function is not a polynomial. It's actually:
$$ y = \frac{1}{1 + x^2} $$
Plot the polynomial interpolation and the real generating function **on the same plot**. You may need to modify the original plotting function or just copy its contents.
```python
def plot_function(points, min_x, max_x):
    x = points[:, 0]
    y = points[:, 1]
    # Interpolating polynomial through all the points
    poly = np.poly1d(np.polyfit(x, y, len(points) - 1))
    plot_x = np.linspace(min_x, max_x, 1000)
    # Real generating function y = 1 / (1 + x^2)
    plot_y = 1 / (1 + plot_x ** 2)
    plt.plot(plot_x, poly(plot_x), c = "red", label = "interpolating polynomial")
    plt.plot(plot_x, plot_y, c = "green", label = "generating function")
    plt.scatter(x, y)
    plt.xlabel("x")
    plt.ylabel("y")
    plt.legend()
    plt.show()
```
```python
points = np.array([(-5, 0.03846), (-4, 0.05882), (-3, 0.1), (-2, 0.2), (-1, 0.5), (0, 1), (1, 0.5), (2, 0.2), (3, 0.1), (4, 0.05882), (5, 0.03846)])
plot_function(points, np.min(points[:, 0]), np.max(points[:, 0]))
```
```python
points = np.array([(-4, 0.05882), (-3, 0.1), (-2, 0.2), (-1, 0.5), (0, 1), (1, 0.5), (2, 0.2), (3, 0.1), (4, 0.05882)])
plot_function(points, np.min(points[:, 0]), np.max(points[:, 0]))
```
### Problem 2. Complex Numbers as Vectors
We saw that a complex number $z = a + bi$ is equivalent to (and therefore can be represented as) the ordered tuple $(a; b)$, which can be plotted in a 2D space. So, complex numbers and 2D points are equivalent. What is more, we can draw a vector from the origin of the coordinate plane to our point. This is called a point's **radius-vector**.
Let's try plotting complex numbers as radius vectors. Don't forget to label the real and imaginary axes. Also, move the axes to the origin. Hint: These are called "spines"; you'll need to move 2 of them to the origin and remove the other 2 completely. Hint 2: You already did this in the previous lab.
We can use `plt.quiver()` to plot the vector. It can behave a bit strangely, so we'll need to set the scale of the vectors to be the same as the scale on the graph axes:
```python
plt.quiver(0, 0, z.real, z.imag, angles = "xy", scale_units = "xy", scale = 1)
```
Other than that, the main parameters are: $x_{begin}$, $y_{begin}$, $x_{length}$, $y_{length}$ in that order.
Now, set the aspect ratio of the axes to be equal. Also, add grid lines. Set the axis numbers (called ticks) to be something like `range(-3, 4)` for now.
```python
plt.xticks(range(-3, 4))
plt.yticks(range(-3, 4))
```
If you wish to, you can be a bit more clever with the tick marks. Find the minimal and maximal $x$ and $y$ values and set the ticks according to them. It's a good practice not to jam the plot too much, so leave a little bit of space. That is, if the actual x-range is $[-2; 2]$, set the plotting to be $[-2.5; 2.5]$ for example. Otherwise, the vector heads (arrows) will be "jammed" into a corner or side of the plot.
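One possible way to do that (a sketch; the fixed tick range is kept in the function below for simplicity):
```python
# Sketch: pick the tick range from the numbers themselves, with a bit of padding,
# so the arrow heads are not jammed against the edge of the plot.
def set_adaptive_ticks(numbers, padding=0.5):
    max_abs = max(max(abs(z.real), abs(z.imag)) for z in numbers)
    limit = int(np.ceil(max_abs + padding))
    plt.xticks(range(-limit, limit + 1))
    plt.yticks(range(-limit, limit + 1))
    plt.xlim(-limit - padding, limit + padding)
    plt.ylim(-limit - padding, limit + padding)
```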
```python
def plot_complex_number(z):
"""
Plots the complex number z as a radius vector in the 2D space
"""
ax = plt.gca()
ax.spines["bottom"].set_position("zero")
ax.spines["left"].set_position("zero")
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
plt.quiver(0, 0, z.real, z.imag, angles = "xy", scale_units = "xy", scale = 1)
plt.xlabel("Re")
plt.ylabel("Im")
plt.xticks(range(-3, 4))
plt.yticks(range(-3, 4))
plt.show()
plot_complex_number(2 + 3j)
```
How about many numbers? We'll need to get a little bit more creative. First, we need to create a 2D array, each element of which will be a 4-element array: `[0, 0, z.real, z.imag]`. Next, `plt.quiver()` can accept a range of values. Look at [this StackOverflow post](https://stackoverflow.com/questions/12265234/how-to-plot-2d-math-vectors-with-matplotlib) for details and adapt your code.
```python
def plot_complex_numbers(numbers, colors):
"""
Plots the given complex numbers as radius vectors in the 2D space
"""
ax = plt.gca()
ax.spines["bottom"].set_position("zero")
ax.spines["left"].set_position("zero")
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
nums_twod = np.array([[z.real, z.imag] for z in numbers])
U, V = zip(*nums_twod)
plt.quiver(0, 0, U, V, angles = "xy", scale_units = "xy", scale = 1, color = colors)
plt.xlabel("Re")
plt.ylabel("Im")
plt.xticks(range(-4, 5))
plt.yticks(range(-4, 5))
plt.show()
plot_complex_numbers([2 + 3j, -2 - 1j, -3, 2j], ["green", "red", "blue", "orange"])
```
Now let's see what the operations look like. Let's add two numbers and plot the result.
```python
z1 = 2 + 3j
z2 = 1 - 1j
plot_complex_numbers([z1, z2, z1 + z2], ["red", "blue", "green"])
```
We can see that adding the complex numbers is equivalent to adding vectors (remember the "parallelogram rule"). As special cases, let's try adding pure real and pure imaginary numbers:
```python
z1 = 2 + 3j
z2 = 2 + 0j
plot_complex_numbers([z1, z2, z1 + z2], ["red", "blue", "green"])
```
```python
z1 = 2 + 3j
z2 = 0 + 2j
plot_complex_numbers([z1, z2, z1 + z2], ["red", "blue", "green"])
```
How about multiplication? First we know that multiplying by 1 gives us the same vector and mulpiplying by -1 gives us the reversed version of the same vector. How about multiplication by $\pm i$?
```python
z = 2 + 3j
plot_complex_numbers([z, z * 1], ["red", "blue"])
plot_complex_numbers([z, z * -1], ["red", "blue"])
plot_complex_numbers([z, z * 1j], ["red", "blue"])
plot_complex_numbers([z, z * -1j], ["red", "blue"])
```
So, multiplication by $i$ is equivalent to 90-degree rotation. We can actually see the following equivalence relationships between multiplying numbers and rotation about the origin:
| Real | Imaginary | Result rotation |
|------|-----------|-----------------|
| 1 | 0 | $0^\circ$ |
| 0 | 1 | $90^\circ$ |
| -1 | 0 | $180^\circ$ |
| 0 | -1 | $270^\circ$ |
Once again, we see the power of abstraction and algebra in practice. We know that complex numbers and 2D vectors are equivalent. Now we see something more: addition and multiplication are equivalent to translation (movement) and rotation!
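We can also check these rotation angles numerically with `np.angle` (a quick verification, separate from the plotting code):
```python
# Multiplying by 1, i, -1, -i rotates by 0, 90, 180 and 270 degrees and keeps the length.
z = 2 + 3j
for w in [1, 1j, -1, -1j]:
    rotation = (np.degrees(np.angle(z * w)) - np.degrees(np.angle(z))) % 360
    print(w, round(rotation, 6), np.isclose(abs(z * w), abs(z)))
```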
Let's test the multiplication some more. The resulting vector is one of the original vectors *scaled* by the length of the other and *rotated* by its angle: the lengths multiply and the angles add:
```python
z1 = 2 + 3j
z2 = 1 - 2j
plot_complex_numbers([z1, z2, z1 * z2], ["red", "blue", "green"])
```
### Problem 3. Recursion and Fractals
> "To understand recursion, you first need to understand recursion."
There are four main parts to a recursive function:
1. Bottom - when the recursion should finish
2. Operation - some meaningful thing to do
3. Recursive call - calling the same function
4. Clean-up - returning all data to its previous state (this reverses the effect of the operation)
Let's do one of the most famous recursion examples. And I'm not talking about Fibonacci here. Let's draw a tree using recursive functions.
The figure we're going to draw is called a **fractal**. It's self-similar, which means that if you zoom in on a part of it, it will look the same. You can see fractals everywhere in nature, with broccoli being one of the prime examples. Have a look:
First, we need to specify the recursive part. In order to draw a tree, we need to draw a line of a given length (which will be the current branch), and then draw two more lines to the left and right. By "left" and "right", we should mean "rotation by a specified angle".
So, this is how to draw a branch: draw a line and prepare to draw two more branches to the left and right. This is going to be our recursive call.
To make things prettier, more natural-looking (and have a natural end to our recursion), let's draw each "sub-branch" a little shorter. If the branch becomes too short, it won't have "child branches". This will be the bottom of our recursion.
There's one more important part of recursion, and this is **"the clean-up"**. After we did something in the recursive calls, it's very important to return the state of everything as it was **before** we did anything. In this case, after we draw a branch, we go back to our starting position.
Let's first import the most import-ant (no pun intended...) Python drawing library: `turtle`! In order to make things easier, we'll import all methods directly.
```python
from turtle import *
```
You can look up the docs about turtle if you're more interested. The basic things we're going to use are going forward and backward by a specified number of pixels and turning left and right by a specified angle (in degrees).
Let's now define our recursive function:
```python
def draw_branch(branch_length, angle):
if branch_length > 5:
forward(branch_length)
right(angle)
draw_branch(branch_length - 15, angle)
left(2 * angle)
draw_branch(branch_length - 15, angle)
right(angle)
backward(branch_length)
```
And let's call it:
```python
draw_branch(100, 20)
```
We need to start the tree not at the middle, but toward the bottom of the screen, so we need to make a few more adjustments. We can wrap the setup in another function and call it. Let's start one trunk length below the center (the trunk length is the length of the longest line).
```python
def draw_tree(trunk_length, angle):
speed("fastest")
left(90)
up()
backward(trunk_length)
down()
draw_branch(trunk_length, angle)
```
Note that the graphics will show in a separate window. Also note that sometimes you might get bugs. If you do, go to Kernel > Restart.
Experiment with different lengths and angles. Especially interesting angles are $30^\circ$, $45^\circ$, $60^\circ$ and $90^\circ$.
```python
draw_tree(100, 30)
```
```python
draw_tree(100, 45)
```
```python
draw_tree(100, 90)
```
Now modify the original function a little: draw the lines with different thicknesses. Provide the trunk thickness at the initial call. Similar to how branches get shorter, they should also get thinner.
```python
def draw_branch(branch_length, angle, thickness):
if branch_length > 5:
width(thickness)
forward(branch_length)
right(angle)
draw_branch(branch_length - 15, angle, abs(thickness - 1))
left(2 * angle)
draw_branch(branch_length - 15, angle, abs(thickness - 1))
right(angle)
backward(branch_length)
```
```python
def draw_tree(trunk_length, angle, thickness):
speed("fastest")
left(90)
up()
backward(trunk_length)
down()
draw_branch(trunk_length, angle, thickness)
```
```python
draw_tree(100, 20, 5)
```
#### * Optional problem
Try to draw another kind of fractal graphic using recursion and the `turtle` library. Two very popular examples are the "Koch snowflake" and the "Sierpinski triangle". You can also modify the original tree algorithm to create more natural-looking trees. You can, for example, play with angles, number of branches, lengths, and widths. The Internet has a lot of ideas about this :). Hint: Look up **"L-systems"**.
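One possible starting point for the Koch snowflake, reusing the `turtle` functions already imported above (the segment splitting below is the standard construction; the concrete lengths are arbitrary):
```python
def koch_segment(length, depth):
    """Draw one Koch segment: a straight line at depth 0, otherwise four smaller segments."""
    if depth == 0:
        forward(length)
    else:
        for turn in (60, -120, 60, 0):
            koch_segment(length / 3, depth - 1)
            left(turn)

def koch_snowflake(length, depth):
    speed("fastest")
    for _ in range(3):          # the snowflake is three Koch segments joined at 120 degrees
        koch_segment(length, depth)
        right(120)

# koch_snowflake(200, 3)        # uncomment to draw (opens the turtle window)
```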
### Problem 4. Run-length Encoding
One application of algebra and basic math can be **compression**. This is a way to save data in less space than it originally takes. The most basic form of compression is called [run-length encoding](https://en.wikipedia.org/wiki/Run-length_encoding).
Write a function that encodes a given text. Write another one that decodes.
We can see that RLE is not very useful in the general case. But it can be extremely useful if we have very few symbols. An example of this can be DNA and protein sequences. DNA code, for example, has only 4 characters.
Test your encoding and decoding functions on a DNA sequence (you can look up some on the Internet). Measure how much your data is compressed relative to the original.
```python
from itertools import groupby
import re

def encode(text):
"""
Returns the run-length encoded version of the text
(numbers after symbols, length = 1 is skipped)
"""
splitted = list(text)
grouped = [list(j) for i, j in groupby(splitted)]
encoded = ''
for group in grouped:
encoded += group[0]
encoded += str(len(group)) if len(group) > 1 else ''
return encoded
def decode(text):
"""
Decodes the text using run-length encoding
"""
return re.sub(r'(\D)(\d*)', lambda m: m.group(1) * int(m.group(2)) if m.group(2) != '' else m.group(1), text)
```
```python
# Tests
# Test that the functions work on their own
assert encode("AABCCCDEEEE") == "A2BC3DE4"
assert decode("A2BC3DE4") == "AABCCCDEEEE"
# Test that the functions really invert each other
assert decode(encode("AABCCCDEEEE")) == "AABCCCDEEEE"
assert encode(decode("A2BC3DE4")) == "A2BC3DE4"
```
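As a small usage example (the DNA-like string below is made up, not taken from a real sequence), we can measure the compression ratio directly:
```python
dna = "AAAACCCGTTTTTTAAAGGGGGCCCCCCTTAA" * 10   # toy DNA-like sequence
encoded_dna = encode(dna)
assert decode(encoded_dna) == dna
print("Compressed size:", round(len(encoded_dna) / len(dna) * 100, 1), "% of the original")
```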
### Problem 5. Function Invertibility and Cryptography
As we already saw, some functions can be inverted: if we know the output, we can determine directly which input generated it. This is possible exactly when the function is a **one-to-one correspondence** (bijection).
However, not all functions are created the same. Some functions are easy to compute but their inverses are extremely difficult. A very important example is **number factorization**. It's relatively easy (computationally) to multiply numbers but factoring them is quite difficult. Let's run an experiment.
We'll need a function to generate random n-bit numbers. One such function can be found in the `Crypto` package:
```python
from Crypto.Util import number
random_integer = number.getRandomNBitInteger(n_bits)  # n_bits = desired number of bits
```
We could, of course, write our factorization by hand but we'll use `sympy`
```python
from sympy.ntheory import factorint
factorint(1032969399047817906432668079951) # {3: 2, 79: 1, 36779: 1, 7776252885493: 1, 5079811103: 1}
```
This function returns a `dict` where the keys are the prime factors and the values are the corresponding exponents (how many times each factor appears in the product).
We'll also need a tool to accurately measure performance. Have a look at [this one](https://docs.python.org/3/library/time.html#time.time) for example.
Specify a sequence of bit lengths, in increasing order. For example, you might choose something like `[10, 20, 25, 30, 32, 33, 35, 38, 40]`. Depending on your computer's abilities you can go as high as you want. For each bit length, generate a number. See how much time it takes to factor it. Then see how much time it takes to multiply the factors. Be careful how you measure these. You shouldn't include the number generation (or any other external functions) in your timing.
In order to have better accuracy, don't do this once per bit length. Do it, for example, five times, and average the results.
Plot all multiplication and factorization times as a function of the number of bits. You should see that factorization is much, much slower. If you don't see this, just try larger numbers :D.
```python
n_bits = [10, 20, 25, 30, 32, 33, 35, 38, 40, 64, 128]
f_times = []
mul_times = []
factorized = []
```
```python
import time

def test_factorization():
for bit in n_bits:
random_integer = number.getRandomNBitInteger(bit)
start = time.time()
factorized_int = factorint(random_integer)
end = time.time()
f_times.append(end - start)
factorized.append(factorized_int)
```
```python
def test_multiply():
    for factors in factorized:
        product = 1                      # rebuild each number from scratch
        start = time.time()
        for factor, exponent in factors.items():
            product *= factor ** exponent
        end = time.time()
        mul_times.append(end - start)
```
```python
test_factorization()
test_multiply()
```
```python
plt.plot(n_bits, f_times, c = 'red', label = "Factorization")
plt.plot(n_bits, mul_times, c = 'green', label = "Multiplication")
plt.legend()
plt.title("Factorization vs Multiplication")
plt.show()
```
### * Problem 6. Diffie - Hellman Simulation
As we already saw, there are functions which are very easy to compute in the "forward" direction but really difficult (computationally) to invert (that is, determine the input from the output). There is a special case: the function may have a hidden "trap door". If you know where that door is, you can invert the function easily. This statement is at the core of modern cryptography.
Look up **Diffie - Hellman key exchange** (here's a [video](https://www.youtube.com/watch?v=cM4mNVUBtHk) on that but feel free to use anything else you might find useful).
Simulate the algorithm you just saw. Generate large enough numbers so the difference is noticeable (say, factoring takes 10-15 seconds). Simulate both participants in the key exchange. Simulate an eavesdropper.
First, make sure that after both participants run the algorithm, they have *the same key* (they generate the same number).
Second, see how long it takes for them to exchange keys.
Third, see how long it takes the eavesdropper to arrive at the correct shared secret.
You should be able to see **the power of cryptography**. In this case, it's not that the function is irreversible. It can be reversed, but it takes a really long time (and with more bits, we're talking billions of years). However, if you know something else (this is called a **trap door**), the function becomes relatively easy to invert.
```python
# Write your code here
```
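A minimal sketch of such a simulation is given below; the prime, the base and the brute-force eavesdropper are illustrative choices only, not a reference solution (real deployments use far larger parameters).
```python
import random
import time

p = 15485863          # a small prime, chosen only so the brute force finishes quickly
g = 2                 # public base

def make_keys():
    private = random.randrange(2, p - 1)
    return private, pow(g, private, p)

# Both participants publish a public key and combine it with their own secret
a_priv, a_pub = make_keys()
b_priv, b_pub = make_keys()
key_alice = pow(b_pub, a_priv, p)
key_bob = pow(a_pub, b_priv, p)
print("Shared keys match:", key_alice == key_bob)

# The eavesdropper sees only p, g, a_pub, b_pub and has to solve a discrete logarithm,
# here by brute force - already noticeably slower, and hopeless for realistic sizes
start = time.time()
guess, value = 0, 1
while value != a_pub:
    guess += 1
    value = (value * g) % p
print("Eavesdropper recovered the key:", pow(b_pub, guess, p) == key_alice,
      "in", round(time.time() - start, 2), "seconds")
```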
### ** Problem 7. The Galois Field in Cryptography
Research about the uses of the Galois field. What are its properties? How can it be used in cryptography? Write a simple cryptosystem based on the field.
You can use the following questions to facilitate your research:
* What is a field?
* What is GF(2)? Why is it an algebraic field?
* What is perfect secrecy? How does it relate to the participants in the conversation, and to the outside eavesdropper?
* What is symmetrical encryption?
* How to encrypt one-bit messages?
* How to extend the one-bit encryption system to many bits?
* Why is the system decryptable? How do the participants decrypt the encrypted messages?
* Why isn't the eavesdropper able to decrypt?
* What is a one-time pad?
* How does the one-time pad achieve perfect secrecy?
* What happens if we try to use a one-time pad many times?
* Provide an example where you break the "many-time pad" security
* What are some current enterprise-grade applications of encryption over GF(2)?
* Implement a cryptosystem based on GF(2). Show correctness on various test cases (a minimal XOR-based sketch is given below)
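As a minimal illustration of the one-bit idea extended to whole bytes (the helper below is a sketch, not a full cryptosystem): addition in GF(2) is XOR, so encryption and decryption are the same operation.
```python
import secrets

def xor_bytes(message: bytes, pad: bytes) -> bytes:
    assert len(message) == len(pad), "a one-time pad must be as long as the message"
    return bytes(m ^ k for m, k in zip(message, pad))

plaintext = b"HELLO GF(2)"
pad = secrets.token_bytes(len(plaintext))       # fresh, uniformly random key
ciphertext = xor_bytes(plaintext, pad)
assert xor_bytes(ciphertext, pad) == plaintext  # decrypting = XOR-ing a second time
```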
### ** Problem 8. Huffman Compression Algorithm
Examine and implement the **Huffman algorithm** for compressing data. It's based on information theory and probability theory. Document your findings and provide your implementation.
This algorithm is used for **lossless compression**: compressing data without loss of quality. You can use the following checklist:
* What is the difference between lossless and lossy compression?
* When can we get away with lossy compression?
* What is entropy?
* How are Huffman trees constructed?
* Provide a few examples
* How can we get back the uncompressed data from the Huffman tree?
* How and where are Huffman trees stored?
* Implement the algorithm. Add any other formulas / assumptions / etc. you might need (a minimal construction sketch follows this checklist).
* Test the algorithm. A good measure would be percentage compression: $$\frac{\text{compressed}}{\text{uncompressed}} \cdot 100\%$$
* How well does Huffman's algorithm perform compared to other compression algorithms (e.g. LZ77)?
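A minimal construction sketch is given below (assumptions: single characters as symbols, a `heapq`-based merge, and no handling of the file format or tree storage):
```python
import heapq
from collections import Counter

def huffman_code(text):
    """Return a prefix-free code table {symbol: bitstring} built from symbol frequencies."""
    counts = Counter(text)
    heap = [[freq, i, sym] for i, (sym, freq) in enumerate(counts.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                            # degenerate one-symbol input
        return {heap[0][2]: "0"}
    tie = len(heap)                               # tie-breaker so subtrees never get compared
    while len(heap) > 1:
        low = heapq.heappop(heap)
        high = heapq.heappop(heap)
        heapq.heappush(heap, [low[0] + high[0], tie, (low[2], high[2])])
        tie += 1
    def walk(node, prefix, table):
        if isinstance(node, tuple):               # internal node: descend into both children
            walk(node[0], prefix + "0", table)
            walk(node[1], prefix + "1", table)
        else:
            table[node] = prefix
        return table
    return walk(heap[0][2], "", {})

codes = huffman_code("AABCCCDEEEE")
compressed_bits = sum(len(codes[ch]) for ch in "AABCCCDEEEE")
print(codes, compressed_bits, "bits vs", 8 * len("AABCCCDEEEE"), "bits uncompressed")
```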
# Characterization of Discrete Systems in the Time Domain
*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelor's module Signals and Systems, Communications Engineering, Universität Rostock. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).*
## Impulse Response
The concept of the so-called impulse response of a discrete linear time-invariant (LTI) system and its connection to linear difference equations is introduced in the following.
### Output Signal
The response $y[k] = \mathcal{H} \{ x[k] \}$ of an LTI system to an arbitrary input signal $x[k]$ is derived. By applying the [sifting-property of the discrete Dirac impulse](../discrete_signals/standard_signals.ipynb#Dirac-Impulse), the input signal can be represented as
\begin{equation}
x[k] = \sum_{\kappa = -\infty}^{\infty} x[\kappa] \cdot \delta[k-\kappa]
\end{equation}
The output signal of the system is given by
\begin{equation}
y[k] = \mathcal{H} \left\{ \sum_{\kappa = -\infty}^{\infty} x[\kappa] \cdot \delta[k-\kappa] \right\}
\end{equation}
The summation and system response operator $\mathcal{H}$ can be exchanged under the assumption that the system is linear
\begin{equation}
y[k] = \sum_{\kappa = -\infty}^{\infty} x[\kappa] \cdot \mathcal{H} \left\{ \delta[k-\kappa] \right\}
\end{equation}
where $\mathcal{H} \{\cdot\}$ was only applied to the Dirac impulse, since $x[\kappa]$ can be regarded as a constant factor with respect to the index $k$.
The response of a system to a Dirac impulse as input is termed [*impulse response*](https://en.wikipedia.org/wiki/Impulse_response). It is defined as
\begin{equation}
h[k] = \mathcal{H} \left\{ \delta[k] \right\}
\end{equation}
If the system is time-invariant, the response to a shifted Dirac impulse is $\mathcal{H} \left\{ \delta[k-\kappa] \right\} = h[k-\kappa]$. Hence, for a discrete LTI system we finally get
\begin{equation}
y[k] = \sum_{\kappa = -\infty}^{\infty} x[\kappa] \cdot h[k-\kappa] = x[k] * h[k]
\end{equation}
This operation is termed linear [*convolution*](https://en.wikipedia.org/wiki/Convolution) and is commonly abbreviated by $*$. The properties of an LTI system are entirely characterized by its impulse response: the response $y[k]$ of a system to an arbitrary input signal $x[k]$ is given by the convolution of the input signal $x[k]$ with its impulse response $h[k]$.
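As a short numeric illustration (not part of the original text), the convolution sum coincides with `numpy`'s discrete convolution for finite-length signals:
```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])   # input signal x[k]
h = np.array([1.0, 0.5])        # impulse response h[k]
print(np.convolve(x, h))        # [1.  2.5 4.  1.5] = sum over kappa of x[kappa] h[k-kappa]
```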
### Relation to Difference Equation
The impulse response $h[k] = \mathcal{H} \{ \delta[k] \}$ is the response of an LTI system to a Dirac impulse at the input. It can be derived from the coefficients of a [linear difference equation representing the LTI system](difference_equation.ipynb) by computing the output signal for the input signal $x[k] = \delta[k]$. Introducing this into the [solution of the difference equation](difference_equation.ipynb#Computation-of-the-Output-Signal) yields
\begin{equation}
h[k] = \frac{1}{a_0} \left( \sum_{m=0}^{M} b_m \; \delta[k-m] - \sum_{n=1}^{N} a_n \; h[k-n] \right)
\end{equation}
### Finite Impulse Response
Let's consider the case of a [non-recursive system](difference_equation.ipynb#Recursive-and-Non-Recursive-Systems) with $a_n = 0$ for $n > 0$. Without loss of generality it can be assumed that $a_0 = 1$, since $\frac{1}{a_0}$ can be incorporated into the other coefficients by dividing them by $a_0$. The impulse response is given as
\begin{equation}
h[k] = \sum_{m=0}^{M} b_m \; \delta[k-m] = \begin{cases} b_k & \text{for } 0 \leq k \leq M \\ 0 & \text{otherwise} \end{cases}
\end{equation}
Note that the summation in the above formula constitutes a convolution between the signal given by the samples $b_m$ and the Dirac impulse $\delta[k]$. The impulse response of a non-recursive system is of finite length ($M+1$ samples). Its values are given by the coefficients $b_m$ of the linear difference equation characterizing the system. An impulse response of finite length is commonly termed a [finite impulse response (FIR)](https://en.wikipedia.org/wiki/Finite_impulse_response). The term FIR (system) is used synonymously with non-recursive system. The former relates to the length of the impulse response and the latter to the structure of the system.
**Example - Moving Average**
According to above findings, the impulse response of the [moving average filter](difference_equation.ipynb#Moving-Average) is given as
\begin{equation}
h[k] = \frac{1}{N} \cdot \text{rect}_N[k]
\end{equation}
As an alternative to the [solution of the difference equation illustrated before](difference_equation.ipynb#Moving-Average), the output signal $y[k] = \mathcal{H} \{ x[k] \}$ is computed by convolving the (same) input signal with this impulse response.
```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
%matplotlib inline
def rect(k, N):
return np.where((k >= 0) & (k < N), 1.0, 0.0)
np.random.seed(seed=0)
N = 10
k = np.arange(0, 60)
h = 1/N * rect(k, N)
x = np.cos(2*np.pi/30 * k) + .2 * np.random.normal(size=(len(k)))
y = np.convolve(x, h)
plt.figure(figsize=(10, 4))
plt.subplot(121)
plt.stem(k, x)
plt.xlabel('$k$')
plt.ylabel(r'$x[k]$')
plt.ylim([-1.5, 1.5])
plt.subplot(122)
plt.stem(k, y[0:len(x)])
plt.xlabel('$k$')
plt.ylabel('$y[k]$')
plt.ylim([-1.5, 1.5])
plt.tight_layout()
```
**Exercise**
* Compare above output signal $y[k]$ derived by convolution with the output signal derived by [solution of the difference equation](difference_equation.ipynb#Moving-Average).
### Infinite Impulse Response
Now the general case of a [recursive system](difference_equation.ipynb#Recursive-and-Non-Recursive-Systems) is regarded. By inspection of the above formula for the computation of the impulse response from the coefficients of a linear difference equation, it becomes clear that the impulse response $h[k]$ at time instant $k$ depends on the impulse response at the past time instants $k-1, \dots, k-N$. This feedback generally results in an impulse response of infinite length. Such an impulse response is commonly termed an [infinite impulse response (IIR)](https://en.wikipedia.org/wiki/Infinite_impulse_response). The term IIR (system) is used synonymously with recursive system. The former relates to the length of the impulse response and the latter to the structure of the system.
For a finite-length input signal $x[k]$ the output signal of a recursive system cannot be computed by a linear convolution $y[k] = x[k] * h[k]$ in practice. This is due to the infinite length of its impulse response. As a practical solution, the impulse response is often truncated to a finite length after it has decayed sufficiently.
**Example**
The impulse response $h[k]$ of the previously introduced [second-order recursive LTI system](difference_equation.ipynb#Second-Order-System) with the difference equation
\begin{equation}
y[k] - y[k-1] + \frac{1}{2} y[k-2] = x[k]
\end{equation}
is derived by solution of the difference equation for a Dirac impulse as input signal. As can be deduced from its difference equation, the second-order recursive system has an IIR. Its first 256 samples are computed and plotted in the following; note that fewer samples are shown in the plot for ease of illustration.
```python
def dirac(k):
return np.where(k == 0, 1.0, 0.0)
a = [1.0, -1.0, 1/2]
b = [1.0]
k = np.arange(256)
x = dirac(k)
h = signal.lfilter(b, a, x)
plt.figure(figsize=(6, 3))
plt.stem(k, h)
plt.xlabel('$k$')
plt.ylabel(r'$h[k]$')
plt.axis([0, 25, -.5, 1.2])
```
[0, 25, -0.5, 1.2]
In order to illustrate the amplitude decay of the impulse response over a wider range, its magnitude $A[k]$ is plotted in [decibel (dB)](https://en.wikipedia.org/wiki/Decibel)
\begin{equation}
A[k] = 20 \cdot \log_{10} ( |h[k]| ) \quad \text{in dB}
\end{equation}
```python
import warnings
warnings.filterwarnings('ignore', 'divide by zero encountered in log10')
plt.figure(figsize=(10, 4))
plt.stem(k, 20*np.log10(np.abs(h)))
plt.xlabel('$k$')
plt.ylabel(r'$|h[k]|$ in dB')
plt.axis([0, k[-1], -800, 0])
plt.grid()
```
It's obvious that the magnitude of the impulse response has decayed to quite small values after 256 samples. The truncated impulse response can be used to calculate the output signal $y[k]$ by convolution with the input signal $x[k] = \text{rect}_{20}[k]$. The resulting output signal is plotted for illustration.
```python
def rect(k, N):
return np.where((k >= 0) & (k < N), 1.0, 0.0)
x = rect(k, 20)
y_ir = np.convolve(h, x)
plt.figure(figsize=(6, 3))
plt.stem(k, y_ir[:len(k)])
plt.xlabel('$k$')
plt.ylabel(r'$y[k]$')
plt.axis([0, 40, -.7, 2.6])
```
[0, 40, -0.7, 2.6]
**Exercise**
* Compare the output signal computed by convolution with the one [computed by solution of the difference equation](difference_equation.ipynb#Second-Order-System).
* Investigate the effect of truncating an IIR on the output signal computed by convolution
* Split the IIR $h[k]$ into two parts: a first part holding the samples of the truncated impulse response and a second part holding the remaining samples
* Split the output signal into two parts by splitting the convolution using the split impulse response (a minimal sketch is given below)
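A minimal sketch for this exercise, reusing `h` and `x` from the cells above and assuming a split point of $K=32$ samples: by linearity, convolving with the two parts separately and summing reproduces the full result.
```python
K = 32                                            # assumed split point
h_head = np.where(np.arange(len(h)) < K, h, 0)    # truncated impulse response
h_tail = h - h_head                               # remaining samples
y_split = np.convolve(x, h_head) + np.convolve(x, h_tail)
np.allclose(y_split, np.convolve(x, h))           # True: convolution is linear in h
```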
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Continuous- and Discrete-Time Signals and Systems - Theory and Computational Examples*.
# A Quantum distance-based classifier (part 1)
<center> Robert Wezeman, TNO </center>
<a name="contents"></a>
## Table of Contents
* [Introduction](#introduction)
* [Problem](#problem)
* [Amplitude Encoding](#amplitude)
* [Data preprocessing](#dataset)
* [Quantum algorithm](#algorithm)
* [Conclusion and further work](#conclusion)
```python
# Import external python file
import numpy as np
from data_plotter import get_bin, DataPlotter # for easier plotting
DataPlotter = DataPlotter()
```
$$ \newcommand{\ket}[1]{\left|{#1}\right\rangle} $$
<a name="introduction"></a>
## Introduction
Consider the following scatter plot of the first two flowers in [the famous Iris flower data set](https://en.wikipedia.org/wiki/Iris_flower_data_set)
Notice that just two features, the sepal width and the sepal length, divide the two different Iris species into different regions in the plot. This gives rise to the question: given only the sepal length and sepal width of a flower can we classify the flower by their correct species? This type of problem, also known as [statistical classification](https://en.wikipedia.org/wiki/Statistical_classification), is a common problem in machine learning. In general, a classifier is constructed by letting it learn a function which gives the desired output based on a sufficient amount of data. This is called supervised learning, as the desired output (the labels of the data points) are known. After learning, the classifier can classify an unlabeled data point based on the learned function. The quality of a classifier improves if it has a larger training dataset it can learn on. The true power of this quantum classifier becomes clear when using extremely large data sets.
In this notebook we will describe how to build a distance-based classifier on the Quantum Inspire using amplitude encoding. It turns out that, once the system is initialized in the desired state, regardless of the size of training data, the actual algorithm consists of only 3 actions, one Hadamard gate and two measurements. This has huge implications for the scalability of this problem for large data sets. Using only 4 qubits we show how to encode two data points, both of a different class, to predict the label for a third data point. In this notebook we will demonstrate how to use the Quantum Inspire SDK using QASM-code, we will also provide the code to obtain the same results for the ProjectQ framework.
[Back to Table of Contents](#contents)
<a name="problem"></a>
## Problem
We define the following binary classification problem: Given the data set
$$\mathcal{D} = \Big\{ ({\bf x}_1, y_1), \ldots ({\bf x}_M , y_M) \Big\},$$
consisting of $M$ data points $x_i\in\mathbb{R}^n$ and corresponding labels $y_i\in \{-1, 1\}$, give a prediction for the label $\tilde{y}$ corresponding to an unlabeled data point $\bf\tilde{x}$. The classifier we shall implement with our quantum circuit is a distance-based classifier and is given by
\begin{equation}\newcommand{\sgn}{{\rm sgn}}\newcommand{\abs}[1]{\left\lvert#1\right\rvert}\label{eq:classifier} \tilde{y} = \sgn\left(\sum_{m=0}^{M-1} y_m \left[1-\frac{1}{4M}\abs{{\bf\tilde{x}}-{\bf x}_m}^2\right]\right). \hspace{3cm} (1)\end{equation}
This is a typical $M$-nearest-neighbor model, where each data point is given a weight related to the distance measure. To implement this classifier on a quantum computer, we need a way to encode the information of the training data set in a quantum state. We do this by first encoding the training data in the amplitudes of a quantum system; the amplitudes are then manipulated by quantum gates such that we obtain a result representing the above classifier. Encoding input features in the amplitudes of a quantum system is known as amplitude encoding.
[Back to Contents](#contents)
<a name="amplitude"></a>
## Amplitude encoding
Suppose we want to encode a classical vector $\bf{x}\in\mathbb{R}^N$ by some amplitudes of a quantum system. We assume $N=2^n$ and that $\bf{x}$ is normalised to unit length, meaning ${\bf{x}^T{x}}=1$. We can encode $\bf{x}$ in the amplitudes of a $n$-qubit system in the following way
\begin{equation}
{\bf x} = \begin{pmatrix}x^1 \\ \vdots \\ x^N\end{pmatrix} \Longleftrightarrow{} \ket{\psi_{{\bf x}}} = \sum_{i=0}^{N-1}x^i\ket{i},
\end{equation}
where $\ket{i}$ is the $i^{th}$ entry of the computational basis $\left\{\ket{0\ldots0},\ldots,\ket{1\ldots1}\right\}$. By applying an efficient quantum algorithm (resources growing polynomially in the number of qubits $n$), one can manipulate the $2^n$ amplitudes super efficiently, that is in $\mathcal{O}\left(\log N\right)$ operations. This follows as manipulating all amplitudes requires an operation on each of the $n = \mathcal{O}\left(\log N\right)$ qubits. For algorithms to be truly super-efficient, the phase where the data is encoded must also be at most polynomial in the number of qubits. The idea of quantum memory, sometimes referred to as quantum RAM (QRAM), is a particularly interesting one. Suppose we first run some quantum algorithm, for example in quantum chemistry, which outputs some resulting quantum states. If these states could be fed into a quantum classifier, the encoding phase is not needed anymore. Finding efficient data encoding systems is still a topic of active research. We will restrict ourselves here to the implementation of the algorithm; more details can be found in the references.
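As a small numeric illustration (not part of the original text), a normalised two-feature data point maps directly onto the amplitudes of a single qubit:
```python
import numpy as np

x = np.array([0.8670, 0.4984])        # (approximately) unit-length feature vector
psi = x / np.linalg.norm(x)           # amplitudes of |psi_x> = x^0 |0> + x^1 |1>
print(psi, np.sum(np.abs(psi) ** 2))  # amplitudes and total probability 1.0
```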
<a name="state"></a>
The algorithm requires the $n$-qubit quantum system to be in the following state
\begin{equation}\label{eq:prepstate}
\ket{\mathcal{D}} = \frac{1}{\sqrt{2M}} \sum_{m=0}^{M-1} \ket{m}\Big(\ket{0}\ket{\psi_{\bf\tilde{{x}}}} + \ket{1}\ket{\psi_{\bf{x}_m}}\Big)\ket{y_m}.\hspace{3cm} (2)
\end{equation}
Here $\ket{m}$ is the $m^{th}$ state of the computational basis used to keep track of the $m^{th}$ training input. The second register is a single ancillary qubit entangled with the third register. The excited state of the ancillary qubit is entangled with the $m^{th}$ training state $\ket{\psi_{{x}_m}}$, while the ground state is entangled with the new input state $\ket{\psi_{\tilde{x}}}$. The last register encodes the label of the $m^{th}$ training data point by
\begin{equation}
\begin{split}
y_m = -1 \Longleftrightarrow& \ket{y_m} = \ket{0},\\
y_m = 1 \Longleftrightarrow& \ket{y_m} = \ket{1}.
\end{split}
\end{equation}
Once in this state the algorithm only consists of the following three operations:
1. Apply a Hadamard gate on the second register to obtain
$$\frac{1}{2\sqrt{M}} \sum_{m=0}^{M-1} \ket{m}\Big(\ket{0}\ket{\psi_{\bf\tilde{x}+x_m}} + \ket{1}\ket{\psi_{\bf\tilde{x}-x_m}}\Big)\ket{y_m},$$
where $\ket{\psi_{\bf\tilde{{x}}\pm{x}_m}} = \ket{\psi_{\tilde{\bf{x}}}}\pm \ket{\psi_{\bf{x}_m}}$.
2. Measure the second qubit. We restart the algorithm if we measure a $\ket{1}$ and only continue if we are in the $\ket{0}$ branch. We continue the algorithm with a probability $p_{acc} = \frac{1}{4M}\sum_M\abs{{\bf\tilde{x}}+{\bf x}_m}^2$, for standardised random data this is usually around $0.5$. The resulting state is given by
\begin{equation}
\frac{1}{2\sqrt{Mp_{acc}}}\sum_{m=0}^{M-1}\sum_{i=0}^{N-1} \ket{m}\ket{0}\left({\tilde{x}}^i + x_m^i\right)\ket{i}\ket{y_m}.
\end{equation}
3. Measure the last qubit $\ket{y_m}$. The probability that we measure outcome zero is given by
\begin{equation}
p(q_4=0) = \frac{1}{4Mp_{acc}}\sum_{m|y_m=0}\abs{\bf{\tilde{{x}}+{x}_m}}^2.
\end{equation}
In the special case where the amount of training data for both labels is equal, this last measurement relates to the classifier as described in previous section by
\begin{equation}
\tilde{y} = \left\{
\begin{array}{lr}
-1 & : p(q_4 = 0 ) > p(q_4 = 1)\\
+1 & : p(q_4 = 0 ) < p(q_4 = 1)
\end{array}
\right.
\end{equation}
By setting $\tilde{y}$ to be the most likely outcome of many measurement shots, we obtain the desired distance-based classifier.
[Back to Table of Contents](#contents)
<a name="dataset"></a>
## Data preprocessing
In the previous section we saw that for amplitude encoding we need a data set which is normalised. Luckily, it is always possible to bring data to this desired form with some data transformations. Firstly, we standardise the data to have zero mean and unit variance, then we normalise the data to have unit length. Both these steps are common methods in machine learning. Effectively, we only have to consider the angle between different data features.
To illustrate this procedure we apply it to the first two features of the famous Iris data set:
```python
# Plot the data
from sklearn.datasets import load_iris
iris = load_iris()
features = iris.data.T
data = [el[0:101] for el in features][0:2] # Select only the first two features of the dataset
half_len_data = len(data[0]) // 2
iris_setosa = [el[0:half_len_data] for el in data[0:2]]
iris_versicolor = [el[half_len_data:-1] for el in data[0:2]]
DataPlotter.plot_original_data(iris_setosa, iris_versicolor); # Function to plot the data
```
```python
# Rescale the data
from sklearn import preprocessing # Module contains method to rescale data to have zero mean and unit variance
# Rescale whole data-set to have zero mean and unit variance
features_scaled = [preprocessing.scale(el) for el in data[0:2]]
iris_setosa_scaled = [el[0:half_len_data] for el in features_scaled]
iris_versicolor_scaled = [el[half_len_data:-1] for el in features_scaled]
DataPlotter.plot_standardised_data(iris_setosa_scaled, iris_versicolor_scaled); # Function to plot the data
```
```python
# Normalise the data
def normalise_data(arr1, arr2):
"""Normalise data to unit length
input: two array same length
output: normalised arrays
"""
for idx in range(len(arr1)):
norm = (arr1[idx]**2 + arr2[idx]**2)**(1 / 2)
arr1[idx] = arr1[idx] / norm
arr2[idx] = arr2[idx] / norm
return [arr1, arr2]
iris_setosa_normalised = normalise_data(iris_setosa_scaled[0], iris_setosa_scaled[1])
iris_versicolor_normalised = normalise_data(iris_versicolor_scaled[0], iris_versicolor_scaled[1])
# Function to plot the data
DataPlotter.plot_normalised_data(iris_setosa_normalised, iris_versicolor_normalised);
```
[Table of Contents](#contents)
<a name="algorithm"></a>
## Quantum algorithm
Now we can start with our quantum algorithm on the Quantum Inspire. We describe how to build the algorithm for the simplest case with only two data points, each with two features, that is $M=N=2$. For this algorithm we need 4 qubits:
* One qubit for the index register $\ket{m}$
* One ancillary qubit
* One qubit to store the information of the two features of the data points
* One qubit to store the information of the classes of the data points
From the data set described in previous section we pick the following data set $\mathcal{D} = \big\{({\bf x}_1,y_1), ({\bf x}_2, y_2) \big\}$
where:
* ${\bf x}_1 = (0.9193, 0.3937)$, $y_1 = -1$,
* ${\bf x}_2 = (0.1411, 0.9899)$, $y_2 = 1$.
We are interested in the label $\tilde{y}$ for the data point ${\bf \tilde{x}} = (0.8670, 0.4984)$.
The amplitude encoding of these data points look like
\begin{equation}
\begin{split}
\ket{\psi_{\bf\tilde{x}}} & = 0.8670 \ket{0} + 0.4984\ket{1}, \\
\ket{\psi_{\bf x_1}} & = 0.9193 \ket{0} + 0.3937\ket{1},\\
\ket{\psi_{\bf x_2}} & = 0.1411 \ket{0} + 0.9899\ket{1}.
\end{split}
\end{equation}
Before we can run the actual algorithm we need to bring the system in the desired [initial state (equation 2)](#state) which can be obtain by applying the following combination of gates starting on $\ket{0000}$.
* **Part A:** In this part the index register is initialized and the ancilla qubit is brought in the desired state. For this we use the plain QASM language of the Quantum Inspire. Part A consists of two Hadamard gates:
```python
def part_a():
qasm_a = """version 1.0
qubits 4
prep_z q[0:3]
.part_a
H q[0:1] #execute Hadamard gate on qubit 0, 1
"""
return qasm_a
```
After this step the system is in the state
$$\ket{\mathcal{D}_A} = \frac{1}{2}\Big(\ket{0}+\ket{1}\Big)\Big(\ket{0}+\ket{1}\Big)\ket{0}\ket{0} $$
* **Part B:** In this part we encode the unlabeled data point $\tilde{x}$ by making use of a controlled rotation. We entangle the third qubit with the ancillary qubit. The angle $\theta$ of the rotation should be chosen such that $\tilde{x}=R_y(\theta)\ket{0}$. By the definition of $R_y$ we have
$$ R_y(\theta)\ket{0} = \cos\left(\frac{\theta}{2}\right)\ket{0} + \sin\left(\frac{\theta}{2}\right)\ket{1}.$$
Therefore, the angle needed to rotate to the state $\psi=a\ket{0} + b\ket{1}$ is given by $\theta = 2\cos^{-1}(a)\cdot sign(b)$.
Quantum Inspire does not directly support controlled-$R_y$ gates, however we can construct one from other gates as shown in the figure below. In these pictures $k$ stands for the angle used in the $R_y$ rotation.
```python
def part_b(angle):
half_angle = angle / 2
qasm_b = """.part_b # encode test value x^tilde
CNOT q[1], q[2]
Ry q[2], -{0}
CNOT q[1], q[2]
Ry q[2], {0}
X q[1]
""".format(half_angle)
return qasm_b
```
After this step the system is in the state
$$\ket{\mathcal{D}_B} = \frac{1}{2} \Big(\ket{0}+\ket{1}\Big)\Big(\ket{0}\ket{\tilde{{x}}}+\ket{1}\ket{0}\Big)\ket{0}$$
* **Part C:** In this part we encode the first data point $x_1$. The rotation angle $\theta$ is such that $\ket{x_1} = R_y(\theta)\ket{0}$. Now a double controlled-$R_y$ rotation is needed, and similar to Part B, we construct it from other gates as shown in the figure below.
```python
def part_c(angle):
quarter_angle = angle / 4
qasm_c = """.part_c # encode training x^0 value
toffoli q[0],q[1],q[2]
CNOT q[0],q[2]
Ry q[2], {0}
CNOT q[0],q[2]
Ry q[2], -{0}
toffoli q[0],q[1],q[2]
CNOT q[0],q[2]
Ry q[2], -{0}
CNOT q[0],q[2]
Ry q[2], {0}
X q[0]
""".format(quarter_angle)
return qasm_c
```
After this step the system is in the state
$$\ket{\mathcal{D}_C} = \frac{1}{2}\Bigg(\ket{0}\Big(\ket{0}\ket{\tilde{{x}}} + \ket{1}\ket{{x_1}}\Big) + \ket{1}\Big(\ket{0}\ket{\tilde{{x}}} + \ket{1}\ket{0}\Big)\Bigg) \ket{0}$$
* **Part D:** This part is almost an exact copy of part C, however now with $\theta$ chosen such that $\ket{{x}_2} = R_y(\theta)\ket{0}$.
```python
def part_d(angle):
quarter_angle = angle / 4
qasm_d = """.part_d # encode training x^1 value
toffoli q[0],q[1],q[2]
CNOT q[0],q[2]
Ry q[2], {0}
CNOT q[0],q[2]
Ry q[2], -{0}
toffoli q[0],q[1],q[2]
CNOT q[0],q[2]
Ry q[2], -{0}
CNOT q[0],q[2]
Ry q[2], {0}
""".format(quarter_angle)
return qasm_d
```
After this step the system is in the state
$$\ket{\mathcal{D}_D} = \frac{1}{2}\Bigg(\ket{0}\Big(\ket{0}\ket{\tilde{{x}}} + \ket{1}\ket{{x_1}}\Big) + \ket{1}\Big(\ket{0}\ket{\tilde{{x}}} + \ket{1}\ket{{x}_2}\Big)\Bigg) \ket{0}$$
* **Part E:** The last step is to label the last qubit with the correct class, this can be done using a simple CNOT gate between the first and last qubit to obtain the desired initial state
$$\ket{\mathcal{D}_E} = \frac{1}{2}\ket{0}\Big(\ket{0}\ket{\tilde{{x}}} + \ket{1}\ket{{x_1}}\Big)\ket{0} + \ket{1}\Big(\ket{0}\ket{\tilde{{x}}} + \ket{1}\ket{{x}_2}\Big)\ket{1}.
$$
```python
def part_e():
qasm_e = """.part_e # encode the labels
CNOT q[0], q[3]
"""
return qasm_e
```
## The actual algorithm
Once the system is in this initial state, the algorithm itself only consists of one Hadamard gate and two measurements. If the first measurement gives the result $\ket{1}$, we have to abort the algorithm and start over again. However, these results can also easily be filtered out in a post-processing step.
```python
def part_f():
qasm_f = """
.part_f
H q[1]
"""
return qasm_f
```
The circuit for the whole algorithm now looks like:
We can send our QASM code to the Quantum Inspire with the following data points
\begin{equation}
\begin{split}
\ket{\psi_{\tilde{x}}} & = 0.8670 \ket{0} + 0.4984\ket{1},\\
\ket{\psi_{x_1}} & = 0.9193 \ket{0} + 0.3937\ket{1},\\
\ket{\psi_{x_2}} & = 0.1411 \ket{0} + 0.9899\ket{1}.
\end{split}
\end{equation}
```python
import os
from quantuminspire.api import QuantumInspireAPI
from quantuminspire.credentials import get_authentication
from math import acos
from math import pi
QI_URL = os.getenv('API_URL', 'https://api.quantum-inspire.com/')
# input data points:
angle_x_tilde = 2 * acos(0.8670)
angle_x0 = 2 * acos(0.1411)
angle_x1 = 2 * acos(0.9193)
authentication = get_authentication()
qi = QuantumInspireAPI(QI_URL, authentication)
# Build final QASM
final_qasm = part_a() + part_b(angle_x_tilde) + part_c(angle_x0) + part_d(angle_x1) + part_e() + part_f()
backend_type = qi.get_backend_type_by_name('QX single-node simulator')
result = qi.execute_qasm(final_qasm, backend_type=backend_type, number_of_shots=1, full_state_projection=True)
print(result['histogram'])
```
OrderedDict([('9', 0.3988584), ('4', 0.2768809), ('0', 0.1270332), ('13', 0.099428), ('2', 0.0658663), ('6', 0.0302195), ('15', 0.0013716), ('11', 0.0003419)])
```python
import matplotlib.pyplot as plt
def bar_plot(result_data):
res = [get_bin(el, 4) for el in range(16)]
prob = [0] * 16
for key, value in result_data['histogram'].items():
prob[int(key)] = value
# Set color=light grey when 2nd qubit = 1
# Set color=blue when 2nd qubit = 0, and last qubit = 1
# Set color=red when 2nd qubit = 0, and last qubit = 0
color_list = [
'red', 'red', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1),
'red', 'red', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1),
'blue', 'blue', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1),
'blue', 'blue', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1)
]
plt.bar(res, prob, color=color_list)
plt.ylabel('Probability')
plt.title('Results')
plt.ylim(0, 1)
plt.xticks(rotation='vertical')
plt.show()
return prob
prob = bar_plot(result)
```
We only consider the events where the second qubit equals 0, that is, we only consider the events in the set $$\{0000, 0001, 0100, 0101, 1000, 1001, 1100, 1101\}$$
The label $\tilde{y}$ is now given by
\begin{equation}
\tilde{y} = \left\{
\begin{array}{lr}
-1 & : \#\{0000, 0001, 0100, 0101\} > \#\{1000, 1001, 1100, 1101\}\\
+1 & : \#\{1000, 1001, 1100, 1101\} > \#\{0000, 0001, 0100, 0101\}
\end{array}
\right.
\end{equation}
```python
def summarize_results(prob, display=1):
sum_label0 = prob[0] + prob[1] + prob[4] + prob[5]
sum_label1 = prob[8] + prob[9] + prob[12] + prob[13]
def y_tilde():
if sum_label0 > sum_label1:
return 0, ">"
elif sum_label0 < sum_label1:
return 1, "<"
else:
return "undefined", "="
y_tilde_res, sign = y_tilde()
if display:
print("The sum of the events with label 0 is: {}".format(sum_label0))
print("The sum of the events with label 1 is: {}".format(sum_label1))
print("The label for y_tilde is: {} because sum_label0 {} sum_label1".format(y_tilde_res, sign))
return y_tilde_res
summarize_results(prob);
```
The sum of the events with label 0 is: 0.4039141
The sum of the events with label 1 is: 0.4982864
The label for y_tilde is: 1 because sum_label0 < sum_label1
The following code will randomly pick two training data points and a random test point for the algorithm. We can compare the prediction for the label by the Quantum Inspire with the true label.
```python
from random import sample
from numpy import sign
def grab_random_data():
one_random_index = sample(range(50), 1)
two_random_index = sample(range(50), 2)
random_label = sample([1,0], 1) # random label
# iris_setosa_normalised # Label 0
# iris_versicolor_normalised # Label 1
if random_label[0]:
# Test data has label = 1, iris_versicolor
data_label0 = [iris_setosa_normalised[0][one_random_index[0]],
iris_setosa_normalised[1][one_random_index[0]]]
data_label1 = [iris_versicolor_normalised[0][two_random_index[0]],
iris_versicolor_normalised[1][two_random_index[0]]]
test_data = [iris_versicolor_normalised[0][two_random_index[1]],
iris_versicolor_normalised[1][two_random_index[1]]]
else:
# Test data has label = 0, iris_setosa
data_label0 = [iris_setosa_normalised[0][two_random_index[0]],
iris_setosa_normalised[1][two_random_index[0]]]
data_label1 = [iris_versicolor_normalised[0][one_random_index[0]],
iris_versicolor_normalised[1][one_random_index[0]]]
test_data = [iris_setosa_normalised[0][two_random_index[1]],
iris_setosa_normalised[1][two_random_index[1]]]
return data_label0, data_label1, test_data, random_label
data_label0, data_label1, test_data, random_label = grab_random_data()
print("Data point {} from label 0".format(data_label0))
print("Data point {} from label 1".format(data_label1))
print("Test point {} from label {} ".format(test_data, random_label[0]))
def run_random_data(data_label0, data_label1, test_data):
angle_x_tilde = 2 * acos(test_data[0]) * sign(test_data[1]) % (4 * pi)
angle_x0 = 2 * acos(data_label0[0]) * sign(data_label0[1]) % (4 * pi)
angle_x1 = 2 * acos(data_label1[0])* sign(data_label1[1]) % (4 * pi)
# Build final QASM
final_qasm = part_a() + part_b(angle_x_tilde) + part_c(angle_x0) + part_d(angle_x1) + part_e() + part_f()
result_random_data = qi.execute_qasm(final_qasm, backend_type=backend_type, number_of_shots=1, full_state_projection=True)
return result_random_data
result_random_data = run_random_data(data_label0, data_label1, test_data);
# Plot data points:
plt.rcParams['figure.figsize'] = [16, 6] # Plot size
plt.subplot(1, 2, 1)
DataPlotter.plot_normalised_data(iris_setosa_normalised, iris_versicolor_normalised);
plt.scatter(test_data[0], test_data[1], s=50, c='green'); # Scatter plot data class ?
plt.scatter(data_label0[0], data_label0[1], s=50, c='orange'); # Scatter plot data class 0
plt.scatter(data_label1[0], data_label1[1], s=50, c='orange'); # Scatter plot data class 1
plt.legend(["Iris Setosa (label 0)", "Iris Versicolor (label 1)", "Test point", "Data points"])
plt.subplot(1, 2, 2)
prob_random_points = bar_plot(result_random_data);
summarize_results(prob_random_points);
```
To get a better idea of how well this quantum classifier works, we can compare the predicted label to the true label of the test data point. Errors in the prediction can have two causes: either the quantum classifier does not reproduce the classical distance-based prediction, or the distance-based prediction itself assigns the wrong label for the selected data. In general, the first type of error can be reduced by increasing the number of times we run the algorithm. In our case, as we work with the simulator and our gates are deterministic ([no conditional gates](https://www.quantum-inspire.com/kbase/optimization-of-simulations/)), we do not have to deal with this first error if we use the true probability distribution. This can be done by using only a single shot without measurements.
```python
quantum_score = 0
error_prediction = 0
classifier_is_quantum_prediction = 0
classifier_score = 0
no_label = 0
def true_classifier(data_label0, data_label1, test_data):
if np.linalg.norm(np.array(data_label1) - np.array(test_data)) < np.linalg.norm(np.array(data_label0) -
np.array(test_data)):
return 1
else:
return 0
for idx in range(100):
data_label0, data_label1, test_data, random_label = grab_random_data()
result_random_data = run_random_data(data_label0, data_label1, test_data)
classifier = true_classifier(data_label0, data_label1, test_data)
sum_label0 = 0
sum_label1 = 0
for key, value in result_random_data['histogram'].items():
if int(key) in [0, 1, 4, 5]:
sum_label0 += value
if int(key) in [8, 9, 12, 13]:
sum_label1 += value
if sum_label0 > sum_label1:
quantum_prediction = 0
elif sum_label1 > sum_label0:
quantum_prediction = 1
else:
no_label += 1
continue
if quantum_prediction == classifier:
classifier_is_quantum_prediction += 1
if random_label[0] == classifier:
classifier_score += 1
if quantum_prediction == random_label[0]:
quantum_score += 1
else:
error_prediction += 1
print("In this sample of 100 data points:")
print("the classifier predicted the true label correct", classifier_score, "% of the times")
print("the quantum classifier predicted the true label correct", quantum_score, "% of the times")
print("the quantum classifier predicted the classifier label correct",
classifier_is_quantum_prediction, "% of the times")
print("Could not assign a label ", no_label, "times")
```
In this sample of 100 data points:
the classifier predicted the true label correct 93 % of the times
the quantum classifier predicted the true label correct 93 % of the times
the quantum classifier predicted the classifier label correct 100 % of the times
Could not assign a label 0 times
<a name="conclusion"></a>
## Conclusion and further work
How well the quantum classifier performs hugely depends on the chosen data points. If the test data point is significantly closer to one of the two training data points, the classifier gives a clearly one-sided prediction. In the other case, where the test data point has a similar distance to both training points, the classifier struggles to give a one-sided prediction, and repeating the algorithm on the same data points might give different measurement outcomes. This type of error can be reduced by running the algorithm with more shots. In the examples above we only used the true probability distribution (as if we had used an infinite number of shots); running the algorithm with 512 or 1024 shots instead makes this erroneous behaviour observable. In the limit of an infinite number of shots, the quantum classifier gives the same prediction as classically expected.
The results of this toy example already show the potential of a quantum computer in machine learning. Because the actual algorithm consists of only three operations, independent of the size of the data set, it can become extremely useful for tasks such as pattern recognition on large data sets. The next step is to extend this toy model to contain more data features and a larger training data set to improve the prediction. As not all data sets are best classified by a distance-based classifier, implementations of other types of classifiers might also be interesting. For more information on this particular classifier see the reference [ref](https://arxiv.org/abs/1703.10793).
[Back to Table of Contents](#contents)
### References
* Book: [Schuld and Petruccione, Supervised learning with Quantum computers, 2018](https://www.springer.com/us/book/9783319964232)
* Article: [Schuld, Fingerhuth and Petruccione, Implementing a distance-based classifier with a quantum interference circuit, 2017](https://arxiv.org/abs/1703.10793)
## The same algorithm for the projectQ framework
```python
from math import acos
import os
from quantuminspire.api import QuantumInspireAPI
from quantuminspire.projectq.backend_qx import QIBackend
from projectq import MainEngine
from projectq.backends import ResourceCounter
from projectq.ops import CNOT, CZ, H, Toffoli, X, Ry, C
from projectq.setups import restrictedgateset
from quantuminspire.credentials import get_authentication
QI_URL = os.getenv('API_URL', 'https://api.quantum-inspire.com/')
# Remote Quantum Inspire backend #
authentication = get_authentication()
qi_api = QuantumInspireAPI(QI_URL, authentication)
compiler_engines = restrictedgateset.get_engine_list(one_qubit_gates="any",
two_qubit_gates=(CNOT, CZ, Toffoli))
compiler_engines.extend([ResourceCounter()])
qi_backend = QIBackend(quantum_inspire_api=qi_api)
qi_engine = MainEngine(backend=qi_backend, engine_list=compiler_engines)
# angles data points:
angle_x_tilde = 2 * acos(0.8670)
angle_x0 = 2 * acos(0.1411)
angle_x1 = 2 * acos(0.9193)
qubits = qi_engine.allocate_qureg(4)
# part_a
for qubit in qubits[0:2]:
H | qubit
# part_b
C(Ry(angle_x_tilde), 1) | (qubits[1], qubits[2]) # Alternatively build own CRy gate as done above
X | qubits[1]
# part_c
C(Ry(angle_x0), 2) | (qubits[0], qubits[1], qubits[2]) # Alternatively build own CCRy gate as done above
X | qubits[0]
# part_d
C(Ry(angle_x1), 2) | (qubits[0], qubits[1], qubits[2]) # Alternatively build own CCRy gate as done above
# part_e
CNOT | (qubits[0], qubits[3])
# part_f
H | qubits[1]
qi_engine.flush()
# Results:
temp_results = qi_backend.get_probabilities(qubits)
res = [get_bin(el, 4) for el in range(16)]
prob = [0] * 16
for key, value in temp_results.items():
prob[int(key[::-1], 2)] = value # Reverse as projectQ has a different qubit ordering
color_list = [
'red', 'red', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1),
'red', 'red', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1),
'blue', 'blue', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1),
'blue', 'blue', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1)
]
plt.bar(res, prob, color=color_list)
plt.ylabel('Probability')
plt.title('Results')
plt.ylim(0, 1)
plt.xticks(rotation='vertical')
plt.show()
print("Results:")
print(temp_results)
```
```python
```
```python
import modern_robotics as mr
import sympy as sp
from sympy import*
from sympy.physics.mechanics import dynamicsymbols, mechanics_printing
mechanics_printing()
```
```python
def exp3(omega, theta):
    # Rodrigues' formula: rotation matrix for axis omega and angle theta
    omega = skew(omega)
    R = sp.eye(3) + sp.sin(theta) * omega + (1 - sp.cos(theta)) * omega * omega
    return R

def skew(v):
    # 3x3 skew-symmetric matrix [v], so that [v] x = v cross x
    return Matrix([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])

def exp6(twist, theta):
    # Matrix exponential of a twist: homogeneous transform exp([S] theta)
    omega = skew(twist[:3])
    v = Matrix(twist[3:])
    T = eye(4)
    T[:3, :3] = exp3(twist[:3], theta)
    T[:3, 3] = (eye(3) * theta + (1 - cos(theta)) * omega +
                (theta - sin(theta)) * omega * omega) * v
    return T

def Ad(T):
    # Adjoint representation [Ad_T] of a homogeneous transform T
    AdT = sp.zeros(6)
    R = sp.Matrix(T[:3, :3])
    AdT[:3, :3] = R
    AdT[3:, 3:] = R
    AdT[3:, :3] = skew(T[:3, 3]) * R
    return AdT
def rotX(alfa_im1):
Rx = sp.eye(4)
Rx[1,1] = sp.cos(alfa_im1)
Rx[1,2] = -sp.sin(alfa_im1)
Rx[2,1] = sp.sin(alfa_im1)
Rx[2,2] = sp.cos(alfa_im1)
return Rx
def rotZ(alfa_im1):
Rz = sp.eye(4)
Rz[0,0] = sp.cos(alfa_im1)
Rz[0,1] = -sp.sin(alfa_im1)
Rz[1,0] = sp.sin(alfa_im1)
Rz[1,1] = sp.cos(alfa_im1)
return Rz
def transX(a_im1):
trA = sp.eye(4)
trA[0,3] = a_im1
return trA
def transZ(d_i):
trA = sp.eye(4)
trA[2,3] = d_i
return trA
```
```python
# Oppg 1
print("Task 1:")
th1, th2, th3, th4, th5, th6 = dynamicsymbols('theta_1, theta_2, theta_3, theta_4, theta_5, theta_6')
config = sp.Matrix([[0,0,0,th1],[sp.pi/2,0,0,th2],[0,455,0,th3 + sp.pi/2],[sp.pi/2, 35, 420, th4],[sp.pi/2,0,0,th5],[sp.pi/2, 0, -80,th6]])
config
```
Task 1:
$\displaystyle \left[\begin{matrix}0 & 0 & 0 & \theta_{1}\\\frac{\pi}{2} & 0 & 0 & \theta_{2}\\0 & 455 & 0 & \theta_{3} + \frac{\pi}{2}\\\frac{\pi}{2} & 35 & 420 & \theta_{4}\\\frac{\pi}{2} & 0 & 0 & \theta_{5}\\\frac{\pi}{2} & 0 & -80 & \theta_{6}\end{matrix}\right]$
```python
# oppg 2
Mi = sp.Matrix([[sp.eye(4)]*6])
for i in range(6):
if i == 1:
        Mi[:,4*i:4*(i+1)] = rotX(config[i,0]) * transX(config[i,1]) * transZ(config[i,2]) * rotZ(-sp.pi/2) # We compensate for the rotation of -pi/2 done when finding the D-H parameters
else:
Mi[:,4*i:4*(i+1)] = rotX(config[i,0]) * transX(config[i,1]) * transZ(config[i,2])
M = sp.eye(4)
for n in range(5,-1,-1):
M = Mi[:,4*n:4*(n+1)] * M
M
```
$\displaystyle \left[\begin{matrix}0 & 0 & 1 & -500\\0 & 1 & 0 & 0\\-1 & 0 & 0 & -490\\0 & 0 & 0 & 1\end{matrix}\right]$
```python
# oppg 3
Ai = sp.Matrix([[0,-1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,0]]) # This is a given matrix due to revolute joints
S_sp = sp.zeros(6)
for i in range(6):
dot_sum = sp.eye(4)
for n in range(i,-1,-1):
dot_sum = Mi[:,4*n:4*(n+1)] * dot_sum
S_skew = dot_sum * Ai * sp.Inverse(dot_sum)
S_sp[0,i] = S_skew[2,1]
S_sp[1,i] = S_skew[0,2]
S_sp[2,i] = S_skew[1,0]
S_sp[3,i] = S_skew[0,3]
S_sp[4,i] = S_skew[1,3]
S_sp[5,i] = S_skew[2,3]
S_sp
```
$\displaystyle \left[\begin{matrix}0 & 0 & 0 & -1 & 0 & 1\\0 & -1 & -1 & 0 & 1 & 0\\1 & 0 & 0 & 0 & 0 & 0\\0 & 0 & -455 & 0 & 490 & 0\\0 & 0 & 0 & 490 & 0 & -490\\0 & 0 & 0 & 0 & -420 & 0\end{matrix}\right]$
```python
# oppg 4, find body frame screw axis, S_bp
M_inv = mr.TransInv(M) #Finding inverse of M
Ad_M_inv = mr.Adjoint(M_inv) #Computing [Ad_M^-1]
#Using B_i = [Ad_M^-1]S_i
S_bp = sp.zeros(6,6)
for i in range(6):
S_bp[:, i] = Ad_M_inv * S_sp[:, i]
S_bp
```
$\displaystyle \left[\begin{matrix}-1 & 0 & 0 & 0 & 0 & 0\\0 & -1 & -1 & 0 & 1 & 0\\0 & 0 & 0 & -1 & 0 & 1\\0 & 500 & 500 & 0 & -80 & 0\\-500 & 0 & 0 & 0 & 0 & 0\\0 & 490 & 35 & 0 & 0 & 0\end{matrix}\right]$
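As a quick numeric sanity check (not part of the original assignment), the screw axes can be passed to the forward-kinematics routines of `modern_robotics`; at the zero configuration both the space-frame and body-frame formulations should reproduce the home configuration `M`.
```python
import numpy as np

S_num = np.array(S_sp, dtype=float)   # space-frame screw axes as columns
B_num = np.array(S_bp, dtype=float)   # body-frame screw axes as columns
M_num = np.array(M, dtype=float)      # home configuration from task 2
zero = np.zeros(6)
print(np.allclose(mr.FKinSpace(M_num, S_num, zero), M_num))  # expected: True
print(np.allclose(mr.FKinBody(M_num, B_num, zero), M_num))   # expected: True
```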
## Oppgave 5
```python
import open3d as o3d
import numpy as np
def Ry_sym(theta):
    # Rotation about the y-axis, returned as a numpy array (open3d expects numeric input)
    ct, st = np.cos(theta), np.sin(theta)
    R = np.array([[ct, 0.0, st], [0.0, 1.0, 0.0], [-st, 0.0, ct]])
    return R
class Robot_object:
Robot_objects = []
def __init__(self, num_joints, length_links):
self.num_joints = num_joints
self.length_links = length_links
self.Joints = []
self.Links = []
self.make_robot_objects()
def set_colour(self, colour):
self.colour = colour
def make_robot_objects(self):
for i in range(self.num_joints):
self.Joints.append(Joint())
for i in range(len(self.length_links)):
self.Links.append(Link(self.length_links[i]))
def transform(self, T_list):
for i, J in enumerate(self.Joints):
J.transform(T_list[i])
for i, L in enumerate(self.Links):
L.transform(T_list[i])
def draw_robot(self):
o3d.visualization.draw_geometries(self.Robot_objects)
class Joint(Robot_object):
colour = [0,1,0]
def __init__(self):
self.joint = o3d.geometry.TriangleMesh.create_cylinder(radius=0.1, height=0.3)
self.coord = o3d.geometry.TriangleMesh.create_coordinate_frame(size=0.5)
self.update_mesh_list()
self.set_colour(self.colour)
def update_mesh_list(self):
self.Robot_objects.append(self.joint)
self.Robot_objects.append(self.coord)
def set_colour(self, colour):
self.joint.paint_uniform_color(colour)
def transform(self,T):
self.joint = self.joint.transform(T)
self.coord = self.coord.transform(T)
class Link(Robot_object):
colour = [1,0,1]
def __init__(self, lenght):
self.lenght = lenght
self.link = o3d.geometry.TriangleMesh.create_cylinder(radius=0.01, height=self.lenght).rotate(Ry_sym(np.pi/2)).translate(np.array([self.lenght/2,0,0]))
self.update_mesh_list()
self.set_colour(self.colour)
def update_mesh_list(self):
self.Robot_objects.append(self.link)
def set_colour(self, colour):
self.link.paint_uniform_color(colour)
def transform(self,T):
self.link = self.link.transform(T)
f1 = o3d.geometry.TriangleMesh.create_coordinate_frame(size=0.5)
def make_transform_list(M_list, S_list, thetas):
    T_list = []  # List with T01, T02, T03, ...
    E = np.eye(4)  # running product of exponentials
    for i in range(len(thetas)):
        E = E @ mr.MatrixExp6(mr.VecTose3(S_list[:, i] * thetas[i]))
        T_list.append(E @ M_list[i])  # T0i = e^[S1]th1 ... e^[Si]thi  M_i
    return T_list
```
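`Slist_maker`, called in the next cell, is not defined in this excerpt. A minimal sketch consistent with how it is used (the columns of `om` are the joint axes $\hat\omega_i$, read off as the third column of each home frame, and the columns of `q` are points on those axes, so $v_i=-\hat\omega_i\times q_i$ for revolute joints) could look like this; the function name and return layout are assumptions:
```python
def Slist_maker(om, q):
    """Assemble space-frame screw axes S_i = [omega_i; v_i] with v_i = -omega_i x q_i (revolute joints)."""
    n = om.shape[1]
    Slist = sp.zeros(6, n)
    for i in range(n):
        w = om[:, i]                      # rotation axis of joint i
        Slist[:3, i] = w
        Slist[3:, i] = -w.cross(q[:, i])  # linear-velocity part of the screw axis
    return Slist
```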
```python
#"Old" M:
M1=sp.Matrix([[0, 1, 0, 0],
[1, 0, 0, 0],
[0, 0, -1, 200],
[0, 0, 0, 1]])
M2=sp.Matrix([[0, 1, 0, 25],
[0, 0, 1, 0],
[1, 0, 0, 400],
[0, 0, 0, 1]])
M3=sp.Matrix([[1, 0, 0, 25],
[0, 0, 1, 0],
[0, -1, 0, 855],
[0, 0, 0, 1]])
M4=sp.Matrix([[0, 0, -1, 25+420],
[0, 1, 0, 0],
[1, 0, 0, 400+455+35],
[0, 0, 0, 1]])
M5=sp.Matrix([[1, 0, 0, 25+420],
[0, 0, 1, 0],
[0, -1, 0, 400+455+35],
[0, 0, 0, 1]])
M6=sp.Matrix([[0, 0, -1, 525],
[0, 1, 0, 0],
[1, 0, 0, 400+455+35],
[0, 0, 0, 1]])
Mlist = np.array([M1,M2,M3,M4,M5,M6], dtype=float)
om = sp.zeros(3,6)
om1 = om[:, 0] = M1[:3, 2]
om2 = om[:, 1] = M2[:3, 2]
om3 = om[:, 2] = M3[:3, 2]
om4 = om[:, 3] = M4[:3, 2]
om5 = om[:, 4] = M5[:3, 2]
om6 = om[:, 5] = M6[:3, 2]
q = sp.zeros(3,6)
q1 = q[:,0] = M1[:3, 3]
q2 = q[:,1] = M2[:3, 3]
q3 = q[:,2] = M3[:3, 3]
q4 = q[:,3] = M4[:3, 3]
q5 = q[:,4] = M5[:3, 3]
q6 = q[:,5] = M6[:3, 3]
Slist = Slist_maker(om,q)
Slist
```
```python
# oppg 6
theta = [th1, th2, th3, th4, th5, th6]  # joint-angle symbols from Task 1, used in the PoE products below
M_poe = Matrix([[1,0,0,490],
[0,1,0,0],
[0,0,1,-420],
[0,0,0,1]])
T_s = (exp6(S_sp[:,0], theta[0]) * exp6(S_sp[:,1], theta[1]) * exp6(S_sp[:,2], theta[2]) * exp6(S_sp[:,3], theta[3]) * exp6(S_sp[:,4], theta[4]) * exp6(S_sp[:,5], theta[5])) * M
T_b = M * (exp6(S_bp[:,0], theta[0]) * exp6(S_bp[:,1], theta[1]) * exp6(S_bp[:,2], theta[2]) * exp6(S_bp[:,3], theta[3]) * exp6(S_bp[:,4], theta[4]) * exp6(S_bp[:,5], theta[5]))
T_s.simplify()
T_b.simplify()
```
$\displaystyle \left( \left[\begin{matrix}1 & 0 & 0 & 490\\0 & 1 & 0 & 0\\0 & 0 & 1 & -420\\0 & 0 & 0 & 1\end{matrix}\right], \ \left[\begin{matrix}\left(\left(\sin{\left(\theta_{1} \right)} \sin{\left(\theta_{4} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)}\right) \cos{\left(\theta_{5} \right)} + \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{5} \right)} \cos{\left(\theta_{1} \right)}\right) \cos{\left(\theta_{6} \right)} - \left(\sin{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)} - \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)}\right) \sin{\left(\theta_{6} \right)} & - \left(\left(\sin{\left(\theta_{1} \right)} \sin{\left(\theta_{4} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)}\right) \cos{\left(\theta_{5} \right)} + \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{5} \right)} \cos{\left(\theta_{1} \right)}\right) \sin{\left(\theta_{6} \right)} - \left(\sin{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)} - \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)}\right) \cos{\left(\theta_{6} \right)} & \left(\sin{\left(\theta_{1} \right)} \sin{\left(\theta_{4} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)}\right) \sin{\left(\theta_{5} \right)} - \sin{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{5} \right)} & 35 \left(12 \sin{\left(\theta_{2} + \theta_{3} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} + 13 \cos{\left(\theta_{2} \right)}\right) \cos{\left(\theta_{1} \right)}\\\left(\left(\sin{\left(\theta_{1} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{4} \right)} - \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{1} \right)}\right) \cos{\left(\theta_{5} \right)} + \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{1} \right)} \sin{\left(\theta_{5} \right)}\right) \cos{\left(\theta_{6} \right)} + \left(\sin{\left(\theta_{1} \right)} \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} + \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)}\right) \sin{\left(\theta_{6} \right)} & \left(\left(- \sin{\left(\theta_{1} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{4} \right)} + \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{1} \right)}\right) \cos{\left(\theta_{5} \right)} - \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{1} \right)} \sin{\left(\theta_{5} \right)}\right) \sin{\left(\theta_{6} \right)} + \left(\sin{\left(\theta_{1} \right)} \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} + \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)}\right) \cos{\left(\theta_{6} \right)} & \left(\sin{\left(\theta_{1} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{4} \right)} - \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{1} \right)}\right) \sin{\left(\theta_{5} \right)} - \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{1} \right)} \cos{\left(\theta_{5} \right)} & 35 \left(12 \sin{\left(\theta_{2} + \theta_{3} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} + 13 \cos{\left(\theta_{2} \right)}\right) \sin{\left(\theta_{1} \right)}\\\left(\sin{\left(\theta_{2} + \theta_{3} \right)} 
\cos{\left(\theta_{4} \right)} \cos{\left(\theta_{5} \right)} - \sin{\left(\theta_{5} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)}\right) \cos{\left(\theta_{6} \right)} + \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{4} \right)} \sin{\left(\theta_{6} \right)} & - \left(\sin{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{4} \right)} \cos{\left(\theta_{5} \right)} - \sin{\left(\theta_{5} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)}\right) \sin{\left(\theta_{6} \right)} + \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{6} \right)} & \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{5} \right)} \cos{\left(\theta_{4} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{5} \right)} & 35 \sin{\left(\theta_{2} + \theta_{3} \right)} + 455 \sin{\left(\theta_{2} \right)} - 420 \cos{\left(\theta_{2} + \theta_{3} \right)}\\0 & 0 & 0 & 1\end{matrix}\right], \ \left[\begin{matrix}\left(\left(\sin{\left(\theta_{1} \right)} \sin{\left(\theta_{4} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)}\right) \cos{\left(\theta_{5} \right)} + \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{5} \right)} \cos{\left(\theta_{1} \right)}\right) \cos{\left(\theta_{6} \right)} - \left(\sin{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)} - \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)}\right) \sin{\left(\theta_{6} \right)} & - \left(\left(\sin{\left(\theta_{1} \right)} \sin{\left(\theta_{4} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)}\right) \cos{\left(\theta_{5} \right)} + \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{5} \right)} \cos{\left(\theta_{1} \right)}\right) \sin{\left(\theta_{6} \right)} - \left(\sin{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)} - \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)}\right) \cos{\left(\theta_{6} \right)} & \left(\sin{\left(\theta_{1} \right)} \sin{\left(\theta_{4} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)}\right) \sin{\left(\theta_{5} \right)} - \sin{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{5} \right)} & 35 \left(12 \sin{\left(\theta_{2} + \theta_{3} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} + 13 \cos{\left(\theta_{2} \right)}\right) \cos{\left(\theta_{1} \right)}\\\left(\left(\sin{\left(\theta_{1} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{4} \right)} - \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{1} \right)}\right) \cos{\left(\theta_{5} \right)} + \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{1} \right)} \sin{\left(\theta_{5} \right)}\right) \cos{\left(\theta_{6} \right)} + \left(\sin{\left(\theta_{1} \right)} \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} + \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)}\right) \sin{\left(\theta_{6} \right)} & \left(\left(- \sin{\left(\theta_{1} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{4} \right)} + \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{1} \right)}\right) \cos{\left(\theta_{5} \right)} - \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{1} 
\right)} \sin{\left(\theta_{5} \right)}\right) \sin{\left(\theta_{6} \right)} + \left(\sin{\left(\theta_{1} \right)} \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} + \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)}\right) \cos{\left(\theta_{6} \right)} & \left(\sin{\left(\theta_{1} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{4} \right)} - \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{1} \right)}\right) \sin{\left(\theta_{5} \right)} - \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{1} \right)} \cos{\left(\theta_{5} \right)} & 35 \left(12 \sin{\left(\theta_{2} + \theta_{3} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} + 13 \cos{\left(\theta_{2} \right)}\right) \sin{\left(\theta_{1} \right)}\\\left(\sin{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{4} \right)} \cos{\left(\theta_{5} \right)} - \sin{\left(\theta_{5} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)}\right) \cos{\left(\theta_{6} \right)} + \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{4} \right)} \sin{\left(\theta_{6} \right)} & - \left(\sin{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{4} \right)} \cos{\left(\theta_{5} \right)} - \sin{\left(\theta_{5} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)}\right) \sin{\left(\theta_{6} \right)} + \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{6} \right)} & \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{5} \right)} \cos{\left(\theta_{4} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{5} \right)} & 35 \sin{\left(\theta_{2} + \theta_{3} \right)} + 455 \sin{\left(\theta_{2} \right)} - 420 \cos{\left(\theta_{2} + \theta_{3} \right)}\\0 & 0 & 0 & 1\end{matrix}\right]\right)$
```python
T_s, T_b
```
$\displaystyle \left( \left[\begin{matrix}\left(\left(\sin{\left(\theta_{1} \right)} \sin{\left(\theta_{4} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)}\right) \cos{\left(\theta_{5} \right)} + \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{5} \right)} \cos{\left(\theta_{1} \right)}\right) \cos{\left(\theta_{6} \right)} - \left(\sin{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)} - \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)}\right) \sin{\left(\theta_{6} \right)} & - \left(\left(\sin{\left(\theta_{1} \right)} \sin{\left(\theta_{4} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)}\right) \cos{\left(\theta_{5} \right)} + \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{5} \right)} \cos{\left(\theta_{1} \right)}\right) \sin{\left(\theta_{6} \right)} - \left(\sin{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)} - \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)}\right) \cos{\left(\theta_{6} \right)} & \left(\sin{\left(\theta_{1} \right)} \sin{\left(\theta_{4} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)}\right) \sin{\left(\theta_{5} \right)} - \sin{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{5} \right)} & 35 \left(12 \sin{\left(\theta_{2} + \theta_{3} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} + 13 \cos{\left(\theta_{2} \right)}\right) \cos{\left(\theta_{1} \right)}\\\left(\left(\sin{\left(\theta_{1} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{4} \right)} - \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{1} \right)}\right) \cos{\left(\theta_{5} \right)} + \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{1} \right)} \sin{\left(\theta_{5} \right)}\right) \cos{\left(\theta_{6} \right)} + \left(\sin{\left(\theta_{1} \right)} \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} + \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)}\right) \sin{\left(\theta_{6} \right)} & \left(\left(- \sin{\left(\theta_{1} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{4} \right)} + \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{1} \right)}\right) \cos{\left(\theta_{5} \right)} - \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{1} \right)} \sin{\left(\theta_{5} \right)}\right) \sin{\left(\theta_{6} \right)} + \left(\sin{\left(\theta_{1} \right)} \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} + \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)}\right) \cos{\left(\theta_{6} \right)} & \left(\sin{\left(\theta_{1} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{4} \right)} - \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{1} \right)}\right) \sin{\left(\theta_{5} \right)} - \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{1} \right)} \cos{\left(\theta_{5} \right)} & 35 \left(12 \sin{\left(\theta_{2} + \theta_{3} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} + 13 \cos{\left(\theta_{2} \right)}\right) \sin{\left(\theta_{1} \right)}\\\left(\sin{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{4} \right)} \cos{\left(\theta_{5} \right)} - \sin{\left(\theta_{5} \right)} \cos{\left(\theta_{2} + 
\theta_{3} \right)}\right) \cos{\left(\theta_{6} \right)} + \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{4} \right)} \sin{\left(\theta_{6} \right)} & - \left(\sin{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{4} \right)} \cos{\left(\theta_{5} \right)} - \sin{\left(\theta_{5} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)}\right) \sin{\left(\theta_{6} \right)} + \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{6} \right)} & \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{5} \right)} \cos{\left(\theta_{4} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{5} \right)} & 35 \sin{\left(\theta_{2} + \theta_{3} \right)} + 455 \sin{\left(\theta_{2} \right)} - 420 \cos{\left(\theta_{2} + \theta_{3} \right)}\\0 & 0 & 0 & 1\end{matrix}\right], \ \left[\begin{matrix}\left(\left(\sin{\left(\theta_{1} \right)} \sin{\left(\theta_{4} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)}\right) \cos{\left(\theta_{5} \right)} + \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{5} \right)} \cos{\left(\theta_{1} \right)}\right) \cos{\left(\theta_{6} \right)} - \left(\sin{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)} - \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)}\right) \sin{\left(\theta_{6} \right)} & - \left(\left(\sin{\left(\theta_{1} \right)} \sin{\left(\theta_{4} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)}\right) \cos{\left(\theta_{5} \right)} + \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{5} \right)} \cos{\left(\theta_{1} \right)}\right) \sin{\left(\theta_{6} \right)} - \left(\sin{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)} - \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)}\right) \cos{\left(\theta_{6} \right)} & \left(\sin{\left(\theta_{1} \right)} \sin{\left(\theta_{4} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)}\right) \sin{\left(\theta_{5} \right)} - \sin{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{5} \right)} & 35 \left(12 \sin{\left(\theta_{2} + \theta_{3} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} + 13 \cos{\left(\theta_{2} \right)}\right) \cos{\left(\theta_{1} \right)}\\\left(\left(\sin{\left(\theta_{1} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{4} \right)} - \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{1} \right)}\right) \cos{\left(\theta_{5} \right)} + \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{1} \right)} \sin{\left(\theta_{5} \right)}\right) \cos{\left(\theta_{6} \right)} + \left(\sin{\left(\theta_{1} \right)} \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} + \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)}\right) \sin{\left(\theta_{6} \right)} & \left(\left(- \sin{\left(\theta_{1} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{4} \right)} + \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{1} \right)}\right) \cos{\left(\theta_{5} \right)} - \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{1} \right)} \sin{\left(\theta_{5} \right)}\right) \sin{\left(\theta_{6} \right)} + \left(\sin{\left(\theta_{1} \right)} 
\sin{\left(\theta_{4} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} + \cos{\left(\theta_{1} \right)} \cos{\left(\theta_{4} \right)}\right) \cos{\left(\theta_{6} \right)} & \left(\sin{\left(\theta_{1} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{4} \right)} - \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{1} \right)}\right) \sin{\left(\theta_{5} \right)} - \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{1} \right)} \cos{\left(\theta_{5} \right)} & 35 \left(12 \sin{\left(\theta_{2} + \theta_{3} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} + 13 \cos{\left(\theta_{2} \right)}\right) \sin{\left(\theta_{1} \right)}\\\left(\sin{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{4} \right)} \cos{\left(\theta_{5} \right)} - \sin{\left(\theta_{5} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)}\right) \cos{\left(\theta_{6} \right)} + \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{4} \right)} \sin{\left(\theta_{6} \right)} & - \left(\sin{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{4} \right)} \cos{\left(\theta_{5} \right)} - \sin{\left(\theta_{5} \right)} \cos{\left(\theta_{2} + \theta_{3} \right)}\right) \sin{\left(\theta_{6} \right)} + \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{4} \right)} \cos{\left(\theta_{6} \right)} & \sin{\left(\theta_{2} + \theta_{3} \right)} \sin{\left(\theta_{5} \right)} \cos{\left(\theta_{4} \right)} + \cos{\left(\theta_{2} + \theta_{3} \right)} \cos{\left(\theta_{5} \right)} & 35 \sin{\left(\theta_{2} + \theta_{3} \right)} + 455 \sin{\left(\theta_{2} \right)} - 420 \cos{\left(\theta_{2} + \theta_{3} \right)}\\0 & 0 & 0 & 1\end{matrix}\right]\right)$
```python
```
| b3e719f9abb65ff3621ccc1be4b6540376d54d2c | 50,953 | ipynb | Jupyter Notebook | Arkiv/task_21_to_24_jupyter.ipynb | BirkHveding/RobotTek | 37f4ab0de6de9131239ff5d97e4b68a7091f291b | [
"Apache-2.0"
]
| null | null | null | Arkiv/task_21_to_24_jupyter.ipynb | BirkHveding/RobotTek | 37f4ab0de6de9131239ff5d97e4b68a7091f291b | [
"Apache-2.0"
]
| null | null | null | Arkiv/task_21_to_24_jupyter.ipynb | BirkHveding/RobotTek | 37f4ab0de6de9131239ff5d97e4b68a7091f291b | [
"Apache-2.0"
]
| null | null | null | 71.866008 | 10,026 | 0.383608 | true | 10,270 | Qwen/Qwen-72B | 1. YES
2. YES | 0.885631 | 0.746139 | 0.660804 | __label__kor_Hang | 0.099451 | 0.3736 |
# Brusselator
The Brusselator is a simplified model for chemical reactions that oscillate. The reaction scheme is as follows:
$$
\begin{align}
A &\overset{k_1}\longrightarrow X \tag{1} \\
B + X &\overset{k_2}\longrightarrow Y + D \tag{2}\\
2X + Y &\overset{k_3}\longrightarrow 3X \tag{3}\\
X &\overset{k_4}\longrightarrow E \tag{4}\\
\end{align}
$$
`A, B, D, E, X, Y` are the chemical species taking part in the reactions.
We will use the general mass-action formulation in differential form to derive the differential equations that govern the dynamics of the system.
$$
\begin{align}
\frac{\mathrm{d} X}{\mathrm{d} t}&= (B-A)^T \cdot K \cdot X^A \tag{53}\\
\end{align}
$$
where `A` and `B` are the matrices of stoichiometric coefficients (reactants and products, respectively), `X` is the state vector $X=[X_1,X_2,X_3]^T$, $X^A$ is the vector of mass-action monomials, and `K` is the diagonal matrix of rate constants:
$$
K=\begin{pmatrix}
k_1 & 0 & \dots & 0\\
0 & k_2 & \dots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \dots & k_r
\end{pmatrix} \tag{54}
$$
The system is an open reactor where we maintain constant concentrations of the species `A, B, D, E`.
Therefore, the equations for the evolution of `[X]` and `[Y]` are as follows:
$$\begin{align}
\frac{ d[X] }{dt} &= k_1[A] − k_2[B][X] + k_3[X]^2[Y] − k_4[X]\tag{5} \\
\frac{ d[Y] }{dt} &= k_2[B][X] − k_3[X]^2[Y] \tag{6}
\end{align} $$
We assume as initial conditions:
$$\begin{align}
[X (0)] &= 0 \tag{7} \\
[Y (0)] &= 0 \tag{8}
\end{align} $$
To calculate the equilibrium, we just set eqs. 5 and 6 to zero and solve for `X` and `Y`.
$$\begin{align}
[X]_{eq} &= \frac{k_1 [A]}{k_4}\tag{9} \\
[Y]_{eq} &= \frac{k_4 k_2 [B]}{k_3 k_1 [A]} \tag{10}
\end{align} $$
To evaluate stability, we will evaluate the Jacobian at the stationary state $([X]_{eq},[Y]_{eq})$.
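Carrying out that calculation for eqs. 5 and 6 (written here for the reduced case $k_1=k_2=k_3=k_4=1$, so that $[X]_{eq}=A$ and $[Y]_{eq}=B/A$) gives
$$
J\Big|_{eq}=\begin{pmatrix}
B-1 & A^{2}\\
-B & -A^{2}
\end{pmatrix},\qquad
\operatorname{tr}J = B-1-A^{2},\qquad
\det J = A^{2}>0,
$$
so the fixed point loses stability through a Hopf bifurcation when $B>1+A^{2}$. For the parameter values used below ($A=1$, $B=3$) this condition holds, and we expect sustained oscillations (a limit cycle).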
The parameter set to try is:
$$
k_1=1\\
k_2=1\\
k_3=1\\
k_4=1\\
A=1\\
B=3
$$
```julia
using DifferentialEquations
```
```julia
using Plots; gr()
```
Plots.GRBackend()
```julia
brusselator! = @ode_def BR begin
dX = k_1 * A - k_2 * B * X + k_3 * X^2 * Y - k_4 * X
dY = k_2 * B * X - k_3 * X^2 * Y
end k_1 k_2 k_3 k_4 A B
```
(::BR{getfield(Main, Symbol("##3#7")),getfield(Main, Symbol("##4#8")),getfield(Main, Symbol("##5#9")),Nothing,Nothing,getfield(Main, Symbol("##6#10")),Expr,Expr}) (generic function with 2 methods)
```julia
```
```julia
tspan = (0.0,50.0)
k_1=1.
k_2=1.
k_3=1.
k_4=1.
A=1.
B=3.
u₀=[0.0,0.0]
p=[k_1,k_2,k_3,k_4,A,B];
```
```julia
prob1 = ODEProblem(brusselator!,u₀,tspan,p)
sol1 = solve(prob1)
plot(sol1,label=["X" "Y"]) # row vector of labels: one per series
title!("Brusselator")
xlabel!("Time [s]")
ylabel!("Concentration [a.u]")
```
```julia
plot(sol1,vars=(1,2),label="limit cycle plot")
title!("Brusselator ")
xlabel!("X [a.u.]")
ylabel!("Y [a.u.]")
```
```julia
```
| abf8255b6cc7be886b12ef7943a1287b0511f967 | 101,035 | ipynb | Jupyter Notebook | 7_Brusselator.ipynb | davidgmiguez/julia_notebooks | b395fac8f73bf8d9d366d6354a561c722f37ce66 | [
"BSD-3-Clause"
]
| null | null | null | 7_Brusselator.ipynb | davidgmiguez/julia_notebooks | b395fac8f73bf8d9d366d6354a561c722f37ce66 | [
"BSD-3-Clause"
]
| null | null | null | 7_Brusselator.ipynb | davidgmiguez/julia_notebooks | b395fac8f73bf8d9d366d6354a561c722f37ce66 | [
"BSD-3-Clause"
]
| null | null | null | 105.574713 | 241 | 0.648369 | true | 1,162 | Qwen/Qwen-72B | 1. YES
2. YES | 0.953275 | 0.890294 | 0.848695 | __label__eng_Latn | 0.644264 | 0.810137 |
# INF-510, v0.2, Claudio Torres, [email protected]. DI-UTFSM
# Fast sum of sinc!
```python
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import scipy.sparse.linalg as sp
from scipy import interpolate
import scipy as spf
from sympy import *
import sympy as sym
from scipy.linalg import toeplitz
from ipywidgets import interact
from ipywidgets import IntSlider
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
# The variable M is used for changing the default size of the figures
M=5
```
## The magical and beautiful sinc function
```python
def Sh2(x):
if np.abs(x)<=1e-10:
return 1.0
else:
y=x
return np.sin(y)/y
Sh2v = np.vectorize(Sh2)
```
```python
# The domain
L=10
# The number of points
N=100
x=np.linspace(-L,L,100)
# Fixing randomness
np.random.seed(0)
# The weights
y=np.random.rand(N)*2-1
```
```python
# The simple algorithm
def F(xx,x,y,i=None):
n = x.shape[0]
n_out = xx.shape[0]
out = np.zeros(n_out)
# Plots all sincs
    if i is None:
for i in np.arange(n):
out+=y[i]*Sh2v(xx-x[i])
else: # Just plot one sinc
out+=y[i]*Sh2v(xx-x[i])
return out
def F_pre_fast(xx,x,y,m,x_bar):
n = x.shape[0]
n_out = xx.shape[0]
out = np.zeros(n_out)
# First sum: sin
for i in np.arange(n):
out2=0
for j in np.arange(m):
out2+=(x[i]-x_bar)**j/(xx-x_bar)**(j+1)
out+=y[i]*np.cos(x[i])*out2
out*=np.sin(xx)
# Second sum: cos
out3=0
for i in np.arange(n):
out4=0
for j in np.arange(m):
out4+=(x[i]-x_bar)**j/(xx-x_bar)**(j+1)
out3+=y[i]*np.sin(x[i])*out4
out3*=np.cos(xx)
return out-out3
F_pre_fastv = np.vectorize(F_pre_fast)
```
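The idea behind `F_pre_fast` (and the pre-computed version below) is a far-field expansion of every term about a common centre $\bar x$. Using $\sin(x-x_i)=\sin x\cos x_i-\cos x\sin x_i$ together with the geometric series $\frac{1}{x-x_i}=\sum_{j\ge 0}\frac{(x_i-\bar x)^j}{(x-\bar x)^{j+1}}$ (valid when $|x_i-\bar x|<|x-\bar x|$), the sum of sincs becomes
$$
\sum_{i=1}^{N} y_i\,\frac{\sin(x-x_i)}{x-x_i}
\;\approx\; \sin x\sum_{j=0}^{m-1}\frac{\sum_i y_i\cos(x_i)\,(x_i-\bar x)^j}{(x-\bar x)^{j+1}}
\;-\;\cos x\sum_{j=0}^{m-1}\frac{\sum_i y_i\sin(x_i)\,(x_i-\bar x)^j}{(x-\bar x)^{j+1}},
$$
so the inner sums over $i$ depend only on the sources and can be pre-computed once, then reused for every evaluation point.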
```python
xx=np.linspace(-4*L,4*L,10*N)
yy=F(xx,x,y)
plt.figure(figsize=(M,M))
plt.plot(xx,yy)
plt.show()
```
```python
# Computing 'fast' coefficients
def compute_coefficients(x,y,m,x_bar):
coef_cos=np.zeros(m)
coef_sin=np.zeros(m)
coef_cos[0]=np.sum(y*np.cos(x))
coef_sin[0]=np.sum(y*np.sin(x))
y_cos_x=y*np.cos(x)
y_sin_x=y*np.sin(x)
for j in np.arange(1,m):
coef_cos[j]=np.sum(y_cos_x*((x-x_bar)**j))
coef_sin[j]=np.sum(y_sin_x*((x-x_bar)**j))
return coef_cos,coef_sin
# Evaluate fast sums with coefficients pre-computed (this is the real trick: pre-computation!)
def evaluate_fast_sum(xx,coef_cos,coef_sin,x_bar):
m=coef_sin.shape[0]
# Coefficient j=0
out_cos=np.zeros(xx.shape[0])
out_sin=np.zeros(xx.shape[0])
for j in np.arange(m):
out_cos+=coef_cos[j]/((xx-x_bar)**(j+1))
out_sin+=coef_sin[j]/((xx-x_bar)**(j+1))
return np.sin(xx)*out_cos-np.cos(xx)*out_sin
evaluate_fast_sumv = np.vectorize(evaluate_fast_sum)
```
```python
# Slicing the data
mask_inner = (x<=-L/2)
mask_far_away = (x>=0)
x_inner = x[mask_inner]
y_inner = y[mask_inner]
x_far_away = x[mask_far_away]
y_far_away = y[mask_far_away]
```
```python
def change_m(m=1):
x_bar=x_inner[int(x_inner.shape[0]/2)]
coef_cos,coef_sin=compute_coefficients(x_inner,y_inner,m,x_bar)
xx=np.linspace(np.min(x_far_away)-6,np.max(x_far_away)*2,40*N)
# Original
yy=F(xx,x_inner,y_inner)
# Pre fast without rearrangements
yy_pre_fast=F_pre_fast(xx,x_inner,y_inner,m,x_bar)
# My first 'Fast' sum. Finally!
yy_f=evaluate_fast_sum(xx,coef_cos,coef_sin,x_bar)
fig = plt.figure(figsize=(2*M,M))
plt.subplot(121)
plt.plot(xx,yy,'b')
plt.plot(xx,yy_pre_fast,'g')
plt.plot(xx,yy_f,'r')
plt.grid(True)
plt.subplot(122)
plt.semilogy(xx,np.abs(yy-yy_f),'b')
plt.grid(True)
plt.ylim((1e-17,1e1))
plt.show()
interact(change_m,m=(1,50,1))
```
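The semilog error plot in the widget decays roughly geometrically with $m$: truncating the geometric series after $m$ terms leaves exactly
$$
\frac{1}{x-x_i}-\sum_{j=0}^{m-1}\frac{(x_i-\bar x)^{j}}{(x-\bar x)^{j+1}}
=\frac{1}{x-x_i}\left(\frac{x_i-\bar x}{x-\bar x}\right)^{m},
$$
which is why the approximation is only valid in the far field ($|x_i-\bar x|<|x-\bar x|$, i.e. evaluation points well separated from the inner cluster) and improves rapidly as $m$ grows.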
```python
```
| d4759e3860e64a493857153e5fd0a1befa3cb014 | 46,850 | ipynb | Jupyter Notebook | SC5/05 Fast sums - A very simple Intro.ipynb | maxaubel/Scientific-Computing | 57a04b5d3e3f7be2fe9b06127f7e569659698656 | [
"BSD-3-Clause"
]
| 37 | 2017-06-05T21:01:15.000Z | 2022-03-17T12:51:55.000Z | SC5/05 Fast sums - A very simple Intro.ipynb | maxaubel/Scientific-Computing | 57a04b5d3e3f7be2fe9b06127f7e569659698656 | [
"BSD-3-Clause"
]
| null | null | null | SC5/05 Fast sums - A very simple Intro.ipynb | maxaubel/Scientific-Computing | 57a04b5d3e3f7be2fe9b06127f7e569659698656 | [
"BSD-3-Clause"
]
| 63 | 2017-10-02T21:21:30.000Z | 2022-03-23T02:23:22.000Z | 161.551724 | 25,496 | 0.882092 | true | 1,219 | Qwen/Qwen-72B | 1. YES
2. YES | 0.91611 | 0.880797 | 0.806907 | __label__eng_Latn | 0.369196 | 0.713047 |
# Optimizing DiffEq Code
### Chris Rackauckas
In this notebook we will walk through some of the main tools for optimizing your code in order to efficiently solve DifferentialEquations.jl. User-side optimizations are important because, for sufficiently difficult problems, most of the time will be spent inside of your `f` function, the function you are trying to solve. "Efficient" integrators are those that reduce the required number of `f` calls to hit the error tolerance. The main ideas for optimizing your DiffEq code, or any Julia function, are the following:
- Make it non-allocating
- Use StaticArrays for small arrays
- Use broadcast fusion
- Make it type-stable
- Reduce redundant calculations
- Make use of BLAS calls
- Optimize algorithm choice
We'll discuss these strategies in the context of small and large systems. Let's start with small systems.
## Optimizing Small Systems (<100 DEs)
Let's take the classic Lorenz system from before. Let's start by naively writing the system in its out-of-place form:
```julia
function lorenz(u,p,t)
dx = 10.0*(u[2]-u[1])
dy = u[1]*(28.0-u[3]) - u[2]
dz = u[1]*u[2] - (8/3)*u[3]
[dx,dy,dz]
end
```
lorenz (generic function with 1 method)
Here, `lorenz` returns an object, `[dx,dy,dz]`, which is created within the body of `lorenz`.
This is a common code pattern from high-level languages like MATLAB, SciPy, or R's deSolve. However, the issue with this form is that it allocates a vector, `[dx,dy,dz]`, at each step. Let's benchmark the solution process with this choice of function:
```julia
using DifferentialEquations, BenchmarkTools
u0 = [1.0;0.0;0.0]
tspan = (0.0,100.0)
prob = ODEProblem(lorenz,u0,tspan)
@benchmark solve(prob,Tsit5())
```
BenchmarkTools.Trial:
memory estimate: 10.94 MiB
allocs estimate: 102469
--------------
minimum time: 3.455 ms (0.00% GC)
median time: 4.810 ms (0.00% GC)
mean time: 4.788 ms (12.08% GC)
maximum time: 56.229 ms (89.52% GC)
--------------
samples: 1043
evals/sample: 1
The BenchmarkTools package's `@benchmark` runs the code multiple times to get an accurate measurement. The minimum time is the time it takes when your OS and other background processes aren't getting in the way. Notice that in this case it takes about 5 ms to solve and allocates around 11 MiB. However, if we were to use this inside of a real user code we'd see a lot of time spent doing garbage collection (GC) to clean up all of the arrays we made. Even if we turn off saving we have these allocations.
```julia
@benchmark solve(prob,Tsit5(),save_everystep=false)
```
BenchmarkTools.Trial:
memory estimate: 9.57 MiB
allocs estimate: 89577
--------------
minimum time: 2.993 ms (0.00% GC)
median time: 4.178 ms (0.00% GC)
mean time: 4.231 ms (10.06% GC)
maximum time: 55.648 ms (89.30% GC)
--------------
samples: 1180
evals/sample: 1
The problem of course is that arrays are created every time our derivative function is called. This function is called multiple times per step and is thus the main source of memory usage. To fix this, we can use the in-place form to ***make our code non-allocating***:
```julia
function lorenz!(du,u,p,t)
du[1] = 10.0*(u[2]-u[1])
du[2] = u[1]*(28.0-u[3]) - u[2]
du[3] = u[1]*u[2] - (8/3)*u[3]
end
```
lorenz! (generic function with 1 method)
Here, instead of creating an array each time, we utilized the cache array `du`. When the inplace form is used, DifferentialEquations.jl takes a different internal route that minimizes the internal allocations as well. When we benchmark this function, we will see quite a difference.
```julia
u0 = [1.0;0.0;0.0]
tspan = (0.0,100.0)
prob = ODEProblem(lorenz!,u0,tspan)
@benchmark solve(prob,Tsit5())
```
BenchmarkTools.Trial:
memory estimate: 1.37 MiB
allocs estimate: 12964
--------------
minimum time: 719.300 μs (0.00% GC)
median time: 863.950 μs (0.00% GC)
mean time: 996.328 μs (7.60% GC)
maximum time: 52.938 ms (97.40% GC)
--------------
samples: 5004
evals/sample: 1
```julia
@benchmark solve(prob,Tsit5(),save_everystep=false)
```
BenchmarkTools.Trial:
memory estimate: 6.86 KiB
allocs estimate: 92
--------------
minimum time: 388.800 μs (0.00% GC)
median time: 424.500 μs (0.00% GC)
mean time: 456.487 μs (0.16% GC)
maximum time: 3.320 ms (86.45% GC)
--------------
samples: 10000
evals/sample: 1
There is a 4x time difference just from that change! Notice there are still some allocations and this is due to the construction of the integration cache. But this doesn't scale with the problem size:
```julia
tspan = (0.0,500.0) # 5x longer than before
prob = ODEProblem(lorenz!,u0,tspan)
@benchmark solve(prob,Tsit5(),save_everystep=false)
```
BenchmarkTools.Trial:
memory estimate: 6.86 KiB
allocs estimate: 92
--------------
minimum time: 1.997 ms (0.00% GC)
median time: 2.255 ms (0.00% GC)
mean time: 2.366 ms (0.00% GC)
maximum time: 4.923 ms (0.00% GC)
--------------
samples: 2108
evals/sample: 1
since that's all just setup allocations.
#### But if the system is small we can optimize even more.
Allocations are only expensive if they are "heap allocations". For a more in-depth definition of heap allocations, [there are a lot of sources online](http://net-informations.com/faq/net/stack-heap.htm). But a good working definition is that heap allocations are variable-sized slabs of memory which have to be pointed to, and this pointer indirection costs time. Additionally, the heap has to be managed and the garbage collector has to actively keep track of what's on the heap.
However, there's an alternative to heap allocations, known as stack allocations. The stack is statically-sized (known at compile time) and thus its accesses are quick. Additionally, the exact block of memory is known in advance by the compiler, and thus re-using the memory is cheap. This means that allocating on the stack has essentially no cost!
Arrays have to be heap allocated because their size (and thus the amount of memory they take up) is determined at runtime. But there are structures in Julia which are stack-allocated. `struct`s for example are stack-allocated "value-type"s. `Tuple`s are a stack-allocated collection. The most useful data structure for DiffEq though is the `StaticArray` from the package [StaticArrays.jl](https://github.com/JuliaArrays/StaticArrays.jl). These arrays have their length determined at compile-time. They are created using macros attached to normal array expressions, for example:
```julia
using StaticArrays
A = @SVector [2.0,3.0,5.0]
```
3-element SArray{Tuple{3},Float64,1,3}:
2.0
3.0
5.0
Notice that the `3` after `SVector` gives the size of the `SVector`. It cannot be changed. Additionally, `SVector`s are immutable, so we have to create a new `SVector` to change values. But remember, we don't have to worry about allocations because this data structure is stack-allocated. `SArray`s have a lot of extra optimizations as well: they have fast matrix multiplication, fast QR factorizations, etc. which directly make use of the information about the size of the array. Thus, when possible they should be used.
Unfortunately static arrays can only be used for sufficiently small arrays. After a certain size, they are forced to heap allocate after some instructions and their compile time balloons. Thus static arrays shouldn't be used if your system has more than 100 variables. Additionally, only the native Julia algorithms can fully utilize static arrays.
Let's ***optimize `lorenz` using static arrays***. Note that in this case, we want to use the out-of-place allocating form, but this time we want to output a static array:
```julia
function lorenz_static(u,p,t)
dx = 10.0*(u[2]-u[1])
dy = u[1]*(28.0-u[3]) - u[2]
dz = u[1]*u[2] - (8/3)*u[3]
@SVector [dx,dy,dz]
end
```
lorenz_static (generic function with 1 method)
To make the solver internally use static arrays, we simply give it a static array as the initial condition:
```julia
u0 = @SVector [1.0,0.0,0.0]
tspan = (0.0,100.0)
prob = ODEProblem(lorenz_static,u0,tspan)
@benchmark solve(prob,Tsit5())
```
BenchmarkTools.Trial:
memory estimate: 472.02 KiB
allocs estimate: 2662
--------------
minimum time: 423.700 μs (0.00% GC)
median time: 475.600 μs (0.00% GC)
mean time: 519.208 μs (2.59% GC)
maximum time: 1.817 ms (48.73% GC)
--------------
samples: 9591
evals/sample: 1
```julia
@benchmark solve(prob,Tsit5(),save_everystep=false)
```
BenchmarkTools.Trial:
memory estimate: 6.16 KiB
allocs estimate: 73
--------------
minimum time: 340.901 μs (0.00% GC)
median time: 370.400 μs (0.00% GC)
mean time: 395.819 μs (0.13% GC)
maximum time: 2.881 ms (86.59% GC)
--------------
samples: 10000
evals/sample: 1
And that's pretty much all there is to it. With static arrays you don't have to worry about allocating, so use operations like `*` and don't worry about fusing operations (discussed in the next section). Do "the vectorized code" of R/MATLAB/Python and your code in this case will be fast, or directly use the numbers/values.
#### Exercise 1
Implement the out-of-place array, in-place array, and out-of-place static array forms for the [Henon-Heiles System](https://en.wikipedia.org/wiki/H%C3%A9non%E2%80%93Heiles_system) and time the results.
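For reference, with the conventional coupling constant set to one, the Hénon–Heiles Hamiltonian $H=\tfrac12\left(p_x^2+p_y^2\right)+\tfrac12\left(x^2+y^2\right)+x^2y-\tfrac13y^3$ gives the equations of motion
$$
\dot x = p_x,\qquad \dot y = p_y,\qquad
\dot p_x = -x - 2xy,\qquad
\dot p_y = -y - x^{2} + y^{2}.
$$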
## Optimizing Large Systems
### Interlude: Managing Allocations with Broadcast Fusion
When your system is sufficiently large, or you have to make use of a non-native Julia algorithm, you have to make use of `Array`s. In order to use arrays in the most efficient manner, you need to be careful about temporary allocations. Vectorized calculations naturally have plenty of temporary array allocations. This is because a vectorized calculation outputs a vector. Thus:
```julia
A = rand(1000,1000); B = rand(1000,1000); C = rand(1000,1000)
test(A,B,C) = A + B + C
@benchmark test(A,B,C)
```
BenchmarkTools.Trial:
memory estimate: 7.63 MiB
allocs estimate: 3
--------------
minimum time: 3.257 ms (0.00% GC)
median time: 4.102 ms (0.00% GC)
mean time: 4.647 ms (17.26% GC)
maximum time: 60.949 ms (92.17% GC)
--------------
samples: 1074
evals/sample: 1
That expression `A + B + C` creates 2 arrays. It first creates one for the output of `A + B`, then uses that result array to `+ C` to get the final result. 2 arrays! We don't want that! The first thing to do to fix this is to use broadcast fusion. [Broadcast fusion](https://julialang.org/blog/2017/01/moredots) puts expressions together. For example, instead of doing the `+` operations separately, if we were to add them all at the same time, then we would only have a single array that's created. For example:
```julia
test2(A,B,C) = map((a,b,c)->a+b+c,A,B,C)
@benchmark test2(A,B,C)
```
BenchmarkTools.Trial:
memory estimate: 7.63 MiB
allocs estimate: 5
--------------
minimum time: 4.263 ms (0.00% GC)
median time: 5.274 ms (0.00% GC)
mean time: 5.760 ms (14.21% GC)
maximum time: 60.352 ms (91.60% GC)
--------------
samples: 867
evals/sample: 1
This puts the whole expression into a single function call, and thus only one array is required to store the output. This is the same as writing the loop:
```julia
function test3(A,B,C)
D = similar(A)
@inbounds for i in eachindex(A)
D[i] = A[i] + B[i] + C[i]
end
D
end
@benchmark test3(A,B,C)
```
BenchmarkTools.Trial:
memory estimate: 7.63 MiB
allocs estimate: 2
--------------
minimum time: 3.068 ms (0.00% GC)
median time: 4.069 ms (0.00% GC)
mean time: 4.573 ms (17.14% GC)
maximum time: 60.806 ms (93.12% GC)
--------------
samples: 1091
evals/sample: 1
However, Julia's broadcast is syntactic sugar for this. If multiple expressions have a `.`, then it will put those vectorized operations together. Thus:
```julia
test4(A,B,C) = A .+ B .+ C
@benchmark test4(A,B,C)
```
BenchmarkTools.Trial:
memory estimate: 7.63 MiB
allocs estimate: 2
--------------
minimum time: 3.100 ms (0.00% GC)
median time: 4.090 ms (0.00% GC)
mean time: 4.611 ms (17.00% GC)
maximum time: 60.075 ms (91.42% GC)
--------------
samples: 1083
evals/sample: 1
is a version with only 1 array created (the output). Note that `.`s can be used with function calls as well:
```julia
sin.(A) .+ sin.(B)
```
1000×1000 Array{Float64,2}:
1.03364 0.98507 1.27358 0.877 … 0.573536 1.23032 0.911319
1.63118 1.53896 0.71938 0.982407 1.18889 0.666447 0.456239
1.326 0.635384 1.00153 0.872837 0.57143 0.957094 0.644016
1.20394 0.252596 0.586836 0.706981 0.484221 1.16159 0.320898
0.602921 0.942712 0.245305 1.04536 0.811623 1.01748 1.49487
0.967846 0.933214 0.425036 0.763434 … 0.339187 0.863011 0.697906
0.672935 0.702386 0.815461 1.4289 1.43948 1.23604 0.991722
0.57422 1.16769 0.553201 0.327915 0.303723 0.17508 1.62917
0.16162 1.48327 0.394958 1.32529 0.73105 0.559688 0.996632
0.664391 0.808939 1.36914 0.981928 0.702237 0.539255 1.432
0.811794 1.08456 0.632371 1.07589 … 1.45691 1.66896 0.456393
0.951618 0.533518 0.78531 1.27613 0.887392 0.883319 1.43496
1.14518 1.27886 0.719942 0.939989 0.399901 0.768837 0.314349
⋮ ⋱
1.35936 1.06333 0.885049 1.31827 1.1435 0.501145 0.358912
1.30287 1.31587 1.1749 0.975842 1.61852 1.10175 0.68206
0.749153 1.35504 1.4116 1.15027 … 1.20905 0.864016 1.1477
1.27482 0.576958 1.17769 0.77052 0.532696 0.833744 1.28555
1.04075 0.844666 0.360267 1.27131 0.679665 0.533756 1.4078
0.674019 0.831317 1.5796 0.122812 0.585702 1.10292 1.46805
0.921581 1.47563 1.16279 1.26375 0.535886 1.04286 0.698426
0.508372 1.27464 0.0699981 1.29666 … 1.59411 1.02171 1.14568
0.0579072 0.884255 0.277504 0.957518 0.521798 1.38383 1.38164
0.958096 1.00757 1.52481 1.49831 1.37391 0.829876 1.52642
0.627583 1.14951 0.796797 1.43778 1.17557 0.490823 0.480095
0.234199 0.78984 1.28475 1.36424 1.11589 0.335778 0.113732
Also, the `@.` macro applies a dot to every operator:
```julia
test5(A,B,C) = @. A + B + C #only one array allocated
@benchmark test5(A,B,C)
```
BenchmarkTools.Trial:
memory estimate: 7.63 MiB
allocs estimate: 3
--------------
minimum time: 3.213 ms (0.00% GC)
median time: 4.115 ms (0.00% GC)
mean time: 4.570 ms (15.95% GC)
maximum time: 8.086 ms (38.82% GC)
--------------
samples: 1092
evals/sample: 1
Using these tools we can get rid of our intermediate array allocations for many vectorized function calls. But we are still allocating the output array. To get rid of that allocation, we can instead use mutation. Mutating broadcast is done via `.=`. For example, if we pre-allocate the output:
```julia
D = zeros(1000,1000);
```
Then we can keep re-using this cache for subsequent calculations. The mutating broadcasting form is:
```julia
test6!(D,A,B,C) = D .= A .+ B .+ C #only one array allocated
@benchmark test6!(D,A,B,C)
```
BenchmarkTools.Trial:
memory estimate: 0 bytes
allocs estimate: 0
--------------
minimum time: 799.699 μs (0.00% GC)
median time: 902.050 μs (0.00% GC)
mean time: 962.816 μs (0.00% GC)
maximum time: 3.425 ms (0.00% GC)
--------------
samples: 5152
evals/sample: 1
If we use `@.` before the `=`, then it will turn it into `.=`:
```julia
test7!(D,A,B,C) = @. D = A + B + C #only one array allocated
@benchmark test7!(D,A,B,C)
```
BenchmarkTools.Trial:
memory estimate: 0 bytes
allocs estimate: 0
--------------
minimum time: 798.400 μs (0.00% GC)
median time: 892.550 μs (0.00% GC)
mean time: 950.436 μs (0.00% GC)
maximum time: 2.693 ms (0.00% GC)
--------------
samples: 5218
evals/sample: 1
Notice that in this case, there is no "output", and instead the values inside of `D` are what are changed (like with the DiffEq inplace function). Many Julia functions have a mutating form which is denoted with a `!`. For example, the mutating form of the `map` is `map!`:
```julia
test8!(D,A,B,C) = map!((a,b,c)->a+b+c,D,A,B,C)
@benchmark test8!(D,A,B,C)
```
BenchmarkTools.Trial:
memory estimate: 32 bytes
allocs estimate: 1
--------------
minimum time: 1.049 ms (0.00% GC)
median time: 1.167 ms (0.00% GC)
mean time: 1.238 ms (0.00% GC)
maximum time: 3.015 ms (0.00% GC)
--------------
samples: 4011
evals/sample: 1
Some operations require using an alternate mutating form in order to be fast. For example, matrix multiplication via `*` allocates a temporary:
```julia
@benchmark A*B
```
BenchmarkTools.Trial:
memory estimate: 7.63 MiB
allocs estimate: 2
--------------
minimum time: 12.283 ms (0.00% GC)
median time: 13.676 ms (0.00% GC)
mean time: 14.268 ms (5.67% GC)
maximum time: 20.365 ms (12.72% GC)
--------------
samples: 350
evals/sample: 1
Instead, we can use the mutating form `mul!` into a cache array to avoid allocating the output:
```julia
using LinearAlgebra
@benchmark mul!(D,A,B) # same as D = A * B
```
BenchmarkTools.Trial:
memory estimate: 0 bytes
allocs estimate: 0
--------------
minimum time: 11.668 ms (0.00% GC)
median time: 12.447 ms (0.00% GC)
mean time: 12.623 ms (0.00% GC)
maximum time: 16.679 ms (0.00% GC)
--------------
samples: 396
evals/sample: 1
For repeated calculations this reduced allocation can stop GC cycles and thus lead to more efficient code. Additionally, ***we can fuse together higher level linear algebra operations using BLAS***. The package [SugarBLAS.jl](https://github.com/lopezm94/SugarBLAS.jl) makes it easy to write higher level operations like `alpha*B*A + beta*C` as mutating BLAS calls.
### Example Optimization: Gierer-Meinhardt Reaction-Diffusion PDE Discretization
Let's optimize the solution of a Reaction-Diffusion PDE's discretization. In its discretized form, this is the ODE:
$$
\begin{align}
du &= D_1 (A_y u + u A_x) + \frac{au^2}{v} + \bar{u} - \alpha u\\
dv &= D_2 (A_y v + v A_x) + a u^2 + \beta v
\end{align}
$$
where $u$, $v$, and $A$ are matrices. Here, we will use the simplified version where $A$ is the tridiagonal stencil $[1,-2,1]$, i.e. it's the 2D discretization of the Laplacian. The native code would be something along the lines of:
```julia
# Generate the constants
p = (1.0,1.0,1.0,10.0,0.001,100.0) # a,α,ubar,β,D1,D2
N = 100
Ax = Array(Tridiagonal([1.0 for i in 1:N-1],[-2.0 for i in 1:N],[1.0 for i in 1:N-1]))
Ay = copy(Ax)
Ax[2,1] = 2.0
Ax[end-1,end] = 2.0
Ay[1,2] = 2.0
Ay[end,end-1] = 2.0
function basic_version!(dr,r,p,t)
a,α,ubar,β,D1,D2 = p
u = r[:,:,1]
v = r[:,:,2]
Du = D1*(Ay*u + u*Ax)
Dv = D2*(Ay*v + v*Ax)
dr[:,:,1] = Du .+ a.*u.*u./v .+ ubar .- α*u
dr[:,:,2] = Dv .+ a.*u.*u .- β*v
end
a,α,ubar,β,D1,D2 = p
uss = (ubar+β)/α
vss = (a/β)*uss^2
r0 = zeros(100,100,2)
r0[:,:,1] .= uss.+0.1.*rand.()
r0[:,:,2] .= vss
prob = ODEProblem(basic_version!,r0,(0.0,0.1),p)
```
[36mODEProblem[0m with uType [36mArray{Float64,3}[0m and tType [36mFloat64[0m. In-place: [36mtrue[0m
timespan: (0.0, 0.1)
u0: [11.0394 11.002 … 11.013 11.0926; 11.0837 11.0584 … 11.0908 11.0092; … ; 11.0918 11.0966 … 11.0582 11.0298; 11.0307 11.0959 … 11.0378 11.0558]
[12.1 12.1 … 12.1 12.1; 12.1 12.1 … 12.1 12.1; … ; 12.1 12.1 … 12.1 12.1; 12.1 12.1 … 12.1 12.1]
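Here `Ax` and `Ay` are the one-dimensional second-difference matrices
$$
A=\begin{pmatrix}
-2 & 1 & & \\
1 & -2 & 1 & \\
 & \ddots & \ddots & \ddots \\
 & & 1 & -2
\end{pmatrix},
$$
with the entry next to each boundary doubled (`Ax[2,1] = 2.0`, etc.), the usual ghost-node trick for zero-flux (Neumann) boundary conditions; the same `2u[i+1,j,1]`-type boundary terms reappear in the devectorized stencil later on.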
In this version we have encoded our initial condition to be a 3-dimensional array, with `u[:,:,1]` being the `A` part and `u[:,:,2]` being the `B` part.
```julia
@benchmark solve(prob,Tsit5())
```
BenchmarkTools.Trial:
memory estimate: 186.88 MiB
allocs estimate: 8589
--------------
minimum time: 80.334 ms (8.83% GC)
median time: 167.756 ms (48.87% GC)
mean time: 137.745 ms (37.82% GC)
maximum time: 192.902 ms (47.32% GC)
--------------
samples: 37
evals/sample: 1
While this version isn't very efficient,
#### We recommend writing the "high-level" code first, and iteratively optimizing it!
The first thing that we can do is get rid of the slicing allocations. The operation `r[:,:,1]` creates a temporary array instead of a "view", i.e. a pointer to the already existing memory. To make it a view, add `@view`. Note that we have to be careful with views because they point to the same memory, and thus changing a view changes the original values:
```julia
A = rand(4)
@show A
B = @view A[1:3]
B[2] = 2
@show A
```
A = [0.22581, 0.809546, 0.344768, 0.640214]
A = [0.22581, 2.0, 0.344768, 0.640214]
4-element Array{Float64,1}:
0.2258101941951307
2.0
0.3447681198957038
0.6402135416553185
Notice that changing `B` changed `A`. This is something to be careful of, but at the same time we want to use this since we want to modify the output `dr`. Additionally, the last statement is a purely element-wise operation, and thus we can make use of broadcast fusion there. Let's rewrite `basic_version!` to ***avoid slicing allocations*** and to ***use broadcast fusion***:
```julia
function gm2!(dr,r,p,t)
a,α,ubar,β,D1,D2 = p
u = @view r[:,:,1]
v = @view r[:,:,2]
du = @view dr[:,:,1]
dv = @view dr[:,:,2]
Du = D1*(Ay*u + u*Ax)
Dv = D2*(Ay*v + v*Ax)
@. du = Du + a.*u.*u./v + ubar - α*u
@. dv = Dv + a.*u.*u - β*v
end
prob = ODEProblem(gm2!,r0,(0.0,0.1),p)
@benchmark solve(prob,Tsit5())
```
BenchmarkTools.Trial:
memory estimate: 119.55 MiB
allocs estimate: 7119
--------------
minimum time: 69.464 ms (6.19% GC)
median time: 119.849 ms (35.84% GC)
mean time: 119.770 ms (37.59% GC)
maximum time: 175.505 ms (55.22% GC)
--------------
samples: 42
evals/sample: 1
Now, most of the allocations are taking place in `Du = D1*(Ay*u + u*Ax)` since those operations are vectorized and not mutating. We should instead replace the matrix multiplications with `mul!`. When doing so, we will need to have cache variables to write into. This looks like:
```julia
Ayu = zeros(N,N)
uAx = zeros(N,N)
Du = zeros(N,N)
Ayv = zeros(N,N)
vAx = zeros(N,N)
Dv = zeros(N,N)
function gm3!(dr,r,p,t)
a,α,ubar,β,D1,D2 = p
u = @view r[:,:,1]
v = @view r[:,:,2]
du = @view dr[:,:,1]
dv = @view dr[:,:,2]
mul!(Ayu,Ay,u)
mul!(uAx,u,Ax)
mul!(Ayv,Ay,v)
mul!(vAx,v,Ax)
@. Du = D1*(Ayu + uAx)
@. Dv = D2*(Ayv + vAx)
@. du = Du + a*u*u./v + ubar - α*u
@. dv = Dv + a*u*u - β*v
end
prob = ODEProblem(gm3!,r0,(0.0,0.1),p)
@benchmark solve(prob,Tsit5())
```
BenchmarkTools.Trial:
memory estimate: 29.76 MiB
allocs estimate: 5355
--------------
minimum time: 54.087 ms (2.33% GC)
median time: 57.128 ms (2.28% GC)
mean time: 59.910 ms (5.30% GC)
maximum time: 142.458 ms (51.42% GC)
--------------
samples: 84
evals/sample: 1
But our temporary variables are global variables. We need to either declare the caches as `const` or localize them. We can localize them by adding them to the parameters, `p`. It's easier for the compiler to reason about local variables than global variables. ***Localizing variables helps to ensure type stability***.
```julia
p = (1.0,1.0,1.0,10.0,0.001,100.0,Ayu,uAx,Du,Ayv,vAx,Dv) # a,α,ubar,β,D1,D2
function gm4!(dr,r,p,t)
a,α,ubar,β,D1,D2,Ayu,uAx,Du,Ayv,vAx,Dv = p
u = @view r[:,:,1]
v = @view r[:,:,2]
du = @view dr[:,:,1]
dv = @view dr[:,:,2]
mul!(Ayu,Ay,u)
mul!(uAx,u,Ax)
mul!(Ayv,Ay,v)
mul!(vAx,v,Ax)
@. Du = D1*(Ayu + uAx)
@. Dv = D2*(Ayv + vAx)
@. du = Du + a*u*u./v + ubar - α*u
@. dv = Dv + a*u*u - β*v
end
prob = ODEProblem(gm4!,r0,(0.0,0.1),p)
@benchmark solve(prob,Tsit5())
```
BenchmarkTools.Trial:
memory estimate: 29.66 MiB
allocs estimate: 1090
--------------
minimum time: 46.933 ms (2.46% GC)
median time: 49.269 ms (2.41% GC)
mean time: 51.746 ms (5.35% GC)
maximum time: 137.551 ms (57.03% GC)
--------------
samples: 97
evals/sample: 1
We could then use BLAS calls such as `gemm!` to optimize the matrix multiplications some more, but instead let's devectorize the stencil.
```julia
p = (1.0,1.0,1.0,10.0,0.001,100.0,N)
function fast_gm!(du,u,p,t)
a,α,ubar,β,D1,D2,N = p
@inbounds for j in 2:N-1, i in 2:N-1
du[i,j,1] = D1*(u[i-1,j,1] + u[i+1,j,1] + u[i,j+1,1] + u[i,j-1,1] - 4u[i,j,1]) +
a*u[i,j,1]^2/u[i,j,2] + ubar - α*u[i,j,1]
end
@inbounds for j in 2:N-1, i in 2:N-1
du[i,j,2] = D2*(u[i-1,j,2] + u[i+1,j,2] + u[i,j+1,2] + u[i,j-1,2] - 4u[i,j,2]) +
a*u[i,j,1]^2 - β*u[i,j,2]
end
@inbounds for j in 2:N-1
i = 1
du[1,j,1] = D1*(2u[i+1,j,1] + u[i,j+1,1] + u[i,j-1,1] - 4u[i,j,1]) +
a*u[i,j,1]^2/u[i,j,2] + ubar - α*u[i,j,1]
end
@inbounds for j in 2:N-1
i = 1
du[1,j,2] = D2*(2u[i+1,j,2] + u[i,j+1,2] + u[i,j-1,2] - 4u[i,j,2]) +
a*u[i,j,1]^2 - β*u[i,j,2]
end
@inbounds for j in 2:N-1
i = N
du[end,j,1] = D1*(2u[i-1,j,1] + u[i,j+1,1] + u[i,j-1,1] - 4u[i,j,1]) +
a*u[i,j,1]^2/u[i,j,2] + ubar - α*u[i,j,1]
end
@inbounds for j in 2:N-1
i = N
du[end,j,2] = D2*(2u[i-1,j,2] + u[i,j+1,2] + u[i,j-1,2] - 4u[i,j,2]) +
a*u[i,j,1]^2 - β*u[i,j,2]
end
@inbounds for i in 2:N-1
j = 1
du[i,1,1] = D1*(u[i-1,j,1] + u[i+1,j,1] + 2u[i,j+1,1] - 4u[i,j,1]) +
a*u[i,j,1]^2/u[i,j,2] + ubar - α*u[i,j,1]
end
@inbounds for i in 2:N-1
j = 1
du[i,1,2] = D2*(u[i-1,j,2] + u[i+1,j,2] + 2u[i,j+1,2] - 4u[i,j,2]) +
a*u[i,j,1]^2 - β*u[i,j,2]
end
@inbounds for i in 2:N-1
j = N
du[i,end,1] = D1*(u[i-1,j,1] + u[i+1,j,1] + 2u[i,j-1,1] - 4u[i,j,1]) +
a*u[i,j,1]^2/u[i,j,2] + ubar - α*u[i,j,1]
end
@inbounds for i in 2:N-1
j = N
du[i,end,2] = D2*(u[i-1,j,2] + u[i+1,j,2] + 2u[i,j-1,2] - 4u[i,j,2]) +
a*u[i,j,1]^2 - β*u[i,j,2]
end
@inbounds begin
i = 1; j = 1
du[1,1,1] = D1*(2u[i+1,j,1] + 2u[i,j+1,1] - 4u[i,j,1]) +
a*u[i,j,1]^2/u[i,j,2] + ubar - α*u[i,j,1]
du[1,1,2] = D2*(2u[i+1,j,2] + 2u[i,j+1,2] - 4u[i,j,2]) +
a*u[i,j,1]^2 - β*u[i,j,2]
i = 1; j = N
du[1,N,1] = D1*(2u[i+1,j,1] + 2u[i,j-1,1] - 4u[i,j,1]) +
a*u[i,j,1]^2/u[i,j,2] + ubar - α*u[i,j,1]
du[1,N,2] = D2*(2u[i+1,j,2] + 2u[i,j-1,2] - 4u[i,j,2]) +
a*u[i,j,1]^2 - β*u[i,j,2]
i = N; j = 1
du[N,1,1] = D1*(2u[i-1,j,1] + 2u[i,j+1,1] - 4u[i,j,1]) +
a*u[i,j,1]^2/u[i,j,2] + ubar - α*u[i,j,1]
du[N,1,2] = D2*(2u[i-1,j,2] + 2u[i,j+1,2] - 4u[i,j,2]) +
a*u[i,j,1]^2 - β*u[i,j,2]
i = N; j = N
du[end,end,1] = D1*(2u[i-1,j,1] + 2u[i,j-1,1] - 4u[i,j,1]) +
a*u[i,j,1]^2/u[i,j,2] + ubar - α*u[i,j,1]
du[end,end,2] = D2*(2u[i-1,j,2] + 2u[i,j-1,2] - 4u[i,j,2]) +
a*u[i,j,1]^2 - β*u[i,j,2]
end
end
prob = ODEProblem(fast_gm!,r0,(0.0,0.1),p)
@benchmark solve(prob,Tsit5())
```
BenchmarkTools.Trial:
memory estimate: 29.63 MiB
allocs estimate: 505
--------------
minimum time: 7.409 ms (8.40% GC)
median time: 8.683 ms (8.87% GC)
mean time: 9.342 ms (12.98% GC)
maximum time: 91.148 ms (78.92% GC)
--------------
samples: 535
evals/sample: 1
Lastly, we can do other things like multithread the main loops, but these optimizations get the last 2x-3x out. The main optimizations which apply everywhere are the ones we just performed (though the last one only works if your matrix is a stencil. This is known as a matrix-free implementation of the PDE discretization).
This gets us to about 8x faster than our original MATLAB/SciPy/R vectorized style code!
The last thing to do is then ***optimize our algorithm choice***. We have been using `Tsit5()` as our test algorithm, but in reality this problem is a stiff PDE discretization and thus one recommendation is to use `CVODE_BDF()`. However, instead of using the default dense Jacobian, we should make use of the sparse Jacobian afforded by the problem. The Jacobian is the matrix $\frac{df_i}{dr_j}$, where $r$ is read by the linear index (i.e. down columns). But since the $u$ variables depend on the $v$, the band size here is large, and thus this will not do well with a Banded Jacobian solver. Instead, we utilize sparse Jacobian algorithms. `CVODE_BDF` allows us to use a sparse Newton-Krylov solver by setting `linear_solver = :GMRES` (see [the solver documentation](http://docs.juliadiffeq.org/latest/solvers/ode_solve.html#Sundials.jl-1)), and thus we can solve this problem efficiently. Let's see how this scales as we increase the integration time.
```julia
prob = ODEProblem(fast_gm!,r0,(0.0,10.0),p)
@benchmark solve(prob,Tsit5())
```
BenchmarkTools.Trial:
memory estimate: 2.76 GiB
allocs estimate: 41689
--------------
minimum time: 2.255 s (13.38% GC)
median time: 2.399 s (19.28% GC)
mean time: 2.826 s (30.95% GC)
maximum time: 3.823 s (48.64% GC)
--------------
samples: 3
evals/sample: 1
```julia
using Sundials
@benchmark solve(prob,CVODE_BDF(linear_solver=:GMRES))
```
BenchmarkTools.Trial:
memory estimate: 306.28 MiB
allocs estimate: 87141
--------------
minimum time: 1.819 s (0.00% GC)
median time: 1.831 s (4.41% GC)
mean time: 1.889 s (4.83% GC)
maximum time: 2.018 s (9.57% GC)
--------------
samples: 3
evals/sample: 1
```julia
prob = ODEProblem(fast_gm!,r0,(0.0,100.0),p)
# Will go out of memory if we don't turn off `save_everystep`!
@benchmark solve(prob,Tsit5(),save_everystep=false)
```
BenchmarkTools.Trial:
memory estimate: 2.91 MiB
allocs estimate: 113
--------------
minimum time: 5.422 s (0.00% GC)
median time: 5.422 s (0.00% GC)
mean time: 5.422 s (0.00% GC)
maximum time: 5.422 s (0.00% GC)
--------------
samples: 1
evals/sample: 1
```julia
@benchmark solve(prob,CVODE_BDF(linear_solver=:GMRES))
```
BenchmarkTools.Trial:
memory estimate: 306.28 MiB
allocs estimate: 87141
--------------
minimum time: 1.849 s (4.01% GC)
median time: 1.871 s (3.96% GC)
mean time: 1.893 s (4.25% GC)
maximum time: 1.959 s (8.54% GC)
--------------
samples: 3
evals/sample: 1
Now let's check the allocation growth.
```julia
@benchmark solve(prob,CVODE_BDF(linear_solver=:GMRES),save_everystep=false)
```
BenchmarkTools.Trial:
memory estimate: 4.07 MiB
allocs estimate: 77881
--------------
minimum time: 1.646 s (0.00% GC)
median time: 1.694 s (0.00% GC)
mean time: 1.705 s (0.00% GC)
maximum time: 1.774 s (0.00% GC)
--------------
samples: 3
evals/sample: 1
```julia
prob = ODEProblem(fast_gm!,r0,(0.0,500.0),p)
@benchmark solve(prob,CVODE_BDF(linear_solver=:GMRES),save_everystep=false)
```
BenchmarkTools.Trial:
memory estimate: 5.31 MiB
allocs estimate: 108359
--------------
minimum time: 2.306 s (0.00% GC)
median time: 2.337 s (0.00% GC)
mean time: 2.340 s (0.00% GC)
maximum time: 2.375 s (0.00% GC)
--------------
samples: 3
evals/sample: 1
Notice that we've eliminated almost all allocations, allowing the integration time to grow without hitting garbage collection and slowing down.
Why is `CVODE_BDF` doing well? What's happening is that, because the problem is stiff, the number of steps required by the explicit Runge-Kutta method grows rapidly, whereas `CVODE_BDF` is taking large steps. Additionally, the `GMRES` linear solver form is quite an efficient way to solve the implicit system in this case. This is problem-dependent, and in many cases using a Krylov method effectively requires a preconditioner, so you need to play around with testing other algorithms and linear solvers to find out what works best with your problem.
## Conclusion
Julia gives you the tools to optimize the solver "all the way", but you need to make use of it. The main thing to avoid is temporary allocations. For small systems, this is effectively done via static arrays. For large systems, this is done via in-place operations and cache arrays. Either way, the resulting solution can be immensely sped up over vectorized formulations by using these principles.
| b45dfc02a4ab564a7b57325090347d457ece5a01 | 56,818 | ipynb | Jupyter Notebook | notebook/introduction/optimizing_diffeq_code.ipynb | Fromeworld/DiffEqTutorials.jl | 1e017f33764b32762eab97cca2b9dfea5bece9fc | [
"MIT"
]
| null | null | null | notebook/introduction/optimizing_diffeq_code.ipynb | Fromeworld/DiffEqTutorials.jl | 1e017f33764b32762eab97cca2b9dfea5bece9fc | [
"MIT"
]
| null | null | null | notebook/introduction/optimizing_diffeq_code.ipynb | Fromeworld/DiffEqTutorials.jl | 1e017f33764b32762eab97cca2b9dfea5bece9fc | [
"MIT"
]
| null | null | null | 32.654023 | 961 | 0.512179 | true | 11,818 | Qwen/Qwen-72B | 1. YES
2. YES | 0.785309 | 0.855851 | 0.672107 | __label__eng_Latn | 0.937477 | 0.399861 |
# Lecture 7: Gambler's Ruin & Random Variables
## Stat 110, Prof. Joe Blitzstein, Harvard University
----
## Gambler's Ruin
Two gamblers $A$ and $B$ play a sequence of rounds until one wins all the money and the other is ruined (goes bankrupt). Each round carries a one dollar bet, and the rounds are independent events. Let $p = P(\text{A wins a given round})$ and its complement $q = 1 - p$, by convention.
_What is the probability that $A$ wins the entire game?_
Some clarifications:
* there is a total of $N$ dollars in this closed system game (no other money comes into play)
* $A$ starts with $i$ dollars, $B$ starts with $N-i$ dollars
But where do we begin to solve this problem?
### Random Walk
A [random walk](https://en.wikipedia.org/wiki/Random_walk) between two points on a number line is very similar to the Gambler's Ruin.
How many rounds could a game last? Is it possible for the game to continue on to infinity?
Well, notice how this has a very nice __recursive nature__. If $A$ loses a round, the game can be seen as starting anew at $i-1$, and if he wins, the game would start anew at $i+1$. It is the same problem, but with a different starting condition.
### Strategy
Conditioning on the _first step_ is called __first step analysis__.
Let $P_i = P(\text{A wins the entire game|A starts with i dollars})$. Then from the Law of Total Probability, we have:
\begin{align}
P_i &= p P_{i+1} + q P_{i-1} \text{, } & &\text{where }1 \leq i \leq N-1 \\
& & & P_0 = 0 \\
& & & P_N = 1 \\
\end{align}
See how this is a recursive equation? This is called a [__difference equation__](http://mathworld.wolfram.com/DifferenceEquation.html), which is a discrete analog of a differential equation.
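Before solving it in closed form below, note that for a fixed $N$ the recurrence together with the boundary conditions is just a small linear system, so it can also be solved numerically. The sketch below is not from the lecture (the function name is ours) and simply sets up and solves that system with NumPy.
```python
import numpy as np

def gamblers_ruin_numeric(p, N):
    """Solve P_i = p*P_{i+1} + q*P_{i-1} with P_0 = 0, P_N = 1 as a linear system."""
    q = 1.0 - p
    A = np.zeros((N - 1, N - 1))
    b = np.zeros(N - 1)
    for i in range(1, N):            # interior states i = 1, ..., N-1
        row = i - 1
        A[row, row] = -1.0           # coefficient of P_i
        if i + 1 < N:
            A[row, row + 1] = p      # coefficient of P_{i+1}
        else:
            b[row] = -p              # P_N = 1 moves to the right-hand side
        if i - 1 > 0:
            A[row, row - 1] = q      # coefficient of P_{i-1}; P_0 = 0 adds nothing
    return np.linalg.solve(A, b)

print(gamblers_ruin_numeric(0.49, 20)[9])   # P_10 for N=20, p=0.49; about 0.40
```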
### Solving the Difference Equation
\begin{align}
P_i &= p P_{i+1} + q P_{i-1} & & \\
\\
\\
P_i &= x^i & &\text{see what happens when we guess with a power} \\
\Rightarrow x^i &= p x^{i+1} + q x^{i-1} \\
0 &= p x^2 - x + q & &\text{factoring out } x^{i-1} \text{, we are left with a quadratic}\\
\\
x &= \frac{1 \pm \sqrt{1-4pq}}{2p} & &\text{solving with the quadratic formula} \\
&= \frac{1 \pm \sqrt{(2p-1)^2}}{2p} & &\text{since }1-4pq = 1-4p(1-p) = 4p^2 - 4p + 1 = (2p -1)^2 \\
&= \frac{1 \pm (2p-1)}{2p} \\
&\in \left\{1, \frac{q}{p} \right\} \\
\\
\\
P_i &= A(1)^i + B\left(\frac{q}{p}\right)^i & &\text{if } p \neq q~~~~ \text{(general solution for difference equation)} \\
\Rightarrow B &= -A & &\text{from }P_0 = 0\\
\Rightarrow 1 &= A(1)^N + B\left(\frac{q}{p}\right)^N & &\text{from } P_N = 1\\
&= A(1)^N - A\left(\frac{q}{p}\right)^N \\
&= A\left(1-\left(\frac{q}{p}\right)^N\right) \\
\Rightarrow A &= \frac{1}{1-\left(\frac{q}{p}\right)^N} \\
\\
\\
\therefore P_i &=
\begin{cases}
\frac{1-\left(\frac{q}{p}\right)^i}{1-\left(\frac{q}{p}\right)^N} & \quad \text{ if } p \neq q \\
\frac{i}{N} & \quad \text{ if } p = q \\
\end{cases}
\end{align}
### Example calculations of $P_i$ over a range of $N$
Assuming an unfair game where $p=0.49$, $q=0.51$:
```python
import math
def gamblers_ruin(i, p, q, N):
if math.isclose(p,q):
return i/N
else:
return ((1 - (q/p)**i)) / (1 - (q/p)**N)
p = 0.49
q = 1.0 - p
N = 20
i = N/2
print("With N={} and p={}, probability that A wins all is {:.2f}".format(N, p, gamblers_ruin(i, p, q, N)))
N = 100
i = N/2
print("With N={} and p={}, probability that A wins all is {:.2f}".format(N, p, gamblers_ruin(i, p, q, N)))
N = 200
i = N/2
print("With N={} and p={}, probability that A wins all is {:.2f}".format(N, p, gamblers_ruin(i, p, q, N)))
```
With N=20 and p=0.49, probability that A wins all is 0.40
With N=100 and p=0.49, probability that A wins all is 0.12
With N=200 and p=0.49, probability that A wins all is 0.02
And assuming a fair game where $p = q = 0.5$:
```python
p = 0.5
q = 1.0 - p
N = 20
i = N/2
print("With N={} and p={}, probability that A wins all is {:.2f}".format(N, p, gamblers_ruin(i, p, q, N)))
N = 100
i = N/2
print("With N={} and p={}, probability that A wins all is {:.2f}".format(N, p, gamblers_ruin(i, p, q, N)))
N = 200
i = N/2
print("With N={} and p={}, probability that A wins all is {:.2f}".format(N, p, gamblers_ruin(i, p, q, N)))
```
With N=20 and p=0.5, probability that A wins all is 0.50
With N=100 and p=0.5, probability that A wins all is 0.50
With N=200 and p=0.5, probability that A wins all is 0.50
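As a sanity check (not in the original notes), we can also estimate $P_i$ by simulation and compare against `gamblers_ruin`; the helper below is a minimal Monte Carlo sketch with names of our choosing.
```python
import random

def simulate_ruin(i, p, N, trials=20_000, seed=0):
    """Monte Carlo estimate of P(A wins the entire game | A starts with i dollars)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        money = i
        while 0 < money < N:
            money += 1 if rng.random() < p else -1
        wins += (money == N)
    return wins / trials

print(simulate_ruin(10, 0.49, 20))   # compare with gamblers_ruin(10, 0.49, 0.51, 20) ~ 0.40
print(simulate_ruin(10, 0.50, 20))   # ~ 0.50 for the fair game
```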
#### Could the game ever continue forever on to infinity?
Recall that we have the following solution to the difference equation for the Gambler's Ruin game:
\begin{align}
P_i &=
\begin{cases}
\frac{1-\left(\frac{q}{p}\right)^i}{1-\left(\frac{q}{p}\right)^N} & \quad \text{ if } p \neq q \\
\frac{i}{N} & \quad \text{ if } p = q \\
\end{cases}
\end{align}
The only time you'd think the game could continue on to infinity is when $p=q$. But
\begin{align}
P(\Omega) &= 1\\
&= P(\text{A wins all}) + P(\text{B wins all}) \\
&= P_i + P_{N-i} \\
&= \frac{i}{N} + \frac{N-i}{N}
\end{align}
The above implies that aside from the case where $A$ wins all, and the case where $B$ wins all, there is no other event in $\Omega$ to consider, hence the game can never continue on to infinity without either side winning.
This also means that unless $p=q$, you __will__ lose your money, and the only question is how fast will you lose it.
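How fast? A quick Monte Carlo sketch (not part of the lecture; the function name is ours) estimates the expected number of rounds before someone goes bankrupt. For the fair game $p=q$, the expected duration is known to be $i(N-i)$, a standard result stated here without proof.
```python
import random

def simulate_duration(i, p, N, trials=5_000, seed=1):
    """Monte Carlo estimate of the expected number of rounds until one player is ruined."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        money, steps = i, 0
        while 0 < money < N:
            money += 1 if rng.random() < p else -1
            steps += 1
        total += steps
    return total / trials

print(simulate_duration(10, 0.5, 20))    # close to i*(N-i) = 100 for the fair game
print(simulate_duration(10, 0.49, 20))   # the slightly unfair game is close to the same length here
```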
----
# Random Variables
Consider these statements:
\begin{align}
x + 2 &= 9 \\
x &= 7
\end{align}
_What is a variable?_
* variable $x$ is a symbol that we use as a substitute for an arbitrary _constant_ value.
_What is a __random__ variable?_
* This is not a _variable_, but a __function from the sample space $S$ to $\mathbb{R}$__.
* It is a "summary" of an aspect of the experiment (this is where the randomness comes from)
Here are a few of the most useful _discrete random variables_.
----
## Bernoulli Distribution
### Description
A probability distribution of a random variable that takes the value 1 in the case of a success with probability $p$; or takes the value 0 in case of a failure with probability $1-p$.
A most common example would be a coin toss, where heads might be considered a success with probability $p=0.5$ if the coin is a fair.
A random variable $x$ has the Bernoulli distribution if
- $x \in \{0, 1\}$
- $P(x=1) = p$
- $P(x=0) = 1-p$
### Notation
$X \sim \operatorname{Bern}(p)$
### Parameters
$0 < p < 1 \text{, } p \in \mathbb{R}$
### Probability mass function
The probability mass function $P(x)$ over possible values $x$
\begin{align}
P(x) =
\begin{cases}
1-p, &\text{ if } x = 0 \\
p, &\text{ if } x = 1 \\
\end{cases} \\
\end{align}
### Expected value
\begin{align}
\mathbb{E}(X) &= 1 P(X=1) + 0 P(X=0) \\
&= p
\end{align}
### Special case: Indicator random variables (r.v.)
\begin{align}
&X =
\begin{cases}
1, &\text{ if event A occurs} \\
0, &\text{ otherwise} \\
\end{cases} \\
\\
\\
\Rightarrow &\mathbb{E}(X) = P(A)
\end{align}
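As a quick illustration (not in the original notes), `scipy.stats` — which is already used below for the Binomial plot — provides a Bernoulli object whose PMF and mean match the formulas above.
```python
from scipy.stats import bernoulli

p = 0.3
X = bernoulli(p)
print(X.pmf(0), X.pmf(1))              # 1-p and p
print(X.mean())                        # E(X) = p
print(X.rvs(size=10, random_state=0))  # a few simulated Bernoulli(p) trials
```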
## Binomial Distribution
### Description
The distribution of the number of successes in $n$ independent Bernoulli trials $\operatorname{Bern}(p)$, where the chance of success $p$ is the same for all trials $n$.
Another case might be a string of indicator random variables.
### Notation
$X \sim \operatorname{Bin}(n, p)$
### Parameters
- $n \in \mathbb{N}$
- $p \in [0,1]$
```python
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import (MultipleLocator, FormatStrFormatter,
AutoMinorLocator)
from scipy.stats import binom
%matplotlib inline
plt.xkcd()
_, ax = plt.subplots(figsize=(12,8))
# a few Binomial parameters n and p
pop_sizes = [240, 120, 60, 24]
p_values = [0.2, 0.3, 0.4, 0.8]
params = list(zip(pop_sizes, p_values))
# colorblind-safe, qualitative color scheme
colors = ['#a6cee3','#1f78b4','#b2df8a','#33a02c']
for i,(n,p) in enumerate(params):
x = np.arange(binom.ppf(0.01, n, p), binom.ppf(0.99, n, p))
y = binom.pmf(x, n, p)
ax.plot(x, y, 'o', ms=8, color=colors[i], label='n={}, p={}'.format(n,p))
ax.vlines(x, 0, y, color=colors[i], alpha=0.3)
# legend styling
legend = ax.legend()
for label in legend.get_texts():
label.set_fontsize('large')
for label in legend.get_lines():
label.set_linewidth(1.5)
# y-axis
ax.set_ylim([0.0, 0.23])
ax.set_ylabel(r'$P(x=k)$')
# x-axis
ax.set_xlim([10, 65])
ax.set_xlabel('# of successes k out of n Bernoulli trials')
# x-axis tick formatting
majorLocator = MultipleLocator(5)
majorFormatter = FormatStrFormatter('%d')
minorLocator = MultipleLocator(1)
ax.xaxis.set_major_locator(majorLocator)
ax.xaxis.set_major_formatter(majorFormatter)
ax.grid(color='grey', linestyle='-', linewidth=0.3)
plt.suptitle(r'Binomial PMF: $P(x=k) = \binom{n}{k} p^k (1-p)^{n-k}$')
plt.show()
```
### Probability mass function
\begin{align}
P(x=k) &= \binom{n}{k} p^k (1-p)^{n-k}
\end{align}
### Expected value
\begin{align}
\mathbb{E}(X) = np
\end{align}
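A quick numeric check of the two formulas above (not part of the original notes), using `scipy.stats.binom`:
```python
import numpy as np
from scipy.stats import binom

n, p = 10, 0.3
k = np.arange(n + 1)
pmf = binom.pmf(k, n, p)
print(pmf.sum())          # the PMF sums to 1
print((k * pmf).sum())    # equals n*p = 3.0
print(binom.mean(n, p))   # same expected value straight from scipy
```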
## In parting...
Now think about this true statement as we move on to Lecture 3:
\begin{align}
X &\sim \operatorname{Bin}(n,p) \text{, } Y \sim \operatorname{Bin}(m,p) \\
\rightarrow X+Y &\sim \operatorname{Bin}(n+m, p)
\end{align}
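The statement can already be checked empirically with a quick simulation (not in the original notes):
```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
n, m, p = 8, 5, 0.4
samples = rng.binomial(n, p, size=100_000) + rng.binomial(m, p, size=100_000)
for k in range(5):                       # empirical frequencies vs Bin(n+m, p) PMF
    print(k, (samples == k).mean(), binom.pmf(k, n + m, p))
```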
----
## Appendix A: Solving $P_i$ when $p=q$ using l'Hopital's Rule
To solve for the case where $p = q$, let $x = \frac{q}{p}$ and take the limit as $x \to 1$.
\begin{align}
lim_{x \rightarrow 1}{\frac{1-x^i}{1-x^N}} &= lim_{x\rightarrow1}{\frac{ix^{i-1}}{Nx^{N-1}}} &\text{ by l'Hopital's Rule} \\
&= \frac{i}{N}
\end{align}
----
View [Lecture 7: Gambler's Ruin and Random Variables | Statistics 110](http://bit.ly/2PmMbdV) on YouTube.
| 39b21a2be231143a8f7039667e6073967b4945ff | 126,499 | ipynb | Jupyter Notebook | Lecture_07.ipynb | abhra-nilIITKgp/stats-110 | 258461cdfbdcf99de5b96bcf5b4af0dd98d48f85 | [
"BSD-3-Clause"
]
| 113 | 2016-04-29T07:27:33.000Z | 2022-02-27T18:32:47.000Z | Lecture_07.ipynb | snoop2head/stats-110 | 88d0cc56ede406a584f6ba46368e548010f2b14a | [
"BSD-3-Clause"
]
| null | null | null | Lecture_07.ipynb | snoop2head/stats-110 | 88d0cc56ede406a584f6ba46368e548010f2b14a | [
"BSD-3-Clause"
]
| 65 | 2016-12-24T02:02:25.000Z | 2022-02-13T13:20:02.000Z | 268.575372 | 111,546 | 0.900837 | true | 3,246 | Qwen/Qwen-72B | 1. YES
2. YES | 0.919643 | 0.822189 | 0.75612 | __label__eng_Latn | 0.954083 | 0.595052 |
# Fastai Course DL from the Foundations Optimizer Functions
> Implementation of Optimizers from scratch, such as ADAM and friends as well as LAMB (Lesson 4 Part 3)
- toc: true
- badges: true
- comments: true
- categories: [jupyter]
- image: images/LSUV.png
# Fastai Optimizer tweaks
- This Post is based on the Notebok by the [Fastai Course Part2](https://course.fast.ai/)
```python
#collapse
%load_ext autoreload
%autoreload 2
%matplotlib inline
```
```python
#collapse
from exp.nb_08 import *
```
```python
#collapse
datasets.URLs.IMAGENETTE_160
```
'https://s3.amazonaws.com/fast-ai-imageclas/imagenette2-160'
## Imagenette data
We grab the data from the previous notebook.
[Jump_to lesson 11 video](https://course.fast.ai/videos/?lesson=11&t=3917)
```python
#collapse
path = Path("/home/cedric/.fastai/data/imagenette2-160")
```
```python
#collapse_show
tfms = [make_rgb, ResizeFixed(128), to_byte_tensor, to_float_tensor]
bs=128
il = ImageList.from_files(path, tfms=tfms)
sd = SplitData.split_by_func(il, partial(grandparent_splitter, valid_name='val'))
ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())
data = ll.to_databunch(bs, c_in=3, c_out=10, num_workers=4)
```
Then a model:
```python
#collapse
nfs = [32,64,128,256]
```
```python
#collapse
cbfs = [partial(AvgStatsCallback,accuracy), CudaCallback,
partial(BatchTransformXCallback, norm_imagenette)]
```
This is the baseline of training with vanilla SGD.
```python
#collapse
learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs)
```
```python
#collapse
run.fit(1, learn)
```
train: [1.799063679836836, tensor(0.3704, device='cuda:0')]
valid: [1.7272854050557325, tensor(0.4257, device='cuda:0')]
## Refining the optimizer
In PyTorch, the base optimizer in `torch.optim` is just a dictionary that stores the hyper-parameters and references to the parameters of the model we want to train in parameter groups (different groups can have different learning rates/momentum/weight decay... which is what lets us do discriminative learning rates).
It contains a method `step` that will update our parameters with the gradients and a method `zero_grad` to detach and zero the gradients of all our parameters.
We build the equivalent from scratch, only ours will be more flexible. In our implementation, the step function loops over all the parameters to execute the step using stepper functions that we have to provide when initializing the optimizer.
[Jump_to lesson 11 video](https://course.fast.ai/videos/?lesson=11&t=4074)
```python
#collapse_show
class Optimizer():
def __init__(self, params, steppers, **defaults):
# might be a generator
self.param_groups = list(params)
# ensure params is a list of lists
if not isinstance(self.param_groups[0], list): self.param_groups = [self.param_groups]
self.hypers = [{**defaults} for p in self.param_groups]
self.steppers = listify(steppers)
def grad_params(self):
return [(p,hyper) for pg,hyper in zip(self.param_groups,self.hypers)
for p in pg if p.grad is not None]
def zero_grad(self):
for p,hyper in self.grad_params():
p.grad.detach_()
p.grad.zero_()
def step(self):
for p,hyper in self.grad_params(): compose(p, self.steppers, **hyper)
```
To do basic SGD, this is what a step looks like:
```python
#collapse_show
def sgd_step(p, lr, **kwargs):
p.data.add_(-lr, p.grad.data)
return p
```
```python
#collapse_show
opt_func = partial(Optimizer, steppers=[sgd_step])
```
Now that we have changed the optimizer, we will need to adjust the callbacks that were using properties from the PyTorch optimizer: in particular the hyper-parameters are in the list of dictionaries `opt.hypers` (PyTorch has everything in the list of param groups).
```python
#collapse_show
class Recorder(Callback):
def begin_fit(self): self.lrs,self.losses = [],[]
def after_batch(self):
if not self.in_train: return
self.lrs.append(self.opt.hypers[-1]['lr'])
self.losses.append(self.loss.detach().cpu())
def plot_lr (self): plt.plot(self.lrs)
def plot_loss(self): plt.plot(self.losses)
def plot(self, skip_last=0):
losses = [o.item() for o in self.losses]
n = len(losses)-skip_last
plt.xscale('log')
plt.plot(self.lrs[:n], losses[:n])
class ParamScheduler(Callback):
_order=1
def __init__(self, pname, sched_funcs):
self.pname,self.sched_funcs = pname,listify(sched_funcs)
def begin_batch(self):
if not self.in_train: return
fs = self.sched_funcs
if len(fs)==1: fs = fs*len(self.opt.param_groups)
pos = self.n_epochs/self.epochs
for f,h in zip(fs,self.opt.hypers): h[self.pname] = f(pos)
class LR_Find(Callback):
_order=1
def __init__(self, max_iter=100, min_lr=1e-6, max_lr=10):
self.max_iter,self.min_lr,self.max_lr = max_iter,min_lr,max_lr
self.best_loss = 1e9
def begin_batch(self):
if not self.in_train: return
pos = self.n_iter/self.max_iter
lr = self.min_lr * (self.max_lr/self.min_lr) ** pos
for pg in self.opt.hypers: pg['lr'] = lr
def after_step(self):
if self.n_iter>=self.max_iter or self.loss>self.best_loss*10:
raise CancelTrainException()
if self.loss < self.best_loss: self.best_loss = self.loss
```
So let's check we didn't break anything and that recorder and param scheduler work properly.
```python
#collapse_show
sched = combine_scheds([0.3, 0.7], [sched_cos(0.3, 0.6), sched_cos(0.6, 0.2)])
```
```python
#collapse_show
cbfs = [partial(AvgStatsCallback,accuracy),
CudaCallback, Recorder,
partial(ParamScheduler, 'lr', sched)]
```
```python
#collapse_show
learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs, opt_func=opt_func)
```
```python
#collapse_show
%time run.fit(1, learn)
```
train: [1.7935766134887527, tensor(0.3815, device='cuda:0')]
valid: [1.428013908240446, tensor(0.5289, device='cuda:0')]
CPU times: user 22.2 s, sys: 2.2 s, total: 24.4 s
Wall time: 32 s
```python
#collapse_show
run.recorder.plot_loss()
```
```python
#collapse_show
run.recorder.plot_lr()
```
## Weight decay
[Jump_to lesson 11 video](https://course.fast.ai/videos/?lesson=11&t=4623)
If we let our model learn arbitrarily large parameter values, it might fit all the data points in the training set with an over-complex function that has very sharp changes, which will lead to overfitting.
Weight decay comes from the idea of L2 regularization, which consists in adding to your loss function the sum of all the weights squared. Why do that? Because when we compute the gradients, it will add a contribution to them that will encourage the weights to be as small as possible.
Limiting our weights from growing too much is going to hinder the training of the model, but it will yield to a state where it generalizes better. Going back to the theory a little bit, weight decay (or just `wd`) is a parameter that controls that sum of squares we add to our loss:
``` python
loss_with_wd = loss + (wd/2) * (weights**2).sum()
```
In practice though, it would be very inefficient (and maybe numerically unstable) to compute that big sum and add it to the loss. If you remember a little bit of high school math, the derivative of `p**2` with respect to `p` is `2*p`. So adding that big sum to our loss is exactly the same as doing:
``` python
weight.grad += wd * weight
```
for every weight in our model, which in the case of vanilla SGD is equivalent to updating the parameters with:
``` python
weight = weight - lr*(weight.grad + wd*weight)
```
This technique is called "weight decay", as each weight is decayed by a factor `lr * wd`, as it's shown in this last formula.
This only works for standard SGD, as we have seen that with momentum, RMSProp and Adam, the update has some additional formulas around the gradient. In those cases, the formula that comes from L2 regularization:
``` python
weight.grad += wd * weight
```
is different than weight decay
``` python
new_weight = weight - lr * weight.grad - lr * wd * weight
```
Most libraries use the first one, but as it was pointed out in [Decoupled Weight Decay Regularization](https://arxiv.org/pdf/1711.05101.pdf) by Ilya Loshchilov and Frank Hutter, it is better to use the second one with the Adam optimizer, which is why fastai made it its default.
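Here is a tiny scalar sketch (not from the lesson; all numbers are arbitrary) of the difference between the two formulas: for plain SGD they coincide, but once the gradient is rescaled Adam-style they no longer do.
```python
# Scalar toy example: identical for plain SGD, different once the gradient is rescaled.
lr, wd = 0.1, 0.01
w, grad = 2.0, 0.5

sgd_l2 = w - lr * (grad + wd * w)        # L2 regularization folded into the gradient
sgd_wd = w - lr * grad - lr * wd * w     # decoupled weight decay
print(sgd_l2, sgd_wd)                    # same number

adapt = 0.2                              # stand-in for Adam's 1/(sqrt(v_hat)+eps) factor
adam_l2 = w - lr * adapt * (grad + wd * w)     # the wd term gets rescaled as well
adam_wd = w - lr * adapt * grad - lr * wd * w  # the wd term is applied directly
print(adam_l2, adam_wd)                  # now they differ
```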
Weight decay is subtracting `lr*wd*weight` from the weights. We need this function to have an attribute `_defaults` so that we are sure there is an hyper-parameter of the same name in our `Optimizer`.
```python
#collapse_show
def weight_decay(p, lr, wd, **kwargs):
p.data.mul_(1 - lr*wd)
return p
weight_decay._defaults = dict(wd=0.)
```
L2 regularization is adding `wd*weight` to the gradients.
```python
#collapse_show
def l2_reg(p, lr, wd, **kwargs):
p.grad.data.add_(wd, p.data)
return p
l2_reg._defaults = dict(wd=0.)
```
Let's allow steppers to add to our `defaults` (which are the default values of all the hyper-parameters). This helper function adds to `dest` the key/value pairs it finds while going through `os` and applying `f`, whenever there is no key of the same name already present.
```python
#collapse_show
def maybe_update(os, dest, f):
for o in os:
for k,v in f(o).items():
if k not in dest: dest[k] = v
def get_defaults(d): return getattr(d,'_defaults',{})
```
This is the same as before, we just take the default values of the steppers when none are provided in the kwargs.
```python
#collapse_show
class Optimizer():
def __init__(self, params, steppers, **defaults):
self.steppers = listify(steppers)
maybe_update(self.steppers, defaults, get_defaults)
# might be a generator
self.param_groups = list(params)
# ensure params is a list of lists
if not isinstance(self.param_groups[0], list): self.param_groups = [self.param_groups]
self.hypers = [{**defaults} for p in self.param_groups]
def grad_params(self):
return [(p,hyper) for pg,hyper in zip(self.param_groups,self.hypers)
for p in pg if p.grad is not None]
def zero_grad(self):
for p,hyper in self.grad_params():
p.grad.detach_()
p.grad.zero_()
def step(self):
for p,hyper in self.grad_params(): compose(p, self.steppers, **hyper)
```
```python
#collapse_show
sgd_opt = partial(Optimizer, steppers=[weight_decay, sgd_step])
```
```python
#collapse_show
learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs, opt_func=sgd_opt)
```
Before trying to train, let's check the behavior works as intended: when we don't provide a value for `wd`, we pull the corresponding default from `weight_decay`.
```python
#collapse
model = learn.model
```
```python
#collapse_show
opt = sgd_opt(model.parameters(), lr=0.1)
test_eq(opt.hypers[0]['wd'], 0.)
test_eq(opt.hypers[0]['lr'], 0.1)
```
But if we provide a value, it overrides the default.
```python
#collapse_show
opt = sgd_opt(model.parameters(), lr=0.1, wd=1e-4)
test_eq(opt.hypers[0]['wd'], 1e-4)
test_eq(opt.hypers[0]['lr'], 0.1)
```
Now let's fit.
```python
#collapse
cbfs = [partial(AvgStatsCallback,accuracy), CudaCallback]
```
```python
#collapse
learn,run = get_learn_run(nfs, data, 0.3, conv_layer, cbs=cbfs, opt_func=partial(sgd_opt, wd=0.01))
```
```python
#collapse_show
run.fit(1, learn)
```
train: [1.784850152471222, tensor(0.3859, device='cuda:0')]
valid: [1.7827735619028662, tensor(0.3890, device='cuda:0')]
This is already better than the baseline!
## With momentum
[Jump_to lesson 11 video](https://course.fast.ai/videos/?lesson=11&t=4872)
Momentum requires to add some state. We need to save the moving average of the gradients to be able to do the step and store this inside the optimizer state. To do this, we introduce statistics. Statistics are object with two methods:
- `init_state`, that returns the initial state (a tensor of 0. for the moving average of gradients)
- `update`, that updates the state with the new gradient value
We also read the `_defaults` values of those objects, to allow them to provide default values to hyper-parameters.
```python
#collapse_show
class StatefulOptimizer(Optimizer):
def __init__(self, params, steppers, stats=None, **defaults):
self.stats = listify(stats)
maybe_update(self.stats, defaults, get_defaults)
super().__init__(params, steppers, **defaults)
self.state = {}
def step(self):
for p,hyper in self.grad_params():
if p not in self.state:
#Create a state for p and call all the statistics to initialize it.
self.state[p] = {}
maybe_update(self.stats, self.state[p], lambda o: o.init_state(p))
state = self.state[p]
for stat in self.stats: state = stat.update(p, state, **hyper)
compose(p, self.steppers, **state, **hyper)
self.state[p] = state
```
```python
#collapse_show
class Stat():
_defaults = {}
def init_state(self, p): raise NotImplementedError
def update(self, p, state, **kwargs): raise NotImplementedError
```
Here is an example of `Stat`:
```python
#collapse_show
class AverageGrad(Stat):
_defaults = dict(mom=0.9)
def init_state(self, p): return {'grad_avg': torch.zeros_like(p.grad.data)}
def update(self, p, state, mom, **kwargs):
state['grad_avg'].mul_(mom).add_(p.grad.data)
return state
```
Then we add the momentum step (instead of using the gradients to perform the step, we use the average).
```python
#collapse_show
def momentum_step(p, lr, grad_avg, **kwargs):
p.data.add_(-lr, grad_avg)
return p
```
```python
#collapse_show
sgd_mom_opt = partial(StatefulOptimizer, steppers=[momentum_step,weight_decay],
stats=AverageGrad(), wd=0.01)
```
```python
#collapse_show
learn,run = get_learn_run(nfs, data, 0.3, conv_layer, cbs=cbfs, opt_func=sgd_mom_opt)
```
```python
#collapse_show
run.fit(1, learn)
```
train: [1.9840235241313762, tensor(0.3516, device='cuda:0')]
valid: [1.7593307125796178, tensor(0.4066, device='cuda:0')]
[Jump_to lesson 11 video](https://course.fast.ai/videos/?lesson=11&t=5115) for discussion about weight decay interaction with batch normalisation
### Momentum experiments
What does momentum do to the gradients exactly? Let's do some plots to find out!
[Jump_to lesson 11 video](https://course.fast.ai/videos/?lesson=11&t=5487)
```python
#collapse_show
x = torch.linspace(-4, 4, 200)
y = torch.randn(200) + 0.3
betas = [0.5, 0.7, 0.9, 0.99]
```
```python
#collapse_show
def plot_mom(f):
_,axs = plt.subplots(2,2, figsize=(12,8))
for beta,ax in zip(betas, axs.flatten()):
ax.plot(y, linestyle='None', marker='.')
avg,res = None,[]
for i,yi in enumerate(y):
avg,p = f(avg, beta, yi, i)
res.append(p)
ax.plot(res, color='red')
ax.set_title(f'beta={beta}')
```
This is the regular momentum.
```python
#collapse_show
def mom1(avg, beta, yi, i):
if avg is None: avg=yi
res = beta*avg + yi
return res,res
plot_mom(mom1)
```
As we can see, with too high a value of beta, the average may overshoot wildly with no way to change its course.
Another way to smooth noisy data is to do an exponentially weighted moving average. In this case, there is a dampening of (1-beta) in front of the new value, which is less trusted than the current average. We'll define `lin_comb` (*linear combination*) to make this easier (note that in the lesson this was named `ewma`).
```python
#collapse_show
def lin_comb(v1, v2, beta): return beta*v1 + (1-beta)*v2
```
```python
def mom2(avg, beta, yi, i):
if avg is None: avg=yi
avg = lin_comb(avg, yi, beta)
return avg, avg
plot_mom(mom2)
```
We can see it settles to a roughly constant value when the data is purely random noise. If the data has a certain shape, it will get that shape (with some delay for high beta).
```python
#collapse_show
y = 1 - (x/3) ** 2 + torch.randn(200) * 0.1
```
```python
#collapse_show
y[0]=0.5
```
```python
#collapse_show
plot_mom(mom2)
```
Debiasing is here to correct the wrong information we may have in the very first batch. The debias term corresponds to the sum of the coefficient in our moving average. At the time step i, our average is:
$\begin{align*}
avg_{i} &= \beta\ avg_{i-1} + (1-\beta)\ v_{i} = \beta\ (\beta\ avg_{i-2} + (1-\beta)\ v_{i-1}) + (1-\beta)\ v_{i} \\
&= \beta^{2}\ avg_{i-2} + (1-\beta)\ \beta\ v_{i-1} + (1-\beta)\ v_{i} \\
&= \beta^{3}\ avg_{i-3} + (1-\beta)\ \beta^{2}\ v_{i-2} + (1-\beta)\ \beta\ v_{i-1} + (1-\beta)\ v_{i} \\
&\vdots \\
&= (1-\beta)\ \beta^{i}\ v_{0} + (1-\beta)\ \beta^{i-1}\ v_{1} + \cdots + (1-\beta)\ \beta^{2}\ v_{i-2} + (1-\beta)\ \beta\ v_{i-1} + (1-\beta)\ v_{i}
\end{align*}$
and so the sum of the coefficients is
$\begin{align*}
S &=(1-\beta)\ \beta^{i} + (1-\beta)\ \beta^{i-1} + \cdots + (1-\beta)\ \beta^{2} + (1-\beta)\ \beta + (1-\beta) \\
&= (\beta^{i} - \beta^{i+1}) + (\beta^{i-1} - \beta^{i}) + \cdots + (\beta^{2} - \beta^{3}) + (\beta - \beta^{2}) + (1-\beta) \\
&= 1 - \beta^{i+1}
\end{align*}$
since all the other terms cancel out each other.
By dividing by this term, we make our moving average a true average (in the sense that all the coefficients we used for the average sum up to 1).
```python
#collapse_show
def mom3(avg, beta, yi, i):
if avg is None: avg=0
avg = lin_comb(avg, yi, beta)
return avg, avg/(1-beta**(i+1))
plot_mom(mom3)
```
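A quick numeric check (not from the lesson) that the EWMA coefficients really sum to $1 - \beta^{i+1}$:
```python
import numpy as np

beta, i = 0.9, 10
coeffs = (1 - beta) * beta ** np.arange(i + 1)   # (1-beta), (1-beta)*beta, ..., (1-beta)*beta**i
print(coeffs.sum(), 1 - beta**(i + 1))           # the two agree
```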
## Adam and friends
In Adam, we use the gradient averages but with dampening (not like in SGD with momentum), so let's add this to the `AverageGrad` class.
[Jump_to lesson 11 video](https://course.fast.ai/videos/?lesson=11&t=5889)
```python
#collapse_show
class AverageGrad(Stat):
_defaults = dict(mom=0.9)
def __init__(self, dampening:bool=False): self.dampening=dampening
def init_state(self, p): return {'grad_avg': torch.zeros_like(p.grad.data)}
def update(self, p, state, mom, **kwargs):
state['mom_damp'] = 1-mom if self.dampening else 1.
state['grad_avg'].mul_(mom).add_(state['mom_damp'], p.grad.data)
return state
```
We also need to track the moving average of the gradients squared.
```python
#collapse_show
class AverageSqrGrad(Stat):
_defaults = dict(sqr_mom=0.99)
def __init__(self, dampening:bool=True): self.dampening=dampening
def init_state(self, p): return {'sqr_avg': torch.zeros_like(p.grad.data)}
def update(self, p, state, sqr_mom, **kwargs):
state['sqr_damp'] = 1-sqr_mom if self.dampening else 1.
state['sqr_avg'].mul_(sqr_mom).addcmul_(state['sqr_damp'], p.grad.data, p.grad.data)
return state
```
We will also need the number of steps done during training for the debiasing.
```python
#collapse_show
class StepCount(Stat):
def init_state(self, p): return {'step': 0}
def update(self, p, state, **kwargs):
state['step'] += 1
return state
```
This helper function computes the debias term. If we use dampening, `damp = 1 - mom` and we get the same result as before. If we don't use dampening (`damp = 1`), we will need to divide by `1 - mom` because that term is missing everywhere.
```python
#collapse_show
def debias(mom, damp, step): return damp * (1 - mom**step) / (1-mom)
```
Then the Adam step is just the following:
```python
#collapse_show
def adam_step(p, lr, mom, mom_damp, step, sqr_mom, sqr_damp, grad_avg, sqr_avg, eps, **kwargs):
debias1 = debias(mom, mom_damp, step)
debias2 = debias(sqr_mom, sqr_damp, step)
p.data.addcdiv_(-lr / debias1, grad_avg, (sqr_avg/debias2).sqrt() + eps)
return p
adam_step._defaults = dict(eps=1e-5)
```
```python
#collapse_show
def adam_opt(xtra_step=None, **kwargs):
return partial(StatefulOptimizer, steppers=[adam_step,weight_decay]+listify(xtra_step),
stats=[AverageGrad(dampening=True), AverageSqrGrad(), StepCount()], **kwargs)
```
```python
#collapse_show
learn,run = get_learn_run(nfs, data, 0.001, conv_layer, cbs=cbfs, opt_func=adam_opt())
```
```python
#collapse_show
run.fit(3, learn)
```
train: [1.7405600677209843, tensor(0.4025, device='cuda:0')]
valid: [1.5315888734076433, tensor(0.4904, device='cuda:0')]
train: [1.2346337596697117, tensor(0.5938, device='cuda:0')]
valid: [1.3684276721735669, tensor(0.5437, device='cuda:0')]
train: [0.9368180873112261, tensor(0.7014, device='cuda:0')]
valid: [1.3147652517914012, tensor(0.5766, device='cuda:0')]
## LAMB
[Jump_to lesson 11 video](https://course.fast.ai/videos/?lesson=11&t=6038)
It's then super easy to implement a new optimizer. This is LAMB from a [very recent paper](https://arxiv.org/pdf/1904.00962.pdf):
$\begin{align}
g_{t}^{l} &= \nabla L(w_{t-1}^{l}, x_{t}) \\
m_{t}^{l} &= \beta_{1} m_{t-1}^{l} + (1-\beta_{1}) g_{t}^{l} \\
v_{t}^{l} &= \beta_{2} v_{t-1}^{l} + (1-\beta_{2}) g_{t}^{l} \odot g_{t}^{l} \\
m_{t}^{l} &= m_{t}^{l} / (1 - \beta_{1}^{t}) \\
v_{t}^{l} &= v_{t}^{l} / (1 - \beta_{2}^{t}) \\
r_{1} &= \|w_{t-1}^{l}\|_{2} \\
s_{t}^{l} &= \frac{m_{t}^{l}}{\sqrt{v_{t}^{l} + \epsilon}} + \lambda w_{t-1}^{l} \\
r_{2} &= \| s_{t}^{l} \|_{2} \\
\eta^{l} &= \eta * r_{1}/r_{2} \\
w_{t}^{l} &= w_{t}^{l-1} - \eta_{l} * s_{t}^{l} \\
\end{align}$
```python
#collapse_show
def lamb_step(p, lr, mom, mom_damp, step, sqr_mom, sqr_damp, grad_avg, sqr_avg, eps, wd, **kwargs):
debias1 = debias(mom, mom_damp, step)
debias2 = debias(sqr_mom, sqr_damp, step)
r1 = p.data.pow(2).mean().sqrt()
step = (grad_avg/debias1) / ((sqr_avg/debias2).sqrt()+eps) + wd*p.data
r2 = step.pow(2).mean().sqrt()
p.data.add_(-lr * min(r1/r2,10), step)
return p
lamb_step._defaults = dict(eps=1e-6, wd=0.)
```
```python
#collapse_show
lamb = partial(StatefulOptimizer, steppers=lamb_step, stats=[AverageGrad(dampening=True), AverageSqrGrad(), StepCount()])
```
```python
#collapse_show
learn,run = get_learn_run(nfs, data, 0.003, conv_layer, cbs=cbfs, opt_func=lamb)
```
```python
#collapse_show
run.fit(3, learn)
```
train: [1.862844999141937, tensor(0.3480, device='cuda:0')]
valid: [1.7505729996019108, tensor(0.3931, device='cuda:0')]
train: [1.3384966321021228, tensor(0.5593, device='cuda:0')]
valid: [1.4524099323248407, tensor(0.5159, device='cuda:0')]
train: [1.036301331152973, tensor(0.6672, device='cuda:0')]
valid: [1.3779652667197453, tensor(0.5536, device='cuda:0')]
Other recent variants of optimizers:
- [Large Batch Training of Convolutional Networks](https://arxiv.org/abs/1708.03888) (LARS also uses weight statistics, not just gradient statistics. Can you add that to this class? A rough sketch is given after this list.)
- [Adafactor: Adaptive Learning Rates with Sublinear Memory Cost](https://arxiv.org/abs/1804.04235) (Adafactor combines stats over multiple sets of axes)
- [Adaptive Gradient Methods with Dynamic Bound of Learning Rate](https://arxiv.org/abs/1902.09843)
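As a starting point for the LARS exercise above, here is a rough, untested sketch of a LARS-style stepper built from the classes defined in this notebook; the trust-ratio formula and the default values are our simplification of the paper, not a faithful reimplementation.
```python
def lars_step(p, lr, grad_avg, wd, eps, **kwargs):
    # Scale the (momentum-averaged) gradient by the layer-wise trust ratio ||w|| / ||g||,
    # so weight statistics (the norm of p) enter the update, not just gradient statistics.
    g = grad_avg + wd * p.data
    trust = p.data.norm() / (g.norm() + eps)
    p.data.add_(-lr * float(trust), g)
    return p
lars_step._defaults = dict(eps=1e-8, wd=0.)

lars = partial(StatefulOptimizer, steppers=[lars_step], stats=AverageGrad())
```
With `stats=AverageGrad()` the step uses plain (undampened) momentum; swapping in the dampened version or adding `StepCount` for debiasing would follow the same pattern as Adam above.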
| be8fa88458a11b71471debefc6a26f772ace5685 | 398,263 | ipynb | Jupyter Notebook | _notebooks/2020-04-17-optimizers.ipynb | Cedric-Perauer/DL_from_Foundations | c53722216a088cc9f67a2e1bf955d043023e6a85 | [
"Apache-2.0"
]
| null | null | null | _notebooks/2020-04-17-optimizers.ipynb | Cedric-Perauer/DL_from_Foundations | c53722216a088cc9f67a2e1bf955d043023e6a85 | [
"Apache-2.0"
]
| 3 | 2020-11-25T23:40:14.000Z | 2022-02-26T07:00:58.000Z | _notebooks/2020-04-17-optimizers.ipynb | Cedric-Perauer/DL_from_Foundations | c53722216a088cc9f67a2e1bf955d043023e6a85 | [
"Apache-2.0"
]
| null | null | null | 276.956189 | 110,180 | 0.925062 | true | 6,856 | Qwen/Qwen-72B | 1. YES
2. YES | 0.763484 | 0.763484 | 0.582907 | __label__eng_Latn | 0.877503 | 0.192619 |
```python
import pandas as pd
import sympy as sym
sym.init_printing()
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import control
```
# Tarea: LGDR
**Nombre completo**:
*Fulano de Tal*
Considere el proceso modelado por
$$G_p(s) = \frac{5}{s(s+4)(s+10)}$$
Suponga que los polos dominantes en lazo cerrado están en $-3 \pm j3$
1. Diseñe un compensador en adelanto que cumpla con el requerimiento.
```python
# código necesario para su desarrollo
```
Respuestas
2.Diseñe un compensador en atraso que disminuya a la décima parte el error de estado estable ante una entrada rampa.
```python
# código necesario para su desarrollo
```
Respuestas
3. Analice y compare el LGDR del proceso con el del sistema controlado con el compensador en adelanto y el sistema controlado con el compensador en adelanto-atraso.
```python
# código necesario para su desarrollo
```
Respuestas
4. Analice y compare la respuesta temporal del proceso con el del sistema controlado con el compensador en adelanto y el sistema controlado con el compensador en adelanto-atraso.
```python
# código necesario para su desarrollo
```
Respuestas
5. Conclusiones
- conclusión 1.
- conclusión 2.
| 37ef9d3edb12df956f4e0a12270b9be8a9c281bf | 3,757 | ipynb | Jupyter Notebook | LGDR-ejercicio.ipynb | pierrediazp/Control | 2a185eff5b5dc84045115009e62296174d072220 | [
"MIT"
]
| null | null | null | LGDR-ejercicio.ipynb | pierrediazp/Control | 2a185eff5b5dc84045115009e62296174d072220 | [
"MIT"
]
| null | null | null | LGDR-ejercicio.ipynb | pierrediazp/Control | 2a185eff5b5dc84045115009e62296174d072220 | [
"MIT"
]
| 1 | 2021-11-18T13:08:36.000Z | 2021-11-18T13:08:36.000Z | 19.670157 | 185 | 0.531275 | true | 324 | Qwen/Qwen-72B | 1. YES
2. YES | 0.826712 | 0.826712 | 0.683452 | __label__spa_Latn | 0.995398 | 0.42622 |
**Brief Honor Code**. Do the homework on your own. You may discuss ideas with your classmates, but DO NOT copy the solutions from someone else or the Internet. If stuck, discuss with TA.
**1**. (20 points)
Consider the linear transformation $f(x)$ on $\mathbb{R}^3$ that takes the standard basis $\left\{e_1,e_2,e_3\right\}$ to $\left\{v_1,v_2,v_3\right\}$ where
$$v_1=\left(\begin{matrix}10\\-10\\16\end{matrix}\right), v_2=\left(\begin{matrix}2\\-5\\20\end{matrix}\right) \textrm {and } v_3=\left(\begin{matrix}1\\-4\\13\end{matrix}\right)$$
1. Write a matrix $A$ that represents the same linear transformation. (4 points)
2. Compute the rank of $A$ using two different methods (do not use `matrix_rank`!). (4 points)
3. Find the eigenvalues and eigenvectors of $A$. (4 points)
4. What is the matrix representation of $f$ with respect to the eigenbasis? (8 points)
```python
import numpy as np
A = np.array([[10,2,1],
[-10,-5,-4],
[16,20,13]])
```
```python
#Method 1: SVD
U,S,V = np.linalg.svd(A)
np.count_nonzero(S)
```
3
```python
#Method 2: LU
import scipy.linalg as la
P, L, U = la.lu(A)
np.count_nonzero(np.diag(U))
```
3
```python
e, v = la.eig(A)
print('eigenvalues:',e)
print('eigenvectors:',v)
```
eigenvalues: [9.+0.j 3.+0.j 6.+0.j]
eigenvectors: [[ 5.77350269e-01 6.31950024e-16 -1.20385853e-01]
[-5.77350269e-01 -4.47213595e-01 -2.40771706e-01]
[ 5.77350269e-01 8.94427191e-01 9.63086825e-01]]
```python
la.inv(v) @ A @ v
```
array([[ 9.00000000e+00, 6.77250238e-15, 6.22152043e-15],
[ 2.08417700e-14, 3.00000000e+00, -6.87338599e-15],
[-5.19475027e-15, 3.56964333e-15, 6.00000000e+00]])
**2**. (20 points)
You are given the following x-y coordinates (first column is x, second is y)
```
array([[ 0. , 4.12306991],
[ 3. , -15.47355729],
[ 4. , -11.68725507],
[ 3. , -20.33756693],
[ 5. , -6.06401989],
[ 6. , 32.79353057],
[ 8. , 82.48658405],
[ 9. , 84.02971858],
[ 4. , -1.30587276],
[ 8. , 68.59409878]])
```
- Find the coefficients $(a, b, c)$ of the least-squares fit of a quadratic function $y = a + bx + cx^2$ to the data.
- Plot the data and fitted curve using `matplotlib`.
```python
xs = np.array([
[ 0. , 4.12306991],
[ 3. , -15.47355729],
[ 4. , -11.68725507],
[ 3. , -20.33756693],
[ 5. , -6.06401989],
[ 6. , 32.79353057],
[ 8. , 82.48658405],
[ 9. , 84.02971858],
[ 4. , -1.30587276],
[ 8. , 68.59409878]])
```
```python
x_sq = xs[:,0]**2
intercept = np.ones(xs.shape[0])
X = np.c_[intercept,xs[:,0],x_sq]
y = xs[:,1]
res = la.lstsq(X,y)
res
```
(array([ -0.35762896, -11.78531232, 2.53125199]),
842.0494779002936,
3,
array([132.86424774, 4.91317461, 0.97590399]))
```python
import matplotlib.pyplot as plt
x = np.linspace(-0, 9, 10000)
y_fit = res[0][0]+res[0][1]*x+res[0][2]*(x**2)
plt.scatter(xs[:,0],y,c='b',alpha=0.2)
plt.plot(x,y_fit,'r-',lw=2)
pass
```
**3**. (20 points)
Use the `svd` function to solve the least squares problem above, and repeat the same plot. Calculate the residual error $\lvert y - X\beta \rvert$.
```python
U,S,VT = la.svd(X)
m = np.zeros((U.shape[1]-len(S),len(S)))
s = np.r_[np.diag(S),m]
res2 = VT.T@np.linalg.pinv(s)@U.T@y
print(res2)
#plot
x = np.linspace(-0, 9, 10000)
y_fit = res2[0]+res2[1]*x+res2[2]*(x**2)
plt.scatter(xs[:,0],y,c='b',alpha=0.2)
plt.plot(x,y_fit,'r-',lw=2)
pass
```
```python
np.sqrt(np.sum((y-X@res2)**2))
```
29.018088805093502
```python
la.norm(y-X@res2)
```
29.018088805093505
**4**. (20 points)
Avoiding catastrophic cancellation.
Read the Wikipedia entry on [loss of significance](https://en.wikipedia.org/wiki/Loss_of_significance). Then answer the following problem:
The tail of the standard logistic distribution is given by $1 - F(t) = 1 - (1+e^{-t})^{-1}$.
- Define a function `f1` to calculate the tail probability of the logistic distribution using the formula given above
- Use [`sympy`](http://docs.sympy.org/latest/index.html) to find the exact value of the tail distribution (using the same symbolic formula) to 20 decimal digits
- Calculate the *relative error* of `f1` when $t = 25$ (The relative error is given by `abs(exact - approximate)/exact`)
- Rewrite the expression for the tail of the logistic distribution using simple algebra so that there is no risk of cancellation, and write a function `f2` using this formula. Calculate the *relative error* of `f2` when $t = 25$.
- How much more accurate is `f2` compared with `f1` in terms of the relative error?
```python
def f1(t):
return 1-1/(1+np.exp(-t))
```
```python
import sympy as sp
def f(t):
tail = 1-1/(1+sp.exp(-t))
tail = tail.evalf(20)
return tail
```
```python
exact = f(25)
appro = f1(25)
res1 = np.abs((exact-appro)/exact)
res1
```
4.1759147665982646285e-6
```python
def f2(t):
return np.exp(-t)/(1+np.exp(-t))
appro = f2(25)
res2 = np.abs((exact-appro)/exact)
res2
```
2.3111252748218700406e-18
```python
print('f2 is',res1/res2,'times more accurate than f1.')
```
f2 is 1806875123599.7465438 times more accurate than f1.
**5**. (20 points)
Read in `figs/elephant.jpg` as a gray-scale image. The image has $1066 \times 1600$ values. Using SVD, recreate the image with a relative error of less than 0.5%. What is the relative size of the compressed image as a percentage?
```python
from skimage import io
img = io.imread('figs/elephant.jpg', as_grey=True)
plt.imshow(img,cmap='gray')
plt.xticks([])
plt.yticks([])
pass
```
```python
U,S,VT = np.linalg.svd(img)
variance = S**2
#the first variance whose cumsum is larger than 99.5% of sum
idx = np.where(variance.cumsum()>=(variance.sum()*0.995))[0][0]
new_S = S[:idx+1]
r = new_S.shape[0]
m = np.diag(new_S)
new_img = U[:,:r]@m@VT[:r,:]
plt.imshow(new_img,cmap='Greys_r')
plt.xticks([])
plt.yticks([])
pass
```
```python
'''
For the compressed image, we only need to store U, S and VT. S only contains diagonal values so its size is r.
'''
(U.shape[0]*r+r+VT.shape[0]*r)/(1066*1600)
```
0.032837124765478426
| 2df9c9b1bdc1470237f627d1921a3f235d7a2019 | 218,952 | ipynb | Jupyter Notebook | labs/Lab05-Zhechang Yang.ipynb | ZhechangYang/STA663 | 0dcf48e3e7a2d1f698b15e84946e44344b8153f5 | [
"BSD-3-Clause"
]
| null | null | null | labs/Lab05-Zhechang Yang.ipynb | ZhechangYang/STA663 | 0dcf48e3e7a2d1f698b15e84946e44344b8153f5 | [
"BSD-3-Clause"
]
| null | null | null | labs/Lab05-Zhechang Yang.ipynb | ZhechangYang/STA663 | 0dcf48e3e7a2d1f698b15e84946e44344b8153f5 | [
"BSD-3-Clause"
]
| null | null | null | 410.022472 | 112,356 | 0.935963 | true | 2,257 | Qwen/Qwen-72B | 1. YES
2. YES | 0.800692 | 0.819893 | 0.656482 | __label__eng_Latn | 0.767596 | 0.363559 |
```python
import sympy as sp
import warnings
warnings.filterwarnings('ignore')
sp.init_printing()
```
```python
N = sp.Symbol("N")
x = sp.IndexedBase("x")
y = sp.IndexedBase("y")
z = sp.IndexedBase("z")
i = sp.Symbol("i")
j = sp.Symbol("j")
k = sp.Symbol("k")
dx = x[i] - x[j]
dy = y[i] - y[j]
dz = z[i] - z[j]
r = sp.sqrt(dx**2 + dy**2 + dz**2)
eps = sp.Symbol("ε")
sig = sp.Symbol("σ")
```
```python
#energy = (1/2)*sp.Sum(sp.Sum(4 * eps * ((sig/r)**12 - (sig/r)**6), (i, 1, N)), (j, 1, N))
# It is easier to work with just the summand
energy = 4 * eps * ((sig/r)**12 - (sig/r)**6)
```
```python
energy
```
$\displaystyle 4 ε \left(\frac{σ^{12}}{\left(\left({x}_{i} - {x}_{j}\right)^{2} + \left({y}_{i} - {y}_{j}\right)^{2} + \left({z}_{i} - {z}_{j}\right)^{2}\right)^{6}} - \frac{σ^{6}}{\left(\left({x}_{i} - {x}_{j}\right)^{2} + \left({y}_{i} - {y}_{j}\right)^{2} + \left({z}_{i} - {z}_{j}\right)^{2}\right)^{3}}\right)$
```python
r.diff(x[k])
```
$\displaystyle \frac{\left(2 \delta_{i k} - 2 \delta_{j k}\right) \left({x}_{i} - {x}_{j}\right)}{2 \sqrt{\left({x}_{i} - {x}_{j}\right)^{2} + \left({y}_{i} - {y}_{j}\right)^{2} + \left({z}_{i} - {z}_{j}\right)^{2}}}$
```python
energy.diff(x[k])
```
$\displaystyle 4 ε \left(- \frac{6 σ^{12} \left(2 \delta_{i k} - 2 \delta_{j k}\right) \left({x}_{i} - {x}_{j}\right)}{\left(\left({x}_{i} - {x}_{j}\right)^{2} + \left({y}_{i} - {y}_{j}\right)^{2} + \left({z}_{i} - {z}_{j}\right)^{2}\right)^{7}} + \frac{3 σ^{6} \left(2 \delta_{i k} - 2 \delta_{j k}\right) \left({x}_{i} - {x}_{j}\right)}{\left(\left({x}_{i} - {x}_{j}\right)^{2} + \left({y}_{i} - {y}_{j}\right)^{2} + \left({z}_{i} - {z}_{j}\right)^{2}\right)^{4}}\right)$
```python
energy.diff(y[k])
```
$\displaystyle 4 ε \left(- \frac{6 σ^{12} \left(2 \delta_{i k} - 2 \delta_{j k}\right) \left({y}_{i} - {y}_{j}\right)}{\left(\left({x}_{i} - {x}_{j}\right)^{2} + \left({y}_{i} - {y}_{j}\right)^{2} + \left({z}_{i} - {z}_{j}\right)^{2}\right)^{7}} + \frac{3 σ^{6} \left(2 \delta_{i k} - 2 \delta_{j k}\right) \left({y}_{i} - {y}_{j}\right)}{\left(\left({x}_{i} - {x}_{j}\right)^{2} + \left({y}_{i} - {y}_{j}\right)^{2} + \left({z}_{i} - {z}_{j}\right)^{2}\right)^{4}}\right)$
```python
energy.diff(z[k])
```
$\displaystyle 4 ε \left(- \frac{6 σ^{12} \left(2 \delta_{i k} - 2 \delta_{j k}\right) \left({z}_{i} - {z}_{j}\right)}{\left(\left({x}_{i} - {x}_{j}\right)^{2} + \left({y}_{i} - {y}_{j}\right)^{2} + \left({z}_{i} - {z}_{j}\right)^{2}\right)^{7}} + \frac{3 σ^{6} \left(2 \delta_{i k} - 2 \delta_{j k}\right) \left({z}_{i} - {z}_{j}\right)}{\left(\left({x}_{i} - {x}_{j}\right)^{2} + \left({y}_{i} - {y}_{j}\right)^{2} + \left({z}_{i} - {z}_{j}\right)^{2}\right)^{4}}\right)$
```python
import sympy as sp
import warnings
warnings.filterwarnings('ignore')
sp.init_printing()
eps = sp.Symbol("ε")
sig = sp.Symbol("σ")
rad = sp.Symbol("r")
energyRad = 4 * eps * ((sig/rad)**12 - (sig/rad)**6)
energyRad.diff(rad)
```
$\displaystyle 4 ε \left(\frac{6 σ^{6}}{r^{7}} - \frac{12 σ^{12}}{r^{13}}\right)$
```python
```
```python
```
| 9b400d2269634088f0b7876bb91485f1d7f46657 | 10,544 | ipynb | Jupyter Notebook | AJupyter/energyDirivative.ipynb | cmoser8892/MoleDymCode | 9077289a670c6cb0ed9e1daac5a03b51c83bc6fb | [
"MIT"
]
| null | null | null | AJupyter/energyDirivative.ipynb | cmoser8892/MoleDymCode | 9077289a670c6cb0ed9e1daac5a03b51c83bc6fb | [
"MIT"
]
| null | null | null | AJupyter/energyDirivative.ipynb | cmoser8892/MoleDymCode | 9077289a670c6cb0ed9e1daac5a03b51c83bc6fb | [
"MIT"
]
| null | null | null | 35.986348 | 515 | 0.300929 | true | 1,333 | Qwen/Qwen-72B | 1. YES
2. YES | 0.944995 | 0.833325 | 0.787487 | __label__yue_Hant | 0.065119 | 0.667929 |
# Linear elasticity equation
## Partial differential equation
We write the elastic deformation problem for a body with geometric domain $\Omega$ as
\begin{equation}
\rho \ddot{\boldsymbol{u}} + \rho \eta \dot{\boldsymbol{u}} - \boldsymbol{\nabla}\cdot\boldsymbol{\sigma} = \boldsymbol{f}\hbox{ in }\Omega,
\end{equation}
where $\boldsymbol{\sigma}$ is the *stress tensor*, $\boldsymbol{f}$ is the *body force per unit volume* and $\boldsymbol{u}$ is the *displacement vector*. Time derivatives are denoted here with a dot over the variable.
Restricting the problem to isotropic materials, we can define the stress tensor and its relation to the strains as
\begin{align}
\boldsymbol{\sigma} &= \lambda\,\hbox{tr}\,(\boldsymbol{\varepsilon}) \boldsymbol{I} + 2\mu\boldsymbol{\varepsilon}, \\
\boldsymbol{\varepsilon} &= \frac{1}{2}\left(\boldsymbol{\nabla} \boldsymbol{u} + (\boldsymbol{\nabla} \boldsymbol{u})^{\top}\right),
\end{align}
where $\boldsymbol{\varepsilon}$ is the *symmetric strain tensor*, $\boldsymbol{I}$ denotes the *identity tensor*, $\mathrm{tr}$ denotes the *trace* of a tensor, $\lambda$ and $\mu$
are the *Lamé parameters* (material properties) and $\rho$ is the density.
Combining the definitions of stress and strain, we can write the stress as
\begin{equation}
\boldsymbol{\boldsymbol{\sigma}} = \lambda(\boldsymbol{\nabla}\cdot \boldsymbol{u})\boldsymbol{I} + \mu(\boldsymbol{\nabla} \boldsymbol{u} + (\boldsymbol{\nabla} \boldsymbol{u})^{\top})
\end{equation}
## Variational formulation
We obtain the variational formulation of the elastic deformation problem by integrating over the domain $\Omega$ the inner product between the equation and a vector test function $\boldsymbol{v}\in \hat{V}$, where $\hat{V}$ is the test function space:
\begin{equation}
\int_\Omega \rho \ddot{\boldsymbol{u}} \cdot \boldsymbol{v}\ \mathrm{d}\boldsymbol{x} + \int_\Omega \rho \eta \dot{\boldsymbol{u}} \cdot \boldsymbol{v}\ \mathrm{d}\boldsymbol{x} -\int_\Omega (\boldsymbol{\nabla}\cdot\boldsymbol{\sigma}) \cdot \boldsymbol{v}\ \mathrm{d}\boldsymbol{x} = \int_\Omega \boldsymbol{f}\cdot \boldsymbol{v}\ \mathrm{d}\boldsymbol{x},
\end{equation}
Since $\boldsymbol{\nabla}\cdot\boldsymbol{\sigma}$ contains second-order derivatives of $\boldsymbol{u}$, we weaken this part of the equation by integrating by parts:
\begin{equation}
-\int_\Omega (\boldsymbol{\nabla}\cdot\boldsymbol{\sigma}) \cdot \boldsymbol{v} \ \mathrm{d}\boldsymbol{x}
= \int_\Omega \boldsymbol{\sigma} : \boldsymbol{\nabla} \boldsymbol{v} \ \mathrm{d}\boldsymbol{x} - \int_{\partial\Omega} (\boldsymbol{\sigma}\cdot \boldsymbol{n})\cdot \boldsymbol{v} \ \mathrm{d}\boldsymbol{s},
\end{equation}
where $\boldsymbol{n}$ is the outward normal vector on the boundary.
$\boldsymbol{\sigma}\cdot \boldsymbol{n}$ is known as the *traction* (the stress on the boundary) and can be used to prescribe a Neumann or Robin boundary condition of the form $\boldsymbol{\sigma}\cdot \boldsymbol{n} = \boldsymbol{T}$.
Combining the two previous equations, we can write the variational problem as
\begin{equation}
\int_\Omega \rho \ddot{\boldsymbol{u}} \cdot \boldsymbol{v}\ \mathrm{d}\boldsymbol{x} + \int_\Omega \rho \eta \dot{\boldsymbol{u}} \cdot \boldsymbol{v}\ \mathrm{d}\boldsymbol{x} + \int_\Omega \boldsymbol{\sigma} : \boldsymbol{\nabla} \boldsymbol{v}\ \mathrm{d}\boldsymbol{x} =
\int_\Omega \boldsymbol{f}\cdot \boldsymbol{v}\ \mathrm{d}\boldsymbol{x} + \int_{\partial\Omega} \boldsymbol{T}\cdot \boldsymbol{v}\ \mathrm{d}\boldsymbol{s}.
\end{equation}
Using the definition of $\boldsymbol{\sigma}$, we can then write the variational equation with $\boldsymbol{u}$ as the unknown.
### Symmetric part of $\boldsymbol{\nabla} \boldsymbol{v}$
Since the product of a symmetric tensor with an anti-symmetric one is zero, and knowing that the stress tensor $\boldsymbol{\sigma}$ is symmetric, we can replace $\boldsymbol{\nabla} \boldsymbol{v}$ in the previous equation by its symmetric part $\boldsymbol{\epsilon}(\boldsymbol{v})$, which gives
\begin{equation}
\int_\Omega \rho \ddot{\boldsymbol{u}} \cdot \boldsymbol{v}\ \mathrm{d}\boldsymbol{x} + \int_\Omega \rho \eta \dot{\boldsymbol{u}} \cdot \boldsymbol{v}\ \mathrm{d}\boldsymbol{x} + \int_\Omega \boldsymbol{\sigma} : \boldsymbol{\epsilon}(\boldsymbol{v})\ \mathrm{d}\boldsymbol{x} = \int_\Omega \boldsymbol{f}\cdot \boldsymbol{v}\ \mathrm{d}\boldsymbol{x} + \int_{\partial\Omega} \boldsymbol{T}\cdot \boldsymbol{v}\ \mathrm{d}\boldsymbol{s}
\end{equation}
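A quick NumPy check (not part of the original notebook) of the claim that a symmetric tensor double-contracted with an antisymmetric one vanishes, which is what justifies the replacement above:
```python
import numpy as np

A = np.random.rand(3, 3)
S = 0.5 * (A + A.T)        # symmetric part
W = 0.5 * (A - A.T)        # antisymmetric part
print(np.tensordot(S, W))  # double contraction S_ij W_ij, zero up to round-off
```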
### Setting the boundary conditions
Now let us consider how to enforce the boundary conditions.
For boundaries with Dirichlet conditions, we apply the conditions strongly. For these points, the test functions associated with the Dirichlet nodes vanish.
For traction boundary conditions, we apply the variational form.
Similarly to what was done for the Poisson equation, we require that the test function space corresponding to
the functions $\boldsymbol{v}$ vanish along $\partial\Omega$ at those points.
Then, the boundary integral above has no effect for the points on
$\partial\Omega \setminus \partial\Omega_T$.
### Variational form
In summary, the variational problem is the following: find $\boldsymbol{u}$ in a vector function space $\hat{V}$ such that
\begin{equation}
m(\boldsymbol{u},\boldsymbol{v}) + c(\boldsymbol{u},\boldsymbol{v}) + a(\boldsymbol{u},\boldsymbol{v}) = L(\boldsymbol{v})\quad\forall \boldsymbol{v}\in\hat{V},
\end{equation}
where
\begin{align}
m(\boldsymbol{u},\boldsymbol{v}) &= \int_\Omega \rho \ddot{\boldsymbol{u}} \cdot \boldsymbol{v}\ \mathrm{d}\boldsymbol{x},\\
c(\boldsymbol{u},\boldsymbol{v}) &= \int_\Omega \rho \eta \dot{\boldsymbol{u}} \cdot \boldsymbol{v}\ \mathrm{d}\boldsymbol{x},\\
a(\boldsymbol{u},\boldsymbol{v}) &= \int_\Omega\sigma(\boldsymbol{u}) :\varepsilon(\boldsymbol{v})\ \mathrm{d}\boldsymbol{x},\\
L(\boldsymbol{v}) &= \int_\Omega \boldsymbol{f}\cdot \boldsymbol{v}\ \mathrm{d}\boldsymbol{x} + \int_{\partial\Omega_T} \boldsymbol{T}\cdot \boldsymbol{v}\ \mathrm{d}\boldsymbol{s},
\end{align}
## FEniCS implementation
We model a 3D bar clamped at one end with a load applied at the other end. We also include the weight of the bar.
The load and the weight can easily be modeled by adding to the right-hand side of the equation the term $\boldsymbol{f}=(0,-\rho g + s,0)$, where $g$ is the magnitude of the gravitational acceleration and $s(t,x)$ is an arbitrary load distribution. The bar is a parallelepiped of length $L$ with cross section $W \times H$. We fix the displacement at the clamped end ($x=0$) to $\boldsymbol{u}=(0,0,0)$, and leave the rest of the bar traction-free, $\boldsymbol{T} = 0$. Then, the right-hand side of the variational form is
$$L(\boldsymbol{v}) = \int_\Omega \boldsymbol{f}\cdot \boldsymbol{v} \mathrm{d}\boldsymbol{x}$$
### Importing packages
We import the solver package (**dolfin**), the meshing package (**mshr**) and the numerical package (**numpy**) for numerical operations during processing and post-processing, as well as **matplotlib** to inspect the results directly while the program runs.
```python
from dolfin import *
from mshr import *
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm
```
### Geometric domain, properties, mesh and subdomains
We define the computational domain $\Omega$, set the material properties, generate the mesh and create subdomains to apply the Dirichlet boundary condition and the load $s(t)$.
```python
# Dominio geometrico y malla
L = 1; W = 0.1; H =0.04
mesh = BoxMesh(Point(0, 0, 0), Point(L, W, H), 60, 10, 5)
x = SpatialCoordinate(mesh)
# Propiedades del material
# Constantes de Lame
E = 1000.0
nu = 0.3
mu = Constant(E / (2.0*(1.0 + nu)))
lmbda = Constant(E*nu / ((1.0 + nu)*(1.0 - 2.0*nu)))
# Densidad
rho = Constant(1.0)
# Coeficiente de amortiguacion
eta = Constant(0.1)
#Subdominios
# Izquierda
def left(x, on_boundary):
return near(x[0], 0.) and on_boundary
# Derecha
def right(x, on_boundary):
return near(x[0], 1.) and on_boundary
```
### Building the function spaces on the mesh
Once the mesh is created, we can create a finite element function space $V$ on this discrete geometric domain:
```python
# Espacio de funciones para desplazamiento, velocidad y aceleraciones
V = VectorFunctionSpace(mesh, "Lagrange", 2)
# Espacio de funciones para esfuerzos
Vsig = TensorFunctionSpace(mesh, "DG", 0)
```
In this case, the function space is vector-valued. This can be understood as an array of scalar function spaces: one for each spatial component of the deformation of the elastic solid.
We use a Lagrange-type element ("Lagrange") with second-order interpolation (matching the degree 2 used in the code above). Additionally, we create a discontinuous (DG) function space with which we will represent the stresses in the post-processing.
### Definición del problema variacional
La incógnita principal de este problema es un campo vectorial $ \boldsymbol{u} $ y no un campo escalar. Por lo que necesitamos trabajar con un espacio funcional vectorial.
```python
u = TrialFunction(V)
v = TestFunction(V)
```
Con `u = TrialFunction(V)` definimos `u` como un espacio funcional de elementos finitos vectorial con tres componentes: uno para cada dimensión de este problema tridimensional.
### Formas variacionales como funciones
Definimos las formas variacionales $m(\boldsymbol{u},\boldsymbol{v})$, $c(\boldsymbol{u},\boldsymbol{v})$, $a(\boldsymbol{u},\boldsymbol{v})$ y $L(\boldsymbol{v})$, la funcion para calcular los esfuerzos $\boldsymbol{\sigma}(\boldsymbol{u})$ (`sigma`) y una función de proyecccion local para representar el tensor de esfuerzos en el postproceso.
Todas estas funciones las definimos como funciones de python, usando las funciones intrinsecas de FEniCS (`inner`, `grad`, `sym`).
```python
# Tensor de esfuerzos
def sigma(u):
return 2.0*mu*sym(grad(u)) + lmbda*tr(sym(grad(u)))*Identity(len(u))
# Matriz de masa
def mmat(u,v):
return rho*inner(u, v)*dx
# Matriz de rigidez elastica
def kmat(u, v):
return inner(sigma(u), sym(grad(v)))*dx
# Amortiguacion de Rayleigh
def cmat(u, u_):
return eta*mmat(u, v)
# Proyeccion local
def local_project(v, V, u=None):
dv = TrialFunction(V)
v_ = TestFunction(V)
a_proj = inner(dv, v_)*dx
b_proj = inner(v, v_)*dx
solver = LocalSolver(a_proj, b_proj)
solver.factorize()
if u is None:
u = Function(V)
solver.solve_local_rhs(u)
return u
else:
solver.solve_local_rhs(u)
return
```
### Definición de condiciones de contorno
Especificamos la condición de contorno de Dirichlet $u=(0,0,0)$ en el subdominio izquierdo (usando la funcion `DirichletBC`) y definimos el area donde aplicamos la carga $s(t)$ (lado derecho).
```python
# Condicion de contorno izquierda (Dirichlet)
zero = Constant((0.0, 0.0, 0.0))
bc = DirichletBC(V, zero, left)
# Condicion de contorno derecha
boundary_subdomains = MeshFunction("size_t", mesh, mesh.topology().dim() - 1)
boundary_subdomains.set_all(0)
force_boundary = AutoSubDomain(right)
force_boundary.mark(boundary_subdomains, 3)
dss = ds(subdomain_data=boundary_subdomains)
```
### Definicion de las fuerzas de cuerpo
Definimos la fuerza de cuerpo $\boldsymbol{f}$ compuesta por el peso de la barra $\rho g$ y la función temporal en el lado derecho $s(t)$
```python
# Definicion de las cargas
k = 0.5
s0 = 1.
s = Expression(("0", "s0*sin(k*pi*t)","0"), t=0, k=k, s0=s0, degree=0)
g = Constant((0.,-1,0.))
```
### Integrador temporal
Usamos el método de Newmark como integrador temporal para las derivadas temporales del problema. Creamos funciones para actualizar los campos de desplazamiento, velocidad y aceleracion en cada paso de tiempo. Tambien definimos campos vectoriales `u_old`, `v_old` y `a_old`, que vamos a usar en la integracion temporal.
Tambien definimos el tiempo total de simulación y el paso de tiempo.
```python
T = 10.0 # tiempo total
dt = 0.5 # paso de tiempo
Nt = int(T/dt)
ti = np.linspace(0, T, Nt)
# Campos para el paso de tiempo anterior
u_old = Function(V, name="Desplazamiento")
v_old = Function(V, name="Velocidad")
a_old = Function(V, name="Aceleracion")
# Metodo de Newmark (punto medio)
gamma = Constant(0.5)
beta = Constant((gamma+0.5)**2/4.)
# Aceleracion
# a = 1/(2*beta)*((u - u0 - v0*dt)/(0.5*dt*dt) - (1-2*beta)*a0)
def update_a(u, u_old, v_old, a_old, ufl=True):
if ufl:
dt_ = dt
beta_ = beta
else:
dt_ = float(dt)
beta_ = float(beta)
return (u-u_old-dt_*v_old)/beta_/dt_**2 - (1-2*beta_)/2/beta_*a_old
# Velocidad
# v = dt * ((1-gamma)*a0 + gamma*a) + v0
def update_v(a, u_old, v_old, a_old, ufl=True):
if ufl:
dt_ = dt
gamma_ = gamma
else:
dt_ = float(dt)
gamma_ = float(gamma)
return v_old + dt_*((1-gamma_)*a_old + gamma_*a)
# Actualizar campos
def update_fields(u, u_old, v_old, a_old):
"""Update fields at the end of each time step."""
# Vectores de referencia
u_vec, u0_vec = u.vector(), u_old.vector()
v0_vec, a0_vec = v_old.vector(), a_old.vector()
# Actualizacion de velocidad y aceleracion
a_vec = update_a(u_vec, u0_vec, v0_vec, a0_vec, ufl=False)
v_vec = update_v(a_vec, u0_vec, v0_vec, a0_vec, ufl=False)
# Actualizacion de desplazamiento
v_old.vector()[:], a_old.vector()[:] = v_vec, a_vec
u_old.vector()[:] = u.vector()
```
### Definicion del problema variacional
Usando todos los ingredientes anteriores, escribimos el problema variacional y exprezamos el problema en forma matricial $\boldsymbol{K} \boldsymbol{u} = \boldsymbol{b}$
```python
# Ecuacion de elasticidad en forma residual
a_new = update_a(u, u_old, v_old, a_old, ufl=True)
v_new = update_v(a_new, u_old, v_old, a_old, ufl=True)
res = mmat(a_new,v) + cmat(v_new,v) \
+ kmat(u,v) - rho*dot(s,v)*dss(3) - dot(rho*g,v)*dx
# Ensamble del sistema de ecuaciones
B_form = lhs(res)
L_form = rhs(res)
K, b = assemble_system(B_form, L_form, bc)
```
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
### Arrays de postproceso e impresion de datos
Creamos arrays de numpy para guardar algunos resultados y abrimos el archivo de **XDMF** donde vamos a almacenar los datos de postproceso
```python
# Inicializacion de variables de salida
u_tip = np.zeros((Nt,)) # desplazamiento en lado derecho de la barra
E_damp = 0
energia = np.zeros((Nt, 2))
# Creacion de archivos de salida (ParaView)
xdmf_file = XDMFFile("bar.xdmf")
xdmf_file.parameters["flush_output"] = True
xdmf_file.parameters["functions_share_mesh"] = True
xdmf_file.parameters["rewrite_function_mesh"] = False
```
## Solución del problema
En esta sección finalmente resolvemos el problema. Iniciamos definiendo nuestro campo vectorial para la solución `u` y uno donde calcularemos el tensor de esfuerzos `sig`.
Usamos el ciclo `for` para iterar sobre todos los pasos de tiempo, y en este bucle evaluamos la fuerza $s(t)$ en el paso de tiempo actual, ensamblamos la forma $L(v)$ y solucionamos el problema.
Finalmente, actualizamos los campos de desplazamiento, velocidad, aceleración y esfuerzo, y guardamos los datos necesarios para el postproceso.
```python
u = Function(V, name="Desplazamiento")
sig = Function(Vsig, name="sigma")
# Propiedades del solver
solver = LUSolver(K, "mumps")
solver.parameters["symmetric"] = True
# Bucle temporal
for i in tqdm(range(Nt),desc='Time loop'):
t = ti[i]
# Fuerza evaluada en t_{n+1}=t_{n+1}-dt
s.t = t
# Solucionar desplazamientos
b = assemble(L_form)
bc.apply(b)
solver.solve(K, u.vector(), b)
# Actualizar campos
update_fields(u, u_old, v_old, a_old)
# Calcular esfuerzos
local_project(sigma(u), Vsig, sig)
# Guardar solucion (vizualizacion)
xdmf_file.write(u_old, t)
xdmf_file.write(v_old, t)
xdmf_file.write(a_old, t)
xdmf_file.write(sig, t)
# Guardar solucion (desplazamiento y energia)
u_tip[i] = u(1., 0.05, 0.02)[1]
E_elas = assemble(0.5*kmat(u_old, u_old))
E_kin = assemble(0.5*mmat(v_old, v_old))
energia[i, :] = np.array([E_elas, E_kin])
```
Time loop: 100%|██████████| 20/20 [01:31<00:00, 4.59s/it]
## Post-proceso
### Visualización y postprocesamiento interno de la solución
Usando el array `u_tip`, podemos graficar el desplazamiento en el lado derecho de la barra
```python
# Imprimir desplazamiento en x=(1,0.05,0.02)
plt.figure()
plt.plot(ti, u_tip)
plt.xlabel("Time")
plt.ylabel("Desplazamiento")
plt.savefig('images/tip.png',format='png',dpi=400)
plt.show()
```
Usando las soluciones de desplazamiento `u_old` y velocidad `v_old`, podemos definir la energia elastica como $E_e = \int_\Omega \frac{1}{2} \boldsymbol{\sigma}(\boldsymbol{u}):\boldsymbol{\epsilon}(\boldsymbol{u}) dx$ y la energia cinetica como $E_k = \int_\Omega \frac{1}{2} \rho \dot{\boldsymbol{u}} \cdot\dot{ \boldsymbol{u}} dx$
```python
# Imprimir energia
plt.figure()
plt.plot(ti, energia)
plt.legend(("elastica", "cinetica", "amortiguacion"))
plt.xlabel("Time")
plt.ylabel("Energia")
plt.savefig('images/energy.png',format='png',dpi=400)
plt.show()
```
**Reconocimiento**: Este cuaderno fue adaptado de [The FEniCS Tutorial Volume I] (https://fenicsproject.org/pub/tutorial/sphinx1/) elaborado por Hans Petter Langtangen y Anders Logg, publicado bajo licencia CC Attribution 4.0.
| 7a12e06209db6ee7af99740862ee095da0d03df0 | 74,373 | ipynb | Jupyter Notebook | elasticidad_lineal.ipynb | ciaid-colombia/Taller-FEniCS | 629b26c4a756d1017e6b409e5ed700498a9edacd | [
"MIT"
]
| null | null | null | elasticidad_lineal.ipynb | ciaid-colombia/Taller-FEniCS | 629b26c4a756d1017e6b409e5ed700498a9edacd | [
"MIT"
]
| null | null | null | elasticidad_lineal.ipynb | ciaid-colombia/Taller-FEniCS | 629b26c4a756d1017e6b409e5ed700498a9edacd | [
"MIT"
]
| 1 | 2021-04-24T14:08:38.000Z | 2021-04-24T14:08:38.000Z | 112.345921 | 27,712 | 0.835868 | true | 5,511 | Qwen/Qwen-72B | 1. YES
2. YES | 0.930458 | 0.863392 | 0.80335 | __label__spa_Latn | 0.845447 | 0.704784 |
# Regression model—sound exposure level
This notebook explores and models the data collected from recordings of the natural acoustic environment over the urban-rural gradient near Innsbruck, Austria. The models are implemented as Bayesian models with the PyMC3 probabilistic programming library.
References:<br />
https://github.com/fonnesbeck/multilevel_modeling<br />
Gelman, A., & Hill, J. (2006). Data Analysis Using Regression and Multilevel/Hierarchical Models (1st ed.). Cambridge University Press.
#### Import statements
```python
import warnings
warnings.filterwarnings('ignore')
```
```python
import pandas
import numpy
from os import path
```
```python
%matplotlib inline
from matplotlib import pyplot
from matplotlib.patches import Rectangle
import seaborn
```
```python
from pymc3 import glm, Model, NUTS, sample, stats, \
forestplot, traceplot, plot_posterior, summary, \
Normal, Uniform, Deterministic, StudentT
from pymc3.backends import SQLite
```
#### Plot settings
```python
from matplotlib import rcParams
```
```python
rcParams['font.sans-serif']
```
['DejaVu Sans',
'Bitstream Vera Sans',
'Computer Modern Sans Serif',
'Lucida Grande',
'Verdana',
'Geneva',
'Lucid',
'Arial',
'Helvetica',
'Avant Garde',
'sans-serif']
```python
rcParams['font.sans-serif'] = ['Helvetica',
'Arial',
'Bitstream Vera Sans',
'DejaVu Sans',
'Lucida Grande',
'Verdana',
'Geneva',
'Lucid',
'Avant Garde',
'sans-serif']
```
#### Variable definitions
```python
data_filepath = "/Users/Jake/OneDrive/Documents/alpine soundscapes/data/dataset.csv"
```
```python
trace_output_path = "/Users/Jake/OneDrive/Documents/alpine soundscapes/data/model traces/sel"
```
```python
seaborn_blue = seaborn.color_palette()[0]
```
## Load data
```python
data = pandas.read_csv(data_filepath)
data = data.loc[data.site<=30]
```
sort data by site and then by visit
```python
data_sorted = data.sort_values(by=['site', 'sound']).reset_index(drop=True)
```
transform variables (mean center)
```python
column_list = ['sel', 'sel_anthrophony', 'sel_biophony', 'biophony', 'week',
'building_50m', 'pavement_50m', 'forest_50m', 'field_50m',
'building_100m', 'pavement_100m', 'forest_100m', 'field_100m',
'building_200m', 'pavement_200m', 'forest_200m', 'field_200m',
'building_500m', 'pavement_500m', 'forest_500m', 'field_500m',
'd2n_50m', 'd2n_100m', 'd2n_200m', 'd2n_500m',
'temperature', 'wind_speed', 'pressure', 'bus_stop',
'construction', 'crossing', 'cycleway', 'elevator', 'escape', 'footway',
'living_street', 'motorway', 'motorway_link', 'path', 'pedestrian',
'platform', 'primary_road', 'primary_link', 'proposed', 'residential',
'rest_area', 'secondary', 'secondary_link', 'service', 'services',
'steps', 'tertiary', 'tertiary_link', 'track', 'unclassified', 'combo']
data_centered = data_sorted.copy()
for column in column_list:
data_centered[column] = data_sorted[column] - data_sorted[column].mean()
```
create sites variable for PyMC3 models
```python
sites = numpy.copy(data_sorted.site.values) - 1
```
## Model 0 - emtpy model
$$
\begin{align}
y_{ts} \sim \mathcal{N}(\alpha_s + \epsilon_t, \sigma_y^2) \\
\alpha_s \sim \mathcal{N}(M + \epsilon_s, \sigma_\alpha^2) \\
\end{align}
$$
```python
with Model() as model0:
# Priors
mu_grand = Normal('mu_grand', mu=0., tau=0.0001)
sigma_a = Uniform('sigma_a', lower=0, upper=100)
tau_a = sigma_a**-2
# Random intercepts
a = Normal('a', mu=mu_grand, tau=tau_a, shape=len(set(sites)))
# Model error
sigma_y = Uniform('sigma_y', lower=0, upper=100)
tau_y = sigma_y**-2
# Expected value
y_hat = a[sites]
# Data likelihood
y_like = Normal('y_like', mu=y_hat, tau=tau_y, observed=data_centered.sel)
# sample model
backend = SQLite(path.join(trace_output_path, "model0.sqlite"))
model0_samples = sample(draws=10000, step=NUTS(), random_seed=1, trace=backend)
```
100%|██████████| 10500/10500 [00:20<00:00, 511.43it/s]
```python
fig, ax = pyplot.subplots()
# organize results
model0_data = pandas.DataFrame({'site': data_sorted.site.unique(),
'site_name': data_sorted.site_name.unique()}).set_index('site')
model0_data['forest_200m'] = data.groupby('site')['forest_200m'].mean()
model0_data['quantiles'] = [stats.quantiles(model0_samples.a[:5000, i]) for i in range(len(set(sites)))]
# plot quantiles
for i, row in model0_data.sort_values(by='forest_200m').iterrows():
x = row['forest_200m']
ax.plot([x, x], [row['quantiles'][2.5], row['quantiles'][97.5]], color='black', linewidth=0.5)
ax.plot([x, x], [row['quantiles'][25], row['quantiles'][75]], color='black', linewidth=1)
ax.scatter([x], [row['quantiles'][50]], color='black', marker='o')
# format plot
l1 = ax.set_xlim([0, 100])
xl = ax.set_xlabel("forest land cover within 200 meters (percent area)")
yl = ax.set_ylabel("SEL (difference from grand mean)")
```
```python
fig, ax = pyplot.subplots()
# organize results
model0_data = pandas.DataFrame({'site': data_sorted.site.unique(),
'site_name': data_sorted.site_name.unique()}).set_index('site')
model0_data['d2n_200m'] = data.groupby('site')['d2n_200m'].mean()
model0_data['quantiles'] = [stats.quantiles(model0_samples.a[:5000, i]) for i in range(len(set(sites)))]
# plot quantiles
for i, row in model0_data.sort_values(by='d2n_200m').iterrows():
x = row['d2n_200m']
ax.plot([x, x], [row['quantiles'][2.5], row['quantiles'][97.5]], color='black', linewidth=0.5)
ax.plot([x, x], [row['quantiles'][25], row['quantiles'][75]], color='black', linewidth=1)
ax.scatter([x], [row['quantiles'][50]], color='black', marker='o')
# format plot
l1 = ax.set_xlim([0, 100])
xl = ax.set_xlabel("d2n within 200 meters (percent area)")
yl = ax.set_ylabel("SEL (difference from grand mean)")
```
## Model 1—time and site predictors
$$
\begin{align}
\text{level 1} \\
y_{ts} \sim \mathcal{N}(\alpha_s + \beta_s T_t, \sigma_y^2) \\
\text{level 2} \\
\alpha_s \sim \mathcal{N}(\gamma_\alpha + \gamma_{\alpha s} L_s, \sigma_\alpha^2) \\
\beta_s \sim \mathcal{N}(\gamma_\beta + \gamma_{\beta s} L_s, \sigma_\beta^2) \\
\end{align}
$$
```python
site_predictors = [
# 'building_50m', 'pavement_50m', 'forest_50m', 'field_50m',
# 'building_100m', 'pavement_100m', 'forest_100m', 'field_100m',
# 'building_200m', 'pavement_200m', 'forest_200m', 'field_200m',
# 'building_500m', 'pavement_500m', 'forest_500m', 'field_500m',
'd2n_50m', 'd2n_100m', 'd2n_200m', 'd2n_500m',
]
for predictor in site_predictors:
with Model() as model_1:
# intercept
g_a = Normal('g_a', mu=0, tau=0.001)
g_as = Normal('g_as', mu=0, tau=0.001)
sigma_a = Uniform('sigma_a', lower=0, upper=100)
tau_a = sigma_a**-2
mu_a = g_a + (g_as * data_centered.groupby('site')[predictor].mean())
a = Normal('a', mu=mu_a, tau=tau_a, shape=len(set(sites)))
# slope
g_b = Normal('g_b', mu=0, tau=0.001)
g_bs = Normal('g_bs', mu=0, tau=0.001)
sigma_b = Uniform('sigma_b', lower=0, upper=100)
tau_b = sigma_b**-2
mu_b = g_b + (g_bs * data_centered.groupby('site')[predictor].mean())
b = Normal('b', mu=mu_b, tau=tau_b, shape=len(set(sites)))
# model error (data-level)
sigma_y = Uniform('sigma_y', lower=0, upper=100)
tau_y = sigma_y**-2
# expected values
y_hat = a[sites] + (b[sites] * data_centered.week)
# likelihood
y_like = Normal('y_like', mu=y_hat, tau=tau_y, observed=data_centered.sel)
# simulated
#y_sim = Normal('y_sim', mu=y_hat, tau=tau_y, shape=y_hat.tag.test_value.shape)
# sample model
backend = SQLite(path.join(trace_output_path, "model1_{}.sqlite".format(predictor)))
model_1_samples = sample(draws=10000, step=NUTS(), random_seed=1, trace=backend)
```
100%|██████████| 10500/10500 [02:37<00:00, 66.46it/s]
100%|██████████| 10500/10500 [04:49<00:00, 36.27it/s]
100%|██████████| 10500/10500 [01:18<00:00, 133.27it/s]
100%|██████████| 10500/10500 [01:18<00:00, 134.29it/s]
```python
fig = pyplot.figure()
fig.set_figwidth(6.85)
fig.set_figheight(6.85/2)
ax_a = pyplot.subplot2grid((1, 2), (0, 0), rowspan=1, colspan=1)
ax_b = pyplot.subplot2grid((1, 2), (0, 1), rowspan=1, colspan=1, sharex=ax_a)
fig.subplots_adjust(left=0, bottom=0, right=1, top=1)
# organize results
model_1_data = pandas.DataFrame({'site': data_sorted.site.unique(),
'site_name': data_sorted.site_name.unique()})
model_1_data['forest_200m'] = data_sorted.forest_200m.unique()
model_1_data['quantiles_a'] = [stats.quantiles(model_1_samples['a'][:5000][:, i]) for i in range(len(set(sites)))]
model_1_data['quantiles_b'] = [stats.quantiles(model_1_samples['b'][:5000][:, i]) for i in range(len(set(sites)))]
# plot quantiles
for i, row in model_1_data.sort_values(by='forest_200m').iterrows():
x = row['forest_200m']
#ax_a.plot([x, x], [row['quantiles_a'][2.5], row['quantiles_a'][97.5]], color='black', linewidth=0.5)
ax_a.plot([x, x], [row['quantiles_a'][25], row['quantiles_a'][75]], color='black', linewidth=1)
ax_a.scatter([x], [row['quantiles_a'][50]], color='black', marker='o')
# format plot
l1 = ax_a.set_xlim([0, 100])
xl = ax_a.set_xlabel("forest land cover within 200 meters (percent area)")
yl = ax_a.set_ylabel("sel (decibel difference from grand mean)")
# plot quantiles
for i, row in model_1_data.sort_values(by='forest_200m').iterrows():
x = row['forest_200m']
#ax_b.plot([x, x], [row['quantiles_b'][2.5], row['quantiles_b'][97.5]], color='black', linewidth=0.5)
ax_b.plot([x, x], [row['quantiles_b'][25], row['quantiles_b'][75]], color='black', linewidth=1)
ax_b.scatter([x], [row['quantiles_b'][50]], color='black', marker='o')
# format plot
l1 = ax_b.set_xlim([0, 100])
l2 = ax_b.set_ylim((-2, 2))
xl = ax_b.set_xlabel("forest land cover within 200 meters (percent area)")
yl = ax_b.set_ylabel("rate of change of sel (dB/week)")
```
```python
fig = pyplot.figure()
fig.set_figwidth(6.85)
fig.set_figheight(6.85/2)
ax_a = pyplot.subplot2grid((1, 2), (0, 0), rowspan=1, colspan=1)
ax_b = pyplot.subplot2grid((1, 2), (0, 1), rowspan=1, colspan=1, sharex=ax_a)
fig.subplots_adjust(left=0, bottom=0, right=1, top=1)
# organize results
model_1_data = pandas.DataFrame({'site': data_sorted.site.unique(),
'site_name': data_sorted.site_name.unique()})
model_1_data['d2n_500m'] = data_sorted.d2n_500m.unique()
model_1_data['quantiles_a'] = [stats.quantiles(model_1_samples['a'][:5000][:, i]) for i in range(len(set(sites)))]
model_1_data['quantiles_b'] = [stats.quantiles(model_1_samples['b'][:5000][:, i]) for i in range(len(set(sites)))]
# plot quantiles
for i, row in model_1_data.sort_values(by='d2n_500m').iterrows():
x = row['d2n_500m']
#ax_a.plot([x, x], [row['quantiles_a'][2.5], row['quantiles_a'][97.5]], color='black', linewidth=0.5)
ax_a.plot([x, x], [row['quantiles_a'][25], row['quantiles_a'][75]], color='black', linewidth=1)
ax_a.scatter([x], [row['quantiles_a'][50]], color='black', marker='o')
# format plot
l1 = ax_a.set_xlim([0, 0.6])
xl = ax_a.set_xlabel("d2n within 500 meters (percent area)")
yl = ax_a.set_ylabel("sel (decibel difference from grand mean)")
# plot quantiles
for i, row in model_1_data.sort_values(by='d2n_500m').iterrows():
x = row['d2n_500m']
#ax_b.plot([x, x], [row['quantiles_b'][2.5], row['quantiles_b'][97.5]], color='black', linewidth=0.5)
ax_b.plot([x, x], [row['quantiles_b'][25], row['quantiles_b'][75]], color='black', linewidth=1)
ax_b.scatter([x], [row['quantiles_b'][50]], color='black', marker='o')
# format plot
l1 = ax_b.set_xlim([0, 0.6])
l2 = ax_b.set_ylim((-2, 2))
xl = ax_b.set_xlabel("d2n within 500 meters (percent area)")
yl = ax_b.set_ylabel("rate of change of sel (dB/week)")
```
## Model 2—environmental predictors
$$
\begin{align}
\text{level 1} \\
y_{ts} \sim \mathcal{N}(\alpha_s + \beta_s T_t, \sigma_y^2) \\
\text{level 2} \\
\alpha_s \sim \mathcal{N}(\gamma_\alpha + \gamma_{\alpha s} L_s, \sigma_\alpha^2) \\
\beta_s \sim \mathcal{N}(\gamma_\beta + \gamma_{\beta s} L_s, \sigma_\beta^2) \\
\end{align}
$$
```python
measurement_predictors = [
'temperature', 'wind_speed', 'precipitation', 'pressure',
]
for predictor in measurement_predictors:
with Model() as model2a:
# intercept
g_a = Normal('g_a', mu=0, tau=0.001)
g_as = Normal('g_as', mu=0, tau=0.001)
sigma_a = Uniform('sigma_a', lower=0, upper=100)
tau_a = sigma_a**-2
mu_a = g_a + (g_as * data_centered.groupby('site')['forest_200m'].mean())
a = Normal('a', mu=mu_a, tau=tau_a, shape=len(set(sites)))
# time slope
g_b = Normal('g_b', mu=0, tau=0.001)
g_bs = Normal('g_bs', mu=0, tau=0.001)
sigma_b = Uniform('sigma_b', lower=0, upper=100)
tau_b = sigma_b**-2
mu_b = g_b + (g_bs * data_centered.groupby('site')['forest_200m'].mean())
b = Normal('b', mu=mu_b, tau=tau_b, shape=len(set(sites)))
# temp slope
#g_c = Normal('g_c', mu=0, tau=0.001)
#g_cs = Normal('g_cs', mu=0, tau=0.001)
#sigma_c = Uniform('sigma_c', lower=0, upper=100)
#tau_c = sigma_c**-2
#mu_c = g_c + (g_cs * data_centered.groupby('site')['forest_200m'].mean())
#c = Normal('c', mu=mu_c, tau=tau_c, shape=len(set(sites)))
c = Uniform('c', lower=-100, upper=100)
# model error (data-level)
sigma_y = Uniform('sigma_y', lower=0, upper=100)
tau_y = sigma_y**-2
# expected values
y_hat = a[sites] + (b[sites] * data_centered.week) + (c * data_centered[predictor])
# likelihood
y_like = Normal('y_like', mu=y_hat, tau=tau_y, observed=data_centered.sel)
# simulated
#y_sim = Normal('y_sim', mu=y_hat, tau=tau_y, shape=y_hat.tag.test_value.shape)
# sample model
backend = SQLite(path.join(trace_output_path, "model2a_{0}.sqlite".format(predictor)))
model_2_samples = sample(draws=10000, step=NUTS(), random_seed=1, trace=backend)
```
100%|██████████| 10000/10000 [08:59<00:00, 18.53it/s]
100%|██████████| 10000/10000 [08:49<00:00, 18.90it/s]
100%|██████████| 10000/10000 [11:56<00:00, 44.45it/s]
100%|██████████| 10000/10000 [17:43<00:00, 9.41it/s]
```python
measurement_predictors = [
'temperature', 'wind_speed', 'precipitation', 'pressure',
]
for predictor in measurement_predictors:
with Model() as model2b:
# intercept
g_a = Normal('g_a', mu=0, tau=0.001)
g_as = Normal('g_as', mu=0, tau=0.001)
sigma_a = Uniform('sigma_a', lower=0, upper=100)
tau_a = sigma_a**-2
mu_a = g_a + (g_as * data_centered.groupby('site')['forest_200m'].mean())
a = Normal('a', mu=mu_a, tau=tau_a, shape=len(set(sites)))
# time slope
g_b = Normal('g_b', mu=0, tau=0.001)
g_bs = Normal('g_bs', mu=0, tau=0.001)
sigma_b = Uniform('sigma_b', lower=0, upper=100)
tau_b = sigma_b**-2
mu_b = g_b + (g_bs * data_centered.groupby('site')['forest_200m'].mean())
b = Normal('b', mu=mu_b, tau=tau_b, shape=len(set(sites)))
# predictor slope
g_c = Normal('g_c', mu=0, tau=0.001)
g_cs = Normal('g_cs', mu=0, tau=0.001)
sigma_c = Uniform('sigma_c', lower=0, upper=100)
tau_c = sigma_c**-2
mu_c = g_c + (g_cs * data_centered.groupby('site')['forest_200m'].mean())
c = Normal('c', mu=mu_c, tau=tau_c, shape=len(set(sites)))
# model error (data-level)
sigma_y = Uniform('sigma_y', lower=0, upper=100)
tau_y = sigma_y**-2
# expected values
y_hat = a[sites] + (b[sites] * data_centered.week) + (c[sites] * data_centered[predictor])
# likelihood
y_like = Normal('y_like', mu=y_hat, tau=tau_y, observed=data_centered.sel)
# simulated
#y_sim = Normal('y_sim', mu=y_hat, tau=tau_y, shape=y_hat.tag.test_value.shape)
# sample model
backend = SQLite(path.join(trace_output_path, "model2b_{0}.sqlite".format(predictor)))
model_2_samples = sample(draws=10000, step=NUTS(), random_seed=1, trace=backend)
```
INFO (theano.gof.compilelock): Waiting for existing lock by process '29643' (I am process '29703')
INFO (theano.gof.compilelock): To manually release the lock, delete /Users/Jake/.theano/compiledir_Darwin-16.6.0-x86_64-i386-64bit-i386-3.5.2-64/lock_dir
100%|██████████| 10000/10000 [18:10<00:00, 9.17it/s]
100%|██████████| 10000/10000 [17:21<00:00, 9.60it/s]
100%|██████████| 10000/10000 [1:17:41<00:00, 2.60it/s]
100%|██████████| 10000/10000 [19:32<00:00, 8.53it/s]
```python
fig, ax = pyplot.subplots()
# organize results
model_2_data = pandas.DataFrame({'site': data_sorted.site.unique(),
'site_name': data_sorted.site_name.unique()})
model_2_data['forest_200m'] = data_sorted.forest_200m.unique()
model_2_data['quantiles'] = [stats.quantiles(model_2_samples['c'][:1000][:, i]) for i in range(len(set(sites)))]
# plot quantiles
for i, row in model_2_data.sort_values(by='forest_200m').iterrows():
x = row['forest_200m']
ax.plot([x, x], [row['quantiles'][2.5], row['quantiles'][97.5]], color='black', linewidth=0.5)
ax.plot([x, x], [row['quantiles'][25], row['quantiles'][75]], color='black', linewidth=1)
ax.scatter([x], [row['quantiles'][50]], color='black', marker='o')
# format plot
l1 = ax.set_xlim([0, 100])
xl = ax.set_xlabel("forest land cover within 200 meters (percent area)")
yl = ax.set_ylabel("percent biophony (difference from grand mean)")
```
| 2d132696c124fac0780d337ded83e86499738ab1 | 123,329 | ipynb | Jupyter Notebook | Regression model - sel.ipynb | jacobdein/alpine-soundscapes | 32db33d55167b5da8107c746dbe95e82d8039a3d | [
"MIT"
]
| null | null | null | Regression model - sel.ipynb | jacobdein/alpine-soundscapes | 32db33d55167b5da8107c746dbe95e82d8039a3d | [
"MIT"
]
| null | null | null | Regression model - sel.ipynb | jacobdein/alpine-soundscapes | 32db33d55167b5da8107c746dbe95e82d8039a3d | [
"MIT"
]
| null | null | null | 153.203727 | 33,406 | 0.866657 | true | 5,786 | Qwen/Qwen-72B | 1. YES
2. YES | 0.880797 | 0.749087 | 0.659794 | __label__eng_Latn | 0.22249 | 0.371253 |
```python
import sympy
import argparse
import numpy as np
import pickle
import sys
import os
from pathlib import Path
import matplotlib.pyplot as plt
import equations
import data
from derivative import dxdt
from gplearn.genetic import SymbolicRegressor
from utils import generator
```
```python
from pathlib import Path
def load_results(path_base: Path, x_id: int = 0, seed_s: int = 0, seed_e: int = 1):
res_list = []
for s in range(seed_s, seed_e):
if x_id == 0:
path = path_base / f"grad_seed_{s}.pkl"
else:
path = path_base / f"grad_x_{x_id}_seed_{s}.pkl"
try:
with open(path, "rb") as f:
res = pickle.load(f)
res_list.append(res)
except FileNotFoundError:
pass
return res_list
```
```python
ode_name = "GompertzODE"
noise_sigma = 0.09
n_sample = 50
freq = 10
x_id = 0
seed_s = 0
seed_e = 1
path_base = Path(f"results/{ode_name}/noise-{noise_sigma}/sample-{n_sample}/freq-{freq}/")
res_list = load_results(path_base, x_id = x_id, seed_s = seed_s, seed_e = seed_e)
len(res_list)
```
```python
path_base = Path(f"results_vi/{ode_name}/noise-{noise_sigma}/sample-{n_sample}/freq-{freq}/")
res_list_vi = load_results(path_base, x_id = x_id, seed_s = seed_s, seed_e = seed_e)
len(res_list_vi)
```
```python
T, B, D = res_list[0]["dg"].xt.shape
```
```python
b = 2
x_true = res_list[0]["dg"].xt[:, b, 0]
x_noise = res_list[0]["dg"].yt[:, b, 0]
x_hat = res_list_vi[0]["ode_data"]["x_hat"][:, b, 0]
t = res_list[0]["dg"].solver.t
```
```python
y_hat = res_list[0]["y_train"].reshape(T - 1, B)[:, b]
y_hat_spline = dxdt(x_noise, t, kind="spline", s=0.012)[:-1]
y_hat_direct = dxdt(x_noise, t, kind="spline", s=0.005)[:-1]
y_true = res_list[0]["ode"]._dx_dt(x_true)[0][:-1]
```
```python
```
```python
ode_name = "GompertzODE"
noise_sigma = 0.02
n_sample = 50
freq = 2
x_id = 0
seed_s = 0
seed_e = 1
path_base = Path(f"results/{ode_name}/noise-{noise_sigma}/sample-{n_sample}/freq-{freq}/")
res_list2 = load_results(path_base, x_id = x_id, seed_s = seed_s, seed_e = seed_e)
path_base = Path(f"results_vi/{ode_name}/noise-{noise_sigma}/sample-{n_sample}/freq-{freq}/")
res_list2_vi = load_results(path_base, x_id = x_id, seed_s = seed_s, seed_e = seed_e)
```
```python
T, B, D = res_list2[0]["dg"].xt.shape
b = 2
x_true2 = res_list2[0]["dg"].xt[:, b, 0]
x_noise2 = res_list2[0]["dg"].yt[:, b, 0]
x_hat2 = res_list2_vi[0]["ode_data"]["x_hat"][:, b, 0]
t2 = res_list2[0]["dg"].solver.t
y_hat2 = res_list2[0]["y_train"].reshape(T - 1, B)[:, b]
y_hat_spline2 = dxdt(x_noise2, t2, kind="spline", s=0.001)
y_hat_direct2 = dxdt(x_noise2, t2, kind="spline", s=0.00)
y_true2 = res_list2[0]["ode"]._dx_dt(x_true2)[0][:-1]
```
```python
plt.figure(figsize=(12, 2.5))
plt.style.use("tableau-colorblind10")
colors = plt.rcParams["axes.prop_cycle"].by_key()["color"]
plt.rcParams["font.size"] = "13"
plt.subplot(131)
plt.fill_between(
res_list_vi[0]["t_new"], x_hat - 0.01, x_hat + 0.02, alpha=0.2, color=colors[3]
)
plt.plot(t, x_noise, "o", ms=4, label=r"$Y(t)$", color=colors[1])
plt.plot(t, x_true, label=r"$X(t)$", color=colors[0])
plt.plot(
res_list_vi[0]["t_new"][::2],
x_hat[::2],
"-",
ms=4,
label=r"$\hat{X}(t)$",
color=colors[3],
)
# plt.ylabel('Trajectory', fontsize=14)
plt.xlabel(r"Time $t$", fontsize=14)
plt.title("(A) Trajectory")
plt.legend(fontsize=12)
plt.subplot(132)
plt.plot(t[:-1], y_true, label=r"$\dot{X}(t)$ True")
plt.plot(t[:-1], y_hat_direct, "o-", ms=4, label=r"${\dot{X}}(t)$ TV", color=colors[8])
# plt.ylabel('Derivative', fontsize=14)
plt.yticks([0.0, 0.2, 0.4])
plt.xlabel(r"Time $t$", fontsize=14)
plt.ylim((-0.25, 0.52))
plt.text(s="High noise", x=2.5, y=0.3, color="black", fontsize=14)
plt.title("(B) Derivative")
plt.subplot(133)
plt.plot(t[:-1], y_true, label="${\dot{X}}(t)$")
plt.plot(t[:-1], y_hat, "o-", ms=4, label=r"SR-T", color=colors[5])
plt.plot(t[:-1], y_hat_spline, "o-", ms=4, label=r"SR-S", color=colors[6])
# plt.ylabel('Derivative', fontsize=14)
plt.yticks([0.0, 0.2, 0.4])
plt.xlabel(r"Time $t$", fontsize=14)
plt.ylim((-0.25, 0.52))
# plt.axvline(x=0, linestyle='--', color='black')
# plt.axvline(x=1.5, linestyle='--', color='black')
plt.text(s="Bias", x=1.0, y=0.05, color="black", fontsize=14)
plt.arrow(
1.2, 0.15, 0.0, 0.15, head_length=0.05, head_width=0.15, length_includes_head=True
)
plt.legend(fontsize=12)
plt.title("(C) Estimated Derivative")
plt.tight_layout(pad=0.2)
plt.savefig(fname="Gompertz_plot.png", dpi=200)
```
```python
```
```python
```
```python
```
```python
```
| bef423c86e664cb13bdd67af800a193eb8011cf2 | 8,313 | ipynb | Jupyter Notebook | Fig3.ipynb | vanderschaarlab/D-CODE-ICLR-2022 | 556c84ea1f0fda2399ef47842afe20b724085605 | [
"BSD-3-Clause"
]
| 7 | 2022-03-04T07:46:55.000Z | 2022-03-13T17:24:57.000Z | Fig3.ipynb | ZhaozhiQIAN/D-CODE-ICLR-2022 | 556c84ea1f0fda2399ef47842afe20b724085605 | [
"BSD-3-Clause"
]
| null | null | null | Fig3.ipynb | ZhaozhiQIAN/D-CODE-ICLR-2022 | 556c84ea1f0fda2399ef47842afe20b724085605 | [
"BSD-3-Clause"
]
| null | null | null | 27.61794 | 104 | 0.50415 | true | 1,629 | Qwen/Qwen-72B | 1. YES
2. YES | 0.833325 | 0.743168 | 0.6193 | __label__eng_Latn | 0.218298 | 0.277172 |
# Part 1 - Scalars and Vectors
For the questions below it is not sufficient to simply provide answer to the questions, but you must solve the problems and show your work using python (the NumPy library will help a lot!) Translate the vectors and matrices into their appropriate python representations and use numpy or functions that you write yourself to demonstrate the result or property.
```
# importing the libraries used in class
import matplotlib.pyplot as plt
import math
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
```
## 1.1 Create a two-dimensional vector and plot it on a graph
```
# Done in class
# plt.arrow(x,y,dx,dy) draws an arrow from (x,y) to (dx,dy)
# xlim and ylim is range of the borders of the graph, title sets the title
twod_vector = [0.5,0.5]
plt.arrow(0,0,twod_vector[0],twod_vector[1],head_width=0.02,head_length=0.02,color="green")
plt.xlim(0,1)
plt.ylim(0,1)
plt.title("Two Dimensional Vector")
plt.show()
```
## 1.2 Create a three-dimensional vecor and plot it on a graph
```
# Done in class
vector_3d = [0.6, 0.8, 0.3]
vector_3d_v = np.array([[0,0,0,0.6,0.8,0.3]])
# From what I understand, quiver creates a vector from point (X,Y,Z) to (U,V,W) so in this case the shown vector is from (0,0,0) to (0.6,0.8,0.3)
# The line below makes X, Y, Z, U, V, W into 1 element tuples with the corresponding value in the array, don't know why "X, Y, Z, U, V, W = 0,0,0,0.6,0.8,0.3" wasn't used instead
X, Y, Z, U, V, W = zip((*vector_3d_v))
fig = plt.figure()
ax = fig.add_subplot(111,projection='3d')
ax.quiver(X,Y,Z,U,V,W,length=1)
ax.set_xlim([0,1])
ax.set_ylim([0,1])
ax.set_zlim([0,1])
ax.set_xlabel("X")
ax.set_ylabel("Y")
ax.set_zlabel("Z")
plt.show()
```
```
# First 3 lines of the above code block is replaced by the 1st line in this code block, same output
X, Y, Z, U, V, W = 0,0,0,0.6,0.8,0.3
fig = plt.figure()
ax = fig.add_subplot(111,projection='3d')
ax.quiver(X, Y, Z, U, V, W,length=1)
ax.set_xlim([0,1])
ax.set_ylim([0,1])
ax.set_zlim([0,1])
ax.set_xlabel("X")
ax.set_ylabel("Y")
ax.set_zlabel("Z")
plt.show()
```
## 1.3 Scale the vectors you created in 1.1 by $5$, $\pi$, and $-e$ and plot all four vectors (original + 3 scaled vectors) on a graph. What do you notice about these vectors?
```
# I use the math library to get e and pi.
# I need to plot them from biggest to smallest so the smallest ones don't get covered.
# These vectors are all in the same direction. Each of these vectors can be multiplied by a factor to be equal to any of these vectors.
plt.arrow(0,0,5*twod_vector[0],5*twod_vector[1],head_width=0.02,head_length=0.02,color="green")
plt.arrow(0,0,math.pi*twod_vector[0],math.pi*twod_vector[1],head_width=0.02,head_length=0.02,color="red")
plt.arrow(0,0,twod_vector[0],twod_vector[1],head_width=0.02,head_length=0.02,color="blue")
plt.arrow(0,0,-math.e*twod_vector[0],-math.e*twod_vector[1],head_width=0.02,head_length=0.02,color="yellow")
plt.xlim(-3,5)
plt.ylim(-3,5)
plt.title("Scaled Vectors")
plt.show()
```
## 1.4 Graph vectors $\vec{a}$ and $\vec{b}$ and plot them on a graph
\begin{align}
\vec{a} = \begin{bmatrix} 5 \\ 7 \end{bmatrix}
\qquad
\vec{b} = \begin{bmatrix} 3 \\4 \end{bmatrix}
\end{align}
```
# Similar to 1.1 except the vectors are given
a = [5,7]
b = [3,4]
plt.arrow(0,0,a[0],a[1],head_width=0.02,head_length=0.1,color="red",width=0.05)
plt.arrow(0,0,b[0],b[1],head_width=0.02,head_length=0.1,color="blue",width=0.05)
plt.xlim(0,8)
plt.ylim(0,8)
```
## 1.5 find $\vec{a} - \vec{b}$ and plot the result on the same graph as $\vec{a}$ and $\vec{b}$. Is there a relationship between vectors $\vec{a} \thinspace, \vec{b} \thinspace \text{and} \thinspace \vec{a-b}$
```
# The relationship between a,b, and a-b is that a-b+b = a. a-b = [2,3], if I add b to it, I'll get: a-b+b = [2+3,3+4] = [5,7] which is a
# Also, (a-b)+b = b+(a-b), they all end up at the same place [5,7]
plt.arrow(0,0,a[0],a[1],head_width=0.02,head_length=0.1,color="red",width=0.05)
plt.arrow(0,0,b[0],b[1],head_width=0.02,head_length=0.1,color="blue",width=0.05)
plt.arrow(0,0,a[0]-b[0],a[1]-b[1],head_width=0.02,head_length=0.1,color="purple",width=0.05)
plt.xlim(0,8)
plt.ylim(0,8)
plt.show()
```
## 1.6 Find $c \cdot d$
\begin{align}
\vec{c} = \begin{bmatrix}7 & 22 & 4 & 16\end{bmatrix}
\qquad
\vec{d} = \begin{bmatrix}12 & 6 & 2 & 9\end{bmatrix}
\end{align}
```
# dot product is the sum of the products of the corresponding entries
def dot(x,y): # x and y are lists with the same length, representing vectors
Dot = 0
for n in range(len(x)):
Dot+=x[n]*y[n]
return Dot
```
```
dot([7,22,4,16],[12,6,2,9])
```
368
```
#np.dot returns the dot product of 2 arrays
c = np.array([7,22,4,16])
d = np.array([12,6,2,9])
np.dot(c,d)
```
368
## 1.7 Find $e \times f$
\begin{align}
\vec{e} = \begin{bmatrix} 5 \\ 7 \\ 2 \end{bmatrix}
\qquad
\vec{f} = \begin{bmatrix} 3 \\4 \\ 6 \end{bmatrix}
\end{align}
```
# Cross Product
# if p = [a,b,c] and q = [x,y,z] p cross q would be [bz-cy,-(az-cx),ay-bx]
# these components are the determinants of the 2x2 matrix that excludes a column, for example: bz - cy is the determinant of the matrix below, it excludes the 1st column (a z)
# b c
# y z
# so if e = [5,7,2] and f = [3,4,6], e cross f would be [(7)(6)-(2)(4),-((5)(6)-(3)(2)), (5)(4)-(3)(7)] = [42 - 8, -(30 - 6), 20-21] = [34,-24,1]
```
```
def cross(x,y): # x and y are vectors with 3 dimensions (lists with 3 numbers) in this instance
a = x[1]*y[2] - x[2]*y[1]
b = -(x[0]*y[2] - y[0]*x[2])
c = x[0]*y[1] - x[1]*y[0]
return [a,b,c]
```
```
cross([5,7,2],[3,4,6])
```
[34, -24, -1]
```
# np.cross returns the cross product of 2 vectors
e = np.array([5,7,2])
f = np.array([3,4,6])
np.cross(e,f)
```
array([ 34, -24, -1])
## 1.8 Find $||g||$ and then find $||h||$. Which is longer?
\begin{align}
\vec{g} = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 8 \end{bmatrix}
\qquad
\vec{h} = \begin{bmatrix} 3 \\3 \\ 3 \\ 3 \end{bmatrix}
\end{align}
```
# ||x|| is the sqrt of the sum of squares of the values
# ||g|| = sqrt(1^2 + 1^2 + 1^2 + 8^2) = sqrt(1+1+1+64) = sqrt(67)
```
```
def mag(vec): #vec is vector which is a list
leng = 0
for n in vec:
leng+=n**2
return leng**0.5
```
```
print("||g|| is", mag([1,1,1,8])) #sqrt(67)
print("||h|| is", mag([3,3,3,3])) #sqrt(36)
```
||g|| is 8.18535277187245
||h|| is 6.0
```
# np.linalg.norm can also be used to figure out ||x||
g = np.array([1,1,1,8])
h = np.array([3,3,3,3])
print("||g|| is", np.linalg.norm(g))
print("||h|| is", np.linalg.norm(h))
```
||g|| is 8.18535277187245
||h|| is 6.0
# Part 2 - Matrices
## 2.1 What are the dimensions of the following matrices? Which of the following can be multiplied together? See if you can find all of the different legal combinations.
\begin{align}
A = \begin{bmatrix}
1 & 2 \\
3 & 4 \\
5 & 6
\end{bmatrix}
\qquad
B = \begin{bmatrix}
2 & 4 & 6 \\
\end{bmatrix}
\qquad
C = \begin{bmatrix}
9 & 6 & 3 \\
4 & 7 & 11
\end{bmatrix}
\qquad
D = \begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}
\qquad
E = \begin{bmatrix}
1 & 3 \\
5 & 7
\end{bmatrix}
\end{align}
```
# to multiply matrices, with dimensions rows by columns, the columns of 1st matrix = rows of 2nd matrix
# and the product has rows of 1st matrix by columns of 2nd matrix
# m*n matrix times n*p matrix equals m*p matrix
# Purpose of the code below is to answer the question "Which of the following [matrices] can be multiplied together?"
# Value is set to dimensions of corresponding Key matrix
# Using dictionary so I can print out the letters.
Dimdict = {"A":[3,2],
"B":[1,3],
"C":[2,3],
"D":[3,3],
"E":[2,2]}
# testings all 25 combinations, x[0] is rows, x[1] is columns,
for m in Dimdict:
for n in Dimdict:
if Dimdict[m][1]==Dimdict[n][0]:
print(f'{m}{n} is a legal combination. The dimensions are {Dimdict[m]} * {Dimdict[n]}.')
```
AC is a legal combination. The dimensions are [3, 2] * [2, 3].
AE is a legal combination. The dimensions are [3, 2] * [2, 2].
BA is a legal combination. The dimensions are [1, 3] * [3, 2].
BD is a legal combination. The dimensions are [1, 3] * [3, 3].
CA is a legal combination. The dimensions are [2, 3] * [3, 2].
CD is a legal combination. The dimensions are [2, 3] * [3, 3].
DA is a legal combination. The dimensions are [3, 3] * [3, 2].
DD is a legal combination. The dimensions are [3, 3] * [3, 3].
EC is a legal combination. The dimensions are [2, 2] * [2, 3].
EE is a legal combination. The dimensions are [2, 2] * [2, 2].
## 2.2 Find the following products: CD, AE, and BA. What are the dimensions of the resulting matrices? How does that relate to the dimensions of their factor matrices?
```
```
## 2.3 Find $F^{T}$. How are the numbers along the main diagonal (top left to bottom right) of the original matrix and its transpose related? What are the dimensions of $F$? What are the dimensions of $F^{T}$?
\begin{align}
F =
\begin{bmatrix}
20 & 19 & 18 & 17 \\
16 & 15 & 14 & 13 \\
12 & 11 & 10 & 9 \\
8 & 7 & 6 & 5 \\
4 & 3 & 2 & 1
\end{bmatrix}
\end{align}
# Part 3 - Square Matrices
## 3.1 Find $IG$ (be sure to show your work) 😃
You don't have to do anything crazy complicated here to show your work, just create the G matrix as specified below, and a corresponding 2x2 Identity matrix and then multiply them together to show the result. You don't need to write LaTeX or anything like that (unless you want to).
\begin{align}
G=
\begin{bmatrix}
13 & 14 \\
21 & 12
\end{bmatrix}
\end{align}
## 3.2 Find $|H|$ and then find $|J|$.
\begin{align}
H=
\begin{bmatrix}
12 & 11 \\
7 & 10
\end{bmatrix}
\qquad
J=
\begin{bmatrix}
0 & 1 & 2 \\
7 & 10 & 4 \\
3 & 2 & 0
\end{bmatrix}
\end{align}
## 3.3 Find $H^{-1}$ and then find $J^{-1}$
## 3.4 Find $HH^{-1}$ and then find $J^{-1}J$. Is $HH^{-1} == J^{-1}J$? Why or Why not?
Please ignore Python rounding errors. If necessary, format your output so that it rounds to 5 significant digits (the fifth decimal place).
# Go Beyond:
A reminder that these challenges are optional. If you finish your work quickly we welcome you to work on them. If there are other activities that you feel like will help your understanding of the above topics more, feel free to work on that. Topics from the Stretch Goals sections will never end up on Sprint Challenges. You don't have to do these in order, you don't have to do all of them.
- Write a function that can calculate the dot product of any two vectors of equal length that are passed to it.
- Write a function that can calculate the norm of any vector
- Prove to yourself again that the vectors in 1.9 are orthogonal by graphing them.
- Research how to plot a 3d graph with animations so that you can make the graph rotate (this will be easier in a local notebook than in google colab)
- Create and plot a matrix on a 2d graph.
- Create and plot a matrix on a 3d graph.
- Plot two vectors that are not collinear on a 2d graph. Calculate the determinant of the 2x2 matrix that these vectors form. How does this determinant relate to the graphical interpretation of the vectors?
| 65845051b4d175a32e81287f7be242c673ef1acd | 144,264 | ipynb | Jupyter Notebook | Copy_of_Vectors_and_Matricest.ipynb | dealom/Vectors-and-Matrices | 6edeb8b9d3e5852d2b63f08d17ac98ad1d731f33 | [
"MIT"
]
| null | null | null | Copy_of_Vectors_and_Matricest.ipynb | dealom/Vectors-and-Matrices | 6edeb8b9d3e5852d2b63f08d17ac98ad1d731f33 | [
"MIT"
]
| null | null | null | Copy_of_Vectors_and_Matricest.ipynb | dealom/Vectors-and-Matrices | 6edeb8b9d3e5852d2b63f08d17ac98ad1d731f33 | [
"MIT"
]
| null | null | null | 167.748837 | 37,982 | 0.876442 | true | 3,948 | Qwen/Qwen-72B | 1. YES
2. YES | 0.942507 | 0.919643 | 0.866769 | __label__eng_Latn | 0.984291 | 0.852129 |
# Intravoxel incoherent motion (IVIM) imaging
Intra-voxel incoherent motion (IVIM) is a 2-compartment model that separates diffusion signal contributions originating from blood flow and Brownian diffusion *(Le Bihan et al. 1988)*. The model consists of 2 Ball compartments (isotropic Gaussian), each fitting the blood flow and diffusion volume fractions and diffusivities, respectively. Changes in e.g. blood volume fraction has been linked to many pathologies such as the vasculature in tumor tissue *(Le Bihan 2018)*.
\begin{align}
E_{\textrm{IVIM}}= \underbrace{f_{\textrm{blood}}\overbrace{E_{\textrm{iso}}(\lambda_{\textrm{Blood}})}^{\textrm{Ball}}}_{\textrm{Blood}} + \underbrace{f_{\textrm{Diffusion}}\overbrace{E_{\textrm{iso}}(\cdot|\lambda_{\textrm{Diffusion}})}^{\textrm{Ball}}}_{\textrm{Diffusion}}
\end{align}
Because the apparent diffusivity of blood flow is much higher than that of Brownian motion, the optimization bounds for the diffusivities of the two Balls are disjoint; the diffusivies of the diffusion compartment range
between [0.5 - 6]e-3 $mm^2/s$ (results in more precise fit according to *(Gurney-Champion et al. 2016)*), and those of the blood compartment range between [6 - 20]e-3 $mm^2/s$ (following *(Park et al. 2017)*).
The separability of blood and diffusion signal hinges on the observation that the blood-flow signal is negligible at b-values above 200-400 s/mm^2, but it does have a constribution below that bvalue (and to the b0).
Many different optimization strategies have been proposed to fit the IVIM model *(Wong et al. 2018, Gurney-Champion et al. 2018)*, of which in this example we will use Dmipy to implement and fit two:
- Following *(Wong et al. 2018)*, a two-step optimization based on the approach that first fits the 'diffusion' diffusivity by fitting a single Ball compartment to the signal where all b-values below b=400$s/mm^2$ have been truncated. Fixing this initial diffusivity, the 2-compartment model is then fitted to the whole signal.
- Following *(Gurney-Champion et al. 2018)*, they found simply fixing $\lambda_{blood}=7e-9 mm^2/s$ results in the second-best IVIM fitting performance (after fancy Bayesian fitting).
We compare the second IVIM algorithm with the one available in Dipy, and evaluate/compare the fitted parameter maps and fitting errors.
## Implementing Fixed Dstar IVIM using Dmipy
The fixed D-star IVIM implementation is very simple. We set the blood diffusivity to 7e-9 $m^2/s$ and fit the model as usual.
We'll use the same example dataset and acquisition scheme that Dipy uses as well:
### Load IVIM acquisition scheme and data
```python
from dipy.data.fetcher import read_ivim
from dmipy.core.acquisition_scheme import gtab_dipy2dmipy, acquisition_scheme_from_bvalues
img, gtab = read_ivim()
scheme_ivim = gtab_dipy2dmipy(gtab, b0_threshold=1e6, min_b_shell_distance=1e6)
scheme_ivim.print_acquisition_info
data = img.get_data()
data_slice = data[90: 155, 90: 170, 33, :]
test_voxel = data_slice[0, 0]
```
Dataset is already in place. If you want to fetch it again please first remove the folder /home/rutger/.dipy/ivim
Acquisition scheme summary
total number of measurements: 21
number of b0 measurements: 1
number of DWI shells: 20
shell_index |# of DWIs |bvalue [s/mm^2] |gradient strength [mT/m] |delta [ms] |Delta[ms] |TE[ms]
0 |1 |0 |N/A |N/A |N/A |N/A
1 |1 |10 |N/A |N/A |N/A |N/A
2 |1 |20 |N/A |N/A |N/A |N/A
3 |1 |30 |N/A |N/A |N/A |N/A
4 |1 |40 |N/A |N/A |N/A |N/A
5 |1 |60 |N/A |N/A |N/A |N/A
6 |1 |80 |N/A |N/A |N/A |N/A
7 |1 |100 |N/A |N/A |N/A |N/A
8 |1 |120 |N/A |N/A |N/A |N/A
9 |1 |140 |N/A |N/A |N/A |N/A
10 |1 |160 |N/A |N/A |N/A |N/A
11 |1 |180 |N/A |N/A |N/A |N/A
12 |1 |200 |N/A |N/A |N/A |N/A
13 |1 |300 |N/A |N/A |N/A |N/A
14 |1 |400 |N/A |N/A |N/A |N/A
15 |1 |500 |N/A |N/A |N/A |N/A
16 |1 |600 |N/A |N/A |N/A |N/A
17 |1 |700 |N/A |N/A |N/A |N/A
18 |1 |800 |N/A |N/A |N/A |N/A
19 |1 |900 |N/A |N/A |N/A |N/A
20 |1 |1000 |N/A |N/A |N/A |N/A
/home/rutger/anaconda2/lib/python2.7/site-packages/dmipy-0.1.dev0-py2.7.egg/dmipy/core/acquisition_scheme.py:860: UserWarning: pulse_separation (big_delta) or pulse_duration (small_delta) are not defined in the Dipy gtab. This means the resulting DmipyAcquisitionScheme cannot be used with CompartmentModels that need these.
Notice that this scheme has 1 b-value per "shell" for different b-values.
The D*-Fixed IVIM implementation can be implemented as follows:
```python
from dmipy.core.modeling_framework import MultiCompartmentModel
from dmipy.signal_models.gaussian_models import G1Ball
ivim_mod = MultiCompartmentModel([G1Ball(), G1Ball()])
ivim_mod.set_fixed_parameter(
'G1Ball_2_lambda_iso', 7e-9) # Following Gurney-Champion 2016
ivim_mod.set_parameter_optimization_bounds(
'G1Ball_1_lambda_iso', [.5e-9, 6e-9]) # Following Gurney-Champion 2016
ivim_fit_Dfixed = ivim_mod.fit(
acquisition_scheme=scheme_ivim,
data=test_voxel)
```
We highly recommend installing pathos to take advantage of multicore processing.
Setup brute2fine optimizer in 0.00300788879395 seconds
Fitting of 1 voxels complete in 0.524432182312 seconds.
Average of 0.524432182312 seconds per voxel.
We also fit the Dipy IVIM implementation as a reference
```python
from dipy.reconst.ivim import IvimModel
ivimmodel = IvimModel(gtab)
ivim_fit_dipy = ivimmodel.fit(test_voxel)
```
Finally we can visualize the signal fits to this test voxel for the different IVIM algorithms. Note they're very similar.
```python
import matplotlib.pyplot as plt
%matplotlib inline
fig, axs = plt.subplots(ncols=2, figsize=[10, 4], sharey=True)
fig.suptitle('Test signal fit comparisons IVIM algorithms', fontsize=15)
axs[0].set_ylabel('Signal Intensity')
axs[0].scatter(scheme_ivim.bvalues, test_voxel, label='Measured DWIs')
axs[0].plot(scheme_ivim.bvalues, ivim_fit_Dfixed.predict()[0], label='Dstar Fixed Fit')
axs[1].scatter(scheme_ivim.bvalues, test_voxel, label='Measured DWIs')
axs[1].plot(scheme_ivim.bvalues, ivim_fit_dipy.predict(gtab), label='Dipy IVIM reference')
[ax.legend() for ax in axs]
[ax.set_xlabel('b-value [s/m^2]') for ax in axs];
```
## Parameter map comparison Dstar_fixed, and Dipy reference
To properly evaluate the two algorithms we fit them to the same example slice as in the dipy IVIM example.
Note that in practice we can import custom (prepared) multi-compartment models directly:
```python
from dmipy.custom_optimizers.intra_voxel_incoherent_motion import ivim_Dstar_fixed
from time import time
ivim_fit_dmipy_fixed = ivim_Dstar_fixed(scheme_ivim, data_slice)
dipy_start = time()
ivim_fit_dipy = ivimmodel.fit(data_slice)
print('Dipy computation time: {} s'.format(time() - dipy_start))
```
Starting IVIM Dstar-fixed algorithm.
We highly recommend installing pathos to take advantage of multicore processing.
Setup brute2fine optimizer in 0.00655007362366 seconds
Fitting of 5200 voxels complete in 19.0747060776 seconds.
Average of 0.00366821270723 seconds per voxel.
IVIM Dstar-fixed optimization of 5200 voxels complete in 19.085 seconds
/home/rutger/anaconda2/lib/python2.7/site-packages/dipy-0.16.0-py2.7-linux-x86_64.egg/dipy/reconst/ivim.py:458: UserWarning: x0 obtained from linear fitting is not feasibile as initial guess for leastsq while estimating f and D_star. Using parameters from the linear fit.
warnings.warn(warningMsg, UserWarning)
/home/rutger/anaconda2/lib/python2.7/site-packages/dipy-0.16.0-py2.7-linux-x86_64.egg/dipy/reconst/ivim.py:552: UserWarning: x0 is unfeasible for leastsq fitting. Returning x0 values from the linear fit.
warnings.warn(warningMsg, UserWarning)
/home/rutger/anaconda2/lib/python2.7/site-packages/dipy-0.16.0-py2.7-linux-x86_64.egg/dipy/reconst/ivim.py:347: UserWarning: Bounds are violated for leastsq fitting. Returning parameters from linear fit
warnings.warn(warningMsg, UserWarning)
Dipy computation time: 91.0858941078 s
We can then visualize the fitted parameter maps together:
```python
import numpy as np
fig, axs = plt.subplots(nrows=4, ncols=2, figsize=[15, 16])
fig.suptitle('IVIM Parameter Map Comparison', fontsize=25, y=0.93)
axs = axs.ravel()
axs[0].set_title('Dmipy Dstar-Fixed', fontsize=18)
axs[1].set_title('Dipy', fontsize=18)
axs[0].set_ylabel('S0-Predicted', fontsize=15)
axs[2].set_ylabel('perfusion fraction', fontsize=15)
axs[4].set_ylabel('D_star (perfusion)', fontsize=15)
axs[6].set_ylabel('D (diffusion)', fontsize=15)
args = {'vmin': 0., 'interpolation': 'nearest'}
im0 = axs[0].imshow(ivim_fit_dmipy_fixed.S0, **args)
im1 = axs[1].imshow(ivim_fit_dipy.S0_predicted, **args)
im2 = axs[2].imshow(ivim_fit_dmipy_fixed.fitted_parameters['partial_volume_1'], vmax=1., **args)
im3 = axs[3].imshow(ivim_fit_dipy.perfusion_fraction, vmax=1., **args)
im4 = axs[4].imshow(np.ones_like(ivim_fit_dmipy_fixed.S0) *
ivim_fit_dmipy_fixed.fitted_and_linked_parameters['G1Ball_2_lambda_iso'] * 1e9, vmax=20, **args)
axs[4].text(10, 10, 'Fixed to 7e-9 mm$^2$/s', fontsize=14, color='white')
im5 = axs[5].imshow(ivim_fit_dipy.D_star * 1e3, vmax=20, **args)
im6 = axs[6].imshow(ivim_fit_dmipy_fixed.fitted_parameters['G1Ball_1_lambda_iso'] * 1e9, vmax=6, **args)
im7 = axs[7].imshow(ivim_fit_dipy.D * 1e3, vmax=6, **args)
for im, ax in zip([im0, im1, im2, im3, im4, im5, im6, im7], axs):
fig.colorbar(im, ax=ax, shrink=0.7)
```
Notice that the two algorithms have basically the same S0 estimation, but differences can be found in the other maps.
Interestingly, the Dipy IVIM algorithm finds overall higher perfusion volume fractions -- sort of clustered around 0.25 -- than the Dmipy implementation, as well as extremely high D-star values outside of the optimization range.
Our findings become more clear in the following parameter histograms in the example slice:
```python
import seaborn as sns
fig, axs = plt.subplots(2, 2, figsize=[10, 9])
fig.suptitle('Comparison Parameter Histograms', fontsize=20)
axs = axs.ravel()
sns.distplot(ivim_fit_dmipy_fixed.S0.ravel(), ax=axs[0], label='Dmipy Dstar-Fixed')
sns.distplot(ivim_fit_dipy.S0_predicted.ravel(), ax=axs[0], label='Dipy Reference')
axs[0].set_title('S0')
sns.distplot(ivim_fit_dmipy_fixed.fitted_parameters['partial_volume_1'].ravel(), ax=axs[1], label='Dmipy Dstar-Fixed')
sns.distplot(ivim_fit_dipy.perfusion_fraction.ravel(), ax=axs[1], label='Dipy Reference')
axs[1].set_title('Perfusion Fraction')
axs[2].axvline(x=7, label='Dmipy Dstar-Fixed')
sns.distplot(ivim_fit_dipy.D_star.ravel() * 1e3, ax=axs[2], label='Dipy Reference')
axs[2].set_ylim(0, 0.005)
axs[2].set_title('D_star (perfusion)')
axs[2].text(450, 0.001, 'Dipy IVIM does not respect\noptimization boundaries')
axs[2].arrow(800, 0.0005, 100, -0.0001, width=0.00005, head_length=80.)
sns.distplot(ivim_fit_dmipy_fixed.fitted_parameters['G1Ball_1_lambda_iso'].ravel() * 1e9, ax=axs[3], label='Dmipy Dstar-Fixed')
sns.distplot(ivim_fit_dipy.D.ravel() * 1e3, ax=axs[3], label='Dipy Reference')
axs[3].set_title('D (diffusion)')
[ax.legend() for ax in axs];
```
In the histograms notice again that the 2 Dmipy implementations find similar parameter values, and the Dipy implementation differs.
- S0 is basically the same for all algorithms.
- Perfusion fraction are lower for Dmipy IVIM, and Dipy IVIM finds a very particular peak just above 0.25.
- D_star is fixed to 7e-9 $m^2$/s for D-star-fixed algorithm, but Dipy's D_star values sometimes find values of 1000 (i.e. 3000 $mm^2/s$, 1000 times free water diffusivity).
- For regular D estimation, Dipy IVIM being somewhat lower overall compared to Dmipy's.
### Fitting error comparison
Following our previous findings we can also calculate the mean squared fitting error for the three algorithms.
```python
mse_Dstar_fixed = ivim_fit_dmipy_fixed.mean_squared_error(data_slice)
mse_dipy = np.mean(
(ivim_fit_dipy.predict(gtab) / ivim_fit_dipy.S0_predicted[..., None] -
data_slice / ivim_fit_dipy.S0_predicted[..., None]) ** 2, axis=-1)
```
```python
fig, axs = plt.subplots(nrows=1, ncols=3, figsize=[15, 5])
fig.suptitle('IVIM fitting error comparison', fontsize=20)
axs = axs.ravel()
im0 = axs[0].imshow(mse_Dstar_fixed, vmax=0.08)
im1 = axs[1].imshow(mse_dipy, vmax=0.08)
axs[0].set_title('Dmipy IVIM Dstar-Fixed')
axs[1].set_title('Dipy IVIM reference')
for im, ax in zip([im0, im1], axs):
fig.colorbar(im, ax=ax, shrink=0.7)
axs[2].boxplot(
x=[mse_Dstar_fixed.ravel(), mse_dipy.ravel()],
labels=['Dmipy D-fixed', 'Dipy reference']);
axs[2].set_ylabel('Mean Squared Error')
axs[2].set_title('Fitting Error Boxplots', fontsize=13);
```
Dmipy's IVIM implementation has overall lower fitting error than Dipy's implementation -- which shows some extreme outliers even.
This example demonstrated that Dmipy can be easily used to generate, fit and evaluate the IVIM model. Without commenting on any implementation's correctness (no ground truth), we at least showed that Dmipy respects the parameter value boundaries that we impose during the optimization.
#### References
- Le Bihan, D., Breton, E., Lallemand, D., Aubin, M. L., Vignaud, J., & Laval-Jeantet, M. (1988). Separation of diffusion and perfusion in intravoxel incoherent motion MR imaging. Radiology, 168(2), 497-505.
- Le Bihan, D. (2017). What can we see with IVIM MRI?. NeuroImage
- Gurney-Champion OJ, Froeling M, Klaassen R, Runge JH, Bel A, Van Laarhoven HWM, et al. Minimizing the Acquisition Time for Intravoxel Incoherent Motion Magnetic Resonance Imaging Acquisitions in the Liver and Pancreas. Invest Radiol. 2016;51: 211–220.
- Park HJ, Sung YS, Lee SS, Lee Y, Cheong H, Kim YJ, et al. Intravoxel incoherent motion diffusion-weighted MRI of the abdomen: The effect of fitting algorithms on the accuracy and reliability of the parameters. J Magn Reson Imaging. 2017;45: 1637–1647.
- Wong, S. M., Backes, W. H., Zhang, C. E., Staals, J., van Oostenbrugge, R. J., Jeukens, C. R. L. P. N., & Jansen, J. F. A. (2018). On the Reproducibility of Inversion Recovery Intravoxel Incoherent Motion Imaging in Cerebrovascular Disease. American Journal of Neuroradiology.
- Gurney-Champion, O. J., Klaassen, R., Froeling, M., Barbieri, S., Stoker, J., Engelbrecht, M. R., ... & Nederveen, A. J. (2018). Comparison of six fit algorithms for the intra-voxel incoherent motion model of diffusion-weighted magnetic resonance imaging data of pancreatic cancer patients. PloS one, 13(4), e0194590.
| 41e53be5d76f99f34e118617d512f5d13c603b04 | 366,552 | ipynb | Jupyter Notebook | examples/example_ivim.ipynb | AthenaEPI/mipy | dbbca4066a6c162dcb05865df5ff666af0e4020a | [
"MIT"
]
| 59 | 2018-02-22T19:14:19.000Z | 2022-02-22T05:40:27.000Z | examples/example_ivim.ipynb | AthenaEPI/mipy | dbbca4066a6c162dcb05865df5ff666af0e4020a | [
"MIT"
]
| 95 | 2018-02-03T11:55:30.000Z | 2022-03-31T15:10:39.000Z | examples/example_ivim.ipynb | AthenaEPI/mipy | dbbca4066a6c162dcb05865df5ff666af0e4020a | [
"MIT"
]
| 23 | 2018-02-13T07:21:01.000Z | 2022-02-22T20:12:08.000Z | 745.02439 | 190,932 | 0.944611 | true | 4,630 | Qwen/Qwen-72B | 1. YES
2. YES | 0.810479 | 0.743168 | 0.602322 | __label__eng_Latn | 0.856657 | 0.237726 |
# Structural Estimation
1. This notebook shows how to **estimate** the consumption model in **ConsumptionSaving.pdf** using **Simulated Minimum Distance (SMD)**
2. It also shows how to calculate **standard errors** and **sensitivity measures**
## Simulated Minimum Distance
**Data:** We assume that we have data available for $N$ households over $T$ periods, collected in $\{w_i\}_i^N$.
**Goal:** We wish to estimate the true, unknown, parameter vector $\theta_0$. We assume our model is correctly specified in the sense that the observed data stems from the model.
**Overview:**
1. We focus on matching certain (well-chosen) **empirical moments** in the data to **simulated moments** from the model.
2. We calculate a $J\times1$ vector of moments in the data, $\Lambda_{data} = \frac{1}{N}\sum_{i=1}^N m(\theta_0|w_i)$. This could e.g. be average consumption over the life-cycle, the income variance or regressions coefficients from some statistical model.
3. To estimate $\theta$ we choose $\theta$ so as to **minimize the (squared) distance** between the moments in the data and the same moments calculated from simulated data. Let $\Lambda_{sim}(\theta) = \frac{1}{N_{sim}}\sum_{s=1}^{N_{sim}} m(\theta|w_s)$ be the same moments calculated on simulated data for $N_{sim}=S\times N$ observations for $T_{sim}$ periods from the model for a given value of $\theta$. As we change $\theta$, the simulated outcomes will change and the moments will too.
The **Simulated Minimum Distance (SMD)** estimator then is
$$
\hat{\theta} = \arg\min_{\theta} g(\theta)'Wg(\theta)
$$
where $W$ is a $J\times J$ positive semidefinite **weighting matrix** and
$$
g(\theta)=\Lambda_{data}-\Lambda_{sim}(\theta)
$$
is the distance between $J\times1$ vectors of moments calculated in the data and the simulated data, respectively. Concretely,
$$
\Lambda_{data} = \frac{1}{N}\sum_{i=1}^N m(\theta_0|w_i) \\
\Lambda_{sim}(\theta) = \frac{1}{N_{sim}}\sum_{s=1}^{N_{sim}} m(\theta|w_s)
$$
are $J\times1$ vectors of moments calculated in the data and the simulated data, respectively.
**Settings:** In our baseline setup, we will have $N=5,000$ observations for $T=40$ periods, and simulate $N_{sim}=100,000$ synthetic consumers for $T_{sim} = 40$ periods when estimating the model.
**Solution of consumption-saving model:** This estimator requires the solution (and simulation) of the model for each trial guess of $\theta$ as we search for the one that minimizes the objective function. Therefore, structural estimation can in general be quite time-consuming. We will use the EGM to solve the consumption model quite fast and thus be able to estimate parameters within a couple of minutes. Estimation of more complex models might take significantly longer.
> **Note I:** When regressions coefficients are used as moments, they are sometimes referred to as **auxiliary parameters** (APs) and the estimator using these APs as an **Indirect Inference (II)** estimator ([Gouriéroux, Monfort and Renault, 1993](https://doi.org/10.1002/jae.3950080507)).
> **Note II:** The estimator used is also called a **simulated method of moments (SMM)** estimator, i.e. a simulated Generalized Method of Moments (GMM) estimator.
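Before setting things up, here is a minimal sketch of the criterion that is being minimized. The names `data_moments` and `simulate_moments` are hypothetical placeholders for the empirical moments and the model-simulation step; the actual estimation below is handled by the `SimulatedMinimumDistanceClass`.

```python
import numpy as np

def smd_objective(theta, data_moments, simulate_moments, W):
    """ Criterion J(theta) = g(theta)' W g(theta), with g = data moments - simulated moments. """
    g = data_moments - simulate_moments(theta)  # J-vector of moment distances
    return g @ W @ g
```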
# Setup
```python
%load_ext autoreload
%autoreload 2
import time
import numpy as np
import scipy.optimize as optimize
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
prop_cycle = plt.rcParams['axes.prop_cycle']
colors = prop_cycle.by_key()['color']
import figs
from ConsumptionSavingModel import ConsumptionSavingModelClass
from SimulatedMinimumDistance import SimulatedMinimumDistanceClass
```
# Estimation choices
```python
# a. model settings
N = 5_000
N_sim = 100_000
par = {'simlifecycle':True,
'sim_mini':1.0 ,
'simT':40,
'simN':N_sim,
'Nxi':4,
'Npsi':4,
'Na':100}
par_true = par.copy()
par_true['simN'] = N
# b. parameters to estimate
est_par = {
'rho': {'guess':2.0,'lower':0.5,'upper':5.0,},
'beta': {'guess':0.97,'lower':0.90,'upper':0.999},
}
est_par_names = [key for key in est_par.keys()]
# c. moment function used in estimation.
def mom_func(data,ids=None):
""" returns the age profile of wealth """
if ids is None:
mean_A = np.mean(data.A[:,1:],axis=0)
else:
mean_A = np.mean(data.A[ids,1:],axis=0)
return mean_A
# d. choose weighting matrix
weighting_matrix = 0
# 0: identity (equal weight),
# 1: inverse of variance on the diagonal (removes scale),
# 2: inverse of covariance matrix between estimation moments (optimal weighting matrix)
```
# Data and estimator
Construct **data**.
```python
# a. setup model to simulate data
true = ConsumptionSavingModelClass(name='true',par=par_true)
true.solve()
true.simulate(seed=2019) # this seed is different from the default
# b. data moments
datamoms = mom_func(true.sim)
moment_names = [i for i in range(true.par.age_min+1,true.par.age_min+true.par.simT)]
```
model solved in 3.2 secs
model simulated in 2.7 secs
**Bootstrap** variance of the estimation moments, used later when calculating standard errors (and potentially for the weighting matrix).
```python
num_boot = 200
num_moms = datamoms.size
smd = SimulatedMinimumDistanceClass(est_par,mom_func,datamoms=datamoms)
smd.Omega = smd.bootstrap_mom_var(true.sim,N,num_boot,num_moms)
```
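As a rough illustration of what such a bootstrap does (the call above relies on the class method `bootstrap_mom_var`; the helper below is a hypothetical sketch, not the actual implementation): resample individuals with replacement, recompute the moments on each resample, and take the covariance across resamples.

```python
import numpy as np

def bootstrap_moment_cov(sim_data, mom_func, N, num_boot, seed=2019):
    """ Covariance of the J moments across bootstrap resamples of individuals. """
    rng = np.random.default_rng(seed)
    moms = np.empty((num_boot, mom_func(sim_data).size))
    for b in range(num_boot):
        ids = rng.integers(0, N, size=N)     # draw N individuals with replacement
        moms[b, :] = mom_func(sim_data, ids)
    return np.cov(moms, rowvar=False)        # J x J covariance matrix
```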
**Setup estimator**.
```python
smd.plot({'data':moment_names},{'data':datamoms},xlabel='age',ylabel='wealth',hide_legend=True)
```
# Estimate the model
```python
model = ConsumptionSavingModelClass(name='estimated',par=par)
```
Choose **weighting matrix**:
```python
if weighting_matrix == 0:
W = np.eye(smd.datamoms.size) # identity
elif weighting_matrix == 1:
W = np.diag(1.0/np.diag(smd.Omega)) # inverse of variance on the diagonal
else:
W = np.linalg.inv(smd.Omega) # optimal weighting matrix
```
## Estimation results
```python
# a. estimate the model (can take several minutes)
%time est = smd.estimate(model,W)
# b. print estimation results
print(f'\n True Est. ')
for key in est_par.keys():
print(f'{key:5s} {getattr(true.par,key):2.3f} {est[key]:2.3f}')
```
objective function at starting values: 5.961003893464155
Wall time: 59.4 s
True Est.
rho 2.000 2.018
beta 0.960 0.960
Show **model-fit**:
```python
plot_data_x = {'data':moment_names,'simulated':moment_names}
plot_data_y = {'data':datamoms,'simulated':mom_func(model.sim)}
smd.plot(plot_data_x,plot_data_y,xlabel='age',ylabel='wealth')
```
## Standard errors
The SMD estimator is **asymptotically Normal** and its standard errors have the same form as standard GMM estimators, scaled with the adjustment factor $(1+S^{-1})$ due to the fact that we use $S$ simulations of the model.
The **standard errors** are thus
$$
\begin{align}
\text{Var}(\hat{\theta})&=(1+S^{-1})\Gamma\Omega\Gamma'/N \\
\Gamma &= -(G'WG)^{-1}G'W \\
\Omega & = \text{Var}(m(\theta_0|w_i))
\end{align}
$$
where $G=\frac{\partial g(\theta)}{\partial \theta}$ is the $J\times K$ **Jacobian** with respect to $\theta$. $\Gamma$ is related to what is sometimes called the "influence function".
**Calculating $\Omega$**:
1. Can sometimes be done **analytically**
2. Can always be done using a **bootstrap** as done above
**Calculating the Jacobian, $G$:** This is done using numerical finite differences.
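For illustration, a forward-difference version of such a Jacobian could look like the sketch below; `g_func` is a hypothetical function returning the moment-distance vector, while the cell that follows uses the `smd` object's own routines instead.

```python
import numpy as np

def num_jacobian(g_func, theta, step=1.0e-5):
    """ Forward-difference Jacobian of the J-vector g(theta) w.r.t. the K parameters. """
    g0 = np.asarray(g_func(theta))
    G = np.empty((g0.size, theta.size))
    for k in range(theta.size):
        theta_step = theta.copy()
        theta_step[k] += step
        G[:, k] = (np.asarray(g_func(theta_step)) - g0) / step
    return G
```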
```python
# a. number of datasets simulated per individual in original data
S = model.par.simN/N
# b. find standard errors
Gamma, grad_theta = smd.calc_influence_function(est['theta'],model,W)
Var_theta = (1.0+1.0/S) * Gamma @ smd.Omega @ Gamma.T /N
se = np.sqrt(np.diag(Var_theta))
# b. print estimation results
print(f' True Est. (se)')
for i,(key,val) in enumerate(est_par.items()):
print(f'{key:5s} {getattr(true.par,key):2.3f} {est[key]:2.3f} ({se[i]:2.3f})')
```
True Est. (se)
rho 2.000 2.018 (0.043)
beta 0.960 0.960 (0.001)
# Sensitivity Analysis
We now look into a **sensitivity analysis** of our estimation. Concretely, we implement the **informativeness measure** from [Honoré, Jørgensen and de Paula (2019)](https://doi.org/10.1002/jae.2779) and the **sensitivity to calibrated parameters** in [Jørgensen (2020)](https://www.ifs.org.uk/uploads/CWP1620-Sensitivity-to-Calibrated-Parameters.pdf). Further details can be found in these papers.
## The informativeness of estimation moments
The measures are motivated by those proposed in [Honoré, Jørgensen and de Paula (2019)](https://doi.org/10.1002/jae.2779). All the measures proposed in that paper are calculated, but we will focus on their measure 4, which asks **"what is the change in the asymptotic variance from completely excluding the k'th moment?"**. If the *k*th moment is very informative about a parameter, the asymptotic variance of that parameter should increase significantly if we leave out that moment.
```python
info = smd.informativeness_moments(grad_theta,smd.Omega,W)
smd.plot_heat(info['M4e'],est_par_names,moment_names,annot=False)
```
**Conclusion:** We can see that especially the wealth levels of younger households are very informative regarding both $\rho$ and $\beta$. This is likely due to the fact that at low levels of resources (which is the case at younger ages), the values of both these parameters affect consumption and saving decisions a lot. Thus, the level of saving especially at young ages is very informative and helps to identify the two parameters.
## Sensitivity to calibrated parameters
The measure is motivated by the one proposed in [Jørgensen (2020)](https://www.ifs.org.uk/uploads/CWP1620-Sensitivity-to-Calibrated-Parameters.pdf). Note that the estimation moments are all functions of the $L$ calibrated parameters, which we will denote $\gamma$, $g(\theta|\gamma)$.
The **sensitivity measure** is defined as
$$
\begin{align}
S &= \Gamma D
\end{align}
$$
where $D=\frac{\partial g(\theta|\gamma)}{\partial \gamma}$ is the $J\times L$ **Jacobian** with respect to $\gamma$.
*We only need to calculate $D$* since we have already calculated $\Gamma$ when we calculated standard errors above. We use numerical finite differences to calculate this object.
**Chosen calibrated parameters:** $R$, $G$, $\sigma_{\psi}$, $\sigma_{\xi}$.
```python
cali_par_names = ('R','G','sigma_psi','sigma_xi')
cali_par = np.array([getattr(model.par,name) for name in cali_par_names])
```
**Calculate the sensitivity measure:**
```python
grad_gamma = smd.num_grad(cali_par,model,cali_par_names)
sens_cali = Gamma @ grad_gamma
```
**Plot sensitivity measure**
```python
smd.plot_heat(sens_cali,est_par_names,cali_par_names)
```
**Check:** We can compare this to a brute-force approach in which we re-estimate the model for marginal changes in the calibrated parameters. This takes considerable time, however. The results are almost identical.
```python
sens_cali_brute = smd.sens_cali_brute_force(model,est['theta'],W,cali_par_names)
smd.plot_heat(sens_cali_brute,est_par_names,cali_par_names)
```
**Arbitrary changes in $\gamma$**: We can also investigate larger simultaneous changes in $\gamma$.
```python
# a. set new calibrated parameters
cali_par_new = {'G':1.05}
# b. update calibrated parameters in new version of the model
model_new = model.copy()
for key,val in cali_par_new.items():
setattr(model_new.par,key,val)
# c. calculate new objective function
obj_vec = smd.diff_vec_func(est['theta'],model,est_par_names)
obj_vec_new = smd.diff_vec_func(est['theta'],model_new,est_par_names)
# d. approximate change in theta
Gamma_new,_ = smd.calc_influence_function(est['theta'],model_new,W)
theta_delta = Gamma_new @ obj_vec_new - Gamma @ obj_vec
# e. extrapolate the gradient
theta_delta_extrap = np.zeros(theta_delta.size)
for j,key in enumerate(cali_par_new):
theta_delta_extrap += sens_cali[:,j]*(cali_par_new[key]-getattr(model.par,key))
print(theta_delta_extrap)
```
[ 0.17300506 -0.02961878]
**Check:** Again, we can compare this approximation to a brute-force re-estimation of the model for the changed $\gamma$.
```python
est_new = smd.estimate(model_new,W)
theta_delta_brute = est_new['theta'] - est['theta']
print(theta_delta_brute)
```
objective function at starting values: 5.463540643940908
[ 0.11275468 -0.05969197]
| 77f46cd87455eed7e9ad9f326fda9d71e0d886f7 | 103,391 | ipynb | Jupyter Notebook | 00. DynamicProgramming/04. Structural Estimation.ipynb | JMSundram/ConsumptionSavingNotebooks | 338a8cecbe0043ebb4983c2fe0164599cd2a4fc0 | [
"MIT"
]
| 20 | 2019-03-09T02:08:49.000Z | 2022-03-28T15:56:04.000Z | 00. DynamicProgramming/04. Structural Estimation.ipynb | JMSundram/ConsumptionSavingNotebooks | 338a8cecbe0043ebb4983c2fe0164599cd2a4fc0 | [
"MIT"
]
| 1 | 2019-06-03T18:33:44.000Z | 2019-07-02T13:51:21.000Z | 00. DynamicProgramming/04. Structural Estimation.ipynb | JMSundram/ConsumptionSavingNotebooks | 338a8cecbe0043ebb4983c2fe0164599cd2a4fc0 | [
"MIT"
]
| 34 | 2019-02-26T19:27:37.000Z | 2021-12-27T09:34:04.000Z | 146.446176 | 24,536 | 0.886189 | true | 3,370 | Qwen/Qwen-72B | 1. YES
2. YES | 0.73412 | 0.752013 | 0.552067 | __label__eng_Latn | 0.941538 | 0.120967 |
# Solving a Dynamic Discrete Choice Problem with Three Different Methods
# Mateo Velásquez-Giraldo
This notebook solves a simple dynamic "machine replacement" problem using three different methods:
- Contraction mapping iteration.
- Hotz-Miller inversion.
- Forward simulation.
The code is optimized for clarity, not speed, as the purpose is to give a sense of how the three methods work and how they can be implemented.
```python
# Setup
import numpy as np
from scipy.optimize import minimize, Bounds
import pandas as pd
import numpy.random as rnd
import copy
```
# Problem setup
The problem is taken from a problem set of Prof. Nicholas Papageorge's Topics in Microeconometrics course at Johns Hopkins University and borrows heavily from Professor Juan Pantano's Microeconometrics course offered at Washington University in St. Louis.
There is a shop that operates using a machine. The machine's maintenance costs increase with its age, denoted $a_t$. In each period, the shop must decide whether to replace the machine ($i_t = 1$) or not ($i_t=0$). Assume that costs stop increasing after the machine reaches $a_t = 5$ so that, in practice, that is the maximum age. Age then evolves according to:
\begin{equation}
a_{t+1} = \begin{cases}
\min \{5,a_t+1\}, & \text{ if } i_t = 0 \\
1, & \text{ if } i_t = 1
\end{cases}.
\end{equation}
A period's profits depend on maintenance costs, replacement costs, and factors that the econometrician does not observe, modeled as stochastic shocks $\epsilon$:
\begin{equation}
\Pi (a_t,i_t,\epsilon_{0,t},\epsilon_{1,t}) = \begin{cases}
\theta a_t + \epsilon_{0,t} & \text{if } i_t=0\\
R + \epsilon_{1,t} & \text{if } i_t = 1
\end{cases}
\end{equation}
The shop's problem can be recursively defined as:
\begin{equation}
\begin{split}
V(a_t,\epsilon_{0,t},\epsilon_{1,t}) &= \max_{i_t} \Pi
(a_t,i_t,\epsilon_{0,t},\epsilon_{1,t}) + \beta
E_t[V(a_{t+1},\epsilon_{0,t+1},\epsilon_{1,t+1})]\\
&\text{s.t} \\
a_{t+1} &= \begin{cases}
\min \{5,a_t+1\}, & \text{ if } i_t = 0 \\
1, & \text{ if } i_t = 1
\end{cases}.
\end{split}
\end{equation}
The code below defines functions and objects that capture the structure of the problem
```python
# Profit function (the deterministic part)
def profit_det(a,i,theta,R):
if i == 0:
return(theta*a)
else:
return(R)
# State transition function
def transition(a, i):
if i == 0:
return(min(5,a+1))
else:
return(1)
# Construct state and choice vectors
states = np.arange(5) + 1
choices = np.arange(2)
# Construct the transition matrix array:
# A 2 x 5 x 5 array in which the position (i,j,k) contains
# the probability of moving from state j to state k given that
# choice i was made
trans_mat = np.zeros((len(choices),len(states),len(states)))
# If no-replacement, deterministically move to the next state, up to the last
for k in range(len(states)-1):
trans_mat[0][k][k+1] = 1
trans_mat[0,len(states)-1,len(states)-1] = 1
# If replacement, deterministically move to the first state
for k in range(len(states)):
trans_mat[1,k,0] = 1
```
## Some more notation
The solution methods use objects that are derived from the value function $V$ and that will be defined below.
### Pre-Shocks Expected value function $\tilde{V}(\cdot)$
This object captures the lifetime utility a shop can expect after knowing its state $a_t$ but before knowing its stochastic shock realizations.
\begin{equation}
\tilde{V}(a_t) = E_\epsilon [V(a_t,\epsilon_{0,t},\epsilon_{1,t})]
\end{equation}
### Choice-Specific Value Functions $\bar{V}_{i}(\cdot)$
These two objects capture the lifetime utility expected from a choice, excluding the current-period stochastic shock. Formally, they are:
\begin{equation}
\bar{V}_0(a_t) = \theta_1 a_t + \beta E \left[ V(\min\left\{ 5, a_t+1\right\},\epsilon_{0,t+1},\epsilon_{1,t+1}\right)]
\end{equation}
and
\begin{equation}
\bar{V}_1(a_t) = R + \beta E \left[ V(1,\epsilon_{0,t+1},\epsilon_{1,t+1}\right)].
\end{equation}
## Useful relationships
The previously defined objects are related through the following identities
\begin{equation}
\bar{V}_i\left( a_t \right) = \Pi (a_t,i_t,0,0) + \beta\tilde{V}\left(a_{t+1}\left(a_t,i\right)\right),
\end{equation}
and
\begin{equation}
V(a_t,\epsilon_{0,t},\epsilon_{1,t}) = \max \left\{ \bar{V}_0\left(
a_t \right) + \epsilon_{0,t}, \bar{V}_1\left(
a_t \right) + \epsilon_{1,t} \right\}.
\end{equation}
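When the shocks are i.i.d. extreme value type 1 (the assumption used throughout this notebook), the expectation of this maximum has a well-known closed form, which is what the `expectedMax` function implements further below:

\begin{equation}
E\left[\max \left\{ \bar{V}_0(a_t) + \epsilon_{0,t},\; \bar{V}_1(a_t) + \epsilon_{1,t} \right\}\right]
= \gamma + \ln\left( e^{\bar{V}_0(a_t)} + e^{\bar{V}_1(a_t)} \right),
\end{equation}

where $\gamma$ is Euler's constant.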
## Choice probabilities
Using the last relationship and assuming that a shop behaves optimally, it should be the case that
\begin{equation}
i_t = \arg \max_{i\in \{0,1\}} \left( \bar{V}_i\left( a_t \right) + \epsilon_{i,t}\right).
\end{equation}
Assuming that stochastic shocks $\epsilon$ are i.i.d Extreme-value-type-1 yields a simple expression for the probability of choosing each alternative:
\begin{equation}
P(i_t=1|a_t) = \frac{\exp (\bar{V}_1(a_t))}{\exp (\bar{V}_0(a_t))+\exp (\bar{V}_1(a_t))}.
\end{equation}
This expression allows us to estimate the model's parameters given data through maximum likelihood estimation. The likelihood function would be
\begin{equation}
\mathcal{L}(\theta,R) = \prod_{j=1}^N P\left( i_j|a_j,\theta, R\right).
\end{equation}
We now only need ways to obtain choice-specific net-of-error value functions $\bar{V}_i(\cdot)$ for any given set of parameters. In this notebook we will explore three.
```python
# Compute the log-likelihood of (a,i) vectors given choice-specific,
# net-of-error value functions
def logL(a, i, V):
# Compute the probability of each (a,i) pair possible
probs = np.exp(V)
total = np.sum(probs, axis = 1)
probs = probs / total[:,None]
# Get a vector of the probabilities of observations
L = probs[a-1,i]
logLik = np.sum(np.log(L))
return(logLik)
```
# Solution of the dynamic problem
To simulate data, we must first solve the problem, so we begin by introducing the first of the methods that we will use.
## 1. Contraction mapping iteration
A first way of obtaining choice-specific value functions is defining the following mapping.
\begin{equation}
T\left(\begin{bmatrix}
f_0(\cdot)\\
f_1(\cdot)
\end{bmatrix}\right)(a_t) = \begin{bmatrix}
\theta_1 a_t + \beta E [\max \left\{ f_0\left( a_{t+1}\left(a_t,i_t=0\right)\right) + \epsilon_{0,t}, f_1\left( \left( a_{t+1}\left(a_t,i_t=0\right) \right) \right) + \epsilon_{1,t} \right\}] \\
R + \beta E [ \max \left\{ f_0\left(
\left( a_{t+1}\left(a_t,i_t=1\right) \right) \right) + \epsilon_{0,t}, f_1\left(
\left( a_{t+1}\left(a_t,i_t=1\right) \right) \right) + \epsilon_{1,t} \right\}]
\end{bmatrix}
\end{equation}
and noting that $[\bar{V}_0(\cdot),\bar{V}_1(\cdot)]'$ is a fixed point of $T$.
In fact, $T$ is a contraction mapping, so a strategy for finding its fixed point is iteratively applying $T$ from an arbitrary starting point. This is precisely what the code below does.
```python
# Computation of E[max{V_0 + e0, V_1 + e1}]
def expectedMax(V0,V1):
return( np.euler_gamma + np.log( np.exp(V0) + np.exp(V1) ) )
# Contraction mapping
def contrMapping(Vb, theta, R, beta):
# Initialize array (rows are a, cols are i)
Vb_1 = np.zeros( Vb.shape )
for a_ind in range(len(Vb)):
# Adjust 0 indexing
a = a_ind + 1
for i in range(2):
a_1 = transition(a, i)
a_1_ind = a_1 - 1
Vb_1[a_ind, i] = profit_det(a, i, theta, R) + \
beta * expectedMax(Vb[a_1_ind,0],Vb[a_1_ind,1])
return(Vb_1)
# Solution of the fixed point problem by repeated application of the
# contraction mapping
def findFX(V0, theta, R, beta, tol, disp = True):
V1 = V0
norm = tol + 1
count = 0
while norm > tol:
count = count + 1
V1 = contrMapping(V0, theta, R, beta)
norm = np.linalg.norm(V1 - V0)
if disp:
print('Iter. %i --- Norm of difference is %.6f' % (count,norm))
V0 = V1
return(V1)
```
## 2. Hotz-Miller Inversion
The Hotz-Miller method relies on the following re-expression of the pre-shock expected value function
\begin{equation}
\tilde{V}(a_t) = \sum_{i\in\{0,1\}} P(i_t = i | a_t) \times \left( \Pi \left(a_t,i_t,0,0\right) + E\left[ \epsilon_i | i_t = i\right] + \beta \sum_{a'= 1}^{5} P\left(a_{t+1} = a' | a_t, i_t = i\right) \tilde{V}\left(a'\right) \right)
\end{equation}
which is a system of linear equations in $\{ \tilde{V}(1),...,\tilde{V}(5) \}$ if one knows $ P(i_t = i | a_t)$, $\Pi\left(a_t,i_t,0,0\right)$, $E\left[ \epsilon_i | i_t = i\right]$, and $P\left(a_{t+1} = a' | a_t, i_t = i\right)$.
- $ P(i_t = i | a_t)$ are known as "conditional choice probabilities", and can be estimated from the data directly.
- $P\left(a_{t+1} = a' | a_t, i_t = i\right)$ are state-to-state transition probabilities. In our simple problem, transitions are deterministic, but in more complex problems these could also be directly estimated from the data.
- $\Pi\left(a_t,i_t,0,0\right)$ is known given parameters.
- $E\left[ \epsilon_i | i_t = i\right]$ is equal to $\gamma - \ln P(i_t = i|a_t)$ if one assumes i.i.d extreme value type one errors ($\gamma$ is Euler's constant).
Thus, for any given parameter vector we can solve the linear system for $\{ \tilde{V}(1),...,\tilde{V}(5) \}$. With these, we can use the previously defined relationship
\begin{equation}
\bar{V}_i\left( a_t \right) = \Pi (a_t,i_t,0,0) + \beta\tilde{V}\left(a_{t+1}\left(a_t,i\right)\right),
\end{equation}
to obtain choice-specific, net-of-error value functions and compute our likelihood.
```python
def Hotz_Miller(theta, R, states, choices, CPPS, trans_mat,invB):
nstates = len(states)
nchoices = len(choices)
# Construct ZE matrix
ZE = np.zeros((nstates, nchoices))
for i in range(nstates):
for j in range(nchoices):
ZE[i,j] = CPPS[i,j]*( profit_det(states[i],choices[j],theta,R) +
np.euler_gamma - np.log(CPPS[i,j]) )
# Take a sum.
ZE = np.sum(ZE,1,keepdims = True)
# Compute W
W = np.matmul(invB, ZE)
# Z and V
Z = np.zeros((nstates,nchoices))
V = np.zeros((nstates,nchoices))
for i in range(nstates):
for j in range(nchoices):
Z[i,j] = np.dot(trans_mat[j][i,:],W)
V[i,j] = profit_det(states[i],choices[j],theta,R) + beta*Z[i,j]
return(V)
```
## 3. Forward Simulation

The forward-simulation approach approximates each choice-specific value $\bar{V}_i(a)$ by simulating many paths forward from state $a$ given an initial choice $i$: subsequent states are drawn from the transition matrix, subsequent choices from the estimated conditional choice probabilities, and the discounted flow profits (plus the expected taste shock conditional on each simulated choice, $\gamma - \ln P(i_t|a_t)$) are averaged across simulations.
```python
def forward_simul(theta,R,beta,states,choices,CPPS,trans_mat,nperiods,nsims,
seed):
# Set seed
rnd.seed(seed)
# Initialize V
V = np.zeros((len(states),len(choices)))
for i in range(len(states)):
for j in range(len(choices)):
v_accum = 0
for r in range(nsims):
a_ind = i
c_ind = j
v = profit_det(states[a_ind], choices[c_ind], theta, R)
for t in range(nperiods):
# Simulate state
a_ind = rnd.choice(a = len(states),
p = trans_mat[c_ind][a_ind])
# Simulate choice
c_ind = rnd.choice(a = len(choices),
p = CPPS[a_ind])
# Find expected value of taste disturbance conditional on
# choice
exp_e = np.euler_gamma - np.log(CPPS[a_ind,c_ind])
# Update value funct
v = v + ( beta**(t+1) ) * (profit_det(states[a_ind],
choices[c_ind],
theta,R) +
exp_e)
v_accum = v_accum + v
V[i,j] = v_accum / nsims
return(V)
```
# Dataset simulation
Now, to simulate the model, we only need to solve the problem for some set of parameters and, using the result and simulated taste shocks, produce optimal behavior.
The function below does exactly this, simulating a panel of machines, each observed for some pre-set number of periods.
```python
def sim_dataset(theta, R, nmachines, n_per_machine, beta):
# First solve the choice specific value functions for both parameter sets
V0 = np.zeros((5,2))
tol = 1e-6 # Tolerance
V = findFX(V0, theta, R, beta, tol, disp = False)
data = pd.DataFrame(np.zeros((nmachines*n_per_machine,4)),
columns = ['Id','T','a','i'])
ind = 0
for m in range(nmachines):
# Initialize state
a_next = rnd.randint(5) + 1
for t in range(n_per_machine):
a = a_next
# Assign id and time
data.loc[ind,'Id'] = m
data.loc[ind, 'T'] = t
data.loc[ind, 'a'] = a
u_replace = V[a - 1][1] + rnd.gumbel()
u_not = V[a - 1][0] + rnd.gumbel()
if u_replace < u_not:
data.loc[ind,'i'] = 0
a_next = min(5, a+1)
else:
data.loc[ind,'i'] = 1
a_next = 1
ind = ind + 1
return(data)
```
Now we can use the function to simulate a full dataset.
```python
# Simulate a dataset of a single type
nmachines = 6000
n_per_machine = 1
# Assign test parameters
theta = -1
R = -4
beta = 0.85
data = sim_dataset(theta, R, nmachines, n_per_machine, beta)
a = data.a.values.astype(int)
i = data.i.values.astype(int)
```
It is also useful to define functions that estimate conditional choice probabilities and state-to-state transition probabilities from the data, since we will be using them in estimation for some methods.
```python
def get_ccps(states, choices):
# Function to estimate ccps. Since we are in a discrete setting,
# these are just frequencies.
# Find unique states
un_states = np.unique(states)
un_states.sort()
un_choices = np.unique(choices)
un_choices.sort()
# Initialize ccp matrix
ccps = np.ndarray((len(un_states),len(un_choices)), dtype = float)
# Fill out the matrix
for i in range(len(un_states)):
sc = choices[states == un_states[i]]
nobs = len(sc)
for j in range(len(un_choices)):
ccps[i][j] = np.count_nonzero( sc == un_choices[j]) / nobs
return(ccps)
def state_state_mat(CPP,transition_mat):
nstates = CPP.shape[0]
nchoices = CPP.shape[1]
# Initialize
PF = np.zeros((nstates,nstates))
for i in range(nstates):
for j in range(nstates):
for d in range(nchoices):
PF[i,j] = PF[i,j] + CPP[i,d]*transition_mat[d][i,j]
return(PF)
```
Now we use the functions to estimate the CCPS and the transition matrix in the dataset that we just simulated.
```python
# Estimate CPPS
cpps = get_ccps(a,i)
# Compute the state-to-state (no choice matrix)
PF = state_state_mat(cpps,trans_mat)
```
# Estimation
We are now ready to estimate the model using our data and the three methods that were previously discussed.
In every case, we define a function that takes the parameters and data, solves the model using the specific method, and computes the log-likelihood. All that is left then is to optimize!
## 1. Rust's contraction mapping.
```python
# Compute the log-likelihood of (a,i) vectors given parameter values,
# with contraction mapping method
def logL_par_fx(par, a, i, tol):
# Extract parameters
theta = par[0]
R = par[1]
beta = par[2]
# Find implied value functions
V = np.zeros((5,2))
V = findFX(V, theta, R, beta, tol, disp = False)
# Return the loglikelihood from the implied value function
return(logL(a, i, V) )
```
```python
# Set up the objective function for minimization
tol = 1e-9
x0 = np.array([0,0])
obj_fun_fx = lambda x: -1 * logL_par_fx([x[0],x[1],beta], a, i, tol)
# Optimize
est_fx = minimize(obj_fun_fx, x0, method='BFGS', options={'disp': True})
mean_est_fx = est_fx.x
se_est_fx = np.diag(est_fx.hess_inv)
```
Warning: Desired error not necessarily achieved due to precision loss.
Current function value: 2737.297479
Iterations: 14
Function evaluations: 168
Gradient evaluations: 39
```python
# Present results
print('Estimation results (S.E\'s in parentheses):')
print('Theta: %.4f (%.4f)' % (mean_est_fx[0], se_est_fx[0]))
print('R: %.4f (%.4f)' % (mean_est_fx[1], se_est_fx[1]))
```
Estimation results (S.E's in parentheses):
Theta: -1.0157 (0.0002)
R: -4.0358 (0.0017)
## 2. Hotz-Miller
```python
# Compute the log-likelihood of (a,i) vectors given parameter values,
# with forward simulation method
def logL_par_HM(par, a, i,
states, choices, CPPS, trans_mat,
invB):
# Extract parameters
theta = par[0]
R = par[1]
# Find implied value functions
V = Hotz_Miller(theta, R, states, choices, CPPS, trans_mat,invB)
# Return the loglikelihood from the implied value function
return(logL(a, i, V) )
```
```python
# Compute the "inv B" matrix
invB = np.linalg.inv( np.identity(len(states)) - beta*PF )
# Set up objective function
obj_fun_HM = lambda x: -1 * logL_par_HM(x, a, i,states, choices,
cpps, trans_mat, invB)
# Optimize
est_HM = minimize(obj_fun_HM, x0, method='BFGS', options={'disp': True})
mean_est_HM = est_HM.x
se_est_HM = np.diag(est_HM.hess_inv)
```
Optimization terminated successfully.
Current function value: 2737.302419
Iterations: 13
Function evaluations: 76
Gradient evaluations: 19
```python
# Present results
print('Estimation results (S.E\'s in parentheses):')
print('Theta: %.4f (%.4f)' % (mean_est_HM[0], se_est_HM[0]))
print('R: %.4f (%.4f)' % (mean_est_HM[1], se_est_HM[1]))
```
Estimation results (S.E's in parentheses):
Theta: -1.0162 (0.0006)
R: -4.0364 (0.0127)
## 3. Forward Simulation
```python
# Compute the log-likelihood of (a,i) vectors given parameter values,
# with forward simulation method
def logL_par_fs(par, a, i,
states, choices, CPPS, trans_mat,
nperiods, nsims, seed):
# Extract parameters
theta = par[0]
R = par[1]
beta = par[2]
# Find implied value functions
V = forward_simul(theta,R,beta,
states,choices,
CPPS,trans_mat,
nperiods,nsims,
seed)
# Return the loglikelihood from the implied value function
return(logL(a, i, V) )
```
```python
nperiods = 40
nsims = 30
seed = 1
# Set up objective function
obj_fun_fs = lambda x: -1 * logL_par_fs([x[0],x[1],beta],a,i,
states, choices, cpps, trans_mat,
nperiods = nperiods, nsims = nsims,
seed = seed)
# Optimize
est_fs = minimize(obj_fun_fs, x0, method='BFGS', options={'disp': True})
mean_est_fs = est_fs.x
se_est_fs = np.diag(est_fs.hess_inv)
```
Optimization terminated successfully.
Current function value: 2737.491300
Iterations: 12
Function evaluations: 60
Gradient evaluations: 15
```python
# Present results
print('Estimation results (S.E\'s in parentheses):')
print('Theta: %.4f (%.4f)' % (mean_est_fs[0], se_est_fs[0]))
print('R: %.4f (%.4f)' % (mean_est_fs[1], se_est_fs[1]))
```
Estimation results (S.E's in parentheses):
Theta: -1.0135 (0.0005)
R: -4.0402 (0.0119)
| ad23d9149760ebc5d2652a7c514964651acd4055 | 29,978 | ipynb | Jupyter Notebook | Notebook/Methods.ipynb | Mv77/DDCex | 89cf50ff639ec377245924df30c4a83c90061d18 | [
"MIT"
]
| null | null | null | Notebook/Methods.ipynb | Mv77/DDCex | 89cf50ff639ec377245924df30c4a83c90061d18 | [
"MIT"
]
| null | null | null | Notebook/Methods.ipynb | Mv77/DDCex | 89cf50ff639ec377245924df30c4a83c90061d18 | [
"MIT"
]
| 1 | 2020-08-06T00:05:34.000Z | 2020-08-06T00:05:34.000Z | 33.494972 | 371 | 0.502835 | true | 5,750 | Qwen/Qwen-72B | 1. YES
2. YES | 0.872347 | 0.849971 | 0.74147 | __label__eng_Latn | 0.876493 | 0.561015 |
# Neural Networks Summer 2019
This notebook seeks to summarize and document the content learned during the Summer 2019 S-STEM summer reasearch program.
| Week 1 | [Week 2](#week2) | [Week 3](#week3) |
|---| --- | --- |
| [Linear Algebra](#linear-algebra) | [Multilayer Nets](#multilayer-networks)| [Standardization of data](#standardization) |
| [PCA](#PCA) | [Reinforcement Learning](#reinforcement)| [Inductive Bias](#inductive) |
| [Single Layer Neural Nets](#single-layer-neural-networks) | | |
---
## Week 1
During the first week, we covered introductory topics needed to get started in the area of neural networks. We began by learning to use Jupyter notebooks, followed by an intro to LaTeX and Markdown. These tools will be used throughout the program to help document our code and processes.
In addition we covered some python libraries that will be useful, such as Matplotlib and numpy. Towards the end of the week, we began learning about the basics of neural networks, starting with tools like Keras and TensorFlow. We used these to practice with a single layer neural network and started learning about multilayer networks.
<a name="linear-algebra"></a>
#### Linear Algebra
##### Distance and Similarity
When speaking in terms of neural networks, it's often useful to describe something as similar or dissimilar to a class of things. For this, it is helpful to have some mathematical methods of defining this similarity or difference. A common distance metric is the **Euclidean distance**: $\sqrt{\sum_{i=1}^{n}{(\boldsymbol{x}_i - \boldsymbol{y}_i)^2}}$ . Let's take a peek at that using numpy. Notice how easy numpy arrays make it.
```python
import numpy as np
X = np.array([5.0, 10.0])
Y = np.array([1.0, 8.0])
np.sqrt(np.sum(pow(X-Y, 2.0)))
```
4.47213595499958
```python
# This can also be done using scipy's euclidean method
import scipy.spatial.distance as ssd
ssd.euclidean(X, Y)
```
4.47213595499958
A common metric used for similarity is the **cosine similarity** function: $\cos {\theta} = \frac{\boldsymbol{x} \cdot \boldsymbol{y}}{\lVert \boldsymbol{x} \rVert_2 \lVert \boldsymbol{y} \rVert_2}$
We can calculate that as well using numpy:
```python
# Calculate the cosine similarity
np.sum(X*Y) / (np.sqrt(np.dot(X,X)) * np.sqrt(np.dot(Y,Y)))
```
0.9429903335828895
There is also a **cosine dissimilarity** function derived from the similarity function. It looks like so: $1 - \cos {\theta} = 1 - \frac{\boldsymbol{x} \cdot \boldsymbol{y}}{\lVert \boldsymbol{x} \rVert_2 \lVert \boldsymbol{y} \rVert_2}$
This can be calculated by using numpy, or easily with the scipy method:
```python
ssd.cosine(X, Y)
```
0.05700966641711047
##### Matrices
It is also important to become familiar with matrix operations, as they are an integral piece of neural networks. Luckily, numpy also makes this quite easy.
```python
# For displaying matrices
from sympy import *
init_printing(use_latex=True)
X = np.array([5.0, 10.0])
Y = np.array([1.0, 8.0])
Z = np.array([2.0, 4.5])
# Create the matrix
data = np.array([X, Y, Z])
Matrix(data)
```
$$\left[\begin{matrix}5.0 & 10.0\\1.0 & 8.0\\2.0 & 4.5\end{matrix}\right]$$
An important concept with matrices is **pairwise distance**. This involves calculating the distance between each pair of vectors in a given matrix. Let's take a look at how this is done using `pdist()`:
```python
ssd.squareform(ssd.pdist(data, metric='euclidean'))
```
array([[0. , 4.47213595, 6.26498204],
[4.47213595, 0. , 3.64005494],
[6.26498204, 3.64005494, 0. ]])
Yet another important skill with matrices is decomposition, where $\boldsymbol{A} = \boldsymbol{U} \boldsymbol{\Sigma} \boldsymbol{V}^\intercal$. Numpy makes this easy as well. Using `np.linalg.svd()` we will decompose a matrix into its singular values and its left- and right-singular vectors.
```python
U, S, V = np.linalg.svd(data, full_matrices=True)
Mul(Matrix(U), Matrix(np.diag(S)), Matrix(V), evaluate=False)
```
$$\left[\begin{matrix}-0.768489058813465 & 0.531109622575895 & -0.35685730382225\\-0.542199584547175 & -0.836660812335674 & -0.0775776747439673\\-0.339770771257426 & 0.13387028762612 & 0.930932096927608\end{matrix}\right] \left[\begin{matrix}14.46678736452 & 0.0\\0.0 & 2.22756893266284\end{matrix}\right] \left[\begin{matrix}-0.350056048625512 & -0.936728756268694\\0.936728756268694 & -0.350056048625512\end{matrix}\right]$$
<a name="PCA"></a>
#### PCA (Principal Component Analysis)
PCA is an extremely useful tool for neural networks, and is often one of the first things done when starting to analyze the data. It's a great way to get rid of noise in the data and prep it for being used by the neural net. The basic principle is to eliminate unneeded dimensions in the data (compression and noise reduction) and to project the data into a form in which it can be visualized.
Let's take a look at a typical PCA workflow:
```python
import pandas
import numpy as np
from sympy import *
init_printing(use_latex=True)
from sklearn.decomposition import PCA
import keras
import matplotlib.pyplot as plt
%matplotlib inline
iris_data = np.array(pandas.read_table("https://www.cs.mtsu.edu/~jphillips/courses/CSCI4850-5850/public/iris-data.txt",
delim_whitespace=True,
header=None))
# Separate into the data and class labels
X = iris_data[:,0:4] # 0,1,2,3
Y = iris_data[:,4] # 4
# Mean center the data
def mean_center(x):
return x - np.mean(x)
Xcentered = np.apply_along_axis(mean_center, 0, X)
# Decomp
U, S, V = np.linalg.svd(Xcentered, full_matrices=True)
# How much varience do the first two principal components account for?
print((100 * (S[0] + S[1]))/np.sum(S))
```
85.4490160873562
We can see that just the first two components account for over 85 percent of the variance. This is an excellent case for PCA. The singular values tell us which dimension captures the greatest amount of variance. This dimension will then be put along the x-axis. The second principal component will be placed along the y-axis.
```python
# Rotate and remove uneeded dimensions
D = np.zeros([X.shape[0], X.shape[1]])
np.fill_diagonal(D, S)
Xrotated = np.dot(U, D)
PCs = Xrotated[:,0:2]
plt.scatter(PCs[:,0],PCs[:,1],
color=[['red','green','blue'][i] for i in Y.astype(int)])
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.show()
```
<a name="single-layer-neural-networks"></a>
#### Single Layer Neural Networks
Creating the neural network requires four steps:
1. Declare which model you'd like to use. In this case _`keras.Sequential()`_
2. Add the output layer
3. Compile the network
4. Train the network (using _`fit()`_)
Example below:
```python
import pandas
import numpy as np
import keras
data = np.array(pandas.read_table("https://www.cs.mtsu.edu/~jphillips/courses/CSCI4850-5850/public/iris-data.txt",
delim_whitespace=True,
header=None))
X = data[:,0:4]
labels = data[:,4]
Y = keras.utils.to_categorical(labels,
len(np.unique(labels)))
model = keras.Sequential()
# Input size - 4
input_size = X.shape[1]
# Output size - 3
output_size = Y.shape[1]
model.add(keras.layers.Dense(output_size,
activation='sigmoid',
input_shape=[input_size]))
model.compile(loss=keras.losses.mse,
optimizer=keras.optimizers.SGD(lr=0.7),
metrics=['accuracy'])
batch_size = 16
epochs = 500
validation_split = 0.5
history = model.fit(X, Y,
batch_size = batch_size,
epochs = epochs,
verbose = 0,
validation_split = validation_split)
# Plot Results
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(1)
# summarize history for accuracy
plt.subplot(211)
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
# summarize history for loss
plt.subplot(212)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.tight_layout()
plt.show()
```
```python
score = model.evaluate(X, Y, verbose=1)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```
150/150 [==============================] - 0s 43us/step
Test loss: 0.06692401697238286
Test accuracy: 0.9533333309491475
---
<a name="week2"></a>
## Week 2
During this week, we began to dive _deeper_ into mulitlayer neural networks...
<a name="multilayer-networks"></a>
#### Multilayer Networks
##### Activation Function and Loss Function Pairings
There is often a natural pairing between the activation function and the loss function one should use for a given neural network. The tasks can be broken down into two categories, regression and classification, where regression involves a single output of continuous values, and classification entails categorizing something using one or more discrete categories. The classification category can be further broken down into binary classification or multiclass classification. There is often an activation function and loss function that lend themselves particularly well to a specific one of these categories (a short Keras sketch of one such pairing follows the table below).
| Category | Activation Function | Loss Function |
| --- | --- | ---|
| **Regression**| Linear $g(net_i) = net_i$ | Mean Sum Error _(Usually)_|
| **Binary Classification** | Logistic Sigmoid | Binary Cross-Entropy |
| **Multiclass Classification** | Softmax | Categorical Cross-Entropy |
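For example, a minimal multiclass setup in Keras would pair a softmax output layer with categorical cross-entropy; the sizes below (4 input features, 3 classes) are just placeholders.

```python
import keras

# Hypothetical sizes: 4 input features, 3 output classes
model = keras.Sequential()
model.add(keras.layers.Dense(3, activation='softmax', input_shape=[4]))
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.SGD(lr=0.1),
              metrics=['accuracy'])
```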
<a name="reinforcement"></a>
#### Reinforcement Learning
The networks we have worked on up until this point involved supervised learning. On each pass through the neural net, the network was not only shown whether it was right or wrong, but also what the answer should have been. _Reinforcement Learning_ works in a different manner. There is an __agent__ which is placed into an environment and tries to reach a goal. Only upon reaching this goal does it receive feedback about whether it was right or wrong. Therefore, there is a _difference in time_ between the agent's actions and the feedback it receives, and the agent must figure out which of its actions were correct. Learning from this temporal difference is known as __TD Learning__. See the example program below:
The following program demonstrates _Temporal Difference Learning_. We begin with a 1D maze which contains a single goal. Each section of the maze can be thought of as a __state__. We set the reward for the goal to one, and the reward for all other states to zero. We then drop the __agent__ into a random state, and let it search for the goal. Each time the goal is found the __value__ of the states used to get there will update to show the most efficient path.
Epoch is set to the number of times the agent will be dropped into a state to find the goal. Goal is set to the state in the maze that represents the goal.
Try out adjusting the length of the 1D Maze:
```python
import random
from fractions import Fraction
import numpy as np
LENGTH = 14 # Try me out
GAMMA = .5
GOAL = 0
EPOCH = 100
class Maze:
def __init__(self):
self._reward = np.zeros(LENGTH) # Reward Vector
self._value = np.zeros(LENGTH) #np.ones(LENGTH, dtype='int') # Value Vector
self._reward[GOAL] = 1
self.final_value = np.zeros(LENGTH) # Vector NOT used in learning.
# Soley used to see how many episodes required to train
# fill self._finalvalue
for i in range(LENGTH):
distance = np.abs(GOAL-i)
# account for wraping
if ((np.abs(GOAL + LENGTH) - i) < distance):
distance = np.abs(GOAL + LENGTH) - i
self.final_value[i] = GAMMA ** distance
# Disply the maze with current values inside
def display(self):
print(" ------" * LENGTH)
print("|", end='')
for i in range(LENGTH):
print(" %4s |" %Fraction(self._value[i]), end='')
print()
print(" ------" * LENGTH)
for i in range(LENGTH):
print(" ", i, " ", end='')
print()
return
# Drop agent into maze and search for goal
def episode(self, s):
# Search and update value until reaches goal
while(s != GOAL):
self._value[s] += self.delta(s)
# Move to next state
s = self._nextS(s)
self._value[s] += self.delta(s)
return
# delta(s) = (r(s) + gamma v(s + 1)) - v(s)
def delta(self, s):
if (s == GOAL):
future_val = 0.0
else:
future_val = GAMMA * self.v(self._nextS(s))
term = self._reward[s] + future_val
return term - self._value[s]
# V(s) = v(s) + gamma V(s+1)
def v(self, s):
if (s == GOAL):
return self._value[GOAL]
else:
return self._reward[s] + (GAMMA * self._value[self._nextS(s)])
# Obtain the next state to be moved to
def _nextS(self, s):
# determine left and right values (accounting for wrap around)
if (s == 0):
left = (LENGTH-1)
right = s + 1
elif (s == (LENGTH-1)):
left = s - 1
right = 0
else:
left = s - 1
right = s + 1
# determine whether to go left or right
if (self._value[left] >= self._value[right]):
nextS = left
else:
nextS = right
return nextS
# Obtain the matrix of state values
def get_values(self):
return self._value
```
```python
import numpy as np
import time
import matplotlib.pyplot as plt
%matplotlib inline
maze = Maze()
print("The goal is", GOAL)
maze.display()
done = False
i = 0
while (not done):
i += 1
s = random.randint(0, LENGTH-1)
maze.episode(s)
if (np.array_equal(maze.get_values(), maze.final_value)):
done = True
print("Maze after completion:")
maze.display()
print("Elapsed episodes:", i)
# Plot the results
v = maze.get_values()
states = np.linspace(0, LENGTH-1, LENGTH)
plt.plot(states, v)
plt.xlabel("States")
plt.ylabel("V(s)")
plt.show()
```
<a name='week3'></a>
# Week 3
<a name="standardization"></a>
##### Standardization of Data
An extra technique that may be helpful for training neural nets is standardization of data. This helps "level the playing field," so to speak, by giving each of the features an equal chance of affecting the weights. The formula for standardization is like so:
<br/>$z = \frac{x - \mu}{\sigma}$ where $\mu$ is the mean and $\sigma$ is the standard deviation.
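A quick numpy sketch of this transformation, applied column-wise to a small hypothetical data matrix:

```python
import numpy as np

X = np.array([[5.0, 10.0],
              [1.0,  8.0],
              [2.0,  4.5]])

# Standardize each feature (column): subtract its mean, divide by its standard deviation
Z = (X - np.mean(X, axis=0)) / np.std(X, axis=0)
print(np.mean(Z, axis=0))  # approximately zero for each feature
print(np.std(Z, axis=0))   # one for each feature
```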
<a name="inductive"></a>
##### Inductive Bias
We continued to learn about neural networks and reinforcement learning. Specifically, we learned about the concept of *inductive bias*. Inductive bias means tuning a network so that it more specifically fits a problem, which often makes the network lend itself more readily to that problem domain. For instance, *working memory* is an inductive bias for the problem domain of reinforcement learning. Other inductive biases exist as well. Convolutional neural networks work better for working with images, such as image recognition and classification. They provide a special modification to the neural network that aids them in this process: by only looking at a specific region of an image and then "sliding" across it, the network is able to better recognize images.
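As a small illustration of that convolutional inductive bias, a minimal Keras model for 28x28 grayscale images might look like the sketch below; the filter counts and layer sizes are arbitrary choices, not tuned values.

```python
import keras

model = keras.Sequential()
# Each of the 8 filters looks at a 3x3 patch and slides across the image
model.add(keras.layers.Conv2D(8, kernel_size=(3, 3), activation='relu',
                              input_shape=(28, 28, 1)))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2)))   # downsample the feature maps
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(10, activation='softmax'))  # e.g. 10 digit classes
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.SGD(lr=0.1),
              metrics=['accuracy'])
```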
<a name="week4"></a>
# WEEK 4
This week, our learning became more specialized, focusing specifically on reinforcement learning and working memory.
```python
```
| ad31aafa43b08da661effde27381dba3a8f2725b | 109,083 | ipynb | Jupyter Notebook | Blakes_documentation.ipynb | mtsu-cs-summer-research/neural-networks | 9f3dbc25736fb80b3c52a6630f4f61c1361a349d | [
"MIT"
]
| null | null | null | Blakes_documentation.ipynb | mtsu-cs-summer-research/neural-networks | 9f3dbc25736fb80b3c52a6630f4f61c1361a349d | [
"MIT"
]
| null | null | null | Blakes_documentation.ipynb | mtsu-cs-summer-research/neural-networks | 9f3dbc25736fb80b3c52a6630f4f61c1361a349d | [
"MIT"
]
| null | null | null | 138.782443 | 41,356 | 0.857641 | true | 4,066 | Qwen/Qwen-72B | 1. YES
2. YES | 0.90599 | 0.880797 | 0.797993 | __label__eng_Latn | 0.979991 | 0.692338 |
# Data Visualization with PCA
## The Challenges of High-dimensional Data
Once we decide to measure more than three features per input vector, it can become challenging to understand how a network is learning to solve a problem since we can no longer generate a plot or visualization of the feature vector space to which the network is being exposed. One-, two-, or three-dimensional vectors are easy enough to plot, and we could even color the corresponding points based on their class assignments to see how groups of similar items are in similar parts of the vector space. If we can see straight boundaries between the differently colored point clouds, then we would understand why a linear network might be capable of producing a reasonable solution to the problem at hand. Also, if we see regions where a single line will not suffice for separating the different classes, then we might have reason to suspect that a linear network will fail to solve the problem, and probably attempt to use a multilayer network instead.
Given the advantages of visualizing such relationships, a common trick when exploring high-dimensional data sets is to **project** the high-dimensional data vectors onto a low-dimensional space (two or three dimensions) where we can see if such relationships exist. Such projection is _risky_ in the sense that we will be throwing information away to perform the projection (similar to how neural networks throw information away when performing regression or classification), and we may no longer see some important relationships in the low-dimensional projection of the data. However, it is sometimes possible that the projection **will** preserve relationships between the original data vectors that are important for making an accurate classification or regression while also enabling visualization.
In this assignment, we will explore a commonly-used **linear** projection known as Principal Component Analysis (PCA) to visualize some data sets and see how such projections might be useful for understanding why our single-layer networks were able to effectively learn the functions that these data sets represent. Since PCA is a linear method, it will be limited in its ability to produce useful _projections_ in a manner roughly analogous to how single-layer neural networks are limited in their ability to solve _linearly separable_ problems. Since we will be producing low-dimensional projections, relative to the original dimensionality of the vectors, this technique is also a form of **dimensionality reduction** in that the new two- or three- dimensional vectors that we produce will still share some of the relationships between one another that the higher-dimensional vectors possessed. This might even mean that we could use these new vectors as inputs for a neural network instead of the original, high-dimensional vectors. This could lead to a significant reduction in the number of neural units and connection weights in a network, and hence reduce its computation time. Also, some of the original features (or combinations of features) may just not be very useful for the problem at hand, and removing them would allow the network to focus on only the more relevant features in the data. In some cases (albeit rarely with PCA), this can even lead to superior performance of the trained neural network overall. While we will focus on two-dimensional projections in this assignment, PCA can be used to reduce the dimensionality of a given set of input vectors to any chosen number less than or equal to the original dimensionality. The smaller the dimensionality of the projection: the more information is being projected away. Thus, two-dimensional projections are often too low to be of any real use on many large data sets. However, some data sets might be reduced from millions of features to thousands, or thousands to hundreds, while still preserving the vast majority of the information that they encode. Such large reductions can make certain problems far more tractable to learn than they would otherwise be.
## Gathering the Iris data set
Let's start out by grabbing some data that we are already a little familiar with, and see if we can use PCA to better understand why a linear network can learn to solve this function effectively.
Let's start by importing some tools for the job...
```python
# For reading data sets from the web.
import pandas
# For lots of great things.
import numpy as np
# To make our plots.
import matplotlib.pyplot as plt
%matplotlib inline
# Because sympy and LaTeX make
# everything look wonderful!
from sympy import *
init_printing(use_latex=True)
from IPython.display import display
# We will use this to check our implementation...
from sklearn.decomposition import PCA
# We will grab another data set using Keras
# after we finish up with Iris...
import keras
```
Using TensorFlow backend.
Now let's grab the Iris data set and start projecting!
```python
iris_data = np.array(pandas.read_table("https://www.cs.mtsu.edu/~jphillips/courses/CSCI4850-5850/public/iris-data.txt",
delim_whitespace=True,
header=None))
```
```python
# Remember the data is composed of feature
# vectors AND class labels...
X = iris_data[:,0:4] # 0,1,2,3
Y = iris_data[:,4] # 4
# Pretty-print with display()!
display(X.shape)
display(Y.shape)
display(Matrix(np.unique(Y)).T)
```
The Iris data set consists of 150 four-dimensional feature vectors, each one assigned to one of three class labels (0,1,2) corresponding to an iris species.
We could potentially use **four** dimensions to plot and understand this data. Namely, we could make 3D plots using the first three dimensions, and sort the points along the fourth dimension so that we could play them in-sequence like a movie. However, that can still be tricky to visualize since we may miss some relationships between frames in our "movie" if they are far apart in time. Potentially more useful would be to plot the first three dimensions in one plot, then the last three dimensions in another, where the two plots now share the middle two dimensions. Still, if relationships between the first and fourth dimensions were the most important, we might not see them very clearly using this presentation.
Let's see if a PCA projection down to just two dimensions would be more effective.
To do this we will be using some linear algebra that we have already seen before, but we need to process the data just a little before we can use those tools.
First, we will _mean-center_ the values of _each_ feature in the data. That is, we will find the _mean_ value of the first feature across all examples in the data set, and then subtract the mean value from this feature for all examples. We will perform the same operation for all four features as well, so that each feature will have its mean value effectively set to zero. You can think of this as moving the entire set of data vectors so that the average of the data vectors now lies at the value zero in all four dimensions. In other words, it's a _translation_ operation on the original data. The relative distances between all of the points is maintained, so all of the relationships between the data vectors important for classification is maintained as well.
We will use a custom function for this, that we apply to each of the columns using the `apply_along_axis()` function:
```python
# Mean center a vector
def mean_center(x):
return x - np.mean(x)
Xcentered = np.apply_along_axis(mean_center,0,X)
```
Now that we have a mean-centered data matrix, we will use singular value decomposition to extract the left-singular vectors and singular-values of this matrix.
```python
U,S,V = np.linalg.svd(Xcentered,full_matrices=True)
# Percent variance accounted for
plt.plot(100.0*S/np.sum(S))
plt.ylabel('% Var')
plt.xlabel('Singular Value')
plt.show()
```
Each of the singular values indicates some amount of variance present in the original data set that is captured by the corresponding left-singular vector (column of U). They are sorted in order from largest to smallest when returned by the `svd()` function so that you can see that the largest amount of variance is captured by the first left-singular vector, the second most variance by the second singular-vector, and so on. Often, the sum of the singular values is calculated to obtain the _total_ variance in the data, and then used to normalize the variance to obtain the percentage of variance captured by each left-singular vector. Given the data above, it is clear that the first two vectors alone will account for over 85% of the total variance in the data, and should form a reasonable projection for the data set.
```python
# Variance accounted for in the first two principal components
100.0*(S[0]+S[1])/np.sum(S)
```
The singular-values (S) are mapped into a rectangular, diagonal matrix which is then multiplied by the left-singular vectors (U). The vectors in U are all unit length, so this operation effectively scales the length of the first 4 vectors by each of the corresponding singular values. These operations produce a rotated version of our original data set where the major orthogonal directions capturing the largest variance in the data lie along the principal axes. Each of these so-called principal components is a linear combination of our original feature vectors, and allows us to produce a projection onto a smaller set of these components by simply throwing away vectors associated with small singular values. Thus, while we obtain all 4 of the principal components for the iris data set, we will throw away the last two as they capture less than 15% of the variance in the data.
```python
D = np.zeros([X.shape[0],X.shape[1]])
np.fill_diagonal(D,S)
Xrotated = np.dot(U,D)
# First two principal components!
PCs = Xrotated[:,0:2]
PCs.shape
```
Now that we have projected our data set into a low-dimensional space where we can better visualize it, let's make a plot to see how it looks. We will be careful to color the points by the associated class label.
```python
plt.scatter(PCs[:,0],PCs[:,1],
color=[['red','green','blue'][i] for i in Y.astype(int)])
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.show()
```
You can see in the above plot that the data forms three fairly distinct groups of points. The red group on the left is easily linearly separable from the other two groups. Even the green and blue groups are fairly distinct. While you can almost draw a straight line between them, it appears that a few data points from each of these groups would lie on the opposite side, and not allow for perfect classification with a linear network. Nevertheless, we can now see why a linear network might work well on this data, and (perhaps more importantly) that an additional feature measurement may be needed to completely separate the green and blue species.
We can perform the same analysis using SciKitLearn:
```python
pca = PCA(2)
PCs = pca.fit_transform(X)[:,0:2]
```
```python
plt.scatter(PCs[:,0],PCs[:,1],
color=[['red','green','blue'][i] for i in Y.astype(int)])
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.show()
```
The result is almost the same, but you will notice that the data has been "flipped" along the second principal component. This is common: the sign of each singular vector is arbitrary, so an equally valid solution reflects the data along an axis while still preserving all of the variance and the internal relationships between the data points. Either way, the same amount of variance is accounted for on each component, and decision boundaries can still be explored.
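If a consistent orientation is needed, for example to compare the two projections directly, the sign of each component can simply be standardized. Below is a minimal sketch (not part of the original analysis) that flips each column so its largest-magnitude entry is positive; `PCs_svd` and `PCs_sklearn` are hypothetical names for the two projections computed above:
```python
import numpy as np

def align_signs(P):
    # Flip each column so that its largest-magnitude entry is positive.
    P = P.copy()
    for j in range(P.shape[1]):
        idx = np.argmax(np.abs(P[:, j]))
        if P[idx, j] < 0:
            P[:, j] *= -1.0
    return P

# PCs_svd_aligned = align_signs(PCs_svd)
# PCs_sklearn_aligned = align_signs(PCs_sklearn)
```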
Let's perform a similar analysis with a much larger data set.
## Exploring MNIST with PCA
```python
# Load the MNIST data set using Keras
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Subsample (there's a lot of data here!)
X = x_train[range(0,x_train.shape[0],10),:,:]
Y = y_train[range(0,y_train.shape[0],10)]
display(X.shape)
display(Y.shape)
```
We will only look at the training data for this data set for now. The training set consists of 60,000 images each 28x28 pixels in size. However, we have selected just 6000 of those images for this example to make the analysis more tractable (60,000 would take a long time and lots of memory to compute the PCs). Each pixel is represented by an integer intensity value between 0 and 255. For the sake of examining what these images look like, let's scale those intensities to be floating point values in the range [0,1]:
```python
X = X.astype('float32') / 255.0
```
```python
# Plot some of the images
for i in range(5):
plt.figure()
plt.imshow(X[i,:,:])
plt.show()
```
```python
display(Matrix(Y[0:5]))
```
You can see that each of these images corresponds to a hand-written digit, each labeled with the appropriate number in the category labels, Y.
We will now **flatten** these images so that we can perform principal component analysis, treating each pixel as a single measurement (feature).
```python
X = X.reshape(X.shape[0],X.shape[1]*X.shape[2])
X.shape
```
Each image has now been encoded as a feature vector in a 784-dimensional space. Even though the pixel intensities make sense to us when visualized as a 2D image, a newly initialized neural network sees only this vector space and has to learn such spatial relationships from scratch. However, let's see if PCA can provide some insight on the difficulty of this task.
We will apply the same approach as before:
1. mean-centering the features
2. calculating the SVD
3. examining the singular values
4. scaling the left-singular vectors
5. plotting the two-dimensional projection
NOTE: It may take a minute or two to compute the SVD for a data set of this size, so be patient on the steps below.
```python
# Mean-centering
Xcentered = np.apply_along_axis(mean_center,0,X)
# SVD
U,S,V = np.linalg.svd(Xcentered,full_matrices=True)
# Percent variance accounted for
plt.plot(100.0*S/np.sum(S))
plt.ylabel('% Var')
plt.xlabel('Singular Value')
plt.show()
```
```python
# Variance accounted for in the first two principal components
100.0*(S[0]+S[1])/np.sum(S)
```
You can see that the variance accounted for drops off sharply (which is good, because having a few PCs that capture a lot of variance is useful). However, the first two components together capture less than 5% of the total variance in the data set, so PCA might not be so useful for visualization here.
However, let's take a quick look at one more thing before moving on:
```python
# Variance accounted for in the first 340 principal components
display(100.0*(np.sum(S[0:340]))/np.sum(S))
# Reduction?
display(100*340/len(S))
```
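Rather than hard-coding 340 components, one can search for the smallest number of components that reaches a chosen threshold. Here is a short sketch that follows the notebook's convention of normalizing by the sum of the singular values; the 0.90 threshold is an arbitrary choice:
```python
import numpy as np

def components_for_variance(S, threshold=0.90):
    # Cumulative fraction of the variance measure used above (sum of singular values)
    frac = np.cumsum(S) / np.sum(S)
    # Index of the first component at which the threshold is reached (assumes it is reached)
    return int(np.argmax(frac >= threshold)) + 1

# e.g. components_for_variance(S) with S from the SVD computed above
```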
Notice that 90% of the **total** variance in the data can be captured using 340 principal components. 340 is just over 43% of the original dimensionality, so 90% of the data set can be effectively represented in a space less than half the original size. Thus, PCA can also be thought of as a linear data compression technique. While we can't visualize a 340-dimensional space, a much smaller network would be required to process this data, possibly without sacrificing generalization accuracy (but that's for a later time). Let's just take a look at the first two principal components for now:
```python
D = np.zeros([X.shape[0],X.shape[1]])
np.fill_diagonal(D,S)
Xrotated = np.dot(U,D)
# First two principal components!
PCs = Xrotated[:,0:2]
PCs.shape
```
```python
# Need a lot of colors for this one!
plt.scatter(PCs[:,0],PCs[:,1],
color=[['red','green','blue','cyan','magenta','yellow','black','brown','grey','purple'][i] for i in Y.astype(int)])
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.show()
```
Note that the clutter of 6000 points is pretty bad in this space, and reducing the sampling may help in some ways. Let's try that now:
```python
plt.scatter(PCs[range(0,6000,10),0],PCs[range(0,6000,10),1],
color=[['red','green','blue','cyan','magenta','yellow','black','brown','grey','purple'][i] for i in Y[range(0,6000,10)].astype(int)])
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.show()
```
Even with the subsampling, some interesting divisions between classes can be found, but others are not so clear. For example, the red zeros and the green ones seem to lie on opposite ends of the first principal component, and the brown sevens look quite distinct from the cyan threes. Other classes are more cluttered, like the purple nines, brown sevens, and magenta fours, which makes some sense because these digits share some common visual features.
Because PCA is a linear technique, it is somewhat limited in its ability to capture the non-linear relationships between the data vectors. Also, two dimensions, while useful for visualization, are often too few to capture all of the relevant relationships that allow for categorization. The 2D representation can be thought of as a lower bound on the distances between the data vectors: adding more dimensions may move points farther apart, but can never move them closer together. Even with these drawbacks, PCA is a useful technique for quickly creating projections of high-dimensional data onto lower-dimensional spaces.
```python
```
| c1e064a4a934ddd3eb346b14b1fa77f3dd448706 | 304,658 | ipynb | Jupyter Notebook | Introductions/Visualization with PCA.ipynb | mtr3t/notebook-examples | 936f24e87e23160c73b8b4d01a37f1040e0ceb61 | [
"MIT"
]
| null | null | null | Introductions/Visualization with PCA.ipynb | mtr3t/notebook-examples | 936f24e87e23160c73b8b4d01a37f1040e0ceb61 | [
"MIT"
]
| null | null | null | Introductions/Visualization with PCA.ipynb | mtr3t/notebook-examples | 936f24e87e23160c73b8b4d01a37f1040e0ceb61 | [
"MIT"
]
| null | null | null | 380.347066 | 92,100 | 0.934008 | true | 3,845 | Qwen/Qwen-72B | 1. YES
2. YES | 0.847968 | 0.908618 | 0.770479 | __label__eng_Latn | 0.999637 | 0.628412 |
## IBM Quantum Challenge Fall 2021
# Challenge 1: Optimizing your portfolio with quantum computers
<div class="alert alert-block alert-info">
We recommend that you switch to **light** workspace theme under the Account menu in the upper right corner for optimal experience.
## Introduction: What is portfolio optimization?
Portfolio optimization is a crucial process for anyone who wants to maximize returns from their investments.
Investments are usually a collection of so-called assets (stock, credits, bonds, derivatives, calls, puts, etc..) and this collection of assets is called a **portfolio**.
<center></center>
The goal of portfolio optimization is to minimize risks (financial loss) and maximize returns (financial gain). But this process is not as simple as it may seem. Gaining high returns with little risk is indeed too good to be true. Risks and returns usually have a trade-off relationship, which makes optimizing your portfolio a little more complicated. As Dr. Harry Markowitz states in the Modern Portfolio Theory he created in 1952, "risk is an inherent part of higher reward."
**Modern Portfolio Theory (MPT)** <br>
An investment theory based on the idea that investors are risk-averse, meaning that when given two portfolios that offer the same expected return they will prefer the less risky one. Investors can construct portfolios to maximize expected return based on a given level of market risk, emphasizing that risk is an inherent part of higher reward. It is one of the most important and influential economic theories dealing with finance and investment. Dr. Harry Markowitz created the modern portfolio theory (MPT) in 1952 and won the Nobel Prize in Economic Sciences in 1990 for it. <br><br>
**Reference:** [**Modern Portfolio Theory**](https://en.wikipedia.org/wiki/Modern_portfolio_theory)
## Challenge
<div class="alert alert-block alert-success">
**Goal**
Portfolio optimization is a crucial process for anyone who wants to maximize returns from their investments. In this first challenge, you will learn some of the basic theory behind portfolio optimization and how to formulate the problem so it can be solved by quantum computers. During the process, you will learn about Qiskit's Finance application class and methods to solve the problem efficiently.
1. **Challenge 1a**: Learn how to use the PortfolioOptimization() method in Qiskit's Finance module to convert the portfolio optimization into a quadratic program.
2. **Challenge 1b**: Implement VQE to solve a four-stock portfolio optimization problem based on the instance created in challenge 1a.
3. **Challenge 1c**: Solve the same problem using QAOA with a budget of 3 and the option of a double weight for any of the assets in your portfolio.
</div>
<div class="alert alert-block alert-info">
Before you begin, we recommend watching the [**Qiskit Finance Demo Session with Julien Gacon**](https://youtu.be/UtMVoGXlz04?t=2022) and check out the corresponding [**demo notebook**](https://github.com/qiskit-community/qiskit-application-modules-demo-sessions/tree/main/qiskit-finance) to learn about Qiskit's Finance module and its appications in portfolio optimization.
</div>
## 1. Finding the efficient frontier
The Modern portfolio theory (MPT) serves as a general framework to determine an ideal portfolio for investors. The MPT is also referred to as mean-variance portfolio theory because it assumes that any investor will choose the optimal portfolio from the set of portfolios that
- Maximizes expected return for a given level of risk; and
- Minimizes risks for a given level of expected returns.
The figure below shows the minimum variance frontier of modern portfolio theory where the horizontal axis shows the risk and the vertical axis shows expected return.
<center></center>
Consider a situation where you have two stocks to choose from: A and B. You can invest your entire wealth in one of these two stocks. Or you can invest 10% in A and 90% in B, or 20% in A and 80% in B, or 70% in A and 30% in B, etc ... There is a huge number of possible combinations and this is a simple case when considering two stocks. Imagine the different combinations you have to consider when you have thousands of stocks.
The minimum variance frontier shows the minimum variance that can be achieved for a given level of expected return. To construct a minimum-variance frontier of a portfolio:
- Use historical data to estimate the mean, variance of each individual stock in the portfolio, and the correlation of each pair of stocks.
- Use a computer program to find out the weights of all stocks that minimize the portfolio variance for each pre-specified expected return.
- Calculate the expected returns and variances for all the minimum variance portfolios determined in step 2 and then graph the two variables.
Investors will never want to hold a portfolio below the minimum variance point. They will always get higher returns along the positively sloped part of the minimum-variance frontier. And the positively sloped part of the minimum-variance frontier is called the **efficient frontier**.
The **efficient frontier** is where the optimal portfolios are. And it helps narrow down the different portfolios from which the investor may choose.
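To make the three construction steps above concrete, here is a small illustrative sketch (not part of the original challenge) that samples random weight vectors for three hypothetical assets and computes each portfolio's expected return $w^T\mu$ and risk $\sqrt{w^T\Sigma w}$; plotting return against risk for many samples traces out a cloud whose upper-left edge approximates the efficient frontier. The mean vector and covariance matrix below are made up purely for illustration:
```python
import numpy as np

rng = np.random.default_rng(0)
mu_ex = np.array([0.15, 0.06, 0.09])              # illustrative expected returns
sigma_ex = np.array([[0.040, 0.010, 0.000],
                     [0.010, 0.020, 0.010],
                     [0.000, 0.010, 0.030]])      # illustrative covariance matrix

returns, risks = [], []
for _ in range(5000):
    w = rng.random(3)
    w /= w.sum()                                  # weights sum to one
    returns.append(w @ mu_ex)                     # expected portfolio return
    risks.append(np.sqrt(w @ sigma_ex @ w))       # portfolio risk (std. deviation)

# A scatter plot of (risks, returns) shows the cloud; its upper-left edge
# approximates the efficient frontier discussed above.
```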
## 2. Goal Of Our Exercise
The goal of this exercise is to find the efficient frontier for a given inherent risk using a quantum approach. We will use Qiskit's Finance application modules to convert our portfolio optimization problem into a quadratic program so we can then use variational quantum algorithms such as VQE and QAOA to solve it. Let's first start by looking at the actual problem we have at hand.
## 3. Four-Stock Portfolio Optimization Problem
Let us consider a portfolio optimization problem where you have a total of four assets (e.g. STOCK0, STOCK1, STOCK2, STOCK3) to choose from. Your goal is to find out a combination of two assets that will minimize the tradeoff between risk and return which is the same as finding the efficient frontier for the given risk.
## 4. Formulation
How can we formulate this problem?<br>
The function which describes the efficient frontier can be formulated into a quadratic program with linear constraints as shown below. <br>
The terms that are marked in red are associated with risks and the terms in blue are associated with returns.
You can see that our goal is to minimize the tradeoff between risk and return. In general, the function we want to optimize is called an objective function. <br> <br>
<div align="center"> <font size=5em >$\min_{x \in \{0, 1\}^n}: $</font> <font color='red', size=5em >$q x^n\Sigma x$</font> - <font color='blue', size=5em>$\mu^n x$</font> </div>
<div align="center"> <font size=5em >$subject$</font> <font size=5em >$to: 1^n x = B$</font> </div>
- <font size=4em >$x$</font> indicates asset allocation.
- <font size=4em >$Σ$</font> (sigma) is a covariance matrix.
A covariance matrix is a useful math concept that is widely applied in financial engineering. It is a statistical measure of how two asset prices vary with respect to each other. When the covariance between two stocks is high and positive, their prices tend to move together; when one changes, the other tends to change as well.
- <font size=4em >$q$</font> is called a risk factor (risk tolerance), which is an evaluation of an individual's willingness or ability to take risks. For example, when you use automated financial advising services, so-called robo-advising, you will usually see different risk tolerance levels. The q value here plays the same role and takes a value between 0 and 1.
- <font size=4em >$𝝁$</font> (mu) is the expected return and is something we obviously want to maximize.
- <font size=4em >$n$</font> is the number of different assets we can choose from
- <font size=4em >$B$</font> stands for Budget.
And budget in this context means the number of assets we can allocate in our portfolio.
#### Goal:
Our goal is to find the **x** value. The x value here indicates which asset to pick (𝑥[𝑖]=1) and which not to pick (𝑥[𝑖]=0).
#### Assumptions:
We assume the following simplifications:
- all assets have the same price (normalized to 1),
- the full budget $B$ has to be spent, i.e. one has to select exactly $B$ assets.
- the equality constraint $1^T x = B$ is mapped to a penalty term $(1^T x - B)^2$ which is scaled by a parameter and subtracted from the objective function.
## Step 1. Import necessary libraries
```python
#Let us begin by importing necessary libraries.
from qiskit import Aer
from qiskit.algorithms import VQE, QAOA, NumPyMinimumEigensolver
from qiskit.algorithms.optimizers import *
from qiskit.circuit.library import TwoLocal
from qiskit.utils import QuantumInstance
from qiskit.utils import algorithm_globals
from qiskit_finance import QiskitFinanceError
from qiskit_finance.applications.optimization import PortfolioOptimization
from qiskit_finance.data_providers import *
from qiskit_optimization.algorithms import MinimumEigenOptimizer
from qiskit_optimization.applications import OptimizationApplication
from qiskit_optimization.converters import QuadraticProgramToQubo
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import datetime
import warnings
from sympy.utilities.exceptions import SymPyDeprecationWarning
warnings.simplefilter("ignore", SymPyDeprecationWarning)
```
## Step 2. Generate time series data (Financial Data)
Let's first generate random time series financial data for a total number of stocks n=4. We use RandomDataProvider for this. We go back in time and retrieve financial data from November 5, 1955 to October 26, 1985.
```python
# Set parameters for assets and risk factor
num_assets = 4 # set number of assets to 4
q = 0.5 # set risk factor to 0.5
budget = 2 # set budget as defined in the problem
seed = 132 #set random seed
# Generate time series data
stocks = [("STOCK%s" % i) for i in range(num_assets)]
data = RandomDataProvider(tickers=stocks,
start=datetime.datetime(1955,11,5),
end=datetime.datetime(1985,10,26),
seed=seed)
data.run()
```
```python
# Let's plot our financial data
for (cnt, s) in enumerate(data._tickers):
plt.plot(data._data[cnt], label=s)
plt.legend()
plt.xticks(rotation=90)
plt.xlabel('days')
plt.ylabel('stock value')
plt.show()
```
<div id='problem'></div>
<div class="alert alert-block alert-danger">
**WARNING** Please do not change the start/end dates that are given to the RandomDataProvider in this challenge. Otherwise, your answers will not be graded properly.
</div>
## Step 3. Quadratic Program Formulation
Let's generate the expected return first and then the covariance matrix which are both needed to create our portfolio.
### Expected Return μ
Expected return of a portfolio is the anticipated amount of returns that a portfolio may generate, making it the mean (average) of the portfolio's possible return distribution.
For example, let's say stock A, B and C each weighted 50%, 20% and 30% respectively in the portfolio. If the expected return for each stock was 15%, 6% and 9% respectively, the expected return of the portfolio would be:
<div align="center"> μ = (50% x 15%) + (20% x 6%) + (30% x 9%) = 11.4% </div>
For the problem data we generated earlier, we can calculate the expected return over the 30 years period from 1955 to 1985 by using the following `get_period_return_mean_vector()` method which is provided by Qiskit's RandomDataProvider.
```python
#Let's calculate the expected return for our problem data
mu = data.get_period_return_mean_vector() # Returns a vector containing the mean value of each asset's expected return.
print(mu)
```
### Covariance Matrix Σ
Covariance Σ is a statistical measure of how two assets' mean returns vary with respect to each other and helps us understand the amount of risk involved from an investment portfolio's perspective, so we can make an informed decision about buying or selling stocks.
If you have 'n' stocks in your portfolio, the size of the covariance matrix will be n x n.
Let us plot the covariance matrix for our 4-stock portfolio, which will be a 4 x 4 matrix.
```python
# Let's plot our covariance matrix Σ(sigma)
sigma = data.get_period_return_covariance_matrix() #Returns the covariance matrix of the four assets
print(sigma)
fig, ax = plt.subplots(1,1)
im = plt.imshow(sigma, extent=[-1,1,-1,1])
x_label_list = ['stock3', 'stock2', 'stock1', 'stock0']
y_label_list = ['stock3', 'stock2', 'stock1', 'stock0']
ax.set_xticks([-0.75,-0.25,0.25,0.75])
ax.set_yticks([0.75,0.25,-0.25,-0.75])
ax.set_xticklabels(x_label_list)
ax.set_yticklabels(y_label_list)
plt.colorbar()
plt.clim(-0.000002, 0.00001)
plt.show()
```
The left-to-right diagonal values (yellow boxes in the figure below) show the relation of a stock with 'itself'. And the off-diagonal values show the deviation of each stock's mean expected return with respect to each other. A simple way to look at a covariance matrix is:
- If two stocks increase and decrease simultaneously then the covariance value will be positive.
- If one increases while the other decreases then the covariance will be negative.
<center></center>
You may have heard the phrase "Don't Put All Your Eggs in One Basket." If you invest in things that always move in the same direction, there will be a risk of losing all your money at the same time. Covariance matrix is a nice measure to help investors diversify their assets to reduce such risk.
Now that we have all the values we need to build our portfolio for optimization, we will look into Qiskit's Finance application class that will help us construct the quadratic program for our problem.
## Step 4. Qiskit Finance application class
In Qiskit, there is a dedicated [`PortfolioOptimization`](https://qiskit.org/documentation/finance/stubs/qiskit_finance.applications.PortfolioOptimization.html#qiskit_finance.applications.PortfolioOptimization) application to construct the quadratic program for portfolio optimizations.
The PortfolioOptimization class creates a portfolio instance by taking the following **five arguments**, then converts the instance into a quadratic program.
Arguments of the PortfolioOptimization class:
- expected_returns
- covariances
- risk_factor
- budget
- bounds
Once our portfolio instance is converted into a quadratic program, we can use quantum variational algorithms such as the Variational Quantum Eigensolver (VQE) or the Quantum Approximate Optimization Algorithm (QAOA) to find the optimal solution to our problem.<br>
We already obtained expected_return and covariances from Step 3 and have risk factor and budget pre-defined. So, let's build our portfolio using the [`PortfolioOptimization`](https://qiskit.org/documentation/finance/stubs/qiskit_finance.applications.PortfolioOptimization.html#qiskit_finance.applications.PortfolioOptimization) class.
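Before attempting the challenge cell below on your own, here is a rough, hedged sketch of what such a construction can look like. The keyword names follow the five arguments listed above, and we assume the conversion method is `to_quadratic_program()`; please verify against the PortfolioOptimization documentation:
```python
# Hedged sketch only -- try the challenge cell yourself and check the docs.
# mu, sigma, q and budget are the quantities defined in the previous steps.
sketch_portfolio = PortfolioOptimization(expected_returns=mu,
                                         covariances=sigma,
                                         risk_factor=q,
                                         budget=budget)
sketch_qp = sketch_portfolio.to_quadratic_program()
print(sketch_qp)
```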
## Challenge 1a: Create the portfolio instance using PortfolioOptimization class
<div id='u-definition'></div>
<div class="alert alert-block alert-success">
**Challenge 1a** <br>
Complete the code to generate the portfolio instance using the [**PortfolioOptimization**](https://qiskit.org/documentation/finance/stubs/qiskit_finance.applications.PortfolioOptimization.html#qiskit_finance.applications.PortfolioOptimization) class. Make sure you use the **five arguments** and their values which were obtained in the previous steps and convert the instance into a quadratic program **qp**.
</div>
<div id='problem'></div>
<div class="alert alert-block alert-info">
**Note:** A binary list [1. 1. 0. 0.] indicates a portfolio consisting of STOCK2 and STOCK3.
</div>
```python
##############################
# Provide your code here
portfolio =
qp =
##############################
print(qp)
```
If you were able to successfully generate the code, you should see a standard representation of the formulation of our quadratic program.
```python
# Check your answer and submit using the following code
from qc_grader import grade_ex1a
grade_ex1a(qp)
```
## Minimum Eigen Optimizer
Interestingly, our portfolio optimization problem can be solved as a ground state search of a Hamiltonian. You can think of a Hamiltonian as an energy function representing the total energy of a physical system we want to simulate, such as a molecule or a magnet. The physical system can be further represented by a mathematical model called an [**Ising model**](https://en.wikipedia.org/wiki/Ising_model) which gives us a framework to convert our binary variables into so-called spin up (+1) or spin down (-1) states.
When it comes to applying the optimization algorithms, the algorithms usually require problems to satisfy certain criteria to be applicable. For example, variational algorithms such as VQE and QAOA can only be applied to [**Quadratic Unconstrained Binary Optimization (QUBO)**](https://en.wikipedia.org/wiki/Quadratic_unconstrained_binary_optimization) problems, thus Qiskit provides converters to automatically map optimization problems to these different formats whenever possible.
<center></center>
Solving a QUBO is equivalent to finding a ground state of a Hamiltonian. The Minimum Eigen Optimizer translates the quadratic program to a Hamiltonian, then calls a given Minimum Eigensolver such as VQE or QAOA to compute the ground state and returns the optimization results for us.
This approach allows us to utilize computing ground states in the context of solving optimization problems as we will demonstrate in the next step in our challenge exercise.
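For instance, the converter imported at the top of this notebook can be applied directly to a quadratic program. A brief sketch, assuming `qp` has been built in Challenge 1a:
```python
# Convert the constrained quadratic program into an unconstrained QUBO;
# the equality (budget) constraint is absorbed into a quadratic penalty term.
converter = QuadraticProgramToQubo()
qubo = converter.convert(qp)
print(qubo)
```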
## Step 5. Solve with classical optimizer as a reference
Let's solve the problem, first classically...
We can now use the operator we built above without regard to the specifics of how it was created. We set the algorithm to the NumPyMinimumEigensolver so we can have a classical reference. A backend is not required since this is computed classically, without quantum computation. The result is returned as a dictionary.
```python
exact_mes = NumPyMinimumEigensolver()
exact_eigensolver = MinimumEigenOptimizer(exact_mes)
result = exact_eigensolver.solve(qp)
print(result)
```
The optimal value indicates your asset allocation.
## Challenge1b: Solution using VQE
**Variational Quantum Eigensolver (VQE)** is a classical-quantum hybrid algorithm which outsources some of the processing workload to a classical computer to efficiently calculate the ground state energy (lowest energy) of a [**Hamiltonian**](https://en.wikipedia.org/wiki/Hamiltonian_(quantum_mechanics)). As we discussed earlier, we can reformulate the quadratic program as a ground state energy search to be solved by [**VQE**](https://qiskit.org/documentation/stubs/qiskit.algorithms.VQE.html) where the ground state corresponds to the optimal solution we are looking for. In this challenge exercise, you will be asked to find the optimal solution using VQE. <br>
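As a hedged illustration of the general pattern (not necessarily the exact answer the grader expects), VQE needs an ansatz, a classical optimizer and a backend wrapped in a `QuantumInstance`; the `TwoLocal` circuit imported above is a common ansatz choice. Try the challenge cell first; a sketch might look like this:
```python
# Hedged sketch of a typical VQE setup using objects already imported above.
sketch_ansatz = TwoLocal(num_assets, 'ry', 'cz', reps=3, entanglement='full')
sketch_qi = QuantumInstance(backend=Aer.get_backend('statevector_simulator'),
                            seed_simulator=1234, seed_transpiler=1234)
sketch_vqe = VQE(ansatz=sketch_ansatz,
                 optimizer=SLSQP(maxiter=1000),
                 quantum_instance=sketch_qi)
```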
<div id='u-definition'></div>
<div class="alert alert-block alert-success">
**Challenge 1b** <br>
Find the same solution by using Variational Quantum Eigensolver (VQE) to solve the problem. We will specify the optimizer and variational form to be used.
</div>
<div id='problem'></div>
<div class="alert alert-block alert-info">
**HINT:** If you are stuck, check out [**this qiskit tutorial**](https://qiskit.org/documentation/finance/tutorials/01_portfolio_optimization.html) and adapt it to our problem:
</div>
Below is some code to get you started.
```python
optimizer = SLSQP(maxiter=1000)
algorithm_globals.random_seed = 1234
backend = Aer.get_backend('statevector_simulator')
##############################
# Provide your code here
vqe =
##############################
vqe_meo = MinimumEigenOptimizer(vqe) #please do not change this code
result = vqe_meo.solve(qp) #please do not change this code
print(result) #please do not change this code
```
```python
# Check your answer and submit using the following code
from qc_grader import grade_ex1b
grade_ex1b(vqe, qp)
```
VQE should give you the same optimal results as the reference solution.
## Challenge 1c: Portfolio optimization for B=3, n=4 stocks
In this exercise, solve the same problem where one can allocate double weights (can allocate twice the amount) for a single asset. (For example, if you allocate twice for STOCK3 and once for STOCK2, then your portfolio can be represented as [2, 1, 0, 0]. If you allocate a single weight for STOCK0, STOCK1, STOCK2 then your portfolio will look like [0, 1, 1, 1].) <br>
Furthermore, change the constraint to B=3. With this new constraint, find the optimal portfolio that minimizes the tradeoff between risk and return.
<div id='u-definition'></div>
<div class="alert alert-block alert-success">
**Challenge 1c** <br>
Complete the code to generate the portfolio instance using the PortfolioOptimization class. <br>
Find the optimal portfolio for budget=3 where one can allocate double weights for a single asset.<br>
Use QAOA to find your optimal solution and submit your answer.
</div>
<div id='problem'></div>
<div class="alert alert-block alert-info">
**HINT:** Remember that any one of STOCK0, STOCK1, STOCK2, STOCK3 can have double weights in our portfolio. How can we change our code to accommodate integer variables? <br>
</div>
## Step 1: Import necessary libraries
```python
#Step 1: Let us begin by importing necessary libraries
import qiskit
from qiskit import Aer
from qiskit.algorithms import VQE, QAOA, NumPyMinimumEigensolver
from qiskit.algorithms.optimizers import *
from qiskit.circuit.library import TwoLocal
from qiskit.utils import QuantumInstance
from qiskit.utils import algorithm_globals
from qiskit_finance import QiskitFinanceError
from qiskit_finance.applications.optimization import *
from qiskit_finance.data_providers import *
from qiskit_optimization.algorithms import MinimumEigenOptimizer
from qiskit_optimization.applications import OptimizationApplication
from qiskit_optimization.converters import QuadraticProgramToQubo
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import datetime
import warnings
from sympy.utilities.exceptions import SymPyDeprecationWarning
warnings.simplefilter("ignore",SymPyDeprecationWarning)
```
## Step 2: Generate Time Series Data (Financial Data)
```python
# Step 2. Generate time series data for four assets.
# Do not change start/end dates specified to generate problem data.
seed = 132
num_assets = 4
stocks = [("STOCK%s" % i) for i in range(num_assets)]
data = RandomDataProvider(tickers=stocks,
start=datetime.datetime(1955,11,5),
end=datetime.datetime(1985,10,26),
seed=seed)
data.run()
```
```python
# Let's plot our financial data (We are generating the same time series data as in the previous example.)
for (cnt, s) in enumerate(data._tickers):
plt.plot(data._data[cnt], label=s)
plt.legend()
plt.xticks(rotation=90)
plt.xlabel('days')
plt.ylabel('stock value')
plt.show()
```
## Step 3: Calculate expected return mu and covariance sigma
```python
# Step 3. Calculate mu and sigma for this problem
mu2 = data.get_period_return_mean_vector() #Returns a vector containing the mean value of each asset.
sigma2 = data.get_period_return_covariance_matrix() #Returns the covariance matrix associated with the assets.
print(mu2, sigma2)
```
## Step 4: Set parameters and constraints based on this challenge 1c.
```python
# Step 4. Set parameters and constraints based on this challenge 1c
##############################
# Provide your code here
q2 = #Set risk factor to 0.5
budget2 = #Set budget to 3
##############################
```
## Step 5: Complete code to generate the portfolio instance
```python
# Step 5. Complete code to generate the portfolio instance
##############################
# Provide your code here
portfolio2 =
qp2 =
##############################
```
## Step 6: Let's solve the problem using QAOA
**Quantum Approximate Optimization Algorithm (QAOA)** is another variational algorithm that has applications for solving combinatorial optimization problems on near-term quantum systems. This algorithm can also be used to calculate ground states of a Hamiltonian and can be easily implemented by using Qiskit's [**QAOA**](https://qiskit.org/documentation/stubs/qiskit.algorithms.QAOA.html) application. (You will get to learn about QAOA in detail in challenge 4. Let us first focus on the basic implementation of QAOA using Qiskit in this exercise.)
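A correspondingly hedged sketch of a QAOA setup (again, attempt the challenge cell below on your own first; `reps` controls the number of QAOA layers):
```python
# Hedged sketch of a typical QAOA setup using objects already imported above.
sketch_qaoa = QAOA(optimizer=SLSQP(maxiter=1000),
                   reps=3,
                   quantum_instance=QuantumInstance(
                       backend=Aer.get_backend('statevector_simulator'),
                       seed_simulator=1234, seed_transpiler=1234))
```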
```python
# Step 6. Now let's use QAOA to solve this problem.
optimizer = SLSQP(maxiter=1000)
algorithm_globals.random_seed = 1234
backend = Aer.get_backend('statevector_simulator')
##############################
# Provide your code here
qaoa =
##############################
qaoa_meo = MinimumEigenOptimizer(qaoa) #please do not change this code
result2 = qaoa_meo.solve(qp2) #please do not change this code
print(result2) #please do not change this code
```
Note: The QAOA execution may take up to a few minutes to complete.
# Submit your answer
```python
# Check your answer and submit using the following code
from qc_grader import grade_ex1c
grade_ex1c(qaoa, qp2)
```
### Further Reading:
For those who have successfully solved the first introductory level challenge, **congratulations!** <br>
I hope you were able to learn something about optimizing portfolios and how you can use Qiskit's Finance module to solve the example problem. <br> If you are interested in further reading, here are a few papers to explore:
<br>
1. [**Quantum optimization using variational algorithms on near-term quantum devices. Moll et al. 2017**](https://arxiv.org/abs/1710.01022)<br>
2. [**Improving Variational Quantum Optimization using CVaR. Barkoutsos et al. 2019.**](https://arxiv.org/abs/1907.04769)<br>
### Good luck and have fun with the challenge!
## Additional information
**Created by:** Yuri Kobayashi
**Version:** 1.0.0
| 6d0d0ae3dbcbb27f7e46eb1a7764ff11ee04cc10 | 37,407 | ipynb | Jupyter Notebook | content/challenge-1/.ipynb_checkpoints/challenge-1-checkpoint.ipynb | scapape/ibm-quantum-challenge-fall-2021 | fe6099e3af18ef2f5598ac4b835751874c9960f3 | [
"Apache-2.0"
]
| null | null | null | content/challenge-1/.ipynb_checkpoints/challenge-1-checkpoint.ipynb | scapape/ibm-quantum-challenge-fall-2021 | fe6099e3af18ef2f5598ac4b835751874c9960f3 | [
"Apache-2.0"
]
| null | null | null | content/challenge-1/.ipynb_checkpoints/challenge-1-checkpoint.ipynb | scapape/ibm-quantum-challenge-fall-2021 | fe6099e3af18ef2f5598ac4b835751874c9960f3 | [
"Apache-2.0"
]
| null | null | null | 39.794681 | 678 | 0.638998 | true | 6,042 | Qwen/Qwen-72B | 1. YES
2. YES | 0.835484 | 0.795658 | 0.664759 | __label__eng_Latn | 0.993322 | 0.382789 |
# Elementary frequency domain filtering
**Author: Uzhva Denis Romanovich**
**Lecturer: Soloviev Igor Pavlovich**
## The basics of filtering in the frequency domain
### Theory
The concept of frequency is very important in the study of signal processing.
Frequency is the number of occurrences of a repeating event per unit of time.
In order to obtain the spectrum (and phases) of a signal, i.e. a set of frequencies, we usually use the discrete Fourier transform (DFT).
Since an image has two spatial dimensions, the transformation is also two-dimensional.
The corresponding equation is as follows:
$$
\begin{equation}
F(u, v) = \sum_{x=0}^{H-1} \sum_{y=0}^{W-1} f(x, y) e^{-j2\pi (ux/H + vy/W)},
\tag{1}
\end{equation}
$$
where $f(x, y)$ is a digital image of size $H \times W$.
### Code
#### Discrete Fourier transform
```python
import numpy as np
def get_dft(img, shift=False):
h = img.shape[0]
w = img.shape[1]
img_dft = np.zeros_like(img, dtype=np.complex)
img = img.astype(np.complex)
x_arr = np.arange(h)
y_arr = np.arange(w)
x_mat = np.repeat(x_arr.reshape((h, 1)), w, 1)
y_mat = np.repeat(y_arr.reshape((1, w)), h, 0)
if shift:
exp_val = (x_mat + y_mat) * np.pi * 1j
img = np.multiply(img, np.exp(exp_val))
if len(img.shape) == 3:
channels = img.shape[-1]
for ch in range(channels):
for u in range(h):
for v in range(w):
exp_val = -(u * x_mat / h + v * y_mat / w) * np.pi * 2j
                    img_dft[u, v, ch] = np.sum(np.multiply(img[:, :, ch], np.exp(exp_val)))
else:
for u in range(h):
for v in range(w):
exp_val = (u * x_mat / h + v * y_mat / w) * np.pi * 2j
img_dft[u, v] = np.sum(np.multiply(img, np.exp(exp_val)))
return img_dft
```
#### Inverse DFT
```python
import numpy as np
def get_idft(img_dft, shift=False):
h = img_dft.shape[0]
w = img_dft.shape[1]
img_idft = np.zeros_like(img_dft, dtype=np.complex)
img_dft = img_dft.astype(np.complex)
u_arr = np.arange(h)
v_arr = np.arange(w)
u_mat = np.repeat(u_arr.reshape((h, 1)), w, 1)
v_mat = np.repeat(v_arr.reshape((1, w)), h, 0)
if shift:
exp_val = (u_mat + v_mat) * np.pi * 1j
img_dft = np.multiply(img_dft, np.exp(exp_val))
if len(img_dft.shape) == 3:
channels = img_dft.shape[-1]
for ch in range(channels):
for x in range(h):
for y in range(w):
exp_val = (x * u_mat / h + y * v_mat / w) * np.pi * 2j
                    img_idft[x, y, ch] = np.sum(np.multiply(img_dft[:, :, ch], np.exp(exp_val)))
else:
for x in range(h):
for y in range(w):
exp_val = (x * u_mat / h + y * v_mat / w) * np.pi * 2j
img_idft[x, y] = np.sum(np.multiply(img_dft, np.exp(exp_val)))
img_idft /= (h * w)
return img_idft
```
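As a quick sanity check (not part of the original analysis), the magnitude spectrum produced by the hand-written transform can be compared against NumPy's FFT; for real-valued images the magnitudes are insensitive to the sign convention of the exponent, so they should agree up to rounding:
```python
import numpy as np

def compare_with_numpy(img):
    """Return the maximum absolute difference between |custom DFT| and |np.fft.fft2|."""
    custom = np.abs(get_dft(img))
    reference = np.abs(np.fft.fft2(img))
    return np.max(np.abs(custom - reference))

# e.g. compare_with_numpy(np.random.rand(16, 16)) should be close to zero
```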
### Results
#### Processing + visualization
First of all, we need to load images for the further processing.
We load images with quite simple patterns in order to describe the relations between the frequency and spatial domains.
```python
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
img_1 = Image.open('./pattern1.png')
img_2 = Image.open('./pattern2.png')
```
```python
# represent the images as tensors
np_1 = np.array(img_1)
np_2 = np.array(img_2)
print('Picture dimensions:')
print(np_1.shape)
print(np_2.shape)
```
Picture dimensions:
(64, 64, 3)
(64, 64, 3)
```python
# simplify the images by summing the channels
np_1_sum = np.sum(np_1, axis=2) // 3
np_2_sum = np.sum(np_2, axis=2) // 3
```
Apply DFT to the images:
```python
np_1_dft = get_dft(np_1_sum)
```
```python
np_2_dft = get_dft(np_2_sum)
```
```python
# log of abs
np_1_dft_logabs = np.log(np.abs(np_1_dft))
np_2_dft_logabs = np.log(np.abs(np_2_dft))
```
```python
fig, axs = plt.subplots(2, 2, figsize=(15, 15), dpi=120)
axs[0, 0].imshow(np_1_sum, cmap='gray')
axs[0, 0].set_title('Original of the 1st image')
axs[0, 1].matshow(np_1_dft_logabs, cmap='gray')
axs[0, 1].set_title('Abs. of DFT of the 1st image')
axs[1, 0].imshow(np_2_sum, cmap='gray')
axs[1, 0].set_title('Original of the 2nd image')
axs[1, 1].matshow(np_2_dft_logabs, cmap='gray')
axs[1, 1].set_title('Abs. of DFT of the 2nd image')
plt.show()
```
It can be seen that patterns emerge in the frequency domain, which correspond to the repetitive nature of the patterns in the original images.
The 1st image is especially interesting, since its pattern looks like a "sawtooth" signal, and this structure shows up clearly in its DFT.
## Simple band-rejection filter
### Theory
The idea is to filter an image in the frequency domain, i.e. to adjust certain frequencies of the image.
The band-rejection (or band-stop) filter is a very simple filter that nullifies certain frequencies or just makes them very low.
In order to apply such a filter, we use an operation of convolution in the spatial domain, which corresponds to multiplication in the frequency domain:
$$
\begin{equation}
f(x, y) * h(x, y) \iff F(u, v) H(u, v),
\tag{2}
\end{equation}
$$
where $*$ is a convolution, $f(x,y)$ and $F(u, v)$ are an image and its Fourier transform, while $h(x, y)$ and $H(u, v)$ stand for a filter and its representation in the frequency domain obtained by the DFT.
With that in mind, the simplest band-rejection filter can be defined as follows:
$$
\begin{equation}
H(u, v) =
\begin{cases}
0, & \text{if}\ u = H/2, v = W/2 \\
1, & \text{otherwise}
\end{cases}
\tag{3}
\end{equation}
$$
### Code
#### Center frequency rejection
```python
import numpy as np
def apply_cr_H(img_dft, size=3):
h2 = img_dft.shape[0] // 2
w2 = img_dft.shape[1] // 2
img_dft[h2-size:h2+size+1, w2-size:w2+size+1] = 0.j
return(img_dft)
```
### Results
#### Processing + visualization
```python
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
img_21 = Image.open('./int.jpg')
img_22 = Image.open('./flux.jpg')
```
```python
# represent the images as tensors
np_21 = np.array(img_21)
np_22 = np.array(img_22)
print('Picture dimensions:')
print(np_21.shape)
print(np_22.shape)
```
Picture dimensions:
(128, 128, 3)
(128, 128, 3)
```python
# simplify the images by summing the channels
np_21_sum = np.sum(np_21, axis=2)
np_22_sum = np.sum(np_22, axis=2)
```
Apply DFT and filter to the images:
```python
np_21_dft = get_dft(np_21_sum, True)
```
```python
np_22_dft = get_dft(np_22_sum, True)
```
```python
np_21_dft_h = apply_cr_H(np_21_dft, 3)
np_22_dft_h = apply_cr_H(np_22_dft, 3)
```
Obtain IDFT:
```python
np_21_idft = get_idft(np_21_dft_h)
```
```python
np_22_idft = get_idft(np_22_dft_h)
```
```python
np_21_idft_abs = np.abs(np_21_idft)
np_22_idft_abs = np.abs(np_22_idft)
```
```python
np_21_idft_abs /= np.max(np_21_idft_abs)
np_22_idft_abs /= np.max(np_22_idft_abs)
```
```python
np_21_idft_abs[np_21_idft_abs < 0] = 0.
np_22_idft_abs[np_22_idft_abs < 0] = 0.
```
```python
np_21_idft_abs = np.rot90(np.rot90(np_21_idft_abs))
np_22_idft_abs = np.rot90(np.rot90(np_22_idft_abs))
```
```python
fig, axs = plt.subplots(2, 2, figsize=(15, 15), dpi=120)
axs[0, 0].imshow(np_21_sum, cmap='gray')
axs[0, 0].set_title('Original of the 1st image')
axs[0, 1].imshow(np_21_idft_abs, cmap='gray')
axs[0, 1].set_title('1st image after band reject filtering')
axs[1, 0].imshow(np_22_sum, cmap='gray')
axs[1, 0].set_title('Original of the 2nd image')
axs[1, 1].imshow(np_22_idft_abs, cmap='gray')
axs[1, 1].set_title('2nd image after band reject filtering')
plt.show()
```
It is clear that band-reject filtering makes the defects on the circuits easier to recognize: by removing the low frequencies it highlights the anomalies.
```python
```
| 6f5faad963f94a0e0219d7153653be50cf5f9160 | 375,059 | ipynb | Jupyter Notebook | Task5 - Elementary frequency domain filtering/Elementary frequency domain filtering.ipynb | denisuzhva/Algorithms-of-Images-Analysis-and-Classification | 469dbf2adb363bb544d9c71cf9353eb87790513b | [
"MIT"
]
| 1 | 2020-05-29T15:17:02.000Z | 2020-05-29T15:17:02.000Z | Task5 - Elementary frequency domain filtering/Elementary frequency domain filtering.ipynb | mortarsynth/Algorithms-of-Images-Analysis-and-Classification | 469dbf2adb363bb544d9c71cf9353eb87790513b | [
"MIT"
]
| null | null | null | Task5 - Elementary frequency domain filtering/Elementary frequency domain filtering.ipynb | mortarsynth/Algorithms-of-Images-Analysis-and-Classification | 469dbf2adb363bb544d9c71cf9353eb87790513b | [
"MIT"
]
| null | null | null | 643.325901 | 242,220 | 0.944014 | true | 2,514 | Qwen/Qwen-72B | 1. YES
2. YES | 0.927363 | 0.800692 | 0.742532 | __label__eng_Latn | 0.850384 | 0.563483 |
# Aim of this notebook
* To construct the singular curve of universal type to finalize the solution of the optimal control problem
# Preamble
```python
from sympy import *
init_printing(use_latex='mathjax')
# Plotting
%matplotlib inline
## Make inline plots raster graphics
from IPython.display import set_matplotlib_formats
## Import modules for plotting and data analysis
import matplotlib.pyplot as plt
from matplotlib import gridspec,rc,colors
import matplotlib.ticker as plticker
## Parameters for seaborn plots
import seaborn as sns
sns.set(style='white',font_scale=1.25,
rc={"xtick.major.size": 6, "ytick.major.size": 6,
'text.usetex': False, 'font.family': 'serif', 'font.serif': ['Times']})
import pandas as pd
pd.set_option('mode.chained_assignment',None)
import numpy as np
from scipy.optimize import fsolve, root
from scipy.integrate import ode
backend = 'dopri5'
import warnings
# Timer
import time
from copy import deepcopy
from itertools import cycle
palette_size = 10;
clrs = sns.color_palette("Reds",palette_size)
iclrs = cycle(clrs) # iterated colors
clrs0 = sns.color_palette("Set1",palette_size)
# Suppress warnings
import warnings
warnings.filterwarnings("ignore")
```
# Parameter values
* Birth rate and const of downregulation are defined below in order to fit some experim. data
```python
d = .13 # death rate
α = .3 # low equilibrium point at expression of the main pathway (high equilibrium is at one)
θ = .45 # threshold value for the expression of the main pathway
κ = 40 # robustness parameter
```
* Symbolic variables - the list includes μ & μbar, because they will be varied later
```python
σ, φ0, φ, x, μ, μbar = symbols('sigma, phi0, phi, x, mu, mubar')
```
* Main functions
```python
A = 1-σ*(1-θ)
Eminus = (α*A-θ)**2/2
ΔE = A*(1-α)*((1+α)*A/2-θ)
ΔEf = lambdify(σ,ΔE)
```
* Birth rate and cost of downregulation
```python
b = (0.1*(exp(κ*(ΔEf(1)))+1)-0.14*(exp(κ*ΔEf(0))+1))/(exp(κ*ΔEf(1))-exp(κ*ΔEf(0))) # birth rate
χ = 1-(0.14*(exp(κ*ΔEf(0))+1)-b*exp(κ*ΔEf(0)))/b
b, χ
```
$$\left ( 0.140168330860362, \quad 0.325961223954473\right )$$
```python
c_relative = 0.1
c = c_relative*(b-d)/b+(1-c_relative)*χ/(exp(κ*ΔEf(0))+1) # cost of resistance
c
```
$$0.00833519849448376$$
* Hamiltonian *H* and a part of it ρ that includes the control variable σ
```python
h = b*(χ/(exp(κ*ΔE)+1)*(1-x)+c*x)
H = -φ0 + φ*(b*(χ/(exp(κ*ΔE)+1)-c)*x*(1-x)+μ*(1-x)/(exp(κ*ΔE)+1)-μbar*exp(-κ*Eminus)*x) + h
ρ = (φ*(b*χ*x+μ)+b*χ)/(exp(κ*ΔE)+1)*(1-x)-φ*μbar*exp(-κ*Eminus)*x
ρ1 = (φ*(b*χ*x+μ)+b*χ)/(exp(κ*ΔE)+1)*(1-x)
ρ2 = φ*μbar*exp(-κ*Eminus)*x
n = b*(1-χ*(1-x)/(exp(κ*ΔE)+1)-c*x)-d
H, ρ, n
```
$$\left ( \phi \left(\frac{\mu \left(- x + 1\right)}{e^{40 \left(- 0.385 \sigma + 0.7\right) \left(- 0.3575 \sigma + 0.2\right)} + 1} - \bar{\mu} x e^{- 20 \left(- 0.165 \sigma - 0.15\right)^{2}} + x \left(-0.00116833086036159 + \frac{0.045689440686899}{e^{40 \left(- 0.385 \sigma + 0.7\right) \left(- 0.3575 \sigma + 0.2\right)} + 1}\right) \left(- x + 1\right)\right) - \phi_{0} + 0.00116833086036159 x + \frac{0.045689440686899 \left(- x + 1\right)}{e^{40 \left(- 0.385 \sigma + 0.7\right) \left(- 0.3575 \sigma + 0.2\right)} + 1}, \quad - \bar{\mu} \phi x e^{- 20 \left(- 0.165 \sigma - 0.15\right)^{2}} + \frac{\left(- x + 1\right) \left(\phi \left(\mu + 0.045689440686899 x\right) + 0.045689440686899\right)}{e^{40 \left(- 0.385 \sigma + 0.7\right) \left(- 0.3575 \sigma + 0.2\right)} + 1}, \quad - 0.00116833086036159 x - \frac{0.140168330860362 \left(- 0.325961223954473 x + 0.325961223954473\right)}{e^{40 \left(- 0.385 \sigma + 0.7\right) \left(- 0.3575 \sigma + 0.2\right)} + 1} + 0.0101683308603616\right )$$
* Same but for no treatment (σ = 0)
```python
h0 = h.subs(σ,0)
H0 = H.subs(σ,0)
ρ0 = ρ.subs(σ,0)
H0, ρ0
```
$$\left ( \phi \left(0.00368423989943599 \mu \left(- x + 1\right) - 0.637628151621773 \bar{\mu} x - 0.001 x \left(- x + 1\right)\right) - \phi_{0} + 0.001 x + 0.000168330860361587, \quad - 0.637628151621773 \bar{\mu} \phi x + 0.00368423989943599 \left(- x + 1\right) \left(\phi \left(\mu + 0.045689440686899 x\right) + 0.045689440686899\right)\right )$$
* Machinery: definition of the Poisson brackets
```python
PoissonBrackets = lambda H1, H2: diff(H1,x)*diff(H2,φ)-diff(H1,φ)*diff(H2,x)
```
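As a quick sanity check of this definition (not in the original notebook), the bracket is antisymmetric, so {H, H} must vanish identically and {H, H0} = -{H0, H}; sympy cancels both automatically:
```python
# Antisymmetry checks for the Poisson bracket defined above
print(PoissonBrackets(H, H))                            # expected: 0
print(PoissonBrackets(H, H0) + PoissonBrackets(H0, H))  # expected: 0
```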
* Necessary functions and defining the right hand side of dynamical equations
```python
ρf = lambdify((x,φ,σ,μ,μbar),ρ)
ρ1f = lambdify((x,φ,σ,μ,μbar),ρ1)
ρ2f = lambdify((x,φ,σ,μ,μbar),ρ2)
ρ0f = lambdify((x,φ,μ,μbar),ρ0)
dxdτ = lambdify((x,φ,σ,μ,μbar),-diff(H,φ))
dφdτ = lambdify((x,φ,σ,μ,μbar),diff(H,x))
# dndτ = lambdify((x,φ,σ,μ,μbar),-n)
dVdτ = lambdify((x,σ),h)
dρdσ = lambdify((σ,x,φ,μ,μbar),diff(ρ,σ))
dδρdτ = lambdify((x,φ,σ,μ,μbar),-PoissonBrackets(ρ0-ρ,H))
def ode_rhs(t,state,μ,μbar):
x, φ, V, δρ = state
σs = [0,1]
if (dρdσ(1.,x,φ,μ,μbar)<0) and (dρdσ(θ,x,φ,μ,μbar)>0):
σstar = fsolve(dρdσ,.8,args=(x,φ,μ,μbar,))[0]
else:
σstar = 1.;
if ρf(x,φ,σstar,μ,μbar) < ρ0f(x,φ,μ,μbar):
sgm = 0
else:
sgm = σstar
return [dxdτ(x,φ,sgm,μ,μbar),dφdτ(x,φ,sgm,μ,μbar),dVdτ(x,sgm),dδρdτ(x,φ,σstar,μ,μbar)]
def σstarf(x,φ,μ,μbar):
if (dρdσ(1.,x,φ,μ,μbar)<0) and (dρdσ(θ,x,φ,μ,μbar)>0):
σstar = fsolve(dρdσ,.8,args=(x,φ,μ,μbar,))[0]
else:
σstar = 1.;
if ρf(x,φ,σstar,μ,μbar) < ρ0f(x,φ,μ,μbar):
sgm = 0
else:
sgm = σstar
return sgm
```
```python
def get_primary_field(name, experiment,μ,μbar):
solutions = {}
solver = ode(ode_rhs).set_integrator(backend)
τ0 = experiment['τ0']
    tms = np.linspace(τ0,experiment['T_end'],1001)
for x0 in experiment['x0']:
δρ0 = ρ0.subs(x,x0).subs(φ,0)-ρ.subs(x,x0).subs(φ,0).subs(σ,1.)
solver.set_initial_value([x0,0,0,δρ0],0.).set_f_params(μ,μbar)
sol = []; k = 0;
while (solver.t < experiment['T_end']) and (solver.y[0]<=1.) and (solver.y[0]>=0.):
solver.integrate(tms[k])
sol.append([solver.t]+list(solver.y))
k += 1
solutions[x0] = {'solution': sol}
for x0, entry in solutions.items():
entry['τ'] = [entry['solution'][j][0] for j in range(len(entry['solution']))]
entry['x'] = [entry['solution'][j][1] for j in range(len(entry['solution']))]
entry['φ'] = [entry['solution'][j][2] for j in range(len(entry['solution']))]
entry['V'] = [entry['solution'][j][3] for j in range(len(entry['solution']))]
entry['δρ'] = [entry['solution'][j][4] for j in range(len(entry['solution']))]
return solutions
def get_δρ_value(tme,x0,μ,μbar):
solver = ode(ode_rhs).set_integrator(backend)
δρ0 = ρ0.subs(x,x0).subs(φ,0)-ρ.subs(x,x0).subs(φ,0).subs(σ,1.)
solver.set_initial_value([x0,0,0,δρ0],0.).set_f_params(μ,μbar)
while (solver.t < tme) and (solver.y[0]<=1.) and (solver.y[0]>=0.):
solver.integrate(tme)
sol = [solver.t]+list(solver.y)
return solver.y[3]
def get_δρ_ending(params,μ,μbar):
tme, x0 = params
solver = ode(ode_rhs).set_integrator(backend)
δρ0 = ρ0.subs(x,x0).subs(φ,0)-ρ.subs(x,x0).subs(φ,0).subs(σ,1.)
solver.set_initial_value([x0,0,0,δρ0],0.).set_f_params(μ,μbar)
δτ = 1.0e-8; tms = [tme,tme+δτ]
_k = 0; sol = []
while (_k<len(tms)):# and (solver.y[0]<=1.) and (solver.y[0]>=0.):
solver.integrate(tms[_k])
sol.append(solver.y)
_k += 1
#print(sol)
return(sol[0][3],(sol[1][3]-sol[0][3])/δτ)
def get_state(tme,x0,μ,μbar):
solver = ode(ode_rhs).set_integrator(backend)
δρ0 = ρ0.subs(x,x0).subs(φ,0)-ρ.subs(x,x0).subs(φ,0).subs(σ,1.)
solver.set_initial_value([x0,0,0,δρ0],0.).set_f_params(μ,μbar)
δτ = 1.0e-8; tms = [tme,tme+δτ]
_k = 0; sol = []
while (solver.t < tms[-1]) and (solver.y[0]<=1.) and (solver.y[0]>=0.):
solver.integrate(tms[_k])
sol.append(solver.y)
_k += 1
return(list(sol[0])+[(sol[1][3]-sol[0][3])/δτ])
```
# Machinery for the universal line
* To find the universal singular curve we need to define two parameters
```python
γ0 = PoissonBrackets(PoissonBrackets(H,H0),H)
γ1 = PoissonBrackets(PoissonBrackets(H0,H),H0)
```
* The dynamics
```python
dxdτSingExpr = -(γ0*diff(H0,φ)+γ1*diff(H,φ))/(γ0+γ1)
dφdτSingExpr = (γ0*diff(H0,x)+γ1*diff(H,x))/(γ0+γ1)
dVdτSingExpr = (γ0*h0+γ1*h)/(γ0+γ1)
σSingExpr = γ1*σ/(γ0+γ1)
```
* Machinery for Python: lambdify the functions above
```python
dxdτSing = lambdify((x,φ,σ,μ,μbar),dxdτSingExpr)
dφdτSing = lambdify((x,φ,σ,μ,μbar),dφdτSingExpr)
dVdτSing = lambdify((x,φ,σ,μ,μbar),dVdτSingExpr)
σSing = lambdify((x,φ,σ,μ,μbar),σSingExpr)
```
```python
def ode_rhs_Sing(t,state,μ,μbar):
x, φ, V = state
if (dρdσ(1.,x,φ,μ,μbar)<0) and (dρdσ(θ,x,φ,μ,μbar)>0):
σstar = fsolve(dρdσ,.8,args=(x,φ,μ,μbar,))[0]
else:
σstar = 1.;
return [dxdτSing(x,φ,σstar,μ,μbar),dφdτSing(x,φ,σstar,μ,μbar),dVdτSing(x,φ,σstar,μ,μbar)]
def get_universal_curve(end_point,tmax,Nsteps,μ,μbar):
tms = np.linspace(end_point[0],tmax,Nsteps);
solver = ode(ode_rhs_Sing).set_integrator(backend)
solver.set_initial_value(end_point[1:4],end_point[0]).set_f_params(μ,μbar)
_k = 0; sol = []
while (solver.t < tms[-1]):
solver.integrate(tms[_k])
sol.append([solver.t]+list(solver.y))
_k += 1
return sol
def get_σ_universal(tme,end_point,μ,μbar):
δτ = 1.0e-8; tms = [tme,tme+δτ]
solver = ode(ode_rhs_Sing).set_integrator(backend)
solver.set_initial_value(end_point[1:4],end_point[0]).set_f_params(μ,μbar)
_k = 0; sol = []
while (solver.t < tme+δτ):
solver.integrate(tms[_k])
sol.append([solver.t]+list(solver.y))
_k += 1
x, φ = sol[0][:2]
sgm = fsolve(lambda σ: dxdτ(x,φ,σ,μ,μbar)-(sol[1][0]-sol[0][0])/δτ,θ/2)[0]
return sgm
def get_state_universal(tme,end_point,μ,μbar):
solver = ode(ode_rhs_Sing).set_integrator(backend)
solver.set_initial_value(end_point[1:4],end_point[0]).set_f_params(μ,μbar)
solver.integrate(tme)
return [solver.t]+list(solver.y)
```
```python
def ode_rhs_with_σstar(t,state,μ,μbar):
x, φ, V = state
if (dρdσ(1.,x,φ,μ,μbar)<0) and (dρdσ(θ,x,φ,μ,μbar)>0):
σ = fsolve(dρdσ,.8,args=(x,φ,μ,μbar,))[0]
else:
σ = 1.;
return [dxdτ(x,φ,σ,μ,μbar),dφdτ(x,φ,σ,μ,μbar),dVdτ(x,σ)]
def ode_rhs_with_given_σ(t,state,σ,μ,μbar):
x, φ, V = state
return [dxdτ(x,φ,σ,μ,μbar),dφdτ(x,φ,σ,μ,μbar),dVdτ(x,σ)]
def get_trajectory_with_σstar(starting_point,tmax,Nsteps,μ,μbar):
tms = np.linspace(starting_point[0],tmax,Nsteps)
solver = ode(ode_rhs_with_σstar).set_integrator(backend)
solver.set_initial_value(starting_point[1:],starting_point[0]).set_f_params(μ,μbar)
sol = []; _k = 0;
while solver.t < max(tms) and (solver.y[0]<=1.) and (solver.y[0]>=0.):
solver.integrate(tms[_k])
sol.append([solver.t]+list(solver.y))
_k += 1
return sol
def get_trajectory_with_given_σ(starting_point,tmax,Nsteps,σ,μ,μbar):
tms = np.linspace(starting_point[0],tmax,100)
solver = ode(ode_rhs_with_given_σ).set_integrator(backend)
solver.set_initial_value(starting_point[1:],starting_point[0]).set_f_params(σ,μ,μbar)
sol = []; _k = 0;
while solver.t < max(tms) and (solver.y[0]<=1.) and (solver.y[0]>=0.):
solver.integrate(tms[_k])
sol.append([solver.t]+list(solver.y))
_k += 1
return sol
def get_state_with_σstar(tme,starting_point,μ,μbar):
solver = ode(ode_rhs_with_σstar).set_integrator(backend)
solver.set_initial_value(starting_point[1:4],starting_point[0]).set_f_params(μ,μbar)
solver.integrate(tme)
return [solver.t]+list(solver.y)
def get_finalizing_point_from_universal_curve(tme,tmx,end_point,μ,μbar):
unv_point = get_state_universal(tme,end_point,μ,μbar)
return get_state_with_σstar(tmx,unv_point,μ,μbar)[1]
```
# Field of optimal trajectories as the solution of the Bellman equation
* μ & μbar are varied by *T* and *T*bar ($\mu=1/T$ and $\bar\mu=1/\bar{T}$)
```python
tmx = 180.
end_switching_curve = {'t': 12., 'x': .9}
# for Τ, Τbar in zip([28]*5,[14,21,28,35,60]):
Τ = 9.89244654; Τbar = 12.90829551
μ = 1./Τ; μbar = 1./Τbar
print("Parameters: μ = %.5f, μbar = %.5f"%(μ,μbar))
end_switching_curve['t'], end_switching_curve['x'] = fsolve(get_δρ_ending,(end_switching_curve['t'],end_switching_curve['x']),args=(μ,μbar),xtol=1.0e-12)
end_point = [end_switching_curve['t']]+get_state(end_switching_curve['t'],end_switching_curve['x'],μ,μbar)
print("Ending point for the switching line: τ = %.1f days, x = %.1f%%" % (end_point[0], end_point[1]*100))
print("Checking the solution - should give zero values: ")
print(get_δρ_ending([end_switching_curve['t'],end_switching_curve['x']],μ,μbar))
print("* Constructing the primary field")
primary_field1 = []
experiments = {
'sol1': { 'T_end': tmx, 'τ0': 0., 'x0': list(np.linspace(0,end_switching_curve['x']-(1e-3),7)) } }
for name, values in experiments.items():
primary_field1.append(get_primary_field(name,values,μ,μbar))
primary_field2 = []
experiments = {
'sol1': { 'T_end': tmx, 'τ0': 0., 'x0': list(np.linspace(end_switching_curve['x']+(3e-6),1.,7)) } }
for name, values in experiments.items():
primary_field2.append(get_primary_field(name,values,μ,μbar))
print("* Constructing the switching curve")
switching_curve = []
x0s = np.linspace(end_switching_curve['x'],1,21); _y = end_switching_curve['t']
for x0 in x0s:
tme = fsolve(get_δρ_value,_y,args=(x0,μ,μbar))[0]
if (tme>0):
switching_curve = switching_curve+[[tme,get_state(tme,x0,μ,μbar)[0]]]
_y = tme
print("* Constructing the universal curve")
universal_curve = get_universal_curve(end_point,tmx,25,μ,μbar)
print("* Finding the last characteristic")
#time0 = time.time()
tuniv = fsolve(get_finalizing_point_from_universal_curve,tmx-40.,args=(tmx,end_point,μ,μbar,))[0]
#print("The proccess to find the last characteristic took %0.1f minutes" % ((time.time()-time0)/60.))
univ_point = get_state_universal(tuniv,end_point,μ,μbar)
print("The last point on the universal line:")
print(univ_point)
last_trajectory = get_trajectory_with_σstar(univ_point,tmx,50,μ,μbar)
print("Final state:")
final_state = get_state_with_σstar(tmx,univ_point,μ,μbar)
print(final_state)
print("Fold-change in tumor size: %.2f"%(exp((b-d)*tmx-final_state[-1])))
```
Parameters: μ = 0.10109, μbar = 0.07747
Ending point for the switching line: τ = 10.6 days, x = 62.5%
Checking the solution - should give zero values:
(-5.874504537221782e-08, -9.351677438408805e-07)
* Constructing the primary field
* Constructing the switching curve
* Constructing the universal curve
* Finding the last characteristic
The last point on the universal line:
[169.04356474195382, 0.6217267983092295, -0.2266623412769584, 1.3585350162185321]
Final state:
[180.0, -1.4285794769364202e-13, -0.3628496425121575, 1.6327961844129553]
Fold-change in tumor size: 1.22
```python
# Plotting
plt.rcParams['figure.figsize'] = (4.5, 3.2)
_k = 0
for solutions in primary_field1:
for x0, entry in solutions.items():
plt.plot(entry['τ'], entry['x'], '-', linewidth=1, color=clrs0[1])
_k += 1
_k = 0
for solutions in primary_field2:
for x0, entry in solutions.items():
plt.plot(entry['τ'], entry['x'], '-', linewidth=1, color=clrs0[1])
_k += 1
plt.plot([x[0] for x in switching_curve],[x[1] for x in switching_curve],linewidth=3,color="k",zorder=4,linestyle="dashed")
plt.plot([end_point[0]],[end_point[1]],marker='o',color="black",zorder=4)
plt.xlim([0,120]); plt.ylim([0,1]);
plt.xlabel("backward time, days"); plt.ylabel("fraction of resistant cells, %")
# plt.show()
plt.savefig("../figures/draft/Fig2-0.pdf",format='pdf',bbox_inches='tight')
```
```python
# Plotting
plt.rcParams['figure.figsize'] = (4.5, 3.2)
_k = 0
for solutions in primary_field1:
for x0, entry in solutions.items():
plt.plot(entry['τ'], entry['x'], '-', linewidth=1, color=clrs0[1])
_k += 1
_k = 0
for solutions in primary_field2:
for x0, entry in solutions.items():
plt.plot(entry['τ'], entry['x'], '-', linewidth=1, color=clrs0[1])
_k += 1
plt.plot([x[0] for x in switching_curve],[x[1] for x in switching_curve],linewidth=3,color="k",zorder=4,linestyle="dashed")
# plt.plot([end_point[0]],[end_point[1]],marker='o',color="black",zorder=4)
plt.plot([x[0] for x in universal_curve],[x[1] for x in universal_curve],linewidth=3,color="k",zorder=3)
for tend in [60,90,120]:
tuniv = fsolve(get_finalizing_point_from_universal_curve,tend-20.,args=(tend,end_point,μ,μbar,))[0]
univ_point = get_state_universal(tuniv,end_point,μ,μbar)
trajectory = get_trajectory_with_σstar(univ_point,tend,50,μ,μbar)
plt.plot([x[0] for x in trajectory],[x[1] for x in trajectory],linewidth=1,color=clrs0[4])
trajectory = get_trajectory_with_given_σ(univ_point,tend+20,100,0,μ,μbar)
plt.plot([x[0] for x in trajectory],[x[1] for x in trajectory],linewidth=1,color=clrs0[4])
plt.xlim([0,120]); plt.ylim([0,1]);
plt.xlabel("backward time, days"); plt.ylabel("fraction of resistant cells, %")
# plt.show()
plt.savefig("../figures/draft/Fig2-1.pdf",format='pdf',bbox_inches='tight')
```
```python
# Plotting
plt.rcParams['figure.figsize'] = (3.5, 2.5)
σs = np.linspace(0,1,101)
plt.plot(σs,[ρf(.9,0,σ,μ,μbar) for σ in σs],linewidth=2,color="k")
σimx = np.argmax([ρf(.9,0,σ,μ,μbar) for σ in σs])
print(σimx)
plt.plot(σs[σimx],[ρf(.9,0,σs[σimx],μ,μbar)],'ro')
plt.plot(σs,[ρ1f(.9,0,σ,μ,μbar) for σ in σs],'g--',linewidth=1,zorder=-5)
plt.plot(σs,[ρ2f(.9,0,σ,μ,μbar) for σ in σs],'b--',linewidth=1,zorder=-5)
plt.ylim([-.0002,.0042]);
plt.savefig("../figures/draft/Fig3-A.pdf",format='pdf',bbox_inches='tight')
```
```python
fig, ax = plt.subplots()
for solution in primary_field2:
k = 0
for x0, entry in solution.items():
if (k==0):
print("Terminal point: ",entry['τ'][0],entry['x'][0],entry['φ'][0])
kk = 17
print(entry['τ'][kk],entry['x'][kk],entry['φ'][kk])
ρyy = [ρf(entry['x'][kk],entry['φ'][kk],σ,μ,μbar) for σ in σs]
plt.plot(σs,ρyy,linewidth=2,color="k")
σimx = np.argmax(ρyy)
print(σimx)
plt.plot(σs[σimx],ρyy[σimx],'ro')
plt.plot(σs,[ρ1f(entry['x'][kk],entry['φ'][kk],σ,μ,μbar) for σ in σs],'g--',linewidth=1,zorder=-5)
plt.plot(σs,[-ρ2f(entry['x'][kk],entry['φ'][kk],σ,μ,μbar) for σ in σs],'b--',linewidth=1,zorder=-5)
plt.ylim([-.0002,.0082]);
break
k = k + 1
plt.savefig("../figures/draft/Fig3-B.pdf",format='pdf',bbox_inches='tight')
```
```python
fig, ax = plt.subplots()
for solution in primary_field2:
k = 0
for x0, entry in solution.items():
if (k==0):
print("Terminal point: ",entry['τ'][0],entry['x'][0],entry['φ'][0])
kk = 40
print(entry['τ'][kk],entry['x'][kk],entry['φ'][kk])
ρyy = [ρf(entry['x'][kk],entry['φ'][kk],σ,μ,μbar) for σ in σs]
plt.plot(σs,ρyy,linewidth=2,color="k")
σimx = np.argmax(ρyy)
print(σimx)
plt.plot(σs[σimx],ρyy[σimx],'ro')
plt.plot(σs,[ρ1f(entry['x'][kk],entry['φ'][kk],σ,μ,μbar) for σ in σs],'g--',linewidth=1,zorder=-5)
plt.plot(σs,[-ρ2f(entry['x'][kk],entry['φ'][kk],σ,μ,μbar) for σ in σs],'b--',linewidth=1,zorder=-5)
plt.ylim([-.0002,.0082]);
break
k = k + 1
plt.savefig("../figures/draft/Fig3-C.pdf",format='pdf',bbox_inches='tight')
```
```python
fig, ax = plt.subplots()
kk = 10
xu = universal_curve[kk][1]
φu = universal_curve[kk][2]
print("Point on the universal curve: ",universal_curve[kk][0],xu,φu)
ρyy = [ρf(xu,φu,σ,μ,μbar) for σ in σs]
plt.plot(σs,ρyy,linewidth=2,color="k")
σimx = np.argmax(ρyy)
print(σimx)
plt.plot(σs[σimx],ρyy[σimx],'ro')
plt.plot([0],ρyy[σimx],'ro')
plt.plot(σs,[ρ1f(xu,φu,σ,μ,μbar) for σ in σs],'g--',linewidth=1,zorder=-5)
plt.plot(σs,[-ρ2f(xu,φu,σ,μ,μbar) for σ in σs],'b--',linewidth=1,zorder=-5)
plt.ylim([-.0002,.0082]);
# ax.yaxis.set_major_formatter(plt.NullFormatter())
plt.savefig("../figures/draft/Fig3-D.pdf",format='pdf',bbox_inches='tight')
```
# Preparation for second figure
```python
# Plotting
plt.rcParams['figure.figsize'] = (6.75, 4.5)
_k = 0
for solutions in primary_field1:
for x0, entry in solutions.items():
plt.plot(entry['τ'], entry['x'], '-', linewidth=1, color=clrs0[1])
_k += 1
_k = 0
for solutions in primary_field2:
for x0, entry in solutions.items():
plt.plot(entry['τ'], entry['x'], '-', linewidth=1, color=clrs0[1])
_k += 1
plt.plot([x[0] for x in switching_curve],[x[1] for x in switching_curve],linewidth=3,color=clrs0[0],zorder=4,linestyle="dashed")
plt.plot([end_point[0]],[end_point[1]],marker='o',color="black",zorder=4)
plt.plot([x[0] for x in universal_curve],[x[1] for x in universal_curve],linewidth=3,color=clrs0[0],zorder=3)
for tend in [80,110,140]:
tuniv = fsolve(get_finalizing_point_from_universal_curve,tend-20.,args=(tend,end_point,μ,μbar,))[0]
univ_point = get_state_universal(tuniv,end_point,μ,μbar)
trajectory = get_trajectory_with_σstar(univ_point,tend,50,μ,μbar)
plt.plot([x[0] for x in trajectory],[x[1] for x in trajectory],linewidth=1,color=clrs0[4])
trajectory = get_trajectory_with_given_σ(univ_point,tend+20,100,0,μ,μbar)
plt.plot([x[0] for x in trajectory],[x[1] for x in trajectory],linewidth=1,color=clrs0[4])
plt.xlim([0,120]); plt.ylim([0,1]);
plt.xlabel("backward time, days"); plt.ylabel("fraction of resistant cells, \%")
plt.show()
```
```python
plt.rcParams['figure.figsize'] = (6.75, 4.5)
_k = 0
for solutions in primary_field1:
for x0, entry in solutions.items():
if _k==5:
sol = [[1,τ,σstarf(x,φ,μ,μbar),x,exp((b-d)*τ-V)] for τ,x,φ,V in zip(entry['τ'],entry['x'],entry['φ'],entry['V'])]
if _k==6:
trajectory_thr = [[τ,x,φ,V] for τ,x,φ,V in zip(entry['τ'],entry['x'],entry['φ'],entry['V'])]
sol += [[0,τ,σstarf(x,φ,μ,μbar),x,exp((b-d)*τ-V)] for τ,x,φ,V in trajectory_thr]
T0 = max([x[0] for x in trajectory_thr])
_k += 1
#plt.plot(τ1, x1, '-', linewidth=1, color=clrs0[1])
#plt.plot(τthr, xthr, '--', linewidth=1, color=clrs0[1])
print(T0/30.)
plt.plot([end_point[0]],[end_point[1]],marker='o',color="black",zorder=4)
for tend in [180]:
tuniv = fsolve(get_finalizing_point_from_universal_curve,tend-20.,args=(tend,end_point,μ,μbar,))[0]
univ_point = get_state_universal(tuniv,end_point,μ,μbar)
trajectory = get_trajectory_with_σstar(univ_point,tend,50,μ,μbar)
plt.plot([x[0] for x in trajectory],[x[1] for x in trajectory],linewidth=1,color=clrs0[4])
sol += [[3,τ,σstarf(x,φ,μ,μbar),x,exp((b-d)*τ-V)] for τ,x,φ,V in trajectory]
universal_curve = get_universal_curve(end_point,univ_point[0],50,μ,μbar)
plt.plot([x[0] for x in universal_curve],[x[1] for x in universal_curve],linewidth=3,color=clrs0[0],zorder=3)
sol = [[3,τ,get_σ_universal(τ,end_point,μ,μbar),x,exp((b-d)*τ-V)] for τ,x,φ,V in universal_curve] + sol
trajectory = get_trajectory_with_σstar([0,end_switching_curve['x'],0,0],end_point[0],50,μ,μbar)
sol = [[3,τ,σstarf(x,φ,μ,μbar),x,exp((b-d)*τ-V)] for τ,x,φ,V in trajectory] + sol
for tend in [124]:
tuniv = fsolve(get_finalizing_point_from_universal_curve,tend-20.,args=(tend,end_point,μ,μbar,))[0]
univ_point = get_state_universal(tuniv,end_point,μ,μbar)
# trajectory = get_trajectory_with_σstar(univ_point,tend,50,μ,μbar)
trajectory = get_trajectory_with_given_σ(univ_point,tend+20,200,0,μ,μbar)
plt.plot([x[0] for x in trajectory],[x[1] for x in trajectory],linewidth=1,color=clrs0[4])
sol += [[2,τ,σstarf(x,φ,μ,μbar),x,exp((b-d)*τ-V)] for τ,x,φ,V in trajectory]
universal_curve = get_universal_curve(end_point,univ_point[0],50,μ,μbar)
plt.plot([x[0] for x in universal_curve],[x[1] for x in universal_curve],linewidth=3,color=clrs0[0],zorder=3)
sol = [[2,τ,get_σ_universal(τ,end_point,μ,μbar),x,exp((b-d)*τ-V)] for τ,x,φ,V in universal_curve] + sol
trajectory = get_trajectory_with_σstar([0,end_switching_curve['x'],0,0],end_point[0],150,μ,μbar)
sol = [[2,τ,σstarf(x,φ,μ,μbar),x,exp((b-d)*τ-V)] for τ,x,φ,V in trajectory] + sol
plt.xlim([0,180]); plt.ylim([0,1]);
plt.xlabel("backward time, days"); plt.ylabel("fraction of resistant cells, \%")
plt.show()
```
```python
pd.DataFrame(sol).to_csv('../figures/draft/Fig4-trjs_optimal.csv',index=False,header=False)
```
```python
```
| 5dee9d558372793584282285fc4b2b44ebb64ef1 | 219,959 | ipynb | Jupyter Notebook | scripts/.ipynb_checkpoints/D. Field of optimal trajectories [Python]-checkpoint.ipynb | aakhmetz/AkhmKim2019Scripts | c348f6702a135e30aea5fc1eb3d8f4ca18b146e3 | [
"MIT"
]
| 1 | 2019-11-04T00:10:17.000Z | 2019-11-04T00:10:17.000Z | scripts/.ipynb_checkpoints/D. Field of optimal trajectories [Python]-checkpoint.ipynb | aakhmetz/AkhmKim2019Scripts | c348f6702a135e30aea5fc1eb3d8f4ca18b146e3 | [
"MIT"
]
| null | null | null | scripts/.ipynb_checkpoints/D. Field of optimal trajectories [Python]-checkpoint.ipynb | aakhmetz/AkhmKim2019Scripts | c348f6702a135e30aea5fc1eb3d8f4ca18b146e3 | [
"MIT"
]
| 1 | 2019-11-04T00:10:01.000Z | 2019-11-04T00:10:01.000Z | 206.147142 | 38,276 | 0.884206 | true | 9,187 | Qwen/Qwen-72B | 1. YES
2. YES | 0.861538 | 0.658417 | 0.567252 | __label__eng_Latn | 0.242775 | 0.156246 |
# STIRAP in a 3-level system
STIRAP (STImulated Raman Adiabatic Passage, see e.g. [Shore1998](https://journals.aps.org/rmp/pdf/10.1103/RevModPhys.70.1003)) is a method for adiabatically transferring the population of a quantum system from one state to another by using two drive fields coupled to an intermediate state without actually ever populating the intermediate state. The benefits over e.g. two Rabi pulses are that, since STIRAP is an adiabatic process, it is relatively easy (I've been told) to make it highly efficient. The other key benefit is that the intermediate state can be an unstable state, yet there is no population loss since it is never populated.
This notebook sets up a 3-level system and relevant couplings using the `toy_systems` package and then time evolves the system using `QuTiP` to simulate STIRAP. I'll be following the definitions of Shore1998 as best as I can. The level diagram from the paper is shown below.
## Imports
```python
%load_ext autoreload
%autoreload 2
import matplotlib.pyplot as plt
plt.style.use("ggplot")
import numpy as np
import qutip
from sympy import Symbol
from toy_systems.couplings import ToyCoupling, ToyEnergy
from toy_systems.decays import ToyDecay
from toy_systems.hamiltonian import Hamiltonian
from toy_systems.quantum_system import QuantumSystem
from toy_systems.states import Basis, BasisState, ToyQuantumNumbers
from toy_systems.visualization import Visualizer
```
## Set up states and basis
We start by defining the three states of the system: we'll have two ground states (i.e. states that don't decay) $|1\rangle$ and $|3\rangle$, and one excited state $|2\rangle$, which we will later set to have a decay to an additional state $|4\rangle$ representing all decays out of the system:
```python
# Define states
s1 = BasisState(qn=ToyQuantumNumbers(label="1"))
s2 = BasisState(qn=ToyQuantumNumbers(label="2"))
s3 = BasisState(qn=ToyQuantumNumbers(label="3"))
s4 = BasisState(qn=ToyQuantumNumbers(label="4")) # A target state for decays from |2>
# Define basis
basis = Basis((s1, s2, s3, s4))
basis.print()
```
|0> = |1>
|1> = |2>
|2> = |3>
|3> = |4>
## Define energies, couplings and decays
I'm going to define the system in the rotating frame as given in [Shore1998](https://journals.aps.org/rmp/pdf/10.1103/RevModPhys.70.1003) so that the Hamiltonian doesn't have any quickly rotating terms of the form $e^{i\omega t}$.
The Hamiltonian I'm trying to produce is shown below (with $\hbar = 1$):
### Energies
```python
Δp = Symbol('Delta_p') # Detuning for pump beam
Δs = Symbol('Delta_s') # Detuning for Stokes beam
E1 = ToyEnergy([s1], 0)
E2 = ToyEnergy([s2], Δp)
# The energy for state |3> needs to be defined in two parts since it contains two sympy.Symbols
E3p = ToyEnergy([s3], Δp)
E3s = ToyEnergy([s3], -Δs)
```
### Couplings
```python
Ωp = Symbol('Omega_p') # Drive field Rabi rate for pump beam
Ωs = Symbol('Omega_s') # Drive field Rabi rate for Stokes beam
coupling_p = ToyCoupling(s1,s2,Ωp/2, time_dep = "exp(-(t+t_p)**2/(2*sigma_p**2))", time_args= {"t_p":-1, "sigma_p":1})
coupling_s = ToyCoupling(s2,s3,Ωs/2, time_dep = "exp(-(t+t_s)**2/(2*sigma_s**2))", time_args= {"t_s":1, "sigma_s":1})
```
### Decays
Defining a decay from $|2\rangle$ to $|4\rangle$ :
```python
decay = ToyDecay(s2, ground = s4, gamma = Symbol("Gamma"))
```
### Define a QuantumSystem
The QuantumSystem object combines the basis, Hamiltonian and decays to make setting parameters for time evolution using QuTiP more convenient.
```python
# Define the system
system = QuantumSystem(
basis=basis,
couplings=[E1, E2, E3p, E3s, coupling_p, coupling_s],
decays=[decay],
)
# Get representations of the Hamiltonian and the decays that will be accepted by qutip
Hqobj, c_qobj = system.get_qobjs()
visualizer = Visualizer(system, vertical={"label":10}, horizontal={"label":50})
```
## Time-evolution using `QuTiP`
We can now see if time evolving the system results in something resembling STIRAP. The key to success is to choose the parameters well. Shore gives us the rule of thumb that we should have $\sqrt{\Omega_p^2 + \Omega_s^2}\tau > 10$ where $\tau$ is proportional to the time overlap of the Stokes and pump pulse. In practice it seems that taking the centers of the Gaussians to be separated by $2\sigma$ works pretty well. The broader the Gaussians are (i.e. larger $\sigma$), the more adiabatic the process, which results in less population in the intermediate state and therefore less loss. I'm taking both pulses to have the same parameters for simplicity (except they occur at different times of course).
```python
# Get a pointer to the time-evolution arguments
args = Hqobj.args
print("Keys for setting arguments:")
print(f"args = {args}")
```
Keys for setting arguments:
args = {'Delta_p': 1, 'Delta_s': 1, 't_p': -1, 'sigma_p': 1, 'Omega_p': 1, 't_s': 1, 'sigma_s': 1, 'Omega_s': 1, 'Gamma': 1}
```python
# Generate a Qobj representing the initial state
psi0 = (1*s1).qobj(basis)
# Make operators for getting the probability of being in each state
P_1_op = qutip.Qobj((1*s1).density_matrix(basis), type = "oper")
P_2_op = qutip.Qobj((1*s2).density_matrix(basis), type = "oper")
P_3_op = qutip.Qobj((1*s3).density_matrix(basis), type = "oper")
P_4_op = qutip.Qobj((1*s4).density_matrix(basis), type = "oper")
# Set the parameters for the system
# Good STIRAP
Omega = 10
t0 = 10
sigma = 10
Delta = 0
# Bad STIRAP
# Omega = 5
# t0 = 1
# sigma = 1
# Delta = 0
args["Delta_p"] = Delta
args["Omega_p"] = Omega
args["sigma_p"] = sigma
args["t_p"] = -t0
args["Delta_s"] = Delta
args["Omega_s"] = Omega
args["sigma_s"] = sigma
args["t_s"] = t0
# Times at which result is requested
times = np.linspace(-5*sigma,5*sigma,1001)
# Setting the max_step is sometimes necessary
options = qutip.solver.Options(method = 'adams', nsteps=10000, max_step=1e0)
# Setup a progress bar
pb = qutip.ui.progressbar.EnhancedTextProgressBar()
# Run the time-evolution
result = qutip.mesolve(Hqobj, psi0, times, c_ops = c_qobj, e_ops = [P_1_op, P_2_op, P_3_op, P_4_op],
progress_bar=pb, options = options)
```
Total run time: 0.48s
```python
fig, ax = plt.subplots(figsize = (16,9))
ln = []
ln+=ax.plot(times, result.expect[0], label = "P_1")
ln+=ax.plot(times, result.expect[1], label = "P_2")
ln+=ax.plot(times, result.expect[2], label = "P_3")
ln+=ax.plot(times, result.expect[3], label = "P_4")
ax.set_title("STIRAP", fontsize = 18)
ax.set_xlabel("Time / (1/Γ)", fontsize = 16)
ax.set_ylabel("Population in each state", fontsize = 16)
axc = ax.twinx()
ln+=coupling_p.plot_time_dep(times, args, ax=axc, ls = '--', c = 'k', lw = 1, label = 'Pump')
ln+=coupling_s.plot_time_dep(times, args, ax=axc, ls = ':', c = 'k', lw = 1, label = 'Stokes')
ax.legend(ln, [l.get_label() for l in ln], fontsize = 16)
print(f"Transfer efficiency: {result.expect[2][-1]*100:.1f} %")
```
```python
```
| 42c7f92532207c91fda1931e3a39dbf661a4cf33 | 196,167 | ipynb | Jupyter Notebook | examples/STIRAP in a 3-level system.ipynb | otimgren/toy-systems | 017184e26ad19eb8497af7e7e4f3e7bb814d5807 | [
"MIT"
]
| null | null | null | examples/STIRAP in a 3-level system.ipynb | otimgren/toy-systems | 017184e26ad19eb8497af7e7e4f3e7bb814d5807 | [
"MIT"
]
| null | null | null | examples/STIRAP in a 3-level system.ipynb | otimgren/toy-systems | 017184e26ad19eb8497af7e7e4f3e7bb814d5807 | [
"MIT"
]
| null | null | null | 514.874016 | 84,528 | 0.942625 | true | 2,150 | Qwen/Qwen-72B | 1. YES
2. YES | 0.752013 | 0.795658 | 0.598345 | __label__eng_Latn | 0.943789 | 0.228486 |
# Text Message Spam Detection
## The Business Use Case
You are the CEO of a new email service company trying to attract capital for your next growth stage. Most of the private equity firms you have spoken to want to see a user base of at least 100,000 users prior to commiting. Due to your superb marketing team, you estimate that each day you attract 1000 new users. Typically, 10% of all email messages are spam, and an average user receives 50 emails per day.
One of your data scientists says he can provide you a model with **90% accuracy** in classifying spam / ham. Another data scientist says she can build a model that is only **80% accurate** but has **100% recall**. A third data scientist says he can build a model that has **80% accuracy** with **100% precision**.
Which of the above models would you pick? Why?
```python
import pandas as pd
data = pd.read_csv("spam-sms.csv", encoding='latin-1')
data.shape
```
(1037, 2)
```python
Y = data["class"].values
X = data["text"].values
```
# Bayes Rule
$$
\begin{equation}
P(A|B) = \frac{P(B|A)P(A)}{P(B)}
\end{equation}
$$
For our purposes, we will redefine this as
$$
\begin{equation}
P(spam|text) = \frac{P(text|spam)P(spam)}{P(text)}
\end{equation}
$$
Here,
- **prior** means before seeing any new text (evidence). Our impression of the likelihood of certain words appearing before new evidence is introduced.
- **text** is the new message (evidence) being introduced that we want to classify as either spam or ham. Let's say we have a new text message `When are you coming home? I'm hungry.`.
- $P(text)$ is the **prior likelihood** of seeing a particular text message with that exact combination of words. For instance, `P("the car is")` will be significantly higher than `P("Downstream supply chain agents")`, especially in the **context of text messages**.
- $P(spam)$ is the likelihood that any text message will be spam. This is computed in our dataset:
```python
p_spam = sum(data["class"] == "spam") / len(data)
p_ham = 1 - p_spam # since there are only two classes
```
- $P(text|spam)$ is our **likelihood**. More specifically, the likelihood of this text message given that it is a piece of spam. It is saying, *let's assume that this message is spam. Knowing that, how likely is it that we'll find this particular combination of words in the text message?*
In order to quickly get our likelihoods, we'll need to create a **likelihood table**:
```python
spam_data = data[data["class"] == "spam"]
ham_data = data[data["class"] == "ham"]
print(f"The shape of the spam_data is {spam_data.shape}. The shape of the ham_data is {ham_data.shape}.")
```
The shape of the spam_data is (156, 2). The shape of the ham_data is (881, 2).
# Perform Count Vectorization
```python
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
vectorizer.fit(data["text"].values)
# create the vocabulary list
vocabulary = set(vectorizer.get_feature_names())
```
# Populate Likelihood Table
```python
# Create the vocabulary list
import spacy, string
nlp = spacy.load('en') # python3 -m spacy download en
from nltk import word_tokenize
likelihood_table = pd.DataFrame(columns=["spam", "ham"], index=list(vocabulary)).fillna(0)
# populate the spam column in our likelihood table
for i, sentence in enumerate(spam_data["text"].values):
for token in word_tokenize(sentence):
if token.lower() not in likelihood_table.index:
likelihood_table.loc[token.lower(), "spam"] = 1
else:
likelihood_table.loc[token.lower(), "spam"] += 1
likelihood_table.fillna(0, inplace=True)
```
```python
# populate the ham column in our likelihood table
for i, sentence in enumerate(ham_data["text"].values):
for token in nlp(sentence):
if token.text.lower() not in likelihood_table.index:
likelihood_table.loc[token.text.lower(), "ham"] = 1
else:
likelihood_table.loc[token.text.lower(), "ham"] += 1
likelihood_table.fillna(0, inplace=True)
```
```python
likelihood_table.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>spam</th>
<th>ham</th>
</tr>
</thead>
<tbody>
<tr>
<th>stylish</th>
<td>0.0</td>
<td>2.0</td>
</tr>
<tr>
<th>lager</th>
<td>0.0</td>
<td>1.0</td>
</tr>
<tr>
<th>been</th>
<td>8.0</td>
<td>24.0</td>
</tr>
<tr>
<th>indian</th>
<td>0.0</td>
<td>1.0</td>
</tr>
<tr>
<th>child</th>
<td>0.0</td>
<td>1.0</td>
</tr>
</tbody>
</table>
</div>
### Likelihood with "Fun"
```python
likelihood_table.loc["fun"]
```
spam 1.0
ham 2.0
Name: fun, dtype: float64
### Frequency with "Won"
What about words like `won`?
```python
likelihood_table.loc["won"]
```
Output:
```
spam 16.0
ham 0.0
Name: won, dtype: float64
```
# Edge Cases
What is $P(w = won|c = ham)$? If even one of the words' class-conditional probabilities is 0, then the entire likelihood will be zero, since the likelihood is simply the product of all the words' individual likelihoods.
## Additive Smoothing Techniques
We can define a new likelihood for the word `won`:
$$
\begin{equation}
P_{new}(w = won | c = ham) = \frac{N_{ham, won} + \alpha}{N_{ham} + \alpha d}
\end{equation}
$$
Here, $N_{ham, won}$ is the number of times `won` appears in messages classified as ham, $N_{ham}$ is the total number of word tokens in ham messages, $d$ is the vocabulary size, and $\alpha > 0$ is the smoothing constant (Laplace smoothing corresponds to $\alpha = 1$). The added $\alpha$ guarantees that no word has a likelihood of exactly zero.
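As a rough sketch, this smoothing can be applied on top of the `likelihood_table` built earlier; the choice $\alpha = 1$ and the helper name `smoothed_likelihood` are ours, not part of the original analysis:

```python
# Additive (Laplace) smoothing on top of the likelihood_table counts, assuming alpha = 1.
alpha = 1
d = len(likelihood_table)                   # vocabulary size
N_spam = likelihood_table["spam"].sum()     # total word count in spam messages
N_ham = likelihood_table["ham"].sum()       # total word count in ham messages

def smoothed_likelihood(word, label):
    """P(word | label) with additive smoothing; unseen words get a small nonzero probability."""
    count = likelihood_table.loc[word, label] if word in likelihood_table.index else 0
    total = N_spam if label == "spam" else N_ham
    return (count + alpha) / (total + alpha * d)

print(smoothed_likelihood("won", "ham"))   # no longer exactly zero
print(smoothed_likelihood("won", "spam"))
```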
## Simple Optimizations to Improve Naive Bayes Probabilistic Models for Text Classification
- it may be useful to create a simple **co-occurrence matrix** and **run a correlation analysis** on the features (words). If certain words have extremely high correlations, you may wish to take them out, or fuse them into a single entity.
- apply smoothing techniques to handle **out-of-vocabulary test words**
- **ensemble techniques like bagging / boosting** do **not** help. There isn't any "variation" in a Naive Bayes model. Given the same trained corpus $C$, and a new text message $m$, a Naive Bayes model will always output the same prediction.
# Representing Words as Probabilities
We can represent a sentence (a sequence of words) mathematically as
$$
\begin{equation}
w = \{{w_0, w_1, w_2, \dots,w_{s-1}}\}
\end{equation}
$$
Here, **$s$** represents the total number of words in the sentence. **$w_{0}$** represents the first word in the sentence, **$w_{1}$** represents the second word in the sentence, and so on.
# Exercise:
`Older people, like everyone else, can benefit from accessing ride-sharing, but many are not comfortable with smart-phones.`
You can ignore punctuation and capitalization for now.
1. What is $s$?
2. What is $w_4$? What is $w_6$?
3. What is $V$ (this corpus' vocabulary size, assuming this is the only sentence in the corpus)? You can do this the hard way, by counting manually.
```python
sentence = "Older people, like everyone else, can benefit from accessing ride-sharing, but many are not comfortable with smart-phones"
import re # the most efficient, concise way (less readable)
vocabulary = set([re.sub(r'[^\w\s]','',word).lower() for word in sentence.split()])
print("The size of the vocabulary is {} words".format(vocabulary))
```
# Independence
In statistics, two events are independent if the outcome of one event does not affect the probability of the outcomes of another event.
You will also often see this written as
$$
\begin{equation}
P(A,B) = P(A) * P(B)
\end{equation}
$$
In other words, an event A is independent of event B if the **probability of event A and event B happening together** is equal to **the probability of event A multplied by the probability of event B**.
# Bigram Model
A bigram is a group of two tokens (frequently words) that are treated as one distinct entity. For instance, the distinct bigrams in the sentence `I am home now` would be
```python
bigrams = [
("I", "am"),
("am", "home"),
("home", "now")
]
```
### Exercise:
Write a Python function to find all the bigrams in the sentence
`In recent years, Johnson & Johnson has been focusing more on its high-margin pharmaceutical segment via acquisitions.`
**Hints**:
- split the sentence into a list of individual words (`my_sentence.split()`)
- remove punctuation
- lowercase all the letters
- use a for loop to iterate through this list, getting the **i-th** and **i + 1-th** elements of the list
**Challenge**:
Generalize this function to work with `n-grams`.
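One possible solution sketch for both the bigram exercise and the n-gram generalization (the helper name `get_ngrams` is our own):

```python
import re

def get_ngrams(sentence, n=2):
    """Return the list of n-gram tuples from a sentence, ignoring punctuation and case."""
    words = [re.sub(r'[^\w\s]', '', w).lower() for w in sentence.split()]
    words = [w for w in words if w]  # drop tokens that were pure punctuation (e.g. "&")
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

sentence = ("In recent years, Johnson & Johnson has been focusing more "
            "on its high-margin pharmaceutical segment via acquisitions.")
print(get_ngrams(sentence, n=2)[:5])  # bigrams; pass n=3 for trigrams, etc.
```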
## Language Model
Are words in a sentence conditionally independent from each other? In other words, does knowing that the first word is `The` change your belief in the likelihood of the second word that follows?
Which of the following sentences is more likely?
```python
sentence_A = "Jack went to Wal-Mart."
sentence_B = "at and the be of I"
```
Notice that all the words in sentence B come from [Wikipedia most common words in the English language](https://en.wikipedia.org/wiki/Most_common_words_in_English). Yet we intuitively know that the sentence is nonsensical and is unlikely to be seen in natural language.
We can express the likelihood of a sentence $w$ as $p(w)$, and define it as
$$
\begin{equation}
p(w) = \prod_{i=1}^{s-1}p(w_{i}|w_{i-1})
\end{equation}
$$
If we want to generalize this to an **N-Gram** model:
$$
\begin{equation}
p(w) = \prod_{i}p(w_i |w_{i-n+1}, w_{i-n+2}, \dots, w_{i-1})
\end{equation}
$$
$$
\begin{equation}
p(w) = \prod_{i}p(w_i | w_{i-n+1}^{i-1})
\end{equation}
$$
Here, $w_{i-n+1}^{i-1} = w_{i-n+1}, w_{i-n+2}, \dots, w_{i-1}$ is the $(n-1)$-word history preceding $w_i$.
### Exercises
Use the following documents:
```python
documents = [
"Eat dinner at home",
"He needs to go to the store",
"She needs to go home"
]
```
1. Get a list of all unique tokens in the vocabulary
2. Calculate the transition frequencies.
3. Calculate the transition probabilities.
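One possible way to work through these three steps with plain Python (the variable names are ours):

```python
from collections import defaultdict

documents = [
    "Eat dinner at home",
    "He needs to go to the store",
    "She needs to go home"
]

# 1. Unique tokens in the vocabulary
tokens = [w.lower() for doc in documents for w in doc.split()]
vocabulary = set(tokens)
print(vocabulary)

# 2. Transition frequencies: how often word_j follows word_i
transition_counts = defaultdict(lambda: defaultdict(int))
for doc in documents:
    words = [w.lower() for w in doc.split()]
    for w_i, w_j in zip(words[:-1], words[1:]):
        transition_counts[w_i][w_j] += 1

# 3. Transition probabilities: normalize each row of counts
transition_probs = {
    w_i: {w_j: c / sum(following.values()) for w_j, c in following.items()}
    for w_i, following in transition_counts.items()
}
print(transition_probs["to"])  # P(next word | "to"); here {'go': 2/3, 'the': 1/3}
```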
## Generating Bigrams Using NLTK
```python
import pandas as pd
import nltk
from nltk import word_tokenize
reviews_df = pd.read_csv("mcdonalds-yelp-negative-reviews.csv", encoding="latin-1")
for review in reviews_df["review"]:
bigram = list(nltk.bigrams(word_tokenize(review)))
print(bigram[:10])
```
[('I', 'wanted'), ('wanted', 'to'), ('to', 'grab'), ('grab', 'breakfast'), ('breakfast', 'one'), ('one', 'morning'), ('morning', 'before'), ('before', 'work'), ('work', 'since'), ('since', 'it')]
## Generating Bigrams Using Scikit-Learn
```python
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(ngram_range=(2,2))
X = vectorizer.fit_transform(reviews_df["review"])
bigram_features = pd.DataFrame(X.toarray(), columns=vectorizer.get_feature_names())
bigram_features.shape
```
(1525, 64297)
```python
vectorizer = CountVectorizer(ngram_range=(1,1))
X = vectorizer.fit_transform(reviews_df["review"])
unigram_features = pd.DataFrame(X.toarray(), columns=vectorizer.get_feature_names())
unigram_features.shape
```
(1525, 8379)
## Model Evaluation: Choosing n in an n-Gram Model
- the larger the dataset, and by implication, the more rich the corpus, the larger the n we can likely try.
- in practice, $n = 2$, $n = 3$, $n = 4$ work well. A larger $n$ tends to begin to overfit (and may be computationally extremely expensive). Remember the **bias-variance** tradeoff:
Here, as $n \rightarrow \infty$, model complexity increases dramatically.
- **tune $n$ based on the performance of the downstream model**: usually n-gram models are the first step in a broader sentiment analysis prediction model, or topic modelling model, recommendation system, or sequence-to-sequence translation task.
### Perplexity
Look again at the definition of likelihood for a particular sentence:
$$
\begin{equation}
L = p(w) = \prod_{i}p(w_i | w_{i-n+1}^{i-1})
\end{equation}
$$
Here, $w_{i-n+1}^{i-1}$ stands for $w_{i-n+1}, \dots, w_{i-1}$ (for a bi-gram (**`n=2`**) model, the history is just $w_{i-1}$).
Is it reasonable to compare two sentences, one with $s=4$ (sentence length of 4 words) with the one below in Sentence B?
##### Sentence A:
> *I love to eat.*
##### Sentence B:
> *My escort was an exceptionally genial sixty-seven-year-old man named Don Seely, an electrical engineer who said that he was between jobs and using the unwanted free time to volunteer his services to the Northern Kentucky Tea Party, the rally’s host organization, as a Webmaster.*
Answer: **No**. A common way of quantifying the quality of your n-gram models, accounting for different sizes of test corpuses, is to use **perplexity**, defined as
$$
\begin{equation}
P = \frac{1}{\sqrt[N]{p(w)}}
\end{equation}
$$
Here, $N$ is the total number of words in the test sentence. We typically use perplexity, instead of simply likelihood, as the overall model evaluation metric, because in general, **in order to compare two different models**, they should be using the same test corpus / vocabulary.
```python
import numpy as np
import matplotlib.pyplot as plt
def perplexity(likelihood, N):
return 1 / (likelihood ** (1/N))
x = []
y = []
for i in range(5, 500):
x.append(i)
y.append(perplexity(likelihood=.204, N=i)) # an example likelihood of .004
plt.plot(x, y)
plt.xlabel("N")
plt.ylabel("Perplexity")
```
## Dealing with Out-of-Vocabulary Words
Let's pretend that our training corpus is
> *This is mistaken logic. It is true that a high variance and low bias model can perform well in some sense.*
Our test corpus is
> *This **is not** true.*
Assuming a bi-gram model is used, what is the **perplexity** of our model?
We don't actually need to count each bi-gram. **The answer is infinity**. Why?
$$
\begin{equation}
p(w_i = not | w_{i-1} = is) = 0
\end{equation}
$$
What you can do instead:
* Look at the **frequency distribution** of words in your corpus
* Decide upon some **threshold cutoff**, where every word below that threshold frequency will be converted into an **`UNKNOWN`** token. Now, whenever a new word appears that is out of vocabulary, you simply convert it into **`UNKNOWN`** and run the tests as usual.
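A minimal sketch of this thresholding idea, reusing the `data` frame loaded earlier; the cutoff value and the `<UNK>` symbol are our own choices:

```python
from collections import Counter
from nltk import word_tokenize

# Count word frequencies over the training corpus
train_tokens = word_tokenize(" ".join(data["text"].values).lower())
freqs = Counter(train_tokens)

min_count = 2  # assumed cutoff; tune this for your corpus
known_words = {w for w, c in freqs.items() if c >= min_count}

def map_unknown(tokens):
    """Replace rare / out-of-vocabulary tokens with a shared UNKNOWN symbol."""
    return [t if t in known_words else "<UNK>" for t in tokens]

print(map_unknown(word_tokenize("this is not true".lower())))
```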
| 3bc6915eeede82d7d43396cc3636066b8c75bf17 | 34,558 | ipynb | Jupyter Notebook | week3/Probabilities and N-Gram Language Models.ipynb | lynkeib/dso-560-nlp-and-text-analytics | 9fa1314b2ed32a51fa41443f40bf549e4320948d | [
"MIT"
]
| null | null | null | week3/Probabilities and N-Gram Language Models.ipynb | lynkeib/dso-560-nlp-and-text-analytics | 9fa1314b2ed32a51fa41443f40bf549e4320948d | [
"MIT"
]
| null | null | null | week3/Probabilities and N-Gram Language Models.ipynb | lynkeib/dso-560-nlp-and-text-analytics | 9fa1314b2ed32a51fa41443f40bf549e4320948d | [
"MIT"
]
| null | null | null | 47.404664 | 11,192 | 0.685804 | true | 3,890 | Qwen/Qwen-72B | 1. YES
2. YES | 0.798187 | 0.79053 | 0.630991 | __label__eng_Latn | 0.990113 | 0.304334 |
```python
%pylab inline
%config InlineBackend.figure_format = 'retina'
from ipywidgets import interact
```
Populating the interactive namespace from numpy and matplotlib
# A simple stochastic gene expression model
$$
{\text{off} \atop (N=0)}
\quad
{{\alpha/\epsilon \atop\longrightarrow}\atop {\longleftarrow \atop \beta/\epsilon}}
\quad
{\text{on} \atop (N=1)}\quad
$$
where $x$ and $y$ satisfy the ODEs
$$
\dot{x} = \gamma N(t) - \delta x
$$
The solution to the above linear ODE is
$$
x(t) = x_0 e^{-\delta(t - t_0)} + \frac{\gamma N(t)}{\delta}\left(1 - e^{-\delta(t - t_0)}\right).
$$
Note that the derivative $\frac{d}{dt}N(t) = 0$ for every $t>0$ except where jumps occur. We will be 'integrating' the ODE between jumps in the Markov process $N(t)$.
In the limit $\epsilon \to 0^+$, the stochastic process converges to the ODE
$$\dot{x}_{\infty} = \langle N \rangle \gamma - \delta x_{\infty}, \quad x_{\infty}(0) = x_0,$$
where
$$\langle N \rangle = \frac{\alpha}{\alpha + \beta}.$$
The exact solution is
$$
x_{\infty}(t) = x_0 e^{-\delta t} + \frac{\gamma \langle N \rangle}{\delta}\left(1 - e^{-\delta t}\right).
$$
```python
epsilon = 10.5
gamma = 2.
delta = 0.25
alpha = 1.
beta = 1.
n0 = 0
x0 = 0
Nsteps = 10000
N = zeros(Nsteps)
N[0] = n0
X = zeros(Nsteps)
X[0] = x0
T = zeros(Nsteps)
T[0] = 0
for j in arange(1, Nsteps):
u = rand(1)[0]
rate = beta/epsilon if N[j-1]==1 else alpha/epsilon
tau = -log(u)/rate
T[j] = T[j-1] + tau
N[j] = 0 if N[j-1]==1 else 1
## update x using the exact solution above
X[j] = X[j-1]*exp(-delta*tau) + gamma*N[j-1]/delta*(1 - exp(-delta*tau))
## I want to plot a vector of x values that includes times in between jumps
Tplot = linspace(0, T[9], 200)
dt = Tplot[1] - Tplot[0] ## the time step 'delta t'
Xplot = zeros(200)
Xplot[0] = x0
tnext_jump = T[1] #
n = n0
k = 1
for j in arange(1, 200): ## use the exact solution to the ODE above instead of Euler's method
if Tplot[j] > tnext_jump:
n = N[k]
tnext_jump = T[k+1]
k += 1
Xplot[j] = Xplot[j-1]*exp(-delta*dt) + gamma*n/delta*(1 - exp(-delta*dt))
## I want to make a plot for the limiting ODE solution
Tinf = linspace(0, T[-1], 200)
Navg = alpha/(alpha + beta)
Xinf = x0*exp(-delta*Tinf) + gamma*Navg/delta*(1 - exp(-delta*Tinf))
figure(1, [8, 2])
plot(T[:10], N[:10], '-o')
yticks([0, 1])
title('First 10 time steps', fontsize=20)
xlabel('t', fontsize=24)
ylabel('N', fontsize=24);
figure(2, [8, 4])
plot(Tplot, Xplot)
xlabel('t', fontsize=24)
ylabel('x', fontsize=24);
figure(3, [12, 6])
plot(T, X)
plot(Tinf, Xinf, 'k')
xlabel('t', fontsize=24)
ylabel('x', fontsize=24);
```
# Fully coupled hybrid processes
Suppose that the transition rates depend on $x(t)$. In general, a jump Markov process with time dependent rate $r(t)$ uses the jump time density
$$ p(t) = r(t)e^{-\int_0^t r(t')dt'} .$$
Suppose we have a transition rate $a(x) > 0$ that depends on $x$. Then, it also depends on time because $x$ is time dependent. To generate random jump times, we can generalize the idea behind the Gillespie algorithm by finding the first time $\tau$ at which the cumulative distribution function $P(\tau)$ is greater than or equal to a computer-generated uniform random variable `u`. Here we are using the fact that any cumulative distribution function is non-decreasing. We can represent this computation as
$$
\begin{gather}
\frac{dP}{d\tau} = a(x(\tau))e^{-\mu(\tau)} \\
\frac{d\mu}{d\tau} = a(x(\tau)) \\
\frac{dx}{d\tau} = f(x, N).
\end{gather}
$$
We can numerically integrate the above system (we might need something better than Euler's method). Note one can eliminate the first equation with $P(\tau) = 1 - e^{-\mu(\tau)}$.
The basic steps:
1. Generate uniform RV `u`
2. Initialize $x(0) = x_0$, $\mu(0) = 0$, and $P(0) = 0$.
3. Evolve the above system, step by step, until $P(\tau) > u$
4. The first time at which $P(\tau) = u$ is the random jump time $\tau_k$ and we update the global time $t_k = t_{k-1} + \tau_k$
5. Sample a new gene state $N$ just like in regular Gillespie
6. The final value of $x(\tau)$ becomes the initial condition $x_0$ in the next step (i.e., $x_0 = x(\tau_k)$)
7. Repeat
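A rough sketch of steps 1-4 using a simple forward-Euler discretization; the rate function `a`, the vector field `f`, the step size `dt`, and the cap `tau_max` are placeholders chosen for illustration (bare names like `rand` and `exp` come from the `%pylab inline` namespace above):

```python
# Sketch: generate one jump time when the transition rate a(x) depends on x(t).
# Forward Euler is used for simplicity; a higher-order integrator may be needed in practice.
def next_jump(x0, N, a, f, dt=1e-3, tau_max=1e3):
    """Return (tau, x) at the next jump; the caller samples the new gene state
    (step 5) and reuses x as the initial condition for the next interval (step 6)."""
    u = rand(1)[0]              # step 1: uniform random variable
    x, mu, tau = x0, 0.0, 0.0   # step 2: initial conditions
    while tau < tau_max:
        P = 1 - exp(-mu)        # cumulative distribution P(tau) = 1 - exp(-mu(tau))
        if P >= u:              # steps 3-4: first time P(tau) >= u gives the jump time
            return tau, x
        x = x + dt*f(x, N)      # evolve the ODE for x
        mu = mu + dt*a(x)       # accumulate the integrated rate
        tau += dt
    return tau, x               # no jump occurred before tau_max
```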
# Gene expression system: mutual repressors model: phase plane
\begin{align*}
\dot{x} &= f(x, y), \\
\dot{y} &= f(y, x),
\end{align*}
where
$$
f(x, y) = \frac{b + x^2}{b + x^2 + y^2} - x.
$$
The parameter $b>0$ represents the base rate of expression.
```python
b = 0.1
x = linspace(0.01, 1, 200)
y1 = sqrt((b + x**2)*(1-x)/x)
xv = linspace(0, 1.5, 50)
X, Y = meshgrid(xv, xv)
U = (b + X**2)/(b + X**2 + Y**2) - X
V = (b + Y**2)/(b + Y**2 + X**2) - Y
# U /= sqrt(U**2 + V**2)
# V /= sqrt(U**2 + V**2)
figure(1, [8, 6])
# quiver(X, Y, U, V, color='0.5')
streamplot(X, Y, U, V, color='0.75')
plot(x, y1, 'b')
plot(y1, x, 'r')
xlim(0, 1.2)
ylim(0, 1.2)
xlabel('x', fontsize=24)
ylabel('y', fontsize=24);
```
# Mutual repressors: hybrid stochastic process
$$
(N=-1)
{{b \atop\longrightarrow}\atop {\longleftarrow \atop x(t)^2}}
(N=0)
{{y(t)^2 \atop\longrightarrow}\atop {\longleftarrow \atop b}}
(N=1)
$$
where $x$ and $y$ satisfy the ODEs
$$
\begin{align}
\dot{x} &= \mathbf{1}[N(t)\neq 1] - x \\
\dot{y} &= \mathbf{1}[N(t)\neq -1] - y
\end{align}
$$
```python
### TO BE UPDATED
```
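The cell above is left to be updated; as a non-authoritative sketch, the fully coupled process described here could be simulated with a small fixed time step, accepting at most one jump per step with probability `rate*dt`. The step size, time horizon, and base rate `b` below are our assumptions (bare names such as `zeros`, `rand` and `plot` come from the `%pylab inline` namespace):

```python
# Approximate simulation of the mutual-repressor hybrid process with a fixed time step.
def simulate_mutual_repressors(b=0.1, dt=1e-3, Tmax=50.0, N0=0, x0=0.0, y0=0.0):
    nsteps = int(Tmax/dt)
    Ts, Xs, Ys, Ns = zeros(nsteps), zeros(nsteps), zeros(nsteps), zeros(nsteps)
    N, x, y = N0, x0, y0
    for k in range(nsteps):
        Ts[k], Xs[k], Ys[k], Ns[k] = k*dt, x, y, N
        # continuous part: dx/dt = 1[N != 1] - x, dy/dt = 1[N != -1] - y
        x += dt*((1.0 if N != 1 else 0.0) - x)
        y += dt*((1.0 if N != -1 else 0.0) - y)
        # discrete part: at most one jump per step, with probability rate*dt
        u = rand(1)[0]
        if N == -1 and u < b*dt:
            N = 0
        elif N == 1 and u < b*dt:
            N = 0
        elif N == 0:
            if u < (x**2)*dt:
                N = -1
            elif u < (x**2 + y**2)*dt:
                N = 1
    return Ts, Xs, Ys, Ns

Ts, Xs, Ys, Ns = simulate_mutual_repressors()
plot(Ts, Xs); plot(Ts, Ys)
xlabel('t'); ylabel('x, y');
```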
| aa4fec20995c2901f64bb16723227d44e235559d | 395,680 | ipynb | Jupyter Notebook | Week 5 - hybrid processes.ipynb | newby-jay/MATH371-Winter2020-JupyterNotebooks | d33857cc29d7fcb16e6b7aca975d9062f7e24ce6 | [
"Apache-2.0"
]
| null | null | null | Week 5 - hybrid processes.ipynb | newby-jay/MATH371-Winter2020-JupyterNotebooks | d33857cc29d7fcb16e6b7aca975d9062f7e24ce6 | [
"Apache-2.0"
]
| null | null | null | Week 5 - hybrid processes.ipynb | newby-jay/MATH371-Winter2020-JupyterNotebooks | d33857cc29d7fcb16e6b7aca975d9062f7e24ce6 | [
"Apache-2.0"
]
| null | null | null | 1,244.27673 | 244,700 | 0.956518 | true | 1,957 | Qwen/Qwen-72B | 1. YES
2. YES | 0.824462 | 0.824462 | 0.679737 | __label__eng_Latn | 0.845496 | 0.417589 |
```python
from sympy import *
from sympy import pprint
from sympy.core.numbers import mod_inverse
a,b,c,d = symbols('a,b,c,d')
```
```python
G = Matrix([[2, 1, 3, 2],
[4, 2, 0, 1]
])
```
```python
def mod(x,modulus):
numer, denom = x.as_numer_denom()
return numer*mod_inverse(denom,modulus) % modulus
```
```python
G_rref = G.rref(iszerofunc=lambda x: x % 5 == 0)
pprint(G_rref[0].applyfunc(lambda x: mod(x,5)))
```
```python
F = Matrix([[4,3,1,3],[2,4,1,3]])
F_rref = F.rref(iszerofunc=lambda x: x % 5 == 0)
pprint(F_rref[0].applyfunc(lambda x: mod(x,5)))
```
```python
```
| 1208dcc54acc809022c92a560b3a1fe176d3b7c8 | 8,241 | ipynb | Jupyter Notebook | pset_3.4_checking_work_160519.ipynb | brunston/enigma | 3944fade43e100f3f631e308cf70fd28242a7b26 | [
"MIT"
]
| null | null | null | pset_3.4_checking_work_160519.ipynb | brunston/enigma | 3944fade43e100f3f631e308cf70fd28242a7b26 | [
"MIT"
]
| null | null | null | pset_3.4_checking_work_160519.ipynb | brunston/enigma | 3944fade43e100f3f631e308cf70fd28242a7b26 | [
"MIT"
]
| null | null | null | 64.889764 | 1,267 | 0.649557 | true | 223 | Qwen/Qwen-72B | 1. YES
2. YES | 0.955981 | 0.817574 | 0.781586 | __label__yue_Hant | 0.353231 | 0.654218 |
## Heat equation
The heat equation is a parabolic partial differential equation that describes the distribution of heat (or variation in temperature) in a given region over time. The general form of the equation in any coordinate system is given by:
\begin{align*}
\frac{\partial u}{\partial t} - \phi \nabla^2 u = f
\end{align*}
We will work here with the heat equation in one spatial dimension. This can be formulated as:
\begin{align*}
\mathcal{L}_{\bar{x}}^{\phi}u(\bar{x}) = \frac{\partial}{\partial t}u(\bar{x}) - \phi \frac{\partial^2}{\partial x^2}u(\bar{x}) = f(\bar{x}),
\end{align*}
where $\bar{x} = (t, x) \in \mathbb{R}^2$.
A closed-form solution pair of this problem (with $\phi = 1$), used here to simulate data, is:
\begin{align*}
u(x,t) &= e^{-t}sin(2\pi x) \\
f(x,t) &= e^{-t}(4\pi^2 - 1)sin(2\pi x)
\end{align*}
#### Simulate data
```python
import time
import numpy as np
import sympy as sp
from scipy.optimize import minimize
from scipy.optimize import minimize_scalar
import matplotlib.pyplot as plt
```
```python
np.random.seed(int(time.time()))
def get_simulated_data(n):
t = np.random.rand(n)
x = np.random.rand(n)
y_u = np.multiply(np.exp(-t), np.sin(2*np.pi*x))
y_f = (4*np.pi**2 - 1) * np.multiply(np.exp(-t), np.sin(2*np.pi*x))
return (t, x, y_u, y_f)
(t, x, y_u, y_f) = get_simulated_data(10)
```
#### Evaluate kernels
We use a reduced version of the kernel here.
1. $k_{uu}(x_i, x_j, t_i, t_j; \theta) = e^ \left[ -\theta_1 (x_i-x_j)^2 - \theta_2 (t_i-t_j)^2 \right]$
2. $k_{ff}(\bar{x}_i,\bar{x}_j;\theta,\phi) = \mathcal{L}_{\bar{x}_i}^\phi \mathcal{L}_{\bar{x}_j}^\phi k_{uu}(\bar{x}_i, \bar{x}_j; \theta) = \mathcal{L}_{\bar{x}_i}^\phi \left[ \frac{\partial}{\partial t_j}k_{uu} - \phi \frac{\partial^2}{\partial x_j^2} k_{uu} \right] \\
= \frac{\partial}{\partial t_i}\frac{\partial}{\partial t_j}k_{uu} - \phi \left[ \frac{\partial}{\partial t_i}\frac{\partial^2}{\partial x_j^2}k_{uu} + \frac{\partial^2}{\partial x_i^2}\frac{\partial}{\partial t_j}k_{uu} \right] + \phi^2 \frac{\partial^2}{\partial x_i^2}\frac{\partial^2}{\partial x_j^2}k_{uu}$
3. $k_{fu}(\bar{x}_i,\bar{x}_j;\theta,\phi) = \mathcal{L}_{\bar{x}_i}^\phi k_{uu}(\bar{x}_i, \bar{x}_j; \theta)
= \frac{\partial}{\partial t_i}k_{uu} - \phi \frac{\partial^2}{\partial x_i^2}k_{uu}$
```python
x_i, x_j, t_i, t_j, theta1, theta2, phi = sp.symbols('x_i x_j t_i t_j theta1 theta2 phi')
```
```python
kuu_sym = sp.exp(-theta1*(x_i - x_j)**2 - theta2*(t_i - t_j)**2)
kuu_fn = sp.lambdify((x_i, x_j, t_i, t_j, theta1, theta2), kuu_sym, "numpy")
def kuu(t, x, theta1, theta2):
k = np.zeros((t.size, t.size))
for i in range(t.size):
for j in range(t.size):
k[i,j] = kuu_fn(x[i], x[j], t[i], t[j], theta1, theta2)
return k
```
```python
kff_sym = sp.diff(kuu_sym, t_j, t_i) \
- phi*sp.diff(kuu_sym,x_j,x_j,t_i) \
- phi*sp.diff(kuu_sym,t_j,x_i,x_i) \
+ phi**2*sp.diff(kuu_sym,x_j,x_j,x_i,x_i)
kff_fn = sp.lambdify((x_i, x_j, t_i, t_j, theta1, theta2, phi), kff_sym, "numpy")
def kff(t, x, theta1, theta2, p):
k = np.zeros((t.size, t.size))
for i in range(t.size):
for j in range(t.size):
k[i,j] = kff_fn(x[i], x[j], t[i], t[j], theta1, theta2, p)
return k
```
```python
kfu_sym = sp.diff(kuu_sym,t_i) - phi*sp.diff(kuu_sym,x_i,x_i)
kfu_fn = sp.lambdify((x_i, x_j, t_i, t_j, theta1, theta2, phi), kfu_sym, "numpy")
def kfu(t, x, theta1, theta2, p):
k = np.zeros((t.size, t.size))
for i in range(t.size):
for j in range(t.size):
k[i,j] = kfu_fn(x[i], x[j], t[i], t[j], theta1, theta2, p)
return k
```
```python
def kuf(t, x, theta1, theta2, p):
return kfu(t, x, theta1, theta2, p).T
```
#### create covariance matrix and NLML
```
params = [theta1, theta2, phi]
```
```python
def nlml(params, t, x, y1, y2, s):
params = np.exp(params)
K = np.block([
[
kuu(t, x, params[0], params[1]) + s*np.identity(x.size),
kuf(t, x, params[0], params[1], params[2])
],
[
kfu(t, x, params[0], params[1], params[2]),
kff(t, x, params[0], params[1], params[2]) + s*np.identity(x.size)
]
])
y = np.concatenate((y1, y2))
val = 0.5*(np.log(abs(np.linalg.det(K))) + np.mat(y) * np.linalg.inv(K) * np.mat(y).T)
return val.item(0)
```
#### Optimize hyperparameters
```python
nlml((1,1,0), t, x, y_u, y_f, 1e-6)
```
10.08849356029074
```python
%%timeit
nlml_wp = lambda params: nlml(params, t, x, y_u, y_f, 1e-7)
minimize(
nlml_wp,
np.random.rand(3),
method="Nelder-Mead",
options={'maxiter' : 5000, 'fatol' : 0.001})
```
2.13 s ± 267 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```python
def minimize_restarts(t, x, y_u, y_f, n = 10):
nlml_wp = lambda params: nlml(params, t, x, y_u, y_f, 1e-7)
all_results = []
for it in range(0,n):
all_results.append(
minimize(
nlml_wp,
np.random.rand(3),
method="Nelder-Mead",
options={'maxiter' : 5000, 'fatol' : 0.001}))
filtered_results = [m for m in all_results if 0 == m.status]
return min(filtered_results, key = lambda x: x.fun)
```
```python
m = minimize_restarts(t, x, y_u, y_f, 20)
m
```
final_simplex: (array([[ 1.51464808, -1.4236345 , -0.03028208],
[ 1.51462468, -1.42355069, -0.03028317],
[ 1.51466738, -1.42367271, -0.03028443],
[ 1.51467467, -1.42354775, -0.03028619]]), array([-3.63412215, -3.63412211, -3.63412211, -3.63412206]))
fun: -3.6341221452010437
message: 'Optimization terminated successfully.'
nfev: 176
nit: 100
status: 0
success: True
x: array([ 1.51464808, -1.4236345 , -0.03028208])
##### Estimated value of $\phi$
```python
np.exp(m.x[2])
```
0.9701718328188159
#### Analysis
##### Contour lines for the likelihood
```python
delta = 0.01
theta1_range = np.arange(1, 2, delta)
theta2_range = np.arange(-2, 0, delta)
theta1_mesh, theta2_mesh = np.meshgrid(theta1_range, theta2_range)
nlml_mesh_fn = lambda mesh1, mesh2: nlml(np.array([mesh1, mesh2, 0]), t, x, y_u, y_f, 1e-7)
nlml_mesh = np.zeros(theta1_mesh.shape)
for i in range(nlml_mesh.shape[0]):
for j in range(nlml_mesh.shape[1]):
nlml_mesh[i][j] = nlml_mesh_fn(theta1_mesh[i][j], theta2_mesh[i][j])
```
```python
contour_range = np.logspace(0, np.log(np.max(nlml_mesh) - np.min(nlml_mesh)), num=15, base=np.exp(1))
contour_range = np.min(nlml_mesh) + contour_range
f, (ax1, ax2) = plt.subplots(ncols=2, nrows=1, figsize=(10,3))
f.suptitle("NLML contour lines")
cs1 = ax1.contour(theta1_mesh, theta2_mesh, nlml_mesh, contour_range)
plt.clabel(cs1, inline=1, fontsize=5)
ax1.set(xlabel= r"$ln(\theta_1)$", ylabel= r"$ln(\theta_2)$")
cs2 = ax2.contour(np.exp(theta1_mesh), np.exp(theta2_mesh), nlml_mesh, contour_range)
plt.clabel(cs1, inline=1, fontsize=5)
ax2.set(xlabel= r"$\theta_1$", ylabel= r"$\theta_2$")
```
[<matplotlib.text.Text at 0x10d094160>, <matplotlib.text.Text at 0x10d082a58>]
```python
plt.show()
```
##### Profile likelihood
```python
theta1_optim = np.zeros(theta1_range.size)
for i in range(theta1_range.size):
nlml_opt_theta2 = lambda t2 : nlml(np.array([theta1_range[i], t2, 0]), t, x, y_u, y_f, 1e-7)
m = minimize_scalar(nlml_opt_theta2)
theta1_optim[i] = m['fun']
```
```python
theta2_optim = np.zeros(theta2_range.size)
for i in range(theta2_range.size):
def nlml_opt_theta1(t1):
return nlml(np.array([t1, theta2_range[i], 0]), t, x, y_u, y_f, 1e-7)
m = minimize_scalar(nlml_opt_theta1)
theta2_optim[i] = m['fun']
```
```python
f, (ax1, ax2) = plt.subplots(ncols=2, nrows=1, sharey=True, figsize=(10,3))
f.suptitle("Profile likelihood")
ax1.plot(theta1_range, theta1_optim)
ax1.set(xlabel= r"$ln(\theta_1)$", ylabel= "nlml")
ax2.plot(theta2_range, theta2_optim, 'r')
ax2.set(xlabel= r"$ln(\theta_2)$", ylabel= "nlml")
```
[<matplotlib.text.Text at 0x10d45e518>, <matplotlib.text.Text at 0x10d454400>]
```python
plt.show()
```
##### Errors
We generate 5 sets of samples for each number of data points (n) in the range (5, 25) as earlier. The absolute error in the parameter estimate and the computation times are plotted in the following figure.
```python
n_range = np.arange(5, 25)
plot_data = np.zeros((5, n_range.size, 4))
for j in range(plot_data.shape[0]):
print("iter ", j)
for i in range(plot_data.shape[1]):
print("# points: ", n_range[i])
start_time = time.time()
(t, x, y1, y2) = get_simulated_data(n_range[i])
nlml_wp = lambda params: nlml(params, t, x, y1, y2, 1e-7)
m = minimize(nlml_wp, np.random.rand(4), method="Nelder-Mead")
end_time = time.time()
plot_data[j,i,:] = np.array([m.nfev, m.nit, np.exp(m.x[2]), end_time - start_time])
```
iter 0
# points: 5
# points: 6
# points: 7
# points: 8
# points: 9
# points: 10
# points: 11
# points: 12
# points: 13
# points: 14
# points: 15
# points: 16
# points: 17
# points: 18
# points: 19
# points: 20
# points: 21
# points: 22
# points: 23
# points: 24
iter 1
# points: 5
# points: 6
# points: 7
# points: 8
# points: 9
# points: 10
# points: 11
# points: 12
# points: 13
# points: 14
# points: 15
# points: 16
# points: 17
# points: 18
# points: 19
# points: 20
# points: 21
# points: 22
# points: 23
# points: 24
iter 2
# points: 5
# points: 6
/usr/local/lib/python3.6/site-packages/ipykernel_launcher.py:2: RuntimeWarning: overflow encountered in exp
# points: 7
# points: 8
# points: 9
# points: 10
# points: 11
# points: 12
# points: 13
# points: 14
# points: 15
# points: 16
# points: 17
# points: 18
# points: 19
# points: 20
# points: 21
# points: 22
# points: 23
# points: 24
iter 3
# points: 5
# points: 6
# points: 7
# points: 8
# points: 9
# points: 10
# points: 11
# points: 12
# points: 13
# points: 14
# points: 15
# points: 16
# points: 17
# points: 18
# points: 19
# points: 20
# points: 21
# points: 22
# points: 23
# points: 24
iter 4
# points: 5
# points: 6
# points: 7
# points: 8
# points: 9
# points: 10
# points: 11
# points: 12
# points: 13
# points: 14
# points: 15
# points: 16
# points: 17
# points: 18
# points: 19
# points: 20
# points: 21
# points: 22
# points: 23
# points: 24
```python
from matplotlib.ticker import MaxNLocator
f, (ax1, ax2) = plt.subplots(ncols=2, nrows=1, figsize=(12,4))
for i in range(1,4):
ax1.plot(n_range[5:], abs(plot_data[i,5:,2] -1), "og")
ax1.set(xlabel = "Number of data points", ylabel = "Absolute error")
ax1.set_title("(A) Error in estimate of the parameter")
ax1.plot(n_range[5:], np.amin(abs(np.amin(plot_data, axis=2) - 1), axis=0)[5:])
ax1.xaxis.set_major_locator(MaxNLocator(integer=True))
ax1.axhline(0.01, color='black', linestyle='-.')
ax1.axhline(0.004, color='red', linestyle='-.')
ax1.axhline(0.002, color='black', linestyle='-.')
for i in range(1,4):
ax2.plot(n_range[5:], plot_data[i,5:,3], "og")
ax2.set(xlabel = "Number of data points", ylabel = "Execution time")
ax2.set_title("(B) Execution time benchmark")
ax2.xaxis.set_major_locator(MaxNLocator(integer=True))
```
```python
plt.show()
```
The minimum error for each value of n is bounded by 0.002 for n > 10. The computation time also increases monotonically with n.
##### With the full kernel
```python
theta, l1, l2 = sp.symbols('theta l1 l2')
kuu_sym = theta*sp.exp(-(x_i - x_j)**2/(2*l1) - (t_i - t_j)**2/(2*l2))
kuu_fn = sp.lambdify((x_i, x_j, t_i, t_j, theta, l1, l2), kuu_sym, "numpy")
def kuu(t, x, theta, l1, l2):
k = np.zeros((t.size, t.size))
for i in range(t.size):
for j in range(t.size):
k[i,j] = kuu_fn(x[i], x[j], t[i], t[j], theta, l1, l2)
return k
kff_sym = sp.diff(kuu_sym, t_j, t_i) \
- phi*sp.diff(kuu_sym,x_j,x_j,t_i) \
- phi*sp.diff(kuu_sym,t_j,x_i,x_i) \
+ phi**2*sp.diff(kuu_sym,x_j,x_j,x_i,x_i)
kff_fn = sp.lambdify((x_i, x_j, t_i, t_j, theta, l1, l2, phi), kff_sym, "numpy")
def kff(t, x, theta, l1, l2, p):
k = np.zeros((t.size, t.size))
for i in range(t.size):
for j in range(t.size):
k[i,j] = kff_fn(x[i], x[j], t[i], t[j], theta, l1, l2, p)
return k
kfu_sym = sp.diff(kuu_sym,t_i) - phi*sp.diff(kuu_sym,x_i,x_i)
kfu_fn = sp.lambdify((x_i, x_j, t_i, t_j, theta, l1, l2, phi), kfu_sym, "numpy")
def kfu(t, x, theta, l1, l2, p):
k = np.zeros((t.size, t.size))
for i in range(t.size):
for j in range(t.size):
k[i,j] = kfu_fn(x[i], x[j], t[i], t[j], theta, l1, l2, p)
return k
def kuf(t, x, theta, l1, l2, p):
return kfu(t, x, theta, l1, l2, p).T
def nlml(params, t, x, y1, y2, s):
params = np.exp(params)
K = np.block([
[
kuu(t, x, params[0], params[1], params[2]) + s*np.identity(x.size),
kuf(t, x, params[0], params[1], params[2], params[3])
],
[
kfu(t, x, params[0], params[1], params[2], params[3]),
kff(t, x, params[0], params[1], params[2], params[3]) + s*np.identity(x.size)
]
])
y = np.concatenate((y1, y2))
val = 0.5*(np.log(abs(np.linalg.det(K))) + np.mat(y) * np.linalg.inv(K) * np.mat(y).T)
return val.item(0)
(t, x, y_u, y_f) = get_simulated_data(10)
```
```python
nlml((1,1,1,0), t, x, y_u, y_f, 1e-6)
```
1100767.8910308597
```python
%%timeit
nlml_wp = lambda params: nlml(params, t, x, y_u, y_f, 1e-7)
minimize(
nlml_wp,
np.random.rand(4),
method="Nelder-Mead",
options={'maxiter' : 5000, 'fatol' : 0.001})
```
6.93 s ± 1.94 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
The reduced kernel takes less time for optimization as compared to the full one.
```python
n_range = np.arange(5, 25)
plot_data_r = np.zeros((5, n_range.size, 4))
for j in range(plot_data_r.shape[0]):
print("iter ", j)
for i in range(plot_data_r.shape[1]):
print("# points: ", n_range[i])
start_time = time.time()
(t, x, y1, y2) = get_simulated_data(n_range[i])
nlml_wp = lambda params: nlml(params, t, x, y1, y2, 1e-7)
m = minimize(nlml_wp, np.random.rand(4), method="Nelder-Mead")
end_time = time.time()
plot_data_r[j,i,:] = np.array([m.nfev, m.nit, np.exp(m.x[3]), end_time - start_time])
```
iter 0
# points: 5
# points: 6
# points: 7
# points: 8
# points: 9
# points: 10
# points: 11
# points: 12
# points: 13
# points: 14
# points: 15
# points: 16
# points: 17
# points: 18
# points: 19
# points: 20
# points: 21
# points: 22
# points: 23
# points: 24
iter 1
# points: 5
# points: 6
# points: 7
# points: 8
# points: 9
# points: 10
# points: 11
# points: 12
# points: 13
# points: 14
# points: 15
# points: 16
# points: 17
# points: 18
# points: 19
# points: 20
# points: 21
# points: 22
# points: 23
# points: 24
iter 2
# points: 5
# points: 6
# points: 7
# points: 8
# points: 9
# points: 10
# points: 11
# points: 12
# points: 13
# points: 14
# points: 15
# points: 16
# points: 17
# points: 18
# points: 19
# points: 20
# points: 21
# points: 22
# points: 23
# points: 24
iter 3
# points: 5
# points: 6
# points: 7
# points: 8
# points: 9
# points: 10
# points: 11
# points: 12
# points: 13
# points: 14
# points: 15
# points: 16
# points: 17
# points: 18
# points: 19
# points: 20
# points: 21
# points: 22
# points: 23
# points: 24
/usr/local/lib/python3.6/site-packages/numpy/linalg/linalg.py:1874: RuntimeWarning: overflow encountered in det
r = _umath_linalg.det(a, signature=signature)
iter 4
# points: 5
# points: 6
# points: 7
# points: 8
# points: 9
# points: 10
# points: 11
# points: 12
# points: 13
# points: 14
# points: 15
# points: 16
# points: 17
# points: 18
# points: 19
# points: 20
# points: 21
# points: 22
# points: 23
# points: 24
```python
f, (ax1, ax2) = plt.subplots(ncols=2, nrows=1, figsize=(12,4))
for i in range(plot_data.shape[0]):
if(i != 3):
ax1.plot(n_range[5:], abs(plot_data_r[i,5:,2] -1), "og")
ax1.set(xlabel = "Number of data points", ylabel = "Absolute error")
ax1.set_title("Error in estimate of the parameter")
ax1.xaxis.set_major_locator(MaxNLocator(integer=True))
ax1.axhline(0.01, color='black', linestyle='-.')
ax1.axhline(0.004, color='red', linestyle='-.')
ax1.axhline(0.0025, color='black', linestyle='-.')
for i in range(plot_data.shape[0]):
if(i != 3):
ax2.plot(n_range[5:], plot_data_r[i,5:,3], "og")
ax2.set(xlabel = "Number of data points", ylabel = "Execution time")
ax2.set_title("Execution time benchmark")
ax2.xaxis.set_major_locator(MaxNLocator(integer=True))
```
```python
plt.show()
```
The same analysis as earlier in the chapter was done for the full kernel given by $\theta exp((\mathbf{x}-\mathbf{y})^T \Sigma (\mathbf{x}-\mathbf{y}))$ where $\Sigma = diag([l_1, l_2])$.
It can be noticed that the error in parameter estimate is slightly lower for the full kernel but the difference is not very significant. Meanwhile, the execution times are significantly different for both the cases. For 10 data points, the minimizer takes an average of 2.13 seconds for the reduced kernel while it is 6.93 seconds for the full kernel. This shows that not all hyperparameters might be necessary to get acceptable results. For some specific problems, having an intuition on the choice of kernels might be fruitful in the end.
| 96ead3e20d348974192dcd55c17e9bb940368cc2 | 226,602 | ipynb | Jupyter Notebook | heat_equation/heat_eqn_kernel2.ipynb | ratnania/mlhiphy | c75b5c4b5fbc557f77d234df001fe11b10681d7d | [
"MIT"
]
| 6 | 2018-07-12T09:03:43.000Z | 2019-10-29T09:50:34.000Z | heat_equation/heat_eqn_kernel2.ipynb | ratnania/mlhiphy | c75b5c4b5fbc557f77d234df001fe11b10681d7d | [
"MIT"
]
| null | null | null | heat_equation/heat_eqn_kernel2.ipynb | ratnania/mlhiphy | c75b5c4b5fbc557f77d234df001fe11b10681d7d | [
"MIT"
]
| 4 | 2018-04-25T06:33:03.000Z | 2020-03-13T02:25:07.000Z | 208.657459 | 123,000 | 0.890235 | true | 6,745 | Qwen/Qwen-72B | 1. YES
2. YES | 0.853913 | 0.718594 | 0.613617 | __label__eng_Latn | 0.406316 | 0.263968 |
```python
%matplotlib inline
import sympy as sympy
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sbn
from scipy import *
```
NumPy allows us to create vectors (arrays) rather than plain Python lists.
```python
#Take list 1, 2, 3 and make it a vector
x_vector = np.array([1,2,3])
```
Vectors have a *shape*, which describes their dimensions and orientation.
```python
x_vector.shape
```
(3,)
Additionally, we can create matrices.
```python
matrix = np.array([[1, 2, 3],
[4, 5, 6]])
matrix
```
array([[1, 2, 3],
[4, 5, 6]])
#### Addition/Subtraction
\begin{equation}
A+3=\begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{bmatrix}+3
=\begin{bmatrix}
a_{11}+3 & a_{12}+3 \\
a_{21}+3 & a_{22}+3
\end{bmatrix}
\end{equation}
\begin{equation}
A-3=\begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{bmatrix}-3
=\begin{bmatrix}
a_{11}-3 & a_{12}-3 \\
a_{21}-3 & a_{22}-3
\end{bmatrix}
\end{equation}
Match the rows/columns and add the scalar.
And similarly for matrices:
\begin{equation}
A_{2 \times 2} + B_{2 \times 2}= \begin{bmatrix}
a_{11}+b_{11} & a_{12}+b_{12} \\
a_{21}+b_{21} & a_{22}+b_{22}
\end{bmatrix}_{2 \times 2}
\end{equation}
Here's an example with `numpy`
```python
#Initial matrix has dimensions 2 x 3; for elementwise addition the second matrix must match that shape
matrix2 = np.random.rand(2,3)
matrix + matrix2
```
#### Scalar Multiplication
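Scalar multiplication simply scales every entry; a quick check with the `matrix` defined above:

```python
# Multiplying a matrix by a scalar multiplies every entry by that scalar
3 * matrix
# array([[ 3,  6,  9],
#        [12, 15, 18]])
```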
#### Matrix Multiplication
\begin{align}
A_{3 \times 2} \times C_{2 \times 3}=&
\begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22} \\
a_{31} & a_{32}
\end{bmatrix}_{3 \times 2}
\times
\begin{bmatrix}
c_{11} & c_{12} & c_{13} \\
c_{21} & c_{22} & c_{23}
\end{bmatrix}_{2 \times 3} \\
=&
\begin{bmatrix}
a_{11} c_{11}+a_{12} c_{21} & a_{11} c_{12}+a_{12} c_{22} & a_{11} c_{13}+a_{12} c_{23} \\
a_{21} c_{11}+a_{22} c_{21} & a_{21} c_{12}+a_{22} c_{22} & a_{21} c_{13}+a_{22} c_{23} \\
a_{31} c_{11}+a_{32} c_{21} & a_{31} c_{12}+a_{32} c_{22} & a_{31} c_{13}+a_{32} c_{23}
\end{bmatrix}_{3 \times 3}
\end{align}
Requirements:
* Columns of first matrix = Rows of second matrix\
$$c_x=r_y$$
Result:
* A matrix with the same number of rows as the first matrix and the same number of columns as the second.
$$r_x \times c_y$$
Memory Devices
* C for condition, c for columns first
* R for result, r for rows first
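A quick `numpy` check of the shape rule (the matrices here are made up just for illustration):

```python
import numpy as np

A = np.arange(6).reshape(3, 2)   # 3 x 2
C = np.arange(6).reshape(2, 3)   # 2 x 3

# Condition: columns of A (2) == rows of C (2), so the product exists
# Result: rows of A x columns of C
(A @ C).shape                    # (3, 3)
```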
#### Terminology
**Basis vectors**: the $\color{red}{columns}$ of the matrix, which tell us where each $\color{red}{\mbox{input dimension}}$ is mapped in the output.
**Linear Combination**: a sum of scaled vectors, such as $a_{1}\vec{v_{1}} + a_{2}\vec{v_{2}}$ for scalars $a_1, a_2$.
**Span** - all possible linear combinations. So if I have $a_{1}\vec{v_{1}} + a_{2}\vec{v_{2}}$, then the span is the entire 2D coordinate plane, *unless they are linearly dependent*
**Linear Dependence** - one of the vectors can be defined as a linear combination of the others.
* Consider a third vector, $\vec{u}$. It would be linearly dependent if: $$\vec{u} = a\vec{v_{1}} + b\vec{v_{2}}$$
* Basically, this would be having a vector that doesn't really do anything. If I said, go 3 miles east, then four miles north, those vectors would be **linearly independent**. But if I added on 'go five miles northeast,' that would be useless information, denoting **linear dependency**.
The **rank** is the dimension of the output of a linear transformation. It is equal to the dimension of the column space (the number of linearly independent columns). When the rank is the same as the number of columns in the matrix, the transformation spans all the dimensions it possibly can, and the matrix is considered **full rank**
A **column space** is the set of all possible outputs of a linear transformation, also known as the *span of the columns in the matrix*
The **null space**, or the **kernel**, of $A$ is the set of all vectors $\vec{x}$ that are mapped to the zero vector, i.e. $$A\vec{x}=\vec{0}$$
The set of all outputs $A\vec{x}$ is known as the image of $A$, which coincides with the column space.
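As a small `numpy` illustration of rank (the matrix below is made up, with one row chosen as a multiple of another):

```python
import numpy as np

B = np.array([[1, 2, 3],
              [2, 4, 6],   # 2x the first row -> linearly dependent
              [0, 1, 1]])

np.linalg.matrix_rank(B)     # 2, so B is not full rank
```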
#### Linear Transformations
Essentially a function, taking in a vector as a parameter and outputting another vector.
Two important properties:
* Lines remain lines
* Origin remains fixed
Try to keep in mind that with any linear transformation in 2D, the *gridlines stay parallel and evenly spaced.*
Looking at this, where the vector started at $(-1, 2)$, we can see that this is essentially a linear algebra distributive property.
Commonly, the transformed $\hat{i}$ and $\hat{j}$ are represented as follows:
And finally, we can represent the linear transformation as:
A common transformation is a 90° counterclockwise rotation:
* $\hat{i} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$
* $\hat{j} = \begin{bmatrix} -1 \\ 0 \end{bmatrix}$
##### The big idea
**All matrices can be thought of as linear transformations.**
#### Matrix Composition
Multiple transformations can be thought of as one transformation, a **composition**, the product of two transformation matrices
Note here that **order matters**. The right-hand matrix (rotation) is the *first* transformation, and the left-hand (shear) is the *second* transformation. It is similar to how in the equation $f(g(x))$, $g(x)$ is evaluated first.
With that said, matrix multiplication *is* associative. That is, if we have matrices $A, B \mbox{ and } C$
$$A(BC) = (AB)C$$
because we are just applying three transformations in the same order
#### Determinants
The **scaled area** enclosed by the transformed vector components $\hat{i}\mbox{ and }\hat{j}$ — the factor by which the transformation scales areas — is called the **determinant**
* In 3D, the determinant is the **scaled volume** of the **parallelepiped** spanned by the transformed basis vectors
The determinant of a matrix whose columns are *linearly dependent* is zero. Thus, computing the determinant is a way to test the linear dependence of two vectors.
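For example, with `numpy` (made-up matrices):

```python
import numpy as np

dependent = np.array([[1, 2],
                      [3, 6]])   # second column is 2x the first
independent = np.array([[1, 0],
                        [0, 2]])

np.linalg.det(dependent)    # 0.0 -> columns are linearly dependent
np.linalg.det(independent)  # 2.0 -> areas get scaled by a factor of 2
```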
When the **orientation** of space has been **inverted** ($\hat{i}\mbox{ now on the right of }\hat{j}$), the **determinant is negative**
#### Systems of Equations
Commonly, the matrix of the coefficients is called $A$, the variable vector is referred to as $\vec{x}$, and the vector of the constants is known as $\vec{v}$. This gives us the equation: $$A\vec{x}=\vec{v}$$
In words: Applying the transformation $A$ to vector $\vec{x}$ makes $\vec{x}$ land on $\vec{v}$
If we want to recover $\vec{x}$ from $\vec{v}$, we have to "divide by" the matrix $A$ — that is, **multiply by the inverse matrix,** $A^{-1}$
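A small `numpy` sketch of both views of solving the system (the numbers are made up):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])
v = np.array([3., 5.])

x = np.linalg.solve(A, v)        # solve A x = v directly
x_inv = np.linalg.inv(A) @ v     # same thing via the inverse matrix

np.allclose(x, x_inv)            # True
```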
#### Identity Matrix
Multiplying the transformation by its inverse, $A\times A^{-1}$, results in no change at all. The product is called the **identity matrix**, and has the form:
\begin{equation*}
\mbox{$A\times A^{-1}$} =
\begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix}
\end{equation*}
If the determinant is zero, then there **is no inverse matrix**, $A^{-1}$. In 2D, this is when the output is squished onto a line. In 3D, it is squished onto a plane. These have **ranks** of 1 and 2, respectively.
A **rank** is therefore the dimensions of the output of a linear transformation.
A **column space** is the set of all possible outputs of a linear transformation.
#### Nonsquare Matrices
Consider a $2 \times 3$ matrix, $B$
\begin{equation*}
B =
\begin{bmatrix}
a & b \\
d & e \\
g & h
\end{bmatrix}
\end{equation*}
This would mean we have a *2D input* mapping to a *3D output*. The number of columns corresponds to the input dimension, and the number of rows to the output dimension.
#### Dot Product
##### Properties
Thinking in terms of projections helps explain these.
* Perpendicular: Dot product is *zero*
* Similar direction: Dot product is *positive*
* Opposite direction: Dot product is *negative*
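A quick check of the sign behaviour with `numpy` (arbitrary vectors):

```python
import numpy as np

a = np.array([1, 0])

np.dot(a, np.array([0, 1]))    # 0: perpendicular
np.dot(a, np.array([2, 1]))    # positive: similar direction
np.dot(a, np.array([-2, 1]))   # negative: opposite direction
```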
#### Cross Product
Sign changes, order matters. Remember it with $\hat{i}\mbox{ and }\hat{j}$. Their cross product, with $\hat{i}$ on the left of the multiplication, would be positive.
Big idea with sign? - If the first vector is on the left of the second, then the result will be positive.
##### Properties:
* As vectors get closer to being perpendicular, the result increases
* The result is a vector
* Perpendicular to the parallelogram formed, considering the right hand rule
##### Calculation
It is the determinant, but with the first column being the basis vectors.
Remember that this involves striking through the rows and then carrying out the determinant like usual.
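`numpy` can do the calculation for us; here is a small check with $\hat{i}$ and $\hat{j}$:

```python
import numpy as np

i_hat = np.array([1, 0, 0])
j_hat = np.array([0, 1, 0])

np.cross(i_hat, j_hat)   # array([0, 0, 1]): k-hat, by the right-hand rule
np.cross(j_hat, i_hat)   # array([0, 0, -1]): swapping the order flips the sign
```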
#### Duality
Any linear transformation from the 2D plane to the 1D number line can be equivalently thought of as a $1\times 2$ matrix or as a vector (a $2 \times 1$ column)
The vector equivalent to this linear transformation is called the **dual vector**
#### Cramer's Rule
The "logic" behind this is that areas get scaled by the same transformation, $A$. See [this video](https://www.youtube.com/watch?v=jBsC34PxzoM&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab&index=12) for more information.
#### Change of Basis
We usually use the basis vectors $\hat{i}\mbox{ and }\hat{j}$, but we can choose different basis vectors and then describe vectors as scalings of those instead.
Translating between coordinate systems uses a matrix translator of sorts, called the **change of basis matrix**.
Multiplying the change of basis matrix (written in our coordinates) by a vector expressed in the other coordinate system gives the same vector in our coordinates. If we instead multiply our coordinates by the inverse of the change of basis matrix, we translate our coordinates into the other coordinate system.
$$A^{-1}MA$$
The above equation generally suggests a coordinate system shift. Computing this, where $A$ is the change of basis matrix and $M$ is the transformation will result in the matrix that can translate a vector in the other coordinate system in the **same way** as it translates one in our coordinate system.
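A short numerical sketch of this sandwich product (the basis and transformation below are made up for illustration):

```python
import numpy as np

A = np.array([[2., 1.],    # columns: the other system's basis vectors,
              [1., 1.]])   # written in our coordinates
M = np.array([[0., -1.],   # a transformation written in our coordinates
              [1.,  0.]])

# The same transformation, expressed in the other coordinate system
np.linalg.inv(A) @ M @ A
```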
#### Eigenvectors
Vectors that remain on their same span when transformed. They are only stretched or shrunk, but **not** knocked off of their span path.
Starting from $A\vec{v}=\lambda\vec{v}$, we turn both sides into matrix-vector multiplication by multiplying $\lambda$ by the identity matrix, $I$, which brings us to $(A-\lambda I)\vec{v}=\vec{0}$.
Often, this can be calculated by finding the roots of the characteristic polynomial, $\det(A-\lambda I)=0$. Here, we can see an example of this, showing that the matrix \begin{equation*}
B =
\begin{bmatrix}
0 & -1\\
1 & 0 \\
\end{bmatrix}
\end{equation*}
(the 90° rotation matrix) has no real eigenvectors.
On the other hand, there are cases where the diagonal entries of the matrix are the eigenvalues themselves. This happens with diagonal matrices.
Change of basis is used in conjunction with eigenvectors and diagonal matrices to simplify computations, rerouting the coordinate system to one built from eigenvectors (an eigenbasis).
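`numpy` computes eigenvalues and eigenvectors together; a small made-up example of diagonalizing through the eigenbasis:

```python
import numpy as np

A = np.array([[3., 1.],
              [1., 3.]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # eigenvectors come back as columns

# Change of basis into the eigenbasis diagonalizes A:
# the eigenvalues appear on the diagonal
np.round(np.linalg.inv(eigenvectors) @ A @ eigenvectors, 10)
```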
#### Linearity
```python
from IPython.core.display import HTML
def css_styling():
styles = open("../../jupyter-styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
<style>
@font-face {
font-family: "Computer Modern";
src: url('http://9dbb143991406a7c655e-aa5fcb0a5a4ec34cff238a2d56ca4144.r56.cf5.rackcdn.com/cmunss.otf');
}
@font-face {
font-family: "Computer Modern";
font-weight: bold;
src: url('http://9dbb143991406a7c655e-aa5fcb0a5a4ec34cff238a2d56ca4144.r56.cf5.rackcdn.com/cmunsx.otf');
}
@font-face {
font-family: "Computer Modern";
font-style: oblique;
src: url('http://9dbb143991406a7c655e-aa5fcb0a5a4ec34cff238a2d56ca4144.r56.cf5.rackcdn.com/cmunsi.otf');
}
@font-face {
font-family: "Computer Modern";
font-weight: bold;
font-style: oblique;
src: url('http://9dbb143991406a7c655e-aa5fcb0a5a4ec34cff238a2d56ca4144.r56.cf5.rackcdn.com/cmunso.otf');
}
div.cell{
width:800px;
margin-left:16% !important;
margin-right:auto;
}
h1 {
font-family: Helvetica, serif;
}
h4{
margin-top:12px;
margin-bottom: 3px;
}
div.text_cell_render{
font-family: Computer Modern, "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif;
line-height: 145%;
font-size: 130%;
width:800px;
margin-left:auto;
margin-right:auto;
}
.CodeMirror{
font-family: "Source Code Pro", source-code-pro,Consolas, monospace;
}
.prompt{
display: None;
}
.text_cell_render h5 {
font-weight: 300;
font-size: 22pt;
color: #4057A1;
font-style: italic;
margin-bottom: .5em;
margin-top: 0.5em;
display: block;
}
.warning{
color: rgb( 240, 20, 20 )
}
</style>
| 3715f3d39d72a67a0196b9e58346ff716d672b46 | 27,605 | ipynb | Jupyter Notebook | _posts/ml/Essence-of-Linear-Algebra.ipynb | luke-anglin/luke-anglin.github.io | 1ac370a52dd66c8ce7206cf57296d513103e92e9 | [
"MIT"
]
| null | null | null | _posts/ml/Essence-of-Linear-Algebra.ipynb | luke-anglin/luke-anglin.github.io | 1ac370a52dd66c8ce7206cf57296d513103e92e9 | [
"MIT"
]
| null | null | null | _posts/ml/Essence-of-Linear-Algebra.ipynb | luke-anglin/luke-anglin.github.io | 1ac370a52dd66c8ce7206cf57296d513103e92e9 | [
"MIT"
]
| null | null | null | 26.492322 | 684 | 0.52121 | true | 3,562 | Qwen/Qwen-72B | 1. YES
2. YES | 0.817574 | 0.79053 | 0.646317 | __label__eng_Latn | 0.99 | 0.339943 |
# Algorithms Exercise 2
## Imports
```python
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
```
## Peak finding
Write a function `find_peaks` that finds and returns the indices of the local maxima in a sequence. Your function should:
* Properly handle local maxima at the endpoints of the input array.
* Return a Numpy array of integer indices.
* Handle any Python iterable as input.
```python
def find_peaks(a):
    """Find the indices of the local maxima in a sequence."""
    a = list(a)  # accept any Python iterable
    maxima = []
    for x in range(len(a)):
        # An endpoint only needs to beat its single neighbour
        left_ok = (x == 0) or (a[x] > a[x - 1])
        right_ok = (x == len(a) - 1) or (a[x] > a[x + 1])
        if left_ok and right_ok:
            maxima.append(x)
    return np.array(maxima)
```
```python
a = [1,3,6,4,5,1]
find_peaks(a)
```
array([2, 4])
```python
p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert np.allclose(p3, np.array([0]))
```
Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following:
* Convert that string to a Numpy array of integers.
* Find the indices of the local maxima in the digits of $\pi$.
* Use `np.diff` to find the distances between consequtive local maxima.
* Visualize that distribution using an appropriately customized histogram.
```python
from sympy import pi, N
pi_digits_str = str(N(pi, 10001))[2:]
```
```python
# Convert the digit string to a Numpy array of integers
a = np.array([int(digit) for digit in pi_digits_str])

# Indices of the local maxima and the distances between consecutive ones
m = find_peaks(a)
s = np.diff(m)

plt.hist(s, bins=50)
```
```python
?np.split
```
```python
assert True # use this for grading the pi digits histogram
```
| 4d4cda5379dbc6872d7f468e3df2803e93f214f3 | 14,259 | ipynb | Jupyter Notebook | assignments/assignment07/AlgorithmsEx02.ipynb | LimeeZ/phys292-2015-work | d31e1e0f5dc7fa37dcfd77f59f76431d1478ea06 | [
"MIT"
]
| null | null | null | assignments/assignment07/AlgorithmsEx02.ipynb | LimeeZ/phys292-2015-work | d31e1e0f5dc7fa37dcfd77f59f76431d1478ea06 | [
"MIT"
]
| null | null | null | assignments/assignment07/AlgorithmsEx02.ipynb | LimeeZ/phys292-2015-work | d31e1e0f5dc7fa37dcfd77f59f76431d1478ea06 | [
"MIT"
]
| null | null | null | 51.291367 | 7,216 | 0.724174 | true | 511 | Qwen/Qwen-72B | 1. YES
2. YES | 0.749087 | 0.91611 | 0.686246 | __label__eng_Latn | 0.967943 | 0.432711 |
### Inversion of an Hermitian Matrix
Author: Andrés Gómez - 2020
The objective of this notebook is to solve the linear system
$$M\vec{x}=\vec{b}$$
where $M \in \mathbb{C}^{2^n\times2^n}$ is a Hermitian matrix with eigenvalues $\{\lambda_j, j=1\dots 2^n\}$, and $\vec{x}, \vec{b} \in \mathbb{C}^{2^n}$
The original algorithm was proposed by [Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd](https://arxiv.org/abs/0811.3171); it was later improved, and a simpler version, described in [Danial Dervovic, Mark Herbster, Peter Mountney, Simone Severini, Naïri Usher, Leonard Wossnig](https://arxiv.org/abs/1802.08227), is the one used in this notebook.
The algorithm assumes that $\vec{b}$ is normalized, so it can be loaded into a quantum register and decomposed in the eigenbasis $\{|u_j\rangle\}$ of $M$ as
$$\vec{b}=\sum_{j=1}^{2^n}\beta_j|u_j\rangle$$
Let $C \in \mathbb{R}$ be a constant such that $C<\min\{\lambda_j\}$, where $\{\lambda_j\}$ is the set of eigenvalues of $M$. In this case, the algorithm will find an approximation of $\vec{x}$ as
$$|x\rangle \approx C \sum_{j=1}^{2^n} \frac{\beta_j}{\lambda_j}|u_j\rangle$$
The algorithm uses three registers:
1. C or clock register, where the eigenvalues of the matrix $M$ are stored, with the number of qubits desired for the accuracy.
2. I to store $\vec{b}$
3. One ancilla qubit to calculate the inversion of the eigenvalues
For this notebook, the QuantumRegister is defined as:
$$|ancilla\rangle \otimes |Clock\rangle \otimes |b\rangle$$
The algorithm has 3 steps:
1. Calculate on register C the eigenvalues of $M$ using the Quantum Phase Estimation algorithm. Because $\vec{b}$ is a superposition of the eigenvectors, all the corresponding eigenvalues will be stored in this register, thanks to the parallelism of the quantum operators
2. Invert the eigenvalues using controlled $R_y$ rotations over the ancilla qubit
3. Apply the inverse of step 1
Let's do it step by step.
**NOTE**. This version of the algorithm can work only for Hermitian matrices with eigenvalues $0<\lambda\leq 2^{accuracy}$
```python
import projectq
from projectq.cengines import MainEngine
from projectq.ops import H,X,Ry,Rx,C,Measure,QFT,get_inverse,All,Swap,QubitOperator,TimeEvolution
from projectq.meta import Control,Compute,Uncompute
import numpy as np
import math
#import cmath
```
Auxiliary functions to show the matrix and quantum states
```python
def MatrixToLatex(A):
a="\\begin{pmatrix}"
for i in range(A.shape[0]):
for j in range(A.shape[1]):
if ((j+1)%A.shape[1])==0:
a=a+"{0:.2f}".format(A[i,j])
else:
a=a+"%s&"%"{0:.2f}".format(A[i,j])
if ((i+1)%A.shape[0])!=0:
a=a+"\\\\"
a=a+"\\end{pmatrix}"
return(a)
def Display(string):
from IPython.display import display, Markdown
display(Markdown(string))
def get_state_as_str(eng,qubits,cheat=False):
import numpy as np
s="$"
if (cheat):
print("Cheat: ", eng.backend.cheat())
for j in range(2**(len(qubits))):
bits=np.binary_repr(j,width=len(qubits))
a=eng.backend.get_amplitude("%s"%(bits[-1::-1]),qubits)
if (abs(a.real)>0.0000001)|(abs(a.imag)>0.0000001):
#print("Añado")
if s!="$":
s=s+"+"
a="({:.5f})".format(a)
s=s+"%s|%s\\rangle_a|%s\\rangle_C|%s\\rangle_b"%(a,bits[0],bits[1:-2],bits[-2:])
#print(s)
s=s+"$"
#Display(s)
return(s)
```
## <span style="color:blue"> 1. Create the matrix M</span>
Create a matrix $M$ from a spectral decomposition. Let the eigenvectors be
$$v_1=\frac{1 }{\sqrt{2}}(|00\rangle+|01\rangle)\\ v_2=\frac{1 }{\sqrt{2}}(|00\rangle-|01\rangle) \\v_3=\frac{1 }{\sqrt{2}}(|10\rangle+|11\rangle) \\ v_4=\frac{1 }{\sqrt{2}}(|10\rangle-|11\rangle)$$
and the eigenvalues $\lambda_1=16,\lambda_2=8,\lambda_3=4,\lambda_4=2$
Define the matrix
$$M=\lambda_1|v_1\rangle\langle v_1| + \lambda_2|v_2\rangle\langle v_2| + \lambda_3|v_3\rangle\langle v_3| + \lambda_4|v_4 \rangle\langle v_4|$$
```python
Lambda=[16,8,4,2]
Chi1P=(1/math.sqrt(2))*np.array([[1],[1],[0],[0]])
Chi1M=(1/math.sqrt(2))*np.array([[1],[-1],[0],[0]])
Chi2P=(1/math.sqrt(2))*np.array([[0],[0],[1],[1]])
Chi2M=(1/math.sqrt(2))*np.array([[0],[0],[1],[-1]])
Vector=[Chi1P,Chi1M,Chi2P,Chi2M] # Two, Three]
M=np.zeros((len(Chi1P),len(Chi1P)))
for i in range(len(Vector)):
M=M+Lambda[i]*np.dot(Vector[i],Vector[i].T)
Display("M=%s"%MatrixToLatex(M))
```
M=\begin{pmatrix}12.00&4.00&0.00&0.00\\4.00&12.00&0.00&0.00\\0.00&0.00&3.00&1.00\\0.00&0.00&1.00&3.00\end{pmatrix}
Check that this matrix has the expected eigenvalues and eigenvectors
```python
E,v=np.linalg.eig(M)
Display("Eigenvalues: %s"%np.array2string(E,separator=", "))
Display("Eigenvectors: %s"%np.array2string(v,separator=", "))
for i in range(len(Vector)):
Display("M|v_%d> = %s must be Lambda[%d]*|v[%d]>=%s"%(i,np.array2string(np.dot(M,Vector[i]).T), i,i,np.array2string(Lambda[i]*Vector[i].T,separator=", ")))
```
Eigenvalues: [16., 8., 4., 2.]
Eigenvectors: [[ 0.70710678, -0.70710678, 0. , 0. ],
[ 0.70710678, 0.70710678, 0. , 0. ],
[ 0. , 0. , 0.70710678, -0.70710678],
[ 0. , 0. , 0.70710678, 0.70710678]]
M|v_0> = [[11.3137085 11.3137085 0. 0. ]] must be Lambda[0]*|v[0]>=[[11.3137085, 11.3137085, 0. , 0. ]]
M|v_1> = [[ 5.65685425 -5.65685425 0. 0. ]] must be Lambda[1]*|v[1]>=[[ 5.65685425, -5.65685425, 0. , 0. ]]
M|v_2> = [[0. 0. 2.82842712 2.82842712]] must be Lambda[2]*|v[2]>=[[0. , 0. , 2.82842712, 2.82842712]]
M|v_3> = [[ 0. 0. 1.41421356 -1.41421356]] must be Lambda[3]*|v[3]>=[[ 0. , 0. , 1.41421356, -1.41421356]]
### Unitary operator from the Hermitian Matrix
From the Hermitian matrix $M \in \mathbb{C}^{2^n\times2^n}$, it is possible to create a unitary operator $U_M=e^{iM}$ with eigenvalues $e^{i\lambda_i}$, where $\lambda_i$ are the eigenvalues of $M$, and with the same eigenvectors
Check that $U_M |v_i>=e^{iM}|v_i>=e^{i\lambda_i} |v_i>$
```python
from scipy.linalg import expm
for i in range(len(Vector)):
OP=np.dot(expm(1j*M),Vector[i])
EIG=np.exp(1j*Lambda[i])*Vector[i]
Display("$$ U_M |v[%d]\\rangle=%s,e^{i\lambda_%d}|v[%d]\\rangle=%s$$"%(i,MatrixToLatex(OP),i,i,MatrixToLatex(EIG)))
```
$$ U_M |v[0]\rangle=\begin{pmatrix}-0.68-0.20j\\-0.68-0.20j\\0.00+0.00j\\0.00+0.00j\end{pmatrix},e^{i\lambda_0}|v[0]\rangle=\begin{pmatrix}-0.68-0.20j\\-0.68-0.20j\\0.00-0.00j\\0.00-0.00j\end{pmatrix}$$
$$ U_M |v[1]\rangle=\begin{pmatrix}-0.10+0.70j\\0.10-0.70j\\0.00+0.00j\\0.00+0.00j\end{pmatrix},e^{i\lambda_1}|v[1]\rangle=\begin{pmatrix}-0.10+0.70j\\0.10-0.70j\\-0.00+0.00j\\-0.00+0.00j\end{pmatrix}$$
$$ U_M |v[2]\rangle=\begin{pmatrix}0.00+0.00j\\0.00+0.00j\\-0.46-0.54j\\-0.46-0.54j\end{pmatrix},e^{i\lambda_2}|v[2]\rangle=\begin{pmatrix}0.00-0.00j\\0.00-0.00j\\-0.46-0.54j\\-0.46-0.54j\end{pmatrix}$$
$$ U_M |v[3]\rangle=\begin{pmatrix}0.00+0.00j\\0.00+0.00j\\-0.29+0.64j\\0.29-0.64j\end{pmatrix},e^{i\lambda_3}|v[3]\rangle=\begin{pmatrix}-0.00+0.00j\\-0.00+0.00j\\-0.29+0.64j\\0.29-0.64j\end{pmatrix}$$
Because the eigenvalues in this case are integers, they have an exact binary representation
```python
for i in range(len(Lambda)):
print("Binary of %.0f is "%(Lambda[i]),"{0:05b}".format(int(Lambda[i])))
```
Binary of 16 is 10000
Binary of 8 is 01000
Binary of 4 is 00100
Binary of 2 is 00010
### Matrix decomposition
Any matrix $M \in \mathbb{C}^{2^n\times 2^n}$, where $n$ is the number of qubits, can be decomposed into tensor products of the extended Pauli set $\Sigma=\{I,X,Y,Z\}$.
If $\sigma_i \in \Sigma, i=1,2,3,4$, then
$$M=\sum_{ijk\dots l=1}^4 A_{ijk\dots l} \sigma_i\otimes\sigma_j\otimes\sigma_k\otimes \dots \otimes\sigma_l$$
where
$$A_{ijk\dots l}=\frac{1}{2^n}Tr[\sigma_i\otimes\sigma_j\otimes\sigma_k\otimes \dots \otimes\sigma_l M]$$
If the matrix M is Hermitian, $A_{ijk\dots l} \in \mathbb{R}$
The next function, **DecompositionOnSigmas**, computes this decomposition and returns it as a **[QubitOperator](https://projectq.readthedocs.io/en/latest/projectq.ops.html#projectq.ops.QubitOperator)**
```python
def ProductTensor(A):
a=A[-1]
for i in range(len(A)-2,-1,-1):
a=np.tensordot(A[i],a,axes=0)
a=np.concatenate((np.concatenate((a[0][0],a[0][1]),axis=1),np.concatenate((a[1][0],a[1][1]),axis=1)))
return a
def DecompositionOnSigmas(A):
I=np.array([[1,0],[0,1]])
X=np.array([[0,1],[1,0]])
Y=np.array([[0,-1j],[1j,0]])
Z=np.array([[1,0],[0,-1]])
Pauli={"I":I,"X":X,"Y":Y,"Z":Z}
import itertools
n=int(math.log2(A.shape[0]))
Ham=QubitOperator()
for i in itertools.product("IXYZ",repeat=n):
AxB=ProductTensor([Pauli[i[0]],Pauli[i[1]]])
coef=(1/2**n)*complex(np.trace(np.dot(AxB,A)))
if (coef.real!=0) | (coef.imag!=0):
Paulis=""
if i[0][0]!="I":
Paulis=Paulis+"%s1"%i[0]
if i[1][0]!="I":
Paulis=Paulis+" %s0"%i[1]
Ham=Ham+QubitOperator(Paulis,coef)
return Ham
```
The decomposition of $M$ is
$$M=a_{11}(I\otimes I)+ a_{12}(I\otimes X) +a_{13}(I\otimes Y)+ a_{14}(I\otimes Z)\\+a_{21}(X\otimes I)+a_{22}(X\otimes X)+a_{23}(X\otimes Y)+a_{24}(X\otimes Z)\\+a_{31}(Y\otimes I)+a_{32}(Y\otimes X)+a_{33}(Y\otimes Y)+a_{34}(Y\otimes Z)\\+a_{41}(Z\otimes I)+a_{42}(Z\otimes X)+a_{43}(Z\otimes Y)+a_{44}(Z\otimes Z)$$
$$M= 7.5(I\otimes I) + 2.5(I\otimes X) + 4.5(Z\otimes I) +1.5(Z\otimes X)$$
For example:
$$a_{11}=\frac{1}{2^2}Tr((I\otimes I)M)=\frac{1}{4}Tr\left[ \begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}\begin{pmatrix}12&4&0&0\\4&12&0&0\\0&0&3&1\\0&0&1&3\end{pmatrix}\right]=\\
=\frac{1}{4}Tr\left[\begin{pmatrix}12&0&0&0\\0&12&0&0\\0&0&3&0\\0&0&0&3\end{pmatrix} \right]=\frac{1}{4}30=7.5$$
```python
DecompositionOnSigmas(M)
```
(1.4999999999999996+0j) X0 Z1 +
(7.499999999999998+0j) I +
(2.4999999999999996+0j) X0 +
(4.499999999999998+0j) Z1
## <span style="color:blue"> 2. First step: Calculate eigenvalues using Quantum Phase Estimation algorithm </span>
Now, construct the phase estimation circuit. In this case we will build the unitary operator using the **[TimeEvolution](https://projectq.readthedocs.io/en/latest/projectq.ops.html#projectq.ops.TimeEvolution)** gate of ProjectQ. This gate applies the time evolution of a Hamiltonian (in our case, the decomposition of $M$ on the $\sigma_i$) as $$U_M=e^{-iMt}$$
We will choose $$t=\frac{2\pi}{2^{accuracy}}$$ where *accuracy* is the number of desired binary digits for our eigenvalues.
This will map the eigenvalues of the matrix $M$ onto the states of the Clock register. Because the $-$ sign is implicit in the TimeEvolution operator and the positive exponent $e^{iMt}$ is desired to calculate the eigenvalues, a $-$ sign must be included in the time passed to TimeEvolution.
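As a quick worked example of this choice: for $\lambda_1=16$ and $accuracy=5$, each application of $U_M$ adds a phase
$$e^{i\lambda_1 t}=e^{i2\pi\frac{16}{2^{5}}}=e^{i\pi}$$
so the phase estimated by the QPE step is $\frac{16}{2^{5}}$ and the Clock register ends in the state $|10000\rangle$, which is exactly the binary representation of the eigenvalue 16.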
**CalculateEigenvalues** accepts as argument the index of a vector. For indices 0 to 3, this number initializes the register $b$ with the corresponding eigenvector of $M$. For indices 4 to 6, $b$ is initialized to:
$4, |b_4\rangle=|01\rangle=\frac{\sqrt{2}}{2}(|v_1\rangle - |v_2\rangle)=\beta_1|v_1\rangle + \beta_2 |v_2\rangle$
$5, |b_5\rangle=|10\rangle=\frac{\sqrt{2}}{2}(|v_3\rangle + |v_4\rangle)$
$6, |b_6\rangle=H_1 R_{x0}(0.05)R_{x1}(0.25)|00\rangle$
Let's see what happens with these vectors and why we have selected this evolution time.
The controled operation $CU_M$ with a single additional qubit of the unitary gate $U_M$ is defined as:
$$CU_M=|0\rangle\langle0|\otimes I +|1\rangle\langle1|\otimes U_M$$
so,
$$CU_M(H|0\rangle\otimes|b_4\rangle)=CU_M[(\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)\otimes|b_4\rangle)]=$$
$$=\frac{1}{\sqrt{2}}[|0\rangle\otimes|b_4\rangle + |1\rangle\otimes U_M(|b_4\rangle)]=$$
$$=\frac{1}{\sqrt{2}}[|0\rangle\otimes (\beta_1 |v_1\rangle + \beta_2 |v_2\rangle) + |1\rangle\otimes U_M(\beta_1 |v_1\rangle + \beta_2 |v_2\rangle)]$$
$$= \frac{1}{\sqrt{2}}[|0\rangle \otimes(\beta_1 |v_1\rangle + \beta_2 |v_2\rangle)+ |1\rangle\otimes(\beta_1 e^{i\lambda_1t}|v_1\rangle + \beta_2 e^{i\lambda_2t}|v_2\rangle)]$$
$$=\frac{\beta_1}{\sqrt{2}}[|0\rangle \otimes |v_1\rangle + e^{i\lambda_1t}|1\rangle \otimes |v_1\rangle)]
+ \frac{\beta_2}{\sqrt{2}}[|0\rangle \otimes |v_2\rangle + e^{i\lambda_2t}|1\rangle \otimes |v_2\rangle)]$$
$$=\frac{1}{\sqrt{2}}[(|0\rangle + e^{i\lambda_1t}|1\rangle) \otimes \beta_1|v_1\rangle)]
+ \frac{1}{\sqrt{2}}[(|0\rangle + e^{i\lambda_2t}|1\rangle) \otimes \beta_2|v_2\rangle)]$$
Passing the eigenvalues to the control qubit and keeping the superposition of $|v_1\rangle$ and $|v_2\rangle$ on register $|b\rangle$
Defining the unitary operation of operator $U$ controlled by qubit $l$ as $C^lU$, if we apply $\prod_{l=0}^{accuracy-1}C^l(U_M)^{2^l}$ to the state $H^{\otimes accuracy}|0\rangle \otimes |b_4\rangle$, the result is:
$$\prod_{l=0}^{accuracy-1}C^l(U_M)^{2^l}[H^{\otimes accuracy}|0\rangle \otimes |b_4\rangle] = [ \frac{\beta_1}{2^{accuracy/2}}\sum_{k=0}^{2^{accuracy}-1} e^{i\lambda_1 tk}|k\rangle \otimes |v_1\rangle ]+ [\frac{\beta_2}{2^{accuracy/2}}\sum_{k=0}^{2^{accuracy}-1} e^{i\lambda_2 tk}|k\rangle \otimes |v_2\rangle]$$
Choosing $t=\frac{2\pi}{2^{accuracy}}$, the final state after the controlled operations is:
$$[ \frac{\beta_1}{2^{accuracy/2}}\sum_{k=0}^{2^{accuracy}-1} e^{i2\pi k \frac{\lambda_1}{2^{accuracy}}}|k\rangle \otimes |v_1\rangle ]+ [\frac{\beta_2}{2^{accuracy/2}}\sum_{k=0}^{2^{accuracy}-1} e^{i2\pi k \frac{\lambda_2}{2^{accuracy}}}|k\rangle \otimes |v_2\rangle]$$
Now, applying now the inverse Quantum Fourier Transform on the control qubits:
$$(iQFT\otimes I)([ \frac{\beta_1}{2^{accuracy/2}}\sum_{k=0}^{2^{accuracy}-1} e^{i2\pi k \frac{\lambda_1}{2^{accuracy}}}|k\rangle \otimes |v_1\rangle ]+ [\frac{\beta_2}{2^{accuracy/2}}\sum_{k=0}^{2^{accuracy}-1} e^{i2\pi k \frac{\lambda_2}{2^{accuracy}}}|k\rangle \otimes |v_2\rangle])=$$
$$=iQFT( \frac{\beta_1}{2^{accuracy/2}}\sum_{k=0}^{2^{accuracy}-1} e^{i2\pi k \frac{\lambda_1}{2^{accuracy}}}|k\rangle) \otimes I|v_1\rangle+
iQFT( \frac{\beta_2}{2^{accuracy/2}}\sum_{k=0}^{2^{accuracy}-1} e^{i2\pi k \frac{\lambda_2}{2^{accuracy}}}|k\rangle) \otimes I|v_2\rangle
=$$
$$=\beta_1|\lambda_1\rangle\otimes|v_1\rangle + \beta_2|\lambda_2\rangle\otimes|v_2\rangle
$$
As a consequence, the state holds a superposition of all the eigenvalues of $M$ (those with non-zero $\beta_j$) in the control register.
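A small classical cross-check of which $\beta_j$ are non-zero for $|b_4\rangle=|01\rangle$, reusing the `Vector` and `Lambda` lists defined above (this is just a numpy projection, not part of the quantum circuit):

```python
b4 = np.array([[0.], [1.], [0.], [0.]])          # |01> as a column vector

for j, v in enumerate(Vector):
    beta = (v.T @ b4).item()                     # overlap <v_j|b_4>
    print("beta_%d = %+.4f (lambda_%d = %d)" % (j + 1, beta, j + 1, Lambda[j]))
```

Only $\beta_1$ and $\beta_2$ are non-zero, which is why only $\lambda_1$ and $\lambda_2$ appear in the Clock register for this input.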
### Operations to init the state to the values of different vectors
```python
def InitState(b,vector=0,eng=None,cheat=False):
"""
Init the vector b
"""
if vector==0:
"""
1/sqrt(2)(|00>+|01>)
"""
H|b[0]
if vector==1:
"""
1/sqrt(2)(|00>-|01>)
"""
X|b[0]
H|b[0]
if vector==2:
"""
1/sqrt(2)(|10>+|11>)
"""
X|b[1]
H|b[0]
if vector==3:
"""
1/sqrt(2)(|10>-|11>)
"""
X|b[1]
X|b[0]
H|b[0]
if vector==4:
"""
|01>
"""
X|b[0]
if vector==5:
"""
|10>
"""
X|b[1]
if vector==6:
Rx(0.05)|b[0]
Rx(0.25)|b[1]
H|b[1]
if (eng!=None) & cheat:
eng.flush()
Display(get_state_as_str(eng,b+Clock))
return
```
### Quantum Phase Estimation algorithm for an Hermitian Matrix
```python
def QPE(M,Clock,b,eng=None,cheat=True):
Ham=DecompositionOnSigmas(M)
accuracy=len(Clock)
t0=2*math.pi/2**accuracy
"""
Init the Clock
"""
All(H)|Clock
"""
Apply the time evolution of the Hamiltonian
"""
for i in range(len(Clock)):
with Control(eng,Clock[i]):
TimeEvolution(time=-t0*2**i,hamiltonian=Ham)|b
"""
Apply the iQFT
"""
for i in range(len(Clock)//2):
Swap | (Clock[i],Clock[len(Clock)-i-1])
get_inverse(QFT)|Clock
#H|C
    if (eng!=None) & cheat:
eng.flush()
Display(get_state_as_str(eng,b+Clock))
```
### Main function to calculate the eigenvalues of an Hermitian Matrix
```python
def CalculateEigenvalues(M,accuracy,vector=0,cheat=False):
eng=MainEngine()
cols = M.shape[0]
m = int(math.log2(cols))
Clock = eng.allocate_qureg(accuracy)
b = eng.allocate_qureg(m)
InitState(b,vector=vector,eng=eng,cheat=cheat)
QPE(M,Clock,b,eng,cheat)
"""
Measure the registers
"""
All(Measure)|Clock
All(Measure)|b
eng.flush()
"""
Get output
"""
output=[int(q) for q in Clock]
ancilla=[int(q) for q in b]
del Clock
del b
del eng
"""
Calculate the Eigenvalue
"""
bits=0
for (k,i) in enumerate(output):
bits=bits+i*2.**k
return bits
```
We will calculate the phase with an accuracy of $$\frac{1}{2^5}$$
Because this is a probabilistic algorithm, we have to repeat the experiment several times. In this case, 100 times.
Calculate the eigenvalues for the eigenvectors
```python
accuracy=5
experiments=100
%matplotlib inline
import matplotlib.pyplot as plt
for j in range(0,4,1):
out=[]
for i in range(experiments):
out.append(CalculateEigenvalues(M,accuracy=accuracy,vector=j,cheat=False))
x=plt.hist(out,bins=2**accuracy,range=(0,(2**accuracy)),label="$\lambda_%d$"%(j+1))
plt.legend()
plt.show()
plt.close()
```
Choosing vector=4,
$|b_4\rangle=|01\rangle=\frac{\sqrt{2}}{2}(|v_1\rangle-|v_2\rangle)$
this is a superposition of the eigenvectors $|v_1\rangle$ and $|v_2\rangle$, so the final state after the QPE must contain eigenvalues $\lambda_1$ and $\lambda_2$
```python
out=[]
j=4
for i in range(experiments):
out.append(CalculateEigenvalues(M,accuracy=accuracy,vector=j,cheat=False))
x=plt.hist(out,bins=2**accuracy,range=(0,(2**accuracy)),label="$b_%d$"%j)
plt.legend()
plt.show()
plt.close()
```
For vector=5,
$|b_5\rangle=|10\rangle=\frac{\sqrt{2}}{2}(|v_3\rangle+|v_4\rangle)$
so, because of this superposition, the final state after the QPE must contain the eigenvalues $\lambda_3$ and $\lambda_4$
```python
out=[]
j=5
for i in range(experiments):
out.append(CalculateEigenvalues(M,accuracy=accuracy,vector=j,cheat=False))
x=plt.hist(out,bins=2**accuracy,range=(0,(2**accuracy)),color="r",label="$b_%d$"%j)
plt.legend()
plt.show()
plt.close()
```
And, because
$$|b_6\rangle=H_1 R_{x0}(0.05)R_{x1}(0.25)|00\rangle = \sum_{i=1}^4\beta_i |v_i\rangle$$
the final state must have a combination of all eigenvalues
```python
out=[]
j=6
for i in range(experiments):
out.append(CalculateEigenvalues(M,accuracy=accuracy,vector=j,cheat=False))
x=plt.hist(out,bins=2**accuracy,range=(0,(2**accuracy)),color="g",label="$b_%d$"%j)
plt.legend()
plt.show()
plt.close()
```
## <span style="color:blue"> 3. Second step: Inversion of eigenvalues </span>
After the previous step, the Clock register holds a superposition of the eigenvalues as basis states. In the next step, the values of these states are inverted into the amplitudes, using an ancilla qubit over which a set of controlled $R_y$ operations is applied.
Following the definition of the QuantumRegister
$$|ancilla\rangle \otimes |Clock\rangle \otimes |b\rangle$$
, let us assume that the ancilla and Clock registers are ordered such that one quantum state is defined by
$$|a_0 c_{n-1} c_{n-2} \dots c_0\rangle=|a_0\rangle \otimes |c_{n-1} c_{n-2} \dots c_0\rangle_C $$
being $n$ the accuracy and number of qubits on the Clock register.
```python
from sympy import *
from sympy.physics.quantum import TensorProduct
c=Symbol("C")
#Alpha=Symbol("beta_1")
#Beta=Symbol("beta_2")
Theta=Symbol("Theta")
SRy=Matrix([[cos(Theta/2),-sin(Theta/2)],[sin(Theta/2),cos(Theta/2)]])
Uno=Matrix([[0],[1]])
Zero=Matrix([[1],[0]])
#B=TensorProduct(Uno.T,Uno)
#A=TensorProduct(Zero.T,Zero)
#X_matrix=Matrix([[0,1],[1,0]])
I=Matrix([[1,0],[0,1]])
II=I.copy()
```
In this case, the $R_y$ rotation on the ancilla $|a_0\rangle$ controlled by the state $|c_i\rangle$ is defined as:
$$C_{c_i}R_y(\theta)= R_y(\theta)\otimes |c_i\rangle\langle c_i|+I\otimes (I-|c_i\rangle\langle c_i|)$$
For the control state $|10\rangle$, this is
```python
CRy=SRy.copy()
OneZero=Matrix(np.array([[0],[0],[1],[0]]))
COneZero=TensorProduct(OneZero,OneZero.T)
II=I.copy()
for i in range(1):
II=TensorProduct(I,II)
CRy=TensorProduct(CRy,COneZero)+TensorProduct(I,(II-COneZero))
CRy
```
$\displaystyle \left[\begin{matrix}1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\0 & 0 & \cos{\left(\frac{\Theta}{2} \right)} & 0 & 0 & 0 & - \sin{\left(\frac{\Theta}{2} \right)} & 0\\0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\0 & 0 & \sin{\left(\frac{\Theta}{2} \right)} & 0 & 0 & 0 & \cos{\left(\frac{\Theta}{2} \right)} & 0\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\end{matrix}\right]$
The angle of rotation will be
$$\theta=2\sin^{-1}\left(\frac{C}{\lambda_i}\right)$$
When $R_y(\theta)$ is applied to the state $|0\rangle$, the result is
$$\begin{pmatrix}\sqrt{1-\frac{C^2}{\lambda_i^2}}\\ \frac{C}{\lambda_i}\end{pmatrix}$$
```python
L_1=Symbol("lambda_1")
Rot=2*asin(c/L_1)
SRy.subs(Theta,Rot)*Zero
```
$\displaystyle \left[\begin{matrix}\sqrt{- \frac{C^{2}}{\lambda_{1}^{2}} + 1}\\\frac{C}{\lambda_{1}}\end{matrix}\right]$
So, for $\lambda_i=2$ represented as state $|10\rangle$, the result of applying the $C_{|10\rangle}R_y(\theta)$ will be
$$C_{|10>}(|0\rangle \otimes |10\rangle)=[R_y(\theta)\otimes |10\rangle\langle 10|+I\otimes (I-|10\rangle\langle 10|)](|0\rangle\otimes |10\rangle)=$$
$$R_y(\theta)|0\rangle \otimes |10\rangle\langle 10|10\rangle+I|0\rangle\otimes (I-|10\rangle\langle 10|)|10\rangle =$$
$$R_y(\theta)|0\rangle\otimes |10\rangle +I|0\rangle \otimes (0)|10\rangle=$$
$$R_y(\theta)|0\rangle\otimes |10\rangle =$$
$$\sqrt{1-\frac{C^2}{\lambda_i^2}}|0>\otimes |10\rangle + \frac{C}{\lambda_i}|1\rangle\otimes |10\rangle =$$
$$\begin{pmatrix}\sqrt{1-\frac{C^2}{\lambda_i^2}}\\ \frac{C}{\lambda_i}\end{pmatrix}\otimes \begin{pmatrix}0\\0\\1\\0\end{pmatrix}=$$
$$\begin{pmatrix}0\\0\\\sqrt{1-\frac{C^2}{\lambda_i^2}}\\0\\0\\0 \\ \frac{C}{\lambda_i}\\0\end{pmatrix}$$
```python
State=TensorProduct(Zero,TensorProduct(Uno,Zero))
Lambda_i=Symbol("lambda_1")
Rot=2*asin(c/Lambda_i)
CRyLambda=CRy.subs(Theta,Rot)
CRyLambda*State
```
$\displaystyle \left[\begin{matrix}0\\0\\\sqrt{- \frac{C^{2}}{\lambda_{1}^{2}} + 1}\\0\\0\\0\\\frac{C}{\lambda_{1}}\\0\end{matrix}\right]$
For two eigenvalues $\lambda_1$ and $\lambda_2$, the final state after Quantum Phase Estimation was
$$|\chi\rangle= |0\rangle \otimes |\lambda_1\rangle \otimes \beta_1|v_1\rangle +|0\rangle \otimes |\lambda_2\rangle \otimes \beta_2|v_2\rangle $$
After applying $C_{\lambda_2}R_y(\theta_2)C_{\lambda_1}R_y(\theta_1)$, the result state is:
$$C_{\lambda_2}R_y(\theta_2)C_{\lambda_1}R_y(\theta_1)|\chi\rangle=C_{\lambda_2}R_y(\theta_2)[R_y(\theta_1)\otimes |\lambda_1\rangle\langle\lambda_1| \otimes I + I\otimes (I-|\lambda_1\rangle\langle\lambda_1|) \otimes I] (|0\rangle \otimes |\lambda_1\rangle \otimes \beta_1|v_1\rangle +|0\rangle \otimes |\lambda_2\rangle \otimes \beta_2|v_2\rangle)=$$
$$C_{\lambda_2}R_y(\theta_2)[(\sqrt{1-\frac{C^2}{\lambda_1^2}}|0> + \frac{C}{\lambda_1}|1\rangle) \otimes |\lambda_1\rangle \otimes \beta_1 |v_1\rangle + |0\rangle \otimes |\lambda_2\rangle \otimes \beta_2 |v_2\rangle]=$$
$$(\sqrt{1-\frac{C^2}{\lambda_1^2}}|0> + \frac{C}{\lambda_1}|1\rangle) \otimes |\lambda_1\rangle \otimes \beta_1 |v_1\rangle + (\sqrt{1-\frac{C^2}{\lambda_2^2}}|0> + \frac{C}{\lambda_2}|1\rangle) \otimes |\lambda_2\rangle \otimes \beta_2 |v_2\rangle$$
But if $C_{\lambda_i}R_y(\theta_i)$ is applied with $i$ different from 1 or 2, the state remains unchanged. So, applying the controlled-$R_y$ gates for all possible eigenvalues (integers from 1 to $2^{accuracy}$), the final state will be:
$$|\phi\rangle = \sum_i (\sqrt{1-\frac{C^2}{\lambda_i^2}}|0> + \frac{C}{\lambda_i}|1\rangle)\otimes |\lambda_i\rangle \otimes \beta_i |v_i\rangle$$
After applying the inverse of the Quantum Phase Estimation, the final state is:
$$|\phi\rangle = \sum_i (\sqrt{1-\frac{C^2}{\lambda_i^2}}|0> + \frac{C}{\lambda_i}|1\rangle)\otimes |0\rangle \otimes \beta_i |v_i\rangle$$
This function loops over all possible states of the Clock qubits, applying this controlled rotation. There is a special case when $\lambda_i=2^{accuracy}$: this eigenvalue is mapped to the state $|0\rangle_{Clock}$, so all the Clock qubits must be inverted before applying a controlled-$R_y$ rotation.
```python
def ControlledRy(Clock,ancilla,c,accuracy):
from projectq.ops import C
Format="{0:0%db}"%accuracy
for i in range(1,2**accuracy):
angle=2*asin(c/i)
h=Format.format(i)
controls=[]
for j,k in enumerate(h[-1::-1]):
if k== "1":
controls.append(Clock[j])
C(Ry(angle),len(controls))|(controls,ancilla)
    # Special case: the eigenvalue 2**accuracy is mapped to the state |0...0>,
    # so invert every Clock qubit and control the rotation on all of them
    All(X) | Clock
    angle=2*asin(c/2**accuracy)
    C(Ry(angle),len(Clock))|(Clock,ancilla)
    All(X) | Clock
```
Let's see an example. With $accuracy=2$ and a single $\lambda=2$ for the eigenvector $|0>$, after the QPE, the state should be:
$$|0\rangle_{ancilla}\otimes |10\rangle_{Clock}\otimes |0\rangle_b$$
Applying the controlled-$R_y$ with "c=1", the result must be
$$(\sqrt{1- \frac{c^2}{\lambda^2}}|0\rangle_{ancilla}+\frac{c}{\lambda}|1\rangle_{ancilla})\otimes |10\rangle_{Clock}\otimes |0\rangle_b=$$
$$(\sqrt{0.75}|0\rangle_{ancilla}+0.5|1\rangle_{ancilla})\otimes |10\rangle_{Clock}\otimes |0\rangle_b=$$
$$\sqrt{0.75}|0\rangle_{ancilla}\otimes |10\rangle_{Clock}\otimes |0\rangle_b+0.5|1\rangle_{ancilla}\otimes |10\rangle_{Clock}\otimes |0\rangle_b=$$
$$0.866025|0\rangle_{ancilla}\otimes |10\rangle_{Clock}\otimes |0\rangle_b+0.5|1\rangle_{ancilla}\otimes |10\rangle_{Clock}\otimes |0\rangle_b$$
```python
c=1.
accuracy=2
Format="{0:0%db}"%accuracy
eng=MainEngine()
b = eng.allocate_qureg(2)
Clock = eng.allocate_qureg(accuracy)
ancilla=eng.allocate_qureg(1)
X|Clock[1]
ControlledRy(Clock,ancilla,c,accuracy)
eng.flush()
Display(get_state_as_str(eng,b+Clock+ancilla))
All(Measure)|b+Clock+ancilla
eng.flush()
del b
del Clock
del ancilla
del eng
```
$(0.86603+0.00000j)|0\rangle_a|10\rangle_C|00\rangle_b+(0.50000+0.00000j)|1\rangle_a|10\rangle_C|00\rangle_b$
And for $\lambda=2^{accuracy}$ the clock register will be $|0\rangle$
```python
c=1.
accuracy=2
Format="{0:0%db}"%accuracy
eng=MainEngine()
b = eng.allocate_qureg(2)
Clock = eng.allocate_qureg(accuracy)
ancilla=eng.allocate_qureg(1)
ControlledRy(Clock,ancilla,c,accuracy)
eng.flush()
Display(get_state_as_str(eng,b+Clock+ancilla))
All(Measure)|b+Clock+ancilla
eng.flush()
del b
del Clock
del ancilla
del eng
```
$(0.96825+0.00000j)|0\rangle_a|00\rangle_C|00\rangle_b+(0.25000+0.00000j)|1\rangle_a|00\rangle_C|00\rangle_b$
## <span style="color:blue"> 4. Third step: putting all together</span>
Now all the needed pieces are there. Let's combine them into the full algorithm, which has the following steps:
1. Init register $|b\rangle$ to the normalized values of $\vec{b}$
2. Apply the Quantum Phase Estimation algorithm for the Hermitian matrix $M$
3. Apply the controlled-$R_y$ rotations
4. Uncompute step 2
5. Measure the ancilla register. If the measurement is 1, the quantum register $|b\rangle$ contains the result. If the result is 0, clear the quantum registers and go back to step 1
For vector=4,
$|b_4\rangle=|01\rangle=\frac{\sqrt{2}}{2}(|v_1\rangle-|v_2\rangle)$
so
$$\beta_1=\frac{\sqrt{2}}{2},\qquad \beta_2=-\frac{\sqrt{2}}{2}$$
and
$$|v_1\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|01\rangle)$$
$$|v_2\rangle=\frac{1}{\sqrt{2}}(|00\rangle-|01\rangle)$$
And the state of $|b\rangle$ when the ancilla register is $|1\rangle$ is (up to normalization)
$$|b\rangle=\frac{\beta_1}{\lambda_1}|v_1\rangle+\frac{\beta_2}{\lambda_2}|v_2\rangle=
\frac{\sqrt{2}}{2}\frac{1}{\sqrt{2}}\frac{1}{\lambda_1}(|00\rangle+|01\rangle)-
\frac{\sqrt{2}}{2}\frac{1}{\sqrt{2}}\frac{1}{\lambda_2}(|00\rangle-|01\rangle)=$$
$$\frac{1}{2}(\frac{1}{\lambda_1}-\frac{1}{\lambda_2})|00\rangle+
\frac{1}{2}(\frac{1}{\lambda_1}+\frac{1}{\lambda_2})|01\rangle$$
And because $\lambda_1=16$ and $\lambda_2=8$, the amplitudes will be
$$\frac{1}{2}(\frac{1}{16}-\frac{1}{8})|00\rangle+
\frac{1}{2}(\frac{1}{16}+\frac{1}{8})|01\rangle=-0.03125|00\rangle+0.09375|01\rangle$$
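A quick numerical check of these amplitudes and of the probability of measuring the ancilla in $|1\rangle$ (plain Python arithmetic, independent of the circuit):

```python
amp_00 = 0.5 * (1/16 - 1/8)       # -0.03125
amp_01 = 0.5 * (1/16 + 1/8)       #  0.09375

# Probability of measuring the ancilla as 1: this is why the loop below
# usually has to repeat many times before it succeeds
p_success = amp_00**2 + amp_01**2
print(amp_00, amp_01, p_success)  # ~0.0098
```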
```python
accuracy=5
result=0
c=1
cheat=True
Format="{0:0%db}"%accuracy
cols = M.shape[0]
m = int(math.log2(cols))
while result==0:
eng=MainEngine()
b = eng.allocate_qureg(m)
Clock = eng.allocate_qureg(accuracy)
ancilla=eng.allocate_qureg(1)
InitState(b,vector=4,eng=eng,cheat=False)
with Compute(eng):
QPE(M,Clock,b,eng,cheat=False)
if cheat:
eng.flush()
Display(get_state_as_str(eng,b+Clock+ancilla))
ControlledRy(Clock,ancilla,c,accuracy)
Uncompute(eng)
if cheat:
eng.flush()
Display(get_state_as_str(eng,b+Clock+ancilla))
Output=get_state_as_str(eng,b+Clock+ancilla)
All(Measure)|ancilla
eng.flush()
result=int(ancilla)
if result==0:
All(Measure) |Clock
All(Measure) |b
eng.flush()
del Clock
del b
del ancilla
del eng
```
$(-0.50000-0.00000j)|0\rangle_a|01000\rangle_C|00\rangle_b+(0.50000+0.00000j)|0\rangle_a|01000\rangle_C|01\rangle_b+(0.50000+0.00000j)|0\rangle_a|10000\rangle_C|00\rangle_b+(0.50000+0.00000j)|0\rangle_a|10000\rangle_C|01\rangle_b$
$(0.00294-0.00000j)|0\rangle_a|00000\rangle_C|00\rangle_b+(0.99510+0.00000j)|0\rangle_a|00000\rangle_C|01\rangle_b+(-0.03125-0.00000j)|1\rangle_a|00000\rangle_C|00\rangle_b+(0.09375+0.00000j)|1\rangle_a|00000\rangle_C|01\rangle_b$
$(0.00294-0.00000j)|0\rangle_a|00000\rangle_C|00\rangle_b+(0.99510+0.00000j)|0\rangle_a|00000\rangle_C|01\rangle_b+(-0.03125-0.00000j)|1\rangle_a|00000\rangle_C|00\rangle_b+(0.09375+0.00000j)|1\rangle_a|00000\rangle_C|01\rangle_b$
$(-0.50000-0.00000j)|0\rangle_a|01000\rangle_C|00\rangle_b+(0.50000+0.00000j)|0\rangle_a|01000\rangle_C|01\rangle_b+(0.50000+0.00000j)|0\rangle_a|10000\rangle_C|00\rangle_b+(0.50000+0.00000j)|0\rangle_a|10000\rangle_C|01\rangle_b$
$(0.00294-0.00000j)|0\rangle_a|00000\rangle_C|00\rangle_b+(0.99510+0.00000j)|0\rangle_a|00000\rangle_C|01\rangle_b+(-0.03125-0.00000j)|1\rangle_a|00000\rangle_C|00\rangle_b+(0.09375+0.00000j)|1\rangle_a|00000\rangle_C|01\rangle_b$
$(-0.50000-0.00000j)|0\rangle_a|01000\rangle_C|00\rangle_b+(0.50000+0.00000j)|0\rangle_a|01000\rangle_C|01\rangle_b+(0.50000+0.00000j)|0\rangle_a|10000\rangle_C|00\rangle_b+(0.50000+0.00000j)|0\rangle_a|10000\rangle_C|01\rangle_b$
$(0.00294-0.00000j)|0\rangle_a|00000\rangle_C|00\rangle_b+(0.99510+0.00000j)|0\rangle_a|00000\rangle_C|01\rangle_b+(-0.03125-0.00000j)|1\rangle_a|00000\rangle_C|00\rangle_b+(0.09375+0.00000j)|1\rangle_a|00000\rangle_C|01\rangle_b$
$(-0.50000-0.00000j)|0\rangle_a|01000\rangle_C|00\rangle_b+(0.50000+0.00000j)|0\rangle_a|01000\rangle_C|01\rangle_b+(0.50000+0.00000j)|0\rangle_a|10000\rangle_C|00\rangle_b+(0.50000+0.00000j)|0\rangle_a|10000\rangle_C|01\rangle_b$
$(0.00294-0.00000j)|0\rangle_a|00000\rangle_C|00\rangle_b+(0.99510+0.00000j)|0\rangle_a|00000\rangle_C|01\rangle_b+(-0.03125-0.00000j)|1\rangle_a|00000\rangle_C|00\rangle_b+(0.09375+0.00000j)|1\rangle_a|00000\rangle_C|01\rangle_b$
$(-0.50000-0.00000j)|0\rangle_a|01000\rangle_C|00\rangle_b+(0.50000+0.00000j)|0\rangle_a|01000\rangle_C|01\rangle_b+(0.50000+0.00000j)|0\rangle_a|10000\rangle_C|00\rangle_b+(0.50000+0.00000j)|0\rangle_a|10000\rangle_C|01\rangle_b$
$(0.00294-0.00000j)|0\rangle_a|00000\rangle_C|00\rangle_b+(0.99510+0.00000j)|0\rangle_a|00000\rangle_C|01\rangle_b+(-0.03125-0.00000j)|1\rangle_a|00000\rangle_C|00\rangle_b+(0.09375+0.00000j)|1\rangle_a|00000\rangle_C|01\rangle_b$
$(-0.50000-0.00000j)|0\rangle_a|01000\rangle_C|00\rangle_b+(0.50000+0.00000j)|0\rangle_a|01000\rangle_C|01\rangle_b+(0.50000+0.00000j)|0\rangle_a|10000\rangle_C|00\rangle_b+(0.50000+0.00000j)|0\rangle_a|10000\rangle_C|01\rangle_b$
$(0.00294-0.00000j)|0\rangle_a|00000\rangle_C|00\rangle_b+(0.99510+0.00000j)|0\rangle_a|00000\rangle_C|01\rangle_b+(-0.03125-0.00000j)|1\rangle_a|00000\rangle_C|00\rangle_b+(0.09375+0.00000j)|1\rangle_a|00000\rangle_C|01\rangle_b$
$(-0.50000-0.00000j)|0\rangle_a|01000\rangle_C|00\rangle_b+(0.50000+0.00000j)|0\rangle_a|01000\rangle_C|01\rangle_b+(0.50000+0.00000j)|0\rangle_a|10000\rangle_C|00\rangle_b+(0.50000+0.00000j)|0\rangle_a|10000\rangle_C|01\rangle_b$
$(0.00294-0.00000j)|0\rangle_a|00000\rangle_C|00\rangle_b+(0.99510+0.00000j)|0\rangle_a|00000\rangle_C|01\rangle_b+(-0.03125-0.00000j)|1\rangle_a|00000\rangle_C|00\rangle_b+(0.09375+0.00000j)|1\rangle_a|00000\rangle_C|01\rangle_b$
$(-0.50000-0.00000j)|0\rangle_a|01000\rangle_C|00\rangle_b+(0.50000+0.00000j)|0\rangle_a|01000\rangle_C|01\rangle_b+(0.50000+0.00000j)|0\rangle_a|10000\rangle_C|00\rangle_b+(0.50000+0.00000j)|0\rangle_a|10000\rangle_C|01\rangle_b$
$(0.00294-0.00000j)|0\rangle_a|00000\rangle_C|00\rangle_b+(0.99510+0.00000j)|0\rangle_a|00000\rangle_C|01\rangle_b+(-0.03125-0.00000j)|1\rangle_a|00000\rangle_C|00\rangle_b+(0.09375+0.00000j)|1\rangle_a|00000\rangle_C|01\rangle_b$
$(-0.50000-0.00000j)|0\rangle_a|01000\rangle_C|00\rangle_b+(0.50000+0.00000j)|0\rangle_a|01000\rangle_C|01\rangle_b+(0.50000+0.00000j)|0\rangle_a|10000\rangle_C|00\rangle_b+(0.50000+0.00000j)|0\rangle_a|10000\rangle_C|01\rangle_b$
$(0.00294-0.00000j)|0\rangle_a|00000\rangle_C|00\rangle_b+(0.99510+0.00000j)|0\rangle_a|00000\rangle_C|01\rangle_b+(-0.03125-0.00000j)|1\rangle_a|00000\rangle_C|00\rangle_b+(0.09375+0.00000j)|1\rangle_a|00000\rangle_C|01\rangle_b$
```python
Display("Before measure the ancilla qubit, the state is: %s"%Output)
```
Before measuring the ancilla qubit, the state is: $(0.00294-0.00000j)|0\rangle_a|00000\rangle_C|00\rangle_b+(0.99510+0.00000j)|0\rangle_a|00000\rangle_C|01\rangle_b+(-0.03125-0.00000j)|1\rangle_a|00000\rangle_C|00\rangle_b+(0.09375+0.00000j)|1\rangle_a|00000\rangle_C|01\rangle_b$
```python
Display("After measure the ancilla qubit, the state is: %s"%get_state_as_str(eng,b+Clock+ancilla))
```
After measuring the ancilla qubit, the state is: $(-0.31623-0.00000j)|1\rangle_a|00000\rangle_C|00\rangle_b+(0.94868+0.00000j)|1\rangle_a|00000\rangle_C|01\rangle_b$
## <span style="color:blue"> 4. Calculate expectation values </span>
The register $|b\rangle$ can now be used in other operations, for example to calculate the expectation value of an observable such as $I\otimes\sigma_x$.
Let's check that the results are identical to the classical ones.
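For reference, the classical check below computes the expectation value of an observable $O$ with respect to the solution $x$ of $Ax=b$ as the normalized quadratic form (with $x$ real here; in general $x^{\mathsf{T}}$ would be replaced by $x^{\dagger}$):
\begin{equation}\langle O \rangle = \frac{x^{\mathsf{T}} O\, x}{\lVert x \rVert^{2}}, \qquad O \in \{I\otimes\sigma_x,\; I\otimes\sigma_y,\; I\otimes\sigma_z\}\end{equation}
This is the quantity returned by the `solve` helper defined next.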
```python
def solve(A,b):
    """Classically solve A x = b and return the expectation values of I⊗σx, I⊗σy, I⊗σz plus the solution x."""
    import numpy as np
    a = np.array(A)
    b = np.array(b)
    x = np.linalg.solve(a, b)
    # Observables I⊗σx, I⊗σy, I⊗σz written out as 4x4 matrices
    Isigmax=np.array([[0,1,0,0],[1,0,0,0],[0,0,0,1],[0,0,1,0]])
    Isigmay=np.array([[0,-1j,0,0],[1j,0,0,0],[0,0,0,-1j],[0,0,1j,0]])
    Isigmaz=np.array([[1,0,0,0],[0,-1,0,0],[0,0,1,0],[0,0,0,-1]])
    # Normalize by ||x||^2 so the expectation values refer to the normalized solution vector
    norm=np.linalg.norm(x)
    Esx=np.dot(x,np.dot(Isigmax,x.T))/norm**2
    Esy=np.dot(x,np.dot(Isigmay,x.T))/norm**2
    Esz=np.dot(x,np.dot(Isigmaz,x.T))/norm**2
    return Esx,Esy,Esz,x
```
```python
bvector=np.array([0,1,0,0])
```
```python
def DisplayResults(eng, Qureg,B,A,b):
    """Compare the measured quantum amplitudes with the classical solution of A x = b."""
    Display("After Measure:%s"%get_state_as_str(eng,Qureg,False))
    Format="{0:0%db}"%np.log2(len(b))
    # Read off the amplitudes of the basis states with the clock register at zero and the ancilla equal to 1
    amplitudes=[]
    for i in range(len(b)):
        a="%s%s1"%(Format.format(i)[-1::-1],accuracy*"0")
        amplitudes.append(eng.backend.get_amplitude(a,Qureg))
    # Classical reference solution and expectation values
    Esx,Esy,Esz,x = solve(A, b)
    Q="({:.5f})".format(amplitudes[0])
    for i in range(1,len(amplitudes)):
        Q=Q+",%s"%("({:.5f})".format(amplitudes[i]))
    Display("Quantum: (%s)."%(Q))
    Classical="%.5f"%x[0]
    for i in range(1,len(x)):
        Classical=Classical+",%.5f"%x[i]
    Display("Classical: (%s)"%Classical)
    # Ratios between measured amplitudes and classical solution components (should be constant up to normalization)
    Ratios="%.3f"%(amplitudes[0].real/x[0])
    for i in range(1,len(x)):
        if x[i]!=0:
            Ratios=Ratios+",%.3f"%(amplitudes[i].real/x[i])
        else:
            Ratios=Ratios+",-"
    Display("Ratios:(%s)"%Ratios)
    Display("Calculated expectation value of $\sigma_X$:%.3f. Should be %.3f"%(eng.backend.get_expectation_value(QubitOperator("X0"),B).real,Esx.real))
    Display("Calculated expectation value of $\sigma_Y$:%.3f. Should be %.3f"%(eng.backend.get_expectation_value(QubitOperator("Y0"),B).real,Esy.real))
    Display("Calculated expectation value of $\sigma_Z$:%.3f. Should be %.3f"%(eng.backend.get_expectation_value(QubitOperator("Z0"),B).real,Esz.real))
```
```python
DisplayResults(eng, b+Clock+ancilla,b,M,bvector)
```
After Measure:$(-0.31623-0.00000j)|1\rangle_a|00000\rangle_C|00\rangle_b+(0.94868+0.00000j)|1\rangle_a|00000\rangle_C|01\rangle_b$
Quantum: ((-0.31623-0.00000j),(0.94868+0.00000j),(0.00000+0.00000j),(0.00000+0.00000j)).
Classical: (-0.03125,0.09375,0.00000,0.00000)
Ratios:(10.119,10.119,-,-)
Calculated expectation value of $\sigma_X$:-0.600. Should be -0.600
Calculated expectation value of $\sigma_Y$:0.000. Should be 0.000
Calculated expectation value of $\sigma_Z$:-0.800. Should be -0.800
```python
```
| d71ecb3bc22a4d00967a38fb5afac8ccd5da286e | 122,589 | ipynb | Jupyter Notebook | Notebooks/Inversion_Hermitian_Matrix-ProjectQ.ipynb | gomeztato/QuantumCourse | 881d03635332ae4627975e713c30c1b833cabe21 | [
"Apache-2.0"
]
| 16 | 2019-04-14T18:26:12.000Z | 2021-11-22T08:08:40.000Z | Notebooks/Inversion_Hermitian_Matrix-ProjectQ.ipynb | gomeztato/QuantumCourse | 881d03635332ae4627975e713c30c1b833cabe21 | [
"Apache-2.0"
]
| 1 | 2020-01-21T06:50:16.000Z | 2020-01-21T06:50:16.000Z | Notebooks/Inversion_Hermitian_Matrix-ProjectQ.ipynb | gomeztato/QuantumCourse | 881d03635332ae4627975e713c30c1b833cabe21 | [
"Apache-2.0"
]
| 3 | 2019-04-16T10:15:29.000Z | 2020-01-29T15:14:25.000Z | 43.986006 | 6,036 | 0.621475 | true | 23,716 | Qwen/Qwen-72B | 1. YES
2. YES | 0.845942 | 0.746139 | 0.631191 | __label__yue_Hant | 0.350478 | 0.304798 |
# Hypothesis Testing
## Single-Sample, One-Sided Tests
Our students have completed their school year, and been asked to rate their statistics class on a scale between -5 (terrible) and 5 (fantastic). The statistics class is taught online to tens of thousands of students, so to assess its success, we'll take a random sample of 50 ratings.
Run the following code to draw 50 samples.
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(123)
lo = np.random.randint(-5, -1, 6)
mid = np.random.randint(0, 3, 38)
hi = np.random.randint(4, 6, 6)
sample = np.append(lo,np.append(mid, hi))
print("Min:" + str(sample.min()))
print("Max:" + str(sample.max()))
print("Mean:" + str(sample.mean()))
plt.hist(sample)
plt.show()
```
A question we might immediately ask is: "how do students tend to like the class"? In this case, possible ratings were between -5 and 5, with a "neutral" score of 0. In other words, if our average score is above zero, then students tend to enjoy the course.
In the sample above, the mean score is above 0 (in other words, people liked the class in this data). If you had actually run this course and saw this data, it might lead you to believe that the overall mean rating for this class (i.e., not just the sample) is likely to be positive.
There is an important point to be made, though: this is just a sample, and you want to make a statement not just about your sample but about the whole population from which it came. In other words, you want to know how the class was received overall, but you only have access to a limited set of data. This is often the case when analyzing data.
So, how can you test your belief that your positive looking *sample* reflects the fact that the course does tend to get good evaluations, that your *population* mean (not just your sample mean) is positive?
We start by defining two hypotheses:
* The *null* hypothesis (**H<sub>0</sub>**) is that the population mean for all of the ratings is *not* higher than 0, and the fact that our sample mean is higher than this is due to random chance in our sample selection.
* The *alternative* hypothesis (**H<sub>1</sub>**) is that the population mean is actually higher than 0, and the fact that our sample mean is higher than this means that our sample correctly detected this trend.
You can write these as mutually exclusive expressions like this:
\begin{equation}H_{0}: \mu \le 0 \\ H_{1}: \mu > 0 \end{equation}
So how do we test these hypotheses? Because they are mutually exclusive, if we can show the null is probably not true, then we are safe to reject it and conclude that people really do like our online course. But how do we do that?
Well, if the *null* hypothesis is true, the sampling distribution for ratings with a sample size of 50 will be a normal distribution with a mean of 0. Run the following code to visualize this, with the mean of 0 shown as a yellow dashed line.
*(The code just generates a normal distribution with a mean of 0 and a standard deviation that makes it approximate a sampling distribution of 50 random means between -5 and 5 - don't worry too much about the actual values, it's just to illustrate the key points!)*
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
pop = np.random.normal(0, 1.15, 100000)
plt.hist(pop, bins=100)
plt.axvline(pop.mean(), color='yellow', linestyle='dashed', linewidth=2)
plt.show()
```
This illustrates all the *sample* results you could get if the null hypothesis was true (that is, the rating population mean is actually 0). Note that if the null hypothesis is true, it's still *possible* to get a sample with a mean ranging from just over -5 to just under 5. The question is how *probable* is it to get a sample with a mean as high as we did for our 50-rating sample under the null hypothesis? And how improbable would it *need* to be for us to conclude that the null is, in fact, a poor explanation for our data?
Well, we measure distance from the mean in standard deviations, so we need to find out how many standard deviations above the null-hypothesized population mean of 0 our sample mean is, and measure the area under the distribution curve from this point on - that will give us the probability of observing a mean that is *at least* as high as our sample mean. We call the number of standard deviations above the mean where our sample mean is found the *test statistic* (or sometimes just *t-statistic*), and we call the area under the curve from this point (representing the probability of observing a sample mean this high or greater) the *p-value*.
So the p-value tells us how probable our sample mean is when the null is true, but we need to set a threshold under which we consider this to be too improbable to be explained by random chance alone. We call this threshold our *critical value*, and we usually indicate it using the Greek letter alpha (**α**). You can use any value you think is appropriate for **α** - commonly a value of 0.05 (5%) is used, but there's nothing special about this value.
We calculate the t-statistic by performing a statistical test. Technically, when the standard deviation of the population is known, we call it a *z-test* (because a *normal* distribution is often called a *z-distribution* and we measure variance from the mean in multiples of standard deviation known as *z-scores*). When the standard deviation of the population is not known, the test is referred to as a *t-test* and based on an adjusted version of a normal distribution called a *student's t distribution*, in which the distribution is "flattened" to allow for more sample variation depending on the sample size. Generally, with a sample size of 30 or more, a t-test is approximately equivalent to a z-test.
Specifically, in this case we're performing a *single sample* test (we're comparing the mean from a single sample of ratings against the hypothesized population mean), and it's a *one-tailed* test (we're checking to see if the sample mean is *greater than* the null-hypothesized population mean - in other words, in the *right* tail of the distribution).
The general formula for one-tailed, single-sample t-test is:
\begin{equation}t = \frac{\bar{x} - \mu}{s \div \sqrt{n}} \end{equation}
In this formula, **x̄** is the sample mean, **μ** is the population mean, **s** is the standard deviation, and **n** is the sample size. You can think of the numerator of this equation (the expression at the top of the fraction) as a *signal*, and the denominator (the expression at the bottom of the fraction) as being *noise*. The signal measures the difference between the statistic and the null-hypothesized value, and the noise represents the random variance in the data in the form of standard deviation (or standard error). The t-statistic is the ratio of signal to noise, and measures the number of standard errors between the null-hypothesized value and the observed sample mean. A large value tells you that your "result" or "signal" was much larger than you would typically expect by chance.
Fortunately, most programming languages used for statistical analysis include functions to perform a t-test, so you rarely need to manually calculate the results using the formula.
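Still, it can help to compute the statistic by hand once. The following minimal sketch applies the formula above to the `sample` array created earlier (using the sample standard deviation, `ddof=1`); it should agree with the value `scipy` reports in the next cell.
```python
import numpy as np
from scipy import stats

mu_0 = 0                                      # null-hypothesized population mean
n = len(sample)                               # sample size
signal = sample.mean() - mu_0                 # difference between sample mean and mu_0
noise = sample.std(ddof=1) / np.sqrt(n)       # standard error of the mean
t_manual = signal / noise
p_manual = 1 - stats.t.cdf(t_manual, df=n-1)  # one-tailed p-value

print("t-statistic:" + str(t_manual))
print("p-value:" + str(p_manual))
```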
Run the code below to run a single-sample t-test comparing our sample mean for ratings to a hypothesized population mean of 0, and visualize the resulting t-statistic on the normal distribution for the null hypothesis.
```python
from scipy import stats
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# T-Test
t,p = stats.ttest_1samp(sample, 0)
# ttest_1samp is 2-tailed, so half the resulting p-value to get a 1-tailed p-value
p1 = '%f' % (p/2)
print ("t-statistic:" + str(t))
print("p-value:" + str(p1))
# calculate a 90% confidence interval. 10% of the probability is outside this, 5% in each tail
ci = stats.norm.interval(0.90, 0, 1.15)
plt.hist(pop, bins=100)
# show the hypothesized population mean
plt.axvline(pop.mean(), color='yellow', linestyle='dashed', linewidth=2)
# show the right-tail confidence interval threshold - 5% of probability is under the curve to the right of this.
plt.axvline(ci[1], color='red', linestyle='dashed', linewidth=2)
# show the t-statistic - the p-value is the area under the curve to the right of this
plt.axvline(pop.mean() + t*pop.std(), color='magenta', linestyle='dashed', linewidth=2)
plt.show()
```
In the plot produced by the code above, the yellow line shows the population mean for the null hypothesis. The area under the curve to the right of the red line represents the critical value of 0.05 (or 5%). The magenta line indicates how much higher the sample mean is compared to the hypothesized population mean. This is calculated as the t-statistic (which is printed above the plot) multiplied by the standard deviation. The area under the curve to the right of this encapsulates the p-value calculated by the test (which is also printed above the plot).
So what should we conclude from these results?
Well, if the p-value is smaller than our critical value of 0.05, that means that under the null hypothesis, the probability of observing a sample mean as high as we did by random chance is low. That's a good sign for us, because it means that our sample is unlikely under the null, and therefore the null is a poor explanation for the data. We can safely *reject* the null hypothesis in favor of the alternative hypothesis - there's enough evidence to suggest that the population mean for our class ratings is greater than 0.
Conversely, if the p-value is greater than the critical value, we *fail to reject the null hypothesis* and conclude that the mean rating is not greater than 0. Note that we never actually *accept* the null hypothesis, we just conclude that there isn't enough evidence to reject it!
## Two-Tailed Tests
The previous test was an example of a one-tailed test in which the p-value represents the area under one tail of the distribution curve. In this case, the area in question is under the right tail because the alternative hypothesis we were trying to show was that the true population mean is *greater than* the mean of the null hypothesis scenario.
Suppose we restated our hypotheses like this:
* The *null* hypothesis (**H<sub>0</sub>**) is that the population mean for all of the ratings is 0, and the fact that our sample mean is higher or lower than this can be explained by random chance in our sample selection.
* The *alternative* hypothesis (**H<sub>1</sub>**) is that the population mean is not equal to 0.
We can write these as mutually exclusive expressions like this:
\begin{equation}H_{0}: \mu = 0 \\ H_{1}: \mu \neq 0 \end{equation}
Why would we do this? Well, in the test we performed earlier, we could only reject the null hypothesis if we had really *positive* ratings, but what if our sample data looked really *negative*? It would be a mistake to turn around and run a one-tailed test the other way, for negative ratings. Instead, we conduct a test designed for such a question: a two-tailed test.
In a two-tailed test, we are willing to reject the null hypothesis if the result is significantly *greater* or *lower* than the null hypothesis. Our critical value (5%) is therefore split in two: the top 2.5% of the curve and the bottom 2.5% of the curve. As long as our test statistic is in that region, we are in the extreme 5% of values (p < .05) and we reject the null hypothesis. In other words, our p-value now needs to be below .025, but it can be in either tail of the distribution. For convenience, we usually "double" the p-value in a two-tailed test so that we don't have to remember this rule and still compare against .05 (this is known as a "two-tailed p-value"). In fact, it is assumed this has been done in all statistical analyses unless stated otherwise.
The following code shows the results of a two-tailed, single sample test of our class ratings. Note that the ***ttest_1samp*** function in the ***stats*** library returns a 2-tailed p-value by default (which is why we halved it in the previous example).
```python
from scipy import stats
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# T-Test
t,p = stats.ttest_1samp(sample, 0)
print ("t-statistic:" + str(t))
# ttest_1samp is 2-tailed
print("p-value:" + '%f' % p)
# calculate a 95% confidence interval. 50% of the probability is outside this, 2.5% in each tail
ci = stats.norm.interval(0.95, 0, 1.15)
plt.hist(pop, bins=100)
# show the hypothesized population mean
plt.axvline(pop.mean(), color='yellow', linestyle='dashed', linewidth=2)
# show the confidence interval thresholds - 5% of probability is under the curve outside these.
plt.axvline(ci[0], color='red', linestyle='dashed', linewidth=2)
plt.axvline(ci[1], color='red', linestyle='dashed', linewidth=2)
# show the t-statistic thresholds - the p-value is the area under the curve outside these
plt.axvline(pop.mean() - t*pop.std(), color='magenta', linestyle='dashed', linewidth=2)
plt.axvline(pop.mean() + t*pop.std(), color='magenta', linestyle='dashed', linewidth=2)
plt.show()
```
Here we see that our 2-tailed p-value was clearly less than 0.05; so we reject the null hypothesis.
You may note that doubling the p-value in a two-tailed test makes it harder to reject the null. This is true; we require more evidence because we are asking a more complicated question.
## Two-Sample Tests
In both of the previous examples, we compared a statistic from a single data sample to a null-hypothesized population parameter. Sometimes you might want to compare two samples against one another.
For example, let's suppose that some of the students who took the statistics course had previously studied mathematics, while other students had no previous math experience. You might hypothesize that the grades of students who had previously studied math are significantly higher than the grades of students who had not.
* The *null* hypothesis (**H<sub>0</sub>**) is that the population mean grade for students with previous math studies is not greater than the population mean grade for students without any math experience, and the fact that our sample mean for math students is higher than our sample mean for non-math students can be explained by random chance in our sample selection.
* The *alternative* hypothesis (**H<sub>1</sub>**) is that the population mean grade for students with previous math studies is greater than the population mean grade for students without any math experience.
We can write these as mutually exclusive expressions like this:
\begin{equation}H_{0}: \mu_{1} \le \mu_{2} \\ H_{1}: \mu_{1} > \mu_{2} \end{equation}
This is a one-sided test that compares two samples. To perform this test, we'll take two samples. One sample contains 100 grades for students who have previously studied math, and the other sample contains 100 grades for students with no math experience.
We won't go into the test-statistic formula here, but it is essentially the same as the one above, adapted to include information from both samples. We can easily test this in most software packages using the command for an "independent samples" t-test:
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
%matplotlib inline
np.random.seed(123)
nonMath = np.random.normal(66.0, 1.5, 100)
math = np.random.normal(66.55, 1.5, 100)
print("non-math sample mean:" + str(nonMath.mean()))
print("math sample mean:" + str(math.mean()))
# Independent T-Test
t,p = stats.ttest_ind(math, nonMath)
# ttest_ind is 2-tailed, so half the resulting p-value to get a 1-tailed p-value
p1 = '%f' % (p/2)
print("t-statistic:" + str(t))
print("p-value:" + str(p1))
pop = np.random.normal(nonMath.mean(), nonMath.std(), 100000)
# calculate a 90% confidence interval. 10% of the probability is outside this, 5% in each tail
ci = stats.norm.interval(0.90, nonMath.mean(), nonMath.std())
plt.hist(pop, bins=100)
# show the hypothesized population mean
plt.axvline(pop.mean(), color='yellow', linestyle='dashed', linewidth=2)
# show the right-tail confidence interval threshold - 5% of probability is under the curve to the right of this.
plt.axvline(ci[1], color='red', linestyle='dashed', linewidth=2)
# show the t-statistic - the p-value is the area under the curve to the right of this
plt.axvline(pop.mean() + t*pop.std(), color='magenta', linestyle='dashed', linewidth=2)
plt.show()
```
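For reference, the statistic computed by `ttest_ind` above is the pooled-variance two-sample t-statistic,
\begin{equation}t = \frac{\bar{x}_{1} - \bar{x}_{2}}{s_{p}\sqrt{\frac{1}{n_{1}} + \frac{1}{n_{2}}}}, \qquad s_{p}^{2} = \frac{(n_{1}-1)s_{1}^{2} + (n_{2}-1)s_{2}^{2}}{n_{1}+n_{2}-2}\end{equation}
where $\bar{x}_1$ and $\bar{x}_2$ are the two sample means, $s_1^2$ and $s_2^2$ the sample variances, and $n_1$ and $n_2$ the sample sizes.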
You can interpret the results of this test the same way as for the previous single-sample, one-tailed test. If the p-value (the area under the curve to the right of the magenta line) is smaller than our critical value (**α**) of 0.05 (the area under the curve to the right of the red line), then the difference can't be explained by chance alone; so we can reject the null hypothesis and conclude that students with previous math experience perform better on average than students without.
Alternatively, you could always compare two groups and *not* specify a direction (i.e., two-tailed). If you did this, as above, you could simply double the p-value (now .001), and you would see you could still reject the null hypothesis.
## Paired Tests
In the two-sample test we conducted previously, the samples were independent; in other words, there was no relationship between the observations in the first sample and the observations in the second sample. Sometimes you might want to compare statistical differences between related observations before and after some change that you believe might influence the data.
For example, suppose our students took a mid-term exam, and later took an end-of-term exam. You might hypothesise that the students will improve their grades in the end-of-term exam, after they've undertaken additional study. We could test for a general improvement on average across all students with a two-sample independent test, but a more appropriate test would be to compare the two test scores for each individual student.
To accomplish this, we need to create two samples: one for scores in the mid-term exam, the other for scores in the end-of-term exam. Then we need to compare the samples in such a way that each pair of observations for the same student is compared to one another.
This is known as a paired-samples t-test or a dependent-samples t-test. Technically, it tests whether the *changes* tend to be in the positive or negative direction.
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
%matplotlib inline
np.random.seed(123)
midTerm = np.random.normal(59.45, 1.5, 100)
endTerm = np.random.normal(60.05, 1.5, 100)
# Paired (related) test
t,p = stats.ttest_rel(endTerm, midTerm)
# ttest_rel is 2-tailed, so half the resulting p-value to get a 1-tailed p-value
p1 = '%f' % (p/2)
print("t-statistic:" + str(t))
print("p-value:" + str(p1))
pop = np.random.normal(midTerm.mean(), midTerm.std(), 100000)
# calculate a 90% confidence interval. 10% of the probability is outside this, 5% in each tail
ci = stats.norm.interval(0.90, midTerm.mean(), midTerm.std())
plt.hist(pop, bins=100)
# show the hypothesized population mean
plt.axvline(pop.mean(), color='yellow', linestyle='dashed', linewidth=2)
# show the right-tail confidence interval threshold - 5% of probability is under the curve to the right of this.
plt.axvline(ci[1], color='red', linestyle='dashed', linewidth=2)
# show the t-statistic - the p-value is the area under the curve to the right of this
plt.axvline(pop.mean() + t*pop.std(), color='magenta', linestyle='dashed', linewidth=2)
plt.show()
```
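As a check on the intuition above, a paired t-test is equivalent to a one-sample t-test on the per-student differences; the following minimal sketch should reproduce the t-statistic reported above.
```python
# A paired t-test is the same as a one-sample t-test on the differences
diff = endTerm - midTerm
t_diff, p_diff = stats.ttest_1samp(diff, 0)
print("t-statistic:" + str(t_diff))
print("p-value:" + '%f' % (p_diff/2))  # one-tailed, as above
```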
In our sample, we see that scores did in fact improve, so we can reject the null hypothesis.
```python
```
| 46d29ec960e30ce0d1052dce05d86824e8a563eb | 73,164 | ipynb | Jupyter Notebook | Statistics and Probability by Hiren/04-06-Hypothesis Testing.ipynb | serkin/Basic-Mathematics-for-Machine-Learning | ac0ae9fad82a9f0429c93e3da744af6e6d63e5ab | [
"Apache-2.0"
]
| null | null | null | Statistics and Probability by Hiren/04-06-Hypothesis Testing.ipynb | serkin/Basic-Mathematics-for-Machine-Learning | ac0ae9fad82a9f0429c93e3da744af6e6d63e5ab | [
"Apache-2.0"
]
| null | null | null | Statistics and Probability by Hiren/04-06-Hypothesis Testing.ipynb | serkin/Basic-Mathematics-for-Machine-Learning | ac0ae9fad82a9f0429c93e3da744af6e6d63e5ab | [
"Apache-2.0"
]
| null | null | null | 148.707317 | 9,036 | 0.858195 | true | 4,757 | Qwen/Qwen-72B | 1. YES
2. YES | 0.874077 | 0.901921 | 0.788348 | __label__eng_Latn | 0.999595 | 0.66993 |
# Nested Logit Model: Compute Market Shares
```python
import pandas as pd
import numpy as np
import biogeme.database as db
import biogeme.biogeme as bio
import biogeme.models as models
import biogeme.optimization as opt
import biogeme.results as res
from biogeme.expressions import Beta, DefineVariable
import seaborn as sns
import matplotlib.pyplot as plt
```
**Import Optima data**
```python
pandas = pd.read_csv("../../Data/6-Discrete Choice Models/optima.dat",sep='\t')
database = db.Database ("data/optima", pandas)
```
**Use column names as variables**
```python
globals().update(database.variables)
```
**Exclude some unwanted entries**
```python
exclude = (Choice == -1.)
database.remove(exclude)
```
**Define some dummy variables**
```python
male = (Gender == 1)
female = (Gender == 2)
unreportedGender = (Gender == -1)
fulltime = (OccupStat == 1)
notfulltime = (OccupStat != 1)
```
**Rescale some data**
```python
TimePT_scaled = TimePT / 200
TimeCar_scaled = TimeCar / 200
MarginalCostPT_scaled = MarginalCostPT / 10
CostCarCHF_scaled = CostCarCHF / 10
distance_km_scaled = distance_km / 5
```
**Create parameters to be estimated**
```python
ASC_CAR = Beta('ASC_CAR',0,None,None,0)
ASC_PT = Beta('ASC_PT',0,None,None,1)
ASC_SM = Beta('ASC_SM',0,None,None,0)
BETA_TIME_FULLTIME = Beta('BETA_TIME_FULLTIME',0,None,None,0)
BETA_TIME_OTHER = Beta('BETA_TIME_OTHER',0,None,None,0)
BETA_DIST_MALE = Beta('BETA_DIST_MALE',0,None,None,0)
BETA_DIST_FEMALE = Beta('BETA_DIST_FEMALE',0,None,None,0)
BETA_DIST_UNREPORTED = Beta('BETA_DIST_UNREPORTED',0,None,None,0)
BETA_COST = Beta('BETA_COST',0,None,None,0)
```
**Define the utility functions**
\begin{align}
V_{PT} & = \beta_{PT} + \beta_{time_{fulltime}} X_{time_{PT}} X_{fulltime} + \beta_{time_{other}} X_{time_{PT}} X_{not\_fulltime} + \beta_{cost} X_{cost_{PT}} \\
V_{car} & = \beta_{car} + \beta_{time_{fulltime}} X_{time_{car}} X_{fulltime} + \beta_{time_{other}} X_{time_{car}} X_{not\_fulltime} + \beta_{cost} X_{cost_{car}} \\
V_{SM} & = \beta_{SM} + \beta_{male} X_{distance} X_{male} + \beta_{female} X_{distance} X_{female} + \beta_{unreported} X_{distance} X_{unreported}
\end{align}
```python
V_PT = ASC_PT + BETA_TIME_FULLTIME * TimePT_scaled * fulltime + \
BETA_TIME_OTHER * TimePT_scaled * notfulltime + \
BETA_COST * MarginalCostPT_scaled
V_CAR = ASC_CAR + \
BETA_TIME_FULLTIME * TimeCar_scaled * fulltime + \
BETA_TIME_OTHER * TimeCar_scaled * notfulltime + \
BETA_COST * CostCarCHF_scaled
V_SM = ASC_SM + \
BETA_DIST_MALE * distance_km_scaled * male + \
BETA_DIST_FEMALE * distance_km_scaled * female + \
BETA_DIST_UNREPORTED * distance_km_scaled * unreportedGender
```
**Associate utility functions with alternatives and associate availability of alternatives**
In this example all alternatives are available for each individual
```python
V = {0: V_PT,
1: V_CAR,
2: V_SM}
av = {0: 1,
1: 1,
2: 1}
```
**Define the nests**
1. Define the nest parameters
2. List alternatives in nests
```python
MU_NO_CAR = Beta('MU_NO_CAR', 1.,1.,None,0)
CAR_NEST = 1., [1]
NO_CAR_NEST = MU_NO_CAR, [0, 2]
nests = CAR_NEST, NO_CAR_NEST
```
**Define the choice probabilities**
```python
prob_pt = models.nested(V, av , nests , 0)
prob_car = models.nested(V, av , nests , 1)
prob_sm = models.nested(V, av , nests , 2)
```
**Compute normalized weights for each observation**
```python
sumWeight = database.data['Weight'].sum()
normalized_Weight = Weight * len(database.data['Weight']) / sumWeight
```
**Define what we want to simulate**
1. Normalized weights
2. Choice probabilities for each choice
3. Revenues for the Public Transportation alternative
```python
simulate ={'weight': normalized_Weight ,
'Prob. car': prob_car ,
'Prob. public transportation': prob_pt ,
'Prob. slow modes': prob_sm ,
'Revenue public transportation': prob_pt * MarginalCostPT}
```
**Define the Biogeme object**
```python
biogeme = bio.BIOGEME(database, simulate)
biogeme.modelName = "optima_nested_logit_market"
```
**Retrieve the names of the variables we want to use. Then retrieve the results from the model that we estimated earlier**
```python
betas = biogeme.freeBetaNames
print('Extracting the following variables:')
for k in betas:
print('\t',k)
results = res.bioResults(pickleFile='optima_nested_logit.pickle')
betaValues = results.getBetaValues ()
```
Extracting the following variables:
ASC_CAR
ASC_SM
BETA_COST
BETA_DIST_FEMALE
BETA_DIST_MALE
BETA_DIST_UNREPORTED
BETA_TIME_FULLTIME
BETA_TIME_OTHER
MU_NO_CAR
**Perform the simulation**
```python
simulatedValues = biogeme.simulate(betaValues)
```
**Compute confidence intervals using this simulation**
```python
b = results.getBetasForSensitivityAnalysis(betas , size=100)
left, right = biogeme.confidenceIntervals(b, .9)
```
**Compute the weighted probabilities**
```python
simulatedValues['Weighted prob. car'] =simulatedValues['weight'] * simulatedValues['Prob. car']
left['Weighted prob. car'] = left['weight'] * left['Prob. car']
right['Weighted prob. car'] = right['weight'] * right['Prob. car']
simulatedValues['Weighted prob. public transportation'] =simulatedValues['weight'] * simulatedValues['Prob. public transportation']
left['Weighted prob. public transportation'] = left['weight'] * left['Prob. public transportation']
right['Weighted prob. public transportation'] = right['weight'] * right['Prob. public transportation']
simulatedValues['Weighted prob. slow modes'] =simulatedValues['weight'] * simulatedValues['Prob. slow modes']
left['Weighted prob. slow modes'] = left['weight'] * left['Prob. slow modes']
right['Weighted prob. slow modes'] = right['weight'] * right['Prob. slow modes']
```
**Compute the market shares**
```python
marketShare_car = simulatedValues['Weighted prob. car'].mean()
marketShare_car_left = left['Weighted prob. car'].mean()
marketShare_car_right = right['Weighted prob. car'].mean()
marketShare_pt = simulatedValues['Weighted prob. public transportation'].mean()
marketShare_pt_left = left['Weighted prob. public transportation'].mean()
marketShare_pt_right = right['Weighted prob. public transportation'].mean()
marketShare_sm = simulatedValues['Weighted prob. slow modes'].mean()
marketShare_sm_left = left['Weighted prob. slow modes'].mean()
marketShare_sm_right = right['Weighted prob. slow modes'].mean()
```
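For reference, the market share computed above for alternative $i$ is the weighted average of the individual choice probabilities,
\begin{equation}\widehat{MS}_{i} = \frac{1}{N}\sum_{n=1}^{N} \tilde{w}_{n}\, P_{n}(i)\end{equation}
where $\tilde{w}_n$ are the normalized weights (which sum to $N$) and $P_n(i)$ is the nested logit probability that individual $n$ chooses alternative $i$; the left and right values give the bounds of the 90% confidence interval.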
**Display results**
```python
print(f"Market Share for car : {100*marketShare_car:.1f}% [{100*marketShare_car_left:.1f} % , {100*marketShare_car_right:.1f} %]")
print(f"Market Share for PT : {100*marketShare_pt:.1f}% [{100*marketShare_pt_left:.1f} % , {100*marketShare_pt_right:.1f} %]")
print(f"Market Share for SM : {100*marketShare_sm:.1f}% [{100*marketShare_sm_left:.1f} % , {100*marketShare_sm_right:.1f} %]")
```
Market Share for car : 65.3% [60.6 % , 69.0 %]
Market Share for PT : 28.1% [23.6 % , 32.2 %]
Market Share for SM : 6.6% [4.6 % , 10.6 %]
**Compute revenues for Public Transportation alternative**
```python
revenues_pt = ( simulatedValues['Revenue public transportation']*simulatedValues['weight']).sum()
revenues_pt_left = (left['Revenue public transportation']*left['weight']).sum()
revenues_pt_right = ( right ['Revenue public transportation']*right['weight']).sum()
print( f"Revenues for PT : {revenues_pt:.3f} [{revenues_pt_left:.3f}, {revenues_pt_right:.3f}]")
```
Revenues for PT : 3018.342 [2420.490, 3721.883]
| 92087e089b5760e5000ff583c541eb9a9f36a97d | 13,433 | ipynb | Jupyter Notebook | Code/8.2-NestedLogitModels/02-logit-nested-market.ipynb | ErisonBarros/Transport-Demand-Modelling | a5bc469a69865c3f36845b8cf49dd7239cd8f186 | [
"MIT"
]
| 1 | 2021-03-14T23:18:14.000Z | 2021-03-14T23:18:14.000Z | Code/8.2-NestedLogitModels/02-logit-nested-market.ipynb | valenca13/Transport-Demand-Modelling | a5bc469a69865c3f36845b8cf49dd7239cd8f186 | [
"MIT"
]
| null | null | null | Code/8.2-NestedLogitModels/02-logit-nested-market.ipynb | valenca13/Transport-Demand-Modelling | a5bc469a69865c3f36845b8cf49dd7239cd8f186 | [
"MIT"
]
| null | null | null | 26.547431 | 182 | 0.553488 | true | 2,166 | Qwen/Qwen-72B | 1. YES
2. YES | 0.822189 | 0.718594 | 0.59082 | __label__eng_Latn | 0.467449 | 0.211004 |
```python
import fit_pendulum_data as p1
import midpoint_vec as p2
import Lagrange_poly1 as p3
import Lagrange_poly2 as p4
import Lagrange_poly2b as p5
import sympy as sp
import numpy as np
import matplotlib.pyplot as plt
```
# Exercise 5.18
## Plots L vs T from a source file and fits polynomials of varying degrees to it
```python
p1.part_a()
```
### Below are the polynomials being fit to the data
```python
p1.part_b()
```
# Exercise 5.22
## Computes the midpoint rule in different forms
```python
p2.midpointint(p2.function, 1, 3, 50)[0]
```
12.231183999999999
```python
p2.sum_vectorized(p2.function, 1, 3, 50)
```
12.231183999999999
```python
p2.sum_numpy(p2.function, 1, 3, 50)
```
12.231183999999999
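All three implementations above evaluate the same composite midpoint rule, which for a function $f$ on $[a,b]$ with $n$ subintervals of width $h=(b-a)/n$ reads
\begin{equation}\int_{a}^{b} f(x)\,dx \approx h \sum_{i=0}^{n-1} f\!\left(a + \left(i + \tfrac{1}{2}\right)h\right)\end{equation}
which is why they all return the same value.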
# Exercise 5.23, 5.24, 5.25 (3 parts)
## Lagrange Interpolation
```python
p4.graph(p4.sin, 20, 0, 10, [0,10,-2,2])
```
```python
p5.problem_5_25()
```
```python
```
| cb9bc1566bb6050cf96f421a946ee0cdfa3ca409 | 91,645 | ipynb | Jupyter Notebook | hw-2-C0deMonkee.ipynb | chapman-phys227-2016s/hw-2-C0deMonkee | 2db1f30afecb9f2d09e29a15c6f47656e8582deb | [
"MIT"
]
| null | null | null | hw-2-C0deMonkee.ipynb | chapman-phys227-2016s/hw-2-C0deMonkee | 2db1f30afecb9f2d09e29a15c6f47656e8582deb | [
"MIT"
]
| null | null | null | hw-2-C0deMonkee.ipynb | chapman-phys227-2016s/hw-2-C0deMonkee | 2db1f30afecb9f2d09e29a15c6f47656e8582deb | [
"MIT"
]
| null | null | null | 305.483333 | 15,324 | 0.922647 | true | 316 | Qwen/Qwen-72B | 1. YES
2. YES | 0.875787 | 0.872347 | 0.76399 | __label__eng_Latn | 0.786758 | 0.613338 |
```python
from resources.workspace import *
```
$
% START OF MACRO DEF
% DO NOT EDIT IN INDIVIDUAL NOTEBOOKS, BUT IN macros.py
%
\newcommand{\Reals}{\mathbb{R}}
\newcommand{\Expect}[0]{\mathbb{E}}
\newcommand{\NormDist}{\mathcal{N}}
%
\newcommand{\DynMod}[0]{\mathscr{M}}
\newcommand{\ObsMod}[0]{\mathscr{H}}
%
\newcommand{\mat}[1]{{\mathbf{{#1}}}}
%\newcommand{\mat}[1]{{\pmb{\mathsf{#1}}}}
\newcommand{\bvec}[1]{{\mathbf{#1}}}
%
\newcommand{\trsign}{{\mathsf{T}}}
\newcommand{\tr}{^{\trsign}}
\newcommand{\tn}[1]{#1}
\newcommand{\ceq}[0]{\mathrel{≔}}
%
\newcommand{\I}[0]{\mat{I}}
\newcommand{\K}[0]{\mat{K}}
\newcommand{\bP}[0]{\mat{P}}
\newcommand{\bH}[0]{\mat{H}}
\newcommand{\bF}[0]{\mat{F}}
\newcommand{\R}[0]{\mat{R}}
\newcommand{\Q}[0]{\mat{Q}}
\newcommand{\B}[0]{\mat{B}}
\newcommand{\C}[0]{\mat{C}}
\newcommand{\Ri}[0]{\R^{-1}}
\newcommand{\Bi}[0]{\B^{-1}}
\newcommand{\X}[0]{\mat{X}}
\newcommand{\A}[0]{\mat{A}}
\newcommand{\Y}[0]{\mat{Y}}
\newcommand{\E}[0]{\mat{E}}
\newcommand{\U}[0]{\mat{U}}
\newcommand{\V}[0]{\mat{V}}
%
\newcommand{\x}[0]{\bvec{x}}
\newcommand{\y}[0]{\bvec{y}}
\newcommand{\z}[0]{\bvec{z}}
\newcommand{\q}[0]{\bvec{q}}
\newcommand{\br}[0]{\bvec{r}}
\newcommand{\bb}[0]{\bvec{b}}
%
\newcommand{\bx}[0]{\bvec{\bar{x}}}
\newcommand{\by}[0]{\bvec{\bar{y}}}
\newcommand{\barB}[0]{\mat{\bar{B}}}
\newcommand{\barP}[0]{\mat{\bar{P}}}
\newcommand{\barC}[0]{\mat{\bar{C}}}
\newcommand{\barK}[0]{\mat{\bar{K}}}
%
\newcommand{\D}[0]{\mat{D}}
\newcommand{\Dobs}[0]{\mat{D}_{\text{obs}}}
\newcommand{\Dmod}[0]{\mat{D}_{\text{obs}}}
%
\newcommand{\ones}[0]{\bvec{1}}
\newcommand{\AN}[0]{\big( \I_N - \ones \ones\tr / N \big)}
%
% END OF MACRO DEF
$
## Dynamical systems
are systems (sets of equations) whose variables evolve in time (the equations contain time derivatives). As a branch of mathematics, its theory is mainly concerned with understanding the behaviour of solutions (trajectories) of the systems.
## Chaos
is also known as the butterfly effect: "a butterfly that flaps its wings in Brazil can 'cause' a hurricane in Texas".
As opposed to the opinions of Descartes/Newton/Laplace, chaos effectively means that even in a deterministic (non-stochastic) universe, we can only predict "so far" into the future. This will be illustrated below using two toy-model dynamical systems made by Edward Lorenz.
---
## The Lorenz (1963) attractor
The [Lorenz-63 dynamical system](https://en.wikipedia.org/wiki/Lorenz_system) can be derived as an extreme simplification of *Rayleigh-Bénard convection*: fluid circulation in a shallow layer of fluid uniformly heated (cooled) from below (above).
This produces the following 3 *coupled* ordinary differential equations (ODE):
$$
\begin{aligned}
\dot{x} & = \sigma(y-x) \\
\dot{y} & = \rho x - y - xz \\
\dot{z} & = -\beta z + xy
\end{aligned}
$$
where the "dot" represents the time derivative, $\frac{d}{dt}$. The state vector is $\x = (x,y,z)$, and the parameters are typically set to
```python
SIGMA = 10.0
BETA = 8/3
RHO = 28.0
```
The ODEs can be coded as follows
```python
def dxdt(xyz, t0, sigma, beta, rho):
"""Compute the time-derivative of the Lorenz-63 system."""
x, y, z = xyz
return [
sigma * (y - x),
x * (rho - z) - y,
x * y - beta * z
]
```
#### Numerical integration to compute the trajectories
Below is a function to numerically **integrate** the ODEs and **plot** the solutions.
<!--
This function also takes arguments to control ($\sigma$, $\beta$, $\rho$) and of the numerical integration (`N`, `T`).
-->
```python
from scipy.integrate import odeint
output_63 = [None]
@interact( sigma=(0.,50), beta=(0.,5), rho=(0.,50), N=(0,50), eps=(0.01,1), T=(0.,40))
def animate_lorenz(sigma=SIGMA, beta=BETA, rho=RHO , N=2, eps=0.01, T=1.0):
# Initial conditions: perturbations around some "proto" state
seed(1)
x0_proto = array([-6.1, 1.2, 32.5])
x0 = x0_proto + eps*randn((N, 3))
# Compute trajectories
tt = linspace(0, T, int(100*T)+1) # Time sequence for trajectory
dd = lambda x,t: dxdt(x,t, sigma,beta,rho) # Define dxdt(x,t) with fixed params.
xx = array([odeint(dd, xn, tt) for xn in x0]) # Integrate
output_63[0] = xx
# PLOTTING
ax = plt.figure(figsize=(10,5)).add_subplot(111, projection='3d')
ax.axis('off')
colors = plt.cm.jet(linspace(0,1,N))
for n in range(N):
ax.plot(*(xx[n,:,:].T),'-' , color=colors[n])
ax.scatter3D(*xx[n,-1,:],s=40,color=colors[n])
```
**Exc 2**:
* Move `T` (use your arrow keys). What does it control?
* Set `T` to something small; move the sliders for `N` and `eps`. What do they control?
* Visually investigate the system's (i.e. the trajectories') sensitivity to initial conditions by moving `T`, `N` and `eps`. How long do you think it takes (on average) for two trajectories (or the estimation error) to grow twice as far apart as they started (alternatives: 0.03, 0.3, 3, 30)?
### Averages
Slide `N` and `T` to their upper bounds. Execute the code cell below.
```python
# Compute the average location of the $m$-th component of the state in TWO ways.
m = 0 # state component index (must be 0,1,2)
nB = 20
xx = output_63[0][:,:,m]
plt.hist(xx[:,-1] ,density=1,bins=nB, label="ensemble dist.",alpha=1.0) # -1: last time
plt.hist(xx[-1,:] ,density=1,bins=nB, label="temporal dist.",alpha=0.5) # -1: last ensemble member
#plt.hist(xx.ravel(),density=1,bins=nB, label="total distribution",alpha=0.5)
plt.legend();
```
**Exc 6*:** Answer the questions below.
* (a) Do you think the samples behind the histograms are drawn from the same distribution?
* (b) The answer to the above question means that this dynamical system is [ergodic](https://en.wikipedia.org/wiki/Ergodic_theory#Ergodic_theorems).
Now, suppose we want to investigate which (DA) method is better at estimating the true state (trajectory) for this system, on average. Should we run several short experiments or one long one?
```python
#show_answer("Ergodicity a")
#show_answer("Ergodicity b")
```
---
## The "Lorenz-95" model
The Lorenz-96 system
is a "1D" model, designed to resemble atmospheric convection. Each state variable $\x_m$ can be considered some atmospheric quantity at grid point at a fixed latitude of the earth. The system
is given by the coupled set of ODEs,
$$
\frac{d \x_m}{dt} = (\x_{m+1} − \x_{m-2}) \x_{m-1} − \x_m + F
\, ,
\quad \quad m \in \{1,\ldots,M\}
\, ,
$$
where the subscript indices apply periodically.
This model is not derived from physics but has similar characteristics, such as
<ul>
<li> there is external forcing, determined by a parameter $F$;</li>
<li> there is internal dissipation, emulated by the linear term;</li>
<li> there is energy-conserving advection, emulated by quadratic terms.</li>
</ul>
[Further description in the very readable original article](http://eaps4.mit.edu/research/Lorenz/Predicability_a_Problem_2006.pdf).
**Exc 10:** Show that the "total energy" $\sum_{m=1}^{M} \x_m^2$ is preserved by the quadratic terms in the ODE.
```python
#show_answer("Hint: Lorenz energy")
#show_answer("Lorenz energy")
```
The model is animated below.
```python
# For all m, any n: s(x,n) := x[m+n], circularly.
def s(x,n):
return np.roll(x,-n)
output_95 = [None]
@interact( M=(5,60,1), Force=(0,40,1), eps=(0.01,3,0.1), T=(0.05,40,0.05))
def animate_lorenz_95(M=40, Force=8.0, eps=0.01,T=0):
# Initial conditions: perturbations
x0 = zeros(M)
x0[0] = eps
def dxdt(x,t):
return (s(x,1)-s(x,-2))*s(x,-1) - x + Force
tt = linspace(0, T, int(40*T)+1)
xx = odeint(lambda x,t: dxdt(x,t), x0, tt)
output_95[0] = xx
plt.figure(figsize=(7,4))
# Plot last only
#plt.plot(xx[-1],'b')
# Plot multiple
Lag = 8
colors = plt.cm.cubehelix(0.1+0.6*linspace(0,1,Lag))
for k in range(Lag,0,-1):
plt.plot(xx[max(0,len(xx)-k)],color=colors[Lag-k])
plt.ylim(-10,20)
```
**Exc 12:** Investigate by moving the sliders: Under which settings of the force `F` is the system chaotic (is the predictability horizon finite)?
---
## Error/perturbation dynamics
**Exc 14*:** Suppose $x(t)$ and $z(t)$ are "twins": they evolve according to the same law $f$:
$$
\begin{align}
\frac{dx}{dt} &= f(x) \\
\frac{dz}{dt} &= f(z) \, .
\end{align}
$$
Define the "error": $\varepsilon(t) = x(t) - z(t)$.
Suppose $z(0)$ is close to $x(0)$.
Let $F = \frac{df}{dx}(x(t))$.
* a) Show that the error evolves according to the ordinary differential equation (ODE)
$$\frac{d \varepsilon}{dt} \approx F \varepsilon \, .$$
```python
#show_answer("error evolution")
```
* b) Suppose $F$ is constant. Show that the error grows exponentially: $\varepsilon(t) = \varepsilon(0) e^{F t} $.
```python
#show_answer("anti-deriv")
```
* c)
* 1) Suppose $F<1$.
What happens to the error?
What does this mean for predictability?
* 2) Now suppose $F>1$.
Given that all observations are uncertain (i.e. $R_t>0$, if only ever so slightly),
can we ever hope to estimate $x(t)$ with 0 uncertainty?
```python
#show_answer("predictability cases")
```
* d) Consider the ODE derived above.
How might we change it in order to model (i.e. emulate) a saturation of the error at some level?
Can you solve this equation?
```python
#show_answer("saturation term")
```
* e) Now suppose $z(t)$ evolves according to $\frac{dz}{dt} = g(z)$, with $g \neq f$.
What is now the differential equation governing the evolution of the error, $\varepsilon$?
```python
#show_answer("liner growth")
```
**Exc 16*:** Recall the Lorenz-63 system. What is its doubling time (i.e. estimate how long it takes for two trajectories to grow twice as far apart as they were to start with)?
*Hint: Set `N=50, eps=0.01, T=1,` and compute the spread of the particles now as compared to how they started*
```python
xx = output_63[0][:,-1] # Ensemble of particles at the end of integration
### compute your answer here ###
```
```python
#show_answer("doubling time")
```
The answer actually depends on where in "phase space" the particles started.
To get a universal answer one must average these experiments for many different initial conditions.
## In summary:
Prediction (forecasting) with these systems is challenging because they are chaotic: small errors grow exponentially.
In other words, chaos means that there is a limit to how far into the future we can make predictions (skillfully).
It is therefore crucial to minimize the initial error as much as possible. This is a task for DA.
### Next: [Ensemble representation](T7%20-%20Ensemble%20representation.ipynb)
| 7549c3a486ea893d1ddde30590cba75a36d2e67d | 16,985 | ipynb | Jupyter Notebook | notebooks/T6 - Dynamical systems, chaos, Lorenz.ipynb | mvdebolskiy/DA-tutorials | 89f0ed3383d34a66f37c2f66be48735d4b0f47a2 | [
"MIT"
]
| 70 | 2019-06-26T14:34:52.000Z | 2022-03-22T15:43:40.000Z | notebooks/T6 - Dynamical systems, chaos, Lorenz.ipynb | srinivas2036/DA-tutorials | f70813efe9810129aea49e9f9e7200fbdf6f5a96 | [
"MIT"
]
| 4 | 2019-09-26T17:03:55.000Z | 2022-03-23T08:53:49.000Z | notebooks/T6 - Dynamical systems, chaos, Lorenz.ipynb | srinivas2036/DA-tutorials | f70813efe9810129aea49e9f9e7200fbdf6f5a96 | [
"MIT"
]
| 24 | 2019-08-30T14:30:11.000Z | 2022-02-15T08:56:50.000Z | 31.108059 | 298 | 0.53035 | true | 3,420 | Qwen/Qwen-72B | 1. YES
2. YES | 0.66888 | 0.763484 | 0.510679 | __label__eng_Latn | 0.95131 | 0.024808 |
<a href="https://colab.research.google.com/github/kalz2q/mycolabnotebooks/blob/master/sympy_fpgroup.ipynb" target="_parent"></a>
```python
```
Memo
For now, let's work through the fp_groups section of the SymPy documentation, in the sense of actually typing things in and trying them out.
Angle brackets
https://docs.sympy.org/latest/modules/combinatorics/fp_groups.html
```python
from sympy.combinatorics.free_groups import free_group, vfree_group, xfree_group
from sympy.combinatorics.fp_groups import FpGroup, CosetTable, coset_enumeration_r
```
The imports worked, at any rate.
```python
F,a,b = free_group("a,b")
G=FpGroup(F,[a**2,b**3,(a*b)**4])
G
```
$\displaystyle \mathtt{\text{<fp group on the generators (a, b)>}}$
```python
F,r,s,t = free_group("r,s,t")
G=FpGroup(F,[r**2,s**2,t**2,r*s*t*r**-1*t**-1*s**-1,s*t*r*s**-1*r**-1*t**-1])
```
$\displaystyle \mathtt{\text{<free group on the generators (r, s, t)>}}$
```python
F,x,y=free_group("x,y")
F
```
$\displaystyle \mathtt{\text{<free group on the generators (x, y)>}}$
```python
F=vfree_group("x,y")
F
```
$\displaystyle \mathtt{\text{<free group on the generators (x, y)>}}$
```python
F=xfree_group("x,y")
print(F)
x**2
```
(<free group on the generators (x, y)>, (x, y))
$\displaystyle \left( \left( x, \ 2\right)\right)$
```python
from sympy.combinatorics.free_groups import free_group, vfree_group, xfree_group
from sympy.combinatorics.fp_groups import FpGroup, CosetTable, coset_enumeration_r
F, a, b = free_group("a, b")
Cox = FpGroup(F, [a**6, b**6, (a*b)**2, (a**2*b**2)**2, (a**3*b**3)**5])
# C_r = coset_enumeration_r(Cox, [a], max_cosets=50)
```
# Stopping here for now (work in progress)
The angle-bracket notation for groups in group theory still needs sorting out, so stopping here for the time being.
```python
```
| 8e81f4c7383ea2f846167ecf93b34408c2d08069 | 6,994 | ipynb | Jupyter Notebook | sympy_fpgroup.ipynb | kalz2q/-yjupyternotebooks | ba37ac7822543b830fe8602b3f611bb617943463 | [
"MIT"
]
| 1 | 2021-09-16T03:45:19.000Z | 2021-09-16T03:45:19.000Z | sympy_fpgroup.ipynb | kalz2q/-yjupyternotebooks | ba37ac7822543b830fe8602b3f611bb617943463 | [
"MIT"
]
| null | null | null | sympy_fpgroup.ipynb | kalz2q/-yjupyternotebooks | ba37ac7822543b830fe8602b3f611bb617943463 | [
"MIT"
]
| null | null | null | 24.978571 | 235 | 0.42851 | true | 642 | Qwen/Qwen-72B | 1. YES
2. YES | 0.798187 | 0.665411 | 0.531122 | __label__yue_Hant | 0.352127 | 0.072304 |
# Demo of IPython Notebook
## David Pine
```
2+3
```
5
```
import numpy as np
import matplotlib.pyplot as plt
```
```
np.sin(np.pi/6)
```
0.49999999999999994
```
plt.plot([1,2,3,2,3,4,3,4,5])
```
```
```
```
# Calculates time, gallons of gas used, and cost of gasoline for
# a trip
distance = float(raw_input("Input distance of trip in miles: "))
mpg = 30. # car mileage
speed = 60. # average speed
costPerGallon = 4.10 # price of gas
time = distance/speed
gallons = distance/mpg
cost = gallons*costPerGallon
print("\nDuration of trip = {0:0.1f} hours".format(time))
print("Gasoline used = {0:0.1f} gallons (@ {1:0.0f} mpg)"
.format(gallons, mpg))
print("Cost of gasoline = ${0:0.2f} (@ ${1:0.2f}/gallon)"
.format(cost, costPerGallon))
```
Input distance of trip in miles: 450
Duration of trip = 7.5 hours
Gasoline used = 15.0 gallons (@ 30 mpg)
Cost of gasoline = $61.50 (@ $4.10/gallon)
The total distance $x$ traveled during a trip can be
obtained by integrating the velocity $v(t)$ over the
duration $T$ of the trip:
\begin{align}
x = \int_0^T v(t)\, dt
\end{align}
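As a quick numerical check of this formula using the trip from the first code cell (constant speed, so the integral should reduce to speed multiplied by time), here is a minimal sketch; the grid of 1001 points is an arbitrary choice.
```
import numpy as np
T = 7.5                          # trip duration in hours
t = np.linspace(0, T, 1001)      # time grid
v = 60. * np.ones_like(t)        # constant velocity of 60 mph
x = np.trapz(v, t)               # numerically integrate v(t) from 0 to T
print(x)                         # approximately 450 miles
```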
```
!cat LiamSelinaData.txt
```
Date: 2013-09-16
Data taken by Liam and Selena
frequency (Hz) amplitude (mm) amp error (mm)
0.7500 13.52 0.32
1.7885 12.11 0.92
2.8269 14.27 0.73
3.8654 16.60 2.06
4.9038 22.91 1.75
5.9423 35.28 0.91
6.9808 60.99 0.99
8.0192 33.38 0.36
9.0577 17.78 2.32
10.0962 10.99 0.21
11.1346 7.47 0.48
12.1731 6.72 0.51
13.2115 4.40 0.58
14.2500 4.07 0.63
```
```
| 76bff0210216936fca8bea4d6250d108ebd7715f | 14,570 | ipynb | Jupyter Notebook | Book/apdx2/Supporting Materials/FirstNotebook.ipynb | lorenghoh/pyman | 9b4ddd52c5577fc85e2601ae3128f398f0eb673c | [
"CC0-1.0"
]
| 1 | 2020-02-16T16:15:04.000Z | 2020-02-16T16:15:04.000Z | Book/apdx2/Supporting Materials/FirstNotebook.ipynb | lorenghoh/pyman | 9b4ddd52c5577fc85e2601ae3128f398f0eb673c | [
"CC0-1.0"
]
| null | null | null | Book/apdx2/Supporting Materials/FirstNotebook.ipynb | lorenghoh/pyman | 9b4ddd52c5577fc85e2601ae3128f398f0eb673c | [
"CC0-1.0"
]
| 1 | 2020-01-08T23:35:54.000Z | 2020-01-08T23:35:54.000Z | 66.227273 | 9,369 | 0.761977 | true | 679 | Qwen/Qwen-72B | 1. YES
2. YES | 0.731059 | 0.782662 | 0.572172 | __label__eng_Latn | 0.464983 | 0.167677 |
# Structural Estimation
1. This notebook shows how to **estimate** the consumption model in **ConsumptionSaving.pdf** using **Simulated Minimum Distance (SMD)**
2. It also shows how to calculate **standard errors** and **sensitivity measures**
## Simulated Minimum Distance
**Data:** We assume that we have data available for $N$ households over $T$ periods, collected in $\{w_i\}_i^N$.
**Goal:** We wish to estimate the true, unknown, parameter vector $\theta_0$. We assume our model is correctly specified in the sense that the observed data stems from the model.
**Overview:**
1. We focus on matching certain (well-chosen) **empirical moments** in the data to **simulated moments** from the model.
2. We calculate a $J\times1$ vector of moments in the data, $\Lambda_{data} = \frac{1}{N}\sum_{i=1}^N m(\theta_0|w_i)$. This could e.g. be average consumption over the life-cycle, the income variance or regression coefficients from some statistical model.
3. To estimate $\theta$ we choose $\theta$ so as to **minimize the (squared) distance** between the moments in the data and the same moments calculated from simulated data. Let $\Lambda_{sim}(\theta) = \frac{1}{N_{sim}}\sum_{s=1}^{N_{sim}} m(\theta|w_s)$ be the same moments calculated on simulated data for $N_{sim}=S\times N$ observations for $T_{sim}$ periods from the model for a given value of $\theta$. As we change $\theta$, the simulated outcomes will change and the moments will too.
The **Simulated Minimum Distance (SMD)** estimator then is
$$
\hat{\theta} = \arg\min_{\theta} g(\theta)'Wg(\theta)
$$
where $W$ is a $J\times J$ positive semidefinite **weighting matrix** and
$$
g(\theta)=\Lambda_{data}-\Lambda_{sim}(\theta)
$$
is the distance between $J\times1$ vectors of moments calculated in the data and the simulated data, respectively. Concretely,
$$
\Lambda_{data} = \frac{1}{N}\sum_{i=1}^N m(\theta_0|w_i) \\
\Lambda_{sim}(\theta) = \frac{1}{N_{sim}}\sum_{s=1}^{N_{sim}} m(\theta|w_s)
$$
are $J\times1$ vectors of moments calculated in the data and the simulated data, respectively.
**Settings:** In our baseline setup, we will have $N=5,000$ observations for $T=40$ periods, and simulate $N_{sim}=100,000$ synthetic consumers for $T_{sim} = 40$ periods when estimating the model.
**Solution of consumption-saving model:** This estimator requires the solution (and simulation) of the model for each trial guess of $\theta$ as we search for the one that minimizes the objective function. Therefore, structural estimation can in general be quite time-consuming. We will use the EGM to solve the consumption model quite fast and thus be able to estimate parameters within a couple of minutes. Estimation of more complex models might take significantly longer.
> **Note I:** When regression coefficients are used as moments, they are sometimes referred to as **auxiliary parameters** (APs) and the estimator using these APs as an **Indirect Inference (II)** estimator ([Gouriéroux, Monfort and Renault, 1993](https://onlinelibrary.wiley.com/doi/abs/10.1002/jae.3950080507)).
> **Note II:** The estimator used is also called a **simulated method of moments (SMM)** estimator, i.e. a simulated Generalized Method of Moments (GMM) estimator.
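As a schematic sketch of the estimator (not the interface of the classes used below; `solve_and_simulate` is a placeholder for the model solution and simulation step), the criterion $g(\theta)'Wg(\theta)$ could be coded as:

```python
import numpy as np

def smd_objective(theta, W, data_moments, solve_and_simulate, mom_func):
    """Quadratic-form SMD objective g(theta)' W g(theta).

    solve_and_simulate(theta) is a placeholder that should solve the model and
    return simulated data for the candidate parameters; mom_func maps data to
    the J-vector of moments.
    """
    sim_data = solve_and_simulate(theta)       # solve and simulate the model
    g = data_moments - mom_func(sim_data)      # moment distance, J x 1
    return g @ W @ g                           # scalar criterion to minimize
```

A numerical optimizer (e.g. from `scipy.optimize`, imported above) can then search over $\theta$ for the minimizer.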
# Setup
```python
%matplotlib inline
%load_ext autoreload
%autoreload 2
import time
import numpy as np
import scipy.optimize as optimize
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
prop_cycle = plt.rcParams['axes.prop_cycle']
colors = prop_cycle.by_key()['color']
import figs
from ConsumptionSavingModel import ConsumptionSavingModelClass
from SimulatedMinimumDistance import SimulatedMinimumDistanceClass
```
# Estimation choices
```python
# a. model settings
N = 5_000
N_sim = 100_000
par = {'simlifecycle':True,
'sim_mini':1.0 ,
'simT':40,
'simN':N_sim,
'Nxi':4,
'Npsi':4,
'Na':100}
par_true = par.copy()
par_true['simN'] = N
# b. parameters to estimate
est_par = {
'rho': {'guess':2.0,'lower':0.5,'upper':5.0,},
'beta': {'guess':0.97,'lower':0.90,'upper':0.999},
}
est_par_names = [key for key in est_par.keys()]
# c. moment function used in estimation.
def mom_func(data,ids=None):
""" returns the age profile of wealth """
if ids is None:
mean_A = np.mean(data.A[:,1:],axis=0)
else:
mean_A = np.mean(data.A[ids,1:],axis=0)
return mean_A
# d. choose weighting matrix
weighting_matrix = 0
# 0: identity (equal weight),
# 1: inverse of variance on the diagonal (removes scale),
# 2: inverse of covariance matrix between estimation moments (optimal weighting matrix)
```
# Data and estimator
Construct **data**.
```python
# a. setup model to simulate data
true = ConsumptionSavingModelClass(name='true',solmethod='egm',**par_true)
true.solve()
true.simulate(seed=2019) # this seed is different from the default
# b. data moments
datamoms = mom_func(true.sim)
moment_names = [i for i in range(true.par.age_min+1,true.par.age_min+true.par.simT)]
```
model solved in 1.3 secs
model simulated in 1.3 secs
**Bootstrap** the variance of the estimation moments, used when later calculating standard errors below (and potentially for the weighting matrix).
```python
num_boot = 200
num_moms = datamoms.size
smd = SimulatedMinimumDistanceClass(est_par,mom_func,datamoms=datamoms)
smd.Omega = smd.bootstrap_mom_var(true.sim,N,num_boot,num_moms)
```
**Setup estimator**.
```python
smd.plot({'data':moment_names},{'data':datamoms},xlabel='age',ylabel='wealth',hide_legend=True)
```
# Estimate the model
```python
model = ConsumptionSavingModelClass(name='estimated',solmethod='egm',**par)
```
Choose **weighting matrix**:
```python
if weighting_matrix == 0:
W = np.eye(smd.datamoms.size) # identity
elif weighting_matrix == 1:
W = np.diag(1.0/np.diag(smd.Omega)) # inverse of variance on the diagonal
else:
W = np.linalg.inv(smd.Omega) # optimal weighting matrix
```
## Estimation results
```python
# a. estimate the model (can take several minutes)
%time est = smd.estimate(model,W)
# b. print estimation results
print(f'\n True Est. ')
for key in est_par.keys():
print(f'{key:5s} {getattr(true.par,key):2.3f} {est[key]:2.3f}')
```
objective function at starting values: 5.961003893464155
Wall time: 47.7 s
True Est.
rho 2.000 2.018
beta 0.960 0.960
Show **model-fit**:
```python
plot_data_x = {'data':moment_names,'simulated':moment_names}
plot_data_y = {'data':datamoms,'simulated':mom_func(model.sim)}
smd.plot(plot_data_x,plot_data_y,xlabel='age',ylabel='wealth')
```
## Standard errors
The SMD estimator is **asymptotically Normal** and its standard errors have the same form as those of standard GMM estimators, scaled with the adjustment factor $(1+S^{-1})$ due to the fact that we use $S$ simulations of the model.
The **standard errors** are thus
$$
\begin{align}
\text{Var}(\hat{\theta})&=(1+S^{-1})\Gamma\Omega\Gamma'/N \\
\Gamma &= -(G'WG)^{-1}G'W \\
\Omega & = \text{Var}(m(\theta_0|w_i))
\end{align}
$$
where $G=\frac{\partial g(\theta)}{\partial \theta}$ is the $J\times K$ **Jacobian** with respect to $\theta$. $\Gamma$ is related to what is sometimes called the "influence function".
**Calculating $\Omega$**:
1. Can sometimes be done **analytically**
2. Can always be done using a **bootstrap** as done above
**Calculating the Jacobian, $G$:** This is done using numerical finite differences.
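For concreteness, given a numerically computed Jacobian `G`, the weighting matrix `W`, and the bootstrapped `Omega`, a minimal sketch of assembling $\Gamma$ and the standard errors (an illustrative helper, not the class method used below) is:

```python
import numpy as np

def smd_standard_errors(G, W, Omega, N, S):
    """Gamma = -(G'WG)^{-1} G'W and Var(theta_hat) = (1 + 1/S) Gamma Omega Gamma' / N."""
    Gamma = -np.linalg.solve(G.T @ W @ G, G.T @ W)            # K x J influence function
    Var_theta = (1.0 + 1.0/S) * Gamma @ Omega @ Gamma.T / N   # K x K covariance of theta_hat
    return Gamma, np.sqrt(np.diag(Var_theta))                 # standard errors on the diagonal
```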
```python
# a. number of datasets simulated per individual in original data
S = model.par.simN/N
# b. find standard errors
Gamma, grad_theta = smd.calc_influence_function(est['theta'],model,W)
Var_theta = (1.0+1.0/S) * Gamma @ smd.Omega @ Gamma.T /N
se = np.sqrt(np.diag(Var_theta))
# b. print estimation results
print(f' True Est. (se)')
for i,(key,val) in enumerate(est_par.items()):
print(f'{key:5s} {getattr(true.par,key):2.3f} {est[key]:2.3f} ({se[i]:2.3f})')
```
True Est. (se)
rho 2.000 2.018 (0.043)
beta 0.960 0.960 (0.001)
# Sensitivity Analysis
We now look into a **sensitivity analysis** of our estimation. Concretely, we implement the **informativeness measure** from [Honoré, Jørgensen and de Paula (2019)](https://arxiv.org/abs/1907.02101v2 "The Informativeness of Estimation Moments") and the **sensitivity to calibrated parameters** in [Jørgensen (2020)](https://www.dropbox.com/s/g8ip7h051dyhn3r/Sensitivity.pdf?dl=0). Further details can be found in these papers.
## The informativeness of estimation moments
The measures are motivated by those proposed in [Honoré, Jørgensen and de Paula (2019)](https://arxiv.org/abs/1907.02101v2 "The Informativeness of Estimation Moments"). All the measures proposed in that paper are calculated, but we will focus on their measure 4, which asks **"what is the change in the asymptotic variance from completely excluding the k'th moment?"**. If the *k*th moment is very informative about a parameter, the asymptotic variance of that parameter should increase significantly if we leave out that moment.
```python
info = smd.informativeness_moments(grad_theta,smd.Omega,W)
smd.plot_heat(info['M4e'],est_par_names,moment_names,annot=False)
```
**Conclusion:** We can see that especially the wealth level of younger households is very informative regarding both $\rho$ and $\beta$. This is likely due to the fact that for low levels of resources (which is the case at younger ages), the values of both these parameters affect consumption and saving decisions a lot. Thus, the level of saving, especially at young ages, is very informative and helps to identify the two parameters.
## Sensitivity to calibrated parameters
The measure is motivated by the one proposed in [Jørgensen (2020)](www.tjeconomics.com "Sensitivity to Calibrated Parameters"). Note that the estimation moments are all functions of the $L$ calibrated parameters, which we will denote $\gamma$, $g(\theta|\gamma)$.
The **sensitivity measure** is defined as
$$
\begin{align}
S &= \Gamma D
\end{align}
$$
where $D=\frac{\partial g(\theta|\gamma)}{\partial \gamma}$ is the $J\times L$ **Jacobian** with respect to $\gamma$.
*We only need to calculate $D$* since we have already calculated $\Gamma$ when we calculated standard errors above. We use numerical finite differences to calculate this object.
**Chosen calibrated parameters:** $R$, $G$, $\sigma_{\psi}$, $\sigma_{\xi}$.
```python
cali_par_names = ('R','G','sigma_psi','sigma_xi')
cali_par = np.array([getattr(model.par,name) for name in cali_par_names])
```
**Calculate the sensitivity measure:**
```python
grad_gamma = smd.num_grad(cali_par,model,cali_par_names)
sens_cali = Gamma @ grad_gamma
```
**Plot sensitivity measure**
```python
smd.plot_heat(sens_cali,est_par_names,cali_par_names)
```
**Check:** We can compare this to a brute-force approach in which we re-estimate the model for marginal changes in the calibrated parameters. This takes considerable time, however. The results are almost identical.
```python
sens_cali_brute = smd.sens_cali_brute_force(model,est['theta'],W,cali_par_names)
smd.plot_heat(sens_cali_brute,est_par_names,cali_par_names)
```
**Arbitrary changes in $\gamma$**: We can also investigate larger simultaneous changes in $\gamma$.
```python
# a. set new calibrated parameters
cali_par_new = {'G':1.05}
# b. update calibrated parameters in new version of the model
model_new = model.copy()
for key,val in cali_par_new.items():
setattr(model_new.par,key,val)
# c. calculate new objective function
obj_vec = smd.diff_vec_func(est['theta'],model,est_par_names)
obj_vec_new = smd.diff_vec_func(est['theta'],model_new,est_par_names)
# d. approximate change in theta
Gamma_new,_ = smd.calc_influence_function(est['theta'],model_new,W)
theta_delta = Gamma_new @ obj_vec_new - Gamma @ obj_vec
# e. extrapolate the gradient
theta_delta_extrap = np.zeros(theta_delta.size)
for j,key in enumerate(cali_par_new):
theta_delta_extrap += sens_cali[:,j]*(cali_par_new[key]-getattr(model.par,key))
print(theta_delta_extrap)
```
[ 0.17300506 -0.02961878]
**Check:** Again, we can compare this approximation to a brute-force re-estimation of the model for the changed $\gamma$.
```python
est_new = smd.estimate(model_new,W)
theta_delta_brute = est_new['theta'] - est['theta']
print(theta_delta_brute)
```
objective function at starting values: 5.463540643940908
[ 0.11275468 -0.05969197]
| 6837008f7f374d754002591f752316e822e75c71 | 95,022 | ipynb | Jupyter Notebook | DynamicProgramming/04. Structural Estimation.ipynb | ThomasHJorgensen/ConsumptionSavingNotebooks | badbdfb1da226d5494026de2adcfec171c7f40ea | [
"MIT"
]
| 1 | 2021-11-07T23:37:25.000Z | 2021-11-07T23:37:25.000Z | DynamicProgramming/04. Structural Estimation.ipynb | ThomasHJorgensen/ConsumptionSavingNotebooks | badbdfb1da226d5494026de2adcfec171c7f40ea | [
"MIT"
]
| null | null | null | DynamicProgramming/04. Structural Estimation.ipynb | ThomasHJorgensen/ConsumptionSavingNotebooks | badbdfb1da226d5494026de2adcfec171c7f40ea | [
"MIT"
]
| null | null | null | 132.527197 | 22,404 | 0.879112 | true | 3,409 | Qwen/Qwen-72B | 1. YES
2. YES | 0.793106 | 0.7773 | 0.616481 | __label__eng_Latn | 0.941208 | 0.270623 |
| |Pierre Proulx, ing., professor|
|:---|:---|
|Department of Chemical and Biotechnological Engineering |** GCH200 - Phénomènes d'échanges I **|
### Section 18.7, Diffusion and reaction in a spherical porous catalyst
See the assumptions in the Transport Phenomena book.
```python
#
# Pierre Proulx
#
# Set up the display and the symbolic computation tools
#
import sympy as sp
from IPython.display import *
sp.init_printing(use_latex=True)
%matplotlib inline
```
```python
# Parameters, variables and functions
r,delta_r,R,k_1,a,D_A,C_AR=sp.symbols('r,delta_r,R,k_1,a,D_A,C_AR')
N_A=sp.symbols('N_A')
C_A=sp.Function('C_A')(r)
```
```python
#
# Résultat du bilan sur une coquille sphérique
#
eq=sp.Eq(sp.diff(N_A(r)*r**2,r)+k_1*a*C_A*r**2,0)
display(eq)
```
```python
#
# Solution after transformation, before applying the boundary conditions (appendix C-1)
#
C1,C2=sp.symbols('C1,C2')
C_A=C_AR*C1/r*sp.cosh((k_1*a/D_A)**0.5*r)+C_AR*C2/r*sp.sinh((k_1*a/D_A)**0.5*r)
display(C_A)
```
```python
#C_A=C_A.subs(sp.symbols('C1'),0)
cl1=r**2*sp.diff(C_A,r) # same approach as done in class
cl1=sp.cancel(cl1) # Trick to force the multiplications and divisions by r and r**2
# and avoid the 0/0 indeterminate forms.
cl1=cl1.subs(r,0)
cl2=sp.Eq(C_A.subs(r,R)-C_AR,0) # at the surface, C_A = C_AR
cl2=cl2.lhs
constantes=sp.solve([cl1,cl2],sp.symbols('C1 C2'))
display(constantes)
C_A1=C_A.subs(constantes)
display(sp.simplify(C_A1))
```
```python
#
# at r=0 the flux is finite or the concentration is finite. Both conditions are fine and give
# C1 = 0. It is easier to do this 'by hand', but I show how one can
# use the zero-flux condition above (calculation of C_A1)
C_A=C_A.subs('C1',0)
#
# solve for C2
#
eq=sp.Eq(C_A.subs(r,R)-C_AR)
constante=sp.solve((eq,0),'C2')
C_A=C_A.subs(constante)
display(C_A)
display(sp.simplify(C_A)-sp.simplify(C_A1)) # check that both approaches are correct
```
```python
#
# Tracer le profil pour différentes valeurs de la constante de réaction
#
dico1={'C_AR':1,'D_A':1.e-7,'R':0.01,'a':16,'k_1':1.e-5}
Thiele1=((k_1*a/D_A)**(0.5)*R).subs(dico1).evalf(3)
C_A1=C_A.subs(dico1)
dico2={'C_AR':1,'D_A':1.e-7,'R':0.01,'a':16,'k_1':4.e-5}
Thiele2=((k_1*a/D_A)**(0.5)*R).subs(dico2).evalf(3)
C_A2=C_A.subs(dico2)
dico3={'C_AR':1,'D_A':1.e-7,'R':0.01,'a':16,'k_1':16.e-5}
Thiele3=((k_1*a/D_A)**(0.5)*R).subs(dico3).evalf(3)
C_A3=C_A.subs(dico3)
dico4={'C_AR':1,'D_A':1.e-7,'R':0.01,'a':16,'k_1':64.e-5}
Thiele4=((k_1*a/D_A)**(0.5)*R).subs(dico4).evalf(3)
C_A4=C_A.subs(dico4)
dico5={'C_AR':1,'D_A':1.e-7,'R':0.01,'a':16,'k_1':256.e-5}
Thiele5=((k_1*a/D_A)**(0.5)*R).subs(dico5).evalf(3)
C_A5=C_A.subs(dico5)
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize']=10,8
p = sp.plot((C_A1,(r,0,0.01)),(C_A2,(r,0,0.01)),(C_A3,(r,0,0.01)),(C_A4,(r,0,0.01)),(C_A5,(r,0,0.01)),
legend=True, title='Concentration radiale en fonction de Thiele',
xlabel='r',ylabel='C_a',show=False)
p[0].line_color = 'blue'
p[0].label=str(Thiele1)
p[1].line_color = 'black'
p[1].label=str(Thiele2)
p[2].line_color = 'red'
p[2].label=str(Thiele3)
p[3].line_color = 'green'
p[3].label=str(Thiele4)
p[4].line_color = 'yellow'
p[4].label=str(Thiele5)
p.show()
```
| 670fa72375ee7a2f224dbe0a882b700896ea37a0 | 74,789 | ipynb | Jupyter Notebook | Chap-18-Section-18-7.ipynb | Spationaute/GCH200 | 55144f5b2a59a7240d36c985997387f5036149f7 | [
"MIT"
]
| 1 | 2018-02-26T16:29:58.000Z | 2018-02-26T16:29:58.000Z | Chap-18-Section-18-7.ipynb | Spationaute/GCH200 | 55144f5b2a59a7240d36c985997387f5036149f7 | [
"MIT"
]
| null | null | null | Chap-18-Section-18-7.ipynb | Spationaute/GCH200 | 55144f5b2a59a7240d36c985997387f5036149f7 | [
"MIT"
]
| 2 | 2018-02-27T15:04:33.000Z | 2021-06-03T16:38:07.000Z | 230.830247 | 44,540 | 0.879354 | true | 1,327 | Qwen/Qwen-72B | 1. YES
2. YES | 0.843895 | 0.805632 | 0.679869 | __label__fra_Latn | 0.354141 | 0.417895 |
# Automated Gradual Pruning Schedule
Michael Zhu and Suyog Gupta, ["To prune, or not to prune: exploring the efficacy of pruning for model compression"](https://arxiv.org/pdf/1710.01878), 2017 NIPS Workshop on Machine Learning of Phones and other Consumer Devices<br>
<br>
After completing sensitivity analysis, decide on your pruning schedule.
## Table of Contents
1. [Implementation of the gradual sparsity function](#Implementation-of-the-gradual-sparsity-function)
2. [Visualize pruning schedule](#Visualize-pruning-schedule)
3. [References](#References)
```python
import numpy
import matplotlib.pyplot as plt
from functools import partial
import torch
from torch.autograd import Variable
from ipywidgets import widgets, interact
```
## Implementation of the gradual sparsity function
The function ```sparsity_target``` implements the gradual sparsity schedule from [[1]](#zhu-gupta):<br><br>
<b><i>"We introduce a new automated gradual pruning algorithm in which the sparsity is increased from an initial sparsity value $s_i$ (usually 0) to a final sparsity value $s_f$ over a span of $n$ pruning steps, starting at training step $t_0$ and with pruning frequency $\Delta t$."</i></b><br>
<br>
<div id="eq:zhu_gupta_schedule"></div>
<center>
$\large
\begin{align}
s_t = s_f + (s_i - s_f) \left(1- \frac{t-t_0}{n\Delta t}\right)^3
\end{align}
\ \ for
\large \ \ t \in \{t_0, t_0+\Delta t, ..., t_0+n\Delta t\}
$
</center>
<br>
Pruning happens once at the beginning of each epoch, until the duration of the pruning (the number of epochs to prune) is exceeded. After pruning ends, the training continues without pruning, but the pruned weights are kept at zero.
```python
def sparsity_target(starting_epoch, ending_epoch, initial_sparsity, final_sparsity, current_epoch):
if final_sparsity < initial_sparsity:
return current_epoch
if current_epoch < starting_epoch:
return current_epoch
span = ending_epoch - starting_epoch
target_sparsity = ( final_sparsity +
(initial_sparsity - final_sparsity) *
(1.0 - ((current_epoch-starting_epoch)/span))**3)
return target_sparsity
```
## Visualize pruning schedule
When using the Automated Gradual Pruning (AGP) schedule, you may want to visualize how the pruning schedule will look as a function of the epoch number. This is called the *sparsity function*. The widget below will help you do this.<br>
There are four knobs you can use to change the schedule:
- ```duration```: this is the number of epochs over which to use the AGP schedule ($n\Delta t$).
- ```initial_sparsity```: $s_i$
- ```final_sparsity```: $s_f$
- ```frequency```: this is the pruning frequency ($\Delta t$).
```python
def draw_pruning(duration, initial_sparsity, final_sparsity, frequency):
epochs = []
sparsity_levels = []
# The derivative of the sparsity (i.e. sparsity rate of change)
d_sparsity = []
if frequency=='':
frequency = 1
else:
frequency = int(frequency)
for epoch in range(0,40):
epochs.append(epoch)
current_epoch=Variable(torch.FloatTensor([epoch]), requires_grad=True)
if epoch<duration and epoch%frequency == 0:
sparsity = sparsity_target(
starting_epoch=0,
ending_epoch=duration,
initial_sparsity=initial_sparsity,
final_sparsity=final_sparsity,
current_epoch=current_epoch
)
sparsity_levels.append(sparsity)
sparsity.backward()
d_sparsity.append(current_epoch.grad.item())
current_epoch.grad.data.zero_()
else:
sparsity_levels.append(sparsity)
d_sparsity.append(0)
plt.plot(epochs, sparsity_levels, epochs, d_sparsity)
plt.ylabel('sparsity (%)')
plt.xlabel('epoch')
plt.title('Pruning Rate')
plt.ylim(0, 100)
plt.draw()
duration_widget = widgets.IntSlider(min=0, max=100, step=1, value=28)
si_widget = widgets.IntSlider(min=0, max=100, step=1, value=0)
interact(draw_pruning,
duration=duration_widget,
initial_sparsity=si_widget,
final_sparsity=(0,100,1),
frequency='2');
```
<div id="toc"></div>
## References
1. <div id="zhu-gupta"></div> **Michael Zhu and Suyog Gupta**.
[*To prune, or not to prune: exploring the efficacy of pruning for model compression*](https://arxiv.org/pdf/1710.01878),
NIPS Workshop on Machine Learning of Phones and other Consumer Devices,
2017.
```python
```
| c56560730655a5b36e4e6b74cdae9ff04d7781c7 | 6,707 | ipynb | Jupyter Notebook | jupyter/agp_schedule.ipynb | thomasjpfan/distiller | a029c9b0f1758b4949e7d624ffb947a80317fe6c | [
"Apache-2.0"
]
| 94 | 2019-01-30T18:18:12.000Z | 2022-03-07T23:47:13.000Z | jupyter/agp_schedule.ipynb | thomasjpfan/distiller | a029c9b0f1758b4949e7d624ffb947a80317fe6c | [
"Apache-2.0"
]
| 4 | 2020-09-26T00:53:47.000Z | 2022-02-10T01:23:34.000Z | code/jupyter/agp_schedule.ipynb | sinreq-learn/sinreq-learn.code | a205d3fa22a41d5f4fc1ef1e5698c4f1dbb11e6a | [
"BSD-4-Clause-UC"
]
| 23 | 2019-04-19T09:38:14.000Z | 2022-01-24T04:45:23.000Z | 36.254054 | 307 | 0.565827 | true | 1,128 | Qwen/Qwen-72B | 1. YES
2. YES | 0.857768 | 0.7773 | 0.666743 | __label__eng_Latn | 0.890078 | 0.387398 |
```python
from scipy import linalg as la
from scipy import optimize
import sympy
sympy.init_printing()
import numpy as np
import matplotlib.pyplot as plt
```
```python
# Graphical solution
x = np.arange(-4, 2, 1)
x2 = np.arange(-2, 6, 1)
y1 = (4 - 2*x) / 3
y2 = (3 - 5*x) / 4
fig, ax = plt.subplots(figsize=(10, 5))
ax.set_xlabel("${x_1}$")
ax.set_ylabel("${x_2}$")
ax.plot(x, y2, 'r', label="$2{x_1}+3{x_2}-4=0$")
ax.plot(x, y1, 'b', label="$5{x_1}+4{x_2}-3=0$")
ax.plot(-1, 2, 'black', lw=5, marker='o')
ax.annotate("The intersection point\nof the two lines is the solution\nto the system of equations", fontsize=14, family="serif", xy=(-1, 2),
xycoords="data", xytext=(-150, -80),
textcoords="offset points", arrowprops=dict(arrowstyle="->", connectionstyle="arc3, rad=-.5"))
ax.set_xticks(x)
ax.set_yticks(x2)
ax.legend()
```
### Square system
```python
A = sympy.Matrix([[2, 3], [5, 4]])
b = sympy.Matrix([4, 3])
A.rank()
```
```python
A.condition_number()
```
```python
sympy.N(_)
```
```python
A.norm()
```
```python
A = np.array([[2, 3], [5, 4]])
b = np.array([4, 3])
```
```python
np.linalg.matrix_rank(A)
```
2
```python
np.linalg.cond(A)
```
```python
np.linalg.norm(A)
```
```python
# LU factorization
A = sympy.Matrix([[2, 3], [5, 4]])
b = sympy.Matrix([4, 3])
L, U, _ = A.LUdecomposition()
L
```
$\displaystyle \left[\begin{matrix}1 & 0\\\frac{5}{2} & 1\end{matrix}\right]$
```python
U
```
$\displaystyle \left[\begin{matrix}2 & 3\\0 & - \frac{7}{2}\end{matrix}\right]$
```python
L * U == A
```
True
```python
x = A.solve(b)
x
```
$\displaystyle \left[\begin{matrix}-1\\2\end{matrix}\right]$
```python
A = np.array([[2, 3], [5, 4]])
b = np.array([4, 3])
P, L, U = la.lu(A)
L
```
array([[1. , 0. ],
[0.4, 1. ]])
```python
P.dot(L.dot(U))
```
array([[2., 3.],
[5., 4.]])
```python
la.solve(A, b)
```
array([-1., 2.])
```python
# Symbolic vs Numerical
p = sympy.symbols("p", positive=True)
A = sympy.Matrix([[1, sympy.sqrt(p)], [1, 1/sympy.sqrt(p)]])
b = sympy.Matrix([1, 2])
x = A.solve(b)
x
```
$\displaystyle \left[\begin{matrix}\frac{2 p - 1}{p - 1}\\\frac{1}{- \sqrt{p} + \frac{1}{\sqrt{p}}}\end{matrix}\right]$
```python
#Symbolic problem specification
p = sympy.symbols("p", positive=True)
A = sympy.Matrix([[1, sympy.sqrt(p)], [1, 1/sympy.sqrt(p)]])
b = sympy.Matrix([1, 2])
# Solve symbolically
x_sym_sol = A.solve(b)
Acond = A.condition_number().simplify()
# Numerical problem specification
AA = lambda p: np.array([[1, np.sqrt(p)], [1, 1/np.sqrt(p)]])
bb = np.array([1, 2])
x_num_sol = lambda p: np.linalg.solve(AA(p), bb)
# Graph the difference between the symbolic (exact) and numerical results.
fig, axes = plt.subplots(1, 2, figsize=(12, 4))
p_vec = np.linspace(0.9, 1.1, 200)
for n in range(2):
x_sym = np.array([x_sym_sol[n].subs(p, pp).evalf() for pp in p_vec])
x_num = np.array([x_num_sol(pp)[n] for pp in p_vec])
axes[0].plot(p_vec, (x_num - x_sym)/x_sym, 'k')
axes[0].set_title("Error in solution\n(numerical - symbolic)/symbolic")
axes[0].set_xlabel(r'$p$', fontsize=18)
axes[1].plot(p_vec, [Acond.subs(p, pp).evalf() for pp in p_vec])
axes[1].set_title("Condition number")
axes[1].set_xlabel(r'$p$', fontsize=18)
```
### Rectangular system
```python
x_vars = sympy.symbols("x_1, x_2, x_3")
A = sympy.Matrix([[1, 2, 3], [4, 5, 6]])
x = sympy.Matrix(x_vars)
b = sympy.Matrix([7, 8])
sympy.solve(A*x - b, x_vars)
```
### Least squares
```python
# define true model parameters
x = np.linspace(-1, 1, 100)
a, b, c = 1, 2, 3
y_exact = a + b * x + c * x**2
# Simulate noisy data
m = 100
X = 1 - 2 * np.random.rand(m)
Y = a + b * X + c * X**2 + np.random.randn(m)
# fit the data to the model using linear least square
A = np.vstack([X**0, X**1, X**2]) # see np.vander for alternative
```
```python
sol, r, rank, sv = la.lstsq(A.T, Y)
y_fit = sol[0] + sol[1] * x + sol[2] * x**2
fig, ax = plt.subplots(figsize=(12, 4))
ax.plot(X, Y, 'go', alpha=0.5, label='Simulated data')
ax.plot(x, y_exact, 'k', lw=2, label='True value $y = 1 + 2x + 3x^2$')
ax.plot(x, y_fit, 'b', lw=2, label='Least square fit')
ax.set_xlabel(r"$x$", fontsize=18)
ax.set_ylabel(r"$y$", fontsize=18)
ax.legend(loc=2)
```
```python
# fit the data to the model using linear least square:
# 1st order polynomial
A = np.vstack([X**n for n in range(2)])
sol, r, rank, sv = la.lstsq(A.T, Y)
y_fit1 = sum([s * x**n for n, s in enumerate(sol)])
```
```python
# 15th order polynomial
A = np.vstack([X**n for n in range(16)])
sol, r, rank, sv = la.lstsq(A.T, Y)
y_fit15 = sum([s * x**n for n, s in enumerate(sol)])
fig, ax = plt.subplots(figsize=(12, 4))
ax.plot(X, Y, 'go', alpha=0.5, label='Simulated data')
ax.plot(x, y_exact, 'k', lw=2, label='True value $y = 1 + 2x +3x^2$')
ax.plot(x, y_fit1, 'b', lw=2, label='Least square fit [1st order]')
ax.plot(x, y_fit15, 'm', lw=2, label='Least square fit [15th order]')
ax.set_xlabel(r"$x$", fontsize=18)
ax.set_ylabel(r"$y$", fontsize=18)
ax.legend(loc=2)
```
### Eigenvalues / Eigenvectors
```python
eps, delta = sympy.symbols("epsilon, Delta")
H = sympy.Matrix([[eps, delta], [delta, -eps]])
H
```
$\displaystyle \left[\begin{matrix}\epsilon & \Delta\\\Delta & - \epsilon\end{matrix}\right]$
```python
H.eigenvals()
```
```python
H.eigenvects()
```
$\displaystyle \left[ \left( - \sqrt{\Delta^{2} + \epsilon^{2}}, \ 1, \ \left[ \left[\begin{matrix}- \frac{\Delta}{\epsilon + \sqrt{\Delta^{2} + \epsilon^{2}}}\\1\end{matrix}\right]\right]\right), \ \left( \sqrt{\Delta^{2} + \epsilon^{2}}, \ 1, \ \left[ \left[\begin{matrix}- \frac{\Delta}{\epsilon - \sqrt{\Delta^{2} + \epsilon^{2}}}\\1\end{matrix}\right]\right]\right)\right]$
```python
(eval1, _, evec1), (eval2, _, evec2) = H.eigenvects()
# Orthogonality
sympy.simplify(evec1[0].T * evec2[0])
```
$\displaystyle \left[\begin{matrix}0\end{matrix}\right]$
```python
A = np.array([[1, 3, 5], [3, 5, 3], [5, 3, 9]])
evals, evecs = la.eig(A)
evals
```
array([13.35310908+0.j, -1.75902942+0.j, 3.40592034+0.j])
```python
evecs
```
array([[ 0.42663918, 0.90353276, -0.04009445],
[ 0.43751227, -0.24498225, -0.8651975 ],
[ 0.79155671, -0.35158534, 0.49982569]])
```python
la.eigvalsh(A)
```
array([-1.75902942, 3.40592034, 13.35310908])
### Nonlinear equations
```python
x, a, b, c = sympy.symbols("x, a, b, c")
sympy.solve(a + b*x + c*x**2, x)
```
```python
sympy.solve(a * sympy.cos(x) - b * sympy.sin(x), x)
```
```python
x = np.linspace(-2, 2, 1000)
# four examples of nonlinear functions
f1 = x**2 - x - 1
f2 = x**3 - 3 * np.sin(x)
f3 = np.exp(x) - 2
f4 = 1 - x**2 + np.sin(50 / (1 + x**2))
# plot each function
fig, axes = plt.subplots(1, 4, figsize=(12, 3), sharey=True)
for n, f in enumerate([f1, f2, f3, f4]):
axes[n].plot(x, f, lw=1.5)
axes[n].axhline(0, ls=':', color='k')
axes[n].set_ylim(-5, 5)
axes[n].set_xticks([-2, -1, 0, 1, 2])
axes[n].set_xlabel(r'$x$', fontsize=18)
axes[0].set_ylabel(r'$f(x)$', fontsize=18)
titles = [
r'$f(x)=x^2-x-1$',
r'$f(x)=x^3-3\sin(x)$',
r'$f(x)=\exp(x)-2$',
r'$f(x)=\sin\left(50/(1+x^2)\right)+1-x^2$'
]
for n, title in enumerate(titles):
axes[n].set_title(title)
```
| 92d4e4f0ef8cf72a53270c688c2146d6f6e720ae | 468,510 | ipynb | Jupyter Notebook | NumericalPython/4.EquationSolving.ipynb | nickovchinnikov/Computational-Science-and-Engineering | 45620e432c97fce68a24e2ade9210d30b341d2e4 | [
"MIT"
]
| 8 | 2021-01-14T08:00:23.000Z | 2022-01-31T14:00:11.000Z | NumericalPython/4.EquationSolving.ipynb | nickovchinnikov/Computational-Science-and-Engineering | 45620e432c97fce68a24e2ade9210d30b341d2e4 | [
"MIT"
]
| null | null | null | NumericalPython/4.EquationSolving.ipynb | nickovchinnikov/Computational-Science-and-Engineering | 45620e432c97fce68a24e2ade9210d30b341d2e4 | [
"MIT"
]
| 1 | 2022-01-25T15:21:40.000Z | 2022-01-25T15:21:40.000Z | 473.242424 | 60,786 | 0.760867 | true | 2,773 | Qwen/Qwen-72B | 1. YES
2. YES | 0.953275 | 0.953966 | 0.909392 | __label__eng_Latn | 0.222079 | 0.951156 |
# Visualizing the effect of $L_1/L_2$ regularization
We use a toy example with two weights $(w_0, w_1)$ to illustrate the effect $L_1$ and $L_2$ regularization has on the solution to a loss minimization problem.
## Table of Contents
1. [Draw the data loss and the L1/L2L1/L2 regularization curves](#Draw-the-data-loss-and-the-%24L_1%2FL_2%24-regularization-curves)
2. [Plot the training progress](#Plot-the-training-progress)
3. [L1L1 -norm regularization leads to "near-sparsity"](#%24L_1%24-norm-regularization-leads-to-%22near-sparsity%22)
4. [References](#References)
```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
import matplotlib.animation as animation
import matplotlib.patches as mpatches
from torch.autograd import Variable
import torch
import math
```
```python
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('pdf', 'png')
plt.rcParams['savefig.dpi'] = 75
plt.rcParams['figure.autolayout'] = False
plt.rcParams['figure.figsize'] = 10, 6
plt.rcParams['axes.labelsize'] = 18
plt.rcParams['axes.titlesize'] = 20
plt.rcParams['font.size'] = 16
plt.rcParams['lines.linewidth'] = 2.0
plt.rcParams['lines.markersize'] = 8
plt.rcParams['legend.fontsize'] = 14
# plt.rcParams['text.usetex'] = True
plt.rcParams['font.family'] = "Sans Serif"
plt.rcParams['font.serif'] = "cm"
```
## Draw the data loss and the $L_1/L_2$ regularization curves
We choose just a very simple convex loss function for illustration (in blue), which has its minimum at W=(3,2):
<!-- Equation labels as ordinary links -->
<div id="eq:loss"></div>
$$
\begin{equation}
loss(W) = 0.5(w_0-3)^2 + 2.5(w_1-2)^2
\label{eq:loss} \tag{1}
\end{equation}
$$
The L1-norm regularizer (aka lasso regression; lasso: least absolute shrinkage and selection operator):
<div id="eq:l1"></div>
$$
\begin{equation}
L_1(W) = \sum_{i=1}^{|W|} |w_i|
\label{eq:eq1} \tag{2}
\end{equation}
$$
The L2 regularizer (aka Ridge Regression and Tikhonov regularization) is the square of the L2-norm, not the L2-norm itself, and this little nuance is sometimes overlooked.
<div id="eq:l2"></div>
$$
\begin{equation}
L_2(W) = ||W||_2^2= \sum_{i=1}^{|W|} w_i^2
\label{eq:eq3} \tag{3}
\end{equation}
$$
```python
def loss_fn(W):
return 0.5*(W[0]-3)**2 + 2.5*(W[1]-2)**2
# L1 regularization
def L1_regularization(W):
return abs(W[0]) + abs(W[1])
def L2_regularization(W):
return np.sqrt(W[0]**2 + W[1]**2)
fig = plt.figure(figsize=(10,10))
ax = fig.gca(projection="3d")
xmesh, ymesh = np.mgrid[-3:9:50j,-2:6:50j]
loss_mesh = loss_fn(np.array([xmesh, ymesh]))
ax.plot_surface(xmesh, ymesh, loss_mesh);
l1_mesh = L1_regularization(np.array([xmesh, ymesh]))
ax.plot_surface(xmesh, ymesh, l1_mesh);
l2_mesh = L2_regularization(np.array([xmesh, ymesh]))
ax.plot_surface(xmesh, ymesh, l2_mesh);
```
## Plot the training progress
<br>
The diamond contour lines are the values of the L1 regularization. Since this is a contour diagram, all the points on a contour line have the same L1 value. <br>
In other words, for all points on a contour line:
$$L_1(W) = \left|w_0\right| + \left|w_1\right| == constant$$
This is called the L1-ball.<br>
L2-balls maintain the equation: $$L_2(W) = w_0^2 + w_1^2 == constant$$
<br>
The oval contour lines are the values of the data loss function. The regularized solution tries to find weights that satisfy both the data loss and the regularization loss.
<br><br>
```alpha``` and ```beta``` control the strength of the regularization loss versus the data loss.
To see how the regularizers act "in the wild", set ```alpha``` and ```beta``` to a high value like 10. The regularizers will then dominate the loss, and you will see how each of the regularizers acts.
<br>
Experiment with the value of alpha to see how it works.
```python
initial_guess = torch.Tensor([8,5])
W = Variable(initial_guess, requires_grad=True)
W_l1_reg = Variable(initial_guess.clone(), requires_grad=True)
W_l2_reg = Variable(initial_guess.clone(), requires_grad=True)
def L1_regularization(W):
return W.norm(1)
def L2_regularization(W):
return W.pow(2).sum()
lr = 0.04
alpha = 0.75 # 1.5 # 4 # 0.4
beta = 0.75
num_steps = 1000
def train(W, lr, alpha, beta, num_steps):
guesses = []
for i in range(num_steps):
# Zero the gradients of the weights
if W.grad is not None:
W.grad.data.zero_()
# Compute the loss and the gradients of W
loss = loss_fn(W) + alpha * L1_regularization(W) + beta * L2_regularization(W)
loss.backward()
# Update W
W.data = W.data - lr * W.grad.data
guesses.append(W.data.numpy())
return guesses
# Train the weights without regularization
guesses = train(W, lr, alpha=0, beta=0, num_steps=num_steps)
# ...and with L1 regularization
guesses_l1_reg = train(W_l1_reg, lr, alpha=alpha, beta=0, num_steps=num_steps)
guesses_l2_reg = train(W_l2_reg, lr, alpha=0, beta=beta, num_steps=num_steps)
fig = plt.figure(figsize=(15,10))
plt.axis("equal")
# Draw the contour maps of the data-loss and regularization loss
CS = plt.contour(xmesh, ymesh, loss_mesh, 10, cmap=plt.cm.bone)
# Draw the L1-balls
CS2 = plt.contour(xmesh, ymesh, l1_mesh, 10, linestyles='dashed', levels=list(range(5)));
# Draw the L2-balls
CS3 = plt.contour(xmesh, ymesh, l2_mesh, 10, linestyles='dashed', levels=list(range(5)));
# Add green contour lines near the loss minimum
CS4 = plt.contour(CS, levels=[0.25, 0.5], colors='g')
# Place a green dot at the data loss minimum, and an orange dot at the origin
plt.scatter(3,2, color='g')
plt.scatter(0,0, color='r')
# Color bars and labels
plt.xlabel("W[0]")
plt.ylabel("W[1]")
CB = plt.colorbar(CS, label="data loss", shrink=0.8, extend='both')
CB2 = plt.colorbar(CS2, label="reg loss", shrink=0.8, extend='both')
# Label the contour lines
plt.clabel(CS, fmt = '%2d', colors = 'k', fontsize=14) #contour line labels
plt.clabel(CS2, fmt = '%2d', colors = 'red', fontsize=14) #contour line labels
# Plot the two sets of weights (green are weights w/o regularization; red are L1; blue are L2)
it_array = np.array(guesses)
unregularized = plt.plot(it_array.T[0], it_array.T[1], "o", color='g')
it_array = np.array(guesses_l1_reg)
l1 = plt.plot(it_array.T[0], it_array.T[1], "+", color='r')
it_array = np.array(guesses_l2_reg)
l2 = plt.plot(it_array.T[0], it_array.T[1], "+", color='b')
# Legends require a proxy artists in this case
unregularized = mpatches.Patch(color='g', label='unregularized')
l1 = mpatches.Patch(color='r', label='L1')
l2 = mpatches.Patch(color='b', label='L2')
plt.legend(handles=[unregularized, l1, l2])
# Finally add the axes, so we can see how far we are from the sparse solution.
plt.axhline(0, color='orange')
plt.axvline(0, color='orange')
print("solution: loss(%.3f, %.3f)=%.3f" % (W.data[0], W.data[1], loss_fn(W)))
print("solution: l1_loss(%.3f, %.3f)=%.3f" % (W.data[0], W.data[1], L1_regularization(W)))
print("regularized solution: loss(%.3f, %.3f)=%.3f" % (W_l1_reg.data[0], W_l1_reg.data[1], loss_fn(W_l1_reg)))
print("regularized solution: l1_loss(%.3f, %.3f)=%.3f" % (W_l1_reg.data[0], W_l1_reg.data[1], L1_regularization(W_l1_reg)))
print("regularized solution: l2_loss(%.3f, %.3f)=%.3f" % (W_l2_reg.data[0], W_l2_reg.data[1], L2_regularization(W_l2_reg)))
```
## $L_1$-norm regularization leads to "near-sparsity"
$L_1$-norm regularization is often touted as sparsity inducing, but it actually creates solutions that oscillate around 0, not exactly 0 as we'd like. <br>
To demonstrate this, we redefine our toy loss function so that the optimal solution for $w_0$ is close to 0 (0.3).
```python
def loss_fn(W):
return 0.5*(W[0]-0.3)**2 + 2.5*(W[1]-2)**2
# Train again
W = Variable(initial_guess, requires_grad=True)
guesses_l1_reg = train(W, lr, alpha=alpha, beta=0, num_steps=num_steps)
```
When we draw the progress of the weight training, we see that $W_0$ is gravitating towards zero.<br>
```python
# Draw the contour maps of the data-loss and regularization loss
CS = plt.contour(xmesh, ymesh, loss_mesh, 10, cmap=plt.cm.bone)
# Plot the progress of the training process
it_array = np.array(guesses_l1_reg)
l1 = plt.plot(it_array.T[0], it_array.T[1], "+", color='r')
# Finally add the axes, so we can see how far we are from the sparse solution.
plt.axhline(0, color='orange')
plt.axvline(0, color='orange');
plt.xlabel("W[0]")
plt.ylabel("W[1]")
```
But if we look closer at what happens to $w_0$ in the last 100 steps of the training, we see that it oscillates around 0, but never quite lands there. Why?<br>
Well, $dL_1/dw_0$ is a constant (```lr * alpha``` in our case), so the weight update step:<br>
```W.data = W.data - lr * W.grad.data``` <br>
can be expanded to <br>
```W.data = W.data - lr * (alpha + dloss_fn(W)/dW0)``` where ```dloss_fn(W)/dW0``` <br>is the gradient of loss_fn(W) with respect to $w_0$. <br>
The oscillations are not constant (although they do have a rhythm) because they are influenced by this latter loss.
```python
it_array = np.array(guesses_l1_reg[int(0.9*num_steps):])
for i in range(len(it_array)):
print("%.4f\t(diff=%.4f)" % (it_array.T[0][i], abs(it_array.T[0][i]-it_array.T[0][i-1])))
```
## References
<div id="Goodfellow-et-al-2016"></div> **Ian Goodfellow and Yoshua Bengio and Aaron Courville**.
[*Deep Learning*](http://www.deeplearningbook.org),
MIT Press,
2016.
| 564acbbab72fbf30b78997a24947715e5f81fcf8 | 13,556 | ipynb | Jupyter Notebook | jupyter/L1-regularization.ipynb | tufei/distiller | c0e45da21ecc3244d4c0b8a854090ffcbae1d12b | [
"Apache-2.0"
]
| 94 | 2019-01-30T18:18:12.000Z | 2022-03-07T23:47:13.000Z | jupyter/L1-regularization.ipynb | tufei/distiller | c0e45da21ecc3244d4c0b8a854090ffcbae1d12b | [
"Apache-2.0"
]
| 4 | 2020-09-26T00:53:47.000Z | 2022-02-10T01:23:34.000Z | code/jupyter/L1-regularization.ipynb | sinreq-learn/sinreq-learn.code | a205d3fa22a41d5f4fc1ef1e5698c4f1dbb11e6a | [
"BSD-4-Clause-UC"
]
| 23 | 2019-04-19T09:38:14.000Z | 2022-01-24T04:45:23.000Z | 35.302083 | 212 | 0.57067 | true | 2,937 | Qwen/Qwen-72B | 1. YES
2. YES | 0.903294 | 0.879147 | 0.794128 | __label__eng_Latn | 0.868282 | 0.683358 |
<!-- dom:TITLE: Learning from data: Neural networks, from the simple perceptron to deep learning -->
# Learning from data: Neural networks, from the simple perceptron to deep learning
<!-- dom:AUTHOR: Christian Forssén at Department of Physics, Chalmers University of Technology, Sweden -->
<!-- Author: -->
**Christian Forssén**, Department of Physics, Chalmers University of Technology, Sweden
<!-- dom:AUTHOR: Morten Hjorth-Jensen at Department of Physics, University of Oslo & Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University -->
<!-- Author: --> **Morten Hjorth-Jensen**, Department of Physics, University of Oslo and Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University
Date: **Oct 15, 2019**
Copyright 2018-2019, Christian Forssén. Released under CC Attribution-NonCommercial 4.0 license
# Neural networks
Artificial neural networks are computational systems that can learn to
perform tasks by considering examples, generally without being
programmed with any task-specific rules. It is supposed to mimic a
biological system, wherein neurons interact by sending signals in the
form of mathematical functions between layers. All layers can contain
an arbitrary number of neurons, and each connection is represented by
a weight variable.
## Terminology
Each time we describe a neural network algorithm we will typically specify three things.
Architecture:
:
The architecture specifies what variables are involved in the network and their topological relationships – for example, the variables involved in a neural net might be the weights of the connections between the neurons, along with the activities of the neurons.
Activity rule:
:
Most neural network models have short time-scale dynamics: local rules define how the activities of the neurons change in response to each other. Typically the activity rule depends on the weights (the parameters) in the network.
Learning rule:
:
The learning rule specifies the way in which the neural network’s weights change with time. This learning is usually viewed as taking place on a longer time scale than the time scale of the dynamics under the activity rule. Usually the learning rule will depend on the activities of the neurons. It may also depend on the values of target values supplied by a teacher and on the current value of the weights.
## Artificial neurons
The field of artificial neural networks has a long history of
development, and is closely connected with the advancement of computer
science and computers in general. A model of artificial neurons was
first developed by McCulloch and Pitts in 1943 to study signal
processing in the brain and has later been refined by others. The
general idea is to mimic neural networks in the human brain, which is
composed of billions of neurons that communicate with each other by
sending electrical signals. Each neuron accumulates its incoming
signals, which must exceed an activation threshold to yield an
output. If the threshold is not overcome, the neuron remains inactive,
i.e. has zero output.
This behaviour has inspired a simple mathematical model for an artificial neuron.
<!-- Equation labels as ordinary links -->
<div id="artificialNeuron"></div>
$$
\begin{equation}
y = f\left(\sum_{j=1}^n w_jx_j + b \right) = f(z),
\label{artificialNeuron} \tag{1}
\end{equation}
$$
where the bias $b$ is sometimes denoted $w_0$.
Here, the output $y$ of the neuron is the value of its activation function, which has as input
a weighted sum of signals $x_1, \dots ,x_n$ received by $n$ other neurons.
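A minimal NumPy sketch of this model (the sigmoid used here is just one possible choice of activation function, introduced further below):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(x, w, b, f=sigmoid):
    """Single artificial neuron: y = f(sum_j w_j x_j + b)."""
    z = np.dot(w, x) + b      # weighted sum of the incoming signals plus bias
    return f(z)

# small usage example with n = 3 incoming signals (illustrative numbers)
x = np.array([0.5, -1.2, 2.0])
w = np.array([0.1, 0.4, -0.3])
print(neuron_output(x, w, b=0.2))
```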
Conceptually, it is helpful to divide neural networks into four
categories:
1. general purpose neural networks, including deep neural networks (DNN) with several hidden layers, for supervised learning,
2. neural networks designed specifically for image processing, the most prominent example of this class being Convolutional Neural Networks (CNNs),
3. neural networks for sequential data such as Recurrent Neural Networks (RNNs), and
4. neural networks for unsupervised learning such as Deep Boltzmann Machines.
In natural science, DNNs and CNNs have already found numerous
applications. In statistical physics, they have been applied to detect
phase transitions in 2D Ising and Potts models, lattice gauge
theories, and different phases of polymers, or solving the
Navier-Stokes equation in weather forecasting. Deep learning has also
found interesting applications in quantum physics. Various quantum
phase transitions can be detected and studied using DNNs and CNNs:
topological phases, and even non-equilibrium many-body
localization.
In quantum information theory, it has been shown that one can perform
gate decompositions with the help of neural networks.
The applications are not limited to the natural sciences. There is a
plethora of applications in essentially all disciplines, from the
humanities to life science and medicine.
## Neural network types
An artificial neural network (ANN), is a computational model that
consists of layers of connected neurons, or nodes or units. We will
refer to these interchangeably as units or nodes, and sometimes as
neurons.
It is supposed to mimic a biological nervous system by letting each
neuron interact with other neurons by sending signals in the form of
mathematical functions between layers. A wide variety of different
ANNs have been developed, but most of them consist of an input layer,
an output layer and eventual layers in-between, called *hidden
layers*. All layers can contain an arbitrary number of nodes, and each
connection between two nodes is associated with a weight variable.
Neural networks (also called neural nets) are neural-inspired
nonlinear models for supervised learning. As we will see, neural nets
can be viewed as natural, more powerful extensions of supervised
learning methods such as linear and logistic regression and soft-max
methods we discussed earlier.
### Feed-forward neural networks
The feed-forward neural network (FFNN) was the first and simplest type
of ANNs that were devised. In this network, the information moves in
only one direction: forward through the layers.
Nodes are represented by circles, while the arrows display the
connections between the nodes, including the direction of information
flow. Additionally, each arrow corresponds to a weight variable
(figure to come). We observe that each node in a layer is connected
to *all* nodes in the subsequent layer, making this a so-called
*fully-connected* FFNN.
### Convolutional Neural Network
A different variant of FFNNs are *convolutional neural networks*
(CNNs), which have a connectivity pattern inspired by the animal
visual cortex. Individual neurons in the visual cortex only respond to
stimuli from small sub-regions of the visual field, called a receptive
field. This makes the neurons well-suited to exploit the strong
spatially local correlation present in natural images. The response of
each neuron can be approximated mathematically as a convolution
operation. (figure to come)
Convolutional neural networks emulate the behaviour of neurons in the
visual cortex by enforcing a *local* connectivity pattern between
nodes of adjacent layers: Each node in a convolutional layer is
connected only to a subset of the nodes in the previous layer, in
contrast to the fully-connected FFNN. Often, CNNs consist of several
convolutional layers that learn local features of the input, with a
fully-connected layer at the end, which gathers all the local data and
produces the outputs. They have wide applications in image and video
recognition.
### Recurrent neural networks
So far we have only mentioned ANNs where information flows in one
direction: forward. *Recurrent neural networks* on the other hand,
have connections between nodes that form directed *cycles*. This
creates a form of internal memory which are able to capture
information on what has been calculated before; the output is
dependent on the previous computations. Recurrent NNs make use of
sequential information by performing the same task for every element
in a sequence, where each element depends on previous elements. An
example of such information is sentences, making recurrent NNs
especially well-suited for handwriting and speech recognition.
### Other types of networks
There are many other kinds of ANNs that have been developed. One type
that is specifically designed for interpolation in multidimensional
space is the radial basis function (RBF) network. RBFs are typically
made up of three layers: an input layer, a hidden layer with
non-linear radial symmetric activation functions and a linear output
layer (''linear'' here means that each node in the output layer has a
linear activation function). The layers are normally fully-connected
and there are no cycles, thus RBFs can be viewed as a type of
fully-connected FFNN. They are however usually treated as a separate
type of NN due the unusual activation functions.
## Multilayer perceptrons
The *multilayer perceptron* (MLP) is a very popular, and easy to implement, approach to deep learning. It consists of
1. a neural network with one or more layers of nodes between the input and the output nodes.
2. the multilayer network structure, or architecture, or topology, consists of an input layer, one or more hidden layers, and one output layer.
3. the input nodes pass values to the first hidden layer, its nodes pass the information on to the second and so on till we reach the output layer.
As a convention it is normal to call a network with one layer of input units, one layer of hidden units and one layer of output units a two-layer network. A network with two layers of hidden units is called a three-layer network etc.
The number of input nodes does not need to equal the number of output
nodes. This applies also to the hidden layers. Each layer may have its
own number of nodes and activation functions.
The hidden layers have their name from the fact that they are not
linked to observables and as we will see below when we define the
so-called activation $\boldsymbol{z}$, we can think of this as a basis
expansion of the original inputs $\boldsymbol{x}$.
### Why multilayer perceptrons?
According to the [universal approximation
theorem](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.441.7873&rep=rep1&type=pdf), a feed-forward
neural network with just a single hidden layer containing a finite
number of neurons can approximate a continuous multidimensional
function to arbitrary accuracy, assuming the activation function for
the hidden layer is a **non-constant, bounded and
monotonically-increasing continuous function**. The theorem thus
states that simple neural networks can represent a wide variety of
interesting functions when given appropriate parameters. It is the
multilayer feedforward architecture itself which gives neural networks
the potential of being universal approximators.
Note that the requirements on the activation function only applies to
the hidden layer, the output nodes are always assumed to be linear, so
as to not restrict the range of output values.
### Mathematical model
The output $y$ is produced via the activation function $f$
$$
y = f\left(\sum_{i=1}^n w_ix_i + b \right) = f(z),
$$
This function receives $x_i$ as inputs.
Here the activation $z=(\sum_{i=1}^n w_ix_i+b)$.
In an FFNN of such neurons, the *inputs* $x_i$ are the *outputs* of
the neurons in the preceding layer. Furthermore, an MLP is
fully-connected, which means that each neuron receives a weighted sum
of the outputs of *all* neurons in the previous layer.
First, for each node $j$ in the first hidden layer, we calculate a weighted sum $z_j^1$ of the input coordinates $x_i$,
<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>
$$
\begin{equation} z_j^1 = \sum_{i=1}^{n} w_{ji}^1 x_i + b_j^1
\label{_auto1} \tag{2}
\end{equation}
$$
Here $b_j^1$ is the so-called bias which is normally needed in
case of zero activation weights or inputs. How to fix the biases and
the weights will be discussed below. The value of $z_j^1$ is the
argument to the activation function $f$ of each node $j$, The
variable $n$ stands for all possible inputs to a given node $j$ in the
first layer. We define the output $y_j^1$ of all neurons in layer 1 as
<!-- Equation labels as ordinary links -->
<div id="outputLayer1"></div>
$$
\begin{equation}
y_j^1 = f(z_j^1) = f\left(\sum_{i=1}^n w_{ji}^1 x_i + b_j^1\right),
\label{outputLayer1} \tag{3}
\end{equation}
$$
where we assume that all nodes in the same layer have identical
activation functions, hence the notation $f$. In the more general case, different layers may have different activation functions.
In this case we would identify these functions with a superscript $l$ for the $l$-th layer,
<!-- Equation labels as ordinary links -->
<div id="generalLayer"></div>
$$
\begin{equation}
y_i^l = f^l(z_i^l) = f^l\left(\sum_{j=1}^{N_{l-1}} w_{ij}^l y_j^{l-1} + b_i^l\right),
\label{generalLayer} \tag{4}
\end{equation}
$$
where $N_{l-1}$ is the number of nodes in layer $l-1$. When the output of
all the nodes in the first hidden layer are computed, the values of
the subsequent layer can be calculated and so forth until the output
is obtained.
The output of neuron $i$ in layer 2 is thus,
<!-- Equation labels as ordinary links -->
<div id="_auto2"></div>
$$
\begin{equation}
y_i^2 = f^2\left(\sum_{j=1}^N w_{ij}^2 y_j^1 + b_i^2\right)
\label{_auto2} \tag{5}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="outputLayer2"></div>
$$
\begin{equation}
= f^2\left[\sum_{j=1}^N w_{ij}^2f^1\left(\sum_{k=1}^M w_{jk}^1 x_k + b_j^1\right) + b_i^2\right]
\label{outputLayer2} \tag{6}
\end{equation}
$$
where we have substituted $y_k^1$ with the inputs $x_k$. Finally, the ANN output reads
<!-- Equation labels as ordinary links -->
<div id="_auto3"></div>
$$
\begin{equation}
y_i^3 = f^3\left(\sum_{j=1}^N w_{ij}^3 y_j^2 + b_i^3\right)
\label{_auto3} \tag{7}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto4"></div>
$$
\begin{equation}
= f^3\left[\sum_{j} w_{ij}^3 f^2\left(\sum_{k} w_{jk}^2 f^1\left(\sum_{m} w_{km}^1 x_m + b_k^1\right) + b_j^2\right)
+ b_1^3\right]
\label{_auto4} \tag{8}
\end{equation}
$$
We can generalize this expression to an MLP with $L$ hidden
layers. The complete functional form is,
<!-- Equation labels as ordinary links -->
<div id="completeNN"></div>
$$
\begin{equation}
y^{L+1}_i = f^{L+1}\left[\!\sum_{j=1}^{N_L} w_{ij}^{L+1} f^L \left(\sum_{k=1}^{N_{L-1}}w_{jk}^{L}\left(\dots f^1\left(\sum_{n=1}^{N_0} w_{mn}^1 x_n+ b_m^1\right)\dots\right)+b_j^{L}\right)+b_i^{L+1}\right]
\label{completeNN} \tag{9}
\end{equation}
$$
which illustrates a basic property of MLPs: The only independent
variables are the input values $x_n$.
This confirms that an MLP, despite its quite convoluted mathematical
form, is nothing more than an analytic function, specifically a
mapping of real-valued vectors $\boldsymbol{x} \in \mathbb{R}^n \rightarrow
\boldsymbol{y} \in \mathbb{R}^m$.
Furthermore, the flexibility and universality of an MLP can be
illustrated by realizing that the expression is essentially a nested
sum of scaled activation functions of the form
<!-- Equation labels as ordinary links -->
<div id="_auto5"></div>
$$
\begin{equation}
f(x) = c_1 f(c_2 x + c_3) + c_4,
\label{_auto5} \tag{10}
\end{equation}
$$
where the parameters $c_i$ are weights and biases. By adjusting these
parameters, the activation functions can be shifted up and down or
left and right, change slope or be rescaled which is the key to the
flexibility of a neural network.
### Matrix-vector notation
We can introduce a more convenient notation for the activations in an ANN.
Additionally, we can represent the biases and activations
as layer-wise column vectors $\boldsymbol{b}_l$ and $\boldsymbol{y}_l$, so that the $i$-th element of each vector
is the bias $b_i^l$ and activation $y_i^l$ of node $i$ in layer $l$ respectively.
We have that $\boldsymbol{W}_l$ is an $N_l \times N_{l-1}$ matrix, while $\boldsymbol{b}_l$ and $\boldsymbol{y}_l$ are $N_l \times 1$ column vectors.
With this notation, the sum becomes a matrix-vector multiplication, and we can write
the equation for the activations of hidden layer 2 (assuming three nodes for simplicity) as
<!-- Equation labels as ordinary links -->
<div id="_auto6"></div>
$$
\begin{equation}
\boldsymbol{y}_2 = f_2(\boldsymbol{W}_2 \boldsymbol{y}_{1} + \boldsymbol{b}_{2}) =
f_2\left(\left[\begin{array}{ccc}
w^2_{11} &w^2_{12} &w^2_{13} \\
w^2_{21} &w^2_{22} &w^2_{23} \\
w^2_{31} &w^2_{32} &w^2_{33} \\
\end{array} \right] \cdot
\left[\begin{array}{c}
y^1_1 \\
y^1_2 \\
y^1_3 \\
\end{array}\right] +
\left[\begin{array}{c}
b^2_1 \\
b^2_2 \\
b^2_3 \\
\end{array}\right]\right).
\label{_auto6} \tag{11}
\end{equation}
$$
### Matrix-vector notation and activation
The activation of node $i$ in layer 2 is
<!-- Equation labels as ordinary links -->
<div id="_auto7"></div>
$$
\begin{equation}
y^2_i = f_2\Bigr(w^2_{i1}y^1_1 + w^2_{i2}y^1_2 + w^2_{i3}y^1_3 + b^2_i\Bigr) =
f_2\left(\sum_{j=1}^3 w^2_{ij} y_j^1 + b^2_i\right).
\label{_auto7} \tag{12}
\end{equation}
$$
This is not just a convenient and compact notation, but also a useful
and intuitive way to think about MLPs: The output is calculated by a
series of matrix-vector multiplications and vector additions that are
used as input to the activation functions. For each operation
$\mathrm{W}_l \boldsymbol{y}_{l-1}$ we move forward one layer.
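To make this concrete, here is a minimal numpy sketch of such a feed-forward pass; the layer sizes, random weights and choice of a sigmoid activation are illustrative assumptions only.
```python
# Minimal feed-forward pass: one matrix-vector product and one activation per layer.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
layer_sizes = [4, 3, 3, 2]  # made-up input, hidden and output layer sizes
weights = [rng.normal(size=(n, m)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [rng.normal(size=(n, 1)) for n in layer_sizes[1:]]

def feed_forward(x):
    a = x.reshape(-1, 1)
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)  # y_l = f(W_l y_{l-1} + b_l)
    return a

print(feed_forward(rng.normal(size=4)))
```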
### Activation functions
A property that characterizes a neural network, other than its
connectivity, is the choice of activation function(s). The following restrictions are imposed on an activation function for an FFNN to fulfill the universal approximation theorem:
* Non-constant
* Bounded
* Monotonically-increasing
* Continuous
**Logistic and Hyperbolic activation functions**
The second requirement excludes all linear functions. Furthermore, in
an MLP with only linear activation functions, each layer simply
performs a linear transformation of its inputs.
Regardless of the number of layers, the output of the NN will be
nothing but a linear function of the inputs. Thus we need to introduce
some kind of non-linearity to the NN to be able to fit non-linear
functions. Typical examples are the logistic *Sigmoid*
$$
f_\mathrm{sigmoid}(z) = \frac{1}{1 + e^{-z}},
$$
and the *hyperbolic tangent* function
$$
f_\mathrm{tanh}(z) = \tanh(z)
$$
**Rectifier activation functions**
The Rectifier Linear Unit (ReLU) uses the following activation function
$$
f_\mathrm{ReLU}(z) = \max(0,z).
$$
To address the problem of dying ReLU neurons, practitioners often use a variant of the ReLU
function, such as the leaky ReLU or the so-called
exponential linear unit (ELU) function
$$
f_\mathrm{ELU}(z) = \left\{\begin{array}{cc} \alpha\left( \exp{(z)}-1\right) & z < 0,\\ z & z \ge 0.\end{array}\right.
$$
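For reference, simple numpy implementations of the activation functions mentioned here could look as follows (a sketch; the default parameter values are assumptions).
```python
# Sketches of the activation functions discussed above.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    return np.tanh(z)

def relu(z):
    return np.maximum(0.0, z)

def leaky_relu(z, alpha=0.01):
    return np.where(z < 0.0, alpha * z, z)

def elu(z, alpha=1.0):
    return np.where(z < 0.0, alpha * (np.exp(z) - 1.0), z)
```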
### Relevance
The *sigmoid* function is often considered more biologically plausible because the
output of inactive neurons is zero. Such activation functions are
called *one-sided*. However, it has been shown that the hyperbolic
tangent performs better than the sigmoid for training MLPs, and it has
become the more popular of the two for *deep neural networks*.
## Deriving the back propagation code for a multilayer perceptron model
Note: figures will be inserted later!
As we have seen the final output of a feed-forward network can be expressed in terms of basic matrix-vector multiplications.
The unknown quantities are our weights $w_{ij}$, and we need to find an algorithm for changing them so that our errors are as small as possible.
This leads us to the famous [back propagation algorithm](https://www.nature.com/articles/323533a0).
The questions we want to ask are how do changes in the biases and the
weights in our network change the cost function and how can we use the
final output to modify the weights?
To derive these equations let us start with a plain regression problem
and define our cost function as
$$
{\cal C}(\boldsymbol{W}) = \frac{1}{2}\sum_{i=1}^n\left(y_i - t_i\right)^2,
$$
where the $t_i$s are our $n$ targets (the values we want to
reproduce), while the outputs of the network after having propagated
all inputs $\boldsymbol{x}$ are given by $y_i$. Other cost functions can also be considered.
### Definitions
With our definition of the targets $\boldsymbol{t}$, the outputs of the
network $\boldsymbol{y}$ and the inputs $\boldsymbol{x}$ we
define now the activation $z_j^l$ of node/neuron/unit $j$ of the
$l$-th layer as a function of the bias, the weights which add up from
the previous layer $l-1$ and the forward passes/outputs
$\boldsymbol{a}^{l-1}$ from the previous layer as
$$
z_j^l = \sum_{i=1}^{M_{l-1}}w_{ij}^la_i^{l-1}+b_j^l,
$$
where $b_j^l$ are the biases from layer $l$. Here $M_{l-1}$
represents the total number of nodes/neurons/units of layer $l-1$. The
figure here illustrates this equation. We can rewrite this in a more
compact form as the matrix-vector products we discussed earlier,
$$
\boldsymbol{z}^l = \left(\boldsymbol{W}^l\right)^T\boldsymbol{a}^{l-1}+\boldsymbol{b}^l.
$$
With the activation values $\boldsymbol{z}^l$ we can in turn define the
output of layer $l$ as $\boldsymbol{a}^l = f(\boldsymbol{z}^l)$ where $f$ is our
activation function. In the examples here we will use the sigmoid
function discussed in the logistic regression lecture. We will also use the same activation function $f$ for all layers
and their nodes. It means we have
$$
a_j^l = f(z_j^l) = \frac{1}{1+\exp{(-z_j^l)}}.
$$
### Derivatives and the chain rule
From the definition of the activation $z_j^l$ we have
$$
\frac{\partial z_j^l}{\partial w_{ij}^l} = a_i^{l-1},
$$
and
$$
\frac{\partial z_j^l}{\partial a_i^{l-1}} = w_{ji}^l.
$$
With our definition of the activation function we have (note that this function depends only on $z_j^l$)
$$
\frac{\partial a_j^l}{\partial z_j^{l}} = a_j^l(1-a_j^l)=f(z_j^l) \left[ 1-f(z_j^l) \right].
$$
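A quick numerical sanity check of this identity (purely illustrative) can be done with a central finite difference:
```python
# Check numerically that the sigmoid satisfies f'(z) = f(z)(1 - f(z)).
import numpy as np

def f(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-5, 5, 11)
h = 1e-6
numerical = (f(z + h) - f(z - h)) / (2 * h)  # central difference approximation
analytic = f(z) * (1 - f(z))
print(np.max(np.abs(numerical - analytic)))  # should be very close to zero
```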
### Derivative of the cost function
With these definitions we can now compute the derivative of the cost function in terms of the weights.
Let us specialize to the output layer $l=L$. Our cost function is
$$
{\cal C}(\boldsymbol{W^L}) = \frac{1}{2}\sum_{i=1}^n\left(y_i - t_i\right)^2=\frac{1}{2}\sum_{i=1}^n\left(a_i^L - t_i\right)^2,
$$
The derivative of this function with respect to the weights is
$$
\frac{\partial{\cal C}(\boldsymbol{W^L})}{\partial w_{jk}^L} = \left(a_j^L - t_j\right)\frac{\partial a_j^L}{\partial w_{jk}^{L}},
$$
The last partial derivative can easily be computed and reads (by applying the chain rule)
$$
\frac{\partial a_j^L}{\partial w_{jk}^{L}} = \frac{\partial a_j^L}{\partial z_{j}^{L}}\frac{\partial z_j^L}{\partial w_{jk}^{L}}=a_j^L(1-a_j^L)a_k^{L-1},
$$
### Bringing it together, first back propagation equation
We have thus
$$
\frac{\partial{\cal C}(\boldsymbol{W^L})}{\partial w_{jk}^L} = \left(a_j^L - t_j\right)a_j^L(1-a_j^L)a_k^{L-1},
$$
Defining
$$
\delta_j^L = a_j^L(1-a_j^L)\left(a_j^L - t_j\right) = f'(z_j^L)\frac{\partial {\cal C}}{\partial (a_j^L)},
$$
and using the Hadamard product of two vectors we can write this as
$$
\boldsymbol{\delta}^L = f'(\boldsymbol{z}^L)\circ\frac{\partial {\cal C}}{\partial (\boldsymbol{a}^L)}.
$$
This is an important expression. The second term on the right-hand side
measures how fast the cost function is changing as a function of the $j$th
output activation. If, for example, the cost function doesn't depend
much on a particular output node $j$, then $\delta_j^L$ will be small,
which is what we would expect. The first term on the right measures
how fast the activation function $f$ is changing at a given activation
value $z_j^L$.
Notice that everything in the above equations is easily computed. In
particular, we compute $z_j^L$ while computing the behaviour of the
network, and it is only a small additional overhead to compute
$f'(z^L_j)$. The exact form of the derivative with respect to the
output depends on the form of the cost function.
However, provided the cost function is known there should be little
trouble in calculating
$$
\frac{\partial {\cal C}}{\partial (a_j^L)}
$$
With the definition of $\delta_j^L$ we have a more compact definition of the derivative of the cost function in terms of the weights, namely
$$
\frac{\partial{\cal C}(\boldsymbol{W^L})}{\partial w_{jk}^L} = \delta_j^La_k^{L-1}.
$$
### Derivatives in terms of $z_j^L$
It is also easy to see that our previous equation can be written as
$$
\delta_j^L =\frac{\partial {\cal C}}{\partial z_j^L}= \frac{\partial {\cal C}}{\partial a_j^L}\frac{\partial a_j^L}{\partial z_j^L},
$$
which can also be interpreted as the partial derivative of the cost function with respect to the biases $b_j^L$, namely
$$
\delta_j^L = \frac{\partial {\cal C}}{\partial b_j^L}\frac{\partial b_j^L}{\partial z_j^L}=\frac{\partial {\cal C}}{\partial b_j^L},
$$
That is, the error $\delta_j^L$ is exactly equal to the rate of change of the cost function as a function of the bias.
We have now three equations that are essential for the computations of the derivatives of the cost function at the output layer. These equations are needed to start the algorithm and they are
**The starting equations.**
<!-- Equation labels as ordinary links -->
<div id="_auto8"></div>
$$
\begin{equation}
\frac{\partial{\cal C}(\boldsymbol{W^L})}{\partial w_{jk}^L} = \delta_j^La_k^{L-1},
\label{_auto8} \tag{13}
\end{equation}
$$
and
<!-- Equation labels as ordinary links -->
<div id="_auto9"></div>
$$
\begin{equation}
\delta_j^L = f'(z_j^L)\frac{\partial {\cal C}}{\partial (a_j^L)},
\label{_auto9} \tag{14}
\end{equation}
$$
and
<!-- Equation labels as ordinary links -->
<div id="_auto10"></div>
$$
\begin{equation}
\delta_j^L = \frac{\partial {\cal C}}{\partial b_j^L},
\label{_auto10} \tag{15}
\end{equation}
$$
An interesting consequence of the above equations is that when the
activation $a_k^{L-1}$ is small, the gradient term, that is the
derivative of the cost function with respect to the weights, will also
tend to be small. We then say that the weight learns slowly, meaning
that it changes only slowly when we minimize the cost function via, say,
gradient descent.
Another interesting feature arises when the activation function,
represented by the sigmoid function here, becomes rather flat as we move towards
its end values $0$ and $1$. In these
cases, the derivatives of the activation function will also be close
to zero, meaning again that the gradients will be small and the
network learns slowly.
We need a fourth equation and we are set. We are going to propagate
backwards in order to determine the weights and biases. In order
to do so we need to represent the error in the layer before the final
one $L-1$ in terms of the errors in the final output layer.
### Final back-propagating equation
We have that (replacing $L$ with a general layer $l$)
$$
\delta_j^l =\frac{\partial {\cal C}}{\partial z_j^l}.
$$
We want to express this in terms of the equations for layer $l+1$. Using the chain rule and summing over all $k$ entries we have
$$
\delta_j^l =\sum_k \frac{\partial {\cal C}}{\partial z_k^{l+1}}\frac{\partial z_k^{l+1}}{\partial z_j^{l}}=\sum_k \delta_k^{l+1}\frac{\partial z_k^{l+1}}{\partial z_j^{l}},
$$
and recalling that
$$
z_j^{l+1} = \sum_{i=1}^{M_{l}}w_{ij}^{l+1}a_i^{l}+b_j^{l+1},
$$
with $M_l$ being the number of nodes in layer $l$, we obtain
$$
\delta_j^l =\sum_k \delta_k^{l+1}w_{kj}^{l+1}f'(z_j^l),
$$
This is our final equation.
We are now ready to set up the algorithm for back propagation and learning the weights and biases.
## Setting up the back-propagation algorithm
The four equations provide us with a way of computing the gradient of the cost function. Let us write this out in the form of an algorithm.
**Summary.**
* First, we set up the input data $\boldsymbol{x}$ and the activations $\boldsymbol{z}_1$ of the input layer and compute the activation function and the outputs $\boldsymbol{a}^1$.
* Secondly, perform the feed-forward until we reach the output layer. I.e., compute all activation functions and the pertinent outputs $\boldsymbol{a}^l$ for $l=2,3,\dots,L$.
* Compute the output error $\boldsymbol{\delta}^L$ by
$$
\delta_j^L = f'(z_j^L)\frac{\partial {\cal C}}{\partial (a_j^L)}.
$$
* Back-propagate the error for each $l=L-1,L-2,\dots,2$ as
$$
\delta_j^l = \sum_k \delta_k^{l+1}w_{kj}^{l+1}f'(z_j^l).
$$
* Finally, for each $l=L-1,L-2,\dots,2$, update the weights and the biases using gradient descent according to the rules
$$
w_{jk}^l\leftarrow w_{jk}^l- \eta \delta_j^la_k^{l-1},
$$
$$
b_j^l \leftarrow b_j^l-\eta \frac{\partial {\cal C}}{\partial b_j^l}=b_j^l-\eta \delta_j^l,
$$
The parameter $\eta$ is the learning rate.
Here it is convenient to use stochastic gradient descent with mini-batches and an outer loop that steps through multiple epochs of training.
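The steps above translate almost directly into code. Below is a minimal numpy sketch of a single update for a sigmoid network with the quadratic cost; the architecture, the random initialization, the single training example and the learning rate are all made-up illustrations.
```python
# One back-propagation / gradient-descent step for a small sigmoid network.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
sizes = [2, 3, 1]  # made-up layer sizes
W = [rng.normal(size=(n, m)) for m, n in zip(sizes[:-1], sizes[1:])]
b = [rng.normal(size=(n, 1)) for n in sizes[1:]]
eta = 0.1  # learning rate

def backprop_step(x, t):
    # feed-forward, storing the activations a^l
    activations = [x]
    for Wl, bl in zip(W, b):
        activations.append(sigmoid(Wl @ activations[-1] + bl))
    # output error: delta^L = f'(z^L) dC/da^L = a^L (1 - a^L) (a^L - t)
    delta = activations[-1] * (1 - activations[-1]) * (activations[-1] - t)
    for l in range(len(W) - 1, -1, -1):
        grad_W = delta @ activations[l].T  # dC/dw^l = delta^l (a^{l-1})^T
        grad_b = delta                     # dC/db^l = delta^l
        if l > 0:                          # back-propagate before updating W[l]
            delta = (W[l].T @ delta) * activations[l] * (1 - activations[l])
        W[l] -= eta * grad_W
        b[l] -= eta * grad_b

backprop_step(np.array([[0.5], [-0.2]]), np.array([[1.0]]))
```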
<!-- !split -->
## Learning challenges
The back-propagation algorithm works by going from
the output layer to the input layer, propagating the error gradient. The learning algorithm uses these
gradients to update each parameter with a Gradient Descent (GD) step.
Unfortunately, the gradients often get smaller and smaller as the
algorithm progresses down to the first hidden layers. As a result, the
GD update step leaves the lower layer connection weights
virtually unchanged, and training never converges to a good
solution. This is known in the literature as
**the vanishing gradients problem**.
In other cases, the opposite can happen, namely that the gradients grow bigger and
bigger. The result is that many of the layers get large updates of the
weights and the learning algorithm diverges. This is the **exploding gradients problem**, which is mostly encountered in recurrent neural networks. More generally, deep neural networks suffer from unstable gradients: different layers may learn at widely different speeds.
<!-- !split -->
### Is the Logistic activation function (Sigmoid) our choice?
Although this unfortunate behavior has been empirically observed for
quite a while (it was one of the reasons why deep neural networks were
mostly abandoned for a long time), it is only around 2010 that
significant progress was made in understanding it.
A paper titled [Understanding the Difficulty of Training Deep
Feedforward Neural Networks by Xavier Glorot and Yoshua Bengio](http://proceedings.mlr.press/v9/glorot10a.html) identified problems with the popular logistic
sigmoid activation function and the weight initialization technique
that was most popular at the time (namely random initialization using
a normal distribution with a mean of 0 and a standard deviation of
1).
They showed that with this activation function and this
initialization scheme, the variance of the outputs of each layer is
much greater than the variance of its inputs. Going forward in the
network, the variance keeps increasing after each layer until the
activation function saturates at the top layers. This is actually made
worse by the fact that the logistic function has a mean of 0.5, not 0
(the hyperbolic tangent function has a mean of 0 and behaves slightly
better than the logistic function in deep networks).
### The derivative of the Logistic function
Looking at the logistic activation function, when inputs become large
(negative or positive), the function saturates at 0 or 1, with a
derivative extremely close to 0. Thus when backpropagation kicks in,
it has virtually no gradient to propagate back through the network,
and what little gradient exists keeps getting diluted as
backpropagation progresses down through the top layers, so there is
really nothing left for the lower layers.
In their paper, Glorot and Bengio proposed a way to significantly
alleviate this problem. The signal must flow properly in both
directions: in the forward direction when making predictions, and in
the reverse direction when backpropagating gradients. We don’t want
the signal to die out, nor do we want it to explode and saturate. For
the signal to flow properly, the authors argue that we need the
variance of the outputs of each layer to be equal to the variance of
its inputs, and we also need the gradients to have equal variance
before and after flowing through a layer in the reverse direction.
One of the insights in the 2010 paper by Glorot and Bengio was that
the vanishing/exploding gradients problems were in part due to a poor
choice of activation function. Until then most people had assumed that
if Nature had chosen to use roughly sigmoid activation functions in
biological neurons, they must be an excellent choice. But it turns out
that other activation functions behave much better in deep neural
networks, in particular the ReLU activation function, mostly because
it does not saturate for positive values (and also because it is quite
fast to compute).
### The RELU function family
The Rectifier Linear Unit (ReLU) uses the following activation function
$$
f(z) = \max(0,z).
$$
The ReLU activation function suffers from a problem known as the dying
ReLUs: during training, some neurons effectively die, meaning they
stop outputting anything other than 0.
In some cases, you may find that half of your network’s neurons are
dead, especially if you used a large learning rate. During training,
if a neuron’s weights get updated such that the weighted sum of the
neuron’s inputs is negative, it will start outputting 0. When this
happen, the neuron is unlikely to come back to life since the gradient
of the ReLU function is 0 when its input is negative.
To solve this problem, nowadays practitioners use a variant of the ReLU
function, such as the leaky ReLU discussed above or the so-called
exponential linear unit (ELU) function
$$
ELU(z) = \left\{\begin{array}{cc} \alpha\left( \exp{(z)}-1\right) & z < 0,\\ z & z \ge 0.\end{array}\right.
$$
### Which activation function should we use?
In general it seems that the ELU activation function is better than
the leaky ReLU function (and its variants), which is better than
ReLU. ReLU performs better than $\tanh$ which in turn performs better
than the logistic function.
If runtime
performance is an issue, then you may opt for the leaky ReLU function over the
ELU function. If you don’t
want to tweak yet another hyperparameter, you may just use the default
$\alpha$ of $0.01$ for the leaky ReLU, and $1$ for ELU. If you have
spare time and computing power, you can use cross-validation or
bootstrap to evaluate other activation functions.
<!-- !split -->
## A top-down perspective on Neural networks
The first thing we would like to do is divide the data into two or three
parts. A training set, a validation or dev (development) set, and a
test set.
* The training set is used for learning and adjusting the weights.
* The dev/validation set is a subset of the training data. It is used to
check how well we are doing out-of-sample, after training the model on
the training dataset. We use the validation error as a proxy for the
test error in order to make tweaks to our model, e.g. changing hyperparameters such as the learning rate.
* The test set will be used to test the performance of our predictions with the final neural net.
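A minimal sketch of such a three-way split, assuming scikit-learn is available (the arrays and split fractions below are placeholders):
```python
# Split a placeholder dataset into 60% training, 20% validation and 20% test data.
import numpy as np
from sklearn.model_selection import train_test_split

X, y = np.random.rand(1000, 10), np.random.randint(0, 2, 1000)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)
# The test set is set aside and never used during training or model selection.
```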
It is crucial that we do not use any of the test data to train the algorithm. This is a cardinal sin in ML. If the validation and test sets are drawn from the same distributions, then a good performance on the validation set should lead to similarly good performance on the test set.
However, sometimes
the training data and test data differ in subtle ways because, for
example, they are collected using slightly different methods, or
because it is cheaper to collect data in one way versus another. In
this case, there can be a mismatch between the training and test
data. This can lead to the neural network overfitting these small
differences between the test and training sets, and a poor performance
on the test set despite having a good performance on the validation
set. To rectify this, Andrew Ng suggests making two validation or dev
sets, one constructed from the training data and one constructed from
the test data. The difference between the performance of the algorithm
on these two validation sets quantifies the train-test mismatch. This
can serve as another important diagnostic when using DNNs for
supervised learning.
## Limitations of supervised learning with deep networks
Like all statistical methods, supervised learning using neural
networks has important limitations. This is especially important when
one seeks to apply these methods, especially to physics problems. Like
all tools, DNNs are not a universal solution. Often, the same or
better performance on a task can be achieved by using a few
hand-engineered features (or even a collection of random
features).
Here we list some of the important limitations of supervised neural network based models.
* **Need labeled data.** Like all supervised learning methods, DNNs for supervised learning require labeled data. Often, labeled data is harder to acquire than unlabeled data (e.g. one must pay for human experts to label images).
* **Supervised neural networks are extremely data intensive.** DNNs are data hungry. They perform best when data is plentiful. This is doubly so for supervised methods where the data must also be labeled. The utility of DNNs is extremely limited if data is hard to acquire or the datasets are small (hundreds to a few thousand samples). In this case, the performance of other methods that utilize hand-engineered features can exceed that of DNNs.
* **Homogeneous data.** Almost all DNNs deal with homogeneous data of one type. It is very hard to design architectures that mix and match data types (i.e. some continuous variables, some discrete variables, some time series). In applications beyond images, video, and language, this is often what is required. In contrast, ensemble models like random forests or gradient-boosted trees have no difficulty handling mixed data types.
* **Many problems are not about prediction.** In natural science we are often interested in learning something about the underlying distribution that generates the data. In this case, it is often difficult to cast these ideas in a supervised learning setting. While the problems are related, it is possible to make good predictions with a *wrong* model. The model might or might not be useful for understanding the underlying science.
Some of these remarks are particular to DNNs, others are shared by all supervised learning methods. This motivates the use of unsupervised methods which in part circumvent these problems.
| ea103a2cf6cff0e028f3e5f643c18dcc5dc6e81b | 53,913 | ipynb | Jupyter Notebook | doc/src/NeuralNet/NeuralNet.ipynb | quantshah/tif285 | f60ef4bcf95eaf974ce063ac0b2ed422c3d75015 | [
"CC0-1.0"
]
| null | null | null | doc/src/NeuralNet/NeuralNet.ipynb | quantshah/tif285 | f60ef4bcf95eaf974ce063ac0b2ed422c3d75015 | [
"CC0-1.0"
]
| null | null | null | doc/src/NeuralNet/NeuralNet.ipynb | quantshah/tif285 | f60ef4bcf95eaf974ce063ac0b2ed422c3d75015 | [
"CC0-1.0"
]
| null | null | null | 38.128006 | 455 | 0.615937 | true | 10,062 | Qwen/Qwen-72B | 1. YES
2. YES
| 0.824462 | 0.867036 | 0.714838 | __label__eng_Latn | 0.999095 | 0.49914 |
# Random walks on the spanning cluster
In this notebook we'll explore diffusion on the spanning cluster.
```python
import sys
sys.path.append("..")
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import scipy.ndimage
import tqdm
import sklearn.linear_model
from generate_spanning_cluster import get_spanning_cluster
from percwalk import percwalk, parallel_percwalk
from log_binning import log_bin
```
```python
sns.set(color_codes=True)
```
```python
# Critical percolation probability
p_c = 0.59275
```
## Distance moved on the percolating cluster
We'll start by exploring diffusion on the percolating cluster for $p > p_c$. We'll measure the second moment of the average distance, $\langle R^2 \rangle$, as a function of the number of steps performed. Below we'll demonstrate what the diffusion on the percolating cluster looks like.
```python
spanning_cluster = get_spanning_cluster(100, p_c)
```
Having created a system with a percolating cluster, we start by visualizing the spanning cluster.
```python
plt.figure(figsize=(14, 10))
plt.imshow(spanning_cluster, origin="lower")
plt.show()
```
Here we can see the sites contained in the percolating cluster. Moving on, we'll start a walker on a random site in the percolating cluster.
```python
num_walks = int(1e5)
num_steps = 0
```
```python
while num_steps <= 1:
walker_map, displacements, num_steps = percwalk(spanning_cluster, num_walks)
```
```python
plt.figure(figsize=(14, 10))
plt.imshow(spanning_cluster, origin="lower")
# walker_map is oriented as row-column (ix, iy)
plt.plot(walker_map[1], walker_map[0])
plt.show()
```
In the above plot we can see how the walker moves in the spanning cluster. A thick line signifies several walks over a site.
## Mean squared distance
Next, we'll look at the mean squared distance, $\langle r^2 \rangle$, as a function of the number of walks, $N$, for varying $p > p_c$.
A free random walker behaves according to
\begin{align}
\langle r^2 \rangle \propto D t,
\end{align}
as we've seen in the molecular dynamics simulations.
On the percolating cluster we expect that the random walker at times gets stuck in dangling ends or slows down through singly connected bonds.
We introduce the scaling theory
\begin{align}
\langle r^2 \rangle = t^{2k} f\left[ (p - p_c) t^x \right],
\end{align}
for the mean squared distance on a system with random set sites.
We can categorize the function $f(u)$ for three different situations.
\begin{align}
f(u) = \begin{cases}
\text{const} & |u| \ll 1, \\
u^{\mu} & u \gg 1, \\
(-u)^{\beta - 2\nu} & u \ll -1.
\end{cases}
\end{align}
The first situation arises when $p \approx p_c$ and yields the scaling
\begin{align}
\langle r^2 \rangle \propto t^{2k},
\end{align}
where $k < 1/2$.
This is known as _subdiffusion_ as opposed to the _superdiffusion_ observed for the initial molecular dynamics simulations.
The second situation $u \gg 1 \implies p > p_c$ and $t > t_0$, where $t_0$ is some threshold time.
This will yield the scaling theory
\begin{align}
\langle r^2 \rangle \propto (p - p_c)^{\mu} t \propto t^{2k} \left[(p - p_c) t^{x} \right]^{\mu}.
\end{align}
The final situation is found when $p < p_c$ and $t > t_0$ yielding the scaling theory
\begin{align}
\langle r^2 \rangle \propto (p_c - p)^{\beta - 2\nu} \propto t^{2k} \left[ (p_c - p) t^x \right]^{\beta - 2\nu}.
\end{align}
We will not be exploring the latter situation.
When $p > p_c$ and after a certain time $t_0$ has passed we expect the random walker on the cluster to behave as a normal free random walker.
That is, when $t \gg t_0$ we get
\begin{align}
\langle r^2 \rangle \propto D t,
\end{align}
when $p > p_c$.
Stated differently, when $\langle r^2 \rangle \gg \xi^2$ the walker behaves as a free random walker.
The interesting fact here is that when $p = p_c$ we know that $\xi \to \infty$ and this condition will never be fulfilled.
This means that the scaling theory described above for the mean squared distance when $p = p_c$ will be true indefinitely.
We can summarize it by the function
\begin{align}
\langle r^2 \rangle \propto \begin{cases}
t^{2k} & \langle r^2 \rangle \ll \xi^2, \\
D t & \langle r^2 \rangle \gg \xi^2.
\end{cases}
\end{align}
We now simulate a system of size $L$ and compute the mean squared displacement on the spanning cluster several times for every system.
```python
L = 2048
num_walks = 1 * int(1e6)
num_walks_arr = np.arange(num_walks)
p_arr = np.linspace(p_c, 0.8, 12)
num_walkers = 500
num_systems = 20
r_squared = np.zeros((len(p_arr), num_walks))
```
It is important to note that the implementation of the percolation walker we use requires $2L^2 > N_w$, where $N_w$ is the number of walks (or, time if you like) we let the walkers perform. This is so that the walker won't start backtracking from whence it came, thereby flattening the displacement curve.
```python
for i, p in tqdm.tqdm_notebook(enumerate(p_arr), total=len(p_arr)):
spanning_cluster = get_spanning_cluster(L, p, num_attempts=1000)
r_squared[i] += parallel_percwalk(spanning_cluster, num_walks, num_walkers, num_systems)
```
Having computed the mean squared displacement for several different values of $p \geq p_c$ we look at the diffusion behavior.
For $p = p_c$ we know that $\xi \to \infty$ and we know that
\begin{align}
\langle r^2(t) \rangle \propto t^{2k},
\end{align}
for all values of $t$ as we will never have $\langle r^2(t) \rangle \gg \xi^2$.
We therefore estimate $k$ by taking the logarithm on both sides of the equation and using linear regression.
\begin{gather}
\log\left( \langle r^2(t) \rangle \right)
= \log(C) + 2k \log(t),
\end{gather}
where $C$ is a constant serving as the intercept.
```python
plt.figure(figsize=(14, 10))
plt.loglog(num_walks_arr, r_squared[0])
plt.fill_between(num_walks_arr, r_squared[0], alpha=0.2)
plt.xlabel(r"$t$")
plt.ylabel(r"$\langle r^2(t) \rangle$")
plt.title(r"Plot of the mean squared distance as a function of $t$ for $p = p_c$")
plt.show()
```
In this figure we can see how the log-log plot of the mean squared distance as a function of time for $p = p_c$ becomes linear.
We now use linear regression to find an estimate for $k$.
```python
log_walks = np.log(num_walks_arr[1:])
log_r = np.log(r_squared[0, 1:])
clf = sklearn.linear_model.LinearRegression().fit(
log_walks[:, np.newaxis], log_r[:, np.newaxis]
)
k = clf.coef_[0, 0] / 2
C = clf.intercept_[0]
print(f"k = {k}")
```
k = 0.35000464022009997
Having found the intercept and the coefficient $k$ we plot $\langle r^2(t) \rangle$ as a function of the number of walks (time, in a sense) and compare it to the theoretical estimate.
```python
plt.figure(figsize=(14, 10))
plt.plot(
num_walks_arr,
r_squared[0],
label=rf"$p = p_c = {p_c}$",
)
plt.fill_between(num_walks_arr, r_squared[0], alpha=0.2)
plt.plot(
num_walks_arr,
np.exp(C) * num_walks_arr ** (2 * k),
"--",
label=r"$\langle r^2(t) \rangle = \exp(C) t^{2k}$",
)
plt.title(r"Plot of the mean squared displacement as a function of time $t$ for $p = p_c$")
plt.xlabel(r"$t$")
plt.ylabel(r"$\langle r^2(t) \rangle$")
plt.legend(loc="best")
plt.show()
```
In this figure we have plotted the mean squared displacement as a function of "time", i.e., the number of walks, on the spanning cluster for $p = p_c$.
We observe the expected subdiffusion.
We also see how the theoretical estimate of the subdiffusion follows the data closely.
Below we plot the mean squared distance for $p \geq p_c$.
```python
plt.figure(figsize=(14, 10))
plt.loglog(
num_walks_arr,
r_squared[0],
"--",
label=rf"$p = p_c = {p_c}$",
)
plt.fill_between(num_walks_arr, r_squared[0], alpha=0.2)
for i, p in enumerate(p_arr[1:]):
plt.loglog(
num_walks_arr,
r_squared[i + 1],
label=rf"$p = {p:.3f}$"
)
plt.legend(loc="best")
plt.xlabel(r"$t$")
plt.ylabel(r"$\langle r^2(t) \rangle$")
plt.title(r"Plot of the mean squared distance as a function of time for $p \geq p_c$")
plt.show()
```
In the above figure we see how after a time $t_0$ the mean squared distance for $p > p_c$ deviates from the situation when $p = p_c$.
Reformulating the scaling theory for $p > p_c$ to create a data collapse we get
\begin{gather}
\langle r^2 \rangle \propto t^{2k} (p - p_c) t^{x}
\implies
t^{-2k} \langle r^2 \rangle \propto (p - p_c) t^{x}.
\end{gather}
We plot this on a log-log axis when creating the data collapse.
However, the challenge is to find an expression for $x$.
We have previously found that
\begin{align}
\langle r^2(t) \rangle \propto D(p) t,
\end{align}
where $D(p) = (p - p_c)^{\mu}$ when $p > p_c$ and $r \gg \xi$.
Inserted into the scaling theory we can write
\begin{gather}
(p - p_c)^{\mu} t \propto t^{2k} \left[(p - p_c) t^{x}\right]^{\mu}
\implies
1 = 2k + \mu x
\implies
x = \frac{1 - 2k}{\mu}.
\end{gather}
Having already found $k$ we are thus tasked with finding an expression for $\mu$ and we can be on our merry way to plotting a data collapse.
We know that when $t < t_0$ the mean squared distance scales as $t^{2k}$ when $p = p_c$ and $p > p_c$.
However, when $t > t_0$ this scaling continues only for $p = p_c$ whereas for $p > p_c$ the mean squared distance will increase faster in time.
We are interested in locating $t_0$ and we do this by finding the point where
\begin{align}
\langle r^2(t; p_c) \rangle \leq \frac{1}{2} \langle r^2(t; p > p_c) \rangle,
\end{align}
that is, we find the point where the scaling is more than twice as high.
At $t = t_0$ we know that $\langle r^2(t_0) \rangle = \xi^2$.
We are lucky enough to have a scaling ansatz for the characteristic cluster length stating
\begin{align}
\xi = \xi_0 (p - p_c)^{-\nu}.
\end{align}
```python
plt.figure(figsize=(14, 10))
plt.loglog(
num_walks_arr,
r_squared[0],
"--",
label=rf"$p = p_c = {p_c}$",
)
plt.fill_between(num_walks_arr, r_squared[0], alpha=0.2)
xi_squared_list = []
t_list = []
ind_list = []
p_list = []
for i, p in enumerate(p_arr[1:]):
plt.loglog(
num_walks_arr,
r_squared[i + 1],
label=rf"$p = {p:.3f}$"
)
mask = r_squared[0, 1:] <= 0.5 * r_squared[i + 1, 1:]
if not np.any(mask):
continue
index = np.argmax(mask)
xi_squared_list.append(r_squared[i + 1, index])
t_list.append(num_walks_arr[index])
ind_list.append(i + 1)
p_list.append(p)
plt.scatter(num_walks_arr[index], r_squared[i + 1, index])
plt.legend(loc="best")
plt.xlabel(r"$t$")
plt.ylabel(r"$\langle r^2(t) \rangle$")
plt.title(r"Plot of the mean squared distance as a function of time for $p \geq p_c$")
plt.show()
```
It is interesting to note that for a larger value of $p > p_c$ the cross-over time $t_0$ becomes smaller.
That is, for a higher percolation probability $p$ the time it takes for the system to behave as a "normal" random walker becomes shorter.
This also makes sense as there are fewer sites not contained in the spanning cluster.
Having found the points $(t_0, \xi^2)$ we now find an expression for $\nu$.
Taking the logarithm on both sides of the scaling ansatz for the characteristic length yields
\begin{align}
\log(\xi) = \log(\xi_0) - \nu \log(p - p_c).
\end{align}
We use linear regression to find an expression for the intercept $\log(\xi_0)$ and the exponent $\nu$.
```python
p_min_pc_arr = np.abs(np.array(p_list) - p_c)
clf = sklearn.linear_model.LinearRegression().fit(
np.log(p_min_pc_arr)[:, np.newaxis],
np.log(np.sqrt(xi_squared_list))[:, np.newaxis],
)
nu = -clf.coef_[0, 0]
log_xi_0 = clf.intercept_[0]
print(f"nu = {nu}")
```
nu = 1.2067707882240395
The exact value of $\nu$ in two dimensions is $\nu = 4/3$.
```python
nu_exact = 4 / 3
print(f"nu_exact = {nu_exact}")
```
nu_exact = 1.3333333333333333
We therefore see that we are in the right ballpark.
We plot the characteristic length as a function of $p$ and compare it to the theoretical scaling using our estimate of $\nu$.
```python
fig = plt.figure(figsize=(14, 10))
plt.plot(
p_list,
np.sqrt(xi_squared_list),
"-o",
label=r"$Measured$",
)
plt.plot(
p_list,
np.exp(log_xi_0) * p_min_pc_arr ** (-nu),
"-o",
label=r"$(p - p_c)^{-%.3f}$" % nu
)
#plt.plot(
# p_list,
# np.exp(log_xi_0) * p_min_pc_arr ** (-nu_exact),
# "-o",
# label=r"$(p - p_c)^{-%.3f}$" % nu_exact
#)
plt.xlabel(r"$p$")
plt.ylabel(r"$\xi$")
plt.title(r"Plot of the characteristic length as a function of the percolation probability $p$")
plt.legend(loc="best")
plt.show()
```
In this figure we see how our estimate of $\nu$ follows the measured data quite well.
And, more importantly, they follow the theoretical scaling using the exact value of $\nu$ closely.
Looking at the point where the cross-over time occurs, we see that this scales inversely with increasing $p$.
We know that for $p > p_c$ and $t < t_0$ we have the relation
\begin{align}
\langle r^2(t) \rangle \propto t^{2k},
\end{align}
that is the system experiences subdiffusion.
When $t > t_0$ the relation takes on the form
\begin{align}
\langle r^2(t) \rangle \propto t^{2k}\left[(p - p_c) t^x \right]^{\mu}.
\end{align}
However, at the cross-over time, that is when $t = t_0$, we expect these to be proportional.
\begin{gather}
t_0^{2k} \propto
t_0^{2k}\left[(p - p_c) t_0^{x} \right]^{\mu}
\implies
(p - p_c) t_0^{x} \propto 1
\implies
t_0 \propto |p - p_c|^{-1/x}.
\end{gather}
```python
neg_inv_x, log_C = np.polyfit(np.log(p_min_pc_arr), np.log(t_list), deg=1)
x = -1 / neg_inv_x
print(f"x = {x}")
```
x = 0.29554963750661734
```python
fig = plt.figure(figsize=(14, 10))
plt.plot(
p_list,
t_list,
"-o",
label=r"Measured",
)
plt.plot(
p_list,
np.exp(log_C) * p_min_pc_arr ** (-1 / x),
"-o",
label=r"$t_0 \propto |p - p_c|^{-1/x}$"
)
plt.xlabel(r"$p$")
plt.ylabel(r"$t_0$")
plt.title(r"Plot of the scaling of the cross-over time as a function for $p$")
plt.legend(loc="best")
plt.show()
```
In order to find an estimate for $x$ we need to find a value for $\mu$.
As we've stated earlier, when $p > p_c$ and $t \gg t_0$ which means that $r \gg \xi$ the random walker moves away from the anomalous diffusion and instead behaves as a free random walker with the relation
\begin{align}
\langle r^2(t) \rangle = D(p) t.
\end{align}
The Einstein relation for diffusion relates the diffusion constant to the conductance through
\begin{align}
D(p) \propto \sigma(p) \propto (p - p_c)^{\mu}.
\end{align}
```python
plt.figure(figsize=(14, 10))
diffusion_list = []
intercept_list = []
p_min_pc_diff_list = []
for i, p_i in enumerate(ind_list):
t_0 = t_list[i]
p = p_arr[p_i]
p_min_pc_diff_list.append(p - p_c)
cross = num_walks_arr > t_0
t_cross = num_walks_arr[cross]
r_cross = r_squared[p_i, cross]
d, intercept = np.polyfit(t_cross, r_cross, deg=1)
diffusion_list.append(d)
intercept_list.append(intercept)
plt.plot(t_cross, r_cross, label=fr"$p = {p:.3f}$")
plt.title(r"Plot of the mean squared distance as a function of $t > t_0$")
plt.xlabel(r"$t$")
plt.ylabel(r"$\langle r^2(t) \rangle$")
plt.legend(loc="best")
plt.show()
```
Ideally we expect the mean squared distance to be completely linear; however, we see that even though $t > t_0$ there are slight deviations from this trend.
Taking the logarithm on both sides of the scaling ansatz we are able to estimate $\mu$.
\begin{align}
\log(D) \propto \mu \log(p - p_c).
\end{align}
Again we use the least squares method to estimate $\mu$.
```python
mu, intercept = np.polyfit(np.log(p_min_pc_diff_list), np.log(diffusion_list), deg=1)
```
```python
print(mu)
```
0.8923720197088654
From the lecture notes we know that the exact value of $\mu$ is given by $\mu = 1.3$.
```python
mu_exact = 1.3
```
This shows that we miss by quite a lot when it comes to estimating $\mu$.
```python
plt.figure(figsize=(14, 10))
plt.plot(
p_min_pc_diff_list,
diffusion_list,
"-o",
label=r"Measured",
)
plt.plot(
p_min_pc_diff_list,
np.exp(intercept) * np.array(p_min_pc_diff_list) ** (mu),
"-o",
label=r"$(p - p_c)^{%.3f}$" % mu,
)
plt.xlabel(r"$p - p_c$")
plt.ylabel(r"$D(p)$")
plt.title(r"Plot of the diffusion constant as a function of $p - p_c$")
plt.legend(loc="best")
plt.show()
```
In this figure we've plotted the diffusion constant as a function of $p - p_c$.
We compare the measured values with the theoretical scaling using our estimate of $\mu$.
```python
x = (1 - 2 * k) / mu
print(f"x = {x}")
```
x = 0.3361722610460954
```python
p_min_pc = p_arr[1:] - p_c
r_gg_xi = r_squared[1:, 1:]
t_sq = num_walks_arr[1:] ** (-2 * k)
```
```python
plt.figure(figsize=(14, 10))
for i in range(len(p_min_pc)):
plt.plot(
p_min_pc[i] * num_walks_arr[1:] ** (x),
t_sq * r_gg_xi[i],
label=fr"$p = {p_arr[i + 1]:.3f}$",
)
#plt.plot(
# p_min_pc[i] * num_walks_arr[1:] ** (x),
# np.exp(intercept) * (p_min_pc[i] * num_walks_arr[1:] ** (x)) ** mu,
# label=fr"Theoretical: $p = {p_arr[i + 1]:.3f}$",
#)
plt.legend(loc="best")
plt.xlabel(r"$(p - p_c) t^{x}$")
plt.ylabel(r"$t^{-2k} \langle r^{2} (t)\rangle$")
plt.title(r"Data-collapse plot for mean squared distance")
plt.show()
```
In this figure we can see the data-collapse of the mean squared displacement.
We have compared to the theoretical scaling
\begin{gather}
\langle r^2 \rangle \propto t^{2k} (p - p_c) t^{x}
\propto t^{2k}\left[(p - p_c) t^{x}\right]^{\mu},
\end{gather}
where we use the intercept from the diffusion constant as the constant of proportionality.
We note that the collapse isn't completely perfect, especially when $p$ is close to $p_c$.
A possible explanation is that the percolation walker is restricted to a finite spanning cluster.
We might be able to alleviate some of this discrepancy if we allow the spanning cluster to have periodic boundaries, however this will require quite a lot of manipulation when labelling the spanning cluster and for the percolation walker and has therefore not been tested.
Finally we are interested in estimating the dimension of the random walk, $d_w$.
We have that
\begin{align}
d_w = \frac{1}{k}.
\end{align}
We can also estimate $d_w$ from the scaling relation
\begin{align}
\langle r^2(t) \rangle \propto t^{2k} = t^{2/d_w},
\end{align}
when $t < t_0$ for $p > p_c$.
```python
d_w_list = []
d_w_intercept_list = []
p_min_pc_d_w_list = []
for i, p_i in enumerate(ind_list):
t_0 = t_list[i]
p = p_arr[p_i]
p_min_pc_d_w_list.append(p - p_c)
pre_cross = num_walks_arr < t_0
t_pre_cross = num_walks_arr[pre_cross]
r_pre_cross = r_squared[p_i, pre_cross]
d_w_2_inv, d_w_intercept = np.polyfit(t_pre_cross, r_pre_cross, deg=1)
d_w_list.append(2 / d_w_2_inv)
d_w_intercept_list.append(d_w_intercept)
```
```python
fig = plt.figure(figsize=(14, 10))
plt.plot(p_list, d_w_list, "-o")
plt.xlabel(r"$p$")
plt.ylabel(r"$d_w$")
plt.title(r"Plot of the dimension of the random walk as a function of $p$")
plt.show()
```
| efd8a6b684981f3ccfd79c4e970b6d1234c07a1e | 749,837 | ipynb | Jupyter Notebook | project-4/random-walks-on-the-spanning-cluster-large.ipynb | Schoyen/FYS4460 | 0c6ba1deefbfd5e9d1657910243afc2297c695a3 | [
"MIT"
]
| 1 | 2019-08-29T16:29:18.000Z | 2019-08-29T16:29:18.000Z | project-4/random-walks-on-the-spanning-cluster-large.ipynb | Schoyen/FYS4460 | 0c6ba1deefbfd5e9d1657910243afc2297c695a3 | [
"MIT"
]
| null | null | null | project-4/random-walks-on-the-spanning-cluster-large.ipynb | Schoyen/FYS4460 | 0c6ba1deefbfd5e9d1657910243afc2297c695a3 | [
"MIT"
]
| 1 | 2020-05-27T14:01:36.000Z | 2020-05-27T14:01:36.000Z | 661.231922 | 134,356 | 0.946982 | true | 5,854 | Qwen/Qwen-72B | 1. YES
2. YES | 0.826712 | 0.826712 | 0.683452 | __label__eng_Latn | 0.96361 | 0.42622 |
```python
%matplotlib inline
%config InlineBackend.figure_formats = ['svg']
import numpy as np
import matplotlib.pyplot as plt
```
# Pulses and Waveforms
A waveform is simply a time varying signal. When a pulse operation occurs, a digital to analog converter produces a signal by combining the following data:
* a time varying complex function $ u(t) $ which describes the baseband waveform
* a *frequency* $f$, in Hertz,
* a *phase* $\theta$, in radians,
* a unitless *scale* $\alpha$.
The signal generated from this has the mathematical form
\begin{equation}
x(t) = \text{Re}[\alpha u(t)e^{i (\theta + 2 \pi f t)}],
\end{equation}
where $\text{Re}$ denotes the real part of a complex number.
In accordance with the usual conventions, we will refer to the real part of $u(t)$ as the **in-phase** component, and the imaginary part as the **quadrature** component. A general complex number $z = x + iy$ will sometimes be referred to as an **IQ-value**, with $I = x$ and $Q = y$.
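As an illustration only (this is not how the hardware generates signals), the following numpy sketch combines a baseband waveform with a scale, phase and frequency according to the expression above; the parameter values are made up.
```python
# x(t) = Re[scale * u(t) * exp(i*(phase + 2*pi*f*t))]
import numpy as np

def output_signal(u, times, scale, phase, frequency):
    return np.real(scale * u * np.exp(1j * (phase + 2 * np.pi * frequency * times)))

times = np.arange(0, 1e-6, 1e-9)            # one microsecond sampled at 1 GS/s
u = (0.5 + 0.5j) * np.ones_like(times)      # a "flat" baseband waveform
x = output_signal(u, times, scale=1.0, phase=0.0, frequency=5e7)
```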
Quil-T provides the programmer with pulse and capture-level control, both by allowing for custom baseband waveforms, as well as allowing for direct control over the run-time frequency, phase, and scale.
## Waveform Templates and References
There are two ways to specify a Quil-T waveform:
- by using a pre-existing template, which has a shape dictated by certain parameters
- by referencing a custom waveform definition.
Consider the following Quil-T code:
```
PULSE 0 "rf" flat(duration: 1e-6, iq: 0.5 + 0.5*i)
```
This denotes a pulse operation, on frame `0 "rf"`, with a total duration of one microsecond and with baseband waveform given by the `flat` template (corresponding to $u(t) = 1/2 + i/2$ for the duration of the signal). The resulting signal produced depends also on the phase, frequency, and scaling factor associated with frame `0 "rf"`. These may be set explicitly, as in
```
SET-SCALE 0 "rf" 1.0
SET-FREQUENCY 0 "rf" 5e9
SET-PHASE 0 "rf" 0.0
```
On the other hand, a custom waveform may be defined and used. This is done using the `DEFWAVEFORM` form,
```
DEFWAVEFORM my_waveform:
0.01, 0.01+0.01*i, ...
```
the body of which consists of a list of complex numbers. Such a custom waveform may be referenced directly by name, as in
```
PULSE 0 "rf" my_waveform.
```
The precise meaning of this depends on the *sample rate* of the associated frame: this indicates the number of samples per second consumed by the underlying digital-to-analog converter. For example, suppose that the definition of `0 "rf"` looked like this
```
DEFFRAME 0 "rf":
SAMPLE-RATE: 1000000000.0
INITIAL-FREQUENCY: 4807541957.13474
DIRECTION: "tx"
```
and `my_waveform` has a definition consisting of complex numbers $z_1, \ldots, z_N$. Then, letting $r=10^9$ denote the sample rate, the resulting pulse has total duration $\left \lceil{\frac{N}{r}}\right \rceil$, corresponding to the baseband waveform $ u(t) = z_{\left \lfloor{tr}\right \rfloor}. $ Here $\left \lceil{x}\right \rceil $ and $\left \lfloor{x}\right \rfloor$ denote the ceiling and floor of $x$, respectively.
As before, this baseband waveform is combined with the frame's scale, frequency, and phase during digital to analog conversion.
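As a rough illustration (not the hardware implementation), the sketch below shows how a list of samples and a frame's sample rate determine the pulse length and the piecewise-constant baseband $u(t)$; the waveform values are made up.
```python
# A custom waveform is a list of IQ samples played back at the frame's sample rate.
import numpy as np

my_waveform = np.array([0.01, 0.01 + 0.01j, 0.02 + 0.01j, 0.01])
sample_rate = 1e9                                  # samples per second

duration = len(my_waveform) / sample_rate          # N / r seconds (4 ns here)

def baseband(t):
    """u(t) = z_{floor(t * r)} for 0 <= t < duration."""
    return my_waveform[int(np.floor(t * sample_rate))]

print(duration, baseband(1.5e-9))
```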
## A Catalog of Template Waveforms
Rigetti provides a number of templates by default. These include
- `flat`, corresponding to simple rectangular waveforms
- `gaussian`, for a Gaussian waveform
- `drag_gaussian`, for a Gaussian waveform modified by the Derivative Removal by Adiabatic Gate (DRAG) technique
- `hrm_gaussian`, for a DRAG Gaussian waveform with second-order corrections
- `erf_square`, for a flat waveform with smooth edges derived from the Gaussian error function
- `boxcar_kernel`, for a flat waveform which is normalized to integrate to 1
Each of these waveforms has a corresponding definition in `pyquil.quiltwaveforms`. In addition to providing documentation on the meaning of each of the waveform parameters, this module also contains routines for generating samples for each template.
Below we look at each individually, discussing the meaning of the various parameters and plotting the real part of the waveform envelope.
```python
from pyquil.quilatom import TemplateWaveform
def plot_waveform(wf: TemplateWaveform, sample_rate: float):
""" Plot a template waveform by sampling at the specified sample rate. """
samples = wf.samples(sample_rate)
times = np.arange(len(samples))/sample_rate
print(wf)
plt.plot(times, samples.real)
plt.show()
```
### flat
A flat waveform is simple: it represents a constant signal for a certain duration. There are two required parameters:
* `duration`: the length of the waveform, in seconds
* `iq`: a complex number
```python
from pyquil.quiltwaveforms import FlatWaveform
plot_waveform(FlatWaveform(duration=1e-6, iq=1.0), sample_rate=1e9)
```
flat(duration: 1e-06, iq: 1.0)
#### Scale, Phase, Detuning
In addition to the parameters specific to each template waveform, there are also a few generic parameters. One of these we have met: each template waveform has a required `duration` argument, indicating the length of the waveform in seconds.
The other arguments are *optional*, and are used to modulate the basic shape. These are
* scale $\alpha$, which has the effect of scaling the baseband $u(t) \mapsto \alpha u(t)$
* phase $\theta$, in radians, which has the effect of a phase shift on the baseband $u(t) \mapsto e^{i \theta} u(t)$
* detuning $f_d$, in Hertz, which has the effect of $u(t) \mapsto e^{2 \pi i f_d} u(t)$
These may be provided as arguments to any template waveform, e.g.
```
PULSE 0 "rf" flat(duration: 1e-8, iq: 1.0, scale: 0.3, phase: 1.570796, detuning: 1e8)
```
Below we consider this by way of the PyQuil bindings.
```python
plot_waveform(FlatWaveform(duration=1e-6, iq=1.0, detuning=1e7), sample_rate=1e9)
```
flat(duration: 1e-06, iq: 1.0, detuning: 10000000.0)
### gaussian
Several of the template waveforms provided are derived from a standard (unnormalized) Gaussian. Here we have
\begin{equation}
u(t) = \text{exp}\Big(-\frac{(t-t_0)^2}{2 \sigma^2}\Big),
\end{equation}
where $t_0$ denotes the center of the Gaussian, and $\sigma$ is the usual standard deviation.
By Quil-T convention, this is parameterized by the Gaussian's [full width at half maximum](https://en.wikipedia.org/wiki/Full_width_at_half_maximum) (FWHM), which is defined to be
\begin{equation}
\text{FWHM} = 2 \sqrt{2 \ln(2)} \sigma.
\end{equation}
As with all Quil-T waveforms, a Quil-T `gaussian` has a finite duration, and thus corresponds to a truncation of a true Gaussian.
In short, the parameters are:
* `duration`: the duration of the waveform, in seconds. The Gaussian will be truncated to $[0, \text{duration}]$
* `t0`: the center of the Gaussian, in seconds
* `fwhm`: the full width half maximum of the Gaussian, in seconds
```python
from pyquil.quiltwaveforms import GaussianWaveform
plot_waveform(GaussianWaveform(duration=1e-6, t0=5e-7, fwhm=4e-7), sample_rate=1e9)
```
gaussian(duration: 1e-06, fwhm: 4e-07, t0: 5e-07)
### drag_gaussian
The `drag_gaussian` waveform extends the basic Gaussian with an additional correction factor (cf. https://arxiv.org/abs/1809.04919 and references therein). The shape is given by
\begin{equation}
u(t) = \Big(1 + i\frac{\alpha}{2 \pi \eta \sigma^2}\Big)\text{exp}\Big(-\frac{(t-t_0)^2}{2 \sigma^2}\Big),
\end{equation}
where $\eta$ is the anharmonicity constant, in Hz, and $\alpha$ is a dimensionless shape parameter.
As before, rather than providing $\sigma$ explicity, `drag_gaussian` takes a full-width half max parameter.
In summary, the required arguments to `drag_gaussian` are:
* `duration`: the duration of the waveform, in seconds. The Gaussian will be truncated to $[0, \text{duration}]$
* `t0`: the center of the Gaussian, in seconds
* `fwhm`: the full width half maximum of the Gaussian, in seconds
* `anh`: the anharmonicity constant $\eta$, in Hertz
* `alpha`: dimensionless shape parameter $\alpha$.
```python
from pyquil.quiltwaveforms import DragGaussianWaveform
plot_waveform(
DragGaussianWaveform(duration=1e-6, t0=5e-7, fwhm=4e-7, anh=1.1, alpha=1.0),
sample_rate=1e9
)
```
drag_gaussian(duration: 1e-06, fwhm: 4e-07, t0: 5e-07, anh: 1.1, alpha: 1.0)
Of course, we only plotted the *real part* of the waveform above. The imaginary part is relevant when the baseband waveform is converted to a passband waveform, as we can demonstrate below via the optional `detuning` argument.
```python
from pyquil.quiltwaveforms import DragGaussianWaveform
plot_waveform(
DragGaussianWaveform(duration=1e-6, t0=5e-7, fwhm=4e-7, anh=1.1, alpha=1.0, detuning=1e7),
sample_rate=1e9
)
```
drag_gaussian(duration: 1e-06, fwhm: 4e-07, t0: 5e-07, anh: 1.1, alpha: 1.0, detuning: 10000000.0)
### hrm_gaussian
The `hrm_gaussian` waveform is a variant of `drag_gaussian` which incorporates a higher order term. The shape is given by
\begin{equation}
u(t) = \Big(1 - H_2 \frac{(t-t_0)^2}{2 \sigma^2} + i\frac{\alpha (t-t_0)}{2 \pi \eta \sigma^2} \big(1 - H_2 \big(\frac{(t-t_0)^2}{2\sigma^2} - 1\big)\Big)\text{exp}\Big(-\frac{(t-t_0)^2}{2 \sigma^2}\Big),
\end{equation}
where $\alpha$ is a dimensionless DRAG parameter, $\eta$ is the anharmonicity constant, and $H_2$ is a second order correction coefficient (cf. [1]). Note that when $H_2$ equals 0 this reduces to an ordinary `drag_gaussian`.
The required arguments to `hrm_gaussian` are:
* `duration`: the duration of the waveform, in seconds. The Gaussian will be truncated to $[0, \text{duration}]$
* `t0`: the center of the Gaussian, in seconds
* `fwhm`: the full width half maximum of the Gaussian, in seconds
* `anh`: the anharmonicity constant $\eta$, in Hertz
* `alpha`: dimensionless shape parameter $\alpha$
* `second_order_hrm_coeff`: the constant $H_2$.
[1] Warren, W. S. (1984). Effects of arbitrary laser or NMR pulse shapes on population inversion and coherence. The Journal of Chemical Physics, 81(12), 5437–5448. doi:10.1063/1.447644
```python
from pyquil.quiltwaveforms import HrmGaussianWaveform
plot_waveform(
HrmGaussianWaveform(duration=1e-6, t0=5e-7, fwhm=4e-7, anh=1.1, alpha=1.0, second_order_hrm_coeff=0.5),
sample_rate=1e9
)
```
hrm_gaussian(duration: 1e-06, fwhm: 4e-07, t0: 5e-07, anh: 1.1, alpha: 1.0, second_order_hrm_coeff: 0.5)
### erf_square
The `erf_square` waveform is a variant of `flat_waveform` with the boundary discontinuities smoothed via the [error function](https://en.wikipedia.org/wiki/Error_function) (erf), and additional zero-padding.
The required arguments are:
* `duration`: the duration of the nonzero part of the waveform, in seconds
* `risetime`: width of each of the rise and fall sections of the pulse, in seconds
* `pad_left`: amount of zero-padding to add to the left of the pulse, in seconds
* `pad_right`: amount of zero-padding to add to the right of the pulse, in seconds
**NOTE**: The total duration of the waveform is `duration + pad_left + pad_right`; the total duration of the *support* (nonzero entries) is `duration`.
```python
from pyquil.quiltwaveforms import ErfSquareWaveform
plot_waveform(
ErfSquareWaveform(duration=1e-6, risetime=1e-7, pad_left=1e-7, pad_right=1e-7),
sample_rate=1e9
)
```
erf_square(duration: 1e-06, risetime: 1e-07, pad_left: 1e-07, pad_right: 1e-07)
## Capture and Kernels
In Quil-T, waveforms are used in two places:
* in a `PULSE` operation, to specify what signal is generated
* in a `CAPTURE` operation, to specify how to resolve a signal into a number
The `CAPTURE` operation involves reading in a signal on a suitable signal line, and integrating it with respect to a *kernel*.
Mathematically, there is a real-valued signal $ s(t), $ corresponding to a reading on the signal line. This is combined with a complex-valued "baseband" kernel $k(t)$ to get a resulting **IQ value** $z$ by
\begin{equation}
z = \int_{t_\text{min}}^{t_\text{max}} k(t) s(t) e^{-2 \pi i f t} \, dt,
\end{equation}
where $f$ denotes the frequency of the associated capture frame.
In Quil-T, this integrating kernel is specified by a waveform, with the usual convention that it is scaled to satisfy $\int k(t) \, dt = 1.$ The most common example is the `boxcar_kernel`, which corresponds to a flat pulse scaled to satisfy this condition.
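A discretized sketch of this integration (illustrative only; the signal, kernel and frequency below are placeholders) might look like:
```python
# Resolve a measured signal s(t) into a single IQ value by integrating against a kernel.
import numpy as np

def capture_iq(signal, kernel, times, frame_frequency):
    integrand = kernel * signal * np.exp(-2j * np.pi * frame_frequency * times)
    return np.trapz(integrand, times)

sample_rate = 1e9
times = np.arange(0, 1e-6, 1 / sample_rate)
kernel = np.ones_like(times) / 1e-6          # boxcar kernel normalized to integrate to 1
signal = np.cos(2 * np.pi * 5e7 * times)     # a made-up readout signal
z = capture_iq(signal, kernel, times, frame_frequency=5e7)
print(z)
```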
### boxcar_kernel
Because of the normalization condition, the `boxcar_kernel` requires only a `duration` argument for its construction.
```python
from pyquil.quiltwaveforms import BoxcarAveragerKernel
plot_waveform(
BoxcarAveragerKernel(duration=1e-6),
sample_rate=1e9
)
```
boxcar_kernel(duration: 1e-06)
The samples should sum to (roughly) one.
```python
assert np.isclose(
np.sum(BoxcarAveragerKernel(duration=1e-6).samples(1e9)),
1.0
)
```
**Note**: The reference implementations of these waveforms makes use of Python's double precision floating point arithmetic. The *actual* implementation involves a certain amount of hardware dependence. For example, Rigetti's waveform generation hardware currently makes use of 16 bit fixed point arithmetic.
## Compile Time versus Run Time
When thinking about parameters in Quil-T, there are two times to consider:
* the time of compilation, when a Quil-T program is translated to a binary format executable on Rigetti hardware
* the time at which the program is run.
**All template parameters must be resolved at compile time.** The Quil-T compiler depends on being able to determine the size and contents of waveforms in advance. In other words, the following is not allowed:
```
DECLARE theta REAL
PULSE 0 "rf" flat(duration: 1e-8, iq: 1.0, phase: theta)
```
However, the following program is valid, and does something equivalent
```
DECLARE theta REAL
SHIFT-PHASE 0 "rf" theta
PULSE 0 "rf" flat(duration: 1e-8, iq: 1.0)
SHIFT-PHASE 0 "rf" -theta
```
(Why the second `SHIFT-PHASE`? To leave the frame `0 "rf"` as it was before we started!)
Explicit control over the *run-time* phase, frequency, or scale requires the use of one of the following instructions.
* `SET-SCALE`
* `SET-PHASE`
* `SHIFT-PHASE`
* `SET-FREQUENCY`
* `SHIFT-FREQUENCY`
all of which support run-time parameter arguments.
```python
```
| 374261f52e2d486de0e97a52598e9de821f6cf26 | 225,383 | ipynb | Jupyter Notebook | docs/source/quilt_waveforms.ipynb | dwillmer/pyquil | f9a8504d20729b79f07ec4730c93f4b84d6439eb | [
"Apache-2.0"
]
| 1 | 2021-11-30T21:03:15.000Z | 2021-11-30T21:03:15.000Z | docs/source/quilt_waveforms.ipynb | dwillmer/pyquil | f9a8504d20729b79f07ec4730c93f4b84d6439eb | [
"Apache-2.0"
]
| null | null | null | docs/source/quilt_waveforms.ipynb | dwillmer/pyquil | f9a8504d20729b79f07ec4730c93f4b84d6439eb | [
"Apache-2.0"
]
| null | null | null | 41.173365 | 452 | 0.477427 | true | 4,095 | Qwen/Qwen-72B | 1. YES
2. YES | 0.847968 | 0.803174 | 0.681065 | __label__eng_Latn | 0.99251 | 0.420674 |
```python
# IMPORT PACKAGES
from geoscilabs.em import UXO_TEM_Widget as UXO
from IPython.display import display
from ipywidgets import HBox
```
# Contents
This app contains 3 widgets:
* **Orientation and polarization widget:** This widget allows the user to visualize the orientation, infer the dimensions and change the polarizabilities of compact objects they wish to model.
* **Data visualization widget:** This widget allows the user to visualize the step-off response of compact objects using three commonly used instruments: EM61, TEMTADS, and MPV.
* **Parameter estimation widget:** This widget allows the user to invert synthetic data collected using EM61, TEMTADS or MPV instruments in order to recover the location and primary polarizabilities for a compact object.
# Background Theory
## Polarization Tensor
The magnetic dipole moment ${\bf m}$ being experienced by a compact object is given by:
\begin{equation}
\mathbf{m = Q \, h_p}
\end{equation}
where ${\bf h_p} = [h_x,h_y,h_z]^T$ is the primary magnetic field caused by the transmitter before shut-off and ${\bf Q}$ is the called the **polarizability tensor**. The polarizability tensor is a 3X3 symmetric, positive-definite (SPD) matrix given by:
\begin{equation}
{\bf Q} = \begin{bmatrix} q_{11} & q_{12} & q_{13} \\ q_{12} & q_{22} & q_{23} \\ q_{13} & q_{23} & q_{33} \end{bmatrix}
\end{equation}
where $q_{ij}$ defines hows strongly field component $h_i$ contributes towards $m_j$.
## Coordinates and Primary Polarizations
The polarizability tensor for an object depends on its orientation, dimensions and electromagnetic properties. Because the polarizability tensor is SPD, it can be decomposed using the following eigen-decomposition:
\begin{equation}
{\bf Q = A \, L(t) \, A^T}
\end{equation}
where
\begin{equation}
{\bf A} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{12} & a_{22} & a_{23} \\ a_{13} & a_{23} & a_{33} \end{bmatrix} \;\;\;\; \textrm{and} \;\;\;\;
{\bf L(t)} = \begin{bmatrix} L_{x'}(t) & 0 & 0 \\ 0 & L_{y'}(t) & 0 \\ 0 & 0 & L_{z'}(t) \end{bmatrix}
\end{equation}
${\bf A}$ is a SPD rotation matrix from the coordinate system defined by the UXO ($x',y',z'$) to the field coordinate system ($x,y,z$). ${\bf A}$ is defined by three angles: $\psi,\theta$ and $\phi$. $\theta$ is the azimuthal angle (angle relative to vertical), $\phi$ is the declination (angle relative to North) and $\psi$ is the roll (rotation about z' axis).
${\bf L(t)}$ characterizes the primary polarizabilities of the object. The magnetic dipole moment experienced by the object is a linear combination of polarizabilities $L_{x'},L_{y'}$ and $L_{z'}$. Depending on the dimensions and of the object, $L_{x'},L_{y'}$ and $L_{z'}$ may differ. For example:
* A sphere has primary polarizabilities $L_{x'}=L_{y'}=L_{z'}$
* A UXO has primary polarizabilities $L_{x'}=L_{y'}<L_{z'}$
For a given axis $i$, the primary polarizability for a step-off response at $t>0$ is given by:
\begin{equation}
L_{ii}(t) = k_i \Bigg ( 1 + \frac{t^{1/2}}{\alpha_i^{1/2}} \Bigg )^{-\beta_i} e^{-t/\gamma_i}
\end{equation}
where the decay of the object's polarization is determined by parameters $k_i,\alpha_i,\beta_i$ and $\gamma_i$.
## Predicting Data
There are a multitude of instruments used to measure the time-domain responses exhibited by UXOs (EM61, TEMTADS, MPV). For each individual measurement, a transmitter loop produces a primary magnetic field ${\bf h_p} = [h_x,h_y,h_z]^T$ which is turned off a $t=0$. The primary field polarizes the UXO according to its polarizability tensor ${\bf Q}$. The polarization of the object produces a secondary field which induces an EMF in one or more receiver coils. The field component being measured by each receiver coil depends on its orientation.
Where ${\bf G} = [g_x,g_y,g_z]$ maps the dipole moment experienced by the object to the induced voltage in a receiver coil:
\begin{equation}
d = {\bf G \, m} = {\bf G \, Q \, h_p}
\end{equation}
Because it is SPD, the polarizability tensor may be characterized at each time by 6 parameters $(q_{11},q_{12},q_{13},q_{22},q_{23},q_{33})$. The previous expression can ultimately be reformulated as:
\begin{equation}
d = {\bf P \, q}
\end{equation}
where
\begin{equation}
{\bf q^T} = [q_{11} \;\; q_{12} \;\; q_{13} \;\; q_{22}\;\; q_{23} \;\; q_{33}]
\end{equation}
and
\begin{equation}
{\bf P} = [h_xg_x \;\; h_xg_y \!+\! h_yg_x \;\; h_xg_z \!+\! h_zg_x \;\; h_zg_y \;\; h_yg_z \!+\! h_zg_y \;\; h_zg_z]
\end{equation}
Thus in the case that there are $N$ distinct transmitter-receiver pair, each transmitter-receiver pair is represented as a row within ${\bf P}$. ${\bf q}$ contains all the necessary information to construct ${\bf Q}$ and ${\bf P}$ contains all the geometric information associated with the problem.
## Inversion and Parameter Estimation
When inverting field-collected UXO data there are two primary goals:
* Accurate location of a target object (recover $x,y,z$)
* Accurate characterization of a target object (by recovering $L_{x'},L_{y'},L_{z'}$)
For this widget, we will accomplish these goals in two steps.
### Step 1
In step 1, we intend to recover the location of the target $(x,y,z)$ and the elements of the polarizability tensor $(q_{11},q_{12},q_{13},q_{22},q_{23},q_{33})$ at each time. A basic approach is applied by finding the location and polarizabilities which minimize the following data misfit function:
\begin{equation}
\begin{split}
\Phi &= \sum_{i=k}^K \Big \| {\bf W_k} \big ( {\bf P \, q_k - d_{k,obs}} \big ) \Big \|^2 \\
& \textrm{s.t.} \\
& q_{min} \leq q_{ij}(t) \leq q_{max} \\
& q_{ii}(t) \geq 0 \\
& \big | q_{ij}(t) \big | \leq \frac{1}{2} \big ( \; \big | q_{ii}(t) \big | + \big | q_{jj}(t) \big | \; \big )
\end{split}
\end{equation}
where ${\bf P}$ depends on the location of the target, $i$ refers to the time-channel, $d_{i,obs}$ is the observed data at time $i$ and ${\bf W_i}$ are a set of weights applied to the data misfit. The constraint assures that negative polarizabilities (non-physical) are not recovered in order to fit the data.
### Step 2
Once recovered, ${\bf q}$ at each time can be used to construct the corresponding polarizability tensor ${\bf Q}$. Recall that the eigen-decomposition of ${\bf Q}$ is given by:
\begin{equation}
{\bf Q = A \, L(t) \, A^T}
\end{equation}
Thus $L_{x'}(t),L_{y'}(t),L_{z'}(t)$ are just the eigenvalues of ${\bf Q}$ and the elements of the rotation matrix ${\bf A}$ are the eigenvectors. Once $L_{x'},L_{y'},L_{z'}$ have been recovered at all times, the curves can be compared against the known primary polarizabilities of objects which are stored in a library.
### Practical Considerations
**Sampling Density:** The optimum line and station spacing depends significantly on the dimensions of the target, its depth and the system being used to perform the survey. It is important to use a sampling density which accurately characterizes TEM anomalies without adding unnecessary time and expense.
**Excitation Orientation:** The excitation of a buried target occurs parallel to the inducing field. Thus in order to accurately recover polarizations $L_{x′},L_{y′}$ and $L_{z′}$ for the target, we must excite the target significantly from multiple angles. Ideally, the target would be excited from 3 orthogonal directions; thus assuring the data contains significant contributions from each polarization.
# Orientation and Polarization Widget
### Purpose
This app allows the user to visualize the orientation, approximate dimensions and polarizability of compact objects they wish to model with subsequent apps.
### Parameter Descriptions
* $\Phi$: Clockwise rotation about the z-axis
* $\theta$: Azimuthal angle (angle from vertical)
* $\phi$: Declination angle (Clockwise angle from North)
* $k_i,\alpha_i,\beta_i,\gamma_i$: Parameters which characterize the polarization along axis $i$
```python
# NOTE: INITIATE WIDGET BY ADJUSTING ANY PARAMETER!!!
Out1 = UXO.ImageUXOWidget()
display(HBox(Out1.children[0:3]))
display(HBox(Out1.children[3:7]))
display(HBox(Out1.children[7:11]))
display(HBox(Out1.children[11:15]))
display(Out1.children[15])
Out1.out
```
# Data Visualization Widget
### Purpose
This widget allows the user to visualize the time-domain response using three commonly used instruments: EM61, TEMTADS, and MPV. On the leftmost plot, the TEM anomaly at the center of the transmitter loop is plotted at a specified time. On the rightmost plot, the TEM decays registered by all receiver coils for a particular transmitter loop are plotted.
### Parameter Descriptions
* TxType: Instrument used to predict data. Set as "EM61", "TEMTADS" or "MPV"
* $x_{true},y_{true},z_{true}$: Location of the object
* $\psi,\theta,\phi$: Angles defining the orientation of the object
* $k_i,\alpha_i,\beta_i,\gamma_i$: Parameters which characterize the polarization along axis $i$
* Time channel: Adjusts the time in which the TEM anomaly at the center of the transmitter loop is plotted
* X location, Y location: The transmitter location at which you would like to see all decays measured by the receiver coils.
```python
# NOTE: INITIATE WIDGET BY ADJUSTING ANY PARAMETER!!!
TxType = "EM61" # Set TxType to "EM61", "TEMTADS" or "MPV"
Out2 = UXO.ImageDataWidget(TxType)
display(HBox(Out2.children[0:3]))
display(HBox(Out2.children[3:6]))
display(HBox(Out2.children[6:10]))
display(HBox(Out2.children[10:14]))
display(HBox(Out2.children[14:18]))
display(HBox(Out2.children[18:21]))
if TxType is "MPV":
display(Out2.children[21])
Out2.out
```
# Parameter Estimation Widget
### Purpose
This widget allows the user to invert synthetic data using EM61, TEMTADS or MPV instruments in order to recover the location and primary polarizabilities for a compact object. The goal of this app is to demonstrate how successful recovery depends on:
* Sampling density
* Excitation orientation
### Parameter Descriptions
* TxType: Instrument used for simulation. Set as "EM61", "TEMTADS" or "MPV"
* $x_{true},y_{true},z_{true}$: True location of the object
* $\psi,\theta,\phi$: True angles defining the orientation of the object
* $k_i,\alpha_i,\beta_i,\gamma_i$: True parameters which characterize the polarization of the object along axis $i$
* $D_x,D_y$: The x-width and y-width for the cued-interrogation region
* $N_x,N_y$: The number of stations in the x and y direction
* $x_0,y_0,z_0$: Starting guess for the location of the object
```python
# NOTE: INITIATE WIDGET BY ADJUSTING ANY PARAMETER!!!
TxType = "EM61" # Set TxType to "EM61", "TEMTADS" or "MPV"
Out3 = UXO.InversionWidget(TxType)
display(HBox(Out3.children[0:3]))
display(HBox(Out3.children[3:6]))
display(HBox(Out3.children[6:10]))
display(HBox(Out3.children[10:14]))
display(HBox(Out3.children[14:18]))
display(HBox(Out3.children[18:22]))
display(HBox(Out3.children[22:25]))
if TxType is "MPV":
display(HBox(Out3.children[25:27]))
else:
display(Out3.children[25])
Out3.out
```
```python
```
| 11dafc57a156aab42685693647f0d6ee6ae5a220 | 14,726 | ipynb | Jupyter Notebook | notebooks/em/TDEM_UXO.ipynb | simpeg/geosci-labs | 2062f4c910c53687482b203031981938c9f25984 | [
"MIT"
]
| null | null | null | notebooks/em/TDEM_UXO.ipynb | simpeg/geosci-labs | 2062f4c910c53687482b203031981938c9f25984 | [
"MIT"
]
| null | null | null | notebooks/em/TDEM_UXO.ipynb | simpeg/geosci-labs | 2062f4c910c53687482b203031981938c9f25984 | [
"MIT"
]
| null | null | null | 47.198718 | 555 | 0.609942 | true | 3,150 | Qwen/Qwen-72B | 1. YES
2. YES
| 0.782662 | 0.795658 | 0.622732 | __label__eng_Latn | 0.987517 | 0.285145 |
```python
!jupyter nbconvert Lecture-16.ipynb --to slides --post serve
```
# Lecture 16: Analytical Solutions II: The Laplace Transform and Semi Infinite Media
### Sections
* [Introduction](#Introduction)
* [Learning Goals](#Learning-Goals)
* [On Your Own](#On-Your-Own)
* The Laplace Transform
* The Laplace Transform of Derivatives
* [In Class](#In-Class)
* Solving a PDE using Laplace Transformations
* Applying the Boundary Conditions
* [Homework](#Homework)
* [Summary](#Summary)
* [Looking Ahead](#Looking-Ahead)
* [Reading Assignments and Practice](#Reading-Assignments-and-Practice)
### Introduction
----
While using the words "hard" or "not hard" are a bit unfair when discussing solutions to the diffusion equation, I will say that my personal experience is that problems with finite bounds are a bit trickier to set up than problems where the boundaries are located infinitely far away. The simpler solutions to infinite/semi-infinite problems make them tempting to use on problems with small dimensions. So that is a source of tension for the materials scientist! Do I use the quick and easy solution, or do I try and simulate/compute the more-precise solution? Understanding the limitations and quantifying the error (and, if possible bounding the error) is a useful tactic.
Set up our notebook with a few (usual) imports:
```python
%matplotlib inline
import numpy as np
import math as math
from scipy.special import erfc
from ipywidgets import interact, fixed
import matplotlib.pyplot as plt
```
```python
# run this cell before class.
def diffusionSolution(x, t, C0, D):
return C0*erfc(x/(2.0*np.sqrt(D*t)))
def myfig(t=0.1, C0=5.0, D=1.0):
"""
This function plots a solution to the diffusion equation
based on a Laplace transform solution. Four inputs are
required.
"""
x = np.linspace(0.0, 100.0, 5000)
y1 = diffusionSolution(x, t, C0, D)
fig = plt.figure(figsize=(5,4))
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8])
axes.plot(x, y1, 'r', label=r"$c(x,t)$")
plt.fill_between(x, 0, y1, alpha=0.1, color='red')
axes.set_xlim([0.0,10.0])
axes.set_ylim([0.0,10.0])
axes.legend()
axes.grid(False)
return
```
```python
interact(myfig, t=(0.001,100.0,0.1), C0=(1.0,10.0,1.0), D=(1.0,10.0,1.0));
```
[Top of Page](#Sections)
### Learning Goals
----
* Practice computing integral transforms and their inverses using Python
* Identify useful resources in textbooks or on the Web
* Use the Laplace transform to solve a semi-infinite diffusion problem
* Gain confidence to apply the Laplace transform in future lectures
[Top of Page](#Sections)
### On Your Own
----
#### The Laplace Transform
The Laplace transform is an integral transform. The Laplace transform of a function $f(t)$ is defined to be:
$$
\int_0^{\infty} f(t) e^{-st} dt
$$
We will explore `sympy`'s functionality for computing the Laplace transform. But first let us to the integration explicitly.
```python
%matplotlib notebook
import sympy as sp
sp.init_session(quiet=True)
```
IPython console for SymPy 1.0 (Python 2.7.12-64-bit) (ground types: python)
Here we set up a few symbols so that our notation is consistent. It is imperitive that we place conditions on these parameters. You can relax these conditions, but in most introductory textbooks these conditions are spelled out in the text rather than symbolically. So it can be easy to miss the conditions if you are scanning the equations.
```python
s = symbols('s', positive=True)
omega = symbols('omega', positive=True)
C0 = symbols('C0', positive=True)
init_printing()
```
Here is the Laplace transform of $\sin(\omega t)$:
```python
laplaceSinIntegral = sp.Integral(sp.sin(omega*t)*sp.exp(-s*t), (t,0,oo))
laplaceSinIntegral
```
```python
laplaceSinIntegral.doit()
```
We can Laplace transform a constant (this is important for boundary conditions):
```python
sp.integrate(C0*sp.exp(-s*t), (t,0,oo))
```
```python
?sp.laplace_transform
```
```python
sp.laplace_transform(sp.sin(t), t, s)
```
(1/(s**2 + 1), 0, True)
```python
sp.laplace_transform(sin(omega*t), t, s)[0]
```
```python
sp.laplace_transform(C0, t, s)
```
(C0/s, 0, True)
### In Class - Part I
----
#### The Laplace Transform of Derivatives
This is where the strength of the method is realized. We will use the product rule to develop an expression for the Laplace transform of a derivative with respect to time and space. In time the transform requires integration by parts. In space we require differentiation under the integral.
Start with our symbolic representation of a derivative quantity
```python
sp.diff(f(x,t),x)
```
Let us apply the laplace transform function to this definition:
```python
sp.laplace_transform(sp.diff(f(x,t),t), t, s)
```
`sympy` is being strictly correct, but unhelpful. A short discussion on integration by parts will help you to understand the textbook definition of the Laplace transform of a derivative.
I think of integration by parts now a little differently than when I learned it. [Wikipedia's description](https://en.wikipedia.org/wiki/Integration_by_parts) is closer to how I think about things now. Start by applying the product (chain?) rule to the product of functions:
$$
\frac{d}{dx} (u(x)v(x)) = u(x)\frac{dv(x)}{dx} + v(x)\frac{du(x)}{dx}
$$
then integrate and re arrange:
$$
\int u(x) v'(x) \, dx = u(x) v(x) - \int v(x) \, u'(x) dx
$$
We can apply similar logic to define the Laplace transform of the derivative of $f(t)$. We can start by differentiating the product
$$f(x,t) e^{-st}$$
```python
sp.diff(f(x,t)*sp.exp(-s*t),t)
```
We can write the integral of the product (`sympy` will not evaluate this for us because of the use of `Integral`). We can call for integration with the `doit()` method:
```python
leftHandSide = sp.Integral(sp.Derivative(f(x,t)*sp.exp(-s*t),t),(t,0,oo))
leftHandSide
```
By the fundamental theorem of the Calculus this expression is:
$$
[e^{-st}f(t)] \, \Big |^{\,t=\infty}_{\, t=0}
$$
Here we use a little bit of code to write the Integral for each term produced by the product rule:
```python
expandedDerivative = sp.diff(f(x,t)*sp.exp(-s*t),t)
[sp.Integral(terms,(t,0,oo)) for terms in expandedDerivative.args]
```
Making our whole definition:
$$
\mathcal{L}(f_t) = \int_{0}^{\infty} e^{- s t} \frac{\partial f(x,t)}{\partial t}\, dt = [e^{-st}f(t)] \, \Big |^{\,t=\infty}_{\, t=0} + \int_{0}^{\infty} s f{\left (x,t \right )} e^{- s t}\, dt
$$
Where the $t$ subscript refers to differentiation in time. It's that simple. Some texts will continue from this point and define, generally, the Laplace transform of an n'th derivative. The only remaining wrinkle is to find the Laplace transform of a spatial derivative and then we have all the tools we need to solve the diffusion equation using Laplace transforms.
[Top of Page](#Sections)
To find the Laplace transform of the derivative of $f$ with respect to $x$, let us start with `sympy`:
```python
sp.laplace_transform(sp.diff(f(x,t),x), t, s)
```
OK. So - we need to dive a bit deeper on how to define things here. The quick answer is that you can change the order of operations.
The longer answer is that the operations for [differentiation under the integral](https://en.wikipedia.org/wiki/Leibniz_integral_rule) commute under certain conditions. The previous link and [this](https://en.wikipedia.org/wiki/Interchange_of_limiting_operations) one provide some needed background. We've encountered this idea before when we define cross derivatives under partial differentiation:
$$
\frac{\partial}{\partial y} \left( \frac{\partial f}{\partial x} \right) \equiv \frac{\partial^2 f}{\partial y \partial x} = \frac{\partial^2 f}{\partial x \partial y}
$$
For what it's worth - I scanned my calculus texbook and an applied math text and neither text indexes this rule for easy reference. I'm not sure how you would learn this on your own - except to rationalize that integration and differentiation are limit processes and if the variables of the limits are independent then maybe the operations commute. The diffusion text (from which I pulled this discussion) does not provide reference or proof that the limit operations (i.e. differentation and integration) can be interchanged - the text merely says that it can be done.
$$
\mathcal{L}\left( \frac{\partial^2 f(x,t)}{\partial x^2} \right) = \frac{\partial^2}{\partial x^2} \mathcal{L} (f(x,t)) = \frac{\partial^2 A(x)}{\partial x^2}
$$
Ultimately, once you use the Laplace transform on the diffusion equation:
$$
D \frac{\partial^2 c}{\partial x^2} = \frac{\partial c}{\partial t}
$$
you end up with an ordinary differential equation in x alone. The time dependence is removed by the integral. A is the tranformed function:
$$
D \frac{\partial^2 A(x)}{\partial x^2} = s A(x)
$$
[Top of Page](#Sections)
### In Class - Part II
----
#### Solving a Differential Equation Using Laplace Transforms
Solving this differential equation is as simple as calling DSolve and investigating the behavior at the boundaries. Don't forget to transform the boundary conditions, too. The notation "A" here is used to remind you that after the Laplace transform we are in a different space.
```python
import sympy as sp
sp.init_printing()
```
```python
f = sp.symbols('f', cls=sp.Function)
x = sp.symbols('x', real=True)
s = sp.symbols('s', real=True, positive=True)
C0, D = sp.symbols('C0 D', real=True, positive=True)
C1, C2 = sp.symbols('C1 C2')
t = sp.symbols('t', real=True, positive=True)
```
The problem of interest here is one where the substance, $c(x,t)$ is diffusing into an semi-infinite ($0 \leq x \leq \infty$) medium that is initially devoid of any solute (e.g. $c(x,0) = 0$). Note that this initial condition explicitly sets the value of one of the terms in the Laplace transform of the time derivative of $c(x,t)$. By virtue of a Laplace transform of Fick's law:
$$
\frac{\partial c(x,t)}{\partial t} = D \frac{\partial^2 c(x,t)}{\partial x^2}
$$
we arrive at the differential equation of interest (where `f` is the Laplace transform of $c(x,y)$:
$$
\left. s \,\, f(x) - f(x) \right|_{t=0} = D \frac{\partial^2 f(x)}{\partial x^2}
$$
in one variable, $x$ as we've integrated the time variable out of the PDE by the Laplace transform. The second term on the LHS is zero by virtue of the initial conditions.
```python
equationToSolve = sp.Eq((sp.diff(f(x),x,2) - (s/D)*f(x)),0)
equationToSolve
```
Calling `sp.dsolve` to find the function $f(x)$ we get:
```python
solutionToEquation = sp.dsolve(equationToSolve,f(x))
solutionToEquation
```
#### Applying the Boundary Conditions
Now we use the boundary conditions to evaluate the constants. What happens as x goes to infinity?
```python
solutionToEquation.subs(x,sp.oo)
```
C2 must therefore be zero. We also know that at $x=0$ and $t=0$ the concentration of the diffusant is $C_0$. We Laplace transform this boundary condition:
```python
sp.laplace_transform(C0,t,s)
```
```python
solutionToEquation.subs([(C2,0),(x,0)])
```
Identifying that $C_1 = C_0/s$. Making the final substitutions:
```python
thingToInverseTransform = solutionToEquation.subs([(C2,0),(C1,C0/s)])
thingToInverseTransform.rhs
```
You can use the `rhs` and `lhs` attributes to get the bits of the functions you are interested in.
```python
sp.inverse_laplace_transform(thingToInverseTransform.rhs, s, t)
```
It helps to know that $\mathrm{erfc}(z) = 1 - \mathrm{erf}(z)$. With this substition our final equation for the domain $\{x\, |\, 0 \leq x \leq \infty \}$ is:
$$
c(x,t) = C_0 \, \mathrm{erfc} \left( \frac{x}{2\sqrt{Dt}} \right)
$$
#### DIY: Plot the solution and use `interact` to explore the effect of variables
```python
# Your code goes here.
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import erfc
from ipywidgets import interact, fixed
def awesomeLaplaceFunction(x):
return x**2
def makeAwesomePlotForProfLewis(x_finish, numPoints):
# create a linspace that has 0 to x_finish
# set to "x" value
x = np.linspace(start=0, stop=x_finish, num=numPoints)
# call awesomeLaplaceFunction and set to "y" value.
y = awesomeLaplaceFunction(x)
# make a plot (I will copy code from elsewhere for this one)
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8]) # left, bottom, width, height (range 0 to 1)
axes.semilogx(xb, gamma, 'r')
# Setting the y-limit cleans up the plot.
axes.set_ylim([0,0.8])
axes.set_xlabel('Bulk Concentration $x_b$')
axes.set_ylabel('Surface Tension $\gamma$')
axes.set_title('Surface Tension Change due to Adsorption');
return None
interact(makeAwesomePlotForProfLewis, (arg1,0,1), (arg2,0,1));
```
_Question: All those times we write $x \propto \sqrt{DT}$ what are we really saying? Did playing with the values above give you some insight?_
### Diffusion from a Point Source Located at the Origin
You won't have the above figure, but you should be able to make your own `interact` visualization to make one of your own.
Recall that the transform of:
$$
\frac{\partial c(x,t)}{\partial t} = D \frac{\partial^2 c(x,t)}{\partial x^2}
$$
is:
$$
\left. s \,\, f(x) - f(x) \right|_{t=0} = D \frac{\partial^2 f(x)}{\partial x^2}
$$
where $f$ is the Laplace transform of $c$.
Now we consider a point source located at the origin of our coordinate system. Note that $c(x,0) = 0$ everywhere EXCEPT at $x=0$ where we have an amount of substance $M$.
In this problem we are solving over all space. Recalling our general solution to this problem in Laplace space:
$$
f{\left (x \right )} = C_{1} e^{- \frac{\sqrt{s} x}{\sqrt{D}}} + C_{2} e^{\frac{\sqrt{s} x}{\sqrt{D}}}
$$
Let us examine the behavior of $C_1$ and $C_2$. As $x \rightarrow \infty$, $C_2$ must be zero and as $x \rightarrow -\infty$, $C_1$ must go to zero to ensure that the solution remains finite.
So we can use the symmetry of the problem to our advantage to help us arrive at a solution.
The mass that will diffuse into the $x>0$ part of the system will be $M/2$ so we can define the equation we want to solve, and then Laplace transform the LHS boundary condition.
```python
# some initial imports.
import sympy as sp
sp.init_printing()
```
```python
f, c = sp.symbols('f c', cls=sp.Function)
x = sp.symbols('x', real=True)
s = sp.symbols('s', real=True, positive=True)
C0, D, M = sp.symbols('C0 D M', real=True, positive=True)
C1, C2 = sp.symbols('C1 C2')
t = sp.symbols('t', real=True, positive=True)
```
A quick reminder of the differential equation of interest:
$$
\left. s \,\, f(x) - f(x) \right|_{t=0} = D \frac{\partial^2 f(x)}{\partial x^2}
$$
```python
equationToSolve = None
equationToSolve
```
Now we invoke `dsolve`.
```python
solutionToEquation = sp.dsolve(equationToSolve,f(x))
solutionToEquation
```
Consider the solution and what the values of the constants must be given the physical constraints on the problem. Reason out the value of one of the constants and substitute.
```python
solutionXPositive = solutionToEquation # What substitution do you have to make?
solutionXPositive
```
Let us write a definite integral for our mass constraint
$$
\int^\infty_0 c(x,t) = \frac{M}{2}
$$
```python
massConstraint = sp.Eq(sp.Integral(c(x,t),(x,0,sp.oo)),M/2)
massConstraint
```
In order to use the integral as a boundary condition we need to Laplace transform both sides of the equation. What do we know?
* The LHS is a limit process independent of the variable of the transform.
* The RHS is a constant.
```python
[sp.laplace_transform(massConstraint.lhs, t, s), \
sp.laplace_transform(massConstraint.rhs, t, s)]
```
```python
solutionXPositive
```
Inspecting the above and changing the order of integration operations (i.e. move the Laplace transform inside the other integral) in the transform of the mass constraint, we know the following must be true.
```python
massConstraintIntegral = sp.Eq(sp.Integral(solutionXPositive.rhs,\
(x,0,sp.oo)),M/2/s)
massConstraintIntegral
```
Do the integration:
```python
massConstraintEvaluated = massConstraintIntegral.doit()
massConstraintEvaluated
```
Solve for the contant.
```python
constantOne = sp.solveset(massConstraintEvaluated,C1)
constantOne
```
In the cell below - put your value for $C_1$ as determined above.
```python
finalResults = solutionXPositive.subs(C1, None)
finalResults
```
Invert the transformation.
```python
sp.inverse_laplace_transform(finalResults.rhs, s, t)
```
Our final solution is then:
$$
c(x,t) = \frac{M}{2\sqrt{\pi D t}} \exp \left(- \frac{x^{2}}{4 D t} \right)
$$
[Top of Page](#Sections)
### DIY: Use `interact` to create a visualization of the above solution.
```python
# Your code goes here.
```
[Top of Page](#Sections)
### Homework
----
1. Write a function that sums the contributions from an arbitrary number of point sources located at an arbitrary position along the 1D space. (e.g. one point source at $x=1$ and another at $x=5$, etc.)
1. Generalize this to an infinite number of point sources located in the half domain $-\infty \leq x \leq 0$. When done correctly (integrate!) you will get the diffusion couple solution. This is a standard diffusion analysis technique meaning that there are plenty of resources available describing the procedure.
[Top of Page](#Sections)
### Looking Ahead
----
Next topic is numerical solutions to the diffusion equation. We have examined the analytical methods so that we begin building foundational knowledge of "standard" diffusion geometries and solutions. This gives us a way to check our numerical solutions and the errors that are part of said solutions.
We will be once again looking at storing numerical data in arrays and operations on those arrays. The main intellectual challenge will be thinking of arrays of numerical data as different "states" of the system. The operations on those arrays are governed by the differential governing equations.
[Top of Page](#Sections)
### Reading Assignments and Practice
----
* The solutions here are found in standard texbooks. You can look at authors such as: Crank, Shewmon, and Glicksman. Some texts are more mathematical than others - some are more materials focused than others.
* Having access to a table of Laplace transforms will be helpful. The computer algebra system doesn't really provide that much help in the cases above. Mostly it keeps you from making silly mistakes. So - practice with a transforms table by your side.
* Try and solve some diffusion problems that you devise yourself. See if you can get to the point where you understand that there is really only one solution - it is just that the solutions are re-scaled for each case.
[Top of Page](#Sections)
```python
```
| ea97e5b77f0df46715a1f926d1300fb369717be3 | 78,940 | ipynb | Jupyter Notebook | Lecture-16-Laplace-Transforms.ipynb | juhimgupta/MTLE-4720 | 41797715111636067dd4e2b305a782835c05619f | [
"MIT"
]
| 23 | 2017-07-19T04:04:38.000Z | 2022-02-18T19:33:43.000Z | Lecture-16-Laplace-Transforms.ipynb | juhimgupta/MTLE-4720 | 41797715111636067dd4e2b305a782835c05619f | [
"MIT"
]
| 2 | 2019-04-08T15:21:45.000Z | 2020-03-03T20:19:00.000Z | Lecture-16-Laplace-Transforms.ipynb | juhimgupta/MTLE-4720 | 41797715111636067dd4e2b305a782835c05619f | [
"MIT"
]
| 11 | 2017-07-27T02:27:49.000Z | 2022-01-27T08:16:40.000Z | 41.243469 | 2,876 | 0.680454 | true | 5,211 | Qwen/Qwen-72B | 1. YES
2. YES | 0.787931 | 0.831143 | 0.654884 | __label__eng_Latn | 0.988043 | 0.359845 |
```python
import os
import sys
import glob
import operator as op
import itertools as it
from functools import reduce, partial
import numpy as np
import pandas as pd
from pandas import DataFrame, Series
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context("notebook", font_scale=1.5)
%matplotlib inline
from sympy import symbols, hessian, Function, N
```
## Univariate Optimization
### Interval elimination
#### Bracketing
To bracket a minimum, we need 3 points such that $a \le b \le c$ and $f(b) \le f(a)$, $f(b \le f(c)$. Interval elimination is the minimization analog of bisection. One famous method we will describe in class is Golden Section search.
#### Golden section search
Let the relative length of $b$ from $a$ be $w$. That is
$$
b - a = w \\
c - a = 1
$$
We choose a new point $x$ between $a$ and $b$ - Suppose $x$ lies a distance $z$ beyond $b$.
If $f(x) > 4(b)$, the next bracket is $(a, b, x)$; otherwise the next bracket is $(b, x, c)$.
How do we choose the distance $z$?
There are two possible values for the next bracket length $s$, $s = w + z$ or $s = 1 - w$.
Since we have no information, we make them equal giving $z = 1 - 2w$.
By a self-similarity argument, we also know that $\frac{z}{1-w} = w$.
Eliminating $z$ from these equations gives $w^2 - 3w + 1 = 0$.
#### Interpolation
Near a minimum, Taylor series tells us that the function will look like a quadratic. So quadratic interpolation is a reasonable approach. For a quadratic $ax^2 + bx + c = 0$, the minimum is given by $\frac{-b}{2a}$. Given 3 points on the function, we can solve for $a, b, c$ as a linear system, and set the next point to be $x = \frac{-b}{2a}$.
#### Newton method
Differentiating the Taylor series with only linear terms for $f$ and setting $\frac{df}{dx} = 0$, we get
$$
x_{k+1} = x_{k} - \frac{f'(x)}{f''(x)}
$$
Derivatives can be expensive to calculate, so they may be replaced by finite approximations - these are called Quasi-Newton methods.
Alternatively, the regula falsi method replaces the second derivative with the secant approximation of the first derivative, where the two points are chosen so that the first derivative have different signs - i.e. they bracket a critical point.
# Algorithms for Optimization and Root Finding for Multivariate Problems
**Note**: much of the following notes are taken from Nodecal, J., and S. J. Wright. "Numerical optimization." (2006). It is available online via the library.
## Convexity
A subset $A\subset \mathbb{R}^n$ is *convex* if for any two points $x,y\in A$, the line segment:
$$tx + (1-t)y \;\;\;\;\;\; t\in [0,1]$$
is also in $A$
A function $f:\mathbb{R}^n \rightarrow \mathbb{R}$ is *convex* if its domain $D$ is a convex set and for any two points $x,y\in D$, the graph of $f$ (a subset of $\mathbb{R}^{n+1})$ lies below the line:
$$tf(x) + (1-t)f(y)\;\;\;\;\;t\in [0,1]$$
i.e.
$$f(tx+(1-t)y) \leq tf(x) + (1-t)f(y)\;\;\;\;\;t\in [0,1]$$
#### Convexity guarantees that if an optimizer converges, it converges to the global minimum.
Luckily, we often encounter convex problems in statistics.
## Line Search Methods
There are essentially two classes of multivariate optimization methods. We'll cover line search methods, but refer the reader to Nodecal and Wright for discussion of 'trust region methods'. We should note that all of these methods require that we are 'close' to the minimum (maximum) we are seeking, and that 'noisy' functions or ill-behaved functions are beyond our scope.
A line search method is exactly as it sounds - we search on a line (in $n$ dimensional space) and try to find a minimum. We start with an initial point, and use an iterative method:
$$x_{k+1} = x_k + \alpha_k p_k$$
where $\alpha_k$ is the *step size* and $p_k$ is the search direction. These are the critical choices that change the behavior of the search.
#### Step Size
Ideally, (given a choice of direction, $p_k$) we would want to minimize:
$$\varphi(\alpha) = f(x_k + \alpha p_k)$$
with respect to $\alpha$. This is usually computationally intensive, so in practice, a sequence of $\alpha$ candidates are generated, and then the 'best' is chosen according to some 'conditions' (e.g. see Wolfe conditions below). We won't be going into detail regarding these. The important thing to know is that they ensure that $f$ decreases sufficiently, according to some conditions. Interested students should see Nodecal
##### Wolfe conditions
- Condition 1: $f(x_k + \alpha_k p) \le f(x_k) + c_1 p_k^T \nabla f(x_k)$
Function is decreased sufficiently after step.
- Condition 2: $p_k^T \nabla f(x_k + \alpha_k p) > c_2 p_k \nabla f(x_k)$$
Slope is decreased sufficiently after step.
## Steepest Descent
In steepest descent, one chooses $p_k=\nabla f_k = \nabla f(x_k)$. It is so named, because the gradient points in the direction of steepest ascent, thus, $-\nabla f_k$ will point in the direction of steepest descent. We'll consider this method in its ideal case, that of a quadratic:
$$f(x) = \frac12 x^TQx - b^Tx$$
where $Q$ is positive-definite and symmetric. Note that:
$$\nabla f = Qx -b$$
so the minimum occurs at $x$ such that
$$Qx= b$$
Clearly, we can solve this easily, but let's walk through the algorithm and first find the (ideal) step length:
$$f(x_k - \alpha \nabla f_k) = \frac12\left(x_k - \alpha \nabla f_k\right)^TQ\left(x_k - \alpha \nabla f_k\right) - b^T \left(x_k - \alpha \nabla f_k\right) $$
If we differentiate this with respect to $\alpha$ and find the zero, we obtain:
$\alpha_k = \frac{\nabla f_k^T\nabla f_k}{\nabla f_k^TQ\nabla f_k}$
Thus,
$$x_{k+1} = x_k - \frac{\nabla f_k^T\nabla f_k}{\nabla f_k^TQ\nabla f_k} \nabla f_k$$
But we know that $\nabla f_k = Qx_k -b$, so we have a closed form solution for $x_{k+1}$. This allows us to compute an error bound. Again, details can be found in the text, but here is the result:
$$||x_{k+1} - x^*||_Q^2 \leq \left(\frac{\lambda_n - \lambda_1}{\lambda_n+\lambda_1}\right)^2 ||x_{k} - x^*||_Q^2$$
where $0<\lambda_1\leq ... \leq \lambda_n$ and $x^*$ denotes the minimizer.
Now, if $\lambda_1=...=\lambda_n = \lambda$, then $Q=\lambda I$, the algorithm converges in one step. Geometrically, the contours are ellipsoids, the value of $\frac{\lambda_n}{\lambda_1}$ elongates the axes and causes the steps to 'zig-zag'. Because of this, convergence slows as $\frac{\lambda_n}{\lambda_1}$ increases.
## Newton's Method
Newton's method is another line-search, and here
$$p_k = -H^{-1}\nabla f_k$$
Note that if the Hessian is not positive definite, this may not always be a descent direction.
In the neighborhood of a local minimum, the Hessian *will* be positive definite. Now, if $x_0$ is 'close enough' to the minimizer $x^*$, the step size $\alpha_k =1$ gives quadratic convergence.
The advantage of multiplying the gradient by the inverse of the Hessian is that the gradient is corrected for curvature, and the new direction points toward the minimum.
```python
#def Quad(x):
# return (x[1:])*np.sin(x[:-1])**2.0)
#def DQuad(x,y):
# return (np.array([np.cos(x)*np.sin(y)**2.0,2.0*np.sin(x)*np.cos(y)**2.0]))
def Quad(x):
return ((x[1:])**2.0 + 5*(x[:-1])**2.0)
def DQuad(x,y):
return (np.array([2.0*x,10.0*y]))
```
```python
x = np.linspace(-20,20, 100)
y = np.linspace(-20,20, 100)
X, Y = np.meshgrid(x, y)
Z = Quad(np.vstack([X.ravel(), Y.ravel()])).reshape((100,100))
Hinv=-np.array([[0.5,0],[0,0.1]])
```
```python
plt.figure(figsize=(12,4))
plt.subplot(121)
plt.contour(X,Y,Z);
plt.title("Steepest Descent");
step=-0.25
X0 = 10.0
Y0 = 1.0
Ngrad=Hinv.dot(DQuad(X0,Y0))
sgrad = step*DQuad(X0,Y0)
plt.quiver(X0,Y0,sgrad[0],sgrad[1],color='red',angles='xy',scale_units='xy',scale=1);
X1 = X0 + sgrad[0]
Y1 = Y0 + sgrad[1]
sgrad = step*DQuad(X1,Y1)
plt.quiver(X1,Y1,sgrad[0],sgrad[1],color='green',angles='xy',scale_units='xy',scale=1);
X2 = X1 + sgrad[0]
Y2 = Y1 + sgrad[1]
sgrad = step*DQuad(X2,Y2)
plt.quiver(X2,Y2,sgrad[0],sgrad[1],color='purple',angles='xy',scale_units='xy',scale=1);
plt.subplot(122)
plt.contour(X,Y,Z);
plt.title("Newton's Method")
plt.quiver(X0,Y0,Ngrad[0],Ngrad[1],color='purple',angles='xy',scale_units='xy',scale=1);
#Compute Hessian and plot again.
```
## Coordinate Descent
Another method is called 'coordinate' descent, and it involves searching along coordinate directions (cyclically), i.e.:
$$p_{mk} = e_{k} \;\;\;\;\;\; k=1,...,n$$
where $m$ is the number of steps.
The main advantage is that $\nabla f$ is not required. It can behave reasonably well, if coordinates are not tightly coupled.
### Newton CG Algorithm
Features:
* Minimizes a 'true' quadratic on $\mathbb{R}^n$ in $n$ steps
* Does NOT require storage or inversion of an $n \times n$ matrix.
We begin with $:\mathbb{R}^n\rightarrow \mathbb{R}$. Take a quadratic approximation to $f$:
$$f(x) \approx \frac12 x^T H x + b^Tx + c$$
Note that in the neighborhood of a minimum, $H$ will be positive-definite (and symmetric). (If we are maximizing, just consider $-H$).
This reduces the optimization problem to finding the zeros of
$$Hx = -b$$
This is a linear problem, which is nice. The dimension $n$ may be very large - which is not so nice.
#### General Inner Product
Recall the axiomatic definition of an inner product $<,>_A$:
* For any two vectors $v,w$ we have
$$<v,w>_A = <w,v>_A$$
* For any vector $v$
$$<v,v>_A \;\geq 0$$
with equality $\iff$ $v=0$.
* For $c\in\mathbb{R}$ and $u,v,w\in\mathbb{R}^n$, we have
$$<cv+w,u> = c<v,u> + <w,u>$$
These properties are known as symmetric, positive definite and bilinear, respectively.
Fact: If we denote the standard inner product on $\mathbb{R}^n$ as $<,>$ (this is the 'dot product'), any symmetric, positive definite $n\times n$ matrix $A$ defines an inner product on $\mathbb{R}^n$ via:
$$<v,w>_A \; = <v,Aw> = v^TAw$$
Just as with the standard inner product, general inner products define for us a notion of 'orthogonality'. Recall that with respect to the standard product, 2 vectors are orthogonal if their product vanishes. The same applies to $<,>_A$:
$$<v,w>_A = 0 $$
means that $v$ and $w$ are orthogonal under the inner product induced by $A$. Equivalently, if $v,w$ are orthogonal under $A$, we have:
$$v^TAw = 0$$
This is also called *conjugate* (thus the name of the method).
#### Conjugate Vectors
Suppose we have a set of $n$ vectors $p_1,...,p_n$ that are mutually conjugate. These vectors form a basis of $\mathbb{R}^n$. Getting back to the problem at hand, this means that our solution vector $x$ to the linear problem may be written as follows:
$$x = \sum\limits_{i=1}^n \alpha_i p_i$$
So, finding $x$ reduces to finding a conjugate basis and the coefficients for $x$ in that basis.
If we let $A=H$,note that:
$${p}_k^{T} {-b}={p}_k^{T} {A}{x}$$
and because $x = \sum\limits_{i=1}^n \alpha_i p_i$, we have:
$$p^TAx = \sum\limits_{i=1}^n \alpha_i p^TA p_i$$
we can solve for $\alpha_k$:
$$\alpha_k = \frac{{p}_k^{T}{(-b)}}{{p}_k^{T} {A}{p}_k} = -\frac{\langle {p}_k, {b}\rangle}{\,\,\,\langle {p}_k, {p}_k\rangle_{A}} = -\frac{\langle{p}_k, {b}\rangle}{\,\,\,\|{p}_k\|_{A}^2}.$$
Now, all we need are the $p_k$'s.
A nice initial guess would be the gradient at some initial point $x_1$. So, we set $p_1 = \nabla f(x_1)$. Then set:
$$x_2 = x_1 + \alpha_1p_1$$
This should look familiar. In fact, it is gradient descent. For $p_2$, we want $p_1$ and $p_2$ to be conjugate (under $A$). That just means orthogonal under the inner product induced by $A$. We set
$$p_2 = \nabla f(x_2) - \frac{p_1^TA\nabla f(x_2)}{{p}_1^{T}{A}{p}_1} {p}_1$$
I.e. We take the gradient at $x_1$ and subtract its projection onto $p_1$. This is the same as Gram-Schmidt orthogonalization.
The $k^{th}$ conjugate vector is:
$$p_{k} = \nabla f(x_k) - \sum\limits_{i=1}^{k-1}\frac{p_i^T A \nabla f(x_k)}{p_i^TAp_i} p_i$$
The 'trick' is that in general, we do not need all $n$ conjugate vectors. In fact, it turns out that $\nabla f(x_k) = b-Ax_k$ is conjugate to all the $p_i$ for $i=1,...,k-2$. Therefore, we need only the last term in the sum.
Convergence rate is dependent on sparsity and condition number of $A$. Worst case is $n^2$.
### BFGS - Broyden–Fletcher–Goldfarb–Shanno
BFGS is a 'quasi' Newton method of optimization. Such methods are variants of the Newton method, where the Hessian $H$ is replaced by some approximation. We we wish to solve the equation:
$$B_k{p}_k = -\nabla f({x}_k)$$
for $p_k$. This gives our search direction, and the next candidate point is given by:
$$x_{k+1} = x_k + \alpha_k p_k$$.
where $\alpha_k$ is a step size.
At each step, we require that the new approximate $H$ meets the secant condition:
$$B_{k+1}(x_{k+1}-x_k) = \nabla f(x_{k+1}) -\nabla f(x_k)$$
There is a unique, rank one update that satisfies the above:
$$B_{k+1} = B_k + c_k v_kv_k^T$$
where
$$ c_k = -\frac{1}{\left(B_k(x_{k+1}-x_k) - (\nabla f(x_{k+1})-\nabla f(x_k)\right)^T (x_{k+1}-x_k) }$$
and
$$v_k = B_k(x_{k+1}-x_k) - (\nabla f(x_{k+1})-\nabla f(x_k))$$
Note that the update does NOT preserve positive definiteness if $c_k<0$. In this case, there are several options for the rank one correction, but we will not address them here. Instead, we will describe the BFGS method, which almost always guarantees a positive-definite correction. Specifically:
$$B_{k+1} = B_k + b_k g_k g_k^T + c_k B_k d_k d_k^TB_k$$
where we have introduced the shorthand:
$$g_k = \nabla f(x_{k+1}) - \nabla f(x_k) \;\;\;\;\;\;\;\ \mathrm{ and }\;\;\;\;\;\;\; d_k = x_{k+1} - x_k$$
If we set:
$$b_k = \frac{1}{g_k^Td_k} \;\;\;\;\; \mathrm{ and } \;\;\;\;\; c_k = \frac{1}{d_k^TB_kd_k}$$
we satisfy the secant condition.
### Nelder-Mead Simplex
While Newton's method is considered a 'second order method' (requires the second derivative), and quasi-Newton methods are first order (require only first derivatives), Nelder-Mead is a zero-order method. I.e. NM requires only the function itself - no derivatives.
For $f:\mathbb{R}^n\rightarrow \mathbb{R}$, the algorithm computes the values of the function on a simplex of dimension $n$, constructed from $n+1$ vertices. For a univariate function, the simplex is a line segment. In two dimensions, the simplex is a triangle, in 3D, a tetrahedral solid, and so on.
The algorithm begins with $n+1$ starting points and then the follwing steps are repeated until convergence:
* Compute the function at each of the points
* Sort the function values so that
$$f(x_1)\leq ...\leq f(x_{n+1})$$
* Compute the centroid $x_c$ of the n-dimensional region defined by $x_1,...,x_n$
* Reflect $x_{n+1}$ about the centroid to get $x_r$
$$x_r = x_c + \alpha (x_c - x_{n+1})$$
* Create a new simplex according to the following rules:
- If $f(x_1)\leq f(x_r) < f(x_n)$, replace $x_{n+1}$ with $x_r$
- If $f(x_r)<f(x_1)$, expand the simplex through $x_r$:
$$x_e = x_c + \gamma (x_c - x_{n+1})$$
If $f(x_e)<f(x_r)$, replace $x_{n+1}$ with $x_e$, otherwise, replace $x_{n+1}$ with $x_r$
- If $f({x}_{r}) \geq f({x}_{n})$, compute $x_p = x_c + \rho(x_c - x_{n+1})$. If $f({x}_{p}) < f({x}_{n+1})$, replace $x_{n+1}$ with $x_p$
- If all else fails, replace *all* points except $x_1$ according to
$$x_i = {x}_{1} + \sigma({x}_{i} - {x}_{1})$$
The default values of $\alpha, \gamma,\rho$ and $\sigma$ in scipy are not listed in the documentation, nor are they inputs to the function.
### Powell's Method
Powell's method is another derivative-free optimization method that is similar to conjugate-gradient. The algorithm steps are as follows:
Begin with a point $p_0$ (an initial guess) and a set of vectors $\xi_1,...,\xi_n$, initially the standard basis of $\mathbb{R}^n$.
- Compute for $i=1,...,n$, find $\lambda_i$ that minimizes $f(p_{i-1} +\lambda_i \xi_i)$ and set $p_i = p_{i-1} + \lambda_i\xi_i$
- For $i=1,...,n-1$, replace $\xi_{i}$ with $\xi_{i+1}$ and then replace $\xi_n$ with $p_n - p_0$
- Choose $\lambda$ so that $f(p_0 + \lambda(p_n-p_0)$ is minimum and replace $p_0$ with $p_0 + \lambda(p_n-p_0)$
Essentially, the algorithm performs line searches and tries to find fruitful directions to search.
## Solvers
### Levenberg-Marquardt (Damped Least Squares)
Recall the least squares problem:
Given a set of data points $(x_i, y_i)$ where $x_i$'s are independent variables (in $\mathbb{R}^n$ and the $y_i$'s are response variables (in $\mathbb{R}$), find the parameter values of $\beta$ for the model $f(x;\beta)$ so that
$$S(\beta) = \sum\limits_{i=1}^m \left(y_i - f(x_i;\beta)\right)^2$$
is minimized.
If we were to use Newton's method, our update step would look like:
$$\beta_{k+1} = \beta_k - H^{-1}\nabla S(\beta_k)$$
Gradient descent, on the other hand, would yield:
$$\beta_{k+1} = \beta_k - \gamma\nabla S(\beta_k)$$
Levenberg-Marquardt adaptively switches between Newton's method and gradient descent.
$$\beta_{k+1} = \beta_k - (H + \lambda I)^{-1}\nabla S(\beta_k)$$
When $\lambda$ is small, the update is essentially Newton-Gauss, while for $\lambda$ large, the update is gradient descent.
### Newton-Krylov
The notion of a Krylov space comes from the Cayley-Hamilton theorem (CH). CH states that a matrix $A$ satisfies its characteristic polynomial. A direct corollary is that $A^{-1}$ may be written as a linear combination of powers of the matrix (where the highest power is $n-1$).
The Krylov space of order $r$ generated by an $n\times n$ matrix $A$ and an $n$-dimensional vector $b$ is given by:
$$\mathcal{K}_r(A,b) = \operatorname{span} \, \{ b, Ab, A^2b, \ldots, A^{r-1}b \}$$
These are actually the subspaces spanned by the conjugate vectors we mentioned in Newton-CG, so, technically speaking, Newton-CG is a Krylov method.
Now, the scipy.optimize newton-krylov solver is what is known as a 'Jacobian Free Newton Krylov'. It is a very efficient algorithm for solving *large* $n\times n$ non-linear systems. We won't go into detail of the algorithm's steps, as this is really more applicable to problems in physics and non-linear dynamics.
## GLM Estimation and IRLS
Recall generalized linear models are models with the following components:
* A linear predictor $\eta = X\beta$
* A response variable with distribution in the exponential family
* An invertible 'link' function $g$ such that
$$E(Y) = \mu = g^{-1}(\eta)$$
We may write the log-likelihood:
$$\ell(\eta) = \sum\limits_{i=1}^m (y_i \log(\eta_i) + (\eta_i - y_i)\log(1-\eta_i) $$
where $\eta_i = \eta(x_i,\beta)$.
Differentiating, we obtain:
$$\frac{\partial L}{\partial \beta} = \frac{\partial \eta}{\partial \beta}^T\frac{\partial L}{\partial \eta} = 0$$
Written slightly differently than we have in the previous sections, the Newton update to find $\beta$ would be:
$$-\frac{\partial^2 L}{\partial \beta \beta^T} \left(\beta_{k+1} -\beta_k\right) = \frac{\partial \eta}{\partial \beta}^T\frac{\partial L}{\partial \eta}$$
Now, if we compute:
$$-\frac{\partial^2 L}{\partial \beta \beta^T} = \sum \frac{\partial L}{\partial \eta_i}\frac{\partial^2 \eta_i}{\partial \beta \beta^T} - \frac{\partial \eta}{\partial \beta}^T \frac{\partial^2 L}{\partial \eta \eta^T} \frac{\partial \eta}{\partial \beta}$$
Taking expected values on the right hand side and noting:
$$E\left(\frac{\partial L}{\partial \eta_i} \right) = 0$$
and
$$E\left(-\frac{\partial^2 L}{\partial \eta \eta^T} \right) = E\left(\frac{\partial L}{\partial \eta}\frac{\partial L}{\partial \eta}^T\right) \equiv A$$
So if we replace the Hessian in Newton's method with its expected value, we obtain:
$$\frac{\partial \eta}{\partial \beta}^TA\frac{\partial \eta}{\partial \beta}\left(\beta_{k+1} -\beta_k\right) = \frac{\partial \eta}{\partial \beta}^T\frac{\partial L}{\partial \eta} $$
Now, these actually have the form of the normal equations for a weighted least squares problem.
$$\min_{\beta_{k+1}}\left(A^{-1}\frac{\partial L}{\partial \eta} + \frac{\partial \eta}{\partial \beta}\left(\beta_{k+1} -\beta_k\right)\right)^T A \left(A^{-1}\frac{\partial L}{\partial \eta} + \frac{\partial \eta}{\partial \beta}\left(\beta_{k+1} -\beta_k\right)\right)$$
$A$ is a weight matrix, and changes with iteration - thus this technique is *iteratively reweighted least squares*.
### Constrained Optimization and Lagrange Multipliers
Often, we want to optimize a function subject to a constraint or multiple constraints. The most common analytical technique for this is called 'Lagrange multipliers'. The theory is based on the following:
If we wish to optimize a function $f(x,y)$ subject to the constraint $g(x,y)=c$, we are really looking for points at which the gradient of $f$ and the gradient of $g$ are in the same direction. This amounts to:
$$\nabla_{(x,y)}f = \lambda \nabla_{(x,y)}g$$
(often, this is written with a (-) sign in front of $\lambda$). The 2-d problem above defines two equations in three unknowns. The original constraint, $g(x,y)=c$ yields a third equation. Additional constraints are handled by finding:
$$\nabla_{(x,y)}f = \lambda_1 \nabla_{(x,y)}g_1 + ... + \lambda_k \nabla_{(x,y)}g_k$$
The generalization to functions on $\mathbb{R}^n$ is also trivial:
$$\nabla_{x}f = \lambda \nabla_{x}g$$
```python
```
| 3c062be855cb5f120f5696cb1bdcfa7007ad2ede | 30,148 | ipynb | Jupyter Notebook | notebooks/T07C_Optimization_Algorithms.ipynb | Yijia17/sta-663-2021 | e6484e3116c041b8c8eaae487eff5f351ff499c9 | [
"MIT"
]
| 18 | 2021-01-19T16:35:54.000Z | 2022-01-01T02:12:30.000Z | notebooks/T07C_Optimization_Algorithms.ipynb | Yijia17/sta-663-2021 | e6484e3116c041b8c8eaae487eff5f351ff499c9 | [
"MIT"
]
| null | null | null | notebooks/T07C_Optimization_Algorithms.ipynb | Yijia17/sta-663-2021 | e6484e3116c041b8c8eaae487eff5f351ff499c9 | [
"MIT"
]
| 24 | 2021-01-19T16:26:13.000Z | 2022-03-15T05:10:14.000Z | 38.651282 | 437 | 0.565377 | true | 6,819 | Qwen/Qwen-72B | 1. YES
2. YES | 0.841826 | 0.913677 | 0.769156 | __label__eng_Latn | 0.990911 | 0.62534 |
# Variational Autoencoder for the MNIST dataset
We will use a probabilistic non-linear generative model for the MNIST dataset (loaded below with `fetch_mldata`). Unlike the localization example, we will train both the **generative model parameters** and the parameters of the **amortized variational family**.
---
## ELBO lower-bound to $\log p(\mathbf{X})$
For each image $\mathbf{x}$, the evidence lower bound (ELBO) is
\begin{align}
\mathcal{L}(\mathbf{x},\theta,\eta) = \mathbb{E}_{q_{\eta,\mathbf{x}}(\mathbf{z})}\left[\log p_\theta(\mathbf{x}|\mathbf{z})\right] - \text{KL}(q_{\eta,\mathbf{x}}(\mathbf{z})||p(\mathbf{z})) \leq \log p_\theta(\mathbf{x}),
\end{align}
where $\text{KL}(q_{\eta,\mathbf{x}}(\mathbf{z})||p(\mathbf{z}))$ is known in closed form, since both $q_{\eta,\mathbf{x}}(\mathbf{z})=\mathcal{N}\left(\mu_{\eta}(\mathbf{x}),\text{diag}(\sigma_{\eta}(\mathbf{x}))\right)$ and the prior $p(\mathbf{z})=\mathcal{N}(\mathbf{0},\mathbf{I})$ are Gaussian pdfs. For a two-dimensional latent space (the setting used below):
\begin{align}
\text{KL}(q_{\eta,\mathbf{x}}(\mathbf{z})||p(\mathbf{z})) = \frac{1}{2} \left[\text{tr}\left(\text{diag}(\sigma_{\eta}(\mathbf{x}))\right)+\mu_{\eta}(\mathbf{x})^T\mu_{\eta}(\mathbf{x})-2-\log\det \left(\text{diag}(\sigma_{\eta}(\mathbf{x}))\right) \right]
\end{align}
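As a quick numerical sanity check (not part of the original notebook), the closed-form expression can be compared against a Monte-Carlo estimate of the same KL divergence:
```python
# Minimal sketch: closed-form KL(N(mu, diag(sigma)) || N(0, I)) vs. a Monte-Carlo estimate.
import numpy as np

rng = np.random.RandomState(0)
z_dim = 2
mu = rng.randn(z_dim)                 # plays the role of mu_eta(x)
sigma = rng.rand(z_dim) + 0.1         # diagonal variances sigma_eta(x)

kl_closed = 0.5 * (np.sum(sigma) + mu @ mu - z_dim - np.sum(np.log(sigma)))

# Monte-Carlo estimate of E_q[log q(z) - log p(z)]
z = mu + np.sqrt(sigma) * rng.randn(100000, z_dim)
log_q = -0.5 * np.sum((z - mu) ** 2 / sigma + np.log(2 * np.pi * sigma), axis=1)
log_p = -0.5 * np.sum(z ** 2 + np.log(2 * np.pi), axis=1)
kl_mc = np.mean(log_q - log_p)

print(kl_closed, kl_mc)               # the two values should be close
```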
## SGD optimization
- Sample a minibatch of $M$ images.
- Sample $\mathbf{\epsilon}^{(i)}$ from $\mathcal{N}(\mathbf{0},\mathbf{I})$, $i=1,\ldots,M$.
- For $i=1,\ldots,M$, compute the reparameterized samples (a minimal NumPy sketch of this step is shown right after this list)
\begin{align}
\mathbf{z}^{(i)} = \mu_\eta(\mathbf{x}^{(i)}) + \sqrt{\text{diag}(\sigma_\eta(\mathbf{x}^{(i)}))} \circ \mathbf{\epsilon}^{(i)}
\end{align}
- Compute gradients of
\begin{align}
\hat{\mathcal{L}}(\mathbf{X},\theta,\eta) =\sum_{i=1}^M \Big(\log p_\theta(\mathbf{x}^{(i)}|\mathbf{z}^{(i)}) - \text{KL}(q_{\eta,\mathbf{x}^{(i)}}(\mathbf{z})||p(\mathbf{z}))\Big)
\end{align}
w.r.t. $\theta,\eta$
- Perform SGD update
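Below is a minimal NumPy sketch of the reparameterization step, with made-up encoder outputs; it only illustrates how $\mathbf{z}^{(i)}$ becomes a deterministic (and hence differentiable) function of $\mu_\eta$ and $\sigma_\eta$ once $\mathbf{\epsilon}^{(i)}$ has been drawn.
```python
# Minimal sketch of the reparameterization trick for a minibatch (hypothetical encoder outputs).
import numpy as np

M, z_dim = 4, 2
mu = np.random.randn(M, z_dim)            # mu_eta(x^(i)), one row per image
sigma = np.random.rand(M, z_dim) + 0.1    # diagonal variances sigma_eta(x^(i))

eps = np.random.randn(M, z_dim)           # eps^(i) ~ N(0, I)
z = mu + np.sqrt(sigma) * eps             # z^(i): differentiable in (mu, sigma), random only through eps
```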
```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import tensorflow as tf
from matplotlib.offsetbox import OffsetImage, AnnotationBbox
# use seaborn plotting defaults
import seaborn as sns; sns.set()
%matplotlib inline
```
/Users/olmos/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
## Loading MNIST database
```python
from sklearn.datasets import fetch_mldata
from sklearn.model_selection import train_test_split
mnist = fetch_mldata('MNIST original')
```
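`fetch_mldata` has been removed from recent scikit-learn releases. If it is not available, a roughly equivalent loading step (an assumption, not part of the original notebook) uses `fetch_openml`; the cell below with the `train_test_split` calls stays unchanged.
```python
# Hypothetical alternative for scikit-learn versions where fetch_mldata no longer exists.
import numpy as np
from sklearn.datasets import fetch_openml

# as_frame=False keeps .data / .target as NumPy arrays (parameter available in newer releases)
mnist = fetch_openml('mnist_784', version=1, as_frame=False)
mnist.target = mnist.target.astype(np.float64)   # numeric labels, matching fetch_mldata behaviour
```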
```python
images = mnist.data.astype(np.float32)
labels = mnist.target
train_dataset,test_dataset,train_labels,test_labels = train_test_split(images, labels, test_size=0.33, random_state=42)
train_dataset,valid_dataset,train_labels,valid_labels = train_test_split(train_dataset, train_labels, test_size=0.33, random_state=42)
#Generate a random index
num_figs=3
img_size = 28
index=np.random.randint(train_dataset.shape[0], size=num_figs)
for i in range(num_figs):
plt.figure()
plt.imshow(train_dataset[index[i],:].reshape([28,28]),cmap=plt.cm.gray)
plt.title('Label:' + str(train_labels[index[i]]))
```
### TensorFlow Computation Graph and Loss Function
```python
z_dim = 2 #Latent Space
model_name = 'model1' #In 'model1.py' we define the variational family
learning_rate = 1e-3
num_imgs = 50 #Number of samples generated from the generative model (for testing)
num_iter = 20000 #SGD iterations
period_plot = 1000
sigma_reconstruction = 0.01 #Reconstruction variance
batch_size = 200
dims = [batch_size,784]
```
```python
sess_VAE = tf.Graph()
with sess_VAE.as_default():
print('[*] Importing model: ' + model_name)
model = __import__(model_name)
print('[*] Defining placeholders')
inputX = tf.placeholder(tf.float32, shape=dims, name='x-input')
print('[*] Defining the encoder')
log_var, mean, sample_z, KL = model.encoder(inputX,z_dim,batch_size)
print('[*] Defining the decoder')
loglik,img_reconstruction = model.decoder(inputX,sample_z,sigma_reconstruction,dims[1])
loss = -tf.reduce_mean(loglik - KL)
optim = tf.train.AdamOptimizer(learning_rate).minimize(loss)
print('[*] Defining Sample operation...')
samples = model.new_samples(num_imgs, z_dim, dims[1])
# Output dictionary -> Useful if computation graph is defined in a separate .py file
tf_nodes = {}
tf_nodes['X'] = inputX
tf_nodes['mean'] = mean
tf_nodes['logvar'] = log_var
tf_nodes['KL'] = tf.reduce_mean(KL)
tf_nodes['loglik'] = tf.reduce_mean(loglik)
tf_nodes['img_reconst'] = img_reconstruction
tf_nodes['optim'] = optim
tf_nodes['samples'] = samples
```
[*] Importing model: model1
[*] Defining placeholders
[*] Defining the encoder
[*] Defining the decoder
[*] Defining Sample operation...
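The file `model1.py` imported above is not shown in this notebook. The following is a hypothetical sketch of what it could contain, written only to match the signatures used above (`encoder`, `decoder`, `new_samples`); the actual implementation may differ.
```python
# Hypothetical sketch of 'model1.py' (not the actual file): a fully-connected Gaussian VAE.
import numpy as np
import tensorflow as tf

def encoder(X, z_dim, batch_size):
    with tf.variable_scope('encoder'):
        h = tf.layers.dense(X, 512, activation=tf.nn.relu)
        mean = tf.layers.dense(h, z_dim)                  # mu_eta(x)
        log_var = tf.layers.dense(h, z_dim)               # log of the diagonal variances
        eps = tf.random_normal([batch_size, z_dim])
        sample_z = mean + tf.exp(0.5 * log_var) * eps     # reparameterization trick
        # Closed-form KL(q_{eta,x}(z) || N(0, I)), one value per image
        KL = 0.5 * tf.reduce_sum(tf.exp(log_var) + mean ** 2 - 1.0 - log_var, axis=1)
    return log_var, mean, sample_z, KL

def decoder(X, z, sigma_reconstruction, x_dim, reuse=None):
    with tf.variable_scope('decoder', reuse=reuse):
        h = tf.layers.dense(z, 512, activation=tf.nn.relu)
        img_reconstruction = tf.layers.dense(h, x_dim)    # mean of the Gaussian likelihood
        # Gaussian log-likelihood log p(x|z) with fixed variance sigma_reconstruction
        loglik = -0.5 * tf.reduce_sum(
            (X - img_reconstruction) ** 2 / sigma_reconstruction
            + tf.log(2.0 * np.pi * sigma_reconstruction), axis=1)
    return loglik, img_reconstruction

def new_samples(num_imgs, z_dim, x_dim):
    z = tf.random_normal([num_imgs, z_dim])               # z ~ p(z) = N(0, I)
    # Reuse the decoder weights; the log-likelihood output is discarded here
    _, samples = decoder(tf.zeros([num_imgs, x_dim]), z, 1.0, x_dim, reuse=True)
    return samples
```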
## SGD optimization
```python
############ SGD Inference #####################################
with tf.Session(graph=sess_VAE) as session:
# Add ops to save and restore all the variables.
saver = tf.train.Saver()
tf.global_variables_initializer().run()
print('Training the VAE ...')
for it in range(num_iter):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (it * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feedDict = {tf_nodes['X'] : batch_data.reshape([-1,784])}
_,loglik,KL= session.run([tf_nodes['optim'],tf_nodes['loglik'],tf_nodes['KL']],feedDict)
if(it % period_plot ==0):
print("It = %d, loglik = %.5f, KL = %.5f" %(it,loglik,KL))
it = 0
offset = (it * batch_size) % (train_labels.shape[0] - batch_size)
batch_data_train = train_dataset[offset:(offset + batch_size), :]
batch_data_test = test_dataset[offset:(offset + batch_size), :]
#We compute the latent representation of batch images and their reconstruction
feedDict = {tf_nodes['X'] : batch_data_train.reshape([-1,784])}
z,reconstructions_train = session.run([tf_nodes['mean'],tf_nodes['img_reconst']], feed_dict=feedDict)
feedDict = {tf_nodes['X'] : batch_data_test.reshape([-1,784])}
z_test,reconstructions_test = session.run([tf_nodes['mean'],tf_nodes['img_reconst']], feed_dict=feedDict)
samples = session.run(tf_nodes['samples'])
```
Training the VAE ...
It = 0, loglik = -294680416.00000, KL = 2.37981
It = 1000, loglik = -170337856.00000, KL = 14.90301
It = 2000, loglik = -162173904.00000, KL = 17.67475
It = 3000, loglik = -157364464.00000, KL = 17.43681
It = 4000, loglik = -151082656.00000, KL = 19.02109
It = 5000, loglik = -146750832.00000, KL = 19.01055
It = 6000, loglik = -144242784.00000, KL = 17.92510
It = 7000, loglik = -141732912.00000, KL = 18.24413
It = 8000, loglik = -138781984.00000, KL = 19.72985
It = 9000, loglik = -139642464.00000, KL = 19.51037
It = 10000, loglik = -140557952.00000, KL = 19.72429
It = 11000, loglik = -132590232.00000, KL = 18.34506
It = 12000, loglik = -133731248.00000, KL = 20.11160
It = 13000, loglik = -124924768.00000, KL = 20.37013
It = 14000, loglik = -124596776.00000, KL = 20.34818
It = 15000, loglik = -134286560.00000, KL = 20.94070
It = 16000, loglik = -127114336.00000, KL = 20.84480
It = 17000, loglik = -121211256.00000, KL = 19.79773
It = 18000, loglik = -121058880.00000, KL = 19.89827
It = 19000, loglik = -124864248.00000, KL = 21.91816
### Let's plot some train images and their reconstructions
```python
# Let's plot train images and their reconstructions
n_plots_axis=5
f2, axarr2 = plt.subplots(n_plots_axis,2)
for i in range(n_plots_axis):
axarr2[i,0].imshow(batch_data_train[i,:].reshape([28,28]),cmap='gray');
axarr2[i,1].imshow(reconstructions_train[i,:].reshape([28,28]),cmap='gray')
```
### Let's plot the latent representation of train images
```python
def plot_latent_space_with_images(ax,recons_images,z_samples):
ax.clear()
for i in range(len(recons_images)):
im = OffsetImage(recons_images[i].reshape([28,28]), zoom=1,cmap='gray')
ab = AnnotationBbox(im, z_samples[i],frameon=True)
ax.add_artist(ab)
ax.set_xlim(np.min(z_samples[:,0])-1,np.max(z_samples[:,0])+1)
ax.set_ylim(np.min(z_samples[:,1])-1,np.max(z_samples[:,1])+1)
ax.set_title('Latent space Z with Images')
plt.figure()
f_latent, ax_latent = plt.subplots(1,1,figsize=(8, 8))
plot_latent_space_with_images(ax_latent,reconstructions_train,z)
plt.rcParams["figure.figsize"] = [14,14]
```
### Let's plot the latent representation of test images
```python
plt.figure()
f_latent, ax_latent = plt.subplots(1,1,figsize=(8, 8))
plot_latent_space_with_images(ax_latent,reconstructions_test,z_test)
plt.rcParams["figure.figsize"] = [14,14]
```
```python
# Let's plot test images and their reconstructions
f3, axarr3 = plt.subplots(n_plots_axis,2)
for i in range(n_plots_axis):
axarr3[i,0].imshow(batch_data_test[i,:].reshape([28,28]),cmap='gray');
axarr3[i,1].imshow(reconstructions_test[i,:].reshape([28,28]),cmap='gray')
```
```python
# Let's plot samples drawn from the generative model
n_plots_axis=5
f4, axarr4 = plt.subplots(n_plots_axis,2)
for i in range(n_plots_axis):
axarr4[i,0].imshow(samples[i,:].reshape([28,28]),cmap='gray');
axarr4[i,1].imshow(samples[i+20,:].reshape([28,28]),cmap='gray')
```
```python
```
| 26495d58a589d62b53c2f586720d0fca98bd7a81 | 377,747 | ipynb | Jupyter Notebook | Notebooks/Part_7/Variational Autoencoder/VAE_MNIST.ipynb | olmosUC3M/Introduction-to-Tensor-Flow-and-Deep-Learning | 3d173606f273f6b3e2bf3cbdccea1c4fe59af71f | [
"MIT"
]
| 4 | 2018-03-05T14:19:15.000Z | 2020-09-13T23:53:08.000Z | Notebooks/Part_7/Variational Autoencoder/VAE_MNIST.ipynb | olmosUC3M/Introduction-to-Tensor-Flow-and-Deep-Learning | 3d173606f273f6b3e2bf3cbdccea1c4fe59af71f | [
"MIT"
]
| null | null | null | Notebooks/Part_7/Variational Autoencoder/VAE_MNIST.ipynb | olmosUC3M/Introduction-to-Tensor-Flow-and-Deep-Learning | 3d173606f273f6b3e2bf3cbdccea1c4fe59af71f | [
"MIT"
]
| 1 | 2022-03-31T20:26:47.000Z | 2022-03-31T20:26:47.000Z | 672.147687 | 107,816 | 0.947494 | true | 2,903 | Qwen/Qwen-72B | 1. YES
2. YES | 0.879147 | 0.766294 | 0.673685 | __label__eng_Latn | 0.455126 | 0.403526 |
# Simple Electrical Circuit
This notebook is a simple tutorial on how to use the julia package `BondGraphs.jl` to simulate a simple electrical circuit.
This tutorial has been adapted from https://bondgraphtools.readthedocs.io/en/latest/tutorials/RC.html
```julia
# Since BondGraphs is not yet in the package manager, we will need to include it directly from Github
# NOTE: You will need Julia >= 1.7
using Pkg; Pkg.add(url="https://github.com/jedforrest/BondGraphs.jl")
using BondGraphs
```
Our first example is a simple electrical circuit with a capacitor, a resistor, and a current supply in parallel. We will first model this circuit without the current supply.
## Model Construction
We first create a `BondGraph` object which will hold all our components
```julia
model = BondGraph("RC Circuit")
```
BondGraph RC Circuit (0 Nodes, 0 Bonds)
We then create all the components of our model. Each component carries a constitutive relation; together, these relations build the model equations
```julia
C = Component(:C)
constitutive_relations(C)
```
\begin{align}
0 =& \frac{q\left( t \right)}{C} - E_1\left( t \right) \\
\frac{dq(t)}{dt} =& F_1\left( t \right)
\end{align}
```julia
R = Component(:R)
constitutive_relations(R)
```
\begin{align}
0 =& - R F_1\left( t \right) + E_1\left( t \right)
\end{align}
We also create an `EqualEffort` node, which represents Kirchhoff's voltage law: effort (voltage) is shared between the connected components
```julia
kvl = EqualEffort()
```
𝟎
Components and nodes are added to the model, and connected together as a graph network
```julia
add_node!(model, [C, R, kvl])
connect!(model, R, kvl)
connect!(model, C, kvl)
model
```
BondGraph RC Circuit (3 Nodes, 2 Bonds)
Our bond graph is fundamentally a graph object, and can be manipulated using Julia's graph functionality
```julia
using Graphs
adjacency_matrix(model)
```
3×3 SparseArrays.SparseMatrixCSC{Int64, Int64} with 2 stored entries:
⋅ ⋅ 1
⋅ ⋅ 1
⋅ ⋅ ⋅
We can view our model structure by plotting the bond graph as a graph network. A plot recipe is available for `Plots.jl`
```julia
using Plots
plot(model, fontsize=12)
```
## Simulating our Model
With a bond graph we can automatically generate a system of differential equations which combines all the constitutive relations from the components, with efforts and flows shared according to the graph structure.
```julia
constitutive_relations(model)
```
\begin{align}
\frac{dC_{+}q(t)}{dt} =& \frac{ - C_{+q}\left( t \right)}{C_{+}C R_{+}R}
\end{align}
We set values for parameters in the graph model itself. Each component comes with default values.
When substituted into our equations, we get the following relation for the capacitor charge `C.q(t)`
```julia
C.C = 1
R.R = 2
constitutive_relations(model; sub_defaults=true)
```
\begin{align}
\frac{dC_{+}q(t)}{dt} =& \frac{ - C_{+q}\left( t \right)}{2}
\end{align}
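This is the standard RC discharge equation. With the values set above ($C = 1$, $R = 2$) its analytic solution is
\begin{align}
C_{+}q(t) = C_{+}q(0)\, e^{-t/(RC)} = C_{+}q(0)\, e^{-t/2},
\end{align}
so the simulated charge below should decay exponentially with time constant $RC = 2$.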
We can solve the bond graph directly using the built-in `simulate` function.
```julia
tspan = (0., 10.)
u0 = [1] # initial value for C.q(t)
sol = simulate(model, tspan; u0)
plot(sol)
```
Under the hood, our `simulate` function is converting our bond graph into an `ODESystem` from `ModelingToolkit.jl`.
We can choose instead to create an `ODESystem` directly and handle it with whatever tools we like.
```julia
using ModelingToolkit
sys = ODESystem(model)
display("text/plain", sys)
```
Model RC Circuit with 1 equations
States (1):
  C₊q(t) [defaults to 0.0]
Parameters (2):
  C₊C [defaults to 1]
  R₊R [defaults to 2]
Incidence matrix: 1×2 SparseArrays.SparseMatrixCSC{Num, Int64} with 1 stored entry:
  × ⋅
## Expanding the circuit with a current supply
We will expand our model by adding an external current (flow) supply in parallel, represented by the component `Sf` (Source of Flow)
```julia
Is = Component(:Sf, "Is")
add_node!(model, Is)
connect!(model, Is, kvl)
plot(model, fontsize = 10)
```
The simplest external input is a constant. We set the parameter function of the current supply component to be `fs(t) = 2`. Note the additional '2' in the constitutive relation below
```julia
Is.fs = t -> 2 # Note that this is still a function of time
constitutive_relations(model)
```
\begin{align}
\frac{dC_{+}q(t)}{dt} =& \frac{ - C_{+q}\left( t \right) + 2.0 C_{+}C R_{+}R}{C_{+}C R_{+}R}
\end{align}
```julia
sol = simulate(model, tspan; u0)
plot(sol)
```
The forcing function can be more complex, such as `fs(t) = sin(2t)`
```julia
Is.fs = t -> sin(2t)
constitutive_relations(model; sub_defaults=true)
```
\begin{align}
\frac{dC_{+}q(t)}{dt} =& \frac{ - C_{+q}\left( t \right) + 2 \sin\left( 2 t \right)}{2}
\end{align}
```julia
sol = simulate(model, tspan; u0)
plot(sol)
```
The input can be any arbitrary Julia function of `t`, so long as it returns a sensible output. Note that for this to work you must register the custom function with `@register_symbolic`, so that the library knows not to simplify this function further. Note the addition of `f(t)` in the equation.
```julia
f(t) = t % 2 <= 1 ? 0 : 1 # repeating square wave
@register_symbolic f(t)
Is.fs = t -> f(t)
constitutive_relations(model; sub_defaults=true)
```
\begin{align}
\frac{dC_{+}q(t)}{dt} =& \frac{ - C_{+q}\left( t \right) + 2 f\left( t \right)}{2}
\end{align}
```julia
sol = simulate(model, tspan; u0)
plot(sol)
```
Once a function `f(t)` is registered in the model, it can be repeatedly modified and called without having to reconstruct the model.
```julia
p = plot();
for i in 1:4
f(t) = cos(i * t)
sol = simulate(model, (0., 5.); u0)
plot!(p, sol, label = "f(t) = cos($(i)t)", lw=2)
end
plot(p)
```
| 8f1db5ba970b2ef7cfac25eef4517875657ff09b | 332,464 | ipynb | Jupyter Notebook | Basic Tutorials/SimpleElectricalCircuit.ipynb | jedforrest/BondGraphsTutorials | d93b12d9c3e31c9c6b53d9997d92c81587ee299e | [
"MIT"
]
| null | null | null | Basic Tutorials/SimpleElectricalCircuit.ipynb | jedforrest/BondGraphsTutorials | d93b12d9c3e31c9c6b53d9997d92c81587ee299e | [
"MIT"
]
| null | null | null | Basic Tutorials/SimpleElectricalCircuit.ipynb | jedforrest/BondGraphsTutorials | d93b12d9c3e31c9c6b53d9997d92c81587ee299e | [
"MIT"
]
| null | null | null | 156.748703 | 11,972 | 0.687813 | true | 1,778 | Qwen/Qwen-72B | 1. YES
2. YES | 0.937211 | 0.833325 | 0.781001 | __label__eng_Latn | 0.964678 | 0.652859 |
```python
from IPython.display import display, Math, Latex
import matplotlib.pyplot as plt
%matplotlib inline
#display(Math(r'F(K) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} dx')) #exemplo
```
<div style="font-family: 'Times New Roman', Times, serif; font-size: 16px;">
<h1>SymPy</h1>
<p>Using the <b style="color: blue;">SymPy</b> symbolic mathematics library to compute:</p>
<ul style="font-weight: bold">
<li>derivatives</li>
<li>tangent lines</li>
<li>higher-order derivatives</li>
<li>maxima and minima</li>
</ul>
</div>
```python
import sympy as sp
# the cells below use these names without the sp. prefix, so import them explicitly
from sympy import Lambda, Rational, diff, plot
```
```python
sp.init_printing()
sp.var('x, y')
```
$f(x) = (x^3 - 3x + 2)e^{-x/4} - 1$
```python
f = sp.Lambda(x, (x**3 - 3*x + 2)*sp.exp(-x/4) - 1)
f
```
<div style="font-family: 'Times New Roman', Times, serif; font-size: 16px;">
<h2>Derivatives</h2>
<p>Using the <b style="color: blue;">SymPy</b> symbolic mathematics library to compute the derivative of the function</p>
</div>
$ f' = \dfrac{df}{dx} $.
```python
diff( f(x), x )
```
To evaluate the derivative at a point, for example to compute $f'(1)$:
```python
diff( f(x),x ).subs(x, 1)
```
We can also define the derivative function of $f$:
```python
fl = Lambda(x, diff(f(x), x))
fl
```
With this, $f'(1)$ can be computed as:
```python
fl(1)
```
**Exercise**
Given $g(x) = x^2 + \dfrac{1}{2}$, compute $g'(1)$.
```python
# Solution
g = Lambda(x, (x**2) + Rational(1, 2))
g
```
```python
gl = Lambda(x, diff(g(x), x))
gl(1)  # the exercise asks for g'(1), which evaluates to 2
```
<div style="font-family: 'Times New Roman', Times, serif; font-size: 16px;">
<h2>Tangent Line</h2>
</div>
Here we will see how to compute the tangent line to the graph of the function $f$ at the point $x_0 = - \dfrac{1}{2}$. Recall that this tangent line has the equation:
$y = f'(x_0)(x - x_0) + f(x_0)$
Therefore, we can define the affine function whose graph is the tangent line with the following command:
```python
x0 = -1/2
r = Lambda(x, fl(x0)*(x-x0) + f(x0))
r
```
Let us look at the graphs of $f(x)$ and of the computed tangent line.
```python
p = plot(f(x), (x, -2, 2), line_color='blue', show=False)
q = plot(r(x), (x, -1.5, 1), line_color='red', show=False)
p.extend(q)
p.show()
```
**Exercise**
Find the tangent line to the graph of $y = \dfrac{1}{x}$ at $x = 1$. Sketch the function and the tangent line in the same plot.
```python
reta_ex_y = Lambda(x, 1 / x)
reta_ex_y
```
```python
reta_ex_yl = Lambda(x, diff(reta_ex_y(x), x))
reta_ex_yl
```
```python
reta_ex_x = 1
reta_ex_r = Lambda(x, reta_ex_yl(reta_ex_x)*(x-reta_ex_x) + reta_ex_y(reta_ex_x))
reta_ex_r
```
```python
reta_ex_p = plot(reta_ex_y(x), (x, -2, 2), line_color='blue', show=False)
reta_ex_q = plot(reta_ex_r(x), (x, -1.5, 1), line_color='red', show=False)
reta_ex_p.extend(reta_ex_q)
reta_ex_p.show()
```
```python
f = lambda x: ( ((-0.5)*(x**2)) + (2.5*x) + (4.5) )
sp.solve(((-0.5)*(x**2)) + (2.5*x) + (4.5), x)
```
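The introduction also lists higher-order derivatives and maxima/minima. As a brief illustrative sketch (the expression below is new and chosen only for this example), these can be obtained with `sp.diff` and `sp.solve`:
```python
h = sp.Lambda(x, x**3 - 3*x + 1)                  # illustrative function, not from the exercises above
h_second = sp.diff(h(x), x, 2)                    # second derivative: 6*x
critical_points = sp.solve(sp.diff(h(x), x), x)   # candidates for maxima/minima: [-1, 1]
h_second, critical_points
```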
```python
```
| ff54b756b02b4ff86ce170104d3b67443b422e01 | 13,813 | ipynb | Jupyter Notebook | SymPy.ipynb | GuilhermeEsdras/number-methods | e92a1e12d71ba688d01407982cbde5160f849498 | [
"MIT"
]
| null | null | null | SymPy.ipynb | GuilhermeEsdras/number-methods | e92a1e12d71ba688d01407982cbde5160f849498 | [
"MIT"
]
| null | null | null | SymPy.ipynb | GuilhermeEsdras/number-methods | e92a1e12d71ba688d01407982cbde5160f849498 | [
"MIT"
]
| null | null | null | 35.784974 | 3,028 | 0.647072 | true | 1,132 | Qwen/Qwen-72B | 1. YES
2. YES | 0.835484 | 0.845942 | 0.706771 | __label__por_Latn | 0.661107 | 0.480397 |
```python
%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sympy
from sympy import Matrix, init_printing
from scipy.sparse.linalg import svds,eigs
import sklearn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.metrics.pairwise import cosine_distances
from sklearn.metrics import pairwise_distances
from time import time
!pip install surprise
import surprise
from surprise import SVD
from surprise import Dataset
from surprise.model_selection import cross_validate
init_printing()
```
```python
data = pd.read_csv('top50.csv',encoding = "ISO-8859-1")
data.index = [data["Track.Name"]]
```
```python
data = data[['Beats.Per.Minute',
'Energy', 'Danceability', 'Loudness..dB..', 'Liveness', 'Valence.',
'Length.', 'Acousticness..', 'Speechiness.', 'Popularity']]
```
```python
def index_to_instance(df,index=None):
if index:
return XYZ(df)[index][1]
else:
return XYZ(df)
def XYZ(df):
return sorted(list(zip(list(df.index.codes[0].data),list(df.index.levels[0].array))))
def value_to_index_map(array):
array1 = zip(array,range(len(array)))
return array1
```
```python
index_to_instance(data,10)
```
'One Thing Right'
```python
class RecSysContentBased():
    def __init__(self):
        pass
    def fit(self, train):
        # keep the training features and precompute pairwise similarities and distances
        self.train_set = train
        self.similarity = cosine_similarity(train)
        self.distances = pairwise_distances(train, metric='euclidean')
    def evaluate(self, user):
        # rank all tracks by Euclidean distance to the given track index (closest first)
        d = sorted(value_to_index_map(self.distances[user]))
        return list(index_to_instance(self.train_set, d[i][1]) for i in range(len(d)))
    def predict(self):
        pass
    def test(self, testset):
        pass
```
```python
model = RecSysContentBased()
```
```python
model.fit(data)
```
```python
print("Top 5 Songs closest to {0} are: \n{1}".format(index_to_instance(data,10),pd.Series(model.evaluate(10)[1:6])))
```
Top 5 Songs closest to One Thing Right are:
0 Old Town Road - Remix
1 Happier
2 fuck, i'm lonely (with Anne-Marie) - from 13 ...
3 boyfriend (with Social House)
4 Takeaway
dtype: object
```python
```
| 4b37d57208ca23c0172f9014fa9d1e1cab5400d3 | 4,850 | ipynb | Jupyter Notebook | Content_Based_Demo.ipynb | Aayush-hub/Spotify-Recommendation-Engine | 8e601af297216f2e60004a4daae5a1fa875f21a9 | [
"MIT"
]
| 157 | 2019-09-07T13:39:20.000Z | 2022-03-27T09:53:50.000Z | Content_Based_Demo.ipynb | saurabhshahane4/Spotify-Recommendation-Engine | 03c2011a7ecc336b259ca06fe2b955e2e33039fa | [
"MIT"
]
| 74 | 2019-10-12T13:27:01.000Z | 2021-09-03T04:03:14.000Z | Content_Based_Demo.ipynb | saurabhshahane4/Spotify-Recommendation-Engine | 03c2011a7ecc336b259ca06fe2b955e2e33039fa | [
"MIT"
]
| 152 | 2019-06-27T16:36:47.000Z | 2022-02-24T14:43:09.000Z | 24.744898 | 125 | 0.516907 | true | 598 | Qwen/Qwen-72B | 1. YES
2. YES | 0.822189 | 0.626124 | 0.514792 | __label__eng_Latn | 0.496333 | 0.034365 |
# Multi-State Model first example
## In this notebook
This notebook provides a simple setting which illustrates basic usage of the model.
## Typical settings
In a typical setting of modelling patient illness trajectories, there are multiple sources of complexity:
1. There could be many states (mild, severe, recovered, released from hospital, death etc.)
2. The probability of each transition and the duration of the stay in each state depend on patient covariates.
3. Patient covariates can change over time, possibly in a manner which depends on the states visited.
In order to introduce the multi-state model we shall use a much simpler setting, where the data comes from a simple 3-state model and the covariates neither change over time nor affect the probabilities of transitions between states.
## A Simple Multi-State Setting
Patients start at state 1; state 3 is a terminal state, and states 1 and 2 are identical in the sense that from both:
1. With probability 1/2 you transition to state 3 within 1 day.
2. With probability 1/2 you transition to state 2 or 1 (depending on the present state), within $t∼exp(λ)$
```python
from pymsm.plotting import state_diagram
state_diagram(
"""
s1 : 1
s2: 2
s3: 3
s1 --> s2: P=0.5, t~exp(lambda)
s1 --> s3: P=0.5, t=1
s2 --> s3: P=0.5, t=1
"""
)
```
A simple Multi-State Model
For this setting, one can show that the expected time until reaching a terminal state is $1+\frac{1}{λ}$ (see the proof at the end of this notebook).
## The Dataset Structure
Let’s load the dataset, which was constructed based on the graph above
```python
from pymsm.examples.first_example_utils import create_toy_setting_dataset
dataset = create_toy_setting_dataset(lambda_param=2)
print('dataset type: {}'.format(type(dataset)))
print('elemnets type: {}'.format(type(dataset[0])))
```
dataset type: <class 'list'>
elemnets type: <class 'pymsm.multi_state_competing_risks_model.PathObject'>
The dataset is a list of elements of class PathObject. Each PathObject in the list corresponds to a single sample’s (i.e. “patient’s”) observed path. Let’s look at one such object in detail:
```python
first_path = dataset[0]
print(type(first_path))
print('\n------covariates------')
print(first_path.covariates)
print('\n-------states---------')
print(first_path.states)
print('\n--time at each state--')
print(first_path.time_at_each_state)
print('\n------sample id-------')
print(first_path.sample_id)
```
<class 'pymsm.multi_state_competing_risks_model.PathObject'>
------covariates------
a -0.669272
b 0.884765
dtype: float64
-------states---------
[1, 2, 3]
--time at each state--
[0.4078647886081198, 1]
------sample id-------
0
We see the following attributes:
1. *covariates* : These are the sample’s covariates. In this case they were randomly generated and do not affect the state transitions, but for a patient this could be a numerical vector with entries such as:
* “age in years”
* “is male”
* “number of days that have passed since hospitalization”
* etc..
2. *states* : These are the observed states the sample visited, encoded as positive integers. Here we can see the back and forth between states 1 and 2, ending with the only terminal state (state 3).
3. *time_at_each_state* : These are the observed times spent at each state.
4. *sample_id* : (optional) a unique identifier of the patient.
Note: if the last state is a terminal state, then the vector of times should be shorter than the vector of states by 1. Conversely, if the last state is not a terminal state, then the length of the vector of times should be the same as that of the states. In such a case, the sample is inferred to be right censored.
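For example, a right-censored path can be detected by comparing the lengths of these two attributes (a minimal sketch using the `PathObject` attributes shown above):
```python
def is_right_censored(path):
    # a censored path has a recorded time for every visited state, including the last one
    return len(path.time_at_each_state) == len(path.states)

print(sum(is_right_censored(p) for p in dataset))  # number of right-censored samples
```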
## Updating Covariates Over Time
In order to update the patient covariates over time, we need to define a state-transition function.
In this simple case, the covariates do not change and the function is trivial
```python
def default_update_covariates_function(covariates_entering_origin_state, origin_state=None, target_state=None,
time_at_origin=None, abs_time_entry_to_target_state=None):
return covariates_entering_origin_state
```
You can define any function, as long as it receives the following parameter types (in this order):
1. pandas Series (sample covariates when entering the origin state)
2. int (origin state number)
3. int (target state number)
4. float (time spent at origin state)
5. float (absolute time of entry to target state)
If some of the parameters are not used in the function, use a default value of None, as in the example above.
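As an illustration, here is a sketch of a non-trivial update function that accumulates the dwell times into a hypothetical covariate called `time_in_system` (the covariate name is made up for this example):
```python
def add_elapsed_time_update_function(covariates_entering_origin_state, origin_state=None,
                                     target_state=None, time_at_origin=None,
                                     abs_time_entry_to_target_state=None):
    updated = covariates_entering_origin_state.copy()
    # accumulate the time spent in the state we are leaving into a running total
    updated["time_in_system"] = updated.get("time_in_system", 0.0) + time_at_origin
    return updated
```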
## Defining terminal states
```python
terminal_states = [3]
```
## Fitting the model
Import and init the Model
```python
from pymsm.multi_state_competing_risks_model import MultiStateModel
multi_state_model = MultiStateModel(dataset, terminal_states, default_update_covariates_function,
['covariate_1', 'covariate_2'])
```
Fit the Model
```python
multi_state_model.fit()
```
Fitting Model at State: 1
>>> Fitting Transition to State: 2, n events: 702
>>> Fitting Transition to State: 3, n events: 674
Fitting Model at State: 2
>>> Fitting Transition to State: 3, n events: 326
>>> Fitting Transition to State: 1, n events: 376
## Making predictions
Predictions are done via Monte Carlo simulation. Initial patient covariates, along with the patient’s current state, are supplied. The next states are sequentially sampled via the model parameters. The process concludes when the patient arrives at a terminal state or the number of transitions exceeds the specified maximum.
```python
import numpy as np
all_mcs = multi_state_model.run_monte_carlo_simulation(
# the current covariates of the patient.
# especially important to use updated covariates in case of
# time varying covariates along with a prediction from a point in time
# during hospitalization
sample_covariates = np.array([0.2,-0.3]),
# in this setting samples start at state 1, but
# in general this can be any non-terminal state which
# then serves as the simulation starting point
origin_state = 1,
# in this setting we start predictions from time 0, but
# predictions can be made from any point in time during the
# patient's trajectory
current_time = 0,
# If there is an observed upper limit on the number of transitions, we recommend
# setting this value to that limit in order to prevent generation of outlier paths
max_transitions = 100,
# the number of paths to simulate:
n_random_samples = 1000)
```
100%|█████████████████████████████████████████████████████████████████████████████| 1000/1000 [00:04<00:00, 232.83it/s]
## The Simulation Results Format:
Each run is described by a list of states and times spent at each state (same format as the dataset the model is fit to).
```python
mc = all_mcs[0]
print(mc.states)
print(mc.time_at_each_state)
mc = all_mcs[1]
print(mc.states)
print(mc.time_at_each_state)
```
[1, 2, 1, 3]
[3.2023886466379428, 0.9495424935730798, 0.15962949075341282]
[1, 2, 1, 3]
[1.8968554459242266, 1.6332968427722088, 2.7134405014544503]
## Analyzing The Results
Recall that we computed the expected time analytically for this simple setting. We will now see that the model provides an accurate estimate of this expected value of $1+\frac{1}{\lambda}$.
```python
from pymsm.examples.first_example_utils import plot_total_time_until_terminal_state
plot_total_time_until_terminal_state(all_mcs, true_lambda=2)
```
## Conclusions
This notebook provides a simple example usage of the multi-state model, beginning with the structure of the dataset used to fit the model and up to a simple analysis of the model’s predictions.
By following this process you can fit the model to any such dataset and make predictions
## Appendix 1 - Demonstrating that the expected time until reaching the terminal state is $1+\frac{1}{λ}$
Let $T$ be the random variable denoting the time until reaching the terminal state #3, and let $S_2$ be the random variable denoting the second state visited by the sample (recall that all patients start at state 1, that is: $S_1=1$).
From the law of total expectation:
\begin{equation}
\mathbf{E}[T] = \mathbf{E}[\mathbf{E}[T|S_2]] = \mathbf{P}(S_2 = 3)\cdot\mathbf{E}[T|S_2 = 3] + \mathbf{P}(S_2 = 2)\cdot\mathbf{E}[T|S_2 = 2]
\end{equation}
Denote $T=T_1+T_{2^+}$ (“The total time is the sum of the time of the first transition plus the time from arrival to the second state onwards”). Then:
\begin{equation}
=\frac{1}{2}\cdot1 + \frac{1}{2}\cdot\mathbf{E}[T_1 + T_{2^+}|S_2 = 2] = \frac{1}{2}+\frac{1}{2}\cdot(\mathbf{E}[T_1|S_2 = 2] + \mathbf{E}[T_{2^+}|S_2 = 2]) \\= \frac{1}{2}\cdot1 + \frac{1}{2}\cdot(\frac{1}{λ}+\mathbf{E}[T])
\end{equation}
We then have:
\begin{equation}
2\cdot\mathbf{E}[T] = 1 + (\frac{1}{λ} + \mathbf{E}[T])
\end{equation}
and:
\begin{equation}
{E}[T] = 1 + \frac{1}{λ}
\end{equation}
| a14d7ad0f794507e2f6c87e5173f9e59f2e9e1ed | 27,035 | ipynb | Jupyter Notebook | src/pymsm/archive/first_example.ipynb | hrossman/pymsm | 0c8149c8abfb1529b0688ffc6f07dee62b9b2d69 | [
"MIT"
]
| 20 | 2022-02-24T21:57:23.000Z | 2022-03-30T09:43:45.000Z | src/pymsm/archive/first_example.ipynb | hrossman/pymsm | 0c8149c8abfb1529b0688ffc6f07dee62b9b2d69 | [
"MIT"
]
| null | null | null | src/pymsm/archive/first_example.ipynb | hrossman/pymsm | 0c8149c8abfb1529b0688ffc6f07dee62b9b2d69 | [
"MIT"
]
| null | null | null | 48.976449 | 11,082 | 0.728019 | true | 2,448 | Qwen/Qwen-72B | 1. YES
2. YES | 0.712232 | 0.822189 | 0.58559 | __label__eng_Latn | 0.990884 | 0.198851 |
```python
import matplotlib.pyplot as plt
import seaborn as sns
import itertools
import numpy as np
from scipy.stats import binom, norm
from scipy import integrate
from collections import namedtuple
from matplotlib import cm
import pandas as pd
import six
if six.PY3:
from importlib import reload
import luigi
import pickle
from pprint import pprint
%matplotlib inline
%load_ext autoreload
%autoreload 2
```
# k-NN, Function Expectation, Density Estimation
```python
from experiment_framework.helpers import build_convergence_curve_pipeline
from empirical_privacy.one_bit_sum import GenSampleOneBitSum
# from empirical_privacy import one_bit_sum_joblib as one_bit_sum
# from empirical_privacy import lsdd
# reload(one_bit_sum)
```
```python
def B_pmf(k, n, p):
return binom(n, p).pmf(k)
def B0_pmf(k, n, p):
return B_pmf(k, n-1, p)
def B1_pmf(k, n, p):
return B_pmf(k-1, n-1, p)
def sd(N, P):
return 0.5*np.sum(abs(B0_pmf(i, N, P) - B1_pmf(i, N, P)) for i in range(N+1))
def optimal_correctness(n, p):
return 0.5 + 0.5*sd(n, p)
```
```python
```
```python
n_max = 2**10
ntri=30
n=7
p=0.5
sd(n,p)
```
0.31250000000000022
```python
B0 = [B0_pmf(i, n, p) for i in range(n+1)]
B1 = [B1_pmf(i, n, p) for i in range(n+1)]
dif = np.abs(np.array(B0)-np.array(B1))
sdv = 0.5*np.sum(dif)
pc = 0.5+0.5*sdv
print(f'n={n} coin flips p={p} probability of heads'\
'\nB0 has first outcome=0, B1 has first outcome=1')
print(f'Statistic is the total number of heads sum')
print(f'N_heads=\t{" ".join(np.arange(n+1).astype(str))}')
print(f'PMF of B0=\t{B0}\nPMF of B1=\t{B1}')
print(f'|B0-B1|=\t{dif}')
print(f'sd = 0.5 * sum(|B0-B1|) = {sdv}')
print(f'P(Correct) = 0.5 + 0.5*sd = {pc}')
```
n=7 coin flips p=0.5 probability of heads
B0 has first outcome=0, B1 has first outcome=1
Statistic is the total number of heads sum
N_heads= 0 1 2 3 4 5 6 7
PMF of B0= [0.015625000000000007, 0.093750000000000028, 0.23437500000000003, 0.31250000000000022, 0.23437500000000003, 0.093750000000000028, 0.015625000000000007, 0.0]
PMF of B1= [0.0, 0.015625000000000007, 0.093750000000000028, 0.23437500000000003, 0.31250000000000022, 0.23437500000000003, 0.093750000000000028, 0.015625000000000007]
|B0-B1|= [ 0.015625 0.078125 0.140625 0.078125 0.078125 0.140625 0.078125
0.015625]
sd = 0.5 * sum(|B0-B1|) = 0.3125000000000002
P(Correct) = 0.5 + 0.5*sd = 0.6562500000000001
```python
ccc_kwargs = {
'confidence_interval_width':10,
'n_max':2**13,
'dataset_settings' : {
'n_trials':n,
'prob_success':p,
'gen_distr_type':'binom'
},
'validation_set_size' : 2000
}
CCCs = []
Fits = ['knn', 'density', 'expectation']
for fit in Fits:
CCCs.append(build_convergence_curve_pipeline(
GenSampleOneBitSum,
gensample_kwargs = {'generate_in_batch':True},
fitter=fit,
fitter_kwargs={} if fit=='knn' else {'statistic_column':0}
)(**ccc_kwargs)
)
luigi.build(CCCs, local_scheduler=True, workers=4, log_level='ERROR')
colors = cm.Accent(np.linspace(0,1,len(CCCs)+1))
ax = plt.figure(figsize=(10,5))
ax = plt.gca()
leg_handles = []
for (i, CC) in enumerate(CCCs):
with CC.output().open() as f:
res = pickle.load(f)
handle=sns.tsplot(res['sd_matrix'], ci='sd', color=colors[i], ax=ax, legend=False, time=res['training_set_sizes'])
j=0
for i in range(len(CCCs), 2*len(CCCs)):
handle.get_children()[i].set_label('{}'.format(Fits[j]))
j+=1
plt.semilogx()
plt.axhline(optimal_correctness(n, p), linestyle='--', color='r', label='_nolegend_')
plt.axhline(0.5, linestyle='-', color='b', label='_nolegend_')
plt.title('n={n} p={p} $\delta$={d:.3f}'.format(n=n, p=p, d=sd(n,p)), fontsize=20)
plt.xlabel('num samples')
plt.ylabel('Correctness Rate')
plt.legend(loc=(0,1.1))
```
### Repeat the above using joblib to make sure the luigi implementation is correct
```python
from math import ceil, log
one_bit_sum.n_jobs=1
N = int(ceil(log(n_max) / log(2)))
N_samples = np.logspace(4,N,num=N-3, base=2).astype(np.int)
ax = plt.figure(figsize=(10,5))
ax = plt.gca()
AlgArg = namedtuple('AlgArg', field_names=['f_handle', 'f_kwargs'])
algs = [
AlgArg(one_bit_sum.get_knn_correctness_rate_cached, {'neighbor_method':'sqrt'}),
AlgArg(one_bit_sum.get_knn_correctness_rate_cached, {'neighbor_method':'sqrt_random_tiebreak'}),
AlgArg(one_bit_sum.get_density_est_correctness_rate_cached, {'bandwidth_method':None}),
AlgArg(one_bit_sum.get_expectation_correctness_rate_cached, {'bandwidth_method':None}),
AlgArg(one_bit_sum.get_lsdd_correctness_rate_cached, {})
#AlgArg(one_bit_sum.get_knn_correctness_rate_cached, {'neighbor_method':'cv'})
]
colors = cm.Accent(np.linspace(0,1,len(algs)+1))
leg_handles = []
for (i,alg) in enumerate(algs):
res = one_bit_sum.get_res(n,p,ntri, alg.f_handle, alg.f_kwargs, n_max=n_max)
handle=sns.tsplot(res, ci='sd', color=colors[i], ax=ax, legend=False, time=N_samples)
# f, coef = get_fit(res, N_samples)
# print alg, coef
# lim = coef[0]
# plt.plot(N_samples, f(N_samples), linewidth=3)
# plt.text(N_samples[-1], lim, '{:.3f}'.format(lim),fontsize=16)
j=0
for i in range(len(algs), 2*len(algs)):
#print i, i/2-1 if i%2==0 else (i)/2
handle.get_children()[i].set_label('{} {}'.format(algs[j].f_handle.func.__name__, algs[j].f_kwargs))
j+=1
plt.semilogx()
plt.axhline(optimal_correctness(n, p), linestyle='--', color='r', label='_nolegend_')
plt.axhline(0.5, linestyle='-', color='b', label='_nolegend_')
plt.title('n={n} p={p} $\delta$={d:.3f}'.format(n=n, p=p, d=sd(n,p)), fontsize=20)
plt.xlabel('num samples')
plt.ylabel('Correctness Rate')
plt.legend(loc=(0,1.1))
#print ax.get_legend_handles_labels()
```
### Timing GenSamples
Without halving: 7.5sec
With halving: 8.1sec (i.e. not much overhead)
```python
from luigi_utils.sampling_framework import GenSamples
import time
class GS(GenSamples(GenSampleOneBitSum, generate_in_batch=True)):
pass
GSi = GS(dataset_settings = ccc_kwargs['dataset_settings'],
random_seed='0',
generate_positive_samples=True,
num_samples=2**15)
start = time.time()
luigi.build([GSi], local_scheduler=True, workers=8, log_level='ERROR')
cputime = time.time() - start
print(cputime)
```
8.109845638275146
```python
res['training_set_sizes'].shape
```
(8,)
```python
np.concatenate((np.array([]), np.array([1,2,3])))
```
array([ 1., 2., 3.])
### More experiments
```python
def get_fit(res, N_samples):
    from functools import reduce  # needed in Python 3
    ntri, nsamp = res.shape
    sqrt2 = np.sqrt(2)
    Xlsq = np.hstack((np.ones((nsamp, 1)),
                      sqrt2 / (N_samples.astype(float)**0.25)[:, np.newaxis]))
    y = 1.0 - res.reshape((nsamp * ntri, 1))
    Xlsq = reduce(lambda a, b: np.vstack((a, b)), [Xlsq] * ntri)
    coef = np.linalg.lstsq(Xlsq, y)[0].ravel()
    f = lambda m: 1.0 - coef[0] - coef[1] * sqrt2 / m.astype(float)**0.25
    return f, coef
```
```python
trial=0
num_samples=2**11
bandwidth_method=None
from scipy.stats import gaussian_kde
X0, X1, y0, y1 = one_bit_sum.gen_data(n, p, num_samples, trial)
X0 = X0.ravel()
X1 = X1.ravel()
bw = None
if hasattr(bandwidth_method, '__call__'):
bw = float(bandwidth_method(num_samples)) / num_samples # eg log
if type(bandwidth_method) == float:
bw = num_samples**(1-bandwidth_method)
f0 = gaussian_kde(X0, bw_method = bw)
f1 = gaussian_kde(X1, bw_method = bw)
#Omega = np.unique(np.concatenate((X0, X1)))
_min = 0
_max = n
x = np.linspace(_min, _max, num=10*num_samples)
print('difference of densities=',0.5 + 0.5 * 0.5 * np.mean(np.abs(f0(x)-f1(x))))
denom = f0(x)+f1(x)
numer = np.abs(f0(x)-f1(x))
print('expectation = ',0.5 + 0.5*np.mean(numer/denom))
```
# Uniformly distributed random variables
$$g_0 = U[0,0.5]+\sum_{i=1}^{n-1} U[0,1]$$
$$g_1 = U[0.5,1.0]+\sum_{i=1}^{n-1} U[0,1]$$
Let $\mu_n = \frac{n-1}{2}$ and $\sigma_n = \sqrt{\frac{n-0.75}{12}}$ (the variance follows from $\mathrm{Var}(U[0,0.5]) = \frac{1}{48}$ and $\mathrm{Var}(U[0,1]) = \frac{1}{12}$, so the total is $\frac{n-1}{12} + \frac{1}{48} = \frac{n-0.75}{12}$).
By the CLT, approximately $g_0\sim N(\mu_n+0.25, \sigma_n)$ and $g_1\sim N(\mu_n+0.75, \sigma_n)$.
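A quick Monte Carlo sanity check of this approximation (illustrative only, using plain NumPy):
```python
# empirical mean/std of g0 versus the analytic values used above
n_check, n_samp = 3, 100000
u_first = np.random.uniform(0, 0.5, size=n_samp)
u_rest = np.random.uniform(0, 1, size=(n_check - 1, n_samp)).sum(axis=0)
g0_samples = u_first + u_rest
print(g0_samples.mean(), (n_check - 1) / 2.0 + 0.25)       # empirical vs analytic mean
print(g0_samples.std(), np.sqrt((n_check - 0.75) / 12.0))  # empirical vs analytic std
```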
```python
from math import sqrt
n=3
x = np.linspace(n/2.0-sqrt(n), n/2.0+sqrt(n))
sigma = sqrt((n-0.75)/12.0)
sqrt2 = sqrt(2)
mu = (n-1.0)/2
def g0_pdf(x):
return norm.pdf(x, loc=mu+0.25, scale=sigma)
def g1_pdf(x):
return norm.pdf(x, loc=mu+0.75, scale=sigma)
def d_pdf(x):
return norm.pdf(x, loc=-0.5, scale=sigma*sqrt2)
def g_int(n):
sigma = sqrt((n-0.75)/12.0)
mu = (n-1.0)/2
N0 = norm(loc=mu+0.25, scale=sigma)
N1 = norm(loc=mu+0.75, scale=sigma)
I0 = N0.cdf(n*0.5)-N0.cdf(0)
I1 = N1.cdf(n*0.5)-N1.cdf(0)
return 2*(I0-I1)
def g_stat_dist(n):
return 0.5 * g_int(n)
def g_optimal_correctness(n):
return 0.5 + 0.5*g_stat_dist(n)
plt.plot(x, g0_pdf(x), label='$g_0$')
plt.plot(x, g1_pdf(x), label='$g_1$')
#plt.plot(x, d_pdf(x), label='$d$')
plt.axvline(x=n/2.0, color='r')
assert g0_pdf(n/2.0)==g1_pdf(n/2.0)
plt.legend()
print(g_optimal_correctness(n))
```
```python
from math import ceil, log
if n_max >= 2**13:
one_bit_sum.n_jobs=1
else:
one_bit_sum.n_jobs=-1
N = int(ceil(log(n_max) / log(2)))
N_samples = np.logspace(4,N,num=N-3, base=2).astype(np.int)
ax = plt.figure(figsize=(10,5))
ax = plt.gca()
AlgArg = namedtuple('AlgArg', field_names=['f_handle', 'f_kwargs'])
algs = [
AlgArg(one_bit_sum.get_knn_correctness_rate_cached, {'neighbor_method':'sqrt'}),
AlgArg(one_bit_sum.get_knn_correctness_rate_cached, {'neighbor_method':'sqrt_random_tiebreak'}),
AlgArg(one_bit_sum.get_density_est_correctness_rate_cached, {'bandwidth_method':None}),
AlgArg(one_bit_sum.get_expectation_correctness_rate_cached, {'bandwidth_method':None}),
AlgArg(one_bit_sum.get_lsdd_correctness_rate_cached, {})
#AlgArg(one_bit_sum.get_knn_correctness_rate_cached, {'neighbor_method':'cv'})
]
for A in algs:
A.f_kwargs['type']='norm'
colors = cm.Accent(np.linspace(0,1,len(algs)+1))
leg_handles = []
for (i,alg) in enumerate(algs):
res = one_bit_sum.get_res(n,p,ntri, alg.f_handle, alg.f_kwargs, n_max=n_max)
handle=sns.tsplot(res, ci='sd', color=colors[i], ax=ax, legend=False, time=N_samples)
# f, coef = get_fit(res, N_samples)
# print alg, coef
# lim = coef[0]
# plt.plot(N_samples, f(N_samples), linewidth=3)
# plt.text(N_samples[-1], lim, '{:.3f}'.format(lim),fontsize=16)
j=0
for i in range(len(algs), 2*len(algs)):
#print i, i/2-1 if i%2==0 else (i)/2
handle.get_children()[i].set_label(algs[j].f_handle.func.__name__)
j+=1
#print handle.get_children()[i].get_label()
plt.semilogx()
plt.axhline(g_optimal_correctness(n), linestyle='--', color='r', label='_nolegend_')
plt.axhline(0.5, linestyle='-', color='b', label='_nolegend_')
plt.title('n={n} $\delta$={d:.3f}'.format(n=n, d=g_stat_dist(n)), fontsize=20)
plt.xlabel('num samples')
plt.ylabel('Correctness Rate')
plt.legend(loc=(1.1,0))
#print ax.get_legend_handles_labels()
```
```python
true_value = g_optimal_correctness(n)
print(true_value)
```
```python
trial=0
num_samples=2**15
bandwidth_method=None
from scipy.stats import gaussian_kde
X0, X1, y0, y1 = one_bit_sum.gen_data(n, p, num_samples, trial, type='norm')
X0 = X0.ravel()
X1 = X1.ravel()
bw = None
if hasattr(bandwidth_method, '__call__'):
bw = float(bandwidth_method(num_samples)) / num_samples # eg log
if type(bandwidth_method) == float:
bw = num_samples**(1-bandwidth_method)
f0 = gaussian_kde(X0, bw_method = bw)
f1 = gaussian_kde(X1, bw_method = bw)
#Omega = np.unique(np.concatenate((X0, X1)))
_min = 0
_max = n
x = np.linspace(_min, _max, num=num_samples)
```
```python
print('difference of densities=',0.5 + 0.5 * 0.5 * integrate.quad(lambda x: np.abs(f0(x)-f1(x)), -np.inf, np.inf)[0])
```
```python
X = np.concatenate((X0,X1))
f0x = f0(X)
f1x = f1(X)
denom = (f0x+f1x+np.spacing(1))
numer = np.abs(f0x-f1x)
print('expectation = ',0.5 + 0.5*np.mean(numer/denom))
```
```python
print('exact=',g_optimal_correctness(n))
```
```python
plt.plot(x, f0(x),label='$\hat g_0$', linestyle='--')
plt.plot(x, f1(x),label='$\hat g_1$', linestyle='--')
plt.plot(x, g0_pdf(x), label='$g_0$')
plt.plot(x, g1_pdf(x), label='$g_1$')
plt.legend(loc=(1.05,0))
```
### Comparing different numerical integration techniques
```python
to_int = [f0,f1]
print('Quad')
# for (i,f) in enumerate(to_int):
# intr = integrate.quad(f, -np.inf, np.inf)
# print 'func={0} err={1:.3e}'.format(i, abs(1-intr[0]))
g_int(n)-integrate.quad(lambda x: np.abs(f0(x)-f1(x)), -np.inf, np.inf)[0]
```
```python
to_int = [f0,f1]
print('Quad')
g_int(n)-integrate.quad(lambda x: np.abs(f0(x)-f1(x)), -np.inf, np.inf)[0]
```
```python
g_int(n)
```
```python
print('Simps')
def delta(x):
return np.abs(f0(x)-f1(x))
X = np.unique(np.concatenate((X0,X1)))
y = delta(X)
g_int(n)-integrate.simps(y,X)
```
```python
from empirical_privacy import lsdd
```
```python
rtv = lsdd.lsdd(X0[np.newaxis, :], X1[np.newaxis, :])
```
```python
plt.hist(rtv[1])
```
```python
np.mean(rtv[1])
```
## SymPy-based analysis
```python
import sympy as sy
n,k = sy.symbols('n k', integer=True)
#k = sy.Integer(k)
p = sy.symbols('p', real=True)
q=1-p
def binom_pmf(k, n, p):
return sy.binomial(n,k)*(p**k)*(q**(n-k))
def binom_cdf(x, n, p):
return sy.Sum([binom_pmf(j, n, p) for j in sy.Range(x+1)])
B0 = binom_pmf(k, n-1, p)
B1 = binom_pmf(k-1, n-1, p)
```
```python
def stat_dist(N,P):
return 0.5*sum([sy.Abs(B0.subs([(n,N),(p,P), (k,i)])-B1.subs([(n,N),(p,P), (k,i)])) for i in range(N+1)])
def sd(N, P):
return 0.5*np.sum(abs(B0(i, N, P) - B1(i, N, P)) for i in range(N+1))
```
```python
stat_dist(50,0.5)
```
```python
sd(5000,0.5)
```
```python
N=2
terms =[(B0.subs([(n,N), (k,i)]).simplify(),B1.subs([(n,N), (k,i)]).simplify()) for i in range(N+1)]
print(terms)
```
```python
0.5*sum(map(lambda t: sy.Abs(t[0]-t[1]), terms)).subs([(p,0.5)])
```
```python
stat_dist(4,0.5)
```
| e7d4ada2e79d1665f7ae86be77fb30caf3a7116e | 140,063 | ipynb | Jupyter Notebook | Notebooks/development/1-bit sum.ipynb | maksimt/empirical_privacy | e032f869c7bfa5f0e31035e08ce33cdfcaff1326 | [
"MIT"
]
| 2 | 2019-03-19T03:16:40.000Z | 2019-08-14T10:49:24.000Z | Notebooks/development/1-bit sum.ipynb | maksimt/empirical_privacy | e032f869c7bfa5f0e31035e08ce33cdfcaff1326 | [
"MIT"
]
| null | null | null | Notebooks/development/1-bit sum.ipynb | maksimt/empirical_privacy | e032f869c7bfa5f0e31035e08ce33cdfcaff1326 | [
"MIT"
]
| null | null | null | 157.374157 | 73,804 | 0.882853 | true | 4,863 | Qwen/Qwen-72B | 1. YES
2. YES | 0.70253 | 0.766294 | 0.538344 | __label__eng_Latn | 0.221185 | 0.089084 |
# Error bars in plots:
### Sources of uncertainty:
In these calculations we are considering the following uncertainties
1. Model uncertainty
2. IRF uncertainty/climate sensitivity uncertainty
Model uncertainty is represented as the spread in the ERF produced by the considered RCMIP models. IRF uncertainty is the uncertainty in how a given ERF is translated into changes in temperature.
## IRF:
In these calculations we use the impulse response function:
\begin{align*}
\text{IRF}(t)=& 0.885\cdot \Big(\frac{0.587}{4.1}\cdot \exp\big(\frac{-t}{4.1}\big) + \frac{0.413}{249} \cdot \exp\big(\frac{-t}{249}\big)\Big)\\
\text{IRF}(t)= & \sum_{i=1}^2\frac{\alpha \cdot c_i}{\tau_i}\cdot \exp\big(\frac{-t}{\tau_i}\big)
\end{align*}
with $\alpha = 0.885$, $c_1=0.587$, $\tau_1=4.1$, $c_2=0.413$ and $\tau_2 = 249$.
### Calculate $\Delta T$ from ERF:
We then use the estimated ERF$_x$ for some forcing agent(s) $x$ as follows:
\begin{align*}
\Delta T_x (t) &= \int_0^t ERF_x(t') IRF(t-t') dt' \\
\end{align*}
Now, define $\Delta_x$ as follows:
\begin{align}
\Delta_x = & \frac{1}{\alpha} \int_0^t ERF_x(t') IRF(t-t') dt'\\
=& \frac{1}{\alpha} \int_0^t ERF_x(t') \sum_{i=1}^2\frac{\alpha \cdot c_i}{\tau_i}\cdot exp\big(\frac{-(t-t')}{\tau_1}\big)dt' \\
=& \int_0^t ERF_x(t') \sum_{i=1}^2\frac{c_i}{\tau_i}\cdot exp\big(\frac{-(t-t')}{\tau_1}\big)dt' \\
\end{align}
So, then:
\begin{align}
\Delta T_x (t) = \alpha \cdot \Delta_x(t)
\end{align}
This means that the uncertainty in $\Delta T$ can be calculated according to the propagated uncertainty in the product of parameter $\alpha$ and uncertainty in ERF$_x$.
### Distribution of a product of two independent variables:
Assuming these two are independent we get:
\begin{align}
Var(\Delta T_x) = &Var(\alpha\cdot \Delta_{x})\\
= & (Var(\alpha) +E(\alpha)^2)(Var(\Delta_{x}) + E( \Delta_{x})^2) - E(\alpha)^2E(\Delta_{x})^2
\end{align}
Let $\sigma_x= \sqrt{Var(\Delta_{x})}$, $\mu_x= E(\Delta_{x})$, $\sigma_\alpha = \sqrt{Var(\alpha)}$ and $\mu_\alpha = E(\alpha)$
\begin{align}
Var(\Delta T_x) = (\sigma_x^2 + \mu_x^2)(\sigma_\alpha^2+\mu_\alpha^2) - \mu_x^2 \mu_\alpha^2
\end{align}
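A quick Monte Carlo check of this product-variance formula for independent variables (illustrative values only):
```python
import numpy as np

mu_a, sig_a = 0.885, 0.1   # assumed values for illustration
mu_x, sig_x = 1.2, 0.3

alpha = np.random.normal(mu_a, sig_a, 1000000)
x = np.random.normal(mu_x, sig_x, 1000000)

var_analytic = (sig_x**2 + mu_x**2) * (sig_a**2 + mu_a**2) - mu_x**2 * mu_a**2
print(np.var(alpha * x), var_analytic)  # the two values should agree closely
```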
## Method:
The following method is used:
1. Inter-model variability is estimated from the $ERF$ spread across the different models
2. Assume this is independent of the $IRF$
3. Combine these two uncertainties with $Var(\Delta T_x) = (\sigma_x^2 + \mu_x^2)(\sigma_\alpha^2+\mu_\alpha^2) - \mu_x^2 \mu_\alpha^2$
## Sums and differences:
For any additive combination of several components (either a sum of two SLCFs or a difference etc.), e.g. the difference between the methane contribution $X_i$ and the total anthropogenic contribution $Y$, we would have some covariance between X and Y, because a model with a large $X_i$ would normally have a large $Y$ as well.
So either we can take this into account explicitly:
$$ Var(X+Y) = Var(X)+Var(Y) +2Cov(X,Y)$$
Alternatively, we can treat the sum or difference of the ERF as one stochastic variable and alpha as another and assume they are independent. The independence of the errors on the ECS and the ERF is a good assumption here. Secondly, we then do not need to consider the covariance of the ERF between different components, because it is implicitly covered.
### Summary:
Let $\sigma_{\alpha}$ and $\mu_{\alpha}$ be the standard deviation and mean for a normal distribution of the $\alpha$ parameter in ECS. Secondly, let $X_i$ be a sample of
\begin{align}
X_i = & \frac{1}{\alpha} \int_0^t ERF_i(t') IRF(t-t') dt'\\
=& \int_0^t ERF_i(t') \sum_{j=1}^2\frac{c_j}{\tau_j}\cdot \exp\big(\frac{-(t-t')}{\tau_j}\big)dt' \\
\end{align}
where $ERF_i$ is some difference or sum of different ERF components.
Then
\begin{align}
\sigma_{X_i} = \sqrt{\frac{\sum(X_{i,k}-\mu_{X_i})^2}{N}}
\end{align}
and we can get
\begin{align}
\sigma_T = (\sigma_{X_i}+\mu_{X_i})(\sigma_{\alpha} + \mu_{\alpha}) - \mu_{X_i}\mu_{\alpha}
\end{align}
### Technical calculation:
From any calculation of
\begin{align}
\Delta T_{\alpha=\mu_\alpha} = \sum_i T_i - \sum_k T_k
\end{align}
for all models, calculated with IRF such that $\alpha = \mu_{\alpha}$, we can find
\begin{align}
X_{i,k} = \frac{1}{\mu_{\alpha}} \Delta T_{\alpha=\mu_\alpha,k}
\end{align}
where the index $k$ signifies the different models.
And thus we can easily calculate
\begin{align}
\sigma_{X_i} = \sqrt{\frac{\sum(X_{i,k}-\mu_{X_i})^2}{N}}
\end{align}
since
\begin{align}
\mu_{X_i} = \frac{1}{\mu_\alpha}\mu_{\Delta T_{\alpha=\mu_\alpha}}
\end{align}
we have
\begin{align}
\sigma_{X_i} = \frac{1}{\mu_\alpha} \sigma_{\Delta T_{\alpha=\mu_\alpha}}.
\end{align}
## Finally:
Let $\Delta T = X_{i}\cdot \alpha $ and assume $X_i$ and $\alpha$ independent.
Then
\begin{align}
\sigma_{\Delta T}^2 =& (\sigma_{X_i}^2+\mu_{X_i}^2)(\sigma_{\alpha}^2 + \mu_{\alpha}^2) - \mu_{X_i}^2\mu_{\alpha}^2\\
\sigma_{\Delta T}^2 =& \frac{1}{\mu_\alpha^2}\big[(\sigma_{\Delta T_{\alpha=\mu_\alpha} }^2 +\mu_{\Delta T_{\alpha=\mu_\alpha}}^2)(\sigma_{\alpha}^2 + \mu_{\alpha}^2) - \mu_{\Delta T_{\alpha=\mu_\alpha}}^2\mu_{\alpha}^2 \big]\\
\sigma_{\Delta T} =& \frac{1}{\mu_\alpha}\big[(\sigma_{\Delta T_{\alpha=\mu_\alpha} }^2 +\mu_{\Delta T_{\alpha=\mu_\alpha}}^2)(\sigma_{\alpha}^2 + \mu_{\alpha}^2) - \mu_{\Delta T_{\alpha=\mu_\alpha}}^2\mu_{\alpha}^2 \big]^{\frac{1}{2}}
\end{align}
```python
def sigma_DT(dT, sig_alpha, mu_alpha, dim='climatemodel'):
sig_DT = dT.std(dim)
mu_DT = dT.mean(dim)
return ((sig_DT**2 + mu_DT**2)*(sig_alpha**2+mu_alpha**2)- mu_DT**2*mu_alpha**2)**(0.5)/mu_alpha
```
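A minimal usage sketch, assuming `dT` is an xarray `DataArray` with a `climatemodel` dimension (the numbers below are made up for illustration):
```python
import numpy as np
import xarray as xr

dT = xr.DataArray(
    np.array([[0.9, 1.0, 1.1, 1.05, 0.95],
              [1.4, 1.5, 1.6, 1.55, 1.45],
              [1.9, 2.0, 2.1, 2.05, 1.95]]),
    dims=('year', 'climatemodel'),
)
sig = sigma_DT(dT, sig_alpha=0.1, mu_alpha=0.885, dim='climatemodel')
print(sig.values)  # one uncertainty estimate per year
```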
In other words, it suffices to know
a) $\sigma_\alpha$ and $\mu_\alpha$ and
b) $\Delta T_x$ calculated for a fixed $\mu_\alpha$
to compute the uncertainty bars.
| a895bf324303c3a49670bf5ab2083c16b5a75d1b | 9,302 | ipynb | Jupyter Notebook | ar6_ch6_rcmipfigs/notebooks/Uncertainty_calculation.ipynb | annefou/AR6_CH6_RCMIPFIGS | edaee2c6d41a1a8c996b3bb7776ddd7ebac06c18 | [
"BSD-3-Clause"
]
| null | null | null | ar6_ch6_rcmipfigs/notebooks/Uncertainty_calculation.ipynb | annefou/AR6_CH6_RCMIPFIGS | edaee2c6d41a1a8c996b3bb7776ddd7ebac06c18 | [
"BSD-3-Clause"
]
| null | null | null | ar6_ch6_rcmipfigs/notebooks/Uncertainty_calculation.ipynb | annefou/AR6_CH6_RCMIPFIGS | edaee2c6d41a1a8c996b3bb7776ddd7ebac06c18 | [
"BSD-3-Clause"
]
| null | null | null | 33.221429 | 349 | 0.539454 | true | 2,003 | Qwen/Qwen-72B | 1. YES
2. YES | 0.901921 | 0.824462 | 0.743599 | __label__eng_Latn | 0.871232 | 0.565962 |
# Single Particle Model (SPM)
## Model Equations
The SPM consists of two spherically symmetric diffusion equations: one within a representative negative particle ($\text{k}=\text{n}$) and one within a representative positive particle ($\text{k}=\text{p}$). In the centre of the particle the standard no-flux condition is imposed. Since the SPM assumes that all particles in an electrode behave in exactly the same way, the flux on the surface of a particle is simply the current $I$ divided by the thickness of the electrode $L_{\text{k}}$. The concentration of lithium in electrode $\text{k}$ is denoted $c_{\text{k}}$ and the current is denoted by $I$. All parameters in the model stated here are dimensionless and are given in terms of dimensional parameters at the end of this notebook. The model equations for the SPM are then:
\begin{align}
\mathcal{C}_{\text{k}} \frac{\partial c_{\text{s,k}}}{\partial t} &= -\frac{1}{r_{\text{k}}^2} \frac{\partial}{\partial r_{\text{k}}} \left(r_{\text{k}}^2 N_{\text{s,k}}\right), \\
N_{\text{s,k}} &= -D_{\text{s,k}}(c_{\text{s,k}}) \frac{\partial c_{\text{s,k}}}{\partial r_{\text{k}}}, \quad \text{k} \in \text{n, p}, \end{align}
$$
N_{\text{s,k}}\big|_{r_{\text{k}}=0} = 0, \quad \text{k} \in \text{n, p}, \quad \ \ - \frac{a_{R, \text{k}}\gamma_{\text{k}}}{\mathcal{C}_{\text{k}}} N_{\text{s,k}}\big|_{r_{\text{k}}=1} =
\begin{cases}
\frac{I}{L_{\text{n}}}, \quad &\text{k}=\text{n}, \\
-\frac{I}{L_{\text{p}}}, \quad &\text{k}=\text{p},
\end{cases} \\
c_{\text{s,k}}(r_{\text{k}},0) = c_{\text{s,k,0}}, \quad \text{k} \in \text{n, p},$$
where $D_{\text{s,k}}$ is the diffusion coefficient in the solid, $N_{\text{s,k}}$ denotes the flux of lithium ions in the solid particle within the region $\text{k}$, and $r_{\text{k}} \in[0,1]$ is the radial coordinate of the particle in electrode $\text{k}$.
### Voltage Expression
The terminal voltage is obtained from the expression:
$$
V = U_{\text{p}}(c_{\text{p}})\big|_{r_{\text{p}}=1} - U_{\text{n}}(c_{\text{n}})\big|_{r_{\text{n}}=1} -2\sinh^{-1}\left(\frac{I}{j_{\text{0,p}} L_{\text{p}}}\right) - 2\sinh^{-1}\left(\frac{I}{j_{\text{0,n}} L_{\text{n}}}\right)
$$
with the exchange current densities given by
$$j_{\text{0,k}} = \frac{\gamma_{\text{k}}}{\mathcal{C}_{\text{r,k}}}(c_{\text{k}})^{1/2}(1-c_{\text{k}})^{1/2} $$
More details can be found in [[1]](#ref).
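As a concrete illustration of the voltage expression above (a sketch, not part of PyBaMM), it can be evaluated directly from the surface concentrations with NumPy; the open-circuit potential functions `U_n`, `U_p` and all parameter values are assumed to be supplied by the user:
```python
import numpy as np

def spm_voltage(c_n, c_p, I, L_n, L_p, gamma_n, gamma_p, C_rn, C_rp, U_n, U_p):
    # exchange current densities j_{0,k} = gamma_k / C_{r,k} * sqrt(c_k * (1 - c_k))
    j0n = gamma_n / C_rn * np.sqrt(c_n * (1.0 - c_n))
    j0p = gamma_p / C_rp * np.sqrt(c_p * (1.0 - c_p))
    # V = U_p(c_p) - U_n(c_n) - 2 asinh(I / (j0p L_p)) - 2 asinh(I / (j0n L_n))
    return (U_p(c_p) - U_n(c_n)
            - 2.0 * np.arcsinh(I / (j0p * L_p))
            - 2.0 * np.arcsinh(I / (j0n * L_n)))
```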
## Example solving SPM using PyBaMM
Below we show how to solve the Single Particle Model, using the default geometry, mesh, parameters, discretisation and solver provided with PyBaMM. In this notebook we explicitly handle all the stages of setting up, processing and solving the model in order to explain them in detail. However, it is often simpler in practice to use the `Simulation` class, which handles many of the stages automatically, as shown [here](../simulation-class.ipynb).
First we need to import `pybamm`, and then change our working directory to the root of the pybamm folder.
```python
%pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
import numpy as np
import os
import matplotlib.pyplot as plt
os.chdir(pybamm.__path__[0]+'/..')
```
Note: you may need to restart the kernel to use updated packages.
We then create an instance of the SPM:
```python
model = pybamm.lithium_ion.SPM()
```
The model object is a subtype of [`pybamm.BaseModel`](https://pybamm.readthedocs.io/en/latest/source/models/base_model.html), and contains all the equations that define this particular model. For example, the `rhs` attribute of `model` is a dictionary mapping variables such as $c_n$ to the equation representing their rate of change with time (i.e. $\partial{c_n}/\partial{t}$). We can see this explicitly by visualising one entry in the `rhs` dict:
```python
variable = next(iter(model.rhs.keys()))
equation = next(iter(model.rhs.values()))
print('rhs equation for variable \'',variable,'\' is:')
path = 'examples/notebooks/models/'
equation.visualise(path+'spm1.png')
```
rhs equation for variable ' Discharge capacity [A.h] ' is:
We need a geometry in which to define our model equations. In pybamm this is represented by the [`pybamm.Geometry`](https://pybamm.readthedocs.io/en/latest/source/geometry/geometry.html) class. In this case we use the default geometry object defined by the model
```python
geometry = model.default_geometry
```
This geometry object defines a number of domains, each with its own name, spatial variables and min/max limits (the latter are represented as equations similar to the rhs equation shown above). For instance, the SPM has the following domains:
```python
print('SPM domains:')
for i, (k, v) in enumerate(geometry.items()):
print(str(i+1)+'.',k,'with variables:')
for var, rng in v.items():
if 'min' in rng:
print(' -(',rng['min'],') <=',var,'<= (',rng['max'],')')
else:
print(var, '=', rng['position'])
```
SPM domains:
1. negative electrode with variables:
-( 0 ) <= x_n <= ( Negative electrode thickness [m] / (Negative electrode thickness [m] + Separator thickness [m] + Positive electrode thickness [m]) )
2. separator with variables:
-( Negative electrode thickness [m] / (Negative electrode thickness [m] + Separator thickness [m] + Positive electrode thickness [m]) ) <= x_s <= ( Negative electrode thickness [m] / (Negative electrode thickness [m] + Separator thickness [m] + Positive electrode thickness [m]) + Separator thickness [m] / (Negative electrode thickness [m] + Separator thickness [m] + Positive electrode thickness [m]) )
3. positive electrode with variables:
-( Negative electrode thickness [m] / (Negative electrode thickness [m] + Separator thickness [m] + Positive electrode thickness [m]) + Separator thickness [m] / (Negative electrode thickness [m] + Separator thickness [m] + Positive electrode thickness [m]) ) <= x_p <= ( 1 )
4. negative particle with variables:
-( 0 ) <= r_n <= ( 1 )
5. positive particle with variables:
-( 0 ) <= r_p <= ( 1 )
6. current collector with variables:
z = 1
Both the model equations and the geometry include parameters, such as $\gamma_p$ or $L_p$. We can substitute these symbolic parameters in the model with values by using the [`pybamm.ParameterValues`](https://pybamm.readthedocs.io/en/latest/source/parameters/parameter_values.html) class, which takes either a python dictionary or CSV file with the mapping between parameter names and values. Rather than create our own instance of `pybamm.ParameterValues`, we will use the default parameter set included in the model
```python
param = model.default_parameter_values
```
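Individual values in this set can also be overridden before processing (a sketch; the parameter name shown is illustrative and must match an entry in the chosen parameter set):
```python
# override a single value in the default parameter set (illustrative parameter name)
param.update({"Current function [A]": 2})
```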
We can then apply this parameter set to the model and geometry
```python
param.process_model(model)
param.process_geometry(geometry)
```
The next step is to mesh the input geometry. We can do this using the [`pybamm.Mesh`](https://pybamm.readthedocs.io/en/latest/source/meshes/meshes.html) class. This class takes in the geometry of the problem, and also two dictionaries containing the type of mesh to use within each domain of the geometry (i.e. within the positive or negative electrode domains), and the number of mesh points.
The default mesh types and the default number of points to use in each variable for the SPM are:
```python
for k, t in model.default_submesh_types.items():
print(k,'is of type',t.__repr__())
for var, npts in model.default_var_pts.items():
print(var,'has',npts,'mesh points')
```
negative electrode is of type Generator for Uniform1DSubMesh
separator is of type Generator for Uniform1DSubMesh
positive electrode is of type Generator for Uniform1DSubMesh
negative particle is of type Generator for Uniform1DSubMesh
positive particle is of type Generator for Uniform1DSubMesh
current collector is of type Generator for SubMesh0D
x_n has 20 mesh points
x_s has 20 mesh points
x_p has 20 mesh points
r_n has 30 mesh points
r_p has 30 mesh points
y has 10 mesh points
z has 10 mesh points
With these defaults, we can then create our mesh of the given geometry:
```python
mesh = pybamm.Mesh(geometry, model.default_submesh_types, model.default_var_pts)
```
The next step is to discretise the model equations using this mesh. We do this using the [`pybamm.Discretisation`](https://pybamm.readthedocs.io/en/latest/source/discretisations/discretisation.html) class, which takes both the mesh we have already created, and a dictionary of spatial methods to use for each geometry domain. For the case of the SPM, we use the following defaults for the spatial discretisation methods:
```python
for k, method in model.default_spatial_methods.items():
print(k,'is discretised using',method.__class__.__name__,'method')
```
macroscale is discretised using FiniteVolume method
negative particle is discretised using FiniteVolume method
positive particle is discretised using FiniteVolume method
current collector is discretised using ZeroDimensionalSpatialMethod method
We then create the `pybamm.Discretisation` object, and use this to discretise the model equations
```python
disc = pybamm.Discretisation(mesh, model.default_spatial_methods)
disc.process_model(model);
```
After this stage, all of the variables in `model` have been discretised into `pybamm.StateVector` objects, and spatial operators have been replaced by matrix-vector multiplications, ready to be evaluated within a time-stepping algorithm of a given solver. For example, the rhs expression for $\partial{c_n}/\partial{t}$ that we visualised above is now represented by:
```python
model.concatenated_rhs.children[0].visualise(path+'spm2.png')
```
Now we are ready to run the time-stepping routine to solve the model. Once again we use the default ODE solver.
```python
# Solve the model at the given time points (in seconds)
solver = model.default_solver
n = 250
t_eval = np.linspace(0, 3600, n)
print('Solving using',type(solver).__name__,'solver...')
solution = solver.solve(model, t_eval)
print('Finished.')
```
Solving using ScipySolver solver...
Finished.
Each model in pybamm has a list of relevant variables defined in the model, for use in visualising the model solution or for comparison with other models. The SPM defines the following variables:
```python
print('SPM model variables:')
for v in model.variables.keys():
print('\t-',v)
```
SPM model variables:
- Time
- Time [s]
- Time [min]
- Time [h]
- x
- x [m]
- x_n
- x_n [m]
- x_s
- x_s [m]
- x_p
- x_p [m]
- Sum of electrolyte reaction source terms
- Sum of negative electrode electrolyte reaction source terms
- Sum of positive electrode electrolyte reaction source terms
- Sum of x-averaged negative electrode electrolyte reaction source terms
- Sum of x-averaged positive electrode electrolyte reaction source terms
- Sum of interfacial current densities
- Sum of negative electrode interfacial current densities
- Sum of positive electrode interfacial current densities
- Sum of x-averaged negative electrode interfacial current densities
- Sum of x-averaged positive electrode interfacial current densities
- r_n
- r_n [m]
- r_p
- r_p [m]
- Total current density
- Total current density [A.m-2]
- Current [A]
- C-rate
- Discharge capacity [A.h]
- Porosity
- Negative electrode porosity
- Separator porosity
- Positive electrode porosity
- X-averaged negative electrode porosity
- X-averaged separator porosity
- X-averaged positive electrode porosity
- Active material volume fraction
- Negative electrode active material volume fraction
- Separator active material volume fraction
- Positive electrode active material volume fraction
- X-averaged negative electrode active material volume fraction
- X-averaged separator active material volume fraction
- X-averaged positive electrode active material volume fraction
- Leading-order porosity
- Leading-order negative electrode porosity
- Leading-order separator porosity
- Leading-order positive electrode porosity
- Leading-order x-averaged negative electrode porosity
- Leading-order x-averaged separator porosity
- Leading-order x-averaged positive electrode porosity
- Leading-order active material volume fraction
- Leading-order negative electrode active material volume fraction
- Leading-order separator active material volume fraction
- Leading-order positive electrode active material volume fraction
- Leading-order x-averaged negative electrode active material volume fraction
- Leading-order x-averaged separator active material volume fraction
- Leading-order x-averaged positive electrode active material volume fraction
- Porosity change
- Negative electrode porosity change
- Separator porosity change
- Positive electrode porosity change
- X-averaged negative electrode porosity change
- X-averaged separator porosity change
- X-averaged positive electrode porosity change
- Leading-order x-averaged negative electrode porosity change
- Leading-order x-averaged separator porosity change
- Leading-order x-averaged positive electrode porosity change
- Negative electrode volume-averaged velocity
- Positive electrode volume-averaged velocity
- Negative electrode volume-averaged velocity [m.s-1]
- Positive electrode volume-averaged velocity [m.s-1]
- Negative electrode volume-averaged acceleration
- Positive electrode volume-averaged acceleration
- Negative electrode volume-averaged acceleration [m.s-1]
- Positive electrode volume-averaged acceleration [m.s-1]
- X-averaged negative electrode volume-averaged acceleration
- X-averaged positive electrode volume-averaged acceleration
- X-averaged negative electrode volume-averaged acceleration [m.s-1]
- X-averaged positive electrode volume-averaged acceleration [m.s-1]
- Negative electrode pressure
- Positive electrode pressure
- X-averaged negative electrode pressure
- X-averaged positive electrode pressure
- Separator pressure
- X-averaged separator pressure
- Negative electrode transverse volume-averaged velocity
- Separator transverse volume-averaged velocity
- Positive electrode transverse volume-averaged velocity
- Negative electrode transverse volume-averaged velocity [m.s-2]
- Separator transverse volume-averaged velocity [m.s-2]
- Positive electrode transverse volume-averaged velocity [m.s-2]
- X-averaged negative electrode transverse volume-averaged velocity
- X-averaged separator transverse volume-averaged velocity
- X-averaged positive electrode transverse volume-averaged velocity
- X-averaged negative electrode transverse volume-averaged velocity [m.s-2]
- X-averaged separator transverse volume-averaged velocity [m.s-2]
- X-averaged positive electrode transverse volume-averaged velocity [m.s-2]
- Transverse volume-averaged velocity
- Transverse volume-averaged velocity [m.s-2]
- Negative electrode transverse volume-averaged acceleration
- Separator transverse volume-averaged acceleration
- Positive electrode transverse volume-averaged acceleration
- Negative electrode transverse volume-averaged acceleration [m.s-2]
- Separator transverse volume-averaged acceleration [m.s-2]
- Positive electrode transverse volume-averaged acceleration [m.s-2]
- X-averaged negative electrode transverse volume-averaged acceleration
- X-averaged separator transverse volume-averaged acceleration
- X-averaged positive electrode transverse volume-averaged acceleration
- X-averaged negative electrode transverse volume-averaged acceleration [m.s-2]
- X-averaged separator transverse volume-averaged acceleration [m.s-2]
- X-averaged positive electrode transverse volume-averaged acceleration [m.s-2]
- Transverse volume-averaged acceleration
- Transverse volume-averaged acceleration [m.s-2]
- Negative particle concentration
- Negative particle concentration [mol.m-3]
- X-averaged negative particle concentration
- X-averaged negative particle concentration [mol.m-3]
- R-averaged negative particle concentration
- R-averaged negative particle concentration [mol.m-3]
- Average negative particle concentration
- Average negative particle concentration [mol.m-3]
- Negative particle surface concentration
- Negative particle surface concentration [mol.m-3]
- X-averaged negative particle surface concentration
- X-averaged negative particle surface concentration [mol.m-3]
- Negative electrode active volume fraction
- Negative electrode volume-averaged concentration
- Negative electrode volume-averaged concentration [mol.m-3]
- Negative electrode extent of lithiation
- X-averaged negative electrode extent of lithiation
- Total lithium in negative electrode [mol]
- Minimum negative particle concentration
- Maximum negative particle concentration
- Minimum negative particle concentration [mol.m-3]
- Maximum negative particle concentration [mol.m-3]
- Minimum negative particle surface concentration
- Maximum negative particle surface concentration
- Minimum negative particle surface concentration [mol.m-3]
- Maximum negative particle surface concentration [mol.m-3]
- Negative particle radius
- Negative particle radius [m]
- Negative surface area to volume ratio
- Negative surface area to volume ratio [m-1]
- Positive particle concentration
- Positive particle concentration [mol.m-3]
- X-averaged positive particle concentration
- X-averaged positive particle concentration [mol.m-3]
- R-averaged positive particle concentration
- R-averaged positive particle concentration [mol.m-3]
- Average positive particle concentration
- Average positive particle concentration [mol.m-3]
- Positive particle surface concentration
- Positive particle surface concentration [mol.m-3]
- X-averaged positive particle surface concentration
- X-averaged positive particle surface concentration [mol.m-3]
- Positive electrode active volume fraction
- Positive electrode volume-averaged concentration
- Positive electrode volume-averaged concentration [mol.m-3]
- Positive electrode extent of lithiation
- X-averaged positive electrode extent of lithiation
- Total lithium in positive electrode [mol]
- Minimum positive particle concentration
- Maximum positive particle concentration
- Minimum positive particle concentration [mol.m-3]
- Maximum positive particle concentration [mol.m-3]
- Minimum positive particle surface concentration
- Maximum positive particle surface concentration
- Minimum positive particle surface concentration [mol.m-3]
- Maximum positive particle surface concentration [mol.m-3]
- Positive particle radius
- Positive particle radius [m]
- Positive surface area to volume ratio
- Positive surface area to volume ratio [m-1]
- Electrolyte concentration
- Electrolyte concentration [mol.m-3]
- Electrolyte concentration [Molar]
- X-averaged electrolyte concentration
- X-averaged electrolyte concentration [mol.m-3]
- X-averaged electrolyte concentration [Molar]
- Negative electrolyte concentration
- Negative electrolyte concentration [mol.m-3]
- Negative electrolyte concentration [Molar]
- Separator electrolyte concentration
- Separator electrolyte concentration [mol.m-3]
- Separator electrolyte concentration [Molar]
- Positive electrolyte concentration
- Positive electrolyte concentration [mol.m-3]
- Positive electrolyte concentration [Molar]
- X-averaged negative electrolyte concentration
- X-averaged negative electrolyte concentration [mol.m-3]
- X-averaged separator electrolyte concentration
- X-averaged separator electrolyte concentration [mol.m-3]
- X-averaged positive electrolyte concentration
- X-averaged positive electrolyte concentration [mol.m-3]
- Electrolyte flux
- Electrolyte flux [mol.m-2.s-1]
- Negative current collector temperature
- Negative current collector temperature [K]
- X-averaged negative electrode temperature
- X-averaged negative electrode temperature [K]
- Negative electrode temperature
- Negative electrode temperature [K]
- X-averaged separator temperature
- X-averaged separator temperature [K]
- Separator temperature
- Separator temperature [K]
- X-averaged positive electrode temperature
- X-averaged positive electrode temperature [K]
- Positive electrode temperature
- Positive electrode temperature [K]
- Positive current collector temperature
- Positive current collector temperature [K]
- Cell temperature
- Cell temperature [K]
- X-averaged cell temperature
- X-averaged cell temperature [K]
- Volume-averaged cell temperature
- Volume-averaged cell temperature [K]
- Ambient temperature [K]
- Ambient temperature
- Inner negative electrode sei thickness
- Inner negative electrode sei thickness [m]
- X-averaged inner negative electrode sei thickness
- X-averaged inner negative electrode sei thickness [m]
- Outer negative electrode sei thickness
- Outer negative electrode sei thickness [m]
- X-averaged outer negative electrode sei thickness
- X-averaged outer negative electrode sei thickness [m]
- Total negative electrode sei thickness
- Total negative electrode sei thickness [m]
- X-averaged total negative electrode sei thickness
- X-averaged total negative electrode sei thickness [m]
- X-averaged negative electrode resistance [Ohm.m2]
- Inner negative electrode sei concentration [mol.m-3]
- X-averaged inner negative electrode sei concentration [mol.m-3]
- Outer negative electrode sei concentration [mol.m-3]
- X-averaged outer negative electrode sei concentration [mol.m-3]
- Negative sei concentration [mol.m-3]
- X-averaged negative electrode sei concentration [mol.m-3]
- Loss of lithium to negative electrode sei [mol]
- Total negative electrode sei thickness
- Total negative electrode sei thickness [m]
- X-averaged total negative electrode sei thickness
- X-averaged total negative electrode sei thickness [m]
- X-averaged negative electrode resistance [Ohm.m2]
- Inner negative electrode sei interfacial current density
- Inner negative electrode sei interfacial current density [A.m-2]
- X-averaged inner negative electrode sei interfacial current density
- X-averaged inner negative electrode sei interfacial current density [A.m-2]
- Outer negative electrode sei interfacial current density
- Outer negative electrode sei interfacial current density [A.m-2]
- X-averaged outer negative electrode sei interfacial current density
- X-averaged outer negative electrode sei interfacial current density [A.m-2]
- Negative electrode sei interfacial current density
- Negative electrode sei interfacial current density [A.m-2]
- X-averaged negative electrode sei interfacial current density
- X-averaged negative electrode sei interfacial current density [A.m-2]
- Inner positive electrode sei thickness
- Inner positive electrode sei thickness [m]
- X-averaged inner positive electrode sei thickness
- X-averaged inner positive electrode sei thickness [m]
- Outer positive electrode sei thickness
- Outer positive electrode sei thickness [m]
- X-averaged outer positive electrode sei thickness
- X-averaged outer positive electrode sei thickness [m]
- Total positive electrode sei thickness
- Total positive electrode sei thickness [m]
- X-averaged total positive electrode sei thickness
- X-averaged total positive electrode sei thickness [m]
- X-averaged positive electrode resistance [Ohm.m2]
- Inner positive electrode sei concentration [mol.m-3]
- X-averaged inner positive electrode sei concentration [mol.m-3]
- Outer positive electrode sei concentration [mol.m-3]
- X-averaged outer positive electrode sei concentration [mol.m-3]
- Positive sei concentration [mol.m-3]
- X-averaged positive electrode sei concentration [mol.m-3]
- Loss of lithium to positive electrode sei [mol]
- Total positive electrode sei thickness
- Total positive electrode sei thickness [m]
- X-averaged total positive electrode sei thickness
- X-averaged total positive electrode sei thickness [m]
- X-averaged positive electrode resistance [Ohm.m2]
- Inner positive electrode sei interfacial current density
- Inner positive electrode sei interfacial current density [A.m-2]
- X-averaged inner positive electrode sei interfacial current density
- X-averaged inner positive electrode sei interfacial current density [A.m-2]
- Outer positive electrode sei interfacial current density
- Outer positive electrode sei interfacial current density [A.m-2]
- X-averaged outer positive electrode sei interfacial current density
- X-averaged outer positive electrode sei interfacial current density [A.m-2]
- Positive electrode sei interfacial current density
- Positive electrode sei interfacial current density [A.m-2]
- X-averaged positive electrode sei interfacial current density
- X-averaged positive electrode sei interfacial current density [A.m-2]
- Electrolyte tortuosity
- Negative electrolyte tortuosity
- Positive electrolyte tortuosity
- X-averaged negative electrolyte tortuosity
- X-averaged positive electrolyte tortuosity
- Separator tortuosity
- X-averaged separator tortuosity
- Electrode tortuosity
- Negative electrode tortuosity
- Positive electrode tortuosity
- X-averaged negative electrode tortuosity
- X-averaged positive electrode tortuosity
- Separator volume-averaged velocity
- Separator volume-averaged velocity [m.s-1]
- Separator volume-averaged acceleration
- Separator volume-averaged acceleration [m.s-1]
- X-averaged separator volume-averaged acceleration
- X-averaged separator volume-averaged acceleration [m.s-1]
- Volume-averaged velocity
- Volume-averaged velocity [m.s-1]
- Volume-averaged acceleration
- X-averaged volume-averaged acceleration
- Volume-averaged acceleration [m.s-1]
- X-averaged volume-averaged acceleration [m.s-1]
- Pressure
- Negative particle flux
- X-averaged negative particle flux
- Positive particle flux
- X-averaged positive particle flux
- Total concentration in electrolyte [mol]
- Ohmic heating
- Ohmic heating [W.m-3]
- X-averaged Ohmic heating
- X-averaged Ohmic heating [W.m-3]
- Volume-averaged Ohmic heating
- Volume-averaged Ohmic heating [W.m-3]
- Irreversible electrochemical heating
- Irreversible electrochemical heating [W.m-3]
- X-averaged irreversible electrochemical heating
- X-averaged irreversible electrochemical heating [W.m-3]
- Volume-averaged irreversible electrochemical heating
- Volume-averaged irreversible electrochemical heating[W.m-3]
- Reversible heating
- Reversible heating [W.m-3]
- X-averaged reversible heating
- X-averaged reversible heating [W.m-3]
- Volume-averaged reversible heating
- Volume-averaged reversible heating [W.m-3]
- Total heating
- Total heating [W.m-3]
- X-averaged total heating
- X-averaged total heating [W.m-3]
- Volume-averaged total heating
- Volume-averaged total heating [W.m-3]
- Negative current collector potential
- Negative current collector potential [V]
- Current collector current density
- Current collector current density [A.m-2]
- Leading-order current collector current density
- Sei interfacial current density
- Sei interfacial current density [A.m-2]
- Sei interfacial current density per volume [A.m-3]
- X-averaged negative electrode total interfacial current density
- X-averaged negative electrode total interfacial current density [A.m-2]
- X-averaged negative electrode total interfacial current density per volume [A.m-3]
- Negative electrode exchange current density
- X-averaged negative electrode exchange current density
- Negative electrode exchange current density [A.m-2]
- X-averaged negative electrode exchange current density [A.m-2]
- Negative electrode exchange current density per volume [A.m-3]
- X-averaged negative electrode exchange current density per volume [A.m-3]
- Negative electrode reaction overpotential
- X-averaged negative electrode reaction overpotential
- Negative electrode reaction overpotential [V]
- X-averaged negative electrode reaction overpotential [V]
- Negative electrode surface potential difference
- X-averaged negative electrode surface potential difference
- Negative electrode surface potential difference [V]
- X-averaged negative electrode surface potential difference [V]
- Negative electrode sei film overpotential
- X-averaged negative electrode sei film overpotential
- Negative electrode sei film overpotential [V]
- X-averaged negative electrode sei film overpotential [V]
- Negative electrode open circuit potential
- Negative electrode open circuit potential [V]
- X-averaged negative electrode open circuit potential
- X-averaged negative electrode open circuit potential [V]
- Negative electrode entropic change
- X-averaged negative electrode entropic change
- X-averaged positive electrode total interfacial current density
- X-averaged positive electrode total interfacial current density [A.m-2]
- X-averaged positive electrode total interfacial current density per volume [A.m-3]
- Positive electrode exchange current density
- X-averaged positive electrode exchange current density
- Positive electrode exchange current density [A.m-2]
- X-averaged positive electrode exchange current density [A.m-2]
- Positive electrode exchange current density per volume [A.m-3]
- X-averaged positive electrode exchange current density per volume [A.m-3]
- Positive electrode reaction overpotential
- X-averaged positive electrode reaction overpotential
- Positive electrode reaction overpotential [V]
- X-averaged positive electrode reaction overpotential [V]
- Positive electrode surface potential difference
- X-averaged positive electrode surface potential difference
- Positive electrode surface potential difference [V]
- X-averaged positive electrode surface potential difference [V]
- Positive electrode sei film overpotential
- X-averaged positive electrode sei film overpotential
- Positive electrode sei film overpotential [V]
- X-averaged positive electrode sei film overpotential [V]
- Positive electrode open circuit potential
- Positive electrode open circuit potential [V]
- X-averaged positive electrode open circuit potential
- X-averaged positive electrode open circuit potential [V]
- Positive electrode entropic change
- X-averaged positive electrode entropic change
- Negative electrode interfacial current density
- X-averaged negative electrode interfacial current density
- Negative electrode interfacial current density [A.m-2]
- X-averaged negative electrode interfacial current density [A.m-2]
- Negative electrode interfacial current density per volume [A.m-3]
- X-averaged negative electrode interfacial current density per volume [A.m-3]
- Positive electrode interfacial current density
- X-averaged positive electrode interfacial current density
- Positive electrode interfacial current density [A.m-2]
- X-averaged positive electrode interfacial current density [A.m-2]
- Positive electrode interfacial current density per volume [A.m-3]
- X-averaged positive electrode interfacial current density per volume [A.m-3]
- Interfacial current density
- Interfacial current density [A.m-2]
- Interfacial current density per volume [A.m-3]
- Exchange current density
- Exchange current density [A.m-2]
- Exchange current density per volume [A.m-3]
- Negative electrode oxygen interfacial current density
- X-averaged negative electrode oxygen interfacial current density
- Negative electrode oxygen interfacial current density [A.m-2]
- X-averaged negative electrode oxygen interfacial current density [A.m-2]
- Negative electrode oxygen interfacial current density per volume [A.m-3]
- X-averaged negative electrode oxygen interfacial current density per volume [A.m-3]
- Negative electrode oxygen exchange current density
- X-averaged negative electrode oxygen exchange current density
- Negative electrode oxygen exchange current density [A.m-2]
- X-averaged negative electrode oxygen exchange current density [A.m-2]
- Negative electrode oxygen exchange current density per volume [A.m-3]
- X-averaged negative electrode oxygen exchange current density per volume [A.m-3]
- Negative electrode oxygen reaction overpotential
- X-averaged negative electrode oxygen reaction overpotential
- Negative electrode oxygen reaction overpotential [V]
- X-averaged negative electrode oxygen reaction overpotential [V]
- Negative electrode oxygen open circuit potential
- Negative electrode oxygen open circuit potential [V]
- X-averaged negative electrode oxygen open circuit potential
- X-averaged negative electrode oxygen open circuit potential [V]
- Positive electrode oxygen interfacial current density
- X-averaged positive electrode oxygen interfacial current density
- Positive electrode oxygen interfacial current density [A.m-2]
- X-averaged positive electrode oxygen interfacial current density [A.m-2]
- Positive electrode oxygen interfacial current density per volume [A.m-3]
- X-averaged positive electrode oxygen interfacial current density per volume [A.m-3]
- Positive electrode oxygen exchange current density
- X-averaged positive electrode oxygen exchange current density
- Positive electrode oxygen exchange current density [A.m-2]
- X-averaged positive electrode oxygen exchange current density [A.m-2]
- Positive electrode oxygen exchange current density per volume [A.m-3]
- X-averaged positive electrode oxygen exchange current density per volume [A.m-3]
- Positive electrode oxygen reaction overpotential
- X-averaged positive electrode oxygen reaction overpotential
- Positive electrode oxygen reaction overpotential [V]
- X-averaged positive electrode oxygen reaction overpotential [V]
- Positive electrode oxygen open circuit potential
- Positive electrode oxygen open circuit potential [V]
- X-averaged positive electrode oxygen open circuit potential
- X-averaged positive electrode oxygen open circuit potential [V]
- Oxygen interfacial current density
- Oxygen interfacial current density [A.m-2]
- Oxygen interfacial current density per volume [A.m-3]
- Oxygen exchange current density
- Oxygen exchange current density [A.m-2]
- Oxygen exchange current density per volume [A.m-3]
- Negative electrode potential
- Negative electrode potential [V]
- X-averaged negative electrode potential
- X-averaged negative electrode potential [V]
- Negative electrode ohmic losses
- Negative electrode ohmic losses [V]
- X-averaged negative electrode ohmic losses
- X-averaged negative electrode ohmic losses [V]
- Gradient of negative electrode potential
- Negative electrode current density
- Negative electrode current density [A.m-2]
- Negative electrolyte potential
- Negative electrolyte potential [V]
- Separator electrolyte potential
- Separator electrolyte potential [V]
- Positive electrolyte potential
- Positive electrolyte potential [V]
- Electrolyte potential
- Electrolyte potential [V]
- X-averaged electrolyte potential
- X-averaged electrolyte potential [V]
- X-averaged negative electrolyte potential
- X-averaged negative electrolyte potential [V]
- X-averaged separator electrolyte potential
- X-averaged separator electrolyte potential [V]
- X-averaged positive electrolyte potential
- X-averaged positive electrolyte potential [V]
- X-averaged electrolyte overpotential
- X-averaged electrolyte overpotential [V]
- Gradient of negative electrolyte potential
- Gradient of separator electrolyte potential
- Gradient of positive electrolyte potential
- Gradient of electrolyte potential
- Electrolyte current density
- Electrolyte current density [A.m-2]
- Negative electrolyte current density
- Negative electrolyte current density [A.m-2]
- Positive electrolyte current density
- Positive electrolyte current density [A.m-2]
- X-averaged concentration overpotential
- X-averaged electrolyte ohmic losses
- X-averaged concentration overpotential [V]
- X-averaged electrolyte ohmic losses [V]
- Positive electrode potential
- Positive electrode potential [V]
- X-averaged positive electrode potential
- X-averaged positive electrode potential [V]
- Positive electrode ohmic losses
- Positive electrode ohmic losses [V]
- X-averaged positive electrode ohmic losses
- X-averaged positive electrode ohmic losses [V]
- Gradient of positive electrode potential
- Positive electrode current density
- Positive electrode current density [A.m-2]
- Electrode current density
- Positive current collector potential
- Positive current collector potential [V]
- Local voltage
- Local voltage [V]
- Terminal voltage
- Terminal voltage [V]
- X-averaged open circuit voltage
- Measured open circuit voltage
- X-averaged open circuit voltage [V]
- Measured open circuit voltage [V]
- X-averaged reaction overpotential
- X-averaged reaction overpotential [V]
- X-averaged sei film overpotential
- X-averaged sei film overpotential [V]
- X-averaged solid phase ohmic losses
- X-averaged solid phase ohmic losses [V]
- X-averaged battery open circuit voltage [V]
- Measured battery open circuit voltage [V]
- X-averaged battery reaction overpotential [V]
- X-averaged battery solid phase ohmic losses [V]
- X-averaged battery electrolyte ohmic losses [V]
- X-averaged battery concentration overpotential [V]
- Battery voltage [V]
- Change in measured open circuit voltage
- Change in measured open circuit voltage [V]
- Local ECM resistance
- Local ECM resistance [Ohm]
- Terminal power [W]
To help visualise the results, pybamm provides the `pybamm.ProcessedVariable` class, which takes the output of a solver and a variable, and allows the user to evaluate the value of that variable at any given time or $x$ value. These processed variables are created automatically when a variable is accessed from the solution dictionary.
```python
voltage = solution['Terminal voltage [V]']
c_s_n_surf = solution['Negative particle surface concentration']
c_s_p_surf = solution['Positive particle surface concentration']
```
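Processed variables are callable: passing an array of times returns interpolated values at those times. A minimal check using only the variables created above (the evaluation times below are arbitrary):
```python
# Quick sanity check: evaluate the processed terminal voltage at a few arbitrary times (seconds)
t_check = np.array([0, 900, 1800, 2700, 3600])
print(voltage(t_check))
```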
Once we have these variables in hand, we can begin generating plots using a library such as Matplotlib. Below we plot the terminal voltage and surface particle concentrations versus time.
```python
t = solution["Time [s]"].entries
x = solution["x [m]"].entries[:, 0]
f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(13,4))
ax1.plot(t, voltage(t))
ax1.set_xlabel(r'$Time [s]$')
ax1.set_ylabel('Terminal voltage [V]')
ax2.plot(t, c_s_n_surf(t=t, x=x[0])) # can evaluate at arbitrary x (single representative particle)
ax2.set_xlabel(r'$Time [s]$')
ax2.set_ylabel('Negative particle surface concentration')
ax3.plot(t, c_s_p_surf(t=t, x=x[-1])) # can evaluate at arbitrary x (single representative particle)
ax3.set_xlabel(r'$Time [s]$')
ax3.set_ylabel('Positive particle surface concentration')
plt.tight_layout()
plt.show()
```
Some of the output variables are defined over space as well as time. One option for visualising these variables is to use the `interact` slider widget. Below we plot the negative/positive particle concentration over $r$, using a slider to change the current time point.
```python
c_s_n = solution['Negative particle concentration']
c_s_p = solution['Positive particle concentration']
r_n = solution["r_n [m]"].entries[:, 0]
r_p = solution["r_p [m]"].entries[:, 0]
```
```python
c_s_n = solution['Negative particle concentration']
c_s_p = solution['Positive particle concentration']
r_n = solution["r_n [m]"].entries[:, 0, 0]
r_p = solution["r_p [m]"].entries[:, 0, 0]
def plot_concentrations(t):
f, (ax1, ax2) = plt.subplots(1, 2 ,figsize=(10,5))
plot_c_n, = ax1.plot(r_n, c_s_n(r=r_n,t=t,x=x[0])) # can evaluate at arbitrary x (single representative particle)
plot_c_p, = ax2.plot(r_p, c_s_p(r=r_p,t=t,x=x[-1])) # can evaluate at arbitrary x (single representative particle)
ax1.set_ylabel('Negative particle concentration')
ax2.set_ylabel('Positive particle concentration')
ax1.set_xlabel(r'$r_n$ [m]')
ax2.set_xlabel(r'$r_p$ [m]')
ax1.set_ylim(0, 1)
ax2.set_ylim(0, 1)
plt.show()
import ipywidgets as widgets
widgets.interact(plot_concentrations, t=widgets.FloatSlider(min=0,max=3600,step=10,value=0));
```
interactive(children=(FloatSlider(value=0.0, description='t', max=3600.0, step=10.0), Output()), _dom_classes=…
The `pybamm.QuickPlot` class can be used to plot a common set of useful outputs, which should give you a good initial overview of the model. The method `QuickPlot.dynamic_plot` employs the slider widget.
```python
quick_plot = pybamm.QuickPlot(solution)
quick_plot.dynamic_plot();
```
interactive(children=(FloatSlider(value=0.0, description='t', max=1.0, step=0.01), Output()), _dom_classes=('w…
## Dimensionless Parameters
In the table below, we provide the dimensionless parameters in the SPM in terms of the dimensional parameters in LCO.csv. We use a superscript * to indicate dimensional quantities.
| Parameter | Expression |Interpretation |
|:--------------------------|:----------------------------------------|:------------------------------------------|
| $L_{\text{k}}$ | $L_{\text{k}}^*/L^*$ | Ratio of region thickness to cell thickness|
|$\mathcal{C}_{\text{k}}$ | $\tau_{\text{k}}^*/\tau_{\text{d}}^*$ | Ratio of solid diffusion and discharge timescales |
|$\mathcal{C}_{\text{r,k}}$ |$\tau_{\text{r,k}}^*/\tau_{\text{d}}^*$ |Ratio of reaction and discharge timescales|
|$a_{R, \text{k}}$ |$a_{\text{k}}^* R_{\text{k}}^*$ | Product of particle radius and surface area to volume ratio|
|$\gamma_{\text{k}}$ |$c_{\text{k,max}}^*/c_{\text{n,max}}^*$ |Ratio of maximum lithium concentrations in solid|
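As an illustrative reading of the first row, and assuming the usual convention that $L^*$ is the total cell thickness $L^*_{\text{n}} + L^*_{\text{s}} + L^*_{\text{p}}$, the dimensionless negative electrode thickness would be
\begin{equation}
L_{\text{n}} = \frac{L_{\text{n}}^*}{L_{\text{n}}^* + L_{\text{s}}^* + L_{\text{p}}^*},
\end{equation}
and similarly for the separator and positive electrode.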
<a name="ref">[1]</a> Scott G. Marquis, Valentin Sulzer, Robert Timms, Colin P. Please, and S. Jon Chapman. "An asymptotic derivation of a single particle model with electrolyte." Journal of The Electrochemical Society, 166(15):A3693–A3706, 2019
```python
```
| 5db6220b9947efa151c069b8165c0b46eaf002c6 | 97,873 | ipynb | Jupyter Notebook | examples/notebooks/models/SPM.ipynb | kinnala/PyBaMM | 3c4ef83d1ea06287a55ceac5f25e139e54599ea9 | [
"BSD-3-Clause"
]
| null | null | null | examples/notebooks/models/SPM.ipynb | kinnala/PyBaMM | 3c4ef83d1ea06287a55ceac5f25e139e54599ea9 | [
"BSD-3-Clause"
]
| null | null | null | examples/notebooks/models/SPM.ipynb | kinnala/PyBaMM | 3c4ef83d1ea06287a55ceac5f25e139e54599ea9 | [
"BSD-3-Clause"
]
| null | null | null | 82.454086 | 41,512 | 0.777865 | true | 10,979 | Qwen/Qwen-72B | 1. YES
2. YES | 0.877477 | 0.851953 | 0.747569 | __label__eng_Latn | 0.961876 | 0.575185 |
# trigonometric functions
## definitions
The six trigonometric functions. In a right triangle, let the side opposite $\theta$, the side adjacent to it and the hypotenuse be
$a$, $b$ and $c$ respectively; then
- sine. $$\sin\theta = \frac{a}{c}$$
- cosine. $$\cos\theta = \frac{b}{c}$$
- tangent. $$\tan\theta = \frac{a}{b}$$
- cotangent. $$\cot\theta = \frac{b}{a}$$
- secant. $$\sec\theta = \frac{c}{b}$$
- cosecant. $$\csc\theta = \frac{c}{a}$$
Three definitions, each with a different range of validity.
- Right-triangle definition. Under this definition, $\theta \in [0, \frac{\pi}{2}]$.
- Unit-circle definition. A point on the unit circle centred at the origin has coordinates $(\cos \theta, \sin \theta)$.
  Under this definition, $\theta \in \mathbb{R}$. This definition corresponds to the hyperbola-based definition of the hyperbolic functions.
- Taylor-series definition. Under this definition, $\theta \in \mathbb{C}$.
\begin{align}
\sin x &= \sum_{n=0}^\infty \frac{(-1)^nx^{2n+1}}{(2n+1)!}\\
\cos x &= \sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n)!}
\end{align}
## geometric relations
Two different ways of representing the functions on the unit circle.
The relation between the curves and the corresponding function values.
## properties
- Relations.
\begin{align}
\tan\theta &= \frac{\sin\theta}{\cos\theta}\\
\cot\theta &= \frac{1}{\tan\theta}\\
\sec\theta &= \frac{1}{\cos\theta}\\
\csc\theta &= \frac{1}{\sin\theta}
\end{align}
- Periodicity. (A direct consequence of the unit-circle definition of the trigonometric functions.)
  $\tan$ and $\cot$ have periods $k\pi$; the other four have periods $2k\pi$.
- Properties on the complex domain. From the Taylor expansions of $\sin x$, $\cos x$ and $e^x$, Euler's
  formula follows:
$$e^{ix} = \cos x + i \sin x$$
$$\cos x = \operatorname{Re}(e^{ix}), \sin x = \operatorname{Im}(e^{ix})$$
$$\sin x = \frac{e^{ix} - e^{-ix}}{2i}, \cos x = \frac{e^{ix} + e^{-ix}}{2}$$
The complex sine and cosine are related to the real sine and cosine and to the real hyperbolic sine and cosine
as follows:
\begin{align}
\sin(x+iy) &= \sin x \cosh y + i \cos x \sinh y\\
\cos(x+iy) &= \cos x \cosh y - i \sin x \sinh y
\end{align}
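The identities above can be checked symbolically; a minimal sketch with sympy (the symbols `x`, `y` are introduced here only for this check):
```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I*y

# Euler's formula: exp(I*x) - (cos(x) + I*sin(x)) should reduce to 0
print(sp.expand(sp.exp(sp.I*x), complex=True) - (sp.cos(x) + sp.I*sp.sin(x)))

# sin(x + iy) = sin(x)cosh(y) + i*cos(x)sinh(y): the difference should simplify to 0
lhs = sp.expand(sp.sin(z), complex=True)
rhs = sp.sin(x)*sp.cosh(y) + sp.I*sp.cos(x)*sp.sinh(y)
print(sp.simplify(lhs - rhs))
```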
## etymology
- sine. Latin sinus for gulf or bay, means "open".
- cosine. The sine of the complementary or co-angle.
- tangent. So called because it can be represented as a line segment tangent
to the circle, that is, a line that touches the circle but does not cut it.
This is clear from the figure.
- cotangent. Tangent of co-angle.
- secant. So called because it represents the line that cuts the circle (from
Latin: secare, to cut). This is clear from the figure.
- cosecant. The secant of co-angle.
```python
```
| 7813e62c33dfec957844f2ac4828eb1df77dd84b | 99,391 | ipynb | Jupyter Notebook | math/special-functions.ipynb | Naitreey/notes-and-knowledge | 48603b2ad11c16d9430eb0293d845364ed40321c | [
"BSD-3-Clause"
]
| 5 | 2018-05-16T06:06:45.000Z | 2021-05-12T08:46:18.000Z | math/special-functions.ipynb | Naitreey/notes-and-knowledge | 48603b2ad11c16d9430eb0293d845364ed40321c | [
"BSD-3-Clause"
]
| 2 | 2018-04-06T01:46:22.000Z | 2019-02-13T03:11:33.000Z | math/special-functions.ipynb | Naitreey/notes-and-knowledge | 48603b2ad11c16d9430eb0293d845364ed40321c | [
"BSD-3-Clause"
]
| 2 | 2019-04-11T11:02:32.000Z | 2020-06-27T11:59:09.000Z | 849.495726 | 49,848 | 0.95724 | true | 943 | Qwen/Qwen-72B | 1. YES
2. YES | 0.891811 | 0.805632 | 0.718472 | __label__yue_Hant | 0.379698 | 0.507582 |
# Periodic Signals
*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Comunications Engineering, Universität Rostock. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).*
## Relation between Spectrum and Fourier Series
The Fourier transform $X(j \omega) = \mathcal{F} \{ x(t) \}$ of a periodic signal $x(t)$, [as derived before](spectrum.ipynb#Fourier-Transform), is a line spectrum. It consists of a weighted series of Dirac impulses. Periodic functions can be represented alternatively by a [Fourier series](https://en.wikipedia.org/wiki/Fourier_series). The relation between the spectrum $X(j \omega)$ of a periodic signal and its Fourier series coefficients is derived in the following.
The complex Fourier series of a periodic signal $x(t)$ is defined as
\begin{equation}
x(t) = \sum_{n = - \infty}^{\infty} X_n \, e^{j n \frac{2 \pi}{T_\text{p}} t}
\end{equation}
where $T_\text{p} > 0$ denotes the period of the signal and $X_n$ the Fourier series coefficients of $x(t)$. The Fourier series represents the signal as weighted superposition of complex exponential signals. The weights (expansion coefficients) $X_n$ are given as
\begin{equation}
X_n = \frac{1}{T_\text{p}} \int_{0}^{T_\text{p}} x(t) \, e^{- j n \frac{2 \pi}{T_\text{p}} t} \; dt
\end{equation}
Introducing the [Fourier transform $X(j \omega)$ of a periodic signal](spectrum.ipynb#Fourier-Transform) into the [inverse Fourier transform](../fourier_transform/definition.ipynb#Definition) yields
\begin{align}
x(t) &= \frac{1}{2 \pi} \int_{-\infty}^{\infty} X_0(j \omega) \cdot {\bot \!\! \bot \!\! \bot} \left( \frac{\omega T_\text{p}}{2 \pi} \right) \, e^{j \omega t} \; d \omega \\
&= \frac{1}{T_\text{p}} \sum_{\mu = -\infty}^{\infty} X_0 \left( j \, \mu \frac{2 \pi}{T_\text{p}} \right) \, e^{j \, \mu \frac{2 \pi}{T_\text{p}} t}
\end{align}
where $X_0(j \omega) = \mathcal{F} \{ x_0(t) \}$ denotes the Fourier transform of one period $x_0(t)$ of the periodic signal. Note, the [definition of the Dirac comb](spectrum.ipynb#The-Dirac-Comb) and the multiplication property of the Dirac impulse was used to derive the last equality. Comparing this result with the definition of the Fourier series reveals that both are equal for
\begin{equation}
X_n = \frac{1}{T_\text{p}} X_0 \left( j \, n \frac{2 \pi}{T_\text{p}} \right)
\end{equation}
The Fourier series coefficients $X_n$ of a periodic signal are equal to the scaled Fourier transform $X_0(j \omega)$ of one period of the signal at the frequencies $\omega = n \frac{2 \pi}{T_\text{p}}$.
**Example**
The Fourier series coefficients of the pulse train can be derived from the [Fourier transform of the pulse train](spectrum.ipynb#Fourier-Transform-of-the-Pulse-Train) as
\begin{equation}
X_n = \frac{T}{T_p} \, e^{-j \omega \frac{T}{2}} \cdot \text{sinc} \left( \frac{\omega T}{2} \right) \bigg\vert_{\omega = n \frac{2 \pi}{T_\text{p}}} = \frac{T}{T_p} \, e^{-j n \pi \frac{T}{T_\text{p}}} \cdot \text{sinc} \left( n \pi \frac{T}{T_\text{p}} \right)
\end{equation}
With these coefficients the pulse train can be represented by the Fourier series
\begin{equation}
x(t) = \frac{T}{T_p} \sum_{n = -\infty}^{\infty} \, e^{-j n \pi \frac{T}{T_\text{p}}} \cdot \text{sinc} \left( n \pi \frac{T}{T_\text{p}} \right) \, e^{j n \frac{2 \pi}{T_\text{p}} t}
\end{equation}
This series cannot be evaluated numerically due to its infinite limits. The series has to be truncated to a finite number of summands in a practical implementation. The consequences of truncating the series are illustrated in the following. First the weights $X_n$ of the Fourier series are defined
```python
import sympy as sym
%matplotlib inline
sym.init_printing()
n = sym.symbols('n', integer=True)
t = sym.symbols('t', real=True)
T = 2
Tp = 5
Xn = T/Tp * sym.exp(-sym.I * n * sym.pi * T/Tp) * sym.sinc(n * sym.pi * T/Tp)
Xn
```
Now the Fourier series is evaluated for a finite upper and lower limit $N$
\begin{equation}
x_N(t) = \sum_{n = -N}^{N} X_n \, e^{j n \frac{2 \pi}{T_\text{p}} t}
\end{equation}
```python
N = 15
x = sym.Sum(Xn * sym.exp(sym.I*n*2*sym.pi/Tp*t), (n, -N, N)).doit()
sym.plot(x, (t, 0, 10), xlabel='$t$', ylabel='$x_N(t)$')
```
<sympy.plotting.plot.Plot at 0x7f88c6e59d50>
Overshoots can be observed at the discontinuities of the pulse train. The relative magnitude of these overshoots remains constant at approximately 9%, even when the limits of the truncated Fourier series expansion are increased. This effect is known as the [*Gibbs phenomenon*](https://en.wikipedia.org/wiki/Gibbs_phenomenon). Truncated Fourier series are therefore not very well suited for the approximation of signals with discontinuities.
**Exercise**
* Examine the properties of the truncated Fourier series when you increase the limit $N$ in above example. Note: The evaluation of the Fourier series may take a while due to involved numerical complexity.
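One possible starting point for this exercise is to wrap the truncated series in a small helper taking the limit $N$ as an argument (a sketch reusing `Xn`, `n`, `t` and `Tp` from above; evaluation becomes slow for large limits):
```python
def truncated_series(N):
    # truncated Fourier series x_N(t) with summation limits -N..N
    return sym.Sum(Xn * sym.exp(sym.I*n*2*sym.pi/Tp*t), (n, -N, N)).doit()

for N in (5, 15, 30):
    sym.plot(truncated_series(N), (t, 0, 10), xlabel='$t$', ylabel='$x_N(t)$')
```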
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Continuous- and Discrete-Time Signals and Systems - Theory and Computational Examples*.
| 449c567c8c10275ac16ea6a147484dd9afb05dad | 255,482 | ipynb | Jupyter Notebook | periodic_signals/fourier_series.ipynb | spatialaudio/signals-and-systems-lecture | 93e2f3488dc8f7ae111a34732bd4d13116763c5d | [
"MIT"
]
| 243 | 2016-04-01T14:21:00.000Z | 2022-03-28T20:35:09.000Z | periodic_signals/fourier_series.ipynb | iamzhd1977/signals-and-systems-lecture | b134608d336ceb94d83cdb66bc11c6d4d035f99c | [
"MIT"
]
| 6 | 2016-04-11T06:28:17.000Z | 2021-11-10T10:59:35.000Z | periodic_signals/fourier_series.ipynb | iamzhd1977/signals-and-systems-lecture | b134608d336ceb94d83cdb66bc11c6d4d035f99c | [
"MIT"
]
| 63 | 2017-04-20T00:46:03.000Z | 2022-03-30T14:07:09.000Z | 67.89317 | 30,102 | 0.602872 | true | 1,676 | Qwen/Qwen-72B | 1. YES
2. YES | 0.819893 | 0.763484 | 0.625975 | __label__eng_Latn | 0.92964 | 0.292681 |
<a href="https://colab.research.google.com/github/john-s-butler-dit/Numerical-Analysis-Python/blob/master/Chapter%2006%20-%20Boundary%20Value%20Problems/603_Boundary%20Value%20Problem.ipynb" target="_parent"></a>
# Finite Difference Method
#### John S Butler [email protected]
[Course Notes](https://johnsbutler.netlify.com/files/Teaching/Numerical_Analysis_for_Differential_Equations.pdf)
[Github](https://github.com/john-s-butler-dit/Numerical-Analysis-Python)
## Overview
This notebook illustrates the finite difference method for a linear Boundary Value Problem.
The video below walks through the code.
```python
from IPython.display import HTML
HTML('')
```
## Introduction
To numerically approximate a linear Boundary Value Problem
\begin{equation}
y^{''}=f(x,y,y^{'}), \ \ \ a < x < b, \end{equation}
with the boundary conditions
\begin{equation}y(a)=\alpha,\end{equation} and
\begin{equation}y(b) =\beta,\end{equation}
is discretised into a system of difference equations.
The first derivative can be approximated by the difference operators:
\begin{equation} D^{+}U_{i}=\frac{U_{i+1}-U_{i}}{h_{i+1}} \ \ \ \mbox{ Forward,} \end{equation}
\begin{equation} D^{-}U_{i}=\frac{U_{i}-U_{i-1}}{h_i} \ \ \ \mbox{ Backward,} \end{equation}
or
\begin{equation}D^{0}U_{i}=\frac{U_{i+1}-U_{i-1}}{x_{i+1}-x_{i-1}}=\frac{U_{i+1}-U_{i-1}}{2h} \ \ \ \mbox{ Centered.} \end{equation}
The second derivative can be approximated by:
\begin{equation}\delta_x^{2}U_{i}=\frac{2}{x_{i+1}-x_{i-1}}\left(\frac{U_{i+1}-U_{i}}{x_{i+1}-x_{i}}-\frac{U_{i}-U_{i-1}}{x_{i}-x_{i-1}}\right)=\frac{U_{i+1}-2U_{i}+U_{i-1}}{h^2} \ \ \ \mbox{ Centered in $x$ direction}. \end{equation}
### Example Boundary Value Problem
To illustrate the method we will apply the finite difference method to this boundary value problem
\begin{equation} \frac{d^2 y}{dx^2} = 4y,\end{equation}
with the boundary conditions
\begin{equation} y(0)=1.1752, y(1)=10.0179. \end{equation}
```python
import numpy as np
import math
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
```
## Discrete Axis
The stepsize is defined as
\begin{equation}h=\frac{b-a}{N}\end{equation}
here it is
\begin{equation}h=\frac{1-0}{10}\end{equation}
giving
\begin{equation}x_i=0+0.1 i\end{equation}
for $i=0,1,...10.$
```python
## BVP
N=10
h=1/N
x=np.linspace(0,1,N+1)
fig = plt.figure(figsize=(10,4))
plt.plot(x,0*x,'o:',color='red')
plt.plot(x[0],0,'o:',color='green')
plt.plot(x[10],0,'o:',color='green')
plt.xlim((0,1))
plt.xlabel('x',fontsize=16)
plt.title('Illustration of discrete time points for h=%s'%(h),fontsize=32)
plt.show()
```
## The Difference Equation
To convert the boundary problem into a difference equation we use 1st and 2nd order difference operators.
The general difference equation is
\begin{equation} \frac{1}{h^2}\left(y_{i-1}-2y_i+y_{i+1}\right)=4y_i \ \ \ i=1,..,N-1. \end{equation}
Rearranging the equation we have the system of N-1 equations
\begin{equation}i=1: \frac{1}{0.1^2}\color{green}{y_{0}} -\left(\frac{2}{0.1^2}+4\right)y_1 +\frac{1}{0.1^2} y_{2}=0\end{equation}
\begin{equation}i=2: \frac{1}{0.1^2}y_{1} -\left(\frac{2}{0.1^2}+4\right)y_2 +\frac{1}{0.1^2} y_{3}=0\end{equation}
\begin{equation} ...\end{equation}
\begin{equation}i=8: \frac{1}{0.1^2}y_{7} -\left(\frac{2}{0.1^2}+4\right)y_8 +\frac{1}{0.1^2} y_{9}=0\end{equation}
\begin{equation}i=9: \frac{1}{0.1^2}y_{8} -\left(\frac{2}{0.1^2}+4\right)y_9 +\frac{1}{0.1^2} \color{green}{y_{10}}=0\end{equation}
where the green terms are the known boundary conditions.
Rearranging the equation we have the system of 9 equations
\begin{equation}i=1: -\left(\frac{2}{0.1^2}+4\right)y_1 +\frac{1}{0.1^2} y_{2}=-\frac{1}{0.1^2}\color{green}{y_{0}}\end{equation}
\begin{equation}i=2: \frac{1}{0.1^2}y_{1} -\left(\frac{2}{0.1^2}+4\right)y_2 +\frac{1}{0.1^2} y_{3}=0\end{equation}
\begin{equation} ...\end{equation}
\begin{equation}i=8: \frac{1}{0.1^2}y_{7} -\left(\frac{2}{0.1^2}+4\right)y_8 +\frac{1}{0.1^2} y_{9}=0\end{equation}
\begin{equation}i=9: \frac{1}{0.1^2}y_{8} -\left(\frac{2}{0.1^2}+4\right)y_9 =-\frac{1}{0.1^2} \color{green}{y_{10}}\end{equation}
where the green terms are the known boundary conditions.
This system can be put into matrix form
\begin{equation} A\color{red}{\mathbf{y}}=\mathbf{b} \end{equation}
Where A is a $9\times 9 $ matrix of the form
\begin{equation}
A=\left(\begin{array}{ccc ccc ccc}
-204&100&0& 0&0&0& 0&0&0\\
100&-204&100 &0&0&0& 0&0&0\\
0&100&-204& 100&0&0& 0&0&0\\
.&.&.& .&.&.& .&.&.\\
.&.&.& .&.&.& .&.&.\\
0&0&0& 0&0&0& 100&-204&100\\
0&0&0& 0&0&0& 0&100&-204
\end{array}\right)
\end{equation}
which can be represented graphically as:
```python
A=np.zeros((N-1,N-1))
# Diagonal
for i in range (0,N-1):
A[i,i]=-(2/(h*h)+4)
for i in range (0,N-2):
A[i+1,i]=1/(h*h)
A[i,i+1]=1/(h*h)
plt.imshow(A)
plt.xlabel('i',fontsize=16)
plt.ylabel('j',fontsize=16)
plt.yticks(np.arange(N-1), np.arange(1,N-0.9,1))
plt.xticks(np.arange(N-1), np.arange(1,N-0.9,1))
clb=plt.colorbar()
clb.set_label('Matrix value')
plt.title('Matrix A',fontsize=32)
plt.tight_layout()
plt.subplots_adjust()
plt.show()
```
$\mathbf{y}$ is the unknown vector which contains the numerical approximations of $y$.
\begin{equation}
\color{red}{\mathbf{y}}=\color{red}{
\left(\begin{array}{c} y_1\\
y_2\\
y_3\\
.\\
.\\
y_8\\
y_9
\end{array}\right).}
\end{equation}
```python
y=np.zeros((N+1))
# Boundary Condition
y[0]=1.1752
y[N]=10.0179
```
and the right-hand side $\mathbf{b}$ is a known $9\times 1$ vector containing the boundary conditions
\begin{equation}
\mathbf{b}=\left(\begin{array}{c}-117.52\\
0\\
0\\
.\\
.\\
0\\
-1001.79 \end{array}\right)
\end{equation}
```python
b=np.zeros(N-1)
# Boundary Condition
b[0]=-y[0]/(h*h)
b[N-2]=-y[N]/(h*h)
```
```python
```
## Solving the system
To solve the system, invert the matrix $A$ such that
\begin{equation}A^{-1}Ay=A^{-1}b\end{equation}
\begin{equation}y=A^{-1}b\end{equation}
The plot below shows the graphical representation of $A^{-1}$.
```python
invA=np.linalg.inv(A)
plt.imshow(invA)
plt.xlabel('i',fontsize=16)
plt.ylabel('j',fontsize=16)
plt.yticks(np.arange(N-1), np.arange(1,N-0.9,1))
plt.xticks(np.arange(N-1), np.arange(1,N-0.9,1))
clb=plt.colorbar()
clb.set_label('Matrix value')
plt.title(r'Matrix $A^{-1}$',fontsize=32)
plt.tight_layout()
plt.subplots_adjust()
plt.show()
y[1:N]=np.dot(invA,b)
```
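Forming $A^{-1}$ explicitly is done here for illustration; numerically it is usually preferable to solve the linear system directly. A sketch of the equivalent computation with `np.linalg.solve`:
```python
# Solve A y = b without forming the inverse explicitly (should match the result above)
y_direct = np.zeros(N+1)
y_direct[0] = 1.1752
y_direct[N] = 10.0179
y_direct[1:N] = np.linalg.solve(A, b)
print(np.allclose(y[1:N], y_direct[1:N]))
```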
## Result
The plot below shows the approximate solution of the Boundary Value Problem (blue v) and the exact solution (black dashed line).
```python
fig = plt.figure(figsize=(8,4))
plt.plot(x,y,'v',label='Finite Difference')
plt.plot(x,np.sinh(2*x+1),'k:',label='exact')
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc='best')
plt.show()
```
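The agreement with the exact solution $y(x)=\sinh(2x+1)$ can also be quantified by the largest nodal error (a quick check using the arrays already defined):
```python
# maximum absolute error of the finite difference solution at the grid points
exact = np.sinh(2*x + 1)
print('max abs error:', np.max(np.abs(y - exact)))
```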
```python
```
```python
```
```python
```
| 41239224846e33339306bc95e087c87158722d34 | 76,471 | ipynb | Jupyter Notebook | Chapter 06 - Boundary Value Problems/603_Boundary Value Problem.ipynb | jjcrofts77/Numerical-Analysis-Python | 97e4b9274397f969810581ff95f4026f361a56a2 | [
"MIT"
]
| 69 | 2019-09-05T21:39:12.000Z | 2022-03-26T14:00:25.000Z | Chapter 06 - Boundary Value Problems/603_Boundary Value Problem.ipynb | jjcrofts77/Numerical-Analysis-Python | 97e4b9274397f969810581ff95f4026f361a56a2 | [
"MIT"
]
| null | null | null | Chapter 06 - Boundary Value Problems/603_Boundary Value Problem.ipynb | jjcrofts77/Numerical-Analysis-Python | 97e4b9274397f969810581ff95f4026f361a56a2 | [
"MIT"
]
| 13 | 2021-06-17T15:34:04.000Z | 2022-01-14T14:53:43.000Z | 154.799595 | 17,414 | 0.863818 | true | 2,634 | Qwen/Qwen-72B | 1. YES
2. YES | 0.851953 | 0.737158 | 0.628024 | __label__eng_Latn | 0.603963 | 0.297441 |
# Random Signals and LTI-Systems
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).*
## Auto-Correlation Function
The auto-correlation function (ACF) $\varphi_{yy}[\kappa]$ of the output signal of an LTI system $y[k] = \mathcal{H} \{ x[k] \}$ is derived. It is assumed that the input signal is a wide-sense stationary (WSS) real-valued random process and that the LTI system has a real-valued impulse response $h[k] \in \mathbb{R}$.
Introducing the output relation $y[k] = h[k] * x[k]$ of an LTI system into the definition of the ACF and rearranging terms yields
\begin{equation}
\begin{split}
\varphi_{yy}[\kappa] &= E \{ y[k+\kappa] \cdot y[k] \} \\
&= E \left\{ \sum_{\mu = -\infty}^{\infty} h[\mu] \; x[k+\kappa-\mu] \cdot
\sum_{\nu = -\infty}^{\infty} h[\nu] \; x[k-\nu] \right\} \\
&= \underbrace{h[\kappa] * h[-\kappa]}_{\varphi_{hh}[\kappa]} * \varphi_{xx}[\kappa]
\end{split}
\end{equation}
where the ACF $\varphi_{hh}[\kappa]$ of the deterministic impulse response $h[k]$ is commonly termed as *filter ACF*. This is related to the [link between ACF and convolution](../random_signals/correlation_functions.ipynb#Definition). The relation above is known as the *Wiener-Lee theorem*. It states that the ACF of the output $\varphi_{yy}[\kappa]$ of an LTI system is given by the convolution of the input signal's ACF $\varphi_{xx}[\kappa]$ with the filter ACF $\varphi_{hh}[\kappa]$. For a system which just attenuates the input signal $y[k] = A \cdot x[k]$ with $A \in \mathbb{R}$, the ACF at the output is given as $\varphi_{yy}[\kappa] = A^2 \cdot \varphi_{xx}[\kappa]$.
### Example - System Response to White Noise
Let's assume that the wide-sense ergodic input signal $x[k]$ of an LTI system with impulse response $h[k] = \text{rect}_N[k]$ is normal distributed white noise. Introducing $\varphi_{xx}[\kappa] = N_0\, \delta[\kappa]$ and $h[k]$ into the Wiener-Lee theorem yields
\begin{equation}
\varphi_{yy}[\kappa] = N_0 \cdot \varphi_{hh}[\kappa] = N_0 \cdot (\text{rect}_N[\kappa] * \text{rect}_N[-\kappa])
\end{equation}
The example is evaluated numerically for $N_0 = 1$ and $N=5$
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
L = 10000 # number of samples
K = 30 # limit for lags in ACF
# generate input signal (white Gaussian noise)
np.random.seed(2)
x = np.random.normal(size=L)
# compute system response
y = np.convolve(x, [1, 1, 1, 1, 1], mode='full')
# compute and truncate ACF
acf = 1/len(y) * np.correlate(y, y, mode='full')
acf = acf[len(y)-K-1:len(y)+K-1]
kappa = np.arange(-K, K)
# plot ACF
plt.figure(figsize=(10, 6))
plt.stem(kappa, acf, use_line_collection=True)
plt.title('Estimated ACF of output signal $y[k]$')
plt.ylabel(r'$\hat{\varphi}_{yy}[\kappa]$')
plt.xlabel(r'$\kappa$')
plt.axis([-K, K, 1.2*min(acf), 1.1*max(acf)])
plt.grid()
```
**Exercise**
* Derive the theoretic result for $\varphi_{yy}[\kappa]$ by calculating the filter-ACF $\varphi_{hh}[\kappa]$.
* Why is the estimated ACF $\hat{\varphi}_{yy}[\kappa]$ of the output signal not exactly equal to its theoretic result $\varphi_{yy}[\kappa]$?
* Change the number of samples `L` and rerun the example. What changes?
Solution: The filter-ACF is given by $\varphi_{hh}[\kappa] = \text{rect}_N[\kappa] * \text{rect}_N[-\kappa]$. The convolution of two rectangular signals $\text{rect}_N[\kappa]$ results in a triangular signal. Taking the time reversal into account yields
\begin{equation}
\varphi_{hh}[\kappa] = \begin{cases}
N - |\kappa| & \text{for } -N < \kappa \leq N \\
0 & \text{otherwise}
\end{cases}
\end{equation}
for even $N$. The estimated ACF $\hat{\varphi}_{yy}[\kappa]$ differs from its theoretic value due to the statistical uncertainties when using random signals of finite length. Increasing its length `L` lowers the statistical uncertainties.
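To make the comparison explicit, the theoretic triangular ACF can be overlaid on the estimate from the example above (a sketch, with $N_0=1$ and $N=5$ as in the example):
```python
# theoretic output ACF: N0 * (N - |kappa|) for |kappa| <= N, zero otherwise
N0, Nh = 1, 5
acf_theory = N0 * np.maximum(Nh - np.abs(kappa), 0)

plt.figure(figsize=(10, 6))
plt.stem(kappa, acf, use_line_collection=True, label='estimate')
plt.plot(kappa, acf_theory, 'C1.', label='theory')
plt.xlabel(r'$\kappa$')
plt.legend()
plt.grid()
```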
## Cross-Correlation Function
The cross-correlation functions (CCFs) $\varphi_{xy}[\kappa]$ and $\varphi_{yx}[\kappa]$ between the in- and output signal of an LTI system $y[k] = \mathcal{H} \{ x[k] \}$ are derived. As for the ACF it is assumed that the input signal originates from a wide-sense stationary real-valued random process and that the LTI system's impulse response is real-valued, i.e. $h[k] \in \mathbb{R}$.
Introducing the convolution into the definition of the CCF and rearranging the terms yields
\begin{equation}
\begin{split}
\varphi_{xy}[\kappa] &= E \{ x[k+\kappa] \cdot y[k] \} \\
&= E \left\{ x[k+\kappa] \cdot \sum_{\mu = -\infty}^{\infty} h[\mu] \; x[k-\mu] \right\} \\
&= \sum_{\mu = -\infty}^{\infty} h[\mu] \cdot E \{ x[k+\kappa] \cdot x[k-\mu] \} \\
&= h[-\kappa] * \varphi_{xx}[\kappa]
\end{split}
\end{equation}
The CCF $\varphi_{xy}[\kappa]$ between in- and output is given as the time-reversed impulse response of the system convolved with the ACF of the input signal.
The CCF between out- and input is yielded by taking the symmetry relations of the CCF and ACF into account
\begin{equation}
\varphi_{yx}[\kappa] = \varphi_{xy}[-\kappa] = h[\kappa] * \varphi_{xx}[\kappa]
\end{equation}
The CCF $\varphi_{yx}[\kappa]$ between out- and input is given as the impulse response of the system convolved with the ACF of the input signal.
For a system which just attenuates the input signal $y[k] = A \cdot x[k]$, the CCFs between input and output are given as $\varphi_{xy}[\kappa] = A \cdot \varphi_{xx}[\kappa]$ and $\varphi_{yx}[\kappa] = A \cdot \varphi_{xx}[\kappa]$.
## System Identification by Cross-Correlation
The process of determining the impulse response or transfer function of a system is referred to as *system identification*. The CCFs of an LTI system play an important role in the estimation of the impulse response $h[k]$ of an unknown system. This is illustrated in the following.
The basic idea is to use a specific measurement signal as input signal to the system. Let's assume that the unknown LTI system is excited by [white noise](../random_signals/white_noise.ipynb). The ACF of the wide-sense stationary input signal $x[k]$ is then given as $\varphi_{xx}[\kappa] = N_0 \cdot \delta[\kappa]$. According to the relation derived above, the CCF between out- and input for this special choice of the input signal becomes
\begin{equation}
\varphi_{yx}[\kappa] = h[\kappa] * N_0 \cdot \delta[\kappa] = N_0 \cdot h[\kappa]
\end{equation}
For white noise as input signal $x[k]$, the impulse response of an LTI system can be estimated by estimating the CCF between its out- and input signals. Using noise as measurement signal instead of a Dirac impulse is beneficial since its [crest factor](https://en.wikipedia.org/wiki/Crest_factor) is limited.
### Example
The application of the CCF to the identification of a system is demonstrated. The system is excited by wide-sense ergodic normal distributed white noise with $N_0 = 1$. The ACF of the in- and output, as well as the CCF between out- and input is estimated and plotted.
```python
import scipy.signal as sig
N = 10000 # number of samples for input signal
K = 50 # limit for lags in ACF
# generate input signal
np.random.seed(5) # normally distributed (zero-mean, unit-variance) white noise
x = np.random.normal(size=N)
# impulse response of the system
h = np.concatenate((np.zeros(10), sig.triang(10), np.zeros(10)))
# output signal by convolution
y = np.convolve(h, x, mode='full')
# compute correlation functions
acfx = 1/len(x) * np.correlate(x, x, mode='full')
acfy = 1/len(y) * np.correlate(y, y, mode='full')
ccfyx = 1/len(y) * np.correlate(y, x, mode='full')
def plot_correlation_function(cf):
cf = cf[N-K-1:N+K-1]
kappa = np.arange(-len(cf)//2, len(cf)//2)
plt.stem(kappa, cf, use_line_collection=True)
plt.xlabel(r'$\kappa$')
plt.axis([-K, K, -0.2, 1.1*max(cf)])
# plot ACFs and CCF
plt.rc('figure', figsize=(10, 3))
plt.figure()
plot_correlation_function(acfx)
plt.title('Estimated ACF of input signal')
plt.ylabel(r'$\hat{\varphi}_{xx}[\kappa]$')
plt.figure()
plot_correlation_function(acfy)
plt.title('Estimated ACF of output signal')
plt.ylabel(r'$\hat{\varphi}_{yy}[\kappa]$')
plt.figure()
plot_correlation_function(ccfyx)
plt.plot(np.arange(len(h)), h, 'g-')
plt.title('Estimated and true impulse response')
plt.ylabel(r'$\hat{h}[k]$, $h[k]$')
```
**Exercise**
* Why is the estimated CCF $\hat{\varphi}_{yx}[k]$ not exactly equal to the true impulse response $h[k]$ of the system?
* What changes if you change the number of samples `N` of the input signal?
Solution: The derived relations for system identification hold for the case of a wide-sense ergodic input signal of infinite duration. Since we can only numerically simulate signals of finite duration, the observed deviations are a result of the resulting statistical uncertainties. Increasing the length `N` of the input signal improves the estimate of the impulse response.
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples*.
| 436d77679c268d326f258bbf9db943b6d747a61a | 67,925 | ipynb | Jupyter Notebook | random_signals_LTI_systems/correlation_functions.ipynb | Fun-pee/signal-processing | 205d5e55e3168a1ec9da76b569af92c0056619aa | [
"MIT"
]
| 3 | 2020-09-21T10:15:40.000Z | 2020-09-21T13:36:40.000Z | random_signals_LTI_systems/correlation_functions.ipynb | jools76/digital-signal-processing-lecture | 4bdfe13fa4a7502412f3f0d54deb8f034aef1ce2 | [
"MIT"
]
| null | null | null | random_signals_LTI_systems/correlation_functions.ipynb | jools76/digital-signal-processing-lecture | 4bdfe13fa4a7502412f3f0d54deb8f034aef1ce2 | [
"MIT"
]
| null | null | null | 209 | 16,644 | 0.892499 | true | 2,840 | Qwen/Qwen-72B | 1. YES
2. YES | 0.771843 | 0.863392 | 0.666403 | __label__eng_Latn | 0.95661 | 0.386609 |
```python
import numpy as np
import sympy as sy
from sympy import I, exp
```
```python
z = sy.symbols("z", real=False)
a,h,r1,s0,s1 = sy.symbols("a,h,r1,s0,s1")
```
```python
pd = 0.76*np.exp(1j*0.28)
print(np.conjugate(pd))
A2ndorder = (z-pd)*(z-np.conjugate(pd))
print(sy.collect(sy.simplify(sy.expand(A2ndorder)), z))
Acl = A2ndorder*(z-a)
print(sy.collect(sy.simplify(sy.expand(Acl)), z))
```
(0.730402133116-0.210030292909j)
z**2 - 1.46080426623237*z + 0.5776
-0.5776*a + z**3 + z**2*(-a - 1.46080426623237) + z*(1.46080426623237*a + 0.5776)
```python
Acp = sy.poly(Acl, z)
Ap = sy.poly((z-1)*(z-0.76), z)
Bp = sy.poly(-.10*z + 0.14, z)
Rp = sy.poly(z+r1, z)
Sp = sy.poly(s0*z + s1, z)
dioph=(Ap*Rp+Bp*Sp-Acp).all_coeffs()
```
```python
sol=sy.solve(dioph, (r1,s0,s1))
print(sol[r1])
sol[r1]-1.76-0.10*sol[s1]
```
-1.92372666904173*a + 1.2932173366584
-2.55546400366438*a + 0.235249605130106
```python
```
```python
np.exp(-0.15)
```
0.86070797642505781
```python
argp=np.sqrt(3)*0.15
argp*180/np.pi
```
14.885880176388383
```python
0.4/(2*np.sqrt(2))
```
0.1414213562373095
```python
np.exp(-0.28)
```
0.75578374145572547
```python
0.28*180.0/np.pi
```
16.04281826366305
```python
hh=0.14
-1+hh+np.exp(-2*hh)
```
-0.10421625854427452
```python
1-(1+hh)*np.exp(-2*hh)
```
0.13840653474047282
```python
np.exp(-2*hh)
```
0.75578374145572547
```python
sy.expand()
```
| 8c9d596fcbcc459c138e43796dcc319dbcfe89ee | 5,666 | ipynb | Jupyter Notebook | polynomial-design/notebooks/L9-pole-placement-polynomial-approach.ipynb | kjartan-at-tec/mr2007-computerized-control | 16e35f5007f53870eaf344eea1165507505ab4aa | [
"MIT"
]
| 2 | 2020-11-07T05:20:37.000Z | 2020-12-22T09:46:13.000Z | polynomial-design/notebooks/L9-pole-placement-polynomial-approach.ipynb | alfkjartan/control-computarizado | 5b9a3ae67602d131adf0b306f3ffce7a4914bf8e | [
"MIT"
]
| 4 | 2020-06-12T20:44:41.000Z | 2020-06-12T20:49:00.000Z | polynomial-design/notebooks/L9-pole-placement-polynomial-approach.ipynb | alfkjartan/control-computarizado | 5b9a3ae67602d131adf0b306f3ffce7a4914bf8e | [
"MIT"
]
| 1 | 2019-09-25T20:02:23.000Z | 2019-09-25T20:02:23.000Z | 17.596273 | 91 | 0.46682 | true | 634 | Qwen/Qwen-72B | 1. YES
2. YES | 0.853913 | 0.795658 | 0.679423 | __label__yue_Hant | 0.119354 | 0.416858 |
# Spring Element
## Theoretical background
The *Spring* element is a one-dimensional finite element in which the local
and global coordinates coincide. Each spring element has two nodes, as shown
in the figure below. Let the spring stiffness be denoted by $k$; the element
stiffness matrix is then given by:
\begin{equation}
K_{(e)} = \begin{bmatrix}
k & -k \\
-k & k
\end{bmatrix}
\end{equation}
The stiffness matrix of a *spring* element is clearly $2\,x\,2$, since the
element has two degrees of freedom, one at each node. Consequently, for a
system of *spring* elements with $n$ nodes, the size of the global stiffness
matrix $K$ is $n\,x\,n$. The global stiffness matrix is obtained by assembling
the element stiffness matrices $K_{(i)}$ for $i=1,2,...,n$, using the direct
stiffness method.
Once the global stiffness matrix $K$ has been obtained, we have a system of
equations of the form:
\begin{equation}
[K]\{U\} = \{F\}
\end{equation}
where $U$ is the global vector of nodal displacements and $F$ is the global
vector of nodal forces.
The resulting system of equations can be simplified by applying the boundary
conditions (displacement constraints), which generally leaves a smaller,
determined system that can be solved with standard linear algebra methods,
giving a solution of the form:
\begin{equation}
\overline{U} = \overline{K}^{-1}\, \overline{F}
\end{equation}
where $\overline{U}$, $\overline{K}$ and $\overline{F}$ are the variables described
above, after applying the corresponding boundary conditions.
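To make the assembly and solution steps concrete, below is a minimal NumPy sketch (independent of NuSA) that builds the global stiffness matrix for three springs in series by the direct stiffness method, applies the boundary conditions by keeping only the free degrees of freedom, and solves the reduced system. The stiffness values, load and fixed nodes follow the example in the next section; the variable names are illustrative.
```python
import numpy as np

# Illustrative data taken from the example below: three springs, four nodes,
# nodes 0 and 1 fixed, a force of 5000 lb applied at node 3 (0-based indexing).
k = [1000.0, 2000.0, 3000.0]            # spring stiffnesses
conn = [(0, 2), (2, 3), (3, 1)]         # element connectivity (node_i, node_j)
n_nodes = 4

# Assemble the global stiffness matrix by the direct stiffness method
K = np.zeros((n_nodes, n_nodes))
for ke, (i, j) in zip(k, conn):
    K[np.ix_([i, j], [i, j])] += ke * np.array([[1.0, -1.0], [-1.0, 1.0]])

F = np.zeros(n_nodes)
F[3] = 5000.0                           # applied nodal force

free = [2, 3]                           # unconstrained degrees of freedom
U = np.zeros(n_nodes)
U[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])   # solve the reduced system

R = K @ U - F                           # reaction forces (nonzero only at the fixed nodes)
print(U)
print(R)
```
Running this sketch reproduces the displacements and reactions obtained with NuSA in the example that follows.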
## A worked example in NuSA
**Example 1.** For the assembly shown in the following figure, compute
a) the global stiffness matrix, b) the displacements of nodes 3 and 4, c) the reaction
forces at nodes 1 and 2, and d) the force in each element. A force of 5000 lb
is applied at node 4 in the $x$ direction; the stiffness constant of each spring
is shown in the figure. Nodes 1 and 2 are fixed.
The steps to solve the problem using NuSA are summarized in the following
list:
1. Import the required libraries
1. Define constants or input data
1. Create a model of the corresponding type
1. Create nodes and elements
1. Add the nodes and elements to the model
1. Specify the loads and boundary conditions
1. Solve the model
1. Query the required output data
Following this methodology, we will now go through each of the steps
listed above.
#### Import the required libraries
The `core`, `model` and `element` modules are imported; they contain
all the classes needed to create and solve the finite element model.
```
from nusa.core import *
from nusa.model import *
from nusa.element import *
```
#### Define constants or input data
In this step we create variables holding the data used in the rest of the
procedure, which may be applied nodal forces, prescribed displacements, or
mechanical constants of the material.
For our example we define the force $P$ applied at node 4 and the stiffness
constant of each spring.
```
# Definiendo constantes
P = 5000.0
k1, k2, k3 = 1000, 2000, 3000
```
#### Create a model of the corresponding type
In this case the model is created by instantiating an object of the `SpringModel` class.
```
m1 = SpringModel("2D Model")
```
As can be seen, the only input argument is a name for the model, which is optional.
#### Create nodes and elements
```
# Nodos
n1 = Node((0,0))
n2 = Node((0,0))
n3 = Node((0,0))
n4 = Node((0,0))
# Elementos
e1 = Spring((n1,n3),k1)
e2 = Spring((n3,n4),k2)
e3 = Spring((n4,n2),k3)
```
#### Add the nodes and elements to the model
```
for nd in (n1,n2,n3,n4):
m1.addNode(nd)
for el in (e1,e2,e3):
m1.addElement(el)
```
#### Specify the loads and boundary conditions
```
m1.addForce(n4,(P,))
m1.addConstraint(n1,ux=0)
m1.addConstraint(n2,ux=0)
```
#### Solve the model
```
m1.solve()
```
#### Query the required output data
```
# a) Matriz global
print("a) Matriz global:\n {0}".format(m1.KG))
# b) Desplazamiento en los nodos 3 y 4
print("\nb) Desplazamientos de nodos 3 y 4")
print("UX3: {0}".format(n3.ux))
print("UX4: {0}".format(n4.ux))
# c) Fuerzas de reacción en los nodos 1 y 2
print("\nc) Fuerzas nodales en 1 y 2")
print("FX1: {0}".format(n1.fx))
print("FX2: {0}".format(n2.fx))
# d) Fuerzas en cada resorte
print("\nd) Fuerzas en elementos")
print("FE1:\n {0}".format(e1.fx))
print("FE2:\n {0}".format(e2.fx))
print("FE3:\n {0}".format(e3.fx))
```
We can then run the resulting script.
```python
from nusa import *
"""
Logan, D. (2007). A first course in the finite element analysis.
Example 2.1, pp. 42.
"""
P = 5000.0
k1, k2, k3 = 1000, 2000, 3000
# Model
m1 = SpringModel("2D Model")
# Nodes
n1 = Node((0,0))
n2 = Node((0,0))
n3 = Node((0,0))
n4 = Node((0,0))
# Elements
e1 = Spring((n1,n3),k1)
e2 = Spring((n3,n4),k2)
e3 = Spring((n4,n2),k3)
# Add elements
for nd in (n1,n2,n3,n4):
m1.add_node(nd)
for el in (e1,e2,e3):
m1.add_element(el)
m1.add_force(n4,(P,))
m1.add_constraint(n1,ux=0)
m1.add_constraint(n2,ux=0)
m1.solve()
# a) Matriz global
print("a) Matriz global:\n {0}".format(m1.KG))
# b) Desplazamiento en los nodos 3 y 4
print("\nb) Desplazamientos de nodos 3 y 4")
print("UX3: {0}".format(n3.ux))
print("UX4: {0}".format(n4.ux))
# c) Fuerzas de reacción en los nodos 1 y 2
print("\nc) Fuerzas nodales en 1 y 2")
print("FX1: {0}".format(n1.fx))
print("FX2: {0}".format(n2.fx))
# d) Fuerzas en cada resorte
print("\nd) Fuerzas en elementos")
print("FE1:\n {0}".format(e1.fx))
print("FE2:\n {0}".format(e2.fx))
print("FE3:\n {0}".format(e3.fx))
```
a) Matriz global:
[[ 1000. 0. -1000. 0.]
[ 0. 3000. 0. -3000.]
[-1000. 0. 3000. -2000.]
[ 0. -3000. -2000. 5000.]]
b) Desplazamientos de nodos 3 y 4
UX3: 0.9090909090909092
UX4: 1.3636363636363638
c) Fuerzas nodales en 1 y 2
FX1: -909.0909090909091
FX2: -4090.9090909090914
d) Fuerzas en elementos
FE1:
[[-909.09090909]
[ 909.09090909]]
FE2:
[[-909.09090909]
[ 909.09090909]]
FE3:
[[ 4090.90909091]
[-4090.90909091]]
```python
```
| ca8c22db1da3507053d25a609638fd5072331bfa | 9,537 | ipynb | Jupyter Notebook | docs/nusa-info/es/spring-element.ipynb | JorgeDeLosSantos/nusa | 05623a72b892330e4b0e059a03ac4614da934ce9 | [
"MIT"
]
| 92 | 2016-11-14T01:39:55.000Z | 2022-03-27T17:23:41.000Z | docs/nusa-info/es/spring-element.ipynb | JorgeDeLosSantos/nusa | 05623a72b892330e4b0e059a03ac4614da934ce9 | [
"MIT"
]
| 1 | 2017-11-30T05:04:02.000Z | 2018-08-29T04:31:39.000Z | docs/nusa-info/es/spring-element.ipynb | JorgeDeLosSantos/nusa | 05623a72b892330e4b0e059a03ac4614da934ce9 | [
"MIT"
]
| 31 | 2017-05-17T18:50:18.000Z | 2022-03-12T03:08:00.000Z | 32.328814 | 115 | 0.539583 | true | 2,219 | Qwen/Qwen-72B | 1. YES
2. YES | 0.863392 | 0.843895 | 0.728612 | __label__spa_Latn | 0.968922 | 0.531141 |
# Polar and Cylindrical Frame of Reference
Renato Naville Watanabe
Consider that we have the position vector $\bf\vec{r}$ of a particle, moving in a circular path indicated in the figure below by a dashed line. This vector ${\bf\vec{r}}(t)$ is described in a fixed reference frame as:
<span class="notranslate">
\begin{equation}
{\bf\vec{r}}(t) = {x}{\bf\hat{i}}+{y}{\bf\hat{j}} + {z}{\bf\hat{k}}
\end{equation}
</span>
Naturally, we could describe all the kinematic variables in the fixed reference frame. But in circular motions, it is convenient to define a basis with a vector in the direction of the position vector $\bf\vec{r}$. So, the vector $\bf\hat{e_R}$ is defined as:
<span class="notranslate">
\begin{equation}
{\bf\hat{e_R}} = \frac{\bf\vec{r}}{\Vert{\bf\vec{r} }\Vert}
\end{equation}
</span>
The second vector of the basis can be obtained by the cross multiplication between $\bf\hat{k}$ and $\bf\hat{e_R}$:
<span class="notranslate">
\begin{equation}
{\bf\hat{e_\theta}} = {\bf\hat{k}} \times {\bf\hat{e_R}}
\end{equation}
</span>
The third vector of the basis is the conventional ${\bf\hat{k}}$ vector.
This basis can be used also for non-circular movements. For a 3D movement, the versor ${\bf\hat{e_R}}$ is obtained by removing the projection of the vector ${\bf\vec{r}}$ onto the versor ${\bf\hat{k}}$:
<span class="notranslate">
\begin{equation}
{\bf\hat{e_R}} = \frac{\bf\vec{r} - ({\bf\vec{r}.{\bf\hat{k}}){\bf\hat{k}}}}{\Vert\bf\vec{r} - ({\bf\vec{r}.{\bf\hat{k}}){\bf\hat{k}}\Vert}}
\end{equation}
</span>
## Time-derivative of the versors ${\bf\hat{e_R}}$ and ${\bf\hat{e_\theta}}$
To obtain the expressions of the velocity and acceleration vectors, it is necessary to obtain the expressions of the time-derivative of the vectors ${\bf\hat{e_R}}$ and ${\bf\hat{e_\theta}}$.
This can be done by noting that:
<span class="notranslate">
\begin{align}
{\bf\hat{e_R}} &= \cos(\theta){\bf\hat{i}} + \sin(\theta){\bf\hat{j}}\\
{\bf\hat{e_\theta}} &= -\sin(\theta){\bf\hat{i}} + \cos(\theta){\bf\hat{j}}
\end{align}
</span>
Deriving ${\bf\hat{e_R}}$ we obtain:
<span class="notranslate">
\begin{equation}
\frac{d{\bf\hat{e_R}}}{dt} = -\sin(\theta)\dot\theta{\bf\hat{i}} + \cos(\theta)\dot\theta{\bf\hat{j}} = \dot{\theta}{\bf\hat{e_\theta}}
\end{equation}
</span>
Similarly, we obtain the time-derivative of ${\bf\hat{e_\theta}}$:
<span class="notranslate">
\begin{equation}
\frac{d{\bf\hat{e_\theta}}}{dt} = -\cos(\theta)\dot\theta{\bf\hat{i}} - \sin(\theta)\dot\theta{\bf\hat{j}} = -\dot{\theta}{\bf\hat{e_R}}
\end{equation}
</span>
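These two derivatives can be verified symbolically. The following is a minimal check with SymPy, assuming $\theta$ is an arbitrary function of time:
```python
import sympy as sym

t = sym.symbols('t')
theta = sym.Function('theta')(t)

e_R = sym.Matrix([sym.cos(theta), sym.sin(theta)])        # e_R in the fixed frame
e_theta = sym.Matrix([-sym.sin(theta), sym.cos(theta)])   # e_theta in the fixed frame

# Both expressions below simplify to the zero vector, confirming
# d(e_R)/dt = thetadot*e_theta and d(e_theta)/dt = -thetadot*e_R
print(sym.simplify(e_R.diff(t) - theta.diff(t)*e_theta))
print(sym.simplify(e_theta.diff(t) + theta.diff(t)*e_R))
```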
## Position, velocity and acceleration
### Position
The position vector $\bf\vec{r}$, from the definition of $\bf\hat{e_R}$, is:
<span class="notranslate">
\begin{equation}
{\bf\vec{r}} = R{\bf\hat{e_R}} + z{\bf\hat{k}}
\end{equation}
</span>
where $R = \Vert\bf\vec{r} - ({\bf\vec{r}.{\bf\hat{k}}){\bf\hat{k}}\Vert}$.
### Velocity
The velocity vector $\bf\vec{v}$ is obtained by deriving the vector $\bf\vec{r}$:
<span class="notranslate">
\begin{equation}
{\bf\vec{v}} = \frac{d(R{\bf\hat{e_R}})}{dt} + \dot{z}{\bf\hat{k}} = \dot{R}{\bf\hat{e_R}}+R\frac{d\bf\hat{e_R}}{dt}=\dot{R}{\bf\hat{e_R}}+R\dot{\theta}{\bf\hat{e_\theta}}+ \dot{z}{\bf\hat{k}}
\end{equation}
</span>
### Acceleration
The acceleration vector $\bf\vec{a}$ is obtained by deriving the velocity vector:
<span class="notranslate">
\begin{align}
{\bf\vec{a}} =& \frac{d(\dot{R}{\bf\hat{e_R}}+R\dot{\theta}{\bf\hat{e_\theta}}+\dot{z}{\bf\hat{k}})}{dt}=\\
=&\ddot{R}{\bf\hat{e_R}}+\dot{R}\frac{d\bf\hat{e_R}}{dt} + \dot{R}\dot{\theta}{\bf\hat{e_\theta}} + R\ddot{\theta}{\bf\hat{e_\theta}} + R\dot{\theta}\frac{d{\bf\hat{e_\theta}}}{dt} + \ddot{z}{\bf\hat{k}}=\\
=&\ddot{R}{\bf\hat{e_R}}+\dot{R}\dot{\theta}{\bf\hat{e_\theta}} + \dot{R}\dot{\theta}{\bf\hat{e_\theta}} + R\ddot{\theta}{\bf\hat{e_\theta}} - R\dot{\theta}^2{\bf\hat{e_R}}+ \ddot{z}{\bf\hat{k}} =\\
=&\ddot{R}{\bf\hat{e_R}}+2\dot{R}\dot{\theta}{\bf\hat{e_\theta}}+ R\ddot{\theta}{\bf\hat{e_\theta}} - {R}\dot{\theta}^2{\bf\hat{e_R}}+ \ddot{z}{\bf\hat{k}} =\\
=&(\ddot{R}-R\dot{\theta}^2){\bf\hat{e_R}}+(2\dot{R}\dot{\theta} + R\ddot{\theta}){\bf\hat{e_\theta}}+ \ddot{z}{\bf\hat{k}}
\end{align}
</span>
+ The term $\ddot{R}$ is an acceleration in the radial direction.
+ The term $R\ddot{\theta}$ is an angular acceleration.
+ The term $\ddot{z}$ is an acceleration in the $\bf\hat{k}$ direction.
+ The term $-R\dot{\theta}^2$ is the well known centripetal acceleration.
+ The term $2\dot{R}\dot{\theta}$ is known as Coriolis acceleration. This term may be difficult to understand. It appears when there is displacement in the radial and angular directions at the same time.
## Important to note
The reader must bear in mind that the use of a different basis to represent the position, velocity or acceleration vectors is only a different representation of the same vector. For example, for the acceleration vector:
<span class="notranslate">
\begin{equation}
{\bf\vec{a}} = \ddot{x}{\bf\hat{i}}+ \ddot{y}{\bf\hat{j}} + \ddot{z}{\bf\hat{k}}=(\ddot{R}-R\dot{\theta}^2){\bf\hat{e_R}}+(2\dot{R}\dot{\theta} + R\ddot{\theta}){\bf\hat{e_\theta}}+ \ddot{z}{\bf\hat{k}}=\dot{\Vert\bf\vec{v}\Vert}{\bf\hat{e}_t}+{\Vert\bf\vec{v}\Vert}^2\Vert{\bf\vec{C}} \Vert{\bf\hat{e}_n}
\end{equation}
</span>
In which the last equality is the acceleration vector represented in the path-coordinate of the particle (see http://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/Time-varying%20frames.ipynb).
## Example
Consider a particle following the spiral path described below:
<span class="notranslate">
\begin{equation}
{\bf\vec{r}}(t) = (2\sqrt{t}\cos(t)){\bf\hat{i}}+ (2\sqrt{t}\sin(t)){\bf\hat{j}}
\end{equation}
</span>
```python
import numpy as np
import sympy as sym
from sympy.plotting import plot_parametric,plot3d_parametric_line
from sympy.vector import CoordSys3D
import matplotlib.pyplot as plt
# from matplotlib import rc
# rc('text', usetex=True)
sym.init_printing()
```
### Solving numerically
```python
t = np.linspace(0.01,10,30).reshape(-1,1) #create a time vector and reshapes it to a column vector
R = 2*np.sqrt(t)
theta = t
rx = R*np.cos(t)
ry = R*np.sin(t)
r = np.hstack((rx, ry)) # creates the position vector by stacking rx and ry horizontally
```
```python
e_r = r/np.linalg.norm(r, axis=1, keepdims=True) # defines e_r vector
e_theta = np.cross([0,0,1],e_r)[:,0:-1] # defines e_theta vector
```
```python
dt = t[1] #defines delta_t
Rdot = np.diff(R, axis=0)/dt #find the R derivative
thetaDot = np.diff(theta, axis=0)/dt #find the angle derivative
v = Rdot*e_r[0:-1,:] +R[0:-1]*thetaDot*e_theta[0:-1,:] # find the linear velocity.
```
```python
Rddot = np.diff(Rdot, axis=0)/dt
thetaddot = np.diff(thetaDot, axis=0)/dt
```
```python
a = ((Rddot - R[1:-1]*thetaDot[0:-1]**2)*e_r[1:-1,:]
     + (2*Rdot[0:-1]*thetaDot[0:-1] + R[1:-1]*thetaddot)*e_theta[1:-1,:])  # radial and transversal terms
```
```python
from matplotlib.patches import FancyArrowPatch
%matplotlib inline
plt.rcParams['figure.figsize']=10,10
fig = plt.figure()
plt.plot(r[:,0],r[:,1],'.')
ax = fig.add_axes([0,0,1,1])
for i in np.arange(len(t)-2):
vec1 = FancyArrowPatch(r[i,:],r[i,:]+e_r[i,:],mutation_scale=30,color='r', label='e_r')
vec2 = FancyArrowPatch(r[i,:],r[i,:]+e_theta[i,:],mutation_scale=30,color='g', label='e_theta')
ax.add_artist(vec1)
ax.add_artist(vec2)
plt.xlim((-10,10))
plt.ylim((-10,10))
plt.grid()
plt.legend([vec1, vec2],[r'$\vec{e_r}$', r'$\vec{e_{\theta}}$'])
plt.show()
```
```python
from matplotlib.patches import FancyArrowPatch
%matplotlib inline
plt.rcParams['figure.figsize']=10,10
fig = plt.figure()
plt.plot(r[:,0],r[:,1],'.')
ax = fig.add_axes([0,0,1,1])
for i in np.arange(len(t)-2):
vec1 = FancyArrowPatch(r[i,:],r[i,:]+v[i,:],mutation_scale=10,color='r')
vec2 = FancyArrowPatch(r[i,:],r[i,:]+a[i,:],mutation_scale=10,color='g')
ax.add_artist(vec1)
ax.add_artist(vec2)
plt.xlim((-10,10))
plt.ylim((-10,10))
plt.grid()
plt.legend([vec1, vec2],[r'$\vec{v}$', r'$\vec{a}$'])
plt.show()
```
### Solved simbolically (extra reading)
```python
O = sym.vector.CoordSys3D(' ')
t = sym.symbols('t')
```
```python
r = 2*sym.sqrt(t)*sym.cos(t)*O.i+2*sym.sqrt(t)*sym.sin(t)*O.j
r
```
```python
plot_parametric(r.dot(O.i),r.dot(O.j),(t,0,10))
```
```python
e_r = r - r.dot(O.k)*O.k
e_r = e_r/sym.sqrt(e_r.dot(O.i)**2+e_r.dot(O.j)**2+e_r.dot(O.k)**2)
```
```python
e_r
```
```python
e_theta = O.k.cross(e_r)
e_theta
```
```python
from matplotlib.patches import FancyArrowPatch
plt.rcParams['figure.figsize']=10,10
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
ax.axis("on")
time = np.linspace(0,10,30)
for instant in time:
vt = FancyArrowPatch([float(r.dot(O.i).subs(t,instant)),float(r.dot(O.j).subs(t,instant))],
[float(r.dot(O.i).subs(t,instant))+float(e_r.dot(O.i).subs(t,instant)), float(r.dot(O.j).subs(t, instant))+float(e_r.dot(O.j).subs(t,instant))],
mutation_scale=20,
arrowstyle="->",color="r",label='${{e_r}}$')
vn = FancyArrowPatch([float(r.dot(O.i).subs(t, instant)),float(r.dot(O.j).subs(t,instant))],
[float(r.dot(O.i).subs(t, instant))+float(e_theta.dot(O.i).subs(t, instant)), float(r.dot(O.j).subs(t, instant))+float(e_theta.dot(O.j).subs(t, instant))],
mutation_scale=20,
arrowstyle="->",color="g",label='${{e_{theta}}}$')
ax.add_artist(vn)
ax.add_artist(vt)
plt.xlim((-10,10))
plt.ylim((-10,10))
plt.legend(handles=[vt,vn],fontsize=20)
plt.grid()
plt.show()
```
```python
R = 2*sym.sqrt(t)
```
```python
Rdot = sym.diff(R,t)
Rddot = sym.diff(Rdot,t)
Rddot
```
```python
v = Rdot*e_r + R*e_theta
```
```python
v
```
```python
a = (Rddot - R)*e_r + (2*Rdot*1+0)*e_theta
aCor = 2*Rdot*1*e_theta
aCor
```
```python
a
```
```python
from matplotlib.patches import FancyArrowPatch
plt.rcParams['figure.figsize'] = 10,10
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
ax.axis("on")
time = np.linspace(0.1,10,30)
for instant in time:
vt = FancyArrowPatch([float(r.dot(O.i).subs(t,instant)),float(r.dot(O.j).subs(t,instant))],
[float(r.dot(O.i).subs(t,instant))+float(v.dot(O.i).subs(t,instant)), float(r.dot(O.j).subs(t, instant))+float(v.dot(O.j).subs(t,instant))],
mutation_scale=20,
arrowstyle="->",color="r",label='${{v}}$')
vn = FancyArrowPatch([float(r.dot(O.i).subs(t, instant)),float(r.dot(O.j).subs(t,instant))],
[float(r.dot(O.i).subs(t, instant))+float(a.dot(O.i).subs(t, instant)), float(r.dot(O.j).subs(t, instant))+float(a.dot(O.j).subs(t, instant))],
mutation_scale=20,
arrowstyle="->",color="g",label='${{a}}$')
vc = FancyArrowPatch([float(r.dot(O.i).subs(t, instant)),float(r.dot(O.j).subs(t,instant))],
[float(r.dot(O.i).subs(t, instant))+float(aCor.dot(O.i).subs(t, instant)), float(r.dot(O.j).subs(t, instant))+float(aCor.dot(O.j).subs(t, instant))],
mutation_scale=20,
arrowstyle="->",color="b",label='${{a_{Cor}}}$')
ax.add_artist(vn)
ax.add_artist(vt)
ax.add_artist(vc)
plt.xlim((-10,10))
plt.ylim((-10,10))
plt.legend(handles=[vt,vn,vc],fontsize=20)
plt.grid()
plt.show()
```
## Problems
1. Problems from 15.1.1 to 15.1.14 from Ruina and Rudra's book,
2. Problems from 18.1.1 to 18.1.8 and 18.1.10 from Ruina and Rudra's book.
## Reference
- Ruina A, Rudra P (2019) [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press.
```python
```
| 3a437142aeaf2c6ad3111429fdbe505e512bd23a | 236,287 | ipynb | Jupyter Notebook | notebooks/.ipynb_checkpoints/PolarCoordinates-checkpoint.ipynb | erichuang2013/BMC | 18c08d9b581672fcf8e1132e37da2ee978f315dc | [
"CC-BY-4.0"
]
| 1 | 2019-10-18T22:00:48.000Z | 2019-10-18T22:00:48.000Z | notebooks/PolarCoordinates.ipynb | erichuang2013/BMC | 18c08d9b581672fcf8e1132e37da2ee978f315dc | [
"CC-BY-4.0"
]
| null | null | null | notebooks/PolarCoordinates.ipynb | erichuang2013/BMC | 18c08d9b581672fcf8e1132e37da2ee978f315dc | [
"CC-BY-4.0"
]
| null | null | null | 306.071244 | 151,756 | 0.924058 | true | 4,075 | Qwen/Qwen-72B | 1. YES
2. YES | 0.808067 | 0.743168 | 0.60053 | __label__eng_Latn | 0.462518 | 0.233562 |
# Bayesian Temporal Matrix Factorization
**Published**: October 8, 2019
**Author**: Xinyu Chen [[**GitHub homepage**](https://github.com/xinychen)]
**Download**: This Jupyter notebook is at our GitHub repository. If you want to evaluate the code, please download the notebook from the repository of [**tensor-learning**](https://github.com/xinychen/tensor-learning/blob/master/content/BTMF.ipynb).
## Abstract
Large-scale and multidimensional spatiotemporal data sets are becoming ubiquitous in many real-world applications such as monitoring traffic and air quality. Making predictions on these time series has become a critical challenge due to not only the large-scale and high-dimensional nature but also the considerable amount of missing data. In this work, we propose a Bayesian Temporal Matrix Factorization (BTMF) model for modeling multidimensional time series - and in particular spatiotemporal data - in the presence of missing data. By integrating low-rank matrix factorization and vector autoregressive (VAR) process into a single probabilistic graphical model, our model can effectively perform predictions without imputing those missing values. We develop efficient Gibbs sampling algorithms for model inference and test the proposed BTMF on several real-world spatiotemporal data sets for both missing data imputation and short-term rolling prediction tasks. This post is mainly about BTMF models and their **`Python`** implementation with an application of spatiotemporal data imputation.
## 1 Motivation
## 2 Problem Description
We assume a spatiotemporal setting for multidimensional time series data throughout this work. In general, modern spatiotemporal data sets collected from sensor networks can be organized as matrix time series. For example, we can denote by matrix $Y\in\mathbb{R}^{N\times T}$ a multivariate time series collected from $N$ locations/sensors on $T$ time stamps, with each row $$\boldsymbol{y}_{i}=\left(y_{i,1},y_{i,2},...,y_{i,t-1},y_{i,t},y_{i,t+1},...,y_{i,T}\right)$$
corresponding to the time series collected at location $i$.
As mentioned, making accurate predictions on incomplete time series is very challenging, while the missing data problem is almost inevitable in real-world applications. Figure 1 illustrates the prediction problem for incomplete time series data. Here we use $(i,t)\in\Omega$ to index the observed entries in matrix $Y$.
> **Figure 1**: Illustration of multivariate time series and the prediction problem in the presence of missing values (green: observed data; white: missing data; red: prediction).
## 3 Model Description
Given a partially observed spatiotemporal matrix $Y\in\mathbb{R}^{N \times T}$, one can factorize it into a spatial factor matrix $W\in\mathbb{R}^{R \times N}$ and a temporal factor matrix $X\in\mathbb{R}^{R \times T}$ following the general matrix factorization model:
\begin{equation}
Y\approx W^{\top}X,
\label{btmf_equation1}
\end{equation}
and element-wise, we have
\begin{equation}
y_{it}\approx \boldsymbol{w}_{i}^\top\boldsymbol{x}_{t}, \quad \forall (i,t),
\label{btmf_equation2}
\end{equation}
where vectors $\boldsymbol{w}_{i}$ and $\boldsymbol{x}_{t}$ refer to the $i$-th column of $W$ and the $t$-th column of $X$, respectively.
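As a small numerical illustration of this notation (sizes and values are arbitrary), the low-rank reconstruction of an entry $y_{it}$ is simply the inner product of the corresponding factor columns:
```python
import numpy as np

N, T, R = 5, 8, 3              # illustrative dimensions
W = np.random.rand(R, N)       # spatial factor matrix (R-by-N)
X = np.random.rand(R, T)       # temporal factor matrix (R-by-T)

Y_hat = W.T @ X                # low-rank approximation of Y
i, t = 2, 5
print(np.allclose(Y_hat[i, t], W[:, i] @ X[:, t]))   # True: the (i, t) entry is w_i^T x_t
```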
The standard matrix factorization model is a good approach to deal with the missing data problem; however, it cannot capture the dependencies among different columns in $X$, which are critical in modeling time series data. To better characterize the temporal dependencies and impose temporal smoothness, a novel AR regularizer is introduced on $X$ in TRMF (i.e., Temporal Regularized Matrix Factorization proposed by [Yu et al., 2016](https://www.cs.utexas.edu/~rofuyu/papers/tr-mf-nips.pdf)):
\begin{equation} \label{equ:VAR}
\begin{aligned}
\boldsymbol{x}_{t+1}&=\sum\nolimits_{k=1}^{d}A_{k}\boldsymbol{x}_{t+1-h_k}+\boldsymbol{\epsilon}_t, \\
&=A^\top \boldsymbol{v}_{t+1}+\boldsymbol{\epsilon}_{t}, \\
\end{aligned}
\end{equation}
where $\mathcal{L}=\left\{h_1,\ldots,h_k,\ldots,h_d\right\}$ is a lag set ($d$ is the order of this AR model), each $A_k$ ($k\in\left\{1,...,d\right\}$) is a $R\times R$ coefficient matrix, and $\boldsymbol{\epsilon}_t$ is a zero mean Gaussian noise vector. For brevity, matrix $A\in \mathbb{R}^{(R d) \times R}$ and vector $\boldsymbol{v}_{t+1}\in \mathbb{R}^{(R d) \times 1}$ are defined as
\begin{equation*}
A=\left[A_{1}, \ldots, A_{d}\right]^{\top} ,\quad \boldsymbol{v}_{t+1}=\left[\begin{array}{c}{\boldsymbol{x}_{t+1-h_1}} \\ {\vdots} \\ {\boldsymbol{x}_{t+1-h_d}}\end{array}\right] .
\end{equation*}
> **Figure 2**: A graphical illustration of the rolling prediction scheme using BTMF (with VAR process) (green: observed data; white: missing data; red: prediction).
In [Yu et al., 2016](https://www.cs.utexas.edu/~rofuyu/papers/tr-mf-nips.pdf), to avoid overfitting and reduce the number of parameters, the coefficient matrix in TRMF is further assumed to be diagonal, $A_k=\text{diag}(\boldsymbol{\theta}_{k})$. Therefore, they have
\begin{equation} \label{equ:AR}
\boldsymbol{x}_{t+1}=\boldsymbol{\theta}_{1}\circledast\boldsymbol{x}_{t+1-h_1}+\cdots+\boldsymbol{\theta}_{d}\circledast\boldsymbol{x}_{t+1-h_d}+\boldsymbol{\epsilon}_t,
\end{equation}
where the symbol $\circledast$ denotes the element-wise Hadamard product. However, unlike Equation (4), a vector autoregressive (VAR) model in Equation (3) is actually more powerful for capturing multivariate time series patterns.
> **Figure 3**: A graphical illustration of the rolling prediction scheme using BTMF (with AR process) (green: observed data; white: missing data; red: prediction).
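To make the structural difference between Equation (4) and Equation (3) concrete, the sketch below simulates both processes on a small set of latent factors with arbitrary coefficients and a short lag set; it only illustrates that the Hadamard form evolves each factor independently, while the VAR form couples all $R$ factors.
```python
import numpy as np

R, d, T = 3, 2, 50
time_lags = np.array([1, 2])                         # lag set {h_1, h_2} (illustrative)
theta = 0.4 * np.random.rand(d, R)                   # diagonal AR coefficients, Eq. (4)
A = [0.3 * np.random.rand(R, R) for _ in range(d)]   # full VAR coefficient matrices, Eq. (3)

x_ar = np.zeros((T, R))
x_var = np.zeros((T, R))
x_ar[: np.max(time_lags)] = x_var[: np.max(time_lags)] = np.random.rand(np.max(time_lags), R)

for t in range(np.max(time_lags), T):
    eps = 0.01 * np.random.randn(R)
    # Eq. (4): element-wise (Hadamard) product, each factor depends only on its own past
    x_ar[t] = sum(theta[k] * x_ar[t - time_lags[k]] for k in range(d)) + eps
    # Eq. (3): full coefficient matrices couple all R factors
    x_var[t] = sum(A[k] @ x_var[t - time_lags[k]] for k in range(d)) + eps
```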
In the following, we first introduce a Bayesian temporal matrix factorization model with an autoregressive model given in Equation (4), and then discuss another model with a vector autoregressive (VAR) model shown in Equation (3).
## 4 Bayesian Sequential Matrix Factorization (BSMF)
## 5 Bayesian Temporal Matrix Factorization with Vector Autoregressive Model
### 5.1 Model Specification
Following the general Bayesian probabilistic matrix factorization models (e.g., BPMF proposed by [Salakhutdinov & Mnih, 2008](https://www.cs.toronto.edu/~amnih/papers/bpmf.pdf)), we assume that each observed entry in $Y$ follows a Gaussian distribution with precision $\tau$:
\begin{equation}
y_{i,t}\sim\mathcal{N}\left(\boldsymbol{w}_i^\top\boldsymbol{x}_t,\tau^{-1}\right),\quad \left(i,t\right)\in\Omega.
\label{btmf_equation3}
\end{equation}
On the spatial dimension, we use a simple Gaussian factor matrix without imposing any dependencies explicitly:
\begin{equation}
\boldsymbol{w}_i\sim\mathcal{N}\left(\boldsymbol{\mu}_{w},\Lambda_w^{-1}\right),
\end{equation}
and we place a conjugate Gaussian-Wishart prior on the mean vector and the precision matrix:
\begin{equation}
\boldsymbol{\mu}_w | \Lambda_w \sim\mathcal{N}\left(\boldsymbol{\mu}_0,(\beta_0\Lambda_w)^{-1}\right),\Lambda_w\sim\mathcal{W}\left(W_0,\nu_0\right),
\end{equation}
where $\boldsymbol{\mu}_0\in \mathbb{R}^{R}$ is a mean vector, and $\mathcal{W}\left(W_0,\nu_0\right)$ is a Wishart distribution with an $R\times R$ scale matrix $W_0$ and $\nu_0$ degrees of freedom.
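For reference, drawing the spatial hyperparameters and a factor vector from this hierarchy can be sketched as follows; the non-informative values $\boldsymbol{\mu}_0=\boldsymbol{0}$, $W_0=I$, $\nu_0=R$, $\beta_0=1$ assumed here match the defaults used in the sampler implemented below.
```python
import numpy as np
from numpy.random import multivariate_normal as mvnrnd
from scipy.stats import wishart

R = 10                                                   # rank (illustrative)
mu0, W0, nu0, beta0 = np.zeros(R), np.eye(R), R, 1.0

Lambda_w = wishart(df=nu0, scale=W0).rvs()               # Lambda_w ~ W(W_0, nu_0)
mu_w = mvnrnd(mu0, np.linalg.inv(beta0 * Lambda_w))      # mu_w | Lambda_w ~ N(mu_0, (beta_0 Lambda_w)^{-1})
w_i = mvnrnd(mu_w, np.linalg.inv(Lambda_w))              # w_i ~ N(mu_w, Lambda_w^{-1})
print(w_i.shape)                                         # (10,)
```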
In modeling the temporal factor matrix $X$, we re-write the VAR process as:
\begin{equation}
\begin{aligned}
\boldsymbol{x}_{t}&\sim\begin{cases}
\mathcal{N}\left(\boldsymbol{0},I_R\right),&\text{if $t\in\left\{1,2,...,h_d\right\}$}, \\
\mathcal{N}\left(A^\top \boldsymbol{v}_{t},\Sigma\right),&\text{otherwise},\\
\end{cases}\\
\end{aligned}
\label{btmf_equation5}
\end{equation}
Since the mean vector is defined by VAR, we need to place the conjugate matrix normal inverse Wishart (MNIW) prior on the coefficient matrix $A$ and the covariance matrix $\Sigma$ as follows,
\begin{equation}
\begin{aligned}
A\sim\mathcal{MN}_{(Rd)\times R}\left(M_0,\Psi_0,\Sigma\right),\quad
\Sigma \sim\mathcal{IW}\left(S_0,\nu_0\right), \\
\end{aligned}
\end{equation}
where the probability density function for the $Rd$-by-$R$ random matrix $A$ has the form:
\begin{equation}
\begin{aligned}
&p\left(A\mid M_0,\Psi_0,\Sigma\right) \\
=&\left(2\pi\right)^{-R^2d/2}\left|\Psi_0\right|^{-R/2}\left|\Sigma\right|^{-Rd/2} \\
&\times \exp\left(-\frac{1}{2}\text{tr}\left[\Sigma^{-1}\left(A-M_0\right)^{\top}\Psi_{0}^{-1}\left(A-M_0\right)\right]\right), \\
\end{aligned}
\label{mnpdf}
\end{equation}
where $\Psi_0\in\mathbb{R}^{(Rd)\times (Rd)}$ and $\Sigma\in\mathbb{R}^{R\times R}$ play the role of covariance matrices.
For the only remaining parameter $\tau$, we place a Gamma prior $\tau\sim\text{Gamma}\left(\alpha,\beta\right)$ where $\alpha$ and $\beta$ are the shape and rate parameters, respectively.
The above specifies the full generative process of BTMF, which is also summarized in the Bayesian graphical model shown in Figure 4. Several parameters are introduced to define the prior distributions for the hyperparameters, including $\boldsymbol{\mu}_{0}$, $W_0$, $\nu_0$, $\beta_0$, $\alpha$, $\beta$, $M_0$, $\Psi_0$, and $S_0$. These parameters need to be provided in advance when training the model. However, it should be noted that the specification of these parameters has little impact on the final results, as the training data play a much more important role in defining the posteriors of the hyperparameters.
> **Figure 4**: An overview graphical model of BTMF (time lag set: $\left\{1,2,...,d\right\}$). The shaded nodes ($y_{i,t}$) are the observed data in $\Omega$.
### 5.2 Model Inference
Given the complex structure of BTMF, it is intractable to write down the posterior distribution. Here we rely on the MCMC technique for Bayesian learning. In detail, we introduce a Gibbs sampling algorithm by deriving the full conditional distributions for all parameters and hyperparameters. Thanks to the use of conjugate priors in Figure 4, we can actually write down all the conditional distributions analytically. Below we summarize the Gibbs sampling procedure.
#### 1) Sampling Factor Matrix $W$ and Its Hyperparameters
> For programming convenience, we use $W\in\mathbb{R}^{N\times R}$ to replace $W\in\mathbb{R}^{R\times N}$.
```python
import numpy as np
from numpy.linalg import inv as inv
from numpy.random import multivariate_normal as mvnrnd
from scipy.stats import wishart
def cov_mat(mat):
new_mat = mat - np.mean(mat, axis = 0)
return np.einsum('ti, tj -> ij', new_mat, new_mat)
def sample_factor_w(sparse_mat, binary_mat, W, X, tau):
"""Sampling N-by-R factor matrix W and its hyperparameters (mu_w, Lambda_w)."""
dim1, rank = W.shape
beta0 = 1
W_bar = np.mean(W, axis = 0)
var_mu_hyper = (dim1 * W_bar) / (dim1 + beta0)
var_W_hyper = inv(np.eye(rank) + cov_mat(W) + dim1 * beta0 / (dim1 + beta0) * np.outer(W_bar, W_bar))
var_Lambda_hyper = wishart(df = dim1 + rank, scale = var_W_hyper, seed = None).rvs()
var_mu_hyper = mvnrnd(var_mu_hyper, inv((dim1 + beta0) * var_Lambda_hyper))
for i in range(dim1):
pos0 = np.where(sparse_mat[i, :] != 0)
Xt = X[pos0[0], :]
var_mu = tau * np.matmul(Xt.T, sparse_mat[i, pos0[0]]) + np.matmul(var_Lambda_hyper, var_mu_hyper)
inv_var_Lambda = inv(tau * np.matmul(Xt.T, Xt) + var_Lambda_hyper)
W[i, :] = mvnrnd(np.matmul(inv_var_Lambda, var_mu), inv_var_Lambda)
return W
```
#### 2) Sampling VAR Coefficients $A$ and Its Hyperparameters
**Foundations of VAR**
Vector autoregression (VAR) is a multivariate extension of autoregression (AR). Formally, VAR for $R$-dimensional vectors $\boldsymbol{x}_{t}$ can be written as follows,
\begin{equation}
\begin{aligned}
\boldsymbol{x}_{t}&=A_{1} \boldsymbol{x}_{t-h_1}+\cdots+A_{d} \boldsymbol{x}_{t-h_d}+\boldsymbol{\epsilon}_{t}, \\
&= A^\top \boldsymbol{v}_{t}+\boldsymbol{\epsilon}_{t},~t=h_d+1, \ldots, T, \\
\end{aligned}
\end{equation}
where
\begin{equation}
A=\left[A_{1}, \ldots, A_{d}\right]^{\top} \in \mathbb{R}^{(R d) \times R},\quad \boldsymbol{v}_{t}=\left[\begin{array}{c}{\boldsymbol{x}_{t-h_1}} \\ {\vdots} \\ {\boldsymbol{x}_{t-h_d}}\end{array}\right] \in \mathbb{R}^{(R d) \times 1}.
\end{equation}
In the following, if we define
\begin{equation}
Z=\left[\begin{array}{c}{\boldsymbol{x}_{h_d+1}^{\top}} \\ {\vdots} \\ {\boldsymbol{x}_{T}^{\top}}\end{array}\right] \in \mathbb{R}^{(T-h_d) \times R},\quad Q=\left[\begin{array}{c}{\boldsymbol{v}_{h_d+1}^{\top}} \\ {\vdots} \\ {\boldsymbol{v}_{T}^{\top}}\end{array}\right] \in \mathbb{R}^{(T-h_d) \times(R d)},
\end{equation}
then, we could write the above mentioned VAR as
\begin{equation}
\underbrace{Z}_{(T-h_d)\times R}\approx \underbrace{Q}_{(T-h_d)\times (Rd)}\times \underbrace{A}_{(Rd)\times R}.
\end{equation}
> The first $h_d$ temporal factors $\boldsymbol{x}_{t},t=1,...,h_d$ can be collected analogously as $$Z_0=\left[\begin{array}{c}{\boldsymbol{x}_{1}^{\top}} \\ {\vdots} \\ {\boldsymbol{x}_{h_d}^{\top}}\end{array}\right] \in \mathbb{R}^{h_d \times R};$$ they follow the $\mathcal{N}\left(\boldsymbol{0},I_R\right)$ prior and do not enter the regression below.
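A minimal sketch of how $Z$ and $Q$ are built from a factor matrix $X$ (stored as $T$-by-$R$, as in the implementation below) and a lag set; this mirrors the first few lines of `sample_var_coefficient` further down.
```python
import numpy as np

T, R = 200, 10                            # illustrative sizes
time_lags = np.array([1, 2, 144])         # the lag set used later in the experiments
X = np.random.rand(T, R)                  # temporal factors, one row per time stamp
h_max = np.max(time_lags)

Z = X[h_max:T, :]                                             # (T - h_d)-by-R
Q = np.hstack([X[h_max - h : T - h, :] for h in time_lags])   # (T - h_d)-by-(R d)
print(Z.shape, Q.shape)                   # (56, 10) (56, 30)
```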
**Build a Bayesian VAR on temporal factors $\boldsymbol{x}_{t}$**
\begin{equation}
\begin{aligned}
\boldsymbol{x}_{t}&\sim\begin{cases}\mathcal{N}\left(A^\top \boldsymbol{v}_{t},\Sigma\right),~\text{if $t\in\left\{h_d+1,...,T\right\}$},\\{\mathcal{N}\left(\boldsymbol{0},I_R\right),~\text{otherwise}}.\end{cases}\\
A&\sim\mathcal{MN}_{(Rd)\times R}\left(M_0,\Psi_0,\Sigma\right), \\
\Sigma &\sim\mathcal{IW}\left(S_0,\nu_0\right), \\
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
&\mathcal{M N}_{(R d) \times R}\left(A | M_{0}, \Psi_{0}, \Sigma\right)\\
\propto|&\Sigma|^{-R d / 2} \exp \left(-\frac{1}{2} \operatorname{tr}\left[\Sigma^{-1}\left(A-M_{0}\right)^{\top} \Psi_{0}^{-1}\left(A-M_{0}\right)\right]\right), \\
\end{aligned}
\end{equation}
and
\begin{equation}
\mathcal{I} \mathcal{W}\left(\Sigma | S_{0}, \nu_{0}\right) \propto|\Sigma|^{-\left(\nu_{0}+R+1\right) / 2} \exp \left(-\frac{1}{2} \operatorname{tr}\left(\Sigma^{-1}S_{0}\right)\right).
\end{equation}
**Likelihood from temporal factors $\boldsymbol{x}_{t}$**

Since the first $h_d$ factors follow the $\mathcal{N}\left(\boldsymbol{0},I_R\right)$ prior and do not depend on $A$ or $\Sigma$, only the terms with $t>h_d$ enter this conditional:
\begin{equation}
\begin{aligned}
&\mathcal{L}\left(X\mid A,\Sigma\right) \\
\propto &\prod_{t=h_d+1}^{T}p\left(\boldsymbol{x}_{t}\mid A,\Sigma\right) \\
\propto &\left|\Sigma\right|^{-(T-h_d)/2}\exp\left\{-\frac{1}{2}\sum_{t=h_d+1}^{T}\left(\boldsymbol{x}_{t}-A^\top \boldsymbol{v}_{t}\right)^\top\Sigma^{-1}\left(\boldsymbol{x}_{t}-A^\top \boldsymbol{v}_{t}\right)\right\} \\
\propto &\left|\Sigma\right|^{-(T-h_d)/2}\exp\left\{-\frac{1}{2}\text{tr}\left[\Sigma^{-1}\left(Z-QA\right)^\top \left(Z-QA\right)\right]\right\}
\end{aligned}
\end{equation}
**Posterior distribution**
Consider
\begin{equation}
\begin{aligned}
&\left(A-M_{0}\right)^{\top} \Psi_{0}^{-1}\left(A-M_{0}\right)+S_0+\left(Z-QA\right)^\top \left(Z-QA\right) \\
=&A^\top\left(\Psi_0^{-1}+Q^\top Q\right)A-A^\top\left(\Psi_0^{-1}M_0+Q^\top Z\right) \\
&-\left(\Psi_0^{-1}M_0+Q^\top Z\right)^\top A \\
&+\left(\Psi_0^{-1}M_0+Q^\top Z\right)^\top\left(\Psi_0^{-1}+Q^\top Q\right)^{-1}\left(\Psi_0^{-1}M_0+Q^\top Z\right) \\
&-\left(\Psi_0^{-1}M_0+Q^\top Z\right)^\top\left(\Psi_0^{-1}+Q^\top Q\right)^{-1}\left(\Psi_0^{-1}M_0+Q^\top Z\right) \\
&+M_0^\top\Psi_0^{-1}M_0+S_0+Z^\top Z \\
=&\left(A-M^{*}\right)^\top\left(\Psi^{*}\right)^{-1}\left(A-M^{*}\right)+S^{*}, \\
\end{aligned}
\end{equation}
which is in the form of $\mathcal{MN}\left(\cdot\right)$ and $\mathcal{IW}\left(\cdot\right)$.
The $Rd$-by-$R$ matrix $A$ has a matrix normal distribution, and $R$-by-$R$ covariance matrix $\Sigma$ has an inverse Wishart distribution, that is,
\begin{equation}
A \sim \mathcal{M N}_{(R d) \times R}\left(M^{*}, \Psi^{*}, \Sigma\right), \quad \Sigma \sim \mathcal{I} \mathcal{W}\left(S^{*}, \nu^{*}\right),
\end{equation}
with
\begin{equation}
\begin{cases}
{\Psi^{*}=\left(\Psi_{0}^{-1}+Q^{\top} Q\right)^{-1}}, \\ {M^{*}=\Psi^{*}\left(\Psi_{0}^{-1} M_{0}+Q^{\top} Z\right)}, \\ {S^{*}=S_{0}+Z^\top Z+M_0^\top\Psi_0^{-1}M_0-\left(M^{*}\right)^\top\left(\Psi^{*}\right)^{-1}M^{*}}, \\
{\nu^{*}=\nu_{0}+T-h_d}.
\end{cases}
\end{equation}
```python
from scipy.stats import invwishart
def mnrnd(M, U, V):
"""
Generate matrix normal distributed random matrix.
M is a m-by-n matrix, U is a m-by-m matrix, and V is a n-by-n matrix.
"""
dim1, dim2 = M.shape
X0 = np.random.rand(dim1, dim2)
P = np.linalg.cholesky(U)
Q = np.linalg.cholesky(V)
return M + np.matmul(np.matmul(P, X0), Q.T)
def sample_var_coefficient(X, time_lags):
dim2, rank = X.shape
d = time_lags.shape[0]
Z_mat = X[np.max(time_lags) : dim2, :]
Q_mat = X[np.max(time_lags) - time_lags[0] : dim2 - time_lags[0], :]
for k in range(1, d):
Q_mat = np.append(Q_mat, X[np.max(time_lags) - time_lags[k] : dim2 - time_lags[k], :], axis = 1)
var_Psi = inv(np.eye(rank * d) + np.matmul(Q_mat.T, Q_mat))
var_M = np.matmul(var_Psi, np.matmul(Q_mat.T, Z_mat))
var_S = (np.eye(rank) + np.matmul(Z_mat.T, Z_mat) - np.matmul(np.matmul(var_M.T, inv(var_Psi)), var_M))
Sigma = invwishart(df = rank + dim2 - np.max(time_lags), scale = var_S, seed = None).rvs()
return mnrnd(var_M, var_Psi, Sigma), Sigma
```
#### 3) Sampling Factor Matrix $X$
**Posterior distribution**
\begin{equation}
\begin{aligned}
y_{it}&\sim\mathcal{N}\left(\boldsymbol{w}_{i}^\top\boldsymbol{x}_{t},\tau^{-1}\right),~\left(i,t\right)\in\Omega, \\
\boldsymbol{x}_{t}&\sim\begin{cases}\mathcal{N}\left(\sum_{k=1}^{d}A_{k} \boldsymbol{x}_{t-h_k},\Sigma\right),~\text{if $t\in\left\{h_d+1,...,T\right\}$},\\{\mathcal{N}\left(\boldsymbol{0},I\right),~\text{otherwise}}.\end{cases}\\
\end{aligned}
\end{equation}
If $t\in\left\{1,...,h_d\right\}$, parameters of the posterior distribution $\mathcal{N}\left(\boldsymbol{x}_{t}\mid \boldsymbol{\mu}_{t}^{*},\Sigma_{t}^{*}\right)$ are
\begin{equation}
\begin{aligned}
\Sigma_{t}^{*}&=\left(\sum_{k=1, h_{d}<t+h_{k} \leq T}^{d} {A}_{k}^{\top} \Sigma^{-1} A_{k}+\tau\sum_{i:(i,t)\in\Omega}\boldsymbol{w}_{i}\boldsymbol{w}_{i}^\top+I\right)^{-1}, \\
\boldsymbol{\mu}_{t}^{*}&=\Sigma_{t}^{*}\left(\sum_{k=1, h_{d}<t+h_{k} \leq T}^{d} A_{k}^{\top} \Sigma^{-1} \boldsymbol{\psi}_{t+h_{k}}+\tau\sum_{i:(i,t)\in\Omega}\boldsymbol{w}_{i}y_{it}\right). \\
\end{aligned}
\end{equation}
If $t\in\left\{h_d+1,...,T\right\}$, then parameters of the posterior distribution $\mathcal{N}\left(\boldsymbol{x}_{t}\mid \boldsymbol{\mu}_{t}^{*},\Sigma_{t}^{*}\right)$ are
\begin{equation}
\begin{aligned}
\Sigma_{t}^{*}&=\left(\sum_{k=1, h_{d}<t+h_{k} \leq T}^{d} {A}_{k}^{\top} \Sigma^{-1} A_{k}+\tau\sum_{i:(i,t)\in\Omega}\boldsymbol{w}_{i}\boldsymbol{w}_{i}^\top+\Sigma^{-1}\right)^{-1}, \\
\boldsymbol{\mu}_{t}^{*}&=\Sigma_{t}^{*}\left(\sum_{k=1, h_{d}<t+h_{k} \leq T}^{d} A_{k}^{\top} \Sigma^{-1} \boldsymbol{\psi}_{t+h_{k}}+\tau\sum_{i:(i,t)\in\Omega}\boldsymbol{w}_{i}y_{it}+\Sigma^{-1}\sum_{k=1}^{d}A_{k}\boldsymbol{x}_{t-h_k}\right), \\
\end{aligned}
\end{equation}
where
$$\boldsymbol{\psi}_{t+h_k}=\boldsymbol{x}_{t+h_k}-\sum_{l=1,l\neq k}^{d}A_{l}\boldsymbol{x}_{t+h_k-h_l}.$$
```python
def sample_factor_x(sparse_mat, binary_mat, time_lags, W, X, tau, A, Lambda_x):
dim2, rank = X.shape
d = time_lags.shape[0]
mat0 = np.matmul(Lambda_x, A.T)
mat1 = np.zeros((rank, rank, d))
mat2 = np.zeros((rank, rank))
for k in range(d):
Ak = A[k * rank : (k + 1) * rank, :]
mat1[:, :, k] = np.matmul(Ak, Lambda_x)
mat2 += np.matmul(mat1[:, :, k], Ak.T)
for t in range(dim2):
pos0 = np.where(sparse_mat[:, t] != 0)
Wt = W[pos0[0], :]
Nt = np.zeros(rank)
if t >= np.max(time_lags):
Qt = np.matmul(mat0, X[t - time_lags, :].reshape([rank * d]))
if t < dim2 - np.max(time_lags):
Mt = mat2.copy()
for k in range(d):
A0 = A.copy()
A0[k * rank : (k + 1) * rank, :] = 0
var5 = (X[t + time_lags[k], :]
- np.matmul(A0.T, X[t + time_lags[k]
- time_lags, :].reshape([rank * d])))
Nt += np.matmul(mat1[:, :, k], var5)
elif t >= dim2 - np.max(time_lags) and t < dim2 - np.min(time_lags):
index = list(np.where(t + time_lags < dim2))[0]
Mt = np.zeros((rank, rank))
for k in index:
Ak = A[k * rank : (k + 1) * rank, :]
Mt += np.matmul(np.matmul(Ak, Lambda_x), Ak.T)
A0 = A.copy()
A0[k * rank : (k + 1) * rank, :] = 0
var5 = (X[t + time_lags[k], :]
- np.matmul(A0.T, X[t + time_lags[k]
- time_lags, :].reshape([rank * d])))
Nt += np.matmul(np.matmul(Ak, Lambda_x), var5)
inv_var_Lambda = inv(tau * np.matmul(Wt.T, Wt) + Mt + Lambda_x)
elif t < np.max(time_lags):
Qt = np.zeros(rank)
index = list(np.where(t + time_lags >= np.max(time_lags)))[0]
Mt = np.zeros((rank, rank))
for k in index:
Ak = A[k * rank : (k + 1) * rank, :]
Mt += np.matmul(np.matmul(Ak, Lambda_x), Ak.T)
A0 = A.copy()
A0[k * rank : (k + 1) * rank, :] = 0
var5 = (X[t + time_lags[k], :]
- np.matmul(A0.T, X[t + time_lags[k]
- time_lags, :].reshape([rank * d])))
Nt += np.matmul(np.matmul(Ak, Lambda_x), var5)
inv_var_Lambda = inv(tau * np.matmul(Wt.T, Wt) + Mt + np.eye(rank))
var_mu = tau * np.matmul(Wt.T, sparse_mat[pos0[0], t]) + Nt + Qt
X[t, :] = mvnrnd(np.matmul(inv_var_Lambda, var_mu), inv_var_Lambda)
return X
```
```python
def sample_factor_x(sparse_mat, binary_mat, time_lags, W, X, tau, A, Lambda_x):
dim2, rank = X.shape
d = time_lags.shape[0]
for t in range(dim2):
pos0 = np.where(sparse_mat[:, t] != 0)
Wt = W[pos0[0], :]
Mt = np.zeros((rank, rank))
Nt = np.zeros(rank)
if t >= np.max(time_lags):
Qt = np.matmul(Lambda_x, np.matmul(A.T, X[t - time_lags, :].reshape([rank * d])))
if t >= np.max(time_lags) and t < dim2 - np.max(time_lags):
index = list(range(0, d))
elif t >= dim2 - np.max(time_lags) and t < dim2 - np.min(time_lags):
index = list(np.where(t + time_lags < dim2))[0]
elif t < np.max(time_lags):
Qt = np.zeros(rank)
index = list(np.where(t + time_lags >= np.max(time_lags)))[0]
if t < dim2 - np.min(time_lags):
for k in index:
Ak = A[k * rank : (k + 1) * rank, :]
Mt += np.matmul(np.matmul(Ak, Lambda_x), Ak.T)
A0 = A.copy()
A0[k * rank : (k + 1) * rank, :] = 0
var5 = (X[t + time_lags[k], :]
- np.matmul(A0.T, X[t + time_lags[k] - time_lags, :].reshape([rank * d])))
Nt += np.matmul(np.matmul(Ak, Lambda_x), var5)
var_mu = tau * np.matmul(Wt.T, sparse_mat[pos0[0], t]) + Nt + Qt
if t < np.max(time_lags):
inv_var_Lambda = inv(tau * np.matmul(Wt.T, Wt) + Mt + np.eye(rank))
else:
inv_var_Lambda = inv(tau * np.matmul(Wt.T, Wt) + Mt + Lambda_x)
X[t, :] = mvnrnd(np.matmul(inv_var_Lambda, var_mu), inv_var_Lambda)
return X
```
#### 4) Sampling Precision $\tau$
```python
def sample_precision_tau(sparse_mat, mat_hat, position):
var_alpha = 1e-6 + 0.5 * sparse_mat[position].shape[0]
var_beta = 1e-6 + 0.5 * np.sum((sparse_mat - mat_hat)[position] ** 2)
return np.random.gamma(var_alpha, 1 / var_beta)
```
#### 5) BTMF Implementation
- **Gibbs sampling**
- Burn-in process
- Sampling process
- **Imputation**
- **Prediction**
```python
def BTMF(dense_mat, sparse_mat, init, rank, time_lags, burn_iter, gibbs_iter):
"""Bayesian Temporal Matrix Factorization, BTMF."""
W = init["W"]
X = init["X"]
dim1, dim2 = sparse_mat.shape
d = time_lags.shape[0]
pos = np.where((dense_mat != 0) & (sparse_mat == 0))
position = np.where(sparse_mat != 0)
binary_mat = np.zeros((dim1, dim2))
binary_mat[position] = 1
tau = 1
mat_hat_plus = np.zeros((dim1, dim2))
for it in range(burn_iter + gibbs_iter):
W = sample_factor_w(sparse_mat, binary_mat, W, X, tau)
A, Sigma = sample_var_coefficient(X, time_lags)
X = sample_factor_x(sparse_mat, binary_mat, time_lags, W, X, tau, A, inv(Sigma))
mat_hat = np.matmul(W, X.T)
tau = sample_precision_tau(sparse_mat, mat_hat, position)
rmse = np.sqrt(np.sum((dense_mat[pos] - mat_hat[pos]) ** 2) / dense_mat[pos].shape[0])
if (it + 1) % 1 == 0 and it < burn_iter:
print('Iteration: {}'.format(it + 1))
print('RMSE: {:.6}'.format(rmse))
print()
if it + 1 > burn_iter:
mat_hat_plus += mat_hat
mat_hat = mat_hat_plus / gibbs_iter
final_mape = np.sum(np.abs(dense_mat[pos] - mat_hat[pos]) / dense_mat[pos]) / dense_mat[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_mat[pos] - mat_hat[pos]) ** 2) / dense_mat[pos].shape[0])
print('Imputation MAPE: {:.6}'.format(final_mape))
print('Imputation RMSE: {:.6}'.format(final_rmse))
print()
return mat_hat
```
## 6 Spatiotemporal Missing Data Imputation
```python
import scipy.io
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.4
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
.reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
# =============================================================================
# binary_tensor = np.zeros(tensor.shape)
# for i1 in range(tensor.shape[0]):
# for i2 in range(tensor.shape[1]):
# binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
# binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
sparse_mat = np.multiply(dense_mat, binary_mat)
```
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 144])
init = {"W": 0.1 * np.random.rand(dim1, rank), "X": 0.1 * np.random.rand(dim2, rank)}
burn_iter = 1000
gibbs_iter = 100
BTMF(dense_mat, sparse_mat, init, rank, time_lags, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
Iteration: 1
RMSE: 6.54248
Iteration: 2
RMSE: 5.76746
Iteration: 3
RMSE: 5.7331
Iteration: 4
RMSE: 5.74967
Iteration: 5
RMSE: 5.48408
Iteration: 6
RMSE: 5.36829
Iteration: 7
RMSE: 4.93222
Iteration: 8
RMSE: 4.70856
Iteration: 9
RMSE: 4.66498
Iteration: 10
RMSE: 4.74202
Iteration: 11
RMSE: 4.69944
Iteration: 12
RMSE: 4.64699
Iteration: 13
RMSE: 4.58376
Iteration: 14
RMSE: 4.56541
Iteration: 15
RMSE: 4.57934
Iteration: 16
RMSE: 4.53475
Iteration: 17
RMSE: 4.54741
Iteration: 18
RMSE: 4.60075
Iteration: 19
RMSE: 4.63094
Iteration: 20
RMSE: 4.59308
Iteration: 21
RMSE: 4.5828
Iteration: 22
RMSE: 4.54219
Iteration: 23
RMSE: 4.51025
Iteration: 24
RMSE: 4.482
Iteration: 25
RMSE: 4.46875
Iteration: 26
RMSE: 4.46078
Iteration: 27
RMSE: 4.45712
Iteration: 28
RMSE: 4.45436
Iteration: 29
RMSE: 4.4563
Iteration: 30
RMSE: 4.45248
Iteration: 31
RMSE: 4.45449
Iteration: 32
RMSE: 4.45683
Iteration: 33
RMSE: 4.4544
Iteration: 34
RMSE: 4.45835
Iteration: 35
RMSE: 4.46065
Iteration: 36
RMSE: 4.45809
Iteration: 37
RMSE: 4.46387
Iteration: 38
RMSE: 4.4565
Iteration: 39
RMSE: 4.46289
Iteration: 40
RMSE: 4.46492
Iteration: 41
RMSE: 4.46722
Iteration: 42
RMSE: 4.46884
Iteration: 43
RMSE: 4.46624
Iteration: 44
RMSE: 4.4711
Iteration: 45
RMSE: 4.47314
Iteration: 46
RMSE: 4.47297
Iteration: 47
RMSE: 4.47793
Iteration: 48
RMSE: 4.47917
Iteration: 49
RMSE: 4.48087
Iteration: 50
RMSE: 4.48217
Iteration: 51
RMSE: 4.48399
Iteration: 52
RMSE: 4.48549
Iteration: 53
RMSE: 4.48188
Iteration: 54
RMSE: 4.48285
Iteration: 55
RMSE: 4.48434
Iteration: 56
RMSE: 4.48593
Iteration: 57
RMSE: 4.49323
Iteration: 58
RMSE: 4.48787
Iteration: 59
RMSE: 4.49444
Iteration: 60
RMSE: 4.49477
Iteration: 61
RMSE: 4.49183
Iteration: 62
RMSE: 4.48968
Iteration: 63
RMSE: 4.49057
Iteration: 64
RMSE: 4.49384
Iteration: 65
RMSE: 4.49177
Iteration: 66
RMSE: 4.49185
Iteration: 67
RMSE: 4.49192
Iteration: 68
RMSE: 4.48892
Iteration: 69
RMSE: 4.48786
Iteration: 70
RMSE: 4.49479
Iteration: 71
RMSE: 4.49446
Iteration: 72
RMSE: 4.49888
Iteration: 73
RMSE: 4.49949
Iteration: 74
RMSE: 4.49848
Iteration: 75
RMSE: 4.49856
Iteration: 76
RMSE: 4.49747
Iteration: 77
RMSE: 4.50231
Iteration: 78
RMSE: 4.49866
Iteration: 79
RMSE: 4.49953
Iteration: 80
RMSE: 4.49757
Iteration: 81
RMSE: 4.49981
Iteration: 82
RMSE: 4.49921
Iteration: 83
RMSE: 4.49948
Iteration: 84
RMSE: 4.49625
Iteration: 85
RMSE: 4.50165
Iteration: 86
RMSE: 4.50148
Iteration: 87
RMSE: 4.50663
Iteration: 88
RMSE: 4.50327
Iteration: 89
RMSE: 4.50887
Iteration: 90
RMSE: 4.50576
Iteration: 91
RMSE: 4.5023
Iteration: 92
RMSE: 4.50629
Iteration: 93
RMSE: 4.50923
Iteration: 94
RMSE: 4.51307
Iteration: 95
RMSE: 4.50818
Iteration: 96
RMSE: 4.50953
Iteration: 97
RMSE: 4.50612
Iteration: 98
RMSE: 4.50728
Iteration: 99
RMSE: 4.50728
Iteration: 100
RMSE: 4.50946
Iteration: 101
RMSE: 4.50996
Iteration: 102
RMSE: 4.51108
Iteration: 103
RMSE: 4.5158
Iteration: 104
RMSE: 4.51864
Iteration: 105
RMSE: 4.51461
Iteration: 106
RMSE: 4.51631
Iteration: 107
RMSE: 4.509
Iteration: 108
RMSE: 4.50882
Iteration: 109
RMSE: 4.51614
Iteration: 110
RMSE: 4.51157
Iteration: 111
RMSE: 4.51044
Iteration: 112
RMSE: 4.50869
Iteration: 113
RMSE: 4.50927
Iteration: 114
RMSE: 4.50959
Iteration: 115
RMSE: 4.51389
Iteration: 116
RMSE: 4.51256
Iteration: 117
RMSE: 4.51325
Iteration: 118
RMSE: 4.50974
Iteration: 119
RMSE: 4.51655
Iteration: 120
RMSE: 4.5115
Iteration: 121
RMSE: 4.51645
Iteration: 122
RMSE: 4.51753
Iteration: 123
RMSE: 4.51597
Iteration: 124
RMSE: 4.51971
Iteration: 125
RMSE: 4.52052
Iteration: 126
RMSE: 4.51515
Iteration: 127
RMSE: 4.51936
Iteration: 128
RMSE: 4.52494
Iteration: 129
RMSE: 4.52475
Iteration: 130
RMSE: 4.52563
Iteration: 131
RMSE: 4.5213
Iteration: 132
RMSE: 4.51497
Iteration: 133
RMSE: 4.50955
Iteration: 134
RMSE: 4.51106
Iteration: 135
RMSE: 4.51367
Iteration: 136
RMSE: 4.51811
Iteration: 137
RMSE: 4.51519
Iteration: 138
RMSE: 4.5184
Iteration: 139
RMSE: 4.51618
Iteration: 140
RMSE: 4.51715
Iteration: 141
RMSE: 4.5148
Iteration: 142
RMSE: 4.51163
Iteration: 143
RMSE: 4.51101
Iteration: 144
RMSE: 4.50948
Iteration: 145
RMSE: 4.50372
Iteration: 146
RMSE: 4.50677
Iteration: 147
RMSE: 4.51051
Iteration: 148
RMSE: 4.51091
Iteration: 149
RMSE: 4.512
Iteration: 150
RMSE: 4.51022
Iteration: 151
RMSE: 4.51022
Iteration: 152
RMSE: 4.50981
Iteration: 153
RMSE: 4.51303
Iteration: 154
RMSE: 4.51394
Iteration: 155
RMSE: 4.51371
Iteration: 156
RMSE: 4.50995
Iteration: 157
RMSE: 4.51253
Iteration: 158
RMSE: 4.51617
Iteration: 159
RMSE: 4.51278
Iteration: 160
RMSE: 4.52169
Iteration: 161
RMSE: 4.52016
Iteration: 162
RMSE: 4.51821
Iteration: 163
RMSE: 4.51732
Iteration: 164
RMSE: 4.51899
Iteration: 165
RMSE: 4.52507
Iteration: 166
RMSE: 4.5249
Iteration: 167
RMSE: 4.52547
Iteration: 168
RMSE: 4.52445
Iteration: 169
RMSE: 4.52202
Iteration: 170
RMSE: 4.52317
Iteration: 171
RMSE: 4.52458
Iteration: 172
RMSE: 4.52301
Iteration: 173
RMSE: 4.52225
Iteration: 174
RMSE: 4.5229
Iteration: 175
RMSE: 4.51735
Iteration: 176
RMSE: 4.5254
Iteration: 177
RMSE: 4.51979
Iteration: 178
RMSE: 4.52697
Iteration: 179
RMSE: 4.51904
Iteration: 180
RMSE: 4.51799
Iteration: 181
RMSE: 4.52044
Iteration: 182
RMSE: 4.52175
Iteration: 183
RMSE: 4.52076
Iteration: 184
RMSE: 4.5163
Iteration: 185
RMSE: 4.51807
Iteration: 186
RMSE: 4.51643
Iteration: 187
RMSE: 4.521
Iteration: 188
RMSE: 4.52175
Iteration: 189
RMSE: 4.51775
Iteration: 190
RMSE: 4.51784
Iteration: 191
RMSE: 4.51418
Iteration: 192
RMSE: 4.51654
Iteration: 193
RMSE: 4.51179
Iteration: 194
RMSE: 4.51213
Iteration: 195
RMSE: 4.50986
Iteration: 196
RMSE: 4.51197
Iteration: 197
RMSE: 4.51769
Iteration: 198
RMSE: 4.51498
Iteration: 199
RMSE: 4.51622
Iteration: 200
RMSE: 4.51443
Iteration: 201
RMSE: 4.51696
Iteration: 202
RMSE: 4.51898
Iteration: 203
RMSE: 4.51956
Iteration: 204
RMSE: 4.52316
Iteration: 205
RMSE: 4.51832
Iteration: 206
RMSE: 4.51886
Iteration: 207
RMSE: 4.52576
Iteration: 208
RMSE: 4.5217
Iteration: 209
RMSE: 4.5252
Iteration: 210
RMSE: 4.5261
Iteration: 211
RMSE: 4.52519
Iteration: 212
RMSE: 4.52557
Iteration: 213
RMSE: 4.52475
Iteration: 214
RMSE: 4.52106
Iteration: 215
RMSE: 4.52829
Iteration: 216
RMSE: 4.53196
Iteration: 217
RMSE: 4.52872
Iteration: 218
RMSE: 4.53156
Iteration: 219
RMSE: 4.52768
Iteration: 220
RMSE: 4.52834
Iteration: 221
RMSE: 4.53278
Iteration: 222
RMSE: 4.53164
Iteration: 223
RMSE: 4.52732
Iteration: 224
RMSE: 4.53258
Iteration: 225
RMSE: 4.52714
Iteration: 226
RMSE: 4.52354
Iteration: 227
RMSE: 4.52532
Iteration: 228
RMSE: 4.52041
Iteration: 229
RMSE: 4.52318
Iteration: 230
RMSE: 4.51986
Iteration: 231
RMSE: 4.52249
Iteration: 232
RMSE: 4.52028
Iteration: 233
RMSE: 4.52147
Iteration: 234
RMSE: 4.51878
Iteration: 235
RMSE: 4.52191
Iteration: 236
RMSE: 4.52596
Iteration: 237
RMSE: 4.52439
Iteration: 238
RMSE: 4.52472
Iteration: 239
RMSE: 4.52445
Iteration: 240
RMSE: 4.525
Iteration: 241
RMSE: 4.52026
Iteration: 242
RMSE: 4.52282
Iteration: 243
RMSE: 4.5235
Iteration: 244
RMSE: 4.52011
Iteration: 245
RMSE: 4.52712
Iteration: 246
RMSE: 4.52684
Iteration: 247
RMSE: 4.52991
Iteration: 248
RMSE: 4.52969
Iteration: 249
RMSE: 4.53143
Iteration: 250
RMSE: 4.51897
Iteration: 251
RMSE: 4.52219
Iteration: 252
RMSE: 4.52754
Iteration: 253
RMSE: 4.52623
Iteration: 254
RMSE: 4.52428
Iteration: 255
RMSE: 4.53503
Iteration: 256
RMSE: 4.53656
Iteration: 257
RMSE: 4.53299
Iteration: 258
RMSE: 4.52972
Iteration: 259
RMSE: 4.5281
Iteration: 260
RMSE: 4.52628
Iteration: 261
RMSE: 4.52702
Iteration: 262
RMSE: 4.52688
Iteration: 263
RMSE: 4.52302
Iteration: 264
RMSE: 4.52649
Iteration: 265
RMSE: 4.52815
Iteration: 266
RMSE: 4.52705
Iteration: 267
RMSE: 4.52549
Iteration: 268
RMSE: 4.52887
Iteration: 269
RMSE: 4.52597
Iteration: 270
RMSE: 4.52783
Iteration: 271
RMSE: 4.52954
Iteration: 272
RMSE: 4.52559
Iteration: 273
RMSE: 4.5254
Iteration: 274
RMSE: 4.53485
Iteration: 275
RMSE: 4.53453
Iteration: 276
RMSE: 4.53197
Iteration: 277
RMSE: 4.53227
Iteration: 278
RMSE: 4.53732
Iteration: 279
RMSE: 4.5352
Iteration: 280
RMSE: 4.53879
Iteration: 281
RMSE: 4.53062
Iteration: 282
RMSE: 4.53645
Iteration: 283
RMSE: 4.53852
Iteration: 284
RMSE: 4.53585
Iteration: 285
RMSE: 4.52965
Iteration: 286
RMSE: 4.5337
Iteration: 287
RMSE: 4.52432
Iteration: 288
RMSE: 4.5294
Iteration: 289
RMSE: 4.53376
Iteration: 290
RMSE: 4.52812
Iteration: 291
RMSE: 4.5261
Iteration: 292
RMSE: 4.5263
Iteration: 293
RMSE: 4.52704
Iteration: 294
RMSE: 4.52451
Iteration: 295
RMSE: 4.52665
Iteration: 296
RMSE: 4.52461
Iteration: 297
RMSE: 4.52631
Iteration: 298
RMSE: 4.51936
Iteration: 299
RMSE: 4.52083
Iteration: 300
RMSE: 4.51877
Iteration: 301
RMSE: 4.52033
Iteration: 302
RMSE: 4.52117
Iteration: 303
RMSE: 4.52341
Iteration: 304
RMSE: 4.52289
Iteration: 305
RMSE: 4.52642
Iteration: 306
RMSE: 4.52659
Iteration: 307
RMSE: 4.52489
Iteration: 308
RMSE: 4.51955
Iteration: 309
RMSE: 4.52345
Iteration: 310
RMSE: 4.52159
Iteration: 311
RMSE: 4.52437
Iteration: 312
RMSE: 4.52354
Iteration: 313
RMSE: 4.52731
Iteration: 314
RMSE: 4.52163
Iteration: 315
RMSE: 4.52916
Iteration: 316
RMSE: 4.53132
Iteration: 317
RMSE: 4.5321
Iteration: 318
RMSE: 4.5296
Iteration: 319
RMSE: 4.53563
Iteration: 320
RMSE: 4.53643
Iteration: 321
RMSE: 4.52883
Iteration: 322
RMSE: 4.53063
Iteration: 323
RMSE: 4.53079
Iteration: 324
RMSE: 4.52998
Iteration: 325
RMSE: 4.53215
Iteration: 326
RMSE: 4.52885
Iteration: 327
RMSE: 4.52988
Iteration: 328
RMSE: 4.52789
Iteration: 329
RMSE: 4.52612
Iteration: 330
RMSE: 4.53023
Iteration: 331
RMSE: 4.52489
Iteration: 332
RMSE: 4.53017
Iteration: 333
RMSE: 4.52457
Iteration: 334
RMSE: 4.51781
Iteration: 335
RMSE: 4.5236
Iteration: 336
RMSE: 4.52071
Iteration: 337
RMSE: 4.52232
Iteration: 338
RMSE: 4.52527
Iteration: 339
RMSE: 4.51647
Iteration: 340
RMSE: 4.5226
Iteration: 341
RMSE: 4.52087
Iteration: 342
RMSE: 4.51478
Iteration: 343
RMSE: 4.51801
Iteration: 344
RMSE: 4.51371
Iteration: 345
RMSE: 4.51591
Iteration: 346
RMSE: 4.52391
Iteration: 347
RMSE: 4.52513
Iteration: 348
RMSE: 4.5226
Iteration: 349
RMSE: 4.52642
Iteration: 350
RMSE: 4.52443
Iteration: 351
RMSE: 4.52426
Iteration: 352
RMSE: 4.52348
Iteration: 353
RMSE: 4.52185
Iteration: 354
RMSE: 4.5259
Iteration: 355
RMSE: 4.52545
Iteration: 356
RMSE: 4.52195
Iteration: 357
RMSE: 4.51736
Iteration: 358
RMSE: 4.52086
Iteration: 359
RMSE: 4.51767
Iteration: 360
RMSE: 4.52084
Iteration: 361
RMSE: 4.52417
Iteration: 362
RMSE: 4.52255
Iteration: 363
RMSE: 4.51977
Iteration: 364
RMSE: 4.5217
Iteration: 365
RMSE: 4.5242
Iteration: 366
RMSE: 4.52695
Iteration: 367
RMSE: 4.52257
Iteration: 368
RMSE: 4.5255
Iteration: 369
RMSE: 4.52872
Iteration: 370
RMSE: 4.52941
Iteration: 371
RMSE: 4.52717
Iteration: 372
RMSE: 4.52902
Iteration: 373
RMSE: 4.52998
Iteration: 374
RMSE: 4.52738
Iteration: 375
RMSE: 4.52985
Iteration: 376
RMSE: 4.52792
Iteration: 377
RMSE: 4.53206
Iteration: 378
RMSE: 4.53315
Iteration: 379
RMSE: 4.53134
Iteration: 380
RMSE: 4.53562
Iteration: 381
RMSE: 4.53507
Iteration: 382
RMSE: 4.53692
Iteration: 383
RMSE: 4.53146
Iteration: 384
RMSE: 4.53446
Iteration: 385
RMSE: 4.53091
Iteration: 386
RMSE: 4.5357
Iteration: 387
RMSE: 4.53309
Iteration: 388
RMSE: 4.53531
Iteration: 389
RMSE: 4.54061
Iteration: 390
RMSE: 4.53918
Iteration: 391
RMSE: 4.53562
Iteration: 392
RMSE: 4.53603
Iteration: 393
RMSE: 4.52891
Iteration: 394
RMSE: 4.532
Iteration: 395
RMSE: 4.53347
Iteration: 396
RMSE: 4.53356
Iteration: 397
RMSE: 4.53255
Iteration: 398
RMSE: 4.53476
Iteration: 399
RMSE: 4.53475
Iteration: 400
RMSE: 4.52886
Iteration: 401
RMSE: 4.52994
Iteration: 402
RMSE: 4.52655
Iteration: 403
RMSE: 4.53032
Iteration: 404
RMSE: 4.52539
Iteration: 405
RMSE: 4.52453
Iteration: 406
RMSE: 4.52824
Iteration: 407
RMSE: 4.53087
Iteration: 408
RMSE: 4.52986
Iteration: 409
RMSE: 4.52777
Iteration: 410
RMSE: 4.53172
Iteration: 411
RMSE: 4.53029
Iteration: 412
RMSE: 4.53078
Iteration: 413
RMSE: 4.52824
Iteration: 414
RMSE: 4.52678
Iteration: 415
RMSE: 4.52826
Iteration: 416
RMSE: 4.52866
Iteration: 417
RMSE: 4.52841
Iteration: 418
RMSE: 4.52684
Iteration: 419
RMSE: 4.52831
Iteration: 420
RMSE: 4.5276
Iteration: 421
RMSE: 4.52809
Iteration: 422
RMSE: 4.52785
Iteration: 423
RMSE: 4.52665
Iteration: 424
RMSE: 4.52958
Iteration: 425
RMSE: 4.52598
Iteration: 426
RMSE: 4.52429
Iteration: 427
RMSE: 4.52721
Iteration: 428
RMSE: 4.52759
Iteration: 429
RMSE: 4.52677
Iteration: 430
RMSE: 4.52952
Iteration: 431
RMSE: 4.52845
Iteration: 432
RMSE: 4.53172
Iteration: 433
RMSE: 4.52989
Iteration: 434
RMSE: 4.5319
Iteration: 435
RMSE: 4.53024
Iteration: 436
RMSE: 4.53294
Iteration: 437
RMSE: 4.52975
Iteration: 438
RMSE: 4.53034
Iteration: 439
RMSE: 4.53403
Iteration: 440
RMSE: 4.52657
Iteration: 441
RMSE: 4.52537
Iteration: 442
RMSE: 4.52632
Iteration: 443
RMSE: 4.525
Iteration: 444
RMSE: 4.52483
Iteration: 445
RMSE: 4.52809
Iteration: 446
RMSE: 4.52304
Iteration: 447
RMSE: 4.51666
Iteration: 448
RMSE: 4.51913
Iteration: 449
RMSE: 4.52086
Iteration: 450
RMSE: 4.52373
Iteration: 451
RMSE: 4.52259
Iteration: 452
RMSE: 4.52086
Iteration: 453
RMSE: 4.52423
Iteration: 454
RMSE: 4.51958
Iteration: 455
RMSE: 4.52224
Iteration: 456
RMSE: 4.523
Iteration: 457
RMSE: 4.52758
Iteration: 458
RMSE: 4.52867
Iteration: 459
RMSE: 4.52593
Iteration: 460
RMSE: 4.52626
Iteration: 461
RMSE: 4.52714
Iteration: 462
RMSE: 4.52334
Iteration: 463
RMSE: 4.5271
Iteration: 464
RMSE: 4.53112
Iteration: 465
RMSE: 4.52943
Iteration: 466
RMSE: 4.5279
Iteration: 467
RMSE: 4.53
Iteration: 468
RMSE: 4.52898
Iteration: 469
RMSE: 4.53234
Iteration: 470
RMSE: 4.53738
Iteration: 471
RMSE: 4.53088
Iteration: 472
RMSE: 4.52802
Iteration: 473
RMSE: 4.52704
Iteration: 474
RMSE: 4.52213
Iteration: 475
RMSE: 4.5274
Iteration: 476
RMSE: 4.52198
Iteration: 477
RMSE: 4.52191
Iteration: 478
RMSE: 4.52346
Iteration: 479
RMSE: 4.52429
Iteration: 480
RMSE: 4.52095
Iteration: 481
RMSE: 4.52704
Iteration: 482
RMSE: 4.53436
Iteration: 483
RMSE: 4.52934
Iteration: 484
RMSE: 4.52894
Iteration: 485
RMSE: 4.52883
Iteration: 486
RMSE: 4.53024
Iteration: 487
RMSE: 4.53104
Iteration: 488
RMSE: 4.52468
Iteration: 489
RMSE: 4.52624
Iteration: 490
RMSE: 4.52493
Iteration: 491
RMSE: 4.52657
Iteration: 492
RMSE: 4.52444
Iteration: 493
RMSE: 4.52802
Iteration: 494
RMSE: 4.52425
Iteration: 495
RMSE: 4.52896
Iteration: 496
RMSE: 4.52695
Iteration: 497
RMSE: 4.53165
Iteration: 498
RMSE: 4.52803
Iteration: 499
RMSE: 4.52833
Iteration: 500
RMSE: 4.52886
Iteration: 501
RMSE: 4.5265
Iteration: 502
RMSE: 4.53043
Iteration: 503
RMSE: 4.53015
Iteration: 504
RMSE: 4.52619
Iteration: 505
RMSE: 4.52874
Iteration: 506
RMSE: 4.53217
Iteration: 507
RMSE: 4.52727
Iteration: 508
RMSE: 4.52778
Iteration: 509
RMSE: 4.52443
Iteration: 510
RMSE: 4.5266
Iteration: 511
RMSE: 4.53239
Iteration: 512
RMSE: 4.53004
Iteration: 513
RMSE: 4.53317
Iteration: 514
RMSE: 4.53468
Iteration: 515
RMSE: 4.53953
Iteration: 516
RMSE: 4.53206
Iteration: 517
RMSE: 4.53579
Iteration: 518
RMSE: 4.5304
Iteration: 519
RMSE: 4.52855
Iteration: 520
RMSE: 4.53703
Iteration: 521
RMSE: 4.53452
Iteration: 522
RMSE: 4.53407
Iteration: 523
RMSE: 4.53334
Iteration: 524
RMSE: 4.53168
Iteration: 525
RMSE: 4.53437
Iteration: 526
RMSE: 4.53955
Iteration: 527
RMSE: 4.53421
Iteration: 528
RMSE: 4.53459
Iteration: 529
RMSE: 4.53562
Iteration: 530
RMSE: 4.54047
Iteration: 531
RMSE: 4.54247
Iteration: 532
RMSE: 4.53641
Iteration: 533
RMSE: 4.53905
Iteration: 534
RMSE: 4.54087
Iteration: 535
RMSE: 4.53636
Iteration: 536
RMSE: 4.53788
Iteration: 537
RMSE: 4.53504
Iteration: 538
RMSE: 4.53872
Iteration: 539
RMSE: 4.53922
Iteration: 540
RMSE: 4.53894
Iteration: 541
RMSE: 4.53858
Iteration: 542
RMSE: 4.53996
Iteration: 543
RMSE: 4.54151
Iteration: 544
RMSE: 4.54145
Iteration: 545
RMSE: 4.54106
Iteration: 546
RMSE: 4.53761
Iteration: 547
RMSE: 4.53532
Iteration: 548
RMSE: 4.53868
Iteration: 549
RMSE: 4.5374
Iteration: 550
RMSE: 4.53311
Iteration: 551
RMSE: 4.53784
Iteration: 552
RMSE: 4.53647
Iteration: 553
RMSE: 4.53308
Iteration: 554
RMSE: 4.53142
Iteration: 555
RMSE: 4.52967
Iteration: 556
RMSE: 4.52901
Iteration: 557
RMSE: 4.52616
Iteration: 558
RMSE: 4.5256
Iteration: 559
RMSE: 4.52306
Iteration: 560
RMSE: 4.52195
Iteration: 561
RMSE: 4.52115
Iteration: 562
RMSE: 4.53097
Iteration: 563
RMSE: 4.52992
Iteration: 564
RMSE: 4.52793
Iteration: 565
RMSE: 4.53062
Iteration: 566
RMSE: 4.52713
Iteration: 567
RMSE: 4.53011
Iteration: 568
RMSE: 4.52876
Iteration: 569
RMSE: 4.53496
Iteration: 570
RMSE: 4.53019
Iteration: 571
RMSE: 4.52878
Iteration: 572
RMSE: 4.5281
Iteration: 573
RMSE: 4.53244
Iteration: 574
RMSE: 4.52934
Iteration: 575
RMSE: 4.52825
Iteration: 576
RMSE: 4.53346
Iteration: 577
RMSE: 4.53242
Iteration: 578
RMSE: 4.53382
Iteration: 579
RMSE: 4.53286
Iteration: 580
RMSE: 4.53213
Iteration: 581
RMSE: 4.53003
Iteration: 582
RMSE: 4.52839
Iteration: 583
RMSE: 4.52916
Iteration: 584
RMSE: 4.53061
Iteration: 585
RMSE: 4.52901
Iteration: 586
RMSE: 4.53405
Iteration: 587
RMSE: 4.53424
Iteration: 588
RMSE: 4.53169
Iteration: 589
RMSE: 4.52615
Iteration: 590
RMSE: 4.52935
Iteration: 591
RMSE: 4.52739
Iteration: 592
RMSE: 4.53195
Iteration: 593
RMSE: 4.53024
Iteration: 594
RMSE: 4.53021
Iteration: 595
RMSE: 4.53175
Iteration: 596
RMSE: 4.52985
Iteration: 597
RMSE: 4.53102
Iteration: 598
RMSE: 4.52636
Iteration: 599
RMSE: 4.53202
Iteration: 600
RMSE: 4.5258
Iteration: 601
RMSE: 4.52745
Iteration: 602
RMSE: 4.51972
Iteration: 603
RMSE: 4.52381
Iteration: 604
RMSE: 4.52075
Iteration: 605
RMSE: 4.52824
Iteration: 606
RMSE: 4.52638
Iteration: 607
RMSE: 4.51942
Iteration: 608
RMSE: 4.52171
Iteration: 609
RMSE: 4.52829
Iteration: 610
RMSE: 4.52577
Iteration: 611
RMSE: 4.52418
Iteration: 612
RMSE: 4.52322
Iteration: 613
RMSE: 4.52642
Iteration: 614
RMSE: 4.52789
Iteration: 615
RMSE: 4.52859
Iteration: 616
RMSE: 4.52633
Iteration: 617
RMSE: 4.52544
Iteration: 618
RMSE: 4.52514
Iteration: 619
RMSE: 4.52644
Iteration: 620
RMSE: 4.53211
Iteration: 621
RMSE: 4.52624
Iteration: 622
RMSE: 4.53716
Iteration: 623
RMSE: 4.52543
Iteration: 624
RMSE: 4.52536
Iteration: 625
RMSE: 4.5277
Iteration: 626
RMSE: 4.52221
Iteration: 627
RMSE: 4.52674
Iteration: 628
RMSE: 4.52772
Iteration: 629
RMSE: 4.52888
Iteration: 630
RMSE: 4.52968
Iteration: 631
RMSE: 4.52842
Iteration: 632
RMSE: 4.53178
Iteration: 633
RMSE: 4.53182
Iteration: 634
RMSE: 4.53385
Iteration: 635
RMSE: 4.52912
Iteration: 636
RMSE: 4.53271
Iteration: 637
RMSE: 4.5326
Iteration: 638
RMSE: 4.5278
Iteration: 639
RMSE: 4.53181
Iteration: 640
RMSE: 4.52993
Iteration: 641
RMSE: 4.53449
Iteration: 642
RMSE: 4.52944
Iteration: 643
RMSE: 4.53202
Iteration: 644
RMSE: 4.52921
Iteration: 645
RMSE: 4.53377
Iteration: 646
RMSE: 4.52777
Iteration: 647
RMSE: 4.53272
Iteration: 648
RMSE: 4.52852
Iteration: 649
RMSE: 4.52956
Iteration: 650
RMSE: 4.52949
Iteration: 651
RMSE: 4.53218
Iteration: 652
RMSE: 4.53422
Iteration: 653
RMSE: 4.52896
Iteration: 654
RMSE: 4.52982
Iteration: 655
RMSE: 4.53125
Iteration: 656
RMSE: 4.52995
Iteration: 657
RMSE: 4.53204
Iteration: 658
RMSE: 4.52827
Iteration: 659
RMSE: 4.52993
Iteration: 660
RMSE: 4.52843
Iteration: 661
RMSE: 4.52466
Iteration: 662
RMSE: 4.53145
Iteration: 663
RMSE: 4.52921
Iteration: 664
RMSE: 4.53189
Iteration: 665
RMSE: 4.538
Iteration: 666
RMSE: 4.53556
Iteration: 667
RMSE: 4.53759
Iteration: 668
RMSE: 4.53156
Iteration: 669
RMSE: 4.52688
Iteration: 670
RMSE: 4.52716
Iteration: 671
RMSE: 4.52928
Iteration: 672
RMSE: 4.52819
Iteration: 673
RMSE: 4.53209
Iteration: 674
RMSE: 4.529
Iteration: 675
RMSE: 4.53476
Iteration: 676
RMSE: 4.53239
Iteration: 677
RMSE: 4.52864
Iteration: 678
RMSE: 4.53138
Iteration: 679
RMSE: 4.53058
Iteration: 680
RMSE: 4.53075
Iteration: 681
RMSE: 4.53052
Iteration: 682
RMSE: 4.52817
Iteration: 683
RMSE: 4.5321
Iteration: 684
RMSE: 4.52772
Iteration: 685
RMSE: 4.52689
Iteration: 686
RMSE: 4.53069
Iteration: 687
RMSE: 4.5321
Iteration: 688
RMSE: 4.53238
Iteration: 689
RMSE: 4.52893
Iteration: 690
RMSE: 4.53086
Iteration: 691
RMSE: 4.5339
Iteration: 692
RMSE: 4.52961
Iteration: 693
RMSE: 4.53223
Iteration: 694
RMSE: 4.53309
Iteration: 695
RMSE: 4.53002
Iteration: 696
RMSE: 4.53247
Iteration: 697
RMSE: 4.53161
Iteration: 698
RMSE: 4.52681
Iteration: 699
RMSE: 4.53073
Iteration: 700
RMSE: 4.53233
Iteration: 701
RMSE: 4.532
Iteration: 702
RMSE: 4.53403
Iteration: 703
RMSE: 4.53078
Iteration: 704
RMSE: 4.53706
Iteration: 705
RMSE: 4.53681
Iteration: 706
RMSE: 4.5351
Iteration: 707
RMSE: 4.53331
Iteration: 708
RMSE: 4.53511
Iteration: 709
RMSE: 4.54066
Iteration: 710
RMSE: 4.54084
Iteration: 711
RMSE: 4.53453
Iteration: 712
RMSE: 4.53941
Iteration: 713
RMSE: 4.54032
Iteration: 714
RMSE: 4.53667
Iteration: 715
RMSE: 4.53503
Iteration: 716
RMSE: 4.53776
Iteration: 717
RMSE: 4.53762
Iteration: 718
RMSE: 4.54009
Iteration: 719
RMSE: 4.53594
Iteration: 720
RMSE: 4.53677
Iteration: 721
RMSE: 4.53586
Iteration: 722
RMSE: 4.52967
Iteration: 723
RMSE: 4.53158
Iteration: 724
RMSE: 4.52934
Iteration: 725
RMSE: 4.53479
Iteration: 726
RMSE: 4.53863
Iteration: 727
RMSE: 4.53261
Iteration: 728
RMSE: 4.53745
Iteration: 729
RMSE: 4.53554
Iteration: 730
RMSE: 4.53565
Iteration: 731
RMSE: 4.53635
Iteration: 732
RMSE: 4.53439
Iteration: 733
RMSE: 4.53859
Iteration: 734
RMSE: 4.53381
Iteration: 735
RMSE: 4.53835
Iteration: 736
RMSE: 4.53386
Iteration: 737
RMSE: 4.52928
Iteration: 738
RMSE: 4.53142
Iteration: 739
RMSE: 4.53259
Iteration: 740
RMSE: 4.53116
Iteration: 741
RMSE: 4.53143
Iteration: 742
RMSE: 4.53251
Iteration: 743
RMSE: 4.52778
Iteration: 744
RMSE: 4.5266
Iteration: 745
RMSE: 4.53205
Iteration: 746
RMSE: 4.52854
Iteration: 747
RMSE: 4.52901
Iteration: 748
RMSE: 4.52818
Iteration: 749
RMSE: 4.52808
Iteration: 750
RMSE: 4.52765
Iteration: 751
RMSE: 4.52938
Iteration: 752
RMSE: 4.52935
Iteration: 753
RMSE: 4.52982
Iteration: 754
RMSE: 4.52737
Iteration: 755
RMSE: 4.52888
Iteration: 756
RMSE: 4.5248
Iteration: 757
RMSE: 4.5276
Iteration: 758
RMSE: 4.53045
Iteration: 759
RMSE: 4.52649
Iteration: 760
RMSE: 4.52516
Iteration: 761
RMSE: 4.5279
Iteration: 762
RMSE: 4.52506
Iteration: 763
RMSE: 4.52742
Iteration: 764
RMSE: 4.5259
Iteration: 765
RMSE: 4.52556
Iteration: 766
RMSE: 4.52728
Iteration: 767
RMSE: 4.52895
Iteration: 768
RMSE: 4.52796
Iteration: 769
RMSE: 4.52635
Iteration: 770
RMSE: 4.53237
Iteration: 771
RMSE: 4.53229
Iteration: 772
RMSE: 4.53328
Iteration: 773
RMSE: 4.53033
Iteration: 774
RMSE: 4.53227
Iteration: 775
RMSE: 4.52984
Iteration: 776
RMSE: 4.53046
Iteration: 777
RMSE: 4.53181
Iteration: 778
RMSE: 4.52834
Iteration: 779
RMSE: 4.5283
Iteration: 780
RMSE: 4.52956
Iteration: 781
RMSE: 4.53006
Iteration: 782
RMSE: 4.52947
Iteration: 783
RMSE: 4.53159
Iteration: 784
RMSE: 4.53054
Iteration: 785
RMSE: 4.5322
Iteration: 786
RMSE: 4.53251
Iteration: 787
RMSE: 4.53305
Iteration: 788
RMSE: 4.53242
Iteration: 789
RMSE: 4.53519
Iteration: 790
RMSE: 4.53574
Iteration: 791
RMSE: 4.53697
Iteration: 792
RMSE: 4.53377
Iteration: 793
RMSE: 4.53626
Iteration: 794
RMSE: 4.5319
Iteration: 795
RMSE: 4.52958
Iteration: 796
RMSE: 4.53134
Iteration: 797
RMSE: 4.52882
Iteration: 798
RMSE: 4.52978
Iteration: 799
RMSE: 4.53047
Iteration: 800
RMSE: 4.53358
Iteration: 801
RMSE: 4.52785
Iteration: 802
RMSE: 4.52874
Iteration: 803
RMSE: 4.52888
Iteration: 804
RMSE: 4.53159
Iteration: 805
RMSE: 4.5317
Iteration: 806
RMSE: 4.52803
Iteration: 807
RMSE: 4.53252
Iteration: 808
RMSE: 4.53031
Iteration: 809
RMSE: 4.5247
Iteration: 810
RMSE: 4.52764
Iteration: 811
RMSE: 4.52238
Iteration: 812
RMSE: 4.5245
Iteration: 813
RMSE: 4.52781
Iteration: 814
RMSE: 4.52699
Iteration: 815
RMSE: 4.5261
Iteration: 816
RMSE: 4.52934
Iteration: 817
RMSE: 4.52881
Iteration: 818
RMSE: 4.53071
Iteration: 819
RMSE: 4.52363
Iteration: 820
RMSE: 4.52889
Iteration: 821
RMSE: 4.52984
Iteration: 822
RMSE: 4.53265
Iteration: 823
RMSE: 4.53176
Iteration: 824
RMSE: 4.53325
Iteration: 825
RMSE: 4.52734
Iteration: 826
RMSE: 4.532
Iteration: 827
RMSE: 4.53267
Iteration: 828
RMSE: 4.52872
Iteration: 829
RMSE: 4.52942
Iteration: 830
RMSE: 4.53005
Iteration: 831
RMSE: 4.53429
Iteration: 832
RMSE: 4.53
Iteration: 833
RMSE: 4.52923
Iteration: 834
RMSE: 4.52564
Iteration: 835
RMSE: 4.52425
Iteration: 836
RMSE: 4.52404
Iteration: 837
RMSE: 4.52049
Iteration: 838
RMSE: 4.52268
Iteration: 839
RMSE: 4.52179
Iteration: 840
RMSE: 4.52469
Iteration: 841
RMSE: 4.52052
Iteration: 842
RMSE: 4.52798
Iteration: 843
RMSE: 4.52719
Iteration: 844
RMSE: 4.52822
Iteration: 845
RMSE: 4.52859
Iteration: 846
RMSE: 4.52124
Iteration: 847
RMSE: 4.52414
Iteration: 848
RMSE: 4.52514
Iteration: 849
RMSE: 4.52461
Iteration: 850
RMSE: 4.5199
Iteration: 851
RMSE: 4.51987
Iteration: 852
RMSE: 4.5177
Iteration: 853
RMSE: 4.52069
Iteration: 854
RMSE: 4.52308
Iteration: 855
RMSE: 4.52469
Iteration: 856
RMSE: 4.52429
Iteration: 857
RMSE: 4.52416
Iteration: 858
RMSE: 4.52467
Iteration: 859
RMSE: 4.52621
Iteration: 860
RMSE: 4.52895
Iteration: 861
RMSE: 4.52379
Iteration: 862
RMSE: 4.52622
Iteration: 863
RMSE: 4.52564
Iteration: 864
RMSE: 4.5248
Iteration: 865
RMSE: 4.52622
Iteration: 866
RMSE: 4.52746
Iteration: 867
RMSE: 4.52238
Iteration: 868
RMSE: 4.51981
Iteration: 869
RMSE: 4.52142
Iteration: 870
RMSE: 4.52218
Iteration: 871
RMSE: 4.52303
Iteration: 872
RMSE: 4.51897
Iteration: 873
RMSE: 4.52221
Iteration: 874
RMSE: 4.52117
Iteration: 875
RMSE: 4.52593
Iteration: 876
RMSE: 4.5243
Iteration: 877
RMSE: 4.52011
Iteration: 878
RMSE: 4.52336
Iteration: 879
RMSE: 4.52029
Iteration: 880
RMSE: 4.52199
Iteration: 881
RMSE: 4.52244
Iteration: 882
RMSE: 4.5225
Iteration: 883
RMSE: 4.52427
Iteration: 884
RMSE: 4.52516
Iteration: 885
RMSE: 4.5242
Iteration: 886
RMSE: 4.52271
Iteration: 887
RMSE: 4.52822
Iteration: 888
RMSE: 4.5245
Iteration: 889
RMSE: 4.52579
Iteration: 890
RMSE: 4.52192
Iteration: 891
RMSE: 4.52051
Iteration: 892
RMSE: 4.52079
Iteration: 893
RMSE: 4.5188
Iteration: 894
RMSE: 4.51464
Iteration: 895
RMSE: 4.51549
Iteration: 896
RMSE: 4.51603
Iteration: 897
RMSE: 4.51803
Iteration: 898
RMSE: 4.51942
Iteration: 899
RMSE: 4.52185
Iteration: 900
RMSE: 4.51905
Iteration: 901
RMSE: 4.52046
Iteration: 902
RMSE: 4.51986
Iteration: 903
RMSE: 4.52328
Iteration: 904
RMSE: 4.52418
Iteration: 905
RMSE: 4.52608
Iteration: 906
RMSE: 4.52332
Iteration: 907
RMSE: 4.51948
Iteration: 908
RMSE: 4.52113
Iteration: 909
RMSE: 4.5165
Iteration: 910
RMSE: 4.5155
Iteration: 911
RMSE: 4.5213
Iteration: 912
RMSE: 4.51997
Iteration: 913
RMSE: 4.52115
Iteration: 914
RMSE: 4.51878
Iteration: 915
RMSE: 4.51856
Iteration: 916
RMSE: 4.52534
Iteration: 917
RMSE: 4.52514
Iteration: 918
RMSE: 4.52971
Iteration: 919
RMSE: 4.52668
Iteration: 920
RMSE: 4.52256
Iteration: 921
RMSE: 4.53017
Iteration: 922
RMSE: 4.52728
Iteration: 923
RMSE: 4.52537
Iteration: 924
RMSE: 4.52452
Iteration: 925
RMSE: 4.52477
Iteration: 926
RMSE: 4.52484
Iteration: 927
RMSE: 4.52344
Iteration: 928
RMSE: 4.52644
Iteration: 929
RMSE: 4.52175
Iteration: 930
RMSE: 4.52767
Iteration: 931
RMSE: 4.53032
Iteration: 932
RMSE: 4.52528
Iteration: 933
RMSE: 4.52911
Iteration: 934
RMSE: 4.52504
Iteration: 935
RMSE: 4.52358
Iteration: 936
RMSE: 4.52341
Iteration: 937
RMSE: 4.52468
Iteration: 938
RMSE: 4.52237
Iteration: 939
RMSE: 4.52759
Iteration: 940
RMSE: 4.5322
Iteration: 941
RMSE: 4.53629
Iteration: 942
RMSE: 4.53121
Iteration: 943
RMSE: 4.53599
Iteration: 944
RMSE: 4.53496
Iteration: 945
RMSE: 4.53192
Iteration: 946
RMSE: 4.53273
Iteration: 947
RMSE: 4.52694
Iteration: 948
RMSE: 4.53442
Iteration: 949
RMSE: 4.52953
Iteration: 950
RMSE: 4.52779
Iteration: 951
RMSE: 4.5288
Iteration: 952
RMSE: 4.52918
Iteration: 953
RMSE: 4.5308
Iteration: 954
RMSE: 4.5365
Iteration: 955
RMSE: 4.5374
Iteration: 956
RMSE: 4.53578
Iteration: 957
RMSE: 4.53429
Iteration: 958
RMSE: 4.53475
Iteration: 959
RMSE: 4.53505
Iteration: 960
RMSE: 4.53629
Iteration: 961
RMSE: 4.5373
Iteration: 962
RMSE: 4.53279
Iteration: 963
RMSE: 4.5353
Iteration: 964
RMSE: 4.53679
Iteration: 965
RMSE: 4.53489
Iteration: 966
RMSE: 4.53818
Iteration: 967
RMSE: 4.53242
Iteration: 968
RMSE: 4.53528
Iteration: 969
RMSE: 4.53466
Iteration: 970
RMSE: 4.52915
Iteration: 971
RMSE: 4.53365
Iteration: 972
RMSE: 4.53147
Iteration: 973
RMSE: 4.53764
Iteration: 974
RMSE: 4.53687
Iteration: 975
RMSE: 4.52839
Iteration: 976
RMSE: 4.53139
Iteration: 977
RMSE: 4.53105
Iteration: 978
RMSE: 4.53345
Iteration: 979
RMSE: 4.53612
Iteration: 980
RMSE: 4.53581
Iteration: 981
RMSE: 4.53507
Iteration: 982
RMSE: 4.53987
Iteration: 983
RMSE: 4.5382
Iteration: 984
RMSE: 4.53801
Iteration: 985
RMSE: 4.53654
Iteration: 986
RMSE: 4.53308
Iteration: 987
RMSE: 4.53264
Iteration: 988
RMSE: 4.53146
Iteration: 989
RMSE: 4.52973
Iteration: 990
RMSE: 4.53141
Iteration: 991
RMSE: 4.53123
Iteration: 992
RMSE: 4.53162
Iteration: 993
RMSE: 4.53167
Iteration: 994
RMSE: 4.53414
Iteration: 995
RMSE: 4.53266
Iteration: 996
RMSE: 4.53523
Iteration: 997
RMSE: 4.5357
Iteration: 998
RMSE: 4.53572
Iteration: 999
RMSE: 4.53471
Iteration: 1000
RMSE: 4.52807
Imputation MAPE: 0.1038
Imputation RMSE: 4.48337
Running time: 3183 seconds
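The "Imputation MAPE" and "Imputation RMSE" lines above are evaluated only on entries that are present in the dense matrix but masked out in the sparse one. A minimal sketch of that evaluation (the index set matches the `pos` used inside the sampling loop; the helper name is ours, not from the notebook):

```python
import numpy as np

def imputation_metrics(dense_mat, sparse_mat, mat_hat):
    # Evaluate only where ground truth exists but the entry was hidden from the model.
    pos = np.where((dense_mat != 0) & (sparse_mat == 0))
    mape = np.mean(np.abs(dense_mat[pos] - mat_hat[pos]) / dense_mat[pos])
    rmse = np.sqrt(np.mean((dense_mat[pos] - mat_hat[pos]) ** 2))
    return mape, rmse
```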
```python
import numpy as np
import pandas as pd

dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
RM_mat = pd.read_csv('../datasets/Seattle-data-set/RM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
RM_mat = RM_mat.values
missing_rate = 0.4
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = np.round(RM_mat + 0.5 - missing_rate)
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
```
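The `np.round(RM_mat + 0.5 - missing_rate)` trick is compact: an entry of the uniform random matrix ends up as 0 exactly when it is below the missing rate, so roughly 40% of the observations are hidden. A toy check (made-up numbers, not the Seattle data):

```python
import numpy as np

toy_rm = np.array([[0.10, 0.50, 0.90],
                   [0.30, 0.55, 0.70]])
missing_rate = 0.4
mask = np.round(toy_rm + 0.5 - missing_rate)
print(mask)
# [[0. 1. 1.]
#  [0. 1. 1.]]  -> entries with a random value below 0.4 are masked out
```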
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 50
# Lags: the two previous intervals plus the same interval one day earlier
# (288 five-minute intervals per day in the Seattle data).
time_lags = np.array([1, 2, 288])
init = {"W": 0.1 * np.random.rand(dim1, rank), "X": 0.1 * np.random.rand(dim2, rank)}
maxiter1 = 1100
maxiter2 = 100
BTMF(dense_mat, sparse_mat, init, rank, time_lags, maxiter1, maxiter2)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
## 7 Multivariate Time Series Prediction
```python
def BTMF_burn(dense_mat, sparse_mat, init, time_lags, burn_iter):
    """Run `burn_iter` Gibbs iterations of BTMF on the observed history and
    return the last draw of the factors, noise precision and VAR coefficients."""
    W = init["W"]
    X = init["X"]
    dim1, dim2 = sparse_mat.shape
    d = time_lags.shape[0]
    # Entries observed in dense_mat but masked in sparse_mat are used to monitor RMSE.
    pos = np.where((dense_mat != 0) & (sparse_mat == 0))
    position = np.where(sparse_mat != 0)
    binary_mat = np.zeros((dim1, dim2))
    binary_mat[position] = 1
    tau = 1
    for it in range(burn_iter):
        W = sample_factor_w(sparse_mat, binary_mat, W, X, tau)
        A, Sigma = sample_var_coefficient(X, time_lags)
        X = sample_factor_x(sparse_mat, binary_mat, time_lags, W, X, tau, A, inv(Sigma))
        mat_hat = np.matmul(W, X.T)
        tau = sample_precision_tau(sparse_mat, mat_hat, position)
        rmse = np.sqrt(np.sum((dense_mat[pos] - mat_hat[pos]) ** 2) / dense_mat[pos].shape[0])
        if (it + 1) % 1 == 0 and it < burn_iter:
            # `% 1` means the RMSE is printed at every burn-in iteration.
            print('Iteration: {}'.format(it + 1))
            print('RMSE: {:.6}'.format(rmse))
            print()
    return W, X, tau, A
```
```python
def BTMF_4cast(mat, binary_mat, num_step, time_lags, init, gibbs_iter):
    """Forecast (`4cast`) time series with Bayesian Temporal Matrix Factorization (BTMF)."""
    W = init["W"]
    X = init["X"]
    tau = init["tau"]
    A = init["A"]
    rank = W.shape[1]
    d = time_lags.shape[0]
    mat_hat = np.zeros((W.shape[0], num_step, gibbs_iter))
    for it in range(gibbs_iter):
        # One Gibbs draw of the factors, conditioned on the observed entries of `mat`.
        W = sample_factor_w(mat, binary_mat, W, X, tau)
        A, Sigma = sample_var_coefficient(X, time_lags)
        X = sample_factor_x(mat, binary_mat, time_lags, W, X, tau, A, inv(Sigma))
        X_new = X.copy()
        # Roll the VAR process forward: each new temporal factor row is A.T applied
        # to the factor rows at the lagged time points, stacked into one vector.
        for t in range(num_step):
            var = X_new[X.shape[0] + t - 1 - time_lags, :].reshape([rank * d])
            X_new = np.append(X_new, np.matmul(A.T, var).reshape([1, rank]), axis = 0)
        # mat_hat[:, :, it] = np.random.normal(np.matmul(W, X_new[-1 - num_step : -1, :].T), 1 / tau) # dim1 * num_step
        mat_hat[:, :, it] = np.matmul(W, X_new[-1 - num_step : -1, :].T) # dim1 * num_step
    return mat_hat, W, X_new, tau, A
```
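The inner loop above extrapolates the temporal factors with the fitted VAR model: for each new time point, the factor rows at the lagged positions are stacked into one vector and multiplied by the coefficient matrix. A standalone sketch of that single step with toy shapes (the shape of `A` is inferred from how it is used above):

```python
import numpy as np

rank, d = 2, 3
time_lags = np.array([1, 2, 144])
rng = np.random.default_rng(0)
A = rng.random((rank * d, rank))      # VAR coefficients, one block of size `rank` per lag
X = rng.random((200, rank))           # temporal factors estimated so far
t_new = X.shape[0]                    # time index being extrapolated
var = X[t_new - time_lags, :].reshape(rank * d)   # stack the d lagged factor rows
x_new = A.T @ var                     # predicted factor row, shape (rank,)
print(x_new.shape)                    # (2,)
```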
```python
def forecastor(dense_mat, sparse_mat, init, time_lags,
               num_roll, start_time, num_step, burn_iter, gibbs_iter):
    """Rolling multi-step forecasting: burn in BTMF on the history before
    `start_time`, then forecast `num_step` intervals at a time for `num_roll` rolls."""
    W, X, tau, A = BTMF_burn(dense_mat[:, : start_time], sparse_mat[:, : start_time],
                             init, time_lags, burn_iter)
    result = np.zeros((W.shape[0], num_roll * num_step, gibbs_iter))
    for t in range(num_roll):
        # Extend the observed window by `num_step` columns at every roll.
        mat = sparse_mat[:, : start_time + t * num_step]
        print(mat.shape[1])
        position = np.where(mat != 0)
        binary_mat = mat.copy()
        binary_mat[position] = 1
        # Carry the last Gibbs draw over to the next roll as its initialization.
        init = {"W": W, "X": X, "tau": tau, "A": A}
        mat_hat, W, X, tau, A = BTMF_4cast(mat, binary_mat,
                                           num_step, time_lags, init, gibbs_iter)
        result[:, t * num_step : (t + 1) * num_step, :] = mat_hat
    # Point forecast = mean over Gibbs samples; evaluate on observed entries only.
    mat_hat0 = np.mean(result, axis = 2)
    small_dense_mat = dense_mat[:, start_time : dense_mat.shape[1]]
    pos = np.where(small_dense_mat != 0)
    final_mape = np.sum(np.abs(small_dense_mat[pos] -
                               mat_hat0[pos]) / small_dense_mat[pos]) / small_dense_mat[pos].shape[0]
    final_rmse = np.sqrt(np.sum((small_dense_mat[pos] -
                                 mat_hat0[pos]) ** 2) / small_dense_mat[pos].shape[0])
    print('Final MAPE: {:.6}'.format(final_mape))
    print('Final RMSE: {:.6}'.format(final_rmse))
    print()
    return result
```
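Before running the rolling forecast on the Guangzhou data, it helps to check the bookkeeping. With 144 time intervals per day and one-hour windows of `num_step = 6`, covering the last five days takes 120 rolls, and the burn-in history ends at column 8064, which is exactly the first window length printed by `forecastor` below (assuming the matrix spans 61 days, consistent with the last printed length 8778):

```python
num_step = 6                        # forecast 6 intervals (one hour) per roll
num_roll = int(144 * 5 / num_step)  # 120 rolls cover the last 5 days
dim2 = 61 * 144                     # 61 days of 144 intervals = 8784 columns
start_time = dim2 - num_roll * num_step
print(num_roll, start_time)         # 120 8064
```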
```python
import numpy as np
import scipy.io

tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.4
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
              .reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
# =============================================================================
# binary_tensor = np.zeros(tensor.shape)
# for i1 in range(tensor.shape[0]):
#     for i2 in range(tensor.shape[1]):
#         binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
# binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
sparse_mat = np.multiply(dense_mat, binary_mat)
```
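The reshape above unfolds the (road, day, interval) tensor into one long time series per road segment, with each day's intervals kept contiguous. A toy illustration of that ordering:

```python
import numpy as np

toy = np.arange(2 * 3 * 4).reshape(2, 3, 4)      # (road, day, interval)
unfolded = toy.reshape(2, 3 * 4)                 # (road, day * interval)
assert (unfolded[0, :4] == toy[0, 0, :]).all()   # day 0 comes first, interval by interval
```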
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 144])
num_step = 6
num_roll = int(144 * 5 / num_step)
start_time = dim2 - num_roll * num_step
init = {"W": 0.1 * np.random.rand(dim1, rank),
"X": 0.1 * np.random.rand(start_time, rank)}
burn_iter = 10
gibbs_iter = 2
result = forecastor(dense_mat, sparse_mat, init, time_lags,
num_roll, start_time, num_step, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
Iteration: 1
RMSE: 5.94597
Iteration: 2
RMSE: 5.74879
Iteration: 3
RMSE: 5.70634
Iteration: 4
RMSE: 5.47132
Iteration: 5
RMSE: 4.99418
Iteration: 6
RMSE: 4.76256
Iteration: 7
RMSE: 4.68645
Iteration: 8
RMSE: 4.5744
Iteration: 9
RMSE: 4.55261
Iteration: 10
RMSE: 4.54463
8064
8070
8076
8082
8088
8094
8100
8106
8112
8118
8124
8130
8136
8142
8148
8154
8160
8166
8172
8178
8184
8190
8196
8202
8208
8214
8220
8226
8232
8238
8244
8250
8256
8262
8268
8274
8280
8286
8292
8298
8304
8310
8316
8322
8328
8334
8340
8346
8352
8358
8364
8370
8376
8382
8388
8394
8400
8406
8412
8418
8424
8430
8436
8442
8448
8454
8460
8466
8472
8478
8484
8490
8496
8502
8508
8514
8520
8526
8532
8538
8544
8550
8556
8562
8568
8574
8580
8586
8592
8598
8604
8610
8616
8622
8628
8634
8640
8646
8652
8658
8664
8670
8676
8682
8688
8694
8700
8706
8712
8718
8724
8730
8736
8742
8748
8754
8760
8766
8772
8778
Final MAPE: 0.372138
Final RMSE: 15.0209
Running time: 762 seconds
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 144])
num_step = 6
num_roll = int(144 * 5 / num_step)
start_time = dim2 - num_roll * num_step
init = {"W": 0.1 * np.random.rand(dim1, rank),
"X": 0.1 * np.random.rand(start_time, rank)}
burn_iter = 100
gibbs_iter = 20
result = forecastor(dense_mat, sparse_mat, init, time_lags,
num_roll, start_time, num_step, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
Iteration: 1
RMSE: 5.94449
Iteration: 2
RMSE: 5.75295
Iteration: 3
RMSE: 5.71641
Iteration: 4
RMSE: 5.53581
Iteration: 5
RMSE: 5.0634
Iteration: 6
RMSE: 4.88969
Iteration: 7
RMSE: 4.74457
Iteration: 8
RMSE: 4.68477
Iteration: 9
RMSE: 4.56635
Iteration: 10
RMSE: 4.54667
Iteration: 11
RMSE: 4.53713
Iteration: 12
RMSE: 4.50478
Iteration: 13
RMSE: 4.44558
Iteration: 14
RMSE: 4.41896
Iteration: 15
RMSE: 4.40447
Iteration: 16
RMSE: 4.39169
Iteration: 17
RMSE: 4.3801
Iteration: 18
RMSE: 4.36146
Iteration: 19
RMSE: 4.34012
Iteration: 20
RMSE: 4.323
Iteration: 21
RMSE: 4.2991
Iteration: 22
RMSE: 4.27874
Iteration: 23
RMSE: 4.26798
Iteration: 24
RMSE: 4.26194
Iteration: 25
RMSE: 4.25591
Iteration: 26
RMSE: 4.25072
Iteration: 27
RMSE: 4.24422
Iteration: 28
RMSE: 4.24307
Iteration: 29
RMSE: 4.23493
Iteration: 30
RMSE: 4.22458
Iteration: 31
RMSE: 4.21628
Iteration: 32
RMSE: 4.21207
Iteration: 33
RMSE: 4.21048
Iteration: 34
RMSE: 4.20769
Iteration: 35
RMSE: 4.20458
Iteration: 36
RMSE: 4.20501
Iteration: 37
RMSE: 4.20187
Iteration: 38
RMSE: 4.20378
Iteration: 39
RMSE: 4.20147
Iteration: 40
RMSE: 4.2014
Iteration: 41
RMSE: 4.20172
Iteration: 42
RMSE: 4.20174
Iteration: 43
RMSE: 4.20179
Iteration: 44
RMSE: 4.19926
Iteration: 45
RMSE: 4.20019
Iteration: 46
RMSE: 4.1975
Iteration: 47
RMSE: 4.19601
Iteration: 48
RMSE: 4.19907
Iteration: 49
RMSE: 4.19479
Iteration: 50
RMSE: 4.19641
Iteration: 51
RMSE: 4.19439
Iteration: 52
RMSE: 4.19538
Iteration: 53
RMSE: 4.19394
Iteration: 54
RMSE: 4.19151
Iteration: 55
RMSE: 4.19388
Iteration: 56
RMSE: 4.18899
Iteration: 57
RMSE: 4.1936
Iteration: 58
RMSE: 4.19189
Iteration: 59
RMSE: 4.19198
Iteration: 60
RMSE: 4.1927
Iteration: 61
RMSE: 4.19205
Iteration: 62
RMSE: 4.19358
Iteration: 63
RMSE: 4.19379
Iteration: 64
RMSE: 4.19318
Iteration: 65
RMSE: 4.19543
Iteration: 66
RMSE: 4.19417
Iteration: 67
RMSE: 4.19487
Iteration: 68
RMSE: 4.19499
Iteration: 69
RMSE: 4.19354
Iteration: 70
RMSE: 4.19311
Iteration: 71
RMSE: 4.19313
Iteration: 72
RMSE: 4.18999
Iteration: 73
RMSE: 4.18869
Iteration: 74
RMSE: 4.19073
Iteration: 75
RMSE: 4.18876
Iteration: 76
RMSE: 4.19048
Iteration: 77
RMSE: 4.19235
Iteration: 78
RMSE: 4.19325
Iteration: 79
RMSE: 4.18861
Iteration: 80
RMSE: 4.19046
Iteration: 81
RMSE: 4.19224
Iteration: 82
RMSE: 4.18991
Iteration: 83
RMSE: 4.19028
Iteration: 84
RMSE: 4.19048
Iteration: 85
RMSE: 4.18996
Iteration: 86
RMSE: 4.19008
Iteration: 87
RMSE: 4.18816
Iteration: 88
RMSE: 4.18801
Iteration: 89
RMSE: 4.1857
Iteration: 90
RMSE: 4.18845
Iteration: 91
RMSE: 4.18824
Iteration: 92
RMSE: 4.18866
Iteration: 93
RMSE: 4.1874
Iteration: 94
RMSE: 4.19071
Iteration: 95
RMSE: 4.18984
Iteration: 96
RMSE: 4.18867
Iteration: 97
RMSE: 4.18954
Iteration: 98
RMSE: 4.18876
Iteration: 99
RMSE: 4.18756
Iteration: 100
RMSE: 4.18824
8064
8070
8076
8082
8088
8094
8100
8106
8112
8118
8124
8130
8136
8142
8148
8154
8160
8166
8172
8178
8184
8190
8196
8202
8208
8214
8220
8226
8232
8238
8244
8250
8256
8262
8268
8274
8280
8286
8292
8298
8304
8310
8316
8322
8328
8334
8340
8346
8352
8358
8364
8370
8376
8382
8388
8394
8400
8406
8412
8418
8424
8430
8436
8442
8448
8454
8460
8466
8472
8478
8484
8490
8496
8502
8508
8514
8520
8526
8532
8538
8544
8550
8556
8562
8568
8574
8580
8586
8592
8598
8604
8610
8616
8622
8628
8634
8640
8646
8652
8658
8664
8670
8676
8682
8688
8694
8700
8706
8712
8718
8724
8730
8736
8742
8748
8754
8760
8766
8772
8778
Final MAPE: 0.178763
Final RMSE: 6.72478
Running time: 6631 seconds
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 144])
num_step = 6
num_roll = int(144 * 5 / num_step)
start_time = dim2 - num_roll * num_step
init = {"W": 0.1 * np.random.rand(dim1, rank),
"X": 0.1 * np.random.rand(start_time, rank)}
burn_iter = 1000
gibbs_iter = 100
result = forecastor(dense_mat, sparse_mat, init, time_lags,
num_roll, start_time, num_step, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
Iteration: 1
RMSE: 5.93895
Iteration: 2
RMSE: 5.75626
Iteration: 3
RMSE: 5.70908
Iteration: 4
RMSE: 5.47771
Iteration: 5
RMSE: 5.05707
Iteration: 6
RMSE: 4.87126
Iteration: 7
RMSE: 4.72067
Iteration: 8
RMSE: 4.6074
Iteration: 9
RMSE: 4.55761
Iteration: 10
RMSE: 4.54906
Iteration: 11
RMSE: 4.54095
Iteration: 12
RMSE: 4.53278
Iteration: 13
RMSE: 4.51274
Iteration: 14
RMSE: 4.48254
Iteration: 15
RMSE: 4.43496
Iteration: 16
RMSE: 4.40604
Iteration: 17
RMSE: 4.39511
Iteration: 18
RMSE: 4.37981
Iteration: 19
RMSE: 4.35406
Iteration: 20
RMSE: 4.32049
Iteration: 21
RMSE: 4.29999
Iteration: 22
RMSE: 4.29119
Iteration: 23
RMSE: 4.28679
Iteration: 24
RMSE: 4.27596
Iteration: 25
RMSE: 4.268
Iteration: 26
RMSE: 4.25774
Iteration: 27
RMSE: 4.25484
Iteration: 28
RMSE: 4.24948
Iteration: 29
RMSE: 4.24814
Iteration: 30
RMSE: 4.24573
Iteration: 31
RMSE: 4.24307
Iteration: 32
RMSE: 4.23946
Iteration: 33
RMSE: 4.23698
Iteration: 34
RMSE: 4.22852
Iteration: 35
RMSE: 4.22466
Iteration: 36
RMSE: 4.22338
Iteration: 37
RMSE: 4.21678
Iteration: 38
RMSE: 4.21427
Iteration: 39
RMSE: 4.20808
Iteration: 40
RMSE: 4.20989
Iteration: 41
RMSE: 4.20762
Iteration: 42
RMSE: 4.20557
Iteration: 43
RMSE: 4.20634
Iteration: 44
RMSE: 4.20334
Iteration: 45
RMSE: 4.20354
Iteration: 46
RMSE: 4.19892
Iteration: 47
RMSE: 4.20183
Iteration: 48
RMSE: 4.19973
Iteration: 49
RMSE: 4.19988
Iteration: 50
RMSE: 4.19659
Iteration: 51
RMSE: 4.19682
Iteration: 52
RMSE: 4.19734
Iteration: 53
RMSE: 4.19564
Iteration: 54
RMSE: 4.19198
Iteration: 55
RMSE: 4.19448
Iteration: 56
RMSE: 4.19818
Iteration: 57
RMSE: 4.19714
Iteration: 58
RMSE: 4.19567
Iteration: 59
RMSE: 4.19736
Iteration: 60
RMSE: 4.19704
Iteration: 61
RMSE: 4.19558
Iteration: 62
RMSE: 4.19394
Iteration: 63
RMSE: 4.19251
Iteration: 64
RMSE: 4.19598
Iteration: 65
RMSE: 4.19175
Iteration: 66
RMSE: 4.19533
Iteration: 67
RMSE: 4.1947
Iteration: 68
RMSE: 4.19227
Iteration: 69
RMSE: 4.1952
Iteration: 70
RMSE: 4.19366
Iteration: 71
RMSE: 4.19342
Iteration: 72
RMSE: 4.19299
Iteration: 73
RMSE: 4.19461
Iteration: 74
RMSE: 4.19233
Iteration: 75
RMSE: 4.19217
Iteration: 76
RMSE: 4.19459
Iteration: 77
RMSE: 4.19308
Iteration: 78
RMSE: 4.19372
Iteration: 79
RMSE: 4.1936
Iteration: 80
RMSE: 4.19267
Iteration: 81
RMSE: 4.19254
Iteration: 82
RMSE: 4.1938
Iteration: 83
RMSE: 4.19193
Iteration: 84
RMSE: 4.19022
Iteration: 85
RMSE: 4.19177
Iteration: 86
RMSE: 4.19154
Iteration: 87
RMSE: 4.19062
Iteration: 88
RMSE: 4.19284
Iteration: 89
RMSE: 4.18914
Iteration: 90
RMSE: 4.18927
Iteration: 91
RMSE: 4.19005
Iteration: 92
RMSE: 4.18961
Iteration: 93
RMSE: 4.18975
Iteration: 94
RMSE: 4.19173
Iteration: 95
RMSE: 4.19157
Iteration: 96
RMSE: 4.18828
Iteration: 97
RMSE: 4.18815
Iteration: 98
RMSE: 4.18974
Iteration: 99
RMSE: 4.18887
Iteration: 100
RMSE: 4.18804
Iteration: 101
RMSE: 4.18748
Iteration: 102
RMSE: 4.18849
Iteration: 103
RMSE: 4.188
Iteration: 104
RMSE: 4.19015
Iteration: 105
RMSE: 4.18699
Iteration: 106
RMSE: 4.18692
Iteration: 107
RMSE: 4.1884
Iteration: 108
RMSE: 4.18753
Iteration: 109
RMSE: 4.18557
Iteration: 110
RMSE: 4.18822
Iteration: 111
RMSE: 4.189
Iteration: 112
RMSE: 4.1869
Iteration: 113
RMSE: 4.18694
Iteration: 114
RMSE: 4.18876
Iteration: 115
RMSE: 4.1872
Iteration: 116
RMSE: 4.18639
Iteration: 117
RMSE: 4.18803
Iteration: 118
RMSE: 4.18574
Iteration: 119
RMSE: 4.18825
Iteration: 120
RMSE: 4.18677
Iteration: 121
RMSE: 4.18598
Iteration: 122
RMSE: 4.18498
Iteration: 123
RMSE: 4.18506
Iteration: 124
RMSE: 4.18846
Iteration: 125
RMSE: 4.18845
Iteration: 126
RMSE: 4.18425
Iteration: 127
RMSE: 4.18349
Iteration: 128
RMSE: 4.18544
Iteration: 129
RMSE: 4.18539
Iteration: 130
RMSE: 4.18532
Iteration: 131
RMSE: 4.18514
Iteration: 132
RMSE: 4.18433
Iteration: 133
RMSE: 4.18594
Iteration: 134
RMSE: 4.18414
Iteration: 135
RMSE: 4.18297
Iteration: 136
RMSE: 4.18576
Iteration: 137
RMSE: 4.18116
Iteration: 138
RMSE: 4.1837
Iteration: 139
RMSE: 4.18314
Iteration: 140
RMSE: 4.18137
Iteration: 141
RMSE: 4.18295
Iteration: 142
RMSE: 4.18098
Iteration: 143
RMSE: 4.18215
Iteration: 144
RMSE: 4.18241
Iteration: 145
RMSE: 4.18162
Iteration: 146
RMSE: 4.18243
Iteration: 147
RMSE: 4.18267
Iteration: 148
RMSE: 4.18263
Iteration: 149
RMSE: 4.18137
Iteration: 150
RMSE: 4.18372
Iteration: 151
RMSE: 4.18208
Iteration: 152
RMSE: 4.17983
Iteration: 153
RMSE: 4.18229
Iteration: 154
RMSE: 4.18344
Iteration: 155
RMSE: 4.18001
Iteration: 156
RMSE: 4.17874
Iteration: 157
RMSE: 4.18139
Iteration: 158
RMSE: 4.18192
Iteration: 159
RMSE: 4.18291
Iteration: 160
RMSE: 4.18102
Iteration: 161
RMSE: 4.18168
Iteration: 162
RMSE: 4.18134
Iteration: 163
RMSE: 4.18262
Iteration: 164
RMSE: 4.18136
Iteration: 165
RMSE: 4.18115
Iteration: 166
RMSE: 4.18201
Iteration: 167
RMSE: 4.17968
Iteration: 168
RMSE: 4.18225
Iteration: 169
RMSE: 4.18097
Iteration: 170
RMSE: 4.1841
Iteration: 171
RMSE: 4.17984
Iteration: 172
RMSE: 4.1809
Iteration: 173
RMSE: 4.17986
Iteration: 174
RMSE: 4.17861
Iteration: 175
RMSE: 4.18045
Iteration: 176
RMSE: 4.17714
Iteration: 177
RMSE: 4.17921
Iteration: 178
RMSE: 4.18006
Iteration: 179
RMSE: 4.17881
Iteration: 180
RMSE: 4.17908
Iteration: 181
RMSE: 4.18011
Iteration: 182
RMSE: 4.18031
Iteration: 183
RMSE: 4.17814
Iteration: 184
RMSE: 4.17795
Iteration: 185
RMSE: 4.18009
Iteration: 186
RMSE: 4.18009
Iteration: 187
RMSE: 4.17787
Iteration: 188
RMSE: 4.17836
Iteration: 189
RMSE: 4.1789
Iteration: 190
RMSE: 4.17868
Iteration: 191
RMSE: 4.17927
Iteration: 192
RMSE: 4.17811
Iteration: 193
RMSE: 4.17838
Iteration: 194
RMSE: 4.17664
Iteration: 195
RMSE: 4.17891
Iteration: 196
RMSE: 4.17975
Iteration: 197
RMSE: 4.18044
Iteration: 198
RMSE: 4.18138
Iteration: 199
RMSE: 4.17869
Iteration: 200
RMSE: 4.17578
Iteration: 201
RMSE: 4.17624
Iteration: 202
RMSE: 4.17747
Iteration: 203
RMSE: 4.17532
Iteration: 204
RMSE: 4.17876
Iteration: 205
RMSE: 4.17715
Iteration: 206
RMSE: 4.17796
Iteration: 207
RMSE: 4.17551
Iteration: 208
RMSE: 4.17704
Iteration: 209
RMSE: 4.17478
Iteration: 210
RMSE: 4.17757
Iteration: 211
RMSE: 4.17695
Iteration: 212
RMSE: 4.17369
Iteration: 213
RMSE: 4.17359
Iteration: 214
RMSE: 4.17426
Iteration: 215
RMSE: 4.17247
Iteration: 216
RMSE: 4.17126
Iteration: 217
RMSE: 4.177
Iteration: 218
RMSE: 4.17701
Iteration: 219
RMSE: 4.17552
Iteration: 220
RMSE: 4.17676
Iteration: 221
RMSE: 4.17863
Iteration: 222
RMSE: 4.17941
Iteration: 223
RMSE: 4.17908
Iteration: 224
RMSE: 4.17827
Iteration: 225
RMSE: 4.17631
Iteration: 226
RMSE: 4.17672
Iteration: 227
RMSE: 4.1752
Iteration: 228
RMSE: 4.17723
Iteration: 229
RMSE: 4.17695
Iteration: 230
RMSE: 4.17613
Iteration: 231
RMSE: 4.17632
Iteration: 232
RMSE: 4.17808
Iteration: 233
RMSE: 4.17542
Iteration: 234
RMSE: 4.17551
Iteration: 235
RMSE: 4.17575
Iteration: 236
RMSE: 4.17817
Iteration: 237
RMSE: 4.17549
Iteration: 238
RMSE: 4.17589
Iteration: 239
RMSE: 4.1765
Iteration: 240
RMSE: 4.17623
Iteration: 241
RMSE: 4.17397
Iteration: 242
RMSE: 4.17509
Iteration: 243
RMSE: 4.17429
Iteration: 244
RMSE: 4.17529
Iteration: 245
RMSE: 4.17607
Iteration: 246
RMSE: 4.17392
Iteration: 247
RMSE: 4.17289
Iteration: 248
RMSE: 4.17157
Iteration: 249
RMSE: 4.17634
Iteration: 250
RMSE: 4.17411
Iteration: 251
RMSE: 4.17242
Iteration: 252
RMSE: 4.17225
Iteration: 253
RMSE: 4.17385
Iteration: 254
RMSE: 4.17513
Iteration: 255
RMSE: 4.17334
Iteration: 256
RMSE: 4.17315
Iteration: 257
RMSE: 4.17348
Iteration: 258
RMSE: 4.17558
Iteration: 259
RMSE: 4.17489
Iteration: 260
RMSE: 4.17453
Iteration: 261
RMSE: 4.17606
Iteration: 262
RMSE: 4.17288
Iteration: 263
RMSE: 4.17344
Iteration: 264
RMSE: 4.17338
Iteration: 265
RMSE: 4.17289
Iteration: 266
RMSE: 4.17387
Iteration: 267
RMSE: 4.17399
Iteration: 268
RMSE: 4.17473
Iteration: 269
RMSE: 4.17627
Iteration: 270
RMSE: 4.1739
Iteration: 271
RMSE: 4.17374
Iteration: 272
RMSE: 4.17678
Iteration: 273
RMSE: 4.17561
Iteration: 274
RMSE: 4.17334
Iteration: 275
RMSE: 4.17215
Iteration: 276
RMSE: 4.1725
Iteration: 277
RMSE: 4.17401
Iteration: 278
RMSE: 4.17075
Iteration: 279
RMSE: 4.17269
Iteration: 280
RMSE: 4.17289
Iteration: 281
RMSE: 4.17093
Iteration: 282
RMSE: 4.16826
Iteration: 283
RMSE: 4.17359
Iteration: 284
RMSE: 4.17055
Iteration: 285
RMSE: 4.17155
Iteration: 286
RMSE: 4.17145
Iteration: 287
RMSE: 4.16986
Iteration: 288
RMSE: 4.16994
Iteration: 289
RMSE: 4.17305
Iteration: 290
RMSE: 4.1739
Iteration: 291
RMSE: 4.17216
Iteration: 292
RMSE: 4.1747
Iteration: 293
RMSE: 4.17331
Iteration: 294
RMSE: 4.17382
Iteration: 295
RMSE: 4.17009
Iteration: 296
RMSE: 4.172
Iteration: 297
RMSE: 4.174
Iteration: 298
RMSE: 4.17279
Iteration: 299
RMSE: 4.17181
Iteration: 300
RMSE: 4.17332
Iteration: 301
RMSE: 4.17236
Iteration: 302
RMSE: 4.17359
Iteration: 303
RMSE: 4.17273
Iteration: 304
RMSE: 4.17165
Iteration: 305
RMSE: 4.17166
Iteration: 306
RMSE: 4.17236
Iteration: 307
RMSE: 4.17234
Iteration: 308
RMSE: 4.17236
Iteration: 309
RMSE: 4.17428
Iteration: 310
RMSE: 4.17245
Iteration: 311
RMSE: 4.17205
Iteration: 312
RMSE: 4.171
Iteration: 313
RMSE: 4.17249
Iteration: 314
RMSE: 4.17232
Iteration: 315
RMSE: 4.17093
Iteration: 316
RMSE: 4.17175
Iteration: 317
RMSE: 4.16898
Iteration: 318
RMSE: 4.17128
Iteration: 319
RMSE: 4.17105
Iteration: 320
RMSE: 4.17003
Iteration: 321
RMSE: 4.16914
Iteration: 322
RMSE: 4.17083
Iteration: 323
RMSE: 4.16978
Iteration: 324
RMSE: 4.17017
Iteration: 325
RMSE: 4.17097
Iteration: 326
RMSE: 4.16986
Iteration: 327
RMSE: 4.1682
Iteration: 328
RMSE: 4.16749
Iteration: 329
RMSE: 4.16929
Iteration: 330
RMSE: 4.16873
Iteration: 331
RMSE: 4.17118
Iteration: 332
RMSE: 4.17154
Iteration: 333
RMSE: 4.17001
Iteration: 334
RMSE: 4.1708
Iteration: 335
RMSE: 4.17264
Iteration: 336
RMSE: 4.17023
Iteration: 337
RMSE: 4.16931
Iteration: 338
RMSE: 4.16813
Iteration: 339
RMSE: 4.16789
Iteration: 340
RMSE: 4.16668
Iteration: 341
RMSE: 4.17025
Iteration: 342
RMSE: 4.16777
Iteration: 343
RMSE: 4.1675
Iteration: 344
RMSE: 4.1687
Iteration: 345
RMSE: 4.16835
Iteration: 346
RMSE: 4.16751
Iteration: 347
RMSE: 4.16978
Iteration: 348
RMSE: 4.16877
Iteration: 349
RMSE: 4.1689
Iteration: 350
RMSE: 4.16963
Iteration: 351
RMSE: 4.16777
Iteration: 352
RMSE: 4.16809
Iteration: 353
RMSE: 4.16797
Iteration: 354
RMSE: 4.16736
Iteration: 355
RMSE: 4.1671
Iteration: 356
RMSE: 4.16618
Iteration: 357
RMSE: 4.1656
Iteration: 358
RMSE: 4.1659
Iteration: 359
RMSE: 4.16696
Iteration: 360
RMSE: 4.16637
Iteration: 361
RMSE: 4.16601
Iteration: 362
RMSE: 4.16693
Iteration: 363
RMSE: 4.16618
Iteration: 364
RMSE: 4.16718
Iteration: 365
RMSE: 4.16688
Iteration: 366
RMSE: 4.16519
Iteration: 367
RMSE: 4.16792
Iteration: 368
RMSE: 4.16583
Iteration: 369
RMSE: 4.16547
Iteration: 370
RMSE: 4.16399
Iteration: 371
RMSE: 4.16468
Iteration: 372
RMSE: 4.16694
Iteration: 373
RMSE: 4.16725
Iteration: 374
RMSE: 4.16577
Iteration: 375
RMSE: 4.16579
Iteration: 376
RMSE: 4.16584
Iteration: 377
RMSE: 4.16751
Iteration: 378
RMSE: 4.16708
Iteration: 379
RMSE: 4.16637
Iteration: 380
RMSE: 4.1665
Iteration: 381
RMSE: 4.16789
Iteration: 382
RMSE: 4.16743
Iteration: 383
RMSE: 4.16706
Iteration: 384
RMSE: 4.16624
Iteration: 385
RMSE: 4.16684
Iteration: 386
RMSE: 4.16697
Iteration: 387
RMSE: 4.16894
Iteration: 388
RMSE: 4.16874
Iteration: 389
RMSE: 4.16899
Iteration: 390
RMSE: 4.16905
Iteration: 391
RMSE: 4.16961
Iteration: 392
RMSE: 4.16781
Iteration: 393
RMSE: 4.16878
Iteration: 394
RMSE: 4.16832
Iteration: 395
RMSE: 4.16762
Iteration: 396
RMSE: 4.16623
Iteration: 397
RMSE: 4.16665
Iteration: 398
RMSE: 4.16558
Iteration: 399
RMSE: 4.16722
Iteration: 400
RMSE: 4.16444
Iteration: 401
RMSE: 4.16517
Iteration: 402
RMSE: 4.16674
Iteration: 403
RMSE: 4.16591
Iteration: 404
RMSE: 4.16632
Iteration: 405
RMSE: 4.16617
Iteration: 406
RMSE: 4.16708
Iteration: 407
RMSE: 4.16571
Iteration: 408
RMSE: 4.16768
Iteration: 409
RMSE: 4.16844
Iteration: 410
RMSE: 4.16772
Iteration: 411
RMSE: 4.16555
Iteration: 412
RMSE: 4.16743
Iteration: 413
RMSE: 4.16563
Iteration: 414
RMSE: 4.1675
Iteration: 415
RMSE: 4.16743
Iteration: 416
RMSE: 4.16605
Iteration: 417
RMSE: 4.16326
Iteration: 418
RMSE: 4.16383
Iteration: 419
RMSE: 4.16552
Iteration: 420
RMSE: 4.16279
Iteration: 421
RMSE: 4.16415
Iteration: 422
RMSE: 4.16367
Iteration: 423
RMSE: 4.16418
Iteration: 424
RMSE: 4.16417
Iteration: 425
RMSE: 4.16372
Iteration: 426
RMSE: 4.16442
Iteration: 427
RMSE: 4.16545
Iteration: 428
RMSE: 4.16462
Iteration: 429
RMSE: 4.16555
Iteration: 430
RMSE: 4.16425
Iteration: 431
RMSE: 4.16431
Iteration: 432
RMSE: 4.16681
Iteration: 433
RMSE: 4.16657
Iteration: 434
RMSE: 4.16499
Iteration: 435
RMSE: 4.16477
Iteration: 436
RMSE: 4.16458
Iteration: 437
RMSE: 4.16168
Iteration: 438
RMSE: 4.16246
Iteration: 439
RMSE: 4.16225
Iteration: 440
RMSE: 4.16225
Iteration: 441
RMSE: 4.16407
Iteration: 442
RMSE: 4.16112
Iteration: 443
RMSE: 4.16249
Iteration: 444
RMSE: 4.16291
Iteration: 445
RMSE: 4.16253
Iteration: 446
RMSE: 4.16451
Iteration: 447
RMSE: 4.16368
Iteration: 448
RMSE: 4.16489
Iteration: 449
RMSE: 4.16381
Iteration: 450
RMSE: 4.16302
Iteration: 451
RMSE: 4.163
Iteration: 452
RMSE: 4.16421
Iteration: 453
RMSE: 4.16465
Iteration: 454
RMSE: 4.1636
Iteration: 455
RMSE: 4.1642
Iteration: 456
RMSE: 4.16564
Iteration: 457
RMSE: 4.16436
Iteration: 458
RMSE: 4.16503
Iteration: 459
RMSE: 4.16319
Iteration: 460
RMSE: 4.16312
Iteration: 461
RMSE: 4.16486
Iteration: 462
RMSE: 4.16428
Iteration: 463
RMSE: 4.16379
Iteration: 464
RMSE: 4.16337
Iteration: 465
RMSE: 4.16406
Iteration: 466
RMSE: 4.16658
Iteration: 467
RMSE: 4.16517
Iteration: 468
RMSE: 4.16659
Iteration: 469
RMSE: 4.16484
Iteration: 470
RMSE: 4.16298
Iteration: 471
RMSE: 4.1631
Iteration: 472
RMSE: 4.1661
Iteration: 473
RMSE: 4.16564
Iteration: 474
RMSE: 4.16446
Iteration: 475
RMSE: 4.16275
Iteration: 476
RMSE: 4.16292
Iteration: 477
RMSE: 4.16035
Iteration: 478
RMSE: 4.16126
Iteration: 479
RMSE: 4.16133
Iteration: 480
RMSE: 4.16026
Iteration: 481
RMSE: 4.16129
Iteration: 482
RMSE: 4.16133
Iteration: 483
RMSE: 4.16274
Iteration: 484
RMSE: 4.16192
Iteration: 485
RMSE: 4.16262
Iteration: 486
RMSE: 4.16308
Iteration: 487
RMSE: 4.16263
Iteration: 488
RMSE: 4.16222
Iteration: 489
RMSE: 4.16207
Iteration: 490
RMSE: 4.16218
Iteration: 491
RMSE: 4.16179
Iteration: 492
RMSE: 4.1642
Iteration: 493
RMSE: 4.16363
Iteration: 494
RMSE: 4.16102
Iteration: 495
RMSE: 4.16284
Iteration: 496
RMSE: 4.16171
Iteration: 497
RMSE: 4.1652
Iteration: 498
RMSE: 4.16433
Iteration: 499
RMSE: 4.16273
Iteration: 500
RMSE: 4.16107
Iteration: 501
RMSE: 4.16232
Iteration: 502
RMSE: 4.1618
Iteration: 503
RMSE: 4.16223
Iteration: 504
RMSE: 4.16168
Iteration: 505
RMSE: 4.16284
Iteration: 506
RMSE: 4.16378
Iteration: 507
RMSE: 4.16212
Iteration: 508
RMSE: 4.16422
Iteration: 509
RMSE: 4.16234
Iteration: 510
RMSE: 4.16324
Iteration: 511
RMSE: 4.16225
Iteration: 512
RMSE: 4.16082
Iteration: 513
RMSE: 4.163
Iteration: 514
RMSE: 4.16321
Iteration: 515
RMSE: 4.16434
Iteration: 516
RMSE: 4.16235
Iteration: 517
RMSE: 4.1636
Iteration: 518
RMSE: 4.16216
Iteration: 519
RMSE: 4.16235
Iteration: 520
RMSE: 4.16234
Iteration: 521
RMSE: 4.16288
Iteration: 522
RMSE: 4.16282
Iteration: 523
RMSE: 4.16184
Iteration: 524
RMSE: 4.16182
Iteration: 525
RMSE: 4.16098
Iteration: 526
RMSE: 4.16173
Iteration: 527
RMSE: 4.15926
Iteration: 528
RMSE: 4.16168
Iteration: 529
RMSE: 4.16106
Iteration: 530
RMSE: 4.15992
Iteration: 531
RMSE: 4.16317
Iteration: 532
RMSE: 4.16298
Iteration: 533
RMSE: 4.16369
Iteration: 534
RMSE: 4.16314
Iteration: 535
RMSE: 4.16304
Iteration: 536
RMSE: 4.16304
Iteration: 537
RMSE: 4.16133
Iteration: 538
RMSE: 4.16195
Iteration: 539
RMSE: 4.16294
Iteration: 540
RMSE: 4.16286
Iteration: 541
RMSE: 4.16243
Iteration: 542
RMSE: 4.16176
Iteration: 543
RMSE: 4.16272
Iteration: 544
RMSE: 4.15979
Iteration: 545
RMSE: 4.16191
Iteration: 546
RMSE: 4.16243
Iteration: 547
RMSE: 4.16039
Iteration: 548
RMSE: 4.15978
Iteration: 549
RMSE: 4.16092
Iteration: 550
RMSE: 4.15984
Iteration: 551
RMSE: 4.1623
Iteration: 552
RMSE: 4.16409
Iteration: 553
RMSE: 4.16033
Iteration: 554
RMSE: 4.16119
Iteration: 555
RMSE: 4.15966
Iteration: 556
RMSE: 4.15945
Iteration: 557
RMSE: 4.16213
Iteration: 558
RMSE: 4.15928
Iteration: 559
RMSE: 4.16085
Iteration: 560
RMSE: 4.16203
Iteration: 561
RMSE: 4.16011
Iteration: 562
RMSE: 4.15991
Iteration: 563
RMSE: 4.15951
Iteration: 564
RMSE: 4.1596
Iteration: 565
RMSE: 4.16355
Iteration: 566
RMSE: 4.15974
Iteration: 567
RMSE: 4.15856
Iteration: 568
RMSE: 4.15861
Iteration: 569
RMSE: 4.15716
Iteration: 570
RMSE: 4.15849
Iteration: 571
RMSE: 4.16046
Iteration: 572
RMSE: 4.16011
Iteration: 573
RMSE: 4.15921
Iteration: 574
RMSE: 4.15938
Iteration: 575
RMSE: 4.16154
Iteration: 576
RMSE: 4.16154
Iteration: 577
RMSE: 4.15887
Iteration: 578
RMSE: 4.1588
Iteration: 579
RMSE: 4.15796
Iteration: 580
RMSE: 4.15974
Iteration: 581
RMSE: 4.15819
Iteration: 582
RMSE: 4.15807
Iteration: 583
RMSE: 4.15667
Iteration: 584
RMSE: 4.15733
Iteration: 585
RMSE: 4.15993
Iteration: 586
RMSE: 4.16053
Iteration: 587
RMSE: 4.16118
Iteration: 588
RMSE: 4.16026
Iteration: 589
RMSE: 4.15987
Iteration: 590
RMSE: 4.15952
Iteration: 591
RMSE: 4.15682
Iteration: 592
RMSE: 4.15832
Iteration: 593
RMSE: 4.15833
Iteration: 594
RMSE: 4.1577
Iteration: 595
RMSE: 4.15775
Iteration: 596
RMSE: 4.15642
Iteration: 597
RMSE: 4.15571
Iteration: 598
RMSE: 4.1577
Iteration: 599
RMSE: 4.15653
Iteration: 600
RMSE: 4.15757
Iteration: 601
RMSE: 4.15906
Iteration: 602
RMSE: 4.16079
Iteration: 603
RMSE: 4.15945
Iteration: 604
RMSE: 4.15811
Iteration: 605
RMSE: 4.1588
Iteration: 606
RMSE: 4.15881
Iteration: 607
RMSE: 4.15717
Iteration: 608
RMSE: 4.15933
Iteration: 609
RMSE: 4.15987
Iteration: 610
RMSE: 4.15728
Iteration: 611
RMSE: 4.15994
Iteration: 612
RMSE: 4.15902
Iteration: 613
RMSE: 4.15852
Iteration: 614
RMSE: 4.15839
Iteration: 615
RMSE: 4.1578
Iteration: 616
RMSE: 4.15797
Iteration: 617
RMSE: 4.15774
Iteration: 618
RMSE: 4.15835
Iteration: 619
RMSE: 4.15824
Iteration: 620
RMSE: 4.15705
Iteration: 621
RMSE: 4.15726
Iteration: 622
RMSE: 4.15782
Iteration: 623
RMSE: 4.15857
Iteration: 624
RMSE: 4.15799
Iteration: 625
RMSE: 4.15884
Iteration: 626
RMSE: 4.16054
Iteration: 627
RMSE: 4.15902
Iteration: 628
RMSE: 4.15699
Iteration: 629
RMSE: 4.16109
Iteration: 630
RMSE: 4.15861
Iteration: 631
RMSE: 4.15917
Iteration: 632
RMSE: 4.15887
Iteration: 633
RMSE: 4.15647
Iteration: 634
RMSE: 4.15823
Iteration: 635
RMSE: 4.16028
Iteration: 636
RMSE: 4.1601
Iteration: 637
RMSE: 4.15742
Iteration: 638
RMSE: 4.15817
Iteration: 639
RMSE: 4.15769
Iteration: 640
RMSE: 4.15625
Iteration: 641
RMSE: 4.15763
Iteration: 642
RMSE: 4.15804
Iteration: 643
RMSE: 4.15785
Iteration: 644
RMSE: 4.15737
Iteration: 645
RMSE: 4.15898
Iteration: 646
RMSE: 4.15838
Iteration: 647
RMSE: 4.15853
Iteration: 648
RMSE: 4.15881
Iteration: 649
RMSE: 4.15838
Iteration: 650
RMSE: 4.16069
Iteration: 651
RMSE: 4.15955
Iteration: 652
RMSE: 4.15669
Iteration: 653
RMSE: 4.15708
Iteration: 654
RMSE: 4.15769
Iteration: 655
RMSE: 4.15694
Iteration: 656
RMSE: 4.157
Iteration: 657
RMSE: 4.15634
Iteration: 658
RMSE: 4.15761
Iteration: 659
RMSE: 4.15692
Iteration: 660
RMSE: 4.15807
Iteration: 661
RMSE: 4.16184
Iteration: 662
RMSE: 4.15858
Iteration: 663
RMSE: 4.16103
Iteration: 664
RMSE: 4.15995
Iteration: 665
RMSE: 4.15981
Iteration: 666
RMSE: 4.1576
Iteration: 667
RMSE: 4.15777
Iteration: 668
RMSE: 4.15947
Iteration: 669
RMSE: 4.1592
Iteration: 670
RMSE: 4.15904
Iteration: 671
RMSE: 4.1585
Iteration: 672
RMSE: 4.1585
Iteration: 673
RMSE: 4.15837
Iteration: 674
RMSE: 4.15709
Iteration: 675
RMSE: 4.15779
Iteration: 676
RMSE: 4.15935
Iteration: 677
RMSE: 4.15877
Iteration: 678
RMSE: 4.15883
Iteration: 679
RMSE: 4.15922
Iteration: 680
RMSE: 4.15867
Iteration: 681
RMSE: 4.15815
Iteration: 682
RMSE: 4.15678
Iteration: 683
RMSE: 4.1566
Iteration: 684
RMSE: 4.15598
Iteration: 685
RMSE: 4.15618
Iteration: 686
RMSE: 4.15449
Iteration: 687
RMSE: 4.15927
Iteration: 688
RMSE: 4.15882
Iteration: 689
RMSE: 4.15921
Iteration: 690
RMSE: 4.15909
Iteration: 691
RMSE: 4.15761
Iteration: 692
RMSE: 4.15742
Iteration: 693
RMSE: 4.15599
Iteration: 694
RMSE: 4.15684
Iteration: 695
RMSE: 4.15769
Iteration: 696
RMSE: 4.15657
Iteration: 697
RMSE: 4.15524
Iteration: 698
RMSE: 4.15687
Iteration: 699
RMSE: 4.1581
Iteration: 700
RMSE: 4.15827
Iteration: 701
RMSE: 4.15746
Iteration: 702
RMSE: 4.15712
Iteration: 703
RMSE: 4.15504
Iteration: 704
RMSE: 4.15773
Iteration: 705
RMSE: 4.159
Iteration: 706
RMSE: 4.15919
Iteration: 707
RMSE: 4.15864
Iteration: 708
RMSE: 4.15756
Iteration: 709
RMSE: 4.15506
Iteration: 710
RMSE: 4.15699
Iteration: 711
RMSE: 4.15752
Iteration: 712
RMSE: 4.15788
Iteration: 713
RMSE: 4.15727
Iteration: 714
RMSE: 4.15913
Iteration: 715
RMSE: 4.15857
Iteration: 716
RMSE: 4.15899
Iteration: 717
RMSE: 4.15628
Iteration: 718
RMSE: 4.15863
Iteration: 719
RMSE: 4.15747
Iteration: 720
RMSE: 4.15825
Iteration: 721
RMSE: 4.15687
Iteration: 722
RMSE: 4.1563
Iteration: 723
RMSE: 4.15624
Iteration: 724
RMSE: 4.15781
Iteration: 725
RMSE: 4.15611
Iteration: 726
RMSE: 4.1586
Iteration: 727
RMSE: 4.15703
Iteration: 728
RMSE: 4.15698
Iteration: 729
RMSE: 4.15683
Iteration: 730
RMSE: 4.15752
Iteration: 731
RMSE: 4.15657
Iteration: 732
RMSE: 4.15914
Iteration: 733
RMSE: 4.16022
Iteration: 734
RMSE: 4.16061
Iteration: 735
RMSE: 4.16015
Iteration: 736
RMSE: 4.15716
Iteration: 737
RMSE: 4.16056
Iteration: 738
RMSE: 4.15791
Iteration: 739
RMSE: 4.15853
Iteration: 740
RMSE: 4.15696
Iteration: 741
RMSE: 4.15783
Iteration: 742
RMSE: 4.15702
Iteration: 743
RMSE: 4.15734
Iteration: 744
RMSE: 4.1568
Iteration: 745
RMSE: 4.15814
Iteration: 746
RMSE: 4.15754
Iteration: 747
RMSE: 4.15596
Iteration: 748
RMSE: 4.15875
Iteration: 749
RMSE: 4.15772
Iteration: 750
RMSE: 4.15802
Iteration: 751
RMSE: 4.15765
Iteration: 752
RMSE: 4.15695
Iteration: 753
RMSE: 4.15986
Iteration: 754
RMSE: 4.15844
Iteration: 755
RMSE: 4.15867
Iteration: 756
RMSE: 4.15851
Iteration: 757
RMSE: 4.15696
Iteration: 758
RMSE: 4.15604
Iteration: 759
RMSE: 4.15601
Iteration: 760
RMSE: 4.15818
Iteration: 761
RMSE: 4.15847
Iteration: 762
RMSE: 4.15796
Iteration: 763
RMSE: 4.15887
Iteration: 764
RMSE: 4.15793
Iteration: 765
RMSE: 4.15877
Iteration: 766
RMSE: 4.15808
Iteration: 767
RMSE: 4.15952
Iteration: 768
RMSE: 4.15749
Iteration: 769
RMSE: 4.15667
Iteration: 770
RMSE: 4.15812
Iteration: 771
RMSE: 4.15867
Iteration: 772
RMSE: 4.15655
Iteration: 773
RMSE: 4.15763
Iteration: 774
RMSE: 4.15908
Iteration: 775
RMSE: 4.15774
Iteration: 776
RMSE: 4.15607
Iteration: 777
RMSE: 4.1559
Iteration: 778
RMSE: 4.15557
Iteration: 779
RMSE: 4.15565
Iteration: 780
RMSE: 4.15667
Iteration: 781
RMSE: 4.15631
Iteration: 782
RMSE: 4.1573
Iteration: 783
RMSE: 4.15513
Iteration: 784
RMSE: 4.15728
Iteration: 785
RMSE: 4.15477
Iteration: 786
RMSE: 4.15596
Iteration: 787
RMSE: 4.15689
Iteration: 788
RMSE: 4.15679
Iteration: 789
RMSE: 4.15816
Iteration: 790
RMSE: 4.15667
Iteration: 791
RMSE: 4.15663
Iteration: 792
RMSE: 4.15647
Iteration: 793
RMSE: 4.15827
Iteration: 794
RMSE: 4.15685
Iteration: 795
RMSE: 4.15613
Iteration: 796
RMSE: 4.15649
... (iterations 797-999 omitted: RMSE keeps fluctuating between roughly 4.15 and 4.16) ...
Iteration: 1000
RMSE: 4.15219
8064  8070  8076  ...  8772  8778  (values from 8064 to 8778 in steps of 6; intermediate lines omitted)
Final MAPE: 0.164031
Final RMSE: 6.08458
Running time: 32810 seconds
```python
mat_hat10 = np.percentile(result, 5, axis = 2)
mat_hat90 = np.percentile(result, 95, axis = 2)
mat_hat = np.mean(result, axis = 2)
```
```python
X = dense_mat.copy()
pred_steps = int(num_roll * num_step)
tv = 144
```
```python
import matplotlib.pyplot as plt
plt.style.use('ggplot')
figsize = 2
for i in range(3):
fig = plt.figure(figsize = (4 * figsize, 1 * figsize))
ax = fig.add_axes([0.13, 0.28, 0.85, 0.68])
plt.plot(X[i, 54 * tv :], color = "black", linewidth = 0.5)
plt.plot(list(range(X.shape[1] - pred_steps - 54 * tv, X.shape[1] - 54 * tv)),
mat_hat[i, :], color = "#e3120b", linewidth = 2.0)
plt.plot(list(range(X.shape[1] - pred_steps - 54 * tv, X.shape[1] - 54 * tv)),
mat_hat10[i, :], color = "blue", linewidth = 0.5)
plt.plot(list(range(X.shape[1] - pred_steps - 54 * tv, X.shape[1] - 54 * tv)),
mat_hat90[i, :], color = "green", linewidth = 0.5)
```
```python
import scipy.io
tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.4
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = np.round(random_tensor + 0.5 - missing_rate).reshape([random_tensor.shape[0],
random_tensor.shape[1]
* random_tensor.shape[2]])
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
```
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 108])
num_step = 5
num_roll = int(108 * 5 / num_step)
start_time = dim2 - num_roll * num_step
init = {"W": 0.1 * np.random.rand(dim1, rank),
"X": 0.1 * np.random.rand(start_time, rank)}
burn_iter = 100
gibbs_iter = 10
result = forecastor(dense_mat, sparse_mat, init, time_lags,
num_roll, start_time, num_step, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
Iteration: 1
RMSE: 101.287
... (iterations 2-99 omitted: RMSE drops quickly to the mid-40s and then fluctuates between roughly 41.6 and 47.5) ...
Iteration: 100
RMSE: 43.0127
2160  2165  2170  ...  2690  2695  (values from 2160 to 2695 in steps of 5; intermediate lines omitted)
Final MAPE: 0.585583
Final RMSE: 80.7796
Running time: 875 seconds
```python
mat_hat10 = np.percentile(result, 10, axis = 2)
mat_hat90 = np.percentile(result, 90, axis = 2)
mat_hat = np.mean(result, axis = 2)
X = dense_mat.copy()
pred_steps = int(num_roll * num_step)
tv = 108
import matplotlib.pyplot as plt
plt.style.use('ggplot')
figsize = 2
for i in range(3):
fig = plt.figure(figsize = (8 * figsize, 2 * figsize))
ax = fig.add_axes([0.13, 0.28, 0.85, 0.68])
plt.plot(X[i, 18 * tv :], color = "black", linewidth = 0.5)
plt.plot(list(range(X.shape[1] - pred_steps - 18 * tv, X.shape[1] - 18 * tv)),
mat_hat[i, :], color = "#e3120b", linewidth = 2.0)
plt.plot(list(range(X.shape[1] - pred_steps - 18 * tv, X.shape[1] - 18 * tv)),
mat_hat10[i, :], color = "blue", linewidth = 0.5)
plt.plot(list(range(X.shape[1] - pred_steps - 18 * tv, X.shape[1] - 18 * tv)),
mat_hat90[i, :], color = "green", linewidth = 0.5)
```
```python
import scipy.io
tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.0
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = np.round(random_tensor + 0.5 - missing_rate).reshape([random_tensor.shape[0],
random_tensor.shape[1]
* random_tensor.shape[2]])
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
```
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 108])
num_step = 5
num_roll = int(108 * 5 / num_step)
start_time = dim2 - num_roll * num_step
init = {"W": 0.1 * np.random.rand(dim1, rank),
"X": 0.1 * np.random.rand(start_time, rank)}
burn_iter = 100
gibbs_iter = 10
result = forecastor(dense_mat, sparse_mat, init, time_lags,
num_roll, start_time, num_step, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
/Users/xinyuchen/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:18: RuntimeWarning: invalid value encountered in double_scalars
Iteration: 1
RMSE: nan
... (iterations 2-99 omitted: RMSE is nan at every iteration; given the RuntimeWarning above, this is most likely because missing_rate = 0.0 leaves essentially no held-out entries to score during training) ...
Iteration: 100
RMSE: nan
2160  2165  2170  ...  2690  2695  (values from 2160 to 2695 in steps of 5; intermediate lines omitted)
Final MAPE: 0.582281
Final RMSE: 75.4304
Running time: 866 seconds
```python
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
time_lags = np.array([1, 2, 108])
num_step = 5
num_roll = int(108 * 5 / num_step)
start_time = dim2 - num_roll * num_step
init = {"W": 0.1 * np.random.rand(dim1, rank),
"X": 0.1 * np.random.rand(start_time, rank)}
burn_iter = 500
gibbs_iter = 50
result = forecastor(dense_mat, sparse_mat, init, time_lags,
num_roll, start_time, num_step, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
/Users/xinyuchen/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:18: RuntimeWarning: invalid value encountered in double_scalars
Iteration: 1
RMSE: nan
... (iterations 2-499 omitted: RMSE is nan at every iteration, as in the previous run) ...
Iteration: 500
RMSE: nan
2160  2165  2170  ...  2690  2695  (values from 2160 to 2695 in steps of 5; intermediate lines omitted)
Final MAPE: 0.514535
Final RMSE: 74.9771
Running time: 4369 seconds
| 23b623bbfef6a8ecbce6621eb3eeb240ccea3e52 | 645,737 | ipynb | Jupyter Notebook | content/BTMF.ipynb | ni11235/tensor-learning | c5710e985e99108cdc1efee14467a733c3128a76 | [
"MIT"
]
| 1 | 2020-04-08T20:15:43.000Z | 2020-04-08T20:15:43.000Z | content/BTMF.ipynb | stel-nik/tensor-learning | c5710e985e99108cdc1efee14467a733c3128a76 | [
"MIT"
]
| null | null | null | content/BTMF.ipynb | stel-nik/tensor-learning | c5710e985e99108cdc1efee14467a733c3128a76 | [
"MIT"
]
| 1 | 2021-04-20T15:09:19.000Z | 2021-04-20T15:09:19.000Z | 53.274235 | 91,100 | 0.723691 | true | 61,454 | Qwen/Qwen-72B | 1. YES
2. YES | 0.870597 | 0.709019 | 0.61727 | __label__yue_Hant | 0.349316 | 0.272456 |
```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import pandas as pd
import numpy.random as rnd
from scipy.integrate import odeint,quad
from scipy.stats import kde,beta
import seaborn as sns
%matplotlib inline
from importlib import reload
pi=np.pi
from numpy import linalg as LA
from scipy.linalg import expm
from scipy.optimize import brentq
```
```python
import stablecompoper
sns.set()
plt.rcParams.update({
"text.usetex": True
})
plt.rc('text', usetex=False)
```
## The monotype linear birth and death process in a periodic environment
This process $(X(t),t\ge 0)$ with values in $\mathbb{N}$ is described by its
time varying generator
\begin{equation}
L_t f(x) = x\left[\lambda(t)(f(x+1)-f(x)) + \mu(t) (f(x-1)-f(x))\right]\,,
\end{equation}
where $\lambda,\mu$ are non negative $T$-periodic functions.
Let $Z(t)$ be the point measure on $S$ describing the set of states
(i.e. phases) of the individuals born before $t$ and still alive at
time $t$ : if $Z(t) = \sum_i \delta_{s_i}$ then $<Z(t), f>=
\sum_i f(s_i)$. We have the convergence in $L^1$, when we start with $X(s)=1$ one individual of phase $s$,
\begin{equation}
\lim_{n\to +\infty} e^{-\alpha(nT-s)} <Z(nT),f> = h(s) \int_S f(t)\, d\pi(t)\,,
\end{equation}
where the reproductive value of phase $s$ is the periodic function for $T=1$
\begin{equation}
h(s) = e^{\alpha s -\varphi(s)}\,,
\end{equation}
and the measure $\pi$ is the stable composition law
\begin{equation}
\boxed{\pi(dt) = \frac1{e^{A(T)} -1} \lambda(t) e^{A(t)}\, 1_{t\in(0,T)}\, dt\,.}
\end{equation}
The process is one dimensional, and the death rate is constant
$\mu(t)=\mu_0$ and the birth rate is
\begin{equation}
\lambda(t) = \lambda_0 (1 + c \cos(2\pi t/T))\,.
\end{equation}
The stable composition law is thus
\begin{equation}
\pi(dt) = \frac1{e^{A(T)} -1} \lambda(t) e^{A(t)}\, 1_{t\in(0,T)}\,
dt\,,
\end{equation}
with
\begin{equation}
A(t) = \lambda_0 (t + \frac{ c T}{2 \pi} \sin(2\pi t/T) )
\end{equation}
We perform a simulation of the linear birth and death process for $N$
periods, and we keep the phase, the birth dates modulo $T$, of the
living individuals at time $N T$. We wait until the first non extinct
population, and then we plot its histogram against the true density
$\pi$ and against the birth rate $\lambda(t)$
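As a quick cross-check of the histogram, the theoretical density above can also be evaluated directly. The following is only a minimal sketch (the actual simulation and plotting live in the `stablecompoper` module, which is not shown here); the function name `pi_density` is mine, and it simply transcribes the boxed formula with $A(t) = \lambda_0\,(t + \frac{cT}{2\pi}\sin(2\pi t/T))$.
```python
import numpy as np

def pi_density(t, lzero, coeff, T):
    """Stable phase density pi(t) on (0, T) for lambda(t) = lzero*(1 + coeff*cos(2*pi*t/T))."""
    A = lambda s: lzero * (s + coeff * T / (2 * np.pi) * np.sin(2 * np.pi * s / T))
    lam = lzero * (1 + coeff * np.cos(2 * np.pi * t / T))     # birth rate lambda(t)
    return lam * np.exp(A(t)) / (np.exp(A(T)) - 1.0)          # pi(t) = lambda(t) e^{A(t)} / (e^{A(T)} - 1)
```
Evaluating `pi_density` on a grid over $(0,T)$ and overlaying it on the histogram gives exactly the comparison described above.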
#### Remark
Do not take too many periods $N$, nor too long a period $T$; otherwise the sample size becomes far too large. To get a good histogram estimate of the density it is enough to have a sample size above 2000. Looking at the constant-rate case, one should roughly take $e^{N T (\lambda_0 -\mu_0)}\simeq 2000$, which gives $NT \simeq 7.6/(\lambda_0 -\mu_0) \simeq 12$.
To recover the case where the birth and death rates are constant,
it suffices to take c=0.
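A one-line check of that heuristic (using the default slider values $\lambda_0 = 0.8$, $\mu_0 = 0.1$ from the cell below; the variable names here are mine):
```python
import numpy as np

lzero, muzero = 0.8, 0.1
NT = np.log(2000) / (lzero - muzero)   # from e^{NT(lambda0 - mu0)} ~ 2000
print(NT)                              # ~ 10.9, so N*T around 11-12 is already enough
```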
```python
from ipywidgets import GridspecLayout,Layout,Button, AppLayout,TwoByTwoLayout,interactive_output
import ipywidgets as widgets
def create_expanded_button(description, button_style):
return Button(description=description, button_style=button_style, layout=Layout(height='auto', width='auto'))
grid = GridspecLayout(3, 3)
blzero=widgets.FloatSlider(min=0.0, max=4.0, step=0.1, value=0.8, continuous_update=False,description=r'$\lambda_0$')
bmuzero=widgets.FloatSlider(min=0.0, max=2.0, step=0.1, value=0.1, continuous_update=False,description=r'$\mu_0$')
bT=widgets.IntSlider(min=1, max=10, step=1, value=2, continuous_update=False,description='T')
bN=widgets.IntSlider(min=1, max=20, step=1, value=8, continuous_update=False,description='N')
bcoeff=widgets.FloatSlider(min=0.0, max=2.0, step=0.1, value=0.5, continuous_update=False,description='c')
grid[0,0]=blzero
grid[0,1]=bmuzero
grid[0,2]=bcoeff
grid[1,0]=bT
grid[1,1]=bN
w=interactive_output(stablecompoper.nsestimdenszchi,{'lzero':blzero,'muzero':bmuzero,'T':bT,'N':bN,'coeff':bcoeff})
display(grid,w)
#grid
```
```python
reload(stablecompoper)
plt.rcParams['figure.figsize']=(14,6)
stablecompoper.stablerepro(coeff=0.5,image=True)
```
```python
```
```python
```
| 8bb3aff7410102be5cfee761030230115db1906f | 100,645 | ipynb | Jupyter Notebook | StableCompositionPeriodic.ipynb | philcarmona/conda | 80ea5e0e30aab2817ab7e2883aff49fa654bb79b | [
"BSD-3-Clause"
]
| null | null | null | StableCompositionPeriodic.ipynb | philcarmona/conda | 80ea5e0e30aab2817ab7e2883aff49fa654bb79b | [
"BSD-3-Clause"
]
| null | null | null | StableCompositionPeriodic.ipynb | philcarmona/conda | 80ea5e0e30aab2817ab7e2883aff49fa654bb79b | [
"BSD-3-Clause"
]
| null | null | null | 397.806324 | 64,952 | 0.933211 | true | 1,373 | Qwen/Qwen-72B | 1. YES
2. YES | 0.845942 | 0.798187 | 0.67522 | __label__eng_Latn | 0.628496 | 0.407094 |
# Kalman.jl
The [Kalman.jl](https://github.com/wkearn/Kalman.jl) package aims to provide a Julian interface for various kinds of Kalman filter.
```
using Kalman, Winston
```
The Kalman filter is a way to estimate the state of a system given some knowledge of the (possibly stochastic) dynamics of that system and some noisy data collected from the system. Basically you take your state estimate, run it through the system model to advance it in time (along with the covariance matrix of your estimate), then you update your state estimate and state covariance matrix with the information gained from the data you have collected.
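For reference, the standard linear recursion that this predict/update cycle implements (written in textbook notation with the same $A$, $H$, $Q$, $R$ symbols used below, not copied from the package source) is

\begin{align}
\hat{x}_{k|k-1} &= A\hat{x}_{k-1|k-1}, \qquad P_{k|k-1} = AP_{k-1|k-1}A^{T} + Q,\\
K_{k} &= P_{k|k-1}H^{T}\left(HP_{k|k-1}H^{T}+R\right)^{-1},\\
\hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_{k}\left(z_{k} - H\hat{x}_{k|k-1}\right), \qquad P_{k|k} = \left(I - K_{k}H\right)P_{k|k-1}.
\end{align}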
We'll try out the Kalman filter using an example from [Welch and Bishop, 2006](http://www.cs.unc.edu/~welch/media/pdf/kalman_intro.pdf).
Imagine we have a voltmeter taking measurements of a noisy but, on average, constant circuit. We first need a linear state-space model of this system that looks like $ x_{k+1} = Ax_k+w_k$. Because the voltage of the circuit doesn't change from time to time, we set $A=1$ and find our model $ x_{k+1} = x_k + w_k$ where $w_k$ is a vector of white Gaussian noise with variance $Q$.
The second half of the Kalman filter model is an observation model which relates the measurements you take from the system to the system state. It takes the form $ z_{k} = Hx_k + v_k$ where $v_k$ is white Gaussian noise with variance $R$. For our example, we just take direct measurements of the voltage, so $H=1$.
## Types
Now we look at how we can represent those models in Julia.
```
type LinearModel <: Model
a::Matrix
g::Matrix
q::Matrix
end
```
A `LinearModel` takes a matrix `a` representing the state-transition matrix, `g` which explains how the process noise affects the system state, and `q` which is the process noise covariance matrix.
We can set one of those up for our model now. Our two transition matrices are just 1 as above. We'll give our process some small (but non-zero!) noise. The circuit isn't going to be perfectly constant.
```
f = LinearModel([1]',[1]',[1e-10]')
```
LinearModel(1x1 Array{Int64,2}:
1,1x1 Array{Int64,2}:
1,1x1 Array{Float64,2}:
1.0e-10)
The observation model is represented with a `LinearObservationModel` type
```
type LinearObservationModel <: ObservationModel
h::Matrix
r::Matrix
end
```
with definitions as above. For our example h is just `[1]'`, and we'll give our voltmeter $0.1\ \mathrm{V}$ RMS noise.
```
z = LinearObservationModel([1]',[0.01]')
```
LinearObservationModel(1x1 Array{Int64,2}:
1,1x1 Array{Float64,2}:
0.01)
Finally, we need a way to represent the state of our system. To completely describe the state, we need both a state vector and a state covariance matrix
```
type State
x::Vector
p::Matrix
end
```
We have to seed our Kalman filter with an initial state and covariance estimate. It turns out that the covariance will converge regardless of the starting value. For our example as well, the state will converge, but there are systems with dynamics which might lead the state into a local minimum, so it's important to make a reasonable estimate for the state.
We'll start with $\hat{x}_0 = 0$ and $P_0 = 0.5$
```
x0 = State([0.0],[1.0]')
```
State{Float64}([0.0],1x1 Array{Float64,2}:
1.0)
Now we have everything to make our Kalman Filter. We just stick the two models and the state into a `BasicKalmanFilter` type
```
type BasicKalmanFilter <: LinearKalmanFilter
x::State
f::LinearModel
z::LinearObservationModel
adv::Bool
end
```
The `adv` field is a bad way of ensuring that we first advance our filter in time and then update it with some measurements. That may go away.
```
kf = BasicKalmanFilter(x0,f,z,false)
```
BasicKalmanFilter(State{Float64}([0.0],1x1 Array{Float64,2}:
1.0),LinearModel(1x1 Array{Int64,2}:
1,1x1 Array{Int64,2}:
1,1x1 Array{Float64,2}:
1.0e-10),LinearObservationModel(1x1 Array{Int64,2}:
1,1x1 Array{Float64,2}:
0.01),false)
Now we need some data. Data to be fed to the Kalman filter is wrapped in an `Observation` type (which should probably just be a typealias for a Vector). Let's assume that our true voltage $x = -0.37727\ \mathrm{V}$.
```
x = -0.37727
y = map(y->Observation([y]),x+0.1*randn(50))
```
50-element Array{Observation{Float64},1}:
Observation{Float64}([-0.556689])
Observation{Float64}([-0.28849])
Observation{Float64}([-0.429203])
Observation{Float64}([-0.27673])
Observation{Float64}([-0.461203])
Observation{Float64}([-0.348198])
Observation{Float64}([-0.587995])
Observation{Float64}([-0.333402])
Observation{Float64}([-0.413611])
Observation{Float64}([-0.502472])
Observation{Float64}([-0.43491])
Observation{Float64}([-0.465556])
Observation{Float64}([-0.275459])
⋮
Observation{Float64}([-0.4012])
Observation{Float64}([-0.494106])
Observation{Float64}([-0.516698])
Observation{Float64}([-0.389034])
Observation{Float64}([-0.283263])
Observation{Float64}([-0.198191])
Observation{Float64}([-0.318942])
Observation{Float64}([-0.229493])
Observation{Float64}([-0.260345])
Observation{Float64}([-0.485391])
Observation{Float64}([-0.297279])
Observation{Float64}([-0.203137])
Now we can run our filter by sequentially calling `predict(kf::BasicKalmanFilter)` and `update(kf::BasicKalmanFilter,y::Observation)` or by simply calling `predictupdate(kf,y)`. We'll also store the state and variance information and plot it up with Winston.
```
xs = x0.x[1]*ones(50)
ps = x0.p[1]*ones(50)
for i = 1:49
predictupdate!(kf,y[i])
xs[i+1] = kf.x.x[1]
ps[i+1] = kf.x.p[1]
end
```
```
plot([0,50],[x,x],"b")
oplot(map(y->y.y[1],y),"r.")
oplot(xs,"g")
xlim(0,50)
```
```
plot(ps[2:50],"g")
```
## Unscented Kalman filters.
Unscented Kalman filters let you apply the prediction-update method of the Kalman filter to nonlinear models. They do this by deterministically sampling a set of "sigma points" which are then run through the filter equations to reconstruct the state and covariance matrices after a nonlinear transformation.
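For context, the usual scaled sigma-point construction (the textbook version; `unscented.jl` may differ in details, and the `0.1`, `2.0`, `0.0` arguments passed below presumably play the roles of $\alpha$, $\beta$, $\kappa$) is

\begin{align}
\chi_{0} &= \hat{x}, \qquad \chi_{\pm i} = \hat{x} \pm \left(\sqrt{(n+\lambda)P}\right)_{i}, \quad i = 1,\dots,n, \qquad \lambda = \alpha^{2}(n+\kappa) - n,
\end{align}

where $\left(\sqrt{(n+\lambda)P}\right)_{i}$ is the $i$-th column of a matrix square root (e.g. a Cholesky factor), with mean weights $W_{0}^{(m)} = \lambda/(n+\lambda)$, $W_{i}^{(m)} = 1/\bigl(2(n+\lambda)\bigr)$ and covariance weight $W_{0}^{(c)} = W_{0}^{(m)} + 1 - \alpha^{2} + \beta$. The sigma points are pushed through the nonlinear $f$ and $h$, and the state mean and covariance are rebuilt from the weighted results.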
Unscented Kalman filters aren't pushed to Github yet, and there are some restrictions on the kind of model you can use for it (additive Gaussian noise for now; you might be able to get away with non-Gaussian noise, but multiplicative noise definitely won't work).
Still, let's see if we can't make them work. We'll take the example from Kandepu et al. (2008), a van der Pol Oscillator.
The equations of motion are
\begin{align}
\dot{x}_1 &= -x_2 \\
\dot{x}_2 &= -\mu (1-x_1^2)x_2 + x_1
\end{align}
where $\mu=0.2$.
```
include(Pkg.dir("Kalman","sandbox","unscented.jl"))
```
predictupdate! (generic function with 3 methods)
```
const μ = 0.2
x0 = State([0.,5],5.0*eye(2))
Q = 1e-3*eye(2)
R = [0.1 0;
0 1e-3]
function f2(x::Vector,dt::Float64)
x1 = zeros(x)
x1[1] = x[1] + dt*-x[2]
x1[2] = x[2] + dt*(-μ*(1-x[1]^2)*x[2]+x[1])
x1
end
h(x) = x
dt = 0.01
xs = fill(zeros(x0.x),4000)
xs[1] = [1.4,0]
x = [1.4,0]
for i = 2:4000
x = f2(x,dt)
xs[i] = x
end
ys = xs .+ map(y->sqrt(R)*randn(2),1:4000)
```
4000-element Array{Array{Float64,1},1}:
[1.09289,-0.0238201]
[1.46761,0.0453131]
[1.35845,0.0130256]
[1.10434,0.0656155]
[1.18755,0.0145539]
[1.32602,0.0972942]
[1.25023,0.136334]
[0.902791,0.112868]
[1.22303,0.0854962]
[1.11071,0.148996]
[1.36694,0.151589]
[1.71204,0.152962]
[1.61606,0.172073]
⋮
[0.0785064,0.015488]
[-0.771996,0.0356125]
[0.339114,0.0404495]
[0.108062,0.0174942]
[0.323293,0.00062621]
[-0.0526205,0.041678]
[-0.0474085,0.00729233]
[0.202621,0.0664006]
[-0.329399,0.0489221]
[-0.157211,0.0354753]
[0.131959,0.0343319]
[0.109813,0.0808878]
```
fm = AdditiveUnscentedModel(x->f2(x,dt),Q)
zm = AdditiveUnscentedObservationModel(h,R)
p0 = 5.0*eye(2)
kf = AdditiveUnscentedKalmanFilter(State([0,5.0],p0),fm,zm,0.1,2.0,0.0)
xs1 = 0.0*ones(4000)
xs2 = 5.0*ones(4000)
ps = 5.0*ones(4000)
for i = 2:4000
predictupdate!(kf,Observation([ys[i]]))
xs1[i] = kf.x.x[1]
xs2[i] = kf.x.x[2]
ps[i] = kf.x.p[1]
end
```
```
plot(xs1,"k")
```
| 85bf20179627c8dc3455f1e43fa346b6b25df13c | 81,090 | ipynb | Jupyter Notebook | examples/CAJUN_09-25-2014.ipynb | wkearn/Kalman.jl | a080e4e18b2f27af60fa3142be843d09541cbf7d | [
"MIT"
]
| 44 | 2015-02-11T14:27:24.000Z | 2021-07-30T20:51:38.000Z | examples/CAJUN_09-25-2014.ipynb | wkearn/Kalman.jl | a080e4e18b2f27af60fa3142be843d09541cbf7d | [
"MIT"
]
| 2 | 2015-10-02T11:47:41.000Z | 2018-02-02T09:45:46.000Z | examples/CAJUN_09-25-2014.ipynb | wkearn/Kalman.jl | a080e4e18b2f27af60fa3142be843d09541cbf7d | [
"MIT"
]
| 21 | 2015-03-18T21:46:30.000Z | 2021-07-30T20:51:42.000Z | 164.149798 | 31,781 | 0.878419 | true | 2,776 | Qwen/Qwen-72B | 1. YES
2. YES | 0.90599 | 0.833325 | 0.754984 | __label__eng_Latn | 0.896091 | 0.592412 |
### How does a pendulum move?
> Any system — mechanical, electrical, pneumatic, etc. — is said to be a harmonic oscillator if, when released away from its equilibrium position, it returns to it describing sinusoidal oscillations, or damped sinusoidal oscillations, around that stable position.
- https://es.wikipedia.org/wiki/Oscilador_armónico
References:
- http://matplotlib.org
- https://seaborn.pydata.org
- http://www.numpy.org
- http://ipywidgets.readthedocs.io/en/latest/index.html
**In essence, this is the study of oscillations.**
___
The simplest systems to study for oscillations are the `mass-spring` system and the `simple pendulum`.
\begin{align}
\ddot{x} + \omega_{0}^2 x &= 0, \quad \omega_{0} = \sqrt{\frac{k}{m}}\notag\\
\ddot{\theta} + \omega_{0}^{2}\, \theta &= 0, \quad\mbox{where}\quad \omega_{0}^2 = \frac{g}{l}
\end{align}
### The `mass-spring` system
The behaviour of this `mass-spring` system follows from Newton's second law. For this case, if the mass remains constant and we only consider the $x$ direction, then
\begin{equation}
F = m\ddot{x} = m \frac{d\dot{x}}{dt} = m \frac{d^2x}{dt^2}
\end{equation}
What is the force? **Hooke's law!**
\begin{equation}
F = -k x, \quad k > 0
\end{equation}
Note that the force opposes the displacement and its magnitude is proportional to it; $k$ is the elastic (restoring) constant of the spring.
The solution can be written as
\begin{equation}
x(t) = A \cos(\omega_{o} t) + B \sin(\omega_{o} t)
\end{equation}
and its first derivative (the velocity) is
\begin{equation}
\dot{x}(t) = \omega_{0}[- A \sin(\omega_{0} t) + B\cos(\omega_{0}t)]
\end{equation}
- **What do the plots of `x vs t` look like, for both position and velocity?**
_This instruction makes the figures appear inside this notebook environment._
```python
%matplotlib inline
```
_This is the library with all the functions needed for plotting._
```python
import matplotlib.pyplot as plt
```
```python
import matplotlib as mpl
label_size = 14
mpl.rcParams['xtick.labelsize'] = label_size
mpl.rcParams['ytick.labelsize'] = label_size
```
_And this is the library with all the necessary mathematical functions._
```python
import numpy as np
```
```python
t = np.linspace(0, 50, 100)
plt.figure(figsize = (7, 4))
plt.plot(t, .5*np.cos(.5 *t) + .1 * np.sin(.5 *t), '-', lw = 1, ms = 4,
label = '$x(t)$')
plt.plot(t, .5*.1*np.cos(.5*t) - .5*.5 * np.sin(.5*t), 'ro-', lw = 1, ms = 4,
label = r'$\dot{x(t)}$')
plt.xlabel('$t$', fontsize = 20)
plt.show()
```
```python
plt.figure(figsize = (7, 4))
plt.scatter(t, .5*np.cos(.5 *t) + .1 * np.sin(.5 *t), lw = 0, c = 'red',
label = '$x(t)$')
plt.plot(t, .5*np.cos(.5 *t) + .1 * np.sin(.5 *t), 'r-', lw = 1)
plt.scatter(t, .5*.1*np.cos(.5*t) - .5*.5 * np.sin(.5*t), lw = 0, c = 'b',
label = r'$\dot{x(t)}$')
plt.plot(t, .5*.1*np.cos(.5*t) - .5*.5 * np.sin(.5*t), 'b-', lw = 1)
plt.xlabel('$t$', fontsize = 20)
plt.legend(loc = 'best')
plt.show()
```
And if we consider a set of oscillation frequencies, then
```python
frecuencias = np.array([.1, .2 , .5, .6])
plt.figure(figsize = (7, 4))
for f in frecuencias:
plt.plot(t, .5 * np.cos(f * t) + .1 * np.sin(f * t), '*-')
plt.xlabel('$t$', fontsize = 16)
plt.ylabel('$x(t)$', fontsize = 16)
plt.title('Oscilaciones', fontsize = 16)
plt.show()
```
These colours are the `matplotlib` defaults; however, there is another library dedicated, among other things, to the presentation of plots.
```python
import seaborn as sns
sns.set(style='ticks', palette='Set2')
```
```python
frecuencias = np.array([.1, .2 , .5, .6])
plt.figure(figsize = (7, 4))
for f in frecuencias:
plt.plot(t, .5 * np.cos(f * t) + .1 * np.sin(f * t), 'o-',
label = '$\omega_0 = %s$'%f)
plt.xlabel('$t$', fontsize = 16)
plt.ylabel('$x(t)$', fontsize = 16)
plt.title('Oscilaciones', fontsize = 16)
plt.legend(loc='center left', bbox_to_anchor=(1.05, 0.5), prop={'size': 14})
plt.show()
```
If we want to manipulate things a bit more interactively, we use the following:
```python
from ipywidgets import *
```
```python
def masa_resorte(t = 0):
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(.5 * np.cos(.5 * t) + .1 * np.sin(.5 * t), [0], 'ko', ms = 10)
ax.set_xlim(xmin = -0.6, xmax = .6)
ax.axvline(x=0, color = 'r')
ax.axhline(y=0, color = 'grey', lw = 1)
fig.canvas.draw()
interact(masa_resorte, t = (0, 50,.01));
```
The option above will generally be slow, so it is advisable to use `interact_manual`.
```python
def masa_resorte(t = 0):
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(.5 * np.cos(.5 * t) + .1 * np.sin(.5 * t), [0], 'ko', ms = 10)
ax.set_xlim(xmin = -0.6, xmax = .6)
ax.axvline(x=0, color = 'r')
ax.axhline(y=0, color = 'grey', lw = 1)
fig.canvas.draw()
interact_manual(masa_resorte, t = (0, 50,.01));
```
### Simple pendulum
Now let us focus on the motion of a simple pendulum _(small oscillations)_. The differential equation to solve has the same form; the most obvious difference is how $\omega_{0}$ is defined. This means that
\begin{equation}
\theta(t) = A\cos(\omega_{0} t) + B\sin(\omega_{0}t)
\end{equation}
If we plot the equation above we will find behaviour very similar to the one already discussed. That is why we now look at the motion in the $xy$ plane. That is,
\begin{align}
x &= l \sin(\theta), \quad
y = l \cos(\theta)
\end{align}
```python
def theta_t(a, b, g, l, t):
omega_0 = np.sqrt(g/l)
return a * np.cos(omega_0 * t) + b * np.sin(omega_0 * t)
```
```python
def pendulo_simple(t = 0):
fig = plt.figure(figsize = (5,5))
ax = fig.add_subplot(1, 1, 1)
x = 2 * np.sin(theta_t(.4, .6, 9.8, 2, t))
y = - 2 * np.cos(theta_t(.4, .6, 9.8, 2, t))
ax.plot(x, y, 'ko', ms = 10)
ax.plot([0], [0], 'rD')
ax.plot([0, x ], [0, y], 'k-', lw = 1)
ax.set_xlim(xmin = -2.2, xmax = 2.2)
ax.set_ylim(ymin = -2.2, ymax = .2)
fig.canvas.draw()
interact_manual(pendulo_simple, t = (0, 10,.01));
```
### Initial conditions
What actually has to be solved is
\begin{equation}
\theta(t) = \theta(0) \cos(\omega_{0} t) + \frac{\dot{\theta}(0)}{\omega_{0}} \sin(\omega_{0} t)
\end{equation}
> **Activity.** Modify the previous program to incorporate the initial conditions.
```python
# Solution:
def theta_t():
return
```
```python
def pendulo_simple(t = 0):
fig = plt.figure(figsize = (5,5))
ax = fig.add_subplot(1, 1, 1)
x = 2 * np.sin(theta_t( , t))
y = - 2 * np.cos(theta_t(, t))
ax.plot(x, y, 'ko', ms = 10)
ax.plot([0], [0], 'rD')
ax.plot([0, x ], [0, y], 'k-', lw = 1)
ax.set_xlim(xmin = -2.2, xmax = 2.2)
ax.set_ylim(ymin = -2.2, ymax = .2)
fig.canvas.draw()
interact_manual(pendulo_simple, t = (0, 10,.01));
```
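A possible completion of the activity is sketched below (one way to do it, not the official solution). It redefines `theta_t` with explicit initial-condition arguments; the initial angle $0.4$ rad and initial angular velocity $0.6$ rad/s are arbitrary illustrative values, the rod length is $l = 2$ m as in the earlier cell, and `np`, `plt` and `interact_manual` come from the imports above.
```python
# Sketch of a solution: theta(t) = theta(0) cos(w0 t) + thetadot(0)/w0 sin(w0 t)
def theta_t(theta_0, theta_dot_0, g, l, t):
    omega_0 = np.sqrt(g / l)
    return theta_0 * np.cos(omega_0 * t) + (theta_dot_0 / omega_0) * np.sin(omega_0 * t)

def pendulo_simple(t = 0):
    fig = plt.figure(figsize = (5, 5))
    ax = fig.add_subplot(1, 1, 1)
    theta = theta_t(0.4, 0.6, 9.8, 2, t)    # illustrative initial conditions
    x = 2 * np.sin(theta)                   # rod length l = 2 m, as before
    y = -2 * np.cos(theta)
    ax.plot(x, y, 'ko', ms = 10)
    ax.plot([0], [0], 'rD')
    ax.plot([0, x], [0, y], 'k-', lw = 1)
    ax.set_xlim(xmin = -2.2, xmax = 2.2)
    ax.set_ylim(ymin = -2.2, ymax = .2)
    fig.canvas.draw()

interact_manual(pendulo_simple, t = (0, 10, .01));
```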
### Phase space $(q, p)$
Recall that the linear momentum is defined as
$
p = m v,
$
where $m$ is the mass and $v$ the velocity. The position and velocity for the `mass-spring` system are written as:
\begin{align}
x(t) &= x(0) \cos(\omega_{o} t) + \frac{\dot{x}(0)}{\omega_{0}} \sin(\omega_{o} t)\\
\dot{x}(t) &= -\omega_{0}x(0) \sin(\omega_{0} t) + \dot{x}(0)\cos(\omega_{0}t)]
\end{align}
```python
k = 3 # elastic constant [N/m]
m = 1 # mass [kg]
omega_0 = np.sqrt(k/m)
x_0 = .5
x_0_dot = .1
```
```python
t = np.linspace(0, 50, 300)
```
```python
x_t = x_0 *np.cos(omega_0 *t) + (x_0_dot/omega_0) * np.sin(omega_0 *t)
x_t_dot = -omega_0 * x_0 * np.sin(omega_0 * t) + x_0_dot * np.cos(omega_0 * t)
```
```python
plt.figure(figsize = (7, 4))
plt.plot(t, x_t, label = '$x(t)$', lw = 1)
plt.plot(t, x_t_dot, label = '$\dot{x}(t)$', lw = 1)
plt.legend(loc='center left', bbox_to_anchor=(1.01, 0.5), prop={'size': 14})
plt.xlabel('$t$', fontsize = 18)
plt.show()
```
```python
plt.figure(figsize = (5, 5))
plt.plot(x_t, x_t_dot/omega_0, 'ro', ms = 2)
plt.xlabel('$x(t)$', fontsize = 18)
plt.ylabel('$\dot{x}(t)/\omega_0$', fontsize = 18)
plt.show()
```
```python
plt.figure(figsize = (5, 5))
plt.scatter(x_t, x_t_dot/omega_0, cmap = 'viridis', c = x_t_dot, s = 8, lw = 0)
plt.xlabel('$x(t)$', fontsize = 18)
plt.ylabel('$\dot{x}(t)/\omega_0$', fontsize = 18)
plt.show()
```
#### Multiple initial conditions
```python
k = 3 # elastic constant [N/m]
m = 1 # mass [kg]
omega_0 = np.sqrt(k/m)
```
```python
t = np.linspace(0, 50, 50)
```
```python
x_0s = np.array([.7, .5, .25, .1])
x_0s_dot = np.array([.2, .1, .05, .01])
cmaps = np.array(['viridis', 'inferno', 'magma', 'plasma'])
```
```python
plt.figure(figsize = (6, 6))
for indx, x_0 in enumerate(x_0s):
x_t = x_0 *np.cos(omega_0 *t) + (x_0s_dot[indx]/omega_0) * np.sin(omega_0 *t)
x_t_dot = -omega_0 * x_0 * np.sin(omega_0 * t) + x_0s_dot[indx] * np.cos(omega_0 * t)
plt.scatter(x_t, x_t_dot/omega_0, cmap = cmaps[indx],
c = x_t_dot, s = 10,
lw = 0)
plt.xlabel('$x(t)$', fontsize = 18)
plt.ylabel('$\dot{x}(t)/\omega_0$', fontsize = 18)
#plt.legend(loc='center left', bbox_to_anchor=(1.05, 0.5))
```
Trajectories of the simple harmonic oscillator in the phase space $(x,\, \dot{x}\,/\omega_0)$ for different values of the energy.
<footer id="attribution" style="float:right; color:#808080; background:#fff;">
Created with Jupyter by Lázaro Alonso.
</footer>
```python
```
| 8f3d4eaac94af6fa72f1560a62e37727524fea9c | 396,093 | ipynb | Jupyter Notebook | SimulacionO2017/Modulo 1/Clase4_OsciladorArmonico.ipynb | jcmartinez67/SimulacionMatematica_O2017 | 085394f939c37717acbbe19a90cad661b279fbee | [
"MIT"
]
| null | null | null | SimulacionO2017/Modulo 1/Clase4_OsciladorArmonico.ipynb | jcmartinez67/SimulacionMatematica_O2017 | 085394f939c37717acbbe19a90cad661b279fbee | [
"MIT"
]
| null | null | null | SimulacionO2017/Modulo 1/Clase4_OsciladorArmonico.ipynb | jcmartinez67/SimulacionMatematica_O2017 | 085394f939c37717acbbe19a90cad661b279fbee | [
"MIT"
]
| null | null | null | 476.073317 | 76,262 | 0.931953 | true | 3,464 | Qwen/Qwen-72B | 1. YES
2. YES | 0.793106 | 0.779993 | 0.618617 | __label__spa_Latn | 0.456448 | 0.275585 |
<a href="https://colab.research.google.com/github/AnnalisaGibbs/AB-Demo/blob/master/AG_sp3_mod2_assign_Copy_of_LS_DS_132_Intermediate_Linear_Algebra_Assignment.ipynb" target="_parent"></a>
# Statistics
```
import math
import numpy as np
import pandas as pd
```
## 1.1 Sales for the past week were the following amounts: [3505, 2400, 3027, 2798, 3700, 3250, 2689]. Without using library functions, what is the mean, variance, and standard deviation of sales from last week? (for extra bonus points, write your own function that can calculate these values for any sized list)
```
```
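One possible hand-rolled version (a sketch, not the official answer key; it uses the population, divide-by-$n$ convention):
```
sales = [3505, 2400, 3027, 2798, 3700, 3250, 2689]

def describe(values):
    n = len(values)
    mean = sum(values) / n
    variance = sum((x - mean) ** 2 for x in values) / n   # population (ddof=0) variance
    return mean, variance, variance ** 0.5                # std is the square root of the variance

print(describe(sales))   # roughly (3052.71, 183761.06, 428.67)
```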
## 1.2 Find the covariance between last week's sales numbers and the number of customers that entered the store last week: [127, 80, 105, 92, 120, 115, 93] (you may use library functions for calculating the covariance since we didn't specifically talk about its formula)
```
```
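A minimal way to get the covariance with numpy (a sketch; note that `np.cov` defaults to the sample, ddof=1 convention):
```
import numpy as np

sales = [3505, 2400, 3027, 2798, 3700, 3250, 2689]
customers = [127, 80, 105, 92, 120, 115, 93]
print(np.cov(sales, customers)[0, 1])   # off-diagonal entry of the 2x2 covariance matrix
```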
## 1.3 Find the standard deviation of customers who entered the store last week. Then, use the standard deviations of both sales and customers to standardize the covariance to find the correlation coefficient that summarizes the relationship between sales and customers. (You may use library functions to check your work.)
```
```
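Sketch of the standardisation step, reusing `sales` and `customers` from the previous cell (the conventions must match: sample covariance with sample standard deviations):
```
import numpy as np

corr = np.cov(sales, customers)[0, 1] / (np.std(sales, ddof=1) * np.std(customers, ddof=1))
print(corr, np.corrcoef(sales, customers)[0, 1])   # the two numbers should agree
```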
## 1.4 Use pandas to import a cleaned version of the titanic dataset from the following link: [Titanic Dataset](https://raw.githubusercontent.com/Geoyi/Cleaning-Titanic-Data/master/titanic_clean.csv)
## Calculate the variance-covariance matrix and correlation matrix for the titanic dataset's numeric columns. (you can encode some of the categorical variables and include them as a stretch goal if you finish early)
```
```
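One way to do it with pandas (a sketch; the URL is the one given in the prompt):
```
import pandas as pd

url = 'https://raw.githubusercontent.com/Geoyi/Cleaning-Titanic-Data/master/titanic_clean.csv'
titanic = pd.read_csv(url)
numeric = titanic.select_dtypes(include='number')
print(numeric.cov())    # variance-covariance matrix
print(numeric.corr())   # correlation matrix
```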
# Orthogonality
## 2.1 Plot two vectors that are orthogonal to each other. What is a synonym for orthogonal?
```
```
## 2.2 Are the following vectors orthogonal? Why or why not?
\begin{align}
a = \begin{bmatrix} -5 \\ 3 \\ 7 \end{bmatrix}
\qquad
b = \begin{bmatrix} 6 \\ -8 \\ 2 \end{bmatrix}
\end{align}
```
```
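A quick numerical check (two vectors are orthogonal exactly when their dot product is zero):
```
import numpy as np

a = np.array([-5, 3, 7])
b = np.array([6, -8, 2])
print(np.dot(a, b))   # -30 - 24 + 14 = -40, not zero, so a and b are not orthogonal
```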
## 2.3 Compute the following values: What do these quantities have in common?
## What is $||c||^2$?
## What is $c \cdot c$?
## What is $c^{T}c$?
\begin{align}
c = \begin{bmatrix} 2 & -15 & 6 & 20 \end{bmatrix}
\end{align}
```
```
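The three quantities are the same number, as a short check confirms:
```
import numpy as np

c = np.array([2, -15, 6, 20])
print(np.linalg.norm(c) ** 2)   # squared norm, ~665 up to floating point
print(np.dot(c, c))             # 665
print(c.T @ c)                  # 665 -- all three are the sum of squared components
```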
# Unit Vectors
## 3.1 Using Latex, write the following vectors as a linear combination of scalars and unit vectors:
\begin{align}
d = \begin{bmatrix} 7 \\ 12 \end{bmatrix}
\qquad
e = \begin{bmatrix} 2 \\ 11 \\ -8 \end{bmatrix}
\end{align}
Your text here
## 3.2 Turn vector $f$ into a unit vector:
\begin{align}
f = \begin{bmatrix} 4 & 12 & 11 & 9 & 2 \end{bmatrix}
\end{align}
```
```
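A sketch: divide by the norm and check that the result has length 1.
```
import numpy as np

f = np.array([4, 12, 11, 9, 2])
f_unit = f / np.linalg.norm(f)
print(f_unit, np.linalg.norm(f_unit))   # the second value should be 1.0
```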
# Linear Independence / Dependence
## 4.1 Plot two vectors that are linearly dependent and two vectors that are linearly independent (bonus points if done in $\mathbb{R}^3$).
# Span
## 5.1 What is the span of the following vectors?
\begin{align}
g = \begin{bmatrix} 1 & 2 \end{bmatrix}
\qquad
h = \begin{bmatrix} 4 & 8 \end{bmatrix}
\end{align}
```
```
## 5.2 What is the span of $\{l, m, n\}$?
\begin{align}
l = \begin{bmatrix} 1 & 2 & 3 \end{bmatrix}
\qquad
m = \begin{bmatrix} -1 & 0 & 7 \end{bmatrix}
\qquad
n = \begin{bmatrix} 4 & 8 & 2\end{bmatrix}
\end{align}
```
```
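One way to reason about both span questions (5.1 and 5.2) numerically is through the rank of the stacked vectors (a sketch; the row order does not matter):
```
import numpy as np

g_h = np.array([[1, 2], [4, 8]])
l_m_n = np.array([[1, 2, 3], [-1, 0, 7], [4, 8, 2]])
print(np.linalg.matrix_rank(g_h))    # 1 -> g and h only span a line in R^2
print(np.linalg.matrix_rank(l_m_n))  # 3 -> l, m, n span all of R^3
```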
# Basis
## 6.1 Graph two vectors that form a basis for $\mathbb{R}^2$
```
```
## 6.2 What does it mean to form a basis?
# Rank
## 7.1 What is the Rank of P?
\begin{align}
P = \begin{bmatrix}
1 & 2 & 3 \\
-1 & 0 & 7 \\
4 & 8 & 2
\end{bmatrix}
\end{align}
## 7.2 What does the rank of a matrix tell us?
# Linear Projections
## 8.1 Line $L$ is formed by all of the vectors that can be created by scaling vector $v$
\begin{align}
v = \begin{bmatrix} 1 & 3 \end{bmatrix}
\end{align}
\begin{align}
w = \begin{bmatrix} -1 & 2 \end{bmatrix}
\end{align}
## find $proj_{L}(w)$
## graph your projected vector to check your work (make sure your axis are square/even)
```
```
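A minimal sketch of the projection formula $proj_{L}(w) = \frac{w\cdot v}{v\cdot v}\,v$ (the plotting part is left out here):
```
import numpy as np

v = np.array([1, 3])
w = np.array([-1, 2])
proj = (np.dot(w, v) / np.dot(v, v)) * v
print(proj)   # (5/10) * [1, 3] = [0.5, 1.5]
```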
# Stretch Goal
## For vectors that begin at the origin, the coordinates of where the vector ends can be interpreted as regular data points. (See 3Blue1Brown videos about Spans, Basis, etc.)
## Write a function that can calculate the linear projection of each point (x,y) (vector) onto the line y=x. run the function and plot the original points in blue and the new projected points on the line y=x in red.
## For extra points plot the orthogonal vectors as a dashed line from the original blue points to the projected red points.
```
import pandas as pd
import matplotlib.pyplot as plt
# Creating a dataframe for you to work with -Feel free to not use the dataframe if you don't want to.
x_values = [1, 4, 7, 3, 9, 4, 5 ]
y_values = [4, 2, 5, 0, 8, 2, 8]
data = {"x": x_values, "y": y_values}
df = pd.DataFrame(data)
df.head()
plt.scatter(df.x, df.y)
plt.show()
```
```
```
| b8c2b0a52751cb88e31a406362d80650cc62b2d1 | 23,746 | ipynb | Jupyter Notebook | AG_sp3_mod2_assign_Copy_of_LS_DS_132_Intermediate_Linear_Algebra_Assignment.ipynb | AnnalisaGibbs/AB-Demo | 8a6e362c324c36a69b4eb6330b0d7c8e82824e77 | [
"MIT"
]
| null | null | null | AG_sp3_mod2_assign_Copy_of_LS_DS_132_Intermediate_Linear_Algebra_Assignment.ipynb | AnnalisaGibbs/AB-Demo | 8a6e362c324c36a69b4eb6330b0d7c8e82824e77 | [
"MIT"
]
| null | null | null | AG_sp3_mod2_assign_Copy_of_LS_DS_132_Intermediate_Linear_Algebra_Assignment.ipynb | AnnalisaGibbs/AB-Demo | 8a6e362c324c36a69b4eb6330b0d7c8e82824e77 | [
"MIT"
]
| null | null | null | 39.184818 | 8,696 | 0.63358 | true | 1,460 | Qwen/Qwen-72B | 1. YES
2. YES | 0.872347 | 0.932453 | 0.813423 | __label__eng_Latn | 0.98854 | 0.728187 |
# MEC6514 - Added Mass
This document supports the lecture on added mass. It walks through the derivation of the added mass of a circular cylinder using Python, and gives the formulas for other, more complex shapes, provided by Naudascher and Rockwell (1994).
## Added mass of a circular cylinder
To determine the added mass of a circular cylinder, we apply potential flow theory, which lets us simplify the Navier-Stokes equations. This leads to the following assumptions:
- Ideal fluid ($R_e \rightarrow \infty$ and negligible viscosity)
- Irrotational flow (zero vorticity)
- Incompressible fluid (an extra assumption, not strictly required)
Before describing the cylinder, we import the required Python libraries:
```python
# library and tools for symbolic computation
from sympy import symbols,dsolve,diff,Function,lambdify
import sympy as sp
# library for numerical computation
import numpy as np
# library for figures and plots
import matplotlib.pyplot as plt
```
We consider a circular cylinder of radius $a$ moving with velocity $U$ and acceleration $\dot{U}$ in a fluid at rest filling an infinite domain (with density $\rho$).
```python
t = symbols('t') # defines t as a symbolic (unknown) variable
# the following parameters can be changed as you wish
a = 0.02 # cylinder radius (in m)
U = t+2 # time-dependent velocity, so that the acceleration is non-zero
rho = 1000 # fluid density (in kg/m^3)
```
We work in cylindrical (polar) coordinates. To obtain the added mass of the cylinder, we first need to determine the velocity field of the surrounding fluid. To do so, we define the boundary conditions:
- No-penetration condition on the cylinder wall:
at $r=a$, the radial velocity is $u_r=U\cos{\theta}$
- Far from the cylinder, the cylinder has no effect on the flow:
at $r=\infty$, the radial velocity $u_r=0$ and the azimuthal velocity $u_{\theta}=0$
We solve the Laplace equation in (2D) cylindrical coordinates, whose solution is the fluid displacement potential $\phi(r,\theta)$:
$\frac{1}{r} \frac{\partial}{\partial r}\left(r \frac{\partial\phi}{\partial r}\right) + \frac{1}{r^2} \frac{\partial^2\phi}{\partial\theta^2}= 0$
To solve this equation, we apply the method of separation of variables, $\phi(r,\theta) = R(r)T(\theta)$, which gives:
$\frac{r}{R(r)}\frac{\partial}{\partial r}\left(r \frac{\partial R}{\partial r}\right)=-\frac{1}{T(\theta)} \frac{\partial^2 T}{\partial\theta^2}$
Since $R(r)$ and $T(\theta)$ are independent, each side of the expression equals a constant. Setting this constant to $1$ yields two decoupled equations, which we can solve with Python:
$\frac{\partial^2 T}{\partial\theta^2}+T(\theta)=0$ et $r^2 \frac{\partial^2 R}{\partial r^2} +r \frac{\partial R}{\partial r} - R(r) = 0$
```python
theta = symbols('theta')
T = Function('T')(theta) # introduce T, the symbolic function with argument theta
# dsolve solves ODEs and diff is the symbolic differentiation operator
T_theta = dsolve(diff(diff(T,theta),theta)+T,T).args[1]
print('T(theta) = ',T_theta)
```
T(theta) = C1*sin(theta) + C2*cos(theta)
We proceed in the same way for $R(r)$:
```python
r = symbols('r')
R = Function('R')(r)
R_r = dsolve(r**2*diff(diff(R,r),r)+r*diff(R,r)-R,R).args[1]
print(' R(r) = ',R_r)
```
R(r) = C1/r + C2*r
We then apply the boundary conditions to determine the unknown constants:
- $r = \infty$, $u_r=0$ $\rightarrow$ $C_2^r=0$
- $r = \infty$, $u_{\theta}=0$ $\rightarrow$ $C_2^{\theta}=C_1^{\theta}\tan{\theta}$
- $r=a$, $u_r=U\cos{\theta}$ $\rightarrow$ $2 C_1^r C_1^{\theta} = -a^2 U$
That is:
$\phi=-\frac{a^2}{r} U\cos{\theta}$
And:
$u_r = \frac{\partial\phi}{\partial r}=\frac{a^2}{r^2} U\cos{\theta}$, $u_{\theta} = \frac{1}{r}\frac{\partial\phi}{\partial\theta}=\frac{a^2}{r^2} U\sin{\theta}$
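As an optional check (not part of the original derivation), we can verify symbolically that this potential indeed satisfies the Laplace equation; fresh symbol names are used so that the variables defined above are not overwritten:
```python
# optional check: phi = -a^2 U cos(theta) / r satisfies the Laplace equation
r_, theta_, a_, U_ = sp.symbols('r_ theta_ a_ U_', positive=True)
phi_ = -a_**2 / r_ * U_ * sp.cos(theta_)
laplacian = sp.diff(r_ * sp.diff(phi_, r_), r_) / r_ + sp.diff(phi_, theta_, 2) / r_**2
print(sp.simplify(laplacian))  # prints 0
```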
We can now plot the streamlines around the cylinder. Since the cylinder is accelerating, we need to pick an instant at which to draw the plot. We arbitrarily chose the initial instant $t=0$, but you may change it.
```python
U_0 = lambdify(t,U)(0) # change the chosen instant here
phi = -a**2/r*U_0*sp.cos(theta)
# radial velocity
ur = lambdify((r,theta),diff(phi,r)) # turn a symbolic expression into a numerical function
# tangential (azimuthal) velocity
utheta = lambdify((r,theta),1/r*diff(phi,theta))
''' Compute the velocity field in Cartesian coordinates for plotting '''
Ux = np.zeros((int(5*a*1000)+1,int(5*a*1000)+1))
Vy = np.zeros((int(5*a*1000)+1,int(5*a*1000)+1))
# x1 -> y and x2 -> x
x1,x2 = np.arange(-2.5*a,2.5*a+0.001,0.001), np.arange(-2.5*a,2.5*a+0.001,0.001)
i = 0
while i < int(5*a*1000)+1:
j = 0
while j < int(5*a*1000)+1:
        # compute the cylindrical coordinates
r = np.sqrt((x1[i])**2+(x2[j])**2)
theta = np.arcsin(x1[i]/np.sqrt((x1[i])**2+(x2[j])**2))
if r>a:
if x1[i] > 0 and x2[j] > 0:
                # upper-right quadrant
Ux[i,j] = ur(r,theta)*np.cos(theta)-utheta(r,theta)*np.sin(theta)
Vy[i,j] = ur(r,theta)*np.sin(theta)+utheta(r,theta)*np.cos(theta)
elif x1[i] > 0 and x2[j] < 0:
                # upper-left quadrant
Ux[i,j] = -ur(r,np.pi-theta)*np.cos(theta)-utheta(r,np.pi-theta)*np.sin(theta)
Vy[i,j] = ur(r,np.pi-theta)*np.sin(theta)-utheta(r,np.pi-theta)*np.cos(theta)
elif x1[i] < 0 and x2[j] < 0:
                # lower-left quadrant
Ux[i,j] = -ur(r,np.pi-theta)*np.cos(theta)-utheta(r,np.pi-theta)*np.sin(theta)
Vy[i,j] = ur(r,np.pi-theta)*np.sin(theta)-utheta(r,np.pi-theta)*np.cos(theta)
elif x1[i] < 0 and x2[j] > 0:
                # lower-right quadrant
Ux[i,j] = ur(r,theta)*np.cos(theta)-utheta(r,theta)*np.sin(theta)
Vy[i,j] = ur(r,theta)*np.sin(theta)+utheta(r,theta)*np.cos(theta)
j += 1
i += 1
''' Plot the streamlines '''
X1,X2 = np.meshgrid(x1,x2)
s_p = [(a*np.cos(i),a*np.sin(i)) for i in np.linspace(0,2*np.pi,36+1)] # starting points of the streamlines
cylinder = plt.Circle((0, 0),a,ec='k',fc='w',lw=3,zorder=10) # define the cylinder patch
fig = plt.figure(figsize=(10,10))
ax = fig.gca()
ax.streamplot(X1,X2,Ux,Vy,color='k',density=1.5,start_points=s_p)
ax.add_patch(cylinder)
ax.set_xlim(-2.5*a,2.5*a)
ax.set_ylim(-2.5*a,2.5*a)
ax.set_xlabel('$x (m)$')
ax.set_ylabel('$y (m)$')
ax.set_title('Velocity - Streamlines')
plt.show()
```
With the fluid displacement potential $\phi$ determined, we evaluate the pressure distribution $p$ using the Euler equation:
$\rho\frac{\partial\phi}{\partial t} + \rho\frac{1}{2}\left(\overrightarrow{\nabla}\phi\right)^2+p=0$
```python
# we redefine these variables because they were used earlier
a,r,theta,rho,U,U_point = symbols('a r theta rho U U_point')
phi = -a**2/r*U*sp.cos(theta)
dphi_dt = -a**2/r*U_point*sp.cos(theta)
grad_phi = np.array([[diff(phi,r)],[1/r*diff(phi,theta)]]) # define the gradient of phi
p = sp.simplify(-(rho*dphi_dt+rho*1/2*np.dot(grad_phi.T,grad_phi)))
p = p[0,0]
print('p = ',p)
```
p = -U**2*a**4*rho/(2*r**4) + U_point*a**2*rho*cos(theta)/r
We compute the force exerted by the fluid on the cylinder by integrating the pressure over the cylinder surface:
$F_x = \int\limits^{2\pi}_0 pa\cos{\theta}d\theta$, at $r=a$
```python
Fx = sp.integrate(p*a*sp.cos(theta),(theta,0,2*sp.pi))
Fx = sp.nsimplify(Fx,tolerance=1e-10).subs(r,a)
print('Fx = ',Fx)
```
Fx = pi*U_point*a**2*rho
This lets us determine the added mass of the cylinder per unit length, $m_f$, from $F_x=m_f\dot{U}$.
```python
mf = Fx/U_point
print('Expression of mf : ',mf)
mf = mf.subs([(sp.pi,np.pi),(a,0.02),(rho,1000)]) # substitute the numerical values; you can modify them
print('mf = ',mf)
```
    Expression of mf :  pi*a**2*rho
mf = 1.25663706143592
$A_{xx}$ (respectively $A_{yy}$) is the contribution of the cylinder's motion in the $x$ (respectively $y$) direction to the added mass in the $x$ (respectively $y$) direction. The two are equal for the circular cylinder, and this is the value we have just computed.
For the circular cylinder, the contribution of the motion in the $y$ (respectively $x$) direction to the
added mass in the $x$ (respectively $y$) direction, $A_{xy}$, is zero.
We can also compute these terms for other shapes, whose expressions were evaluated by Naudascher and Rockwell (1994).
The following shapes are centred at $(0,0)$, which is why each requested parameter is half of the usually quoted length. Moreover, for some shapes the allowed parameter ratios are limited, because we rely on the (sometimes experimental) results provided by Naudascher and Rockwell (1994). You are free to modify the parameters to compute the corresponding added mass.
## Added mass of an ellipse
```python
a = 0.02 # semi-major axis (in m)
b = 0.01 # semi-minor axis (in m)
rho = 1000 # fluid density (in kg/m3)
print('Axx = ',rho*np.pi*b**2)
print('Ayy = ',rho*np.pi*a**2)
print('Axy = ',rho*np.pi/8*(a**2-b**2)**2)
```
Axx = 0.3141592653589793
Ayy = 1.2566370614359172
Axy = 3.534291735288518e-05
## Added mass of a rectangle
```python
a = 0.02 # half-width (in m)
a_b = 0.2 # ratio of the half-width to the half-height - choose among 0.1,0.2,0.5,1.0,2.0,5.0
rho = 1000 # fluid density (in kg/m3)
b = a/a_b
if a_b == 0.1:
alpha = 2.23
alpha2 = 0.147
elif a_b == 0.2:
alpha = 1.98
alpha2 = 0.15
elif a_b == 0.5:
alpha = 1.7
alpha2 = 0.15
elif a_b == 1:
alpha = 1.51
alpha2 = 0.234
elif a_b == 2:
alpha = 1.36
alpha1 = 0.15
alpha2 = 0
elif a_b == 5:
alpha = 1.21
alpha1 = 0.15
alpha2 = 0
print('Ayy = ',alpha*rho*np.pi*a**2)
if alpha2 != 0:
print('Axy = ',alpha2*rho*np.pi*a**4)
else:
print('Axy = ',alpha1*rho*np.pi*b**4)
```
Ayy = 2.4881413816431164
Axy = 7.539822368615503e-05
## Added mass of a plate
The plate thickness is assumed to be negligible compared with its half-width.
```python
a = 0.02 # half-width of the plate (in m)
rho = 1000 # fluid density (in kg/m3)
print('Axx = 0')
print('Ayy = ',rho*np.pi*a**2)
print('Axy = ',rho*np.pi/8*a**4)
```
Axx = 0
Ayy = 1.2566370614359172
Axy = 6.283185307179586e-05
## Added mass of a rhombus (diamond section)
```python
a = 0.02 # half-width (in m)
a_b = 0.2 # ratio of the half-width to the half-height - choose among 0.2,0.5,1.0,2.0
rho = 1000 # fluid density (in kg/m3)
b = a/a_b
if a_b == 0.2:
alpha = 0.61
elif a_b == 0.5:
alpha = 0.67
elif a_b == 1:
alpha = 0.76
elif a_b == 2.0:
alpha = 0.85
print('Ayy = ',alpha*rho*np.pi*a**2)
```
Ayy = 0.7665486074759096
## Added mass of an I-beam
```python
a = 0.02 # half-width of the beam (in m)
rho = 1000 # fluid density (in kg/m3)
# taken from the experiments of Naudascher and Rockwell
d = a/2.6
b = d/3.6
print('Ayy = ',2.11*rho*np.pi*a**2)
```
Ayy = 2.6515041996297852
## Added mass of a cross-shaped beam
```python
a = 0.02 # length of one arm (in m)
rho = 1000 # fluid density (in kg/m3)
print('Axx = ', rho*np.pi*a**2)
print('Axy = ',rho*2/np.pi*a**2)
```
Axx = 1.2566370614359172
Axy = 0.25464790894703254
| 206dd96e2bc29d9f76df7ac0e634267a8cec35ca | 111,247 | ipynb | Jupyter Notebook | Chapitre_2/.ipynb_checkpoints/Masse_ajoutee-checkpoint.ipynb | lm2-poly/FSI | d92517c8301d88a2b444086d8ce27f86269262a4 | [
"MIT"
]
| null | null | null | Chapitre_2/.ipynb_checkpoints/Masse_ajoutee-checkpoint.ipynb | lm2-poly/FSI | d92517c8301d88a2b444086d8ce27f86269262a4 | [
"MIT"
]
| null | null | null | Chapitre_2/.ipynb_checkpoints/Masse_ajoutee-checkpoint.ipynb | lm2-poly/FSI | d92517c8301d88a2b444086d8ce27f86269262a4 | [
"MIT"
]
| null | null | null | 173.281931 | 91,724 | 0.892348 | true | 4,201 | Qwen/Qwen-72B | 1. YES
2. YES | 0.932453 | 0.763484 | 0.711913 | __label__fra_Latn | 0.855374 | 0.492344 |
# DPM 2 - Variational Inference for Deep Continuous LVMs
**Filled notebook:**
[](https://github.com/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/DL2/deep_probabilistic_models_II/tutorial_2b.ipynb)
[](https://colab.research.google.com/github/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/DL2/deep_probabilistic_models_II/tutorial_2b.ipynb)
**Authors**: Wilker Aziz
## 0. Intended Learning Outcomes
After this tutorial the student should be able to
* parameterise a latent variable model with continuous latent variables
* estimate parameters using neural variational inference
**Remark** This tutorial builds upon the previous one and there is a lot of shared/unchanged code. The only changes are:
* additional prior nets (for continuous variables)
* additional CPD nets (for continuous variables)
* we changed DRL and forward in the NVIL class such that it can support LVMs trained via SFE or via reparameterisation
```python
import torch
import numpy as np
import random
import matplotlib.pyplot as plt
import torch.nn as nn
import torch.nn.functional as F
import torch.distributions as td
from functools import partial
from itertools import chain
from collections import defaultdict, OrderedDict
from tqdm.auto import tqdm
```
```python
def seed_all(seed=42):
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
seed_all()
```
## 1. Data
We are going to use toy image datasets for this notebook. These are fixed-dimensional observations for which encoders and decoders are relatively easy to design. This way we can focus on the aspects that are probabilistic in nature.
```python
from torchvision.datasets import FashionMNIST
from torchvision import transforms
from torchvision.transforms import ToTensor
from torch.utils.data import random_split, Dataset
from torch.utils.data.dataloader import DataLoader
from torchvision.utils import make_grid
import torch.optim as opt
%matplotlib inline
```
A helper to binarize datasets:
```python
class Binarizer(Dataset):
def __init__(self, ds, threshold=0.5):
self._ds = ds
self._threshold = threshold
def __len__(self):
"""Size of the corpus in number of sequence pairs"""
return len(self._ds)
def __getitem__(self, idx):
"""
Return corpus_x[idx] and corpus_y[idx] converted to codes
the latter has the EOS code in the end
"""
x, y = self._ds[idx]
return (x >= self._threshold).float(), y
```
FashionMNIST
```python
dataset = FashionMNIST(root='data/', train=True, download=True,
transform=transforms.Compose([transforms.Resize(64), transforms.ToTensor()]))
```
```python
img_shape = dataset[0][0].shape
print("Shape of an image:", img_shape)
```
Shape of an image: torch.Size([1, 64, 64])
Let's make a dev set for ourselves:
```python
val_size = 1000
train_size = len(dataset) - val_size
train_ds, val_ds = random_split(dataset, [train_size, val_size])
len(train_ds), len(val_ds)
```
(59000, 1000)
We suggest that you binarize the data in a first pass through this notebook, but, as you will see, we can also model the continuous pixel intensities.
```python
bin_data = True
```
```python
if bin_data:
train_ds = Binarizer(train_ds)
val_ds = Binarizer(val_ds)
```
```python
batch_size = 64
train_loader = DataLoader(train_ds, batch_size, shuffle=True, num_workers=2, pin_memory=True)
val_loader = DataLoader(val_ds, batch_size, num_workers=2, pin_memory=True)
```
Let's visualise a few samples
```python
for images, y in train_loader:
print('images.shape:', images.shape)
plt.figure(figsize=(16,8))
plt.axis('off')
plt.imshow(make_grid(images, nrow=16).permute((1, 2, 0)))
plt.show()
break
```
## 2. Latent variable models
We will be using NNs to parameterise a latent variable model, that is, a joint distribution over a collection of random variables (rvs), some of which are observed, some of which are not.
We are interested in two random variables (rvs):
* a discrete latent code $Z \in \mathcal Z$
* and an image $X \in \mathcal X \subseteq \mathbb R^D$
In this tutorial, $x$ has a number $C$ of channels, a certain width $W$ and a certain height $H$, so $\mathcal X \subseteq \mathbb R^{C \times W \times H}$. Because we have fixed $D = C \times W \times H$, $\mathcal X$ is finite-dimensional, but this need not be the case in general (for example, in a different domain, $\mathcal X$ could be the unbounded space of all sentences of arbitrary length). We may treat the pixel intensities as discrete or continuous, as long as we choose an appropriate pmf/pdf for each case.
In this tutorial we will look into continuous latent codes. That is, $z \in \mathcal Z \subseteq \mathbb R^K$.
We specify a joint distribution over $\mathcal Z \times \mathcal X$ by specifying a joint probability density function (pdf):
\begin{align}
p_{ZX}(z, x|\theta) &= p_Z(z|\theta)p_{X|Z}(x|z, \theta)
\end{align}
Here $\theta$ denotes the parameters of the NNs that parameterise the pdf $p_Z$ and the pdf $p_{X|Z=z}$ (for any given $z$).
In this tutorial, the prior is fixed, but in general it need not be. We do not have additional predictors to condition on, but in some application domains you may (e.g., in image captioning, we may be interested in a joint distribution for a caption $y$ and a latent code $z$ given an image $x$; in image generation, we may be interested in a joint distribution for an image $x$ and a latent code $z$ given a caption $y$).
### 2.1 Prior networks
We begin by specifying the component that parameterises the prior $p_Z$.
A prior network is an NN that parameterises a fixed prior distribution for the instances in a batch.
```python
class PriorNet(nn.Module):
"""
An NN that parameterises a prior distribution.
For this lab, our priors are fixed, so this NN's forward pass
simply returns a fixed prior with a given batch_shape.
"""
def __init__(self, outcome_shape: tuple):
"""
outcome_shape: this is the shape of a single outcome
if you use a single integer k, we will turn it into (k,)
"""
super().__init__()
if isinstance(outcome_shape, int):
outcome_shape = (outcome_shape,)
self.outcome_shape = outcome_shape
def forward(self, batch_shape):
"""
Returns a td object for the batch.
"""
raise NotImplementedError("Implement me!")
```
Let's implement two priors.
**A standard Gaussian prior**
Here the latent code is a point in the $K$-dimensional real coordinate space. We use a standard Gaussian per coordinate:
\begin{align}
p_Z(z) &= \prod_{k=1}^K \mathcal N(z_k|0, 1)
\end{align}
**A mixture of Gaussians prior**
Here we learn a mixture of $C$ Gaussians, each a product of $K$ independent Gaussians:
\begin{align}
p_Z(z|\theta) &= \sum_{c=1}^C \omega_c \prod_{k=1}^K \mathcal N(z_k|\mu_c, \sigma_c^2)
\end{align}
where the prior parameters are the mixing coefficients $\omega_{1:C} \in \Delta_{C-1}$, the locations $\mu_{1:C} \in \mathbb R^C$ and the scales $\sigma_{1:C} \in \mathbb R^C_{>0}$.
```python
class GaussianPriorNet(PriorNet):
"""
For z a K-dimensional code:
p(z) = prod_k Normal(z[k]|0, 1)
"""
def __init__(self, outcome_shape):
super().__init__(outcome_shape)
        # the standard Gaussian prior has zero means and unit standard deviations
self.register_buffer("locs", torch.zeros(self.outcome_shape, requires_grad=False).detach())
self.register_buffer("scales", torch.ones(self.outcome_shape, requires_grad=False).detach())
def forward(self, batch_shape):
shape = batch_shape + self.outcome_shape
# we wrap around td.Independent to obtain a pdf over multivariate draws
return td.Independent(td.Normal(loc=self.locs.expand(shape), scale=self.scales.expand(shape)), len(self.outcome_shape))
class MoGPriorNet(PriorNet):
"""
For z a K-dimensional code:
p(z|w_1...w_C, u_1...u_C, s_1...s_C)
= \sum_c w_c prod_k Normal(z[k]|u[c], s[c]^2)
"""
def __init__(self, outcome_shape, num_components, lbound=-10, rbound=10):
super().__init__(outcome_shape)
# [C]
self.logits = nn.Parameter(torch.rand(num_components, requires_grad=True), requires_grad=True)
# (C,) + outcome_shape
shape = (num_components,) + self.outcome_shape
self.locs = nn.Parameter(torch.rand(shape, requires_grad=True), requires_grad=True)
self.scales = nn.Parameter(1 + torch.rand(shape, requires_grad=True), requires_grad=True)
self.num_components = num_components
def forward(self, batch_shape):
# e.g., with batch_shape (B,) and outcome_shape (K,) this is
# [B, C, K]
shape = batch_shape + (self.num_components,) + self.outcome_shape
# we wrap around td.Independent to obtain a pdf over multivariate draws
        # (note that C is not part of the event_shape, thus td.Independent
        # should not treat that dimension as part of the outcome)
# in our example, a draw from independent would return [B, C] draws of K-dimensional outcomes
comps = td.Independent(td.Normal(loc=self.locs.expand(shape), scale=self.scales.expand(shape)), len(self.outcome_shape))
# a batch of component selectors
pc = td.Categorical(logits=self.logits.expand(batch_shape + (self.num_components,)))
# and finally, a mixture
return td.MixtureSameFamily(pc, comps)
```
```python
def test_priors(batch_size=2, latent_dim=3, num_comps=5):
prior_net = GaussianPriorNet(latent_dim)
print("\nGaussian")
print(" trainable parameters")
print(list(prior_net.parameters()))
print(f" outcome_shape={prior_net.outcome_shape}")
p = prior_net(batch_shape=(batch_size,))
print(f" distribution: {p}")
z = p.sample()
print(f" sample: {z}")
print(f" shapes: sample={z.shape} log_prob={p.log_prob(z).shape}")
prior_net = MoGPriorNet(latent_dim, num_comps)
print("\nMixture of Gaussian")
print(" trainable parameters")
print(list(prior_net.parameters()))
print(f" outcome_shape={prior_net.outcome_shape}")
p = prior_net(batch_shape=(batch_size,))
print(f" distribution: {p}")
z = p.sample()
print(f" sample: {z}")
print(f" shapes: sample={z.shape} log_prob={p.log_prob(z).shape}")
test_priors()
```
Gaussian
trainable parameters
[]
outcome_shape=(3,)
distribution: Independent(Normal(loc: torch.Size([2, 3]), scale: torch.Size([2, 3])), 1)
sample: tensor([[-0.3545, -0.5105, -0.7530],
[ 0.6498, -0.0851, -1.2621]])
shapes: sample=torch.Size([2, 3]) log_prob=torch.Size([2])
Mixture of Gaussian
trainable parameters
[Parameter containing:
tensor([0.8617, 0.8520, 0.3585, 0.6196, 0.5566], requires_grad=True), Parameter containing:
tensor([[0.4819, 0.0711, 0.2805],
[0.4312, 0.1763, 0.3839],
[0.0172, 0.8007, 0.8341],
[0.6358, 0.9348, 0.1698],
[0.6220, 0.4291, 0.3030]], requires_grad=True), Parameter containing:
tensor([[1.5164, 1.3117, 1.3240],
[1.5596, 1.3319, 1.5549],
[1.1613, 1.6315, 1.0815],
[1.9174, 1.9954, 1.5884],
[1.8620, 1.4976, 1.3426]], requires_grad=True)]
outcome_shape=(3,)
distribution: MixtureSameFamily(
Categorical(logits: torch.Size([2, 5])),
Independent(Normal(loc: torch.Size([2, 5, 3]), scale: torch.Size([2, 5, 3])), 1))
sample: tensor([[-1.5431, -0.7577, 0.3158],
[ 3.9455, -0.1169, 0.7572]])
shapes: sample=torch.Size([2, 3]) log_prob=torch.Size([2])
### 2.2 Conditional probability distributions
Next, we create code to parameterise conditional probability distributions (cpds), which we do by having an NN parameterise a choice of pmf/pdf. This will be useful in parameterising the $p_{X|Z=z}$ component of our latent variable models (and, later on, it will also be useful for variational inference, when we develop $q_{Z|X=x}$).
Our general strategy is to map from a number of inputs (which the user will choose) to the parameters of a pmf/pdf supported by `torch.distributions`.
```python
class CPDNet(nn.Module):
"""
Let L be a choice of distribution
and x ~ L is an outcome with shape outcome_shape
This is an NN whose forward method maps from a number of inputs to the
parameters of L's pmf/pdf and returns a torch.distributions
object representing L's pmf/pdf.
"""
def __init__(self, outcome_shape):
"""
outcome_shape: this is the shape of a single outcome
if you use a single integer k, we will turn it into (k,)
"""
super().__init__()
if isinstance(outcome_shape, int):
outcome_shape = (outcome_shape,)
self.outcome_shape = outcome_shape
def forward(self, inputs):
"""
Return a torch.distribution object predicted from `inputs`.
inputs: a tensor with shape batch_shape + (num_inputs,)
"""
        raise NotImplementedError("Implement me!")
```
#### 2.2.1 Observational model
The observational model prescribes the distribution of $X|Z=z$.
If we assume our pixel intensities are binary, we can use a product of $C\times W \times H$ Bernoulli distributions, which we parameterise jointly using an NN:
\begin{align}
p_{X|Z}(x|z, \theta) &= \prod_{c=1}^C\prod_{w=1}^W\prod_{h=1}^H \mathrm{Bernoulli}(x_{c,w,h} | f_{c,w,h}(z; \theta))
\end{align}
Here $\mathbf f(z; \theta) \in (0,1)^{C}\times(0,1)^W \times (0,1)^H$ is an NN architecture such as a feed-forward net or a stack of transposed convolution layers. In NN literature, such architectures are often called *decoders*.
If we assume our pixel intensities are real values in $[0, 1]$ (0 and 1 included), we need to parameterise a pdf. A good choice of pdf is the [ContinuousBernoulli distributions](https://arxiv.org/abs/1907.06845), which is a single-parameter distribution (much like the Bernoulli) whose support is the set $[0, 1]$.
Let's start by designing $\mathbf f$.
A very basic design uses a FFNN:
```python
class ReshapeLast(nn.Module):
"""
Helper layer to reshape the rightmost dimension of a tensor.
This can be used as a component of nn.Sequential.
"""
def __init__(self, shape: tuple):
"""
shape: desired rightmost shape
"""
super().__init__()
self._shape = shape
def forward(self, input):
# reshapes the last dimension into self.shape
return input.reshape(input.shape[:-1] + self._shape)
def build_ffnn_decoder(latent_size, num_channels, width=64, height=64, hidden_size=512, p_drop=0.):
"""
Map the latent code to a tensor with shape [num_channels, width, height]
using a FFNN with 2 hidden layers.
latent_size: size of latent code
num_channels: number of channels in the output
width: image shape
height: image shape
hidden_size: we first map from latent_size to hidden_size and
then use feed forward NNs to map it to [num_channels, width, height]
p_drop: dropout rate before linear layers
"""
decoder = nn.Sequential(
nn.Dropout(p_drop),
nn.Linear(latent_size, hidden_size),
nn.ReLU(),
nn.Dropout(p_drop),
nn.Linear(hidden_size, hidden_size),
nn.ReLU(),
nn.Dropout(p_drop),
nn.Linear(hidden_size, num_channels * width * height),
ReshapeLast((num_channels, width, height)),
)
return decoder
```
```python
# mapping from 10-dimensional latent code
build_ffnn_decoder(latent_size=10, num_channels=1)(torch.zeros((5, 10))).shape
```
torch.Size([5, 1, 64, 64])
```python
# we can also have a structured batch shape (e.g., [3, 5])
build_ffnn_decoder(latent_size=10, num_channels=1)(torch.zeros((3, 5, 10))).shape
```
torch.Size([3, 5, 1, 64, 64])
The downside is that the output layer is rather large.
An architecture with inductive biases that are more appropriate for our data type is a CNN, in particular, a transposed CNN. Here we design one such decoder:
```python
class MySequential(nn.Sequential):
"""
This is a version of nn.Sequential that works with structured batches
(i.e., batches that have multiple dimensions)
even when some of the nn layers in it does not.
The idea is to just wrap nn.Sequential around two calls to reshape
which remove and restore the batch dimensions.
"""
def __init__(self, *args, event_dims=1):
super().__init__(*args)
self._event_dims = event_dims
def forward(self, input):
# memorise batch shape
batch_shape = input.shape[:-self._event_dims]
# memorise latent shape
event_shape = input.shape[-self._event_dims:]
# flatten batch shape and obtain outputs
output = super().forward(input.reshape( (-1,) + event_shape))
# restore batch shape
return output.reshape(batch_shape + output.shape[1:])
def build_cnn_decoder(latent_size, num_channels, width=64, height=64, hidden_size=1024, p_drop=0.):
"""
Map the latent code to a tensor with shape [num_channels, width, height].
latent_size: size of latent code
num_channels: number of channels in the output
width: must be 64 (for now)
height: must be 64 (for now)
hidden_size: we first map from latent_size to hidden_size and
then use transposed 2d convolutions to [num_channels, width, height]
p_drop: dropout rate before linear layers
"""
if width != 64:
raise ValueError("The width is hardcoded")
if height != 64:
raise ValueError("The height is hardcoded")
# TODO: change the architecture so width and height are not hardcoded
decoder = MySequential(
nn.Dropout(p_drop),
nn.Linear(latent_size, hidden_size),
ReshapeLast((hidden_size, 1, 1)),
nn.ConvTranspose2d(hidden_size, 128, 5, 2),
nn.ReLU(),
nn.ConvTranspose2d(128, 64, 5, 2),
nn.ReLU(),
nn.ConvTranspose2d(64, 32, 6, 2),
nn.ReLU(),
nn.ConvTranspose2d(32, num_channels, 6, 2),
event_dims=1
)
return decoder
```
```python
# a batch of five 10-dimensional latent codes is transformed
# into a batch of 5 images, each with shape [1,64,64]
build_cnn_decoder(latent_size=10, num_channels=1)(torch.zeros((5, 10))).shape
```
torch.Size([5, 1, 64, 64])
```python
# note that because we use MySequential,
# we can have a batch of [3, 5] assignments
# (this is useful, for example, when we have multiple draws of the latent
# variable for each of the data points in the batch)
build_cnn_decoder(latent_size=10, num_channels=1)(torch.zeros((3, 5, 10))).shape
```
torch.Size([3, 5, 1, 64, 64])
Now we are in a position to design a CPDNet for our image model: it simply combines a choice of decoder with a choice of distribution.
```python
class BinarizedImageModel(CPDNet):
def __init__(self, num_channels, width, height, latent_size, decoder_type=build_ffnn_decoder, p_drop=0.):
super().__init__((num_channels, width, height))
self.decoder = decoder_type(
latent_size=latent_size,
num_channels=num_channels,
width=width,
height=height,
p_drop=p_drop
)
def forward(self, z):
"""
Return the cpd X|Z=z
z: batch_shape + (latent_dim,)
"""
# batch_shape + (num_channels, width, height)
h = self.decoder(z)
return td.Independent(td.Bernoulli(logits=h), len(self.outcome_shape))
class ContinuousImageModel(CPDNet):
# TODO: this could be an exercise
def __init__(self, num_channels, width, height, latent_size, decoder_type=build_ffnn_decoder, p_drop=0.):
super().__init__((num_channels, width, height))
self.decoder = decoder_type(
latent_size=latent_size,
num_channels=num_channels,
width=width,
height=height,
p_drop=p_drop
)
def forward(self, z):
"""
Return the cpd X|Z=z
z: batch_shape + (latent_dim,)
"""
# batch_shape + (num_channels, width, height)
h = self.decoder(z)
return td.Independent(td.ContinuousBernoulli(logits=h), len(self.outcome_shape))
```
```python
obs_model = BinarizedImageModel(
num_channels=img_shape[0],
width=img_shape[1],
height=img_shape[2],
latent_size=10,
p_drop=0.1,
)
print(obs_model)
# a batch of five zs is mapped to 5 distributions over [1,64,64]-dimensional
# binary tensors
print(obs_model(torch.zeros([5, 10])))
```
BinarizedImageModel(
(decoder): Sequential(
(0): Dropout(p=0.1, inplace=False)
(1): Linear(in_features=10, out_features=512, bias=True)
(2): ReLU()
(3): Dropout(p=0.1, inplace=False)
(4): Linear(in_features=512, out_features=512, bias=True)
(5): ReLU()
(6): Dropout(p=0.1, inplace=False)
(7): Linear(in_features=512, out_features=4096, bias=True)
(8): ReshapeLast()
)
)
Independent(Bernoulli(logits: torch.Size([5, 1, 64, 64])), 3)
We can also use a different decoder
```python
obs_model = BinarizedImageModel(
num_channels=img_shape[0],
width=img_shape[1],
height=img_shape[2],
latent_size=10,
p_drop=0.1,
decoder_type=build_cnn_decoder
)
print(obs_model)
# a batch of five zs is mapped to 5 distributions over [1,64,64]-dimensional
# binary tensors
print(obs_model(torch.zeros([5, 10])))
```
BinarizedImageModel(
(decoder): MySequential(
(0): Dropout(p=0.1, inplace=False)
(1): Linear(in_features=10, out_features=1024, bias=True)
(2): ReshapeLast()
(3): ConvTranspose2d(1024, 128, kernel_size=(5, 5), stride=(2, 2))
(4): ReLU()
(5): ConvTranspose2d(128, 64, kernel_size=(5, 5), stride=(2, 2))
(6): ReLU()
(7): ConvTranspose2d(64, 32, kernel_size=(6, 6), stride=(2, 2))
(8): ReLU()
(9): ConvTranspose2d(32, 1, kernel_size=(6, 6), stride=(2, 2))
)
)
Independent(Bernoulli(logits: torch.Size([5, 1, 64, 64])), 3)
### 2.3 Joint distribution
We can now combine a prior and an observational model into a joint distribution. A joint distribution supports a few important operations such as marginal and posterior pdf assessments, as well as sampling from the joint distribution. Marginal and posterior assessments require computations that may or may not be tractable, see below.
From a joint pdf, we can compute the marginal density of $x$ via
\begin{align}
p_X(x|\theta) &= \int_{\mathcal Z} p_{ZX}(z,x|\theta) \mathrm{d}z\\
&= \int_{\mathcal Z} p_Z(z|\theta)p_{X|Z}(x|z, \theta) \mathrm{d}z
\end{align}
For uncountable $\mathcal Z$ and a general enough parameterisation of $p_{X|Z=z}$, this is intractable (a special case where this is tractable is when $p_{X|Z=z}$ is itself a Gaussian whose mean depends linearly on $z$; such a conditional is unreasonably simple for interesting datasets).
The posterior density of $z$ given $x$ depends on the intractable marginal:
\begin{align}
p_{Z|X}(z|x, \theta) &= \frac{p_Z(z|\theta)p_{X|Z}(x|z, \theta)}{p_X(x|\theta)}
\end{align}
As marginalisation is intractable, we can obtain a naive lowerbound by direct application of Jensen's inequality:
\begin{align}
\log p_X(x|\theta) &= \log \int_{\mathcal Z} p_Z(z|\theta)p_{X|Z}(x|z, \theta) \mathrm{d} z\\
&\overset{\text{JI}}{\ge} \int_{\mathcal Z} p_Z(z|\theta) \log p_{X|Z}(x|z, \theta) \mathrm{d}z \\
&\overset{\text{MC}}{\approx} \frac{1}{S} \sum_{s=1}^S \log p_{X|Z}(x|z_s, \theta) \quad \text{where } z_s \sim p_Z
\end{align}
A better lowerbound could be obtained via importance sampling, but it would require training an approximating distribution (as we will do in variational inference).
Recall that, given a dataset $\mathcal D$, the log-likelihood function $\mathcal L(\theta|\mathcal D)= \sum_{x \in \mathcal D} \log p_X(x|\theta)$ requires performing marginal density assessments. Whenever exact marginalisation is intractable, we are unable to assess $\mathcal L(\theta|\mathcal D)$ and its gradient with respect to $\theta$. If the prior is fixed, we can use the naive lowerbound to obtain a gradient estimate, but, again, our naive application of JI leads to a generally rather loose bound.
```python
class JointDistribution(nn.Module):
"""
A wrapper to combine a prior net and a cpd net into a joint distribution.
"""
def __init__(self, prior_net: PriorNet, cpd_net: CPDNet):
"""
prior_net: object to parameterise p_Z
cpd_net: object to parameterise p_{X|Z=z}
"""
super().__init__()
self.prior_net = prior_net
self.cpd_net = cpd_net
def prior(self, shape):
return self.prior_net(shape)
def obs_model(self, z):
return self.cpd_net(z)
def sample(self, shape):
"""
Return z via prior_net(shape).sample()
and x via cpd_net(z).sample()
"""
pz = self.prior_net(shape)
z = pz.sample()
px_z = self.cpd_net(z)
x = px_z.sample()
return z, x
def log_prob(self, z, x):
"""
Assess the log density of the joint outcome.
"""
batch_shape = z.shape[:-len(self.prior_net.outcome_shape)]
pz = self.prior_net(batch_shape)
px_z = self.cpd_net(z)
return pz.log_prob(z) + px_z.log_prob(x)
def log_marginal(self, x, enumerate_fn):
"""
Return log marginal density of x.
enumerate_fn: function that enumerates the support of the prior
(this is needed for marginalisation p(x) = \int p(z, x) dz)
This only really makes sense if the support is a
(small) countably finite set. In such cases, you can use
enumerate=lambda p: p.enumerate_support()
which is supported, for example, by Categorical and OneHotCategorical.
If the support is discrete (eg, bit vectors) you can still dare to
        enumerate it explicitly, but you will need to write customised code,
as torch.distributions will not offer that functionality for you.
If the support is uncountable, countably infinite, or just large
anyway, you need approximate tools (such as VI, importance sampling, etc)
"""
batch_shape = x.shape[:-len(self.cpd_net.outcome_shape)]
pz = self.prior_net(batch_shape)
log_joint = []
# (support_size,) + batch_shape
z = enumerate_fn(pz)
px_z = self.cpd_net(z)
# (support_size,) + batch_shape
log_joint = pz.log_prob(z) + px_z.log_prob(x.unsqueeze(0))
# batch_shape
return torch.logsumexp(log_joint, 0)
def posterior(self, x, enumerate_fn):
"""
Return the posterior distribution Z|X=x.
        When the latent code is discrete, we can return a distribution over
        the complete space of all possible latent codes. This is done via
exhaustive enumeration provided by `enumerate_fn`.
"""
batch_shape = x.shape[:-len(self.cpd_net.outcome_shape)]
pz = self.prior_net(batch_shape)
# (support_size,) + batch_shape
z = enumerate_fn(pz)
px_z = self.cpd_net(z)
# (support_size,) + batch_shape
log_joint = pz.log_prob(z) + px_z.log_prob(x.unsqueeze(0))
# batch_shape + (support_size,)
log_joint = torch.swapaxes(log_joint, 0, -1)
return td.Categorical(logits=log_joint)
def naive_lowerbound(self, x, num_samples: int):
"""
Return an MC lowerbound on log marginal density of x:
log p(x) >= 1/S \sum_s log p(x|z[s])
with z[s] ~ p_Z
"""
batch_shape = x.shape[:-len(self.cpd_net.outcome_shape)]
pz = self.prior_net(batch_shape)
# (num_samples,) + batch_shape + prior_outcome_shape
log_probs = []
# I'm using a for loop, but note that with enough GPU memory
# one could parallelise this step
for z in pz.sample((num_samples,)):
px_z = self.cpd_net(z)
log_probs.append(px_z.log_prob(x))
# (num_samples,) + batch_shape
log_probs = torch.stack(log_probs)
# batch_shape
return torch.mean(log_probs, 0)
```
```python
def test_joint_dist(latent_size=10, num_comps=3, data_shape=(1, 64, 64), batch_size=2, hidden_size=32):
p = JointDistribution(
prior_net=GaussianPriorNet(latent_size),
cpd_net=BinarizedImageModel(
num_channels=data_shape[0],
width=data_shape[1],
height=data_shape[2],
latent_size=latent_size,
decoder_type=build_cnn_decoder
)
)
print("Model for binarized data")
print(p)
z, x = p.sample((batch_size,))
print("sampled z")
print(z)
print("sampled x")
print(x)
print("MC lowerbound")
print(" 1:", p.naive_lowerbound(x, 10))
print(" 2:", p.naive_lowerbound(x, 10))
print("\n\n")
p = JointDistribution(
prior_net=MoGPriorNet(latent_size, num_comps),
cpd_net=ContinuousImageModel(
num_channels=data_shape[0],
width=data_shape[1],
height=data_shape[2],
latent_size=latent_size,
decoder_type=build_cnn_decoder
)
)
print("Model for continuous data")
print(p)
z, x = p.sample((batch_size,))
print("sampled z")
print(z)
print("sampled x")
print(x)
print("MC lowerbound")
print(" 1:", p.naive_lowerbound(x, 10))
print(" 2:", p.naive_lowerbound(x, 10))
test_joint_dist(10)
```
Model for binarized data
JointDistribution(
(prior_net): GaussianPriorNet()
(cpd_net): BinarizedImageModel(
(decoder): MySequential(
(0): Dropout(p=0.0, inplace=False)
(1): Linear(in_features=10, out_features=1024, bias=True)
(2): ReshapeLast()
(3): ConvTranspose2d(1024, 128, kernel_size=(5, 5), stride=(2, 2))
(4): ReLU()
(5): ConvTranspose2d(128, 64, kernel_size=(5, 5), stride=(2, 2))
(6): ReLU()
(7): ConvTranspose2d(64, 32, kernel_size=(6, 6), stride=(2, 2))
(8): ReLU()
(9): ConvTranspose2d(32, 1, kernel_size=(6, 6), stride=(2, 2))
)
)
)
sampled z
tensor([[-0.8407, 0.2065, -1.2518, -0.9531, 0.1592, -0.1754, -1.6121, 0.1075,
-0.7088, -0.3303],
[-0.1712, 0.1613, -0.3089, 0.2564, 0.0636, 0.7447, 0.7566, -0.4789,
1.1534, -0.8448]])
sampled x
tensor([[[[0., 1., 1., ..., 1., 1., 1.],
[1., 1., 0., ..., 1., 0., 1.],
[1., 0., 1., ..., 0., 0., 1.],
...,
[0., 0., 1., ..., 0., 1., 0.],
[1., 1., 0., ..., 1., 1., 1.],
[1., 0., 0., ..., 0., 0., 1.]]],
[[[0., 1., 0., ..., 1., 1., 1.],
[1., 1., 0., ..., 1., 0., 0.],
[0., 0., 0., ..., 1., 0., 1.],
...,
[0., 1., 0., ..., 1., 1., 1.],
[1., 1., 0., ..., 1., 0., 0.],
[0., 0., 1., ..., 0., 1., 1.]]]])
MC lowerbound
1: tensor([-2837.7363, -2837.9871], grad_fn=<MeanBackward1>)
2: tensor([-2837.9243, -2838.0288], grad_fn=<MeanBackward1>)
Model for continuous data
JointDistribution(
(prior_net): MoGPriorNet()
(cpd_net): ContinuousImageModel(
(decoder): MySequential(
(0): Dropout(p=0.0, inplace=False)
(1): Linear(in_features=10, out_features=1024, bias=True)
(2): ReshapeLast()
(3): ConvTranspose2d(1024, 128, kernel_size=(5, 5), stride=(2, 2))
(4): ReLU()
(5): ConvTranspose2d(128, 64, kernel_size=(5, 5), stride=(2, 2))
(6): ReLU()
(7): ConvTranspose2d(64, 32, kernel_size=(6, 6), stride=(2, 2))
(8): ReLU()
(9): ConvTranspose2d(32, 1, kernel_size=(6, 6), stride=(2, 2))
)
)
)
sampled z
tensor([[ 0.5024, 0.1232, 1.0614, 1.5447, 1.1263, 1.0789, 2.7709, 1.9042,
-1.0329, 3.2964],
[ 1.0950, -1.1369, 1.0845, 1.6282, -0.8367, 1.1774, -0.4392, -0.3845,
1.5312, -1.0509]])
sampled x
tensor([[[[0.2456, 0.6908, 0.2022, ..., 0.9125, 0.4140, 0.0154],
[0.9211, 0.8082, 0.1273, ..., 0.0145, 0.6978, 0.2584],
[0.9423, 0.6667, 0.8885, ..., 0.8678, 0.2550, 0.1817],
...,
[0.9706, 0.4656, 0.9801, ..., 0.3806, 0.5779, 0.3176],
[0.8841, 0.9396, 0.1979, ..., 0.4812, 0.1924, 0.1274],
[0.9042, 0.7917, 0.8952, ..., 0.4248, 0.3569, 0.6764]]],
[[[0.2313, 0.6762, 0.1603, ..., 0.8595, 0.0705, 0.6296],
[0.6333, 0.8830, 0.3072, ..., 0.0994, 0.7293, 0.3187],
[0.2723, 0.6290, 0.7742, ..., 0.1179, 0.6817, 0.0761],
...,
[0.2667, 0.3814, 0.9479, ..., 0.4457, 0.3651, 0.0825],
[0.7468, 0.6061, 0.9087, ..., 0.1951, 0.1604, 0.0321],
[0.6896, 0.4760, 0.0178, ..., 0.3982, 0.9280, 0.0480]]]])
MC lowerbound
1: tensor([0.0437, 0.5014], grad_fn=<MeanBackward1>)
2: tensor([0.1282, 0.4484], grad_fn=<MeanBackward1>)
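As an aside, the naive lowerbound is differentiable with respect to $\theta$ (the prior here is fixed), so it could in principle serve as a crude training signal. The sketch below shows a single gradient step; it is only an illustration, and the rest of the tutorial trains a much tighter objective (the ELBO) with a learned inference model.
```python
# Illustration only: one SGD step on the negated naive MC lowerbound.
joint = JointDistribution(
    prior_net=GaussianPriorNet(10),
    cpd_net=BinarizedImageModel(
        num_channels=img_shape[0], width=img_shape[1], height=img_shape[2],
        latent_size=10, decoder_type=build_cnn_decoder)
)
optimiser = opt.Adam(joint.parameters(), lr=1e-4)
for x_batch, _ in train_loader:
    loss = -joint.naive_lowerbound(x_batch, num_samples=10).mean()
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    break  # one step is enough for the demonstration
```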
## 3. Learning
We estimate $\theta$ using stochastic gradient-based maximum likelihood estimation. For a tractable model, we can assess the log-likelihood function
\begin{align}
\mathcal L(\theta|\mathcal D) &= \sum_{x \in \mathcal D} \log p_X(x|\theta)
\end{align}
and estimate $\nabla_{\theta} \mathcal L(\theta|\mathcal D)$ using random mini-batches:
\begin{align}
\nabla_{\theta}\mathcal L(\theta|\mathcal D) &\overset{\text{MC}}{\approx} \frac{1}{S} \sum_{s=1}^S \nabla_{\theta}\log p_X(x^{(s)}|\theta) \\
&\text{where }x^{(s)} \sim \mathcal D
\end{align}
An intractable model, such as the continuous LVM above, requires approximate inference.
### 3.1 Intractable LVMs
When marginalisation is intractable, we resort to variational inference: we introduce a parametric approximation $q_{Z|X=x}$ to the model's true posterior distribution $p_{Z|X=x}$ and estimate both the approximation and the joint distribution by maximising the evidence lowerbound (ELBO), shown below for a single observation $x$:
\begin{align}
\mathcal E(\lambda, \theta| \mathcal D) &= \mathbb E_{x \sim \mathcal D}\left[ \mathbb E\left[\log \frac{ p_{ZX}(z, x|\theta)}{q_{Z|X}(z|x, \lambda)}\right] \right] \\
&= \underbrace{\mathbb E_{x \sim \mathcal D}\left[ \mathbb E[ \log p_{X|Z}(x|z,\theta)] \right]}_{-D} - \underbrace{\mathbb E_{x \sim \mathcal D}\left[ \mathrm{KL}(q_{Z|X=x}||p_Z) \right]}_{R}
\end{align}
where the inner expectation is taken with respect to $q_{Z|X}(z|x, \lambda)$.
When computed in expectation under the data distribution, the two components of the ELBO, namely, the expected log-likelihood $\mathbb E[ \log p_{X|Z}(x|z,\theta)]$ and the "KL term" $\mathrm{KL}(q_{Z|X=x}||p_Z)$, are related to two information-theoretic quantities known as *distortion* and *rate*.
We choose our approximation to be such that: its support is embedded in the support of the prior, it is simple enough to sample from, and it is simple enough to assess the density of a sample. If possible, we choose it such that other quantities are also tractable (e.g., entropy, relative entropy).
For a multivariate $z$, we normally choose a factorised family, for example, if $z$ is a point in $\mathbb R^K$:
\begin{align}
q_{Z|X}(z|x, \lambda) &= \prod_{k=1}^K \mathcal N(z_k|\mu_k(x;\lambda), \sigma^2_k(x;\lambda))
\end{align}
with $\boldsymbol\mu(x;\lambda) \in \mathbb R^K$ and $\boldsymbol\sigma(x;\lambda) \in \mathbb R^K_{>0}$. This is called a *mean field assumption*.
We can obtain a more complex approximation by, for example, using a mixture of mean field families:
\begin{align}
q_{Z|X}(z|x, \lambda) &= \sum_{c=1}^C \omega_c(x; \lambda) \prod_{k=1}^K \mathcal N(z_k|\mu_k(x;\lambda), \sigma^2_k(x;\lambda))
\end{align}
with $\boldsymbol\mu(x;\lambda) \in \mathbb R^K$, $\boldsymbol\sigma(x;\lambda) \in \mathbb R^K_{>0}$, and $\boldsymbol\omega(x; \lambda) \in \Delta_{C-1}$.
There are other ways to inject structure into the variational approximation; a common example is to use a normalising flow. When designing a structured approximation, a few things must be kept in mind:
* sampling should remain tractable
* assessing the density of a sample should remain tractable
* it's okay if we cannot compute entropy or KL in closed-form, we can always estimate the gradient of such terms (e.g., via score function estimation)
As we shall see, the two approximations above differ in a crucial way: the simple Gaussian mean field is amenable to a continuously differentiable reparameterisation, which leads to a lower-variance estimator (compared to, for example, the score function estimator).
```python
```
#### 3.1.1 Reparameterised gradient
For some distributions, it is possible to obtain a sample via a continuously differentiable transformation of a fixed random source. This enables a class of gradient estimators known as *reparameterised gradients* (or the "reparameterisation trick"). In these cases $Z = \mathcal T(\epsilon, \lambda)$ with $\epsilon$ drawn from a distribution whose parameters are independent of $\lambda$. Moreover, $\mathcal T$ is differentiable and invertible.
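As a minimal illustration (reusing the imports from the top of the notebook), a Gaussian built with `torch.distributions` exposes this transformation through `rsample`, so gradients can flow from the sample back to the parameters that produced it:
```python
# loc and raw_scale stand in for the outputs of an inference network
loc = torch.zeros(5, 10, requires_grad=True)
raw_scale = torch.zeros(5, 10, requires_grad=True)
q = td.Independent(td.Normal(loc=loc, scale=F.softplus(raw_scale)), 1)

z = q.rsample()                  # z = loc + softplus(raw_scale) * eps, with eps ~ N(0, I)
print(z.requires_grad)           # True: the path derivative is available to backprop
print(q.sample().requires_grad)  # False: .sample() blocks the path derivative
```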
### 3.2 Inference model
The inference model is a conditional model of the latent variable, for which we design CPD nets.
Before we go on, it is useful to design an "encoder": a function that maps an image $x \in \mathcal X$ to a fixed-size vector that we can use as a compact representation of $x$. Next, we design one such encoder employing FFNNs and another employing CNNs.
```python
class FlattenImage(nn.Module):
def forward(self, input):
return input.reshape(input.shape[:-3] + (-1,))
def build_ffnn_encoder(num_channels, width=64, height=64, output_size=1024, p_drop=0.):
encoder = nn.Sequential(
FlattenImage(),
nn.Dropout(p_drop),
nn.Linear(num_channels * width * height, output_size//2),
nn.ReLU(),
nn.Dropout(p_drop),
nn.Linear(output_size//2, output_size//2),
nn.ReLU(),
nn.Dropout(p_drop),
nn.Linear(output_size//2, output_size),
)
return encoder
def build_cnn_encoder(num_channels, width=64, height=64, output_size=1024, p_drop=0.):
if width != 64:
raise ValueError("The width is hardcoded")
if height != 64:
raise ValueError("The height is hardcoded")
if output_size != 1024:
raise ValueError("The output_size is hardcoded")
# TODO: change the architecture so width, height and output_size are not hardcoded
encoder = MySequential(
nn.Conv2d(num_channels, 32, 4, 2),
nn.LeakyReLU(0.2),
nn.Conv2d(32, 64, 4, 2),
nn.LeakyReLU(0.2),
nn.Conv2d(64, 128, 4, 2),
nn.LeakyReLU(0.2),
nn.Conv2d(128, 256, 4, 2),
nn.LeakyReLU(0.2),
FlattenImage(),
event_dims=3
)
return encoder
```
```python
# a batch of five [1, 64, 64]-dimensional images is encoded into
# five 1024-dimensional vectors
build_ffnn_encoder(num_channels=1)(torch.zeros((5, 1, 64, 64))).shape
```
torch.Size([5, 1024])
```python
# and, again, we can have structured batches
# (here trying with (3,5))
build_ffnn_encoder(num_channels=1)(torch.zeros((3, 5, 1, 64, 64))).shape
```
torch.Size([3, 5, 1024])
```python
# a batch of five [1, 64, 64]-dimensional images is encoded into
# five 1024-dimensional vectors
build_cnn_encoder(num_channels=1)(torch.zeros((5, 1, 64, 64))).shape
```
torch.Size([5, 1024])
```python
# and, again, since we use MySequential we can have structured batches
# (here trying with (3,5))
build_cnn_encoder(num_channels=1)(torch.zeros((3, 5, 1, 64, 64))).shape
```
torch.Size([3, 5, 1024])
We can now design some CPD nets, assuming they map from an encoding of an image to a pdf over $\mathcal Z$.
**Gaussian mean field**
This can be used to parameterise a cpd over real vectors of fixed dimensionality.
**Mixture of Gaussian mean fields**
This can also be used to parameterise a cpd over real vectors of fixed dimensionality, but it achieves a more complex density (e.g., multimodal).
```python
class GaussianCPDNet(CPDNet):
"""
Output distribution is a product of Gaussian distributions
"""
def __init__(self, outcome_shape, num_inputs: int, hidden_size: int=None, p_drop: float=0.):
"""
outcome_shape: shape of the outcome (int or tuple)
if int, we turn it into a singleton tuple
num_inputs: rightmost dimensionality of the inputs to forward
hidden_size: size of hidden layers for the CPDNet (use None to skip)
p_drop: configure dropout before every Linear layer
"""
super().__init__(outcome_shape)
num_outputs = np.prod(self.outcome_shape)
if hidden_size:
self.encoder = nn.Sequential(
nn.Dropout(p_drop),
nn.Linear(num_inputs, hidden_size),
nn.ReLU()
)
else:
self.encoder = nn.Identity()
hidden_size = num_inputs
self.locs = nn.Sequential(
nn.Dropout(p_drop),
nn.Linear(hidden_size, num_outputs),
ReshapeLast(self.outcome_shape)
)
self.scales = nn.Sequential(
nn.Dropout(p_drop),
nn.Linear(hidden_size, num_outputs),
nn.Softplus(), # we use the softplus activations for the scales
ReshapeLast(self.outcome_shape)
)
def forward(self, inputs):
h = self.encoder(inputs)
return td.Independent(td.Normal(loc=self.locs(h), scale=self.scales(h)), len(self.outcome_shape))
class MoGCPDNet(CPDNet):
"""
Output distribution is a mixture of products of Gaussian distributions
"""
def __init__(self, outcome_shape, num_inputs: int, hidden_size: int=None, p_drop: float=0., num_components=2):
"""
outcome_shape: shape of the outcome (int or tuple)
if int, we turn it into a singleton tuple
num_inputs: rightmost dimensionality of the inputs to forward
hidden_size: size of hidden layers for the CPDNet (use None to skip)
p_drop: configure dropout before every Linear layer
num_components: number of Gaussians to be mixed
"""
super().__init__(outcome_shape)
self.num_components = num_components
num_outputs = num_components * np.prod(self.outcome_shape)
if hidden_size:
self.encoder = nn.Sequential(
nn.Dropout(p_drop),
nn.Linear(num_inputs, hidden_size),
nn.ReLU()
)
else:
self.encoder = nn.Identity()
hidden_size = num_inputs
self.locs = nn.Sequential(
nn.Dropout(p_drop),
nn.Linear(hidden_size, num_outputs),
ReshapeLast((num_components,) + self.outcome_shape)
)
self.scales = nn.Sequential(
nn.Dropout(p_drop),
nn.Linear(hidden_size, num_outputs),
nn.Softplus(), # we use the softplus activations for the scales
ReshapeLast((num_components,) + self.outcome_shape)
)
self.logits = nn.Sequential(
nn.Dropout(p_drop),
nn.Linear(hidden_size, num_components),
ReshapeLast((num_components,))
)
def forward(self, inputs):
h = self.encoder(inputs)
comps = td.Independent(td.Normal(loc=self.locs(h), scale=self.scales(h)), len(self.outcome_shape))
pc = td.Categorical(logits=self.logits(h))
return td.MixtureSameFamily(pc, comps)
```
```python
def test_cpds(outcome_shape, num_comps=2, batch_size=3, input_dim=5, hidden_size=2):
cpd_net = GaussianCPDNet(outcome_shape, num_inputs=input_dim, hidden_size=hidden_size)
print("\nGaussian")
print(cpd_net)
print(f" outcome_shape={cpd_net.outcome_shape}")
inputs = torch.from_numpy(np.random.uniform(size=(batch_size, input_dim))).float()
print(f" shape of inputs: {inputs.shape}")
p = cpd_net(inputs)
print(f" distribution: {p}")
z = p.sample()
print(f" sample: {z}")
print(f" shapes: sample={z.shape} log_prob={p.log_prob(z).shape}")
cpd_net = MoGCPDNet(outcome_shape, num_inputs=input_dim, hidden_size=hidden_size, num_components=num_comps)
print("\nMixture of Gaussians")
print(cpd_net)
print(f" outcome_shape={cpd_net.outcome_shape}")
inputs = torch.from_numpy(np.random.uniform(size=(batch_size, input_dim))).float()
print(f" shape of inputs: {inputs.shape}")
p = cpd_net(inputs)
print(f" distribution: {p}")
z = p.sample()
print(f" sample: {z}")
print(f" shapes: sample={z.shape} log_prob={p.log_prob(z).shape}")
# Try a few
test_cpds(12)
#test_cpds(12, hidden_size=None)
# your latent code could be a matrix (we talk about it as a "vector" for convenience)
#test_cpds((4, 5))
```
Gaussian
GaussianCPDNet(
(encoder): Sequential(
(0): Dropout(p=0.0, inplace=False)
(1): Linear(in_features=5, out_features=2, bias=True)
(2): ReLU()
)
(locs): Sequential(
(0): Dropout(p=0.0, inplace=False)
(1): Linear(in_features=2, out_features=12, bias=True)
(2): ReshapeLast()
)
(scales): Sequential(
(0): Dropout(p=0.0, inplace=False)
(1): Linear(in_features=2, out_features=12, bias=True)
(2): Softplus(beta=1, threshold=20)
(3): ReshapeLast()
)
)
outcome_shape=(12,)
shape of inputs: torch.Size([3, 5])
distribution: Independent(Normal(loc: torch.Size([3, 12]), scale: torch.Size([3, 12])), 1)
sample: tensor([[-0.2740, 1.4641, -0.1492, 0.8619, 0.8987, -0.6384, 0.1898, -0.0084,
0.0154, -0.0720, 0.5614, -0.7824],
[ 1.4560, -0.2668, -0.2343, -0.8925, 0.0078, -0.1212, 0.3651, -0.8519,
1.0650, -1.2128, -0.4736, -0.1115],
[ 0.1504, -0.5747, 0.1681, 0.4921, 1.2217, -0.4474, -0.9136, -1.1818,
0.9822, -0.6862, -0.0885, -0.3536]])
shapes: sample=torch.Size([3, 12]) log_prob=torch.Size([3])
Mixture of Gaussians
MoGCPDNet(
(encoder): Sequential(
(0): Dropout(p=0.0, inplace=False)
(1): Linear(in_features=5, out_features=2, bias=True)
(2): ReLU()
)
(locs): Sequential(
(0): Dropout(p=0.0, inplace=False)
(1): Linear(in_features=2, out_features=24, bias=True)
(2): ReshapeLast()
)
(scales): Sequential(
(0): Dropout(p=0.0, inplace=False)
(1): Linear(in_features=2, out_features=24, bias=True)
(2): Softplus(beta=1, threshold=20)
(3): ReshapeLast()
)
(logits): Sequential(
(0): Dropout(p=0.0, inplace=False)
(1): Linear(in_features=2, out_features=2, bias=True)
(2): ReshapeLast()
)
)
outcome_shape=(12,)
shape of inputs: torch.Size([3, 5])
distribution: MixtureSameFamily(
Categorical(logits: torch.Size([3, 2])),
Independent(Normal(loc: torch.Size([3, 2, 12]), scale: torch.Size([3, 2, 12])), 1))
sample: tensor([[ 2.2989, 0.6416, -0.7611, -0.2090, -0.4584, 0.5179, -0.1198, 0.0880,
0.7712, -0.4315, -0.3756, 0.2588],
[-0.1332, -0.0895, -0.0302, 1.7607, -0.6219, 0.5210, 1.3052, 0.4076,
0.2439, -0.4406, 0.0756, 0.6263],
[-0.1179, 1.3194, -0.2864, 1.0093, 1.1053, 0.9005, 0.6088, 0.2438,
0.7631, -0.8659, 0.4035, 0.1295]])
shapes: sample=torch.Size([3, 12]) log_prob=torch.Size([3])
Last, but certainly not least, we can combine our encoder and a choice of CPD net.
```python
class InferenceModel(CPDNet):
def __init__(
self, cpd_net_type,
latent_size, num_channels=1, width=64, height=64,
hidden_size=1024, p_drop=0.,
encoder_type=build_ffnn_encoder):
super().__init__(latent_size)
self.latent_size = latent_size
# encodes an image to a hidden_size-dimensional vector
self.encoder = encoder_type(
num_channels=num_channels,
width=width,
height=height,
output_size=hidden_size,
p_drop=p_drop
)
# maps from a hidden_size-dimensional encoding
# to a cpd for Z|X=x
self.cpd_net = cpd_net_type(
latent_size,
num_inputs=hidden_size,
hidden_size=2*latent_size,
p_drop=p_drop
)
def forward(self, x):
h = self.encoder(x)
return self.cpd_net(h)
```
```python
InferenceModel(GaussianCPDNet, latent_size=10)(torch.zeros(5, 1, 64, 64))
```
Independent(Normal(loc: torch.Size([5, 10]), scale: torch.Size([5, 10])), 1)
```python
InferenceModel(partial(MoGCPDNet, num_components=3), latent_size=10)(torch.zeros(5, 1, 64, 64))
```
MixtureSameFamily(
Categorical(logits: torch.Size([5, 3])),
Independent(Normal(loc: torch.Size([5, 3, 10]), scale: torch.Size([5, 3, 10])), 1))
We now have everything in place to use variational inference.
### 3.3 Neural Variational Inference
We will train our generative model via variational inference, for which we need to train an inference model along with it. We will use the ELBO objective, and gradient estimators based on score function estimation and differentiable reparameterisation.
It's common to refer to any one such model as a variational auto-encoder (VAE), especially so when using reparameterised gradients.
Let's start with score function estimation.
Given a data point $x$, we estimate the gradient of the ELBO with respect to $\lambda$ by MC estimating the following expressions:
\begin{align}
\nabla_{\lambda} \mathcal E(\lambda, \theta|x) &= \mathbb E\left[ r(z, x; \theta, \lambda) \nabla_{\lambda}\log \frac{p_{ZX}(z, x|\theta)}{q_{Z|X}(z|x, \lambda)} \right]
\end{align}
\begin{align}
\nabla_{\theta} \mathcal E(\lambda, \theta|x) &= \mathbb E\left[ \nabla_{\theta} \log p_{ZX}(z, x|\theta) \right]
\end{align}
where the "reward" function in the gradient estimator for $\lambda$ is
\begin{align}
r(z, x; \theta, \lambda) &= \log p_{X|Z}(x|z, \theta)
\end{align}
And, because this gradient estimator is rather noisy, it's common to transform the reward function by further employing control variates. The simplest control variates are functions of $x$ and possibly of $\theta$ and $\lambda$, but not of the action $z$ with respect to which we evaluate the reward function. We will implement those as wrappers around the reward function. So, let's start by agreeing on the API of our control variates.
```python
class VarianceReduction(nn.Module):
"""
We will be using simple forms of control variates for variance reduction.
These are transformations of the reward that are independent of the sampled
latent variable, but they can, in principle, depend on x, and on the
parameters of the generative and inference model.
Some of these are trainable components, thus they also contribute to the loss.
"""
def __init__(self):
super().__init__()
def forward(self, r, x, q, r_fn):
"""
Return the transformed reward and a contribution to the loss.
r: a batch of rewards
x: a batch of observations
q: policy
r_fn: reward function
"""
return r, torch.zeros_like(r)
```
In case a reparameterisation trick is available for the approximate posterior, we can MC estimate the following expressions:
\begin{align}
\nabla_{\lambda}\mathcal E(\lambda, \theta|x) &= \mathbb E_{\epsilon \sim s(\cdot)}\left[\nabla_{\lambda}\log \frac{p_{XZ}(x, Z=\mathcal T(\epsilon, \lambda)|\theta)}{q_{Z|X}(Z=\mathcal T(\epsilon, \lambda)|x, \lambda)}\right]
\end{align}
and
\begin{align}
\nabla_{\theta}\mathcal E(\lambda, \theta|x) &= \mathbb E_{\epsilon \sim s(\cdot)}\left[\nabla_{\theta}\log \frac{p_{XZ}(x, Z=\mathcal T(\epsilon, \lambda)|\theta)}{q_{Z|X}(Z=\mathcal T(\epsilon, \lambda)|x, \lambda)}\right]
\end{align}
Gradients of this kind are commonly referred to as *path derivatives*. We don't need to implement the path derivatives or the transformations ourselves: we simply use a distribution object that supports an `rsample` method (for "reparameterised sample"). Such a distribution can assess the density of a sample if we need it, and, if the sample was obtained via `rsample`, the path derivative is automatically available to backprop.
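A quick standalone check of this (not part of our model): gradients reach the distribution's parameters through `rsample`, but not through `sample`.
```python
import torch
import torch.distributions as td

loc = torch.zeros(3, requires_grad=True)
q = td.Normal(loc, torch.ones(3))
print(q.has_rsample)      # True: Normal supports reparameterised sampling

z = q.rsample()           # z = loc + scale * eps with eps ~ N(0, I)
z.sum().backward()
print(loc.grad)           # tensor([1., 1., 1.]): the path derivative reached loc

z = q.sample()            # a plain sample is detached from the graph
print(z.requires_grad)    # False: no path derivative available
```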
If KL divergence from the prior to the approximate posterior is computable, we use a different gradient estimator for $\lambda$ and $\theta$, namely:
\begin{align}
\nabla_{\lambda}\mathcal E(\lambda, \theta|x) &= \mathbb E_{\epsilon \sim s(\cdot)}\left[\nabla_{\lambda}\log p_{X|Z}(x|Z=\mathcal T(\epsilon, \lambda), \theta)\right] - \nabla_{\lambda}\mathrm{KL}(q_{Z|X} || p_Z)
\end{align}
and
\begin{align}
\nabla_{\theta}\mathcal E(\lambda, \theta|x) &= \mathbb E_{\epsilon \sim s(\cdot)}\left[\nabla_{\theta}\log p_{X|Z}(x|Z=\mathcal T(\epsilon, \lambda), \theta)\right] - \nabla_{\theta}\mathrm{KL}(q_{Z|X} || p_Z)
\end{align}
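For a toy case the resulting objective is easy to write down; here's a minimal sketch (a stand-in reconstruction term, a toy $q$, and a standard Gaussian prior) with the reconstruction term flowing through `rsample` and the rate term computed in closed form with `td.kl_divergence`.
```python
import torch
import torch.distributions as td

loc = torch.zeros(4, 10, requires_grad=True)                        # toy lambda
qz = td.Independent(td.Normal(loc, torch.ones(4, 10)), 1)           # q(z|x, lambda)
pz = td.Independent(td.Normal(torch.zeros(10), torch.ones(10)), 1)  # p(z)

z = qz.rsample()                       # path derivative available
log_p_x_z = -(z ** 2).sum(-1)          # stand-in for log p(x|z, theta)
elbo_surrogate = log_p_x_z - td.kl_divergence(qz, pz)
loss = -elbo_surrogate.mean()
loss.backward()                        # gradients flow through both terms
```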
This is exactly the pattern we need in practice: a surrogate loss, i.e., a node in the computation graph whose backward pass corresponds to the gradient estimator we want. Pay careful attention to how the forward method of the NVIL class below builds this surrogate for both estimators.
Now we can work on our general NVIL model. The following class implements the NVIL objective as well as a lot of helper code for manipulating the model components in interesting ways (e.g., sampling, sampling conditionally, estimating marginal density, etc.).
```python
class NVIL(nn.Module):
"""
A generative model p(z)p(x|z) and an approximation q(z|x) to that
model's true posterior.
The approximation is estimated to maximise the ELBO, and so is the joint
distribution.
"""
def __init__(self, gen_model: JointDistribution, inf_model: InferenceModel, cv_model: VarianceReduction):
"""
gen_model: p(z)p(x|z)
inf_model: q(z|x) which approximates p(z|x)
cv_model: optional transformations of the reward
"""
super().__init__()
self.gen_model = gen_model
self.inf_model = inf_model
self.cv_model = cv_model
def gen_params(self):
return self.gen_model.parameters()
def inf_params(self):
return self.inf_model.parameters()
def cv_params(self):
return self.cv_model.parameters()
def sample(self, batch_size, sample_size=None, oversample=False):
"""
A sample from the joint distribution:
z ~ prior
x|z ~ obs model
batch_size: number of samples in a batch
sample_size: if None, the output tensor has shape [batch_size] + data_shape
if 1 or more, the output tensor has shape [sample_size, batch_size] + data_shape
while batch_size controls a parallel computation,
sample_size controls a sequential computation (a for loop)
oversample: if True, samples z (batch_size times), hold it fixed,
and sample x (sample_size times)
"""
pz = self.gen_model.prior((batch_size,))
samples = [None] * (sample_size or 1)
px_z = self.gen_model.obs_model(pz.sample()) if oversample else None
for k in range(sample_size or 1):
if not oversample:
px_z = self.gen_model.obs_model(pz.sample())
samples[k] = px_z.sample()
x = torch.stack(samples)
return x if sample_size else x.squeeze(0)
def cond_sample(self, x, sample_size=None, oversample=False):
"""
Condition on x and draw a sample:
z|x ~ inf model
x'|z ~ obs model
x: a batch of seed data samples
sample_size: if None, the output tensor has shape [batch_size] + data_shape
if 1 or more, the output tensor has shape [sample_size, batch_size] + data_shape
sample_size controls a sequential computation (a for loop)
oversample: if True, samples z (batch_size times), hold it fixed,
and sample x' (sample_size times)
"""
qz = self.inf_model(x)
samples = [None] * (sample_size or 1)
px_z = self.gen_model.obs_model(qz.sample()) if oversample else None
for k in range(sample_size or 1):
if not oversample:
px_z = self.gen_model.obs_model(qz.sample())
samples[k] = px_z.sample()
x = torch.stack(samples)
return x if sample_size else x.squeeze(0)
def log_prob(self, z, x):
"""
The log density of the joint outcome under the generative model
z: [batch_size, latent_dim]
x: [batch_size] + data_shape
"""
return self.gen_model.log_prob(z=z, x=x)
def DRL(self, x, sample_size=None):
"""
MC estimates of a model's
* distortion D
* rate R
* and log-likelihood L
The estimates are based on single data points
but multiple latent samples.
x: batch_shape + data_shape
sample_size: if 1 or more, we use multiple samples
sample_size controls a sequential computation (a for loop)
"""
sample_size = sample_size or 1
obs_dims = len(self.gen_model.cpd_net.outcome_shape)
batch_shape = x.shape[:-obs_dims]
with torch.no_grad():
qz = self.inf_model(x)
pz = self.gen_model.prior(batch_shape)
try: # not every design admits tractable KL
R = td.kl_divergence(qz, pz)
except NotImplementedError:
# MC estimation of KL(q(z|x)||p(z))
z = qz.sample((sample_size,))
R = (qz.log_prob(z) - pz.log_prob(z)).mean(0)
D = 0
ratios = [None] * sample_size
for k in range(sample_size):
z = qz.sample()
px_z = self.gen_model.obs_model(z)
ratios[k] = pz.log_prob(z) + px_z.log_prob(x) - qz.log_prob(z)
D = D - px_z.log_prob(x)
ratios = torch.stack(ratios, dim=-1)
L = torch.logsumexp(ratios, dim=-1) - np.log(sample_size)
D = D / sample_size
return D, R, L
def elbo(self, x, sample_size=None):
"""
An MC estimate of ELBO = -D -R
x: [batch_size] + data_shape
sample_size: if 1 or more, we use multiple samples
sample_size controls a sequential computation (a for loop)
"""
D, R, _ = self.DRL(x, sample_size=sample_size)
return -D -R
def log_prob_estimate(self, x, sample_size=None):
"""
An importance sampling estimate of log p(x)
x: [batch_size] + data_shape
sample_size: if 1 or more, we use multiple samples
sample_size controls a sequential computation (a for loop)
"""
_, _, L = self.DRL(x, sample_size=sample_size)
return L
def forward(self, x, sample_size=None, rate_weight=1.):
"""
A surrogate for an MC estimate of - grad ELBO
x: [batch_size] + data_shape
sample_size: if 1 or more, we use multiple samples
sample_size controls a sequential computation (a for loop)
cv: optional module for variance reduction
"""
sample_size = sample_size or 1
obs_dims = len(self.gen_model.cpd_net.outcome_shape)
batch_shape = x.shape[:-obs_dims]
qz = self.inf_model(x)
pz = self.gen_model.prior(batch_shape)
# we can *always* make use of the score function estimator (SFE)
use_sfe = True
# these 3 log densities will contribute to the different parts of the objective
log_p_x_z = 0.
log_p_z = 0.
log_q_z_x = 0.
# these quantities will help us compute the SFE part of the objective
# (if needed)
sfe = 0
reward = 0
cv_reward = 0
raw_r = 0
cv_loss = 0
for _ in range(sample_size):
# Obtain a sample
if qz.has_rsample: # this is how td objects tell us whether they are continuously reparameterisable
z = qz.rsample()
use_sfe = False # with path derivatives, we do not need SFE
else:
z = qz.sample()
# Parameterise the observational model
px_z = self.gen_model.obs_model(z)
# Compute all three relevant densities:
# p(x|z,theta)
log_p_x_z = log_p_x_z + px_z.log_prob(x)
# q(z|x,lambda)
log_q_z_x = log_q_z_x + qz.log_prob(z)
# p(z|theta)
log_p_z = log_p_z + pz.log_prob(z)
# Compute the "reward" for SFE
raw_r = log_p_x_z + log_p_z - log_q_z_x
# Apply variance reduction techniques
r, l = self.cv_model(raw_r.detach(), x=x, q=qz, r_fn=lambda a: self.gen_model(a).log_prob(x))
cv_loss = cv_loss + l
# SFE part for updating lambda
sfe = sfe + r.detach() * qz.log_prob(z)
# Compute the sample mean for the different terms
sfe = (sfe / sample_size)
cv_loss = cv_loss / sample_size
log_p_x_z = log_p_x_z / sample_size
log_p_z = log_p_z / sample_size
log_q_z_x = log_q_z_x / sample_size
D = - log_p_x_z
try: # not every design admits tractable KL
R = td.kl_divergence(qz, pz)
except NotImplementedError:
R = log_q_z_x - log_p_z
if use_sfe:
# the first two terms update theta
# the last term updates lambda
elbo_grad_surrogate = log_p_x_z + log_p_z + sfe
# note that the term (log_p_x_z + log_p_z) is also part of sfe
# but there it is detached, meaning that it won't contribute to
# grad theta
else:
# without SFE, we can use the classic form of the ELBO
elbo_grad_surrogate = -D - R
loss = -elbo_grad_surrogate + cv_loss
return {'loss': loss.mean(0), 'ELBO': (-D -R).mean(0).item(), 'D': D.mean(0).item(), 'R': R.mean(0).item(), 'cv_loss': cv_loss.mean(0).item()}
```
Here's an example
```python
vae = NVIL(
JointDistribution(
GaussianPriorNet(10),
BinarizedImageModel(
num_channels=img_shape[0],
width=img_shape[1],
height=img_shape[2],
latent_size=10,
p_drop=0.1,
decoder_type=build_ffnn_decoder
)
),
InferenceModel(
cpd_net_type=GaussianCPDNet,
latent_size=10,
num_channels=img_shape[0],
width=img_shape[1],
height=img_shape[2],
encoder_type=build_ffnn_encoder
),
VarianceReduction()
)
vae
```
NVIL(
(gen_model): JointDistribution(
(prior_net): GaussianPriorNet()
(cpd_net): BinarizedImageModel(
(decoder): Sequential(
(0): Dropout(p=0.1, inplace=False)
(1): Linear(in_features=10, out_features=512, bias=True)
(2): ReLU()
(3): Dropout(p=0.1, inplace=False)
(4): Linear(in_features=512, out_features=512, bias=True)
(5): ReLU()
(6): Dropout(p=0.1, inplace=False)
(7): Linear(in_features=512, out_features=4096, bias=True)
(8): ReshapeLast()
)
)
)
(inf_model): InferenceModel(
(encoder): Sequential(
(0): FlattenImage()
(1): Dropout(p=0.0, inplace=False)
(2): Linear(in_features=4096, out_features=512, bias=True)
(3): ReLU()
(4): Dropout(p=0.0, inplace=False)
(5): Linear(in_features=512, out_features=512, bias=True)
(6): ReLU()
(7): Dropout(p=0.0, inplace=False)
(8): Linear(in_features=512, out_features=1024, bias=True)
)
(cpd_net): GaussianCPDNet(
(encoder): Sequential(
(0): Dropout(p=0.0, inplace=False)
(1): Linear(in_features=1024, out_features=20, bias=True)
(2): ReLU()
)
(locs): Sequential(
(0): Dropout(p=0.0, inplace=False)
(1): Linear(in_features=20, out_features=10, bias=True)
(2): ReshapeLast()
)
(scales): Sequential(
(0): Dropout(p=0.0, inplace=False)
(1): Linear(in_features=20, out_features=10, bias=True)
(2): Softplus(beta=1, threshold=20)
(3): ReshapeLast()
)
)
)
(cv_model): VarianceReduction()
)
```python
for x, y in train_loader:
print('x.shape:', x.shape)
print(vae(x))
break
```
x.shape: torch.Size([64, 1, 64, 64])
{'loss': tensor(2845.1970, grad_fn=<MeanBackward1>), 'ELBO': -2845.197021484375, 'D': 2843.9189453125, 'R': 1.27774977684021, 'cv_loss': 0.0}
#### 3.3.1 Training algorithm
We have up to three components (recall that some control variates can have their own parameters), so we will be manipulating up to three optimisers:
```python
class OptCollection:
def __init__(self, gen, inf, cv=None):
self.gen = gen
self.inf = inf
self.cv = cv
def zero_grad(self):
self.gen.zero_grad()
self.inf.zero_grad()
if self.cv:
self.cv.zero_grad()
def step(self):
self.gen.step()
self.inf.step()
if self.cv:
self.cv.step()
```
Here's some helper code to assess and train the model
```python
from collections import defaultdict, OrderedDict
from tqdm.auto import tqdm
def assess(model, sample_size, dl, device):
"""
Wrapper for estimating a model's ELBO, distortion, rate, and log-likelihood
using all data points in a data loader.
"""
D = 0
R = 0
L = 0
data_size = 0
with torch.no_grad():
for batch_x, batch_y in dl:
Dx, Rx, Lx = model.DRL(batch_x.to(device), sample_size=sample_size)
D = D + Dx.sum(0)
R = R + Rx.sum(0)
L = L + Lx.sum(0)
data_size += batch_x.shape[0]
D = D / data_size
R = R / data_size
L = L / data_size
return {'ELBO': (-D -R).item(), 'D': D.item(), 'R': R.item(), 'L': L.item()}
def train_vae(model: NVIL, opts: OptCollection,
training_data, dev_data,
batch_size=64, num_epochs=10, check_every=10,
sample_size_training=1,
sample_size_eval=10,
grad_clip=5.,
num_workers=2,
device=torch.device('cuda:0')
):
"""
model: pytorch model
    opts: an OptCollection with optimisers for the generative, inference,
        and (optional) control-variate parameters
    training_data: a torch Dataset with the training images
    dev_data: a torch Dataset with the validation images
batch_size: use more if you have more memory
num_epochs: use more for improved convergence
check_every: use less to check performance on dev set more often
device: where we run the experiment
Return a log of quantities computed during training (for plotting)
"""
batcher = DataLoader(training_data, batch_size, shuffle=True, num_workers=num_workers, pin_memory=True)
dev_batcher = DataLoader(dev_data, batch_size, num_workers=num_workers, pin_memory=True)
total_steps = num_epochs * len(batcher)
log = defaultdict(list)
step = 0
model.eval()
for k, v in assess(model, sample_size_eval, dev_batcher, device=device).items():
log[f"dev.{k}"].append((step, v))
with tqdm(range(total_steps)) as bar:
for epoch in range(num_epochs):
for batch_x, batch_y in batcher:
model.train()
opts.zero_grad()
loss_dict = model(
batch_x.to(device),
sample_size=sample_size_training,
)
for metric, value in loss_dict.items():
log[f'training.{metric}'].append((step, value))
loss_dict['loss'].backward()
nn.utils.clip_grad_norm_(
model.parameters(),
grad_clip
)
opts.step()
bar_dict = OrderedDict()
for metric, value in loss_dict.items():
bar_dict[f'training.{metric}'] = f"{loss_dict[metric]:.2f}"
for metric in ['ELBO', 'D', 'R', 'L']:
bar_dict[f"dev.{metric}"] = "{:.2f}".format(log[f"dev.{metric}"][-1][1])
bar.set_postfix(bar_dict)
bar.update()
if step % check_every == 0:
model.eval()
for k, v in assess(model, sample_size_eval, dev_batcher, device=device).items():
log[f"dev.{k}"].append((step, v))
step += 1
model.eval()
for k, v in assess(model, sample_size_eval, dev_batcher, device=device).items():
log[f"dev.{k}"].append((step, v))
return log
```
And, finally, some code to help inspect samples
```python
def inspect_lvm(model, dl, device):
for x, y in dl:
x_ = model.sample(16, 4, oversample=True).cpu().reshape(-1, 1, 64, 64)
plt.figure(figsize=(16,8))
plt.axis('off')
plt.imshow(make_grid(x_, nrow=16).permute((1, 2, 0)))
plt.title("Prior samples")
plt.show()
plt.figure(figsize=(16,8))
plt.axis('off')
plt.imshow(make_grid(x, nrow=16).permute((1, 2, 0)))
plt.title("Observations")
plt.show()
x_ = model.cond_sample(x.to(device)).cpu().reshape(-1, 1, 64, 64)
plt.figure(figsize=(16,8))
plt.axis('off')
plt.imshow(make_grid(x_, nrow=16).permute((1, 2, 0)))
plt.title("Conditional samples")
plt.show()
break
```
#### 3.3.2 Variance reduction
Here are some concrete strategies for variance reduction. You can skip these on a first pass.
```python
class CentredReward(VarianceReduction):
"""
This control variate does not have trainable parameters,
it maintains a running estimate of the average reward and updates
a batch of rewards by computing reward - avg.
"""
def __init__(self, alpha=0.9):
super().__init__()
self._alpha = alpha
self._r_mean = 0.
def forward(self, r, x=None, q=None, r_fn=None):
"""
Centre the reward and update running estimates of mean.
"""
with torch.no_grad():
# sufficient statistics for next updates
r_mean = torch.mean(r, dim=0)
# centre the signal
r = r - self._r_mean
# update running estimate of mean
self._r_mean = (1-self._alpha) * self._r_mean + self._alpha * r_mean.item()
return r, torch.zeros_like(r)
class ScaledReward(VarianceReduction):
"""
This control variate does not have trainable parameters,
it maintains a running estimate of the reward's standard deviation and
updates a batch of rewards by computing reward / maximum(stddev, 1).
"""
def __init__(self, alpha=0.9):
super().__init__()
self._alpha = alpha
self._r_std = 1.0
def forward(self, r, x=None, q=None, r_fn=None):
"""
Scale the reward by a running estimate of std, and also update the estimate.
"""
with torch.no_grad():
# sufficient statistics for next updates
r_std = torch.std(r, dim=0)
# standardise the signal
r = r / self._r_std
# update running estimate of std
self._r_std = (1-self._alpha) * self._r_std + self._alpha * r_std.item()
# it's not safe to standardise with scales less than 1
self._r_std = np.maximum(self._r_std, 1.)
return r, torch.zeros_like(r)
class SelfCritic(VarianceReduction):
"""
This control variate does not have trainable parameters,
it updates a batch of rewards by computing reward - reward', where
reward' is (log p(X=x|Z=z')).detach() assessed for a novel sample
z' ~ Z|X=x.
"""
def __init__(self):
super().__init__()
def forward(self, r, x, q, r_fn):
"""
        Centre the reward with a self-critic baseline: r - log p(X=x|Z=z') for a fresh
        sample z' ~ Z|X=x (the baseline is not differentiated through).
"""
with torch.no_grad():
z = q.sample()
            r = r - r_fn(z)  # r_fn expects only the new sample; x is already closed over
return r, torch.zeros_like(r)
class Baseline(VarianceReduction):
"""
An input-dependent baseline implemented as an MLP.
The trainable parameters are adjusted via MSE.
"""
def __init__(self, num_inputs, hidden_size, p_drop=0.):
super().__init__()
self.baseline = nn.Sequential(
FlattenImage(),
nn.Dropout(p_drop),
nn.Linear(num_inputs, hidden_size),
nn.ReLU(),
nn.Dropout(p_drop),
nn.Linear(hidden_size, 1)
)
def forward(self, r, x, q=None, r_fn=None):
"""
Return r - baseline(x) and Baseline's loss.
"""
# batch_shape + (1,)
r_hat = self.baseline(x)
# batch_shape
r_hat = r_hat.squeeze(-1)
loss = (r - r_hat)**2
return r - r_hat.detach(), loss
class CVChain(VarianceReduction):
def __init__(self, *args):
super().__init__()
if len(args) == 1 and isinstance(args[0], OrderedDict):
for key, module in args[0].items():
self.add_module(key, module)
else:
for idx, module in enumerate(args):
self.add_module(str(idx), module)
def forward(self, r, x, q, r_fn):
loss = 0
for cv in self._modules.values():
r, l = cv(r, x=x, q=q, r_fn=r_fn)
loss = loss + l
return r, loss
```
#### 3.3.3 Experiment
```python
seed_all()
my_device = torch.device('cuda:0')
model = NVIL(
JointDistribution(
GaussianPriorNet(10),
BinarizedImageModel(
num_channels=img_shape[0],
width=img_shape[1],
height=img_shape[2],
latent_size=10,
p_drop=0.1
)
),
InferenceModel(
latent_size=10,
num_channels=img_shape[0],
width=img_shape[1],
height=img_shape[2],
cpd_net_type=GaussianCPDNet # Gaussian prior and Gaussian posterior: this is a classic VAE
),
VarianceReduction(), # no variance reduction is needed for a VAE
#CVChain( # variance reduction helps SFE
# CentredReward(),
# #Baseline(np.prod(img_shape), 512), # this is how you would use a trained baselined
# #ScaledReward()
#)
).to(my_device)
opts = OptCollection(
# Tips based on empirical practice:
# Adam is the go-to choice for (reparameterised) VAEs
opt.Adam(model.gen_params(), lr=5e-4, weight_decay=1e-6),
opt.Adam(model.inf_params(), lr=1e-4),
# Adam is not often a good choice for SFE-based optimisation
# a possible reason: SFE is too noisy and the design choices behind Adam
# were made having reparameterised gradients in mind
#opt.RMSprop(model.gen_params(), lr=5e-4, weight_decay=1e-6),
#opt.RMSprop(model.inf_params(), lr=1e-4),
#opt.RMSprop(model.cv_params(), lr=1e-4, weight_decay=1e-6) # you need this if your baseline has trainable parameters
)
model
```
NVIL(
(gen_model): JointDistribution(
(prior_net): GaussianPriorNet()
(cpd_net): BinarizedImageModel(
(decoder): Sequential(
(0): Dropout(p=0.1, inplace=False)
(1): Linear(in_features=10, out_features=512, bias=True)
(2): ReLU()
(3): Dropout(p=0.1, inplace=False)
(4): Linear(in_features=512, out_features=512, bias=True)
(5): ReLU()
(6): Dropout(p=0.1, inplace=False)
(7): Linear(in_features=512, out_features=4096, bias=True)
(8): ReshapeLast()
)
)
)
(inf_model): InferenceModel(
(encoder): Sequential(
(0): FlattenImage()
(1): Dropout(p=0.0, inplace=False)
(2): Linear(in_features=4096, out_features=512, bias=True)
(3): ReLU()
(4): Dropout(p=0.0, inplace=False)
(5): Linear(in_features=512, out_features=512, bias=True)
(6): ReLU()
(7): Dropout(p=0.0, inplace=False)
(8): Linear(in_features=512, out_features=1024, bias=True)
)
(cpd_net): GaussianCPDNet(
(encoder): Sequential(
(0): Dropout(p=0.0, inplace=False)
(1): Linear(in_features=1024, out_features=20, bias=True)
(2): ReLU()
)
(locs): Sequential(
(0): Dropout(p=0.0, inplace=False)
(1): Linear(in_features=20, out_features=10, bias=True)
(2): ReshapeLast()
)
(scales): Sequential(
(0): Dropout(p=0.0, inplace=False)
(1): Linear(in_features=20, out_features=10, bias=True)
(2): Softplus(beta=1, threshold=20)
(3): ReshapeLast()
)
)
)
(cv_model): VarianceReduction()
)
```python
log = train_vae(
model=model,
opts=opts,
training_data=train_ds,
dev_data=val_ds,
batch_size=256,
num_epochs=5, # use more for better models
check_every=100,
sample_size_training=1,
sample_size_eval=1,
grad_clip=5.,
device=my_device
)
```
```python
log.keys()
```
dict_keys(['dev.ELBO', 'dev.D', 'dev.R', 'dev.L', 'training.loss', 'training.ELBO', 'training.D', 'training.R', 'training.cv_loss'])
```python
fig, axs = plt.subplots(1, 3 + int('training.cv_loss' in log), sharex=True, sharey=False, figsize=(20, 5))
_ = axs[0].plot(np.array(log['training.ELBO'])[:,0], np.array(log['training.ELBO'])[:,1])
_ = axs[0].set_ylabel("training ELBO")
_ = axs[0].set_xlabel("steps")
_ = axs[1].plot(np.array(log['training.D'])[:,0], np.array(log['training.D'])[:,1])
_ = axs[1].set_ylabel("training D")
_ = axs[1].set_xlabel("steps")
_ = axs[2].plot(np.array(log['training.R'])[:,0], np.array(log['training.R'])[:,1])
_ = axs[2].set_ylabel("training R")
_ = axs[2].set_xlabel("steps")
if 'training.cv_loss' in log:
_ = axs[3].plot(np.array(log['training.cv_loss'])[:,0], np.array(log['training.cv_loss'])[:,1])
_ = axs[3].set_ylabel("cv loss")
_ = axs[3].set_xlabel("steps")
fig.tight_layout(h_pad=2, w_pad=2)
```
```python
fig, axs = plt.subplots(1, 4, sharex=True, sharey=False, figsize=(20, 5))
_ = axs[0].plot(np.array(log['dev.ELBO'])[:,0], np.array(log['dev.ELBO'])[:,1])
_ = axs[0].set_ylabel("dev ELBO")
_ = axs[0].set_xlabel("steps")
_ = axs[1].plot(np.array(log['dev.D'])[:,0], np.array(log['dev.D'])[:,1])
_ = axs[1].set_ylabel("dev D")
_ = axs[1].set_xlabel("steps")
_ = axs[2].plot(np.array(log['dev.R'])[:,0], np.array(log['dev.R'])[:,1])
_ = axs[2].set_ylabel("dev R")
_ = axs[2].set_xlabel("steps")
_ = axs[3].plot(np.array(log['dev.L'])[:,0], np.array(log['dev.L'])[:,1])
_ = axs[3].set_ylabel("dev L")
_ = axs[3].set_xlabel("steps")
fig.tight_layout(h_pad=2, w_pad=2)
```
```python
inspect_lvm(model, DataLoader(val_ds, 64, num_workers=2, pin_memory=True), my_device)
```
## 4. Beyond
There are various things you can try.
You can try to use a **trainable prior**. If you do, you will probably note that it is not trivial how to get the prior to be used in an interesting way. In fact, trained priors are completely data-driven, and there's no reason to believe that an NN will find a "data-driven" explanation of the data that is anything like what you would like it to. If you want the different components of your prior to specialise to certain types of output, you will need to design stronger pressures. For example, you may use some degree of annotation to inform what each component should typically be responsible for. Ideas that dispense with the need for annotation will have to focus on the architecture of the decoder or on other penalties in the loss. For example, a decoder that is built with some geometric properties.
If you use a trainable prior, it is a good idea to try and visualise what the final prior looks like. You can try sampling from it and plotting histograms of the different coordinates of the samples. You can flatten the samples and inspect the coordinates marginally, you can also use other plotting tools (see [some examples from seaborn](https://seaborn.pydata.org/examples/scatterplot_matrix.html)) to spot dependency, for example. And, of course, you can always use tools for dimensionality reduction (eg, [t-SNE](https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html)).
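For instance, here's a sketch of the histogram idea, assuming `model` is a trained `NVIL` instance with a 10-dimensional latent space like the one above:
```python
# Sample from the (possibly trained) prior and inspect each coordinate marginally
with torch.no_grad():
    pz = model.gen_model.prior((1000,))
    z = pz.sample().cpu().numpy()          # [1000, latent_size]

fig, axs = plt.subplots(2, 5, figsize=(20, 6))
for d, ax in enumerate(axs.flatten()):
    ax.hist(z[:, d], bins=30, density=True)
    ax.set_title(f"z[{d}]")
fig.tight_layout()
```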
You can try to use a **stronger posterior approximation** that is reparameterisable. A good idea is to build a bijective transformation (ie., a normalising flow).
You can try to **improve the gradient estimator** of the mixture of mean field families. Within a mixture of C components that are each reparameterisable, only the component assigmment is a discrete operation, so with access to the sampling procedure internal to mixture, one can design a customised gradient estimator that updates $\omega(x; \lambda)$ through SFE, but updates $\mu(x; \lambda)$ and $\sigma(x;\lambda)$ through reparameterisation.
# <u>Count models</u>
Counting is child's play, as easy as 1, 2, 3, right?
As usual, the answer's not so simple: some things are easy to count and some things aren't, making some numbers easier to predict than others. What do we do?
1. <font color="darkorchid">**Count what's available**</font> as carefully as possible.
2. Build an appropriate <font color="darkorchid">**probability model**</font> to predict likely outcomes.
To explore this, we'll look at 4 real examples:
* cases of chronic medical conditions,
* car crashes in Tennessee,
* births in the United States, and
* coughing in Spain.
We'll see how these relate to 3 fundamental <font color="darkorchid">**count models**</font>:
* binomial models,
* Poisson models, and
* negative binomial models.
If time permits, we'll also talk about using the <font color="darkorchid">**Kolmogorov-Smirnov test**</font> to compare observed and simulated samples. When we meet again, we'll see how to build better predictive models, namely <font color="darkorchid">**regression models for counts**</font> that incorporate predictor variables.
All of this (and more!) is in Chapters 5 and 7 of my Manning book, [Regression: A friendly guide](https://www.manning.com/books/regression-a-friendly-guide); these 2 chapters will be added to the MEAP soon. If you just can't wait, this notebook and the relevant CSVs are available <font color="deeppink"><u>_now!_</u></font> in my [regression repo on github](https://github.com/mbrudd/regression) — please clone and submit comments!
## <u>Imports and settings</u>
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import binom
from scipy.stats import poisson
from scipy.stats import nbinom
from scipy.stats import kstest
import warnings
warnings.filterwarnings('ignore')
sns.set_theme()
plt.rcParams['figure.figsize'] = [15,6]
```
## <u>Binomial models</u>
### Chronic conditions by age
Here's a counting problem: given a group of people, how many have a chronic medical condition? Let's see what the 2009 [National Ambulatory Medical Care Survey (NAMCS)](https://www.cdc.gov/nchs/ahcd/index.htm) says:
```python
chronic = pd.read_csv("chronic.csv")
cohorts = pd.read_csv("chronic_cohorts.csv")
cohorts
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Age</th>
<th>Total</th>
<th>Sick</th>
<th>Percentage</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>796</td>
<td>21</td>
<td>0.026382</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>505</td>
<td>32</td>
<td>0.063366</td>
</tr>
<tr>
<th>2</th>
<td>2</td>
<td>320</td>
<td>29</td>
<td>0.090625</td>
</tr>
<tr>
<th>3</th>
<td>3</td>
<td>258</td>
<td>37</td>
<td>0.143411</td>
</tr>
<tr>
<th>4</th>
<td>4</td>
<td>299</td>
<td>30</td>
<td>0.100334</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>96</th>
<td>96</td>
<td>13</td>
<td>12</td>
<td>0.923077</td>
</tr>
<tr>
<th>97</th>
<td>97</td>
<td>11</td>
<td>10</td>
<td>0.909091</td>
</tr>
<tr>
<th>98</th>
<td>98</td>
<td>6</td>
<td>5</td>
<td>0.833333</td>
</tr>
<tr>
<th>99</th>
<td>99</td>
<td>2</td>
<td>1</td>
<td>0.500000</td>
</tr>
<tr>
<th>100</th>
<td>100</td>
<td>6</td>
<td>5</td>
<td>0.833333</td>
</tr>
</tbody>
</table>
<p>101 rows × 4 columns</p>
</div>
In this dataset, <font color="darkorchid">_Sick_</font> means having at least one of the following conditions:
* arthritis,
* asthma,
* COPD,
* cancer,
* depression,
* diabetes,
* hyperlipidemia,
* hypertension,
* obesity,
* osteoporosis,
* cerebrovascular disease,
* chronic renal failure,
* congestive heart failure, or
* ischemic heart disease.
This is quite a list! The snippet shown corroborates what's expected: older people are more likely to have at least one of these conditions. For example, 30.4% of 25-year-olds have one and 81.43% of the 65-year-olds do:
```python
cohorts[ (cohorts["Age"]==25) | (cohorts["Age"]==65) ]
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Age</th>
<th>Total</th>
<th>Sick</th>
<th>Percentage</th>
</tr>
</thead>
<tbody>
<tr>
<th>25</th>
<td>25</td>
<td>273</td>
<td>83</td>
<td>0.304029</td>
</tr>
<tr>
<th>65</th>
<td>65</td>
<td>463</td>
<td>377</td>
<td>0.814255</td>
</tr>
</tbody>
</table>
</div>
These percentages are <font color="darkorchid">**empirical probabilities**</font>. Based on them, what are the chances of having <u>_exactly_</u> (a) 83 sick people out of 273 25-year-olds and (b) 377 sick people out of 463 65-year-olds? We need
### The binomial distribution
If you try something $n$ times with a chance $p$ of <font color="darkorchid">**success**</font> on each <font color="darkorchid">**trial**</font>, the number $Y$ of successes is a <font color="darkorchid">**random variable**</font>: you don't know what the exact outcome will be until it happens, but you know the probability of each possible outcome. In fact, the probability of getting exactly $k$ successes is
$$P( \, Y = k \, ) ~ = ~ \binom{n}{k} \, p^{k} \, (1-p)^{n-k} \ , \quad \text{where} \quad \binom{n}{k} = \frac{ n! }{ k! (n-k)! } \quad . $$
This is the <font color="darkorchid">**probability mass function (PMF)**</font> for the <font color="darkorchid">**binomial distribution**</font> $B(n,p)$. For a <font color="darkorchid">**binomial random variable**</font> $Y \sim B(n,p)$,
$$\operatorname{E}\left( \, Y \, \right) ~ = ~ np \quad \text{and} \quad \operatorname{Var}( \, Y \, ) ~ = ~ np(1-p) \ .$$
> **Use the `binom` module from `scipy` (`import`ed above) to work with binomial distributions.**
25-year-olds in the NAMCS sample are like a <font color="darkorchid">**binomial experiment**</font> with $n=273$ and $p=.304029$ ; the chance of observing exactly 83 successes (perverse lingo, I know!) is just over 5% :
```python
p_25 = cohorts["Percentage"][25]
binom.pmf(83,273,p_25)
```
0.05243020970380286
65-year-olds are like a binomial experiment with $n=463$ and $p=.814255$ ; the chance of observing exactly 377 successes is just under 5% :
```python
p_65 = cohorts["Percentage"][65]
binom.pmf(377,463,p_65)
```
0.047625769242453334
Don't be alarmed at these low probabilities — these are actually the most likely outcomes:
```python
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.suptitle('Binomial models of chronic condition incidence')
x = np.arange(binom.ppf(0.01, 273, p_25),
binom.ppf(0.99, 273, p_25))
ax1.plot(x, binom.pmf(x, 273, p_25), 'bo', ms=8)
ax1.vlines(x, 0, binom.pmf(x, 273, p_25), colors='b', lw=3, alpha=0.5)
ax1.set_xlabel('25-year-olds')
x = np.arange(binom.ppf(0.01, 463, p_65),
binom.ppf(0.99, 463, p_65))
ax2.plot(x, binom.pmf(x, 463, p_65), 'bo', ms=8)
ax2.vlines(x, 0, binom.pmf(x, 463, p_65), colors='b', lw=3, alpha=0.5)
ax2.set_xlabel('65-year-olds');
```
It's no accident that these look like normal distributions — if $n$ is large and/or $p$ is close to $0.5$, $B(n,p)$ is approximately normal with mean $np$ and variance $np(1-p)$ :
$$ B(n,p) ~ \approx ~ N( np, np(1-p) ) \quad \text{for} \quad n \gg 1 \quad \text{or} \quad p \approx 0.5 \ .$$
Turn this around for some quick and dirty calculations. For example, $B(463,.814) \approx N( 377, 70)$, so there will usually be between $377 - 2\sqrt{70} \approx 360 $ and $377 + 2\sqrt{70} \approx 394$ successes, as you can easily check above!
Things are different if $n$ is small and/or $p$ is far from $0.5$. No worries, though — `binom.pmf` works fine either way.
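You can check the two-standard-deviation claim directly; the probability of landing between 360 and 394 successes comes out a bit over 95%, as the normal approximation predicts:
```python
binom.cdf(394, 463, p_65) - binom.cdf(359, 463, p_65)   # P( 360 <= successes <= 394 )
```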
### Binomial logistic regression
Instead of working separately with each age group — i.e., each <font color="firebrick">**covariate class**</font> — we should really construct a <font color="firebrick">**binomial logistic regression model**</font> to consolidate the age group percentages efficiently.
```python
sns.regplot(data=chronic,x="Age",
y="Condition",
logistic=True,
scatter=False,
ci=None,
line_kws={"lw":"4"})
plt.xlim(-2,102)
plt.ylabel("Probability")
plt.plot( cohorts["Age"], cohorts["Percentage"],'.k');
```
This particular logistic model, plotted in blue, is <font color="firebrick">**simple**</font> — it involves only one predictor :
$$\log{ \left( \text{Odds of a condition} \right) } ~ = ~ -2.04 + .052*\text{Age} \ . $$
This model is really a family of binomial distributions, one for each covariate class; each probability is directly related to `Age`.
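If you want to recover those coefficients yourself, here's a sketch using statsmodels (an extra dependency, not imported above); it assumes the `Condition` column of `chronic` is coded 0/1:
```python
import statsmodels.formula.api as smf

logit_fit = smf.logit("Condition ~ Age", data=chronic).fit()
logit_fit.params   # intercept and Age slope, which should land near -2.04 and .052
```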
## <u>Poisson models</u>
### Monthly car crashes
Here's a different counting problem: how many traffic accidents are there each month where you live? For my home state of Tennessee, the [Department of Transportation provides the relevant data](https://www.tn.gov/safety/stats/crashdata.html):
```python
crashes = pd.read_csv("TDOT.csv")
crashes
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>County</th>
<th>Year</th>
<th>January</th>
<th>February</th>
<th>March</th>
<th>April</th>
<th>May</th>
<th>June</th>
<th>July</th>
<th>August</th>
<th>September</th>
<th>October</th>
<th>November</th>
<th>December</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Anderson</td>
<td>2010</td>
<td>138</td>
<td>113</td>
<td>169</td>
<td>169</td>
<td>155</td>
<td>152</td>
<td>165</td>
<td>176</td>
<td>178</td>
<td>169</td>
<td>179</td>
<td>152</td>
</tr>
<tr>
<th>1</th>
<td>Bedford</td>
<td>2010</td>
<td>71</td>
<td>72</td>
<td>77</td>
<td>70</td>
<td>87</td>
<td>99</td>
<td>90</td>
<td>91</td>
<td>86</td>
<td>118</td>
<td>105</td>
<td>95</td>
</tr>
<tr>
<th>2</th>
<td>Benton</td>
<td>2010</td>
<td>21</td>
<td>23</td>
<td>27</td>
<td>32</td>
<td>34</td>
<td>29</td>
<td>16</td>
<td>43</td>
<td>31</td>
<td>50</td>
<td>29</td>
<td>36</td>
</tr>
<tr>
<th>3</th>
<td>Bledsoe</td>
<td>2010</td>
<td>5</td>
<td>1</td>
<td>3</td>
<td>3</td>
<td>5</td>
<td>1</td>
<td>7</td>
<td>4</td>
<td>3</td>
<td>6</td>
<td>4</td>
<td>3</td>
</tr>
<tr>
<th>4</th>
<td>Blount</td>
<td>2010</td>
<td>126</td>
<td>159</td>
<td>171</td>
<td>145</td>
<td>153</td>
<td>139</td>
<td>201</td>
<td>264</td>
<td>338</td>
<td>267</td>
<td>316</td>
<td>269</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>1040</th>
<td>Wayne</td>
<td>2020</td>
<td>23</td>
<td>15</td>
<td>19</td>
<td>27</td>
<td>18</td>
<td>22</td>
<td>20</td>
<td>24</td>
<td>23</td>
<td>18</td>
<td>22</td>
<td>19</td>
</tr>
<tr>
<th>1041</th>
<td>Weakley</td>
<td>2020</td>
<td>39</td>
<td>35</td>
<td>40</td>
<td>19</td>
<td>28</td>
<td>43</td>
<td>37</td>
<td>47</td>
<td>54</td>
<td>44</td>
<td>33</td>
<td>21</td>
</tr>
<tr>
<th>1042</th>
<td>White</td>
<td>2020</td>
<td>30</td>
<td>31</td>
<td>33</td>
<td>21</td>
<td>50</td>
<td>32</td>
<td>36</td>
<td>46</td>
<td>34</td>
<td>29</td>
<td>42</td>
<td>43</td>
</tr>
<tr>
<th>1043</th>
<td>Williamson</td>
<td>2020</td>
<td>394</td>
<td>424</td>
<td>270</td>
<td>170</td>
<td>226</td>
<td>304</td>
<td>312</td>
<td>310</td>
<td>293</td>
<td>405</td>
<td>368</td>
<td>377</td>
</tr>
<tr>
<th>1044</th>
<td>Wilson</td>
<td>2020</td>
<td>282</td>
<td>268</td>
<td>239</td>
<td>143</td>
<td>267</td>
<td>258</td>
<td>285</td>
<td>295</td>
<td>273</td>
<td>306</td>
<td>335</td>
<td>299</td>
</tr>
</tbody>
</table>
<p>1045 rows × 14 columns</p>
</div>
```python
crashes["County"].nunique()
```
95
Of these 95 counties, let's check out Meigs County (home of the ghost town [Cute, Tennessee](https://en.wikipedia.org/wiki/Cute,_Tennessee)) :
```python
meigs = crashes[crashes["County"]=="Meigs"]
meigs = meigs.drop(columns="County")
meigs = meigs.melt( id_vars="Year", var_name="Month", value_name="Crashes" )
meigs
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Year</th>
<th>Month</th>
<th>Crashes</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2010</td>
<td>January</td>
<td>11</td>
</tr>
<tr>
<th>1</th>
<td>2011</td>
<td>January</td>
<td>13</td>
</tr>
<tr>
<th>2</th>
<td>2012</td>
<td>January</td>
<td>12</td>
</tr>
<tr>
<th>3</th>
<td>2013</td>
<td>January</td>
<td>10</td>
</tr>
<tr>
<th>4</th>
<td>2014</td>
<td>January</td>
<td>16</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>127</th>
<td>2016</td>
<td>December</td>
<td>16</td>
</tr>
<tr>
<th>128</th>
<td>2017</td>
<td>December</td>
<td>18</td>
</tr>
<tr>
<th>129</th>
<td>2018</td>
<td>December</td>
<td>9</td>
</tr>
<tr>
<th>130</th>
<td>2019</td>
<td>December</td>
<td>11</td>
</tr>
<tr>
<th>131</th>
<td>2020</td>
<td>December</td>
<td>17</td>
</tr>
</tbody>
</table>
<p>132 rows × 3 columns</p>
</div>
```python
sns.histplot( data=meigs, x="Crashes", discrete=True)
plt.title("Monthly crashes in Meigs County, TN");
```
### The Poisson distribution
What's the relevant probability distribution? Let's see...
* There are a bunch of encounters between cars every month — lots of opportunities for accidents to occur.
* Most encounters don't result in an accident (thank goodness!), but accidents occur at a roughly constant rate per month.
This is a binomial experiment with a large number $n$ of trials, a small chance $p$ of "success" (an accident), and a roughly constant expected number $\lambda = np$ of successes per month, so that $\displaystyle{p = \frac{\lambda}{n}} \ll 1$. Letting $Y$ denote the number of accidents per month,
$$ \begin{align}
P( Y = k ) & ~ = ~ \frac{ n! }{ k! (n-k)! } \left( \frac{\lambda}{n} \right)^{k} \left( 1 - \frac{\lambda}{n} \right)^{(n-k)} \\
& ~ = ~ \frac{ n! }{ k! (n-k)! } \left( \frac{\lambda}{n} \right)^{k} \left( 1 - \frac{\lambda}{n} \right)^{n} \left( 1 - \frac{\lambda}{n} \right)^{-k} \\
& ~ \approx ~ \frac{ n! }{ k! (n-k)! } \left( \frac{\lambda}{n} \right)^{k} e^{-\lambda} \\
& ~ = ~ \frac{n (n-1) (n-2) \cdots (n-k+1) }{ n \cdot n \cdot n \cdots n } \ e^{-\lambda} \, \frac{\lambda^k}{ k! } \\
& ~ \approx ~ e^{-\lambda} \, \frac{\lambda^k}{ k! } \ .
\end{align} $$
Lo and behold, that's it! This is the PMF for the <font color="darkorchid">**Poisson distribution**</font> with <font color="darkorchid">**rate**</font> $\lambda$, denoted $\operatorname{Pois}(\lambda)$. The mean and variance of a <font color="darkorchid">**Poisson random variable**</font> $Y \sim \operatorname{Pois}(\lambda)$ are <u>_equal_</u>:
$$\operatorname{E}\left( \, Y \, \right) ~ = ~ \operatorname{Var}( \, Y \, ) ~ = ~ \lambda \ .$$
> **Use the `poisson` module from `scipy` (`import`ed above) to work with Poisson distributions.**
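For example, here's the chance of exactly 10 events in a period when the rate is $\lambda = 13.7$, along with a check that the mean and the variance both equal $\lambda$:
```python
print( poisson.pmf(10, 13.7) )              # P( Y = 10 ) when the rate is 13.7
print( poisson.stats(13.7, moments="mv") )  # mean and variance are both 13.7
```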
### Simulating monthly crashes
Now you'll see why I picked Meigs County:
```python
meigs.agg( Mean = ("Crashes","mean"), Variance = ("Crashes","var") )
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Crashes</th>
</tr>
</thead>
<tbody>
<tr>
<th>Mean</th>
<td>13.712121</td>
</tr>
<tr>
<th>Variance</th>
<td>15.198936</td>
</tr>
</tbody>
</table>
</div>
How well does the relevant Poisson distribution model crashes in Meigs County?
```python
meigs["Simulation"] = poisson.rvs(meigs["Crashes"].mean(),
size=meigs.shape[0] )
fig, axs = plt.subplots(2, sharex=True)
sns.histplot( data=meigs, x="Crashes", discrete=True, ax=axs[0])
axs[0].set_title("Observed crashes per month in Meigs County, TN")
sns.histplot(data=meigs, x="Simulation", discrete=True, ax=axs[1])
axs[1].set_title("Simulated crashes per month in Meigs County, TN");
```
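Since the Kolmogorov-Smirnov test was promised up top, here's a sketch of comparing the observed and simulated counts. Recent versions of `scipy.stats.kstest` run a two-sample test when handed two samples (older versions have `ks_2samp` for this); keep in mind that the KS test is built for continuous distributions, so for counts it's only a rough check, and a large p-value just means the two samples look compatible.
```python
# Two-sample KS comparison of observed vs. Poisson-simulated monthly crashes
kstest( meigs["Crashes"], meigs["Simulation"] )
```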
### Monthly births
What happens if we count _births_ every month instead of car crashes? Let's look at recent data from the [National Vital Statistics System](https://www.cdc.gov/nchs/nvss/births.htm), downloaded directly from [CDC Wonder](https://wonder.cdc.gov/natality-expanded-current.html):
```python
births = pd.read_csv("births.csv")
births
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>county</th>
<th>year</th>
<th>month</th>
<th>births</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Baldwin County, AL</td>
<td>2016</td>
<td>January</td>
<td>63</td>
</tr>
<tr>
<th>1</th>
<td>Baldwin County, AL</td>
<td>2016</td>
<td>February</td>
<td>67</td>
</tr>
<tr>
<th>2</th>
<td>Baldwin County, AL</td>
<td>2016</td>
<td>March</td>
<td>69</td>
</tr>
<tr>
<th>3</th>
<td>Baldwin County, AL</td>
<td>2016</td>
<td>April</td>
<td>63</td>
</tr>
<tr>
<th>4</th>
<td>Baldwin County, AL</td>
<td>2016</td>
<td>May</td>
<td>51</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>27727</th>
<td>Winnebago County, WI</td>
<td>2019</td>
<td>August</td>
<td>66</td>
</tr>
<tr>
<th>27728</th>
<td>Winnebago County, WI</td>
<td>2019</td>
<td>September</td>
<td>45</td>
</tr>
<tr>
<th>27729</th>
<td>Winnebago County, WI</td>
<td>2019</td>
<td>October</td>
<td>41</td>
</tr>
<tr>
<th>27730</th>
<td>Winnebago County, WI</td>
<td>2019</td>
<td>November</td>
<td>51</td>
</tr>
<tr>
<th>27731</th>
<td>Winnebago County, WI</td>
<td>2019</td>
<td>December</td>
<td>54</td>
</tr>
</tbody>
</table>
<p>27732 rows × 4 columns</p>
</div>
```python
births_by_county = births.groupby("county",
as_index=False).agg(Mean = ("births", "mean"),
Variance = ("births", "var"),
Max = ("births","max") )
births_by_county["Ratio"] = births_by_county["Variance"] / births_by_county["Mean"]
births_by_county = births_by_county[ births_by_county["Max"] < 100 ]
births_by_county = births_by_county.sort_values(by="Ratio", ascending=False)
births_by_county[ abs( births_by_county["Ratio"] - 1 ) < .05 ]
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>county</th>
<th>Mean</th>
<th>Variance</th>
<th>Max</th>
<th>Ratio</th>
</tr>
</thead>
<tbody>
<tr>
<th>572</th>
<td>Yellowstone County, MT</td>
<td>68.666667</td>
<td>71.929078</td>
<td>87</td>
<td>1.047511</td>
</tr>
<tr>
<th>293</th>
<td>Litchfield County, CT</td>
<td>45.125000</td>
<td>47.260638</td>
<td>60</td>
<td>1.047327</td>
</tr>
<tr>
<th>20</th>
<td>Ascension Parish, LA</td>
<td>51.812500</td>
<td>54.155585</td>
<td>69</td>
<td>1.045222</td>
</tr>
<tr>
<th>81</th>
<td>Catawba County, NC</td>
<td>68.666667</td>
<td>71.418440</td>
<td>87</td>
<td>1.040074</td>
</tr>
<tr>
<th>48</th>
<td>Boone County, KY</td>
<td>51.687500</td>
<td>53.070479</td>
<td>72</td>
<td>1.026757</td>
</tr>
<tr>
<th>426</th>
<td>Richland County, OH</td>
<td>36.333333</td>
<td>36.950355</td>
<td>49</td>
<td>1.016982</td>
</tr>
<tr>
<th>422</th>
<td>Randolph County, NC</td>
<td>49.333333</td>
<td>49.631206</td>
<td>65</td>
<td>1.006038</td>
</tr>
<tr>
<th>536</th>
<td>Washington County, MD</td>
<td>57.104167</td>
<td>57.414450</td>
<td>73</td>
<td>1.005434</td>
</tr>
<tr>
<th>46</th>
<td>Blount County, TN</td>
<td>32.250000</td>
<td>32.319149</td>
<td>43</td>
<td>1.002144</td>
</tr>
<tr>
<th>486</th>
<td>St. Lawrence County, NY</td>
<td>27.416667</td>
<td>27.226950</td>
<td>45</td>
<td>0.993080</td>
</tr>
</tbody>
</table>
</div>
Oddly enough, the mean and variance for births per month are closest for a county in Tennessee! How well do births in Blount County agree with $\operatorname{Pois}(32.25)$?
```python
blount = births[ births[ "county"] == "Blount County, TN" ]
fig, axs = plt.subplots(2, sharex=True)
sns.histplot( data=blount, x="births", discrete=True, ax=axs[0])
axs[0].set_title("Observed births per month in Blount County, TN")
pois_rv = pd.DataFrame({"Simulation" : poisson.rvs(np.mean(blount["births"]),
size=blount.shape[0])})
sns.histplot(data=pois_rv, x="Simulation", discrete=True, ax=axs[1])
axs[1].set_title("Simulated births per month in Blount County, TN");
```
### Simple Poisson regression: a preview
A simple Poisson regression model fits a family of Poisson RVs to observations. Each covariate class — one for each value $x$ of the predictor — has a mean rate $\lambda$ given (approximately!) by
$$ \log{ \left( \lambda \right) } ~ = ~ a + bx \ . $$
We'll discuss this in detail in the next Twitch session!
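As a teaser, here's a sketch of fitting exactly that kind of model to the Meigs County crashes with statsmodels (an extra dependency, not imported above), using `Year` as the lone predictor:
```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

pois_fit = smf.glm("Crashes ~ Year", data=meigs, family=sm.families.Poisson()).fit()
pois_fit.params   # a and b on the log scale: log(lambda) = a + b*Year
```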
## <u>Negative binomial models</u>
### Overdispersed births
The Poisson distribution can clearly be a reasonable model for car crashes or births per month in a given county — ***if*** the mean and the variance are ***equal***! This is definitely not always the case:
```python
births_by_county
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>county</th>
<th>Mean</th>
<th>Variance</th>
<th>Max</th>
<th>Ratio</th>
</tr>
</thead>
<tbody>
<tr>
<th>253</th>
<td>Kanawha County, WV</td>
<td>28.305556</td>
<td>155.989683</td>
<td>52</td>
<td>5.510921</td>
</tr>
<tr>
<th>235</th>
<td>Jackson County, MI</td>
<td>58.645833</td>
<td>157.850621</td>
<td>91</td>
<td>2.691591</td>
</tr>
<tr>
<th>423</th>
<td>Rankin County, MS</td>
<td>54.145833</td>
<td>131.063387</td>
<td>80</td>
<td>2.420563</td>
</tr>
<tr>
<th>342</th>
<td>Monroe County, IN</td>
<td>48.041667</td>
<td>115.317376</td>
<td>72</td>
<td>2.400362</td>
</tr>
<tr>
<th>441</th>
<td>Saline County, AR</td>
<td>53.541667</td>
<td>127.955674</td>
<td>79</td>
<td>2.389834</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>469</th>
<td>Sheboygan County, WI</td>
<td>39.833333</td>
<td>36.141844</td>
<td>55</td>
<td>0.907327</td>
</tr>
<tr>
<th>556</th>
<td>Whitfield County, GA</td>
<td>47.708333</td>
<td>42.593972</td>
<td>65</td>
<td>0.892799</td>
</tr>
<tr>
<th>226</th>
<td>Houston County, GA</td>
<td>70.270833</td>
<td>61.903812</td>
<td>89</td>
<td>0.880932</td>
</tr>
<tr>
<th>312</th>
<td>Marathon County, WI</td>
<td>50.479167</td>
<td>42.084663</td>
<td>66</td>
<td>0.833704</td>
</tr>
<tr>
<th>549</th>
<td>Wayne County, OH</td>
<td>46.145833</td>
<td>36.893174</td>
<td>58</td>
<td>0.799491</td>
</tr>
</tbody>
</table>
<p>226 rows × 5 columns</p>
</div>
```python
kanawha = births[ births["county"] == "Kanawha County, WV"]
kanawha.shape
```
(36, 4)
```python
fig, axs = plt.subplots(2, sharex=True)
sns.histplot(data=kanawha,
x="births",
discrete=True, ax=axs[0])
axs[0].set_title("Observed births per month in Kanawha County, WV")
pois = pd.DataFrame({"Sample" : poisson.rvs(np.mean(kanawha["births"]),
size=kanawha.shape[0])})
sns.histplot(data=pois, x="Sample",
discrete=True, ax=axs[1])
axs[1].set_title("Poisson-simulated births per month in Kanawha County, WV");
```
Monthly births in Kanawha County, WV are definitely <u>_not_</u> Poisson distributed! This is <font color="darkorchid">**overdispersion**</font>: the variance is much larger than the mean. Overdispersion and <font color="darkorchid">**underdispersion**</font> (variance smaller than the mean) are common in count modeling; in either case, Poisson models are _not_ appropriate — they only apply to counts that are <font color="darkorchid">**equidispersed**</font>.
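A one-line helper makes this check easy to repeat for any series of counts: a ratio near 1 means equidispersed, above 1 overdispersed, below 1 underdispersed.
```python
def dispersion_ratio(counts):
    """Variance-to-mean ratio of a series of counts."""
    return counts.var() / counts.mean()

dispersion_ratio( kanawha["births"] )   # about 5.5, matching the table above
```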
### Overdispersed car crashes
***All*** of the counties in the `crashes` dataset exhibit overdispersion, with Meigs County the closest to being equidispersed. At the other extreme, monthly crashes in nearby Hamilton County have a variance that is more than 20 times their mean:
```python
hamilton = crashes[ crashes["County"] == "Hamilton" ]
hamilton = hamilton.drop(columns="County")
hamilton = hamilton.melt(id_vars="Year",
var_name="Month",
value_name="Crashes" )
hamilton.agg(Mean = ("Crashes","mean"),
Variance = ("Crashes","var") )
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Crashes</th>
</tr>
</thead>
<tbody>
<tr>
<th>Mean</th>
<td>1047.060606</td>
</tr>
<tr>
<th>Variance</th>
<td>22476.347444</td>
</tr>
</tbody>
</table>
</div>
```python
fig, axs = plt.subplots(2, sharex=True)
sns.histplot( data=hamilton, x="Crashes", discrete=True, ax=axs[0])
axs[0].set_title("Observed crashes per month in Hamilton County, TN")
pois = pd.DataFrame({"Sample" : poisson.rvs(np.mean(hamilton["Crashes"]),
size=hamilton.shape[0])})
sns.histplot(data=pois, x="Sample", discrete=True, ax=axs[1])
axs[1].set_title("Poisson-simulated crashes per month in Hamilton County, TN");
```
Terrible!! Never use a Poisson model when there's significant overdispersion!
### Coughs per hour
Here are 2 common features of the `crashes` and `births` datasets:
* counts are recorded <u>_per month_</u>, and
* <u>_there are no zero counts_</u> for any month.
Of course, there's nothing special about working with months, and there can be plenty of zeroes in some datasets. When monitoring a person's coughs, for example, it's natural to record coughs <u>_per hour_</u>, yielding lots of zeroes. Here are coughs recorded per hour by the [Hyfe cough monitoring app](https://www.hyfeapp.com/) for a person in Spain:
```python
primera = pd.read_csv("primera.csv")
primera["datetime"] = pd.to_datetime( primera["datetime"] )
primera
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>datetime</th>
<th>coughs</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2020-11-06 01:00:00</td>
<td>0</td>
</tr>
<tr>
<th>1</th>
<td>2020-11-06 02:00:00</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>2020-11-06 03:00:00</td>
<td>0</td>
</tr>
<tr>
<th>3</th>
<td>2020-11-06 04:00:00</td>
<td>0</td>
</tr>
<tr>
<th>4</th>
<td>2020-11-06 05:00:00</td>
<td>0</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>7076</th>
<td>2021-08-27 22:00:00</td>
<td>0</td>
</tr>
<tr>
<th>7077</th>
<td>2021-08-27 23:00:00</td>
<td>0</td>
</tr>
<tr>
<th>7078</th>
<td>2021-08-28 00:00:00</td>
<td>0</td>
</tr>
<tr>
<th>7079</th>
<td>2021-08-28 01:00:00</td>
<td>0</td>
</tr>
<tr>
<th>7080</th>
<td>2021-08-28 02:00:00</td>
<td>0</td>
</tr>
</tbody>
</table>
<p>7081 rows × 2 columns</p>
</div>
Despite what's shown for these 10 hours, this person coughed quite a bit from time to time:
```python
sns.lineplot(data=primera, x="datetime", y="coughs");
```
Even so, there were way more hours with 0 or just a few coughs:
```python
sns.histplot(data=primera, x="coughs", discrete=True);
```
There's clearly some overdispersion here, as we can easily check:
```python
primera.agg(Mean = ("coughs","mean"),
Variance = ("coughs","var") )
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>coughs</th>
</tr>
</thead>
<tbody>
<tr>
<th>Mean</th>
<td>0.99195</td>
</tr>
<tr>
<th>Variance</th>
<td>12.03341</td>
</tr>
</tbody>
</table>
</div>
This is pretty typical for coughing — here's another Hyfe user from Spain:
```python
segunda = pd.read_csv("segunda.csv")
segunda["datetime"] = pd.to_datetime( segunda["datetime"] )
segunda
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>datetime</th>
<th>coughs</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2020-11-14 01:00:00</td>
<td>0</td>
</tr>
<tr>
<th>1</th>
<td>2020-11-14 02:00:00</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>2020-11-14 03:00:00</td>
<td>0</td>
</tr>
<tr>
<th>3</th>
<td>2020-11-14 04:00:00</td>
<td>0</td>
</tr>
<tr>
<th>4</th>
<td>2020-11-14 05:00:00</td>
<td>0</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>2012</th>
<td>2021-02-05 21:00:00</td>
<td>0</td>
</tr>
<tr>
<th>2013</th>
<td>2021-02-05 22:00:00</td>
<td>0</td>
</tr>
<tr>
<th>2014</th>
<td>2021-02-05 23:00:00</td>
<td>0</td>
</tr>
<tr>
<th>2015</th>
<td>2021-02-06 00:00:00</td>
<td>0</td>
</tr>
<tr>
<th>2016</th>
<td>2021-02-06 01:00:00</td>
<td>0</td>
</tr>
</tbody>
</table>
<p>2017 rows × 2 columns</p>
</div>
```python
sns.lineplot(data=segunda, x="datetime", y="coughs");
```
```python
sns.histplot(data=segunda, x="coughs", discrete=True);
```
```python
segunda.agg(Mean = ("coughs","mean"),
Variance = ("coughs","var"))
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>coughs</th>
</tr>
</thead>
<tbody>
<tr>
<th>Mean</th>
<td>0.411502</td>
</tr>
<tr>
<th>Variance</th>
<td>2.487328</td>
</tr>
</tbody>
</table>
</div>
### The negative binomial distribution
The Poisson distribution doesn't apply to overdispersed counts, so what does? The go-to option in many cases is the <font color="darkorchid">**negative binomial distribution**</font>, whose PMF has various forms. The form implemented by `scipy.nbinom` is
$$ P( Y = k ) = \binom{ k+n-1 }{ n-1 } p^{n} \left( 1 - p \right)^{k} \ , \quad k = 0, 1, 2, \ldots \ , $$
reminiscent of the binomial distribution; this is the probability of needing $k+n$ trials to observe $n$ successes when the chance of success per trial is $p$. The mean and variance of $Y \sim \operatorname{NB}(n,p))$ are
$$\operatorname{E}\left( \, Y \, \right) ~ = ~ \frac{n}{p} \quad \text{and} \quad \operatorname{Var}( \, Y \, ) ~ = ~ \frac{n(1-p)}{p^2} \ .$$
For count models, different parameterizations are common and more practical. Let $\mu$ denote the average of $Y \sim \operatorname{NB}(n,p)$, let $\sigma^2$ denote its variance, and define the <font color="darkorchid">**dispersion parameter**</font> $\alpha$ by
$$ \sigma^2 ~ = ~ \mu + \alpha \mu^2 \quad \iff \quad \alpha ~ = ~ \frac{\sigma^2 - \mu}{\mu^2} \ . $$
Given the values of any two of these parameters, $n$ and $p$ can be recovered:
$$ p ~ = ~ \frac{\mu}{\mu + \sigma^2} ~ = ~ \frac{1}{2 + \alpha \mu} \quad \text{and} \quad n ~ = ~ \frac{\mu^2}{\mu + \sigma^2} ~ = ~ \frac{\mu}{2 + \alpha \mu} \ .$$
You can then translate between the versions — $\operatorname{NB}(n,p)$, $\operatorname{NB}(\mu,\sigma^2)$ and $\operatorname{NB}(\mu,\alpha)$ — as needed. Beware that other people use $\displaystyle{\frac{1}{\alpha}}$ as the dispersion parameter, and other relationships between it and the variance are possible. And most importantly, notice that <u>_two parameters are needed!!_</u>
> **Use the `nbinom` module from `scipy` (`import`ed above) to work with negative binomial distributions.**
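As a minimal sketch (the helper names here are ours, and `nbinom` is the `scipy.stats` object), the translation between the $(\mu,\sigma^2)$, $(\mu,\alpha)$ and $(n,p)$ versions can be wrapped in two small functions, using the same moment matching as the simulations below:
```python
from scipy.stats import nbinom

def nb_params_from_moments(mu, var):
    # moment matching used in the simulations below: n = mu^2/(mu+var), p = mu/(mu+var)
    return mu**2 / (mu + var), mu / (mu + var)

def nb_params_from_dispersion(mu, alpha):
    # same translation, starting from the dispersion parameter: var = mu + alpha*mu^2
    return nb_params_from_moments(mu, mu + alpha * mu**2)

# example: mean of about 1 cough/hour with variance about 12, as for primera
n_ex, p_ex = nb_params_from_moments(1.0, 12.0)
sample = nbinom.rvs(n_ex, p_ex, size=10)
```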
### Simulating monthly births
```python
mu = kanawha["births"].mean()
var = kanawha["births"].var()
n_kanawha = mu**2 / (mu + var )
p_kanawha = mu / ( mu + var )
fig, axs = plt.subplots(2, sharex=True)
sns.histplot( data=kanawha, x="births", discrete=True, ax=axs[0])
axs[0].set_title("Observed births per month in Kanawha County, WV")
NB = pd.DataFrame({"Simulation" : nbinom.rvs(n_kanawha, p_kanawha,
size=kanawha.shape[0])})
sns.histplot(data=NB, x="Simulation", discrete=True, ax=axs[1])
axs[1].set_title("NB-simulated births per month in Kanawha County, WV");
```
### Simulating monthly crashes
```python
mu = hamilton["Crashes"].mean()
var = hamilton["Crashes"].var()
n = mu**2 / (mu + var )
p = mu / ( mu + var )
fig, axs = plt.subplots(2, sharex=True)
sns.histplot( data=hamilton, x="Crashes", discrete=True, ax=axs[0])
axs[0].set_title("Observed crashes per month in Hamilton County, TN")
NB = pd.DataFrame({"Simulation" : nbinom.rvs(n, p,
size=hamilton.shape[0])})
sns.histplot(data=NB, x="Simulation", discrete=True, ax=axs[1])
axs[1].set_title("NB-simulated crashes per month in Hamilton County, TN");
```
### Simulating coughs per hour
```python
mu = primera["coughs"].mean()
var = primera["coughs"].var()
n = mu**2 / (mu + var )
p = mu / ( mu + var )
primera["simulation"] = nbinom.rvs(n, p, size=primera.shape[0])
fig, axs = plt.subplots(2, sharex=True)
sns.lineplot( data=primera, x="datetime", y="coughs", ax=axs[0])
axs[0].set_title("Recorded coughs per hour")
sns.lineplot(data=primera, x="datetime", y="simulation", ax=axs[1])
axs[1].set_title("NB-simulated coughs per hour");
```
```python
fig, axs = plt.subplots(2, sharex=True)
sns.histplot( data=primera, x="coughs", discrete=True, ax=axs[0])
axs[0].set_title("Recorded coughs per hour")
sns.histplot(data=primera, x="simulation", discrete=True, ax=axs[1])
axs[1].set_title("NB-simulated coughs per hour");
```
```python
mu = segunda["coughs"].mean()
var = segunda["coughs"].var()
n = mu**2 / (mu + var )
p = mu / ( mu + var )
segunda["simulation"] = nbinom.rvs(n, p, size=segunda.shape[0])
fig, axs = plt.subplots(2, sharex=True)
sns.lineplot( data=segunda, x="datetime", y="coughs", ax=axs[0])
axs[0].set_title("Recorded coughs per hour")
sns.lineplot(data=segunda, x="datetime", y="simulation", ax=axs[1])
axs[1].set_title("NB-simulated coughs per hour");
```
```python
fig, axs = plt.subplots(2, sharex=True)
sns.histplot( data=segunda, x="coughs", discrete=True, ax=axs[0])
axs[0].set_title("Recorded coughs per hour")
sns.histplot(data=segunda, x="simulation", discrete=True, ax=axs[1])
axs[1].set_title("NB-simulated coughs per hour");
```
### Simple negative binomial regression: a preview
A simple negative binomial regression model fits a family of NB RVs to observations. Each covariate class — one for each value $x$ of the predictor — has a mean rate $\mu$ given (approximately!) by
$$ \log{ \left( \mu \right) } ~ = ~ a + bx $$
<u>_AND_</u> a dispersion parameter $\alpha$ determined by the fitting process. More on this next time!
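As a hedged preview (a sketch on synthetic data with hypothetical parameter values; the proper treatment comes next time), `statsmodels` can fit such a model, estimating $a$, $b$ and the dispersion $\alpha$ together:
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 2, size=1000)            # hypothetical predictor
mu = np.exp(0.2 + 1.0 * x)                  # true log-linear mean: a = 0.2, b = 1.0
alpha = 0.5                                 # true dispersion
n, p = 1 / alpha, 1 / (1 + alpha * mu)      # gives numpy's sampler mean mu and variance mu + alpha*mu^2
y = rng.negative_binomial(n, p)             # simulated overdispersed counts

X = sm.add_constant(x)                      # adds the intercept column for a
nb_fit = sm.NegativeBinomial(y, X).fit(disp=0)
print(nb_fit.params)                        # estimates of a, b and alpha
```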
## <u>The Kolmogorov-Smirnov test</u>
### Cumulative distribution functions
The PMF of a discrete RV $Y$ — binomial, Poisson, or negative binomial — determines $P( Y = k )$ for any nonnegative integer $k$; an alternative characterization of $Y$ is its <font color="darkorchid">**cumulative distribution function (CDF)**</font>, which gives $P( Y \leq k )$ for any such $k$. Using recorded or simulated counts instead of a theoretical distribution yields <font color="darkorchid">**empirical cumulative distribution functions (ECDF)**</font>, and comparing two ECDFs is the basis of the <font color="darkorchid">**Kolmogorov-Smirnov test**</font>. As with any other hypothesis test, you should always complement this one with a visual check and a good dose of common sense!
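Before applying this to our data, here is a minimal, self-contained illustration (synthetic Poisson counts, names of our choosing) of what the two-sample KS statistic measures: the largest vertical gap between the two ECDFs, which is also what `kstest` reports when handed two samples.
```python
import numpy as np
from scipy.stats import poisson, kstest

rng = np.random.default_rng(1)
a = poisson.rvs(2.0, size=300, random_state=rng)   # synthetic "observed" counts
b = poisson.rvs(2.5, size=300, random_state=rng)   # synthetic "simulated" counts

grid = np.sort(np.concatenate([a, b]))             # pooled evaluation points
ecdf_a = np.searchsorted(np.sort(a), grid, side="right") / a.size
ecdf_b = np.searchsorted(np.sort(b), grid, side="right") / b.size
D_manual = np.abs(ecdf_a - ecdf_b).max()

print(D_manual, kstest(a, b).statistic)            # the two values should coincide
```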
```python
sns.ecdfplot(data=meigs[["Crashes","Simulation"]]);
```
```python
kstest(meigs["Crashes"],meigs["Simulation"])
```
KstestResult(statistic=0.09848484848484848, pvalue=0.5457001750199713)
The null hypothesis here is that <u>_both of these samples are drawn from the same distribution_</u> — the $p$-value is large, so we have no reason to reject this hypothesis for the crashes in Meigs County. Based on this data, monthly crashes there do seem to be Poisson distributed!
```python
ks_kanawha = pd.DataFrame({"births":kanawha["births"],
"simulation":nbinom.rvs( n_kanawha, p_kanawha, size=kanawha.shape[0])})
sns.ecdfplot(data=ks_kanawha);
```
```python
kstest(ks_kanawha["births"], ks_kanawha["simulation"])
```
KstestResult(statistic=0.25, pvalue=0.2122866675915156)
```python
sns.ecdfplot(data=segunda[["coughs","simulation"]]);
```
```python
kstest(segunda["coughs"],segunda["simulation"])
```
KstestResult(statistic=0.04858701041150223, pvalue=0.017093207893396013)
```python
```
| 430df5e351e97a635f2c2742ab9a90cf6e8d264d | 618,577 | ipynb | Jupyter Notebook | twitch/count_models/.ipynb_checkpoints/Count models-Copy1-checkpoint.ipynb | mbrudd/regression | 348bc855d8164b55593ee47f656603dabd48d422 | [
"MIT"
]
| 8 | 2021-07-17T05:41:07.000Z | 2022-03-16T12:45:38.000Z | twitch/count_models/.ipynb_checkpoints/Count models-Copy1-checkpoint.ipynb | mbrudd/regression | 348bc855d8164b55593ee47f656603dabd48d422 | [
"MIT"
]
| null | null | null | twitch/count_models/.ipynb_checkpoints/Count models-Copy1-checkpoint.ipynb | mbrudd/regression | 348bc855d8164b55593ee47f656603dabd48d422 | [
"MIT"
]
| 1 | 2022-01-24T13:04:33.000Z | 2022-01-24T13:04:33.000Z | 226.917461 | 69,528 | 0.891419 | true | 14,540 | Qwen/Qwen-72B | 1. YES
2. YES | 0.718594 | 0.760651 | 0.546599 | __label__eng_Latn | 0.636045 | 0.108263 |
```python
from sympy import *
```
```python
p1 = Symbol("p1")
p2 = Symbol("p2")
p3 = Symbol("p3")
p4 = Symbol("p4")
p5 = 1-p1-p2-p3
p6 = 1-p1-p2-p4
lambd = Symbol("l")
y1= Symbol("y1")
y2= Symbol("y2")
expr = y1*y2*p1 + (1-y1)*(1-y2)*p2 + y1*(1-y2)*(lambd*p3 + (1-lambd)*p4) + (1-y1)*y2*(lambd*p4 + (1-lambd)*p5)
print(simplify(expand(expr)))
```
-l*p1*y1*y2 + l*p1*y2 - l*p2*y1*y2 + l*p2*y2 - 2*l*p3*y1*y2 + l*p3*y1 + l*p3*y2 - l*p4*y1 + l*p4*y2 + l*y1*y2 - l*y2 + 2*p1*y1*y2 - p1*y2 + 2*p2*y1*y2 - p2*y1 - 2*p2*y2 + p2 + p3*y1*y2 - p3*y2 - p4*y1*y2 + p4*y1 - y1*y2 + y2
```python
```
| ba0f34eb16e01972eeaa637e47f0c76dd8176bc6 | 1,576 | ipynb | Jupyter Notebook | nash/algebra.ipynb | ericschulman/nash | bc2421f703887a553eec2a4664607061df5b377a | [
"MIT"
]
| null | null | null | nash/algebra.ipynb | ericschulman/nash | bc2421f703887a553eec2a4664607061df5b377a | [
"MIT"
]
| null | null | null | nash/algebra.ipynb | ericschulman/nash | bc2421f703887a553eec2a4664607061df5b377a | [
"MIT"
]
| null | null | null | 21.888889 | 234 | 0.468274 | true | 318 | Qwen/Qwen-72B | 1. YES
2. YES | 0.936285 | 0.740174 | 0.693014 | __label__vie_Latn | 0.055829 | 0.448435 |
```python
```
```python
#Import packages
from scipy import optimize,arange
from numpy import array
import numpy as np
import matplotlib.pyplot as plt
import random
import sympy as sm
from math import *
%matplotlib inline
from IPython.display import Markdown, display
import pandas as pd
```
```python
a = sm.symbols('a')
c_vec = sm.symbols('c_vec')
b = sm.symbols('b')
q_vec = sm.symbols('q_i') # q for firm i
q_minus = sm.symbols('q_{-i}') # q for the the opponents
#The profit of firm 1 is then:
Pi_i = q_vec*((a-b*(q_vec+q_minus))-c_vec)
#giving focs:
foc = sm.diff(Pi_i,q_vec)
foc
```
$\displaystyle a - b q_{i} - b \left(q_{i} + q_{-i}\right) - c_{vec}$
In order to use this in our solution, we rewrite $q_{i}+q_{-i} = \sum_i q_{i}$ using `np.sum` and then define a function for the FOC:
```python
def foc1(a,b,q_vec,c_vec):
# Using the result from the sympy.diff
return -b*q_vec+a-b*np.sum(q_vec)-c_vec
```
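As a possible next step (a sketch with hypothetical parameter values), the system of first-order conditions can be solved numerically for the equilibrium quantities, e.g. with `optimize.fsolve` from the `scipy` import above:
```python
a_val, b_val = 10.0, 1.0
c_val = np.array([2.0, 2.0])   # hypothetical marginal costs for two identical firms

q_star = optimize.fsolve(lambda q: foc1(a_val, b_val, q, c_val), x0=np.ones(2))
print(q_star)  # symmetric Cournot equilibrium: (a - c) / (3*b) = 8/3 for each firm
```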
```python
```
| cfb604e1a04f934fa8f818212845d0e7256750d2 | 2,626 | ipynb | Jupyter Notebook | modelproject/Untitled1.ipynb | AskerNC/projects-2021-aristochats | cade4c02de648f4cd1220216598dc24b67bb8559 | [
"MIT"
]
| null | null | null | modelproject/Untitled1.ipynb | AskerNC/projects-2021-aristochats | cade4c02de648f4cd1220216598dc24b67bb8559 | [
"MIT"
]
| null | null | null | modelproject/Untitled1.ipynb | AskerNC/projects-2021-aristochats | cade4c02de648f4cd1220216598dc24b67bb8559 | [
"MIT"
]
| null | null | null | 22.254237 | 137 | 0.517136 | true | 307 | Qwen/Qwen-72B | 1. YES
2. YES | 0.901921 | 0.822189 | 0.741549 | __label__eng_Latn | 0.883377 | 0.5612 |
# Tutorial 01: Conforming Finite Element Method for a Nonlinear Poisson Equation
In this tutorial we extend tutorial 00 in the following ways:
1. Solve a nonlinear stationary partial differential equation (PDE).
2. Use conforming finite element spaces of arbitrary order.
3. Use different types of (conforming) meshes (simplicial, cubed and mixed).
4. Use multiple types of boundary conditions.
Combined with the fact that the implementation works in any dimension (note: it is not claimed to be efficient in high dimension $d>3$) this comprises already a relatively large space of different methods, so the example illustrates the flexibility of PDELab. Moreover, the finite element method developed in this tutorial will serve as a building block for instationary problems, adaptive mesh refinement and
parallel solution in subsequent tutorials. This tutorial depends on tutorial 00 which discusses piecewise linear elements on simplicial elements. It is assumed that you have worked through tutorial 00 before.
## Problem Formulation
Here we consider the following nonlinear Poisson equation with
Dirichlet and Neumann boundary conditions:
\begin{align}\label{eq:ProblemStrong}
-\Delta u + q(u) &= f &&\text{in $\Omega$}, \\
u &= g &&\text{on $\Gamma_D\subseteq\partial\Omega$},\\
-\nabla u\cdot \nu &= j &&\text{on $\Gamma_N=\partial\Omega\setminus\Gamma_D$}.
\end{align}
$\Omega\subset\mathbb{R}^d$ is a domain, $q:\mathbb{R}\to\mathbb{R}$ is a given, possibly
nonlinear function and $f: \Omega\to\mathbb{R}$ is the source term and
$\nu$ denotes the unit outer normal to the domain.
The weak formulation of this problem is derived by multiplication with an appropriate
test function and integrating by parts. This results in the abstract problem:
\begin{equation}
\text{Find $u\in U$ s.t.:} \quad r^{\text{NLP}}(u,v)=0 \quad \forall v\in V,
\label{Eq:BasicBuildingBlock}
\end{equation}
with the continuous residual form
\begin{equation}
r^{\text{NLP}}(u,v) = \int_\Omega \nabla u \cdot \nabla v + (q(u)-f)v\,dx + \int_{\Gamma_N} jv\,ds
\label{eq:ResidualForm}
\end{equation}
and the function spaces
$U= \{v\in H^1(\Omega) \,:\, \text{''$v=g$'' on $\Gamma_D$}\}$
and $V= \{v\in H^1(\Omega) \,:\, \text{''$v=0$'' on $\Gamma_D$}\}$.
We assume that $q$ is such that this problem has a unique solution.
## Realization in PDELab
The structure of the code is very similar to that of tutorial 00. Again, all the Dune sources are included through one convenience header:
```c++
#include <dune/jupyter.hh>
#include "nonlinearpoissonfem.hh"
//#include "nitschenonlinearpoissonfem.hh"
#include "problem.hh"
```
As for the previous tutorial, a number of runtime paramters is provided in a configuration file `tutorial01.ini`, which can be found in the `notebooks/tutorial01` directory. Again the paramters are parsed into a `ParamterTree`.
```c++
Dune::ParameterTree ptree;
Dune::ParameterTreeParser ptreeparser;
ptreeparser.readINITree("tutorial01.ini",ptree);
```
Note that the ini file is less important when working in the Jupyter notebook: most of the relevant parameters can also be set directly in the cells below.
**First, a DUNE grid object is instantiated**
The dimension is set and the refinement is read from the ini file. <a id='dim'> </a>
```c++
// read ini file
const int dim = 2;
const int refinement = ptree.get<int>("grid.refinement");
```
The first step is again, to instantiate a grid. We use again a two-dimensional, unstructured mesh for this simulation, as known from the previous tutorial. Again, the mesh resolves the unit square $\Omega = [0,1]^2$. The file is specified in the `unitsquare.msh` file that was generated using Gmsh.
```c++
using Grid = Dune::UGGrid<dim>;
std::string filename = ptree.get("grid.twod.filename",
"unitsquare.msh");
Dune::GridFactory<Grid> factory;
Dune::GmshReader<Grid>::read(factory,filename,true,true);
std::unique_ptr<Grid> gridp(factory.createGrid());
Dune::Timer timer;
gridp->globalRefine(refinement);
std::cout << "Time for mesh refinement " << timer.elapsed()
<< " seconds" << std::endl;
using GV = Grid::LeafGridView;
using DF = Grid::ctype;
GV gv = gridp->leafGridView();
```
*Using yasp grid instead of uggrid. Note, that additionally another finite element map needs to be selected.*
<a id='yasp'> </a>
```c++
//solution to task 1.2 using the structured grid factory
/*using Grid = Dune::YaspGrid<dim>;
using DF = Grid::ctype;
Dune::FieldVector<DF,dim> lowerleft(0.0);
Dune::FieldVector<DF,dim> upperright(1.0);
auto N = Dune::filledArray<dim, unsigned int>(2);
auto grid = Dune::StructuredGridFactory<Grid>::createCubeGrid(lowerleft, upperright, N);
using GV = Grid::LeafGridView;
GV gv = grid->leafGridView();
*/
```
```c++
gridp
```
Here, the polynomial degree can be chosen:
```c++
const int degree = 1;
```
```c++
//ug grid
using FEM = Dune::PDELab::PkLocalFiniteElementMap<GV,DF,double,degree>;
```
*Finite element map for yasp grid*
```c++
//yasp
//using FEM = Dune::PDELab::QkLocalFiniteElementMap<GV,DF,double,degree>;
```
```c++
FEM fem(gv);
```
### Driver
**The driver instantiates the necessary PDELab classes for solving a nonlinear stationary problem and finally solves the problem**
```c++
// type for calculations
using RF = double;
```
The interface of the parameter class is defined by the implementor of the local operator and is not part of PDELab. As shown in tutorial 00 it is perfectly possible to have a local operator without a parameter class.
The following code segment instantiates the problem class which is called `Problem` here (it is explained in detail below)
```c++
// make PDE parameter class
RF eta = ptree.get("problem.eta",(RF)1.0);
Problem<RF> problem(eta);
```
```c++
eta
```
Now there are two places where information from the PDE is used in PDELab. First of all we need to have an object that can be used as an argument to `Dune::PDELab::interpolate` to initialize a vector which represents the initial guess and the Dirichlet boundary conditions.
The class `Problem` defines a method which we need to use to define a class with the interface of `Dune::PDELab::GridFunction`.
This is accomplished by the following code:
```c++
auto g = Dune::PDELab::makeGridFunctionFromCallable(
gv,
[&](const auto& e, const auto& x) {
return problem.g(e,x);
}
);;
```
Similarly, we need an object that can be passed to `Dune::PDELab::constraints` to fill a constraints container which is used to build a subspace of a function space. Again, the class `Problem` defines such a method which is extracted with a lambda function:
```c++
auto b = Dune::PDELab::makeBoundaryConditionFromCallable(
gv,
[&](const auto& i, const auto& x){
return problem.b(i,x);
}
);;
```
The next step is to define the grid function space. This is
exactly the same code as in tutorial 00.
```c++
// Make grid function space
//== Exercise 2 {
using CON = Dune::PDELab::ConformingDirichletConstraints;
// using CON = Dune::PDELab::NoConstraints;
//== }
using VBE = Dune::PDELab::ISTL::VectorBackend<>;
using GFS = Dune::PDELab::GridFunctionSpace<GV,FEM,CON,VBE>;
GFS gfs(gv,fem);
gfs.name("Vh");
```
Now comes unchanged code to assemble the constraints, instantiate a coefficient vector, making a discrete grid function that can be used for visualization and interpolating the initial guess and Dirichlet boundary conditions:
```c++
// Assemble constraints
//== Exercise 2 {
using CC = typename GFS::template
ConstraintsContainer<RF>::Type;
CC cc;
Dune::PDELab::constraints(b,gfs,cc); // assemble constraints
std::cout << "constrained dofs=" << cc.size() << " of "
<< gfs.globalSize() << std::endl;
// using CC = Dune::PDELab::EmptyTransformation;
//== }
// A coefficient vector
using Z = Dune::PDELab::Backend::Vector<GFS,RF>;
Z z(gfs); // initial value
// Make a grid function out of it
using ZDGF = Dune::PDELab::DiscreteGridFunction<GFS,Z>;
ZDGF zdgf(gfs,z);
// Fill the coefficient vector
Dune::PDELab::interpolate(g,gfs,z);
```
The next step is to instantiate a local operator, called `NonlinearPoissonFEM`, containing the implementation of the element-wise computations of the finite element method. As explained above the local operator is parametrized by the class
`Problem`. In addition, also the finite element map is passed as a template parameter for reasons that will become clear below:
```c++
// Make a local operator
//== Exercise 2 {
using LOP = NonlinearPoissonFEM<Problem<RF>,FEM> ;
LOP lop(problem);
// RF stab = ptree.get("fem.stab",(RF)1);
// using LOP = NitscheNonlinearPoissonFEM<Problem<RF>,FEM> ;
// LOP lop(problem,stab);
//== }
```
Now the grid function space, local operator, matrix backend and
constraints container are used to set up a grid operator facilitating the global residual assembly, Jacobian assembly and matrix-free Jacobian application. The matrix backend is initialized with a guess of the approximate number
of nonzero matrix entries per row.
```c++
// Make a global operator
using MBE = Dune::PDELab::ISTL::BCRSMatrixBackend<>;
MBE mbe((int)pow(1+2*degree,dim));
typedef Dune::PDELab::GridOperator<
GFS,GFS, /* ansatz and test space */
LOP, /* local operator */
MBE, /* matrix backend */
RF,RF,RF, /* domain, range, jacobian field type*/
CC,CC /* constraints for ansatz and test space */
> GO;
//== Exercise 2 {
GO go(gfs,cc,gfs,cc,lop,mbe);
// GO go(gfs,gfs,lop,mbe);
//== }
```
In order to prepare for the solution process an appropriate linear solver needs to be selected:
```c++
using LS = Dune::PDELab::ISTLBackend_SEQ_CG_AMG_SSOR<GO> ;
LS ls(100,2);
```
Since the problem is nonlinear we use the implementation of
Newton's method in PDELab. It provides the inexact Newton method
in the sense that the iterative solution of the linear subproblems is stopped early and uses line search as globalization strategy:
```c++
Dune::PDELab::Newton<GO,LS,Z> newton(go,z,ls);
newton.setReassembleThreshold(0.0); // always reassemble J
newton.setVerbosityLevel(3); // be verbose
newton.setReduction(1e-10); // total reduction
newton.setMinLinearReduction(1e-4); // min. red. in lin. solve
newton.setMaxIterations(25); // limit number of its
newton.setLineSearchMaxIterations(10); // limit line search
```
Now, finally do all the work and solve the problem:
```c++
newton.apply();
```
At the end we can write the VTK file with subsampling: <a id='exact'> </a>
```c++
//write filename to .txt file
std::string filename=ptree.get("output.filename","output");
std::ofstream out("name.txt");
out << filename;
out.close();
// Write VTK output file
int subsampling = ptree.get("output.subsampling",degree);
Dune::SubsamplingVTKWriter<GV> vtkwriter(gv,Dune::refinementIntervals(subsampling));
using VTKF = Dune::PDELab::VTKGridFunctionAdapter<ZDGF> ;
vtkwriter.addVertexData(std::shared_ptr<VTKF>(new VTKF(zdgf,"fesol")));
```
*Visualize exact solution*
```c++
//solution to exercise 2.1.3
/*Z w(gfs);
Dune::PDELab::interpolate(g,gfs,w); // Lagrange interpolation of exact solution
ZDGF wdgf(gfs,w);*/
```
```c++
//solution to exercise 2.1.3
/*vtkwriter.addVertexData(std::shared_ptr<VTKF>(new VTKF(wdgf,"exact")));*/
```
The visualization data can be generated in Jupyter by printing the VTKWriter instance:
```c++
vtkwriter
```
### Problem.hh
Note that the following cells are not executable. Changes can be made in the file `problem.hh`, which is provided in the same folder as this notebook.
The class `Problem` contained in the file `problem.hh` provides all parameter functions for the PDE problem. It is parameterized with the floating point type to be used:
```c++
template<typename Number>
class Problem
```
Its constructor takes a parameter $\eta$ as argument:
```c++
Problem (const Number& eta_) : eta(eta_) {}
```
Now come the parameter functions defining the PDE problem.
First is the nonlinearity $q(u)$:
```c++
Number q (Number u) const
{
return eta*u*u;
}
```
We also provide the derivative of the function $q$ as a seperate method:
```c++
Number qprime (Number u) const
{
return 2*eta*u;
}
```
This allows the implementation of an exact Jacobian later (illustrated in tutorial 02) and is actually not needed here as we will use a numerical Jacobian.
Next is the right hand side function $f$ which gets an element `e` and a local coordinate `x` within
the corresponding reference element as a parameter:
```c++
template<typename E, typename X>
Number f (const E& e, const X& x) const
{
return -2.0*x.size();
}
```
The argument `x` can be expected to be an instance of `Dune::FieldVector`, which has a method `size`
giving the number of components of the vector, i.e. the space dimension.
The next method simply called `b` is the boundary condition type function. It should return true if the position given by intersection `i` and a local coordinate `x` within the reference element of the intersection is on the Dirichlet boundary.
In the particular instance here we set $\Gamma_D=\partial\Omega$:
```c++
template<typename I, typename X>
bool b (const I& i, const X& x) const
{
return true;
}
```
The value of the Dirichlet boundary condition is now defined by the method `g`. As explained above it
is more appropriate to provide a function $u_g$ that can be evaluated
on $\overline{\Omega}$ and gives the value of $g$ on the Dirichlet boundary
and the initial guess for the nonlinear solver on all other points:
```c++
template<typename E, typename X>
Number g (const E& e, const X& x) const
{
auto global = e.geometry().global(x);
Number s=0.0;
for (std::size_t i=0; i<global.size(); i++) s+=global[i]*global[i];
return s;
}
```
As with the function `f` above, the arguments are an element and a local coordinate
in its reference element. Here we evaluate it as $u_g(e,x) = \|\mu_e(x)\|^2$.
Finally, there is a method defining the value of the Neumann boundary condition.
Although there is no Neumann boundary here, the method has to be provided but is
never called. The arguments of the method are the same as for the boundary
condition type function `b`:
```c++
template<typename I, typename X>
Number j (const I& i, const X& x) const
{
return 0.0;
}
```
### Local Operator
We now turn to how the residual can be evaluated in practice. The residual form \eqref{eq:ResidualForm} can be readily decomposed into elementwise contributions:
\begin{equation}
r^{\text{NLP}}\left(u,v\right) =
\sum_{T\in\mathcal{T}_h} \alpha_T^V(u,v)
+ \sum_{T\in\mathcal{T}_h} \lambda_T^V(v)
+ \sum_{F\in\mathcal{F}_h^{\partial\Omega}}\lambda_F^B(v)
\end{equation}
with
\begin{align}
\alpha_T^V(u,v) &= \int_T \nabla u \cdot \nabla v + q(u) v \,dx, &
\lambda_T^V(v) &= - \int_T f v \,dx, &
\lambda_F^B(v) &= \int_{F\cap\Gamma_N} j v\,ds.
\end{align}
Here $\mathcal{F}_h^{\partial\Omega}$ is the set of intersections of elements with the domain boundary $\partial\Omega$.
The element-wise computations can be classified on the one hand as volume integrals (superscript $V$), boundary integrals (superscript $B$) and skeleton integrals (superscript $S$, to be shown later) and on the
other hand as integrals depending on trial and test functions ($\alpha$-terms) and integrals depending only on test functions ($\lambda$-terms). Here we need three of these six possible combinations.
The three terms can now be evaluated using the techniques introduced in tutorial 00 with the small extension that for general maps $\mu_T$ we have
$$\nabla w(\mu_T(\hat x)) = J_{\mu_T}^{-T}(\hat x) \hat\nabla \hat w (\hat x)$$
with $J_{\mu_T}(\hat x)$ the Jacobian of $\mu_T$ at point $\hat x$.
For a linear map $\mu_T(\hat{x}) = B_T\hat{x} + a_T$ the Jacobian is simply given by $J_T = B_T$.
The class `NonlinearPoissonFEM` implements the element-wise computations of the finite element method. Evaluation of the residual $R(z)$ is accomplished
by the three types of contributions shown in equation \eqref{eq:FinalResidualEvaluation}.
In order to make things as simple as possible we chose to implement the evaluation of the Jacobian and the matrix-free Jacobian application with finite differences.
The definition of class `NonlinearPoissonFEM` starts as follows:
```c++
template<typename Param, typename FEM>
class NonlinearPoissonFEM :
public Dune::PDELab::
NumericalJacobianVolume<NonlinearPoissonFEM<Param,FEM> >,
public Dune::PDELab::
NumericalJacobianApplyVolume<NonlinearPoissonFEM<Param,FEM> >,
public Dune::PDELab::FullVolumePattern,
public Dune::PDELab::LocalOperatorDefaultFlags
```
The class is parametrized by a parameter class and a finite element map.
Implementation of element-wise contributions to the Jacobian and matrix-free
Jacobian evaluation is achieved through inheriting from the
classes `NumericalJacobianVolume` and `NumericalJacobianApplyVolume`.
Using the *curiously recurring template pattern* these classes provide
the corresponding methods without any additional coding effort
based on the `alpha_volume` method explained below.
The other two base classes are the same as in tutorial 00.
The private data members are a cache for evaluation of the basis functions on the reference element:
```c++
typedef typename FEM::Traits::FiniteElementType::
Traits::LocalBasisType LocalBasis;
Dune::PDELab::LocalBasisCache<LocalBasis> cache;
```
a reference to the parameter object:
```c++
Param& param;
```
and an integer value controlling the order of the formulas used for numerical quadrature:
```c++
int incrementorder;
```
The public part of the class starts with the definition of the flags controlling
the generic assembly process. The `doPatternVolume` flag specifies that the sparsity pattern of the Jacobian is determined by couplings between degrees of freedom associated with single elements. The corresponding
default pattern assembly method is inherited from the class `FullVolumePattern`:
```c++
enum { doPatternVolume = true };
```
The residual assembly flags indicate that in this local operator we will provide the methods `lambda_volume`, `lambda_boundary` and `alpha_volume`:
```c++
enum { doLambdaVolume = true };
enum { doLambdaBoundary = true };
enum { doAlphaVolume = true };
```
Next comes the constructor taking as an argument a reference to a parameter object and the optional increment of the quadrature order:
```c++
NonlinearPoissonFEM (Param& param_, int incrementorder_=0)
: param(param_), incrementorder(incrementorder_)
{}
```
#### Method `lambda_volume`
For any $(T,m)\in C(i)$ we obtain
\begin{equation*}
\begin{split}
\lambda_T^V(\phi_i) &= - \int_T f \phi_i \,dx =
- \int_{\hat T} f(\mu_T(\hat x)) p_m^{\hat T}(\hat x) |\text{det} J_{\mu_T}(\hat x)|\, d\hat x .
\end{split}
\end{equation*}
This integral on the reference element is then computed by numerical quadrature of appropriate order. Here $p_m^{\hat T}$ denotes the local basis functions on the reference element; the global Lagrange basis functions spanning $V_h^{k,d}(\mathcal{T}_h)$ are then defined by
\begin{equation*}
\phi_i(x) = \left\{\begin{array}{ll}
p^{\hat T}_m(\mu_T^{-1}(x)) & x\in T \wedge (T,m)\in C(i) \\
0 & \text{else}
\end{array}\right. , \quad i\in\mathcal{I}\left(V_h^{k,d}(\mathcal{T}_h)\right) .
\end{equation*}
The evaluation for all test functions with support on element $T$ may be collected in a vector
\begin{equation*}
(\mathcal{L}_T^V)_m = - \int_{\hat T} f(\mu_T(\hat x)) p_m^{\hat T}(\hat x)
|\text{det} J_{\mu_T}(\hat x)|\, d\hat x.
\end{equation*}
This method was also present in the local operator `PoissonP1` in tutorial00. It implements the term $\mathcal{L}_T^V$ and has the interface:
```c++
//! right hand side integral
template<typename EG, typename LFSV, typename R>
void lambda_volume (const EG& eg, const LFSV& lfsv, R& r) const
```
The implementation here uses numerical quadrature of sufficiently high order
which is selected at the beginning of the method:
```c++
auto geo = eg.geometry();
const int order = incrementorder + 2*lfsv.finiteElement().localBasis().order();
auto rule = Dune::PDELab::quadratureRule(geo,order);
```
The DUNE quadrature rules provide a container of quadrature points that can be iterated over:
```c++
for (const auto& ip : rule)
{
// evaluate basis functions
auto& phihat = cache.evaluateFunction(ip.position(), lfsv.finiteElement().localBasis());
// integrate -f*phi_i
decltype(ip.weight()) factor = ip.weight()*
geo.integrationElement(ip.position());
auto f=param.f(eg.entity(),ip.position());
for (size_t i=0; i<lfsv.size(); i++)
r.accumulate(lfsv,i,-f*phihat[i]*factor);
}
```
At each quadrature point all basis functions are evaluated. The local function space argument `lfsv` provides all the basis functions on the reference element. Evaluations are cached for each point as the evaluation
may be quite costly, especially for high order. In addition, copying of data is avoided as the cache returns only a reference to the data stored in the cache. The integration factor is the product of the weight of the quadrature point and the value of $|\text{det} J_{\mu_T}(\hat x)|$. The implementation works also
for non-affine element transformation. The quadrature order should be increased by providing a value for `incrementorder` in the constructor.
Then the parameter function can be evaluated and finally the residual contributions for each test function are stored in the result object `r`.
#### Method `lambda_boundary`
For $F\in\mathcal{F}_h^{\partial\Omega}$ with $F\cap\Gamma_N\neq\emptyset$ and $(T_F^-,m)\in C(i)$ we obtain
\begin{equation*}
\begin{split}
\lambda_T^B(\phi_i) &= \int_{F} j v\,ds =
\int_{\hat F} j(\mu_F(s)) p_m^{\hat T}(\eta_F(s))
\sqrt{|\text{det} (J^T_{\mu_F}(s)J_{\mu_F}(s))|} \,ds
\end{split}
\end{equation*}
Because integration is over a face of codimension 1 now, two mappings are involved. The map $\mu_F$ maps the reference element $\hat F$ of $F$ into global coordinates while the map $\eta_F$ maps $\hat F$ into the reference element $\hat T$ of $T$. Also the integration element has to be redefined accordingly.
Again, all contributions of the face $F$ can be collected in a vector:
\begin{equation*}
(\mathcal{L}_T^B)_m =
\int_{\hat F} j(\mu_F(s)) p_m^{\hat T}(\eta_F(s))
\sqrt{|\text{det} J^T_{\mu_T}(s)J_{\mu_T}(s)|} \,ds .
\end{equation*}
The `lambda_boundary` implements the residual contributions due to Neumann boundary conditions.
It implements the term $\mathcal{L}_T^B$ and has the following interface:
```c++
template<typename IG, typename LFSV, typename R>
void lambda_boundary (const IG& ig, const LFSV& lfsv, R& r) const
```
The difference to `lambda_volume` is that now an intersection is provided as first argument.
The method begins by evaluating the type of the boundary condition at the midpoint of the edge:
```c++
auto localgeo = ig.geometryInInside();
auto facecenterlocal = referenceElement(localgeo).position(0,0);
bool isdirichlet = param.b(ig.intersection(),facecenterlocal);
```
To that end the center of the reference element of the intersection is computed in the variable `facecenterlocal` before the parameter function can be called.
If the boundary condition type evaluated at the face center is Dirichlet then the complete face is assumed to be part of the Dirichlet boundary:
```c++
// skip rest if we are on Dirichlet boundary
if (isdirichlet) return;
```
It is thus assumed that the mesh resolves all positions where the boundary type changes.
Now that we are on a Neumann boundary an appropriate quadrature rule is selected for integration:
```c++
auto globalgeo = ig.geometry();
const int order = incrementorder + 2*lfsv.finiteElement().localBasis().order();
auto rule = Dune::PDELab::quadratureRule(globalgeo,order);
```
And here is the integral over the face:
```c++
// loop over quadrature points and integrate normal flux
for (const auto& ip : rule)
{
// quadrature point in local coordinates of element
auto local = localgeo.global(ip.position());
// evaluate shape functions (assume Galerkin method)
auto& phihat = cache.evaluateFunction(local,lfsv.finiteElement().localBasis());
// integrate j
decltype(ip.weight()) factor = ip.weight()*globalgeo.integrationElement(ip.position());
auto j = param.j(ig.intersection(),ip.position());
for (size_t i=0; i<lfsv.size(); i++)
r.accumulate(lfsv,i,j*phihat[i]*factor);
}
```
Every quadrature point on the face needs to be mapped to the reference of the volume element for evaluation of the basis functions. The evaluation uses the basis function cache. Then the integration factor is computed and the contributions for all the test functions are accumulated.
#### Method `alpha_volume`
For any $(T,m)\in C(i)$ we get
\begin{equation*}
\begin{split}
\alpha_T^V(u_h,\phi_i) &= \int_T \nabla u \cdot \nabla \phi_i + q(u) \phi_i \,dx,
= \int_T \sum_j (z)_j \left(\nabla \phi_j \cdot \nabla \phi_i \right)
+ q\left( \sum_j (z)_j \phi_j \right) \phi_i \,dx,\\
&= \int_{\hat T} \sum_{n} (z)_{g_T(n)} (J_{\mu_T}^{-T}(\hat x) \hat\nabla p_n^{\hat T}(\hat x) )
\cdot (J_{\mu_T}^{-T}(\hat x) \hat\nabla p_m^{\hat T}(\hat x) ) \\
&\hspace{40mm}+ q\left( \sum_n (z)_{g_T(n)} p_n^{\hat T}(\hat x) \right) p_m^{\hat T}(\hat x)
|\text{det} J_{\mu_T}(\hat x)| \,d\hat x
\end{split}
\end{equation*}
Again contributions for all test functions can be collected in a vector
\begin{equation*}
\begin{split}
(\mathcal{R}_T^V(R_T z))_m &=
\sum_{n} (z)_{g_T(n)} \int_{\hat T} (J_{\mu_T}^{-T}(\hat x) \hat\nabla p_n^{\hat T}(\hat x) )
\cdot (J_{\mu_T}^{-T}(\hat x) \hat\nabla p_m^{\hat T}(\hat x) ) |\text{det} J_{\mu_T}(\hat x)| \,d\hat x\\
&\hspace{30mm}+ \int_{\hat T} q\left( \sum_n (z)_{g_T(n)} p_n^{\hat T}(\hat x) \right) p_m^{\hat T}(\hat x)
|\text{det} J_{\mu_T}(\hat x)| \,d\hat x
\end{split}
\end{equation*}
This method was already present in tutorial00. It implements the term $\mathcal{R}_T^V(R_T z)$ and its interface is
```c++
template<typename EG, typename LFSU, typename X,
typename LFSV, typename R>
void alpha_volume (const EG& eg, const LFSU& lfsu, const X& x,
const LFSV& lfsv, R& r) const
```
The method starts by extracting the space dimension and the floating point type to be used for computations:
```c++
const int dim = EG::Entity::dimension;
typedef decltype(Dune::PDELab::makeZeroBasisFieldValue(lfsu)) RF;
```
Then a quadrature rule is selected
```c++
auto geo = eg.geometry();
const int order = incrementorder + 2*lfsu.finiteElement().localBasis().order();
auto rule = Dune::PDELab::quadratureRule(geo,order);
```
and the quadrature loop is started
```c++
for (const auto& ip : rule)
{
```
Within the quadrature loop the basis functions are evaluated
```c++
auto& phihat = cache.evaluateFunction(ip.position(),lfsu.finiteElement().localBasis());
```
and the value of $u_h$ at the quadrature point is computed.
```c++
RF u=0.0;
for (size_t i=0; i<lfsu.size(); i++)
u += x(lfsu,i)*phihat[i];
```
Then the gradients of the basis functions on the reference element are evaluated via the evaluation cache:
```c++
auto& gradphihat = cache.evaluateJacobian(ip.position(), lfsu.finiteElement().localBasis());
```
Now the gradients need to be transformed from the reference element to the transformed element by multiplication with $J_{\mu_T}^{-1}(\hat x)$:
```c++
// transform gradients of shape functions to real element
const auto S = geo.jacobianInverseTransposed(ip.position());
auto gradphi = makeJacobianContainer(lfsu);
for (size_t i=0; i<lfsu.size(); i++)
S.mv(gradphihat[i][0],gradphi[i][0]);
```
Note that, as explained in tutorial00, DUNE allows basis functions in general to be vector valued. Therefore `gradphi[i][0]` contains the gradient (with $d$ components) of the component 0 of basis function number $i$.
Now $\nabla u_h$ can be computed
```c++
Dune::FieldVector<RF,dim> gradu(0.0);
for (size_t i=0; i<lfsu.size(); i++)
gradu.axpy(x(lfsu,i),gradphi[i][0]);
```
and we are in the position to finally compute the residual contributions:
```c++
// integrate (grad u)*grad phi_i + q(u)*phi_i
auto factor = ip.weight()*
geo.integrationElement(ip.position());
auto q = param.q(u);
for (size_t i=0; i<lfsu.size(); i++)
r.accumulate(lfsu,i,(gradu*gradphi[i][0]+
q*phihat[i])*factor);
}
```
Now with these definitions in place the evaluation of the algebraic residual is
\begin{equation}
R(z) =
\sum_{T\in\mathcal{T}_h} R_T^T \mathcal{R}_T^V(R_T z)
+ \sum_{T\in\mathcal{T}_h} R_T^T \mathcal{L}_T^V
+ \sum_{F\in\mathcal{F}_h^{\partial\Omega}\cap\Gamma_N} R_T^T \mathcal{L}_F^B
\label{eq:FinalResidualEvaluation}
\end{equation}
The Jacobian of the residual is
\begin{equation*}
(J(z))_{i,j} = \frac{\partial R_i}{\partial z_j} (z) =
\sum_{(T,m,n) : (T,m)\in C(i) \wedge (T,n)\in C(j)} \frac{\partial (\mathcal{R}_T^V)_m}{\partial z_n}
(R_T z)
\end{equation*}
Note that:
1. Entries of the Jacobian can be computed element by element.
2. The derivative is independent of the $\lambda$-terms as they only depend on the test functions.
3. In the implementation below the Jacobian is computed numerically by finite differences. This can be achieved automatically by deriving from an additional base class.
# Exercises
## Warming Up
1. Try out different polynomial degrees and values for $\eta$. The parameter $\eta$ can be changed in the ini file 'tutorial01.ini'. The polynomial degree can be changed from within the notebook.
Here are some suggestions that could be interesting:
- use higher values for $\eta$
- Try all combinations of `degree=1|2` and `subsampling=1|2`. Look at your solutions using paraview and the `warp by scalar` filter. You can see the underlying grid by choosing `surface with edges` instead of `surface` in the paraview drop down menu. How does subsampling change the output?
- *example for influence of nonlinearity $\eta$ on solution.
Chosen parameters: `degree = 1` `uggrid`*
- *example for different choices of degree and subsampling for $\eta = 10$*
*The subsampling divides each cell into four additional cells, but only virtually for the visualization; the actual grid stays the same. Since ParaView visualizes either the value of a cell or the values at the vertices, a further subdivision for the visualization prevents the loss of accuracy when polynomials of higher order are employed. This is especially important when ParaView is used for the post-processing of data. A subsampling higher than the polynomial degree is thus not sensible and only increases the required memory.*
2. Try to use `yasp` grid instead of `ug` grid and visualize the solution with paraview. You can use the structured grid factory as presented in the grid-exercise.
- *The solution can be found within the notebook [(go to solution)](#yasp)*.
3. Try using a 3D grid implementation. For ug grid, another gmsh file needs to be provided through the ini file.
- *change the dimension to 3 at the beginning of the notebook [(go to cell)](#dim)*. This already suffices for using a 3D yasp grid.
- *for ug grid: use unitcube.msh instead of unitsquare.msh, i.e. change section of ini file that is read from `grid.twod` to `grid.threed`.*
```c++
using Grid = Dune::UGGrid<dim>;
std::string filename = ptree.get("grid.threed.filename", //3D
"unitcube.msh");
```
- *example 3D output*
4. It is easy to implement different nonlinearities. Use $q(u)=\exp(\eta u)$ by adjusting the file `problem.hh`.
- *changes in 'problem.hh'*:
```c++
Number q (Number u) const
{
// return eta*u*u;
return std::exp(eta*u);
}
//! derivative of nonlinearity
Number qprime (Number u) const
{
// return 2*eta*u;
return eta*std::exp(eta*u);
}
```
5. Go back to $q(u)=\eta u^2$. Now we want to see how good our approximation is. Change the function $f(x)$ in the file problem.hh to $f(x)=-2d+\eta(\sum_{i=1}^d(x)_i^2)^2$ where $d$ is the dimension (and therefore size of $x$). Then $u(x)=\sum_{i=1}^d(x)_i^2=g(x)$ is the exact solution. Visualize the exact solution like it is done in tutorial 00. Start with `degree = 1`, `eta = 10` and `subsampling = 1`. Use paraview to see how the maximal error $\max|u-u_h|$ behaves for different refinement levels `refinement=1|...|5`. Additionally you can calculate the L2-norm of the error, as before in tutorial 00. Then try again for `degree=2`. What happens here?
- *change of function f in `problem.hh`*:
```c++
//! right hand side
template<typename E, typename X>
Number f (const E& e, const X& x) const
{
// return -2.0*x.size();
// right hand side where g(x) is exact solution
auto global = e.geometry().global(x);
Number s=0.0;
for (std::size_t i=0; i<global.size(); i++) s+=global[i]*global[i];
return -2.0*x.size()+eta*s*s;
}
```
- *The visualization for the exact solution is embedded in the notebook [(go to solution)](#exact).*
- *The following table shows the maximum norm,which is given as $||f||_{\max} := \max|f(x)|$ and the L2-norm as $||f||_{0,\Omega} = \left( \int_{\Omega} f(x)^2 dx \right)^{1/2} $ for different mesh refinements. The L2 error is depicted in the graph on the right side.*
| refinement | $\lVert u-u_h \rVert_{\max}$ | $\lVert u-u_h \rVert_{0,\Omega}$ |
| --- | --- | --- |
| 1 | 0.000784874 | 0.000232161 |
| 2 | 0.000241339 | 5.51530598e-05 |
| 3 | 7.14064e-05 | 1.35100333e-05 |
| 4 | 2.18749e-05 | 3.35459387e-06 |
| 5 | 6.61612e-06 | 8.37383425e-07 |

*(the accompanying plot of the L2 error is not shown here)*
- *For `degree = 2` the finite element solution $u_h$ matches the exact solution $u$*.
## Nitsche's Method for weak Dirichlet Boundary Conditions
In this exercise we want to implement Dirichlet boundary conditions in a weak sense by using Nitsche's method. Instead of incorporating the Dirichlet boundary condition into the Ansatzspace we modify the residual:
\begin{align*}
r^{\text{Nitsche}}(u,v) &= \int_\Omega \nabla u \cdot \nabla v + (q(u)-f)v\,dx + \int_{\Gamma_N} jv\,ds \\
&\quad - \int_{\Gamma_D} \nabla u \cdot\nu v\,ds - \int_{\Gamma_D} (u-g)\nabla v \cdot\nu\,ds
+ \eta_{stab} \int_{\Gamma_D} (u-g)v\,ds.
\end{align*}
Here $\eta_{stab}$ denotes a stabilization parameter that should be equal to $\eta_{stab}=c/h$ for a constant $c>0$ large enough. This stabilization term is needed to ensure coercivity of the bilinear form.
In order to implement this method you have to do the following:
- include the new file `nitschenonlinearpoissonfem` instead of `nonlinearpoissonfem`
- you have to turn off the constraints. The code is already there; you just have to comment/uncomment the parts marked with
```c++
//== Exercise 2 {
...
// ...
//== }
```
- By changing these lines you use no constraints, an empty constraints container and construct the grid operator without constraints. Besides that you use the new `NitscheNonlinearPoissonFEM` local operator that expects the stabilization parameter $\eta_{stab}$.
- the key part is adding the `alpha_boundary` method to the new local operator in the file `nitschenonlinearpoissonfem.hh`. Take a close look at the `lambda_volume`, `lambda_boundary` and `alpha_volume` methods and you should be on your way.
*Hint:* The code for generating the transformation is already there:
```c++
// transform gradients of shape functions to real element
const auto S = geo_inside.jacobianInverseTransposed(local);
```
*Note*: Remember that the quadrature points need to be mapped to the reference element of the inside cell for the evaluation of the basis functions.
- *solution for method `alpha_boundary`
The interface for the method `alpha_boundary` is basically the same as for the previously described method `alpha_volume`, only this time an intersection is provided instead of an entity.*
```c++
//! boundary integral depending on test and ansatz functions
template<typename IG, typename LFSU, typename X,
typename LFSV, typename R>
void alpha_boundary (const IG& ig, const LFSU& lfsu, const X& x,
const LFSV& lfsv, R& r) const
```
*As in `lambda_boundary` the method begins by evaluating the type of the boundary condition at the midpoint of the edge:*
```c++
{
// evaluate boundary condition type
auto localgeo = ig.geometryInInside();
auto facecenterlocal =
referenceElement(localgeo).position(0,0);
bool isdirichlet=param.b(ig.intersection(),facecenterlocal);
// skip rest if we are _not_ on Dirichlet boundary
if (!isdirichlet) return;
```
*Reminder: Here, `localgeo` provides the mapping from the reference element of the intersection to the reference element of the inside element.*
*Note: In order to get the center of the reference element of the intersection both `localgeo` and `geo` can be used, as the method `referenceElement()` returns for both the reference element of the intersection.*
*Followed by the definition of usefull types and the choice of quadrature rule, as described before:*
```c++
// types & dimension
const int dim = IG::Entity::dimension;
using RF = decltype(Dune::PDELab::makeZeroBasisFieldValue(lfsu));
// select quadrature rule
auto geo = ig.geometry();
const int order = incrementorder+
2*lfsu.finiteElement().localBasis().order();
auto rule = Dune::PDELab::quadratureRule(geo,order);
// geometry of inside cell
auto geo_inside = ig.inside().geometry();
```
*As described for the method `lambda_volume`, every quadrature point on the face needs to be mapped to the reference element of the volume element for the evaluation of the basis functions. This is accomplished by the method `global()`. The evaluation uses the basis function cache.*
```c++
// loop over quadrature points
for (const auto& ip : rule)
{
// quadrature point in local coordinates of element
auto local = localgeo.global(ip.position());
// evaluate basis functions
auto& phihat = cache.evaluateFunction(local,
lfsu.finiteElement().localBasis());
```
*The evaluation of u, the gradient of shape functions and the the gradient of u is the same as described in `alpha_volume`.The function `jacobianInverseTransposed` is provided with the integration point `ip` in local coordinates `local` of the reference element of the inside cell.*
```c++
// evaluate u
RF u=0.0;
for (size_t i=0; i<lfsu.size(); i++)
u += x(lfsu,i)*phihat[i];
// evaluate gradient of shape functions
auto& gradphihat = cache.evaluateJacobian(local,
lfsu.finiteElement().localBasis());
// transform gradients of shape functions to real element
const auto S = geo_inside.jacobianInverseTransposed(local);
auto gradphi = makeJacobianContainer(lfsu);
for (size_t i=0; i<lfsu.size(); i++)
S.mv(gradphihat[i][0],gradphi[i][0]);
// compute gradient of u
Dune::FieldVector<RF,dim> gradu(0.0);
for (size_t i=0; i<lfsu.size(); i++)
gradu.axpy(x(lfsu,i),gradphi[i][0]);
```
*Additionally, the value of the Dirichlet BC and the unit outer normal vector are required for the computation of the integral*
```c++
// get unit outer normal vector and g
auto n = ig.unitOuterNormal(ip.position());
auto g = param.g(ig.intersection(),ip.position());
```
*Finally, the integration factor is computed and the contributions for all the test functions are accumulated.
Note: `geo` maps the coordinates from the reference element of the intersection to global coordinates, so the provided argument is `ip.position()` (and not `local`).*
```c++
// integrate -(grad u)*n * phi_i + q(u)*phi_i
auto factor = ip.weight()*
geo.integrationElement(ip.position());
for (size_t i=0; i<lfsu.size(); i++)
r.accumulate(lfsu,i,(-(gradu*n)*phihat[i]-
(u-g)*(gradphi[i][0]*n)+
stab*(u-g)*phihat[i])*factor);
}
}
```
When you have done all that, test your implementation. Use the test case from exercise 1 with $f(x)=-2d+\eta(\sum_{i=1}^d(x)_i^2)^2$ and exact solution $u(x)=\sum_{i=1}^d(x)_i^2=g(x)$ and compare it to your approximation.
Introduce the parameter
```ini
[fem]
stab = 100
```
in the ini file and look at the maximal error $\max|u-u_h|$ for `stab=10|100|1000.`
- *$\eta_{stab}$ needs to be large enough to ensure coercivity. For too small $\eta_{stab}$ the error is dominated by the error on the boundaries, for sufficiently large $\eta_{stab}$ the error is distributed as before.*
- *Error for Dirichlet boundary conditions in the ansatz space and in the residual.*
| 59e9ba422b151766c8f1ffe2cf388e10b6b2213b | 71,976 | ipynb | Jupyter Notebook | notebooks/tutorial01/pdelab-tutorial01.ipynb | dokempf/dune-jupyter-course | 1da9c0c2a056952a738e8c7f5aa5aa00fb59442c | [
"BSD-3-Clause"
]
| 1 | 2022-01-21T03:16:12.000Z | 2022-01-21T03:16:12.000Z | notebooks/tutorial01/pdelab-tutorial01.ipynb | dokempf/dune-jupyter-course | 1da9c0c2a056952a738e8c7f5aa5aa00fb59442c | [
"BSD-3-Clause"
]
| 21 | 2021-04-22T13:52:59.000Z | 2021-10-04T13:31:59.000Z | notebooks/tutorial01/pdelab-tutorial01.ipynb | dokempf/dune-jupyter-course | 1da9c0c2a056952a738e8c7f5aa5aa00fb59442c | [
"BSD-3-Clause"
]
| 1 | 2021-04-21T08:20:02.000Z | 2021-04-21T08:20:02.000Z | 29.270435 | 676 | 0.56027 | true | 11,588 | Qwen/Qwen-72B | 1. YES
2. YES | 0.855851 | 0.771844 | 0.660583 | __label__eng_Latn | 0.974397 | 0.373087 |
# TSSL Lab 1 - Autoregressive models
We load a few packages that are useful for solving this lab assignment.
```python
import pandas as pd # Loading data / handling data frames
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
from sklearn import linear_model as lm # Used for solving linear regression problems
from sklearn.neural_network import MLPRegressor # Used for NAR model
from tssltools_lab1 import acf, acfplot # Module available in LISAM - Used for plotting ACF
```
## 1.1 Loading, plotting and detrending data
In this lab we will build autoregressive models for a data set corresponding to the Global Mean Sea Level (GMSL) over the past few decades. The data is taken from https://climate.nasa.gov/vital-signs/sea-level/ and is available on LISAM in the file `sealevel.csv`.
**Q1**: Load the data and plot the GMSL versus time. How many observations are there in total in this data set?
_Hint:_ With pandas you can use the function `pandas.read_csv` to read the csv file into a data frame. Plotting the time series can be done using `pyplot`. Note that the sea level data is stored in the 'GMSL' column and the time when each data point was recorded is stored in the column 'Year'.
**A1**:
```python
seaLevelData = pd.read_csv("sealevel.csv")
```
```python
#plot of GMSL vs Time
plt.plot(seaLevelData['Year'] , seaLevelData['GMSL'])
plt.xlabel("Time")
plt.ylabel("GMSL")
plt.suptitle("GMSL VS Time")
plt.show()
```
```python
#total number of observation
nObs = seaLevelData.shape[0]
print(f"Total number of obervation in given data is {nObs}")
```
Total number of observations in the given data is 997
**Q2**: The data has a clear upward trend. Before fitting an AR model to this data need to remove this trend. Explain, using one or two sentences, why this is necessary.
**A2:** The given data have a clear upward trend, which means the mean changes over time. To apply an AR model we need data with a constant mean, so we first have to detrend the data in order to obtain a (closer to) stationary series suitable for AR modelling. Another benefit is that detrending makes it easier to discover other aspects of the data, such as the autocorrelations and dependencies between the variables.
**Q3** Detrend the data following these steps:
1. Fit a straight line, $\mu_t=\theta_0 + \theta_1 u_t $ to the data based on the method of least squares. Here, $u_t$ is the time point when obervation $t$ was recorded.
_Hint:_ You can use `lm.LinearRegression().fit(...)` from scikit-learn. Note that the inputs need to be passed as a 2D array.
Before going on to the next step, plot your fitted line and the data in one figure.
2. Subtract the fitted line from $y_t$ for the whole data series and plot the deviations from the straight line.
**From now, we will use the detrended data in all parts of the lab.**
_Note:_ The GMSL data is recorded at regular time intervals, so that $u_{t+1} - u_t = $ const. Therefore, you can just as well use $t$ directly in the linear regression function if you prefer, $\mu_t=\theta_0 + \theta_1 t $.
**A3:**
```python
x = seaLevelData['Year'].to_numpy().reshape((-1 ,1))
y = seaLevelData['GMSL'].to_numpy()
model_lm = lm.LinearRegression().fit(x , y)
```
```python
theta0 = model_lm.intercept_
theta1 = model_lm.coef_
#Y_Pred = theta0 + theta1 * x
Y_Pred = model_lm.predict(x)
plt.plot(seaLevelData['Year'] , seaLevelData['GMSL'] , label = "Original Data")
plt.plot(x, Y_Pred, color='red' , label = "Fitted Line")
plt.xlabel("Time")
plt.ylabel("GMSL")
plt.legend()
plt.show()
```
```python
detrededData = y - Y_Pred
plt.plot(x, Y_Pred, color='red' , label = "Fitted Line")
plt.plot(x, detrededData , color='blue' , label = "Detrended Data")
plt.xlabel("Time")
plt.ylabel("GMSL")
plt.suptitle("Detrended Data GMSL Vs Time")
plt.legend()
plt.show()
```
**Q4:** Split the (detrended) time series into training and validation sets. Use the values from the beginning up to the 700th time point (i.e. $y_t$ for $t=1$ to $t=700$) as your training data, and the rest of the values as your validation data. Plot the two data sets.
_Note:_ In the above, we have allowed ourselves to use all the available data (train + validation) when detrending. An alternative would be to use only the training data also when detrending the model. The latter approach is more suitable if, either:
* we view the linear detrending as part of the model choice. Perhaps we wish to compare different polynomial trend models, and evaluate their performance on the validation data, or
* we wish to use the second chunk of observations to estimate the performance of the final model on unseen data (in that case it is often referred to as "test data" instead of "validation data"), in which case we should not use these observations when fitting the model, including the detrending step.
In this laboration we consider the linear detrending as a predetermined preprocessing step and therefore allow ourselves to use the validation data when computing the linear trend.
**A4:**
```python
trainDataY = detrededData[0:700 , ]
ValidDataY = detrededData[700: , ]
trainDataX = seaLevelData['Year'][0:700]
ValidDataX = seaLevelData['Year'][700:]
```
```python
plt.plot(trainDataX, trainDataY, color='blue')
plt.xlabel("Time")
plt.ylabel("GMSL")
plt.suptitle("Training Data")
plt.show()
plt.plot(ValidDataX, ValidDataY, color='blue')
plt.xlabel("Time")
plt.ylabel("GMSL")
plt.suptitle("Validation Data")
plt.show()
```
## 1.2 Fit an autoregressive model
We will now fit an AR$(p)$ model to the training data for a given value of the model order $p$.
**Q5**: Create a function that fits an AR$(p)$ model for an arbitrary value of p. Use this function to fit a model of order $p=10$ to the training data and write out (or plot) the coefficients.
_Hint:_ Since fitting an AR model is essentially just a standard linear regression we can make use of `lm.LinearRegression().fit(...)` similarly to above. You may use the template below and simply fill in the missing code.
**A5:**
```python
def fit_ar(y, p):
"""
Fits an AR(p) model. The loss function is the sum of squared errors from t=p+1 to t=n.
:param y: array (n,), training data points
:param p: int, AR model order
:return theta: array (p,), learnt AR coefficients
"""
# Number of training data points
n = y.shape[0]
# Construct the regression matrix
    Phi = np.zeros((n-p, p))  # float regression matrix; an integer dtype would truncate the data
for j in range(p):
Phi[:,j] = y[j:n+j-p]
# Drop the first p values from the target vector y
yy = y[p:] # yy = (y_{t+p+1}, ..., y_n)
# Here we use fit_intercept=False since we do not want to include an intercept term in the AR model
regr = lm.LinearRegression(fit_intercept=False)
regr.fit(Phi,yy)
return regr.coef_
```
```python
#Calculating theta_hat from training data
#set p = 10 for an AR(10) model and use the detrended training data
theta_hat = fit_ar( y = trainDataY , p = 10 )
plt.plot(range(len(theta_hat)), theta_hat, color='blue')
plt.ylabel("Theta")
plt.xlabel("Index")
plt.suptitle("Theta Coef")
plt.show()
```
**Q6:** Next, write a function that computes the one-step-ahead prediction of your fitted model. 'One-step-ahead' here means that in order to predict $y_t$ at $t=t_0$, we use the actual values of $y_t$ for $t<t_0$ from the data. Use your function to compute the predictions for both *training data* and *validation data*. Plot the predictions together with the data (you can plot both training and validation data in the same figure). Also plot the *residuals*.
_Hint:_ It is enough to call the predict function once, for both training and validation data at the same time.
**A6:**
```python
def predict_ar_1step(theta, y_target):
"""Predicts the value y_t for t = p+1, ..., n, for an AR(p) model, based on the data in y_target using
one-step-ahead prediction.
:param theta: array (p,), AR coefficients, theta=(a1,a2,...,ap).
:param y_target: array (n,), the data points used to compute the predictions.
:return y_pred: array (n-p,), the one-step predictions (\hat y_{p+1}, ...., \hat y_n)
"""
n = len(y_target)
p = len(theta)
# Number of steps in prediction
m = n-p
y_pred = np.zeros(m)
    for i in range(m):
        # One-step-ahead prediction: use the p observed values immediately preceding
        # time t, ordered exactly as the columns of the regression matrix in fit_ar
        y_pred[i] = y_target[i:p+i].dot(theta)
return y_pred
```
```python
YPred_Train = predict_ar_1step(theta = theta_hat , y_target = trainDataY)
YPred_Valid = predict_ar_1step(theta = theta_hat , y_target = ValidDataY)
```
```python
#plot for predicted values for train and validation data
plt.plot(trainDataX , trainDataY , label = "Training Data")
plt.plot(trainDataX[len(theta_hat):,] , YPred_Train , label = "Prediction for Training Data")
plt.plot(ValidDataX , ValidDataY , label = "Validation Data")
plt.plot(ValidDataX[len(theta_hat):,] , YPred_Valid , label = "Prediction for Validation Data")
#figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')
plt.xlabel("Time")
plt.ylabel("GMSL")
plt.legend()
plt.show()
```
**Q7:** Compute and plot the autocorrelation function (ACF) of the *residuals* only for the *validation data*. What conclusions can you draw from the ACF plot?
_Hint:_ You can use the function `acfplot` from the `tssltools` module, available on the course web page.
**A7:**
```python
help(acfplot)
```
Help on function acfplot in module tssltools_lab1:
acfplot(x, lags=None, conf=0.95)
Plots the empirical autocorralation function.
:param x: array (n,), sequence of data points
:param lags: int, maximum lag to compute the ACF for. If None, this is set to n-1. Default is None.
:param conf: float, number in the interval [0,1] which specifies the confidence level (based on a central limit
theorem under a white noise assumption) for two dashed lines drawn in the plot. Default is 0.95.
:return:
The plot indicates that the sign of the autocorrelation changes across lags. Moreover, the decay in the ACF is relatively slow, meaning the process takes a long time to forget its past.
```python
resiValidation = ValidDataY[10:] - YPred_Valid
AutoCoR = acfplot(x = resiValidation)
```
## 1.3 Model validation and order selection
Above we set the model order $p=10$ quite arbitrarily. In this section we will try to find an appropriate order by validation.
**Q8**: Write a loop in which AR-models of orders from $p=2$ to $p=150$ are fitted to the data above. Plot the training and validation mean-squared errors for the one-step-ahead predictions versus the model order.
Based on your results:
- What is the main difference between the changes in training error and validation error as the order increases?
- Based on these results, which model order would you suggest to use and why?
_Note:_ There is no obvious "correct answer" to the second question, but you still need to pick an order and motivate your choice!
```python
MSE_Train = []
MSE_Valid = []
# def ModelCompare(y,p):
# theta_hat = fit_ar(y, p)
# predY = predict_ar_1step(theta = theta_hat , y_target = y)
# residuals = y[len(theta_hat):] - predY
# mse = sum(residuals**2) / len(residuals)
# return mse
for i in range(2,151):
#calculate theta_hat
theta_hat = fit_ar(trainDataY, i)
#pred for train Data
predYtrain = predict_ar_1step(theta = theta_hat , y_target = trainDataY)
residualsYtrain = trainDataY[len(theta_hat):] - predYtrain
MSE_Train.append(sum(residualsYtrain**2) / len(residualsYtrain))
#pred for Valid Data
predYValid = predict_ar_1step(theta = theta_hat , y_target = ValidDataY)
residualsYvalid = ValidDataY[len(theta_hat):] - predYValid
MSE_Valid.append(sum(residualsYvalid**2) / len(residualsYvalid))
```
```python
plt.plot(range(2,151) , MSE_Train , label = "MSE Train Data")
plt.plot(range(2,151) , MSE_Valid , label = "MSE Valid Data")
plt.xlabel("p")
plt.ylabel("MSE")
plt.suptitle("MSE vs Model Order(P)")
plt.legend()
plt.show()
```
**A8:**
Looking at the plot of MSE versus model order $p$, we see that the training MSE decreases steadily as the order increases, while the validation MSE only changes marginally, fluctuating between roughly 4.6 and 5.
Around model order 107 both errors drop, and beyond that point the validation MSE starts to increase again.
Based on these results we pick a model order of about $p = 107$, since it attains the lowest validation error before the curve turns upward.
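As a quick cross-check of this visual choice, the snippet below (a sketch reusing the `MSE_Valid` list computed above; the `+2` offset is needed because the orders start at $p=2$) simply reads off the order with the lowest validation MSE.
```python
# Sketch: order with the lowest validation MSE (orders start at p = 2)
best_p = int(np.argmin(MSE_Valid)) + 2
print("Order with lowest validation MSE:", best_p, "MSE:", MSE_Valid[best_p - 2])
```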
**Q9:** Based on the chosen model order, compute the residuals of the one-step-ahead predictions on the *validation data*. Plot the autocorrelation function of the residuals. What conclusions can you draw? Compare to the ACF plot generated above for p=10.
```python
#ACF of the validation residuals at the chosen model order p = 107
#calculate theta_hat
theta_hat = fit_ar(trainDataY, 107)
#predictions for the validation data
predYValid = predict_ar_1step(theta = theta_hat , y_target = ValidDataY)
residualsValid107 = ValidDataY[len(theta_hat):] - predYValid
AutoCoR107 = acfplot(x = residualsValid107)
```
The plot suggests a lower ACF than in the $p=10$ case. With the much higher order, 107 previous values are used for every prediction, so more of the serial dependence is absorbed by the model and less of it remains in the residuals.
## 1.4 Long-range predictions
So far we have only considered one-step-ahead predictions. However, in many practical applications it is of interest to use the model to predict further into the future. For instance, for the sea level data studied in this laboration, it is more interesting to predict the level one year from now than just 10 days ahead (10 days = 1 time step in this data).
**Q10**:
Write a function that simulates the value of an AR($p$) model $m$ steps into the future, conditionally on an initial sequence of data points. Specifically, given $y_{1:n}$ with $n\geq p$ the function/code should predict the values
\begin{align}
\hat y_{t|n} &= \mathbb{E}[y_{t} | y_{1:n}], & t&=n+1,\dots,n+m.
\end{align}
Use this to predict the values for the validation data ($y_{701:997}$) conditionally on the training data ($y_{1:700}$) and plot the result.
_Hint:_ Use the pseudo-code derived at the first pen-and-paper session.
**A10:**
```python
def simulate_ar(y, theta, m):
"""Simulates an AR(p) model for m steps, with initial condition given by the last p values of y
:param y: array (n,) with n>=p. The last p values are used to initialize the simulation.
:param theta: array (p,). AR model parameters,
:param m: int, number of time steps to simulate the model for.
"""
p = len(theta)
y_sim = np.zeros(m)
phi = np.flip(y[-p:].copy()) # (y_{n-1}, ..., y_{n-p})^T - note that y[ntrain-1] is the last training data point
#Last Element Index which is to be removed
#popOutIndex = len(phi) - 1
theta = np.flip(theta.copy())
for i in range(m):
y_sim[i] = phi.dot(theta)
#phi = np.append(y_sim[i] , np.delete(phi , popOutIndex))
phi = np.append(y_sim[i] , np.delete(phi , -1))
return y_sim
```
```python
theta_Gen = fit_ar(y = trainDataY , p = 27)
m = 997 - 700
ySimPredValid = simulate_ar( y = trainDataY , theta = theta_Gen , m = m)
plt.plot(ValidDataX , ValidDataY , label = "Original Validation Data")
plt.plot(ValidDataX , ySimPredValid , label = "Simulated Validation Data")
#figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')
plt.xlabel("Time")
plt.ylabel("GMSL")
plt.legend()
plt.show()
```
**Q11:** Using the same function as above, try to simulate the process for a large number of time steps (say, $m=2000$). You should see that the predicted values eventually converge to a constant prediction of zero. Is this something that you would expect to see in general? Explain the result.
**A11**
```python
ySimLargeM = simulate_ar( y = trainDataY , theta = theta_Gen , m = 2000)
plt.plot(range(2000) , ySimLargeM , label = "Simulated Data with m = 2000")
plt.xlabel("Time")
plt.ylabel("GMSL")
plt.legend()
plt.show()
```
```python
AutoCoR_YSimLarge = acfplot(x = ySimLargeM)
```
For such a long horizon the simulated values are eventually computed entirely from previously predicted values: the simulation returns the conditional expectation, and since the noise has zero mean it contributes nothing to that expectation. Because the fitted AR model is stable, the predictions therefore decay towards the unconditional mean of the detrended series, which is zero.
The ACF plot is consistent with this: at large lags (around 1000 and beyond) the simulated path is essentially constant at zero. This behaviour is exactly what we should expect in general whenever the estimated AR recursion is stable and the noise has zero mean; an unstable fit could instead produce diverging long-range predictions.
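This can be verified directly from the estimated coefficients: the simulated forecasts decay to zero precisely when all eigenvalues of the companion matrix of the fitted AR recursion lie strictly inside the unit circle. The sketch below checks this for `theta_Gen`; the flip reflects that `fit_ar` orders the coefficients from the oldest to the newest lag.
```python
# Sketch: stability check for the fitted AR(p) model
p = len(theta_Gen)
C = np.zeros((p, p))
C[0, :] = np.flip(theta_Gen)      # coefficients on lags 1, ..., p
C[1:, :-1] = np.eye(p - 1)        # shift structure of the companion matrix
print("largest eigenvalue modulus:", np.abs(np.linalg.eigvals(C)).max())  # < 1 => forecasts decay to zero
```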
## 1.5 Nonlinear AR model
In this part, we switch to a nonlinear autoregressive (NAR) model, which is based on a feedforward neural network. This means that in this model the recursive equation for making predictions is still in the form $\hat y_t=f_\theta(y_{t-1},...,y_{t-p})$, but this time $f$ is a nonlinear function learned by the neural network. Fortunately almost all of the work for implementing the neural network and training it is handled by the `scikit-learn` package with a few lines of code, and we just need to choose the right structure, and prepare the input-output data.
**Q12**: Construct a NAR($p$) model with a feedforward (MLP) network, by using the `MLPRegressor` class from `scikit-learn`. Set $p$ to the same value as you chose for the linear AR model above. Initially, you can use an MLP with a single hidden layer consisting of 10 hidden neurons.
Train it using the same training data as above and plot the one-step-ahead predictions as well as the residuals, on both the training and validation data.
_Hint:_ You will need the methods `fit` and `predict` of `MLPRegressor`. Read the user guide of `scikit-learn` for more details. Recall that a NAR model is conceptually very similar to an AR model, so you can reuse part of the code from above.
**A12:**
```python
def phiCal(y , p):
n = y.shape[0]
    Phi = np.zeros((n-p, p))   # use a float matrix; an integer dtype would truncate the real-valued data
for j in range(p):
Phi[:,j] = y[j:(n+j-p)]
return Phi
p = 108
yModel = trainDataY[p:]
model = MLPRegressor(hidden_layer_sizes=(10,) , activation= "relu")
reg = model.fit(phiCal(y = trainDataY , p = 108), yModel)
n = trainDataY.shape[0]
dataValid_Y = np.append(trainDataY[(n - p): ] , ValidDataY)
predictedVal = reg.predict(phiCal(y = dataValid_Y , p = 108))
```
/usr/local/lib/python3.6/dist-packages/sklearn/neural_network/_multilayer_perceptron.py:571: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.
% self.max_iter, ConvergenceWarning)
```python
reg.coefs_[0].shape
```
(108, 10)
```python
plt.plot(ValidDataX , predictedVal , label = "NAR Prediction with p = 108")
plt.plot(ValidDataX , ValidDataY , label = "Original Data")
plt.xlabel("Time")
plt.ylabel("GMSL")
plt.legend()
plt.show()
```
```python
NARresidual = ValidDataY - predictedVal
plt.plot(ValidDataX , ValidDataY , label = "Original Data")
plt.plot(ValidDataX , NARresidual , label = "Residuals")
plt.xlabel("Time")
plt.ylabel("GMSL")
plt.legend()
plt.show()
```
**Q13:** Try to experiment with different choices for the hyperparameters of the network (e.g. number of hidden layers and units per layer, activation function, etc.) and the optimizer (e.g. `solver` and `max_iter`).
Are you satisfied with the results? Why/why not? Discuss what the limitations of this approach might be.
**A13:**
We tried several settings. With a larger number of hidden layers and other, bigger configurations the training residuals became very small while the validation residuals showed high variance, which suggests overfitting.
The model below captures the nonlinearity in the data with comparatively lower validation residuals and therefore performs better.
Of the two models compared, the one below is the more satisfactory, although its validation residuals are still fairly large.
```python
model = MLPRegressor(hidden_layer_sizes=(30,10) , activation= "tanh" ,
solver="sgd" , learning_rate = "adaptive" , max_iter = 200)
reg = model.fit(phiCal(y = trainDataY , p = 108), yModel)
n = trainDataY.shape[0]
#dataValid_Y = np.append(trainDataY[(n - p): ] , ValidDataY)
predictTrainData = reg.predict(phiCal(y = trainDataY , p = 108))
predictTrainDataResidual = trainDataY[p:] - predictTrainData
predictedVal = reg.predict(phiCal(y = dataValid_Y , p = 108))
NARresidualOpt = ValidDataY - predictedVal
plt.plot(ValidDataX , NARresidual , label = "Pre Changes Residuals A12 ")
plt.plot(ValidDataX , NARresidualOpt , label = "Post Changes Residuals A13")
plt.plot(trainDataX[p:] , predictTrainDataResidual , label = "Train Pred Residuals")
plt.xlabel("Time")
plt.ylabel("GMSL")
plt.legend()
plt.show()
```
```python
plt.plot(ValidDataX , NARresidual , label = "Pre Changes Residuals :: A12 ")
plt.plot(ValidDataX , NARresidualOpt , label = "Post Changes Residuals :: A13")
plt.xlabel("Time")
plt.ylabel("GMSL")
plt.legend()
plt.show()
```
*[End of notebook: Task1/tssl_lab1.ipynb, repository amannayak/Time-Series-and-Sequence-Analysis, MIT license]*
```python
%matplotlib notebook
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
# The Axes3D import registers the 3D projection, enabling projection='3d' in add_subplot
import matplotlib.pyplot as plt
import random
```
```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
import matplotlib.cm as cm
from IPython.display import display, Math, clear_output
import sympy
from sympy import *
from sympy.physics.vector import ReferenceFrame, CoordinateSym
from sympy.vector import CoordSys3D, divergence, curl
import ipyvolume as ipv
import time
from ipywidgets import Output, interact
import ipywidgets as widgets
np.seterr(divide='ignore', invalid='ignore')
init_printing()
```
## Earnshaw's Theorem: 8 point charges at the vertices of a cube, with a test charge at the center. Is it stable?
```python
# create a X,Y grid
x = np.linspace(0, 100, 300)
y = np.linspace(0, 100, 300)
X, Y = np.meshgrid(x, y)
```
```python
# calculate the electric potential at an (x, y, z) point due to 8 point charges
# at the vertices of a cube (100x100x100)
def V(x,y,z):
v = 0
for crds in [(0,0,0), (0,0,100), (0, 100, 0), (100,0,0), (100,100,0), (100,0,100), (0,100,100), (100,100,100)]:
v += 1/np.sqrt( (x-crds[0])**2 + (y-crds[1])**2 + (z-crds[2])**2 )
return v
```
```python
# set z fixed in the center of the cube, get potential versus x,y
v_res = V(X,Y,50)
```
```python
from mpl_toolkits import mplot3d
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.plot_surface(X, Y, v_res, rstride=1, cstride=1,
cmap='viridis', edgecolor='none')
```
<IPython.core.display.Javascript object>
<mpl_toolkits.mplot3d.art3d.Poly3DCollection at 0xb27c29438>
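With the surface plot in hand, a small numerical sanity check (a sketch; the step size `h` and the probe point are arbitrary choices) makes the connection to Earnshaw's theorem explicit: in the charge-free interior the potential is harmonic, so a central-difference estimate of its Laplacian at the cube's center should be approximately zero, and a harmonic function cannot attain an interior minimum, so the equilibrium cannot be stable.
```python
# Numerical sanity check: estimate the Laplacian of V at the cube's center.
# The sum of the three second differences should be approximately zero.
h, c = 5.0, 50.0
d2 = [(V(c + h, c, c) - 2*V(c, c, c) + V(c - h, c, c)) / h**2,
      (V(c, c + h, c) - 2*V(c, c, c) + V(c, c - h, c)) / h**2,
      (V(c, c, c + h) - 2*V(c, c, c) + V(c, c, c - h)) / h**2]
print("second differences:", d2, "approximate Laplacian:", sum(d2))
```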
*[End of notebook: earnshaw.ipynb, repository lucask07/teaching-notebooks, MIT license]*
# Estimators as Statistical Decision Functions
**February 2017**
In this notebook we discuss some key concepts of statistical decision theory in order to provide a general framework for the comparison of alternative estimators based on their finite sample performance.
* The primitive object is a statistical decision problem containing a loss function, an action space, and a set of assumed statistical models. We present estimation problems familiar from econometrics as special cases of statistical decision problems. The common framework helps highlighting similarities and differences.
* We compare estimators based on their (finite sample) risk, where risk is derived from an unknown true data generating mechanism.
* We present some straightforward examples to illustrate the main ideas.
------------------------------------------------------
## Notation
Let $\mathbb{R}$ and $\mathbb{Z}$ denote the space of reals and integers, respectively. $\mathbb{R}_+$ is the space of nonnegative real numbers. We use the notation $X^Y$ for a function space that consists of functions mapping from the space $Y$ into the space $X$. As a result, $\mathbb{R}^{d\mathbb{Z}}$ denotes the space of sequences made up of $d$-dimensional real numbers, while $\mathbb R^{Z}_+$ denotes the set of functions mapping from the range of (random) variable $Z$ to the space of nonnegative reals.
Let $Z_t$ be a $d$-dimensional random vector representing the value that the $d$ observables take at period $t$. The stochastic process $\{Z_t\}_{t\in\mathbb Z}$ is denoted by $Z^{\infty}$, the partial history including $n$ consecutive elements of $Z^{\infty}$ is $Z^{n}:=\{Z_1, Z_2, \dots, Z_n\}$. Small letters stand for realizations of random variables, hence $z^{\infty}$, $z^n$ and $z$ represent the realization of the stochastic process, the sample and a single observation, respectively.
We use capital letters for distributions, small letter counterparts denote the associated densities. For example, we use the generic $Q$ notation for ergodic distributions, $q$ for the corresponding density and $q(\cdot|\cdot)$ for the conditional density. Caligraphic letters are used to denote sets:
* $\mathcal{P}$ -- the set of *strictly stationary* probability distributions over the observables
* $\mathcal{Q}\subset \mathcal{P}$ -- the set of *ergodic* distributions (statistical models) over the observables
* $\mathcal{F}$ -- *admissible space*: abstract function space including all functions for which the loss function is well defined
------------------------------------------------------
## Introduction
### Stationarity and statistical models
We model data as a partial realization of a stochastic process $Z^{\infty}$ taking values in $\mathbb{R}^{d}$. Denote a particular realization as $z^{\infty} \in \mathbb{R}^{d\mathbb{Z}}$ and let the partial history $z^{n}$ containing $n$ consecutive elements of the realization be the *sample* of size $n$. We assume that there exists a core mechanism underlying this process that describes the relationship among the elements of the vector $Z$. Our aim is to draw inference about this mechanism after observing a single partial realization $z^{n}$.
How is this possible without being able to draw different samples under the exact same conditions? Following the exposition of [Breiman (1969)](#breiman1969) a fruitful approach is to assume that the underlying mechanism is time invariant with the stochastic process being strictly stationary and study its statistical properties by taking long-run time averages of the realization $z^{\infty}$ (or functions thereof), e.g.
$$\lim_{n\to \infty}\frac{1}{n}\sum_{t = 1}^{n} z_t\quad\quad \lim_{n\to \infty}\frac{1}{n} \sum_{t = 1}^{n} z^2_t\quad\quad \lim_{n\to \infty}\frac{1}{n}\sum_{t = k}^{n+k} z_{t}z_{t-k}$$
Since the mechanism is assumed to be stable over time, it does not matter when we start observing the process.
Notice, however, that strictly speaking these time averages are properties of the particular realization, the extent to which they can be generalized to the mechanism itself is not obvious. To address this question, it is illuminating to bundle realizations that share certain statistical properties together in order to construct a universe of (counterfactual) alternative $z^{\infty}$-s, the so called *ensemble*. Statistical properties of the data generating mechanism can be summarized by assigning probabilities to (sets of) these $z^{\infty}$-s in an internally consistent manner. These considerations lead to the idea of statistical models.
**Statistical models** are probability distributions over sequences $z^{\infty}$ that assign probabilities so that the unconditional moments are consistent with the associated long-run time averages. In other words, with statistical models the time series and ensemble averages coincide, which is the property known as **ergodicity**. Roughly speaking, ergodicity allows us to learn about the ensemble dimension by using a *single* realization $z^{\infty}$.
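To make this concrete, the short sketch below (an illustration only; the Gaussian AR(1) specification and its coefficient are assumptions of the example) simulates a single long realization and compares its time averages with the corresponding ensemble moments of the stationary distribution.
```python
import numpy as np

# One long realization of z_t = 0.8 * z_{t-1} + eps_t with eps_t ~ N(0, 1)
rng = np.random.default_rng(0)
phi, n = 0.8, 200_000
z = np.zeros(n)
for t in range(1, n):
    z[t] = phi * z[t - 1] + rng.normal()

# Time averages from the single path vs. ensemble moments of the stationary law
print(z.mean(), "vs", 0.0)                    # mean
print((z**2).mean(), "vs", 1 / (1 - phi**2))  # second moment
```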
### Dependence
In reality, being endowed only with a partial history of $z^{\infty}$, we cannot calculate the exact log-run time averages. By imposing more structure on the problem and having a sufficiently large sample, however, we can obtain reasonable approximations. To this end, we need to assume some form of weak independence ("mixing"), or more precisely, the property that on average, the dependence between the elements of $\{Z_t\}_{t\in\mathbb{Z}}$ dies out as we increase the gap between them.
Consequently, if we observe a *long* segment of $z^{\infty}$ and cut it up into shorter consecutive pieces, say of length $l$, then, we might consider these pieces (provided that $l$ is "large enough") as nearly independent records from the distribution of the $l$-block, $Z^l$. To clarify this point, consider a statistical model $Q_{Z^{\infty}}$ (joint distribution over sequences $z^{\infty}$) with density function $q_{z^{\infty}}$ and denote the implied density of the sample as $q_{n}$. Note that because of strict stationarity, it is enough to use the number of consecutive elements as indices. Under general regularity conditions we can decompose this density as
$$q_{n}\left(z^n\right) = q_{n-1}\left(z_n | z^{n-1}\right)q_{n-1}\left(z^{n-1}\right) = q_{n-1}\left(z_n | z^{n-1}\right)q_{n-2}\left(z_{n-1}|z^{n-2}\right)\dots q_{1}\left(z_{2}|z_1\right)q_{1}\left(z_1\right)$$
For simplicity, we assume that the stochastic process is Markov so that the partial histories $z^{i}$ for $i=1,\dots, n-1$ in the conditioning sets can be replaced by the "right" number of lags $z^{n-1}_{n-l}$ and we can drop the subindex from the conditional densties
$$q_{n}(z^n) = q(z_n | z^{n-1}_{n-1-l})q(z_{n-1}|z^{n-2}_{n-2-l})\dots q(z_{l+1}|z_{1}^{l})q_{l}(z^l) \quad\quad\quad (1)$$
This assumption is much stronger than what we really need. First, it suffices to require the existence of a history-dependent latent state variable similar to the Kalman filter. Moreover, we could also relax the Markov assumption and allow for dependence that dies out only asymptotically. In practice, however, we often have a stong view about the dependency structure, or at least we are willing to use economic theory to guide our choice of $l$. In these cases we almost always assume a Markovian structure. For simplicity, in these lectures, unless otherwise stated, we will restrict ourselves to the family of Markov processes.
This assumption allows us to learn about the underlying mechanism $Q_{Z^{\infty}}$ via its $l+1$-period building blocks. Once we determine the (ensemble) distribution of the block, $Q_{Z^{[l+1]}}$, we can "build up" $Q_{Z^{\infty}}$ from these blocks by using a formula similar to (1). In this sense, the block distribution $Q_{Z^{[l+1]}}$ carries the same information as $Q_{Z^{\infty}}$. Therefore, from now on, we define $Z$ as the minimal block we need to know and treat it as an **observation**. Statistical models can be represented by their predictions about the ensemble distribution $P$ of this observable.
### True data generating process
We assume that the mechanism underlying $Z^{\infty}$ can be represented with a statistical model $P$ and it is called **true data generating process (DGP)**. We seek to learn about the features of this model from the observed data.
-----------------------------------------
## Primitives of the problem
Following [Wald (1950)](#wald1950) every statistical decision problem that we will consider can be represented with a triple $(\mathcal{H}, \mathcal{A}, L)$, where
1. **Assumed statistical models**, $\mathcal{H}\subseteq \mathcal{Q} \subset \mathcal{P}$
$\mathcal{H}$ is a collection of ergodic probability measures over the observed data, which captures our *maintained assumptions* about the mechanism underlying $Z^{\infty}$. The set of all ergodic distributions $\mathcal{Q}$ is a strict subset of $\mathcal{P}$--the space of strictly stationary probability distributions over the observed data. In fact, the ergodic distributions constitute the extreme points of the set $\mathcal{P}$. Ergodicity implies that with infinite data we could single out one element from $\mathcal{H}$.
2. **Action space**, $\mathcal{A}\subseteq \mathcal{F}$
The set of allowable actions. It is an abstract set embodying our proposed *specification* by which we aim to capture features of the true data generating mechanism. It is a subset of $\mathcal{F}$--the largest possible set of functions for which the loss function (see below) is well defined.
3. **Loss function** $L: \mathcal{P}\times \mathcal{F} \mapsto \mathbb{R}_+$
The loss function measures the performance of alternative actions $a\in \mathcal{F}$ under a given distribution $P\in \mathcal{P}$. In principle, $L$ measures the distance between distributions in $\mathcal{P}$ along particular dimensions determined by features of the data generating mechanism that we are interested in. By assigning zero distance to models that share a particular set of features (e.g. conditional expectation, set of moments, etc.), the loss function can 'determine' the relevant features of the problem.
Given the assumed statistical models, we can restrict the domain of the loss function without loss in generality such that, $L: \mathcal{H}\times\mathcal{A}\mapsto\mathbb{R}_+$.
-----------------------------------
### Examples
**Quadratic loss:**
The most commonly used loss function is the quadratic
$$L(P, a) = \int \lVert z - a \rVert^2\mathrm{d}P(z)$$
where the admissible space is $\mathcal{F}\subseteq \mathbb{R}^{k}$. Another important case is when we can write $Z = (Y, X)$, where $Y$ is univariate and the loss function is
$$L(P, a) = \int (y - a(x))^2\mathrm{d}P(y, z)$$
and the admissible space $\mathcal{F}$ contains all square integrable real functions of $X$.
**Relative entropy loss:**
When we specificy a whole distribution and are willing to approximate $P$, one useful measure for comparison of distributions is the Kullback-Leibler divergence, or relative entropy
$$L(P, a) = - \int \log \frac{p}{a}(z) \mathrm{d}P(z)$$
in which case the admissible space is the set of distributions which have a density (w.r.t. the Lebesgue measure) $\mathcal{F} = \{a: Z \mapsto \mathbb{R}_+ : \int a(z)\mathrm{d}z=1\}$.
**Generalized Method of Moments:**
Following the exposition of [Manski (1994)](#manski1994), many econometric problems can be cast as solving the equation $T(P, \theta) = \mathbf{0}$ in the parameter $\theta$, for a given function $T: \mathcal{P}\times\Theta \mapsto \mathbb{R}^m$ with $\Theta$ being the parameter space. By expressing estimation problems in terms of unconditional moment restrictions, for example, we can write $T(P, \theta) = \int g(z; \theta)\mathrm{d}P(z) = \mathbf{0}$ for some function $g$. Taking an *origin-preserving continuous transformation* $r:\mathbb{R}^m \mapsto \mathbb{R}_+$ so that
$$T(P, \theta) = \mathbf{0} \iff r(T)=0$$
we can present the problem in terms of minimizing a particular loss function. Define the admissible space as $\mathcal{F} = \Theta$, then the method of moment estimator minimizes the loss $L(P, \theta) = r\circ T(P, \theta)$. The most common form of $L$ is
$$L(P, \theta) = \left[\int g(z; \theta)\mathrm{d}P(z)\right]' W \left[\int g(z; \theta)\mathrm{d}P(z)\right]$$
where $W$ is a $m\times m$ positive-definite weighting matrix.
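As a small numerical illustration of this quadratic-form loss, the sketch below minimizes its sample analog with $W = I$ and a hypothetical moment function $g(z; \theta) = (z - \theta, \, z^2 - \theta^2 - 1)$, i.e. estimation of a mean under an assumed unit variance; the grid search is used only for transparency.
```python
import numpy as np

def gmm_loss(theta, z, W=np.eye(2)):
    # Sample analog of [E g(z; theta)]' W [E g(z; theta)]
    g_bar = np.array([np.mean(z - theta), np.mean(z**2 - theta**2 - 1.0)])
    return g_bar @ W @ g_bar

rng = np.random.default_rng(1)
z = rng.normal(loc=2.0, scale=1.0, size=1_000)
grid = np.linspace(0.0, 4.0, 801)
theta_hat = grid[np.argmin([gmm_loss(t, z) for t in grid])]
print(theta_hat)  # close to the sample mean of z
```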
----------------------------------------------
### Features and the best-in-class action
By using a loss function, we acknowledge that learning about the true mechanism might be too ambitious, so we better focus our attention only on certain features of it and try to approximate those with our specification. The loss function expresses our assessment about the importance of different features and about the penalty used to punish deviations from the true features. We define the **feature functional** $\gamma: \mathcal{P}\mapsto \mathcal{F}$ by the following optimization over the admissible space $\mathcal{F}$
$$\gamma(P) := \arg\min_{a \in \mathcal{F}} \ L(P,a)$$
and say that $\gamma(P)$ captures the features of $P$ that we wish to learn about. It follows that by changing $L$ we are effectively changing the features of interest.
If one knew the data generating process, there would be no need for statistical inference. What makes the problem statistical is that the distribution $P$ describing the environment is unknown. The statistician must base her action on the available data, which is a partial realization of the underlying data generating mechanism. As we will see, this lack of information implies that for statistical inference the whole admissible space $\mathcal F$ is almost always "too large". As a result, one typically looks for an approximation in a restricted action space $\mathcal{A}\subsetneq \mathcal{F}$, for which we define the **best-in-class action** as follows
$$a^*_{L,\ P,\ \mathcal{A}} := \arg\min_{a \in \mathcal{A}} \ L(P,a).$$
With a restricted action space, this best-in-class action might differ from the true feature $\gamma(P)$. We can summarize this scenario compactly by $\gamma(P)\notin \mathcal{A}$ and saying that our specification embodied by $\mathcal{A}$ is **misspecified**. Naturally, in such cases properties of the loss function become crucial by specifying the nature of punishments used to weight deviations from $\gamma(P)$. We will talk more about misspecification in the following sections. A couple of examples should help clarify the introduced concepts.
* **Conditional expectation -- regression function estimation**
Consider the quadratic loss function over the domain of all square integrable functions $L^2(X, \mathbb{R})$ and let $Z = (Y, X)$, where $Y$ is a scalar. The corresponding feature is
$$\gamma(P) = \mathbb{E}[Y|X] = \arg\min_{a \in L^2(X)} \int\limits_{(Y,X)} (y - a(x))^2\mathrm{d}P(y, x)$$
If the action space $\mathcal{A}$ does not include all square integrable functions but only the set of affine functions, the best-in-class action, i.e. the linear projection of $Y$ onto the space spanned by $X$, will in general differ from $\gamma(P)$. In other words, the linear specification for the conditional expectation $Y|X$ is misspecified.
* **Density function estimation**
Consider the Kullback-Leibler distance over the set of distributions with existing density functions. Denote this set by $D_Z$. Given that the true $P\in D_Z$, the corresponding feature is
$$\gamma(P) = \arg\min_{a \in D_Z} \int\limits_{Z}\log\left(\frac{p(z)}{a(z)}\right) \mathrm{d}P(z)$$
which provides the density $p\in\mathbb{R}_+^Z$ such that $\int p(z)\mathrm{d}z =1$ and for any sensible set $B\subseteq \mathbb{R}^k$, $\int_B p(z)\mathrm{d}z = P(B)$. If the action space $\mathcal{A}$ is only a parametric subset of $D_Z$, the best in class action will be the best approximation in terms of KLIC. For an extensive treatment see [White (1994)](#white1994).
### Statistical models vs. specifications
An important aspect of the statistical decision problem is the relationship between $\mathcal{H}$ and $\mathcal{A}$. Our *maintained assumptions* about the mechanism are embodied in $\mathcal{H}$, so a natural attitude is to be as agnostic as possible about $\mathcal{H}$ in order to avoid incredible assumptions. Once we determined $\mathcal{H}$, the next step is to choose the specification, that is the action space $\mathcal{A}$.
- One approach is to tie $\mathcal{H}$ and $\mathcal{A}$ together. For example, the assumptions of the standard linear regression model outline the distributions contained in $\mathcal{H}$ (normally distributed errors with zero mean and constant variance), for which the natural action space is the space of affine functions.
- On the other hand, many approaches explicitly disentangle $\mathcal{A}$ from $\mathcal{H}$ and try to be agnostic about the maintained assumptions $\mathcal{H}$ and rather impose restrictions on the action space $\mathcal{A}$. At the cost of giving up some potentially undominated actions this approach can largely influence the success of the inference problem in finite samples.
By choosing an action space not being tied to the set of assumed statistical models, the statistician inherently introduces a possibility of misspecification -- for some statistical models there could be an action outside of the action space which would fare better than any other action within $\mathcal{A}$. However, coarsening the action space in this manner has the benefit of restricting the variability of estimated actions arising from the randomness of the sample.
In this case, the best-in-class action has a special role, namely, it minimizes the "distance" between $\mathcal{A}$ and the true feature $\gamma(P)$, thus measuring the benchmark bias stemming from restricting $\mathcal{A}$.
----------------------------------------------
## Example - Coin tossing
The observable is a binary variable $Z\in\{0, 1\}$ generated by some statistical model. One might approach this problem by using the following triple
* *Assumed statistical models*, $\mathcal{H}$:
* $Z$ is generated by an i.i.d. Bernoulli distribution, i.e. $\mathcal{H} = \{P(z; \theta): \theta \in[0,1]\}$
* The probability mass function associated with the distribution $P(z;\theta)\in\mathcal{H}$ has the form
$$p(z; \theta) = \theta^z(1-\theta)^{1-z}.$$
* *Action space*, $\mathcal{A}$:
* Let the action space be equal to $\mathcal{H}$, that is $\mathcal{A} = \{P(z, a): a\in[0,1]\} = \mathcal{H}$.
* *Loss function*, $L$: We entertain two alternative loss functions
* Relative entropy
$$L_{RE}(P, a) = \sum_{z\in\{0,1\}} p(z; \theta)\log \frac{p(z; \theta)}{p(z; a)} = E_{\theta}[\log p(z; \theta)] - E_{\theta}[\log p(z; a)]$$
* Quadratic loss
$$L_{MSE}(P, a) = \sum_{z\in\{0,1\}} p(z; \theta)(\theta - a)^2 = E_{\theta}[(\theta - a)^2]$$
where $E_{\theta}$ denotes the expectation operator with respect to the distribution $P(z; \theta)\in\mathcal{H}$.
----------------------------------------------
## Example - Linear regression function
In the basic setup of regression function estimation we write $Z=(Y,X)\in\mathbb{R}^2$ and the objective is to predict the value of $Y$ as a function of $X$ by penalizing the deviations through the quadratic loss function. Let $\mathcal{F}:= \{f:X \mapsto Y\}$ be the family of square integrable functions mapping from $X$ to $Y$. The following is an example for a triple
* *Assumed statistical models*, $\mathcal{H}$
* $(Y,X)$ is generated by an i.i.d. joint Normal distribution, $\mathcal{N}(\mu, \Sigma)$, implying that the true regression function, i.e. conditional expectation, is affine.
* *Action space*, $\mathcal{A}$
* The action space is the set of affine functions over $X$, i.e. $\mathcal{A}:= \{a \in \mathcal{F} : a(x) = \beta_0 + \beta_1 x\}$.
* *Loss function*, $L$
* Quadratic loss function
$$L(P, f) = \int\limits_{(Y,X)}(y - f(x))^2\mathrm{d}P(y,x)$$
----------------------------------------------
## Statistical Decision Functions
<!---
The time invariant stochastic relationship between the data and the environment allows the decision maker to carry out statistical inference regarding the data generating process.
--->
A **statistical decision function** (or statistical decision rule) is a function mapping samples (of different sizes) to actions from $\mathcal{A}$. In order to flexibly talk about the behavior of decision rules as the sample size grows to infinity, we define the domain of the decision rule to be the set of samples of all potential sample sizes, $\mathcal{S}:= \bigcup_{n\geq1}Z^n$. The decision rule is then defined as a sequence of functions
$$ d:\mathcal{S} \mapsto \mathcal{A} \quad \quad \text{that is} \quad \quad \{d(z^n)\}_{n\geq 1}\subseteq \mathcal{A},\quad \forall z^{n}, \forall n\geq 1. $$
----------------------------------------------
### Example (cont) - estimator for coin tossing
One common way to find a decision rule is to plug the empirical distribution $P_{n}$ into the loss function $L(P, a)$ to obtain
$$L_{RE}\left(P_{n}; a\right) = \frac{1}{n}\sum_{i = 1}^{n} \log \frac{p(z_i; \theta)}{p(z_i; a)}\quad\quad\text{and}\quad\quad L_{MSE}\left(P_{n}; a\right) = \frac{1}{n}\sum_{i = 1}^{n} (z_i -a)^2$$
and to look for an action that minimizes this sample analog. In case of relative entropy loss, it is
$$d(z^n) := \arg \min_{a} L(P_{n}, a) = \arg\max_{a\in[0,1]} \frac{1}{n}\sum_{i=1}^{n} \log p(z_i; a) = \arg\max_{a\in[0,1]} \frac{1}{n}\underbrace{\left(\sum_{i=1}^{n} z_i\right)}_{:= y}\log a + \left(\frac{n-y}{n}\right)\log(1-a) $$
where we define the random variable $Y_n := \sum_{i = 1}^{n} Z_i$ as the number of $1$s in the sample of size $n$, with $y$ denoting a particular realization. The solution of the above problem is the *maximum likelihood estimator* taking the following form
$$\hat{a}(z^n) = \frac{1}{n}\sum_{i=1}^{n} z_i = \frac{y}{n}$$
and hence the **maximum likelihood** decision rule is
$$d_{mle}(z^n) = P(z, \hat{a}(z^n)).$$
It is straightforward to see that if we used the quadratic loss instead of relative entropy, the decision rule would be identical to $d_{mle}(z^n)$. Nonetheless, the two loss functions can lead to very different assessment of the decision rule as will be shown below.
----------------
For comparison, we consider another decision rule, a particular Bayes estimator (posterior mean), which takes the following form
$$d_{bayes}(z^n) = P(z, \hat{a}_B(z^n))\quad\quad\text{where}\quad\quad \hat{a}_B(z^n) = \frac{\sum^{n}_{i=1} z_i + \alpha}{n + \alpha + \beta} = \frac{y + \alpha}{n + \alpha + \beta}$$
where $\alpha, \beta > 0$ are given parameters of the Beta prior. Later, we will see how one can derive such estimators. What is important for us now is that this is an alternative decision rule arising from the same triple $(\mathcal{H}, \mathcal{A}, L_{MSE})$ as the maximum likelihood estimator, with possibly different statistical properties.
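To make the two decision rules concrete, the sketch below computes both estimates from one simulated sample; the true $\theta_0$, the sample size and the prior parameters $\alpha = 5$, $\beta = 2$ (the values also used in the figures below) are assumptions of the example.
```python
import numpy as np

rng = np.random.default_rng(2)
theta_0, n = 0.6, 25                    # assumed true parameter and sample size
z = rng.binomial(1, theta_0, size=n)    # one simulated sample
y = z.sum()                             # number of 1s in the sample

a_mle = y / n                                   # maximum likelihood estimate
alpha, beta = 5, 2                              # Beta prior parameters
a_bayes = (y + alpha) / (n + alpha + beta)      # posterior mean
print(a_mle, a_bayes)
```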
----------------------------------------------
### Example (cont) - estimator for linear regression function
In this case the approach that we used to derive the maximum likelihood estimator in the coin tossing example leads to the following sample analog objective function
$$ d_{OLS}(z^n):= \arg\min_{a \in \mathcal{A}}L(P_{n},a) = \arg\min_{\beta_0, \ \beta_1} \sum_{t=1}^n (y_t - \beta_0 - \beta_1 x_t)^2. $$
With a bit of an abuse of notation redefine $X$ to include the constant for the intercept, i.e. $\mathbf{X} = (\mathbf{\iota}, x^n)$. Then the solution for the vector of coefficients, $\mathbf{\beta}=(\beta_0, \beta_1)$, in the ordinary least squares regression is given by
$$\hat{\mathbf{\beta}}_{OLS} := (\mathbf{X}^T \mathbf{X})^{-1}\mathbf{X}^T \mathbf{Y}. $$
Hence, after sample $z^n$, the decision rule predicts $y$ as an affine function given by $d_{OLS}(z^n) = \hat{a}_{OLS}$ such that
$$ \hat{a}_{OLS}(x) := \langle \mathbf{\hat{\beta}}_{OLS}, (1, x) \rangle $$
where $\langle \cdot, \cdot \rangle$ denotes the inner product on $\mathbb R^{2}$.
----------------
Again, for comparison we consider a Bayesian decision rule where the conditional prior distribution of $\beta$ is distributed as $\beta|\sigma \sim \mathcal{N}(\mu_b, \sigma^2\mathbf{\Lambda_b}^{-1})$. Then the decision rule is given by
$$ \hat{\mathbf{\beta}}_{bayes} := (\mathbf{X}^T \mathbf{X} + \mathbf{\Lambda_b})^{-1}(\mathbf{\Lambda_b} \mu_b + \mathbf{X}^T \mathbf{Y}). $$
Hence, decision rule after sample $z^n$ is an affine function given by $d_{bayes}(z^n) = \hat{a}_{bayes}$ such that
$$ \hat{a}_{bayes}(x) := \langle \mathbf{\hat{\beta}}_{bayes}, (1, x) \rangle. $$
Again, our only purpose here is to show that we can define alternative decision rules for the same triple $(\mathcal{H}, \mathcal{A}, L_{MSE})$ which might exhibit different statistical properties.
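A compact numerical sketch of the two formulas is given below; the bivariate normal design and the prior parameters are taken from the example further below and are assumptions of the illustration, not estimates from data.
```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
mu = np.array([1.0, 3.0])                      # (mu_Y, mu_X)
Sigma = np.array([[4.0, 1.0], [1.0, 8.0]])
yx = rng.multivariate_normal(mu, Sigma, size=n)
Y = yx[:, 0]
X = np.column_stack([np.ones(n), yx[:, 1]])    # include the intercept column

beta_ols = np.linalg.solve(X.T @ X, X.T @ Y)

mu_b = np.array([2.0, 2.0])
Lambda_b = np.array([[6.0, -3.0], [-3.0, 6.0]])
beta_bayes = np.linalg.solve(X.T @ X + Lambda_b, Lambda_b @ mu_b + X.T @ Y)
print(beta_ols, beta_bayes)
```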
----------------------------------------------
## Induced Distributions over Actions and Losses
For a given sample $z^n$, the decision rule assigns an action $d(z^n)\in\mathcal{A}$, which is then evaluated with the loss function $L(P, d(z^n))$ using a particular distribution $P\in\mathcal{H}$. Evaluating the decision rule and the loss function with a single sample, however, does not capture the uncertainty arising from the randomness of the sample. To get that we need to assess the decision rule in counterfactual worlds with different realizations for $Z^n$.
For each possible data generating mechanism, we can characterize the properties of a given decision rule by considering the distribution that it induces over losses. It is instructive to note that the decision rule $d$ in fact gives rise to
* **induced action distribution:** distribution induced by $d$ over the action space, $\mathcal{A}$
* **induced loss distribution:** distribution induced by $d$ over the loss space, i.e. $\mathbb{R}_+$.
This approach proves to be useful as the action space can be an abstract space with no immediate notion of metric while the range of the loss function is always the real line (or a subset of it). In other words, a possible way to compare different decision rules is to compare the distributions they induce over losses under different data generating mechanisms for a fixed sample size.
### Evaluating Decision Functions
Comparing distributions, however, is often an ambiguous task. A special case where one could safely claim that one decision rule is better than another is if the probability that the loss is under a certain $x$ level is always greater for one decision rule than the other. For instance, we could say that $d_1$ is a better decision rule than $d_2$ relative to $\mathcal{H}$ if for all $P\in\mathcal{H}$
$$ P\{z^n: L(P, d_1(z^n)) \leq x\} \geq P\{z^n: L(P, d_2(z^n)) \leq x\} \quad \forall \ x\in\mathbb{R} $$
which is equivalent to stating that the induced distribution of $d_2$ is *first-order stochastically dominating* the induced distribution of $d_1$ for every $P\in\mathcal{H}$. This, of course, implies that
$$ \mathbb{E}[L(P, d_1(z^n))] \leq \mathbb{E}[L(P, d_2(z^n))]$$
where the expectation is taken with respect to the sample distributed according to $P$.
In fact, the expected value of the induced loss is the most common measure to evaluate decision rules. Since the loss is defined over the real line, this measure always gives a single real number which serves as a basis of comparison for a given data generating process. The expected value of the loss induced by a decision rule is called **the risk** of the decision rule and is denoted by
$$R_n(P, d) = \mathbb{E}[L(P, d(z^n))].$$
This functional now provides a clear and straightforward ordering of decision rules so that $d_1$ is preferred to $d_2$ for a given sample size $n$, if $R_n(P, d_1) < R_n\left(P, d_2\right)$. Following this logic, it might be tempting to look for the decision rule that is optimal in terms of finite sample risk. This problem, however, is immensely complicated because its criterion function hinges on an object, $P$, that we cannot observe.
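Because the quadratic risk of the coin-tossing rules reduces to $\mathbb{E}\left[(\theta_0 - d(z^n))^2\right]$, it is straightforward to approximate by simulation. The sketch below does so for an assumed true $\theta_0$ and the prior parameters used in the example; it exploits the fact that the number of $1$s is a sufficient statistic.
```python
import numpy as np

rng = np.random.default_rng(4)
theta_0, n, reps = 0.6, 25, 100_000
alpha, beta = 5, 2

y = rng.binomial(n, theta_0, size=reps)   # sufficient statistic for each simulated sample
risk_mle = np.mean((theta_0 - y / n) ** 2)
risk_bayes = np.mean((theta_0 - (y + alpha) / (n + alpha + beta)) ** 2)
print(risk_mle, risk_bayes)
```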
Nonetheless, statistical decision theory provides a very useful common framework in which different approaches to constructing decision rules can be analyzed, highlighting their relative strengths and weaknesses. In notebook3 and notebook4 {REF to notebooks} we will consider three approaches, each of them having alternative ways to handle the ignorance about the true risk.
1. **Classical approach:** where the main assessment of a decision rule is based on its asymptotic properties.
2. **Bayesian approach:** where the ignorance about $P$ is resolved by the use of a prior.
3. **Statistical learning theory approach:** where a decision rule is judged according to its performance under the least favorable (worst-case) distribution.
----------------------------------------------
### Example (cont) - induced distributions for coin tossing
Consider the case when the true data generating process is indeed i.i.d. Bernoulli with parameter $\theta_0$. This implies that we have a correctly specified model. The sample available for inference has size $n=25$.
* The left panel in the following figure represents the distribution of the sample. More precisely, sample realizations $z^n$ with the same number of $1$s are equally likely, and because all information contained in a given sample can be summarized by the sum of $1$s, $Y=\sum_{t=1}^{n} Z_t$ (a sufficient statistic), we plot the distribution of $Y$ instead.
* The right panel shows the shapes of the two loss functions that we are considering. Notice that while quadratic loss is symmetric, relative entropy loss is asymmetric. That is, although both loss functions give rise to the same decision rule, we see that they punish deviations from the truth (red vertical line) quite differently. In particular, the entropy loss is unbounded over the domain: at $a=0$ and $a=1$ its value is undefined (or takes infinity).
The left and right panels of the following figure show the induced action distributions of the MLE and Bayes decision rules (when $\alpha=5$, $\beta=2$) respectively for two alternative values of $\theta_0$. More transparent colors denote the scenario corresponding to the sample distribution of the last figure. Faded colors show the distributions induced by an alternative $\theta_0$, while the prior parameters of the Bayes decision rule are kept fixed.
* **Bias vs. variance:** The MLE estimator is unbiased in the sense that its mean always coincide with the true $\theta_0$. In contrast, the Bayes estimator is biased, the extent of which depends on the relationship between the prior parameters and the true value: when the prior concentrates near $\theta_0$, the bias is small, but as the faded distributions demonstrate, for other $\theta_0$s the bias can be significant. Notice, however, that $d_{bayes}$ is always less dispersed than $d_{mle}$, in the sense that the values to which it assigns positive probability are more densely placed in $[0, 1]$. Exploiting this trade-off between bias and variance will be a crucial device in finding decision rules with low risk.
Finally, the figure below compares the performance of the two decision rules according to the their finite sample risk. The first row represents the induced loss distribution of the MLE estimator for the relative entropy and quadratic loss functions. The two panels of the second row show the same distributions for the Bayes decision rule. The vertical dashed lines indicate the value of the respective risk functionals.
* **Loss function matters:** For all sample sizes, the probability mass function of the MLE estimator assigns positive probability to both $a=0$ and $a = 1$, whereas the support of the Bayes estimator always lies in the interior $(0, 1)$. This difference has significant consequences for the relative entropy risk, because as we saw above $L_{RE}$ is undefined at the boundaries of $[0, 1]$. As a result, the relative entropy risk of the MLE estimator does not exist and so the Bayes estimator always wins in terms of relative entropy. The secret of $d_{bayes}$ is to shrink the effective action space.
* **Dependence on $\theta_0:$** Comparing the decision rules in terms of the quadratic loss reveals that the true $\theta_0$ is a critical factor. It determines the size of the bias (hence the risk) of the Bayes estimator. Since $\theta_0$ is unknown, this naturally introduces a subjective (not data driven) element into our analysis: when the prior happens to concentrate around the true $\theta_0$ the Bayes estimator performs better than the MLE, otherwise the bias could be so large that it flips the ordering of decision rules.
----------------------------------------------
### Example (cont) - induced distributions for linear regression
Suppose that our model is correctly specified. In particular, let the data generating mechanism be i.i.d. with
$$ (Y,X) \sim \mathcal{N}(\mu, \Sigma) \quad\quad \text{where}\quad\quad \mu = (1, 3)\quad \text{and}\quad \Sigma =
\begin{bmatrix}
4 & 1 \\
1 & 8
\end{bmatrix}.$$
Under this data generating mechanism, the optimal regression function is affine with coefficients
$$
\begin{align}
\beta_0 &= \mu_Y - \rho\frac{\sigma_Y}{\sigma_X}\mu_X = 1 - \frac{1}{8}\cdot 3 = 0.625, \\
\beta_1 &= \rho\frac{\sigma_Y}{\sigma_X} = \frac{1}{8} = 0.125.
\end{align}
$$
Due to correct specification, these coefficients in fact determine the feature, i.e. the true regression function.
For the Bayes estimator consider the prior
$$\beta \sim \mathcal{N}\left(\mu_b, \Lambda_b^{-1}\right) \quad\quad \text{where}\quad\quad \mu_b = (2, 2)\quad \text{and}\quad \Lambda_b =
\begin{bmatrix}
6 & -3 \\
-3 & 6
\end{bmatrix}$$
and suppose that $\Sigma$ is known. Let the sample size be $n=50$. With the given specification we can *simulate* the induced action and loss distributions.
The following figure shows contour plots of the induced action distributions associated with the OLS and Bayes estimators. The red dot depicts the best-in-class action.
* One can see that the OLS estimator is unbiased in the sense that the induced action distribution concentrates around the best-in-class action. In contrast, the Bayes estimator exhibits a slight bias.
* On the other hand, the variance of the Bayes decision rule is smaller than that of the OLS estimator.
Using quadrature methods one can calculate the loss of each action which gives rise to the induced loss distribution. As an approximation to these induced loss distributions, the following figure shows the histograms emerging from these calculations.
* In terms of risk the slightly bigger bias of the Bayes estimate is compensated by its lower variance (across the different sample realizations). As a result, in this particular example, the risk of the Bayes decision rule is lower than that of the OLS estimator.
* The true feature lies within the action space and the model is very "simple", hence it's difficult to beat the OLS (we need small sample and large noise). Using a more complex or misspecified model this might not be the case.
----------------------------------------------
## Misspecification and the bias-variance dilemma
In the above examples we maintained the assumption of correctly specified models, i.e., the true feature of the data generating process lied within the action set $\mathcal{A}$. In applications using nonexperimental data, however, it is more reasonable to assume that the action set contains only approximations of the true feature.
Nothing in the analysis above prevents us from entertaining the possibility of misspecification. In these instances one can look at $a^{*}_{L, P, \mathcal{A}}$ as the best approximation of $\gamma(P)$ achievable by the model specification $\mathcal{A}$. For example, even though the true regression function (conditional expectation) might not be linear, the exercise of estimating the *best linear approximation* of the regression function is well defined.
In theory, one can investigate the approximation error emerging from a misspecified $\mathcal{A}$ via the loss function without mentioning the inference (finite sample) problem at all. In particular, the **misspecification error** can be defined as
$$\min_{a\in\mathcal{A}} \ L(P,a) - L(P, \gamma(P))$$
This naturally leads to a dilemma regarding the "size" of the action space: with a richer $\mathcal{A}$, in principle, we can get closer to the true feature by making the misspecification error small. Notice, however, that in practice, not knowing $P$ implies that we cannot solve the above optimization problem and obtain the best-in-class action. As we show in notebook2 {REF}, a possible way to proceed is to require the so-called *consistency* property from our decision rule, by which we can guarantee to get very close to $a^{*}_{L, P, \mathcal{A}}$ with *sufficiently large* samples; what "sufficiently large" means, however, will be determined by the size of our $\mathcal{A}$. Larger action spaces will require larger samples to get sensible estimates for the best-in-class action. In fact, by using a "too large" $\mathcal{A}$ accompanied with a "too small" sample, our estimator's performance can be so bad that misspecification concerns become secondary.
In other words, finiteness of the sample gives rise to a trade-off between the severity of misspecifiation and the credibility of our estimates. To see this, decompose the deviation of the finite sample risk from the value of loss at the truth (excess risk) for a given decision rule $d$ and sample size $n$:
$$R_n(P, d) - L\left(P, \gamma(P) \right) = \underbrace{R_n(P, d) - L\left(P, a^{*}_{L,P, \mathcal{A}}\right)}_{\text{estimation error}} + \underbrace{L\left(P, a^{*}_{L, P, \mathcal{A}}\right)- L\left(P, \gamma(P)\right)}_{\text{misspecification error}}$$
While the estimation error stems from the fact that we do not know $P$, so we have to use a finite sample to approximate the best-in-class action, misspecification error, not influenced by any random object, arises from the necessity of $\mathcal{A}\subsetneq\mathcal{F}$.
This trade-off resembles the bias-variance dilemma well-known from classical statistics. Statisticians often connect the estimation error with the decision rule's variance, whereas the misspecification error is considered as the bias term. We will see in notebook3 {REF} that this interpretation is slightly misleading. Nonetheless, it is true that, similar to the bias-variance trade-off, manipulation of (the size of) $\mathcal{A}$ is the key device to address the estimation-misspecification error trade-off. The minimal excess risk can be reached by the action space where the following two forces are balanced {REF to figure in notebook3}:
* the estimation error (variance) is increasing in the size of $\mathcal{A}$,
* the misspecification error (bias) is weakly decreasing in the size of $\mathcal{A}$.
In the next lecture {REF: notebook2}, we will give a more elaborate definition of what we mean by the "size" of $\mathcal{A}$.
**A warning**
The introduced notion of misspecification is a *statistical* one. From a modeller's point of view, a natural question to ask is to what extent misspecification affects the economic interpretation of the parameters of a fitted statistical model. Intuitively, a necessary condition for the sensibility of economic interpretation is to have a correctly specified statistical model. Because different economic models can give rise to the same statistical model, this condition is by no means sufficient. From this angle, a misspecified statistical model can easily invalidate any kind of economic interpretation of estimated parameters. This issue is more subtle and it would require an extensive treatment that we cannot deliver here, but it is worth keeping in mind the list of very strong assumptions that we are (implicitly) using when we give well-defined meaning to our parameter estimates. An interesting discussion can be found in Chapter 4 of [White (1994)](#white1994).
--------------------------------------
### References
Breiman, Leo (1969). Probability and Stochastic Processes: With a View Towards Applications. Houghton Mifflin. <a name="breiman1969"></a>
Wald, Abraham (1950). Statistical Decision Functions. John Wiley and Sons, New York. <a name="wald1950"></a>
Manski, Charles (1988). Analog estimation in econometrics. Chapman and Hall, London. <a name="manski1994"></a>
White, Halbert (1994). Estimation, Inference and Specification Analysis (Econometric Society Monographs). Cambridge University Press. <a name="white1994"></a>
```python
```
| 2b674302eff4c7f4327974767adc4e6219362cb9 | 47,255 | ipynb | Jupyter Notebook | Notebook_01_wald/statistical_decision_functions_text.ipynb | QuantEcon/econometrics | b7eb4f57eca1903891e888e3640da731b2479c66 | [
"BSD-3-Clause"
]
| 28 | 2017-01-10T09:19:53.000Z | 2021-06-29T18:47:36.000Z | Notebook_01_wald/statistical_decision_functions_text.ipynb | QuantEcon/econometrics | b7eb4f57eca1903891e888e3640da731b2479c66 | [
"BSD-3-Clause"
]
| null | null | null | Notebook_01_wald/statistical_decision_functions_text.ipynb | QuantEcon/econometrics | b7eb4f57eca1903891e888e3640da731b2479c66 | [
"BSD-3-Clause"
]
| 21 | 2016-12-27T12:13:36.000Z | 2021-11-17T13:49:12.000Z | 83.489399 | 985 | 0.662173 | true | 10,342 | Qwen/Qwen-72B | 1. YES
2. YES | 0.785309 | 0.833325 | 0.654417 | __label__eng_Latn | 0.997881 | 0.358761 |
```python
%reset
# Importing libraries
from sympy import *
from sympy.parsing.sympy_parser import *
from sympy.physics.qho_1d import *
from sympy.physics.hydrogen import *
from matplotlib import *
from ipywidgets import *
# Pretty printing for Jupyter notebook
init_printing(use_latex = 'mathjax')
```
Once deleted, variables cannot be recovered. Proceed (y/[n])? y
```python
[hbar, n, m_p, l_well, omega, r, theta, phi, r_2, theta_2, phi_2, a, l, e, epsilon_0, Z] = symbols('hbar n m_p l_well omega r theta phi r_2 theta_2 phi_2 a l e epsilon_0 Z', real = True, positive = True)
[x, m] = symbols('x m', real = True)
```
```python
def Ylm(l, m, theta, phi):
return ((-1)**m)*Ynm(l, m, theta, phi).expand(func = True)
```
```python
def Rnl(n, l, r, Z):
return Z*sqrt(Z)*sqrt(((2/(n*a))**3)*(factorial(n-l-1)/(2*n*((factorial(n+l))**3))))*exp(-Z*r/(n*a))*(2*Z*r/(n*a))**l*(assoc_laguerre(n-l-1, 2*l+1, 2*Z*r/(n*a)))
```
```python
def PSInlm(n, l, m, r, theta, phi, Z):
return Rnl(n, l, r, Z)*Ylm(l, m, theta, phi)
```
```python
def HydroE_n(n):
return -m_p*e**4/(2*(4*pi*epsilon_0)**2*hbar**2*n**2)
```
```python
def B(x, height, width, center):
h = height
w = width
c = center
return h*(Heaviside(x+w/2-c)-Heaviside(x-w/2-c))
```
The program now recognizes the symbolic variables:
\begin{align*}
\hbar && \text{reduced Planck constant}\\
n && \text{energy level / principal quantum number}\\
m_p && \text{particle mass}\\
l_{well} && \text{box width (particle in a box)}\\
\omega && \text{angular frequency (harmonic oscillator)}\\
a && \text{Bohr radius (hydrogen atom)}\\
l && \text{azimuthal quantum number (hydrogen atom)}\\
m && \text{magnetic quantum number (hydrogen atom)}\\
e && \text{electron charge (hydrogen atom)}\\
\epsilon_0 && \text{vacuum permittivity (hydrogen atom)}\\
Z && \text{protons in the nucleus (hydrogen-like atom)}\\
x && \text{spatial variable (Cartesian coordinates)}\\
r && \text{spatial variable (spherical polar coordinates)}\\
\theta && \text{spatial variable (spherical polar coordinates)}\\
\phi && \text{spatial variable (spherical polar coordinates)}\\
\end{align*}
The following function is also available for modelling the perturbations:
$$B(x, h, w, c)$$
It models a barrier of height $h$, width $w$, and center $c$.
```python
items_layout = Layout(width='auto') # override the default width of the button to 'auto' to let the button grow
box_layout = Layout(display='flex',
flex_flow='column',
align_items='stretch',
border='solid',
width='100%')
```
```python
Problem = RadioButtons(options = ['Harmonic Oscilator', 'Particle In a Box', 'Hydrogen Atom', 'Helium Atom EO(1)'])
display(Problem)
```
A Jupyter Widget
```python
Function = Checkbox(value = False, description='\(\psi_{n}\)')
Energy = Checkbox(value = True, description='\(E_{n}\)')
Variables = Text(description = '\(Vars= \)', layout = items_layout)
Perturbation = Text(description = '\(H^´= \)', layout = items_layout)
Order = IntSlider(min = 0, max = 2, description = '\(CO: \)', layout = items_layout)
Level = BoundedIntText(min = 0, description = '\(n= \)', layout = items_layout)
IntegralX1 = Text(description = '\(x_{1}=\)', layout = items_layout)
IntegralX2 = Text(description = '\(x_{2}=\)', layout = items_layout)
IntegralR1 = Text(description = '\(r_{11}=\)', layout = items_layout)
IntegralR2 = Text(description = '\(r_{21}=\)', layout = items_layout)
IntegralTHETA1 = Text(description = '\(\\theta_{11}=\)', layout = items_layout)
IntegralTHETA2 = Text(description = '\(\\theta_{21}=\)', layout = items_layout)
IntegralPHI1 = Text(description = '\(\phi_{11}=\)', layout = items_layout)
IntegralPHI2 = Text(description = '\(\phi_{21}=\)', layout = items_layout)
IntegralR12 = Text(description = '\(r_{12}=\)', layout = items_layout)
IntegralR22 = Text(description = '\(r_{22}=\)', layout = items_layout)
IntegralTHETA12 = Text(description = '\(\\theta_{12}=\)', layout = items_layout)
IntegralTHETA22 = Text(description = '\(\\theta_{22}=\)', layout = items_layout)
IntegralPHI12 = Text(description = '\(\phi_{12}=\)', layout = items_layout)
IntegralPHI22 = Text(description = '\(\phi_{22}=\)', layout = items_layout)
ProtonsZ = Text(description = '\(Z=\)', layout = items_layout)
Iterations = BoundedIntText(min = 1, description = '\(Iter= \)', layout = items_layout)
if Problem.value == 'Harmonic Oscilator' or Problem.value == 'Particle In a Box':
items = [Function, Energy, Variables, Perturbation, Order, Level, IntegralX1, IntegralX2, Iterations]
elif Problem.value == 'Hydrogen Atom':
items = [Function, Energy, Variables, Perturbation, Order, Level, Iterations]
elif Problem.value == 'Helium Atom EO(1)':
items = []
box = Box(children = items, layout=box_layout)
display(box)
```
A Jupyter Widget
```python
if Variables.value != '':
var(Variables.value)
if Perturbation.value != '':
Hp = eval(Perturbation.value)
else:
Hp = 0
O = Order.value
n = Level.value
if IntegralX1.value != '':
x1 = eval(IntegralX1.value)
if IntegralX2.value != '':
x2 = eval(IntegralX2.value)
p = Iterations.value
if Problem.value == 'Hydrogen Atom' or Problem.value == 'Helium Atom EO(1)':
l = 0
m = 0
r1 = 0
r2 = oo
theta1 = 0
theta2 = pi
phi1 = 0
phi2 = 2*pi
if Problem.value == 'Hydrogen Atom':
Z = 1
elif Problem.value == 'Helium Atom EO(1)':
Z = 2
Hp = e**2/(r_2-r)
O = 1
```
```python
Hp
```
$$c x^{3} + d x^{4}$$
```python
if Problem.value == 'Harmonic Oscilator' or Problem.value == 'Particle In a Box':
def psi(n):
if Problem.value == 'Harmonic Oscilator':
return psi_n(n, x, m_p, omega)
if Problem.value == 'Particle In a Box':
return sqrt(2/l)*sin(((n*pi)/l)*x)
elif Problem.value == 'Hydrogen Atom':
def psi(n, l, m, Z):
return PSInlm(n, l, m, r, theta, phi, Z)
elif Problem.value == 'Helium Atom EO(1)':
def psi():
return PSInlm(1, l, m, r, theta, phi, 2)*PSInlm(1, l, m, r_2, theta_2, phi_2, 2)
```
```python
def E(n):
if Problem.value == 'Harmonic Oscilator':
return E_n(n, omega)
if Problem.value == 'Particle In a Box':
return (n**2*pi**2*hbar**2)/(2*m_p*l**2)
if Problem.value == 'Hydrogen Atom':
return HydroE_n(n)
if Problem.value == 'Helium Atom EO(1)':
return HydroE_n(1)+HydroE_n(1)
```
```python
if Perturbation.value != '':
if O >= 1:
if Problem.value == 'Harmonic Oscilator':
for i in range(0, p+1):
j = i
if i != n:
if ((n == 0) and (i == 1)) or (i==0):
IntegrandHmn = Matrix([psi(i)*Hp*psi(n)])
Enm = Matrix([E(n)-E(i)])
else:
IntegrandHmn = IntegrandHmn.row_insert(j, Matrix([psi(i)*Hp*psi(n)]))
Enm = Enm.row_insert(j, Matrix([E(n)-E(i)]))
if Problem.value == 'Particle In a Box' or Problem.value == 'Hydrogen Atom':
for i in range(0, p+2):
j = i
if i != n and i != 0:
if ((n == 1) and (i == 2)) or (i==1):
if Problem.value == 'Particle In a Box':
IntegrandHmn = Matrix([psi(i)*Hp*psi(n)])
elif Problem.value == 'Hydrogen Atom':
IntegrandHmn = Matrix([psi(i, l, m, Z)*Hp*psi(n, l, m, Z)])
Enm = Matrix([E(n)-E(i)])
else:
if Problem.value == 'Particle In a Box':
IntegrandHmn = IntegrandHmn.row_insert(j, Matrix([psi(i)*Hp*psi(n)]))
elif Problem.value == 'Hydrogen Atom':
IntegrandHmn = IntegrandHmn.row_insert(j, Matrix([psi(i, l, m, Z)*Hp*psi(n, l, m, Z)]))
Enm = Enm.row_insert(j, Matrix([E(n)-E(i)]))
if Problem.value == 'Harmonic Oscilator' or Problem.value == 'Particle In a Box':
Hmn = integrate(IntegrandHmn, (x, x1, x2))
elif Problem.value == 'Hydrogen Atom':
Hmn = integrate(integrate(integrate(IntegrandHmn*r**2*sin(theta), (r, r1, r2)), (theta, theta1, theta2)), (phi, phi1, phi2))
```
```python
Hmn
```
$$\left[\begin{matrix}\frac{3 \sqrt{2} \hbar^{\frac{3}{2}} c}{4 m_{p}^{\frac{3}{2}} \omega^{\frac{3}{2}}}\\\frac{3 \sqrt{2} \hbar^{2} d}{2 m_{p}^{2} \omega^{2}}\\\frac{\sqrt{3} \hbar^{\frac{3}{2}} c}{2 m_{p}^{\frac{3}{2}} \omega^{\frac{3}{2}}}\\\frac{\sqrt{6} \hbar^{2} d}{2 m_{p}^{2} \omega^{2}}\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\end{matrix}\right]$$
```python
Enm
```
$$\left[\begin{matrix}- \hbar \omega\\- 2 \hbar \omega\\- 3 \hbar \omega\\- 4 \hbar \omega\\- 5 \hbar \omega\\- 6 \hbar \omega\\- 7 \hbar \omega\\- 8 \hbar \omega\\- 9 \hbar \omega\\- 10 \hbar \omega\\- 11 \hbar \omega\\- 12 \hbar \omega\\- 13 \hbar \omega\\- 14 \hbar \omega\\- 15 \hbar \omega\\- 16 \hbar \omega\\- 17 \hbar \omega\\- 18 \hbar \omega\\- 19 \hbar \omega\\- 20 \hbar \omega\\- 21 \hbar \omega\\- 22 \hbar \omega\\- 23 \hbar \omega\\- 24 \hbar \omega\\- 25 \hbar \omega\\- 26 \hbar \omega\\- 27 \hbar \omega\\- 28 \hbar \omega\\- 29 \hbar \omega\\- 30 \hbar \omega\end{matrix}\right]$$
```python
if Energy.value == True:
EO0 = E(n)
Emat = Matrix([EO0])
if Perturbation.value != '' or Problem.value == 'Helium Atom EO(1)':
if O >= 1:
if Problem.value == 'Harmonic Oscilator' or Problem.value == 'Particle In a Box':
IntegrandEO1 = psi(n)*Hp*psi(n)
IntEO1 = integrate(IntegrandEO1, (x, x1, x2))
elif Problem.value == 'Hydrogen Atom':
IntegrandEO1 = psi(n, l, m, Z)*Hp*psi(n, l, m, Z)
IntEO1 = integrate(integrate(integrate(IntegrandEO1*r**2*sin(theta), (r, r1, r2)), (theta, theta1, theta2)), (phi, phi1, phi2))
elif Problem.value == 'Helium Atom EO(1)':
IntegrandEO1 = psi()*Hp*psi()
IntEO1 = integrate(integrate(integrate(integrate(integrate(integrate(IntegrandEO1*r**2*sin(theta)*r_2**2*sin(theta_2), (r, r1, r2)), (theta, theta1, theta2)), (phi, phi1, phi2)), (r_2, r1, r2)), (theta_2, theta1, theta2)), (phi_2, phi1, phi2))
EO1 = IntEO1
Emat = Emat.row_insert(1, Matrix([EO1]))
if O == 2:
HmnSq = zeros(len(Hmn), 1)
for i in range(p):
HmnSq[i] = Hmn[i]**2
DivisionEO2 = zeros(p, 1)
for i in range(p):
DivisionEO2[i] = HmnSq[i]/Enm[i]
EO2 = 0
for i in range(p):
EO2 = EO2 + DivisionEO2[i]
Emat = Emat.row_insert(2, Matrix([EO2]))
Ep = 0
for i in range(O+1):
Ep = Ep + Emat[i]
```
```python
EO0
```
$$\frac{\hbar \omega}{2}$$
```python
IntEO1
```
$$\frac{3 \hbar^{2} d}{4 m_{p}^{2} \omega^{2}}$$
```python
DivisionEO2
```
$$\left[\begin{matrix}- \frac{9 \hbar^{2} c^{2}}{8 m_{p}^{3} \omega^{4}}\\- \frac{9 \hbar^{3} d^{2}}{4 m_{p}^{4} \omega^{5}}\\- \frac{\hbar^{2} c^{2}}{4 m_{p}^{3} \omega^{4}}\\- \frac{3 \hbar^{3} d^{2}}{8 m_{p}^{4} \omega^{5}}\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\end{matrix}\right]$$
```python
Emat
```
$$\left[\begin{matrix}\frac{\hbar \omega}{2}\\\frac{3 \hbar^{2} d}{4 m_{p}^{2} \omega^{2}}\\- \frac{11 \hbar^{2} c^{2}}{8 m_{p}^{3} \omega^{4}} - \frac{21 \hbar^{3} d^{2}}{8 m_{p}^{4} \omega^{5}}\end{matrix}\right]$$
```python
Ep
```
$$- \frac{11 \hbar^{2} c^{2}}{8 m_{p}^{3} \omega^{4}} - \frac{21 \hbar^{3} d^{2}}{8 m_{p}^{4} \omega^{5}} + \frac{3 \hbar^{2} d}{4 m_{p}^{2} \omega^{2}} + \frac{\hbar \omega}{2}$$
```python
if Function.value == True:
if Problem.value == 'Harmonic Oscilator' or Problem.value == 'Particle In a Box':
PHIO0 = psi(n)
elif Problem.value == 'Hydrogen Atom':
PHIO0 = psi(n, l, m, Z)
PHImat = Matrix([PHIO0])
if Perturbation.value != '':
if O >= 1:
if Problem.value == 'Harmonic Oscilator':
for i in range(0, p+1):
j = i
if i != n:
if ((n == 0) and (i == 1)) or (i == 0):
Phi_pO1 = Matrix([psi(i)])
else:
Phi_pO1 = Phi_pO1.row_insert(j, Matrix([psi(i)]))
if Problem.value == 'Particle In a Box' or Problem.value == 'Hydrogen Atom':
for i in range(0, p+2):
j = i
if i != n and i != 0:
if ((n == 1) and (i == 2)) or (i == 1):
if Problem.value == 'Particle In a Box':
Phi_pO1 = Matrix([psi(i)])
elif Problem.value == 'Hydrogen Atom':
Phi_pO1 = Matrix([psi(i, l, m, Z)])
else:
if Problem.value == 'Particle In a Box':
Phi_pO1 = Phi_pO1.row_insert(j, Matrix([psi(i)]))
elif Problem.value == 'Hydrogen Atom':
Phi_pO1 = Phi_pO1.row_insert(j, Matrix([psi(i, l, m, Z)]))
ProductPHIO1 = zeros(p, 1)
for i in range(p):
ProductPHIO1[i] = (Hmn[i]/Enm[i])*Phi_pO1[i]
PHIO1 = 0
for i in range(p):
PHIO1 = PHIO1 + ProductPHIO1[i]
PHImat = PHImat.row_insert(1, Matrix([PHIO1]))
PHIp = 0
if O < 2:
for i in range(O+1):
PHIp = PHIp + PHImat[i]
else:
for i in range(2):
PHIp = PHIp + PHImat[i]
```
```python
PHIO0
```
$$\frac{\sqrt[4]{m_{p}} \sqrt[4]{\omega}}{\sqrt[4]{\hbar} \sqrt[4]{\pi}} e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}$$
```python
Phi_pO1
```
$$\left[\begin{matrix}\frac{\sqrt{2} m_{p}^{\frac{3}{4}} \omega^{\frac{3}{4}} x}{\hbar^{\frac{3}{4}} \sqrt[4]{\pi}} e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{2} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{4 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{4 m_{p}}{\hbar} \omega x^{2} - 2\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{3} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{12 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{8 x^{3}}{\hbar^{\frac{3}{2}}} m_{p}^{\frac{3}{2}} \omega^{\frac{3}{2}} - \frac{12 x}{\sqrt{\hbar}} \sqrt{m_{p}} \sqrt{\omega}\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{6} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{48 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{16 x^{4}}{\hbar^{2}} m_{p}^{2} \omega^{2} - \frac{48 m_{p}}{\hbar} \omega x^{2} + 12\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{15} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{240 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{32 x^{5}}{\hbar^{\frac{5}{2}}} m_{p}^{\frac{5}{2}} \omega^{\frac{5}{2}} - \frac{160 x^{3}}{\hbar^{\frac{3}{2}}} m_{p}^{\frac{3}{2}} \omega^{\frac{3}{2}} + \frac{120 x}{\sqrt{\hbar}} \sqrt{m_{p}} \sqrt{\omega}\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{5} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{480 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{64 x^{6}}{\hbar^{3}} m_{p}^{3} \omega^{3} - \frac{480 x^{4}}{\hbar^{2}} m_{p}^{2} \omega^{2} + \frac{720 m_{p}}{\hbar} \omega x^{2} - 120\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{70} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{6720 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{128 x^{7}}{\hbar^{\frac{7}{2}}} m_{p}^{\frac{7}{2}} \omega^{\frac{7}{2}} - \frac{1344 x^{5}}{\hbar^{\frac{5}{2}}} m_{p}^{\frac{5}{2}} \omega^{\frac{5}{2}} + \frac{3360 x^{3}}{\hbar^{\frac{3}{2}}} m_{p}^{\frac{3}{2}} \omega^{\frac{3}{2}} - \frac{1680 x}{\sqrt{\hbar}} \sqrt{m_{p}} \sqrt{\omega}\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{70} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{26880 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{256 x^{8}}{\hbar^{4}} m_{p}^{4} \omega^{4} - \frac{3584 x^{6}}{\hbar^{3}} m_{p}^{3} \omega^{3} + \frac{13440 x^{4}}{\hbar^{2}} m_{p}^{2} \omega^{2} - \frac{13440 m_{p}}{\hbar} \omega x^{2} + 1680\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{35} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{80640 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{512 x^{9}}{\hbar^{\frac{9}{2}}} m_{p}^{\frac{9}{2}} \omega^{\frac{9}{2}} - \frac{9216 x^{7}}{\hbar^{\frac{7}{2}}} m_{p}^{\frac{7}{2}} \omega^{\frac{7}{2}} + \frac{48384 x^{5}}{\hbar^{\frac{5}{2}}} m_{p}^{\frac{5}{2}} \omega^{\frac{5}{2}} - \frac{80640 x^{3}}{\hbar^{\frac{3}{2}}} m_{p}^{\frac{3}{2}} \omega^{\frac{3}{2}} + \frac{30240 x}{\sqrt{\hbar}} \sqrt{m_{p}} \sqrt{\omega}\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{7} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{161280 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{1024 x^{10}}{\hbar^{5}} m_{p}^{5} \omega^{5} - \frac{23040 x^{8}}{\hbar^{4}} m_{p}^{4} \omega^{4} + \frac{161280 x^{6}}{\hbar^{3}} m_{p}^{3} \omega^{3} - \frac{403200 x^{4}}{\hbar^{2}} m_{p}^{2} \omega^{2} + \frac{302400 m_{p}}{\hbar} \omega x^{2} - 30240\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{154} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{3548160 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{2048 x^{11}}{\hbar^{\frac{11}{2}}} m_{p}^{\frac{11}{2}} \omega^{\frac{11}{2}} - \frac{56320 x^{9}}{\hbar^{\frac{9}{2}}} m_{p}^{\frac{9}{2}} \omega^{\frac{9}{2}} + \frac{506880 x^{7}}{\hbar^{\frac{7}{2}}} m_{p}^{\frac{7}{2}} \omega^{\frac{7}{2}} - \frac{1774080 
x^{5}}{\hbar^{\frac{5}{2}}} m_{p}^{\frac{5}{2}} \omega^{\frac{5}{2}} + \frac{2217600 x^{3}}{\hbar^{\frac{3}{2}}} m_{p}^{\frac{3}{2}} \omega^{\frac{3}{2}} - \frac{665280 x}{\sqrt{\hbar}} \sqrt{m_{p}} \sqrt{\omega}\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{231} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{21288960 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{4096 x^{12}}{\hbar^{6}} m_{p}^{6} \omega^{6} - \frac{135168 x^{10}}{\hbar^{5}} m_{p}^{5} \omega^{5} + \frac{1520640 x^{8}}{\hbar^{4}} m_{p}^{4} \omega^{4} - \frac{7096320 x^{6}}{\hbar^{3}} m_{p}^{3} \omega^{3} + \frac{13305600 x^{4}}{\hbar^{2}} m_{p}^{2} \omega^{2} - \frac{7983360 m_{p}}{\hbar} \omega x^{2} + 665280\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{6006} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{553512960 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{8192 x^{13}}{\hbar^{\frac{13}{2}}} m_{p}^{\frac{13}{2}} \omega^{\frac{13}{2}} - \frac{319488 x^{11}}{\hbar^{\frac{11}{2}}} m_{p}^{\frac{11}{2}} \omega^{\frac{11}{2}} + \frac{4392960 x^{9}}{\hbar^{\frac{9}{2}}} m_{p}^{\frac{9}{2}} \omega^{\frac{9}{2}} - \frac{26357760 x^{7}}{\hbar^{\frac{7}{2}}} m_{p}^{\frac{7}{2}} \omega^{\frac{7}{2}} + \frac{69189120 x^{5}}{\hbar^{\frac{5}{2}}} m_{p}^{\frac{5}{2}} \omega^{\frac{5}{2}} - \frac{69189120 x^{3}}{\hbar^{\frac{3}{2}}} m_{p}^{\frac{3}{2}} \omega^{\frac{3}{2}} + \frac{17297280 x}{\sqrt{\hbar}} \sqrt{m_{p}} \sqrt{\omega}\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{858} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{1107025920 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{16384 x^{14}}{\hbar^{7}} m_{p}^{7} \omega^{7} - \frac{745472 x^{12}}{\hbar^{6}} m_{p}^{6} \omega^{6} + \frac{12300288 x^{10}}{\hbar^{5}} m_{p}^{5} \omega^{5} - \frac{92252160 x^{8}}{\hbar^{4}} m_{p}^{4} \omega^{4} + \frac{322882560 x^{6}}{\hbar^{3}} m_{p}^{3} \omega^{3} - \frac{484323840 x^{4}}{\hbar^{2}} m_{p}^{2} \omega^{2} + \frac{242161920 m_{p}}{\hbar} \omega x^{2} - 17297280\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{715} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{5535129600 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{32768 x^{15}}{\hbar^{\frac{15}{2}}} m_{p}^{\frac{15}{2}} \omega^{\frac{15}{2}} - \frac{1720320 x^{13}}{\hbar^{\frac{13}{2}}} m_{p}^{\frac{13}{2}} \omega^{\frac{13}{2}} + \frac{33546240 x^{11}}{\hbar^{\frac{11}{2}}} m_{p}^{\frac{11}{2}} \omega^{\frac{11}{2}} - \frac{307507200 x^{9}}{\hbar^{\frac{9}{2}}} m_{p}^{\frac{9}{2}} \omega^{\frac{9}{2}} + \frac{1383782400 x^{7}}{\hbar^{\frac{7}{2}}} m_{p}^{\frac{7}{2}} \omega^{\frac{7}{2}} - \frac{2905943040 x^{5}}{\hbar^{\frac{5}{2}}} m_{p}^{\frac{5}{2}} \omega^{\frac{5}{2}} + \frac{2421619200 x^{3}}{\hbar^{\frac{3}{2}}} m_{p}^{\frac{3}{2}} \omega^{\frac{3}{2}} - \frac{518918400 x}{\sqrt{\hbar}} \sqrt{m_{p}} \sqrt{\omega}\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{1430} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{44281036800 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{65536 x^{16}}{\hbar^{8}} m_{p}^{8} \omega^{8} - \frac{3932160 x^{14}}{\hbar^{7}} m_{p}^{7} \omega^{7} + \frac{89456640 x^{12}}{\hbar^{6}} m_{p}^{6} \omega^{6} - \frac{984023040 x^{10}}{\hbar^{5}} m_{p}^{5} \omega^{5} + \frac{5535129600 x^{8}}{\hbar^{4}} m_{p}^{4} \omega^{4} - \frac{15498362880 x^{6}}{\hbar^{3}} m_{p}^{3} \omega^{3} + \frac{19372953600 x^{4}}{\hbar^{2}} m_{p}^{2} \omega^{2} - \frac{8302694400 m_{p}}{\hbar} \omega x^{2} + 518918400\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{12155} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{752777625600 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{131072 
x^{17}}{\hbar^{\frac{17}{2}}} m_{p}^{\frac{17}{2}} \omega^{\frac{17}{2}} - \frac{8912896 x^{15}}{\hbar^{\frac{15}{2}}} m_{p}^{\frac{15}{2}} \omega^{\frac{15}{2}} + \frac{233963520 x^{13}}{\hbar^{\frac{13}{2}}} m_{p}^{\frac{13}{2}} \omega^{\frac{13}{2}} - \frac{3041525760 x^{11}}{\hbar^{\frac{11}{2}}} m_{p}^{\frac{11}{2}} \omega^{\frac{11}{2}} + \frac{20910489600 x^{9}}{\hbar^{\frac{9}{2}}} m_{p}^{\frac{9}{2}} \omega^{\frac{9}{2}} - \frac{75277762560 x^{7}}{\hbar^{\frac{7}{2}}} m_{p}^{\frac{7}{2}} \omega^{\frac{7}{2}} + \frac{131736084480 x^{5}}{\hbar^{\frac{5}{2}}} m_{p}^{\frac{5}{2}} \omega^{\frac{5}{2}} - \frac{94097203200 x^{3}}{\hbar^{\frac{3}{2}}} m_{p}^{\frac{3}{2}} \omega^{\frac{3}{2}} + \frac{17643225600 x}{\sqrt{\hbar}} \sqrt{m_{p}} \sqrt{\omega}\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{12155} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{4516665753600 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{262144 x^{18}}{\hbar^{9}} m_{p}^{9} \omega^{9} - \frac{20054016 x^{16}}{\hbar^{8}} m_{p}^{8} \omega^{8} + \frac{601620480 x^{14}}{\hbar^{7}} m_{p}^{7} \omega^{7} - \frac{9124577280 x^{12}}{\hbar^{6}} m_{p}^{6} \omega^{6} + \frac{75277762560 x^{10}}{\hbar^{5}} m_{p}^{5} \omega^{5} - \frac{338749931520 x^{8}}{\hbar^{4}} m_{p}^{4} \omega^{4} + \frac{790416506880 x^{6}}{\hbar^{3}} m_{p}^{3} \omega^{3} - \frac{846874828800 x^{4}}{\hbar^{2}} m_{p}^{2} \omega^{2} + \frac{317578060800 m_{p}}{\hbar} \omega x^{2} - 17643225600\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{461890} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{171633298636800 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{524288 x^{19}}{\hbar^{\frac{19}{2}}} m_{p}^{\frac{19}{2}} \omega^{\frac{19}{2}} - \frac{44826624 x^{17}}{\hbar^{\frac{17}{2}}} m_{p}^{\frac{17}{2}} \omega^{\frac{17}{2}} + \frac{1524105216 x^{15}}{\hbar^{\frac{15}{2}}} m_{p}^{\frac{15}{2}} \omega^{\frac{15}{2}} - \frac{26671841280 x^{13}}{\hbar^{\frac{13}{2}}} m_{p}^{\frac{13}{2}} \omega^{\frac{13}{2}} + \frac{260050452480 x^{11}}{\hbar^{\frac{11}{2}}} m_{p}^{\frac{11}{2}} \omega^{\frac{11}{2}} - \frac{1430277488640 x^{9}}{\hbar^{\frac{9}{2}}} m_{p}^{\frac{9}{2}} \omega^{\frac{9}{2}} + \frac{4290832465920 x^{7}}{\hbar^{\frac{7}{2}}} m_{p}^{\frac{7}{2}} \omega^{\frac{7}{2}} - \frac{6436248698880 x^{5}}{\hbar^{\frac{5}{2}}} m_{p}^{\frac{5}{2}} \omega^{\frac{5}{2}} + \frac{4022655436800 x^{3}}{\hbar^{\frac{3}{2}}} m_{p}^{\frac{3}{2}} \omega^{\frac{3}{2}} - \frac{670442572800 x}{\sqrt{\hbar}} \sqrt{m_{p}} \sqrt{\omega}\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{46189} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{343266597273600 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{1048576 x^{20}}{\hbar^{10}} m_{p}^{10} \omega^{10} - \frac{99614720 x^{18}}{\hbar^{9}} m_{p}^{9} \omega^{9} + \frac{3810263040 x^{16}}{\hbar^{8}} m_{p}^{8} \omega^{8} - \frac{76205260800 x^{14}}{\hbar^{7}} m_{p}^{7} \omega^{7} + \frac{866834841600 x^{12}}{\hbar^{6}} m_{p}^{6} \omega^{6} - \frac{5721109954560 x^{10}}{\hbar^{5}} m_{p}^{5} \omega^{5} + \frac{21454162329600 x^{8}}{\hbar^{4}} m_{p}^{4} \omega^{4} - \frac{42908324659200 x^{6}}{\hbar^{3}} m_{p}^{3} \omega^{3} + \frac{40226554368000 x^{4}}{\hbar^{2}} m_{p}^{2} \omega^{2} - \frac{13408851456000 m_{p}}{\hbar} \omega x^{2} + 670442572800\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{1939938} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{14417197085491200 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{2097152 x^{21}}{\hbar^{\frac{21}{2}}} m_{p}^{\frac{21}{2}} \omega^{\frac{21}{2}} - \frac{220200960 x^{19}}{\hbar^{\frac{19}{2}}} 
m_{p}^{\frac{19}{2}} \omega^{\frac{19}{2}} + \frac{9413591040 x^{17}}{\hbar^{\frac{17}{2}}} m_{p}^{\frac{17}{2}} \omega^{\frac{17}{2}} - \frac{213374730240 x^{15}}{\hbar^{\frac{15}{2}}} m_{p}^{\frac{15}{2}} \omega^{\frac{15}{2}} + \frac{2800543334400 x^{13}}{\hbar^{\frac{13}{2}}} m_{p}^{\frac{13}{2}} \omega^{\frac{13}{2}} - \frac{21844238008320 x^{11}}{\hbar^{\frac{11}{2}}} m_{p}^{\frac{11}{2}} \omega^{\frac{11}{2}} + \frac{100119424204800 x^{9}}{\hbar^{\frac{9}{2}}} m_{p}^{\frac{9}{2}} \omega^{\frac{9}{2}} - \frac{257449947955200 x^{7}}{\hbar^{\frac{7}{2}}} m_{p}^{\frac{7}{2}} \omega^{\frac{7}{2}} + \frac{337903056691200 x^{5}}{\hbar^{\frac{5}{2}}} m_{p}^{\frac{5}{2}} \omega^{\frac{5}{2}} - \frac{187723920384000 x^{3}}{\hbar^{\frac{3}{2}}} m_{p}^{\frac{3}{2}} \omega^{\frac{3}{2}} + \frac{28158588057600 x}{\sqrt{\hbar}} \sqrt{m_{p}} \sqrt{\omega}\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{176358} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{28834394170982400 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{4194304 x^{22}}{\hbar^{11}} m_{p}^{11} \omega^{11} - \frac{484442112 x^{20}}{\hbar^{10}} m_{p}^{10} \omega^{10} + \frac{23011000320 x^{18}}{\hbar^{9}} m_{p}^{9} \omega^{9} - \frac{586780508160 x^{16}}{\hbar^{8}} m_{p}^{8} \omega^{8} + \frac{8801707622400 x^{14}}{\hbar^{7}} m_{p}^{7} \omega^{7} - \frac{80095539363840 x^{12}}{\hbar^{6}} m_{p}^{6} \omega^{6} + \frac{440525466501120 x^{10}}{\hbar^{5}} m_{p}^{5} \omega^{5} - \frac{1415974713753600 x^{8}}{\hbar^{4}} m_{p}^{4} \omega^{4} + \frac{2477955749068800 x^{6}}{\hbar^{3}} m_{p}^{3} \omega^{3} - \frac{2064963124224000 x^{4}}{\hbar^{2}} m_{p}^{2} \omega^{2} + \frac{619488937267200 m_{p}}{\hbar} \omega x^{2} - 28158588057600\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{2028117} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{663191065932595200 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{8388608 x^{23}}{\hbar^{\frac{23}{2}}} m_{p}^{\frac{23}{2}} \omega^{\frac{23}{2}} - \frac{1061158912 x^{21}}{\hbar^{\frac{21}{2}}} m_{p}^{\frac{21}{2}} \omega^{\frac{21}{2}} + \frac{55710842880 x^{19}}{\hbar^{\frac{19}{2}}} m_{p}^{\frac{19}{2}} \omega^{\frac{19}{2}} - \frac{1587759022080 x^{17}}{\hbar^{\frac{17}{2}}} m_{p}^{\frac{17}{2}} \omega^{\frac{17}{2}} + \frac{26991903375360 x^{15}}{\hbar^{\frac{15}{2}}} m_{p}^{\frac{15}{2}} \omega^{\frac{15}{2}} - \frac{283414985441280 x^{13}}{\hbar^{\frac{13}{2}}} m_{p}^{\frac{13}{2}} \omega^{\frac{13}{2}} + \frac{1842197405368320 x^{11}}{\hbar^{\frac{11}{2}}} m_{p}^{\frac{11}{2}} \omega^{\frac{11}{2}} - \frac{7237204092518400 x^{9}}{\hbar^{\frac{9}{2}}} m_{p}^{\frac{9}{2}} \omega^{\frac{9}{2}} + \frac{16283709208166400 x^{7}}{\hbar^{\frac{7}{2}}} m_{p}^{\frac{7}{2}} \omega^{\frac{7}{2}} - \frac{18997660742860800 x^{5}}{\hbar^{\frac{5}{2}}} m_{p}^{\frac{5}{2}} \omega^{\frac{5}{2}} + \frac{9498830371430400 x^{3}}{\hbar^{\frac{3}{2}}} m_{p}^{\frac{3}{2}} \omega^{\frac{3}{2}} - \frac{1295295050649600 x}{\sqrt{\hbar}} \sqrt{m_{p}} \sqrt{\omega}\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{676039} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{2652764263730380800 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{16777216 x^{24}}{\hbar^{12}} m_{p}^{12} \omega^{12} - \frac{2315255808 x^{22}}{\hbar^{11}} m_{p}^{11} \omega^{11} + \frac{133706022912 x^{20}}{\hbar^{10}} m_{p}^{10} \omega^{10} - \frac{4234024058880 x^{18}}{\hbar^{9}} m_{p}^{9} \omega^{9} + \frac{80975710126080 x^{16}}{\hbar^{8}} m_{p}^{8} \omega^{8} - \frac{971708521512960 x^{14}}{\hbar^{7}} m_{p}^{7} \omega^{7} + \frac{7368789621473280 x^{12}}{\hbar^{6}} 
m_{p}^{6} \omega^{6} - \frac{34738579644088320 x^{10}}{\hbar^{5}} m_{p}^{5} \omega^{5} + \frac{97702255248998400 x^{8}}{\hbar^{4}} m_{p}^{4} \omega^{4} - \frac{151981285942886400 x^{6}}{\hbar^{3}} m_{p}^{3} \omega^{3} + \frac{113985964457164800 x^{4}}{\hbar^{2}} m_{p}^{2} \omega^{2} - \frac{31087081215590400 m_{p}}{\hbar} \omega x^{2} + 1295295050649600\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{1352078} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{26527642637303808000 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{33554432 x^{25}}{\hbar^{\frac{25}{2}}} m_{p}^{\frac{25}{2}} \omega^{\frac{25}{2}} - \frac{5033164800 x^{23}}{\hbar^{\frac{23}{2}}} m_{p}^{\frac{23}{2}} \omega^{\frac{23}{2}} + \frac{318347673600 x^{21}}{\hbar^{\frac{21}{2}}} m_{p}^{\frac{21}{2}} \omega^{\frac{21}{2}} - \frac{11142168576000 x^{19}}{\hbar^{\frac{19}{2}}} m_{p}^{\frac{19}{2}} \omega^{\frac{19}{2}} + \frac{238163853312000 x^{17}}{\hbar^{\frac{17}{2}}} m_{p}^{\frac{17}{2}} \omega^{\frac{17}{2}} - \frac{3239028405043200 x^{15}}{\hbar^{\frac{15}{2}}} m_{p}^{\frac{15}{2}} \omega^{\frac{15}{2}} + \frac{28341498544128000 x^{13}}{\hbar^{\frac{13}{2}}} m_{p}^{\frac{13}{2}} \omega^{\frac{13}{2}} - \frac{157902634745856000 x^{11}}{\hbar^{\frac{11}{2}}} m_{p}^{\frac{11}{2}} \omega^{\frac{11}{2}} + \frac{542790306938880000 x^{9}}{\hbar^{\frac{9}{2}}} m_{p}^{\frac{9}{2}} \omega^{\frac{9}{2}} - \frac{1085580613877760000 x^{7}}{\hbar^{\frac{7}{2}}} m_{p}^{\frac{7}{2}} \omega^{\frac{7}{2}} + \frac{1139859644571648000 x^{5}}{\hbar^{\frac{5}{2}}} m_{p}^{\frac{5}{2}} \omega^{\frac{5}{2}} - \frac{518118020259840000 x^{3}}{\hbar^{\frac{3}{2}}} m_{p}^{\frac{3}{2}} \omega^{\frac{3}{2}} + \frac{64764752532480000 x}{\sqrt{\hbar}} \sqrt{m_{p}} \sqrt{\omega}\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{104006} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{53055285274607616000 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{67108864 x^{26}}{\hbar^{13}} m_{p}^{13} \omega^{13} - \frac{10905190400 x^{24}}{\hbar^{12}} m_{p}^{12} \omega^{12} + \frac{752458137600 x^{22}}{\hbar^{11}} m_{p}^{11} \omega^{11} - \frac{28969638297600 x^{20}}{\hbar^{10}} m_{p}^{10} \omega^{10} + \frac{688028909568000 x^{18}}{\hbar^{9}} m_{p}^{9} \omega^{9} - \frac{10526842316390400 x^{16}}{\hbar^{8}} m_{p}^{8} \omega^{8} + \frac{105268423163904000 x^{14}}{\hbar^{7}} m_{p}^{7} \omega^{7} - \frac{684244750565376000 x^{12}}{\hbar^{6}} m_{p}^{6} \omega^{6} + \frac{2822509596082176000 x^{10}}{\hbar^{5}} m_{p}^{5} \omega^{5} - \frac{7056273990205440000 x^{8}}{\hbar^{4}} m_{p}^{4} \omega^{4} + \frac{9878783586287616000 x^{6}}{\hbar^{3}} m_{p}^{3} \omega^{3} - \frac{6735534263377920000 x^{4}}{\hbar^{2}} m_{p}^{2} \omega^{2} + \frac{1683883565844480000 m_{p}}{\hbar} \omega x^{2} - 64764752532480000\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{156009} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{477497567471468544000 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{134217728 x^{27}}{\hbar^{\frac{27}{2}}} m_{p}^{\frac{27}{2}} \omega^{\frac{27}{2}} - \frac{23555211264 x^{25}}{\hbar^{\frac{25}{2}}} m_{p}^{\frac{25}{2}} \omega^{\frac{25}{2}} + \frac{1766640844800 x^{23}}{\hbar^{\frac{23}{2}}} m_{p}^{\frac{23}{2}} \omega^{\frac{23}{2}} - \frac{74493355622400 x^{21}}{\hbar^{\frac{21}{2}}} m_{p}^{\frac{21}{2}} \omega^{\frac{21}{2}} + \frac{1955450585088000 x^{19}}{\hbar^{\frac{19}{2}}} m_{p}^{\frac{19}{2}} \omega^{\frac{19}{2}} - \frac{33438205005004800 x^{17}}{\hbar^{\frac{17}{2}}} m_{p}^{\frac{17}{2}} \omega^{\frac{17}{2}} + \frac{378966323390054400 x^{15}}{\hbar^{\frac{15}{2}}} 
m_{p}^{\frac{15}{2}} \omega^{\frac{15}{2}} - \frac{2842247425425408000 x^{13}}{\hbar^{\frac{13}{2}}} m_{p}^{\frac{13}{2}} \omega^{\frac{13}{2}} + \frac{13855956198948864000 x^{11}}{\hbar^{\frac{11}{2}}} m_{p}^{\frac{11}{2}} \omega^{\frac{11}{2}} - \frac{42337643941232640000 x^{9}}{\hbar^{\frac{9}{2}}} m_{p}^{\frac{9}{2}} \omega^{\frac{9}{2}} + \frac{76207759094218752000 x^{7}}{\hbar^{\frac{7}{2}}} m_{p}^{\frac{7}{2}} \omega^{\frac{7}{2}} - \frac{72743770044481536000 x^{5}}{\hbar^{\frac{5}{2}}} m_{p}^{\frac{5}{2}} \omega^{\frac{5}{2}} + \frac{30309904185200640000 x^{3}}{\hbar^{\frac{3}{2}}} m_{p}^{\frac{3}{2}} \omega^{\frac{3}{2}} - \frac{3497296636753920000 x}{\sqrt{\hbar}} \sqrt{m_{p}} \sqrt{\omega}\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{44574} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{1909990269885874176000 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{268435456 x^{28}}{\hbar^{14}} m_{p}^{14} \omega^{14} - \frac{50734301184 x^{26}}{\hbar^{13}} m_{p}^{13} \omega^{13} + \frac{4122161971200 x^{24}}{\hbar^{12}} m_{p}^{12} \omega^{12} - \frac{189619450675200 x^{22}}{\hbar^{11}} m_{p}^{11} \omega^{11} + \frac{5475261638246400 x^{20}}{\hbar^{10}} m_{p}^{10} \omega^{10} - \frac{104029971126681600 x^{18}}{\hbar^{9}} m_{p}^{9} \omega^{9} + \frac{1326382131865190400 x^{16}}{\hbar^{8}} m_{p}^{8} \omega^{8} - \frac{11368989701701632000 x^{14}}{\hbar^{7}} m_{p}^{7} \omega^{7} + \frac{64661128928428032000 x^{12}}{\hbar^{6}} m_{p}^{6} \omega^{6} - \frac{237090806070902784000 x^{10}}{\hbar^{5}} m_{p}^{5} \omega^{5} + \frac{533454313659531264000 x^{8}}{\hbar^{4}} m_{p}^{4} \omega^{4} - \frac{678941853748494336000 x^{6}}{\hbar^{3}} m_{p}^{3} \omega^{3} + \frac{424338658592808960000 x^{4}}{\hbar^{2}} m_{p}^{2} \omega^{2} - \frac{97924305829109760000 m_{p}}{\hbar} \omega x^{2} + 3497296636753920000\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{646323} \sqrt[4]{m_{p}} \sqrt[4]{\omega}}{55389717826690351104000 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{536870912 x^{29}}{\hbar^{\frac{29}{2}}} m_{p}^{\frac{29}{2}} \omega^{\frac{29}{2}} - \frac{108984795136 x^{27}}{\hbar^{\frac{27}{2}}} m_{p}^{\frac{27}{2}} \omega^{\frac{27}{2}} + \frac{9563415773184 x^{25}}{\hbar^{\frac{25}{2}}} m_{p}^{\frac{25}{2}} \omega^{\frac{25}{2}} - \frac{478170788659200 x^{23}}{\hbar^{\frac{23}{2}}} m_{p}^{\frac{23}{2}} \omega^{\frac{23}{2}} + \frac{15122151191347200 x^{21}}{\hbar^{\frac{21}{2}}} m_{p}^{\frac{21}{2}} \omega^{\frac{21}{2}} - \frac{317565175018291200 x^{19}}{\hbar^{\frac{19}{2}}} m_{p}^{\frac{19}{2}} \omega^{\frac{19}{2}} + \frac{4525303744010649600 x^{17}}{\hbar^{\frac{17}{2}}} m_{p}^{\frac{17}{2}} \omega^{\frac{17}{2}} - \frac{43960093513246310400 x^{15}}{\hbar^{\frac{15}{2}}} m_{p}^{\frac{15}{2}} \omega^{\frac{15}{2}} + \frac{288488113680678912000 x^{13}}{\hbar^{\frac{13}{2}}} m_{p}^{\frac{13}{2}} \omega^{\frac{13}{2}} - \frac{1250115159282941952000 x^{11}}{\hbar^{\frac{11}{2}}} m_{p}^{\frac{11}{2}} \omega^{\frac{11}{2}} + \frac{3437816688028090368000 x^{9}}{\hbar^{\frac{9}{2}}} m_{p}^{\frac{9}{2}} \omega^{\frac{9}{2}} - \frac{5625518216773238784000 x^{7}}{\hbar^{\frac{7}{2}}} m_{p}^{\frac{7}{2}} \omega^{\frac{7}{2}} + \frac{4922328439676583936000 x^{5}}{\hbar^{\frac{5}{2}}} m_{p}^{\frac{5}{2}} \omega^{\frac{5}{2}} - \frac{1893203246029455360000 x^{3}}{\hbar^{\frac{3}{2}}} m_{p}^{\frac{3}{2}} \omega^{\frac{3}{2}} + \frac{202843204931727360000 x}{\sqrt{\hbar}} \sqrt{m_{p}} \sqrt{\omega}\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\\frac{\sqrt{1077205} \sqrt[4]{m_{p}} 
\sqrt[4]{\omega}}{553897178266903511040000 \sqrt[4]{\hbar} \sqrt[4]{\pi}} \left(\frac{1073741824 x^{30}}{\hbar^{15}} m_{p}^{15} \omega^{15} - \frac{233538846720 x^{28}}{\hbar^{14}} m_{p}^{14} \omega^{14} + \frac{22069421015040 x^{26}}{\hbar^{13}} m_{p}^{13} \omega^{13} - \frac{1195426971648000 x^{24}}{\hbar^{12}} m_{p}^{12} \omega^{12} + \frac{41242230521856000 x^{22}}{\hbar^{11}} m_{p}^{11} \omega^{11} - \frac{952695525054873600 x^{20}}{\hbar^{10}} m_{p}^{10} \omega^{10} + \frac{15084345813368832000 x^{18}}{\hbar^{9}} m_{p}^{9} \omega^{9} - \frac{164850350674673664000 x^{16}}{\hbar^{8}} m_{p}^{8} \omega^{8} + \frac{1236377630060052480000 x^{14}}{\hbar^{7}} m_{p}^{7} \omega^{7} - \frac{6250575796414709760000 x^{12}}{\hbar^{6}} m_{p}^{6} \omega^{6} + \frac{20626900128168542208000 x^{10}}{\hbar^{5}} m_{p}^{5} \omega^{5} - \frac{42191386625799290880000 x^{8}}{\hbar^{4}} m_{p}^{4} \omega^{4} + \frac{49223284396765839360000 x^{6}}{\hbar^{3}} m_{p}^{3} \omega^{3} - \frac{28398048690441830400000 x^{4}}{\hbar^{2}} m_{p}^{2} \omega^{2} + \frac{6085296147951820800000 m_{p}}{\hbar} \omega x^{2} - 202843204931727360000\right) e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\end{matrix}\right]$$
```python
ProductPHIO1
```
$$\left[\begin{matrix}- \frac{3 c x e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}}{2 \sqrt[4]{\hbar} \sqrt[4]{\pi} m_{p}^{\frac{3}{4}} \omega^{\frac{7}{4}}}\\- \frac{3 \hbar^{\frac{3}{4}} d e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}}{8 \sqrt[4]{\pi} m_{p}^{\frac{7}{4}} \omega^{\frac{11}{4}}} \left(\frac{4 m_{p}}{\hbar} \omega x^{2} - 2\right)\\- \frac{\sqrt[4]{\hbar} c e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}}{24 \sqrt[4]{\pi} m_{p}^{\frac{5}{4}} \omega^{\frac{9}{4}}} \left(\frac{8 x^{3}}{\hbar^{\frac{3}{2}}} m_{p}^{\frac{3}{2}} \omega^{\frac{3}{2}} - \frac{12 x}{\sqrt{\hbar}} \sqrt{m_{p}} \sqrt{\omega}\right)\\- \frac{\hbar^{\frac{3}{4}} d e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}}{64 \sqrt[4]{\pi} m_{p}^{\frac{7}{4}} \omega^{\frac{11}{4}}} \left(\frac{16 x^{4}}{\hbar^{2}} m_{p}^{2} \omega^{2} - \frac{48 m_{p}}{\hbar} \omega x^{2} + 12\right)\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\end{matrix}\right]$$
```python
PHImat
```
$$\left[\begin{matrix}\frac{\sqrt[4]{m_{p}} \sqrt[4]{\omega}}{\sqrt[4]{\hbar} \sqrt[4]{\pi}} e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}\\- \frac{3 c x e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}}{2 \sqrt[4]{\hbar} \sqrt[4]{\pi} m_{p}^{\frac{3}{4}} \omega^{\frac{7}{4}}} - \frac{\sqrt[4]{\hbar} c e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}}{24 \sqrt[4]{\pi} m_{p}^{\frac{5}{4}} \omega^{\frac{9}{4}}} \left(\frac{8 x^{3}}{\hbar^{\frac{3}{2}}} m_{p}^{\frac{3}{2}} \omega^{\frac{3}{2}} - \frac{12 x}{\sqrt{\hbar}} \sqrt{m_{p}} \sqrt{\omega}\right) - \frac{3 \hbar^{\frac{3}{4}} d e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}}{8 \sqrt[4]{\pi} m_{p}^{\frac{7}{4}} \omega^{\frac{11}{4}}} \left(\frac{4 m_{p}}{\hbar} \omega x^{2} - 2\right) - \frac{\hbar^{\frac{3}{4}} d e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}}{64 \sqrt[4]{\pi} m_{p}^{\frac{7}{4}} \omega^{\frac{11}{4}}} \left(\frac{16 x^{4}}{\hbar^{2}} m_{p}^{2} \omega^{2} - \frac{48 m_{p}}{\hbar} \omega x^{2} + 12\right)\end{matrix}\right]$$
```python
PHIp
```
$$- \frac{3 c x e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}}{2 \sqrt[4]{\hbar} \sqrt[4]{\pi} m_{p}^{\frac{3}{4}} \omega^{\frac{7}{4}}} - \frac{\sqrt[4]{\hbar} c e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}}{24 \sqrt[4]{\pi} m_{p}^{\frac{5}{4}} \omega^{\frac{9}{4}}} \left(\frac{8 x^{3}}{\hbar^{\frac{3}{2}}} m_{p}^{\frac{3}{2}} \omega^{\frac{3}{2}} - \frac{12 x}{\sqrt{\hbar}} \sqrt{m_{p}} \sqrt{\omega}\right) - \frac{3 \hbar^{\frac{3}{4}} d e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}}{8 \sqrt[4]{\pi} m_{p}^{\frac{7}{4}} \omega^{\frac{11}{4}}} \left(\frac{4 m_{p}}{\hbar} \omega x^{2} - 2\right) - \frac{\hbar^{\frac{3}{4}} d e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}}{64 \sqrt[4]{\pi} m_{p}^{\frac{7}{4}} \omega^{\frac{11}{4}}} \left(\frac{16 x^{4}}{\hbar^{2}} m_{p}^{2} \omega^{2} - \frac{48 m_{p}}{\hbar} \omega x^{2} + 12\right) + \frac{\sqrt[4]{m_{p}} \sqrt[4]{\omega}}{\sqrt[4]{\hbar} \sqrt[4]{\pi}} e^{- \frac{m_{p} \omega x^{2}}{2 \hbar}}$$
## Perturbation theory
This consists of solving a perturbed system (the solution of the unperturbed one is known), where the interest lies in the contribution of the perturbed part $H'$ to the new total system.
$$ H = H^{0} + H'$$
For non-degenerate systems, the first-order correction to the energy is computed as
$$E_{n}^{(1)} = \int\psi_{n}^{(0)*} H' \psi_{n}^{(0)}d\tau$$
**Task 1: Program this equation given that you know $H^{0}$ and its solutions.**
```python
```
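A possible standalone sketch for Task 1 (one solution among many): it reuses sympy's `qho_1d` module from the imports above and assumes, purely for illustration, the harmonic-oscillator case with an anharmonic perturbation $H' = cx^3 + dx^4$.

```python
# Minimal sketch for Task 1: E_n^(1) = <psi_n^(0)| H' |psi_n^(0)>
# (harmonic oscillator; the perturbation c*x**3 + d*x**4 is an assumed example)
from sympy import symbols, integrate, simplify, oo
from sympy.physics.qho_1d import psi_n

x = symbols('x', real=True)
m_p, omega, c, d = symbols('m_p omega c d', positive=True)

def first_order_energy(n, Hp):
    psi0 = psi_n(n, x, m_p, omega)      # unperturbed eigenfunction (real-valued)
    return simplify(integrate(psi0 * Hp * psi0, (x, -oo, oo)))

first_order_energy(0, c*x**3 + d*x**4)  # should reproduce 3*d*hbar**2/(4*m_p**2*omega**2)
```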
And the correction to the wavefunction, also at first order, is obtained as:
$$ \psi_{n}^{(1)} = \sum_{m\neq n} \frac{\langle\psi_{m}^{(0)} | H' | \psi_{n}^{(0)} \rangle}{E_{n}^{(0)} - E_{m}^{(0)}} \psi_{m}^{(0)}$$
**Task 2: Program this equation given that you know $H^{0}$ and its solutions.**
```python
### Solution
```
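A possible sketch for Task 2, truncating the sum over unperturbed states at `m_max` (again assuming the oscillator with $H' = cx^3 + dx^4$; `m_max` is an arbitrary cutoff, and for this perturbation the matrix elements vanish beyond $m = 4$).

```python
# Minimal sketch for Task 2: first-order correction to the wavefunction,
# truncating the infinite sum at m_max.
from sympy import symbols, integrate, simplify, oo
from sympy.physics.qho_1d import psi_n, E_n

x = symbols('x', real=True)
m_p, omega, c, d = symbols('m_p omega c d', positive=True)

def first_order_psi(n, Hp, m_max=8):
    correction = 0
    for m in range(m_max + 1):
        if m == n:
            continue
        Hmn = integrate(psi_n(m, x, m_p, omega) * Hp * psi_n(n, x, m_p, omega), (x, -oo, oo))
        correction += Hmn / (E_n(n, omega) - E_n(m, omega)) * psi_n(m, x, m_p, omega)
    return simplify(correction)

first_order_psi(0, c*x**3 + d*x**4)
```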
**Task 3: Look up the second-order corrections and program them as well.**
```python
### Solution
```
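For Task 3, the standard second-order energy correction is $E_{n}^{(2)} = \sum_{m\neq n} |\langle\psi_{m}^{(0)}|H'|\psi_{n}^{(0)}\rangle|^{2}/(E_{n}^{(0)} - E_{m}^{(0)})$; a truncated sketch under the same assumed oscillator example:

```python
# Minimal sketch for Task 3: second-order energy correction, truncated at m_max.
from sympy import symbols, integrate, simplify, oo
from sympy.physics.qho_1d import psi_n, E_n

x = symbols('x', real=True)
m_p, omega, c, d = symbols('m_p omega c d', positive=True)

def second_order_energy(n, Hp, m_max=8):
    E2 = 0
    for m in range(m_max + 1):
        if m == n:
            continue
        Hmn = integrate(psi_n(m, x, m_p, omega) * Hp * psi_n(n, x, m_p, omega), (x, -oo, oo))
        E2 += Hmn**2 / (E_n(n, omega) - E_n(m, omega))
    return simplify(E2)

second_order_energy(0, c*x**3 + d*x**4)   # should match the EO2 value computed above
```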
**Task 4: Solve the helium atom using the programs above.**
```python
```
**Task 5: Variational-perturbative method.**
This method lets us accurately estimate $E^{(2)}$ and higher-order perturbative corrections to the energy for the ground state of the system without evaluating infinite sums. See equation 9.38 of the book.
**Solve the helium atom using this method (section 9.4), in whatever way seems best to you.**
**Task 6: Review section 9.7.**
First by hand, and then please try to program that part of the problem, i.e. the Coulomb integral and the exchange integral.
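As a starting point for Task 6, here is a rough sympy sketch of the simpler 1s-1s Coulomb integral in Gaussian-style units (dropping the $4\pi\epsilon_0$ factor); the 1s-2s Coulomb and exchange integrals of section 9.7 follow the same split-at-$r_1$ pattern with the corresponding radial functions. All symbol names are local to the sketch.

```python
# Rough sketch: Coulomb integral J between two 1s electrons with effective charge Z.
from sympy import symbols, integrate, simplify, exp, pi, oo

r1, r2, Z, a, e = symbols('r1 r2 Z a e', positive=True)
rho = lambda r: (Z**3 / (pi * a**3)) * exp(-2*Z*r/a)        # |psi_1s(r)|**2

# Spherically averaged potential of electron 2 felt at radius r1: split the r2 integral at r1.
phi_at_r1 = (integrate(rho(r2) * 4*pi*r2**2 / r1, (r2, 0, r1))
             + integrate(rho(r2) * 4*pi*r2, (r2, r1, oo)))

J = simplify(e**2 * integrate(rho(r1) * 4*pi*r1**2 * phi_at_r1, (r1, 0, oo)))
J   # known closed form: 5*Z*e**2/(8*a)
```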
## Next: Part two, October
Molecular symmetries and Hartree-Fock
```python
```
| 25011e88272b2b2b804aaba29e3e925e2294fdef | 300,559 | ipynb | Jupyter Notebook | Perturbaciones/JuanPabloPerezAbascal/Perturbations-Copy1.ipynb | lazarusA/Density-functional-theory | c74fd44a66f857de570dc50471b24391e3fa901f | [
"MIT"
]
| null | null | null | Perturbaciones/JuanPabloPerezAbascal/Perturbations-Copy1.ipynb | lazarusA/Density-functional-theory | c74fd44a66f857de570dc50471b24391e3fa901f | [
"MIT"
]
| null | null | null | Perturbaciones/JuanPabloPerezAbascal/Perturbations-Copy1.ipynb | lazarusA/Density-functional-theory | c74fd44a66f857de570dc50471b24391e3fa901f | [
"MIT"
]
| null | null | null | 76.01391 | 24,016 | 0.126923 | true | 17,647 | Qwen/Qwen-72B | 1. YES
2. YES | 0.815232 | 0.774583 | 0.631466 | __label__yue_Hant | 0.163392 | 0.305437 |
```python
from scipy.integrate import odeint
from sympy.plotting import plot
from sympy import init_printing
import sympy
from sympy.abc import t
from sympy import Array, Sum, Indexed, IndexedBase, Idx
init_printing()
from sympy.abc import t # x is the independent variable
from sympy import Function, dsolve, Eq, Derivative, sin, cos, symbols, exp, pi, diff, Poly
from sympy.physics.units.systems import SI
from sympy.physics.units import length, mass, acceleration, force
from sympy.physics.units import gravitational_constant as G
from sympy.physics.units.systems.si import dimsys_SI
import sympy.physics.units as units
#import pprint as pps
import sympy
import sympy.physics.units.util as util
from dataclasses import dataclass
from sympy.simplify.radsimp import collect
from sympy.assumptions.refine import refine
from sympy import init_printing
from sympy.simplify.powsimp import powsimp
init_printing()
from scipy.integrate import odeint
from sympy.plotting import plot
from sympy import init_printing
import sympy
from sympy.abc import t
from sympy import Array, Sum, Indexed, IndexedBase, Idx
init_printing()
from sympy.abc import t # x is the independent variable
from sympy import Function, dsolve, Eq, Derivative, sin, cos, symbols, exp, pi, diff, Poly
from sympy import I
```
```python
from sympy import symbols
import sympy.physics.units as u
Q, q, E, omega_0, omega, m, P_abs, f = symbols("Q, q, E, omega_0, omega, m, P_abs, f")
# where m is the reduced mass.
```
### Equation 9 from Yang et al, integrated power absorbed by one virus
```python
P = 0.5 * (Q * (q * E)**2 * omega_0 * omega**2 ) / (Q**2 * m * (omega_0**2 - omega**2)**2 + (omega_0 * omega)**2 * m)
display(Eq(P_abs, P))
```
```python
freq = f * 2 * pi
#remember the angular frequency
```
Now assume we drive on-resonance.
$\omega = \omega_0$ (this isn't exactly right since max power vs max amplitude freq is slightly different, but this is a minor correction for low Q):
```python
P_1 = P.subs([(omega, freq), (omega_0, freq)])
display(Eq(P_abs, P_1))
```
whoops, had the wrong value for reduced mass!
```python
MDa = 1.66054e-21
f_ = 8.2e9 * u.Hz
m_ = 14.5 * MDa * u.kg # reduced mass, 14.5 MDa (the earlier 60 MDa value was wrong)
E_ = 50.0 * u.volts / u.m
Q_ = 1.95 # dimensionless
q_ = 1e7 * 1.602e-19 * u.coulomb
P_2 = P_1.subs([(f, f_), (m, m_), (E, E_), (Q, Q_), (q, q_)])
P_3 = u.convert_to(P_2, u.watts).evalf()
P_3
```
```python
# cuvette_volume = 1e-3*u.liter
cuvette_volume_liters = 1e-6*u.liter # yang et al use, say, 1 microliter
# liu et al use "1 drop", which could be 50 microliters
# N = 1e7 #number of viruses per 1 mL cuvette
cuvette_volume = u.convert_to(cuvette_volume_liters, u.meter**3)
display(cuvette_volume)
# yang use 7.5*10^8 / mL - not clear how they concentrate, I guess the MOI was just high enough
N_m3 = (7.5*(10**14)) / (u.meter**3) # N/cubic m
# N_m3 = 1e9 * 1e6 / (u.meter**3)
N = N_m3 * cuvette_volume
P_4 = (P_3 * N)
P_4
```
```python
print(sympy.pretty(P_4 / (1e-6*u.watts)), "microwatts")
```
3.78198538901213 microwatts
```python
medium_conductivity = 12 * u.S / u.m
cuvette_power = (0.5 * medium_conductivity * (E_)**2 * cuvette_volume)
cuvette_power = u.convert_to(cuvette_power, u.watts).evalf()
cuvette_power
```
```python
SNR = P_4 / cuvette_power
SNR
```
```python
```
```python
```
| 7daca4f9cec80e6f2ad2735ab11fd3dce45fbe2e | 31,153 | ipynb | Jupyter Notebook | documents/SNR.ipynb | 0xDBFB7/covidinator | e9c103e5e62bc128169400998df5f5cd13bd8949 | [
"MIT"
]
| null | null | null | documents/SNR.ipynb | 0xDBFB7/covidinator | e9c103e5e62bc128169400998df5f5cd13bd8949 | [
"MIT"
]
| null | null | null | documents/SNR.ipynb | 0xDBFB7/covidinator | e9c103e5e62bc128169400998df5f5cd13bd8949 | [
"MIT"
]
| null | null | null | 87.263305 | 6,508 | 0.833852 | true | 1,114 | Qwen/Qwen-72B | 1. YES
2. YES | 0.718594 | 0.70253 | 0.504834 | __label__eng_Latn | 0.814117 | 0.011228 |
<h1><center>Report 6</center></h1>
<h3><center>Jiachen Tian</center></h3>
## Objectives achieved this week
- Explore the optical flow algorithm, precisely Lucas Kanade optical flow.
- Implement Lucas Kanade optical flow from scratch and got it work on sample cases.
## Objectives for next week
- Keep adding more functions to Lucas Kanade optical flow.
- Adding a pyramid to the optical flow to ensure precision.
- Parallelize optical flow as much as possible because it is a slow algorithm.
## Results Demo
### Lucas Kanade optical flow is exactly what I am looking for.
#### pros
-Could be fast after proper parallelization.
-Could be Robust by implementing extra algorithm including Gaussian and Laplacian Pyramids.
-Completely automated.
#### cons
-Hard to implement.
#### Theory explanation
-Every image has a gradient. After converting the image to a single channel, the gradient can be represented by the change in intensity values across the image.
-Optical flow assumes the image gradient remains constant after it moves to a new location (as shown below: x -> location x, y -> location y, t -> time, u and v -> displacement components, I() -> image intensity/gradient).
-After step 4, since we have one constraint per pixel but only two unknowns, we can use an overconstrained linear system to solve for them, as shown in step 5.
-Therefore U and V (the displacement components for each pixel) can be solved for.
```latex
%%latex
\begin{align}
(1)I(x, y, t) = I(x + u, y + v, t + 1)//Consistency \\
(2)I(x+n, y+v,t+1) - I(x,y,t) = t_x*u+I_y*v+I_t \\
(3)I_x*u+I_y*v+I_t = 0 \\
(4)\nabla I[u v]^T + I_t = 0 \\
(5)\begin{bmatrix}
\sum{I_xI_x} & \sum{I_xI_y}\\
\sum{I_xI_y} & \sum{I_yI_y}
\end{bmatrix}
\begin{bmatrix}
u \\
v
\end{bmatrix}
=-\begin{bmatrix}
\sum{I_xI_t}\\
\sum{I_yI_t}
\end{bmatrix}
\end{align}
```
\begin{align}
(1)I(x, y, t) = I(x + u, y + v, t + 1)//Consistency \\
(2)I(x+u, y+v,t+1) - I(x,y,t) = I_x*u+I_y*v+I_t \\
(3)I_x*u+I_y*v+I_t = 0 \\
(4)\nabla I[u v]^T + I_t = 0 \\
(5)\begin{bmatrix}
\sum{I_xI_x} & \sum{I_xI_y}\\
\sum{I_xI_y} & \sum{I_yI_y}
\end{bmatrix}
\begin{bmatrix}
u \\
v
\end{bmatrix}
=-\begin{bmatrix}
\sum{I_xI_t}\\
\sum{I_yI_t}
\end{bmatrix}
\end{align}
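To make step 5 concrete, here is a rough numpy sketch (not the project's actual implementation) that solves the 2x2 system for a single window; the function name, the window size, and the assumption of float grayscale frames that fully contain the window are all placeholders.

```python
import numpy as np

def lucas_kanade_window(frame1, frame2, y0, x0, w=7):
    """Estimate (u, v) for the w-by-w window centred at (y0, x0); window assumed inside the image."""
    Iy, Ix = np.gradient(frame1)            # spatial gradients (rows = y, columns = x)
    It = frame2 - frame1                    # temporal gradient
    win = (slice(y0 - w//2, y0 + w//2 + 1), slice(x0 - w//2, x0 + w//2 + 1))
    ix, iy, it = Ix[win].ravel(), Iy[win].ravel(), It[win].ravel()
    A = np.array([[np.sum(ix*ix), np.sum(ix*iy)],
                  [np.sum(ix*iy), np.sum(iy*iy)]])
    b = -np.array([np.sum(ix*it), np.sum(iy*it)])
    return np.linalg.solve(A, b)            # (u, v); A can be singular in textureless regions
```

Aggregating such per-window estimates over the image, and wrapping the solve in a coarse-to-fine pyramid, is what the next steps aim at.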
### Picture Demo(Pay attention to the shift between upper picture and lower picture)
## Conclusion
-Now we are officially getting into the fun stuff. A successful implementation would produce the best outcome possible among all the other algorithms. Failure, on the other hand, would be devastating due to its hard-to-debug nature. One last thing to look into, if the implementation succeeds, is hidden Markov models, which could further improve precision.
| fe4b11068d067ec04e53f3b0193be258fa92285e | 372,838 | ipynb | Jupyter Notebook | doc/Report6.ipynb | Tian99/Robust-eye-gaze-tracker | b4849281b45ab9dbf880eb899ae2aa5a6249ce9b | [
"MIT"
]
| 1 | 2020-10-01T01:33:47.000Z | 2020-10-01T01:33:47.000Z | doc/Report6.ipynb | Tian99/Robust-eye-gaze-tracker | b4849281b45ab9dbf880eb899ae2aa5a6249ce9b | [
"MIT"
]
| 2 | 2020-09-10T22:08:16.000Z | 2020-10-23T00:20:06.000Z | doc/Report6.ipynb | Tian99/Robust-eye-gaze-tracker | b4849281b45ab9dbf880eb899ae2aa5a6249ce9b | [
"MIT"
]
| 1 | 2020-08-20T21:43:36.000Z | 2020-08-20T21:43:36.000Z | 2,232.562874 | 184,392 | 0.960782 | true | 860 | Qwen/Qwen-72B | 1. YES
2. YES | 0.800692 | 0.70253 | 0.56251 | __label__eng_Latn | 0.981244 | 0.145229 |
# Introduction to reproducibility and power issues
## Some Definitions
* $H_0$ : null hypothesis: The hypotheis that the effect we are testing for is null
* $H_A$ : alternative hypothesis : Not $H_0$, so there is some signal
* $T$ : The random variable that takes value "significant" or "not significant"
* $T_S$ : Value of T when test is significant (eg $T = T_S$)
* $T_N$ : Value of T when test is not significant (eg $T = T_N$)
* $\alpha$ : false positive rate - probability to reject $H_0$ when $H_0$ is true (therefore $H_A$ is false)
* $\beta$ : false negative rate - probability to accept $H_0$ when $H_A$ is true (i.e. $H_0$ is false)
power = $1-\beta$
where $\beta$ is the risk of *false negative*
So, to compute power, *we need to know the risk of false negative*, i.e., the risk of not showing a significant effect when there is some signal (the null is false).
```python
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import scipy.stats as sst
```
```python
from sympy import symbols, Eq, solve, simplify, lambdify, init_printing, latex
init_printing(use_latex=True, order='old')
```
```python
from IPython.display import HTML
# Code to make HTML for a probability table
def association_table(assocs, title):
latexed = {'title': title}
for key, value in assocs.items():
latexed[key] = latex(value)
latexed['s_total'] = latex(assocs['t_s'] + assocs['f_s'])
latexed['ns_total'] = latex(assocs['t_ns'] + assocs['f_ns'])
return """<h3>{title}</h3>
<TABLE><TR><TH>$H/T$<TH>$T_S$<TH>$T_N$
<TR><TH>$H_A$<TD>${t_s}$<TD>${t_ns}$
<TR><TH>$H_0$<TD>${f_s}$<TD>${f_ns}$
<TR><TH>Total<TD>${s_total}$<TD>${ns_total}$
</TABLE>""".format(**latexed)
```
```python
from sympy.abc import alpha, beta # get alpha, beta symbolic variables
assoc = dict(t_s = 1 - beta, # H_A true, test significant = true positives
t_ns = beta, # true, not significant = false negatives
f_s = alpha, # false, significant = false positives
f_ns = 1 - alpha) # false, not sigificant = true negatives
HTML(association_table(assoc, 'Not considering prior'))
```
<h3>Not considering prior</h3>
<TABLE><TR><TH>$H/T$<TH>$T_S$<TH>$T_N$
<TR><TH>$H_A$<TD>$1 - \beta$<TD>$\beta$
<TR><TH>$H_0$<TD>$\alpha$<TD>$1 - \alpha$
<TR><TH>Total<TD>$1 + \alpha - \beta$<TD>$1 + \beta - \alpha$
</TABLE>
## How do we compute power ?
### What is the effect ?
$$\hspace{3cm}\mu = \mu_1 - \mu_2$$
### What is the standardized effect ? (eg Cohen's d)
$$\hspace{3cm}d = \frac{\mu_1 - \mu_2}{\sigma} = \frac{\mu}{\sigma}$$
### "Z" : Effect accounting for the sample size
$$\hspace{3cm}Z = \frac{\mu}{\sigma / \sqrt{n}}$$
### Cohen's d value:
```python
# print some cohen values
# %pylab inline
muse = (.05, .1,.2,.3,.4,.5);
sigmas = np.linspace(1.,.5,len(muse))
cohenstr = ["For sigma = %3.2f and m = %3.2f Cohen d = %3.2f" %(sig,mu,coh)
for (sig,mu,coh) in zip(sigmas,muse, np.asarray(muse)/sigmas)]
for s in cohenstr:
print(s)
```
For sigma = 1.00 and m = 0.05 Cohen d = 0.05
For sigma = 0.90 and m = 0.10 Cohen d = 0.11
For sigma = 0.80 and m = 0.20 Cohen d = 0.25
For sigma = 0.70 and m = 0.30 Cohen d = 0.43
For sigma = 0.60 and m = 0.40 Cohen d = 0.67
For sigma = 0.50 and m = 0.50 Cohen d = 1.00
We have to estimate the effect $\mu$, say under some normal noise. Our statistic will be:
$$
t = \frac{\hat{\mu}}{\hat{\sigma_{\mu}}} = \frac{\hat{\mu}}{\hat{{SE}_{\mu}}}
$$
Power is the probability that the observed t is greater than $t_{.05}$, where $t_{.05}$ is computed under the assumption that we are under the null.
So, we compute $t_{.05}$, and want to compute $P(t > t_{.05})$.
To compute this, __we need the distribution of our measured t - therefore we need to know the signal / effect size !__
Let's assume we know this and call it $t_{nc}$, and $F_{nc}$ for the cumulative distribution (more on this in the appendix).
$\mbox{Power} = 1 - \beta = P(t > t_{.05}) = 1 - F_{nc}(t_{.05})$
__This power will depend on 4 parameters :__
$$ \mbox{The non standardized effect : } \mu$$
$$\mbox{The standard deviation of the data : } \sigma$$
$$\mbox{The number of subjects : } n$$
$$\mbox{The type I risk of error : } \alpha$$
And on the distribution of the statistic under the alternative hypothesis. Here, we assume our original data are normal, and the $t = \frac{\hat{\mu}}{\hat{{SE}_{\mu}}}$ statistic follows a non-central t distribution with non-centrality parameter
$$\theta = \mu \sqrt{n}/\sigma$$
and $n-1$ degrees of freedom.
```python
import scipy.stats as sst
import numpy as np
import matplotlib.pyplot as plt
from __future__ import division
```
```python
# plot power as a function of n : define a little function that
# takes n, mu, sigma, alpha, and report n.
# Optionally plot power as a function of nfrom matplotlib.patches import Polygon
from matplotlib.patches import Polygon
def stat_power(n=16, mu=1., sigma=1., alpha=0.05, plot=False, xlen=500):
"""
This function computes the statistical power of an analysis assuming a normal
distribution of the data with a one sample t-test
Parameters:
-----------
n: int,
The number of sample in the experiment
mu: float
The mean of the alternative
sigma: float
The standard deviation of the alternative
plot: bool
Plot something
alpha: float
The risk of error (type I)
xlen: int
Number of points for the display
Returns:
--------
float
The statistical power for this number of sample, mu, sigma, alpha
"""
df = n-1
theta = np.sqrt(n)*mu/sigma
t_alph_null = sst.t.isf(alpha, df)
ncrv = sst.nct(df, theta)
spow = 1 - ncrv.cdf(t_alph_null)
if plot:
# define the domain of the plot
norv = sst.norm(0, 1.)
bornesnc = ncrv.isf([0.001, .999])
bornesn = norv.isf([0.001, .999])
# because the nc t will have higher max borne, and the H0 normal will be on the left
x = np.linspace(np.min(bornesn), np.max(bornesnc), xlen)
t_line = np.zeros_like(x)
# define the line
x_t_line = np.argmin((x-t_alph_null)**2)
y_t_line = np.max(np.hstack((ncrv.pdf(x), norv.pdf(x))))
t_line[x_t_line] = y_t_line
fig, ax = plt.subplots()
plt.plot(x, ncrv.pdf(x), 'g', x, norv.pdf(x), 'b', x, t_line, 'r')
# Make the shaded region
# http://matplotlib.org/xkcd/examples/showcase/integral_demo.html
a = x[x_t_line]; b = np.max(bornesnc);
ix = np.linspace(a,b)
iy = ncrv.pdf(ix)
verts = [(a, 0)] + list(zip(ix, iy)) + [(b, 0)]
poly = Polygon(verts, facecolor='0.9', edgecolor='0.5')
ax.add_patch(poly)
ax.set_xlabel("t-value - H1 centred on " + r"$\theta $" + " = %4.2f; " %theta
+ r"$\mu$" + " = %4.2f" %mu);
ax.set_ylabel("Probability(t)");
ax.set_title('H0 and H1 sampling densities '
+ r'$\beta$' + '= %3.2f' %spow + ' n = %d' %n)
plt.show()
return spow
```
```python
n = 30
mu = .5
sigma = 1.
pwr = stat_power(n, mu, sigma, plot=True, alpha=0.05, xlen=500)
print ("Power = ", pwr, " Z effect (Non centrality parameter) = ", mu*np.sqrt(n)/sigma)
```
```python
n = 12
mu = .5
sigma = 1.
pwr = stat_power(n, mu, sigma, plot=True, alpha=0.05, xlen=500)
print("Power = ", pwr, " Z effect (Non centrality parameter): ", mu*np.sqrt(n)/sigma)
```
### Plot power as a function of the number of subject in the study
```python
def pwr_funcofsubj(muse, nses, alpha=.05, sigma=1):
"""
muse: array of mu
nses: array of number of subjects
alpha: float, type I risk
sigma: float, data sigma
"""
mstr = [ 'd='+str(m) for m in np.asarray(muse)/sigma]
lines=[]
for mu in (muse):
pw = [stat_power(n, mu, sigma, alpha=alpha, plot=False) for n in nses]
(pl,) = plt.plot(nses, pw)
lines.append(pl)
plt.legend( lines, mstr, loc='upper right', shadow=True)
plt.xlabel(" Number of subjects ")
plt.ylabel(" Power ");
return None
mus = (.05, .1,.2,.3,.4,.5, .6);
#nse = range(70, 770, 20)
nse = range(7, 77, 2)
alph = 1.e-3
pwr_funcofsubj(mus, nse, alph)
```
### **** Here - play with n ****
```python
mus = (.05,.1,.2,.3,.4,.5,.6);
nse = range(10, 330, 20)
#nse = range(7, 77, 2)
alph = 0.001
pwr_funcofsubj(mus, nse, alph)
```
### Here - play with $\alpha$
```python
mus = (.05, .1,.2,.3,.4,.5, .6);
nse = range(10, 770, 20)
#nse = range(7, 77, 2)
alph = 0.05/30000
pwr_funcofsubj(mus, nse, alph)
```
### What is the effect size of APOE on the hippocampal volume ?
The authors find a p value of 6.63e-10.
They had 733 subjects.
```python
n01 = sst.norm(0,1.)
z = n01.isf(6.6311e-10)
d = n01.isf(6.6311e-10)/np.sqrt(733)
print("z = %4.3f d = %4.3f " %(z,d))
```
z = 6.064 d = 0.224
```python
```
| 4a252ebeb5a54c583cd4ce2fbea1435da32d75c3 | 262,855 | ipynb | Jupyter Notebook | notebooks/Power-basics.ipynb | jbpoline/module-stats | 539b2972c3e19646cce335a488bd631bbd266435 | [
"CC-BY-4.0"
]
| null | null | null | notebooks/Power-basics.ipynb | jbpoline/module-stats | 539b2972c3e19646cce335a488bd631bbd266435 | [
"CC-BY-4.0"
]
| null | null | null | notebooks/Power-basics.ipynb | jbpoline/module-stats | 539b2972c3e19646cce335a488bd631bbd266435 | [
"CC-BY-4.0"
]
| null | null | null | 370.74048 | 55,128 | 0.922912 | true | 2,950 | Qwen/Qwen-72B | 1. YES
2. YES | 0.835484 | 0.828939 | 0.692565 | __label__eng_Latn | 0.883696 | 0.447391 |
# 01: Introduction to Monte Carlo Methods
A great reference for this material is W. Krauth's excellent book ''Statistical Mechanics: Algorithms and Computations''
```python
import numpy as np
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
try: plt.style.use('./mc_notebook.mplstyle')
except: pass
```
## Estimating π
We will attempt to estimate $\pi$ by considering the unit circle embedded in a square of side 2
```python
plt.figure(figsize=(4,4))
cx = np.linspace(-1,1,1000)
cy = np.sqrt(1-cx**2)
plt.fill_between(cx,-cy,cy, color='gray')
plt.xlabel('x')
plt.ylabel('y')
plt.xticks([]);
plt.yticks([]);
```
We know the area of the circle is $A_\circ = \pi$ while the area of the square is $A_\square = 4$ thus we can compute $\pi$ as the ratio:
\begin{equation}
\pi = 4 \frac{A_\circ}{A_\square}
\end{equation}
This ratio can be estimated using a simple ''children's game'' of pebble toss.
### Direct Sampling
1. Randomly toss a pebble into the square.
2. Record the position where it lands $\vec{r} = (x,y)$
3. Determine whether the stone fell inside the circle: $r < 1$
4. Repeat $N$ times and determine the total number of hits $N_{\rm hits}$
We can then determine:
\begin{equation}
\frac{N_{\rm hits}}{N} = \frac{A_\circ}{A_\square} = \frac{\pi}{4}\, .
\end{equation}
```python
N = 2**14
# sample inside the square
x = np.random.uniform(-1,1,N)
y = np.random.uniform(-1,1,N)
# count the number of hits
N_hits = np.sum(x**2+y**2<1)
print('π ≃ %6.4f' % (4*N_hits/N))
```
π ≃ 3.1648
```python
plt.figure(figsize=(4,4))
cx = np.linspace(-1,1,1000)
cy = np.sqrt(1-cx**2)
plt.plot(x,y, 'k', marker='o', linewidth=0, markersize=2, markeredgewidth=0)
plt.fill_between(cx,-cy,cy, color='gray')
plt.xlabel('x')
plt.ylabel('y')
plt.xticks([]);
plt.yticks([]);
```
### Markov Chain Sampling
Suppose that we are not able to reach the entire area of the square by throwing pebbles. An alternative sampling approach proceeds by:
1. Start in the corner of the square $\vec{r} = (1,1)$
2. Throw a pebble
* if the new pebble is inside the square, move to its position
* if not, drop the pebble at your current location
3. Record the position (new or current) $\vec{r} = (x,y)$
4. Determine if $r < 1$
5. Repeat $N$ times and determine the relative number of hits
The hallmark of this type of sampling is that the $(n+1)^{\rm th}$ configuration only depends on the $n^{\rm th}$ one.
```python
N = 2**12
x = np.ones(N)
y = np.ones(N)
δ = 0.3
for n in range(1,N):
# throw in a box of side δ
Δx = np.random.uniform(-δ,δ)
Δy = np.random.uniform(-δ,δ)
# update only if the new position is inside the square
x[n] = x[n-1] + Δx*(np.abs(x[n-1]+Δx) < 1)
y[n] = y[n-1] + Δy*(np.abs(y[n-1]+Δy) < 1)
# count the number of hits
N_hits = np.sum(x**2+y**2<1)
print('π ≃ %6.4f' % (4*N_hits/N))
```
π ≃ 3.0156
```python
plt.figure(figsize=(4,4))
cx = np.linspace(-1,1,1000)
cy = np.sqrt(1-cx**2)
plt.plot(x,y, 'k', marker='o', linewidth=0.2, markersize=2, markeredgewidth=0)
plt.fill_between(cx,-cy,cy, color='gray')
plt.xlabel('x')
plt.ylabel('y')
plt.xticks([]);
plt.yticks([]);
```
## What are we doing?
We are using different sampling schemes to compute the ratio of two-dimensional integrals:
\begin{align*}
\frac{A_\circ}{A_\square} = \frac{\pi}{4} &=
\frac{\int_{-1}^1 dx \int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}} dy}{\int_{-1}^1 dx \int_{-1}^1 dy}\\
&= \frac{\int_{-1}^1 dx \int_{-1}^1 dy\, \mathcal{O}(x,y)}{\int_{-1}^1 dx \int_{-1}^1 dy}
\end{align*}
where we have defined a function:
\begin{equation}
\mathcal{O}(x,y) =
\begin{cases}
1 &;& x^2 + y^2 < 1 \\
0 &;& \text{otherwise}
\end{cases}.
\end{equation}
This integral can be re-written as an ''average'' of the function $\mathcal{O}(x,y)$ over the square with side $2$ by defining the uniform probability distribution:
\begin{equation}
\pi(x,y) = \frac{1}{{\int_{-1}^1 dx \int_{-1}^1 dy}}
\end{equation}
which yields:
\begin{align*}
\langle \mathcal{O} \rangle &= \int_{-1}^1 dx \int_{-1}^1 dy\, \pi (x,y)\, \mathcal{O}(x,y) \newline
&\simeq \frac{1}{N} \sum_{n=0}^{N-1} \mathcal{O}(x_n,y_n) = \frac{N_{\rm hits}}{N} \simeq \frac{\pi}{4}
\end{align*}
where $x_n, y_n \sim \pi(x,y)$; i.e. in the last line the probability distribution doesn't explicitly appear, **it is sampled**!
Using this idea we can re-write **any** d-dimensional integral as a Monte Carlo sampling problem:
\begin{align*}
I = \frac{\int_{\Omega} d^d x\, f(x_1,\ldots,x_d)}{\int_\Omega d^d x\,} & = \int_\Omega d^d x\, \pi(x_1,\ldots,x_n) \, f(x_1,\ldots,x_n)
= \langle f \rangle \newline
&\simeq \frac{1}{N} \sum_n f(\mathbf{x}_n)
\end{align*}
where $\mathbf{x}_n$ is a vector sampled according to the $d$-dimensional uniform distribution $\pi$.
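As an illustration of this general recipe — a minimal sketch added here, not part of the original notebook — we can estimate the volume of the unit ball in $d$ dimensions by uniform sampling of the cube $[-1,1]^d$, exactly as we did for the circle above:
```python
def mc_volume_unit_ball(d, N=2**16):
    '''Estimate the volume of the d-dimensional unit ball by direct sampling.'''
    x = np.random.uniform(-1, 1, size=(N, d))   # uniform samples in the cube [-1,1]^d
    hits = np.sum(np.sum(x**2, axis=1) < 1)     # number of samples inside the ball
    return 2**d * hits / N                      # cube volume times the hit ratio

for d in (2, 3, 4):
    print('d = %d, estimated volume = %6.4f' % (d, mc_volume_unit_ball(d)))
```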
```python
```
| 5473ff71d4db3ff1b88b42dcdead2b880825e144 | 759,107 | ipynb | Jupyter Notebook | introduction_monte_carlo.ipynb | agdelma/intro_monte_carlo | 7b1cbfdd669ad0adc784e7a344c32e1569d45f1a | [
"MIT"
]
| 10 | 2018-05-31T14:38:56.000Z | 2021-01-28T01:23:17.000Z | introduction_monte_carlo.ipynb | agdelma/intro_monte_carlo | 7b1cbfdd669ad0adc784e7a344c32e1569d45f1a | [
"MIT"
]
| null | null | null | introduction_monte_carlo.ipynb | agdelma/intro_monte_carlo | 7b1cbfdd669ad0adc784e7a344c32e1569d45f1a | [
"MIT"
]
| 6 | 2019-07-21T08:54:54.000Z | 2021-12-07T00:37:31.000Z | 2,350.176471 | 510,484 | 0.96288 | true | 1,677 | Qwen/Qwen-72B | 1. YES
2. YES | 0.944177 | 0.859664 | 0.811675 | __label__eng_Latn | 0.884617 | 0.724125 |
# Playing with single-qubit gates
In this file we will visualize the behaviour of single-qubit rotation gates. I've written out some video-making functions to make it more exciting!
**Important note**: The movies created here are stored on-disk and then read into the notebook player, so this script will be creating new files and directories on your HD.
```python
import numpy as np
from qiskit import QuantumRegister, ClassicalRegister
from qiskit import QuantumCircuit
from qiskit import execute, BasicAer
import qiskit.tools.visualization as qvis
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
import matplotlib.pyplot as plt
# We will run a shell command to knit the image files together
import os
# For in-notebook video display
import io
import base64
from IPython.display import HTML
```
This function creates a movie given a name and a sequence of angles to rotate through. You can ignore this for now, but run it so you have the function for later.
```python
def create_movie(name, rotation_sequence, start_from_plus = False):
# If start_from_plus is set to true, we'll apply a Hadamard before doing anything
# else so that we initialize the qubit in the |+> state
# Create the movie directory; overwrite if one of the same name is already present
os.system(f"rm -r {name}")
os.system(f"mkdir {name}")
# Apply the rotations and save the images
for idx_frame, rotation in enumerate(rotation_sequence):
q = QuantumRegister(1)
circ = QuantumCircuit(q)
if start_from_plus:
circ.h(q)
circ.u3(*rotation, q)
backend = BasicAer.get_backend('statevector_simulator') # the device to run on
result = execute(circ, backend).result()
psi = result.get_statevector(circ)
img = qvis.plot_bloch_multivector(psi, title=f"{name}")
img.savefig(f"{name}/{name}_{str(idx_frame)}.png")
plt.show()
# Create the movie
os.system(f'ffmpeg -r 20 -i {name}/{name}_%01d.png {name}/{name}_animated.webm')
# Return the movie for display - thank you stackoverflow!
video = io.open(f'{os.getcwd()}/{name}/{name}_animated.webm', 'r+b').read()
encoded = base64.b64encode(video)
    # embed the video in the notebook via an HTML5 video tag
    # (reconstructed markup; the original tag was lost when the notebook was converted to text)
    return HTML(data='''<video controls loop>
        <source src="data:video/webm;base64,{0}" type="video/webm" />
    </video>'''.format(encoded.decode('ascii')))
```
In the cell below, we will choose the form of our rotation and set up a series of angles to plot at. We will be using the u3 operator provided by Qiskit. u3 allows us to specify a 3-parameter rotation of the form
\begin{equation}
u3(\theta, \phi, \lambda) = \begin{pmatrix}
\cos(\theta/2) & -e^{i\lambda}\sin(\theta/2) \\
e^{i\phi}\sin(\theta/2) & e^{i\lambda + i\phi} \cos(\theta/2)
\end{pmatrix}
\end{equation}
Here are the parameterizations of u3 for the Pauli rotation gates:
\begin{eqnarray}
R_x (\theta) &=& u3 (\theta, -\pi/2, \pi/2) \\
R_y (\theta) &=& u3 (\theta, 0, 0) \\
R_z (\theta) &=& u3 (0, 0, \theta)
\end{eqnarray}
We can use these three alone to produce most of our universal gate set. For the Hadamard, though, we need something extra because it is not a rotation around one of the Cartesian axes. (It is in fact a rotation around $\hat{x} + \hat{z}$).
\begin{equation}
H = u3(\pi/2, 0, \pi)
\end{equation}
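As a quick sanity check — an added sketch, not part of the original notebook — we can build the $u3$ matrix numerically from the definition above and confirm that $u3(\pi/2, 0, \pi)$ reproduces the Hadamard matrix:
```python
def u3_matrix(theta, phi, lam):
    """Numerical u3 matrix following the definition given above."""
    return np.array([[np.cos(theta/2), -np.exp(1j*lam)*np.sin(theta/2)],
                     [np.exp(1j*phi)*np.sin(theta/2), np.exp(1j*(lam + phi))*np.cos(theta/2)]])

H = np.array([[1, 1], [1, -1]])/np.sqrt(2)
print(np.allclose(u3_matrix(np.pi/2, 0, np.pi), H))   # expected output: True
```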
```python
# An X rotation
rotation_angle = np.pi
name = "x_gate"
num_frames = 40
intermediate_angles = np.linspace(0, rotation_angle, num_frames)
rotation_sequence = [(theta, -np.pi/2, np.pi/2) for theta in intermediate_angles] # The form of the tuple here specifies which gate you perform
create_movie(name, rotation_sequence)
```
```python
# A Y rotation
rotation_angle = np.pi
name = "y_rotation"
num_frames = 40
intermediate_angles = np.linspace(0, rotation_angle, num_frames)
rotation_sequence = [(theta, 0, 0) for theta in intermediate_angles] # The form of the tuple here specifies which gate you perform
create_movie(name, rotation_sequence)
```
```python
# A Z rotation
rotation_angle = np.pi
name = "z_rotation"
num_frames = 40
intermediate_angles = np.linspace(0, rotation_angle, num_frames)
rotation_sequence = [(0, 0, theta) for theta in intermediate_angles] # The form of the tuple here specifies which gate you perform
create_movie(name, rotation_sequence, start_from_plus=True)
```
```python
# S gate
rotation_angle = np.pi/2
name = "s_gate"
num_frames = 40
intermediate_angles = np.linspace(0, rotation_angle, num_frames)
rotation_sequence = [(0, 0, theta) for theta in intermediate_angles] # The form of the tuple here specifies which gate you perform
create_movie(name, rotation_sequence, start_from_plus=True)
```
```python
# T gate
rotation_angle = np.pi/4
name = "t_gate"
num_frames = 40
intermediate_angles = np.linspace(0, rotation_angle, num_frames)
rotation_sequence = [(0, 0, theta) for theta in intermediate_angles] # The form of the tuple here specifies which gate you perform
create_movie(name, rotation_sequence, start_from_plus=True)
```
To animate the Hadamard matrix, which is not an $x$, $y$, or $z$ rotation alone, we will need to use the general form of a unitary that creates a superposition. From the Qiskit documentation, that is $u3(\pi/2, \phi, \lambda)$:
\begin{equation}
u3(\pi/2, \phi, \lambda) = \frac{1}{\sqrt{2}} \begin{pmatrix}
1 & -e^{i\lambda} \\
e^{i\phi} & e^{i(\phi+\lambda)}
\end{pmatrix}
\end{equation}
To get the Hadamard, we will need to play with the second *and* third parameters of the tuple going from 0 to $\pi$.
```python
# Hadamard rotation
name = "hadamard_gate"
num_frames = 40
intermediate_angles_x = np.linspace(0, np.pi/2, num_frames)
intermediate_angles_y = np.linspace(-np.pi/2, 0, num_frames)
intermediate_angles_z = np.linspace(np.pi/2, np.pi, num_frames)
rotation_sequence = [(intermediate_angles_x[i], intermediate_angles_y[i], intermediate_angles_z[i]) for i in range(num_frames)]
create_movie(name, rotation_sequence)
```
```python
```
| 6944cbd96e25b66024522463b48266a8ba882f68 | 9,295 | ipynb | Jupyter Notebook | 01-gate-model-theory/notebooks/Single-Qubit-Gates.ipynb | a-capra/Intro-QC-TRIUMF | 9738e6a49f226367247cf7bc05a00751f7bf2fe5 | [
"MIT"
]
| 27 | 2019-05-09T17:40:20.000Z | 2021-12-15T12:23:17.000Z | 01-gate-model-theory/notebooks/Single-Qubit-Gates.ipynb | a-capra/Intro-QC-TRIUMF | 9738e6a49f226367247cf7bc05a00751f7bf2fe5 | [
"MIT"
]
| 1 | 2021-09-29T07:34:09.000Z | 2021-09-29T21:01:29.000Z | 01-gate-model-theory/notebooks/Single-Qubit-Gates.ipynb | a-capra/Intro-QC-TRIUMF | 9738e6a49f226367247cf7bc05a00751f7bf2fe5 | [
"MIT"
]
| 14 | 2019-05-09T18:45:49.000Z | 2021-12-15T12:23:21.000Z | 35.342205 | 250 | 0.567187 | true | 1,601 | Qwen/Qwen-72B | 1. YES
2. YES | 0.843895 | 0.857768 | 0.723866 | __label__eng_Latn | 0.934652 | 0.520116 |
# Wave propagation in Time Domain
In this section we discuss how to solve the PDE for 2D sonic wave equation in time domain using `esys.escript`. It is assumed that you have worked through the [introduction section on `esys.escript`](escriptBasics.ipynb).
First we will provide the basic theory.
## Sonic Wave Equation in Time domain
The sonic wave equation in time domain can be written as a system of first order differential equations:
\begin{equation}\label{eqWAVEF1}
\dot{\mathbf{V}} = - \nabla p
\end{equation}
\begin{equation}\label{eqWAVEF2}
\frac{1}{c^2} \dot{p} + \nabla^t \mathbf{V} = \delta_{\mathbf{x}_s} \cdot w(t)
\end{equation}
where $p$ is the pressure (or volume change) and $\mathbf{V}$ the associated vector field (its time derivative is the negative pressure gradient). $c$ is the wave propagation speed and the dot denotes the time derivative. $\mathbf{x}_s$ is the location of the source, $\delta_{\mathbf{x}_s}$ is the Dirac $\delta$-function and $w(t)$ the source wavelet as a function of time $t$.
In more detail these equations are given as
\begin{equation}\label{eqWAVEF3}
\dot{\mathbf{V}} = -\begin{bmatrix}
\frac{\partial p}{\partial x_0 }\\
\frac{\partial p}{\partial x_1 }
\end{bmatrix}
\end{equation}
\begin{equation}\label{eqWAVEF4}
\frac{1}{c^2} \dot{p} + \frac{\partial V_0}{\partial x_0 }+
\frac{\partial V_1}{\partial x_1 } = \delta_{\mathbf{x}_s} \cdot w(t)
\end{equation}
For the wavelet we use the [Ricker wavelet](http://subsurfwiki.org/wiki/Ricker_wavelet)
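For reference, the Ricker wavelet with peak frequency $f$ is commonly written (up to a time shift $t_0$) as
\begin{equation}
w(t) = \left(1 - 2\pi^2 f^2 (t-t_0)^2\right)\, e^{-\pi^2 f^2 (t-t_0)^2},
\end{equation}
and the `esys.downunder.Ricker` class used below provides a wavelet of this general shape.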
```python
f = 15 # peak frequency of the Ricker wavelet
dt=0.001 # time resolution
tend=0.75 # end of wavelet
%matplotlib inline
```
We use the implementation in `esys.downunder`, which is a bit more suitable for us than the [scipy version](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.signal.ricker.html):
```python
from esys.escript import *
from esys.downunder import Ricker
wavelet=Ricker(f)
```
For plotting we evaluate the Ricker over the interval 0 to `tend`:
```python
import numpy as np
times=np.arange(0,tend, dt)
signal=wavelet.getValue(times)
```
```python
import matplotlib.pyplot as plt
plt.figure()# figsize=(7,7))
plt.plot(times, signal)
plt.xlabel('time')
plt.ylabel('amplitude')
plt.title(f'Ricker peak frequency ={f} Hz')
```
We also want to take a look at the power spectrum. We get the Fourier coefficients and frequencies using `np.fft`.
Notice that we are using the real-valued fast Fourier transform (FFT) as the values of `signal` are real.
As a consequence only the spectrum for positive frequencies needs to be inspected:
```python
fourier_full = np.fft.rfft(signal)
freq_all = np.fft.rfftfreq(signal.size, d=dt)
```
And we plot the power spectrum over frequency:
```python
plt.figure()# figsize=(7,7))
plt.plot(freq_all, abs(fourier_full))
plt.xlabel('frequency [Hz]')
plt.ylabel('power')
plt.show()
```
To assess the requirements in terms of grid sizes for solving the wave equation we need to have an idea of the
largest frequency making contributions to the signal of the wavelet. To do this we first
need to collect a list of all frequencies for which the spectrum lies above a threshold. Here we
use $0.001$ times the maximum spectrum value to get an index of all frequencies that have
a significant contribution to the signal:
```python
freq_index=np.where( abs(fourier_full) > max(abs(fourier_full))*0.001 )[0]
```
Then we grab the relevant frequencies from the array of frequencies and get the max:
```python
fmax=max(freq_all[freq_index])
print(f"maximum freqency = {fmax} Hz")
```
maximum freqency = 46.666666666666664 Hz
## Domain set up
We consider a single reflector set up. The reflector is located at a depth of $500m$
where the top layer has propagation speed of $c_{top}=1500m/s$
and the bottom layer has propagation speed of $c_{bottom}=3000m/s$.
Use a frequency of $f=5 Hz$. The domain has depth of $1km$ and width $3km$
with $300 \times 100$ grid.
The source is located at the surface at an offset of $1500m$ from the boundary.
This time we use an `esys.speckley` domain, which is more suitable for wave problems. It actually uses higher-order
polynomial approximations:
```python
nx=100 # elements in horizontal direction
ny=60 # elements in vertical direction
dx=15 # element size
order=6 # element order
```
```python
Width=nx*dx
Depth=ny*dx
print(f"Domain dimension is {Width} x {Depth}")
```
Domain dimension is 1500 x 900
Again we use a single source at the center of the top edge of the domain:
```python
from esys.speckley import Rectangle
sources=[(Width/2, Depth) ]
sourcetags=["source66" ]
domain = Rectangle(order, n0=nx, n1=ny, l0=Width, l1=Depth, diracPoints=sources, diracTags=sourcetags)
```
Velocity configuration is
```python
c_top=1800 # m/s
c_bottom=3000 # m/s
d0=Depth-200. # m top layer is 200m thick
```
Set up of the velocity $c$:
```python
X=ReducedFunction(domain).getX()
m=wherePositive(X[1]-d0)
c=c_top*m+c_bottom*(1-m)
c=interpolate(c, Function(domain))
```
We solve the wave equation using the Heun scheme, a 2nd-order Runge-Kutta method with time step size $h$. Starting from $U^{(0)}$:
\begin{equation}\label{eq:H1}
\hat{U}^{(n+1)} = U^{(n)} + h \cdot F(U^{(n)}, t^{(n)})
\end{equation}
\begin{equation}\label{eq:H2}
U^{(n+1)} = U^{(n)} + \frac{h}{2} \left( F(U^{(n)}, t^{(n)}) + F(\hat{U}^{(n+1)}, t^{(n+1)}) \right)
\end{equation}
In our case:
\begin{equation}\label{eqWAVEF8}
U=\begin{bmatrix}
\mathbf{V}\\
p
\end{bmatrix}
\end{equation}
and
\begin{equation}\label{eqWAVEF9}
F=\begin{bmatrix}
-\nabla p\\
dp
\end{bmatrix}
\end{equation}
with
\begin{equation}\label{eqWAVEF10}
\frac{1}{c^2} dp = - \nabla^t \mathbf{V} + \delta_{\mathbf{x}_s} \cdot w(t)
\end{equation}
for which we use the `LinearSinglePDE` with $D=\frac{1}{c^2}$, $X=-\mathbf{V}$ and
$y_{dirac}=\delta_{\mathbf{x}_s} \cdot w(t)$
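As a point of reference (this is a paraphrase of the general `esys.escript` PDE template from its documentation, not part of the original text), `LinearSinglePDE` represents an equation of roughly the form
\begin{equation}
-\left(A_{jl} u_{,l} + B_j u\right)_{,j} + C_l u_{,l} + D\, u = -X_{j,j} + Y
\end{equation}
plus boundary and point-source terms ($y$, $y_{dirac}$). Choosing $u=dp$, $D=\frac{1}{c^2}$, $X=-\mathbf{V}$, $y_{dirac}=\delta_{\mathbf{x}_s} w(t)$ and setting all other coefficients to zero recovers (the weak form of) the equation for $dp$ above.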
Define the PDE:
```python
from esys.escript.linearPDEs import LinearSinglePDE, SolverOptions
pde = LinearSinglePDE(domain)
pde.getSolverOptions().setSolverMethod(SolverOptions.LUMPING)
```
Setting $D$ is the PDE:
```python
pde.setValue(D=1./(c**2) )
```
And define the Dirac $\delta$-function for the source `source66`, which we will later multiply with the wavelet value $w(t)$ to define $y_{dirac}$:
```python
input_loc=Scalar(0., DiracDeltaFunctions(domain))
input_loc.setTaggedValue("source66", 1.)
```
The node spacing needs to be smaller than the wave length $\lambda$ in order to be able to resolve
the wave and its gradient accurately (note $2\pi f \lambda=c$):
```python
dx_max=inf(c/(2*np.pi*fmax))
dx_used=dx/order
print(f"maximum grid step size = {dx_max}. found = {dx_used}")
```
maximum grid step size = 6.13883351925882. found = 2.5
## Time integrations
Also, the changes between two time steps should not be too dramatic, in the sense that the wave front should not
travel more than a node distance within a time step. This leads to the condition $h<h_{max}$:
```python
h_max=inf(dx/c/order)
print(f"maximum time step size = {h_max}.")
```
maximum time step size = 0.0008333333333333334.
**Note**: if the time step is too large the solution can become unstable. The exact maximum size
depends on additional factors such as the time integration scheme and the spatial discretization scheme.
In some cases an exact correction factor is known, but here we use $h_{max}$ as guidance to find an appropriate step size.
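A minimal illustration of such a choice (added here as a sketch; the safety factor of $0.5$ is an arbitrary conservative margin, and we round so that the output interval `dt` is an integer multiple of the step size):
```python
safety = 0.5                               # conservative fraction of the stability estimate
n_sub = int(np.ceil(dt/(safety*h_max)))    # sub-steps per output interval dt
h_trial = dt/n_sub                         # candidate step size
print(f"candidate step size h = {h_trial} with {n_sub} sub-steps per dt")
```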
This is a Heun step:
```python
def stepHeun(t, p, v, n, h):
for k in range(n):
dv1=grad(p)
a=wavelet.getValue(t+k*h)
#print(t+k*h, a)
pde.setValue(X=-v, y_dirac=input_loc*a)
dp1=pde.getSolution()
vp=v+dv1*h
pp=p+dp1*h
dv2=grad(pp)
a=wavelet.getValue(t+k*h+h)
pde.setValue(X=-vp, y_dirac=input_loc*a)
dp2=pde.getSolution()
v=v+(dv1+dv2)*h/2
p=p+(dp1+dp2)*h/2
return t+n*h, p, v
```
We want to track the solution at three locations at the surface:
```python
from esys.escript.pdetools import Locator
loc0=Locator(Solution(domain), x=( Width/2, Depth) )
loc1=Locator(Solution(domain), x=( Width/2*0.9, Depth) )
loc2=Locator(Solution(domain), x=( Width/2*0.5, Depth) )
```
And also collect over a set of virtual geophones to create a synthetic seismic survey:
```python
loc=Locator(Solution(domain), x=[ ( Width/2+z, Depth) for z in np.linspace(-500, 500, 80) ] )
```
We write a function to progress the time integration from some time `t` to some
time `tend` and collect the values at the geophones over time in lists:
```python
Ts=[]
Trace0=[]
Trace1=[]
Trace2=[]
Traces=[]
h=0.00004
n=int(dt/h)
print("n= ",n)
def progress(p, v, t, tend):
while t < tend:
t, p, v = stepHeun(t, p, v, n, h)
Ts.append(t)
Trace0.append(loc0(p))
Trace1.append(loc1(p))
Trace2.append(loc2(p))
Traces.append(loc(p))
#print(Ts[-1], Trace0[-1], Trace1[-1], Trace2[-1])
return t, p, v
```
n= 25
We set the initial values and progress to $20\%$ of the total end time. This will take some time:
```python
# start time integration
v=Vector(0., Function(domain))
p=Scalar(0., Solution(domain))
t=0
t, p, v = progress(p, v, t, tend*0.2)
```
Lets plot the solution at that time:
```python
p_np=convertToNumpy(p)
x_np=convertToNumpy(p.getFunctionSpace().getX())
plt.figure(figsize=(15,5))
plt.tricontourf(x_np[0], x_np[1], p_np[0], 20)
plt.xlabel('x0 [m]')
plt.ylabel('x1 [m]')
plt.title(f"Pressure at time t={t*1000.} msec")
plt.colorbar()
plt.show()
```
And now progress to $40\%$ end time:
```python
t, p, v = progress(p, v, t, tend*0.4)
```
Lets plot the solution at that time:
```python
p_np=convertToNumpy(p)
x_np=convertToNumpy(p.getFunctionSpace().getX())
plt.figure(figsize=(15,5))
plt.tricontourf(x_np[0], x_np[1], p_np[0], 20)
plt.xlabel('x0 [m]')
plt.ylabel('x1 [m]')
plt.title(f"Pressure at time t={t*1000.} msec")
plt.colorbar()
plt.show()
```
And now progress to 60%
end time:
```python
t, p, v = progress(p, v, t, tend*0.6)
```
Lets plot the solution at that time:
```python
p_np=convertToNumpy(p)
x_np=convertToNumpy(p.getFunctionSpace().getX())
plt.figure(figsize=(15,5))
plt.tricontourf(x_np[0], x_np[1], p_np[0], 20)
plt.xlabel('x0 [m]')
plt.ylabel('x1 [m]')
plt.title(f"Pressure at time t={t*1000.} msec")
plt.colorbar()
plt.show()
```
And now progress to 80% end time:
```python
t, p, v = progress(p, v, t, tend*0.8)
```
```python
p_np=convertToNumpy(p)
x_np=convertToNumpy(p.getFunctionSpace().getX())
plt.figure(figsize=(15,5))
plt.tricontourf(x_np[0], x_np[1], p_np[0], 20)
plt.xlabel('x0 [m]')
plt.ylabel('x1 [m]')
plt.title(f"Pressure at time t={t*1000.} msec")
plt.colorbar()
plt.show()
```
## Seismograms
Plot the traces at the monitoring points:
```python
plt.figure(figsize=(10,5))
plt.plot(Ts, Trace0, label="%s"%(loc0.getX()[0]))
plt.plot(Ts, Trace1, label="%s"%(loc1.getX()[0]))
plt.plot(Ts, Trace2, label="%s"%(loc2.getX()[0]))
plt.xlabel('time [s]')
plt.ylabel('amplitude')
plt.title("Amplitude over time at three offsets")
plt.legend()
```
Finally we can create the synthetic recordings as `numpy` array and plot them:
```python
traces=np.array(Traces)
```
```python
a=abs(traces).max()
plt.figure(figsize=(10,5))
plt.imshow(traces, extent=(0, 80*10, len(Traces),0), cmap='seismic', vmin=-a, vmax=a )
plt.colorbar()
```
| 9ffd4ee8a09237723454460884bc6761ff7c646c | 276,375 | ipynb | Jupyter Notebook | B_GeophyicalModeling/WavesTime.ipynb | uqzzhao/Programming-Geophysics-in-Python | e6e8299116b4698892921b78927b71fc47ee018a | [
"Apache-2.0"
]
| 20 | 2019-11-06T09:08:54.000Z | 2021-12-03T08:37:47.000Z | B_GeophyicalModeling/WavesTime.ipynb | uqzzhao/Programming-Geophysics-in-Python | e6e8299116b4698892921b78927b71fc47ee018a | [
"Apache-2.0"
]
| null | null | null | B_GeophyicalModeling/WavesTime.ipynb | uqzzhao/Programming-Geophysics-in-Python | e6e8299116b4698892921b78927b71fc47ee018a | [
"Apache-2.0"
]
| 3 | 2020-11-23T14:16:06.000Z | 2022-03-31T14:45:46.000Z | 249.435921 | 47,320 | 0.927019 | true | 3,532 | Qwen/Qwen-72B | 1. YES
2. YES | 0.79053 | 0.835484 | 0.660475 | __label__eng_Latn | 0.923835 | 0.372836 |
# Penalised Regression
## YouTube Videos
1. **Scikit Learn Linear Regression:** https://www.youtube.com/watch?v=EvnpoUTXA0E
2. **Scikit Learn Linear Penalise Regression:** https://www.youtube.com/watch?v=RhsEAyDBkTQ
## Introduction
We often do not want the coefficients/weights to be too large. Hence we augment the loss function with a penalty function to discourage large values of $w$.
\begin{align}
\mathcal{L} & = \sum_{i=1}^N (y_i-f(x_i|w,b))^2 + \alpha \sum_{j=1}^D w_j^2 + \beta \sum_{j=1}^D |w_j|
\end{align}
where, $f(x_i|w,b) = wx_i+b$. The values of $\alpha$ and $\beta$ are positive (or zero), with higher values enforcing the weights to be closer to zero.
## Lesson Structure
1. The task of this lesson is to infer the weights given the data (observations, $y$ and inputs $x$).
2. We will be using the module `sklearn.linear_model`.
```python
import numpy as np
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
%matplotlib inline
# In order to reproduce the exact same number we need to set the seed for random number generators:
np.random.seed(1)
```
A normally distributed random looks as follows:
```python
e = np.random.randn(10000,1)
plt.hist(e,100) #histogram with 100 bins
plt.ylabel('y')
plt.xlabel('x')
plt.title('Histogram of Normally Distributed Numbers')
plt.show()
```
Generate observations $y$ given feature (design) matrix $X$ according to:
$$
y = Xw + \xi\\
\xi_i \sim \mathcal{N}(0,\sigma^2)
$$
In this particular case, $w$ is a 100 dimensional vector where 90% of the numbers are zero. i.e. only 10 of the numbers are non-zero.
```python
# Generate the data
N = 40 # Number of observations
D = 100 # Dimensionality
x = np.random.randn(N,D) # get random observations of x
w_true = np.zeros((D,1)) # create a weight vector of zeros
idx = np.random.choice(100,10,replace=False) # randomly choose 10 of those weights
w_true[idx] = np.random.randn(10,1) # populate then with 10 random weights
e = np.random.randn(N,1) # have a noise vector
y = np.matmul(x,w_true) + e # generate observations
# create validation set:
N_test = 50
x_test = np.random.randn(50,D)
y_test_true = np.matmul(x_test,w_true)
```
```python
model = LinearRegression()
model.fit(x,y)
# plot the true vs estimated coefficients
plt.plot(np.arange(100),np.squeeze(model.coef_))
plt.plot(np.arange(100),w_true)
plt.legend(["Estimated","True"])
plt.title('Estimated Weights')
plt.show()
```
One way of testing how good your model is, is to look at metrics. In the case of regression, Mean Squared Error (MSE) is a common metric, which is defined as:
$$ \frac{1}{N}\sum_{i=1}^N \xi_i^2$$ where, $\xi_i = y_i-f(x_i|w,b)$. Furthermore it is best to look at the MSE on a validation set, rather than on the training dataset that we used to train the model.
```python
y_est = model.predict(x_test)
mse = np.mean(np.square(y_test_true-y_est))
print(mse)
```
6.79499954951
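For reference, scikit-learn also ships a helper that computes the same quantity; a one-line equivalent using the arrays defined above:
```python
from sklearn.metrics import mean_squared_error
print(mean_squared_error(y_test_true, y_est))
```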
Ridge regression penalises the weights via the $\alpha$ parameter in the loss function defined at the top: the larger **the square of the weights**, the higher the loss.
```python
from sklearn.linear_model import Ridge
model = Ridge(alpha=5.0,fit_intercept = False)
model.fit(x,y)
# plot the true vs estimated coefficients
plt.plot(np.arange(100),np.squeeze(model.coef_))
plt.plot(np.arange(100),w_true)
plt.legend(["Estimated","True"])
plt.show()
```
This model is slightly better than without any penalty on the weights.
```python
y_est = model.predict(x_test)
mse = np.mean(np.square(y_test_true-y_est))
print(mse)
```
6.42288072501
Lasso is a model that encourages weights to go to zero exactly, as opposed to Ridge regression which encourages small weights.
```python
from sklearn.linear_model import Lasso
model = Lasso(alpha=0.1,fit_intercept = False)
model.fit(x,y)
# plot the true vs estimated coefficients
plt.plot(np.arange(100),np.squeeze(model.coef_))
plt.plot(np.arange(100),w_true)
plt.legend(["Estimated","True"])
plt.title('Lasso regression weight inference')
plt.show()
```
The MSE is significantly better than both the above models.
```python
y_est = model.predict(x_test)[:,None]
mse = np.mean(np.square(y_test_true-y_est))
print(mse)
```
2.30600134084
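The squared and absolute penalties can also be combined in a single model; in scikit-learn this is `ElasticNet`, which corresponds to using both $\alpha$ and $\beta$ in the loss at the top of the notebook (with a different parameterisation: an overall strength `alpha` and a mixing ratio `l1_ratio`). A minimal sketch with illustrative parameter values:
```python
from sklearn.linear_model import ElasticNet
# sklearn minimises 1/(2N)*||y - Xw||^2 + alpha*l1_ratio*||w||_1
#                   + 0.5*alpha*(1 - l1_ratio)*||w||_2^2
model = ElasticNet(alpha=0.1, l1_ratio=0.9, fit_intercept=False)
model.fit(x, y.ravel())
y_est = model.predict(x_test)
print(np.mean(np.square(y_test_true.ravel() - y_est)))
```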
Automatic Relevance Determination (ARD) regression is similar to Lasso in that it encourages zero weights. However, the advantage is that you do not need to set a penalisation parameter ($\alpha$ or $\beta$) in this model.
```python
from sklearn.linear_model import ARDRegression
model = ARDRegression(fit_intercept = False)
model.fit(x,y)
# plot the true vs estimated coefficients
plt.plot(np.arange(100),np.squeeze(model.coef_))
plt.plot(np.arange(100),w_true)
plt.legend(["Estimated","True"])
plt.show()
```
```python
y_est = model.predict(x_test)[:,None]
mse = np.mean(np.square(y_test_true-y_est))
print(mse)
```
2.87016741296
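Because ARD drives many of the weights to (numerically) zero on its own, it is worth checking how many coefficients survive — a small sketch using an arbitrary threshold of $10^{-3}$:
```python
n_active = np.sum(np.abs(model.coef_) > 1e-3)   # arbitrary threshold for a "non-zero" weight
print("non-negligible weights:", n_active, "out of", model.coef_.size)
```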
### Note:
Rerun the above after setting N=400.
## Inverse Problems
The following section is optional and you may skip it. It is not necessary for understanding Deep Learning.
Inverse problems are where, given the outputs, you are required to infer the inputs. A typical example is X-rays: given the X-ray sensor readings, the algorithm needs to build an image of an individual's bone structure.
See [here](http://scikit-learn.org/stable/auto_examples/applications/plot_tomography_l1_reconstruction.html#sphx-glr-auto-examples-applications-plot-tomography-l1-reconstruction-py) for an example of l1 reguralisation applied to a compressed sensing problem (has a resemblance to the x-ray problem).
```python
```
| fb5bc37188a96c164c15e44283f685de29d93f14 | 129,305 | ipynb | Jupyter Notebook | deepschool.io/Lesson 01 - PenalisedRegression - Solutions.ipynb | gopala-kr/ds-notebooks | bc35430ecdd851f2ceab8f2437eec4d77cb59423 | [
"MIT"
]
| 4 | 2018-03-28T09:08:02.000Z | 2021-11-17T11:15:38.000Z | Lesson 01 - PenalisedRegression - Solutions.ipynb | OWLYone/deepschool.io | ae6718fc14f3ac499697c97edc97a66dad9d9a6c | [
"Apache-2.0"
]
| null | null | null | Lesson 01 - PenalisedRegression - Solutions.ipynb | OWLYone/deepschool.io | ae6718fc14f3ac499697c97edc97a66dad9d9a6c | [
"Apache-2.0"
]
| 5 | 2019-03-24T19:29:08.000Z | 2019-07-24T13:38:40.000Z | 313.087167 | 31,014 | 0.922377 | true | 1,483 | Qwen/Qwen-72B | 1. YES
2. YES | 0.938124 | 0.874077 | 0.819993 | __label__eng_Latn | 0.949821 | 0.743451 |