```python
import numpy as np
import numba
import matplotlib.pyplot as plt
import sympy as sym
plt.style.use('presentation')
%matplotlib notebook
def d2np(d):
names = []
numbers = ()
dtypes = []
for item in d:
        names += [item]
if type(d[item]) == float:
numbers += (d[item],)
dtypes += [(item,float)]
if type(d[item]) == int:
numbers += (d[item],)
dtypes += [(item,int)]
if type(d[item]) == np.ndarray:
numbers += (d[item],)
dtypes += [(item,np.float64,d[item].shape)]
return np.array([numbers],dtype=dtypes)
```
```python
psi_ds,psi_qs,psi_dr,psi_qr = sym.symbols('psi_ds,psi_qs,psi_dr,psi_qr')
i_ds,i_qs,i_dr,i_qr = sym.symbols('i_ds,i_qs,i_dr,i_qr')
di_ds,di_qs,di_dr,di_qr = sym.symbols('di_ds,di_qs,di_dr,di_qr')
L_s,L_r,L_m = sym.symbols('L_s,L_r,L_m')
R_s,R_r = sym.symbols('R_s,R_r')
omega_s,omega_r,sigma = sym.symbols('omega_s,omega_r,sigma')
v_ds,v_qs,v_dr,v_qr = sym.symbols('v_ds,v_qs,v_dr,v_qr')
eq_ds = (L_s+L_m)*i_ds + L_m*i_dr - psi_ds
eq_qs = (L_s+L_m)*i_qs + L_m*i_qr - psi_qs
eq_dr = (L_r+L_m)*i_dr + L_m*i_ds - psi_dr
eq_qr = (L_r+L_m)*i_qr + L_m*i_qs - psi_qr
dpsi_ds = v_ds - R_s*i_ds + omega_s*psi_qs
dpsi_qs = v_qs - R_s*i_qs - omega_s*psi_ds
dpsi_dr = v_dr - R_r*i_dr + sigma*omega_s*psi_qr
dpsi_qr = v_qr - R_r*i_qr - sigma*omega_s*psi_dr
s = sym.solve([ eq_ds, eq_qs, eq_dr, eq_qr,
dpsi_ds,dpsi_qs,dpsi_dr,dpsi_qr],
[ i_ds, i_qs, i_dr, i_qr,
psi_ds, psi_qs, i_dr, psi_qr])
s = sym.solve([dpsi_ds,dpsi_qs,dpsi_dr,dpsi_qr],
[ i_ds, i_qs, i_dr, i_qr,
psi_ds, psi_qs, i_dr, psi_qr])
for item in s:
print(item, '=', s[item])
```
i_qr = (-omega_s*psi_dr*sigma + v_qr)/R_r
i_ds = (omega_s*psi_qs + v_ds)/R_s
i_dr = (omega_s*psi_qr*sigma + v_dr)/R_r
i_qs = (-omega_s*psi_ds + v_qs)/R_s
```python
# Reference-current relations (notes, not executable code):
# (L_s+L_m)*i_ds_ref + L_m*i_dr = -psi_ds
# (L_s+L_m)*i_qs_ref + L_m*i_qr = -psi_qs
```
```python
# [1] T. Demiray, F. Milano, and G. Andersson,
# “Dynamic phasor modeling of the doubly-fed induction generator under unbalanced conditions,” 2007 IEEE Lausanne POWERTECH, Proc., no. 2, pp. 1049–1054, 2007.
@numba.jit(nopython=True, cache=True)
def dfim(struct,i):
x_idx = struct[i]['dfim_idx']
L_m = struct[i]['L_m']
L_r = struct[i]['L_r']
L_s = struct[i]['L_s']
R_r = struct[i]['R_r']
R_s = struct[i]['R_s']
Dt = struct[i]['Dt']
psi_ds = struct[i]['psi_ds']
psi_qs = struct[i]['psi_qs']
psi_dr = struct[i]['psi_dr']
psi_qr = struct[i]['psi_qr']
i_ds = struct[i]['i_ds']
    i_qs = struct[i]['i_qs']
    i_dr = struct[i]['i_dr']
    i_qr = struct[i]['i_qr']
v_ds = struct[i]['v_ds']
v_qs = struct[i]['v_qs']
v_dr = struct[i]['v_dr']
v_qr = struct[i]['v_qr']
omega_r = struct[i]['omega_r']
omega_s = struct[i]['omega_s']
sigma = (omega_s - omega_r)/omega_s
if np.abs(sigma)<0.01:
psi_qr = 0.0
psi_dr = 0.0
i_qr = ( L_m*psi_qr - L_m*psi_qs + L_s*psi_qr)/(L_m*L_r + L_m*L_s + L_r*L_s)
i_qs = (-L_m*psi_qr + L_m*psi_qs + L_r*psi_qs)/(L_m*L_r + L_m*L_s + L_r*L_s)
i_dr = ( L_m*psi_dr - L_m*psi_ds + L_s*psi_dr)/(L_m*L_r + L_m*L_s + L_r*L_s)
i_ds = (-L_m*psi_dr + L_m*psi_ds + L_r*psi_ds)/(L_m*L_r + L_m*L_s + L_r*L_s)
psi_qs = (R_s*i_ds - v_ds)/omega_s
psi_ds = (v_qs - R_s*i_qs)/omega_s
else:
i_qr = ( L_m*psi_qr - L_m*psi_qs + L_s*psi_qr)/(L_m*L_r + L_m*L_s + L_r*L_s)
i_qs = (-L_m*psi_qr + L_m*psi_qs + L_r*psi_qs)/(L_m*L_r + L_m*L_s + L_r*L_s)
i_dr = ( L_m*psi_dr - L_m*psi_ds + L_s*psi_dr)/(L_m*L_r + L_m*L_s + L_r*L_s)
i_ds = (-L_m*psi_dr + L_m*psi_ds + L_r*psi_ds)/(L_m*L_r + L_m*L_s + L_r*L_s)
psi_qs = (R_s*i_ds - v_ds)/omega_s
psi_ds = (v_qs - R_s*i_qs)/omega_s
psi_qr = (R_r*i_dr - v_dr)/(sigma*omega_s)
psi_dr = (v_qr - R_r*i_qr)/(sigma*omega_s)
tau_e = psi_qr*i_dr - psi_dr*i_qr
struct[i]['i_ds'] = i_ds
struct[i]['i_qs'] = i_qs
struct[i]['i_dr'] = i_dr
struct[i]['i_qr'] = i_qr
struct[i]['psi_ds'] = psi_ds
struct[i]['psi_qs'] = psi_qs
struct[i]['psi_dr'] = psi_dr
struct[i]['psi_qr'] = psi_qr
struct[i]['tau_e'] = tau_e
struct[i]['sigma'] = sigma
return tau_e
@numba.jit(nopython=True, cache=True)
def wecs_mech_1(struct,i):
x_idx = struct[i]['mech_idx']
omega_t = struct[i]['x'][x_idx,0] # rad/s
tau_t = struct[i]['tau_t']
tau_r = struct[i]['tau_r']
J_t = struct[i]['J_t']
N_tr = struct[i]['N_tr']
Dt = struct[i]['Dt']
domega_t = 1.0/J_t*(tau_t - N_tr*tau_r)
omega_r = N_tr*omega_t
struct[i]['f'][x_idx,0] = domega_t
struct[i]['omega_r'] = omega_r
struct[i]['omega_t'] = omega_t
return omega_t
```
```python
Omega_b = 2.0*np.pi*50.0
S_b = 1.0e6
U_b = 690.0
Z_b = U_b**2/S_b
#nu_w =np.linspace(0.1,15,N)
H = 0.001
# H = 0.5*J*Omega_t_n**2/S_b
S_b = 2.0e6
Omega_t_n = 1.5
J_t = 2*H*S_b/Omega_t_n**2
#Z_b = 1.0
#Omega_b = 1.0
d =dict(R_r = 0.01*Z_b,
R_s = 0.01*Z_b,
L_r = 0.08*Z_b/Omega_b,
L_s = 0.1*Z_b/Omega_b,
L_m = 3.0*Z_b/Omega_b,
psi_ds = 0.0,
psi_qs = 0.0,
psi_dr = 0.0,
psi_qr = 0.0,
i_ds = 0.0,
i_qs = 0.0,
i_dr = 0.0,
i_qr = 0.0,
v_ds = 0.0,
v_qs = 0.0,
v_dr = 0.0,
v_qr = 0.0,
omega_r = Omega_b*0.99,
omega_s = Omega_b,
sigma = 0.0,
tau_e = 0.0,
x = np.zeros((1,1)),
f = np.zeros((1,1)),
Dt = 0.0,
J_t = J_t,
omega_t = 0.0,
tau_t = 0.0,
tau_r = 0.0,
N_tr = 20.0,
dfim_idx = 0,
mech_idx = 0,
)
struct = d2np(d)
dfim(struct,0)
wecs_mech_1(struct,0)
```
0.0
```python
struct = d2np(d)
struct['v_ds'] = 325.0
struct['v_qs'] = 0.0
struct['omega_r'] = Omega_b*0.0
Dt = 1.0e-3
struct[0]['x']= np.zeros((1,1))
Tau_e = []
Omega_r = []
T =[]
N_steps = 10000
X = np.zeros((N_steps,1))
def f_eval(struct):
#dfim(struct,0)
wecs_mech_1(struct,0)
return struct[0]['f']
for it in range(N_steps):
t = Dt*it
dfim(struct,0)
f1 = np.copy(f_eval(struct))
x1 = np.copy(struct[0]['x'])
struct[0]['x'] = np.copy(x1 + Dt*f1)
dfim(struct,0)
struct[0]['x'] = np.copy(x1 + 0.5*Dt*(f1 + f_eval(struct)))
struct[0]['tau_r'] = -struct[0]['tau_e']
Tau_e += [float(struct['tau_e'])]
Omega_r += [float(struct['omega_r'])]
T +=[t]
```
```python
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(8, 5), sharex = True)
axes.plot(T,Omega_r)
fig.savefig('dfim_tau_e.svg', bbox_inches='tight')
```
<IPython.core.display.Javascript object>
```python
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(8, 5), sharex = True)
axes.plot(T,Tau_e)
fig.savefig('dfim_tau_e.svg', bbox_inches='tight')
```
<IPython.core.display.Javascript object>
```python
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(8, 5), sharex = True)
axes.plot(T,X[:,0])
```
<IPython.core.display.Javascript object>
[<matplotlib.lines.Line2D at 0x7f94a3d74ef0>]
```python
X[:,4]
```
array([ 0.00000000e+00, 0.00000000e+00, -5.74800542e-11, ...,
1.57079633e+01, 1.57079633e+01, 1.57079633e+01])
---
*The preceding notebook: `models/dfim-alg.ipynb` from the `pydgrid/pydgrid` repository (MIT license).*
```python
# Header starts here.
from sympy.physics.units import *
from sympy import *
# Rounding:
import decimal
from decimal import Decimal as DX
from copy import deepcopy
def iso_round(obj, pv, rounding=decimal.ROUND_HALF_EVEN):
import sympy
"""
Rounding acc. to DIN EN ISO 80000-1:2013-08
place value = Rundestellenwert
"""
assert pv in set([
# place value # round to:
1, # 1
0.1, # 1st digit after decimal
0.01, # 2nd
0.001, # 3rd
0.0001, # 4th
0.00001, # 5th
0.000001, # 6th
0.0000001, # 7th
0.00000001, # 8th
0.000000001, # 9th
0.0000000001, # 10th
])
objc = deepcopy(obj)
try:
tmp = DX(str(float(objc)))
objc = tmp.quantize(DX(str(pv)), rounding=rounding)
except:
for i in range(len(objc)):
tmp = DX(str(float(objc[i])))
objc[i] = tmp.quantize(DX(str(pv)), rounding=rounding)
return objc
# LateX:
kwargs = {}
kwargs["mat_str"] = "bmatrix"
kwargs["mat_delim"] = ""
# kwargs["symbol_names"] = {FB: "F^{\mathsf B}", }
# Units:
(k, M, G ) = ( 10**3, 10**6, 10**9 )
(mm, cm) = ( m/1000, m/100 )
Newton = kg*m/s**2
Pa = Newton/m**2
MPa = M*Pa
GPa = G*Pa
kN = k*Newton
deg = pi/180
half = S(1)/2
# Header ends here.
#
# https://colab.research.google.com/github/kassbohm/tm-snippets/blob/master/ipynb/TM_A/TM_2/rod-lin_cc.ipynb
# Input:
(l, lp) = ( 10 *cm, 12.5 *cm )
alpha = 30 * deg
e = Matrix([1, 0])
pprint("\n(l, l') / cm:")
tmp = Matrix([l, lp])
tmp /= cm
tmp = iso_round(tmp, 0.001)
pprint(tmp)
pprint("\nα / deg:")
tmp = alpha / deg
pprint(tmp)
alpha = N(alpha, 50)
ca, sa = cos(alpha), sin(alpha)
pprint("\nr / cm:")
r = Matrix([l, 0])
tmp = r
tmp /= cm
tmp = iso_round(tmp, 0.001)
pprint(tmp)
pprint("\nr' / cm:")
rp = lp*Matrix([ca, sa])
tmp = rp
tmp /= cm
tmp = iso_round(tmp, 0.001)
pprint(tmp)
pprint("\nΔℓ / cm:")
dell = lp - l
tmp = dell
tmp /= cm
tmp = iso_round(tmp, 0.001)
pprint(tmp)
pprint("\nΔl / cm:")
dl = e.dot(rp - r)
tmp = dl
tmp /= cm
tmp = iso_round(tmp, 0.001)
pprint(tmp)
pprint("\n|Δℓ - Δl | / l:")
tmp = abs(dell - dl)/l
tmp = iso_round(tmp, 0.001)
pprint(tmp)
# (l, l') / cm:
# ⎡10.0⎤
# ⎢ ⎥
# ⎣12.5⎦
#
# α / deg:
# 30
#
# r / cm:
# ⎡10.0⎤
# ⎢ ⎥
# ⎣0.0 ⎦
#
# r' / cm:
# ⎡10.825⎤
# ⎢ ⎥
# ⎣ 6.25 ⎦
#
# Δℓ / cm:
# 2.500
#
# Δl / cm:
# 0.825
#
# |Δℓ - Δl | / l:
# 0.167
```
---
*The preceding notebook: `ipynb/TM_A/TM_2/rod-lin_cc.ipynb` from the `kassbohm/tm-snippets` repository (MIT license).*
# MathematicalProgram Tutorial
For instructions on how to run these tutorial notebooks, please see the [README](https://github.com/RobotLocomotion/drake/blob/master/tutorials/README.md).
## Background
Many engineering problems can be formulated as mathematical optimization problems, and solved by numerical solvers. A generic mathematical optimization problem can be formulated as
\begin{align}
\begin{array}{rl}
\min_x \; & f(x)
\\[0.1pc]\text{subject to} \; & x \in\mathcal{S}
\end{array}
\hspace{1.45em}
{\scriptsize
\begin{array}{|ll|}
\hline \text{The real-valued decision variable is} & x
\\[-0.21pc] \text{The real-valued cost function is} & f(x)
\\[-0.21pc] \text{The constraint set is} & \mathcal{S}
\\[-0.21pc] \text{The optimal $x$ that minimizes the cost function is} & x^*
\\[0.03pc]\hline
\end{array}
}
\end{align}
where $x$ is the real-valued decision variable(s), $f(x)$ is the real-valued *cost function*, $\mathcal{S}$ is the constraint set for $x$. Our goal is to find the optimal $x^*$ within the constraint set $\mathcal{S}$, such that $x^*$ minimizes the cost function $f(x)$.
For example, the following optimization problem determines the value of $x$
that minimizes $x^3 + 2x + 1$ subject to $x \ge 1$.
\begin{align}
\begin{array}{rl}
\min_x & x^3 + 2x + 1
\\[0.1pc] \text{subject to} & x \ge 1
\end{array}
\hspace{1.45em}
{\scriptsize
\begin{array}{|ll|}
\hline \text{The real-valued decision variable is} & x
\\[-0.21pc] \text{The real-valued cost function $f(x)$ is} & x^3 + 2x + 1
\\[-0.21pc] \text{The set $\mathcal{S}$ of constraints is} & x \ge 1
\\[-0.21pc] \text{The value that minimizes the cost function is} & x^* = 1
\\[0.03pc]\hline
\end{array}}
\end{align}
In general, how an optimization problem is solved depends on its categorization (categories include Linear Programming, Quadratic Programming, Mixed-integer Programming, etc.). Categorization depends on properties of both the cost function $f(x)$ and the constraint set $\mathcal{S}$. For example, if the cost function $f(x)$ is a linear function of $x$, and the constraint $\mathcal{S}$ is a linear set $\mathcal{S} = \{x | Ax\le b\}$, then we have a *linear programming* problem, which is efficiently solved with certain solvers.
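As a small preview of the API introduced in the rest of this tutorial, the sketch below (variable names are purely illustrative) poses one such linear program with MathematicalProgram; `Solve` dispatches it to a suitable LP solver.
```python
from pydrake.solvers.mathematicalprogram import MathematicalProgram, Solve

# min x0 + 2*x1   subject to   x0 + x1 >= 1,  x0 >= 0,  x1 >= 0
prog_lp = MathematicalProgram()
x_lp = prog_lp.NewContinuousVariables(2)
prog_lp.AddLinearCost(x_lp[0] + 2 * x_lp[1])
prog_lp.AddLinearConstraint(x_lp[0] + x_lp[1] >= 1)
prog_lp.AddLinearConstraint(x_lp[0] >= 0)
prog_lp.AddLinearConstraint(x_lp[1] >= 0)
result_lp = Solve(prog_lp)
print(result_lp.GetSolution(x_lp))
```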
There are multiple solvers for each category of optimization problems,
but each solver has its own API and data structures.
Frequently, users need to rewrite code when they switch solvers.
To remedy this, Drake provides a common API through the *MathematicalProgram* class.
In addition to avoiding solver-specific code,
the constraint and cost functions can be written in symbolic form (which makes code more readable).
In these ways, Drake's MathematicalProgram is akin to [YALMIP](https://yalmip.github.io/) in MATLAB or [JuMP](https://github.com/JuliaOpt/JuMP.jl) in Julia, and we support both Python and C++.
<br> Note: Drake supports many [solvers](https://drake.mit.edu/doxygen_cxx/group__solvers.html)
(some are open-source and some require a license).
Drake can formulate and solve the following categories of optimization problems
* Linear programming
* Quadratic programming
* Second-order cone programming
* Nonlinear nonconvex programming
* Semidefinite programming
* Sum-of-squares programming
* Mixed-integer programming (mixed-integer linear programming, mixed-integer quadratic programming, mixed-integer second-order cone programming).
* Linear complementarity problem
This tutorial provides the basics of Drake's MathematicalProgram.
Advanced tutorials are available at the [bottom](#Advanced-tutorials) of this document.
## Basics of MathematicalProgram class
Drake's MathematicalProgram class contains the mathematical formulation of an optimization problem, namely the decision variables $x$, the cost function $f(x)$, and the constraint set $\mathcal{S}$.
### Initialize a MathematicalProgram object
To initialize this class, first create an empty MathematicalProgram as
```python
%matplotlib notebook
```
```python
from pydrake.solvers.mathematicalprogram import MathematicalProgram
import numpy as np
import matplotlib.pyplot as plt
# Create an empty MathematicalProgram named prog (with no decision variables,
# constraints or cost function)
prog = MathematicalProgram()
```
### Adding decision variables
Shown below, the function `NewContinuousVariables` adds two new continuous decision variables to `prog`. The newly added variables are returned as `x` in a numpy array.
<br><font size=-1> Note that the range of the variable is a continuous set, as opposed to binary variables, which can only take the discrete values 0 or 1.</font>
```python
x = prog.NewContinuousVariables(2)
```
The default names of the variables in *x* are "x(0)" and "x(1)". In the cell below, the first line prints those default names and types, and the second prints a symbolic expression built from them, `1 + 2*x[0] + 3*x[1] + 4*x[1]`.
```python
print(x)
print(1 + 2*x[0] + 3*x[1] + 4*x[1])
```
To create an array `y` of two variables named "dog(0)" and "dog(1)", pass the name "dog" as a second argument to `NewContinuousVariables()`. Also shown below is the printout of the two variables in `y` and a symbolic expression involving `y`.
```python
y = prog.NewContinuousVariables(2, "dog")
print(y)
print(y[0] + y[0] + y[1] * y[1] * y[1])
```
To create a $3 \times 2$ matrix of variables named "A", type
```python
var_matrix = prog.NewContinuousVariables(3, 2, "A")
print(var_matrix)
```
### Adding constraints
There are many ways to impose constraints on the decision variables. This tutorial shows a few simple examples. Refer to the links at the [bottom](#Advanced-tutorials) of this document for other types of constraints.
#### AddConstraint
The simplest way to add a constraint is with `MathematicalProgram.AddConstraint()`.
```python
# Add the constraint x(0) * x(1) = 1 to prog
prog.AddConstraint(x[0] * x[1] == 1)
```
You can also add inequality constraints to `prog` such as
```python
prog.AddConstraint(x[0] >= 0)
prog.AddConstraint(x[0] - x[1] <= 0)
```
`prog` automatically analyzes these symbolic inequality constraint expressions and determines they are all *linear* constraints on $x$.
### Adding Cost functions
In a complicated optimization problem, it is often convenient to write the total cost function $f(x)$ as a sum of individual cost functions
\begin{align}
f(x) = \sum_i g_i(x)
\end{align}
#### AddCost method.
The simplest way to add an individual cost function $g_i(x)$ to the total cost function $f(x)$ is with the `MathematicalProgram.AddCost()` method (as shown below).
```python
# Add a cost x(0)**2 + 3 to the total cost. Since prog doesn't have a cost before, now the total cost is x(0)**2 + 3
prog.AddCost(x[0] ** 2 + 3)
```
To add another individual cost function $x(0) + x(1)$ to the total cost function $f(x)$, simply call `AddCost()` again as follows
```python
prog.AddCost(x[0] + x[1])
```
now the total cost function becomes $x(0)^2 + x(0) + x(1) + 3$.
`prog` can analyze each of these individual cost functions and determine that $x(0) ^ 2 + 3$ is a convex quadratic function, and $x(0) + x(1)$ is a linear function of $x$.
### Solve the optimization problem
Once all the decision variables/constraints/costs are added to `prog`, we are ready to solve the optimization problem.
#### Automatically choosing a solver
The simplest way to solve the optimization problem is to call the `Solve()` function. Drake's MathematicalProgram analyzes the types of the constraints/costs and then calls an appropriate solver for your problem. The result of calling `Solve()` is stored in the return argument. Here is a code snippet:
```python
"""
Solves a simple optimization problem
min x(0)^2 + x(1)^2
subject to x(0) + x(1) = 1
x(0) <= x(1)
"""
from pydrake.solvers.mathematicalprogram import Solve
# Set up the optimization problem.
prog = MathematicalProgram()
x = prog.NewContinuousVariables(2)
prog.AddConstraint(x[0] + x[1] == 1)
prog.AddConstraint(x[0] <= x[1])
prog.AddCost(x[0] **2 + x[1] ** 2)
# Now solve the optimization problem.
result = Solve(prog)
# print out the result.
print("Success? ", result.is_success())
# Print the solution to the decision variables.
print('x* = ', result.GetSolution(x))
# Print the optimal cost.
print('optimal cost = ', result.get_optimal_cost())
# Print the name of the solver that was called.
print('solver is: ', result.get_solver_id().name())
```
Notice that we can then retrieve optimization result from the return argument of `Solve`. For example, the solution $x^*$ is retrieved from `result.GetSolution()`, and the optimal cost from `result.get_optimal_cost()`.
Some optimization problems are infeasible (they have no solution). For example, in the following code, `result.get_solution_result()` will not report `kSolutionFound`.
```python
"""
An infeasible optimization problem.
"""
prog = MathematicalProgram()
x = prog.NewContinuousVariables(1)[0]
y = prog.NewContinuousVariables(1)[0]
prog.AddConstraint(x + y >= 1)
prog.AddConstraint(x + y <= 0)
prog.AddCost(x)
result = Solve(prog)
print("Success? ", result.is_success())
print(result.get_solution_result())
```
#### Manually choosing a solver
If you want to choose a solver yourself, rather than letting Drake choose one for you, you can instantiate a solver explicitly and call its `Solve` function. There are two approaches to instantiate a solver. For example, to solve a problem using the open-source solver [IPOPT](https://github.com/coin-or/Ipopt), the solver can be instantiated using either of the two approaches:
1. The simplest approach is to call `solver = IpoptSolver()`
2. The second approach is to construct a solver with a given solver ID as `solver = MakeSolver(IpoptSolver().solver_id())`
```python
"""
Demo on manually choosing a solver
Solves the problem
min x(0)
s.t x(0) + x(1) = 1
0 <= x(1) <= 1
"""
from pydrake.solvers.ipopt import IpoptSolver
prog = MathematicalProgram()
x = prog.NewContinuousVariables(2)
prog.AddConstraint(x[0] + x[1] == 1)
prog.AddConstraint(0 <= x[1])
prog.AddConstraint(x[1] <= 1)
prog.AddCost(x[0])
# Choose IPOPT as the solver.
# First instantiate an IPOPT solver.
solver = IpoptSolver()
# The initial guess is [1, 1]. The third argument is the options for Ipopt solver,
# and we set no solver options.
result = solver.Solve(prog, np.array([1, 1]), None)
print(result.get_solution_result())
print("x* = ", result.GetSolution(x))
print("Solver is ", result.get_solver_id().name())
print("Ipopt solver status: ", result.get_solver_details().status,
", meaning ", result.get_solver_details().ConvertStatusToString())
```
Note that `solver.Solve()` expects three input arguments: the optimization program `prog`, the initial guess of the decision variable values (`[1, 1]` in this case), and an optional setting for the solver (`None` in this case, i.e., the default IPOPT settings). If you don't have an initial guess, you can call `solver.Solve(prog)`. Drake will then choose a default initial guess (a zero-valued vector), but this initial guess might be a bad starting point for optimization. As the following example shows, with the default initial guess the solver cannot find a solution, even though a solution exists (and could be found with the initial guess [1, 1]).
```python
from pydrake.solvers.mathematicalprogram import MakeSolver
solver = MakeSolver(IpoptSolver().solver_id())
result = solver.Solve(prog)
print(result.get_solution_result())
print("x* = ", result.GetSolution(x))
```
Also note that if we know which solver was called, we can access solver-specific results by calling `result.get_solver_details()`. For example, `IpoptSolverDetails` contains a field `status`, namely the status code of the IPOPT solver; we can access this info by
```python
print("Ipopt solver status: ", result.get_solver_details().status,
", meaning ", result.get_solver_details().ConvertStatusToString())
```
Each solver has its own details. Refer to the corresponding `FooSolverDetails` class for what is stored inside the return value of `result.get_solver_details()`. For example, if you know that IPOPT was called, refer to the `IpoptSolverDetails` class; for the OSQP solver, refer to `OsqpSolverDetails`; etc.
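Since the available fields depend on the solver, a safe pattern (a small sketch, not part of the original tutorial) is to check which solver produced the result before reading solver-specific fields:
```python
from pydrake.solvers.ipopt import IpoptSolver

# Only read IPOPT-specific fields when IPOPT actually solved the program.
if result.get_solver_id().name() == IpoptSolver().solver_id().name():
    print("IPOPT status code: ", result.get_solver_details().status)
```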
### Using an initial guess
Some optimization problems, such as nonlinear optimization, require an initial guess. Other types of problems, such as quadratic programming, mixed-integer optimization, etc., can be solved faster if a good initial guess is provided. The user can provide an initial guess as an input argument to the `Solve` function. If no initial guess is provided, Drake will use a zero-valued vector as the initial guess.
In the example below, we show that the initial guess can affect the result of the problem. Without a user-provided initial guess, the solver might be unable to find the solution.
```python
from pydrake.solvers.ipopt import IpoptSolver
prog = MathematicalProgram()
x = prog.NewContinuousVariables(2)
prog.AddConstraint(x[0]**2 + x[1]**2 == 100.)
prog.AddCost(x[0]**2-x[1]**2)
solver = IpoptSolver()
# The user doesn't provide an initial guess.
result = solver.Solve(prog, None, None)
print(f"Without a good initial guess, the result is {result.is_success()}")
print(f"solution {result.GetSolution(x)}")
# Pass an initial guess
result = solver.Solve(prog, [-5., 0.], None)
print(f"With a good initial guess, the result is {result.is_success()}")
print(f"solution {result.GetSolution(x)}")
```
For more details on setting the initial guess, the user could refer to [Nonlinear program](./nonlinear_program.ipynb) section `Setting the initial guess`.
## Add callback
Some solvers support adding a callback function that is invoked at each iteration. One use of such a callback is to visualize the solver's progress in the current iteration. `MathematicalProgram` supports this usage through the function `AddVisualizationCallback`, although the usage is not limited to visualization; the callback function can do anything. Here is an example.
```python
# Visualize the solver progress in each iteration through a callback
# Find the closest point on a curve to a desired point.
fig = plt.figure()
curve_x = np.linspace(1, 10, 100)
ax = plt.gca()
ax.plot(curve_x, 9./curve_x)
ax.plot(-curve_x, -9./curve_x)
ax.plot(0, 0, 'o')
x_init = [4., 5.]
point_x, = ax.plot(x_init[0], x_init[1], 'x')
ax.axis('equal')
def update(x):
global iter_count
point_x.set_xdata(x[0])
point_x.set_ydata(x[1])
ax.set_title(f"iteration {iter_count}")
fig.canvas.draw()
fig.canvas.flush_events()
# Also update the iter_count variable in the callback.
# This shows we can do more than just visualization in
# callback.
iter_count += 1
plt.pause(0.1)
iter_count = 0
prog = MathematicalProgram()
x = prog.NewContinuousVariables(2)
prog.AddConstraint(x[0] * x[1] == 9)
prog.AddCost(x[0]**2 + x[1]**2)
prog.AddVisualizationCallback(update, x)
result = Solve(prog, x_init)
```
## Advanced tutorials
[Setting solver parameters](./solver_parameters.ipynb)
[Updating costs and constraints (e.g. for efficient solving of many similar programs)](./updating_costs_and_constraints.ipynb)
[Debugging tips](./debug_mathematical_program.ipynb)
[Linear program](./linear_program.ipynb)
[Quadratic program](./quadratic_program.ipynb)
[Nonlinear program](./nonlinear_program.ipynb)
[Sum-of-squares optimization](./sum_of_squares_optimization.ipynb)
---
*The preceding notebook: `tutorials/mathematical_program.ipynb` from the `RobotLocomotion/drake-python3.7` repository (BSD-3-Clause license).*
# Imports
```python
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d.axes3d import Axes3D
from sympy import symbols, diff
from matplotlib import cm #colormap
from math import log
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
%matplotlib inline
```
## 3D plot visualization
### Minimizing the function
$$f(x, y) = \frac {1} {3^{-x^2 - y ^ 2} + 1}$$
#### Minimizing
$$f(x,y) = \frac {1}{r+1}$$
with $r$ equal to $3^{-x^2 - y ^ 2}$
```python
def f(x, y):
r = 3**(-x**2 - y**2)
return 1 / (r + 1)
```
```python
# Generating data
x = np.linspace(start = -2, stop = 2, num = 200)
y = np.linspace(start = -2, stop = 2, num = 200)
x, y = np.meshgrid(x, y)
```
```python
# Generating the 3D plot
fig = plt.figure(figsize = [16,12])
ax = fig.gca(projection = "3d")
ax.plot_surface(x, y, f(x, y), cmap = cm.summer, alpha = 0.4)
ax.set_xlabel("X", fontsize = 20)
ax.set_ylabel("Y", fontsize = 20)
ax.set_zlabel("f(x, y)", fontsize = 20)
plt.show()
```
## Partial Derivatives and Symbolic Computation
### $$ \frac {\partial f}{\partial x} = \frac{2x \ln(3) \cdot 3 ^ {-x^2 - y^2}}{(3^{-x^2 -y^2} + 1)^2}$$
### $$ \frac {\partial f}{\partial y} = \frac{2y \ln(3) \cdot 3 ^ {-x^2 - y^2}}{\left(3^{-x^2 -y^2} + 1\right)^2}$$
```python
a, b = symbols("x, y")
print("Nossa função custo: ", f(a, b))
print("Derivada parcial de f(a, b) em relação a X: ", diff(f(a, b), a))
print("Valor de f(x, y) com x=1.8 e y=1: ", f(a,b).evalf(subs = {a:1.8, b:1}))
```
Nossa função custo: 1/(3**(-x**2 - y**2) + 1)
Derivada parcial de f(a, b) em relação a X: 2*3**(-x**2 - y**2)*x*log(3)/(3**(-x**2 - y**2) + 1)**2
Valor de f(x, y) com x=1.8 e y=1: 0.990604794032582
### Gradient descent with Sympy
```python
# Initial configuration
multiplicador = 0.1
max_iter = 200
params = np.array([1.8, 1.0]) #INICIAL
for n in range(max_iter):
gradiente_x = diff(f(a,b), a).evalf(subs = {a:params[0], b:params[1]})
gradiente_y = diff(f(a,b), b).evalf(subs = {a:params[0], b:params[1]})
gradientes = np.array([gradiente_x, gradiente_y])
params = params - multiplicador * gradientes
# Results
print("Valores no vetor gradientes: ", gradientes)
print("Menor valor de x: ", params[0])
print("Menor valor de y: ", params[1])
print("Custo: ", f(params[0], params[1]))
```
Valores no vetor gradientes: [0.000461440542096373 0.000256355856720208]
Menor valor de x: 0.000793898510134722
Menor valor de y: 0.000441054727852623
Custo: 0.500000226534985
```python
# Partial derivatives
def fpx(x, y):
r = 3**(-x**2 - y**2)
return (2 * x * log(3) * r) / ((r + 1) ** 2)
def fpy(x, y):
r = 3**(-x**2 - y**2)
return (2 * y * log(3) * r) / ((r + 1) ** 2)
```
```python
# Initial configuration
multiplicador = 0.1
max_iter = 200
params = np.array([1.8, 1.0]) #INICIAL
for n in range(max_iter):
gradiente_x = fpx(params[0], params[1])
gradiente_y = fpy(params[0], params[1])
gradientes = np.array([gradiente_x, gradiente_y])
params = params - multiplicador * gradientes
# Results
print("Valores no vetor gradientes: ", gradientes)
print("Menor valor de x: ", params[0])
print("Menor valor de y: ", params[1])
print("Custo: ", f(params[0], params[1]))
```
Valores no vetor gradientes: [0.00046144 0.00025636]
Menor valor de x: 0.0007938985101347202
Menor valor de y: 0.0004410547278526219
Custo: 0.5000002265349848
## Plotting gradient descent on the 3D plot & advanced numpy arrays
```python
# Initial configuration
multiplicador = 0.5
max_iter = 200
params = np.array([1.8, 1.0]) #INICIAL
valores = params.reshape(1, 2)
for n in range(max_iter):
gradiente_x = fpx(params[0], params[1])
gradiente_y = fpy(params[0], params[1])
gradientes = np.array([gradiente_x, gradiente_y])
params = params - multiplicador * gradientes
valores = np.append(valores, params.reshape(1,2), axis = 0)
# Results
print("Valores no vetor gradientes: ", gradientes)
print("Menor valor de x: ", params[0])
print("Menor valor de y: ", params[1])
print("Custo: ", f(params[0], params[1]))
```
Valores no vetor gradientes: [1.56952449e-26 8.71958048e-27]
Menor valor de x: 2.0725232663390205e-26
Menor valor de y: 1.1514018146327904e-26
Custo: 0.5
```python
# Generating the 3D plot
fig = plt.figure(figsize = [16,12])
ax = fig.gca(projection = "3d")
ax.set_xlabel("X", fontsize = 20)
ax.set_ylabel("Y", fontsize = 20)
ax.set_zlabel("f(x, y)", fontsize = 20)
ax.plot_surface(x, y, f(x, y), cmap = cm.summer, alpha = 0.4)
ax.scatter(valores[:, 0], valores[:, 1], f(valores[:, 0], valores[:,1]), s = 50)
plt.show()
```
```python
# Practicing with numpy arrays
kirk = np.array([["Captain", "MC"]])
hs_band = np.array([["Black Thought", "MC"], ["Questlove", "Drums"]])
print("band>>>>", hs_band.shape)
the_roots = np.append(arr = hs_band, values = kirk, axis = 0)
the_roots = np.append(arr = the_roots, values = [["Malik B", "MC"]], axis = 0)
print(the_roots)
print("Nicknames...")
print(the_roots[:, [0]])
print("Funçoes...")
print(the_roots[:, [1]])
```
band>>>> (2, 2)
[['Black Thought' 'MC']
['Questlove' 'Drums']
['Captain' 'MC']
['Malik B' 'MC']]
Nicknames...
[['Black Thought']
['Questlove']
['Captain']
['Malik B']]
Funçoes...
[['MC']
['Drums']
['MC']
['MC']]
# Working with Data and a Real Cost Function
## MSE: A cost function for real problems
### $$ RSS = \sum_{i=1} ^ {n} \big(y ^ {(i)} - h_\theta x^{(i)}\big)^2$$
### $$ MSE = \sum_{i=1} ^ {n} \big(y - \hat y \big)^2$$
### $$ MSE = \frac{1}{n} \sum_{i=1} ^ {n} \big(y ^ {(i)} - h_\theta x^{(i)}\big)^2$$
### $$ MSE = \frac{1}{n} \sum_{i=1} ^ {n} \big(y - \hat y \big)^2$$
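The formulas above translate directly into NumPy. The sketch below is only illustrative (the notebook defines an equivalent `mse` function further down):
```python
import numpy as np

def rss(y, y_hat):
    # Residual sum of squares
    return np.sum((y - y_hat) ** 2)

def mse_example(y, y_hat):
    # Mean squared error = RSS / n
    return rss(y, y_hat) / len(y)
```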
```python
# Creating values
x_5 = np.array([[0.1, 1.2, 2.4, 3.2, 4.1, 5.7, 6.5]]).transpose()
y_5 = np.array([1.7, 2.4, 3.5, 3.0, 6.1, 9.4, 8.2]).reshape(7,1)
print("x_5 shape: ", x_5.shape)
print("y_5 shape: ", y_5.shape)
```
x_5 shape: (7, 1)
y_5 shape: (7, 1)
```python
regr = LinearRegression()
regr.fit(x_5, y_5)
print("Theta 0: ", regr.intercept_[0])
print("Theta 1: ", regr.coef_[0][0])
```
Theta 0: 0.8475351486029536
Theta 1: 1.2227264637835915
```python
plt.scatter(x_5, y_5, s= 50)
plt.plot(x_5, regr.predict(x_5), color= "orange", linewidth = 3)
plt.xlabel("Valores de x")
plt.ylabel("Valores de y")
plt.show()
```
```python
#y_hat = theta0 + theta1 * x
print(regr.coef_[0][0])
print(regr.intercept_[0])
y_hat = 0.8475351486029536 + (1.2227264637835915 * x_5)
```
1.2227264637835915
0.8475351486029536
```python
def mse(y, y_hat):
return (1/len(y)) * (sum((y - y_hat) ** 2))
```
```python
print("MSE calculado manualmente: ", mse(y_5, y_hat))
print("MSE usando calculo manual: ", mean_squared_error(y_5, y_hat))
print("MSE calculado automaticamente: ", mean_squared_error(y_5, regr.predict(x_5)))
```
MSE calculado manualmente: [0.94796558]
MSE usando calculo manual: 0.9479655759794577
MSE calculado automaticamente: 0.9479655759794577
```python
# Creating values for Theta0 and Theta1
# REMEMBER: T0 is the intercept and T1 is the slope (angular coefficient)
```
```python
n_tetas = 200
t_0 = np.linspace(start= -1, stop = 3, num = n_tetas)
t_1 = np.linspace(start= -1, stop = 3, num = n_tetas)
plot_t0, plot_t1 = np.meshgrid(t_0, t_1)
plot_mse = np.zeros(plot_t0.shape)
```
```python
for i in range(n_tetas):
for j in range(n_tetas):
y_hat = plot_t0[i][j] + plot_t1[i][j]*x_5
plot_mse[i][j] = mse(y_5, y_hat)
```
```python
fig = plt.figure(figsize=[16,10])
ax = fig.gca(projection = "3d")
ax.set_xlabel("teta 0", fontsize = 20)
ax.set_ylabel("teta 1", fontsize = 20)
ax.set_zlabel("MSE", fontsize = 20)
ax.plot_surface(plot_t0, plot_t1, plot_mse, cmap = cm.summer)
```
## Partial derivatives with respect to $\theta_0$ and $\theta_1$:
### $$\frac{\partial MSE}{\partial \theta_0} = -\frac{2}{n} \sum_{i=1}^n \big(y^{(i)} - \theta_0 - \theta_1 x^{(i)} \big) $$
### $$\frac{\partial MSE}{\partial \theta_1} = -\frac{2}{n} \sum_{i=1}^n \big(y^{(i)} - \theta_0 - \theta_1 x^{(i)} \big) \cdot \big(x^{(i)}\big) $$
### MSE and Gradient Descent
```python
# X and y are the data; tetas are the two values we are trying to find
def grad(x, y, tetas):
n = y.size
t0_derivada = (-2/n) * sum(y - tetas[0] - tetas[1] * x)
t1_derivada = (-2/n) * sum((y - tetas[0] - tetas[1] * x) * x)
return np.array([t0_derivada[0], t1_derivada[0]])
```
```python
max_iter = 1000
multiplicador = 0.01
tetas = np.array([2.9, 2.9])
teta_valores = tetas.reshape(1,2)
mse_valores = mse(y_5, tetas[0] + tetas[1]*x_5)
for i in range(max_iter):
tetas = tetas - multiplicador * grad(x_5, y_5, tetas)
teta_valores = np.append(arr = teta_valores, values = tetas.reshape(1,2), axis = 0)
mse_valores = np.append(arr = mse_valores, values = mse(y_5, tetas[0] + tetas[1]*x_5), axis = 0)
print("Minimo ocorre em t0: ", tetas[0])
print("Minimo ocorre em t1 : ", tetas[1])
print("MSE: ", mse(y_5, tetas[0] + tetas[1] * x_5))
```
Minimo ocorre em t0: 0.8532230461743415
Minimo ocorre em t1 : 1.2214935332607393
MSE: [0.94797511]
```python
fig = plt.figure(figsize=[16,10])
ax = fig.gca(projection = "3d")
ax.plot_surface(plot_t0, plot_t1, plot_mse, cmap = cm.winter, alpha = 0.4)
ax.scatter(teta_valores[:, 0], teta_valores[:, 1], mse_valores, color = "red")
ax.set_xlabel("teta 0", fontsize = 20)
ax.set_ylabel("teta 1", fontsize = 20)
ax.set_zlabel("Cost - MSE", fontsize = 20)
```
---
*The preceding notebook: `Secao-4/Grandient Descend.ipynb` from the `jhonatacaiob/Bootcamp-Data-Science` repository (MIT license).*
# Support Vector Machines on the D-Wave Quantum Annealer
#### Created by Gabriele Cavallaro ([email protected])
### 0. Setting Up the Access to the D-Wave 2000Q quantum computer
- Make a free account to run on the D-Wave through [Leap](https://www.dwavesys.com/take-leap)
- Install Ocean Software with [pip install dwave-ocean-sdk](https://docs.ocean.dwavesys.com/en/latest/overview/install.html)
- Configuring the D-Wave System as a Solver with [dwave config create](https://docs.ocean.dwavesys.com/en/latest/overview/dwavesys.html#dwavesys)
### 1. Data Preparation
#### 1.1 Load of the Python Modules
```python
from utils import * # It contains functions for threat the data (I/O, encoding/decoding) and metrics for evaluations
```
#### 1.2 Select the Dataset
##### In this notebook we consider the datasets of [HyperLabelMe](http://hyperlabelme.uv.es/index.html) (i.e., a benchmark system for remote sensing image classification).
- It contains 43 image datasets, both multi- and hyperspectral
- For each one, training pairs (spectra and their labels) and test spectra are provided
- The test labels are not given. The predicted labels need to be uploaded to HyperLabelMe, which returns the accuracy
```python
# Load the data
# Im40.txt can be downloaded after registering at HyperLabelMe (see the link above)
id_dataset='Im40'
[X_train, Y_train, X_test]=dataread('input_datasets/hyperlabelme/'+id_dataset+'.txt')
```
#### 1.3 Background on Support Vector Machines (SVMs)
A SVM learns its parameters from a set of annotated training samples
- $D=\left\{\textbf{x}_{n}, y_{n}: n=0, \ldots, N-1\right\}$
- with $\textbf{x}_{n} \in \mathbb{R}^{d}$ being a feature vector and $y_n$ its label.
A SVM separates the samples of different classes in their feature space by tracing maximum margin hyperplanes.
The training consists of solving a [quadratic programming (QP)](https://www.cambridge.org/us/academic/subjects/mathematics/numerical-recipes/numerical-recipes-art-scientific-computing-3rd-edition?format=HB&utm_source=shortlink&utm_medium=shortlink&utm_campaign=numericalrecipes) problem.
\begin{equation}
\label{eq:qp_equation}
L=\frac{1}{2} \sum_{n m} \alpha_{n} \alpha_{m} y_{n} y_{m} k\left(\mathbf{x}_{n}, \mathbf{x}_{m}\right)-\sum_{n} \alpha_{n} \qquad \qquad \qquad \text { (1) }
\end{equation}
\begin{equation}
\label{eq:svm_constrains}
\text {subject to} \quad 0 \leq \alpha_{n} \leq C \> \> \text { and } \> \> \sum_{n} \alpha_{n} y_{n}=0 \qquad \qquad \text { (2) }
\end{equation}
For $N$ coefficients $\alpha_{n} \in \mathbb{R}$, where $C$ is a regularization parameter and $k(.,.)$ is a kernel function that enables a SVM to compute non-linear decision functions (by means of the [kernel trick](https://dl.acm.org/citation.cfm?id=559923)).
The most commonly used type of kernel function is the RBF:
$\operatorname{rbf}\left(\mathbf{x}_{n}, \mathbf{x}_{m}\right)=e^{-\gamma\left\|\mathbf{x}_{n}-\mathbf{x}_{m}\right\|^{2}}$.
The SVM decision boundary is based on the samples corresponding to $\alpha_{n} \neq 0$ (i.e., support vectors).
A typical solution often contains many $\alpha_{n}=0$.
The prediction for an arbitrary
sample $\mathbf{x} \in \mathbb{R}^{d}$ can be made by evaluating the decision function (i.e., signed distance between the sample $\mathbf{x}$ and the decision boundary)
\begin{equation}
\label{eq:decision_function}
f(\mathbf{x})=\sum_{n} \alpha_{n} y_{n} k\left(\mathbf{x}_{n}, \mathbf{x}\right)+b \qquad \qquad \text { (3) }
\end{equation}
where the bias $b$ can be computed by
\begin{equation}
b=\frac{\sum_{n} \alpha_{n}\left(C-\alpha_{n}\right)\left[y_{n}-\sum_{m} \alpha_{m} y_{m} k\left(\mathbf{x}_{m}, \mathbf{x}_{n}\right)\right]}{\sum_{n} \alpha_{n}\left(C-\alpha_{n}\right)} \qquad \qquad \text { (4) }
\end{equation}
The class label for $\mathbf{x}$ predicted is $\widetilde{y}=\operatorname{sign}(f(\mathbf{x}))$.
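For reference, Eq. (3) translates directly into NumPy. The sketch below is purely illustrative (these helper names are not part of the accompanying `quantum_SVM` module); it assumes an RBF kernel and given coefficients `alpha`, bias `b`, and training data `(X, y)`:
```python
import numpy as np

def rbf_kernel(x1, x2, gamma):
    return np.exp(-gamma * np.sum((x1 - x2) ** 2))

def decision_function(x, X, y, alpha, b, gamma):
    # Signed distance of Eq. (3): f(x) = sum_n alpha_n * y_n * k(x_n, x) + b
    return sum(a * t * rbf_kernel(xn, x, gamma) for a, t, xn in zip(alpha, y, X)) + b
```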
#### 1.4 Quantum SVM
The DW2000Q QA requires the SVM training to be formulated as a [Quadratic Unconstrained Binary Optimization (QUBO)](https://docs.dwavesys.com/docs/latest/c_gs_3.html) problem which is defined as the minimization of the energy function:
\begin{equation}
E=\sum_{i \leq j} a_{i} Q_{i j} a_{j} \qquad \qquad \text { (5) }
\end{equation}
with $a_{i} \in\{0,1\}$ the binary variables of the optimization problem, and $Q$
the QUBO weight matrix (i.e., an upper-triangular matrix of real numbers).
Since the solution of Eqs. (1)-(2) consists of real numbers $\alpha_{n} \in \mathbb{R}$ and Eq.(4) can only computes discrete solutions, the following encoding is used:
\begin{equation}
\label{eq:encoding}
\alpha_{n}=\sum_{k=0}^{K-1} B^{k} a_{K n+k} \qquad \qquad \text { (6) }
\end{equation}
where $a_{K n+k} \in\{0,1\}$ are binary variables, $K$ is the number of
binary variables to encode $\alpha_{n}$, and $B$ is the base used for the
encoding.
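As an illustration of Eq. (6) (again, the names are illustrative and not part of `quantum_SVM`), a flat vector of $KN$ binary variables returned by the annealer can be decoded into the $N$ real coefficients as follows:
```python
import numpy as np

def decode_alphas(a, N, K, B):
    # a: binary vector of length K*N; returns alpha_n = sum_k B**k * a_{K*n+k}
    a = np.asarray(a).reshape(N, K)
    return a @ (float(B) ** np.arange(K))
```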
The formulation of the QP of Eqs. (1)-(2) as a QUBO is obtained through the encoding of Eq. (6) and the introduction of a multiplier $\xi$ that includes the second constraint of Eq. (2) (the equality $\sum_{n} \alpha_{n} y_{n}=0$) as a squared penalty term:
\begin{equation}
\label{eq:formulation_qp_quantum_1}
E=\frac{1}{2} \sum_{n m k j} a_{K n+k} a_{K m+j} B^{k+j} y_{n} y_{m} k\left(\mathbf{x}_{n}, \mathbf{x}_{m}\right)
-\sum_{n k} B^{k} a_{K n+k}+\xi\left(\sum_{n k} B^{k} a_{K n+k} y_{n}\right)^{2} \qquad \qquad \text { (7) }
\end{equation}
\begin{equation}
\label{eq:formulation_qp_quantum_2}
=\sum_{n, m=0}^{N-1} \sum_{k, j=0}^{K-1} a_{K n+k} \widetilde{Q}_{K n+k, K m+j} a_{K m+j} \qquad \qquad \text { (8) }
\end{equation}
where $\widetilde{Q}$ is a matrix of size $K N \times K N$ given by
\begin{equation}
\label{eq:q_embedding}
\widetilde{Q}_{K n+k, K m+j} =\frac{1}{2} B^{k+j} y_{n} y_{m}\left(k\left(\mathbf{x}_{n}, \mathbf{x}_{m}\right)+\xi\right) -\delta_{n m} \delta_{k j} B^{k} \qquad \qquad \text { (9) }
\end{equation}
Since $\widetilde{Q}$ is symmetric, the upper-triangular \ac{QUBO} matrix $Q$ is defined by
$Q_{i j}=\widetilde{Q}_{i j}+\widetilde{Q}_{j i}$ for $i<j$ and $Q_{i i}=\widetilde{Q}_{i i}$. The first constraint of Eq. (2) is automatically included in Eq. (8) through the encoding given in Eq. (6), since the maximum for $\alpha_{n}$ is given by
\begin{equation}
C=\sum_{k=0}^{K-1} B^{k} \qquad \qquad \text { (10) }
\end{equation}
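The following sketch assembles the upper-triangular QUBO matrix of Eq. (9) for an RBF kernel; it is illustrative only and is not the `gen_svm_qubos` routine used below:
```python
import numpy as np

def build_qubo(X, y, B, K, xi, gamma):
    N = len(y)
    Qt = np.zeros((K * N, K * N))
    for n in range(N):
        for m in range(N):
            k_nm = np.exp(-gamma * np.sum((X[n] - X[m]) ** 2))  # RBF kernel
            for k in range(K):
                for j in range(K):
                    Qt[K * n + k, K * m + j] = 0.5 * B ** (k + j) * y[n] * y[m] * (k_nm + xi)
                    if n == m and k == j:
                        Qt[K * n + k, K * m + j] -= B ** k
    # Q_ij = Qt_ij + Qt_ji for i < j, and Q_ii = Qt_ii
    return np.triu(Qt + Qt.T, k=1) + np.diag(np.diag(Qt))
```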
The last step required to run the optimization on the DW2000Q QA is the embedding procedure.
This is necessary because the QUBO problem given in Eq. (5) includes some couplers $Q_{i,j}\neq0$ between qubit $i$ and qubit $j$ for which no physical connection exists on the chip (a constraint of the Chimera topology of the DW2000Q quantum processor).
The embedding maps logical variables onto chains of physical qubits, effectively increasing the connectivity between the logical qubits.
When no embedding can be found, the number of nonzero couplers $n_{cpl}$ is the parameter that can be reduced until an embedding is found.
The DW2000Q QA computes a variety of close-to-optimal solutions (i.e., different coefficients $\{\alpha_{n}\}^{(i)}$ obtained from Eq. (6)). Many of these solutions may have a slightly higher
energy than the global minimum $\{\alpha_{n}\}^*$ that can be found by the classical SVM. However, these solutions can still solve the classification problem for the training data.
For each run on the DW2000Q QA, the 20 lowest energy samples from 10,000 reads are kept.
#### 1.5 Quantum SVM: Calibration Phase
The SVM on the QA depends on four hyperparameters:
the encoding base $B$, the number $K$ of qubits per coefficient $\alpha_{n}$, the multiplier $\xi$, and the kernel parameter $\gamma$. The parameter $n_{cpl}$ varies for each run and is not a parameter of the SVM itself.
The hyperparameters are selected through a 10-fold cross-validation. Each training set includes only 30 samples (a choice due to the limitations of the QA). The validation set includes the remaining samples, which are used to evaluate the performance.
For each dataset, the values are calibrated by evaluating the SVM for $B \in \{2, 3, 5, 10\}$, $K\in \{2, 3\}$, $\xi \in \{0, 1, 5\}$, and $\gamma\in \{ −1, 0.125, 0.25, 0.5, 1, 2, 4, 8 \}$.
```python
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
# 10-fold Monte Carlo (or split-and-shuffle) cross-validation
fold=10
for i in range(0,fold):
X_train_cal, X_val_cal, Y_train_cal, Y_val_cal = train_test_split(X_train,Y_train, test_size=0.94, random_state=i)
# Pre-processing
X_train_cal = preprocessing.scale(X_train_cal)
X_val_cal = preprocessing.scale(X_val_cal)
# Write the data
write_samples(X_train_cal, Y_train_cal,'input_datasets/calibration/'+id_dataset+'/'+id_dataset+'calibtrain'+str(i))
write_samples(X_val_cal, Y_val_cal,'input_datasets/calibration/'+id_dataset+'/'+id_dataset+'calibval'+str(i))
print('Each training set includes '+str(X_train_cal.shape[0])+ ' samples')
print('Each validation set includes '+str(X_val_cal.shape[0])+ ' samples')
```
```python
from quantum_SVM import *
# Hyperparameters
B=[2,3,5,10]
K=[2,3]
xi=[0,1,5]
gamma=[-1,0.125,0.25,0.5,1,2,4,8]
n_experiments=len(B)*len(K)*len(xi)*len(gamma)
hyperparameters=np.zeros([n_experiments,4], dtype=float)
path_data_key='input_datasets/calibration/'+id_dataset+'/'
data_key = id_dataset+'calibtrain'
path_out='outputs/calibration/'+id_dataset+'/'
trainacc=np.zeros([fold], dtype=float)
trainauroc=np.zeros([fold], dtype=float)
trainauprc=np.zeros([fold], dtype=float)
testacc=np.zeros([fold], dtype=float)
testauroc=np.zeros([fold], dtype=float)
testauprc=np.zeros([fold], dtype=float)
trainacc_all=np.zeros([n_experiments], dtype=float)
trainauroc_all=np.zeros([n_experiments], dtype=float)
trainauprc_all=np.zeros([n_experiments], dtype=float)
testacc_all=np.zeros([n_experiments], dtype=float)
testauroc_all=np.zeros([n_experiments], dtype=float)
testauprc_all=np.zeros([n_experiments], dtype=float)
f = open(path_out+'calibration_results.txt',"w")
f.write("B\t K\t xi\t gamma\t trainacc\t trainauroc\t trainauprc\t testacc\t testauroc\t testauprc\n")
count=0
for x in range(0,len(B)):
for y in range(0,len(K)):
for z in range(0,len(xi)):
for i in range(0,len(gamma)):
for j in range(0,fold):
path=gen_svm_qubos(B[x],K[y],xi[z],gamma[i],path_data_key,data_key+str(j),path_out)
pathsub=dwave_run(path_data_key,path)
[trainacc[j],trainauroc[j],trainauprc[j],testacc[j],testauroc[j],testauprc[j]]=eval_run_rocpr_curves(path_data_key,pathsub,'noplotsave')
hyperparameters[count,0]=B[x]
hyperparameters[count,1]=K[y]
hyperparameters[count,2]=xi[z]
hyperparameters[count,3]=gamma[i]
trainacc_all[count]=np.average(trainacc)
trainauroc_all[count]=np.average(trainauroc)
trainauprc_all[count]=np.average(trainauprc)
testacc_all[count]=np.average(testacc)
testauroc_all[count]=np.average(testauroc)
testauprc_all[count]=np.average(testauprc)
np.save(path_out+'hyperparameters', hyperparameters)
np.save(path_out+'trainacc_all', trainacc_all)
np.save(path_out+'trainauroc_all', trainauroc_all)
np.save(path_out+'trainauprc_all', trainauprc_all)
np.save(path_out+'testacc_all', testacc_all)
np.save(path_out+'testauroc_all', testauroc_all)
np.save(path_out+'testauprc_all', testauprc_all)
f.write(f'{B[x]}\t {K[y]}\t {xi[z]}\t {gamma[i]:8.3f}\t {np.average(trainacc):8.4f}\t {np.average(trainauroc):8.4f}\t {np.average(trainauprc):8.4f}\t {np.average(testacc):8.4f}\t {np.average(testauroc):8.4f}\t {np.average(testauprc):8.4f}')
f.write("\n")
count=count+1
f.close()
```
#### 1.6 Quantum SVM: Training Phase
To overcome the limited connectivity of the Chimera graph of the DW2000Q QA, the whole training set is split into small disjoint subsets $D^{(train,l)}$ of $\sim 40$ samples, with $l=0,\ldots,L-1$ and $L=\operatorname{int}(N/40)$.
The strategy is to build an ensemble of quantum weak SVMs (qeSVMs) where each classifier is trained on $D^{(train,l)}$.
This is achieved in two steps. First, for each subset $D^{(train,l)}$ the twenty best solutions from the DW2000Q QA (i.e., qSVM$(B, K, \xi , \gamma )\#i$ for $i =0, ... ,19$) are combined by averaging over the respective decision functions $f^{l,i}(\mathbf{x})$ (see Eq. (3)).
Since the decision function is linear in the coefficients
and the bias $b^{(l,i)}$ is computed from $\alpha_{n}^{(l,i)}$ via Eq. (4), this procedure effectively results in one classifier with an effective set of coefficients
$\alpha_{n}^{(l)}=\sum_{i} \alpha_{n}^{(l, i)} / 20$ and bias
$b^{l}=\sum_{i} b^{(l, i)} / 20$.
Second, an average is made over the $int(N/40)$ subsets.
Note, however, that the data points
$\left(\mathbf{x}_{n}^{(l)}, y_{n}^{(l)}\right) \in D^{(\text {train }, l)}$ are now different for each $l$. The full decision function is
\begin{equation}
F(\mathbf{x})=\frac{1}{L} \sum_{n l} \alpha_{n}^{(l)} y_{n}^{(l)} k\left(\mathbf{x}_{n}^{(l)}, \mathbf{x}\right)+b,
\end{equation}
where $b=\sum_{l} b^{(l)} / L$. As before, the decision for the class label of a point $\mathbf{x}$ is obtained through $\widetilde{t}=\operatorname{sign}(F(\mathbf{x}))$.
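A compact sketch of this averaging, reusing the illustrative `decision_function` from Section 1.3 above:
```python
def ensemble_decision(x, subsets, gamma):
    # subsets: list of tuples (X_l, y_l, alpha_l, b_l), one per training subset D^(train,l)
    L = len(subsets)
    return sum(decision_function(x, X_l, y_l, a_l, b_l, gamma)
               for (X_l, y_l, a_l, b_l) in subsets) / L
```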
```python
from quantum_SVM import *
import numpy as np
from utils import *
from sklearn.model_selection import KFold
from sklearn import preprocessing
# Write the data
experiments=1
slice=40 # Number of samples to use for the training
fold=int(len(X_train)/40)
print(fold)
for i in range(0,experiments):
cv = KFold(n_splits=fold, random_state=i, shuffle=True)
count=0
    for test_index, train_index in cv.split(X_train):  # the small held-out fold (~40 samples) is used as the training subset
#print("Train Index: ", len(train_index), "\n")
X_train_slice, y_train_slice = X_train[train_index], Y_train[train_index]
X_train_slice = preprocessing.scale(X_train_slice)
X_test_slice, y_test_slice = X_train[test_index], Y_train[test_index]
X_test_slice = preprocessing.scale(X_test_slice)
write_samples(X_train_slice, y_train_slice,f'input_datasets/train/'+id_dataset+'/'+id_dataset+'calibtrain'+str(i)+'_'+str(count))
write_samples(X_test_slice, y_test_slice,f'input_datasets/train/'+id_dataset+'/'+id_dataset+'calibval'+str(i)+'_'+str(count))
count=count+1
print("Each training set has", len(train_index), "samples\n")
```
```python
# Get the calibration results
path_out='outputs/calibration/'+id_dataset+'/'
hyperparameters=np.load(path_out+'hyperparameters.npy')
testauprc_all=np.load(path_out+'testauprc_all.npy')
# Select the best hyperparameter set for the max value of testauprc
idx_max = np.where(testauprc_all == np.amax(testauprc_all))
B=int(hyperparameters[int(idx_max[0]),0])
K=int(hyperparameters[int(idx_max[0]),1])
xi=int(hyperparameters[int(idx_max[0]),2])
gamma=hyperparameters[int(idx_max[0]),3]
print('The best hyperparameters are:\n'+'B = '+str(B)+' K = '+str(K)+' xi = '+str(xi)+' gamma = '+str(gamma))
path_data_key='input_datasets/train/'+id_dataset+'/'
data_key = id_dataset+'calibtrain'
path_out='outputs/train/'+id_dataset+'/'
trained_SVMs=[]
for j in range(0,experiments):
for i in range(0,fold):
path=gen_svm_qubos(B,K,xi,gamma,path_data_key,data_key+str(j)+'_'+str(i),path_out)
trained_SVMs.append(dwave_run(path_data_key,path))
np.save(path_out+'trained_SVMs',trained_SVMs)
```
```python
from quantum_SVM import *
import numpy as np
from utils import *
path_data_key='input_datasets/train/'+id_dataset+'/'
data_key = id_dataset+'calibtrain'
[trainacc[j],trainauroc[j],trainauprc[j],testacc[j],testauroc[j],testauprc[j]]=eval_run_rocpr_curves(path_data_key,'outputs/train/im16/runim16calibtrain0_0_B=2_K=3_xi=1_gamma=0.25/result_couplers=2000/','saveplot')
```
#### 1.7 Quantum SVM: Test Phase
The performance of the qeSVMs can be evaluated directly on [HyperLabelMe](http://hyperlabelme.uv.es/index.html) by uploading the predictions (i.e., output file of the next cell)
```python
from quantum_SVM import *
from sklearn import preprocessing
# Pre-processing the test spectra
X_test = preprocessing.scale(X_test)
path_data_key='input_datasets/train/'+id_dataset+'/'
data_key = id_dataset+'calibtrain'
path_train_out='outputs/train/'+id_dataset+'/'
path_test_out='outputs/test/'+id_dataset+'/'
path_files=np.load(path_train_out+'trained_SVMs.npy')
experiments=1
slices=10
scores=[]
for j in range(0,experiments):
for i in range(0,slices):
scores.append(predict(path_data_key,path_files[i],X_test))
avg_scores=np.zeros((scores[0].shape[0]))
Y_predicted=np.zeros((scores[0].shape[0]),int)
for i in range(0,scores[0].shape[0]):
tmp=0
for y in range(0,slices):
tmp=tmp+scores[y][i]
avg_scores[i]=tmp/slices
for i in range(0,scores[0].shape[0]):
if(avg_scores[i]<0):
Y_predicted[i]=1
else:
Y_predicted[i]=2
datawrite(path_test_out,'qeSVM', 'Im16', Y_predicted)
```
---
*The preceding notebook: `run_qeSVM.ipynb` from the `GaIbatorix/Quantum-SVM` repository (MIT license).*
# Job Search Model
**Randall Romero Aguilar, PhD**
This demo is based on the original Matlab demo accompanying the <a href="https://mitpress.mit.edu/books/applied-computational-economics-and-finance">Computational Economics and Finance</a> 2001 textbook by Mario Miranda and Paul Fackler.
Original (Matlab) CompEcon file: **demdp04.m**
Running this file requires the Python version of CompEcon. This can be installed with pip by running
!pip install compecon --upgrade
<i>Last updated: 2021-Oct-01</i>
<hr>
## About
An infinitely-lived worker must decide whether to quit, if employed, or search for a job, if unemployed, given prevailing market wages.
### States
- w prevailing wage
- i unemployed (0) or employed (1) at beginning of period
### Actions
- j idle (0) or active (i.e., work or search) (1) this period
### Parameters
| Parameter | Meaning |
|-----------|-------------------------|
| $v$ | benefit of pure leisure |
| $\bar{w}$ | long-run mean wage |
| $\gamma$ | wage reversion rate |
| $p_0$ | probability of finding job |
| $p_1$ | probability of keeping job |
| $\sigma$ | standard deviation of wage shock |
| $\delta$ | discount factor |
# Preliminary tasks
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from compecon import BasisSpline, DPmodel, qnwnorm, demo
```
## FORMULATION
### Worker's reward
The worker's reward is:
- $w$ (the prevailing wage rate), if he's employed and active (working)
- $u=90$, if he's unemployed but active (searching)
- $v=95$, if he's idle (quit if employed, not searching if unemployed)
```python
u = 90
v = 95
def reward(w, x, employed, active):
if active:
return w.copy() if employed else np.full_like(w, u) # the copy is critical!!! otherwise it passes a pointer to w!!
else:
return np.full_like(w, v)
```
### Model dynamics
#### Stochastic Discrete State Transition Probabilities
An unemployed worker who is searching for a job has a probability $p_0=0.2$ of finding it, while an employed worker who doesn't want to quit his job has a probability $p_1 = 0.9$ of keeping it. An idle worker (someone who quits or doesn't search for a job) will definitely be unemployed next period. Thus, the transition probabilities are
\begin{align}
q = \begin{bmatrix}1-p_0 &p_0\\1-p_1&p_1\end{bmatrix},&\qquad\text{if active} \\
= \begin{bmatrix}1 & 0\\1 &0 \end{bmatrix},&\qquad\text{if idle}
\end{align}
```python
p0 = 0.20
p1 = 0.90
q = np.zeros((2, 2, 2))
q[1, 0, 1] = p0
q[1, 1, 1] = p1
q[:, :, 0] = 1 - q[:, :, 1]
```
#### Stochastic Continuous State Transition
Assuming that the wage rate $w$ follows an exogenous Markov process
\begin{equation}
w_{t+1} = \bar{w} + \gamma(w_t − \bar{w}) + \epsilon_{t+1}
\end{equation}
where $\bar{w}=100$ and $\gamma=0.4$.
```python
wbar = 100
gamma = 0.40
def transition(w, x, i, j, in_, e):
return wbar + gamma * (w - wbar) + e
```
Here, $\epsilon$ is a normal $(0,\sigma^2)$ wage shock, where $\sigma=5$. We discretize this distribution with the function ```qnwnorm```.
```python
sigma = 5
m = 15
e, w = qnwnorm(m, 0, sigma ** 2)
```
### Approximation Structure
To discretize the continuous state variable, we use a cubic spline basis with $n=150$ nodes between $w_\min=0$ and $w_\max=200$.
```python
n = 150
wmin = 0
wmax = 200
basis = BasisSpline(n, wmin, wmax, labels=['wage'])
```
## SOLUTION
To represent the model, we create an instance of ```DPmodel```. Here, we assume a discount factor of $\delta=0.95$.
```python
model = DPmodel(basis, reward, transition,
i =['unemployed', 'employed'],
j = ['idle', 'active'],
discount=0.95, e=e, w=w, q=q)
```
Then, we call the method ```solve``` to solve the Bellman equation
```python
S = model.solve(show=True)
S.head()
```
Solving infinite-horizon model collocation equation by Newton's method
iter change time
------------------------------
0 2.1e+03 0.1516
1 3.5e+01 0.3620
2 1.9e+01 0.5605
3 1.4e+00 0.7629
4 2.8e-12 1.0018
Elapsed Time = 1.00 Seconds
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th></th>
<th>wage</th>
<th>i</th>
<th>value</th>
<th>resid</th>
<th>j*</th>
<th>value[idle]</th>
<th>value[active]</th>
</tr>
<tr>
<th></th>
<th>wage</th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th rowspan="5" valign="top">unemployed</th>
<th>0.000000</th>
<td>0.000000</td>
<td>0</td>
<td>1911.101186</td>
<td>0.000000e+00</td>
<td>idle</td>
<td>1911.101186</td>
<td>1906.101213</td>
</tr>
<tr>
<th>0.133422</th>
<td>0.133422</td>
<td>0</td>
<td>1911.101997</td>
<td>-1.405169e-10</td>
<td>idle</td>
<td>1911.101997</td>
<td>1906.102026</td>
</tr>
<tr>
<th>0.266845</th>
<td>0.266845</td>
<td>0</td>
<td>1911.102810</td>
<td>-1.452918e-10</td>
<td>idle</td>
<td>1911.102810</td>
<td>1906.102841</td>
</tr>
<tr>
<th>0.400267</th>
<td>0.400267</td>
<td>0</td>
<td>1911.103625</td>
<td>-5.434231e-11</td>
<td>idle</td>
<td>1911.103625</td>
<td>1906.103656</td>
</tr>
<tr>
<th>0.533689</th>
<td>0.533689</td>
<td>0</td>
<td>1911.104440</td>
<td>9.526957e-11</td>
<td>idle</td>
<td>1911.104440</td>
<td>1906.104473</td>
</tr>
</tbody>
</table>
</div>
### Compute and Print Critical Action Wages
```python
def critical(db):
wcrit = np.interp(0, db['value[active]'] - db['value[idle]'], db['wage'])
vcrit = np.interp(wcrit, db['wage'], db['value[idle]'])
return wcrit, vcrit
wcrit0, vcrit0 = critical(S.loc['unemployed'])
print(f'Critical Search Wage = {wcrit0:5.1f}')
wcrit1, vcrit1 = critical(S.loc['employed'])
print(f'Critical Quit Wage = {wcrit1:5.1f}')
```
Critical Search Wage = 93.8
Critical Quit Wage = 79.4
### Plot Action-Contingent Value Function
```python
vv = ['value[idle]','value[active]']
fig1 = plt.figure(figsize=[12,4])
# UNEMPLOYED
demo.subplot(1,2,1,'Action-Contingent Value, Unemployed', 'Wage', 'Value')
plt.plot(S.loc['unemployed',vv])
demo.annotate(wcrit0, vcrit0, f'$w^*_0 = {wcrit0:.1f}$', 'wo', (5, -5), fs=12)
plt.legend(['Do Not Search', 'Search'], loc='upper left')
# EMPLOYED
demo.subplot(1,2,2,'Action-Contingent Value, Employed', 'Wage', 'Value')
plt.plot(S.loc['employed',vv])
demo.annotate(wcrit1, vcrit1, f'$w^*_1 = {wcrit1:.1f}$', 'wo',(5, -5), fs=12)
plt.legend(['Quit', 'Work'], loc='upper left')
```
### Plot Residual
```python
S['resid2'] = 100 * (S['resid'] / S['value'])
fig2 = demo.figure('Bellman Equation Residual', 'Wage', 'Percent Residual')
plt.plot(S.loc['unemployed','resid2'])
plt.plot(S.loc['employed','resid2'])
plt.legend(model.labels.i)
```
## SIMULATION
### Simulate Model
We simulate the model 10000 times for a time horizon $T=40$, starting with an unemployed worker ($i=0$) at the long-term wage rate mean $\bar{w}$. To be able to reproduce these results, we set the random seed at an arbitrary value of 945.
```python
T = 40
nrep = 10000
sinit = np.full((1, nrep), wbar)
iinit = 0
data = model.simulate(T, sinit, iinit, seed=945)
```
```python
data.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>time</th>
<th>_rep</th>
<th>i</th>
<th>wage</th>
<th>j*</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>0</td>
<td>unemployed</td>
<td>100.0</td>
<td>active</td>
</tr>
<tr>
<th>1</th>
<td>0</td>
<td>1</td>
<td>unemployed</td>
<td>100.0</td>
<td>active</td>
</tr>
<tr>
<th>2</th>
<td>0</td>
<td>2</td>
<td>unemployed</td>
<td>100.0</td>
<td>active</td>
</tr>
<tr>
<th>3</th>
<td>0</td>
<td>3</td>
<td>unemployed</td>
<td>100.0</td>
<td>active</td>
</tr>
<tr>
<th>4</th>
<td>0</td>
<td>4</td>
<td>unemployed</td>
<td>100.0</td>
<td>active</td>
</tr>
</tbody>
</table>
</div>
### Print Ergodic Moments
```python
ff = '\t{:12s} = {:5.2f}'
print('\nErgodic Means')
print(ff.format('Wage', data['wage'].mean()))
print(ff.format('Employment', (data['i'] == 'employed').mean()))
print('\nErgodic Standard Deviations')
print(ff.format('Wage',data['wage'].std()))
print(ff.format('Employment', (data['i'] == 'employed').std()))
```
Ergodic Means
Wage = 100.02
Employment = 0.58
Ergodic Standard Deviations
Wage = 5.37
Employment = 0.49
```python
ergodic = pd.DataFrame({
'Ergodic Means' : [data['wage'].mean(), (data['i'] == 'employed').mean()],
'Ergodic Standard Deviations': [data['wage'].std(), (data['i'] == 'employed').std()]},
index=['Wage', 'Employment'])
ergodic.round(2)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Ergodic Means</th>
<th>Ergodic Standard Deviations</th>
</tr>
</thead>
<tbody>
<tr>
<th>Wage</th>
<td>100.02</td>
<td>5.37</td>
</tr>
<tr>
<th>Employment</th>
<td>0.58</td>
<td>0.49</td>
</tr>
</tbody>
</table>
</div>
### Plot Expected Discrete State Path
```python
data.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>time</th>
<th>_rep</th>
<th>i</th>
<th>wage</th>
<th>j*</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>0</td>
<td>unemployed</td>
<td>100.0</td>
<td>active</td>
</tr>
<tr>
<th>1</th>
<td>0</td>
<td>1</td>
<td>unemployed</td>
<td>100.0</td>
<td>active</td>
</tr>
<tr>
<th>2</th>
<td>0</td>
<td>2</td>
<td>unemployed</td>
<td>100.0</td>
<td>active</td>
</tr>
<tr>
<th>3</th>
<td>0</td>
<td>3</td>
<td>unemployed</td>
<td>100.0</td>
<td>active</td>
</tr>
<tr>
<th>4</th>
<td>0</td>
<td>4</td>
<td>unemployed</td>
<td>100.0</td>
<td>active</td>
</tr>
</tbody>
</table>
</div>
```python
data['ii'] = data['i'] == 'employed'
fig3 = demo.figure('Probability of Employment', 'Period','Probability')
plt.plot(data[['ii','time']].groupby('time').mean())
```
### Plot Simulated and Expected Continuous State Path
```python
subdata = data[data['_rep'].isin(range(3))]
fig4 = demo.figure('Simulated and Expected Wage', 'Period', 'Wage')
plt.plot(subdata.pivot('time', '_rep', 'wage'))
plt.plot(data[['time','wage']].groupby('time').mean(),'k--',label='mean')
```
```python
#demo.savefig([fig1,fig2,fig3,fig4])
```
| 9086b2a7e7e42eed4db2b76383500dc8e6de09fd | 273,341 | ipynb | Jupyter Notebook | _build/jupyter_execute/notebooks/dp/04 Job Search Model.ipynb | randall-romero/CompEcon-python | c7a75f57f8472c972fddcace8ff7b86fee049d29 | [
"MIT"
]
| 23 | 2016-12-14T13:21:27.000Z | 2020-08-23T21:04:34.000Z | _build/jupyter_execute/notebooks/dp/04 Job Search Model.ipynb | randall-romero/CompEcon | c7a75f57f8472c972fddcace8ff7b86fee049d29 | [
"MIT"
]
| 1 | 2017-09-10T04:48:54.000Z | 2018-03-31T01:36:46.000Z | _build/jupyter_execute/notebooks/dp/04 Job Search Model.ipynb | randall-romero/CompEcon-python | c7a75f57f8472c972fddcace8ff7b86fee049d29 | [
"MIT"
]
| 13 | 2017-02-25T08:10:38.000Z | 2020-05-15T09:49:16.000Z | 266.93457 | 110,656 | 0.912512 | true | 4,158 | Qwen/Qwen-72B | 1. YES
2. YES | 0.907312 | 0.679179 | 0.616227 | __label__eng_Latn | 0.590303 | 0.270033 |
```python
import numpy as np
import scipy.sparse
```
---
# Item XVIII
Let $A_n$:
$$
A_n =
\begin{bmatrix}
1 & -2 & 0 & \dots & 0 \\
0 & 1 & -2 & 0 & \dots \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
\vdots & \ddots & 0 & 1 & -2 \\
0 & \dots & \dots & 0 & 1
\end{bmatrix} \in \mathbb{R}^{n \times n}
$$
* Determine $A_n^{-1}$
* Determine $\kappa_\infty(A_n) = ||A_n||_\infty ||A_n^{-1}||_\infty$.
* Solve the largest linear system of equations you can solve in 1 minute with the solution $x$ equal to $-1_n + U (-\delta,\delta)$ using Backward Substitution for $\delta =10^{-14}$. Notice, after you generate $x$ you need to find the RHS and then solve it, hopefully, coming back to the solution you defined previously. Did you recover the solution?
---
### Item A
We can write $A_n = I + R$ where
$$
R = [r_{ij}]_{i \in \{1..n\}\\j \in \{1..n\}} \text{ where } r_{ij} =
\begin{cases}
-2 & \text{if $j=i+1$} \\
0 & \text{otherwise}
\end{cases}
$$
We can see that:
$$
R^k = \left[r^{(k)}_{ij}\right]_{i \in \{1..n\} \\ j \in \{1..n\}} \text{ where } r_{ij} =
\begin{cases}
(-2)^k & \text{if $j=i+k$} \\
0 & \text{otherwise}
\end{cases}
$$
And we make use of the identity [1]:
$$
(I + R)^{-1} = I + \sum_{k=1}^{n-1} (-1)^k R^{k}
$$
And thus we have:
\begin{align}
A_n^{-1} &= \begin{bmatrix}
1 & 2 & 4 & 8 & 16 & \dots & \\
0 & 1 & 2 & 4 & 8 & \dots & \\
\vdots & \ddots & \ddots & \ddots & \ddots \\
0 & \dots & \dots & \dots & \dots & 1 \\
\end{bmatrix}
\\&= [a^{(-1)}_{ij}]_{i \in \{1..n\}\\ j \in \{1..n\}} \text{ where } a^{(-1)}_{ij} =
\begin{cases}
2^{j-i} & \text{if $j \geq i$} \\
0 & \text{otherwise}
\end{cases}
\end{align}
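As a quick numerical sanity check of this closed form (not part of the derivation itself), we can compare it with `numpy`'s inverse for a small $n$:
```python
n = 6
A = np.eye(n) - 2*np.eye(n, k=1)   # ones on the diagonal, -2 on the superdiagonal
A_inv_closed = np.array([[2.0**(j-i) if j >= i else 0.0 for j in range(n)] for i in range(n)])
print(np.allclose(np.linalg.inv(A), A_inv_closed))   # expected: True
```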
### Item B
The definition of the norm $||A||_\infty$ is:
$$
\sup_x \frac{||Ax||_\infty}{||x||_\infty}
$$
In order to calculate $||A||_\infty$, let $w$ be the vector so that:
$$
\sup_x \frac{||Ax||_\infty}{||x||_\infty} = \frac{||Aw||_\infty}{||w||_\infty} \quad \wedge \quad ||w||_\infty=1
$$
We have that
$$
||Aw||_\infty = \max\left(\max_{i \in \{1..n{-}1\}} |w_i -2 w_{i+1}|\,, |w_{n}|\right)
$$
this is maximized when $w_i=1$ and $w_{i+1}=-1$ for some $i<n$.
So, we can pick the vector:
$$
w = [1,-1,0,\dots,0]
$$
that follows the conditions.
And we have that $\frac{||Aw||_\infty}{||w||_\infty} = ||A||_\infty = 3$.
For $||A^{-1}_n||_\infty$ we can see that the first component of $A^{-1}_n w$ will have the largest absolute value if all the $w_i$ have the same sign (as $||A^{-1}_n w||_\infty$ would be smaller if this doesn't hold), so we pick $w = [1,1,1,\dots,1]$, as it maximizes:
$$
||A^{-1}_n w||_\infty = \sum_{k=0}^{n-1} 2^k w_{k+1}
$$
while ensuring $||w||_\infty=1$.
This results in
$$
||A^{-1}_n w||_\infty = \sum_{k=0}^{n-1} 2^k = 2^n-1 = ||A^{-1}_n||_\infty \,.
$$
Finally $\kappa_\infty(A_n) = ||A_n||_\infty ||A_n^{-1}||_\infty = 3 \cdot (2^{n}-1)$
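As a hedged numerical check (small $n$ only, since $2^n$ grows quickly):
```python
n = 8
A = np.eye(n) - 2*np.eye(n, k=1)
print(np.linalg.cond(A, p=np.inf), 3*(2**n - 1))   # both should give the same value
```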
### Item C
Assuming a general implementation of backward substitution is requested (it would be more efficient to work only with the cells on the extended diagonal).
```python
def problem(n,theta=1e-14):
sol = -1.0+(2*np.random.random(n)-1.0)*theta
matrix_a = scipy.sparse.eye(n,dtype='int',format="lil")
j_s = range(1,n)
i_s = range(0,n-1)
matrix_a[(i_s,j_s)] = -2
return matrix_a,sol
```
```python
def backward_subs(matrix,b):
n = matrix.shape[0]
xs = np.zeros(n)
for i in range(n-1,-1,-1):
col = matrix[i,i+1:].toarray()
xs[i] = b[i] - np.sum(xs[i+1:]*col)
xs[i] /= matrix[i,i]
return xs
```
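For reference, here is a minimal sketch of the specialised solver hinted at above: it uses only the diagonal and the superdiagonal of $A_n$ and therefore runs in $O(n)$ (the function name is ours, not part of the original code):
```python
def backward_subs_bidiagonal(b):
    # Row i of A_n reads x_i - 2*x_{i+1} = b_i, and the last row gives x_{n-1} = b_{n-1}.
    n = len(b)
    xs = np.zeros(n)
    xs[-1] = b[-1]
    for i in range(n-2, -1, -1):
        xs[i] = b[i] + 2*xs[i+1]
    return xs
```
It should produce the same result as `backward_subs(matrix_a, b)` for the matrices generated by `problem`.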
```python
aa,bb = problem(10)
print(aa)
print(np.array(aa[2,3:]))
print(bb)
```
(0, 0) 1
(0, 1) -2
(1, 1) 1
(1, 2) -2
(2, 2) 1
(2, 3) -2
(3, 3) 1
(3, 4) -2
(4, 4) 1
(4, 5) -2
(5, 5) 1
(5, 6) -2
(6, 6) 1
(6, 7) -2
(7, 7) 1
(7, 8) -2
(8, 8) 1
(8, 9) -2
(9, 9) 1
(0, 0) -2
[-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]
```python
xx = backward_subs(aa,bb)
recovered_bb = aa.dot(xx)
print("bb")
print(bb)
print("recovered_bb")
print(recovered_bb)
```
bb
[-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]
recovered_bb
[-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]
We generate problems up to $n=1000$. Larger problems weren't generated because the powers of 2 overflow (although we could make problems of a size large enough that the backward substitution takes 1 min, some of the values in them would be $\text{nan}$).
```python
# Now generate larger problem
SIZES = [10,100,1000]
AAs = {}
BBs = {}
XXs = {}
for siz in SIZES:
print("Backward substitution on size %d:"%siz)
aa,bb = problem(siz)
AAs[siz] = aa
BBs[siz] = bb
%time xx = backward_subs(aa,bb)
XXs[siz] = xx
```
Backward substitution on size 10:
CPU times: user 2.11 ms, sys: 0 ns, total: 2.11 ms
Wall time: 2.12 ms
Backward substitution on size 100:
CPU times: user 9.78 ms, sys: 0 ns, total: 9.78 ms
Wall time: 9.57 ms
Backward substitution on size 1000:
CPU times: user 96 ms, sys: 66 µs, total: 96.1 ms
Wall time: 89.4 ms
```python
for siz in SIZES:
aa = AAs[siz]
bb = BBs[siz]
xx = XXs[siz]
recovered_bb = aa.dot(xx)
print("For n=%d"%siz)
print(" bb")
print(recovered_bb)
print(" Max error: %f"%(np.max(np.abs(recovered_bb-bb))))
```
For n=10
bb
[-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]
Max error: 0.000000
For n=100
bb
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. -2. -1. -1. -1. -1. -1. -1. -1.
-1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1.
-1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1.
-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]
Max error: 1.000000
For n=1000
bb
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. -1. -1. -1. -1. -1. -1. -1.
-1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1.
-1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1.
-1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]
Max error: 1.000000
We can see that the recovered $b$ only has the value $-1$ in the last components, and the earlier ones cannot be reconstructed. It can also be seen that $10^{-14}$ is too small a variation and it is absorbed by the $-1$.
### References
* [1] https://math.stackexchange.com/a/47554
```python
```
| 91b65d94c1d31c8a15e1350b002a42b68ea6539d | 14,935 | ipynb | Jupyter Notebook | t1_questions/item_18.ipynb | autopawn/cc5-works | 63775574c82da85ed0e750a4d6978a071096f6e7 | [
"MIT"
]
| null | null | null | t1_questions/item_18.ipynb | autopawn/cc5-works | 63775574c82da85ed0e750a4d6978a071096f6e7 | [
"MIT"
]
| null | null | null | t1_questions/item_18.ipynb | autopawn/cc5-works | 63775574c82da85ed0e750a4d6978a071096f6e7 | [
"MIT"
]
| null | null | null | 35.390995 | 364 | 0.371744 | true | 6,629 | Qwen/Qwen-72B | 1. YES
2. YES | 0.815232 | 0.851953 | 0.69454 | __label__eng_Latn | 0.352797 | 0.45198 |
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D1-ModelTypes/student/W1D1_Tutorial2.ipynb" target="_parent"></a>
# NMA Model Types Tutorial 2: "How" models
In this tutorial we will explore models that can potentially explain *how* the spiking data we have observed is produced. That is, the models will tell us something about the *mechanism* underlying the physiological phenomenon.
Our objectives:
- Write code to simulate a simple "leaky integrate-and-fire" neuron model
- Make the model more complicated — but also more realistic — by adding more physiologically-inspired details
```python
#@title Video: "How" models
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='yWPQsBud4Cc', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Video available at https://youtube.com/watch?v=yWPQsBud4Cc
## Setup
**Don't forget to execute the hidden cells!**
```python
#@title Imports
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
import ipywidgets as widgets
```
```python
#@title Helper Functions
def histogram(counts, bins, vlines=(), ax=None, ax_args=None, **kwargs):
"""Plot a step histogram given counts over bins."""
if ax is None:
_, ax = plt.subplots()
# duplicate the first element of `counts` to match bin edges
counts = np.insert(counts, 0, counts[0])
ax.fill_between(bins, counts, step="pre", alpha=0.4, **kwargs) # area shading
ax.plot(bins, counts, drawstyle="steps", **kwargs) # lines
for x in vlines:
ax.axvline(x, color='r', linestyle='dotted') # vertical line
if ax_args is None:
ax_args = {}
# heuristically set max y to leave a bit of room
ymin, ymax = ax_args.get('ylim', [None, None])
if ymax is None:
ymax = np.max(counts)
if ax_args.get('yscale', 'linear') == 'log':
ymax *= 1.5
else:
ymax *= 1.1
if ymin is None:
ymin = 0
if ymax == ymin:
ymax = None
ax_args['ylim'] = [ymin, ymax]
ax.set(**ax_args)
ax.autoscale(enable=False, axis='x', tight=True)
def plot_neuron_stats(v, spike_times):
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5))
# membrane voltage trace
ax1.plot(v[0:100])
ax1.set(xlabel='Time', ylabel='Voltage')
# plot spike events
for x in spike_times:
if x >= 100:
break
ax1.axvline(x, color='limegreen')
# ISI distribution
isi = np.diff(spike_times)
n_bins = bins = np.arange(isi.min(), isi.max() + 2) - .5
counts, bins = np.histogram(isi, bins)
vlines = []
if len(isi) > 0:
vlines = [np.mean(isi)]
xmax = max(20, int(bins[-1])+5)
histogram(counts, bins, vlines=vlines, ax=ax2, ax_args={
'xlabel': 'Inter-spike interval',
'ylabel': 'Number of intervals',
'xlim': [0, xmax]
})
```
## The Linear Integrate-and-Fire Neuron
One of the simplest models of spiking neuron behavior is the linear integrate-and-fire model neuron. In this model, the neuron increases its membrane potential $V_m$ over time in response to excitatory input currents $I$ scaled by some factor $\alpha$:
\begin{align}
dV_m = {\alpha}I
\end{align}
Once $V_m$ reaches a threshold value of 1, a spike is emitted, the neuron resets $V_m$ back to 0, and the process continues.
#### Spiking Inputs
We now have a model for the neuron dynamics. Next we need to consider what form the input $I$ will take. How should we represent the presynaptic neuron firing behavior providing the input coming into our model neuron? We learned previously that a good approximation of spike timing is a Poisson random variable, so we can do that here as well
\begin{align}
I \sim Poisson(\lambda)
\end{align}
where $\lambda$ is the average rate of incoming spikes.
### Exercise: Compute $dV_m$
For your first exercise, you will write the code to compute the change in voltage $dV_m$ of the linear integrate-and-fire model neuron. The rest of the code to handle the neuron dynamics are provided for you, so you just need to fill in a definition for `dv` in the `lif_neuron` method below. The value for $\lambda$ needed for the Poisson random variable is named `rate`.
TIP: The [`scipy.stats`](https://docs.scipy.org/doc/scipy/reference/stats.html) package is a great resource for working with and sampling from various probability distributions.
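For instance, a hedged mini-example of drawing Poisson-distributed spike counts (the rate of 10 below is just an illustrative value):
```python
# Illustration only: five Poisson-distributed input counts with mean rate 10.
example_counts = stats.poisson(10).rvs(5)
print(example_counts)  # e.g. [ 9 12 10  8 11] -- the exact values vary from run to run
```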
```python
def lif_neuron(n_steps=1000, alpha=0.01, rate=10):
""" Simulate a linear integrate-and-fire neuron.
Args:
n_steps (int): The number of time steps to simulate the neuron's activity.
alpha (float): The input scaling factor
rate (int): The mean rate of incoming spikes
"""
v = np.zeros(n_steps)
spike_times = []
for i in range(1, n_steps):
#######################################################################
## TODO for students: compute dv, then remove the NotImplementedError #
#######################################################################
# dv = ...
        raise NotImplementedError("Student exercise: compute the change in membrane potential")
v[i] = v[i-1] + dv
if v[i] > 1:
spike_times.append(i)
v[i] = 0
return v, spike_times
# Uncomment these lines after completing the lif_neuron function
# v, spike_times = lif_neuron()
# plot_neuron_stats(v, spike_times)
```
**Example output:**
### Parameter Exploration
Here's an interactive demo that shows how the model behavior changes for different parameter values.
**Remember to enable the demo by running the cell.**
```python
#@title Linear Integrate-and-Fire Model Neuron Explorer
def _lif_neuron(n_steps=1000, alpha=0.01, rate=10):
exc = stats.poisson(rate).rvs(n_steps)
v = np.zeros(n_steps)
spike_times = []
for i in range(1, n_steps):
dv = alpha * exc[i]
v[i] = v[i-1] + dv
if v[i] > 1:
spike_times.append(i)
v[i] = 0
return v, spike_times
@widgets.interact(
n_steps=widgets.FloatLogSlider(1000.0, min=2, max=4),
alpha=widgets.FloatLogSlider(0.01, min=-4, max=-1),
rate=widgets.IntSlider(10, min=1, max=20)
)
def plot_lif_neuron(n_steps=1000, alpha=0.01, rate=10):
v, spike_times = _lif_neuron(int(n_steps), alpha, rate)
plot_neuron_stats(v, spike_times)
```
## Inhibitory signals
Our linear integrate-and-fire neuron from the previous section was indeed able to produce spikes, but the actual spiking behavior did not line up with our expectations of exponentially distributed ISIs. This means we need to refine our model!
In the previous model we only considered excitatory behavior -- the only way the membrane potential could decrease was upon a spike event. We know, however, that there are other factors that can drive $V_m$ down. First is the natural tendency of the neuron to return to some steady state or resting potential. We can update our previous model as follows:
\begin{align}
dV_m = -{\beta}V_m + {\alpha}I
\end{align}
where $V_m$ is the current membrane potential and $\beta$ is some leakage factor. This is a basic form of the popular Leaky Integrate-and-Fire model neuron (for a more detailed discussion of the LIF Neuron, see the Appendix).
We also know that in addition to excitatory presynaptic neurons, we can have inhibitory presynaptic neurons as well. We can model these inhibitory neurons with another Poisson random variable, giving us
\begin{align}
I = I_{exc} - I_{inh} \\
I_{exc} \sim Poisson(\lambda_{exc}) \\
I_{inh} \sim Poisson(\lambda_{inh})
\end{align}
where $\lambda_{exc}$ and $\lambda_{inh}$ are the rates of the excitatory and inhibitory presynaptic neurons, respectively.
### Exercise: Compute $dV_m$ with inhibitory signals
For your second exercise, you will again write the code to compute the change in voltage $dV_m$, though now of the LIF model neuron described above. Like last time, the rest of the code needed to handle the neuron dynamics are provided for you, so you just need to fill in a definition for `dv` below.
```python
def lif_neuron_inh(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):
""" Simulate a simplified leaky integrate-and-fire neuron with both excitatory
and inhibitory inputs.
Args:
n_steps (int): The number of time steps to simulate the neuron's activity.
alpha (float): The input scaling factor
beta (float): The membrane potential leakage factor
exc_rate (int): The mean rate of the incoming excitatory spikes
inh_rate (int): The mean rate of the incoming inhibitory spikes
"""
v = np.zeros(n_steps)
spike_times = []
for i in range(1, n_steps):
############################################################
## Students: compute dv and remove the NotImplementedError #
############################################################
# dv = ...
        raise NotImplementedError("Student exercise: compute the change in membrane potential")
v[i] = v[i-1] + dv
if v[i] > 1:
spike_times.append(i)
v[i] = 0
return v, spike_times
# Uncomment these lines do make the plot once you've completed the function
# v, spike_times = lif_neuron_inh()
# plot_neuron_stats(v, spike_times)
```
```python
v, spike_times = lif_neuron_inh()
plot_neuron_stats(v, spike_times)
```
### Parameter Exploration
Like last time, you can now explore how your LIF model behaves when the various parameters of the system are changed.
```python
#@title LIF Model Neuron with Inhibitory Inputs Explorer
def _lif_neuron_inh(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):
""" Simulate a simplified leaky integrate-and-fire neuron with both excitatory
and inhibitory inputs.
Args:
n_steps (int): The number of time steps to simulate the neuron's activity.
alpha (float): The input scaling factor
beta (float): The membrane potential leakage factor
exc_rate (int): The mean rate of the incoming excitatory spikes
inh_rate (int): The mean rate of the incoming inhibitory spikes
"""
# precompute Poisson samples for speed
exc = stats.poisson(exc_rate).rvs(n_steps)
inh = stats.poisson(inh_rate).rvs(n_steps)
v = np.zeros(n_steps)
spike_times = []
for i in range(1, n_steps):
dv = -beta * v[i-1] + alpha * (exc[i] - inh[i])
v[i] = v[i-1] + dv
if v[i] > 1:
spike_times.append(i)
v[i] = 0
return v, spike_times
@widgets.interact(n_steps=widgets.FloatLogSlider(1000.0, min=2, max=4),
alpha=widgets.FloatLogSlider(0.5, min=-2, max=1),
beta=widgets.FloatLogSlider(0.1, min=-2, max=0),
exc_rate=widgets.IntSlider(10, min=1, max=20),
inh_rate=widgets.IntSlider(10, min=1, max=20))
def plot_lif_neuron(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):
v, spike_times = _lif_neuron_inh(int(n_steps), alpha, beta, exc_rate, inh_rate)
plot_neuron_stats(v, spike_times)
```
```python
#@title Video: Balanced inputs
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='buXEQPp9LKI', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Video available at https://youtube.com/watch?v=buXEQPp9LKI
## Appendix
### Why do neurons spike?
A neuron stores energy in an electric field across its cell membrane, by controlling the distribution of charges (ions) on either side of the membrane. This energy is rapidly discharged to generate a spike when the field potential (or membrane potential) crosses a threshold. The membrane potential may be driven toward or away from this threshold, depending on inputs from other neurons: excitatory or inhibitory, respectively. The membrane potential tends to revert to a resting potential, for example due to the leakage of ions across the membrane, so that reaching the spiking threshold depends not only on the amount of input ever received following the last spike, but also the timing of the inputs.
The storage of energy by maintaining a field potential across an insulating membrane can be modeled by a capacitor. The leakage of charge across the membrane can be modeled by a resistor. This is the basis for the leaky integrate-and-fire neuron model.
### The LIF Model Neuron
The full equation for the LIF neuron is
\begin{align}
C_{m}\frac{dV_m}{dt} = -(V_m - V_{rest})/R_{m} + I
\end{align}
where $C_m$ is the membrane capacitance, $R_M$ is the membrane resistance, $𝑉_{𝑟𝑒𝑠𝑡}$ is the resting potential, and 𝐼 is some input current (from other neurons, an electrode, ...).
In our above examples we set many of these properties to convenient values ($C_m = R_m = dt = 1$, $V_{rest} = 0$) to focus more on the overall behavior, though these too can be manipulated to achieve different dynamics.
| 1dc5b1654a18a30b32a15b473ca84156d11f8d3e | 255,426 | ipynb | Jupyter Notebook | tutorials/W1D1-ModelTypes/student/W1D1_Tutorial2.ipynb | schwartenbeckph/course-content | 7ff8942ba11fb734785a53eefda115f616ff0b88 | [
"CC-BY-4.0"
]
| null | null | null | tutorials/W1D1-ModelTypes/student/W1D1_Tutorial2.ipynb | schwartenbeckph/course-content | 7ff8942ba11fb734785a53eefda115f616ff0b88 | [
"CC-BY-4.0"
]
| null | null | null | tutorials/W1D1-ModelTypes/student/W1D1_Tutorial2.ipynb | schwartenbeckph/course-content | 7ff8942ba11fb734785a53eefda115f616ff0b88 | [
"CC-BY-4.0"
]
| null | null | null | 142.775852 | 48,944 | 0.879116 | true | 3,353 | Qwen/Qwen-72B | 1. YES
2. YES | 0.812867 | 0.822189 | 0.668331 | __label__eng_Latn | 0.970779 | 0.391087 |
```python
import numpy as np
import pandas as pd
from sympy import sieve
# prep
cities = pd.read_csv('../data/raw/cities.csv', index_col=['CityId'])
pnums = list(sieve.primerange(0, cities.shape[0]))
# function
def score_it(path):
path_df = cities.reindex(path).reset_index()
path_df['step'] = np.sqrt((path_df.X - path_df.X.shift())**2 + (path_df.Y - path_df.Y.shift())**2)
path_df['step_adj'] = np.where((path_df.index) % 10 != 0, path_df.step, path_df.step +
path_df.step*0.1*(~path_df.CityId.shift().isin(pnums)))
return path_df.step_adj.sum()
# usage: path is array_like
sub = pd.read_csv('../data/raw/sample_submission.csv')
print(score_it(sub.Path.values))
```
446884407.521
| 5065f33b916829def54dc3b7f20d3bc9f2c630f3 | 1,541 | ipynb | Jupyter Notebook | notebooks/Score.ipynb | alexandrnikitin/kaggle-traveling-santa-2018-prime-paths | 44a537ee3388d52dba5abffedd8f014820c8fd40 | [
"MIT"
]
| null | null | null | notebooks/Score.ipynb | alexandrnikitin/kaggle-traveling-santa-2018-prime-paths | 44a537ee3388d52dba5abffedd8f014820c8fd40 | [
"MIT"
]
| null | null | null | notebooks/Score.ipynb | alexandrnikitin/kaggle-traveling-santa-2018-prime-paths | 44a537ee3388d52dba5abffedd8f014820c8fd40 | [
"MIT"
]
| null | null | null | 25.262295 | 111 | 0.531473 | true | 208 | Qwen/Qwen-72B | 1. YES
2. YES | 0.926304 | 0.754915 | 0.699281 | __label__eng_Latn | 0.220316 | 0.462994 |
```python
%matplotlib inline
```
```python
%run proof_setup
```
```python
import numpy as np
import sympy as sm
def rotate(sinw, cosw, sini, cosi, x2, y2, z2):
Rwinv = sm.Matrix([[cosw, sinw, 0], [-sinw, cosw, 0], [0, 0, 1]])
Riinv = sm.Matrix([[1, 0, 0], [0, cosi, sini], [0, -sini, cosi]])
v2 = sm.Matrix([[x2], [y2], [z2]])
v0 = Rwinv * Riinv * v2
return sm.simplify(v0), sm.simplify(v2)
def get_quadratic_eqs(circular=False, edge=False, printer=None, wcase=False):
if printer is None:
printer = lambda x: x
semimajor, ecc, w, incl, x, y, z, L = sm.symbols("a, e, omega, i, x, y, z, L")
sinw = sm.sin(w)
cosw = sm.cos(w)
sini = sm.sin(incl)
cosi = sm.cos(incl)
if edge:
cosi = 0
sini = 1
y = z * cosi / sini
if wcase:
sinw = 0
cosw = 1
if circular:
ecc = 0
v0, v2 = rotate(sinw, cosw, sini, cosi, x, y, z)
print("x0 =", printer(v0[0]))
print("y0 =", printer(v0[1]))
print("z0 =", printer(v0[2]))
print()
eq = (v0[0] - semimajor*ecc)**2 + v0[1]**2/(1-ecc**2) - semimajor**2
eq1 = sm.poly(eq, x, z)
denom = (ecc**2 - 1)
print("A =", printer(sm.simplify(denom * eq1.coeff_monomial(x**2))))
print("B =", printer(sm.cancel(denom * eq1.coeff_monomial(x*z))))
print("C =", printer(sm.simplify(denom * eq1.coeff_monomial(z**2))))
print("D =", printer(sm.simplify(denom * eq1.coeff_monomial(x))))
print("E =", printer(sm.simplify(denom * eq1.coeff_monomial(z))))
print("F =", printer(sm.simplify(denom * eq1.coeff_monomial(1))))
return (
sm.simplify(denom * eq1.coeff_monomial(x**2)),
sm.simplify(denom * eq1.coeff_monomial(x*z)),
sm.simplify(denom * eq1.coeff_monomial(z**2)),
sm.simplify(denom * eq1.coeff_monomial(x)),
sm.simplify(denom * eq1.coeff_monomial(z)),
sm.simplify(denom * eq1.coeff_monomial(1)),
)
```
```python
get_quadratic_eqs(printer=sm.latex)
print()
print()
get_quadratic_eqs(circular=True, printer=sm.latex);
print()
print()
get_quadratic_eqs(wcase=True, printer=sm.latex);
print()
print()
get_quadratic_eqs(edge=True, printer=sm.latex);
```
x0 = x \cos{\left (\omega \right )} + \frac{z \sin{\left (\omega \right )}}{\sin{\left (i \right )}}
y0 = - x \sin{\left (\omega \right )} + \frac{z \cos{\left (\omega \right )}}{\sin{\left (i \right )}}
z0 = 0
A = e^{2} \cos^{2}{\left (\omega \right )} - 1
B = \frac{2 e^{2} \sin{\left (\omega \right )} \cos{\left (\omega \right )}}{\sin{\left (i \right )}}
C = \frac{e^{2} \sin^{2}{\left (\omega \right )} - 1}{\sin^{2}{\left (i \right )}}
D = 2 a e \left(- e^{2} + 1\right) \cos{\left (\omega \right )}
E = - \frac{2 a e \left(e^{2} - 1\right) \sin{\left (\omega \right )}}{\sin{\left (i \right )}}
F = a^{2} \left(e^{2} - 1\right)^{2}
x0 = x \cos{\left (\omega \right )} + \frac{z \sin{\left (\omega \right )}}{\sin{\left (i \right )}}
y0 = - x \sin{\left (\omega \right )} + \frac{z \cos{\left (\omega \right )}}{\sin{\left (i \right )}}
z0 = 0
A = -1
B = 0
C = - \frac{1}{\sin^{2}{\left (i \right )}}
D = 0
E = 0
F = a^{2}
x0 = x
y0 = \frac{z}{\sin{\left (i \right )}}
z0 = 0
A = e^{2} - 1
B = 0
C = - \frac{1}{\sin^{2}{\left (i \right )}}
D = 2 a e \left(- e^{2} + 1\right)
E = 0
F = a^{2} \left(e^{2} - 1\right)^{2}
x0 = x \cos{\left (\omega \right )} + z \sin{\left (\omega \right )}
y0 = - x \sin{\left (\omega \right )} + z \cos{\left (\omega \right )}
z0 = 0
A = e^{2} \cos^{2}{\left (\omega \right )} - 1
B = 2 e^{2} \sin{\left (\omega \right )} \cos{\left (\omega \right )}
C = e^{2} \sin^{2}{\left (\omega \right )} - 1
D = 2 a e \left(- e^{2} + 1\right) \cos{\left (\omega \right )}
E = 2 a e \left(- e^{2} + 1\right) \sin{\left (\omega \right )}
F = a^{2} \left(e^{2} - 1\right)^{2}
```python
def get_quartic_expr(circular=False, edge=False, printer=None, wcase=False):
if printer is None:
printer = lambda x: x
A, B, C, D, E, F, T, L, x = sm.symbols("A, B, C, D, E, F, T, L, x", real=True)
if edge:
A, B, C, D, E, F = get_quadratic_eqs(edge=True)
p0 = T
p1 = 0
p2 = x**2 - L**2
q0 = C
q1 = B*x + E
q2 = A*x**2 + D*x + F
quartic = sm.Poly((p0*q2 - p2*q0)**2 - (p0*q1 - p1*q0)*(p1*q2 - p2*q1), x)
if circular:
args = {A:-1, B: 0, D:0, E: 0}
elif wcase:
args = {B: 0, E: 0}
quartic = sm.factor(quartic.subs(args))
print(quartic)
return
else:
args = {}
for i in range(5):
print("a_{0} =".format(i), printer(sm.factor(sm.simplify(quartic.coeff_monomial(x**i).subs(args)))))
```
```python
get_quartic_expr(printer=sm.latex)
print()
print()
get_quartic_expr(circular=True, printer=sm.latex)
print()
print()
get_quartic_expr(wcase=True, printer=sm.latex)
```
a_0 = C^{2} L^{4} + 2 C F L^{2} T - E^{2} L^{2} T + F^{2} T^{2}
a_1 = - 2 T \left(B E L^{2} - C D L^{2} - D F T\right)
a_2 = 2 A C L^{2} T + 2 A F T^{2} - B^{2} L^{2} T - 2 C^{2} L^{2} - 2 C F T + D^{2} T^{2} + E^{2} T
a_3 = 2 T \left(A D T + B E - C D\right)
a_4 = A^{2} T^{2} - 2 A C T + B^{2} T + C^{2}
a_0 = \left(C L^{2} + F T\right)^{2}
a_1 = 0
a_2 = - 2 \left(C + T\right) \left(C L^{2} + F T\right)
a_3 = 0
a_4 = \left(C + T\right)^{2}
(A*T*x**2 + C*L**2 - C*x**2 + D*T*x + F*T)**2
```python
```
```python
```
```python
def balance_companion_matrix(companion_matrix):
diag = np.array(np.diag(companion_matrix))
companion_matrix[np.diag_indices_from(companion_matrix)] = 0.0
degree = len(diag)
# gamma <= 1 controls how much a change in the scaling has to
# lower the 1-norm of the companion matrix to be accepted.
#
# gamma = 1 seems to lead to cycles (numerical issues?), so
# we set it slightly lower.
gamma = 0.9
scaling_has_changed = True
while scaling_has_changed:
scaling_has_changed = False
for i in range(degree):
row_norm = np.sum(np.abs(companion_matrix[i]))
col_norm = np.sum(np.abs(companion_matrix[:, i]))
# Decompose row_norm/col_norm into mantissa * 2^exponent,
# where 0.5 <= mantissa < 1. Discard mantissa (return value
# of frexp), as only the exponent is needed.
_, exponent = np.frexp(row_norm / col_norm)
exponent = exponent // 2
if exponent != 0:
scaled_col_norm = np.ldexp(col_norm, exponent)
scaled_row_norm = np.ldexp(row_norm, -exponent)
if scaled_col_norm + scaled_row_norm < gamma * (col_norm + row_norm):
# Accept the new scaling. (Multiplication by powers of 2 should not
# introduce rounding errors (ignoring non-normalized numbers and
# over- or underflow))
scaling_has_changed = True
companion_matrix[i] *= np.ldexp(1.0, -exponent)
companion_matrix[:, i] *= np.ldexp(1.0, exponent)
companion_matrix[np.diag_indices_from(companion_matrix)] = diag
return companion_matrix
def solve_companion_matrix(poly):
poly = np.atleast_1d(poly)
comp = np.eye(len(poly) - 1, k=-1)
comp[:, -1] = -poly[:-1] / poly[-1]
return np.linalg.eigvals(balance_companion_matrix(comp))
def _get_quadratic(a, e, cosw, sinw, cosi, sini):
e2 = e*e
e2mo = e2 - 1
return (
(e2*cosw*cosw - 1),
2*e2*sinw*cosw/sini,
(e2mo - e2*cosw*cosw)/(sini*sini),
-2*a*e*e2mo*cosw,
-2*a*e*e2mo*sinw/sini,
a**2*e2mo*e2mo,
)
def _get_quartic(A, B, C, D, E, F, T, L):
A2 = A*A
B2 = B*B
C2 = C*C
D2 = D*D
E2 = E*E
F2 = F*F
T2 = T*T
L2 = L*L
return (
C2*L2*L2 + 2*C*F*L2*T - E2*L2*T + F2*T2,
-2*T*(B*E*L2 - C*D*L2 - D*F*T),
2*A*C*L2*T + 2*A*F*T2 - B2*L2*T - 2*C2*L2 - 2*C*F*T + D2*T2 + E2*T,
2*T*(A*D*T + B*E - C*D),
A2*T2 - 2*A*C*T + B2*T + C2,
)
def _get_roots_general(a, e, omega, i, L, tol=1e-8):
cosw = np.cos(omega)
sinw = np.sin(omega)
cosi = np.cos(i)
sini = np.sin(i)
f0 = 2 * np.arctan2(cosw, 1 + sinw)
quad = _get_quadratic(a, e, cosw, sinw, cosi, sini)
A, B, C, D, E, F = quad
T = cosi / sini
T *= T
quartic = _get_quartic(A, B, C, D, E, F, T, L)
roots = solve_companion_matrix(quartic)
roots = roots[np.argsort(np.real(roots))]
# Deal with multiplicity
roots[0] = roots[:2][np.argmin(np.abs(np.imag(roots[:2])))]
roots[1] = roots[2:][::-1][np.argmin(np.abs(np.imag(roots[2:])[::-1]))]
roots = roots[:2]
# Only select real roots
roots = np.clip(np.real(roots[np.abs(np.imag(roots)) < tol]), -L, L)
if len(roots) < 2:
return np.empty(0)
angles = []
for x in roots:
b0 = A*x*x + D*x + F
b1 = B*x + E
b2 = C
z1 = -0.5 * b1 / b2
arg = b1*b1 - 4*b0*b2
if arg < 0:
continue
z2 = 0.5 * np.sqrt(arg) / b2
for sgn in [-1, 1]:
z = z1 + sgn * z2
if z > 0:
continue
y = z * cosi / sini
x0 = x*cosw + z*sinw/sini
y0 = -x*sinw + z*cosw/sini
angle = np.arctan2(y0, x0) - np.pi
if angle < -np.pi:
angle += 2*np.pi
angles.append(angle - f0)
angles = np.sort(angles)
# Wrap the roots properly to span the transit
if len(angles) == 2:
if np.all(angles > 0):
angles = np.array([angles[1] - 2*np.pi, angles[0]])
if np.all(angles < 0):
angles = np.array([angles[1], angles[0] + 2*np.pi])
else:
angles = np.array([-np.pi, np.pi])
return angles + f0
def check_roots(a, e, omega, i, L, tol=1e-8):
L /= a
a = 1.0
roots = _get_roots_general(a, e, omega, i, L, tol=tol)
for f in roots:
b2 = a**2*(e**2 - 1)**2*(np.cos(i)**2*(np.cos(omega)*np.sin(f) + np.sin(omega)*np.cos(f))**2 + (np.cos(omega)*np.cos(f) - np.sin(omega)*np.sin(f))**2)/(e*np.cos(f) + 1)**2
print("b2 = ", b2, " L2 = ", L**2)
print(roots)
```
```python
check_roots(10.0, 0.5, -0.15, 0.5*np.pi - 0.01, 1.0)
```
b2 = 0.010014687059509328 L2 = 0.010000000000000002
b2 = 0.010019159137027812 L2 = 0.010000000000000002
[1.58853526 1.83656275]
```python
check_roots(100.0, 0.0, np.pi, 0.5*np.pi, 1.5)
```
b2 = 0.00022499999999999343 L2 = 0.000225
b2 = 0.000225000000000003 L2 = 0.000225
[-1.58579689 -1.55579576]
```python
get_quadratic_eqs()
print()
get_quartic_expr();
```
x0 = x*cos(omega) + z*sin(omega)/sin(i)
y0 = -x*sin(omega) + z*cos(omega)/sin(i)
z0 = 0
A = e**2*cos(omega)**2 - 1
B = 2*e**2*sin(omega)*cos(omega)/sin(i)
C = (e**2*sin(omega)**2 - 1)/sin(i)**2
D = 2*a*e*(-e**2 + 1)*cos(omega)
E = -2*a*e*(e**2 - 1)*sin(omega)/sin(i)
F = a**2*(e**2 - 1)**2
a_0 = C**2*L**4 + 2*C*F*L**2*T - E**2*L**2*T + F**2*T**2
a_1 = -2*T*(B*E*L**2 - C*D*L**2 - D*F*T)
a_2 = 2*A*C*L**2*T + 2*A*F*T**2 - B**2*L**2*T - 2*C**2*L**2 - 2*C*F*T + D**2*T**2 + E**2*T
a_3 = 2*T*(A*D*T + B*E - C*D)
a_4 = A**2*T**2 - 2*A*C*T + B**2*T + C**2
```python
get_quadratic_eqs(edge=True)
print()
print()
get_quadratic_eqs(circular=True, printer=sm.latex, edge=True);
```
x0 = x*cos(omega) + z*sin(omega)
y0 = -x*sin(omega) + z*cos(omega)
z0 = -y
b_0 = (a**2*e**4 - 2*a**2*e**2 + a**2 - 2*a*e**3*x*cos(omega) + 2*a*e*x*cos(omega) + e**2*x**2*cos(omega)**2 - x**2)/(e**2 - 1)
b_1 = (-2*a*e**3*sin(omega) + 2*a*e*sin(omega) + 2*e**2*x*sin(omega)*cos(omega))/(e**2 - 1)
b_2 = (e**2*sin(omega)**2 - 1)/(e**2 - 1)
x0 = x
y0 = z
z0 = - y
b_0 = - a^{2} + x^{2}
b_1 = 0
b_2 = 1
```python
```
| fd748de99c4fc3ae9d6b66167c4cfcee8d0f7929 | 17,807 | ipynb | Jupyter Notebook | paper/proofs/quartic-proof.ipynb | exowanderer/exoplanet | dfd4859525ca574f1936de7b683951c35c292586 | [
"MIT"
]
| 2 | 2021-10-01T12:46:09.000Z | 2022-03-24T10:25:20.000Z | paper/proofs/quartic-proof.ipynb | Junjun1guo/exoplanet | 5df07b16cf7f8770f02fa53598ae3961021cfd0f | [
"MIT"
]
| null | null | null | paper/proofs/quartic-proof.ipynb | Junjun1guo/exoplanet | 5df07b16cf7f8770f02fa53598ae3961021cfd0f | [
"MIT"
]
| null | null | null | 32.854244 | 188 | 0.429719 | true | 4,639 | Qwen/Qwen-72B | 1. YES
2. YES | 0.912436 | 0.787931 | 0.718937 | __label__eng_Latn | 0.15223 | 0.508663 |
# Programming
Letná škola FKS 2018 (FKS Summer School 2018)
Maťo Gažo, Fero Dráček
(& materials borrowed from Matej Badin, Fero Herman, Kubo, Peťo, the FX spring schools, and various corners of the internet)
In this course we will show the basics of programming and learn how to program mathematics and physics.
Such skills are great, and thanks to them you will:
* do your homework more efficiently
* solve seminar and olympiad problems better
* understand the world better (IT is currently the fastest-growing sector of the job market)
A computer is dumb and you have to tell it and explain everything. You can communicate with it on several levels; we will use Python. Python (the name comes from Monty Python's Flying Circus) is a general-purpose programming language: you can build websites with it as well as do serious scientific computing. That means learning it does no harm, and one day it may even earn you a living.
The interface in which we write code is called Jupyter Notebook. It is an environment designed so that you can program literally in the browser and split the code into small pieces. To run a piece of the program, just press Shift+Enter.
# Data types and operators
### Numbers
as expected, this returns three
```python
3
```
3
```python
2+3 # scitanie
```
5
```python
6-2 # odcitanie
```
4
```python
10*2 # nasobenie
```
20
```python
35/5 # delenie
```
7.0
```python
5//3 # celociselne delenie TODO je toto treba?
```
1
```python
7%3 # modulo
```
1
```python
2**3 # umocnovanie
```
8
```python
4 * (2 + 3) # poradie dodrzane
```
20
### Logical expressions
```python
1 == 1 # logicka rovnost
```
True
```python
2 != 3 # logicka nerovnost
```
True
```python
1 < 10
```
True
```python
1 > 10
```
False
```python
2 <= 2
```
True
# Variables
This is a variable.
After pressing Shift+Enter the program in the cell runs and the variable is stored in memory (RAM; everything happens in RAM).
```python
a = 2
```
Now we can work with it like with an ordinary number.
```python
2 * a
```
4
```python
a + a
```
4
```python
a + a*a
```
6
We can also raise it to a power.
```python
a**3
```
8
Let's add a second variable.
```python
b = 5
```
The following computations turn out as expected.
```python
a + b
```
7
```python
a * b
```
10
```python
b**a
```
25
Real numbers can also be written in scientific notation: $2.3\times 10^{-3}$.
```python
d = 2.3e-3
```
# Functions
Let's make a simple function that adds two numbers for us, so that we no longer have to bother with it ourselves:
```python
def scitaj(a, b):
return a + b
```
```python
scitaj(10, 12) # vrati sucet
```
22
The function works on integers as well as real numbers.
Our addition function has __five essential parts__:
1. `def`: this keyword defines the function.
2. The colon at the end of the first line; the definition starts there.
3. The code inside the function is indented by four spaces.
4. The code itself. Anything can happen inside; Python walks through it step by step.
5. `return`: the key part. Whatever follows this keyword is the output of the function.
### Task 1
Write a function `priemer` that takes two numbers (the heights of two boys) and computes their average height.
When you have the task done, report to one of the organisers.
```python
# Tvoje riesenie:
def priemer(prvy, druhy):
return ((prvy+druhy)/2)
priemer(90,20)
```
55.0
# Let's do some physics
At this point we can start using Python as a more sophisticated calculator and compute basic physics problems with it.
A simple example to start with: you are given several physical constants as **variables**.
Imagine that your task is to compute some physical quantity for several given input values. It is very convenient to write a function into which you always plug the initial values.
The given constants:
```python
kb=1.38064852e-23 # Boltzmanova konštanta
G=6.67408e-11 # Gravitačná konštanta
```
## Task 2
Write a function that computes the gravitational force between two bodies for a given distance $r$ and masses $m_1$ and $m_2$.
As a reminder, the formula for the gravitational force is
$F=G \frac{m_1 m_2}{r^2}$
```python
# Tvoje riesenie:
def Sila(m_1, m_2, r):
F=G* m_1*m_2/r**2
return (F)
Sila(10,10,100)
```
6.67408e-13
## Task 3
Write a function that computes the pressure in a vessel of volume $V$ and temperature $T$ containing $N$ particles.
As a reminder, the formula for the pressure is
$p=\frac{N k_B T}{V}$
```python
# tvoje riesenie:
def tlak(N,T,V):
p=N*kb*T/V
return p
tlak(6e23, 270,1)
```
2236.6506024
## Task 4
Write a function that returns the final velocities of two balls after a perfectly elastic collision. The function should take as input arguments the masses $m_1$, $m_2$ and the velocities $u_1$ and $u_2$ of the balls before the collision. The output will be the new velocities $v_1$ and $v_2$.
Hint: using conservation of momentum and energy we arrive at the following expressions for the new velocities.
$v_1=\frac{u_1 (m_1-m_2)+2 m_2u_2}{m_1+m_2}$
$v_2=\frac{u_2 (m_2-m_1)+2 m_1u_1}{m_1+m_2}$
```python
# tvoje riesenie:
def zrazka(m_1,m_2,u_1,u_2):
return ((u_1 *(m_1-m_2)+2*m_2*u_2)/(m_1+m_2),(u_2 *(m_2-m_1)+2* m_1*u_1)/(m_1+m_2))
zrazka(1,1,10,-10)
```
(-10.0, 10.0)
# Lists
So far we have met numbers (integers, reals), strings, and briefly also logical values.
Out of all these elements we can build collections, called `lists` in programming.
To begin, let's see how a list is created. Such a thing is generally called a data structure.
```python
li = [] # prazdny list
```
```python
ve = [4, 2, 3] # list s cislami
```
```python
ve
```
[4, 2, 3]
```python
ve[0] # indexovat zaciname nulou!
```
4
```python
ve[1]
```
2
```python
ve[-1] # vybratie posledneho prvku
```
3
```python
w = [5, 10, 15]
```
What happens if we add lists together? They get joined.
```python
ve + w
```
[4, 2, 3, 5, 10, 15]
Can we multiply them?
```python
ve * ve
```
Bad luck, we cannot. But notice how useful the error message is. It clearly tells us that `list`s cannot be multiplied.
We can do various other useful things with lists. For example, sum their elements:
```python
sum(ve)
```
9
Or find the length:
```python
len(ve)
```
3
Or sort them:
Or append a new element at the end:
```python
ve.append(10)
ve
```
[4, 2, 3, 10]
Or remove one:
### Ranges
A list can also be defined from a range:
```python
range(10)
type(range(10))
```
range
```python
list(range(10))
```
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```python
list(range(3, 9))
```
[3, 4, 5, 6, 7, 8]
## Task 5
Compute:
* the sum of all numbers from 1 to 1000.
Create a list `letnaskola` that contains your 5 favourite integers.
* Append the number 100 to the end of the list.
* Overwrite the first number in the list so that it equals the last one.
* Compute the sum of the first number, the last number, and the length of the list.
```python
# Tvoje riesenie:
zoznam = list(range(1,1001))
print(sum(zoznam))
letnaskola = [1,1995,12,6,42]
print(letnaskola)
letnaskola.append(100)
print(letnaskola)
letnaskola[0] = letnaskola[len(letnaskola)-1]
print(letnaskola)
print(letnaskola[0]+letnaskola[len(letnaskola)-1], len(letnaskola))
```
500500
[1, 1995, 12, 6, 42]
[1, 1995, 12, 6, 42, 100]
[100, 1995, 12, 6, 42, 100]
200 6
# For loops
We can walk through the indices of a list one by one. A for loop is a so-called `iterator`: it iterates over a list.
```python
for i in [3,2,5,6]:
print(i)
```
3
2
5
6
```python
for i in [3,2,5,6]:
print(i**2)
```
9
4
25
36
How to build a for loop successfully? Similarly to functions:
* `for`: this keyword comes first.
* `i`: the iteration variable.
* `in`: comes before the list we iterate over.
* a colon at the end of the first line.
* the code to be repeated is indented by four spaces.
With a for loop we can also add up numbers, e.g. the numbers from 0 to 100:
```python
suma = 0
for i in range(101): # uvedomme si, preco tam je 101 a nie 100
suma = suma + i # skratene sum += i
print(suma)
```
5050
## Finding the value of the golden ratio $\varphi$
A simple exercise to get familiar with a so-called self-consistent problem and the for loop.
The golden ratio can be found as the solution of the equation
$x=1+1/x$
We can look for its solution by iterating repeatedly.
```python
x = 1;
for i in range (0,20):
x = 1+1/x
print (x)
```
2.0
1.5
1.6666666666666665
1.6
1.625
1.6153846153846154
1.619047619047619
1.6176470588235294
1.6181818181818182
1.6179775280898876
1.6180555555555556
1.6180257510729614
1.6180371352785146
1.6180327868852458
1.618034447821682
1.618033813400125
1.6180340557275543
1.6180339631667064
1.6180339985218035
1.618033985017358
## Task 6
Compute the sum of the squares of all odd numbers from 1 to 100 using a for loop.
```python
# Tvoje riesenie:
suma = 0
for i in range(50):
suma = suma + (2*i+1)**2
print(suma)
```
166650
## Task 7
### Double-barrelled tank (FKS 30.2.2.A2)
We don't know where from, but we have a bombastic tank with two barrels pointing in opposite directions – of course in such a way that they don't aim at each other ;-). The tank holds
$N = 42$ shells of mass $m = 20$ kg. The tank together with the shells weighs
$M = 43$ t in total. The tank then starts firing shells alternately from the two barrels at a speed of $v = 1000\ \mathrm{m\,s^{-1}}$ with a firing frequency of $f = 0.2$ Hz. Since the tank is unbraked and well oiled, it starts to move.
How far from its original position does it fire the last shell? How big an error would we make
if we neglected the change of the tank's total mass during the firing?
Hint: http://old.fks.sk/archiv/2014_15/30vzorakyLeto2.pdf , page 10
```python
x=0
m=20
Mtank=[43000]
vtank=[0]
v=-1000
f=0.2
for i in range(43):
x=x+vtank[-1]*1/f
vtank.append(vtank[-1]-m*v/(Mtank[-1]-m))
Mtank.append(Mtank[-1]-m)
v=-v
print(x)
```
48.83697212654822
# Conditions
We will understand them from an example. Change `a` and see what it does.
```python
a = 5
if a == 3:
print("cislo a je rovne trom.")
elif a == 5:
print("cislo a je rovne piatim")
else:
print("cislo a nie je rovne trom ani piatim.")
```
cislo a je rovne piatim
Using a condition we can now print, for example, only the even numbers from a for loop. We identify an even number as one that gives remainder zero after division by two.
The percent sign is used for the remainder after division:
```python
for i in range(10):
if i % 2 == 0:
print(i)
```
0
2
4
6
8
A loop can be stopped if some condition is violated:
```python
for i in range(20):
print(i)
if i>10:
print('Koniec.')
break
```
0
1
2
3
4
5
6
7
8
9
10
11
Koniec.
## Finding the value of Ludolph's number $\pi$
Using Monte Carlo integration we will learn how to compute, for example, $\pi$.
The following commands generate lists of random numbers between zero and one.
```python
import random as rnd
import numpy as np
NOP = 50000
CoordXList = [];
CoordYList = [];
for j in range (NOP):
CoordXList.append(rnd.random())
CoordYList.append(rnd.random())
```
We will use these two lists as the $x$ and $y$ coordinates of points in the plane. Since the random distribution of the points is uniform, the fraction of points lying inside the quarter circle of radius one, out of all points, must be the same as the ratio of the area of the quarter circle to the area of the square.
Hence $$\frac{\frac{1}{4}\pi 1^2}{1^2}\stackrel{!}{=}\frac{N_{in}}{NOP}.$$
The following two cells generate a picture of the distribution of the points and the quarter circle.
```python
CircPhi = np.arange(0,np.pi/2,0.01)
```
```python
import matplotlib.pyplot as plt
f1=plt.figure(figsize=(7,7))
plt.plot(
CoordXList,
CoordYList,
color = "red",
linestyle= "none",
marker = ","
)
plt.plot(np.cos(CircPhi),np.sin(CircPhi))
#plt.axis([0, 1, 0, 1])
#plt.axes().set_aspect('equal', 'datalim')
plt.show(f1)
```
## Task 8
Now it is your task to compute $\pi$. Hint: a point is inside the quarter circle if $x^2+y^2<1.$
```python
#vase riesenie
NumIn = 0
for j in range (NOP):
#if (CoordXList[j] - 0.5)*(CoordXList[j] - 0.5) + (CoordYList[j] - 0.5)*(CoordYList[j] - 0.5) < 0.25:
if CoordXList[j]*CoordXList[j] + CoordYList[j]*CoordYList[j] <= 1:
NumIn = NumIn + 1;
```
```python
NumIn/NOP*4
```
3.14768
# Numerical summation
In physics it is often useful to split a problem into small pieces.
## Task 9
Now it is your task to work out how to compute the gravitational field of a one-dimensional rod at a height $h$ above the centre of the rod.
The rod has mass $M$ and length $L$.
You divide the rod into $N$ small pieces. The mass of one such piece is then
$$dm=\frac{M}{N}$$
The distance of such a point from the centre of the rod is $x$.
The gravitational field of this small piece at the required point is then:
$$\vec{\Delta g}=-G \frac{\Delta m}{r^3}\vec{r}.$$
Split into the $y$ and $x$ components:
$$\Delta g_y=-G \frac{\Delta m}{(x^2+h^2)}\cos(\phi)=-G \frac{\Delta m}{(x^2+h^2)}\frac{h}{\sqrt{x^2+h^2}},$$
and
$$\Delta g_x=-G \frac{\Delta m}{(x^2+h^2)}\sin(\phi)=-G \frac{\Delta m}{(x^2+h^2)}\frac{x}{\sqrt{x^2+h^2}},$$
Your task is to divide up such a rod and sum the contributions of all the small pieces. Note that since we are above the centre of the rod, the $x$ contributions cancel each other out.
If this seems too easy to you, you can write a program that computes the gravitational field above an arbitrary point.
```python
N=1000
M=1000
L=2
h=1
#vase riesenie
g=0
for i in range(-int(N/2),int(N/2)):
g=g+G*M/N*(i/N)/((i/N)**2+h**2)**(3/2)
g
```
-2.3877914507634306e-11
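The sum above only confirms numerically that the horizontal ($x$) contributions cancel. A possible sketch of the vertical component, which is what the task actually asks for, is shown below (the discretisation of the rod is one arbitrary choice; it reuses $G$, $M$, $L$, $h$, $N$ from the cell above):
```python
# Sum Delta g_y = G*dm*h / (x^2 + h^2)^(3/2) over the rod
g_y = 0.0
for i in range(N):
    x = (i + 0.5)*L/N - L/2          # centre of the i-th piece of the rod
    g_y += G*(M/N)*h/((x**2 + h**2)**1.5)
print(g_y)   # roughly 4.7e-8 for M=1000, L=2, h=1; the analytic value is G*(M/L)*L/(h*(h**2+(L/2)**2)**0.5)
```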
# The Earth orbiting the Sun
We all (hopefully!) know the physics.
* gravitational force:
$$ \mathbf F(\mathbf r) = -\frac{G m M}{r^3} \mathbf r $$
### Euler algorithm (bad)
$$\begin{align}
a(t) &= F(t)/m \\
v(t+dt) &= v(t) + a(t) dt \\
x(t+dt) &= x(t) + v(t) dt \\
\end{align}$$
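For contrast, a minimal sketch of a single Euler update (our illustration only; the simulation below uses the Verlet scheme instead, which behaves much better over long orbital integrations):
```python
# One explicit Euler step -- simple, but it drifts in energy over many orbits.
def euler_step(x, v, a, dt):
    v_new = v + a*dt
    x_new = x + v*dt
    return x_new, v_new
```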
### Verlet algorithm (good)
$$ x(t+dt) = 2 x(t) - x(t-dt) + a(t) dt^2 $$
```python
from numpy.linalg import norm
G = 6.67e-11
Ms = 2e30
Mz = 6e24
dt = 86400.0
N = int(365*86400.0/dt)
#print(N)
R0 = 1.5e11
r_list = np.zeros((N, 2))
r_list[0] = [R0, 0.0] # mozno miesat listy s ndarray
v0 = 29.7e3
v_list = np.zeros((N, 2))
v_list[0] = [0.0, v0]
# sila medzi planetami
def force(A, r):
return -A / norm(r)**3 * r
# Verletova integracia
def verlet_step(r_n, r_nm1, a, dt): # r_nm1 -- r n minus 1
return 2*r_n - r_nm1 + a*dt**2
# prvy krok je specialny
a = force(G*Ms, r_list[0])
r_list[1] = r_list[0] + v_list[0]*dt + a*dt**2/2
# riesenie pohybovych rovnic
for i in range(2, N):
a = force(G*Ms, r_list[i-1])
r_list[i] = verlet_step(r_list[i-1], r_list[i-2], a, dt)
plt.plot(r_list[:, 0], r_list[:, 1])
plt.xlim([-2e11, 2e11])
plt.ylim([-2e11, 2e11])
plt.xlabel("$x$", fontsize=20)
plt.ylabel("$y$", fontsize=20)
plt.gca().set_aspect('equal', adjustable='box')
#plt.axis("equal")
plt.show()
```
## Let's add the Moon
```python
Mm = 7.3e22
R0m = R0 + 384e6
v0m = v0 + 1e3
rm_list = np.zeros((N, 2))
rm_list[0] = [R0m, 0.0]
vm_list = np.zeros((N, 2))
vm_list[0] = [0.0, v0m]
# prvy Verletov krok
am = force(G*Ms, rm_list[0]) + force(G*Mz, rm_list[0] - r_list[0])
rm_list[1] = rm_list[0] + vm_list[0]*dt + am*dt**2/2
# riesenie pohybovych rovnic
for i in range(2, N):
a = force(G*Ms, r_list[i-1]) - force(G*Mm, rm_list[i-1]-r_list[i-1])
am = force(G*Ms, rm_list[i-1]) + force(G*Mz, rm_list[i-1]-r_list[i-1])
r_list[i] = verlet_step(r_list[i-1], r_list[i-2], a, dt)
rm_list[i] = verlet_step(rm_list[i-1], rm_list[i-2], am, dt)
plt.plot(r_list[:, 0], r_list[:, 1])
plt.plot(rm_list[:, 0], rm_list[:, 1])
plt.xlabel("$x$", fontsize=20)
plt.ylabel("$y$", fontsize=20)
plt.gca().set_aspect('equal', adjustable='box')
plt.xlim([-2e11, 2e11])
plt.ylim([-2e11, 2e11])
plt.show() # mesiac moc nevidno, ale vieme, ze tam je
```
## A task for you: add Mars :)
Add Mars!
## Mathematical pendulum with drag
Simulate a mathematical pendulum with drag coefficient $\gamma$,
$$ \ddot \theta = -\frac g l \sin\theta -\gamma \theta^2,$$
using the `odeint` method (a sketch is given below).
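A minimal sketch of how this could look (an addition; the drag term is read here as acting on the angular velocity and opposing the motion, and the parameter values are assumed):
```python
# Damped pendulum integrated with odeint; state y = [theta, dtheta/dt]
from scipy.integrate import odeint

def pendulum(y, t, g, l, gamma):
    theta, omega = y
    # quadratic drag taken as -gamma*omega*|omega| so it always opposes the motion
    return [omega, -g/l*np.sin(theta) - gamma*omega*np.abs(omega)]

t_pend = np.linspace(0, 10, 500)
sol = odeint(pendulum, [0.5, 0.0], t_pend, args=(9.81, 1.0, 0.1))
```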
Or the fall of a body in a resistive medium:
$$ a = -g - kv^2.$$
```python
from scipy.integrate import odeint
def F(y, t, g, k):
return [y[1], g -k*y[1]**2]
N = 101
k = 1.0
g = 10.0
t = np.linspace(0, 1, N)
y0 = [0.0, 0.0]
y = odeint(F, y0, t, args=(g, k))
plt.plot(t, y[:, 1])
plt.xlabel("$t$", fontsize=20)
plt.ylabel("$v(t)$", fontsize=20)
plt.show()
```
## Harmonic oscillator with the Leapfrog method (a modification of the Verlet algorithm)
```python
N = 10000
t = np.linspace(0, 100, N)
dt = t[1] - t[0]
# Functions
def integrate(F, x0, v0, gamma):
    x = np.zeros(N)
    v = np.zeros(N)
    E = np.zeros(N)
    # Initial conditions
    x[0] = x0
    v[0] = v0
    # Integrate the equations with the Leapfrog method (see wiki)
    fac1 = 1.0 - 0.5*gamma*dt
    fac2 = 1.0/(1.0 + 0.5*gamma*dt)
    for i in range(N-1):
        v[i + 1] = fac1*fac2*v[i] - fac2*dt*x[i] + fac2*dt*F[i]
        x[i + 1] = x[i] + dt*v[i + 1]
        E[i] += 0.5*(x[i]**2 + ((v[i] + v[i+1])/2.0)**2)
    E[-1] = 0.5*(x[-1]**2 + v[-1]**2)
    # Return the solution
    return x, v, E
```
```python
# Let's look at three different initial conditions
F = np.zeros(N)
x1,v1,E1 = integrate(F,0.0,1.0,0.0)   # x0 = 0.0, v0 = 1.0, gamma = 0.0
x2,v2,E2 = integrate(F,0.0,1.0,0.05)  # x0 = 0.0, v0 = 1.0, gamma = 0.05
x3,v3,E3 = integrate(F,0.0,1.0,0.4)   # x0 = 0.0, v0 = 1.0, gamma = 0.4
# Plot the results
plt.rcParams["axes.grid"] = True
plt.rcParams['font.size'] = 14
plt.rcParams['axes.labelsize'] = 18
plt.figure()
plt.subplot(211)
plt.plot(t,x1)
plt.plot(t,x2)
plt.plot(t,x3)
plt.ylabel("x(t)")
plt.subplot(212)
plt.plot(t,E1,label=r"$\gamma = 0.0$")
plt.plot(t,E2,label=r"$\gamma = 0.05$")
plt.plot(t,E3,label=r"$\gamma = 0.4$")
plt.ylim(0,0.55)
plt.ylabel("E(t)")
plt.xlabel("Time")
plt.legend(loc="center right")
plt.tight_layout()
```
And what if the oscillator is also driven?
```python
def force(f0,t,w,T):
    return f0*np.cos(w*t)*np.exp(-t**2/T**2)
F1 = zeros(N)
F2 = zeros(N)
F3 = zeros(N)
for i in range(N-1):
F1[i] = force(1.0,t[i] - 20.0,1.0,10.0)
F2[i] = force(1.0,t[i] - 20.0,0.9,10.0)
F3[i] = force(1.0,t[i] - 20.0,0.8,10.0)
```
```python
x1,v1,E1 = integrate(F1,0.0,0.0,0.0)
x2,v2,E2 = integrate(F1,0.0,0.0,0.01)
x3,v3,E3 = integrate(F1,0.0,0.0,0.1)
plt.figure()
plt.subplot(211)
plt.plot(t,x1)
plt.plot(t,x2)
plt.plot(t,x3)
plt.ylabel("x(t)")
plt.subplot(212)
plt.plot(t,E1,label=r"$\gamma = 0$")
plt.plot(t,E2,label=r"$\gamma = 0.01$")
plt.plot(t,E3,label=r"$\gamma = 0.1$")
plt.ylabel("E(t)")
plt.xlabel("Time")
plt.rcParams['legend.fontsize'] = 14.0
plt.legend(loc="upper left")
plt.show()
```
```python
```
| 284cf508cce5828af010d7a7dbe2dcd99eb0c76a | 125,526 | ipynb | Jupyter Notebook | Programko_vzor.ipynb | matoga/LetnaSkolaFKS_notebooks | 26faa2d30ee942e18246fe466d9bf42f16cc1433 | [
"MIT"
]
| null | null | null | Programko_vzor.ipynb | matoga/LetnaSkolaFKS_notebooks | 26faa2d30ee942e18246fe466d9bf42f16cc1433 | [
"MIT"
]
| null | null | null | Programko_vzor.ipynb | matoga/LetnaSkolaFKS_notebooks | 26faa2d30ee942e18246fe466d9bf42f16cc1433 | [
"MIT"
]
| null | null | null | 58.877111 | 73,968 | 0.775003 | true | 8,350 | Qwen/Qwen-72B | 1. YES
2. YES | 0.899121 | 0.91611 | 0.823694 | __label__slk_Latn | 0.992821 | 0.752049 |
# Optimal portfolio selection II
So far we have established that:
- The CAL (capital allocation line) describes the available risk-return combinations between a risk-free asset and a risky asset.
- Its slope equals the Sharpe ratio of the risky asset.
- The optimal capital allocation for any investor is the point of tangency between the investor's indifference curve and the CAL.
For all of the above we assumed that we already had the optimal (risky) portfolio.
In the last class we learned how to find this optimal portfolio when the set of risky assets consists of only two assets:
$$w_{1,EMV}=\frac{(E[r_1]-r_f)\sigma_2^2-(E[r_2]-r_f)\sigma_{12}}{(E[r_2]-r_f)\sigma_1^2+(E[r_1]-r_f)\sigma_2^2-((E[r_1]-r_f)+(E[r_2]-r_f))\sigma_{12}}.$$
- However, the complexity of the problem grows considerably with the number of variables, and the analytical solution stops being viable once we recall that a well-diversified portfolio holds roughly 50-60 assets.
- In those cases the problem is solved with numerical routines that carry out the optimization for us, since they are a viable solution that scales to more variables.
**Objectives:**
- What is the optimal portfolio of risky assets when we have more than two assets?
- How do we construct the minimum-variance frontier when we have more than two assets?
*Reference:*
- Notes from the course "Portfolio Selection and Risk Management", Rice University, available on Coursera.
___
## 1. Maximizing the Sharpe ratio
### What happens if we have more than two risky assets?
It is actually very similar to what we had with two assets.
- With two assets, constructing the minimum-variance frontier is trivial: just take all possible combinations.
- With more than two assets, recall the definition: the minimum-variance frontier is the locus of the portfolios that provide the minimum risk for a given level of return.
<font color=blue> See the board.</font>
Analytically:
- $n$ assets,
- characterized by $(\sigma_i,E[r_i])$,
- each with weight $w_i$, where $i=1,2,\dots,n$.
Then we look for the weights such that
\begin{align}
\min_{w_1,\dots,w_n} & \quad \sum_{i=1}^{n}w_i^2\sigma_i^2+\sum_{i=1}^{n}\sum_{j=1,j\neq i}^{n}w_iw_j\sigma_{ij}\\
\text{s.a.} & \quad \sum_{i=1}^{n}w_i=1, w_i\geq0\\
& \quad \sum_{i=1}^{n}w_iE[r_i]=\bar{\mu},
\end{align}
where $\bar{\mu}$ is a target level of return.
**Obviously, we would have to solve this problem for many target return levels.**
- <font color=blue> Explain the relation to the graph.</font>
- <font color=green> Recall class 10.</font>
The above can be written in vector form as:
\begin{align}
\min_{\boldsymbol{w}} & \quad \boldsymbol{w}^T\Sigma\boldsymbol{w}\\
\text{s.a.} & \quad \boldsymbol{1}^T\boldsymbol{w}=1, \boldsymbol{w}\geq0\\
& \quad E[\boldsymbol{r}^T]\boldsymbol{w}=\bar{\mu},
\end{align}
where:
- $\boldsymbol{w}=\left[w_1,\dots,w_n\right]^T$ is the vector of weights,
- $\boldsymbol{1}=\left[1,\dots,1\right]^T$ is a vector of ones,
- $E[\boldsymbol{r}]=\left[E[r_1],\dots,E[r_n]\right]^T$ is the vector of expected returns, and
- $\Sigma=\left[\begin{array}{cccc}\sigma_{1}^2 & \sigma_{12} & \dots & \sigma_{1n} \\
\sigma_{21} & \sigma_{2}^2 & \dots & \sigma_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
\sigma_{n1} & \sigma_{n2} & \dots & \sigma_{n}^2\end{array}\right]$ is the variance-covariance matrix.
**This last form is the one we normally use when programming, because it is efficient and scales to problems with N variables.**
### So, for how many target return levels would we have to solve the problem above in order to plot the minimum-variance frontier?
- Note that the problem can become very heavy as we increase the number of assets in the portfolio...
- Quite a complex task.
### It turns out that, in fact, we only need to know two portfolios that lie on the *minimum-variance frontier*.
- If we manage to find two portfolios on the frontier, we can then take all possible combinations of those two portfolios to trace out the minimum-variance frontier.
- See the two-asset case.
### Which portfolios should we use?
So far we have studied in depth how to find two very important portfolios that in fact lie on the minimum-variance frontier:
1. The EMV (tangency) portfolio: maximum Sharpe ratio.
2. The minimum-variance portfolio: essentially the same problem as above, without the target-return constraint.
Then take all possible combinations of these two portfolios using the two-asset formulas for means and variances (written out below):
- w: weight of the EMV portfolio,
- 1-w: weight of the minimum-variance portfolio.
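Explicitly, for a combination with weight $w$ in the EMV portfolio and $1-w$ in the minimum-variance portfolio,
\begin{align}
E[r_P] &= w\,E[r_{EMV}] + (1-w)\,E[r_{MV}],\\
\sigma_P^2 &= w^2\sigma_{EMV}^2 + (1-w)^2\sigma_{MV}^2 + 2w(1-w)\sigma_{EMV,MV},
\end{align}
which is exactly what the frontier computation below implements.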
## 2. Illustrative example
We return to the example of the stock markets of the $G5$ countries: the US, the UK, France, Germany, and Japan (column labels are kept as in the original data).
```python
# Importamos pandas y numpy
import pandas as pd
import numpy as np
```
```python
# Resumen en base anual de rendimientos esperados y volatilidades
annual_ret_summ = pd.DataFrame(columns=['EU', 'RU', 'Francia', 'Alemania', 'Japon'], index=['Media', 'Volatilidad'])
annual_ret_summ.loc['Media'] = np.array([0.1355, 0.1589, 0.1519, 0.1435, 0.1497])
annual_ret_summ.loc['Volatilidad'] = np.array([0.1535, 0.2430, 0.2324, 0.2038, 0.2298])
annual_ret_summ.round(4)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>EU</th>
<th>RU</th>
<th>Francia</th>
<th>Alemania</th>
<th>Japon</th>
</tr>
</thead>
<tbody>
<tr>
<th>Media</th>
<td>0.1355</td>
<td>0.1589</td>
<td>0.1519</td>
<td>0.1435</td>
<td>0.1497</td>
</tr>
<tr>
<th>Volatilidad</th>
<td>0.1535</td>
<td>0.243</td>
<td>0.2324</td>
<td>0.2038</td>
<td>0.2298</td>
</tr>
</tbody>
</table>
</div>
```python
# Matriz de correlación
corr = pd.DataFrame(data= np.array([[1.0000, 0.5003, 0.4398, 0.3681, 0.2663],
[0.5003, 1.0000, 0.5420, 0.4265, 0.3581],
[0.4398, 0.5420, 1.0000, 0.6032, 0.3923],
[0.3681, 0.4265, 0.6032, 1.0000, 0.3663],
[0.2663, 0.3581, 0.3923, 0.3663, 1.0000]]),
columns=annual_ret_summ.columns, index=annual_ret_summ.columns)
corr.round(4)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>EU</th>
<th>RU</th>
<th>Francia</th>
<th>Alemania</th>
<th>Japon</th>
</tr>
</thead>
<tbody>
<tr>
<th>EU</th>
<td>1.0000</td>
<td>0.5003</td>
<td>0.4398</td>
<td>0.3681</td>
<td>0.2663</td>
</tr>
<tr>
<th>RU</th>
<td>0.5003</td>
<td>1.0000</td>
<td>0.5420</td>
<td>0.4265</td>
<td>0.3581</td>
</tr>
<tr>
<th>Francia</th>
<td>0.4398</td>
<td>0.5420</td>
<td>1.0000</td>
<td>0.6032</td>
<td>0.3923</td>
</tr>
<tr>
<th>Alemania</th>
<td>0.3681</td>
<td>0.4265</td>
<td>0.6032</td>
<td>1.0000</td>
<td>0.3663</td>
</tr>
<tr>
<th>Japon</th>
<td>0.2663</td>
<td>0.3581</td>
<td>0.3923</td>
<td>0.3663</td>
<td>1.0000</td>
</tr>
</tbody>
</table>
</div>
```python
# Tasa libre de riesgo
rf = 0.05
```
This time we will assume that all the stock markets and the risk-free asset are available to us.
#### 1. Construct the minimum-variance frontier
##### 1.1. Find the minimum-variance portfolio
```python
# Importamos funcion minimize del modulo optimize de scipy
from scipy.optimize import minimize
```
```python
np.diag(np.array([1, 2, 3, 4]))
```
array([[1, 0, 0, 0],
[0, 2, 0, 0],
[0, 0, 3, 0],
[0, 0, 0, 4]])
```python
## Construcción de parámetros
# 1. Sigma: matriz de varianza-covarianza Sigma = S.dot(corr).dot(S)
S = np.diag(annual_ret_summ.loc['Volatilidad'].values)
Sigma = S.dot(corr).dot(S)
# 2. Eind: rendimientos esperados activos individuales
Eind = annual_ret_summ.loc['Media'].values
```
```python
# Función objetivo
def var(w, Sigma):
return w.T.dot(Sigma).dot(w)
```
```python
# Número de activos
N = len(Eind)
# Dato inicial
w0 = np.ones(N)/N
# Cotas de las variables
bnds = ((0, 1), ) * N
# Restricciones
cons = {'type': 'eq', 'fun': lambda w: w.sum() - 1}
```
```python
# Portafolio de mínima varianza
minvar = minimize(fun=var,
x0=w0,
args=(Sigma,),
bounds=bnds,
constraints=cons)
minvar
```
fun: 0.01861776391061502
jac: array([0.03718246, 0.03881475, 0.03859101, 0.03755156, 0.0370423 ])
message: 'Optimization terminated successfully'
nfev: 42
nit: 7
njev: 7
status: 0
success: True
x: array([6.17797049e-01, 3.46944695e-18, 0.00000000e+00, 2.09394358e-01,
1.72808594e-01])
```python
# Pesos, rendimiento y riesgo del portafolio de mínima varianza
w_minvar = minvar.x
E_minvar = Eind.T.dot(w_minvar)
s_minvar = var(w_minvar, Sigma)**0.5
RS_minvar = (E_minvar - rf) / s_minvar
w_minvar, E_minvar, s_minvar, RS_minvar
```
(array([6.17797049e-01, 3.46944695e-18, 0.00000000e+00, 2.09394358e-01,
1.72808594e-01]),
0.13962903688859557,
0.13644692708381168,
0.6568783834431207)
##### 1.2. Find the EMV (tangency) portfolio
```python
# Función objetivo
def menos_RS(w, Eind, rf, Sigma):
E_port = Eind.T.dot(w)
s_port = var(w, Sigma)**0.5
RS = (E_port - rf) / s_port
return - RS
```
```python
# Número de activos
N = len(Eind)
# Dato inicial
w0 = np.ones(N)/N
# Cotas de las variables
bnds = ((0, 1), ) * N
# Restricciones
cons = {'type': 'eq', 'fun': lambda w: w.sum() - 1}
```
```python
# Portafolio EMV
emv = minimize(fun=menos_RS,
x0=w0,
args=(Eind, rf, Sigma),
bounds=bnds,
constraints=cons)
emv
```
fun: -0.6644372965632436
jac: array([-0.3608895 , -0.36076408, -0.36036385, -0.36108153, -0.36062376])
message: 'Optimization terminated successfully'
nfev: 30
nit: 5
njev: 5
status: 0
success: True
x: array([0.50714174, 0.07470888, 0.02471533, 0.18943972, 0.20399434])
```python
# Pesos, rendimiento y riesgo del portafolio EMV
w_emv = emv.x
E_emv = Eind.T.dot(w_emv)
s_emv = var(w_emv, Sigma)**0.5
RS_emv = (E_emv - rf) / s_emv
w_emv, E_emv, s_emv, RS_emv
```
(array([0.50714174, 0.07470888, 0.02471533, 0.18943972, 0.20399434]),
0.1420657564717856,
0.13856199365687238,
0.6644372965632436)
```python
w_minvar, E_minvar, s_minvar, RS_minvar
```
(array([6.17797049e-01, 3.46944695e-18, 0.00000000e+00, 2.09394358e-01,
1.72808594e-01]),
0.13962903688859557,
0.13644692708381168,
0.6568783834431207)
```python
annual_ret_summ.columns
```
Index(['EU', 'RU', 'Francia', 'Alemania', 'Japon'], dtype='object')
##### 1.3. Construct the minimum-variance frontier
We also need the covariance (or correlation) between these two portfolios:
```python
# Covarianza entre los portafolios
cov_emv_minvar = w_emv.T.dot(Sigma).dot(w_minvar)
cov_emv_minvar
```
0.018690275386034134
```python
# Correlación entre los portafolios
corr_emv_minvar = cov_emv_minvar / (s_emv * s_minvar)
corr_emv_minvar
```
0.9885708894197612
```python
# Vector de w
w_p = np.linspace(0, 5)
```
```python
# DataFrame de portafolios:
# 1. Índice: i
# 2. Columnas 1-2: w, 1-w
# 3. Columnas 3-4: E[r], sigma
# 4. Columna 5: Sharpe ratio
frontera = pd.DataFrame(data={'Media': w_p * E_emv + (1 - w_p) * E_minvar,
'Vol': ((w_p * s_emv)**2 + ((1 - w_p) * s_minvar)**2 + 2 * w_p * (1 - w_p) * cov_emv_minvar)**0.5})
frontera['RS'] = (frontera['Media'] - rf) /frontera['Vol']
frontera.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Media</th>
<th>Vol</th>
<th>RS</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.139629</td>
<td>0.136447</td>
<td>0.656878</td>
</tr>
<tr>
<th>1</th>
<td>0.139878</td>
<td>0.136518</td>
<td>0.658359</td>
</tr>
<tr>
<th>2</th>
<td>0.140126</td>
<td>0.136622</td>
<td>0.659677</td>
</tr>
<tr>
<th>3</th>
<td>0.140375</td>
<td>0.136759</td>
<td>0.660833</td>
</tr>
<tr>
<th>4</th>
<td>0.140624</td>
<td>0.136930</td>
<td>0.661827</td>
</tr>
</tbody>
</table>
</div>
```python
# Importar librerías de gráficos
from matplotlib import pyplot as plt
%matplotlib inline
```
```python
# Gráfica de dispersión de puntos coloreando
# de acuerdo a SR, los activos individuales
# y los portafolios hallados
plt.figure(figsize=(10, 6))
# Frontera
plt.scatter(frontera['Vol'], frontera['Media'], c = frontera['RS'], cmap='RdYlBu', label = 'Frontera de minima varianza')
plt.colorbar()
# Activos ind
for activo in annual_ret_summ.columns:
plt.plot(annual_ret_summ.loc['Volatilidad', activo],
annual_ret_summ.loc['Media', activo],
'o',
ms=5,
label = activo)
# Port. óptimos
plt.plot(s_minvar, E_minvar, '*g', ms=10, label='Portafolio de mínima varianza')
plt.plot(s_emv, E_emv, '*r', ms=10, label='Portafolio eficiente en media varianza')
plt.xlabel('Volatilidad $\sigma$')
plt.ylabel('Rendimiento esperado $E[r]$')
plt.grid()
plt.legend(loc='best')
```
**From the above, all that remains is to construct the CAL and choose the capital allocation according to the investor's preferences (risk aversion); a rough sketch follows.**
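A minimal sketch of that last step (an addition, not in the original notebook): the CAL runs from the risk-free rate through the EMV portfolio, with slope equal to its Sharpe ratio.
```python
# CAL: E[r] = rf + RS_emv * sigma, using rf, RS_emv, s_emv, E_emv and frontera from above
sigma_cal = np.linspace(0, 0.2, 50)
E_cal = rf + RS_emv * sigma_cal

plt.figure(figsize=(8, 5))
plt.plot(frontera['Vol'], frontera['Media'], label='Minimum-variance frontier')
plt.plot(sigma_cal, E_cal, '--', label='CAL')
plt.plot(s_emv, E_emv, '*r', ms=10, label='EMV portfolio')
plt.xlabel('Volatility $\sigma$')
plt.ylabel('Expected return $E[r]$')
plt.legend(loc='best')
plt.grid()
```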
___
## 3. Final remarks
### 3.1. Additional constraints
Investors may face additional constraints:
1. Restrictions on short positions.
2. They may require a minimum return.
3. Socially responsible investing: they avoid investments in businesses or countries considered ethically or politically undesirable.
All of the above can be included as constraints in the optimization problem, possibly at the cost of a lower Sharpe ratio.
### 3.2. Criticisms of mean-variance optimization
1. Only means and variances matter: recall that the variance underestimates risk in some cases.
2. Mean-variance preferences treat gains and losses symmetrically: the dissatisfaction from a loss is larger than the satisfaction from an equal gain (loss aversion).
3. Risk aversion is assumed constant: the attitude toward risk can change, for example with the state of the economy.
4. Short horizon (a single period).
5. Garbage in - garbage out: mean-variance optimization is extremely sensitive to its inputs, i.e., the estimates of expected returns and variances.
___
# Course announcements
## 1. Quiz next class (classes 12, 13, and 14).
## 2. Check the Tarea 6 (Homework 6) file. Friday, October 23.
## 3. [Interesting note](http://yetanothermathprogrammingconsultant.blogspot.com/2016/08/portfolio-optimization-maximize-sharpe.html)
## 4. Tuesday, November 3 and Friday, November 6: NO CLASS!
## 5. Review class for module 3 - Thursday, October 22.
## 7. Review classes for modules 3 and 4 - Thursday, November 12.
## 8. Final exam: Friday, November 13.
<footer id="attribution" style="float:right; color:#808080; background:#fff;">
Created with Jupyter by Esteban Jiménez Rodríguez.
</footer>
| 5b8216981dd8d44ebb7af44ec4569c152ca027bc | 67,338 | ipynb | Jupyter Notebook | Modulo3/Clase14_SeleccionOptimaPortII.ipynb | duarteandres/porinvo2020 | 93c1d90653382b29a5cf1e5d60b591d8400013b2 | [
"MIT"
]
| null | null | null | Modulo3/Clase14_SeleccionOptimaPortII.ipynb | duarteandres/porinvo2020 | 93c1d90653382b29a5cf1e5d60b591d8400013b2 | [
"MIT"
]
| null | null | null | Modulo3/Clase14_SeleccionOptimaPortII.ipynb | duarteandres/porinvo2020 | 93c1d90653382b29a5cf1e5d60b591d8400013b2 | [
"MIT"
]
| null | null | null | 69.780311 | 38,056 | 0.762526 | true | 5,713 | Qwen/Qwen-72B | 1. YES
2. YES | 0.661923 | 0.865224 | 0.572712 | __label__spa_Latn | 0.812412 | 0.168931 |
```python
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "last_expr"
import numpy as np
from matplotlib import pyplot as plt
```
# Rate of change
For a function ${y = f(x)}$ the rate of change is defined as:
$$ roc = \frac{\Delta{y}}{\Delta{x}} $$
Substituting two points on the curve of the function, we can compute $roc$:
\begin{equation}roc = \frac{f(x_{2}) - f(x_{1})}{x_{2} - x_{1}} \end{equation}
For example ${f(x) = x^{2} + x}$
```python
def f(x):
return (x)**2 + x
x = np.array(range(0, 11))
y = np.array([0,10])
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.plot(x, f(x), color='g')
plt.plot(y, f(y), color='m')
plt.show()
```
# Limits
For example:
\begin{equation}\lim_{x \to 5} f(x)\end{equation}
```python
%matplotlib widget
x = [*range(0,5), *np.arange(4.25, 6, 0.25), *range(6, 11)]
y = [f(i) for i in x]
plt.xlabel('x')
plt.ylabel('f(x)')
plt.grid()
plt.plot(x,y, color='lightgrey', marker='o', markeredgecolor='green', markerfacecolor='green')
plt.plot(5, f(5), color='red', marker='o', markersize=10)
plt.plot(5.25, f(5.25), color='blue', marker='<', markersize=10)
plt.plot(4.75, f(4.75), color='orange', marker='>', markersize=10)
plt.show()
```
## Continuity
Plot the function ${g(x) = -(\frac{12}{2x})^{2},\;\; x \ne 0}$
```python
%matplotlib widget
def g(x):
if x != 0:
return -(12/(2*x))**2
x = range(-20, 21)
y = [g(a) for a in x]
plt.xlabel('x')
plt.ylabel('g(x)')
plt.grid()
plt.plot(x,y, color='g')
xy = (0,g(1))
plt.annotate('O',xy, xytext=(-0.7, -37),fontsize=14,color='b')
plt.show()
```
The function ${h(x) = 2\sqrt{x},\;\; x \ge 0}$ is not continuous everywhere.
```python
%matplotlib inline
def h(x):
if x >= 0:
import numpy as np
return 2 * np.sqrt(x)
x = range(-20, 21)
y = [h(a) for a in x]
plt.xlabel('x')
plt.ylabel('h(x)')
plt.grid()
plt.plot(x,y, color='g')
plt.plot(0, h(0), color='g', marker='o', markerfacecolor='g', markersize=10)
plt.show()
```
The function ${
k(x) = \begin{cases}
x + 20, & \text{if } x \le 0, \\
x - 100, & \text{otherwise }\end{cases}}$ is discontinuous.
```python
%matplotlib inline
def k(x):
import numpy as np
if x <= 0:
return x + 20
else:
return x - 100
x1 = range(-20, 1)
x2 = range(1, 20)
y1 = [k(i) for i in x1]
y2 = [k(i) for i in x2]
plt.xlabel('x')
plt.ylabel('k(x)')
plt.grid()
plt.plot(x1,y1, color='g')
plt.plot(x2,y2, color='g')
plt.plot(0, k(0), color='g', marker='o', markerfacecolor='g', markersize=10)
plt.plot(0, k(0.0001), color='g', marker='o', markerfacecolor='w', markersize=10)
plt.show()
```
### Infinity
For
$$d(x) = \frac{4}{x - 25},\;\; x \ne 25$$
as ${x \to 25}$:
```python
%matplotlib inline
def d(x):
if x != 25:
return 4 / (x - 25)
x = [*range(-100, 24), *np.arange(24.9, 25.2, 0.1), *range(26, 101)]
y = [d(i) for i in x]
plt.xlabel('x')
plt.ylabel('d(x)')
plt.grid()
plt.plot(x,y, color='purple')
plt.show()
```
\begin{equation}\lim_{x \to 25^{+}} d(x) = \infty \end{equation}
\begin{equation}\lim_{x \to 25^{-}} d(x) = -\infty \end{equation}
For the function ${e(x) = \begin{cases}
5, & \text{if } x = 0, \\
1 + x^{2}, & \text{otherwise }
\end{cases}}$ find the limit as ${x\to0}$:
$$\lim_{x \to 0} e(x) \ne e(0) $$
$$\lim_{x \to 0} e(x) = 1 $$
```python
%matplotlib inline
def e(x):
if x == 0:
return 5
else:
return 1 + x**2
x= [-1, -0.5, -0.2, -0.1, -0.01, 0.01, 0.1, 0.2, 0.5, 1]
y =[e(i) for i in x]
plt.xlabel('x')
plt.ylabel('e(x)')
plt.grid()
plt.plot(x, y, color='m')
plt.scatter(0, e(0), color='m')
plt.plot(0, 1, color='m', marker='o', markerfacecolor='w', markersize=10)
plt.show()
```
For the function
$$g(x) = \frac{x^{2} - 1}{x - 1}, x \ne 1$$
find the limit as ${x \to 1}$:
\begin{equation}\lim_{x \to a} g(x) = \frac{(x-1)(x+1)}{x - 1}\end{equation}
\begin{equation}\lim_{x \to a} g(x)= x+1\end{equation}
\begin{equation}\lim_{x \to 1} g(x) = 2\end{equation}
Similarly, for the function
\begin{equation}h(x) = \frac{\sqrt{x} - 2}{x - 4}, x \ne 4 \text{ and } x \ge 0\end{equation}
find the limit as ${x \to 4}$:
\begin{equation}\lim_{x \to a}h(x) = \frac{1}{{\sqrt{x} + 2}}\end{equation}
\begin{equation}\lim_{x \to 4}h(x) = \frac{1}{{\sqrt{4} + 2}} = \frac{1}{4}\end{equation}
## Arithmetic rules for limits
\begin{equation}\lim_{x \to a} (j(x) + l(x)) = \lim_{x \to a} j(x) + \lim_{x \to a} l(x)\end{equation}
\begin{equation}\lim_{x \to a} (j(x) - l(x)) = \lim_{x \to a} j(x) - \lim_{x \to a} l(x)\end{equation}
\begin{equation}\lim_{x \to a} (j(x) \cdot l(x)) = \lim_{x \to a} j(x) \cdot \lim_{x \to a} l(x)\end{equation}
\begin{equation}\lim_{x \to a} \frac{j(x)}{l(x)} = \frac{\lim_{x \to a} j(x)}{\lim_{x \to a} l(x)}\end{equation}
\begin{equation}\lim_{x \to a} (j(x))^{n} = \Big(\lim_{x \to a} j(x)\Big)^{n}\end{equation}
## Differentiation and derivatives
- Definition of the derivative:
\begin{equation}\lim_{h \to 0} \frac{f(x + h) - f(x)}{h} \end{equation}
\begin{equation}f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h} \end{equation}
\begin{equation}\frac{d}{dx}f(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h} \end{equation}
```python
%matplotlib inline
def f(x):
return x**2 + x
x = list(range(0, 11))
y = [f(i) for i in x]
x1 = 3
y1 = f(x1)
h = 3
x2 = x1+h
y2 = f(x2)
plt.xlabel('x')
plt.ylabel('f(x)')
plt.grid()
plt.plot(x,y, color='green')
plt.scatter(x1,y1, c='red')
plt.annotate('(x,f(x))',(x1,y1), xytext=(x1-0.5, y1+3))
plt.scatter(x2,y2, c='red')
plt.annotate('(x+h, f(x+h))',(x2,y2), xytext=(x2+0.5, y2))
plt.show()
```
#### The derivative can equivalently be written as
\begin{equation}f'(\textbf{a}) = \lim_{h \to 0} \frac{f(\textbf{a} + h) - f(\textbf{a})}{h} \Leftrightarrow f'(a) = \lim_{x \to a} \frac{f(x) - f(a)}{x - a} \end{equation}
### Computing a derivative at a point
\begin{equation}f(x) = x^{2} + x\end{equation}
\begin{equation}f'(a) = \lim_{h \to 0} \frac{f(a + h) - f(a)}{h} \end{equation}
\begin{equation}f'(\textbf{2}) = \lim_{h \to 0} \frac{f(\textbf{2} + h) - f(\textbf{2})}{h} \end{equation}
\begin{equation}f'(2) = \lim_{h \to 0} \frac{((2+h)^{2} + 2 + h) - (2^{2} + 2)}{h} \end{equation}
\begin{equation}f'(2) = \lim_{h \to 0} \frac{(h^{2} + 5h + 6) - 6}{h} = \lim_{h \to 0} \frac{h^{2} + 5h}{h} = \lim_{h \to 0} h + 5 = 5\end{equation}
```python
%matplotlib widget
def f(x):
return x**2 + x
x = list(range(0, 11))
y = [f(i) for i in x]
x1 = 2
y1 = f(x1)
m = 5
plt.xlabel('x')
plt.ylabel('f(x)')
plt.grid()
plt.plot(x,y, color='green')
plt.scatter(x1,y1, c='red')
plt.annotate('(x,f(x))',(x1,y1), xytext=(x1-0.5, y1+3))
xMin = x1 - 5
yMin = y1 - (5*m)
xMax = x1 + 5
yMax = y1 + (5*m)
plt.plot([xMin,xMax],[yMin,yMax], color='magenta')
plt.show()
```
### Computing the derivative function
\begin{equation}f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h} \end{equation}
\begin{equation}f'(x) = \lim_{h \to 0} \frac{((x+h)^{2} + x + h) - (x^{2} + x)}{h} \end{equation}
\begin{equation}f'(x) = \lim_{h \to 0} \frac{x^{2} + h^{2} + 2xh + x + h - x^{2} - x}{h} \end{equation}
\begin{equation}f'(x) = \lim_{h \to 0} \frac{h^{2} + 2xh + h}{h} \end{equation}
\begin{equation}f'(x) = \lim_{h \to 0} 2x + h + 1 \end{equation}
\begin{equation}f'(x) = 2x + 1 \end{equation}
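As a quick numerical check (an addition to the original text), the difference quotient with a small $h$ approaches $2x + 1$:
```python
# Difference quotient (f(x+h) - f(x)) / h for f(x) = x**2 + x with a small h
def f(x):
    return x**2 + x

h = 1e-6
for x in [0.0, 1.0, 2.0]:
    approx = (f(x + h) - f(x)) / h
    print(x, approx, 2*x + 1)   # the last two columns should nearly agree
```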
## Differentiability
Not every function has a derivative everywhere.
A differentiable function has the following geometric features:
- it is continuous
- its tangent line is not vertical
- it is smooth (no corners)
\begin{equation}
q(x) = \begin{cases}
\frac{40,000}{x^{2}}, & \text{if } x < -4, \\
(x^{2} -2) \cdot (x - 1), & \text{if } x \ne 0 \text{ and } x \ge -4 \text{ and } x < 8, \\
(x^{2} -2), & \text{if } x \ne 0 \text{ and } x \ge 8
\end{cases}
\end{equation}
```python
%matplotlib inline
def q(x):
if x != 0:
if x < -4:
return 40000 / (x**2)
elif x < 8:
return (x**2 - 2) * x - 1
else:
return (x**2 - 2)
x = [*range(-10, -5), -4.01]
x2 = [*range(-4, 8), 7.9999, *range(8, 11)]
y = [q(i) for i in x]
y2 = [q(i) for i in x2]
plt.xlabel('x')
plt.ylabel('q(x)')
plt.grid()
plt.plot(x,y, color='purple')
plt.plot(x2,y2, color='purple')
plt.scatter(-4,q(-4), c='red')
plt.annotate('A (x= -4)',(-5,q(-3.9)), xytext=(-7, q(-3.9)))
plt.scatter(0,0, c='red')
plt.annotate('B (x= 0)',(0,0), xytext=(-1, 40))
plt.scatter(8,q(8), c='red')
plt.annotate('C (x= 8)',(8,q(8)), xytext=(8, 100))
plt.show()
```
- A is not continuous
- B is not continuous
- C is not smooth
## Differentiation rules
### Basic rules
\begin{equation}f(x) = \pi \;\; \therefore \;\; f'(x) = 0 \end{equation}
\begin{equation}f(x) = 2g(x) \;\; \therefore \;\; f'(x) = 2g'(x) \end{equation}
\begin{equation}f(x) = g(x) + h(x) \;\; \therefore \;\; f'(x) = g'(x) + h'(x) \end{equation}
\begin{equation}f(x) = k(x) - l(x) \;\; \therefore \;\; f'(x) = k'(x) - l'(x) \end{equation}
\begin{equation}\frac{d}{dx}(2x + 6) = \frac{d}{dx} 2x + \frac{d}{dx} 6 = 2\end{equation}
### The power rule
> An extremely commonly used rule
\begin{equation}f(x) = x^{n} \;\; \therefore \;\; f'(x) = nx^{n-1}\end{equation}
For example:
\begin{equation}f(x) = x^{3} \;\; \therefore \;\; f'(x) = 3x^{2}\end{equation}
\begin{equation}f(x) = x^{-2} \;\; \therefore \;\; f'(x) = -2x^{-3}\end{equation}
\begin{equation}f(x) = x^{2} \;\; \therefore \;\; f'(x) = 2x\end{equation}
### Deriving the power rule
\begin{equation}f(x) = x^{2}\end{equation}
\begin{equation}f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h} \end{equation}
\begin{equation}f'(x) = \lim_{h \to 0} \frac{(x + h)^{2} - x^{2}}{h} \end{equation}
\begin{equation}f'(x) = \lim_{h \to 0} \frac{x^{2} + h^{2} + 2xh - x^{2}}{h} = \lim_{h \to 0} \frac{h^{2} + 2xh}{h} = \lim_{h \to 0} h + 2x = 2x\end{equation}
### The product rule
\begin{equation}\frac{d}{dx}[f(x)g(x)] = f'(x)g(x) + f(x)g'(x) \end{equation}
For example
\begin{equation}f(x) = 2x^{2} \end{equation}
\begin{equation}g(x) = x + 1 \end{equation}
\begin{equation}f'(x) = 4x \end{equation}
\begin{equation}g'(x) = 1 \end{equation}
\begin{equation}\frac{d}{dx}[f(x)g(x)] = (4x \cdot (x + 1)) + (2x^{2} \cdot 1) \end{equation}
\begin{equation}\frac{d}{dx}[f(x)g(x)] = 6x^{2} + 4x \end{equation}
### The quotient rule
\begin{equation}r(x) = \frac{s(x)}{t(x)} \end{equation}
\begin{equation}r'(x) = \frac{s'(x)t(x) - s(x)t'(x)}{(t(x))^{2}} \end{equation}
For example
\begin{equation}s(x) = 3x^{2} \end{equation}
\begin{equation}t(x) = 2x\end{equation}
\begin{equation}r'(x) = \frac{(6x \cdot 2x) - (3x^{2} \cdot 2)}{(2x)^{2}} = \frac{6x^{2}}{4x^{2}} = \frac{3}{2}\end{equation}
### The chain rule
\begin{equation}\frac{d}{dx}[o(i(x))] = o'(i(x)) \cdot i'(x)\end{equation}
For example
\begin{equation}i(x) = x^{2} \end{equation}
\begin{equation}o(x) = 2x \end{equation}
\begin{equation}o'(x) = 2, i'(x) = 2x\end{equation}
\begin{equation}\frac{d}{dx}[o(i(x))] = 4x\end{equation}
# Extrema and optimization
The function ${k(x) = -10x^{2} + 100x + 3}$ has the derivative:
\begin{equation}k'(x) = -20x + 100 \end{equation}
```python
%matplotlib inline
def k(x):
return -10*(x**2) + (100*x) + 3
def kd(x):
return -20*x + 100
x = list(range(0, 11))
y = [k(i) for i in x]
yd = [kd(i) for i in x]
plt.axhline()
plt.axvline()
plt.xlabel('x (time in seconds)')
plt.ylabel('k(x) (height in feet)')
plt.xticks(range(0,15, 1))
plt.yticks(range(-200, 500, 20))
plt.grid()
plt.plot(x,y, color='green')
plt.plot(x,yd, color='purple')
x1 = 2
x2 = 5
x3 = 8
plt.plot([x1-1,x1+1],[k(x1)-(kd(x1)),k(x1)+(kd(x1))], color='r')
plt.plot([x2-1,x2+1],[k(x2)-(kd(x2)),k(x2)+(kd(x2))], color='r')
plt.plot([x3-1,x3+1],[k(x3)-(kd(x3)),k(x3)+(kd(x3))], color='r')
plt.show()
```
## Finding maxima and minima
For ${k(x) = -10x^{2} + 100x + 3}$ we have ${k'(x) = -20x + 100 }$
\begin{equation}-20x + 100 = 0 \end{equation}
\begin{equation}x = 5 \end{equation}
Taking the second derivative,
\begin{equation}k'(x) = -20x + 100 \Rightarrow k''(x) = -20\end{equation}
Since the second derivative is a negative constant, the first derivative decreases linearly, so the point where the derivative equals zero is a maximum.
The function
\begin{equation}w(x) = x^{2} + 2x + 7 \end{equation}
```python
%matplotlib inline
def w(x):
return (x**2) + (2*x) + 7
def wd(x):
return 2*x + 2
x = list(range(-10, 11))
y = [w(i) for i in x]
yd = [wd(i) for i in x]
plt.axhline()
plt.axvline()
plt.xlabel('x')
plt.ylabel('w(x)')
plt.xticks(range(-10,15, 1))
plt.yticks(range(-200, 500, 20))
plt.grid()
plt.plot(x,y, color='g')
plt.plot(x,yd, color='m')
plt.show()
```
## Critical points
For the function
\begin{equation}v(x) = x^{3} - 2x + 100 \end{equation}
```python
%matplotlib widget
def v(x):
return (x**3) - (2*x) + 100
def vd(x):
return 3*(x**2) - 2
x = list(range(-10, 11))
y = [v(i) for i in x]
yd = [vd(i) for i in x]
plt.axhline()
plt.axvline()
plt.xlabel('x')
plt.ylabel('v(x)')
plt.xticks(range(-10,15, 1))
plt.yticks(range(-1000, 2000, 100))
plt.grid()
plt.plot(x,y, color='g')
plt.plot(x,yd, color='m')
plt.show()
```
```python
%matplotlib inline
def k(x):
return -10*(x**2) + (100*x) + 3
def kd(x):
return -20*x + 100
def k2d(x):
return -20
plt.axhline()
plt.axvline()
x = list(range(0, 11))
y = [k(i) for i in x]
yd = [kd(i) for i in x]
y2d = [k2d(i) for i in x]
plt.xlabel('x')
plt.ylabel('k(x)')
plt.xticks(range(0,15, 1))
plt.yticks(range(-200, 500, 20))
plt.grid()
plt.plot(x,y, color='g')
plt.plot(x,yd, color='r')
plt.plot(x,y2d, color='b')
plt.show()
```
Exercise:
\begin{equation}w(x) = x^{2} + 2x + 7 \end{equation}
```python
%matplotlib inline
def w(x):
return (x**2) + (2*x) + 7
def wd(x):
return 2*x + 2
def w2d(x):
return 2
x = list(range(-10, 11))
y = [w(i) for i in x]
yd = [wd(i) for i in x]
y2d = [w2d(i) for i in x]
plt.axhline()
plt.axvline()
plt.xlabel('x (time in days)')
plt.ylabel('w(x) (flowers)')
plt.xticks(range(-10,15, 1))
plt.yticks(range(-200, 500, 20))
plt.grid()
plt.plot(x,y, color='green')
plt.plot(x,yd, color='purple')
plt.plot(x,y2d, color='magenta')
plt.show()
```
## A critical point is not necessarily a maximum or a minimum
\begin{equation}v(x) = x^{3} - 6x^{2} + 12x + 2 \end{equation}
\begin{equation}v'(x) = 3x^{2} - 12x + 12 = 0\end{equation}
${x = 2}$ is a critical point
```python
%matplotlib inline
def v(x):
return (x**3) - (6*(x**2)) + (12*x) + 2
def vd(x):
return (3*(x**2)) - (12*x) + 12
def v2d(x):
return (3*(2*x)) - 12
from matplotlib import pyplot as plt
x = list(range(-5, 11))
y = [v(i) for i in x]
yd = [vd(i) for i in x]
y2d = [v2d(i) for i in x]
plt.xlabel('x')
plt.ylabel('v(x)')
plt.xticks(range(-10,15, 1))
plt.yticks(range(-2000, 2000, 50))
plt.grid()
plt.plot(x,y, color='green')
plt.plot(x,yd, color='purple')
plt.plot(x,y2d, color='magenta')
plt.show()
print ("v(2) = " + str(v(2)))
print ("v'(2) = " + str(vd(2)))
print ("v''(2) = " + str(v2d(2)))
```
## An optimization procedure
- compute the first derivative
- solve for the critical points
- check the second derivative to decide whether each critical point is a maximum
A small sketch of these steps is given below.
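A minimal sketch of the procedure using sympy (an addition; sympy is assumed to be available, it is not used elsewhere in this notebook), applied to the function $k(x)$ from above:
```python
import sympy as sym

x = sym.symbols('x')
k = -10*x**2 + 100*x + 3

k1 = sym.diff(k, x)           # first derivative
critical = sym.solve(k1, x)   # critical points: here [5]
k2 = sym.diff(k, x, 2)        # second derivative
for c in critical:
    kind = 'maximum' if k2.subs(x, c) < 0 else 'not a maximum'
    print(c, kind)
```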
# Partial derivatives
For the multivariate function
$$f(x,y) = x^2 + y^2$$
the partial derivative with respect to $x$ is defined as
$$\frac{\partial f(x,y)}{\partial x} = \frac{\partial (x^2 + y^2)}{\partial x}$$
$$\frac{\partial x^2}{\partial x} = 2x,\;\;\; \frac{\partial y^2}{\partial x} = 0$$
$$\frac{\partial f(x,y)}{\partial x} = 2x + 0 = 2x$$
and with respect to $y$
$$\frac{\partial f(x,y)}{\partial y} = 0 + 2y = 2y$$
## Computing the gradient
> The gradient gives the tangent (slope) of a multidimensional surface
$$\frac{\partial f(x,y)}{\partial x} = 2x \\
\frac{\partial f(x,y)}{\partial y} = 2y$$
The gradient vector:
$$grad(f(x,y)) = \vec{g(x,y)} = \begin{bmatrix}\frac{\partial f(x,y)}{\partial x} \\ \frac{\partial f(x,y)}{\partial y} \end{bmatrix} = \begin{bmatrix}2x \\ 2y \end{bmatrix} $$
```python
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import math
el = np.arange(-5,6)
nx, ny = np.meshgrid(el, el, sparse=False, indexing='ij')
x_coord = []
y_coord = []
z = []
for i in range(11):
for j in range(11):
x_coord.append(float(-nx[i,j]))
y_coord.append(float(-ny[i,j]))
z.append(nx[i,j]**2 + ny[i,j]**2)
x_grad = [-2 * x for x in x_coord]
y_grad = [-2 * y for y in y_coord]
plt.xlim(-5.5,5.5)
plt.ylim(-5.5,5.5)
for x, y, xg, yg in zip(list(x_coord), list(y_coord), list(x_grad), list(y_grad)):
if x != 0.0 or y != 0.0: ## Avoid the zero divide when scaling the arrow
l = math.sqrt(xg**2 + yg**2)/2.0
plt.quiver(x, y, xg, yg, width = l, units = 'dots')
z = np.array(z).reshape(11,11)
plt.contour(el, el, z)
```
Legend for the plot:
- the arrows point in the gradient direction
- the arrow width represents the magnitude
- the gradient direction is perpendicular to the contour lines of the surface
## The gradient descent algorithm
1. Start from an initial point.
2. Compute the gradient at that point.
3. Take a step along the gradient direction.
4. Check whether the gradient is close to zero; if it is, stop.
5. Otherwise, continue from step 2.
A minimal sketch is given below.
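A minimal sketch (an addition; the step size and tolerance are assumed values) for $f(x,y)=x^2+y^2$, stepping against the gradient in order to descend:
```python
# Gradient descent on f(x, y) = x**2 + y**2, whose gradient is (2x, 2y)
import numpy as np

def grad(p):
    x, y = p
    return np.array([2*x, 2*y])

p = np.array([4.0, -3.0])          # starting point
eta = 0.1                          # step size
for _ in range(100):
    g = grad(p)
    if np.linalg.norm(g) < 1e-6:   # gradient close to zero -> stop
        break
    p = p - eta*g                  # step against the gradient
print(p)                           # should approach [0, 0]
```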
# Integration
For the function ${f(x)=x}$ we have:
$$\int f(x)\;dx = \frac{1}{2} x^2$$
$$\int_0^2 f(x)\;dx = \frac{1}{2} x^2\ \big|_0^2 = \frac{4}{2} - \frac{0}{2} = 2$$
```python
plt.xlabel('x')
plt.ylabel('f(x)')
plt.grid()
def f(x):
return x
x = range(0, 11)
y = [f(a) for a in x]
plt.plot(x,y, color='purple')
section = np.arange(0, 2, 1/20)
plt.fill_between(section,f(section), color='orange')
plt.show()
```
```python
import scipy.integrate as integrate
i, e = integrate.quad(lambda x: f(x), 0, 2)
print (i)
```
2.0
$$\int_0^3 3x^2 + 2x + 1\;dx = \frac{3}{3} x^3 + \frac{2}{2} x^2 + x\ \big|_0^3
= 27 + 9 + 3 + 0 + 0 + 0
= 39$$
```python
from matplotlib.patches import Polygon
def g(x):
return 3 * x**2 + 2 * x + 1
x = range(0, 11)
y = [g(a) for a in x]
fig, ax = plt.subplots()
plt.xlabel('x')
plt.ylabel('f(x)')
plt.grid()
plt.plot(x,y, color='purple')
ix = np.linspace(0, 3)
iy = g(ix)
verts = [(0, 0)] + list(zip(ix, iy)) + [(3, 0)]
poly = Polygon(verts, facecolor='orange')
ax.add_patch(poly)
plt.show()
```
```python
i, e = integrate.quad(lambda x: 3 * x**2 + 2 * x + 1, 0, 3)
print(i)
print(e)
```
38.99999999999999
4.3298697960381095e-13
$$\int^{\infty}_0 e^{-5x} dx$$
```python
import numpy as np
i, e = integrate.quad(lambda x: np.exp(-x*5), 0, np.inf)
print('Integral: ' + str(i))
print('Absolute Error: ' + str(e))
```
Integral: 0.20000000000000007
Absolute Error: 1.560666811361375e-11
$$\int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi}} e^{-\frac{x^2}{2}}\, dx$$
```python
import numpy as np
norms = lambda x: np.exp(-x**2/2.0)/np.sqrt(2.0 * 3.14159)
i, e = integrate.quad(norms, -np.inf, np.inf)
print('Integral: ' + str(i))
print('Absolute Error: ' + str(e))
```
Integral: 1.0000004223321999
Absolute Error: 1.0178195684846592e-08
| b10d4b03f49bcc2621ce0d79573c474a71301b96 | 419,069 | ipynb | Jupyter Notebook | 基础教程/A1-Python与基础知识/数学基础/02_微积分.ipynb | microsoft/ai-edu | 2f59fa4d3cf19f14e0b291e907d89664bcdc8df3 | [
"Apache-2.0"
]
| 11,094 | 2019-05-07T02:48:50.000Z | 2022-03-31T08:49:42.000Z | 基础教程/A1-Python与基础知识/数学基础/02_微积分.ipynb | microsoft/ai-edu | 2f59fa4d3cf19f14e0b291e907d89664bcdc8df3 | [
"Apache-2.0"
]
| 157 | 2019-05-13T15:07:19.000Z | 2022-03-23T08:52:32.000Z | 基础教程/A1-Python与基础知识/数学基础/02_微积分.ipynb | microsoft/ai-edu | 2f59fa4d3cf19f14e0b291e907d89664bcdc8df3 | [
"Apache-2.0"
]
| 2,412 | 2019-05-07T02:55:15.000Z | 2022-03-30T06:56:52.000Z | 231.019294 | 74,772 | 0.915427 | true | 7,859 | Qwen/Qwen-72B | 1. YES
2. YES | 0.841826 | 0.83762 | 0.70513 | __label__yue_Hant | 0.257176 | 0.476585 |
```
import numpy as np
import scipy as sp
import scipy.signal
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
```
The linear interpolation is a convolution of the samples with a triangular pulse $h_l(t)$. The triangular pulse is the convolution of two rectangular pulses $ h_l(t) = (p_{\tau} \ast p_{\tau}) (t) $. The frequency response $ H_l(f) $ is therefore
\begin{equation}
H_l(f) = (\tau \cdot \text{sinc}(f \tau))^2.
\end{equation}
For the linear interpolator $ \tau = T_S $ where $ T_S $ is the sampling time of the input discrete signal.
```
# sampling period & frequency ofthe input discrete signal
Ts = 1
Fs = 1/Ts
# analytic frequency response of the interpolating filter
N = 40
res = 100
f = np.linspace(-N*Fs,N*Fs, 2*N*res)
tau = Ts
H_l = tau**2 * np.sinc(f * tau)**2
```
```
fig, ax = plt.subplots(1, 1, figsize=(13,4))
ax.plot(f, 20 * np.log10( np.abs(H_l) ))
ax.set_ylabel('Amplitude [dB]', fontsize=12)
ax.set_xlabel('Frequency [Hz]', fontsize=12)
n = np.arange(0,9,1)
ax.set_xticks(Fs * n)
labels = ["$%d F_S$" % y for y in n[1:]]
labels.insert(0,"0")
ax.set_xticklabels(labels, fontsize=12)
ax.grid()
ax.axis('tight');
ax.set_ylim([-100, 10]); # semicolon suppresses the output
ax.set_xlim([0, n[-1]*Fs]);
```
With this operation I get a continuous signal from a discrete one. If I want to output a discrete signal, I must sample the output. This is the same as both:
- sampling the impulse response of the linear interpolation filter,
- interpolating zeros to the input signal
with sampling time
\begin{equation} T_{S1} = L \cdot T_S \qquad L \in \mathbb{N}. \end{equation}
This produces spectral copies of the original frequency response, at $ n F_{S1} $
```
L = 6 # Fs1 is L times Fs
Fs1 = L * Fs
# view
f_min = 0
f_max = 8*Fs
fig, ax = plt.subplots(1, 1, figsize=(12,6))
# plot some spectral copies
f = np.linspace(f_min, f_max, ((f_max-f_min)/Fs)*res)
tau = Ts
H_l_tot = np.zeros((len(f),))
for i in range(-10,10):
H_l = tau**2 * np.sinc((f + i*Fs1) * tau)**2
ax.plot(f, 20 * np.log10( np.abs(H_l) ), '--b', alpha=0.4)
H_l_tot = H_l_tot + H_l
# plot the sum of all the spectral copies
ax.plot(f, 20 * np.log10( np.abs(H_l_tot)), 'b')
ax.set_ylabel('Amplitude [dB]', fontsize=12)
ax.set_xlabel('Frequency [Hz]', fontsize=12)
n = np.arange(0,9,1)
ax.set_xticks(Fs * n)
labels = ["$%d F_S$" % y for y in n[1:]]
labels.insert(0,"0")
ax.set_xticklabels(labels, fontsize=12)
ax.grid()
ax.axis('tight');
ax.set_ylim([-100, 10]); # semicolon suppresses the output
ax.set_xlim([f_min, f_max]);
```
We now show that by Fourier transforming the impulse response of the linear interpolator, sampled at $F_{S1}$, we obtain the same result.
```
nfft = 1024
h_l_sampled = np.hstack((np.linspace(0,1-1/L,L), np.linspace(1,0+1/L,L)))
H_l_sampled = np.fft.rfft(h_l_sampled, nfft, axis=-1)
f_sampled = Fs1 * np.linspace(0, 0.5, nfft//2 + 1)
```
```
fig, ax = plt.subplots(1, 1, figsize=(6,4))
#plt.title('Digital filter frequency response')
# highest frequency (pi) corresponds to M * Fs/2
ax.plot(f_sampled, 20 * np.log10(abs(H_l_sampled)), 'b', label=r"$ALIAS_{10}$")
ax.set_ylabel('Amplitude [dB]', fontsize=12)
ax.set_xlabel('Frequency [Hz]', fontsize=12)
ax.grid()
n = np.arange(0,9,1)
ax.set_xticks(Fs * n)
labels = ["$%d F_S$" % y for y in n[1:]]
labels.insert(0,"0")
ax.set_xticklabels(labels, fontsize=12)
ax.set_xlim([0, L*Fs/2]);
```
Let's normalize both responses so we can compare the frequency response of the sampled version with that of the continuous-time version.
```
# frequency response of the sampled (at L*Fs) interpolator
L = 6
Fs1 = L * Fs
# fft computation
nfft = 1024
h_l_sampled = np.hstack((np.linspace(0,1-1/L,L), np.linspace(1,0+1/L,L)))
H_l_sampled = np.fft.rfft(h_l_sampled, nfft, axis=-1)
f_sampled = Fs1 * np.linspace(0, 0.5, nfft//2 + 1) # [0, .5 Fs1]
#Normalization
H_l_sampled = H_l_sampled/np.max(np.abs(H_l_sampled))
# frequency response of the continuous-time interpolator
f = np.linspace(0*Fs,(L/2)*Fs, (L/2)*100) # we care only up to (L/2)*Fs
tau = Ts
H_l = tau**2 * np.sinc(f * tau)**2
#Normalization
H_l = H_l / np.max(H_l)
```
```
fig, ax = plt.subplots(1, 1, figsize=(12,4))
# highest frequency (pi) corresponds to M * Fs/2
ax.plot(f_sampled, 20 * np.log10(abs(H_l_sampled)), 'b', label=r"$ALIAS_{%d}$" % L)
ax.plot(f, 20 * np.log10(abs(H_l)), 'g', label=r"$sinc$")
ax.set_ylabel('Amplitude [dB]', fontsize=12)
ax.set_xlabel('Frequency [Hz]', fontsize=12)
ax.grid()
ax.legend(loc=0)
n = np.arange(0,9,1)
ax.set_xticks(Fs * n)
labels = ["$%d F_S$" % y for y in n[1:]]
labels.insert(0,"0")
ax.set_xticklabels(labels, fontsize=12)
ax.set_xlim([0, L*Fs/2]);
ax.set_ylim([-100, 10]);
```
| 096ab30778f022e72905d7c191d8dec9b899a329 | 230,875 | ipynb | Jupyter Notebook | audio/LinearInterpolation.ipynb | brunodigiorgi/ipn-notes | c8840a45989f25442c1d800ef8acdf8c630cdafc | [
"CC-BY-3.0"
]
| 1 | 2018-03-07T13:46:17.000Z | 2018-03-07T13:46:17.000Z | audio/LinearInterpolation.ipynb | brunodigiorgi/ipn-notes | c8840a45989f25442c1d800ef8acdf8c630cdafc | [
"CC-BY-3.0"
]
| null | null | null | audio/LinearInterpolation.ipynb | brunodigiorgi/ipn-notes | c8840a45989f25442c1d800ef8acdf8c630cdafc | [
"CC-BY-3.0"
]
| null | null | null | 712.57716 | 135,737 | 0.935731 | true | 1,570 | Qwen/Qwen-72B | 1. YES
2. YES | 0.948155 | 0.899121 | 0.852506 | __label__eng_Latn | 0.782267 | 0.81899 |
# Design of a Cold Weather Fuel for a Camping Stove
The venerable alcohol stove has been an invaluable camping accessory for generations. These stoves are simple, reliable, and, in a pinch, can be made from aluminum soda cans.
Alcohol stoves are typically fueled with denatured alcohol. Denatured alcohol, sometimes called methylated spirits, is generally a mixture of ethanol and other alcohols and compounds designed to make it unfit for human consumption. An MSDS description of one [manufacturer's product](https://www.korellis.com/wordpress/wp-content/uploads/2016/05/Alcohol-Denatured.pdf) describes a roughly fifty/fifty mixture of ethanol and methanol.
The problem with alcohol stoves is that they can be difficult to light in below-freezing weather. The purpose of this notebook is to design an alternative cold-weather fuel that could be mixed from other materials commonly available at hardware or home improvement stores.
```
%%capture
!pip install -q pyomo
!apt-get install -y -qq glpk-utils
```
## Data
The following data was collected for potential fuels commonly available at hardware and home improvement stores. The data consists of price (\$/gal.) and parameters to predict vapor pressure using the Antoine equation,
\begin{align}
\log_{10}P^{vap}_{s}(T) & = A_s - \frac{B_s}{T + C_s}
\end{align}
where the subscript $s$ refers to species, temperature $T$ is in units of degrees Celsius, and pressure $P$ is in units of mmHg. The additional information for molecular weight and specific gravity will be needed to present the final results in volume fraction.
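As a quick sanity check (an addition, using the ethanol constants from the table below), the Antoine equation gives roughly 59 mmHg for ethanol at 25 °C:
```python
# Antoine equation evaluated for ethanol at 25 °C
A, B, C = 8.04494, 1554.3, 222.65
T = 25                       # degrees Celsius
P_vap = 10**(A - B/(T + C))  # mmHg
print(round(P_vap, 1), "mmHg")
```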
```
data = {
'ethanol' : {'MW': 46.07, 'SG': 0.791, 'A': 8.04494, 'B': 1554.3, 'C': 222.65},
'methanol' : {'MW': 32.04, 'SG': 0.791, 'A': 7.89750, 'B': 1474.08, 'C': 229.13},
'isopropyl alcohol': {'MW': 60.10, 'SG': 0.785, 'A': 8.11778, 'B': 1580.92, 'C': 219.61},
'acetone' : {'MW': 58.08, 'SG': 0.787, 'A': 7.02447, 'B': 1161.0, 'C': 224.0},
'xylene' : {'MW': 106.16, 'SG': 0.870, 'A': 6.99052, 'B': 1453.43, 'C': 215.31},
'toluene' : {'MW': 92.14, 'SG': 0.865, 'A': 6.95464, 'B': 1344.8, 'C': 219.48},
}
```
## Denatured Alcohol
The first step is to determine the vapor pressure of denatured alcohol over a typical range of operating temperatures. For this we assume denatured alcohol is a 40/60 (mole fraction) mixture of ethanol and methanol.
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def Pvap(T, s):
return 10**(data[s]['A'] - data[s]['B']/(T + data[s]['C']))
def Pvap_denatured(T):
return 0.4*Pvap(T, 'ethanol') + 0.6*Pvap(T, 'methanol')
T = np.linspace(0, 40, 200)
plt.plot(T, Pvap_denatured(T))
plt.title('Vapor Pressure of denatured alcohol')
plt.xlabel('temperature / °C')
plt.ylabel('pressure / mmHg')
print("Vapor Pressure at 0C =", round(Pvap_denatured(0),1), "mmHg")
```
## Cold Weather Product Requirements
We seek a cold-weather fuel with increased vapor pressure at 0°C and below that still provides safe, normal operation of the alcohol stove at higher operating temperatures.
For this purpose, we seek a mixture of commonly available liquids with a vapor pressure of at least 22 mmHg at the lowest possible temperature, and no greater than the vapor pressure of denatured alcohol at temperatures 30°C and above.
```
for s in data.keys():
plt.plot(T, Pvap(T,s))
plt.plot(T, Pvap_denatured(T), 'k', lw=3)
plt.legend(list(data.keys()) + ['denatured alcohol'])
plt.title('Vapor Pressure of selected compounds')
plt.xlabel('temperature / °C')
plt.ylabel('pressure / mmHg');
```
## Optimization Model
The first optimization model creates a mixture that maximizes the vapor pressure at -10°C while keeping the vapor pressure less than or equal to that of denatured alcohol at 30°C and above.
The decision variables in the optimization model correspond to $x_s$, the mole fraction of each species $s \in S$ from the set of available species $S$. By definition, the mole fractions must satisfy
\begin{align}
x_s & \geq 0 & \forall s\in S \\
\sum_{s\in S} x_s & = 1
\end{align}
The objective is to maximize the vapor pressure at low temperatures, say -10°C, while maintaining a vapor pressure less than or equal to that of denatured alcohol at 30°C. Using Raoult's law for ideal mixtures,
\begin{align}
\max_{x_s} \sum_{s\in S} x_s P^{vap}_s(-10°C) \\
\end{align}
subject to
\begin{align}
\sum_{s\in S} x_s P^{vap}_s(30°C) & \leq P^{vap}_{denatured\ alcohol}(30°C) \\
\end{align}
This optimization model is implemented in Pyomo in the following cell.
```
import pyomo.environ as pyomo
m = pyomo.ConcreteModel()
S = data.keys()
m.x = pyomo.Var(S, domain=pyomo.NonNegativeReals)
def Pmix(T):
return sum(m.x[s]*Pvap(T,s) for s in S)
m.obj = pyomo.Objective(expr = Pmix(-10), sense=pyomo.maximize)
m.cons = pyomo.ConstraintList()
m.cons.add(sum(m.x[s] for s in S)==1)
m.cons.add(Pmix(30) <= Pvap_denatured(30))
m.cons.add(Pmix(40) <= Pvap_denatured(40))
solver = pyomo.SolverFactory('glpk')
solver.solve(m)
print("Vapor Pressure at -10°C =", m.obj(), "mmHg")
T = np.linspace(-10,40,200)
plt.plot(T, Pvap_denatured(T), 'k', lw=3)
plt.plot(T, [Pmix(T)() for T in T], 'r', lw=3)
plt.legend(['denatured alcohol'] + ['cold weather blend'])
plt.title('Vapor Pressure of selected compounds')
plt.xlabel('temperature / °C')
plt.ylabel('pressure / mmHg');
```
## Display Composition
```
import pandas as pd
s = data.keys()
results = pd.DataFrame.from_dict(data).T
for s in S:
results.loc[s,'mole fraction'] = m.x[s]()
MW = sum(m.x[s]()*data[s]['MW'] for s in S)
for s in S:
results.loc[s,'mass fraction'] = m.x[s]()*data[s]['MW']/MW
vol = sum(m.x[s]()*data[s]['MW']/data[s]['SG'] for s in S)
for s in S:
results.loc[s,'vol fraction'] = m.x[s]()*data[s]['MW']/data[s]['SG']/vol
results
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>A</th>
<th>B</th>
<th>C</th>
<th>MW</th>
<th>SG</th>
<th>mole fraction</th>
<th>mass fraction</th>
<th>vol fraction</th>
</tr>
</thead>
<tbody>
<tr>
<th>acetone</th>
<td>7.02447</td>
<td>1161.00</td>
<td>224.00</td>
<td>58.08</td>
<td>0.787</td>
<td>0.428164</td>
<td>0.2906</td>
<td>0.311695</td>
</tr>
<tr>
<th>ethanol</th>
<td>8.04494</td>
<td>1554.30</td>
<td>222.65</td>
<td>46.07</td>
<td>0.791</td>
<td>0.000000</td>
<td>0.0000</td>
<td>0.000000</td>
</tr>
<tr>
<th>isopropyl alcohol</th>
<td>8.11778</td>
<td>1580.92</td>
<td>219.61</td>
<td>60.10</td>
<td>0.785</td>
<td>0.000000</td>
<td>0.0000</td>
<td>0.000000</td>
</tr>
<tr>
<th>methanol</th>
<td>7.89750</td>
<td>1474.08</td>
<td>229.13</td>
<td>32.04</td>
<td>0.791</td>
<td>0.000000</td>
<td>0.0000</td>
<td>0.000000</td>
</tr>
<tr>
<th>toluene</th>
<td>6.95464</td>
<td>1344.80</td>
<td>219.48</td>
<td>92.14</td>
<td>0.865</td>
<td>0.000000</td>
<td>0.0000</td>
<td>0.000000</td>
</tr>
<tr>
<th>xylene</th>
<td>6.99052</td>
<td>1453.43</td>
<td>215.31</td>
<td>106.16</td>
<td>0.870</td>
<td>0.571836</td>
<td>0.7094</td>
<td>0.688305</td>
</tr>
</tbody>
</table>
</div>
```
```
| e966de449415e25a337de21acadc364c1aaefc10 | 107,362 | ipynb | Jupyter Notebook | notebooks/lp/Mixture_Design_Cold_Weather_Fuel.ipynb | edgBR/ND-Pyomo-Cookbook | 0fb121f7e572e088a83c22c166c159ff55c2efb4 | [
"Apache-2.0"
]
| null | null | null | notebooks/lp/Mixture_Design_Cold_Weather_Fuel.ipynb | edgBR/ND-Pyomo-Cookbook | 0fb121f7e572e088a83c22c166c159ff55c2efb4 | [
"Apache-2.0"
]
| null | null | null | notebooks/lp/Mixture_Design_Cold_Weather_Fuel.ipynb | edgBR/ND-Pyomo-Cookbook | 0fb121f7e572e088a83c22c166c159ff55c2efb4 | [
"Apache-2.0"
]
| 1 | 2020-09-03T18:53:07.000Z | 2020-09-03T18:53:07.000Z | 222.281573 | 43,180 | 0.869703 | true | 2,552 | Qwen/Qwen-72B | 1. YES
2. YES | 0.839734 | 0.868827 | 0.729583 | __label__eng_Latn | 0.888158 | 0.533398 |
# Inferential Statistics Ib - Frequentism
## Learning objectives
Welcome to the second Frequentist inference mini-project! Over the course of working on this mini-project and the previous frequentist mini-project, you'll learn the fundamental concepts associated with frequentist inference. The following list includes the topics you will become familiar with as you work through these two mini-projects:
* the _z_-statistic
* the _t_-statistic
* the difference and relationship between the two
* the Central Limit Theorem, its assumptions and consequences
* how to estimate the population mean and standard deviation from a sample
* the concept of a sampling distribution of a test statistic, particularly for the mean
* how to combine these concepts to calculate confidence intervals and p-values
* how those confidence intervals and p-values allow you to perform hypothesis (or A/B) tests
## Prerequisites
* what a random variable is
* what a probability density function (pdf) is
* what the cumulative density function is
* a high-level sense of what the Normal distribution is
If these concepts are new to you, please take a few moments to Google these topics in order to get a sense of what they are and how you might use them.
These two notebooks were designed to bridge the gap between having a basic understanding of probability and random variables and being able to apply these concepts in Python. This second frequentist inference mini-project focuses on a real-world application of this type of inference to give you further practice using these concepts.
In the previous notebook, we used only data from a known normal distribution. You'll now tackle real data, rather than simulated data, and answer some relevant real-world business problems using the data.
## Hospital medical charges
Imagine that a hospital has hired you as their data analyst. An administrator is working on the hospital's business operations plan and needs you to help them answer some business questions. This mini-project, as well as the bootstrap and Bayesian inference mini-projects also found in this unit are designed to illustrate how each of the inferential statistics methods have their uses for different use cases. In this assignment notebook, you're going to use frequentist statistical inference on a data sample to answer the questions:
* has the hospital's revenue stream fallen below a key threshold?
* are patients with insurance really charged different amounts than those without?
Answering that last question with a frequentist approach makes some assumptions, or requires some knowledge, about the two groups. In the next mini-project, you'll use bootstrapping to test that assumption. And in the final mini-project of the unit, you're going to create a model for simulating _individual_ charges (not a sampling distribution) that the hospital can use to model a range of scenarios.
We are going to use some data on medical charges obtained from [Kaggle](https://www.kaggle.com/easonlai/sample-insurance-claim-prediction-dataset). For the purposes of this exercise, assume the observations are the result of random sampling from our one hospital. Recall in the previous assignment, we introduced the Central Limit Theorem (CLT), and how it tells us that the distributions of sample statistics approach a normal distribution as $n$ increases. The amazing thing about this is that it applies to the sampling distributions of statistics that have been calculated from even highly non-normal distributions of data. Remember, also, that hypothesis testing is very much based on making inferences about such sample statistics. You're going to rely heavily on the CLT to apply frequentist (parametric) tests to answer the questions in this notebook.
```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import t
from numpy.random import seed
import seaborn as sns
medical = pd.read_csv('data/insurance2.csv')
```
```python
medical.shape
```
(1338, 8)
```python
medical.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>age</th>
<th>sex</th>
<th>bmi</th>
<th>children</th>
<th>smoker</th>
<th>region</th>
<th>charges</th>
<th>insuranceclaim</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>19</td>
<td>0</td>
<td>27.900</td>
<td>0</td>
<td>1</td>
<td>3</td>
<td>16884.92400</td>
<td>1</td>
</tr>
<tr>
<th>1</th>
<td>18</td>
<td>1</td>
<td>33.770</td>
<td>1</td>
<td>0</td>
<td>2</td>
<td>1725.55230</td>
<td>1</td>
</tr>
<tr>
<th>2</th>
<td>28</td>
<td>1</td>
<td>33.000</td>
<td>3</td>
<td>0</td>
<td>2</td>
<td>4449.46200</td>
<td>0</td>
</tr>
<tr>
<th>3</th>
<td>33</td>
<td>1</td>
<td>22.705</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>21984.47061</td>
<td>0</td>
</tr>
<tr>
<th>4</th>
<td>32</td>
<td>1</td>
<td>28.880</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>3866.85520</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
__Q:__ Plot the histogram of charges and calculate the mean and standard deviation. Comment on the appropriateness of these statistics for the data.
__A:__
```python
# Let's look at the max value so we can use a number around it for our
# xticks
max(medical.charges)
```
63770.42801
```python
sns.set()
plt.figure(figsize=[12,8])
plt.hist(medical.charges, bins = 50, color='C4')
plt.xlabel('Charges')
plt.ylabel('Count')
plt.title('Medical Charges and Counts')
plt.xticks(range(0,65000,2500), rotation = 'vertical')
plt.show()
```
```python
print('The mean of the medical charges is {} '.format(round(np.mean(medical.charges),2)))
print('The standard deviation of the medical charges is {} '.format(round(np.std(medical.charges),2)))
```
The mean of the medical charges is 13270.42
The standard deviation of the medical charges is 12105.48
The combination of mean and standard deviation is appropriate here. The median wouldn't be as suitable, since it looks like most people pay under 15000 in medical charges. Note also that the distribution is not normal and is skewed to the right.
__Q:__ The administrator is concerned that the actual average charge has fallen below 12000, threatening the hospital's operational model. On the assumption that these data represent a random sample of charges, how would you justify that these data allow you to answer that question? And what would be the most appropriate frequentist test, of the ones discussed so far, to apply?
__A:__ The data are not normally distributed and appear skewed to the right, but the sample size is large, so the CLT lets us make inferences about the sample mean. The observed mean is 13270.42, which is not far above the administrator's threshold of 12000. Since we don't know the population standard deviation, the most appropriate frequentist test of those discussed so far is a one-sided, *t*-based test (confidence interval) on the mean, relying on the CLT for the sampling distribution of the mean. One could also approximate that sampling distribution directly by resampling the observed charges, as sketched below.
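As a rough sketch of that resampling idea (not required for the assignment, and reusing `medical`, `np`, `plt`, and `seed` defined above), one could approximate the sampling distribution of the mean like this:

```python
# Rough sketch: approximate the sampling distribution of the mean charge
# by resampling the observed charges with replacement.
seed(47)  # reproducibility (seed was imported from numpy.random above)
boot_means = [np.mean(np.random.choice(medical.charges, size=len(medical.charges), replace=True))
              for _ in range(1000)]
plt.hist(boot_means, bins=30)
plt.xlabel('Resampled mean charge')
plt.ylabel('Count')
plt.title('Approximate sampling distribution of the mean')
plt.show()
```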
__Q:__ Given the nature of the administrator's concern, what is the appropriate confidence interval in this case? A one-sided or two-sided interval? Calculate the critical value and the relevant 95% confidence interval for the mean and comment on whether the administrator should be concerned?
__A:__ I think this should be a one-sided interval, since the concern is about the actual average charge falling below 12000, not about it being above 12000. The mean of the sample collected is 13270.42, so the administrator should not be concerned at first glance.
```python
# Critical Value with sample size of 1338
n = len(medical.charges)
dof = n - 1
p = 0.95
critical_t = t.ppf(p, dof)
print('The critical t value for one tailed 95% confidence interval is: {} '.format(critical_t))
```
The critical t value for one tailed 95% confidence interval is: 1.6459941145571317
```python
# Standard Error of the Mean (a.k.a. the standard deviation of the sampling distribution of the sample mean!)
se = (np.std(medical.charges)) / (np.sqrt(n))
moe = critical_t * se # Margin of Error
print('The Margin of error is: {} '.format(moe))
```
The Margin of error is: 544.7314053390934
```python
lower = (np.mean(medical.charges)) - moe
lower
#print('The lower is {}'.format(lower) + "\n""The administrator shouldn't be concerned as we are 95 percent confident it lies above {}".format(lower))
```
12725.690859802164
The administrator shouldn't be concerned, as we are 95% confident that the actual average charge is above 12725.690859802164.
The administrator then wants to know whether people with insurance really are charged a different amount to those without.
__Q:__ State the null and alternative hypothesis here. Use the _t_-test for the difference between means where the pooled standard deviation of the two groups is given by
\begin{equation}
s_p = \sqrt{\frac{(n_0 - 1)s^2_0 + (n_1 - 1)s^2_1}{n_0 + n_1 - 2}}
\end{equation}
and the *t* test statistic is then given by
\begin{equation}
t = \frac{\bar{x}_0 - \bar{x}_1}{s_p \sqrt{1/n_0 + 1/n_1}}.
\end{equation}
What assumption about the variances of the two groups are we making here?
__A:__
1. The null hypothesis (H0) is that there is no difference between people with insurance and people without insurance: both groups are charged the same on average.
2. The alternative hypothesis (Ha) is that people with insurance are charged a different amount than those without insurance.
The assumptions behind the pooled *t*-test above are that each group's charges are approximately normally distributed (or that the samples are large enough for the CLT to apply) and, crucially, that the two groups have equal variance, i.e. that the standard deviations of the two samples are equal or approximately equal. A quick way to probe the equal-variance assumption is sketched below.
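A minimal sketch of such a check (purely exploratory, not part of the graded answer) uses Levene's test from `scipy.stats`, which tests the null hypothesis that the groups have equal variances; the split below simply reuses the `insuranceclaim` column of `medical`, and the variable names `ins_chg`/`unins_chg` are introduced here only for this sketch.

```python
from scipy.stats import levene

# Split charges by insurance status (same split as used later in this notebook)
ins_chg   = medical.loc[medical['insuranceclaim'] == 1, 'charges']
unins_chg = medical.loc[medical['insuranceclaim'] == 0, 'charges']

# A small p-value would suggest the equal-variance assumption is questionable.
print(levene(ins_chg, unins_chg))
```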
__Q:__ Perform this hypothesis test both manually, using the above formulae, and then using the appropriate function from [scipy.stats](https://docs.scipy.org/doc/scipy/reference/stats.html#statistical-tests) (hint, you're looking for a function to perform a _t_-test on two independent samples). For the manual approach, calculate the value of the test statistic and then its probability (the p-value). Verify you get the same results from both.
__A:__
```python
#print(medical.columns)
# This is a binary column so 1 means it has insurance claim 0 means it has no claims
#print(medical.insuranceclaim.head())
```
```python
insured_charges = medical.loc[medical['insuranceclaim'] == 1, 'charges']
uninsured_charges = medical.loc[medical['insuranceclaim'] == 0, 'charges']
n1 = sum(medical['insuranceclaim'] == 1)
x1 = (np.mean(insured_charges))
s1 = (np.std(insured_charges,ddof = 1))
n0 = sum(medical['insuranceclaim'] == 0)
x0 = (np.mean(uninsured_charges))
s0 = (np.std(uninsured_charges, ddof =1))
print('The Uninsured: \n','count', n0, 'mean', x0, 'standard deviation', s0)
print('The Insured: \n', 'count', n1, 'mean', x1, 'standard deviation', s1)
```
The Uninsured:
count 555 mean 8821.421892306294 standard deviation 6446.510126811736
The Insured:
count 783 mean 16423.928276537663 standard deviation 14045.928418802127
```python
# breaking down the formula
a = (n0 - 1) * s0 **2 # first part of the formula
b = (n1 - 1) * s1 **2 # second part of the formula
dof = (n0 + n1 - 2)
sp = np.sqrt((a + b)/dof) # pooled standard deviation
print('sp:', sp)
t_test = ((x0 - x1)/(sp*(np.sqrt(1/n0 + 1/n1))))
print('t test:', t_test)
```
sp: 11520.034268775256
t test: -11.89329903087671
```python
# For a two-sided test, if the value of the test statistic from your sample is negative, then the p-value is equal to
# two times the p-value for the lower-tailed p-value (i.e. 2 * cdf(ts))
p_value = 2 * t.cdf(t_test, dof)
print(f'The p value is: ', p_value)
```
The p value is: 4.461230231620972e-31
The p value is less than 0.05 so we reject the null hypothesis (H0). This means that people with insurance are charged differently!
```python
# Using the appropriate function from scipy.stats to Calculate the T-test for the means of two independent samples.
from scipy.stats import ttest_ind
# (stats.ttest_ind) This is a two-sided test for the null hypothesis that 2 independent samples have identical average (expected) values.
# This test assumes that the populations have identical variances by default.
print(ttest_ind(uninsured_charges, insured_charges))
```
Ttest_indResult(statistic=-11.893299030876712, pvalue=4.461230231620717e-31)
Congratulations! Hopefully you got the exact same numerical results. This shows that you correctly calculated the numbers by hand. Secondly, you used the correct function and saw that it's much easier to use. All you need to do is pass your data to it.
__Q:__ In the above calculations, we assumed the sample variances were equal. We may well suspect they are not (we'll explore this in another assignment). The calculation becomes a little more complicated to do by hand in this case, but we now know of a helpful function. Check the documentation for the function to tell it not to assume equal variances and perform the test again.
__A:__
```python
# Let's not assume equal variances and perform this test again
print(ttest_ind(uninsured_charges, insured_charges, equal_var = False))
```
Ttest_indResult(statistic=-13.298031957975649, pvalue=1.1105103216309125e-37)
__Q:__ Conceptual question: look through the documentation for statistical test functions in scipy.stats. You'll see the above _t_-test for a sample, but can you see an equivalent one for performing a *z*-test from a sample? Comment on your answer.
```python
#import scipy
#help(scipy.stats.zscore)
```
__A:__ `scipy.stats` has a `zscore(a, axis=0, ddof=0)` function, which calculates the z score of each value in the sample relative to the sample mean and standard deviation. It does not, however, provide a dedicated z-test counterpart to `ttest_ind`; a z-test would have to be assembled by hand from `scipy.stats.norm` (or taken from another package such as `statsmodels`), as sketched below.
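The following minimal sketch (not part of the original assignment) shows how a one-sample z-statistic and p-value could be assembled by hand; the hypothesized mean of 12000 is chosen purely for illustration, and it reuses the `medical` DataFrame defined above.

```python
from scipy.stats import norm
import numpy as np

# Illustrative hand-rolled one-sample z-test of H0: mean charge = 12000
mu0 = 12000                                                            # hypothesized mean (illustrative)
xbar = np.mean(medical.charges)                                        # sample mean
se = np.std(medical.charges, ddof=1) / np.sqrt(len(medical.charges))   # standard error of the mean
z = (xbar - mu0) / se
p_two_sided = 2 * (1 - norm.cdf(abs(z)))                               # two-sided p-value from the normal CDF
print('z =', z, ', p =', p_two_sided)
```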
## Learning outcomes
Having completed this project notebook, you now have good hands-on experience:
* using the central limit theorem to help you apply frequentist techniques to answer questions that pertain to very non-normally distributed data from the real world
* performing inference using such data to answer business questions
* forming a hypothesis and framing the null and alternative hypotheses
* testing this using a _t_-test
# [Polytropic TOV](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) Initial Data
## Authors: Phil Chang, Zach Etienne, & Leo Werneck
### Formatting improvements courtesy Brandon Clark
## This module sets up initial data for a [TOV](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) star in *spherical, isotropic coordinates*
**Module Status:** <font color='green'><b> Validated </b></font>
**Validation Notes:** This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution (see [start-to-finish TOV module](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb) for full test). Note that convergence at the surface of the star is lower order due to the sharp drop to zero in $T^{\mu\nu}$.
### NRPy+ Source Code for this module: [TOV/TOV_Solver.py](../edit/TOV/TOV_Solver.py)
[comment]: <> (Introduction: TODO)
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This module is organized as follows:
1. [Step 1](#initializenrpy): **Initialize core Python/NRPy+ modules**
1. [Step 2](#polytropic_eoss): **Polytropic EOSs**
1. [Step 2.a](#polytropic_eoss__continuity_of_pcold): *Continuity of $P_{\rm cold}$*
1. [Step 2.b](#polytropic_eoss__p_poly_tab): *Computing $P_{j}$*
1. [Step 2.c](#polytropic_eoss__parameters_from_input): *Setting up EOS parameters from user input*
1. [Step 2.d](#polytropic_eoss__pcold): *Computing $P_{\rm cold}\left(\rho_{b}\right)$*
1. [Step 2.e](#polytropic_eoss__rhob): *Computing $\rho_{b}\left(P_{\rm cold}\right)$*
1. [Step 2.f](#polytropic_eoss__polytropic_index): *Determining the polytropic index*
1. [Step 2.f.i](#polytropic_eoss__polytropic_index__from_rhob): From $\rho_{b}$
1. [Step 2.f.ii](#polytropic_eoss__polytropic_index__from_pcold): From $P_{\rm cold}$
1. [Step 2.g](#polytropic_eoss__simple_test): *Simple test of our functions*
1. [Step 2.h](#polytropic_eoss__pcold_plot): *Visualizing $P_{\rm cold}\left(\rho_{b}\right)$*
1. [Step 3](#tov): **The TOV Equations**
1. [Step 4](#code_validation): **Code Validation against `TOV.TOV_Solver` NRPy+ module**
1. [Step 5](#latex_pdf_output): **Output this module to $\LaTeX$-formatted PDF**
<a id='initializenrpy'></a>
# Step 1: Initialize core Python/NRPy+ modules \[Back to [top](#toc)\]
$$\label{initializenrpy}$$
```python
# Step 1: Import needed Python/NRPy+ modules
import numpy as np
import scipy.integrate as si
import math
import sys
```
<a id='polytropic_eoss'></a>
# Step 2: Polytropic EOSs \[Back to [top](#toc)\]
$$\label{polytropic_eoss}$$
<a id='polytropic_eoss__continuity_of_pcold'></a>
## Step 2.a: Continuity of $P_{\rm cold}$ \[Back to [top](#toc)\]
$$\label{polytropic_eoss__continuity_of_pcold}$$
Consider a piecewise polytrope EOS of the form
$$
\boxed{
P_{\rm cold} =
\left\{
\begin{matrix}
K_{0}\rho_{b}^{\Gamma_{0}} & , & \rho_{b} \leq \rho_{0}\\
K_{1}\rho_{b}^{\Gamma_{1}} & , & \rho_{0} \leq \rho_{b} \leq \rho_{1}\\
\vdots & & \vdots\\
K_{j}\rho_{b}^{\Gamma_{j}} & , & \rho_{j-1} \leq \rho_{b} \leq \rho_{j}\\
\vdots & & \vdots\\
K_{N-2}\rho_{b}^{\Gamma_{N-2}} & , & \rho_{N-3} \leq \rho_{b} \leq \rho_{N-2}\\
K_{N-1}\rho_{b}^{\Gamma_{N-1}} & , & \rho_{b} \geq \rho_{N-2}
\end{matrix}
\right.
}\ .
$$
The case of a single polytrope is given by the first EOS above, with no condition imposed on the value of $\rho$, i.e.
$$
\boxed{P_{\rm cold} = K_{0}\rho_{b}^{\Gamma_{0}} = K\rho_{b}^{\Gamma}}\ .
$$
Notice that we have the following sets of variables:
$$
\left\{\underbrace{\rho_{0},\rho_{1},\ldots,\rho_{N-2}}_{N-1\ {\rm values}}\right\}\ ;\
\left\{\underbrace{K_{0},K_{1},\ldots,K_{N-1}}_{N\ {\rm values}}\right\}\ ;\
\left\{\underbrace{\Gamma_{0},\Gamma_{1},\ldots,\Gamma_{N-1}}_{N\ {\rm values}}\right\}\ .
$$
Also, notice that $K_{0}$ and the entire sets $\left\{\rho_{0},\rho_{1},\ldots,\rho_{N-2}\right\}$ and $\left\{\Gamma_{0},\Gamma_{1},\ldots,\Gamma_{N-1}\right\}$ must be specified by the user. The values of $\left\{K_{1},\ldots,K_{N-1}\right\}$, on the other hand, are determined by imposing that $P_{\rm cold}$ be continuous, i.e.
$$
P_{\rm cold}\left(\rho_{0}\right) = K_{0}\rho_{0}^{\Gamma_{0}} = K_{1}\rho_{0}^{\Gamma_{1}} \implies
\boxed{K_{1} = K_{0}\rho_{0}^{\Gamma_{0}-\Gamma_{1}}}\ .
$$
Analogously,
$$
\boxed{K_{j} = K_{j-1}\rho_{j-1}^{\Gamma_{j-1}-\Gamma_{j}}\ ,\ j\in\left[1,N-1\right]}\ .
$$
Again, for the case of a single polytropic EOS, the set $\left\{\rho_{j}\right\}$ is empty and $\left\{\Gamma_{j},K_{j}\right\}\to \left\{\Gamma,K\right\}$.
Below we implement a function to set up $\left\{K_{j}\right\}$ for both single and piecewise polytropic EOSs, based on the last boxed equation above.
```python
# Function : impose_continuity_on_P_cold()
# Author(s) : Leo Werneck
# Description : This function populates the array K_poly_tab
# by demanding that P_cold be everywhere continuous
# Dependencies : none
#
# Inputs : eos - named tuple containing the following:
# neos - number of EOSs to be used (single polytrope = 1)
# rho_poly_tab - values of rho distinguish one EOS from the
# other (not required for a single polytrope)
# Gamma_poly_tab - values of Gamma to be used within each EOS
# K_poly_tab - uninitialized, see output variable below
# P_poly_tab - uninitialized, see function
# compute_P_poly_tab() below
# K_poly_tab0 - value of K_poly_tab[0], for the first EOS
#
# Outputs : eos.K_poly_tab - values of K to be used within each EOS, determined
# by imposing that P_cold be everywhere continuous
def impose_continuity_on_P_cold(eos,K_poly_tab0):
# A piecewise polytropic EOS is given by
# .--------------------------------------------------------------------------.
# | / K_0 * rho^(Gamma_0) , rho < rho_0 ; |
# | | K_1 * rho^(Gamma_1) , rho_0 < rho < rho_1 ; |
# | | ... ... |
# | P = < K_j * rho^(Gamma_j) , rho_(j-1) < rho < rho_j ; |
# | | ... ... |
# | | K_(n-2) * rho^(Gamma_(n-2)) , rho_(neos-3) < rho < rho_(neos-2) ; |
# | \ K_(n-1) * rho^(Gamma_(n-1)) , rho > rho_(neos-2) . |
# .--------------------------------------------------------------------------.
# Notice that the case of a single polytropic EOS corresponds to
# the first EOS in the boxed equation above, with no condition on
# rho. Thus we need only return K_poly_tab0.
eos.K_poly_tab[0] = K_poly_tab0
if eos.neos==1:
return
    # For the case of a piecewise polytropic EOS, demanding that P_cold
# be everywhere continuous results in the relation:
# .-----------------------------------------------------.
# | K_j = K_(j-1) * rho_(j-1)^( Gamma_(j-1) - Gamma_j ) |
# .-----------------------------------------------------.
for j in range(1,eos.neos):
eos.K_poly_tab[j] = eos.K_poly_tab[j-1]*eos.rho_poly_tab[j-1]**(eos.Gamma_poly_tab[j-1]-eos.Gamma_poly_tab[j])
return
```
<a id='polytropic_eoss__p_poly_tab'></a>
## Step 2.b: Computing $P_{j}$ \[Back to [top](#toc)\]
$$\label{polytropic_eoss__p_poly_tab}$$
We now define a new set of quantities, $P_{\rm tab}$, used in a similar fashion to the $\rho_{j}$, that is, to determine which EOS we should use. These quantities are defined as:
$$
\boxed{
P_{\rm tab} =
\left\{
\begin{matrix}
P_{0} = K_{0}\rho_{0}^{\Gamma_{0}}& , & P \leq P_{0} \implies \rho_{b} \leq \rho_{0}\\
P_{1} = K_{1}\rho_{1}^{\Gamma_{1}}& , & P_{0}\leq P\leq P_{1} \implies \rho_{0} \leq \rho_{b} \leq \rho_{1}\\
\vdots & & \vdots\\
P_{j} = K_{j}\rho_{j}^{\Gamma_{j}}& , & P_{j-1}\leq P\leq P_{j} \implies \rho_{j-1} \leq \rho_{b} \leq \rho_{j}\\
\vdots & & \vdots\\
P_{N-2} = K_{N-2}\rho_{N-2}^{\Gamma_{N-2}}& , & P_{N-3}\leq P\leq P_{N-2} \implies \rho_{N-3} \leq \rho_{b} \leq \rho_{N-2}\\
- & , & P \geq P_{N-2} \implies \rho_{b} \geq \rho_{N-2}\ .
\end{matrix}
\right.
}
$$
```python
# Function : compute_P_poly_tab()
# Author(s) : Leo Werneck
# Description : This function populates the array eos.P_poly_tab,
# used to distinguish which EOS we are using in the
# case of a piecewise polytropic EOS
# Dependencies : none
#
# Inputs : eos - named tuple containing the following:
# neos - number of EOSs to be used (single polytrope = 1)
# rho_poly_tab - values of rho used to distinguish one EOS from
# the other (not required for a single polytrope)
# Gamma_poly_tab - values of Gamma to be used within each EOS
# K_poly_tab - value of K to be used within each EOS
# P_poly_tab - uninitialized, see output variable below
#
# Outputs : eos.P_poly_tab - values of P used to distinguish one EOS from
# the other (not required for a single polytrope)
def compute_P_poly_tab(eos):
# We now compute the values of P_poly_tab that are used
# to find the appropriate polytropic index and, thus,
# EOS we must use.
# First, if we have a single polytrope EOS, we need to
# do nothing.
if eos.neos==1:
return
# For the case of a piecewise polytropic EOS, we have
# .---------------------------.
# | P_j = K_j*rho_j^(Gamma_j) |
# .---------------------------.
for j in range(eos.neos-1):
        eos.P_poly_tab[j] = eos.K_poly_tab[j]*eos.rho_poly_tab[j]**(eos.Gamma_poly_tab[j])
return
```
<a id='polytropic_eoss__parameters_from_input'></a>
## Step 2.c: Setting up EOS parameters from user input \[Back to [top](#toc)\]
$$\label{polytropic_eoss__parameters_from_input}$$
We now implement a driver function to set up all polytropic EOS related quantities based on the input given by the user. From the given input set:
$$
\left\{n_{\rm eos}, \rho_{j}, \Gamma_{j}, K_{0}\right\}\ ,
$$
the code returns a "C like struct" (a [*named tuple*](https://docs.python.org/3/library/collections.html#collections.namedtuple)) containing
$$
\left\{n_{\rm eos}, \rho_{j}, \Gamma_{j}, K_{j}, P_{j}\right\}\ .
$$
```python
# Function : set_single_or_piecewise_polytrope_EOS_parameters()
# Author(s) : Leo Werneck
# Description : This function determine all polytropic related
# parameters from user input
# Dependencies : impose_continuity_on_P_cold()
# compute_P_poly_tab()
#
# Inputs : neos - number of EOSs to be used (single polytrope = 1)
# rho_poly_tab - values of rho distinguish one EOS from the
# other (not required for a single polytrope)
# Gamma_poly_tab - values of Gamma to be used within each EOS
# K_poly_tab0 - value of K_poly_tab[0], for the first EOS
#
# Outputs : eos - named tuple containing the following:
# neos - number of EOSs to be used (single polytrope = 1)
# rho_poly_tab - values of rho used to distinguish one EOS from
# the other (not required for a single polytrope)
# Gamma_poly_tab - values of Gamma to be used within each EOS
# K_poly_tab - value of K to be used within each EOS
# P_poly_tab - values of P used to distinguish one EOS from
# the other (not required for a single polytrope)
def set_single_or_piecewise_polytrope_EOS_parameters(neos,rho_poly_tab,Gamma_poly_tab,K_poly_tab0):
# Error check #1: Verify if the correct number of rho_poly_tab has been given by the user
if (neos == 1):
pass
elif len(rho_poly_tab) != neos-1:
print("Error: neos="+str(neos)+". Expected "+str(neos-1)+" values of rho_poly_tab, but "+str(len(rho_poly_tab))+" values were given.")
sys.exit(1)
# Error check #2: Verify if the correct number of Gamma_poly_tab has been given by the user
if len(Gamma_poly_tab) != neos:
print("Error: neos="+str(neos)+". Expected "+str(neos)+" values of Gamma_poly_tab, but "+str(len(Gamma_poly_tab))+" values were given.")
sys.exit(2)
    # Create the arrays to store the values of K_poly_tab and P_poly_tab
K_poly_tab = [0 for i in range(neos)]
P_poly_tab = [0 for i in range(neos-1)]
# Create the EOS "struct" (named tuple)
from collections import namedtuple
eos_struct = namedtuple("eos_struct","neos rho_poly_tab Gamma_poly_tab K_poly_tab P_poly_tab")
eos = eos_struct(neos,rho_poly_tab,Gamma_poly_tab,K_poly_tab,P_poly_tab)
    # Step 1: Determine K_poly_tab. For the details, please see the implementation
    #         of the function impose_continuity_on_P_cold() above.
    impose_continuity_on_P_cold(eos,K_poly_tab0)
    # Step 2: Determine P_poly_tab. For the details, please see the
    #         implementation of the function compute_P_poly_tab() above.
    compute_P_poly_tab(eos)
return eos
```
<a id='polytropic_eoss__pcold'></a>
## Step 2.d: Computing $P_{\rm cold}\left(\rho_{b}\right)$ \[Back to [top](#toc)\]
$$\label{polytropic_eoss__pcold}$$
Then, let us compute $P_{\rm cold}$ for a polytropic EOS:
$$
\boxed{
P_{\rm cold} =
\left\{
\begin{matrix}
K_{0}\rho_{b}^{\Gamma_{0}} & , & \rho_{b} \leq \rho_{0}\\
K_{1}\rho_{b}^{\Gamma_{1}} & , & \rho_{0} \leq \rho_{b} \leq \rho_{1}\\
\vdots & & \vdots\\
K_{j}\rho_{b}^{\Gamma_{j}} & , & \rho_{j-1} \leq \rho_{b} \leq \rho_{j}\\
\vdots & & \vdots\\
K_{N-2}\rho_{b}^{\Gamma_{N-2}} & , & \rho_{N-3} \leq \rho_{b} \leq \rho_{N-2}\\
K_{N-1}\rho_{b}^{\Gamma_{N-1}} & , & \rho_{b} \geq \rho_{N-2}
\end{matrix}
\right.
}\ .
$$
```python
# Function : Polytrope_EOS__compute_P_cold_from_rhob()
# Author(s) : Leo Werneck
# Description : This function computes P_cold for a polytropic EOS
# Dependencies : polytropic_index_from_rhob()
#
# Inputs : eos - named tuple containing the following:
# neos - number of EOSs to be used (single polytrope = 1)
# rho_poly_tab - values of rho distinguish one EOS from the
# other (not required for a single polytrope)
# Gamma_poly_tab - values of Gamma to be used within each EOS
# K_poly_tab - value of K to be used within each EOS
# P_poly_tab - values of P used to distinguish one EOS from
# the other (not required for a single polytrope)
# rho_baryon - the value of rho for which we want to
# compute P_cold
#
# Outputs : P_cold - for a single or piecewise polytropic EOS
def Polytrope_EOS__compute_P_cold_from_rhob(eos, rho_baryon):
# Compute the polytropic index from rho_baryon
j = polytropic_index_from_rhob(eos, rho_baryon)
# Return the value of P_cold for a polytropic EOS
# .--------------------------------.
# | P_cold = K_j * rho_b^(Gamma_j) |
# .--------------------------------.
return eos.K_poly_tab[j]*rho_baryon**eos.Gamma_poly_tab[j]
```
<a id='polytropic_eoss__rhob'></a>
## Step 2.e: Computing $\rho_{b}\left(P_{\rm cold}\right)$ \[Back to [top](#toc)\]
$$\label{polytropic_eoss__rhob}$$
Then, let us compute $\rho_{b}$ as a function of $P_{\rm cold}\equiv P$ for a polytropic EOS:
$$
\boxed{
\rho_{b} =
\left\{
\begin{matrix}
\left(\frac{P}{K_{0}}\right)^{1/\Gamma_{0}} & , & P \leq P_{0}\\
\left(\frac{P}{K_{1}}\right)^{1/\Gamma_{1}} & , & P_{0} \leq P \leq P_{1}\\
\vdots & & \vdots\\
\left(\frac{P}{K_{j}}\right)^{1/\Gamma_{j}} & , & P_{j-1} \leq P \leq P_{j}\\
\vdots & & \vdots\\
\left(\frac{P}{K_{N-2}}\right)^{1/\Gamma_{N-2}} & , & P_{N-3} \leq P \leq P_{N-2}\\
\left(\frac{P}{K_{N-1}}\right)^{1/\Gamma_{N-1}} & , & P \geq P_{N-2}
\end{matrix}
\right.
}\ .
$$
```python
# Function : Polytrope_EOS__compute_rhob_from_P_cold()
# Author(s) : Leo Werneck
# Description : This function computes rho_b for a polytropic EOS
# Dependencies : polytropic_index_from_P()
#
# Inputs : eos - named tuple containing the following:
# neos - number of EOSs to be used (single polytrope = 1)
# rho_poly_tab - values of rho distinguish one EOS from the
# other (not required for a single polytrope)
# Gamma_poly_tab - values of Gamma to be used within each EOS
# K_poly_tab - value of K to be used within each EOS
# P_poly_tab - values of P used to distinguish one EOS from
# the other (not required for a single polytrope)
# P - the value of P for which we want to
# compute rho_b
#
# Outputs : rho_baryon - for a single or piecewise polytropic EOS
def Polytrope_EOS__compute_rhob_from_P_cold(eos,P):
# Compute the polytropic index from P
j = polytropic_index_from_P(eos,P)
# Return the value of rho_b for a polytropic EOS
# .----------------------------------.
# | rho_b = (P_cold/K_j)^(1/Gamma_j) |
# .----------------------------------.
return (P/eos.K_poly_tab[j])**(1.0/eos.Gamma_poly_tab[j])
```
<a id='polytropic_eoss__polytropic_index'></a>
## Step 2.f: Determining the polytropic index \[Back to [top](#toc)\]
$$\label{polytropic_eoss__polytropic_index}$$
<a id='polytropic_eoss__polytropic_index__from_rhob'></a>
## Step 2.f.i: From $\rho_{b}$ \[Back to [top](#toc)\]
$$\label{polytropic_eoss__polytropic_index__from_rhob}$$
The function below determines the polytropic index from a given value $\rho_{b} = \rho_{\rm in}$.
```python
# Function : polytropic_index_from_rhob()
# Author(s) : Leo Werneck and Zach Etienne
# Description  : This function computes the polytropic index from rho_b
# Dependencies : none
#
# Input(s) : eos - named tuple containing the following:
# neos - number of EOSs to be used (single polytrope = 1)
# rho_poly_tab - values of rho distinguish one EOS from the
# other (not required for a single polytrope)
# Gamma_poly_tab - values of Gamma to be used within each EOS
# K_poly_tab - value of K to be used within each EOS
# P_poly_tab - values of P used to distinguish one EOS from
# the other (not required for a single polytrope)
# rho_in - value of rho for which we compute the
# polytropic index
#
# Output(s) : polytropic index computed from rho_in
def polytropic_index_from_rhob(eos, rho_in):
# Returns the value of the polytropic index based on rho_in
polytropic_index = 0
if not (eos.neos==1):
for j in range(eos.neos-1):
polytropic_index += (rho_in > eos.rho_poly_tab[j])
return polytropic_index
```
<a id='polytropic_eoss__polytropic_index__from_pcold'></a>
## Step 2.f.ii: From $P_{\rm cold}$ \[Back to [top](#toc)\]
$$\label{polytropic_eoss__polytropic_index__from_pcold}$$
The function below determines the polytropic index from a given value $P_{\rm cold} = P$.
```python
# Function : polytropic_index_from_P()
# Author(s) : Leo Werneck and Zach Etienne
# Description  : This function computes the polytropic index from P_cold
# Dependencies : none
#
# Input(s) : eos - named tuple containing the following:
# neos - number of EOSs to be used (single polytrope = 1)
# rho_poly_tab - values of rho distinguish one EOS from the
# other (not required for a single polytrope)
# Gamma_poly_tab - values of Gamma to be used within each EOS
# K_poly_tab - value of K to be used within each EOS
# P_poly_tab - values of P used to distinguish one EOS from
# the other (not required for a single polytrope)
# P_in - value of P for which we compute the
# polytropic index
#
# Output(s) : polytropic index computed from P_in
def polytropic_index_from_P(eos, P_in):
# Returns the value of the polytropic index based on P_in
polytropic_index = 0
if not (eos.neos==1):
for j in range(eos.neos-1):
polytropic_index += (P_in > eos.P_poly_tab[j])
return polytropic_index
```
<a id='polytropic_eoss__simple_test'></a>
## Step 2.g: Simple test of our functions \[Back to [top](#toc)\]
$$\label{polytropic_eoss__simple_test}$$
We now want to test the functions we have implemented above. In order for us to work with realistic values (i.e. values actually used by researchers), we will implement a simple test using the values from [Table II in J.C. Read *et al.* (2008)](https://arxiv.org/pdf/0812.2163.pdf):
| $\rho_{i}$ | $\Gamma_{i}$ | $K_{\rm expected}$ |
|------------|--------------|--------------------|
|2.44034e+07 | 1.58425 | 6.80110e-09 |
|3.78358e+11 | 1.28733 | 1.06186e-06 |
|2.62780e+12 | 0.62223 | 5.32697e+01 |
| $-$ | 1.35692 | 3.99874e-08 |
First, we have $n_{\rm eos} = 4$. Then, giving our function the values
$$
\begin{align}
\left\{\rho_{j}\right\} &= \left\{\text{2.44034e+07},\text{3.78358e+11},\text{2.62780e+12}\right\}\ ,\\
\left\{\Gamma_{j}\right\} &= \left\{{\rm 1.58425},{\rm 1.28733},{\rm 0.62223},{\rm 1.35692}\right\}\ ,\\
K_{0} &= \text{6.80110e-09}\ ,
\end{align}
$$
we expect to obtain the values
$$
\begin{align}
K_{1} &= \text{1.06186e-06}\ ,\\
K_{2} &= \text{5.32697e+01}\ ,\\
K_{3} &= \text{3.99874e-08}\ .
\end{align}
$$
```python
# Number of equation of states (i.e. polytropes)
# to be used for the solution. The single-polytrope
# case corresponds to neos = 1
neos = 4
# User input #1: The values of rho_b that distinguish one EOS from the other
# Values taken from Table II of J.C. Read et al. (2008)
# https://arxiv.org/pdf/0812.2163.pdf
rho_poly_tab = [2.44034e+07,3.78358e+11,2.62780e+12]
# User input #2: The values of Gamma to be used within each EOS
# Values taken from Table II of J.C. Read et al. (2008)
# https://arxiv.org/pdf/0812.2163.pdf
Gamma_poly_tab = [1.58425,1.28733,0.62223,1.35692]
# User input #3: The value K_0, to be used within the *first* EOS. The other
# values of K_j are determined by imposing that P be everywhere
# continuous
# Value taken from Table II of J.C. Read et al. (2008)
# https://arxiv.org/pdf/0812.2163.pdf
K_poly_tab0 = 6.80110e-09
# Set up EOS parameters
eos = set_single_or_piecewise_polytrope_EOS_parameters(neos,rho_poly_tab,Gamma_poly_tab,K_poly_tab0)
# Validate the results obtained against the expected ones:
K_expected = [6.80110e-09,1.06186e-06,5.32697e+01,3.99874e-08]
for j in range(1,eos.neos):
print("K_"+str(j)+": Expected - obtained = "+str(np.fabs(eos.K_poly_tab[j] - K_expected[j])))
```
K_1: Expected - obtained = 1.5516672146434653e-11
K_2: Expected - obtained = 0.005384583961500766
K_3: Expected - obtained = 4.669172431909315e-12
<a id='polytropic_eoss__pcold_plot'></a>
## Step 2.h: Visualizing $P_{\rm cold}\left(\rho_{b}\right)$ \[Back to [top](#toc)\]
$$\label{polytropic_eoss__pcold_plot}$$
Now let us visualize our results by plotting $P_{\rm cold}$ as a function of $\rho_{b}$. We will make a plot that covers the entire range of $\left\{\rho_{j}\right\}$ so that we can visualize whether or not all of our EOSs are being used. We will also differentiate each EOS by color, ranging from colder regions (blue) to hotter regions (red).
```python
# Let us plot our piecewise polytropic so that we can see what is happening
# First set the number of points in the plot
n_plot = 1000
# Then split the plot by the number of EOSs used in the code
rho_b = [0 for i in range(eos.neos)]
P_cold = [[0 for j in range(n_plot//eos.neos)] for i in range(eos.neos)]
# Then set the plotting limits for each region
lim = [eos.rho_poly_tab[0]/50.0,eos.rho_poly_tab[0],eos.rho_poly_tab[1],eos.rho_poly_tab[2],eos.rho_poly_tab[2]*30.0]
# Then populate the rho_b arrays
for j in range(eos.neos):
    rho_b[j] = np.linspace(lim[j],lim[j+1],n_plot//eos.neos)
# Finally, populate the P array
for i in range(eos.neos):
    for j in range(n_plot//eos.neos):
P_cold[i][j] = Polytrope_EOS__compute_P_cold_from_rhob(eos, rho_b[i][j])
import matplotlib.pyplot as plt
colors = ['blue','green','orange','red']
for i in range(eos.neos):
label=r'EOS #'+str(i+1)+r', parameters: $\left(\Gamma_'+str(i)+r',K_'+str(i)+r'\right)$'
plt.plot(np.log10(rho_b[i]),np.log10(P_cold[i]),label=label,c=colors[i])
plt.legend()
plt.title(r"$\log_{10}\left(P_{\rm cold}\right)\times\log_{10}\left(\rho_{b}\right)$",fontsize=14)
plt.xlabel(r"$\log_{10}\left(\rho_{b}\right)$",fontsize=14)
plt.ylabel(r"$\log_{10}\left(P_{\rm cold}\right)$",fontsize=14)
plt.grid()
plt.savefig("P_cold__rho_b__piecewise_polytrope.png")
plt.close()
from IPython.display import Image
Image("P_cold__rho_b__piecewise_polytrope.png")
```
<a id='tov'></a>
# Step 3: The TOV equations \[Back to [top](#toc)\]
$$\label{tov}$$
The [TOV line element](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) in terms of the *Schwarzschild coordinate* $r$ is written (in the $-+++$ form):
$$
ds^2 = - c^2 e^\nu dt^2 + \left(1 - \frac{2Gm}{rc^2}\right)^{-1} dr^2 + r^2 d\Omega^2,
$$
where $m(r)$ is the mass-energy enclosed at a given $r$, and is equal to the total star's mass outside the stellar radius $r=R$.
In terms of the *isotropic coordinate* $\bar{r}$ with $G=c=1$ (i.e., the coordinate system and units we'd prefer to use), the ($-+++$ form) line element is written:
$$
ds^2 = - e^{\nu} dt^2 + e^{4\phi} \left(d\bar{r}^2 + \bar{r}^2 d\Omega^2\right),
$$
where $\phi$ here is the *conformal factor*.
Setting components of the above line element equal to one another, we get (in $G=c=1$ units):
\begin{align}
r^2 &= e^{4\phi} \bar{r}^2 \implies e^{4\phi} = \frac{r^2}{\bar{r}^2} \\
\left(1 - \frac{2m}{r}\right)^{-1} dr^2 &= e^{4\phi} d\bar{r}^2 \\
\implies \frac{d\bar{r}(r)}{dr} &= \left(1 - \frac{2m}{r} \right)^{-1/2} \frac{\bar{r}(r)}{r}.
\end{align}
The TOV equations provide radial ODEs for the pressure and $\nu$ (from [the Wikipedia article on the TOV solution](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation)):
\begin{align}
\frac{dP}{dr} &= - \frac{1}{r} \left( \frac{\rho + P}{2} \right) \left(\frac{2 m}{r} + 8 \pi r^2 P\right) \left(1 - \frac{2 m}{r}\right)^{-1} \\
\frac{d \nu}{d r} &= \frac{1}{r}\left(1 - \frac{2 m}{r}\right)^{-1} \left(\frac{2 m}{r} + 8 \pi r^2 P\right) \\
\end{align}
Assuming a polytropic equation of state, which relates the pressure $P$ to the baryonic rest-mass density $\rho_B$,
$$
P(\rho_B) = K \rho_B^\Gamma,
$$
the specific internal energy will be given by
$$
\epsilon = \frac{P}{\rho_B (\Gamma - 1)},
$$
so the total mass-energy density $\rho$ is given by
$$
\rho = \rho_B (1 + \epsilon).
$$
Given this, the mass-energy $m(r)$ is the solution to the ODE:
$$
\frac{dm(r)}{dr} = 4\pi r^2 \rho(r)
$$
Thus the full set of ODEs that need to be solved is given by
$$
\boxed{
\begin{align}
\frac{dP}{dr} &= - \frac{1}{r} \left( \frac{\rho + P}{2} \right) \left(\frac{2 m}{r} + 8 \pi r^2 P\right) \left(1 - \frac{2 m}{r}\right)^{-1} \\
\frac{d \nu}{d r} &= \frac{1}{r}\left(1 - \frac{2 m}{r}\right)^{-1} \left(\frac{2 m}{r} + 8 \pi r^2 P\right) \\
\frac{dm(r)}{dr} &= 4\pi r^2 \rho(r) \\
\frac{d\bar{r}(r)}{dr} &= \left(1 - \frac{2m}{r} \right)^{-1/2} \frac{\bar{r}(r)}{r}
\end{align}
}\ .
$$
The following code solves these equations, and was largely written by Phil Chang.
```python
# Step 2: The TOV equations
## TOV SOLVER FOR SINGLE AND PIECEWISE POLYTROPES
## Authors: Phil Chang, Zachariah B. Etienne, Leo Werneck
# Full documentation for this module may be found in the NRPy+ tutorial Jupyter notebook:
# Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb
# Inputs:
# * Output data file name
# * rho_baryon_central, the central density of the TOV star.
# * n, the polytropic equation of state index. n=1 models cold, degenerate neutron star matter.
# * K_Polytrope, the polytropic constant.
# * Verbose output toggle (default = True)
# Output: An initial data file (default file name = "outputTOVpolytrope.txt") that well
# samples the (spherically symmetric) solution both inside and outside the star.
# It is up to the initial data module to perform the 1D interpolation to generate
# the solution at arbitrary radius. The file has the following columns:
# Column 1: Schwarzschild radius
# Column 2: rho(r), *total* mass-energy density (as opposed to baryonic rest-mass density)
# Column 3: P(r), Pressure
# Column 4: m(r), mass enclosed
# Column 5: e^{nu(r)}, g_{tt}(r)
# Column 6: e^{4 phi(r)}, conformal factor g_{rr}(r)
# Column 7: rbar(r), Isotropic radius
# rbar refers to the isotropic radius, and
# R_Schw refers to the Schwarzschild radius
def TOV_rhs(r_Schw, y) :
# In \tilde units
#
P = y[0]
m = y[1]
nu = y[2]
rbar = y[3]
j = polytropic_index_from_P(eos,P)
    Gamma = eos.Gamma_poly_tab[j]
Gam1 = Gamma-1.0
rho_baryon = Polytrope_EOS__compute_rhob_from_P_cold(eos,P)
rho = rho_baryon + P/Gam1 # rho is the *total* mass-energy density!
if( r_Schw < 1e-4 or m <= 0.):
m = 4*math.pi/3. * rho*r_Schw**3
dPdrSchw = -(rho + P)*(4.*math.pi/3.*r_Schw*rho + 4.*math.pi*r_Schw*P)/(1.-8.*math.pi*rho*r_Schw*r_Schw)
drbardrSchw = 1./(1. - 8.*math.pi*rho*r_Schw*r_Schw)**0.5
else:
dPdrSchw = -(rho + P)*(m + 4.*math.pi*r_Schw**3*P)/(r_Schw*r_Schw*(1.-2.*m/r_Schw))
drbardrSchw = 1./(1. - 2.*m/r_Schw)**0.5*rbar/r_Schw
dmdrSchw = 4.*math.pi*r_Schw*r_Schw*rho
dnudrSchw = -2./(P + rho)*dPdrSchw
return [dPdrSchw, dmdrSchw, dnudrSchw, drbardrSchw]
def integrateStar( eos, P, dumpData = False):
integrator = si.ode(TOV_rhs).set_integrator('dop853')
y0 = [P, 0., 0., 0.]
integrator.set_initial_value(y0,0.)
dr_Schw = 1e-5
P = y0[0]
PArr = []
r_SchwArr = []
mArr = []
nuArr = []
rbarArr = []
r_Schw = 0.
while integrator.successful() and P > 1e-9*y0[0] :
P, m, nu, rbar = integrator.integrate(r_Schw + dr_Schw)
r_Schw = integrator.t
dPdrSchw, dmdrSchw, dnudrSchw, drbardrSchw = TOV_rhs( r_Schw+dr_Schw, [P,m,nu,rbar])
dr_Schw = 0.1*min(abs(P/dPdrSchw), abs(m/dmdrSchw))
dr_Schw = min(dr_Schw, 1e-2)
PArr.append(P)
r_SchwArr.append(r_Schw)
mArr.append(m)
nuArr.append(nu)
rbarArr.append(rbar)
M = mArr[-1]
R_Schw = r_SchwArr[-1]
# Apply integration constant to ensure rbar is continuous across TOV surface
for ii in range(len(rbarArr)):
rbarArr[ii] *= 0.5*(np.sqrt(R_Schw*(R_Schw - 2.0*M)) + R_Schw - M) / rbarArr[-1]
nuArr_np = np.array(nuArr)
# Rescale solution to nu so that it satisfies BC: exp(nu(R))=exp(nutilde-nu(r=R)) * (1 - 2m(R)/R)
# Thus, nu(R) = (nutilde - nu(r=R)) + log(1 - 2*m(R)/R)
nuArr_np = nuArr_np - nuArr_np[-1] + math.log(1.-2.*mArr[-1]/r_SchwArr[-1])
r_SchwArrExtend_np = 10.**(np.arange(0.01,5.0,0.01))*r_SchwArr[-1]
r_SchwArr.extend(r_SchwArrExtend_np)
mArr.extend(r_SchwArrExtend_np*0. + M)
PArr.extend(r_SchwArrExtend_np*0.)
exp2phiArr_np = np.append( np.exp(nuArr_np), 1. - 2.*M/r_SchwArrExtend_np)
nuArr.extend(np.log(1. - 2.*M/r_SchwArrExtend_np))
rbarArr.extend( 0.5*(np.sqrt(r_SchwArrExtend_np**2 - 2.*M*r_SchwArrExtend_np) + r_SchwArrExtend_np - M) )
# Appending to a Python array does what one would reasonably expect.
# Appending to a numpy array allocates space for a new array with size+1,
# then copies the data over... over and over... super inefficient.
r_SchwArr_np = np.array(r_SchwArr)
PArr_np = np.array(PArr)
rho_baryonArr_np = np.array(PArr)
for j in range(len(PArr_np)):
# Compute rho_b from P
rho_baryonArr_np[j] = Polytrope_EOS__compute_rhob_from_P_cold(eos,PArr_np[j])
mArr_np = np.array(mArr)
rbarArr_np = np.array(rbarArr)
confFactor_exp4phi_np = (r_SchwArr_np/rbarArr_np)**2
# Compute the *total* mass-energy density (as opposed to the *baryonic* mass density)
    rhoArr_np = []
    for i in range(len(rho_baryonArr_np)):
        # Determine the polytropic index for this pressure, then compute
        # rho = rho_baryon + P/(Gamma - 1)
        polytropic_index = polytropic_index_from_P(eos, PArr_np[i])
        rhoArr_np.append(rho_baryonArr_np[i] + PArr_np[i]/(eos.Gamma_poly_tab[polytropic_index] - 1.))
print(len(r_SchwArr_np),len(rhoArr_np),len(PArr_np),len(mArr_np),len(exp2phiArr_np))
# Special thanks to Leonardo Werneck for pointing out this issue with zip()
if sys.version_info[0] < 3:
np.savetxt("outputTOVpolytrope.txt", zip(r_SchwArr_np,rhoArr_np,PArr_np,mArr_np,exp2phiArr_np,confFactor_exp4phi_np,rbarArr_np),
fmt="%.15e")
else:
np.savetxt("outputTOVpolytrope.txt", list(zip(r_SchwArr_np,rhoArr_np,PArr_np,mArr_np,exp2phiArr_np,confFactor_exp4phi_np,rbarArr_np)),
fmt="%.15e")
return R_Schw, M
############################
# Single polytrope example #
############################
# Set neos = 1 (single polytrope)
neos = 1
# Set rho_poly_tab (not needed for a single polytrope)
rho_poly_tab = []
# Set Gamma_poly_tab
Gamma_poly_tab = [2.0]
# Set K_poly_tab0
K_poly_tab0 = 1. # ZACH NOTES: CHANGED FROM 100.
# Set the eos quantities
eos = set_single_or_piecewise_polytrope_EOS_parameters(neos,rho_poly_tab,Gamma_poly_tab,K_poly_tab0)
# Set initial condition (Pressure computed from central density)
rho_baryon_central = 0.129285
P_initial_condition = Polytrope_EOS__compute_P_cold_from_rhob(eos, rho_baryon_central)
R_Schw_TOV,M_TOV = integrateStar(eos, P_initial_condition, True)
print("Just generated a TOV star with R_Schw = "+str(R_Schw_TOV)+" , M = "+str(M_TOV)+" , M/R_Schw = "+str(M_TOV/R_Schw_TOV)+" .")
```
(1051, 1051, 1051, 1051, 1051)
Just generated a TOV star with R_Schw = 0.956568142523 , M = 0.14050303285288188 , M/R_Schw = 0.1468824086931645 .
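As a quick sanity check (not part of the validation below), the file just written can be loaded and the density profile inspected; per the column description in the code comments above, column 0 is the Schwarzschild radius and column 1 the total mass-energy density. This is only an illustrative sketch, reusing `np` and `R_Schw_TOV` from above.

```python
# Illustrative only: load the TOV output file and plot rho(r_Schw).
import matplotlib.pyplot as plt
tov_data = np.loadtxt("outputTOVpolytrope.txt")
r_Schw_out = tov_data[:, 0]   # Column 1: Schwarzschild radius
rho_out    = tov_data[:, 1]   # Column 2: total mass-energy density
plt.plot(r_Schw_out, rho_out)
plt.xlim(0.0, 2.0*R_Schw_TOV)  # zoom in around the star
plt.xlabel(r"$r_{\rm Schw}$")
plt.ylabel(r"$\rho$")
plt.show()
```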
<a id='code_validation'></a>
# Step 4: Code Validation \[Back to [top](#toc)\]
$$\label{code_validation}$$
<a id='code_validation__single_polytrope_eos'></a>
## Step 4.a: Code Validation against `TOV.TOV_Solver` NRPy+ module \[Back to [top](#toc)\]
$$\label{code_validation__single_polytrope_eos}$$
Here, as a code validation check, we verify agreement in the SymPy expressions for these TOV initial data between
1. this tutorial and
2. the NRPy+ [TOV.TOV_Solver](../edit/TOV/TOV_Solver.py) module.
```python
# Step 3: Code Validation against TOV.TOV_Solver module
import filecmp
import TOV.TOV_Solver as TOV
TOV.TOV_Solver("outputTOVpolytrope-validation.txt",rho_baryon_central=0.129285, \
rho_poly_tab=[],Gamma_poly_tab=[2.0], K_poly_tab0=1.0, \
verbose = False)
if filecmp.cmp('outputTOVpolytrope.txt',
'outputTOVpolytrope-validation.txt') == False:
print("ERROR: TOV initial data test FAILED!")
exit(1)
else:
print("TOV initial data test PASSED.")
```
(1051, 1051, 1051, 1051, 1051)
TOV initial data test PASSED.
<a id='code_validation__piecewise_polytrope_eos'></a>
## Step 4.b: Code Validation against an external code \[Back to [top](#toc)\]
$$\label{code_validation__piecewise_polytrope_eos}$$
We now validate our results against [Joshua Faber's TOV solver](https://ccrg.rit.edu/~jfaber/BNSID/TOV/tov_solver.C), which uses an $n_{\rm eos}=7$ piecewise polytropic EOS. Faber's code uses the following parameters:
| $\rho_{j}$ | $\Gamma_{j}$ | $K_{j}$ |
|--------------|--------------|-------------|
| 2.440619e+07 | 1.58425 | 6.80110e-09 |
| 3.783555e+11 | 1.28733 | $-$ |
| 2.627847e+12 | 0.62223 | $-$ |
| $\rho_{3}$ | 1.35692 | $-$ |
| 5.011872e+14 | $\Gamma_{4}$ | $-$ |
| 1e+15 | $\Gamma_{5}$ | $-$ |
| $-$ | $\Gamma_{6}$ | $-$ |
The values of $\left\{\rho_{0},\rho_{1},\rho_{2},\Gamma_{0},\Gamma_{1},\Gamma_{2},\Gamma_{3},K_{0}\right\}$ are meant to match the values from [table II in Read *et al.*](https://arxiv.org/pdf/0812.2163.pdf). However, Faber's code uses as input
$$
\left\{
\begin{align}
\log_{10}\left(\rho_{0}\right) &= 7.3875\ ,\\
\log_{10}\left(\rho_{1}\right) &= 11.5779\ ,\\
\log_{10}\left(\rho_{2}\right) &= 12.4196\ ,\\
\log_{10}\left(\rho_{4}\right) &= 14.7\ ,\\
\log_{10}\left(\rho_{5}\right) &= 15.0\ ,\\
\end{align}
\right.
$$
causing a slight discrepancy between the table above and [table II in Read *et al.*](https://arxiv.org/pdf/0812.2163.pdf). For example, using $\rho_{0}$ from our table above, we have
$$
\log_{10}\left(\rho_{0}^{\rm Faber}\right) = 7.387499987892311\ ,
$$
while using $\rho_{0}$ from Read *et al.* yields
$$
\log_{10}\left(\rho_{0}^{\rm Readetal}\right) = 7.387450338567011\ ,
$$
i.e.
$$
\frac{\left|\log_{10}\left(\rho_{0}^{\rm Faber}\right) - \log_{10}\left(\rho_{0}^{\rm Readetal}\right)\right|}{\log_{10}\left(\rho_{0}^{\rm Faber}\right)} = 6.720720863798443\times10^{-6}
$$
We will therefore use the table above instead, so as to produce results that more closely resemble those from Faber's code.
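The quoted relative difference can be reproduced with a one-line check (illustrative only, using the two $\rho_{0}$ values from the tables above and the `np` module already imported):

```python
# Quick check of the log10(rho_0) discrepancy quoted above.
log10_rho0_Faber    = np.log10(2.440619e+07)  # rho_0 as used in Faber's solver
log10_rho0_Readetal = np.log10(2.44034e+07)   # rho_0 from Read et al., Table II
print(np.fabs(log10_rho0_Faber - log10_rho0_Readetal)/log10_rho0_Faber)
```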
```python
########################################################
# Piecewise Polytrope EOS - TOV Solver Validation Test #
########################################################
# Initial data table - Following Joshua Faber's TOV solver
# (https://ccrg.rit.edu/~jfaber/BNSID/TOV/tov_solver.C)
# .--------------.---------.-------------.
# | log10(rho_j) | Gamma_j | $K_{j}$ |
# .--------------.---------.-------------.
# | 7.3875 | 1.58425 | 6.80110e-09 |
# | 11.5779 | 1.28733 | - |
# | 12.4196 | 0.62223 | - |
# | log10(rho_3) | 1.35692 | - |
# | 14.7 | Gamma_4 | - |
# | 15.0 | Gamma_5 | - |
# | - | Gamma_6 | - |
# .--------------.---------.-------------.
# Set neos to 7
neos = 7
# Set rho_poly_tab using the table above
# Notice that rho_3 is not yet set
rho_poly_tab = [0 for i in range(neos-1)]
rho_poly_tab[0] = 10.0**(7.3875)
rho_poly_tab[1] = 10.0**(11.5779)
rho_poly_tab[2] = 10.0**(12.4196)
rho_poly_tab[4] = 10.0**(14.7)
rho_poly_tab[5] = 10.0**(15.0)
# Set Gamma_poly_tab using the table above
# Notice that Gamma_4, Gamma_5, and Gamma_6
# are not yet set
Gamma_poly_tab = [0 for i in range(neos)]
Gamma_poly_tab[0] = 1.58425
Gamma_poly_tab[1] = 1.28733
Gamma_poly_tab[2] = 0.62223
Gamma_poly_tab[3] = 1.35692
# Set K_poly_tab0 according to the table above
K_poly_tab0 = 6.80110e-09
```
Faber's code then takes as input $\left\{\log_{10}\left(p_{1}\right),\Gamma_{4},\Gamma_{5},\Gamma_{6}\right\}$, where
$$
\log_{10}\left(p_{1}\right) \equiv \log_{10}\left(K_{3}\right) + \Gamma_{3}\log_{10}\left(\rho_{3}\right)\ .
$$
Thus, we have the EOS
$$
p_{1} = K_{3}\rho_{3}^{\Gamma_{3}} \implies \rho_{3} = \left(\frac{p_{1}}{K_{3}}\right)^{1/\Gamma_{3}}\ .
$$
Now we use
$$
\boxed{K_{3} = K_{2}\rho_{2}^{\Gamma_{2}-\Gamma_{3}}}\ .
$$
This means that our computation of the EOS parameters will have to be done by hand.
```python
# Set up K_poly_tab
# Add K_0 then compute {K_1,K_2,K_3}
K_poly_tab = [0 for i in range(neos)]
K_poly_tab[0] = K_poly_tab0
for j in range(1,4):
K_poly_tab[j] = K_poly_tab[j-1]*rho_poly_tab[j-1]**(Gamma_poly_tab[j-1] - Gamma_poly_tab[j])
# User input: log10(p_1), Gamma_4, Gamma_5, Gamma_6
# We will be using here the SLy EOS parameters found
# in table III of Read et al.
# (https://arxiv.org/pdf/0812.2163.pdf)
c = 2.99 # Speed of light in cm/s
log10_of_p1 = 34.380 # log10(p1), with p1 in dyne/cm^2
log10_of_p1_cgs = log10_of_p1 + 2.0*np.log10(c)
p1 = 10.0**log10_of_p1_cgs # SLy EOS parameter p1
rho_poly_tab[3] = (p1/K_poly_tab[3])**(1.0/Gamma_poly_tab[3]) # Last value needed for rho_poly_tab
Gamma_poly_tab[4] = 3.005 # SLy EOS parameter Gamma_1
Gamma_poly_tab[5] = 2.988 # SLy EOS parameter Gamma_2
Gamma_poly_tab[6] = 2.851 # SLy EOS parameter Gamma_3
# Compute {K_4,K_5,K_6}
for j in range(4,neos):
K_poly_tab[j] = K_poly_tab[j-1]*rho_poly_tab[j-1]**(Gamma_poly_tab[j-1] - Gamma_poly_tab[j])
# Initialize the P_poly_tab array
P_poly_tab = [0 for i in range(neos-1)]
# Set up the EOS "struct" (named tuple)
from collections import namedtuple
eos_struct = namedtuple("eos_struct","neos rho_poly_tab Gamma_poly_tab K_poly_tab P_poly_tab")
eos = eos_struct(neos,rho_poly_tab,Gamma_poly_tab,K_poly_tab,P_poly_tab)
# Compute {P_0,P_1,P_2,P_3,P_4,P_5}
compute_P_poly_tab(eos)
```
```python
# Expected value of rho_poly_tab
rho_expected = [2.440619068041981e+07,3.783554551232308e+11,2.627846540838499e+12,1.462304063288540e+14,5.011872336272714e+14,1.000000000000000e+15]
Ptab_expected = [3.437368046964257e+03,8.524200720315686e+08,2.846981588373988e+09,6.649729273215710e+11,2.693802277376512e+13,2.122099366611129e+14]
Pcgs_expected = [3.536226035543711e+00,8.930653667543710e+00,9.454384658543711e+00,1.182280396446974e+01,1.343036571573900e+01,1.432676571573900e+01]
Gamma_expected = [1.584250000000000e+00,1.287330000000000e+00,6.222299999999999e-01,1.356920000000000e+00,3.005000000000000e+00,2.988000000000000e+00,2.851000000000000e+00]
Kpoly_expected = [6.801100000000000e-09,1.061880536433024e-06,5.327665511164587e+01,3.999272773574940e-08,1.806615432455483e-31,3.211927366633627e-31,3.645572300283066e-29]
for i in range(eos.neos-1):
print("rho_diff_"+str(i)+" = "+str(np.fabs(rho_expected[i] - eos.rho_poly_tab[i])/rho_expected[i]))
for i in range(eos.neos-1):
print("P_diff_"+str(i)+" = "+str(np.fabs(Ptab_expected[i] - eos.P_poly_tab[i])/Ptab_expected[i]))
for i in range(eos.neos):
print("Gamma_diff_"+str(i)+" = "+str(np.fabs(Gamma_expected[i] - eos.Gamma_poly_tab[i])/Gamma_expected[i]))
for i in range(eos.neos):
print("Kpoly_diff_"+str(i)+" = "+str(np.fabs(Kpoly_expected[i] - eos.K_poly_tab[i])/Kpoly_expected[i]))
```
rho_diff_0 = 1.5263710536567174e-16
rho_diff_1 = 0.0
rho_diff_2 = 1.8581041259897853e-16
rho_diff_3 = 2.1131283202035885e+17
rho_diff_4 = 1.247038946855552e-16
rho_diff_5 = 0.0
P_diff_0 = 1.9844282515284264e-15
P_diff_1 = 1.118784444075047e-15
P_diff_2 = 1.339909355649788e-15
P_diff_3 = 3.225064554989832e+23
P_diff_4 = 1.0
P_diff_5 = 1.0
Gamma_diff_0 = 0.0
Gamma_diff_1 = 0.0
Gamma_diff_2 = 0.0
Gamma_diff_3 = 0.0
Gamma_diff_4 = 0.0
Gamma_diff_5 = 0.0
Gamma_diff_6 = 0.0
Kpoly_diff_0 = 0.0
Kpoly_diff_1 = 3.9883627121539447e-16
Kpoly_diff_2 = 1.3336849587705835e-16
Kpoly_diff_3 = 0.0
Kpoly_diff_4 = 1.0
Kpoly_diff_5 = 1.0
Kpoly_diff_6 = 1.0
```python
eos.K_poly_tab
```
[6.8011e-09,
1.0618805364330245e-06,
53.276655111645866,
3.99927277357494e-08,
2.0415487123835052e-33,
3.6296081954241814e-33,
4.119644558459321e-31]
```python
# Let us plot our piecewise polytropic so that we can see what is happening
# First set the number of points in the plot
n_plot = 100000
# Then split the plot by the number of EOSs used in the code
rho_b = [0 for i in range(eos.neos)]
P_cold = [[0 for j in range(n_plot//eos.neos)] for i in range(eos.neos)]
# Then set the plotting limits for each region
lim = [eos.rho_poly_tab[0]/50.0,
eos.rho_poly_tab[0],
eos.rho_poly_tab[1],
eos.rho_poly_tab[2],
eos.rho_poly_tab[3],
eos.rho_poly_tab[4],
eos.rho_poly_tab[5],
eos.rho_poly_tab[5]*30.0]
# Then populate the rho_b arrays
for j in range(eos.neos):
    rho_b[j] = np.linspace(lim[j],lim[j+1],n_plot//eos.neos)
# Finally, populate the P array
for i in range(eos.neos):
    for j in range(n_plot//eos.neos):
P_cold[i][j] = Polytrope_EOS__compute_P_cold_from_rhob(eos, rho_b[i][j])
import matplotlib.pyplot as plt
f = plt.figure(figsize=(12,8))
colors = ['blue',
'green',
'yellow',
'orange',
'magenta',
'pink',
'red']
for i in range(eos.neos):
label=r'EOS #'+str(i+1)+r', parameters: $\left(\Gamma_'+str(i)+r',K_'+str(i)+r'\right)$'
plt.plot(np.log10(rho_b[i]),np.log10(P_cold[i]),label=label,c=colors[i])
plt.legend()
plt.title(r"$\log_{10}\left(P_{\rm cold}\right)\times\log_{10}\left(\rho_{b}\right)$",fontsize=14)
plt.xlabel(r"$\log_{10}\left(\rho_{b}\right)$",fontsize=14)
plt.ylabel(r"$\log_{10}\left(P_{\rm cold}\right)$",fontsize=14)
plt.grid()
plt.xlim(14.5,15.1)
plt.savefig("P_cold__rho_b__piecewise_polytrope.png")
plt.close()
from IPython.display import Image
Image("P_cold__rho_b__piecewise_polytrope.png")
```
<a id='latex_pdf_output'></a>
# Step 5: Output this module to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-ADM_Initial_Data-TOV](Tutorial-ADM_Initial_Data-TOV.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```python
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-ADM_Initial_Data-TOV.ipynb
!pdflatex -interaction=batchmode Tutorial-ADM_Initial_Data-TOV.tex
!pdflatex -interaction=batchmode Tutorial-ADM_Initial_Data-TOV.tex
!pdflatex -interaction=batchmode Tutorial-ADM_Initial_Data-TOV.tex
!rm -f Tut*.out Tut*.aux Tut*.log
```
[NbConvertApp] Converting notebook Tutorial-ADM_Initial_Data-TOV.ipynb to latex
/Users/werneck/anaconda3/lib/python3.7/site-packages/nbconvert/utils/pandoc.py:52: RuntimeWarning: You are using an unsupported version of pandoc (2.2.3.2).
Your version must be at least (1.12.1) but less than (2.0.0).
Refer to http://pandoc.org/installing.html.
Continuing with doubts...
check_pandoc_version()
Your version must be at least (1.12.1) but less than (2.0.0).
Refer to http://pandoc.org/installing.html.
Continuing with doubts...
check_pandoc_version()
/Users/werneck/anaconda3/lib/python3.7/site-packages/nbconvert/utils/pandoc.py:52: RuntimeWarning: You are using an unsupported version of pandoc (2.2.3.2).
Your version must be at least (1.12.1) but less than (2.0.0).
Refer to http://pandoc.org/installing.html.
Continuing with doubts...
check_pandoc_version()
/Users/werneck/anaconda3/lib/python3.7/site-packages/nbconvert/utils/pandoc.py:52: RuntimeWarning: You are using an unsupported version of pandoc (2.2.3.2).
Your version must be at least (1.12.1) but less than (2.0.0).
Refer to http://pandoc.org/installing.html.
Continuing with doubts...
check_pandoc_version()
[NbConvertApp] Support files will be in Tutorial-ADM_Initial_Data-TOV_files/
[NbConvertApp] Making directory Tutorial-ADM_Initial_Data-TOV_files
[NbConvertApp] Writing 97247 bytes to Tutorial-ADM_Initial_Data-TOV.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.19 (TeX Live 2018) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.19 (TeX Live 2018) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.19 (TeX Live 2018) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
| bebe0ad45bac69218276d87488fc28c1a22c32fd | 126,377 | ipynb | Jupyter Notebook | Tutorial-ADM_Initial_Data-TOV.ipynb | dinatraykova/nrpytutorial | 74d1bab0c45380727975568ba956b69c082e2293 | ["BSD-2-Clause"] | null | null | null | Tutorial-ADM_Initial_Data-TOV.ipynb | dinatraykova/nrpytutorial | 74d1bab0c45380727975568ba956b69c082e2293 | ["BSD-2-Clause"] | null | null | null | Tutorial-ADM_Initial_Data-TOV.ipynb | dinatraykova/nrpytutorial | 74d1bab0c45380727975568ba956b69c082e2293 | ["BSD-2-Clause"] | 2 | 2019-11-14T03:31:18.000Z | 2019-12-12T13:42:52.000Z | 78.543816 | 49,184 | 0.71282 | true | 18,684 | Qwen/Qwen-72B | 1. YES 2. YES | 0.884039 | 0.661923 | 0.585166 | __label__eng_Latn | 0.674863 | 0.197866 |
```python
from estado import *
from sympy import *
import copy
```
```python
import numpy as np
from sympy import *
import copy
```
```python
# pot = 120000
# Tmin = 305 #kelvin
# Tmax = 1500 #kelvin
# razao_compr = 15/1
# Pmin = 100
# k = 1.4 #constante ar
# Cp = 1.004
#
#
#
#
#
# T1 = Tmin
# T3 = Tmax
# P1 = Pmin
# P2 = P1*razao_compr
# P3 = P2
# P4 = P1
#
# T4 = T3*(P4/P3)**((k-1)/k)
#
# T2 = T1*(P2/P1)**((k-1)/k)
#
# DeltaH43 = Cp*(T4-T3)
#
# DeltaH21 = Cp*(T2-T1)
#
#
# mponto = -pot/(DeltaH43+DeltaH21)
#
# Pot_turb = -mponto*DeltaH43
#
# Efic = 1 - 1/(razao_compr**((k-1)/k))
#
# print(Pot_turb/1000, 'Pot turb')
# print(Efic*100, 'Eficiencia')
#
#
```
```python
```
```python
# pot = 15000
# Tmin = 27 + 273.15 #kelvin
# # Tmax = 1500 #kelvin
# Calor = 950
# razao_compr = 14/1
# Pmin = 100
#
# k = 1.4 # air constant
# Cp = 1.004
#
# T1 = Tmin
# # T3 = Tmax
# P1 = Pmin
# P2 = P1*razao_compr
# P3 = P2
# P4 = P1
# R = 0.287
# #
# arr = np.array([['300', str(T1), '320'],
# ['300.47','','320.58']])
#
# h1 = interpolacao(arr)
#
#
# arr = np.array([['300', str(T1), '320'],
# ['6.86926','','6.93413']])
#
# s1 = interpolacao(arr)
#
# s2 = s1 + R * log(razao_compr)
#
# arr = np.array([['7.61090', str(s2), '7.64448'],
# ['620','','640']])
#
# T2 = interpolacao(arr)
#
# arr = np.array([['628.38', '', '649.53'],
# ['620',str(T2),'640']])
#
# h2 = interpolacao(arr)
#
#
# h3 = h2 + Calor
#
# arr = np.array([['1450', '', '1500'],
# ['1575.40',str(h3),'1635.80']])
#
# T3 = interpolacao(arr)
#
# arr = np.array([['1450', str(T3), '1500'],
# ['8.57111','','8.61208']])
#
# s3 = interpolacao(arr)
#
# s4 = s3 + R * log(1/razao_compr)
#
# arr = np.array([['7.80008', str(s4), '7.82905'],
# ['756.73','','778.46']])
#
# h4 = interpolacao(arr)
#
# mponto = -pot/(h4 - h3 + h2 - h1)
#
#
# Pot_turb = -mponto*DeltaH43
#
# Efic = 1 - 1/(razao_compr**((k-1)/k))
#
# print(mponto, 'Vaz mass')
# print(T3 , 'Tmax')
#
```
```python
# s4
```
```python
# Tfria = -10
# Tamb = 25
# material =
#
# _estado34 = estado('nh3','saturado',T=Tamb)
# _estado34.propriedade_dado_titulo(1)
# estado3 = copy.copy(_estado34)
# _estado34.propriedade_dado_titulo(0)
# estado4 = copy.copy(_estado34)
#
#
# _estado12 = estado('nh3','saturado',T=Tfria)
# _estado12.titulo_dada_propriedade('specific_entropy',estado4.specific_entropy)
#
#
# estado1 = copy.copy(_estado12)
# _estado12.titulo_dada_propriedade('specific_entropy',estado3.specific_entropy)
# estado2 = copy.copy(_estado12)
```
```python
```
1.248
| 0e9f856ff3ac3ed387ba93aa94a7b7630c1903c9 | 6,076 | ipynb | Jupyter Notebook | Thermo/testes.ipynb | victorathanasio/Personal-projects | 94c870179cec32aa733a612a6faeb047df16d977 | ["MIT"] | null | null | null | Thermo/testes.ipynb | victorathanasio/Personal-projects | 94c870179cec32aa733a612a6faeb047df16d977 | ["MIT"] | null | null | null | Thermo/testes.ipynb | victorathanasio/Personal-projects | 94c870179cec32aa733a612a6faeb047df16d977 | ["MIT"] | null | null | null | 22.094545 | 90 | 0.40372 | true | 1,047 | Qwen/Qwen-72B | 1. YES 2. YES | 0.7773 | 0.787931 | 0.612459 | __label__por_Latn | 0.081911 | 0.261277 |
```python
# Header starts here.
from sympy.physics.units import *
from sympy import *
# Rounding:
import decimal
from decimal import Decimal as DX
from copy import deepcopy
def iso_round(obj, pv, rounding=decimal.ROUND_HALF_EVEN):
import sympy
"""
Rounding acc. to DIN EN ISO 80000-1:2013-08
place value = Rundestellenwert
"""
assert pv in set([
# place value # round to:
1, # 1
0.1, # 1st digit after decimal
0.01, # 2nd
0.001, # 3rd
0.0001, # 4th
0.00001, # 5th
0.000001, # 6th
0.0000001, # 7th
0.00000001, # 8th
0.000000001, # 9th
0.0000000001, # 10th
])
objc = deepcopy(obj)
try:
tmp = DX(str(float(objc)))
objc = tmp.quantize(DX(str(pv)), rounding=rounding)
except:
for i in range(len(objc)):
tmp = DX(str(float(objc[i])))
objc[i] = tmp.quantize(DX(str(pv)), rounding=rounding)
return objc
# LateX:
kwargs = {}
kwargs["mat_str"] = "bmatrix"
kwargs["mat_delim"] = ""
# kwargs["symbol_names"] = {FB: "F^{\mathsf B}", }
# Units:
(k, M, G ) = ( 10**3, 10**6, 10**9 )
(mm, cm) = ( m/1000, m/100 )
Newton = kg*m/s**2
Pa = Newton/m**2
MPa = M*Pa
GPa = G*Pa
kN = k*Newton
half = S(1)/2
# Header ends here.
#
# https://colab.research.google.com/github/kassbohm/tm-snippets/blob/master/ipynb/TM_1/2_ZK/1.2.F_cc.ipynb
# Given symbols:
F1, F2 = var("F1, F2", real=true)
a1, a2 = var("a1, a2", real=true)
sub_list=[
( F1, 6 *Newton ),
( F2, 2 *Newton ),
( a1, 60 *pi/180 ),
( a2, 45 *pi/180 ),
]
p1 = pi + a1
p2 = -a2
c1, s1 = cos(p1), sin(p1)
c2, s2 = cos(p2), sin(p2)
prec = 0.01
# for x in [c1, s1, c2, s2]:
# tmp = x
# tmp = tmp.subs(sub_list)
# pprint(iso_round(tmp, prec))
e1 = Matrix([c1, s1])
e2 = Matrix([c2, s2])
F1 = F1 * e1
F2 = F2 * e2
R = F1 + F2
pprint("\n(x,y)-comps of R / N:")
tmp = R
tmp = tmp.subs(sub_list)
tmp /= Newton
pprint(iso_round(tmp, prec))
pprint("\nMagnitude of R / N:")
Rmag = R.norm()
tmp = Rmag
tmp = tmp.subs(sub_list)
tmp /= Newton
pprint(iso_round(tmp, prec))
prec = 1
pprint("\nphi_R / deg:")
Rx, Ry = R[0], R[1]
Rx = Rx.subs(sub_list)
Ry = Ry.subs(sub_list)
Rx = Rx.simplify()
Ry = Ry.simplify()
tmp = Ry / (Rmag + Rx)
tmp = tmp.subs(sub_list)
tmp = tmp.simplify()
tmp = atan(tmp)
tmp = 2*tmp*180/pi
pprint(iso_round(tmp, prec))
exit()
# (x,y)-comps of R / N:
# ⎡-1.59⎤
# ⎢ ⎥
# ⎣-6.61⎦
#
# Magnitude of R / N:
# 6.80
#
# phi_R / deg:
# -103
```
| ce1866be65fba92208885622c31cf259764d4afa | 5,045 | ipynb | Jupyter Notebook | ipynb/TM_1/2_ZK/1.2.F_cc.ipynb | kassbohm/tm-snippets | 5e0621ba2470116e54643b740d1b68b9f28bff12 | ["MIT"] | null | null | null | ipynb/TM_1/2_ZK/1.2.F_cc.ipynb | kassbohm/tm-snippets | 5e0621ba2470116e54643b740d1b68b9f28bff12 | ["MIT"] | null | null | null | ipynb/TM_1/2_ZK/1.2.F_cc.ipynb | kassbohm/tm-snippets | 5e0621ba2470116e54643b740d1b68b9f28bff12 | ["MIT"] | null | null | null | 30.575758 | 119 | 0.373835 | true | 1,018 | Qwen/Qwen-72B | 1. YES 2. YES | 0.859664 | 0.749087 | 0.643963 | __label__eng_Latn | 0.21341 | 0.334473 |
<a href="https://colab.research.google.com/github/praveentn/ml-repos/blob/master/qc/pennylane/QML_Regression_Model_01.ipynb" target="_parent"></a>
***QUANTUM MACHINE LEARNING MODEL PREDICTOR FOR CONTINUOUS VARIABLE***
By Roberth Saénz Pérez Alvarado, [email protected], [email protected]
According to the paper "Predicting toxicity by quantum machine learning" (Teppei Suzuki, Michio Katouda 2020), https://arxiv.org/abs/2008.07715, it is possible to predict continuous variables using 2 qubits per feature, applying encodings, variational circuits and some linear transformations on the expectation values in order to predict values close to the real target.
Image from: https://arxiv.org/ftp/arxiv/papers/2008/2008.07715.pdf
I uploaded the following example from https://pennylane.ai/qml/demos/quantum_neural_net using the PennyLane libraries: a short dataset with one input variable and one output, so that the processing does not take too much time.
```
pip install numba==0.49.1
```
Collecting numba==0.49.1
Downloading numba-0.49.1-cp37-cp37m-manylinux2014_x86_64.whl (3.6 MB)
[K |████████████████████████████████| 3.6 MB 5.2 MB/s
[?25hCollecting llvmlite<=0.33.0.dev0,>=0.31.0.dev0
Downloading llvmlite-0.32.1-cp37-cp37m-manylinux1_x86_64.whl (20.2 MB)
[K |████████████████████████████████| 20.2 MB 67.7 MB/s
[?25hRequirement already satisfied: numpy>=1.15 in /usr/local/lib/python3.7/dist-packages (from numba==0.49.1) (1.19.5)
Requirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from numba==0.49.1) (57.2.0)
Installing collected packages: llvmlite, numba
Attempting uninstall: llvmlite
Found existing installation: llvmlite 0.34.0
Uninstalling llvmlite-0.34.0:
Successfully uninstalled llvmlite-0.34.0
Attempting uninstall: numba
Found existing installation: numba 0.51.2
Uninstalling numba-0.51.2:
Successfully uninstalled numba-0.51.2
Successfully installed llvmlite-0.32.1 numba-0.49.1
```
pip install tensornetwork==0.3
```
Collecting tensornetwork==0.3
Downloading tensornetwork-0.3.0-py3-none-any.whl (216 kB)
[K |████████████████████████████████| 216 kB 5.3 MB/s
[?25hRequirement already satisfied: opt-einsum>=2.3.0 in /usr/local/lib/python3.7/dist-packages (from tensornetwork==0.3) (3.3.0)
Requirement already satisfied: h5py>=2.9.0 in /usr/local/lib/python3.7/dist-packages (from tensornetwork==0.3) (3.1.0)
Collecting graphviz>=0.11.1
Downloading graphviz-0.17-py3-none-any.whl (18 kB)
Requirement already satisfied: scipy>=1.1 in /usr/local/lib/python3.7/dist-packages (from tensornetwork==0.3) (1.4.1)
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.7/dist-packages (from tensornetwork==0.3) (1.19.5)
Requirement already satisfied: cached-property in /usr/local/lib/python3.7/dist-packages (from h5py>=2.9.0->tensornetwork==0.3) (1.5.2)
Installing collected packages: graphviz, tensornetwork
Attempting uninstall: graphviz
Found existing installation: graphviz 0.10.1
Uninstalling graphviz-0.10.1:
Successfully uninstalled graphviz-0.10.1
Successfully installed graphviz-0.17 tensornetwork-0.3.0
```
!pip install pennylane pennylane-sf
import pennylane
dev = pennylane.device('default.qubit', wires=2)
```
Collecting pennylane
Downloading PennyLane-0.16.0-py3-none-any.whl (514 kB)
[K |████████████████████████████████| 514 kB 5.4 MB/s
[?25hCollecting pennylane-sf
Downloading PennyLane_SF-0.16.0-py3-none-any.whl (29 kB)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from pennylane) (1.19.5)
Collecting semantic-version==2.6
Downloading semantic_version-2.6.0-py3-none-any.whl (14 kB)
Requirement already satisfied: networkx in /usr/local/lib/python3.7/dist-packages (from pennylane) (2.6.2)
Collecting autoray
Downloading autoray-0.2.5-py3-none-any.whl (16 kB)
Requirement already satisfied: autograd in /usr/local/lib/python3.7/dist-packages (from pennylane) (1.3)
Requirement already satisfied: appdirs in /usr/local/lib/python3.7/dist-packages (from pennylane) (1.4.4)
Requirement already satisfied: toml in /usr/local/lib/python3.7/dist-packages (from pennylane) (0.10.2)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from pennylane) (1.4.1)
Collecting strawberryfields>=0.15
Downloading StrawberryFields-0.18.0-py3-none-any.whl (4.9 MB)
[K |████████████████████████████████| 4.9 MB 47.1 MB/s
[?25hRequirement already satisfied: python-dateutil>=2.8.0 in /usr/local/lib/python3.7/dist-packages (from strawberryfields>=0.15->pennylane-sf) (2.8.2)
Collecting thewalrus>=0.15.0
Downloading thewalrus-0.15.1-cp37-cp37m-manylinux2010_x86_64.whl (3.3 MB)
[K |████████████████████████████████| 3.3 MB 45.6 MB/s
[?25hCollecting urllib3>=1.25.3
Downloading urllib3-1.26.6-py2.py3-none-any.whl (138 kB)
[K |████████████████████████████████| 138 kB 56.8 MB/s
[?25hCollecting quantum-blackbird>=0.3.0
Downloading quantum_blackbird-0.3.0-py3-none-any.whl (47 kB)
[K |████████████████████████████████| 47 kB 4.5 MB/s
[?25hRequirement already satisfied: requests>=2.22.0 in /usr/local/lib/python3.7/dist-packages (from strawberryfields>=0.15->pennylane-sf) (2.23.0)
Requirement already satisfied: sympy>=1.5 in /usr/local/lib/python3.7/dist-packages (from strawberryfields>=0.15->pennylane-sf) (1.7.1)
Requirement already satisfied: numba in /usr/local/lib/python3.7/dist-packages (from strawberryfields>=0.15->pennylane-sf) (0.49.1)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.8.0->strawberryfields>=0.15->pennylane-sf) (1.15.0)
Collecting antlr4-python3-runtime==4.8
Downloading antlr4-python3-runtime-4.8.tar.gz (112 kB)
[K |████████████████████████████████| 112 kB 58.7 MB/s
[?25hRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests>=2.22.0->strawberryfields>=0.15->pennylane-sf) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests>=2.22.0->strawberryfields>=0.15->pennylane-sf) (2.10)
Collecting urllib3>=1.25.3
Downloading urllib3-1.25.11-py2.py3-none-any.whl (127 kB)
[K |████████████████████████████████| 127 kB 56.4 MB/s
[?25hRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests>=2.22.0->strawberryfields>=0.15->pennylane-sf) (2021.5.30)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.7/dist-packages (from sympy>=1.5->strawberryfields>=0.15->pennylane-sf) (1.2.1)
Requirement already satisfied: dask[delayed] in /usr/local/lib/python3.7/dist-packages (from thewalrus>=0.15.0->strawberryfields>=0.15->pennylane-sf) (2.12.0)
Collecting repoze.lru>=0.7
Downloading repoze.lru-0.7-py3-none-any.whl (10 kB)
Requirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from numba->strawberryfields>=0.15->pennylane-sf) (57.2.0)
Requirement already satisfied: llvmlite<=0.33.0.dev0,>=0.31.0.dev0 in /usr/local/lib/python3.7/dist-packages (from numba->strawberryfields>=0.15->pennylane-sf) (0.32.1)
Requirement already satisfied: future>=0.15.2 in /usr/local/lib/python3.7/dist-packages (from autograd->pennylane) (0.16.0)
Requirement already satisfied: cloudpickle>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from dask[delayed]->thewalrus>=0.15.0->strawberryfields>=0.15->pennylane-sf) (1.3.0)
Requirement already satisfied: toolz>=0.7.3 in /usr/local/lib/python3.7/dist-packages (from dask[delayed]->thewalrus>=0.15.0->strawberryfields>=0.15->pennylane-sf) (0.11.1)
Building wheels for collected packages: antlr4-python3-runtime
Building wheel for antlr4-python3-runtime (setup.py) ... [?25l[?25hdone
Created wheel for antlr4-python3-runtime: filename=antlr4_python3_runtime-4.8-py3-none-any.whl size=141230 sha256=4d75569f8698ee08b786e0af09ac750ef6993fd4fc8955bfeb65aadfa82d77e7
Stored in directory: /root/.cache/pip/wheels/ca/33/b7/336836125fc9bb4ceaa4376d8abca10ca8bc84ddc824baea6c
Successfully built antlr4-python3-runtime
Installing collected packages: urllib3, repoze.lru, antlr4-python3-runtime, thewalrus, semantic-version, quantum-blackbird, autoray, strawberryfields, pennylane, pennylane-sf
Attempting uninstall: urllib3
Found existing installation: urllib3 1.24.3
Uninstalling urllib3-1.24.3:
Successfully uninstalled urllib3-1.24.3
[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
datascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible.[0m
Successfully installed antlr4-python3-runtime-4.8 autoray-0.2.5 pennylane-0.16.0 pennylane-sf-0.16.0 quantum-blackbird-0.3.0 repoze.lru-0.7 semantic-version-2.6.0 strawberryfields-0.18.0 thewalrus-0.15.1 urllib3-1.25.11
```
import pennylane as qml
from pennylane import numpy as np
from pennylane.optimize import AdamOptimizer
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
```
```
```
```
import urllib
from io import StringIO
SINE_URL = "https://raw.githubusercontent.com/XanaduAI/pennylane/v0.3.0/examples/data/sine.txt"
count = 0
sine = []
data = urllib.request.urlopen(SINE_URL)
for line in data:
d = line.decode("utf-8").strip()
sine.append(d.split(" "))
# print(line.decode("utf-8"))
df = pd.DataFrame(sine)
df.head()
```
                               0                           1
    0  -1.126864863386807247e-01  -3.708159070558681991e-01
    1  -3.301285119623154074e-01  -7.792427358570894746e-01
    2   5.437373320144804900e-01   1.054098341344321677e+00
    3   8.070486641099010594e-01   5.322417543552232511e-01
    4   5.553805663845667873e-01   9.748881583874732248e-01
```
df.describe()
```
                    0          1
    count   50.000000  50.000000
    mean     0.051329  -0.000046
    std      0.554555   0.799718
    min     -0.865836  -1.167565
    25%     -0.457141  -0.802728
    50%      0.088861   0.100107
    75%      0.542681   0.714016
    max      0.967816   1.293714
```
print(df.dtypes)
df[0] = df[0].astype('float')
df[1] = df[1].astype('float')
print(df.dtypes)
```
0 object
1 object
dtype: object
0 float64
1 float64
dtype: object
```
```
We use the dataset from https://pennylane.ai/qml/demos/quantum_neural_net.html to evaluate whether the algorithm can capture the non-linearity of this data.
```
# # data = np.loadtxt("sine.txt")
# data = pd.read_url(SINE_URL)
# print(data.shape)
# X = data[:, 0]
# Y = data[:, 1]
```
```
X = df[0].values
Y = df[1].values
X.shape, Y.shape
```
((50,), (50,))
For encoding the data, the authors (Teppei Suzuki, Michio Katouda 2020) use qubit rotations followed by two-qubit entangling gates, in the form U(x)-CNOT-U(x)-CNOT.
```
def statepreparation(x, nqbits):
qml.RY(x, wires=[0])
qml.RZ(x, wires=[0])
qml.RY(x, wires=[1])
qml.RZ(x, wires=[1])
for q in range(nqbits-1):
qml.CNOT(wires=[q, q+1])
qml.RY(x, wires=[0])
qml.RZ(x, wires=[0])
qml.RY(x, wires=[1])
qml.RZ(x, wires=[1])
for q in range(nqbits-1):
qml.CNOT(wires=[q, q+1])
```
Then we apply variational circuits constructed from "ℓ" layers, each consisting of single-qubit rotations 𝑈ℓ(𝜽ℓ) and two-qubit entangler blocks comprising CNOT gates.
```
def layer(theta):
nqbits=len(theta[0])
nlayer=len(theta)
for l in range(nlayer):
for i in range(nqbits-1):
qml.CNOT(wires=[i, i+1])
for q in range(nqbits):
theta0=theta[l][q][0]
theta1=theta[l][q][1]
theta2=theta[l][q][2]
qml.RX(theta0, wires=[q])
qml.RZ(theta1, wires=[q])
qml.RX(theta2, wires=[q])
```
Then we design the quantum circuit so that its output is a list of expectation values measured with the Pauli Z operator on each qubit.
```
nqbits=2
dev = qml.device("default.qubit", wires=nqbits)
@qml.qnode(dev)
def qcircuit(theta, x):
measure=[]
nqbits=len(theta[0])
statepreparation(x, nqbits)
layer(theta)
for i in range(nqbits):
zeta=qml.expval(qml.PauliZ([i]))
measure.append(zeta)
return measure
```
Once we have the set of expectation values, we use this list of measurements "m" as the input to a multiple linear model. So for a set of expectation values from 𝑀 qubits for the 𝑖th data point, the predicted value 𝑦 can be expressed as:
Image from: https://arxiv.org/ftp/arxiv/papers/2008/2008.07715.pdf
With M = the set of "m"s and the optimized "betas":
Image from: https://arxiv.org/ftp/arxiv/papers/2008/2008.07715.pdf
There is a scalar factor fz for the observable quantities, which selects how many of the observables are used depending on the model; in this case this hyperparameter equals the full set of observables, fz = 2.
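In explicit form (matching the `betas_matrix` and `predictor` functions implemented below; the notation here is ours rather than the paper's):

$$\hat{y}_i = \sum_{j=1}^{f_z} \beta_j \, m_{ij}, \qquad \boldsymbol{\beta} = \left(\mathbf{M}^{\mathsf T}\mathbf{M}\right)^{-1}\mathbf{M}^{\mathsf T}\mathbf{y},$$

where $m_{ij}$ is the $j$th expectation value obtained for the $i$th input and $\mathbf{M}$ stacks these measurement vectors row-wise.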
```
def betas_matrix(m,y):
mq=np.array(m)
betas= np.matmul(np.matmul(np.linalg.inv(np.matmul(np.transpose(mq),mq )),np.transpose(mq)),y)
return betas
```
```
def betas_model(theta,x,y):
measure=[]
for i in range(len(x)):
m=qcircuit(theta, x[i])
measure.append(m)
mq=measure
betas=betas_matrix(mq,y)
return betas
```
```
def predictor(theta,x,fz,betas):
m=qcircuit(theta, x)
m=m[0:fz]
matriz_pr=[]
for i in range(fz):
pr=m[i,]*betas[i,]
matriz_pr.append(pr)
pred=np.sum(matriz_pr)
return pred
```
```
def square_loss(labels, predictions):
loss = 0
for l, p in zip(labels, predictions):
loss = loss + (l - p) ** 2
loss = loss / len(labels)
return loss
```
```
def cost(theta,x,y,fz):
betas=betas_model(theta,x,y)
predi=[]
for i in range(len(x)):
pred=predictor(theta,x[i],fz,betas)
predi.append(pred)
predic=predi
res=square_loss(y, predic)
return res
```
Setting the hyperparameters to 2 qubits, 3 layers and fz=2, and applying the Adam optimizer to get the values of theta that minimize the cost function:
```
opt = AdamOptimizer(0.01, beta1=0.9, beta2=0.999)
num_qubits = 2
num_layers = 3
fz=2
theta_init = 0.01 * np.random.randn(num_layers, num_qubits, 3)
theta = theta_init
```
```
print(theta_init)
```
[[[-0.01066219 -0.00466144 0.00348735]
[-0.00140246 -0.02235699 -0.01095535]]
[[-0.0060416 0.00712165 -0.02804185]
[ 0.02580493 0.00618456 0.00589463]]
[[-0.00993859 0.00256266 -0.00243132]
[ 0.01367501 -0.02210988 0.01655903]]]
```
for it in range(10):
theta = opt.step(lambda v: cost(v, X, Y, fz), theta)
betas = betas_model(theta,X,Y)
predics =[]
for i in range(len(X)):
p = predictor(theta,X[i],fz,betas)
predics.append(p)
print("Iter: {:5d} | Cost: {:0.7f} | R2 Score {:0.5f}".format(it + 1, cost(theta, X, Y, fz), r2_score(Y, predics)))
```
Iter: 1 | Cost: 0.0203546 | R2 Score 0.96752
Iter: 2 | Cost: 0.0202555 | R2 Score 0.96768
Iter: 3 | Cost: 0.0201583 | R2 Score 0.96784
Iter: 4 | Cost: 0.0200629 | R2 Score 0.96799
Iter: 5 | Cost: 0.0199695 | R2 Score 0.96814
Iter: 6 | Cost: 0.0198779 | R2 Score 0.96828
Iter: 7 | Cost: 0.0197879 | R2 Score 0.96843
Iter: 8 | Cost: 0.0196991 | R2 Score 0.96857
Iter: 9 | Cost: 0.0196113 | R2 Score 0.96871
Iter: 10 | Cost: 0.0195242 | R2 Score 0.96885
```
print(theta, betas)
```
[[[-0.02397043 0.35283109 -0.01008763]
[ 0.04388198 0.35952261 0.0726559 ]]
[[-0.16938529 0.33715347 -0.18493568]
[ 0.10941618 0.3624669 0.11444983]]
[[-0.04905933 0.58524372 -0.06179887]
[ 0.12223021 0.33308673 0.15268837]]] [-2.62630513 3.22373378]
We collect the predictions of the trained model for 50 values in the range [−1,1], just like the example of https://pennylane.ai/qml/demos/quantum_neural_net.html
```
x_pred = np.linspace(-1, 1, 50)
y_pred=[]
for i in range(len(X)):
new_pred=predictor(theta,x_pred[i],fz,betas)
y_pred.append(new_pred)
```
Finally, plotting the results we see that the model (red dots) has learned from the data (blue dots) and has a similar shape and non-linear character to the original.
```
```
### With 10 iterations
```
plt.figure()
plt.scatter(X, Y, color="blue")
plt.scatter(x_pred, y_pred, color="red")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.tick_params(axis="both", which="major")
plt.tick_params(axis="both", which="minor")
plt.show()
```
### With 100 iterations
```
plt.figure()
plt.scatter(X, Y, color="blue")
plt.scatter(x_pred, y_pred, color="red")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.tick_params(axis="both", which="major")
plt.tick_params(axis="both", which="minor")
plt.show()
```
```
```
```
```
| 44ba587bf006d7aceb244e163b80613eed19adfd | 148,970 | ipynb | Jupyter Notebook | qc/pennylane/QML_Regression_Model_01.ipynb | praveentn/ml-repos | 066bddf1393bef2704d64eadf72947e44f406139 | ["MIT"] | 5 | 2020-07-20T17:33:41.000Z | 2021-07-02T03:25:54.000Z | qc/pennylane/QML_Regression_Model_01.ipynb | praveentn/ml-repos | 066bddf1393bef2704d64eadf72947e44f406139 | ["MIT"] | null | null | null | qc/pennylane/QML_Regression_Model_01.ipynb | praveentn/ml-repos | 066bddf1393bef2704d64eadf72947e44f406139 | ["MIT"] | null | null | null | 139.746717 | 76,863 | 0.819286 | true | 6,331 | Qwen/Qwen-72B | 1. YES 2. YES | 0.740174 | 0.712232 | 0.527176 | __label__eng_Latn | 0.525989 | 0.063136 |
$\newcommand{\vct}[1]{\boldsymbol{#1}}
\newcommand{\mtx}[1]{\mathbf{#1}}
\newcommand{\tr}{^\mathrm{T}}
\newcommand{\reals}{\mathbb{R}}
\newcommand{\lpa}{\left(}
\newcommand{\rpa}{\right)}
\newcommand{\lsb}{\left[}
\newcommand{\rsb}{\right]}
\newcommand{\lbr}{\left\lbrace}
\newcommand{\rbr}{\right\rbrace}
\newcommand{\fset}[1]{\lbr #1 \rbr}
\newcommand{\pd}[2]{\frac{\partial #1}{\partial #2}}$
# Single layer models
In this lab we will implement a single-layer network model consisting of solely of an affine transformation of the inputs. The relevant material for this was covered in [the slides of the first lecture](http://www.inf.ed.ac.uk/teaching/courses/mlp/2016/mlp01-intro.pdf).
We will first implement the forward propagation of inputs to the network to produce predicted outputs. We will then move on to considering how to use gradients of an error function evaluated on the outputs to compute the gradients with respect to the model parameters to allow us to perform an iterative gradient-descent training procedure. In the final exercise you will use an interactive visualisation to explore the role of some of the different hyperparameters of gradient-descent based training methods.
#### A note on random number generators
It is generally a good practice (for machine learning applications **not** for cryptography!) to seed a pseudo-random number generator once at the beginning of each experiment. This makes it easier to reproduce results as the same random draws will produced each time the experiment is run (e.g. the same random initialisations used for parameters). Therefore generally when we need to generate random values during this course, we will create a seeded random number generator object as we do in the cell below.
```python
import numpy as np
seed = 27092016
rng = np.random.RandomState(seed)
```
## Exercise 1: linear and affine transforms
Any *linear transform* (also called a linear map) of a finite-dimensional vector space can be parametrised by a matrix. So for example if we consider $\vct{x} \in \reals^{D}$ as the input space of a model with $D$ dimensional real-valued inputs, then a matrix $\mtx{W} \in \reals^{K\times D}$ can be used to define a prediction model consisting solely of a linear transform of the inputs
\begin{equation}
\vct{y} = \mtx{W} \vct{x}
\qquad
\Leftrightarrow
\qquad
y_k = \sum_{d=1}^D \lpa W_{kd} x_d \rpa \quad \forall k \in \fset{1 \dots K}
\end{equation}
with here $\vct{y} \in \reals^K$ the $K$-dimensional real-valued output of the model. Geometrically we can think of a linear transform doing some combination of rotation, scaling, reflection and shearing of the input.
An *affine transform* consists of a linear transform plus an additional translation parameterised by a vector $\vct{b} \in \reals^K$. A model consisting of an affine transformation of the inputs can then be defined as
\begin{equation}
\vct{y} = \mtx{W}\vct{x} + \vct{b}
\qquad
\Leftrightarrow
\qquad
y_k = \sum_{d=1}^D \lpa W_{kd} x_d \rpa + b_k \quad \forall k \in \fset{1 \dots K}
\end{equation}
In machine learning we will usually refer to the matrix $\mtx{W}$ as a *weight matrix* and the vector $\vct{b}$ as a *bias vector*.
Generally rather than working with a single data vector $\vct{x}$ we will work with batches of datapoints $\fset{\vct{x}^{(b)}}_{b=1}^B$. We could calculate the outputs for each input in the batch sequentially
\begin{align}
\vct{y}^{(1)} &= \mtx{W}\vct{x}^{(1)} + \vct{b}\\
\vct{y}^{(2)} &= \mtx{W}\vct{x}^{(2)} + \vct{b}\\
\dots &\\
\vct{y}^{(B)} &= \mtx{W}\vct{x}^{(B)} + \vct{b}\\
\end{align}
by looping over each input in the batch and calculating the output. However in general loops in Python are slow (particularly compared to compiled and typed languages such as C). This is due at least in part to the large overhead in dynamically inferring variable types. In general therefore wherever possible we want to avoid having loops in which such overhead will become the dominant computational cost.
For array based numerical operations, one way of overcoming this bottleneck is to *vectorise* operations. NumPy `ndarrays` are typed arrays for which operations such as basic elementwise arithmetic and linear algebra operations such as computing matrix-matrix or matrix-vector products are implemented by calls to highly-optimised compiled libraries. Therefore if you can implement code directly using NumPy operations on arrays rather than by looping over array elements it is often possible to make very substantial performance gains.
As a simple example we can consider adding up two arrays `a` and `b` and writing the result to a third array `c`. First lets initialise `a` and `b` with arbitrary values by running the cell below.
```python
size = 1000
a = np.arange(size)
b = np.ones(size)
```
Now let's time how long it takes to add up each pair of values in the two array and write the results to a third array using a loop-based implementation. We will use the `%%timeit` magic briefly mentioned in the previous lab notebook specifying the number of times to loop the code as 100 and to give the best of 3 repeats. Run the cell below to get a print out of the average time taken.
```python
%%timeit -n 100 -r 3
c = np.empty(size)
for i in range(size):
c[i] = a[i] + b[i]
```
100 loops, best of 3: 2.18 ms per loop
And now we will perform the corresponding summation with the overloaded addition operator of NumPy arrays. Again run the cell below to get a print out of the average time taken.
```python
%%timeit -n 100 -r 3
c = a + b
```
The slowest run took 4.89 times longer than the fastest. This could mean that an intermediate result is being cached.
100 loops, best of 3: 3.42 µs per loop
The first loop-based implementation should have taken on the order of milliseconds ($10^{-3}$s) while the vectorised implementation should have taken on the order of microseconds ($10^{-6}$s), i.e. a $\sim1000\times$ speedup. Hopefully this simple example should make it clear why we want to vectorise operations whenever possible!
Getting back to our affine model, ideally rather than individually computing the output corresponding to each input we should compute the outputs for all inputs in a batch using a vectorised implementation. As you saw last week, data providers return batches of inputs as arrays of shape `(batch_size, input_dim)`. In the mathematical notation used earlier we can consider this as a matrix $\mtx{X}$ of dimensionality $B \times D$, and in particular
\begin{equation}
\mtx{X} = \lsb \vct{x}^{(1)} ~ \vct{x}^{(2)} ~ \dots ~ \vct{x}^{(B)} \rsb\tr
\end{equation}
i.e. the $b^{\textrm{th}}$ input vector $\vct{x}^{(b)}$ corresponds to the $b^{\textrm{th}}$ row of $\mtx{X}$. If we define the $B \times K$ matrix of outputs $\mtx{Y}$ similarly as
\begin{equation}
\mtx{Y} = \lsb \vct{y}^{(1)} ~ \vct{y}^{(2)} ~ \dots ~ \vct{y}^{(B)} \rsb\tr
\end{equation}
then we can express the relationship between $\mtx{X}$ and $\mtx{Y}$ using [matrix multiplication](https://en.wikipedia.org/wiki/Matrix_multiplication) and addition as
\begin{equation}
\mtx{Y} = \mtx{X} \mtx{W}\tr + \mtx{B}
\end{equation}
where $\mtx{B} = \lsb \vct{b} ~ \vct{b} ~ \dots ~ \vct{b} \rsb\tr$ i.e. a $B \times K$ matrix with each row corresponding to the bias vector. The weight matrix needs to be transposed here as the inner dimensions of a matrix multiplication must match i.e. for $\mtx{C} = \mtx{A} \mtx{B}$ then if $\mtx{A}$ is of dimensionality $K \times L$ and $\mtx{B}$ is of dimensionality $M \times N$ then it must be the case that $L = M$ and $\mtx{C}$ will be of dimensionality $K \times N$.
The first exercise for this lab is to implement *forward propagation* for a single-layer model consisting of an affine transformation of the inputs in the `fprop` function given as skeleton code in the cell below. This should work for a batch of inputs of shape `(batch_size, input_dim)` producing a batch of outputs of shape `(batch_size, output_dim)`.
You will probably want to use the NumPy `dot` function and [broadcasting features](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to implement this efficiently. If you are not familiar with either / both of these you may wish to read the [hints](#Hints:-Using-the-dot-function-and-broadcasting) section below which gives some details on these before attempting the exercise.
```python
def fprop(inputs, weights, biases):
"""Forward propagates activations through the layer transformation.
For inputs `x`, outputs `y`, weights `W` and biases `b` the layer
corresponds to `y = W x + b`.
Args:
inputs: Array of layer inputs of shape (batch_size, input_dim).
weights: Array of weight parameters of shape
(output_dim, input_dim).
biases: Array of bias parameters of shape (output_dim, ).
Returns:
outputs: Array of layer outputs of shape (batch_size, output_dim).
"""
return inputs.dot(weights.T) + biases
```
Once you have implemented `fprop` in the cell above you can test your implementation by running the cell below.
```python
inputs = np.array([[0., -1., 2.], [-6., 3., 1.]])
weights = np.array([[2., -3., -1.], [-5., 7., 2.]])
biases = np.array([5., -3.])
true_outputs = np.array([[6., -6.], [-17., 50.]])
if not np.allclose(fprop(inputs, weights, biases), true_outputs):
print('Wrong outputs computed.')
else:
print('All outputs correct!')
```
All outputs correct!
### Hints: Using the `dot` function and broadcasting
For those new to NumPy below are some details on the `dot` function and broadcasting feature of NumPy that you may want to use for implementing the first exercise. If you are already familiar with these and have already completed the first exercise you can move on straight to [second exercise](#Exercise-2:-visualising-random-models).
#### `numpy.dot` function
Matrix-matrix, matrix-vector and vector-vector (dot) products can all be computed in NumPy using the [`dot`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) function. For example if `A` and `B` are both two dimensional arrays, then `C = np.dot(A, B)` or equivalently `C = A.dot(B)` will both compute the matrix product of `A` and `B` assuming `A` and `B` have compatible dimensions. Similarly if `a` and `b` are one dimensional arrays then `c = np.dot(a, b)` / `c = a.dot(b)` will compute the [scalar / dot product](https://en.wikipedia.org/wiki/Dot_product) of the two arrays. If `A` is a two-dimensional array and `b` a one-dimensional array `np.dot(A, b)` / `A.dot(b)` will compute the matrix-vector product of `A` and `b`. Examples of all three of these product types are shown in the cell below:
```python
# Initiliase arrays with arbitrary values
A = np.arange(9).reshape((3, 3))
B = np.ones((3, 3)) * 2
a = np.array([-1., 0., 1.])
b = np.array([0.1, 0.2, 0.3])
print(A.dot(B)) # Matrix-matrix product
print(B.dot(A)) # Reversed product of above A.dot(B) != B.dot(A) in general
print(A.dot(b)) # Matrix-vector product
print(b.dot(A)) # Again A.dot(b) != b.dot(A) unless A is symmetric i.e. A == A.T
print(a.dot(b)) # Vector-vector scalar product
```
[[ 6. 6. 6.]
[ 24. 24. 24.]
[ 42. 42. 42.]]
[[ 18. 24. 30.]
[ 18. 24. 30.]
[ 18. 24. 30.]]
[ 0.8 2.6 4.4]
[ 2.4 3. 3.6]
0.2
#### Broadcasting
Another NumPy feature it will be helpful to get familiar with is [broadcasting](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html). Broadcasting allows you to apply operations to arrays of different shapes, for example to add a one-dimensional array to a two-dimensional array or multiply a multidimensional array by a scalar. The complete set of rules for broadcasting as explained in the official documentation page just linked to can sound a bit complex: you might find the [visual explanation on this page](http://www.scipy-lectures.org/intro/numpy/operations.html#broadcasting) more intuitive. The cell below gives a few examples:
```python
# Initiliase arrays with arbitrary values
A = np.arange(6).reshape((3, 2))
b = np.array([0.1, 0.2])
c = np.array([-1., 0., 1.])
print(A + b) # Add b elementwise to all rows of A
print((A.T + c).T) # Add b elementwise to all columns of A
print(A * b) # Multiply each row of A elementise by b
```
[[ 0.1 1.2]
[ 2.1 3.2]
[ 4.1 5.2]]
[[-1. 0.]
[ 2. 3.]
[ 5. 6.]]
[[ 0. 0.2]
[ 0.2 0.6]
[ 0.4 1. ]]
## Exercise 2: visualising random models
In this exercise you will use your `fprop` implementation to visualise the outputs of a single-layer affine transform model with two-dimensional inputs and a one-dimensional output. In this simple case we can visualise the joint input-output space on a 3D axis.
For this task and the learning experiments later in the notebook we will use a regression dataset from the [UCI machine learning repository](http://archive.ics.uci.edu/ml/index.html). In particular we will use a version of the [Combined Cycle Power Plant dataset](http://archive.ics.uci.edu/ml/datasets/Combined+Cycle+Power+Plant), where the task is to predict the energy output of a power plant given observations of the local ambient conditions (e.g. temperature, pressure and humidity).
The original dataset has four input dimensions and a single target output dimension. We have preprocessed the dataset by [whitening](https://en.wikipedia.org/wiki/Whitening_transformation) it, a common preprocessing step. We will only use the first two dimensions of the whitened inputs (corresponding to the first two principal components of the inputs) so we can easily visualise the joint input-output space.
The dataset has been wrapped in the `CCPPDataProvider` class in the `mlp.data_providers` module and the data included as a compressed file in the data directory as `ccpp_data.npz`. Running the cell below will initialise an instance of this class, get a single batch of inputs and outputs and import the necessary `matplotlib` objects.
```python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from mlp.data_providers import CCPPDataProvider
%matplotlib notebook
data_provider = CCPPDataProvider(
which_set='train',
input_dims=[0, 1],
batch_size=5000,
max_num_batches=1,
shuffle_order=False
)
input_dim, output_dim = 2, 1
inputs, targets = data_provider.next()
```
Here we used the `%matplotlib notebook` magic command rather than the `%matplotlib inline` we used in the previous lab as this allows us to produce interactive 3D plots which you can rotate and zoom in/out by dragging with the mouse and scrolling the mouse-wheel respectively. Once you have finished interacting with a plot you can close it to produce a static inline plot using the power button in the top-right corner.
Now run the cell below to plot the predicted outputs of a randomly initialised model across the two dimensional input space as well as the true target outputs. This sort of visualisation can be a useful method (in low dimensions) to assess how well the model is likely to be able to fit the data and to judge appropriate initialisation scales for the parameters. Each time you re-run the cell a new set of random parameters will be sampled
Some questions to consider:
* How do the weights and bias initialisation scale affect the sort of predicted input-output relationships?
  * The magnitude of the weights initialisation scale determines how steep, along the two input directions, the plane the predictions lie on is. The magnitude of the bias initialisation scale determines the typical offset of the plane from the `output = 0.` plane.
* Does the linear form of the model seem appropriate for the data here?
  * While it appears a linear model will not be able to fully capture the input-output relationship evident in the data, as some degree of non-linearity seems to be present, as a first approximation a linear model seems a reasonable choice of simple model for the data.
```python
weights_init_range = 0.5
biases_init_range = 0.1
# Randomly initialise weights matrix
weights = rng.uniform(
low=-weights_init_range,
high=weights_init_range,
size=(output_dim, input_dim)
)
# Randomly initialise biases vector
biases = rng.uniform(
low=-biases_init_range,
high=biases_init_range,
size=output_dim
)
# Calculate predicted model outputs
outputs = fprop(inputs, weights, biases)
# Plot target and predicted outputs against inputs on same axis
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection='3d')
ax.plot(inputs[:, 0], inputs[:, 1], targets[:, 0], 'r.', ms=2)
ax.plot(inputs[:, 0], inputs[:, 1], outputs[:, 0], 'b.', ms=2)
ax.set_xlabel('Input dim 1')
ax.set_ylabel('Input dim 2')
ax.set_zlabel('Output')
ax.legend(['Targets', 'Predictions'], frameon=False)
fig.tight_layout()
```
<IPython.core.display.Javascript object>
## Exercise 3: computing the error function and its gradient
Here we will consider the task of regression as covered in the first lecture slides. The aim in a regression problem is given inputs $\fset{\vct{x}^{(n)}}_{n=1}^N$ to produce outputs $\fset{\vct{y}^{(n)}}_{n=1}^N$ that are as 'close' as possible to a set of target outputs $\fset{\vct{t}^{(n)}}_{n=1}^N$. The measure of 'closeness' or distance between target and predicted outputs is a design choice.
A very common choice is the squared Euclidean distance between the predicted and target outputs. This can be computed as the sum of the squared differences between each element in the target and predicted outputs. A common convention is to multiply this value by $\frac{1}{2}$ as this gives a slightly nicer expression for the error gradient. The error for the $n^{\textrm{th}}$ training example is then
\begin{equation}
E^{(n)} = \frac{1}{2} \sum_{k=1}^K \lbr \lpa y^{(n)}_k - t^{(n)}_k \rpa^2 \rbr.
\end{equation}
The overall error is then the *average* of this value across all training examples
\begin{equation}
\bar{E} = \frac{1}{N} \sum_{n=1}^N \lbr E^{(n)} \rbr.
\end{equation}
*Note here we are using a slightly different convention from the lectures. There the overall error was considered to be the sum of the individual error terms rather than the mean. To differentiate between the two we will use $\bar{E}$ to represent the average error here as opposed to sum of errors $E$ as used in the slides with $\bar{E} = \frac{E}{N}$. Normalising by the number of training examples is helpful to do in practice as this means we can more easily compare errors across data sets / batches of different sizes, and more importantly it means the size of our gradient updates will be independent of the number of training examples summed over.*
The regression problem is then to find parameters of the model which minimise $\bar{E}$. For our simple single-layer affine model here that corresponds to finding weights $\mtx{W}$ and biases $\vct{b}$ which minimise $\bar{E}$.
As mentioned in the lecture, for this simple case there is actually a closed form solution for the optimal weights and bias parameters. This is the linear least-squares solution those doing MLPR will have come across.
However in general we will be interested in models where closed form solutions do not exist. We will therefore generally use iterative, gradient descent based training methods to find parameters which (locally) minimise the error function. A basic requirement of being able to do gradient-descent based training is (unsuprisingly) the ability to evaluate gradients of the error function.
In the next exercise we will consider how to calculate gradients of the error function with respect to the model parameters $\mtx{W}$ and $\vct{b}$, but as a first step here we will consider the gradient of the error function with respect to the model outputs $\fset{\vct{y}^{(n)}}_{n=1}^N$. This can be written
\begin{equation}
\pd{\bar{E}}{\vct{y}^{(n)}} = \frac{1}{N} \lpa \vct{y}^{(n)} - \vct{t}^{(n)} \rpa
\qquad \Leftrightarrow \qquad
\pd{\bar{E}}{y^{(n)}_k} = \frac{1}{N} \lpa y^{(n)}_k - t^{(n)}_k \rpa \quad \forall k \in \fset{1 \dots K}
\end{equation}
i.e. the gradient of the error function with respect to the $n^{\textrm{th}}$ model output is just the difference between the $n^{\textrm{th}}$ model and target outputs, corresponding to the $\vct{\delta}^{(n)}$ terms mentioned in the lecture slides.
The third exercise is, using the equations given above, to implement functions computing the mean sum of squared differences error and its gradient with respect to the model outputs. You should implement the functions using the provided skeleton definitions in the cell below.
```python
def error(outputs, targets):
"""Calculates error function given a batch of outputs and targets.
Args:
outputs: Array of model outputs of shape (batch_size, output_dim).
targets: Array of target outputs of shape (batch_size, output_dim).
Returns:
Scalar error function value.
"""
return 0.5 * ((outputs - targets)**2).sum() / outputs.shape[0]
def error_grad(outputs, targets):
"""Calculates gradient of error function with respect to model outputs.
Args:
outputs: Array of model outputs of shape (batch_size, output_dim).
targets: Array of target outputs of shape (batch_size, output_dim).
Returns:
Gradient of error function with respect to outputs.
This will be an array of shape (batch_size, output_dim).
"""
return (outputs - targets) / outputs.shape[0]
```
Check your implementation by running the test cell below.
```python
outputs = np.array([[1., 2.], [-1., 0.], [6., -5.], [-1., 1.]])
targets = np.array([[0., 1.], [3., -2.], [7., -3.], [1., -2.]])
true_error = 5.
true_error_grad = np.array([[0.25, 0.25], [-1., 0.5], [-0.25, -0.5], [-0.5, 0.75]])
if not error(outputs, targets) == true_error:
print('Error calculated incorrectly.')
elif not np.allclose(error_grad(outputs, targets), true_error_grad):
print('Error gradient calculated incorrectly.')
else:
print('Error function and gradient computed correctly!')
```
Error function and gradient computed correctly!
## Exercise 4: computing gradients with respect to the parameters
In the previous exercise you implemented a function computing the gradient of the error function with respect to the model outputs. For gradient-descent based training, we need to be able to evaluate the gradient of the error function with respect to the model parameters.
Using the [chain rule for derivatives](https://en.wikipedia.org/wiki/Chain_rule#Higher_dimensions) we can write the partial deriviative of the error function with respect to single elements of the weight matrix and bias vector as
\begin{equation}
\pd{\bar{E}}{W_{kj}} = \sum_{n=1}^N \lbr \pd{\bar{E}}{y^{(n)}_k} \pd{y^{(n)}_k}{W_{kj}} \rbr
\quad \textrm{and} \quad
\pd{\bar{E}}{b_k} = \sum_{n=1}^N \lbr \pd{\bar{E}}{y^{(n)}_k} \pd{y^{(n)}_k}{b_k} \rbr.
\end{equation}
From the definition of our model at the beginning we have
\begin{equation}
y^{(n)}_k = \sum_{d=1}^D \lbr W_{kd} x^{(n)}_d \rbr + b_k
\quad \Rightarrow \quad
\pd{y^{(n)}_k}{W_{kj}} = x^{(n)}_j
\quad \textrm{and} \quad
\pd{y^{(n)}_k}{b_k} = 1.
\end{equation}
Putting this together we get that
\begin{equation}
\pd{\bar{E}}{W_{kj}} =
\sum_{n=1}^N \lbr \pd{\bar{E}}{y^{(n)}_k} x^{(n)}_j \rbr
\quad \textrm{and} \quad
\pd{\bar{E}}{b_{k}} =
\sum_{n=1}^N \lbr \pd{\bar{E}}{y^{(n)}_k} \rbr.
\end{equation}
Although this may seem a bit of a roundabout way to get to these results, this method of decomposing the error gradient with respect to the parameters in terms of the gradient of the error function with respect to the model outputs and the derivatives of the model outputs with respect to the model parameters, will be key when calculating the parameter gradients of more complex models later in the course.
Your task in this exercise is to implement a function calculating the gradient of the error function with respect to the weight and bias parameters of the model given the already computed gradient of the error function with respect to the model outputs. You should implement this in the `grads_wrt_params` function in the cell below.
```python
def grads_wrt_params(inputs, grads_wrt_outputs):
"""Calculates gradients with respect to model parameters.
Args:
inputs: array of inputs to model of shape (batch_size, input_dim)
grads_wrt_to_outputs: array of gradients of with respect to the model
outputs of shape (batch_size, output_dim).
Returns:
list of arrays of gradients with respect to the model parameters
`[grads_wrt_weights, grads_wrt_biases]`.
"""
grads_wrt_weights = grads_wrt_outputs.T.dot(inputs)
grads_wrt_biases = grads_wrt_outputs.sum(0)
return [grads_wrt_weights, grads_wrt_biases]
```
Check your implementation by running the test cell below.
```python
inputs = np.array([[1., 2., 3.], [-1., 4., -9.]])
grads_wrt_outputs = np.array([[-1., 1.], [2., -3.]])
true_grads_wrt_weights = np.array([[-3., 6., -21.], [4., -10., 30.]])
true_grads_wrt_biases = np.array([1., -2.])
grads_wrt_weights, grads_wrt_biases = grads_wrt_params(
inputs, grads_wrt_outputs)
if not np.allclose(true_grads_wrt_weights, grads_wrt_weights):
print('Gradients with respect to weights incorrect.')
elif not np.allclose(true_grads_wrt_biases, grads_wrt_biases):
print('Gradients with respect to biases incorrect.')
else:
print('All parameter gradients calculated correctly!')
```
All parameter gradients calculated correctly!
## Exercise 5: wrapping the functions into reusable components
In exercises 1, 3 and 4 you implemented methods to compute the predicted outputs of our model, evaluate the error function and its gradient on the outputs and finally to calculate the gradients of the error with respect to the model parameters. Together they constitute all the basic ingredients we need to implement a gradient-descent based iterative learning procedure for the model.
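As a concrete illustration, a minimal sketch of how these ingredients could be combined into a basic gradient descent loop is shown below. This assumes `weights` and `biases` have been initialised as in Exercise 2, that the data provider yields `(inputs, targets)` batches when iterated over, and the learning rate and number of epochs are purely illustrative values.

```python
learning_rate = 0.01  # illustrative value

for epoch in range(10):
    for inputs, targets in data_provider:
        # Exercise 1: forward propagate inputs to predicted outputs
        outputs = fprop(inputs, weights, biases)
        # Exercise 3: evaluate error and its gradient with respect to outputs
        batch_error = error(outputs, targets)
        grads_wrt_outputs = error_grad(outputs, targets)
        # Exercise 4: gradients with respect to the parameters
        grads_wrt_weights, grads_wrt_biases = grads_wrt_params(
            inputs, grads_wrt_outputs)
        # Gradient descent update of the parameters
        weights -= learning_rate * grads_wrt_weights
        biases -= learning_rate * grads_wrt_biases
```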
Although you could implement training code which directly uses the functions you defined, this would only be usable for this particular model architecture. In subsequent labs we will want to use the affine transform functions as the basis for more interesting multi-layer models. We will therefore wrap the implementations you just wrote in to reusable components that we can build more complex models with later in the course.
* In the [`mlp.layers`](/edit/mlp/layers.py) module, use your implementations of `fprop` and `grad_wrt_params` above to implement the corresponding methods in the skeleton `AffineLayer` class provided.
* In the [`mlp.errors`](/edit/mlp/errors.py) module use your implementation of `error` and `error_grad` to implement the `__call__` and `grad` methods respectively of the skeleton `SumOfSquaredDiffsError` class provided. Note `__call__` is a special Python method that allows an object to be used with a function call syntax.
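To make the mapping clearer, a minimal sketch of what these methods might look like, expressed purely in terms of the functions implemented earlier, is given below. The attribute names `self.weights` and `self.biases` are assumptions about the skeleton classes, which also contain additional initialisation and book-keeping code.

```python
class AffineLayer(object):
    # Sketch only: the real skeleton class in mlp.layers also handles
    # parameter initialisation and exposes further methods used in later labs.

    def fprop(self, inputs):
        """Forward propagates a batch of inputs: y = W x + b."""
        return inputs.dot(self.weights.T) + self.biases

    def grads_wrt_params(self, inputs, grads_wrt_outputs):
        """Calculates gradients of the error with respect to the parameters."""
        grads_wrt_weights = grads_wrt_outputs.T.dot(inputs)
        grads_wrt_biases = grads_wrt_outputs.sum(0)
        return [grads_wrt_weights, grads_wrt_biases]


class SumOfSquaredDiffsError(object):
    # Sketch only, mirroring the error and error_grad functions above.

    def __call__(self, outputs, targets):
        return 0.5 * ((outputs - targets)**2).sum() / outputs.shape[0]

    def grad(self, outputs, targets):
        return (outputs - targets) / outputs.shape[0]
```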
Run the cell below to use your completed `AffineLayer` and `SumOfSquaredDiffsError` implementations to train a single-layer model using batch gradient descent on the CCPP dataset.
```python
from mlp.layers import AffineLayer
from mlp.errors import SumOfSquaredDiffsError
from mlp.models import SingleLayerModel
from mlp.initialisers import UniformInit, ConstantInit
from mlp.learning_rules import GradientDescentLearningRule
from mlp.optimisers import Optimiser
import logging
# Seed a random number generator
seed = 27092016
rng = np.random.RandomState(seed)
# Set up a logger object to print info about the training run to stdout
logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.handlers = [logging.StreamHandler()]
# Create data provider objects for the CCPP training set
train_data = CCPPDataProvider('train', [0, 1], batch_size=100, rng=rng)
input_dim, output_dim = 2, 1
# Create a parameter initialiser which will sample random uniform values
# from [-0.1, 0.1]
param_init = UniformInit(-0.1, 0.1, rng=rng)
# Create our single layer model
layer = AffineLayer(input_dim, output_dim, param_init, param_init)
model = SingleLayerModel(layer)
# Initialise the error object
error = SumOfSquaredDiffsError()
# Use a basic gradient descent learning rule with a small learning rate
learning_rule = GradientDescentLearningRule(learning_rate=1e-2)
# Use the created objects to initialise a new Optimiser instance.
optimiser = Optimiser(model, error, learning_rule, train_data)
# Run the optimiser for 10 epochs (full passes through the training set)
# printing statistics every epoch.
stats, keys, run_time = optimiser.train(num_epochs=10, stats_interval=1)
# Plot the change in the error over training.
fig = plt.figure(figsize=(8, 4))
ax = fig.add_subplot(111)
ax.plot(np.arange(1, stats.shape[0] + 1), stats[:, keys['error(train)']])
ax.set_xlabel('Epoch number')
ax.set_ylabel('Error')
```
Epoch 1: 0.00s to complete
error(train)=1.67e-01, cost(param)=0.00e+00
Epoch 2: 0.00s to complete
error(train)=9.30e-02, cost(param)=0.00e+00
Epoch 3: 0.00s to complete
error(train)=7.95e-02, cost(param)=0.00e+00
Epoch 4: 0.00s to complete
error(train)=7.71e-02, cost(param)=0.00e+00
Epoch 5: 0.00s to complete
error(train)=7.66e-02, cost(param)=0.00e+00
Epoch 6: 0.00s to complete
error(train)=7.65e-02, cost(param)=0.00e+00
Epoch 7: 0.00s to complete
error(train)=7.65e-02, cost(param)=0.00e+00
Epoch 8: 0.00s to complete
error(train)=7.65e-02, cost(param)=0.00e+00
Epoch 9: 0.00s to complete
error(train)=7.63e-02, cost(param)=0.00e+00
Epoch 10: 0.00s to complete
error(train)=7.64e-02, cost(param)=0.00e+00
<IPython.core.display.Javascript object>
<matplotlib.text.Text at 0x7fd88df72350>
Using similar code to previously we can now visualise the joint input-output space for the trained model. If you implemented the required methods correctly you should now see a much improved fit between predicted and target outputs when running the cell below.
```python
data_provider = CCPPDataProvider(
which_set='train',
input_dims=[0, 1],
batch_size=5000,
max_num_batches=1,
shuffle_order=False
)
inputs, targets = data_provider.next()
# Calculate predicted model outputs
outputs = model.fprop(inputs)[-1]
# Plot target and predicted outputs against inputs on same axis
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection='3d')
ax.plot(inputs[:, 0], inputs[:, 1], targets[:, 0], 'r.', ms=2)
ax.plot(inputs[:, 0], inputs[:, 1], outputs[:, 0], 'b.', ms=2)
ax.set_xlabel('Input dim 1')
ax.set_ylabel('Input dim 2')
ax.set_zlabel('Output')
ax.legend(['Targets', 'Predictions'], frameon=False)
fig.tight_layout()
```
<IPython.core.display.Javascript object>
## Exercise 6: visualising training trajectories in parameter space
Running the cell below will display an interactive widget which plots the trajectories of gradient-based training of the single-layer affine model on the CCPP dataset in the three-dimensional parameter space (two weights plus bias) from random initialisations. Also shown on the right is a plot of the evolution of the error function (evaluated on the current batch) over training. By moving the sliders you can alter the training hyperparameters to investigate the effect they have on how training proceeds.
Some questions to explore:
* Are there multiple local minima in parameter space here? Why?
  * In this case there is a single unique global minimum, as suggested by the fact that random parameter initialisations consistently converge to the same point in parameter space. As mentioned previously, there is a closed form solution for the optimal weights and biases of this simple single-layer affine model (<a href='https://en.wikipedia.org/wiki/Linear_least_squares_(mathematics)'>linear least squares</a>) and the error function is [convex](https://en.wikipedia.org/wiki/Convex_function).
* What happens to learning for very small learning rates? And very large learning rates?
  * For very small learning rates, training proceeds very slowly and the parameters do not converge to the global optimum unless a large number of training epochs is used. For very large learning rates, the gradient descent dynamics become increasingly unstable, leading to large oscillations or, at extreme values, divergence in parameter space (a small standalone illustration of this is shown below).
* How does the batch size affect learning?
  * Smaller batch sizes generally lead to quicker initial learning as the parameters are updated more frequently (there are more batches in an epoch); however, as the batch becomes smaller, the error and gradient estimates calculated from it become increasingly noisy estimates of the true error function and gradients. This can be observed in the less smooth trajectories in parameter space for lower batch sizes and the greater noise in the batch error curves.
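A minimal standalone illustration of the learning-rate effect (independent of the lab framework, using the made-up one-dimensional error $E(w)=w^2$ with gradient $2w$): small steps converge, steps near the stability limit oscillate, and larger steps diverge.
```python
def gd_trajectory(learning_rate, w_init=1., num_steps=20):
    """Gradient descent on the toy error E(w) = w**2 (gradient 2 * w)."""
    w = w_init
    for _ in range(num_steps):
        w -= learning_rate * 2. * w
    return w

for lr in [0.1, 0.9, 1.1]:
    print('learning rate {0:.1f}: |w| after 20 steps = {1:.2e}'.format(
        lr, abs(gd_trajectory(lr))))
```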
**Note:** You don't need to understand how the code below works. The idea of this exercise is to help you understand the role of the various hyperparameters involved in gradient-descent based training methods.
```python
from ipywidgets import interact
%matplotlib inline
def setup_figure():
# create figure and axes
fig = plt.figure(figsize=(12, 6))
ax1 = fig.add_axes([0., 0., 0.5, 1.], projection='3d')
ax2 = fig.add_axes([0.6, 0.1, 0.4, 0.8])
# set axes properties
ax2.spines['right'].set_visible(False)
ax2.spines['top'].set_visible(False)
ax2.yaxis.set_ticks_position('left')
ax2.xaxis.set_ticks_position('bottom')
ax2.set_yscale('log')
ax1.set_xlim((-2, 2))
ax1.set_ylim((-2, 2))
ax1.set_zlim((-2, 2))
#set axes labels and title
ax1.set_title('Parameter trajectories over training')
ax1.set_xlabel('Weight 1')
ax1.set_ylabel('Weight 2')
ax1.set_zlabel('Bias')
ax2.set_title('Batch errors over training')
ax2.set_xlabel('Batch update number')
ax2.set_ylabel('Batch error')
return fig, ax1, ax2
def visualise_training(n_epochs=1, batch_size=200, log_lr=-1., n_inits=5,
w_scale=1., b_scale=1., elev=30., azim=0.):
fig, ax1, ax2 = setup_figure()
# create seeded random number generator
rng = np.random.RandomState(1234)
# create data provider
data_provider = CCPPDataProvider(
input_dims=[0, 1],
batch_size=batch_size,
shuffle_order=False,
)
learning_rate = 10 ** log_lr
n_batches = data_provider.num_batches
weights_traj = np.empty((n_inits, n_epochs * n_batches + 1, 1, 2))
biases_traj = np.empty((n_inits, n_epochs * n_batches + 1, 1))
errors_traj = np.empty((n_inits, n_epochs * n_batches))
# randomly initialise parameters
weights = rng.uniform(-w_scale, w_scale, (n_inits, 1, 2))
biases = rng.uniform(-b_scale, b_scale, (n_inits, 1))
# store initial parameters
weights_traj[:, 0] = weights
biases_traj[:, 0] = biases
# iterate across different initialisations
for i in range(n_inits):
# iterate across epochs
for e in range(n_epochs):
# iterate across batches
for b, (inputs, targets) in enumerate(data_provider):
outputs = fprop(inputs, weights[i], biases[i])
errors_traj[i, e * n_batches + b] = error(outputs, targets)
grad_wrt_outputs = error_grad(outputs, targets)
weights_grad, biases_grad = grads_wrt_params(inputs, grad_wrt_outputs)
weights[i] -= learning_rate * weights_grad
biases[i] -= learning_rate * biases_grad
weights_traj[i, e * n_batches + b + 1] = weights[i]
biases_traj[i, e * n_batches + b + 1] = biases[i]
# choose a different color for each trajectory
colors = plt.cm.jet(np.linspace(0, 1, n_inits))
# plot all trajectories
for i in range(n_inits):
lines_1 = ax1.plot(
weights_traj[i, :, 0, 0],
weights_traj[i, :, 0, 1],
biases_traj[i, :, 0],
'-', c=colors[i], lw=2)
lines_2 = ax2.plot(
np.arange(n_batches * n_epochs),
errors_traj[i],
c=colors[i]
)
ax1.view_init(elev, azim)
plt.show()
w = interact(
visualise_training,
elev=(-90, 90, 2),
azim=(-180, 180, 2),
n_epochs=(1, 5),
batch_size=(100, 1000, 100),
log_lr=(-3., 1.),
w_scale=(0., 2.),
b_scale=(0., 2.),
n_inits=(1, 10)
)
for child in w.widget.children:
child.layout.width = '100%'
```
| df0880f19583fd7463a1dc588afac249cc794589 | 661,325 | ipynb | Jupyter Notebook | notebooks/02_Single_layer_models.ipynb | pligor/msd-music-genre-classification | 8988ec6e8b15927a52d772fc04540a7c334a5cd4 | [
"MIT"
]
| 5 | 2018-02-16T09:24:19.000Z | 2021-04-16T01:08:10.000Z | notebooks/02_Single_layer_models.ipynb | pligor/mnist-from-scratch | 2cee349d3bc74afa157a9d0a55c0d2374e4c4a33 | [
"MIT"
]
| null | null | null | notebooks/02_Single_layer_models.ipynb | pligor/mnist-from-scratch | 2cee349d3bc74afa157a9d0a55c0d2374e4c4a33 | [
"MIT"
]
| 2 | 2019-03-15T07:49:16.000Z | 2019-03-30T09:33:10.000Z | 200.644721 | 193,317 | 0.863562 | true | 9,721 | Qwen/Qwen-72B | 1. YES
2. YES | 0.798187 | 0.79053 | 0.630991 | __label__eng_Latn | 0.989143 | 0.304334 |
```python
from IPython.core.display import HTML, Image
css_file = 'style.css'
HTML(open(css_file, 'r').read())
```
<link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Arvo:400,700,400italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=PT+Mono' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Shadows+Into+Light' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Philosopher:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
<style>
@font-face {
font-family: "Computer Modern";
src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf');
}
/* Formatting for header cells */
.text_cell_render h1 {
font-family: 'Philosopher', sans-serif;
font-weight: 400;
font-size: 2.2em;
line-height: 100%;
color: rgb(0, 80, 120);
margin-bottom: 0.1em;
margin-top: 0.1em;
display: block;
}
.text_cell_render h2 {
font-family: 'Philosopher', serif;
font-weight: 400;
font-size: 1.9em;
line-height: 100%;
color: rgb(245,179,64);
margin-bottom: 0.1em;
margin-top: 0.1em;
display: block;
}
.text_cell_render h3 {
font-family: 'Philosopher', serif;
margin-top:12px;
margin-bottom: 3px;
font-style: italic;
color: rgb(94,127,192);
}
.text_cell_render h4 {
font-family: 'Philosopher', serif;
}
.text_cell_render h5 {
font-family: 'Alegreya Sans', sans-serif;
font-weight: 300;
font-size: 16pt;
color: grey;
font-style: italic;
margin-bottom: .1em;
margin-top: 0.1em;
display: block;
}
.text_cell_render h6 {
font-family: 'PT Mono', sans-serif;
font-weight: 300;
font-size: 10pt;
color: grey;
margin-bottom: 1px;
margin-top: 1px;
}
.CodeMirror{
font-family: "PT Mono";
font-size: 100%;
}
</style>
```python
from sympy import init_printing, Matrix, symbols
init_printing()
```
# Solving homogeneous systems
# Pivot variables
# Special solutions
```python
#import numpy as np
from sympy import init_printing, Matrix, symbols
#import matplotlib.pyplot as plt
#import seaborn as sns
#from IPython.display import Image
from warnings import filterwarnings
init_printing(use_latex = 'mathjax')
%matplotlib inline
filterwarnings('ignore')
```
Here, we are trying to solve a system of linear equations. For _homogeneous systems_ the right-hand side is the zero vector. Consider the example below.
```python
A = Matrix([[1, 2, 2, 2], [2, 4, 6, 8], [3, 6, 8, 10]])
A # A 3x4 matrix
```
```python
x1, x2, x3, x4 = symbols('x1, x2, x3, x4')
x_vect = Matrix([x1, x2, x3, x4]) # A 4x1 matrix
x_vect
```
```python
b = Matrix([0, 0, 0])
b # A 3x1 matrix
```
The set of all column vectors $\underline{x}$ that solve this homogeneous equation forms the nullspace. Note that the column vectors in $A$ are not linearly independent.
Performing elementary row operations leaves us with the matrix below. It has two pivots, which is termed **rank** $2$.
```python
A.rref() # rref being reduced row echelon form
```
Its representation is shown in (1) and in (2).
$$ { x }_{ 1 }\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}+{ x }_{ 2 }\begin{bmatrix} 2 \\ 0 \\ 0 \end{bmatrix}+{ x }_{ 3 }\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}+{ x }_{ 4 }\begin{bmatrix} -2 \\ 2 \\ 0 \end{bmatrix}=\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}\tag{1}$$
$${ x }_{ 1 }+2{ x }_{ 2 }+0{ x }_{ 3 }-2{ x }_{ 4 }=0\\ 0{ x }_{ 1 }+0{ x }_{ 2 }+{ x }_{ 3 }+2{ x }_{ 4 }=0\\ 0{ x }_{ 1 }+0{ x }_{ 2 }+0{ x }_{ 3 }+0{ x }_{ 4 }=0\tag{2}$$
We are free to choose a value for $x_4$. Let's say $x_{4}=t$ as in (3).
$$\begin{align}{ x }_{ 1 }+2{ x }_{ 2 }+0{ x }_{ 3 }-2{ x }_{ 4 }&=0\\ 0{ x }_{ 1 }+0{ x }_{ 2 }+{ x }_{ 3 }+2t&=0\\ 0{ x }_{ 1 }+0{ x }_{ 2 }+0{ x }_{ 3 }+0{ x }_{ 4 }&=0\\ \therefore \quad { x }_{ 3 }&=-2t\end{align}\tag{3}$$
Let's then say that $x_{2}=s$, as in (4).
$$\begin{align}{ x }_{ 1 }+2s+0{ x }_{ 3 }-2t&=0 \\ \therefore \quad {x}_{1}&=2t-2s\end{align}\tag{4}$$
The result is shown in (5); it describes the complete nullspace, which has dimension $2$.
$$ \begin{bmatrix} { x }_{ 1 } \\ { x }_{ 2 } \\ { x }_{ 3 } \\ { x }_{ 4 } \end{bmatrix}=\begin{bmatrix} -2s+2t \\ s \\ -2t \\ t \end{bmatrix}=\begin{bmatrix} -2s \\ s \\ 0 \\ 0 \end{bmatrix}+\begin{bmatrix} 2t \\ 0 \\ -2t \\ t \end{bmatrix}=s\begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix}+t\begin{bmatrix} 2 \\ 0 \\ -2 \\ 1 \end{bmatrix}\tag{5}$$
From the above, we clearly have two vectors in the solution, and we can take all linear combinations of these to fill up our solution space (our nullspace).
We can easily calculate how many free variables we will have by subtracting the number of pivots (rank) from the number of variables, $x_i$ in $\underline{x}$. Here we have $4 - 2 = 2$.
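As a quick check, SymPy can compute a basis for the nullspace directly; the two vectors it returns match the special solutions in (5):
```python
A.nullspace()  # one special solution per free variable
```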
#### Example problem
* Calculate $\underline{x}$ for the transpose of $A$ above.
#### Solution
```python
A_trans = A.transpose() # Creating a new matrix called A_trans and giving it the value of the transpose of A
A_trans
```
```python
A_trans.rref() # In reduced row echelon form this would be the following matrix
```
Remember that this is $4$ equations in $3$ unknowns, shown in (6) and (7).
$${ x }_{ 1 }\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}+{ x }_{ 2 }\begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}+{ x }_{ 3 }\begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix}=\begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}\tag{6}$$
$$\begin{align}{ x }_{ 1 }+0{ x }_{ 2 }+{ x }_{ 3 }&=0\\ 0{ x }_{ 1 }+{ x }_{ 2 }+{ x }_{ 3 }&=0\\ 0{ x }_{ 1 }+0{ x }_{ 2 }+0{ x }_{ 3 }&=0\\ 0{ x }_{ 1 }+0{ x }_{ 2 }+0{ x }_{ 3 }&=0\end{align}\tag{7}$$
We are free to choose $x_3$. Let's do $x_{3}=t$. The results are shown in (8), (9), and (10).
$$-t\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}-t\begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}+t\begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix}=\begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}\tag{8}$$
$$\begin{align}{ x }_{ 3 }&=t\\ { x }_{ 1 }+0{ x }_{ 2 }+t&=0\\ 0{ x }_{ 1 }+{ x }_{ 2 }+t&=0\\ \therefore \quad { x }_{ 2 }&=-t\\ \therefore \quad { x }_{ 1 }&=-t\end{align}\tag{9} $$
$$\begin{bmatrix} { x }_{ 1 } \\ { x }_{ 2 } \\ { x }_{ 3 } \end{bmatrix}=\begin{bmatrix} -t \\ -t \\ t \end{bmatrix}=t\begin{bmatrix} -1 \\ -1 \\ 1 \end{bmatrix}\tag{10}$$
We had $n=3$ unknowns and $r=2$ pivots (the rank). The solution set (nullspace) will thus have $n-r=3-2=1$ free variable, namely $t$.
The third column is the sum of the first two, so only $2$ columns are linearly independent. We thus expect $2$ pivots and can predict the nullspace to have only $1$ variable (i.e. it is one-dimensional).
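Again, SymPy's built-in nullspace computation confirms the single special solution in (10):
```python
A_trans.nullspace()  # one basis vector spanning the one-dimensional nullspace
```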
```python
```
| dfcdd4ec4a85b4a1307eb9dcc689cd0975f4a327 | 24,832 | ipynb | Jupyter Notebook | Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Lecture_7_Solving_homogeneous_systems_Pivot_variables_Special_solutions.ipynb | okara83/Becoming-a-Data-Scientist | f09a15f7f239b96b77a2f080c403b2f3e95c9650 | [
"MIT"
]
| null | null | null | Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Lecture_7_Solving_homogeneous_systems_Pivot_variables_Special_solutions.ipynb | okara83/Becoming-a-Data-Scientist | f09a15f7f239b96b77a2f080c403b2f3e95c9650 | [
"MIT"
]
| null | null | null | Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Lecture_7_Solving_homogeneous_systems_Pivot_variables_Special_solutions.ipynb | okara83/Becoming-a-Data-Scientist | f09a15f7f239b96b77a2f080c403b2f3e95c9650 | [
"MIT"
]
| 2 | 2022-02-09T15:41:33.000Z | 2022-02-11T07:47:40.000Z | 46.676692 | 2,712 | 0.661888 | true | 2,566 | Qwen/Qwen-72B | 1. YES
2. YES | 0.63341 | 0.803174 | 0.508739 | __label__eng_Latn | 0.69976 | 0.020299 |
# Cylinder Models
In this section, we describe models of intra-axonal diffusion.
In all cases, the intra-axonal diffusion is represented using axially symmetric cylinder models with $\boldsymbol{\mu}\in\mathbb{S}^2$ the orientation parallel to the cylinder axis.
The three-dimensional diffusion signal in these models is given as the separable product of (free) parallel and restricted perpendicular diffusion *(Assaf et al. 2004)*.
This means that the three-dimensional signal is given by
\begin{equation}
E_{\textrm{intra}}(\textbf{q},\Delta,\delta,\lambda_\parallel,R) = E_\parallel(q_\parallel,\Delta,\delta,\lambda_\parallel)\times E_\perp(q_\perp,\Delta,\delta,R)
\end{equation}
with parallel q-value $q_\parallel=\textbf{q}^T\boldsymbol{\mu}$, perpendicular q-value $q_\perp=(\textbf{q}^T\textbf{q}-(\textbf{q}^T\boldsymbol{\mu})^2)^{1/2}$, parallel diffusivity $\lambda_\parallel>0$ and cylinder radius $R>0$[mm]. The parallel signal is usually given by Gaussian diffusion as
\begin{equation}
E_\parallel(q_\parallel,\Delta,\delta,\lambda_\parallel)=\exp(-4\pi^2q_\parallel^2\lambda_\parallel(\Delta-\delta/3)).
\end{equation}
The perpendicular signal $E_\perp$ is described using various cylinder models.
In the rest of this section, we start by describing the simplest model, which makes the strongest tissue assumptions (C1), and move towards more general models (C4).
# Stick: C1
The simplest model for intra-axonal diffusion is the ``Stick'' -- a cylinder with zero radius *(Behrens et al. 2003)*.
The Stick model assumes that, because axon diameters are very small, the perpendicular diffusion attenuation inside these axons is negligible compared to the overall signal attenuation.
The perpendicular diffusion coefficient is therefore approximated by zero, so the perpendicular signal attenuation is always equal to one, i.e. $E_\perp=1$.
Inserting this definition into the equation above leads to the simple signal representation
\begin{equation}
E_{\textrm{Stick}}(b,\textbf{n},\boldsymbol{\mu},\lambda_\parallel)=\exp(-b\lambda_\parallel(\textbf{n}^T\boldsymbol{\mu})^2),
\end{equation}
which is the same as a DTI Tensor with $\lambda_\parallel=\lambda_1$ and $\lambda_\perp=\lambda_2=\lambda_3=0$.
Despite its simplicity, it turns out approximating axons as Sticks is quite reasonable at clinical gradient strengths *(Burcaw et al. 2015)*.
In fact, the Stick is used in most state-of-the-art microstructure models that model axonal dispersion *(Tariq et al. 2016, Kaden et al. 2016)*.
```python
from dmipy.signal_models import cylinder_models
from dmipy.core.acquisition_scheme import acquisition_scheme_from_bvalues
import numpy as np
stick = cylinder_models.C1Stick(mu=[0, 0], lambda_par=1.7e-9)
Nsamples = 100
bvecs_parallel = np.tile(np.r_[0., 0., 1.], (Nsamples, 1))
bvecs_perpendicular = np.tile(np.r_[0., 1., 0.], (Nsamples, 1))
bvals = np.linspace(0, 2e9, Nsamples)
delta = 0.01
Delta = 0.03
scheme_parallel = acquisition_scheme_from_bvalues(bvals, bvecs_parallel, delta, Delta)
scheme_perpendicular = acquisition_scheme_from_bvalues(bvals, bvecs_perpendicular, delta, Delta)
Estick_parallel = stick(scheme_parallel)
Estick_perpendicular = stick(scheme_perpendicular)
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(bvals, Estick_parallel, label="Stick $E_\parallel$")
plt.plot(bvals, Estick_perpendicular, label="Stick $E_\perp$")
plt.legend(fontsize=12)
plt.title("Signal attenuation Stick", fontsize=17)
plt.xlabel("b-value [s/m$^2$]", fontsize=15)
plt.ylabel("Signal Attenuation", fontsize=15);
```
## Stejskal-Tanner Cylinder: C2
In reality, axons have a non-zero radius.
To account for this, different cylinder models for perpendicular diffusion have been proposed for different combinations of PGSE acquisition parameters.
The simplest is the Stejskal-Tanner approximation of the cylinder *(Soderman and Jonsson 1995)*, which makes the strongest assumptions about the PGSE protocol.
First, it assumes that pulse length $\delta$ is so short that no diffusion occurs during the application of the gradient pulse ($\delta\rightarrow0$).
Second, it assumes that pulse separation $\Delta$ is long enough for diffusion with intra-cylindrical diffusion coefficient $D$ to be restricted inside a cylinder of radius $R$ ($\Delta\gg R^2/D$).
Within these assumptions, the perpendicular, intra-cylindrical signal attenuation is given as
\begin{equation}
E_\perp(q,R|\delta\rightarrow0,\Delta\gg R^2/D)=\left(\frac{J_1(2\pi q R)}{\pi q R}\right)^2,
\end{equation}
where we use the ``$|$'' to separate function parameters from model assumptions, and $J_1$ is a Bessel function of the first kind. Taking $\lim_{R\rightarrow0}$ of this equation simplifies the three-dimensional Soderman model to the Stick model.
```python
from dmipy.core.acquisition_scheme import acquisition_scheme_from_qvalues
stesjskal_tanner = cylinder_models.C2CylinderStejskalTannerApproximation(mu=[0, 0], lambda_par=1.7e-9)
Nsamples = 100
bvecs_perpendicular = np.tile(np.r_[0., 1., 0.], (Nsamples, 1))
qvals = np.linspace(0, 3e5, Nsamples)
delta = 0.01
Delta = 0.03
scheme_perpendicular = acquisition_scheme_from_qvalues(qvals, bvecs_perpendicular, delta, Delta)
for diameter in np.linspace(1e-6, 1e-5, 5):
plt.plot(qvals, stesjskal_tanner(scheme_perpendicular, diameter=diameter),
label="Diameter="+str(1e6 * diameter)+"$\mu m$")
plt.legend(fontsize=12)
plt.title("Stesjkal-Tanner attenuation over cylinder diameter", fontsize=17)
plt.xlabel("perpendicular q-value [1/m]", fontsize=15)
plt.ylabel("E(q$_\perp$)", fontsize=15);
```
## Callaghan Cylinder: C3
The ``Callaghan'' model relaxes Soderman's $\Delta\gg R^2/D$ assumption to allow for unrestricted diffusion at shorter pulse separation $\Delta$ *(Callaghan 1995)*. In this case, the perpendicular signal attenuation is given as
\begin{align}
E_\perp(q,\Delta,R|\delta\rightarrow0)&=\sum^\infty_k4\exp(-\beta^2_{0k}D\Delta/R^2)\times \frac{\left((2\pi qR)J_0^{'}(2\pi qR)\right)^2}{\left((2\pi qR)^2-\beta_{0k}^2\right)^2}\nonumber\\
&+\sum^\infty_{nk}8\exp(-\beta^2_{nk}D\Delta/R^2)\times \frac{\beta^2_{nk}}{\left(\beta_{nk}^2-n^2\right)}\times\frac{\left((2\pi qR)J_n^{'}(2\pi qR)\right)^2}{\left((2\pi qR)^2-\beta_{nk}^2\right)^2}
\end{align}
where $J_n^{'}$ are the derivatives of the $n^{th}$-order Bessel function and $\beta_{nk}$ are the arguments that result in zero-crossings. Taking $\lim_{\Delta\rightarrow\infty}$ of this equation simplifies the Callaghan model to the Soderman model. The Callaghan model has been used to estimate the axon diameter distribution in the multi-compartment AxCaliber approach *(Assaf et al. 2008)*. However, the authors also mention that the perpendicular diffusion is likely already restricted for realistic axon diameters ($<2\mu$m) *(Aboitiz et al. 1992)* for the shortest possible $\Delta$ in PGSE protocols (${\sim}10$ms). This limits the added value of the Callaghan model over the Soderman model in axon diameter estimation.
```python
callaghan = cylinder_models.C3CylinderCallaghanApproximation(mu=[0, 0], lambda_par=1e-7)
Nsamples = 100
bvecs_perpendicular = np.tile(np.r_[0., 1., 0.], (Nsamples, 1))
qvals = np.linspace(0, 3e5, Nsamples)
delta = 0.001
Delta = 0.001
scheme_perpendicular = acquisition_scheme_from_qvalues(qvals, bvecs_perpendicular, delta, Delta)
plt.plot(qvals, np.exp(-scheme_perpendicular.bvalues * 1.7e-9), label="Free Diffusion", c='r', ls='--')
for Delta in [0.001, 0.0025, 0.015]:
scheme_perpendicular = acquisition_scheme_from_qvalues(qvals, bvecs_perpendicular, delta, Delta)
plt.plot(qvals, callaghan(scheme_perpendicular, diameter=10e-6), label='Callaghan Delta='+str(1e3 * Delta)+'ms')
plt.plot(qvals, stesjskal_tanner(scheme_perpendicular, diameter=10e-6), label="Soderman", c='blue', ls='--')
plt.legend()
```
For a big cylinder of 10$\mu m$ diameter, it can be seen that free diffusion and the Callaghan model are very similar for an extremely short pulse separation of 1ms. The signal is already becoming significantly restricted at 2.5ms, and at 15ms the Callaghan and Soderman approximations have converged (completely restricted).
This shows the problem of using the Callaghan model for axon diameter estimation - for axons of diameter 0.1-2 $\mu$m the diffusion is already restricted around 1 or 2 ms, meaning there is no signal contrast for intra-axonal diffusion when Delta varies.
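To make that point concrete, the sketch below reuses the Callaghan instance from the cell above for a hypothetical 1$\mu m$ diameter cylinder; the attenuation curves for a short and a long pulse separation essentially coincide, i.e. there is hardly any $\Delta$-contrast left.
```python
# Sketch: perpendicular attenuation of a small (1 micron) cylinder for two pulse separations.
# Reuses `callaghan`, `qvals`, `bvecs_perpendicular` and `delta` from the cell above.
for Delta_small in [0.001, 0.015]:
    scheme_small = acquisition_scheme_from_qvalues(qvals, bvecs_perpendicular, delta, Delta_small)
    plt.plot(qvals, callaghan(scheme_small, diameter=1e-6),
             label='Callaghan Delta=' + str(1e3 * Delta_small) + 'ms, diameter=1$\mu m$')
plt.legend()
plt.xlabel("perpendicular q-value [1/m]")
plt.ylabel("E(q$_\perp$)");
```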
## Gaussian Phase Cylinder: C4
The last cylinder model generalization we discuss is the "Van Gelderen" model *(VanGelderen et al. 1994)*, which relaxes the last $\delta\rightarrow0$ assumption to allow for finite pulse length $\delta$. This model is based on the ``Neuman'' model *(Neuman 1974)*, which assumes Gaussian diffusion during the gradient pulse. In this case, the signal attenuation is given as
\begin{equation}
E_\perp(q,\Delta,\delta,R)=\exp\left(-8\pi^2q^2\sum^\infty_{m=1}\dfrac{2Da_m^2\delta-2 + 2e^{-Da_m^2\delta} + 2e^{-Da_m^2\Delta}-e^{-Da_m^2(\Delta-\delta)}-e^{-Da_m^2(\Delta+\delta)}}{\delta^2D^2a_m^6(R^2a_m^2-1)}\right)
\end{equation}
where $a_m$ are roots of the equation $J_1^{'}(a_mR)=0$, with $J_1^{'}$ again the derivative of the Bessel function of the first kind, and $D$ is the intra-axonal diffusivity.
According to *(Neuman 1974)*, taking the double $\lim_{(\delta,\Delta)\rightarrow(0,\infty)}$ of the equation above should simplify the Van Gelderen model to the Soderman Model, although he does not show this explicitly.
For its generality, the Van Gelderen model has been used in most recent studies regarding in-vivo axon diameter estimation *(Huang et al. 2015, Ferizi et al. 2015, De Santis et al. 2016 )*.
```python
vangelderen = cylinder_models.C4CylinderGaussianPhaseApproximation()
```
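The cell above only instantiates the Gaussian phase model. A hedged sketch of how it could be evaluated is given below; it assumes the same parameter names (`mu`, `lambda_par`, `diameter`) and call pattern as the other cylinder models, which may differ between dmipy versions.
```python
# Sketch only: parameter names and call pattern are assumed to mirror C2/C3.
vangelderen = cylinder_models.C4CylinderGaussianPhaseApproximation(mu=[0, 0], lambda_par=1.7e-9)
scheme_gp = acquisition_scheme_from_qvalues(qvals, bvecs_perpendicular, 0.01, 0.03)
for diameter in np.linspace(1e-6, 1e-5, 5):
    plt.plot(qvals, vangelderen(scheme_gp, diameter=diameter),
             label="Diameter=" + str(1e6 * diameter) + "$\mu m$")
plt.legend()
plt.title("Gaussian phase attenuation over cylinder diameter")
plt.xlabel("perpendicular q-value [1/m]")
plt.ylabel("E(q$_\perp$)");
```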
## References
- Aboitiz, Francisco, et al. "Fiber composition of the human corpus callosum." Brain research 598.1 (1992): 143-153.
- Assaf, Yaniv, et al. "AxCaliber: a method for measuring axon diameter distribution from diffusion MRI." Magnetic resonance in medicine 59.6 (2008): 1347-1354.
- Assaf, Yaniv, et al. "New modeling and experimental framework to characterize hindered and restricted water diffusion in brain white matter." Magnetic Resonance in Medicine 52.5 (2004): 965-978.
- Behrens, Timothy EJ, et al. "Characterization and propagation of uncertainty in diffusion‐weighted MR imaging." Magnetic resonance in medicine 50.5 (2003): 1077-1088.
- Burcaw, Lauren M., Els Fieremans, and Dmitry S. Novikov. "Mesoscopic structure of neuronal tracts from time-dependent diffusion." NeuroImage 114 (2015): 18-37.
- Callaghan, Paul T. "Pulsed-gradient spin-echo NMR for planar, cylindrical, and spherical pores under conditions of wall relaxation." Journal of magnetic resonance, Series A 113.1 (1995): 53-59.
- De Santis, Silvia, Derek K. Jones, and Alard Roebroeck. "Including diffusion time dependence in the extra-axonal space improves in vivo estimates of axonal diameter and density in human white matter." NeuroImage 130 (2016): 91-103.
- Ferizi, Uran, et al. "White matter compartment models for in vivo diffusion MRI at 300mT/m." NeuroImage 118 (2015): 468-483.
- Huang, Susie Y., et al. "The impact of gradient strength on in vivo diffusion MRI estimates of axon diameter." NeuroImage 106 (2015): 464-472.
- Kaden, Enrico, et al. "Multi-compartment microscopic diffusion imaging." NeuroImage 139 (2016): 346-359.
- Neuman, C. H. "Spin echo of spins diffusing in a bounded medium." The Journal of Chemical Physics 60.11 (1974): 4508-4511.
- Söderman, Olle, and Bengt Jönsson. "Restricted diffusion in cylindrical geometry." Journal of Magnetic Resonance, Series A 117.1 (1995): 94-97.
- Tariq, Maira, et al. "Bingham–noddi: Mapping anisotropic orientation dispersion of neurites using diffusion mri." NeuroImage 133 (2016): 207-223.
- Vangelderen, P., et al. "Evaluation of restricted diffusion in cylinders. Phosphocreatine in rabbit leg muscle." Journal of Magnetic Resonance, Series B 103.3 (1994): 255-260.
| babaaa3cef37b7dc17a02460c76b454f5639900d | 137,445 | ipynb | Jupyter Notebook | examples/example_cylinder_models.ipynb | AthenaEPI/mipy | dbbca4066a6c162dcb05865df5ff666af0e4020a | [
"MIT"
]
| 59 | 2018-02-22T19:14:19.000Z | 2022-02-22T05:40:27.000Z | examples/example_cylinder_models.ipynb | AthenaEPI/mipy | dbbca4066a6c162dcb05865df5ff666af0e4020a | [
"MIT"
]
| 95 | 2018-02-03T11:55:30.000Z | 2022-03-31T15:10:39.000Z | examples/example_cylinder_models.ipynb | AthenaEPI/mipy | dbbca4066a6c162dcb05865df5ff666af0e4020a | [
"MIT"
]
| 23 | 2018-02-13T07:21:01.000Z | 2022-02-22T20:12:08.000Z | 177.119845 | 51,872 | 0.882069 | true | 3,416 | Qwen/Qwen-72B | 1. YES
2. YES | 0.841826 | 0.727975 | 0.612828 | __label__eng_Latn | 0.916219 | 0.262136 |
# Checkpoint 1
### Read This First
**1. Use the constants provided in the cell below. Do not use your own constants.**
**2. Put the code that produces the output for a given task in the specific cell indicated. You are welcome to add as many cells as you like for imports, function definitions, variables, etc. Additional cells need to be in the proper order such that your code runs the first time through.**
The Coulomb law is given by:
$
\Large
\begin{align}
F(r) = -\frac{e^{2}}{4 \pi \epsilon_{0} r^{2}} \left( \frac{r}{r_{0}} \right)^{\alpha},
\end{align}
$
where $r_{0}$ is the Bhor radius, given by:
$
\Large
\begin{align}
r_{0} = \frac{4 \pi \epsilon_{0} \hbar^{2}}{m e^{2}}.
\end{align}
$
The electric potential is given by:
$
\Large
\begin{align}
V(r) = \int_{r}^{\infty} F(r^{\prime}) dr^{\prime}
\end{align}
$
Use the following constants:
* $\frac{\hbar^{2}}{2m} = 0.0380998\ nm^{2} eV$ (called `c1` below)
* $\frac{e^{2}}{4 \pi \epsilon_{0}} = 1.43996\ nm\ eV$ (called `c2` below)
* $r_{0} = 0.0529177\ nm$ (called `r0` below)
* Planck constant $h = 6.62606896\times10^{-34} J s$ (`h`)
* Speed of light $c = 299792458\ m/s$ (`c`)
```python
# add imports here
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.integrate import quad
from scipy.linalg import eigvalsh_tridiagonal
from scipy.stats import linregress
```
```python
plt.rcParams['figure.figsize'] = (10, 6)
plt.rcParams['font.size'] = 14
```
```python
# Constants (use these)
c1 = 0.0380998 # nm^2 eV
c2 = 1.43996 # nm eV
r0 = 0.0529177 # nm
h = 6.62606896e-34 # J s
c = 299792458. # m/s
hc = 1239.8419 # eV nm
```
## Task 1
Write a code that calculates $V(r)$ numerically for $\alpha = 0.01$ and plots it for $r$ = 0.01...1 nm. Remember to label the axes.
```python
def get_force(r, alpha=0):
"""
Method: calculates the electrostatic force between a proton and
an electron with charge ±e respectively
:param r: radial distance
:param alpha: perturbation
:return force: value for force as a float
"""
force = -c2 * 1 / (r)**2 * (r / r0)**alpha
return force
def potential_numerical(r, alpha=0):
"""
Method: calculates the electrostatic potential for a pairwise
system through the integration of the force between the pair
    :param r: radial distance
    :param alpha: perturbation
    :return potential: potential value as a float
"""
potential = quad(get_force, r, np.inf, args=(alpha))[0]
return potential
```
```python
N = 13000 # Number of integration steps
radii = np.linspace(0.01, 1.5, num=N) # Defining radial distances (rmax = 1.5nm)
# rmax = 1.5nm gets good contribution of the energy from BOTH n=1 and n=2 energy levels
potential = np.array([]) # Forming potential array
for i in range(radii.size):
potential = np.append(potential, potential_numerical(radii[i], alpha=0)) # Getting potential
# Plotting electrostatic potential vs radial distance for H atom
plt.title("Electrostatic Potential vs. Radial Distance")
plt.xlabel("Radial Distance (nm)")
plt.xlim(right=1.0)
plt.ylabel("Potential (eV)")
plt.plot(radii, potential)
plt.show()
```
## Task 2
In addition to (1), the test below will compare the analytic expression for $𝑉(r)$ with the numerically obtained values for $r$ = 0.01,0.02...1 nm. The biggest absolute difference $diff = max |V_{exact}(r) − V_{numerical}(r)|$ must be smaller than 10$^{−5}$ eV. There is nothing else for you to do.
```python
# We will call your function for one value of r and alpha = 0.01. There will be more tests!
potential_numerical(0.5, 0.01)
```
-2.975081858428647
## Task 3
In addition to (2), calculate the first 2 energy levels (eigenvalues of $H$) for $\alpha = 0, 0.01$ and print out the values in eV. The values must be accurate to 0.01 eV. This requires sufficiently large $r_{max}$ and $N$. Plot the difference $\Delta E$ between the two energies for $\alpha = 0, 0.01$. Remember to label the axes.
```python
def energy_levels(alpha):
"""
Method: determines the Hamiltonian using sparse matrices
where H = -hbar**2/2m * (d/dr)**2 + V(r)
    :param alpha: perturbation (uses the global radii array and step dr)
:return E1,E2: the two lowest eigenvalues
"""
# Constructing Hamiltonian
dr_array = np.arange(dr, radii.size * dr + dr, dr)
pot_diag = np.array([])
for i in range(dr_array.size):
pot_diag = np.append(pot_diag, potential_numerical(dr_array[i], alpha))
H_diag = (- c1 / (dr)**2) * np.full(radii.size, -2) + pot_diag
off_diag =(- c1 / (dr)**2) * np.full(radii.size - 1, 1)
# Finding eigenvalues of tridiagonal Hamiltonian matrix
eigvals = eigvalsh_tridiagonal(H_diag, off_diag, select='i', select_range=(0,1))
np.sort(np.real(eigvals))
E1, E2 = eigvals[0], eigvals[1]
return E1, E2
```
```python
dr = max(radii)/N
delta_E = np.array([])
alpha_values = np.arange(0.0, 0.01, 0.001)
# Calculating delta E for different alpha values
for alpha in alpha_values:
E1, E2 = energy_levels(alpha)
delta_E = np.append(delta_E, E2-E1)
# Plotting delta_E vs. alpha
plt.title("En=2 - En=1 for the Hydrogen atom with a perturbed potential")
plt.xlabel("Alpha (Perturbation)")
plt.ylabel("Delta E (eV)")
plt.plot(alpha_values, delta_E)
plt.show()
```
```python
# Print out the energy levels for alpha = 0, 0.01.
e_levels_0 = energy_levels(0.0)
e_levels_0_01 = energy_levels(0.01)
print ("alpha = 0.00:", e_levels_0)
print ("alpha = 0.01:", e_levels_0_01)
```
alpha = 0.00: (-13.605598566607691, -3.401402280845947)
alpha = 0.01: (-13.807382891866881, -3.5346022779978856)
## Task 4
In addition to (3), assuming that the transition between the 1st excited and the ground state corresponds to the wavelength $\lambda = 121.5 \pm 0.1$ nm, what is the maximum value of $\alpha_{max} > 0$ consistent with this measurement (i.e., the largest $\alpha_{max} > 0$ such that the predicted and measured wavelengths differ by less than 0.1 nm)?
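(For reference: the transition energy and wavelength are related by $\Delta E = E_2 - E_1 = hc/\lambda$, so with $hc = 1239.8419$ eV nm the measured band $\lambda = 121.5 \pm 0.1$ nm corresponds to $\Delta E$ between roughly $1239.8419/121.6 \approx 10.196$ eV and $1239.8419/121.4 \approx 10.213$ eV; $\alpha_{max}$ is the largest $\alpha$ whose predicted $\Delta E$ still lies inside this band.)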
```python
def find_alpha_max():
# Only need delta_E_max to find alpha_max
# delta_E max corresponds to minimum wavelength between n=2 and n=1 levels
lambda_min = 121.4
delta_E_max = hc / lambda_min
# Finding equation of line from plot above
r = linregress(alpha_values, delta_E)
# Finding corresponding alpha_max to delta_E_max from straight line
alpha_max = (delta_E_max - r.intercept)/r.slope
return alpha_max
```
```python
# Run the function and print alpha_max.
alpha_max = find_alpha_max()
print ("alpha_max:", alpha_max)
```
alpha_max: 0.0012803032280778062
## Task 5
Improve the accuracy of the computation of the two energy levels to 0.001 eV and find $\alpha_{max}$ assuming the wavelength $\lambda = 121.503 \pm 0.01$ nm.
```python
### TASK 5
def energy_levels_improved(alpha):
"""
Method: determines the Hamiltonian using sparse matrices
where H = -hbar**2/2m * (d/dr)**2 + V(r)
    :param alpha: perturbation (uses the global radii array and step dr)
:return E1,E2: the two lowest eigenvalues
"""
dr_array = np.arange(dr, radii.size * dr + dr, dr)
pot_diag = np.array([])
for i in range(dr_array.size):
pot_diag = np.append(pot_diag, potential_numerical(dr_array[i], alpha))
H_diag = (- c1 / (dr)**2) * np.full(radii.size, -2) + pot_diag
off_diag =(- c1 / (dr)**2) * np.full(radii.size - 1, 1)
eigvals = eigvalsh_tridiagonal(H_diag, off_diag, select='i', select_range=(0,1))
np.sort(np.real(eigvals))
E1, E2 = eigvals[0], eigvals[1]
return E1, E2
def find_alpha_max_improved():
# Only need delta_E_max to find alpha_max
# delta_E max corresponds to minimum wavelength between n=2 and n=1 levels
    # Using old accuracy of ±0.001nm as I think the code is fast enough
lambda_min = 121.502
delta_E_max = hc / lambda_min
# Finding equation of line from plot above
r = linregress(alpha_values, delta_E)
# Finding corresponding alpha_max to delta_E_max from straight line
alpha_max = (delta_E_max - r.intercept)/r.slope
return alpha_max
```
```python
# Run the function and print alpha_max.
e_levels_0_01 = energy_levels_improved(0.01)
print ("alpha = 0.01:", e_levels_0_01)
# Run the function and print alpha_max.
alpha_max_improved = find_alpha_max_improved()
print ("alpha_max:", alpha_max_improved)
```
alpha = 0.01: (-13.807382891866881, -3.5346022779978856)
alpha_max: 2.8704299245945143e-05
## Task 6
How would one achieve the same accuracy with significantly smaller matrices? Hint: can we represent $R$ from Eq. (1) as a linear combination of functions that solve the "unperturbed" equation, and translate this into an eigenproblem for a certain $N \times N$ matrix, with $N < 100$?
```python
### TASK 6
def energy_levels_best(alpha):
# Remove the line that says "raise NotImplementedError"
# YOUR CODE HERE
raise NotImplementedError()
return E1, E2
def find_alpha_max_best():
# Remove the line that says "raise NotImplementedError"
# YOUR CODE HERE
raise NotImplementedError()
return alpha_max
```
```python
# Run the function and print alpha_max.
e_levels_0_01 = energy_levels_best(0.01)
print ("alpha = 0.01:", e_levels_0_01)
# Run the function and print alpha_max.
alpha_max_best = find_alpha_max_best()
print ("alpha_max:", alpha_max_best)
```
```python
```
| 11d272fea2086122a24a000ded0d43bb4ce25d8a | 78,747 | ipynb | Jupyter Notebook | c1/h_levels.ipynb | c-abbott/num-rep | fb548007b84f96d46527b8ea3ba0461b32a34452 | [
"MIT"
]
| null | null | null | c1/h_levels.ipynb | c-abbott/num-rep | fb548007b84f96d46527b8ea3ba0461b32a34452 | [
"MIT"
]
| null | null | null | c1/h_levels.ipynb | c-abbott/num-rep | fb548007b84f96d46527b8ea3ba0461b32a34452 | [
"MIT"
]
| null | null | null | 105.417671 | 30,396 | 0.851575 | true | 2,858 | Qwen/Qwen-72B | 1. YES
2. YES | 0.891811 | 0.879147 | 0.784033 | __label__eng_Latn | 0.923141 | 0.659903 |
# Taylor Problem 16.14 version finite difference
Here we'll solve the wave equation for $u(x,t)$,
$\begin{align}
\frac{\partial^2 u(x,t)}{\partial t^2} = c^2 \frac{\partial^2 u(x,t)}{\partial x^2}
\end{align}$
by a finite difference method, given its initial shape and velocity at $t=0$ from $x=0$ to $x=L$, with $L=1$.
The wave speed is $c=1$.
The shape is:
$\begin{align}
u(x,0) = \left\{
\begin{array}{ll}
2x & 0 \leq x \leq \frac12 \\
2(1-x) & \frac12 \leq x \leq 1
\end{array}
\right.
\;,
\end{align}$
and the velocity is zero for all $x$ at $t=0$. So this could represent the string on a guitar plucked at $t=0$.
* Created 03-Apr-2019. Last revised 06-Apr-2019 by Dick Furnstahl ([email protected]).
**Template version: Add your own code where your see** `#***` **based on the equations below.**
## Background and equations
We will discretize $0 \leq x \leq L$ into the array `x_pts` with equal spacing $\Delta x$. How do we find an expression for the second derivative in terms of points in `x_pts`? Taylor expansion, of course! Look at a step forward and back in $x$:
$\begin{align}
u(x+\Delta x,t) &= u(x,t) + \frac{\partial u}{\partial x}\Delta x
+ \frac12\frac{\partial^2 u}{\partial x^2}(\Delta x)^2
+ \frac16\frac{\partial^3 u}{\partial x^3}(\Delta x)^3
+ \mathcal{O}(\Delta x)^4 \\
u(x-\Delta x,t) &= u(x,t) - \frac{\partial u}{\partial x}\Delta x
+ \frac12\frac{\partial^2 u}{\partial x^2}(\Delta x)^2
- \frac16\frac{\partial^3 u}{\partial x^3}(\Delta x)^3
+ \mathcal{O}(\Delta x)^4
\end{align}$
with all of the derivatives evaluated at $(x,t)$.
By adding these equations we eliminate all of the odd derivatives and can solve for the second derivative:
$\begin{align}
\frac{\partial^2 u}{\partial x^2} = \frac{u(x+\Delta x,t) - 2 u(x,t) + u(x-\Delta x,t)}{(\Delta x)^2}
+ \mathcal{O}(\Delta x)^2
\end{align}$
which is good to order $(\Delta x)^2$ rather than $\Delta x$ because of the cancellation of odd terms.
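As a quick standalone check of this central-difference formula (not part of the problem itself), applying it to $u(x)=\sin(x)$ and comparing against the exact second derivative $-\sin(x)$ shows the error dropping by roughly a factor of four each time $\Delta x$ is halved:
```python
import numpy as np

def second_deriv_error(delta_x, x=1.0):
    """Error of the central-difference second derivative of sin(x) at x."""
    approx = (np.sin(x + delta_x) - 2.*np.sin(x) + np.sin(x - delta_x)) / delta_x**2
    return abs(approx - (-np.sin(x)))

for delta_x in [0.1, 0.05, 0.025]:
    print(f'delta_x = {delta_x:.3f}: error = {second_deriv_error(delta_x):.2e}')
```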
We get a similar expression for the time derivative by adding equations expanding to $t\pm \Delta t$. But instead of solving for the second derivative, we solve for $u$ at a time step forward in terms of $u$ at previous times:
$\begin{align}
u(x, t+\Delta t) \approx 2 u(x,t) - u(x, t-\Delta t) + \frac{\partial^2 u}{\partial t^2}(\Delta t)^2
\end{align}$
and substitute the expression for $\partial^2 u/\partial x^2$, defining $c' \equiv \Delta x/\Delta t$.
So to take a time step of $\Delta t$ we use:
$\begin{align}
u(x, t+\Delta t) \approx 2 u(x,t) - u(x, t-\Delta t) + \frac{c^2}{c'{}^2}
[u(x+\Delta x,t) - 2 u(x,t) + u(x-\Delta x,t)]
\qquad \textbf{(A)}
\end{align}$
**This is the equation to code for advancing the wave in time.**
To use the equation, we need to save $u$ for all $x$ at two consecutive times. We'll call those `u_past` and `u_present` and call the result of applying the equation `u_future`. We have the initial $u(x,0)$ but to proceed to get $u(x,\Delta t)$ we'll still need $u(x,-\Delta t)$ while what we have is $\partial u(x,0)/\partial t$. We once again turn to a Taylor expansion:
$\begin{align}
u(x, -\Delta t) = u(x,0) - \frac{\partial u(x,0)}{\partial t}(\Delta t)
% + \frac12 \frac{\partial^2 u}{\partial t^2}(\Delta t)^2 + \cdots \\
% &= u(x,0) - \frac{\partial u(x,0)}{\partial t}(\Delta t)
% + \frac12 \frac{c^2}{c'{}^2} [u(x+\Delta x,0) - 2 u(x,0) + u(x-\Delta x,0)]
\qquad \textbf{(B)}
\end{align}$
and now we know both terms on the right side of $\textbf{(B)}$, so we can use this in $\textbf{(A)}$ to take the first step to $u(x, \Delta t)$.
\[Note: in the class notes an expression for $u(x, -\Delta t)$ was given that included a $(\Delta t)^2$ correction. However, this doesn't work, so use $\textbf{(B)}$ instead.\]
```python
%matplotlib inline
```
```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation, rc
from IPython.display import HTML
```
```python
class uTriangular():
"""
uTriangular class sets up a wave at x_pts. Now with finite difference.
Parameters
----------
x_pts : array of floats
x positions of the wave
delta_t : float
time step
    c_wave : float
speed of the wave
L : length of the string
Methods
-------
k(self, n)
Returns the wave number given n.
u_wave(self, t)
Returns u(x, t) for x in x_pts and time t
"""
def __init__(self, x_pts, delta_t=0.01, c_wave=1., L=1.):
        #*** Add code for initialization
self.delta_x = x_pts[1] - x_pts[0]
self.c_prime = self.delta_x / self.delta_t # c' definition
self.u_start() # set the starting functions
def u_0(self, x):
"""Initial shape of string."""
if (x <= L/2.):
#*** fill in the rest
def u_dot_0(self, x):
"""Initial velocity of string."""
return #*** fill this in
def u_start(self):
"""Initiate u_past and u_present."""
self.u_present = np.zeros(len(x_pts))
self.u_dot_present = np.zeros(len(x_pts))
self.u_past = np.zeros(len(x_pts))
for i in np.arange(1, len(x_pts) - 1):
x = self.x_pts[i]
self.u_present[i] = #*** define the t=0 u(x,0)
self.u_dot_present[i] = #*** define the t=0 u_dot(x,0)
self.u_past[i] += #*** Implement equation (B)
self.t_now = 0.
def k(self, n):
"""Wave number for n
"""
return n * np.pi / self.L
def u_wave_step(self):
"""Returns the wave at the next time step, t_now + delta_t.
"""
u_future = np.zeros(len(self.x_pts)) # initiate to zeros
for i in np.arange(1, len(x_pts) - 1):
u_future[i] = #*** Implement equation (A)
# update past and present u
self.u_past = self.u_present
self.u_present = u_future
return u_future
def u_wave_at_t(self, t):
"""
Returns the wave at time t by calling u_wave_step multiple times
"""
self.u_start() # reset to the beginning for now (sets t_now=0)
if (t < self.delta_t):
return self.u_present
else:
for step in np.arange(self.t_now, t+self.delta_t, self.delta_t):
u_future = self.u_wave_step()
return u_future
```
First look at the initial ($t=0$) wave form.
```python
L = 1.
c_wave = 1.
omega_1 = np.pi * c_wave / L
tau = 2.*np.pi / omega_1
# Set up the array of x points (whatever looks good)
x_min = 0.
x_max = L
delta_x = 0.01
x_pts = np.arange(x_min, x_max + delta_x, delta_x)
# Set up the t mesh for the animation. The maximum value of t shown in
# the movie will be t_min + delta_t * frame_number
t_min = 0. # You can make this negative to see what happens before t=0!
t_max = 2.*tau
delta_t = 0.0099
print('delta_t = ', delta_t)
t_pts = np.arange(t_min, t_max + delta_t, delta_t)
# instantiate a wave
u_triangular_1 = uTriangular(x_pts, delta_t, c_wave, L)
print('c_prime = ', u_triangular_1.c_prime)
print('c wave = ', u_triangular_1.c)
# Make a figure showing the initial wave.
t_now = 0.
u_triangular_1.u_start()
fig = plt.figure(figsize=(6,4), num='Standing wave')
ax = fig.add_subplot(1,1,1)
ax.set_xlim(x_min, x_max)
gap = 0.1
ax.set_ylim(-1. - gap, 1. + gap)
ax.set_xlabel(r'$x$')
ax.set_ylabel(r'$u(x, t=0)$')
ax.set_title(rf'$t = {t_now:.1f}$')
line, = ax.plot(x_pts,
u_triangular_1.u_present,
color='blue', lw=2)
# add a line at a later time
line_2, = ax.plot(x_pts,
u_triangular_1.u_wave_at_t(0.5),
color='black', lw=2)
fig.tight_layout()
```
Next make some plots at an array of time points.
```python
t_array = tau * np.arange(0, 1.125, .125)
fig_array = plt.figure(figsize=(12,12), num='Triangular wave')
for i, t_now in enumerate(t_array):
ax_array = fig_array.add_subplot(3, 3, i+1)
ax_array.set_xlim(x_min, x_max)
gap = 0.1
ax_array.set_ylim(-1. - gap, 1. + gap)
ax_array.set_xlabel(r'$x$')
ax_array.set_ylabel(r'$u(x, t)$')
ax_array.set_title(rf'$t/\tau = {t_now/tau:.3f}$')
ax_array.plot(x_pts,
u_triangular_1.u_wave_at_t(t_now),
color='blue', lw=2)
fig_array.tight_layout()
fig_array.savefig('Taylor_Problem_16p14_finite_difference.png',
bbox_inches='tight')
```
Now it is time to animate!
We use the cell "magic" `%%capture` to keep the figure from being shown here. If we didn't, the animated version below would be blank.
```python
%%capture
fig_anim = plt.figure(figsize=(6,3), num='Triangular wave')
ax_anim = fig_anim.add_subplot(1,1,1)
ax_anim.set_xlim(x_min, x_max)
gap = 0.1
ax_anim.set_ylim(-1. - gap, 1. + gap)
# By assigning the first return from plot to line_anim, we can later change
# the values in the line.
u_triangular_1.u_start()
line_anim, = ax_anim.plot(x_pts,
u_triangular_1.u_wave_at_t(t_min),
color='blue', lw=2)
fig_anim.tight_layout()
```
```python
def animate_wave(i):
"""This is the function called by FuncAnimation to create each frame,
numbered by i. So each i corresponds to a point in the t_pts
array, with index i.
"""
t = t_pts[i]
y_pts = u_triangular_1.u_wave_at_t(t)
line_anim.set_data(x_pts, y_pts) # overwrite line_anim with new points
#return line_anim # this is needed for blit=True to work
```
```python
frame_interval = 80. # time between frames
frame_number = 201 # number of frames to include (index of t_pts)
anim = animation.FuncAnimation(fig_anim,
animate_wave,
init_func=None,
frames=frame_number,
interval=frame_interval,
blit=False,
repeat=False)
```
```python
HTML(anim.to_jshtml()) # animate using javascript
```
```python
```
| dead949f6df6d594b52ba35ff5a97b83f91fb5a2 | 15,082 | ipynb | Jupyter Notebook | 2020_week_11/Taylor_problem_16p14_finite_difference_template.ipynb | CLima86/Physics_5300_CDL | d9e8ee0861d408a85b4be3adfc97e98afb4a1149 | [
"MIT"
]
| null | null | null | 2020_week_11/Taylor_problem_16p14_finite_difference_template.ipynb | CLima86/Physics_5300_CDL | d9e8ee0861d408a85b4be3adfc97e98afb4a1149 | [
"MIT"
]
| null | null | null | 2020_week_11/Taylor_problem_16p14_finite_difference_template.ipynb | CLima86/Physics_5300_CDL | d9e8ee0861d408a85b4be3adfc97e98afb4a1149 | [
"MIT"
]
| null | null | null | 35.739336 | 387 | 0.487866 | true | 3,065 | Qwen/Qwen-72B | 1. YES
2. YES | 0.867036 | 0.754915 | 0.654538 | __label__eng_Latn | 0.920255 | 0.359043 |
The parameter space of SSMs provides a compact representation that is amenable to learning algorithms (e.g. classification or clustering), evaluation, and exploration. Some of these aspects will be explored in this part of the tutorial.
We employ shape data of 116 distal femora that where, among others, used in
[Rigid motion invariant statistical shape modeling based on discrete fundamental forms](https://doi.org/10.1016/j.media.2021.102178), F. Ambellan, S. Zachow, Christoph von Tycowicz, Medical Image Analysis (2021). [PDF](https://arxiv.org/pdf/2111.06850.pdf)
The respective segmentation masks are publicly available at [pubdata.zib.de](https://pubdata.zib.de).
To speed up things a little bit, this part of the tutorial starts right after the model creation step, i.e. we have already constructed a shape model. We will work with the shape weights stored in `SSM.coeffs` (uniquely determining all input shapes).
The data set splits into two subgroups of the same cardinality, namely, healthy and diseased femora. In the following we want to visualize the shape weights in different ways and later use them to perform a classification experiment on osteoarthritis.
# Visualization of Principal Weights
## Load data
First we load the principal weights that are stored together with the labels (healthy/diseased) in a numpy array for PDM and FCM, respectively. Second, we define unique colors to represent FCM and PDM in all subsequent plots.
```python
import numpy as np
from tutorial2_pop_med_image_shape_ana.utils.sammon import sammon
from tutorial2_pop_med_image_shape_ana.utils.utils import runSVMClassification, plotClassificationResults
from sklearn.preprocessing import normalize
import matplotlib.pyplot as plt
dataFCM = np.load('tutorial2_pop_med_image_shape_ana/data/femurOAclassificationDataFCM.npy')
dataPDM = np.load('tutorial2_pop_med_image_shape_ana/data/femurOAclassificationDataPDM.npy')
# first row -> label {0, 1}, second to last row -> shape weights (column is sample, row is feature)
labels = dataFCM[0, :]
fcmFeatures = dataFCM[1:, :]
pdmFeatures = dataPDM[1:, :]
# dark green (FCM) and dark violet (PDM)
colors = ['#008c04', '#ae00d8']
```
## Task 4 on Visualization of Two Principal Weights
At first we will focus on different pairs of two weights, indexed by (`pW1`, `pW2`), that we can easily visualize in 2d scatter plots. We hereby assign different markers to different disease states.
Choose different values for `pw1` and `pw2` (values between 0 and 114). What is your impression?
- Are there some weights to appear more expressive than others? (w.r.t. disease state)
- If so, why do you think that is?
- What can you say about the difference between FCM and PDM?
```python
[pW1, pW2] = [0, 1] # eg [0, 1], [10, 15], [0, 114] ...
# split data into healthy and diseased index lists
healthy = np.where(labels == 0)[0]
diseased = np.where(labels != 0)[0]
fig, ax = plt.subplots(1, 2, figsize=(12, 5))
data_list = [fcmFeatures, pdmFeatures]
title_list = ['FCM-weights of Two Principal Directions', ' PDM-weights of Two Principal Directions']
legendLocation_list = ['upper right', 'upper left']
for k in range(2):
ax[k].scatter(data_list[k][pW1, healthy], data_list[k][pW2, healthy], s=40, linewidths=2, c=colors[k], label='healthy')
ax[k].scatter(data_list[k][pW1, diseased], data_list[k][pW2, diseased], s=40, linewidths=2, c='white', edgecolors=colors[k], label='diseased')
ax[k].set_xticks([])
ax[k].set_yticks([])
ax[k].set_title(title_list[k])
ax[k].legend(loc=legendLocation_list[k])
plt.show()
```
## Task 5 on Sammon Projection to Two Dimensions
The Sammon projection tries to find a low dimensional (2d in our case) representation of some given high dimensional data, s.t. the following error is minimal
\begin{equation}
Err = \dfrac{1}{\sum_{i<j}d_{R^d}(\alpha_i, \alpha_j)} \sum_{i<j}\dfrac{\left(d_{R^d}(\alpha_i, \alpha_j) - d_{R^2}(\text{pr}(\alpha_i), \text{pr}(\alpha_j))\right)^2}{d_{R^d}(\alpha_i, \alpha_j)}.
\end{equation}
In other words: The distances between two weight vectors $\alpha_i, \alpha_j$ before projection should be close to those after projection.
(cf. [A nonlinear mapping for data structure analysis](https://doi.org/10.1109/T-C.1969.222678), JW. Sammon, IEEE Transactions on computers (1969). [PDF](http://syllabus.cs.manchester.ac.uk/pgt/2021/COMP61021/reference/Sammon.pdf))
Sammon projection can also be applied to a subset of the weight vectors, defined by the index range (`nR1`, `nR2`).
Choose different values for `nR1` and `nR2` (values between 0 and 114, `nR1` < `nR2 + 1`). What is your impression?
- Are there some weights (subsets of weights) to appear more expressive than others? (w.r.t. disease state)
- If so, why do you think that is?
- What can you say about the difference between FCM and PDM?
```python
[nR1, nR2] = [0, 114] # e.g. [0, 114] (full range), [105, 114], etc.
fcmSammon, _ = sammon(fcmFeatures[nR1:nR2, :].transpose(), 2, display=0)
pdmSammon, _ = sammon(pdmFeatures[nR1:nR2, :].transpose(), 2, display=0)
fig, ax = plt.subplots(1, 2, figsize=(12, 5))
data_list = [fcmSammon, pdmSammon]
title_list = ['Sammon Projection of FCM-weights', 'Sammon Projection of PDM-weights']
for k in range(2):
ax[k].scatter(data_list[k][healthy, 0], data_list[k][healthy, 1], s=40, linewidths=2, c=colors[k], label='healthy')
ax[k].scatter(data_list[k][diseased, 0], data_list[k][diseased, 1], s=40, linewidths=2, c='white', edgecolors=colors[k], label='diseased')
ax[k].set_xticks([])
ax[k].set_yticks([])
ax[k].set_title(title_list[k])
ax[k].legend(loc=legendLocation_list[k])
plt.show()
```
# Task 6 on Osteoarthritis Classification Experiment
We employ a linear Support Vector Machine (SVM) trained on the principal weights of PDM and FCM in order to classify distal femur bones as healthy or diseased w.r.t. knee osteoarthritis.
To get a more complete picture we train the SVM on different partitions of the data, e.g. `nPartitions=9` indicates SVM classifiers trained on 10% to 90% (randomly selected elements) of all input data, using the respective complement as test set. In order to acknowledge the randomness in the experiment design appropriately, we repeat the experiment for every partition `nRandomSamplings` times.
Furthermore, classification can also be carried out on a subset of the weight vectors, defined by the index range (`nR1`, `nR2`).
The averaged results are plotted together with bars quantifying the standard deviation.
Choose different values for `nR1` and `nR2` (values between 0 and 114, `nR1 < nR2`). What is your impression?
- Are there some weights (subsets of weights) to appear more expressive than others? (w.r.t. disease state)
- If so, why do you think that is?
- What can you say about the difference between FCM and PDM?
- Choose different values for 'nRandomSamplings' (e.g. 10, 100, 1000). Interpret what you see.
```python
nPartitions = 9
nRandomSamplings = 100
[nR1, nR2] = [0, 114] # e.g. [0, 114] (full range), [0, 3], [105, 114], etc.
# normalize feature vectors
fcmFeaturesNorm = normalize(fcmFeatures[nR1:nR2, :], axis=0, norm="l2")
pdmFeaturesNorm = normalize(pdmFeatures[nR1:nR2, :], axis=0, norm="l2")
fcmavgAccuracyPerPartition, fcmstdDevPerPartition = runSVMClassification(nPartitions, nRandomSamplings, fcmFeaturesNorm, labels)
pdmavgAccuracyPerPartition, pdmstdDevPerPartition = runSVMClassification(nPartitions, nRandomSamplings, pdmFeaturesNorm, labels)
data_list_avg = [fcmavgAccuracyPerPartition, pdmavgAccuracyPerPartition]
data_list_std = [fcmstdDevPerPartition, pdmstdDevPerPartition]  # standard deviations for FCM and PDM respectively
plotClassificationResults(data_list_avg, data_list_std, plt, colors)
```
```python
```
| 413100680de6bcba4e55d5b2e092b063cccf4ebe | 10,557 | ipynb | Jupyter Notebook | tutorial2_pop_med_image_shape_ana_03_VisClass.ipynb | ckolbPTB/TES_21_22_Tutorials | 764cd34e7248830e2c53688fd0a4882ead8d3860 | [
"Apache-2.0"
]
| 2 | 2021-09-08T11:31:07.000Z | 2021-09-08T11:45:45.000Z | tutorial2_pop_med_image_shape_ana_03_VisClass.ipynb | MATHplus-Young-Academy/TES_21_22_Tutorials | 3d8b12f40cf90f8471e94ef02160523857ded2ba | [
"Apache-2.0"
]
| null | null | null | tutorial2_pop_med_image_shape_ana_03_VisClass.ipynb | MATHplus-Young-Academy/TES_21_22_Tutorials | 3d8b12f40cf90f8471e94ef02160523857ded2ba | [
"Apache-2.0"
]
| 4 | 2021-11-02T17:16:06.000Z | 2022-01-24T18:39:08.000Z | 42.568548 | 400 | 0.628588 | true | 2,208 | Qwen/Qwen-72B | 1. YES
2. YES | 0.885631 | 0.863392 | 0.764647 | __label__eng_Latn | 0.951698 | 0.614863 |
# MADE Demo
Autoregressive density estimation for MNIST using masked FFNN (MADE).
```python
import sys
import argparse
import pprint
import pathlib
import numpy as np
import torch
import json
from tqdm import tqdm
from collections import OrderedDict, defaultdict
from torch.distributions import Bernoulli
import dgm
from dgm.conditional import MADEConditioner
from dgm.likelihood import FullyFactorizedLikelihood, AutoregressiveLikelihood
from dgm.opt_utils import get_optimizer, ReduceLROnPlateau
from utils import load_mnist, Batcher
```
Organise some hyperparameters
```python
from collections import namedtuple
Config = namedtuple(
"Config",
['seed', 'device', 'batch_size', 'data_dir',
'height', 'width', 'input_dropout',
'hidden_sizes', 'num_masks', 'resample_mask_every', 'll_samples',
'epochs', 'opt', 'lr', 'momentum', 'l2_weight', 'patience', 'early_stopping'])
args = Config(
seed=42,
device='cuda:0',
batch_size=128,
data_dir='./data',
height=28,
width=28,
input_dropout=0.,
hidden_sizes=[8000, 8000],
num_masks=1,
resample_mask_every=20,
ll_samples=10,
epochs=200,
opt='adam',
lr=1e-4,
momentum=0.,
l2_weight=1e-4,
patience=10,
early_stopping=10,
)
```
We like reproducibility
```python
np.random.seed(args.seed)
torch.manual_seed(args.seed)
torch.cuda.manual_seed_all(args.seed)
```
Load data
```python
train_loader, valid_loader, test_loader = load_mnist(
args.batch_size,
save_to='{}/std/{}x{}'.format(args.data_dir, args.height, args.width),
height=args.height,
width=args.width)
```
Build a model
\begin{align}
P(x) &= \prod_{i=1}^{|x|} \text{Bern}(x_i|\underbrace{f(x_{<i})}_{\text{MADE}})
\end{align}
For that we need a *conditioner*
* the part that maps prefixes into Bernoulli parameters
and an *autoregressive likelihood*
* the part that combines the Bernoulli factors
```python
x_size = args.width * args.height
device = torch.device(args.device)
made = MADEConditioner(
input_size=x_size, # our only input to the MADE layer is the observation
output_size=x_size * 1, # number of parameters to predict
context_size=0, # we do not have any additional inputs
hidden_sizes=args.hidden_sizes,
num_masks=args.num_masks
)
model = AutoregressiveLikelihood(
event_size=x_size, # size of observation
dist_type=Bernoulli,
conditioner=made
).to(device)
print("\n# Architecture")
print(model)
```
# Architecture
AutoregressiveLikelihood(
(conditioner): MADEConditioner(
(_made): MADE(
(hidden_activation): ReLU()
(net): Sequential(
(0): MaskedLinear(in_features=784, out_features=8000, bias=True)
(1): ReLU()
(2): MaskedLinear(in_features=8000, out_features=8000, bias=True)
(3): ReLU()
(4): MaskedLinear(in_features=8000, out_features=784, bias=True)
)
)
)
)
Let's configure the optimiser
```python
print("\n# Optimizer")
opt = get_optimizer(args.opt, model.parameters(), args.lr, args.l2_weight, args.momentum)
scheduler = ReduceLROnPlateau(
opt,
factor=0.5,
patience=args.patience,
early_stopping=args.early_stopping,
mode='min', threshold_mode='abs')
print(opt)
```
# Optimizer
Adam (
Parameter Group 0
amsgrad: False
betas: (0.9, 0.999)
eps: 1e-08
lr: 0.0001
weight_decay: 0.0001
)
Some helper code for batching MNIST digits
```python
def get_batcher(data_loader):
batcher = Batcher(
data_loader,
height=args.height,
width=args.width,
device=torch.device(args.device),
binarize=True,
num_classes=10,
onehot=True
)
return batcher
```
Helper code for validating a model
```python
def validate(batcher, args, model,
optimizer=None, scheduler=None, writer=None, name='dev'):
"""
:return: stop flag, dict
NLL can be found in the dict
"""
if args.num_masks == 1:
resample_mask = False
num_samples = 1
else: # ensemble
resample_mask = True
num_samples = args.ll_samples
with torch.no_grad():
model.eval()
print_ = defaultdict(list)
nb_instances = 0.
for x_mb, y_mb in batcher:
# [B, H*W]
x_mb = x_mb.reshape(-1, args.height * args.width)
# [B, 10]
made_inputs = x_mb
# [B, H*W]
p_x = model(
inputs=None,
history=x_mb,
num_samples=num_samples, resample_mask=resample_mask
)
# [B]
nll = -p_x.log_prob(x_mb).sum(-1)
# accumulate metrics
print_['NLL'].append(nll.sum().item())
nb_instances += x_mb.size(0)
return_dict = {k: np.sum(v) / nb_instances for k, v in print_.items()}
if writer:
writer.add_scalar('%s/NLL' % name, return_dict['NLL'])
stop = False
if scheduler is not None:
stop = scheduler.step(return_dict['NLL'])
return stop, return_dict
```
Main training loop
```python
print("\n# Training")
#from tensorboardX import SummaryWriter
#writer = SummaryWriter(args.logdir)
writer = None
step = 1
for epoch in range(args.epochs):
iterator = tqdm(get_batcher(train_loader))
for x_mb, y_mb in iterator:
# [B, H*W]
x_mb = x_mb.reshape(-1, args.height * args.width)
# [B, 10]
context = None
model.train()
opt.zero_grad()
if args.num_masks == 1:
resample_mask = False
else: # training with variable masks
resample_mask = args.resample_mask_every > 0 and step % args.resample_mask_every == 0
# [B, H*W]
noisy_x = torch.where(
torch.rand_like(x_mb) > args.input_dropout, x_mb, torch.zeros_like(x_mb)
)
p_x = model(
inputs=context,
history=noisy_x,
resample_mask=resample_mask
)
# [B, H*W]
ll_mb = p_x.log_prob(x_mb)
# [B]
ll = ll_mb.sum(-1)
loss = -(ll).mean()
loss.backward()
opt.step()
display = OrderedDict()
display['0s'] = '{:.2f}'.format((x_mb == 0).float().mean().item())
display['1s'] = '{:.2f}'.format((x_mb == 1).float().mean().item())
display['NLL'] = '{:.2f}'.format(-ll.mean().item())
if writer:
writer.add_scalar('training/LL', ll)
#writer.add_image('training/posterior/sample', z.mean(0).reshape(1,1,-1) * 255)
iterator.set_postfix(display, refresh=False)
step += 1
stop, dict_valid = validate(get_batcher(valid_loader), args, model, opt, scheduler,
writer=writer, name="dev")
if stop:
print('Early stopping at epoch {:3}/{}'.format(epoch + 1, args.epochs))
break
print('Epoch {:3}/{} -- '.format(epoch + 1, args.epochs) + \
', '.join(['{}: {:4.2f}'.format(k, v) for k, v in sorted(dict_valid.items())]))
```
0%| | 2/430 [00:00<00:28, 15.11it/s, 0s=0.87, 1s=0.13, NLL=540.60]
# Training
100%|██████████| 430/430 [00:27<00:00, 16.01it/s, 0s=0.87, 1s=0.13, NLL=183.00]
0%| | 2/430 [00:00<00:27, 15.56it/s, 0s=0.86, 1s=0.14, NLL=186.59]
Epoch 1/200 -- NLL: 178.83
100%|██████████| 430/430 [00:27<00:00, 15.48it/s, 0s=0.86, 1s=0.14, NLL=150.84]
0%| | 2/430 [00:00<00:28, 15.11it/s, 0s=0.87, 1s=0.13, NLL=146.11]
Epoch 2/200 -- NLL: 146.80
100%|██████████| 430/430 [00:28<00:00, 15.45it/s, 0s=0.87, 1s=0.13, NLL=127.02]
0%| | 2/430 [00:00<00:28, 15.00it/s, 0s=0.87, 1s=0.13, NLL=129.89]
Epoch 3/200 -- NLL: 129.88
100%|██████████| 430/430 [00:28<00:00, 15.39it/s, 0s=0.86, 1s=0.14, NLL=130.06]
0%| | 2/430 [00:00<00:28, 15.01it/s, 0s=0.87, 1s=0.13, NLL=122.98]
Epoch 4/200 -- NLL: 121.00
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.86, 1s=0.14, NLL=120.82]
0%| | 2/430 [00:00<00:28, 14.96it/s, 0s=0.87, 1s=0.13, NLL=111.05]
Epoch 5/200 -- NLL: 115.91
100%|██████████| 430/430 [00:28<00:00, 15.38it/s, 0s=0.87, 1s=0.13, NLL=116.86]
0%| | 2/430 [00:00<00:28, 15.07it/s, 0s=0.88, 1s=0.12, NLL=105.97]
Epoch 6/200 -- NLL: 112.54
100%|██████████| 430/430 [00:28<00:00, 15.42it/s, 0s=0.87, 1s=0.13, NLL=104.61]
0%| | 2/430 [00:00<00:28, 14.97it/s, 0s=0.87, 1s=0.13, NLL=108.41]
Epoch 7/200 -- NLL: 110.12
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.87, 1s=0.13, NLL=110.64]
0%| | 2/430 [00:00<00:29, 14.51it/s, 0s=0.87, 1s=0.13, NLL=110.41]
Epoch 8/200 -- NLL: 108.26
100%|██████████| 430/430 [00:28<00:00, 15.37it/s, 0s=0.87, 1s=0.13, NLL=103.89]
0%| | 2/430 [00:00<00:28, 14.93it/s, 0s=0.86, 1s=0.14, NLL=111.29]
Epoch 9/200 -- NLL: 106.60
100%|██████████| 430/430 [00:28<00:00, 15.43it/s, 0s=0.87, 1s=0.13, NLL=102.59]
0%| | 2/430 [00:00<00:28, 15.07it/s, 0s=0.86, 1s=0.14, NLL=109.08]
Epoch 10/200 -- NLL: 105.41
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.86, 1s=0.14, NLL=103.54]
0%| | 2/430 [00:00<00:28, 14.94it/s, 0s=0.87, 1s=0.13, NLL=100.54]
Epoch 11/200 -- NLL: 104.03
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.87, 1s=0.13, NLL=102.78]
0%| | 2/430 [00:00<00:28, 14.95it/s, 0s=0.87, 1s=0.13, NLL=101.37]
Epoch 12/200 -- NLL: 102.83
100%|██████████| 430/430 [00:28<00:00, 15.42it/s, 0s=0.87, 1s=0.13, NLL=98.70]
0%| | 2/430 [00:00<00:28, 14.99it/s, 0s=0.86, 1s=0.14, NLL=100.94]
Epoch 13/200 -- NLL: 101.76
100%|██████████| 430/430 [00:28<00:00, 15.38it/s, 0s=0.87, 1s=0.13, NLL=98.34]
0%| | 2/430 [00:00<00:28, 15.03it/s, 0s=0.87, 1s=0.13, NLL=102.14]
Epoch 14/200 -- NLL: 100.93
100%|██████████| 430/430 [00:28<00:00, 15.36it/s, 0s=0.88, 1s=0.12, NLL=99.11]
0%| | 2/430 [00:00<00:28, 14.95it/s, 0s=0.87, 1s=0.13, NLL=96.83]
Epoch 15/200 -- NLL: 100.19
100%|██████████| 430/430 [00:28<00:00, 15.40it/s, 0s=0.87, 1s=0.13, NLL=98.58]
0%| | 2/430 [00:00<00:28, 15.01it/s, 0s=0.86, 1s=0.14, NLL=100.46]
Epoch 16/200 -- NLL: 99.53
100%|██████████| 430/430 [00:28<00:00, 15.42it/s, 0s=0.87, 1s=0.13, NLL=100.28]
0%| | 2/430 [00:00<00:28, 15.01it/s, 0s=0.87, 1s=0.13, NLL=98.57]
Epoch 17/200 -- NLL: 98.87
100%|██████████| 430/430 [00:28<00:00, 15.38it/s, 0s=0.87, 1s=0.13, NLL=99.80]
0%| | 2/430 [00:00<00:28, 14.91it/s, 0s=0.87, 1s=0.13, NLL=95.93]
Epoch 18/200 -- NLL: 98.36
100%|██████████| 430/430 [00:28<00:00, 15.42it/s, 0s=0.86, 1s=0.14, NLL=100.65]
0%| | 2/430 [00:00<00:28, 15.00it/s, 0s=0.88, 1s=0.12, NLL=94.52]
Epoch 19/200 -- NLL: 97.65
100%|██████████| 430/430 [00:28<00:00, 15.37it/s, 0s=0.86, 1s=0.14, NLL=96.53]
0%| | 2/430 [00:00<00:28, 14.98it/s, 0s=0.88, 1s=0.12, NLL=95.14]
Epoch 20/200 -- NLL: 97.15
100%|██████████| 430/430 [00:28<00:00, 15.46it/s, 0s=0.87, 1s=0.13, NLL=95.29]
0%| | 2/430 [00:00<00:28, 15.15it/s, 0s=0.87, 1s=0.13, NLL=95.32]
Epoch 21/200 -- NLL: 96.70
100%|██████████| 430/430 [00:28<00:00, 15.38it/s, 0s=0.87, 1s=0.13, NLL=93.82]
0%| | 2/430 [00:00<00:28, 15.01it/s, 0s=0.86, 1s=0.14, NLL=98.83]
Epoch 22/200 -- NLL: 96.50
100%|██████████| 430/430 [00:28<00:00, 15.37it/s, 0s=0.86, 1s=0.14, NLL=95.41]
0%| | 2/430 [00:00<00:28, 15.05it/s, 0s=0.87, 1s=0.13, NLL=95.56]
Epoch 23/200 -- NLL: 95.68
100%|██████████| 430/430 [00:28<00:00, 15.47it/s, 0s=0.87, 1s=0.13, NLL=91.23]
0%| | 2/430 [00:00<00:28, 15.13it/s, 0s=0.87, 1s=0.13, NLL=93.88]
Epoch 24/200 -- NLL: 95.46
100%|██████████| 430/430 [00:28<00:00, 15.37it/s, 0s=0.87, 1s=0.13, NLL=92.03]
0%| | 2/430 [00:00<00:28, 15.10it/s, 0s=0.87, 1s=0.13, NLL=94.70]
Epoch 25/200 -- NLL: 94.99
100%|██████████| 430/430 [00:28<00:00, 15.46it/s, 0s=0.87, 1s=0.13, NLL=91.26]
0%| | 2/430 [00:00<00:28, 15.05it/s, 0s=0.87, 1s=0.13, NLL=94.32]
Epoch 26/200 -- NLL: 94.78
100%|██████████| 430/430 [00:28<00:00, 15.37it/s, 0s=0.87, 1s=0.13, NLL=94.88]
0%| | 2/430 [00:00<00:28, 15.23it/s, 0s=0.87, 1s=0.13, NLL=92.48]
Epoch 27/200 -- NLL: 94.41
100%|██████████| 430/430 [00:28<00:00, 15.38it/s, 0s=0.86, 1s=0.14, NLL=97.64]
0%| | 2/430 [00:00<00:28, 14.95it/s, 0s=0.88, 1s=0.12, NLL=90.83]
Epoch 28/200 -- NLL: 94.17
100%|██████████| 430/430 [00:28<00:00, 15.44it/s, 0s=0.87, 1s=0.13, NLL=92.35]
0%| | 2/430 [00:00<00:27, 15.30it/s, 0s=0.87, 1s=0.13, NLL=90.41]
Epoch 29/200 -- NLL: 93.88
100%|██████████| 430/430 [00:28<00:00, 15.44it/s, 0s=0.87, 1s=0.13, NLL=90.95]
0%| | 2/430 [00:00<00:28, 15.07it/s, 0s=0.88, 1s=0.12, NLL=89.75]
Epoch 30/200 -- NLL: 93.69
100%|██████████| 430/430 [00:28<00:00, 15.48it/s, 0s=0.86, 1s=0.14, NLL=94.78]
0%| | 2/430 [00:00<00:28, 15.19it/s, 0s=0.87, 1s=0.13, NLL=92.13]
Epoch 31/200 -- NLL: 93.05
100%|██████████| 430/430 [00:28<00:00, 15.55it/s, 0s=0.87, 1s=0.13, NLL=89.58]
0%| | 2/430 [00:00<00:28, 15.11it/s, 0s=0.88, 1s=0.12, NLL=90.83]
Epoch 32/200 -- NLL: 93.11
100%|██████████| 430/430 [00:28<00:00, 15.37it/s, 0s=0.87, 1s=0.13, NLL=87.04]
0%| | 2/430 [00:00<00:28, 15.09it/s, 0s=0.87, 1s=0.13, NLL=88.42]
Epoch 33/200 -- NLL: 92.71
100%|██████████| 430/430 [00:28<00:00, 15.49it/s, 0s=0.86, 1s=0.14, NLL=89.68]
0%| | 2/430 [00:00<00:28, 15.08it/s, 0s=0.87, 1s=0.13, NLL=87.11]
Epoch 34/200 -- NLL: 92.52
100%|██████████| 430/430 [00:28<00:00, 15.36it/s, 0s=0.87, 1s=0.13, NLL=90.26]
0%| | 2/430 [00:00<00:28, 15.13it/s, 0s=0.87, 1s=0.13, NLL=88.90]
Epoch 35/200 -- NLL: 92.36
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.87, 1s=0.13, NLL=86.81]
0%| | 2/430 [00:00<00:28, 15.14it/s, 0s=0.87, 1s=0.13, NLL=88.92]
Epoch 36/200 -- NLL: 92.06
100%|██████████| 430/430 [00:28<00:00, 15.47it/s, 0s=0.86, 1s=0.14, NLL=89.17]
0%| | 2/430 [00:00<00:28, 15.12it/s, 0s=0.86, 1s=0.14, NLL=90.22]
Epoch 37/200 -- NLL: 91.83
100%|██████████| 430/430 [00:28<00:00, 15.43it/s, 0s=0.87, 1s=0.13, NLL=88.98]
0%| | 2/430 [00:00<00:28, 15.12it/s, 0s=0.87, 1s=0.13, NLL=89.95]
Epoch 38/200 -- NLL: 91.65
100%|██████████| 430/430 [00:28<00:00, 15.43it/s, 0s=0.86, 1s=0.14, NLL=92.90]
0%| | 2/430 [00:00<00:28, 15.14it/s, 0s=0.87, 1s=0.13, NLL=86.14]
Epoch 39/200 -- NLL: 91.48
100%|██████████| 430/430 [00:28<00:00, 15.48it/s, 0s=0.87, 1s=0.13, NLL=88.64]
0%| | 2/430 [00:00<00:28, 15.13it/s, 0s=0.87, 1s=0.13, NLL=88.80]
Epoch 40/200 -- NLL: 91.10
100%|██████████| 430/430 [00:28<00:00, 15.48it/s, 0s=0.87, 1s=0.13, NLL=86.71]
0%| | 2/430 [00:00<00:28, 15.14it/s, 0s=0.87, 1s=0.13, NLL=85.25]
Epoch 41/200 -- NLL: 91.02
100%|██████████| 430/430 [00:28<00:00, 15.47it/s, 0s=0.87, 1s=0.13, NLL=88.53]
0%| | 2/430 [00:00<00:28, 15.19it/s, 0s=0.87, 1s=0.13, NLL=88.62]
Epoch 42/200 -- NLL: 91.01
100%|██████████| 430/430 [00:28<00:00, 15.48it/s, 0s=0.86, 1s=0.14, NLL=90.27]
0%| | 2/430 [00:00<00:28, 15.13it/s, 0s=0.87, 1s=0.13, NLL=88.74]
Epoch 43/200 -- NLL: 90.90
100%|██████████| 430/430 [00:28<00:00, 15.48it/s, 0s=0.87, 1s=0.13, NLL=87.56]
0%| | 2/430 [00:00<00:28, 15.10it/s, 0s=0.86, 1s=0.14, NLL=90.56]
Epoch 44/200 -- NLL: 90.64
100%|██████████| 430/430 [00:28<00:00, 15.45it/s, 0s=0.87, 1s=0.13, NLL=92.64]
0%| | 2/430 [00:00<00:28, 15.16it/s, 0s=0.87, 1s=0.13, NLL=89.84]
Epoch 45/200 -- NLL: 90.45
100%|██████████| 430/430 [00:28<00:00, 15.46it/s, 0s=0.87, 1s=0.13, NLL=88.97]
0%| | 2/430 [00:00<00:28, 15.15it/s, 0s=0.88, 1s=0.12, NLL=85.55]
Epoch 46/200 -- NLL: 90.15
100%|██████████| 430/430 [00:28<00:00, 15.47it/s, 0s=0.87, 1s=0.13, NLL=93.56]
0%| | 2/430 [00:00<00:28, 15.11it/s, 0s=0.87, 1s=0.13, NLL=86.29]
Epoch 47/200 -- NLL: 90.18
100%|██████████| 430/430 [00:28<00:00, 15.46it/s, 0s=0.87, 1s=0.13, NLL=90.79]
0%| | 2/430 [00:00<00:28, 15.12it/s, 0s=0.87, 1s=0.13, NLL=88.41]
Epoch 48/200 -- NLL: 89.92
100%|██████████| 430/430 [00:28<00:00, 15.48it/s, 0s=0.87, 1s=0.13, NLL=87.27]
0%| | 2/430 [00:00<00:28, 15.19it/s, 0s=0.87, 1s=0.13, NLL=89.30]
Epoch 49/200 -- NLL: 89.88
100%|██████████| 430/430 [00:28<00:00, 15.46it/s, 0s=0.87, 1s=0.13, NLL=90.28]
0%| | 2/430 [00:00<00:28, 15.08it/s, 0s=0.87, 1s=0.13, NLL=86.43]
Epoch 50/200 -- NLL: 89.77
100%|██████████| 430/430 [00:28<00:00, 15.40it/s, 0s=0.87, 1s=0.13, NLL=90.08]
0%| | 2/430 [00:00<00:28, 15.06it/s, 0s=0.87, 1s=0.13, NLL=85.85]
Epoch 51/200 -- NLL: 89.75
100%|██████████| 430/430 [00:28<00:00, 15.44it/s, 0s=0.87, 1s=0.13, NLL=88.34]
0%| | 2/430 [00:00<00:28, 15.03it/s, 0s=0.87, 1s=0.13, NLL=88.33]
Epoch 52/200 -- NLL: 89.43
100%|██████████| 430/430 [00:28<00:00, 15.48it/s, 0s=0.86, 1s=0.14, NLL=90.48]
0%| | 2/430 [00:00<00:28, 15.17it/s, 0s=0.87, 1s=0.13, NLL=87.43]
Epoch 53/200 -- NLL: 89.21
100%|██████████| 430/430 [00:28<00:00, 15.48it/s, 0s=0.86, 1s=0.14, NLL=87.72]
0%| | 2/430 [00:00<00:28, 15.10it/s, 0s=0.87, 1s=0.13, NLL=86.13]
Epoch 54/200 -- NLL: 89.27
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.87, 1s=0.13, NLL=88.06]
0%| | 2/430 [00:00<00:28, 15.05it/s, 0s=0.87, 1s=0.13, NLL=87.58]
Epoch 55/200 -- NLL: 88.99
100%|██████████| 430/430 [00:28<00:00, 15.43it/s, 0s=0.86, 1s=0.14, NLL=85.87]
0%| | 2/430 [00:00<00:28, 15.12it/s, 0s=0.87, 1s=0.13, NLL=87.08]
Epoch 56/200 -- NLL: 88.96
100%|██████████| 430/430 [00:28<00:00, 15.48it/s, 0s=0.87, 1s=0.13, NLL=85.27]
0%| | 2/430 [00:00<00:28, 15.19it/s, 0s=0.87, 1s=0.13, NLL=89.78]
Epoch 57/200 -- NLL: 88.86
100%|██████████| 430/430 [00:28<00:00, 15.50it/s, 0s=0.87, 1s=0.13, NLL=87.83]
0%| | 2/430 [00:00<00:28, 15.29it/s, 0s=0.88, 1s=0.12, NLL=82.58]
Epoch 58/200 -- NLL: 88.77
100%|██████████| 430/430 [00:28<00:00, 15.47it/s, 0s=0.87, 1s=0.13, NLL=86.85]
0%| | 2/430 [00:00<00:28, 15.13it/s, 0s=0.87, 1s=0.13, NLL=85.00]
Epoch 59/200 -- NLL: 88.69
100%|██████████| 430/430 [00:28<00:00, 15.47it/s, 0s=0.87, 1s=0.13, NLL=85.80]
0%| | 2/430 [00:00<00:28, 15.19it/s, 0s=0.87, 1s=0.13, NLL=88.11]
Epoch 60/200 -- NLL: 88.38
100%|██████████| 430/430 [00:28<00:00, 15.46it/s, 0s=0.86, 1s=0.14, NLL=88.80]
0%| | 2/430 [00:00<00:27, 15.30it/s, 0s=0.87, 1s=0.13, NLL=83.50]
Epoch 61/200 -- NLL: 88.54
100%|██████████| 430/430 [00:28<00:00, 15.15it/s, 0s=0.88, 1s=0.12, NLL=83.24]
0%| | 2/430 [00:00<00:28, 15.05it/s, 0s=0.87, 1s=0.13, NLL=88.49]
Epoch 62/200 -- NLL: 88.37
100%|██████████| 430/430 [00:28<00:00, 15.45it/s, 0s=0.87, 1s=0.13, NLL=86.54]
0%| | 2/430 [00:00<00:28, 15.17it/s, 0s=0.88, 1s=0.12, NLL=81.83]
Epoch 63/200 -- NLL: 88.52
100%|██████████| 430/430 [00:28<00:00, 15.46it/s, 0s=0.88, 1s=0.12, NLL=83.34]
0%| | 2/430 [00:00<00:28, 15.08it/s, 0s=0.87, 1s=0.13, NLL=85.14]
Epoch 64/200 -- NLL: 88.17
100%|██████████| 430/430 [00:28<00:00, 15.48it/s, 0s=0.87, 1s=0.13, NLL=83.71]
0%| | 2/430 [00:00<00:28, 15.21it/s, 0s=0.87, 1s=0.13, NLL=81.86]
Epoch 65/200 -- NLL: 88.04
100%|██████████| 430/430 [00:28<00:00, 15.44it/s, 0s=0.87, 1s=0.13, NLL=84.48]
0%| | 2/430 [00:00<00:28, 15.20it/s, 0s=0.87, 1s=0.13, NLL=87.96]
Epoch 66/200 -- NLL: 88.16
100%|██████████| 430/430 [00:28<00:00, 15.48it/s, 0s=0.87, 1s=0.13, NLL=85.14]
0%| | 2/430 [00:00<00:28, 15.11it/s, 0s=0.87, 1s=0.13, NLL=85.12]
Epoch 67/200 -- NLL: 88.18
100%|██████████| 430/430 [00:28<00:00, 15.46it/s, 0s=0.86, 1s=0.14, NLL=86.76]
0%| | 2/430 [00:00<00:28, 15.13it/s, 0s=0.87, 1s=0.13, NLL=86.37]
Epoch 68/200 -- NLL: 87.83
100%|██████████| 430/430 [00:28<00:00, 15.47it/s, 0s=0.87, 1s=0.13, NLL=82.42]
0%| | 2/430 [00:00<00:28, 15.24it/s, 0s=0.87, 1s=0.13, NLL=85.36]
Epoch 69/200 -- NLL: 87.82
100%|██████████| 430/430 [00:28<00:00, 15.46it/s, 0s=0.87, 1s=0.13, NLL=86.63]
0%| | 2/430 [00:00<00:28, 15.12it/s, 0s=0.86, 1s=0.14, NLL=88.78]
Epoch 70/200 -- NLL: 87.87
100%|██████████| 430/430 [00:28<00:00, 15.43it/s, 0s=0.87, 1s=0.13, NLL=86.15]
0%| | 2/430 [00:00<00:28, 15.20it/s, 0s=0.87, 1s=0.13, NLL=84.53]
Epoch 71/200 -- NLL: 87.63
100%|██████████| 430/430 [00:28<00:00, 15.46it/s, 0s=0.88, 1s=0.12, NLL=83.52]
0%| | 2/430 [00:00<00:28, 15.04it/s, 0s=0.87, 1s=0.13, NLL=83.23]
Epoch 72/200 -- NLL: 87.63
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.87, 1s=0.13, NLL=86.17]
0%| | 2/430 [00:00<00:28, 15.11it/s, 0s=0.87, 1s=0.13, NLL=81.23]
Epoch 73/200 -- NLL: 87.57
100%|██████████| 430/430 [00:28<00:00, 14.99it/s, 0s=0.87, 1s=0.13, NLL=84.90]
0%| | 2/430 [00:00<00:28, 15.10it/s, 0s=0.87, 1s=0.13, NLL=82.50]
Epoch 74/200 -- NLL: 87.42
100%|██████████| 430/430 [00:28<00:00, 15.43it/s, 0s=0.87, 1s=0.13, NLL=84.88]
0%| | 2/430 [00:00<00:28, 15.15it/s, 0s=0.87, 1s=0.13, NLL=81.42]
Epoch 75/200 -- NLL: 87.41
100%|██████████| 430/430 [00:28<00:00, 15.46it/s, 0s=0.87, 1s=0.13, NLL=82.15]
0%| | 2/430 [00:00<00:28, 15.07it/s, 0s=0.87, 1s=0.13, NLL=82.34]
Epoch 76/200 -- NLL: 87.37
100%|██████████| 430/430 [00:28<00:00, 15.43it/s, 0s=0.87, 1s=0.13, NLL=86.65]
0%| | 2/430 [00:00<00:28, 15.06it/s, 0s=0.87, 1s=0.13, NLL=84.66]
Epoch 77/200 -- NLL: 87.20
100%|██████████| 430/430 [00:28<00:00, 15.43it/s, 0s=0.87, 1s=0.13, NLL=85.89]
0%| | 2/430 [00:00<00:28, 15.08it/s, 0s=0.87, 1s=0.13, NLL=85.22]
Epoch 78/200 -- NLL: 87.29
100%|██████████| 430/430 [00:28<00:00, 15.40it/s, 0s=0.87, 1s=0.13, NLL=81.57]
0%| | 2/430 [00:00<00:28, 15.01it/s, 0s=0.88, 1s=0.12, NLL=82.69]
Epoch 79/200 -- NLL: 87.04
100%|██████████| 430/430 [00:28<00:00, 15.42it/s, 0s=0.87, 1s=0.13, NLL=83.04]
0%| | 2/430 [00:00<00:28, 15.03it/s, 0s=0.87, 1s=0.13, NLL=86.07]
Epoch 80/200 -- NLL: 87.28
100%|██████████| 430/430 [00:28<00:00, 15.51it/s, 0s=0.87, 1s=0.13, NLL=83.22]
0%| | 2/430 [00:00<00:28, 15.12it/s, 0s=0.87, 1s=0.13, NLL=84.26]
Epoch 81/200 -- NLL: 87.18
100%|██████████| 430/430 [00:28<00:00, 15.35it/s, 0s=0.88, 1s=0.12, NLL=80.37]
0%| | 2/430 [00:00<00:28, 15.19it/s, 0s=0.87, 1s=0.13, NLL=82.31]
Epoch 82/200 -- NLL: 87.01
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.86, 1s=0.14, NLL=84.00]
0%| | 2/430 [00:00<00:28, 15.19it/s, 0s=0.87, 1s=0.13, NLL=84.94]
Epoch 83/200 -- NLL: 86.98
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.88, 1s=0.12, NLL=86.64]
0%| | 2/430 [00:00<00:28, 15.15it/s, 0s=0.87, 1s=0.13, NLL=83.31]
Epoch 84/200 -- NLL: 86.97
100%|██████████| 430/430 [00:28<00:00, 15.42it/s, 0s=0.87, 1s=0.13, NLL=83.95]
0%| | 2/430 [00:00<00:28, 15.03it/s, 0s=0.86, 1s=0.14, NLL=83.34]
Epoch 85/200 -- NLL: 86.99
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.87, 1s=0.13, NLL=86.84]
0%| | 2/430 [00:00<00:28, 15.13it/s, 0s=0.87, 1s=0.13, NLL=85.20]
Epoch 86/200 -- NLL: 86.84
100%|██████████| 430/430 [00:28<00:00, 15.42it/s, 0s=0.87, 1s=0.13, NLL=85.53]
0%| | 2/430 [00:00<00:28, 15.05it/s, 0s=0.87, 1s=0.13, NLL=81.82]
Epoch 87/200 -- NLL: 86.77
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.87, 1s=0.13, NLL=84.83]
0%| | 2/430 [00:00<00:28, 15.07it/s, 0s=0.87, 1s=0.13, NLL=83.47]
Epoch 88/200 -- NLL: 86.65
100%|██████████| 430/430 [00:28<00:00, 15.43it/s, 0s=0.86, 1s=0.14, NLL=83.04]
0%| | 2/430 [00:00<00:28, 15.00it/s, 0s=0.87, 1s=0.13, NLL=80.21]
Epoch 89/200 -- NLL: 86.75
100%|██████████| 430/430 [00:28<00:00, 15.42it/s, 0s=0.87, 1s=0.13, NLL=82.52]
0%| | 2/430 [00:00<00:28, 15.06it/s, 0s=0.86, 1s=0.14, NLL=86.51]
Epoch 90/200 -- NLL: 86.63
100%|██████████| 430/430 [00:28<00:00, 15.43it/s, 0s=0.88, 1s=0.12, NLL=85.19]
0%| | 2/430 [00:00<00:28, 15.13it/s, 0s=0.86, 1s=0.14, NLL=84.78]
Epoch 91/200 -- NLL: 86.57
100%|██████████| 430/430 [00:28<00:00, 15.42it/s, 0s=0.87, 1s=0.13, NLL=83.03]
0%| | 2/430 [00:00<00:28, 15.07it/s, 0s=0.87, 1s=0.13, NLL=84.04]
Epoch 92/200 -- NLL: 86.69
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.88, 1s=0.12, NLL=79.43]
0%| | 2/430 [00:00<00:28, 15.23it/s, 0s=0.86, 1s=0.14, NLL=86.96]
Epoch 93/200 -- NLL: 86.53
100%|██████████| 430/430 [00:28<00:00, 15.46it/s, 0s=0.87, 1s=0.13, NLL=79.47]
0%| | 2/430 [00:00<00:28, 15.15it/s, 0s=0.87, 1s=0.13, NLL=80.32]
Epoch 94/200 -- NLL: 86.59
100%|██████████| 430/430 [00:28<00:00, 15.43it/s, 0s=0.87, 1s=0.13, NLL=86.14]
0%| | 2/430 [00:00<00:28, 15.11it/s, 0s=0.87, 1s=0.13, NLL=84.00]
Epoch 95/200 -- NLL: 86.43
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.87, 1s=0.13, NLL=83.54]
0%| | 2/430 [00:00<00:28, 15.11it/s, 0s=0.86, 1s=0.14, NLL=85.80]
Epoch 96/200 -- NLL: 86.21
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.87, 1s=0.13, NLL=80.04]
0%| | 2/430 [00:00<00:28, 15.10it/s, 0s=0.86, 1s=0.14, NLL=84.77]
Epoch 97/200 -- NLL: 86.36
100%|██████████| 430/430 [00:28<00:00, 15.42it/s, 0s=0.87, 1s=0.13, NLL=79.34]
0%| | 2/430 [00:00<00:28, 15.12it/s, 0s=0.87, 1s=0.13, NLL=82.33]
Epoch 98/200 -- NLL: 86.32
100%|██████████| 430/430 [00:28<00:00, 15.43it/s, 0s=0.87, 1s=0.13, NLL=80.63]
0%| | 2/430 [00:00<00:28, 15.25it/s, 0s=0.86, 1s=0.14, NLL=84.73]
Epoch 99/200 -- NLL: 86.29
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.87, 1s=0.13, NLL=83.12]
0%| | 2/430 [00:00<00:28, 15.00it/s, 0s=0.87, 1s=0.13, NLL=81.92]
Epoch 100/200 -- NLL: 86.23
100%|██████████| 430/430 [00:28<00:00, 15.40it/s, 0s=0.86, 1s=0.14, NLL=87.93]
0%| | 2/430 [00:00<00:28, 15.17it/s, 0s=0.88, 1s=0.12, NLL=82.57]
Epoch 101/200 -- NLL: 86.28
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.87, 1s=0.13, NLL=85.76]
0%| | 2/430 [00:00<00:28, 15.13it/s, 0s=0.87, 1s=0.13, NLL=81.11]
Epoch 102/200 -- NLL: 86.36
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.87, 1s=0.13, NLL=81.03]
0%| | 2/430 [00:00<00:28, 15.02it/s, 0s=0.87, 1s=0.13, NLL=82.56]
Epoch 103/200 -- NLL: 86.35
100%|██████████| 430/430 [00:28<00:00, 15.40it/s, 0s=0.87, 1s=0.13, NLL=79.41]
0%| | 2/430 [00:00<00:28, 15.12it/s, 0s=0.88, 1s=0.12, NLL=81.03]
Epoch 104/200 -- NLL: 86.24
100%|██████████| 430/430 [00:28<00:00, 15.40it/s, 0s=0.87, 1s=0.13, NLL=83.91]
0%| | 2/430 [00:00<00:28, 15.16it/s, 0s=0.87, 1s=0.13, NLL=84.81]
Epoch 105/200 -- NLL: 86.06
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.86, 1s=0.14, NLL=85.12]
0%| | 2/430 [00:00<00:28, 15.07it/s, 0s=0.87, 1s=0.13, NLL=83.63]
Epoch 106/200 -- NLL: 86.04
100%|██████████| 430/430 [00:28<00:00, 15.42it/s, 0s=0.87, 1s=0.13, NLL=80.11]
0%| | 2/430 [00:00<00:28, 15.06it/s, 0s=0.87, 1s=0.13, NLL=80.59]
Epoch 107/200 -- NLL: 85.99
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.87, 1s=0.13, NLL=81.93]
0%| | 2/430 [00:00<00:28, 15.08it/s, 0s=0.87, 1s=0.13, NLL=83.43]
Epoch 108/200 -- NLL: 86.03
100%|██████████| 430/430 [00:28<00:00, 15.40it/s, 0s=0.87, 1s=0.13, NLL=85.82]
0%| | 2/430 [00:00<00:28, 15.08it/s, 0s=0.87, 1s=0.13, NLL=80.85]
Epoch 109/200 -- NLL: 85.89
100%|██████████| 430/430 [00:28<00:00, 15.42it/s, 0s=0.87, 1s=0.13, NLL=83.12]
0%| | 2/430 [00:00<00:28, 15.09it/s, 0s=0.86, 1s=0.14, NLL=80.67]
Epoch 110/200 -- NLL: 85.90
100%|██████████| 430/430 [00:28<00:00, 15.36it/s, 0s=0.87, 1s=0.13, NLL=83.61]
0%| | 2/430 [00:00<00:28, 15.06it/s, 0s=0.87, 1s=0.13, NLL=82.41]
Epoch 111/200 -- NLL: 85.75
100%|██████████| 430/430 [00:28<00:00, 15.40it/s, 0s=0.87, 1s=0.13, NLL=85.15]
0%| | 2/430 [00:00<00:28, 15.10it/s, 0s=0.87, 1s=0.13, NLL=81.26]
Epoch 112/200 -- NLL: 85.85
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.88, 1s=0.12, NLL=79.55]
0%| | 2/430 [00:00<00:28, 15.13it/s, 0s=0.87, 1s=0.13, NLL=79.91]
Epoch 113/200 -- NLL: 85.83
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.87, 1s=0.13, NLL=80.87]
0%| | 2/430 [00:00<00:28, 15.08it/s, 0s=0.86, 1s=0.14, NLL=83.63]
Epoch 114/200 -- NLL: 85.73
100%|██████████| 430/430 [00:28<00:00, 15.42it/s, 0s=0.87, 1s=0.13, NLL=82.48]
0%| | 2/430 [00:00<00:28, 15.18it/s, 0s=0.87, 1s=0.13, NLL=83.82]
Epoch 115/200 -- NLL: 85.81
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.87, 1s=0.13, NLL=83.58]
0%| | 2/430 [00:00<00:28, 15.13it/s, 0s=0.87, 1s=0.13, NLL=83.58]
Epoch 116/200 -- NLL: 85.76
100%|██████████| 430/430 [00:28<00:00, 15.42it/s, 0s=0.88, 1s=0.12, NLL=84.03]
0%| | 2/430 [00:00<00:28, 15.07it/s, 0s=0.86, 1s=0.14, NLL=84.93]
Epoch 117/200 -- NLL: 85.81
100%|██████████| 430/430 [00:28<00:00, 15.42it/s, 0s=0.87, 1s=0.13, NLL=80.20]
0%| | 2/430 [00:00<00:28, 15.16it/s, 0s=0.86, 1s=0.14, NLL=82.02]
Epoch 118/200 -- NLL: 85.59
100%|██████████| 430/430 [00:28<00:00, 15.42it/s, 0s=0.87, 1s=0.13, NLL=79.74]
0%| | 2/430 [00:00<00:28, 15.08it/s, 0s=0.86, 1s=0.14, NLL=80.44]
Epoch 119/200 -- NLL: 85.61
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.87, 1s=0.13, NLL=82.00]
0%| | 2/430 [00:00<00:28, 14.96it/s, 0s=0.87, 1s=0.13, NLL=82.88]
Epoch 120/200 -- NLL: 85.71
100%|██████████| 430/430 [00:28<00:00, 15.42it/s, 0s=0.87, 1s=0.13, NLL=81.11]
0%| | 2/430 [00:00<00:28, 15.10it/s, 0s=0.87, 1s=0.13, NLL=81.17]
Epoch 121/200 -- NLL: 85.68
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.87, 1s=0.13, NLL=80.78]
0%| | 2/430 [00:00<00:28, 15.10it/s, 0s=0.87, 1s=0.13, NLL=85.49]
Epoch 122/200 -- NLL: 85.61
100%|██████████| 430/430 [00:28<00:00, 15.39it/s, 0s=0.87, 1s=0.13, NLL=81.06]
0%| | 2/430 [00:00<00:28, 15.18it/s, 0s=0.87, 1s=0.13, NLL=83.62]
Epoch 123/200 -- NLL: 85.61
100%|██████████| 430/430 [00:28<00:00, 15.33it/s, 0s=0.87, 1s=0.13, NLL=83.31]
0%| | 2/430 [00:00<00:28, 15.15it/s, 0s=0.87, 1s=0.13, NLL=80.56]
Epoch 124/200 -- NLL: 85.57
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.88, 1s=0.12, NLL=81.85]
0%| | 2/430 [00:00<00:28, 15.11it/s, 0s=0.87, 1s=0.13, NLL=81.25]
Epoch 125/200 -- NLL: 85.65
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.87, 1s=0.13, NLL=83.00]
0%| | 2/430 [00:00<00:28, 15.17it/s, 0s=0.87, 1s=0.13, NLL=82.55]
Epoch 126/200 -- NLL: 85.60
100%|██████████| 430/430 [00:28<00:00, 15.39it/s, 0s=0.86, 1s=0.14, NLL=82.55]
0%| | 2/430 [00:00<00:28, 15.11it/s, 0s=0.86, 1s=0.14, NLL=82.18]
Epoch 127/200 -- NLL: 85.47
100%|██████████| 430/430 [00:28<00:00, 15.40it/s, 0s=0.87, 1s=0.13, NLL=83.97]
0%| | 2/430 [00:00<00:28, 15.19it/s, 0s=0.87, 1s=0.13, NLL=80.45]
Epoch 128/200 -- NLL: 85.69
100%|██████████| 430/430 [00:28<00:00, 15.42it/s, 0s=0.87, 1s=0.13, NLL=78.86]
0%| | 2/430 [00:00<00:28, 15.19it/s, 0s=0.87, 1s=0.13, NLL=82.06]
Epoch 129/200 -- NLL: 85.70
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.87, 1s=0.13, NLL=83.12]
0%| | 2/430 [00:00<00:28, 15.11it/s, 0s=0.86, 1s=0.14, NLL=82.71]
Epoch 130/200 -- NLL: 85.51
100%|██████████| 430/430 [00:28<00:00, 15.40it/s, 0s=0.87, 1s=0.13, NLL=85.25]
0%| | 2/430 [00:00<00:28, 15.06it/s, 0s=0.87, 1s=0.13, NLL=80.89]
Epoch 131/200 -- NLL: 85.29
100%|██████████| 430/430 [00:28<00:00, 15.39it/s, 0s=0.87, 1s=0.13, NLL=79.68]
0%| | 2/430 [00:00<00:28, 15.12it/s, 0s=0.87, 1s=0.13, NLL=77.44]
Epoch 132/200 -- NLL: 85.40
100%|██████████| 430/430 [00:28<00:00, 15.43it/s, 0s=0.86, 1s=0.14, NLL=86.57]
0%| | 2/430 [00:00<00:28, 15.07it/s, 0s=0.87, 1s=0.13, NLL=81.14]
Epoch 133/200 -- NLL: 85.57
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.87, 1s=0.13, NLL=79.65]
0%| | 2/430 [00:00<00:28, 15.12it/s, 0s=0.87, 1s=0.13, NLL=79.15]
Epoch 134/200 -- NLL: 85.38
100%|██████████| 430/430 [00:28<00:00, 15.40it/s, 0s=0.87, 1s=0.13, NLL=79.83]
0%| | 2/430 [00:00<00:28, 15.04it/s, 0s=0.88, 1s=0.12, NLL=80.38]
Epoch 135/200 -- NLL: 85.30
100%|██████████| 430/430 [00:28<00:00, 15.40it/s, 0s=0.87, 1s=0.13, NLL=79.43]
0%| | 2/430 [00:00<00:28, 15.14it/s, 0s=0.87, 1s=0.13, NLL=79.21]
Epoch 136/200 -- NLL: 85.32
100%|██████████| 430/430 [00:28<00:00, 15.41it/s, 0s=0.87, 1s=0.13, NLL=80.84]
0%| | 2/430 [00:00<00:28, 15.16it/s, 0s=0.87, 1s=0.13, NLL=81.10]
Epoch 137/200 -- NLL: 85.53
100%|██████████| 430/430 [00:28<00:00, 15.42it/s, 0s=0.87, 1s=0.13, NLL=83.15]
0%| | 2/430 [00:00<00:28, 15.17it/s, 0s=0.87, 1s=0.13, NLL=81.54]
Epoch 138/200 -- NLL: 85.32
100%|██████████| 430/430 [00:28<00:00, 15.39it/s, 0s=0.88, 1s=0.12, NLL=75.25]
0%| | 2/430 [00:00<00:28, 15.06it/s, 0s=0.87, 1s=0.13, NLL=78.96]
Epoch 139/200 -- NLL: 85.56
100%|██████████| 430/430 [00:28<00:00, 15.45it/s, 0s=0.87, 1s=0.13, NLL=78.74]
0%| | 2/430 [00:00<00:28, 15.11it/s, 0s=0.87, 1s=0.13, NLL=81.66]
Epoch 140/200 -- NLL: 85.54
100%|██████████| 430/430 [00:28<00:00, 15.42it/s, 0s=0.87, 1s=0.13, NLL=80.93]
Early stopping at epoch 141/200
Let's visualise samples
```python
from matplotlib import pyplot as plt
def visualize_made(batcher: Batcher, args, model, N=4, writer=None, name='dev'):
assert N <= args.batch_size, "N should be no bigger than a batch"
with torch.no_grad():
model.eval()
plt.figure(figsize=(2*N, 2*N))
plt.subplots_adjust(wspace=0.5, hspace=0.5)
# Some visualisations
for x_mb, y_mb in batcher:
# [B, H*W]
x_mb = x_mb.reshape(-1, args.height * args.width)
x_mb = x_mb[:N]
# [B, 10]
context = None
p_x = model(inputs=None, history=x_mb, resample_mask=False)
# [B, H*W]
ll = p_x.log_prob(x_mb)
prob = torch.exp(ll)
# reconstruct bottom half of N instances
x_rec = model.sample(inputs=None, history=x_mb, start_from=args.height * args.width // 2)
# sample N instances
x_sample = model.sample(
inputs=None,
history=torch.zeros(
[N, args.height * args.width],
device=torch.device(args.device),
dtype=torch.float32
),
start_from=0
)
for i in range(N):
plt.subplot(4, N, 0*N + i + 1)
plt.imshow(x_mb[i].reshape(args.height, args.width).cpu(), cmap='Greys')
plt.title("x%d" % (i + 1))
plt.subplot(4, N, 1*N + i + 1)
plt.imshow(prob[i].reshape(args.height, args.width).cpu(), cmap='Greys')
plt.title("prob%d" % (i + 1))
plt.subplot(4, N, 2*N + i + 1)
plt.axhline(y=args.height//2, c='red', linewidth=1, ls='--')
plt.imshow(x_rec[i].reshape(args.height, args.width).cpu(), cmap='Greys')
plt.title("rec%d" % (i + 1))
plt.subplot(4, N, 3*N + i + 1)
plt.imshow(x_sample[i].reshape(args.height, args.width).cpu(), cmap='Greys')
plt.title("sample%d" % (i + 1))
break
plt.show()
```
A few reconstructions for the validation set as well as samples from the autoregressive likelihood
```python
visualize_made(get_batcher(valid_loader), args, model)
```
```python
```
| 3a7f9137bfab8bb29cb81816dd18393fa87f3997 | 93,893 | ipynb | Jupyter Notebook | examples/mnist/MADE demo.ipynb | probabll/dgm.pt | 95b5b1eb798b87c3d621e7416cc1c423c076c865 | [
"MIT"
]
| 1 | 2021-02-16T12:56:52.000Z | 2021-02-16T12:56:52.000Z | examples/mnist/MADE demo.ipynb | probabll/dgm.pt | 95b5b1eb798b87c3d621e7416cc1c423c076c865 | [
"MIT"
]
| 1 | 2020-03-20T08:44:21.000Z | 2020-03-20T08:44:21.000Z | examples/mnist/MADE demo.ipynb | probabll/dgm.pt | 95b5b1eb798b87c3d621e7416cc1c423c076c865 | [
"MIT"
]
| 1 | 2020-04-16T19:22:22.000Z | 2020-04-16T19:22:22.000Z | 35.337975 | 23,452 | 0.556836 | true | 19,121 | Qwen/Qwen-72B | 1. YES
2. YES | 0.682574 | 0.752013 | 0.513304 | __label__yue_Hant | 0.17198 | 0.030906 |
# <p style="text-align: center;"> Variational Linear Systems: Simple Example </p>
<p style="text-align: center;"> Ryan LaRose </p>
This notebook briefly demonstrates the current state of the Variational Linear Systems (VLS) code. All code is contained in `vls_pauli.py`, which defines a `PauliSystem` class.
```python
# =======
# imports
# =======
import time
import numpy as np
from vls_pauli import PauliSystem
from cirq import ParamResolver, Symbol, ops, Circuit, LineQubit
```
# Creating a Linear System of Equations
A `PauliSystem` consists of a matrix of the form
\begin{equation}
A = \sum_{k = 1}^{K} c_k \sigma_k
\end{equation}
where $c_k$ are complex coefficients and $\sigma_k$ are strings of Pauli operators. In code, we represent the matrix $A$ as arrays of strings corresponding to Pauli operators. For example, to represent the Pauli operators
\begin{align}
\sigma_1 &= \sigma_I \otimes \sigma_X \otimes \sigma_Y \otimes \sigma_Z
\end{align}
we would write:
```python
# specify the pauli operators of the matrix
Amat_ops = np.array([["I", "X", "Y", "Z"]])
```
To store more terms, we simply append more lists of Pauli operators (string keys) to the operator matrix above. Coefficients $c_k$ are stored similarly as arrays of complex values:
```python
# specify the coefficients of each term in the matrix
Amat_coeffs = np.array([1-0j])
```
Finally, the solution vector
\begin{equation}
|b\rangle = U |0\rangle
\end{equation}
is represented by the unitary $U$ that (efficiently) prepares $|b\rangle$ from the ground state. For example, the unitary $U$ could be
\begin{equation}
U = \sigma_I \otimes \sigma_X \otimes \sigma_Y \otimes \sigma_Z,
\end{equation}
which we would represent in code as:
```python
# specify the unitary that prepares the solution vector b
Umat_ops = np.array(["I", "X", "Y", "Z"])
```
To create a `PauliSystem`, we can then simply feed in `Amat_coeffs`, `Amat_ops`, and `Umat_ops`.
```python
# create a linear system of equations
system = PauliSystem(Amat_coeffs, Amat_ops, Umat_ops)
```
# Working with a `PauliSystem`
The `PauliSystem` class can tell basic information about the system:
```python
print("Number of qubits in system:", system.num_qubits())
print("Size of matrix:", system.size())
```
Number of qubits in system: 4
Size of matrix: (16, 16)
To see the actual matrix representation of the system (in the computational basis), we can do:
```python
# get the matrix of the system
matrix = system.matrix()
print(matrix)
```
[[0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.-1.j 0.+0.j 0.+0.j 0.+0.j
0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]
[0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+1.j 0.+0.j 0.+0.j
0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]
[0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+1.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j
0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]
[0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.-1.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j
0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]
[0.+0.j 0.+0.j 0.-1.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j
0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]
[0.+0.j 0.+0.j 0.+0.j 0.+1.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j
0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]
[0.+1.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j
0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]
[0.+0.j 0.-1.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j
0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]
[0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j
0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.-1.j 0.+0.j]
[0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j
0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+1.j]
[0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j
0.+0.j 0.+0.j 0.+1.j 0.+0.j 0.+0.j 0.+0.j]
[0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j
0.+0.j 0.+0.j 0.+0.j 0.-1.j 0.+0.j 0.+0.j]
[0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j
0.-1.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]
[0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j
0.+0.j 0.+1.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]
[0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+1.j 0.+0.j
0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]
[0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.-1.j
0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]]
We can also see the solution vector $|b\rangle$ by doing:
```python
b = system.vector()
print(b)
```
[0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+1.j 0.+0.j 0.+0.j 0.+0.j
0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]
# Creating an Ansatz
Initially, the `PauliSystem` ansatz for $V$ is an empty circuit:
```python
# print out the initial (empty) ansatz
system.ansatz
```
<pre style="overflow: auto; white-space: pre;"></pre>
We are free to pick whatever ansatz we wish. Here, we will start with the two-qubit alternating ansatz and simplify it to single-qubit rotations. The two-qubit alternating ansatz is built into the `PauliSystem` class and can easily be created by doing:
```python
system.make_ansatz_circuit()
system.ansatz
```
<pre style="overflow: auto; white-space: pre;">1: ───X^Symbol("0")────Y^Symbol("1")────Z^Symbol("2")────@───X^Symbol("6")────Y^Symbol("7")────Z^Symbol("8")────@───X^Symbol("39")───Y^Symbol("40")───Z^Symbol("41")───────X───X^Symbol("45")───Y^Symbol("46")───Z^Symbol("47")───────X───
│ │ │ │
2: ───X^Symbol("3")────Y^Symbol("4")────Z^Symbol("5")────X───X^Symbol("9")────Y^Symbol("10")───Z^Symbol("11")───X───X^Symbol("24")───Y^Symbol("25")───Z^Symbol("26")───@───┼───X^Symbol("30")───Y^Symbol("31")───Z^Symbol("32")───@───┼───
│ │ │ │
3: ───X^Symbol("12")───Y^Symbol("13")───Z^Symbol("14")───@───X^Symbol("18")───Y^Symbol("19")───Z^Symbol("20")───@───X^Symbol("27")───Y^Symbol("28")───Z^Symbol("29")───X───┼───X^Symbol("33")───Y^Symbol("34")───Z^Symbol("35")───X───┼───
│ │ │ │
4: ───X^Symbol("15")───Y^Symbol("16")───Z^Symbol("17")───X───X^Symbol("21")───Y^Symbol("22")───Z^Symbol("23")───X───X^Symbol("36")───Y^Symbol("37")───Z^Symbol("38")───────@───X^Symbol("42")───Y^Symbol("43")───Z^Symbol("44")───────@───</pre>
This circuit contains 48 parameters (4 qubits x 2 "gates" / qubit x 6 parameters / gate). (Note that printing the circuit gets cut off in the notebook, scroll side to side to see the entire circuit.) For our simple example, we will chop off some of the gates to make the optimization easier:
```python
# remove some of the gates and print it out
system.ansatz = system.ansatz[:-13]
system.ansatz
```
<pre style="overflow: auto; white-space: pre;">1: ───X^Symbol("0")────Y^Symbol("1")────Z^Symbol("2")────
2: ───X^Symbol("3")────Y^Symbol("4")────Z^Symbol("5")────
3: ───X^Symbol("12")───Y^Symbol("13")───Z^Symbol("14")───
4: ───X^Symbol("15")───Y^Symbol("16")───Z^Symbol("17")───</pre>
# Computing the Cost
The local cost function is computed via the Hadamard Test. The local cost function can be written
\begin{equation}
C_1 = 1 - \frac{1}{n} \sum_{k = 1}^{K} \sum_{l \geq k}^{K} \frac{w_{k, l} c_k c_l^*}{\langle 0 | V^\dagger A_k^\dagger A_l V | 0 \rangle} \sum_{j = 1}^{n} \text{Re} \, \langle V_{k, l}^{(j)} \rangle
\end{equation}
where
\begin{equation}
\langle V_{k, l}^{(j)} \rangle := \langle0^{\otimes n}| V^\dagger A_k^\dagger U P_j U^\dagger A_l V |0^{\otimes n}\rangle
\end{equation}
and
\begin{equation}
w_{k, l} = \begin{cases}
1 \qquad \text{if } k = l\\
2 \qquad \text{otherwise}
\end{cases} .
\end{equation}
Thus we have $(n + 1) K^2$ different circuits to run in order to compute the cost. For this simple example, $n = 4$ and $K = 1$, so we only have five circuits to run. The circuit for computing $\langle V_{1, 1}^{(1)} \rangle$ is shown below:
```python
system.make_hadamard_test_circuit(system.ops[0], system.ops[0], 0, "real")
```
<pre style="overflow: auto; white-space: pre;">0: ───H──────────────────────────────────────────────────────@───@───@───@───────@───@───@───H───M('z')───
│ │ │ │ │ │ │
1: ───────X^Symbol("0")────Y^Symbol("1")────Z^Symbol("2")────┼───┼───┼───@───────┼───┼───┼────────────────
│ │ │ │ │ │
2: ───────X^Symbol("3")────Y^Symbol("4")────Z^Symbol("5")────X───┼───┼───X───X───X───┼───┼────────────────
│ │ │ │
3: ───────X^Symbol("12")───Y^Symbol("13")───Z^Symbol("14")───────Y───┼───Y───Y───────Y───┼────────────────
│ │
4: ───────X^Symbol("15")───Y^Symbol("16")───Z^Symbol("17")───────────Z───Z───Z───────────Z────────────────</pre>
The circuit for computing the norm
\begin{equation}
\langle 0 | V^\dagger A_k^\dagger A_l V | 0 \rangle = \langle \psi | A_k^\dagger A_l | \psi \rangle
\end{equation}
for the example $k = 0$, $l = 0$ is shown below:
```python
system.make_norm_circuit(system.ops[0], system.ops[0], "real")
```
<pre style="overflow: auto; white-space: pre;">0: ───H──────────────────────────────────────────────────────@───@───@───@───@───@───H───M('z')───
│ │ │ │ │ │
1: ───────X^Symbol("0")────Y^Symbol("1")────Z^Symbol("2")────┼───┼───┼───┼───┼───┼────────────────
│ │ │ │ │ │
2: ───────X^Symbol("3")────Y^Symbol("4")────Z^Symbol("5")────X───┼───┼───X───┼───┼────────────────
│ │ │ │
3: ───────X^Symbol("12")───Y^Symbol("13")───Z^Symbol("14")───────Y───┼───────Y───┼────────────────
│ │
4: ───────X^Symbol("15")───Y^Symbol("16")───Z^Symbol("17")───────────Z───────────Z────────────────</pre>
To compute the cost, we can call `PauliSystem.cost` or `PauliSystem.eff_cost` (the latter exploits symmetries to compute the cost more efficiently) and pass in a set of angles to the ansatz gates:
```python
# =======================================
# compute the cost for some set of angles
# =======================================
# normalize the coefficients
system.normalize_coeffs()
# get some angles
angles = np.random.randn(18)
# compute the cost and time it
start = time.time()
cost = system.eff_cost(angles)
end = time.time() - start
# print out the results
print("Local cost C_1 =", cost)
print("Time to compute cost =", end, "seconds")
```
0.93245
Local cost C_1 = 0.93245
Time to compute cost = 0.28073954582214355 seconds
# Solving the System
To solve the system, we minimize the cost function. We'll do this below with the COBYLA optimization algorithm.
```python
# ===============================================
# minimize the cost (prints each cost evaluation)
# ===============================================
start = time.time()
out = system.solve(x0=angles, opt_method="COBYLA")
end = time.time() - start
```
0.9289499999999999
0.5724
0.9228500000000001
0.5689500000000001
0.60015
0.59275
0.5699000000000001
0.5736
0.5724
0.5871
0.57345
0.58565
0.56935
0.8321000000000001
0.8319
0.5731999999999999
0.78175
0.7933
0.5726
1.1881
0.90985
0.7093499999999999
0.68825
0.5679
0.486
0.46304999999999996
0.5261
0.4115
0.40464999999999995
0.40035
0.32825000000000004
0.21225000000000005
0.17710000000000004
0.1372
0.1362
0.12850000000000006
0.1290499999999999
0.14834999999999998
0.13065000000000004
0.1499999999999999
0.13560000000000005
0.15510000000000002
0.13044999999999995
0.15080000000000005
0.18009999999999993
0.1735
0.07740000000000002
0.08599999999999997
0.11825000000000008
0.14174999999999993
0.07645000000000002
0.06230000000000002
0.06555
0.052000000000000046
0.05315000000000003
0.11620000000000008
0.05710000000000004
0.054750000000000076
0.05630000000000002
0.05095000000000005
0.0616000000000001
0.05689999999999995
0.05625000000000002
0.041749999999999954
0.06655
0.03964999999999996
0.042200000000000015
0.04239999999999999
0.04544999999999999
0.06269999999999998
0.020950000000000024
0.02429999999999999
0.014599999999999946
0.01330000000000009
0.01905000000000001
0.012249999999999983
0.023499999999999965
0.013699999999999934
0.011299999999999977
0.024249999999999994
0.012049999999999894
0.026499999999999968
0.010050000000000114
0.011050000000000004
0.00824999999999998
0.00934999999999997
0.009550000000000058
0.02639999999999998
0.00924999999999998
0.016650000000000054
0.012899999999999912
0.020100000000000007
0.00824999999999998
0.011050000000000004
0.007800000000000029
0.00924999999999998
0.012199999999999989
0.009449999999999958
0.013100000000000112
0.009299999999999975
0.009700000000000042
0.008249999999999869
0.011150000000000104
0.011249999999999982
0.006050000000000111
0.00934999999999997
0.0034999999999999476
0.0033999999999999586
0.0034999999999999476
0.003549999999999942
0.0040999999999999925
0.0048000000000000265
0.0024999999999999467
0.00495000000000001
0.0021500000000000963
0.0041999999999999815
0.0026999999999999247
0.0028000000000000247
0.0020499999999999963
0.0024999999999999467
0.0019499999999998963
0.004450000000000065
0.003450000000000064
0.0014499999999999513
0.0017000000000000348
0.0005500000000000504
0.0007499999999999174
0.0011999999999999789
0.0012499999999999734
0.0004999999999999449
0.0010000000000000009
0.0009000000000000119
0.0017000000000000348
0.0008000000000000229
0.0009000000000000119
0.0008500000000000174
0.0007499999999999174
0.0006500000000000394
0.0007500000000000284
0.0007500000000000284
0.0011499999999999844
0.0005499999999999394
0.0005499999999999394
0.00034999999999996145
0.0005499999999999394
0.0004999999999999449
0.0006500000000000394
0.00019999999999997797
0.0004999999999999449
0.00029999999999996696
0.00044999999999995044
0.00034999999999996145
0.0004999999999999449
0.00029999999999996696
0.0007000000000000339
0.00029999999999996696
0.00039999999999995595
0.0004999999999999449
0.0003500000000000725
0.00039999999999995595
0.00034999999999996145
0.0006500000000000394
0.0005500000000000504
0.00039999999999995595
0.00034999999999996145
0.0005499999999999394
0.0006000000000000449
0.00039999999999995595
0.00024999999999997247
0.00039999999999995595
0.00014999999999998348
0.00029999999999996696
0.00029999999999996696
0.00044999999999995044
0.0006000000000000449
0.0006000000000000449
0.0005500000000000504
0.00044999999999995044
0.00019999999999997797
0.00024999999999997247
0.00019999999999997797
0.00039999999999995595
0.00029999999999996696
0.00039999999999995595
0.00024999999999997247
0.0005999999999999339
0.0005499999999999394
0.0004999999999999449
0.00039999999999995595
0.00044999999999995044
0.00034999999999996145
0.00044999999999995044
0.0006000000000000449
0.00034999999999996145
0.00034999999999996145
0.00029999999999996696
0.0006499999999999284
0.00014999999999998348
0.00039999999999995595
0.00019999999999997797
0.00029999999999996696
0.00034999999999996145
0.0005499999999999394
0.00034999999999996145
0.00034999999999996145
0.0005999999999999339
0.0007000000000000339
0.00019999999999997797
0.0003500000000000725
0.00039999999999995595
0.0006000000000000449
0.0004999999999999449
0.00029999999999996696
0.00029999999999996696
0.00040000000000006697
0.0006000000000000449
0.00039999999999995595
0.00034999999999996145
0.00024999999999997247
0.00039999999999995595
0.00034999999999996145
0.00024999999999997247
0.00024999999999997247
0.00014999999999998348
0.00019999999999997797
0.00029999999999996696
0.0007000000000000339
```python
print("It took {} minute(s) to solve the system.".format(round(end / 60)))
#print("Number of iterations of optimization method:", out["nit"])
print("Number of function evaluations:", out["nfev"])
```
It took 1 minute(s) to solve the system.
Number of function evaluations: 227
# Comparing the Estimated and Exact Solutions
Below we print out the cost at the optimal angles found for the ansatz and print out the ansatz circuit with the optimal angles.
```python
# get the optimal angles
opt_angles = out["x"]
# evaluate the cost at the optimal angles found
system.eff_cost(opt_angles)
# get a param resolver
param_resolver = ParamResolver(
{str(ii) : opt_angles[ii] for ii in range(len(opt_angles))}
)
sol_circ = system.ansatz.with_parameters_resolved_by(param_resolver)
sol_circ
```
0.00040000000000006697
<pre style="overflow: auto; white-space: pre;">1: ───X^-0.994─────Y^-0.991─────Z^0.915────
2: ───X^-0.00312───Y^0.0113─────Z^-0.588───
3: ───X^0.0075─────Y^0.000161───Z^0.944────
4: ───X^0.00105────Y^-0.00422───Z^-0.38────</pre>
Next we convert the circuit to a unitary matrix and take its first column to compare our computed solution against the exact solution.
```python
# get the approximate solution and compute "bhat"
xhat = sol_circ.to_unitary_matrix()[:, 0]
bhat = np.dot(matrix, xhat)
# print out the overlap between "bhat" and the actual solution vector b
print("overlap of computed and exact solution =", np.dot(b.conj().T, bhat))
```
overlap of computed and exact solution = (0.04257143659529625-0.9986949908283624j)
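The overlap above is a complex number whose magnitude is close to one, i.e. the computed solution matches the exact one up to a global phase. A quick extra check (not in the original notebook) of the magnitude:
```python
# Magnitude of the overlap; a value near 1 means |xhat> solves the system up to a global phase.
print("|<b|bhat>| =", np.abs(np.dot(b.conj().T, bhat)))
```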
```python
# make sure both vectors are normalized
print("<bhat|bhat> =", np.dot(bhat.conj().T, bhat))
print("<b|b> =", np.dot(b.conj().T, b))
```
<bhat|bhat> = (1.0000000000000004+0j)
<b|b> = (1+0j)
# Future Work
* Better optimization methods.
* Optimize over a subset of the parameters at a time, then loop through (and reoptimize).
* Add random gates using simulated annealing.
* Compute all $nK^2$ circuits in parallel.
* Compute expectations of local observables at each cost iteration.
* Allow for arbitrary unitaries (not just Paulis)
| f247cf3a1f3b3e5c68b182e2651008b354bfc8fd | 34,030 | ipynb | Jupyter Notebook | pauli/vls_example.ipynb | rmlarose/vls | 95aeed8c96afdc073aa828ec331b6d9988ed2980 | [
"Apache-2.0"
]
| null | null | null | pauli/vls_example.ipynb | rmlarose/vls | 95aeed8c96afdc073aa828ec331b6d9988ed2980 | [
"Apache-2.0"
]
| null | null | null | pauli/vls_example.ipynb | rmlarose/vls | 95aeed8c96afdc073aa828ec331b6d9988ed2980 | [
"Apache-2.0"
]
| null | null | null | 34.974306 | 319 | 0.451161 | true | 7,506 | Qwen/Qwen-72B | 1. YES
2. YES | 0.752013 | 0.712232 | 0.535608 | __label__eng_Latn | 0.18541 | 0.082725 |
```python
%pylab inline
```
Populating the interactive namespace from numpy and matplotlib
```python
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
from numpy import *
from matplotlib.patches import FancyArrowPatch
from mpl_toolkits.mplot3d import proj3d
font = {'size': 14}
matplotlib.rc('font', **font)
```
```python
class Arrow3D(FancyArrowPatch):
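    """FancyArrowPatch that projects its 3D endpoints with proj3d at draw time, so it can be added to an Axes3D via add_artist."""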
def __init__(self, xs, ys, zs, *args, **kwargs):
FancyArrowPatch.__init__(self, (0,0), (0,0), *args, **kwargs)
self._verts3d = xs, ys, zs
def draw(self, renderer):
xs3d, ys3d, zs3d = self._verts3d
xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)
self.set_positions((xs[0],ys[0]),(xs[1],ys[1]))
FancyArrowPatch.draw(self, renderer)
```
```python
from sympy import symbols, Matrix
from sympy.parsing.sympy_parser import parse_expr
```
```python
vec2 = parse_expr('Matrix([1, 1])')
vec3 = parse_expr('Matrix([1, 1, 1])')
vec2, vec3
```
(Matrix([
[1],
[1]]), Matrix([
[1],
[1],
[1]]))
```python
V = np.array([vec2[0].evalf(), vec2[1].evalf()]).astype('float')
V
```
array([1., 1.])
```python
U = np.array([vec3[0].evalf(), vec3[1].evalf(), vec3[2].evalf()]).astype('float')
U
```
array([1., 1., 1.])
```python
V[0]
```
array([3.])
```python
vec2 = parse_expr('Matrix([3,2])')
V = np.array(vec2).astype('float')
if V.shape == (2, 1): V = np.array([V])
fig = plt.figure(figsize=(8,8))
plt.xlim([-2,2])
plt.ylim([-2,2])
plt.axis('equal')
X = [np.cos(x) for x in np.linspace(0, 2*np.pi, 64)]
Y = [np.sin(y) for y in np.linspace(0, 2*np.pi, 64)]
plot(X,Y, c='darkgray', lw=3)
for i in range(V.shape[0]):
x = V[i][0]; y = V[i][1]
magn = np.sqrt(x**2 + y**2)
annotate("", xy=(x/magn, y/magn), xytext=(0, 0),arrowprops=dict(arrowstyle="->", color="purple", lw=4))
axhline(0, ls='dashed', alpha=0.33, c='gray', lw=3)
axvline(0, ls='dashed', alpha=0.33, c='gray', lw=3)
plt.axis('off')
fig.tight_layout()
```
```python
vec3 = parse_expr('Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])')
U = np.array(vec3).astype('float')
U.shape
```
(3, 3)
```python
U
```
array([1., 0., 0.])
```python
vec3 = parse_expr('Matrix([0, 0, 1])')
U = np.array(vec3).astype('float')
if U.shape == (3, 1): U = np.array([U])
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d')
plt.xlim([-2,2])
plt.ylim([-2,2])
plt.axis('equal')
X = np.array([np.cos(x) for x in np.linspace(0, 2*np.pi, 64)])
Y = np.array([np.sin(y) for y in np.linspace(0, 2*np.pi, 64)])
zeros = np.zeros(X.shape)
plot(X,zeros,zeros,c='darkgray', lw=2, alpha=0.33, ls='dashed')
plot(zeros,X,zeros,c='darkgray', lw=2, alpha=0.33, ls='dashed')
#annotate("", xy=(V[0]/magn, V[1]/magn), xytext=(0, 0),arrowprops=dict(arrowstyle="->", color="purple", lw=3))
plot(zeros,zeros,X,c='darkgray', lw=2, alpha=0.33, ls='dashed')
for i in range(U.shape[0]):
x = U[i][0]; y = U[i][1]; z = U[i][2]
magn = np.sqrt(x**2+y**2+z**2)
ax.add_artist(Arrow3D([0, x], [0, y], [0,z], mutation_scale=15, lw=3, arrowstyle="-|>", color="purple"))
#ax.add_artist(Arrow3D([0, 0], [0, -1], [0,1], mutation_scale=15, lw=3, arrowstyle="-|>", color="purple"))
plot(X,Y,zeros,c='darkgray', lw=3, alpha=0.33)
plot(zeros,X,Y,c='darkgray', lw=3, alpha=0.33)
plt.axis('off')
fig.tight_layout()
```
```python
```
| 2175623a8fefa05d795a6dcedb895c64da7846ea | 60,028 | ipynb | Jupyter Notebook | Calcupy/vector plot 2d and 3d.ipynb | darkeclipz/jupyter-notebooks | 5de784244ad9db12cfacbbec3053b11f10456d7e | [
"Unlicense"
]
| 1 | 2018-08-28T12:16:12.000Z | 2018-08-28T12:16:12.000Z | Calcupy/vector plot 2d and 3d.ipynb | darkeclipz/jupyter-notebooks | 5de784244ad9db12cfacbbec3053b11f10456d7e | [
"Unlicense"
]
| null | null | null | Calcupy/vector plot 2d and 3d.ipynb | darkeclipz/jupyter-notebooks | 5de784244ad9db12cfacbbec3053b11f10456d7e | [
"Unlicense"
]
| null | null | null | 186.42236 | 31,316 | 0.899414 | true | 1,261 | Qwen/Qwen-72B | 1. YES
2. YES | 0.855851 | 0.785309 | 0.672107 | __label__eng_Latn | 0.1933 | 0.399861 |
# Profiling and Optimising
IPython provides some tools for making it a bit easier to profile and optimise your code.
```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
```
```python
try:
import seaborn as sns
except ImportError:
print("That's okay")
```
## `%timeit`
The main IPython tool we are going to use here is `%timeit`,
a magic that automates measuring how long it takes to run a snippet of code.
```python
for N in (100, 500, 1000, 2000):
print("Size: {0} x {0}".format(N))
A = np.random.random((N, N))
%timeit A.dot(A)
```
Let's look at what options `%timeit` can take.
```python
%timeit?
```
We can save the result in an object with `%timeit -o`,
and tell it to run the statement just once per repeat, for 100 repeats.
```python
A = np.random.random((100, 100))
tr = %timeit -o -n 1 -r 100 A.dot(A)
```
```python
tr.best
```
```python
tr.best, tr.worst
```
```python
tr.all_runs
```
```python
plt.hist(np.array(tr.all_runs) * 1e6)
plt.xlabel("t (µs)")
```
## Diffusing a wave
Our task is to optimise a 1-D diffusion algorithm,
using numpy and Cython.
Our input signal is a sawtooth wave:
$$
x_\mathrm{sawtooth}(t) = \frac{A}{2}-\frac {A}{\pi}\sum_{k=1}^{\infty}\frac {\sin (2\pi kft)}{k}
$$
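As a quick sanity check (not in the original notebook; the names `A`, `f`, `tt`, and `series` below are just for illustration), truncating this series with $A = 2$ and $f = 1/(2\pi)$ should reproduce scipy's `sawtooth` (which has period $2\pi$ and ranges over $[-1, 1]$) once the constant offset $A/2$ is removed:
```python
# Sketch: compare a truncated version of the Fourier series against scipy's sawtooth.
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import sawtooth

A, f = 2, 1 / (2 * np.pi)               # amplitude and frequency matching sawtooth's 2*pi period
tt = np.linspace(0, 4 * np.pi, 512)
series = A / 2 - (A / np.pi) * sum(np.sin(2 * np.pi * k * f * tt) / k for k in range(1, 200))
plt.plot(tt, sawtooth(tt), label='scipy sawtooth')
plt.plot(tt, series - A / 2, '--', label='truncated series, offset removed')
plt.legend()
```
Away from the jumps the two curves coincide; the truncated series shows the usual Gibbs ringing at the discontinuities. The working signal that we actually diffuse is generated directly with scipy below.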
```python
from scipy.signal import sawtooth
T = 8 * np.pi
t = np.linspace(0, T, 512)
x = sawtooth(t)
plt.plot(t, x)
steps = 2048
```
We are going to diffuse the wave by evolving the heat equation:
$$
\frac{\delta x}{\delta t} = \alpha \frac{\delta^2 x}{\delta t^2}
$$
Which we can discretize for our arrays:
\begin{align}
x_{k}[i] =& \frac{1}{4} \left(
x_{k-1}[i-1] +
2 x_{k-1}[i] +
x_{k-1}[i+1]
\right) \\
x_{k}[0] =& x_{0}[0] \\
x_{k}[N] =& x_{0}[N] \\
\end{align}
## Pure Python
We'll start with a pure Python implementation,
to use as a reference.
```python
def blur_py(x, steps=1024):
x = 1 * x # copy
y = np.empty_like(x)
y[0] = x[0]
y[-1] = x[-1]
for _ in range(steps):
for i in range(1, len(x)-1):
            y[i] = .25 * ( x[i-1] + 2 * x[i] + x[i+1] )
x, y = y, x # swap for next step
return x
```
```python
y = blur_py(x, steps)
plt.plot(t, x, '--')
plt.plot(t, y)
```
Now we can measure how long it takes to evolve this system:
```python
ref_run = %timeit -o y = blur_py(x, steps)
t_ref = ref_run.best
times = [t_ref]
labels = ['python']
```
So it takes about one second.
We can also see how the runtime changes with different numbers of steps and signal resolutions.
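For example, a rough scan over a few signal lengths and step counts (a sketch, not part of the original timings; `t_N`, `x_N`, and `n_steps` are just illustrative names) shows the cost growing roughly linearly in both:
```python
# Sketch: blur_py is O(N * steps), so the runtime should scale roughly
# linearly with both the resolution N and the number of steps.
for N in (128, 256, 512):
    t_N = np.linspace(0, T, N)
    x_N = sawtooth(t_N)
    for n_steps in (256, 1024):
        print("N = {}, steps = {}".format(N, n_steps))
        %timeit -n 1 -r 3 blur_py(x_N, n_steps)
```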
## Vectorizing with numpy
We can vectorize the inner loop with a single numpy operation:
```python
import numpy as np
def blur_np(x, steps=1024):
x = 1 * x
y = np.empty_like(x)
y[0] = x[0]
y[-1] = x[-1]
for _ in range(steps):
y[1:-1] = .25 * (x[:-2] + 2 * x[1:-1] + x[2:])
x, y = y, x
return x
```
```python
y = blur_np(x, steps)
plt.plot(t, x, '--')
plt.plot(t, y)
```
```python
np_r = %timeit -o blur_np(x, steps)
t_np = np_r.best
```
```python
times.append(t_np)
labels.append('numpy')
```
```python
def plot_times():
ind = np.arange(len(times))
plt.bar(ind, times, log=True)
plt.xticks(ind + 0.3, labels, rotation=30)
plt.ylim(.1 * min(times), times[0])
plot_times()
```
So vectorizing the inner loop brings us from 1 second to 25 milliseconds,
an improvement of 40x:
```python
t_ref / t_np
```
# Cython
[Cython](http://cython.org/) provides an IPython extension,
which defines a magic we can use to inline bits of Cython code in the notebook:
```python
%load_ext Cython
```
```cython
%%cython
def csum(n):
cs = 0
for i in range(n):
cs += i
return cs
```
```python
%timeit csum(5)
```
`%%cython -a` shows you annotations about the generated source code.
The key to writing Cython is to minimize the amount of Python calls in the generated code. In general: yellow = slow.
```python
def psum(n):
cs = 0
for i in range(n):
cs += i
return cs
```
```cython
%%cython -a
def csum(n):
cs = 0
for i in range(n):
cs += i
return cs
```
Uh oh, that looks like a lot of yellow.
We can reduce it by adding some type annotations:
```cython
%%cython -a
def csum2(int n):
cdef int i
cs = 0
for i in range(n):
cs += i
return cs
```
Almost there, but I still see yellow on the lines with `cs`:
```cython
%%cython -a
def csum3(int n):
cdef int i
cdef int cs = 0
for i in range(n):
cs += i
return cs
```
Much better!
Now there's only Python when entering and leaving the function,
which is about as good as we can do.
```python
N = 1000000
%timeit psum (N)
%timeit csum (N)
%timeit csum2(N)
%timeit csum3(N)
```
## Blurring with Cython
Now we can apply the same principles to writing a blur
in Cython.
```cython
%%cython -a
import numpy as np
def blur_cython(x, steps=1024):
x = 1 * x # copy
y = np.empty_like(x)
y[0] = x[0]
y[-1] = x[-1]
for _ in range(steps):
for i in range(1, len(x)-1):
y[i] = .25 * ( x[i-1] + 2 * x[i] + x[i+1] )
x, y = y, x # swap for next step
return x
```
```python
c1 = %timeit -o y = blur_cython(x, steps)
t_c1 = c1.best
times.append(t_c1)
labels.append("cython (no hints)")
```
```python
plot_times()
```
Without annotations, we don't get much improvement over the pure Python version.
We can annotate the types of the input arguments to get some improvement:
```cython
%%cython -a
import numpy as np
cimport numpy as np
def blur_cython2(x, int steps=1024):
x = 1 * x # copy
y = np.empty_like(x)
y[0] = x[0]
y[-1] = x[-1]
cdef int i, N = len(x)
for _ in range(steps):
for i in range(1, N-1):
y[i] = .25 * ( x[i-1] + 2 * x[i] + x[i+1] )
x, y = y, x # swap for next step
return x
```
```python
c2 = %timeit -o blur_cython2(x, steps)
t_c2 = c2.best
times.append(t_c2)
labels.append("cython (loops)")
plot_times()
```
Just by making sure the iteration variables are defined as integers, we can save about 25% of the time.
The biggest key to optimizing with Cython is getting that yellow out of your loops.
The more deeply nested a bit of code is within a loop,
the more often it is called, and the more value you can get out of making it fast.
In Cython, fast means avoiding Python (getting rid of yellow).
To get rid of Python calls, we need to tell Cython about the types of the numpy arrays `x` and `y`:
```cython
%%cython -a
import numpy as np
cimport numpy as np
def blur_cython_typed(np.ndarray[double, ndim=1] x_, int steps=1024):
# x = 1 * x # copy
cdef size_t i, N = x_.shape[0]
cdef np.ndarray[double, ndim=1] x
cdef np.ndarray[double, ndim=1] y
x = 1 * x_
y = np.empty_like(x_)
y[0] = x[0]
y[-1] = x[-1]
for _ in range(steps):
for i in range(1, N-1):
            y[i] = .25 * ( x[i-1] + 2 * x[i] + x[i+1] )
x, y = y, x # swap for next step
return x
```
```python
ct = %timeit -o y = blur_cython_typed(x, steps)
t_ct = ct.best
times.append(t_ct)
labels.append("cython (types)")
plot_times()
```
We can further optimize with Cython macros,
which disable bounds checking and negative indexing,
and avoid the Python variable swapping by using indices into a single array:
```cython
%%cython -a
#cython: boundscheck=False
#cython: wraparound=False
import numpy as np
cimport numpy as np
def blur_cython_optimized(np.ndarray[double, ndim=1] x, int steps=1024):
cdef size_t N = x.shape[0]
cdef np.ndarray[double, ndim=2] y
y = np.empty((2, N), dtype=np.float64)
y[0,:] = x
y[1,0] = x[0]
y[1,N-1] = x[N-1]
cdef size_t _, i, j=0, k=1
for _ in range(steps):
j = _ % 2
k = 1 - j
for i in range(1, N-1):
y[k,i] = .25 * ( y[j,i-1] + 2 * y[j,i] + y[j,i+1] )
return y[k]
```
Note how there is now no yellow (i.e. no Python interaction) in any of the loops,
only in the initial copy of the input array.
```python
copt = %timeit -o y = blur_cython_optimized(x, steps)
t_copt = copt.best
times.append(t_copt)
labels.append("cython (optimized)")
plot_times()
```
```python
y = blur_cython_optimized(x, steps)
plt.plot(t, x, '--')
plt.plot(t, y)
```
## numba
[numba](http://numba.pydata.org/) is a library that attempts to automatically do type-based optimizations like we did with Cython.
To use numba, you decorate functions with `@autojit`.
```python
import numba
@numba.autojit
def blur_numba(x, steps=1024):
"""identical to blur_py, other than the decorator"""
x = 1 * x # copy
y = np.empty_like(x)
y[0] = x[0]
y[-1] = x[-1]
for _ in range(steps):
for i in range(1, len(x)-1):
            y[i] = .25 * ( x[i-1] + 2 * x[i] + x[i+1] )
x, y = y, x # swap for next step
return x
y = blur_numba(x, steps)
```
```python
nb = %timeit -o blur_numba(x, steps)
t_nb = nb.best
times.append(t_nb)
labels.append("numba")
plot_times()
```
What's impressive about numba in this case
is that it is able to beat all but the most optimized of our implementations without any help.
Like Cython, numba can do an even better job when you provide it with more information about how a function will be called.
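For example, recent numba releases have removed `autojit` in favour of `jit`/`njit`, and you can pass an explicit signature so compilation happens eagerly instead of on the first call. The sketch below assumes we will always call the function with a 1-D float64 array and an integer step count; the signature string is our assumption, not something taken from the cells above:
```python
import numba
import numpy as np

# explicit signature: compile eagerly for float64[:] input and an int64 step count
@numba.njit("float64[:](float64[:], int64)", cache=True)
def blur_numba_typed(x, steps):
    x = 1 * x  # copy
    y = np.empty_like(x)
    y[0] = x[0]
    y[-1] = x[-1]
    for _ in range(steps):
        for i in range(1, len(x) - 1):
            y[i] = .25 * (x[i-1] + 2 * x[i] + x[i+1])
        x, y = y, x  # swap for next step
    return x

%timeit blur_numba_typed(x, steps)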
## Profiling
```python
%%writefile profileme.py
import os
import glob
list(os.walk('/tmp'))
```
```python
!python -m cProfile profileme.py
```
```python
import os
import cProfile
cProfile.run("list(os.walk('/tmp'))")
```
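The plain `cProfile` output is ordered by function name, which is rarely what you want. Here is a small sketch of sorting it with the standard-library `pstats` module (the `/tmp` path is just the same example target as above):
```python
import cProfile
import os
import pstats

pr = cProfile.Profile()
pr.enable()
list(os.walk('/tmp'))
pr.disable()

# show the 10 most expensive calls by cumulative time
stats = pstats.Stats(pr)
stats.sort_stats('cumulative').print_stats(10)
```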
```python
%prun list(os.walk('/tmp'))
```
```python
%load_ext snakeviz
```
```python
%snakeviz list(os.walk('/usr/local'))
```
| 277bc247839d63e84c99f658ed88fa4459fbbc21 | 19,503 | ipynb | Jupyter Notebook | Profiling and Optimizing with IPython.ipynb | minrk/ipython-cse17 | 16a9059c7054a8bd4977a3cb8b09c100ea779069 | [
"BSD-3-Clause"
]
| 3 | 2017-03-02T07:11:37.000Z | 2017-03-03T06:13:32.000Z | Profiling and Optimizing with IPython.ipynb | minrk/ipython-cse17 | 16a9059c7054a8bd4977a3cb8b09c100ea779069 | [
"BSD-3-Clause"
]
| null | null | null | Profiling and Optimizing with IPython.ipynb | minrk/ipython-cse17 | 16a9059c7054a8bd4977a3cb8b09c100ea779069 | [
"BSD-3-Clause"
]
| null | null | null | 22.087203 | 139 | 0.482695 | true | 3,091 | Qwen/Qwen-72B | 1. YES
2. YES | 0.798187 | 0.909907 | 0.726276 | __label__eng_Latn | 0.948002 | 0.525714 |
# Variability in the Arm Endpoint Stiffness
In this notebook, we will calculate the feasible endpoint stiffness of a
simplified arm model for an arbitrary movement. The calculation of the feasible
muscle forces and the generation of the movement is presented in
feasible_muscle_forces.ipynb. The steps are as follows:
1. Generate a movement using task space projection
2. Calculate the feasible muscle forces that satisfy the movement
3. Calculate the feasible endpoint stiffness
```python
# notebook general configuration
%load_ext autoreload
%autoreload 2
# imports and utilities
import numpy as np
import sympy as sp
from IPython.display import display, Image
sp.interactive.printing.init_printing()
import logging
logging.basicConfig(level=logging.INFO)
# plot
%matplotlib inline
from matplotlib.pyplot import *
rcParams['figure.figsize'] = (10.0, 6.0)
# utility for displaying intermediate results
enable_display = True
def disp(*statement):
if (enable_display):
display(*statement)
```
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
## Step 1: Task Space Inverse Dynamics Controller
The task space position ($x_t$) is given as a function of the generalized
coordinates ($q$)
\begin{equation}\label{equ:task-position}
x_t = g(q), x_t \in \Re^{d}, q \in \Re^{n}, d \leq n
\end{equation}
The first and second derivatives with respect to time (the dot notation depicts
a derivative with respect to time) are given by
\begin{equation}\label{equ:task-joint-vel}
\dot{x}_t = J_t(q) \dot{q}, \; J_t(q) =
\begin{bmatrix}
\frac{\partial g_1}{\partial q_1} & \cdots & \frac{\partial g_1}{\partial q_n} \\
\vdots & \ddots & \vdots \\
\frac{\partial g_d}{\partial q_1} & \cdots & \frac{\partial g_d}{\partial q_n}
\end{bmatrix}
\in \Re^{d\times n}
\end{equation}
\begin{equation}\label{equ:task-joint-acc}
\ddot{x}_t = \dot{J}_t\dot{q} + J_t\ddot{q}
\end{equation}
The task Jacobian defines a dual relation between motion and force
quantities. The virtual work principle can be used to establish the link between
task and joint space forces (augmented by the null space)
\begin{equation}\label{equ:joint-task-forces-vw}
\begin{aligned}
\tau^T \delta q &= f_t^T \delta x_t \\
\tau^T \delta q &= f_t^T J_t \delta q \\
\tau &= J_t^T f_t + N_{J_t} \tau_0, \; N_{J_t} = (I - J_t^T \bar{J}_t^T)
\end{aligned}
\end{equation}
where $N_{J_t} \in \Re^{n \times n}$ represents the right null space of $J_t$
and $\bar{J}_t$ the generalized inverse. Let the joint space equations of motion
(EoMs) have the following form
\begin{equation}\label{equ:eom-joint-space}
\begin{gathered}
M(q) \ddot{q} + f(q, \dot{q}) = \tau \\
f(q, \dot{q}) = \tau_g(q) + \tau_c(q, \dot{q}) + \tau_{o}(q, \dot{q})
\end{gathered}
\end{equation}
where $M \in \Re^{n \times n}$ denotes the symmetric, positive definite joint
space inertia mass matrix, $n$ the number of DoFs of the model and ${q, \dot{q},
\ddot{q}} \in \Re^{n}$ the joint space generalized coordinates and their
derivatives with respect to time. The term $f \in \Re^{n}$ is the sum of all
joint space forces, $\tau_g \in \Re^{n}$ is the gravity, $\tau_c \in \Re^{n}$
the Coriolis and centrifugal and $\tau_{o} \in \Re^{n}$ other generalized
forces. Term $\tau \in \Re^{n}$ denotes a vector of applied generalized forces
that actuate the model.
We can project the joint space EoMs in the task space by multiplying both sides
from the left with $J_t M^{-1}$
\begin{equation}\label{equ:eom-task-space}
\begin{gathered}
J_t M^{-1}M \ddot{q} + J_t M^{-1}f = J_t M^{-1}\tau \\
\ddot{x}_t - \dot{J}_t\dot{q} + J_t M^{-1}f = J_t M^{-1} (J^T_t f_t + N_{J_t} \tau_0) \\
\Lambda_t(\ddot{x}_t + b_t) + \bar{J}_t^T f = f_t
\end{gathered}
\end{equation}
where $\Lambda_t=(J_tM^{-1}J_t^T)^{-1} \in \Re^{d \times d}$ represents the task
space inertia mass matrix, $b_t = - \dot{J}_t\dot{q}$ the task bias term and
$\bar{J}_t^T = \Lambda_t J_t M^{-1} \in \Re^{d \times n}$ the generalized inverse
transpose of $J_t$ that is used to project joint space quantities in the task
space. Note that $\bar{J}_t^T N_{J_t} \tau_0 = 0$.
The planning will be performed in task space in combination with a Proportional
Derivative (PD) tracking scheme
\begin{equation}\label{equ:pd-controller}
\ddot{x}_t = \ddot{x}_d + k_p (x_d - x_t) + k_d (\dot{x}_d - \dot{x}_t)
\end{equation}
where $x_d, \dot{x}_d, \ddot{x}_d$ are the desired position, velocity and
acceleration of the task and $k_p = 50, k_d = 5$ the tracking gains.
The desired task goal is derived from a smooth sigmoid function that produces
bell-shaped velocity profiles in any direction around the initial position of
the end effector
\begin{equation}\label{equ:sigmoid}
\begin{gathered}
x_d(t) = [x_{t,0}(0) + a (tanh(b (t - t_0 - 1)) + 1) / 2, x_{t,1}(0)]^T, \;
\dot{x}_d(t) = \frac{d x_d(t)}{dt}, \; \ddot{x}_d(t) = \frac{d \dot{x}_d(t)}{dt} \\
x_d^{'} = H_z(\gamma) x_d, \; \dot{x}_d^{'} = H_z(\gamma) \dot{x}_d,
\; \ddot{x}_d^{'} = H_z(\gamma) \ddot{x}_d
\end{gathered}
\end{equation}
where $x_{t, 0}$, $x_{t, 1}$ represent the $2D$ components of $x_t$, $a = 0.3$,
$b = 4$ and $t_0 = 0$. Different directions of movement are achieved by
transforming the goals with $H_z(\gamma)$, which defines a rotation around
the $z$-axis of an angle $\gamma$.
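The desired trajectory itself is easy to sanity-check before running the full simulation. The sketch below evaluates $x_d$, $\dot{x}_d$, $\ddot{x}_d$ for the parameters quoted above and rotates them with $H_z(\gamma)$; the initial end-effector position `x0` is a placeholder, in the actual run it comes from the model:
```python
import numpy as np

a, b, t0 = 0.3, 4.0, 0.0
gamma = np.pi                  # direction of movement, as in the simulation below
x0 = np.array([0.4, 0.3])      # placeholder initial end-effector position

def desired_task_goal(t):
    u = b * (t - t0 - 1.0)
    sech2 = 1.0 - np.tanh(u)**2
    xd    = np.array([x0[0] + a * (np.tanh(u) + 1.0) / 2.0, x0[1]])
    xd_d  = np.array([a * b * sech2 / 2.0, 0.0])
    xd_dd = np.array([-a * b**2 * np.tanh(u) * sech2, 0.0])
    Hz = np.array([[np.cos(gamma), -np.sin(gamma)],
                   [np.sin(gamma),  np.cos(gamma)]])
    return Hz @ xd, Hz @ xd_d, Hz @ xd_dd

xd, xd_dot, xd_ddot = desired_task_goal(1.0)
```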
```python
# import necessary modules
from model import ArmModel
from projection import TaskSpace
from controller import TaskSpaceController
from simulation import Simulation
```
```python
# construct model gravity disabled to improve execution time during numerical
# integration note that if enabled, different PD gains are required to track the
# movement accurately
model = ArmModel(use_gravity=0, use_coordinate_limits=1, use_viscosity=1)
model.pre_substitute_parameters()
```
```python
# simulation parameters
t_end = 2.0
angle = np.pi # direction of movement
fig_name = 'results/feasible_stiffness/feasible_forces_ts180'
# define the end effector position in terms of q's
end_effector = sp.Matrix(model.ee)
disp('x_t = ', end_effector)
# task space controller
task = TaskSpace(model, end_effector)
controller = TaskSpaceController(model, task, angle=angle)
# numerical integration
simulation = Simulation(model, controller)
simulation.integrate(t_end)
# plot simulation results
fig, ax = subplots(2, 3, figsize=(15, 10))
simulation.plot_simulation(ax[0])
controller.reporter.plot_task_space_data(ax[1])
fig.tight_layout()
fig.savefig(fig_name + '.pdf', format='pdf', dpi=300)
fig.savefig(fig_name + '.eps', format='eps', dpi=300)
```
## Step 2: Calculation of the Feasible Muscle Force Space
The feasible muscle forces are calculated below. Initially, the moment arm and
maximum muscle force quantities are computed for each instance of the
movement. Then the following inequality is formed assuming a linear muscle model
\begin{equation}\label{equ:linear-muscle-null-space-inequality}
\begin{gathered}
f_m = f_{max} \circ a_m = f_m^{\parallel} +
N_{R} f_{m0},\; 0 \preceq a_m \preceq 1
\rightarrow \\
\begin{bmatrix}
- N_{R} \\
\hdashline
N_{R}
\end{bmatrix}
f_{m0} \preceq
\begin{bmatrix}
f_m^{\parallel} \\
\hdashline
f_{max} - f_m^{\parallel}
\end{bmatrix} \\
Z f_{m0} \preceq \beta
\end{gathered}
\end{equation}
where $a_m \in \Re^{m}$ represents a vector of muscle activations, $f_{max} \in
\Re^{m}$ a vector specifying the maximum muscle forces, $\circ$ the Hadamard
(elementwise) product, $f_m^{\parallel}$ the particular muscle force solution
that satisfies the action, $N_{R}$ the moment arm null space and $f_{m0}$ the
null space forces.
The next step is to sample the inequality $Z f_{m0} \leq \beta$. This is the
bottleneck of the analysis. The *convex_bounded_vertex_enumeration* uses the
lsr method, which is a vertex enumeration algorithm for finding the vertices
of a polytope in $O(v m^3)$.
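Constructing the inequality itself is a small step. A minimal sketch (with placeholder dimensions and random arrays; in the real analysis $N_R$, $f_m^{\parallel}$ and $f_{max}$ come from the moment arm matrix and the muscle model at each time instant):
```python
import numpy as np

m = 9                                      # number of muscles (placeholder)
N_R = np.random.rand(m, m - 2)             # null space basis of the moment arm matrix (placeholder)
f_par = np.random.rand(m) * 100.0          # particular solution f_m^parallel (placeholder)
f_max = f_par + np.random.rand(m) * 900.0  # maximum muscle forces (placeholder)

# Z f_m0 <= beta  encodes  0 <= f_m^parallel + N_R f_m0 <= f_max
Z = np.vstack([-N_R, N_R])
beta = np.concatenate([f_par, f_max - f_par])
```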
```python
# import necessary modules
from analysis import FeasibleMuscleSetAnalysis
# initialize feasible muscle force analysis
feasible_muscle_set = FeasibleMuscleSetAnalysis(model, controller.reporter)
```
## Step 3: Calculate the Feasible Task Stiffness
In the following section, we will introduce a method for calculating the
feasible muscle forces that satisfy the motion and the physiological muscle
constraints. As the muscles are the main actors of the system, it is important
to examine the effect of muscle redundancy on the calculation of limbs'
stiffness.
The muscle stiffness is defined as
\begin{equation}\label{equ:muscle-stiffness}
K_m = \frac{\partial f_m}{\partial l_{m}},\; K_m \in \Re^{m \times m}
\end{equation}
where $f_m \in \Re^{m}$ represents the muscle forces, $l_{m} \in \Re^{m}$ the
musculotendon lengths and $m$ the number of muscles. The joint stiffness is
defined as
\begin{equation}\label{equ:joint-stiffness}
K_j = \frac{\partial \tau}{\partial q},\; K_j \in \Re^{n \times n}
\end{equation}
where $\tau \in \Re^{n}$, $q \in \Re^{n}$ are the generalized forces and
coordinates, respectively and $n$ the DoFs of the system. Finally, the task
stiffness is defined as
\begin{equation}\label{equ:task-stiffness}
K_t = \frac{\partial f_t}{\partial x_t},\; K_t \in \Re^{d \times d}
\end{equation}
where $f_t \in \Re^{d}$ denotes the forces, $x_t \in \Re^{d}$ the positions and
$d$ the DoFs of the task.
The derivation starts with a model for computing the muscle stiffness matrix
$K_m$. The two most adopted approaches are to either use the force-length
characteristics of the muscle model or to approximate it using the definition of
the short range stiffness, where the latter is shown to explain most of the
variance in the experimental measurements. The short range stiffness is
proportional to the force developed by the muscle ($f_m$)
\begin{equation}\label{equ:short-range-stiffness}
k_{s} = \gamma \frac{f_m}{l_m^o}
\end{equation}
where $\gamma = 23.4$ is an experimentally determined constant and $l_m^o$ the
optimal muscle length. This definition will be used to populate the diagonal
elements of the muscle stiffness matrix, whereas inter-muscle coupling
(non-diagonal elements) will be assumed zero since it is difficult to measure
and model in practice.
The joint stiffness is related to the muscle stiffness through the following
relationship
\begin{equation}\label{equ:joint-muscle-stiffness}
K_j = -\frac{\partial R^T}{\partial q} \bullet_2 f_m - R^T K_m R
\end{equation}
where the first term captures the varying effect of the muscle moment arm ($R
\in \Re^{m \times n}$), while the second term maps the muscle space stiffness to
joint space. The notation $\bullet_2$ denotes a product of a rank-3 tensor
($\frac{\partial R^T}{\partial q} \in \Re^{n \times m \times n}$, a 3D matrix)
and a rank-1 tensor ($f_m \in \Re^{m}$, a vector), where the index $2$ specifies
that the tensor dimensional reduction (by summation) is performed across the
second dimension, resulting in a reduced rank-2 tensor of dimensions $n \times
n$.
In a similar manner, the task stiffness is related to the muscle stiffness
through the following relationship
\begin{equation}\label{equ:task-muscle-stiffness}
K_t = -J_t^{+T} \left(\frac{\partial J_t^T}{\partial q} \bullet_2
f_t + \frac{\partial R^T}{\partial q} \bullet_2 f_m + R^T
K_m R\right) J_t^{+}
\end{equation}
where the task Jacobian matrix ($J_t \in \Re^{d \times n}$) describes the
mapping from joint to task space ($\Re^{n} \rightarrow \Re^{d}$), $+$ stands for
the Moore-Penrose pseudoinverse and $+T$ the transposed pseudoinverse operator.
Algorithm for calculating the feasible joint stiffness:
**Step 1:** Calculate the feasible muscle forces $f_m^{\oplus}$ that satisfy the
task and the physiological muscle constraints
**Step 2:** Calculate the muscle stiffness matrix $K_m$ using the short range
stiffness model
\begin{equation*}\label{equ:short-range-stiffness-2}
k_s = \gamma \frac{f_m}{l_m^o},\; \gamma = 23.4
\end{equation*}
**Step 3:** Calculate the task $K_t$ and joint $K_j$ stiffness
\begin{equation*}
\begin{gathered}
K_j = -\frac{\partial R^T}{\partial q} \bullet_2 f_m - R^T K_m R \\
K_t = -J_t^{+T} \left(\frac{\partial J_t^T}{\partial q} \bullet_2
f_t + \frac{\partial R^T}{\partial q} \bullet_2 f_m + R^T
K_m R\right) J_t^{+}
\end{gathered}
\end{equation*}
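As a sketch of step 3 with plain numpy (all arrays below are random placeholders with the right shapes; in the notebook they are provided by the model and `StiffnessAnalysis`), the $\bullet_2$ contraction can be written with `np.einsum`:
```python
import numpy as np

n, m, d = 2, 9, 2                      # joint, muscle and task dimensions (placeholders)
gamma = 23.4

R   = np.random.rand(m, n)             # moment arm matrix
Jt  = np.random.rand(d, n)             # task Jacobian
f_m = np.random.rand(m) * 100.0        # feasible muscle forces
f_t = np.random.rand(d)                # task forces
l_mo = np.random.rand(m) + 0.1         # optimal muscle lengths
dRT_dq  = np.random.rand(n, m, n)      # dR^T/dq
dJtT_dq = np.random.rand(n, d, n)      # dJ_t^T/dq

K_m = np.diag(gamma * f_m / l_mo)      # short range (diagonal) muscle stiffness

# bullet_2 products: reduce over the second tensor dimension
K_j = -np.einsum('imj,m->ij', dRT_dq, f_m) - R.T @ K_m @ R

Jt_pinv = np.linalg.pinv(Jt)
K_t = -Jt_pinv.T @ (np.einsum('idj,d->ij', dJtT_dq, f_t)
                    + np.einsum('imj,m->ij', dRT_dq, f_m)
                    + R.T @ K_m @ R) @ Jt_pinv
```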
```python
# import necessary modules
from analysis import StiffnessAnalysis
from util import calculate_stiffness_properties
base_name = 'results/feasible_stiffness/feasible_stiffness_ts180_'
# initialize stiffness analysis
stiffness_analysis = StiffnessAnalysis(model, task, controller.reporter,
feasible_muscle_set)
# calculate feasible stiffness
calculate_stiffness_properties(stiffness_analysis, base_name, 0, t_end, 0.2, 500)
```
```python
Image(url=base_name + 'anim.gif')
```
The left diagram shows the feasible major and minor axes of the endpoint
stiffness using scaled ($\text{scaling} = 0.0006$) ellipses (ellipses are
omitted for visibility reasons). The ellipse is a common way to visualize the
task stiffness, where the major axis (red) of the ellipse is oriented along the
maximum stiffness and the area is proportional to the determinant of $K_t$,
conveying the stiffness amplitude. The stiffness capacity (area) is increased in
the last pose, since the arm has already reached its final position and muscle
forces are not needed for it to execute any further motion. The second diagram
(middle) depicts the distribution of ellipse parameters (area and orientation
$\phi$). Finally, the rightmost box plot shows the feasible joint stiffness
distribution at three distinct time instants. Experimental measurements have
shown that the orientation of stiffness ellipses varies in a range of about
$30^{\circ}$. While our simulation results confirm this, they also reveal a
tendency of fixation towards specific directions for higher stiffness
amplitudes. The large variation of feasible stiffness verifies that this type of
analysis conveys important findings that complement experimental observations.
| 3a47e3fe1722658009de3a6dcc199786c4bae523 | 961,154 | ipynb | Jupyter Notebook | arm_model/feasible_task_stiffness.ipynb | mitkof6/musculoskeletal-stiffness | 150a43a3d748bb0b630e77cde19ab65df5fb089c | [
"CC-BY-4.0"
]
| 4 | 2019-01-24T08:10:20.000Z | 2021-04-04T18:55:02.000Z | arm_model/feasible_task_stiffness.ipynb | mitkof6/musculoskeletal-stiffness | 150a43a3d748bb0b630e77cde19ab65df5fb089c | [
"CC-BY-4.0"
]
| null | null | null | arm_model/feasible_task_stiffness.ipynb | mitkof6/musculoskeletal-stiffness | 150a43a3d748bb0b630e77cde19ab65df5fb089c | [
"CC-BY-4.0"
]
| null | null | null | 1,176.443084 | 133,484 | 0.952572 | true | 4,197 | Qwen/Qwen-72B | 1. YES
2. YES | 0.861538 | 0.66888 | 0.576266 | __label__eng_Latn | 0.9789 | 0.177189 |
# Maximum Likelihood Estimate
Suppose we are given a problem where we can assume the _parametric class_ of distribution (e.g. Normal Distribution) that generates a set of data, and we want to determine the most likely parameters of this distribution using the given data. Since this class of distribution has a finite number of parameters (e.g. mean $\mu$ and standard deviation $\sigma$, in case of normal distribution) that need to be figured out in order to identify the particular member of the class, we will use the given data to do so.
The obtained parameter estimates will be called **Maximum Likelihood Estimates**.
Let us consider a random variable $X$ that is normally distributed with some mean $\mu$ and standard deviation $\sigma$. We need to estimate $\mu$ and $\sigma$ from our samples in a way that represents the actual $X$, and not just the particular samples we have drawn.
## Estimating Parameters
Let's have a look at the Probability Density Function (PDF) for the Normal Distribution and see what they mean.
$$
\begin{equation}
f(x; \mu, \sigma) = \frac{e^{-(x - \mu)^{2}/(2\sigma^{2}) }} {\sigma\sqrt{2\pi}}
\end{equation}
$$ (eq_normal_dist)
This equation is used to obtain the probability of our sample $x$ being from our random variable $X$, when the true parameters of the distribution are $\mu$ and $\sigma$. Normal distributions with different $\mu$ and $\sigma$ are shown below.
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
np.random.seed(10)
plt.style.use('seaborn')
plt.rcParams['figure.figsize'] = (12, 8)
def plot_normal(x_range, mu=0, sigma=1, **kwargs):
'''
https://emredjan.github.io/blog/2017/07/19/plotting-distributions/
'''
x = x_range
y = norm.pdf(x, mu, sigma)
plt.plot(x, y, **kwargs)
mus = np.linspace(-6, 6, 6)
sigmas = np.linspace(1, 3, 6)
assert len(mus) == len(sigmas)
x_range = np.linspace(-10, 10, 200)
for mu, sigma in zip(mus, sigmas):
plot_normal(x_range, mu, sigma, label=f'$\mu$ = {mu:.2f}, $\sigma$ = {sigma:.2f}')
plt.legend();
```
Let us consider that our sample = 5. Then what is the probability that it comes from a normal distribution with $\mu = 4$ and $\sigma = 1$? To get this probability, we only need to plug in the values of $x, \mu$ and $\sigma$ in Equation {eq}`eq_normal_dist`. Scipy has a handy function [`norm.pdf()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html) that we can use to obtain this easily.
```python
from scipy.stats import norm
norm.pdf(5, 4, 1)
```
0.24197072451914337
What if our sample came from a different distribution with $\mu = 3$ and $\sigma = 2$?
```python
norm.pdf(5, 3, 2)
```
0.12098536225957168
As we can see, the PDF equation {eq}`eq_normal_dist` shows us how likely our sample is to have come from a distribution with certain parameters. The current results show that our sample is more likely to have come from the first distribution. But this is with just a single sample. What if we had multiple samples and we wanted to estimate the parameters?
Let us assume we have multiple samples from $X$ which we assume to have come from some normal distribution. Also, all the samples are mutually independent of one another. In this case, we can get the total probability of observing all samples by multiplying the probabilities of observing each sample individually.
E.g., The probability that both $7$ and $1$ are drawn from a normal distribution with $\mu = 4$ and $\sigma=2$ is equal to:
```python
norm.pdf(7, 4, 2) * norm.pdf(1, 4, 2)
```
0.004193701896768355
## Likelihood of many samples
```python
x_data = np.random.randint(-9, high=9, size=5)
print(x_data)
```
[ 0 -5 6 -9 8]
In maximum likelihood estimation (MLE), we specify a distribution with unknown parameters and then use our data to estimate those parameter values. In essence, MLE's aim is to find the set of parameters for the probability distribution that maximizes the likelihood of the data points. This can be formally expressed as:
$$
\begin{equation}
\hat{\mu}, \hat{\sigma} = \operatorname*{argmax}_{\mu, \sigma} \prod_{i=1}^n f(x_i)
\end{equation}
$$ (eq_likelihood)
However, it is difficult to optimize this product of probabilities because of long and messy calculations. Thus, we use log-likelihood which is the logarithm of the probability that the data point is observed. Formally Equation {eq}`eq_likelihood` can be re-written as,
$$
\begin{equation}
\hat{\mu}, \hat{\sigma} = \operatorname*{argmax}_{\mu, \sigma}\Sigma_{i=1}^n \ln f(x_i)
\end{equation}
$$ (eq_log_likelihood)
This is because the logarithmic function is monotonically increasing. Thus, taking the log of a function does not change the point where the original function peaks. There are two main advantages of using the log-likelihood:
1. The exponential terms in the probability density function are more manageable and easily optimizable.
1. The product of all likelihoods become a sum of individual likelihoods which allows these individual components to be maximized rather than working with the product of $n$ probability density functions.
Now, let us find the maximum likelihood estimates using the log likelihood function.
$$
\begin{align*}
\begin{split}
& \ln\left[ L(\mu, \sigma|x_1, ..., x_n)\right] \\
&= \ln \left( \frac{e^{-(x_1 - \mu)^{2}/(2\sigma^{2}) }} {\sigma\sqrt{2\pi}} \times \frac{e^{-(x_2 - \mu)^{2}/(2\sigma^{2}) }} {\sigma\sqrt{2\pi}} \times ... \times \frac{e^{-(x_n - \mu)^{2}/(2\sigma^{2}) }} {\sigma\sqrt{2\pi}} \right) \\
&= \ln\left( \frac{e^{-(x_1 - \mu)^{2}/(2\sigma^{2}) }} {\sigma\sqrt{2\pi}} \right) + \ln\left( \frac{e^{-(x_2 - \mu)^{2}/(2\sigma^{2}) }} {\sigma\sqrt{2\pi}} \right) + ... \\
&\quad + \ln\left( \frac{e^{-(x_n - \mu)^{2}/(2\sigma^{2}) }} {\sigma\sqrt{2\pi}} \right)
\end{split}
\end{align*}
$$ (eq_mle_all_terms)
Here,
$$
\begin{align*}
&\ln\left( \frac{e^{-(x_1 - \mu)^{2}/(2\sigma^{2}) }} {\sigma\sqrt{2\pi}} \right) \\
&= \ln\left( \frac{1} {\sigma\sqrt{2\pi}} \right) + \ln\left( e^{-(x_1 - \mu)^{2}/(2\sigma^{2}) } \right) \\
&= \ln\left[ (2\pi\sigma^2)^{\frac{-1}{2}} \right] - \frac{(x_1 - \mu)^2}{2\sigma^2}\ln(e) \\
&= -\frac{1}{2}\ln ( 2\pi\sigma^2) - \frac{(x_1-\mu)^2}{2\sigma^2} \\
&= -\frac{1}{2}\ln(2\pi) - \frac{1}{2}\ln(\sigma^2) - \frac{(x_1 - \mu)^2}{2\sigma^2} \\
\end{align*}
$$
$$
\begin{align*}
&= -\frac{1}{2}\ln(2\pi) - \ln(\sigma) - \frac{(x_1 - \mu)^2}{2\sigma^2} \\
\end{align*}
$$ (eq_mle_single_term)
Thus, Equation {eq}`eq_mle_all_terms` can be written as:
$$
\begin{align*}
\ln\left[ L(\mu, \sigma|x_1, ..., x_n)\right] &= \ln\left( \frac{e^{-(x_1 - \mu)^{2}/(2\sigma^{2}) }} {\sigma\sqrt{2\pi}} \right) + \ln\left( \frac{e^{-(x_2 - \mu)^{2}/(2\sigma^{2}) }} {\sigma\sqrt{2\pi}} \right) + ... \\
&\quad + \ln\left( \frac{e^{-(x_n - \mu)^{2}/(2\sigma^{2}) }} {\sigma\sqrt{2\pi}} \right) \\
&= \left[ -\frac{1}{2}\ln(2\pi) - \ln(\sigma) - \frac{(x_1 - \mu)^2}{2\sigma^2} \right] \\
&\quad + \left[ -\frac{1}{2}\ln(2\pi) - \ln(\sigma) - \frac{(x_2- \mu)^2}{2\sigma^2} \right] \\
&\quad + ... + \left[ -\frac{1}{2}\ln(2\pi) - \ln(\sigma) - \frac{(x_n - \mu)^2}{2\sigma^2} \right] \\
\end{align*}
$$
$$
\begin{align*}
&= -\frac{n}{2}\ln(2\pi) - n\ln(\sigma) - \frac{(x_1-\mu)^2}{2\sigma^2} - \frac{(x_2-\mu)^2}{2\sigma^2} - ... - \frac{(x_n-\mu)^2}{2\sigma^2}
\end{align*}
$$ (eq_mle_simplified)
## Values of parameters
Now, we will use Equation {eq}`eq_mle_simplified` to find the values of $\mu$ and $\sigma$. For this purpose, we take the partial derivative of Equation {eq}`eq_mle_simplified` with respect to $\mu$ and $\sigma$.
$$
\begin{align*}
\frac{\partial}{\partial \mu}\ln\left[L(\mu, \sigma|x_1, x_2, ..., x_n) \right] &= 0 - 0 + \frac{x_1 - \mu}{\sigma^2} + \frac{x_2 - \mu}{\sigma^2} + ... + \frac{x_n - \mu}{\sigma^2} \\
\end{align*}
$$
$$
\begin{align*}
\frac{\partial}{\partial \mu}\ln\left[L(\mu, \sigma|x_1, x_2, ..., x_n) \right] &= \frac{1}{\sigma^2}\left[ (x_1 + x_2 + ... + x_n) - n\mu \right]
\end{align*}
$$ (eq_mu)
$$
\begin{align*}
\frac{\partial}{\partial \sigma}\ln\left[L(\mu, \sigma|x_1, x_2, ..., x_n) \right] &= 0 - \frac{n}{\sigma} + \frac{(x_1 - \mu)^2}{\sigma^3} + \frac{(x_2 - \mu)^2}{\sigma^3} + ... + \frac{(x_n - \mu)^2}{\sigma^3} \\
\end{align*}
$$
$$
\begin{align*}
\frac{\partial}{\partial \sigma}\ln\left[L(\mu, \sigma|x_1, x_2, ..., x_n) \right] &= -\frac{n}{\sigma} + \frac{1}{\sigma^3}\left[ (x_1 - \mu)^2 + (x_2 - \mu)^2 + ...+ (x_n - \mu)^2 \right]
\end{align*}
$$ (eq_sigma)
Now, to find the maximum likelihood estimates for $\mu$ and $\sigma$, we set the derivatives with respect to $\mu$ and $\sigma$ equal to $0$, because the slope is 0 at the peak of the curve.
Thus, using Equation {eq}`eq_mu` and setting $\frac{\partial}{\partial \mu}\ln\left[L(\mu, \sigma|x_1, x_2, ..., x_n) \right] = 0$, we get,
$$
\begin{align*}
0 &= \frac{1}{\sigma^2}\left[ (x_1 + x_2 + ... + x_n) - n\mu \right] \\
0 &= (x_1+x_2 + ... + x_n) - n\mu \\
\end{align*}
$$
$$
\begin{align*}
\mu &= \frac{(x_1+x_2+...+x_n)}{n}
\end{align*}
$$ (eq_mu_final)
Thus, the maximum likelihood estimate for $\mu$ is the mean of the samples.
Similarly, using Equation {eq}`eq_sigma` and setting $\frac{\partial}{\partial \sigma}\ln\left[L(\mu, \sigma|x_1, x_2, ..., x_n) \right] = 0$, we get,
$$
\begin{align*}
0 &= -\frac{n}{\sigma} + \frac{1}{\sigma^3}\left[ (x_1 - \mu)^2 + (x_2 - \mu)^2 + ...+ (x_n - \mu)^2 \right] \\
0 &= -n + \frac{1}{\sigma^2}\left[ (x_1-\mu)^2 + (x_2-\mu)^2 + ...+ (x_n-\mu)^2 \right] \\
n\sigma^2 &= (x_1-\mu)^2 + (x_2-\mu)^2 + ...+ (x_n-\mu)^2 \\
\end{align*}
$$
$$
\begin{align*}
\sigma &= \sqrt{\frac{(x_1-\mu)^2 + (x_2-\mu)^2 + ...+ (x_n-\mu)^2}{n}} \\
\end{align*}
$$ (eq_sigma_final)
Thus, the maximum likelihood estimate for $\sigma$ is the standard deviation of the samples.
## An Example
Let us now consider 5 samples with values 0, -5, 6, -9 and 8. We want to know the normal distribution from which all of these samples were most likely to be drawn. In other words, we would like to maximize the value of $f(0, -5, 6, -9, 8)$ as given in Equation {eq}`eq_normal_dist`. Since we do not know the values of $\mu$ and $\sigma$ for the required distribution, we need to estimate them using Equations {eq}`eq_mu_final` and {eq}`eq_sigma_final` respectively.
Using the formulae of $\mu$ and $\sigma$, we get,
```python
samples = np.array([0, -5, 6, -9, 8])
mu = np.mean(samples)
sigma = np.std(samples)
print(f'mu = {mu:.2f} and sigma = {sigma:.2f}')
```
mu = 0.00 and sigma = 6.42
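We can also cross-check these closed-form estimates numerically by minimising the negative log-likelihood directly (a sketch using `scipy.optimize.minimize`; the starting point is arbitrary):
```python
from scipy.optimize import minimize

def neg_log_likelihood(params):
    mu_, sigma_ = params
    return -np.sum(norm.logpdf(samples, mu_, sigma_))

res = minimize(neg_log_likelihood, x0=[0.0, 5.0], method='Nelder-Mead')
print(res.x)  # should be close to mu = 0.00, sigma = 6.42
```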
Let us plot the normal distribution with these values and also mark the given points.
```python
x_range = np.linspace(-20, 20, 200)
plot_normal(x_range, mu, sigma)
plt.axhline(y=0)
plt.vlines(samples, ymin=0, ymax=norm.pdf(samples, mu, sigma), linestyle=':')
plt.plot(samples, [0]*samples.shape[0], 'o', zorder=10, clip_on=False);
```
Thus, this is the most likely normal distribution from which the five sample points were drawn out.
## References
1. [What is Maximum Likelihood Estimation — Examples in Python | Medium.com](https://medium.com/@rrfd/what-is-maximum-likelihood-estimation-examples-in-python-791153818030)
1. [Maximum Likelihood Estimation: How it Works and Implementing in Python | towardsdatascience.com](https://towardsdatascience.com/maximum-likelihood-estimation-how-it-works-and-implementing-in-python-b0eb2efb360f)
1. [Maximum Likelihood For the Normal Distribution, step-by-step! | YouTube](https://youtu.be/Dn6b9fCIUpM)
| abfaf3ac444188e61baceac4e9c7a15e62b025f3 | 139,405 | ipynb | Jupyter Notebook | my_ml_recipes/text/mle/intro.ipynb | somnathrakshit/mymlrecipes | dd2313ebb03b7a655fada559c3d7da8de885f0bf | [
"MIT"
]
| null | null | null | my_ml_recipes/text/mle/intro.ipynb | somnathrakshit/mymlrecipes | dd2313ebb03b7a655fada559c3d7da8de885f0bf | [
"MIT"
]
| null | null | null | my_ml_recipes/text/mle/intro.ipynb | somnathrakshit/mymlrecipes | dd2313ebb03b7a655fada559c3d7da8de885f0bf | [
"MIT"
]
| null | null | null | 305.043764 | 82,448 | 0.914487 | true | 3,890 | Qwen/Qwen-72B | 1. YES
2. YES | 0.828939 | 0.808067 | 0.669838 | __label__eng_Latn | 0.917379 | 0.39459 |
# Solow-Model with human capital and distorting taxation
**Importing relevant packages and modules**
```python
import numpy as np
from scipy import optimize
import sympy as sm
import matplotlib.pyplot as plt
# autoreload modules when code is run
%load_ext autoreload
%autoreload 2
#XD FOR SMART UPLOAD
# local modules
# import modelproject if code is used form another notebook or file
```
# Model description
The following Solow model incorporates human capital accumulation as well as distorting taxation.
The model consists of the following equations:
1 $$ N_t=(1-\tau)^{\eta}L_t, \text{ } 0<\eta<1,$$
2 $$H_{t+1}={\tau}w_tN_t+(1-{\delta})H_t $$
3 $$K_{t+1}=s_kY_t+(1-{\delta})K_t $$
4 $$Y_t=K^\alpha_{t}H^\beta_{t}(A_{t}N_{t})^{1-\alpha-\beta} $$
5 $$H_t=h_{t}N_t $$
6 $$L_{t+1}=(1+n)L_t $$
7 $$A_{t+1}=(1+g)A_t $$
* $K_t$ is capital in period t
* $L_t$ is labor i period t (with a constant growth rate of $n$)
* $H_t$ is human capital in period t
* $A_t$ is technology in period t (with a constant growth rate of $g$)
* $N_t$ is the total workhours supplied in the economy in period t
* $Y_t$ is GDP
The model contains the following parameters:$$(\eta,\tau,s_k,\delta,\alpha,\beta,n,g) $$
Using the equations above, we can find the two transition curves given by:
$$ \tilde{k}_{t+1}=\left(\frac{1}{(1+n)(1+g)}\right)[s_{k}\tilde{k}^{\alpha}_{t}\tilde{h}^{\beta}_{t}(1-\tau)^{\eta(1+\alpha)}+(1-{\delta})\tilde{k}_{t}] $$
and
$$ \tilde{h}_{t+1}=\left(\frac{1}{(1+n)(1+g)}\right)[\tau(1-\alpha)\tilde{k}^{\alpha}_{t}\tilde{h}^{\beta}_{t}(1-\tau)^{\eta(1+\alpha)}+(1-{\delta})\tilde{h}_{t}] $$
where $\tilde{k}_{t}$ is capital per effective worker, given by $\tilde{k}_{t}=\frac{K_t}{A_tL_t}$,
and
$\tilde{h}_{t}$ is human capital per effective worker, given by $\tilde{h}_{t}=\frac{H_t}{A_tL_t}$.
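A minimal numerical sketch of the resulting steady state is given below; the parameter values are arbitrary placeholders chosen only for illustration, and the steady state is found by simply iterating the two transition curves until $\tilde{k}$ and $\tilde{h}$ stop changing:
```python
# illustrative parameter values (not calibrated)
alpha, beta, eta = 1/3, 1/3, 0.1
tau, s_k, delta, n, g = 0.2, 0.2, 0.05, 0.01, 0.02

k, h = 1.0, 1.0
for _ in range(10_000):
    y = k**alpha * h**beta * (1 - tau)**(eta * (1 + alpha))
    k_next = (s_k * y + (1 - delta) * k) / ((1 + n) * (1 + g))
    h_next = (tau * (1 - alpha) * y + (1 - delta) * h) / ((1 + n) * (1 + g))
    if abs(k_next - k) < 1e-12 and abs(h_next - h) < 1e-12:
        k, h = k_next, h_next
        break
    k, h = k_next, h_next

print(f'steady state: k_tilde = {k:.3f}, h_tilde = {h:.3f}')
```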
```python
```
| 0743b150557771d07131064fff02b058a3a02082 | 3,420 | ipynb | Jupyter Notebook | modelproject/modelproject_working.ipynb | Santesson97/projects-2020-qvk188-og-sqn417 | 176f9648675e0d742083eaf621b28dc150f3071a | [
"MIT"
]
| null | null | null | modelproject/modelproject_working.ipynb | Santesson97/projects-2020-qvk188-og-sqn417 | 176f9648675e0d742083eaf621b28dc150f3071a | [
"MIT"
]
| null | null | null | modelproject/modelproject_working.ipynb | Santesson97/projects-2020-qvk188-og-sqn417 | 176f9648675e0d742083eaf621b28dc150f3071a | [
"MIT"
]
| null | null | null | 26.71875 | 186 | 0.531579 | true | 639 | Qwen/Qwen-72B | 1. YES
2. YES | 0.904651 | 0.7773 | 0.703185 | __label__eng_Latn | 0.947244 | 0.472065 |
```python
from IPython.display import Image
from IPython.core.display import HTML
from sympy import *; x,h,y,n,t = symbols("x h y n t"); C, D = symbols("C D", real=True)
Image(url= "https://i.imgur.com/eCBwLMi.png")
```
```python
expr = Integral(7*cos(t) - 8*sin(t), t)
expr
```
$\displaystyle \int \left(- 8 \sin{\left(t \right)} + 7 \cos{\left(t \right)}\right)\, dt$
```python
expr2 = expr.doit()
print(expr2, " Do not forget to add a constant C to final answer")
```
7*sin(t) + 8*cos(t) Do not forget to add a constant C to final answer
```python
Image(url= "https://i.imgur.com/kIo1SdC.png")
```
| 5b981987056147c7593419a2dc4f76503503c5b9 | 2,599 | ipynb | Jupyter Notebook | Calculus_Homework/WWB12.14.ipynb | NSC9/Sample_of_Work | 8f8160fbf0aa4fd514d4a5046668a194997aade6 | [
"MIT"
]
| null | null | null | Calculus_Homework/WWB12.14.ipynb | NSC9/Sample_of_Work | 8f8160fbf0aa4fd514d4a5046668a194997aade6 | [
"MIT"
]
| null | null | null | Calculus_Homework/WWB12.14.ipynb | NSC9/Sample_of_Work | 8f8160fbf0aa4fd514d4a5046668a194997aade6 | [
"MIT"
]
| null | null | null | 21.479339 | 110 | 0.482878 | true | 205 | Qwen/Qwen-72B | 1. YES
2. YES | 0.841826 | 0.815232 | 0.686284 | __label__eng_Latn | 0.371393 | 0.432798 |
# Logistic Regression
Researchers are often interested in setting up a model to analyze the relationship between predictors (i.e., independent variables) and its corresponding response (i.e., dependent variable). Linear regression is commonly used when the response variable is continuous. One assumption of linear models is that the residual errors follow a normal distribution. This assumption fails when the response variable is categorical, so an ordinary linear model is not appropriate. This newsletter presents a regression model for a response variable that is dichotomous, having two categories. Examples are common: whether a plant lives or dies, whether a survey respondent agrees or disagrees with a statement, or whether an at-risk child graduates or drops out of high school.
In ordinary linear regression, the response variable (Y) is a linear function of the coefficients (B0, B1, etc.) that correspond to the predictor variables (X1, X2, etc.,). A typical model would look like:
Y = B0 + B1*X1 + B2*X2 + B3*X3 + … + E
For a dichotomous response variable, we could set up a similar linear model to predict individual category memberships if numerical values are used to represent the two categories. Arbitrary values of 1 and 0 are chosen for mathematical convenience. Using the first example, we would assign Y = 1 if a plant lives and Y = 0 if a plant dies.
This linear model does not work well for a few reasons. First, the response values, 0 and 1, are arbitrary, so modeling the actual values of Y is not exactly of interest. Second, it is the probability that each individual in the population responds with 0 or 1 that we are interested in modeling. For example, we may find that plants with a high level of a fungal infection (X1) fall into the category “the plant lives” (Y) less often than those plants with low level of infection. Thus, as the level of infection rises, the probability of plant living decreases.
Thus, we might consider modeling P, the probability, as the response variable. Again, there are problems. Although the general decrease in probability is accompanied by a general increase in infection level, we know that P, like all probabilities, can only fall within the boundaries of 0 and 1. Consequently, it is better to assume that the relationship between X1 and P is sigmoidal (S-shaped), rather than a straight line.
It is possible, however, to find a linear relationship between X1 and function of P. Although a number of functions work, one of the most useful is the logit function. It is the natural log of the odds that Y is equal to 1, which is simply the ratio of the probability that Y is 1 divided by the probability that Y is 0. The relationship between the logit of P and P itself is sigmoidal in shape. The regression equation that results is:
ln[P/(1-P)] = B0 + B1*X1 + B2*X2 + …
Although the left side of this equation looks intimidating, this way of expressing the probability results in the right side of the equation being linear and looking familiar to us. This helps us understand the meaning of the regression coefficients. The coefficients can easily be transformed so that their interpretation makes sense.
The logistic regression equation can be extended beyond the case of a dichotomous response variable to the cases of ordered categories and polytomous categories (more than two categories).
# Mathematics behind Logistic Regression
## Notation
The problem structure is the classic classification problem. Our data set $\mathcal{D}$ is composed of $N$ samples. Each sample is a tuple containing a feature vector and a label. For any sample $n$ the feature vector is a $d+1$ dimensional column vector denoted by ${\bf x}_n$ with $d$ real-valued components known as features. Samples are represented in homogeneous form with the first component equal to $1$: $x_0=1$. Vectors are bold-faced. The associated label is denoted $y_n$ and can take only two values: $+1$ or $-1$.
$$
\mathcal{D} = \lbrace ({\bf x}_1, y_1), ({\bf x}_2, y_2), ..., ({\bf x}_N, y_N) \rbrace \\
{\bf x}_n = \begin{bmatrix} 1 & x_1 & ... & x_d \end{bmatrix}^T
$$
## Learning Algorithm
The learning algorithm is how we search the set of possible hypotheses (hypothesis space $\mathcal{H}$) for the best parameterization (in this case the weight vector ${\bf w}$). This search is an optimization problem looking for the hypothesis that optimizes an error measure.
There is no sophisticated closed-form solution like there is for least-squares linear regression, so we will use gradient descent instead. Specifically, we will use batch gradient descent, which calculates the gradient from all data points in the data set.
Luckily, our "cross-entropy" error measure is convex so there is only one minimum. Thus the minimum we arrive at is the global minimum.
Gradient descent is a general method and requires twice differentiability for smoothness. It updates the parameters using a first-order approximation of the error surface.
$$
{\bf w}_{i+1} = {\bf w}_i - \eta \; \nabla E_\text{in}({\bf w}_i)
$$
To learn we're going to minimize the following error measure using batch gradient descent.
$$
e(h({\bf x}_n), y_n) = \ln \left( 1+e^{-y_n \; {\bf w}^T {\bf x}_n} \right) \\
E_\text{in}({\bf w}) = \frac{1}{N} \sum_{n=1}^{N} e(h({\bf x}_n), y_n) = \frac{1}{N} \sum_{n=1}^{N} \ln \left( 1+e^{-y_n \; {\bf w}^T {\bf x}_n} \right)
$$
We'll need the derivative of the point loss function and possibly some abuse of notation.
$$
\frac{d}{d{\bf w}} e(h({\bf x}_n), y_n)
= \frac{-y_n \; {\bf x}_n \; e^{-y_n {\bf w}^T {\bf x}_n}}{1 + e^{-y_n {\bf w}^T {\bf x}_n}}
= -\frac{y_n \; {\bf x}_n}{1 + e^{y_n {\bf w}^T {\bf x}_n}}
$$
With the point loss derivative we can determine the gradient of the in-sample error:
$$
\begin{align}
\nabla E_\text{in}({\bf w})
&= \frac{d}{d{\bf w}} \left[ \frac{1}{N} \sum_{n=1}^N e(h({\bf x}_n), y_n) \right] \\
&= \frac{1}{N} \sum_{n=1}^N \frac{d}{d{\bf w}} e(h({\bf x}_n), y_n) \\
&= \frac{1}{N} \sum_{n=1}^N \left( - \frac{y_n \; {\bf x}_n}{1 + e^{y_n {\bf w}^T {\bf x}_n}} \right) \\
&= - \frac{1}{N} \sum_{n=1}^N \frac{y_n \; {\bf x}_n}{1 + e^{y_n {\bf w}^T {\bf x}_n}} \\
\end{align}
$$
Our weight update rule per batch gradient descent becomes
$$
\begin{align}
{\bf w}_{i+1} &= {\bf w}_i - \eta \; \nabla E_\text{in}({\bf w}_i) \\
&= {\bf w}_i - \eta \; \left( - \frac{1}{N} \sum_{n=1}^N \frac{y_n \; {\bf x}_n}{1 + e^{y_n {\bf w}_i^T {\bf x}_n}} \right) \\
&= {\bf w}_i + \eta \; \left( \frac{1}{N} \sum_{n=1}^N \frac{y_n \; {\bf x}_n}{1 + e^{y_n {\bf w}_i^T {\bf x}_n}} \right) \\
\end{align}
$$
where $\eta$ is our learning rate.
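Before switching to a library, here is a minimal numpy sketch of exactly this update rule (labels in $\{-1,+1\}$, features in homogeneous form); the synthetic data at the end is assumed, and is only there to show the call:
```python
import numpy as np

def fit_logistic_gd(X, y, eta=0.1, n_iters=2000):
    """Batch gradient descent for the cross-entropy error above.
    X : (N, d+1) array in homogeneous form (first column of ones)
    y : (N,) array of labels in {-1, +1}
    """
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        # grad E_in(w) = -1/N sum_n  y_n x_n / (1 + exp(y_n w.x_n))
        grad = -np.mean((y[:, None] * X) / (1.0 + np.exp(y * (X @ w)))[:, None], axis=0)
        w -= eta * grad
    return w

# tiny synthetic example (assumed data, for illustration only)
rng = np.random.default_rng(0)
N = 200
X = np.column_stack([np.ones(N), rng.normal(size=(N, 2))])
y = np.where(X[:, 1] + X[:, 2] > 0, 1, -1)
w_hat = fit_logistic_gd(X, y)
```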
### Enough with the theory, now jump to the implementation. We will look at 2 libraries for the same.
## Logistic Regression with statsmodels
We'll be using the same dataset as UCLA's Logit Regression tutorial to explore logistic regression in Python. Our goal will be to identify the various factors that may influence admission into graduate school.
The dataset contains several columns which we can use as predictor variables:
* gpa
* gre score
* rank or prestige of an applicant's undergraduate alma mater
* The fourth column, admit, is our binary target variable. It indicates whether or not a candidate was admitted.
```python
import numpy as np
import pandas as pd
import pylab as pl
import statsmodels.api as sm
```
```python
df = pd.read_csv("binary.csv")
#df = pd.read_csv("https://stats.idre.ucla.edu/stat/data/binary.csv")
```
```python
df.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>admit</th>
<th>gre</th>
<th>gpa</th>
<th>rank</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>380</td>
<td>3.61</td>
<td>3</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>660</td>
<td>3.67</td>
<td>3</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>800</td>
<td>4.00</td>
<td>1</td>
</tr>
<tr>
<th>3</th>
<td>1</td>
<td>640</td>
<td>3.19</td>
<td>4</td>
</tr>
<tr>
<th>4</th>
<td>0</td>
<td>520</td>
<td>2.93</td>
<td>4</td>
</tr>
</tbody>
</table>
</div>
```python
#RENAMING THE RANK COLUMN
df.columns = ["admit", "gre", "gpa", "prestige"]
df.head()
#df.shape
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>admit</th>
<th>gre</th>
<th>gpa</th>
<th>prestige</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>380</td>
<td>3.61</td>
<td>3</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>660</td>
<td>3.67</td>
<td>3</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>800</td>
<td>4.00</td>
<td>1</td>
</tr>
<tr>
<th>3</th>
<td>1</td>
<td>640</td>
<td>3.19</td>
<td>4</td>
</tr>
<tr>
<th>4</th>
<td>0</td>
<td>520</td>
<td>2.93</td>
<td>4</td>
</tr>
</tbody>
</table>
</div>
### Summary Statistics & Looking at the data
Now that we've got everything loaded into Python and named appropriately, let's take a look at the data. We can use the pandas `describe` function to get a summarized view of the data. There's also a function for calculating the standard deviation, `std`.
A feature I really like in pandas is the pivot_table/crosstab aggregations. crosstab makes it really easy to do multidimensional frequency tables. You might want to play around with this to look at different cuts of the data.
```python
pd.crosstab(df["admit"],df["prestige"],rownames = ["admit"])
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th>prestige</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
</tr>
<tr>
<th>admit</th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>28</td>
<td>97</td>
<td>93</td>
<td>55</td>
</tr>
<tr>
<th>1</th>
<td>33</td>
<td>54</td>
<td>28</td>
<td>12</td>
</tr>
</tbody>
</table>
</div>
```python
df.hist()
pl.show()
```
```python
```
### dummy variables
pandas gives you a great deal of control over how categorical variables can be represented. We're going to dummify the "prestige" column using get_dummies.
get_dummies creates a new DataFrame with binary indicator variables for each category/option in the column specified. In this case, prestige has four levels: 1, 2, 3 and 4 (1 being most prestigious). When we call get_dummies, we get a dataframe with four columns, each of which describes one of those levels.
```python
dummy_ranks = pd.get_dummies(df["prestige"], prefix = "prestige")
```
```python
dummy_ranks.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>prestige_1</th>
<th>prestige_2</th>
<th>prestige_3</th>
<th>prestige_4</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<th>1</th>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>3</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<th>4</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
```python
# CREATING A CLEAN DATA FRAME
cols_to_keep = ["admit", "gre", "gpa"]
data = df[cols_to_keep].join(dummy_ranks.loc[:,"prestige_2":])
data.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>admit</th>
<th>gre</th>
<th>gpa</th>
<th>prestige_2</th>
<th>prestige_3</th>
<th>prestige_4</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>380</td>
<td>3.61</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>660</td>
<td>3.67</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>800</td>
<td>4.00</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>3</th>
<td>1</td>
<td>640</td>
<td>3.19</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<th>4</th>
<td>0</td>
<td>520</td>
<td>2.93</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
Once that's done, we merge the new dummy columns with the original dataset and get rid of the prestige column which we no longer need.
Lastly we're going to add a constant term for our logistic regression. The statsmodels function we would use requires intercepts/constants to be specified explicitly.
### Performing the regression
Actually doing the logistic regression is quite simple. Specify the column containing the variable you're trying to predict followed by the columns that the model should use to make the prediction.
In our case we'll be predicting the admit column using gre, gpa, and the prestige dummy variables prestige_2, prestige_3 and prestige_4. We're going to treat prestige_1 as our baseline and exclude it from our fit. This is done to prevent multicollinearity, or the dummy variable trap caused by including a dummy variable for every single category.
```python
#ADDING THE INTERCEPT MANUALLY
data["intercept"] = 1.0
data.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>admit</th>
<th>gre</th>
<th>gpa</th>
<th>prestige_2</th>
<th>prestige_3</th>
<th>prestige_4</th>
<th>intercept</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>380</td>
<td>3.61</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1.0</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>660</td>
<td>3.67</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1.0</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>800</td>
<td>4.00</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1.0</td>
</tr>
<tr>
<th>3</th>
<td>1</td>
<td>640</td>
<td>3.19</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>1.0</td>
</tr>
<tr>
<th>4</th>
<td>0</td>
<td>520</td>
<td>2.93</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>1.0</td>
</tr>
</tbody>
</table>
</div>
```python
train_cols = data.columns[1:]
logit = sm.Logit(data["admit"], data[train_cols])
```
```python
results = logit.fit()
```
Optimization terminated successfully.
Current function value: 0.573147
Iterations 6
Since we're doing a logistic regression, we're going to use the statsmodels Logit function. For details on other models available in statsmodels, check out their docs here.
### Interpreting the results
One of my favorite parts about statsmodels is the summary output it gives. If you're coming from R, I think you'll like the output and find it very familiar too.
```python
ironman = results.predict([800,4,0,0,0,1.0])
```
```python
print(ironman)
```
[0.73840825]
```python
results.summary()
```
<table class="simpletable">
<caption>Logit Regression Results</caption>
<tr>
<th>Dep. Variable:</th> <td>admit</td> <th> No. Observations: </th> <td> 400</td>
</tr>
<tr>
<th>Model:</th> <td>Logit</td> <th> Df Residuals: </th> <td> 394</td>
</tr>
<tr>
<th>Method:</th> <td>MLE</td> <th> Df Model: </th> <td> 5</td>
</tr>
<tr>
<th>Date:</th> <td>Fri, 20 Nov 2020</td> <th> Pseudo R-squ.: </th> <td>0.08292</td>
</tr>
<tr>
<th>Time:</th> <td>13:11:55</td> <th> Log-Likelihood: </th> <td> -229.26</td>
</tr>
<tr>
<th>converged:</th> <td>True</td> <th> LL-Null: </th> <td> -249.99</td>
</tr>
<tr>
<th>Covariance Type:</th> <td>nonrobust</td> <th> LLR p-value: </th> <td>7.578e-08</td>
</tr>
</table>
<table class="simpletable">
<tr>
<td></td> <th>coef</th> <th>std err</th> <th>z</th> <th>P>|z|</th> <th>[0.025</th> <th>0.975]</th>
</tr>
<tr>
<th>gre</th> <td> 0.0023</td> <td> 0.001</td> <td> 2.070</td> <td> 0.038</td> <td> 0.000</td> <td> 0.004</td>
</tr>
<tr>
<th>gpa</th> <td> 0.8040</td> <td> 0.332</td> <td> 2.423</td> <td> 0.015</td> <td> 0.154</td> <td> 1.454</td>
</tr>
<tr>
<th>prestige_2</th> <td> -0.6754</td> <td> 0.316</td> <td> -2.134</td> <td> 0.033</td> <td> -1.296</td> <td> -0.055</td>
</tr>
<tr>
<th>prestige_3</th> <td> -1.3402</td> <td> 0.345</td> <td> -3.881</td> <td> 0.000</td> <td> -2.017</td> <td> -0.663</td>
</tr>
<tr>
<th>prestige_4</th> <td> -1.5515</td> <td> 0.418</td> <td> -3.713</td> <td> 0.000</td> <td> -2.370</td> <td> -0.733</td>
</tr>
<tr>
<th>intercept</th> <td> -3.9900</td> <td> 1.140</td> <td> -3.500</td> <td> 0.000</td> <td> -6.224</td> <td> -1.756</td>
</tr>
</table>
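Because the coefficients live on the log-odds scale, a common next step is to exponentiate them (and their confidence intervals) to read them as odds ratios:
```python
# odds ratios and 95% confidence intervals
params = results.params
conf = results.conf_int()
conf['OR'] = params
conf.columns = ['2.5%', '97.5%', 'OR']
print(np.exp(conf))
```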
```python
```
| de9984065298edf92d580644c8987d8ee0ab15cf | 46,024 | ipynb | Jupyter Notebook | CODE_FILES/AiRobosoft/Machine Learning/Project_2/logistics_regression.ipynb | PrithaChakravarty/INTERNSOFT-codefiles-and-projects | 8ea83035ad0a33e80c8e40fe79b80202269599aa | [
"MIT"
]
| null | null | null | CODE_FILES/AiRobosoft/Machine Learning/Project_2/logistics_regression.ipynb | PrithaChakravarty/INTERNSOFT-codefiles-and-projects | 8ea83035ad0a33e80c8e40fe79b80202269599aa | [
"MIT"
]
| null | null | null | CODE_FILES/AiRobosoft/Machine Learning/Project_2/logistics_regression.ipynb | PrithaChakravarty/INTERNSOFT-codefiles-and-projects | 8ea83035ad0a33e80c8e40fe79b80202269599aa | [
"MIT"
]
| null | null | null | 44.640155 | 11,192 | 0.563597 | true | 6,329 | Qwen/Qwen-72B | 1. YES
2. YES | 0.912436 | 0.893309 | 0.815088 | __label__eng_Latn | 0.947697 | 0.732055 |
# Hybrid Monte Carlo
## Markov Models
In this notebook we analyse the evolution of HJM / Markov models. These models arise e.g. from Quasi Gaussian rates models or Markov Futures models.
We assume the model is driven by a $d$-dimensional state variable $x$ and a $d\times d$-dimensional auxiliary variable $y$.
The state variable $x$ follows a mean-reverting dynamic
$$
dx(t) = \left[ \theta(y,t) - \chi(t) x(t) \right] dt + \sigma(t)^\top dW(t).
$$
Model parameters are the $d\times d$-matrix of volatilities $\sigma(t)$ and the diagonal $d\times d$-matrix of mean reversion speed parameters $\chi(t)$. The $d$-dimensional drift vector $\theta(y,t)$ couples the state variables $x$ and the auxiliary variables $y$.
The auxiliary variables $y$ follow the convection equation
$$
dy(t) = \left[ \sigma(t)^\top \sigma(t) - \chi(t) y(t) - y(t) \chi(t) \right] dt.
$$
### Quasi-Gaussian Model
For Quasi-Gaussian rates models the drift term becomes
$$
\theta(y,t) = y(t) \mathbb{1}.
$$
### Markov Futures Model
For Markov Futures models the drift term becomes
$$
\theta(y,t) = \frac{1}{2} \left[ y(t)\chi(t) - \sigma(t)^\top \sigma(t) \right] \mathbb{1}.
$$
### Notation
We use the following functions to structure our notation:
$$
H(s,t) = \text{diag}\left\{ h_i(s,t)\right\},
\quad \text{with} \quad
h_i(s,t) = \exp\left\{ -\int_s^t \chi_i(u)du \right\}
$$
and
$$
G(s,t) = \int_s^t H(s,u)du = \int_s^t H(u,t)du.
$$
Functions $G$ and $H$ are related via $G'(s,t)=\partial G(s,t)/\partial t = H(s,t)$.
Moreover, we denote the instantaneous variance as
$$
V(t) = \sigma(t)^\top \sigma(t).
$$
## Evolution of State and Auxiliary Variables
In this section we derive a representation of the state and auxiliary variables. This representation is the basis for Monte-Carlo simulation.
We consider a time interval from $s$ to $t$ ($s<t$). To simplify notation and derivation we assume model parameters are constant on that interval. That is $\sigma(u)=\sigma$ and $\chi(u)=\chi$ for $u\in (s, t]$.
Note that even though model parameters are assumed (piece-wise) constant, the drift term $\theta(y,t)$ is modelled exactly. This is required because this quantity grows monotonically in time.
### Auxiliary Variables
The dynamics for $y$ are
$$
dy(t) = \left[ V(t) - \chi(t) y(t) - y(t) \chi(t) \right] dt.
$$
The solution to this linear system of ODEs is given by
$$
y(t) = H(s,t)y(s)H(s,t) +
\underbrace{\int_s^t H(u,t) V(u) H(u,t)du}_{M(s,t)}.
$$
For constant parameters the integral $M(s,t) = \left[ M_{i,j}(s,t) \right]$ is given as
$$
M_{i,j}(s,t) = V_{i,j} \frac{1 - e^{-(\chi_i + \chi_j)(t-s)}}{\chi_i + \chi_j}.
$$
The elements of the first term $H(s,t)y(s)H(s,t)$ are $y_{i,j}(s)e^{-(\chi_i + \chi_j)(t-s)}$. This yields a representation of the elements $y_{i,j}(t)$ which we will use later on.
We get the representation of the auxiliary variable elements as
\begin{align}
y_{i,j}(t) &= y_{i,j}(s)e^{-(\chi_i + \chi_j)(t-s)} +
V_{i,j} \frac{1 - e^{-(\chi_i + \chi_j)(t-s)}}{\chi_i + \chi_j} \\
&= \left[ y_{i,j}(s) - \frac{V_{i,j}}{\chi_i + \chi_j} \right]
e^{-(\chi_i + \chi_j)(t-s)} +
\frac{V_{i,j}}{\chi_i + \chi_j}. \\
&= a_{i,j} e^{-(\chi_i + \chi_j)(t-s)} + b_{i,j}
\end{align}
with $b_{i,j} = V_{i,j} / (\chi_i + \chi_j)$ and $a_{i,j} = y_{i,j}(s) - b_{i,j}$.
Substituting $B = \left[ b_{i,j} \right]_{i,j}$ we get the matrix representation $A = y(s)-B$ and
$$
y(t) = H(s,t) A H(s,t) + B.
$$
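As an illustration (not part of any library code), the update $y(s)\to y(t)$ can be written in a few lines of numpy. The function below is a sketch with names of our choosing; `sigma` is the $d\times d$ volatility matrix $\sigma$, `chi` the vector of mean reversion speeds, and we assume $\chi_i+\chi_j>0$ so that $B$ is well defined.
```python
import numpy as np

def evolve_y(y_s, sigma, chi, dt):
    """Closed-form y(t) = H(s,t) A H(s,t) + B for constant sigma, chi on (s,t]."""
    V = sigma.T @ sigma                      # instantaneous variance V = sigma^T sigma
    B = V / (chi[:, None] + chi[None, :])    # b_ij = V_ij / (chi_i + chi_j)
    A = y_s - B                              # A = y(s) - B
    H = np.diag(np.exp(-chi * dt))           # H(s,t) = diag{ exp(-chi_i (t-s)) }
    return H @ A @ H + B
```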
### State Variable
We have
$$
dx(t) = \left[ \theta(y,t) - \chi(t) x(t) \right] dt + \sigma(t)^\top dW(t).
$$
The solution is given as
$$
x(t) = H(s,t) \left[ x(s) + \int_s^t H(s,u)^{-1}
\left[ \theta(y,u) du + \sigma(u)^\top dW(u) \right] \right].
$$
Covariance conditional on time $s$ becomes
$$
\text{Cov}_s \left[x(t)\right] = \int_s^t H(u,t) \sigma(u)^\top \sigma(u) H(u,t) du.
$$
It turns out that
$$
\text{Cov}_s \left[x(t)\right] = \int_s^t H(u,t) V(u) H(u,t) du = M(s,t).
$$
Conditional expectation is given as
$$
\mathbb{E}_s\left[x(t)\right] = H(s,t) x(s) + \int_s^t H(u,t) \theta(y,u) du.
$$
For general drift functions $\theta(y,t)$ and with the representation $y(t) = H(s,t) A H(s,t) + B$ we can formulate $f(u) = H(u,t) \theta\left(y(u),u\right)$. Then we can apply standard quadrature methods to calculate $\int_s^t f(u) du$. This typically works well since functions involved are smooth.
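As a sketch of this quadrature step (function and argument names are ours; constant $\chi$ on $(s,t]$ is assumed, and `theta` and `y_of_u` are user-supplied callables returning the drift vector and the auxiliary variable at time $u$):
```python
import numpy as np

def drift_integral(theta, y_of_u, chi, s, t, order=8):
    """Approximate int_s^t H(u,t) theta(y(u), u) du by Gauss-Legendre quadrature."""
    z, w = np.polynomial.legendre.leggauss(order)
    u = 0.5 * (t - s) * z + 0.5 * (t + s)            # nodes mapped to [s, t]
    w = 0.5 * (t - s) * w
    result = np.zeros(len(chi))
    for ui, wi in zip(u, w):
        H_ut = np.diag(np.exp(-chi * (t - ui)))      # H(u,t) for constant chi
        result += wi * (H_ut @ theta(y_of_u(ui), ui))
    return result
```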
### Integrated Auxiliary Variable
The conditional expectation of the state variable requires calculation of $\int_s^t H(u,t) \theta(y,u) du$. In relevant applications the drift function $\theta(y,u)$ is an affine function of $y(u)$. In such situations we can calculate the integral.
As an intermediate step we calculate $\int_s^t H(u,t) y(u) du$. Using $y(u)=H(s,u) A H(s,u) + B$ yields
\begin{align}
\int_s^t H(u,t) y(u) du
&= \int_s^t H(u,t) \left[ H(s,u) A H(s,u) + B \right] du \\
&= \int_s^t H(s,t) A H(s,u) du + \int_s^t H(u,t)B du \\
&= H(s,t) A \int_s^t H(s,u) du + \int_s^t H(u,t) du B \\
&= H(s,t) A G(s,t) + G(s,t) B.
\end{align}
We can also substitute back $A = y(s)-B$. This yields
\begin{align}
\int_s^t H(u,t) y(u) du
&= H(s,t) \left[ y(s) - B \right] G(s,t) + G(s,t) B \\
&= H(s,t) y(s) G(s,t) - H(s,t) B G(s,t) + G(s,t) B \\
&= H(s,t) y(s) G(s,t) + \left[G(s,t) B - H(s,t) B G(s,t) \right].
\end{align}
We have a closer look at the term $C = \left[G(s,t) B - H(s,t) B G(s,t) \right]$. The elements $C_{i,j}$ are
\begin{align*}
C_{i,j} &= \frac{1-e^{-\chi_i(t-s)}}{\chi_i} \frac{V_{i,j}}{\chi_i + \chi_j} -
e^{-\chi_i(t-s)} \frac{V_{i,j}}{\chi_i + \chi_j} \frac{1-e^{-\chi_j(t-s)}}{\chi_j} \\
&= \frac{V_{i,j}}{\chi_i + \chi_j} \left[
\frac{1-e^{-\chi_i(t-s)}}{\chi_i} - e^{-\chi_i(t-s)} \frac{1-e^{-\chi_j(t-s)}}{\chi_j}
\right].
\end{align*}
For the diagonal elements $C_{i,i}$ we further get
\begin{align*}
C_{i,i} &= \frac{V_{i,i}}{2 \chi_i} \left[
\frac{1-e^{-\chi_i(t-s)}}{\chi_i} - e^{-\chi_i(t-s)} \frac{1-e^{-\chi_i(t-s)}}{\chi_i} \right] \\
&= \frac{V_{i,i}}{2 \chi_i} \frac{1-e^{-\chi_i(t-s)}}{\chi_i} \left[
1 - e^{-\chi_i(t-s)} \right] \\
&= \frac{V_{i,i}}{2} \left[ \frac{1-e^{-\chi_i(t-s)}}{\chi_i} \right]^2.
\end{align*}
### Quasi Gaussian Model Drift and Expectation
The drift term in Quasi Gaussian rates models is just $\theta(y,t) = y(t) \mathbb{1}$. This yields the conditional expectation
$$
\mathbb{E}_s\left[x(t)\right] = H(s,t) \left[ x(s) + A G(s,t) \mathbb{1} \right] + G(s,t) B \mathbb{1}
$$
or equivalently
$$
\mathbb{E}_s\left[x(t)\right] = H(s,t) \left[ x(s) + y(s) G(s,t) \mathbb{1} \right] + \left[G(s,t) B - H(s,t) B G(s,t) \right] \mathbb{1}.
$$
### Futures Model Drift and Expectation
For Futures Model the drift is
$\theta(y,t) = \frac{1}{2} \left[ y(t)\chi(t) - \sigma(t)^\top \sigma(t) \right] \mathbb{1}$.
We can use linearity of the integral operator and get
\begin{align}
\int_s^t H(u,t) \theta(y,u) du
&= \int_s^t H(u,t) \frac{1}{2} \left[ y(u)\chi(u) - \sigma(u)^\top \sigma(u) \right] \mathbb{1} du \\
&= \frac{1}{2} \int_s^t H(u,t) \left[ y(u)\chi - V \right] \mathbb{1} du \\
&= \frac{1}{2} \left[ \int_s^t H(u,t) y(u) du \, \chi\mathbb{1} - \int_s^t H(u,t) du \, V \mathbb{1} \right] \\
&= \frac{1}{2} \left\{ \left[ H(s,t) A G(s,t) + G(s,t) B \right] \chi\mathbb{1} - G(s,t) V \mathbb{1} \right\}.
\end{align}
This gives the conditional expectation of the state variable as
$$
\mathbb{E}_s\left[x(t)\right] = H(s,t) x(s) +
\frac{1}{2} \left\{ \left[ H(s,t) A G(s,t) + G(s,t) B\right] \chi - G(s,t) V \right\}\mathbb{1}.
$$
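A corresponding numpy sketch of this conditional expectation (again with made-up names; constant parameters on $(s,t]$ and strictly positive $\chi_i$ are assumed so that $G$ and $B$ are well defined):
```python
import numpy as np

def expectation_x_futures(x_s, y_s, sigma, chi, dt):
    """E_s[x(t)] = H x(s) + 0.5 {[H A G + G B] chi - G V} 1 for the futures-model drift."""
    V = sigma.T @ sigma
    B = V / (chi[:, None] + chi[None, :])
    A = y_s - B
    H = np.diag(np.exp(-chi * dt))                  # H(s,t)
    G = np.diag((1.0 - np.exp(-chi * dt)) / chi)    # G(s,t), assumes chi_i > 0
    one = np.ones(len(chi))
    drift = 0.5 * ((H @ A @ G + G @ B) @ np.diag(chi) - G @ V) @ one
    return H @ x_s + drift
```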
## Futures Reconstruction
Using the Markov model representation, the futures price $F(t,T)$ can be reconstructed from the state variables $x(t)$ and auxiliary variables $y(t)$ via
$$
F(t,T) = F(0,T) * e^{X(t,T)}
$$
with
$$
X(t,T) = \mathbb{1}^\top H(t,T) \left[ x(t) +
\frac{1}{2} y(t) \left(I - H(t,T)\right)\mathbb{1}
\right].
$$
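A standalone sketch of this reconstruction (it mirrors the formula above and is not the `MarkovFutureModel.futurePrice` implementation itself):
```python
import numpy as np

def future_price(F0T, x_t, y_t, chi, dT):
    """F(t,T) = F(0,T) * exp(X) with X = 1' H(t,T) [x(t) + 0.5 y(t) (I - H(t,T)) 1]."""
    H = np.diag(np.exp(-chi * dT))          # H(t,T)
    one = np.ones(len(chi))
    X = one @ (H @ (x_t + 0.5 * y_t @ ((np.eye(len(chi)) - H) @ one)))
    return F0T * np.exp(X)
```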
## Model Test
```python
import sys
sys.path.append('../') # make python find our modules
import numpy as np
from hybmc.models.MarkovFutureModel import MarkovFutureModel
```
We set up a simple 2-factor model.
```python
d = 2
times = np.array([0.0])
sigmaT = np.zeros([1,2,2])
sigmaT[0,:,:] = np.array([[ 0.10, 0.15 ],
[ 0.20, 0.25 ]])
chi = np.array([0.10, 0.20])
#
model = MarkovFutureModel(None,d,times,sigmaT,chi) # future curve is not yet implemented
```
Using the model we simulate the state variables.
```python
from hybmc.simulations.McSimulation import McSimulation
times = np.linspace(0.0, 5.0, 2)
nPaths = 2**13
seed = 314159265359
sim = McSimulation(model,times,nPaths,seed,False,showProgress=True)
```
Now we can calculate futures prices and analyse the terminal distribution.
```python
idx = -1
t = times[idx]
dT = 1.0
T = t + dT
F = np.array([ model.futurePrice(t,T,X,None) for X in sim.X[:,idx,:] ])
Fav = np.mean(F)
sigma = np.std(F) / np.sqrt(t)
print('t: %6.2f, T: %6.2f, F: %6.4f, sigma: %6.4f' % (t,T,Fav,sigma) )
```
## Andersen Parametrisation
We test another parametrisation taken from Andersen 2008, sec. 7.6.
```python
def sigmaT(sigma_0, sigma_infty, rho_infty):
h1 = -sigma_infty + rho_infty * sigma_0
h2 = sigma_0 * np.sqrt(1.0 - rho_infty**2)
hi = sigma_infty
return np.array([ [h1, h2], [hi, 0.0] ])
def chi(kappa):
return np.array([ kappa, 0.0 ])
```
```python
# Andersen example
kappa = 1.35
sigma_0 = 0.50
sigma_infty = 0.17
rho_infty = 0.50
sigmaT_ = sigmaT(sigma_0,sigma_infty,rho_infty)
chi_ = chi(kappa)
print(sigmaT_)
print(chi_)
```
```python
d = 2
times = np.array([0.0])
model = MarkovFutureModel(None,d,times,np.array([ sigmaT_ ]),chi_)
```
```python
times = np.linspace(0.0, 5.0, 2)
nPaths = 2**13
seed = 314159265359
sim = McSimulation(model,times,nPaths,seed,False,showProgress=True)
```
```python
idx = -1
t = times[idx]
dT = 1.0
T = t + dT
F = np.array([ model.futurePrice(t,T,X,None) for X in sim.X[:,idx,:] ])
Fav = np.mean(F)
sigma = np.std(F) / np.sqrt(t)
print('t: %6.2f, T: %6.2f, F: %6.4f, sigma: %6.4f' % (t,T,Fav,sigma) )
```
```python
```
| 5909ab2af93f9caf4d2bbebdcfb8a46f906af03a | 15,457 | ipynb | Jupyter Notebook | doc/MarkovModels.ipynb | sschlenkrich/HybridMonteCarlo | 72f54aa4bcd742430462b27b72d70369c01f9ac4 | [
"MIT"
]
| 3 | 2021-08-18T18:34:41.000Z | 2021-12-24T07:05:19.000Z | doc/MarkovModels.ipynb | sschlenkrich/HybridMonteCarlo | 72f54aa4bcd742430462b27b72d70369c01f9ac4 | [
"MIT"
]
| null | null | null | doc/MarkovModels.ipynb | sschlenkrich/HybridMonteCarlo | 72f54aa4bcd742430462b27b72d70369c01f9ac4 | [
"MIT"
]
| 3 | 2021-01-31T11:41:19.000Z | 2022-03-25T19:51:20.000Z | 33.748908 | 312 | 0.490975 | true | 3,794 | Qwen/Qwen-72B | 1. YES
2. YES | 0.914901 | 0.803174 | 0.734824 | __label__eng_Latn | 0.643821 | 0.545575 |
<!-- dom:TITLE: Shell-model project for Nuclear Talent course -->
# Shell-model project for Nuclear Talent course
<!-- dom:AUTHOR: European Center for Theoretical Studies in Nuclear Physics and Related Areas, Trento, Italy, 3-21 July, 2017 -->
<!-- Author: -->
**European Center for Theoretical Studies in Nuclear Physics and Related Areas, Trento, Italy, 3-21 July, 2017**
Date: **This text will be updated with more material**
## Introduction
The project is divided in three main parts. The first part deals with a simple pairing model and the development of a shell-model program related to this model. This program can then be developed into a more general shell-model program that allows you to study general nuclear structure problems. That is the second part of the project. In parallel, we will also use NushellX in order to perform more advanced shell-model studies and compare the results obtained with your own shell-model code to those of NushellX. We expect you to form working groups consisting of typically three (or more) participants. Every group should establish its own Github or Gitlab repository for the project.
## Part 1, pairing problem
In the first part of the project we will thus work with a simplified Hamiltonian consisting of a one-body operator and a so-called
pairing interaction term. It is a model which to a large extent mimics some central features of
atomic nuclei, certain atoms and systems which exhibit superfluidity or superconductivity.
Pairing plays a central role in nuclear physics, in particular, for identical particles it makes up large fractions of the correlations among particles. The partial wave $^{1}S_0$ of the nucleon-nucleon force plays a central role in setting up pairing correlations in nuclei. Without this particular partial wave, the $J=0$ ground state spin assignment for many nuclei with even numbers of particles would not be possible.
We define first the Hamiltonian, with a definition of the model space and
the single-particle basis. Thereafter, we present the various steps which are needed to develop a shell-model program for studying the pairing problem.
The Hamiltonian acting in the complete Hilbert space (usually infinite
dimensional) consists of an unperturbed one-body part, $\hat{H}_0$,
and a perturbation $\hat{H}_I$.
We limit ourselves to at most two-body interactions, our Hamiltonian is
then represented by the following operators
<!-- Equation labels as ordinary links -->
<div id="eq:hamiltonian"></div>
$$
\begin{equation}
\hat{H} = \hat{H}_0 +\hat{H}_I=\sum_{pq}\langle p |h_0|q\rangle a_{p}^{\dagger}a_{q} +\frac{1}{4}\sum_{pqrs}\langle pq| V|rs\rangle a_{p}^{\dagger}a_{q}^{\dagger}a_{s}a_{r},
\label{eq:hamiltonian} \tag{1}
\end{equation}
$$
where $a_{p}^{\dagger}$ and $a_{q}$ etc are standard fermion creation and annihilation operators, respectively,
and $pqrs$ represent all possible single-particle quantum numbers.
The full single-particle space is defined by the completeness relation
$\hat{1} = \sum_{p =1}^{\infty}|p \rangle \langle p|$.
In our calculations we will let the single-particle states $|p\rangle$
be eigenfunctions of the one-particle operator $\hat{h}_0$.
The above Hamiltonian
acts in turn on various many-body Slater determinants constructed from the single-particle basis defined by the one-body
operator $\hat{h}_0$.
Our specific model consists of only $2$ doubly-degenerate and equally spaced
single-particle levels labeled by $p=1,2,\dots$ and spin $\sigma=\pm
1$.
In Eq. ([eq:hamiltonian](#eq:hamiltonian)) the labels $pqrs$ could also include spin $\sigma$. From now and for the rest of this project, labels like $pqrs$ represent the states without spin. The spin quantum numbers need to be accounted for explicitely.
We write
the Hamiltonian as
$$
\hat{H} = \hat{H}_0 +\hat{H}_I=\hat{H}_0 + \hat{V} ,
$$
where
$$
\hat{H}_0=\xi\sum_{p\sigma}(p-1)a_{p\sigma}^{\dagger}a_{p\sigma}.
$$
Here, $H_0$ is the unperturbed Hamiltonian with a spacing between
successive single-particle states given by $\xi$, which we will set to
a constant value $\xi=1$ without loss of generality.
The two-body
operator $\hat{V}$ has one term only. It represents the
pairing contribution and carries a constant strength $g$
and is given by
$$
\langle q+q-| V|s+s-\rangle = -g
$$
where $g$ is a constant. The above labeling means that for a general matrix element
$\langle pq| V|rs\rangle$ we require that the states $p$ and $q$ (and $r$ and $s$) carry the same
quantum number but have opposite spins. The two spin values are
$\sigma = \pm 1$.
When setting up the Hamiltonian matrix you need to figure out how to make the two-body interaction antisymmetric.
The variables $\sigma=\pm$ represent the two possible spin values. The
interaction can only couple pairs and therefore excites only two
particles at a time.
In our model we have kept both the interaction strength and the single-particle levels as constants.
In a realistic system like the atomic nucleus this is not the case.
The unperturbed Hamiltonian $\hat{H}_0$ and $\hat{V}$ commute
with the spin projection $\hat{S}_z$ and the total spin
$\hat{S}^2$.
This is an important feature of our system that allows us to block-diagonalize
the full Hamiltonian. In this project we will focus only on total spin $S=0$, this case is normally called the no-broken pair case.
### Part 1a: Paper and pencil gym while we wait for the more serious stuff
Show that the
unperturbed Hamiltonian $\hat{H}_0$ and $\hat{V}$ commute
with both the spin projection $\hat{S}_z$ and the total spin
$\hat{S}^2$, given by
$$
\hat{S}_z := \frac{1}{2}\sum_{p\sigma} \sigma a^{\dagger}_{p\sigma}a_{p\sigma}
$$
and
$$
\hat{S}^2 := \hat{S}_z^2 + \frac{1}{2}(\hat{S}_+\hat{S}_- +
\hat{S}_-\hat{S}_+),
$$
where
$$
\hat{S}_\pm := \sum_{p} a^{\dagger}_{p\pm} a_{p\mp}.
$$
This is an important feature of our system that allows us to block-diagonalize
the full Hamiltonian. We will focus on total spin $S=0$.
In this case, it is convenient to define the so-called pair creation and pair
annihilation operators
$$
\hat{P}^{+}_p = a^{\dagger}_{p+}a^{\dagger}_{p-},
$$
and
$$
\hat{P}^{-}_p = a_{p-}a_{p+},
$$
respectively.
The Hamiltonian (with $\xi=1$) we will use can be written as
$$
\hat{H}=\sum_{p\sigma}(p-1)a_{p\sigma}^{\dagger}a_{p\sigma}
-g\sum_{pq}\hat{P}^{+}_p\hat{P}^{-}_q.
$$
Show that Hamiltonian commutes with the product of the pair creation and annihilation operators.
This model corresponds to a system with no broken pairs. This means that the Hamiltonian can only link two-particle states in so-called spin-reversed states.
### Part 1b: Simpler case
Assume now that the effective Hilbert space consists only of the two lowest single-particle states and that we have two particles only.
Set up the possible two-particle configurations when we have only two single-particle states, that is $p=1$ and $p=2$.
Construct thereafter the Hamiltonian matrix using second quantization and for example Wick's theorem
for a system with no broken pairs and spin $S=0$ (with projection $S_z=0$) for the case of the two lowest single-particle levels and two particles only. This gives you a
$2\times 2$ matrix to be diagonalized.
Find the eigenvalues by diagonalizing the Hamiltonian matrix.
Vary your results for selected values of $g\in [-1,1]$ and comment your results.
### Part 1c: Setting up the Hamiltonian matrix
Construct thereafter the Hamiltonian matrix for a system with no broken pairs and spin $S=0$ for the case of the four lowest single-particle levels. Our system consists of four particles only.
Our single-particle space consists of only the four lowest levels
$p=1,2,3,4$. You need to set up all possible Slater determinants and the Hamiltonian matrix using second quantization and
find all eigenvalues by diagonalizing the Hamiltonian matrix.
Vary your results for values of $g\in [-1,1]$. Your Hamiltonian matrix is a $6\times 6$ matrix.
These results will serve as a benchmark for the construction of our shell-model program.
We refer to this as the exact results. Comment the behavior of the ground state as function of $g$.
### Part 1d: Diagonalizing the Hamiltonian matrix
Our next step is to develop a code which sets up the above Hamiltonian matrices for two and four particles in 2 and 4 single-particle states (the same as what you did in exercises b) and c)) and obtain the eigenvalues.
To achieve this you should
* Decide whether you want to read from file the single-particle data and the matrix elements in $m$-scheme, or set them up internally in your code. The latter is the simplest possibility for the pairing model, whereas the first option gives you a more general code which can be extended to the more realistic cases discussed in the second part.
* Based on the single-particle basis, write a function which sets up all possible Slater determinants which have total $M=0$. Test that this function reproduces the cases in b) and c). If you make this function more general, it can then be reused for say a shell-model calculation of $sd$-shell nuclei in the second part.
* Use the Slater determinant basis from the previous step to set up the Hamiltonian matrix.
* With the Hamiltonian matrix, you can finally diagonalize the matrix and obtain the final eigenvalues and test against the results of b) and c).
Codes to diagonalize in C++ or Fortran can be provided. For Python, numpy contains eigenvalue solvers based on for example Householder's and Givens' algorithms. These are topics which we can discuss separately. The lecture slides contain a rather detailed recipe
on how to construct a Slater determinant basis and how to set up the Hamiltonian matrix to diagonalize.
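As one possible starting point (a sketch, not a required solution), the no-broken-pair basis and the corresponding Hamiltonian matrix can be set up directly in Python; all names below are ours:
```python
import itertools
import numpy as np

def pairing_hamiltonian(n_levels, n_pairs, g, xi=1.0):
    """Hamiltonian matrix in the S=0, no-broken-pair basis of the pairing model."""
    configs = list(itertools.combinations(range(1, n_levels + 1), n_pairs))
    H = np.zeros((len(configs), len(configs)))
    for a, ca in enumerate(configs):
        # single-particle part plus the diagonal pairing contribution -g * (number of pairs)
        H[a, a] = sum(2.0 * xi * (p - 1) for p in ca) - g * n_pairs
        for b, cb in enumerate(configs):
            # -g whenever the two configurations differ by moving exactly one pair
            if a != b and len(set(ca) & set(cb)) == n_pairs - 1:
                H[a, b] = -g
    return H, configs

H, basis = pairing_hamiltonian(n_levels=4, n_pairs=2, g=0.5)
print(basis)                      # the six pair configurations
print(np.linalg.eigvalsh(H))      # compare with your pencil-and-paper 6x6 result
```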
### Part 1e: Further benchmarks
In developing the code it is also useful to test against cases which have closed-form solutions. One obvious case is that of removing the
two-body interaction. Then we have only the single-particle energies.
For the case of degenerate single-particle orbits, that is a single value of the total single-particle angular momentum $j$ with degeneracy $\Omega=2j+1$, one can show that the ground state energy $E_0$ with $n$ particles is
$$
E_0= -\frac{g}{4}n\left(\Omega-n+2\right).
$$
Enlarge now your system to six and eight fermions and to $p=6$ and $p=8$ single-particle states, respectively. Run your program for a degenerate single-particle state with degeneracy $\Omega$ and test
against the exact result for the ground state. Introduce thereafter a finite single-particle spacing and study the results as you vary $g$, as done in b) and c). Comment your results.
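A quick consistency check against this closed form, reusing the sketch above (set `xi=0.0` to make the levels degenerate; here $n=2\,n_{\mathrm{pairs}}$ particles and $\Omega=2\,n_{\mathrm{levels}}$):
```python
import numpy as np

for n_levels, n_pairs in [(6, 3), (8, 4)]:
    g = 0.5
    H, _ = pairing_hamiltonian(n_levels, n_pairs, g=g, xi=0.0)
    n, Omega = 2 * n_pairs, 2 * n_levels
    E0_exact = -g / 4.0 * n * (Omega - n + 2)
    print(n_levels, n_pairs, np.linalg.eigvalsh(H)[0], E0_exact)
```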
## Part 2, building your own shell-model program
The way we will set up the Slater determinants here follows a simple odometric recipe. The way it is done in more professional codes is to use bitwise manipulations. The latter is a possible extension/challenge for those interested.
Part two of our project consists of developing your own shell-model code that can perform shell-model studies of the oxygen isotopes using standard
effective interactions (provided by us) using as example the $1s0d$ shell as model space.
You may also need to consider a bit representation and manipulation of Slater determinants and to implement
the Lanczos algorithm. These details will be discussed during our lectures.
* For the shell-model part you need now to read in your data from file, both the single-particle states and the effective interaction. We will provide you with the [USDB](https://journals.aps.org/prc/abstract/10.1103/PhysRevC.74.034315) ($1s0d$-shell effective interaction). If you have not done so, rewrite your code from the project so that you can read in this interaction.
* For the oxygen isotopes you can actually use your previous program and perform shell-model calculations of the oxygen isotopes using the $1s0d$ shell. Your results should agree with those obtained using Alex Brown's code Nushellx. Compute the spectra of the 3-4 lowest lying states of the oxygen isotopes from ${}^{18}\mbox{O}$ to ${}^{28}\mbox{O}$ and compare with data and the Alex Brown's results.
* The code we wrote in the project was however not very efficient, unless you already implemented the bit representation. As an optional challenge, you may now wish to consider the inclusion of a bit representation along the lines discussed in the lecture slides, and inserted below in the appendix here as well. Note that this part may quickly become time consuming. You may also consider implementing the Lanczos' algorithm as discussed by [Whitehead et al.](https://link.springer.com/chapter/10.1007%2F978-1-4615-8234-2_2)
* We will also provide you with effective $1s0d$-shell interactions derived using Coupled cluster theory and the aim is to reproduce the published results of [Jansen et al](https://journals.aps.org/prc/abstract/10.1103/PhysRevC.94.011301), see also [their Physical Review Letters article as well](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.113.142502). The derivation of these interactions, with pertaining codes will be discussed by Gustav in his lectures at this course.
## Part 3
The aim of this project is to study the structure of selected
low-lying states of the oxygen and fluorine isotopes towards their
respective dripline. These chains of isotopes have been studied
extensively during the last years, with many
efforts toward the understanding of their dripline properties,
involving studies of low-lying excited states and electromagnetic
transitions. For the oxygen isotopes, ${}^{24}\mbox{O}$ is the last
particle-stable nucleus, and for the fluorine isotopes ${}^{31}\mbox{F}$ is
assumed to be the last stable one. This part can also be used to benchmark your shell-model program from the second part.
The task here is to study these isotopic chains, extract excitation
energies and selected observables and compare with available data. To achieve this you will
need to use an effective interaction designed for the $1s0d$ shell
first and then, for nuclei beyond $A=24$ you may need to consider
degrees of freedom from the $1p0f$ shell. Since a full calculation in
these two major shells becomes quickly time-consuming for the fluorine
isotopes, you will need to truncate the number of particles which can
leave/occupy selected single-particle states. In the file which
contains the single-particle data, you can reduce the size of the
total space of Slater determinants by limiting the number of particles
which can populate the $1p0f$ shell. Here you could limit yourselves
to consider only the single-particle states $0f_{7/2}$ and $1p_{3/2}$.
* Test your effective interaction and setup of single-particle energies by computing the spectra of ${}^{18}\mbox{O}$ and ${}^{18}\mbox{F}$ in order to see that your $1s0d$-shell calculations where set up correctly. Compare the spectra with available data. Use the [USDA and USDB interactions](https://journals.aps.org/prc/abstract/10.1103/PhysRevC.74.034315) in the NushellX directory over interactions.
* Perform shell-model studies using Nushellx for all oxygen isotopes from ${}^{18}\mbox{O}$ to ${}^{28}\mbox{O}$, plot the lowest-lying 3-4 states and compare with data where available. Comment your results.
* Perform also shell-model studies using Nushellx for all fluorine isotopes from ${}^{18}\mbox{F}$ to ${}^{29}\mbox{F}$, plot the lowest-lying 3-4 states and compare with data where available. Comment your results. Try also to compute ${}^{30}\mbox{F}$ and ${}^{31}\mbox{F}$. Here you need to include the $0f_{7/2}$ and $1p_{3/2}$ single-particle states.
* See also if you can find excited states in ${}^{25}\mbox{O}$ and ${}^{25}\mbox{F}$ with negative parity.
* Use the monopole interactions to calculate the energies for the ground states of the four nuclei ${}^{22-25}\mbox{O}$ assuming a single Slater determinant for each. The USDB two-body matrix elements are assumed to scale like $(18/A)^{0.3}$.
* Compare the results in the last problem to the full $1s0d$ model space results and to experiment.
* Calculate the spectroscopic factors from the ground state of ${}^{23}\mbox{O}$ to all states in ${}^{22}\mbox{O}$ in the full $1s0d$ model space. Use the sum rule to obtain the orbital occupations in ${}^{23}\mbox{O}$ for $0d_{5/2}$, $1s_{1/2}$ and $0d_{3/2}$. Compare these to those given in the so-called xxx.occ file.
* Calculate the spectroscopic factors from the ground state of ${}^{23}\mbox{O}$ to all states in ${}^{24}\mbox{O}$ in the full $1s0d$ model space. Use the sum rule to obtain the number of holes in those three orbits in ${}^{23}\mbox{O}$. Compare these to those given in the xxx.occ file.
* Calculate the ${}^{23}\mbox{O}$ $\mbox{5/2}^+_1$ to ${}^{22}\mbox{O}$ $\mbox{0}^+_1$ spectroscopic factor. Explain why it is so small.
* Use the interaction wspot to obtain the single-particle decay width for the ${}^{23}\mbox{O}$ $\mbox{5/2}^{+}_{1}$ state using the experimental neutron separation energy as a constraint. Combine this with the result of the last problem to obtain its neutron decay width. Compare to experiment.
* Calculate the neutron decay width of the ${}^{25}\mbox{O}$ $\mbox{3/2}^{+}_{1}$ state and compare to experiment. Use the experimental neutron separation energy as a constraint.
* Calculate the gamma decay of ${}^{22}\mbox{O}$ for levels up to 6 MeV and compare to experiment. Calculate the B(E2) for Coulex to the $\mbox{2}^{+}_{1}$ state in ${}^{22}\mbox{O}$ and compare with experiment.
* Calculate the magnetic moment for the $\mbox{1/2}^+_1$ ground state of ${}^{23}\mbox{O}$ and compare to the single-particle (Schmidt) value.
* Calculate the Fermi ($F$) and Gamow-Teller ($GT$) beta decay of ${}^{22}\mbox{O}$. The experimental energy of the lowest $1^{+}$ state in ${}^{22}\mbox{F}$ is 1.627 MeV. You will need to put this in the xxx.beq file and rerun the beta program (see the end of the xxx.bat file for how to do this). Compare the summed $B(F)$ and $B(GT)$ values to that expected from the sum-rules. What fraction of the $GT$ sum-rule is in the transition to the lowest energy $1^{+}$ state?
# Appendix: Bit representation
A central step in the build-up of a shell-model code that is meant to tackle large dimensionalities
is the action of the Hamiltonian $\hat{H}$ on a
Slater determinant represented in second quantization as
$$
\vert \alpha_1\dots \alpha_n\rangle = a_{\alpha_1}^\dagger a_{\alpha_2}^\dagger \dots a_{\alpha_n}^\dagger \vert 0\rangle.
$$
The time consuming part stems from the action of the Hamiltonian
on the above determinant,
$$
\left(\sum_{\alpha\beta} \langle \alpha\vert \hat{t}+\hat{u}\vert \beta\rangle a_\alpha^\dagger a_\beta + \frac{1}{4} \sum_{\alpha\beta\gamma\delta}
\langle\alpha \beta\vert \hat{V}\vert \gamma \delta\rangle a_\alpha^\dagger a_\beta^\dagger a_\delta a_\gamma\right)a_{\alpha_1}^\dagger a_{\alpha_2}^\dagger \dots a_{\alpha_n}^\dagger \vert 0\rangle.
$$
A practically useful way to implement this action is to encode a Slater determinant as a bit pattern.
Assume that we have at our disposal $n$ different single-particle orbits
$\alpha_0,\alpha_1,\dots,\alpha_{n-1}$ and that we can distribute among these orbits $N\le n$ particles.
A Slater determinant can then be coded as an integer of $n$ bits. As an example, if we have $n=16$ single-particle states
$\alpha_0,\alpha_1,\dots,\alpha_{15}$ and $N=4$ fermions occupying the states $\alpha_3$, $\alpha_6$, $\alpha_{10}$ and $\alpha_{13}$
we could write this Slater determinant as
$$
\Phi_{\Lambda} = a_{\alpha_3}^\dagger a_{\alpha_6}^\dagger a_{\alpha_{10}}^\dagger a_{\alpha_{13}}^\dagger \vert 0 \rangle .
$$
The unoccupied single-particle states have bit value $0$ while the occupied ones are represented by bit state $1$.
In the binary notation we would write this 16 bits long integer as
$$
\begin{array}{cccccccccccccccc}
{\alpha_0}&{\alpha_1}&{\alpha_2}&{\alpha_3}&{\alpha_4}&{\alpha_5}&{\alpha_6}&{\alpha_7} & {\alpha_8} &{\alpha_9} & {\alpha_{10}} &{\alpha_{11}} &{\alpha_{12}} &{\alpha_{13}} &{\alpha_{14}} & {\alpha_{15}} \\
{0} & {0} &{0} &{1} &{0} &{0} &{1} &{0} &{0} &{0} &{1} &{0} &{0} &{1} &{0} & {0} \\
\end{array}
$$
which translates into the decimal number
$$
2^3+2^6+2^{10}+2^{13}=9288.
$$
We can thus encode a Slater determinant as a bit pattern.
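In Python the encoding is a one-liner (bit $p$ of an integer represents orbital $\alpha_p$):
```python
occupied = (3, 6, 10, 13)
word = sum(1 << p for p in occupied)
print(word)          # 9288, i.e. 2**3 + 2**6 + 2**10 + 2**13
```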
With $N$ particles that can be distributed over $n$ single-particle states, the total number of Slater determinants (thereby defining the dimensionality of the system) is
$$
\mathrm{dim}(\mathcal{H}) = \left(\begin{array}{c} n \\N\end{array}\right).
$$
The total number of bit patterns is $2^n$.
We assume again that we have at our disposal $n$ different single-particle orbits
$\alpha_0,\alpha_1,\dots,\alpha_{n-1}$ and that we can distribute among these orbits $N\le n$ particles.
The ordering among these states is important as it defines the order of the creation operators.
We will write the determinant
$$
\Phi_{\Lambda} = a_{\alpha_3}^\dagger a_{\alpha_6}^\dagger a_{\alpha_{10}}^\dagger a_{\alpha_{13}}^\dagger \vert 0 \rangle ,
$$
in a more compact way as
$$
\Phi_{3,6,10,13} = |0001001000100100\rangle.
$$
The action of a creation operator is thus
$$
a^\dagger_{\alpha_4}\Phi_{3,6,10,13} = a^\dagger_{\alpha_4}|0001001000100100\rangle=a^\dagger_{\alpha_4}a_{\alpha_3}^\dagger a_{\alpha_6}^\dagger a_{\alpha_{10}}^\dagger a_{\alpha_{13}}^\dagger \vert 0 \rangle ,
$$
which becomes
$$
-a_{\alpha_3}^\dagger a^\dagger_{\alpha_4} a_{\alpha_6}^\dagger a_{\alpha_{10}}^\dagger a_{\alpha_{13}}^\dagger \vert 0 \rangle =-|0001101000100100\rangle.
$$
Similarly
$$
a^\dagger_{\alpha_6}\Phi_{3,6,10,13} = a^\dagger_{\alpha_6}|0001001000100100\rangle=a^\dagger_{\alpha_6}a_{\alpha_3}^\dagger a_{\alpha_6}^\dagger a_{\alpha_{10}}^\dagger a_{\alpha_{13}}^\dagger \vert 0 \rangle ,
$$
which becomes
$$
-a_{\alpha_3}^\dagger (a_{\alpha_6}^\dagger)^2 a_{\alpha_{10}}^\dagger a_{\alpha_{13}}^\dagger \vert 0 \rangle = 0!
$$
This gives a simple recipe:
* If one of the bits $b_j$ is $1$ and we act with a creation operator on this bit, we return a null vector
* If $b_j=0$, we set it to $1$ and return a sign factor $(-1)^l$, where $l$ is the number of bits set before bit $j$.
Consider the action of $a^\dagger_{\alpha_2}$ on various Slater determinants:
$$
\begin{array}{ccc}
a^\dagger_{\alpha_2}\Phi_{00111}& = a^\dagger_{\alpha_2}|00111\rangle&=0\times |00111\rangle\\
a^\dagger_{\alpha_2}\Phi_{01011}& = a^\dagger_{\alpha_2}|01011\rangle&=(-1)\times |01111\rangle\\
a^\dagger_{\alpha_2}\Phi_{01101}& = a^\dagger_{\alpha_2}|01101\rangle&=0\times |01101\rangle\\
a^\dagger_{\alpha_2}\Phi_{01110}& = a^\dagger_{\alpha_2}|01110\rangle&=0\times |01110\rangle\\
a^\dagger_{\alpha_2}\Phi_{10011}& = a^\dagger_{\alpha_2}|10011\rangle&=(-1)\times |10111\rangle\\
a^\dagger_{\alpha_2}\Phi_{10101}& = a^\dagger_{\alpha_2}|10101\rangle&=0\times |10101\rangle\\
a^\dagger_{\alpha_2}\Phi_{10110}& = a^\dagger_{\alpha_2}|10110\rangle&=0\times |10110\rangle\\
a^\dagger_{\alpha_2}\Phi_{11001}& = a^\dagger_{\alpha_2}|11001\rangle&=(+1)\times |11101\rangle\\
a^\dagger_{\alpha_2}\Phi_{11010}& = a^\dagger_{\alpha_2}|11010\rangle&=(+1)\times |11110\rangle\\
\end{array}
$$
What is the simplest way to obtain the phase when we act with one annihilation(creation) operator
on the given Slater determinant representation?
We have an SD representation
$$
\Phi_{\Lambda} = a_{\alpha_0}^\dagger a_{\alpha_3}^\dagger a_{\alpha_6}^\dagger a_{\alpha_{10}}^\dagger a_{\alpha_{13}}^\dagger \vert 0 \rangle ,
$$
in a more compact way as
$$
\Phi_{0,3,6,10,13} = |1001001000100100\rangle.
$$
The action of
$$
a^\dagger_{\alpha_4}a_{\alpha_0}\Phi_{0,3,6,10,13} = a^\dagger_{\alpha_4}|0001001000100100\rangle=a^\dagger_{\alpha_4}a_{\alpha_3}^\dagger a_{\alpha_6}^\dagger a_{\alpha_{10}}^\dagger a_{\alpha_{13}}^\dagger \vert 0 \rangle ,
$$
which becomes
$$
-a_{\alpha_3}^\dagger a^\dagger_{\alpha_4} a_{\alpha_6}^\dagger a_{\alpha_{10}}^\dagger a_{\alpha_{13}}^\dagger \vert 0 \rangle =-|0001101000100100\rangle.
$$
The action
$$
a_{\alpha_0}\Phi_{0,3,6,10,13} = |0001001000100100\rangle,
$$
can be obtained by subtracting the bitwise AND of $\Phi_{0,3,6,10,13}$ and
a word which represents only $\alpha_0$, that is
$$
|1000000000000000\rangle,
$$
from $\Phi_{0,3,6,10,13}= |1001001000100100\rangle$.
This operation gives $|0001001000100100\rangle$.
Similarly, we can form $a^\dagger_{\alpha_4}a_{\alpha_0}\Phi_{0,3,6,10,13}$, say, by adding
$|0000100000000000\rangle$ to $a_{\alpha_0}\Phi_{0,3,6,10,13}$, first checking that their bitwise AND
is zero in order to make sure that orbital $\alpha_4$ is not already occupied.
It is trickier however to get the phase $(-1)^l$.
One possibility is as follows
* Let $S_1$ be a word that represents the $1-$bit to be removed and all others set to zero. In the previous example $S_1=|1000000000000000\rangle$
* Define $S_2$ as the similar word that represents the bit to be added, that is in our case $S_2=|0000100000000000\rangle$.
* Compute then $S=S_1-S_2$, which here becomes
$$
S=|0111000000000000\rangle
$$
* Perform then the logical AND operation of $S$ with the word containing
$$
\Phi_{0,3,6,10,13} = |1001001000100100\rangle,
$$
which results in $|0001000000000000\rangle$. Counting the number of $1$-bits gives the phase. Here you need, however, an algorithm for bit counting; several efficient ones are available. A small Python sketch of this bookkeeping is given below.
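A minimal sketch of the phase bookkeeping described above (helper names are ours; bit $p$ of an integer word represents orbital $\alpha_p$, and the phase counts the occupied orbitals below $p$, consistent with the ordering of the creation operators above):
```python
def annihilate(word, p):
    """Apply a_p to a bit-encoded determinant; return (phase, new_word) or (0, None)."""
    if not (word >> p) & 1:                                   # orbital p empty -> null vector
        return 0, None
    phase = (-1) ** bin(word & ((1 << p) - 1)).count("1")     # occupied orbitals below p
    return phase, word & ~(1 << p)

def create(word, p):
    """Apply a^dagger_p to a bit-encoded determinant; return (phase, new_word) or (0, None)."""
    if (word >> p) & 1:                                       # orbital p occupied -> null vector
        return 0, None
    phase = (-1) ** bin(word & ((1 << p) - 1)).count("1")
    return phase, word | (1 << p)

# example: a^dagger_4 a_0 acting on Phi_{0,3,6,10,13}
word = sum(1 << p for p in (0, 3, 6, 10, 13))
s1, w1 = annihilate(word, 0)
s2, w2 = create(w1, 4)
print(s1 * s2, sorted(p for p in range(16) if (w2 >> p) & 1))   # -1, [3, 4, 6, 10, 13]
```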
| 6ca91c3530f84d01637b73e288cfd63d8ab141e9 | 34,323 | ipynb | Jupyter Notebook | doc/pub/projects/ipynb/projects.ipynb | NuclearTalent/NuclearStructure | 7d18ed926172abeea358e95f4e95415e7b0a3498 | [
"CC0-1.0"
]
| 7 | 2017-07-04T16:21:42.000Z | 2019-05-24T18:10:11.000Z | doc/pub/projects/ipynb/projects.ipynb | NuclearTalent/NuclearStructure | 7d18ed926172abeea358e95f4e95415e7b0a3498 | [
"CC0-1.0"
]
| null | null | null | doc/pub/projects/ipynb/projects.ipynb | NuclearTalent/NuclearStructure | 7d18ed926172abeea358e95f4e95415e7b0a3498 | [
"CC0-1.0"
]
| 7 | 2017-06-30T16:55:37.000Z | 2021-05-01T07:54:49.000Z | 43.011278 | 698 | 0.619089 | true | 7,394 | Qwen/Qwen-72B | 1. YES
2. YES
| 0.810479 | 0.800692 | 0.648944 | __label__eng_Latn | 0.996304 | 0.346045 |
# Introduction to ML using ML4H
## Prerequisites
- Basic comfort with python, some linear algebra, some data science
- Follow the instructions in the main [README](https://github.com/broadinstitute/ml4h) for installing ML4H
- Now we are ready to teach the machines!
```python
# Imports
import os
import sys
import pickle
import random
from typing import List, Dict, Callable
from collections import defaultdict, Counter
import csv
import gzip
import h5py
import shutil
import zipfile
import pydicom
import numpy as np
from keras import metrics
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import gridspec
```
### Linear Regression
We explore machine learning on biomedical data using cloud computing, Python, TensorFlow, and the ml4h codebase.
We will start with linear regression. Our model is a vector, one weight for each input feature, and a single bias weight.
\begin{equation}
y = xw + b
\end{equation}
For notational convenience absorb the bias term into the weight vector by adding a 1 to the input data matrix $X$
\begin{equation}
y = [\textbf{1}, X][b, \textbf{w}]^T
\end{equation}
#### The Dense Layer is Matrix (Tensor) Multiplication
```python
def linear_regression():
samples = 40
real_weight = 2.0
real_bias = 0.5
x = np.linspace(-1, 1, samples)
y = real_weight*x + real_bias + (np.random.randn(*x.shape) * 0.1)
linear_model = Sequential()
linear_model.add(Dense(1, input_dim=1))
linear_model.compile(loss='mean_squared_error', optimizer='sgd')
linear_model.summary()
linear_model.fit(x, y, batch_size=1, epochs=6)
learned_slope = linear_model.get_weights()[0][0][0]
learned_bias = linear_model.get_weights()[1][0]
print('Learned slope:', learned_slope, 'real slope:', real_weight, 'learned bias:', learned_bias, 'real bias:', real_bias)
plt.plot(x, y)
plt.plot([-1,1], [-learned_slope+learned_bias, learned_slope+learned_bias], 'r')
plt.show()
print('Linear Regression complete!')
```
```python
linear_regression()
```
## Now Logistic Regression:
We take the real-valued predictions from linear regression and squish them with a sigmoid.
\begin{equation}
\textbf{y} = \sigma(X\textbf{w} + b)
\end{equation}
where
\begin{equation}
\sigma(x) = \frac{e^x}{1+e^x} = \frac{1}{1+e^{-x}}
\end{equation}
```python
def sigmoid(x):
a = []
for item in x:
a.append(np.exp(item)/(1+np.exp(item)))
return a
x = np.arange(-10., 10., 0.2)
sig = sigmoid(x)
plt.plot(x,sig)
plt.show()
```
```python
def logistic_regression(epochs = 600, num_labels = 10):
train, test, valid = load_data('mnist.pkl.gz')
train_y = make_one_hot(train[1], num_labels)
valid_y = make_one_hot(valid[1], num_labels)
test_y = make_one_hot(test[1], num_labels)
logistic_model = Sequential()
logistic_model.add(Dense(num_labels, activation='softmax', input_dim=784, name='mnist_templates'))
logistic_model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
logistic_model.summary()
templates = logistic_model.layers[0].get_weights()[0]
plot_templates(templates, 0)
print('weights shape:', templates.shape)
for e in range(epochs):
trainidx = random.sample(range(0, train[0].shape[0]), 8192)
x_batch = train[0][trainidx,:]
y_batch = train_y[trainidx]
logistic_model.train_on_batch(x_batch, y_batch)
if e % 100 == 0:
plot_templates(logistic_model.layers[0].get_weights()[0], e)
print('Logistic Model test set loss and accuracy:', logistic_model.evaluate(test[0], test_y), 'at epoch', e)
def plot_templates(templates, epoch):
n = 10
templates = templates.reshape((28,28,n))
plt.figure(figsize=(16, 8))
for i in range(n):
ax = plt.subplot(2, 5, i+1)
plt.imshow(templates[:, :, i])
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plot_name = "./regression_example/mnist_templates_"+str(epoch)+".png"
if not os.path.exists(os.path.dirname(plot_name)):
os.makedirs(os.path.dirname(plot_name))
plt.savefig(plot_name)
def make_one_hot(y, num_labels):
ohy = np.zeros((len(y), num_labels))
for i in range(0, len(y)):
ohy[i][y[i]] = 1.0
return ohy
def load_data(dataset):
''' Loads the dataset
:param dataset: the path to the dataset (here MNIST)'''
data_dir, data_file = os.path.split(dataset)
if data_dir == "" and not os.path.isfile(dataset):
# Check if dataset is in the data directory.
new_path = os.path.join("data", dataset)
if os.path.isfile(new_path) or data_file == 'mnist.pkl.gz':
dataset = new_path
if (not os.path.isfile(dataset)) and data_file == 'mnist.pkl.gz':
from urllib.request import urlretrieve
origin = ('http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz')
print('Downloading data from %s' % origin)
if not os.path.exists(os.path.dirname(dataset)):
os.makedirs(os.path.dirname(dataset))
urlretrieve(origin, dataset)
print('loading data...')
f = gzip.open(dataset, 'rb')
if sys.version_info[0] == 3:
u = pickle._Unpickler(f)
u.encoding = 'latin1'
train_set, valid_set, test_set = u.load()
else:
train_set, valid_set, test_set = pickle.load(f)
f.close()
return train_set, valid_set, test_set
def plot_mnist(sides):
train, _, _ = load_data('mnist.pkl.gz')
print(train[0].shape)
mnist_images = train[0].reshape((-1, 28, 28, 1))
sides = int(np.ceil(np.sqrt(min(sides, mnist_images.shape[0]))))
_, axes = plt.subplots(sides, sides, figsize=(16, 16))
for i in range(sides*sides):
axes[i // sides, i % sides].imshow(mnist_images[i, ..., 0], cmap='gray')
```
# Look B4 U Learn!
```python
plot_mnist(25)
```
## Cross Entropy Loss:
Our favorite loss function for categorical data.
\begin{equation}
L(true, model) = -\sum_{x\in\mathcal{X}} true(x)\, \log model(x)
\end{equation}
Binary cross entropy with $N$ data points $x$ each with a binary label:
\begin{equation}
true(x) \in \{0, 1\} \\
L(true, model) = -\frac{1}{N}\sum^N_{i=1} \left[ true(x_i)\log(model(x_i)) + (1-true(x_i))\log(1-model(x_i)) \right]
\end{equation}
Up to an additive constant (the entropy of the true distribution), this is the Kullback-Leibler divergence between the true distribution and the predicted one.
This function emerges in many fields as diverse as probability, information theory, and physics.
What is the information difference between the truth and our model? How much data do I lose by replacing the truth with the model's predictions? What is the temperature difference between my predictions and the truth?!
Categorical cross entropy with $K$ different classes or labels:
\begin{equation}
true(x) \in \{1, 2, \dots, K\} \\
L(true, model) = -\frac{1}{N}\sum^N_{i=1}\sum^K_{k=1} y_{ik}\log(q_k(x_i))
\end{equation}
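A tiny numpy illustration of what this loss measures, with made-up toy numbers (`y_true` one-hot, rows of `y_pred` summing to one); it mirrors what the `categorical_crossentropy` loss used below computes, averaged over the batch:
```python
import numpy as np

y_true = np.array([[0., 0., 1.], [1., 0., 0.]])            # one-hot labels
y_pred = np.array([[0.1, 0.2, 0.7], [0.6, 0.3, 0.1]])      # predicted probabilities
loss = -np.mean(np.sum(y_true * np.log(y_pred), axis=1))   # average cross entropy
print(loss)
```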
```python
logistic_regression()
```
# Deep Models: "Hidden" Layers and The MultiLayerPerceptron
```python
def multilayer_perceptron():
train, test, valid = load_data('mnist.pkl.gz')
num_labels = 10
train_y = make_one_hot(train[1], num_labels)
valid_y = make_one_hot(valid[1], num_labels)
test_y = make_one_hot(test[1], num_labels)
mlp_model = Sequential()
mlp_model.add(Dense(500, activation='relu', input_dim=784))
mlp_model.add(Dense(num_labels, activation='softmax'))
mlp_model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
mlp_model.summary()
mlp_model.fit(train[0], train_y, validation_data=(valid[0],valid_y), batch_size=32, epochs=3)
print('Multilayer Perceptron trained. Test set loss and accuracy:', mlp_model.evaluate(test[0], test_y))
multilayer_perceptron()
```
# Convolutions Flip, Slide, Multiply, Add
Convolutions look for their kernel in a larger signal.
In convolution, you always and only find what you're looking with.
Convolution and cross correlation are deeply related:
\begin{equation}
f(t) \circledast g(t) \triangleq\ \int_{-\infty}^\infty f(\tau) g(t - \tau) \, d\tau = \int_{-\infty}^\infty f(t-\tau) g(\tau)\, d\tau.
\end{equation}
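A toy 1D example of "flip, slide, multiply, add" (note that `np.convolve` flips the kernel, whereas the "convolutions" in neural-network layers are really cross-correlations, i.e. no flip):
```python
import numpy as np

signal = np.array([0., 0., 1., 2., 1., 0., 0.])
kernel = np.array([1., 2., 3.])
print(np.convolve(signal, kernel, mode='same'))          # true convolution: kernel is flipped
print(np.convolve(signal, kernel[::-1], mode='same'))    # cross-correlation: no flip
```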
```python
def convolutional_neural_network(filters=32, kernel_size=(3,3), padding='valid', num_labels = 10):
train, test, valid = load_data('mnist.pkl.gz')
train_y = make_one_hot(train[1], num_labels)
valid_y = make_one_hot(valid[1], num_labels)
test_y = make_one_hot(test[1], num_labels)
print(train[0].shape)
mnist_images = train[0].reshape((-1, 28, 28, 1))
mnist_valid = valid[0].reshape((-1, 28, 28, 1))
mnist_test = test[0].reshape((-1, 28, 28, 1))
cnn_model = Sequential()
cnn_model.add(Conv2D(input_shape=(28, 28, 1), filters=filters, kernel_size=kernel_size, padding=padding, activation='relu'))
cnn_model.add(Conv2D(filters=filters, kernel_size=kernel_size, padding=padding, activation='relu'))
cnn_model.add(Conv2D(filters=filters, kernel_size=kernel_size, padding=padding, activation='relu'))
cnn_model.add(Flatten())
cnn_model.add(Dense(16, activation='relu'))
cnn_model.add(Dense(num_labels, activation='softmax'))
cnn_model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
cnn_model.summary()
cnn_model.fit(mnist_images, train_y, validation_data=(mnist_valid, valid_y), batch_size=32, epochs=3)
print('Convolutional Neural Network trained. Test set loss and accuracy:', cnn_model.evaluate(mnist_test, test_y))
convolutional_neural_network()
```
# Why (and When!) is Convolution Helpful?
- Decouples input size from model size
- Translationally Equivariant (Not Invariant), so we can find features wherever they might occur in the signal
- Local structure is often informative
- But not always! (eg Tabular data)
| 80069d5abc4820b8431a4121a020144a1bb7ef6d | 14,601 | ipynb | Jupyter Notebook | notebooks/ML4H_ML_intro.ipynb | deflaux/ml4h | b1de99143b943c78cd8a1c86fcac36523c502ee5 | [
"BSD-3-Clause"
]
| 45 | 2020-11-02T18:22:17.000Z | 2022-03-17T04:11:37.000Z | notebooks/ML4H_ML_intro.ipynb | deflaux/ml4h | b1de99143b943c78cd8a1c86fcac36523c502ee5 | [
"BSD-3-Clause"
]
| 55 | 2020-11-03T19:49:53.000Z | 2022-03-17T11:04:54.000Z | notebooks/ML4H_ML_intro.ipynb | deflaux/ml4h | b1de99143b943c78cd8a1c86fcac36523c502ee5 | [
"BSD-3-Clause"
]
| 9 | 2020-11-02T18:22:46.000Z | 2022-01-25T22:14:32.000Z | 35.183133 | 228 | 0.56455 | true | 2,705 | Qwen/Qwen-72B | 1. YES
2. YES | 0.897695 | 0.727975 | 0.6535 | __label__eng_Latn | 0.591129 | 0.356631 |
```python
# G2: Standard 2D Model
# Make SymPy available to this program:
import sympy
from sympy import *
# Make GAlgebra available to this program:
from galgebra.ga import *
from galgebra.mv import *
from galgebra.printer import Fmt, GaPrinter, Format
# Fmt: sets the way that a multivector's basis expansion is output.
# GaPrinter: makes GA output a little more readable.
# Format: turns on latex printer.
from galgebra.gprinter import gFormat, gprint
gFormat()
```
```python
# g2: The geometric algebra G^2.
g2coords = (x,y) = symbols('x y', real=True)
g2 = Ga('e', g=[1,1,1], coords=g2coords)
(ex, ey) = g2.mv()
grad = g2.grad
from galgebra.dop import *
pdx = Pdop(x)
pdy = Pdop(y)
```
```python
```
| 6ce06a5394329a5ead8306f042600c31bcd75417 | 1,704 | ipynb | Jupyter Notebook | python/GeometryAG/gaprimer/g2.ipynb | karng87/nasm_game | a97fdb09459efffc561d2122058c348c93f1dc87 | [
"MIT"
]
| null | null | null | python/GeometryAG/gaprimer/g2.ipynb | karng87/nasm_game | a97fdb09459efffc561d2122058c348c93f1dc87 | [
"MIT"
]
| null | null | null | python/GeometryAG/gaprimer/g2.ipynb | karng87/nasm_game | a97fdb09459efffc561d2122058c348c93f1dc87 | [
"MIT"
]
| null | null | null | 23.342466 | 86 | 0.535798 | true | 226 | Qwen/Qwen-72B | 1. YES
2. YES
| 0.91848 | 0.715424 | 0.657103 | __label__eng_Latn | 0.818649 | 0.365001 |
# Options 1
This notebook shows payoff functions and pricing bounds for options.
## Load Packages and Extra Functions
```julia
using Printf
include("jlFiles/printmat.jl")
```
printyellow (generic function with 1 method)
```julia
using Plots, LaTeXStrings
#pyplot(size=(600,400))
gr(size=(480,320))
default(fmt = :png)
```
# Payoffs and Profits of Options
Let $K$ be the strike price and $S_m$ the price of the underlying at expiration ($m$ years ahead) of the option contract.
The call and put profits (at expiration) are
$\text{call profit}_{m}\ = \max\left( 0,S_{m}-K\right) - e^{my}C$
$\text{put profit}_{m}=\max\left( 0,K-S_{m}\right) - e^{my}P $,
where $C$ and $P$ are the call and put option prices (paid today). In both cases the first term ($\max()$) represents the payoff at expiration, while the second term ($e^{my}C$ or $e^{my}P$) subtracts the capitalised value of the option price (premium) paid at inception of the contract.
The profit of a straddle is the sum of those of a call and a put.
### A Remark on the Code
- `Sₘ.>K` creates a vector of false/true
- `ifelse.(Sₘ.>K,"yes","no")` creates a vector of "yes" or "no" depending on whether `Sₘ.>K` or not.
```julia
Sₘ = [4.5,5.5] #possible values of underlying at expiration
K = 5 #strike price
C = 0.4 #call price (just a number that I made up)
P = 0.4 #put price
(y,m) = (0,1) #zero interest to keep it simple, 1 year to expiration
CallPayoff = max.(0,Sₘ.-K) #payoff at expiration
CallProfit = CallPayoff .- exp(m*y)*C #profit at expiration
ExerciseIt = ifelse.(Sₘ.>K,"yes","no") #"no"/"yes" for exercise
printblue("Payoff and profit of a call option with strike price $K and price (premium) of $C:\n")
printmat([Sₘ ExerciseIt CallPayoff CallProfit],colNames=["Sₘ","Exercise","Payoff","Profit"])
```
    Payoff and profit of a call option with strike price 5 and price (premium) of 0.4:
Sₘ Exercise Payoff Profit
4.500 no 0.000 -0.400
5.500 yes 0.500 0.100
```julia
Sₘ = 0:0.1:10 #more possible outcomes, for plotting
CallProfit = max.(0,Sₘ.-K) .- exp(m*y)*C
PutProfit = max.(0,K.-Sₘ) .- exp(m*y)*P
p1 = plot( Sₘ,[CallProfit PutProfit],
linecolor = [:red :green],
linestyle = [:dash :dot],
linewidth = 2,
label = ["call" "put"],
ylim = (-1,5),
legend = :top,
title = "Profits of call and put options, strike = $K",
xlabel = "Asset price at expiration" )
display(p1)
```
```julia
StraddleProfit = CallProfit + PutProfit #a straddle: 1 call and 1 put
p1 = plot( Sₘ,StraddleProfit,
linecolor = :blue,
linewidth = 2,
label = "call+put",
ylim = (-1,5),
legend = :top,
title = "Profit of a straddle, strike = $K",
xlabel = "Asset price at expiration" )
display(p1)
```
# Put-Call Parity for European Options
A no-arbitrage condition says that
$
C-P=e^{-my}(F-K)
$
must hold, where $F$ is the forward price. This is the *put-call parity*.
Also, when the underlying asset has no dividends (until expiration of the option), then the forward-spot parity says that $F=e^{my}S$.
```julia
(S,K,m,y) = (42,38,0.5,0.05) #current price of underlying etc
C = 5.5 #assume this is the price of a call option(K)
F = exp(m*y)*S #forward-spot parity
P = C - exp(-m*y)*(F-K)
printblue("Put-Call parity when (C,S,y,m)=($C,$S,$y,$m):\n")
printmat([C,exp(-m*y),F-K,P],rowNames=["C","exp(-m*y)","F-K","P"])
```
    Put-Call parity when (C,S,y,m)=(5.5,42,0.05,0.5):
C 5.500
exp(-m*y) 0.975
F-K 5.063
P 0.562
# Pricing Bounds
The pricing bounds for (American and European) call options are
$\begin{align}
C & \leq e^{-my}F\\
C & \geq\max[0,e^{-my}(F-K)]
\end{align}$
```julia
(S,K,m,y) = (42,38,0.5,0.05) #current price of underlying etc
F = exp(m*y)*S
C_Upper = exp(-m*y)*F
C_Lower = max.(0,exp(-m*y)*(F.-K))
printlnPs("Pricing bounds for European call option with strike $K",C_Lower,C_Upper)
```
Pricing bounds for European call option with strike 38 4.938 42.000
```julia
K_range = 30:0.5:50 #pricing bounds for many strike prices
n = length(K_range)
C_Upper = exp(-m*y)*F
C_Lower = max.(0,exp(-m*y)*(F.-K_range))
P_Upper = exp(-m*y)*K_range
P_Lower = max.(0,exp(-m*y)*(K_range.-F));
```
```julia
p1 = plot( K_range,[C_Upper*ones(n) C_Lower],
linecolor = [:green :blue],
linewidth = 2,
linestyle = [:dash :dot],
label = [L"C \leq \exp(-my)F " L"C \geq \max[0,\exp(-my)(F-K)]"],
ylim = (-1,S+1),
legend = :right,
title = "Price bounds on European call options",
xlabel = "Strike price" )
display(p1)
```
The pricing bounds for (European) put options are
$\begin{align}
P_{E} & \leq e^{-my}K\\
e^{-my}(K-F) & \leq P_{E}
\end{align}$
```julia
p1 = plot( K_range,[P_Upper P_Lower],
linecolor = [:green :blue],
linewidth = 2,
linestyle = [:dash :dot],
label = [L" P \leq \exp(-my)K " L" P \geq \max[0,\exp(-my)(K-F)] "],
ylim = (-1,50),
legend = :right,
title = "Price bounds on European put options",
xlabel = "Strike price" )
display(p1)
```
```julia
```
| e5c0b6d2798adf7f5a034dcb3497ed07d826e5aa | 128,397 | ipynb | Jupyter Notebook | Ch19_Options1.ipynb | PaulSoderlind/FinancialTheoryMSc | bbdbeaedea4feb25b019608e65e0d8a5ecde946d | [
"MIT"
]
| 31 | 2017-10-22T20:52:31.000Z | 2022-01-03T22:53:06.000Z | Ch19_Options1.ipynb | PaulSoderlind/FinancialTheoryMSc | bbdbeaedea4feb25b019608e65e0d8a5ecde946d | [
"MIT"
]
| null | null | null | Ch19_Options1.ipynb | PaulSoderlind/FinancialTheoryMSc | bbdbeaedea4feb25b019608e65e0d8a5ecde946d | [
"MIT"
]
| 21 | 2017-11-27T21:34:48.000Z | 2021-09-11T05:58:09.000Z | 342.392 | 31,445 | 0.927561 | true | 1,791 | Qwen/Qwen-72B | 1. YES
2. YES | 0.904651 | 0.83762 | 0.757753 | __label__eng_Latn | 0.907813 | 0.598847 |
```python
%matplotlib inline
```
```python
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sn
sn.set_style("whitegrid")
import sympy as sym
import pycollocation
import pyam
```
```python
pyam.__version__
```
'0.1.2a0'
## Defining inputs
Need to define some heterogenous factors of production...
```python
# define some workers skill
x, loc1, mu1, sigma1 = sym.var('x, loc1, mu1, sigma1')
skill_cdf = 0.5 + 0.5 * sym.erf((sym.log(x - loc1) - mu1) / sym.sqrt(2 * sigma1**2))
skill_params = {'loc1': 1e0, 'mu1': 0.0, 'sigma1': 1.0}
workers = pyam.Input(var=x,
cdf=skill_cdf,
params=skill_params,
bounds=(1.2, 1e1), # guesses for the alpha and (1 - alpha) quantiles!
alpha=0.005,
measure=1.0
)
# define some firms
y, loc2, mu2, sigma2 = sym.var('y, loc2, mu2, sigma2')
productivity_cdf = 0.5 + 0.5 * sym.erf((sym.log(y - loc2) - mu2) / sym.sqrt(2 * sigma2**2))
productivity_params = {'loc2': 1e0, 'mu2': 0.0, 'sigma2': 1.0}
firms = pyam.Input(var=y,
cdf=productivity_cdf,
params=productivity_params,
bounds=(1.2, 1e1), # guesses for the alpha and (1 - alpha) quantiles!
alpha=0.005,
measure=1.0
)
```
Note that we are shifting the distributions of worker skill and firm productivity to the right by 1.0 in order to try and avoid issues with having workers (firms) with near zero skill (productivity).
```python
xs = np.linspace(workers.lower, workers.upper, int(1e4))  # number of grid points must be an integer
plt.plot(xs, workers.evaluate_pdf(xs))
plt.xlabel('Worker skill, $x$', fontsize=20)
plt.show()
```
## Defining a production process
Next need to define some production process...
```python
# define symbolic expression for CES between x and y
omega_A, sigma_A = sym.var('omega_A, sigma_A')
A = ((omega_A * x**((sigma_A - 1) / sigma_A) +
(1 - omega_A) * y**((sigma_A - 1) / sigma_A))**(sigma_A / (sigma_A - 1)))
# define symbolic expression for CES between x and y
r, l, omega_B, sigma_B = sym.var('r, l, omega_B, sigma_B')
B = ((omega_B * r**((sigma_B - 1) / sigma_B) +
(1 - omega_B) * l**((sigma_B - 1) / sigma_B))**(sigma_B / (sigma_B - 1)))
F = A * B
```
```python
# negative assortativity requires that sigma_A * sigma_B > 1
F_params = {'omega_A':0.25, 'omega_B':0.5, 'sigma_A':2.0, 'sigma_B':1.0 }
```
## Define a boundary value problem
```python
problem = pyam.AssortativeMatchingProblem(assortativity='negative',
input1=workers,
input2=firms,
F=sym.limit(F, sigma_B, 1),
F_params=F_params)
```
## Pick some collocation solver
```python
solver = pycollocation.OrthogonalPolynomialSolver(problem)
```
## Compute some decent initial guess
Currently I guess that $\mu(x)$ is has the form...
$$ \hat{\mu}(x) = \beta_0 + \beta_1 f(x) $$
(i.e., a linear translation) of some function $f$. Using my $\hat{\mu}(x)$, I can then back out a guess for $\theta(x)$ implied by the model...
$$ \hat{\theta}(x) = \frac{H(x)}{\hat{\mu}'(x)} $$
```python
initial_guess = pyam.OrthogonalPolynomialInitialGuess(solver)
initial_polys = initial_guess.compute_initial_guess("Chebyshev",
degrees={'mu': 40, 'theta': 70},
f=lambda x, alpha: x**alpha,
alpha=1.0)
```
```python
# quickly plot the initial conditions
xs = np.linspace(workers.lower, workers.upper, 1000)
plt.plot(xs, initial_polys['mu'](xs))
plt.plot(xs, initial_polys['theta'](xs))
plt.grid('on')
```
## Solve the model!
```python
domain = [workers.lower, workers.upper]
initial_coefs = {'mu': initial_polys['mu'].coef,
'theta': initial_polys['theta'].coef}
solver.solve(kind="Chebyshev",
coefs_dict=initial_coefs,
domain=domain,
method='hybr')
```
```python
solver.result.success
```
False
## Plot some results
```python
viz = pyam.Visualizer(solver)
```
```python
viz.interpolation_knots = np.linspace(workers.lower, workers.upper, 1000)
viz.residuals.plot()
plt.show()
```
```python
viz.normalized_residuals[['mu', 'theta']].plot(logy=True)
plt.show()
```
```python
viz.solution.tail()
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>F</th>
<th>Fl</th>
<th>Flr</th>
<th>Fr</th>
<th>Fx</th>
<th>Fxl</th>
<th>Fxr</th>
<th>Fxy</th>
<th>Fy</th>
<th>Fyl</th>
<th>Fyr</th>
<th>factor_payment_1</th>
<th>factor_payment_2</th>
<th>mu</th>
<th>theta</th>
</tr>
</thead>
<tbody>
<tr>
<th>12.508857</th>
<td>18.206082</td>
<td>5.150932</td>
<td>2.575466</td>
<td>9.103041</td>
<td>0.398371</td>
<td>0.112708</td>
<td>0.199185</td>
<td>0.040918</td>
<td>0.934998</td>
<td>0.264533</td>
<td>0.467499</td>
<td>5.150932</td>
<td>9.103041</td>
<td>14.142185</td>
<td>1.767261</td>
</tr>
<tr>
<th>12.520161</th>
<td>18.206094</td>
<td>5.153478</td>
<td>2.576739</td>
<td>9.103047</td>
<td>0.397750</td>
<td>0.112589</td>
<td>0.198875</td>
<td>0.040864</td>
<td>0.935230</td>
<td>0.264729</td>
<td>0.467615</td>
<td>5.153478</td>
<td>9.103047</td>
<td>14.142192</td>
<td>1.766389</td>
</tr>
<tr>
<th>12.531464</th>
<td>18.206107</td>
<td>5.156022</td>
<td>2.578011</td>
<td>9.103054</td>
<td>0.397132</td>
<td>0.112469</td>
<td>0.198566</td>
<td>0.040811</td>
<td>0.935460</td>
<td>0.264925</td>
<td>0.467730</td>
<td>5.156022</td>
<td>9.103054</td>
<td>14.142198</td>
<td>1.765519</td>
</tr>
<tr>
<th>12.542768</th>
<td>18.206122</td>
<td>5.158563</td>
<td>2.579281</td>
<td>9.103061</td>
<td>0.396514</td>
<td>0.112349</td>
<td>0.198257</td>
<td>0.040757</td>
<td>0.935691</td>
<td>0.265121</td>
<td>0.467846</td>
<td>5.158563</td>
<td>9.103061</td>
<td>14.142205</td>
<td>1.764651</td>
</tr>
<tr>
<th>12.554071</th>
<td>18.206134</td>
<td>5.161101</td>
<td>2.580551</td>
<td>9.103067</td>
<td>0.395898</td>
<td>0.112230</td>
<td>0.197949</td>
<td>0.040704</td>
<td>0.935921</td>
<td>0.265316</td>
<td>0.467961</td>
<td>5.161101</td>
<td>9.103067</td>
<td>14.142212</td>
<td>1.763784</td>
</tr>
</tbody>
</table>
</div>
```python
viz.solution[['mu', 'theta']].plot(subplots=True)
plt.show()
```
```python
viz.solution[['Fxy', 'Fyl']].plot()
plt.show()
```
## Plot factor payments
Note that `factor_payment_1` is wages and `factor_payment_2` is profits...
```python
viz.solution[['factor_payment_1', 'factor_payment_2']].plot(subplots=True)
plt.show()
```
## Plot firm size against wages and profits
```python
fig, axes = plt.subplots(1, 2, sharey=True)
axes[0].scatter(viz.solution.factor_payment_1, viz.solution.theta, alpha=0.5,
edgecolor='none')
axes[0].set_ylim(0, 1.05 * viz.solution.theta.max())
axes[0].set_xlabel('Wages, $w$')
axes[0].set_ylabel(r'Firm size, $\theta$')
axes[1].scatter(viz.solution.factor_payment_2, viz.solution.theta, alpha=0.5,
edgecolor='none')
axes[1].set_xlabel(r'Profits, $\pi$')
plt.show()
```
```python
# to get correlation just use pandas!
viz.solution.corr()
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>F</th>
<th>Fl</th>
<th>Flr</th>
<th>Fr</th>
<th>Fx</th>
<th>Fxl</th>
<th>Fxr</th>
<th>Fxy</th>
<th>Fy</th>
<th>Fyl</th>
<th>Fyr</th>
<th>factor_payment_1</th>
<th>factor_payment_2</th>
<th>mu</th>
<th>theta</th>
</tr>
</thead>
<tbody>
<tr>
<th>F</th>
<td>1.000000</td>
<td>0.999021</td>
<td>0.999021</td>
<td>1.000000</td>
<td>0.220731</td>
<td>-0.997585</td>
<td>0.220731</td>
<td>-0.972939</td>
<td>0.992605</td>
<td>-0.614955</td>
<td>0.992605</td>
<td>0.999021</td>
<td>1.000000</td>
<td>0.995928</td>
<td>0.946557</td>
</tr>
<tr>
<th>Fl</th>
<td>0.999021</td>
<td>1.000000</td>
<td>1.000000</td>
<td>0.999021</td>
<td>0.235079</td>
<td>-0.999269</td>
<td>0.235079</td>
<td>-0.976903</td>
<td>0.996133</td>
<td>-0.634499</td>
<td>0.996133</td>
<td>1.000000</td>
<td>0.999021</td>
<td>0.996189</td>
<td>0.950319</td>
</tr>
<tr>
<th>Flr</th>
<td>0.999021</td>
<td>1.000000</td>
<td>1.000000</td>
<td>0.999021</td>
<td>0.235079</td>
<td>-0.999269</td>
<td>0.235079</td>
<td>-0.976903</td>
<td>0.996133</td>
<td>-0.634499</td>
<td>0.996133</td>
<td>1.000000</td>
<td>0.999021</td>
<td>0.996189</td>
<td>0.950319</td>
</tr>
<tr>
<th>Fr</th>
<td>1.000000</td>
<td>0.999021</td>
<td>0.999021</td>
<td>1.000000</td>
<td>0.220731</td>
<td>-0.997585</td>
<td>0.220731</td>
<td>-0.972939</td>
<td>0.992605</td>
<td>-0.614955</td>
<td>0.992605</td>
<td>0.999021</td>
<td>1.000000</td>
<td>0.995928</td>
<td>0.946557</td>
</tr>
<tr>
<th>Fx</th>
<td>0.220731</td>
<td>0.235079</td>
<td>0.235079</td>
<td>0.220731</td>
<td>1.000000</td>
<td>-0.271075</td>
<td>1.000000</td>
<td>-0.434832</td>
<td>0.309461</td>
<td>-0.842048</td>
<td>0.309461</td>
<td>0.235079</td>
<td>0.220731</td>
<td>0.307638</td>
<td>0.523476</td>
</tr>
<tr>
<th>Fxl</th>
<td>-0.997585</td>
<td>-0.999269</td>
<td>-0.999269</td>
<td>-0.997585</td>
<td>-0.271075</td>
<td>1.000000</td>
<td>-0.271075</td>
<td>0.984286</td>
<td>-0.998298</td>
<td>0.660006</td>
<td>-0.998298</td>
<td>-0.999269</td>
<td>-0.997585</td>
<td>-0.998115</td>
<td>-0.960981</td>
</tr>
<tr>
<th>Fxr</th>
<td>0.220731</td>
<td>0.235079</td>
<td>0.235079</td>
<td>0.220731</td>
<td>1.000000</td>
<td>-0.271075</td>
<td>1.000000</td>
<td>-0.434832</td>
<td>0.309461</td>
<td>-0.842048</td>
<td>0.309461</td>
<td>0.235079</td>
<td>0.220731</td>
<td>0.307638</td>
<td>0.523476</td>
</tr>
<tr>
<th>Fxy</th>
<td>-0.972939</td>
<td>-0.976903</td>
<td>-0.976903</td>
<td>-0.972939</td>
<td>-0.434832</td>
<td>0.984286</td>
<td>-0.434832</td>
<td>1.000000</td>
<td>-0.988764</td>
<td>0.763266</td>
<td>-0.988764</td>
<td>-0.976903</td>
<td>-0.972939</td>
<td>-0.989281</td>
<td>-0.993717</td>
</tr>
<tr>
<th>Fy</th>
<td>0.992605</td>
<td>0.996133</td>
<td>0.996133</td>
<td>0.992605</td>
<td>0.309461</td>
<td>-0.998298</td>
<td>0.309461</td>
<td>-0.988764</td>
<td>1.000000</td>
<td>-0.699261</td>
<td>1.000000</td>
<td>0.996133</td>
<td>0.992605</td>
<td>0.996621</td>
<td>0.969117</td>
</tr>
<tr>
<th>Fyl</th>
<td>-0.614955</td>
<td>-0.634499</td>
<td>-0.634499</td>
<td>-0.614955</td>
<td>-0.842048</td>
<td>0.660006</td>
<td>-0.842048</td>
<td>0.763266</td>
<td>-0.699261</td>
<td>1.000000</td>
<td>-0.699261</td>
<td>-0.634499</td>
<td>-0.614955</td>
<td>-0.676317</td>
<td>-0.814112</td>
</tr>
<tr>
<th>Fyr</th>
<td>0.992605</td>
<td>0.996133</td>
<td>0.996133</td>
<td>0.992605</td>
<td>0.309461</td>
<td>-0.998298</td>
<td>0.309461</td>
<td>-0.988764</td>
<td>1.000000</td>
<td>-0.699261</td>
<td>1.000000</td>
<td>0.996133</td>
<td>0.992605</td>
<td>0.996621</td>
<td>0.969117</td>
</tr>
<tr>
<th>factor_payment_1</th>
<td>0.999021</td>
<td>1.000000</td>
<td>1.000000</td>
<td>0.999021</td>
<td>0.235079</td>
<td>-0.999269</td>
<td>0.235079</td>
<td>-0.976903</td>
<td>0.996133</td>
<td>-0.634499</td>
<td>0.996133</td>
<td>1.000000</td>
<td>0.999021</td>
<td>0.996189</td>
<td>0.950319</td>
</tr>
<tr>
<th>factor_payment_2</th>
<td>1.000000</td>
<td>0.999021</td>
<td>0.999021</td>
<td>1.000000</td>
<td>0.220731</td>
<td>-0.997585</td>
<td>0.220731</td>
<td>-0.972939</td>
<td>0.992605</td>
<td>-0.614955</td>
<td>0.992605</td>
<td>0.999021</td>
<td>1.000000</td>
<td>0.995928</td>
<td>0.946557</td>
</tr>
<tr>
<th>mu</th>
<td>0.995928</td>
<td>0.996189</td>
<td>0.996189</td>
<td>0.995928</td>
<td>0.307638</td>
<td>-0.998115</td>
<td>0.307638</td>
<td>-0.989281</td>
<td>0.996621</td>
<td>-0.676317</td>
<td>0.996621</td>
<td>0.996189</td>
<td>0.995928</td>
<td>1.000000</td>
<td>0.971758</td>
</tr>
<tr>
<th>theta</th>
<td>0.946557</td>
<td>0.950319</td>
<td>0.950319</td>
<td>0.946557</td>
<td>0.523476</td>
<td>-0.960981</td>
<td>0.523476</td>
<td>-0.993717</td>
<td>0.969117</td>
<td>-0.814112</td>
<td>0.969117</td>
<td>0.950319</td>
<td>0.946557</td>
<td>0.971758</td>
<td>1.000000</td>
</tr>
</tbody>
</table>
</div>
```python
# or a subset
viz.solution[['theta', 'factor_payment_1']].corr()
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>theta</th>
<th>factor_payment_1</th>
</tr>
</thead>
<tbody>
<tr>
<th>theta</th>
<td>1.000000</td>
<td>0.950319</td>
</tr>
<tr>
<th>factor_payment_1</th>
<td>0.950319</td>
<td>1.000000</td>
</tr>
</tbody>
</table>
</div>
```python
# or actual values!
viz.solution.corr().loc['theta']['factor_payment_1']
```
0.95031918230948131
## Plot the density for firm size
As you can see, the theta function is hump-shaped. Nothing special, but when calculating the pdf some bookkeeping is required: sort the thetas while preserving their order (so we can relate them to their xs) and then carefully use the matching x when calculating the pdf.
The principle of Philipp's trick is:
$pdf_x(x_i)$ can be interpreted as the *number of workers with ability $x_i$*. $\theta_i$ is the size of the firms that employ workers of type $x_i$. Since all firms that match with workers of type $x_i$ choose the same firm size, $pdf_x(x_i)/\theta_i$ is the number of firms of size $\theta_i$.
Say there are 100 workers with ability $x_i$, and their associated firm size $\theta_i$ is 2. Then there are $100/2 = 50$ firms of size $\theta_i$.
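The same bookkeeping can be sketched directly with `numpy`. This is only an illustration of the idea; the names `xs`, `theta`, and `worker_pdf` below are hypothetical stand-ins, not part of the `pyam` API, and the actual calculation in the next cell uses `viz.compute_pdf`.
```python
import numpy as np

def firm_size_pdf(xs, theta, worker_pdf):
    """Sketch of Philipp's trick: turn a worker density into a firm-size density.

    xs         : grid of worker abilities.
    theta      : firm size matched with each ability in ``xs``.
    worker_pdf : callable returning the density of workers at each ability.
    """
    # number of workers of each ability, divided by the size of the firm they
    # match with, gives the (unnormalized) number of firms of that size
    firm_counts = worker_pdf(xs) / theta

    # sort by firm size while keeping the association with xs intact
    order = np.argsort(theta)
    sizes, counts = theta[order], firm_counts[order]

    # normalize so the density integrates to one over firm size
    counts /= np.trapz(counts, sizes)
    return sizes, counts
```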
```python
fig, axes = plt.subplots(1, 3)
theta_pdf = viz.compute_pdf('theta', normalize=True)
theta_pdf.plot(ax=axes[0])
axes[0].set_xlabel(r'Firm size, $\theta$')
axes[0].set_title(r'pdf')
theta_cdf = viz.compute_cdf(theta_pdf)
theta_cdf.plot(ax=axes[1])
axes[1].set_title(r'cdf')
axes[1].set_xlabel(r'Firm size, $\theta$')
theta_sf = viz.compute_sf(theta_cdf)
theta_sf.plot(ax=axes[2])
axes[2].set_title(r'sf')
axes[2].set_xlabel(r'Firm size, $\theta$')
plt.tight_layout()
plt.show()
```
## Distributions of factor payments
Can plot the distributions of average factor payments...
```python
fig, axes = plt.subplots(1, 3)
factor_payment_1_pdf = viz.compute_pdf('factor_payment_1', normalize=True)
factor_payment_1_pdf.plot(ax=axes[0])
axes[0].set_title(r'pdf')
factor_payment_1_cdf = viz.compute_cdf(factor_payment_1_pdf)
factor_payment_1_cdf.plot(ax=axes[1])
axes[1].set_title(r'cdf')
factor_payment_1_sf = viz.compute_sf(factor_payment_1_cdf)
factor_payment_1_sf.plot(ax=axes[2])
axes[2].set_title(r'sf')
plt.tight_layout()
plt.show()
```
```python
fig, axes = plt.subplots(1, 3)
factor_payment_2_pdf = viz.compute_pdf('factor_payment_2', normalize=True)
factor_payment_2_pdf.plot(ax=axes[0])
axes[0].set_title(r'pdf')
factor_payment_2_cdf = viz.compute_cdf(factor_payment_2_pdf)
factor_payment_2_cdf.plot(ax=axes[1])
axes[1].set_title(r'cdf')
factor_payment_2_sf = viz.compute_sf(factor_payment_2_cdf)
factor_payment_2_sf.plot(ax=axes[2])
axes[2].set_title(r'sf')
plt.tight_layout()
plt.show()
```
## Widget
```python
from IPython.html import widgets
```
```python
def interactive_plot(viz, omega_A=0.25, omega_B=0.5, sigma_A=0.5, sigma_B=1.0,
loc1=1.0, mu1=0.0, sigma1=1.0, loc2=1.0, mu2=0.0, sigma2=1.0):
# update new parameters as needed
new_F_params = {'omega_A': omega_A, 'omega_B': omega_B,
'sigma_A': sigma_A, 'sigma_B': sigma_B}
viz.solver.problem.F_params = new_F_params
new_input1_params = {'loc1': loc1, 'mu1': mu1, 'sigma1': sigma1}
viz.solver.problem.input1.params = new_input1_params
new_input2_params = {'loc2': loc2, 'mu2': mu2, 'sigma2': sigma2}
viz.solver.problem.input2.params = new_input2_params
# solve the model using a hotstart initial guess
domain = [viz.solver.problem.input1.lower, viz.solver.problem.input1.upper]
initial_coefs = viz.solver._coefs_array_to_dict(viz.solver.result.x, viz.solver.degrees)
viz.solver.solve(kind="Chebyshev",
coefs_dict=initial_coefs,
domain=domain,
method='hybr')
if viz.solver.result.success:
viz._Visualizer__solution = None # should not need to access this!
viz.interpolation_knots = np.linspace(domain[0], domain[1], 1000)
viz.solution[['mu', 'theta']].plot(subplots=True)
viz.normalized_residuals[['mu', 'theta']].plot(logy=True)
else:
        print("Foobar!")
```
```python
viz_widget = widgets.fixed(viz)
# widgets for the model parameters
eps = 1e-2
omega_A_widget = widgets.FloatSlider(value=0.25, min=eps, max=1-eps, step=eps,
description=r"$\omega_A$")
sigma_A_widget = widgets.FloatSlider(value=0.5, min=eps, max=1-eps, step=eps,
description=r"$\sigma_A$")
omega_B_widget = widgets.FloatSlider(value=0.5, min=eps, max=1-eps, step=eps,
description=r"$\omega_B$")
sigma_B_widget = widgets.fixed(1.0)
# widgets for input distributions
loc_widget = widgets.fixed(1.0)
mu_1_widget = widgets.FloatSlider(value=0.0, min=-1.0, max=1.0, step=eps,
description=r"$\mu_1$")
mu_2_widget = widgets.FloatSlider(value=0.0, min=-1.0, max=1.0, step=eps,
description=r"$\mu_2$")
sigma_1_widget = widgets.FloatSlider(value=1.0, min=eps, max=2-eps, step=eps,
description=r"$\sigma_1$")
sigma_2_widget = widgets.FloatSlider(value=1.0, min=eps, max=2-eps, step=eps,
description=r"$\sigma_2$")
```
```python
widgets.interact(interactive_plot, viz=viz_widget, omega_A=omega_A_widget,
sigma_A=sigma_A_widget, omega_B=omega_B_widget,
sigma_B=sigma_B_widget, sigma1=sigma_1_widget,
loc1=loc_widget, mu1 = mu_1_widget,
loc2=loc_widget, sigma2=sigma_2_widget, mu2 = mu_2_widget)
```
```python
# widget is changing the parameters of the underlying solver
solver.result.x
```
array([ 7.25026241e+01, 4.44958662e+01, -2.32094502e+01,
6.62025133e+00, 5.81461799e-01, -1.55946410e+00,
6.26485738e-01, 3.47032707e-02, -1.34255798e-01,
2.33827843e-02, 5.00922933e-02, -5.15670685e-02,
2.86571642e-02, -1.41408791e-02, 1.11098441e-02,
-1.18476805e-02, 1.15288324e-02, -9.86307280e-03,
8.03792622e-03, -6.69413471e-03, 5.77789650e-03,
-5.05085914e-03, 4.38977476e-03, -3.78748392e-03,
3.26426857e-03, -2.82203708e-03, 2.44698900e-03,
-2.12428421e-03, 1.84442215e-03, -1.60187017e-03,
1.39228307e-03, -1.21137286e-03, 1.05506872e-03,
-9.19854262e-04, 8.02819187e-04, -7.01524100e-04,
6.13866717e-04, -5.38013838e-04, 4.72372899e-04,
-4.15568976e-04, 3.66416864e-04, -3.23892681e-04,
2.87109830e-04, -2.55299928e-04, 2.27797146e-04,
-2.04024618e-04, 1.83482440e-04, -1.65737213e-04,
1.50413030e-04, -1.37183732e-04, 1.25766216e-04,
-1.15914624e-04, 1.07415296e-04, -1.00082376e-04,
9.37539977e-05, -8.82889744e-05, 8.35639309e-05,
-7.94707899e-05, 7.59145078e-05, -7.28109983e-05,
7.00853553e-05, -6.76706220e-05, 6.55070496e-05,
-6.35408793e-05, 6.17211507e-05, -5.99950522e-05,
5.83072166e-05, -5.66099134e-05, 5.48781544e-05,
-5.30977342e-05, 5.11925525e-05, -4.89365611e-05,
4.60308938e-05, -4.25082051e-05, 3.92086051e-05,
-1.87680507e-05, 6.10424640e+01, 3.86836905e+01,
-2.21649811e+01, 5.26294133e+00, 2.01751068e+00,
-2.41479138e+00, 8.91641452e-01, 3.96751691e-02,
-1.67706401e-01, 1.85366413e-02, 7.15768521e-02,
-6.46042588e-02, 2.92261437e-02, -8.98904604e-03,
5.76335767e-03, -7.66261342e-03, 7.85910010e-03,
-6.14814581e-03, 4.28490171e-03, -3.12973675e-03,
2.54158023e-03, -2.15279042e-03, 1.78914710e-03,
-1.45008533e-03, 1.17116479e-03, -9.58354064e-04,
7.94695948e-04, -6.62679003e-04, 5.53059870e-04,
-4.62188213e-04, 3.87674593e-04, -3.26777656e-04,
2.76738706e-04, -2.35322435e-04, 2.00898051e-04,
-1.72244304e-04, 1.48370419e-04, -1.28442076e-04,
1.11765505e-04, -9.77761048e-05, 8.60169736e-05,
-7.61151131e-05, 6.77628858e-05, -6.07054673e-05,
5.47317959e-05, -4.96670756e-05, 4.53662817e-05,
-4.17087266e-05, 3.85936833e-05, -3.59368849e-05,
3.36676811e-05, -3.17266913e-05, 3.00638572e-05,
-2.86368285e-05, 2.74096272e-05, -2.63515534e-05,
2.54362956e-05, -2.46411808e-05, 2.39464638e-05,
-2.33346025e-05, 2.27896763e-05, -2.22973074e-05,
2.18451093e-05, -2.14225261e-05, 2.10180922e-05,
-2.06143949e-05, 2.01871960e-05, -1.97180633e-05,
1.92152208e-05, -1.87045927e-05, 1.81448944e-05,
-1.73067164e-05, 1.58166339e-05, -1.35683931e-05,
1.12566297e-05, -5.08591626e-06])
```python
```
| 34d7df6e24a98e3516e0f87754074761bf0e52af | 404,230 | ipynb | Jupyter Notebook | examples/negative-assortative-matching.ipynb | crisla/pyAM | 759aa00fe430f017d195a4210b9655d8a10e275f | [
"MIT"
]
| null | null | null | examples/negative-assortative-matching.ipynb | crisla/pyAM | 759aa00fe430f017d195a4210b9655d8a10e275f | [
"MIT"
]
| 1 | 2015-10-28T17:35:52.000Z | 2015-10-28T18:18:12.000Z | examples/negative-assortative-matching.ipynb | crisla/pyAM | 759aa00fe430f017d195a4210b9655d8a10e275f | [
"MIT"
]
| 2 | 2018-10-15T19:33:47.000Z | 2019-10-27T16:31:44.000Z | 275.173587 | 56,266 | 0.895767 | true | 8,908 | Qwen/Qwen-72B | 1. YES
2. YES | 0.779993 | 0.72487 | 0.565394 | __label__eng_Latn | 0.134769 | 0.151929 |
# Building PGMs for the paper using daft
```python
from matplotlib import rc
import matplotlib.pyplot as plt
import sys
rc("font", family="serif", size=10)
rc("text", usetex=True)
import daft
```
## Model
For a single star, we have:
\begin{equation}
p(Q_{\rm WMB}, \theta\ |\ \mathcal{D}) = p(\mathcal{D}\ |\ \theta)\ p(\theta\ |\ Q_{\rm WMB})\ p(Q_{\rm WMB})\, , \text{where}\\
p(\mathcal{D}\ |\ \theta) = \mathcal{N}(\mathcal{D} - \theta, \sigma_{\mathcal{D}})\, ,\\
p(\theta\ |\ Q_{\rm WMB}) = Q_{\rm WMB}\, \text{KDE}_{\rm WMB}(\theta) + (1-Q_{\rm WMB})\, \text{KDE}_{s}(\theta)\, ,\\
p(Q_{\rm WMB}) = U(0, 1)\, ,
\end{equation}
where $\theta$ are the latent parameters for mass, temperature, age and rotation ($\theta = \{M, T_{\rm eff}, \tau, P\}$), drawn from a mixture of two Kernel Density Estimates (KDEs) modulated by the mixture parameter $Q_{\rm WMB}$.
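As a purely illustrative sketch of that mixture likelihood, it could be written with `scipy.stats.gaussian_kde`; the sample arrays and parameter values below are placeholders, not the KDEs used in the paper.
```python
import numpy as np
from scipy import stats

# hypothetical training samples for the two populations (placeholders);
# rows are the four latent dimensions (M, Teff, tau, P), columns are stars
samples_wmb = np.random.normal(loc=10.0, scale=2.0, size=(4, 500))
samples_s   = np.random.normal(loc=20.0, scale=5.0, size=(4, 500))

kde_wmb = stats.gaussian_kde(samples_wmb)   # KDE_WMB(theta)
kde_s   = stats.gaussian_kde(samples_s)     # KDE_s(theta)

def mixture_density(theta, q_wmb):
    """p(theta | Q_WMB) = Q_WMB * KDE_WMB(theta) + (1 - Q_WMB) * KDE_s(theta)."""
    return q_wmb * kde_wmb(theta) + (1.0 - q_wmb) * kde_s(theta)

# evaluate the mixture for one latent draw theta = (M, Teff, tau, P)
print(mixture_density([11.0, 12.0, 9.0, 10.0], q_wmb=0.3))
```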
```python
#Instantiate the PGM
sy = 1.
sx = 1.
pgm = daft.PGM(shape=[2.5*sx,2.7*sy])
# Ps
pgm.add_node(daft.Node("Ps", r"$Q_{\rm WMB}$", 1.5*sx, 2.5*sy, fontsize=7))
#p(theta | Ps, KDEs, KDEro)
pgm.add_node(daft.Node("theta", r"$\theta$", 1.5*sx, 1.5*sy))
pgm.add_node(daft.Node("KDEs", r"$\kappa_{\rm s}$", 0.5*sx, 1.5*sy, offset=(0,-5), fixed=True))
pgm.add_node(daft.Node("KDEro", r"$\kappa_{\rm WMB}$", 2.5*sx, 1.5*sy, offset=(0,-5), fixed=True))
#p(D | theta)
pgm.add_node(daft.Node("D", r"$\mathcal{D}$", 1.5*sx, 0.5*sy, observed=True))
pgm.add_node(daft.Node("sigD", r"$\sigma_{\mathcal{D}}$", 2.5*sx, 0.5*sy, offset=(0, -5), fixed=True))
# #Add in edges
pgm.add_edge('Ps','theta')
pgm.add_edge('KDEs', 'theta')
pgm.add_edge('KDEro', 'theta')
pgm.add_edge('theta', 'D')
pgm.add_edge('sigD', 'D')
```
<daft.Edge at 0x7fa96f1c6ca0>
```python
pgm.render()
pgm.figure.savefig('natastron/2nd_draft/Images/pgm_models.pdf')
# pgm.figure.savefig("/Users/Oliver/PhD/arcanum/Chapters/6_Chapter/Images/pgm_models.pdf")
# pgm.figure.savefig("pgm_models.pdf")
pgm.figure.savefig("natastron/Publication/Images/pgm_models.jpg", dpi=300)
plt.show()
```
```python
```
```python
```
| 092ae6127c05162df5506107587ff7240a5f952c | 10,103 | ipynb | Jupyter Notebook | paper/pgms.ipynb | ojhall94/halletal2021 | f6c30fca721f313e0fd417e1a6e81c5a9b9805e8 | [
"MIT"
]
| 1 | 2021-04-25T07:45:20.000Z | 2021-04-25T07:45:20.000Z | paper/pgms.ipynb | ojhall94/halletal2021 | f6c30fca721f313e0fd417e1a6e81c5a9b9805e8 | [
"MIT"
]
| null | null | null | paper/pgms.ipynb | ojhall94/halletal2021 | f6c30fca721f313e0fd417e1a6e81c5a9b9805e8 | [
"MIT"
]
| 1 | 2022-02-19T09:38:35.000Z | 2022-02-19T09:38:35.000Z | 65.603896 | 6,076 | 0.785113 | true | 780 | Qwen/Qwen-72B | 1. YES
2. YES | 0.90053 | 0.754915 | 0.679823 | __label__kor_Hang | 0.142649 | 0.417789 |
# Assignment
## Deadline: March 31st 2016
## Instructions
* Write a report of all the problems that you have solved and include the code that you used.
* You can use any packages or tools that you see most fit for the purpose.
* Your work will be evaluated based on your report and the tools used.
## Problem 1
A window is being built; the bottom is a rectangle and the top is a semicircle. If 12 m of framing material is available, what must the dimensions of the window be to make the window area as big as possible?
Model the decision problem as an optimization problem and solve it with a method of your choosing. Analyse the result.
## Problem 2
The 10-dimensional Rosenbrock function (one of its variants) is defined as
$$
f(\mathbf{x}) = \sum_{i=1}^{9} 100 (x_{i+1} - x_i^2 )^2 + (1-x_i)^2
$$
for $x\in\mathbb R^{10}$.
Compare the performance of at least two different optimization methods in minimizing this function over $\mathbb R^{10}$. You can choose whichever method of comparison makes the most sense to you.
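As a starting point (not a full comparison), a sketch of the function and of calling two `scipy.optimize` methods might look like this; the starting point and the choice of methods are arbitrary.
```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock_10d(x):
    """10-dimensional Rosenbrock variant from the problem statement."""
    x = np.asarray(x)
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

x0 = np.zeros(10)  # arbitrary starting point
for method in ("Nelder-Mead", "BFGS"):
    # note: derivative-free methods may need many more iterations in 10 dimensions
    res = minimize(rosenbrock_10d, x0, method=method)
    print(method, res.fun, res.nfev)
```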
## Problem 3
The problem will be available at http://mhartikainen.pythonanywhere.com/evaluate/. The call http://mhartikainen.pythonanywhere.com/evaluate/1.0/2.0/3.0/4.0 evaluates the optimization problem for variable values $x = (1.0, 2.0,3.0,4.0)$.
The format of the problem is
$$
\begin{align}
\min \ &f(x)\\
\text{s.t. }&g_1(x) \geq 0\\
&h_1(x) = 0\\
&h_2(x) = 0\\
&x\in \mathbb R^4.
\end{align}
$$
Calling the function will result in a json file, which will tell you all the information about the problem that you have available.
Solve the optimization problem given that you can make function calls only through a RESTful API (only the GET method is implemented). Use the tools and optimization method of your choosing. Analyse the results.
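A minimal sketch of wrapping the GET calls in Python (assuming the `requests` package; the keys of the returned JSON are whatever the service itself documents):
```python
import requests

BASE_URL = "http://mhartikainen.pythonanywhere.com/evaluate"

def evaluate(x):
    """Evaluate the black-box problem at x = (x1, x2, x3, x4) via the GET API."""
    url = "{}/{}/{}/{}/{}".format(BASE_URL, *x)
    response = requests.get(url)
    response.raise_for_status()
    # the JSON describes the objective and constraint values at this point
    return response.json()

print(evaluate((1.0, 2.0, 3.0, 4.0)))
```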
## Problem 4
Study the biobjective optimization problem
$$
\begin{align}
\min \ &(\|x-(1,0)\|,\|x-(0,1)\|)\\
\text{s.t. }&x\in \mathbb R^2.
\end{align}
$$
Try to generate an evenly spread representation of the Pareto front. Plot the results in both the decision and objective spaces.
| b3e7b25db07f09dd42b7a332fef694d7ad02515e | 3,817 | ipynb | Jupyter Notebook | Assignment.ipynb | maeehart/TIES483 | cce5c779aeb0ade5f959a2ed5cca982be5cf2316 | [
"CC-BY-3.0"
]
| 4 | 2019-04-26T12:46:14.000Z | 2021-11-23T03:38:59.000Z | Assignment.ipynb | maeehart/TIES483 | cce5c779aeb0ade5f959a2ed5cca982be5cf2316 | [
"CC-BY-3.0"
]
| null | null | null | Assignment.ipynb | maeehart/TIES483 | cce5c779aeb0ade5f959a2ed5cca982be5cf2316 | [
"CC-BY-3.0"
]
| 6 | 2016-01-08T16:28:11.000Z | 2021-04-10T05:18:10.000Z | 26.880282 | 246 | 0.559078 | true | 592 | Qwen/Qwen-72B | 1. YES
2. YES | 0.675765 | 0.908618 | 0.614012 | __label__eng_Latn | 0.996103 | 0.264886 |
## Developing a Theoretical Understanding of the Limitations of ML Interpretation Methods
### Linear Regression
Let's start with a simple 10-variable regression problem.
\begin{equation}
y = 2.0x_1 + 1.5x_2 + 1.2x_3 + 0.5x_4 + 0.2x_5 + x_6 + x_7 + x_8 + x_9 + x_{10} + \epsilon
\end{equation}
where $x_i \sim \mathcal{U}(-1,1)$ and $\epsilon \sim \mathcal{N}(\mu=0, \sigma=0.1)$. In the special case where $\epsilon = 0$ (or at least sufficiently small), the predictor ranking is based on the magnitude of the coefficients.
```python
import sys, os
current_dir = os.getcwd()
path = os.path.dirname(current_dir)
sys.path.append(path)
import mintpy
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split
```
```python
n_examples = 2000
n_vars = 10
weights = [2.0, 1.5, 1.2, 0.5, 0.2] + [1.0]*5
X = np.stack([np.random.uniform(-1,1, size=n_examples) for _ in range(n_vars)], axis=-1)
feature_names = [f'X_{i+1}' for i in range(n_vars)]
X = pd.DataFrame(X, columns=feature_names)
error = np.random.normal(loc=0, scale=0.1, size=n_examples)
y = X.dot(weights)#+error
```
```python
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0, test_size=0.25)
lr = LinearRegression()
lr.fit(X_train, y_train)
print('Learned Coefficients: ', lr.coef_)
```
Learned Coefficients: [2. 1.5 1.2 0.5 0.2 1. 1. 1. 1. 1. ]
Because the noise term is commented out when constructing $y$, the learned coefficients recover the true coefficients exactly; with the noise added back they would only be recovered approximately.
```python
myInterpreter = mintpy.InterpretToolkit(models=lr, model_names='Linear Regression', examples=X,targets=y,)
```
```python
results = myInterpreter.calc_ale(features=feature_names, n_bins=5)
```
100%|██████████| 10/10 [00:00<00:00, 131.40it/s]
```python
results = myInterpreter.calc_permutation_importance(
n_vars=len(feature_names),
evaluation_fn='mse',
n_bootstrap=1000,
subsample=1.0,
n_jobs=10,
)
```
```python
myInterpreter.plot_importance(method='singlepass')
```
Since the features are completely independent, the single-pass permutation method is a good option for ranking the features (a sketch of the procedure is given after the list below). The results show the correct rankings:
* $X_1$, $X_2$, and $X_3$ are the most important
* $X_6-X_{10}$ have equivalent ranking (discerned by the CIs)
* $X_4-X_5$ are the least important
```python
myInterpreter.plot_importance(method='multipass')
```
# Increasing error
The following test includes increasing the error and seeing if the rankings are altered.
```python
scales = np.arange(0.1, 1.7, 0.2)
ranked_features=[]
for s in scales:
error = np.random.normal(loc=0, scale=s, size=n_examples)
y = X.dot(weights)+error
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0, test_size=0.25)
lr = LinearRegression()
lr.fit(X_train, y_train)
results = myInterpreter.calc_permutation_importance(
n_vars=len(feature_names),
evaluation_fn='mse',
n_bootstrap=100,
subsample=1.0,
n_jobs=10,
)
ranked_features.append(results['multipass_rankings__Linear Regression'].values)
```
```python
def get_avg_ranking(ranked_features, feature_names):
"""Compute the average rankings for a set of features"""
rankings ={f:[] for f in feature_names}
for rank in ranked_features:
for f in feature_names:
rankings[f].append(np.where(rank==f))
for f in feature_names:
rankings[f] = np.mean(rankings[f])
return rankings
rankings = get_avg_ranking(ranked_features, feature_names)
```
```python
fig, ax = plt.subplots(dpi=150)
avg_rankings = [rankings[f] for f in feature_names]
ax.plot(np.arange(n_vars), avg_rankings, color='xkcd:medium blue', linewidth=2.5)
ax.set_xticks(np.arange(n_vars))
ax.set_xticklabels([f'${f}$' for f in feature_names])
ax.tick_params(labelsize=10)
ax.set_xlabel('Features')
ax.set_ylabel('Ranking (lower is better!)')
ax.grid(alpha=0.6)
```
## Introducing collinearity
Takeaways
*
*
```python
fig, ax = plt.subplots(dpi=150)
scales = np.arange(0.1, 1.0, 0.1)
for s in scales:
data1 = [np.random.uniform(-1,1, size=n_examples) for _ in range(int(n_vars/2))]
data2 = [d+np.random.normal(0,s, size=n_examples) for d in data1]
X = np.stack(data1+data2, axis=-1)
feature_names = [f'X_{i+1}' for i in range(n_vars)]
X = pd.DataFrame(X, columns=feature_names)
print(X.corr().abs()['X_1']['X_6'])
error = np.random.normal(loc=0, scale=0.1, size=n_examples)
y = X.dot(weights)+error
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0, test_size=0.25)
lr = LinearRegression()
lr.fit(X_train, y_train)
myInterpreter = mintpy.InterpretToolkit(models=lr,
model_names='Linear Regression',
examples=X_train,targets=y_train,
)
results = myInterpreter.calc_permutation_importance(
n_vars=len(feature_names),
evaluation_fn='mse',
n_bootstrap=100,
subsample=1.0,
n_jobs=10,
)
rankings = results['multipass_rankings__Linear Regression'].values
ranked_features.append(rankings)
rankings = [np.where(rankings==f)[0] for f in feature_names]
ax.plot(np.arange(n_vars), rankings, linewidth=2.5, label=f'$\sigma$={s:.1f}')
ax.set_xticks(np.arange(n_vars))
ax.set_xticklabels([f'${f}$' for f in feature_names])
ax.tick_params(labelsize=10)
ax.set_xlabel('Features')
ax.set_ylabel('Ranking (lower is better!)')
ax.grid(alpha=0.6)
ax.legend()
ax.set_title('10-var Linear Regression\nWith Varying levels of Correlations')
```
```python
fig, ax = plt.subplots(dpi=150)
scales = np.arange(0.1, 1.7, 0.2)
for s in scales:
data1 = [np.random.uniform(-1,1, size=n_examples) for _ in range(int(n_vars/2))]
data2 = [d+np.random.normal(0,s, size=n_examples) for d in data1]
X = np.stack(data1+data2, axis=-1)
feature_names = [f'X_{i+1}' for i in range(n_vars)]
X = pd.DataFrame(X, columns=feature_names)
error = np.random.normal(loc=0, scale=0.5, size=n_examples)
y = X.dot(weights)+error
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0, test_size=0.25)
rf = RandomForestRegressor()
rf.fit(X_train, y_train)
myInterpreter = mintpy.InterpretToolkit(
models=rf,
model_names='Random Forest',
examples=X_train,targets=y_train,
)
results = myInterpreter.calc_permutation_importance(
n_vars=len(feature_names),
evaluation_fn='mse',
n_bootstrap=100,
subsample=1.0,
n_jobs=10,
)
rankings = results['multipass_rankings__Random Forest'].values
ranked_features.append(rankings)
rankings = [np.where(rankings==f)[0] for f in feature_names]
ax.plot(np.arange(n_vars), rankings, linewidth=2.5, label=f'$\sigma$={s:.1f}')
ax.set_xticks(np.arange(n_vars))
ax.set_xticklabels([f'${f}$' for f in feature_names])
ax.tick_params(labelsize=10)
ax.set_xlabel('Features')
ax.set_ylabel('Ranking (lower is better!)')
ax.grid(alpha=0.6)
ax.legend()
ax.set_title('10-var Random Forest\nWith Varying levels of Correlations')
```
```python
```
| 0a78d9a42f767701614834232a178e352557e2cc | 489,178 | ipynb | Jupyter Notebook | tutorial_notebooks/.ipynb_checkpoints/theoretical_interpretations-checkpoint.ipynb | monte-flora/scikit-explain | d93ca4c77d1d47e613479ae36cc055ffaafea88c | [
"MIT"
]
| 9 | 2021-04-12T16:11:38.000Z | 2022-03-18T09:03:58.000Z | tutorial_notebooks/.ipynb_checkpoints/theoretical_interpretations-checkpoint.ipynb | monte-flora/py-mint | 0dd22a2e3dce68d1a12a6a99623cb4d4cb407b58 | [
"MIT"
]
| 21 | 2021-04-13T01:17:40.000Z | 2022-03-11T16:06:50.000Z | tutorial_notebooks/.ipynb_checkpoints/theoretical_interpretations-checkpoint.ipynb | monte-flora/mintpy | 23f9a952726dc0e69dfcdda2f8c7c27858aa9a11 | [
"MIT"
]
| 1 | 2021-11-15T20:56:46.000Z | 2021-11-15T20:56:46.000Z | 998.322449 | 159,168 | 0.951529 | true | 2,127 | Qwen/Qwen-72B | 1. YES
2. YES | 0.888759 | 0.875787 | 0.778363 | __label__eng_Latn | 0.656198 | 0.646731 |
# Lecture 12: Fourier Series
## Background
A summation of sines and cosines can be used to approximate periodic functions. The Fourier Series is one such series and is given by:
$$
f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n cos(nx) + \sum_{m=1}^{\infty} b_m sin(mx)
$$
The coefficients are related to the original function by:
$$
a_n = \frac{1}{\pi} \int^{2\pi}_0 f(s) \cos ns \; ds, \;\; n = 0,1,2,...
$$
and
$$
b_m = \frac{1}{\pi} \int^{2\pi}_0 f(s) \sin ms \; ds, \;\; m = 1,2,...
$$
This series with coefficients determined by the integrals may be used in the solution of [ordinary differential](https://en.wikipedia.org/wiki/Ordinary_differential_equation) and [partial differential equations](https://en.wikipedia.org/wiki/Partial_differential_equation). In materials engineering you will sometimes see diffusion problems use series solutions to describe the evolution of a concentration field where there is a factor composed of an infinite series and a factor containing an exponential in time. Together they are selected to describe the diffusive evolution of a system. A classic example of a [Fourier series](https://en.wikipedia.org/wiki/Fourier_series) in the solution to a diffusion problem is in Jackson and Hunt's paper on eutectic solidification. In that paper the boundary condition was represented by a Fourier series to model the composition profile across eutectic lamellae.
## What Skills Will I Learn?
* A wave can be described by two parameters: the frequency and amplitude.
* Arbitrary, periodic functions can be approximated from combinations of individual waves if the frequencies and amplitudes are chosen correctly.
* Sines and cosines are basis vectors and, for appropriate definitions of the dot product, are orthogonal in a particular (Fourier) space in the same way the unit vectors $\hat{i}$, $\hat{j}$, and $\hat{k}$ are orthogonal in Euclidean space.
* A generalized inner (dot) product of functions can be used to compute the correct combinations of frequencies and amplitudes to approximate a function.
## What Steps Should I Take?
1. Compute Fourier coefficients using the inner product of functions.
1. Learn how to shift the functions represented to arbitrary center points and domain widths.
1. Demonstrate that Fourier basis vectors are orthogonal by showing the inner product is zero over some domain.
1. Use a Fourier series to represent a sawtooth wave.
1. Prepare a new notebook (not just modifications to this one) that describes your approach to computing the above items.
## A Sucessful Jupyter Notebook Will
* Demonstrate the student's capability to calculate Fourier Series approximations to functions chosen by the student.
* Identify the audience for which the work is intended;
* Run the code necessary to compute and plot your Fourier series approximations;
* Provide a narrative and equations to explain why your approach is relevant to solving the problem;
* Provide references and citations to any others' work you use to complete the assignment;
* Be checked into your GitHub repository by the due date (one week from assignment).
A high quality communication provides an organized, logically progressing, blend of narrative, equations, and code that teaches the reader a particular topic or idea. You will be assessed on:
* The functionality of the code (i.e. it should perform the task assigned).
* The narrative you present. I should be able to read and learn from it. Choose your audience wisely.
* The supporting equations and figures you choose to include.
If your notebook is just computer code your assignment will be marked incomplete.
## Reading and Reference
* Essential Mathematical Methods for Physicists, H. Weber and G. Arfken, Academic Press, 2003
* Advanced Engineering Mathematics, E. Kreyszig, John Wiley and Sons, 2010
* Numerical Recipes, W. Press, Cambridge University Press, 1986
* C. Hammond, The Basics of Crystallography and Diffraction, Oxford Science Publications, 4th ed.
* B. Gustafsson, Fundamentals of Scientific Computing, Springer, 2011
* S. Farlow, Partial Differential Equations for Scientists and Engineers, Dover, 1993
### Representations of a Wave
---
A wave:
* is represented by a frequency and amplitude.
* is periodic on some domain, usually 0 to 2$\pi$ but can also be $-\pi$ to $\pi$ or anything else.
* can be summed in combination with other waves to construct more complex functions.
```python
# Note the form of the import statements. Keep the namespaces from colliding.
%matplotlib inline
import numpy as np
import sympy as sp
def plotSine(amplitude=2.4, frequency=np.pi/3.0, npoints=200):
"""
Plots a sine function with a user specified amplitude and frequency.
Parameters
----------
amplitude : amplitude of the sine wave.
frequency : the frequency of the sine wave.
npoints : the number of points to use when plotting the wave.
Returns
-------
A plot.
"""
import matplotlib.pyplot as plt
import numpy as np
t = np.linspace(0, 2*np.pi, npoints)
f = amplitude*np.sin(2*np.pi*frequency*t)
fname = r"$A(t) = A \sin(2 \pi f t)$"
fig, ax = plt.subplots()
ax.plot(t, f, label=fname)
ax.legend(loc='upper right')
ax.set_xlabel(r'$t$', fontsize=18)
ax.set_ylabel(r'$A$', fontsize=18)
ax.set_title('A Sine Wave');
plt.show()
return
```
```python
plotSine()
```
All the properties of the wave are specified with these three pieces of information:
* It is a sine wave
* It has amplitude 2.4
* It has frequency $\pi$/3
In the previous plot we know that the frequency of $\pi/3$ and coefficient (amplitude) of $2.4$ were linked through the `sin` function. So it isn't hard to extrapolate to a situation where we might have MANY functions, each with its own amplitude. We could also imagine having many `sin` functions each with a different frequency - so let us make a list of amplitudes and frequencies (numerically) that we can use for plotting. The following histogram plots the amplitudes for each frequency.
```python
def plotPower(amplitudes=[0,0,1.0,2.0,0,0,0], period=2.0*np.pi, npoints=200):
"""
Plots a power series and the associated function assuming that the amplitudes
provided are equally divided over the period of 2\pi unless specified. Can also
change the number of points to represent the function if necessary.
"""
import matplotlib.pyplot as plt
import numpy as np
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12,5))
fig.subplots_adjust(bottom=0.2)
frequencies = np.linspace(0, period, len(amplitudes))
t = np.linspace(0, period, npoints)
# Reminder: zip([1,2,3],[4,5,6]) --> [(1,4),(2,5),(3,6)]
f = sum([amplitude*np.sin(2*np.pi*frequency*t) for (amplitude, frequency) in zip(amplitudes, frequencies)])
ax[0].bar(frequencies, amplitudes)
ax[0].set_xlabel(r'$f$', fontsize=12)
ax[0].set_ylabel(r'$A$', fontsize=12)
ax[0].set_title(r'Power Spectrum')
ax[1].plot(t, f)
ax[1].set_xlabel(r'$f$', fontsize=12)
ax[1].set_ylabel(r'$A$', fontsize=12)
ax[1].set_title(r'Constructed Function')
plt.show()
return
```
```python
plotPower()
```
```python
plotPower(amplitudes=[0,1,1,1,0.4,0,0])
```
The plot above is one common way of visualizing the amplitudes of each term in a series. Each bar represents the amplitude of a particular frequency in the reconstructed function.
### A Vector Space and Dot Products
----
A vector is an element of a _vector space_. A vector space is the set of all vectors having dimension N.
We are introduced to the Euclidean vectors $\hat{i}$, $\hat{j}$, and $\hat{k}$ in physical problems and we gain a physical intuition for orthogonality. We also learn a mechanism for computing the [dot product](https://en.wikipedia.org/wiki/Dot_product) in Euclidean systems, but other generalizations are possible. One such generalization is the dot product of functions.
This dot product of functions can be used to determine Fourier coefficients.
```python
t = sp.symbols('t')
sp.init_printing()
def signal(x):
return (x*(2 - x)*(1 - x)**2)
```
```python
sp.plot(signal(t), (t,0,2));
```
Is there a way to approximate the function above? For real functions, the dot product can be generalized by the inner product, defined as:
$$ < f(x) | g(x) > = \int_{-L}^{L} f(x) g(x) dx $$
If this quantity is zero, then the functions are orthogonal. A set of mutually orthogonal functions forms a basis for a function space and can be used to approximate other functions.
The dot product for vectors v and w in Euclidean space has a geometric interpretation:
$$
\mathbf{v} \cdot \mathbf{w} = |v||w| \cos{\theta}
$$
This scalar quantity tells you how much of the vector v points along w, i.e., the magnitude of a vector pointing in the direction of $\hat{v}$ you need to add to some other (mutually orthogonal) vectors in order to reproduce w as a summation. When generalized to functions we write:
$$
< f(x) | g(x) > = \int_{-L}^{L} f(x) g(x) dx
$$
This computes how much of function $f(x)$ is projected onto $g(x)$. Using a point $x = a$, compute $f(a)$ and $g(a)$. $f(a)$ and $g(a)$ represent the height of each function above/below the x-axis, so a vector from (a, 0) to (a, f(a)) can be dotted with a vector from (a, 0) to (a, g(a)). They are necessarily parallel along the space that contains the x-axis, so their dot product is just the product of their magnitudes: $f(a)$ times $g(a)$. Now, multiply this by dx to keep the contribution from position $x=a$ proportional to how many additional x-positions you'll do this for. Take this dot product over and over, at each x-position, always scaling by $dx$ to keep it all in proportion. The sum of these dot products is the projection of $f(x)$ onto $g(x)$ (or vice-versa).
### Interactive Visualization of the Dot Product of Functions
----
```python
import matplotlib as mpl
import matplotlib.pyplot as plt
from ipywidgets import interact, fixed
# Somehow we want to add this text to the plot...
# dot_prod_value = sp.integrate(sp.sin(2*x)*sp.sin(x), (x, 0, 2*sp.pi))
def npf(x):
return np.sin(2*x)
def npg(x):
return np.sin(x)
def spf(x):
return sp.sin(2*x)
def spg(x):
return sp.sin(x)
# Make ff and gg tuples of np/sp functions? - or we can lambdafy the sp functions.
def myfig(ff,gg,a):
"""
    Plot f(x), g(x), and their product f(x)*g(x), with arrows marking each value at x = a.
"""
x = np.linspace(0, 2*np.pi, 100)
y1 = ff(x)
y2 = gg(x)
y3 = ff(x)*gg(x)
fig = plt.figure(figsize=(8,5))
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8])
axes.plot(x, y1, 'r', label=r"$f(x)$")
axes.arrow(a, 0, 0, ff(a), length_includes_head=True, head_length=0.1, head_width=0.1, color='r')
axes.plot(x, y2, 'g', label=r"$g(x)$")
axes.arrow(a, 0, 0, gg(a), length_includes_head=True, head_length=0.1, head_width=0.1, color='g')
axes.plot(x, y3, 'b', label=r"$f(x) \cdot g(x)$")
axes.arrow(a, 0, 0, ff(a)*gg(a), length_includes_head=True, head_length=0.1, head_width=0.1, color='b')
axes.legend()
axes.grid(True)
plt.show()
return
```
```python
interact(myfig, ff=fixed(npf), gg=fixed(npg), a=(0,np.pi*2,0.05))
```
interactive(children=(FloatSlider(value=3.1, description='a', max=6.283185307179586, step=0.05), Output()), _d…
<function __main__.myfig(ff, gg, a)>
Using `scipy` we can perform this and other integrations numerically. Two examples are given for the following functions:
```python
from scipy import integrate
import numpy as np
def myfunc1(x):
return np.sin(4*x)
def myfunc2(x):
return np.sin(x)
def myfunc3(x):
return myfunc1(x)*myfunc2(x)
def myfuncx2(x):
return x**2
```
```python
[integrate.quad(myfuncx2, 0, 4), 4.0**3/3.0]
```
```python
integrate.quad(myfunc3, 0, 2*np.pi)
```
```python
import sympy as sp
sp.init_printing()
n, m = sp.symbols('n m', Integer=True)
x = sp.symbols('x')
def f(x):
return sp.sin(n*x)
def g(x):
return sp.sin(m*x)
# scope of variables in def is local.
def func_dot(f, g, lb, ub):
return sp.integrate(f(x)*g(x), (x, lb, ub))
func_dot(f, g, 0, 2*sp.pi)
```
### DIY: Demonstrate the Inner Product of Certain Functions are Zero
----
Identify the conditions under which the inner product of:
$$
<\sin{4x}, \sin{x}>
$$
and
$$
<\sin{nx}, \sin{mx}>
$$
are zero.
### The Fourier Series Definied on Arbitrary Ranges
This discussion is derived from Sean Mauch's open source Applied Mathematics textbook. If $f(x)$ is defined over $c-L \leq x \leq c+L $ and $f(x+2L) = f(x)$ then $f(x)$ can be written as:
$$
f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n cos \left( \frac{n \pi (x+c)}{L} \right) + \sum_{m=1}^{\infty} b_m sin \left( \frac{m \pi (x+c)}{L} \right)
$$
and the coefficients:
$$ a_n = \langle f(x) | \cos \left( \frac{n\pi (x+c)}{L} \right) \rangle = \frac{1}{L}\int^{c+L}_{c-L} f(x) \cos \frac{n \pi (x+c)}{L} $$
and
$$ b_m = \langle f(x) | \sin \left( \frac{m\pi (x+c)}{L} \right) \rangle = \frac{1}{L}\int^{c+L}_{c-L} f(x) \sin \frac{m \pi (x+c)}{L} $$
Using our generalized dot product for functions as defined above we can compute the Fourier coefficients. The code for this follows in functions `a_n_amplitudes` and `b_m_amplitudes`.
### Computing the Fourier Coefficients by Hand
----
Note: These next couple of cells take a few seconds to run.
```python
import sympy as sp
import numpy as np
x = sp.symbols('x')
dum = sp.symbols('dum')
sp.init_printing()
lam = 2
center = 1
def signal(x):
return (x*(2 - x)*(1 - x)**2)
def mySpecialFunction(x):
return sp.sin(2*x)
def b_m_amplitudes(n, funToProject, center, lam):
return (2/lam)*sp.integrate(funToProject(dum)*sp.sin(2*n*sp.pi*dum/lam), (dum,center-lam/2,center+lam/2))
def a_n_amplitudes(m, funToProject, center, lam):
return (2/lam)*sp.integrate(funToProject(dum)*sp.cos(2*m*sp.pi*dum/lam), (dum,center-lam/2,center+lam/2))
def b_m_vectorspace_element(n, var, lam):
return sp.sin(2*n*sp.pi*var/lam)
def a_n_vectorspace_element(m, var, lam):
if m==0:
return sp.Rational(1,2)
elif m!=0:
return sp.cos(2*m*sp.pi*var/lam)
```
```python
terms = 3
funToProject = signal
an_vectors = [a_n_vectorspace_element(n, x, lam) for n in range(terms)]
an_amplitudes = [a_n_amplitudes(n, funToProject, center, lam) for n in range(terms)]
bm_vectors = [b_m_vectorspace_element(m, x, lam) for m in range(terms)]
bm_amplitudes = [b_m_amplitudes(m, funToProject, center, lam) for m in range(terms)]
```
We use a list comprehension to collect the basis vectors and amplitudes into a useful data structure through the `zip` function.
```python
truncatedSeries = (sum([a*b for a,b in zip(an_vectors,an_amplitudes)])
+ sum([c*d for c,d in zip(bm_vectors,bm_amplitudes)]))
truncatedSeries
```
We can now plot this series and see the comparison of the signal (blue) and the series representation (red). We can quantitatively describe the accuracy between the approximation and the function.
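One simple way to quantify that accuracy is the $L_2$ error of the truncated series over the interval. This is a sketch that reuses `sp`, `x`, `signal`, and `truncatedSeries` from the cells above; the choice of grid and of norm is arbitrary.
```python
import numpy as np

# evaluate both the signal and the truncated series on a fine grid
series_func = sp.lambdify(x, truncatedSeries, 'numpy')
xg = np.linspace(0, 2, 500)
err = signal(xg) - series_func(xg)

# approximate the L2 error with the trapezoidal rule
l2_error = np.sqrt(np.trapz(err**2, xg))
print(l2_error)
```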
```python
p = sp.plot(signal(x), truncatedSeries, (x, 0, 2), show=False, title=r'Comparison of Series (Red) and Function (Blue)')
p[0].line_color = 'blue'
p[1].line_color = 'red'
p.show()
```
It is also possible to unpack the series above and look at the plot of each individual term's contribution to the approximate function.
```python
test = [c*d for c,d in zip(an_vectors,an_amplitudes)]
# plot the constant (n=0) term first, then append each cosine term with its own color
p2 = sp.plot(test[0], (x,0,2), show=False)
for i in range(1, terms, 1):
    p2.append(sp.plot(test[i], (x,0,2), show=False)[0])
    p2[i].line_color = 1.0-i/5.0, i/5.0, 0.3
p2.show()
```
### Computing the Fourier Coefficients using Sympy
----
Here we use `sympy`'s `fourier_series` function to build a truncated series. We plot the series so that you can explore what happens when you change the number of terms. The `interact` command creates a widget you can use to explore the effect of changing the number of terms.
```python
import sympy as sp
import matplotlib.pyplot as plt
from ipywidgets import interact, fixed
```
```python
sp.fourier_series(x**2)
```
```python
sp.init_printing()
x = sp.symbols('x')
def myAwesomeFunction(a):
return a
def fsMyFunc(terms, var):
return sp.fourier_series(myAwesomeFunction(var), (var, -sp.pi, sp.pi)).truncate(n=terms)
def plotMyFunc(terms):
p1 = sp.plot(fsMyFunc(terms,x),(x,-sp.pi, sp.pi), show=False, line_color='r')
p2 = sp.plot(myAwesomeFunction(x), (x,-sp.pi,sp.pi), show=False, line_color='b')
p2.append(p1[0])
p2.show()
return None
plt.rcParams['lines.linewidth'] = 3
plt.rcParams['figure.figsize'] = 8, 6
```
```python
interact(plotMyFunc, terms=(1,10,1));
```
interactive(children=(IntSlider(value=5, description='terms', max=10, min=1), Output()), _dom_classes=('widget…
### Homework: Series for a Sawtooth Wave
----
Using a Fourier series, represent the following periodic function:
$$f(x) = \left\{
\begin{array}{ll}
x, & 0 \leq x \leq \pi, \\
x-2\pi, & \pi \leq x \leq 2\pi,
\end{array}
\right.$$
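As a setup sketch only (the series itself is left as the exercise), the periodic piece on $[0, 2\pi]$ can be written as a `sympy.Piecewise` and plotted:
```python
import sympy as sp

x = sp.symbols('x')
# the sawtooth on one period, matching the piecewise definition above
sawtooth = sp.Piecewise((x, x <= sp.pi), (x - 2*sp.pi, x > sp.pi))
sp.plot(sawtooth, (x, 0, 2*sp.pi))
```
From here you could reuse the `a_n_amplitudes`/`b_m_amplitudes` helpers from earlier (wrapping the expression in a small lambda such as `lambda s: sawtooth.subs(x, s)`) or experiment with `sp.fourier_series` directly.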
### Homework: Compute Your Own
----
Compute a Fourier series for a function of your choosing. Think about the restrictions on the use of the Fourier series.
| 092519e29d3d410b69fb0cbee5f4c1e9cd8169f6 | 255,395 | ipynb | Jupyter Notebook | Lecture-12-Fourier-Series.ipynb | mathinmse/mathinmse.github.io | 837e508bfeeb7d108019fb9bc499066b2b653551 | [
"MIT"
]
| 23 | 2017-07-19T04:04:38.000Z | 2022-02-18T19:33:43.000Z | Lecture-12-Fourier-Series.ipynb | mathinmse/mathinmse.github.io | 837e508bfeeb7d108019fb9bc499066b2b653551 | [
"MIT"
]
| 2 | 2019-04-08T15:21:45.000Z | 2020-03-03T20:19:00.000Z | Lecture-12-Fourier-Series.ipynb | mathinmse/mathinmse.github.io | 837e508bfeeb7d108019fb9bc499066b2b653551 | [
"MIT"
]
| 11 | 2017-07-27T02:27:49.000Z | 2022-01-27T08:16:40.000Z | 230.500903 | 45,140 | 0.913213 | true | 4,971 | Qwen/Qwen-72B | 1. YES
2. YES | 0.899121 | 0.90599 | 0.814595 | __label__eng_Latn | 0.97989 | 0.73091 |
```python
import sympy
from phasor.utilities.ipynb.displays import *
from phasor.utilities.ipynb.ipy_sympy import *
import scipy.linalg
import numpy.testing as np_test
import declarative as decl
from declarative.bunch import (
DeepBunch,
)
#import numpy as np
from phasor import system
from phasor import readouts
from phasor import optics
from phasor.optics.nonlinear_crystal import NonlinearCrystal
from phasor.utilities.print import pprint
```
Populating the interactive namespace from numpy and matplotlib
Sympy version: 1.0
```python
sys = system.BGSystem()
sys.own.PSL = optics.Laser(
F = sys.system.F_carrier_1064,
power_W = 1.,
)
sys.own.dither = optics.AM()
sys.own.ktp = NonlinearCrystal(
nlg = .1,
length_mm = np.linspace(0, 20,300),
N_ode = 1000,
solution_order = 5,
)
sys.own.mDC2 = optics.HarmonicMirror(
mirror_H1 = optics.Mirror(
T_hr = 1,
),
mirror_H2 = optics.Mirror(
T_hr = 0,
),
AOI_deg = 45,
)
sys.own.PD_R = optics.MagicPD()
sys.own.PD_G = optics.MagicPD()
sys.own.hPD_R = optics.HiddenVariableHomodynePD()
sys.own.hPD_G = optics.HiddenVariableHomodynePD()
sys.PSL.po_Fr.bond_sequence(
sys.dither.po_Fr,
sys.ktp.po_Fr,
sys.mDC2.po_FrA,
sys.PD_R.po_Fr,
sys.hPD_R.po_Fr,
)
sys.mDC2.po_FrB.bond_sequence(
sys.PD_G.po_Fr,
sys.hPD_G.po_Fr,
)
sys.own.DC_R = readouts.DCReadout(
port = sys.PD_R.Wpd.o,
)
sys.own.DC_G = readouts.DCReadout(
port = sys.PD_G.Wpd.o,
)
#print("A")
#pprint(sys.ctree.test.PSL)
#print("sys.DC_R.DC_readout", sys.DC_R.DC_readout, 2)
#print("sys.DC_G.DC_readout", sys.DC_G.DC_readout, 1)
```
```python
axB = mplfigB(Nrows=1)
axB.ax0.plot(sys.ktp.length_mm, sys.DC_R.DC_readout, color = 'red')
axB.ax0.plot(sys.ktp.length_mm, sys.DC_G.DC_readout, color = 'green')
axB.ax0.plot(sys.ktp.length_mm, sys.DC_R.DC_readout + sys.DC_G.DC_readout, color = 'black')
axB.ax0.plot(sys.ktp.length_mm, 1 * np.tanh(.200 * sys.ktp.length_mm)**2, ls = '--', color = 'blue')
axB.ax0.set_ylim(0, 1.1)
```
```python
sys = system.BGSystem()
sys.own.PSL = optics.Laser(
F = sys.system.F_carrier_1064,
power_W = 1.,
)
sys.own.dither = optics.AM()
sys.own.ktp = NonlinearCrystal(
nlg = .1,
length_mm = 10,#np.linspace(0, 20, 2),
N_ode = 100,
)
sys.own.mDC2 = optics.HarmonicMirror(
mirror_H1 = optics.Mirror(
T_hr = 1,
),
mirror_H2 = optics.Mirror(
T_hr = 0,
),
AOI_deg = 45,
)
sys.own.PD_R = optics.MagicPD()
sys.own.PD_G = optics.MagicPD()
sys.own.hPD_R = optics.HiddenVariableHomodynePD()
sys.own.hPD_G = optics.HiddenVariableHomodynePD()
sys.system.bond_sequence(
sys.PSL.po_Fr,
sys.dither.po_Fr,
sys.ktp.po_Fr,
sys.mDC2.po_FrA,
sys.PD_R.po_Fr,
sys.hPD_R.po_Fr,
)
sys.system.bond_sequence(
sys.mDC2.po_FrB,
sys.PD_G.po_Fr,
sys.hPD_G.po_Fr,
)
sys.own.DC_R = readouts.DCReadout(
port = sys.PD_R.Wpd.o,
)
sys.own.DC_G = readouts.DCReadout(
port = sys.PD_G.Wpd.o,
)
sys.own.AC_G = readouts.HomodyneACReadout(
portNI = sys.hPD_G.rtWpdI.o,
portNQ = sys.hPD_G.rtWpdQ.o,
portD = sys.dither.Drv.i,
)
sys.own.AC_R = readouts.HomodyneACReadout(
portNI = sys.hPD_R.rtWpdI.o,
portNQ = sys.hPD_R.rtWpdQ.o,
portD = sys.dither.Drv.i,
)
sys.own.AC_RGI = readouts.HomodyneACReadout(
portNI = sys.hPD_R.rtWpdI.o,
portNQ = sys.hPD_G.rtWpdI.o,
portD = sys.dither.Drv.i,
)
sys.own.AC_N = readouts.NoiseReadout(
port_map = dict(
RI = sys.hPD_R.rtWpdI.o,
RQ = sys.hPD_R.rtWpdQ.o,
GI = sys.hPD_G.rtWpdI.o,
GQ = sys.hPD_G.rtWpdQ.o,
)
)
#print("A")
#pprint(sys.ctree.test.PSL)
#print("sys.DC_R.DC_readout", sys.DC_R.DC_readout, 2)
#print("sys.DC_G.DC_readout", sys.DC_G.DC_readout, 1)
```
```python
print(sys.AC_R.AC_CSD_IQ[:,:].real)
print(sys.AC_G.AC_CSD_IQ[:,:].real)
print(sys.AC_RGI.AC_CSD_IQ[:,:].real)
print(sys.AC_R.AC_CSD_IQ[:,:].imag)
print(sys.AC_G.AC_CSD_IQ[:,:].imag)
print(sys.AC_RGI.AC_CSD_IQ[:,:].imag)
#print(sys.AC_G.AC_CSD_IQ)
```
DELTA V MAX: 1.0 AT ORDER: 2
DELTA V MAX: 1.0 AT ORDER: 3
DELTA V MAX: 1.0 AT ORDER: 4
DELTA V MAX: 0 AT ORDER: 5
[[ 2.95063498e-20 0.00000000e+00]
[ 0.00000000e+00 3.20266704e-19]]
[[ 1.70619168e-19 0.00000000e+00]
[ 0.00000000e+00 2.21543722e-19]]
[[ 2.95063498e-20 -1.98021227e-20]
[ -1.98021227e-20 1.70619168e-19]]
[[ 0.00000000e+00 -4.27315706e-34]
[ -4.51389831e-34 0.00000000e+00]]
[[ 0.00000000e+00 2.40741243e-35]
[ 2.40741243e-35 0.00000000e+00]]
[[ 0. 0.]
[ 0. 0.]]
```python
print(sys.AC_R.AC_CSD_ellipse.min / 9.33e-20)
pprint(sys.AC_R.AC_CSD_ellipse)
print(sys.AC_G.AC_CSD_ellipse.min / (2 * 9.33e-20))
pprint(sys.AC_G.AC_CSD_ellipse)
print(sys.AC_RGI.AC_CSD_ellipse.min / (9.33e-20))
pprint(sys.AC_RGI.AC_CSD_ellipse)
```
0.316252409861
Bunch(
'deg' = 89.999999631349098,
'max' = 3.2026670395521625e-19,
'min' = 2.95063498399929e-20,
'rad' = 1.5707963203607247,
)
0.914357815051
Bunch(
'deg' = 89.999998754240323,
'max' = 2.2154372219180949e-19,
'min' = 1.7061916828853204e-19,
'rad' = 1.5707963050522886,
)
0.287033410317
Bunch(
'deg' = 82.161436769909585,
'max' = 1.7334530094597723e-19,
'min' = 2.6780217182547756e-20,
'rad' = 1.4339875898040568,
)
```python
def gen_arr(lst = ['RI', 'GI']):
arr = np.zeros((len(lst), len(lst)))
for idx_L, NL in enumerate(lst):
for idx_R, NR in enumerate(lst):
arr[idx_L, idx_R] = sys.AC_N.CSD[(NL, NR)].real
return arr
print(gen_arr())
print(gen_arr(['RQ', 'GQ']))
print(gen_arr(['RI', 'GI', 'RQ', 'GQ']))
```
[[ 2.95063498e-20 -1.98021227e-20]
[ -1.98021227e-20 1.70619168e-19]]
[[ 3.20266704e-19 7.43405451e-20]
[ 7.43405451e-20 2.21543722e-19]]
[[ 2.95063498e-20 -1.98021227e-20 0.00000000e+00 0.00000000e+00]
[ -1.98021227e-20 1.70619168e-19 0.00000000e+00 0.00000000e+00]
[ 0.00000000e+00 0.00000000e+00 3.20266704e-19 7.43405451e-20]
[ 0.00000000e+00 0.00000000e+00 7.43405451e-20 2.21543722e-19]]
```python
sys = system.BGSystem()
sys.own.PSLR = optics.Laser(
F = sys.system.F_carrier_1064,
power_W = 1.,
)
sys.own.PSLG = optics.Laser(
F = sys.system.F_carrier_1064,
power_W = 1.,
multiple = 2,
)
sys.own.dither = optics.AM()
sys.own.ktp = NonlinearCrystal(
nlg = .1,
length_mm = np.linspace(0, 10, 100),
N_ode = 100,
)
sys.own.mDC2 = optics.HarmonicMirror(
mirror_H1 = optics.Mirror(
T_hr = 1,
),
mirror_H2 = optics.Mirror(
T_hr = 0,
),
AOI_deg = 45,
)
sys.own.PD_R = optics.MagicPD()
sys.own.PD_G = optics.MagicPD()
sys.own.hPD_R = optics.HiddenVariableHomodynePD(
source_port = sys.PSLR.po_Fr.o,
)
sys.own.hPD_G = optics.HiddenVariableHomodynePD(
source_port = sys.PSLG.po_Fr.o,
)
sys.system.bond_sequence(
sys.PSLR.po_Fr,
sys.dither.po_Fr,
sys.ktp.po_Fr,
sys.mDC2.po_FrA,
sys.PD_R.po_Fr,
sys.hPD_R.po_Fr,
)
sys.system.bond_sequence(
sys.mDC2.po_FrB,
sys.PD_G.po_Fr,
sys.hPD_G.po_Fr,
)
sys.own.DC_R = readouts.DCReadout(
port = sys.PD_R.Wpd.o,
)
sys.own.DC_G = readouts.DCReadout(
port = sys.PD_G.Wpd.o,
)
sys.own.AC_G = readouts.HomodyneACReadout(
portNI = sys.hPD_G.rtWpdI.o,
portNQ = sys.hPD_G.rtWpdQ.o,
portD = sys.dither.Drv.i,
)
sys.own.AC_R = readouts.HomodyneACReadout(
portNI = sys.hPD_R.rtWpdI.o,
portNQ = sys.hPD_R.rtWpdQ.o,
portD = sys.dither.Drv.i,
)
sys.own.AC_RGI = readouts.HomodyneACReadout(
portNI = sys.hPD_R.rtWpdI.o,
portNQ = sys.hPD_G.rtWpdI.o,
portD = sys.dither.Drv.i,
)
sys.own.AC_N = readouts.NoiseReadout(
port_map = dict(
RI = sys.hPD_R.rtWpdI.o,
RQ = sys.hPD_R.rtWpdQ.o,
GI = sys.hPD_G.rtWpdI.o,
GQ = sys.hPD_G.rtWpdQ.o,
)
)
#print("A")
#pprint(sys.ctree.test.PSL)
#print("sys.DC_R.DC_readout", sys.DC_R.DC_readout, 2)
#print("sys.DC_G.DC_readout", sys.DC_G.DC_readout, 1)
```
```python
axB = mplfigB(Nrows=3)
axB.ax0.plot(sys.ktp.length_mm, sys.DC_R.DC_readout, color = 'red')
axB.ax0.plot(sys.ktp.length_mm, sys.DC_G.DC_readout, color = 'green')
axB.ax0.plot(sys.ktp.length_mm, sys.DC_R.DC_readout + sys.DC_G.DC_readout, color = 'black')
axB.ax0.plot(sys.ktp.length_mm, 1 * np.tanh(.200 * sys.ktp.length_mm)**2, ls = '--', color = 'blue')
axB.ax0.set_ylim(0, 1.1)
axB.ax1.plot(sys.ktp.length_mm, sys.AC_R.AC_CSD_IQ[0,0]**.5, color = 'red')
axB.ax1.plot(sys.ktp.length_mm, sys.AC_R.AC_CSD_IQ[1,1]**.5, color = 'blue')
axB.ax1.plot(sys.ktp.length_mm, sys.AC_R.AC_CSD_ellipse.max**.5, color = 'orange', ls = '--')
axB.ax1.plot(sys.ktp.length_mm, sys.AC_R.AC_CSD_ellipse.min**.5, color = 'purple', ls = '--')
axB.ax1.plot(sys.ktp.length_mm, sys.AC_R.AC_CSD_ellipse.min**.25 * sys.AC_R.AC_CSD_ellipse.max**.25, color = 'black', ls = '--')
axB.ax2.plot(sys.ktp.length_mm, sys.AC_G.AC_CSD_IQ[0,0]**.5, color = 'red')
axB.ax2.plot(sys.ktp.length_mm, sys.AC_G.AC_CSD_IQ[1,1]**.5, color = 'blue')
axB.ax2.plot(sys.ktp.length_mm, sys.AC_G.AC_CSD_ellipse.max**.5, color = 'orange', ls = '--')
axB.ax2.plot(sys.ktp.length_mm, sys.AC_G.AC_CSD_ellipse.min**.5, color = 'purple', ls = '--')
axB.ax2.plot(sys.ktp.length_mm, sys.AC_G.AC_CSD_ellipse.min**.25 * sys.AC_G.AC_CSD_ellipse.max**.25, color = 'black', ls = '--')
#axB.ax0.plot(sys.ktp.length_mm, sys.DC_G.DC_readout, color = 'green')
#axB.ax0.plot(sys.ktp.length_mm, sys.DC_R.DC_readout + sys.DC_G.DC_readout, color = 'black')
#axB.ax0.plot(sys.ktp.length_mm, 1 * np.tanh(.100 * sys.ktp.length_mm)**2, color = 'blue')
#axB.ax0.set_ylim(0, 1.1)
```
```python
1.802e-19 - 2*2.36e-20
```
```python
sys.AC_RGI.AC_CSD_IQ[0,0]
```
(2.9506349839992912e-20+0j)
```python
sys.AC_RGI.AC_CSD_ellipse
```
Bunch(
'deg' = 82.1614367699,
'max' = 1.73345300946e-19,
'min' = 2.67802171825e-20,
'rad' = 1.4339875898,
)
```python
```
```python
sys = system.BGSystem()
sys.own.PSLG = optics.Laser(
F = sys.system.F_carrier_1064,
power_W = 1.,
multiple = 2,
)
sys.own.PSLR = optics.Laser(
F = sys.system.F_carrier_1064,
power_W = 0.001,
multiple = 1,
)
sys.own.dither = optics.AM()
sys.own.ktp = NonlinearCrystal(
nlg = .1,
length_mm = 10, #np.linspace(0, 20, 2),
N_ode = 20,
)
sys.own.mDC1 = optics.HarmonicMirror(
mirror_H1 = optics.Mirror(
T_hr = 0,
),
mirror_H2 = optics.Mirror(
T_hr = 1,
),
AOI_deg = 45,
)
sys.own.mDC2 = optics.HarmonicMirror(
mirror_H1 = optics.Mirror(
T_hr = 1,
),
mirror_H2 = optics.Mirror(
T_hr = 0,
),
AOI_deg = 45,
)
sys.own.PD_R = optics.MagicPD()
sys.own.PD_G = optics.MagicPD()
sys.own.hPD_R = optics.HiddenVariableHomodynePD(
source_port = sys.PSLR.po_Fr.o,
)
sys.own.hPD_G = optics.HiddenVariableHomodynePD()
sys.system.bond_sequence(
sys.PSLG.po_Fr,
sys.mDC1.po_FrA,
sys.dither.po_Fr,
sys.ktp.po_Fr,
sys.mDC2.po_FrA,
sys.PD_R.po_Fr,
sys.hPD_R.po_Fr,
)
#sys.system.bond_sequence(
# sys.PSLR.po_Fr,
# sys.mDC1.po_BkB,
#)
sys.system.bond_sequence(
sys.mDC2.po_FrB,
sys.PD_G.po_Fr,
sys.hPD_G.po_Fr,
)
sys.own.DC_R = readouts.DCReadout(
port = sys.PD_R.Wpd.o,
)
sys.own.DC_G = readouts.DCReadout(
port = sys.PD_G.Wpd.o,
)
sys.own.AC_G = readouts.HomodyneACReadout(
portNI = sys.hPD_G.rtWpdI.o,
portNQ = sys.hPD_G.rtWpdQ.o,
portD = sys.dither.Drv.i,
)
sys.own.AC_R = readouts.HomodyneACReadout(
portNI = sys.hPD_R.rtWpdI.o,
portNQ = sys.hPD_R.rtWpdQ.o,
portD = sys.dither.Drv.i,
)
sys.own.AC_RGI = readouts.HomodyneACReadout(
portNI = sys.hPD_R.rtWpdI.o,
portNQ = sys.hPD_G.rtWpdI.o,
portD = sys.dither.Drv.i,
)
sys.own.AC_N = readouts.NoiseReadout(
port_map = dict(
RI = sys.hPD_R.rtWpdI.o,
RQ = sys.hPD_R.rtWpdQ.o,
GI = sys.hPD_G.rtWpdI.o,
GQ = sys.hPD_G.rtWpdQ.o,
)
)
#print("A")
#pprint(sys.ctree.test.PSL)
#print("sys.DC_R.DC_readout", sys.DC_R.DC_readout, 2)
#print("sys.DC_G.DC_readout", sys.DC_G.DC_readout, 1)
```
```python
print(sys.AC_R.AC_CSD_IQ[:,:])
print(sys.AC_G.AC_CSD_IQ[:,:])
print(sys.AC_RGI.AC_CSD_IQ[:,:])
print((sys.AC_R.AC_CSD_ellipse.min * sys.AC_R.AC_CSD_ellipse.max)**.5)
print(sys.AC_R.AC_CSD_ellipse.min / (sys.AC_R.AC_CSD_ellipse.min * sys.AC_R.AC_CSD_ellipse.max)**.5)
sys.AC_R.AC_CSD_ellipse
```
DELTA V MAX: 1.0 AT ORDER: 2
DELTA V MAX: 1.0 AT ORDER: 3
DELTA V MAX: 0 AT ORDER: 4
[[ 3.51189991e-19+0.j -3.38556848e-19+0.j]
[ -3.38556848e-19+0.j 3.51189991e-19+0.j]]
[[ 1.86696036e-19+0.j 0.00000000e+00+0.j]
[ 0.00000000e+00+0.j 1.86696036e-19+0.j]]
[[ 3.51189991e-19+0.j 0.00000000e+00+0.j]
[ 0.00000000e+00+0.j 1.86696036e-19+0.j]]
9.33470448838e-20
0.13533522683
Bunch(
'deg' = 45.0,
'max' = 6.89746838796e-19,
'min' = 1.26331434933e-20,
'rad' = 0.785398163397,
)
```python
axB = mplfigB(Nrows=1)
axB.ax0.plot(sys.ktp.length_mm, sys.DC_R.DC_readout, color = 'red')
axB.ax0.plot(sys.ktp.length_mm, sys.DC_G.DC_readout, color = 'green')
axB.ax0.plot(sys.ktp.length_mm, sys.DC_R.DC_readout + sys.DC_G.DC_readout, color = 'black')
#axB.ax0.plot(sys.ktp.length_mm, 1 * np.tanh(.100 * sys.ktp.length_mm)**2, color = 'blue')
axB.ax0.set_ylim(0, 1.1)
```
```python
sys = system.BGSystem()
sys.own.PSLG = optics.Laser(
F = sys.system.F_carrier_1064,
power_W = 1.,
multiple = 2,
)
sys.own.PSLR = optics.Laser(
F = sys.system.F_carrier_1064,
power_W = 0.001,
multiple = 1,
)
sys.own.PD_R = optics.MagicPD()
sys.own.PD_G = optics.MagicPD()
sys.own.dither = optics.AM()
sys.own.hPD_R = optics.HiddenVariableHomodynePD(
source_port = sys.PSLR.po_Fr.o,
)
sys.own.hPD_G = optics.HiddenVariableHomodynePD()
sys.system.bond_sequence(
sys.PSLG.po_Fr,
sys.PD_G.po_Fr,
sys.hPD_G.po_Fr,
)
sys.system.bond_sequence(
sys.PSLR.po_Fr,
sys.PD_R.po_Fr,
sys.hPD_R.po_Fr,
)
sys.own.DC_R = readouts.DCReadout(
port = sys.PD_R.Wpd.o,
)
sys.own.DC_G = readouts.DCReadout(
port = sys.PD_G.Wpd.o,
)
sys.own.AC_G = readouts.HomodyneACReadout(
portNI = sys.hPD_G.rtWpdI.o,
portNQ = sys.hPD_G.rtWpdQ.o,
portD = sys.dither.Drv.i,
)
sys.own.AC_R = readouts.HomodyneACReadout(
portNI = sys.hPD_R.rtWpdI.o,
portNQ = sys.hPD_R.rtWpdQ.o,
portD = sys.dither.Drv.i,
)
sys.own.AC_RGI = readouts.HomodyneACReadout(
portNI = sys.hPD_R.rtWpdI.o,
portNQ = sys.hPD_G.rtWpdI.o,
portD = sys.dither.Drv.i,
)
sys.own.AC_N = readouts.NoiseReadout(
port_map = dict(
RI = sys.hPD_R.rtWpdI.o,
RQ = sys.hPD_R.rtWpdQ.o,
GI = sys.hPD_G.rtWpdI.o,
GQ = sys.hPD_G.rtWpdQ.o,
)
)
#print("A")
#pprint(sys.ctree.test.PSL)
#print("sys.DC_R.DC_readout", sys.DC_R.DC_readout, 2)
#print("sys.DC_G.DC_readout", sys.DC_G.DC_readout, 1)
```
```python
print(sys.AC_R.AC_CSD_IQ[:,:])
print(sys.AC_G.AC_CSD_IQ[:,:])
print(sys.AC_RGI.AC_CSD_IQ[:,:])
print((sys.AC_R.AC_CSD_ellipse.min * sys.AC_R.AC_CSD_ellipse.max)**.5)
print(sys.AC_R.AC_CSD_ellipse.min / (sys.AC_R.AC_CSD_ellipse.min * sys.AC_R.AC_CSD_ellipse.max)**.5)
sys.AC_R.AC_CSD_ellipse
```
DELTA V MAX: 1.0 AT ORDER: 2
DELTA V MAX: 1.0 AT ORDER: 3
DELTA V MAX: 0 AT ORDER: 4
[[ 9.33480181e-20+0.j 0.00000000e+00+0.j]
[ 0.00000000e+00+0.j 9.33480181e-20+0.j]]
[[ 1.86696036e-19+0.j 0.00000000e+00+0.j]
[ 0.00000000e+00+0.j 1.86696036e-19+0.j]]
[[ 9.33480181e-20 0.00000000e+00]
[ 0.00000000e+00 1.86696036e-19]]
9.33480180645e-20
1.0
/home/mcculler/local/home_sync/projects/phasor/phasor/readouts/homodyne_AC.py:158: RuntimeWarning: invalid value encountered in true_divide
ratio = ((NIQ[1, 0] > 0)*2 - 1) * np.sqrt(disc / (max_eig - min_eig))
Bunch(
'deg' = nan,
'max' = 9.33480180645e-20,
'min' = 9.33480180645e-20,
'rad' = nan,
)
```python
```
| 5ad7d6d9028cb49b623823c23952bebf36e9c640 | 191,451 | ipynb | Jupyter Notebook | phasor/nonlinear_crystal/nonlinear_optics_sym/show_NLC.ipynb | mccullerlp/phasor-doc | d4255d015023c51b762340e51c15dde609715212 | [
"Apache-2.0"
]
| null | null | null | phasor/nonlinear_crystal/nonlinear_optics_sym/show_NLC.ipynb | mccullerlp/phasor-doc | d4255d015023c51b762340e51c15dde609715212 | [
"Apache-2.0"
]
| null | null | null | phasor/nonlinear_crystal/nonlinear_optics_sym/show_NLC.ipynb | mccullerlp/phasor-doc | d4255d015023c51b762340e51c15dde609715212 | [
"Apache-2.0"
]
| null | null | null | 188.807692 | 97,888 | 0.885501 | true | 6,459 | Qwen/Qwen-72B | 1. YES
2. YES | 0.793106 | 0.675765 | 0.535953 | __label__kor_Hang | 0.215756 | 0.083528 |
# Tutorial 04: Finite Elements for the Wave Equation
In this tutorial we solve the wave equation formulated as a first-order-in-time system. This way the example serves as a model for the treatment of systems of partial differential equations in PDELab. This tutorial depends on tutorials 01 and 03.
# PDE Problem
As an example for a system we consider the wave equation with reflective boundary conditions:
\begin{align} \label{eq:WaveEquation}
\partial_{tt} u-c^2\Delta u &= 0 &&\text{in $\Omega\times\Sigma$},\\
u &= 0 &&\text{on $\partial\Omega$},\\
u &= q &&\text{at $t=0$},\\
\partial_t u &= w &&\text{at $t=0$},
\end{align}
where $c$ is the speed of sound.
Renaming $u_0=u$ and introducing $u_1=\partial_t u_0 =\partial_t u$ we can write the wave equation as a system of two equations:
\begin{align}
\partial_t u_1 - c^2\Delta u_0 &=0 &&\text{in $\Omega\times\Sigma$}, \label{eq:2a}\\
\partial_t u_0 - u_1 &= 0 &&\text{in $\Omega\times\Sigma$}, \label{eq:2b}\\
u_0 &= 0 &&\text{on $\partial\Omega$},\\
u_1 &= 0 &&\text{on $\partial\Omega$},\\
u_0 &= q &&\text{at $t=0$},\\
u_1 &= w &&\text{at $t=0$}.\label{eq:2f}
\end{align}
Since $u_0=u=0$ on the boundary we also have $\partial_t u = u_1 = 0$ on the boundary.
But one may also omit the boundary condition on $u_1$.
Note that there are several alternative ways how to write the scalar equation
\eqref{eq:WaveEquation} as a system of PDEs:
1. Eriksson et al. [[6]](#cit1) apply the Laplacian to equation \eqref{eq:2b}
\begin{equation} \label{eq:Eriksson}
\Delta \partial_t u_0 - \Delta u_1 = 0
\end{equation}
which has advantages for energy conservation but requires additional smoothness properties.
2. Alternatively, we may introduce the abbreviations $q=\partial_t u$ and $w=-\nabla u$, so $\partial_{tt} u - c^2 \Delta u =\partial_{tt} u - c^2 \nabla\cdot\nabla u = \partial_{t} q + c^2 \nabla\cdot w = 0$. Taking partial derivatives of the introduced variables we obtain $\partial_{x_i} q=\partial_{x_i} \partial_t u = \partial_t \partial_{x_i} u = - \partial_t w_i$. This results in a first-order hyperbolic system of PDEs for $q$ and $w$
\begin{align*}
\partial_t q + c^2 \nabla\cdot w &= 0\\
\partial_t w + \nabla q &= 0
\end{align*}
which are called equations of linear acoustics. This formulation is physically more relevant. It can be modified to handle discontinuous material properties and upwind finite volume methods can be used for numerical treatment.
Here, however, we stay with the simplest formulation \eqref{eq:2a} - \eqref{eq:2f}.
## Weak Formulation
Multiplying \eqref{eq:2a}
with the test function $v_0$ and \eqref{eq:2b} with the test function $v_1$
and using integration by parts we arrive at the weak formulation: Find $(u_0(t),u_1(t))\in
U_0\times U_1$ s.t.
\begin{align}
d_t (u_1,v_0)_{0,\Omega} + c^2 (\nabla u_0, \nabla v_0)_{0,\Omega} &=
0 \quad \forall v_0 \in U_0 \notag \\
d_t (u_0,v_1)_{0,\Omega} - (u_1,v_1)_{0,\Omega} &= 0 \quad \forall
v_1 \in U_1 \label{eq:WeakFormSystem}
\end{align}
where we used the notation of the $L^2$ inner product $(u,v)_{0,\Omega} = \int_\Omega
u v \, dx$. An equivalent formulation to (\ref{eq:WeakFormSystem}) that hides the system structure reads as follows:
\begin{equation}
\label{eq:WeakForm}
\begin{split}
d_t &\left[ (u_0,v_1)_{0,\Omega} + (u_1,v_0)_{0,\Omega}\right] \\
&\hspace{20mm}+ \left[ c^2 (\nabla u_0,\nabla v_0)_{0,\Omega} -(u_1,v_1)_{0,\Omega} \right] = 0
\quad \forall (v_0,v_1)\in U_0\times U_1
\end{split}
\end{equation}
With the latter we readily identify the temporal and spatial residual forms:
\begin{align}
m^{\text{WAVE}}((u_0,u_1),(v_0,v_1)) &= (u_0,v_1)_{0,\Omega} + (u_1,v_0)_{0,\Omega},
\label{eq:TemporalResForm}\\
r^{\text{WAVE}}((u_0,u_1),(v_0,v_1)) &= c^2 (\nabla u_0,\nabla
v_0)_{0,\Omega} - (u_1,v_1)_{0,\Omega} \; , \label{eq:SpatialResForm}
\end{align}
while with the former the system structure is more visible which might help to understand the implementation presented in section
[Realization in PDELab](#implementation).
The spaces $U_0$ and $U_1$ can differ as different types of boundary conditions can be incorporated into the ansatz spaces. But here both spaces are constrained by homogeneous Dirichlet boundary conditions.
## Generalization
The abstract setting of PDELab with its weighted residual formulation carries over to the case of systems of
partial differential equations when cartesian products of functions spaces are introduced, i.e. the abstract *stationary* problem then reads
\begin{equation}
\text{Find $u_h\in U_h=U_h^1\times \ldots \times U_h^s$ s.t.:} \quad r_h(u_h,v)=0
\quad \forall v\in V_h=V_h^1\times\ldots\times V_h^s
\label{Eq:BasicSystemBuildingBlock}
\end{equation}
with $s$ the number of components in the system. Again the concepts are completely orthogonal meaning that $r_h$ might be affine linear or nonlinear in its first argument and the instationary case works as well.
From an organizational point of view it makes sense to allow that a component space $U_h^i$ in the cartesian product is itself a product space. This naturally leads to a *tree structure* in the
function spaces.
Consider as an example the Stokes equation in $d$ space dimensions. There one has the pressure $p$ and the velocity $v$ with components $v_1,\ldots,v_d$ as unknowns. An appropriate function space then would be $$ U = (P,(V^1,\ldots,V^d)).$$
# Finite Element Method
The finite element method applied to \eqref{eq:WeakForm} is straightforward. We may use the conforming space $V_h^{k,d}(\mathcal{T}_h)$ of degree $k$ in dimension $d$ for each of the components. Typically one would choose
the same polynomial degree for both components.
# Realization in PDELab
<a id= "implementation"> </a>
```c++
#include<dune/jupyter.hh>
#include "wavefem.hh"
```
```c++
// open ini file
Dune::ParameterTree ptree;
Dune::ParameterTreeParser ptreeparser;
ptreeparser.readINITree("exercise04.ini",ptree);
// read ini file
const int refinement = ptree.get<int>("grid.refinement");
```
Choose polynomial degree:
```c++
const int degree = 2;
```
Instantiation of a 1D grid, which allows for a faster execution time compared to the 2D grid.
```c++
static const int dim = 1;
// read grid parameters from input file
using DF = Dune::OneDGrid::ctype;
auto a = ptree.get<DF>("grid.oned.a");
auto b = ptree.get<DF>("grid.oned.b");
auto N = ptree.get<unsigned int>("grid.oned.elements");
// create equidistant intervals
using Intervals = std::vector<DF>;
Intervals intervals(N+1);
for(unsigned int i=0; i<N+1; ++i)
intervals[i] = a + DF(i)*(b-a)/DF(N);
// Construct grid
using Grid = Dune::OneDGrid;
Grid grid(intervals);
grid.globalRefine(refinement);
// call generic function
using GV = Dune::OneDGrid::LeafGridView;
GV gv = grid.leafGridView();
```
Instantiate a 2D UGGrid, as used in previous tutorials.
```c++
/*
static const int dim = 2;
using Grid = Dune::UGGrid<dim>;
using DF = Grid::ctype;
Dune::FieldVector<DF,dim> L;
//upper right
L[0] = ptree.get("grid.structured.LX",(double)1.0);
L[1] = ptree.get("grid.structured.LY",(double)1.0);
std::array<unsigned int,dim> N;
N[0] = ptree.get("grid.structured.NX",(unsigned int)10);
N[1] = ptree.get("grid.structured.NY",(unsigned int)10);
//lower left
Dune::FieldVector<double,dim> lowerleft(0.0);
// build a structured simplex grid
auto gridp = Dune::StructuredGridFactory<Grid>::createSimplexGrid(lowerleft, L, N);
gridp->globalRefine(refinement);
using GV = Grid::LeafGridView;
GV gv=gridp->leafGridView();
*/
```
```c++
using FEM = Dune::PDELab::PkLocalFiniteElementMap<GV,DF,double,degree>;
FEM fem(gv);
```
There are several changes now in the driver due to the system of PDEs. The first step is to set up the grid function space using the given finite element map:
```c++
using RF = double;
// Make grid function space used per component
using CON = Dune::PDELab::ConformingDirichletConstraints;
using VBE0 = Dune::PDELab::ISTL::VectorBackend<>;
using GFS0 = Dune::PDELab::GridFunctionSpace<GV,FEM,CON,VBE0>;
GFS0 gfs0(gv,fem);
```
```c++
// Make grid function space for the system
using VBE =
Dune::PDELab::ISTL::VectorBackend<
Dune::PDELab::ISTL::Blocking::fixed
>;
using OrderingTag = Dune::PDELab::EntityBlockedOrderingTag;
using GFS =
Dune::PDELab::PowerGridFunctionSpace<GFS0,2,VBE,OrderingTag>;
GFS gfs(gfs0);
```
- *solution to exercise 3:*
```c++
/*
using VBE = Dune::PDELab::ISTL::VectorBackend <
Dune::PDELab::ISTL::Blocking::none
>;
*/
```
The code section above sets up the product space containing the two components.
PDELab offers two different class templates to build product spaces. The one used here is `PowerGridFunctionSpace`, which creates a product of a compile-time given number (2 here)
of *identical* function spaces (`GFS0` here) which may only differ in the constraints. With the class template `CompositeGridFunctionSpace` you can create a product space where all components may be different spaces.
We also have to set up names for the child spaces to facilitate VTK output later on:
```c++
using namespace Dune::TypeTree::Indices;
gfs.child(_0).name("u0");
gfs.child(_1).name("u1");
```
An important aspect of product spaces is the ordering of the corresponding degrees of freedom. Often the solvers need to exploit an underlying block structure of the matrices.
This works in two stages: An ordering has first to be specified when creating product spaces
which is then subsequently exploited in the backend. Here we use the `EntityBlockedOrderingTag` to specify that all degrees of freedom related to a geometric entity should be numbered consecutively in the coefficient vector. Other options are the `LexicographicOrderingTag` ordering first all degrees of freedom of the first component space, then all of the second component space and so on. With the Iterative Solver Template Library ISTL it is now possible to exploit the block structure at compile-time.
Here we use the tag `fixed` in the ISTL vector backend to indicate that at this level we want to create blocks of fixed size (in this case the block size will be two --
corresponding to the degrees of freedom per entity). Another option would be the tag `none` which is the default. Then the degrees of freedom are still ordered in the specified way but no block structure is introduced on the ISTL level. *Important notice:* Using fixed block structure in ISTL requires that there is the same number of degrees of freedom per entity. This is true for polynomial degrees one and two but *not* for higher polynomial degree!
In order to define a function that specifies the initial value we can use the same techniques as in the scalar case. A PDELab grid function
is constructed from the lambda closure
```c++
auto u = Dune::PDELab::makeGridFunctionFromCallable(
gv,
[](const auto& x){
Dune::FieldVector<RF,dim> rv(0.0);
for (int i=0; i<dim; i++) rv[0] += (x[i]-0.375)*(x[i]-0.375);
rv[0] = std::max(0.0,1.0-8.0*sqrt(rv[0]));
return rv;
}
);;
```
The lambda closure is now returning two components in a `FieldVector`. The first component is the initial value for $u$ and the second component
is the initial value for $\partial_t u$.
Using the grid function a coefficient vector can now be initialized:
```c++
using Z = Dune::PDELab::Backend::Vector<GFS,RF>;
Z z(gfs); // initial value
Dune::PDELab::interpolate(u,gfs,z);
```
The next step is to assemble the constraints container for the composite function space. Unfortunately there is currently no way to define the constraints for both components in one go. We need to set up a separate lambda closure for each component:
```c++
auto b0 = Dune::PDELab::
makeBoundaryConditionFromCallable(
gv,
[](const auto& x){return true;}
);;
```
```c++
auto b1 = Dune::PDELab::
makeBoundaryConditionFromCallable(
gv,
[](const auto& x){return true;}
);;
```
and then combine it using:
```c++
using B = Dune::PDELab::CompositeConstraintsParameters<
decltype(b0),decltype(b1)
>;
B b(b0,b1);
```
Note that you could define different constraints for each component space although it is the same underlying function space.
Now the constraints container can be assembled as before:
```c++
using CC = typename GFS::template ConstraintsContainer<RF>::Type;
CC cc;
Dune::PDELab::constraints(b,gfs,cc);
```
```c++
std::cout << "constrained dofs=" << cc.size() << " of "
<< gfs.globalSize() << std::endl;
set_constrained_dofs(cc,0.0,z); // set zero Dirichlet boundary conditions
```
Then the VTK writer is prepared and the first file is written.
```c++
int subsampling=ptree.get("output.subsampling",(int)1);
using VTKWRITER = Dune::SubsamplingVTKWriter<GV>;
VTKWRITER vtkwriter(gv,Dune::refinementIntervals(subsampling));
std::string filename=ptree.get("output.filename","output") + std::to_string(dim) + "d";
std::filesystem::create_directory(filename);
//write filename to .txt file
std::ofstream out("name.txt");
out << filename;
out.close();
using VTKSEQUENCEWRITER = Dune::VTKSequenceWriter<GV>;
VTKSEQUENCEWRITER vtkSequenceWriter(
std::make_shared<VTKWRITER>(vtkwriter),filename,filename,"");
```
As we do not want to manually extract the subspaces for $u_0$ and $u_1$ from the overall space in order to add them to the VTK writer, we call a PDELab helper function that handles this automatically:
```c++
// add data field for all components of the space to the VTK writer
Dune::PDELab::addSolutionToVTKWriter(vtkSequenceWriter,gfs,z);
```
The rest of the driver is the same as for tutorial 03, except that a linear solver is used instead of Newton's method.
```c++
vtkSequenceWriter.write(0.0,Dune::VTK::appendedraw);
// Make instationary grid operator
double speedofsound=ptree.get("problem.speedofsound",(double)1.0);
using LOP = WaveFEM<FEM>;
// using LOP = WaveFEMElip<FEM>;
LOP lop(speedofsound);
using TLOP = WaveL2<FEM>;
// using TLOP = WaveElip<FEM>;
TLOP tlop;
using MBE = Dune::PDELab::ISTL::BCRSMatrixBackend<>;
int degree = ptree.get("fem.degree",(int)1);
MBE mbe((int)pow(1+2*degree,dim));
using GO0 = Dune::PDELab::GridOperator<GFS,GFS,LOP,MBE,RF,RF,RF,CC,CC>;
GO0 go0(gfs,cc,gfs,cc,lop,mbe);
using GO1 = Dune::PDELab::GridOperator<GFS,GFS,TLOP,MBE,RF,RF,RF,CC,CC>;
GO1 go1(gfs,cc,gfs,cc,tlop,mbe);
using IGO = Dune::PDELab::OneStepGridOperator<GO0,GO1>;
IGO igo(go0,go1);
// Linear problem solver
//--iterative solver
using LS = Dune::PDELab::ISTLBackend_SEQ_BCGS_SSOR;
LS ls(5000,false);
using SLP = Dune::PDELab::StationaryLinearProblemSolver<IGO,LS,Z>;
SLP slp(igo,ls,1e-8);
//--direct solver
//using LDS = Dune::PDELab::ISTLBackend_SEQ_UMFPack;
//LDS lds;
//using SLP = Dune::PDELab::StationaryLinearProblemSolver<IGO,LDS,Z>;
//SLP slp(igo,lds,1e-8);
//---
// select and prepare time-stepping scheme
int torder = ptree.get("fem.torder",(int)1);
Dune::PDELab::OneStepThetaParameter<RF> method1(1.0);
Dune::PDELab::Alexander2Parameter<RF> method2;
Dune::PDELab::Alexander3Parameter<RF> method3;
Dune::PDELab::TimeSteppingParameterInterface<RF>* pmethod=&method1;
if (torder==1) pmethod = &method1;
if (torder==2) pmethod = &method2;
if (torder==3) pmethod = &method3;
if (torder<1||torder>3) std::cout<<"torder should be in [1,3]"<<std::endl;
Dune::PDELab::OneStepMethod<RF,IGO,SLP,Z,Z> osm(*pmethod,igo,slp);
osm.setVerbosityLevel(2);
// subspaces
using U0SUB = Dune::PDELab::GridFunctionSubSpace<GFS,Dune::TypeTree::TreePath<0> >;
U0SUB u0sub(gfs);
using U1SUB = Dune::PDELab::GridFunctionSubSpace<GFS,Dune::TypeTree::TreePath<1> >;
U1SUB u1sub(gfs);
// Make discrete grid functions for components
using U0DGF = Dune::PDELab::DiscreteGridFunction<U0SUB,Z>;
U0DGF u0dgf(u0sub,z);
using U1DGF = Dune::PDELab::DiscreteGridFunction<U1SUB,Z>;
U1DGF u1dgf(u1sub,z);
// initialize simulation time
RF time = 0.0;
```
```c++
// time loop
RF T = ptree.get("problem.T",(RF)1.0);
RF dt = ptree.get("fem.dt",(RF)0.1);
while (time<T-1e-8)
{
// do time step
Z znew(z);
osm.apply(time,dt,z,znew);
// accept time step
z = znew;
time+=dt;
//exercise 5: put your code here
// output to VTK file
vtkSequenceWriter.write(time,Dune::VTK::appendedraw);
}
```
```c++
vtkSequenceWriter
```
# Local Operator
## Spatial Local Operator
The spatial residual form \eqref{eq:SpatialResForm} is implemented by the local operator `WaveFEM` in file `wavefem.hh`. Cache construction and flags settings are the same as in tutorial 01 and 03. Only volume terms are used here. Note also that no parameter object is necessary as the only parameter is the speed of sound $c$.
### `alpha_volume` Method
The method `alpha_volume` has the *same* interface as in the scalar case:
```c++
template<typename EG, typename LFSU, typename X, typename LFSV, typename R>
void alpha_volume (const EG& eg, const LFSU& lfsu, const X& x, const LFSV& lfsv, R& r) const
```
However, the trial and test function spaces `LFSU` and `LFSV` now reflect the component structure of the global function space, i.e. they consist of two components. *Important notice: Here we assume that trial and test space are identical (up to constraints) and also that both components are identical!*
The two components can be extracted with the following code
```c++
// select the two components (but assume Galerkin scheme U=V)
using namespace Dune::TypeTree::Indices;
auto lfsu0 = lfsu.child(_0);
auto lfsu1 = lfsu.child(_1);
```
The function spaces `lfsu0` and `lfsu1` are now scalar spaces (which we assume to be identical).
After extracting the dimension
```c++
const int dim = EG::Entity::dimension;
```
we select a quadrature rule
```c++
auto geo = eg.geometry();
const int order = 2*lfsu0.finiteElement().localBasis().order();
auto rule = Dune::PDELab::quadratureRule(geo,order);
```
and may now loop over the quadrature points.
For each quadrature point, evaluate the basis function of the first component:
```c++
for (const auto& ip : rule)
{
auto& phihat =
cache.evaluateFunction(ip.position(),lfsu0.finiteElement().localBasis());
```
As the components are identical we need only evaluate the basis once and can compute the value of $u_1$ at the quadrature point
```c++
// evaluate u1
RF u1=0.0;
for (std::size_t i=0; i<lfsu0.size(); i++) u1 += x(lfsu1,i)*phihat[i];
```
Then we evaluate the gradients of the basis functions
```c++
auto& gradphihat =
cache.evaluateJacobian(ip.position(),lfsu0.finiteElement().localBasis());
```
transform them from the reference element to the real element
```c++
const auto S = geo.jacobianInverseTransposed(ip.position());
auto gradphi = makeJacobianContainer(lfsu0);
for (std::size_t i=0; i<lfsu0.size(); i++)
S.mv(gradphihat[i][0],gradphi[i][0]);
```
and compute the gradient of $u_0$:
```c++
Dune::FieldVector<RF,dim> gradu0(0.0);
for (std::size_t i=0; i<lfsu0.size(); i++)
gradu0.axpy(x(lfsu0,i),gradphi[i][0]);
```
With the integration factor
```c++
RF factor = ip.weight() * geo.integrationElement(ip.position());
```
the residuals can now be accumulated:
```c++
for (std::size_t i=0; i<lfsu0.size(); i++) {
  r.accumulate(lfsu0,i,c*c*(gradu0*gradphi[i][0])*factor);
  r.accumulate(lfsu1,i,-u1*phihat[i]*factor);
}
```
### `jacobian_volume` Method
As the problem is linear it is advisable to also implement the `jacobian_volume` method for efficiency and accuracy.
The interface is the same as in the scalar case:
```c++
template<typename EG, typename LFSU, typename X, typename LFSV, typename M>
void jacobian_volume (const EG& eg, const LFSU& lfsu, const X& x, const LFSV& lfsv,
M& mat) const
```
Component selection, quadrature rule selection and basis evaluation are the same as in `alpha_volume`.
We only consider the accumulation of the Jacobian entries here:
```c++
// integrate both equations
RF factor = ip.weight() * geo.integrationElement(ip.position());
for (std::size_t j=0; j<lfsu0.size(); j++)
for (std::size_t i=0; i<lfsu0.size(); i++) {
mat.accumulate(lfsu0,i,lfsu0,j,c*c*(gradphi[j][0]*gradphi[i][0])*factor);
mat.accumulate(lfsu1,i,lfsu1,j,-phihat[j]*phihat[i]*factor);
}
```
Note how the diagonal sub-blocks of the Jacobian with respect to the first and second component are accessed. Finally, `WaveFEM` also implements the matrix-free versions
for Jacobian application.
## Temporal Local Operator
The temporal residual form \eqref{eq:TemporalResForm} is implemented by the local operator `WaveL2` in file `wavefem.hh`. Cache construction and flags settings are the same as in tutorial 01 and 03. Only volume terms are used here.
### `alpha_volume` Method
The `alpha_volume` method is pretty similar
to the one in the spatial operator, except that the value of $u_0$ is needed instead of the gradient. Here we just show the residual accumulation:
```c++
// integration factor
RF factor = ip.weight() * geo.integrationElement(ip.position());
// integrate u*phi_i
for (std::size_t i=0; i<lfsu0.size(); i++) {
r.accumulate(lfsu0,i,u1*phihat[i]*factor);
r.accumulate(lfsu1,i,u0*phihat[i]*factor);
}
```
Note that $u_1$ is integrated with respect to test function $v_0$
and vice versa.
### `jacobian_volume` Method
The corresponding Jacobian entries are accumulated in the `jacobian_volume` method:
```c++
// integration factor
RF factor = ip.weight() * geo.integrationElement(ip.position());
// loop over all components
for (std::size_t j=0; j<lfsu0.size(); j++)
for (std::size_t i=0; i<lfsu0.size(); i++) {
mat.accumulate(lfsu0,i,lfsu1,j,phihat[j]*phihat[i]*factor);
mat.accumulate(lfsu1,i,lfsu0,j,phihat[j]*phihat[i]*factor);
}
```
That's it! 293 lines of code to implement the finite element method for the wave equation.
# Exercise
## Getting to know the Code
The code of this exercise solves the wave equation formulated as a first order in time system. As already explained above, we can write the wave equation as a system of two equations by substituting $u_0=u$ and introducing $u_1=\partial_t u_0 =\partial_t u$.
As in the previous exercises you can control most of the settings through the ini-file `exercise04.ini`. Get an overview of the configurable settings, compile and run `exercise04`.
The program writes output with the extension `pvd`. This is one of several ways to write VTK output for the instationary case, c.f. the documentation of tutorial03. The `pvd`-file can be visualized by ParaView and consists of a collection of the corresponding `vtu`-files. In order to visualize a 1D solution, one can apply the "Plot Over Line" filter. Note that our solution is always given by $u_0$.
- *1d solution of wave equation*
<video src="wave1d.ogv" controls width="60%">
- *2d solution of wave equation*
<video src="wave2d.ogv" controls width="100%">
## Try various time integrators, in particular the Crank-Nicolson method
We want to examine the numerical solution under different time discretization schemes, eg. Implicit Euler or Crank-Nicolson. In order to change the time discretization scheme you will have to search for the line
```c++
Dune::PDELab::Alexander2Parameter<RF> pmethod;
```
Recall the previous `exercise03` and change the one step $\theta$ scheme in a way that corresponds to the Crank-Nicolson method.
**Note that** you can compare the solutions in ParaView. To do that you have to rename the second solution in `exercise04.ini` and use the "Append Attributes" filter before "Plot Over Line". Do not forget to change the parameter `torder` in `exercise04.ini` in order to get the correct scheme.
**Remark** You can decide which dimension to investigate. A 2D simulation will give you nicer pictures at the cost of longer run times.
- *Crank-Nicolson method:*
```c++
Dune::PDELab::OneStepThetaParameter<RF> method1(0.5);
```
<video src="wave1dCNIE.ogv" controls width="60%">
## Explore polynomial degrees greater than $2$ by changing the blocking to `none`.
**Step 1:** Find the place where your Local Finite Element Maps are created and make it possible to use polynomial degree 3
```c++
Dune::PDELab::PkLocalFiniteElementMap<GV,DF,double,deg> FEM;
```
Compile your program and check the results.
**Note that** `deg` is a static template parameter.
- *Program should compile and give a segmentation error when started.*
- *For the notebook: The initialization of the coefficient vector fails.*
**Step 2:**
Before we start read the part that describes the specification of an ordering when creating product spaces ([here](#ordering)). Using fixed block structure in ISTL requires that the number of degrees of freedom per entity is constant for each geometry type. This is true for polynomial degrees one and two but not for higher polynomial degree!
To avoid segfaults you need to change
```c++
using VBE = Dune::PDELab::ISTL::VectorBackend<Dune::PDELab::ISTL::Blocking::fixed>;
```
to
```c++
using VBE = Dune::PDELab::ISTL::VectorBackend<Dune::PDELab::ISTL::Blocking::none>;
```
within the notebook.
## Changing the Local Operator
Now consider the elliptic projection as in Eriksson et al. [[6]](#cit1), namely applying the Laplacian to equation \eqref{eq:2b}
\begin{equation}\label{eq:elip}
-\Delta \partial_t u_0 + \Delta u_1 = 0,
\end{equation}
which has the advantage of energy conservation but requires additional smoothness properties.
The main work is now to change the local operators given in the file `wavefem.hh`. This is done in several steps:
1. Copy the file `wavefem.hh` to a new file and rename it.
2. Rename the local operators in the new file, e.g. change`WaveL2` to `WaveElip`.
3. Include your new file in `exercise04.cc` and change the types of `LOP` and `TLOP` in
the notebook.
```c++
// Make instationary grid operator
double speedofsound = ptree.get("problem.speedofsound",1.0);
typedef WaveFEM<FEM> LOP;
LOP lop(speedofsound);
typedef WaveL2<FEM> TLOP;
TLOP tlop;
```
After these preparations you need to implement the following change:
$$ \partial_t u_0 - u_1 = 0 \Rightarrow \Delta \partial_t u_0 -
\Delta u_1 = 0.$$
1. In the spatial local operator, change:
$$ - u_1 = 0 \Rightarrow - \Delta u_1 = 0$$
See how it is done for the $\Delta u_0$ term; we recall the
corresponding part of `WaveFEM::alpha_volume()`:
```c++
// integrate both equations
RF factor = ip.weight() * geo.integrationElement(ip.position());
for (size_t i=0; i<lfsu0.size(); i++) {
r.accumulate(lfsu0,i,c*c*(gradu0*gradphi[i][0])*factor);
r.accumulate(lfsu1,i,-u1*phihat[i]*factor);
}
```
- *solution for `WaveFEMElip::alpha_volume`*
```c++
...
// compute gradient of u1
Dune::FieldVector<RF,dim> gradu1(0.0);
for (size_t i=0; i<lfsu1.size(); i++)
gradu1.axpy(x(lfsu1,i),gradphi[i][0]);
// integrate both equations
RF factor = ip.weight() * geo.integrationElement(ip.position());
for (size_t i=0; i<lfsu0.size(); i++) {
  r.accumulate(lfsu0,i,c*c*(gradu0*gradphi[i][0])*factor);
  r.accumulate(lfsu1,i,-(gradu1*gradphi[i][0])*factor);
}
```
2. In the temporal local operator, change:
$$ \partial_t u_0 = 0 \Rightarrow \Delta \partial_t u_0= 0$$
See how it is done for the $\Delta u_0$ term; we recall the
corresponding part of `WaveL2::alpha_volume()`:
```c++
// integrate u*phi_i
for (size_t i=0; i<lfsu0.size(); i++) {
r.accumulate(lfsu0,i,u1*phihat[i]*factor);
r.accumulate(lfsu1,i,u0*phihat[i]*factor);
}
```
- *solution for `WaveElip::alpha_volume`*
```c++
// integrate u*phi_i
for (size_t i=0; i<lfsu0.size(); i++) {
  r.accumulate(lfsu0,i,u1*phihat[i]*factor);
  r.accumulate(lfsu1,i,(gradu0*gradphi[i][0])*factor);
}
```
3. Do not forget to update the `jacobian_volume()` methods accordingly. As the problem is linear, this should not be too difficult.
- *solution for `WaveFEMElip::jacobian_volume`*
```c++
for (size_t i=0; i<lfsu0.size(); i++) {
mat.accumulate(lfsu0,i,lfsu0,j,c*c*(gradphi[j][0]*gradphi[i][0])*factor);
mat.accumulate(lfsu1,i,lfsu1,j,-(gradphi[j][0]*gradphi[i][0])*factor);
}
```
- *solution for `WaveElip::jacobian_volume`*
```c++
for (size_t i=0; i<lfsu0.size(); i++)
S.mv(gradphihat[i][0],gradphi[i][0]);
// loop over all components
for (size_t j=0; j<lfsu0.size(); j++)
for (size_t i=0; i<lfsu0.size(); i++) {
mat.accumulate(lfsu0,i,lfsu1,j,phihat[j]*phihat[i]*factor);
mat.accumulate(lfsu1,i,lfsu0,j,(gradphi[j][0]*gradphi[i][0])*factor);
}
```
## Energy Conservation
If we multiply \eqref{eq:2a} by $u_1$ and \eqref{eq:elip} by $u_0$ and add, the terms $-(\Delta u_0,u_1 )$ and $(\Delta u_1,u_0 )$ cancel out, leading to the conclusion that the energy $$E(t) = \|u_1\|^2 + \| \nabla u_0\|^2 $$ is constant in time.
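As a short formal sketch of why this holds (assuming $c=1$ for brevity and homogeneous Dirichlet boundary conditions, so that the boundary terms from integration by parts vanish):
\begin{align*}
\tfrac{1}{2}\,d_t \|u_1\|^2 &= (\partial_t u_1,u_1)_{0,\Omega} = (\Delta u_0,u_1)_{0,\Omega} = -(\nabla u_0,\nabla u_1)_{0,\Omega},\\
\tfrac{1}{2}\,d_t \|\nabla u_0\|^2 &= (\nabla \partial_t u_0,\nabla u_0)_{0,\Omega} = -(\Delta \partial_t u_0,u_0)_{0,\Omega} = -(\Delta u_1,u_0)_{0,\Omega} = (\nabla u_1,\nabla u_0)_{0,\Omega},
\end{align*}
so that adding both lines gives $d_t E(t) = 0$.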
Your task is to check the energy conservation for the elliptic projection and Crank-Nicolson in time. You can use the following PDELab utilities:
```c++
Dune::PDELab::SqrGridFunctionAdapter
Dune::PDELab::integrateGridFunction
Dune::PDELab::DiscreteGridFunctionGradient
Dune::PDELab::SqrGridFunctionAdapter
```
If you have problems with this task check the online documentation https://www.dune-project.org/doxygen/pdelab/master/ or the solution.
- *solution for energy conservation*
```c++
// Energy - integrate u square
typedef Dune::PDELab::SqrGridFunctionAdapter<U1DGF> SQRU1DGF;
SQRU1DGF u1sqrdgf(u1dgf);
typename SQRU1DGF::Traits::RangeType sqru1(0.0);
Dune::PDELab::integrateGridFunction(u1sqrdgf, sqru1, 2*degree);
using DGFU0G = Dune::PDELab::DiscreteGridFunctionGradient<U0SUB, Z>;
DGFU0G dgfu0g(u0sub,z);
//gradient square
using SQRDGFU0G = Dune::PDELab::SqrGridFunctionAdapter<DGFU0G>;
SQRDGFU0G sqrdgfu0g(dgfu0g);
//integrate gradient square
typename SQRDGFU0G::Traits::RangeType sqru0g(0.0);
Dune::PDELab::integrateGridFunction(sqrdgfu0g, sqru0g, 2*degree);
std::cout << "::: energy " << sqru1 + sqru0g << std::endl;
```
## Additional Task
If you are done with these exercises, you can play with the initial conditions. Use the following setting in one space dimension:
- Change the initial conditions to $\sin(2x)$ (implement it as a lambda function)
- Change the domain size to $[0,\pi]$
# Bibliography
[1] Bastian, P. *Lecture Notes on Scientific Computing with Partial Differential Equations*. Heidelberg University, 2014. http://conan.iwr.uni-heidelberg.de/teaching/numerik2_ss2014/num2.pdf.
[2] Braess, D. *Finite Elemente*. Springer, 3rd edition, 2003.
[3] Brenner, S. C. and Scott, L. R. *The mathematical theory of finite element methods*, Springer, 1994.
[4] Ciarlet, P. G. . *The finite element method for elliptic problems*. SIAM, Classics in Applied Mathematics, 2002.
[5] Elman, H., Silvester, D. and Wathen, A. *Finite Elements and Fast Iterative Solvers*. Oxford University Press, 2005.
[6] Eriksson, K., Estep, D., Hansbo, P. and Johnson, C. *Computational Differential Equations*. Cambridge University Press, 1996. <a id ="cit1"> </a>
[7] Großmann, C. and Roos, H.-G. *Numerische Behandlung partieller Differentialgleichungen*. Teubner, 2006.
[8] Hackbusch, W. *Theorie und Numerik elliptischer Differentialgleichungen*. Teubner, 1986. http://www.mis.mpg.de/preprints/ln/lecturenote-2805.pdf.
[9] Rannacher, R. *Einführung in die Numerische Mathematik II (Numerik partieller Differentialgleichungen)*. Heidelberg University, 2006. http://numerik.iwr.uni-heidelberg.de/~lehre/notes.
| 399f289cfc461c130547023a057fe5527825e38a | 51,376 | ipynb | Jupyter Notebook | notebooks/tutorial04/pdelab-tutorial04.ipynb | dokempf/dune-jupyter-course | 1da9c0c2a056952a738e8c7f5aa5aa00fb59442c | [
"BSD-3-Clause"
]
| 1 | 2022-01-21T03:16:12.000Z | 2022-01-21T03:16:12.000Z | notebooks/tutorial04/pdelab-tutorial04.ipynb | dokempf/dune-jupyter-course | 1da9c0c2a056952a738e8c7f5aa5aa00fb59442c | [
"BSD-3-Clause"
]
| 21 | 2021-04-22T13:52:59.000Z | 2021-10-04T13:31:59.000Z | notebooks/tutorial04/pdelab-tutorial04.ipynb | dokempf/dune-jupyter-course | 1da9c0c2a056952a738e8c7f5aa5aa00fb59442c | [
"BSD-3-Clause"
]
| 1 | 2021-04-21T08:20:02.000Z | 2021-04-21T08:20:02.000Z | 31.193685 | 515 | 0.580135 | true | 9,363 | Qwen/Qwen-72B | 1. YES
2. YES | 0.800692 | 0.727975 | 0.582884 | __label__eng_Latn | 0.940685 | 0.192565 |
# Session 5: Gradient Descent
## House sale-value prediction using the Boston housing dataset
------------------------------------------------------
*ATDST, 2017-2018*
*Pablo M. Olmos [email protected]*
------------------------------------------------------
## Importing Packages
```python
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from sklearn import linear_model
%matplotlib inline
```
Today, we will continue with the example we used for session 2: Predicting house values using the average number of rooms in the [Boston housing dataset](https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.names).
Attribute Information can be found [here](https://www.kaggle.com/c/boston-housing).
### Loading Data
We will manage the database using the [pandas library and DataFrames](http://pandas.pydata.org/pandas-docs/stable/tutorials.html)
```python
housing_data=pd.read_csv('./Boston_train.csv')
```
We divide the whole data set into **80% training** and **20% test**:
```python
N = len(housing_data)
N_train = np.round(0.8 * N).astype(np.int32)
np.random.seed(seed=10) #To fix the random seed
mask = np.random.permutation(len(housing_data))
regression_data_frame = housing_data[['rm','medv']].iloc[list(mask[0:N_train])]
X_0 = np.array(regression_data_frame['rm'])
Y = np.array(regression_data_frame['medv'])
regression_data_frame_test = housing_data[['rm','medv']].iloc[list(mask[N_train:-1])]
X_0_test = np.array(regression_data_frame_test['rm'])
Y_test = np.array(regression_data_frame_test['medv'])
```
# Numerical Optimization with Gradient Descent
As we know, the Ridge regression optimization problem
$$\boldsymbol{\theta}_\lambda = \arg \min_{\theta} \frac{1}{N} \left[\sum_{i=1}^{N} (y^{(i)}-\boldsymbol{\theta}^T\mathbf{x}^{(i)})^2 + \lambda \sum_{j=1}^{D+1} \theta_j^2\right],$$
can be solved using the **normal equation**
$$\boldsymbol{\theta}_\lambda = (\mathbf{X}^T\mathbf{X} + \mathbf{D}_\lambda)^{-1}\mathbf{X}^T\mathbf{y},$$
However, as we anticipated in Session 2, solving the normal equation has many important drawbacks:
- You need to keep the full target matrix $\mathbf{X}_{N\times (D+1)}$ in memory, which can be huge in large datasets!
- You need to invert the matrix $\mathbf{X}^T\mathbf{X}$ $\Rightarrow$ $\mathcal{O}(D^3)$ complexity.
- For small data sets ($N\leq D$), $(\mathbf{X}^T\mathbf{X})^{-1}$ can be non-invertible!
- Once you get new data, how do you update $\boldsymbol{\theta}^*$?
Today, we will learn how to apply a very simple algorithm to perform numerical optimization: **gradient descent** (GD). GD is one of the cornerstones of **convex optimization**, but it is also widely used for **non-convex optimization**, for instance in [deep learning with neural networks](https://www.youtube.com/watch?v=IHZwWFHWa-w). Be aware that GD is probably the simplest numerical optimization algorithm and that there is a whole [field](https://web.stanford.edu/group/sisl/k12/optimization/#!index.md) devoted to the numerical optimization of functions.
In a nutshell, GD tries to iteratively converge to the minimum of a function $f(\boldsymbol{\theta})$ by iteratively applying the following update rule:
$$\boldsymbol{\theta}_{\ell+1} = \boldsymbol{\theta}_\ell - \alpha \nabla f(\boldsymbol{\theta})|_{\boldsymbol{\theta}_\ell},$$
where $\nabla(\cdot)$ is the gradient operator and $\alpha$ is the learning rate. **Setting the learning rate** is a problem in general. Although we won't cover the topic in detail and will just restrict ourselves to simple checks, be aware that there are many modifications to GD that attempt to automatically tune $\alpha$, like the [line search](https://people.maths.ox.ac.uk/hauser/hauser_lecture2.pdf) method.
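Before applying GD to the Ridge problem, the following toy snippet (not part of the lab code, just an illustration) runs the update rule on the one-dimensional function $f(\theta)=(\theta-3)^2$, whose gradient is $2(\theta-3)$:
```python
# Toy illustration of the GD update rule on f(theta) = (theta - 3)^2.
# The gradient is f'(theta) = 2*(theta - 3), so the minimum is at theta = 3.
theta = 10.0   # arbitrary starting point
alpha = 0.1    # learning rate
for step in range(100):
    gradient = 2.0 * (theta - 3.0)
    theta = theta - alpha * gradient
print(theta)   # converges towards 3
```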
#### Check out this [beautiful post](http://www.benfrederickson.com/numerical-optimization/)!
___
# Solving the Ridge regression problem using GD
If we define
$$ J(\boldsymbol{\theta}) = \frac{1}{N} \left[\sum_{i=1}^{N} (y^{(i)}-\boldsymbol{\theta}^T\mathbf{x}^{(i)})^2 + \lambda \sum_{j=1}^{D+1}\theta_j^2\right],$$
then it is easy to check that, for $m>0$:
\begin{align}
\frac{\partial J(\boldsymbol{\theta})}{\partial \theta_m}= \frac{2}{N} \left[\lambda\theta_m-\sum_{i=1}^{N} x_m^{(i)}\left(y^{(i)}-\boldsymbol{\theta}^T\mathbf{x}^{(i)}\right)\right]= \frac{2}{N} \left[\lambda\theta_m - \mathbf{e}^T \mathbf{X}_{:,m}\right],
\end{align}
where $\mathbf{e}=\mathbf{y}-\boldsymbol{\theta}^T \mathbf{X}$ is the error vector and $\mathbf{X}_{:,m}$ is the $m$-th column of the normalized training feature matrix. For $m=0$ the first term is not present, since the intercept is not regularized:
\begin{align}
\frac{\partial J(\boldsymbol{\theta})}{\partial \theta_m}= \frac{-2}{N} \left[\sum_{i=1}^{N} x_m^{(i)}\left(y^{(i)}-\boldsymbol{\theta}^T\mathbf{x}^{(i)}\right)\right]= \frac{-2}{N} \left[\mathbf{e}^T \mathbf{X}_{:,m}\right],
\end{align}
Note that in both cases **the gradient vanishes when the error is zero**.
Let's now program the GD method in a modular way. First, we incorporate some useful functions from the **previous session**. They will be included as a Python library file.
```python
import Ridge_functions as ridge # You can find details in provided file S4_Ridge_functions.py
```
## Compute Gradient
**First**, you have to program a function that computes the gradient and takes as input arguments
- The feature matrix $\mathbf{X}$
- The error vector $\mathbf{e}$
- The value of $\lambda$
- The current value of $\boldsymbol{\theta}$
and returns as output the $(D+1)$-dimensional gradient of $J(\boldsymbol{\theta})$.
```python
def compute_gradient(feature_matrix, error, l, T):
    ## YOUR CODE HERE
    gradient = l*T - error @ feature_matrix
    gradient[0] -= l*T[0]
    gradient *= 2.0/feature_matrix.shape[0]
    return(gradient)
```
## Numerical evaluation of the gradient
Before moving forward, let's run a numerical check to verify that the gradient you compute in the function above is correct.
1) Fix a given arbitrary value of $\boldsymbol{\theta}^o$.
2) Introduce a small distortion in one of the components of $\boldsymbol{\theta}$. For instance, $\theta_m^+=\theta^o_m$ for $m\neq 1$ and $\theta^+_1 = \theta^o_1+\epsilon$, where $\epsilon=10^{-3}$. Evaluate the cost function $J(\boldsymbol{\theta}^+)$.
3) Define a new vector $\boldsymbol{\theta}^-$ such that $\theta_m^-=\theta^o_m$ for $m\neq 1$ and $\theta^-_1 = \theta^o_1-\epsilon$, where $\epsilon=10^{-3}$. Evaluate the cost function $J(\boldsymbol{\theta}^-)$.
4) Verify that
$$\frac{\partial J(\boldsymbol{\theta})}{\partial \theta_1} \approx \frac{J(\boldsymbol{\theta}^+)-J(\boldsymbol{\theta}^-)}{2\epsilon}$$,
where the derivative in the left hand side is given by the second component of the gradient computed using your function above. **Repeat the experiment using the first component of the gradient, i.e., the derivative w.r.t. $\theta_0$**.
```python
# Lets fix a degree
deg = 2
l= 1.0
np.random.seed(seed=10) #To fix the random seed
T = np.random.rand(3)
# This is how we would compute the cost function at T and obtain the normalized train feature matrix
# J_train,_,F_train,_ = ridge.eval_J_Ridge_given_T(X_0,X_0_test,deg,Y,Y_test,l,T)
# Your code here
mod = np.zeros(T.shape)
epsilon = 1e-2
index = 0
mod[index] += epsilon
T_p = T + mod
T_m = T - mod
J_p,_,F_train,_ = ridge.eval_J_Ridge_given_T(X_0,X_0_test,deg,Y,Y_test,l,T_p)
J_m,_,_,_ = ridge.eval_J_Ridge_given_T(X_0,X_0_test,deg,Y,Y_test,l,T_m)
error = (Y - ridge.LS_evaluate(F_train,T))
gradient = compute_gradient(F_train,error,l,T)
print("The gradient at position %d is %f" %(index,gradient[index]))
print("The approximate gradient at position %d is %f" %(index,(J_p-J_m)/(2*epsilon))) # Your code here
```
The gradient at position 0 is -44.204727
The approximate gradient at position 0 is -44.198928
## Implementing GD
Now, complete the following function to estimate the Ridge solution using gradient descend. The inputs are:
- The normalized train feature matrix $\mathbf{X}$
- The target vector $\mathbf{Y}$
- The initial value of theta $\boldsymbol{\theta}$
- The step size $\alpha$
- The stopping tolerance: a threshold that halts the algorithm when the norm of the gradient is below this limit
- The maximum number of iterations
```python
def regression_gradient_descent(F_train, Y, T_0, l, step_size, tolerance, iter_max, verbose=True, period_verbose=1000):
    converged = False
    T = np.array(T_0)  # make sure it's a numpy array
    it = 0
    while not converged:
        # First, compute the error vector
        error = (Y - ridge.LS_evaluate(F_train,T)) #YOUR CODE HERE
        # Second, compute the gradient vector
        gradient = compute_gradient(F_train,error,l,T) #YOUR CODE HERE
        # Finally, update the theta vector
        T = T - step_size * gradient #YOUR CODE HERE
        grad_norm = np.linalg.norm(gradient)
        if(verbose==True and it % period_verbose == 0):
            J_Ridge = ridge.J_error_L2(Y,ridge.LS_evaluate(F_train,T),T,l)
            print ("Iterations = %d" %(it))
            print ("Gradient norm %f" %(grad_norm))
            print ("Ridge cost function %f" %(J_Ridge))
        if grad_norm < tolerance:
            converged = True
        elif it > iter_max:
            converged = True
        else:
            it = it + 1
        if(converged==True):
            J_Ridge = ridge.J_error_L2(Y,ridge.LS_evaluate(F_train,T),T,l)
            print ("Iterations = %d" %(it))
            print ("Gradient norm %f" %(grad_norm))
            print ("Ridge cost function %f" %(J_Ridge))
    return(T)
```
## Evaluate solution and compare with normal equation
Run an example to verify that the GD solution is close to the one predicted by the normal equation. Investigate the effect of the step size.
```python
# Lets fix a degree
deg = 5
l= 1.0
T = np.random.rand(6)
#Cost function at initial T and feature matrix
J_0,_,F_train,_ = ridge.eval_J_Ridge_given_T(X_0,X_0_test,deg,Y,Y_test,l,T)
step_size = 1e-02
iter_max = 2e4
tolerance = 1e-03
period= 1e04
T_opt = regression_gradient_descent(F_train, Y, T, l, step_size, tolerance, iter_max,verbose=True,period_verbose=period)
T_normal = ridge.Ridge_solution(F_train,Y,l)
plt.stem(T_normal,'r',label='Normal Equation')
plt.plot(T_opt,'b',label='Gradient Descent')
plt.legend()
```
```python
```
```python
```
| a3b7abff611f75d47438aa1d2c6467eafd7e4e5f | 31,710 | ipynb | Jupyter Notebook | Notebooks/Session 5 Gradient Descent/S5-Gradiend Descend (Complete).ipynb | olmosUC3M/Introduction-to-Data-Science-and-Machine-Learning | 33a908011a5673dcbc6136dfc1eae868ef32e6b4 | [
"MIT"
]
| 7 | 2018-04-30T18:44:05.000Z | 2020-09-13T23:53:11.000Z | Notebooks/Session 5 Gradient Descent/S5-Gradiend Descend (Complete).ipynb | olmosUC3M/Introduction-to-Data-Science-and-Machine-Learning | 33a908011a5673dcbc6136dfc1eae868ef32e6b4 | [
"MIT"
]
| null | null | null | Notebooks/Session 5 Gradient Descent/S5-Gradiend Descend (Complete).ipynb | olmosUC3M/Introduction-to-Data-Science-and-Machine-Learning | 33a908011a5673dcbc6136dfc1eae868ef32e6b4 | [
"MIT"
]
| 1 | 2019-03-13T16:01:38.000Z | 2019-03-13T16:01:38.000Z | 69.387309 | 15,644 | 0.756859 | true | 2,973 | Qwen/Qwen-72B | 1. YES
2. YES | 0.90053 | 0.870597 | 0.783999 | __label__eng_Latn | 0.892791 | 0.659824 |
```python
%matplotlib inline
%run ../setup/nb_setup
```
# Orbits 2: Orbits in Axisymmetric Potentials
Author(s): Adrian Price-Whelan
## Learning goals
In this tutorial, we will introduce axisymmetric potential models, and explore differences between orbits in non-spherical potentials with what we learned about orbits in spherical systems.
## Introduction
As we saw in the previous tutorial, while spherical potential models can have complex radial density profiles, orbits in spherical potential models are planar and can be characterized by their radial and azimuthal frequencies because these orbits have at least four isolating integrals of motion (energy and the three components of angular momentum). As we will see in this tutorial, as the symmetries of a potential model are relaxed (from spherical to axisymmetric, and then in the next tutorial, from axisymmetric to triaxial), the number of isolating integrals of motion for a generic orbit decreases to three, and they become difficult to compute. The implication of this is that generic orbits in non-spherical potential models are no longer confined to a plane: They fill a three-dimensional volume instead of a two-dimensional region. In addition, these more complex models have regions of *chaotic* or irregular orbits, which differ from regular orbits in many important ways, as we will see in the next tutorial.
In this tutorial, we will introduce some commonly-used axisymmetric gravitational potential models, compute the orbits of particles in some of these models, and analyze the properties of orbits in non-spherical models.
## Terminology and Notation
- (See Orbits tutorial 1)
- Cylindrical radius: $R = \sqrt{x^2 + y^2}$
- Maximum $z$ excursion: $z_\textrm{max} = \textrm{max}_\textrm{time}(z)$
### Notebook Setup and Package Imports
```python
from astropy.constants import G
import astropy.units as u
import matplotlib as mpl
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import gala.dynamics as gd
import gala.integrate as gi
import gala.potential as gp
from gala.units import galactic
```
## Axisymmetric Potential Models
Rather than having spherical symmetry, axisymmetric potential models are symmetric under rotations around a particular axis. A common use case for these potential models is representing flattened and disky systems (like the stellar and gas disks of the Milky Way). A common way of constructing axisymmetric potentials is to start with a spherical potential model, but replace the spherical radius with an elliptical radius that breaks the symmetry between $x, y$ and $z$. For example:
$$
r = \sqrt{x^2 + y^2 + z^2} \rightarrow \sqrt{x^2 + y^2 + (|z| + a)^2}
$$
By convention, in many contexts we will take the $z$ axis to be the symmetry axis, so we often use cylindrical coordinates $R, z$ when working with axisymmetric models.
One commonly-used axisymmetric model is the Miyamoto–Nagai (MN) potential, which is given by
$$
\Phi_{\textrm{MN}}(R, z) = - \frac{G\, M}{\sqrt{R^2 + (a + \sqrt{z^2 + b^2})^2}}
$$
In the limit that $a \rightarrow 0$, the MN potential reduces to the spherical Plummer model introduced in the previous tutorial. In the limit that $b \rightarrow 0$, the MN potential reduces to the potential generated by an infinitely-thin disk. Depending on the setting of $a$ and $b$, we can represent a variety of flattened density distributions, like galactic stellar or gas disks.
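As a quick sanity check of the formula above, here is a small plain-NumPy/astropy sketch (an illustration added here, independent of Gala) that evaluates $\Phi_{\textrm{MN}}$ directly; the parameter values match the disk model defined below:
```python
import astropy.units as u
import numpy as np
from astropy.constants import G

def phi_mn(R, z, M, a, b):
    # Direct evaluation of the Miyamoto–Nagai potential formula above
    return -G * M / np.sqrt(R**2 + (a + np.sqrt(z**2 + b**2)) ** 2)

# Illustrative values, matching the disk model defined below:
M, a, b = 6.98e10 * u.Msun, 3 * u.kpc, 0.28 * u.kpc
print(phi_mn(8 * u.kpc, 0 * u.kpc, M, a, b).to(u.km**2 / u.s**2))
```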
To get a feel for how orbits behave in axisymmetric potentials, we will compute some orbits in the MN potential using [Gala](http://gala.adrian.pw/). We will start by defining a model that has parameters similar to the stellar mass distribution in the local part of the Galactic disk (parameter values taken from [this paper](https://ui.adsabs.harvard.edu/abs/2021ApJ...910...17P/abstract)):
```python
mn_disk = gp.MiyamotoNagaiPotential(
m=6.98e10 * u.Msun, a=3 * u.kpc, b=0.28 * u.kpc, units=galactic
)
```
To start off, we will first use Gala to plot equipotential and isodensity contours to visualize the structure of this potential–density pair. To do this, we will visualize the lines of equal potential/density in 2D slices of the 3D models. We therefore need to specify which axis to "slice," and what value in that coordinate to slice at. In the other two coordinates, we need to specify grids over which to compute the potential or density (so we can use matplotlib's `contourf()` function to visualize the curves). Here, we will make plots of x-y (z=0) and x-z (y=0) slices. We will compute the potential and density on grids of 256 by 256 points between (-10, 10) kpc:
```python
grid = np.linspace(-10, 10, 256) * u.kpc
```
We will first plot the isopotential contours using the `.plot_contours()` method on any Gala potential object:
```python
fig, axes = plt.subplots(
    1, 2, figsize=(10, 5), sharex=True, sharey=True, constrained_layout=True
)
mn_disk.plot_contours(grid=(grid, grid, 0), ax=axes[0])
mn_disk.plot_contours(grid=(grid, 0, grid), ax=axes[1])
for ax in axes:
    ax.set_xlabel("$x$")
axes[0].set_ylabel("$y$")
axes[1].set_ylabel("$z$")
for ax in axes:
    ax.set_aspect("equal")
fig.suptitle("Iso-potential contours", fontsize=22)
```
For comparison, we will now plot the same slices, but visualizing the isodensity contours using the `.plot_density_contours()` method of any Gala potential object:
```python
fig, axes = plt.subplots(
    1, 2, figsize=(10, 5), sharex=True, sharey=True, constrained_layout=True
)
mn_disk.plot_density_contours(grid=(grid, grid, 0), ax=axes[0])
mn_disk.plot_density_contours(grid=(grid, 0, grid), ax=axes[1])
for ax in axes:
    ax.set_xlabel("$x$")
axes[0].set_ylabel("$y$")
axes[1].set_ylabel("$z$")
for ax in axes:
    ax.set_aspect("equal")
fig.suptitle("Iso-density contours", fontsize=22)
```
Note that the density contours are *much* more "disky" than the potential contours (i.e. the axis ratio comparing $z$ to $R$ is much smaller in the density than in the potential). This is generally true: Any flattening or axis ratio in density leads to less flattening / rounder potential contours. Keep this in mind for later! Setting a very small flattening parameter in the potential can therefore lead to very strange (or unphysical) density distributions.
Below, we will compute some orbits in the MN model. However, we would like to compare these orbits to orbits computed in an equivalent spherical Plummer model with the same mass and scale radius as our disk. We will therefore first define a Plummer model with the same mass and scale radius to use below.
### Exercise: Defining an comparison Plummer model
As mentioned above, in a particular limit, the MN potential becomes the Plummer potential. What should we set the Plummer scale length to so that, in the midplane (z=0) it has the same profile as our MN potential? I.e. what combination of the MN scale parameters $a_\textrm{MN}$ and $b_\textrm{MN}$ should we set the Plummer scale length to?
Write the answer here:
...
Define the comparison Plummer model using the mass from the MN potential, and the correct combination of the MN scale parameters (you can use `mn_disk.parameters['m']`, `mn_disk.parameters['a']`, and `mn_disk.parameters['b']` to retrieve the parameter values so you don't have to re-define them).
```python
# plummer = gp.PlummerPotential(...)
```
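If you want to run the comparison cells below without working through the exercise first, one possible definition is given in the following solution sketch (spoiler alert — it matches the midplane profiles of the two models):
```python
# Solution sketch (spoiler): in the z = 0 plane, Phi_MN only depends on the
# combination (a + b), so a Plummer model with scale length b_P = a_MN + b_MN
# has the same midplane profile and the same total mass.
plummer = gp.PlummerPotential(
    m=mn_disk.parameters["m"],
    b=mn_disk.parameters["a"] + mn_disk.parameters["b"],
    units=galactic,
)
```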
Demonstrate that the mass enclosed (computed assuming that both potentials are spherical) are equivalent in the midplane at $(x, y, z) = (8, 0, 0)~\textrm{kpc}$
```python
# Menc_MN = ...
# Menc_plummer =
```
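A minimal sketch of this check, using Gala's `mass_enclosed()` method (which estimates the enclosed mass as if the potential were spherical), could look like:
```python
# Sketch: compare the (spherically-estimated) enclosed mass of the two models
# at the midplane position (x, y, z) = (8, 0, 0) kpc.
check_xyz = [8.0, 0, 0] * u.kpc
Menc_MN = mn_disk.mass_enclosed(check_xyz)
Menc_plummer = plummer.mass_enclosed(check_xyz)
print(Menc_MN, Menc_plummer)
```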
---
## Orbits in axisymmetric potentials
In spherical potential models, we saw that orbits are confined to a plane, and can either be circular or have some radial oscillations that cause orbits to form a rosette pattern.
In axisymmetric potential models, new types of orbital shapes are allowed, and circular orbits only exist in the symmetry plane (here, the $x$-$y$ plane). In particular, as we will see, generic orbits are no longer confined to a plane and instead can oscillate in the $z$ direction with a non-commensurate frequency.
Even though the concept of the circular velocity only makes sense in the symmetry plane, we can still use the value of the circular velocity (computed assuming a spherical mass enclosed) as a way of initializing orbits, because the value of $v_{\rm circ}$ will have the right order of magnitude to stay bound and remain *close* to circular in many regimes. To demonstrate the types of orbits that we see in axisymmetric potentials, we will therefore compute three orbits in both the MN and comparison Plummer potentials using the circular velocity to inform our initial conditions. In the example below, we will start one orbit in the $x,y$ plane, one slightly above the $x,y$ plane, and a third far above the $x,y$ plane.
```python
# We first define the positions: These are [x, y, z] values
# for the three orbital initial conditions
mn_xyz = ([[8, 0, 0.0], [8, 0, 1.0], [8, 0, 10.0]] * u.kpc).T
# We compute the "circular velocity" (assuming this is a
# spherical potential) at each of the locations
mn_vcirc = mn_disk.circular_velocity(mn_xyz)
# We then use the circular velocity to set the scale of our
# initial velocities: We set the vy equal to the circular velocity,
# and vx equal to 10% of the circular velocity.
# The line below uses Numpy array broadcasting:
# https://numpy.org/doc/stable/user/basics.broadcasting.html
mn_vxyz = mn_vcirc[np.newaxis] * np.array([0.1, 1, 0])[:, np.newaxis]
mn_w0 = gd.PhaseSpacePosition(pos=mn_xyz, vel=mn_vxyz)
```
We will use these same intial conditions to compute orbits in both potential models and compare them below:
```python
mn_dt = 1.0 * u.Myr
mn_steps = 4000
mn_orbits = mn_disk.integrate_orbit(mn_w0, dt=mn_dt, n_steps=mn_steps)
plummer_orbits = plummer.integrate_orbit(mn_w0, dt=mn_dt, n_steps=mn_steps)
```
Let's plot the 3D configurations of the orbits computed in each of the potentials: We can use the `.plot_3d()` method on any `Orbit` object to make these plots for us, and in each panel the different colors will correspond to the different initial conditions:
```python
fig, axes = plt.subplots(1, 2, figsize=(16, 8), subplot_kw=dict(projection="3d"))
_ = mn_orbits.plot_3d(ax=axes[0])
_ = plummer_orbits.plot_3d(ax=axes[1])
for ax in axes:
    ax.azim = 30
    ax.elev = 15
axes[0].set_title("Miyamoto–Nagai")
axes[1].set_title("Plummer")
```
We could also instead plot all 2D projections of the orbits (xy, xz, yz) using the `.plot()` method:
```python
for orbits, name in zip([mn_orbits, plummer_orbits], ["Miyamoto–Nagai", "Plummer"]):
    fig = orbits.plot()
    for ax in fig.axes:
        ax.set_xlim(-15, 15)
        ax.set_ylim(-15, 15)
    fig.suptitle(name, fontsize=22)
```
Visually, the orbits in the Plummer potential are planar (2D, as we expect), but in the Miyamoto–Nagai potential the orbits seem to fill a 3D volume (except for the orbit started in the symmetry plane). In fact, in $x$-$y$ projections these orbits look a lot like their spherical analogs; however, they also have vertical excursions that give the orbits vertical thickness: these orbits are called "tube" orbits because the surface they cover looks like a hollowed-out tube.
Because of the azimuthal symmetry of axisymmetric potentials, orbits in these models are sometimes also plotted in the *meridional plane*, which plots the cylindrical radius $R$ vs. $z$. We can plot cylindrical coordinates by using the `.cylindrical` attribute of `Orbit` objects, and then specify that we want to plot just $R$ (called `rho` in Gala) and $z$ by passing these in with the `components=` keyword argument:
```python
fig = mn_orbits.cylindrical.plot(["rho", "z"], labels=["$R$ [kpc]", "$z$ [kpc]"])
fig.axes[0].set_title("Miyamoto–Nagai", fontsize=22)
```
### Exercise: Why do the orbits in the Miyamoto–Nagai potential look different from the Plummer orbits?
Compute and plot the three components of the angular momentum for all orbits in both potentials. Do you see any differences? Given what we discussed about integrals of motion in the last tutorial, what do you think the connection is between the angular momentum components and the phase-space properties of the orbit?
```python
```
## The epicyclic approximation: Building intuition for close-to-planar, nearly-circular orbits
Because of the azimuthal symmetry of axisymmetric potentials (so $L_z$ is conserved), the full Hamiltonian for any orbit in an axisymmetric potential (in terms of cylindrical position and conjugate momentum coordinates $p_R, p_\phi, p_z$)
$$
H(R, \phi, z, p_R, p_\phi, p_z) = \frac{1}{2}(p_R^2 + \frac{p_\phi^2}{R^2} + p_z^2) + \Phi(R, \phi, z)
$$
can be reduced to a 2D Hamiltonian that governs the motion in $R$ and $z$ (noting that $p_\phi = L_z$)
$$
H(R, z, p_R, p_z; L_z) = \frac{1}{2}(p_R^2 + p_z^2) + \Phi(R, z) + \frac{L_z^2}{2\,R^2}
$$
where now $L_z$ can be thought of as a parameter that labels an orbit, not as a coordinate. Because the terms in a Hamiltonian are often grouped into "terms that depend on the momentum coordinates" and "terms that depend on the position coordinates," the dependence on $\frac{L_z^2}{2\,R^2}$ is sometimes absorbed into the expression of the potential and referred to as the *effective potential* $\Phi_{\rm eff}(R, z)$:
$$
\Phi_{\rm eff}(R, z) = \Phi(R, z) + \frac{L_z^2}{2\,R^2}
$$
The equations of motion for $R$ and $z$ are therefore
$$
\begin{align}
\dot{p_R} &= - \frac{\partial H}{\partial R} = - \frac{\partial \Phi_{\rm eff}}{\partial R}\\
\dot{R} &= p_R\\
\ddot{R} &= - \frac{\partial \Phi_{\rm eff}}{\partial R}\\
\end{align}
$$
and
$$
\begin{align}
\dot{p_z} &= - \frac{\partial H}{\partial z} = - \frac{\partial \Phi}{\partial z}\\
\dot{z} &= p_z\\
\ddot{z} &= - \frac{\partial \Phi}{\partial z}\\
\end{align}
$$
In general, for relevant axisymmetric potentials used in galactic dynamics, the potential expressions are complex enough that the partial derivative expressions still contain terms that mix $R$ and $z$ so that these are still coupled differential equations.
In disk galaxies, however, most stars are on orbits such that their maximum excursions in $z$, sometimes called $z_\textrm{max}$, are much smaller than the mean cylindrical radius of the orbit, i.e. $z_\textrm{max} \ll \textrm{mean}(R)$. In this limit, it is often conceptually useful (and sometimes quantitatively reasonable) to treat the motion as if it were decoupled in the radial $R$ and vertical $z$ dimensions. In reality, the motion *is* coupled for any non-planar orbit, as we saw with the equations of motion above. We can also see this geometrically using the numerical orbits we computed above: A truly uncoupled, non-resonant orbit would fill a rectangular area in positional coordinates, whereas instead orbits in the meridional plane have a slope to their upper and lower $z$, and have curvature at maximum and minimum $R$. For example, for the non-planar, intermediate orbit we computed above, compare the area filled by this orbit to the rectangular frame of the meridional plane plot:
```python
fig = mn_orbits[:, 1].cylindrical.plot(["rho", "z"], labels=["$R$", "$z$"])
```
However, as an approximation to gain intuition, we can make the simplifying assumption that the motion is decoupled. Making this assumption is equivalent to assuming that the potential model is separable such that
$$
\Phi(R, z) \approx \Phi(R) + \Phi(z) \quad .
$$
With the assumption of decoupled motion, and from observing that the orbital trajectory in the meridional plane oscillates in both radius $R$ and height $z$, we can use the effective potential to find the central points in each coordinate about which the orbit oscillates. We do this by taking the derivative of the effective potential and setting it equal to zero. For the vertical direction, this is easy: We assume that the potential is symmetric about $z=0$ (true for all of the disk models we consider here), and so
$$
0 = \frac{\partial \Phi(z)}{\partial z}
$$
is satisfied at $z=0$: the vertical oscillations are centered on the midplane.
For radius,
$$
\begin{align}
0 &= \frac{\partial \Phi_\textrm{eff}}{\partial R} \\
\frac{\partial \Phi(R)}{\partial R} &= \frac{L_z^2}{R^3}
\end{align}
$$
By convention, the radius at which this expression is valid is called the *guiding-center radius*, $R_g$:
$$
\left.\frac{\partial \Phi(R)}{\partial R}\right|_{R_g} = \frac{L_z^2}{R_g^3}
$$
The guiding center radius is an important conceptual quantity: The "guiding center" is an implicit component of the orbit that revolves around the center of the potential on a circular orbit with a constant frequency. Therefore, given our approximations so far, the only reason an orbit in an axisymmetric potential appears non-circular is because a given orbit may make radial $R$ and vertical $z$ oscillations away from the (circular, planar) guiding-center orbit.
### Exercise: Estimate the guiding center radius for an orbit in the Miyamoto–Nagai potential
Estimate the guiding center radius of a planar orbit in the MN potential (the `mn_disk` we defined above) with the initial conditions:
$$
(x,y,z) = (8.5, 0, 0.02)~\textrm{kpc}\\
(v_x,v_y,v_z) = (0, 168, 0)~\textrm{km}~\textrm{s}^{-1}\\
$$
Hint: you might find the root finder `scipy.optimize.root` useful!
Compute an orbit from these initial conditions, plot it in the meridional plane, and draw a vertical line on the plot at the location of the guiding center radius.
```python
from scipy.optimize import root
```
```python
```
---
For the orbit you computed in the exercise above, we can get a better understanding of the geometry of epicyclic motion by plotting the orbit in a coordinate frame that rotates with the azimuthal frequency of the guiding center orbit. This frame rotates with a constant angular speed around the $z$ axis with a frequency (from dimensional analysis):
$$
\Omega_{\phi}^* = \frac{L_z}{R_g^2}
$$
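The code below uses quantities (`trial_orbit`, `trial_Lz`, `trial_Rg`) that come out of the exercise above. Here is a minimal sketch of one way to set them up — the variable names and the placeholder value of `trial_Rg` are assumptions made here, and you should substitute your own root-finder result:
```python
# One possible setup for the quantities used below (a sketch, not the only solution):
trial_w0 = gd.PhaseSpacePosition(
    pos=[8.5, 0, 0.02] * u.kpc,
    vel=[0, 168, 0] * u.km / u.s,
)
trial_orbit = mn_disk.integrate_orbit(trial_w0, dt=1.0 * u.Myr, n_steps=4000)
# z-component of the angular momentum (conserved in an axisymmetric potential):
trial_Lz = trial_w0.angular_momentum()[2]
# Guiding-center radius: placeholder value -- replace with your scipy.optimize.root solution
trial_Rg = 8.5 * u.kpc
```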
With Gala, we can define a rotating reference frame using the `ConstantRotatingFrame()` frame class:
```python
guiding_center_Omega_phi = trial_Lz / trial_Rg ** 2
guiding_center_frame = gp.ConstantRotatingFrame(
[0, 0, 1] * guiding_center_Omega_phi, units=galactic
)
```
We can then transform from the default, static frame to the rotating frame using the `Orbit.to_frame()` method:
```python
guiding_center_orbit = trial_orbit.to_frame(guiding_center_frame)
guiding_center_orbit
```
The returned object is still an `Orbit` instance, so we can plot its trajectory in the x-y plane using `.plot()` as we have done above:
```python
fig, ax = plt.subplots(figsize=(6, 6))
guiding_center_orbit.plot(["x", "y"], auto_aspect=False, axes=ax)
ax.scatter(trial_Rg.value, 0, color="tab:red")
ax.set_xlim(7, 10)
ax.set_ylim(-1.5, 1.5)
```
In the figure above, the red dot shows the location of the guiding center, and the black smeared-out ellipse is the orbit: This is the radial epicycle! The fact that it does not close on itself (to form a perfect ellipse) is because our assumptions are approximate: the orbit also oscillates in $z$.
For even less eccentric orbits, a crude way of approximating an epicycle orbit is to combine two circular motions: the guiding center moving on a circle around the origin of the coordinate system, and the epicycle itself, a smaller circular motion around the guiding center. Writing the position as a complex number (with the guiding-center radius normalized to 1, as in the code below),
$$
Z(t) = e^{i\,\Omega_\phi t} \, \left(1 + A_R\,e^{-i\,\Omega_R t}\right)\\
x(t) = \textrm{Re}(Z)\\
y(t) = \textrm{Im}(Z)
$$
where the minus sign is because the epicycle rotates in the opposite sense, as we learned in the lectures, and $A_R$ is the amplitude of the radial epicycle (which is related to the eccentricity).
Here is an interactive plot that lets us vary the R amplitude and the ratio of $\Omega_\phi/\Omega_R$. If you recall the previous tutorial, there are two limiting cases for the frequency ratio: When $\Omega_\phi=\Omega_R$ we get a Keplerian orbit, and when $\Omega_\phi = \frac{1}{2}\Omega_R$ we get an elliptical orbit centered on the origin. Try playing with the parameter values (using the sliders under the plot below). Can you find any resonant orbits? What resonances do they correspond to?
```python
from ipywidgets import interact, widgets
from IPython.display import display
t = np.arange(0, 32, 1e-2)
Omega_phi = 2 * np.pi
fig, ax = plt.subplots(figsize=(6, 6))
(l,) = ax.plot([], [], marker="")
ax.set_xlim(-2, 2)
ax.set_ylim(-2, 2)
ax.set_xlabel("$x$")
ax.set_ylabel("$y$")
def plot_func(Omega_phi_over_Omega_R, amp_R):
Omega_R = Omega_phi / Omega_phi_over_Omega_R
zz = amp_R * np.exp(-1j * Omega_R * t) + 1
zz = zz * np.exp(1j * Omega_phi * t)
l.set_data(zz.real, zz.imag)
fig.canvas.draw()
display(fig)
plt.close()
```
```python
omega_slider = widgets.FloatSlider(min=0.5, max=1.0, step=0.02, value=0.74)  # default value must lie within [min, max]
amp_R_slider = widgets.FloatSlider(min=0, max=1, step=0.1, value=0.3)
interact(plot_func, Omega_phi_over_Omega_R=omega_slider, amp_R=amp_R_slider);
```
### One more frequency: azimuthal, radial, and vertical frequencies for orbits in axisymmetric models
From the previous tutorial and the previous section here, you should now be familiar with the azimuthal and radial frequencies, $\Omega_\phi, \Omega_R$. Generic orbits in axisymmetric potentials also have a third frequency: The $z$ or *vertical* frequency $\Omega_z$. Under the epicycle approximation, and with an assumed separable (in $R$, $z$) Hamiltonian, the equations of motion for $R$ and $z$ of an object are:
$$
\ddot{R} = -\frac{\partial \Phi_\textrm{eff}}{\partial R}\\
\ddot{z} = -\frac{\partial \Phi_\textrm{eff}}{\partial z}
$$
As noted previously, generically we have to solve these expressions numerically. However, to gain some intuition about the expected orbital properties, like the radial and vertical frequencies, it is useful to make one more approximation related to the epicycle assumption. If we are dealing with near-circular and near-coplanar orbits, we can get an expression for the effective potential close to an orbit's guiding center position $(R, z) \sim (R_g, 0)$ by Taylor expanding the effective potential: This will allow us to solve the orbital equations analytically.
Expanding the effective potential $\Phi_\textrm{eff}$ around the guiding center, we get:
$$
\Phi_{\textrm{eff}}(R, z) \approx \Phi_{\textrm{eff}}(R_g, 0) +
\frac{1}{2}\left.\frac{\partial^2\Phi_\textrm{eff}}{\partial R^2}\right|_{(R_g, 0)} \, (R-R_g)^2 +
\frac{1}{2}\left.\frac{\partial^2\Phi_\textrm{eff}}{\partial z^2}\right|_{(R_g, 0)} \, z^2 +
\mathcal{O}((R-R_g)\,z^2)
$$
Note that in this Taylor series approximation, the potential is separable up to mixed terms (like $(R-R_g)\,z^2$)! With this approximation, the equations of motion are (introducing the variable $X = R-R_g$ for convenience)
$$
\ddot{X} = - \left.\frac{\partial^2\Phi_\textrm{eff}}{\partial R^2}\right|_{(R_g, 0)} \, X\\
\ddot{z} = - \left.\frac{\partial^2\Phi}{\partial z^2}\right|_{(R_g, 0)} \, z
$$
which you may recognize as equations for two independent simple harmonic oscillators: One in radius, that oscillates around the guiding center, and one in vertical position that oscillates around the midplane of the potential. From these expressions, we can read off the expected frequencies of oscillation (for orbits started at the midplane, at the guiding center):
$$
\Omega_R^2 = \left.\frac{\partial^2\Phi_\textrm{eff}}{\partial R^2}\right|_{(R_g, 0)}\\
\Omega_z^2 = \left.\frac{\partial^2\Phi}{\partial z^2}\right|_{(R_g, 0)}
$$
### Exercise: Estimate (analytically) the radial, vertical, and azimuthal frequencies for an orbit at the Solar radius
The Sun is approximately at a radius of $R_\odot \approx 8.1~\textrm{kpc}$ in the Milky Way's disk. This region of the disk is still dominated (in mass) by the gravitational potential of the stars and gas, so we can neglect the dark matter halo as a first approximation.
Assuming $R_g = R_\odot$, estimate the azimuthal frequency of a circular orbit at the Solar circle using the `mn_disk` potential we defined above.
```python
```
Recall that the expression for the effective potential for a Miyamoto–Nagai disk is:
$$
\Phi_\textrm{eff}(R, z) = - \frac{G \, M}{\sqrt{R^2 + (a + \sqrt{b^2 + z^2})^2}} + \frac{L_z^2}{2\,R^2}
$$
Estimate the radial and vertical frequencies for an orbit near the solar circle. How do the frequency and period values compare (radial to azimuthal to vertical)?
*Hint: you either want to take second derivatives of the expression above and evaluate this manually, or you can use the `.hessian()` method on any Gala potential object (but note that this computes the Cartesian 2nd derivative matrix at a specified position)*
```python
```
## Other Axisymmetric and Flattened Potential Models
We have so far worked a lot with the Miyamoto–Nagai potential model; however, as with spherical potential models, there are many options for potential–density pairs for flattened, axisymmetric potential models. In fact, as hinted at above, we can take any spherical potential model and replace the radius with an ellipsoidal radius to obtain a flattened potential. However, it is worth noting that doing this for a spherical potential–density pair does not always lead to an analytic potential–density pair in axisymmetric coordinates.
One other example of a flattened potential that can be related to a density distribution is the flattened logarithmic potential:
$$
\Phi_L(R, z) = \frac{1}{2} v_0^2 \ln\left( R^2 + \frac{z^2}{q^2} + r_c^2 \right)\\
\rho_L(R, z) = \frac{v_0^2}{4\pi \, G \, q^2} \, \frac{(2\,q^2 +1)\,r_c^2 + R^2 + (2 - \frac{1}{q^2})\,z^2}{(R^2 + \frac{z^2}{q^2} + r_c^2)^2}
$$
where $q$ sets the amount of flattening (when $q = 1$, this is a spherical model).
Like the spherical logarithmic potential, this flattened model has the feature that when $R \gg r_c$, the circular velocity curve is close to constant with a value $v_c \approx v_0$, and is therefore useful for constructing simple mass models for computing orbits in the combined stars + dark matter potential of a Galaxy. However, as this model was defined by turning a spherical logarithmic *potential* model into a flattened potential model, the density does not have to be physically meaningful at all positions for all parameter values.
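To build intuition for the exercise below, here is a small helper that simply evaluates the $\rho_L$ expression above on a grid (a sketch; the default parameter values are arbitrary choices, not values from the tutorial):
```python
import numpy as np

def rho_L(R, z, v0=1.0, q=0.7, r_c=1.0, G=1.0):
    """Density of the flattened logarithmic potential, from the formula above."""
    m2 = R**2 + z**2 / q**2 + r_c**2
    numer = (2 * q**2 + 1) * r_c**2 + R**2 + (2 - 1 / q**2) * z**2
    return v0**2 / (4 * np.pi * G * q**2) * numer / m2**2

# Example: check the sign of the density along the z axis
z = np.linspace(0, 5, 11)
print(rho_L(R=0.0, z=z))
```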
### Exercise: Are there values for the parameters (q, r_c) that lead to unphysical density values?
If so, for what values of the parameters?
```python
```
## Integrals of motion for axisymmetric orbits: Connection to action-angle coordinates
We have talked a lot about approximating orbits in axisymmetric potentials because most stars in the Milky Way are on orbits that can be described using the terminology and quantities we have discussed so far. We have focused a lot on the different frequencies that an orbit can have in different cylindrical coordinates, assuming that the motion is separable. However another important aspect of the geometry of axisymmetric orbits is the *amplitude* of oscillation in the different coordinates.
As we discussed early on in this tutorial, in axisymmetric models, a generic orbit has three integrals of motion, which could be energy, $z$-component of angular momentum, and the "third integral." It turns out that any orbit also has a different set of three integrals of motion that has a closer connection to the orbital geometry we discussed: something close to the amplitude of oscillations in the radial direction, something close to the amplitude of oscillations in the vertical direction, and the $z$-component of angular momentum (which is the analog of the others for the azimuthal direction) — these are the "radial action" $J_R$, the "vertical action" $J_z$, and the "azimuthal action" $J_\phi = L_z$. I added the phrase "something close to" in the previous sentence because the actions are not exactly related to the amplitudes in $R$ and $z$: Most of the expressions and intuition-building approximations we have made above have been under the assumption that the potential is separable in these coordinates. But we also saw that this is not true in detail (there is curvature to the orbital boundaries when orbits are plotted in the meridional $R,Z$ plane). The actions are like a generalization of the conceptual / approximate arguments we made above, but which take into account the fact that the orbits are not separable in detail.
Actions are special and useful for a number of reasons. For one, they connect to the intuitions we built about orbits above: For example, orbits with larger radial action have larger oscillations in $R$, and the same for $z$. But, as mentioned briefly in the previous tutorial, they also help define a dynamically-useful coordinate system known as action-angle coordinates. In this coordinate system, any point in ordinary phase-space $(\boldsymbol{x}, \boldsymbol{v})$ can be represented instead by a location in action-angle coordinates $(\boldsymbol{\theta}, \boldsymbol{J})$, where for axisymmetric orbits the actions and angles are:
$$
\boldsymbol{J} = (J_R, J_\phi, J_z)\\
\boldsymbol{\theta} = (\theta_R, \theta_\phi, \theta_z)
$$
where $J_\phi = L_z$, and in the limit $J_R, J_z \rightarrow 0$, $\theta_\phi \rightarrow \phi$.
The actions, which are the momentum coordinates, are integrals of motion, so $\dot{\boldsymbol{J}} = 0$. The Hamiltonian equations of motion are therefore:
$$
\dot{J}_i = -\frac{\partial H}{\partial \theta_i} = 0\\
\dot{\theta}_i = \frac{\partial H}{\partial J_i}
$$
This implies that the Hamiltonian $H$ must depend only on the actions, and so the expressions $\frac{\partial H}{\partial J_i}$ must be a function only of the actions. We can therefore integrate the equations of motion for the angle variables:
$$
\dot{\theta}_i = \frac{\partial H}{\partial J_i}\\
\theta_i(t) = \frac{\partial H}{\partial J_i} \, t + \theta_{i}(0)
$$
The angle variables increase linearly with time with a constant rate set by $\frac{\partial H}{\partial J_i}$, which must have units of frequency: We therefore define the *fundamental frequencies* of an orbit to be:
$$
\Omega_i = \frac{\partial H}{\partial J_i}
$$
which, in vector form, will be $\boldsymbol{\Omega} = (\Omega_R, \Omega_\phi, \Omega_z)$!
Action-angle coordinates are conceptually and dynamically very useful, and there is a lot more to be said about them. However, a subtlety of working with these coordinates is that it is often challenging to transform from $(x, v) \rightarrow (J, \theta)$ and even more challenging to transform back $(J, \theta) \rightarrow (x, v)$. The transformation often requires either numerically integrating an orbit for a long time, or making approximations to the potential (that often are not valid for all types of orbits). We will not talk much about the different methods for numerically estimating actions and angles (unless you ask!), but we recommend checking out [Sanders et al. 2016](https://ui.adsabs.harvard.edu/abs/2016MNRAS.457.2107S/abstract) if you are interested.
| ad8d3774e08bf2b7aeb5b8cbbdd483277bbe3c88 | 41,757 | ipynb | Jupyter Notebook | 1-Orbits/2-Orbits-in-axisymmetric-potentials.ipynb | CCADynamicsGroup/SummerSchoolWorkshops | b7f2f2cd049eb21c7b2220e424e67e466c5ba106 | [
"MIT"
]
| 5 | 2021-07-09T00:18:32.000Z | 2022-02-21T16:44:15.000Z | 1-Orbits/2-Orbits-in-axisymmetric-potentials.ipynb | CCADynamicsGroup/SummerSchoolWorkshops | b7f2f2cd049eb21c7b2220e424e67e466c5ba106 | [
"MIT"
]
| 7 | 2021-06-28T14:04:40.000Z | 2021-07-08T13:16:09.000Z | 1-Orbits/2-Orbits-in-axisymmetric-potentials.ipynb | CCADynamicsGroup/SummerSchoolWorkshops | b7f2f2cd049eb21c7b2220e424e67e466c5ba106 | [
"MIT"
]
| 4 | 2021-09-24T21:48:58.000Z | 2022-02-21T16:44:59.000Z | 47.183051 | 1,369 | 0.636947 | true | 8,124 | Qwen/Qwen-72B | 1. YES
2. YES | 0.841826 | 0.760651 | 0.640335 | __label__eng_Latn | 0.996935 | 0.326044 |
```python
from IPython.display import Image
Image('../../Python_probability_statistics_machine_learning_2E.png',width=200)
```
# Projection Methods
<div id="ch:prob:sec:projection"></div>
The concept of
projection is key to developing an intuition about conditional
probability. We
already have a natural intuition of projection from looking at
the shadows of
objects on a sunny day. As we will see, this simple idea
consolidates many
abstract ideas in optimization and mathematics. Consider
[Figure](#fig:probability_001) where we want to find a point along the blue
line
(namely, $\mathbf{x}$) that is closest to the black square (namely,
$\mathbf{y}$). In other words, we want to inflate the gray circle until it just
touches the black line. Recall that the circle boundary is the set of points for
which
$$
\sqrt{(\mathbf{y}-\mathbf{x})^T(\mathbf{y}-\mathbf{x})}
=\|\mathbf{y}-\mathbf{x} \| = \epsilon
$$
for some value of $\epsilon$. So we want a point $\mathbf{x}$ along
the line
that satisfies this for the smallest $\epsilon$. Then, that point
will be the
closest point on the black line to the black square.
It may be obvious from the
diagram, but the closest point on the line
occurs where the line segment from
the black square to the black line is
perpendicular to the line. At this point,
the gray circle just touches the black
line. This is illustrated below in
[Figure](#fig:probability_002).
<!-- FIGURE: fig-probability/probability_001.png, width=500 frac=0.90 -->
<div id="fig:probability_001"></div>
<p>Given the point $\mathbf{y}$ (black square) we want to find the $\mathbf{x}$ along the line that is closest to it. The gray circle is the locus of points within a fixed distance from $\mathbf{y}$.</p>
**Programming Tip.**
[Figure](#fig:probability_001) uses
the `matplotlib.patches` module. This
module contains primitive shapes like
circles, ellipses, and rectangles that
can be assembled into complex graphics.
After importing a particular shape, you
can apply that shape to an existing axis
using the `add_patch` method. The
patches themselves can be styled using the
usual formatting keywords like
`color` and `alpha`.
<!-- FIGURE: fig-probability/probability_002.png, width=500 frac=0.90 -->
<div id="fig:probability_002"></div>
<p>The closest point on the line occurs when the line is tangent to the circle. When this happens, the black line and the line of minimum distance are perpendicular.</p>
Now that we
can see what's going on, we can construct the solution
analytically. We can
represent an arbitrary point along the black line as:
$$
\mathbf{x}=\alpha\mathbf{v}
$$
where $\alpha\in\mathbb{R}$ slides the point up and down the line with
$$
\mathbf{v} = \left[ 1,1 \right]^T
$$
Formally, $\mathbf{v}$ is the *subspace* onto which we want to
*project*
$\mathbf{y}$. At the closest point, the vector between
$\mathbf{y}$ and
$\mathbf{x}$ (the *error* vector above) is
perpendicular to the line. This means
that
$$
(\mathbf{y}-\mathbf{x} )^T \mathbf{v} = 0
$$
and by substituting and working out the terms, we obtain
$$
\alpha = \frac{\mathbf{y}^T\mathbf{v}}{ \|\mathbf{v} \|^2}
$$
The *error* is the distance between $\alpha\mathbf{v}$ and $
\mathbf{y}$. This
is a right triangle, and we can use the Pythagorean
theorem to compute the
squared length of this error as
$$
\epsilon^2 = \|( \mathbf{y}-\mathbf{x} )\|^2 = \|\mathbf{y}\|^2 - \alpha^2
\|\mathbf{v}\|^2 = \|\mathbf{y}\|^2 -
\frac{\|\mathbf{y}^T\mathbf{v}\|^2}{\|\mathbf{v}\|^2}
$$
where $ \|\mathbf{v}\|^2 = \mathbf{v}^T \mathbf{v} $. Note that since
$\epsilon^2 \ge 0 $, this also shows that
$$
\| \mathbf{y}^T\mathbf{v}\| \le \|\mathbf{y}\| \|\mathbf{v}\|
$$
which is the famous and useful Cauchy-Schwarz inequality which we
will exploit
later. Finally, we can assemble all of this into the *projection*
operator
$$
\mathbf{P}_v = \frac{1}{\|\mathbf{v}\|^2 } \mathbf{v v}^T
$$
With this operator, we can take any $\mathbf{y}$ and find the closest
point on
$\mathbf{v}$ by doing
$$
\mathbf{P}_v \mathbf{y} = \mathbf{v} \left( \frac{ \mathbf{v}^T \mathbf{y}
}{\|\mathbf{v}\|^2} \right)
$$
where we recognize the term in parenthesis as the $\alpha$ we
computed earlier.
It's called an *operator* because it takes a vector
($\mathbf{y}$) and produces
another vector ($\alpha\mathbf{v}$). Thus,
projection unifies geometry and
optimization.
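The following short snippet illustrates these formulas numerically (the particular $\mathbf{y}$ used here is an arbitrary example, not a value from the text):
```python
import numpy as np
v = np.array([1.0, 1.0])        # the subspace direction from above
y = np.array([1.0, 3.0])        # an arbitrary example point
alpha = (y @ v) / (v @ v)       # alpha = y^T v / ||v||^2
P_v = np.outer(v, v) / (v @ v)  # projection operator P_v = v v^T / ||v||^2
print(P_v @ y, alpha * v)       # both give the closest point on the line
print(np.allclose(P_v @ P_v, P_v))  # idempotent property (discussed below)
```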
## Weighted distance
We can easily extend this projection
operator to cases where the measure of
distance between $\mathbf{y}$ and the
subspace $\mathbf{v}$ is weighted. We can
accommodate these weighted distances
by re-writing the projection operator as
<!-- Equation labels as ordinary links -->
<div id="eq:weightedProj"></div>
$$
\begin{equation}
\mathbf{P}_v=\mathbf{v}\frac{\mathbf{v}^T\mathbf{Q}^T}{\mathbf{v}^T\mathbf{Q v}}
\end{equation}
\label{eq:weightedProj} \tag{1}
$$
where $\mathbf{Q}$ is a positive definite matrix. In the previous
case, we
started with a point $\mathbf{y}$ and inflated a circle centered at
$\mathbf{y}$
until it just touched the line defined by $\mathbf{v}$ and this
point was the
closest point on the line to $\mathbf{y}$. The same thing happens
in the general
case with a weighted distance except now we inflate an
ellipse, not a circle,
until the ellipse touches the line.
<!-- CODE: src-probability/Projection.py (fromto: ^theta@^fig,ax) -->
<!-- FIGURE: fig-probability/probability_003.png, width=500 frac=0.95 -->
<div id="fig:probability_003"></div>
<p>In the weighted case, the closest point on the line is tangent to the ellipse and is still perpendicular in the sense of the weighted distance.</p>
Note that the
error vector ($\mathbf{y}-\alpha\mathbf{v}$) in [Figure](#fig:probability_003)
is still perpendicular to the line (subspace
$\mathbf{v}$), but in the space of
the weighted distance. The difference
between the first projection (with the
uniform circular distance) and the
general case (with the elliptical weighted
distance) is the inner product
between the two cases. For example, in the first
case we have $\mathbf{y}^T
\mathbf{v}$ and in the weighted case we have
$\mathbf{y}^T \mathbf{Q}^T
\mathbf{v}$. To move from the uniform circular case
to the weighted ellipsoidal
case, all we had to do was change all of the vector
inner products. Before we
finish, we need a formal property of projections:
$$
\mathbf{P}_v \mathbf{P}_v = \mathbf{P}_v
$$
known as the *idempotent* property which basically says that once we
have
projected onto a subspace, subsequent projections leave us in the
same subspace.
You can verify this by computing Equation [1](#eq:weightedProj).
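For instance, a quick numerical check (with an arbitrary positive definite $\mathbf{Q}$ chosen here for illustration) looks like this:
```python
import numpy as np
v = np.array([1.0, 1.0])
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])               # an arbitrary positive definite weight matrix
P_v = np.outer(v, Q @ v) / (v @ Q @ v)   # Equation (1)
print(np.allclose(P_v @ P_v, P_v))       # True: projecting twice changes nothing
```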
Thus,
projection ties a minimization problem (closest point to a line) to an
algebraic
concept (inner product). It turns out that these same geometric ideas
from
linear algebra [[strang2006linear]](#strang2006linear) can be translated to the
conditional
expectation. How this works is the subject of our next section.
| 84887c899b20dfddef3ef71d7ca9d00d0774fa9b | 187,853 | ipynb | Jupyter Notebook | chapter/probability/projection.ipynb | derakding/Python-for-Probability-Statistics-and-Machine-Learning-2E | 9d12a298d43ae285d9549a79bb5544cf0a9b7516 | [
"MIT"
]
| 224 | 2019-05-07T08:56:01.000Z | 2022-03-25T15:50:41.000Z | chapter/probability/projection.ipynb | derakding/Python-for-Probability-Statistics-and-Machine-Learning-2E | 9d12a298d43ae285d9549a79bb5544cf0a9b7516 | [
"MIT"
]
| 9 | 2019-08-27T12:57:17.000Z | 2021-09-21T15:45:13.000Z | chapter/probability/projection.ipynb | derakding/Python-for-Probability-Statistics-and-Machine-Learning-2E | 9d12a298d43ae285d9549a79bb5544cf0a9b7516 | [
"MIT"
]
| 73 | 2019-05-25T07:15:47.000Z | 2022-03-07T00:22:37.000Z | 604.028939 | 176,652 | 0.943099 | true | 2,255 | Qwen/Qwen-72B | 1. YES
2. YES | 0.72487 | 0.737158 | 0.534344 | __label__eng_Latn | 0.99721 | 0.07979 |
```python
'''
File name : Math HW 1A -- Dimensional Analysis.ipynb
Author : Ming-Feng Ho
Date created : 9/30/2018
Date last modified : 10/2/2018
Python Version : 3.6
Sympy Version : 1.3
'''
import sympy as sp
# pretty print the latex formula in notebook
sp.init_printing()
# this function is like the print function in Jupyter notebook,
# you can check the Jupyter display document.
from IPython.display import display
```
```python
# (a) use the Coulomb law in cgs unit
F, q1, q2, r = sp.symbols("F q1 q2 r", real=True)
coulomb_law = sp.Eq(F, q1 * q2 * r**-2)
print("(a) Coulomb law in cgs unit")
display(coulomb_law)
# use symbols to represent the dimensions: length, mass, time
L, M, T = sp.symbols("L M T")
# convert into dimension
coulomb_law_dimension = coulomb_law.subs({
F : M * L * T**-2,
q1: sp.symbols("[q]"),
q2: sp.symbols("[q]"),
r : L
})
print("Substitute the dimensions")
display(coulomb_law_dimension)
# solve for [q]
sol = sp.solve(coulomb_law_dimension, sp.symbols("[q]"), rational=False)[1]
coulomb_law_dimension = sp.Eq(sp.symbols("[q]"), sol)
print("Solve for [q]")
display(coulomb_law_dimension.expand(force=True))
```
```python
# (b) Lagrangian L = Kinetic energy - potential energy, so it is in the same unit of energy.
Lagrangian = M * (L * T**-1)**2
print("(b) Lagrangian is in the same unit of energy, so I use m*v**2 to express the dimension of Lagrangian: [Lagrangian] =")
display(Lagrangian)
# Action is [S] = [Lagrangian] * T
S = sp.symbols("[S]")
display(sp.Eq(S, Lagrangian * T, ))
```
```python
# (c) Recall the F = q (v x B) equation
B, v, q = sp.symbols("B v q")
print("(c) Recall the F = q (v x B) equation")
# damn ... sympy's cross product is in vector form.
display(sp.Eq(
sp.symbols("[B]"),
(F * q**-1 * v**-1).subs({
F : M * L * T**-2,
q : coulomb_law_dimension.rhs,
v : L * T**-1
})
).expand(force=True))
```
```python
# (d) Energy is in the same unit as the Lagrangian
print("(d) Energy is in the same unit as the Lagrangian")
display(sp.Eq(
sp.symbols("[E]"),
Lagrangian
))
```
```python
# (a) desert animal
```
(a) desert animal
I played a video game called "Zelda: Breath of the Wild" on the Nintendo Switch before. It is a really nice game. "Zelda" allows players to explore the world of the Hyrule Kingdom with limited constraints. Anyway, there's a town called Gerudo in the southern area of the Hyrule Kingdom. The town is located in a harsh desert. While the player is running in the desert area, because it is too hot, their life (or Health Points, HP) will decrease until they die.
So, we may assume that the maximal distance an animal can run depends on how many HPs it has and the rate at which its HPs decrease under the hot desert weather.
```python
# maximal distance = velocity * time = velocity * (number of HPs) / (rate of decreasing HP)
HP, rate = sp.symbols("HP rate")
maximal_distance = v * HP / rate
print("maximal distance = velocity * time = velocity * (number of HPs) / (rate of decreasing HP) =\n= {}\n".format(maximal_distance))
# assume: number of HPs is proportional to the volume size of an animal.
HP_expr = L**3
print("Assume: number of HPs is proportional to the volume size of an animal. \nLarger animals get more HPs. \nIn the game, the Bosses are usually bigger and have more HPs, so there may have a correlation here.")
display(sp.Eq(sp.symbols("[HP]"), HP_expr))
# assume: rate of decreasing HPs is proportional to the surface size of an animal.
rate_expr = L**2
print("Assume: rate of decreasing HPs is proportional to the surface size of an animal. \nLarger surface area usually more easily suffer from dehydration. \nAnd when you dehydration in the dessert, you die.")
display(sp.Eq(sp.symbols("[rate]"), rate_expr))
# combine all information, and recall that time is proportional to (number of HPs) / (rate of decreasing HP)
maximal_distance_dimension = maximal_distance.subs({
v : L * T**-1,
HP : HP_expr,
rate : rate_expr,
}).subs({
T : HP_expr * rate_expr**-1
})
print("Combine all information, and recall that time is proportional to (number of HPs) / (rate of decreasing HP)")
display(sp.Eq(
sp.symbols("[MaximalDistance]"),
maximal_distance_dimension
))
```
(b) Height of the jump of an animal
```python
# (b) Height of the jump of an animal
# recall the equation of Work and Energy
dx, W, m, g, h = sp.symbols("dx W m g h")
work_energy_expr = sp.Eq(F * dx, m * g * h)
print("Recall the equation of Work and Energy: \nWork = force * distance = mass * gravitational_acceleration * height")
display(work_energy_expr)
h_expr = sp.Eq(h, sp.solve(work_energy_expr, h)[0])
display(h_expr)
# set the force proportional to the cross section
print("Force is proptional to the cross section, which means the force is proportional to the size square, ")
c = sp.symbols("c")
F_expr = sp.Eq(F, c * L**2)
display(F_expr)
print("where L is the size and c is just a constant.")
# set the mass proportional to size^3; the distance over which the force pushes on the floor is proportional to the size
print("Say if the density of any animal is just a constant, we can set the mass to be proportional to the size^3.")
print("And, the dx, a displacement in a straight line in the direction of the force, is just proportional to the size.")
print("We get, ")
display(h_expr.subs({
F : F_expr.rhs,
m : L**3,
g : 1,
dx : L
}).subs({
c : sp.symbols("c'")
}))
print("Surprisingly, we get the height an animal can jump is a constant.")
```
I guess it is the problem of the body density. If we take density into account, we get
\begin{equation}
h \propto \frac{1}{\rho},
\end{equation}
where $\rho$ is the body density. The above relation is more plausible than $h$ being just a constant.
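A quick sanity check of this claim with sympy, reusing the symbols defined in the cells above (the density symbol `rho` is an addition made here, not part of the original homework):
```python
rho = sp.symbols("rho", positive=True)
# redo part (b) with an explicit body density: m = rho * L**3
h_with_density = h_expr.subs({
    F : F_expr.rhs,    # F = c * L**2
    m : rho * L**3,
    dx : L
})
display(h_with_density)   # gives h = c / (g * rho), i.e. h is inversely proportional to rho
```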
| 04c0ac59e3075d42cbf73721c0fc63c6ed32e9e9 | 28,366 | ipynb | Jupyter Notebook | Math HW 1A -- Dimensional Analysis.ipynb | jibanCat/math_homework | 228fd61a84d03cdd8944d348e98662f1033e97ef | [
"MIT"
]
| null | null | null | Math HW 1A -- Dimensional Analysis.ipynb | jibanCat/math_homework | 228fd61a84d03cdd8944d348e98662f1033e97ef | [
"MIT"
]
| null | null | null | Math HW 1A -- Dimensional Analysis.ipynb | jibanCat/math_homework | 228fd61a84d03cdd8944d348e98662f1033e97ef | [
"MIT"
]
| null | null | null | 48.488889 | 1,448 | 0.699464 | true | 1,673 | Qwen/Qwen-72B | 1. YES
2. YES | 0.888759 | 0.833325 | 0.740625 | __label__eng_Latn | 0.987028 | 0.559051 |
# Introduction
Understanding the behavior of neural networks and why they generalize has been a central pursuit of the theoretical deep learning community.
Recently, [Valle-Perez et al. (2019)](http://arxiv.org/abs/1805.08522) observed that neural networks have a certain "simplicity bias" and proposed this as a solution to the generalization question.
One of the ways with which they argued that this bias exists is the following experiment:
they drew a large sample of boolean functions $\{\pm1\}^7 \to \{\pm 1\}$ by randomly initializing neural networks and thresholding the output.
They observed that there is a bias toward some "simple" functions which get sampled disproportionately more often.
However, their experiments were only done for 2 layer relu networks.
Can one expect this "simplicity bias" to hold universally, for any architecture?
# A Quick Replication of Valle-Perez et al.'s Probability-vs-Rank Experiment
```python
import numpy as np
import scipy as sp
from scipy.special import erf as erf
from collections import OrderedDict as OD
import matplotlib.pyplot as plt
from itertools import product
import seaborn as sns
sns.set()
from mpl_toolkits.axes_grid1 import ImageGrid
def tight_layout(plt):
plt.tight_layout(rect=[0, 0.03, 1, 0.95])
# our functions for sampling boolean functions belong here
from sample_boolean import *
np.random.seed(0)
_ = torch.manual_seed(0)
```
We sample $10^4$ random neural networks on the 7-dimensional boolean cube $\{\pm 1\}^7$ and threshold the results to get $10^4$ boolean functions.
Here, we sample 2 layer relu networks with 40 neurons each, with weights $W_{ij} \sim \mathcal N(0, \sigma_w^2/40) = \mathcal N(0, 2/40)$ and biases $b_i \sim \mathcal N(0, \sigma_b^2) = \mathcal N(0, 2)$, following [Valle-Perez et al. (2019)](http://arxiv.org/abs/1805.08522).
```python
WIDTHSPEC = (7, 40, 40, 1)
nsamples = 10**4
funcounters = {'relu': {}}
funfreq = {'relu': {}}
# vb = sigma_b^2
vb = 2
# vw = \sigma_w^2
for vw in [2]:
# `funcounters` holds a dictionary (more precisely, a `Counter` object)
# of boolean function (as a string of length 2^7 = 128) to its frequency
funcounters['relu'][vw] = sample_boolean_fun(MyNet(nn.ReLU, WIDTHSPEC), vw, vb, nsamples, outformat='counter')
# `funfreq` just has a list of frequencies
funfreq['relu'][vw] = OD(funcounters['relu'][vw].most_common()).values()
```
Sort the functions by frequency, then plot each function's rank in this order against its empirical probability.
```python
plt.plot(np.array(list(funfreq['relu'][2]), dtype='float')/ nsamples, '--', label='relu | 2 | 2')
plt.loglog()
plt.xlabel('rank')
plt.ylabel('probability')
plt.title('relu network simplicity bias')
plt.show()
```
Indeed, some functions are *way more* likely than others.
For example, what are the top 4 most frequent boolean functions? They are either constant functions or functions that differ from a constant function at a single input.
```python
for boolfun, freq in funcounters['relu'][2].most_common()[:4]:
print('function as a binary string:')
print('\t', boolfun)
print('frequency')
print('\t', freq)
```
function as a binary string:
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
frequency
1802
function as a binary string:
11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111
frequency
1762
function as a binary string:
11111111111111111111111111111111111111111111111111111111111111111111110111111111111111111111111111111111111111111111111111111111
frequency
9
function as a binary string:
11101111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111
frequency
5
Hmm this is pretty interesting!
Would this phenomenon hold for architectures other than a 2 layer relu network?
For example, let's try some networks with sigmoid activations.
Here we will use `erf`, since we can do some spectral calculations for it later.
```python
nsamples = 10**4
funcounters['erf'] = {}
funfreq['erf'] = {}
vb = 0
for vw in [1, 2, 4]:
# `funcounters` holds a dictionary (more precisely, a `Counter` object) of boolean function (as a string) to its frequency
funcounters['erf'][vw] = sample_boolean_fun(MyNet(Erf, WIDTHSPEC), vw, vb, nsamples, outformat='counter')
# `funfreq` just has a list of frequencies
funfreq['erf'][vw] = OD(funcounters['erf'][vw].most_common()).values()
```
```python
plt.plot(np.array(list(funfreq['relu'][2]), dtype='float')/ nsamples, '--', label='relu | 2 | 2')
plt.plot(np.array(list(funfreq['erf'][1]), dtype='float')/ nsamples, label='erf | 1 | 0')
plt.plot(np.array(list(funfreq['erf'][2]), dtype='float')/ nsamples, label='erf | 2 | 0')
plt.plot(np.array(list(funfreq['erf'][4]), dtype='float')/ nsamples, label='erf | 4 | 0')
plt.loglog()
plt.xlabel('rank')
plt.ylabel('probability')
plt.title(u'probability vs rank of $10^4$ random networks on $\{\pm1\}^7$')
plt.legend(title='$\phi$ | $\sigma_w^2$ | $\sigma_b^2$')
plt.show()
```
Looks like this "simplicity bias" is diminished when we use `erf`, and then goes away when we increase $\sigma_w^2$!
So it doesn't look like this "simplicity bias" is universal.
How can we understand this phenomenon better?
When can we expect "simplicity bias"?
# A Spectral Perspective on Simplicity Bias
*A priori*, the nonlinear nature of neural networks seems to present an obstacle to reasoning about the distribution of random networks.
However, this question turns out to be more easily treated if we allow the *width to go to infinity*.
A long line of works starting with [Neal (1995)](http://www.cs.toronto.edu/~radford/bnn.book.html) and extended recently by [Lee et al. (2018)](https://openreview.net/forum?id=B1EA-M-0Z), [Novak et al. (2019)](https://arxiv.org/abs/1810.05148), and [Yang (2019)](https://arxiv.org/abs/1902.04760) has shown that randomly initialized, infinite-width networks are distributed as Gaussian processes.
These Gaussian processes also describe finite-width random networks well, as confirmed by [Valle-Perez et al.](http://arxiv.org/abs/1805.08522) themselves.
We will refer to the corresponding kernels as the *Conjugate Kernels* (CK), following the terminology of [Daniely et al. (2017)](http://papers.nips.cc/paper/6427-toward-deeper-understanding-of-neural-networks-the-power-of-initialization-and-a-dual-view-on-expressivity.pdf).
Given the CK $K$, the simplicity bias of a wide neural network can be read off quickly from the *spectrum of $K$*:
If the largest eigenvalue of $K$ accounts for most of its trace, then a typical random network looks like a function from the top eigenspace of $K$.
More precisely, if we have the eigendecomposition
\begin{equation}
K = \sum_{i \ge 1} \lambda_i u_i\otimes u_i
\label{eqn:eigendecomposition}
\end{equation}
with eigenvalues $\lambda_i$ in decreasing order and corresponding eigenfunctions $u_i$, then each sample (i.e. wide neural network) from this GP can be obtained as
$$
\sum_{i \ge 1} \sqrt{\lambda_i} \omega_i u_i,\quad
\omega_i \sim \mathcal N(0, 1).
$$
If, for example, $\lambda_1 \gg \sum_{i \ge 2}\lambda_i$, then a typical sample function is just a very small perturbation of $u_1$.
This motivates us to take a look at the spectrum of the CK.
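As a toy illustration of this sampling formula (the eigenvalues and the trivial eigenbasis below are made-up stand-ins, not an actual CK spectrum):
```python
toy_lambdas = np.array([0.8, 0.15, 0.05])   # assumed eigenvalues, largest first
toy_u = np.eye(3)                           # stand-in orthonormal "eigenfunctions" on 3 inputs
omega = np.random.randn(3)
toy_sample = toy_u @ (np.sqrt(toy_lambdas) * omega)   # sum_i sqrt(lambda_i) * omega_i * u_i
print(toy_sample)   # dominated by the first column of toy_u when lambda_1 is large
```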
## A brief summary of the spectral theory of CK
Now, if the CK has spectra difficult to compute, then this perspective is not so useful.
But in idealized settings, where the data distribution is uniform over the boolean cube, the sphere, or from the standard Gaussian, a complete (or almost complete in the Gaussian case) eigendecomposition of the kernel can be obtained, thanks to the symmetry of the domain.
Of course, the probability-vs-rank experiment of Valle-Perez et al. is in this exact setting.
Here and in the paper, we focus on the boolean cube, since in high dimensions, all three distributions are very similar, and the boolean cube eigenvalues are much easier to compute (see paper for more details).
We briefly summarize the spectral theory of CK and NTK (of multilayer perceptrons, or MLPs) on the boolean cube.
First, these kernels are always diagonalized by the *boolean Fourier basis*, which are just monomial functions like $x_1 x_3 x_{10}$.
These Fourier basis functions are naturally graded by their *degree*, ranging from 0 to the dimension $d$ of the cube.
Then the kernel has $d+1$ unique eigenvalues,
$$\mu_0, \ldots, \mu_d$$
corresponding to each of the degrees, so that the eigenspace associated to $\mu_k$ is a $\binom d k$ dimensional space of monomials with degree $k$.
These eigenvalues are simple linear functions of a small number of the kernel values, and can be easily computed.
So let's compute the eigenvalues of the CK corresponding to the architectures we've used above!
# Computing Eigenvalues over a Grid of Hyperparameters
Our methods for doing the theoretical computations lie in the `theory` module.
```python
from theory import *
```
First, let's compute the eigenvalues of erf CK and NTK over these hyperparameters:
- $\sigma_w^2 \in \{1, 2, 4\}$
- $\sigma_b^2 = 0$
- dimension 7 boolean cube
- depth up to 100
- degree $k \le 7$.
```python
erfvwrange = [1, 2, 4]
erfvbrange = [0]
s_erfvws, s_erfvbs = np.meshgrid([1, 2, 4], [0], indexing='ij')
dim = 7
depth = 100
maxdeg = 7
```
As mentioned in the paper, any CK or NTK $K$ of multilayer perceptrons (MLPs) takes the form
$$K(x, y) = \Phi\left(\frac{\langle x, y \rangle}{\|x\|\|y\|}, \frac{\|x\|^2}d, \frac{\|y\|^2}d\right)$$
for some function $\Phi: \mathbb R^3 \to \mathbb R$.
On the boolean cube $\{1, -1\}^d$, $\|x\|^2 = d$ for all $x$, and $\langle x, y \rangle / d$ takes value in a discrete set $\{-1, -1+2/d, \ldots, 1-2/d, 1\}$.
Thus $K(x, y)$ only takes a finite number of different values as well.
We first compute these values (see paper for the precise formulas).
```python
# `erfkervals` has two entries, with keys `cks` and `ntks`, but the `ntks` entry is not relevant to us here
# Each entry is an array with shape (`depth`, len(erfvwrange), len(erfvbrange), `dim`+1)
# The last dimension carries the entries $\Phi(-1), \Phi(-1 + 2/d), ..., \Phi(1)$
s_erfkervals = boolcubeFgrid(dim, depth, s_erfvws, s_erfvbs, VErf, VDerErf)
```
The eigenvalues $\mu_k, k = 0, 1, \ldots, d$, can be expressed as simple linear functions of $\Phi$'s values, as hinted before.
However, a naive evaluation would lose too much numerical precision because of the number of alternating terms.
Instead, we do something more clever, resulting in the following algorithm:
- For $\Delta = 2/d$, we first evaluate $\Phi^{(a)}(x) = \frac 1 2 \left(\Phi^{(a-1)}(x) - \Phi^{(a-1)}(x - \Delta)\right)$ with base case $\Phi^{(0)} = \Phi$, for $a = 0, 1, \ldots$, and for various values of $x$.
- Then we just sum a bunch of nonnegative terms to get the eigenvalue $\mu_k$ associated to degree $k$ monomials
$$\mu_k = \frac 1{2^{d-k}} \sum_{r=0}^{d-k}\binom{d-k}r \Phi^{(k)}(1 - r \Delta).$$
Note that, here we will compute *normalized eigenvalues*, normalized by their trace.
So these normalized eigenvalues, with multiplicity, should sum up to 1.
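For concreteness, a direct (unoptimized) re-implementation of this recursion might look like the sketch below — this is not the code in the `theory` module, just an illustration of the algorithm, with `phi_vals[j]` holding $\Phi(-1 + j\Delta)$ for $j = 0, \ldots, d$:
```python
from scipy.special import comb

def boolcube_eigenvalues_sketch(phi_vals, d):
    # phi_vals[j] = Phi(-1 + j * Delta), Delta = 2/d; returns unnormalized mu_0, ..., mu_d
    cur = np.array(phi_vals, dtype=float)   # current Phi^{(k)} on the grid
    mus = []
    for k in range(d + 1):
        # mu_k = 2^{-(d-k)} * sum_r binom(d-k, r) * Phi^{(k)}(1 - r*Delta)
        mu_k = sum(comb(d - k, r) * cur[d - r] for r in range(d - k + 1)) / 2.0 ** (d - k)
        mus.append(mu_k)
        # Phi^{(k+1)}(x) = (Phi^{(k)}(x) - Phi^{(k)}(x - Delta)) / 2; index 0 becomes invalid
        cur = np.concatenate(([np.nan], 0.5 * (cur[1:] - cur[:-1])))
    return np.array(mus)

# e.g. (assuming the array layout described in the cell above):
# mu = boolcube_eigenvalues_sketch(s_erfkervals['cks'][1, 0, 0], dim)
```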
```python
s_erfeigs = {}
# `erfeigs['ck']` is an array with shape (`maxdeg`, `depth`+1, len(erfvwrange), len(erfvbrange))
# `erfeigs['ck'][k, L] is the matrix of eigenvalue $\mu_k$ for a depth $L$ erf network,
# as a function of the values of $\sigma_w^2, \sigma_b^2$ in `erfvwrange` and `erfvbrange`
# Note that these eigenvalues are normalized by the trace
# (so that all normalized eigenvalues sum up to 1)
s_erfeigs['ck'] = relu(boolCubeMuAll(dim, maxdeg, s_erfkervals['cks'], twostep=False))
```
This computes all we need for the erf kernels.
Now let's do the relu one.
```python
s_reluvws, s_reluvbs = np.meshgrid([2], [1], indexing='ij')
dim = 7
depth = 2
maxdeg = 7
s_relukervals = boolcubeFgrid(dim, depth, s_reluvws, s_reluvbs, VReLU, VStep)
s_relueigs = {}
s_relueigs['ck'] = relu(boolCubeMuAll(dim, maxdeg, s_relukervals['cks'], twostep=False))
```
# A Spectral Explanation of the Simplicity Bias
```python
def prunesmall(s, thr=1e-14):
t = np.array(s)
t[t<thr] = 0
return t
```
```python
plt.figure(figsize=(12, 4.25))
ax0 = plt.subplot(121)
plt.plot(np.array(list(funfreq['relu'][2]), dtype='float')/ nsamples, '--', label='relu | 2 | 2')
plt.plot(np.array(list(funfreq['erf'][1]), dtype='float')/ nsamples, label='erf | 1 | 0')
plt.plot(np.array(list(funfreq['erf'][2]), dtype='float')/ nsamples, label='erf | 2 | 0')
plt.plot(np.array(list(funfreq['erf'][4]), dtype='float')/ nsamples, label='erf | 4 | 0')
plt.loglog()
plt.xlabel('rank')
plt.ylabel('probability')
ax0.text(-.15, -.15, '(a)', fontsize=24, transform=ax0.axes.transAxes)
plt.title(u'probability vs rank of $10^4$ random networks on $\{\pm1\}^7$')
plt.legend(title='$\phi$ | $\sigma_w^2$ | $\sigma_b^2$')
ax1 = plt.subplot(122)
plt.plot(prunesmall(s_relueigs['ck'][:, -1, 0, 0]), marker='x', linestyle='None', label=r'relu | 2 | 2 | 2')
for i in range(3):
plt.plot(prunesmall(s_erfeigs['ck'][:, 2, i, 0]), marker='o', linestyle='None',
label=r'erf | {} | 0 | 2'.format(2**i))
plt.plot(prunesmall(s_erfeigs['ck'][:, 32, -1, 0]), marker='*', linestyle='None', label=r'erf | 4 | 0 | 32')
plt.legend(title=r'$\phi$ | $\sigma_w^2$ | $\sigma_b^2$ | depth', loc='lower left')
plt.xlabel('degree $k$')
plt.ylabel(r'normalized eigenvalue $\tilde{\mu}_k$')
plt.title('erf networks lose simplicity bias for large $\sigma_w^2$ and depth')
plt.semilogy()
ax1.text(-.15, -.15, '(b)', fontsize=24, transform=ax1.axes.transAxes)
tight_layout(plt)
```
In **(a)**, we have reproduced the plot from above.
In **(b)** we have plotted the 8 unique (normalized) eigenvalues for the CK of each architecture given in the legend.
Immediately, we see that for relu and $\sigma_w^2 = \sigma_b^2 = 2$, the degree 0 eigenspace, corresponding to constant functions, accounts for more than $80\%$ of the variance.
This means that a typical infinite-width relu network of 2 layers is expected to be almost constant, and this should be even more true after we threshold the network to be a boolean function.
Indeed, this is exactly what we saw in [Section 2](#A-Quick-Replication-of-Valle-Perez-et-al.'s-Probability-vs-Rank-Experiment) above.
On the other hand, for erf and $\sigma_b = 0$, the even degree $\mu_k$s all vanish, and most of the variance comes from degree 1 components (i.e. linear functions).
This concentration in degree 1 also lessens as $\sigma_w^2$ increases.
But because this variance is spread across a dimension 7 eigenspace, we don't see duplicate function samples nearly as much as in the relu case.
As $\sigma_w$ increases, we also see the eigenvalues become more equally distributed, which corresponds to the flattening of the probability-vs-rank curve in (a).
Finally, we observe that a 32-layer erf network with $\sigma_w^2 = 4$ has all its nonzero eigenvalues (associated to odd degrees) all equal (see points marked by $*$ in (b)).
This means that its distribution is a "white noise" on the space of *odd* functions, and the distribution of boolean functions obtained by thresholding the Gaussian process samples is the *uniform distribution* on *odd* functions.
This is the complete lack of simplicity bias modulo the oddness constraint.
Therefore, the simplicity bias is *really far away* from being universal to all neural networks, and seems more like a particular (nice) property of relu.
However, from the spectral perspective, there is a weak sense in which a simplicity bias holds for all neural network-induced CKs and NTKs.
We prove the following theorem in the paper.
**Theorem (Weak Spectral Simplicity Bias).**
Let $K$ be the CK of an MLP (with any nonlinearity) on a boolean cube $\{\pm1\}^d$.
Then the eigenvalues $\mu_k, k = 0, \ldots, d,$ satisfy
\begin{equation}
\mu_0 \ge \mu_2 \ge \cdots \ge \mu_{2k} \ge \cdots,\quad
\mu_1 \ge \mu_3 \ge \cdots \ge \mu_{2k+1} \ge \cdots.
\label{eqn:weaksimplicitybias}
\end{equation}
Even though it's not true that the fraction of variance contributed by the degree $k$ eigenspace is decreasing with $k$, the eigenvalues themselves will be in a nonincreasing pattern across even degrees and across odd degrees.
Of course, as we have seen, this is a *very weak* sense of simplicity bias, as it doesn't prevent "white noise" behavior as in the case of erf CK with large $\sigma_w^2$ and large depth.
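In code, checking this interleaving property for any vector of eigenvalues $\mu_0, \ldots, \mu_d$ is one line per parity (a small helper written here for convenience, assuming `mu[k]` holds $\mu_k$):
```python
def satisfies_weak_simplicity_bias(mu):
    mu = np.asarray(mu)
    even_ok = np.all(np.diff(mu[0::2]) <= 0)   # mu_0 >= mu_2 >= mu_4 >= ...
    odd_ok = np.all(np.diff(mu[1::2]) <= 0)    # mu_1 >= mu_3 >= mu_5 >= ...
    return even_ok and odd_ok
```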
# Conclusion
We have clarified the extent of "simplicity bias" in neural networks from the angle of eigendecomposition of the associated infinite-width conjugate kernel.
While this bias does not seem universal, it could still be that architectures benefiting from a simplicity bias also generalize better.
This would require some knowledge of the training of neural networks though.
Coincidentally, recent advances in deep learning theory have revealed that a different kernel, the *Neural Tangent Kernel*, in fact governs the evolution of NN gradient descent dynamics.
We discuss training and generalization from a spectral analysis of the NTK in the notebook *[Neural Network Generalization](NeuralNetworkGeneralization.ipynb)*, and more thoroughly in our full paper [*A Fine-Grained Spectral Perspective on Neural Networks*](https://arxiv.org/abs/1907.10599).
# Appendix
## The $\{0, 1\}^d$ Boolean Cube vs the $\{\pm 1 \}^d$ Boolean Cube
[Valle-Perez et al. (2019)](http://arxiv.org/abs/1805.08522) actually did their experiments on the $\{0, 1\}^d$ boolean cube, whereas in the paper and the notebook here, we have focused on the $\{\pm 1\}^d$ boolean cube.
As datasets are typically centered before being fed into a neural network (for example, using `torchvision.transforms.Normalize`), $\{\pm 1\}^d$ is much more natural.
In comparison, using the $\{0, 1\}^d$ cube is equivalent to adding a bias in the input of a network and reducing the weight variance in the input layer, since any $x \in \{\pm 1\}^d$ corresponds to $\frac 1 2 (x + 1) \in \{0, 1\}^d$.
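Concretely, at the input layer this is just the identity
$$
W\,\tfrac{1}{2}(x + \mathbf{1}) + b \;=\; \tfrac{1}{2}W\,x + \left(\tfrac{1}{2}W\mathbf{1} + b\right),
$$
so feeding $\{0,1\}^d$ inputs is the same as feeding the corresponding $\{\pm1\}^d$ inputs through an input layer whose weights are halved (weight variance reduced by a factor of 4) and whose bias is shifted by $\tfrac{1}{2}W\mathbf{1}$.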
Nevertheless, here we verify that the main point of the paper and of the examples above still holds over the $\{0, 1\}^d$ cube.
Let's do the same experiments as at the beginning, now over $\{0, 1\}^d$.
```python
WIDTHSPEC = (7, 40, 40, 1)
nsamples = 10**4
funcounters = {'relu': {}}
funfreq = {'relu': {}}
# vb = sigma_b^2
vb = 2
# vw = \sigma_w^2
for vw in [2]:
# `funcounters` holds a dictionary (more precisely, a `Counter` object)
# of boolean function (as a string of length 2^7 = 128) to its frequency
funcounters['relu'][vw] = sample_boolean_fun(MyNet(nn.ReLU, WIDTHSPEC), vw, vb, nsamples, outformat='counter',
bit=[0, 1]) # this allows us to sample over the {0, 1} cube
# `funfreq` just has a list of frequencies
funfreq['relu'][vw] = OD(funcounters['relu'][vw].most_common()).values()
```
```python
nsamples = 10**4
funcounters['erf'] = {}
funfreq['erf'] = {}
vb = 0
for vw in [1, 2, 4]:
# `funcounters` holds a dictionary (more precisely, a `Counter` object) of boolean function (as a string) to its frequency
funcounters['erf'][vw] = sample_boolean_fun(MyNet(Erf, WIDTHSPEC), vw, vb, nsamples, outformat='counter',
bit=[0, 1]) # this allows us to sample over the {0, 1} cube
# `funfreq` just has a list of frequencies
funfreq['erf'][vw] = OD(funcounters['erf'][vw].most_common()).values()
```
Let's also try a 32-layer erf network with $\sigma_w^2 = 4$, which gave a "white noise" distribution over $\{\pm 1\}^d$.
```python
nsamples = 10**4
vw = 4
vb = 0
widthspec = [7] + [40] * 32 + [1]
funcounters['deeperf'] = sample_boolean_fun(MyNet(Erf, widthspec), vw, vb, nsamples, outformat='counter',
bit=[0, 1]) # this allows us to sample over the {0, 1} cube
funfreq['deeperf'] = OD(funcounters['deeperf'].most_common()).values()
```
Plot them as before...
```python
plt.plot(np.array(list(funfreq['relu'][2]), dtype='float')/ nsamples, '--', label='relu | 2 | 2 | 2')
plt.plot(np.array(list(funfreq['erf'][1]), dtype='float')/ nsamples, label='erf | 1 | 0 | 2')
plt.plot(np.array(list(funfreq['erf'][2]), dtype='float')/ nsamples, label='erf | 2 | 0 | 2')
plt.plot(np.array(list(funfreq['erf'][4]), dtype='float')/ nsamples, label='erf | 4 | 0 | 2')
plt.plot(np.array(list(funfreq['deeperf']), dtype='float')/ nsamples, label='erf | 4 | 0 | 32')
plt.loglog()
plt.xlabel('rank')
plt.ylabel('probability')
plt.title(u'probability vs rank of $10^4$ random networks on $\{0, 1\}^7$')
plt.legend(title='$\phi$ | $\sigma_w^2$ | $\sigma_b^2$ | depth')
```
Just like over the $\{\pm 1\}^d$ cube, the relu network biases significantly toward certain functions, but with erf, and with increasing $\sigma_w^2$, this lessens.
With depth 32 and $\sigma_w^2 = 4$, the boolean functions obtained from the erf network show essentially no bias at all.
| 8f341a87ea35b691f883525d837be8c836a1320c | 134,322 | ipynb | Jupyter Notebook | ClarifyingSimplicityBias.ipynb | thegregyang/NNspectra | 8c71181e93a46cdcabfafdf71ae5a58830cbb27d | [
"MIT"
]
| 46 | 2019-07-25T01:23:26.000Z | 2022-03-25T13:49:08.000Z | ClarifyingSimplicityBias.ipynb | saarthaks/NNspectra | 8c71181e93a46cdcabfafdf71ae5a58830cbb27d | [
"MIT"
]
| null | null | null | ClarifyingSimplicityBias.ipynb | saarthaks/NNspectra | 8c71181e93a46cdcabfafdf71ae5a58830cbb27d | [
"MIT"
]
| 9 | 2019-07-26T00:06:33.000Z | 2021-07-16T15:49:20.000Z | 155.106236 | 45,792 | 0.877771 | true | 6,073 | Qwen/Qwen-72B | 1. YES
2. YES | 0.793106 | 0.79053 | 0.626974 | __label__eng_Latn | 0.978886 | 0.295002 |
```python
import sympy as sym
import numpy
import matplotlib.pyplot as plt
import mpmath
def lagrange_series(x, N):
psi = []
# h = Rational(1, N)
h = 1.0/N
points = [i*h for i in range(N+1)]
for i in range(len(points)):
p = 1
for k in range(len(points)):
if k != i:
p *= (x - points[k])/(points[i] - points[k])
psi.append(p)
psi = psi[1:-1]
return psi
def analytical():
eps_values = [1.0, 0.1, 0.01, 0.001]
for eps in eps_values:
x = numpy.arange(Omega[0], Omega[1], 1/((N+1)*100.0))
ue = (numpy.exp(-x/eps) - 1)/ (numpy.exp(-1/eps) - 1)
print((len(x), len(ue)))
plt.plot(x, ue)
plt.legend(["$\epsilon$=%.1e" % eps for eps in eps_values],
loc="lower right")
plt.title("Analytical Solution")
plt.show()
def bernstein_series(x, N):
# FIXME: check if a normalization constant is common in the definition
# advantage is that the basis is always positive
psi = []
# for k in range(0,N+1):
for k in range(1,N): # bc elsewhere
psi_k = x**k*(1-x)**(N-k)
psi.append(psi_k)
return psi
def sin_series(x, N):
# FIXME: do not satisfy bc
psi = []
for k in range(1,N):
psi_k = sym.sin(sym.pi*k*x)
psi.append(psi_k)
return psi
def series(x, series_type, N):
if series_type=="sin" : return sin_series(x, N)
elif series_type=="Bernstein" : return bernstein_series(x, N)
elif series_type=="Lagrange" : return lagrange_series(x, N)
else: print("series type unknown ") # sys.exit(0)
def epsilon_experiment(N, series_type, Omega,
eps_values = [1.0, 0.1, 0.01, 0.001]):
# x is global, symbol or array
psi = series(x, series_type, N)
f = 1
for eps in eps_values:
A = sym.zeros(N-1, N-1)
        b = sym.zeros(N-1, 1)  # right-hand side as a column vector
for i in range(0, N-1):
integrand = f*psi[i]
integrand = sym.lambdify([x], integrand, 'mpmath')
b[i,0] = mpmath.quad(integrand, [Omega[0], Omega[1]])
for j in range(0, N-1):
integrand = eps*sym.diff(psi[i], x)*\
sym.diff(psi[j], x) - sym.diff(psi[i], x)*psi[j]
integrand = sym.lambdify([x], integrand, 'mpmath')
A[i,j] = mpmath.quad(integrand, [Omega[0], Omega[1]])
c = A.LUsolve(b)
u = sum(c[r,0]*psi[r] for r in range(N-1)) + x
U = sym.lambdify([x], u, modules='numpy')
x_ = numpy.arange(Omega[0], Omega[1], 1/((N+1)*100.0))
U_ = U(x_)
plt.plot(x_, U_)
plt.legend(["$\epsilon$=%.1e" % eps for eps in eps_values],
loc="upper left")
plt.title(series_type)
plt.show()
if __name__ == '__main__':
import sys
if len(sys.argv) > 1:
series_type = sys.argv[1]
else:
series_type = "Bernstein"
if len(sys.argv) > 2:
N = int(sys.argv[2])
else:
N = 8
#series_type = "sin"
#series_type = "Lagrange"
Omega = [0, 1]
x = sym.Symbol("x")
analytical()
epsilon_experiment(N, series_type, Omega)
```
```python
```
| 5297782210a51e33332dc9e51eedbb8537e9ab19 | 4,807 | ipynb | Jupyter Notebook | Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/SRC/05_CONV_DIFF_GLOBAL.ipynb | okara83/Becoming-a-Data-Scientist | f09a15f7f239b96b77a2f080c403b2f3e95c9650 | [
"MIT"
]
| null | null | null | Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/SRC/05_CONV_DIFF_GLOBAL.ipynb | okara83/Becoming-a-Data-Scientist | f09a15f7f239b96b77a2f080c403b2f3e95c9650 | [
"MIT"
]
| null | null | null | Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/SRC/05_CONV_DIFF_GLOBAL.ipynb | okara83/Becoming-a-Data-Scientist | f09a15f7f239b96b77a2f080c403b2f3e95c9650 | [
"MIT"
]
| 2 | 2022-02-09T15:41:33.000Z | 2022-02-11T07:47:40.000Z | 32.046667 | 83 | 0.444976 | true | 990 | Qwen/Qwen-72B | 1. YES
2. YES | 0.924142 | 0.785309 | 0.725736 | __label__eng_Latn | 0.366511 | 0.524461 |
# Vector Space Model
Adapted from [this](https://de.dariah.eu/tatom/working_with_text.html) blog post, written by [Allen Riddell](http://www.ariddell.org/).
One of the benefits of the DTM is that it allows us to think about text within the bounds of geometry, which then allows us to think about the "distance" between texts. Today's tutorial will explore how we might use distance measures in our text analysis workflow, and toward what end.
### Learning Goals
* Gain an intuition about how we might think about, and measure, the distance between texts
* Learn how to measure distances using `scikit-learn`
* Learn how to visualize distances in a few ways, and how that might help us in our text analysis project
* Learn more about the flexibilities and range of tools in `scikit-learn`
### Outline
<ol start="0">
<li>[Vectorizing our text: The Sparse DTM to Numpy Array](#vector)</li>
<li>[Comparing Texts](#compare)</li>
<li>[Visualizing Distance](#visual)</li>
<li>[Clustering Text based on Distance Metrics (if time)](#cluster)</li>
<li>[K-Means Clustering (if time)](#kmeans)</li>
</ol>
### Key Terms
* Euclidean Distance
* In mathematics, the Euclidean distance or Euclidean metric is the "ordinary" (i.e. straight-line) distance between two points in Euclidean space. With this distance, Euclidean space becomes a metric space.
* Cosine Similarity
* Cosine similarity is a measure of similarity between two non-zero vectors of an inner product space that measures the cosine of the angle between them. The cosine of 0° is 1, and it is less than 1 for any other angle.
* Multidimensional Scaling
* Multidimensional scaling (MDS) is a means of visualizing the level of similarity of individual cases of a dataset. It refers to a set of related ordination techniques used in information visualization, in particular to display the information contained in a distance matrix.
* Dendrogram
* A dendrogram (from Greek dendro "tree" and gramma "drawing") is a tree diagram frequently used to illustrate the arrangement of the clusters produced by hierarchical clustering.
* K-Means Clustering
* k-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, serving as a prototype of the cluster.
<a id='vector'></a>
### 0. From DTM to Numpy Array
First, let's create our DTM, and then turn it from a sparse matrix to a regular (dense) array.
We'll use a different input option than we have, an option called `filename`.
```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
filenames = ['../Data/Alcott_GarlandForGirls.txt',
'../Data/Austen_PrideAndPrejudice.txt',
'../Data/Machiavelli_ThePrince.txt',
'../Data/Marx_CommunistManifesto.txt']
vectorizer = CountVectorizer(input='filename', encoding='utf-8',stop_words='english') # filename input: the vectorizer reads the files itself, so we don't have to
dtm = vectorizer.fit_transform(filenames) # a sparse matrix
vocab = vectorizer.get_feature_names() # a list
dtm
```
```python
dtm = dtm.toarray() # convert to a regular, dense array
vocab = np.array(vocab)
dtm
```
<a id='compare'></a>
### 1. Comparing texts
Arranging our texts in a document-term matrix makes available a range of exploratory procedures. For example, calculating a measure of similarity between texts becomes simple. Since each row of the document-term matrix is a sequence of a novel’s word frequencies, it is possible to put mathematical notions of similarity (or distance) between sequences of numbers in service of calculating the similarity (or distance) between any two novels. One frequently used measure of distance between vectors (a measure easily converted into a measure of similarity) is Euclidean distance. The Euclidean distance between two vectors in the plane should be familiar from geometry: it is the length of the straight line segment joining the two points, i.e. the hypotenuse of the right triangle formed by their coordinate differences. For instance, consider the Euclidean distance between the vectors \begin{align}
\overrightarrow{x}=(1,3) \quad \text{and} \quad \overrightarrow{y}=(4,2) \end{align}
the Euclidean distance can be calculated as follows:
\begin{align}
\sqrt{(1-4)^2 + (3-2)^2} = \sqrt{10}
\end{align}
>Note
Measures of distance can be converted into measures of similarity. If your measures of distance are all between zero and one, then a measure of similarity could be one minus the distance. (The inverse of the distance would also serve as a measure of similarity.)
Distance between two vectors:
>Note
More generally, given two vectors \begin{align} \overrightarrow{x} \quad \text{and} \quad \overrightarrow{y}\end{align}
>in *p*-dimensional space, the Euclidean distance between the two vectors is given by
>\begin{align}
||\overrightarrow{x} - \overrightarrow{y}|| = \sqrt{\sum_{i=1}^{p} (x_i - y_i)^2}
\end{align}
This concept of distance is not restricted to two dimensions. For example, it is not difficult to imagine the figure above translated into three dimensions. We can also persuade ourselves that the measure of distance extends to an arbitrary number of dimensions; for any two matched components in a pair of vectors (such as x<sub>2</sub> and y<sub>2</sub>), differences increase the distance.
Since two novels in our corpus now have an expression as vectors, we can calculate the Euclidean distance between them. We can do this by hand or we can avail ourselves of the `scikit-learn` function `euclidean_distances`.
A challenge for you: calculate Euclidean distance of sample texts by hand.
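To check a hand calculation, here is a minimal sketch (using the dense `dtm` array built above) that computes the distance between the first two texts straight from the definition:
```python
import numpy as np

# Euclidean distance between the first two documents:
# square root of the sum of squared coordinate differences.
diff = dtm[0] - dtm[1]
print(np.sqrt(np.sum(diff ** 2)))
```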
```python
from sklearn.metrics.pairwise import euclidean_distances
euc_dist = euclidean_distances(dtm)
print(filenames[1])
print(filenames[2])
print("\nDistance between Austen and Machiavelli:")
# the distance between Austen and Machiavelli
print(euc_dist[1, 2])
# which is greater than the distance between *Austen* and *Alcott* (index 0)
print("\nDistance between Austen and Machiavelli is greater than the distance between Austen and Alcott:")
euc_dist[1, 2] > euc_dist[0, 1]
```
And if we want to use a measure of distance that takes into consideration the length of the novels (an excellent idea), we can calculate the cosine similarity by importing `sklearn.metrics.pairwise.cosine_similarity` and use it in place of `euclidean_distances`.
Cosine similarity measures the angle between two vectors:
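For two vectors $\vec{x}$ and $\vec{y}$ it is defined as
\begin{align}
\cos(\vec{x}, \vec{y}) = \frac{\vec{x} \cdot \vec{y}}{\lVert\vec{x}\rVert \, \lVert\vec{y}\rVert} = \frac{\sum_{i=1}^{p} x_i y_i}{\sqrt{\sum_{i=1}^{p} x_i^2} \, \sqrt{\sum_{i=1}^{p} y_i^2}}
\end{align}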
Question: How does length factor into these two equations?
Keep in mind that cosine similarity is a measure of similarity (rather than distance) that ranges between 0 and 1 (as it is the cosine of the angle between the two vectors). In order to get a measure of distance (or dissimilarity), we need to “flip” the measure so that a larger angle receives a larger value. The distance measure derived from cosine similarity is therefore one minus the cosine similarity between two vectors.
```python
from sklearn.metrics.pairwise import cosine_similarity
cos_dist = 1 - cosine_similarity(dtm)
np.round(cos_dist, 2)
```
```python
##EX:
## 1. Print the cosine distance between Austen and Machiavelli
## 2. Is this distance greater or less than the distance between Austen and Alcott?
print(cos_dist[1, 2])
# which is greater than the distance between *Austen* and
# *Alcott* (index 0)
cos_dist[1, 2] > cos_dist[1, 0]
```
<a id='visual'></a>
### 2. Visualizing distances
It is often desirable to visualize the pairwise distances between our texts. A general approach to visualizing distances is to assign a point in a plane to each text, making sure that the distance between points is proportional to the pairwise distances we calculated. This kind of visualization is common enough that it has a name, “multidimensional scaling” (MDS), and there is a family of functions for it in `scikit-learn`.
```python
import os # for os.path.basename
import matplotlib.pyplot as plt
from sklearn.manifold import MDS
# two components as we're plotting points in a two-dimensional plane
# "precomputed" because we provide a distance matrix
# we will also specify `random_state` so the plot is reproducible.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=1)
pos = mds.fit_transform(euc_dist) # shape (n_components, n_samples)
xs, ys = pos[:, 0], pos[:, 1]
# short versions of filenames:
# convert '../Data/Austen_PrideAndPrejudice.txt' to 'Austen_PrideAndPrejudice'
names = [os.path.basename(fn).replace('.txt', '') for fn in filenames]
for x, y, name in zip(xs, ys, names):
plt.scatter(x, y)
plt.text(x, y, name)
plt.show()
```
<a id='cluster'></a>
### 3. Clustering texts based on distance
Clustering texts into discrete groups of similar texts is often a useful exploratory step. For example, a researcher may be wondering if certain textual features partition a collection of texts by author or by genre. Pairwise distances alone do not produce any kind of classification. To put a set of distance measurements to work in classification requires additional assumptions, such as a definition of a group or cluster.
The ideas underlying the transition from distances to clusters are, for the most part, common sense. Any clustering of texts should result in texts that are closer to each other (in the distance matrix) residing in the same cluster. There are many ways of satisfying this requirement; there is no unique clustering based on distances that is the “best”. One widely used strategy is called Ward’s method. Rather than producing a single clustering, Ward’s method produces a hierarchy of clusterings, as we will see in a moment. All that Ward’s method requires is a set of pairwise distance measurements–such as those we calculated a moment ago. Ward’s method produces a hierarchical clustering of texts via the following procedure:
1. Start with each text in its own cluster
2. Until only a single cluster remains,
* Find the closest clusters and merge them. The distance between two clusters is the change in the sum of squared distances when they are merged.
3. Return a tree containing a record of cluster-merges.
The function [scipy.cluster.hierarchy.ward](https://docs.scipy.org/doc/scipy/reference/cluster.hierarchy.html) performs this algorithm and returns a tree of cluster-merges. The hierarchy of clusters can be visualized using `scipy.cluster.hierarchy.dendrogram`.
```python
from scipy.cluster.hierarchy import ward, dendrogram
linkage_matrix = ward(euc_dist)
dendrogram(linkage_matrix, orientation="right", labels=names)
plt.tight_layout() # fixes margins
plt.show()
```
<a id='kmeans'></a>
### 4. K-Means Clustering
From the dendrogram above, we might expect these four novels to have two clusters: Austen and Alcott, and Machiavelli and Marx.
Let's see if this is the case using k-means clustering, which clusters on Euclidean distance.
```python
from sklearn.cluster import KMeans
km = KMeans(n_clusters=2, random_state=0)
clusters = km.fit(dtm)
clusters.labels_
```
```python
list(zip(filenames, clusters.labels_))
```
```python
print("Top terms per cluster:")
order_centroids = clusters.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer.get_feature_names()
for i in range(2):
print("Cluster %d:" % i,)
for ind in order_centroids[i, :20]:
print(' %s' % terms[ind],)
print()
```
<a id='exercise'></a>
### Exercise:
1. Find the Euclidean distance and cosine distance for the 5 sentences below. Do the distance measures make sense?
2. Visualize the potential clusters using a dendrogram. Do the clusters make sense?
3. How might we make the clusters better?
```python
text0 = 'I like to eat broccoli and bananas.'
text1 = 'I ate a banana and spinach smoothie for breakfast.'
text2 = 'Chinchillas and kittens are cute.'
text3 = 'My sister adopted a kitten yesterday.'
text4 = 'Look at this cute hamster munching on a piece of broccoli.'
text_list = [text0, text1, text2, text3, text4]
#create vector for text "names"
names = ['eat', 'smoothie', 'chinchillas', 'adopted', 'munching']
```
```python
#solution
ex_vectorizer = CountVectorizer(stop_words='english')
ex_dtm = ex_vectorizer.fit_transform(text_list) # a sparse matrix
vocab = ex_vectorizer.get_feature_names() # a list
ex_dtm = ex_dtm.toarray()
ex_dtm
```
```python
ex_euc_dist = euclidean_distances(ex_dtm)
print(text_list[0])
print(text_list[1])
print(text_list[2])
print(ex_euc_dist[0, 2])
ex_euc_dist[0, 2] > ex_euc_dist[0, 1]
```
```python
ex_cos_dist = 1 - cosine_similarity(ex_dtm)
print(np.round(ex_cos_dist, 2))
print(ex_cos_dist[0,2])
ex_cos_dist[0,2] > ex_cos_dist[0,1]
```
```python
linkage_matrix = ward(ex_euc_dist)
dendrogram(linkage_matrix, orientation="right", labels=names)
plt.tight_layout() # fixes margins
plt.show()
```
```python
from nltk.stem.porter import PorterStemmer
import re
porter_stemmer = PorterStemmer()
#remove punctuation
text_list = [re.sub("[,.]", "", sentence) for sentence in text_list]
#stem words
text_list_stemmed = [' '.join([porter_stemmer.stem(word) for word in sentence.split(" ")]) for sentence in text_list]
text_list_stemmed
```
```python
dtm_stem = ex_vectorizer.fit_transform(text_list_stemmed)
```
```python
ex_dist = 1 - cosine_similarity(dtm_stem)
print(np.round(ex_dist, 2))
print(ex_dist[0,2])
print(ex_dist[0,1])
ex_dist[0,2] > ex_dist[0,1]
```
```python
linkage_matrix = ward(ex_dist)
dendrogram(linkage_matrix, orientation="right", labels=names)
plt.tight_layout() # fixes margins
plt.show()
```
```python
print(text_list[0])
print(text_list[1])
print(text_list[2])
print(text_list[3])
print(text_list[4])
```
| 3be4b964bd1559c1e17207ea7439dd29e4cb5be4 | 20,106 | ipynb | Jupyter Notebook | 05-TextExploration/01-VectorSpaceModel_ExerciseSolutions.ipynb | lknelson/text-analysis-2017 | a562765eaa6b7ad35a8b447931945c462c122e9d | [
"BSD-3-Clause"
]
| 29 | 2017-05-17T17:06:28.000Z | 2021-08-24T15:25:00.000Z | 05-TextExploration/01-VectorSpaceModel_ExerciseSolutions.ipynb | lknelson/text-analysis-2017 | a562765eaa6b7ad35a8b447931945c462c122e9d | [
"BSD-3-Clause"
]
| null | null | null | 05-TextExploration/01-VectorSpaceModel_ExerciseSolutions.ipynb | lknelson/text-analysis-2017 | a562765eaa6b7ad35a8b447931945c462c122e9d | [
"BSD-3-Clause"
]
| 7 | 2017-05-17T16:54:47.000Z | 2021-07-25T19:34:25.000Z | 35.211909 | 825 | 0.623446 | true | 3,337 | Qwen/Qwen-72B | 1. YES
2. YES | 0.891811 | 0.815232 | 0.727033 | __label__eng_Latn | 0.988611 | 0.527474 |
```python
from sympy import *
init_printing(use_unicode=True)
from sympy.codegen.ast import Assignment
```
```python
C = Matrix( symarray('C', (2,2)) )
R = Matrix( symarray('R', (2,2)) )
n = Matrix( symarray('n', (2)) )
t = Matrix( symarray('t', (2)) )
```
```python
R[0,0]=t[0]
R[1,0]=t[1]
R[0,1]=n[0]
R[1,1]=n[1]
```
```python
C
```
```python
simplify(transpose(R)*C*R)
```
```python
```
| e3dcaded367878f991763d0f737ac114eb33ea6c | 14,690 | ipynb | Jupyter Notebook | PythonCodes/Utilities/Sympy/.ipynb_checkpoints/RotationMatrix-checkpoint.ipynb | Nicolucas/C-Scripts | 2608df5c2e635ad16f422877ff440af69f98f960 | [
"MIT"
]
| 1 | 2020-02-25T08:05:13.000Z | 2020-02-25T08:05:13.000Z | PythonCodes/Utilities/Sympy/.ipynb_checkpoints/RotationMatrix-checkpoint.ipynb | Nicolucas/C-Scripts | 2608df5c2e635ad16f422877ff440af69f98f960 | [
"MIT"
]
| null | null | null | PythonCodes/Utilities/Sympy/.ipynb_checkpoints/RotationMatrix-checkpoint.ipynb | Nicolucas/C-Scripts | 2608df5c2e635ad16f422877ff440af69f98f960 | [
"MIT"
]
| null | null | null | 94.774194 | 9,176 | 0.834173 | true | 158 | Qwen/Qwen-72B | 1. YES
2. YES | 0.798187 | 0.629775 | 0.502678 | __label__eng_Latn | 0.297257 | 0.006218 |
# Data Visualization with PCA
## The Challenges of High-dimensional Data
Once we decide to measure more than three features per input vector, it can become challenging to understand how a network is learning to solve such a problem since we can no longer generate a plot or visualization of the feature vector space to which the network is being exposed. One-, two-, or three-dimensional vectors are easy enough to plot, and we could even color the corresponding points based on their class assignments to see how groups of similar items are in similar parts of the vector space. If we can see straight boundaries (lines or planes) between the differently colored point clouds, then we would understand why a linear (i.e. single-layer) network might be capable of producing a reasonable solution to the problem at hand. Also, if we see regions where a single line/plane will not suffice for separating the different classes, then we might have reason to suspect that a linear network will fail to solve the problem, and probably attempt to use a multilayer network instead.
Given the advantages of visualizing such relationships, a common trick when exploring high-dimensional data sets is to **project** the high-dimensional data vectors onto a low-dimensional space (two or three dimensions) where we can see if such relationships exist. Such projection is _risky_ in the sense that we will be throwing information away to perform the projection (similar to how neural networks throw information away when performing regression or classification), and we may no longer see some important relationships in the low-dimensional projection of the data. However, it is sometimes possible that the projection **will** preserve relationships between the original data vectors that are important for making an accurate classification or regression while also enabling visualization.
In this assignment, we will explore a commonly-used **linear** projection known as Principal Component Analysis (PCA) to visualize some data sets and see how such projections might be useful for understanding why our single-layer networks were able to effectively learn the functions that these data sets represent. Since PCA is a linear method, it will be limited in its ability to produce useful _projections_ in a manner roughly analogous to how single-layer neural networks are limited in their ability to solve _linearly separable_ problems. Since we will be producing low-dimensional projections, relative to the original dimensionality of the vectors, this technique is also a form of **dimensionality reduction** in that the new two- or three- dimensional vectors that we produce will still share some of the relationships between one another that the higher-dimensional vectors possessed. This might even mean that we could use these new vectors as inputs for a neural network instead of the original, high-dimensional vectors. This could lead to a significant reduction in the number of neural units and connection weights in a network, and hence reduce its computation time. Also, some of the original features (or combinations of features) may just not be very useful for the problem at hand, and removing them would allow the network to focus on only the more relevant features in the data. In some cases (albeit rarely with PCA), this can even lead to superior performance of the trained neural network overall. While we will focus on two-dimensional projections in this assignment, PCA can be used to reduce the dimensionality of a given set of input vectors to any chosen number less than or equal to the original dimensionality. The smaller the dimensionality of the projection: the more information is being projected away. Thus, two-dimensional projections are often too low to be of any real use on many large data sets. However, some data sets might be reduced from millions of features to thousands, or thousands to hundreds, while still preserving the vast majority of the information that they encode. Such large reductions can make certain problems far more tractable to learn than they would otherwise be.
## Gathering the Iris data set
Let's start out by grabbing some data that we are already a little familiar with, and see if we can use PCA to better understand why a linear network can learn to solve this function effectively.
Let's start by importing some tools for the job...
```python
# For reading data sets from the web.
import pandas
# For lots of great things.
import numpy as np
# To make our plots.
import matplotlib.pyplot as plt
%matplotlib inline
# Because sympy and LaTeX make
# everything look wonderful!
import sympy as sp
sp.init_printing(use_latex=True)
from IPython.display import display
# We will use this to check our implementation...
from sklearn.decomposition import PCA
# We will grab another data set using Keras
# after we finish up with Iris...
import tensorflow.keras as keras
```
Now let's grab the Iris data set and start projecting!
```python
iris_data = np.array(
pandas.read_table(
"https://www.cs.mtsu.edu/~jphillips/courses/CSCI4850-5850/public/iris-data.txt",
delim_whitespace=True,
header=None))
```
```python
# Remember the data is composed of feature
# vectors AND class labels...
X = iris_data[:,0:4] # 0,1,2,3
Y = iris_data[:,4] # 4
# Pretty-print with display()!
display(X.shape)
display(Y.shape)
display(sp.Matrix(np.unique(Y)).T)
```
The Iris data set consists of 150 four-dimensional feature vectors, each one assigned to one of three class labels (0,1,2) corresponding to an iris species.
We could potentially use **four** dimensions to plot and understand this data. Namely, we could make 3D plots using the first three dimensions, and sort the points along the fourth dimension so that we could play them in-sequence like a movie. However, that can still be tricky to visualize since we may miss some relationships between frames in our "movie" if they are far apart in time. Potentially more useful would be to plot the first three dimensions in one plot, then the last three dimensions in another, where the two plots now share the middle two dimensions. Still, if relationships between the first and fourth dimensions were the most important, we might not see them very clearly using this presentation.
Let's see if a PCA projection down to just two dimensions would be more effective.
To do this we will be using some linear algebra that we have already seen before, but we need to process the data just a little before we can use those tools.
First, we will _mean-center_ the values of _each_ feature in the data. That is, we will find the _mean_ value of the first feature across all examples in the data set, and then subtract the mean value from this feature for all examples. We will perform the same operation for all four features as well, so that each feature will have its mean value effectively set to zero. You can think of this as moving the entire set of data vectors so that the average of the data vectors now lies at the value zero in all four dimensions. In other words, it's a _translation_ operation on the original data. The relative distances between all of the points is maintained, so all of the relationships between the data vectors important for classification is maintained as well.
We will use a custom function for this, that we apply to each of the columns using the `apply_along_axis()` function:
```python
# Mean center a vector
def mean_center(x):
return x - np.mean(x)
# Call this function for each column in the data (move along axis 0 or the rows)
Xcentered = np.apply_along_axis(mean_center,0,X)
```
Now that we have a mean-centered data matrix, we will use singular value decomposition to extract the left-singular vectors and singular-values of this matrix.
```python
U,S,V = np.linalg.svd(Xcentered,full_matrices=True)
# Percent variance accounted for
plt.plot(100.0*S/np.sum(S))
plt.ylabel('% Var')
plt.xlabel('Singular Value')
plt.show()
```
Each of the singular values indicates some amount of variance present in the original data set that is captured by the corresponding left-singular vector (column of U). They are sorted in order from largest to smallest when returned by the `svd()` function so that you can see that the largest amount of variance is captured by the first left-singular vector, the second most variance by the second singular-vector, and so on. Often, the sum of the singular values is calculated to obtain the _total_ variance in the data, and then used to normalize the variance to obtain the percentage of variance captured by each left-singular vector. Given the data above, it is clear that the first two vectors alone will account for over 85% of the total variance in the data, and should form a reasonable projection for the data set.
```python
# Variance accounted for in the first two principal components
100.0*(S[0]+S[1])/np.sum(S)
```
The singular-values (S) are mapped into a rectangular, diagonal matrix which is then multiplied by the left-singular vectors (U). The vectors in U are all unit length, so this operation effectively scales the length of the first 4 vectors by each of the corresponding singular values. In the end, these operations will produce a rotated version of our original data set where the major orthogonal directions capturing the largest variance in the data lie along the principal axes (x and y for a 2D plot). Each of these so-called principal components is a linear combination of our original feature vectors, and allows us to produce a projection onto a smaller set of these components by simply throwing away vectors associated with small singular values. Thus, while we obtain all 4 of the principal components for the iris data set, we will throw away the last two as they capture less than 15% of the variance in the data.
```python
# Scale the singular vectors, resulting in a rotated form of our mean-centered data
D = np.zeros([X.shape[0],X.shape[1]])
np.fill_diagonal(D,S)
Xrotated = np.dot(U,D)
# Extract just the first two principal components!
PCs = Xrotated[:,0:2]
PCs.shape
```
Now that we have projected our data set into a low-dimensional space where we can better visualize it, let's make a plot to see how it looks. We will be careful to color the points by the associated class label.
```python
# The x and y values come from the two
# Principal Components and the colors for
# each point are selected based on the
# corresponding iris species for each point...
plt.scatter(PCs[:,0],PCs[:,1],
color=[['red','green','blue'][i] for i in Y.astype(int)])
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.show()
```
You can see in the above plot that the data forms three fairly distinct groups of points. The red group on the left is easily linearly separable from the other two groups. Even the green and blue groups are fairly distinct. While you can almost draw a straight line between them, it appears that a few data points from each of these groups would lie on the opposite side, and not allow for perfect classification with a linear network. Nevertheless, we can now see why a linear network might work well on this data, and (perhaps more importantly) that an additional feature measurement may be needed to completely separate the green and blue species.
We can perform the same analysis (just like we performed PCA above from scratch with numpy) by using the SciKitLearn library:
```python
pca = PCA(2)
PCs = pca.fit_transform(X)[:,0:2]
```
```python
plt.scatter(PCs[:,0],PCs[:,1],
color=[['red','green','blue'][i] for i in Y.astype(int)])
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.show()
```
The result is the same here, but you will sometimes notice that this data has been "flipped" or "mirrored" along the origin of one (or both) of the principal components. This commonly occurs since there is an equally valid rotation of the data 180 degrees along any axis that still preserves all of the variance and internal relationships between the data points. Either way, the same amount of variance is accounted for on each component, and decision boundaries can still be explored.
Let's perform a similar analysis with a much larger data set.
## Exploring MNIST with PCA
```python
# Load the MNIST data set using Keras
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Subsample (there's a lot of data here!)
X = x_train[range(0,x_train.shape[0],10),:,:]
Y = y_train[range(0,y_train.shape[0],10)]
display(X.shape)
display(Y.shape)
```
We will only look at the training data for this data set for now (the test data is of the same kind so will not be needed for our purposes in this assignment). The training set consists of 60,000 images each 28x28 pixels in size. However, we have selected just 6000 of those images for this example to make the analysis more tractable (60,000 would take a long time and lots of memory to compute the PCs). Each pixel is represented by an integer intensity value between 0 and 255. For the sake of examining what these images look like, let's scale those intensities to be floating point values in the range [0,1]:
```python
X = X.astype('float32') / 255.0
```
Once normalized, the images can be easily plotted as a kind of "heatmap" where black pixels are represented by low intensities and white pixels by high intensities. The `imshow()` function will map high intensities to dark colors (blue) and low intensities to light colors (yellow). Intermediate values are colored green with more or less blue/yellow depending on which intensity they favor more.
Let's take a look at the first five images in our subset of the MNIST data...
```python
# Plot some of the images
for i in range(5):
plt.figure()
plt.imshow(X[i,:,:])
plt.show()
```
```python
# What are their corresponding class labels?
display(sp.Matrix(Y[0:5]))
```
You can see that each of these images corresponds to a hand-written digit, each labeled with the appropriate number in the category labels, Y.
We will now **flatten** these images so that we can perform principal component analysis, treating each pixel as a single measurement. The numpy function `reshape()` allows us to do this by providing a new shape for the data that has the same _total_ number of entries. Here we convert each 28x28 matrix (image) into a 784-element vector (since 28x28=784). PCA can still detect relationships between the pixel values even when they are not arranged in a matrix form, so we treat each image as a vector.
```python
X = X.reshape(X.shape[0],X.shape[1]*X.shape[2])
X.shape
```
Each image has now been encoded as a feature vector in a 784-dimensional space. Even though the pixel intensities make sense to us when visualized as a 2D image, newly initialized neural networks only experience the vector space for the first time, and have to learn such relationships from scratch. However, let's see if PCA can provide some insight on the difficulty of this task.
We will apply the same approach as before:
1. mean-centering the features
2. calculating the SVD
3. examining the singular values
4. scaling the left-singular vectors
5. plotting the two-dimensional projection
NOTE: It may take a minute or so to compute the SVD for a data set of this size, so be patient on the steps below.
```python
# Mean-centering
Xcentered = np.apply_along_axis(mean_center,0,X)
# SVD
U,S,V = np.linalg.svd(Xcentered,full_matrices=True)
# Percent variance accounted for
plt.plot(100.0*S/np.sum(S))
plt.ylabel('% Var')
plt.xlabel('Singular Value')
plt.show()
```
```python
# Variance accounted for in the first two principal components
100.0*(S[0]+S[1])/np.sum(S)
```
You can see that the variance accounted for dips sharply (which is good, because having a few PCs which capture a lot of variance is a useful thing). However, the variance captured by the first __two__ components together is _less than 5% of the total variance_ in the data set, so PCA might __not__ be so useful for visualization.
However, let's take a quick look at one more thing before moving on:
```python
# Variance accounted for in the first 340 principal components
display(100.0*(np.sum(S[0:340]))/np.sum(S))
# Reduction?
display(100*340/len(S))
```
Notice that 90% of the **total** variance in the data can be captured using 340 principal components. 340 is only just over 43% of the original dimensionality, so 90% of the data set can be effectively represented using a space less than half of the original in size. Thus, PCA can also be thought of as a lossy linear data compression technique. While we can't visualize a 340-dimensional space, a much smaller network would be required to process this data, possibly without sacrificing generalization accuracy (but that's for a later time). A quick sketch of the compression idea follows; after that, we will look at just the first two principal components.
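Here is a minimal sketch of that compression view, reusing the `U`, `S`, and `V` arrays computed above: keep only the first $k$ components, reconstruct an approximation of the mean-centered data, and add the feature means back in (the variable names below are just for illustration).
```python
k = 340
# Truncated reconstruction from the first k singular values/vectors.
X_approx = U[:, :k] @ np.diag(S[:k]) @ V[:k, :] + X.mean(axis=0)

# Compare an original digit with its compressed-and-reconstructed version.
plt.imshow(X[0].reshape(28, 28))
plt.show()
plt.imshow(X_approx[0].reshape(28, 28))
plt.show()
```
Now let's extract the first two principal components: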
```python
D = np.zeros([X.shape[0],X.shape[1]])
np.fill_diagonal(D,S)
Xrotated = np.dot(U,D)
# First two principal components!
PCs = Xrotated[:,0:2]
PCs.shape
```
```python
# Need a lot of colors for this one!
plt.scatter(PCs[:,0],PCs[:,1],
color=[['red','green','blue',
'cyan','magenta','yellow',
'black','brown','grey',
'purple'][i] for i in Y.astype(int)])
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.show()
```
Note that the clutter of 6000 points is pretty bad in this space, and reducing the sampling may help in some ways. Let's try that now:
```python
plt.scatter(PCs[range(0,6000,10),0],PCs[range(0,6000,10),1],
color=[['red','green','blue',
'cyan','magenta','yellow',
'black','brown','grey',
'purple'][i] for i in Y[range(0,6000,10)].astype(int)])
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.show()
```
Even with the subsampling, some interesting divisions between classes can be found, but others are not so clear. For example, the red points now correspond to images of zeros and green dots now correspond to images of ones, and these seem to lie on opposite ends of the first principal component. Also, the brown sevens seem very distinct from the cyan threes. However, others are more cluttered, like the purple nines, brown sevens, and magenta fours, though this makes some sense because these digits share some common featural similarities.
Because PCA is a **linear** technique, it is somewhat limited in its ability to capture some of the **non-linear** relationships between the data vectors. Also, two dimensions, while useful for visualization, is still often too low to capture all of the relevant relationships that allow for categorization. The 2D representation can be thought of as capturing the **lower-bound on the distances between the data vectors** since *adding more dimensions may cause points to move farther apart, but they could never cause them to move closer together*.
Even with its drawbacks, PCA is a useful technique for quickly creating projections of high-dimensional data onto lower-dimensional spaces for exploratory analysis and lossy compression. There is even a neural network architecture that we will look at briefly later in the semester which allows a single-layer network to learn to perform PCA on a set of data by using a specific learning rule (Oja's Rule). However, because we can solve linear problems like PCA in closed form just like we did above, it is atypical to utilize Oja's Rule since it requires training over many epochs and is therefore slower than closed-form solutions.
| 5888f76cb559ac6429568c511e7ddcfd318fd9c7 | 317,776 | ipynb | Jupyter Notebook | Introductions/Visualization with PCA.ipynb | CSCI4850/notebook-examples | 8846792d8acc6b619c22f5d8bc7a4b3a446a8296 | [
"MIT"
]
| 5 | 2018-03-28T18:06:21.000Z | 2021-11-11T19:50:49.000Z | Introductions/Visualization with PCA.ipynb | CSCI4850/notebook-examples | 8846792d8acc6b619c22f5d8bc7a4b3a446a8296 | [
"MIT"
]
| null | null | null | Introductions/Visualization with PCA.ipynb | CSCI4850/notebook-examples | 8846792d8acc6b619c22f5d8bc7a4b3a446a8296 | [
"MIT"
]
| 2 | 2018-03-06T02:15:26.000Z | 2019-06-23T15:01:19.000Z | 381.026379 | 91,748 | 0.933346 | true | 4,316 | Qwen/Qwen-72B | 1. YES
2. YES | 0.855851 | 0.913677 | 0.781971 | __label__eng_Latn | 0.999674 | 0.655113 |
# 1. 1D example
$\begin{align}
f_{simulator}\colon\mathbb{R}^{N\times D} &\to\mathbb{R}^{N} \\
X &\mapsto \mathbf{y}
\end{align}$
```python
import numpy as np
import torch
```
```python
# set logger and enforce reproducibility
from GPErks.log.logger import get_logger
from GPErks.utils.random import set_seed
log = get_logger()
seed = 8
set_seed(seed) # reproducible sampling
```
<br/>
**1D function example**: Forrester et al. (2008)
$f(x) = (6x - 2)^2 \sin(12x - 4)$
<br/>
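For reference, the formula above is simple enough to write out directly; this is just a sketch of what the imported `forrester` helper is assumed to compute:
```python
import numpy as np

# Forrester et al. (2008) test function, written from the formula above.
def forrester_by_hand(x):
    return (6.0 * x - 2.0) ** 2 * np.sin(12.0 * x - 4.0)
```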
```python
# function to learn (normally a high-dimensional, expensive deterministic model)
from GPErks.utils.test_functions import forrester
f = lambda x: forrester(x)
D = 1
```
```python
# build dataset
from GPErks.gp.data.dataset import Dataset
dataset = Dataset.build_from_function(
f,
D,
n_train_samples=10,
n_test_samples=10,
design="srs",
seed=seed,
l_bounds=[0],
u_bounds=[1] # can put None if, as in this case, parameters range in [0, 1]
)
```
<br/>
**Gaussian process emulator (GPE):**
<br/>
$f(\mathbf{x}) = h(\mathbf{x}) + g(\mathbf{x})$
<br/>
**deterministic part:**
<br/>
$h(\mathbf{x}) := \beta_0 + \beta_1 x_1 + \dots + \beta_{D} x_{D}$
<br/>
**stochastic part:**
<br/>
$\begin{align}
&g(\mathbf{x})\sim\mathcal{GP}(\mathbf{0},\,k_{\text{SE}}(d(\mathbf{x},\,\mathbf{x}'))) \\
&k_{\text{SE}}(d(\mathbf{x},\,\mathbf{x}')) := \sigma_f^2\, e^{-\frac{1}{2}\,d(\mathbf{x},\,\mathbf{x}')} \\
&d(\mathbf{x},\,\mathbf{x}') := (\mathbf{x}-\mathbf{x}')^\mathsf{T}\,\Lambda\,(\mathbf{x}-\mathbf{x}')
\end{align}$
<br/>
**likelihood:**
<br/>
$y=f(\mathbf{x}) + \varepsilon,\quad \varepsilon\sim\mathcal{N}(0,\,\sigma_n^2)$
<br/>
```python
# choose likelihood
from gpytorch.likelihoods import GaussianLikelihood
likelihood = GaussianLikelihood()
```
```python
# choose mean function
from gpytorch.means import LinearMean
mean_function = LinearMean(input_size=dataset.input_size)
```
```python
# choose covariance function (kernel)
from gpytorch.kernels import RBFKernel, ScaleKernel
kernel = ScaleKernel(RBFKernel(ard_num_dims=dataset.input_size))
```
```python
# choose metrics
from torchmetrics import MeanSquaredError, R2Score
metrics = [MeanSquaredError(), R2Score()]
```
```python
# define experiment
from GPErks.gp.experiment import GPExperiment
experiment = GPExperiment(
dataset,
likelihood,
mean_function,
kernel,
n_restarts=3,
metrics=metrics,
seed=seed, # reproducible training
learn_noise=True # y = f(x) + e, e ~ N(0, sigma^2I)
)
```
```python
# choose training options: device + optimizer
device = "cuda" if torch.cuda.is_available() else "cpu"
optimizer = torch.optim.Adam(experiment.model.parameters(), lr=0.1)
```
```python
# train model
from GPErks.train.emulator import GPEmulator
emulator = GPEmulator(experiment, device)
emulator.train(optimizer)
```
```python
# inference on stored test set
x_test = dataset.X_test
y_test = dataset.y_test
y_mean, y_std = emulator.predict(x_test)
for metric in metrics:
    print(metric(torch.from_numpy(y_mean), torch.from_numpy(y_test)).item())
```
```python
# perk n.1: automatic inference
from GPErks.perks.inference import Inference
inference = Inference(emulator)
inference.summary() # can be retrieved from inference.scores_dct
print( inference.scores_dct )
```
```python
# nice plotting
x_train = dataset.X_train
y_train = dataset.y_train
xx = np.linspace(dataset.l_bounds[0], dataset.u_bounds[0], 1000)
yy_mean, yy_std = emulator.predict(xx)
yy_true = f(xx)
import matplotlib.pyplot as plt
height = 9.36111
width = 5.91667
fig, axis = plt.subplots(1, 1, figsize=(4*width/3, height/2))
axis.plot(xx, yy_true, c="C0", ls="--", label="true function")
CI = 2
axis.plot(xx, yy_mean, c="C0", label="predicted mean")
axis.fill_between(
xx, yy_mean - CI * yy_std, yy_mean + CI * yy_std, color="C0", alpha=0.15, label="~95% CI"
)
axis.scatter(x_train, y_train, fc="C0", ec="C0", label="training data")
axis.scatter(x_test, y_test, fc="none", ec="C0", label="testing data")
axis.legend(loc="best")
fig.tight_layout()
plt.show()
```
```python
# check testing points
inference.plot()
```
```python
# draw samples from the posterior distribution
y_mean, y_std = emulator.predict(x_test)
print(y_mean.shape)
print(y_std.shape)
```
```python
y_samples = emulator.sample(x_test, n_draws=5)
print(y_samples.shape)
```
```python
y_samples = emulator.sample(xx, n_draws=5)
fig, axis = plt.subplots(1, 1, figsize=(4*width/3, height/2))
for i, ys in enumerate(y_samples):
axis.plot(xx, ys, lw=0.8, label=f"posterior sample #{i+1}", zorder=1)
axis.plot(xx, yy_mean, c="k", lw=2, ls="--", label="posterior mean", zorder=2)
axis.scatter(x_train, y_train, fc="k", ec="k", label="training data", zorder=2)
axis.legend(loc="best")
fig.tight_layout()
plt.show()
```
| 96431b650b5c8b24907934bd94164771d9650536 | 9,460 | ipynb | Jupyter Notebook | notebooks/example_1.ipynb | stelong/GPErks | 7e8e0e4561c10ad21fba2079619418e416a167b6 | [
"MIT"
]
| null | null | null | notebooks/example_1.ipynb | stelong/GPErks | 7e8e0e4561c10ad21fba2079619418e416a167b6 | [
"MIT"
]
| 6 | 2021-12-10T14:16:51.000Z | 2022-03-25T16:26:50.000Z | notebooks/example_1.ipynb | stelong/GPErks | 7e8e0e4561c10ad21fba2079619418e416a167b6 | [
"MIT"
]
| 1 | 2022-01-28T11:12:33.000Z | 2022-01-28T11:12:33.000Z | 24.829396 | 134 | 0.515645 | true | 1,506 | Qwen/Qwen-72B | 1. YES
2. YES | 0.901921 | 0.785309 | 0.708286 | __label__eng_Latn | 0.405255 | 0.483917 |
# Lecture 18
## MGFs to get moments of Expo and Normal, sums of Poissons, joint distributions
## MGF for $Expo(1)$
Let $X \sim Expo(1)$.
We begin by finding the MGF of $X$:
\begin{align}
M(t) &= \mathbb{E}(e^{tX}) &\quad \text{ definition of MGF} \\
&= \int_{0}^{\infty} e^{-x} \, e^{tx} dx \\
&= \int_{0}^{\infty} e^{-x(1-t)} dx \\
&= \boxed{\frac{1}{1-t}} &\quad \text{ for } t < 1
\end{align}
In finding the moments, by definition we have:
* $M'(0) = \mathbb{E}(X)$
* $M''(0) = \mathbb{E}(X^2)$
* $M'''(0) = \mathbb{E}(X^3)$
* ... and so on ...
Even though finding derivatives of $\frac{1}{1-t}$ is not all that bad, it is nevertheless annoying busywork. But since we know that $\mathbb{E}(X^n)$ appears as the coefficient of $\frac{t^n}{n!}$ in the Taylor expansion of the MGF, we can leverage that fact instead.
\begin{align}
\frac{1}{1-t} &= \sum_{n=0}^{\infty} t^n &\quad \text{ for } |t| < 1 \\
&= \sum_{n=0}^{\infty} \frac{n! \, t^n}{n!} &\quad \text{ since we need the form } \sum_{n=0}^{\infty} \left( \frac{\mathbb{E}(X^{n}) \, t^{n}}{n!}\right) \\
\\
\Rightarrow \mathbb{E}(X^n) &= \boxed{n!}
\end{align}
And now we can simply _generate_ arbitrary moments for r.v. $X$!
* $\mathbb{E}(X) = 1! = 1$
* $\mathbb{E}(X^2) = 2! = 2$
* $\Rightarrow \mathbb{Var}(X) = \mathbb{E}(X^2) - (\mathbb{E}X)^2 = 2 - 1 = 1$
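These first two moments can be double-checked straight from the definition $\mathbb{E}(X^n) = \int_0^{\infty} x^n e^{-x}\,dx$; a small sympy sketch:
```python
import sympy as sp

x = sp.symbols('x', positive=True)
# E(X^n) for X ~ Expo(1), directly from the definition of expectation
moment = lambda k: sp.integrate(x**k * sp.exp(-x), (x, 0, sp.oo))
print(moment(1), moment(2), moment(2) - moment(1)**2)  # 1, 2, 1
```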
## MGF for $Expo(\lambda)$
Let $Y \sim Expo(\lambda)$.
We begin with
\begin{align}
\text{let } X &= \lambda Y \\
\text{and so } X &= \lambda Y \sim Expo(1) \\
\\
\text{then } Y &= \frac{X}{\lambda} \\
Y^n &= \frac{X^n}{\lambda^n} \\
\\
\Rightarrow \mathbb{E}(Y^n) &= \frac{\mathbb{E}(X^n)}{\lambda^n} \\
&= \boxed{\frac{n!}{\lambda^n}}
\end{align}
And as before, we now can simply _generate_ arbitrary moments for r.v. $Y$!
* $\mathbb{E}(Y) = \frac{1!}{\lambda^1} = \frac{1}{\lambda}$
* $\mathbb{E}(Y^2) = \frac{2!}{\lambda^2} = \frac{2}{\lambda^2}$
* $\Rightarrow \mathbb{Var}(Y) = \mathbb{E}(Y^2) - (\mathbb{E}Y)^2 = \frac{2}{\lambda^2} - \left(\frac{1}{\lambda}\right)^2 = \frac{1}{\lambda^2}$
## MGF for standard Normal $\mathcal{N}(0, 1)$
Let $Z \sim \mathcal{N}(0,1)$; find **all** its moments.
We have seen before that, by symmetry, all of the odd moments vanish: $\mathbb{E}(Z^{2n+1}) = 0$ for every $n$.
So we will focus in on the _even_ moments.
Now the MGF $M(t) = e^{t^2/2}$. Without taking _any_ derivatives, we can immediately Taylor expand that, since it is continuous everywhere.
\begin{align}
M(t) &= e^{t^2/2} \\
&= \sum_{n=0}^{\infty} \frac{\left(t^2/2\right)^n}{n!} \\
&= \sum_{n=0}^{\infty} \frac{t^{2n}}{2^n \, n!} \\
&= \sum_{n=0}^{\infty} \frac{(2n)! \, t^{2n}}{2^n \, n! \, (2n)!} &\quad \text{ since we need the form } \sum_{n=0}^{\infty} \left( \frac{\mathbb{E}(X^{n}) \, t^{n}}{n!}\right) \\
\\
\Rightarrow \mathbb{E}(Z^{2n}) &= \boxed{\frac{(2n)!}{2^n \, n!}}
\end{align}
Let's double-check this with what we know about $\mathbb{Var}(Z)$
* by symmetry, we know that $\mathbb{E}(Z) = 0$
* at $n = 1$, $\mathbb{E}(Z^2) = \frac{2!}{2 \times 1!} = 1$
* $\Rightarrow \mathbb{Var}(Z) = \mathbb{E}(Z^2) - (\mathbb{E}Z)^2 = 1 - 0 = 1$
* at $n = 2$, $\mathbb{E}(Z^4) = \frac{4!}{4 \times 2!} = 3$
* at $n = 3$, $\mathbb{E}(Z^6) = \frac{6!}{8 \times 3!} = 15$
And so you might have noticed a pattern here. Let us rewrite those even moments once more:
* at $n = 1$, $\mathbb{E}(Z^2) = 1$
* at $n = 2$, $\mathbb{E}(Z^4) = 1 \times 3 = 3$
* at $n = 3$, $\mathbb{E}(Z^6) = 1 \times 3 \times 5 = 15$
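The pattern is the product of the odd numbers up to $2n - 1$, i.e. the double factorial:
\begin{align}
\mathbb{E}(Z^{2n}) = \frac{(2n)!}{2^n \, n!} = (2n-1)!! = 1 \times 3 \times 5 \times \cdots \times (2n-1)
\end{align}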
## MGF for $Pois(\lambda)$
Let $X \sim Pois(\lambda)$; now let's consider MGFs and how to use them to find sums of random variables (convolutions).
\begin{align}
M(t) &= \mathbb{E}(e^{tX}) \\
&= \sum_{k=0}^{\infty} e^{tk} \, \frac{\lambda^k e^{-\lambda}}{k!} \\
&= e^{-\lambda} \sum_{k=0}^{\infty} \frac{\lambda^k e^{tk}}{k!} \\
&= e^{-\lambda} e^{\lambda e^t} &\quad \text{but the right is just another Taylor expansion} \\
&= \boxed{e^{\lambda (e^t - 1)}}
\end{align}
Now let's let $Y \sim Pois(\mu)$, and it is independent of $X$. Find the distribution of $(X + Y)$.
You may recall that with MGFs, all we need to do is _multiply_ the MGFs.
\begin{align}
e^{\lambda (e^t - 1)} \, e^{\mu (e^t - 1)} &= e^{(\lambda + \mu)(e^t - 1)} \\
\\
\Rightarrow X + Y &\sim \mathcal{Pois}(\lambda + \mu)
\end{align}
When adding a Poisson r.v. $X$ to another, independent Poisson r.v. $Y$, the resulting convolution $X + Y$ will also be Poisson. This closure under addition is a special and very useful property of the Poisson family.
Now think about what happens when $X$ and $Y$ are _not_ independent.
Let $Y = X$, so that $X + Y = 2X$, which is clearly *not* Poisson, as
* $X + Y = 2X$ takes only even values, so it cannot be Poisson, since a Poisson r.v. can take on all nonnegative integer values, both even _and_ odd
* Mean $\mathbb{E}(2X) = 2\lambda$, but $\mathbb{Var}(2X) = 4\lambda$, and since the mean and variance are _not_ equal, this cannot be Poisson
### Some definitions
In the most basic case of two r.v. in a joint distribution, consider both r.v.'s _together_:
> **Joint CDF**
>
> In the general case, the joint CDF of two r.v.'s is $ F(x,y) = P(X \le x, Y \le y)$
<p/>
> **Joint PDF**
> $f(x, y)$ such that, in the *continuous* case $P((X,Y) \in B) = \iint_B f(x,y)\,dx\,dy$
#### Joint PMF
$f(x, y)$ such that, in the *discrete* case
\begin{align}
  f(x, y) = P(X=x, Y=y)
\end{align}
We can also ask how the r.v.'s in a joint distribution relate to one another, and consider a single r.v. of a joint distribution at a time:
#### Independence and Joint Distributions
$X, Y$ are independent iff $F(x,y) = F_X(x) \, F_Y(y)$.
\begin{align}
P(X=x, Y=y) &= P(X=x) \, P(Y=y) &\quad \text{discrete case} \\
\\\\
f(x, y) &= f_X(x) \, f_Y(y) &\quad \text{continuous case}
\end{align}
... with the caveat that this must hold for *all* $x, y \in \mathbb{R}$
#### Marginals (and how to get them)
$P(X \le x)$ is the *marginal distribution* of $X$, where we consider one r.v. at a time.
In the case of a two-r.v. joint distribution, we can get the marginals by using the joint distribution itself:
\begin{align}
P(X=x) &= \sum_y P(X=x, Y=y) &\quad \text{marginal PMF, discrete case, for } x \\
\\\\
f_Y(y) &= \int_{-\infty}^{\infty} f_{(X,Y)}(x,y) \, dx &\quad \text{marginal PDF, continuous case, for } y
\end{align}
## Example: Discrete Joint Distribution
Let $X, Y$ be both Bernoulli. $X$ and $Y$ may be independent; or they might be dependent. They may or may not have the same $p$. But they are both related in the form of a *joint distribution*.
We can lay out this joint distribution in a $2 \times 2$ contingency table like below:
| | $Y=0$ | $Y=1$ |
|-------|:-----:|:-----:|
| $X=0$ | 2/6 | 1/6 |
| $X=1$ | 2/6 | 1/6 |
In order to be a joint distribution, all of the values in our contingency table must be nonnegative, and they must all sum up to 1. The example above shows such a PMF.
Let's add the marginals for $X$ and $Y$ to our $2 \times 2$ contingency table:
| | $Y=0$ | $Y=1$ | ... |
|:-----:|:-----:|:-----:|:-----:|
| $X=0$ | 2/6 | 1/6 | 3/6 |
| $X=1$ | 2/6 | 1/6 | 3/6 |
| ... | 4/6 | 2/6 | |
Observe how in our example, we have:
\begin{align}
P(X=0,Y=0) &= P(X=0) \, P(Y=0) \\
&= 3/6 \times 4/6 = 12/36 &= \boxed{2/6} \\
\\
P(X=0,Y=1) &= P(X=0) \, P(Y=1) \\
&= 3/6 \times 2/6 = 6/36 &= \boxed{1/6} \\
P(X=1,Y=0) &= P(X=1) \, P(Y=0) \\
&= 3/6 \times 4/6 = 12/36 &= \boxed{2/6} \\
\\
P(X=1,Y=1) &= P(X=1) \, P(Y=1) \\
&= 3/6 \times 2/6 = 6/36 &= \boxed{1/6} \\
\end{align}
and so you can see that $X$ and $Y$ are independent.
Now here's an example of a two r.v. joint distribution where $X$ and $Y$ are _dependent_; check it out for yourself.
| | $Y=0$ | $Y=1$ |
|:-----:|:-----:|:-----:|
| $X=0$ | 1/3 | 0 |
| $X=1$ | 1/3 | 1/3 |
## Example: Continuous Joint Distribution
Now say we had Uniform distributions on a square such that $x,y \in [0,1]$.
The joint PDF would be constant on/within the square; and 0 outside.
\begin{align}
\text{joint PDF} &=
\begin{cases}
c &\quad \text{if } 0 \le x \le 1 \text{, } 0 \le y \le 1 \\
\\
0 &\quad \text{otherwise}
\end{cases}
\end{align}
In 1-dimension space, if you integrate $1$ over some interval you get the _length_ of that interval.
In 2-dimension space, if you integrate $1$ over some region, you get the _area_ of that region.
Normalizing $c$, we know that $c = \frac{1}{area} = 1$.
Marginally, $X$ and $Y$ are independent $\mathcal{Unif}(0,1)$.
## Example: Dependent, Continuous Joint Distribution
Now say we had Uniform distributions on a _disc_ such that $x^2 + y^2 \le 1$.
#### Joint PDF
In this case, the joint PDF is $Unif$ over the area of a disc centered at the origin with radius 1.
\begin{align}
\text{joint PDF} &=
\begin{cases}
\frac{1}{\pi} &\quad \text{if } x^2 + y^2 \le 1 \\
\\
0 &\quad \text{otherwise}
\end{cases}
\end{align}
#### Marginal PDF
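Integrating the joint PDF over $y$ gives the marginal PDF of $X$:
\begin{align}
f_X(x) &= \int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}} \frac{1}{\pi} \, dy = \frac{2}{\pi}\sqrt{1-x^2} &\quad \text{ for } -1 \le x \le 1
\end{align}
This is *not* Uniform, and since $f_X(x)\,f_Y(y) \neq f(x,y)$, the r.v.'s $X$ and $Y$ are dependent here.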
| 59e713edd912f7c66d601b46b3b579c1a4a57e30 | 13,002 | ipynb | Jupyter Notebook | Lecture_18.ipynb | dirtScrapper/Stats-110-master | a123692d039193a048ff92f5a7389e97e479eb7e | [
"BSD-3-Clause"
]
| null | null | null | Lecture_18.ipynb | dirtScrapper/Stats-110-master | a123692d039193a048ff92f5a7389e97e479eb7e | [
"BSD-3-Clause"
]
| null | null | null | Lecture_18.ipynb | dirtScrapper/Stats-110-master | a123692d039193a048ff92f5a7389e97e479eb7e | [
"BSD-3-Clause"
]
| null | null | null | 37.148571 | 269 | 0.466774 | true | 3,495 | Qwen/Qwen-72B | 1. YES
2. YES | 0.896251 | 0.90599 | 0.811995 | __label__eng_Latn | 0.911572 | 0.724868 |
# **Inverted Pendulum**
---
We consider the problem of stabilizing an inverted pendulum. The pendulum moves in the XoZ plane. A massless rod is mounted on a cart that can move under the action of a horizontally applied force u (the so-called x-inverted pendulum, or cart-pole).
Let m and M be the masses of the bob and the cart, respectively, and let l be the distance from the attachment point to the bob. The kinetic energy K and potential energy U of the system are
$$K = \frac{1}{2} M \dot x^2 + \frac{1}{2} m (\dot x_{p}^2 + \dot z_{p}^2)$$
$$U = mgz_{p}$$
where $x_{p} = x + l \sin{\theta}$, $z_{p} = l \cos{\theta}$, and g is the gravitational acceleration.
The Lagrange equations then yield the system
\begin{equation*}
\begin{cases}
(M + m) \ddot x + m l \ddot \theta \cos{\theta} - m l \dot \theta^2 \sin{\theta} = u\\
\ddot x \cos{\theta} + l \ddot \theta - g \sin{\theta} = 0
\end{cases}
\end{equation*}
Rewriting in first-order (Cauchy normal) form, we obtain
\begin{equation}
\begin{cases}
\dot x_{0} = x_{1}\\
\dot x_{1} = \frac{- m g \cos{x_{2}} \sin{x_{2}} + m l x_{3}^2 \sin{x_{2}} + u}{M + m \sin^2{x_{2}}} + d_{1}\\
\dot x_{2} = x_{3}\\
\dot x_{3} = \frac{- m l x_{3}^2 \cos{x_{2}} \sin{x_{2}} - u \cos{x_{2}} + (M + m) g \sin{x_{2}} }{M l + m l \sin^2{x_{2}}} + d_{2}
\end{cases}
\end{equation}
where $d_{1}$ and $d_{2}$ are external disturbances that may act on the system.
# Angle stabilization
A single PID controller is sufficient for this task.
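The control law used below is the standard PID form, with $e$ the difference between the measured angle and its reference value:
\begin{equation}
u(t) = k_p\, e(t) + k_i \int_0^t e(\tau)\, d\tau + k_d\, \dot e(t)
\end{equation}
In the code, the integral is approximated with the rectangle rule and the derivative with a first-order finite difference.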
```python
from math import *
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
```
Pendulum parameters
```python
class Parameters:
def __init__(self):
self.m = 0.1
self.M = 1.
self.l = 0.3
self.g = 9.8
```
Right-hand side of the system of differential equations
```python
def rhs(t, x, p, u):
dxdt = np.zeros(4)
dxdt[0] = x[1]
dxdt[1] = (-p.m * p.g * cos(x[2]) * sin(x[2]) + p.m * p.l * sin(x[2]) * x[3] ** 2 + u) / (
p.M + p.m * sin(x[2]) ** 2)
dxdt[2] = x[3]
dxdt[3] = (-p.m * p.l * cos(x[2]) * sin(x[2]) * x[3] ** 2 - cos(x[2]) * u + (p.M + p.m) * p.g * sin(x[2])) / (
p.M * p.l + p.m * p.l * sin(x[2]) ** 2)
return dxdt
```
The control device (a PID controller)
```python
class PID_Controller():
def __init__(self, dt, kp, ki, kd):
self.kp = kp
self.ki = ki
self.kd = kd
self.integral = 0
self.dt = dt
self.e_prev = None
def update(self, state, reference):
error = state - reference
if self.e_prev is None:
dedt = 0
else:
            # The error derivative is approximated with a simple first-order finite difference
dedt = (error - self.e_prev) / self.dt
        # The integral is computed with the simple rectangle rule
self.integral = self.integral + error * self.dt
self.e_prev = error
return self.kp * error + self.ki * self.integral + self.kd * dedt
```
The reference angle at which the pendulum should be held
```python
def reference_angle(t):
return 0
```
The solver for the system of equations itself
```python
def solve(t0, tf, dt, x0, p, pid):
time = np.arange(t0, tf, dt)
result = x0
force = []
for i in range(len(time) - 1):
u = pid.update(x0[2], reference_angle(time[i]))
force = np.append(force, u)
solution = solve_ivp(lambda t, x: rhs(t, x, p, u), (time[i], time[i + 1]), x0)
x0 = solution.y[:, -1]
result = np.vstack((result, x0))
return time, force, result
```
Controller and integrator parameters, and plots of the solutions.
If the pendulum is initially deflected by 0.5 rad, it quickly reaches the upright position and stays there with $k_{p} = 25$, $k_{i} = 15$, $k_{d} = 3$.
```python
t_0 = 0
t_f = 20
d_t = 0.01
init = [0., 0., 0.5, 0.]
param = Parameters()
pid = PID_Controller(d_t, 25, 15, 3)
[t, f, x] = solve(t_0, t_f, d_t, init, param, pid)
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(1, 2, 1)
ax.plot(t, x[:, 2], label = 'Result')
ax.plot(t,np.zeros_like(x[:, 2]),'--',color ='r', label = 'Reference')
ax.grid()
ax.legend()
ax.set_xlabel('Time, sec')
ax.set_ylabel('Angle, rad')
ax = fig.add_subplot(1, 2, 2)
ax.plot(t[0:-1], f)
ax.set_xlabel('Time, sec')
ax.set_ylabel('Force, N')
ax.grid()
plt.show()
```
# Stabilizing the angle and the cart position
Let us add a second PID controller that allows us to control the cart position, i.e. the position of the rod's attachment point.
Пусть теперь тележка стоит на коротком столе, а потому её движение ограничено: $-0.5 \leq x \geq 0.5$. Учтём это простейшим образом, не допуская длительного воздействия, выводящего систему за данные рамки
```python
def reference_position(t):
return 0
def rhs(t, x, p, u):
dxdt = np.zeros(4)
dxdt[0] = x[1]
dxdt[1] = (-p.m * p.g * cos(x[2]) * sin(x[2]) + p.m * p.l * sin(x[2]) * x[3] ** 2 + u) / (
p.M + p.m * sin(x[2]) ** 2)
dxdt[2] = x[3]
dxdt[3] = (-p.m * p.l * cos(x[2]) * sin(x[2]) * x[3] ** 2 - cos(x[2]) * u + (p.M + p.m) * p.g * sin(x[2])) / (
p.M * p.l + p.m * p.l * sin(x[2]) ** 2)
return dxdt
def solve(t0, tf, dt, x0, p, pid1, pid2):
time = np.arange(t0, tf, dt)
result = x0
force = []
for i in range(len(time) - 1):
u = pid1.update(x0[2], reference_angle(time[i])) - pid2.update(x0[0], reference_position(time[i]))
force = np.append(force, u)
solution = solve_ivp(lambda t, x: rhs(t, x, p, u), (time[i], time[i + 1]), x0)
x0 = solution.y[:, -1]
if abs(x0[0]) > 0.5:
x0[0] = 0.5 * np.sign(x0[0])
i = i - 1
result = np.vstack((result, x0))
return time, force, result
```
If at time zero the pendulum is deflected by 0.5 radians and the cart is at the origin and at rest, then with the same angle-controller parameters and with cart-position-controller parameters $k_{p} = -2.4$, $k_{i} = -1$, $k_{d} = -0.75$, the system quickly reaches the desired state $x = 0$, $\theta = 0$.
```python
t_0 = 0
t_f = 20
d_t = 0.01
init = [0., 0., 0.5, 0.]
param = Parameters()
pid1 = PID_Controller(d_t, 25, 15, 3)
pid2 = PID_Controller(d_t, -2.4, -1, -0.75)
[t, f, x] = solve(t_0, t_f, d_t, init, param, pid1, pid2)
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(1, 3, 1)
ax.plot(t, x[:, 2], label = 'Result')
ax.plot(t,np.zeros_like(x[:, 2]),'--',color ='r', label = 'Reference')
ax.grid()
ax.legend()
ax.set_xlabel('Time, sec')
ax.set_ylabel('Angle, rad')
ax = fig.add_subplot(1, 3, 2)
ax.plot(t, x[:, 0], label = 'Result')
ax.plot(t,np.zeros_like(x[:, 0]),'--',color ='r', label = 'Reference')
ax.set_xlabel('Time, sec')
ax.set_ylabel('x position, m')
ax.grid()
ax.legend()
ax = fig.add_subplot(1, 3, 3)
ax.plot(t[0:-1], f)
ax.set_xlabel('Time, sec')
ax.set_ylabel('Force, N')
ax.grid()
plt.show()
```
# Tracking a Given Trajectory
Using the two controllers we can also make the pendulum follow a desired trajectory. Suppose we want a periodic motion of the form $x = 0.3 \sin{0.05 \pi t}$, while the system is additionally subject to strong external disturbances $d_{1} = d_{2} = 20 \sin{20 \pi t}$. Without changing the controller parameters, the required motion is obtained quickly.
```python
def reference_position(t):
return 0.3 * sin(0.05 * pi * t)
def rhs(t, x, p, u):
dxdt = np.zeros(4)
dxdt[0] = x[1]
dxdt[1] = (-p.m * p.g * cos(x[2]) * sin(x[2]) + p.m * p.l * sin(x[2]) * x[3] ** 2 + u) / (
p.M + p.m * sin(x[2]) ** 2) + 20 * sin(20 * pi * t)
dxdt[2] = x[3]
dxdt[3] = (-p.m * p.l * cos(x[2]) * sin(x[2]) * x[3] ** 2 - cos(x[2]) * u + (p.M + p.m) * p.g * sin(x[2])) / (
p.M * p.l + p.m * p.l * sin(x[2]) ** 2) + 20 * sin(20 * pi * t)
return dxdt
```
```python
t_0 = 0
t_f = 100
d_t = 0.01
init = [0., 0., 0.5, 0.]
param = Parameters()
pid1 = PID_Controller(d_t, 25, 15, 3)
pid2 = PID_Controller(d_t, -2.4, -1, -0.75)
[t, f, x] = solve(t_0, t_f, d_t, init, param, pid1, pid2)
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(1, 3, 1)
ax.plot(t, x[:, 2], label = 'Result')
ax.plot(t,np.zeros_like(x[:, 2]),'--',color ='r', label = 'Reference')
ax.grid()
ax.legend()
ax.set_xlabel('Time, sec')
ax.set_ylabel('Angle, rad')
ax = fig.add_subplot(1, 3, 2)
ax.plot(t, x[:, 0], label = 'Result')
ax.plot(t, [reference_position(t[i]) for i in range(len(t))], color = 'r', label ='Reference')
ax.legend()
ax.set_xlabel('Time, sec')
ax.set_ylabel('x position, m')
ax.grid()
ax = fig.add_subplot(1, 3, 3)
ax.plot(t[0:-1], f)
ax.set_xlabel('Time, sec')
ax.set_ylabel('Force, N')
ax.grid()
plt.show()
```
# Conclusions
Thus, a well-designed control system built from ordinary PID controllers is able to control the behavior of the inverted pendulum: not only to hold the system in a given position, but also to track a prescribed trajectory. The main difficulty, however, lies in the delicate tuning of the controller parameters, which in practice is done by trial and error.
| f70a5b49fa12eb57350866d70fc1e2ba2ca0a1fc | 180,478 | ipynb | Jupyter Notebook | Homework Problems/Lecture 4-5 PID/PID for Inverted Pendulum.ipynb | DPritykin/Control-Theory-Course | f27c13cd0bf9671518c78414f8c3963c7cb870d6 | [
"MIT"
]
| 6 | 2022-02-21T06:42:30.000Z | 2022-03-14T05:18:00.000Z | Homework Problems/Lecture 4-5 PID/PID for Inverted Pendulum.ipynb | DPritykin/Control-Theory-Course | f27c13cd0bf9671518c78414f8c3963c7cb870d6 | [
"MIT"
]
| null | null | null | Homework Problems/Lecture 4-5 PID/PID for Inverted Pendulum.ipynb | DPritykin/Control-Theory-Course | f27c13cd0bf9671518c78414f8c3963c7cb870d6 | [
"MIT"
]
| 1 | 2022-03-07T16:25:30.000Z | 2022-03-07T16:25:30.000Z | 319.430088 | 71,652 | 0.926168 | true | 3,674 | Qwen/Qwen-72B | 1. YES
2. YES | 0.872347 | 0.815232 | 0.711166 | __label__rus_Cyrl | 0.147359 | 0.490608 |
# *Electric Circuits I - First Exam (Primeiro Estágio) 2020.1e*
## Exam answer key
```python
m = [9,1,6] # last digits of the student ID number
```
```python
import numpy as np
import sympy as sp
```
### Problem 1
a. $R_{eq}=?$
```python
# define the resistance values
R1 = (m[0]+1)*1e3
R2 = (m[1]+1)*1e3
R3 = (m[2]+1)*1e3
```
```python
Req = ((R1+R3)*2*R3)/(R1+3*R3)
Req = Req + 3*R2
Req = (Req*R2)/(Req+R2)
print('Req = %.2f kΩ' %(Req/1000))
```
Req = 1.69 kΩ
b. Reading of the ideal voltmeter
```python
# voltage divider
Vs = 100
Req = ((R1+R3)*2*R3)/(R1+3*R3)
Vmed1 = Vs*Req/(Req+3*R2)
print('Vmed = %.2f V' %(Vmed1))
```
Vmed = 45.45 V
c. Reading of a voltmeter with internal resistance $R_i = 20R_3$
```python
# voltage divider
Vs = 100
Req = ((R1+R3)*2*R3)/(R1+3*R3)
Req = (Req*20*R3)/(Req+20*R3)
Vmed2 = Vs*Req/(Req+3*R2)
Erro = (Vmed1-Vmed2)/Vmed1
print('Vmed = %.2f V' %(Vmed2))
print('Absolute error = %.2f V' %(Vmed1-Vmed2))
print('Percentage error = %.2f %%' %(Erro*100))
```
Vmed = 44.25 V
Absolute error = 1.21 V
Percentage error = 2.65 %
### Problem 2
```python
# define the resistance values
R1 = m[1]+1
R2 = m[2]+1
print('R1 = ', R1, 'Ω', ' R2 = ', R2, 'Ω',)
```
R1 = 2 Ω R2 = 5 Ω
a. Mesh currents
```python
# define the variables
i1, i2, i3, ix = sp.symbols('i1, i2, i3, ix')
# define the system of equations
eq1 = sp.Eq(i1+2*ix,0)
eq2 = sp.Eq(i2+ix,0)
eq3 = sp.Eq(i3-0.5,0)
eq4 = sp.Eq(-R2*(i1-i2)-10+2*R1*(i2-i3)+3*R1*i2,0)
# solve the system
soluc = sp.solve((eq1, eq2, eq3, eq4), dict=True)
print('Equations: \n\n', eq1,'\n', eq2,'\n', eq3,'\n', eq4,'\n')
i1 = np.array([sol[i1] for sol in soluc])
i2 = np.array([sol[i2] for sol in soluc])
i3 = np.array([sol[i3] for sol in soluc])
ix = np.array([sol[ix] for sol in soluc])
print('Mesh currents:\n\n i1 = %.2f A,\n i2 = %.2f A,\n i3 = %.2f A,\n ix = %.2f A.' %(i1, i2, i3, ix))
```
Equations:
Eq(i1 + 2*ix, 0)
Eq(i2 + ix, 0)
Eq(i3 - 0.5, 0)
Eq(-5*i1 + 15*i2 - 4*i3 - 10, 0)
Mesh currents:
i1 = 4.80 A,
i2 = 2.40 A,
i3 = 0.50 A,
ix = -2.40 A.
b. $v_a=?$, $v_b=?$
```python
va = R2*(i1-i2)
vb = 2*R1*(i2-i3)
print('va = %.2f V' %(va))
print('vb = %.2f V' %(vb))
```
va = 12.00 V
vb = 7.60 V
c. Powers
```python
# unknown voltages
v_cI = R1*i1 + va
v_I = vb - 7*R2*i3
# powers
p_cI = 2*ix*v_cI
p_V = -10*i2
p_I = v_I*i3
p_R = R1*i1**2 + R2*(i1-i2)**2 + 3*R1*i2**2 + 2*R1*(i2-i3)**2 + 7*R2*i3**2
print('Powers:\n\n p_CI = %.2f W,\n p_V = %.2f W,\n p_I = %.2f W,\n p_R = %.2f W.\n' %(p_cI, p_V, p_I, p_R))
print('Sum of the powers: %.2f W.'%(p_cI+p_V+p_I+p_R))
```
Powers:
p_CI = -103.68 W,
p_V = -24.00 W,
p_I = -4.95 W,
p_R = 132.63 W.
Sum of the powers: 0.00 W.
### Problem 3
a. $v_{th}=?$ using the superposition principle
```python
# define the resistance values
R1 = m[0]+1
R2 = m[1]+1
R3 = m[2]+1
```
```python
# define auxiliary variables x, y, z
x = 12
y = 2
z = 10
vth = 0
for ind in range(0,3):
    # define the variables
    v1, v2, v3 = sp.symbols('v1, v2, v3')
    if ind == 0:    # 1 A current source
        x = 12
        y = 0
        z = 0
    elif ind == 1:  # 2 V voltage source
        x = 0
        y = 2
        z = 0
    elif ind == 2:  # 10 V voltage source
        x = 0
        y = 0
        z = 10
    # define the system of equations
eq1 = sp.Eq(-v1/(R1+12) -v1/2 - (v2-v3)/3 - (v2-v3)/(R3+2), -x/(R1+12)+y/(R3+2))
eq2 = sp.Eq(v2-v1, z)
eq3 = sp.Eq(-v3/R2 + (v2-v3)/3 + (v2-v3)/(R3+2), -y/(R3+2))
    # solve the system
soluc = sp.solve((eq1, eq2, eq3), dict=True)
v1 = np.array([sol[v1] for sol in soluc])
v2 = np.array([sol[v2] for sol in soluc])
v3 = np.array([sol[v3] for sol in soluc])
vth = vth + (-v3)
print('vth %d = %.2f V' %(ind+1, -v3))
print('vth (superposition) = %.2f V' %(vth))
```
vth 1 = -0.43 V
vth 2 = -0.20 V
vth 3 = -3.40 V
vth (superposition) = -4.03 V
b. $R_{th}=?$
```python
# Rth via equivalent resistance
Req1 = ((R1+12)*2)/(R1+14)
Req2 = ((R3+2)*3)/(R3+5)
Req = ((Req1+Req2)*R2)/(Req1+Req2+R2)
print('Via equivalent resistance:')
print('Rth = %.2f Ω\n' %(Req))
# Rth via Icc (short-circuit current)
# define auxiliary variables x, y, z
x = 12  # 1 A current source (transformed into a voltage source)
y = 2   # 2 V voltage source
z = 10  # 10 V voltage source
# define the variables
v1, v2 = sp.symbols('v1, v2')
# define the system of equations
eq1 = sp.Eq(-v1/(R1+12) -v1/2 - v2/3 - v2/(R3+2), -x/(R1+12)+y/(R3+2))
eq2 = sp.Eq(v2-v1, z)
# solve the system
soluc = sp.solve((eq1, eq2), dict=True)
v1 = np.array([sol[v1] for sol in soluc])
v2 = np.array([sol[v2] for sol in soluc])
icc = -v2/3 - (v2+2)/(R3+2)
# compute vth/icc
Rth = vth/icc
print('Via short-circuit current:')
print('Rth = %.2f Ω' %(Rth))
```
Via equivalent resistance:
Rth = 1.32 Ω
Via short-circuit current:
Rth = 1.32 Ω
c. $R_L=?$ such that $\eta = 0.9$, where $\eta = \frac{R_L i^2}{v_{th}\,i}$ is the fraction of the total delivered power that reaches the load.
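Note that $\eta = \frac{R_L i^2}{v_{th} i} = \frac{R_L}{R_L + R_{th}}$, so $\eta = 0.9$ requires $R_L = 9R_{th}$. A quick symbolic check of this step (the symbols below are introduced only for the check):
```python
# Symbolic check: solve eta = R_L/(R_L + R_th) = 0.9 for R_L
RL_, Rth_ = sp.symbols('R_L R_th', positive=True)
sp.solve(sp.Eq(RL_ / (RL_ + Rth_), sp.Rational(9, 10)), RL_)   # -> [9*R_th]
```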
```python
print('RL = %.2f Ω' %(9*Rth))
```
RL = 11.89 Ω
| df4ae21b5317a9bb5e269d4968e36e6c375f0b80 | 11,103 | ipynb | Jupyter Notebook | Jupyter notebooks/.ipynb_checkpoints/Circuitos Eletricos I - Primeiro Estagio 2020.1e-checkpoint.ipynb | Jefferson-Lopes/ElectricCircuits | bf2075dc0731cacece75f7b0b378c180630bdf85 | [
"MIT"
]
| 9 | 2021-05-19T18:36:53.000Z | 2022-01-18T16:30:17.000Z | Jupyter notebooks/.ipynb_checkpoints/Circuitos Eletricos I - Primeiro Estagio 2020.1e-checkpoint.ipynb | Jefferson-Lopes/ElectricCircuits | bf2075dc0731cacece75f7b0b378c180630bdf85 | [
"MIT"
]
| null | null | null | Jupyter notebooks/.ipynb_checkpoints/Circuitos Eletricos I - Primeiro Estagio 2020.1e-checkpoint.ipynb | Jefferson-Lopes/ElectricCircuits | bf2075dc0731cacece75f7b0b378c180630bdf85 | [
"MIT"
]
| 10 | 2021-06-25T12:52:40.000Z | 2022-03-11T14:25:48.000Z | 21.727984 | 126 | 0.433036 | true | 2,382 | Qwen/Qwen-72B | 1. YES
2. YES | 0.91118 | 0.785309 | 0.715557 | __label__por_Latn | 0.481705 | 0.500811 |
# When To Stop Fuzzing
In the past chapters, we have discussed several fuzzing techniques. Knowing _what_ to do is important, but it is also important to know when to _stop_ doing things. In this chapter, we will learn when to _stop fuzzing_ – and use a prominent example for this purpose: The *Enigma* machine that was used in the second world war by the navy of Nazi Germany to encrypt communications, and how Alan Turing and I.J. Good used _fuzzing techniques_ to crack ciphers for the Naval Enigma machine.
Turing did not only lay the foundations of computer science with the Turing machine. Together with his assistant I.J. Good, he also invented estimators of the probability of an event occurring that has never previously occurred. We show how the Good-Turing estimator can be used to quantify the *residual risk* of a fuzzing campaign that finds no vulnerabilities, that is, how it estimates the probability of discovering a vulnerability when no vulnerability has been observed throughout the fuzzing campaign.
We discuss means to speed up [coverage-based fuzzers](Coverage.ipynb) and introduce a range of estimation and extrapolation methodologies to assess and extrapolate fuzzing progress and residual risk.
**Prerequisites**
* _The chapter on [Coverage](Coverage.ipynb) discusses how to use coverage information for an executed test input to guide a coverage-based mutational greybox fuzzer_.
* Some knowledge of statistics is helpful.
```python
import fuzzingbook_utils
```
```python
import Fuzzer
import Coverage
```
## The Enigma Machine
It is autumn in the year 1938. Turing has just finished his PhD at Princeton University, demonstrating the limits of computation and laying the foundation for the theory of computer science. Nazi Germany is rearming. It has reoccupied the Rhineland and annexed Austria in violation of the Treaty of Versailles. It has just annexed the Sudetenland in Czechoslovakia and begins preparations to take over the rest of Czechoslovakia despite an agreement just signed in Munich.
Meanwhile, the British intelligence is building up their capability to break encrypted messages used by the Germans to communicate military and naval information. The Germans are using [Enigma machines](https://en.wikipedia.org/wiki/Enigma_machine) for encryption. Enigma machines use a series of electro-mechanical rotor cipher machines to protect military communication. Here is a picture of an Enigma machine:
By the time Turing joined the British Bletchley Park, Polish intelligence had reverse-engineered the logical structure of the Enigma machine and built a decryption machine called *Bomba* (perhaps because of the ticking noise they made). A bomba simulates six Enigma machines simultaneously and tries different decryption keys until the code is broken. The Polish bomba might have been the very _first fuzzer_.
Turing took it upon himself to crack ciphers of the Naval Enigma machine, which were notoriously hard to crack. The Naval Enigma used, as part of its encryption key, a three letter sequence called *trigram*. These trigrams were selected from a book, called *Kenngruppenbuch*, which contained all trigrams in a random order.
### The Kenngruppenbuch
Let's start with the Kenngruppenbuch (K-Book).
We are going to use the following Python functions.
* `shuffle(elements)` - shuffle *elements* and put items in random order.
* `choice(elements, p=weights)` - choose an item from *elements* at random. An element with twice the *weight* is twice as likely to be chosen.
* `log(a)` - returns the natural logarithm of a.
* `a ** b` - a to the power of b (a.k.a. the [power operator](https://docs.python.org/3/reference/expressions.html#the-power-operator))
```python
import string
import numpy
```
```python
from numpy.random import choice
from numpy.random import shuffle
from numpy import log
```
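As a tiny illustration of the weighted `choice()` call we will rely on below (the weights here are made up for the example): an element with twice the weight should show up roughly twice as often.
```python
# Made-up weights, only to illustrate weighted sampling
choice(['a', 'b'], p=[2 / 3, 1 / 3], size=12)
```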
We start with creating the set of trigrams:
```python
letters = list(string.ascii_letters[26:]) # upper-case characters
trigrams = [str(a + b + c) for a in letters for b in letters for c in letters]
shuffle(trigrams)
```
```python
trigrams[:10]
```
['ENN', 'FQG', 'GQU', 'MJG', 'RHW', 'ZEJ', 'AFX', 'ELU', 'CGD', 'FIC']
These now go into the Kenngruppenbuch. However, it was observed that some trigrams were more likely to be chosen than others. For instance, trigrams in the top-left corner of any page, or trigrams on the first or last few pages, were more likely than those somewhere in the middle of the book or page. We reflect this difference in distribution by assigning a _probability_ to each trigram, using Benford's law as introduced in [Probabilistic Fuzzing](ProbabilisticGrammarFuzzer.ipynb).
Recall that Benford's law assigns the $i$-th digit the probability $\log_{10}\left(1 + \frac{1}{i}\right)$, where the base 10 is chosen because there are 10 digits $i\in [0,9]$. However, Benford's law works for an arbitrary number of "digits". Hence, we assign the $i$-th trigram the probability $\log_b\left(1 + \frac{1}{i}\right)$, where the base $b$ is the number of all possible trigrams $b=26^3$.
```python
k_book = {} # Kenngruppenbuch
for i in range(1, len(trigrams) + 1):
trigram = trigrams[i - 1]
# choose weights according to Benford's law
k_book[trigram] = log(1 + 1 / i) / log(26**3 + 1)
```
Here's a random trigram from the Kenngruppenbuch:
```python
random_trigram = choice(list(k_book.keys()), p=list(k_book.values()))
random_trigram
```
'FQG'
And this is its probability:
```python
k_book[random_trigram]
```
0.04148257970673253
### Fuzzing the Enigma
In the following, we introduce an extremely simplified implementation of the Naval Enigma based on the trigrams from the K-book. Of course, the encryption mechanism of the actual Enigma machine is much more sophisticated and worthy of a much more detailed investigation. We encourage the interested reader to follow up with further reading listed in the Background section.
The personell at Bletchley Park can only check whether an encoded message is encoded with a (guessed) trigram.
Our implementation `naval_enigma()` takes a `message` and a `key` (i.e., the guessed trigram). If the given key matches the (previously computed) key for the message, `naval_enigma()` returns `True`.
```python
from Fuzzer import RandomFuzzer
from Fuzzer import Runner
```
```python
class EnigmaMachine(Runner):
def __init__(self, k_book):
self.k_book = k_book
self.reset()
def reset(self):
"""Resets the key register"""
self.msg2key = {}
def internal_msg2key(self, message):
"""Internal helper method.
Returns the trigram for an encoded message."""
if not message in self.msg2key:
# Simulating how an officer chooses a key from the Kenngruppenbuch to encode the message.
self.msg2key[message] = choice(list(self.k_book.keys()), p=list(self.k_book.values()))
trigram = self.msg2key[message]
return trigram
def naval_enigma(self, message, key):
"""Returns true if 'message' is encoded with 'key'"""
if key == self.internal_msg2key(message):
return True
else:
return False
```
To "fuzz" the `naval_enigma()`, our job will be to come up with a key that matches a given (encrypted) message. Since the keys only have three characters, we have a good chance to achieve this in much less than a seconds. (Of course, longer keys will be much harder to find via random fuzzing.)
```python
class EnigmaMachine(EnigmaMachine):
def run(self, tri):
"""PASS if cur_msg is encoded with trigram tri"""
if self.naval_enigma(self.cur_msg, tri):
outcome = self.PASS
else:
outcome = self.FAIL
return (tri, outcome)
```
Now we can use the `EnigmaMachine` to check whether a certain message is encoded with a certain trigram.
```python
enigma = EnigmaMachine(k_book)
enigma.cur_msg = "BrEaK mE. L0Lzz"
enigma.run("AAA")
```
('AAA', 'FAIL')
The simplest way to crack an encoded message is by brute force. Suppose that at Bletchley Park they would try random trigrams until a message is broken.
```python
class BletchleyPark(object):
def __init__(self, enigma):
self.enigma = enigma
self.enigma.reset()
self.enigma_fuzzer = RandomFuzzer(
min_length=3,
max_length=3,
char_start=65,
char_range=26)
def break_message(self, message):
"""Returning the trigram for an encoded message"""
self.enigma.cur_msg = message
while True:
(trigram, outcome) = self.enigma_fuzzer.run(self.enigma)
if outcome == self.enigma.PASS:
break
return trigram
```
How long does it take Bletchley Park to find the key using this brute-force approach?
```python
from Timer import Timer
```
```python
enigma = EnigmaMachine(k_book)
bletchley = BletchleyPark(enigma)
with Timer() as t:
trigram = bletchley.break_message("BrEaK mE. L0Lzz")
```
Here's the key for the current message:
```python
trigram
```
'BRB'
And no, this did not take long:
```python
'%f seconds' % t.elapsed_time()
```
'0.669883 seconds'
```python
'Bletchley cracks about %d messages per second' % (1/t.elapsed_time())
```
'Bletchley cracks about 1 messages per second'
### Turing's Observations
Okay, let's crack a few messages and count the number of times each trigram is observed.
```python
from collections import defaultdict
```
```python
n = 100 # messages to crack
```
```python
observed = defaultdict(int)
for msg in range(0, n):
trigram = bletchley.break_message(msg)
observed[trigram] += 1
# list of trigrams that have been observed
counts = [k for k, v in observed.items() if int(v) > 0]
t_trigrams = len(k_book)
o_trigrams = len(counts)
```
```python
"After cracking %d messages, we observed %d out of %d trigrams." % (
n, o_trigrams, t_trigrams)
```
'After cracking 100 messages, we observed 83 out of 17576 trigrams.'
```python
singletons = len([k for k, v in observed.items() if int(v) == 1])
```
```python
"From the %d observed trigrams, %d were observed only once." % (
o_trigrams, singletons)
```
'From the 83 observed trigrams, 72 were observed only once.'
Given a sample of previously used entries, Turing wanted to _estimate the likelihood_ that the current unknown entry was one that had been previously used, and further, to estimate the probability distribution over the previously used entries. This led to the development of estimators of the missing mass and estimates of the true probability mass of the set of items occurring in the sample. Good worked with Turing during the war and, with Turing’s permission, published the analysis of the bias of these estimators in 1953.
Suppose, after finding the keys for n=100 messages, we have observed the trigram "ABC" exactly $X_\text{ABC}=10$ times. What is the probability $p_\text{ABC}$ that "ABC" is the key for the next message? Empirically, we would estimate $\hat p_\text{ABC}=\frac{X_\text{ABC}}{n}=0.1$. We can derive the empirical estimates for all other trigrams that we have observed. However, it becomes quickly evident that the complete probability mass is distributed over the *observed* trigrams. This leaves no mass for *unobserved* trigrams, i.e., the probability of discovering a new trigram. This is called the missing probability mass or the discovery probability.
Turing and Good derived an estimate of the *discovery probability* $p_0$, i.e., the probability to discover an unobserved trigram, as the number $f_1$ of trigrams observed exactly once divided by the total number $n$ of messages cracked:
$$
p_0 = \frac{f_1}{n}
$$
where $f_1$ is the number of singletons and $n$ is the number of cracked messages.
Let's explore this idea a bit. We'll extend `BletchleyPark` to crack `n` messages and record the number of trigrams observed as the number of cracked messages increases.
```python
class BletchleyPark(BletchleyPark):
def break_message(self, message):
"""Returning the trigram for an encoded message"""
# For the following experiment, we want to make it practical
# to break a large number of messages. So, we remove the
# loop and just return the trigram for a message.
#
# enigma.cur_msg = message
# while True:
# (trigram, outcome) = self.enigma_fuzzer.run(self.enigma)
# if outcome == self.enigma.PASS:
# break
trigram = enigma.internal_msg2key(message)
return trigram
def break_n_messages(self, n):
"""Returns how often each trigram has been observed,
and #trigrams discovered for each message."""
observed = defaultdict(int)
timeseries = [0] * n
# Crack n messages and record #trigrams observed as #messages increases
cur_observed = 0
for cur_msg in range(0, n):
trigram = self.break_message(cur_msg)
observed[trigram] += 1
if (observed[trigram] == 1):
cur_observed += 1
timeseries[cur_msg] = cur_observed
return (observed, timeseries)
```
Let's crack 2000 messages and compute the GT-estimate.
```python
n = 2000 # messages to crack
```
```python
bletchley = BletchleyPark(enigma)
(observed, timeseries) = bletchley.break_n_messages(n)
```
Let us determine the Good-Turing estimate of the probability that the next trigram has not been observed before:
```python
singletons = len([k for k, v in observed.items() if int(v) == 1])
gt = singletons / n
gt
```
0.4025
We can verify the Good-Turing estimate empirically and compute the empirically determined probability that the next trigram has not been observed before. To do this, we repeat the following experiment repeats=1000 times, reporting the average: If the next message is a new trigram, return 1, otherwise return 0. Note that here, we do not record the newly discovered trigrams as observed.
```python
repeats = 1000 # experiment repetitions
```
```python
newly_discovered = 0
for cur_msg in range(n, n + repeats):
trigram = bletchley.break_message(cur_msg)
if(observed[trigram] == 0):
newly_discovered += 1
newly_discovered / repeats
```
0.427
Looks pretty accurate, huh? The difference between estimates is reasonably small, probably below 0.03. However, the Good-Turing estimate did not require nearly as many computational resources as the empirical estimate. Unlike the empirical estimate, the Good-Turing estimate can be computed during the campaign. Unlike the empirical estimate, the Good-Turing estimate requires no additional, redundant repetitions.
In fact, the Good-Turing (GT) estimator often performs close to the best estimator for arbitrary distributions ([Try it here!](#Kenngruppenbuch)). Of course, the concept of *discovery* is not limited to trigrams. The GT estimator is also used in the study of natural languages to estimate the likelihood that we haven't ever heard or read the word we next encounter. The GT estimator is used in ecology to estimate the likelihood of discovering a new, unseen species in our quest to catalog all _species_ on earth. Later, we will see how it can be used to estimate the probability to discover a vulnerability when none has been observed, yet (i.e., residual risk).
Alan Turing was interested in the _complement_ $(1-GT)$ which gives the proportion of _all_ messages for which the Brits have already observed the trigram needed for decryption. For this reason, the complement is also called sample coverage. The *sample coverage* quantifies how much we know about decryption of all messages given the few messages we have already decrypted.
The probability that the next message can be decrypted with a previously discovered trigram is:
```python
1 - gt
```
0.5974999999999999
The *inverse* of the GT-estimate (1/GT) is a _maximum likelihood estimate_ of the expected number of messages that we can decrypt with previously observed trigrams before having to find a new trigram to decrypt the message. In our setting, the number of messages for which we can expect to reuse previous trigrams before having to discover a new trigram is:
```python
1 / gt
```
2.484472049689441
But why is GT so accurate? Intuitively, despite a large sampling effort (i.e., cracking $n$ messages), there are still $f_1$ trigrams that have been observed only once. We could say that such "singletons" are very rare trigrams. Hence, the probability that the next message is encoded with such a rare but observed trigram gives a good upper bound on the probability that the next message is encoded with an evidently much rarer, unobserved trigram. Since Turing's observation 80 years ago, an entire statistical theory has been developed around the hypothesis that rare, observed "species" are good predictors of unobserved species.
Let's have a look at the distribution of rare trigrams.
```python
%matplotlib inline
```
```python
import matplotlib.pyplot as plt
```
```python
frequencies = [v for k, v in observed.items() if int(v) > 0]
frequencies.sort(reverse=True)
# Uncomment to see how often each discovered trigram has been observed
# print(frequencies)
# frequency of rare trigrams
plt.figure(num=None, figsize=(12, 4), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 2, 1)
plt.hist(frequencies, range=[1, 21], bins=numpy.arange(1, 21) - 0.5)
plt.xticks(range(1, 21))
plt.xlabel('# of occurances (e.g., 1 represents singleton trigrams)')
plt.ylabel('Frequency of occurances')
plt.title('Figure 1. Frequency of Rare Trigrams')
# trigram discovery over time
plt.subplot(1, 2, 2)
plt.plot(timeseries)
plt.xlabel('# of messages cracked')
plt.ylabel('# of trigrams discovered')
plt.title('Figure 2. Trigram Discovery Over Time');
```
```python
# Statistics for most and least often observed trigrams
singletons = len([v for k, v in observed.items() if int(v) == 1])
total = len(frequencies)
print("%3d of %3d trigrams (%.3f%%) have been observed 1 time (i.e., are singleton trigrams)."
% (singletons, total, singletons * 100 / total))
print("%3d of %3d trigrams ( %.3f%%) have been observed %d times."
% (1, total, 1 / total, frequencies[0]))
```
805 of 995 trigrams (80.905%) have been observed 1 time (i.e., are singleton trigrams).
1 of 995 trigrams ( 0.001%) have been observed 180 times.
The *majority of trigrams* have been observed only once, as we can see in Figure 1 (left). In other words, the majority of observed trigrams are "rare" singletons. In Figure 2 (right), we can see that discovery is in full swing. The trajectory seems almost linear. However, since there is only a finite number of trigrams (26^3 = 17,576), trigram discovery will slow down and eventually approach an asymptote (the total number of trigrams).
### Boosting the Performance of BletchleyPark
Some trigrams have been observed very often. We call these "abundant" trigrams.
```python
print("Trigram : Frequency")
for trigram in sorted(observed, key=observed.get, reverse=True):
if observed[trigram] > 10:
print(" %s : %d" % (trigram, observed[trigram]))
```
Trigram : Frequency
ENN : 180
FQG : 72
GQU : 55
RHW : 49
MJG : 46
ZEJ : 33
AFX : 30
FIC : 29
ELU : 23
ODQ : 21
CGD : 18
KSJ : 17
IJZ : 15
UUJ : 15
VSQ : 12
ZBD : 12
NBE : 12
SNP : 11
We'll speed up the code breaking by _trying the abundant trigrams first_.
First, we'll find out how many messages can be cracked by the existing brute-force strategy at Bletchley Park, given a maximum number of attempts. We'll also track the number of messages cracked over time (`timeseries`).
```python
class BletchleyPark(BletchleyPark):
def __init__(self, enigma):
super().__init__(enigma)
self.cur_attempts = 0
self.cur_observed = 0
self.observed = defaultdict(int)
self.timeseries = [None] * max_attempts * 2
def break_message(self, message):
"""Returns the trigram for an encoded message, and
track #trigrams observed as #attempts increases."""
self.enigma.cur_msg = message
while True:
self.cur_attempts += 1 # NEW
(trigram, outcome) = self.enigma_fuzzer.run(self.enigma)
self.timeseries[self.cur_attempts] = self.cur_observed # NEW
if outcome == self.enigma.PASS:
break
return trigram
def break_max_attempts(self, max_attempts):
"""Returns #messages successfully cracked after a given #attempts."""
cur_msg = 0
n_messages = 0
while True:
trigram = self.break_message(cur_msg)
# stop when reaching max_attempts
if self.cur_attempts >= max_attempts:
break
# update observed trigrams
n_messages += 1
self.observed[trigram] += 1
if (self.observed[trigram] == 1):
self.cur_observed += 1
self.timeseries[self.cur_attempts] = self.cur_observed
cur_msg += 1
return n_messages
```
`original` is the number of messages cracked by the brute-force strategy, given 100k attempts. Can we beat this?
```python
max_attempts = 100000
```
```python
bletchley = BletchleyPark(enigma)
original = bletchley.break_max_attempts(max_attempts)
original
```
7
Now, we'll create a boosting strategy by trying trigrams first that we have previously observed most often.
```python
class BoostedBletchleyPark(BletchleyPark):
def break_message(self, message):
"""Returns the trigram for an encoded message, and
track #trigrams observed as #attempts increases."""
self.enigma.cur_msg = message
# boost cracking by trying observed trigrams first
for trigram in sorted(self.prior, key=self.prior.get, reverse=True):
self.cur_attempts += 1
(_, outcome) = self.enigma.run(trigram)
self.timeseries[self.cur_attempts] = self.cur_observed
if outcome == self.enigma.PASS:
return trigram
# else fall back to normal cracking
return super().break_message(message)
```
`boosted` is the number of messages cracked by the boosted strategy.
```python
boostedBletchley = BoostedBletchleyPark(enigma)
boostedBletchley.prior = observed
boosted = boostedBletchley.break_max_attempts(max_attempts)
boosted
```
16
We see that the boosted technique cracks substantially more messages. It is worthwhile to record how often each trigram has been used as a key and to try trigrams in the order of their occurrence.
***Try it***. *For practical reasons, we use a large number of previous observations as prior (`boostedBletchley.prior = observed`). You can try to change the code such that the strategy uses the trigram frequencies (`self.observed`) observed **during** the campaign itself to boost the campaign. You will need to increase `max_attempts` and wait for a long while.*
Let's compare the number of trigrams discovered over time.
```python
# print plots
line_old, = plt.plot(bletchley.timeseries, label="Bruteforce Strategy")
line_new, = plt.plot(boostedBletchley.timeseries, label="Boosted Strategy")
plt.legend(handles=[line_old, line_new])
plt.xlabel('# of cracking attempts')
plt.ylabel('# of trigrams discovered')
plt.title('Trigram Discovery Over Time');
```
We see that the boosted fuzzer is consistently superior to the random fuzzer.
## Estimating the Probability of Path Discovery
<!-- ## Residual Risk: Probability of Failure after an Unsuccessful Fuzzing Campaign -->
<!-- Residual risk is not formally defined in this section, so I made the title a bit more generic -- AZ -->
So, what does Turing's observation for the Naval Enigma have to do with fuzzing _arbitrary_ programs? Turing's assistant I.J. Good extended and published Turing's work on the estimation procedures in Biometrika, a journal for theoretical biostatistics that still exists today. Good did not talk about trigrams. Instead, he called them "species". Hence, the GT estimator is presented to estimate how likely it is to discover a new species, given an existing sample of individuals (each of which belongs to exactly one species).
Now, we can associate program inputs to species, as well. For instance, we could define the path that is exercised by an input as that input's species. This would allow us to _estimate the probability that fuzzing discovers a new path._ Later, we will see how this discovery probability estimate also estimates the likelihood of discovering a vulnerability when we have not seen one, yet (residual risk).
Let's do this. We identify the species for an input by computing a hash-id over the set of statements exercised by that input. In the [Coverage](Coverage.ipynb) chapter, we have learned about the [Coverage class](Coverage.ipynb#A-Coverage-Class) which collects coverage information for an executed Python function. As an example, the function [`cgi_decode()`](Coverage.ipynb#A-CGI-Decoder) was introduced. The function `cgi_decode()` takes a string encoded for a website URL and decodes it back to its original form.
Here's what `cgi_decode()` does and how coverage is computed.
```python
from Coverage import Coverage, cgi_decode
```
```python
encoded = "Hello%2c+world%21"
with Coverage() as cov:
decoded = cgi_decode(encoded)
```
```python
decoded
```
'Hello, world!'
```python
print(cov.coverage());
```
{('cgi_decode', 25), ('__exit__', 80), ('cgi_decode', 24), ('cgi_decode', 13), ('cgi_decode', 23), ('cgi_decode', 12), ('cgi_decode', 33), ('cgi_decode', 22), ('cgi_decode', 11), ('cgi_decode', 32), ('cgi_decode', 21), ('cgi_decode', 10), ('cgi_decode', 31), ('cgi_decode', 20), ('cgi_decode', 19), ('cgi_decode', 18), ('cgi_decode', 17), ('cgi_decode', 27), ('cgi_decode', 16), ('cgi_decode', 26)}
### Trace Coverage
First, we will introduce the concept of execution traces, which are a coarse abstraction of the execution path taken by an input. Compared to the definition of path, a trace ignores the sequence in which statements are exercised or how often each statement is exercised.
* `pickle.dumps()` - serializes an object by producing a byte array from all the information in the object
* `hashlib.md5()` - produces a 128-bit hash value from a byte array
```python
import pickle
import hashlib
```
```python
def getTraceHash(cov):
pickledCov = pickle.dumps(cov.coverage())
hashedCov = hashlib.md5(pickledCov).hexdigest()
return hashedCov
```
Remember our model for the Naval Enigma machine? Each message must be decrypted using exactly one trigram while multiple messages may be decrypted by the same trigram. Similarly, we need each input to yield exactly one trace hash while multiple inputs can yield the same trace hash.
Let's see whether this is true for our `getTraceHash()` function.
```python
inp1 = "a+b"
inp2 = "a+b+c"
inp3 = "abc"
with Coverage() as cov1:
cgi_decode(inp1)
with Coverage() as cov2:
cgi_decode(inp2)
with Coverage() as cov3:
cgi_decode(inp3)
```
The inputs `inp1` and `inp2` execute the same statements:
```python
inp1, inp2
```
('a+b', 'a+b+c')
```python
cov1.coverage() - cov2.coverage()
```
set()
The difference between both coverage sets is empty. Hence, the trace hashes should be the same:
```python
getTraceHash(cov1)
```
'7ea57fa32002a357d62c81a40c363f0e'
```python
getTraceHash(cov2)
```
'7ea57fa32002a357d62c81a40c363f0e'
```python
assert getTraceHash(cov1) == getTraceHash(cov2)
```
In contrast, the inputs `inp1` and `inp3` execute _different_ statements:
```python
inp1, inp3
```
('a+b', 'abc')
```python
cov1.coverage() - cov3.coverage()
```
{('cgi_decode', 21)}
Hence, the trace hashes should be different, too:
```python
getTraceHash(cov1)
```
'7ea57fa32002a357d62c81a40c363f0e'
```python
getTraceHash(cov3)
```
'54a1a3b094f620070507fb0e3ad23a70'
```python
assert getTraceHash(cov1) != getTraceHash(cov3)
```
### Measuring Trace Coverage over Time
In order to measure trace coverage for a `function` executing a `population` of fuzz inputs, we slightly adapt the `population_coverage()` function from the [Chapter on Coverage](Coverage.ipynb#Coverage-of-Basic-Fuzzing).
```python
def population_trace_coverage(population, function):
cumulative_coverage = []
all_coverage = set()
cumulative_singletons = []
cumulative_doubletons = []
singletons = set()
doubletons = set()
for s in population:
with Coverage() as cov:
try:
function(s)
except BaseException:
pass
cur_coverage = set([getTraceHash(cov)])
# singletons and doubletons -- we will need them later
doubletons -= cur_coverage
doubletons |= singletons & cur_coverage
singletons -= cur_coverage
singletons |= cur_coverage - (cur_coverage & all_coverage)
cumulative_singletons.append(len(singletons))
cumulative_doubletons.append(len(doubletons))
# all and cumulative coverage
all_coverage |= cur_coverage
cumulative_coverage.append(len(all_coverage))
return all_coverage, cumulative_coverage, cumulative_singletons, cumulative_doubletons
```
Let's see whether our new function really contains coverage information only for *two* traces given our three inputs for `cgi_decode`.
```python
all_coverage = population_trace_coverage([inp1, inp2, inp3], cgi_decode)[0]
assert len(all_coverage) == 2
```
Unfortunately, the `cgi_decode()` function is too simple. Instead, we will use the original Python [HTMLParser](https://docs.python.org/3/library/html.parser.html) as our test subject.
```python
from Fuzzer import RandomFuzzer
from Coverage import population_coverage
from html.parser import HTMLParser
```
```python
trials = 50000 # number of random inputs generated
```
Let's run a random fuzzer for $n=50000$ times and plot trace coverage over time.
```python
# create wrapper function
def my_parser(inp):
parser = HTMLParser() # resets the HTMLParser object for every fuzz input
parser.feed(inp)
```
```python
# create random fuzzer
fuzzer = RandomFuzzer(min_length=1, max_length=100,
char_start=32, char_range=94)
# create population of fuzz inputs
population = []
for i in range(trials):
population.append(fuzzer.fuzz())
# execute and measure trace coverage
trace_timeseries = population_trace_coverage(population, my_parser)[1]
# execute and measure code coverage
code_timeseries = population_coverage(population, my_parser)[1]
# plot trace coverage over time
plt.figure(num=None, figsize=(12, 4), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 2, 1)
plt.plot(trace_timeseries)
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of traces exercised')
plt.title('Trace Coverage Over Time')
# plot code coverage over time
plt.subplot(1, 2, 2)
plt.plot(code_timeseries)
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of statements covered')
plt.title('Code Coverage Over Time');
```
Above, we can see trace coverage (left) and code coverage (right) over time. Here are our observations.
1. **Trace coverage is more robust**. There are fewer sudden jumps in the graph compared to code coverage.
2. **Trace coverage is more fine-grained.** There are more traces than statements covered at the end (y-axis).
3. **Trace coverage grows more steadily**. The very first input already covers more than half of the statements that code coverage reaches after 50k inputs. In contrast, the number of traces covered grows slowly and steadily, since each input can yield only one execution trace.
It is for this reason that one of the most prominent and successful fuzzers today, american fuzzy lop (AFL), uses a similar *measure of progress* (a hash computed over the branches exercised by the input).
### Evaluating the Discovery Probability Estimate
Let's find out how the Good-Turing estimator performs as estimate of discovery probability when we are fuzzing to discover execution traces rather than trigrams.
To measure the empirical probability, we execute the same population of inputs (n=50000) and measure at regular intervals (measurements=100 intervals). During each measurement, we repeat the following experiment repeats=500 times, reporting the average: If the next input yields a new trace, return 1, otherwise return 0. Note that during these repetitions, we do not record the newly discovered traces as observed.
```python
repeats = 500 # experiment repetitions
measurements = 100 # experiment measurements
```
```python
emp_timeseries = []
all_coverage = set()
step = int(trials / measurements)
for i in range(0, trials, step):
if i - step >= 0:
for j in range(step):
inp = population[i - j]
with Coverage() as cov:
try:
my_parser(inp)
except BaseException:
pass
all_coverage |= set([getTraceHash(cov)])
discoveries = 0
for _ in range(repeats):
inp = fuzzer.fuzz()
with Coverage() as cov:
try:
my_parser(inp)
except BaseException:
pass
if getTraceHash(cov) not in all_coverage:
discoveries += 1
emp_timeseries.append(discoveries / repeats)
```
Now, we compute the Good-Turing estimate over time.
```python
gt_timeseries = []
singleton_timeseries = population_trace_coverage(population, my_parser)[2]
for i in range(1, trials + 1, step):
gt_timeseries.append(singleton_timeseries[i - 1] / i)
```
Let's go ahead and plot both time series.
```python
line_emp, = plt.semilogy(emp_timeseries, label="Empirical")
line_gt, = plt.semilogy(gt_timeseries, label="Good-Turing")
plt.legend(handles=[line_emp, line_gt])
plt.xticks(range(0, measurements + 1, int(measurements / 5)),
range(0, trials + 1, int(trials / 5)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('discovery probability')
plt.title('Discovery Probability Over Time');
```
Again, the Good-Turing estimate appears to be *highly accurate*. In fact, the empirical estimator has a much lower precision as indicated by the large swings. You can try and increase the number of repetitions (repeats) to get more precision for the empirical estimates, however, at the cost of waiting much longer.
### Discovery Probability Quantifies Residual Risk
Alright. You have gotten a hold of a couple of powerful machines and used them to fuzz a software system for several months without finding any vulnerabilities. Is the system vulnerable?
Well, who knows? We cannot say for sure; there is always some residual risk. Testing is not verification. Maybe the next test input that is generated reveals a vulnerability.
Let's say *residual risk* is the probability that the next test input reveals a vulnerability that has not been found, yet. Böhme \cite{stads} has shown that the Good-Turing estimate of the discovery probability is also an estimate of the maximum residual risk.
**Proof sketch (Residual Risk)**. Here is a proof sketch that shows that an estimator of discovery probability for an arbitrary definition of species gives an upper bound on the probability to discover a vulnerability when none has been found: Suppose, for each "old" species A (here, execution trace), we derive two "new" species: Some inputs belonging to A expose a vulnerability while others belonging to A do not. We know that _only_ species that do not expose a vulnerability have been discovered. Hence, _all_ species exposing a vulnerability and _some_ species that do not expose a vulnerability remain undiscovered. Hence, the probability to discover a new species gives an upper bound on the probability to discover (a species that exposes) a vulnerability. **QED**.
An estimate of the discovery probability is useful in many other ways.
1. **Discovery probability**. We can estimate, at any point during the fuzzing campaign, the probability that the next input belongs to a previously unseen species (here, that it yields a new execution trace, i.e., exercises a new set of statements).
2. **Complement of discovery probability**. We can estimate the proportion of *all* inputs the fuzzer can generate for which we have already seen the species (here, execution traces). In some sense, this allows us to quantify the *progress of the fuzzing campaign towards completion*: If the probability to discover a new species is too low, we might as well abort the campaign.
3. **Inverse of discovery probability**. We can predict the number of test inputs needed, so that we can expect the discovery of a new species (here, execution trace).
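The following sketch ties the three quantities above together for a hypothetical campaign state; the numbers are made up and only serve to show how each quantity is obtained from the number of singletons and the number of generated inputs.
```python
# Hypothetical campaign state (made-up numbers)
n_inputs = 50000        # fuzz inputs generated so far
f1_singletons = 120     # traces observed exactly once

discovery_probability = f1_singletons / n_inputs    # (1) Good-Turing estimate
sample_coverage = 1 - discovery_probability         # (2) proportion of inputs with a known trace
inputs_per_discovery = 1 / discovery_probability    # (3) expected #inputs until a new trace
discovery_probability, sample_coverage, inputs_per_discovery
```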
## How Do We Know When to Stop Fuzzing?
In fuzzing, we have measures of progress such as [code coverage](Coverage.ipynb) or [grammar coverage](GrammarCoverageFuzzer.ipynb). Suppose, we are interested in covering all statements in the program. The _percentage_ of statements that have already been covered quantifies how "far" we are from completing the fuzzing campaign. However, sometimes we know only the _number_ of species $S(n)$ (here, statements) that have been discovered after generating $n$ fuzz inputs. The percentage $S(n)/S$ can only be computed if we know the _total number_ of species $S$. Even then, not all species may be feasible.
### A Success Estimator
If we do not _know_ the total number of species, then let's at least _estimate_ it: As we have seen before, species discovery slows down over time. In the beginning, many new species are discovered. Later, many inputs need to be generated before discovering the next species. In fact, given enough time, the fuzzing campaign approaches an _asymptote_. It is this asymptote that we can estimate.
In 1984, Anne Chao, a well-known theoretical bio-statistician, developed an estimator $\hat S$ which estimates the asymptotic total number of species $S$:
\begin{align}
\hat S_\text{Chao1} = \begin{cases}
S(n) + \frac{f_1^2}{2f_2} & \text{if $f_2>0$}\\
S(n) + \frac{f_1(f_1-1)}{2} & \text{otherwise}
\end{cases}
\end{align}
* where $f_1$ and $f_2$ is the number of singleton and doubleton species, respectively (that have been observed exactly once or twice, resp.), and
* where $S(n)$ is the number of species that have been discovered after generating $n$ fuzz inputs.
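The estimator is straightforward to write down as a small helper function; this is only a transcription of the case distinction above (the experiment below computes the same expression inline).
```python
def chao1(Sn, f1, f2):
    """Chao1 estimate of the asymptotic total number of species.
    Sn: #species discovered; f1/f2: #singleton/#doubleton species."""
    if f2 > 0:
        return Sn + f1 * f1 / (2 * f2)
    return Sn + f1 * (f1 - 1) / 2
```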
So, how does Chao's estimate perform? To investigate this, we generate trials=400000 fuzz inputs using a fuzzer setting that allows us to see an asymptote in a few seconds. We measure trace coverage. Half-way into our fuzzing campaign (trials/2 = 200000 inputs), we compute Chao's estimate $\hat S$ of the asymptotic total number of species. Then, we run the remainder of the campaign to see the "empirical" asymptote.
```python
trials = 400000
fuzzer = RandomFuzzer(min_length=2, max_length=4,
char_start=32, char_range=32)
population = []
for i in range(trials):
population.append(fuzzer.fuzz())
_, trace_ts, f1_ts, f2_ts = population_trace_coverage(population, my_parser)
```
```python
time = int(trials / 2)
time
```
200000
```python
f1 = f1_ts[time]
f2 = f2_ts[time]
Sn = trace_ts[time]
if f2 > 0:
hat_S = Sn + f1 * f1 / (2 * f2)
else:
hat_S = Sn + f1 * (f1 - 1) / 2
```
After executing `time` fuzz inputs (half of all), we have covered this many traces:
```python
time
```
200000
```python
Sn
```
66
We can estimate there are this many traces in total:
```python
hat_S
```
84.0
Hence, we have achieved this percentage of the estimate:
```python
100 * Sn / hat_S
```
78.57142857142857
After executing `trials` fuzz inputs, we have covered this many traces:
```python
trials
```
400000
```python
trace_ts[trials - 1]
```
70
The accuracy of Chao's estimator is quite reasonable. It isn't always accurate -- particularly at the beginning of a fuzzing campaign when the [discovery probability](WhenIsEnough.ipynb#Measuring-Trace-Coverage-over-Time) is still very high. Nevertheless, it demonstrates the main benefit of reporting a percentage to assess the progress of a fuzzing campaign towards completion.
***Try it***. *Try setting `trials` to 1 million and `time` to `int(trials / 4)`.*
### Extrapolating Fuzzing Success
<!-- ## Cost-Benefit Analysis: Extrapolating the Number of Species Discovered -->
Suppose you have run the fuzzer for a week, which generated $n$ fuzz inputs and discovered $S(n)$ species (here, covered $S(n)$ execution traces). Instead of running the fuzzer for another week, you would like to *predict* how many more species you would discover. In 2003, Anne Chao and her team developed an extrapolation methodology to do just that. We are interested in the number $S(n+m^*)$ of species discovered if $m^*$ more fuzz inputs were generated:
\begin{align}
\hat S(n + m^*) = S(n) + \hat f_0 \left[1-\left(1-\frac{f_1}{n\hat f_0 + f_1}\right)^{m^*}\right]
\end{align}
* where $\hat f_0=\hat S - S(n)$ is an estimate of the number $f_0$ of undiscovered species, and
* where $f_1$ the number of singleton species, i.e., those we have observed exactly once.
The number $f_1$ of singletons, we can just keep track of during the fuzzing campaign itself. The estimate of the number $\hat f_0$ of undiscovered species, we can simply derive using Chao's estimate $\hat S$ and the number of observed species $S(n)$.
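Again, the extrapolator is only a short formula; the helper below is a direct transcription of it (the experiment below computes the same expression inline as `prediction_ts`).
```python
def chao_extrapolate(Sn, f1, hat_S, n, m_star):
    """Predicted #species S(n + m_star) after m_star additional fuzz inputs."""
    f0 = hat_S - Sn   # estimated #undiscovered species
    return Sn + f0 * (1 - (1 - f1 / (n * f0 + f1)) ** m_star)
```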
Let's see how Chao's extrapolator performs by comparing the predicted number of species to the empirical number of species.
```python
prediction_ts = [None] * time
f0 = hat_S - Sn
for m in range(trials - time):
prediction_ts.append(Sn + f0 * (1 - (1 - f1 / (time * f0 + f1)) ** m))
```
```python
plt.figure(num=None, figsize=(12, 3), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 3, 1)
plt.plot(trace_ts, color='white')
plt.plot(trace_ts[:time])
plt.xticks(range(0, trials + 1, int(time)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of traces exercised')
plt.subplot(1, 3, 2)
line_cur, = plt.plot(trace_ts[:time], label="Ongoing fuzzing campaign")
line_pred, = plt.plot(prediction_ts, linestyle='--',
color='black', label="Predicted progress")
plt.legend(handles=[line_cur, line_pred])
plt.xticks(range(0, trials + 1, int(time)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of traces exercised')
plt.subplot(1, 3, 3)
line_emp, = plt.plot(trace_ts, color='grey', label="Actual progress")
line_cur, = plt.plot(trace_ts[:time], label="Ongoing fuzzing campaign")
line_pred, = plt.plot(prediction_ts, linestyle='--',
color='black', label="Predicted progress")
plt.legend(handles=[line_emp, line_cur, line_pred])
plt.xticks(range(0, trials + 1, int(time)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of traces exercised');
```
The prediction from Chao's extrapolator looks quite accurate. We make the prediction at $time = trials/2$. Even though we extrapolate to twice that number of fuzz inputs (i.e., to `trials`), we can see that the predicted value (black, dashed line) closely matches the empirical value (grey, solid line).
***Try it***. Again, try setting `trials` to 1 million and `time` to `int(trials / 4)`.
## Lessons Learned
* One can measure the _progress_ of a fuzzing campaign (as species over time, i.e., $S(n)$).
* One can measure the _effectiveness_ of a fuzzing campaign (as asymptotic total number of species $S$).
* One can estimate the _effectiveness_ of a fuzzing campaign using the Chao1-estimator $\hat S$.
* One can extrapolate the _progress_ of a fuzzing campaign, $\hat S(n+m^*)$.
* One can estimate the _residual risk_ (i.e., the probability that a bug exists that has not been found) using the Good-Turing estimator $GT$ of the species discovery probability.
## Next Steps
This chapter is the last in the book! If you want to continue reading, have a look at the [Appendices](99_Appendices.ipynb). Otherwise, _make use of what you have learned and go and create great fuzzers and test generators!_
## Background
* A **statistical framework for fuzzing**, inspired from ecology. Marcel Böhme. [STADS: Software Testing as Species Discovery](https://mboehme.github.io/paper/TOSEM18.pdf). ACM TOSEM 27(2):1--52
* Estimating the **discovery probability**: I.J. Good. 1953. [The population frequencies of species and the
estimation of population parameters](https://www.jstor.org/stable/2333344). Biometrika 40:237–264.
* Estimating the **asymptotic total number of species** when each input can belong to exactly one species: Anne Chao. 1984. [Nonparametric estimation of the number of classes in a population](https://www.jstor.org/stable/4615964). Scandinavian Journal of Statistics 11:265–270
* Estimating the **asymptotic total number of species** when each input can belong to one or more species: Anne Chao. 1987. [Estimating the population size for capture-recapture data with unequal catchability](https://www.jstor.org/stable/2531532). Biometrics 43:783–791
* **Extrapolating** the number of discovered species: Tsung-Jen Shen, Anne Chao, and Chih-Feng Lin. 2003. [Predicting the Number of New Species in Further Taxonomic Sampling](http://chao.stat.nthu.edu.tw/wordpress/paper/2003_Ecology_84_P798.pdf). Ecology 84, 3 (2003), 798–804.
## Exercises
I.J. Good and Alan Turing developed an estimator for the case where each input belongs to exactly one species. For instance, each input yields exactly one execution trace (see function [`getTraceHash`](#Trace-Coverage)). However, this is not true in general. For instance, each input exercises multiple statements and branches in the source code. Generally, each input can belong to one *or more* species.
In this extended model, the underlying statistics are quite different. Yet, all estimators that we have discussed in this chapter turn out to be almost identical to those for the simple, single-species model. For instance, the Good-Turing estimator $C$ is defined as
$$C=\frac{Q_1}{n}$$
where $Q_1$ is the number of singleton species and $n$ is the number of generated test cases.
Throughout the fuzzing campaign, we record for each species the *incidence frequency*, i.e., the number of inputs that belong to that species. Again, we define a species $i$ as *singleton species* if we have seen exactly one input that belongs to species $i$.
### Exercise 1: Estimate and Evaluate the Discovery Probability for Statement Coverage
In this exercise, we create a Good-Turing estimator for the simple fuzzer.
#### Part 1: Population Coverage
Implement a function `population_stmt_coverage()` as in [the section on estimating discovery probability](#Estimating-the-Discovery-Probability) that monitors the number of singletons and doubletons over time, i.e., as the number $i$ of test inputs increases.
```python
from Coverage import population_coverage, Coverage
...
```
**Solution.** Here we go:
```python
def population_stmt_coverage(population, function):
cumulative_coverage = []
all_coverage = set()
cumulative_singletons = []
cumulative_doubletons = []
singletons = set()
doubletons = set()
for s in population:
with Coverage() as cov:
try:
function(s)
except BaseException:
pass
cur_coverage = cov.coverage()
# singletons and doubletons
doubletons -= cur_coverage
doubletons |= singletons & cur_coverage
singletons -= cur_coverage
singletons |= cur_coverage - (cur_coverage & all_coverage)
cumulative_singletons.append(len(singletons))
cumulative_doubletons.append(len(doubletons))
# all and cumulative coverage
all_coverage |= cur_coverage
cumulative_coverage.append(len(all_coverage))
return all_coverage, cumulative_coverage, cumulative_singletons, cumulative_doubletons
```
#### Part 2: Population
Use the random fuzzer `RandomFuzzer(min_length=1, max_length=1000, char_start=0, char_range=255)` from [the chapter on Fuzzers](Fuzzer.ipynb) to generate a population of $n=10000$ fuzz inputs.
```python
from Fuzzer import RandomFuzzer
from html.parser import HTMLParser
...
```
Ellipsis
**Solution.** This is fairly straightforward:
```python
trials = 2000  # increase to 10000 for better convergence. Will take a while..
```
We create a wrapper function...
```python
def my_parser(inp):
parser = HTMLParser() # resets the HTMLParser object for every fuzz input
parser.feed(inp)
```
... and a random fuzzer:
```python
fuzzer = RandomFuzzer(min_length=1, max_length=1000,
char_start=0, char_range=255)
```
We fill the population:
```python
population = []
for i in range(trials):
population.append(fuzzer.fuzz())
```
#### Part 3: Estimating Probabilities
Execute the generated inputs on the Python HTML parser (`from html.parser import HTMLParser`) and estimate the probability that the next input covers a previously uncovered statement (i.e., the discovery probability) using the Good-Turing estimator.
**Solution.** Here we go:
```python
measurements = 100 # experiment measurements
step = int(trials / measurements)
gt_timeseries = []
singleton_timeseries = population_stmt_coverage(population, my_parser)[2]
for i in range(1, trials + 1, step):
gt_timeseries.append(singleton_timeseries[i - 1] / i)
```
#### Part 4: Empirical Evaluation
Empirically evaluate the accuracy of the Good-Turing estimator (using $10000$ repetitions) of the probability to cover new statements using the experimental procedure at the end of [the section on estimating discovery probability](#Estimating-the-Discovery-Probability).
**Solution.** This is as above:
```python
# increase to 10000 for better precision (less variance). Will take a while..
repeats = 100
```
```python
emp_timeseries = []
all_coverage = set()
for i in range(0, trials, step):
if i - step >= 0:
for j in range(step):
inp = population[i - j]
with Coverage() as cov:
try:
my_parser(inp)
except BaseException:
pass
all_coverage |= cov.coverage()
discoveries = 0
for _ in range(repeats):
inp = fuzzer.fuzz()
with Coverage() as cov:
try:
my_parser(inp)
except BaseException:
pass
        # If the set difference is non-empty, a new stmt was (dis)covered
if cov.coverage() - all_coverage:
discoveries += 1
emp_timeseries.append(discoveries / repeats)
```
```python
%matplotlib inline
import matplotlib.pyplot as plt
line_emp, = plt.semilogy(emp_timeseries, label="Empirical")
line_gt, = plt.semilogy(gt_timeseries, label="Good-Turing")
plt.legend(handles=[line_emp, line_gt])
plt.xticks(range(0, measurements + 1, int(measurements / 5)),
range(0, trials + 1, int(trials / 5)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('discovery probability')
plt.title('Discovery Probability Over Time');
```
### Exercise 2: Extrapolate and Evaluate Statement Coverage
In this exercise, we use Chao's extrapolation method to estimate the success of fuzzing.
#### Part 1: Create Population
Use the random `fuzzer(min_length=1, max_length=1000, char_start=0, char_range=255)` to generate a population of $n=400000$ fuzz inputs.
**Solution.** Here we go:
```python
trials = 400 # Use 400000 for actual solution. This takes a while!
```
```python
population = []
for i in range(trials):
population.append(fuzzer.fuzz())
_, stmt_ts, Q1_ts, Q2_ts = population_stmt_coverage(population, my_parser)
```
#### Part 2: Compute Estimate
Compute an estimate of the total number of statements $\hat S$ after $n/4=100000$ fuzz inputs were generated. In the extended model, $\hat S$ is computed as
\begin{align}
\hat S_\text{Chao1} = \begin{cases}
S(n) + \frac{Q_1^2}{2Q_2} & \text{if $Q_2>0$}\\
S(n) + \frac{Q_1(Q_1-1)}{2} & \text{otherwise}
\end{cases}
\end{align}
* where $Q_1$ and $Q_2$ are the numbers of singleton and doubleton statements, respectively (i.e., statements that have been exercised by exactly one or exactly two fuzz inputs, respectively), and
* where $S(n)$ is the number of statements that have been (dis)covered after generating $n$ fuzz inputs.
**Solution.** Here we go:
```python
time = int(trials / 4)
Q1 = Q1_ts[time]
Q2 = Q2_ts[time]
Sn = stmt_ts[time]
if Q2 > 0:
hat_S = Sn + Q1 * Q1 / (2 * Q2)
else:
hat_S = Sn + Q1 * (Q1 - 1) / 2
print("After executing %d fuzz inputs, we have covered %d **(%.1f %%)** statements.\n" % (time, Sn, 100 * Sn / hat_S) +
"After executing %d fuzz inputs, we estimate there are %d statements in total.\n" % (time, hat_S) +
"After executing %d fuzz inputs, we have covered %d statements." % (trials, stmt_ts[trials - 1]))
```
After executing 100 fuzz inputs, we have covered 126 **(63.6 %)** statements.
After executing 100 fuzz inputs, we estimate there are 198 statements in total.
After executing 400 fuzz inputs, we have covered 171 statements.
#### Part 3: Compute and Evaluate Extrapolator
Compute and evaluate Chao's extrapolator by comparing the predicted number of statements to the empirical number of statements.
**Solution.** Here's our solution:
```python
prediction_ts = [None] * time
Q0 = hat_S - Sn
for m in range(trials - time):
prediction_ts.append(Sn + Q0 * (1 - (1 - Q1 / (time * Q0 + Q1)) ** m))
```
```python
plt.figure(num=None, figsize=(12, 3), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 3, 1)
plt.plot(stmt_ts, color='white')
plt.plot(stmt_ts[:time])
plt.xticks(range(0, trials + 1, int(time)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of statements exercised')
plt.subplot(1, 3, 2)
line_cur, = plt.plot(stmt_ts[:time], label="Ongoing fuzzing campaign")
line_pred, = plt.plot(prediction_ts, linestyle='--',
color='black', label="Predicted progress")
plt.legend(handles=[line_cur, line_pred])
plt.xticks(range(0, trials + 1, int(time)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of statements exercised')
plt.subplot(1, 3, 3)
line_emp, = plt.plot(stmt_ts, color='grey', label="Actual progress")
line_cur, = plt.plot(stmt_ts[:time], label="Ongoing fuzzing campaign")
line_pred, = plt.plot(prediction_ts, linestyle='--',
color='black', label="Predicted progress")
plt.legend(handles=[line_emp, line_cur, line_pred])
plt.xticks(range(0, trials + 1, int(time)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of statements exercised');
```
| e1b31cc851e71afe47d42243fabb7a32044ffb0a | 320,450 | ipynb | Jupyter Notebook | docs/beta/notebooks/WhenToStopFuzzing.ipynb | hardik05/fuzzingbook | f1b91dbefb7f4376a4e226bd0b963a5d80daede0 | [
"MIT"
]
| 1 | 2021-11-02T18:40:46.000Z | 2021-11-02T18:40:46.000Z | docs/beta/notebooks/WhenToStopFuzzing.ipynb | heruix/fuzzingbook | f1b91dbefb7f4376a4e226bd0b963a5d80daede0 | [
"MIT"
]
| null | null | null | docs/beta/notebooks/WhenToStopFuzzing.ipynb | heruix/fuzzingbook | f1b91dbefb7f4376a4e226bd0b963a5d80daede0 | [
"MIT"
]
| 2 | 2019-12-28T16:53:57.000Z | 2021-11-02T18:40:51.000Z | 87.244759 | 42,696 | 0.839769 | true | 13,584 | Qwen/Qwen-72B | 1. YES
2. YES | 0.766294 | 0.831143 | 0.6369 | __label__eng_Latn | 0.991485 | 0.318062 |
Assignment 1: Iterative Velocity Analysis
Assignment 3: Computation of Jacobian and workspace
DH Parameters
## Assignment of $i$ frame and $\bar{i}$ frame for each of the links is as depicted in the figure above. The DH parameters are also shown in the same figure.
```python
# Created by Dr. Sangamesh Deepak R to teach Robotics online during covid 19 outbreak
import sympy as sy
import numpy as np
sy.init_printing()
```
```python
# Link parameters
a0 = 0
a1 = 0
a2 = sy.Symbol(r'L_3')
alpha0 = 0
alpha1 = -sy.pi/2
alpha2 = 0
```
```python
# Joint parameters
theta1 = sy.Symbol(r'\theta_1')
theta2 = sy.Symbol(r'\theta_2')
theta3 = sy.Symbol(r'\theta_3')
d1 = sy.Symbol(r'L_1') + sy.Symbol(r'L_2')
d2 = 0
d3 = 0
```
```python
# transformation of the i' frame with respect to the i frame
def link_transform(a_i, alpha_i):
Link_T = sy.Matrix([[1, 0, 0, a_i], [0, sy.cos(alpha_i), -sy.sin(alpha_i), 0], [0, sy.sin(alpha_i), sy.cos(alpha_i), 0], \
[0,0,0,1] ])
return Link_T
```
```python
# transformation of the i frame with respect to the (i-1)' frame
def joint_transform(d_i, theta_i):
Joint_T = sy.Matrix([[sy.cos(theta_i), -sy.sin(theta_i), 0, 0],
[sy.sin(theta_i), sy.cos(theta_i), 0, 0],
[0, 0, 1, d_i],
[0,0,0,1] ])
return Joint_T
```
```python
# Computation of transformation matrices of different link frames with respect to the ground frame
T_0 = sy.Identity(4)
T_0_1 = sy.trigsimp( link_transform(a0, alpha0)*joint_transform(d1, theta1))
T_1_2 = sy.trigsimp( link_transform(a1, alpha1)*joint_transform(d2, theta2) )
T_0_2 = sy.trigsimp( T_0_1* T_1_2);
T_2_3 = sy.trigsimp(link_transform(a2, alpha2)*joint_transform(d3, theta3) )
T_0_3 = sy.trigsimp( T_0_2* T_2_3);
T_3_T = link_transform(sy.Symbol(r'L_4'), sy.pi)
T_0_T = sy.trigsimp( T_0_3* T_3_T)
```
```python
T_0_1, T_0_2, T_0_3, T_0_T # Transformation matrices of the first, second, third and fourth bodies
```
```python
# Extraction of Rotation matrices
R_0_1= T_0_1[0:3,0:3]
R_1_2= T_1_2[0:3,0:3]
R_2_3= T_2_3[0:3,0:3]
R_3_T= T_3_T[0:3,0:3]
r_0_1=T_0_1[0:3,3]
r_1_2=T_1_2[0:3,3]
r_2_3=T_2_3[0:3,3]
r_3_T=T_3_T[0:3,3]
```
```python
def cross_product(a,b):
c=sy.Matrix([
[a[1,0]*b[2,0]-a[2,0]*b[1,0]],
[a[2,0]*b[0,0]-a[0,0]*b[2,0]],
[a[0,0]*b[1,0]-a[1,0]*b[0,0]]
])
return c
```
```python
d_d1=0
d_d2=0
d_d3=0
d_theta1 = sy.Symbol(r'\dot{\theta}_1')
d_theta2 = sy.Symbol(r'\dot{\theta}_2')
d_theta3 = sy.Symbol(r'\dot{\theta}_3')
d_d1, d_d2, d_d3, d_theta1, d_theta2, d_theta3
```
```python
omega_0_0 = sy.Matrix([[0],[0],[0]])
v_0_0 = sy.Matrix([[0],[0],[0]])
```
```python
omega_1_1= R_0_1.T*(omega_0_0)+sy.Matrix([[0],[0],[d_theta1] ])
v_1_1 = R_0_1.T*(v_0_0 + cross_product(omega_0_0,r_0_1))+sy.Matrix([[0],[0],[d_d1] ])
omega_1_1, v_1_1
```
```python
omega_2_2= R_1_2.T*(omega_1_1)+sy.Matrix([[0],[0],[d_theta2] ])
v_2_2 = R_1_2.T*(v_1_1 + cross_product(omega_1_1,r_1_2))+sy.Matrix([[0],[0],[d_d2] ])
omega_2_2, v_2_2
```
```python
omega_3_3= R_2_3.T*(omega_2_2)+sy.Matrix([[0],[0],[d_theta3] ])
v_3_3 = R_2_3.T*(v_2_2 + cross_product(omega_2_2,r_2_3))+sy.Matrix([[0],[0],[d_d3] ])
omega_3_3, v_3_3
```
```python
omega_T_T= R_3_T.T*(omega_3_3)
v_T_T = R_3_T.T*(v_3_3 + cross_product(omega_3_3,r_3_T))
omega_T_T, v_T_T
```
## The required expressions for ${}^{0}\boldsymbol{\omega}_{0}$, ${}^{1}\boldsymbol{\omega}_{1}$, ${}^{2}\boldsymbol{\omega}_{2}$, ${}^{3}\boldsymbol{\omega}_{3}$, ${}^{T}\boldsymbol{\omega}_{T}$, ${}^{0}\boldsymbol{v}_{0}$, ${}^{1}\boldsymbol{v}_{1}$, ${}^{2}\boldsymbol{v}_{2}$, ${}^{3}\boldsymbol{v}_{3}$, ${}^{T}\boldsymbol{v}_{T}$ are as above
```python
R_0_T= T_0_T[0:3,0:3]
v_0_T=sy.trigsimp(R_0_T*v_T_T)
omega_0_T = sy.trigsimp(R_0_T*omega_T_T)
```
```python
mu_0_T = sy.Matrix([v_0_T, omega_0_T])
mu_0_T
```
```python
a1= mu_0_T.subs([(d_theta1, 1), (d_theta2,0), (d_theta3, 0)])
a2= mu_0_T.subs([(d_theta1, 0), (d_theta2,1), (d_theta3, 0)])
a3= mu_0_T.subs([(d_theta1, 0), (d_theta2,0), (d_theta3, 1)])
```
```python
a1
```
```python
J=a1
J=J.col_insert(1,a2)
J=J.col_insert(2,a3)
J
```
## The analytical expression for the Jacobian is as found above
## The workspace of the robot when $\theta_1$ is held constant is the same as that of the 2R robot considered in class as well as in the mid-semester examination. When there are no joint limits, the workspace is an annular circular area. When $\theta_1$ is swept over $2\pi$, this annular area sweeps out an annular sphere (a spherical shell), as illustrated numerically below.
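As a rough numerical illustration of this claim (not part of the assignment), the sketch below samples the two planar joint angles and checks the radial bounds of the tool point; the link lengths $L_3 = 1.0$ and $L_4 = 0.4$ are assumed values chosen only for illustration.
```python
# Monte-Carlo check of the 2R workspace bounds (assumed link lengths).
import numpy as np

L3, L4 = 1.0, 0.4
theta2 = np.random.uniform(-np.pi, np.pi, 10000)
theta3 = np.random.uniform(-np.pi, np.pi, 10000)

# planar 2R forward kinematics: radial distance of the tool point from the joint-2 axis
x = L3 * np.cos(theta2) + L4 * np.cos(theta2 + theta3)
y = L3 * np.sin(theta2) + L4 * np.sin(theta2 + theta3)
r = np.hypot(x, y)

print(r.min(), r.max())  # approaches |L3 - L4| = 0.6 and L3 + L4 = 1.4
```
Sweeping $\theta_1$ over $2\pi$ rotates this annulus about the vertical axis through the point $(0, 0, L_1 + L_2)$, which gives the spherical shell described above.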
```python
J_num_1 = J.subs([(theta1, 0), (theta2, 0), (theta3, 0)]) # Numerical value of Jacobian at a configuration that lies at the boundary of the workspace
J_num_2 = J.subs([(theta1, 0), (theta2, 0), (theta3, sy.pi/2)]) # Numerical value of Jacobian at configuration that lies in interior of the workspace
display([J_num_1, J_num_2])
display([J_num_1.columnspace(), J_num_2.columnspace()])
```
## The rank of the Jacobian at both of the two configurations chosen above is three.
```python
```
| 43d634cfae48d972df2f5ffdde55b84090769023 | 370,159 | ipynb | Jupyter Notebook | Assignment_1_3_solution.ipynb | euler1sangamesh/Velocity_Jacobian_of_serial_robot | 8a439ca2c7e0abd939297910cc5addccc28f424f | [
"MIT"
]
| null | null | null | Assignment_1_3_solution.ipynb | euler1sangamesh/Velocity_Jacobian_of_serial_robot | 8a439ca2c7e0abd939297910cc5addccc28f424f | [
"MIT"
]
| null | null | null | Assignment_1_3_solution.ipynb | euler1sangamesh/Velocity_Jacobian_of_serial_robot | 8a439ca2c7e0abd939297910cc5addccc28f424f | [
"MIT"
]
| null | null | null | 459.824845 | 161,548 | 0.906032 | true | 2,002 | Qwen/Qwen-72B | 1. YES
2. YES | 0.928409 | 0.810479 | 0.752456 | __label__eng_Latn | 0.547607 | 0.586539 |
The Equations of Ideal Magnetohydrodynamics (MHD)
The ideal MHD equations are written in terms of eight variables (in addition to time $t$) that are conserved in a volume:
the density $\rho$, the momentum $\rho \mathbf u$, the magnetic flux density $\mathbf B$, and the energy density $E$.
Expressed in conservation form, these equations are as follows:
Conservation of Mass:
\begin{equation}
\frac{\partial \rho}{\partial t}+\boldsymbol{\nabla}\cdot(\rho \mathbf u)=0
\end{equation}
Conservation of Momentum:
\begin{equation}
\frac{\partial}{\partial t}\left(
\rho \mathbf u
\right)
+\boldsymbol{\nabla}
\cdot
\left[
\rho \mathbf u \otimes \mathbf u
+\left(
p_{B}
\right)
\mathbf{{\overline {\overline I}}}
-\mathbf{B} \otimes \mathbf{B}
\right]
=0
\end{equation}
Conservation of Flux:
\begin{equation}
\frac{\partial \mathbf{B} }{\partial t}+
\boldsymbol{\nabla}
\cdot
(
\mathbf{u} \otimes \mathbf{B}-
\mathbf{B} \otimes \mathbf{u})
= 0
\end{equation}
Conservation of Energy:
\begin{equation}
\frac{\partial E}{\partial t}
+\boldsymbol{\nabla}
\cdot
\left[
\left( E + p_{B}\right) \mathbf{u}
-(\mathbf{u}\cdot\mathbf{B})\mathbf{B}
\right] = 0
\end{equation}
where $E$ is the total energy (kinetic, internal and magnetic):
\begin{equation}
E =
\frac{1}{2}\rho |\mathbf{u}|^2 +
\frac{p}{\gamma-1}+
\frac{1}{2} |\mathbf{B}|^2
\end{equation}
and $p_{B}$ is the sum of thermal and magnetic pressure:
\begin{equation}
p_{B} = p + \frac{1}{2}|\mathbf{B}|^2
\end{equation}
The units are chosen so that $\mathbf B$ absorbs a factor of $1/\sqrt {4 \pi}$.
The adiabatic index is $\gamma = 5/3$ for a monatomic gas throughout the simulations.
$\mathbf{\overline{\overline{I}}}$ is the identity tensor.
There are eight equations and nine variables. In order to close the system, one more equation is required: the equation of state.
The equation of state is chosen to be the ideal gas equation.
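As a small illustrative sketch (not in the original text), the ideal-gas closure can be made explicit with SymPy by solving the energy definition for the thermal pressure $p$; the symbols `u2` and `B2` below stand in for $|\mathbf u|^2$ and $|\mathbf B|^2$.
```python
# Sketch: recover the thermal pressure p from the conserved quantities
# via the closure E = rho*|u|^2/2 + p/(gamma - 1) + |B|^2/2.
import sympy as sym

rho, E, gamma = sym.symbols('rho E gamma', positive=True)
u2, B2 = sym.symbols('u2 B2', positive=True)  # placeholders for |u|^2 and |B|^2
p = sym.Symbol('p', positive=True)

energy_def = sym.Eq(E, rho*u2/2 + p/(gamma - 1) + B2/2)
p_solved = sym.solve(energy_def, p)[0]
print(sym.simplify(p_solved))  # equivalent to (gamma - 1)*(E - rho*u2/2 - B2/2)
```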
```python
```
| 80871b6a724a04b566d628f4eaf4653a987166b6 | 3,287 | ipynb | Jupyter Notebook | MHD Equations.ipynb | mhdproject/mhdvanleer | 63d3a3f94ae2265dd78bc05b23fa3a6cbc950309 | [
"MIT"
]
| 5 | 2018-12-31T17:40:48.000Z | 2021-06-17T22:10:25.000Z | MHD Equations.ipynb | mhdproject/mhdvanleer | 63d3a3f94ae2265dd78bc05b23fa3a6cbc950309 | [
"MIT"
]
| 2 | 2017-07-12T18:45:00.000Z | 2017-07-23T10:05:03.000Z | MHD Equations.ipynb | garethcmurphy/mhdvanleer | 63d3a3f94ae2265dd78bc05b23fa3a6cbc950309 | [
"MIT"
]
| 1 | 2017-11-03T19:47:55.000Z | 2017-11-03T19:47:55.000Z | 29.088496 | 137 | 0.535138 | true | 628 | Qwen/Qwen-72B | 1. YES
2. YES | 0.946597 | 0.805632 | 0.762609 | __label__eng_Latn | 0.937241 | 0.610128 |
```python
%matplotlib inline
```
# Introduction to Optimal Transport with Python
This example gives an introduction on how to use Optimal Transport in Python.
```python
# Author: Remi Flamary, Nicolas Courty, Aurelie Boisbunon
#
# License: MIT License
# sphinx_gallery_thumbnail_number = 1
```
## POT Python Optimal Transport Toolbox
### POT installation
* Install with pip::
pip install pot
* Install with conda::
conda install -c conda-forge pot
### Import the toolbox
```python
import numpy as np # always need it
import pylab as pl # do the plots
import ot # ot
import time
```
### Getting help
Online documentation : `<https://pythonot.github.io/all.html>`_
Or inline help:
```python
help(ot.dist)
```
## First OT Problem
We will solve the Bakery/Cafés problem of transporting croissants from a
number of Bakeries to Cafés in a City (in this case Manhattan). We did a
quick google map search in Manhattan for bakeries and Cafés:
We extracted from this search their positions and generated fictional
production and sale numbers (that both sum to the same value).
We have access to the position of Bakeries ``bakery_pos`` and their
respective production ``bakery_prod`` which describe the source
distribution. The Cafés where the croissants are sold are defined also by
their position ``cafe_pos`` and ``cafe_prod``, and describe the target
distribution. For fun we also provide a
map ``Imap`` that will illustrate the position of these shops in the city.
Now we load the data
```python
data = np.load('../data/manhattan.npz')
bakery_pos = data['bakery_pos']
bakery_prod = data['bakery_prod']
cafe_pos = data['cafe_pos']
cafe_prod = data['cafe_prod']
Imap = data['Imap']
print('Bakery production: {}'.format(bakery_prod))
print('Cafe sale: {}'.format(cafe_prod))
print('Total croissants : {}'.format(cafe_prod.sum()))
```
## Plotting bakeries in the city
Next we plot the position of the bakeries and cafés on the map. The size of
the circle is proportional to their production.
```python
pl.figure(1, (7, 6))
pl.clf()
pl.imshow(Imap, interpolation='bilinear') # plot the map
pl.scatter(bakery_pos[:, 0], bakery_pos[:, 1], s=bakery_prod, c='r', ec='k', label='Bakeries')
pl.scatter(cafe_pos[:, 0], cafe_pos[:, 1], s=cafe_prod, c='b', ec='k', label='Cafés')
pl.legend()
pl.title('Manhattan Bakeries and Cafés')
```
## Cost matrix
We can now compute the cost matrix between the bakeries and the cafés, which
will be the transport cost matrix. This can be done using the
`ot.dist <https://pythonot.github.io/all.html#ot.dist>`_ function that
defaults to squared Euclidean distance but can return other things such as
cityblock (or Manhattan distance).
```python
C = ot.dist(bakery_pos, cafe_pos)
labels = [str(i) for i in range(len(bakery_prod))]
f = pl.figure(2, (14, 7))
pl.clf()
pl.subplot(121)
pl.imshow(Imap, interpolation='bilinear') # plot the map
for i in range(len(cafe_pos)):
pl.text(cafe_pos[i, 0], cafe_pos[i, 1], labels[i], color='b',
fontsize=14, fontweight='bold', ha='center', va='center')
for i in range(len(bakery_pos)):
pl.text(bakery_pos[i, 0], bakery_pos[i, 1], labels[i], color='r',
fontsize=14, fontweight='bold', ha='center', va='center')
pl.title('Manhattan Bakeries and Cafés')
ax = pl.subplot(122)
im = pl.imshow(C, cmap="coolwarm")
pl.title('Cost matrix')
cbar = pl.colorbar(im, ax=ax, shrink=0.5, use_gridspec=True)
cbar.ax.set_ylabel("cost", rotation=-90, va="bottom")
pl.xlabel('Cafés')
pl.ylabel('Bakeries')
pl.tight_layout()
```
The red cells in the matrix image show the bakeries and cafés that are
further away, and thus more costly to transport from one to the other, while
the blue ones show those that are very close to each other, with respect to
the squared Euclidean distance.
## Solving the OT problem with `ot.emd <https://pythonot.github.io/all.html#ot.emd>`_
```python
start = time.time()
ot_emd = ot.emd(bakery_prod, cafe_prod, C)
time_emd = time.time() - start
```
The function returns the transport matrix, which we can then visualize (next section).
### Transportation plan visualization
A good visualization of the OT matrix in the 2D plane is to denote the
transportation of mass between a Bakery and a Café by a line. This can easily
be done with a double ``for`` loop.
In order to make it more interpretable one can also use the ``alpha``
parameter of plot and set it to ``alpha=G[i,j]/G.max()``.
```python
# Plot the matrix and the map
f = pl.figure(3, (14, 7))
pl.clf()
pl.subplot(121)
pl.imshow(Imap, interpolation='bilinear') # plot the map
for i in range(len(bakery_pos)):
for j in range(len(cafe_pos)):
pl.plot([bakery_pos[i, 0], cafe_pos[j, 0]], [bakery_pos[i, 1], cafe_pos[j, 1]],
'-k', lw=3. * ot_emd[i, j] / ot_emd.max())
for i in range(len(cafe_pos)):
pl.text(cafe_pos[i, 0], cafe_pos[i, 1], labels[i], color='b', fontsize=14,
fontweight='bold', ha='center', va='center')
for i in range(len(bakery_pos)):
pl.text(bakery_pos[i, 0], bakery_pos[i, 1], labels[i], color='r', fontsize=14,
fontweight='bold', ha='center', va='center')
pl.title('Manhattan Bakeries and Cafés')
ax = pl.subplot(122)
im = pl.imshow(ot_emd)
for i in range(len(bakery_prod)):
for j in range(len(cafe_prod)):
text = ax.text(j, i, '{0:g}'.format(ot_emd[i, j]),
ha="center", va="center", color="w")
pl.title('Transport matrix')
pl.xlabel('Cafés')
pl.ylabel('Bakeries')
pl.tight_layout()
```
The transport matrix gives the number of croissants that can be transported
from each bakery to each café. We can see that the bakeries only need to
transport croissants to one or two cafés, the transport matrix being very
sparse.
## OT loss and dual variables
The resulting Wasserstein loss is of the form:
\begin{align}W=\sum_{i,j}\gamma_{i,j}C_{i,j}\end{align}
where $\gamma$ is the optimal transport matrix.
```python
W = np.sum(ot_emd * C)
print('Wasserstein loss (EMD) = {0:.2f}'.format(W))
```
## Regularized OT with Sinkhorn
The Sinkhorn algorithm is very simple to code. You can implement it directly
from the following pseudo-code: initialize $u = \mathbf{1}$, then repeat
$v \leftarrow b \oslash (K^\top u)$ and $u \leftarrow a \oslash (K v)$ until convergence,
where $K = e^{-C/\mathrm{reg}}$ and $a$, $b$ are the source and target weights.
In this algorithm, $\oslash$ corresponds to the element-wise division.
An alternative is to use the POT toolbox with
`ot.sinkhorn <https://pythonot.github.io/all.html#ot.sinkhorn>`_
Be careful of numerical problems. A good pre-processing for Sinkhorn is to
divide the cost matrix ``C`` by its maximum value.
### Algorithm
```python
# Compute Sinkhorn transport matrix from algorithm
reg = 0.1
K = np.exp(-C / C.max() / reg)
nit = 100
u = np.ones((len(bakery_prod), ))
for i in range(1, nit):
v = cafe_prod / np.dot(K.T, u)
u = bakery_prod / (np.dot(K, v))
ot_sink_algo = np.atleast_2d(u).T * (K * v.T) # Equivalent to np.dot(np.diag(u), np.dot(K, np.diag(v)))
# Compute Sinkhorn transport matrix with POT
ot_sinkhorn = ot.sinkhorn(bakery_prod, cafe_prod, reg=reg, M=C / C.max())
# Difference between the 2
print('Difference between algo and ot.sinkhorn = {0:.2g}'.format(np.sum(np.power(ot_sink_algo - ot_sinkhorn, 2))))
```
### Plot the matrix and the map
```python
print('Min. of Sinkhorn\'s transport matrix = {0:.2g}'.format(np.min(ot_sinkhorn)))
f = pl.figure(4, (13, 6))
pl.clf()
pl.subplot(121)
pl.imshow(Imap, interpolation='bilinear') # plot the map
for i in range(len(bakery_pos)):
for j in range(len(cafe_pos)):
pl.plot([bakery_pos[i, 0], cafe_pos[j, 0]],
[bakery_pos[i, 1], cafe_pos[j, 1]],
'-k', lw=3. * ot_sinkhorn[i, j] / ot_sinkhorn.max())
for i in range(len(cafe_pos)):
pl.text(cafe_pos[i, 0], cafe_pos[i, 1], labels[i], color='b',
fontsize=14, fontweight='bold', ha='center', va='center')
for i in range(len(bakery_pos)):
pl.text(bakery_pos[i, 0], bakery_pos[i, 1], labels[i], color='r',
fontsize=14, fontweight='bold', ha='center', va='center')
pl.title('Manhattan Bakeries and Cafés')
ax = pl.subplot(122)
im = pl.imshow(ot_sinkhorn)
for i in range(len(bakery_prod)):
for j in range(len(cafe_prod)):
text = ax.text(j, i, np.round(ot_sinkhorn[i, j], 1),
ha="center", va="center", color="w")
pl.title('Transport matrix')
pl.xlabel('Cafés')
pl.ylabel('Bakeries')
pl.tight_layout()
```
We notice right away that the matrix is not sparse at all with Sinkhorn,
each bakery delivering croissants to all 5 cafés with that solution. Also,
this solution gives a transport with fractions, which does not make sense
in the case of croissants. This was not the case with EMD.
### Varying the regularization parameter in Sinkhorn
```python
reg_parameter = np.logspace(-3, 0, 20)
W_sinkhorn_reg = np.zeros((len(reg_parameter), ))
time_sinkhorn_reg = np.zeros((len(reg_parameter), ))
f = pl.figure(5, (14, 5))
pl.clf()
max_ot = 100 # plot matrices with the same colorbar
for k in range(len(reg_parameter)):
start = time.time()
ot_sinkhorn = ot.sinkhorn(bakery_prod, cafe_prod, reg=reg_parameter[k], M=C / C.max())
time_sinkhorn_reg[k] = time.time() - start
if k % 4 == 0 and k > 0: # we only plot a few
ax = pl.subplot(1, 5, k // 4)
im = pl.imshow(ot_sinkhorn, vmin=0, vmax=max_ot)
pl.title('reg={0:.2g}'.format(reg_parameter[k]))
pl.xlabel('Cafés')
pl.ylabel('Bakeries')
# Compute the Wasserstein loss for Sinkhorn, and compare with EMD
W_sinkhorn_reg[k] = np.sum(ot_sinkhorn * C)
pl.tight_layout()
```
This series of graphs shows that the solution of Sinkhorn starts with something
very similar to EMD (although not sparse) for very small values of the
regularization parameter, and tends to a more uniform solution as the
regularization parameter increases.
### Wasserstein loss and computational time
```python
# Plot the matrix and the map
f = pl.figure(6, (4, 4))
pl.clf()
pl.title("Comparison between Sinkhorn and EMD")
pl.plot(reg_parameter, W_sinkhorn_reg, 'o', label="Sinkhorn")
XLim = pl.xlim()
pl.plot(XLim, [W, W], '--k', label="EMD")
pl.legend()
pl.xlabel("reg")
pl.ylabel("Wasserstein loss")
```
In this last graph, we show the impact of the regularization parameter on
the Wasserstein loss. We can see that higher
values of ``reg`` lead to a much higher Wasserstein loss.
The Wasserstein loss of EMD is displayed for
comparison. The Wasserstein loss of Sinkhorn can be a little lower than that
of EMD for low values of ``reg``, but it quickly gets much higher.
| b353d81f1ce9daa72854d088790c90fe71dee1a9 | 16,052 | ipynb | Jupyter Notebook | master/_downloads/f7942777fc8bc11618d8908da9b54edc/plot_Intro_OT.ipynb | PythonOT/pythonot.github.io | 102512d51c24679b61bec8986806dc9063f81676 | [
"MIT"
]
| 5 | 2020-06-12T10:53:15.000Z | 2021-11-06T13:21:56.000Z | master/_downloads/f7942777fc8bc11618d8908da9b54edc/plot_Intro_OT.ipynb | PythonOT/pythonot.github.io | 102512d51c24679b61bec8986806dc9063f81676 | [
"MIT"
]
| 1 | 2020-08-28T08:15:56.000Z | 2020-08-28T08:15:56.000Z | master/_downloads/f7942777fc8bc11618d8908da9b54edc/plot_Intro_OT.ipynb | PythonOT/pythonot.github.io | 102512d51c24679b61bec8986806dc9063f81676 | [
"MIT"
]
| 1 | 2020-08-28T08:08:09.000Z | 2020-08-28T08:08:09.000Z | 50.319749 | 1,201 | 0.593633 | true | 2,934 | Qwen/Qwen-72B | 1. YES
2. YES | 0.76908 | 0.819893 | 0.630564 | __label__eng_Latn | 0.936165 | 0.303341 |
# Decentralization Planning
## Objective and Prerequisites
Ready for a mathematical optimization modeling challenge? Put your skills to the test with this example, where you’ll learn how to model and solve a decentralization planning problem. You’ll have to figure out – given a set of departments of a company, and potential cities where these departments can be located – the “best” location for each department in order to maximize gross margins.
This model is example 10 from the fifth edition of Model Building in Mathematical Programming by H. Paul Williams on pages 265 and 317-319.
This modeling example is at the advanced level, where we assume that you know Python and the Gurobi Python API and that you have advanced knowledge of building mathematical optimization models. Typically, the objective function and/or constraints of these examples are complex or require advanced features of the Gurobi Python API.
**Download the Repository** <br />
You can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip).
## Problem Description
A large company wants to move some of its departments out of London. Doing so will result in reduced costs in some areas
(such as cheaper housing, government incentives, easier recruitment, etc.), and increased costs in other areas (such as communication between departments). The cost implications for all possible locations of each department have been calculated.
The goal is to determine where to locate each department in order to maximize the total difference between the reduced costs from relocating and the increased communication costs between departments.
The company comprises five departments (A, B, C, D and E). The possible cities for relocation are Bristol and Brighton, or a department may be kept in London. None of these cities (including London) may be the location for more than three of the departments.
## Model Formulation
### Sets and Indices
$d,d2 \in \text{Departments}=\{A,B,C,D,E\}$
$c,c2 \in \text{Cities}=\{\text{Bristol}, \text{Brighton}, \text{London}\}$
### Parameters
$\text{benefit}_{d,c} \in \mathbb{R}^+$: Benefit -in thousands of dollars per year, derived from relocating department $d$ to city $c$.
$\text{communicationCost}_{d,c,d2,c2} \in \mathbb{R}^+$: Communication cost -in thousands of dollars per year, derived from relocating department $d$ to city $c$ and relocating department $d2$ to city $c2$.
We define the set $dcd2c2 = \{(d,c,d2,c2) \in \text{Departments} \times \text{Cities} \times \text{Departments} \times \text{Cities}: \text{communicationCost}_{d,c,d2,c2} > 0 \}$
### Decision Variables
$\text{locate}_{d,c} \in \{0,1 \}$: This binary variable is equal 1, if department $d$ is located at city $c$, and 0 otherwise.
$y_{d,c,d2,c2} = \text{locate}_{d,c}*\text{locate}_{d2,c2} \in \{0,1 \}$: This auxiliary binary variable is equal 1, if department $d$ is located at city $c$ and department $d2$ is located at city $c2$, and 0 otherwise.
### Constraints
**Department location**: Each department must be located in only one city.
\begin{equation}
\sum_{c \in \text{Cities}} \text{locate}_{d,c} = 1 \quad \forall d \in \text{Departments}
\end{equation}
**Departments limit**: No city may be the location for more than three departments.
\begin{equation}
\sum_{d \in \text{Departments}} \text{locate}_{d,c} \leq 3 \quad \forall c \in \text{Cities}
\end{equation}
**Logical Constraints**:
- If $y_{d,c,d2,c2} = 1$ then $\text{locate}_{d,c} = 1$ and $\text{locate}_{d2,c2} = 1$.
\begin{equation}
y_{d,c,d2,c2} \leq \text{locate}_{d,c} \quad \forall (d,c,d2,c2) \in dcd2c2
\end{equation}
\begin{equation}
y_{d,c,d2,c2} \leq \text{locate}_{d2,c2} \quad \forall (d,c,d2,c2) \in dcd2c2
\end{equation}
- If $\text{locate}_{d,c} = 1$ and $\text{locate}_{d2,c2} = 1 $ then $y_{d,c,d2,c2} = 1$.
\begin{equation}
\text{locate}_{d,c} + \text{locate}_{d2,c2} - y_{d,c,d2,c2} \leq 1 \quad \forall (d,c,d2,c2) \in dcd2c2
\end{equation}
### Objective Function
**Gross margin**: Maximize the gross margin of relocation.
\begin{equation}
\text{Maximize} \quad Z = \sum_{d \in \text{Departments}} \sum_{c \in \text{Cities}} \text{benefit}_{d,c}*\text{locate}_{d,c} -
\sum_{d,c,d2,c2 \in dcd2c2} \text{communicationCost}_{d,c,d2,c2}*y_{d,c,d2,c2}
\end{equation}
This linear integer programming formulation of the decentralization problem is in fact a linearization of a quadratic assignment formulation of this problem. With Gurobi 9.0, you can directly solve the quadratic assignment formulation of the decentralization problem without the auxiliary variables and the logical constraints.
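For completeness, here is a minimal sketch (not part of the original example) of how the linearized formulation with the auxiliary `y` variables could be written in `gurobipy`. It is not executed in this notebook; it assumes the `gurobipy` import and the data dictionaries `d2c`, `dcd2c2`, `benefit`, `communicationCost`, `Deparments` and `Cities` that are created in the Input data section below.
```python
# Sketch of the linearized model; assumes the data defined later in this notebook.
mlin = gp.Model('decentralization_linearized')
locate_lin = mlin.addVars(d2c, vtype=GRB.BINARY, name="locate")
y = mlin.addVars(dcd2c2, vtype=GRB.BINARY, name="y")

mlin.addConstrs((locate_lin.sum(d, '*') == 1 for d in Deparments), name='department_location')
mlin.addConstrs((locate_lin.sum('*', c) <= 3 for c in Cities), name='departments_limit')

# Logical constraints enforcing y[d,c,d2,c2] = locate[d,c] * locate[d2,c2]
mlin.addConstrs((y[k] <= locate_lin[k[0], k[1]] for k in dcd2c2), name='y_le_first')
mlin.addConstrs((y[k] <= locate_lin[k[2], k[3]] for k in dcd2c2), name='y_le_second')
mlin.addConstrs((locate_lin[k[0], k[1]] + locate_lin[k[2], k[3]] - y[k] <= 1 for k in dcd2c2),
                name='y_ge_product')

mlin.setObjective(gp.quicksum(benefit[d, c]*locate_lin[d, c] for d, c in d2c)
                  - gp.quicksum(communicationCost[k]*y[k] for k in dcd2c2), GRB.MAXIMIZE)
mlin.optimize()
```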
### Objective Function
**Gross margin**: Maximize the gross margin of relocation.
\begin{equation}
\text{Maximize} \quad Z = \sum_{d \in \text{Departments}} \sum_{c \in \text{Cities}} \text{benefit}_{d,c}*\text{locate}_{d,c} -
\sum_{d,c,d2,c2 \in dcd2c2} \text{communicationCost}_{d,c,d2,c2}*\text{locate}_{d,c}*\text{locate}_{d2,c2}
\end{equation}
### Constraints
**Department location**: Each department must be located in only one city.
\begin{equation}
\sum_{c \in \text{Cities}} \text{locate}_{d,c} = 1 \quad \forall d \in \text{Departments}
\end{equation}
**Departments limit**: No city may be the location for more than three departments.
\begin{equation}
\sum_{d \in \text{Departments}} \text{locate}_{d,c} \leq 3 \quad \forall c \in \text{Cities}
\end{equation}
## Python Implementation
We import the Gurobi Python Module and other Python libraries.
```python
%pip install gurobipy
```
```python
import pandas as pd
import gurobipy as gp
from gurobipy import GRB
# tested with Python 3.7.0 & Gurobi 9.0
```
## Input data
We define all the input data for the model.
```python
# Lists of departments and cities
Deparments = ['A','B','C','D','E']
Cities = ['Bristol', 'Brighton', 'London']
# Create a dictionary to capture benefits -in thousands of dollars from relocation.
d2c, benefit = gp.multidict({
('A', 'Bristol'): 10,
('A', 'Brighton'): 10,
('A', 'London'): 0,
('B', 'Bristol'): 15,
('B', 'Brighton'): 20,
('B', 'London'): 0,
('C', 'Bristol'): 10,
('C', 'Brighton'): 15,
('C', 'London'): 0,
('D', 'Bristol'): 20,
('D', 'Brighton'): 15,
('D', 'London'): 0,
('E', 'Bristol'): 5,
('E', 'Brighton'): 15,
('E', 'London'): 0
})
# Create a dictionary to capture the communication costs -in thousands of dollars from relocation.
dcd2c2, communicationCost = gp.multidict({
('A','London','C','Bristol'): 13,
('A','London','C','Brighton'): 9,
('A','London','C','London'): 10,
('A','London','D','Bristol'): 19.5,
('A','London','D','Brighton'): 13.5,
('A','London','D','London'): 15,
('B','London','C','Bristol'): 18.2,
('B','London','C','Brighton'): 12.6,
('B','London','C','London'): 14,
('B','London','D','Bristol'): 15.6,
('B','London','D','Brighton'): 10.8,
('B','London','D','London'): 12,
('C','London','E','Bristol'): 26,
('C','London','E','Brighton'): 18,
('C','London','E','London'): 20,
('D','London','E','Bristol'): 9.1,
('D','London','E','Brighton'): 6.3,
('D','London','E','London'): 7,
('A','Bristol','C','Bristol'): 5,
('A','Bristol','C','Brighton'): 14,
('A','Bristol','C','London'): 13,
('A','Bristol','D','Bristol'): 7.5,
('A','Bristol','D','Brighton'): 21,
('A','Bristol','D','London'): 19.5,
('B','Bristol','C','Bristol'): 7,
('B','Bristol','C','Brighton'): 19.6,
('B','Bristol','C','London'): 18.2,
('B','Bristol','D','Bristol'): 6,
('B','Bristol','D','Brighton'): 16.8,
('B','Bristol','D','London'): 15.6,
('C','Bristol','E','Bristol'): 10,
('C','Bristol','E','Brighton'): 28,
('C','Bristol','E','London'): 26,
('D','Bristol','E','Bristol'): 3.5,
('D','Bristol','E','Brighton'): 9.8,
('D','Bristol','E','London'): 9.1,
('A','Brighton','C','Bristol'): 14,
('A','Brighton','C','Brighton'): 5,
('A','Brighton','C','London'): 9,
('A','Brighton','D','Bristol'): 21,
('A','Brighton','D','Brighton'): 7.5,
('A','Brighton','D','London'): 13.5,
('B','Brighton','C','Bristol'): 19.6,
('B','Brighton','C','Brighton'): 7,
('B','Brighton','C','London'): 12.6,
('B','Brighton','D','Bristol'): 16.8,
('B','Brighton','D','Brighton'): 6,
('B','Brighton','D','London'): 10.8,
('C','Brighton','E','Bristol'): 28,
('C','Brighton','E','Brighton'): 10,
('C','Brighton','E','London'): 18,
('D','Brighton','E','Bristol'): 9.8,
('D','Brighton','E','Brighton'): 3.5,
('D','Brighton','E','London'): 6.3
})
```
## Model Deployment
We create a model and the variables. These binary decision variables define the city at which each department will be located.
Solving quadratic assignment problems with Gurobi is as easy as configuring the global parameter `nonConvex`, and setting this parameter to the value of 2.
```python
model = gp.Model('decentralization')
# Set global parameters
model.params.nonConvex = 2
# locate deparment d at city c
locate = model.addVars(d2c, vtype=GRB.BINARY, name="locate")
```
Using license file c:\gurobi\gurobi.lic
Changed value of parameter nonConvex to 2
Prev: -1 Min: -1 Max: 2 Default: -1
Each department must be located in exactly one city.
```python
# Department location constraint
department_location = model.addConstrs((gp.quicksum(locate[d,c] for c in Cities) == 1 for d in Deparments),
name='department_location')
```
No city may be the location for more than three departments.
```python
# Limit on number of departments
departments_limit = model.addConstrs((gp.quicksum(locate[d,c] for d in Deparments) <= 3 for c in Cities),
name='departments_limit')
```
We now set the optimization objective, which is to maximize gross margins.
```python
model.setObjective((gp.quicksum(benefit[d,c]*locate[d,c] for d,c in d2c)
- gp.quicksum(communicationCost[d,c,d2,c2]*locate[d,c]*locate[d2,c2] for d,c,d2,c2 in dcd2c2) ),
GRB.MAXIMIZE)
```
```python
# Verify model formulation
model.write('decentralizationQA.lp')
# Run optimization engine
model.optimize()
```
Gurobi Optimizer version 9.1.0 build v9.1.0rc0 (win64)
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 8 rows, 15 columns and 30 nonzeros
Model fingerprint: 0x2ad3c449
Model has 54 quadratic objective terms
Variable types: 0 continuous, 15 integer (15 binary)
Coefficient statistics:
Matrix range [1e+00, 1e+00]
Objective range [5e+00, 2e+01]
QObjective range [7e+00, 6e+01]
Bounds range [1e+00, 1e+00]
RHS range [1e+00, 3e+00]
Found heuristic solution: objective -73.9000000
Presolve time: 0.00s
Presolved: 62 rows, 69 columns, 192 nonzeros
Variable types: 0 continuous, 69 integer (69 binary)
Root relaxation: objective -6.750000e+01, 14 iterations, 0.00 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 67.50000 0 10 -73.90000 67.50000 191% - 0s
H 0 0 -33.9000000 67.50000 299% - 0s
H 0 0 -16.3000000 67.50000 514% - 0s
H 0 0 14.9000000 67.50000 353% - 0s
0 0 30.00000 0 22 14.90000 30.00000 101% - 0s
Cutting planes:
Gomory: 3
MIR: 16
Zero half: 4
Mod-K: 2
RLT: 25
Explored 1 nodes (43 simplex iterations) in 0.04 seconds
Thread count was 8 (of 8 available processors)
Solution count 4: 14.9 -16.3 -33.9 -73.9
Optimal solution found (tolerance 1.00e-04)
Best objective 1.490000000000e+01, best bound 1.490000000000e+01, gap 0.0000%
## Analysis
The optimal relocation plan and the associated financial report follow.
```python
relocation_plan = pd.DataFrame(columns=["Department", "City"])
count = 0
for c in Cities:
for d in Deparments:
if(locate[d,c].x > 0.5):
count += 1
relocation_plan = relocation_plan.append({"Department": d, "City": c }, ignore_index=True )
relocation_plan.index=['']*count
relocation_plan
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Department</th>
<th>City</th>
</tr>
</thead>
<tbody>
<tr>
<th></th>
<td>A</td>
<td>Bristol</td>
</tr>
<tr>
<th></th>
<td>D</td>
<td>Bristol</td>
</tr>
<tr>
<th></th>
<td>B</td>
<td>Brighton</td>
</tr>
<tr>
<th></th>
<td>C</td>
<td>Brighton</td>
</tr>
<tr>
<th></th>
<td>E</td>
<td>Brighton</td>
</tr>
</tbody>
</table>
</div>
```python
print("\n\n_________________________________________________________________________________")
print(f"Financial report")
print("_________________________________________________________________________________")
total_benefit = 0
for c in Cities:
for d in Deparments:
if(locate[d,c].x > 0.5):
total_benefit += 1000*benefit[d,c]
dollars_benefit = '${:,.2f}'.format(total_benefit)
print(f"The yearly total benefit is {dollars_benefit} dollars")
total_communication_cost = 0
for d,c,d2,c2 in dcd2c2:
if(locate[d,c].x*locate[d2,c2].x > 0.5):
total_communication_cost += 1000*communicationCost[d,c,d2,c2]
dollars_communication_cost = '${:,.2f}'.format(total_communication_cost)
print(f"The yearly total communication cost is {dollars_communication_cost} dollars")
total_gross_margin = total_benefit - total_communication_cost
dollars_gross_margin = '${:,.2f}'.format(total_gross_margin)
print(f"The yearly total gross margin is {dollars_gross_margin} dollars")
```
_________________________________________________________________________________
Financial report
_________________________________________________________________________________
The yearly total benefit is $80,000.00 dollars
The yearly total communication cost is $65,100.00 dollars
The yearly total gross margin is $14,900.00 dollars
## References
H. Paul Williams, Model Building in Mathematical Programming, fifth edition.
Copyright © 2020 Gurobi Optimization, LLC
| b891e4bc9a482e5970725cb47dae298669513586 | 21,404 | ipynb | Jupyter Notebook | decentralization_planning/decentralization_planning_gcl.ipynb | anupamsharmaberkeley/Gurobi_Optimization | 701200b5bfd9bf46036675f5b157b3d8e3728ff9 | [
"Apache-2.0"
]
| 153 | 2019-07-11T15:08:37.000Z | 2022-03-25T10:12:54.000Z | decentralization_planning/decentralization_planning_gcl.ipynb | anupamsharmaberkeley/Gurobi_Optimization | 701200b5bfd9bf46036675f5b157b3d8e3728ff9 | [
"Apache-2.0"
]
| 7 | 2020-10-29T12:34:13.000Z | 2022-02-28T14:16:43.000Z | decentralization_planning/decentralization_planning_gcl.ipynb | anupamsharmaberkeley/Gurobi_Optimization | 701200b5bfd9bf46036675f5b157b3d8e3728ff9 | [
"Apache-2.0"
]
| 91 | 2019-11-11T17:04:54.000Z | 2022-03-30T21:34:20.000Z | 36.094435 | 399 | 0.524995 | true | 4,519 | Qwen/Qwen-72B | 1. YES
2. YES | 0.936285 | 0.83762 | 0.784251 | __label__eng_Latn | 0.818567 | 0.66041 |
```julia
using MDBM
using PyPlot;
pygui(true);
```
# 1. Implcit equation with constraint
\begin{align}
x,y & \in \left[ -5,5 \right] \\
x^2+y^2-2^2 & =0 \\
x-y & >0
\end{align}
```julia
function foo(x,y)
x^2.0+y^2.0-4.0^2.0
end
function c(x,y)
x-y
end
ax1=Axis([-5,-2.5,0,2.5,5],"x")
ax2=Axis(-5:2:5.0,"b")
mymdbm=MDBM_Problem(foo,[ax1,ax2],constraint=c)
iteration=6 #number of refinements (resolution doubling)
solve!(mymdbm,iteration)
#evaluated points
x_eval,y_eval=getevaluatedpoints(mymdbm)
#solution points
x_sol,y_sol=getinterpolatedsolution(mymdbm)
fig = figure(1);clf()
scatter(x_eval,y_eval,s=2)
scatter(x_sol,y_sol,s=4);
# plot the constraint
mymdbm_c=MDBM_Problem(c,[ax1,ax2])
solve!(mymdbm_c,iteration)
x_sol,y_sol=getinterpolatedsolution(mymdbm_c)
scatter(x_sol,y_sol,s=4);
```
```julia
fig = figure(101);clf()
myDT1=connect(mymdbm);
for i in 1:length(myDT1)
dt=myDT1[i]
P1=getinterpolatedsolution(mymdbm.ncubes[dt[1]],mymdbm)
P2=getinterpolatedsolution(mymdbm.ncubes[dt[2]],mymdbm)
plot([P1[1],P2[1]],[P1[2],P2[2]], color="k")
end
```
# 2. System of implicit equations
## parameter space
\begin{equation}
x,y,z \in \left[ -2,2 \right]
\end{equation}
## Sphere 1
Single implicit eqution
\begin{equation}
x^2+y^2+z^2-1 =0
\end{equation}
## Sphere 2
Single implicit eqution
\begin{equation}
(x-0.5)^2+(y-0.5)^2+(z-0.5)^2-1 =0
\end{equation}
## Intersection of two spheres
System of implicit equtions
\begin{align}
x^2+y^2+z^2-1 & =0 \\
(x-0.5)^2+(y-0.5)^2+(z-0.5)^2-1 & =0
\end{align}
```julia
using LinearAlgebra
axes=[-2:2,-2:2,-2:2]
fig = figure(2);clf()
#Sphere1
fS1(x...)=norm([x...],2.0)-1
Sphere1mdbm=MDBM_Problem(fS1,axes)
solve!(Sphere1mdbm,4)
a_sol,b_sol,c_sol=getinterpolatedsolution(Sphere1mdbm)
plot3D(a_sol,b_sol,c_sol,linestyle="", marker=".", markersize=1);
#Sphere2
fS2(x...)=norm([x...] .- 0.5,2.0) -1.0
Sphere2mdbm=MDBM_Problem(fS2,axes)
solve!(Sphere2mdbm,4)
a_sol,b_sol,c_sol=getinterpolatedsolution(Sphere2mdbm)
plot3D(a_sol,b_sol,c_sol,linestyle="", marker=".", markersize=1);
#Intersection
# fS12(x...)=[fS1(x...),fS2(x...)]
function fS12(x,y,z)
fS1(x,y,z),fS2(x,y,z)
end
Intersectmdbm=MDBM_Problem(fS12,axes)
solve!(Intersectmdbm,6)
a_sol,b_sol,c_sol=getinterpolatedsolution(Intersectmdbm)
plot3D(a_sol,b_sol,c_sol,color="k",linestyle="", marker=".", markersize=2);
```
# 3. Non-smooth problem
## Mandelbrot set
Convergence test of series $z_{i+1}=z_i^2+c$ for $z_0=0$, $c \in \mathbb{C}$
In the evaluation $c=x + i y$
If $\| z_i \| >2$, then the series will diverge.
```julia
function mandelbrot(x,y)
c=x+y*1im
z=Complex(0)
k=0
maxiteration=1000
while (k<maxiteration && abs(z)<4.0)
z=z^2+c
k=k+1
end
abs(z)-2.0
end
Mandelbrotmdbm=MDBM_Problem(mandelbrot,[-5:2,-2:2])
solve!(Mandelbrotmdbm,8)
real_c_sol,imag_c_sol=getinterpolatedsolution(Mandelbrotmdbm)
fig = figure(3);clf()
plot(real_c_sol,imag_c_sol,linestyle="", marker=".", markersize=1);
```
# 4. Continuation behaviour
## 4.1 Problem:
\begin{equation}
x,y \in \left[ -2,2 \right]
\end{equation}
\begin{equation}
\| \mathbf{x} +1.5\|_{1.7}+\sin(5x_1)-2 =0
\end{equation}
```julia
using LinearAlgebra
mymdbm=MDBM_Problem((x...)->norm([x...].+ 1.5,1.7)+sin(x[1]*5)-2.0,[-2:2,-2:2])
interpolate!(mymdbm,interpolationorder=1)
for k=1:5
refine!(mymdbm)
interpolate!(mymdbm,interpolationorder=1)
end
checkneighbour!(mymdbm)
interpolate!(mymdbm,interpolationorder=1)
#evaluated points
a_eval,b_eval=getevaluatedpoints(mymdbm)
a_sol,b_sol=getinterpolatedsolution(mymdbm)
fig = figure(4)
plot(a_eval,b_eval,linestyle="", marker=".",markersize=1)
plot(a_sol,b_sol,linestyle="", marker=".",markersize=1);
```
## 4.2 Exploring the missing component
Due to the coarse initial mesh, some part of the solution is missing.
A continuation-like exploration of the missing component can be performed by checking the neighbouring n-cubes.
It is also clear that the range of the initial mesh does not cover the object. The actual grid can also be prepended and appended (extended).
Note that the extended grid is used only for the continuation of the detected segments! The newly initialized grid points are not evaluated.<br>
In this example there is a closed solution curve around $x_1=-4.1$ which is lost!
```julia
# # extension with the same resolution:
# axesextend!(mymdbm,1,prepend=mymdbm.axes[1].ticks[1:end-1] .+(mymdbm.axes[1].ticks[1]- mymdbm.axes[1].ticks[end]));
# axesextend!(mymdbm,2,prepend=mymdbm.axes[2].ticks[1:end-1] .+(mymdbm.axes[2].ticks[1]- mymdbm.axes[2].ticks[end]));
#extension with different reselution
axesextend!(mymdbm,1,prepend=-6.2:0.1:-2.2);
axesextend!(mymdbm,2,prepend=-6.2:0.05:-2.2,append=2.2:0.2:3);
checkneighbour!(mymdbm)
interpolate!(mymdbm,interpolationorder=1)
#evaluated points
a_eval,b_eval=getevaluatedpoints(mymdbm)
#solution points
a_sol,b_sol=getinterpolatedsolution(mymdbm)
# scatter(a_eval,b_eval,markersize=1)
# scatter!(a_sol,b_sol,size = (800, 800),markersize=2,
# xticks = mymdbm.axes[1].ticks , yticks = mymdbm.axes[2].ticks, gridalpha=0.8)
fig = figure(4)
clf()
plot(a_eval,b_eval,linestyle="", marker=".",markersize=1)
plot(a_sol,b_sol,linestyle="", marker=".",markersize=1);
```
## 4.3 Further refinement
The prepended and appended grid has a poor resolution, thus further refinement can be used to increase the resolution.
```julia
for k=1:2
refine!(mymdbm)
interpolate!(mymdbm,interpolationorder=1)
end
checkneighbour!(mymdbm)
interpolate!(mymdbm,interpolationorder=1)
#evaluated points
a_eval,b_eval=getevaluatedpoints(mymdbm)
#solution points
a_sol,b_sol=getinterpolatedsolution(mymdbm)
# scatter(a_eval,b_eval,markersize=1)
# scatter!(a_sol,b_sol,size = (800, 800),markersize=2,
# xticks = mymdbm.axes[1].ticks , yticks = mymdbm.axes[2].ticks, gridalpha=0.8)
fig = figure(4)
clf()
plot(a_eval,b_eval,linestyle="", marker=".",markersize=1)
plot(a_sol,b_sol,linestyle="", marker=".",markersize=1);
```
# 5. Constraint only
If only a constraint is provided, then a dense point cloud will be generated, which could be used for later computations.
## Problem:
\begin{equation}
x,y \in \left[ -5,2 \right]
\end{equation}
\begin{equation}
\| \mathbf{x} +1.5\|_{1.7}+\sin(5x_1)-2 <0
\end{equation}
Note that in this case the functions will be evaluated at all the grid points of the interior.
```julia
using LinearAlgebra
ax1=Axis(-5:3.0,"a")
ax2=Axis(-5:3.0,"b")
mymdbm=MDBM_Problem((x...)->[],[ax1,ax2],constraint=(x...) -> -(norm([x...].+ 1.5,1.7)+sin(x[1]*5)-2.0))
# mymdbm=MDBM_Problem((x...) -> -maximum([0.0,-(norm([x...].+ 1.5,1.7)+sin(x[1]*5)-2.0)]),[ax1,ax2])
interpolate!(mymdbm,interpolationorder=1)
for k=1:4
refine!(mymdbm)
interpolate!(mymdbm,interpolationorder=1)
end
checkneighbour!(mymdbm)
interpolate!(mymdbm,interpolationorder=1)
#solution points
a_sol,b_sol=getinterpolatedsolution(mymdbm)
F_sol=map((x,y)->x*x-y*y,a_sol,b_sol)
# scatter(a_sol,b_sol,F_sol,size = (500, 500))
fig = figure(5)
plot3D(a_sol,b_sol,F_sol,linestyle="", marker=".",markersize=4);
```
# 6. Connection of the point cloud
The generated point cloud can be connected by generating the edges between the neighbouring n-cubes.
## Problem:
\begin{align}
x,y,z & \in \left[ -5,5 \right] \\
x^2+y^2+z^2-4 & =0 \\
z+y-\cos(3x) & >0
\end{align}
```julia
ax1=Axis(-3.0:3.0,"x")
ax2=Axis(-3.0:3.0,"y")
ax3=Axis(-3.0:3.0,"z")
function foo(x,y,z)
[x^2+y^2+z^2-4]
end
function cont(x,y,z)
[z+y-cos(3*x)-1]
end
mymdbm=MDBM_Problem(foo,[ax1,ax2,ax3],constraint=cont)
solve!(mymdbm,2)
#solution points
x_sol,y_sol,z_sol=getinterpolatedsolution(mymdbm);
fig = figure(10),clf()
plot3D(x_sol,y_sol,z_sol,linestyle="", marker=".",markersize=4);
#connecting the neighbouring n-cubes
myDT1=connect(mymdbm);
#plot solution lines one-by-one
fig = figure(11);clf()
for i in 1:length(myDT1)
dt=myDT1[i]
P1=getinterpolatedsolution(mymdbm.ncubes[dt[1]],mymdbm)
P2=getinterpolatedsolution(mymdbm.ncubes[dt[2]],mymdbm)
plot3D([P1[1],P2[1]],[P1[2],P2[2]],[P1[3],P2[3]], color="k")
end
```
# 7. Complex problem: 4 parameter, 2 equation, 1 constraint
## Problem:
\begin{align}
x,y,z \in \left[ -3,3 \right] & \quad r \in \left[ 1,2 \right] \\
x^2+y^2+z^2-r^2 & =0 \\
z+y-\cos(3x) & =0 \\
z-\sin(5y) & >0
\end{align}
```julia
ax1=Axis(-3.0:3.0,"x")
ax2=Axis(-3.0:3.0,"y")
ax3=Axis(-3.0:3.0,"z")
ax4=Axis(1.0:0.5:2.0,"r")
function foo(x,y,z,r)
[x^2+y^2+z^2-r^2,
z+y-cos(3*x)]
end
function cont(x,y,z,r)
z-sin(5*y)
end
mymdbm=MDBM_Problem(foo,[ax1,ax2,ax3,ax4],constraint=cont)
solve!(mymdbm,3)
#solution points
x_sol,y_sol,z_sol,r_sol=getinterpolatedsolution(mymdbm);
fig = figure(20);clf()
plot3D(x_sol,y_sol,z_sol,linestyle="", marker=".",markersize=4);
```
| 6226462123e92349566368fe926f6a706830542e | 14,012 | ipynb | Jupyter Notebook | examples/test.ipynb | arturgower/MDBM.jl | 8c60ff4abffb308d5b1b8394d37ff8daa8640b53 | [
"MIT"
]
| 32 | 2018-11-22T16:08:40.000Z | 2021-12-07T18:29:07.000Z | examples/test.ipynb | arturgower/MDBM.jl | 8c60ff4abffb308d5b1b8394d37ff8daa8640b53 | [
"MIT"
]
| 18 | 2019-02-20T15:48:21.000Z | 2021-12-15T20:19:02.000Z | examples/test.ipynb | arturgower/MDBM.jl | 8c60ff4abffb308d5b1b8394d37ff8daa8640b53 | [
"MIT"
]
| 5 | 2018-08-14T13:56:12.000Z | 2021-12-09T13:43:35.000Z | 28.772074 | 152 | 0.528261 | true | 3,332 | Qwen/Qwen-72B | 1. YES
2. YES | 0.926304 | 0.861538 | 0.798046 | __label__eng_Latn | 0.455207 | 0.692461 |
# Calculate price-equilibrium using simulations
```python
import sys, numpy as np, scipy
from sympy import symbols
from typing import Callable
from log_progress import log_progress
np.random.seed(None)
import matplotlib.pyplot as plt, mpld3
%matplotlib inline
mpld3.enable_notebook() # to zoom and move in plots
resetSize,r,zmin,zmax,beta,D,L,Supply = symbols('a r z_{\min} z_{\max} \\beta \\Delta \\ell \\tau', positive=True,finite=True,real=True)
params = {
L: 10, # total transfers per pair per day.
D: 6, # delta transfers per day (Alice-to-Bob minus Bob-to-Alice) in the asymmetric case.
beta: 0.01, # value / transfer-size
r: 4/100/365, # interest rate per day
resetSize: 1.1, # records per reset tx
Supply: 288000, # records per day
zmin: 0.001, # min transfer size in bitcoins (for power law distribution)
zmax: 1, # max transfer size in bitcoins (for uniform distribution)
}
# NOTE: These are the same params used in the symbolic comnputations (market-equilibrium notebook).
```
```python
if "Simulation" in sys.modules: del sys.modules["Simulation"]
from Simulation import *
sim = PowerlawSymmetricSimulation(params, numOfDays=1000, filenamePrefix="interpolation-tables/powerlaw-symmetric-1000days")
# You can also try the following options:
#sim = PowerlawAsymmetricSimulation(params, numOfDays=1000, filenamePrefix="interpolation-tables/powerlaw-asymmetric-1000days")
#sim = UniformSymmetricSimulation(params, numOfDays=1000, filenamePrefix="interpolation-tables/uniform-symmetric-1000days")
#sim = UniformAsymmetricSimulation(params, numOfDays=1000, filenamePrefix="interpolation-tables/uniform-asymmetric-1000days")
sim.loadTables()
```
Simulation.py version 1.0
```python
supply = params[Supply]
sim.calculateEquilibriumBlockchainFeeTable(
numOfDays=1000,
numsOfUsers=np.linspace(100000,10000000,50),
supply=supply,
numOfSamples=50,
recreateAllSamples=False)
sim.saveTables()
sim.plotEquilibriumBlockchainFeeTable(supply)
```
```python
table=sim.equilibriumBlockchainFeeTables[supply]
xs = table.xValues
ys = table.yValuesAverage
### Log-log regression:
regressionCoeffs = np.polyfit(np.log(xs), np.log(ys), 1)
regressionFunction = lambda x: regressionCoeffs[0]*x**1 + regressionCoeffs[1]#*x + regressionCoeffs[2]
plt.plot(xs, ys, 'r.')
plt.plot(xs, np.exp(regressionFunction(np.log(xs))), 'g')
```
```python
### Lin-lin regression, higher power:
regressionCoeffs = np.polyfit(xs, ys, 2)
regressionString = "{:.2e} n^2 + {:.2e} n + {:.2e}".format(*regressionCoeffs)
print (regressionString)
regressionFunction = lambda x: regressionCoeffs[0]*x**2 + regressionCoeffs[1]*x + regressionCoeffs[2]
plt.plot(xs, ys, 'r.')
plt.plot(xs, regressionFunction(xs), 'g')
```
```python
### Lin-lin regression of 1/x:
regressionCoeffs = np.polyfit(1/xs, ys, 1)
print(regressionCoeffs)
regressionFunction = lambda x: regressionCoeffs[0]*x**1 + regressionCoeffs[1]#*x + regressionCoeffs[2]
plt.plot(xs, ys, 'r.')
plt.plot(xs, regressionFunction(1/xs), 'g')
```
```python
```
| 3df8c5bb6815cd4aec689eecd883e92f74e72487 | 431,448 | ipynb | Jupyter Notebook | old/market-equilibrium-regression-tests.ipynb | erelsgl/bitcoin-simulations | 79bfa0930ab9ad17be59b9cad1ec6e7c3530aa3b | [
"MIT"
]
| 1 | 2018-11-26T02:44:38.000Z | 2018-11-26T02:44:38.000Z | old/market-equilibrium-regression-tests.ipynb | erelsgl/bitcoin-simulations | 79bfa0930ab9ad17be59b9cad1ec6e7c3530aa3b | [
"MIT"
]
| null | null | null | old/market-equilibrium-regression-tests.ipynb | erelsgl/bitcoin-simulations | 79bfa0930ab9ad17be59b9cad1ec6e7c3530aa3b | [
"MIT"
]
| 3 | 2018-09-06T00:11:26.000Z | 2021-08-29T17:14:59.000Z | 1,003.367442 | 94,764 | 0.807411 | true | 873 | Qwen/Qwen-72B | 1. YES
2. YES | 0.847968 | 0.782662 | 0.663673 | __label__eng_Latn | 0.317599 | 0.380265 |
# Fail/Pass tests and the implied failure rate and confidence levels
Let's say we conduct a fail/pass test.
We subject $n_\mathrm{s}$ samples to an accelerated (representative) life test of $m=1$ lifetime equivalents.
The test is considered a success if $100\,\%$ of the $n_\mathrm{s}$ samples survive.
Yet, this leaves the important question of how certain can we be that the population as a whole (from which the $n_\mathrm{s}$ samples are a representative sub-set) will survive the $m=1$ lifetime equivalents?
And, apart from the implied confidence level, what fraction of the population is still expected to fail even if $100\,\%$ of the $n_\mathrm{s}$ samples did survive?
We consider a binomial distribution: the samples either survive (pass), or die (fail).
Each sample has a probability $p$ of dying.
Hence, starting with $n_\mathrm{s}$ samples, the probability of ending up with $k$ dead (failed) samples is
$$
B(k) = \frac{n_\mathrm{s}!}{k!(n_\mathrm{s}-k)!} p^k (1-p)^{(n_\mathrm{s}-k)}.
$$
The case of interest is with $k=0$ dead samples at the end of the test, i.e. all passed.
This leaves us with the special case of
\begin{equation}
B(0) = (1-p)^{n_\mathrm{s}}.
\label{eq:B0}
\end{equation}
With $p$ being the probability of dying, $(1-p)=R$ can be said to be a measure of the reliability (in the sense of the probability to survive the foreseen lifetime) of the devices being tested.
The probability $B(0)$ represents a measure of how often we expect to see this survival rate.
In other words, it is $B(0)=(1-C)$, with $C$ the confidence level.
Hence, we can rewrite Eq. (\ref{eq:B0}) as
\begin{equation}
(1-C) = R^{n_\mathrm{s}}.
\label{eq:CRn}
\end{equation}
And from Eq. (\ref{eq:CRn}) follows how $n_\mathrm{s}$ relates to $R$ and $C$:
\begin{equation}
n_\mathrm{s} = \frac{\ln(1-C)}{\ln(R)}.
\label{eq:n_lnCR}
\end{equation}
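As a quick numerical sanity check (added for illustration), Eq. (\ref{eq:n_lnCR}) can be verified with `scipy`: the probability of seeing zero failures among $n_\mathrm{s}$ samples that each fail with probability $p = 1 - R$ equals $R^{n_\mathrm{s}} = (1-C)$. The values of $R$ and $n_\mathrm{s}$ below match the analytical example further down.
```python
# Sanity check of B(0) = R**n_s using the binomial distribution.
from scipy.stats import binom

R = 0.9        # assumed per-device reliability
n_s = 22       # number of samples on test
p_fail = 1 - R

print(binom.pmf(0, n_s, p_fail))  # probability that all n_s samples pass
print(R**n_s)                     # the same value from B(0) = R**n_s
print(1 - R**n_s)                 # implied confidence level C (about 0.90)
```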
We can reuse already tested samples and expose them to the accelerated life test a second, third, ... , $m$ time.
Up to a certain point this would be equivalent a higher number of samples.
Written differently,
$$
n_\mathrm{s} = m \times n_\mathrm{actual\_samples},
$$
and thus
$$
n_\mathrm{s} = \frac{\ln(1-C)}{m \ln(R)}.
$$
As a side note, it should be clear that a test conducted with only one sample exposed to $m=100$ lifetimes would NOT be statistically as meaningful as a test with $100$ samples exposed to $m=1$ lifetime.
In which contexts $m>1$ is meaningful depends on various circumstances.
The confidence level $C$ is a critical parameter of a fail/pass test.
The purpose of a fail/pass test is to learn something
(that's the purpose of any test, of course, not only fail/pass).
Often this tool comes into play to confirm that a new product can be launched.
However, testing costs money (for the samples) and time (for the actual testing).
Hence, small sample sizes are typically preferred.
Yet, confirmation bias aside, what can we possibly learn from a passed test with few samples?
Let me assume an extreme case example: testing two samples, which are so inherently flawed that they fail half of the time.
Such a test will fail in $C=75\,\%$ of the cases.
Upon failure the development team would likely investigate the failed sample and subsequently improve it.
This could be considered a good outcome.
On the other hand, starting off with an only $R=50\,\%$ reliable product leaves a lot of room for improvement.
And maybe even worse, there remain the $B(0)=25\,\%$ of cases in which the two samples do NOT fail,
$25\,\%$ of cases in which the team would not investigate the failure mechanisms, and
$25\,\%$ of cases in which the product launch would continue according to schedule and the product would be shipped.
It should go without saying that no customer is interested in an $R=50\,\%$ reliable product.
If the product were slightly more reliable (say, a still rather bad $R=70\,\%$),
the two samples would pass in $B(0)=49\,\%$ of the cases.
In other words, the fail/pass test would be only slightly more informative than an independent coin flip!
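A quick check of the two pass rates quoted above (plain Python, just reproducing the numbers):
```python
# Probability that both of two independent samples pass the test, i.e. B(0) = R**2.
for R in [0.5, 0.7]:
    print("R = {}: two samples pass with probability {:.2f}".format(R, R**2))
```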
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
## Analytical examples
```python
C = 0.9
R = 0.9
m = 1
n_s = np.log(1-C)/(m * np.log(R))
print("""
In the case we want to be $C=90\%$ confident that the whole population shows a $R=90\%$ reliability
(i.e. at least $90\%$ of the whole will survive duration $m$),
the minimum number of samples to test is
n_s = {:.2f}
""".format(n_s))
```
In the case we want to be $C=90\%$ confident that the whole population shows a $R=90\%$ reliability
(i.e. at least $90\%$ of the whole will survive duration $m$),
the minimum number of samples to test is
n_s = 21.85
```python
m = 1
n_s = 77
R = 0.95
# (1-C) = B(0) = (1-p)^n = R^n
C = 1 - R**n_s
print("""
Given m={}, n_s={}, and an assumed reliability of R={}%, the confidence level is
C = {:.2f}%.
""".format(m, n_s, R*100, C*100))
```
Given m=1, n_s=77, and an assumed reliability of R=95.0%, the confidence level is
C = 98.07%.
## Numerical examples
Given $n_\mathrm{s}$ samples which each have an (assumed) intrinsic reliability $R$, how often would we run a test without any sample failing?
The complementary rate, i.e. how often at least one sample fails, represents our confidence level that we would have encountered at least one failing sample if the underlying reliability were lower than the assumed $R$.
```python
class Device:
def __init__(self, intrinsic_reliability):
self._R = intrinsic_reliability
def test(self):
# test considered passed (i.e. device survived test) if
# probability of intrinsic reliability is larger than outcome of random event
return int( self._R > np.random.random() )
def conduct_test(test_battery):
# return 1 = pass: all DUTs survived
# return 0 = fail: at least 1 DUT died
survival = 0
for device in test_battery:
survival += device.test()
return int( survival==len(test_battery) )
# parameters to explore
reliabilities = [0.5, 0.8, 0.9, 0.95, 0.98, 0.99, 0.999]
n_samples = np.arange(2,100,5) # note: I want to cover n_s=22 and n_s=77 mentioned in the text above
N = 1000 # simulate N times, with N large to be statistically meaningful, but not too large to get the computer running forever
test_repetitions = np.arange(N)
# init results containers
# a multi-dimensional dictionary
# e.g. frequency_of_catching_subpar_samples[0.9][22] will contain the confidence level
# for a device with an intrinsic reliability R=0.9 when tested with n_s=22 samples.
frequency_of_catching_subpar_samples = {}
for R in reliabilities: frequency_of_catching_subpar_samples[R] = {}
repetition_passed = np.zeros(N)
# simulating the accelerated life tests
for R in reliabilities:
for n_s in n_samples:
test_battery = [Device(R) for _ in range(n_s)]
for rep in test_repetitions:
repetition_passed[rep] = conduct_test(test_battery)
frequency_of_catching_subpar_samples[R][n_s] = 1-np.mean(repetition_passed)
print("Confidence level of detecting fact that population shows reliability of <R given n_s samples is")
def print_summary(R, n_s):
print("R={}, n_s={}".format(R,n_s), "--> C=",frequency_of_catching_subpar_samples[R][n_s])
print_summary(0.9, 22)
print_summary(0.95, 77)
print_summary(0.99, max(n_samples))
```
Confidence level of detecting fact that population shows reliability of <R given n_s samples is
R=0.9, n_s=22 --> C= 0.9
R=0.95, n_s=77 --> C= 0.979
R=0.99, n_s=97 --> C= 0.6619999999999999
```python
plt.figure(figsize=(12,6))
linestyles = [(0,()),
(0,(1,1,5,3)),
(0,(4,1)),
(0,(5,1,1,3)),
(0,(1,1)),
(0,(1,1,1,1,5,5)),
(0,(5,1,1,1,1,5))]
linewidth = 3
for j, p in enumerate(reliabilities):
confidence_of_catching_p_reliable_failures = [frequency_of_catching_subpar_samples[p][n_s] for n_s in n_samples]
plt.plot(n_samples, confidence_of_catching_p_reliable_failures, label="R = {}".format(p),
linestyle=linestyles[j], linewidth=linewidth)
plt.plot(n_samples, 1-p**n_samples, linestyle='-', linewidth=0.5, color='black')
plt.grid()
plt.xlim([0,100])
plt.ylim([0,1])
fontsize = 'x-large'
plt.legend(loc='best', fontsize=fontsize)
plt.title("Numerical simulation vs analytical $C = 1 - R^{n_s}$ (thin black lines)", fontsize=fontsize)
plt.xlabel("Number of samples tested (-)", fontsize=fontsize)
plt.ylabel("Confidence level (-)", fontsize=fontsize)
plt.xticks(np.arange(0,101,10))
plt.yticks(np.linspace(0,1,11))
print("")
```
The numerical simulations are in line with the analytical solution derived above.
All the usual disclaimers regarding statistical explorations apply:
The "tested" samples are assumed to be identical and independent.
The simulated frequencies match the analytical solution only in the limit of infinitely many repetitions.
And so on.
## Mapping out R vs C analytically
In the above sections we have looked at the question:
"how many samples are we supposed to test assuming a target $R$ and $C$?"
In the present section we approach the problem from the other side.
Assuming we have $n_\mathrm{s}$ samples available,
what can we learn from a fail/pass test?
As already mentioned above,
if all $n_\mathrm{s}$ samples pass the test,
we forego the opportunity to actually learn something
(meaning, by opening up a failing sample and learning about the root causes).
The only statement we can make given a $100\,\%$ pass rate is
that the underlying population is $R$ reliable with $C$ confidence,
related to each other through Eq. (\ref{eq:CRn}) and (\ref{eq:n_lnCR}), respectively.
As an example let's assume we have $n_\mathrm{s}=4$ samples available.
What is the underlying reliability $R$ of the population supposed to be
for us to have a reasonable chance of catching a failing sample
among these $n_\mathrm{s}=4$ samples?
One way to go about answering this question is to flip a coin:
tail means the population is impeccable and can be shipped,
head means the population shows some not-further-specified flaws.
This strategy, stated in this extreme form, does not take the fail/pass test into account in any way whatsoever.
However, such an unrelated coin flip is actually no less informative
than a test whose expected frequency of catching a failing part is $C\leq0.5$.
For example, if none of the $n_\mathrm{s}=4$ samples failed,
but, if at the same time we were to expect the underlying population to show a reliability
of at least $R\geq0.84$,
then the fact that all $n_\mathrm{s}=4$ samples have passed
is actually less informative than a completely unrelated coin flip!
More generally, from Eq. (\ref{eq:CRn}) we find
\begin{equation}
R = \sqrt[\leftroot{-2}\uproot{2}n_\mathrm{s}]{1-C}.
\label{eq:nrootC}
\end{equation}
For a test with $n_\mathrm{s}=4$ to be meaningful -- say $C\geq0.9$ --
the underlying population has to show a reliability of at most $R=0.56$.
```python
R = lambda C, n_s: (1 - C)**(1/n_s)
def R_summary(C, n_s):
print('Underlying reliability has to be at most R={:.2f}, given C={} with n_s={}'.format(R(C,n_s),C,n_s))
R_summary(0.5,4)
R_summary(0.9,4)
```
Underlying reliability has to be at most R=0.84, given C=0.5 with n_s=4
Underlying reliability has to be at most R=0.56, given C=0.9 with n_s=4
```python
plt.figure(figsize=(12,6))
linestyles = [(0,()),
(0,(1,1,5,3)),
(0,(4,1)),
(0,(5,1,1,3)),
(0,(1,1)),
(0,(1,1,1,1,5,5)),
(0,(5,1,1,1,1,5))]
linewidth = 3
sample_ns = [4, 10, 22, 77]
assumed_Rs = np.array(list(np.linspace(0.5,0.9,9)) + list(np.linspace(0.91,1,10)))
for j, n in enumerate(sample_ns):
plt.plot(assumed_Rs, 1-assumed_Rs**n, label='{}'.format(n),
linestyle=linestyles[j], linewidth=linewidth)
# caution: C<0.5 means confidence worse than a coin flip
# i.e. don't even think about ending up in this region!
warning_color = 'red'
plt.fill_between([0.5,1], [0.5,0.5], facecolor=warning_color, alpha=0.1)
plt.axhline(y=0.5, color='red', linewidth=0.5)
plt.text(0.575,0.325,'confidence worse than coin flip', fontsize=fontsize, color=warning_color)
# formatting
plt.legend(loc='best')
plt.grid('on')
plt.xlabel("Assumed reliability (-)", fontsize=fontsize)
plt.ylabel("Confidence level (-)", fontsize=fontsize)
plt.xticks(np.linspace(0.5,1,11))
plt.xlim([0.5,1])
plt.yticks(np.linspace(0,1,11))
plt.ylim([0,1])
print("")
```
## Further reading
This notebook was originally inspired by an Accendo Reliability podcast on the topic
<https://accendoreliability.com/podcast/arw/making-use-reliability-statistics/>
Regarding what we can learn from a failed vs passed test (equally applicable for software and hardware),
I can recommend
K. Henney “A Test of Knowledge” <https://medium.com/@kevlinhenney/a-test-of-knowledge-78f4688dc9cb>
| d62839bf413f271840258acbb0423621154e7c43 | 176,422 | ipynb | Jupyter Notebook | Fail-Pass_tests_and_number_of_samples.ipynb | stefantkeller/jupyternotebooks | d81d86f1c0f244e92fd03df74d5fafda953180ae | [
"MIT"
]
| null | null | null | Fail-Pass_tests_and_number_of_samples.ipynb | stefantkeller/jupyternotebooks | d81d86f1c0f244e92fd03df74d5fafda953180ae | [
"MIT"
]
| null | null | null | Fail-Pass_tests_and_number_of_samples.ipynb | stefantkeller/jupyternotebooks | d81d86f1c0f244e92fd03df74d5fafda953180ae | [
"MIT"
]
| null | null | null | 333.500945 | 98,368 | 0.922323 | true | 3,676 | Qwen/Qwen-72B | 1. YES
2. YES | 0.955981 | 0.857768 | 0.82001 | __label__eng_Latn | 0.994873 | 0.743492 |
## Last Class:
* MAP estimates
* Beta distribution: Conjugate prior for a Bernoulli/Binomial likelihood.
# LINEAR REGRESSION
## Topics
* $ MSE = Bias^2 + Variance $
* Gauss Markov Theorem
### Question 1
* Show $ E (b) = \beta $ under the assumption that $ y_i = \alpha + \beta x_i + \epsilon_i$ where $ \epsilon_i \sim N(0, \sigma^2)$. Thus show that least square estimators (a & b) are unbiased.
## Relationship of bias and variance with MSE
Mean Square Error measure the "average" distance of the parameter estimate from its true value.
### Question 2
Prove that:
$$ \begin{align}
\operatorname{MSE_{\theta}}(\hat{\theta}) &= \operatorname{E}_{X|\theta} \left [(\hat{\theta}-\theta)^2 \right ] \\
&= \operatorname{Var}_{\theta}(\hat\theta)+ \operatorname{Bias}_{\theta}(\hat\theta)^2
\end{align} $$
[Hint: Use the fact that $ Var(X) = E[X^2] - E[X]^2 $]
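Before doing the algebra, the decomposition can be checked numerically. The sketch below uses an illustrative setup of my own (the sample mean of $n=5$ normal draws as an estimator of $\theta$); it is not part of the exercise itself:
```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 5, 200000           # true parameter, sample size, Monte Carlo repetitions

# estimator: sample mean of n draws from N(theta, 1)
est = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)

mse = np.mean((est - theta) ** 2)
var = np.var(est)
bias = np.mean(est) - theta
print(mse, var + bias ** 2)               # the two numbers should agree closely
```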
## Gauss Markov Theorem
If:
* The expected average value of residuals is 0. ($ E(\epsilon_i ) = 0 $)
* The spread of residuals is constant and finite for all $ X_i (Var(\epsilon_i ) = \sigma^2 $ )
* There is no relationship amongst the residuals ( $ cov(\epsilon_i , \epsilon_j ) = 0 $)
* There is no relationship between the residuals and the $ X_i (cov(X_i , \epsilon_i ) = 0 $)
Then, least square estimates have lowest variance amongst all linear unbiased estimates.
Note: Our assumption that $ y_i = \alpha + \beta x_i + \epsilon_i$ where $ \epsilon_i \sim N(0, \sigma^2)$ is a special case of the Gauss Markov theorem. (We additionally assume that the $\epsilon_i$ are normally distributed.)
### Proof:
Let the regression line be $ Y = b_{0} + b_{1}X$.
Least square estimates of coefficients are given by:
$$
b_{1} = \frac{\sum_{i}{(x_{i}-\bar{x})(y_{i}-\bar{y})}}{\sum_{i}{(x_{i}-\bar{x})^{2}}} = \sum_{i}{K_{i}Y_{i}}
$$
where,
$$
K_{i} = \frac{(x_{i}-\bar{x})}{\sum_{i}{(x_{i}-\bar{x})^{2}}}
$$
and
$$
Y_{i} = y_{i}-\bar{y}
$$
And the other coefficient is given by,
$$
b_{0} = \bar{y} - b_{1}\bar{x}
$$
Now first calculate variance of $b_{1}$,
\begin{align*}
\sigma^{2}(b_{1})= & \sigma^{2}(\sum_{i}{K_{i}Y_{i}}) \\
= & \sum_{i}{K_{i}^{2}\sigma^{2}(Y_{i})} .... (Why?)\\
= & \frac{\sigma^{2}}{\sum_{i}{(x_{i}-\bar{x})^{2}}}
\end{align*}
Here $\sigma^{2}$ is the variance of each $Y_{i}$.
Now consider another estimator of $\beta_{1}$, denoted $\hat{\beta_{1}}$.
Let,
$$
\hat{\beta_{1}} = \sum_{i}{c_{i}y_{i}}
$$
for some $c_{i}$.
Now consider expected value and variance of this estimator.
\begin{align*}
E(\hat{\beta_{1}}) = & \sum_{i}{c_{i}E(y_{i})} \\
= & \sum_{i}{c_{i}E(\beta_{0} + \beta_{1}x_{i})} \\
= & \beta_{0}\sum_{i}{c_{i}} + \beta_{1}\sum_{i}{c_{i}x_{i}}
\end{align*}
As $\hat{\beta_{1}}$ is an unbiased estimator, $E(\hat{\beta_{1}}) = \beta_{1}$ must hold for arbitrary values of $x_{i}$.
So from the above expression we obtain the following conditions on the $c_{i}$'s:
$\sum_{i}{c_{i}}=0$ and
$\sum_{i}{c_{i}x_{i}}=1$
Variance of the estimator is given by,
\begin{align*}
\sigma^{2}(\hat{\beta_{1}}) = & \sum_{i}{c_{i}^{2}\sigma^{2}(y_{i})} \\
= & \sigma^{2}\sum_{i}{c_{i}^{2}}
\end{align*}
Let $c_{i} = K_{i} + d_{i}$ for some $d_{i}$. Then we can write,
\begin{align*}
\sigma^{2}(\hat{\beta_{1}}) = & \sigma^{2}*(\sum_{i}{( K_{i} + d_{i})^{2}}) \\
= & \sigma^{2}*(\sum_{i}{K_{i}^{2}} + \sum_{i}{d_{i}^{2}} + 2\sum_{i}{K_{i}d_{i}}) \\
= & \sigma^{2}\sum_{i}{K_{i}^{2}} + \sigma^{2}\sum_{i}{d_{i}^{2}} + 2\sigma^{2}\sum_{i}{K_{i}d_{i}} \\
= & \sigma^{2}(b_{1}) + \sigma^{2}\sum_{i}{d_{i}^{2}} + 2\sigma^{2}\sum_{i}{K_{i}d_{i}} .................. (\sigma^{2}\sum_{i}{K_{i}^{2}} = \sigma^{2}(b_{1}))
\end{align*}
Now consider the expression $\sum_{i}{K_{i}d_{i}}$.
\begin{align*}
\sum_{i}{K_{i}d_{i}} = & \sum_{i}{K_{i}(c_{i} - K_{i})} \\
= & \sum_{i}{K_{i}c_{i}} - \sum_{i}{K_{i}^{2}} \\
= & \sum_{i}{c_{i}\frac{(x_{i}-\bar{x})}{\sum_{j}{(x_{j}-\bar{x})^{2}}}} - \frac{1}{\sum_{j}{(x_{j}-\bar{x})^{2}}} \\
= & \frac{\sum_{i}{c_{i}x_{i}} - \bar{x}\sum_{i}{c_{i}} - 1 }{\sum_{i}{(x_{i}-\bar{x})^{2}}}
\end{align*}
We know that $\sum_{i}{c_{i}x_{i}} = 1$ and $\sum_{i}{c_{i}} = 0$ as $\hat{\beta_{1}}$ is an unbiased estimator (derived above). So substituting these values in the above equation,
\begin{align*}
\sum_{i}{K_{i}d_{i}} = & \frac{1 - 0 - 1}{\sum_{i}{(x_{i}-\bar{x})^{2}}} \\
= & 0 ........................................(*)
\end{align*}
Therefore we get,
\begin{align*}
\sigma^{2}(\hat{\beta_{1}}) = & \sigma^{2}(b_{1}) + \sigma^{2}\sum_{i}{d_{i}^{2}} + 2*0 \\
= & \sigma^{2}(b_{1}) + \sigma^{2}\sum_{i}{d_{i}^{2}} \\
\geq & \sigma^{2}(b_{1})
\end{align*}
Thus, the least square estimate is the most **efficient** one amongst linear unbiased estimators.
## Linear Regression: Summary
* Minimizing the mean squared loss function $L$ is the same as minimizing the (conditional) negative log likelihood (i.e. maximizing the likelihood) under the assumption that $ Y|X \sim \alpha + \beta X + \epsilon ; \quad \epsilon \sim N(0, \sigma^2) $
* Thus $$ b = \frac{\sum{x'_i y'_i}}{\sum{x'_i}^2} = \frac{\text{sample covariance between } x \text{ and } y} {\text{sample variance of } x} $$
$$ a = \bar{y} - b \bar{x} $$ correspond to the MLE estimates under the above assumption.
* Both the above estimates are unbiased.
* The Gauss Markov theorem states that amongst unbiased estimates, the above estimates have the least variance and are thus the most efficient ones: **BLUE - Best Linear Unbiased Estimator**. A small simulation checking the unbiasedness claim numerically is sketched below.
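The following sketch (illustrative values $\alpha=1$, $\beta=2$ assumed by me, not part of the lecture) checks empirically that the least squares slope $b$ is centred on the true $\beta$:
```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, sigma = 1.0, 2.0, 1.0        # assumed true parameters (illustrative)
x = np.linspace(0, 10, 50)                # fixed design points

b_estimates = []
for _ in range(5000):
    y = alpha + beta * x + rng.normal(0, sigma, size=x.size)
    b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b_estimates.append(b)

print(np.mean(b_estimates))               # close to beta = 2, consistent with E(b) = beta
```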
Further reading: https://people.eecs.berkeley.edu/~jegonzal/assets/slides/linear_regression.pdf
### Multivariate Linear Regression
http://faculty.cas.usf.edu/mbrannick/regression/Reg2IV.html
| be6f63c811e0f0bb0408670e67960174ecc24d3d | 9,057 | ipynb | Jupyter Notebook | pages/tutorials/tut_3_2/Linear_Regression_(Cont.).ipynb | nnfl-lab-book/Labs | 6eced463f63f0f9fd049668cec48b0813a500673 | [
"CC-BY-3.0"
]
| 1 | 2019-10-28T11:14:46.000Z | 2019-10-28T11:14:46.000Z | pages/tutorials/tut_3_2/Linear_Regression_(Cont.).ipynb | nnfl-lab-book/Labs | 6eced463f63f0f9fd049668cec48b0813a500673 | [
"CC-BY-3.0"
]
| null | null | null | pages/tutorials/tut_3_2/Linear_Regression_(Cont.).ipynb | nnfl-lab-book/Labs | 6eced463f63f0f9fd049668cec48b0813a500673 | [
"CC-BY-3.0"
]
| 1 | 2019-01-20T05:17:40.000Z | 2019-01-20T05:17:40.000Z | 33.420664 | 264 | 0.483825 | true | 2,086 | Qwen/Qwen-72B | 1. YES
2. YES | 0.83762 | 0.884039 | 0.740489 | __label__eng_Latn | 0.652734 | 0.558736 |
# IWI-131 Programming
## Introduction to Python 3.X
Python is a programming language that is executed through an interpreter. The interpreter reads code from:
* The Python console.
* Text files (with the .py extension).
In this class we will focus mostly on the **Python console**.
## Data Types
Python operates on data of different types. Each data type has rules that establish how literal values (constants) of that type must be written. In addition, each data type comes with a series of operators and functions that can be applied to it. In some cases it is possible to convert a particular value from one type to another, either implicitly or explicitly.
### Integer numbers
Type `int` (*integer*)
```python
1
```
1
```python
+135
```
135
```python
-124
```
-124
### Real numbers
Type `float` (*floating point*)
```python
-0.36
```
-0.36
```python
1.0
```
1.0
```python
6.02e23
```
6.02e+23
### Boolean values
Type `bool`
```python
True
```
True
```python
False
```
False
### Text
Type `str` (*strings*)
```python
"hola"
```
'hola'
```python
'hola'
```
'hola'
```python
"Let's Go!"
```
"Let's Go!"
```python
'Ella dijo "Hola"'
```
'Ella dijo "Hola"'
## Expressions and operators
**Expression:** a combination of values that can be evaluated and yields a result.
Expressions can be made up of:
- **Literal values**
- **Variables**
- **Operators**
- **Function calls**
**Operator:** a symbol in an expression that represents an operation applied to the values it acts on.
### Arithmetic operators
They operate on numeric values and return a numeric value as the result.
They can be:
- Binary operators
- Unary operators
#### Binary operators
- Addition (`+`)
- Subtraction (`-`)
- Multiplication (`*`)
- Division (`/`)
- Integer division (`//`)
- Modulo, i.e. the remainder of the division (`%`)
- Power (`**`)
```python
3+2
```
5
```python
8-5
```
3
```python
8-5.0
```
3.0
```python
1/2
```
0.5
```python
1//2
```
0
```python
5%2
```
1
```python
2**2
```
4
#### Unary operators
- Positive (`+`)
- Negative (`-`)
```python
+3
```
3
```python
-5.0
```
-5.0
## Function calls and using libraries
Some functions are built into the core of the Python language and can be used directly (`round`, `abs`). In other cases, functions are grouped in collections called libraries (`math`, `random`), and they must first be imported from the library before they can be used.
Some examples
**Absolute value** $|x|$
```python
abs(4-5)
```
1
**Rounding**
```python
round(2.456)
```
2
### Examples of functions from the `math` library
**Exponential** $e^x$
```python
from math import exp
exp(1)
```
2.718281828459045
**Square root** $\sqrt{x}$
```python
import math
math.sqrt(36)
```
6.0
### Examples of functions from the `random` library
`randint(a,b)` returns a random integer in $[a,b]$
```python
from random import randint
randint(1,10)
```
10
## Operator precedence
Expressions are evaluated following precedence rules that resolve ambiguities. The operator precedence, from highest to lowest, is the following (an example is shown below):
1. `(, )` (parentheses)
* `abs()`, `sqrt()`, `randint()` (function calls)
* `**`
* `+x`, `-x` (unary)
* `*`, `/`, `//`, `%`
* `+`, `-`
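For example, the following illustrative expression is evaluated according to these rules:
```python
# ** has higher precedence than *, which has higher precedence than +,
# so this is evaluated as 2 + (3 * (4 ** 2)) = 2 + 48 = 50
2 + 3 * 4 ** 2
```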
### Operator associativity
The `**` operator is right-associative. Example:
```python
2**3**2
```
512
The `*`, `/` and `//` operators are left-associative. Example:
```python
24/4/2
```
3.0
## Conversion between data types (*casting*)
```python
int(3.5)
```
3
```python
float("1")
```
1.0
```python
str(25)
```
'25'
```python
bool(0.0)
```
False
```python
int("hola")
```
## Variable assignment
A variable assignment has the form:
<div align="center">
<div style="font-size:2em"> <variable> = <expression></div>
</div>
- First, the expression to the right of the equals sign is evaluated.
- The result of the evaluation is assigned to the variable to the left of the equals sign.
What values do the following variables have?
```python
a = 4 + 5
b = a + 4
a = 2
d = a - 3
e = e + 1
```
## Data input
```python
dato = input()
```
a
```python
nombre = input("Ingrese su nombre: ")
```
Ingrese su nombre: Juan
What data type is the variable `nombre`?
```python
nombre
```
'Juan'
## Data output
```python
print("Hola mundo")
```
Hola mundo
```python
a = 6
x = a**2
```
```python
print(a, 'al cuadrado es', x)
```
6 al cuadrado es 36
```python
print(a, x)
```
6 36
```python
print(a)
print(x)
```
6
36
## Comments
- They are pieces of text that will be ignored by the Python interpreter
- They are used to explain the code and make it easier to understand
- There are two kinds of comments:
1. Those written to the right of a `#` character
- Any text that appears to the right of a `#` character will be ignored
- They end when the line ends
2. Multi-line comments
- They are enclosed between three quotes at the beginning and at the end: `'''`
- They can span several lines
```python
#El siguiente codigo muestra la suma de 2 + 2 en pantalla
print(2 + 2)
```
4
## Examples
### Example 1
Ask for the user's name and print the message `"Yo soy nombre"` ("I am name").
```python
nombre = input("Ingrese nombre: ")
print("Yo soy", nombre)
```
Ingrese nombre: Juan
Yo soy Juan
### Example 2
Write a program that converts a temperature from Fahrenheit to Celsius. The conversion formula is the following:
\begin{equation}
C = \frac{5}{9}\,(F - 32)
\end{equation}
```python
f = float(input('Temp. en Farenheit: '))
c = (5/9) * (f-32)
print('El equivalente en Celsius es aproximadamente:', int(round(c)))
```
Temp. en Farenheit: 32
El equivalente en Celsius es aproximadamente: 0
## Some errors
### Runtime error
A runtime error occurs when the program ends abruptly because of a condition that arises and prevents it from continuing to run. For example, when an arithmetic error is produced by attempting a division by zero.
```python
n = 8
m = 0
print('Listo')
print(n/m)
```
### Syntax error
A syntax error occurs when we make a mistake in the form (syntax) of an instruction in our program. For example, when we do not close the right number of parentheses, forget a comma, or misspell a Python instruction.
```python
2*(3+4))
```
```python
n = 6
print(n)
n + 2 = 7
print(n)
```
### Name errors
A name error occurs when we try to access the contents of a variable that has not been initialized and therefore does not exist. It can also occur when trying to use a function that has not been defined or imported from a library.
```python
x = 20
print(5 * x)
print(5 * y)
```
## Rules for defining identifiers
Identifiers are the names we give to variables and other elements of our programs. In general, as good programming practice, we want our variables and functions to have representative names that indicate their purpose by themselves. For example, a variable to store a person's age should be called `edad` (age) and not `x`, even though the program will work correctly with either of the two.
Python has simple rules for defining identifiers:
1. An identifier may contain any combination of letters (uppercase or lowercase), digits and underscore characters.
2. The first character must be a letter. Uppercase and lowercase letters are considered different, so `edad`, `Edad` and `EDAD` are all distinct identifiers.
As a consequence of rules 1. and 2., it is not possible to use whitespace in an identifier. If we want an identifier made up of several words, we separate them with underscores or use a combination of lowercase and uppercase letters to mark the words, for example `nombre_cliente` or `nombreCliente`. A short example is shown below.
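A short illustrative sketch (the names are made up) of valid and invalid identifiers:
```python
# Valid identifiers
edad = 20
nombre_cliente = "Ana"
nombreCliente2 = "Luis"

# Invalid identifiers: uncommenting either line produces a SyntaxError
# 2edad = 20               # cannot start with a digit
# nombre cliente = "Ana"   # cannot contain spaces
```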
## Exercises
### Exercise 1
Write an expression to add the digits of a $3$-digit integer entered by the user:
```python
n = int(input('Ingrese un número de 3 dígitos: '))
suma = n%10 + (n//10)%10 + n//100
print(suma)
```
Ingrese un número de 3 dígitos: 123
6
### Exercise 2
Modify the previous result so that it receives an integer with $3$ equal digits and returns the result ```n//suma```.
```python
n = int(input('Ingresa un numero de 3 digitos iguales: '))
suma = n%10 + (n//10)%10 + n//100
resultado = n//suma
print(resultado)
```
Ingresa un numero de 3 digitos iguales: 111
37
### Exercise 3
Write a program that determines the area of a circle from its radius.
```python
pi = 3.1415
radio = float(input('Ingrese el radio de un círculo: '))
print("El área de la circunferencia es", pi*(radio**2))
```
Ingrese el radio de un círculo: 4
El área de la circunferencia es 50.264
### Exercise 4
**TRACING**: Trace the following program and indicate what it prints. Each time the value of a variable changes, put its value in a new row of the table. The table has spare rows:
```python
a = 94567
b = 28954
c = 36532
d = 11404
e = 40613
a = a//10000
b = (b//1000)%10
c = (c//100)%10
d = (d//10)%10
e = e%10
print (a,b,c,d,e)
```
**Tracing solution**:
| | | | | |
|---|---|---|---|---|
| **a** | **b** | **c** | **d** | **e** |
| 94567 | | | | |
| | 28954 | | | |
| | | 36532 | | |
| | | | 11404 | |
| | | | | 40613 |
| 9 | | | | |
| | 8 | | | |
| | | 5 | | |
| | | | 0 | |
| | | | | 3 |
### Exercise 5
Write a program that computes the area of a triangle from the lengths of its sides.
To compute it you can use Heron's formula:
\begin{equation}
A = \sqrt{s\,(s-a)(s-b)(s-c)},
\end{equation}
where $a$, $b$ and $c$ are the lengths of the sides and $s=\dfrac{a+b+c}{2}$ is the semiperimeter.
```python
l1 = float(input("Ingrese longitud de lado 1: "))
l2 = float(input("Ingrese longitud de lado 2: "))
l3 = float(input("Ingrese longitud de lado 3: "))
s = (l1 + l2 + l3) / 2 # semiperímetro
d1 = s-l1 # diferencia1
d2 = s-l2 # diferencia2
d3 = s-l3 # diferencia3
prod = s*d1*d2*d3 # producto de diferencias y semiperimetro
area = prod ** (1 / 2) # raíz cuadrada
# ¿cómo se podría hacer lo mismo utilizando math.sqrt()?
print("El área del triángulo es", area)
```
Ingrese longitud de lado 1: 3
Ingrese longitud de lado 2: 4
Ingrese longitud de lado 3: 5
El área del triángulo es 6.0
| 337d1f0678301b7be3e0ef0efa0b7c11be6bbd51 | 63,601 | ipynb | Jupyter Notebook | notebooks/02_Intro_Python.ipynb | dsanmartin/IWI-131 | 77d1eb4bf2638cb83b8114ae6e84ddec3f34511d | [
"BSD-3-Clause"
]
| 1 | 2021-09-07T22:58:53.000Z | 2021-09-07T22:58:53.000Z | notebooks/02_Intro_Python.ipynb | dsanmartin/iwi-131 | 77d1eb4bf2638cb83b8114ae6e84ddec3f34511d | [
"BSD-3-Clause"
]
| null | null | null | notebooks/02_Intro_Python.ipynb | dsanmartin/iwi-131 | 77d1eb4bf2638cb83b8114ae6e84ddec3f34511d | [
"BSD-3-Clause"
]
| 1 | 2020-09-23T21:20:40.000Z | 2020-09-23T21:20:40.000Z | 22.69843 | 696 | 0.523042 | true | 3,511 | Qwen/Qwen-72B | 1. YES
2. YES | 0.885631 | 0.904651 | 0.801187 | __label__spa_Latn | 0.98627 | 0.699758 |
# *FEniCS tutorial:* Heat equation with Dirichlet boundary conditions in two dimensions
In this demo, we solve the two-dimensional diffusion equation with Dirichlet
boundary conditions $u_D$ and source term $f$. Both are chosen so as to yield
an exact analytic result against which we can compare the numerical results.
$$
\begin{align}
u' &= \nabla^2 u + f \quad\text{in the unit square} \\
u &= u_D \hphantom{u+f}\quad\text{on the boundary} \\
u &= u_0 \hphantom{u+f}\quad\;\text{at $t = 0$}
\end{align}
$$
with
$$
\begin{align}
u_D &= 1 + x^2 + \alpha y^2 + \beta t \\
u_0 &= u_D(t=0) \\[0.5ex]
f &= \beta - 2 (1 + \alpha)
\end{align}
$$
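As a quick sanity check (a small SymPy sketch, independent of the FEniCS code below), one can verify that this $u_D$ satisfies $u' = \nabla^2 u + f$ exactly when $f = \beta - 2(1+\alpha)$:
```python
import sympy as sp

x, y, t, alpha, beta = sp.symbols('x y t alpha beta')
u = 1 + x**2 + alpha*y**2 + beta*t                          # the manufactured solution u_D

f = sp.diff(u, t) - (sp.diff(u, x, 2) + sp.diff(u, y, 2))   # f = u' - laplacian(u)
print(sp.simplify(f))                                       # -> beta - 2*alpha - 2
```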
```python
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import time as tm
# import os
# import sys
# import re
# from IPython.display import Image
# from IPython.core.interactiveshell import InteractiveShell
# InteractiveShell.ast_node_interactivity = "all"
```
```python
from fenics import *
```
```python
## set up the problem and define the main simulation function, evolve()
# simulation parameters
# -- boundary function
alpha = 3 # parameter alpha
beta = 1.2 # parameter beta
# -- time step
T = 2.0 # total simulation time
n_steps = 10 # number of time steps
dt = T / n_steps # size of time step
nip = 1 # number of intervals between plots
# -- mesh density
nx = ny = 8 # mesh density along axes
# create mesh and define function space
mesh = UnitSquareMesh(nx, ny)
V = FunctionSpace(mesh, 'P', 1)
# define boundary condition
u_D = Expression('1 + x[0]*x[0] + alpha*x[1]*x[1] + beta*t',
degree=2, alpha=alpha, beta=beta, t=0)
u_D_min = 1.
u_D_max = 1. + 1 + alpha + beta * T
def boundary(x, on_boundary):
return on_boundary
bc = DirichletBC(V, u_D, boundary)
# define initial value
u_n = interpolate(u_D, V)
# define variational problem
u = TrialFunction(V)
v = TestFunction(V)
f = Constant(beta - 2 - 2*alpha)
F = u*v*dx + dt*dot(grad(u), grad(v))*dx - (u_n + dt*f)*v*dx
a, L = lhs(F), rhs(F)
# define time-evolution function
def evolve():
# compute and report initial err at vertices
t = 0
u_D.t = t
u_x = interpolate(u_D, V)
err = np.abs(u_n.vector() - u_x.vector()).max()
yield 0, u_n, err
#print('idx = %2d: t = %.2f: err = %.3g' % (0, 0., err))
# time-stepping
u = Function(V)
for n in range(1, n_steps + 1):
# update current time
t += dt
u_D.t = t
# compute solution
solve(a == L, u, bc)
# compute and report current state and max err at vertices
if n % nip == 0:
u_x = interpolate(u_D, V)
err = np.abs(u.vector() - u_x.vector()).max()
yield t, u, err
#print('idx = %2d: t = %.2f: err = %.3g' % (n, t, err))
# update previous solution
u_n.assign(u)
```
```python
n_rows = 3
n_cols = 5
fig_wd = 15
# default sizing here yields unit aspect ratio
plt.figure(figsize = (fig_wd, fig_wd * n_rows // n_cols))
idx = 0
for t, u, e in evolve():
idx += 1
print('idx = %2d: t = %.2f: err = %.3g' % (idx, t, e))
ax = plt.subplot(n_rows, n_cols, idx)
plot(u, vmin=u_D_min, vmax=u_D_max)
```
```python
```
| 7929ab4515c56f483cdea232134c15a2551b9479 | 5,342 | ipynb | Jupyter Notebook | examples/notebooks/thermal_transport/xt01_heat_2D_exact.ipynb | radiasoft/rslaser | 096ebf41f56a1f34cea91a597f804d427bea34bc | [
"Apache-2.0"
]
| null | null | null | examples/notebooks/thermal_transport/xt01_heat_2D_exact.ipynb | radiasoft/rslaser | 096ebf41f56a1f34cea91a597f804d427bea34bc | [
"Apache-2.0"
]
| 49 | 2020-06-29T17:04:49.000Z | 2022-03-28T21:39:28.000Z | examples/notebooks/thermal_transport/xt01_heat_2D_exact.ipynb | radiasoft/rslaser | 096ebf41f56a1f34cea91a597f804d427bea34bc | [
"Apache-2.0"
]
| 1 | 2021-02-25T13:28:30.000Z | 2021-02-25T13:28:30.000Z | 28.72043 | 96 | 0.488207 | true | 1,032 | Qwen/Qwen-72B | 1. YES
2. YES | 0.885631 | 0.72487 | 0.641968 | __label__eng_Latn | 0.744681 | 0.329837 |
# Input Driven HMM
This notebook is a simple example of an HMM with exogenous inputs. The inputs modulate the probability of discrete state transitions via a multiclass logistic regression. Let $z_t \in \{1, \ldots, K\}$ denote the discrete latent state at time $t$ and $u_t \in \mathbb{R}^U$ be the exogenous input at time $t$. The transition probability is given by,
$$
\begin{align}
\Pr(z_t = k \mid z_{t-1} = j, u_t) =
\frac{\exp\{\log P_{j,k} + w_k^\mathsf{T} u_t\}}
{\sum_{k'=1}^K \exp\{\log P_{j,k'} + w_{k'}^\mathsf{T} u_t\}}.
\end{align}
$$
The parameters of the transition model are $P \in \mathbb{R}_+^{K \times K}$, a baseline set of (unnormalized) transition weights, and $W \in \mathbb{R}^{K \times U}$, a set of input weights.
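To make this concrete, here is a small NumPy sketch (with made-up values for $P$, $W$ and $u_t$) of how one row of the input-dependent transition matrix is computed:
```python
import numpy as np

log_P = np.log(np.array([[0.9, 0.1],
                         [0.2, 0.8]]))   # baseline (unnormalized) transition weights, assumed
W = np.array([[ 2.0],
              [-2.0]])                   # input weights, shape (K, U)
u_t = np.array([0.5])                    # exogenous input at time t

j = 0                                    # previous state z_{t-1}
logits = log_P[j] + W @ u_t              # log P_{j,k} + w_k^T u_t for each next state k
probs = np.exp(logits - logits.max())
probs /= probs.sum()                     # softmax over k
print(probs)                             # Pr(z_t = k | z_{t-1} = j, u_t)
```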
## 1. Setup
The line `import ssm` imports the package for use. Here, we have also imported a few other packages for plotting.
```python
import numpy as np
import numpy.random as npr
import matplotlib.pyplot as plt
import ssm
import seaborn as sns
from ssm.util import one_hot, find_permutation
%matplotlib inline
npr.seed(0)
sns.set(palette="colorblind")
```
## 2. Create an Input Driven HMM
SSM is designed to be modular, so that the user can easily mix and match different types of transitions and observations.
We create an input-driven HMM with the following line:
```python
true_hmm = ssm.HMM(num_states, obs_dim, input_dim,
observations="categorical", observation_kwargs=dict(C=num_categories),
transitions="inputdriven")
```
Let's look at what each of the arguments do. The first three arguments specify the number of states, and the dimensionality of the observations and inputs.
**Setting the observation model**
For this example, we have set `observations="categorical"`, which means each observation will take on one of a discrete set of values, i.e $y_t \in \{1, \ldots, C \}$.
For categorical observations, the observations are drawn from a multinomial distribution, with parameters depending on the current state. Assuming $z_t = k$, the observations are a vector $y \in \mathbb{R}^D$, where $y_i \sim \text{mult}(\lambda_{k,i})$ and $\lambda_{k,i}$ is the multinomial parameter associated with coordinate $i$ of the observations in state $k$. Note that each observation variable is independent of the others.
For categorical observations, we also specify the number of discrete observations possible (in this case 3). We do this by creating a dictionary where the keys are the keyword arguments which we want to pass to the observation model. For categorical observations, there is just one keyword argument, `C`, which specifies the number of categories. This is set using `observation_kwargs=dict(C=num_categories)`.
The observations keyword argument should be one of : `"gaussian", "poisson" "studentst", "exponential", "bernoulli", "autoregressive", "robust_autoregressive"`.
**NOTE:**
Setting the observations as "autoregressive" means that each observation will be dependent on the prior observation, as well as on the input (if the input is nonzero). By constrast, the standard "inputdriven" transitions are not affected by previous observations or directly by the inputs.
**Setting the transition model**
In order to create an HMM with exogenous inputs, we set ```transitions="inputdriven"```. This means that the baseline transition matrix $P$ is modified according to a Generalized Linear Model, as described at the top of the page.
SSM supports many transition models, set by a keyword argument to the constructor of the class. The keyword argument should be one of: `"standard", "sticky", "inputdriven", "recurrent", "recurrent_only", "rbf_recurrent", "nn_recurrent".` We're working on creating standalone documentation to describe these in more detail. For most users, the stationary and input-driven transition classes should suffice.
**Creating inputs and sampling**
After creating our HMM object, we create an input array called `inpt` which is simply a jittered sine wave. We also increase the transition weights so that it will be clear (for demonstration purposes) that the input is changing the transition probabilities. In this case, we will actually increase the weights such that the transitions appear almost deterministic.
```python
# Set the parameters of the HMM
time_bins = 1000 # number of time bins
num_states = 2 # number of discrete states
obs_dim = 1 # data dimension
input_dim = 1 # input dimension
num_categories = 3 # number of output types/categories
# Make an HMM
true_hmm = ssm.HMM(num_states, obs_dim, input_dim,
observations="categorical", observation_kwargs=dict(C=num_categories),
transitions="inputdriven")
# Optionally, turn up the input weights to exaggerate the effect
true_hmm.transitions.Ws *= 3
# Create an exogenous input
inpt = np.sin(2 * np.pi * np.arange(time_bins) / 50)[:, None] + 1e-1 * npr.randn(time_bins, input_dim)
# Sample some data from the HMM
true_states, obs = true_hmm.sample(time_bins, input=inpt)
# Compute the true log probability of the data, summing out the discrete states
true_lp = true_hmm.log_probability(obs, inputs=inpt)
# By default, SSM returns categorical observations as a list of lists.
# We convert to a 1D array for plotting.
obs_flat = np.array([x[0] for x in obs])
```
```python
# Plot the data
plt.figure(figsize=(8, 5))
plt.subplot(311)
plt.plot(inpt)
plt.xticks([])
plt.xlim(0, time_bins)
plt.ylabel("input")
plt.subplot(312)
plt.imshow(true_states[None, :], aspect="auto")
plt.xticks([])
plt.xlim(0, time_bins)
plt.ylabel("discrete\nstate")
plt.yticks([])
# Create Cmap for visualizing categorical observations
plt.subplot(313)
plt.imshow(obs_flat[None,:], aspect="auto", )
plt.xlim(0, time_bins)
plt.ylabel("observation")
plt.grid(b=None)
plt.show()
```
### 2.1 Exercise: EM for the input-driven HMM
There are a few good references that derive the EM algorithm for the case of a vanilla HMM (e.g. *Pattern Recognition and Machine Learning* by Chris Bishop, and [this tutorial](https://www.ece.ucsb.edu/Faculty/Rabiner/ece259/Reprints/tutorial%20on%20hmm%20and%20applications.pdf) by Lawrence Rabiner). How should the EM updates change for the case of input-driven HMMs?
## 3. Fit an input-driven HMM to data
Below, we'll show to fit an input-driven HMM from data. We'll treat the samples generated above as a dataset, and try to learn the appropriate HMM parameters from this dataset.
We create a new HMM object here, with the same parameters as the HMM in Section 1:
```python
hmm = ssm.HMM(num_states, obs_dim, input_dim,
observations="categorical", observation_kwargs=dict(C=num_categories),
transitions="inputdriven")
```
We fit the dataset simply by calling the `fit` method:
```python
hmm_lps = hmm.fit(obs, inputs=inpt, method="em", num_iters=N_iters)
```
Here, the variable `hmm_lps` will be set to a list of log-probabilities at each step of the EM-algorithm, which we'll use to check convergence.
```python
# Now create a new HMM and fit it to the data with EM
N_iters = 100
hmm = ssm.HMM(num_states, obs_dim, input_dim,
observations="categorical", observation_kwargs=dict(C=num_categories),
transitions="inputdriven")
# Fit
hmm_lps = hmm.fit(obs, inputs=inpt, method="em", num_iters=N_iters)
```
HBox(children=(FloatProgress(value=0.0), HTML(value='')))
### 3.1 Permute the latent states, check convergence
As in the vanilla-HMM notebook, we need to find a permutation of the latent states from our new hmm such that they match the states from the true HMM above. SSM accomplishes this with two function calls: first, we call `find_permutation(true_states, inferred_states)` which returns a list of indexes into states.
Then, we call `hmm.permute(permutation)` with the results of our first function call. Finally, we set `inferred_states` to be the underlying states we predict given the data.
Below, we plot the results of the `fit` function in order to check convergence of the EM algorithm. We see that the log-probability from the EM algorithm approaches the true log-probability of the data (which we have stored as `lp_true`).
```python
# Find a permutation of the states that best matches the true and inferred states
hmm.permute(find_permutation(true_states, hmm.most_likely_states(obs, input=inpt)))
inferred_states = hmm.most_likely_states(obs, input=inpt)
```
```python
# Plot the log probabilities of the true and fit models
plt.plot(hmm_lps, label="EM")
plt.plot([0, N_iters], true_lp * np.ones(2), ':k', label="True")
plt.legend(loc="lower right")
plt.xlabel("EM Iteration")
plt.xlim(0, N_iters)
plt.ylabel("Log Probability")
plt.show()
```
### 3.3 Exercise: Change the Fitting Method
As an experiment, try fitting the same dataset using another fitting method. The two other fitting methods supported for HMMs are "sgd" and "adam", which you can set by passing `method="sgd"` and `method="adam"` respectively. For these methods, you'll probably need to increase the number of iteratations to around 1000 or so.
After fitting with a different method, re-run the two cells above to generate a plot. How does the convergence of these other methods compare to EM?
```python
# Plot the true and inferred states
plt.figure(figsize=(8, 3.5))
plt.subplot(211)
plt.imshow(true_states[None, :], aspect="auto")
plt.xticks([])
plt.xlim(0, time_bins)
plt.ylabel("true\nstate")
plt.yticks([])
plt.subplot(212)
plt.imshow(inferred_states[None, :], aspect="auto")
plt.xlim(0, time_bins)
plt.ylabel("inferred\nstate")
plt.yticks([])
plt.show()
```
## 4. Visualize the Learned Parameters
After calling `fit`, our new HMM object will have parameters updated according to the dataset. We can get a sense of whether we successfully learned these parameters by comparing them to the _true_ parameters which generated the data.
Below, we plot the baseline log transition probabilities (the log of the state-transition matrix) as well as the input weights $w$.
```python
# Plot the true and inferred input effects
plt.figure(figsize=(8, 4))
vlim = max(abs(true_hmm.transitions.log_Ps).max(),
abs(true_hmm.transitions.Ws).max(),
abs(hmm.transitions.log_Ps).max(),
abs(hmm.transitions.Ws).max())
plt.subplot(141)
plt.imshow(true_hmm.transitions.log_Ps, vmin=-vlim, vmax=vlim, cmap="RdBu", aspect=1)
plt.xticks(np.arange(num_states))
plt.yticks(np.arange(num_states))
plt.title("True\nBaseline Weights")
plt.grid(b=None)
plt.subplot(142)
plt.imshow(true_hmm.transitions.Ws, vmin=-vlim, vmax=vlim, cmap="RdBu", aspect=num_states/input_dim)
plt.xticks(np.arange(input_dim))
plt.yticks(np.arange(num_states))
plt.title("True\nInput Weights")
plt.grid(b=None)
plt.subplot(143)
plt.imshow(hmm.transitions.log_Ps, vmin=-vlim, vmax=vlim, cmap="RdBu", aspect=1)
plt.xticks(np.arange(num_states))
plt.yticks(np.arange(num_states))
plt.title("Inferred\nBaseline Weights")
plt.grid(b=None)
plt.subplot(144)
plt.imshow(hmm.transitions.Ws, vmin=-vlim, vmax=vlim, cmap="RdBu", aspect=num_states/input_dim)
plt.xticks(np.arange(input_dim))
plt.yticks(np.arange(num_states))
plt.title("Inferred\nInput Weights")
plt.grid(b=None)
plt.colorbar()
plt.show()
```
| e3a2a8b7ce03b5ab8026e70c2ff9fcb1935d8390 | 103,040 | ipynb | Jupyter Notebook | notebooks/2 Input Driven HMM.ipynb | GrohLab/ssm | a27c0d47837f676db9f7cf48924a653d148c5635 | [
"MIT"
]
| 208 | 2018-06-14T16:20:11.000Z | 2020-08-18T01:13:46.000Z | notebooks/2 Input Driven HMM.ipynb | GrohLab/ssm | a27c0d47837f676db9f7cf48924a653d148c5635 | [
"MIT"
]
| 82 | 2018-06-28T15:15:41.000Z | 2020-07-30T15:00:46.000Z | notebooks/2 Input Driven HMM.ipynb | GrohLab/ssm | a27c0d47837f676db9f7cf48924a653d148c5635 | [
"MIT"
]
| 83 | 2018-06-28T22:23:27.000Z | 2020-10-02T19:27:53.000Z | 248.289157 | 43,896 | 0.91084 | true | 2,833 | Qwen/Qwen-72B | 1. YES
2. YES | 0.875787 | 0.743168 | 0.650857 | __label__eng_Latn | 0.977418 | 0.350489 |
# The Laplace Transform
*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Communications Engineering, Universität Rostock. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).*
## Analysis of Passive Electrical Networks
The Laplace transform is a well-established tool for the analysis of differential equations including initial values. [Electrical networks](https://en.wikipedia.org/wiki/Electrical_network) composed of linear passive elements, like resistors, capacitors and inductors can be described mathematically by linear ordinary differential equations (ODEs) with constant coefficients. The Laplace transform provides an elegant way of analyzing such networks. This is illustrated in the following.
### Complex Impedances and Equivalent Networks
The concept of complex impedances is used to analyze passive electrical networks in the Laplace domain. Let's first take a look at the ODEs describing the relation between voltage $u(t)$ and current $i(t)$ for linear passive elements. They are summarized in the second column of the following table
| Element | $\quad \qquad \qquad \quad \quad$ Temporal Domain $\qquad \qquad \qquad \quad \quad$ | $\qquad \qquad$ Laplace Domain $\qquad \qquad$ | Impedance $Z(s)$ |
|:---:|:---:|:---:|:---:|
| Resistor | $u(t) = R \cdot i(t)$ | $U(s) = R \cdot I(s)$ | $R$ |
| Inductor | $\begin{matrix} u(t) = L \frac{d}{dt} i(t) \\ i(t) = \frac{1}{L} \int_{0}^{t} u(\tau) d\tau + i(0+) \epsilon(t) \end{matrix}$ | $\begin{matrix} U(s) = s L I(s) - L i(0+) \\ I(s) = \frac{1}{s L} U(s) + \frac{1}{s} i(0+) \end{matrix}$ | $s L$ |
| Capacitor | $\begin{matrix} u(t) = \frac{1}{C} \int_{0}^{t} i(\tau) d\tau + u(0+) \epsilon(t) \\ i(t) = C \frac{d}{dt} u(t) \end{matrix}$ | $\begin{matrix} U(s) = \frac{1}{s C} I(s) + \frac{1}{s} u(0+) \\ I(s) = s C U(s) - C u(0+) \end{matrix}$ | $\frac{1}{s C}$ |
It was assumed that the voltage $u(t)=0$ and current $i(t)=0$ for $t < 0$, hence that both are causal signals. The initial values $u(0+)$ and $i(0+)$ denote their right-sided limit values at $t=0$. For instance $u(0+) = \lim_{\epsilon \to 0} u(0 + \epsilon)$. These initial values represent the energy stored in the capacitors and inductors at time instant $t=0$, respectively. The analysis of a passive electrical network is performed by applying [Kirchhoff's circuit laws](https://en.wikipedia.org/wiki/Kirchhoff's_circuit_laws), resulting in an ODE describing, for instance, the relation between input and output voltage. This ODE has to be solved explicitly. See for instance the [previous network analysis example](../systems_time_domain/network_analysis.ipynb).
The time-domain relations can be transformed into the Laplace domain by applying the [differentiation and integration theorem](table_theorems_transforms.ipynb#Properties-and-Theorems) of the Laplace transform. The results are summarized in the third column. The differentiation and integration are now represented by algebraic operations. Kirchhoff's circuit laws can be applied straightforwardly to the transformed quantities using the transformed relations. This is due to the fact that the Laplace transform is a linear operation. The result is an algebraic equation that can be solved straightforward with respect to the desired quantities.
When the initial values $u(0+)$ and $i(0+)$ are zero, the elements can be characterized in the Laplace domain by their [complex impedances](https://en.wikipedia.org/wiki/Electrical_impedance). The complex impedance $Z(s)$ is defined as follows
\begin{equation}
Z(s) = \frac{U(s)}{I(s)}
\end{equation}
Complex impedances can be used to represent a passive electrical network in the Laplace domain. The analysis of an electrical network in the Laplace domain is illustrated by the example given in the next section. Note that similar considerations also apply to mechanical systems and other problems that can be described by ODEs.
### Example: Second-Order Low-Pass Filter
The second-order low-pass filter from the [previous example](../systems_time_domain/network_analysis.ipynb) is analyzed using the Laplace transform. First the step response for zero initial values is computed followed by an analysis including initial values in a second stage.
#### Output signal for zero initial values
It is assumed that no energy is stored in the capacitor and inductor for $t<0$. Consequently, the initial values can be discarded. The equivalent network in the Laplace domain is derived by transforming the input $x(t)$ and output $y(t)$, and introducing the complex impedances from above table for its elements.
Applying [Kirchhoff's circuit laws](https://en.wikipedia.org/wiki/Kirchhoff's_circuit_laws) with the complex impedances of the network elements yields the output signal $Y(s)$ in relation to the input $X(s)$ in the Laplace domain as
\begin{equation}
Y(s) = \frac{1}{LC s^2 + RC s + 1} \cdot X(s)
\end{equation}
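The relation can be obtained by interpreting the network as a voltage divider between the series impedance $R + sL$ and the capacitor impedance $\frac{1}{sC}$, with the output taken across the capacitor. A short SymPy sketch of this step (re-deriving the transfer function above):
```python
import sympy as sym

s = sym.symbols('s', complex=True)
R, L, C = sym.symbols('R L C', positive=True)

Z_C = 1/(s*C)                 # impedance of the capacitor
H = Z_C / (R + s*L + Z_C)     # voltage divider: Y(s)/X(s)
print(sym.simplify(H))        # -> 1/(C*L*s**2 + C*R*s + 1)
```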
This relation is defined in `SymPy` for subsequent evaluation.
```python
%matplotlib inline
import sympy as sym
sym.init_printing()
s = sym.symbols('s', complex=True)
t, R, L, C = sym.symbols('t R L C', positive=True)
X = sym.Function('X')(s)
Y = 1/(L*C*s**2 + R*C*s + 1) * X
Y
```
The response $y(t)$ of the network to a [Heaviside signal](../continuous_signals/standard_signals.ipynb#Heaviside-Signal) at its input, is computed by setting the input to $x(t) = \epsilon(t)$. The Laplace transform $Y(s)$ of the output signal is hence given as
\begin{equation}
Y(s) = \frac{1}{LC s^2 + RC s + 1} \cdot \frac{1}{s}
\end{equation}
The output signal $y(t)$ is computed by inverse Laplace transform of $Y(s)$ for the normalized values $L = .5$, $R = 1$, $C = .4$.
```python
RLC = {R: 1, L: sym.Rational('.5'), C: sym.Rational('.4')}
y = sym.inverse_laplace_transform(Y.subs(RLC).subs(X, 1/s), s, t)
y
```
The result is simplified for sake of readability
```python
y = y.simplify()
y
```
```python
sym.plot(y, (t, 0, 5), xlabel='$t$', ylabel='$y(t)$');
```
The computation of the output signal $y(t)$ did not require the solution of the underlying ODE as in the [previous example](../systems_time_domain/network_analysis.ipynb). Based on the equivalent network in the Laplace domain, only the computation of an inverse Laplace transform was required. Above result is equal to the [solution of the ODE for an Heaviside signal at the input](../systems_time_domain/network_analysis.ipynb#Step-Response).
#### Output signal including initial values
Now the analysis is performed for non-zero initial values. As initial values, the normalized voltage $u_\text{C}(0+) = -1$ at the capacitor and the normalized current $i_\text{L}(0+) = 0$ at the inductor is assumed. The network is analyzed again using Kirchhoff's circuit laws, but now the initial values are not discarded. Using the Laplace domain representation of the network elements from above table, this results in
\begin{align}
Y(s) &= \underbrace{\frac{1}{L C s^2 + R C s + 1} \cdot X(s)}_{Y_\text{ext}(s)} \\
&+ \underbrace{\frac{R C + L C s}{L C s^2 + R C s + 1} \cdot y(0+) + \frac{L}{L C s^2 + R C s + 1} \cdot i_\text{L}(0+)}_{Y_\text{int}(s)}
\end{align}
where the fact has been used that the initial voltage $u_\text{C}(0+)$ at the capacitor is equal to the initial value of the output $y(0+)$. The index for the current $i_\text{L}(t)$ at the inductor is discarded in the remainder for brevity. The terms have been sorted with respect to their dependence on the input $X(s)$, and the initial values $y(0+)$ and $i(0+)$. The part of the output signal which depends only on the input is termed as *external* part $Y_\text{ext}(s)$. The parts of the output signal which depend only on the initial values are termed as *internal* parts $Y_\text{int}(s)$. The output signal is given as superposition of both contributions
\begin{equation}
y(t) = y_\text{ext}(t) + y_\text{int}(t)
\end{equation}
where $y_\text{ext}(t) = \mathcal{L}^{-1} \{ Y_\text{ext}(s) \}$ and $y_\text{int}(t) = \mathcal{L}^{-1} \{ Y_\text{int}(s) \}$.
The external part of the output signal has already been computed in the previous section
```python
yext = y
yext.simplify()
```
The Laplace transform of the internal part $Y_\text{int}(s)$ is defined for evaluation
```python
i0, y0 = sym.symbols('i0 y0', real=True)
Yint = (R*C + L*C*s) / (L*C*s**2 + R*C*s + 1) * y0 + L / (L*C*s**2 + R*C*s + 1) * i0
Yint
```
Now the inverse Laplace transform is computed for the initial values $y(0+)$ and $i(0+)$, and the specific values of $R$, $L$ and $C$ given above
```python
yint = sym.inverse_laplace_transform(Yint.subs(RLC).subs(i0, 0).subs(y0, -1), s, t)
yint
```
The output signal $y(t)$ is given as superposition of the external and internal contribution
```python
y = yext + yint
y.simplify()
```
The internal $y_\text{int}(t)$ (green line) and external $y_\text{ext}(t)$ (blue line) part, as well as the output signal $y(t)$ (red line) is plotted for illustration
```python
p1 = sym.plot(yext, (t, 0, 5), line_color='b', xlabel='$t$', ylabel='$y(t)$', show=False)
p2 = sym.plot(yint, (t, 0, 5), line_color='g', show=False)
p3 = sym.plot(y, (t, 0, 5), line_color='r', show=False)
p1.extend(p2)
p1.extend(p3)
p1.show()
```
**Copyright**
The notebooks are provided as [Open Educational Resource](https://de.wikipedia.org/wiki/Open_Educational_Resources). Feel free to use the notebooks for your own educational purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Lecture Notes on Signals and Systems* by Sascha Spors.
| 86cd72caa3b95d92c5c3eb6ef2906c85bed8b525 | 62,480 | ipynb | Jupyter Notebook | laplace_transform/network_analysis.ipynb | xushoucai/signals-and-systems-lecture | 30dbbf9226d93b454639955f5462d57546a921c5 | [
"MIT"
]
| 1 | 2019-01-11T02:04:18.000Z | 2019-01-11T02:04:18.000Z | laplace_transform/network_analysis.ipynb | xushoucai/signals-and-systems-lecture | 30dbbf9226d93b454639955f5462d57546a921c5 | [
"MIT"
]
| null | null | null | laplace_transform/network_analysis.ipynb | xushoucai/signals-and-systems-lecture | 30dbbf9226d93b454639955f5462d57546a921c5 | [
"MIT"
]
| null | null | null | 141.678005 | 19,524 | 0.85757 | true | 2,766 | Qwen/Qwen-72B | 1. YES
2. YES | 0.882428 | 0.847968 | 0.74827 | __label__eng_Latn | 0.987331 | 0.576815 |
# Inaugural Project
> **Note the following:**
> 1. This is an example of how to structure your **inaugural project**.
> 1. Remember the general advice on structuring and commenting your code from [lecture 5](https://numeconcopenhagen.netlify.com/lectures/Workflow_and_debugging).
> 1. Remember this [guide](https://www.markdownguide.org/basic-syntax/) on markdown and (a bit of) latex.
> 1. Turn on automatic numbering by clicking on the small icon on top of the table of contents in the left sidebar.
> 1. The `inauguralproject.py` file includes a function which can be used multiple times in this notebook.
Imports and set magics:
```python
import math
from scipy import optimize
import numpy as np
import sympy as sp
import matplotlib.pyplot as plt
from sympy import symbols, Eq, solve
```
## Question 1
As the model is static and utility is strictly increasing in consumption, the consumer will always spend all of his wealth on consumption, hence $c = x$, which is substituted directly into the utility function.
We use a bounded minimizer (bounds 0, 1) to obtain the optimal values of $l$ for wage values $w$ in the range 0.5-1.5; the results are plotted in Question 2.
```python
#Utility function defined
def u_func(c, l):
return np.log(m + w * l - (t_0 * w * l + t_1 * max(w * l-kappa,0))) - v*l **(1+1/eps)/(1+1/eps)
#Variables defined
m = 1
v = 10
eps = 0.3
t_0 = 0.4
t_1 = 0.1
kappa = 0.4
#Lists pre defined
l_plot = []
c_plot = []
w_plot = []
def value_of_choice(l,m,v,eps,t_0,t_1,kappa,w):
c = m + w * l - (t_0 * w * l + t_1 * max(w * l - kappa,0))
return -u_func(c,l)
# 0.05 is added to the end of the range so that np.arange includes the endpoint w = 1.5
for w in np.arange(0.5,1.5 + 0.05,0.05):
sol_case = optimize.minimize_scalar(
value_of_choice,method='bounded',
bounds=(0,1),args=(m,v,eps,t_0,t_1,kappa,w))
l = sol_case.x
c = m + w * l - (t_0 * w * l + t_1 * max(w * l-kappa,0))
# Store results as lists
l_plot.append(f'{l:.3}')
c_plot.append(f'{c:.3}')
w_plot.append(f'{w:.3}')
#float l for plotting
l_plot = [float(i) for i in l_plot]
#Print the newly stored lists (to see that they are stored correctly)
print('l results: ' + str(l_plot))
print('c results: ' + str(c_plot))
print('w results: ' + str(w_plot))
```
l results: [0.339, 0.348, 0.356, 0.363, 0.37, 0.376, 0.382, 0.388, 0.393, 0.398, 0.4, 0.387, 0.391, 0.395, 0.399, 0.403, 0.407, 0.41, 0.413, 0.417, 0.42]
c results: ['1.1', '1.11', '1.13', '1.14', '1.16', '1.17', '1.18', '1.2', '1.21', '1.23', '1.24', '1.24', '1.26', '1.27', '1.28', '1.29', '1.3', '1.32', '1.33', '1.34', '1.35']
w results: ['0.5', '0.55', '0.6', '0.65', '0.7', '0.75', '0.8', '0.85', '0.9', '0.95', '1.0', '1.05', '1.1', '1.15', '1.2', '1.25', '1.3', '1.35', '1.4', '1.45', '1.5']
## Question 2
Looking at the plot we see that there is a kink where a higher wage results in less time spent working. This is due to the top tax bracket offsetting the gain from increased consumption for the consumer. The same can be seen for consumption: there is a point where consumption stagnates, because the utility-maximizing consumer chooses to work less at the higher wage rather than move into the top tax bracket.
```python
#Plot of labour and wage
plt.plot(w_plot, l_plot)
plt.xlabel("Wage")
plt.ylabel("Optimal labour")
plt.title("Labour")
plt.show()
#Plot of consumption and wage
plt.plot(w_plot, c_plot)
plt.xlabel("Wage")
plt.ylabel("Optimal consumption")
plt.title("Consumption")
plt.show()
```
## Questions 3 & 4
The tax revenue increases when the elasticity of labour supply falls from 0.3 to 0.1.
```python
def u_func3(c, l):
return np.log(m + wi * l - (t_0 * wi * l + t_1 * max(wi * l-kappa,0))) - v*l **(1+1/eps)/(1+1/eps)
def value_of_choice3(l,m,v,eps,t_0,t_1,kappa,wi):
c = m + wi * l - (t_0 * wi * l + t_1 * max(wi * l - kappa,0))
return -u_func3(c,l)
N = 10000
# w = np.random.uniform(size=N, low=0.5, high=1.5)
w = np.linspace(0.5, 1.5, N)
for eps in [0.3, 0.1]:
l = []
c = []
tax_revenue = 0
for i, wi in enumerate(w):
sol_case = optimize.minimize_scalar(
value_of_choice3, method='bounded',
bounds=(0,1),args=(m,v,eps,t_0,t_1,kappa,wi))
l.append(sol_case.x)
c.append(m + wi * l[-1] - (t_0 * wi * l[-1] + t_1 * max(wi * l[-1] - kappa,0)))
tax_revenue += t_0 * wi * l[i] + t_1 * np.max(wi * l[i] - kappa, 0)
print(tax_revenue)
```
1571.9643471169
3194.9003633141756
```python
# Opgave 5
From the perspective of total tax revenue, the best value of t_0 is 0.78 with no top tax bracket (t_1 = 0). However, it is unclear how this would affect deadweight loss and overall utility in society, so we cannot say whether these are actually the best values for t_0, t_1 and kappa.
We use N = 100 to keep the running time reasonable
(the loop output was interrupted to avoid printing too many results).
```
```python
N = 100
w = np.linspace(0.5, 1.5, N)
eps = 0.3
tax = {}
for t_0 in np.linspace(0,1, 10):
for t_1 in np.linspace(0, 1, 10):
for kappa in np.linspace(t_0, 1, 10):
l = []
c = []
tax_revenue = 0
for i, wi in enumerate(w):
sol_case = optimize.minimize_scalar(
value_of_choice3, method='bounded',
bounds=(0,1),args=(m,v,eps,t_0,t_1,kappa,wi))
l.append(sol_case.x)
c.append(m + wi * l[-1] - (t_0 * wi * l[-1] + t_1 * max(wi * l[-1] - kappa,0)))
            tax_revenue += t_0 * wi * l[i] + t_1 * max(wi * l[i] - kappa, 0)  # only income above kappa is taxed at the top rate
key = (round(t_0, 2), round(t_1, 2), round(kappa, 2))
tax[key] = tax_revenue
print(f"{key} -> T = {round(tax_revenue,2)}")
```
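Since every combination is stored in the `tax` dictionary, the revenue-maximising combination can be read off afterwards. A small follow-up cell along these lines (not part of the original run) would do it:

```python
# Combination of (t_0, t_1, kappa) with the highest total tax revenue
best_key = max(tax, key=tax.get)
print(f"best (t_0, t_1, kappa) = {best_key} with T = {round(tax[best_key], 2)}")
```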
```python
# Conclusion
Overall, the effect of the wage on l* and c* is close to linear, distorted only at the point where the consumer must decide whether to stay in the lower tax bracket or move into the top bracket.
Tax revenue increases when the elasticity of labour supply falls, which is useful to know when deciding which tax rates to implement.
The maximum tax revenue is found at a 78 % income tax with no top-bracket tax. However, it is unclear what is best for society as a whole, as deadweight loss is not
accounted for.
```
| ccdcea64c0045a8d06c5946683c2a5b547097b50 | 153,497 | ipynb | Jupyter Notebook | inauguralproject/inauguralproject.ipynb | NumEconCopenhagen/projects-2020-group-16 | 714cfeae2f5783abc9789aabe0f2e6a324571e8a | [
"MIT"
]
| null | null | null | inauguralproject/inauguralproject.ipynb | NumEconCopenhagen/projects-2020-group-16 | 714cfeae2f5783abc9789aabe0f2e6a324571e8a | [
"MIT"
]
| 8 | 2020-04-07T16:14:54.000Z | 2020-05-14T14:26:48.000Z | inauguralproject/inauguralproject.ipynb | NumEconCopenhagen/projects-2020-group-16 | 714cfeae2f5783abc9789aabe0f2e6a324571e8a | [
"MIT"
]
| null | null | null | 475.22291 | 50,520 | 0.682254 | true | 2,145 | Qwen/Qwen-72B | 1. YES
2. YES | 0.793106 | 0.810479 | 0.642796 | __label__eng_Latn | 0.981223 | 0.33176 |
# Batch Normalization
One way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train.
One idea along these lines is batch normalization which was proposed by [1] in 2015.
The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However, even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.
The authors of [1] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [1] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.
It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.
[1] [Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing
Internal Covariate Shift", ICML 2015.](https://arxiv.org/abs/1502.03167)
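As a concrete summary of the procedure described above, the training-time computation looks roughly like the following numpy sketch (illustrative only; the names and signature are not the assignment's required API):

```python
import numpy as np

def batchnorm_train_step(x, gamma, beta, running_mean, running_var,
                         momentum=0.9, eps=1e-5):
    # Estimate per-feature mean and variance from the minibatch
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    # Center and normalize, then apply the learnable scale and shift
    x_hat = (x - mu) / np.sqrt(var + eps)
    out = gamma * x_hat + beta
    # Maintain running averages for use at test time
    running_mean = momentum * running_mean + (1 - momentum) * mu
    running_var = momentum * running_var + (1 - momentum) * var
    return out, running_mean, running_var
```

At test time the stored `running_mean` and `running_var` are used in place of the minibatch statistics.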
```python
# As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
def print_mean_std(x,axis=0):
print(' means: ', x.mean(axis=axis))
print(' stds: ', x.std(axis=axis))
print()
```
=========== You can safely ignore the message below if you are NOT working on ConvolutionalNetworks.ipynb ===========
You will need to compile a Cython extension for a portion of this assignment.
The instructions to do this will be given in a section of the notebook below.
There will be an option for Colab users and another for Jupyter (local) users.
```python
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
```
X_train: (49000, 3, 32, 32)
y_train: (49000,)
X_val: (1000, 3, 32, 32)
y_val: (1000,)
X_test: (1000, 3, 32, 32)
y_test: (1000,)
## Batch normalization: forward
In the file `cs231n/layers.py`, implement the batch normalization forward pass in the function `batchnorm_forward`. Once you have done so, run the following to test your implementation.
Referencing the paper linked to above in [1] may be helpful!
```python
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print('Before batch normalization:')
print_mean_std(a,axis=0)
gamma = np.ones((D3,))
beta = np.zeros((D3,))
# Means should be close to zero and stds close to one
print('After batch normalization (gamma=1, beta=0)')
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print_mean_std(a_norm,axis=0)
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
# Now means should be close to beta and stds close to gamma
print('After batch normalization (gamma=', gamma, ', beta=', beta, ')')
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print_mean_std(a_norm,axis=0)
```
Before batch normalization:
means: [ -2.3814598 -13.18038246 1.91780462]
stds: [27.18502186 34.21455511 37.68611762]
After batch normalization (gamma=1, beta=0)
means: [5.32907052e-17 7.04991621e-17 4.11476409e-17]
stds: [0.99999999 1. 1. ]
After batch normalization (gamma= [1. 2. 3.] , beta= [11. 12. 13.] )
means: [11. 12. 13.]
stds: [0.99999999 1.99999999 2.99999999]
```python
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in range(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After batch normalization (test-time):')
print_mean_std(a_norm,axis=0)
```
After batch normalization (test-time):
means: [-0.03927354 -0.04349152 -0.10452688]
stds: [1.01531428 1.01238373 0.97819988]
## Batch normalization: backward
Now implement the backward pass for batch normalization in the function `batchnorm_backward`.
To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.
Once you have finished, run the following to numerically check your backward pass.
```python
# Gradient check batchnorm backward pass
np.random.seed(231)
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)
db_num = eval_numerical_gradient_array(fb, beta.copy(), dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
#You should expect to see relative errors between 1e-13 and 1e-8
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
```
dx error: 1.7029258328157158e-09
dgamma error: 7.417225040694815e-13
dbeta error: 2.8795057655839487e-12
## Batch normalization: alternative backward
In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For example, you can derive a very simple formula for the sigmoid function's backward pass by simplifying gradients on paper.
Surprisingly, it turns out that you can do a similar simplification for the batch normalization backward pass too!
In the forward pass, given a set of inputs $X=\begin{bmatrix}x_1\\x_2\\...\\x_N\end{bmatrix}$,
we first calculate the mean $\mu$ and variance $v$.
With $\mu$ and $v$ calculated, we can calculate the standard deviation $\sigma$ and normalized data $Y$.
The equations and graph illustration below describe the computation ($y_i$ is the i-th element of the vector $Y$).
\begin{align}
& \mu=\frac{1}{N}\sum_{k=1}^N x_k & v=\frac{1}{N}\sum_{k=1}^N (x_k-\mu)^2 \\
& \sigma=\sqrt{v+\epsilon} & y_i=\frac{x_i-\mu}{\sigma}
\end{align}
The meat of our problem during backpropagation is to compute $\frac{\partial L}{\partial X}$, given the upstream gradient we receive, $\frac{\partial L}{\partial Y}.$ To do this, recall the chain rule in calculus gives us $\frac{\partial L}{\partial X} = \frac{\partial L}{\partial Y} \cdot \frac{\partial Y}{\partial X}$.
The unknown/hard part is $\frac{\partial Y}{\partial X}$. We can find this by first deriving step-by-step our local gradients at
$\frac{\partial v}{\partial X}$, $\frac{\partial \mu}{\partial X}$,
$\frac{\partial \sigma}{\partial v}$,
$\frac{\partial Y}{\partial \sigma}$, and $\frac{\partial Y}{\partial \mu}$,
and then use the chain rule to compose these gradients (which appear in the form of vectors!) appropriately to compute $\frac{\partial Y}{\partial X}$.
If it's challenging to directly reason about the gradients over $X$ and $Y$ which require matrix multiplication, try reasoning about the gradients in terms of individual elements $x_i$ and $y_i$ first: in that case, you will need to come up with the derivations for $\frac{\partial L}{\partial x_i}$, by relying on the Chain Rule to first calculate the intermediate $\frac{\partial \mu}{\partial x_i}, \frac{\partial v}{\partial x_i}, \frac{\partial \sigma}{\partial x_i},$ then assemble these pieces to calculate $\frac{\partial y_i}{\partial x_i}$.
You should make sure each of the intermediary gradient derivations are all as simplified as possible, for ease of implementation.
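For reference, one common end point of that element-wise derivation (stated here so you can check your own work, not as a substitute for doing it) is, in the notation above and with the upstream gradient written as $\frac{\partial L}{\partial y_i}$,
$$
\frac{\partial L}{\partial x_i}
= \frac{1}{N\sigma}\left(N\,\frac{\partial L}{\partial y_i}
- \sum_{k=1}^{N}\frac{\partial L}{\partial y_k}
- y_i\sum_{k=1}^{N}\frac{\partial L}{\partial y_k}\,y_k\right),
$$
which becomes a few vectorized numpy lines once the $\gamma$ scaling is folded into the upstream gradient; $d\gamma$ and $d\beta$ are simple sums over the minibatch.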
After doing so, implement the simplified batch normalization backward pass in the function `batchnorm_backward_alt` and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.
```python
np.random.seed(231)
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print('dx difference: ', rel_error(dx1, dx2))
print('dgamma difference: ', rel_error(dgamma1, dgamma2))
print('dbeta difference: ', rel_error(dbeta1, dbeta2))
print('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))
```
dx difference: 1.9314251924176756e-12
dgamma difference: 0.0
dbeta difference: 0.0
speedup: 1.76x
## Fully Connected Nets with Batch Normalization
Now that you have a working implementation for batch normalization, go back to your `FullyConnectedNet` in the file `cs231n/classifiers/fc_net.py`. Modify your implementation to add batch normalization.
Concretely, when the `normalization` flag is set to `"batchnorm"` in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.
HINT: You might find it useful to define an additional helper layer similar to those in the file `cs231n/layer_utils.py`. If you decide to do so, do it in the file `cs231n/classifiers/fc_net.py`.
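For instance, a sandwich helper of the following shape (the names here are my own, not something the assignment requires) keeps the batchnorm-ReLU plumbing out of the main `loss` method; it assumes the `relu_forward`/`relu_backward` functions already in `cs231n/layers.py` and the `batchnorm_forward`/`batchnorm_backward` functions implemented above:

```python
def batchnorm_relu_forward(x, gamma, beta, bn_param):
    # Batch normalization followed by a ReLU nonlinearity
    bn_out, bn_cache = batchnorm_forward(x, gamma, beta, bn_param)
    out, relu_cache = relu_forward(bn_out)
    return out, (bn_cache, relu_cache)

def batchnorm_relu_backward(dout, cache):
    # Undo the ReLU first, then backprop through batch normalization
    bn_cache, relu_cache = cache
    dbn_out = relu_backward(dout, relu_cache)
    dx, dgamma, dbeta = batchnorm_backward(dbn_out, bn_cache)
    return dx, dgamma, dbeta
```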
```python
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
# You should expect losses between 1e-4~1e-10 for W,
# losses between 1e-08~1e-10 for b,
# and losses between 1e-08~1e-09 for beta and gammas.
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64,
normalization='batchnorm')
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
if reg == 0: print()
```
Running check with reg = 0
Initial loss: 2.2611955101340957
W1 relative error: 1.10e-04
W2 relative error: 2.85e-06
W3 relative error: 3.92e-10
b1 relative error: 8.33e-09
b2 relative error: 2.00e-07
b3 relative error: 4.78e-11
beta1 relative error: 7.33e-09
beta2 relative error: 1.89e-09
gamma1 relative error: 7.57e-09
gamma2 relative error: 1.96e-09
Running check with reg = 3.14
Initial loss: 6.996533220108303
W1 relative error: 1.98e-06
W2 relative error: 2.28e-06
W3 relative error: 1.11e-08
b1 relative error: 1.39e-08
b2 relative error: 2.36e-08
b3 relative error: 2.23e-10
beta1 relative error: 6.65e-09
beta2 relative error: 5.69e-09
gamma1 relative error: 8.80e-09
gamma2 relative error: 4.14e-09
# Batchnorm for deep networks
Run the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.
```python
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)
print('Solver with batch norm:')
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True,print_every=20)
bn_solver.train()
print('\nSolver without batch norm:')
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=20)
solver.train()
```
Solver with batch norm:
(Iteration 1 / 200) loss: 2.340974
(Epoch 0 / 10) train acc: 0.107000; val_acc: 0.115000
(Epoch 1 / 10) train acc: 0.314000; val_acc: 0.266000
(Iteration 21 / 200) loss: 2.039365
(Epoch 2 / 10) train acc: 0.385000; val_acc: 0.279000
(Iteration 41 / 200) loss: 2.041103
(Epoch 3 / 10) train acc: 0.493000; val_acc: 0.308000
(Iteration 61 / 200) loss: 1.753902
(Epoch 4 / 10) train acc: 0.531000; val_acc: 0.307000
(Iteration 81 / 200) loss: 1.246584
(Epoch 5 / 10) train acc: 0.574000; val_acc: 0.314000
(Iteration 101 / 200) loss: 1.323435
(Epoch 6 / 10) train acc: 0.649000; val_acc: 0.327000
(Iteration 121 / 200) loss: 1.134313
(Epoch 7 / 10) train acc: 0.693000; val_acc: 0.316000
(Iteration 141 / 200) loss: 1.216312
(Epoch 8 / 10) train acc: 0.740000; val_acc: 0.305000
(Iteration 161 / 200) loss: 0.735565
(Epoch 9 / 10) train acc: 0.823000; val_acc: 0.335000
(Iteration 181 / 200) loss: 0.927183
(Epoch 10 / 10) train acc: 0.805000; val_acc: 0.301000
Solver without batch norm:
(Iteration 1 / 200) loss: 2.302332
(Epoch 0 / 10) train acc: 0.129000; val_acc: 0.131000
(Epoch 1 / 10) train acc: 0.283000; val_acc: 0.250000
(Iteration 21 / 200) loss: 2.041970
(Epoch 2 / 10) train acc: 0.316000; val_acc: 0.277000
(Iteration 41 / 200) loss: 1.900473
(Epoch 3 / 10) train acc: 0.373000; val_acc: 0.282000
(Iteration 61 / 200) loss: 1.713156
(Epoch 4 / 10) train acc: 0.390000; val_acc: 0.310000
(Iteration 81 / 200) loss: 1.662209
(Epoch 5 / 10) train acc: 0.433000; val_acc: 0.300000
(Iteration 101 / 200) loss: 1.696236
(Epoch 6 / 10) train acc: 0.529000; val_acc: 0.345000
(Iteration 121 / 200) loss: 1.555476
(Epoch 7 / 10) train acc: 0.536000; val_acc: 0.301000
(Iteration 141 / 200) loss: 1.434704
(Epoch 8 / 10) train acc: 0.615000; val_acc: 0.330000
(Iteration 161 / 200) loss: 1.050906
(Epoch 9 / 10) train acc: 0.663000; val_acc: 0.343000
(Iteration 181 / 200) loss: 0.814525
(Epoch 10 / 10) train acc: 0.747000; val_acc: 0.340000
Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
```python
def plot_training_history(title, label, baseline, bn_solvers, plot_fn, bl_marker='.', bn_marker='.', labels=None):
"""utility function for plotting training history"""
plt.title(title)
plt.xlabel(label)
bn_plots = [plot_fn(bn_solver) for bn_solver in bn_solvers]
bl_plot = plot_fn(baseline)
num_bn = len(bn_plots)
for i in range(num_bn):
label='with_norm'
if labels is not None:
label += str(labels[i])
plt.plot(bn_plots[i], bn_marker, label=label)
label='baseline'
if labels is not None:
label += str(labels[0])
plt.plot(bl_plot, bl_marker, label=label)
plt.legend(loc='lower center', ncol=num_bn+1)
plt.subplot(3, 1, 1)
plot_training_history('Training loss','Iteration', solver, [bn_solver], \
lambda x: x.loss_history, bl_marker='o', bn_marker='o')
plt.subplot(3, 1, 2)
plot_training_history('Training accuracy','Epoch', solver, [bn_solver], \
lambda x: x.train_acc_history, bl_marker='-o', bn_marker='-o')
plt.subplot(3, 1, 3)
plot_training_history('Validation accuracy','Epoch', solver, [bn_solver], \
lambda x: x.val_acc_history, bl_marker='-o', bn_marker='-o')
plt.gcf().set_size_inches(15, 15)
plt.show()
```
# Batch normalization and initialization
We will now run a small experiment to study the interaction of batch normalization and weight initialization.
The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.
```python
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers_ws = {}
solvers_ws = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
bn_solver.train()
bn_solvers_ws[weight_scale] = bn_solver
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
solver.train()
solvers_ws[weight_scale] = solver
```
Running weight scale 1 / 20
Running weight scale 2 / 20
Running weight scale 3 / 20
Running weight scale 4 / 20
Running weight scale 5 / 20
Running weight scale 6 / 20
Running weight scale 7 / 20
Running weight scale 8 / 20
Running weight scale 9 / 20
Running weight scale 10 / 20
Running weight scale 11 / 20
Running weight scale 12 / 20
Running weight scale 13 / 20
Running weight scale 14 / 20
Running weight scale 15 / 20
Running weight scale 16 / 20
Running weight scale 17 / 20
Running weight scale 18 / 20
Running weight scale 19 / 20
Running weight scale 20 / 20
```python
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
best_train_accs.append(max(solvers_ws[ws].train_acc_history))
bn_best_train_accs.append(max(bn_solvers_ws[ws].train_acc_history))
best_val_accs.append(max(solvers_ws[ws].val_acc_history))
bn_best_val_accs.append(max(bn_solvers_ws[ws].val_acc_history))
final_train_loss.append(np.mean(solvers_ws[ws].loss_history[-100:]))
bn_final_train_loss.append(np.mean(bn_solvers_ws[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
plt.gca().set_ylim(1.0, 3.5)
plt.gcf().set_size_inches(15, 15)
plt.show()
```
## Inline Question 1:
Describe the results of this experiment. How does the scale of weight initialization affect models with/without batch normalization differently, and why?
## Answer:
[FILL THIS IN]
# Batch normalization and batch size
We will now run a small experiment to study the interaction of batch normalization and batch size.
The first cell will train 6-layer networks both with and without batch normalization using different batch sizes. The second cell will plot training accuracy and validation set accuracy over time.
```python
def run_batchsize_experiments(normalization_mode):
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
n_epochs=10
weight_scale = 2e-2
batch_sizes = [5,10,50]
lr = 10**(-3.5)
solver_bsize = batch_sizes[0]
print('No normalization: batch size = ',solver_bsize)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)
solver = Solver(model, small_data,
num_epochs=n_epochs, batch_size=solver_bsize,
update_rule='adam',
optim_config={
'learning_rate': lr,
},
verbose=False)
solver.train()
bn_solvers = []
for i in range(len(batch_sizes)):
b_size=batch_sizes[i]
print('Normalization: batch size = ',b_size)
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=normalization_mode)
bn_solver = Solver(bn_model, small_data,
num_epochs=n_epochs, batch_size=b_size,
update_rule='adam',
optim_config={
'learning_rate': lr,
},
verbose=False)
bn_solver.train()
bn_solvers.append(bn_solver)
return bn_solvers, solver, batch_sizes
batch_sizes = [5,10,50]
bn_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('batchnorm')
```
No normalization: batch size = 5
Normalization: batch size = 5
Normalization: batch size = 10
Normalization: batch size = 50
```python
plt.subplot(2, 1, 1)
plot_training_history('Training accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \
lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)
plt.subplot(2, 1, 2)
plot_training_history('Validation accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \
lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)
plt.gcf().set_size_inches(15, 10)
plt.show()
```
## Inline Question 2:
Describe the results of this experiment. What does this imply about the relationship between batch normalization and batch size? Why is this relationship observed?
## Answer:
[FILL THIS IN]
# Layer Normalization
Batch normalization has proved to be effective in making networks easier to train, but the dependency on batch size makes it less useful in complex networks which have a cap on the input batch size due to hardware limitations.
Several alternatives to batch normalization have been proposed to mitigate this problem; one such technique is Layer Normalization [2]. Instead of normalizing over the batch, we normalize over the features. In other words, when using Layer Normalization, each feature vector corresponding to a single datapoint is normalized based on the sum of all terms within that feature vector.
[2] [Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. "Layer Normalization." stat 1050 (2016): 21.](https://arxiv.org/pdf/1607.06450.pdf)
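In plain numpy the essential difference from batch normalization is only the axis over which the statistics are computed; a rough sketch (not the assignment's required implementation) is:

```python
# Layer normalization: statistics are computed per example, across its features
mu = x.mean(axis=1, keepdims=True)       # shape (N, 1)
var = x.var(axis=1, keepdims=True)       # shape (N, 1)
x_hat = (x - mu) / np.sqrt(var + eps)    # every row now has zero mean, unit variance
out = gamma * x_hat + beta               # gamma and beta still have shape (D,)
```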
## Inline Question 3:
Which of these data preprocessing steps is analogous to batch normalization, and which is analogous to layer normalization?
1. Scaling each image in the dataset, so that the RGB channels for each row of pixels within an image sums up to 1.
2. Scaling each image in the dataset, so that the RGB channels for all pixels within an image sums up to 1.
3. Subtracting the mean image of the dataset from each image in the dataset.
4. Setting all RGB values to either 0 or 1 depending on a given threshold.
## Answer:
[FILL THIS IN]
# Layer Normalization: Implementation
Now you'll implement layer normalization. This step should be relatively straightforward, as conceptually the implementation is almost identical to that of batch normalization. One significant difference though is that for layer normalization, we do not keep track of the moving moments, and the testing phase is identical to the training phase, where the mean and variance are directly calculated per datapoint.
Here's what you need to do:
* In `cs231n/layers.py`, implement the forward pass for layer normalization in the function `layernorm_forward`.
Run the cell below to check your results.
* In `cs231n/layers.py`, implement the backward pass for layer normalization in the function `layernorm_backward`.
Run the second cell below to check your results.
* Modify `cs231n/classifiers/fc_net.py` to add layer normalization to the `FullyConnectedNet`. When the `normalization` flag is set to `"layernorm"` in the constructor, you should insert a layer normalization layer before each ReLU nonlinearity.
Run the third cell below to run the batch size experiment on layer normalization.
```python
# Check the training-time forward pass by checking means and variances
# of features both before and after layer normalization
# Simulate the forward pass for a two-layer network
np.random.seed(231)
N, D1, D2, D3 =4, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print('Before layer normalization:')
print_mean_std(a,axis=1)
gamma = np.ones(D3)
beta = np.zeros(D3)
# Means should be close to zero and stds close to one
print('After layer normalization (gamma=1, beta=0)')
a_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})
print_mean_std(a_norm,axis=1)
gamma = np.asarray([3.0,3.0,3.0])
beta = np.asarray([5.0,5.0,5.0])
# Now means should be close to beta and stds close to gamma
print('After layer normalization (gamma=', gamma, ', beta=', beta, ')')
a_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})
print_mean_std(a_norm,axis=1)
```
Before layer normalization:
means: [-59.06673243 -47.60782686 -43.31137368 -26.40991744]
stds: [10.07429373 28.39478981 35.28360729 4.01831507]
After layer normalization (gamma=1, beta=0)
means: [-4.81096644e-16 0.00000000e+00 7.40148683e-17 -5.55111512e-16]
stds: [0.99999995 0.99999999 1. 0.99999969]
After layer normalization (gamma= [3. 3. 3.] , beta= [5. 5. 5.] )
means: [5. 5. 5. 5.]
stds: [2.99999985 2.99999998 2.99999999 2.99999907]
```python
# Gradient check batchnorm backward pass
np.random.seed(231)
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
ln_param = {}
fx = lambda x: layernorm_forward(x, gamma, beta, ln_param)[0]
fg = lambda a: layernorm_forward(x, a, beta, ln_param)[0]
fb = lambda b: layernorm_forward(x, gamma, b, ln_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)
db_num = eval_numerical_gradient_array(fb, beta.copy(), dout)
_, cache = layernorm_forward(x, gamma, beta, ln_param)
dx, dgamma, dbeta = layernorm_backward(dout, cache)
#You should expect to see relative errors between 1e-12 and 1e-8
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
```
dx error: 1.4336160411201157e-09
dgamma error: 4.519489546032799e-12
dbeta error: 2.276445013433725e-12
# Layer Normalization and batch size
We will now run the previous batch size experiment with layer normalization instead of batch normalization. Compared to the previous experiment, you should see a markedly smaller influence of batch size on the training history!
```python
ln_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('layernorm')
plt.subplot(2, 1, 1)
plot_training_history('Training accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \
lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)
plt.subplot(2, 1, 2)
plot_training_history('Validation accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \
lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)
plt.gcf().set_size_inches(15, 10)
plt.show()
```
## Inline Question 4:
When is layer normalization likely to not work well, and why?
1. Using it in a very deep network
2. Having a very small dimension of features
3. Having a high regularization term
## Answer:
[FILL THIS IN]
| 30764120c947b2e9527b96d02e5b269c06379b2e | 813,581 | ipynb | Jupyter Notebook | assignments/2020/assignment2/BatchNormalization.ipynb | yafei-liu/cs231n.github.io | 411057406046c115306e1ee87dd0353a2dc4572c | [
"MIT"
]
| null | null | null | assignments/2020/assignment2/BatchNormalization.ipynb | yafei-liu/cs231n.github.io | 411057406046c115306e1ee87dd0353a2dc4572c | [
"MIT"
]
| null | null | null | assignments/2020/assignment2/BatchNormalization.ipynb | yafei-liu/cs231n.github.io | 411057406046c115306e1ee87dd0353a2dc4572c | [
"MIT"
]
| null | null | null | 746.404587 | 114,471 | 0.784967 | true | 9,037 | Qwen/Qwen-72B | 1. YES
2. YES | 0.682574 | 0.824462 | 0.562756 | __label__eng_Latn | 0.915528 | 0.145801 |
```python
from IPython.display import Image
Image('../../Python_probability_statistics_machine_learning_2E.png',width=200)
```
<!-- # TODO: Elastic Net -->
<!-- # TODO: Nice intuition about ridge regression Data_Analysis_Data_Mining_Azzalini, p. 64 -->
We have referred to regularization in the section [ch:ml:sec:logreg](#ch:ml:sec:logreg), but we want to develop this important
idea more fully. Regularization is the mechanism by which we
navigate the bias/variance trade-off. To get started, let's
consider a classic constrained least squares problem,
$$
\begin{aligned}
& \underset{\mathbf{x}}{\text{minimize}}
& & \Vert\mathbf{x}\Vert_2^2 \\
& \text{subject to:}
& & x_0 + 2 x_1 = 1
\end{aligned}
$$
where $\Vert\mathbf{x}\Vert_2=\sqrt{x_0^2+x_1^2}$ is the
$L_2$ norm. Without the constraint, it would be easy to minimize
the objective function --- just take $\mathbf{x}=0$. Otherwise,
suppose we somehow know that $\Vert\mathbf{x}\Vert_2<c$, then
the locus of points defined by this inequality is the circle in
[Figure](#fig:regularization_001). The constraint is the line in
the same figure. Because every value of $c$ defines a circle, the
constraint is satisfied when the circle touches the line. The
circle can touch the line at many different points, but we are
only interested in the smallest such circle because this is a
minimization problem. Intuitively, this means that we *inflate* a
$L_2$ ball at the origin and stop when it just touches the
constraint. The point of contact is our $L_2$ minimization
solution.
<!-- dom:FIGURE: [fig-machine_learning/regularization_001.png, width=500 frac=0.75] The solution of the constrained $L_2$ minimization problem is at the point where the constraint (dark line) intersects the $L_2$ ball (gray circle) centered at the origin. The point of intersection is indicated by the dark circle. The two neighboring squares indicate points on the line that are close to the solution. <div id="fig:regularization_001"></div> -->
<!-- begin figure -->
<div id="fig:regularization_001"></div>
<p>The solution of the constrained $L_2$ minimization problem is at the point where the constraint (dark line) intersects the $L_2$ ball (gray circle) centered at the origin. The point of intersection is indicated by the dark circle. The two neighboring squares indicate points on the line that are close to the solution.</p>
<!-- end figure -->
We can obtain the same result using the method of Lagrange
multipliers. We can rewrite the entire $L_2$ minimization problem
as one objective function using the Lagrange multiplier,
$\lambda$,
$$
J(x_0,x_1,\lambda) = x_0^2+x_1^2 + \lambda (1-x_0-2 x_1)
$$
and solve this as an ordinary function using calculus. Let's
do this using Sympy.
```python
import sympy as S
S.var('x:2 l',real=True)
J=S.Matrix([x0,x1]).norm()**2 + l*(1-x0-2*x1)
sol=S.solve(map(J.diff,[x0,x1,l]))
print(sol)
```
{l: 2/5, x0: 1/5, x1: 2/5}
**Programming Tip.**
Using the `Matrix` object is overkill for this problem but it
does demonstrate how Sympy's matrix machinery works. In this case,
we are using the `norm` method to compute the $L_2$ norm of the
given elements. Using `S.var` defines Sympy variables and injects
them into the global namespace. It is more Pythonic to do
something like `x0 = S.symbols('x0',real=True)` instead but the
other way is quicker, especially for variables with many
dimensions.
The solution defines the exact point where the line is
tangent to the circle in [Figure](#fig:regularization_001). The
Lagrange multiplier has incorporated the constraint into the objective
function.
```python
%matplotlib inline
import numpy as np
from numpy import pi, linspace, sqrt
from matplotlib.patches import Circle
from matplotlib.pylab import subplots
x1 = linspace(-1,1,10)
dx=linspace(.7,1.3,3)
fline = lambda x:(1-x)/2.
fig,ax=subplots()
_=ax.plot(dx*1/5,fline(dx*1/5),'s',ms=10,color='gray')
_=ax.plot(x1,fline(x1),color='gray',lw=3)
_=ax.add_patch(Circle((0,0),1/sqrt(5),alpha=0.3,color='gray'))
_=ax.plot(1/5,2/5,'o',color='k',ms=15)
_=ax.set_xlabel('$x_1$',fontsize=24)
_=ax.set_ylabel('$x_2$',fontsize=24)
_=ax.axis((-0.6,0.6,-0.6,0.6))
ax.set_aspect(1)
fig.tight_layout()
fig.savefig('fig-machine_learning/regularization_001.png')
```
There is something subtle and very important about the nature of the solution,
however. Notice that there are other points very close to the solution on the
circle, indicated by the squares in [Figure](#fig:regularization_001). This
closeness could be a good thing, in case it helps us actually find a solution
in the first place, but it may be unhelpful in so far as it creates ambiguity.
Let's hold that thought and try the same problem using the $L_1$ norm instead
of the $L_2$ norm. Recall that
$$
\Vert \mathbf{x}\Vert_1 = \sum_{i=1}^d \vert x_i \vert
$$
where $d$ is the dimension of the vector $\mathbf{x}$. Thus, we can
reformulate the same problem in the $L_1$ norm as in the following,
$$
\begin{aligned}
& \underset{\mathbf{x}}{\text{minimize}}
& & \Vert\mathbf{x}\Vert_1 \\
& \text{subject to:}
& & x_1 + 2 x_2 = 1
\end{aligned}
$$
It turns out that this problem is somewhat harder to
solve using Sympy, but we have convex optimization modules in Python
that can help.
```python
from cvxpy import Variable, Problem, Minimize, norm1, norm
x=Variable((2,1),name='x')
constr=[np.matrix([[1,2]])*x==1]
obj=Minimize(norm1(x))
p= Problem(obj,constr)
p.solve()
print(x.value)
```
[[1.31394645e-29]
[5.00000000e-01]]
**Programming Tip.**
The `cvxpy` module provides a unified and accessible interface to the powerful
`cvxopt` convex optimization package, as well as other open-source solver
packages.
As shown in [Figure](#fig:regularization_002), the constant-norm
contour in the $L_1$ norm is shaped like a diamond instead of a circle.
Furthermore, the solutions found in each case are different. Geometrically,
this is because inflating the circular $L_2$ reaches out in all directions
whereas the $L_1$ ball creeps out along the principal axes. This effect is
much more pronounced in higher dimensional spaces where $L_1$-balls get more
spikey [^spikey]. Like the $L_2$ case, there are also neighboring points on
the constraint line, but notice that these are not close to the boundary of the
corresponding $L_1$ ball, as they were in the $L_2$ case. This means that
these would be harder to confuse with the optimal solution because they
correspond to a substantially different $L_1$ ball.
[^spikey]: We discussed the geometry of high dimensional space
when we covered the curse of dimensionality in the
statistics chapter.
To double-check our earlier $L_2$ result, we can also use the
`cvxpy` module to find the $L_2$ solution as in the following
code,
```python
constr=[np.matrix([[1,2]])*x==1]
obj=Minimize(norm(x,2)) #L2 norm
p= Problem(obj,constr)
p.solve()
print(x.value)
```
[[0.2]
[0.4]]
The only change to the code is the $L_2$ norm and we get
the same solution as before.
Let's see what happens in higher dimensions for both $L_2$ and
$L_1$ as we move from two dimensions to four dimensions.
```python
x=Variable((4,1),name='x')
constr=[np.matrix([[1,2,3,4]])*x==1]
obj=Minimize(norm1(x))
p= Problem(obj,constr)
p.solve()
print(x.value)
```
[[-3.64540020e-29]
[-7.29271858e-29]
[-8.32902339e-23]
[ 2.50000000e-01]]
And also in the $L_2$ case with the following code,
```python
constr=[np.matrix([[1,2,3,4]])*x==1]
obj=Minimize(norm(x,2))
p= Problem(obj,constr)
p.solve()
print(x.value)
```
[[0.03333333]
[0.06666667]
[0.1 ]
[0.13333333]]
Note that the $L_1$ solution has selected out only one
dimension for the solution, as the other components are
effectively zero. This is not so with the $L_2$ solution, which
has meaningful elements in multiple coordinates. This is because
the $L_1$ problem has many pointy corners in the four dimensional
space that poke at the hyperplane that is defined by the
constraint. This essentially means the subsets (namely, the points
at the corners) are found as solutions because these touch the
hyperplane. This effect becomes more pronounced in higher
dimensions, which is the main benefit of using the $L_1$ norm
as we will see in the next section.
```python
from matplotlib.patches import Rectangle, RegularPolygon
r=RegularPolygon((0,0),4,1/2,pi/2,alpha=0.5,color='gray')
fig,ax=subplots()
dx = np.array([-0.1,0.1])
_=ax.plot(dx,fline(dx),'s',ms=10,color='gray')
_=ax.plot(x1,fline(x1),color='gray',lw=3)
_=ax.plot(0,1/2,'o',color='k',ms=15)
_=ax.add_patch(r)
_=ax.set_xlabel('$x_1$',fontsize=24)
_=ax.set_ylabel('$x_2$',fontsize=24)
_=ax.axis((-0.6,0.6,-0.6,0.6))
_=ax.set_aspect(1)
fig.tight_layout()
fig.savefig('fig-machine_learning/regularization_002.png')
```
<!-- dom:FIGURE: [fig-machine_learning/regularization_002.png, width=500 frac=0.75] The diamond is the $L_1$ ball in two dimensions and the line is the constraint. The point of intersection is the solution to the optimization problem. Note that for $L_1$ optimization, the two nearby points on the constraint (squares) do not touch the $L_1$ ball. Compare this with [Figure](#fig:regularization_001). <div id="fig:regularization_002"></div> -->
<!-- begin figure -->
<div id="fig:regularization_002"></div>
<p>The diamond is the $L_1$ ball in two dimensions and the line is the constraint. The point of intersection is the solution to the optimization problem. Note that for $L_1$ optimization, the two nearby points on the constraint (squares) do not touch the $L_1$ ball. Compare this with [Figure](#fig:regularization_001).</p>
<!-- end figure -->
<!-- p. 168 D:\Volume2_Indexed\Statistical_Machine_Learning_Notes_Tibshirani.pdf -->
## Ridge Regression
Now that we have a sense of the geometry of the situation, let's revisit
our classic linear regression problem. To recap, we want to solve the following
problem,
$$
\min_{\boldsymbol{\beta}\in \mathbb{R}^p} \Vert y - \mathbf{X}\boldsymbol{\beta}\Vert
$$
where $\mathbf{X}=\left[
\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_p \right]$ and $\mathbf{x}_i\in
\mathbb{R}^n$. Furthermore, we assume that $\mathbf{X}$ has full
rank. Linear regression produces
the $\boldsymbol{\beta}$ that minimizes the mean squared error above. In the
case where $p=n$, there is a unique solution to this problem. However, when
$n<p$ (fewer equations than unknowns), there are infinitely many solutions.
To make this concrete, let's work this out using Sympy. First,
let's define an example $\mathbf{X}$ and $\mathbf{y}$ matrix,
```python
import sympy as S
from sympy import Matrix
X = Matrix([[1,2,3],
[3,4,5]])
y = Matrix([[1,2]]).T
```
Now, we can define our coefficient vector $\boldsymbol{\beta}$
using the following code,
```python
b0,b1,b2=S.symbols('b:3',real=True)
beta = Matrix([[b0,b1,b2]]).T # transpose
```
Next, we define the objective function we are trying to minimize
```python
obj=(X*beta -y).norm(ord=2)**2
```
**Programming Tip.**
The Sympy `Matrix` class has useful methods like the `norm` function
used above to define the objective function. The `ord=2` means we want
to use the $L_2$ norm. The expression in parenthesis evaluates to a
`Matrix` object.
Note that it is helpful to define real variables using
the keyword argument whenever applicable because it relieves
Sympy's internal machinery of dealing with complex numbers.
Finally, we can use calculus to solve this by setting the
derivatives of the objective function to zero.
```python
sol=S.solve([obj.diff(i) for i in beta])
beta.subs(sol)
```
$\displaystyle \left[\begin{matrix}b_{2}\\\frac{1}{2} - 2 b_{2}\\b_{2}\end{matrix}\right]$
Notice that the solution does not uniquely specify all the components
of the `beta` variable. This is a consequence of the $n<p$ nature of this
problem where $n=2$ and $p=3$. While the existence of this ambiguity does
not alter the solution,
```python
obj.subs(sol)
```
$\displaystyle 0$
But it does change the length of the solution vector
`beta`,
```python
beta.subs(sol).norm(2)
```
$\displaystyle \sqrt{2 b_{2}^{2} + \left(2 b_{2} - \frac{1}{2}\right)^{2}}$
If we want to minimize this length we can easily
use the same calculus as before,
```python
S.solve((beta.subs(sol).norm()**2).diff())
```
[1/6]
This provides the solution of minimum length
in the $L_2$ sense,
```python
betaL2=beta.subs(sol).subs(b2,S.Rational(1,6))
betaL2
```
$\displaystyle \left[\begin{matrix}\frac{1}{6}\\\frac{1}{6}\\\frac{1}{6}\end{matrix}\right]$
But what is so special about solutions of minimum length? For machine
learning, driving the objective function to zero is symptomatic of overfitting
the data. Usually, at the zero bound, the machine learning method has
essentially memorized the training data, which is bad for generalization. Thus,
we can effectively stall this problem by defining a region for the solution
that is away from the zero-bound.
$$
\begin{aligned}
& \underset{\boldsymbol{\beta}}{\text{minimize}}
& & \Vert y - \mathbf{X}\boldsymbol{\beta}\Vert_2^2 \\
& \text{subject to:}
& & \Vert\boldsymbol{\beta}\Vert_2 < c
\end{aligned}
$$
where $c$ is the tuning parameter. Using the same process as before,
we can re-write this as the following,
$$
\min_{\boldsymbol{\beta}\in\mathbb{R}^p}\Vert y-\mathbf{X}\boldsymbol{\beta}\Vert_2^2 +\alpha\Vert\boldsymbol{\beta}\Vert_2^2
$$
where $\alpha$ is the tuning parameter. These are the *penalized* or
Lagrange forms of these problems derived from the constrained versions. The
objective function is penalized by the $\Vert\boldsymbol{\beta}\Vert_2$ term.
For $L_2$ penalization, this is called *ridge* regression. This is
implemented in Scikit-learn as `Ridge`. The following code sets this up for
our example,
```python
from sklearn.linear_model import Ridge
clf = Ridge(alpha=100.0,fit_intercept=False)
clf.fit(np.array(X).astype(float),np.array(y).astype(float))
```
Ridge(alpha=100.0, copy_X=True, fit_intercept=False, max_iter=None,
normalize=False, random_state=None, solver='auto', tol=0.001)
Note that `alpha` scales the penalty on
$\Vert\boldsymbol{\beta}\Vert_2$. We set the `fit_intercept=False` argument to
omit the extra offset term from our example. The corresponding solution is the
following,
```python
print(clf.coef_)
```
[[0.0428641 0.06113005 0.07939601]]
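Because the ridge objective is quadratic, the same coefficients also follow in closed form from the regularized normal equations, $\boldsymbol{\beta}=(\mathbf{X}^T\mathbf{X}+\alpha\mathbf{I})^{-1}\mathbf{X}^T\mathbf{y}$. As a quick numerical cross-check (not in the original text),

```python
Xf = np.array(X).astype(float)
yf = np.array(y).astype(float)
# Regularized normal equations; should reproduce clf.coef_ above
print(np.linalg.solve(Xf.T @ Xf + 100.0*np.eye(3), Xf.T @ yf).flatten())
```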
To double-check the solution, we can use some optimization tools from
Scipy and our previous Sympy analysis, as in the following,
```python
from scipy.optimize import minimize
f = S.lambdify((b0,b1,b2),obj+beta.norm()**2*100.)
g = lambda x:f(x[0],x[1],x[2])
out = minimize(g,[.1,.2,.3]) # initial guess
out.x
```
array([0.0428641 , 0.06113005, 0.07939601])
**Programming Tip.**
We had to define the additional `g` function from the lambda function we
created from the Sympy expression in `f` because the `minimize` function
expects a single vector of parameters as input instead of three separate arguments.
This produces the same answer as the `Ridge` object. To
better understand the meaning of this result, we can re-compute the
mean squared error solution to this problem in one step using matrix
algebra instead of calculus,
```python
betaLS=X.T*(X*X.T).inv()*y
betaLS
```
$\displaystyle \left[\begin{matrix}\frac{1}{6}\\\frac{1}{6}\\\frac{1}{6}\end{matrix}\right]$
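This is precisely the minimum-norm least-squares solution, i.e., what the Moore-Penrose pseudoinverse of $\mathbf{X}$ applied to $\mathbf{y}$ returns; as a numerical aside (not in the original text),

```python
# Pseudoinverse solution; should agree with betaLS = [1/6, 1/6, 1/6]
print(np.linalg.pinv(np.array(X).astype(float)) @ np.array(y).astype(float))
```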
Notice that this solves the posited problem exactly,
```python
X*betaLS-y
```
$\displaystyle \left[\begin{matrix}0\\0\end{matrix}\right]$
This means that the first term in the objective function
goes to zero,
$$
\Vert y-\mathbf{X}\boldsymbol{\beta}_{LS}\Vert=0
$$
But, let's examine the $L_2$ length of this solution versus
the ridge regression solution,
```python
print(betaLS.norm().evalf(), np.linalg.norm(clf.coef_))
```
0.288675134594813 0.10898596412575512
Thus, the ridge regression solution is shorter in the $L_2$
sense, but the first term in the objective function is not zero for
ridge regression,
```python
print((y-X*clf.coef_.T).norm()**2)
```
1.86870864136429
The ridge regression solution trades fitting error
($\Vert y-\mathbf{X} \boldsymbol{\beta}\Vert_2$) for solution
length ($\Vert\boldsymbol{\beta}\Vert_2$).
Let's see this in action with a familiar example from
[ch:stats:sec:nnreg](#ch:stats:sec:nnreg). Consider [Figure](#fig:regularization_003).
For this example, we created our usual chirp signal and attempted to
fit it with a high-dimensional polynomial, as we did in
the section [ch:ml:sec:cv](#ch:ml:sec:cv). The lower panel is the same except with ridge
regression. The shaded gray area is the space between the true signal
and the approximant in both cases. The horizontal hash marks indicate
the subset of $x_i$ values that each regressor was trained on.
Thus, the training set represents a non-uniform sample of the
underlying chirp waveform. The top panel shows the usual polynomial
regression. Note that the regressor fits the given points extremely
well, but fails at the endpoint. The ridge regressor misses many of
the points in the middle, as indicated by the gray area, but does not
overshoot at the ends as much as the plain polynomial regression. This
is the basic trade-off for ridge regression. The Jupyter
notebook corresponding to this section has the code for this graph, but the main steps
are shown in the following,
```python
# create chirp signal
xi = np.linspace(0,1,100)[:,None]
# sample chirp randomly
xin= np.sort(np.random.choice(xi.flatten(),20,replace=False))[:,None]
# create sampled waveform
y = np.cos(2*pi*(xin+xin**2))
# create full waveform for reference
yi = np.cos(2*pi*(xi+xi**2))
# create polynomial features
from sklearn.preprocessing import PolynomialFeatures
qfit = PolynomialFeatures(degree=8) # quadratic
Xq = qfit.fit_transform(xin)
# reformat input as polynomial
Xiq = qfit.fit_transform(xi)
from sklearn.linear_model import LinearRegression
lr=LinearRegression() # create linear model
lr.fit(Xq,y) # fit linear model
# create ridge regression model and fit
clf = Ridge(alpha=1e-9,fit_intercept=False)
clf.fit(Xq,y)
```
Ridge(alpha=1e-09, copy_X=True, fit_intercept=False, max_iter=None,
normalize=False, random_state=None, solver='auto', tol=0.001)
```python
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
import numpy as np
from numpy import cos, pi
np.random.seed(1234567)
xi = np.linspace(0,1,100)[:,None]
xin = np.linspace(0,1,20)[:,None]
xin= np.sort(np.random.choice(xi.flatten(),20,replace=False))[:,None]
f0 = 1 # init frequency
BW = 2
y = cos(2*pi*(f0*xin+(BW/2.0)*xin**2))
yi = cos(2*pi*(f0*xi+(BW/2.0)*xi**2))
qfit = PolynomialFeatures(degree=8) # quadratic
Xq = qfit.fit_transform(xin)
Xiq = qfit.fit_transform(xi)
lr=LinearRegression() # create linear model
_=lr.fit(Xq,y)
fig,axs=subplots(2,1,sharex=True,sharey=True)
fig.set_size_inches((6,6))
ax=axs[0]
_=ax.plot(xi,yi,label='true',ls='--',color='k')
_=ax.plot(xi,lr.predict(Xiq),label=r'$\beta_{LS}$',color='k')
_=ax.legend(loc=0)
_=ax.set_ylabel(r'$\hat{y}$ ',fontsize=22,rotation='horizontal')
_=ax.fill_between(xi.flatten(),yi.flatten(),lr.predict(Xiq).flatten(),color='gray',alpha=.3)
_=ax.set_title('Polynomial Regression of Chirp Signal')
_=ax.plot(xin, -1.5+np.array([0.01]*len(xin)), '|', color='k',mew=3)
clf = Ridge(alpha=1e-9,fit_intercept=False)
_=clf.fit(Xq,y)
ax=axs[1]
_=ax.plot(xi,yi,label=r'true',ls='--',color='k')
_=ax.plot(xi,clf.predict(Xiq),label=r'$\beta_{RR}$',color='k')
_=ax.legend(loc=(0.25,0.70))
_=ax.fill_between(xi.flatten(),yi.flatten(),clf.predict(Xiq).flatten(),color='gray',alpha=.3)
# add rug plot
_=ax.plot(xin, -1.5+np.array([0.01]*len(xin)), '|', color='k',mew=3)
_=ax.set_xlabel('$x$',fontsize=22)
_=ax.set_ylabel(r'$\hat{y}$ ',fontsize=22,rotation='horizontal')
_=ax.set_title('Ridge Regression of Chirp Signal')
fig.savefig('fig-machine_learning/regularization_003.png')
```
<!-- dom:FIGURE: [fig-machine_learning/regularization_003.png, width=500 frac=0.85] The top figure shows polynomial regression and the lower panel shows polynomial ridge regression. The ridge regression does not match as well throughout most of the domain, but it does not flare as violently at the ends. This is because the ridge constraint holds the coefficient vector down at the expense of poorer performance along the middle of the domain. <div id="fig:regularization_003"></div> -->
<!-- begin figure -->
<div id="fig:regularization_003"></div>
<p>The top figure shows polynomial regression and the lower panel shows polynomial ridge regression. The ridge regression does not match as well throughout most of the domain, but it does not flare as violently at the ends. This is because the ridge constraint holds the coefficient vector down at the expense of poorer performance along the middle of the domain.</p>
<!-- end figure -->
## Lasso Regression
Lasso regression follows the same basic pattern as ridge regression,
except with the $L_1$ norm in the objective function.
$$
\min_{\boldsymbol{\beta}\in\mathbb{R}^p}\Vert y-\mathbf{X}\boldsymbol{\beta}\Vert^2 +\alpha\Vert\boldsymbol{\beta}\Vert_1
$$
The interface in Scikit-learn is likewise the same.
The following is the same problem as before using lasso
instead of ridge regression,
```python
X = np.matrix([[1,2,3],
[3,4,5]])
y = np.matrix([[1,2]]).T
from sklearn.linear_model import Lasso
lr = Lasso(alpha=1.0,fit_intercept=False)
_=lr.fit(X,y)
print(lr.coef_)
```
[0. 0. 0.32352941]
As before, we can use the optimization tools in Scipy to solve this
also,
```python
from scipy.optimize import fmin
obj = 1/4.*(X*beta-y).norm(2)**2 + beta.norm(1)*l
f = S.lambdify((b0,b1,b2),obj.subs(l,1.0))
g = lambda x:f(x[0],x[1],x[2])
fmin(g,[0.1,0.2,0.3])
```
Optimization terminated successfully.
Current function value: 0.360297
Iterations: 121
Function evaluations: 221
array([2.27469304e-06, 4.02831864e-06, 3.23134859e-01])
**Programming Tip.**
The `fmin` function from Scipy's optimization module uses an
algorithm that does not depend upon derivatives. This is useful
because, unlike the $L_2$ norm, the $L_1$ norm has sharp corners
that make it harder to estimate derivatives.
This result matches the previous one from the
Scikit-learn `Lasso` object. Solving it using Scipy is motivating
and provides a good sanity check, but specialized algorithms are
required in practice. The following code block re-runs the lasso
with varying $\alpha$ and plots the coefficients in [Figure](#fig:regularization_004). Notice that as $\alpha$ increases, all
but one of the coefficients is driven to zero. Increasing $\alpha$
makes the trade-off between fitting the data in the $L_1$ sense
and wanting to reduce the number of nonzero coefficients
(equivalently, the number of features used) in the model. For a
given problem, it may be more practical to focus on reducing the
number of features in the model (i.e., large $\alpha$) than the
quality of the data fit in the training data. The lasso provides a
clean way to navigate this trade-off.
The following code loops over a set of $\alpha$ values and
collects the corresponding lasso coefficients to be plotted
in [Figure](#fig:regularization_004)
```python
o=[]
alphas= np.logspace(-3,0,10)
for a in alphas:
clf = Lasso(alpha=a,fit_intercept=False)
_=clf.fit(X,y)
o.append(clf.coef_)
```
```python
fig,ax=subplots()
fig.set_size_inches((8,5))
k=np.vstack(o)
ls = ['-','--',':','-.']
for i in range(k.shape[1]):
_=ax.semilogx(alphas,k[:,i],'o-',
label='coef %d'%(i),
color='k',ls=ls[i],
alpha=.8,)
_=ax.axis(ymin=-1e-1)
_=ax.legend(loc=0)
_=ax.set_xlabel(r'$\alpha$',fontsize=20)
_=ax.set_ylabel(r'Lasso coefficients',fontsize=16)
fig.tight_layout()
fig.savefig('fig-machine_learning/regularization_004.png')
```
<!-- dom:FIGURE: [fig-machine_learning/regularization_004.png, width=500 frac=0.85] As $\alpha$ increases, more of the model coefficients are driven to zero for lasso regression. <div id="fig:regularization_004"></div> -->
<!-- begin figure -->
<div id="fig:regularization_004"></div>
<p>As $\alpha$ increases, more of the model coefficients are driven to zero for lasso regression.</p>
<!-- end figure -->
| 9a4bd3632013dfe897870b40c1a15ad4a566cf17 | 309,096 | ipynb | Jupyter Notebook | chapter/machine_learning/regularization.ipynb | derakding/Python-for-Probability-Statistics-and-Machine-Learning-2E | 9d12a298d43ae285d9549a79bb5544cf0a9b7516 | [
"MIT"
]
| 224 | 2019-05-07T08:56:01.000Z | 2022-03-25T15:50:41.000Z | chapter/machine_learning/regularization.ipynb | derakding/Python-for-Probability-Statistics-and-Machine-Learning-2E | 9d12a298d43ae285d9549a79bb5544cf0a9b7516 | [
"MIT"
]
| 9 | 2019-08-27T12:57:17.000Z | 2021-09-21T15:45:13.000Z | chapter/machine_learning/regularization.ipynb | derakding/Python-for-Probability-Statistics-and-Machine-Learning-2E | 9d12a298d43ae285d9549a79bb5544cf0a9b7516 | [
"MIT"
]
| 73 | 2019-05-25T07:15:47.000Z | 2022-03-07T00:22:37.000Z | 227.276471 | 176,652 | 0.914599 | true | 6,994 | Qwen/Qwen-72B | 1. YES
2. YES | 0.746139 | 0.853913 | 0.637138 | __label__eng_Latn | 0.984592 | 0.318615 |
# Chapter 7: Transformations of Random Vectors
```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from numpy.random import *
```
```python
def SampleMean(Π):
N = len(Π)
if N == 0:
        return False # not computable (empty sample)
else:
return sum(Π)/len(Π)
def SampleVariance(Π):
    m=SampleMean(Π) # sample mean
return SampleMean( (Π-m*np.ones_like(Π))**2 )
def Cov(X,Y):
mx = SampleMean(X)
my = SampleMean(Y)
return SampleMean(X*Y)-mx*my
```
```python
def _gauss(x,m,v):
return 1.0/np.sqrt(2*np.pi*v) * np.exp(-(x-m)**2/(2*v))
gauss = np.vectorize(_gauss,excluded=['m','v'])
```
## 7.1 Addition and Scalar Multiplication of Random Variables
## 7.1.1 Distribution of the Sum $Z = X + Y$
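When $X$ and $Y$ are independent, the density of the sum is the convolution of the two densities,
$$
p_Z(z) = \int_{-\infty}^{\infty} p_X(z-y)\, p_Y(y)\, dy ,
$$
which is exactly the integral evaluated symbolically in the example below, with $p_Y(y) = 1/(b-a)$ the uniform density on $[a,b]$.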
### Example 7.1
```python
import sympy as sym
sym.init_printing(use_latex='mathjax') # display nicely formatted equations
def Example_7_1(np_zz):
    x, y, z = sym.symbols('x y z')
m, v = sym.symbols('m v')
a, b = sym.symbols('a b')
    # pX(x): Gaussian; pXzy = pX(z-y)
pX = 1/sym.sqrt(2*sym.pi*v) * sym.exp(-(x-m)**2/(2*v))
gauss_erf = sym.simplify( sym.integrate(pX,(x, 0, sym.Symbol('A'))) )
pXzy = pX.subs([(x,z-y)])
#print(pXzy)
pZ = sym.simplify(sym.integrate(pXzy/(b-a), (y, a, b)))
list_pZ = [ pZ.subs([(z,_z),(m,0),(v,1),(a,-2),(b,5)]) for _z in np_zz ]
return pZ, np.array(list_pZ), gauss_erf
np_zz = np.linspace(-6,10,100)
sym_pZ, np_pZ, gauss_erf = Example_7_1(np_zz)
sym_pZ
```
$$\frac{1}{2 \left(a - b\right)} \left(\operatorname{erf}{\left (\frac{\sqrt{2}}{2 \sqrt{v}} \left(a + m - z\right) \right )} - \operatorname{erf}{\left (\frac{\sqrt{2}}{2 \sqrt{v}} \left(b + m - z\right) \right )}\right)$$
The cumulative integral of the Gaussian distribution used in the worked solution. Note that erf$(-x)$=$-$erf$(x)$!
```python
gauss_erf
```
$$\frac{1}{2} \operatorname{erf}{\left (\frac{\sqrt{2} m}{2 \sqrt{v}} \right )} + \frac{1}{2} \operatorname{erf}{\left (\frac{\sqrt{2} \left(A - m\right)}{2 \sqrt{v}} \right )}$$
### Figure 7.1
```python
X = randn(100000) # normal random numbers with mean 0 and variance 1
Y = uniform(-2, 5, 100000) # uniform random numbers on the interval [-2, 5]
Z = X + Y
plt.figure(figsize=(5,2))
plt.hist(Z, range=[-6,10], bins=16, density=True, color='w', edgecolor='k')
plt.plot(np_zz,np_pZ, 'k-') # expression derived in Example 7.1
plt.xlim([-6,10])
plt.xticks( np.arange(-6, 11, 2) )
plt.xlabel('$z$', fontsize=12)
plt.ylabel('$p_Z(z)$', fontsize=12)
plt.text(5.7, 0.12,'$m=0$, $v=1$')
plt.text(5.7, 0.09,'$a=-2$, $b=5$')
plt.tight_layout()
plt.savefig('figs/Ch07-gauss_uniform.eps', bb_inches='tight')
```
## 7.2 Addition and Scalar Multiplication of Gaussian Random Variables
## 7.2.1 Distribution of the Sum $Z = X + Y$
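For jointly Gaussian $X$ and $Y$, the sum $Z = X + Y$ is again Gaussian with
$$
m_Z = m_X + m_Y, \qquad v_Z = v_X + 2\,\mathrm{Cov}(X,Y) + v_Y ,
$$
which is what the code below estimates from the samples.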
```python
X = randn(100000) + 5*np.ones(100000)
Y = X + randn(100000)
```
```python
mx = SampleMean(X) # expected value of X
my = SampleMean(Y) # expected value of Y
vx = SampleVariance(X) # variance of X
vy = SampleVariance(Y) # variance of Y
Cxy = Cov(X,Y) # covariance of X and Y
mx, my, vx, vy, Cxy
```
$$\left ( 5.003215749042992, \quad 5.005103605232999, \quad 0.9971453405948929, \quad 1.9989074994142773, \quad 0.9978212271095934\right )$$
```python
mz = mx + my
vz = vx + 2*Cxy + vy
```
```python
Z = X + Y
```
```python
plt.hist(Z, range=[0,20], bins=21, density=True)
xx = np.linspace(0,20,100)
plt.plot(xx, gauss(xx,mz,vz),'-')
plt.xlabel('$z$')
plt.ylabel('$p(z)$')
plt.grid(linestyle='--')
```
## 7.2.2 Distribution of the Scalar Multiple $Z = a X$
```python
X = randn(100000) + 5*np.ones(100000)
```
```python
mx = SampleMean(X) # expected value of X
vx = SampleVariance(X) # variance of X
mx, vx
```
$$\left ( 4.998777615535563, \quad 0.9971116584657035\right )$$
```python
a = 0.5
mz = a*mx
vz = (a**2)*vx
```
```python
Z = a*X
```
```python
plt.hist(Z, range=[0,5], bins=21, density=True)
xx = np.linspace(0,5,100)
plt.plot(xx, gauss(xx,mz,vz),'-')
plt.xlabel('$z$')
plt.ylabel('$p(z)$')
plt.grid(linestyle='--')
```
## 7.2.3 Distribution of the Linear Combination $Z = a_1 X_1 + a_2 X_2$
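For a linear combination $Z = a_1 X_1 + a_2 X_2$ of jointly Gaussian variables, $Z$ is Gaussian with
$$
m_Z = a_1 m_1 + a_2 m_2, \qquad v_Z = a_1^2 v_1 + 2 a_1 a_2\,\mathrm{Cov}(X_1,X_2) + a_2^2 v_2 .
$$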
```python
X1 = randn(100000) + 5*np.ones(100000)
X2 = X1 + randn(100000)
```
```python
m1 = SampleMean(X1) # expected value of X1
m2 = SampleMean(X2) # expected value of X2
v1 = SampleVariance(X1) # variance of X1
v2 = SampleVariance(X2) # variance of X2
C12 = Cov(X1,X2) # covariance of X1 and X2
m1, m2, v1, v2, C12
```
$$\left ( 4.993018944113484, \quad 4.994622009447813, \quad 1.0091661730120924, \quad 1.9940143270333988, \quad 1.0032547238840444\right )$$
```python
a1 = 0.5
a2 = 2
mz = a1*m1 + a2*m2
vz = (a1**2)*v1 + 2*a1*a2*C12 + (a2**2)*v2
```
```python
Z = a1*X1 + a2*X2
```
```python
plt.hist(Z, range=[0,25], bins=21, density=True)
xx = np.linspace(0,25,100)
plt.plot(xx, gauss(xx,mz,vz),'-')
plt.xlabel('$z$')
plt.ylabel('$p(z)$')
plt.grid(linestyle='--')
```
## 7.3 Covariance Matrices and the Multivariate Gaussian Distribution
## 7.3.1 The Covariance Matrix
```python
def CovMatrix(XX,YY):
dim = len(XX[0])
cov = np.zeros([dim,dim])
for i in range(dim):
cov[i,i] = Cov(XX[:,i], YY[:,i])
for j in range(i):
cov[i,j] = Cov(XX[:,i], YY[:,j])
cov[j,i] = cov[i,j]
return cov
```
### Numerical example: random data from a 3-dimensional Gaussian distribution
```python
means = [1,2,3] # mean vector
covmat = [[1,0.5,0.2],[0.5,2,1],[0.2,1,3]] # variance-covariance matrix
data_for_test = np.random.multivariate_normal(means, covmat, 100000)
fig = plt.figure()
ax = Axes3D(fig)
ax.plot(data_for_test[:,0],data_for_test[:,1],data_for_test[:,2],".",markersize=0.5)
```
#### Variance-covariance matrix of the data
Because the number of samples is finite the agreement is not exact, but the result matches the entries of covmat.
```python
CovMatrix(data_for_test,data_for_test)
```
array([[1.00379077, 0.50090574, 0.20194248],
[0.50090574, 1.99579224, 0.99797172],
[0.20194248, 0.99797172, 2.98543577]])
## 7.3.2 The Multivariate Gaussian Distribution
Let us describe a 2-dimensional Gaussian distribution using the $n$-dimensional formula.
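The $n$-dimensional Gaussian density with mean vector $\mathbf{m}$ and covariance matrix $\Sigma$ is
$$
p(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}\,|\Sigma|^{1/2}} \exp\!\left(-\frac{1}{2}(\mathbf{x}-\mathbf{m})^{\mathsf T}\Sigma^{-1}(\mathbf{x}-\mathbf{m})\right),
$$
which is what the helper function below evaluates for $n=2$.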
```python
def _Gauss2dv(_x1, _x2, _mm, _covmat):
xx = np.array([_x1,_x2])
mm = np.array(_mm)
covmat = np.array(_covmat)
n = len(mm)
det = np.linalg.det(covmat)
inv = np.linalg.pinv(covmat)
xxm = xx - mm
a = 1/( (np.sqrt(2*np.pi)**n)*np.sqrt(det) )
# b = -0.5*np.dot(xxm,np.dot(inv,xxm))
b = -0.5*xxm.dot(inv.dot(xxm))
return a*np.exp(b)
Gauss2dv = np.vectorize(_Gauss2dv, excluded=[2,3])
```
```python
xx = np.linspace(-2,4,50)
yy = np.linspace(-5,5,50)
```
```python
Means = np.array([1,0])
CovMat = np.array([[1,0.8],[0.8,2]])
XX,YY = np.meshgrid(xx,yy)
ZZ = Gauss2dv(XX, YY, Means, CovMat)
```
```python
from matplotlib import cm
def density_plot(Xgrid, Ygrid, Zgrid, zlabel='zlabel', cmap=cm.coolwarm):
fig = plt.figure(figsize=(10,4))
ax2d = fig.add_subplot(121)
ax3d = fig.add_subplot(122, projection='3d')
cont = ax2d.contourf(Xgrid, Ygrid, Zgrid, cmap=cmap)
ax2d.set_xlabel('$x$')
ax2d.set_ylabel('$y$')
surf = ax3d.plot_surface(Xgrid, Ygrid, Zgrid, cmap=cmap, linewidth=0)
ax3d.set_xlabel('$x$')
ax3d.set_ylabel('$y$')
ax3d.set_zlabel(zlabel)
fig.colorbar(cont, ax=ax2d, label=zlabel)
plt.tight_layout(pad=1)
```
```python
density_plot(XX, YY, ZZ)
```
## 7.4 Linear Transformations of Gaussian Random Vectors
## 7.4.2 Example with a Bivariate Gaussian Distribution (Sample Version)
Sample points generated from the same bivariate Gaussian distribution as above.
```python
X_data = np.random.multivariate_normal(Means, CovMat, 500000)
plt.plot(X_data[:,0],X_data[:,1],".",markersize=0.5)
```
Computing the sample covariance gives a value close to the given covariance 0.8 (the off-diagonal element of CovMat).
```python
Cov(X_data[:,0],X_data[:,1])
```
$$0.8013065611086408$$
```python
CovMat[0,1]
```
$$0.8$$
The transformation matrix $S$ for diagonalization
```python
l,S = np.linalg.eig(CovMat)
l,S
```
(array([0.55660189, 2.44339811]), array([[-0.87464248, -0.48476853],
[ 0.48476853, -0.87464248]]))
With this $S$, the product $S^T$(CovMat)$S$ is indeed diagonalized.
```python
(S.T).dot(CovMat.dot(S))
```
array([[ 5.56601887e-01, -2.22044605e-16],
[-1.66533454e-16, 2.44339811e+00]])
Sample points transformed by $S$: $Y = S^T X$
```python
Y_data = ( (S.T).dot(X_data.T) ).T
plt.plot(Y_data[:,0],Y_data[:,1],".",markersize=0.5)
```
The distribution is now aligned with the axes. Computing the covariance of these samples gives a value close to 0, which means the components are uncorrelated.
```python
Cov(Y_data[:,0], Y_data[:,1])
```
$$0.0021889781040889678$$
## 7.4.2 Example with a Bivariate Gaussian Distribution (Theoretical Calculation Version)
```python
def density_plot2(Xgrid, Ygrid, Zgrid, zmax=0.31, zstep=0.1, cmap="binary", labels=['$x_1$','$x_2$','$p(x_1,x_2)$']):
fig = plt.figure(figsize=(4,3))
ax3d = fig.add_subplot(111, projection='3d')
ax3d.view_init(elev=17, azim=-115)
surf = ax3d.plot_surface(Xgrid, Ygrid, Zgrid, cmap=cmap, linewidth=0)
zoff = -0.7*zmax
ax3d.set_zticks(np.arange(0,zmax,zstep))
ax3d.set_zlim([zoff,zmax])
ax3d.contourf(Xgrid, Ygrid, Zgrid, cmap=cmap, offset=zoff)
ax3d.set_xlabel(labels[0],fontsize=12)
ax3d.set_ylabel(labels[1],fontsize=12)
ax3d.zaxis.set_rotate_label(False)
ax3d.set_zlabel(labels[2],fontsize=12,rotation=90)
plt.tight_layout(pad=1)
```
### The original bivariate Gaussian distribution
```python
xx = np.linspace(-4,4,100)
yy = np.linspace(-4,4,100)
XX,YY = np.meshgrid(xx,yy)
Means = np.array([0,0])
CovMat_org = np.array([[1,0.8],[0.8,1]])
ZZ_org = Gauss2dv(XX, YY, Means, CovMat_org)
density_plot2(XX, YY, ZZ_org)
```
#### Equation (7.31)
```python
eig_values, Smat=np.linalg.eig(CovMat_org)
print('eigenvalues =\n', eig_values)
print('matrix S =\n', Smat)
```
    eigenvalues =
     [1.8 0.2]
    matrix S =
[[ 0.70710678 -0.70710678]
[ 0.70710678 0.70710678]]
The eigenvalues came back in a different order from the hand calculation in the text, and as a result the columns of the matrix S are swapped.
Perhaps the routine happens to return the eigenvalues sorted in descending order here.
As an experiment, let us proceed with this ordering as it is.
#### Equation (7.33)
```python
CovMat_new = Smat.T.dot(CovMat_org).dot(Smat)
print(CovMat_new)
```
[[ 1.80000000e+00 -1.11022302e-16]
[-1.38777878e-16 2.00000000e-01]]
```python
ZZ_new = Gauss2dv(XX, YY, Means, CovMat_new)
density_plot2(XX, YY, ZZ_new, labels=['$y_1$','$y_2$','$p\'(y_1,y_2)$'])
```
Naturally, the result differs from the covariance matrix obtained by hand in the text by a 90-degree rotation. This is not a mistake: swapping the roles of $y_1$ and $y_2$ after diagonalization gives the same result.
### Adjustment to match the text
It is not wrong, but for good measure let us also produce the result that matches the hand calculation. To swap the columns...
```python
Smat2 = Smat[:,[1,0]]
print(Smat2)
```
[[-0.70710678 0.70710678]
[ 0.70710678 0.70710678]]
There we go: the ordering now matches the hand calculation in the text.
#### Equation (7.33)
With this, the covariance matrix after diagonalization also agrees with the text,
```python
CovMat_new2 = Smat2.T.dot(CovMat_org).dot(Smat2)
print(CovMat_new2)
```
[[ 2.00000000e-01 -1.38777878e-16]
[-1.11022302e-16 1.80000000e+00]]
#### Figure 7.1 (b)
The following is the graph that appears in the text.
```python
ZZ_new2 = Gauss2dv(XX, YY, Means, CovMat_new2)
density_plot2(XX, YY, ZZ_new2, labels=['$y_1$','$y_2$','$p\'(y_1,y_2)$'])
plt.savefig('figs/Ch07_gauss_2d_rot.eps', bbox_inches='tight')
```
### Is it all right to have two different answers?
+ Yes, it is.
+ When the eigenvectors are arranged to build the matrix S, their ordering is at the discretion of the person doing the calculation.
+ The only consequence is that the roles of the transformed variables $y_1$ and $y_2$ are exchanged (see the short sketch below for one way to pin the ordering down).
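As an aside, if you want a deterministic ordering regardless of what `np.linalg.eig` returns, you can sort the eigenvalues yourself and reorder the columns of $S$ to match. This is only a small illustrative sketch using the variables defined above; the names `idx`, `eig_sorted`, and `Smat_sorted` are introduced here for illustration.
```python
# Sort eigenvalues in ascending order and reorder the eigenvector columns to match
idx = np.argsort(eig_values)      # e.g. [1, 0] if eig returned [1.8, 0.2]
eig_sorted = eig_values[idx]
Smat_sorted = Smat[:, idx]        # same effect as Smat[:, [1, 0]] above

# The reordered S still diagonalizes the covariance matrix
print(Smat_sorted.T.dot(CovMat_org).dot(Smat_sorted))
```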
| 8310637593e07b6426e02ac87a2e18263fbd7afe | 471,322 | ipynb | Jupyter Notebook | Jupyter/Ch07.ipynb | ktysd/prob-tut | 22ad199616fedee9b545780d2ae627788be8b0fd | [
"MIT"
]
| 2 | 2020-07-07T14:41:19.000Z | 2020-07-09T06:14:20.000Z | Jupyter/Ch07.ipynb | ktysd/prob-tut | 22ad199616fedee9b545780d2ae627788be8b0fd | [
"MIT"
]
| null | null | null | Jupyter/Ch07.ipynb | ktysd/prob-tut | 22ad199616fedee9b545780d2ae627788be8b0fd | [
"MIT"
]
| 2 | 2020-04-24T10:38:17.000Z | 2020-07-10T12:54:18.000Z | 384.438825 | 86,720 | 0.94163 | true | 4,779 | Qwen/Qwen-72B | 1. YES
2. YES | 0.867036 | 0.615088 | 0.533303 | __label__yue_Hant | 0.159387 | 0.077371 |
# Using optimization routines from `scipy` and `statsmodels`
```python
%matplotlib inline
```
```python
import scipy.linalg as la
import numpy as np
import scipy.optimize as opt
import matplotlib.pyplot as plt
import pandas as pd
```
```python
np.set_printoptions(precision=3, suppress=True)
```
Using `scipy.optimize`
----
One of the most convenient libraries to use is `scipy.optimize`, since it is already part of the Anaconda installation and it has a fairly intuitive interface.
```python
from scipy import optimize as opt
```
#### Minimizing a univariate function $f: \mathbb{R} \rightarrow \mathbb{R}$
```python
def f(x):
return x**4 + 3*(x-2)**3 - 15*(x)**2 + 1
```
```python
x = np.linspace(-8, 5, 100)
plt.plot(x, f(x));
```
The [`minimize_scalar`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize_scalar.html#scipy.optimize.minimize_scalar) function will find the minimum, and can also be told to search within given bounds. By default, it uses the Brent algorithm, which combines a bracketing strategy with a parabolic approximation.
```python
opt.minimize_scalar(f, method='Brent')
```
fun: -803.39553088258845
nfev: 12
nit: 11
success: True
x: -5.5288011252196627
```python
opt.minimize_scalar(f, method='bounded', bounds=[0, 6])
```
fun: -54.210039377127622
message: 'Solution found.'
nfev: 12
status: 0
success: True
x: 2.6688651040396532
### Local and global minima
```python
def f(x, offset):
return -np.sinc(x-offset)
```
```python
x = np.linspace(-20, 20, 100)
plt.plot(x, f(x, 5));
```
```python
# note how additional function arguments are passed in
sol = opt.minimize_scalar(f, args=(5,))
sol
```
fun: -0.049029624014074166
nfev: 11
nit: 10
success: True
x: -1.4843871263953001
```python
plt.plot(x, f(x, 5))
plt.axvline(sol.x, c='red')
pass
```
#### We can try multiple random starts to find the global minimum
```python
lower = np.random.uniform(-20, 20, 100)
upper = lower + 1
sols = [opt.minimize_scalar(f, args=(5,), bracket=(l, u)) for (l, u) in zip(lower, upper)]
```
```python
idx = np.argmin([sol.fun for sol in sols])
sol = sols[idx]
```
```python
plt.plot(x, f(x, 5))
plt.axvline(sol.x, c='red');
```
#### Using a stochastic algorithm
See documentation for the [`basinhopping`](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.basinhopping.html) algorithm, which also works with multivariate scalar optimization. Note that this is heuristic and not guaranteed to find a global minimum.
```python
from scipy.optimize import basinhopping
x0 = 0
sol = basinhopping(f, x0, stepsize=1, minimizer_kwargs={'args': (5,)})
sol
```
fun: -1.0
lowest_optimization_result: fun: -1.0
hess_inv: array([[ 0.304]])
jac: array([ 0.])
message: 'Optimization terminated successfully.'
nfev: 15
nit: 3
njev: 5
status: 0
success: True
x: array([ 5.])
message: ['requested number of basinhopping iterations completed successfully']
minimization_failures: 0
nfev: 1848
nit: 100
njev: 616
x: array([ 5.])
```python
plt.plot(x, f(x, 5))
plt.axvline(sol.x, c='red');
```
### Constrained optimization with `scipy.optimize`
Many real-world optimization problems have constraints - for example, a set of parameters may have to sum to 1.0 (equality constraint), or some parameters may have to be non-negative (inequality constraint). Sometimes, the constraints can be incorporated into the function to be minimized, for example, the non-negativity constraint $p \gt 0$ can be removed by substituting $p = e^q$ and optimizing for $q$. Using such workarounds, it may be possible to convert a constrained optimization problem into an unconstrained one, and use the methods discussed above to solve the problem.
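As a small illustration of that substitution trick (this toy objective is made up for illustration and is not from the Scipy docs): to minimize $f(p) = (p-2)^2 - \log p$ subject to $p > 0$, we can optimize over $q = \log p$ instead and recover $p = e^q$ afterwards.
```python
import numpy as np
from scipy import optimize as opt

def f_pos(p):
    # toy objective, defined only for p > 0
    return (p - 2)**2 - np.log(p)

# substitute p = exp(q): any real q maps to a valid p > 0,
# so the constrained problem becomes an unconstrained one in q
res = opt.minimize_scalar(lambda q: f_pos(np.exp(q)))
p_opt = np.exp(res.x)
print(p_opt)
```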
Alternatively, we can use optimization methods that allow the specification of constraints directly in the problem statement as shown in this section. Internally, constraint violation penalties, barriers and Lagrange multipliers are some of the methods used to handle these constraints. We use the example provided in the Scipy [tutorial](http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html) to illustrate how to set constraints.
We will optimize:
$$
f(x) = -(2xy + 2x - x^2 -2y^2)
$$
subject to the constraint
$$
x^3 - y = 0 \\
y - (x-1)^4 - 2 \ge 0
$$
and the bounds
$$
0.5 \le x \le 1.5 \\
1.5 \le y \le 2.5
$$
```python
def f(x):
return -(2*x[0]*x[1] + 2*x[0] - x[0]**2 - 2*x[1]**2)
```
```python
x = np.linspace(0, 3, 100)
y = np.linspace(0, 3, 100)
X, Y = np.meshgrid(x, y)
Z = f(np.vstack([X.ravel(), Y.ravel()])).reshape((100,100))
plt.contour(X, Y, Z, np.arange(-1.99,10, 1), cmap='jet');
plt.plot(x, x**3, 'k:', linewidth=1)
plt.plot(x, (x-1)**4+2, 'k:', linewidth=1)
plt.fill([0.5,0.5,1.5,1.5], [2.5,1.5,1.5,2.5], alpha=0.3)
plt.axis([0,3,0,3])
```
To set constraints, we pass in a dictionary with keys `type`, `fun` and `jac`. Note that the inequality constraint assumes a $C_j x \ge 0$ form. As usual, the `jac` is optional and will be numerically estimated if not provided.
```python
cons = ({'type': 'eq',
'fun' : lambda x: np.array([x[0]**3 - x[1]]),
'jac' : lambda x: np.array([3.0*(x[0]**2.0), -1.0])},
{'type': 'ineq',
'fun' : lambda x: np.array([x[1] - (x[0]-1)**4 - 2])})
bnds = ((0.5, 1.5), (1.5, 2.5))
```
```python
x0 = [0, 2.5]
```
Unconstrained optimization
```python
ux = opt.minimize(f, x0, constraints=None)
ux
```
fun: -1.9999999999996365
hess_inv: array([[ 0.998, 0.501],
[ 0.501, 0.499]])
jac: array([ 0., -0.])
message: 'Optimization terminated successfully.'
nfev: 24
nit: 5
njev: 6
status: 0
success: True
x: array([ 2., 1.])
Constrained optimization
```python
cx = opt.minimize(f, x0, bounds=bnds, constraints=cons)
cx
```
fun: 2.0499154720910759
jac: array([-3.487, 5.497, 0. ])
message: 'Optimization terminated successfully.'
nfev: 21
nit: 5
njev: 5
status: 0
success: True
x: array([ 1.261, 2.005])
```python
x = np.linspace(0, 3, 100)
y = np.linspace(0, 3, 100)
X, Y = np.meshgrid(x, y)
Z = f(np.vstack([X.ravel(), Y.ravel()])).reshape((100,100))
plt.contour(X, Y, Z, np.arange(-1.99,10, 1), cmap='jet');
plt.plot(x, x**3, 'k:', linewidth=1)
plt.plot(x, (x-1)**4+2, 'k:', linewidth=1)
plt.text(ux['x'][0], ux['x'][1], 'x', va='center', ha='center', size=20, color='blue')
plt.text(cx['x'][0], cx['x'][1], 'x', va='center', ha='center', size=20, color='red')
plt.fill([0.5,0.5,1.5,1.5], [2.5,1.5,1.5,2.5], alpha=0.3)
plt.axis([0,3,0,3]);
```
## Some applications of optimization
### Finding parameters for ODE models
This is a specialized application of `curve_fit`, in which the curve to be fitted is defined implicitly by an ordinary differential equation
$$
\frac{dx}{dt} = -kx
$$
and we want to use observed data to estimate the parameters $k$ and the initial value $x_0$. Of course this can be explicitly solved but the same approach can be used to find multiple parameters for $n$-dimensional systems of ODEs.
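In this case the explicit solution is $x(t) = x_0 e^{-kt}$, which is also what the synthetic data below are generated from before noise is added.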
[A more elaborate example for fitting a system of ODEs to model the zombie apocalypse](http://adventuresinpython.blogspot.com/2012/08/fitting-differential-equation-system-to.html)
```python
from scipy.integrate import odeint
def f(x, t, k):
"""Simple exponential decay."""
return -k*x
def x(t, k, x0):
"""
Solution to the ODE x'(t) = f(t,x,k) with initial condition x(0) = x0
"""
x = odeint(f, x0, t, args=(k,))
return x.ravel()
```
```python
from scipy.optimize import curve_fit

# True parameter values
x0_ = 10
k_ = 0.1*np.pi
# Some random data genererated from closed form solution plus Gaussian noise
ts = np.sort(np.random.uniform(0, 10, 200))
xs = x0_*np.exp(-k_*ts) + np.random.normal(0,0.1,200)
popt, cov = curve_fit(x, ts, xs)
k_opt, x0_opt = popt
print("k = %g" % k_opt)
print("x0 = %g" % x0_opt)
```
k = 0.313527
x0 = 9.97788
```python
import matplotlib.pyplot as plt
t = np.linspace(0, 10, 100)
plt.plot(ts, xs, 'r.', t, x(t, k_opt, x0_opt), '-');
```
### Another example of fitting a system of ODEs using the `lmfit` package
You may have to install the [`lmfit`](http://cars9.uchicago.edu/software/python/lmfit/index.html) package using `pip` and restart your kernel. The `lmfit` algorithm is another wrapper around `scipy.optimize.leastsq` but allows for richer model specification and more diagnostics.
```python
! pip install lmfit
```
Requirement already satisfied (use --upgrade to upgrade): lmfit in /opt/conda/lib/python3.5/site-packages
Requirement already satisfied (use --upgrade to upgrade): scipy in /opt/conda/lib/python3.5/site-packages (from lmfit)
Requirement already satisfied (use --upgrade to upgrade): numpy in /opt/conda/lib/python3.5/site-packages (from lmfit)
    You are using pip version 8.1.2, however version 9.0.1 is available.
    You should consider upgrading via the 'pip install --upgrade pip' command.
```python
from lmfit import minimize, Parameters, Parameter, report_fit
import warnings
```
```python
def f(xs, t, ps):
"""Lotka-Volterra predator-prey model."""
try:
a = ps['a'].value
b = ps['b'].value
c = ps['c'].value
d = ps['d'].value
except:
a, b, c, d = ps
x, y = xs
return [a*x - b*x*y, c*x*y - d*y]
def g(t, x0, ps):
"""
Solution to the ODE x'(t) = f(t,x,k) with initial condition x(0) = x0
"""
x = odeint(f, x0, t, args=(ps,))
return x
def residual(ps, ts, data):
x0 = ps['x0'].value, ps['y0'].value
model = g(ts, x0, ps)
return (model - data).ravel()
t = np.linspace(0, 10, 100)
x0 = np.array([1,1])
a, b, c, d = 3,1,1,1
true_params = np.array((a, b, c, d))
np.random.seed(123)
data = g(t, x0, true_params)
data += np.random.normal(size=data.shape)
# set parameters incluing bounds
params = Parameters()
params.add('x0', value= float(data[0, 0]), min=0, max=10)
params.add('y0', value=float(data[0, 1]), min=0, max=10)
params.add('a', value=2.0, min=0, max=10)
params.add('b', value=2.0, min=0, max=10)
params.add('c', value=2.0, min=0, max=10)
params.add('d', value=2.0, min=0, max=10)
# fit model and find predicted values
result = minimize(residual, params, args=(t, data), method='leastsq')
final = data + result.residual.reshape(data.shape)
# plot data and fitted curves
plt.plot(t, data, 'o')
plt.plot(t, final, '-', linewidth=2);
# display fitted statistics
report_fit(result)
```
#### Optimization of graph node placement
To show the many different applications of optimization, here is an example using optimization to change the layout of nodes of a graph. We use a physical analogy - nodes are connected by springs, and the springs resist deformation from their natural length $l_{ij}$. Some nodes are pinned to their initial locations while others are free to move. Because the initial configuration of nodes does not have springs at their natural length, there is tension resulting in a high potential energy $U$, given by the physics formula shown below. Optimization finds the configuration of lowest potential energy given that some nodes are fixed (set up as boundary constraints on the positions of the nodes).
$$
U = \frac{1}{2}\sum_{i,j=1}^n ka_{ij}\left(||p_i - p_j||-l_{ij}\right)^2
$$
Note that the ordination algorithm Multi-Dimensional Scaling (MDS) works on a very similar idea - take a high dimensional data set in $\mathbb{R}^n$, and project down to a lower dimension ($\mathbb{R}^k$) such that the sum of the discrepancies $d_n(x_i, x_j) - d_k(x_i, x_j)$, where $d_n$ and $d_k$ are some measure of distance between two points $x_i$ and $x_j$ in $n$ and $k$ dimensions respectively, is minimized. MDS is often used in exploratory analysis of high-dimensional data to get some intuitive understanding of its "structure".
```python
from scipy.spatial.distance import pdist, squareform
```
- P0 is the initial location of nodes
- P is the minimal energy location of nodes given constraints
- A is a connectivity matrix - there is a spring between $i$ and $j$ if $A_{ij} = 1$
- $L_{ij}$ is the resting length of the spring connecting $i$ and $j$
- In addition, there are a number of `fixed` nodes whose positions are pinned.
```python
n = 20
k = 1 # spring stiffness
P0 = np.random.uniform(0, 5, (n,2))
A = np.ones((n, n))
A[np.tril_indices_from(A)] = 0
L = A.copy()
```
```python
L.astype('int')
```
array([[0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
```python
def energy(P):
P = P.reshape((-1, 2))
D = squareform(pdist(P))
return 0.5*(k * A * (D - L)**2).sum()
```
```python
D0 = squareform(pdist(P0))
E0 = 0.5* k * A * (D0 - L)**2
```
```python
D0[:5, :5]
```
array([[ 0. , 5.039, 0.921, 1.758, 1.99 ],
[ 5.039, 0. , 5.546, 3.414, 4.965],
[ 0.921, 5.546, 0. , 2.133, 2.888],
[ 1.758, 3.414, 2.133, 0. , 2.762],
[ 1.99 , 4.965, 2.888, 2.762, 0. ]])
```python
E0[:5, :5]
```
array([[ 0. , 8.159, 0.003, 0.288, 0.49 ],
[ 0. , 0. , 10.333, 2.915, 7.862],
[ 0. , 0. , 0. , 0.642, 1.782],
[ 0. , 0. , 0. , 0. , 1.552],
[ 0. , 0. , 0. , 0. , 0. ]])
```python
energy(P0.ravel())
```
```python
# fix the position of the first few nodes just to show constraints
fixed = 4
bounds = (np.repeat(P0[:fixed,:].ravel(), 2).reshape((-1,2)).tolist() +
[[None, None]] * (2*(n-fixed)))
bounds[:fixed*2+4]
```
[[1.191249528562059, 1.191249528562059],
[4.0389554314507805, 4.0389554314507805],
[4.474891439430058, 4.474891439430058],
[0.216114460398234, 0.216114460398234],
[1.5097341813135952, 1.5097341813135952],
[4.902910992971438, 4.902910992971438],
[2.6975241127686767, 2.6975241127686767],
[3.1315468085492815, 3.1315468085492815],
[None, None],
[None, None],
[None, None],
[None, None]]
```python
sol = opt.minimize(energy, P0.ravel(), bounds=bounds)
```
#### Visualization
Original placement is BLUE
Optimized arrangement is RED.
```python
plt.scatter(P0[:, 0], P0[:, 1], s=25)
P = sol.x.reshape((-1,2))
plt.scatter(P[:, 0], P[:, 1], edgecolors='red', facecolors='none', s=30, linewidth=2);
```
Optimization of standard statistical models
---
When we solve standard statistical problems, an optimization procedure similar to the ones discussed here is performed. For example, consider multivariate logistic regression - typically, a Newton-like algorithm known as iteratively reweighted least squares (IRLS) is used to find the maximum likelihood estimate for the generalized linear model family. However, using one of the multivariate scalar minimization methods shown above will also work, for example, the BFGS minimization algorithm.
The take home message is that there is nothing magic going on when Python or R fits a statistical model using a formula - all that is happening is that the objective function is set to be the negative of the log likelihood, and the minimum found using some first or second order optimization algorithm.
```python
import statsmodels.api as sm
```
### Logistic regression as optimization
Suppose we have a binary outcome measure $Y \in \{0,1\}$ that is conditional on some input variable (vector) $x \in (-\infty, +\infty)$. Let the conditional probability be $p(x) = P(Y=1 | X=x)$. Given some data, one simple probability model is $p(x) = \beta_0 + x\cdot\beta$ - i.e. linear regression. This doesn't really work for the obvious reason that $p(x)$ must be between 0 and 1 as $x$ ranges across the real line. One simple way to fix this is to use the log-odds transformation $g(x) = \log\frac{p(x)}{1 - p(x)} = \beta_0 + x\cdot\beta$. Solving for $p$, we get
$$
p(x) = \frac{1}{1 + e^{-(\beta_0 + x\cdot\beta)}}
$$
As you all know very well, this is logistic regression.
Suppose we have $n$ data points $(x_i, y_i)$ where $x_i$ is a vector of features and $y_i$ is an observed class (0 or 1). For each event, we either have "success" ($y_i = 1$) or "failure" ($y_i = 0$), so the likelihood looks like a product of Bernoulli terms. According to the logistic model, the probability of the observed outcome is $p(x_i)$ if $y_i = 1$ and $1-p(x_i)$ if $y_i = 0$. So the likelihood is
$$
L(\beta_0, \beta) = \prod_{i=1}^n p(x_i)^{y_i}(1-p(x_i))^{1-y_i}
$$
and the log-likelihood is
\begin{align}
l(\beta_0, \beta) &= \sum_{i=1}^{n} y_i \log{p(x_i)} + (1-y_i)\log{(1-p(x_i))} \\
&= \sum_{i=1}^{n} \log{(1-p(x_i))} + \sum_{i=1}^{n} y_i \log{\frac{p(x_i)}{1-p(x_i)}} \\
&= -\sum_{i=1}^{n} \log\left(1 + e^{\beta_0 + x_i\cdot\beta}\right) + \sum_{i=1}^{n} y_i(\beta_0 + x_i\cdot\beta)
\end{align}
Using the standard 'trick', if we augment the matrix $X$ with a column of 1s, we can write $\beta_0 + x_i\cdot\beta$ as just $X\beta$.
```python
df_ = pd.read_csv("http://www.ats.ucla.edu/stat/data/binary.csv")
df_.head()
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>admit</th>
<th>gre</th>
<th>gpa</th>
<th>rank</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>380</td>
<td>3.61</td>
<td>3</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>660</td>
<td>3.67</td>
<td>3</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>800</td>
<td>4.00</td>
<td>1</td>
</tr>
<tr>
<th>3</th>
<td>1</td>
<td>640</td>
<td>3.19</td>
<td>4</td>
</tr>
<tr>
<th>4</th>
<td>0</td>
<td>520</td>
<td>2.93</td>
<td>4</td>
</tr>
</tbody>
</table>
</div>
```python
# We will ignore the rank categorical value
cols_to_keep = ['admit', 'gre', 'gpa']
df = df_[cols_to_keep]
df.insert(1, 'dummy', 1)
df.head()
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>admit</th>
<th>dummy</th>
<th>gre</th>
<th>gpa</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>1</td>
<td>380</td>
<td>3.61</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>1</td>
<td>660</td>
<td>3.67</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>1</td>
<td>800</td>
<td>4.00</td>
</tr>
<tr>
<th>3</th>
<td>1</td>
<td>1</td>
<td>640</td>
<td>3.19</td>
</tr>
<tr>
<th>4</th>
<td>0</td>
<td>1</td>
<td>520</td>
<td>2.93</td>
</tr>
</tbody>
</table>
</div>
### Solving as a GLM with IRLS
This is very similar to what you would do in R, only using Python's `statsmodels` package. The GLM solver uses a special variant of Newton's method known as iteratively reweighted least squares (IRLS), which will be further described in the lecture on multivariate and constrained optimization.
```python
model = sm.GLM.from_formula('admit ~ gre + gpa',
data=df, family=sm.families.Binomial())
fit = model.fit()
fit.summary()
```
<table class="simpletable">
<caption>Generalized Linear Model Regression Results</caption>
<tr>
<th>Dep. Variable:</th> <td>admit</td> <th> No. Observations: </th> <td> 400</td>
</tr>
<tr>
<th>Model:</th> <td>GLM</td> <th> Df Residuals: </th> <td> 397</td>
</tr>
<tr>
<th>Model Family:</th> <td>Binomial</td> <th> Df Model: </th> <td> 2</td>
</tr>
<tr>
<th>Link Function:</th> <td>logit</td> <th> Scale: </th> <td>1.0</td>
</tr>
<tr>
<th>Method:</th> <td>IRLS</td> <th> Log-Likelihood: </th> <td> -240.17</td>
</tr>
<tr>
<th>Date:</th> <td>Thu, 09 Mar 2017</td> <th> Deviance: </th> <td> 480.34</td>
</tr>
<tr>
<th>Time:</th> <td>20:12:11</td> <th> Pearson chi2: </th> <td> 398.</td>
</tr>
<tr>
<th>No. Iterations:</th> <td>6</td> <th> </th> <td> </td>
</tr>
</table>
<table class="simpletable">
<tr>
<td></td> <th>coef</th> <th>std err</th> <th>z</th> <th>P>|z|</th> <th>[95.0% Conf. Int.]</th>
</tr>
<tr>
<th>Intercept</th> <td> -4.9494</td> <td> 1.075</td> <td> -4.604</td> <td> 0.000</td> <td> -7.057 -2.842</td>
</tr>
<tr>
<th>gre</th> <td> 0.0027</td> <td> 0.001</td> <td> 2.544</td> <td> 0.011</td> <td> 0.001 0.005</td>
</tr>
<tr>
<th>gpa</th> <td> 0.7547</td> <td> 0.320</td> <td> 2.361</td> <td> 0.018</td> <td> 0.128 1.381</td>
</tr>
</table>
### Or use R
```python
%load_ext rpy2.ipython
```
```r
%%R -i df
m <- glm(admit ~ gre + gpa, data=df, family="binomial")
summary(m)
```
Call:
glm(formula = admit ~ gre + gpa, family = "binomial", data = df)
Deviance Residuals:
Min 1Q Median 3Q Max
-1.2730 -0.8988 -0.7206 1.3013 2.0620
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -4.949378 1.075093 -4.604 4.15e-06 ***
gre 0.002691 0.001057 2.544 0.0109 *
gpa 0.754687 0.319586 2.361 0.0182 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 499.98 on 399 degrees of freedom
Residual deviance: 480.34 on 397 degrees of freedom
AIC: 486.34
Number of Fisher Scoring iterations: 4
### Home-brew logistic regression using a generic minimization function
This is to show that there is no magic going on - you can write the function to minimize directly from the log-likelihood equation and run a minimizer. It will be more accurate if you also provide the derivative (+/- the Hessian for second order methods), but using just the function and numerical approximations to the derivative will also work. As usual, this is for illustration so you understand what is going on - when there is a library function available, you should probably use that instead.
```python
def f(beta, y, x):
"""Minus log likelihood function for logistic regression."""
return -((-np.log(1 + np.exp(np.dot(x, beta)))).sum() + (y*(np.dot(x, beta))).sum())
```
```python
beta0 = np.zeros(3)
opt.minimize(f, beta0, args=(df['admit'], df.loc[:, 'dummy':]), method='BFGS', options={'gtol':1e-2})
```
fun: 240.17199087261878
hess_inv: array([[ 1.115, -0. , -0.27 ],
[-0. , 0. , -0. ],
[-0.27 , -0. , 0.098]])
jac: array([ 0. , -0.002, -0. ])
message: 'Optimization terminated successfully.'
nfev: 65
nit: 8
njev: 13
status: 0
success: True
x: array([-4.949, 0.003, 0.755])
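If we wanted to give the optimizer more to work with, we could also supply the analytic gradient of the negative log-likelihood, which for this model is $X^T(p - y)$ with $p = 1/(1 + e^{-X\beta})$. The following is only a sketch using the same data; `fprime` is a name introduced here for illustration.
```python
def fprime(beta, y, x):
    """Gradient of the minus log likelihood: X^T (p - y) with p = sigmoid(X beta)."""
    X = np.asarray(x)
    p = 1.0/(1.0 + np.exp(-np.dot(X, beta)))
    return np.dot(X.T, p - np.asarray(y))

opt.minimize(f, np.zeros(3), args=(df['admit'], df.loc[:, 'dummy':]),
             jac=fprime, method='BFGS')
```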
### Optimization with `sklearn`
There are also many optimization routines in the `scikit-learn` package, as you already know from the previous lectures. Many machine learning problems essentially boil down to the minimization of some appropriate loss function.
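For example, `scikit-learn`'s `LogisticRegression` fits the same kind of model. Note that it applies an $L_2$ penalty by default, so a very large `C` is used below to approximate the unpenalized maximum-likelihood fit; this snippet is a sketch for comparison, not part of the original example.
```python
from sklearn.linear_model import LogisticRegression

lr = LogisticRegression(C=1e6, max_iter=1000)   # large C => effectively no regularization
lr.fit(df[['gre', 'gpa']], df['admit'])
print(lr.intercept_, lr.coef_)                   # should be close to the GLM/BFGS estimates
```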
### Resources
- [Scipy Optimize reference](http://docs.scipy.org/doc/scipy/reference/optimize.html)
- [Scipy Optimize tutorial](http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html)
- [LMFit - a modeling interface for nonlinear least squares problems](http://cars9.uchicago.edu/software/python/lmfit/index.html)
- [CVXpy- a modeling interface for convex optimization problems](https://github.com/cvxgrp/cvxpy)
- [Quasi-Newton methods](http://en.wikipedia.org/wiki/Quasi-Newton_method)
- [Convex optimization book by Boyd & Vandenberghe](http://stanford.edu/~boyd/cvxbook/)
- [Nocedal and Wright textbook](http://www.springer.com/us/book/9780387303031)
| 7753d38340ca2f868dcb658fd438f7bcdfec8056 | 417,425 | ipynb | Jupyter Notebook | notebooks/.ipynb_checkpoints/S09D_Optimization_Examples-checkpoint.ipynb | ZhechangYang/STA663 | 0dcf48e3e7a2d1f698b15e84946e44344b8153f5 | [
"BSD-3-Clause"
]
| null | null | null | notebooks/.ipynb_checkpoints/S09D_Optimization_Examples-checkpoint.ipynb | ZhechangYang/STA663 | 0dcf48e3e7a2d1f698b15e84946e44344b8153f5 | [
"BSD-3-Clause"
]
| null | null | null | notebooks/.ipynb_checkpoints/S09D_Optimization_Examples-checkpoint.ipynb | ZhechangYang/STA663 | 0dcf48e3e7a2d1f698b15e84946e44344b8153f5 | [
"BSD-3-Clause"
]
| null | null | null | 263.859039 | 83,134 | 0.899869 | true | 9,205 | Qwen/Qwen-72B | 1. YES
2. YES | 0.859664 | 0.917303 | 0.788572 | __label__eng_Latn | 0.901392 | 0.670449 |
# Exercise 2
Write a function to compute the roots of a mathematical equation of the form
\begin{align}
ax^{2} + bx + c = 0.
\end{align}
Your function should be sensitive enough to adapt to situations in which a user might accidentally set $a=0$, or $b=0$, or even $a=b=0$. For example, if $a=0, b\neq 0$, your function should print a warning and compute the roots of the resulting linear function. It is up to you on how to handle the function header: feel free to use default keyword arguments, variable positional arguments, variable keyword arguments, or something else as you see fit. Try to make it user friendly.
Your function should return a tuple containing the roots of the provided equation.
**Hint:** Quadratic equations can have complex roots of the form $r = a + ib$ where $i=\sqrt{-1}$ (Python uses the notation $j=\sqrt{-1}$). To deal with complex roots, you should import the `cmath` library and use `cmath.sqrt` when computing square roots. `cmath` will return a complex number for you. You could handle complex roots yourself if you want, but you might as well use available libraries to save some work.
```python
import cmath
def find_root(a,b,c):
if (a==0 and b==0 and c==0):
print("warning!\n x has infinite numbers")
return()
elif (a==0 and b==0 and c!=0):
print("error!\n no x")
return()
elif (a==0 and b!=0):
print("warning!\n x=",-c/b)
return(-c/b)
else:
x1=(-b+cmath.sqrt(b*b-4*a*c))/(2*a)
x2=(-b-cmath.sqrt(b*b-4*a*c))/(2*a)
print("x1=",x1)
print("x2=",x2)
return(x1,x2)
find_root(0,0,0)
```
warning!
x has infinite numbers
()
```python
```
| 3ab5036975894c0b6baef7bab0dee28ea8669d18 | 2,978 | ipynb | Jupyter Notebook | lectures/L5/Exercise_2.ipynb | crystalzhaizhai/cs207_yi_zhai | faabdc5dd1171af04eed6639225adddc26402bf1 | [
"MIT"
]
| null | null | null | lectures/L5/Exercise_2.ipynb | crystalzhaizhai/cs207_yi_zhai | faabdc5dd1171af04eed6639225adddc26402bf1 | [
"MIT"
]
| null | null | null | lectures/L5/Exercise_2.ipynb | crystalzhaizhai/cs207_yi_zhai | faabdc5dd1171af04eed6639225adddc26402bf1 | [
"MIT"
]
| null | null | null | 30.701031 | 495 | 0.543318 | true | 468 | Qwen/Qwen-72B | 1. YES
2. YES | 0.921922 | 0.904651 | 0.834017 | __label__eng_Latn | 0.994252 | 0.776034 |
# Variational Auto-Encoder (VAE)
### Zhenwen Dai (2019-05-29)
Variational auto-encoder (VAE) is a latent variable model that uses a latent variable to generate data represented in vector form. Consider a latent variable $x$ and an observed variable $y$. The plain VAE is defined as
\begin{align}
p(x) =& \mathcal{N}(0, I) \\
p(y|x) =& \mathcal{N}(f(x), \sigma^2I)
\end{align}
where $f$ is the deep neural network (DNN), often referred to as the decoder network.
The variational posterior of VAE is defined as
\begin{align}
q(x) = \mathcal{N}\left(g_{\mu}(y), \sigma^2_x I\right)
\end{align}
where $g_{\mu}$ is the encoder network that generates the mean of the variational posterior of $x$. For simplicity, we assume that all the data points share the same variance in the variational posterior. This can be extended by also generating the variance from the encoder network.
```python
import warnings
warnings.filterwarnings('ignore')
import mxfusion as mf
import mxnet as mx
import numpy as np
import mxnet.gluon.nn as nn
import mxfusion.components
import mxfusion.inference
%matplotlib inline
from pylab import *
```
## Load a toy dataset
```python
import GPy
data = GPy.util.datasets.oil_100()
Y = data['X']
label = data['Y'].argmax(1)
```
```python
N, D = Y.shape
```
## Model Defintion
We first define the encoder and decoder DNNs with MXNet Gluon blocks. Both DNNs have two hidden layers with tanh non-linearities.
```python
Q = 2
```
```python
H = 50
encoder = nn.HybridSequential(prefix='encoder_')
with encoder.name_scope():
encoder.add(nn.Dense(H, in_units=D, activation="tanh", flatten=False))
encoder.add(nn.Dense(H, in_units=H, activation="tanh", flatten=False))
encoder.add(nn.Dense(Q, in_units=H, flatten=False))
encoder.initialize(mx.init.Xavier(magnitude=3))
```
```python
H = 50
decoder = nn.HybridSequential(prefix='decoder_')
with decoder.name_scope():
decoder.add(nn.Dense(H, in_units=Q, activation="tanh", flatten=False))
decoder.add(nn.Dense(H, in_units=H, activation="tanh", flatten=False))
decoder.add(nn.Dense(D, in_units=H, flatten=False))
decoder.initialize(mx.init.Xavier(magnitude=3))
```
Then, we define the VAE model in MXFusion. Note that, for simplicity of implementation, we use scalar normal distributions defined over the individual entries of a matrix instead of multivariate normal distributions with diagonal covariance matrices.
```python
from mxfusion.components.variables.var_trans import PositiveTransformation
from mxfusion import Variable, Model, Posterior
from mxfusion.components.functions import MXFusionGluonFunction
from mxfusion.components.distributions import Normal
from mxfusion.components.functions.operators import broadcast_to
m = Model()
m.N = Variable()
m.decoder = MXFusionGluonFunction(decoder, num_outputs=1,broadcastable=True)
m.x = Normal.define_variable(mean=broadcast_to(mx.nd.array([0]), (m.N, Q)),
variance=broadcast_to(mx.nd.array([1]), (m.N, Q)), shape=(m.N, Q))
m.f = m.decoder(m.x)
m.noise_var = Variable(shape=(1,), transformation=PositiveTransformation(), initial_value=mx.nd.array([0.01]))
m.y = Normal.define_variable(mean=m.f, variance=broadcast_to(m.noise_var, (m.N, D)),
shape=(m.N, D))
print(m)
```
Model (37a04)
Variable (b92c2) = BroadcastToOperator(data=Variable noise_var (a50d4))
Variable (39c2c) = BroadcastToOperator(data=Variable (e1aad))
Variable (b7150) = BroadcastToOperator(data=Variable (a57d4))
Variable x (53056) ~ Normal(mean=Variable (b7150), variance=Variable (39c2c))
Variable f (ad606) = GluonFunctionEvaluation(decoder_input_0=Variable x (53056), decoder_dense0_weight=Variable (b9b70), decoder_dense0_bias=Variable (d95aa), decoder_dense1_weight=Variable (73dc2), decoder_dense1_bias=Variable (b85dd), decoder_dense2_weight=Variable (7a61c), decoder_dense2_bias=Variable (eba91))
Variable y (23bca) ~ Normal(mean=Variable f (ad606), variance=Variable (b92c2))
We also define the variational posterior following the equation above.
```python
q = Posterior(m)
q.x_var = Variable(shape=(1,), transformation=PositiveTransformation(), initial_value=mx.nd.array([1e-6]))
q.encoder = MXFusionGluonFunction(encoder, num_outputs=1, broadcastable=True)
q.x_mean = q.encoder(q.y)
q.x.set_prior(Normal(mean=q.x_mean, variance=broadcast_to(q.x_var, q.x.shape)))
print(q)
```
Posterior (4ec05)
Variable x_mean (86d22) = GluonFunctionEvaluation(encoder_input_0=Variable y (23bca), encoder_dense0_weight=Variable (51b3d), encoder_dense0_bias=Variable (c0092), encoder_dense1_weight=Variable (ad9ef), encoder_dense1_bias=Variable (83db0), encoder_dense2_weight=Variable (78b82), encoder_dense2_bias=Variable (b856d))
Variable (6dc84) = BroadcastToOperator(data=Variable x_var (19d07))
Variable x (53056) ~ Normal(mean=Variable x_mean (86d22), variance=Variable (6dc84))
## Variational Inference
Variational inference is done by creating an inference object and passing in the stochastic variational inference (SVI) algorithm.
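Concretely, SVI maximizes a Monte Carlo estimate of the evidence lower bound (ELBO),
$$
\mathcal{L} = \mathbb{E}_{q(x)}\left[\log p(y|x)\right] - \mathrm{KL}\left(q(x)\,\|\,p(x)\right),
$$
where the expectation is approximated with a few samples from $q(x)$ per gradient step (the `num_samples` argument below).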
```python
from mxfusion.inference import BatchInferenceLoop, StochasticVariationalInference, GradBasedInference
observed = [m.y]
alg = StochasticVariationalInference(num_samples=3, model=m, posterior=q, observed=observed)
infr = GradBasedInference(inference_algorithm=alg, grad_loop=BatchInferenceLoop())
```
SVI is a gradient-based algorithm. We can run the algorithm by providing the data and specifying the parameters for the gradient optimizer (the default gradient optimizer is Adam).
```python
infr.run(max_iter=2000, learning_rate=1e-2, y=mx.nd.array(Y), verbose=True)
```
Iteration 200 loss: 1720.556396484375
Iteration 400 loss: 601.11962890625
Iteration 600 loss: 168.620849609375
Iteration 800 loss: -48.67474365234375
Iteration 1000 loss: -207.34835815429688
Iteration 1200 loss: -354.17742919921875
Iteration 1400 loss: -356.26409912109375
Iteration 1600 loss: -561.263427734375
Iteration 1800 loss: -697.8665161132812
    Iteration 2000 loss: -753.83203125
## Plot the training data in the latent space
Finally, we may be interested in visualizing the latent space of our dataset. We can do that by calling the encoder network.
```python
from mxfusion.inference import TransferInference
q_x_mean = q.encoder.gluon_block(mx.nd.array(Y)).asnumpy()
```
```python
for i in range(3):
plot(q_x_mean[label==i,0], q_x_mean[label==i,1], '.')
```
| 3e0efdc643b4d29590aa433a32cd4b0ab669ee0d | 18,116 | ipynb | Jupyter Notebook | examples/notebooks/variational_auto_encoder.ipynb | JeremiasKnoblauch/MXFusion | af6223e9636b055d029d136dd7ae023b210b4560 | [
"Apache-2.0"
]
| 2 | 2019-05-31T09:50:47.000Z | 2021-03-06T09:38:47.000Z | examples/notebooks/variational_auto_encoder.ipynb | JeremiasKnoblauch/MXFusion | af6223e9636b055d029d136dd7ae023b210b4560 | [
"Apache-2.0"
]
| null | null | null | examples/notebooks/variational_auto_encoder.ipynb | JeremiasKnoblauch/MXFusion | af6223e9636b055d029d136dd7ae023b210b4560 | [
"Apache-2.0"
]
| 1 | 2019-05-30T09:39:46.000Z | 2019-05-30T09:39:46.000Z | 55.91358 | 7,804 | 0.762034 | true | 1,763 | Qwen/Qwen-72B | 1. YES
2. YES | 0.865224 | 0.695958 | 0.60216 | __label__eng_Latn | 0.73437 | 0.237349 |
# Field, Goldsmith, & Habing Multi-Phase ISM
Figure 1.10-1.12 from Chapter 1 of *Interstellar and Intergalactic Medium* by Ryden & Pogge, 2021,
Cambridge University Press.
This notebook creates figures illustrating the Field, Goldsmith, and Habing (FGH) multi-phase interstellar
medium model [Field, Goldsmith, & Habing 1969, ApJ, 155, L149](https://ui.adsabs.harvard.edu/abs/1969ApJ...155L.149F/abstract)
There are 3 figures
* Figure 1.10 - FGH Cooling function $\Lambda(T)$
* Figure 1.11 - Equilibrium density $n_{eq}(T)$
* Figure 1.12 - Pressure $P$ vs density $n_{eq}$
```python
%matplotlib inline
import math
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
# SciPy bits we use for analysis
from scipy.signal import argrelmin, argrelmax
from scipy import stats
import warnings
warnings.filterwarnings('ignore',category=UserWarning, append=True)
```
## Standard Plot Format
Setup the standard plotting format and make the plot. Fonts and resolution adopted follow CUP style.
```python
# graphic aspect ratio = width/height
aspect = 4.0/3.0
# Text width in inches - don't change, this is defined by the print layout
textWidth = 6.0 # inches
# output format and resolution
figFmt = 'png'
dpi = 600
# Graphic dimensions
plotWidth = dpi*textWidth
plotHeight = plotWidth/aspect
axisFontSize = 14
labelFontSize = 10
lwidth = 0.5
axisPad = 5
wInches = textWidth
hInches = wInches/aspect
# LaTeX is used throughout for markup of symbols, Times-Roman serif font
plt.rc('text', usetex=True)
plt.rc('font', **{'family':'serif','serif':['Times-Roman'],'weight':'bold','size':'16'})
# Font and line weight defaults for axes
matplotlib.rc('axes',linewidth=lwidth)
matplotlib.rcParams.update({'font.size':axisFontSize})
# axis and label padding
plt.rcParams['xtick.major.pad'] = f'{axisPad}'
plt.rcParams['ytick.major.pad'] = f'{axisPad}'
plt.rcParams['axes.labelpad'] = f'{axisPad}'
```
## FGH Model Calculation
The basic parameters for these models are set in the code section below:
* Ionization fraction: $x_e$=0.001
* Temperature range: $T$=10 to 20000 K (logarithmic steps)
* gain factor: $G$=20, defined such that $n_{eq}$=G/$\Lambda$ ($\Lambda$ is the total cooling rate)
The model assumes three sources of collisional cooling using these scaling relations:
HI Lyman-$\alpha$ Cooling (Eqn 1.38):
\begin{equation}
\frac{\Lambda^{e}_{Ly\alpha}}{10^{-27}{\rm erg\,cm^3\,s^{-1}}} \approx
6\times10^{5} \left(\frac{x}{0.001}\right)
\left(\frac{T}{10^{4}K}\right)^{-1/2}
\exp\left(-\frac{1.18\times10^{5}K}{T}\right)
\end{equation}
Carbon (CII) Cooling (Eqn 1.35) electron collisional term:
\begin{equation}
\frac{\Lambda^{e}_{CII}}{10^{-27}{\rm erg\,cm^3\,s^{-1}}} \approx
3.1 \left(\frac{x}{0.001}\right)
\left(\frac{T}{100K}\right)^{-0.5}
\exp\left(-\frac{91.2K}{T}\right)
\end{equation}
and H collisional term:
\begin{equation}
\frac{\Lambda^{H}_{CII}}{10^{-27}{\rm erg\,cm^3\,s^{-1}}} \approx
5.2\left(\frac{T}{100K}\right)^{0.13}
\exp\left(-\frac{91.2K}{T}\right)
\end{equation}
Oxygen (OI) Cooling:
\begin{equation}
\frac{\Lambda^{H}_{OI}}{10^{-27}{\rm erg\,cm^3\,s^{-1}}} \approx
4.1\left(\frac{T}{100K}\right)^{0.42}
\exp\left(-\frac{228K}{T}\right)
\end{equation}
We compute total cooling ($\Lambda=\Lambda_{Ly\alpha}+\Lambda_{CII}+\Lambda_{OI}$), equilibrium density
($n_{eq}$), and pressure ($P=n_{eq}kT$) as a function of logarithmic steps in temperature.
We have adopted the Lodders (2010) abundances for C and O, as used in the ISM/IGM book
(see Chapter 1, Table 1.2).
```python
xe = 0.001
minT = 10.0
maxT = 20000.
gain = 20.0
# Boltzmann Constant (CODATA 2018)
k = 1.380649e-16 # erg K^-1
minLogT = math.log10(minT)
maxLogT = math.log10(maxT)
logT = np.linspace(minLogT,maxLogT,num=1001)
T = 10.0**logT
xfac = xe/0.001
TH = 118000.0 # hydrogen excitation temperature in K
TC = 91.2 # carbon excitation temperature in K
TO = 228.0 # oxygen excitation temperature in K
# Lyman-alpha cooling
coolLya = 6.0e5*(xfac/np.sqrt(T/1.0e4))*np.exp(-TH/T)
# Carbon cooling
coolC = 3.1*(xfac/np.sqrt(T/100.0))*np.exp(-TC/T) + 5.2*((T/100.0)**0.13)*np.exp(-TC/T)
# Oxygen cooling
coolO = 4.1*((T/100.0)**0.42)*np.exp(-TO/T)
# Total cooling
coolTot = (coolLya + coolC + coolO)
# equilibrium density
neq = gain/coolTot
# pressure
P = neq*k*T
```
## FGH Cooling Function - Figure 1.10
Plot the cooling function $\Lambda(T)$ vs $T$ including the curves for the individual contributions
```python
plotFile = f'Fig1_10.{figFmt}'
fig,ax = plt.subplots()
fig.set_dpi(dpi)
fig.set_size_inches(wInches,hInches,forward=True)
ax.tick_params('both',length=6,width=lwidth,which='major',direction='in',top='on',right='on')
ax.tick_params('both',length=3,width=lwidth,which='minor',direction='in',top='on',right='on')
# Limits
minCool = 1.0e-30 # erg cm^3 s^-1
maxCool = 1.0e-24
# Labels
xLabel = r'Temperature [K]'
yLabel = r'$\Lambda$ [erg cm$^3$ s$^{-1}$]'
plt.xlim(minT,maxT)
ax.set_xscale('log')
ax.set_xticks([10,100,1000,1.0e4])
ax.set_xticklabels(['10','100','1000','10$^{4}$'])
plt.xlabel(xLabel)
plt.ylim(minCool,maxCool)
ax.set_yscale('log')
ax.set_yticks([1.0E-30,1.0E-29,1.0E-28,1.0E-27,1.0e-26,1.0e-25,1.0e-24])
ax.set_yticklabels(['$10^{-30}$','10$^{-29}$','10$^{-28}$','10$^{-27}$','10$^{-26}$','10$^{-25}$','10$^{-24}$'])
plt.ylabel(yLabel)
# Plot the total and individual cooling functions
plt.plot(T,1.0e-27*coolTot,'-',color='black',lw=2,zorder=10)
plt.plot(T,1.0e-27*coolLya,'--',color='black',lw=1,zorder=10)
plt.plot(T,1.0e-27*coolC,':',color='black',lw=1,zorder=10)
plt.plot(T,1.0e-27*coolO,'-.',color='black',lw=1,zorder=10)
# label components
lfs = np.rint(1.2*axisFontSize)
plt.text(1000.0,1.7e-26,'Total',fontsize=lfs,rotation=10.0,ha='center',va='bottom')
plt.text(80.0,1.0e-28,r'$[\textsc{O\,i}]\,\lambda$63$\mu m$',fontsize=lfs)
plt.text(3000.0,3.5e-27,r'$[\textsc{C\,ii}]\,\lambda$158$\mu m$',fontsize=lfs,rotation=3.0,ha='center')
plt.text(5400.0,1.0e-28,r'Ly$\alpha$',fontsize=lfs,ha='center')
# make the figure
plt.plot()
plt.savefig(plotFile,bbox_inches='tight',facecolor='white')
```
## FGH equilibrium density - Figure 1.11
Plot the equilibrium density function $n_{eq}$ vs $T$ for the FGH model.
```python
plotFile = f'Fig1_11.{figFmt}'
fig,ax = plt.subplots()
fig.set_dpi(dpi)
fig.set_size_inches(wInches,hInches,forward=True)
ax.tick_params('both',length=6,width=lwidth,which='major',direction='in',top='on',right='on')
ax.tick_params('both',length=3,width=lwidth,which='minor',direction='in',top='on',right='on')
# Limits
minNe = 0.01 # cm^{-3}
maxNe = 20000.0
# Labels
xLabel = r'Temperature [K]'
yLabel = r'$n$ [cm$^{-3}$]'
plt.xlim(minT,maxT)
ax.set_xscale('log')
ax.set_xticks([10,100,1000,1.0e4])
ax.set_xticklabels(['10','100','1000','10$^{4}$'])
plt.xlabel(xLabel)
plt.ylim(minNe,maxNe)
ax.set_yscale('log')
ax.set_yticks([0.01,0.1,1.0,10.,100.,1e3,1e4])
ax.set_yticklabels(['0.01','0.1','1','10','100','1000','10$^{4}$'])
plt.ylabel(yLabel)
# Plot neq vs T
plt.plot(T,neq,'-',color='black',lw=2,zorder=10)
plt.fill_between(T,neq,maxNe,facecolor="#eaeaea")
# label regions above and below
lfs = np.rint(1.2*axisFontSize)
plt.text(200.0,0.1,'Net heating',fontsize=lfs,ha='center',zorder=10)
plt.text(1000.0,20.0,'Net cooling',fontsize=lfs,ha='center',zorder=10)
# make the figure
plt.plot()
plt.savefig(plotFile,bbox_inches='tight',facecolor='white')
```
## FGH pressure vs density - Figure 1.12
Plot the equlibrium pressure vs density for the FGH model.
We numerically search for the stability region pressure limits and the crossing points at a reference pressure of
P= 2×10−13 dyne/cm 2 . The methods used are a little dodgy, but are robust here as the pressure-density curve is
well-behaved.
```python
plotFile = f'Fig1_12.{figFmt}'
fig,ax = plt.subplots()
fig.set_dpi(dpi)
fig.set_size_inches(wInches,hInches,forward=True)
plt.tick_params('both',length=6,width=lwidth,which='major',direction='in',top='on',right='on')
plt.tick_params('both',length=3,width=lwidth,which='minor',direction='in',top='on',right='on')
# Limits
minNe = 0.02 # cm^{-3}
maxNe = 10000.0
minP = 4.0e-14 # dyne cm^-2
maxP = 1.0e-11
# Labels
xLabel = r'$n$ [cm$^{-3}$]'
yLabel = r'$P$ [dyne cm$^{-2}$]'
plt.xlim(minNe,maxNe)
plt.xscale('log')
ax.set_xticks([0.1,1.0,10.,1.0e2,1.0e3,1.0e4])
ax.set_xticklabels(['0.1','1.0','10','100','1000','10$^4$'])
plt.xlabel(xLabel)
plt.ylim(minP,maxP)
ax.set_yscale('log')
ax.set_yticks([1.0e-13,1.0e-12,1.0e-11])
ax.set_yticklabels(['10$^{-13}$','10$^{-12}$','10$^{-11}$'])
plt.ylabel(yLabel)
# plot the n-P curve
plt.plot(neq,P,'-',color='black',lw=2,zorder=10)
plt.fill_between(neq,P,maxP,facecolor="#eaeaea")
# FGH stability region - estimate from array using scipy.signal argrelmin() and argrelmax()
# peak-finding functions
iMin = argrelmin(P)[0]
iMax = argrelmax(P)[0]
plt.hlines(P[iMin],minNe,maxNe,color='black',ls='--',lw=0.5)
plt.hlines(P[iMax],minNe,maxNe,color='black',ls='--',lw=0.5)
# Reference pressure, 2e-13 dyne/cm^2
pFGH = 2.0e-13
# The FGH points are at zero crossings of P(n)-fghP. Find the nearest zero-crossing, then
# fit a line to +/-3 points around it and find the crossing point. This is dodgy generally
# but we get away with it because the P-n curve is well-behaved.
iFGH = np.where(np.diff(np.sign(P-pFGH)))[0]
nFGH = []
for i in iFGH:
slope, inter, rVal, pVal, stdErr = stats.linregress(neq[i-3:i+3],P[i-3:i+3]-pFGH)
xZero = -inter/slope
nFGH.append(xZero)
# print(f'n_eq = {xZero:.5e} cm^-3')
lfs = np.rint(1.2*axisFontSize)
plt.plot(nFGH[0],pFGH,color='black',marker='o',ms=8,mfc='black')
plt.text(1.4*nFGH[0],pFGH,'F',fontsize=lfs,va='center',zorder=10)
plt.plot(nFGH[1],pFGH,color='black',marker='o',ms=8,mfc='black')
plt.text(1.4*nFGH[1],pFGH,'G',fontsize=lfs,va='center',zorder=10)
plt.plot(nFGH[2],pFGH,color='black',marker='o',ms=8,mfc='black')
plt.text(1.4*nFGH[2],pFGH,'H',fontsize=lfs,va='center',zorder=10)
plt.text(10.0,1.1*P[iMax],'Net cooling',fontsize=lfs,ha='center',va='bottom',zorder=10)
plt.text(1300.0,pFGH,'Net heating',fontsize=lfs,ha='center',va='center',zorder=10)
# make the figure
plt.plot()
plt.savefig(plotFile,bbox_inches='tight',facecolor='white')
```
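As a cross-check on the linear fits above, the same crossing points can be refined with a bracketed root finder applied to an interpolant of $P(T) - P_{\rm ref}$. This is just an alternative sketch using the arrays and the `pFGH`/`iFGH` variables defined above, not part of the original figure code.
```python
# Alternative: refine the crossing points with a bracketed root finder
from scipy.interpolate import interp1d
from scipy.optimize import brentq

dP = interp1d(logT, P - pFGH, kind='cubic')   # P(T) - reference pressure vs log T
nOfLogT = interp1d(logT, neq, kind='cubic')   # n_eq as a function of log T

logTcross = [brentq(dP, logT[i], logT[i+1]) for i in iFGH]
for lT in logTcross:
    print(f'T = {10.0**lT:.1f} K, n_eq = {float(nOfLogT(lT)):.3f} cm^-3')
```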
| 0f61218e3c33a524a3e789c2b2d20e26e4e28923 | 15,046 | ipynb | Jupyter Notebook | Chapter1/Fig1_FGH.ipynb | CambridgeUniversityPress/Interstellar-and-Intergalactic-Medium | 6d19cd4a517126e0f4737ba0f338117098224d92 | [
"CC0-1.0",
"CC-BY-4.0"
]
| 10 | 2021-04-20T07:26:10.000Z | 2022-02-24T11:02:47.000Z | Chapter1/Fig1_FGH.ipynb | CambridgeUniversityPress/Interstellar-and-Intergalactic-Medium | 6d19cd4a517126e0f4737ba0f338117098224d92 | [
"CC0-1.0",
"CC-BY-4.0"
]
| null | null | null | Chapter1/Fig1_FGH.ipynb | CambridgeUniversityPress/Interstellar-and-Intergalactic-Medium | 6d19cd4a517126e0f4737ba0f338117098224d92 | [
"CC0-1.0",
"CC-BY-4.0"
]
| null | null | null | 32.779956 | 135 | 0.541406 | true | 3,693 | Qwen/Qwen-72B | 1. YES
2. YES | 0.812867 | 0.737158 | 0.599212 | __label__eng_Latn | 0.397379 | 0.2305 |
# Modeling and Simulation in Python
Chapter 9
Copyright 2017 Allen Downey
License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
```python
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import everything from SymPy.
from sympy import *
# Set up Jupyter notebook to display math.
init_printing()
```
The following displays SymPy expressions and provides the option of showing results in LaTeX format.
```python
from sympy.printing import latex
def show(expr, show_latex=False):
"""Display a SymPy expression.
expr: SymPy expression
show_latex: boolean
"""
if show_latex:
print(latex(expr))
return expr
```
### Analysis with SymPy
Create a symbol for time.
```python
t = symbols('t')
```
If you combine symbols and numbers, you get symbolic expressions.
```python
expr = t + 1
```
The result is an `Add` object, which just represents the sum without trying to compute it.
```python
type(expr)
```
sympy.core.add.Add
`subs` can be used to replace a symbol with a number, which allows the addition to proceed.
```python
expr.subs(t, 2)
```
`f` is a special class of symbol that represents a function.
```python
f = Function('f')
```
f
The type of `f` is `UndefinedFunction`
```python
type(f)
```
sympy.core.function.UndefinedFunction
SymPy understands that `f(t)` means `f` evaluated at `t`, but it doesn't try to evaluate it yet.
```python
f(t)
```
`diff` returns a `Derivative` object that represents the time derivative of `f`
```python
dfdt = diff(f(t), t)
```
```python
type(dfdt)
```
sympy.core.function.Derivative
We need a symbol for `alpha`
```python
alpha = symbols('alpha')
```
Now we can write the differential equation for proportional growth.
```python
eq1 = Eq(dfdt, alpha*f(t))
```
And use `dsolve` to solve it. The result is the general solution.
```python
solution_eq = dsolve(eq1)
```
We can tell it's a general solution because it contains an unspecified constant, `C1`.
In this example, finding the particular solution is easy: we just replace `C1` with `p_0`
```python
C1, p_0 = symbols('C1 p_0')
```
```python
particular = solution_eq.subs(C1, p_0)
```
In the next example, we have to work a little harder to find the particular solution.
### Solving the quadratic growth equation
We'll use the (r, K) parameterization, so we'll need two more symbols:
```python
r, K = symbols('r K')
```
Now we can write the differential equation.
```python
eq2 = Eq(diff(f(t), t), r * f(t) * (1 - f(t)/K))
```
And solve it.
```python
solution_eq = dsolve(eq2)
```
The result, `solution_eq`, contains `rhs`, which is the right-hand side of the solution.
```python
general = solution_eq.rhs
```
We can evaluate the right-hand side at $t=0$
```python
at_0 = general.subs(t, 0)
```
Now we want to find the value of `C1` that makes `f(0) = p_0`.
So we'll create the equation `at_0 = p_0` and solve for `C1`.    Because this is just an algebraic equation, not a differential equation, we use `solve`, not `dsolve`.
The result from `solve` is a list of solutions. In this case, [we have reason to expect only one solution](https://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem), but we still get a list, so we have to use the bracket operator, `[0]`, to select the first one.
```python
solutions = solve(Eq(at_0, p_0), C1)
```
```python
value_of_C1 = solutions[0]
```
Now in the general solution, we want to replace `C1` with the value of `C1` we just figured out.
```python
particular = general.subs(C1, value_of_C1)
```
The result is complicated, but SymPy provides a method that tries to simplify it.
```python
particular = simplify(particular)
```
Often simplicity is in the eye of the beholder, but that's about as simple as this expression gets.
Just to double-check, we can evaluate it at `t=0` and confirm that we get `p_0`
```python
particular.subs(t, 0)
```
This solution is called the [logistic function](https://en.wikipedia.org/wiki/Population_growth#Logistic_equation).
In some places you'll see it written in a different form:
$f(t) = \frac{K}{1 + A e^{-rt}}$
where $A = (K - p_0) / p_0$.
We can use SymPy to confirm that these two forms are equivalent. First we represent the alternative version of the logistic function:
```python
A = (K - p_0) / p_0
```
```python
logistic = K / (1 + A * exp(-r*t))
```
To see whether two expressions are equivalent, we can check whether their difference simplifies to 0.
```python
simplify(particular - logistic)
```
This test only works one way: if SymPy says the difference reduces to 0, the expressions are definitely equivalent (and not just numerically close).
But if SymPy can't find a way to simplify the result to 0, that doesn't necessarily mean there isn't one. Testing whether two expressions are equivalent is a surprisingly hard problem; in fact, there is no algorithm that can solve it in general.
### Exercises
**Exercise:** Solve the quadratic growth equation using the alternative parameterization
$\frac{df(t)}{dt} = \alpha f(t) + \beta f^2(t) $
```python
alpha = symbols('alpha')
beta = symbols('beta')
t = symbols('t')
f = Function('f')
p_0 = symbols('p_0')
C1 = symbols('C1')
```
```python
a_eq = Eq(diff(f(t),t), alpha*f(t)+beta*f(t)**2)
```
```python
a_sol = dsolve(a_eq)
```
```python
a_gen = a_sol.rhs
```
```python
c1_particular = solve(Eq(a_gen.subs(t, 0), p_0), C1)[0]  # solve returns a list; take its only element
```
```python
a_part = a_gen.subs(C1, c1_particular)
```
```python
a_sol_full = simplify(a_gen + a_part)
```
**Exercise:** Use [WolframAlpha](https://www.wolframalpha.com/) to solve the quadratic growth model, using either or both forms of parameterization:
df(t) / dt = alpha f(t) + beta f(t)^2
or
df(t) / dt = r f(t) (1 - f(t)/K)
Find the general solution and also the particular solution where `f(0) = p_0`.
```python
```
| bffc158dc7cd71af943efec573f1cd25e9a0f370 | 56,507 | ipynb | Jupyter Notebook | code/chap09mine.ipynb | SSModelGit/ModSimPy | 4d1e3d8c3b878ea876e25e6a74509535f685f338 | [
"MIT"
]
| null | null | null | code/chap09mine.ipynb | SSModelGit/ModSimPy | 4d1e3d8c3b878ea876e25e6a74509535f685f338 | [
"MIT"
]
| null | null | null | code/chap09mine.ipynb | SSModelGit/ModSimPy | 4d1e3d8c3b878ea876e25e6a74509535f685f338 | [
"MIT"
]
| null | null | null | 48.712931 | 2,209 | 0.717009 | true | 1,665 | Qwen/Qwen-72B | 1. YES
2. YES | 0.957278 | 0.90599 | 0.867284 | __label__eng_Latn | 0.990879 | 0.853325 |
# Portable cooling system
A portable cooling system uses canisters of volume 2 L charged with refrigerant R134a at 20°C and quality 0.05 (i.e., vapor mass fraction). Heat is transferred from a person to the canister. Saturated vapor escapes from the canister, through a relief valve, when the canister temperature reaches 30°C (i.e., when the pressure reaches the corresponding saturation pressure). The cooling system stops working and must be discarded when it is empty of liquid refrigerant.
**Problem:** Determine the cooling density of the system: the ratio of the amount of energy that can be absorbed before the canister is discarded to the initial system mass. Compare this to the cooling density of an ice pack (with latent heat of fusion of $\Delta h_{\text{fus}} = 333.6$ J/g).
```python
# load necessary modules
# Numpy adds some useful numerical types and functions
import numpy as np
# Cantera will handle thermodynamic properties
import cantera as ct
# Pint gives us some helpful unit conversion
from pint import UnitRegistry
ureg = UnitRegistry()
Q_ = ureg.Quantity
```
Specify initial state:
```python
volume = Q_(2, 'liter')
temp_charge = Q_(20, 'degC')
quality_charge = 0.05
initial = ct.Hfc134a()
initial.TX = temp_charge.to('K').magnitude, quality_charge
```
Final state, based on the temperature at which the relief valve opens and on the fact that the escaping fluid is saturated vapor:
```python
final = ct.Hfc134a()
temp_final = Q_(30, 'degC')
quality_final = 1.0
final.TX = temp_final.to('K').magnitude, quality_final
```
Do a mass balance on the system, over the process from state 1 to state 2:
\begin{equation}
0 = m_{\text{out}} + m_2 - m_1 \;,
\end{equation}
which we can use to find the mass that leaves the canister ($m_{\text{out}}$):
```python
mass_initial = volume / Q_(initial.v, 'm^3/kg')
mass_final = volume / Q_(final.v, 'm^3/kg')
mass_out = mass_initial - mass_final
print(f'mass that left the canister: {mass_out.to(ureg.kg): .2f}')
```
mass that left the canister: 0.70 kilogram
Next, we can perform an energy balance of the system over the process:
\begin{equation}
Q_{\text{in}} = m_{\text{out}} h_{\text{out}} + m_2 u_2 - m_1 u_1 \;,
\end{equation}
where $h_{\text{out}}$ is the enthalpy of the refrigerant leaving the system, which has the same properties as the final state in the tank (no liquid, and 30°C):
```python
heat_in = (
mass_out*Q_(final.h, 'J/kg') +
mass_final*Q_(final.u, 'J/kg') -
mass_initial*Q_(initial.u, 'J/kg')
)
```
The cooling density (neglecting the canister mass) is
\begin{equation}
CD = \frac{Q_{\text{in}}}{m_1}
\end{equation}
```python
cooling_density = heat_in / mass_initial
cooling_density.ito('W hr/kg')
print(f'cooling density: {cooling_density: .2f}')
cooling_density_ice = Q_(333.6, 'J/g')
cooling_density_ice.ito('W hr/kg')
print(f'cooling density of ice: {cooling_density_ice: .2f}')
```
cooling density: 49.37 hour * watt / kilogram
cooling density of ice: 92.67 hour * watt / kilogram
| a06d36be0a5279c8d8fe267d100bbc269377f506 | 5,153 | ipynb | Jupyter Notebook | book/content/first-law/portable-cooling-system.ipynb | kyleniemeyer/computational-thermo | 3f0d1d4a6d4247ac3bf3b74867411f2090c70cbd | [
"CC-BY-4.0",
"BSD-3-Clause"
]
| 13 | 2020-04-01T05:52:06.000Z | 2022-03-27T20:25:59.000Z | book/content/first-law/portable-cooling-system.ipynb | kyleniemeyer/computational-thermo | 3f0d1d4a6d4247ac3bf3b74867411f2090c70cbd | [
"CC-BY-4.0",
"BSD-3-Clause"
]
| 1 | 2020-04-28T04:02:05.000Z | 2020-04-29T17:49:52.000Z | book/content/first-law/portable-cooling-system.ipynb | kyleniemeyer/computational-thermo | 3f0d1d4a6d4247ac3bf3b74867411f2090c70cbd | [
"CC-BY-4.0",
"BSD-3-Clause"
]
| 6 | 2020-04-03T14:52:24.000Z | 2022-03-29T02:29:43.000Z | 27.704301 | 413 | 0.570541 | true | 860 | Qwen/Qwen-72B | 1. YES
2. YES | 0.903294 | 0.779993 | 0.704563 | __label__eng_Latn | 0.977872 | 0.475268 |
# Variational Mixture of Gaussians
Gaussian mixture models are widely used to model complex intractable probability distributions using a mixture of Gaussians. In a [previous post](https://chandrusuresh.github.io/MyNotes/files/DensityEstimation/GaussianMixtureModels.html), GMMs were discussed together with a Maximum Likelihood approach to fit a GMM to the Palmer Penguins dataset using the EM algorithm.
Here we describe the variational inference algorithm applied to the same dataset. The plate model in Fig. 10.5 of [1] is assumed for the variational approximation.
The variational approach resolves many limitations of the MLE approach:
- It uses a Bayesian model, which makes it possible to quantify parameter uncertainty.
- It prevents over-fitting of the model to the data.
- The cardinality/dimensionality of the latent variable can be inferred by the algorithm.
## Theory
Given a dataset for random variable $x$, we introduce a $K$ dimensional binary random variable $z$ having a 1-of-$K$ representation in which a particular element $z_k = 1$ with $z_i = 0 \text{ } \forall i\ne k$ i.e., $\sum{z_k} = 1$. Assume the dataset has $N$ points. The dataset of observed and latent variables are denoted by $X$ and $Z$ respectively.
The joint distribution $p(Z,X)$ factorizes as $ p(Z,X) = p(X|Z)\cdot p(Z)$.
If $p(z_k = 1) = \pi_k$ and $p(x|z_k=1) = \mathcal{N}(x|\mu_k,\Lambda_k^{-1})$ then, $$\begin{align} p(Z) &= \prod_{n=1}^N \prod_{k=1}^K \pi_k^{z_{nk}} \\
p(X|Z) &= \prod_{n=1}^N \prod_{k=1}^K \Bigg(\mathcal{N}(x|\mu_k,\Lambda_k^{-1})\Bigg)^{z_{nk}} \end{align}$$
The marginal distribution of $X$ is therefore,
$$ \begin{align} p(X) &= \sum_{z,\pi,\mu,\Lambda} p(X,Z,\pi,\mu,\Lambda) \\
&= \sum_{z,\pi,\mu,\Lambda} p(X|Z,\mu,\Lambda) p(Z|\pi) p(\pi) p(\mu|\Lambda) p(\Lambda) \end{align}$$
We now consider a variational distribution that factorizes the latent variables and parameters as:
$$ q(Z,\pi,\mu,\Lambda) = q(Z)q(\pi,\mu,\Lambda)$$
### Conjugate Priors
We introduce conjugate priors for $\pi$,$\mu$,$\Lambda$ as follows.
For $\pi$ we choose a Dirichlet prior with the same parameter $\alpha_0$ for each component.
$$ p(\pi) = \text{Dir}(\pi|\alpha_0) = C(\alpha_0)\prod_{k=1}^K{\pi_k^{\alpha_0-1}}$$
where $C(\alpha_0)$ is the normalization constant.
For $\mu$ and $\Lambda$, a Gaussian-Wishart prior is chosen for the mean and precision of each component.
$$ \begin{align} p(\mu,\Lambda) &= p(\mu|\Lambda) p(\Lambda) \\
&= \prod_{k=1}^K\mathcal{N}\Big(\mu_k|m_0,(\beta_0\Lambda_k)^{-1})\Big) \mathcal{W}(\Lambda_k|W_0,\nu_0) \end{align}$$
### Optimal factor for $q(Z)$
The update equation for the latent variable is given by,
$$ \begin{align} \ln{q^*(Z)} &= \mathbb{E}_{\pi,\mu,\Lambda}\Big[ \ln{p(X,Z,\pi,\mu,\Lambda}\Big] \\
&= \mathbb{E}_{\pi,\mu,\Lambda}\Big[\ln\Big\{p(X|Z,\mu,\Lambda) p(Z|\pi) p(\pi) p(\mu|\Lambda) p(\Lambda) \Big\}\Big]\end{align}$$
By combining the terms not including $Z$ in the above expression into a constant term,
$$ \begin{align} \ln{q^*(Z)} &= \mathbb{E}_{\pi}\Big[\ln p(Z|\pi)\Big] + \mathbb{E}_{\mu,\Lambda}\Big[\ln{p(X|Z,\pi,\mu)}\Big] + \text{const.}\end{align}$$
$$ \begin{align}\mathbb{E}_{\pi}\Big[\ln p(Z|\pi)\Big] &= \mathbb{E}_{\pi}\Big[\sum_{n=1}^N{\sum_{k=1}^K{z_{nk}\ln\pi_k}}\Big] \\
&= \sum_{n=1}^N{\sum_{k=1}^K{z_{nk}\mathbb{E}\Big[\ln\pi_k\Big]}} \end{align}$$
$$ \begin{align}\mathbb{E}_{\mu,\Lambda}\Big[\ln{p(X|Z,\pi,\mu)}\Big] &= \mathbb{E}_{\mu,\Lambda}\Bigg[\sum_{n=1}^N{\sum_{k=1}^K{z_{nk}\Big(-\frac{D}{2}\ln(2\pi) + \frac{1}{2}\ln{|\Lambda_k|} \\
- \frac{1}{2}(x_n-\mu_k)^T \Lambda_k (x_n-\mu_k)\Big)\Bigg]}} \\
&= \sum_{n=1}^N{\sum_{k=1}^K{z_{nk}\Bigg(\frac{1}{2}\mathbb{E}\Big[\ln{|\Lambda_k|}\Big] - \frac{D}{2}\ln(2\pi) \\
- \frac{1}{2}\mathbb{E}\Big[(x_n-\mu_k)^T \Lambda_k (x_n-\mu_k)\Big]\Bigg)}} \end{align}$$
Substituting the two expressions above,
$$ \begin{align} \ln{q^*(Z)} &= \sum_{n=1}^N{\sum_{k=1}^K{z_{nk} \rho_{nk}}} + \text{const.}\end{align}$$
where $\rho_{nk}$ is given by,
$$ \rho_{nk} = \mathbb{E}\Big[\ln\pi_k\Big] + \frac{1}{2}\mathbb{E}\Big[\ln{|\Lambda_k|}\Big] - \frac{D}{2}\ln(2\pi) - \frac{1}{2}\mathbb{E}\Big[(x_n-\mu_k)^T \Lambda_k (x_n-\mu_k)\Big]$$
The distribution $q^*(Z)$ is given by,
$$ \Rightarrow q^*(Z) = \prod_{n=1}^N{\prod_{k=1}^K{r_{nk}^{z_{nk}}}}$$
where $r_{nk}$ are the responsibilities with
$$ r_{nk} = \frac{\exp(\rho_{nk})}{\sum_{j=1}^K\exp(\rho_{nj})}$$
For the discrete distribution $q^*(Z)$, we have $\mathbb{E}\Big[z_{nk}\Big] = r_{nk}$
The following provide expressions for the terms in the above expression for $\rho_{nk}$.
$$\begin{align} \mathbb{E}\Big[(x_n-\mu_k)^T \Lambda_k (x_n-\mu_k)\Big] &= D \beta_k^{-1} + \nu_k \mathbb{E}\Big[(x_n-m_k)^T W_k (x_n-m_k)\Big] \\ \mathbb{E}\Big[\ln{|\Lambda_k|}\Big] &= \sum_{i=1}^D \psi\Bigg( \frac{\nu_k+1-i}{2}\Bigg) + D \ln 2 + \ln |W_k| \\
\mathbb{E}\Big[\ln\pi_k\Big] &= \psi(\alpha_k) - \psi(\hat{\alpha}) \end{align}$$
where $\psi(\cdot)$ is the digamma function and $\hat{\alpha} = \sum_k \alpha_k$.
### Optimal factor for $q(\pi,\mu,\Lambda)$
Given the expression for $q^*(Z)$ and $r_{nk}$, the following quantities are defined.
$$ \begin{align} N_k &= \sum_{n=1}^N{r_{nk}} \\
\bar{x}_k&= \frac{1}{N_k}\sum_{n=1}^N{r_{nk}x_n}\\
S_k &= \frac{1}{N_k}\sum_{n=1}^N{r_{nk}(x_n-\bar{x}_k)(x_n-\bar{x}_k)^T} \end{align}$$
Now the optimal factor $q(\pi,\mu,\Lambda)$ is given by
$$ \begin{align} \ln{q^*(\pi,\mu,\Lambda)} &= \mathbb{E}_{Z}\Big[\ln\Big\{p(X|Z,\mu,\Lambda) p(Z|\pi) p(\pi) p(\mu|\Lambda) p(\Lambda) \Big\}\Big] \\
&= \ln{p(\pi)} + \mathbb{E}_{Z}\Bigg[\sum_{k=1}^K{\ln p(\mu_k,\Lambda_k)} + \ln p(Z|\pi) \\
&+ \sum_{n=1}^N{\sum_{k=1}^K{z_{nk}\ln \mathcal{N}(x_n|\mu_k,\Lambda_k)}} \Bigg] + \text{const.} \\
&= \ln{p(\pi)} + \mathbb{E}_{Z}\Big[\ln p(Z|\pi)\Big] + \sum_{k=1}^K{\ln p(\mu_k,\Lambda_k)} \\
&+ \sum_{n=1}^N{\sum_{k=1}^K{\mathbb{E}_{Z}\Big[z_{nk}\Big]\ln \mathcal{N}(x_n|\mu_k,\Lambda_k)}} + \text{const.}\end{align}$$
The expression above factorizes into terms involving just $\pi$ and $\mu,\Lambda$ thereby implying independence of these variables. Note that this result is not based on any prior assumption but a direct result of the model.
$$ \begin{align} q^*(\pi,\mu,\Lambda) &= q(\pi) \prod_{k=1}^K{q(\mu,\Lambda)} \\
\text{where}\quad \ln q^*(\pi) &= \ln{p(\pi)} + \mathbb{E}_{Z}\Big[\ln p(Z|\pi)\Big] + \text{const.}\\
\text{and}\quad \ln q^*(\mu,\Lambda) &= \sum_{k=1}^K{\ln p(\mu_k,\Lambda_k)}
+ \sum_{n=1}^N{\sum_{k=1}^K{\mathbb{E}_{Z}\Big[z_{nk}\Big]\ln \mathcal{N}(x_n|\mu_k,\Lambda_k)}} + \text{const.}\end{align}$$
### Optimal factor for $q(\pi)$
$$\begin{align} \ln q^*(\pi) &= \ln{p(\pi)} + \mathbb{E}_{Z}\Big[\ln p(Z|\pi)\Big] + \text{const.} \\
&= (\alpha_0-1)\sum_{k=1}^K{\ln\pi_k} + \sum_{n=1}^N{\sum_{k=1}^K{\mathbb{E}_{Z}[z_{nk}]\ln\pi_k}} + \text{const.} \\
&= (\alpha_0-1)\sum_{k=1}^K{\ln\pi_k} + \sum_{n=1}^N{\sum_{k=1}^K{r_{nk}\ln\pi_k}} + \text{const.} \\
&= (\alpha_0-1)\sum_{k=1}^K{\ln\pi_k} + \sum_{k=1}^K{N_k\ln\pi_k} + \text{const.} \\ &= \sum_{k=1}^K{(\alpha_0+N_k-1)\ln\pi_k} + \text{const.} \\ \end{align}$$
The posterior $q^*(\pi)$ is also a Dirichlet distribution with parameter $\alpha_0+N_k$. i.e. $q^*(\pi) = \text{Dir}(\pi|\alpha_0+N_k)$
### Optimal factor for $q(\mu,\Lambda)$
The derivation for the parameters of the posterior distribution for $q(\mu,\Lambda)$ is very involved, so only the results are presented here. More details can be found in section 10.2.1. in [1].
$$\begin{align} q(\mu_k,\Lambda_k) &= \mathcal{N}\Big(\mu_k|m_k,(\beta_k \Lambda_k)^{-1}\Big)\cdot \mathcal{W}(\Lambda_k|W_k,\nu_k) \\
\text{where} \quad \beta_k &= \beta_0 + N_k \\
m_k &= \frac{1}{\beta_k}(\beta_0 m_0 + N_k \bar{x}_k) \\
W_k^{-1} &= W_0^{-1} + N_k S_k + \frac{\beta_0 N_k}{\beta_0+N_k}(\bar{x}_k-m_0)(\bar{x}_k-m_0)^T \\
\nu_k &= \nu_0 + N_k\end{align}$$
## Example
The Palmer penguins dataset released by [2] and obtained from [3] is used as an example. Two features - Flipper Length and Culmen Length - are used to cluster the dataset. We deliberately start with more mixture components than species (K = 10 in the code below) and show that most of these components end up with negligible responsibility, so the algorithm effectively retains only a handful of clusters that capture the three categories of penguins - Adelie, Chinstrap and Gentoo. The dataset is plotted below; it is the same dataset used to demonstrate [Gaussian Mixture Models using the EM algorithm](https://chandrusuresh.github.io/MyNotes/files/DensityEstimation/GaussianMixtureModels.html).
```python
import pandas as pd
import requests
import io
import numpy as np
from scipy.stats import multivariate_normal as gaussian
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from matplotlib.patches import Ellipse
import matplotlib.transforms as transforms
import matplotlib.colors as mcolors
from scipy.special import digamma
from scipy.stats import multivariate_normal as gaussian
from scipy.stats import wishart,dirichlet
from scipy.special import softmax as softmax
def getCSV(url):
download = requests.get(url).content
df = pd.read_csv(io.StringIO(download.decode('utf-8')))
return df
file = "https://raw.githubusercontent.com/mcnakhaee/palmerpenguins/master/palmerpenguins/data/penguins-raw.csv"
df = getCSV(file)
```
```python
txt_labels = np.unique(df['Species'])
lbl = txt_labels[0]
fig,ax = plt.subplots(1,2,figsize=(10,5))
df_data = [None]*len(txt_labels)
img = mpimg.imread('../../img/lter_penguins.png')
ax[0].imshow(img)
ax[0].axis('off')
color = ['tomato','mediumorchid','seagreen','aqua','black','magenta']
for i,lbl in enumerate(txt_labels):
df_data[i] = df[df['Species'] == lbl]
# print(df_data[i].columns)
ax[1].scatter(df_data[i]['Flipper Length (mm)'],df_data[i]['Culmen Length (mm)'],color=color[i])
# ax[1].axis('off')
ax[1].set_xlabel('Flipper Length');
ax[1].set_ylabel('Culmen Length');
```
```python
## Number of classes
K = 10
flp_len = np.mean(df['Flipper Length (mm)'])
clm_len = np.mean(df['Culmen Length (mm)'])
df = df[df['Flipper Length (mm)'].notna()]
df = df[df['Culmen Length (mm)'].notna()]
data = np.matrix(np.c_[df['Flipper Length (mm)'],df['Culmen Length (mm)']].T)
# print(data)
x_mean = np.array([[flp_len],[clm_len]])
d = data - np.reshape(x_mean,(2,1))
cov = np.matmul(d,d.T)/float(data.shape[1])
prec = np.linalg.inv(cov)#
## Init
pts = data.shape[1]
m_init = np.mean(data[:,:pts//K],axis=1)
prev = pts//K
for k in range(1,K):
m_init = np.c_[m_init,np.mean(data[:,prev+1:prev+1+(pts//K)],axis=1)]
prev = prev+1+(pts//K)
m_init = np.matrix(m_init)
print(m_init)
# m_init = data[:,-K:]
# m_init = data[:,10:10+K]
# m_init = data[:,:K]
# m_init = np.matrix(np.c_[data[:,0],data[:,200],data[:,-1]])
# m_init = data[:,:K]
# m_init = np.matrix(np.random.randn(2,K))
beta_init = np.array([1. for k in range(K)])
W_init = [prec for k in range(K)]
nu_init = np.array([2. for k in range(K)])
alpha_init = np.array([0.001 for k in range(K)])
```
[[186.35294118 188.55882353 191.85294118 192.88235294 206.82352941
215.82352941 219.58823529 214.38235294 194.38235294 197.51851852]
[ 38.64411765 38.92941176 38.51764706 39.44117647 43.96176471
46.76470588 48.07058824 48.46176471 48.84411765 48.88518519]]
```python
## Variational Approximation
def getZ(X,m,beta,W,nu,alpha):
dig_alpha = digamma(np.sum(alpha))
D = X.shape[0]
N = X.shape[1]
rho = np.zeros((N,K))
r = np.zeros((N,K))
for k in range(K):
E_ln_pi_k = digamma(alpha[k]) - dig_alpha
E_ln_sig_k = float(D)*np.log(2) + np.log(np.linalg.det(W[k]))
for i in range(1,D+1):
E_ln_sig_k = E_ln_sig_k + digamma(0.5*(nu[k]+1-i))
tmpSum = E_ln_pi_k + 0.5*E_ln_sig_k - 0.5*float(D)*np.log(2*np.pi)
for n in range(N):
dx = X[:,n] - m[:,k]
E_mu_sig = float(D)/beta[k] + nu[k]*np.matmul(dx.T,np.matmul(W[k],dx))
rho[n,k] = tmpSum - 0.5*E_mu_sig
for n in range(N):
rho[n,:] = softmax(rho[n,:])
return rho
def getHelperVariables(X,r):
Nk = np.sum(r,axis=0)
x = np.matrix(np.zeros((X.shape[0],K)))
for k in range(K):
if Nk[k] != 0:
for n in range(X.shape[1]):
x[:,k] = x[:,k] + r[n,k]/Nk[k]*X[:,n]
S = [0*np.eye(X.shape[0]) for k in range(K)]
for k in range(K):
if Nk[k] != 0:
for n in range(X.shape[1]):
dx = X[:,n]-x[:,k]
S[k] = S[k] + r[n,k]*np.matmul(dx,dx.T)/Nk[k]
return Nk,x,S
def getMu(m0,beta0,Nk,x):
beta = beta0+Nk
# m = (beta0*m0 + Nk*x)/beta
m = (np.multiply(beta_init,m_init) + np.multiply(Nk,x))/beta
return m,beta
def getPi(alpha0,Nk):
return alpha0+Nk
def getSigma(m0,beta0,W0_inv,nu0,Nk,x,S):
nu = nu0+Nk
Wk = [0*np.eye(2) for k in range(K)]
for k in range(K):
dx = x[:,k]-m0[:,k]
Wk_inv = W0_inv[k] + Nk[k]*S[k] + beta0[k]*Nk[k]/(beta0[k]+Nk[k])*np.matmul(dx,dx.T)
Wk[k] = np.linalg.inv(Wk_inv)
return Wk,nu
def getLogLikelihood(X,r,Nk,x,S,m,beta,W,nu,alpha):
pi = Nk/float(X.shape[1])
logLikelihood = 0
eps = 1E-20*np.ones(pi.shape)
pi_new = np.maximum(pi,eps)
pi_new = pi_new/np.sum(pi_new)
logLikelihood = 0#dirichlet.logpdf(pi_new,alpha)
pi1 = np.array([])
alpha1 = np.array([])
for k in range(K):
if pi[k] == 0:
continue
pi1 = np.append(pi1,np.array([pi[k]]))
alpha1 = np.append(alpha1,np.array([alpha[k]]))
prec = np.linalg.inv(S[k])
cv_mat = S[k]/beta[k]
log_mu = np.log(gaussian.pdf(np.ravel(x[:,k]),mean=np.ravel(m[:,k]),cov=cv_mat))
log_sig = np.log(wishart.pdf(prec,df=nu[k],scale=W[k]))
logLikelihood = logLikelihood + log_mu + log_sig
for n in range(X.shape[1]):
prob = 0
for k in range(K):
# if np.linalg.det(S[k]) == 0:
if pi[k] == 0:
continue
prob = prob + pi[k]*gaussian.pdf(np.ravel(X[:,n]),mean=np.ravel(x[:,k]),cov=S[k])
logLikelihood = logLikelihood + np.log(prob)
logLikelihood = logLikelihood + dirichlet.logpdf(pi1,alpha1)#dirichlet.logpdf(pi_new,alpha)#
return logLikelihood
```
```python
def VariationalGMM(X,m0,beta0,W0,nu0,alpha0,max_iter=500,tol=1E-6):
m1 = m0
beta1 = beta0
W1 = W0.copy()
nu1 = nu0
alpha1=alpha0
W0_inv = []
for k in range(K):
W0_inv += [np.linalg.inv(W0[k])]
c = 0
logLikelihood = []
while c < max_iter:
r = getZ(data,m1,beta1,W1,nu1,alpha1)
Nk,x,S = getHelperVariables(data,r)
m,beta = getMu(m0,beta0,Nk,x)
alpha = getPi(alpha0,Nk)
W,nu = getSigma(m0,beta0,W0_inv,nu0,Nk,x,S)
logLikelihood.append(getLogLikelihood(X,r,Nk,x,S,m,beta,W,nu,alpha))
# print(c,logLikelihood[-1],np.round(Nk,3))
max_diff = np.max(np.abs(beta-beta1))
max_diff = max(max_diff,np.max(np.abs(alpha-alpha1)))
max_diff = max(max_diff,np.max(np.abs(nu-nu1)))
max_diff = max(max_diff,np.max(np.abs(m-m1)))
for k in range(K):
max_diff = max(max_diff,np.max(np.abs(W[k]-W1[k])))
m1 = m
beta1 = beta
W1 = W.copy()
nu1 = nu
alpha1=alpha
if max_diff <= tol:
print("Algorithm converged after iteration:",c)
break
c = c+1
print("Final Log Likelihood:",logLikelihood[-1])
print("Effective cluster size:",np.round(Nk,3))
return m,beta,W,nu,alpha,logLikelihood
def confidence_ellipse(ax, mu, cov, n_std=3.0, facecolor='none', **kwargs):
"""
Create a plot of the covariance confidence ellipse of `x` and `y`
Parameters
----------
cov : Covariance matrix
Input data.
ax : matplotlib.axes.Axes
The axes object to draw the ellipse into.
n_std : float
The number of standard deviations to determine the ellipse's radiuses.
Returns
-------
matplotlib.patches.Ellipse
Other parameters
----------------
kwargs : `~matplotlib.patches.Patch` properties
"""
# if cov != cov.T:
# raise ValueError("Not a valid covariance matrix")
# cov = np.cov(x, y)
pearson = cov[0, 1]/np.sqrt(cov[0, 0] * cov[1, 1])
# Using a special case to obtain the eigenvalues of this
# two-dimensionl dataset.
ell_radius_x = np.sqrt(1 + pearson)
ell_radius_y = np.sqrt(1 - pearson)
ellipse = Ellipse((0, 0),
width=ell_radius_x * 2,
height=ell_radius_y * 2,
facecolor=facecolor,
**kwargs)
# Calculating the stdandard deviation of x from
# the squareroot of the variance and multiplying
# with the given number of standard deviations.
scale_x = np.sqrt(cov[0, 0]) * n_std
mean_x = mu[0]
# calculating the stdandard deviation of y ...
scale_y = np.sqrt(cov[1, 1]) * n_std
mean_y = mu[1]
transf = transforms.Affine2D() \
.rotate_deg(45) \
.scale(scale_x, scale_y) \
.translate(mean_x, mean_y)
ellipse.set_transform(transf + ax.transData)
return ax.add_patch(ellipse)
```
```python
m,beta,W,nu,alpha,logLikelihood = VariationalGMM(data,m_init,beta_init,W_init,nu_init,alpha_init)#,max_iter=1000)
```
Algorithm converged after iteration: 148
Final Log Likelihood: -2199.653956297383
Effective cluster size: [ 0. 0. 148.792 0. 0. 130.389 0. 0. 1.132
61.688]
#### Determining optimal number of clusters
```python
k_idx = []
dist_tol = 1E-6
r = getZ(data,m,beta,W,nu,alpha)
Nk,x,S = getHelperVariables(data,r)
m_final = None
beta_final = None
alpha_final = None
W_final = None
nu_final = None
N_final = None
x_final = None
S_final = None
for k in range(K):
dist = np.linalg.norm(m[:,k]-m_init[:,k])
if dist >= 1E-6 and Nk[k] >= 1:#0.01*data.shape[1]:
if m_final is None:
m_final = m[:,k]
beta_final = np.array([beta[k]])
alpha_final = np.array([alpha[k]])
nu_final = np.array([nu[k]])
W_final = [W[k]]
N_final = np.array([Nk[k]])
x_final = x[:,k]
S_final = [S[k]]
else:
m_final = np.c_[m_final,m[:,k]]
beta_final = np.append(beta_final,np.array([beta[k]]))
alpha_final = np.append(alpha_final,np.array([alpha[k]]))
nu_final = np.append(nu_final,np.array([nu[k]]))
W_final += [W[k]]
N_final = np.append(N_final,np.array([Nk[k]]))
x_final = np.c_[x_final,x[:,k]]
S_final += [S[k]]
K_final = x_final.shape[1]
print("Number of actual clusters:",K_final)
print()
print("Cluster Means:")
print(x_final)
print()
print("Cluster Covariance")
for k in range(K_final):
print(S_final[k])
```
Number of actual clusters: 4
Cluster Means:
[[189.3128161 216.55057185 182.07307154 196.19772708]
[ 38.75547148 47.27299022 56.98587604 49.06080624]]
Cluster Covariance
[[35.94386692 3.55981486]
[ 3.55981486 6.92690337]]
[[47.61652171 15.77126996]
[15.77126996 10.59665065]]
[[17.21662508 -7.1001078 ]
[-7.1001078 8.94857704]]
[[35.18552793 10.10415175]
[10.10415175 7.84172856]]
```python
fig,ax = plt.subplots(1,3,figsize=(20,5))
K = K_final
r = getZ(data,m_final,beta_final,W_final,nu_final,alpha_final)
Nk,x,S = getHelperVariables(data,r)
Nk = np.sum(r,axis=0)
for n in range(data.shape[1]):
rgb = np.array([0,0,0])
for k in range(K):
rgb = rgb+r[n,(k+2)%K]*np.array(mcolors.to_rgb(color[k]))
ax[1].scatter(data[0,n],data[1,n],color=rgb)
ax[1].set_title('Classification as a function of responsibilities')
for k in range(3):
ax[0].scatter(df_data[k]['Flipper Length (mm)'],df_data[k]['Culmen Length (mm)'],color=color[k],alpha=0.3)
for k in range(K):
ki = (k+2)%K
ax[0].plot(x_final[0,ki],x_final[1,ki],'kx')
for i in range(3):
confidence_ellipse(ax[0],x_final[:,ki],S_final[ki],i+1,edgecolor=color[k],linestyle='dashed')
ax[0].set_title('Variational Approximation')
ax[0].set_xlabel('Flipper Length (mm)');
ax[0].set_ylabel('Culmen Length (mm)');
ax[2].plot(range(len(logLikelihood)),logLikelihood);
ax[2].set_title('Learning curve');
ax[2].set_ylabel('Log Likehood');
ax[2].set_xlabel('Iteration Number');
```
## References
[1]: Bishop, Christopher M. 2006. Pattern Recognition and Machine Learning. Springer.
[2]: Horst AM, Hill AP, Gorman KB (2020). palmerpenguins: Palmer Archipelago (Antarctica) penguin data. R package version 0.1.0. https://allisonhorst.github.io/palmerpenguins/.
[3]: CSV data downloaded from https://github.com/mcnakhaee/palmerpenguins
[4]: Code for plotting confidence ellipses from https://matplotlib.org/3.1.0/gallery/statistics/confidence_ellipse.html
| c8414b74f09b2f87a5ab8d98e18c334517b9220c | 148,790 | ipynb | Jupyter Notebook | files/DensityEstimation/VariationalGMM.ipynb | chandrusuresh/MyNotes | 4e0f86195d6d9eb3168bfb04ca42120e9df17f0b | [
"MIT"
]
| null | null | null | files/DensityEstimation/VariationalGMM.ipynb | chandrusuresh/MyNotes | 4e0f86195d6d9eb3168bfb04ca42120e9df17f0b | [
"MIT"
]
| null | null | null | files/DensityEstimation/VariationalGMM.ipynb | chandrusuresh/MyNotes | 4e0f86195d6d9eb3168bfb04ca42120e9df17f0b | [
"MIT"
]
| null | null | null | 210.452617 | 117,664 | 0.875334 | true | 7,218 | Qwen/Qwen-72B | 1. YES
2. YES | 0.903294 | 0.851953 | 0.769564 | __label__eng_Latn | 0.340198 | 0.626287 |
# 1. Linear Regression
## 1.1. Univariate
There are many problems in nature for which we want to obtain output values given a set of input data. Consider the problem of predicting real-estate prices in a given city, as shown in Figure 1, where we can see several points representing different properties, each with its price given as a function of its size.
In **regression** problems, the goal is to estimate output values from a set of input values. For the problem above, then, the idea is to estimate the price of a house from its size; that is, we would like to find the **straight line** that best fits the set of points in Figure 1.
```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn import metrics
from matplotlib import pyplot
import numpy
```
```python
# generate a set of random points for a linear regression problem ***
x, y = make_regression(n_samples=100, n_features=1, noise=5.7)
# display the dataset created in the previous step ***
fig = pyplot.figure(figsize=(15,7))
pyplot.subplot(1, 2, 1)
pyplot.scatter(x,y)
pyplot.xlabel("Tamanho ($m^2$)")
pyplot.ylabel("Preço (R\$x$10^3$)")
pyplot.title("(a)")
# run the linear regressor
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0) # create the partitions
model = LinearRegression()
model.fit(x_train, y_train) # train the algorithm
pyplot.subplot(1, 2, 2)
pyplot.scatter(x,y)
pyplot.plot(x, model.predict(x), color = 'red')
pyplot.xlabel("Tamanho ($m^2$)")
pyplot.ylabel("Preço (R\$x$10^3$)")
pyplot.title("(b)")
fig.tight_layout(pad=10)
fig.suptitle("Figura 1: Exemplo de conjunto de dados de valores imobiliários: (a) conjunto de dados de entrada e (b) linha reta estimada via regressão linear.", y=0.18)
pyplot.show()
```
Let ${\cal D}=\{(x_1,y_1),(x_2,y_2),\ldots,(x_m,y_m)\}$ be a dataset such that $x_i\in\Re$ denotes the **input** data (i.e., the size of the house) and $y_i\in\Re$ represents its value. Furthermore, let ${\cal D}_{tr}\subset {\cal D}$ be the so-called **training set** and ${\cal D}_{ts}\subset {\cal D}\backslash{\cal D}_{tr}$ the **test set**. Machine learning techniques are usually evaluated on disjoint training and test sets, i.e., ${\cal D}_{tr}$ and ${\cal D}_{ts}$ are called **partitions** of the original set ${\cal D}$. In our example, $x_i$ and $y_i$ correspond to the size and the price of the property, respectively.
Basically, a linear regression algorithm takes a training set as input and aims to estimate a linear function (a straight line), which we call the **hypothesis function**, given by:
\begin{equation}
h_\textbf{w}(x) = w_0+w_1x,
\tag{1}
\end{equation}
where $\textbf{w}=[w_0\ w_1]$ corresponds to the model parameters, with $w_0,w_1\in\Re$. Depending on the values taken by $\textbf{w}$, the hypothesis function can exhibit different behaviors, as illustrated in Figure 2.
```python
fig = pyplot.figure(figsize=(15,7))
x = numpy.arange(-10, 10, 0.5)
pyplot.subplot(2, 2, 1)
y = 1.5 + 0*x #h_w(x) = 1.5 + w_1*0
pyplot.plot(x, y, color = "red")
pyplot.title("$h_w(x) = 1.5$ $(w_0 = 1$ e $w_1 = 1)$")
pyplot.subplot(2, 2, 2)
y = 0 + 0.5*x #h_w(x) = 0 + 0.5*x
pyplot.plot(x, y, color = "red")
pyplot.title("$h_w(x) = 0.5x$ $(w_0 = 0$ e $w_1 = 0.5)$")
pyplot.subplot(2, 2, 3)
y = 1 + 0.5*x #h_w(x) = 1 + 0.5*x
pyplot.plot(x, y, color = "red")
pyplot.title("$h_w(x) = 1 + 0.5x$ $(w_0 = 1$ e $w_1 = 0.5)$")
pyplot.subplot(2, 2, 4)
y = 0 - 0.5*x #h_w(x) = 0 - 0.5*x
pyplot.plot(x, y, color = "red")
pyplot.title("$h_w(x) = -0.5x$ $(w_0 = 0$ e $w_1 = -0.5)$")
fig.tight_layout(pad=2)
fig.suptitle("Figura 2: Exemplos de diferentes funções hipótese.", y=0.01)
pyplot.show()
```
In general, the goal of linear regression is to find values for $\textbf{w}=[w_0\ w_1]$ such that $h_\textbf{w}(x_i)$ is as close as possible to $y_i$ over the training set ${\cal D}_{tr}$, $\forall i\in\{1,2,\ldots,m^\prime\}$, where $m^\prime=\left|{\cal D}_{tr}\right|$. In other words, the goal is to solve the following minimization problem:
\begin{equation}
\label{e.mse}
\underset{\textbf{w}}{\operatorname{argmin}}\frac{1}{2m^\prime}\sum_{i=1}^{m^\prime}(h_\textbf{w}(x_i)-y_i)^2.
\tag{3}
\end{equation}
This equation is also known as the **mean squared error** (MSE). Another very common name for it is the **cost function**. Note that $h_\textbf{w}(x_i)$ represents the **estimated price** of the property given by the linear regression technique, whereas $y_i$ denotes its **actual value** given by the training set.
We can simplify Equation \ref{e.mse} and rewrite it as follows:
\begin{equation}
\label{e.mse_simplified}
\underset{\textbf{w}}{\operatorname{argmin}}J(\textbf{w}),
\tag{4}
\end{equation}
where $J(\textbf{w})=\frac{1}{2m^\prime}\sum_{i=1}^{m^\prime}(h_\textbf{w}(x_i)-y_i)^2$. From this premise, let us simplify the notation a bit further and assume that our hypothesis function crosses the origin of the Cartesian plane:
\begin{equation}
\label{e.hypothesis_origin}
h_w(\textbf{x}) = w_1x,
\tag{5}
\end{equation}
that is, $w_0=0$. In this case, our optimization problem reduces to finding the value of $w_1$ that minimizes the following equation:
\begin{equation}
\label{e.mse_simplified_origin}
\underset{w_1}{\operatorname{argmin}}J(w_1).
\tag{6}
\end{equation}
As an example, consider the following training set ${\cal D}_{tr}=\{(1,1),(2,2),(3,3)\}$, illustrated in Figure 3a. As can be seen, the hypothesis function that models this training set is given by $h_\textbf{w}(x)=x$, that is, $\textbf{w}=[0\ 1]$, as shown in Figure 3b.
```python
fig = pyplot.figure(figsize=(13,7))
x = numpy.arange(1, 4, 1)
pyplot.subplot(2, 2, 1)
y = x #h_w(x) = x
pyplot.scatter(x,y)
pyplot.title("(a)")
pyplot.subplot(2, 2, 2)
pyplot.scatter(x,y)
pyplot.plot(x, x, color = "red")
pyplot.title("(b)")
fig.suptitle("Figura 3: (a) conjunto de treinamento ($m^\prime=3$) e (b) função hipótese que intercepta os dados.", y=0.47)
pyplot.show()
```
In practice, the idea is to try different values of $w_1$ and compute $J(w_1)$. The one that minimizes the cost function is the value of $w_1$ to be used in the model (hypothesis function). Suppose we take $w_1 = 1$ as the initial value, i.e., the correct "guess". In this case, we have:
\begin{equation}
\begin{split}
J(w_1) & =\frac{1}{2m^\prime}\sum_{i=1}^{m^\prime}(h_\textbf{w}(x_i)-y_i)^2 \\
& = \frac{1}{2m^\prime}\sum_{i=1}^{m^\prime}(w_1x_i-y_i)^2 \\
& = \frac{1}{2\times3}\left[(1-1)^2+(2-2)^2+(3-3)^2\right] \\
& = \frac{1}{6}\times 0 = 0.
\end{split}
\tag{7}
\end{equation}
In this case, $J(w_1) = 0$ for $w_1 = 1$, i.e., the cost is the lowest possible, since we found the **exact** hypothesis function that intercepts the data. Now suppose we had chosen $w_1 = 0.5$. In this case, the cost function would be computed as follows:
\begin{equation}
\begin{split}
J(w_1) & =\frac{1}{2m^\prime}\sum_{i=1}^{m^\prime}(h_\textbf{w}(x_i)-y_i)^2 \\
& = \frac{1}{2m^\prime}\sum_{i=1}^{m^\prime}(w_1x_i-y_i)^2 \\
& = \frac{1}{2\times3}\left[(0.5-1)^2+(1-2)^2+(1.5-3)^2\right] \\
& = \frac{1}{6}\times (0.25+1+2.25) \approx 0.58.
\end{split}
\tag{8}
\end{equation}
In this case, our error was slightly larger. If we keep computing $J(w_1)$ for different values of $w_1$, we obtain the plot shown in Figure 4.
```python
def J(w_1, x, y):
error = numpy.zeros(len(w_1))
for i in range(len(w_1)):
error[i] = 0
for j in range(3):
error[i] = error[i] + numpy.power(w_1[i]*x[j]-y[j], 2)
return error
w_1 = numpy.arange(-7,10,1) # create a vector of candidate values
error = J(w_1, x, y)
pyplot.plot(w_1, error, color = "red")
pyplot.xlabel("$w_1$")
pyplot.ylabel("$J(w_1)$")
pyplot.title("Figura 4: Comportamento da função de custo para diferentes valores de $w_1$.", y=-0.27)
pyplot.show()
```
Therefore, $w_1=1$ is the value that minimizes $J(w_1)$ for the example above. Returning to the cost function given by Equation \ref{e.mse}, the question now is how to find plausible values for the parameter vector $\textbf{w}=[w_0\ w_1]$. A simple approach would be to try combinations of random values for $w_0$ and $w_1$ and keep the one that minimizes $J(\textbf{w})$. However, this heuristic does not guarantee good results, especially in more complex situations.
A very common approach to this optimization problem is to use the technique known as **Gradient Descent** (GD), which consists of the following general steps:
1. Choose random values for $w_0$ and $w_1$.
2. Iteratively modify the values of $w_0$ and $w_1$ so as to minimize $J(\textbf{w})$.
The big question now is which heuristic to use to update the values of the vector $\textbf{w}$. The GD technique uses the **partial derivatives** of the cost function to guide the optimization process toward the minimum of the function, by means of the following weight update rule:
\begin{equation}
\label{e.update_rule_GD}
w^{t+1}_j = w^{t}_j - \alpha\frac{\partial J(\textbf{w})}{\partial w_j},\ j\in\{0,1\},
\tag{9}
\end{equation}
where $\alpha$ corresponds to the so-called **learning rate**. A minimal implementation sketch of this update rule is given at the end of this notebook.
A very common question concerns the derivative term, i.e., how to compute it. For the sake of explanation, suppose we only have the parameter $w_1$ to optimize, i.e., our hypothesis function is given by Equation \ref{e.hypothesis_origin}. In this case, the goal is to minimize $J(w_1)$ for some value of $w_1$. In practice, what does the derivative at a given point mean? Figure \ref{f.derivada} illustrates this situation.
```python
# Code based on https://stackoverflow.com/questions/54961306/how-to-plot-the-slope-tangent-line-of-parabola-at-any-point
# Define the parabola
def f(x):
return x**2
# Define the derivative of the parabola
def slope(x):
return 2*x
# Define the data range for x
x = numpy.linspace(-5,5,100)
# Choose the points at which to draw the tangent lines
x1 = -3
y1 = f(x1)
x2 = 3
y2 = f(x2)
x3 = 0
y3 = f(x3)
# Define the x range over which to plot the tangent line
xrange1 = numpy.linspace(x1-1, x1+1, 10)
xrange2 = numpy.linspace(x2-1, x2+1, 10)
xrange3 = numpy.linspace(x3-1, x3+1, 10)
# Define the tangent line
# y = m*(x - x1) + y1
def tangent_line(x, x1, y1):
return slope(x1)*(x - x1) + y1
# Draw the figures
fig = pyplot.figure(figsize=(13,9))
pyplot.subplot2grid((2,4),(0,0), colspan = 2)
pyplot.title("Decaimento < 0.")
pyplot.plot(x, f(x))
pyplot.scatter(x1, y1, color='C1', s=50)
pyplot.plot(xrange1, tangent_line(xrange1, x1, y1), 'C1--', linewidth = 2)
pyplot.subplot2grid((2,4),(0,2), colspan = 2)
pyplot.title("Decaimento > 0.")
pyplot.plot(x, f(x))
pyplot.scatter(x2, y2, color='C1', s=50)
pyplot.plot(xrange2, tangent_line(xrange2, x2, y2), 'C1--', linewidth = 2)
pyplot.subplot2grid((2,4),(1,1), colspan = 2)
pyplot.title("Decaimento = 0.")
pyplot.plot(x, f(x))
pyplot.scatter(x3, y3, color='C1', s=50)
pyplot.plot(xrange3, tangent_line(xrange3, x3, y3), 'C1--', linewidth = 2)
```
| dc9d31a82d1bdb8e49ee6418dc7ecc6fe555554b | 173,213 | ipynb | Jupyter Notebook | machine_learning/aula_1/aula_1.ipynb | jppbsi/lectures | f0366901f71eb86489547a471cc959272d6abdc3 | [
"Apache-2.0"
]
| 7 | 2020-03-12T11:50:21.000Z | 2021-04-16T20:08:36.000Z | machine_learning/aula_1/aula_1.ipynb | jppbsi/lectures | f0366901f71eb86489547a471cc959272d6abdc3 | [
"Apache-2.0"
]
| null | null | null | machine_learning/aula_1/aula_1.ipynb | jppbsi/lectures | f0366901f71eb86489547a471cc959272d6abdc3 | [
"Apache-2.0"
]
| 3 | 2020-07-02T17:59:47.000Z | 2021-04-17T00:02:16.000Z | 419.401937 | 45,732 | 0.928978 | true | 4,084 | Qwen/Qwen-72B | 1. YES
2. YES | 0.92079 | 0.841826 | 0.775144 | __label__por_Latn | 0.98723 | 0.639252 |
# Geometric Series for Elementary Economics
```python
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import sympy as sym
from sympy import init_printing
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
```
```python
# True present value of a finite lease
def finite_lease_pv_true(T, g, r, x_0):
G = (1 + g)
R = (1 + r)
return (x_0 * (1 - G**(T + 1) * R**(-T - 1))) / (1 - G * R**(-1))
# First approximation for our finite lease
def finite_lease_pv_approx_1(T, g, r, x_0):
p = x_0 * (T + 1) + x_0 * r * g * (T + 1) / (r - g)
return p
# Second approximation for our finite lease
def finite_lease_pv_approx_2(T, g, r, x_0):
return (x_0 * (T + 1))
# Infinite lease
def infinite_lease(g, r, x_0):
G = (1 + g)
R = (1 + r)
return x_0 / (1 - G * R**(-1))
```
```python
def plot_function(axes, x_vals, func, args):
axes.plot(x_vals, func(*args), label=func.__name__)
T_max = 50
T = np.arange(0, T_max+1)
g = 0.02
r = 0.03
x_0 = 1
our_args = (T, g, r, x_0)
funcs = [finite_lease_pv_true,
finite_lease_pv_approx_1,
finite_lease_pv_approx_2]
## the three functions we want to compare
fig, ax = plt.subplots()
ax.set_title('Finite Lease Present Value $T$ Periods Ahead')
for f in funcs:
plot_function(ax, T, f, our_args)
ax.legend()
ax.set_xlabel('$T$ Periods Ahead')
ax.set_ylabel('Present Value, $p_0$')
plt.show()
```
```python
# Convergence of infinite and finite
T_max = 1000
T = np.arange(0, T_max+1)
fig, ax = plt.subplots()
ax.set_title('Infinite and Finite Lease Present Value $T$ Periods Ahead')
f_1 = finite_lease_pv_true(T, g, r, x_0)
f_2 = np.ones(T_max+1)*infinite_lease(g, r, x_0)
ax.plot(T, f_1, label='T-period lease PV')
ax.plot(T, f_2, '--', label='Infinite lease PV')
ax.set_xlabel('$T$ Periods Ahead')
ax.set_ylabel('Present Value, $p_0$')
ax.legend()
plt.show()
```
```python
# First view
# Changing r and g
fig, ax = plt.subplots()
ax.set_title('Value of lease of length $T$')
ax.set_ylabel('Present Value, $p_0$')
ax.set_xlabel('$T$ periods ahead')
T_max = 10
T=np.arange(0, T_max+1)
rs, gs = (0.9, 0.5, 0.4001, 0.4), (0.4, 0.4, 0.4, 0.5),
comparisons = ('$\gg$', '$>$', r'$\approx$', '$<$')
for r, g, comp in zip(rs, gs, comparisons):
ax.plot(finite_lease_pv_true(T, g, r, x_0), label=f'r(={r}) {comp} g(={g})')
ax.legend()
plt.show()
```
```python
# Second view
fig = plt.figure()
T = 3
ax = fig.gca(projection='3d')
r = np.arange(0.01, 0.99, 0.005)
g = np.arange(0.011, 0.991, 0.005)
rr, gg = np.meshgrid(r, g)
z = finite_lease_pv_true(T, gg, rr, x_0)
# Removes points where undefined
same = (rr == gg)
z[same] = np.nan
surf = ax.plot_surface(rr, gg, z, cmap=cm.coolwarm,
antialiased=True, clim=(0, 15))
fig.colorbar(surf, shrink=0.5, aspect=5)
ax.set_xlabel('$r$')
ax.set_ylabel('$g$')
ax.set_zlabel('Present Value, $p_0$')
ax.view_init(20, 10)
ax.set_title('Three Period Lease PV with Varying $g$ and $r$')
plt.show()
```
```python
# Creates algebraic symbols that can be used in an algebraic expression
g, r, x0 = sym.symbols('g, r, x0')
G = (1 + g)
R = (1 + r)
p0 = x0 / (1 - G * R**(-1))
init_printing()
print('Our formula is:')
p0
```
```python
print('dp0 / dg is:')
dp_dg = sym.diff(p0, g)
dp_dg
```
```python
print('dp0 / dr is:')
dp_dr = sym.diff(p0, r)
dp_dr
```
```python
# Function that calculates a path of y
def calculate_y(i, b, g, T, y_init):
y = np.zeros(T+1)
y[0] = i + b * y_init + g
for t in range(1, T+1):
y[t] = b * y[t-1] + i + g
return y
# Initial values
i_0 = 0.3
g_0 = 0.3
# 2/3 of income goes towards consumption
b = 2/3
y_init = 0
T = 100
fig, ax = plt.subplots()
ax.set_title('Path of Aggregate Output Over Time')
ax.set_xlabel('$t$')
ax.set_ylabel('$y_t$')
ax.plot(np.arange(0, T+1), calculate_y(i_0, b, g_0, T, y_init))
# Output predicted by geometric series
ax.hlines(i_0 / (1 - b) + g_0 / (1 - b), xmin=-1, xmax=101, linestyles='--')
plt.show()
```
```python
bs = (1/3, 2/3, 5/6, 0.9)
fig,ax = plt.subplots()
ax.set_title('Changing Consumption as a Fraction of Income')
ax.set_ylabel('$y_t$')
ax.set_xlabel('$t$')
x = np.arange(0, T+1)
for b in bs:
y = calculate_y(i_0, b, g_0, T, y_init)
ax.plot(x, y, label=r'$b=$'+f"{b:.2f}")
ax.legend()
plt.show()
```
```python
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(6, 10))
fig.subplots_adjust(hspace=0.3)
x = np.arange(0, T+1)
values = [0.3, 0.4]
for i in values:
y = calculate_y(i, b, g_0, T, y_init)
ax1.plot(x, y, label=f"i={i}")
for g in values:
y = calculate_y(i_0, b, g, T, y_init)
ax2.plot(x, y, label=f"g={g}")
axes = ax1, ax2
param_labels = "Investment", "Government Spending"
for ax, param in zip(axes, param_labels):
ax.set_title(f'An Increase in {param} on Output')
ax.legend(loc ="lower right")
ax.set_ylabel('$y_t$')
ax.set_xlabel('$t$')
plt.show()
```
```python
```
| 8d773610c0fbd775c54ea8fdcffadafc76e9c900 | 217,839 | ipynb | Jupyter Notebook | Geometric Series for Elementary Economics.ipynb | DiogoRibeiro7/Finance | 6babc706bd523fc83e1dd1fda7f57aef969c5347 | [
"Apache-2.0"
]
| null | null | null | Geometric Series for Elementary Economics.ipynb | DiogoRibeiro7/Finance | 6babc706bd523fc83e1dd1fda7f57aef969c5347 | [
"Apache-2.0"
]
| null | null | null | Geometric Series for Elementary Economics.ipynb | DiogoRibeiro7/Finance | 6babc706bd523fc83e1dd1fda7f57aef969c5347 | [
"Apache-2.0"
]
| null | null | null | 422.98835 | 60,680 | 0.937348 | true | 1,779 | Qwen/Qwen-72B | 1. YES
2. YES | 0.928409 | 0.855851 | 0.79458 | __label__eng_Latn | 0.571601 | 0.684408 |
# Fixed-Point Method
J.J
---
Since we want to solve $f(x) = 0$, the function $f$ can be rewritten as $f(x) = g(x) - x = 0$. The fixed-point method is then given by the iterations:
\begin{equation}
x_{k+1} = g(x_{k}).
\end{equation}
The method converges to a root if the values of $x_{k}$ at each iteration satisfy $|g'(x)| \leq 1$.
Example: $f(x) = 2x^{3} - 9x^{2} + 7x + 6 = 0$
\begin{equation}
x = \frac{1}{7} \{ -2x^{3} + 9x^{2} - 6 \}
\end{equation}
```python
import numpy as np
from sympy import *
from sympy.utilities.lambdify import lambdify
import matplotlib.pyplot as plt
init_printing(use_unicode=True)
```
```python
# compute the derivative of g
x = symbols('x')
funcion = 2*x**3 - 9*x**2 + 7*x + 6
gfuncion = (-2*x**3 + 9*x**2 - 6)/7 # write the function g(x) here
dgfuncion = diff(gfuncion, x)
print(str(dgfuncion))
```
-6*x**2/7 + 18*x/7
```python
f = lambdify(x, funcion)
g = lambdify(x, gfuncion)
dg = lambdify(x, dgfuncion)
```
```python
X = np.linspace(-1, 4, 100)
plt.plot(X,dg(X), label = 'g´(x)')
plt.ylim(-1,1)
plt.legend()
plt.show()
```
The inequality holds approximately on the intervals $[-0.44, 0.46]$ and $[2.5, 3.34]$, although in practice the iteration does not converge on the first interval.
```python
e = 0.0001 # error tolerance
maxit = 100 # maximum number of iterations
```
```python
def PuntoFijo(x0, func = g, error = e, iterations = maxit):
    it = 0
    while (abs(f(x0)) > error) and (it < iterations):
        it += 1
        xk = func(x0)
        x0 = xk
    return x0
```
```python
sol = PuntoFijo(2.6)
print(sol)
```
2.9999926857948673
```python
plt.plot(X, f(X), label='f(x)')
plt.plot(sol,f(sol),'ro')
plt.legend()
plt.show()
```
```python
```
| a2b8fb80b8dd915054442a98f81be0155fd6a9c8 | 31,673 | ipynb | Jupyter Notebook | Ec. No lineales/Punto_Fijo.ipynb | JosueJuarez/M-todos-Num-ricos | 8e328ef0f70519be57163b556db1fd27c3b04560 | [
"MIT"
]
| null | null | null | Ec. No lineales/Punto_Fijo.ipynb | JosueJuarez/M-todos-Num-ricos | 8e328ef0f70519be57163b556db1fd27c3b04560 | [
"MIT"
]
| null | null | null | Ec. No lineales/Punto_Fijo.ipynb | JosueJuarez/M-todos-Num-ricos | 8e328ef0f70519be57163b556db1fd27c3b04560 | [
"MIT"
]
| null | null | null | 143.968182 | 13,560 | 0.89641 | true | 645 | Qwen/Qwen-72B | 1. YES
2. YES | 0.91118 | 0.912436 | 0.831393 | __label__spa_Latn | 0.546493 | 0.769938 |
# Linear Regression Derived
#### Cost function
$ C = \sum_i (y_i - mx_i - c)^2 $
#### Calculate the partial derivative with respect to $m$
$
\begin{align}
\frac{\partial C}{\partial m} &= \sum_i 2(y_i - m x_i -c)(-x_i) \\
&= -2 \sum_i x_i (y_i - m x_i -c) \\
\end{align}
$
#### Set derivative to zero
$
\begin{align}
& \frac{\partial C}{\partial m} = 0 \\
\Rightarrow & -2 \sum_i x_i (y_i - m x_i -c) = 0 \\
\Rightarrow & \sum_i ( x_i y_i - m x_i x_i - x_i c ) = 0 \\
\Rightarrow & \sum_i x_i y_i - \sum_i m x_i x_i - \sum_i x_i c = 0 \\
\Rightarrow & \sum_i x_i y_i - c \sum_i x_i = m \sum_i x_i x_i \\
\Rightarrow & m = \frac{\sum_i x_i y_i - c \sum_i x_i}{\sum_i x_i x_i}
\end{align}
$
#### Calculate the partial derivative with respect to $c$
$
\begin{align}
\frac{\partial C}{\partial c} &= \sum_i 2(y_i - m x_i -c)(-1) \\
&= -2 \sum_i (y_i - m x_i -c) \\
\end{align}
$
#### Set the derivative to zero
$
\begin{align}
& \frac{\partial C}{\partial c} = 0 \\
\Rightarrow & -2 \sum_i (y_i - m x_i - c) = 0 \\
\Rightarrow & \sum_i (y_i - m x_i - c) = 0 \\
\Rightarrow & \sum_i y_i - \sum_i m x_i - \sum_i c = 0 \\
\Rightarrow & \sum_i y_i - m \sum_i x_i - c \sum_i 1 = 0 \\
\Rightarrow & \sum_i y_i - m \sum_i x_i = c \sum_i 1 \\
\Rightarrow & c = \frac{\sum_i y_i - m \sum_i x_i}{\sum_i 1} \\
\Rightarrow & c = \frac{\sum_i y_i}{\sum_i 1} - m \frac{\sum_i x_i}{\sum_i 1} \\
\Rightarrow & c = \bar{y} - m \bar{x} \\
\end{align}
$
#### Combine the estimates
$
\begin{align}
& m = \frac{\sum_i x_i y_i - c \sum_i x_i}{\sum_i x_i x_i} \\
& c = \bar{y} - m \bar{x} \\
& \Rightarrow m = \frac{\sum_i x_i y_i - (\bar{y} - m \bar{x}) \sum_i x_i}{\sum_i x_i x_i} \\
& \Rightarrow m = \frac{\sum_i x_i y_i - \bar{y} \sum_i x_i + m \bar{x} \sum_i x_i}{\sum_i x_i x_i} \\
& \Rightarrow m = \frac{\sum_i x_i y_i - \bar{y} \sum_i x_i}{\sum_i x_i x_i} + m \frac{\bar{x} \sum_i x_i}{\sum_i x_i x_i} \\
& \Rightarrow m - m \frac{\bar{x} \sum_i x_i}{\sum_i x_i x_i} = \frac{\sum_i x_i y_i - \bar{y} \sum_i x_i}{\sum_i x_i x_i} \\
& \Rightarrow m(1 - \frac{\bar{x} \sum_i x_i}{\sum_i x_i x_i}) = \frac{\sum_i x_i y_i - \bar{y} \sum_i x_i}{\sum_i x_i x_i} \\
& \Rightarrow m(\frac{\sum_i x_i x_i - \bar{x} \sum_i x_i}{\sum_i x_i x_i}) = \frac{\sum_i x_i y_i - \bar{y} \sum_i x_i}{\sum_i x_i x_i} \\
& \Rightarrow m(\sum_i x_i x_i - \bar{x} \sum_i x_i) = \sum_i x_i y_i - \bar{y} \sum_i x_i \\
& \Rightarrow m = \frac{\sum_i x_i y_i - \bar{y} \sum_i x_i}{\sum_i x_i x_i - \bar{x} \sum_i x_i} \\
\end{align}
$
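As a quick numerical sanity check (not part of the derivation itself), the sketch below evaluates these closed-form estimates on a small synthetic dataset; the data values are arbitrary assumptions chosen only for illustration.
```python
# Evaluate the derived closed-form estimates for m and c on synthetic data.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + 1.0 + np.array([0.1, -0.2, 0.05, 0.0, -0.1])  # roughly y = 2x + 1

# m = (sum(x*y) - mean(y)*sum(x)) / (sum(x*x) - mean(x)*sum(x))
m = (np.sum(x * y) - y.mean() * np.sum(x)) / (np.sum(x * x) - x.mean() * np.sum(x))
# c = mean(y) - m*mean(x)
c = y.mean() - m * x.mean()

print(m, c)  # expected to be close to 2 and 1
```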
#### End
| 744a3fb2ee56906abcfeef89b2dd4e4f7cfc5d2e | 4,706 | ipynb | Jupyter Notebook | linear-regression-derived.ipynb | ianmcloughlin/jupyter-teaching-notebooks | 46fed19115bf2f7e63c7bcb3d78bee92295c33b6 | [
"Unlicense"
]
| 10 | 2018-10-23T15:30:48.000Z | 2021-10-31T20:30:47.000Z | linear-regression-derived.ipynb | ianmcloughlin/jupyter-teaching-notebooks | 46fed19115bf2f7e63c7bcb3d78bee92295c33b6 | [
"Unlicense"
]
| null | null | null | linear-regression-derived.ipynb | ianmcloughlin/jupyter-teaching-notebooks | 46fed19115bf2f7e63c7bcb3d78bee92295c33b6 | [
"Unlicense"
]
| 34 | 2018-10-30T00:08:01.000Z | 2021-01-08T23:33:52.000Z | 28.011905 | 161 | 0.469188 | true | 1,043 | Qwen/Qwen-72B | 1. YES
2. YES | 0.962673 | 0.914901 | 0.880751 | __label__yue_Hant | 0.139449 | 0.884612 |
# Job Shop Scheduling Sample
## Introduction
Job shop scheduling is a common and important problem in many industries. For example, in the automobile industry manufacturing a car involves many different types of operations which are performed by a number of specialized machines - optimizing the production line to minimize manufacturing time can make for significant cost savings.
The job shop scheduling problem is defined as follows: you have a set of jobs ($J_0, J_1, J_2, \dots, J_{a-1} \text{, where } a \text{ is the total number of jobs}$), which have various processing times and need to be processed using a set of machines ($m_0, m_1, m_2, \dots, m_{b-1}\text{, where } b \text{ is the total number of machines}$). The goal is to complete all jobs in the shortest time possible. This is called minimizing the **makespan**.
Each job consists of a set of operations, and the operations must be performed in the correct order to complete that job.
In this sample, we'll introduce the necessary concepts and tools for describing this problem in terms of a penalty model, and then solve an example problem using the Azure Quantum Optimization service.
Imagine, for example, that you have a to-do list. Each item on the list is a **job** using this new terminology.
Each job in this list consists of a set of operations, and each operation has a processing time. You also have some tools at hand that you can use to complete these jobs (the **machines**).
TODOs:
- Pay electricity bill
1. Log in to site (*2 minutes*) - **computer**
2. Pay bill (*1 minute*) - **computer**
3. Print receipt (*3 minutes*) - **printer**
- Plan camping trip
1. Pick campsite (*2 minutes*) - **computer**
2. Pay online (*2 minutes*) - **computer**
3. Print receipt (*3 minutes*) - **printer**
- Book dentist appointment
1. Choose time (*1 minute*) - **computer**
2. Pay online (*2 minutes*) - **computer**
3. Print receipt (*3 minutes*) - **printer**
4. Guiltily floss your teeth (*2 minutes*) - **tooth floss**
But there are some constraints:
1. Each of the tasks (**operations**) in a todo (**job**) must take place in order. You can't print the receipt before you have made the payment! This is called a **precedence constraint**.
2. You start an operation only once, and once started it must be completed before you do anything else. There's no time for procrastination! This is called the **operation-once constraint**.
3. Each tool (**machine**) can only do one thing at a time. You can't simultaneously print two receipts unless you invest in multiple printers. This is the **no-overlap constraint**.
## Cost functions
The rest of this sample will be spent constructing what is known as a **cost function**, which is used to represent the problem. This cost function is what will be submitted to the Azure Quantum Optimization solver.
> **NOTE**:
> If you have completed the Microsoft Quantum Learn Module [Solve optimization problems by using quantum-inspired optimization](https://docs.microsoft.com/learn/modules/solve-quantum-inspired-optimization-problems/), this concept should already be familiar. A simplified version of this job shop sample is also available [here](https://docs.microsoft.com/learn/modules/solve-job-shop-optimization-azure-quantum/) on MS Learn.
Each point on a cost function represents a different solution configuration - in this case, each configuration is a particular assignment of starting times for the operations you are looking to schedule. The goal of the optimization is to minimize the cost of the solution - in this instance the aim is to minimize the amount of time taken to complete all operations.
Before you can submit the problem to the Azure Quantum solvers, you'll need to transform it to a representation that the solvers can work with. This is done by creating an array of `Term` objects, representing the problem constraints. Positive terms penalize certain solution configurations, while negative ones support them. By adding penalties to terms that break the constraints, you increase the relative cost of those configurations and reduce the likelihood that the optimizer will settle for these suboptimal solutions.
The idea is to make these invalid solutions so expensive that the solver can easily locate valid, low-cost solutions by navigating to low points (minima) in the cost function. However, you must also ensure that these solutions are not so expensive as to create peaks in the cost function that are so high that the solver can't travel over them to discover better optima on the other side.
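As a brief illustration of what one of these terms looks like in code, the snippet below builds a single weighted monomial with the `Term` class (imported again in the setup cell that follows); the weight and variable indices here are placeholders rather than part of the scheduling formulation, which is developed later.
```python
from azure.quantum.optimization import Term

# Represents the monomial 2 * x_0 * x_1: it adds a penalty of 2 whenever
# both binary variables x_0 and x_1 are set to 1 in a candidate solution.
example_term = Term(c=2, indices=[0, 1])
```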
## Azure Quantum setup
The Azure Quantum Optimization service is exposed via a Python SDK, which you will be making use of during the rest of this sample. This means that before you get started with formulating the problem, you first need to import some Python modules and set up an Azure Quantum `Workspace`.
You will need to enter your Azure Quantum workspace details in the cell below before you run it:
```
from typing import List
from azure.quantum.optimization import Term
from azure.quantum import Workspace
workspace = Workspace (
subscription_id = "", # Add your subscription_id
resource_group = "", # Add your resource_group
name = "", # Add your workspace name
location = "" # Add your workspace location (for example, "westus")
)
workspace.login()
```
<msrest.authentication.BasicTokenAuthentication at 0x27ca6170f70>
## Problem formulation
Now that you have set up your development environment, you can start to formulate the problem.
The first step is to take the constraints identified above and formulate them as mathematical equations that you can work with.
Let's first introduce some notation because we are lazy and also want to avoid carpal tunnel syndrome.
Let's stick with the previous example of the todo list:
- $J_{0}$: Pay electricity bill
- $O_{0}$: Log in to site (*2 minutes*) - **computer**
- $O_{1}$: Pay bill (*1 minute*) - **computer**
- $O_{2}$: Print receipt (*3 minutes*) - **printer**
- $J_{1}$: Plan camping trip
- $O_{3}$: Pick campsite (*2 minutes*) - **computer**
- $O_{4}$: Pay online (*2 minutes*) - **computer**
- $O_{5}$: Print receipt (*3 minutes*) - **printer**
- $J_{2}$: Book dentist appointment
- $O_{6}$: Choose time (*1 minute*) - **computer**
- $O_{7}$: Pay online (*2 minutes*) - **computer**
- $O_{8}$: Print receipt (*3 minutes*) - **printer**
- $O_{9}$: Guiltily floss your teeth (*2 minutes*) - **tooth floss**
Above, you can see that the jobs have been labeled as $J$ and assigned index numbers $0$, $1$ and $2$, to represent each of the three jobs on your todo list. The operations that make up each job have also been defined, and are represented by the letter $O$.
To make it easier to code up later, all operations are identified with a continuous index number rather than, for example, starting from $0$ for each job. This allows you to keep track of operations by their ID numbers in the code and schedule them according to the constraints and machine availability. You can tie the operations back to their jobs later on using a reference.
Below, you see how these definitions combine to give us a mathematical formulation for the jobs:
$$
\begin{align}
J_{0} &= \{O_{0}, O_{1}, O_{2}\} \\
J_{1} &= \{O_{3}, O_{4}, O_{5}\} \\
J_{2} &= \{O_{6}, O_{7}, O_{8}, O_{9}\} \\
\end{align}
$$
**More generally:**
$$
\begin{align}
J_{0} &= \{O_{0}, O_{1}, \ldots , O_{k_{0}-1}\} \text{, where } k_{0} = n_{0} \text{, the number of operations in job } J_{0}\\
\\
J_{1} &= \{O_{k_{0}}, O_{k_{0}+1}, \ldots , O_{k_{1}-1}\} \text{, where } k_{1} = n_{0} + n_{1} \text{, the number of operations in jobs } J_{0} \text{ and } J_{1} \text{ combined}\\
\\
&\vdots \\
\\
J_{n-1} &= \{O_{k_{n-2}}, O_{k_{n-2}+1}, \ldots , O_{k_{n-1}-1}\} \text{, where } k_{n-1} = \text{ the total number of operations across all jobs }\\
\end{align}
$$
The next piece of notation you will need is a binary variable, which will be called $x_{i, t}$.
You will use this variable to represent whether an operation starts at time $t$ or not:
$$
\begin{align}
\text{If } x_{i,t} &= 1, \text{ } O_i\text{ starts at time } \textit{t} \\
\text{If } x_{i,t} &= 0, \text{ } O_i\text{ does not start at time } \textit{t} \\
\end{align}
$$
Because $x_{i, t}$ can take the value of either $0$ or $1$, this is known as a binary optimization problem. More generally, this is called a polynomial unconstrained binary optimization (or PUBO) problem. You may also see PUBO problems referred to as Higher Order Binary Optimization (HOBO) problems - these terms both refer to the same thing.
$t$ is used to represent the time. It goes from time $0$ to $T - 1$ in integer steps. $T$ is the latest time an operation can be scheduled:
$$0 \leq t < T$$
Lastly, $p_{i}$ is defined to be the processing time for operation $i$ - the amount of time it takes for operation $i$ ($O_{i}$) to complete:
$$\text{If } O_{i} \text{ starts at time } \textit{t} \text{, it will finish at time } t + p_{i}$$
$$\text{If } O_{i+1} \text{ starts at time } \textit{s} \text{, it will finish at time } s + p_{i+1}$$
Now that the terms have been defined, you can move on to formulating the problem.
The first step is to represent the constraints mathematically. This will be done using a penalty model - every time the optimizer explores a solution that violates one or more constraints, you need to give that solution a penalty:
| Constraint | Penalty condition |
|---|---|
|**Precedence constraint**<br>Operations in a job must take place in order.|Assign penalty every time $O_{i+1}$ starts before $O_{i}$ has finished (they start out of order).|
|**Operation-once constraint**<br>Each operation is started once and only once.|Assign penalty if an operation isn't scheduled within the allowed time.<br>**Assumption:** if an operation starts, it runs to completion.|
|**No-overlap constraint**<br>Machines can only do one thing at a time.|Assign penalty every time two operations on a single machine are scheduled to run at the same time.|
You will also need to define an objective function, which will minimize the time taken to complete all operations (the **makespan**).
## Expressing a cost function using the Azure Quantum Optimization SDK
As you will see during the exploration of the cost function and its constituent penalty terms below, the overall cost function is quadratic (because the highest order polynomial term you have is squared). This makes this problem a **Quadratic Unconstrained Binary Optimization (QUBO)** problem, which is a specific subset of **Polynomial Unconstrained Binary Optimization (PUBO)** problems (which allow for higher-order polynomial terms than quadratic). Fortunately, the Azure Quantum Optimization service is set up to accept PUBO (and Ising) problems, which means you don't need to modify the above representation to fit the solver.
As introduced above, the binary variables over which you are optimizing are the operation starting times $x_{i,t}$. Instead of using two separate indices as in the mathematical formulation, you will need to define a singly-indexed binary variable $x_{i \cdot T + t}$. Given time steps $t \in [0, T-1]$, every operation $i$ contributes $T$ indices. The operation starts at the value of $t$ for which $x_{i \cdot T + t}$ equals 1.
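To make the index arithmetic concrete, here is a minimal sketch (not part of the sample code; the helper names are illustrative) of the mapping between the pair $(i, t)$ and the single variable index, using the value $T = 21$ defined later in this sample:
```python
# Minimal sketch of the (operation, time) <-> linear index mapping.
# T = 21 matches the allowed time used later in this sample.
T = 21

def to_linear_index(i: int, t: int) -> int:
    """Map operation i starting at time t to the single variable index i*T + t."""
    return i * T + t

def from_linear_index(index: int) -> tuple:
    """Recover (operation, start time) from a linear variable index."""
    return divmod(index, T)

# For example, operation 3 starting at time 5 corresponds to variable index 68:
assert to_linear_index(3, 5) == 68
assert from_linear_index(68) == (3, 5)
```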
In order to submit a problem to the Azure Quantum services, you will first be creating a `Problem` instance. This is a Python object that stores all the required information, such as the cost function details and what kind of problem we are modeling.
To represent cost functions, we'll make use of a formulation using `Term` objects. Ultimately, any polynomial cost function can be written as a simple sum of products. That is, the function can be rewritten to have the following form, where $p_k$ indicates a product over the problem variables $x_0, x_1, \dots$:
$$ H(x) = \sum_k \alpha_k \cdot p_k(x_0, x_1, \dots) $$
$$ \text{e.g. } H(x) = 5 \cdot (x_0) + 2 \cdot (x_1 \cdot x_2) - 3 \cdot ({x_3}^2) $$
In this form, every term in the sum has a coefficient $\alpha_k$ and a product $p_k$. In the `Problem` instance, each term in the sum is represented by a `Term` object, with parameters `c` - corresponding to the coefficient, and `indices` - corresponding to the product. Specifically, the `indices` parameter is populated with the indices of all variables appearing in the term. For instance, the term $2 \cdot (x_1 \cdot x_2)$ translates to the following object: `Term(c=2, indices=[1,2])`.
More generally, `Term` objects take on the following form:
```python
Term(c: float, indices: []) # Constant terms like +1
Term(c: float, indices: [int]) # Linear terms like x
Term(c: float, indices: [int, int]) # Quadratic terms like x^2 or xy
```
If there were higher order terms (cubed, for example), you would just add more elements to the indices array, like so:
```python
Term(c: float, indices: [int, int, int, ...])
```
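As an illustration (this snippet is not part of the sample itself), the example cost function above, $H(x) = 5 \cdot (x_0) + 2 \cdot (x_1 \cdot x_2) - 3 \cdot ({x_3}^2)$, could be written as the following list of `Term` objects:
```python
from azure.quantum.optimization import Term

# H(x) = 5*(x_0) + 2*(x_1*x_2) - 3*(x_3^2) as a list of Term objects
example_terms = [
    Term(c=5, indices=[0]),      # 5 * x_0
    Term(c=2, indices=[1, 2]),   # 2 * x_1 * x_2
    Term(c=-3, indices=[3, 3]),  # -3 * x_3^2 (a squared variable repeats its index)
]
```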
## Defining problem parameters in code
Now that you've defined the problem parameters mathematically, you can transform this information to code. The following two code snippets show how this is done.
First, the helper function `process_config` is defined:
```
def process_config(jobs_ops_map: dict, machines_ops_map: dict, processing_time: dict, T: int):
"""
Process & validate problem parameters (config) and generate inverse dict of operations to jobs.
Keyword arguments:
jobs_ops_map (dict): Map of jobs to operations {job: [operations]}
machines_ops_map(dict): Mapping of operations to machines, e.g.:
machines_ops_map = {
0: [0,1], # Operations 0 & 1 assigned to machine 0
1: [2,3] # Operations 2 & 3 assigned to machine 1
}
processing_time (dict): Operation processing times
T (int): Allowed time (jobs can only be scheduled below this limit)
"""
# Problem cannot take longer to complete than all operations executed sequentially
## Sum all operation processing times to calculate the maximum makespan
T = min(sum(processing_time.values()), T)
# Ensure operation assignments to machines are sorted in ascending order
for m, ops in machines_ops_map.items():
machines_ops_map[m] = sorted(ops)
ops_jobs_map = {}
for job, ops in jobs_ops_map.items():
# Fail if operation IDs within a job are out of order
assert (ops == sorted(ops)), f"Operation IDs within a job must be in ascending order. Job was: {job}: {ops}"
for op in ops:
# Fail if there are duplicate operation IDs
assert (op not in ops_jobs_map.keys()), f"Operation IDs must be unique. Duplicate ID was: {op}"
ops_jobs_map[op] = job
return ops_jobs_map, T
```
Below, you can see the code representation of the problem parameters: the maximum allowed time `T`, the operation processing times `processing_time`, the mapping of operations to jobs (`jobs_ops_map` and `ops_jobs_map`), and the assignment of operations to machines (`machines_ops_map`).
```
# Set problem parameters
## Allowed time (jobs can only be scheduled below this limit)
T = 21
## Processing time for each operation
processing_time = {0: 2, 1: 1, 2: 3, 3: 2, 4: 2, 5: 3, 6: 1, 7: 2, 8: 3, 9: 2}
## Assignment of operations to jobs (job ID: [operation IDs])
### Operation IDs within a job must be in ascending order
jobs_ops_map = {
0: [0, 1, 2], # Pay electricity bill
1: [3, 4, 5], # Plan camping trip
2: [6, 7, 8, 9] # Book dentist appointment
}
## Assignment of operations to machines
### Ten operations, three machines
machines_ops_map = {
0: [0, 1, 3, 4, 6, 7], # Operations 0, 1, 3, 4, 6 and 7 are assigned to machine 0 (the computer)
1: [2, 5, 8], # Operations 2, 5 and 8 are assigned to machine 1 (the printer)
2: [9] # Operation 9 is assigned to machine 2 (the tooth floss)
}
## Inverse mapping of jobs to operations
ops_jobs_map, T = process_config(jobs_ops_map, machines_ops_map, processing_time, T)
```
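For reference, with the parameters above `process_config` leaves `T` unchanged (the processing times also sum to 21, so `min(21, 21) = 21`) and produces the following inverse mapping of operations to jobs:
```python
print(T)             # 21
print(ops_jobs_map)  # {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1, 6: 2, 7: 2, 8: 2, 9: 2}
```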
In the next sections, you will construct mathematical representations of the penalty terms and use these to build the cost function, which will be of the format:
$$H(x) = \alpha \cdot f(x) + \beta \cdot g(x) + \gamma \cdot h(x) + \delta \cdot k(x) $$
where:
$$f(x) \text{, } g(x) \text{ and } h(x) \text{ represent the penalty functions.}$$
$$k(x) \text{ represents the objective function.}$$
$$\alpha, \beta, \gamma \text{ and } \delta \text{ represent the different weights assigned to the penalties.}$$
The weights represent how important each penalty function is, relative to all the others. Over the rest of this sample, you will learn how to build these penalty and objective functions, combine them to form the cost function $H(x)$, and solve the problem using Azure Quantum.
To do this, you will explore how to formulate each of these constraints mathematically, and how this translates to code.
## Precedence constraint
The precedence constraint is defined as follows:
| Constraint | Penalty condition |
|---|---|
|**Precedence constraint**<br>Operations in a job must take place in order.|Assign penalty every time $O_{i+1}$ starts before $O_{i}$ has finished (they start out of order).|
### Worked Example
Let's take job 1 ($J_{1}$) as an example:
- $J_{1}$: Plan camping trip
- $O_{3}$: Pick campsite (*2 minutes*)
- $O_{4}$: Pay online (*2 minutes*)
- $O_{5}$: Print receipt (*3 minutes*)
Let's formulate the penalty conditions for $O_{3}$ and $O_{4}$: you want to add a penalty if $O_{4}$ starts before $O_{3}$ finishes. First, you'll define our terms and set some of their values:
$$\text{Total simulation time } T = 4$$
$$O_{3} \text{ processing time: } p_{3} = 2$$
$$O_{3} \text{ starts at time } \textit{t} \text{, and finishes at time } t+p_{3}$$
$$O_{3} \text{ starts at any time } 0 \leq t < T $$
$$O_{4} \text{ can start at time } s \geq t + p_{3} $$
$O_{3}$’s finishing time is given by adding its processing time $p_{3}$ (which we’ve set to be 2) to its start time $t$. You can see the start and end times for $O_{3}$ in the table below:
| $t$ | $t + p_{3}$|
|---|---|
|0|2|
|1|3|
|2|4|
To avoid violating this constraint, the start time of $O_{4}$ (denoted by $s$) must be greater than or equal to the end time of $O_{3}$, like we see in the next column:
| $t$ | $t + p_{3}$|$s \geq t+p_{3}$|
|---|---|---|
|0|2|2, 3, 4|
|1|3|3, 4|
|2|4|4|
||**Valid configuration?**|✔|
The ✔ means that any $s$ value in this column is valid, as it doesn't violate the precedence constraint.
Conversely, if $s$ is less than $t + p_{3}$ (meaning $O_{4}$ starts before $O_{3}$ finishes), you need to add a penalty. Invalid $s$ values for this example are shown in the rightmost column:
| $t$ | $t + p_{3}$|$s \geq t+p_{3}$|$s < t+p_{3}$|
|---|---|---|---|
|0|2|2, 3, 4|0, 1|
|1|3|3, 4|0, 1, 2|
|2|4|4|0, 1, 2, 3|
||**Valid configuration?**|✔|✘|
In the table above, ✘ has been used to denote that any $s$ value in the last column is invalid, as it violates the precedence constraint.
### Penalty Formulation
This is formulated as a penalty by counting every time consecutive operations $O_{i}$ and $O_{i + 1}$ in a job take place out of order.
As you saw above: for an operation $O_{i}$, if the start time of $O_{i + 1}$ (denoted by $s$) is less than the start time of $O_{i}$ (denoted by $t$) plus its processing time $p_{i}$, then that counts as a penalty. Mathematically, this penalty condition looks like: $s < t + p_{i}$.
You sum that penalty over all the operations of a job ($J_{n}$) for all the jobs:
$$f(x) = \sum_{k_{n-1} \leq i < k_n, s < t + p_{i}}x_{i,t}\cdot x_{i+1,s} \text{ for each job } \textit{n}.$$
Let's break that down:
- $k_{n-1} \leq i < k_{n}$
This means you sum over all operations for a single job.
- $s < t + p_{i}$
This is the penalty condition - any operation that satisfies this condition is in violation of the precedence constraint.
- $x_{i, t}\cdot x_{i+1, s}$
This represents the table you saw in the example above, where $t$ is allowed to vary from $0 \rightarrow T - 1$ and you assign a penalty whenever the constraint is violated (when $s < t + p_{i}$).
This translates to a nested `for` loop: the outer loop has limits $0 \leq t < T$ and the inner loop has limits $0 \leq s < t + p_{i}$.
### Code
Using the mathematical formulation and the breakdown above, you can now translate this constraint function to code. You will see the `weight` argument included in this code snippet - this will be assigned a value later on when you call the function:
```
"""
# Reminder of the relevant parameters
## Time to allow for all jobs to complete
T = 21
## Processing time for each operation
processing_time = {0: 2, 1: 1, 2: 3, 3: 2, 4: 2, 5: 3, 6: 1, 7: 2, 8: 3, 9: 2}
## Assignment of operations to jobs (job ID: [operation IDs])
### Operation IDs within a job must be in ascending order
jobs_ops_map = {
0: [0, 1, 2], # Pay electricity bill
1: [3, 4, 5], # Plan camping trip
2: [6, 7, 8, 9] # Book dentist appointment
}
"""
def precedence_constraint(jobs_ops_map:dict, T:int, processing_time:dict, weight:float):
"""
Construct penalty terms for the precedence constraint.
Keyword arguments:
jobs_ops_map (dict): Map of jobs to operations {job: [operations]}
T (int): Allowed time (jobs can only be scheduled below this limit)
processing_time (dict): Operation processing times
weight (float): Relative importance of this constraint
"""
terms = []
# Loop through all jobs:
for ops in jobs_ops_map.values():
# Loop through all operations in this job:
for i in range(len(ops) - 1):
for t in range(0, T):
# Loop over times that would violate the constraint:
for s in range(0, min(t + processing_time[ops[i]], T)):
# Assign penalty
terms.append(Term(c=weight, indices=[ops[i]*T+t, (ops[i+1])*T+s]))
return terms
```
> **NOTE**:
> This nested loop structure is probably not the most efficient way to do this but it is the most direct comparison to the mathematical formulation.
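As a quick usage sketch (the weight value of 1 is illustrative; the weights are assigned properly later in this sample), calling the function with the parameters defined earlier produces one `Term` per violating pair of start times:
```python
# Generate the precedence penalty terms (illustrative weight of 1)
precedence_terms = precedence_constraint(jobs_ops_map, T, processing_time, weight=1)

# The first term generated is Term(c=1, indices=[0, 21]): it penalizes operation 1
# (variable index 1*T + 0 = 21) starting at time 0 while operation 0
# (variable index 0*T + 0 = 0) also starts at time 0.
first_term = precedence_terms[0]
```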
## Operation-once constraint
The operation-once constraint is defined as follows:
| Constraint | Penalty condition |
|---|---|
|**Operation-once constraint**<br>Each operation is started once and only once.|Assign penalty if an operation isn't scheduled within the allowed time.<br>**Assumption:** if an operation starts, it runs to completion.|
### Worked Example
We will again take job 1 ($J_{1}$) as an example:
- $J_{1}$: Plan camping trip
- $O_{3}$: Pick campsite (*2 minutes*)
- $O_{4}$: Pay online (*2 minutes*)
- $O_{5}$: Print receipt (*3 minutes*)
Recall the variable $x_{i,t}$:
$$
\begin{align}
\text{If } x_{i,t} &= 1, \text{ } O_i\text{ starts at time } \textit{t} \\
\text{If } x_{i,t} &= 0, \text{ } O_i\text{ does not start at time } \textit{t} \\
\end{align}
$$
According to this constraint, $x_{i,t}$ for a specific operation should equal 1 **once and only once** from $t = 0 \rightarrow T - 1$ (because it should start once and only once during the allowed time).
So in this case, you need to assign a penalty if the sum of $x_{i,t}$ for each operation across all allowed times doesn’t equal exactly 1.
Let’s take $O_{3}$ as an example again:
|$t$|$x_{3,t}$|
|---|---|
|0|0|
|1|1|
|2|0|
|$\sum_t {x_{3,t}} =$|1|
|**Valid configuration?**|✔|
In the right hand column, you see that $O_{3}$ starts at time 1 and no other time ($x_{3,t} = 1$ at time $t = 1$ and is $0$ otherwise). The sum of $x_{i,t}$ values over all $t$ for this example is therefore 1, which is what is expected! This is therefore a valid solution.
In the example below, you see an instance where $O_{3}$ is scheduled more than once ($x_{3,t} = 1$ more than once), in violation of the constraint:
|$t$|$x_{3,t}$|
|---|---|
|0|0|
|1|1|
|2|1|
|$\sum_t {x_{3,t}} =$|2|
|**Valid configuration?**|✘|
You can see from the above that $O_{3}$ has been scheduled to start at both time 1 and time 2, so the sum of $x_{i,t}$ values over all $t$ is now greater than 1. This violates the constraint and thus you must apply a penalty.
In the last example, you see an instance where $O_{3}$ has not been scheduled at all:
|$t$|$x_{3,t}$|
|---|---|
|0|0|
|1|0|
|2|0|
|$\sum_t {x_{3,t}} =$|0|
|**Valid configuration?**|✘|
In this example, none of the $x_{3,t}$ values equal 1 for any time in the simulation, meaning the operation is never scheduled. This means that the sum of $x_{3,t}$ values over all $t$ is 0 - the constraint is once again violated and you must allocate a penalty.
In summary:
|$t$|$x_{3,t}$|$x_{3,t}$|$x_{3,t}$|
|---|---|---|---|
|0|0|0|0|
|1|1|1|0|
|2|0|1|0|
|$\sum_t {x_{3,t}} =$|1|2|0|
|**Valid configuration?**|✔|✘|✘|
Now that you understand when to assign penalties, let's formulate the constraint mathematically.
### Penalty Formulation
As seen previously, you want to assign a penalty whenever the sum of $x_{i,t}$ values across all possible $t$ values is not equal to 1. This is how you represent that mathematically:
$$g(x) = \sum_{i} \left(\left(\sum_{0\leq t < T} x_{i,t}\right) - 1\right)^2.$$
Let's break that down:
- $\left(\sum_{0\leq t < T} x_{i,t}\right) - 1$
As you saw in the sum row of the tables in the worked example, $\sum_{0\leq t < T} x_{i,t}$ should always equal exactly 1 (meaning that an operation must be scheduled **once and only once** during the allowed time). This means that $\left(\sum_{0\leq t < T} x_{i,t}\right) - 1$ should always give 0, so no penalty is assigned when the constraint is satisfied.
In the case where $\sum_{0\leq t < T} x_{i,t} > 1$ (meaning an operation is scheduled to start more than once, like in the second example above), you now have a positive, non-zero penalty term as $\left(\sum_{0\leq t < T} x_{i,t}\right) - 1 > 0$.
In the case where $\sum_{0\leq t < T} x_{i,t} = 0$ (meaning an operation is never scheduled to start, like in the last example above), you now have a $-1$ penalty term as $\left(\sum_{0\leq t < T} x_{i,t}\right) - 1 = 0 - 1 = -1$.
- $\left(\sum\dots\right)^2$
Because the penalty terms must always be positive (otherwise you would be *reducing* the penalty when an operation isn't scheduled), you must square the result of $\left(\sum_{0\leq t < T} x_{i,t}\right) - 1$.
This ensures that the penalty term is always positive (as $(-1)^2 = 1$).
- $\sum_{i} \left((\dots)^2\right)$
Lastly, you must sum all penalties accumulated across all operations $O_{i}$ from all jobs.
To translate this constraint to code form, you are going to need to expand the quadratic equation in the sum.
To do this, let's once again take $O_{3}$ as an example and set $T = 2$, so the $t$ values will be 0 and 1. The first step is to substitute in these values:
$$
\begin{align}
\sum_{i} \left(\left(\sum_{0\leq t < T} x_{i,t}\right) - 1\right)^2 &= \left(x_{3,0} + x_{3,1} - 1\right)^2
\end{align}
$$
For simplicity, the $x_{3,t}$ variables will be renamed as follows:
$$
\begin{align}
x_{3,0} &= x \\
x_{3,1} &= y
\end{align}
$$
Substituting these values in, you now have the following:
$$
\begin{align}
\sum_{i} \left(\left(\sum_{0\leq t < T} x_{i,t}\right) - 1\right)^2 &= \left(x_{3,0} + x_{3,1} - 1\right)^2 \\
&=\left(x + y - 1\right)^2
\end{align}
$$
Next, you need to expand out the bracket and multiply each term in the first bracket with all terms in the other bracket:
$$
\begin{align}
\sum_{i} \left(\left(\sum_{0\leq t < T} x_{i,t}\right) - 1\right)^2 &= \left(x_{3,0} + x_{3,1} - 1\right)^2 \\
&= \left(x + y - 1\right)^2 \\
&= (x + y - 1)\cdot(x + y - 1) \\
&= x^2 + y^2 + 2xy - 2x - 2y + 1
\end{align}
$$
The final step simplifies things a little. Because this is a binary optimization problem, $x$ and $y$ can only take the values of $0$ or $1$. Because of this, the following holds true:
$$x^2 = x$$
$$y^2 = y,$$
as
$$0^2 = 0$$
and
$$1^2 = 1$$
This means that the quadratic terms in the penalty function can combine with the two linear terms, giving the following formulation of the penalty function:
$$
\begin{align}
\sum_{i} \left(\left(\sum_{0\leq t < T} x_{i,t}\right) - 1\right)^2 &= x^2 + y^2 + 2xy - 2x - 2y + 1 \\
&= x + y + 2xy - 2x - 2y + 1 \\
&= 2xy - x - y + 1
\end{align}
$$
If $T$ were larger, you would have more terms ($z$ and so on, for example).
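If you want to double-check the expansion, a quick verification with `sympy` (not part of the sample) reproduces the simplified penalty expression after substituting $x^2 = x$ and $y^2 = y$:
```python
import sympy as sym

x, y = sym.symbols('x y')
expanded = sym.expand((x + y - 1)**2)       # x**2 + 2*x*y - 2*x + y**2 - 2*y + 1
binary = expanded.subs({x**2: x, y**2: y})  # binary variables: x**2 = x, y**2 = y
assert sym.simplify(binary - (2*x*y - x - y + 1)) == 0
```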
### Code
You can now use this expanded version of the penalty function to build the penalty terms in code. Again, the `weight` argument is included, to be assigned a value later on:
```
"""
# Reminder of the relevant parameters
## Allowed time (jobs can only be scheduled below this limit)
T = 21
## Assignment of operations to jobs (operation ID: job ID)
ops_jobs_map = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1, 6: 2, 7: 2, 8: 2, 9: 2}
"""
def operation_once_constraint(ops_jobs_map:dict, T:int, weight:float):
"""
Construct penalty terms for the operation once constraint.
Penalty function is of form: 2xy - x - y + 1
Keyword arguments:
ops_jobs_map (dict): Map of operations to jobs {op: job}
T (int): Allowed time (jobs can only be scheduled below this limit)
weight (float): Relative importance of this constraint
"""
terms = []
# 2xy - x - y parts of the constraint function
# Loop through all operations
for op in ops_jobs_map.keys():
for t in range(T):
# - x - y terms
terms.append(Term(c=weight*-1, indices=[op*T+t]))
# + 2xy term
            # Loop through all other start times for the same operation
            # to get the cross terms
for s in range(t+1, T):
terms.append(Term(c=weight*2, indices=[op*T+t, op*T+s]))
# + 1 term
terms.append(Term(c=weight*1, indices=[]))
return terms
```
## No-overlap constraint
The no-overlap constraint is defined as follows:
| Constraint | Penalty condition |
|---|---|
|**No-overlap constraint**<br>Machines can only do one thing at a time.|Assign penalty every time two operations on a single machine are scheduled to run at the same time.|
### Worked Example
For this final constraint, $J_{1}$ will once again be used as an example:
- $J_{1}$: Plan camping trip
- $O_{3}$: Pick campsite (*2 minutes*) - **computer**
- $O_{4}$: Pay online (*2 minutes*) - **computer**
- $O_{5}$: Print receipt (*3 minutes*) - **printer**
Recall once more the variable $x_{i,t}$:
$$
\begin{align}
\text{If } x_{i,t} &= 1, \text{ } O_i\text{ starts at time } \textit{t} \\
\text{If } x_{i,t} &= 0, \text{ } O_i\text{ does not start at time } \textit{t} \\
\end{align}
$$
As you can see from the above, $O_{3}$ and $O_{4}$ must be completed using the same machine (the computer). You can't do two things at the same time using the same machine, so to avoid violating the no-overlap constraint, you must ensure that $O_{3}$ and $O_{4}$ begin at different times: $x_{3,t}$ and $x_{4,t}$ must not equal 1 at the same time. You must also make sure that the operations don't overlap, just like you saw in the precedence constraint. This means that if $O_{3}$ starts at time $t$, $O_{4}$ must not start at times where $t \leq s < t + p_{3}$ (after $O_{3}$ has started but before it has been completed using the machine).
One example of a valid configuration is shown below:
|$t$|$x_{3,t}$|$x_{4,t}$|$x_{3,t} \cdot x_{4,t}$|
|---|---|---|---|
|0|1|0|0|
|1|0|0|0|
|2|0|1|0|
|||$\sum_{t} x_{3,t} \cdot x_{4,t} =$|0|
|||**Valid configuration?**|✔|
As you can see, when you compare $x_{i,t}$ values pairwise at each time in the simulation, their product always equals 0. Further to this, you can see that $O_{4}$ starts two time steps after $O_{3}$, which means that there is no overlap.
Below, we see a configuration that violates the constraint:
|$t$|$x_{3,t}$|$x_{4,t}$|$x_{3,t} \cdot x_{4,t}$|
|---|---|---|---|
|0|0|0|0|
|1|1|1|1|
|2|0|0|0|
|||$\sum_{t} x_{3,t} \cdot x_{4,t} =$|1|
|||**Valid configuration?**|✘|
In this instance, $O_{3}$ and $O_{4}$ are both scheduled to start at $t = 1$ and given they require the same machine, this means that the constraint has been violated. The pairwise product of $x_{i,t}$ values is therefore no longer always equal to 0, as for $t = 1$ we have: $x_{3,1} \cdot x_{4,1} = 1$.
Another example of an invalid configuration is demonstrated below:
|$t$|$x_{3,t}$|$x_{4,t}$|$x_{3,t} \cdot x_{4,t}$|
|---|---|---|---|
|0|1|0|0|
|1|0|1|0|
|2|0|0|0|
|||$\sum_{t} x_{3,t} \cdot x_{4,t} =$|0|
|||**Valid configuration?**|✘|
In the above scenario, the two operations' running times have overlapped ($t \leq s < t + p_{3}$), and therefore this configuration is not valid.
You can now use this knowledge to mathematically formulate the constraint.
### Penalty Formulation
As you saw from the tables in the worked example, for the configuration to be valid, the sum of pairwise products of $x_{i,t}$ values for a machine $m$ at any time $t$ must equal 0. This gives you the penalty function:
$$h(x) = \sum_{i,t,k,s} x_{i,t}\cdot x_{k,s} = 0 \text{ for each machine } \textit{m}$$
Let's break that down:
- $\sum_{i,t,k,s}$
For operation $i$ starting at time $t$, and operation $k$ starting at time $s$, you need to sum over all possible start times $0 \leq t < T$ and $0 \leq s < T$. This indicates the need for another nested `for` loop, like you saw for the precedence constraint.
For this summation, $i \neq k$ (you should always be scheduling two different operations).
For two operations happening on a single machine, $t \neq s$ or the constraint has been violated. If $t = s$ for the operations, they have been scheduled to start on the same machine at the same time, which isn't possible.
- $x_{i,t}\cdot x_{k,s}$
This is the product you saw explicitly calculated in the rightmost columns of the tables from the worked example. If two different operations $i$ and $k$ start at the same time ($t = s$), this product will equal 1. Otherwise, it will equal 0.
- $\sum(\dots) = 0 \text{ for each machine } \textit{m}$
This sum is performed for each machine $m$ independently.
If all $x_{i,t} \cdot x_{k,s}$ products in the summation equal 0, the total sum comes to 0. This means no operations have been scheduled to start at the same time on this machine and thus the constraint has not been violated. You can see an example of this in the bottom row of the first table from the worked example, above.
If any of the $x_{i,t} \cdot x_{k,s}$ products in the summation equal 1, this means that $t = s$ for those operations and therefore two operations have been scheduled to start at the same time on the same machine. The sum now returns a value greater than 1, which gives us a penalty every time the constraint is violated. You can see an example of this in the bottom row of the second table from the worked example.
### Code
Using the above, you can transform the final penalty function into code that will generate the terms needed by the solver. As with the previous two penalty functions, the `weight` is included in the definition of the `Term` objects:
```
"""
# Reminder of the relevant parameters
## Allowed time (jobs can only be scheduled below this limit)
T = 21
## Processing time for each operation
processing_time = {0: 2, 1: 1, 2: 3, 3: 2, 4: 2, 5: 3, 6: 1, 7: 2, 8: 3, 9: 2}
## Assignment of operations to jobs (operation ID: job ID)
ops_jobs_map = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1, 6: 2, 7: 2, 8: 2, 9: 2}
## Assignment of operations to machines
### Ten jobs, three machines
machines_ops_map = {
0: [0, 1, 3, 4, 6, 7], # Operations 0, 1, 3, 4, 6 and 7 are assigned to machine 0 (the computer)
1: [2, 5, 8], # Operations 2, 5 and 8 are assigned to machine 1 (the printer)
2: [9] # Operation 9 is assigned to machine 2 (the tooth floss)
}
"""
def no_overlap_constraint(T:int, processing_time:dict, ops_jobs_map:dict, machines_ops_map:dict, weight:float):
"""
Construct penalty terms for the no overlap constraint.
Keyword arguments:
T (int): Allowed time (jobs can only be scheduled below this limit)
processing_time (dict): Operation processing times
weight (float): Relative importance of this constraint
ops_jobs_map (dict): Map of operations to jobs {op: job}
machines_ops_map(dict): Mapping of operations to machines, e.g.:
machines_ops_map = {
0: [0,1], # Operations 0 & 1 assigned to machine 0
1: [2,3] # Operations 2 & 3 assigned to machine 1
}
"""
terms = []
# For each machine
for ops in machines_ops_map.values():
# Loop over each operation i requiring this machine
for i in ops:
# Loop over each operation k requiring this machine
for k in ops:
# Loop over simulation time
for t in range(T):
# When i != k (when scheduling two different operations)
if i != k:
# t = s meaning two operations are scheduled to start at the same time on the same machine
terms.append(Term(c=weight*1, indices=[i*T+t, k*T+t]))
# Add penalty when operation runtimes overlap
for s in range(t, min(t + processing_time[i], T)):
terms.append(Term(c=weight*1, indices=[i*T+t, k*T+s]))
# If operations are in the same job, penalize for the extra time 0 -> t (operations scheduled out of order)
if ops_jobs_map[i] == ops_jobs_map[k]:
for s in range(0, t):
if i < k:
terms.append(Term(c=weight*1, indices=[i*T+t, k*T+s]))
if i > k:
terms.append(Term(c=weight*1, indices=[i*T+s, k*T+t]))
return terms
```
## Minimize the makespan
So far you've learned how to represent constraints of your optimization problem with a penalty model, which allows you to obtain *valid* solutions to your problem from the optimizer. Remember however that your end goal is to obtain an *optimal* (or close to optimal) solution. In this case, you're looking for the schedule with the fastest completion time of all jobs.
The makespan $M$ is defined as the total time required to run all jobs, or alternatively the finishing time of the last job, which is what you want to minimize. To this end, you need to add a fourth component to the cost function that adds larger penalties for solutions with larger makespans:
$$ H(x) = \alpha \cdot f(x) + \beta \cdot g(x) + \gamma \cdot h(x) + \mathbf{\delta \cdot k(x)} $$
Let's come up with terms that increase the value of the cost function the further out the last job is completed. Remember that the completion time of a job depends solely on the completion time of its final operation. However, since you have no way of knowing in advance what the last job will be, or at which time the last operation will finish, you'll need to include a term for each operation and time step. These terms need to scale with the time parameter $t$, and consider the operation processing time, in order to penalize large makespans over smaller ones.
Some care is required in determining the penalty values, or *coefficients*, of these terms. Recall that you are given a set of operations $\{O_i\}$, each of which takes processing time $p_i$ to complete. An operation scheduled at time $t$ will then *complete* at time $t + p_i$. Let's define the coefficient $w_t$ as the penalty applied to the cost function for an operation to finish at time $t$. As operations can be scheduled in parallel, you don't know how many might complete at any given time, but you do know that this number is at most equal to the number of available machines $m$. The sum of all penalty values for operations completed at time $t$ is thus in the range $[0, ~m \cdot w_t]$. You want to avoid situations where completing a single operation at time $t+1$ is less expensive than $m$ operations at time $t$. Thus, the penalty values cannot follow a simple linear function of time.
Precisely, you want your coefficients to satisfy:
$$ w_{t+1} > m \cdot w_{t} $$
For a suitable parameter $\epsilon > 0$, you can then solve the following recurrence relation:
$$ w_{t+1} = m \cdot w_{t}+\epsilon $$
The simplest solution is given by the function:
$$ w_{t} = \epsilon \cdot \frac{m^t-1}{m-1} $$
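As a quick numerical sanity check (not part of the sample), you can confirm that this closed form satisfies both the recurrence and the strict inequality, here for illustrative values $m = 3$ (the number of machines in this example) and $\epsilon = 1$:
```python
m, eps = 3, 1
w = lambda t: eps * (m**t - 1) / (m - 1)

for t in range(5):
    assert w(t + 1) == m * w(t) + eps  # the recurrence w_{t+1} = m*w_t + eps
    assert w(t + 1) > m * w(t)         # the strict inequality we wanted
```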
### Limiting the number of terms
Great! You now have a formula for the coefficients of the makespan penalty terms that increase with time while taking into account that operations can be scheduled in parallel. Before implementing the new terms, let's try to limit the amount of new terms you're adding as much as possible. To illustrate, recall the job shop example you've been working on:
$$
\begin{align}
J_{0} &= \{O_{0}, O_{1}, O_{2}\} \\
J_{1} &= \{O_{3}, O_{4}, O_{5}\} \\
J_{2} &= \{O_{6}, O_{7}, O_{8}, O_{9}\} \\
\end{align}
$$
First, consider that you only need the last operation in every job, as the precedence constraint guarantees that all other operations are completed before it. Given $n$ jobs, you thus consider only the operations $\{O_{k_0-1}, O_{k_1-1}, \dots, O_{k_{n-1}-1}\}$, where the indices $k_j$ denotes the number of operations up to and including job $j$. In this example, you only add terms for the following operations:
$$ \{O_2, O_5, O_9\} $$
$$ \text{with } k_0 = 3, k_1 = 6, k_2 = 10 $$
Next, you can find a lower bound for the makespan and only penalize makespans that are greater than this minimum. A simple lower bound is given by the longest job, as each operation within a job must execute sequentially. You can express this lower bound as follows:
$$ M_{lb} = \max\limits_{0 \leq j \lt n} \{ \sum_{i = k_j}^{k_{j+1}-1} p_i \} \leq M $$
For the processing times given in this example, you get:
$$
\begin{align}
J_{0} &: ~~ p_0 + p_1 + p_2 = 2 + 1 + 3 = 6 \\
J_{1} &: ~~ p_3 + p_4 + p_5 = 2 + 2 + 3 = 7 \\
J_{2} &: ~~ p_6 + p_7 + p_8 + p_9 = 1 + 2 + 3 + 2 = 8 \\
\\
&\Rightarrow M_{lb} = 8
\end{align}
$$
Finally, the makespan is upper-bounded by the sequential execution time of all jobs, 6 + 7 + 8 = 21 in this case. The simulation time $T$ should never exceed this upper bound. Whether or not that holds, you need to include penalties for all time steps up to $T$, otherwise larger time steps without a penalty would be favored over smaller ones!
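For this example you can compute both bounds directly from the problem parameters (a short sketch, using the dictionaries defined earlier):
```python
# Lower bound: the longest job (operations within a job run sequentially)
lower_bound = max(sum(processing_time[i] for i in ops) for ops in jobs_ops_map.values())
# Upper bound: all jobs executed one after another
upper_bound = sum(processing_time.values())

print(lower_bound)  # 8
print(upper_bound)  # 21
```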
To summarize:
- Makespan penalty terms are only added for the last operation in every job $\{O_{k_0-1}, O_{k_1-1}, \dots, O_{k_{n-1}-1}\}$
- The makespan is lower-bounded by the longest job $\Rightarrow$ only include terms for time steps $M_{lb} < t < T$
### Implementing the penalty terms
You are now ready to add the makespan terms to the cost function. Recall that all terms contain a coefficient and one (or multiple) binary decision variables $x_{i,t}$. Contrary to the coefficients $w_t$ defined above, where $t$ refers to the completion time of an operation, the variables $x_{i,t}$ determine if an operation $i$ is *scheduled* at time t. To account for this difference, you'll have to shift the variable index by the operation's processing time $p_i$. All makespan terms can then be expressed as follows:
$$ k(x) = \sum_{i \in \{k_0-1, \dots, k_{n-1}-1\}} \left( \sum_{M_{lb} < t < T+p_i} w_t \cdot x_{i, ~t-p_i} \right) $$
Lastly, you need to make a small modification to the coefficient function so that the first value $w_{M_{lb}+1}$ always equals one. With $\epsilon = 1$ and $t_0 = M_{lb}$ you get:
$$ w_{t} = \frac{m^{t-t_0}-1}{m-1} $$
### Code
The code below implements the ideas discussed above by generating the necessary `Term` objects required by the solver.
```
"""
# Reminder of the relevant parameters
## Allowed time (jobs can only be scheduled below this limit)
T = 21
## Processing time for each operation
processing_time = {0: 2, 1: 1, 2: 3, 3: 2, 4: 2, 5: 3, 6: 1, 7: 2, 8: 3, 9: 2}
## Assignment of operations to jobs (job ID: [operation IDs])
jobs_ops_map = {
0: [0, 1, 2], # Pay electricity bill
1: [3, 4, 5], # Plan camping trip
2: [6, 7, 8, 9] # Book dentist appointment
}
"""
def calc_penalty(t:int, m_count:int, t0:int):
assert m_count > 1 # Ensure you don't divide by 0
return (m_count**(t - t0) - 1)/float(m_count - 1)
def makespan_objective(T:int, processing_time:dict, jobs_ops_map:dict, m_count:int, weight:float):
"""
Construct makespan minimization terms.
Keyword arguments:
T (int): Allowed time (jobs can only be scheduled below this limit)
processing_time (dict): Operation processing times
jobs_ops_map (dict): Map of jobs to operations {job: [operations]}
m_count (int): Number of machines
weight (float): Relative importance of this constraint
"""
terms = []
lower_bound = max([sum([processing_time[i] for i in job]) for job in jobs_ops_map.values()])
upper_bound = T
# Loop through the final operation of each job
for job in jobs_ops_map.values():
i = job[-1]
        # Loop through each time step the operation could be completed at
for t in range(lower_bound + 1, T + processing_time[i]):
terms.append(Term(c=weight*(calc_penalty(t, m_count, lower_bound)), indices=[i*T + (t - processing_time[i])]))
return terms
```
## Putting it all together
As a reminder, here are the penalty terms:
| Constraint | Penalty condition |
|---|---|
|**Precedence constraint**<br>Operations in a job must take place in order.|Assign penalty every time $O_{i+1}$ starts before $O_{i}$ has finished (they start out of order).|
|**Operation-once constraint**<br>Each operation is started once and only once.|Assign penalty if an operation isn't scheduled within the allowed time.<br>**Assumption:** if an operation starts, it runs to completion.|
|**No-overlap constraint**<br>Machines can only do one thing at a time.|Assign penalty every time two operations on a single machine are scheduled to run at the same time.|
- **Precedence constraint**:
$$f(x) = \sum_{k_{n-1} \leq i < k_n, s < t + p_{i}}x_{i,t}\cdot x_{i+1,s} \text{ for each job } \textit{n}$$
- **Operation-once constraint**:
$$g(x) = \sum_{i} \left(\left(\sum_{0\leq t < T} x_{i,t}\right) - 1\right)^2$$
- **No-overlap constraint**:
$$h(x) = \sum_{i,t,k,s} x_{i,t}\cdot x_{k,s} = 0 \text{ for each machine } \textit{m}$$
- **Makespan minimization**:
$$k(x) = \sum_{i \in \{k_0-1, \dots, k_{n-1}-1\}} \left( \sum_{M_{lb} < t < T+p_i} w_t \cdot x_{i, ~t-p_i} \right)$$
As you saw earlier, combining the penalty functions is straightforward - all you need to do is assign each term a weight and add all the weighted terms together, like so:
$$H(x) = \alpha \cdot f(x) + \beta \cdot g(x) + \gamma \cdot h(x) + \delta \cdot k(x) $$
$$\text{where }\alpha, \beta, \gamma \text{ and } \delta \text{ represent the different weights assigned to the penalties.}$$
The weights represent how important each penalty function is, relative to all the others.
> **NOTE:**
> Along with modifying your cost function (how you represent the penalties), tuning these weights will define how much success you will have solving your optimization problem. There are many ways to represent each optimization problem's penalty functions and many ways to manipulate their relative weights, so this may require some experimentation before you see success. The end of this sample dives a little deeper into parameter tuning.
### Code
As a reminder, below you again see the code representation of the problem parameters: the maximum allowed time `T`, the operation processing times `processing_time`, the mapping of operations to jobs (`jobs_ops_map` and `ops_jobs_map`), the assignment of operations to machines (`machines_ops_map`), and the helper function `process_config`.
```
def process_config(jobs_ops_map:dict, machines_ops_map:dict, processing_time:dict, T:int):
"""
Process & validate problem parameters (config) and generate inverse dict of operations to jobs.
Keyword arguments:
jobs_ops_map (dict): Map of jobs to operations {job: [operations]}
machines_ops_map(dict): Mapping of operations to machines, e.g.:
machines_ops_map = {
0: [0,1], # Operations 0 & 1 assigned to machine 0
1: [2,3] # Operations 2 & 3 assigned to machine 1
}
processing_time (dict): Operation processing times
T (int): Allowed time (jobs can only be scheduled below this limit)
"""
# Problem cannot take longer to complete than all operations executed sequentially
## Sum all operation processing times to calculate the maximum makespan
T = min(sum(processing_time.values()), T)
# Ensure operation assignments to machines are sorted in ascending order
for m, ops in machines_ops_map.items():
machines_ops_map[m] = sorted(ops)
ops_jobs_map = {}
for job, ops in jobs_ops_map.items():
# Fail if operation IDs within a job are out of order
assert (ops == sorted(ops)), f"Operation IDs within a job must be in ascending order. Job was: {job}: {ops}"
for op in ops:
# Fail if there are duplicate operation IDs
assert (op not in ops_jobs_map.keys()), f"Operation IDs must be unique. Duplicate ID was: {op}"
ops_jobs_map[op] = job
return ops_jobs_map, T
# Set problem parameters
## Allowed time (jobs can only be scheduled below this limit)
T = 21
## Processing time for each operation
processing_time = {0: 2, 1: 1, 2: 3, 3: 2, 4: 2, 5: 3, 6: 1, 7: 2, 8: 3, 9: 2}
## Assignment of operations to jobs (job ID: [operation IDs])
### Operation IDs within a job must be in ascending order
jobs_ops_map = {
0: [0, 1, 2],
1: [3, 4, 5],
2: [6, 7, 8, 9]
}
## Assignment of operations to machines
### Ten operations, three machines
machines_ops_map = {
0: [0, 1, 3, 4, 6, 7], # Operations 0, 1, 3, 4, 6 and 7 are assigned to machine 0 (the computer)
1: [2, 5, 8], # Operations 2, 5 and 8 are assigned to machine 1 (the printer)
2: [9] # Operation 9 is assigned to machine 2 (the tooth floss)
}
## Inverse mapping of jobs to operations
ops_jobs_map, T = process_config(jobs_ops_map, machines_ops_map, processing_time, T)
```
The following code snippet shows how you assign weight values and assemble the penalty terms by summing the output of the penalty and objective functions, as was demonstrated mathematically earlier in this sample. These terms represent the cost function and they are what you will submit to the solver.
```
# Generate terms to submit to solver using functions defined previously
## Assign penalty term weights:
alpha = 1 # Precedence constraint
beta = 1 # Operation once constraint
gamma = 1 # No overlap constraint
delta = 0.00000005 # Makespan minimization (objective function)
## Build terms
### Constraints:
c1 = precedence_constraint(jobs_ops_map, T, processing_time, alpha)
c2 = operation_once_constraint(ops_jobs_map, T, beta)
c3 = no_overlap_constraint(T, processing_time, ops_jobs_map, machines_ops_map, gamma)
### Objective function
c4 = makespan_objective(T, processing_time, jobs_ops_map, len(machines_ops_map), delta)
### Combine terms:
terms = []
terms = c1 + c2 + c3 + c4
```
> **NOTE**:
> You can find the full Python script for this sample [here](TODO)
## Submit problem to Azure Quantum
This code submits the terms to the Azure Quantum `SimulatedAnnealing` solver. You could also have used the same problem definition with any of the other Azure Quantum Optimization solvers available (for example, `ParallelTempering`). You can find further information on the various solvers available through the Azure Quantum Optimization service [here](TODO).
The job is run synchronously in this instance, however this could also be submitted asynchronously as shown in the next subsection.
```
from azure.quantum.optimization import Problem, ProblemType
from azure.quantum.optimization import SimulatedAnnealing # Change this line to match the Azure Quantum Optimization solver type you wish to use
# Problem type is PUBO in this instance. You could also have chosen to represent the problem in Ising form.
problem = Problem(name="Job shop sample", problem_type=ProblemType.pubo, terms=terms)
# Provide details of your workspace, created at the beginning of this tutorial
# Provide the name of the solver you wish to use for this problem (as imported above)
solver = SimulatedAnnealing(workspace, timeout = 100) # Timeout in seconds
# Run job synchronously
result = solver.optimize(problem)
config = result['configuration']
print(config)
```
..............{'0': 1, '21': 0, '22': 0, '1': 0, '23': 1, '2': 0, '24': 0, '3': 0, '25': 0, '4': 0, '26': 0, '5': 0, '27': 0, '6': 0, '28': 0, '7': 0, '29': 0, '8': 0, '30': 0, '9': 0, '31': 0, '10': 0, '32': 0, '11': 0, '33': 0, '12': 0, '34': 0, '13': 0, '35': 0, '14': 0, '36': 0, '15': 0, '37': 0, '16': 0, '38': 0, '17': 0, '39': 0, '18': 0, '40': 0, '19': 0, '41': 0, '20': 0, '42': 0, '43': 0, '44': 0, '45': 1, '46': 0, '47': 0, '48': 0, '49': 0, '50': 0, '51': 0, '52': 0, '53': 0, '54': 0, '55': 0, '56': 0, '57': 0, '58': 0, '59': 0, '60': 0, '61': 0, '62': 0, '63': 0, '84': 0, '85': 0, '64': 0, '86': 0, '65': 0, '87': 0, '66': 0, '88': 0, '67': 0, '89': 0, '68': 0, '90': 0, '69': 1, '91': 0, '70': 0, '92': 1, '71': 0, '93': 0, '72': 0, '94': 0, '73': 0, '95': 0, '74': 0, '96': 0, '75': 0, '97': 0, '76': 0, '98': 0, '77': 0, '99': 0, '78': 0, '100': 0, '79': 0, '101': 0, '80': 0, '102': 0, '81': 0, '103': 0, '82': 0, '104': 0, '83': 0, '105': 0, '106': 0, '107': 0, '108': 0, '109': 0, '110': 0, '111': 0, '112': 0, '113': 0, '114': 0, '115': 1, '116': 0, '117': 0, '118': 0, '119': 0, '120': 0, '121': 0, '122': 0, '123': 0, '124': 0, '125': 0, '126': 0, '147': 0, '127': 0, '148': 0, '128': 0, '149': 0, '129': 1, '150': 0, '130': 0, '151': 1, '131': 0, '152': 0, '132': 0, '153': 0, '133': 0, '154': 0, '134': 0, '155': 0, '135': 0, '156': 0, '136': 0, '157': 0, '137': 0, '158': 0, '138': 0, '159': 0, '139': 0, '160': 0, '140': 0, '161': 0, '141': 0, '162': 0, '142': 0, '163': 0, '143': 0, '164': 0, '144': 0, '165': 0, '145': 0, '166': 0, '146': 0, '167': 0, '168': 0, '169': 0, '170': 0, '171': 0, '172': 0, '173': 0, '174': 1, '175': 0, '176': 0, '177': 0, '178': 0, '179': 0, '180': 0, '181': 0, '182': 0, '183': 0, '184': 0, '185': 0, '186': 0, '187': 0, '188': 0, '189': 0, '190': 0, '191': 0, '192': 0, '193': 0, '194': 0, '195': 0, '196': 0, '197': 0, '198': 0, '199': 0, '200': 1, '201': 0, '202': 0, '203': 0, '204': 0, '205': 0, '206': 0, '207': 0, '208': 0, '209': 0}
## Run job asynchronously
Alternatively, a job can be run asynchronously, as shown below:
```python
# Submit problem to solver
job = solver.submit(problem)
print(job.id)
# Get job status
job.refresh()
print(job.details.status)
# Get results
result = job.get_results()
config = result['configuration']
print(config)
```
## Map variables to operations
This code snippet contains several helper functions which are used to parse the results returned from the solver and print them to screen in a user-friendly format.
```
def create_op_array(config: dict):
"""
Create array from returned config dict.
Keyword arguments:
config (dictionary): config returned from solver
"""
variables = []
for key, val in config.items():
variables.insert(int(key), val)
return variables
def print_problem_details(ops_jobs_map:dict, processing_time:dict, machines_ops_map:dict):
"""
Print problem details e.g. operation runtimes and machine assignments.
Keyword arguments:
ops_jobs_map (dict): Map of operations to jobs {operation: job}
processing_time (dict): Operation processing times
machines_ops_map(dict): Mapping of machines to operations
"""
machines = [None] * len(ops_jobs_map)
for m, ops in machines_ops_map.items():
for op in ops:
machines[op] = m
print(f" Job ID: {list(ops_jobs_map.values())}")
print(f" Operation ID: {list(ops_jobs_map.keys())}")
print(f"Operation runtime: {list(processing_time.values())}")
print(f" Assigned machine: {machines}")
print()
def split_array(T:int, array:List[int]):
"""
Split array into rows representing the rows of our operation matrix.
Keyword arguments:
T (int): Time allowed to complete all operations
array (List[int]): array of x_i,t values generated from config returned by solver
"""
ops = []
i = 0
while i < len(array):
x = array[i:i+T]
ops.append(x)
i = i + T
return ops
def print_matrix(T:int, matrix:List[List[int]]):
"""
Print final output matrix.
Keyword arguments:
T (int): Time allowed to complete all operations
matrix (List[List[int]]): Matrix of x_i,t values
"""
labels = " t:"
for t in range(0, T):
labels += f" {t}"
print(labels)
idx = 0
for row in matrix:
print("x_" + str(idx) + ",t: ", end="")
print(' '.join(map(str,row)))
idx += 1
print()
def extract_start_times(jobs_ops_map:dict, matrix:List[List[int]]):
"""
Extract operation start times & group them into jobs.
Keyword arguments:
jobs_ops_map (dict): Map of jobs to operations {job: [operations]}
matrix (List[List[int]]): Matrix of x_i,t values
"""
#jobs = {}
jobs = [None] * len(jobs_ops_map)
op_start_times = []
for job, ops in jobs_ops_map.items():
x = [None] * len(ops)
for i in range(len(ops)):
try :
x[i] = matrix[ops[i]].index(1)
op_start_times.append(matrix[ops[i]].index(1))
except ValueError:
x[i] = -1
op_start_times.append(-1)
jobs[job] = x
return jobs, op_start_times
```
## Results
Finally, you take the config returned by the solver and read out the results.
```
# Produce 1D array of x_i,t = 0, 1 representing when each operation starts
op_array = create_op_array(config)
# Print config details:
print(f"Config dict:\n{config}\n")
print(f"Config array:\n{op_array}\n")
# Print problem setup
print_problem_details(ops_jobs_map, processing_time, machines_ops_map)
# Print final operation matrix, using the returned config
print("Operation matrix:")
matrix = split_array(T, op_array)
print_matrix(T, matrix)
# Find where each operation starts (when x_i,t = 1) and return the start time
print("Operation start times (grouped into jobs):")
jobs, op_start_times = extract_start_times(jobs_ops_map, matrix)
print(jobs)
# Calculate makespan (time taken to complete all operations - the objective you are minimizing)
op_end_times = [op_start_times[i] + processing_time[i] for i in range(len(op_start_times))]
makespan = max(op_end_times)
print(f"\nMakespan (time taken to complete all operations): {makespan}")
```
Config dict:
{'0': 1, '21': 0, '22': 0, '1': 0, '23': 1, '2': 0, '24': 0, '3': 0, '25': 0, '4': 0, '26': 0, '5': 0, '27': 0, '6': 0, '28': 0, '7': 0, '29': 0, '8': 0, '30': 0, '9': 0, '31': 0, '10': 0, '32': 0, '11': 0, '33': 0, '12': 0, '34': 0, '13': 0, '35': 0, '14': 0, '36': 0, '15': 0, '37': 0, '16': 0, '38': 0, '17': 0, '39': 0, '18': 0, '40': 0, '19': 0, '41': 0, '20': 0, '42': 0, '43': 0, '44': 0, '45': 1, '46': 0, '47': 0, '48': 0, '49': 0, '50': 0, '51': 0, '52': 0, '53': 0, '54': 0, '55': 0, '56': 0, '57': 0, '58': 0, '59': 0, '60': 0, '61': 0, '62': 0, '63': 0, '84': 0, '85': 0, '64': 0, '86': 0, '65': 0, '87': 0, '66': 0, '88': 0, '67': 0, '89': 0, '68': 0, '90': 0, '69': 1, '91': 0, '70': 0, '92': 1, '71': 0, '93': 0, '72': 0, '94': 0, '73': 0, '95': 0, '74': 0, '96': 0, '75': 0, '97': 0, '76': 0, '98': 0, '77': 0, '99': 0, '78': 0, '100': 0, '79': 0, '101': 0, '80': 0, '102': 0, '81': 0, '103': 0, '82': 0, '104': 0, '83': 0, '105': 0, '106': 0, '107': 0, '108': 0, '109': 0, '110': 0, '111': 0, '112': 0, '113': 0, '114': 0, '115': 1, '116': 0, '117': 0, '118': 0, '119': 0, '120': 0, '121': 0, '122': 0, '123': 0, '124': 0, '125': 0, '126': 0, '147': 0, '127': 0, '148': 0, '128': 0, '149': 0, '129': 1, '150': 0, '130': 0, '151': 1, '131': 0, '152': 0, '132': 0, '153': 0, '133': 0, '154': 0, '134': 0, '155': 0, '135': 0, '156': 0, '136': 0, '157': 0, '137': 0, '158': 0, '138': 0, '159': 0, '139': 0, '160': 0, '140': 0, '161': 0, '141': 0, '162': 0, '142': 0, '163': 0, '143': 0, '164': 0, '144': 0, '165': 0, '145': 0, '166': 0, '146': 0, '167': 0, '168': 0, '169': 0, '170': 0, '171': 0, '172': 0, '173': 0, '174': 1, '175': 0, '176': 0, '177': 0, '178': 0, '179': 0, '180': 0, '181': 0, '182': 0, '183': 0, '184': 0, '185': 0, '186': 0, '187': 0, '188': 0, '189': 0, '190': 0, '191': 0, '192': 0, '193': 0, '194': 0, '195': 0, '196': 0, '197': 0, '198': 0, '199': 0, '200': 1, '201': 0, '202': 0, '203': 0, '204': 0, '205': 0, '206': 0, '207': 0, '208': 0, '209': 0}
Config array:
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
Job ID: [0, 0, 0, 1, 1, 1, 2, 2, 2, 2]
Operation ID: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Operation runtime: [2, 1, 3, 2, 2, 3, 1, 2, 3, 2]
Assigned machine: [0, 0, 1, 0, 0, 1, 0, 0, 1, 2]
Operation matrix:
t: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
x_0,t: 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
x_1,t: 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
x_2,t: 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
x_3,t: 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
x_4,t: 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
x_5,t: 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
x_6,t: 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
x_7,t: 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
x_8,t: 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
x_9,t: 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0
Operation start times (grouped into jobs):
[[0, 2, 3], [6, 8, 10], [3, 4, 6, 11]]
Makespan (time taken to complete all operations): 13
For this small problem instance, the solver quickly returned a solution. For bigger, more complex problems you may need to run the job asynchronously, as shown earlier in this sample.
## Validate the solution
In this instance, it is possible to visually verify that the solution does not violate any constraints:
- Operations belonging to the same job happen in order
- Operations are started once and only once
- Each machine only has one operation running at a time
In this particular instance, you can also tell that the solver scheduled the operations efficiently: the computer (the busiest machine) is continuously in operation from time 0 to 10, with no gaps between its scheduled operations, and the resulting makespan of 13 is the smallest achievable for this problem instance (the computer needs 10 minutes of work in total, and whichever job finishes on the computer last still has a 3-minute print to complete). However, you must remember that these solvers are heuristics and are therefore not guaranteed to find the best solution possible, particularly when the problem definition becomes more complex.
Depending on how well the cost function is defined and the weights are tuned, the solver will have varying degrees of success. This reinforces the importance of verifying and evaluating returned solutions, to enable tuning of the problem definition and parameters (such as weights/coefficients) in order to improve solution quality.
For larger or more complex problems, it will not always be possible to verify the solution by eye. It is therefore common practice to implement some code to verify that solutions returned from the optimizer are valid, as well as evaluating how good the solutions are (at least relative to solutions returned previously). This capability is also useful when it comes to tuning weights and penalty functions.
You can perform this validation using the following code snippet, which checks the solution against all three constraints before declaring the solution valid or not. If any of the constraints are violated, the solution will be marked as invalid. An example of an invalid solution has also been included, for comparison.
```
def check_precedence(processing_time, jobs):
"""
Check if the solution violates the precedence constraint.
Returns True if the constraint is violated.
Keyword arguments:
processing_time (dict): Operation processing times
jobs (List[List[int]]): List of operation start times, grouped into jobs
"""
op_id = 0
for job in jobs:
for i in range(len(job) - 1):
if job[i+1] - job[i] < processing_time[op_id]:
return True
op_id += 1
op_id += 1
return False
def check_operation_once(matrix):
"""
Check if the solution violates the operation once constraint.
Returns True if the constraint is violated.
Keyword arguments:
matrix (List[List[int]]): Matrix of x_i,t values
"""
for x_it_vals in matrix:
if sum(x_it_vals) != 1:
return True
return False
def check_no_overlap(op_start_times:list, machines_ops_map:dict, processing_time:dict):
"""
Check if the solution violates the no overlap constraint.
Returns True if the constraint is violated.
Keyword arguments:
op_start_times (list): Start times for the operations
machines_ops_map(dict): Mapping of machines to operations
processing_time (dict): Operation processing times
"""
pvals = list(processing_time.values())
# For each machine
for ops in machines_ops_map.values():
machine_start_times = [op_start_times[i] for i in ops]
machine_pvals = [pvals[i] for i in ops]
# Two operations start at the same time on the same machine
if len(machine_start_times) != len(set(machine_start_times)):
return True
# There is overlap in the runtimes of two operations assigned to the same machine
machine_start_times, machine_pvals = zip(*sorted(zip(machine_start_times, machine_pvals)))
for i in range(len(machine_pvals) - 1):
if machine_start_times[i] + machine_pvals[i] > machine_start_times[i+1]:
return True
return False
def validate_solution(matrix:dict, machines_ops_map:dict, processing_time:dict, jobs_ops_map:dict):
"""
Check that solution has not violated any constraints.
Returns True if the solution is valid.
Keyword arguments:
matrix (List[List[int]]): Matrix of x_i,t values
machines_ops_map(dict): Mapping of machines to operations
processing_time (dict): Operation processing times
jobs_ops_map (dict): Map of jobs to operations {job: [operations]}
"""
jobs, op_start_times = extract_start_times(jobs_ops_map, matrix)
# Check if constraints are violated
precedence_violated = check_precedence(processing_time, jobs)
operation_once_violated = check_operation_once(matrix)
no_overlap_violated = check_no_overlap(op_start_times, machines_ops_map, processing_time)
if not precedence_violated and not operation_once_violated and not no_overlap_violated:
print("Solution is valid.\n")
else:
print("Solution not valid. Details:")
print(f"\tPrecedence constraint violated: {precedence_violated}")
print(f"\tOperation once constraint violated: {operation_once_violated}")
print(f"\tNo overlap constraint violated: {no_overlap_violated}\n")
print_problem_details(ops_jobs_map, processing_time, machines_ops_map)
print("Azure Quantum solution:")
print_matrix(T, matrix)
print("Operation start times (grouped into jobs):")
print(jobs)
print()
validate_solution(matrix, machines_ops_map, processing_time, jobs_ops_map)
```
Job ID: [0, 0, 0, 1, 1, 1, 2, 2, 2, 2]
Operation ID: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Operation runtime: [2, 1, 3, 2, 2, 3, 1, 2, 3, 2]
Assigned machine: [0, 0, 1, 0, 0, 1, 0, 0, 1, 2]
Azure Quantum solution:
t: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
x_0,t: 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
x_1,t: 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
x_2,t: 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
x_3,t: 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
x_4,t: 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
x_5,t: 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
x_6,t: 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
x_7,t: 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
x_8,t: 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
x_9,t: 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0
Operation start times (grouped into jobs):
[[0, 2, 3], [6, 8, 10], [3, 4, 6, 11]]
Solution is valid.
As you can see, the result returned by the Azure Quantum solver has been confirmed as valid (it does not violate any of the constraints).
## Tune parameters
Great! You've learned how to model a cost function, run a solver, and verify the solution of an optimization problem using Azure Quantum. Using your knowledge, you successfully repaired your ship! However, you may have been wondering how exactly the weights that appear in the cost function were chosen. Let's take a look at a general method that can help you balance the different components that make up a cost function.
If you recall, the cost function is made up of four components, one for each constraint and one to minimize the makespan:
$$ H(x) = \alpha \cdot f(x) + \beta \cdot g(x) + \gamma \cdot h(x) + \delta \cdot k(x) $$
The importance attributed to each term can be adjusted using the weights (coefficients) $\alpha, \beta, \gamma, \text{ and } \delta$. The process of adjusting these weights is referred to as *parameter tuning*. In general, there's no absolute rule to determine the optimal value for each weight, and you might have to use some trial and error to figure out what works best for your problem. However, the guidelines below can help you find a good starting point.
#### Adjusting the optimization term weight
Intuitively, it should be clear that satisfying the constraints is more important than minimizing the makespan. An invalid solution, even with a very small makespan, would be useless to you. The weights of the cost function can be used to reflect this fact. As a rule of thumb, breaking a single constraint should be around 5-10x more expensive than any valid solution.
Let's start with an upper bound on the value of the cost function for any valid solution. At worst, a valid solution (meaning that $f(x) = g(x) = h(x) = 0$) contributes at most $m \cdot w_{T-1+max(p_i)}$ to the cost function. This is the case when $m$ operations, all taking $max(p_i)$ to complete, are scheduled at the last time step $T-1$. For convenience, let's say that this should result in a cost function value of $1$. You can compute what the value of $\delta$ should be to achieve this value. The code example you've been working with uses the following parameters:
$$ m = 3, ~ T = 21, ~ max(p_i) = 3, ~ M_{lb} = 8, ~ w_t = \frac{m^{t-M_{lb}}}{m-1} $$
First, calculate the latest time an operation could finish. This is given by the max time $T$ (minus one because you are using 0-based indexing), plus the longest processing time for any operation ($max(p_i)$):
$$t_{max} = T - 1 + max(p_i) = 21 - 1 + 3 = 23$$
Then, calculate $w_{t_{max}}$:
$$ w_{t_{max}} = \frac{m ^ {t_{max} - M_{lb}}}{m - 1} = \frac{3^{23 - 8}}{3 - 1} = \frac{3^{15}}{2} = 7,174,453.5 $$
The upper bound is then:
$$ m \cdot w_{t_{max}} = 3 \times 7,174,453.5 = 21,523,360.5 $$
To obtain the desired value of $1$, you can approximately set the weight to:
$$ \delta = \frac{1}{m \cdot w_{t_{max}}} = \frac{1}{21,523,360.5} = 0.00000005 $$
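The arithmetic above is easy to script. The snippet below (a sketch added for illustration, not part of the original sample code) simply reproduces the calculation of $\delta$ from $m$, $T$, $max(p_i)$ and $M_{lb}$:
```
# Sketch only: reproduces the weight arithmetic above for this problem instance
m = 3          # number of machines
T = 21         # number of time steps
max_p = 3      # longest operation processing time
M_lb = 8       # lower bound used by the weighting scheme w_t = m**(t - M_lb) / (m - 1)

t_max = T - 1 + max_p                  # latest possible finish time (23)
w_t_max = m**(t_max - M_lb) / (m - 1)  # 3**15 / 2 = 7,174,453.5
delta = 1 / (m * w_t_max)              # ~5e-8, the makespan (optimization) term weight
print(t_max, w_t_max, delta)
```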
#### Adjusting the constraint weights
As mentioned in the previous section, breaking a single constraint should incur a penalty roughly 5-10x higher than that of the worst valid solution. Assuming that breaking one constraint adds a value of $1$ to the cost function, you can set the remaining weights to:
$$ \alpha = \beta = \gamma = 5 $$
Now, you can run a problem instance and use the verifier to check if any constraints are being broken. If all constraints are satisfied, congratulations! You should have obtained a good solution from the optimizer.
If instead one constraint is consistently broken, you probably need to increase its weight compared to the others.
#### Further adjustments
You may also come across situations in which constraints are being broken without a particular preference for which. In this case, make sure the time $T$ is given a large enough value. If $T$ is too small, there may not even exist a valid solution, or the solver could be too constrained to feasibly find one.
Optionally, if you're looking for better solutions than the ones obtained so far, you may always try to lower the value of $T$, or increase the importance of the makespan component $\delta$. A tighter bound on the makespan can help the solver find a more optimal solution, as can increasing the weight $\delta$. You may also find that doing so increases the speed at which a solution is found. If any problems pop up with broken constraints, you went too far and need to change the parameters in the other direction again.
## Next steps
Now that you understand the problem scenario and how to define the cost function, there are a number of experiments you can perform to deepen your understanding and improve the solution defined above:
- Modify the problem definition:
- Change the number of jobs, operations, and/or machines
- Vary the number of operations in each job
- Change operation runtimes
- Change machine assignments
- Add/remove machines
- Rewrite the penalty functions to improve their efficiency
- Tune the parameters
- Try using a different solver (such as `ParallelTempering`)
| b6b91833121d8bc9561903bf78354050c8c4e05f | 91,529 | ipynb | Jupyter Notebook | samples/job-shop-scheduling/job-shop-sample.ipynb | dime10/qio-samples | b63bdf9c4a781f0162cd048ef833b88b90cda94d | [
"MIT"
]
| null | null | null | samples/job-shop-scheduling/job-shop-sample.ipynb | dime10/qio-samples | b63bdf9c4a781f0162cd048ef833b88b90cda94d | [
"MIT"
]
| 1 | 2021-04-13T01:14:35.000Z | 2021-04-13T01:17:06.000Z | samples/job-shop-scheduling/job-shop-sample.ipynb | dime10/qio-samples | b63bdf9c4a781f0162cd048ef833b88b90cda94d | [
"MIT"
]
| 1 | 2021-04-12T21:58:25.000Z | 2021-04-12T21:58:25.000Z | 55.104756 | 3,593 | 0.580111 | true | 22,976 | Qwen/Qwen-72B | 1. YES
2. YES | 0.851953 | 0.853913 | 0.727493 | __label__eng_Latn | 0.994455 | 0.528543 |
# Expectation Maximization for the Gaussian Mixture Model
* Last class we introduced the Gaussian Mixture Model:
* $p(x) = \sum_{k=1}^K \pi_k N(x | \mu_k, \Sigma_k)$ where $0 \le \pi_k \le 1$ and $\sum_k \pi_k = 1$
* Suppose we are given $X = \left\{ \mathbf{x}_1, \ldots, \mathbf{x}_N\right\}$ where each $\mathbf{x}_i$ is a sample from one of the $K$ Gaussians in our mixture model. We want to estimate $\pi_k, \mu_k, \Sigma_k$ given $X$.
* So, we want to maximize the following data likelihood:
\begin{equation}
\hat\Theta = argmax_\Theta \prod_{i=1}^N \sum_{k=1}^K \pi_k N(x | \mu_k, \Sigma_k)
\end{equation}
where $\Theta = \left\{ \pi_k, \mu_k, \Sigma_k \right\}_{k=1}^K$
* It is difficult to maximize! So we try a simpler formulation in which we add latent variables to simplify the problem (and apply EM).
* The hidden/latent/missing variable we added was the label of the Gaussian from which $\mathbf{x}_i$ was drawn
\begin{equation}
\mathbf{x}, z \sim f(\mathbf{x},z|\theta)
\end{equation}
* The complete data likelihood is then:
\begin{eqnarray}
L^c &=& \prod_{i=1}^N p(\mathbf{x}_i | z_i, \theta)p(z_i)\\
&=& \prod_{i=1}^N N(\mathbf{x}_i| \mu_{z_i}, \theta_{z_i})\pi_{z_i}
\end{eqnarray}
* Since we do not know the $z_i$ values, we do not just guess one value - we average over all possible values for $z_i$. In other words, we take the *expectation* of the complete likelihood with respect to $z$
\begin{eqnarray}
Q(\Theta, \Theta^t) &=& \mathbb{E}\left[ \ln p(\mathbf{X}, \mathbf{z}|\Theta) | \mathbf{X}, \Theta^{t} \right]\\
&=& \sum_{\mathbf{z}} p(\mathbf{z}|\mathbf{X},\Theta^t)\ln p(\mathbf{X}, \mathbf{z}|\Theta)\\
&=& \sum_{i=1}^N \sum_{z_i=1}^K p(z_i|\mathbf{x}_i,\Theta^t)\ln p(\mathbf{x}_i, z_i|\Theta)\\
&=& \sum_{i=1}^N \sum_{z_i=1}^K p(z_i|\mathbf{x}_i,\Theta^t)\ln p(\mathbf{x}_i|z_i,\Theta) p(z_i)
\end{eqnarray}
* Thus, to take the expectation, we need $p(z_i|\mathbf{x}_i,\Theta^t)$
\begin{eqnarray}
p(z_i|\mathbf{x}_i,\Theta^t) &=& \frac{\pi_{z_i}^t p_{z_i}(\mathbf{x}_i|\theta_{z_i}^t, z_i)}{p(\mathbf{x}_i|\Theta^t)}\\
&=& \frac{\pi_{z_i}^t p_{z_i}(\mathbf{x}_i|\theta_{z_i}^t, z_i)}{\sum_{k=1}^K \pi_k^t p_k(\mathbf{x}_i|\theta_k^t, k)}
\end{eqnarray}
* This completes the Expectation step in EM. Now, we must derive the update equations for the Maximization step. So, we need to maximize for $\pi_k, \Sigma_k, \mu_k$.
## Update equation for mean of the kth Gaussian
* For simplicity, let us assume that $\Sigma_k = \sigma_k^2\mathbf{I}$
\begin{eqnarray}
Q(\Theta, \Theta^t) &=& \sum_{i=1}^N \sum_{z_i=1}^K p(z_i|\mathbf{x}_i,\Theta^t)\ln p(\mathbf{x}_i|z_i,\Theta) p(z_i)\\
&=& \sum_{i=1}^N \sum_{k=1}^K p(z_i=k|\mathbf{x}_i,\Theta^t)\ln {N}(\mathbf{x}_i|\mu_k, \sigma_k^2) \pi_k\\
&=& \sum_{i=1}^N \sum_{k=1}^K p(z_i=k|\mathbf{x}_i,\Theta^t)\ln {N}(\mathbf{x}_i|\mu_k, \sigma_k^2) + \ln \pi_k\\
&=& \sum_{i=1}^N \sum_{k=1}^K p(z_i=k|\mathbf{x}_i,\Theta^t)\left( -\frac{d}{2}\ln\sigma_k^2 -\frac{1}{2\sigma_k^2}\left\| \mathbf{x}_i - \mu_k \right\|_2^2 + \ln \pi_k \right)
\end{eqnarray}
\begin{eqnarray}
\frac{\partial Q(\Theta, \Theta^t)}{\partial \mu_k} &=& \frac{\partial}{\partial \mu_k} \left[\sum_{i=1}^N \sum_{k=1}^K p(z_i=k|\mathbf{x}_i,\Theta^t)\left( -\frac{d}{2}\ln\sigma_k^2 -\frac{1}{2\sigma_k^2}\left\| \mathbf{x}_i - \mu_k \right\|_2^2 + \ln \pi_k \right)\right] = 0\\
&=& \sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t)\left( \frac{1}{\sigma_k^2}\left(\mathbf{x}_i - \mu_k \right) \right) = 0\\
&=& \sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t)\frac{\mathbf{x}_i}{\sigma_k^2} - \sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t) \frac{\mu_k}{\sigma_k^2} = 0\\
&=& \sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t)\frac{\mathbf{x}_i}{\sigma_k^2} = \sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t) \frac{\mu_k}{\sigma_k^2}\\
&=& \sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t)\mathbf{x}_i = \sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t) \mu_k\\
& &\mu_k = \frac{\sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t)\mathbf{x}_i}{\sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t)}
%
\end{eqnarray}
## Update equation for the variance of the kth Gaussian
\begin{eqnarray}
\frac{\partial Q(\Theta, \Theta^t)}{\partial \sigma_k^2} &=& \frac{\partial}{\partial \sigma_k^2} \left[\sum_{i=1}^N \sum_{k=1}^K p(z_i=k|\mathbf{x}_i,\Theta^t)\left( -\frac{d}{2}\ln\sigma_k^2 -\frac{1}{2\sigma_k^2}\left\| \mathbf{x}_i - \mu_k \right\|_2^2 + \ln \pi_k \right)\right] = 0\\
&=& \sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t)\left( -\frac{d}{2\sigma_k^2} + \frac{1}{2\left(\sigma_k^2\right)^2}\left\| \mathbf{x}_i - \mu_k \right\|_2^2 \right) = 0\\
&=& \sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t)\frac{d}{2\sigma_k^2} = \sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t)\frac{1}{2\left(\sigma_k^2\right)^2}\left\| \mathbf{x}_i - \mu_k \right\|_2^2 \\
&=& d\sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t) = \frac{1}{\sigma_k^2} \sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t)\left\| \mathbf{x}_i - \mu_k \right\|_2^2 \\
&=& \sigma_k^2 = \frac{\sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t)\left\| \mathbf{x}_i - \mu_k \right\|_2^2}{ d\sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t)}
%
\end{eqnarray}
## Update Equation for mixture weights
\begin{eqnarray}
& & \frac{\partial Q(\Theta, \Theta^t)}{\partial \pi_k} = \nonumber \\
& &\frac{\partial}{\partial \pi_k} \left[\sum_{i=1}^N \sum_{k=1}^K p(z_i=k|\mathbf{x}_i,\Theta^t)\left( -\frac{d}{2}\ln\sigma_k^2 -\frac{1}{2\sigma_k^2}\left\| \mathbf{x}_i - \mu_k \right\|_2^2 + \ln \pi_k \right) - \lambda\left(\sum_{k=1}^K \pi_k - 1\right)\right] = 0 \nonumber\\
&=&\sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t)\left(\frac{1}{\pi_k} \right) - \lambda = 0 \nonumber\\
&=& \left(\frac{1}{\pi_k} \right) \sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t)- \lambda = 0 \nonumber\\
&=& \sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t) = \lambda \pi_k \nonumber\\
%
\end{eqnarray}
Since $\sum_k \pi_k = 1$, then:
\begin{eqnarray}
& \sum_{k=1}^K \frac{\sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t)}{\lambda} = \sum_{k=1}^K \pi_k = 1\nonumber\\
& \lambda = \sum_{k=1}^K \sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t) \nonumber\\
\end{eqnarray}
So:
\begin{eqnarray}
\pi_k &=&\frac{\sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t)}{\sum_{k=1}^K \sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t)} \\
&=&\frac{\sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t)}{\sum_{i=1}^N \sum_{k=1}^K p(z_i=k|\mathbf{x}_i,\Theta^t)}\\
&=&\frac{\sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t)}{\sum_{i=1}^N 1}\\
&=&\frac{\sum_{i=1}^N p(z_i=k|\mathbf{x}_i,\Theta^t)}{N}
\end{eqnarray}
* We now have everything we need to implement the EM algorithm.
* Pseudo-code for the algorithm is:
* Initialize all parameters
* t = 1
* While convergence not yet reached:
* E-step: Compute $p(z_i=k|\mathbf{x}_i,\Theta^t)$ for every $\mathbf{x}_i$ and $k$
* M-step:
* Update $\mu_k$ for all $k$
* Update $\sigma_k^2$ for all $k$
* Update $\pi_k$ for all $k$
* t = t+1
* Check convergence criteria
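To make the pseudo-code concrete, here is a minimal NumPy sketch of these updates for 1-D data. It is an illustration added here, not part of the original derivation, and it omits convergence checks for brevity:
```python
# Minimal EM for a 1-D Gaussian mixture (illustration only)
import numpy as np

def em_gmm(x, K, n_iter=100, seed=0):
    """x: 1-D array of samples, K: number of Gaussian components."""
    rng = np.random.default_rng(seed)
    N = len(x)
    mu = rng.choice(x, K)            # initialize means from the data
    var = np.full(K, np.var(x))      # initialize variances
    pi = np.full(K, 1.0 / K)         # initialize mixture weights
    for _ in range(n_iter):
        # E-step: responsibilities p(z_i = k | x_i, Theta^t)
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: apply the update equations derived above
        Nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / Nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
        pi = Nk / N
    return pi, mu, var

# quick check on synthetic data drawn from two Gaussians
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 0.5, 200)])
print(em_gmm(data, K=2))
```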
```python
```
| ac21a798b683762b25246c87dee1076f7ed98dbb | 9,807 | ipynb | Jupyter Notebook | Lecture10_EM/Expectation Maximization for the Gaussian Mixture Model.ipynb | Danlobaton/LectureNotes | fdfc8521d0cede127eb27f75337fbcc4f63eb042 | [
"MIT"
]
| null | null | null | Lecture10_EM/Expectation Maximization for the Gaussian Mixture Model.ipynb | Danlobaton/LectureNotes | fdfc8521d0cede127eb27f75337fbcc4f63eb042 | [
"MIT"
]
| null | null | null | Lecture10_EM/Expectation Maximization for the Gaussian Mixture Model.ipynb | Danlobaton/LectureNotes | fdfc8521d0cede127eb27f75337fbcc4f63eb042 | [
"MIT"
]
| null | null | null | 51.888889 | 331 | 0.514734 | true | 3,206 | Qwen/Qwen-72B | 1. YES
2. YES | 0.930458 | 0.83762 | 0.77937 | __label__yue_Hant | 0.448461 | 0.649071 |
<a href="https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fcurriculum-notebooks&branch=master&subPath=SocialStudies/BubonicPlague/bubonic-plague-and-SIR-model.ipynb&depth=1" target="_parent"></a>
# Bubonic Plague - SIR Model
### Grade 11 Social Studies
We are interested in modelling a bubonic plague outbreak. We part from the assumption that the total population can be subdivided into a set of classes, each of which depends on the state of the infection. The [**SIR Model**](https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology) is the simplest one and, as its name suggests, it divides the population into three classes.
**Outcomes:**
* Examine and visualize concepts and examples related to the bubonic plague.
* Examine the timeline/map of the Black Death.
* Visualize mathematical model that shows the recovery, infection, and removal rates.
## The SIR Outbreak Model
### Population Parameters
In this model, the total population is divided into three groups:
* Susceptible: individuals that can become infected but haven't been yet
* Infected: individuals that are infected
* Removed: individuals that are either dead or recovered
We are looking at the changes, over the course of an outbreak, of the numbers of these individuals, represented by $S$, $I$, and $R$. In other words we want to understand how, as time passes, the number of individuals in each group changes.
Having a realistic model might be useful for predicting the long-term outcome of an outbreak and informing public health interventions.
If we can predict that the number of removed people will stay low and the number of infected people will quickly go down to zero, then there is no need to intervene and we can let the outbreak end by itself while only providing medical attention to the infected people.
Conversely if we predict a large increase of the numbers of infected and removed individuals, then the outbreak needs a quick intervention before it results in a large number of casualties. In a plague outbreak this intervention would, for example, be to make sure there is no contact between infected and susceptible people.
We now describe the SIR (Susceptible, Infected, Removed) mathematical model of an outbreak over time (for example every week). We write $S_t, I_t, R_t$ to denote the number of susceptible, infected, and removed individuals at time point $t$. $t=1$ is the first recorded time point, $t=2$ is the second and so on. We call *time unit* the time elapsed between two time points, for example a day or a week.
In this model, we assume that the **total population is constant** (so births and deaths are ignored) for the duration of the model simulation. We represent the total population size by $N$, and so at any time point $t$ we have $$N=S_t + I_t + R_t$$
### Modelling the disease progression
We assume that transmission requires contact between an infected individual and a susceptible individual. We also assume that the disease takes a constant amount of time to progress within an infected individual until they are removed (die or recover). We need to define these two processes (infection and removal) and model how they drive the transition from the state at time $t$, $(S_t,I_t,R_t)$, to the state at time $t + 1$, $(S_{t+1},I_{t+1},R_{t+1})$.
The occurrence of new infections is modelled using a parameter $\beta$ that gives the proportion of contacts between susceptible people and infected people, during one time unit, that result in infection. Then we can describe the number of newly infected people as $\dfrac{\beta S_t I_t}{N}$, where the term $S_t I_t$ represents the set of all possible contacts between susceptible and infected individuals. We discuss this term later.
The occurrence of removals of infected people is modelled using a parameter denoted by $\gamma$. It is defined to be the proportion of infected individuals that die or recover between two time points. If the duration of an infection is $T$ (i.e. how many time points an individual spends between infection and removal), then $\gamma = \dfrac{1}{T}$.
Taking into account the rate of contact $\beta$ and rate of removal $\gamma$, then each group population changes within one unit of time as follows
$$
\begin{align}
S_{t+1} &= S_t - \dfrac{{\beta} S_t I_t}{N}\\
I_{t+1} &= I_t + \dfrac{{\beta} S_t I_t}{N} - \gamma I_t \\
R_{t+1} &= R_t + \gamma I_t\\
N&=S_t + I_t + R_t
\end{align}
$$
These equations form the SIR model. They allow, from knowing the parameters of the model ($\beta$ and $\gamma$) and the current state ($S_t,I_t,R_t$) of a population to predict the next states of the population for later time points. Such models are critical in our days for monitoring and controlling infectious diseases outbreaks.
##### Technical remarks.
First, note that the SIR model does not enforce that the values $S_t,I_t,R_t$ at a given time point are integers. As $\beta$ and $\gamma$ are floating-point numbers, these values are most of the time not integers. This is fine as the SIR model is an approximate model and aims mostly at predicting the general dynamics of an outbreak, not the precise values for the number of susceptible, infected, and removed individuals.
Next, one can ask how to find the values of the parameters $\beta$ and $\gamma$ that are necessary to have a full SIR model.
As discussed above, the parameter $\gamma$ is relatively easy to find from knowing how the disease progress in a patient, as it is mostly the inverse of the average time a patient is sick.
The parameter $\beta$ is less easy to obtain. Reading the equations we can see that during a time point, out of the $S_t$ susceptible individuals, the number that become infected is $(\dfrac{{\beta}}{N}) S_t I_t$. As mentioned above, the product $S_t I_t$ can be interpreted as the set of all possible contacts between the $S_t$ susceptible individuals and the $I_t$ infected individuals and is often a large number, much larger than $S_t$ and on the order of $N^2$. The division by $N$ aims to lower this number, mostly to normalize it by the total population, to make sure it is on the order of $N$ and not quadratic in $N$. So in order for the number of newly infected individuals during a time unit to be reasonable, $\beta$ is generally a small number between $0$ and $1$. But formally, if we pick a value for $\beta$ that is too large, then the SIR model will predict values for $S_t$ that can be negative, which is inconsistent with the modelled phenomenon. So choosing the value of $\beta$ is the crucial step in modelling an outbreak.
```python
# This function takes as input a vector y holding all initial values,
# t: the number of time points (e.g. days)
# beta: proportion of contacts that result in infections
# gamma: proportion of infected that are removed
# S1,I1,R1 = initial population sizes
def discrete_SIR(S1,I1,R1,t,beta,gamma):
# Empy arrays for each class
S = [] # susceptible population
I = [] # infected population
R = [] # removed population
N = S1+I1+R1 # the total population
# Append initial values
S.append(S1)
I.append(I1)
R.append(R1)
# apply SIR model: iterate over the total number of days - 1
for i in range(t-1):
S_next = S[i] - (beta/N)*((S[i]*I[i]))
S.append(S_next)
I_next = I[i] + (beta/N)*((S[i]*I[i])) - gamma*I[i]
I.append(I_next)
R_next = R[i] + gamma * I[i]
R.append(R_next)
# return arrays S,I,R whose entries are various values for susceptible, infected, removed
return((S,I,R))
```
## Modelling an outbreak related to the Great Plague of London
The last major epidemic of the bubonic plague in England occurred between 1665 and 1666 ([click here for further reading](https://www.britannica.com/event/Great-Plague-of-London)). This epidemic did not kill as many people as the Black Death (1347 - 1351), however it is remembered as the "Great Plague of London" as it was the last widespread outbreak that affected England.
"City records indicate that some 68,596 people died during the epidemic, though the actual number of deaths is suspected to have exceeded 100,000 out of a total population estimated at 460,000. " [Great Plague of London"; Encyclopædia Britannica; Encyclopædia Britannica, inc.; September 08, 2016](https://www.britannica.com/event/Great-Plague-of-London)
When the bubonic plague outbreak hit London, people started to leave the city and go to the countryside, hoping to avoid the disease. But as can be expected, some of these people were already infected when they left London, and so carried the disease to start other outbreaks in some nearby villages. This happened in the village of Eyam.
When Eyam authorities realized a plague outbreak had started, they took the difficult decision to close the village in order to avoid spreading the disease further. So nobody was allowed to enter or leave the village and people stayed there hoping the outbreak would end by itself without too many casualties; note that from a mathematical point of view, this means the assumption that the total population (the sum of the numbers of susceptible, infected, and removed individuals) is constant holds.
Also the village authorities recorded regularly the number of infected and dead people; these data are described in the table below, for the period from June 19 1665 to October 19 1665, with data being recorded every 2 weeks. Obviously these data are imperfect (some people did not declare they were sick by fear of being ostracized, some people died too fast for the plague to be diagnosed, etc.), but nevertheless, they provide us with interesting data to see if the SIR model is an appropriate model for such a plague outbreak.
| Date |Day Number |Susceptible | Infected |
|------|-----------|------------|----------|
|June 19 1665|0|254|7|
|July 3 1665|14|235|14|
|July 19 1665|28|201|22|
|Aug 3 1665|42|153|29|
|Aug 19 1665|56|121| 21|
|Sept 3 1665|70|108|8|
|Sept 19 1665|84|121|21|
|Oct 3 1665| 98|NA | NA|
|Oct 19 1665|112| 83 | 0|
The average time an infected individual remains infected by the bubonic plague is 11 days.
With the information above, we will be able to get the parameters of the SIR model for this outbreak and observe if indeed what this model predicts generates results corresponding to what happened in reality.
### Question 1:
Assuming that on June 19 no individuals had died, i.e. no one was in the Removed class, what is the value of $N$, i.e. the number of individuals in the total population?
### Question 2:
We know that the average time an individual remained infected is 11 days. What is the rate of removal ($\gamma$)?
### Question 3:
We are now trying something more difficult but more interesting. We introduced a mathematical model for outbreaks, but nothing so far shows that this SIR model is appropriate to model an outbreak such as the Eyam plague outbreak. We want to answer this question now.
From questions 1 and 2 above we know the values of $N$ and $\gamma$ (check your answers at the bottom of this notebook). From the data table we also know $S_1,I_1,R_1$, the state of the population at the start of the outbreak. So if we want to apply the SIR model we need to find a value for $\beta$ the parameter, the number susceptible people becoming infected during a time unit. We consider here that a time unit is 1 day; the Eyam outbreak spanned 112 days, so 112 time units, even if data were only recorded every 2 weeks.
A standard scientific approach for the problem of finding $\beta$ is to try various values and see if there is one that leads to predicted values for $S_n,I_n,R_n$ that match the observed data. In order to evaluate this match, we focus on the number of infected people, the most important element of an outbreak.
The code below allows you to do this: you can choose a value of $\beta$, click on the "Run interact" button and it will show on the same graph a set of 8 blue dots (the observed number of infected people from the data table) and a set of 112 red dots, corresponding to the predicted number of infected individuals for the chosen value of $\beta$.
While there are several mathematical ways to define what would be the *best fit*, here we are not getting into this and you are just asked to try to find a value of $\beta$ that generated blue dots being approximately on the graph defined by the red dots. Pay particular attention to the first four blue dots.
Note that in this case $0 < \beta < 1$.
##### Warning:
The SIR model is a very simple approximation of the dynamics of a true outbreak, so don't expect to find a value of $\beta$ that generates a graph that contains exactly all observed data points (blue dots).
In particular note that the data from September 3 and 19 seem to be somewhat of an anomaly as we observe a sharp decrease in the number of infected followed by a surge. This could be due to many reasons, for example poor statistics recording (we are considering a group of people under heavy stress likely more motivated by trying to stay alive than to record accurate vital statistics).
So here we are interested in finding a parameter $\beta$ that captures the general dynamics (increase followed by a post-peak decrease) of the outbreak. You can expect to find a reasonable value for $\beta$ but be aware that many values, especially too high, will result in a very poor match between observed data and model predictions.
```python
from ipywidgets import interact_manual, interact,widgets
import matplotlib.pyplot as plt
# set style
s = {'description_width': 'initial'}
# Set interact manual box widget for beta
@interact(answer=widgets.FloatText(value=0.50, description='Enter beta ',
disabled=False, style=s, step=0.01
))
# define function to find the appropriate value of beta
# this function takes as input a floating value and outputs a plot with the best fit curve
def find_beta(answer):
# set initial values for SIR model
S1,I1,R1 = 254,7,0
# Use original data on Number of infected from table in the notebook
ori_data = [7,14,22,29,21,8,21,0]
# use days, time data was provided biweekly, we transform to days here
ori_days = [1,14,28,42,56,70,84,112]
# set number of days as the second to last entry on the ori_days array
n = ori_days[len(ori_days)-1]-ori_days[0]+1
# get beta from answer - to be sure transform to float
beta = float(answer)
# Gamma was obtained from disease
gamma = 1/11
# Compute SIR values using our discrete_SIR function
(S,I,R) = discrete_SIR(S1,I1,R1,n,beta,gamma)
# Figure
#fig,ax = plt.subplot(figsize=(10,10))
fig = plt.figure(facecolor='w',figsize=(17,5))
ax = fig.add_subplot(111,facecolor = '#ffffff')
# Scatter plot of original number of infected in the course of 112 days
plt.scatter(ori_days,ori_data,c="blue", label="Original Data")
# Scatter plot of infected obtained from SIR mode, in the course of 112 days
plt.scatter(range(n),I,c="red",label="SIR Model Predictions")
# Make the plot pretty
plt.xlabel('Time (days)')
plt.ylabel('Infected Individuals')
plt.title('Real Data vs Model')
#legend = ax.legend()
plt.show()
```
interactive(children=(FloatText(value=0.5, description='Enter beta ', step=0.01, style=DescriptionStyle(descri…
## Simulating a Disease Outbreak
To conclude we will use the widgets below to simulate a disease outbreak using the SIR model.
You can choose the values of all the elements of the model (sizes of the compartments of the population at the beginning of the outbreak, parameters $\gamma$ and $\beta$, and duration in time units (days) of the outbreak. The default parameters are the ones from the Eyam plague outbreak.
The result is a series of three graphs that shows how the three components of the population change during the outbreak. It allows you to see the impact of changes in the parameters $\gamma$ and $\beta$, such as increasing $\beta$ (making the outbreak progress faster) or reducing $\gamma$ (decreasing the removal rate).
You can use this interactive tool to try to fit the SIR model to match the observed data.
```python
import matplotlib.pyplot as plt
import numpy as np
from math import ceil
# This function takes as input initial values of susceptible, infected and removed, number of days, beta and gamma
# it plots the SIR model with the above conditions
def plot_SIR(S1,I1,R1,n,beta,gamma):
# Initialize figure
fig = plt.figure(facecolor='w',figsize=(17,5))
ax = fig.add_subplot(111,facecolor = '#ffffff')
# Compute SIR values for our initial data and parameters
(S_f,I_f,R_f) = discrete_SIR(S1,I1,R1,n,beta,gamma)
# Set x axis
x = [i for i in range(n)]
# Scatter plot of evolution of susceptible over the course of x days
plt.scatter(x,S_f,c= 'b',label='Susceptible')
# Scatter plot of evolution of infected over the course of x days
plt.scatter(x,I_f,c='r',label='Infected')
# Scatter plot of evolution of removed over the course of x days
plt.scatter(x,R_f,c='g',label='Removed')
# Make the plot pretty
plt.xlabel('Time (days)')
plt.ylabel('Number of individuals')
plt.title('Simulation of a Disease Outbreak Using the SIR Model')
legend = ax.legend()
plt.show()
# Print messages to aid student understand and interpret what is happening in the plot
print("SIMULATION DATA\n")
print("Beta: " + str(beta))
print("Gamma: " + str(gamma))
print("\n")
print("Initial Conditions:")
print("Total number of Susceptible: " + str(ceil(S_f[0])))
print("Total number of Infected: " + str(ceil(I_f[0])))
print("Total number of Removed: " + str(ceil(R_f[0])))
print("\n")
print("After " + str(n) + " days:")
print("Total number of Susceptible: " + str(ceil(S_f[n-1])))
print("Total number of Infected: " + str(ceil(I_f[n-1])) )
print("Total number of Removed: " + str(ceil(R_f[n-1])))
# Tweaking initial Values
from ipywidgets import widgets, interact, interact_manual
# Set function above so that the user can set all parameters and manually start simulation
s = {'description_width': 'initial'}
interact(plot_SIR,
S1 = widgets.IntSlider(value=254, min=200, max=1000, step=1, style=s, description="Susceptible Initial",
disabled=False, orientation='horizontal', readout=True),
I1 = widgets.IntSlider(value=7, min=0, max=500, step=1, style=s, description="Infected Initial",
disabled=False, orientation='horizontal', readout=True),
R1 = widgets.IntSlider(value=0, min=0, max=500, step=1, style=s, description="Removed Initial",
disabled=False, orientation='horizontal', readout=True),
n = widgets.IntSlider(value=112, min=0, max=500, step=1, style=s, description="Time (days)",
disabled=False, orientation='horizontal', readout=True),
beta = widgets.FloatText(value=1.50, description=r'$ \beta$ parameter',
disabled=False, style = s, step=0.01),
gamma = widgets.FloatText(value=1.50, description= r'$ \gamma$ parameter',
disabled=False, style=s, step=0.01)
);
```
interactive(children=(IntSlider(value=254, description='Susceptible Initial', max=1000, min=200, style=SliderS…
### Answer 1
Since we are assuming the population is constant, and since $S_1 = 254, I_1 = 7, R_1 = 0$, then $S_1 + I_1 + R_1 = 254 + 7 + 0 = 261$.
### Answer 2
We know that, on average, an individual will remain infected for approximately 11 days. This means that one individual moves to the removed class for every 11 days, and the rate of removal is $\gamma = \frac{1}{11} = 0.0909...$.
### Answer 3
The best value is approximately $\beta = 0.14909440503418078$.
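If you prefer not to tune $\beta$ by eye, one possible approach (not part of the original notebook) is a simple grid search that minimizes the squared error between the model's infected counts and the recorded data, reusing the `discrete_SIR` function defined above:
```python
# Hypothetical helper (illustration only): grid-search beta against the recorded data
import numpy as np

ori_days = [1, 14, 28, 42, 56, 70, 84, 112]
ori_data = [7, 14, 22, 29, 21, 8, 21, 0]

def fit_error(beta):
    # run the model for 113 days so index 112 exists, with gamma = 1/11
    S, I, R = discrete_SIR(254, 7, 0, 113, beta, 1/11)
    return sum((I[d] - obs) ** 2 for d, obs in zip(ori_days, ori_data))

betas = np.linspace(0.05, 0.5, 451)
best_beta = min(betas, key=fit_error)
print(best_beta)
```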
<h2 align='center'>Conclusion</h2>
In this notebook we learned about the SIR discrete model to model an outbreak. We learned that this model is one of the simplest ones and that it separates the total population $N$ (a constant) into three categories: Infected, Susceptible and Removed. We learned about rates of infection and removal and how this affects the number of individuals in each class.
We also ran a basic but realistic simulation of a bubonic plague outbreak of the Great Plague of London that took place in the village Eyam in 1665 and learned about the devastating effect this had on the population.
[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
| b68f7c8c8baa23cae190b4d3b4eca5237ec43ad9 | 26,444 | ipynb | Jupyter Notebook | _build/jupyter_execute/curriculum-notebooks/SocialStudies/BubonicPlague/bubonic-plague-and-SIR-model.ipynb | BryceHaley/curriculum-jbook | d1246799ddfe62b0cf5c389394a18c2904383437 | [
"CC-BY-4.0"
]
| 1 | 2022-03-18T18:19:40.000Z | 2022-03-18T18:19:40.000Z | _build/jupyter_execute/curriculum-notebooks/SocialStudies/BubonicPlague/bubonic-plague-and-SIR-model.ipynb | callysto/curriculum-jbook | ffb685901e266b0ae91d1250bf63e05a87c456d9 | [
"CC-BY-4.0"
]
| null | null | null | _build/jupyter_execute/curriculum-notebooks/SocialStudies/BubonicPlague/bubonic-plague-and-SIR-model.ipynb | callysto/curriculum-jbook | ffb685901e266b0ae91d1250bf63e05a87c456d9 | [
"CC-BY-4.0"
]
| null | null | null | 57.238095 | 1,048 | 0.643624 | true | 5,066 | Qwen/Qwen-72B | 1. YES
2. YES | 0.913677 | 0.749087 | 0.684423 | __label__eng_Latn | 0.99805 | 0.428476 |
<a href="https://colab.research.google.com/github/kalz2q/mycolabnotebooks/blob/master/linear_numpy_short.ipynb" target="_parent"></a>
# NumPy and Linear Algebra
To study linear algebra you really want both numpy and sympy.
Without numpy you cannot do the fast numerical computation needed for things like machine learning, and without sympy you cannot do symbolic manipulation.
* Matrix computations
* Solving matrix equations
* Computing inverse matrices and determinants
# Matrix computations
Consider the following example:
$$
A = \begin{pmatrix}
5 & 6 & 2\\
4 & 7 & 19\\
0 & 3 & 12
\end{pmatrix}
$$
$$
B = \begin{pmatrix}
14 & -2 & 12\\
4 & 4 & 5\\
5 & 5 & 1
\end{pmatrix}
$$
To use numpy, you first need to import it.
```python
import numpy as np
A = np.matrix([[5, 6, 2],
[4, 7, 19],
[0, 3, 12]])
B = np.matrix([[14, -2, 12],
[4, 4, 5],
[5, 5, 1]])
print(A)
print(B)
```
[[ 5 6 2]
[ 4 7 19]
[ 0 3 12]]
[[14 -2 12]
[ 4 4 5]
[ 5 5 1]]
Doing the same thing with sympy looks like this.
```python
# the same thing, this time with sympy
from sympy import *
A_sympy = Matrix([[5, 6, 2],
[4, 7, 19],
[0, 3, 12]])
B_sympy = Matrix([[14, -2, 12],
[4, 4, 5],
[5, 5, 1]])
display(A_sympy)
display(B_sympy)
```
$\displaystyle \left[\begin{matrix}5 & 6 & 2\\4 & 7 & 19\\0 & 3 & 12\end{matrix}\right]$
$\displaystyle \left[\begin{matrix}14 & -2 & 12\\4 & 4 & 5\\5 & 5 & 1\end{matrix}\right]$
Compute the following:
* $5A$
* $A ^ 3$
* $A + B$;
* $A - B$;
* $AB$
```python
print(A)
print(5 * A)
print(A**2)
print(A**3)
print(A+B)
print(A-B)
print(A*B)
```
[[ 5 6 2]
[ 4 7 19]
[ 0 3 12]]
[[25 30 10]
[20 35 95]
[ 0 15 60]]
[[ 49 78 148]
[ 48 130 369]
[ 12 57 201]]
[[ 557 1284 3356]
[ 760 2305 6994]
[ 288 1074 3519]]
[[19 4 14]
[ 8 11 24]
[ 5 8 13]]
[[ -9 8 -10]
[ 0 3 14]
[ -5 -2 11]]
[[104 24 92]
[179 115 102]
[ 72 72 27]]
# Work in progress from here
---
Exercise $\quad$ Compute $A^2 - 2A + 3$ (a solution sketch follows below) with:
$$A =
\begin{pmatrix}
1 & -1\\
2 & 1
\end{pmatrix}
$$
---
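One possible solution sketch for the exercise above (not in the original notebook). In the matrix polynomial the constant $3$ is read as $3$ times the identity matrix; since $\lambda^2 - 2\lambda + 3$ is the characteristic polynomial of $A$, the Cayley-Hamilton theorem says the result is the zero matrix:
```python
# A possible solution to the exercise above (illustration only)
import numpy as np

A = np.matrix([[1, -1],
               [2, 1]])
result = A**2 - 2*A + 3*np.eye(2)   # the constant term means 3 * identity
print(result)                       # zero matrix, by the Cayley-Hamilton theorem
```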
## Solving Matrix equations
We can use Numpy to (efficiently) solve large systems of equations of the form:
$$Ax=b$$
Let us illustrate that with:
$$
A = \begin{pmatrix}
5 & 6 & 2\\
4 & 7 & 19\\
0 & 3 & 12
\end{pmatrix}
$$
$$
b = \begin{pmatrix}
-1\\
2\\
1
\end{pmatrix}
$$
```python
A = np.matrix([[5, 6, 2],
[4, 7, 19],
[0, 3, 12]])
b = np.matrix([[-1], [2], [1]])
```
We use the `linalg.solve` command:
```python
x = np.linalg.solve(A, b)
x
```
matrix([[ 0.45736434],
[-0.62790698],
[ 0.24031008]])
We can verify our result:
```python
A * x
```
matrix([[-1.],
[ 2.],
[ 1.]])
---
Exercise $\quad$ Solve the matrix equation $Bx=b$ (a solution sketch follows below).
---
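A possible solution sketch for the exercise above (not in the original notebook), reusing the matrices `B` and `b` defined earlier:
```python
# Solve B x = b and check the result (illustration only)
x_B = np.linalg.solve(B, b)
print(x_B)
print(B * x_B)   # should reproduce b
```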
# Computing the inverse and the determinant
The inverse matrix can be computed as follows.
```python
# compute the inverse with inv
Ainv = np.linalg.inv(A)
Ainv
```
matrix([[-0.20930233, 0.51162791, -0.7751938 ],
[ 0.37209302, -0.46511628, 0.6744186 ],
[-0.09302326, 0.11627907, -0.08527132]])
Let's verify that $A^{-1}A=\mathbb{1}$.
```python
A * Ainv
```
matrix([[ 1.00000000e+00, 2.77555756e-17, 3.05311332e-16],
[ -2.08166817e-16, 1.00000000e+00, -2.08166817e-16],
[ 5.55111512e-17, -5.55111512e-17, 1.00000000e+00]])
It is a little hard to read, but up to floating-point error this is [[1,0,0],[0,1,0],[0,0,1]].
The determinant is computed as follows.
```python
# determinant
np.linalg.det(A)
```
-129.00000000000009
---
Exercise $\quad$ Compute the inverse and the determinant of the matrix $B$ (a solution sketch follows below).
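A possible solution sketch for this last exercise (not in the original notebook):
```python
# Inverse and determinant of B (illustration only)
Binv = np.linalg.inv(B)
print(Binv)
print(B * Binv)            # should be close to the identity matrix
print(np.linalg.det(B))
```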
| 4a74bb475fb60a232c495d3de786b65d365be035 | 12,815 | ipynb | Jupyter Notebook | linear_numpy_short.ipynb | kalz2q/-yjupyternotebooks | ba37ac7822543b830fe8602b3f611bb617943463 | [
"MIT"
]
| 1 | 2021-09-16T03:45:19.000Z | 2021-09-16T03:45:19.000Z | linear_numpy_short.ipynb | kalz2q/-yjupyternotebooks | ba37ac7822543b830fe8602b3f611bb617943463 | [
"MIT"
]
| null | null | null | linear_numpy_short.ipynb | kalz2q/-yjupyternotebooks | ba37ac7822543b830fe8602b3f611bb617943463 | [
"MIT"
]
| null | null | null | 23.131769 | 240 | 0.349824 | true | 1,663 | Qwen/Qwen-72B | 1. YES
2. YES | 0.939913 | 0.79053 | 0.74303 | __label__yue_Hant | 0.221336 | 0.56464 |
## Fibonacci numbers
The Fibonacci numbers are defined recursively by the following difference equation:
\begin{equation}
\left\{
\begin{aligned}
F_{n} & = F_{n-1} + F_{n-2} \\
F_1 & = 1 \\
F_0 & = 0 \\
\end{aligned}
\right.
\end{equation}
It is easy to compute the first few elements in the sequence:
$0, 1, 1, 2, 3, 5, 8, 13, 21, 34 \cdots $
<!-- PELICAN_END_SUMMARY -->
## Derivation of the general formula
It is possible to derive a general formula for $F_n$ without computing all the previous numbers in the sequence. If a geometric series (i.e. a series with a constant ratio between consecutive terms $r^n$) is to solve the difference equation, we must have
\begin{aligned}
r^n = r^{n-1} + r^{n-2} \\
\end{aligned}
which is equivalent to
\begin{aligned}
r^2 = r + 1 \\
\end{aligned}
This equation has two unique solutions
\begin{aligned}
\varphi = & \frac{1 + \sqrt{5}}{2} \approx 1.61803\cdots \\
\psi = & \frac{1 - \sqrt{5}}{2} = 1 - \varphi = - {1 \over \varphi} \approx -0.61803\cdots \\
\end{aligned}
In particular the larger root is known as the _golden ratio_
\begin{align}
\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.61803\cdots
\end{align}
Now, since both roots solve the difference equation for Fibonacci numbers, any linear combination of the two sequences also solves it
\begin{aligned}
a \left(\frac{1 + \sqrt{5}}{2}\right)^n + b \left(\frac{1 - \sqrt{5}}{2}\right)^n \\
\end{aligned}
It's not hard to see that all Fibonacci numbers must be of this general form because we can uniquely solve for $a$ and $b$ such that the initial conditions of $F_1 = 1$ and $F_0 = 0$ are met
\begin{equation}
\left\{
\begin{aligned}
F_0 = 0 = a \left(\frac{1 + \sqrt{5}}{2}\right)^0 + b \left(\frac{1 - \sqrt{5}}{2}\right)^0 \\
F_1 = 1 = a \left(\frac{1 + \sqrt{5}}{2}\right)^1 + b \left(\frac{1 - \sqrt{5}}{2}\right)^1 \\
\end{aligned}
\right.
\end{equation}
yielding
\begin{equation}
\left\{
\begin{aligned}
a = \frac{1}{\sqrt{5}} \\
b = \frac{-1}{\sqrt{5}} \\
\end{aligned}
\right.
\end{equation}
We have therefore derived the general formula for the $n$-th Fibonacci number
\begin{aligned}
F_n = \frac{1}{\sqrt{5}} \left(\frac{1 + \sqrt{5}}{2}\right)^n - \frac{1}{\sqrt{5}} \left(\frac{1 - \sqrt{5}}{2}\right)^n \\
\end{aligned}
Since the second term has an absolute value smaller than $1$, we can see that the ratios of Fibonacci numbers converge to the golden ratio
\begin{aligned}
\lim_{n \rightarrow \infty} \frac{F_n}{F_{n-1}} = \frac{1 + \sqrt{5}}{2}
\end{aligned}
## Various implementations in Python
Writing a function in Python that outputs the $n$-th Fibonacci number seems simple enough. However even in this simple case one should be aware of some of the computational subtleties in order to avoid common pitfalls and improve efficiency.
### Common pitfall #1: inefficient recursion
Here's a very straight-forward recursive implementation
```python
import math
from __future__ import print_function
```
```python
def fib_recursive(n):
if n == 0:
return 0
elif n == 1:
return 1
else:
return fib_recursive(n-1) + fib_recursive(n-2)
```
```python
print([fib_recursive(i) for i in range(20)])
```
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181]
this seems to work fine, however the recursion overhead is actually very significant when $n$ is just slightly large. Here I'm computing $F_{34}$ and it takes more than 3 seconds! (on a 2013 model Macbook Air)
```python
%timeit fib_recursive(34)
```
1 loops, best of 3: 3.58 s per loop
The overhead incurred by creating a large number of stack frames is tremendous. Python by default does not perform what's known as tail recursion elimination http://stackoverflow.com/questions/13543019/why-is-recursion-in-python-so-slow, and therefore this is a very inefficient implementation. In contrast, if we have an iterative implementation, the speed is dramatically faster
```python
def fib_iterative(n):
a, b = 0, 1
while n > 0:
a, b = b, a + b
n -= 1
return a
```
```python
%timeit fib_iterative(34)
```
100000 loops, best of 3: 4.59 µs per loop
Now, let's see if we can make it even faster by eliminating the loop altogether and just go straight to the general formula we derived earlier
```python
def fib_formula(n):
golden_ratio = (1 + math.sqrt(5)) / 2
val = (golden_ratio**n - (1 - golden_ratio)**n) / math.sqrt(5)
return int(round(val))
```
```python
%timeit fib_formula(34)
```
1000000 loops, best of 3: 1.36 µs per loop
Even faster, great! And since we are not looping anymore, we should expect the computation time to scale better as $n$ increases. That's indeed what we see:
```python
import pandas as pd
import numpy as np
```
```python
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.core.pylabtools import figsize
figsize(15, 5)
```
```python
elapsed = {}
elapsed['iterative'] = {}
elapsed['formula'] = {}
for i in range(34):
result = %timeit -n 10000 -q -o fib_iterative(i)
elapsed['iterative'][i] = result.best
result = %timeit -n 10000 -q -o fib_formula(i)
elapsed['formula'][i] = result.best
```
```python
elapased_ms = pd.DataFrame(elapsed) * 1000
elapased_ms.plot(title='time taken to compute the n-th Fibonaccis number')
plt.ylabel('time taken (ms)')
plt.xlabel('n')
```
Indeed as we expect, the iterative approach scales linearly, while the formula approach is basically constant time.
However we need to be careful with using a numerical formula like this for getting integer results.
### Common pitfall #2: numerical precision
Here we compare the actual values obtained by `fib_iterative()` and `fib_formula()`. Notice that it does not take a very large `n` for us to run into numerical precision issues.
When `n` is 71 we are starting to get different results from the two implementations!
```python
df = {}
df['iterative'] = {}
df['formula'] = {}
df['diff'] = {}
for i in range(100):
df['iterative'][i] = fib_iterative(i)
df['formula'][i] = fib_formula(i)
df['diff'][i] = df['formula'][i] - df['iterative'][i]
df = pd.DataFrame(df, columns=['iterative', 'formula', 'diff'])
df.index.name = 'n-th Fibonacci'
df.loc[68:74]
```
| n-th Fibonacci | iterative | formula | diff |
|---|---|---|---|
| 68 | 72723460248141 | 72723460248141 | 0 |
| 69 | 117669030460994 | 117669030460994 | 0 |
| 70 | 190392490709135 | 190392490709135 | 0 |
| 71 | 308061521170129 | 308061521170130 | 1 |
| 72 | 498454011879264 | 498454011879265 | 1 |
| 73 | 806515533049393 | 806515533049395 | 2 |
| 74 | 1304969544928657 | 1304969544928660 | 3 |
You can see that `fib_iterative()` produces the correct result by eyeballing the sum of $F_{69}$ and $F_{70}$, while `fib_formula()` starts to have precision errors as the number gets larger. So, be mindful with precision issues when doing numerical computing. Here's a nice article on this topic http://www.codeproject.com/Articles/25294/Avoiding-Overflow-Underflow-and-Loss-of-Precision
Also notice that unlike C/C++, in Python there's technically no limit in the precision of its integer representation. In Python 2 any overflowing operation on `int` is automatically converted into `long`, and `long` has arbitrary precision. In Python 3 it is just `int`. More information on Python's arbitrary-precision integers can be found here http://stackoverflow.com/questions/9860588/maximum-value-for-long-integer
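If you need results that are both fast and exact, one option (not discussed in the original post) is the fast-doubling method, which uses the identities $F_{2n} = F_n(2F_{n+1} - F_n)$ and $F_{2n+1} = F_n^2 + F_{n+1}^2$ together with Python's arbitrary-precision integers:

```python
def fib_fast_doubling(n):
    """Exact n-th Fibonacci number in O(log n) arithmetic operations."""
    def fib_pair(k):
        # returns the pair (F(k), F(k+1))
        if k == 0:
            return (0, 1)
        a, b = fib_pair(k // 2)
        c = a * (2 * b - a)     # F(2m)
        d = a * a + b * b       # F(2m+1)
        return (c, d) if k % 2 == 0 else (d, c + d)
    return fib_pair(n)[0]

print(fib_fast_doubling(71))                        # 308061521170129
print(fib_fast_doubling(71) == fib_iterative(71))   # True, and exact for any n
```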
| e44db84d693f5c7af7794fe4b7a5ad73e04f9ae2 | 52,342 | ipynb | Jupyter Notebook | blog/Fibonacci_numbers_in_python.ipynb | mortada/notebooks | 12fd6a5fc1430efee63889f6cb709d8e94bde602 | [
"Apache-2.0"
]
| 5 | 2015-06-16T23:48:03.000Z | 2021-05-29T09:39:32.000Z | blog/Fibonacci_numbers_in_python.ipynb | mortada/notebooks | 12fd6a5fc1430efee63889f6cb709d8e94bde602 | [
"Apache-2.0"
]
| null | null | null | blog/Fibonacci_numbers_in_python.ipynb | mortada/notebooks | 12fd6a5fc1430efee63889f6cb709d8e94bde602 | [
"Apache-2.0"
]
| 3 | 2015-06-18T17:55:08.000Z | 2020-04-05T08:59:48.000Z | 94.480144 | 36,806 | 0.818845 | true | 2,602 | Qwen/Qwen-72B | 1. YES
2. YES | 0.907312 | 0.934395 | 0.847788 | __label__eng_Latn | 0.950882 | 0.808029 |
# Symbolic computation with the sympy library
This notebook walks through the basic features of sympy, Python's symbolic computation library.
```python
import sympy
```
```python
z**2
```
```python
sympy.var('z') # declare the variable first
```
z
```python
z**2 # now we can compute with it
```
z**2
```python
True^True # in Python, ^ is reserved for the XOR operation
```
False
```python
True^False
```
True
```python
z**(1/2)
```
z**0.5
Above, Python evaluates the division inside the parentheses first,
so it becomes a plain Python float before SymPy gets a chance to turn it into a SymPy object.
The fix is to use sympify() or the Rational class.
```python
z**(sympy.sympify(1)/2)
```
sqrt(z)
```python
z**sympy.Rational(1/2)
```
sqrt(z)
Declaring variables:
```python
from sympy import symbols
x, y, z = symbols('x y z')
u, v, w = symbols('u v w')
symbols('j0:7')
```
(j0, j1, j2, j3, j4, j5, j6)
```python
g, h = symbols('h g')
```
```python
g
```
h
```python
h
```
g
Variable assignment does not create a relation between expressions.
```python
from sympy import Symbol
a=Symbol('a') # a refers to the symbol a
```
```python
b=a+1 # b refers to the expression a+1
```
```python
print(b)
```
a + 1
```python
a=4 # now a points to the integer 4, not to the symbol a
```
```python
print(a)
```
4
```python
print(b) # b still points to the expression a+1
```
a + 1
```python
from sympy.abc import p
```
```python
sympy.sqrt(p**2) # SymPy does not simplify this, because the result could be p or -p
```
sqrt(p**2)
```python
p=Symbol('p', positive=True) # now the result is unambiguous
```
```python
sympy.sqrt(p**2)
```
p
Testing equality: Eq and ==
```python
from sympy import symbols
x, y = symbols('x y')
sympy.Eq(x,y) # symbolic equality is written with Eq
```
Eq(x, y)
```python
(x-1)**2 == x**2-2*x+1 # == tests structural (symbolic) equality
```
False
```python
(x-1)**2 == (x-1)**2 # == tests structural (symbolic) equality
```
True
To test semantic equality, subtract one expression from the other and use expand or simplify. Also remember trigsimp, powsimp, logcombine, radsimp, and together.
```python
from sympy import simplify
simplify((x-1)**2-(x**2-2*x+1))
```
0
```python
from sympy import sin, cos, simplify, expand, powsimp, trigsimp, logcombine, radsimp, together
sinilauseke1 = sin(2*x)-2*sin(x)*cos(x)
```
```python
simplify(sinilauseke1)
```
0
```python
expand(sinilauseke1, trig=True)
```
0
```python
from sympy.abc import a,b
expand((a-b)**4) # compute (a-b)^4
```
a**4 - 4*a**3*b + 6*a**2*b**2 - 4*a*b**3 + b**4
```python
expand(2*x+3*y, complex=True) # enable complex expansion
```
2*re(x) + 3*re(y) + 2*I*im(x) + 3*I*im(y)
```python
expand(cos(x+y)**2, trig=True) # enable trigonometric expansion
```
sin(x)**2*sin(y)**2 - 2*sin(x)*sin(y)*cos(x)*cos(y) + cos(x)**2*cos(y)**2
```python
expand(cos(x+2*y), trig=True) # enable trigonometric expansion
```
-2*sin(x)*sin(y)*cos(y) + 2*cos(x)*cos(y)**2 - cos(x)
```python
simplify((x**3+y**2*x)/x) # simplifying an expression
```
x**2 + y**2
```python
from sympy import I
expand((4+3*I)**(4+3*I)) # expanding a complex-valued expression
```
336*I*(4 + 3*I)**(3*I) - 527*(4 + 3*I)**(3*I)
Exercise: compute (x-y)^4 and simplify sin(x)/cos(x) (a solution sketch follows below).
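A possible solution sketch (not part of the original notebook), using the symbols and imports already defined above:
```python
# Exercise sketch: expand (x-y)^4 and simplify sin(x)/cos(x)
print(expand((x - y)**4))
print(simplify(sin(x)/cos(x)))   # tan(x)
```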
```python
powsimp(x**a*x**b) # combine powers
```
x**(a + b)
```python
from sympy import pprint # pretty printing
pprint((3-x**(2*x)/(x+1)))
```
```python
together(x**6-17*x**4+8*x**2+5+3*x**5+22*x**3-8*x+6)
```
x**6 + 3*x**5 - 17*x**4 + 22*x**3 + 8*x**2 - 8*x + 11
```python
together(1/a+1/b)
```
(a + b)/(a*b)
Plots
```python
import matplotlib.pyplot
%matplotlib inline
```
```python
sympy.plot(sympy.sin(x))
```
```python
sympy.plot(sympy.sin(x)/x)
```
```python
sympy.plot(sympy.exp(-x)*sympy.sin(x**2), (x, 0, 1)) # here the plotting range is restricted
```
```python
sympy.plot(sympy.exp(2*x))
```
```python
sympy.plot(sympy.log(2/x))
```
Differentiation, differentials, and integration
```python
from sympy import diff, exp
diff(cos(2*x),x)
```
-2*sin(2*x)
```python
k = symbols('k', integer=True)
diff(exp(k*x),x)
```
k*exp(k*x)
```python
from sympy import log
diff(log(1/x),x)
```
-1/x
Higher-order derivatives
```python
diff(cos(2*x),x,3)
```
8*sin(2*x)
```python
diff(exp(k*x),x,2)
```
k**2*exp(k*x)
```python
from sympy import erf
diff(erf(x),x)
```
2*exp(-x**2)/sqrt(pi)
It is also handy to define expressions and then differentiate them:
```python
expr = x**2*sympy.sin(sympy.log(x))
```
```python
sympy.diff(expr,x)
```
2*x*sin(log(x)) + x*cos(log(x))
Partial derivatives:
```python
expr2 = x*sympy.cos(y**2 + x)
```
```python
sympy.diff(expr2, x, 2, y, 3)
```
4*y*(-2*x*y**2*sin(x + y**2) + 3*x*cos(x + y**2) + 4*y**2*cos(x + y**2) + 6*sin(x + y**2))
An unevaluated derivative:
```python
sympy.Derivative(expr2, x, 2, y, 3)
```
Derivative(x*cos(x + y**2), (x, 2), (y, 3))
```python
sympy.Derivative(expr2, x, 2, y, 3).doit()
```
4*y*(-2*x*y**2*sin(x + y**2) + 3*x*cos(x + y**2) + 4*y**2*cos(x + y**2) + 6*sin(x + y**2))
Integration:
```python
from sympy import integrate
integrate(5*x**5,x)
```
5*x**6/6
```python
integrate(3*exp(3*x),x)
```
exp(3*x)
```python
integrate(cos(x),x)
```
sin(x)
```python
integrate(log(x),x)
```
x*log(x) - x
```python
integrate(exp(-x**2)*erf(x), x)
```
sqrt(pi)*erf(x)**2/4
```python
sympy.integrate(sympy.exp(-(x+y))*sympy.cos(x)*sympy.sin(y), x, y)
```
-exp(-x)*exp(-y)*sin(x)*sin(y)/4 - exp(-x)*exp(-y)*sin(x)*cos(y)/4 + exp(-x)*exp(-y)*sin(y)*cos(x)/4 + exp(-x)*exp(-y)*cos(x)*cos(y)/4
Define an expression and operate on it:
```python
integrand=sympy.log(x)**2
```
```python
sympy.integrate(integrand, x)
```
x*log(x)**2 - 2*x*log(x) + 2*x
Series expansions:
```python
from sympy import series
series(cos(x), x)
```
1 - x**2/2 + x**4/24 + O(x**6)
```python
series(erf(x),x)
```
2*x/sqrt(pi) - 2*x**3/(3*sqrt(pi)) + x**5/(5*sqrt(pi)) + O(x**6)
```python
series(exp(x),x)
```
1 + x + x**2/2 + x**3/6 + x**4/24 + x**5/120 + O(x**6)
```python
series(1/cos(x), x)
```
1 + x**2/2 + 5*x**4/24 + O(x**6)
SymPy as a calculator
```python
v, w, u = sympy.symbols('v, w, u')
n = sympy.symbols('n', integer=True)
from sympy import Mul, Pow, Rational, pi, N, oo
```
```python
Mul(3, Rational(2,4))
```
3/2
```python
Mul(3, Rational(2,4), evaluate=False)
```
3/2
```python
Pow(x*y,2)
```
x**2*y**2
```python
Pow(x*y,2, evaluate=False)
```
(x*y)**2
```python
pi*2
```
2*pi
```python
exp(1)
```
E
```python
pi.evalf()
```
3.14159265358979
```python
pi.evalf(5)
```
3.1416
```python
N(pi)
```
3.14159265358979
```python
N(pi, 5)
```
3.1416
```python
print(exp(1))
```
E
```python
oo>999999999
```
True
```python
oo/2
```
oo
Compute
- the square root of two to 50 decimal places,
- the 26th decimal of pi,
- an approximate value of pi + e.
One possible solution sketch is given after this list.
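A sketch of one possible solution, reusing `evalf` and `N` from the examples above; note that the precision argument counts significant digits rather than decimal places:
```python
from sympy import sqrt, pi, E, N
N(sqrt(2), 50)    # the square root of two to 50 significant digits
pi.evalf(30)      # enough digits to read off the 26th decimal of pi
N(pi + E)         # a numerical approximation of pi + e
```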
Limits
```python
from sympy import limit, sin, exp, oo
limit(sin(x)/x,x,0)
```
1
```python
limit(exp(x)/x, x, oo)
```
oo
```python
limit((1+1/x)**x,x,oo)
```
E
Definite integral:
```python
integrand=sympy.log(x)**2
```
```python
sympy.integrate(integrand, (x, 1, 10))
```
-20*log(10) + 18 + 10*log(10)**2
```python
sympy.integrate(sympy.exp(-x), (x, 0, sympy.oo))
```
1
Multiple integral:
```python
integrate(exp(-x**2-y**2),(x,-oo,oo),(y,-oo,oo))
```
pi
We can also create an unevaluated Integral object. Its value can be computed later.
```python
sympy.Integral(integrand, (x, 1, 10))
```
Integral(log(x)**2, (x, 1, 10))
```python
sympy.Integral(integrand, (x, 1, 10)).doit()
```
-20*log(10) + 18 + 10*log(10)**2
Equation solving
```python
a, b, c, x = sympy.symbols(('a', 'b', 'c', 'x'))
```
```python
quadr_eq = sympy.Eq(a*x**2+b*x+c, 0)
```
```python
sympy.solve(quadr_eq)
```
[{a: -(b*x + c)/x**2}]
```python
sympy.solve(quadr_eq, x)
```
[(-b + sqrt(-4*a*c + b**2))/(2*a), -(b + sqrt(-4*a*c + b**2))/(2*a)]
Lesson: if nothing else is specified, SymPy solves with respect to the first symbol.
```python
from sympy import solve
solve(x**4 - 1, x)
```
[-1, 1, -I, I]
```python
solve(x**2+2*x-1,x)
```
[-1 + sqrt(2), -sqrt(2) - 1]
The answers above in square brackets [] are lists. Lists can contain different kinds of objects, and they can be modified.
```python
from sympy import roots
roots(x**2+2*x-1,x)
```
{-sqrt(2) - 1: 1, -1 + sqrt(2): 1}
```python
solve([x + 5*y - 2, -3*x + 6*y - 15], [x, y])
```
{x: -3, y: 1}
The answers above, in turn, are dictionaries. They are unordered collections without repeated keys.
Note: solve -> list, roots -> dictionary.
```python
solve(exp(x) + 1, x) # Euler's formula, Euler's identity
```
[I*pi]
Solve the equation
(x-1)^4 = 4^4
One possible solution sketch is given below.
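Here is one way to do it with `solve` (a sketch; the real roots are 5 and -3, and there is a complex pair 1 ± 4i):
```python
solve((x-1)**4 - 4**4, x)
```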
Let us continue solving the equation quadr_eq:
```python
ratkaisut=sympy.solve(quadr_eq, x)
```
```python
xplus=ratkaisut[0]
xminus=ratkaisut[1]
```
```python
xplus_arvot=xplus.subs([(a,1),(b,2),(c,3)])
```
```python
xplus_arvot
```
-1 + sqrt(2)*I
Other variables can also be substituted:
```python
sympy.var('z0')
xminus_arvot=xminus.subs([(b,a), (c,a+z0)])
```
```python
xminus_arvot
```
-(a + sqrt(a**2 - 4*a*(a + z0)))/(2*a)
```python
xminus_arvot.simplify()
```
-(a + sqrt(-a*(3*a + 4*z0)))/(2*a)
Solveset is the recommended way to solve equations. It reports each solution only once; to find multiplicities, it is better to use the roots function:
```python
from sympy import solveset, roots
solveset(x**3-6*x**2+9*x,x)
```
{0, 3}
```python
roots(x**3-6*x**2+9*x,x)
```
{0: 1, 3: 2}
Note also the difference between the two following outputs.
```python
solveset(exp(x), x) # The equation e^x = 0 has no solutions.
```
EmptySet()
```python
solveset(cos(x)-x,x) # SymPy cannot find a closed-form solution for this equation.
```
ConditionSet(x, Eq(-x + cos(x), 0), Complexes(Reals x Reals, False))
Let us also define some functions:
```python
from sympy import Function
f, g = symbols('f g', cls=Function)
```
```python
f(x)
```
f(x)
```python
f(x).diff(x)
```
Derivative(f(x), x)
Factoring:
```python
poly = x**4 - 3*x**2 + 1
```
```python
from sympy import factor
factor(poly)
```
(x**2 - x - 1)*(x**2 + x - 1)
```python
factor(poly,modulus=5)
```
(x - 2)**2*(x + 2)**2
Boolean expressions
```python
from sympy import satisfiable
satisfiable(x & y)
```
{x: True, y: True}
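For contrast, a short sketch with a formula that cannot be satisfied:
```python
satisfiable(x & ~x)   # returns False for an unsatisfiable formula
```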
| e53ee13591150405e32e19fe8545ea465b05baf2 | 130,468 | ipynb | Jupyter Notebook | symbolinen-laskenta.ipynb | juhanurmonen/symbolinen-laskenta | a7936f96eca2ffe38ae1d93345cb4862e00ffd42 | [
"MIT"
]
| null | null | null | symbolinen-laskenta.ipynb | juhanurmonen/symbolinen-laskenta | a7936f96eca2ffe38ae1d93345cb4862e00ffd42 | [
"MIT"
]
| null | null | null | symbolinen-laskenta.ipynb | juhanurmonen/symbolinen-laskenta | a7936f96eca2ffe38ae1d93345cb4862e00ffd42 | [
"MIT"
]
| null | null | null | 50.884555 | 29,190 | 0.781195 | true | 4,392 | Qwen/Qwen-72B | 1. YES
2. YES | 0.90599 | 0.893309 | 0.809329 | __label__fin_Latn | 0.776623 | 0.718676 |
# Exercise 4, answers
```python
def f_constrained(x):
return x[0]**2+x[1]**2+x[0]+2*x[1], [], [x[0]+x[1]-1]
```
Problem 1
```python
def alpha(x,f):
(_,ieq,eq) = f(x)
return sum([min([0,ieq_j])**2 for ieq_j in ieq])\
+sum([eq_k**2 for eq_k in eq])
def penalized_function(x,f,r):
return f(x)[0] + r*alpha(x,f)
```
#### Let us solve the penalized problem with the penalty term growing in a loop
```python
import numpy as np
from scipy.optimize import minimize
r = 0
x_old = np.array([float('inf')]*2)
x_new = [0,0]
while np.linalg.norm(x_new-x_old)>0.0001:
res = minimize(lambda x:penalized_function(x,f_constrained,r),
[0,0],method='Nelder-Mead')
x_old = x_new
x_new = np.array(res.x)
r = r+1
print x_new, r
```
[ 0.74340989 0.24336301] 95
## Problem 2
```python
def f_constrained_approx(x,epsilon):
return x[0]**2+x[1]**2+x[0]+2*x[1], [x[0]+x[1]-1+epsilon,\
epsilon-(x[0]+x[1]-1)], []
```
```python
def beta(x,f):
_,ieq,_ = f(x)
try:
value=sum([1/max([0,ieq_j]) for ieq_j in ieq])
except ZeroDivisionError:
value = float("inf")
return value
def function_with_barrier(x,f,r):
return f(x)[0]+r*beta(x,f)
```
```python
import numpy as np
import ad
from scipy.optimize import minimize
r = 1.0
epsilon = 1.
x_old = np.array([float('inf')]*2)
x_new = [1,0]
while np.linalg.norm(x_new-x_old)>0.0001:
g = lambda x: function_with_barrier(x,\
lambda y: f_constrained_approx(y,epsilon),r)
res = minimize(g,x_new,method='Newton-CG',jac=ad.gh(g)[0],\
hess=ad.gh(g)[1])
x_old = x_new
x_new = res.x
r=r/2
epsilon = epsilon/2
print x_new, epsilon, r
```
[ 0.5578683 0.0578683] 0.5 0.5
[ 0.68545665 0.18545665] 0.25 0.25
[ 0.73158293 0.23158293] 0.125 0.125
[ 0.74519333 0.24519333] 0.0625 0.0625
[ 0.74878417 0.24878417] 0.03125 0.03125
[ 0.74969513 0.24969513] 0.015625 0.015625
[ 0.74992373 0.24992373] 0.0078125 0.0078125
[ 0.74998093 0.24998093] 0.00390625 0.00390625
## Problem 3
```python
import numpy as np
def project_vector(A,vector):
    # NOTE: the body of this function was missing from the original cell, so the
    # lines below are an assumed reconstruction: project `vector` onto the null
    # space of A using P = I - A^T (A A^T)^{-1} A, returning a column matrix
    # (which is what the caller below expects when it calls .transpose()).
    A = np.matrix(A)
    v = np.matrix(vector).transpose()
    P = np.identity(v.shape[0]) - A.transpose()*np.linalg.inv(A*A.transpose())*A
    return P*v
```
```python
import numpy as np
import ad
def projected_gradient_method(f,A,start,step,precision):
f_old = float('Inf')
x = np.array(start)
steps = []
f_new = f(x)
while abs(f_old-f_new)>precision:
f_old = f_new
gradient = ad.gh(f)[0](x)
grad_proj = project_vector(A,[-i for i in gradient])#The only changes to steepest..
grad_proj = np.array(grad_proj.transpose())[0] #... descent are here!
x = x+grad_proj*step
f_new = f(x)
steps.append(list(x))
return x,f_new,steps
```
```python
projected_gradient_method(lambda x:f_constrained(x)[0],[[1,1]],[1,0]\
,.5,0.000001)
```
(array([ 0.75, 0.25]), 1.875, [[0.75, 0.25], [0.75, 0.25]])
## Problem 4
Need to show that there exist unique Lagrange multiplier vectors $\lambda^* = (\lambda^*_1,\ldots,\lambda_K^*)$ and $\mu^*=(\mu_1^*,\ldots,\mu_J^*)$ such that
$$
\begin{align}
&\nabla_xL(x,\lambda,\mu) = 0\\
&\mu_j^*\geq0,\text{ for all }j=1,\ldots,J\\
&\mu_j^*g_j(x)=0,\text{for all }j=1,\ldots,J,
\end{align}
$$
where $$L(x,\lambda,\mu) = f(x)- \sum_{j=1}^J\mu_jg_j(x) -\sum_{k=1}^K\lambda_kh_k(x)$$
Now, $f(x) = x_1^2+x_2^2+x_1+2x_2$, there are no inequality constraints $g$, and the only equality constraint is $h(x)=x_1+x_2-1$.
Thus, stability rule becomes $$
\left\{
\begin{align}
2x_1+1-\lambda = 0\\
2x_2+2-\lambda=0.
\end{align}
\right.
$$
We do not have a complementarity rule, since we do not have inequality constraints!
```python
def check_KKT_eqc(x,tol):
l = 2*x[0]+1
if abs(2*x[1]+2-l)<=tol:
print abs(2*x[1]+2-l)
return True
return False
```
```python
check_KKT_eqc([0.74998093,0.24998093],0.000001)
```
0.0
True
| f29b133e010b8edc4e51c8554cd47bc1626d4195 | 8,195 | ipynb | Jupyter Notebook | Exercise 4 answers.ipynb | maeehart/TIES483 | cce5c779aeb0ade5f959a2ed5cca982be5cf2316 | [
"CC-BY-3.0"
]
| 4 | 2019-04-26T12:46:14.000Z | 2021-11-23T03:38:59.000Z | Exercise 4 answers.ipynb | maeehart/TIES483 | cce5c779aeb0ade5f959a2ed5cca982be5cf2316 | [
"CC-BY-3.0"
]
| null | null | null | Exercise 4 answers.ipynb | maeehart/TIES483 | cce5c779aeb0ade5f959a2ed5cca982be5cf2316 | [
"CC-BY-3.0"
]
| 6 | 2016-01-08T16:28:11.000Z | 2021-04-10T05:18:10.000Z | 22.513736 | 176 | 0.473093 | true | 1,464 | Qwen/Qwen-72B | 1. YES
2. YES | 0.833325 | 0.822189 | 0.68515 | __label__eng_Latn | 0.364705 | 0.430165 |
# Worksheet 1
```
%matplotlib inline
```
```
import ngcm_utils as ngcm
```
The first worksheet covers basic topics in linear algebra. There is also a basic question on nonlinear root-finding.
## Question 1
Write down the 1, 2 and $\infty$ vector norms of
\begin{equation}
{\bf v}_1 = \begin{pmatrix} 1 \\ 3 \\ -1 \end{pmatrix}, \quad {\bf v}_2 = \begin{pmatrix} 1 \\ -2 \end{pmatrix}, \quad {\bf v}_3 = \begin{pmatrix} 1 \\ 6 \\ -3 \\ 1 \end{pmatrix}.
\end{equation}
### Answer Question 1
We know that the 1-norm is the sum of the absolute values, so
\begin{align}
| {\bf v}_{1} |_1 &= 1 + 3 + 1 = 5, \\
| {\bf v}_{2} |_1 &= 1 + 2 = 3, \\
| {\bf v}_{3} |_1 &= 1 + 6 + 3 + 1 = 11.
\end{align}
The 2-norm is the square root of the sum of the squares, so
\begin{align}
| {\bf v}_{1} |_2 &= \sqrt{1^2 + 3^2 + 1^2} = \sqrt{11} \approx 3.3166, \\
| {\bf v}_{2} |_2 &= \sqrt{1^2 + 2^2} = \sqrt{5} \approx 2.2361, \\
| {\bf v}_{3} |_2 &= \sqrt{1^2 + 6^2 + 3^2 + 1^2} = \sqrt{47} \approx 6.8557.
\end{align}
The $\infty$-norm is the maximum absolute value, so
\begin{align}
| {\bf v}_{1} |_{\infty} &= 3, \\
| {\bf v}_{2} |_{\infty} &= 2, \\
| {\bf v}_{3} |_{\infty} &= 6.
\end{align}
Next we calculate the norms numerically using python, as a cross-check. Use numpy for this, via its linear algebra subpackage `numpy.linalg`.
```
import numpy as np
import numpy.linalg as la
v1 = np.array([1.0, 3.0, -1.0])
v2 = np.array([1.0, -2.0])
v3 = np.array([1.0, 6.0, -3.0, 1.0])
for norm in [1, 2, np.inf]:
for v in [v1, v2, v3]:
print("The {0} norm of {1} is {2:.3}".\
format(norm, v, la.norm(v, norm)))
```
The 1 norm of [ 1. 3. -1.] is 5.0
The 1 norm of [ 1. -2.] is 3.0
The 1 norm of [ 1. 6. -3. 1.] is 11.0
The 2 norm of [ 1. 3. -1.] is 3.32
The 2 norm of [ 1. -2.] is 2.24
The 2 norm of [ 1. 6. -3. 1.] is 6.86
The inf norm of [ 1. 3. -1.] is 3.0
The inf norm of [ 1. -2.] is 2.0
The inf norm of [ 1. 6. -3. 1.] is 6.0
To show how this would be done in Matlab, we use the Octave mode here (note that Octave is a free software Matlab clone: not as full featured, but using very similar syntax).
```
%load_ext octavemagic
```
```octave
%%octave
v1 = [1, 3, -1];
v2 = [1, -2];
v3 = [1, 6, -3, 1];
printf("The 1 norm of v1 is %f.\n", norm(v1, 1))
printf("The 1 norm of v2 is %f.\n", norm(v2, 1))
printf("The 1 norm of v3 is %f.\n", norm(v3, 1))
printf("The 2 norm of v1 is %f.\n", norm(v1, 2))
printf("The 2 norm of v2 is %f.\n", norm(v2, 2))
printf("The 2 norm of v3 is %f.\n", norm(v3, 2))
printf("The inf norm of v1 is %f.\n", norm(v1, "inf"))
printf("The inf norm of v2 is %f.\n", norm(v2, "inf"))
printf("The inf norm of v3 is %f.\n", norm(v3, "inf"))
```
The 1 norm of v1 is 5.000000.
The 1 norm of v2 is 3.000000.
The 1 norm of v3 is 11.000000.
The 2 norm of v1 is 3.316625.
The 2 norm of v2 is 2.236068.
The 2 norm of v3 is 6.855655.
The inf norm of v1 is 3.000000.
The inf norm of v2 is 2.000000.
The inf norm of v3 is 6.000000.
## Question 2
Find the 1 and $\infty$ norms of
\begin{equation}
A_1 = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \quad A_2 = \begin{pmatrix} -3 & 2 \\ 3 & 6 \end{pmatrix}.
\end{equation}
### Answer Question 2
The 1-norm of a matrix is the maximum of the 1-norms of the **column** vectors. For $A_1$ the 1-norms of the column vectors are 4 and 6 respectively. For $A_2$ they are 6 and 8 respectively. So we have
\begin{align}
\| A_1 \|_1 &= 6, \\
\| A_2 \|_1 & = 8.
\end{align}
The $\infty$ norm of a matrix is the maximum of the 1-norms of the **row** vectors. For $A_1$ the 1-norms of the row vectors are 3 and 7 respectively. For $A_2$ they are 5 and 9 respectively. So we have
\begin{align}
\| A_1 \|_{\infty} &= 7, \\
\| A_2 \|_{\infty} & = 9.
\end{align}
It is equally straightforward to repeat this calculation with python, using the same approach.
```
A1 = np.array([[ 1.0, 2.0],[3.0, 4.0]])
A2 = np.array([[-3.0, 2.0],[3.0, 6.0]])
for norm in [1, np.inf]:
for A in [A1, A2]:
print("The {0} norm of \n{1}\nis {2:.3}\n".\
format(norm, A, la.norm(A, norm)))
```
The 1 norm of
[[ 1. 2.]
[ 3. 4.]]
is 6.0
The 1 norm of
[[-3. 2.]
[ 3. 6.]]
is 8.0
The inf norm of
[[ 1. 2.]
[ 3. 4.]]
is 7.0
The inf norm of
[[-3. 2.]
[ 3. 6.]]
is 9.0
Again, using Matlab syntax we would do the following.
```octave
%%octave
A1 = [ 1 2; 3 4];
A2 = [-3 2; 3 6];
printf("The 1 norm of A1 is %f.\n", norm(A1, 1))
printf("The 1 norm of A2 is %f.\n", norm(A2, 1))
printf("The inf norm of A1 is %f.\n", norm(A1, "inf"))
printf("The inf norm of A2 is %f.\n", norm(A2, "inf"))
```
The 1 norm of A1 is 6.000000.
The 1 norm of A2 is 8.000000.
The inf norm of A1 is 7.000000.
The inf norm of A2 is 9.000000.
## Question 3
Find the condition number of the above matrices.
### Answer Question 3
The analytic calculation of the condition number requires the norm of the inverse matrices, which is **not** the inverse of the norm of the matrix. This is, of course, not useful for the numerical work.
The inverse matrices are
\begin{align}
A_1^{-1} & = - \frac{1}{2} \begin{pmatrix} 4 & -2 \\ -3 & 1 \end{pmatrix}, \\
A_2^{-1} & = - \frac{1}{24} \begin{pmatrix} 6 & -2 \\ -3 & -3 \end{pmatrix}.
\end{align}
It follows that the matrix norms of the inverse matrices are
\begin{align}
\| A_1^{-1} \|_1 & = \frac{7}{2}, & \| A_2^{-1} \|_1 & = \frac{3}{8}, \\
\| A_1^{-1} \|_{\infty} & = 3, & \| A_2^{-1} \|_{\infty} & = \frac{1}{3}.
\end{align}
Therefore the condition numbers with respect to the 1-norm are
\begin{align}
K(A_1) & = \| A_1 \|_1 \| A_1^{-1} \|_1 \\ & = 21, \\
K(A_2) & = \| A_2 \|_1 \| A_2^{-1} \|_1 \\ & = 3,
\end{align}
and the condition numbers with respect to the $\infty$-norm are
\begin{align}
K(A_1) & = \| A_1 \|_{\infty} \| A_1^{-1} \|_{\infty} \\ & = 21, \\
K(A_2) & = \| A_2 \|_{\infty} \| A_2^{-1} \|_{\infty} \\ & = 3.
\end{align}
Note that it is chance that, in this case, they are identical: this will not be true in general.
This suggests that, if the solution of a linear system is needed as part of a numerical method, any intrinsic errors will be increased more by the matrix $A_1$ than by the matrix $A_2$, by a factor $\sim 10$.
When calculating the condition number numerically, the Singular Value Decomposition (SVD) is typically used, which does not require inverting the matrix. Calculating the condition number is therefore fast, and it should be done before any matrix operation to check the conditioning of the matrix. Within python, this is straightforward.
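As a small sketch of the idea, the 2-norm condition number is the ratio of the largest to the smallest singular value, so it can be read straight off the SVD without forming an inverse:
```
import numpy as np
import numpy.linalg as la
A1 = np.array([[ 1.0, 2.0],[3.0, 4.0]])
singular_values = la.svd(A1, compute_uv=False)
# The ratio of the extreme singular values agrees with la.cond(A1, 2)
print("2-norm condition number of A1 via the SVD: {0:.4}".format(
    singular_values.max() / singular_values.min()))
```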
```
A1 = np.array([[ 1.0, 2.0],[3.0, 4.0]])
A2 = np.array([[-3.0, 2.0],[3.0, 6.0]])
for norm in [1, 2, np.inf]:
for A in [A1, A2]:
print("The condition number with respect to the"\
" {0} norm of\n{1}\nis {2:.3}\n"\
.format(norm, A, la.cond(A, norm)))
```
The condition number with respect to the 1 norm of
[[ 1. 2.]
[ 3. 4.]]
is 21.0
The condition number with respect to the 1 norm of
[[-3. 2.]
[ 3. 6.]]
is 3.0
The condition number with respect to the 2 norm of
[[ 1. 2.]
[ 3. 4.]]
is 14.9
The condition number with respect to the 2 norm of
[[-3. 2.]
[ 3. 6.]]
is 1.89
The condition number with respect to the inf norm of
[[ 1. 2.]
[ 3. 4.]]
is 21.0
The condition number with respect to the inf norm of
[[-3. 2.]
[ 3. 6.]]
is 3.0
Again, similar functions exist in Matlab.
```octave
%%octave
A1 = [ 1 2; 3 4];
A2 = [-3 2; 3 6];
printf("The condition number with respect to the 1 norm of A1 is %f.\n", \
cond(A1, 1))
printf("The condition number with respect to the 1 norm of A2 is %f.\n", \
cond(A2, 1))
printf("The condition number with respect to the 2 norm of A1 is %f.\n", \
cond(A1, 2))
printf("The condition number with respect to the 2 norm of A2 is %f.\n", \
cond(A2, 2))
printf("The condition number with respect to the inf norm of A1 is %f.\n", \
cond(A1, "inf"))
printf("The condition number with respect to the inf norm of A2 is %f.\n", \
cond(A2, "inf"))
```
The condition number with respect to the 1 norm of A1 is 21.000000.
The condition number with respect to the 1 norm of A2 is 3.000000.
The condition number with respect to the 2 norm of A1 is 14.933034.
The condition number with respect to the 2 norm of A2 is 1.886618.
The condition number with respect to the inf norm of A1 is 21.000000.
The condition number with respect to the inf norm of A2 is 3.000000.
## Question 4
Explain the difference between direct and indirect methods for solving linear systems. Give an example of when the latter may be more useful.
*This is a standard part of an exam question: see, e.g., 07/08*.
### Answer Question 4
*Direct methods* consist of a finite list of transformations of the original matrix of the coefficients that reduce the linear systems to one that is easily solved.
*Indirect* or *iterative* methods consist of algorithms that specify a series of steps, possibly infinite, that lead closer and closer to the solution; there may not be a guarantee that they ever exactly reach it. This may not seem a very desirable feature until we remember that we cannot in any case perfectly represent an exact solution: most iterative methods provide us with a highly accurate solution in relatively few iterations.
Large, sparse matrices are ideally solved using iterative methods.
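As an illustrative sketch only (it assumes SciPy's sparse module is available, and the tridiagonal matrix is just a stand-in for a genuinely large sparse system):
```
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A sparse, symmetric positive definite tridiagonal system
n = 1000
A = sp.diags([-np.ones(n-1), 2.0*np.ones(n), -np.ones(n-1)], [-1, 0, 1], format='csr')
b = np.ones(n)

# Conjugate gradients: an iterative (indirect) method that never forms the inverse
x, info = spla.cg(A, b)
print("Converged flag: {0}, residual norm: {1:.3}".format(info, np.linalg.norm(A.dot(x) - b)))
```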
## Coding Question 1
For each of the matrices above, work out their transpose and inverse using standard python commands.
### Answer Coding Question 1
```
A1 = np.array([[ 1.0, 2.0],[3.0, 4.0]])
A2 = np.array([[-3.0, 2.0],[3.0, 6.0]])
for A in [A1, A2]:
print("The transpose of \n{0}\nis \n{1}\n".format(A, np.transpose(A)))
print("The inverse of \n{0}\nis \n{1}\n".format(A, la.inv(A)))
```
The transpose of
[[ 1. 2.]
[ 3. 4.]]
is
[[ 1. 3.]
[ 2. 4.]]
The inverse of
[[ 1. 2.]
[ 3. 4.]]
is
[[-2. 1. ]
[ 1.5 -0.5]]
The transpose of
[[-3. 2.]
[ 3. 6.]]
is
[[-3. 3.]
[ 2. 6.]]
The inverse of
[[-3. 2.]
[ 3. 6.]]
is
[[-0.25 0.08333333]
[ 0.125 0.125 ]]
Once again, we can use Matlab functions to do this.
```octave
%%octave
A1 = [ 1 2; 3 4];
A2 = [-3 2; 3 6];
printf("The transpose of A1 is \n")
disp(A1.')
printf("The inverse of A1 is \n")
disp(inv(A1))
printf("The transpose of A2 is \n")
disp(A2.')
printf("The inverse of A2 is \n")
disp(inv(A2))
```
The transpose of A1 is
1 3
2 4
The inverse of A1 is
-2.00000 1.00000
1.50000 -0.50000
The transpose of A2 is
-3 3
2 6
The inverse of A2 is
-0.25000 0.08333
0.12500 0.12500
## Coding Question 3
Note that coding question 2 has been effectively answered above, in the "theory" section.
Write a function that takes a matrix, computes its condition number, and reports whether the matrix is suitably well-conditioned. The choice of criteria is up to you.
### Answer Coding Question 3
As we are now starting to use functions, we could define them in the notebook, or in a separate file. Here I have used a separate file: `Worksheet1_Functions.py`. The function is defined as follows.
```
ngcm.highlight_source("Worksheet1_Functions", "MatrixConditionCheck")
```
def MatrixConditionCheck(A, MaxConditionNumber = 10.0):
    """Check the condition number of a matrix.

    Only write output to screen if the condition number is too high.
    Should return something, really."""

    ConditionNumber = la.cond(A)
    if ConditionNumber > MaxConditionNumber:
        print("The condition number of the matrix\n{0}\n"\
              "is too large (i.e., it is {1:.4} which is larger"\
              " than {2:.4}).\n".\
              format(A, ConditionNumber, MaxConditionNumber))
We can then import the module and check the function using the earlier input matrices.
```
import Worksheet1_Functions as w1
A1 = np.array([[ 1.0, 2.0],[3.0, 4.0]])
A2 = np.array([[-3.0, 2.0],[3.0, 6.0]])
w1.MatrixConditionCheck(A1)
w1.MatrixConditionCheck(A2)
```
The condition number of the matrix
[[ 1. 2.]
[ 3. 4.]]
is too large (i.e., it is 14.93 which is larger than 10.0).
Note the Matlab implementation in `Worksheet1_MyConditionNumber.m`.
```
ngcm.highlight_source_matlab("Worksheet1_MyConditionNumber.m")
```
%
% function ConditionNumber = Worksheet1_MyConditionNumber(A)
%
% Computes the condition number of the input matrix A.
%
% Expects a square 2-D matrix A. It will complain if the condition number
% is larger than 10 (this is a ludicrously low bound, but tests that the
% function actually works with the default input).
%
function ConditionNumber = Worksheet1_MyConditionNumber(A)

    % Check the input is reasonable
    if (not(isnumeric(A)))
        error('The input must be a numerical array!');
    elseif (ndims(A)~=2)
        error('The input must be a 2-d array!');
    elseif (size(A,1)~=size(A,2))
        error('The input must be a square 2-d array!');
    end

    % Set up the maximum condition number
    Max_ConditionNumber = 10;

    % Compute the condition number
    ConditionNumber = cond(A);

    if (ConditionNumber > Max_ConditionNumber)
        disp(sprintf('The condition number, %g, of the matrix makes it not suitably well-conditioned (bigger than %g)', ...
            ConditionNumber, Max_ConditionNumber));
    end

end
```octave
%%octave
A1 = [ 1 2; 3 4];
A2 = [-3 2; 3 6];
Worksheet1_MyConditionNumber(A1);
Worksheet1_MyConditionNumber(A2);
```
The condition number, 14.933, of the matrix makes it not suitably well-conditioned (bigger than 10)
## Coding Question 4
Write a bisection method to find the root of
\begin{equation}
f(x) = \tan (x) - e^{-x}, \quad x \in [0, 1].
\end{equation}
### Answer Coding Question 4
Again we have defined the function in the separate file.
```
ngcm.highlight_source("Worksheet1_Functions", "bisection")
```
def bisection(f, interval, tolerance = 1.e-10):
    """General bisection method for a function f of one variable.
    There must be at least one root within the interval.
    Default tolerance (width of the interval) is 1e-10."""

    assert len(interval) == 2

    # Get the endpoints of the interval
    [x_min, x_max] = interval

    # Values at the ends of the domain
    f_min = f(x_min)
    f_max = f(x_max)

    # Check that at least one root lies within the interval
    assert(f_min * f_max < 0.0)

    # The loop
    x_c = (x_min + x_max) / 2.0
    f_c = f(x_c)
    iteration = 0
    while ((x_max - x_min > tolerance) and \
           (np.abs(f_c) > tolerance) and \
           (iteration < 100)):
        iteration = iteration+1
        if f_min * f_c < 0.0:
            x_max = x_c
            f_max = f_c
        else:
            x_min = x_c
            f_min = f_c
        x_c = (x_min + x_max) / 2.0
        f_c = f(x_c)

    print("The root is approximately {0} where "\
          "f is {1:.4} (tolerance {2:.4})".format(x_c, f_c, tolerance))

    return x_c
We have already imported the module. Also within that module is the function whose root we are trying to find.
```
ngcm.highlight_source("Worksheet1_Functions", "fn_worksheet1_q4")
```
def fn_worksheet1_q4(x):
    """Simple function defined in question, f(x) = tan(x) - exp(-x)."""

    return np.tan(x) - np.exp(-x)
Test the bisection method using various tolerances.
```
# Now find the root.
x_root = w1.bisection(w1.fn_worksheet1_q4, [0, 1])
# Try changing the precision to see what difference it makes
x_root_e6 = w1.bisection(w1.fn_worksheet1_q4, [0, 1], 1e-6)
x_root_e6 = w1.bisection(w1.fn_worksheet1_q4, [0, 1], 1e-15)
```
The root is approximately 0.531390856602 where f is -9.652e-11 (tolerance 1e-10)
The root is approximately 0.531391143799 where f is 5.551e-07 (tolerance 1e-06)
The root is approximately 0.531390856652 where f is 1.11e-16 (tolerance 1e-15)
A quick and dirty Matlab implementation of the bisection algorithm follows. Note that it is not implemented as a function, but directly as a script: it would be straightforward, in a similar fashion to the python above, to implement it as a function.
```octave
%%octave
% Set up the tolerance in the width of the interval
tolerance = 1e-15;
% Set up the initial bracketing interval
x_lo = 0;
x_hi = 1;
% Set up the function and the initial function values.
f = @(x)(tan(x) - exp(-x));
f_lo = f(x_lo);
f_hi = f(x_hi);
% Set up the bisection loop.
x_guess = (x_lo + x_hi) / 2;
f_guess = f(x_guess);
% Loop until the root is bracketed within the tolerance.
while (x_hi - x_lo > tolerance)
if (f_lo * f_guess < 0)
% The root is between the guessed value and the lower bound
x_hi = x_guess;
f_hi = f_guess;
else
% The root is between the guessed value and the upper bound
x_lo = x_guess;
f_lo = f_guess;
end
% Set the new bisection estimate
x_guess = (x_lo + x_hi) / 2;
f_guess = f(x_guess);
end
printf("The root is approximately %f where f is %g.\n", x_guess, f_guess)
```
The root is approximately 0.531391 where f is -7.77156e-16.
```
from IPython.core.display import HTML
def css_styling():
styles = open("../../IPythonNotebookStyles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
> (The cell above executes the style for this notebook. It closely follows the style used in the [12 Steps to Navier Stokes](http://lorenabarba.com/blog/cfd-python-12-steps-to-navier-stokes/) course.)
| 1de63cc705c1c7f9cd09cea2d65b8383ec43a74f | 64,245 | ipynb | Jupyter Notebook | Worksheets/Worksheet1_Notebook.ipynb | indranilsinharoy/NumericalMethods | 989e0205565131057c9807ed9d55b6c1a5a38d42 | [
"MIT"
]
| 1 | 2021-12-01T09:15:04.000Z | 2021-12-01T09:15:04.000Z | Worksheets/Worksheet1_Notebook.ipynb | indranilsinharoy/NumericalMethods | 989e0205565131057c9807ed9d55b6c1a5a38d42 | [
"MIT"
]
| null | null | null | Worksheets/Worksheet1_Notebook.ipynb | indranilsinharoy/NumericalMethods | 989e0205565131057c9807ed9d55b6c1a5a38d42 | [
"MIT"
]
| null | null | null | 42.322134 | 448 | 0.488054 | true | 13,786 | Qwen/Qwen-72B | 1. YES
2. YES | 0.899121 | 0.849971 | 0.764227 | __label__eng_Latn | 0.631122 | 0.613888 |
# Linear Equations
The equations in the previous lab included one variable, for which you solved the equation to find its value. Now let's look at equations with multiple variables. For reasons that will become apparent, equations with two variables are known as linear equations.
## Solving a Linear Equation
Consider the following equation:
\begin{equation}2y + 3 = 3x - 1 \end{equation}
This equation includes two different variables, **x** and **y**. These variables depend on one another; the value of x is determined in part by the value of y and vice-versa; so we can't solve the equation and find absolute values for both x and y. However, we *can* solve the equation for one of the variables and obtain a result that describes a relative relationship between the variables.
For example, let's solve this equation for y. First, we'll get rid of the constant on the right by adding 1 to both sides:
\begin{equation}2y + 4 = 3x \end{equation}
Then we'll use the same technique to move the constant on the left to the right to isolate the y term by subtracting 4 from both sides:
\begin{equation}2y = 3x - 4 \end{equation}
Now we can deal with the coefficient for y by dividing both sides by 2:
\begin{equation}y = \frac{3x - 4}{2} \end{equation}
Our equation is now solved. We've isolated **y** and defined it as <sup>3x-4</sup>/<sub>2</sub>
While we can't express **y** as a particular value, we can calculate it for any value of **x**. For example, if **x** has a value of 6, then **y** can be calculated as:
\begin{equation}y = \frac{3\cdot6 - 4}{2} \end{equation}
This gives the result <sup>14</sup>/<sub>2</sub> which can be simplified to 7.
You can view the values of **y** for a range of **x** values by applying the equation to them using the following Python code:
```python
import pandas as pd
# Create a dataframe with an x column containing values from -10 to 10
df = pd.DataFrame ({'x': range(-10, 11)})
# Add a y column by applying the solved equation to x
df['y'] = (3*df['x'] - 4) / 2
#Display the dataframe
df
```
     x     y
0  -10 -17.0
1   -9 -15.5
2   -8 -14.0
3   -7 -12.5
4   -6 -11.0
5   -5  -9.5
6   -4  -8.0
7   -3  -6.5
8   -2  -5.0
9   -1  -3.5
10   0  -2.0
11   1  -0.5
12   2   1.0
13   3   2.5
14   4   4.0
15   5   5.5
16   6   7.0
17   7   8.5
18   8  10.0
19   9  11.5
20  10  13.0
We can also plot these values to visualize the relationship between x and y as a line. For this reason, equations that describe a relative relationship between two variables are known as *linear equations*:
```python
%matplotlib inline
from matplotlib import pyplot as plt
plt.plot(df.x, df.y, color="grey", marker = "o")
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.show()
```
In a linear equation, a valid solution is described by an ordered pair of x and y values. For example, valid solutions to the linear equation above include:
- (-10, -17)
- (0, -2)
- (9, 11.5)
The cool thing about linear equations is that we can plot the points for some specific ordered pair solutions to create the line, and then interpolate the x value for any y value (or vice-versa) along the line.
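For example, here is a quick sketch of interpolating the x value for a chosen y by rearranging the solved equation into x = (2y + 4)/3:

```python
y_value = 7
x_value = (2*y_value + 4) / 3
print(x_value)   # 6.0, matching the ordered pair (6, 7) used earlier
```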
## Intercepts
When we use a linear equation to plot a line, we can easily see where the line intersects the X and Y axes of the plot. These points are known as *intercepts*. The *x-intercept* is where the line intersects the X (horizontal) axis, and the *y-intercept* is where the line intersects the Y (vertical) axis.
Let's take a look at the line from our linear equation with the X and Y axis shown through the origin (0,0).
```python
plt.plot(df.x, df.y, color="grey")
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
## add axis lines for 0,0
plt.axhline()
plt.axvline()
plt.show()
```
The x-intercept is the point where the line crosses the X axis, and at this point, the **y** value is always 0. Similarly, the y-intercept is where the line crosses the Y axis, at which point the **x** value is 0. So to find the x-intercept, we need to solve the equation for **x** when **y** is 0.
For the x-intercept, our equation looks like this:
\begin{equation}0 = \frac{3x - 4}{2} \end{equation}
Which can be reversed to make it look more familar with the x expression on the left:
\begin{equation}\frac{3x - 4}{2} = 0 \end{equation}
We can multiply both sides by 2 to get rid of the fraction:
\begin{equation}3x - 4 = 0 \end{equation}
Then we can add 4 to both sides to get rid of the constant on the left:
\begin{equation}3x = 4 \end{equation}
And finally we can divide both sides by 3 to get the value for x:
\begin{equation}x = \frac{4}{3} \end{equation}
Which simplifies to:
\begin{equation}x = 1\frac{1}{3} \end{equation}
So the x-intercept is 1<sup>1</sup>/<sub>3</sub> (approximately 1.333).
To get the y-intercept, we solve the equation for y when x is 0:
\begin{equation}y = \frac{3\cdot0 - 4}{2} \end{equation}
Since 3 x 0 is 0, this can be simplified to:
\begin{equation}y = \frac{-4}{2} \end{equation}
-4 divided by 2 is -2, so:
\begin{equation}y = -2 \end{equation}
This gives us our y-intercept, so we can plot both intercepts on the graph:
```python
plt.plot(df.x, df.y, color="grey")
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
## add axis lines for 0,0
plt.axhline()
plt.axvline()
plt.annotate('x-intercept',(1.333, 0))
plt.annotate('y-intercept',(0,-2))
plt.show()
```
The ability to calculate the intercepts for a linear equation is useful, because you can calculate only these two points and then draw a straight line through them to create the entire line for the equation.
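As a small sketch, the segment through just those two intercept points already lies on the line (extending it in both directions reproduces the full plot above):

```python
from matplotlib import pyplot as plt
x_int = 4.0/3   # x-intercept
y_int = -2      # y-intercept
plt.plot([0, x_int], [y_int, 0], color="grey", marker="o")
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.axhline()
plt.axvline()
plt.show()
```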
## Slope
It's clear from the graph that the line from our linear equation describes a slope in which values increase as we travel up and to the right along the line. It can be useful to quantify the slope in terms of how much **y** increases (or decreases) for a given change in **x**. In the notation for this, we use the greek letter Δ (*delta*) to represent change:
\begin{equation}slope = \frac{\Delta{y}}{\Delta{x}} \end{equation}
Sometimes slope is represented by the variable ***m***, and the equation is written as:
\begin{equation}m = \frac{y_{2} - y_{1}}{x_{2} - x_{1}} \end{equation}
Although this form of the equation is a little more verbose, it gives us a clue as to how we calculate slope. What we need is any two ordered pairs of x,y values for the line - for example, we know that our line passes through the following two points:
- (0,-2)
- (6,7)
We can take the x and y values from the first pair, and label them x<sub>1</sub> and y<sub>1</sub>; and then take the x and y values from the second point and label them x<sub>2</sub> and y<sub>2</sub>. Then we can plug those into our slope equation:
\begin{equation}m = \frac{7 - -2}{6 - 0} \end{equation}
This is the same as:
\begin{equation}m = \frac{7 + 2}{6 - 0} \end{equation}
That gives us the result <sup>9</sup>/<sub>6</sub> which is 1<sup>1</sup>/<sub>2</sub> or 1.5 .
So what does that actually mean? Well, it tells us that for every change of **1** in x, **y** changes by 1<sup>1</sup>/<sub>2</sub> or 1.5. So if we start from any point on the line and move one unit to the right (along the X axis), we'll need to move 1.5 units up (along the Y axis) to get back to the line.
You can plot the slope onto the original line with the following Python code to verify it fits:
```python
plt.plot(df.x, df.y, color="grey")
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.axhline()
plt.axvline()
# set the slope
m = 1.5
# get the y-intercept
yInt = -2
# plot the slope from the y-intercept for 1x
mx = [0, 1]
my = [yInt, yInt + m]
plt.plot(mx,my, color='red', lw=5)
plt.show()
```
### Slope-Intercept Form
One of the great things about algebraic expressions is that you can write the same equation in multiple ways, or *forms*. The *slope-intercept form* is a specific way of writing a 2-variable linear equation so that the equation definition includes the slope and y-intercept. The generalised slope-intercept form looks like this:
\begin{equation}y = mx + b \end{equation}
In this notation, ***m*** is the slope and ***b*** is the y-intercept.
For example, let's look at the solved linear equation we've been working with so far in this section:
\begin{equation}y = \frac{3x - 4}{2} \end{equation}
Now that we know the slope and y-intercept for the line that this equation defines, we can rewrite the equation as:
\begin{equation}y = 1\frac{1}{2}x + -2 \end{equation}
You can see intuitively that this is true. In our original form of the equation, to find y we multiply x by three, subtract 4, and divide by two - in other words, x is half of 3x - 4; which is 1.5x - 2. So these equations are equivalent, but the slope-intercept form has the advantages of being simpler, and including two key pieces of information we need to plot the line represented by the equation. We know the y-intecept that the line passes through (0, -2), and we know the slope of the line (for every x, we add 1.5 to y.
Let's recreate our set of test x and y values using the slope-intercept form of the equation, and plot them to prove that this describes the same line:
```python
%matplotlib inline
import pandas as pd
from matplotlib import pyplot as plt
# Create a dataframe with an x column containing values from -10 to 10
df = pd.DataFrame ({'x': range(-10, 11)})
# Define slope and y-intercept
m = 1.5
yInt = -2
# Add a y column by applying the slope-intercept equation to x
df['y'] = m*df['x'] + yInt
# Plot the line
from matplotlib import pyplot as plt
plt.plot(df.x, df.y, color="grey")
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
plt.axhline()
plt.axvline()
# label the y-intercept
plt.annotate('y-intercept',(0,yInt))
# plot the slope from the y-intercept for 1x
mx = [0, 1]
my = [yInt, yInt + m]
plt.plot(mx,my, color='red', lw=5)
plt.show()
```
```python
```
| 27f39bd77b5d1c587b5144ce70f8f1fd22b518a3 | 83,186 | ipynb | Jupyter Notebook | MathsToML/Module01-Equations, Graphs, and Functions/01-02-Linear Equations.ipynb | hpaucar/data-mining-repo | d0e48520bc6c01d7cb72e882154cde08020e1d33 | [
"MIT"
]
| null | null | null | MathsToML/Module01-Equations, Graphs, and Functions/01-02-Linear Equations.ipynb | hpaucar/data-mining-repo | d0e48520bc6c01d7cb72e882154cde08020e1d33 | [
"MIT"
]
| null | null | null | MathsToML/Module01-Equations, Graphs, and Functions/01-02-Linear Equations.ipynb | hpaucar/data-mining-repo | d0e48520bc6c01d7cb72e882154cde08020e1d33 | [
"MIT"
]
| null | null | null | 431.015544 | 14,558 | 0.894826 | true | 3,573 | Qwen/Qwen-72B | 1. YES
2. YES | 0.939913 | 0.90053 | 0.84642 | __label__eng_Latn | 0.997367 | 0.80485 |
# Tutorial rápido de Python para Matemáticos
© Ricardo Miranda Martins, 2022 - http://www.ime.unicamp.br/~rmiranda/
## Índice
1. [Introdução](1-intro.html)
2. [Python é uma boa calculadora!](2-calculadora.html) [(código fonte)](2-calculadora.ipynb)
3. [Resolvendo equações](3-resolvendo-eqs.html) [(código fonte)](3-resolvendo-eqs.ipynb)
4. [Gráficos](4-graficos.html) [(código fonte)](4-graficos.ipynb)
5. **[Sistemas lineares e matrizes](5-lineares-e-matrizes.html)** [(código fonte)](5-lineares-e-matrizes.ipynb)
6. [Limites, derivadas e integrais](6-limites-derivadas-integrais.html) [(código fonte)](6-limites-derivadas-integrais.ipynb)
7. [Equações diferenciais](7-equacoes-diferenciais.html) [(código fonte)](7-equacoes-diferenciais.ipynb)
# Resolvendo sistemas lineares
Resolver sistemas lineares é o ganha pão de matemáticos. Aliás, você sabe por quê dedicamos tanto tempo ensinando alunos a resolverem sistemas lineares? Simples: é que não sabemos direito resolver os não-lineares.
Existem basicamente duas formas do Python resolver sistemas lineares: pelo comando tradicional ```solve``` do SymPy ou pelo comando específico para sistemas lineares, o ```linsolve```. Essa segunda forma é mais rápida se o sistema for muito grande. No entanto, como você já deve imaginar, resolver sistemas lineares usando o NumPy é ainda mais rápido do que usando o SymPy.
Para sistemas muito grandes (muito mais que 1000 equações/variáveis), existem outros pacotes mais eficientes - se for seu caso, imagino que você não esteja aprendendo Python aqui nesse tutorial.
A rotina abaixo resolve o sistema
$$\left\{\begin{array}{lcl}
x+y-2z&=&1,\\
x+y-z&=&3,\\
3x+y-z&=&3
\end{array}\right.$$
usando o ```linsolve``` do SymPy.
```python
import sympy as sp
x, y, z = sp.symbols('x y z')
eq1=sp.Eq(x+y-2*z,1)
eq2=sp.Eq(x+y-z,3)
eq3=sp.Eq(3*x+y-z,3)
sp.linsolve((eq1,eq2,eq3),(x,y,z))
```
$\displaystyle \left\{\left( 0, \ 5, \ 2\right)\right\}$
Se quisermos a solução num outro formato, podemos usar o ```solve```:
```python
sp.solve((eq1,eq2,eq3),(x,y,z))
```
{x: 0, y: 5, z: 2}
## Um exercício legal para testar nossos conhecimentos
Sejam $P=(\alpha_1,\beta_1,\gamma_1)$, $Q=(\alpha_2,\beta_2,\gamma_2)$ e $R=(\alpha_3,\beta_3,\gamma_3)$ pontos não-colineares em $\mathbb R^3.$ Encontre a equação do plano $\pi$, da forma $ax+by+cz=d$, que contém estes três pontos.
Primeiro vamos a uma solução "algébrica", levando em conta a equação do plano. Vamos usar os valores $P=(0,0,1)$, $Q=(3,1,0)$ e $R=(0,2,2)$ no exemplo abaixo.
```python
import sympy as sp
# define as variaveis e constantes que vamos usar
x, y, z = sp.symbols('x y z')
a, b, c, d = sp.symbols('a b c d')
# equacao do plano na forma geral
eq = sp.Eq(a*x+b*y+c*z,d)
# definindo os pontos
P=sp.Array([0,0,1])
Q=sp.Array([3,1,0])
R=sp.Array([0,2,2])
# substituindo x,y,z na equacao do plano pelos pontos
eq1=eq.subs([(x,P[0]),(y,P[1]),(z,P[2])])
eq2=eq.subs([(x,Q[0]),(y,Q[1]),(z,Q[2])])
eq3=eq.subs([(x,R[0]),(y,R[1]),(z,R[2])])
# as equacoes acima resultam num sistema nas variaveis a, b, c, d.
# resolvendo o sistema:
sol=sp.linsolve((eq1,eq2,eq3),(a,b,c,d))
#armazenando as solucoes
(A,B,C,D)=tuple(*sol)
# finalmente, exibindo a equação do plano, substituindo a,b,c,d pelos
# valores que encontramos
eq.subs([(a,A),(b,B),(c,C),(d,D)])
```
$\displaystyle \frac{d x}{2} - \frac{d y}{2} + d z = d$
A solução acima fica na dependência de uma constante, no caso $d$. Pode-se substituir qualquer valor para $d$.
Agora vamos a uma solução um pouco mais geométrica. Vamos obter o vetor normal do plano como sendo o produto vetorial $n=u\times v$, onde $u=PQ$ e $v=PR$. A seguir, obtemos a equação do plano utilizando um dos pontos por onde ele passa. A solução é mais longa, porém é bem mais elegante e evita o aparecimento de constantes indesejadas.
```python
# importando a biblioteca para calculos simbolicos
import sympy as sp
# vamos usar a biblioteca NumPy para calcular o produto vetorial e tambem
# o produto interno
import numpy as np
# define as variaveis e constantes que vamos usar
x, y, z = sp.symbols('x y z')
a, b, c, d = sp.symbols('a b c d')
# equacao do plano na forma geral
eq = sp.Eq(a*x+b*y+c*z,d)
# definindo os pontos
P=sp.Array([0,0,1])
Q=sp.Array([3,1,0])
R=sp.Array([0,2,2])
# criando os vetores
u=Q-P
v=R-P
# obtendo o vetor normal
n=np.cross(u,v)
(a,b,c)=n
# termo nao-homogeneo
d=np.dot(n,P)
# exibindo a equação do plano
sp.Eq(a*x+b*y+c*z,d)
```
$\displaystyle 3 x - 3 y + 6 z = 6$
# Matrizes
Claro que poderíamos ter mesclado os capítulos sobre matrizes e sistemas lineares. Só deixamos separado para poder explorar um pouco melhor as propriedades desses retângulos cheios de números que nós adoramos.
Aliás, você sabe de onde vem a regra estranha de multiplicar matrizes? Vem da composição de operadores lineares. O Python trabalha com matrizes muito bem. Só devemos ter um pouco de cuidado com a notação.
Vamos fixar a notação e trabalhar abaixo com as matrizes $$A=\begin{pmatrix}1&2\\ -3&1\end{pmatrix},$$ $$B=\begin{pmatrix}2\\ 4\end{pmatrix}$$ e
$$C=\begin{pmatrix}2&5\\1&1\end{pmatrix}$$
em nossos exemplos.
```python
import sympy as sp
A=sp.Matrix([[1,2],[-3,1]])
B=sp.Matrix([[2],[4]])
C=sp.Matrix([[2,5],[4,10]])
```
```python
A
```
$\displaystyle \left[\begin{matrix}1 & 2\\-3 & 1\end{matrix}\right]$
```python
B
```
$\displaystyle \left[\begin{matrix}2\\4\end{matrix}\right]$
```python
C
```
$\displaystyle \left[\begin{matrix}2 & 5\\4 & 10\end{matrix}\right]$
Algumas matrizes especiais podem ser criadas de modo mais simples. Por exemplo, a identidade $n\times n$ pode ser criada com o comando ```eye(n)```.
```python
sp.eye(2)
```
$\displaystyle \left[\begin{matrix}1 & 0\\0 & 1\end{matrix}\right]$
Já a matriz nula $m\times n$ pode ser criada com o comando ```zeros(m,n)```:
```python
sp.zeros(2,2)
```
$\displaystyle \left[\begin{matrix}0 & 0\\0 & 0\end{matrix}\right]$
Podemos também criar de modo muito fácil uma matriz $m\times n$ só com 1's com o comando ```ones(m,n)```:
```python
sp.ones(2,2)
```
$\displaystyle \left[\begin{matrix}1 & 1\\1 & 1\end{matrix}\right]$
O produto de matrizes é feito com o mesmo símbolo usual: para obter o produto de A por B, o comando é ```A*B```:
```python
A*C
```
$\displaystyle \left[\begin{matrix}10 & 25\\-2 & -5\end{matrix}\right]$
```python
A*B
```
$\displaystyle \left[\begin{matrix}10\\-2\end{matrix}\right]$
Lembre-se que nem sempre dá para multiplicar duas matrizes - só podemos multiplicar uma matriz $m\times n$ por uma $n\times r$. Se tentarmos com matrizes de dimensões "estranhas", vai dar erro:
```python
B*A
```
Lembre-se também que o produto de matrizes não é comutativo, ou seja, $A\cdot C$ pode ser diferente de $C\cdot A$, ou seja, $AC-CA$ pode ser diferente de zero:
```python
A*C-C*A
```
$\displaystyle \left[\begin{matrix}23 & 16\\24 & -23\end{matrix}\right]$
Se $A$ é uma matriz quadrada, dizemos que ela é invertível se existe outra matriz $D$ tal que $AD=DA=I_{2\times 2}$. Essa matriz $D$ é chamada de matriz inversa de $A$ e denotada por $D=A^{-1}$. Nem toda matriz admite inversa, mas quando a inversa existe ela pode ser calculada no Python com o comando ```A**(-1)```:
```python
A**(-1)
```
$\displaystyle \left[\begin{matrix}\frac{1}{7} & - \frac{2}{7}\\\frac{3}{7} & \frac{1}{7}\end{matrix}\right]$
Uma propriedade das matrizes que tem inversa é que ela tem o determinante diferente de zero. O determinante é uma "propriedade" da matriz, então podemos "resgatar" o determinante de $A$ com o sufixo ```A.det()```:
```python
A.det()
```
$\displaystyle 7$
A mesma lógica é seguida se quisermos os autovalores ou autovetores da matriz:
```python
A.eigenvals()
```
{1 - sqrt(6)*I: 1, 1 + sqrt(6)*I: 1}
```python
A.eigenvects()
```
[(1 - sqrt(6)*I,
1,
[Matrix([
[sqrt(6)*I/3],
[ 1]])]),
(1 + sqrt(6)*I,
1,
[Matrix([
[-sqrt(6)*I/3],
[ 1]])])]
Se você achou a saída do comando acima um pouco feia, talvez prefira usar o ```pprint``` (pretty print):
```python
sp.init_printing(use_unicode=True)
A.eigenvects()
```
Agora sim, dá pra entender né? A saída é composta pelo autovalor, sua multiplicidade, e o(s) autovetor(es) associado(s). Vamos fazer um exemplo $3\times 3$ que tem um autovalor de multiplicidade 2 para ilustrar melhor isso:
```python
M=sp.Matrix([[3,0,0],[0,3,0],[0,1,1]])
M.eigenvects()
```
Determinar a solução do sistema linear $AX=0$ é o mesmo que calcular o "núcleo" da matriz $A$. Isso pode ser feito com o sufixo ```nullspace()```. Vamos fazer isso para as matrizes $A$ e $C$ para comparar os resultados:
```python
A.nullspace()
```
```python
C.nullspace()
```
Note que o resultado acima nos permite concluir que a matriz $C$ tem um autovalor nulo, e o autovetor associado é justamente a resposta anterior, que é o gerador do núcleo de $C$. Quer conferir? Então toma:
```python
C.eigenvects()
```
Também podemos obter os vetores que são geradores da imagem de $A$, ou seja, se considerarmos o operador $T_A:\mathbb R^2\rightarrow\mathbb R^2$, $T_A(v)\mapsto Av$, então o sufixo ```columnspace()``` nos dará os vetores que são geradores da imagem desse operador:
```python
A.columnspace()
```
```python
C.columnspace()
```
Uma coisa que sempre fazemos com matrizes é calcular a sua forma de Jordan, ou, quando possível, diagonalizá-la. Isso pode ser feito com o comando ```diagonalize()```:
```python
A.diagonalize()
```
Esse comando retorna duas matrizes: a primeira delas é a matriz mudança de base; a segunda é a forma diagonal da matriz. Se quisermos usar notações para essas matrizes, vamos denotar por $Q$ a matriz mudança de base e $D$ a matriz diagonal:
```python
Q,D = A.diagonalize()
```
Da teoria, sabemos que $Q\cdot D\cdot Q^{-1}=A$. Em geral, isso só dá certo de usarmos o ```simplify``` no produto matricial do lado esquerdo:
```python
sp.simplify(Q*D*(Q**(-1)))
```
## Um exercício interessante: reconhecimento de cônicas
Considere uma equação quadrática da forma $$ax^2+bxy+cy^2+dx+ey+f=0.$$
Você já deve saber que a representação dos pontos $(x,y)$ que satisfaem essa equação tem uma classificação bem detalhada: basicamente, com exceção de alguns casos degenerados, teremos ou uma elipse ou uma hipérbole ou uma parábola. São as famosas "cônicas", curvas que são obtidas na interseção de um plano com um cone duplo.
O procedimento mais comum usado para decidir qual dos casos temos, e para escrever uma equação mais "simpática" para essa cônica é usar autovalores e autovetores. Vamos fazer isso no exemplo abaixo.
Você pode trocar a equação que deve continuar funcionando (cuidado com algumas mudanças de coordenadas e com divisões por zero que podem aparecer no processo ao alterar os coeficientes).
A ideia do "reconhecimento de cônicas" é razoavelmente simples, mas os cálculos são bem chatos de fazer na mão.
A equação $$ax^2+bxy+cy^2+dx+ey+f=0$$ pode ser reescrita em forma matricial como
$$\begin{pmatrix}x&y
\end{pmatrix}\begin{pmatrix}a&b/2\\ b/2&c\end{pmatrix}\begin{pmatrix}x\\y
\end{pmatrix} +\begin{pmatrix}d&e
\end{pmatrix}\begin{pmatrix}x\\y
\end{pmatrix}+\begin{pmatrix}f\end{pmatrix}=0.$$
Como a matriz $\begin{pmatrix}f\end{pmatrix}$ é $1\times 1$, em geral não iremos usar a notação matricial e escrever somente $f$:
$$\begin{pmatrix}x&y
\end{pmatrix}\begin{pmatrix}a&b/2\\ b/2&c\end{pmatrix}\begin{pmatrix}x\\y
\end{pmatrix} +\begin{pmatrix}d&e
\end{pmatrix}\begin{pmatrix}x\\y
\end{pmatrix}+f=0.$$
Denote por $$A=\begin{pmatrix}a&b/2\\ b/2&c\end{pmatrix}.$$ Essa matriz satisfaz $A=A^t$, então pelo Teorema Espectral provado por Cauchy em 1829, existem matrizes $Q$ e $D$, sendo $D$ uma matriz diagonal, de modo que $$QD Q^{-1}=A,$$ o que é equivalente a $$D=Q^{-1}AQ.$$
Podemos construir as matrizes $Q$ e $D$ assim: a matriz $D$ é a matriz diagonal com os autovalores de $A$ e a matriz $Q$ é uma matriz cujas colunas são autovetores normais (de norma 1) de $A$, satifazendo $Q^t=Q^{-1}$. Após calculá-las, realizamos a mudança de coordenadas
$$\begin{pmatrix}x\\y\end{pmatrix}=Q\begin{pmatrix}u\\v\end{pmatrix}.$$
Substituindo na equação da cônica, temos
$$\left(Q\begin{pmatrix}u\\v\end{pmatrix}\right)^t \begin{pmatrix}a&b/2\\ b/2&c\end{pmatrix}Q\begin{pmatrix}u\\v\end{pmatrix} +\begin{pmatrix}d&e
\end{pmatrix}Q\begin{pmatrix}u\\v\end{pmatrix}+f=0,$$
ou seja,
$$\begin{pmatrix}u&v
\end{pmatrix}Q^t AQ\begin{pmatrix}u\\v\end{pmatrix} +\begin{pmatrix}d&e
\end{pmatrix}Q\begin{pmatrix}u\\v\end{pmatrix}+f=0.$$
Usando que $Q^tAQ=D$ a equação acima fica
$$\begin{pmatrix}u&v
\end{pmatrix}D\begin{pmatrix}u\\v\end{pmatrix} +\begin{pmatrix}d&e
\end{pmatrix}Q\begin{pmatrix}u\\v\end{pmatrix}+f=0.$$
Se $$D=\begin{pmatrix}\lambda&0\\0&\mu\end{pmatrix},$$ onde $\lambda,\mu$ são os autovalores de $A$, então escrevendo novamente a equação da cônica temos
$$\lambda u^2+\mu v^2+\alpha u+\beta v+f=0,$$ para alguns valores de $\alpha,\beta$. O próximo passo é completar quadrados para deixar a equação mais simples. Aqui é preciso tomar cuidado, e a teoria não pode ser tão geral, pois algumas condições degeneradas podem ocorrer ($\lambda$ ou $\mu$ serem zero, por exemplo). Vamos supor que $\lambda\neq 0$, $\mu\neq 0$ e seguir com as contas.
Reescrevemos a equação anterior como
$$\lambda\left( u^2+\dfrac{\alpha u}{\lambda}\right)+\mu \left(v^2+\dfrac{\beta v}{\mu}\right)+f=0.$$
Completando quadrados, temos
$$\lambda\left( u^2+\dfrac{\alpha u}{\lambda}\right)+\mu \left(v^2+\dfrac{\beta v}{\mu}\right)+f=0$$
e daí
$$\lambda\left( u^2+\dfrac{\alpha u}{\lambda}+ \dfrac{\alpha^2}{4\lambda^2} \right)+\mu \left(v^2+\dfrac{\beta v}{\mu}+\dfrac{\beta^2}{4\mu^2}\right)+f-\dfrac{\alpha^2}{4\lambda}-\dfrac{\beta^2}{4\mu}=0.$$
Agrupando tudo, ficamos com a equação
$$\lambda\left( u+\dfrac{\alpha}{2\lambda} \right)^2+\mu \left(v+\dfrac{\beta}{2\mu}\right)^2+\tilde f=0,$$ onde $\tilde f=f-\dfrac{\alpha^2}{4\lambda}-\dfrac{\beta^2}{4\mu}$.
Fazendo uma nova mudança de coordenadas para $w=u+\dfrac{\alpha}{2\lambda}$, $z=v+\dfrac{\beta}{2\mu}$ ficamos com a equacão $$\lambda w^2+\mu z^2+\tilde f=0.$$
Conhecendo os valores de $\lambda,\mu,\tilde f$ poderemos reconhecer a cônica! Vamos agora fazer os cálculos de tudo isso no Python, e depois fazer alguns gráficos.
```python
a=1
b=2
c=-3
d=1
e=2
f=-4
x, y = sp.symbols('x y')
sp.Eq(a*x**2+b*x*y+c*y**2+d*x+e*y+f,0)
```
Vantagens do computador: vamos usar o Python pra pegar uma dica do que esperar lá no fim, plotando os pontos que satisfazem essa equação. O código abaixo é uma versão simplificada do nosso código para plotar curvas de nível (no caso, a curva de nível 0).
```python
import numpy as np
import matplotlib.cm as cm
import matplotlib.pyplot as plt
delta = 0.01
x = np.arange(-4.0, 4.0, delta)
y = np.arange(-4.0, 4.0, delta)
x, y = np.meshgrid(x, y)
z = a*x**2+b*x*y+c*y**2+d*x+e*y+f
fig = plt.figure()
ax = fig.add_subplot()
ax.spines['left'].set_position('center')
ax.spines['bottom'].set_position('center')
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
plt.contour(x, y, z,0)
```
Humm.. me parece que teremos uma hipérbole. Isso significa que podemos esperar um autovalor positivo e um negativo. Vamos começar as contas definindo a matriz A.
Aqui vamos usar um truque: queremos levar todos esses cálculos de forma algébrica, sem aproximações numéricas. Só que na definição da matriz $A$, temos um $b/2$: se digitarmos somente $b/2$, o resultado vai ser em ponto flutuante. Então vamos forçar o SymPy a usar o $b/2$ como sendo um número racional, com o comando ```Rational(b,2)```.
```python
A=sp.Matrix([[a,sp.Rational(b, 2)],[sp.Rational(b, 2),c]])
```
Agora vamos pedir ao Python para diagonalizar a matriz $A$, salvar a matriz diagonal como $D$ e a matriz mudança de base como $Q$:
```python
Q,D = A.diagonalize()
```
Conferindo como estão as matrizes:
```python
Q,D
```
Para que a nossa estratégia de mudança de base para reconhecimento de cônica funcione, precisamos pegar a matriz de mudança de base como sendo uma matriz cujas colunas formam uma base ortonormal de $\mathbb R^2$. No momento temos somente uma base. Vamos então escolher outra matriz $Q$ normalizando as colunas da matriz atual. Isso pode ser feito com o comando abaixo:
```python
n1=sp.simplify(sp.sqrt(Q[0]**2+Q[2]**2))
n2=sp.simplify(sp.sqrt(Q[1]**2+Q[3]**2))
Q2=sp.simplify(sp.Matrix([[Q[0]/n1,Q[1]/n2],[Q[2]/n1,Q[3]/n2]]))
Q=Q2
Q
```
Note que agora a matriz $Q$ satisfaz o que queríamos: $Q^t=Q^{-1}$..
```python
sp.simplify(Q**(-1)-Q.transpose())
```
... e ainda vale $Q^{-1}AQ=D$:
```python
sp.simplify(Q**(-1)*A*Q)
```
Vamos agora ver como ficou a cônica após a primeira mudança de coordenada que fizemos:
```python
# aplicando a mudança de coordenadas; z2 será a nova equacao
x, y = sp.symbols('x y')
z2=sp.simplify(
(sp.Matrix([[x,y]])*(Q**(-1)*A*Q)*sp.Matrix([[x],[y]]))+(sp.Matrix([[d,e]]))*Q*(sp.Matrix([[x],[y]]))+sp.Matrix([[f]])
)[0]
```
```python
sp.init_printing(use_unicode=False)
```
```python
# exibindo a nova equacao, e como vamos usá-la para plotar, passando tudo
# pra ponto flutuante. você vai precisar copiar o resultado desse comando e
# colar na próxima linha de código, no z = ....
z3=z2.evalf()
repr(z3)
```
'1.23606797749979*x**2 + 1.43275483056186*x - 3.23606797749979*y**2 + 1.71674505838791*y - 4.0'
```python
import numpy as np
import matplotlib.cm as cm
import matplotlib.pyplot as plt
delta = 0.01
x = np.arange(-4.0, 4.0, delta)
y = np.arange(-4.0, 4.0, delta)
x, y = np.meshgrid(x, y)
z3 = 1.23606797749979*x**2 + 1.43275483056186*x - 3.23606797749979*y**2 + 1.71674505838791*y - 4.0
fig = plt.figure()
ax = fig.add_subplot()
ax.spines['left'].set_position('center')
ax.spines['bottom'].set_position('center')
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
plt.contour(x, y, z3,0)
```
Olha só, já conseguimos desentortar a hipérbole!! Agora só precisamos da segunda mudança de coordenadas para poder centralizá-la. Vamos precisar calcular $\alpha$, $\beta$ e os autovalores $\lambda$ e $\mu$.
```python
lam = D[0]
mu = D[3]
```
Note que $\alpha$ é como chamamos o coeficiente de $x$ na expressão pós-primeira mudança de coordenadas. Essa expressão pra gente está guardada na variável ```z2```. Como vamos recuperá-la? Bom, uma possibilidade é derivar essa expressão em $x$ e depois fazer todas as outras variáveis iguais a zero (pense um pouco nessa estratégia).
```python
x, y = sp.symbols('x y')
termolinear=sp.simplify((sp.Matrix([[d,e]]))*Q*(sp.Matrix([[x],[y]])))[0]
alfa=sp.diff(termolinear,x)
beta=sp.diff(termolinear,y)
```
Agora já podemos plotar o gráfico final:
```python
# definindo quem é f-til
ftil=f-(alfa**2)/(4*lam)-beta**2/(4*mu)
# resetando as variáveis e escrevendo a equacao final
x, y = sp.symbols('x y')
print((lam*x**2 + mu*y**2 +ftil).evalf())
# plotando
import numpy as np
import matplotlib.cm as cm
import matplotlib.pyplot as plt
delta = 0.01
x = np.arange(-4.0, 4.0, delta)
y = np.arange(-4.0, 4.0, delta)
x, y = np.meshgrid(x, y)
z4 = 1.23606797749979*x**2 - 3.23606797749979*y**2 - 4.1875
fig = plt.figure()
ax = fig.add_subplot()
ax.spines['left'].set_position('center')
ax.spines['bottom'].set_position('center')
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
plt.contour(x, y, z4,0)
```
Por fim, vamos plotar todas as curvas num único sistema de eixos, para você ver o que aconteceu. A curva em vermelho é a original, a curva em azul é após a mudança de coordenadas do tipo rotação, e a curva preta é após a segunda mudança de coordenadas, de translação.
```python
# plotando
import numpy as np
import matplotlib.cm as cm
import matplotlib.pyplot as plt
delta = 0.01
x = np.arange(-4.0, 4.0, delta)
y = np.arange(-4.0, 4.0, delta)
x, y = np.meshgrid(x, y)
z = a*x**2+b*x*y+c*y**2+d*x+e*y+f
z3 = 1.23606797749979*x**2 + 1.43275483056186*x - 3.23606797749979*y**2 + 1.71674505838791*y - 4.0
z4 = 1.23606797749979*x**2 - 3.23606797749979*y**2 - 4.1875
fig = plt.figure()
ax = fig.add_subplot()
ax.spines['left'].set_position('center')
ax.spines['bottom'].set_position('center')
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
plt.contour(x, y, z,0,colors=['red'])
plt.contour(x, y, z3,0,colors=['blue'])
plt.contour(x, y, z4,0,colors=['black'])
plt.show()
```
E a equação final é:
```python
x, y = sp.symbols('x y')
sp.Eq(lam*x**2+mu*y**2+ftil,0)
```
| b0c71fb93e7c36d0303881d55c24cd409fbaefd8 | 170,534 | ipynb | Jupyter Notebook | 5-lineares-e-matrizes.ipynb | rmiranda99/tutorial-math-python | 6fe211f9cd0b8b93d4a0543a690ca124fee6a8b2 | [
"CC-BY-4.0"
]
| null | null | null | 5-lineares-e-matrizes.ipynb | rmiranda99/tutorial-math-python | 6fe211f9cd0b8b93d4a0543a690ca124fee6a8b2 | [
"CC-BY-4.0"
]
| null | null | null | 5-lineares-e-matrizes.ipynb | rmiranda99/tutorial-math-python | 6fe211f9cd0b8b93d4a0543a690ca124fee6a8b2 | [
"CC-BY-4.0"
]
| null | null | null | 98.460739 | 21,136 | 0.82899 | true | 7,188 | Qwen/Qwen-72B | 1. YES
2. YES | 0.90053 | 0.884039 | 0.796104 | __label__por_Latn | 0.987648 | 0.687948 |
## Optimal Power Flow
**Power Systems Optimization**
by Michael R. Davidson, Jesse D. Jenkins and Sambuddha Chakrabarti (last updated: October 7, 2020)
This notebook provides an introduction to the Optimal Power Flow (OPF) problem in electric power systems—which minimizes the short-run production costs of meeting electricity demand at a number of connected locations from a given set of generators subject to various technical and transmission network flow limit constraints. This will be our first treatment of a *network*, which is critical to all power systems.
We will first introduce a model of transmission flows that assumes we can control the flow along each path, in what is called a "**transport model**." This is a straightforward extension to [Economic Dispatch](04-Economic-Dispatch.ipynb) (ED), where we have multiple supply and demand balance constraints at each location or "node" in the network, and a new set of flow constraints between nodes. This is also similar to other common optimization problems such as fleet routing of shipments.
We will then introduce a linear approximation to the optimal power flow problem known as "**DC-OPF**", where we begin to incorporate some of the physics involved in how electricity flows along transmission lines. With this formulation, we recognize that given "injections" (i.e., generation) and "withdrawals" (i.e., demand) of power at each node in the network, flows along lines are not independently controllable. Instead, electrical power flows across transmission lines in relation to their physical properties, namely power flows across parallel circuits or paths in inverse proportion to the [electrical impedance](https://en.wikipedia.org/wiki/Electrical_impedance) of the lines. This can (very frequently) result in hitting flow constraints before we would if we could control power flows across all lines as in the transport problem.
This notebook does not explore the full functionality of DC-OPF, which can include inter-temporal constraints, additional generation constraints (e.g., on voltage), security constraints to ensure stability in the case of contingencies, and network losses.
Full "AC optimal power flow" models are also beyond the scope of this notebook, as the full set of physics associated with the interactions of AC flows introduces non-convexities that make this problem much harder to solve. Due to the non-convex nature of the AC power flow problem, simplified formulations that linearize and approximate these complex constraints are frequently employed in power systems operations, including by electricity system operators, which use a linearized "security-constrained" optimal power flow (which ensures power flows would remain simulatenously feasible across a range of possible contingencies) to clear real-time electricity markets.
We will start off with some simple systems, whose solutions can be worked out manually without resorting to any mathematical optimization model and software. But, eventually we will be solving larger system, thereby emphasizing the importance of such software and mathematical models.
## Introduction to OPF
The Optimal Power Flow (OPF) problem is a power system optimal scheduling problem which captures the physics of electricity flows across electricity networks, adding a layer of complexity and more realism to the Economic Dispatch (ED) problem. OPF usually attempts to capture the entire network topology by representing the transmission line interconnections between different nodes (also known as buses, or locations where generators or demand inject or withdraw power into/from the network) including various electrical parameters, such as resistance, series reactance, shunt admittance, etc. The full alternating current or "AC" OPF is a non-convex problem and turns out to be an extremely hard problem to solve (usually NP-hard). Hence, system operators and power marketers usually go about solving a linearized version of it, called the "DC-OPF." The DC-OPF approximation works satisfactorily for bulk power transmission networks as long as such networks are not operated at the brink of instability or under heavily loaded conditions.
## "Transport" model
We will first examine the case where we allow for transmission but ignore the physics of electricity flows, and instead treat it like transporting an ordinary commodity.
$$
\begin{align}
\min \ & \sum_{g \in G} VarCost_g \times GEN_g & \\
\text{s.t.} & \\
& \sum_{g \in G_i} GEN_g - Demand_i = \sum_{j \in J_i} FLOW_{ij} & \forall \quad i \in \mathcal{N}\\
& FLOW_{ij} \leq MaxFlow_{ij} & \forall \quad i \in \mathcal{N}, \forall j \in J_i \\
& FLOW_{ij} = - FLOW_{ji} & \forall \quad i, j \in \mathcal{N} \\
& GEN_g \leq Pmax_g & \forall \quad g \in G \\
& GEN_g \geq Pmin_g & \forall \quad g \in G
\end{align}
$$
We introduce a few new **sets** in the above:
- $\mathcal{N}$, the set of all nodes (or buses) in the network where generation, storage, or demand (load) are located
- $J_i \subset \mathcal{N}$, the subset of nodes that are connected to node $i$
- $G_i \subset G$, the subset of generators located at node $i$
The **decision variables** in the above problem are:
- $GEN_{g}$, the generation (in MW) produced by each generator, $g$
- $FLOW_{ij}$, the flow (in MW) along the line from $i$ to $j$
The **parameters** are:
- $Pmin_g$, the minimum operating bounds for the generator (based on engineering or natural resource constraints)
- $Pmax_g$, the maximum operating bounds for the generator (based on engineering or natural resource constraints)
- $Demand_i$, the demand (in MW) at bus $i$
- $MaxFlow_{ij}$, the maximum allowable flow along the line from $i$ to $j$
- $VarCost_g$, the variable cost of generator $g$
Notice how the problem above is equivalent to producing a single type of good at a set of factories and shipping them along capacity-limited corridors (roads, rail lines, etc.) to meet a set of demands in other locations.
### 1. Load packages
```julia
import Pkg; Pkg.add("VegaLite"); Pkg.add("PrettyTables")
using JuMP, GLPK
using Plots; plotly();
using VegaLite # to make some nice plots
using DataFrames, CSV, PrettyTables
ENV["COLUMNS"]=120; # Set so all columns of DataFrames and Matrices are displayed
```
    Resolving package versions...
    No Changes to `~/.julia/environments/v1.5/Project.toml`
    No Changes to `~/.julia/environments/v1.5/Manifest.toml`
    Resolving package versions...
    No Changes to `~/.julia/environments/v1.5/Project.toml`
    No Changes to `~/.julia/environments/v1.5/Manifest.toml`
### 2. Load and format data
We will load a modified 3-bus case stored in the [MATPOWER case format](https://matpower.org/docs/ref/matpower5.0/caseformat.html). It consists of:
- two generator buses where 1000 MW generators are located, one with variable cost of 50/MWh and another with variable cost of 100/MWh
- one load bus where 600 MW of demand is located
- three lines connecting the buses, each with a maximum flow of 500 MW
The location and numbering of the components:
```julia
datadir = joinpath("OPF_data")
gen = CSV.read(joinpath(datadir,"gen.csv"), DataFrame);
gencost = CSV.read(joinpath(datadir,"gencost.csv"), DataFrame);
branch = CSV.read(joinpath(datadir,"branch.csv"), DataFrame);
bus = CSV.read(joinpath(datadir,"bus.csv"), DataFrame);
# Rename all columns to lowercase (by convention)
for f in [gen, gencost, branch, bus]
rename!(f,lowercase.(names(f)))
end
# create generator ids
gen.id = 1:nrow(gen);
gencost.id = 1:nrow(gencost);
# create line ids
branch.id = 1:nrow(branch);
# add set of rows for reverse direction with same parameters
branch2 = copy(branch)
branch2.f = branch2.fbus
branch2.fbus = branch.tbus
branch2.tbus = branch2.f
branch2 = branch2[:,names(branch)]
append!(branch,branch2)
# Calculate the susceptance of each line, on the assumption that
# reactance is >> resistance, such that we can approximate
# resistance as = 0 and treat susceptance as the simple
# reciprocal of reactance (x).
# See https://en.wikipedia.org/wiki/Susceptance#Relationship_to_reactance
branch.sus = 1 ./ branch.x
# Here are the buses:
bus
```
<table class="data-frame"><thead><tr><th></th><th>bus_i</th><th>type</th><th>pd</th><th>qd</th><th>gs</th><th>bs</th><th>area</th><th>vm</th><th>va</th><th>basekv</th><th>zone</th><th>vmax</th><th>vmin</th></tr><tr><th></th><th>Int64</th><th>Int64</th><th>Int64</th><th>Float64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Float64</th><th>Float64</th></tr></thead><tbody><p>3 rows × 13 columns</p><tr><th>1</th><td>1</td><td>2</td><td>0</td><td>0.0</td><td>0</td><td>0</td><td>1</td><td>1</td><td>0</td><td>230</td><td>1</td><td>1.1</td><td>0.9</td></tr><tr><th>2</th><td>2</td><td>2</td><td>0</td><td>0.0</td><td>0</td><td>0</td><td>1</td><td>1</td><td>0</td><td>230</td><td>1</td><td>1.1</td><td>0.9</td></tr><tr><th>3</th><td>3</td><td>1</td><td>600</td><td>98.61</td><td>0</td><td>0</td><td>1</td><td>1</td><td>0</td><td>230</td><td>1</td><td>1.1</td><td>0.9</td></tr></tbody></table>
Columns pd and qd indicate the [active and reactive power](https://en.wikipedia.org/wiki/AC_power#Active,_reactive,_and_apparent_power) withdrawal at the bus. (We will ignore qd for this notebook, since we are not considering full AC power flows.) We do not need any of the other columns for our purposes.
```julia
# This is what the generator dataset looks like:
gen
```
<table class="data-frame"><thead><tr><th></th><th>bus</th><th>pg</th><th>qg</th><th>qmax</th><th>qmin</th><th>vg</th><th>mbase</th><th>status</th><th>pmax</th><th>pmin</th><th>pc1</th><th>pc2</th><th>qc1min</th><th>qc1max</th><th>qc2min</th><th>qc2max</th></tr><tr><th></th><th>Int64</th><th>Int64</th><th>Int64</th><th>Float64</th><th>Float64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th></tr></thead><tbody><p>2 rows × 22 columns (omitted printing of 6 columns)</p><tr><th>1</th><td>1</td><td>40</td><td>0</td><td>30.0</td><td>-30.0</td><td>1</td><td>100</td><td>1</td><td>1000</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><th>2</th><td>2</td><td>170</td><td>0</td><td>127.5</td><td>-127.5</td><td>1</td><td>100</td><td>1</td><td>1000</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr></tbody></table>
```julia
# and generator cost dataset:
gencost
```
<table class="data-frame"><thead><tr><th></th><th>model</th><th>startup</th><th>shutdown</th><th>n</th><th>x1</th><th>y1</th><th>id</th></tr><tr><th></th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th></tr></thead><tbody><p>2 rows × 7 columns</p><tr><th>1</th><td>2</td><td>0</td><td>0</td><td>2</td><td>50</td><td>0</td><td>1</td></tr><tr><th>2</th><td>2</td><td>0</td><td>0</td><td>2</td><td>100</td><td>0</td><td>2</td></tr></tbody></table>
In the above, model=2 indicates a polynomial variable cost formulation and the column n=2 indicates that there are two terms. Thus, we have a linear cost (in the x1 column) without any quadratic terms (and a zero constant term):
$$
VarCost_g = x1_g
$$
```julia
# Here are the transmission lines:
branch
```
<table class="data-frame"><thead><tr><th></th><th>fbus</th><th>tbus</th><th>r</th><th>x</th><th>b</th><th>ratea</th><th>rateb</th><th>ratec</th><th>ratio</th><th>angle</th><th>status</th><th>angmin</th><th>angmax</th><th>id</th><th>sus</th></tr><tr><th></th><th>Int64</th><th>Int64</th><th>Float64</th><th>Float64</th><th>Float64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Float64</th></tr></thead><tbody><p>6 rows × 15 columns</p><tr><th>1</th><td>1</td><td>3</td><td>0.00281</td><td>0.0281</td><td>0.00712</td><td>500</td><td>500</td><td>500</td><td>0</td><td>0</td><td>1</td><td>-360</td><td>360</td><td>1</td><td>35.5872</td></tr><tr><th>2</th><td>1</td><td>2</td><td>0.00281</td><td>0.0281</td><td>0.00712</td><td>500</td><td>500</td><td>500</td><td>0</td><td>0</td><td>1</td><td>-360</td><td>360</td><td>2</td><td>35.5872</td></tr><tr><th>3</th><td>2</td><td>3</td><td>0.00281</td><td>0.0281</td><td>0.00712</td><td>500</td><td>500</td><td>500</td><td>0</td><td>0</td><td>1</td><td>-360</td><td>360</td><td>3</td><td>35.5872</td></tr><tr><th>4</th><td>3</td><td>1</td><td>0.00281</td><td>0.0281</td><td>0.00712</td><td>500</td><td>500</td><td>500</td><td>0</td><td>0</td><td>1</td><td>-360</td><td>360</td><td>1</td><td>35.5872</td></tr><tr><th>5</th><td>2</td><td>1</td><td>0.00281</td><td>0.0281</td><td>0.00712</td><td>500</td><td>500</td><td>500</td><td>0</td><td>0</td><td>1</td><td>-360</td><td>360</td><td>2</td><td>35.5872</td></tr><tr><th>6</th><td>3</td><td>2</td><td>0.00281</td><td>0.0281</td><td>0.00712</td><td>500</td><td>500</td><td>500</td><td>0</td><td>0</td><td>1</td><td>-360</td><td>360</td><td>3</td><td>35.5872</td></tr></tbody></table>
Note that while there are three lines, there are six entries here. There is an entry for each 'direction' of flow the lines can accommodate, hence six entries for three lines. The column `fbus` denotes the ID of the "from bus" or origin bus and the column `tbus` denotes the ID of the "to bus" or destination bus (e.g. `fbus`=1, `tbus`=3 is the flow in the direction from bus 1 to bus 3).
For this transport model formulation, we are only using transmission line capacity (known as "ratings"), given in `ratea`, `rateb`, and `ratec`. These correspond to different ratings based on how long the line might be overloaded, with `ratec` known as an "emergency rating", which could exceed the long-term rating, `ratea`. We will use `ratea` for this model. The dataset also contains resistance and reactance.
### 3. Create solver function (transport)
```julia
#=
Function to solve transport flow problem
Inputs:
gen -- dataframe with generator info
branch -- dataframe with transmission lines info
gencost -- dataframe with generator cost info
bus -- dataframe with bus types and loads
Note: it is always a good idea to include a comment block describing your
function's inputs clearly!
=#
function transport(gen, branch, gencost, bus)
Transport = Model(GLPK.Optimizer) # You could use Clp as well, with Clp.Optimizer
# Define sets
# Set of all generators
G = gen.id
# Set of all nodes
N = bus.bus_i
# Note: sets J_i and G_i will be described using dataframe indexing below
# Decision variables
@variables(Transport, begin
GEN[G] >= 0 # generation
# Note: we assume Pmin = 0 for all resources for simplicity here
FLOW[N,N] # flow
# Note: flow is not constrained to be positive
# By convention, positive values will indicate flow from the first to second node
# in the tuple, and a negative flow will indicate flow from the second to the first
# This matrix is thus "anti-symmetric", which we will ensure with an appropriate
# constraint.
end)
# Objective function: minimize sum of generation variable costs for all generators
@objective(Transport, Min,
sum( gencost[g,:x1] * GEN[g]
for g in G)
)
# Supply/demand balance constraints, accounting for power flows in/out of each node
@constraint(Transport, cBalance[i in N],
sum(GEN[g] for g in gen[gen.bus .== i,:id])
- bus[bus.bus_i .== i,:pd][1] ==
sum(FLOW[i,j] for j in branch[branch.tbus .== i,:fbus]))
# Max generation constraints
@constraint(Transport, cMaxGen[g in G],
GEN[g] <= gen[g,:pmax])
# Flow constraints on each branch
for l in 1:nrow(branch)
@constraint(Transport,
FLOW[branch[l,:fbus][1],branch[l,:tbus][1]] <=
branch[l,:ratea])
end
# Anti-symmetric flow constraints
@constraint(Transport, cFlowSymmetric[i in N, j in N],
FLOW[i,j] == -FLOW[j,i])
# Solve statement (! indicates runs in place)
optimize!(Transport)
# Dataframe of optimal decision variables
generation = DataFrame(
id = gen.id,
node = gen.bus,
gen = value.(GEN).data
)
flows = value.(FLOW).data
# Return the solution and objective as named tuple
return (
generation = generation,
flows,
cost = objective_value(Transport),
status = termination_status(Transport)
)
end
```
transport (generic function with 1 method)
### 4. Solve
```julia
solution = transport(gen, branch, gencost, bus)
solution.generation
```
<table class="data-frame"><thead><tr><th></th><th>id</th><th>node</th><th>gen</th></tr><tr><th></th><th>Int64</th><th>Int64</th><th>Float64</th></tr></thead><tbody><p>2 rows × 3 columns</p><tr><th>1</th><td>1</td><td>1</td><td>600.0</td></tr><tr><th>2</th><td>2</td><td>2</td><td>0.0</td></tr></tbody></table>
We generate all 600 MW from Gen A at Bus 1.
```julia
DataFrame(solution.flows)
```
<table class="data-frame"><thead><tr><th></th><th>x1</th><th>x2</th><th>x3</th></tr><tr><th></th><th>Float64</th><th>Float64</th><th>Float64</th></tr></thead><tbody><p>3 rows × 3 columns</p><tr><th>1</th><td>0.0</td><td>100.0</td><td>500.0</td></tr><tr><th>2</th><td>-100.0</td><td>0.0</td><td>100.0</td></tr><tr><th>3</th><td>-500.0</td><td>-100.0</td><td>0.0</td></tr></tbody></table>
In turn, the following flows are created:
- $l_{13}$ = 500 MW
- $l_{12}$ = 100 MW
- $l_{23}$ = 100 MW
Hence, we are able to maximize the capacity of the line from 1 to 3 ($l_{13}$), and then route the remaining power through Bus 2.
## The "DC" Optimal Power Flow problem
The above model is not physically correct, as we cannot arbitrarily route power through lines. We will now introduce a linear approximation to the optimal power flow problem that incorporates this limitation and is tractable and reasonably accurate. This is commonly called the "DC" optimal power flow problem, but in reality (as we'll see below), it is a linearization that still relates to the physics of AC power flows; it just simplifies (and ignores) certain non-convexities to produce a tractable linear programming problem that remains a valid approximation under certain circumstances.
In the "DC" or linear approximation of the AC optimal power flow problem, power flows along a line from bus $i$ to bus $j$ are driven by voltage [phase angle](https://en.wikipedia.org/wiki/Phasor) differences, denoted by $\theta_i$ and $\theta_j$:
$$
FLOW_{ij} = BaseMVA \times B_{ij} (\theta_i-\theta_j)
$$
Where $FLOW_{ij}$ is the flow across the line from node $i$ to node $j$ (in MW), $BaseMVA$ is the base power for the network (in MVA), $B_{ij}$ is the [susceptance](https://en.wikipedia.org/wiki/Susceptance) for the line connecting buses $i$ and $j$ (in per unit terms) and $(\theta_i-\theta_j)$ is the difference in voltage angles between buses (in radians).
Susceptance is the imaginary part of the [admittance](https://en.wikipedia.org/wiki/Admittance) of a line, where [admittance](https://en.wikipedia.org/wiki/Admittance) is a complex number that describes how easy it is for AC current to flow across a given conductor. Power flows in parallel circuits in an AC network in proportion to their admittance (or in inverse proportion to [impedance](https://en.wikipedia.org/wiki/Electrical_impedance), which is a measure of the opposition that a circuit presents to a current when a voltage is applied).
Voltage [phase](https://www.allaboutcircuits.com/textbook/alternating-current/chpt-1/ac-phase/) angles describe the displacement of the AC voltage waveform at each node, relative to a reference or "slack" bus. A difference in voltage angles between buses $i$ and $j$ indicates that the peaks and troughs in the sinusoidal voltage waveform at bus $i$ are shifted in time relative to the voltage waveform at bus $j$, as in the image below.
(*Image source: [allaboutcircuits.com](https://www.allaboutcircuits.com/textbook/alternating-current/chpt-1/ac-phase/)*)
In AC circuits, power flows from nodes with higher voltage angle to buses with lower voltage angle, just as power flows from higher voltage magnitude to lower voltage magnitude in DC circuits.
What causes the shift in voltage phase angle? An AC current flowing across a conductor encountering either [inductive reactance](https://en.wikipedia.org/wiki/Electrical_impedance#Inductive_reactance) (which relates to the magnetizing current, or the energy required to continually induce or establish magnetic fields around a conductor as AC current polarity flips each cycle) or [capacitive reactance](https://en.wikipedia.org/wiki/Electrical_impedance#Capacitive_reactance) (which relates to the charging current, or the energy required to sustain capacitive charges between two conductors separated by an insulator) will experience a shift in the voltage waveform relative to the current waveform. Inductive reactance causes the voltage waveform to shift forward relative to the current, while capacitive reactance causes the voltage to shift backwards relative to the current. In overhead transmission lines, the primary source of voltage phase angle shifts is the inductance of the transmission lines themselves (although capacitance is relevant for underground lines).
(For (a bit) more on the physics of power flow on AC transmission lines, you can review [this tutorial from the PJM Interconnection](https://learn.pjm.com/~/media/training/nerc-certifications/gen-exam-materials/bet/20160104-basics-of-elec-power-flow-on-ac.ashx))
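As a quick illustration of the flow relationship above (a sanity check only, not part of the model: the angle difference here is an assumed value chosen for the example, whereas in the model the angles are decision variables), we can plug in the line parameters from the 3-bus case loaded earlier:

```julia
# Illustrative calculation: flow implied by the linearized relationship
# for a line with reactance x = 0.0281 p.u. (susceptance B = 1/x ≈ 35.59 p.u.),
# on a 100 MVA base, with an assumed voltage angle difference of 0.0562 rad.
baseMVA = 100
B = 1 / 0.0281
Δθ = 0.0562
flow_MW = baseMVA * B * Δθ    # = 200 MW
println(flow_MW)
```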
The reason this approximation is known as the "DC OPF" is because we ignore [reactive power](https://en.wikipedia.org/wiki/AC_power#Reactive_power) flows and focus only on flows of real power as in a DC network. But you can see why "DC" OPF is actually a misnomer: we're still dealing with AC voltages and susceptance terms here, we're just linearizing the problem through some simplifying assumptions. In particular, the three basic assumptions used to derive a linearized or "DC" OPF approximation from the underlying AC OPF problem are as follows:
1. The resistance for each branch is negligible relative to the reactance, and can therefore be approximated as ~0.
2. The voltage magnitude at each bus is constant and equal to the base voltage (e.g. equal to 1 p.u).
3. The voltage angle difference $(\theta_j-\theta_i)$ across any branch from bus $i$ to $j$ is sufficiently small such that $cos(\theta_i-\theta_j) \approx 1$ and $sin(\theta_i-\theta_j) \approx (\theta_i-\theta_j)$. Note that $\theta_i,\theta_j$ are measured in [radians](https://en.wikipedia.org/wiki/Radian).
Fortunately, under normal operating conditions, these conditions hold for electricity transmission networks (although note they are not generally acceptable simplifying assumptions for lower voltage distribution networks).
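To get a feel for assumption 3, here is a quick numerical check for a representative angle difference (illustrative only; the 0.1 rad value is simply an assumed example):

```julia
# Small-angle approximation check: for typical transmission voltage angle
# differences, sin(Δθ) ≈ Δθ and cos(Δθ) ≈ 1 hold closely.
Δθ = 0.1                    # radians (≈ 5.7 degrees)
println((sin(Δθ), Δθ))      # (0.0998..., 0.1)  -- error of about 0.2%
println((cos(Δθ), 1.0))     # (0.9950..., 1.0)  -- error of about 0.5%
```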
We can now modify the above transport model to incorporate these new power flow-related decision variables and constraints:
$$
\begin{align}
\min \ & \sum_{g \in G} VarCost_g \times GEN_g & \\
\text{s.t.} & \\
& \sum_{g \in G_i} GEN_g - Demand_i = \sum_{j\in J(i)} FLOW_{i,j} & \forall \quad i \in \mathcal{N}\\
& FLOW_{i,j} \leq MaxFlow_{ij} & \forall \quad i \in \mathcal{N}, \forall j \in J_i \\
& FLOW_{i,j} = BaseMVA \times B_{ij}(\theta_i-\theta_j) & \forall \quad i \in \mathcal{N}, \forall j \in J_i \\
& GEN_g \leq Pmax_g & \forall \quad g \in G \\
& GEN_g \geq Pmin_g & \forall \quad g \in G \\
& \theta_{slack} = 0
\end{align}
$$
Note that we no longer require a constraint to enforce anti-symmetric flows.
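To see why, note that each line appears in the branch data in both directions with the same susceptance ($B_{ij} = B_{ji}$), so anti-symmetry follows directly from the flow definition:

$$
FLOW_{j,i} = BaseMVA \times B_{ji}(\theta_j-\theta_i) = -BaseMVA \times B_{ij}(\theta_i-\theta_j) = -FLOW_{i,j}
$$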
We have the following **sets**:
- $\mathcal{N}$, the set of all nodes (or buses) in the network
- $J_i \subset \mathcal{N}$, the subset of nodes that are connected to node $i$
- $G_i \subset G$, the subset of generators located at node $i$
The **decision variables** in the above problem are:
- $GEN_{g}$, the generation (in MW) produced by each generator, $g$
- $\theta_i$, the voltage phase angle at bus $i$ relative to the slack or reference bus ($\theta_{slack}$)
- $FLOW_{i,j}$, the flow from bus i to bus j (in MW)
Note that unlike the transport flow problem, in the OPF problem we *do not* directly choose the flows across lines, but rather choose the real power injections at generator buses via $GEN_{g}$ and the voltage angles $\theta_i$, which collectively determine the power flows across lines via the collection of constraints above. The $FLOW$ decision variable is thus an "auxiliary" variable, as it is precisely determined by the constraint $FLOW_{i,j} = BaseMVA \times B_{ij}(\theta_i-\theta_j)$ for all pairs ($i,j$) for which transmission lines exist.
(Note that we create $FLOW$ decisions for some pairs of nodes ($i,j$) that are not connected by lines; these variables are "free" variables, as they are not constrained and do not appear in or affect the objective function, so the solver will generally remove these variables in the pre-solve step. We could try a different approach to sets for the $FLOW$ variable, e.g. by indexing across lines instead of pairs of nodes, to avoid creating these unnecessary free variables, but for convenience, we'll create and ignore them for now.)
The **parameters** are:
- $Pmin_g$, the minimum operating bounds for the generator (based on engineering or natural resource constraints)
- $Pmax_g$, the maximum operating bounds for the generator (based on engineering or natural resource constraints)
- $Demand_i$, the demand (in MW) at bus $i$
- $MaxFlow_{ij}$, the maximum allowable flow along the line from $i$ to $j$
- $VarCost_g$, the variable cost of generator $g$
- $B_{ij}$, susceptance for line connecting buses $i$ and $j$
- $\theta_{slack}$, the "slack" bus or reference bus from which relative voltage angles at all other buses are calcluated. Thus $\theta_{slack}=0$.
- $BaseMVA$, the base power in MVA for the network (used to scale from standard units to per unit values or vice versa).
### 3. Create solver function (dcopf)
```julia
#=
Function to solve DC OPF problem
Inputs:
gen -- dataframe with generator info
branch -- dataframe with transmission lines info
gencost -- dataframe with generator cost info
bus -- dataframe with bus types and loads
Note: it is always a good idea to include a comment block describing your
function's inputs clearly!
=#
function dcopf(gen, branch, gencost, bus)
DCOPF = Model(GLPK.Optimizer) # You could use Clp as well, with Clp.Optimizer
# Define sets
# Set of all generators
G = gen.id
# Set of all nodes
N = bus.bus_i
# sets J_i and G_i will be described using dataframe indexing below
# Define per unit base units for the system
# used to convert from per unit values to standard unit
# values (e.g. p.u. power flows to MW/MVA)
baseMVA = gen.mbase[1] # base MVA is 100 MVA for this system
# Decision variables
@variables(DCOPF, begin
GEN[G] >= 0 # generation
# Note: we assume Pmin = 0 for all resources for simplicity here
THETA[N] # voltage phase angle of bus
FLOW[N,N] # flows between all pairs of nodes
end)
# Create slack bus with reference angle = 0
# Note: by convention this is a generator bus. Hence, we will select bus 1
fix(THETA[1],0)
# Objective function
@objective(DCOPF, Min,
sum( gencost[g,:x1] * GEN[g]
for g in G)
)
# Supply demand balances
@constraint(DCOPF, cBalance[i in N],
sum(GEN[g] for g in gen[gen.bus .== i,:id])
- bus[bus.bus_i .== i,:pd][1] ==
sum(FLOW[i,j] for j in branch[branch.fbus .== i,:tbus])
)
# Max generation constraint
@constraint(DCOPF, cMaxGen[g in G],
GEN[g] <= gen[g,:pmax])
# Flow constraints on each branch;
# In DCOPF, line flow is a function of voltage angles
# Create an array of references to the line constraints,
# which we "fill" below in loop
cLineFlows = JuMP.Containers.DenseAxisArray{Any}(undef, 1:nrow(branch))
for l in 1:nrow(branch)
cLineFlows[l] = @constraint(DCOPF,
FLOW[branch[l,:fbus],branch[l,:tbus]] ==
baseMVA*branch[l,:sus]*(THETA[branch[l,:fbus]] - THETA[branch[l,:tbus]])
)
end
# Max line flow constraints
# Create an array of references to the line constraints,
# which we "fill" below in loop
cLineLimits = JuMP.Containers.DenseAxisArray{Any}(undef, 1:nrow(branch))
for l in 1:nrow(branch)
cLineLimits[l] = @constraint(DCOPF,
FLOW[branch[l,:fbus],branch[l,:tbus]]
<= branch[l,:ratea]
)
end
# Solve statement (! indicates runs in place)
optimize!(DCOPF)
# Output variables
generation = DataFrame(
id = gen.id,
node = gen.bus,
gen = value.(GEN).data
)
angles = value.(THETA).data
flows = DataFrame(
fbus = branch.fbus,
tbus = branch.tbus,
flow = baseMVA .* branch.sus .* (angles[branch.fbus] .- angles[branch.tbus])
)
# We output the marginal values of the demand constraints,
# which will in fact be the prices to deliver power at a given bus.
prices = DataFrame(
node = bus.bus_i,
value = dual.(cBalance).data)
# Return the solution and objective as named tuple
return (
generation = generation,
angles,
flows,
prices,
cost = objective_value(DCOPF),
status = termination_status(DCOPF)
)
end
```
dcopf (generic function with 1 method)
### 4. Solve
```julia
solution = dcopf(gen, branch, gencost, bus)
solution.generation
```
<table class="data-frame"><thead><tr><th></th><th>id</th><th>node</th><th>gen</th></tr><tr><th></th><th>Int64</th><th>Int64</th><th>Float64</th></tr></thead><tbody><p>2 rows × 3 columns</p><tr><th>1</th><td>1</td><td>1</td><td>600.0</td></tr><tr><th>2</th><td>2</td><td>2</td><td>0.0</td></tr></tbody></table>
Hence, we generate all 600 MW from Gen A at Bus 1.
```julia
# These are the voltage phase angles of the buses relative to Bus 1.
solution.angles
```
3-element Array{Float64,1}:
0.0
-0.05619999999999999
-0.11239999999999997
```julia
solution.flows
```
<table class="data-frame"><thead><tr><th></th><th>fbus</th><th>tbus</th><th>flow</th></tr><tr><th></th><th>Int64</th><th>Int64</th><th>Float64</th></tr></thead><tbody><p>6 rows × 3 columns</p><tr><th>1</th><td>1</td><td>3</td><td>400.0</td></tr><tr><th>2</th><td>1</td><td>2</td><td>200.0</td></tr><tr><th>3</th><td>2</td><td>3</td><td>200.0</td></tr><tr><th>4</th><td>3</td><td>1</td><td>-400.0</td></tr><tr><th>5</th><td>2</td><td>1</td><td>-200.0</td></tr><tr><th>6</th><td>3</td><td>2</td><td>-200.0</td></tr></tbody></table>
Thus, we notice that, in contrast to the transport model, we do not max out the capacity of $l_{13}$. The following flows are created:
- $l_{1,3}$ = 400 MW
- $l_{1,2}$ = 200 MW
- $l_{2,3}$ = 200 MW
The reason we can't make maximum use of line $l_{1,3}$ is that power flows split across parallel circuits in inverse proportion to the impedance of the circuit paths. Here, all three branches have equal susceptance, and thus equal impedance (since we are assuming resistance is ~0). Thus, the path $l_{1,2} \rightarrow l_{2,3}$ has twice the impedance of the path $l_{1,3}$ and therefore carries half as much of the power coming from Gen A at Bus 1.
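We can reproduce this split with a quick side calculation (an illustration that assumes the equal line susceptances of this particular case, not something the model itself needs): the direct path $1\rightarrow3$ has susceptance $B$, while the two-line path $1\rightarrow2\rightarrow3$ has the series-equivalent susceptance $B/2$, so an injection at Bus 1 withdrawn at Bus 3 splits 2:1 between the two paths.

```julia
# Flow split between the direct path (1->3) and the series path (1->2->3),
# assuming all three lines have identical susceptance as in this 3-bus case.
B = 1 / 0.0281                          # susceptance of each line (p.u.)
B_direct = B                            # path 1->3
B_series = 1 / (1/B + 1/B)              # path 1->2->3 (two lines in series) = B/2
injection = 600                         # MW injected at Bus 1, withdrawn at Bus 3
flow_direct = injection * B_direct / (B_direct + B_series)   # 400 MW on l_13
flow_series = injection * B_series / (B_direct + B_series)   # 200 MW on l_12 and l_23
println((flow_direct, flow_series))
```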
### 5. Solve high demand case
Now, let's increase demand at Bus 3 to 800 MW. Despite spare capacity at Gen A, it turns out we will no longer be able to generate all of our power from Gen A alone.
```julia
bus_high = copy(bus)
bus_high[3,:pd] = 800
sol_high = dcopf(gen, branch, gencost, bus_high)
sol_high.generation
```
<table class="data-frame"><thead><tr><th></th><th>id</th><th>node</th><th>gen</th></tr><tr><th></th><th>Int64</th><th>Int64</th><th>Float64</th></tr></thead><tbody><p>2 rows × 3 columns</p><tr><th>1</th><td>1</td><td>1</td><td>700.0</td></tr><tr><th>2</th><td>2</td><td>2</td><td>100.0</td></tr></tbody></table>
This situation is explained by the flow pattern: line $l_{13}$ is at its maximum capacity, so in order to meet demand at Bus 3, more power needs to be injected at Bus 2, requiring the more costly generator at Bus 2 to dispatch despite spare capacity at the generator at Bus 1.
```julia
sol_high.flows
```
<table class="data-frame"><thead><tr><th></th><th>fbus</th><th>tbus</th><th>flow</th></tr><tr><th></th><th>Int64</th><th>Int64</th><th>Float64</th></tr></thead><tbody><p>6 rows × 3 columns</p><tr><th>1</th><td>1</td><td>3</td><td>500.0</td></tr><tr><th>2</th><td>1</td><td>2</td><td>200.0</td></tr><tr><th>3</th><td>2</td><td>3</td><td>300.0</td></tr><tr><th>4</th><td>3</td><td>1</td><td>-500.0</td></tr><tr><th>5</th><td>2</td><td>1</td><td>-200.0</td></tr><tr><th>6</th><td>3</td><td>2</td><td>-300.0</td></tr></tbody></table>
The following flows are created:
- $l_{1,3}$ = 500 MW
- $l_{1,2}$ = 200 MW
- $l_{2,3}$ = 300 MW
### 6. Compare prices
The marginal values of the demand constraints at a given bus represent the change in the objective that results from increasing demand at the bus by one unit. This is the natural definition of a "value" of power at that location, and is the basis for **[locational marginal prices](https://www.iso-ne.com/participate/support/faq/lmp)** (LMPs) found in electricity markets.
We examine first the regular case of demand = 600 MW, then the high demand case = 800 MW.
```julia
solution.prices
```
<table class="data-frame"><thead><tr><th></th><th>node</th><th>value</th></tr><tr><th></th><th>Int64</th><th>Float64</th></tr></thead><tbody><p>3 rows × 2 columns</p><tr><th>1</th><td>1</td><td>50.0</td></tr><tr><th>2</th><td>2</td><td>50.0</td></tr><tr><th>3</th><td>3</td><td>50.0</td></tr></tbody></table>
All prices are the same in this case. The interpretation: if we were to add an incremental load at any of the buses, we could meet it with additional production from Gen A, which has a marginal cost of \$50 / MWh, without hitting any transmission limits.
```julia
sol_high.prices
```
<table class="data-frame"><thead><tr><th></th><th>node</th><th>value</th></tr><tr><th></th><th>Int64</th><th>Float64</th></tr></thead><tbody><p>3 rows × 2 columns</p><tr><th>1</th><td>1</td><td>50.0</td></tr><tr><th>2</th><td>2</td><td>100.0</td></tr><tr><th>3</th><td>3</td><td>150.0</td></tr></tbody></table>
Something interesting has happened!
First, note that the prices now differ across buses. Incremental load can no longer always be met by production from Gen A; only load added right at Bus 1, where Gen A is located, is priced at Gen A's \$50 / MWh. Similarly, incremental load at Bus 2 is met by increasing production from Gen B, with marginal cost = \$100 / MWh.
But why does Bus 3 have a marginal price of \$150 / MWh? That is higher than the marginal cost of either of our two generators.
The answer lies in what must happen to meet an incremental load at Bus 3 while respecting transmission constraints. We must increase production from Gen B, but in doing so, part of the power from Gen B will flow through $l_{2,1} \rightarrow l_{1,3}$ in addition to $l_{2,3}$, since power flows split across parallel paths in proportion to admittance (or in inverse proportion to impedance). Without adjusting Gen A's output, an increase in production from Gen B would exceed the transmission limit on line $l_{1,3}$, requiring us to throttle back power from Gen A to keep the power flows feasible.
The exact change in generation for an incremental 1 MW load at Bus 3 is thus:
- Gen B $\uparrow$ 2 MW
- Gen A $\downarrow$ 1 MW
Hence:
$$
Price_3 = 2 \times VarCost_B - VarCost_A = \$150 \text{ / MWh}
$$
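We can verify this redispatch against the binding line limit using the same parallel-path logic (a sketch, still assuming equal reactances on all three lines): injecting 1 MW at Bus 1 for a withdrawal at Bus 3 puts $2/3$ of it on $l_{1,3}$, while injecting 1 MW at Bus 2 for the same withdrawal puts only $1/3$ on $l_{1,3}$. The net change in flow on the limiting line is therefore

$$
\Delta f_{1,3} = (-1)\times\tfrac{2}{3} + (+2)\times\tfrac{1}{3} = 0
$$

so $l_{1,3}$ stays exactly at its 500 MW limit while the extra 1 MW of load at Bus 3 is served (the remaining flow change appears on $l_{2,3}$).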
In a network with thousands of nodes and many parallel paths and loop flows, one can see quite quickly how prices may vary in unexpected ways; hence, the need for detailed mathematical models to compute locational marginal prices.
### 7. The IEEE 14-bus test system
We now explore a more complex system: the IEEE 14-bus test system.
The system consists of:
- 2 generators (located at nodes 1 and 2)
- 11 loads
- a meshed transmission network including transformers and multiple voltages
Our data files for the test system contain parameters for resistance and reactance of the transmission lines, which are related to complex impedance:
$$
Z = R + iX
$$
where $R$ = resistance is the real part, $X$ = reactance is the imaginary part. Recall from above that impedance is the inverse of admittance; hence, we have the following transformation for susceptance:
$$
B = \text{Im}\left(\frac{1}{R + iX}\right) = \frac{-X}{|R + iX|^2} = \frac{-X}{R^2 + X^2}
$$
But, since we neglect the resistance for the purpose of solving the DC-OPF, we can approximate the susceptance from above as:
$$
B = \frac{1}{X}
$$
The data are converted to have positive values of both $X$ and $B$, hence we remove the negative sign.
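As a quick sanity check on this approximation, take the first line of the data table below ($R = 0.01938$, $X = 0.05917$):

$$
\frac{1}{X} = \frac{1}{0.05917} \approx 16.90, \qquad
\frac{X}{R^2 + X^2} = \frac{0.05917}{0.01938^2 + 0.05917^2} \approx 15.26
$$

so the lossless approximation $B = 1/X$ (the value 16.9005 stored in column `b`) overstates the exact magnitude by roughly 10%, which is acceptable for the DC-OPF.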
```julia
datadir = joinpath("ieee_test_cases")
gens = CSV.read(joinpath(datadir,"Gen14.csv"), DataFrame);
lines = CSV.read(joinpath(datadir,"Tran14_b.csv"), DataFrame);
loads = CSV.read(joinpath(datadir,"Load14.csv"), DataFrame);
# Rename all columns to lowercase (by convention)
for f in [gens, lines, loads]
rename!(f,lowercase.(names(f)))
end
# create generator ids
gens.id = 1:nrow(gens);
# create line ids
lines.id = 1:nrow(lines);
# add set of rows for reverse direction with same parameters
lines2 = copy(lines)
lines2.f = lines2.fromnode
lines2.fromnode = lines.tonode
lines2.tonode = lines2.f
lines2 = lines2[:,names(lines)]
append!(lines,lines2)
# calculate simple susceptance, ignoring resistance as earlier
lines.b = 1 ./ lines.reactance
# keep only a single time period
loads = loads[:,["connnode","interval-1_load"]]
rename!(loads,"interval-1_load" => "demand");
lines
```
<table class="data-frame"><thead><tr><th></th><th>fromnode</th><th>tonode</th><th>resistance</th><th>reactance</th><th>contingencymarked</th><th>capacity</th><th>id</th><th>b</th></tr><tr><th></th><th>Int64</th><th>Int64</th><th>Float64</th><th>Float64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Float64</th></tr></thead><tbody><p>40 rows × 8 columns</p><tr><th>1</th><td>1</td><td>2</td><td>0.01938</td><td>0.05917</td><td>1</td><td>10000</td><td>1</td><td>16.9005</td></tr><tr><th>2</th><td>1</td><td>5</td><td>0.05403</td><td>0.22304</td><td>1</td><td>10000</td><td>2</td><td>4.4835</td></tr><tr><th>3</th><td>2</td><td>3</td><td>0.04699</td><td>0.19797</td><td>1</td><td>10000</td><td>3</td><td>5.05127</td></tr><tr><th>4</th><td>2</td><td>4</td><td>0.05811</td><td>0.17632</td><td>1</td><td>10000</td><td>4</td><td>5.67151</td></tr><tr><th>5</th><td>2</td><td>5</td><td>0.05695</td><td>0.17388</td><td>1</td><td>10000</td><td>5</td><td>5.75109</td></tr><tr><th>6</th><td>3</td><td>4</td><td>0.06701</td><td>0.17103</td><td>1</td><td>10000</td><td>6</td><td>5.84693</td></tr><tr><th>7</th><td>4</td><td>5</td><td>0.01335</td><td>0.04211</td><td>1</td><td>10000</td><td>7</td><td>23.7473</td></tr><tr><th>8</th><td>4</td><td>7</td><td>0.0</td><td>0.20912</td><td>1</td><td>10000</td><td>8</td><td>4.78194</td></tr><tr><th>9</th><td>4</td><td>9</td><td>0.0</td><td>0.55618</td><td>1</td><td>10000</td><td>9</td><td>1.79798</td></tr><tr><th>10</th><td>5</td><td>6</td><td>0.0</td><td>0.25202</td><td>1</td><td>10000</td><td>10</td><td>3.96794</td></tr><tr><th>11</th><td>6</td><td>11</td><td>0.09498</td><td>0.1989</td><td>1</td><td>10000</td><td>11</td><td>5.02765</td></tr><tr><th>12</th><td>6</td><td>12</td><td>0.12291</td><td>0.25581</td><td>1</td><td>10000</td><td>12</td><td>3.90915</td></tr><tr><th>13</th><td>6</td><td>13</td><td>0.06615</td><td>0.13027</td><td>1</td><td>10000</td><td>13</td><td>7.67636</td></tr><tr><th>14</th><td>7</td><td>8</td><td>0.0</td><td>0.17615</td><td>1</td><td>10000</td><td>14</td><td>5.67698</td></tr><tr><th>15</th><td>7</td><td>9</td><td>0.0</td><td>0.11001</td><td>1</td><td>10000</td><td>15</td><td>9.09008</td></tr><tr><th>16</th><td>9</td><td>10</td><td>0.03181</td><td>0.0845</td><td>1</td><td>10000</td><td>16</td><td>11.8343</td></tr><tr><th>17</th><td>9</td><td>14</td><td>0.12711</td><td>0.27038</td><td>1</td><td>10000</td><td>17</td><td>3.6985</td></tr><tr><th>18</th><td>10</td><td>11</td><td>0.08205</td><td>0.19207</td><td>1</td><td>10000</td><td>18</td><td>5.20644</td></tr><tr><th>19</th><td>12</td><td>13</td><td>0.22092</td><td>0.19988</td><td>1</td><td>10000</td><td>19</td><td>5.003</td></tr><tr><th>20</th><td>13</td><td>14</td><td>0.17093</td><td>0.34802</td><td>1</td><td>10000</td><td>20</td><td>2.8734</td></tr><tr><th>21</th><td>2</td><td>1</td><td>0.01938</td><td>0.05917</td><td>1</td><td>10000</td><td>1</td><td>16.9005</td></tr><tr><th>22</th><td>5</td><td>1</td><td>0.05403</td><td>0.22304</td><td>1</td><td>10000</td><td>2</td><td>4.4835</td></tr><tr><th>23</th><td>3</td><td>2</td><td>0.04699</td><td>0.19797</td><td>1</td><td>10000</td><td>3</td><td>5.05127</td></tr><tr><th>24</th><td>4</td><td>2</td><td>0.05811</td><td>0.17632</td><td>1</td><td>10000</td><td>4</td><td>5.67151</td></tr><tr><th>25</th><td>5</td><td>2</td><td>0.05695</td><td>0.17388</td><td>1</td><td>10000</td><td>5</td><td>5.75109</td></tr><tr><th>26</th><td>4</td><td>3</td><td>0.06701</td><td>0.17103</td><td>1</td><td>10000</td><td>6</td><td>5.84693</td></tr><tr><th>27</th><td>5</td><td>4</td><t
d>0.01335</td><td>0.04211</td><td>1</td><td>10000</td><td>7</td><td>23.7473</td></tr><tr><th>28</th><td>7</td><td>4</td><td>0.0</td><td>0.20912</td><td>1</td><td>10000</td><td>8</td><td>4.78194</td></tr><tr><th>29</th><td>9</td><td>4</td><td>0.0</td><td>0.55618</td><td>1</td><td>10000</td><td>9</td><td>1.79798</td></tr><tr><th>30</th><td>6</td><td>5</td><td>0.0</td><td>0.25202</td><td>1</td><td>10000</td><td>10</td><td>3.96794</td></tr><tr><th>⋮</th><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td></tr></tbody></table>
```julia
gens
```
<table class="data-frame"><thead><tr><th></th><th>connnode</th><th>c2</th><th>c1</th><th>c0</th><th>pgmax</th><th>pgmin</th><th>rgmax</th><th>rgmin</th><th>pgprev</th><th>id</th></tr><tr><th></th><th>Int64</th><th>Float64</th><th>Int64</th><th>Int64</th><th>Float64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Float64</th><th>Int64</th></tr></thead><tbody><p>2 rows × 10 columns</p><tr><th>1</th><td>1</td><td>0.0430293</td><td>20</td><td>0</td><td>332.4</td><td>0</td><td>100</td><td>-100</td><td>161.619</td><td>1</td></tr><tr><th>2</th><td>2</td><td>0.25</td><td>20</td><td>0</td><td>140.0</td><td>0</td><td>100</td><td>-100</td><td>97.4704</td><td>2</td></tr></tbody></table>
```julia
loads
```
<table class="data-frame"><thead><tr><th></th><th>connnode</th><th>demand</th></tr><tr><th></th><th>Int64</th><th>Float64</th></tr></thead><tbody><p>11 rows × 2 columns</p><tr><th>1</th><td>2</td><td>-21.7</td></tr><tr><th>2</th><td>3</td><td>-94.2</td></tr><tr><th>3</th><td>4</td><td>-47.8</td></tr><tr><th>4</th><td>5</td><td>-7.6</td></tr><tr><th>5</th><td>6</td><td>-11.2</td></tr><tr><th>6</th><td>9</td><td>-29.5</td></tr><tr><th>7</th><td>10</td><td>-9.0</td></tr><tr><th>8</th><td>11</td><td>-3.5</td></tr><tr><th>9</th><td>12</td><td>-6.1</td></tr><tr><th>10</th><td>13</td><td>-13.5</td></tr><tr><th>11</th><td>14</td><td>-14.9</td></tr></tbody></table>
The structure of these data differs from the case format above, so we write a modified solver function:
```julia
#=
Function to solve DC OPF problem using IEEE test cases
Inputs:
gen_info -- dataframe with generator info
line_info -- dataframe with transmission lines info
loads -- dataframe with load info
=#
function dcopf_ieee(gens, lines, loads)
DCOPF = Model(GLPK.Optimizer) # You could use Clp as well, with Clp.Optimizer
# Define sets based on data
# Set of generator buses
G = gens.connnode
# Set of all nodes
N = sort(union(unique(lines.fromnode),
unique(lines.tonode)))
# sets J_i and G_i will be described using dataframe indexing below
# Define per unit base units for the system
# used to convert from per unit values to standard unit
# values (e.g. p.u. power flows to MW/MVA)
baseMVA = 100 # base MVA is 100 MVA for this system
# Decision variables
@variables(DCOPF, begin
GEN[N] >= 0 # generation
        # Note: we assume Pmin = 0 for all resources for simplicity here
THETA[N] # voltage phase angle of bus
FLOW[N,N] # flows between all pairs of nodes
end)
# Create slack bus with reference angle = 0; use bus 1 with generator
fix(THETA[1],0)
# Objective function
@objective(DCOPF, Min,
sum( gens[g,:c1] * GEN[g] for g in G)
)
# Supply demand balances
@constraint(DCOPF, cBalance[i in N],
sum(GEN[g] for g in gens[gens.connnode .== i,:connnode])
+ sum(load for load in loads[loads.connnode .== i,:demand])
== sum(FLOW[i,j] for j in lines[lines.fromnode .== i,:tonode])
)
# Max generation constraint
@constraint(DCOPF, cMaxGen[g in G],
GEN[g] <= gens[g,:pgmax])
# Flow constraints on each branch;
# In DCOPF, line flow is a function of voltage angles
# Create an array of references to the line constraints,
# which we "fill" below in loop
cLineFlows = JuMP.Containers.DenseAxisArray{Any}(undef, 1:nrow(lines))
for l in 1:nrow(lines)
cLineFlows[l] = @constraint(DCOPF,
FLOW[lines[l,:fromnode],lines[l,:tonode]] ==
baseMVA * lines[l,:b] *
(THETA[lines[l,:fromnode]] - THETA[lines[l,:tonode]])
)
end
# Max line flow limits
# Create an array of references to the line constraints,
# which we "fill" below in loop
cLineLimits = JuMP.Containers.DenseAxisArray{Any}(undef, 1:nrow(lines))
for l in 1:nrow(lines)
cLineLimits[l] = @constraint(DCOPF,
FLOW[lines[l,:fromnode],lines[l,:tonode]] <=
lines[l,:capacity]
)
end
# Solve statement (! indicates runs in place)
optimize!(DCOPF)
# Output variables
generation = DataFrame(
node = gens.connnode,
gen = value.(GEN).data[gens.connnode]
)
angles = value.(THETA).data
flows = DataFrame(
fbus = lines.fromnode,
tbus = lines.tonode,
flow = baseMVA * lines.b .* (angles[lines.fromnode] .-
angles[lines.tonode]))
# We output the marginal values of the demand constraints,
# which will in fact be the prices to deliver power at a given bus.
prices = DataFrame(
node = N,
value = dual.(cBalance).data)
# Return the solution and objective as named tuple
return (
generation = generation,
angles,
flows,
prices,
cost = objective_value(DCOPF),
status = termination_status(DCOPF)
)
end
```
dcopf_ieee (generic function with 1 method)
```julia
solution = dcopf_ieee(gens, lines, loads);
```
```julia
solution.generation
```
<table class="data-frame"><thead><tr><th></th><th>node</th><th>gen</th></tr><tr><th></th><th>Int64</th><th>Float64</th></tr></thead><tbody><p>2 rows × 2 columns</p><tr><th>1</th><td>1</td><td>119.0</td></tr><tr><th>2</th><td>2</td><td>140.0</td></tr></tbody></table>
```julia
solution.angles
```
14-element Array{Float64,1}:
0.0
-0.03791491976786273
-0.18200655970723797
-0.1453797728538324
-0.12249815256001242
-0.22741136081397662
-0.20659753163771283
-0.20659753163771283
-0.23880184324738518
-0.2442681772629424
-0.239406951515819
-0.24660972634711795
-0.2494179036348092
-0.26611574994200304
```julia
solution.prices
```
<table class="data-frame"><thead><tr><th></th><th>node</th><th>value</th></tr><tr><th></th><th>Int64</th><th>Float64</th></tr></thead><tbody><p>14 rows × 2 columns</p><tr><th>1</th><td>1</td><td>20.0</td></tr><tr><th>2</th><td>2</td><td>20.0</td></tr><tr><th>3</th><td>3</td><td>20.0</td></tr><tr><th>4</th><td>4</td><td>20.0</td></tr><tr><th>5</th><td>5</td><td>20.0</td></tr><tr><th>6</th><td>6</td><td>20.0</td></tr><tr><th>7</th><td>7</td><td>20.0</td></tr><tr><th>8</th><td>8</td><td>20.0</td></tr><tr><th>9</th><td>9</td><td>20.0</td></tr><tr><th>10</th><td>10</td><td>20.0</td></tr><tr><th>11</th><td>11</td><td>20.0</td></tr><tr><th>12</th><td>12</td><td>20.0</td></tr><tr><th>13</th><td>13</td><td>20.0</td></tr><tr><th>14</th><td>14</td><td>20.0</td></tr></tbody></table>
```julia
```
| ab5bdefe3746f5ca7eb5911c9bc6e33db3ab5f83 | 87,126 | ipynb | Jupyter Notebook | Notebooks/06-Optimal-Power-Flow.ipynb | cristianho3/power-systems-optimization | 547e03857a8b7019f507cb88be8efc261799b572 | [
"CC-BY-4.0",
"MIT"
]
| null | null | null | Notebooks/06-Optimal-Power-Flow.ipynb | cristianho3/power-systems-optimization | 547e03857a8b7019f507cb88be8efc261799b572 | [
"CC-BY-4.0",
"MIT"
]
| null | null | null | Notebooks/06-Optimal-Power-Flow.ipynb | cristianho3/power-systems-optimization | 547e03857a8b7019f507cb88be8efc261799b572 | [
"CC-BY-4.0",
"MIT"
]
| null | null | null | 51.010539 | 4,184 | 0.522588 | true | 16,076 | Qwen/Qwen-72B | 1. YES
2. YES
| 0.847968 | 0.817574 | 0.693277 | __label__eng_Latn | 0.908098 | 0.449046 |
## Question 1.a
### Assumption:
The dataset for each class of MRI images can be modeled as a Gaussian distribution, mainly because the variation within each class is small. In addition, we assume that each image has a size of 256x256 pixels.
### Prior Information:
In total there are 6 different classes, including the CN Tower images. The classes in the dataset are head, neck, spine, abdomen, and pelvis MRI images, plus the CN Tower images.
### Approach 1:
1. Detect the outliers in a preprocessing step: treating the CN Tower images as outliers among the human-body MRI images, we can try to detect them with standard outlier-detection techniques such as the Z-score, the interquartile range (IQR), or an Isolation Forest, as sketched below.
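A minimal sketch of Approach 1, assuming the images have already been loaded and flattened into a NumPy array (the array name and the `contamination` value are illustrative assumptions, not given in the problem):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# images: (n_samples, 256*256) array of flattened grayscale images (assumed)
def find_outliers(images, contamination=0.01):
    # Isolation Forest isolates points that are "few and different";
    # CN Tower photos should look very different from MRI slices.
    iso = IsolationForest(contamination=contamination, random_state=0)
    labels = iso.fit_predict(images)   # +1 = inlier, -1 = outlier
    return np.where(labels == -1)[0]   # indices of suspected CN Tower images
```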
### Approach 2:
If Approach 1 fails, we use a soft clustering method to cluster the different classes in the dataset, because hard clustering methods provide no uncertainty measure or probability of how strongly a data point is associated with a specific cluster. Here we use Gaussian Mixture Models (GMMs). Since each image is a 256x256 matrix, we first flatten it to a vector in the preprocessing step. We then perform dimensionality reduction with Principal Component Analysis (PCA). This step is needed because the complexity of a GMM is $\mathcal{O}(NKD^{3})$; with D = 256x256 the runtime would be prohibitive, so we project the data onto the top principal components (chosen from the variance explained by the top eigenvalues and eigenvectors) to reduce the dimension, and then fit the GMM to the PCA-transformed dataset, as sketched below.
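A minimal sketch of this pipeline with scikit-learn (the array name, the number of clusters, and the variance threshold are assumptions for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# images: (n_samples, 256, 256) array of grayscale MRI / CN Tower images (assumed)
def cluster_images(images, n_clusters=6, var_explained=0.95):
    X = images.reshape(len(images), -1)                 # flatten 256x256 -> 65536-vector
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)   # standardize pixel features
    pca = PCA(n_components=var_explained)               # keep components explaining 95% of variance
    Z = pca.fit_transform(X)                            # reduced representation
    gmm = GaussianMixture(n_components=n_clusters, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(Z)                         # hard assignment from the soft clustering
    probs = gmm.predict_proba(Z)                        # per-cluster membership probabilities
    return labels, probs
```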
### Gaussian Mixture Models (GMMs):
A Gaussian Mixture Model consists of several Gaussian components indexed by $k \in \{1,2,\dots,K\}$, where K is the number of clusters in the dataset. From the prior information the total number of clusters is 6, so in our case K = 6.
For each of the Gaussian cluster k, the mixture has some important parameters as given below:
1. Mean ($\mu$): defines the center of the cluster.
2. Covariance ($\Sigma$): defines the width of the distribution.
3. Mixing probability ($\pi_k$): defines the weight of each Gaussian component in the mixture.
The Expectation–Maximization (EM) algorithm can be used to obtain the optimal values of these parameters so that each Gaussian fits the data points belonging to its cluster. In general, the Gaussian density function is given by
\begin{equation}
\tag{1}
N(x| \mu, \Sigma) = \frac{1}{(2\pi)^{\frac{D}{2}} |\Sigma|^{\frac{1}{2}}} exp \left(\frac{-1}{2} (x-\mu)^{T} \Sigma ^{-1} (x-\mu)\right)
\end{equation}
In equation 1, $x$ is the data point and D is the number of dimensions of each data point. Here $\mu$ is the mean, $\Sigma$ is the covariance, and N is the number of data points. As there are several Gaussians, we need to find the optimal parameters for the whole mixture, which we approach by asking for the probability that a given data point $x_{n}$ comes from Gaussian $k$, expressed as:
\begin{equation}
\tag{2}
p(z_{nk} = 1 | x_{n})
\end{equation}
This tells us, for a given data point $x$, the probability that it came from Gaussian $k$. In equation 2, $z$ is a latent variable that takes the value 1 when $x$ comes from Gaussian $k$ and 0 otherwise. This variable is useful in determining the Gaussian mixture parameters by calculating its probability of occurrence. Now let $\pi_{k} = p(z_{k} = 1)$ be the overall probability that a given point comes from Gaussian $k$, and for the $K$ Gaussians let $z = \{ z_{1},...,z_{K} \}$. Now
\begin{equation}
\tag{3}
p(x_{n} | z) = \prod_{k=1}^{K} N(x_{n} | \mu_{k}, \Sigma_{k})^{z_{k}}
\end{equation}
Applying Bayes' rule and simplifying yields
\begin{equation}
\tag{4}
p(z_{k} = 1 | x_{n}) = \frac{\pi_{k} N(x_{n} | \mu_{k},\Sigma_{k})}{ \Sigma_{j=1}^{K} \pi_{j} N(x_{n} | \mu_{j}, \Sigma_{j})} = \gamma(z_{nk})
\end{equation}
Using the Expectation–Maximization (EM) algorithm we can obtain the optimal parameters $\mu_{k}^{*}$ and $\Sigma_{k}^{*}$ as below:
\begin{equation}
\tag{5}
\mu_{k}^{*} = \frac{\sum_{n=1}^{N} \gamma(z_{nk})\, x_{n}}{\sum_{n=1}^{N} \gamma(z_{nk})}
\end{equation}
\begin{equation}
\tag{6}
\Sigma_{k}^{*} = \frac{\sum_{n=1}^{N} \gamma(z_{nk})(x_{n}-\mu_{k}^{*})(x_{n} - \mu_{k}^{*})^{T}}{\sum_{n=1}^{N} \gamma(z_{nk})}
\end{equation}
Iterating the EM algorithm, these parameters converge to values that maximize the likelihood, which effectively determines the cluster assignment of each data point; we can then identify the CN Tower images among the MRI images.
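A compact NumPy sketch of one EM iteration implementing equations (4)–(6) above (variable names are illustrative; in practice one would add covariance regularization and a log-likelihood convergence test around this step):

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_step(X, pi, mu, Sigma):
    """One EM iteration for a GMM. X: (N, D); pi: (K,); mu: (K, D); Sigma: (K, D, D)."""
    N, _ = X.shape
    K = len(pi)
    # E-step: responsibilities gamma(z_nk), equation (4)
    dens = np.stack([multivariate_normal.pdf(X, mean=mu[k], cov=Sigma[k]) for k in range(K)], axis=1)
    gamma = pi * dens
    gamma /= gamma.sum(axis=1, keepdims=True)
    # M-step: equations (5) and (6), plus the usual update of the mixing weights
    Nk = gamma.sum(axis=0)                      # effective number of points per component
    mu_new = (gamma.T @ X) / Nk[:, None]
    Sigma_new = np.empty_like(Sigma)
    for k in range(K):
        d = X - mu_new[k]
        Sigma_new[k] = (gamma[:, k, None] * d).T @ d / Nk[k]
    pi_new = Nk / N
    return pi_new, mu_new, Sigma_new, gamma
```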
## Question 1.c
1. The computational complexity of the Gaussian Mixture Model is $\mathcal{O}(NKD^{3})$, where N is the number of samples, K is the number of clusters, and D is the dimension of a data point; this effectively reduces to $\mathcal{O}(N)$ when $N \gg K$ and $N \gg D$. In addition, GMMs can be implemented with parallel computation, so we can use GPUs to perform the computation since the dataset is very large. This reduces the computation time and yields results quickly.
2. We can use the above method to differentiate MRI from non-MRI images. If we have access to labels for each image in the dataset, we can train supervised AI/ML models to identify the images; otherwise we have to use unsupervised learning algorithms to differentiate between the two classes.
| 560f599f621f59a6c6f8b95f67dbba75674ac819 | 7,125 | ipynb | Jupyter Notebook | Task_2/Task2.ipynb | somesh636/Brainmaven_Tasks | 92072f51d99aac384c4955d71ae6d78d19d95c5e | [
"MIT"
]
| null | null | null | Task_2/Task2.ipynb | somesh636/Brainmaven_Tasks | 92072f51d99aac384c4955d71ae6d78d19d95c5e | [
"MIT"
]
| null | null | null | Task_2/Task2.ipynb | somesh636/Brainmaven_Tasks | 92072f51d99aac384c4955d71ae6d78d19d95c5e | [
"MIT"
]
| null | null | null | 60.897436 | 977 | 0.645053 | true | 1,536 | Qwen/Qwen-72B | 1. YES
2. YES | 0.857768 | 0.810479 | 0.695203 | __label__eng_Latn | 0.997942 | 0.453521 |