```python
%matplotlib inline
```
```python
import numpy as np
import matplotlib.pyplot as plt
from sympy.physics.hydrogen import R_nl
from sympy import integrate, oo, var
from numerov import radial_integral
from numerov.basis import generate_basis
```
```python
basis = list(generate_basis(range(4, 7)))
print(basis)
```
[(4.0, 0.0), (4.0, 1.0), (4.0, 2.0), (4.0, 3.0), (5.0, 0.0), (5.0, 1.0), (5.0, 2.0), (5.0, 3.0), (5.0, 4.0), (6.0, 0.0), (6.0, 1.0), (6.0, 2.0), (6.0, 3.0), (6.0, 4.0), (6.0, 5.0)]
```python
%%time
step = 0.0001
mat_numerov = np.zeros((len(basis), len(basis)))
for i, state_1 in enumerate(basis):
for j, state_2 in enumerate(basis):
mat_numerov[i, j] = radial_integral(*state_1, *state_2, step=step)
```
CPU times: user 815 ms, sys: 0 ns, total: 815 ms
Wall time: 815 ms
```python
fig, ax = plt.subplots()
p = ax.imshow(np.abs(mat_numerov))
plt.colorbar(p)
plt.show()
```
```python
%%time
var("r")
mat_sympy = np.zeros((len(basis), len(basis)))
for i, state_1 in enumerate(basis):
for j, state_2 in enumerate(basis):
mat_sympy[i, j] = integrate(R_nl(*state_2, r) * r**3 * R_nl(*state_1, r), (r, 0, oo)).evalf()
```
CPU times: user 20.5 s, sys: 67.2 ms, total: 20.6 s
Wall time: 20.6 s
```python
fig, ax = plt.subplots()
p = ax.imshow(np.abs(mat_sympy))
plt.colorbar(p)
plt.show()
```
```python
# diff
fig, ax = plt.subplots()
p = ax.imshow(np.abs(mat_sympy) - np.abs(mat_numerov))
plt.colorbar(p)
plt.show()
```
```python
n1, l1 = 12, 5
n2, l2 = 15, 4
```
```python
integral_sympy = integrate(R_nl(n1, l1, r) * r**3 * R_nl(n2, l2, r), (r, 0, oo)).evalf()
print(integral_sympy)
```
4.57318723103945
```python
integral_numerov = radial_integral(n1, l1, n2, l2, step=step)
print(integral_numerov)
```
4.573187231242028
```python
# fractional difference
abs(integral_numerov - integral_sympy) / integral_sympy
```
$\displaystyle 4.42975659646626 \cdot 10^{-11}$
```python
# compare step size
step_values = np.linspace(0.0001, 0.001, 100)
integral_values = np.array([radial_integral(n1, l1, n2, l2, step=st) for st in step_values])
# plot
fig, ax = plt.subplots()
ax.plot(step_values, 100.0 * (integral_values - integral_sympy) / integral_sympy, "x")
ax.axhline(0, c='grey')
ax.set_xlabel("step (a.u.)")
ax.set_ylabel("difference (%)")
plt.show()
```
```python
# sympy
%timeit integrate(R_nl(n1, l1, r) * r**3 * R_nl(n2, l2, r), (r, 0, oo)).evalf()
```
151 ms ± 22.8 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
```python
# numerov
%timeit radial_integral(n1, l1, n2, l2, step=step)
```
5.88 ms ± 677 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# PRAKTIKUM 13
`Solutions of Ordinary Differential Equations (ODEs) 2`
1. Runge-Kutta-Fehlberg (RKF45)
2. Runge-Kutta for systems of ordinary differential equations
3. Higher-order ordinary differential equations (order greater than 1)
4. ODE boundary value problems
    1. The _linear shooting_ method
    2. The finite-difference method
<hr style="border:2px solid black"> </hr>
# 1 Runge-Kutta-Fehlberg (RKF45)
RKF45 is a modification of the RK4 method that uses an adaptive step size h: the step size adjusts itself to the shape of the function (small h where the function is steep, larger h where it is flat).
At every iteration the solution is computed twice, once with a fourth-order method and once with a fifth-order method.
The difference between the two solutions at each iteration is used to set the step size h for the next iteration; the step size can grow or shrink as needed.
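In the implementation below, the local error estimate `err` (built from the difference between the fourth- and fifth-order results) is compared with the per-step tolerance $\delta$: the step is accepted when `err` $<\delta$, and the step size is rescaled with the standard safety-factor rule
$$ s = 0.84\left(\frac{\delta\,h}{\text{err}}\right)^{1/4}, $$
where $h$ is halved when $s<0.75$ and doubled when $s>1.5$, subject to the limits `hmin` and `hmax`.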
```julia
#%%RUNGE-KUTTA-FEHLBERG 4/5 METHOD
#%
#% Used to find the solution of the differential equation
#% dy/dt = f(t,y) with initial value y(a) = y0
#%
#% sol = rkf45(f,a,b,y0,M,delta)
#% Input  : f    -> the function f(t,y)
#%          a,b  -> lower and upper bounds of the solution interval
#%          y0   -> initial value y(a)=y0
#%          M    -> initial number of sub-intervals
#%          delta-> given per-step tolerance
#% Output : sol  -> ODE solution, sol=[T,Y]
#%
#% Used as a guide for the Numerical Methods practicum
#%
#% See also : heun, taylor, rungekutta
function rkf45(f,a,b,y0,M,delta)
M = round(M)
a2=1/4;b2=1/4;a3=3/8;b3=3/32;c3=9/32;
a4=12/13;b4=1932/2197;c4=-7200/2197;d4=7296/2197;
a5=1;b5=439/216;c5=-8;d5=3680/513;e5=-845/4104;
a6=1/2;b6=-8/27;c6=2;d6=-3544/2565;e6=1859/4104;f6=-11/40;
r1=1/360;r3=-128/4275;r4=-2197/75240;r5=1/50;r6=2/55;
n1=25/216;n3=1408/2565;n4=2197/4104;n5=-1/5;
big=1e15;
h=(b-a)/M;
hmin=h/64;
hmax=h*64;
maxi=200;
j=1;
Y = y0;
T = a;
br= b-0.001*abs(b);
err=NaN
while T[j]<b
if (T[j]+h)>br;h=b-T[j];end
#% Compute the coefficients
k1=h*f(T[j],Y[j]);
y2=Y[j]+b2*k1;
k2=h*f(T[j]+a2*h,y2);
y3=Y[j]+b3*k1+c3*k2;
k3=h*f(T[j]+a3*h,y3);
y4=Y[j]+b4*k1+c4*k2+d4*k3;
k4=h*f(T[j]+a4*h,y4);
y5=Y[j]+b5*k1+c5*k2+d5*k3+e5*k4;
k5=h*f(T[j]+a5*h,y5);
y6=Y[j]+b6*k1+c6*k2+d6*k3+e6*k4+f6*k5;
k6=h*f(T[j]+a6*h,y6);
err=abs(r1*k1+r3*k3+r4*k4+r5*k5+r6*k6);
ynew=Y[j]+n1*k1+n3*k3+n4*k4+n5*k5;
#% Update the step size
if (err<delta) || (h<2*hmin)
Y = [Y; ynew];
if (T[j]+h)>br
T = [T; b];
else
T = [T; T[j]+h];
end
j=j+1;
end
if (err==0)
s=0;
else
s=0.84*(delta*h/err)^(0.25);
end
if (s<0.75)&&(h>2*hmin);h = h/2;end
if (s>1.50)&&(2*h<hmax);h = 2*h;end
if abs(Y[j])>big || maxi==j;break;end
end
sol = [T Y];
return sol
end
```
### Example 1
Consider the ordinary differential equation $$y'=1+y^2$$ with initial value $y(0)=0$. Below is the numerical solution of this ODE using the RKF45 method on $ t\in [0,1.4]$ with error tolerance $2\times10^{-5}$ and initial step size $h_0=0.2$.
```julia
# Compute the solution
f(t,y) = 1+y^2
a = 0
b = 1.4
y0 = 0
h0 = 0.2
M = (b-a)/h0
delta = 2e-5
sol = rkf45(f,a,b,y0,M,delta)
```
```julia
using Plots
```
```julia
# Plot the solution
t = sol[:,1];
y = sol[:,2];
plot(t,y,legend = :false)
scatter!(t,y)
```
# 2 Runge-Kutta for Systems of ODEs
Systems of differential equations arise frequently in mathematical modeling. A system of differential equations is usually visualized in a phase plane and a solution plane, and it can be solved numerically with the fourth-order Runge-Kutta method shown below.
```julia
#%%FOURTH-ORDER RUNGE-KUTTA METHOD for systems of ODEs
#%
#% Used to find the solution of the system of differential
#% equations dx/dt = f(t,x,y) with x(a) = x0
#%           dy/dt = g(t,x,y) with y(a) = y0
#%
#% sol = rungekuttasistem(f,a,b,y0,M)
#% Input  : f   -> function containing f(t,[x,y]) and g(t,[x,y])
#%          a,b -> lower and upper bounds of the solution interval
#%          y0  -> initial values y(a)=[x0,y0]
#%          M   -> number of sub-intervals
#% Output : sol -> ODE solution, sol=[T,X,Y]
#%
#% Used as a guide for the Numerical Methods practicum
#%
#% See also : heun, taylor, rungekutta
function rungekuttasistem(f,a,b,y0,M)
M = Int(M)
h = (b-a)/M;
T = a:h:b;
Y = Array{Float64}(undef,M+1,length(y0))
Y[1,:] = y0;
for k = 1:M
f1 = f(T[k] ,Y[k,:] );
f2 = f(T[k]+h/2 ,Y[k,:]+f1*h/2 );
f3 = f(T[k]+h/2 ,Y[k,:]+f2*h/2 );
f4 = f(T[k]+h ,Y[k,:]+f3*h );
Y[k+1,:] = Y[k,:] + h/6*(f1+2*f2+2*f3+f4);
end
sol = [T Y];
end
```
### Example 2
Consider the following system of differential equations:
\begin{align}
\dot{x}= x+2y \\
\dot{y}=3x+2y
\end{align}
with $ x(0)=6 $ and $ y(0)=4 $.
The steps below produce the solution-plane plot of this system on the interval $ t\in[0, 0.2] $ with step size $ h = 0.02 $.
```julia
# Compute the solution; let x = z[1] and y = z[2]
f(t,z) = [ z[1]+2*z[2] , 3*z[1]+2*z[2] ]
a = 0
b = 0.2
y0 = [6, 4]
h = 0.02
M = (b-a)/h
sol = rungekuttasistem(f,a,b,y0,M)
```
```julia
# Phase-plane plot
t = sol[:,1];
x = sol[:,2];
y = sol[:,3];
plt = plot(x,y,label="Bidang Fase",legend=:topleft,xlabel="x",ylabel="y")
```
```julia
# Plot the solution
t = sol[:,1];
x = sol[:,2];
y = sol[:,3];
plt = plot(t,x,label="solusi x",legend=:topleft)
plot!(t,y,label="solusi y")
```
# 3. Higher-Order Ordinary Differential Equations (order greater than 1)
Besides first-order differential equations, mathematical problems often take the form of differential equations of order 2 or higher. Such problems can be solved by transforming the equation into a system of first-order differential equations, which is then solved with the fourth-order Runge-Kutta method.
### Example 3
Consider the following second-order differential equation:
\begin{equation}\label{eq:13 orde2}
x''(t)+4x'(t)+5x(t)=0
\end{equation}
with initial values $ x(0)=3 $ and $ x'(0)=-5 $. Below, this problem is solved with the Runge-Kutta method on the interval $ [0,5] $ using $ 50 $ sub-intervals, and the numerical solution is compared with the exact solution $$ x(t)=3e^{-2t}\cos(t)+e^{-2t}\sin(t) $$
#### Transforming the second-order equation into a system of first-order equations
Let $ x'(t)=y(t) $, so that the differential equation takes the form
\begin{align*}
&\ x''(t)+4x'(t)+5x(t)=0 \\
\Leftrightarrow &\ y'(t)+4y(t)+5x(t)=0 \\
\Leftrightarrow &\ y'(t)=-4y(t)-5x(t)
\end{align*}
and the initial value $ x'(0)=-5 $ becomes $ y(0)=-5 $.
In full, the new system equivalent to the differential equation above can be written as
\begin{align}
x'&=y \\
y'&=-4y-5x
\end{align}
with $x(0)=3$ and $ y(0)=-5 $.
```julia
# Compute the solution
f(t,z) = [ z[2] , -4*z[2]-5*z[1] ]
a = 0
b = 5
y0 = [3, -5]
M = 50
sol = rungekuttasistem(f,a,b,y0,M)
```
```julia
# Plot the solution
t = sol[:,1];
x = sol[:,2];
plt = plot(t,x,label="x(t)")
```
```julia
# Compute the error
xt(t) = 3*exp(-2*t)*cos(t)+exp(-2*t)*sin(t)
galat = abs.(x .- xt.(t))
```
```julia
# Plot the error
plt = plot(t,galat)
```
# 4 ODE Boundary Value Problems
A boundary value problem is another form of ordinary differential equation problem.
Its general form is a second-order differential equation
$$ x''=f(x',x,t) $$
with boundary values on the interval $ [a,b] $, namely $ x(a)=\alpha $ and $ x(b)=\beta $.
In this material we study two methods for computing numerical solutions of boundary value problems: the _linear shooting_ method and the _finite-difference_ method.
## A. The _Linear Shooting_ Method
The basic idea of the _linear shooting_ method is to transform the boundary value problem into two second-order differential equations with initial values.
Suppose the boundary value problem is $ x''= p(t)x'+q(t)x+r(t) $ with boundary values $ x(a)=\alpha $ and $ x(b)=\beta $. The transformation of this boundary value problem is
$ u''=p(t)u'+q(t)u+r(t) $ with $ u(a)=\alpha $ and $ u'(a)=0 $
$ v''=p(t)v'+q(t)v $ with $ v(a)=0 $ and $ v'(a)=1 $
These equations can be solved with the fourth-order Runge-Kutta method by rewriting them as systems of first-order differential equations, which gives the solutions $ u(t) $ and $ v(t) $. The solution $ x(t) $ of the boundary value problem is then obtained from the following equation.
\begin{equation}\label{eq:13 linshoot}
x(t) = u(t)+\dfrac{\beta-u(b)}{v(b)}v(t)
\end{equation}
```julia
#%%LINEAR SHOOTING METHOD
#%
#% Used to find the solution of the differential equation
#%      x''=p(t)x'+q(t)x+r(t)
#% with boundary values x(a) = alpha , x(b)=beta
#%
#% this function requires the function rungekuttasistem
#%
#% solusi = linearshooting(F1,F2,a,b,alpha,beta,M)
#% Input  : F1,F2     -> the ODE systems from the u and v transformations
#%          a,b       -> lower and upper bounds of the solution interval
#%          alpha,beta-> boundary values x(a)=alpha, x(b)=beta
#%          M         -> number of sub-intervals
#% Output : solusi    -> ODE solution, sol=[T,X]
#%
#% Used as a guide for the Numerical Methods practicum
#%
#% See also : rkf45, findiff, rungekuttasistem
function linearshooting(F1,F2,a,b,alpha,beta,M)
M = Int(M)
Za = [alpha,0];
sol = rungekuttasistem(F1,a,b,Za,M);
U = sol[:,2];
Za = [0,1];
sol = rungekuttasistem(F2,a,b,Za,M);
V = sol[:,2];
T = sol[:,1];
X = U + (beta-U[M+1])*V/V[M+1];
solusi = [T X];
end
```
### Example 4
Consider the following boundary value problem:
\begin{equation}\label{eq:13 kasus 13.3}
x''(t)=\dfrac{2t}{1+t^2}x'(t)-\dfrac{2}{1+t^2}x(t)+1
\end{equation}
with boundary values $ x(0)=1.25 $ and $ x(4)=-0.95 $ on the interval $ [0,4] $.
The steps below solve this boundary value problem using the _linear shooting_ method.
#### Transform the boundary value problem into two second-order initial value problems.
Based on the linear shooting transformation above, the two second-order initial value problems equivalent to this boundary value problem are
$u''=\dfrac{2t}{1+t^2}u'-\dfrac{2}{1+t^2}u+1$ with $ u(0)=1.25 $ and $ u'(0)=0 $
and
$v''=\dfrac{2t}{1+t^2}v'-\dfrac{2}{1+t^2}v$ with $ v(0)=0 $ and $ v'(0)=1 $.
#### Transform each second-order initial value problem into a system of first-order differential equations.
Each second-order initial value problem above can be written as a first-order system:
$u' =u_2 $
$u_2'=\dfrac{2t}{1+t^2}u_2-\dfrac{2}{1+t^2}u+1$
with $ u(0)=1.25 $ and $ u_2(0)=0 $. And,
$v' =v_2$
$v_2' =\dfrac{2t}{1+t^2}v_2-\dfrac{2}{1+t^2}v$
with $ v(0)=0 $ and $ v_2(0)=1 $.
```julia
# Compute the solution
F1(t,z) = [ z[2], 2*t/(1+t^2)*z[2]-2/(1+t^2)*z[1]+1 ]
F2(t,z) = [ z[2], 2*t/(1+t^2)*z[2]-2/(1+t^2)*z[1] ]
a = 0
b = 4
alpha = 1.25
beta = -0.95
M = 40
solusi = linearshooting(F1,F2,a,b,alpha,beta,M)
```
```julia
# Plot the solution
t = solusi[:,1];
x = solusi[:,2];
plt = plot(t,x,legend=:false)
```
# B. The Finite-Difference Method
The basic idea of the _finite-difference_ method is to convert the differential equation with boundary values into finite-difference formulas, which lead to a linear system (see also the lecture notes).
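For reference, a sketch of that linear system, consistent with how the matrix is assembled in the code below: approximating $x''(t_i)\approx (x_{i+1}-2x_i+x_{i-1})/h^2$ and $x'(t_i)\approx (x_{i+1}-x_{i-1})/(2h)$ at the interior grid points $t_i=a+ih$, $i=1,\dots,M-1$, turns $x''=p(t)x'+q(t)x+r(t)$ into the tridiagonal system
\begin{equation}
\left(-1-\tfrac{h}{2}p(t_i)\right)x_{i-1}+\left(2+h^2q(t_i)\right)x_i+\left(-1+\tfrac{h}{2}p(t_i)\right)x_{i+1}=-h^2 r(t_i),
\end{equation}
where the known boundary values $x_0=\alpha$ and $x_M=\beta$ are moved to the right-hand side of the first and last equations.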
```julia
#%%FINITE-DIFFERENCE METHOD
#
#% Used to find the solution of the differential equation
#%      x''=p(t)x'+q(t)x+r(t)
#% with boundary values x(a) = alpha , x(b)=beta
#
#% solusi = findiff(p,q,r,a,b,alpha,beta,M)
# Input  : p,q,r     -> the functions p(t), q(t) and r(t)
#%         a,b       -> lower and upper bounds of the solution interval
#%         alpha,beta-> boundary values x(a)=alpha, x(b)=beta
#          M         -> number of sub-intervals
#% Output : solusi   -> ODE solution, sol=[T,X]
#%
#% Used as a guide for the Numerical Methods practicum
#%
#% See also : rkf45, linearshooting, rungekuttasistem
using LinearAlgebra
function findiff(p,q,r,a,b,alpha,beta,M)
h = (b-a)/M;
T = a:h:b;
T = T[2:end-1];
#% Build the matrix B (right-hand side)
B = -h^2*r.(T);
B[1] = B[1] + (1+h/2*p(T[1]))*alpha;
B[end] = B[end] + (1-h/2*p(T[end]))*beta;
# Build matrix A - diagonal part
Ad = 2 .+h^2*q.(T);
#% Build matrix A - sub-diagonal part
Tbawah = T[2:end];
Abawah = -1 .-h/2*p.(Tbawah);
#% Build matrix A - super-diagonal part
Tatas = T[1:end-1];
Aatas = -1 .+h/2*p.(Tatas);
A = Tridiagonal(Abawah,Ad,Aatas)
#% Solve AX=B
X = A\B;
T = [a; T; b];
X = [alpha; X ;beta];
solusi = [T X];
end
```
### Example 5
Consider the following boundary value problem:
\begin{equation}
x''(t)=\dfrac{2t}{1+t^2}x'(t)-\dfrac{2}{1+t^2}x(t)+1
\end{equation}
with boundary values $ x(0)=1.25 $ and $ x(4)=-0.95 $ on the interval $ [0,4] $.
The steps below solve this boundary value problem using the _finite-difference_ method.
```julia
# Compute the solution
p(t)= 2*t/(1+t^2);
q(t)= -2/(1+t^2);
r(t)= 1 + 0*t;
a = 0
b = 4
alpha = 1.25
beta = -0.95
M = 40
solusi = findiff(p,q,r,a,b,alpha,beta,M)
```
```julia
# Plot the solution
t = solusi[:,1];
x = solusi[:,2];
plt = plot(t,x,legend=:false)
```
<hr style="border:2px solid black"> </hr>
# Exercises
Work on the following exercises during the practicum session.
`Name: ________`
`NIM (student ID): ________`
### Exercise 1
Consider the differential equation
$$ \dfrac{dy}{dt} = -y-2t-1 ,\ \text{ with } y(0)=2 $$
Use the Runge-Kutta-Fehlberg 4/5 (RKF45) method to solve the ODE above with tolerances $ \delta=10^{-6} $, $ 10^{-12} $ and $ 10^{-16} $, then compare with the exact solution $ y(t)=e^{-t}-2t+1 $ and plot the solution together with the adaptive step points as in **Example 1**.
```julia
```
### Exercise 2
Repeat the steps of **Example 2** to draw the phase-plane and solution-plane plots of the following system:
$$
\begin{align}
\dot{x}&= -2x-y \\
\dot{y}&= x-y \\
\end{align}
$$
with $ x(0)=6 $ and $ y(0)=0 $ on the interval $ t\in[0,\ 5] $ with step size $ h=0.1 $.
```julia
```
### Exercise 3
Consider the following second-order differential equation:
$$ 2x''(t)-5x'(t)-3x(t)=45e^{2t} $$
with initial values $ x(0)=2 $ and $ x'(0)=1 $. Solve this problem with the Runge-Kutta method on the interval $ [0,\ 2] $ with $ h=0.05 $, then compare the numerical solution with the exact solution $ x(t)=4e^{-t/2}+7e^{3t}-9e^{2t} $ following the steps of **Example 3**.
```julia
```
### Exercise 4
Consider the following boundary value problem:
$$ x''(t)=-\frac{2}{t}\ x'(t)+\frac{2}{t^2}\ x(t)+\frac{\sin(t)}{t^2} $$
with boundary values $ x(1)=-0.02 $ and $ x(6)=0.02 $ on the interval $ [1,6] $. Use the _linear shooting_ method to solve this boundary value problem following the steps of **Example 4**.
```julia
```
### Exercise 5
Consider the following boundary value problem:
$$ x''(t)=-\frac{2}{t}\ x'(t)+\frac{2}{t^2}\ x(t)+\frac{\sin(t)}{t^2} $$
with boundary values $ x(1)=-0.02 $ and $ x(6)=0.02 $ on the interval $ [1,6] $. Use the _finite difference_ method to solve this boundary value problem following the steps of **Example 5**.
```julia
```
Function basis methods
======================
```
%matplotlib inline
```
```
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rcParams.update({'font.size': 14})
```
Introduction to function basis methods
--------------------------------------
### Boundary Value Problems
Considering simple boundary value problem
$$y'' = f(x, y, y'), \quad y(a) = A, \,\, y(b) = B, \quad x \in [a,b].$$
Boundary conditions are only examples here.
Have considered the shooting method (accurate, efficient, may not work)
and the finite difference (relaxation) method (not particularly accurate
or efficient, nearly always works). Both methods require a *grid* with
$n$ points. Accuracy of method depends on $h \propto n^{-1}$.
Can instead use a function basis, independently of a grid.
### Function basis expansion
Aim to solve problem
$${\cal L} y = f,$$
where ${\cal L}$ is a differential operator.
Function basis methods *assume* the solution $y(x)$ can be written
$$y(x) = \sum_{j} c_j u_j (x);$$
constants $c_j$ are *basis coefficients*, $u_j(x)$ are *basis
functions*.
*Choose* “simple” basis functions $u_j$. Get *approximate* solution by
truncating series:
$$y(x) = \sum_{j}^n c_j u_j (x).$$
Additional conditions give linear system for unknowns ($c_j$), defining
solution everywhere.
Collocation methods
-------------------
Simplest function basis method uses a grid. *Collocation* method: fix
$c_j$ by insisting that BVP satisfied exactly at fixed set of points.
1. Assume approximate solution has form ($n$ fixed)
$$\sum_{j}^n c_j u_j (x).$$
2. Boundary conditions $\implies$ two coefficients (e.g. $c_{0, 1}$).
3. Evaluate ${\cal L} y$ at collocation points $\{x_j\}$ $\implies$ linear system.
4. Solve the linear system to give the basis coefficients $c_j$.
### Example
We consider the problem (with exact solution $\exp(x)$)
$$y'' - y = 0, \quad y(0) = 1, \,\, y(1) = e.$$
Choose basis $u_j = x^j \implies y = \sum c_j x^j$. Boundary conditions:
$$c_0 = 1, \quad c_1 = e - \sum_{j \ne 1}^n c_j.$$
Differential operator ${\cal L} y = y'' - y$:
$$\sum_{j=2}^n c_j \left( j (j - 1) x^{j-2} \right) - \sum_{k=0}^n
c_k x^k = 0.$$
Linear system $A {\boldsymbol{c}} = {\boldsymbol{b}}$ ($\{ x_l\}$
collocation points)
$$A_{j l} = \left( j (j - 1) x_l^{j-2} \right) - x_l^j , \quad
{\boldsymbol{b}} \text{ from boundary conditions.}$$
When using only three polynomial basis functions we have
$$c_0 = 1, \quad c_1 = e - 1 - c_2.$$
We then evaluate the system at the collocation point $x = 1/2$ to get
$$\begin{aligned}
&& 0 & = 2 \cdot 1 \cdot \left(\tfrac{1}{2}\right)^0 c_2 - 1 -
\tfrac{1}{2} \left( e - 1 - c_2 \right) -
\left(\tfrac{1}{2}\right)^2 c_2 \\
\Rightarrow && c_2 & = \tfrac{2}{9} (1 + e).
\end{aligned}$$
This, combined with boundary conditions, gives approximate solution
$$y = 1 + \tfrac{1}{9} \left( ( 7e - 11) x + 2(1 + e) x^2 \right).$$
### Example: 3
With only a few points the method is very accurate.
Use Chebyshev collocation points
$$x_k = \tfrac{1}{2} \left( 1 + \cos \left( \frac{(k-1) \pi}{n-1}
\right) \right)$$
to check convergence with $n$.
Convergence is very fast.
```
x_exact = np.linspace(0.0, 1.0, 5000)
y_exact = np.exp(x_exact)
```
```
def CollocationPoly(nbasis):
"""Find the coefficients in the expansion for the above problem."""
# Collocation points are the Chebyshev points
x = 0.5 * (1.0 + np.cos(np.pi * np.array(range(nbasis-1,-1,-1)) / (nbasis-1)))
b = np.zeros_like(x) # The RHS vector
A = np.zeros((nbasis, nbasis)) # The matrix
# Use the left boundary to fix the first coefficient
b[0] = 1.0
A[0, 0] = 1.0
# Use the right boundary to fix the last coefficient
b[-1] = np.exp(1.0)
for j in range(nbasis):
A[-1, j] = 1.0
# Fill the rest of the matrix
for i in range(1, nbasis-1):
for j in range(nbasis):
# The -y term
A[i, j] -= x[i]**(j)
for j in range(2, nbasis):
# The y'' term
A[i, j] += j*(j-1)*x[i]**(j-2)
# Solve for coefficients
c = np.linalg.solve(A, b)
return c
```
```
# Do the simple case with 3 basis functions
c3 = CollocationPoly(3)
y3 = np.zeros_like(x_exact)
for j in range(3):
y3 += c3[j] * x_exact**j
# Plot result
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
ax.plot(x_exact, y_exact, 'k-', label = "Exact solution")
ax.plot(x_exact, y3, 'b--', label = "Using 3 collocation points")
ax.legend(loc=2)
ax.set_xlabel(r"$x$")
ax.set_ylabel(r"$y$");
```
```
# Check the convergence with resolution.
nbasis = range(3, 16)
collocation_error = np.zeros((len(nbasis),))
for i in range(len(nbasis)):
c = CollocationPoly(nbasis[i])
yn = np.zeros_like(x_exact)
for j in range(nbasis[i]):
yn += c[j] * x_exact**j
collocation_error[i] = np.linalg.norm(yn - y_exact, 2)
# Plot errors
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
ax.semilogy(nbasis, collocation_error, 'kx')
ax.set_xlabel("Number of basis functions")
ax.set_ylabel(r"$\|$"+"Error"+r"$\|_2$");
```
Norm method
-----------
Collocation
1. not independent of grid;
2. convergence depends on location of points;
3. BVP solved only at points, otherwise unconstrained.
Instead try to get best solution “on average”. Minimize average error in
some norm,
$$\| {\cal L} (y) (x) - f(x) \|.$$
Assume a function basis expansion
$$y = \sum_{j}^n c_j u_j (x).$$
Gives a minimization problem for the coefficients $c_j$.
### Example
For the example given above we have
$${\cal L} = y'' - y, \quad f = 0,$$
and hence we want to minimize
$$\| y'' - y \|.$$
Norm is over whole interval – so integrate. Typically use 2-norm:
$$\begin{aligned}
F(c_j) & = \| y'' - y \|_2^2 \\
& = \int_0^1 \left[ y'' - y \right]^2 \, \text{d}x.
\end{aligned}$$
This is function of $c_j$ using function basis assumption.
### Example: 2
As normal fix two coefficients using boundary conditions. Using three
basis functions
$$c_0 = 1, \quad c_1 = e - 1 - c_2.$$
Explicitly computing the quadratic form gives us the “average error”
$$F(c_j) = \tfrac{47}{10} c_2^2 - \tfrac{13}{6} (1 + e) c_2 +
\tfrac{1+e+e^2}{3}.$$
To minimize, differentiate with respect to $c_2$ and set to zero, giving
$$c_2 = \tfrac{65}{282} (1 + e).$$
### Example: 3
Even with a few coefficients the result is very accurate.
Additional analysis required to set up general problem.
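This minimization can be double-checked symbolically. The short sketch below is not part of the original lecture code and assumes the `sympy` package is available; it rebuilds $F(c_2)$ for the three-term polynomial basis and confirms the minimizer quoted above.
```
import sympy as sp

x, c2 = sp.symbols('x c2')
# Three-term basis with c0, c1 fixed by the boundary conditions y(0)=1, y(1)=e
y = 1 + (sp.E - 1 - c2)*x + c2*x**2
# Average error F(c2) = || y'' - y ||_2^2 on [0, 1]
F = sp.integrate((sp.diff(y, x, 2) - y)**2, (x, 0, 1))
c2_min = sp.solve(sp.diff(F, c2), c2)[0]
print(sp.simplify(c2_min - sp.Rational(65, 282)*(1 + sp.E)))  # prints 0
```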
Ritz methods
------------
General framework: working in Hilbert space $L_2$ define *inner product*
$$< u, v > = \int_a^b u \cdot v \, \text{d} x.$$
Need ${\cal L}$ to be symmetric and positive definite.
Inner product can *measure distance* on $L_2$: i.e., measure the
distance to the “exact solution” of the ODE.
So define *energy* of the element $u$ as
$$< {\cal L}(u), u > \,\, \ge 0,$$
where inequality follows by integration by parts. Then to get solution
minimize
$$J(u) = < {\cal L}(u), u > - 2 < f, u >.$$
### Ritz method applied to a function basis
Can minimize the functional
$$J(y) = < {\cal L}(y), y > - 2 < f, y >$$
when $y$ is given with respect to a function basis,
$$y = \sum_j^n c_j u_j(x).$$
Linearity means this is equivalent to minimizing quadratic form
$$J(y) = \sum_{m,k}^n c_m c_k < {\cal L}(u_m), u_k > - 2 \sum_m c_m
< f, u_m >.$$
Know $u_j$, so re-express as a condition on $c_j$.
### Ritz method applied to a function basis
$$y = \sum_j^n c_j u_j(x).$$
$$J(y) = \sum_{m,k}^n c_m c_k < {\cal L}(u_m), u_k > - 2 \sum_m c_m
< f, u_m >.$$
Minimizing this functional requires
$$\begin{align}
&& \frac{\partial{}}{\partial{c_m}} J & = 0, & m = 1, \dots, n, \\
\Rightarrow && \sum_m^n c_m < {\cal L}(u_m), u_k > & = < f, u_k >,
& k = 1, \dots, n.
\end{align}$$
This is just a linear system to solve.
### Example
For the boundary value problem
$$y'' = -\cos(x), \quad y(0) = 0 = y(\pi)$$
we have the exact solution $y(x) = \cos(x) + 2 x /\pi - 1$. Symmetry of
problem suggests the function basis
$$u_j = \sin(2 j x):$$
satisfies boundary conditions and symmetry.
We need to satisfy
$$\sum_m^n c_m < {\cal L}(u_m), u_k > = < f, u_k >.$$
Vector of unknowns $c_m$; known matrix has elements $A_{m,k} = <
{\cal L}(u_m), u_k >$; known vector has elements $< f, u_k >$.
### Example: 2
The *matrix* $A_{m,k} = < {\cal L}(u_m), u_k >$ has elements
$$\begin{aligned}
< {\cal L}(u_m), u_k > & = -\int_0^{\pi} 4 m^2 \sin(2 m x)
\sin(2 k
x) \, \text{d} x \\
& =
\begin{cases}
-2 \pi k^2, & \text{if $m = k$}, \\
0, & \text{if $m \ne k$}
\end{cases}.
\end{aligned}$$
The *vector* $< f, u_k >$ has elements
$$\begin{aligned}
< f, u_k > & = \int_0^{\pi} -\cos(x) \sin(2 k x) \, \text{d} x \\
& = \frac{4 k}{4 k^2 - 1}.
\end{aligned}$$
$$\begin{aligned}
< {\cal L}(u_m), u_k > & = \begin{cases}
-2 \pi k^2, & \text{if $m = k$}, \\
0, & \text{if $m \ne k$}
\end{cases}, \\
< f, u_k > & = \frac{4 k}{4 k^2 - 1}.
\end{aligned}$$
This gives a *diagonal* system which can be solved to give
$$c_m = \frac{2}{\pi m (4 m^2 - 1)}$$
and hence the approximation
$$y = \frac{2}{\pi} \sum_{m=1}^n \frac{\sin(2 m x)}{m (4 m^2 - 1)}$$
which converges to the Fourier series in the limit $n \rightarrow
\infty$.
### Example: 3
In fact even with very few coefficients the Ritz method is accurate.
It also converges quickly.
```
x_exact = np.linspace(0.0, np.pi, 5000)
y_exact = np.cos(x_exact) + 2.0 * x_exact / np.pi - 1.0
```
```
def RitzCoefficients(nbasis):
"""Returns the Ritz coeffcients for the problem above. Note: cheats and use the form computed analytically above."""
A = np.zeros((nbasis,nbasis))
b = np.zeros((nbasis,))
for i in range(nbasis):
b[i] = 4.0 * (i+1) / (4.0 * (i+1)**2 - 1.0)
A[i, i] = 2.0 * np.pi * (i+1)**2
c = np.linalg.solve(A, b)
return c
```
```
# Plot the result with 3 coefficients
c3 = RitzCoefficients(3)
y3 = np.zeros_like(x_exact)
for j in range(3):
y3 += c3[j] * np.sin(2.0*(j+1)*x_exact)
# Plot result
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
ax.plot(x_exact, y_exact, 'k-', label = "Exact solution")
ax.plot(x_exact, y3, 'b--', label = "Using 3 collocation points")
ax.legend(loc=1)
ax.set_xlabel(r"$x$")
ax.set_ylabel(r"$y$");
```
```
# Check the convergence with resolution.
nbasis = range(3, 33, 2)
collocation_error = np.zeros((len(nbasis),))
for i in range(len(nbasis)):
c = RitzCoefficients(nbasis[i])
yn = np.zeros_like(x_exact)
for j in range(nbasis[i]):
yn += c[j] * np.sin(2.0*(j+1)*x_exact)
collocation_error[i] = np.linalg.norm(yn - y_exact, 2)
# Plot errors
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
ax.loglog(nbasis, collocation_error, 'kx')
ax.set_xlabel("Number of basis functions")
ax.set_ylabel(r"$\|$"+"Error"+r"$\|_2$");
```
Summary
=======
- When extreme accuracy is essential then function basis methods are
often used.
- Collocation methods are popular; given the right choice of basis and
collocation points the convergence can be faster than any
polynomial, giving floating point accuracy with a few dozen basis
functions at most.
- Norm and Ritz type methods do not use a grid at all; this can make
them useful in complex domains, particular as a small part of a
larger method (see finite elements).
- The complexity of these methods, particularly norm and Ritz, makes
the work involved in setting up the problem quite extreme. The
actual coding of the method is, however, very small for the
impressive accuracy.
# The Euler equations of gas dynamics
This is the first of two notebooks on the Euler equations. In this notebook, we discuss the equations and the structure of the exact solution to the Riemann problem. In [Euler_approximate_solvers.ipynb](Euler_approximate_solvers.ipynb), we will investigate approximate Riemann solvers.
## Fluid dynamics
In this chapter we study the system of hyperbolic PDEs that governs the motions of a compressible gas in the absence of viscosity. These consist of conservation laws for **mass, momentum**, and **energy**. Together, they are referred to as the **compressible Euler equations**, or simply the Euler equations. Our discussion here is fairly brief; for much more detail see <cite data-cite="fvmhp"><a href="riemann.html#fvmhp">(LeVeque, 2002)</a></cite> or <cite data-cite="toro2013riemann"><a href="riemann.html#toro2013riemann">(Toro, 2013)</a></cite>.
### Mass conservation
We will use $\rho(x,t)$ to denote the fluid density and $u(x,t)$ for its velocity. Then the equation for conservation of mass is just the familiar **continuity equation**:
$$\rho_t + (\rho u)_x = 0.$$
### Momentum conservation
We discussed the conservation of momentum in a fluid already in [Acoustics.ipynb](Acoustics.ipynb). For convenience, we review the ideas here. The momentum density is given by the product of mass density and velocity, $\rho u$. The momentum flux has two components. First, the momentum is transported in the same way that the density is; this flux is given by the momentum density times the velocity: $\rho u^2$.
To understand the second term in the momentum flux, we must realize that a fluid is made up of many tiny molecules. The density and velocity we are modeling are average values over some small region of space. The individual molecules in that region are not all moving with exactly velocity $u$; that's just their average. Each molecule also has some additional random velocity component. These random velocities are what accounts for the **pressure** of the fluid, which we'll denote by $p$. These velocity components also lead to a net flux of momentum. Thus the momentum conservation equation is
$$(\rho u)_t + (\rho u^2 + p)_x = 0.$$
This is very similar to the conservation of momentum equation in the shallow water equations, as discussed in [Shallow_water.ipynb](Shallow_water.ipynb), in which case $hu$ is the momentum density and $\frac 1 2 gh^2$ is the hydrostatic pressure. For gas dynamics, a different expression must be used to compute the pressure $p$ from the conserved quantities. This relation is called the *equation of state* of the gas, as discussed further below.
### Energy conservation
The energy has two components: internal energy density $\rho e$ and kinetic energy density $\rho u^2/2$:
$$E = \rho e + \frac{1}{2}\rho u^2.$$
Like the momentum flux, the energy flux involves both bulk transport ($Eu$) and transport due to pressure ($pu$):
$$E_t + (u(E+p))_x = 0.$$
### Equation of state
You may have noticed that we have 4 unknowns (density, momentum, energy, and pressure) but only 3 conservation laws. We need one more relation to close the system. That relation, known as the equation of state, expresses how the pressure is related to the other quantities. We'll focus on the case of a polytropic ideal gas, for which
$$p = \rho e (\gamma-1).$$
Here $\gamma$ is the ratio of specific heats, which for air is approximately 1.4.
## The Euler equations
We can write the three conservation laws as a single system $q_t + f(q)_x = 0$ by defining
\begin{align}
q & = \begin{pmatrix} \rho \\ \rho u \\ E\end{pmatrix}, &
f(q) & = \begin{pmatrix} \rho u \\ \rho u^2 + p \\ u(E+p)\end{pmatrix}.
\label{euler_conserved}
\end{align}
This is the 1D Euler system. In three dimensions, the equations are similar. We have two additional velocity components $v, w$, and their corresponding fluxes. Additionally, we have to account for fluxes in the $y$ and $z$ directions. We can write the full system as
$$ q_t + f(q)_x + g(q)_y + h(q)_z = 0$$
with
\begin{align}
q & = \begin{pmatrix} \rho \\ \rho u \\ \rho v \\ \rho w \\ E\end{pmatrix}, &
f(q) & = \begin{pmatrix} \rho u \\ \rho u^2 + p \\ \rho u v \\ \rho u w \\ u(E+p)\end{pmatrix} &
g(q) & = \begin{pmatrix} \rho v \\ \rho uv \\ \rho v^2 + p \\ \rho v w \\ v(E+p)\end{pmatrix} &
h(q) & = \begin{pmatrix} \rho w \\ \rho uw \\ \rho vw \\ \rho w^2 + p \\ w(E+p)\end{pmatrix}.
\end{align}
In the rest of the chapter we focus on the 1D system.
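As a quick reference, here is a minimal sketch of the conversions between primitive and conserved variables and of the 1D flux implied by the formulas above. It is illustrative only (it is not taken from the `Euler` module used later in this chapter, and the function names are made up here):
```python
import numpy as np

def primitive_to_conserved(rho, u, p, gamma=1.4):
    """(rho, u, p) -> (rho, rho*u, E), using E = p/(gamma-1) + rho*u**2/2."""
    E = p / (gamma - 1.) + 0.5 * rho * u**2
    return np.array([rho, rho * u, E])

def conserved_to_primitive(q, gamma=1.4):
    """(rho, rho*u, E) -> (rho, u, p)."""
    rho, mom, E = q
    u = mom / rho
    p = (gamma - 1.) * (E - 0.5 * rho * u**2)
    return rho, u, p

def flux(q, gamma=1.4):
    """1D Euler flux f(q) for conserved variables q = (rho, rho*u, E)."""
    rho, u, p = conserved_to_primitive(q, gamma)
    return np.array([rho * u, rho * u**2 + p, u * (q[2] + p)])

# Example: flux of a state with rho=1, u=0.5, p=1
print(flux(primitive_to_conserved(1.0, 0.5, 1.0)))
```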
## Hyperbolic structure of the 1D Euler equations
In our discussion of the structure of these equations, it is convenient to work with the primitive variables $(\rho, u, p)$ rather than the conserved variables. The quasilinear form is particularly simple in the primitive variables:
\begin{align} \label{euler_primitive}
\begin{bmatrix} \rho \\ u \\ p \end{bmatrix}_t +
\begin{bmatrix} u & \rho & 0 \\ 0 & u & 1/\rho \\ 0 & \gamma \rho & u \end{bmatrix} \begin{bmatrix} \rho \\ u \\ p \end{bmatrix}_x & = 0.
\end{align}
### Characteristic velocities
The eigenvalues of the flux Jacobian $f'(q)$ for the 1D Euler equations are:
\begin{align}
\lambda_1 & = u-c & \lambda_2 & = u & \lambda_3 & = u+c
\end{align}
Here $c$ is the sound speed:
$$ c = \sqrt{\frac{\gamma p}{\rho}}.$$
These are also the eigenvalues of the coefficient matrix appearing in (\ref{euler_primitive}), and show that acoustic waves propagate at speeds $\pm c$ relative to the fluid velocity $u$. There is also a characteristic speed $\lambda_2 =u$ corresponding to the transport of entropy at the fluid velocity, as discussed further below.
The eigenvectors of the coefficient matrix appearing in (\ref{euler_primitive}) are:
\begin{align}\label{euler_evecs}
r_1 & = \begin{bmatrix} -\rho/c \\ 1 \\ - \rho c \end{bmatrix} &
r_2 & = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} &
r_3 & = \begin{bmatrix} \rho/c \\ 1 \\ \rho c \end{bmatrix}.
\end{align}
These vectors show the relation between jumps in the primitive variables across waves in each family. The eigenvectors of the flux Jacobian $f'(q)$ arising from the conservative form (\ref{euler_conserved}) would be different, and would give the relation between jumps in the conserved variables across each wave.
Notice that the second characteristic speed, $\lambda_2$, depends only on $u$ and that $u$ does not change as we move in the direction of $r_2$. In other words, the 2-characteristic velocity is constant on 2-integral curves. This is similar to the wave that carries changes in the tracer that we considered in [Shallow_tracer.ipynb](Shallow_tracer.ipynb). We say this characteristic field is **linearly degenerate**; it admits neither shocks nor rarefactions. In a simple 2-wave, all characteristics are parallel. A jump in this family carries a change only in the density, and is referred to as a **contact discontinuity**.
The other two fields have characteristic velocities that **do** vary along the corresponding integral curves; thus the 1-wave and the 3-wave in any Riemann solution will be either a shock or a rarefaction. We say these characteristic fields are **genuinely nonlinear**.
Mathematically, the $p$th field is linearly degenerate if
$$\nabla \lambda_p(q) \cdot r_p(q) = 0$$
and genuinely nonlinear if
$$\nabla \lambda_p(q) \cdot r_p(q) \ne 0.$$
### Entropy
Another important quantity in gas dynamics is the **specific entropy**:
$$ s = c_v \log(p/\rho^\gamma) + C,$$
where $c_v$ and $C$ are constants. From the expression (\ref{euler_evecs}) for the eigenvector $r_2$, we see that the pressure and velocity are constant across a 2-wave.
A simple 2-wave is also called an **entropy wave** because a variation in density while the pressure remains constant requires a variation in the entropy of the gas as well. On the other hand a simple acoustic wave (a continuously varying pure 1-wave or 3-wave) has constant entropy throughout the wave; the specific entropy is a Riemann invariant for these families.
A shock wave (either a 1-wave or 3-wave) satisfies the Rankine-Hugoniot conditions and exhibits a jump in entropy. To be physically correct, the entropy of the gas must *increase* as gas molecules pass through the shock, leading to the **entropy condition** for selecting shock waves. We have already seen this term used in the context of shallow water flow even though the entropy condition in that case did not involve the physical entropy.
### Riemann invariants
Since the Euler equations have three components, we expect each integral curve (a 1D set in 3D space) to be defined by two Riemann invariants. These are:
\begin{align}
1 & : s, u+\frac{2c}{\gamma-1} \\
2 & : u, p \\
3 & : s, u-\frac{2c}{\gamma-1}.
\end{align}
### Integral curves
The level sets of these Riemann invariants are two-dimensional surfaces; the intersection of two appropriate level sets defines an integral curve.
The 2-integral curves, of course, are simply lines of constant pressure and velocity (with varying density). Since the field is linearly degenerate, these coincide with the Hugoniot loci.
We can determine the form of the 1- and 3-integral curves using the Riemann invariants above. For a curve passing through $(\rho_0,u_0,p_0)$, we find
\begin{align}
\rho(p) &= (p/p_0)^{1/\gamma} \rho_0,\\
u(p) & = u_0 \pm \frac{2c_0}{\gamma-1}\left(1-(p/p_0)^{(\gamma-1)/(2\gamma)}\right).
\end{align}
Here the plus sign is for 1-waves and the minus sign is for 3-waves.
Below we plot the projection of some integral curves on the $p-u$ plane.
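As a quick numerical check of the statement that the specific entropy is a Riemann invariant of the acoustic families, the snippet below (illustrative only, with the normalization $c_v=1$, $C=0$ assumed) evaluates $s$ along a 1-integral curve and confirms that it stays constant:
```python
import numpy as np

gam = 1.4
rho_0, p_0 = 1.0, 1.0

def entropy(rho, p):
    # specific entropy with c_v = 1 and C = 0 (an assumed normalization)
    return np.log(p / rho**gam)

p = np.linspace(0.1, 5.0, 6)
rho = (p / p_0)**(1. / gam) * rho_0   # density along the 1-integral curve
print(entropy(rho, p))                # all entries equal entropy(rho_0, p_0) = 0, up to rounding
```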
```python
%matplotlib inline
```
```python
from exact_solvers import Euler
from exact_solvers import euler_stripes
from ipywidgets import widgets
from ipywidgets import interact
State = Euler.Primitive_State
gamma = 1.4
```
```python
interact(Euler.plot_integral_curves,
gamma=widgets.FloatSlider(min=1.1,max=3,value=1.4),
rho_0=widgets.FloatSlider(min=0.1,max=3.,value=1.,
description=r'$\rho_0$'));
```
## Rankine-Hugoniot jump conditions
The Hugoniot loci for 1- and 3-shocks are
\begin{align}
\rho(p) &= \left(\frac{1 + \beta p/p_0}{p/p_l + \beta} \right),\\
u(p) & = u_0 \pm \frac{2c_0}{\sqrt{2\gamma(\gamma-1)}}
\left(\frac{1-p/p_0}{\sqrt{1+\beta p/p_0}}\right), \\
\end{align}
where $\beta = (\gamma+1)/(\gamma-1)$.
Here the plus sign is for 1-shocks and the minus sign is for 3-shocks.
Below we plot the projection of some integral curves on the $p-u$ plane.
```python
interact(Euler.plot_hugoniot_loci,
gamma=widgets.FloatSlider(min=1.1,max=3,value=1.4),
rho_0=widgets.FloatSlider(min=0.1,max=3.,value=1.,
description=r'$\rho_0$'))
```
### Entropy condition
As mentioned above, a shock wave is physically relevant only if the entropy of the gas increases as the gas particles move through the shock. A discontinuity satisfying the Rankine-Hugoniot jump conditions that violates this entropy condition (an "entropy-violating shock") is not physically correct and should be replaced by a rarefaction wave in the Riemann solution.
This physical entropy condition is equivalent to the mathematical condition that for a 1-shock to be physically relevant the 1-characteristics must impinge on the shock. If the entropy condition is violated, the 1-characteristics would spread out, allowing the insertion of an expansion fan (rarefaction wave).
## Exact solution of the Riemann problem
The general Riemann solution is found following the steps listed below. This is essentially the same procedure used to determine the correct solution to the Riemann problem for the shallow water equations in [Shallow_water.ipynb](Shallow_water.ipynb), where more details are given.
The Euler equations are a system of three equations and the general Riemann solution consists of three waves, so we must determine two intermediate states rather than the one intermediate state in the shallow water equations. However, it is nearly as simple because of the fact that we know the pressure and velocity are constant across the 2-wave, and so there is a single intermediate pressure $p_m$ and velocity $u_m$ in both intermediate states, and it is only the density that takes different values $\rho_{m1}$ and $\rho_{m2}$. Moreover any jump in density is allowed across the 2-wave, and we have expressions given above for how $u(p)$ varies along any integral curve or Hugoniot locus, expressions that do not explicitly involve $\rho$. So we can determine the intermediate $p_m$ by finding the intersection point of two relevant curves, in step 3 of this general algorithm:
1. Define a piecewise function giving the middle state velocity $u_m$ that can be connected to the left state by an entropy-satisfying shock or rarefaction, as a function of the middle-state pressure $p_m$.
2. Define a piecewise function giving the middle state velocity $u_m$ that can be connected to the right state by an entropy-satisfying shock or rarefaction, as a function of the middle-state pressure $p_m$.
3. Use an iterative solver to find the intersection of the two functions defined above.
4. Use the Riemann invariants to find the intermediate state densities and the solution structure inside any rarefaction waves.
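As an illustration of step 3, the sketch below builds the two curves $u_m(p)$ from the integral-curve and Hugoniot-locus formulas given earlier and finds their intersection with a root finder. It is a simplified stand-in for the solver in the `Euler` module (which is what the plots in this chapter actually use), applied to the Sod-like data of Problem 1 below:
```python
import numpy as np
from scipy.optimize import brentq

def u_from_left(p, rho_l, u_l, p_l, gamma=1.4):
    """Middle-state velocity reachable from the left state:
    1-rarefaction (integral curve) if p <= p_l, else 1-shock (Hugoniot locus)."""
    c_l = np.sqrt(gamma * p_l / rho_l)
    beta = (gamma + 1.) / (gamma - 1.)
    if p <= p_l:
        return u_l + 2.*c_l/(gamma - 1.)*(1. - (p/p_l)**((gamma - 1.)/(2.*gamma)))
    return (u_l + 2.*c_l/np.sqrt(2.*gamma*(gamma - 1.))
            * (1. - p/p_l)/np.sqrt(1. + beta*p/p_l))

def u_from_right(p, rho_r, u_r, p_r, gamma=1.4):
    """Same construction through the right state (3-wave); only the sign flips."""
    c_r = np.sqrt(gamma * p_r / rho_r)
    beta = (gamma + 1.) / (gamma - 1.)
    if p <= p_r:
        return u_r - 2.*c_r/(gamma - 1.)*(1. - (p/p_r)**((gamma - 1.)/(2.*gamma)))
    return (u_r - 2.*c_r/np.sqrt(2.*gamma*(gamma - 1.))
            * (1. - p/p_r)/np.sqrt(1. + beta*p/p_r))

rho_l, u_l, p_l = 3., 0., 3.
rho_r, u_r, p_r = 1., 0., 1.
phi = lambda p: u_from_left(p, rho_l, u_l, p_l) - u_from_right(p, rho_r, u_r, p_r)
p_m = brentq(phi, 1.e-6, 100.)        # intermediate pressure
u_m = u_from_left(p_m, rho_l, u_l, p_l)
print(p_m, u_m)
```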
### The structure of centered rarefaction waves
Step 4 above requires finding the structure of rarefaction waves. This can be done using the fact that the Riemann invariants are constant through the rarefaction wave. See Chapter 14 of <cite data-cite="fvmhp"><a href="riemann.html#fvmhp">(LeVeque, 2002)</a></cite> for more details.
For example, inside a 1-rarefaction the solution depends only on the similarity variable $\xi = x/t$. Setting the characteristic speed $u-c$ equal to $\xi$ and using the constancy of $s$ and of $u + \frac{2c}{\gamma-1}$ across the fan gives
\begin{align}
u(\xi) & = \frac{(\gamma-1)u_l + 2(c_l + \xi)}{\gamma + 1}, & c(\xi) & = u(\xi) - \xi,
\end{align}
after which the density and pressure follow from $\rho = \rho_l (c/c_l)^{2/(\gamma-1)}$ and $p = p_l (\rho/\rho_l)^\gamma$. A 3-rarefaction is handled in the same way, using $u+c = \xi$ together with the invariants $s$ and $u - \frac{2c}{\gamma-1}$.
## Examples of Riemann solutions
Here we present some representative examples of Riemann problems and solutions. The examples chosen are closely related to the examples used in [Shallow_water.ipynb](Shallow_water.ipynb) and you might want to refer back to that notebook and compare the results.
If you wish to examine the Python code for this chapter, see:
- [exact_solvers/Euler.py](exact_solvers/Euler.py)
### Problem 1: Sod shock tube
First we consider the classic shock tube problem. The initial condition consists of high density and pressure on the left, low density and pressure on the right and zero velocity on both sides. The solution is composed of a shock propagating to the right (3-shock), while a left-going rarefaction forms (1-rarefaction). In between these two waves, there is a jump in the density, which is the contact discontinuity (2-wave) in the linearly degenerate characteristic field.
Note that this set of initial conditions is analogous to the "dam break" problem for the shallow water equations, and the resulting structure of the solution is very similar to that obtained when those equations are solved with the addition of a scalar tracer. However, in the Euler equations the entropy jump across a 2-wave does affect the fluid dynamics on either side, so this is not a passive tracer and solving the Riemann problem is slightly more complex.
```python
left_state = State(Density = 3.,
Velocity = 0.,
Pressure = 3.)
right_state = State(Density = 1.,
Velocity = 0.,
Pressure = 1.)
Euler.riemann_solution(left_state,right_state)
```
Here is a plot of the solution in the phase plane, showing the integral curve connecting the left and middle states, and the Hugoniot locus connecting the middle and right states.
```python
Euler.phase_plane_plot(left_state, right_state)
```
### Problem 2: Symmetric expansion
Next we consider the case of equal densities and pressures, and equal and opposite velocities, with the initial states moving away from each other. The result is two rarefaction waves (the contact has zero strength).
```python
left_state = State(Density = 1.,
Velocity = -3.,
Pressure = 1.)
right_state = State(Density = 1.,
Velocity = 3.,
Pressure = 1.)
Euler.riemann_solution(left_state,right_state);
```
```python
Euler.phase_plane_plot(left_state, right_state)
```
### Problem 3: Colliding flows
Next, consider the case in which the left and right states are moving toward each other. This leads to a pair of shocks, with a high-density, high-pressure state in between.
```python
left_state = State(Density = 1.,
Velocity = 3.,
Pressure = 1.)
right_state = State(Density = 1.,
Velocity = -3.,
Pressure = 1.)
Euler.riemann_solution(left_state,right_state)
```
```python
Euler.phase_plane_plot(left_state, right_state)
```
## Plot particle trajectories
In the next plot of the Riemann solution in the $x$-$t$ plane, we also plot the trajectories of a set of particles initially distributed along the $x$ axis at $t=0$, with the spacing inversely proportional to the density. The evolution of the distance between particles gives an indication of how the density changes.
```python
left_state = State(Density = 3.,
Velocity = 0.,
Pressure = 3.)
right_state = State(Density = 1.,
Velocity = 0.,
Pressure = 1.)
Euler.plot_riemann_trajectories(left_state, right_state)
```
Recall that the evolution of the distance between particles gives an indication of how the density changes. Note that it increases across the shock wave and decreases through the rarefaction wave, and that in general there is a jump in density across the contact discontinuity.
## Riemann solution with a colored tracer
Next we plot the Riemann solution with the density plot also showing an advected color to help visualize the flow better. The fluid initially to the left of $x=0$ is colored red and that initially to the right of $x=0$ is colored blue, with stripes of different shades of these colors to help visualize the motion of the fluid.
For the code that produces the plot, see this file: [exact_solvers/euler_stripes.py](exact_solvers/euler_stripes.py)
Let's plot the Sod shock tube data with this colored tracer:
```python
def plot_exact_riemann_solution_stripes_t_slider(t):
euler_stripes.plot_exact_riemann_solution_stripes(rho_l=3.,u_l=0.,p_l=3.,
rho_r=1.,u_r=0.,p_r=1.,
gamma=gamma,t=t)
interact(plot_exact_riemann_solution_stripes_t_slider,
t=widgets.FloatSlider(min=0.,max=1.,step=0.1,value=0.5));
```
Note the following in the figure above:
- The edges of each stripe are being advected with the fluid velocity, so you can visualize how the fluid is moving.
- The width of each stripe initially is inversely proportional to the density of the fluid, so that the total mass of gas within each stripe is the same.
- The total mass within each stripe remains constant as the flow evolves, and the width of each stripe remains inversely proportional to the local density.
- The interface between the red and blue gas moves with the contact discontinuity. The velocity and pressure are constant but the density can vary across this wave.
## Interactive Riemann solver
Here you can set up your own Riemann problem and immediately see the solution. If you don't want to download and run the notebook, an online interactive version is [here](http://sagecell.sagemath.org/?z=eJytWNtu47YWfQ-Qf2BnHiLJsmzFGeAgqIsCnfaxOCgGpw-DwJAtOiaOLgwviTJf30VSlKjYSgdog8FEIdfeXPvKLbGat0KRRtf8lRSSNPz6irm1ulC8alXF9hl_NU9mn1fq-uoo2prIA-OvWcsVq9k3Snqho2yrZ3p9dX1V0iOhXXFQO8FoXTTNDltasbaJnnZV-rQTabf9vW1oqtyvx6Kui22e3cX311cEPx8-fPiDKi0aok7U6SJeB1GtXf3D6SZctPuK1uSFqRNhDVOsqIhUhaKSmOPwn8icWvx8geSgiUlyaGuuFS1JoQjsoQS2NiXhLWuUJB2JXk5UUDzUxSvZU1KQ_LP3mRDFazyqnpzBGqglz_SgWiEBpZbzoW0kFc847kkXjQJXkNwDaDZbrSDjwF_FqU11yh-ywSHuAeu7imyNZV_XD25NDyv5wyrc-HXYuO1XRiXC7ohAiV-xSkSgxG8YJW7tI_nF-Y1U9Og8JtjjSTm_IyRUSo3_HJpbGpGN8jLP4iQyxJZknX1KrD0JLEiS29jDxRlcBHABuHDw72IjW40VySkte0IHS6jhmXwSyh2UgOTKkok9RlzACIsRcejNouKnYsJ4Fd1mif2rB-6pGhELixjAgRUnevg_ObaCHIpnBu7IUbfHjjbISxulBblNIpiwAMVQEfmRrO-HJEQQkMLk5pdBFSmpQj7S8gcEtUPuNY_ZzYgXruBMQYbWfSSf6ZE1Ln-hkT4K1NdBI4uldfVJP7YNQ4-o2gPr6fawnYXtcixtSVXU-7IgnNxbW4wVMGIVxjnPlhFfIRJxkkQX3RnHF0_YnJ9gMsacIC6eIP72BG_WDmZpCRtmLfA5cus0JKPamCQkivKls2kA5guTD4lbvXzcZt6c7ztOXDxOxPE0ti7l0OMQXWFDnC9fimdqWmNB5KnFLvJRoCcd0YSHhDQtnp_Yrop4fO8zaMwl5Cv_aQvz7n1avXUn5EY4rSQdkG-TZ0D-W5yF4XzOVcxx3Xw31ymy72YnNo2ld9vSczlvYzUry4r6ZpoSTfavBEVYomRd6dmThaSBeTxlzbFNCaMiJbV8xKnuSo5wUEoiRGNh8-I2S4-6qnbuvtl-EZqmnWorXMB0md_Fns1vcKJUosWZgS9lioZaU3NXyv4A0pjWSk604kO7AosftnnYjv4JPXN2K7brLA-YrgNff8Sli_Aj-LoqSYNudCo4p800zj2pIFGDRvlnIRr4994bVTKnCHf2MxWPNOyVoxgs8fHTMMkH168NvkTwpmEtaSPt_R_c6zts2btvaIJ5tnJtCaVtIcEFPkGLS2jxNhk_-nnHlo0tmvBefJHuyvtGRSujT_Eg9cXNLsrOYUbg3gtgKICMvmCtK0R7GY_ZMznOVp5tEiNPaFwbjdEwGKDxjc5JNHq1m4HC5XiiIDcKrCK3bEt2RBxGR7-53ldzGi0lx-XgYxAept2GlfSh760Tb6y7G6yzc0xvhphaN12ectn01t3NWSfes25Wo154yXO6buowE9EQ5POm1bfE56LSJqnHbt220szNdrreFxLDb9tczj-4rDNd3Mwg93i0JlSskbw40AhXXGr-rde-8oFXI14Bv87-k9RFF0Gu2Muoi-NV8OeLnN5_nbSuTBR4_mnIoOHbUUmeF47L6eHNAcVb0klmHwsv1pmO361U3xbM6BDMGomfHcwMh4eOxX6KW9hx0AptzoTcBBAd7MNFIcQ3H0vHjwc6XwKNeXk1Trpx3y2C2WFQsRnz06vo2FJvpirErApuOOCMxCjJ-5m6V9UjNhYhLGLTT9QTxNjlcAsYPt2P286UYexeGOC2qPupX-k3c7eZD3v5uHfrBV1peMTtiNj0CDFFbEbEnUNshr07L-JfoAxXMmWr57nqWaZ6jqCe46WnrLTnxC9x4vOc-CwnPseJz3HiU07cc-r7RB_c1PktdVSB8Algp7WqxTuyyRLAUjMJdX5yOzIzQfBKZXjCq2aEX5J9o9soX6d3Qy4KVuOSRWVvx1dqt9UUZoDB8s1nexO_3qQ3_6PmHcY-_rd_hb3p4UXn0OuHZNMzwHjE0AjQBJpHGm3CmdKgvzLTOkErK8pyJ_XeWBPl6SZlizxowE_GDs8TQsHIBOusUJc-pWiD9IWV6rTdxFMIXFTRyNoD8VAzWh-Umwb4NFkGaSyzZrpcsuPRvOkbqaVFnRmUSap2rxWro8jsLzGQJVYuNVKL4U_vf_u5CAMyxqKD9B-KcGHUv9mgOdD4twni0KBtOwCfPLszt8n11c925kW3HZNjN_OZydb6Vlbo0QIu_5SayRFCha4U_JfqcXf56c32Ok75u8K25uf2c6NcvK_8PWFAxs9g7vsNn7wlo6jefjcJv9PwyQtvAPZfTcIG-2T1f7W6UjKoTM3BDx4iPEQ4iFVkIGLyTenvb-uhjIF85_OgORKFnhI0BfdhMPySEtQE9A2LQ59Agadd_BeW1ABm&lang=python).
```python
interact(euler_stripes.plot_exact_riemann_solution_stripes,
rho_l=widgets.FloatSlider(min=1.,max=10.,step=0.1,value=3.,description=r'$\rho_l$'),
u_l=widgets.FloatSlider(min=-10.,max=10.,step=0.1,value=0.,description=r'$u_l$'),
p_l=widgets.FloatSlider(min=1.,max=10.,step=0.1,value=3.,description=r'$p_l$'),
rho_r=widgets.FloatSlider(min=1.,max=10.,step=0.1,value=1.,description=r'$\rho_r$'),
u_r=widgets.FloatSlider(min=-10.,max=10.,step=0.1,value=0.,description=r'$u_r$'),
p_r=widgets.FloatSlider(min=1.,max=10.,step=0.1,value=1.,description=r'$p_r$'),
gamma=widgets.FloatSlider(min=1.1,max=2.,step=0.1,value=1.4,description=r'$\gamma$'),
t=widgets.FloatSlider(min=0.,max=1.,step=0.1,value=0.5));
```
## Riemann problems with vacuum
A vacuum state (with zero pressure and density) in the Euler equations is similar to a dry state (with depth $h=0$) in the shallow water equations. It can arise in the solution of the Riemann problem in two ways:
1. An initial left or right vacuum state: in this case the Riemann solution consists of a single rarefaction, connecting the non-vacuum state to vacuum.
2. A problem where the left and right states are not vacuum but middle states are vacuum. Since this means the middle pressure is smaller than that to the left or right, this can occur only if the 1- and 3-waves are both rarefactions. These rarefactions are precisely those required to connect the left and right states to the middle vacuum state.
### Initial vacuum state
The velocity plot looks a bit strange, but note that the velocity is undefined in vacuum.
```python
left_state = State(Density =0.,
Velocity = 0.,
Pressure = 0.)
right_state = State(Density = 1.,
Velocity = -3.,
Pressure = 1.)
Euler.riemann_solution(left_state,right_state)
```
```python
Euler.phase_plane_plot(left_state, right_state)
```
### Middle vacuum state
```python
left_state = State(Density =1.,
Velocity = -10.,
Pressure = 1.)
right_state = State(Density = 1.,
Velocity = 10.,
Pressure = 1.)
Euler.riemann_solution(left_state,right_state)
```
```python
Euler.phase_plane_plot(left_state, right_state)
```
# PRAKTIKUM 9
`Numerical Differentiation`
<hr style="border:2px solid black"> </hr>
```julia
using Plots
```
# The Limit Definition of the Derivative and Its Approximation
Given a function $f(x)$, the derivative of $f$ at a point $x=a$, denoted $f'(a)$, is defined as
$$ f'(a)=\lim_{h\to 0} \frac{f(a+h)-f(a)}{h} $$
To approximate the numerical derivative of $ f(x) $, choose a sequence $ {h_k} $ such that $ h_k\to 0 $; the approximation of $ f'(x) $ is
\begin{equation}\label{eq:9 lim2}
D_k = \dfrac{f(x+h_k)-f(x)}{h_k} \ \ \ \ \ \ \text{for}\ \ \ k = 1, 2, \dots, n, \dots \text{and } h_k = 10^{-k}
\end{equation}
The computation only evaluates $ D_1 $, $ D_2 $, $\dots$, $ D_N $ and uses $ D_N $ as the numerical value of the derivative $ f'(x) $. The question that arises in this numerical process is which value of $ h_N $ should be chosen so that $ D_N $ is a good approximation of $ f'(x) $.
### Contoh 1
Berikut merupakan hampiran $D_k$ dari turunan $f'(x)$ dengan $f(x)=\exp(x)$ ketika $x=1$. Secara analitik nilai $f'(x) = \exp(x)$ sehingga $f'(1)=e$.
|$h_k$|$D_k$|$ \mid D_k-e \mid $|
|--|--|------|
| 1e-1 | 2.85884195 | 0.14056013 |
| 1e-2 | 2.73191866 | 0.01363683 |
| 1e-3 | 2.71964142 | 0.00135959 |
| 1e-4 | 2.71841775 | 0.00013592 |
| 1e-5 | 2.71829542 | 0.00001359 |
| 1e-6 | 2.71828319 | 0.00000136 |
| 1e-7 | 2.71828197 | 0.00000014 |
| 1e-8 | 2.71828182 | 0.00000001 |
| 1e-9 | 2.71828204 | 0.00000022 |
| 1e-10 | 2.71828338 | 0.00000155 |
However, a smaller $h_k$ does not guarantee a better approximation of the derivative. The goal of numerical differentiation is therefore to find a value of $h_k$ such that the resulting error is as small as possible.
```julia
k = 1:10
h = 10.0 .^ -k
f(x) = exp.(x);
a = 1;
Dk = (f(a.+h) .- f(a))./h;
M = [k Dk abs.(Dk.-exp(a))]
```
#### Another example:
What about the derivative of $f(x)=\sin(x)$ at $x=\pi/3$?
```julia
k = 0:10
h = 10.0.^-k;
f(x) = sin.(x);
a = pi/3;
Dk = (f(a.+h).-f(a))./(h);
M = [k Dk abs.(Dk.-cos(a))]
```
# Central Difference Approximation
Computing a numerical derivative straight from the limit definition requires many
iterations because its accuracy is only $ O(h) $. Therefore, we need a formula that gives good accuracy with a larger value of $ h $. If the function $ f(x) $ can be evaluated at points to the right and to the left of $ x $, the central-difference formula can be used to compute the derivative $ f'(x) $.
## Theorem
Assume that $ f\in C^3[a,b] $ and $ x-h,x,x+h\in [a,b] $. Then
\begin{equation}\label{eq:9 beda1}
f'(x)\approx \dfrac{f(x+h)-f(x-h)}{2h}
\end{equation}
Moreover, there exists a value $ c=c(x)\in [a,b] $ such that
\begin{equation}\label{eq:9 beda2}
f'(x)= \dfrac{f(x+h)-f(x-h)}{2h}+E_{trunc}(f,h)
\end{equation}
with
\begin{equation}\label{eq:9 beda3}
E_{trunc}(f,h)=-\dfrac{h^2f^{(3)}(c)}{6}=O(h^2)
\end{equation}
The term $ E_{trunc}(f,h) $ is called the _truncation error_.
### Example 2
As before, the following shows the approximations $D_k$ of the derivative $f'(x)$ for $f(x)=\sin(x)$ at $x=\pi/3$, now using the central difference.
```julia
k = 1:10
h = 10.0 .^-k;
f(x) = sin.(x);
a = pi/3;
Dk = (f(a.+h).-f(a.-h))./(2*h);
M = [k Dk abs.(Dk.-cos(a))]
```
However, this approach is impractical because the step size is chosen manually. Instead, the value of $h_k$ can be determined iteratively with 2 stopping criteria:
1. the error satisfies the tolerance
2. the current error is larger than the previous error
```julia
function bedaPusat(f, a; delta=10^-9)
    # Define the maximum number of iterations and the tolerance
maxi = 15;
flag = 1;
    # Compute the initial derivative approximation with h=1; there is no initial error yet.
h = 1;
D = (f(a+h)-f(a-h))/(2*h);
E = NaN;
sol = NaN
    # Start the iteration to find the numerical derivative.
for k = 1:maxi
h = h/10;
D = [D (f(a+h)-f(a-h))/(2*h)]
E = [E abs(D[k+1]-D[k])]
        # Check whether the error satisfies the tolerance.
if E[k+1]<delta
flag = 0;
sol = D[end];
break
end
        # Check whether the current error is larger than the previous one.
if E[k+1]>E[k]
sol = D[k];
flag = 2;
break
end
end
L = [D' E'];
return sol, flag, L
end
```
### Example 3
#### 1 Derivative of a continuous function at a point
Given the function $f(x)=\sin(x)$, compute the approximation $D_k$ of the derivative $f'(x)$ at $x=\pi/3$ using the central difference.
```julia
f(x) = sin(x);
a = pi/3;
sol,flag,L = bedaPusat(f,a)
@show sol
L
```
```julia
f(x) = sin(x);
a = pi/3;
sol,flag,L = bedaPusat(f,a,delta = 10^-12)
@show sol
L
```
#### 2 Derivative of discrete data at a point
Given the following data for $I(t)$ as a function of $t$.
|$t$ | 1.0 |1.1 |1.2 |1.3 |1.4 |
|--|--|--|--|--|--|
|$I(t)$ | 8.2277 |7.2428 |5.9908 |4.5260 |2.9122|
What value of $I'(1.2)$ does the central-difference method give with $ h=0.1 $ and $ h=0.2 $?
$$i1 = \frac{f(1.2+0.1)-f(1.2-0.1)}{2 (0.1)} = \frac {f(1.3) - f(1.1)}{ 0.2} $$
```julia
i1 = (4.5260 - 7.2428)/0.2
```
$$i2 = \frac{f(1.2+0.2)-f(1.2-0.2)}{2 (0.2)} = \frac {f(1.4) - f(1.0)}{ 0.4} $$
```julia
i2 = (2.9122 - 8.2277)/0.4
```
#### 3 Numerical derivative of a function on an interval
Given the function $f(x)=\sin(x)$, compute the approximation $D_k$ of the derivative $f'(x)$ for $x\in[0,2\pi]$ using the central difference.
```julia
f(x) = sin.(x);
a = 0:0.01:2*pi
y = Array{Number}(undef,length(a),1)
for i in 1:length(a)
sol,flag,L = bedaPusat(f,a[i]);
y[i] = sol;
end
plt=plot(a,y,legend=:false)
```
## Central Difference of Order $O(h^4)$
Assume that $ f\in C^5[a,b] $ and $ x-2h,x-h,x,x+h,x+2h \in [a,b] $. Then
\begin{equation}
f'(x)\approx \dfrac{-f(x+2h)+8f(x+h)-8f(x-h)+f(x-2h)}{12h}
\end{equation}
```julia
k = 1:10
h = 10.0 .^-k;
f(x) = sin.(x);
a = pi/3;
Dk = (-f(a.+2*h).+8*f(a.+h).-8*f(a.-h).+f(a.-2*h))./(12*h);
M = [k Dk abs.(Dk.-cos(a))]
```
# Richardson Extrapolation
Another method that can be used to compute numerical derivatives is Richardson extrapolation. It is a modification of the central-difference method that achieves better accuracy. Starting from the $ O(h^2) $ central difference, Richardson extrapolation can compute numerical approximations of order $ O(h^4) $, $ O(h^6) $, and so on.
### Theorem
Suppose that there are two approximations of $ f'(x_0) $ of order $ O(h^{2k}) $, namely $ D_{k-1}(h) $ and $ D_{k-1}(2h) $, satisfying
\begin{align}
f'(x_0)&=D_{k-1}(h)+Ch^{2k}+\dots
\end{align}
and
\begin{align}
f'(x_0)&=D_{k-1}(2h)+4^kCh^{2k}+\dots
\end{align}
Then, an approximation of order $ O(h^{2k+2}) $ can be obtained with the formula
\begin{equation}\label{eq:9 rich7}
f'(x_0)=D_k(h)+O(h^{2k+2})=\dfrac{4^kD_{k-1}(h)-D_{k-1}(2h)}{4^k-1}+O(h^{2k+2})
\end{equation}
From part 2 of the previous example we obtained $D_1(1)$ and $D_1(2)$, so $D_2(1)$ is
```julia
i1 = -13.584 # Central difference O(h^2) with h = 0.1
i2 = -13.289 # Central difference O(h^2) with h = 0.2
(4*i1-i2)/3 # Central difference O(h^4) with h = 0.1
```
```julia
function richardson(f, a; delta = 1e-9)
    # Define the maximum number of iterations and the tolerance
maxi = 50;
flag = 1;
    # Compute the initial entry of the Richardson table with h=1
h = 1;
D = (f(a+h)-f(a-h))/(2*h);
err = NaN;
    # Start the Richardson iteration
for j=1:maxi
        # Halve h relative to the previous step, then compute the first
        # column of the Richardson table with the central difference.
h = h/2;
D = [D zeros(size(D,1),1);
(f(a.+h).-f(a.-h))./(2*h) zeros(1, size(D,1))];
        # Compute the next columns using the Richardson formula.
for k = 1:j
D[j+1,k+1] = D[j+1,k] + (D[j+1,k]-D[j,k])/(4^k-1);
end
        # Compute the error; if the error satisfies the tolerance,
        # stop the iteration.
err = abs(D[j+1,j+1]-D[j,j]);
if err<delta
flag=0;
break
end
end
sol = D[end,end];
return sol, flag, err, D
end
```
### Example 4
#### Derivative of a continuous function at a point
Given the function $f(x)=\sin(x)$, compute the approximation of the derivative $f'(x)$ at $x=\pi/3$ using Richardson extrapolation.
```julia
using BenchmarkTools
```
```julia
f(x) = sin.(x);
a = pi/3;
@btime sol,flag,err,D = richardson(f,a)
@show sol
@show err
D
```
```julia
f(x) = sin.(x);
a = pi/3;
@btime sol,flag,err,D = richardson(f,a,delta=10^-16)
@show sol
@show err
;
```
<hr style="border:2px solid black"> </hr>
# Exercises
Work on the following problems during the lab session.
`Name: ________`
`NIM: ________`
### Problem 1
Compute the numerical derivative $ f'(1) $ of $ f(x)=\sin(x) $, i.e. $ D_k $ for $ k=1,2,3,\dots,10 $, using the forward-difference and central-difference formulas. Determine the value of $k$ that gives the smallest error.
```julia
```
### Problem 2
Given the function $ f(x)=\cos(x) $, use the central-difference method via the `bedaPusat` function to find the numerical derivative $ f'(0.2) $ with the smallest error.
```julia
```
### Problem 3
Given the function $ f(x)=\cos(x) $, use the central-difference method via the `bedaPusat` function to find the numerical derivative on the interval $ [0,2\pi] $.
```julia
```
### Problem 4
Given the function $ f(x)=\cos(x) $, use Richardson extrapolation via the `richardson` function to find the numerical derivative $ f'(0.2) $ with the tolerance set to $ \delta=10^{-k} $ for $ k=5,6,7,...,16 $. Compare the time needed to compute the derivative using `BenchmarkTools`.
```julia
```
| 9cb616598b33370d6a466bbba7539805127ae904 | 15,795 | ipynb | Jupyter Notebook | notebookpraktikum/Praktikum 09.ipynb | mkhoirun-najiboi/metnum.jl | a6e35d04dc277318e32256f9b432264157e9b8f4 | [
"MIT"
]
| null | null | null | notebookpraktikum/Praktikum 09.ipynb | mkhoirun-najiboi/metnum.jl | a6e35d04dc277318e32256f9b432264157e9b8f4 | [
"MIT"
]
| null | null | null | notebookpraktikum/Praktikum 09.ipynb | mkhoirun-najiboi/metnum.jl | a6e35d04dc277318e32256f9b432264157e9b8f4 | [
"MIT"
]
| null | null | null | 28.408273 | 414 | 0.513517 | true | 3,909 | Qwen/Qwen-72B | 1. YES
2. YES | 0.901921 | 0.855851 | 0.77191 | __label__ind_Latn | 0.820619 | 0.631738 |
# Solving systems of linear equations
Consider a general system of $m$ linear equations with $n$ unknowns (variables):
$$ a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2 \\
\vdots \qquad \qquad \vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m, $$
where $x_i$ are unknowns, $a_{ij}$ are the coefficients of the system and $b_i$ are the RHS terms. These can be real or complex numbers.
Almost every problem in linear algebra will come down to solving such a system. But what does it mean to solve a system of equations?
**Solution of the system.** Solving a system of equations means to find *all* n-tuples $(x_1, x_2, ..., x_n)$ such that substituting each one back into the system gives exactly those values given on the RHS, in the correct order. We call each such tuple a solution of the system of equations.
To solve such a system we will use basic arithmetic operations to transform our problem into a simpler one. That is, we will transform our system into another **equivalent system**. We say that two systems of equations are equivalent if they have the same set of solutions. In other words, the transformed system is equivalent to the original one if the transformation does not cause a solution to be lost or gained.
Three such transformations (sometimes called *elementary row operations*) of a system of linear equations that result in an equivalent system are:
1. Swapping any two equations of the system
2. Multiplying an equation of the system by any number different from 0
3. Adding an equation of the system multiplied by a scalar to another equation of the system
We will aim to use these transformations to eliminate certain unknowns from some equations. Ideally, we would like to reduce one equation to have only one unknown, which we can then simply solve for. Then we plug this value in other equations and so on. Let us demonstrate this on a couple of simple examples.
### Example: Unique solution
Consider the following system of 3 linear equations involving 3 unknowns $x, y, z$:
$$ x + z = 0 \\
y - z = 1 \\
x + 2y + z = 1 $$
Being a system of 3 equations and 3 unknowns, we should be able to solve it. We start by noticing that if we subtract the 1st equation from the 3rd we would be left with an equation involving only $y$. That is, we need to use the transformation rule number 3. After doing that, we get the following equivalent system:
$$ x + z = 0 \\
y - z = 1 \\
2y = 1. $$
Now we can easily see from the 3rd equation that $y = 1/2$. Now we plug this value of $y$ into the 2nd equation and solve it for $z$. Then we plug the value of $z$ into the 1st equation and solve it for $x$. After doing that, we find that the only solution to the problem is a triplet $(1/2, 1/2, -1/2)$.
## Matrix equation
We can represent any system of linear equation in matrix form. The general $m \times n$ system from the beginning of this notebook can be represented as:
$$ A \mathbf{x} = \mathbf{b}, $$
where $A \in \mathbb{C}^{m \times n}$ is called a **coefficient matrix** with the coefficients $a_{ij}$ as entries, $\mathbf{x} \in \mathbb{C}^n$ is the vector of unknowns and $\mathbf{b} \in \mathbb{C}^m$ is the vector of RHS terms.
The same transformation rules from before still apply to a systems represented in matrix form, where each row is one equation. Since these transformations are performed on both the LHS and RHS of equations it is convenient to write the system using an **augmented matrix** which is obtained by appending $\mathbf{b}$ to $A$:
$$ (A | \mathbf{b}) =
\left ( \begin{array}{cccc|c}
a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\
a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\
& \vdots & & \vdots & \\
a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{array} \right ).$$
To avoid confusing $A$ and $\mathbf{b}$, they are often separated by a straight line, as shown above.
### Example: No solution
Consider a very similar problem to the one in the previous example, which only differs in the sign of $a_{33}$ in the 3rd equation:
$$ x + z = 0 \\
y - z = 1 \\
x + 2y - z = 1 $$
Let us first write it in matrix-form $A \mathbf{x} = \mathbf{b}$:
$$ \begin{pmatrix}
1 & 0 & 1 \\
0 & 1 & -1 \\
1 & 2 & -1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix} =
\begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} $$
Or using an augmented matrix:
$$ \left ( \begin{array}{ccc|c}
1 & 0 & 1 & 0 \\
0 & 1 & -1 & 1 \\
1 & 2 & -1 & 1
\end{array} \right ) $$
We will again aim to eliminate certain coefficients from equations. For example, we could eliminate the first coefficient in the 3rd row by subtracting the 1st row from the 3rd row. By doing that, we get the equivalent system:
$$\begin{aligned}
\left ( \begin{array}{ccc|c}
1 & 0 & 1 & 0 \\
0 & 1 & -1 & 1 \\
1 & 2 & -1 & 1
\end{array} \right )
\hspace{-0.5em}
\begin{align}
&\phantom{I}\\
&\phantom{II} \\
&L_3 - L_1 \to L_3
\end{align}
\Rightarrow
\left ( \begin{array}{ccc|c}
1 & 0 & 1 & 0 \\
0 & 1 & -1 & 1 \\
0 & 2 & -2 & 1
\end{array} \right )
\end{aligned} $$
Now we want to eliminate the second coefficient in the 3rd equation. To do that, we use the third transformation rule again to subtract $2 \times$ 2nd equation from the 3rd:
$$\begin{aligned}
\left ( \begin{array}{ccc|c}
1 & 0 & 1 & 0 \\
0 & 1 & -1 & 1 \\
0 & 2 & -2 & 1
\end{array} \right )
\hspace{-0.5em}
\begin{align}
&\phantom{I}\\
&\phantom{II} \\
& L_3 - 2L_2 \to L_3
\end{align}
\Rightarrow
\left ( \begin{array}{ccc|c}
1 & 0 & 1 & 0 \\
0 & 1 & -1 & 1 \\
0 & 0 & 0 & -1
\end{array} \right )
\end{aligned} $$
Let us look at the third equation: $0 = -1$. What this equation is telling us is that if a solution exists, that solution would be such that $0 = -1$. Since this is obviously not true, we conclude that there is no solution of this system of equations. Or, more precisely, we found that the solution set of this system is an **[empty set](https://en.wikipedia.org/wiki/Empty_set)**.
## Vector equation
Remember that a product of matrix-vector multiplication is a linear combination of the columns of the matrix. We can therefore write $A\mathbf{x} = \mathbf{b}$ as:
$$ x_1 \begin{pmatrix} \\ a_1 \\ \\ \end{pmatrix}
+ x_2 \begin{pmatrix} \\ a_2 \\ \\ \end{pmatrix}
+ \cdots + x_n \begin{pmatrix} \\ a_n \\ \\ \end{pmatrix}
= \begin{pmatrix} \\ b \\ \\ \end{pmatrix} $$
Therefore, solving $A \mathbf{x} = \mathbf{b}$ can be thought of as finding weights $x_1, ..., x_n$ such that the above is true.
## Triangular systems
Some special types of systems of linear equations can be solved very easily. An especially important type is a triangular system.
A square $n \times n$ matrix $A$ is a **lower triangular matrix** if $a_{ij} = 0$ for $i < j$. Similarly, we say that it is **upper triangular** if $a_{ij} = 0$ for $i > j$. For example,
$$\begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix}, \quad
\begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, \quad
\begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.$$
The first two matrices above are lower-triangular because $a_{12}=a_{13}=a_{23} = 0$ and the third one is upper-triangular because $a_{21}=a_{31}=a_{32}=0$.
A system of linear equations which has a triangular coefficient matrix is called a *triangular system*. If you look back at the examples above, you will see that the transformations performed were actually helping us reach a triangular form of the coefficient matrix. Indeed, often the easiest way to solve a system of linear equations will be to transform it into an equivalent triangular system.
### Example: Upper-triangular system
Here we will demonstrate why triangular systems of equations are very simple to solve. Consider the following upper-triangular system:
$$ \begin{pmatrix}
1 & -1 & 2 \\
0 & 2 & -1 \\
0 & 0 & 2 \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix} =
\begin{pmatrix} -1 \\ 3 \\ 2 \end{pmatrix} $$
Solving an $n \times n$ triangular system comes down to solving, in $n$ steps, one equation with one unknown. So we should be able to solve the above system in just 3 steps. We begin solving an upper-triangular system by solving the last equation, which in our case is the following equation with one unknown: $ 2z = 2 $. Clearly, $z=1$ and $z$ is no longer an unknown variable. Now we work our way up and solve the 2nd equation, plugging in our unique solution of $z$: $2y -z = 2y - 1 = 3$ which is again an equation with one unknown. We find that $y = 2$ which we then plug in the first equation: $ x - y + 2z = x - 2 + 2 = x = -1 $. We have successfully solved the system and we found that it has a unique solution $\mathbf{x} = (-1, 2, 1)$.
If the system is lower-triangular, we start solving it from the 1st equation and work our way down to the last one.
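The back-substitution procedure above is easy to express in code. Here is a minimal NumPy sketch (our own addition; the function name `back_substitution` is not a library routine) that solves the example system:
```python
import numpy as np

def back_substitution(U, b):
    """Solve U x = b for an upper-triangular U, working from the last row up."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # subtract the contributions of the already-known unknowns,
        # then divide by the diagonal entry
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[1., -1.,  2.],
              [0.,  2., -1.],
              [0.,  0.,  2.]])
b = np.array([-1., 3., 2.])
back_substitution(U, b)  # expected: array([-1.,  2.,  1.])
```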
### Trapezoidal matrix
We can generalise the idea of triangular matrices to non-square matrices. A non-square matrix with zero entries below or above the diagonal is called an upper or lower **trapezoidal matrix**. For example:
$$ \begin{pmatrix} 1 & 2 & 2 & 2 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 2\end{pmatrix}, \quad
\begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 & 0 \\ 2 & 2 & 1 & 0 & 0 \end{pmatrix} $$
What can we say about such a system of equations $A \mathbf{x} = \mathbf{b}$ where $A \in \mathbb{C}^{m \times n}$, $\mathbf{x} \in \mathbb{C}^n$ and $\mathbf{b} \in \mathbb{C}^m$? If $m < n$ there are fewer equations than there are unknowns so the system is **underdetermined**. If $m > n$ there are more equations than unknowns and the system is **overdetermined**.
#### Example: Underdetermined system
Let us consider the following system of 2 equations and 3 unknowns:
$$ x - y + 2z = -1 \\
2y - z = 3 $$
We begin from the 2nd equation since it involves fewer unknowns than the 1st equation: $ 2y - z = 3$. Here we have a choice of which variable to solve for, but we shall solve it for $y$ since it is the *leading variable* (the first non-zero in the row). We find $y = (z + 3)/2$. $z$ is a *free variable*, which we introduce formally below, but it essentially means that $z$ can be any number $z \in \mathbb{C}$, say $z = t$. Substituting $y = (t + 3)/2$ into the 1st equation:
$$ x = y - 2z - 1 = (t + 3)/2 - 2t - 1 = -\frac{3t}{2} + \frac{1}{2} $$
Therefore our solution set, parametrised by $t$, is $\{(-\frac{3t}{2} + \frac{1}{2}, \frac{t + 3}{2}, t), t \in \mathbb{C}\}$.
## Existence and uniqueness of a solution
Let us think geometrically what it means for a system $A \mathbf{x} = \mathbf{b}$ to have a solution. Consider a simple system of linear equations:
$$ \begin{bmatrix} 2 & 3 \\ 1 & -4 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 7 \\ 3 \end{bmatrix}, $$
or, equivalently,
$$ 2x + 3y = 7 \\ x - 4y = 3. $$
Let us plot these two lines using Python:
```python
import numpy as np
import matplotlib.pyplot as plt
x1 = np.linspace(0, 8, 10)
y1 = (7 - 2 * x1) / 3
x2 = np.linspace(0, 8, 10)
y2 = (x2 - 3) / 4
plt.plot(x1, y1, label=r"$2x + 3y = 7$")
plt.plot(x2, y2, label=r"$x - 4y = 3$")
plt.xlim(1.5, 5)
plt.ylim(-1, 1.5)
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc='best')
plt.show()
```
The two lines cross at only one point, at point $ (37/11, 1/11)$. This point is the *unique solution* of the system, $\mathbf{x} = (37/11, 1/11)$.
Let's now consider two systems of two equations whose graphs are parallel lines:
$$ 2x + 3y = 7 \qquad 2x + 3y = 7 \\ 2x + 3y = 5 \qquad 4x + 6y = 14$$
and let us plot them.
```python
x2 = np.linspace(0, 2.5, 10)
y2 = (5 - 2 * x2) / 3
fig, ax = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
ax[0].plot(x1, y1, label=r"$2x + 3y = 7$")
ax[0].plot(x2, y2, label=r"$2x + 3y = 5$")
ax[0].set_aspect('equal')
ax[0].set_xlabel('x')
ax[0].set_ylabel('y')
ax[0].legend(loc='best')
y2 = (14 - 4 * x1) / 6
ax[1].plot(x1, y1, label=r"$2x + 3y = 7$")
ax[1].plot(x1, y2, 'y--', label=r"$4x + 6y = 14$")
ax[1].set_aspect('equal')
ax[1].set_xlabel('x')
ax[1].legend(loc='best')
plt.setp(ax, xlim=(0, 3.7), ylim=(0, 2.5))
plt.show()
```
In the first case, the lines never cross because they are parallel. Therefore, the system of equations has no solution. We can write $\mathbf{x} \in \emptyset$ (empty set).
The lines are parallel in the second case as well, but now they are on top of each other. There are an infinite number of solutions to that system - every point on the line $ y = (7-2x)/3 $ is a solution. We can then write that the solution set is $ \{ (t, \frac{7-2t}{3}), t \in \mathbb{R})\} $.
We can conclude that the existence and uniqueness of the solution of a linear system depends on whether the lines are parallel or not.
Let us test this by finding the cross products of the row-vectors of the coefficient matrices. The row-vectors are normal to the lines given by the equations, so if the equation graphs are parallel, their normals will be too.
```python
from matplotlib.patches import Polygon
x2 = np.linspace(0, 8, 10)
y2 = (x2 - 3) / 4
fig, ax = plt.subplots(1, 2, figsize=(10, 6))
ax[0].plot(x1, y1, label=r"$2x + 3y = 7$")
ax[0].plot(x2, y2, label=r"$x - 4y = 3$")
ax[0].quiver(37/11, 1/11, 2, 3, scale=15, angles='xy', color='b')
ax[0].quiver(37/11, 1/11, 1, -4, scale=15, angles='xy', color='orange')
vertices = np.array([[0, 0], [2, 3], [3, -1], [1, -4]])/4.3 + np.array([37/11, 1/11])
ax[0].add_patch(Polygon(vertices , facecolor='lightblue', alpha=0.7))
ax[0].set_xlim(1.5, 5)
ax[0].set_ylim(-1, 1.5)
ax[0].set_aspect('equal')
ax[0].set_xlabel('x')
ax[0].set_ylabel('y')
ax[0].legend(loc='best')
y2 = (5 - 2 * x2) / 3
ax[1].plot(x1, y1, label=r"$2x + 3y = 7$")
ax[1].plot(x2, y2, label=r"$2x + 3y = 5$")
ax[1].quiver(1.3, 4.4/3, 2, 3, scale=15, angles='xy', color='b')
ax[1].quiver(1.5, 2/3, 2, 3, scale=15, angles='xy', color='orange')
ax[1].set_xlim(0, 3.7)
ax[1].set_ylim(0, 2.5)
ax[1].set_aspect('equal')
ax[1].set_xlabel('x')
ax[1].legend(loc='best')
plt.show()
```
If $\mathbf{a_1} = (a_{11}, a_{12})$ is the first row-vector of a coefficient matrix and $\mathbf{a_2} = (a_{21}, a_{22})$, we can express their cross product as:
$$ ( \mathbf{a_1} \times \mathbf{a_2} ) = |\mathbf{a_1}| |\mathbf{a_2}| \sin(\vartheta) \hat{n}, $$
where $|\cdot|$ denotes the magnitude of the vector, $\vartheta$ is the angle between $\mathbf{a_1}$ and $\mathbf{a_2}$ and $\hat{n}$ is the unit normal vector to both vectors. We can see that the magnitude of the cross product is equal to the area of a parallelogram with sides $|\mathbf{a_1}|$ and $|\mathbf{a_2}|$, with $\vartheta$ controlling how skewed it is. Such a parallelogram is marked in light blue on the left figure. We also know from the previous notebook that the magnitude of the cross product of the two rows or columns of the matrix is given by the determinant of the matrix:
$$ |(a_{11}, a_{12}) \times (a_{21}, a_{22}) | = |(a_{11}, a_{21}) \times (a_{12}, a_{22}) | = \det \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}.$$
Let us then conclude, and we will justify this in the next notebooks, that a system of linear equations $A \mathbf{x} = \mathbf{b}$ will have a unique solution iff $\det A \neq 0$. If $\det A = 0$, the system either has infinitely-many solutions or no solutions at all.
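As a quick numerical check of this claim (our addition, not part of the original text), we can compute the determinants of the coefficient matrices of the three $2 \times 2$ systems above:
```python
import numpy as np

A_unique   = np.array([[2, 3], [1, -4]])  # lines cross once: unique solution
A_parallel = np.array([[2, 3], [2,  3]])  # parallel lines: no solution
A_same     = np.array([[2, 3], [4,  6]])  # same line: infinitely many solutions

[np.linalg.det(A) for A in (A_unique, A_parallel, A_same)]
```
Only the first determinant is non-zero, matching the one system that has a unique solution.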
# Gaussian elimination
Let us finally formally introduce what we have been trying to achieve in most examples above. **Gaussian elimination** (or row reduction) uses the three transformations mentioned before to reduce a system to **row echelon form**. A matrix is in row echelon form if:
- all zero rows (if they exist) are below all non-zero rows
- the **leading coefficient** (or **pivot**; the first non-zero entry in a row) is always strictly to the right of the leading coefficient of the row above it. That is, for two leading elements $a_{ij}$ and $a_{kl}$: if $i < k$ then it is required that $j < l$.
Notice that these conditions require a matrix in row echelon form to be upper-trapezoidal. For example:
$$ \begin{pmatrix} 1 & 2 & 0 & 1 \\ 0 & 1 & 0 & 5 \\ 0 & 0 &2 & 2 \end{pmatrix}, \quad
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad
\begin{pmatrix} 1 & -2 & 0 & 1 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\end{pmatrix}$$
The motivation behind performing Gaussian elimination is therefore clear, as we have demonstrated in the upper-triangular example why triangular systems are most convenient for solving systems of linear equations.
Let us consider again a general coefficient matrix $A \in \mathbb{C}^{m \times n}$ in $A \mathbf{x} = \mathbf{b}$:
$$ \begin{pmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
& \vdots & & \vdots & \\
a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{pmatrix} $$
What we want to achieve is to have all entries in the 1st column be 0 except for the first one. Then in the 2nd column all entires below the second entry should be 0. In the 3rd column all entries below the third entry should be 0, and so on. That means that we will have to use the 3rd transformation rule and subtract one equation from every one below it, multiplied such that the column entries will cancel:
$$
\begin{aligned}
&\begin{pmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
& \vdots & & \vdots & \\
a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{pmatrix}
\hspace{-0.5em}
\begin{align}
&\phantom{L_1}\\
&L_2 - ^{a_{21}}/_{a_{11}}L_1 \to L_2 \\
&L_3 - ^{a_{31}}/_{a_{11}}L_1 \to L_3 \\
&\qquad \cdots \\
&L_m - ^{a_{m1}}/_{a_{11}}L_1 \to L_m
\end{align} \\ \\
\Rightarrow \quad
&\begin{pmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
0 & \tilde{a_{22}} & \tilde{a_{23}} & \cdots & \tilde{a_{2n}} \\
0 & \tilde{a_{32}} & \tilde{a_{33}} & \cdots & \tilde{a_{3n}} \\
& \vdots & & \vdots & \\
0 & \tilde{a_{m2}} & \tilde{a_{m3}} & \cdots & \tilde{a_{mn}} \end{pmatrix}
\hspace{-0.5em}
\begin{aligned}
&\phantom{L_1}\\
&\phantom{L_2} \\
&L_3 - ^{\tilde{a_{32}}}/_{\tilde{a_{22}}}L_2 \to L_3 \\
&\qquad \cdots \\
&L_m - ^{\tilde{a_{m2}}}/_{\tilde{a_{22}}}L_2 \to L_m
\end{aligned} \\ \\
\Rightarrow \quad
&\begin{pmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
0 & \tilde{a_{22}} & \tilde{a_{23}} & \cdots & \tilde{a_{2n}} \\
0 & 0 & \hat{a_{33}} & \cdots & \hat{a_{3n}} \\
& \vdots & & \vdots & \\
0 & 0 & \hat{a_{m3}} & \cdots & \hat{a_{mn}} \end{pmatrix} \\
& \qquad \dots
\end{aligned} $$
And so on. Notice that these operations (transformations) are performed on entire rows, so the entries in the entire row change (here denoted by tilde and hat). That is why we need to be careful what row we add to other row since, for example, adding the first row to other rows later on would re-introduce non-zero values in the first column which we previously eliminated.
### Example
Consider the following system of 3 equations and 3 unknowns $x, y, z$:
$$ x - y + 2z = -1 \\ x + 2y - z = 2 \\ -x + y + z = 0 $$
Let us write it in augmented-matrix form and perform Gaussian eliminations.
$$\begin{aligned}
&\left ( \begin{array}{ccc|c}
1 & -1 & 2 & -1 \\
1 & 2 & -1 & 2 \\
-1 & 1 & 1 & 0 \end{array} \right )
\hspace{-0.5em}
\begin{align}
&\phantom{L_1}\\
&L_2 - L_1 \to L_2 \\
&L_3 + L_1 \to L_3 \\
\end{align} \\ \\
\Rightarrow \quad
&\left ( \begin{array}{ccc|c}
1 & -1 & 2 & -1 \\
0 & 3 & -3 & 3 \\
0 & 0 & 3 & -1 \end{array} \right )
\end{aligned}$$
The one transformation on the 3rd equation eliminated both $x$ and $y$ unknowns from it so we did not have to perform another transformation. Now the system is reduced to an upper-triangular one, which we have encountered before and know how to solve. We begin from the last equation and back-substitute found values as we work our way up. We leave it to the reader to confirm that there is a unique solution $\mathbf{x} = (1/3, 2/3, -1/3)$.
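As a sanity check (our addition, not part of the original worked example), NumPy's built-in solver gives the same answer:
```python
import numpy as np

A = np.array([[ 1., -1.,  2.],
              [ 1.,  2., -1.],
              [-1.,  1.,  1.]])
b = np.array([-1., 2., 0.])
np.linalg.solve(A, b)  # expected: array([ 0.3333...,  0.6666..., -0.3333...])
```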
## Transformations as matrices
Recall the example on permutations a few notebooks ago, where we wrote each permutation of rows or columns as a permutation matrix multiplying the original matrix. We can do the same thing with elementary row transformations.
Consider the same example from above, where we performed the following 2 transformations on the square $3 \times 3$ matrix $A$:
1. subtracted row 1 from row 2; $L_2 - L_1 \to L_2$
2. added row 1 to row 3; $L_3 + L_1 \to L_3$
We can write both of these transformations using **elementary matrices**:
$$ E_1 = \begin{pmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad E_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix} $$
As in the example of permutations, left-multiplication by an elementary matrix $E$ represents elementary row operations, while right-multiplication represents elementary column operations. Our transformations are row operations, so we left multiply our original matrix:
$$ E_2 E_1 A =
\begin{pmatrix} 1 & -1 & 2 \\ 0 & 3 & -3 \\ 0 & 0 & 3 \end{pmatrix},$$
resulting in the same upper-triangular matrix as before. Note that the order in which we multiply is, in general, important.
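We can also verify this product numerically; a small sketch (our addition) using NumPy:
```python
import numpy as np

A  = np.array([[ 1., -1.,  2.],
               [ 1.,  2., -1.],
               [-1.,  1.,  1.]])
E1 = np.array([[ 1., 0., 0.],
               [-1., 1., 0.],
               [ 0., 0., 1.]])  # L2 - L1 -> L2
E2 = np.array([[ 1., 0., 0.],
               [ 0., 1., 0.],
               [ 1., 0., 1.]])  # L3 + L1 -> L3
E2 @ E1 @ A  # upper-triangular, as above
```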
# Using the inverse matrix to solve a linear system
Remember that an inverse of a square matrix $A$ is $A^{-1}$ such that $AA^{-1} = A^{-1}A = I$. Therefore, if we have a matrix equation $A \mathbf{x} = \mathbf{b}$, finding an inverse $A^{-1}$ would allow us to solve for all unknowns simultaneously, rather than in steps like in the examples above. Here is the idea:
$$ A\mathbf{x} = \mathbf{b} \\
\mbox{multiply both sides by} A^{-1} \\
I \mathbf{x} = A^{-1}\mathbf{b} \\
\mathbf{x} = A^{-1}\mathbf{b} $$
Finding the solution $\mathbf{x}$ is then reduced to a matrix-vector multiplication $A^{-1}\mathbf{b}$.
## Gauss-Jordan elimination
**Gauss-Jordan elimination** is a type of Gaussian elimination that we can use to find the inverse of a matrix. This process is based on reducing a matrix whose inverse we want to find to **reduced row echelon form**. For a matrix to be in reduced row echelon form it must satisfy the two conditions written above for row echelon forms and an additional condition:
- Each leading (pivot) element is 1 and all other entries in their columns are 0.
Examples of such matrices are:
$$ \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad
\begin{pmatrix} 1 & 0 & 4\\ 0 & 1 & 3 \end{pmatrix}, \quad
\begin{pmatrix} 0 & 0 & 1 & 5 & 0 & 2 & 0 \\ 0 & 0 & 0 & 0 & 1 & 6 & 0 \\ 0 & 0 & 0 &0 &0 & 0 & 1 \end{pmatrix} $$
Now, let $A \in \mathbb{R}^{n \times n}$ be a square matrix whose inverse we want to find. We use it to form an augmented block matrix $[A | I]$. We perform elementary row transformations on this matrix such that we reduce $A$ to reduced row echelon form, which we will denote as $A_R$. In this process, the identity matrix $I$ on the right will be transformed to some new matrix $B$. If $A$ is invertible, we will have:
$$ [A | I] \Rightarrow [A_R | B] = [I | A^{-1}] $$
Let us think why this is. We showed above that elementary row operations can be represented as elementary matrices. Let there be $k$ such transformations needed to reduce $A$ to $A_R$. Then we can write:
$$ [A_R | B] = E_k E_{k-1} \dots E_2 E_1[A | I] $$
and let us denote $S = E_k E_{k-1} \dots E_2 E_1$, the product of all $k$ elementary matrices. Now,
$$ [A_R | B] = S[A | I] = [SA | SI] = [SA | S]$$
Therefore, if $A_R = I \Rightarrow I = SA$, meaning that $S = B$ is indeed the inverse of $A$.
If $A$ is not invertible, remember that it means that it is not full-rank. What that means is that $A_R$ will have at least one zero-row. In that case, $B \neq A^{-1}$.
### Example: Calculating an inverse of a matrix
Let us find the inverse matrix of:
$$ A = \begin{bmatrix} 2 & 1 & 3 \\ 0 & 2 & -1 \\ 3 & -1 & 2 \end{bmatrix}. $$
We begin by forming an augmented matrix $[A|I]$ and begin reducing $A$ to reduced row echelon form.
$$\begin{aligned}
{[A | I] =} \quad &\left [ \begin{array}{ccc|ccc}
2 & 1 & 3 & 1 & 0 & 0 \\
0 & 2 & -1 & 0 & 1 & 0\\
3 & -1 & 2 & 0 & 0 & 1 \end{array} \right ]
\hspace{-0.5em}
\begin{aligned}
&^1/_2 L_1 \to L_1 \\
&\phantom{L} \\
&\phantom{L} \\
\end{aligned} \\ \\
\sim \quad
&\left [ \begin{array}{ccc|ccc}
1 & ^1/_2 & ^3/_2 & ^1/_2 & 0 & 0 \\
0 & 2 & -1 & 0 & 1 & 0\\
3 & -1 & 2 & 0 & 0 & 1 \end{array} \right ]
\hspace{-0.5em}
\begin{aligned}
&\phantom{L} \\
&\phantom{L} \\
&L_3 -3L_1 \to L_3 \\
\end{aligned} \\ \\
\sim \quad
&\left [ \begin{array}{ccc|ccc}
1 & ^1/_2 & ^3/_2 & ^1/_2 & 0 & 0 \\
0 & 2 & -1 & 0 & 1 & 0\\
0 & -^5/_2 & -^5/_2 & -^3/_2 & 0 & 1 \end{array} \right ]
\hspace{-0.5em}
\begin{aligned}
&\phantom{L} \\
& ^1/_2 L_2 \to L_2\\
&\phantom{L} \\
\end{aligned} \\ \\
\sim \quad
&\left [ \begin{array}{ccc|ccc}
1 & ^1/_2 & ^3/_2 & ^1/_2 & 0 & 0 \\
0 & 1 & -^1/_2 & 0 & ^1/_2 & 0\\
0 & -^5/_2 & -^5/_2 & -^3/_2 & 0 & 1 \end{array} \right ]
\hspace{-0.5em}
\begin{aligned}
& L_1 -^1/_2 L_2 \to L_1 \\
& \phantom{L}\\
& L_3 +^5/_2 L_2 \to L_3 \\
\end{aligned} \\ \\
\sim \quad
&\left [ \begin{array}{ccc|ccc}
1 & 0 & ^7/_4 & ^1/_2 & -^1/_4 & 0 \\
0 & 1 & -^1/_2 & 0 & ^1/_2 & 0\\
0 & 0 & -^{15}/_4 & -^3/_2 & ^5/_4 & 1 \end{array} \right ]
\hspace{-0.5em}
\begin{aligned}
& \phantom{L} \\
& \phantom{L}\\
& -^4/_{15}L_3 \to L_3 \\
\end{aligned} \\ \\
\sim \quad
&\left [ \begin{array}{ccc|ccc}
1 & 0 & ^7/_4 & ^1/_2 & -^1/_4 & 0 \\
0 & 1 & -^1/_2 & 0 & ^1/_2 & 0\\
0 & 0 & 1 & ^2/_5 & -^1/_3 & -^4/_{15} \end{array} \right ]
\hspace{-0.5em}
\begin{aligned}
& L_1 -^7/_4 L_3 \to L_1 \\
& L_2 +^1/_2 L_3 \to L_2\\
& \phantom{L} \\
\end{aligned} \\ \\
\sim \quad
&\left [ \begin{array}{ccc|ccc}
1 & 0 & 0 & -^1/_5 & ^1/_3 & ^7/_{15} \\
0 & 1 & 0 & ^1/_5 & ^1/_3 & -^2/_{15}\\
0 & 0 & 1 & ^2/_5 & -^1/_3 & -^4/_{15} \end{array} \right ]
= [I | B]
\end{aligned}$$
We successfully found the inverse $A^{-1} = B$! Let us now use it to solve the system $A \mathbf{x} = \mathbf{b}$, where $\mathbf{b} = (2, 0, 1)^T$. As explained before, $A$ on the LHS is eliminated by multiplying both sides by $A^{-1}$ on the left, leaving:
$$ \mathbf{x} = A^{-1}\mathbf{b} =
\begin{pmatrix}
-^1/_5 & ^1/_3 & ^7/_{15} \\
^1/_5 & ^1/_3 & -^2/_{15} \\
^2/_5 & -^1/_3 & -^4/_{15}
\end{pmatrix}
\begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix} =
\begin{pmatrix} 1/15 \\ 4/15 \\ 8/15 \end{pmatrix} $$
The reader is encouraged to confirm this is the correct solution either through substitution into the original system or by using another solution method.
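One quick way to confirm it (our addition) is to let NumPy compute the inverse and the solution, then compare with the hand calculation:
```python
import numpy as np

A = np.array([[2.,  1.,  3.],
              [0.,  2., -1.],
              [3., -1.,  2.]])
b = np.array([2., 0., 1.])

A_inv = np.linalg.inv(A)
print(A_inv)      # should match the matrix B found above
print(A_inv @ b)  # expected: [1/15, 4/15, 8/15], i.e. approx [0.0667, 0.2667, 0.5333]
```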
```python
```
| 943e1d728ba73d30017127013b40a49b9400eb02 | 83,951 | ipynb | Jupyter Notebook | mathematics/linear_algebra/Linear_Systems.ipynb | jrper/thebe-test | 554484b1422204a23fe47da41c6dc596a681340f | [
"MIT"
]
| null | null | null | mathematics/linear_algebra/Linear_Systems.ipynb | jrper/thebe-test | 554484b1422204a23fe47da41c6dc596a681340f | [
"MIT"
]
| null | null | null | mathematics/linear_algebra/Linear_Systems.ipynb | jrper/thebe-test | 554484b1422204a23fe47da41c6dc596a681340f | [
"MIT"
]
| null | null | null | 121.492041 | 18,100 | 0.786923 | true | 9,488 | Qwen/Qwen-72B | 1. YES
2. YES | 0.888759 | 0.923039 | 0.820359 | __label__eng_Latn | 0.99147 | 0.744302 |
#### Nguyễn Tiến Dũng
*CTTN Toán Tin - K62*
*20170062*
***Đại học Bách khoa Hà Nội***
---
## Phân phối dừng
Cho ma trận chuyển trạng thái $P$
Giả sử tại thời điểm $t$, $X$ có thể nhận các trạng thái $1, 2, 3,...,N$ với xác suất tương ứng là $\pi_1, \pi_2,..,\pi_N$.
Khi đó $\pi = \{\pi_1, \pi_2,...,\pi_N\}$ là vector phân phối tại thời điểm $t$. Khi đó nếu $\pi \times (I - P) = 0$ thì ta gọi vector phân phối xác suất $\pi$ là `phân phối dừng`.
----
## Phân phối giới hạn
Vector trạng thái $\pi_0 = \{\pi_1, \pi_2,...,\pi_N\}$ được gọi là có `phân phối giới hạn` nếu thỏa mãn:
\begin{aligned}
\left\{\begin{matrix}
\pi_1 + \pi_2 + ... + \pi_N & = 1\\
\underset{n \to \infty}{lim}P_{ij}^{(n)} & = \pi_j, \forall i
\end{matrix}\right.
\end{aligned}
---
### Example
A city of $1500000$ inhabitants has 3 large competing supermarkets: `BigC`, `VinMart` and `Intimex`. At the initial time, $400000$ customers shop at `BigC`, $600000$ at `VinMart` and $500000$ at `Intimex`. After some time it was observed that:
- If a customer shops at `BigC`, there is an $80\%$ chance they return to this supermarket, $10\%$ switch to `VinMart` and $10\%$ switch to `Intimex`
- Each customer of `VinMart` has a $90\%$ chance of returning to this supermarket, $7\%$ switch to `BigC` and $3\%$ switch to `Intimex`
- Each customer of `Intimex` has an $85\%$ chance of staying, $8.3\%$ switch to `BigC` and $6.7\%$ switch to `VinMart`
Compute the steady-state number of customers of each supermarket.
---
The state distribution vector is
\begin{equation}
\pi_0 = \left[\frac{4}{15}, \frac{2}{5}, \frac{1}{3}\right]
\end{equation}
The transition matrix is:
\begin{bmatrix}
0.8 & 0.1 & 0.1 \\
0.07 & 0.9 & 0.03 \\
0.083 & 0.067 & 0.85
\end{bmatrix}
We have $\pi(P - I) = 0 \leftrightarrow \pi = \pi P \rightarrow \pi = [0.273, 0.454, 0.273]$
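As a cross-check (our addition, independent of `processviz`), the stationary distribution can be computed directly with NumPy by solving $\pi = \pi P$ together with the normalisation $\sum_i \pi_i = 1$:
```python
import numpy as np

# Transition matrix from the example (rows/columns: BigC, VinMart, Intimex)
P = np.array([[0.8,   0.1,   0.1 ],
              [0.07,  0.9,   0.03],
              [0.083, 0.067, 0.85]])

# Stack the normalisation constraint onto (P^T - I) and solve in the least-squares sense
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0., 0., 0., 1.])
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print(pi)               # approximately [0.273, 0.454, 0.273]
print(pi * 1_500_000)   # steady-state number of customers of each supermarket
```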
---
Below is a plot describing the market share of each supermarket
```python
import processviz as pvz
```
```python
g = pvz.MarkovChain()
g.from_file('./ass1/input.csv')
```
```python
g.generate_state_graph(10)
g.state_vector
```
```python
g.generate_graph(2)
g.data
```
```python
```
| 284772ae734c58cebf1ad0d39a0814789a479eb1 | 31,014 | ipynb | Jupyter Notebook | assignment/A7/A7.ipynb | jurgendn/processviz | 82808a92662962f04c48673c9cf159d7bc904ff7 | [
"BSD-3-Clause"
]
| null | null | null | assignment/A7/A7.ipynb | jurgendn/processviz | 82808a92662962f04c48673c9cf159d7bc904ff7 | [
"BSD-3-Clause"
]
| null | null | null | assignment/A7/A7.ipynb | jurgendn/processviz | 82808a92662962f04c48673c9cf159d7bc904ff7 | [
"BSD-3-Clause"
]
| 2 | 2020-03-19T11:14:13.000Z | 2021-08-14T14:24:08.000Z | 149.826087 | 20,836 | 0.891017 | true | 963 | Qwen/Qwen-72B | 1. YES
2. YES | 0.847968 | 0.749087 | 0.635202 | __label__vie_Latn | 1.000007 | 0.314117 |
```python
import random
%matplotlib inline
import networkx as nx
```
# Chapter 6 Tutorial
Contents:
1. Partitions
2. Modularity
3. Zachary's Karate Club
4. Girvan-Newman clustering algorithm
## 1. Partitions
A **partition** of a graph is a separation of its nodes into disjoint groups. Consider the following graph:
```python
G = nx.Graph()
nx.add_cycle(G, [0, 1, 2, 3])
nx.add_cycle(G, [4, 5, 6, 7])
G.add_edge(0, 7)
nx.draw(G, with_labels=True)
```
The following is an example of a partition of these nodes:
```python
partition = [
{1, 2, 3},
{4, 5, 6},
{0, 7},
]
```
Observe that every node in the graph is in exactly one of the sets in the partition. Formally, a partition is a list of sets such that every node is in exactly one set. NetworkX can verify that our partition is valid:
```python
nx.community.is_partition(G, partition)
```
True
When developing community detection algorithms, we often make use of a *partition map*, which is a dictionary mapping node names to a partition index. This is useful for quickly comparing if two nodes are in the same cluster in the partition:
```python
partition_map = {}
for idx, cluster_nodes in enumerate(partition):
for node in cluster_nodes:
partition_map[node] = idx
partition_map
```
{1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1, 0: 2, 7: 2}
In this dictionary, the keys are the node names and two nodes will have the same value if they are in the same partition:
```python
partition_map[0] == partition_map[7]
```
True
We can visualize our partition by drawing the graph with nodes colored by their partition membership:
```python
node_colors = [partition_map[n] for n in G.nodes]
nx.draw(G, node_color=node_colors, with_labels=True)
```
There are two trivial partitions:
1. The partition with one set containing every node;
2. The partition with N sets, each containing a single node.
A valid partition thus contains between 1 and N sets.
Feel free to experiment by changing the partition above and running the subsequent cells.
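As a quick check (our addition), both trivial partitions of the graph `G` defined above pass NetworkX's validity test:
```python
whole_graph = [set(G.nodes)]         # one set containing every node
singletons = [{n} for n in G.nodes]  # one set per node
nx.community.is_partition(G, whole_graph), nx.community.is_partition(G, singletons)
```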
## 2. Modularity
At a high level, network community detection consists of finding a partition that achieves good separation between the groups of nodes. Before we get into how to find good partitions of a graph, we need an objective -- a way to measure how good the partition is. Modularity is one such objective function.
The modularity of a graph partition compares the number of intra-group edges with a random baseline. Higher modularity scores correspond to a higher proportion of intra-group edges, therefore fewer inter-group edges and better separation of groups.
For weighted undirected networks, as described in the text, we have
\begin{equation}
Q_w=\frac{1}{W}\sum_C \left(W_C-\frac{s_C^2}{4W}\right),
\label{eq:wmodul}
\end{equation}
where
* $W$ is the total weight of the links of the network,
* $W_C$ the total weight of the internal links of cluster $C$, and
* $s_C$ the total strength of the nodes of $C$.
The total weight $W$ is half the total strength for the same reason that the number of edges $L$ is half the total degree. While this formula may look a bit complicated, it's straightforward to write code to compute the sum:
```python
def modularity(G, partition):
W = sum(G.edges[v, w].get('weight', 1) for v, w in G.edges)
summation = 0
for cluster_nodes in partition:
s_c = sum(G.degree(n, weight='weight') for n in cluster_nodes)
# Use subgraph to count only internal links
C = G.subgraph(cluster_nodes)
W_c = sum(C.edges[v, w].get('weight', 1) for v, w in C.edges)
summation += W_c - s_c ** 2 / (4 * W)
return summation / W
```
```python
modularity(G, partition)
```
0.2222222222222222
Let's compare this to a partition we would suspect to have higher modularity:
```python
partition_2 = [
{0, 1, 2, 3},
{4, 5, 6, 7},
]
modularity(G, partition_2)
```
0.3888888888888889
### NetworkX function
NetworkX provides a modularity function that is more efficient than ours:
```python
nx.community.quality.modularity(G, partition_2)
```
0.38888888888888884
## 3. Zachary's Karate Club
When writing and testing community-detection algorithms, it helps to make use of benchmark networks: graphs with a known, "natural" community structure. Perhaps the most famous benchmark graph is Zachary's Karate Club. It contains 34 nodes, representing members of a karate club whose interactions were monitored over a period of three years by researchers. Links in this graph connect individuals interacting outside club activities, a proxy for social ties.
During the course of the study, a conflict between the instructor Mr. Hi (node 0) and the president, or Officer (node 33) led to a split of the club into separate groups led by Mr. Hi and Officer. In this case we know whom each member of the group followed after the split, providing empirical community labels: those members who followed Mr. Hi are said to be one community and those following the Officer make up the other.
For this graph, we assume that the post-split group composition was largely driven by the social ties: members of the same friend groups would want to be part of the same club after the split. We thus expect a good community-detection algorithm to predict the post-split group composition with high accuracy.
Zachary's karate club is such a popular benchmark graph that it has its own function in NetworkX:
```python
K = nx.karate_club_graph()
nx.draw(K, with_labels=True)
```
Each node in a NetworkX graph has a dictionary of *attributes* associated with it. This dictionary can hold arbitrary data about a node. We can get the attributes for a single node by giving the node name to the `nodes` object.
Each node in this graph has a `'club'` attribute, indicating whether the member followed the instructor or the president after the split:
```python
K.nodes[0]
```
{'club': 'Mr. Hi'}
```python
K.nodes[9]
```
{'club': 'Officer'}
We can visualize these labels by coloring each node according to its `'club'` attribute:
```python
K = nx.karate_club_graph()
club_color = {
'Mr. Hi': 'orange',
'Officer': 'lightblue',
}
node_colors = [club_color[K.nodes[n]['club']] for n in K.nodes]
nx.draw(K, node_color=node_colors, with_labels=True)
```
This separation looks good, in that there are relatively few inter-community links as opposed to intra-community links. Let's create a graph partition based on these labels and measure its modularity.
We can do this by creating a dictionary of two sets, one for each value of the nodes' `'club'` attribute, then assigning the nodes to the corresponding set.
```python
groups = {
'Mr. Hi': set(),
'Officer': set(),
}
for n in K.nodes:
club = K.nodes[n]['club']
groups[club].add(n)
groups
```
{'Mr. Hi': {0, 1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 16, 17, 19, 21},
'Officer': {9,
14,
15,
18,
20,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33}}
By using the dictionary's `.values()` method, we can get a list of sets that define our partition:
```python
empirical_partition = list(groups.values())
empirical_partition
```
[{0, 1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 16, 17, 19, 21},
{9, 14, 15, 18, 20, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33}]
```python
nx.community.is_partition(K, empirical_partition)
```
True
Since our partition is indeed a valid partition, we can get the modularity of this partition:
```python
nx.community.quality.modularity(K, empirical_partition)
```
0.3582347140039448
This is a relatively high modularity, which is what we expect.
### Comparison to a random partition
For the sake of comparison, let's generate a random partition of this network and check its modularity. We would expect a modularity close to zero in this case.
First we generate a sample of 17 nodes, half the total number of nodes, and assign them to one community. Our second community then includes the nodes in the graph not in the first community. We can use some set arithmetic to do this concisely:
```python
random_nodes = random.sample(K.nodes, 17)
random_partition = [set(random_nodes),
set(K.nodes) - set(random_nodes)]
random_partition
```
[{0, 1, 2, 10, 11, 12, 13, 14, 15, 16, 17, 18, 22, 26, 27, 29, 33},
{3, 4, 5, 6, 7, 8, 9, 19, 20, 21, 23, 24, 25, 28, 30, 31, 32}]
We can visualize this partition and observe that the communities are much less natural-looking, as we would expect from a random assignment.
```python
random_node_colors = ['orange' if n in random_nodes else 'lightblue' for n in K.nodes]
nx.draw(K, node_color=random_node_colors)
```
And finally we can test the modularity of this partition:
```python
nx.community.quality.modularity(K, random_partition)
```
-0.05530900723208418
Since this is a random process the modularity won't be exactly zero, but it should be fairly close. Go ahead and repeat the process of generating a random partition and testing its modularity -- it will fluctuate around its mean value of zero.
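To see this fluctuation without re-running the cell by hand, here is a small sketch (our addition, reusing `K`, `nx` and the `random` module imported at the top) that repeats the experiment many times and averages the modularity:
```python
scores = []
for _ in range(100):
    nodes = set(random.sample(list(K.nodes), 17))
    parts = [nodes, set(K.nodes) - nodes]
    scores.append(nx.community.quality.modularity(K, parts))
sum(scores) / len(scores)  # should be close to zero
```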
# 4. Girvan-Newman clustering
Our task in this part will be to implement the Girvan-Newman clustering algorithm. Since NetworkX can do the heavy lifting for us -- computing betweenness centrality -- the code part of the task is relatively straightforward. Most of our effort here is spent interpreting and explaining intermediate results.
Recall from the text the Girvan-Newman clustering algorithm:
1. Create a partition sequence
1. Calculate the betweenness centrality for all links.
2. Remove the link with largest betweenness and create a partition using connected components.
3. Recalculate the betweenness centrality of the links of the resulting graph.
4. Repeat from step B until no links remain.
2. Evaluate each partition in the sequence and choose the one with the highest modularity.
During this process, the number of connected components in the graph will increase monotonically as clusters are broken up. Since we are removing one link at a time, the number of connected components can increase by at most one between steps in the sequence -- it's not possible for a single edge to connect more than two nodes, and thus components.
We hope that the resulting partition of the graph will approximate its underlying community structure. We'll use the Karate Club graph here because we know the ground-truth community labels and can compare the result obtained from the algorithm.
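Before implementing the algorithm step by step ourselves, note that NetworkX also ships a ready-made implementation, `girvan_newman`, in its community module. A minimal usage sketch (our addition; assumes a reasonably recent NetworkX version):
```python
from networkx.algorithms.community import girvan_newman

K = nx.karate_club_graph()
# girvan_newman returns an iterator that yields a partition (a tuple of node sets)
# each time the number of connected components increases.
first_split = next(girvan_newman(K))
[sorted(c) for c in first_split]
```
Working through the steps by hand, as we do below, shows what this function is doing internally.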
```python
G = nx.karate_club_graph()
nx.draw(G)
```
## 4.1 Create a partition sequence
### A. Calculate the betweenness centrality for all links
NetworkX does the heavy lifting here. All we need to do is understand the output.
```python
nx.edge_betweenness_centrality(G)
```
{(0, 1): 0.025252525252525245,
(0, 2): 0.0777876807288572,
(0, 3): 0.02049910873440285,
(0, 4): 0.0522875816993464,
(0, 5): 0.07813428401663694,
(0, 6): 0.07813428401663695,
(0, 7): 0.0228206434088787,
(0, 8): 0.07423959482783014,
(0, 10): 0.0522875816993464,
(0, 11): 0.058823529411764705,
(0, 12): 0.04652406417112298,
(0, 13): 0.04237189825425121,
(0, 17): 0.04012392835922248,
(0, 19): 0.045936960642843,
(0, 21): 0.040123928359222474,
(0, 31): 0.1272599949070537,
(1, 2): 0.023232323232323233,
(1, 3): 0.0077243018419489,
(1, 7): 0.007422969187675069,
(1, 13): 0.01240556828792123,
(1, 17): 0.01869960105254222,
(1, 19): 0.014633732280791102,
(1, 21): 0.01869960105254222,
(1, 30): 0.032280791104320514,
(2, 3): 0.022430184194890075,
(2, 7): 0.025214328155504617,
(2, 8): 0.009175791528732704,
(2, 9): 0.030803836686189627,
(2, 13): 0.007630931160342923,
(2, 27): 0.04119203236850296,
(2, 28): 0.02278244631185807,
(2, 32): 0.06898678663384543,
(3, 7): 0.003365588659706307,
(3, 12): 0.012299465240641705,
(3, 13): 0.01492233256939139,
(4, 6): 0.0047534165181224,
(4, 10): 0.0029708853238265,
(5, 6): 0.0029708853238265003,
(5, 10): 0.0047534165181224,
(5, 16): 0.029411764705882353,
(6, 16): 0.029411764705882353,
(8, 30): 0.00980392156862745,
(8, 32): 0.0304416716181422,
(8, 33): 0.04043657867187279,
(9, 33): 0.029615482556659026,
(13, 33): 0.06782389723566191,
(14, 32): 0.024083977025153497,
(14, 33): 0.03473955238661121,
(15, 32): 0.024083977025153497,
(15, 33): 0.03473955238661121,
(18, 32): 0.024083977025153497,
(18, 33): 0.03473955238661121,
(19, 33): 0.05938233879410351,
(20, 32): 0.024083977025153497,
(20, 33): 0.03473955238661121,
(22, 32): 0.024083977025153493,
(22, 33): 0.03473955238661121,
(23, 25): 0.019776193305605066,
(23, 27): 0.010536739948504653,
(23, 29): 0.00665478312537136,
(23, 32): 0.022341057635175278,
(23, 33): 0.03266983561101209,
(24, 25): 0.0042186571598336305,
(24, 27): 0.018657159833630418,
(24, 31): 0.040106951871657755,
(25, 31): 0.04205783323430383,
(26, 29): 0.004532722179781003,
(26, 33): 0.0542908072319837,
(27, 33): 0.030477039300568713,
(28, 31): 0.0148544266191325,
(28, 33): 0.024564977506153975,
(29, 32): 0.023328523328523323,
(29, 33): 0.029807882749059215,
(30, 32): 0.01705288175876411,
(30, 33): 0.02681436210847975,
(31, 32): 0.04143394731630026,
(31, 33): 0.05339388280564752,
(32, 33): 0.008225108225108224}
The resulting dictionary has edge tuples as the keys, and each associated value is the betweenness centrality of that edge. The algorithm to compute the edge betweenness of all edges in a graph costs about the same as calculating it for a single edge, so we'll make use of this dictionary with the computed values for every edge.
Once computed for all edges, we can easily get the associated betweenness for a single edge. For example, to get the edge betweenness of the edge between nodes 0 and 1:
```python
my_edge_betweenness = nx.edge_betweenness_centrality(G)
my_edge_betweenness[0, 1]
```
0.025252525252525245
Recall that dictionaries also have the `.get` method. This will be used in the next step.
```python
my_edge_betweenness.get((0, 1))
```
0.025252525252525245
### B. Remove the link with largest betweenness...
Given this dictionary of betweenness values for each edge, we can make use of Python's builtin `max` function to give us the key in this dictionary with the greatest value. Since there is a key in the dictionary for each edge in the graph, the following two expressions are equivalent, but the second one is probably more explicit as to what we're doing with this statement.
I'm using the name `my_edge_betweenness` to make clear that this is a dictionary we've named and not a NetworkX function.
```python
max(my_edge_betweenness, key=my_edge_betweenness.get)
```
(0, 31)
```python
max(G.edges(), key=my_edge_betweenness.get)
```
(0, 31)
This is then the edge we want to remove at this step in the process:
```python
my_edge_betweenness = nx.edge_betweenness_centrality(G)
most_valuable_edge = max(G.edges(), key=my_edge_betweenness.get)
G.remove_edge(*most_valuable_edge)
```
The "splat" in the last statement above `G.remove_edge(*most_valuable_edge)` performs tuple unpacking into the arguments of a function. For example, if our most valuable edge is `(0, 31)`,
G.remove_edge(*most_valuable_edge)
is the same as
G.remove_edge(most_valuable_edge[0], most_valuable_edge[1])
or
G.remove_edge(0, 31)
### B. (cont'd) ...and create a partition using connected components
This one is almost a gimme because the `nx.connected_components()` function gives us almost exactly what we want:
```python
nx.connected_components(G)
```
<generator object connected_components at 0x7fe9b7567eb0>
We just have to remember to ask for it in a list:
```python
list(nx.connected_components(G))
```
[{0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33}]
Remember: a partition is a list of sets where every node is in exactly one of these sets. This is just what we have here, although it's a bit boring since we've only removed one edge and so there is still one connected component. If you like, you can try running the previous two cells a few times until you have more than one connected component so you can see what that looks like.
Note that this feature whereby the connected components correspond exactly to our putative community labels is particular to the Girvan-Newman algorithm: other clustering algorithms may use different ways of generating their partitions.
### C. Recalculate the betweenness centrality of the links of the resulting graph.
### D. Repeat from step B until no links remain.
This implies that we need a loop to repeat this process $L$ times, once for each edge, and that we should keep track of the partitions generated. Straightforward stuff. We'll start with a fresh Karate Club graph since we removed some edges above:
```python
G = nx.karate_club_graph()
partition_sequence = []
for _ in range(G.number_of_edges()):
my_edge_betweenness = nx.edge_betweenness_centrality(G)
most_valuable_edge = max(G.edges(), key=my_edge_betweenness.get)
G.remove_edge(*most_valuable_edge)
my_partition = list(nx.connected_components(G))
partition_sequence.append(my_partition)
```
Note the idiomatic construction of this `for` loop. Using `_` as the name for the loop variable tells the reader that we don't expect to do anything with the loop variable -- we just want to perform a task a specific number of times. One might be tempted to use a `while` loop here, but that way lie dragons: a mistake in a `while` loop can lead to infinite loops which are a headache.
If we've done this right, we should have a partition for each step of the process, *i.e.* one for each edge in the graph:
```python
len(partition_sequence), nx.karate_club_graph().number_of_edges()
```
(78, 78)
Since we started with a connected graph, removing one edge probably doesn't disconnect the graph, so our first partition probably only has one community:
```python
len(partition_sequence[0])
```
1
...and the last partition should also be trivial, with each node in its own community:
```python
len(partition_sequence[-1]), nx.karate_club_graph().number_of_nodes()
```
(34, 34)
## 4.2 Evaluate the modularity of each partition in the sequence
We now have a sequence of partitions and a function to calculate the modularity of a partition. This is a great time to use a list comprehension!
```python
G = nx.karate_club_graph()
modularity_sequence = [modularity(G, p) for p in partition_sequence]
modularity_sequence
```
[0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.35996055226824464,
0.35996055226824464,
0.35996055226824464,
0.348783694937541,
0.348783694937541,
0.348783694937541,
0.348783694937541,
0.3632478632478632,
0.3632478632478632,
0.3632478632478632,
0.3632478632478632,
0.3632478632478632,
0.3632478632478632,
0.4012984878369493,
0.3925049309664694,
0.3925049309664694,
0.376232741617357,
0.376232741617357,
0.35831689677843515,
0.35831689677843515,
0.34171597633136086,
0.34171597633136086,
0.3247863247863247,
0.3247863247863247,
0.3159105851413542,
0.3159105851413542,
0.2986522024983562,
0.2986522024983562,
0.28040762656147256,
0.28040762656147256,
0.26282051282051266,
0.26282051282051266,
0.24753451676528584,
0.24753451676528584,
0.22682445759368833,
0.22682445759368833,
0.20890861275476658,
0.20890861275476658,
0.1898422090729783,
0.18129520052596976,
0.18129520052596976,
0.18129520052596976,
0.1600920447074293,
0.1600920447074293,
0.1469428007889546,
0.1469428007889546,
0.1469428007889546,
0.12031558185404337,
0.12031558185404337,
0.12031558185404337,
0.10815253122945431,
0.10815253122945431,
0.09064760026298489,
0.08029257067718606,
0.06993754109138725,
0.057856673241288625,
0.057856673241288625,
0.03418803418803419,
0.022024983563445105,
0.022024983563445105,
0.022024983563445105,
-0.002876397107166334,
-0.002876397107166334,
-0.026298487836949366,
-0.03763971071663378,
-0.03763971071663378,
-0.053747534516765276,
-0.04980276134122286]
This sequence is then the modularity of the partition at each step in the algorithm. The first several entries in this sequence are effectively zero while there is only one community/component, then it jumps up once there is more than one community. We can use pyplot to visualize this relationship:
```python
import matplotlib.pyplot as plt
plt.plot(modularity_sequence)
plt.ylabel('Modularity')
plt.xlabel('Algorithm step')
```
### Get the partition with highest modularity
Visually, we see a peak in the modularity sequence. This is the partition that maximizes modularity, and thus the output of the algorithm. We can use the `max` function to get the partition with highest modularity. Ideally we want to write the following:
```python
best_partition = max(partition_sequence, key=nx.community.quality.modularity)
```
...but we receive an error. Recall that a key function must take exactly one argument, the item in the sequence being evaluated, but the modularity function takes two arguments: the graph and the partition. We can fix this by writing a single-argument function to use as the key:
```python
def my_modularity(partition):
return nx.community.quality.modularity(G, partition)
best_partition = max(partition_sequence, key=my_modularity)
```
Advanced Pythonauts will see a different solution to this using the `zip` function to align the previously-generated partition & modularity sequences, but this solution is more explicit.
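For completeness, here is a minimal sketch of that `zip`-based variant, assuming `partition_sequence` and `modularity_sequence` from the cells above are still in scope:
```python
# Pair each partition with its precomputed modularity, take the pair with
# the largest modularity, and keep only the partition itself ([1]).
best_partition = max(
    zip(modularity_sequence, partition_sequence),
    key=lambda pair: pair[0],
)[1]
len(best_partition)
```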
So after all that work, what is the best partition?
```python
best_partition
```
[{0, 1, 3, 7, 11, 12, 13, 17, 19, 21},
{2, 24, 25, 27, 28, 31},
{4, 5, 6, 10, 16},
{8, 14, 15, 18, 20, 22, 23, 26, 29, 30, 32, 33},
{9}]
Interesting! The partition of the karate club graph with highest modularity actually has five components! Let's visualize them, using our code for partition maps we wrote back at the beginning of this tutorial:
```python
def create_partition_map(partition):
partition_map = {}
for idx, cluster_nodes in enumerate(partition):
for node in cluster_nodes:
partition_map[node] = idx
return partition_map
```
```python
best_partition_map = create_partition_map(best_partition)
node_colors = [best_partition_map[n] for n in G.nodes()]
nx.draw(G, with_labels=True, node_color=node_colors)
```
Exactly how good is this five-community clustering?
```python
nx.community.quality.modularity(G, best_partition)
```
0.40129848783694944
It's higher than the "ground truth" communities we evaluated in section 3, which is a good sign, but for the specific problem of trying to predict the post-split community membership, a clustering into five groups is useless to us.
### Get the best partition with a given number of communities
One of the most useful properties of the Girvan-Newman algorithm is that it also works well when we want a specific number of clusters. In this case, we know the karate club split into two groups, so let's get the partition in the sequence with two components:
```python
for partition in partition_sequence:
if len(partition) == 2:
two_cluster_partition = partition
break
two_cluster_partition
```
[{0, 1, 3, 4, 5, 6, 7, 10, 11, 12, 13, 16, 17, 19, 21},
{2, 8, 9, 14, 15, 18, 20, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33}]
```python
two_cluster_partition_map = create_partition_map(two_cluster_partition)
node_colors = [two_cluster_partition_map[n] for n in G.nodes()]
nx.draw(G, with_labels=True, node_color=node_colors)
```
How good is this partition? We can get its modularity:
```python
nx.community.quality.modularity(G, two_cluster_partition)
```
0.3599605522682445
Pretty good -- comparable to the ground truth community labels. Let's compare these side-by-side:
```python
import matplotlib.pyplot as plt
pos = nx.layout.spring_layout(G)
fig = plt.figure(figsize=(15, 6))
plt.subplot(1, 2, 1)
two_cluster_partition_map = create_partition_map(two_cluster_partition)
node_colors = [two_cluster_partition_map[n] for n in G.nodes()]
nx.draw(G, with_labels=True, node_color=node_colors, pos=pos)
plt.title('Predicted communities')
plt.subplot(1, 2, 2)
node_colors = [G.nodes[n]['club'] == 'Officer' for n in G.nodes()]
nx.draw(G, with_labels=True, node_color=node_colors, pos=pos)
plt.title('Actual communities')
```
We can see that the predicted community labels are pretty accurate, only differing on a couple of nodes that, visually, seem like they could plausibly belong to either group. Zachary's original paper even explains the practical considerations of one of these mispredicted nodes: student 8 was very near receiving his black belt from Mr. Hi and thus did not want to leave the group even though several of his friends did.
```python
G.nodes[8]
```
{'club': 'Mr. Hi'}
#### Aside
The astute reader might note that there may be several two-cluster partitions in the partition sequence we generated. We assert the following to be true:
1. For every integer from 1 to N (the number of nodes), there is a partition in the sequence with that number of clusters
2. Every partition in the sequence with the same number of clusters is the same
Proving these is left as an exercise to the reader, but as a consequence of these being true, optimized implementations of Girvan-Newman clustering will only store one partition for each number of clusters. This is how the implementation in NetworkX works, only providing one partition for each number of communities greater than one.
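Although the proof is left as an exercise, both assertions are easy to check empirically; here is a small sketch, assuming `partition_sequence` from section 4.1 is still in scope:
```python
from collections import defaultdict

# Group the partitions by their number of clusters, comparing them as
# sets of frozensets so that the ordering of components doesn't matter.
def as_set_of_frozensets(partition):
    return {frozenset(cluster) for cluster in partition}

partitions_by_size = defaultdict(list)
for partition in partition_sequence:
    partitions_by_size[len(partition)].append(as_set_of_frozensets(partition))

N = nx.karate_club_graph().number_of_nodes()
# Assertion 1: every cluster count from 1 to N occurs in the sequence
print(set(partitions_by_size) == set(range(1, N + 1)))
# Assertion 2: all partitions with the same cluster count are identical
print(all(all(p == group[0] for p in group)
          for group in partitions_by_size.values()))
```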
## NetworkX Function
`nx.community.girvan_newman(G)` will generate a sequence containing one partition of each size greater than one. Here we can see the first several are the same as those we generated:
```python
list(nx.community.girvan_newman(G))[:5]
```
[({0, 1, 3, 4, 5, 6, 7, 10, 11, 12, 13, 16, 17, 19, 21},
{2, 8, 9, 14, 15, 18, 20, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33}),
({0, 1, 3, 4, 5, 6, 7, 10, 11, 12, 13, 16, 17, 19, 21},
{2, 8, 14, 15, 18, 20, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33},
{9}),
({0, 1, 3, 7, 11, 12, 13, 17, 19, 21},
{2, 8, 14, 15, 18, 20, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33},
{4, 5, 6, 10, 16},
{9}),
({0, 1, 3, 7, 11, 12, 13, 17, 19, 21},
{2, 24, 25, 27, 28, 31},
{4, 5, 6, 10, 16},
{8, 14, 15, 18, 20, 22, 23, 26, 29, 30, 32, 33},
{9}),
({0, 1, 3, 7, 12, 13, 17, 19, 21},
{2, 24, 25, 27, 28, 31},
{4, 5, 6, 10, 16},
{8, 14, 15, 18, 20, 22, 23, 26, 29, 30, 32, 33},
{9},
{11})]
```python
```
```python
```
| 0a19adc148b7dae16d47e7bc03b5b047b397147d | 610,939 | ipynb | Jupyter Notebook | tutorials/Chapter 6 Tutorial.ipynb | arvidl/FirstCourseNetworkScience | 29516707b98555658bedbeea84c09564d20870e7 | [
"CC-BY-4.0"
]
| null | null | null | tutorials/Chapter 6 Tutorial.ipynb | arvidl/FirstCourseNetworkScience | 29516707b98555658bedbeea84c09564d20870e7 | [
"CC-BY-4.0"
]
| null | null | null | tutorials/Chapter 6 Tutorial.ipynb | arvidl/FirstCourseNetworkScience | 29516707b98555658bedbeea84c09564d20870e7 | [
"CC-BY-4.0"
]
| null | null | null | 345.945074 | 121,788 | 0.932884 | true | 8,282 | Qwen/Qwen-72B | 1. YES
2. YES | 0.826712 | 0.870597 | 0.719733 | __label__eng_Latn | 0.987815 | 0.510513 |
# Computing mass functions, halo biases and concentrations
This notebook illustrates how to compute mass functions, halo biases and concentration-mass relations with CCL, as well as how to translate between different mass definitions.
```python
import numpy as np
import pylab as plt
import pyccl as ccl
%matplotlib inline
```
## Preliminaries
Generate a cosmology object and a few mass/redshift arrays
```python
# Cosmology
cosmo = ccl.Cosmology(Omega_c=0.27, Omega_b=0.045,
h=0.67, A_s=2.1e-9, n_s=0.96)
# Array of masses
m_arr = np.geomspace(1E10,1E15,128)
# Array of redshifts
z_arr = np.linspace(0.,1.,16)
```
## Mass definitions
CCL admits 3 different classes of definitions:
- Spherical overdensity (SO). The mass is defined as that enclosed by a radius within which the mean density is a factor $\Delta$ larger than the matter or critical density (below, $x$ stands for either ${\rm matter}$ or ${\rm critical}$):
\begin{equation}
M_{\Delta,x} = \frac{4\pi}{3}\Delta\rho_x R_{\Delta,x}^3
\end{equation}
- Virial spherical overdensity. The same as SO for the specific choice $\Delta=\Delta_{\rm vir}(z)$ and $x={\rm critical}$, where $\Delta_{\rm vir}$ is the virial overdensity, which CCL computes from Bryan & Norman 1998.
- Friends-of-friends masses (fof).
If you can attach a concentration-mass relation to a given SO mass definition, CCL is then able to translate masses according to that definition into any other SO definition assuming an NFW profile. This is only an approximation, and it's actually better to make sure you use consistent mass definitions throughout, but this functionality is provided for convenience.
These mass definition objects can then be passed around to all halo-model functions to make sure masses are treated consistently.
```python
# Delta=200 (matter).
# This one has an associated concentration-mass relation,
# so we can convert to other SO mass definitions
hmd_200m = ccl.halos.MassDef200m()
# Delta=200 (critical).
# This one has an associated concentration-mass relation,
# so we can convert to other SO mass definitions
hmd_200c = ccl.halos.MassDef200c()
# You can also change the c(M) relation as follows:
hmd_200c_b = ccl.halos.MassDef200c(c_m='Bhattacharya13')
# Delta=500 (matter).
# This one does not have a c(M) relation.
hmd_500m = ccl.halos.MassDef(500, 'matter')
# Virial overdensity
hmd_vir = ccl.halos.MassDef('vir', 'critical')
# FoF mass definition
hmd_fof = ccl.halos.MassDef('fof', 'matter')
```
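As a quick aside, the virial overdensity used by `hmd_vir` follows the Bryan & Norman (1998) fitting formula mentioned above. A minimal plain-NumPy sketch (not a CCL call), assuming a flat $\Lambda$CDM with the $\Omega$ values of `cosmo` defined earlier and neglecting radiation:
```python
# Bryan & Norman (1998) fit: Delta_vir = 18*pi^2 + 82*x - 39*x^2,
# with x = Omega_m(z) - 1, relative to the critical density.
Omega_m0 = 0.27 + 0.045  # Omega_c + Omega_b from the cosmology above

def delta_vir_bn98(z):
    Ez2 = Omega_m0 * (1 + z)**3 + (1 - Omega_m0)  # H^2(z)/H0^2, flat LCDM
    x = Omega_m0 * (1 + z)**3 / Ez2 - 1           # Omega_m(z) - 1
    return 18 * np.pi**2 + 82 * x - 39 * x**2

print([round(delta_vir_bn98(z), 1) for z in [0.0, 0.5, 1.0]])
```
CCL evaluates this internally, so in practice you simply pass `hmd_vir` around as with any other mass definition.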
Note that associating concentration-mass relations with mass definitions is only necessary if you'll want to translate between different mass definitions. Otherwise, you can use any concentration-mass relation you want for a given mass definition as we show further down (even if that c(M) relation is not the one you used to initialize the corresponding mass definition object).
## Mass functions
Mass functions are computed through classes that inherit from the `MassFunc` class. CCL supports a wide variety of mass function parametrizations, but more can be created following the instructions in the documentation.
All mass functions have a mass definition attached to them. Some mass functions support a range of mass definitions, and you can select which one you want when instantiating the class. All mass functions have default mass definitions, which are used if `None` is passed (which is the case below).
```python
hmfs = []
# Press & Schechter mass function
hmfs.append(ccl.halos.MassFuncPress74(cosmo))
# Sheth & Tormen mass function
hmfs.append(ccl.halos.MassFuncSheth99(cosmo))
# Tinker 2008 mass function
hmfs.append(ccl.halos.MassFuncTinker08(cosmo))
# Tinker 2010 mass function
hmfs.append(ccl.halos.MassFuncTinker10(cosmo))
# Bocquet 2016 mass function
hmfs.append(ccl.halos.MassFuncBocquet16(cosmo))
# Let's plot all of them at z=0
plt.figure()
for mf in hmfs:
nm = mf.get_mass_function(cosmo, m_arr, 1.)
plt.plot(m_arr,
m_arr * nm, label=mf.name)
plt.xscale('log')
plt.ylim([1E9,8.5E9])
plt.legend()
plt.xlabel(r'$M/M_\odot$', fontsize=14)
plt.ylabel(r'$M\,\frac{dn}{d\log_{10}M}\,[M_\odot\,{\rm Mpc}^{-3}]$',
fontsize=14);
```
Let's explore the time evolution of the mass function
```python
# Look at time evolution
from matplotlib.pyplot import cm
hmf_200m = ccl.halos.MassFuncTinker08(cosmo, mass_def=hmd_200m)
plt.figure()
plt.title(r'$0<z<1$',fontsize=14)
for z in z_arr:
nm = hmf_200m.get_mass_function(cosmo, m_arr, 1./(1+z))
plt.plot(m_arr,
m_arr * nm, c=cm.autumn(z))
plt.xscale('log')
plt.ylim([5E8,7E9])
plt.xlabel(r'$M/M_\odot$',fontsize=14)
plt.ylabel(r'$M\,\frac{dn}{d\log_{10}M}\,[M_\odot\,{\rm Mpc}^{-3}]$',
fontsize=14);
```
## Halo bias
Similar comments apply to the different halo bias parametrizations supported by CCL.
```python
hbfs = []
# Sheth & Tormen 1999
hbfs.append(ccl.halos.HaloBiasSheth99(cosmo))
# Sheth & Tormen 2001
hbfs.append(ccl.halos.HaloBiasSheth01(cosmo))
# Bhattacharya 2011
hbfs.append(ccl.halos.HaloBiasBhattacharya11(cosmo))
# Tinker 2010
hbfs.append(ccl.halos.HaloBiasTinker10(cosmo))
# Let's plot all of them at z=0
plt.figure()
for bf in hbfs:
bm = bf.get_halo_bias(cosmo, m_arr, 1.)
plt.plot(m_arr, bm, label=bf.name)
plt.xscale('log')
plt.legend()
plt.xlabel(r'$M/M_\odot$', fontsize=14)
plt.ylabel(r'$b_h(M)$', fontsize=14);
```
## Concentration-mass relation
Concentration-mass relations work in a similar way
```python
cmrs = []
# Diemer 2015
cmrs.append(ccl.halos.ConcentrationDiemer15())
# Bhattacharya 2013
cmrs.append(ccl.halos.ConcentrationBhattacharya13())
# Prada 2012
cmrs.append(ccl.halos.ConcentrationPrada12())
# Klypin 2011
cmrs.append(ccl.halos.ConcentrationKlypin11())
# Duffy 2008
cmrs.append(ccl.halos.ConcentrationDuffy08())
# Let's plot all of them at z=0
plt.figure()
for cmr in cmrs:
cm = cmr.get_concentration(cosmo, m_arr, 1.)
plt.plot(m_arr, cm, label=cmr.name)
plt.xscale('log')
plt.legend()
plt.xlabel(r'$M/M_\odot$', fontsize=14)
plt.ylabel(r'$c(M)$', fontsize=14);
```
## Convenience functions
It is possible to select mass functions, halo biases and concentration-mass relation from their name as follows
```python
nm = ccl.halos.mass_function_from_name('Tinker08')
bm = ccl.halos.halo_bias_from_name('Tinker10')
cm = ccl.halos.concentration_from_name('Duffy08')
print(nm)
print(bm)
print(cm)
```
<class 'pyccl.halos.hmfunc.MassFuncTinker08'>
<class 'pyccl.halos.hbias.HaloBiasTinker10'>
<class 'pyccl.halos.concentration.ConcentrationDuffy08'>
## Mass conversion
The lines below show how to convert between different mass definitions (and the consequences of doing so). First, we generate mass function objects for $\Delta=200$ and $500$. Then, we compute the mass function using both parametrizations, but for masses defined using $\Delta=200$ (the $\Delta=500$ mass function will use the concentration-mass relation to translate masses from $\Delta=200$ to $\Delta=500$ automatically). As you can see, doing so incurs a systematic error of 5-20%.
```python
# Let's define a mass function object for Delta = 500 (matter)
hmf_500m = ccl.halos.MassFuncTinker08(cosmo, mass_def=hmd_500m)
# Now let's compare the mass function parametrized for 200 (matter)
# with the mass function parametrized for 500 (matter) but
# translated to 200 (matter)
nm = hmf_200m.get_mass_function(cosmo, m_arr, 1.,
mdef_other = hmd_200m)
nm_trans = hmf_500m.get_mass_function(cosmo, m_arr, 1.,
mdef_other = hmd_200m)
plt.figure()
plt.plot(m_arr,nm_trans/nm-1)
plt.xscale('log')
plt.xlabel(r'$M/M_\odot$',fontsize=14)
plt.ylabel('Error from mass translation',
fontsize=14);
```
```python
```
| 81d6e5b8e0eb642a6b284a72c0ef142508f4a8b4 | 183,537 | ipynb | Jupyter Notebook | Halo-mass-function-example.ipynb | bjornvz/CCLX | dfb0fba4114dea267dec59ebca57870493f43f57 | [
"BSD-3-Clause"
]
| 14 | 2019-12-08T11:05:29.000Z | 2022-02-26T19:13:52.000Z | Halo-mass-function-example.ipynb | bjornvz/CCLX | dfb0fba4114dea267dec59ebca57870493f43f57 | [
"BSD-3-Clause"
]
| 19 | 2019-11-20T02:17:01.000Z | 2022-03-11T11:40:10.000Z | Halo-mass-function-example.ipynb | bjornvz/CCLX | dfb0fba4114dea267dec59ebca57870493f43f57 | [
"BSD-3-Clause"
]
| 7 | 2020-02-14T10:57:19.000Z | 2022-03-28T19:21:20.000Z | 443.326087 | 53,884 | 0.94035 | true | 2,232 | Qwen/Qwen-72B | 1. YES
2. YES | 0.859664 | 0.757794 | 0.651448 | __label__eng_Latn | 0.912282 | 0.351864 |
```python
%load_ext autoreload
%autoreload 2
%matplotlib inline
```
# Naive Bayes Classifier
The Naive Bayes classifier is based on Bayes' theorem and is used for classification problems, for instance in NLP text classification: topic modeling, sentiment analysis, spam detection, etc.
## Bayes' theorem:
A partition of a set $X$ into subsets $\{A_i \subset X \mid i \in I\}$ satisfies $X = \bigcup_{i \in I}A_i$ and $A_i \cap A_j = \emptyset$ for every pair $i \neq j$ in $I$
For each subset $A \subset X$ we have the partition $X = A \cup A^c$
#### Total probability theorem:
For every partition $A_1, A_2 \dots, A_k$ of $\Omega$ and event $B \subset \Omega$:
$$P(B) = \sum_{i=1}^{k}P(B|A_i)P(A_i)$$
#### Theorem (Bayes' theorem):
Let $A_1, A_2 \dots, A_k$ be a partition of $\Omega$ such that $P(A_i) > 0$ for each $i \in \{1, 2, \dots, k\}$. Then for $B \subset \Omega$ event, such that $P(B) > 0$, for each $i \in \{1, 2, \dots, k\}$:
$$
P(A_i|B) = \frac{P(B|A_i)P(A_i)}{\sum_{j=1}^{k}P(B|A_j)P(A_j)}
$$
#### Note:
We call $P(A_i)$ the prior probability and $P(A_i|B)$ the posterior probability
For the events $A$ and $B$ such that $P(B) \gt 0$ we have:
$$
P(A|B) = \frac{P(B|A)P(A)}{P(B)}
$$
<br>
We can consider the partition of $\Omega$ into $A$ and $A^c$; then from Bayes' theorem we have:
$$
P(A|B) = \frac{P(B|A)P(A)}{P(B|A)P(A) + P(B|A^c)P(A^c)} = \text{(by the total probability) }\frac{P(B|A)P(A)}{P(B)}
$$
#### Example:
Divide emails into $A_1 = \text{"spam"}$, $A_2 = \text{"low priority"}$ and $A_3 = \text{"high priority"}$, and let: $P(A_1) = 0.7$, $P(A_2) = 0.2$ and $P(A_3) = 0.1$. ($P(A_1) + P(A_2) + P(A_3) = 0.7 + 0.2 + 0.1 = 1$)
<br>
Let $B$ be the event that an email contains the word "free", and suppose we know from previous experience that: $P(B|A_1) = 0.9$, $P(B|A_2) = 0.01$ and $P(B|A_3) = 0.01$.
<br>
If we receive an email with the word "free" in it, what is the probability that this email is spam?
From Bayes' theorem:
$$
P(A_1|B) = \frac{P(B|A_1)P(A_1)}{P(B|A_1)P(A_1) + P(B|A_2)P(A_2) + P(B|A_3)P(A_3)} = \frac{0.9 \cdot 0.7}{0.9 \cdot 0.7 + 0.01 \cdot 0.2 + 0.01 \cdot 0.1} = 0.995
$$
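As a quick sanity check, the same number can be reproduced with a few lines of Python (a standalone sketch, independent of the dataset used below):
```python
# Numeric check of the spam example above
priors = {'spam': 0.7, 'low': 0.2, 'high': 0.1}          # P(A_i)
likelihoods = {'spam': 0.9, 'low': 0.01, 'high': 0.01}   # P(B|A_i)

evidence = sum(likelihoods[c] * priors[c] for c in priors)   # P(B), total probability
posterior_spam = likelihoods['spam'] * priors['spam'] / evidence
print(round(posterior_spam, 3))  # ~0.995
```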
## Multi-dimensional case
#### Example Golf and Weather
```python
import pandas as pd
import numpy as np
from pathlib import Path
```
```python
path = Path('data')
nb = path / 'naive-bayes'
golf_csv = nb / 'golf.csv'
```
```python
def strip_txt(txt:str) -> str:
return txt.replace("'", '').strip() if txt else txt
```
```python
df = pd.read_csv(golf_csv, converters={'outlook':strip_txt,
'temp': strip_txt,
'humidity': strip_txt,
'wind': strip_txt,
'label': strip_txt})
df
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>outlook</th>
<th>temp</th>
<th>humidity</th>
<th>wind</th>
<th>label</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Sunny</td>
<td>Hot</td>
<td>High</td>
<td>Weak</td>
<td>No</td>
</tr>
<tr>
<th>1</th>
<td>Sunny</td>
<td>Hot</td>
<td>High</td>
<td>Strong</td>
<td>No</td>
</tr>
<tr>
<th>2</th>
<td>Overcast</td>
<td>Hot</td>
<td>High</td>
<td>Weak</td>
<td>Yes</td>
</tr>
<tr>
<th>3</th>
<td>Rain</td>
<td>Mild</td>
<td>High</td>
<td>Weak</td>
<td>Yes</td>
</tr>
<tr>
<th>4</th>
<td>Rain</td>
<td>Cool</td>
<td>Normal</td>
<td>Weak</td>
<td>Yes</td>
</tr>
<tr>
<th>5</th>
<td>Rain</td>
<td>Cool</td>
<td>Normal</td>
<td>Strong</td>
<td>No</td>
</tr>
<tr>
<th>6</th>
<td>Overcast</td>
<td>Cool</td>
<td>Normal</td>
<td>Strong</td>
<td>Yes</td>
</tr>
<tr>
<th>7</th>
<td>Sunny</td>
<td>Mild</td>
<td>High</td>
<td>Weak</td>
<td>No</td>
</tr>
<tr>
<th>8</th>
<td>Sunny</td>
<td>Cool</td>
<td>Normal</td>
<td>Weak</td>
<td>Yes</td>
</tr>
<tr>
<th>9</th>
<td>Rain</td>
<td>Mild</td>
<td>Normal</td>
<td>Weak</td>
<td>Yes</td>
</tr>
<tr>
<th>10</th>
<td>Sunny</td>
<td>Mild</td>
<td>Normal</td>
<td>Strong</td>
<td>Yes</td>
</tr>
<tr>
<th>11</th>
<td>Overcast</td>
<td>Mild</td>
<td>High</td>
<td>Strong</td>
<td>Yes</td>
</tr>
<tr>
<th>12</th>
<td>Overcast</td>
<td>Hot</td>
<td>Normal</td>
<td>Weak</td>
<td>Yes</td>
</tr>
<tr>
<th>13</th>
<td>Rain</td>
<td>Mild</td>
<td>High</td>
<td>Strong</td>
<td>No</td>
</tr>
</tbody>
</table>
</div>
Let's calculate the probability of each label
<br>
$P(\text{"Yes"}) = 9/14$ and $P(\text{"No"}) = 5/14$
Now the conditional probability of the outlook value 'Sunny' given each label
<br>
$P(\text{"Sunny"}|\text{"Yes"}) = 2/9$ and $P(\text{"Sunny"}|\text{"No"}) = 3/5$
And the conditional probability of the outlook value 'Overcast' given each label
<br>
$P(\text{"Overcast"}|\text{"Yes"}) = 4/9$ and $P(\text{"Overcast"}|\text{"No"}) = 0/5$
```python
df.outlook
```
0 Sunny
1 Sunny
2 Overcast
3 Rain
4 Rain
5 Rain
6 Overcast
7 Sunny
8 Sunny
9 Rain
10 Sunny
11 Overcast
12 Overcast
13 Rain
Name: outlook, dtype: object
```python
y_vals = df[df.label.str.contains('Yes')].count()[0]
n_vals = df[df.label.str.contains('No')].count()[0]
f_vals = df.count()[0]
y_vals, n_vals, f_vals
```
(9, 5, 14)
```python
df.outlook.unique()
```
array(['Sunny', 'Overcast', 'Rain'], dtype=object)
```python
sunny_y = df[df.outlook.str.contains('Sunny') & df.label.str.contains('Yes')].count()[0]
sunny_n = df[df.outlook.str.contains('Sunny') & df.label.str.contains('No')].count()[0]
overcast_y = df[df.outlook.str.contains('Overcast') & df.label.str.contains('Yes')].count()[0]
overcast_n = df[df.outlook.str.contains('Overcast') & df.label.str.contains('No')].count()[0]
rain_y = df[df.outlook.str.contains('Rain') & df.label.str.contains('Yes')].count()[0]
rain_n = df[df.outlook.str.contains('Rain') & df.label.str.contains('No')].count()[0]
print(f'sunny_y = {sunny_y}/{y_vals}, sunny_n = {sunny_n}/{n_vals}')
print(f'overcast_y = {overcast_y}/{y_vals}, overcast_n = {overcast_n}/{n_vals}')
print(f'rain_y = {rain_y}/{y_vals}, rain_n = {rain_n}/{n_vals}')
```
sunny_y = 2/9, sunny_n = 3/5
overcast_y = 4/9, overcast_n = 0/5
    rain_y = 3/9, rain_n = 2/5
```python
def count_feat(col_name:str, col_val:str) -> int:
return df[df[col_name].str.contains(col_val)].count()[0]
def count_cond(col_name:str, col_val:str, lab:str) -> int:
return df[df[col_name].str.contains(col_val) & df.label.str.contains(lab)].count()[0]
def count_probs(col_name:str) -> int:
col_vals = df[col_name].unique()
for col_val in col_vals:
val_y = count_cond(col_name, col_val, 'Yes')
val_n = count_cond(col_name, col_val, 'No')
val_f = count_feat(col_name, col_val)
yield val_y, val_n, val_f, col_val
```
```python
col_vals = df.temp.unique()
temp_vals = [(ys, ns, fs, vls) for (ys, ns, fs, vls) in count_probs('temp')]
temp_vals
```
[(2, 2, 4, 'Hot'), (4, 2, 6, 'Mild'), (3, 1, 4, 'Cool')]
```python
col_vals = [(col_name, [(ys, ns, fs, vls) for (ys, ns, fs, vls) in count_probs(col_name)])
for col_name in df.columns]
col_vals
```
[('outlook', [(2, 3, 5, 'Sunny'), (4, 0, 4, 'Overcast'), (3, 2, 5, 'Rain')]),
('temp', [(2, 2, 4, 'Hot'), (4, 2, 6, 'Mild'), (3, 1, 4, 'Cool')]),
('humidity', [(3, 4, 7, 'High'), (6, 1, 7, 'Normal')]),
('wind', [(6, 2, 8, 'Weak'), (3, 3, 6, 'Strong')]),
('label', [(0, 5, 5, 'No'), (9, 0, 9, 'Yes')])]
```python
lns = ''
for col_val in col_vals:
ln = f'{col_val[0]}: \n' + '\n'.join(f'P({nm}) = {f_v}, P({nm}|Yes) = {y_v}/{y_vals}, P({nm}|No) = {n_v}/{n_vals}'
for y_v, n_v, f_v, nm in col_val[1]) + '\n'
lns += ln
lns += '===============\n'
print(lns)
```
outlook:
P(Sunny) = 5, P(Sunny|Yes) = 2/9, P(Sunny|No) = 3/5
P(Overcast) = 4, P(Overcast|Yes) = 4/9, P(Overcast|No) = 0/5
P(Rain) = 5, P(Rain|Yes) = 3/9, P(Rain|No) = 2/5
===============
temp:
P(Hot) = 4, P(Hot|Yes) = 2/9, P(Hot|No) = 2/5
P(Mild) = 6, P(Mild|Yes) = 4/9, P(Mild|No) = 2/5
P(Cool) = 4, P(Cool|Yes) = 3/9, P(Cool|No) = 1/5
===============
humidity:
P(High) = 7, P(High|Yes) = 3/9, P(High|No) = 4/5
P(Normal) = 7, P(Normal|Yes) = 6/9, P(Normal|No) = 1/5
===============
wind:
P(Weak) = 8, P(Weak|Yes) = 6/9, P(Weak|No) = 2/5
P(Strong) = 6, P(Strong|Yes) = 3/9, P(Strong|No) = 3/5
===============
label:
P(No) = 5, P(No|Yes) = 0/9, P(No|No) = 5/5
P(Yes) = 9, P(Yes|Yes) = 9/9, P(Yes|No) = 0/5
===============
```python
model_vals = {col_name: {vls: (ys, ns, fs) for (ys, ns, fs, vls) in count_probs(col_name)}
for col_name in df.columns if col_name != 'label'}
model_vals
```
{'outlook': {'Sunny': (2, 3, 5), 'Overcast': (4, 0, 4), 'Rain': (3, 2, 5)},
'temp': {'Hot': (2, 2, 4), 'Mild': (4, 2, 6), 'Cool': (3, 1, 4)},
'humidity': {'High': (3, 4, 7), 'Normal': (6, 1, 7)},
'wind': {'Weak': (6, 2, 8), 'Strong': (3, 3, 6)}}
Outlook: "Sunny",
Temperature: "Cool",
Humidity: "High",
Wind: "Strong"
```python
out_v = model_vals['outlook']['Sunny']
tmp_v = model_vals['temp']['Cool']
hum_v = model_vals['humidity']['High']
wnd_v = model_vals['wind']['Strong']
yes_raw = (out_v[0] / y_vals) * (tmp_v[0] / y_vals) * (hum_v[0] / y_vals) * (wnd_v[0] / y_vals) * (y_vals / f_vals)
no_raw = (out_v[1] / n_vals) * (tmp_v[1] / n_vals) * (hum_v[1] / n_vals) * (wnd_v[1] / n_vals) * (n_vals / f_vals)
yes_raw, no_raw
```
(0.005291005291005291, 0.02057142857142857)
```python
p_x = (out_v[2] / f_vals) * (tmp_v[2] / f_vals) * (hum_v[2] / f_vals) * (wnd_v[2] / f_vals)
p_x
```
0.021865889212827987
```python
yes_pred = yes_raw / p_x
no_pred = no_raw / p_x
print(f'yes_pred = {yes_pred}, no_pred = {no_pred}')
```
yes_pred = 0.2419753086419753, no_pred = 0.9408
```python
def predict(x:tuple) -> tuple:
out_v = model_vals['outlook'][x[0].capitalize()]
tmp_v = model_vals['temp'][x[1].capitalize()]
hum_v = model_vals['humidity'][x[2].capitalize()]
wnd_v = model_vals['wind'][x[3].capitalize()]
yes_raw = (out_v[0] / y_vals) * (tmp_v[0] / y_vals) * (hum_v[0] / y_vals) * (wnd_v[0] / y_vals) * (y_vals / f_vals)
no_raw = (out_v[1] / n_vals) * (tmp_v[1] / n_vals) * (hum_v[1] / n_vals) * (wnd_v[1] / n_vals) * (n_vals / f_vals)
p_x = (out_v[2] / f_vals) * (tmp_v[2] / f_vals) * (hum_v[2] / f_vals) * (wnd_v[2] / f_vals)
yes_pred = yes_raw / p_x
no_pred = no_raw / p_x
return yes_pred, no_pred
```
```python
x_vec = ('Sunny', 'Cool', 'High', 'Strong')
y_pr, n_pr = predict(x_vec)
print(f'yes_pred = {y_pr}, no_pred = {n_pr}')
```
yes_pred = 0.2419753086419753, no_pred = 0.9408
## Analysis of the algorithm
Learning a Naive Bayes classifier is just a matter of counting how many times each attribute value co-occurs with each class.
As you observed, the algorithm is better suited to categorical variables than to numerical ones, and it works best when the sample distribution is representative of the population.
#### Advantages
- It performs well on multi-class classification.
- It performs well on smaller datasets.
- If the independence assumption holds, Naive Bayes performs better than logistic regression on smaller datasets.
- It works best with categorical variables; for numerical variables a Gaussian distribution is usually assumed.
#### Disadvantages
- If a categorical value is unseen during training we get the zero-frequency problem, and a smoothing technique (Laplace smoothing) is needed.
- It is known to be a poor probability estimator, so the predicted probabilities should not be taken at face value.
- In real life, features are rarely independent.
The main question is how to deal with zero frequencies. Laplace smoothing is a simple trick:
$$P(x_i|C_p) = \frac{count(x_i, C_p) + 1}{\sum_{j=1}^{n}(count(x_j, C_p) + 1)}$$
<br>
or
$$P(x_i|C_p) = \frac{count(x_i, C_p) + 1}{\sum_{j=1}^{n}count(x_j, C_p) + n}$$
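As an illustration, here is a minimal sketch of Laplace smoothing applied to the counts stored in `model_vals` above (the helper name `smoothed_prob` is ours, not part of any library):
```python
# Add 1 to every count and the number of distinct feature values to the
# denominator, so unseen values never get probability zero.
def smoothed_prob(col_name, col_val, label='Yes'):
    idx = 0 if label == 'Yes' else 1
    total = y_vals if label == 'Yes' else n_vals
    n_values = len(model_vals[col_name])               # distinct values of this feature
    count = model_vals[col_name].get(col_val, (0, 0, 0))[idx]
    return (count + 1) / (total + n_values)

# 'Overcast' was never observed with label 'No' (count 0), yet the
# smoothed estimate is strictly positive:
print(smoothed_prob('outlook', 'Overcast', 'No'))
```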
## Theoretical Part
Recall the conditional probability
$$p(C_k \mid x_1, \dots, x_n)$$
and Bayes' theorem
$$p(C_k \mid \mathbf{x}) = \frac{p(C_k) \ p(\mathbf{x} \mid C_k)}{p(\mathbf{x})}$$
Recall that for a given $C$ the probability $$P(A \mid C)$$ can be considered a usual probability measure:
- $$P(\Omega \mid C) = 1$$
- $$P(A \mid C) \le 1$$
etc.
The events $A$ and $B$ are independent if $P(A\cap B) = P(A)P(B)$, or $P(A, B) = P(A)P(B)$. We can use the same formula for conditional independence (which follows from the above properties of conditional probability):
$$P(A \cap B | C) = P(A \mid C)P(B \mid C)$$
By the definition of conditional probability $P(A\cap B)= P(A \mid B)P(B)$
<br>
$$\begin{align}
P(A \mid B \cap C) = \frac{P(B \cap C \cap A)}{P(B \cap C)}= \frac{P(B \cap A \cap C)}{P(B \cap C)} = \\
\frac{P(A \cap B \mid C)P(C)}{P(B \cap C)} = \\
\frac{P(A \mid C)P(B \mid C)P(C)}{P(B \cap C)} = \\
\frac{P(A \mid C)P(B \mid C)P(C)}{P(B \mid C)P(C)} = P(A \mid C)
\end{align}$$
<br>
$$P(A| B \cap C) = P(A \mid B, C) = P(A\mid C)$$
$$\begin{align}
P(C_k, x_1, \dots, x_n) & = P(x_1, \dots, x_n, C_k) \\
& = P(x_1 \mid x_2, \dots, x_n, C_k) \ P(x_2, \dots, x_n, C_k) \\
& = P(x_1 \mid x_2, \dots, x_n, C_k) \ P(x_2 \mid x_3, \dots, x_n, C_k) \ P(x_3, \dots, x_n, C_k) \\
& = \dots \\
& = P(x_1 \mid x_2, \dots, x_n, C_k) \ P(x_2 \mid x_3, \dots, x_n, C_k) \dots P(x_{n-1} \mid x_n, C_k) \ p(x_n \mid C_k) \ P(C_k) \\
\end{align}$$
Under the naive conditional independence assumption, for every $i$:
$$P(x_i \mid x_{i+1}, \dots ,x_{n}, C_k ) = P(x_i \mid C_k)$$
So we have the formula:
$$\begin{align}
P(C_k \mid x_1, \dots, x_n) & = \frac{P(x_1 \mid C_k)P(x_2 \mid C_k) \dots P(x_n \mid C_k)P(C_k)}{P(x)} \\
& = \frac{P(x_1 \mid C_k)P(x_2 \mid C_k) \dots P(x_n \mid C_k)P(C_k)}{P(x_1, x_2, \dots, x_n)} \\
&= \frac{P(x_1 \mid C_k)P(x_2 \mid C_k) \dots P(x_n \mid C_k)P(C_k)}{P(x_1)P(x_2) \dots P(x_n)} \\
\end{align}$$
The denominator is the same for all classes, so the classifier decides by the formula:
$$\hat{y} = \underset{k \in \{1, \dots, K\}}{\operatorname{argmax}} \ P(C_k) \displaystyle\prod_{i=1}^n P(x_i \mid C_k)$$
## Different types of Naive Bayes
#### Multinomial naive Bayes
Let each event (e.g. a word occurrence) happen with probabilities $(p_1, \dots, p_n)$; then we have:
<br>
$$P(\mathbf{x} \mid C_k) = \frac{(\sum_i x_i)!}{\prod_i x_i !} \prod_i {p_{ki}}^{x_i}$$
Taking the $\log$, our estimator becomes a linear classifier:
$$
\begin{align}
\log p(C_k \mid \mathbf{x}) & \varpropto \log \left( p(C_k) \prod_{i=1}^n {p_{ki}}^{x_i} \right) \\
& = \log p(C_k) + \sum_{i=1}^n x_i \cdot \log p_{ki} \\
& = b + \mathbf{w}_k^\top \mathbf{x}
\end{align}
$$
<br>
where $b = \log p(C_k)$ and $w_{ki} = \log p_{ki}$
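A tiny numerical illustration of this linear form, with made-up class priors and word probabilities (all numbers here are invented for the example):
```python
# Score a count vector x with b + W x for each class and pick the argmax.
p_C = np.array([0.6, 0.4])            # class priors
p_ki = np.array([[0.7, 0.2, 0.1],     # per-class word probabilities
                 [0.1, 0.3, 0.6]])
x = np.array([3, 0, 2])               # feature counts of one document

b = np.log(p_C)
W = np.log(p_ki)
scores = b + W @ x
print(scores, scores.argmax())
```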
#### Bernoulli naive Bayes
When features are binary occurrences instead of frequencies (counts):
$$P(\mathbf{x} \mid C_k) = \prod_{i=1}^n p_{ki}^{x_i} (1 - p_{ki})^{(1-x_i)}$$
#### Gaussian naive Bayes
Suppose that for some feature $x$ and class $C_k$ the data follows a normal (Gaussian) distribution.
Compute the mean and variance of $x$ for each class, $\mu_k$ and $\sigma_{k}^{2}$; then for a value $v$:
$$p(x=v \mid C_k)=\frac{1}{\sqrt{2\pi\sigma^2_k}}\,e^{ -\frac{(v-\mu_k)^2}{2\sigma^2_k} }$$
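A minimal sketch of the Gaussian variant on a toy one-dimensional feature (the numbers are made up for illustration):
```python
# Estimate the per-class mean and variance, then evaluate the Gaussian
# likelihood of a new value v for each class.
x_class0 = np.array([4.9, 5.1, 5.0, 4.8])
x_class1 = np.array([6.2, 6.0, 6.4, 6.3])

def gaussian_likelihood(v, samples):
    mu, var = samples.mean(), samples.var()
    return np.exp(-(v - mu)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

v = 5.4
print(gaussian_likelihood(v, x_class0), gaussian_likelihood(v, x_class1))
```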
#### Exercise:
Try the approach on a spam classifier
#### Exercise:
Classify the <a href="https://archive.ics.uci.edu/ml/datasets/iris">Iris dataset</a> with a Naive Bayes classifier
#### Exercise:
Classify the <a href="https://archive.ics.uci.edu/ml/datasets/Wine">Wine dataset</a> with a Naive Bayes classifier
#### Exercise:
Classify the <a href="https://archive.ics.uci.edu/ml/datasets/Adult">Adult dataset</a> with a Naive Bayes classifier
| f795cb325e7a242b156e4a59a4f5b243ebfc71aa | 33,094 | ipynb | Jupyter Notebook | content/week-12/Naive Bayes.ipynb | GiorgiBeriashvili/school-of-ai | abd033fecf32c1222da097aa8420db6c69b357e6 | [
"Apache-2.0",
"MIT"
]
| null | null | null | content/week-12/Naive Bayes.ipynb | GiorgiBeriashvili/school-of-ai | abd033fecf32c1222da097aa8420db6c69b357e6 | [
"Apache-2.0",
"MIT"
]
| null | null | null | content/week-12/Naive Bayes.ipynb | GiorgiBeriashvili/school-of-ai | abd033fecf32c1222da097aa8420db6c69b357e6 | [
"Apache-2.0",
"MIT"
]
| null | null | null | 25.976452 | 230 | 0.453617 | true | 6,232 | Qwen/Qwen-72B | 1. YES
2. YES | 0.808067 | 0.817574 | 0.660655 | __label__eng_Latn | 0.416425 | 0.373254 |
# Spectral Analysis of Deterministic Signals
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).*
## Introduction
The analysis of the spectral properties of a signal plays an important role in digital signal processing. Some application examples are the
* [Spectrum analyzer](https://en.wikipedia.org/wiki/Spectrum_analyzer)
* Detection of (harmonic) signals
* [Estimation of fundamental frequency and harmonics](https://en.wikipedia.org/wiki/Modal_analysis)
* Spectral suppression: acoustic echo suppression, noise reduction, ...
In the practical realization of spectral analysis techniques the [discrete Fourier transformation](https://en.wikipedia.org/wiki/Discrete_Fourier_transform) (DFT) is applied to discrete finite-length signals in order to gain insights into their spectral composition. A basic task in spectral analysis is to determine the amplitude (and phase) of dominant harmonic contributions in a signal mixture. The properties of the DFT with respect to the analysis of an harmonic exponential signal are discussed in the following.
## The Leakage Effect
[Spectral leakage](https://en.wikipedia.org/wiki/Spectral_leakage) is a fundamental effect of the DFT. It limits the ability to detect harmonic signals in signal mixtures and hence the performance of spectral analysis. In order to discuss this effect, first the DFT of a single harmonic exponential signal is regarded. Its spectrum is derived in four steps:
1. Fourier transform of an harmonic exponential signal,
2. discrete-time Fourier transform (DTFT) of a discrete harmonic exponential signal, and
3. DTFT of a finite-length discrete harmonic exponential signal
4. sampling of the DTFT
These steps are detailed in the remaining subsections.
### Fourier Transformation of an Exponential Signal
The harmonic exponential signal is defined as
\begin{equation}
x(t) = \mathrm{e}^{\,\mathrm{j}\, \omega_0 \, t}
\end{equation}
where $\omega_0 = 2 \pi f$ denotes the angular frequency of the signal. The Fourier transform of the exponential signal is
\begin{equation}
X(\mathrm{j}\, \omega) = \int\limits_{-\infty}^{\infty} x(t) \,\mathrm{e}^{\,- \mathrm{j}\, \omega \,t} \mathrm{d}t = 2\pi \; \delta(\omega - \omega_0)
\end{equation}
The spectrum consists of a single shifted Dirac impulse located at the angular frequency $\omega_0$ of the exponential signal. Hence the spectrum $X(\mathrm{j}\, \omega)$ consists of a clearly isolated and distinguishable event. In practice, it is not possible to compute the Fourier transformation of a continuous signal by means of digital signal processing.
### Discrete-Time Fourier Transformation of the Discrete Exponential Signal
Now lets consider sampled signals. The discrete exponential signal $x[k]$ is derived from its continuous counterpart $x(t)$ above by equidistant sampling $x[k] := x(k T)$ with the sampling interval $T$
\begin{equation}
x[k] = \mathrm{e}^{\,\mathrm{j}\, \Omega_0 \,k}
\end{equation}
where $\Omega_0 = \omega_0 T$ denotes the normalized angular frequency. The [discrete-time Fourier transform](https://en.wikipedia.org/wiki/Discrete-time_Fourier_transform) (DTFT) is the Fourier transformation of a sampled signal. For the exponential signal it is given as
\begin{equation}
X(\mathrm{e}^{\,\mathrm{j}\, \Omega}) = \sum_{k = -\infty}^{\infty} x[k]\, \mathrm{e}^{\,-\mathrm{j}\, \Omega \,k} = 2\pi \sum_{n = -\infty}^{\infty} \delta((\Omega-\Omega_0) - 2\,\pi\,n)
\end{equation}
The spectrum of the DTFT is $2\pi$-periodic due to sampling. As a consequence, the transformation of the discrete exponential signal consists of a series Dirac impulses. For the region of interest $-\pi < \Omega \leq \pi$ the spectrum consists of a clearly isolated and distinguishable event, as for the continuous case.
The DTFT cannot be realized in practice, since is requires the knowledge of the signal $x[k]$ for all time instants $k$. In general, a measured signal is only known within a finite time-interval. The DFT of a signal of finite length can be derived from the DTFT in two steps:
1. truncation (windowing) of the signal and
2. sampling of the DTFT spectrum of the windowed signal.
The consequences of these two steps are investigated in the following.
### Discrete-Time Fourier Transformation of a Truncated Discrete Exponential Signal
In general, truncation of a signal $x[k]$ to a length of $N$ samples is modeled by multiplying the signal with a window function $w[k]$ of length $N$
\begin{equation}
x_N[k] = x[k] \cdot w[k]
\end{equation}
where $x_N[k]$ denotes the truncated signal and $w[k] = 0$ for $\{k: k < 0 \wedge k \geq N \}$. The spectrum $X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega})$ can be derived from the multiplication theorem of the DTFT as
\begin{equation}
X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega}) = \frac{1}{2 \pi} X(\mathrm{e}^{\,\mathrm{j}\, \Omega}) \circledast_N W(\mathrm{e}^{\,\mathrm{j}\, \Omega})
\end{equation}
where $\circledast$ denotes the cyclic/[circular convolution](https://en.wikipedia.org/wiki/Circular_convolution) of length $N$. A hard truncation of the signal to $N$ samples is modeled by the rectangular signal
\begin{equation}
w[k] = \text{rect}_N[k] = \begin{cases}
1 & \mathrm{for} \; 0\leq k<N \\
0 & \mathrm{otherwise}
\end{cases}
\end{equation}
Its spectrum is given as
\begin{equation}
W(\mathrm{e}^{\,\mathrm{j}\, \Omega}) = \mathrm{e}^{\,-\mathrm{j} \, \Omega \,\frac{N-1}{2}} \cdot \frac{\sin(\frac{N \,\Omega}{2})}{\sin(\frac{\Omega}{2})}
\end{equation}
The DTFT $X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega})$ of the truncated exponential signal is derived by introducing the DTFT of the exponential signal and the window function, exploiting the properties of the Dirac impulse and the cyclic convolution as
\begin{equation}
X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega}) = \mathrm{e}^{\,-\mathrm{j}\, (\Omega-\Omega_0) \, \frac{N-1}{2}} \cdot \frac{\sin(\frac{N\, (\Omega-\Omega_0)}{2})}{\sin(\frac{(\Omega-\Omega_0)}{2})}
\end{equation}
Clearly the DTFT of the truncated harmonic exponential signal $x_N[k]$ is not given by a series of Dirac impulses. The above equation is evaluated numerically in order to illustrate the properties of $X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega})$.
```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
Om0 = 1 # frequency of exponential signal
N = 32 # length of signal
# DTFT of finite length exponential signal (analytic)
Om = np.linspace(-np.pi, np.pi, num=1024)
XN = np.exp(-1j * (Om-Om0) * (N-1) / 2) * (np.sin(N * (Om-Om0) / 2)) / (np.sin((Om-Om0) / 2))
# plot spectrum
plt.figure(figsize = (10, 8))
plt.plot(Om, abs(XN), 'r')
plt.title(r'Absolute value of the DTFT of a truncated exponential signal $e^{j \Omega_0 k}$ with $\Omega_0=$%2.2f' %Om0)
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$|X_N(e^{j \Omega})|$')
plt.axis([-np.pi, np.pi, -0.5, N+5])
plt.grid()
```
**Exercise**
* Change the frequency `Om0` of the signal and rerun the example. How does the magnitude spectrum change?
* Change the length `N` of the signal and rerun the example. How does the magnitude spectrum change?
The maximum of the absolute value of the spectrum is located at the frequency $\Omega_0$. It should become clear that truncation of the exponential signal leads to a broadening of the spectrum. The shorter the signal, the wider the mainlobe becomes.
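As a quick numerical illustration of this statement, we can overlay the normalized DTFT magnitude $|X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega})|/N$ for two different signal lengths (the normalization is only there so that the peak heights coincide):
```python
Om0 = 1
Om = np.linspace(-np.pi, np.pi, num=1024)
plt.figure(figsize=(10, 4))
for N in (16, 64):
    # analytic DTFT of the truncated exponential, as derived above
    XN = np.exp(-1j*(Om-Om0)*(N-1)/2) * np.sin(N*(Om-Om0)/2) / np.sin((Om-Om0)/2)
    plt.plot(Om, np.abs(XN)/N, label=r'$N=%d$ (normalized)' % N)
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$|X_N(e^{j \Omega})| / N$')
plt.legend()
plt.grid()
```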
### The Leakage Effect of the Discrete Fourier Transformation
The DFT is derived from the DTFT $X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega})$ of the truncated signal $x_N[k]$ by sampling the DTFT equidistantly at $\Omega = \mu \frac{2 \pi}{N}$
\begin{equation}
X[\mu] = X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega})\big\vert_{\Omega = \mu \frac{2 \pi}{N}}
\end{equation}
For the DFT of the exponential signal we finally get
\begin{equation}
X[\mu] = \mathrm{e}^{\,\mathrm{j}\, (\Omega_0 - \mu \frac{2 \pi}{N}) \frac{N-1}{2}} \cdot \frac{\sin(\frac{N \,(\Omega_0 - \mu \frac{2 \pi}{N})}{2})}{\sin(\frac{\Omega_0 - \mu \frac{2 \pi}{N}}{2})}
\end{equation}
The sampling of the DTFT is illustrated in the following example. Note that the normalized angular frequency $\Omega_0$ has been expressed in terms of the periodicity $P$ of the exponential signal $\Omega_0 = P \; \frac{2\pi}{N}$.
```python
N = 32 # length of the signal
P = 10.33 # periodicity of the exponential signal
Om0 = P * (2*np.pi/N) # frequency of exponential signal
# truncated exponential signal
x = np.exp(1j*Om0*np.arange(N))
# DTFT of finite length exponential signal (analytic)
Om = np.linspace(0, 2*np.pi, num=1024)
Xw = np.exp(-1j*(Om-Om0)*(N-1)/2)*(np.sin(N*(Om-Om0)/2))/(np.sin((Om-Om0)/2))
# DFT of the exponential signal by FFT
X = np.fft.fft(x)
mu = np.arange(N) * 2*np.pi/N
# plot spectra
plt.figure(figsize = (10, 8))
plt.plot(Om, abs(Xw), 'r', label=r'$X_N(e^{j \Omega})$')
plt.stem(mu, abs(X), label=r'$X_N[\mu]$', basefmt=' ')
plt.title(r'Absolute value of the DTFT/DFT of a truncated exponential signal $e^{j \Omega_0 k}$ with $\Omega_0=$%2.2f' %Om0)
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$|X_N(e^{j \Omega})|$, $|X[\mu]|$')
plt.axis([0, 2*np.pi, -0.5, N+5]);
plt.legend()
plt.grid()
plt.show()
```
**Exercise**
* Change the periodicity `P` of the exponential signal and rerun the example. What happens if the periodicity is an integer? Why?
* Change the length `N` of the DFT? How does the spectrum change?
* What conclusions can be drawn for the analysis of a single exponential signal by the DFT?
You should have noticed that for an exponential signal whose periodicity is an integer $P \in \mathbb{N}$, the DFT consists of a single discrete Dirac impulse $X[\mu] = N \, \delta[\mu - P]$. In this case, the sampling points coincide with the maximum of the main lobe or the zeros of the DTFT. For non-integer $P$, hence non-periodic exponential signals with respect to the signal length $N$, the DFT has additional contributions. The shorter the length $N$, the wider these contributions are spread in the spectrum. This smearing effect is known as *leakage effect* of the DFT. It limits the achievable frequency resolution of the DFT when analyzing signal mixtures consisting of more than one exponential signal. This is illustrated by the following numerical examples.
### Analysis of Signal Mixtures by the Discrete Fourier Transformation
In order to discuss the implications of the leakage effect when analyzing signal mixtures, the superposition of two exponential signals with different amplitudes and frequencies is considered
\begin{equation}
x_N[k] = A_1 \cdot e^{\mathrm{j} \Omega_1 k} + A_2 \cdot e^{\mathrm{j} \Omega_2 k}
\end{equation}
For convenience, a function is defined that calculates and plots the magnitude spectrum of $x_N[k]$.
```python
def dft_signal_mixture(N, A1, P1, A2, P2):
# N: length of signal/DFT
# A1, P1, A2, P2: amplitude and periodicity of 1st/2nd complex exponential
# generate the signal mixture
Om0_1 = P1 * (2*np.pi/N) # frequency of 1st exponential signal
Om0_2 = P2 * (2*np.pi/N) # frequency of 2nd exponential signal
k = np.arange(N)
x = A1 * np.exp(1j*Om0_1*k) + A2 * np.exp(1j*Om0_2*k)
# DFT of the signal mixture
mu = np.arange(N)
X = np.fft.fft(x)
# plot spectrum
plt.figure(figsize = (10, 8))
plt.stem(mu, abs(X), basefmt=' ')
plt.title(r'Absolute value of the DFT of a signal mixture')
plt.xlabel(r'$\mu$')
plt.ylabel(r'$|X[\mu]|$')
plt.axis([0, N, -0.5, N+5]);
plt.grid()
```
Let's first consider the case where the frequencies of the two exponentials are rather far apart
```python
dft_signal_mixture(32, 1, 10.3, 1, 15.2)
```
Investigating the magnitude spectrum, one could conclude that the signal consists of two major contributions at the frequencies $\mu_1 = 10$ and $\mu_2 = 15$. Now let's take a look at a situation where the frequencies are closer together
```python
dft_signal_mixture(32, 1, 10.3, 1, 10.9)
```
From visual inspection of the spectrum it is rather unclear if the mixture consists of one or two exponential signals. So far the levels of both signals were chosen equal.
Let's consider the case where the second signal has a much lower level than the first one. The frequencies have been chosen equal to the first example
```python
dft_signal_mixture(32, 1, 10.3, 0.1, 15.2)
```
Now the contribution of the second exponential is almost hidden in the spread spectrum of the first exponential. From these examples it should have become clear that the leakage effect limits the spectral resolution of the DFT.
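For comparison, running the second example again with a four times longer signal (and the periodicities rescaled by the same factor, so that the physical frequencies $\Omega_1$ and $\Omega_2$ stay the same) clearly separates the two contributions:
```python
# Longer observation window -> narrower mainlobes -> better resolution
dft_signal_mixture(128, 1, 4*10.3, 1, 4*10.9)
```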
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples, 2016-2017*.
| c81e740d6551808b9fa97c594a79b3caf62089f7 | 154,292 | ipynb | Jupyter Notebook | spectral_analysis_deterministic_signals/leakage_effect.ipynb | swchao/digitalSignalProcessingLecture | 89acae62ea710211014912d61a461ca8a3d6d713 | [
"MIT"
]
| null | null | null | spectral_analysis_deterministic_signals/leakage_effect.ipynb | swchao/digitalSignalProcessingLecture | 89acae62ea710211014912d61a461ca8a3d6d713 | [
"MIT"
]
| null | null | null | spectral_analysis_deterministic_signals/leakage_effect.ipynb | swchao/digitalSignalProcessingLecture | 89acae62ea710211014912d61a461ca8a3d6d713 | [
"MIT"
]
| 1 | 2019-05-09T04:10:31.000Z | 2019-05-09T04:10:31.000Z | 342.871111 | 41,428 | 0.912316 | true | 3,688 | Qwen/Qwen-72B | 1. YES
2. YES | 0.7773 | 0.857768 | 0.666743 | __label__eng_Latn | 0.980012 | 0.387398 |
```python
!pip install pandas
import sympy as sym
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
sym.init_printing()
```
Requirement already satisfied: pandas in c:\users\usuario\.conda\envs\sistdin\lib\site-packages (0.23.4)
Requirement already satisfied: pytz>=2011k in c:\users\usuario\.conda\envs\sistdin\lib\site-packages (from pandas) (2021.1)
Requirement already satisfied: numpy>=1.9.0 in c:\users\usuario\.conda\envs\sistdin\lib\site-packages (from pandas) (1.16.4)
Requirement already satisfied: python-dateutil>=2.5.0 in c:\users\usuario\.conda\envs\sistdin\lib\site-packages (from pandas) (2.8.2)
Requirement already satisfied: six>=1.5 in c:\users\usuario\.conda\envs\sistdin\lib\site-packages (from python-dateutil>=2.5.0->pandas) (1.16.0)
## Correlation
The correlation between the signals $f(t)$ and $g(t)$ is an operation that indicates how similar the two signals are to each other.
\begin{equation}
(f \; \circ \; g)(\tau) = h(\tau) = \int_{-\infty}^{\infty} f(t) \cdot g(t + \tau) \; dt
\end{equation}
Observe that correlation and convolution have similar structures.
\begin{equation}
f(t) * g(t) = \int_{-\infty}^{\infty} f(\tau) \cdot g(t - \tau) \; d\tau
\end{equation}
## Periodic signals
The signal $y(t)$ is periodic if it satisfies the condition $y(t+nT)=y(t)$ for every integer $n$. In this case, $T$ is the period of the signal.
The sine wave is the purest oscillation that can be expressed mathematically. This signal arises when considering the projection of uniform circular motion.
## Fourier series
If a set of pure oscillations is combined appropriately, as linear combinations of signals shifted and scaled in time and amplitude, any periodic signal could be recreated. This idea gives rise to the Fourier series.
\begin{equation}
y(t) = \sum_{n=0}^{\infty} C_n \cdot cos(n \omega_0 t - \phi_n)
\end{equation}
The signal $y(t)$ equals a combination of infinitely many cosine signals, each with an amplitude $C_n$, a frequency $n \omega_0$ and a phase shift $\phi_n$.
It can also be expressed as:
\begin{equation}
y(t) = \sum_{n=0}^{\infty} A_n \cdot cos(n \omega_0 t) + B_n \cdot sin(n \omega_0 t)
\end{equation}
The series is fully defined once the appropriate values of $A_n$ and $B_n$ are found for every $n$.
Observe that:
- $A_n$ should be larger if $y(t)$ "looks" more like a cosine.
- $B_n$ should be larger if $y(t)$ "looks" more like a sine.
\begin{equation}
y(t) = \sum_{n=0}^{\infty} A_n \cdot cos(n \omega_0 t) + B_n \cdot sin(n \omega_0 t)
\end{equation}
\begin{equation}
(f \; \circ \; g)(\tau) = \int_{-\infty}^{\infty} f(t) \cdot g(t + \tau) \; dt
\end{equation}
\begin{equation}
(y \; \circ \; sin_n)(\tau) = \int_{-\infty}^{\infty} y(t) \cdot sin(n \omega_0(t + \tau)) \; dt
\end{equation}
Considering that:
- $\tau=0$, so that no phase shift is included.
- the signal $y(t)$ is periodic with period $T$.
\begin{equation}
(y \; \circ \; sin_n)(0) = \frac{1}{T} \int_{0}^{T} y(t) \cdot sin(n \omega_0 t) \; dt
\end{equation}
This expression can be interpreted as the similarity of a signal $y(t)$ to the sine of frequency $n \omega_0$, averaged over one period, with no phase shift of the sine.
Going back to the initial idea
\begin{equation}
y(t) = \sum_{n=0}^{\infty} A_n \cdot cos(n \omega_0 t) + B_n \cdot sin(n \omega_0 t)
\end{equation}
where
\begin{equation}
A_n = \frac{1}{T} \int_{0}^{T} y(t) \cdot cos(n \omega_0 t) \; dt
\end{equation}
\begin{equation}
B_n = \frac{1}{T} \int_{0}^{T} y(t) \cdot sin(n \omega_0 t) \; dt
\end{equation}
The student is encouraged to find the relationship between the series above and the following alternative representation of the Fourier series.
\begin{equation}
y(t) = \sum_{n=-\infty}^{\infty} C_n \cdot e^{j n \omega_0 t}
\end{equation}
where
\begin{equation}
C_n = \frac{1}{T} \int_{0}^{T} y(t) \cdot e^{-j n \omega_0 t} \; dt
\end{equation}
The values $C_n$ are the spectrum of the periodic signal $y(t)$ and are a representation in the frequency domain.
**Example # 1**
The signal $y(t) = sin(2 \pi t)$ is itself a pure oscillation with period $T=1$ (in the cell below you can choose this or other definitions of $y(t)$; the current choice is $y = t$).
```python
# Define the signal y(t) over one period (y = t here; other options are commented out below)
t = sym.symbols('t', real=True)
#T = sym.symbols('T', real=True)
T = 1
nw = sym.symbols('n', real=True)
delta = sym.DiracDelta(nw)
w0 = 2 * sym.pi / T
y = t
# y = 4*sym.sin(w0*t + 0.5) - 10
# y = sym.sin(w0*t)
# y = (t-0.5)*(t-0.5)
y
```
Although the Fourier series summation includes infinitely many terms, only the components up to **n_max** = 3 will be taken.
```python
n_max = 3
y_ser = 0
C = 0
ns = range(-n_max,n_max+1)
espectro = pd.DataFrame(index = ns,
columns= ['C','C_np','C_real','C_imag','C_mag','C_ang'])
for n in espectro.index:
C_n = (1/T)*sym.integrate(y*sym.exp(-1j*n*w0*t), (t,0,T)).evalf()
C = C + C_n*delta.subs(nw,nw-n)
y_ser = y_ser + C_n*sym.exp(1j*n*w0*t)
espectro['C'][n]=C_n
C_r = float(sym.re(C_n))
C_i = float(sym.im(C_n))
espectro['C_real'][n] = C_r
espectro['C_imag'][n] = C_i
espectro['C_np'][n] = complex(C_r + 1j*C_i)
espectro['C_mag'][n] = np.absolute(espectro['C_np'][n])
espectro['C_ang'][n] = np.angle(espectro['C_np'][n])
espectro
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>C</th>
<th>C_np</th>
<th>C_real</th>
<th>C_imag</th>
<th>C_mag</th>
<th>C_ang</th>
</tr>
</thead>
<tbody>
<tr>
<th>-3</th>
<td>-6.61744490042422e-24 - 0.0530516476972984*I</td>
<td>(-6.617444900424222e-24-0.05305164769729844j)</td>
<td>-6.61744e-24</td>
<td>-0.0530516</td>
<td>0.0530516</td>
<td>-1.5708</td>
</tr>
<tr>
<th>-2</th>
<td>2.64697796016969e-23 - 0.0795774715459477*I</td>
<td>(2.6469779601696886e-23-0.07957747154594767j)</td>
<td>2.64698e-23</td>
<td>-0.0795775</td>
<td>0.0795775</td>
<td>-1.5708</td>
</tr>
<tr>
<th>-1</th>
<td>1.05879118406788e-22 - 0.159154943091895*I</td>
<td>(1.0587911840678754e-22-0.15915494309189535j)</td>
<td>1.05879e-22</td>
<td>-0.159155</td>
<td>0.159155</td>
<td>-1.5708</td>
</tr>
<tr>
<th>0</th>
<td>0.500000000000000</td>
<td>(0.5+0j)</td>
<td>0.5</td>
<td>0</td>
<td>0.5</td>
<td>0</td>
</tr>
<tr>
<th>1</th>
<td>1.05879118406788e-22 + 0.159154943091895*I</td>
<td>(1.0587911840678754e-22+0.15915494309189535j)</td>
<td>1.05879e-22</td>
<td>0.159155</td>
<td>0.159155</td>
<td>1.5708</td>
</tr>
<tr>
<th>2</th>
<td>2.64697796016969e-23 + 0.0795774715459477*I</td>
<td>(2.6469779601696886e-23+0.07957747154594767j)</td>
<td>2.64698e-23</td>
<td>0.0795775</td>
<td>0.0795775</td>
<td>1.5708</td>
</tr>
<tr>
<th>3</th>
<td>-6.61744490042422e-24 + 0.0530516476972984*I</td>
<td>(-6.617444900424222e-24+0.05305164769729844j)</td>
<td>-6.61744e-24</td>
<td>0.0530516</td>
<td>0.0530516</td>
<td>1.5708</td>
</tr>
</tbody>
</table>
</div>
The signal reconstructed with **n_max** components
```python
y_ser
```
```python
plt.rcParams['figure.figsize'] = 7, 2
#g1 = sym.plot(y, (t,0,1), ylabel=r'Amp',show=False,line_color='blue',legend=True, label = 'y(t) original')
#g2 = sym.plot(sym.re(y_ser), (t,-1,2), ylabel=r'Amp',show=False,line_color='red',legend=True, label = 'y(t) reconstruida')
g1 = sym.plot(y, (t,0,1), ylabel=r'Amp',show=False,line_color='blue')
g2 = sym.plot(sym.re(y_ser), (t,-1,2), ylabel=r'Amp',show=False,line_color='red')
g1.extend(g2)
g1.show()
```
```python
C
```
```python
plt.rcParams['figure.figsize'] = 7, 4
plt.stem(espectro.index,espectro['C_mag'])
```
**Exercise**
Use the following functions to define one period of a periodic signal with period $T=1$:
\begin{equation}
y_1(t) = \begin{cases}
-1 & 0 \leq t < 0.5 \\
1 & 0.5 \leq t < 1
\end{cases}
\end{equation}
\begin{equation}
y_2(t) = t
\end{equation}
\begin{equation}
y_3(t) = 3 sin(2 \pi t)
\end{equation}
Vary the number of components used to reconstruct each function and analyse the resulting reconstruction and the values of $C_n$.
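Hint (not part of the original exercise statement): the piecewise signal $y_1(t)$ can be defined with `sym.Piecewise` and substituted for `y` in the cells above:
```python
# One period of y1(t): -1 on [0, 0.5), +1 on [0.5, 1)
y1 = sym.Piecewise((-1, t < 0.5), (1, t >= 0.5))
y1
```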
```python
```
| 0f9633d7f5e808e9f403142ad9e3b65715203e68 | 69,645 | ipynb | Jupyter Notebook | .ipynb_checkpoints/04_Series_de_Fourier-checkpoint.ipynb | pierrediazp/Se-ales_y_Sistemas | b14bdaf814b0643589660078ddd39b5cdf86b659 | [
"MIT"
]
| null | null | null | .ipynb_checkpoints/04_Series_de_Fourier-checkpoint.ipynb | pierrediazp/Se-ales_y_Sistemas | b14bdaf814b0643589660078ddd39b5cdf86b659 | [
"MIT"
]
| null | null | null | .ipynb_checkpoints/04_Series_de_Fourier-checkpoint.ipynb | pierrediazp/Se-ales_y_Sistemas | b14bdaf814b0643589660078ddd39b5cdf86b659 | [
"MIT"
]
| null | null | null | 111.969453 | 16,768 | 0.823505 | true | 3,267 | Qwen/Qwen-72B | 1. YES
2. YES | 0.800692 | 0.845942 | 0.677339 | __label__spa_Latn | 0.420127 | 0.412017 |
# Van der Pol oscillator
We will look at the second order differentual equation (see https://en.wikipedia.org/wiki/Van_der_Pol_oscillator):
$$
{d^2y_0 \over dx^2}-\mu(1-y_0^2){dy_0 \over dx}+y_0= 0
$$
```python
from __future__ import division, print_function
import itertools
import numpy as np
import sympy as sp
import matplotlib.pyplot as plt
from pyodesys.symbolic import SymbolicSys
sp.init_printing()
%matplotlib inline
print(sp.__version__)
```
One way to reduce the order of our second order differential equation is to formulate a system of first order ODEs, using:
$$ y_1 = \dot y_0 $$
which gives us:
$$
\begin{cases}
\dot y_0 = y_1 \\
\dot y_1 = \mu(1-y_0^2) y_1-y_0
\end{cases}
$$
Let's call this system of ordinary differential equations vdp1:
```python
vdp1 = lambda x, y, p: [y[1], -y[0] + p[0]*y[1]*(1 - y[0]**2)]
```
```python
y0 = [0, 1]
mu = 2.5
tend = 25
```
```python
odesys1 = SymbolicSys.from_callback(vdp1, 2, 1, names='y0 y1'.split())
odesys1.exprs
```
```python
# Let us plot using 30 data points
res1 = odesys1.integrate(np.linspace(0, tend, 20), y0, [mu], name='vode')
res1.plot()
print(res1.yout.shape)
```
```python
# Let us interpolate between data points
res2 = odesys1.integrate(np.linspace(0, tend, 20), y0, [mu], integrator='cvode', nderiv=1)
res2.plot(m_lim=21)
print(res2.yout.shape)
```
```python
odesys1.integrate(np.linspace(0, tend, 20), y0, [mu], integrator='cvode', nderiv=2)
xplt, yplt = odesys1.plot_result(m_lim=21, interpolate=30)
print(odesys1._internal[1].shape, yplt.shape)
```
Equidistant points are not optimal for plotting this function. Using the ``roots`` kwarg we can make the solver report the output where either the function value, its first or its second derivative is zero.
```python
odesys2 = SymbolicSys.from_other(odesys1, roots=odesys1.exprs + (odesys1.dep[0],))
# We could also add a higher derivative: tuple(odesys1.get_jac().dot(odesys1.exprs)))
```
```python
# Let us plot using 10 data points
res2 = odesys2.integrate(np.linspace(0, tend, 20), y0, [mu], integrator='cvode',
nderiv=1, atol=1e-4, rtol=1e-4)
xout, yout, info = res2
xplt, yplt = odesys2.plot_result(m_lim=21, interpolate=30, indices=[0])
xroots, yroots = info['roots_output'][0], info['roots_output'][1][:, 0]
plt.plot(xroots, yroots, 'bd')
print(odesys2._internal[1].shape, yplt.shape, xroots.size)
```
```python
odesys2.roots
```
```python
res2.plot(indices=[0])
plt.plot(xplt, [res2.at(_)[0][0, 0] for _ in xplt])
```
```python
res1.plot(indices=[0])
plt.plot(xplt, [res1.at(_, use_deriv=True)[0][0] for _ in xplt])
plt.plot(xplt, [res1.at(_, use_deriv=False)[0][0] for _ in xplt])
```
| e6336ccdcfe2988e4d81ef075dbd84dc2ce922c3 | 5,108 | ipynb | Jupyter Notebook | examples/van_der_pol_interpolation.ipynb | slayoo/pyodesys | ["BSD-2-Clause"] | 82 stars | 28 issues | 13 forks |
# Chapter 3
`Original content created by Cam Davidson-Pilon`
`Ported to Python 3 and PyMC3 by Max Margenot (@clean_utensils) and Thomas Wiecki (@twiecki) at Quantopian (@quantopian)`
____
## Opening the black box of MCMC
The previous two chapters hid the inner-mechanics of PyMC3, and more generally Markov Chain Monte Carlo (MCMC), from the reader. The reason for including this chapter is three-fold. The first is that any book on Bayesian inference must discuss MCMC. I cannot fight this. Blame the statisticians. Secondly, knowing the process of MCMC gives you insight into whether your algorithm has converged. (Converged to what? We will get to that) Thirdly, we'll understand *why* we are returned thousands of samples from the posterior as a solution, which at first thought can be odd.
### The Bayesian landscape
When we setup a Bayesian inference problem with $N$ unknowns, we are implicitly creating an $N$ dimensional space for the prior distributions to exist in. Associated with the space is an additional dimension, which we can describe as the *surface*, or *curve*, that sits on top of the space, that reflects the *prior probability* of a particular point. The surface on the space is defined by our prior distributions. For example, if we have two unknowns $p_1$ and $p_2$, and priors for both are $\text{Uniform}(0,5)$, the space created is a square of length 5 and the surface is a flat plane that sits on top of the square (representing that every point is equally likely).
```python
%matplotlib inline
import scipy.stats as stats
from IPython.core.pylabtools import figsize
import numpy as np
figsize(12.5, 4)
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
jet = plt.cm.jet
fig = plt.figure()
x = y = np.linspace(0, 5, 100)
X, Y = np.meshgrid(x, y)
plt.subplot(121)
uni_x = stats.uniform.pdf(x, loc=0, scale=5)
uni_y = stats.uniform.pdf(y, loc=0, scale=5)
M = np.dot(uni_x[:, None], uni_y[None, :])
im = plt.imshow(M, interpolation='none', origin='lower',
cmap=jet, vmax=1, vmin=-.15, extent=(0, 5, 0, 5))
plt.xlim(0, 5)
plt.ylim(0, 5)
plt.title("Landscape formed by Uniform priors.")
ax = fig.add_subplot(122, projection='3d')
ax.plot_surface(X, Y, M, cmap=plt.cm.jet, vmax=1, vmin=-.15)
ax.view_init(azim=390)
plt.title("Uniform prior landscape; alternate view");
```
Alternatively, if the two priors are $\text{Exp}(3)$ and $\text{Exp}(10)$, then the space is all positive numbers on the 2-D plane, and the surface induced by the priors looks like a water fall that starts at the point (0,0) and flows over the positive numbers.
The plots below visualize this. The more dark red the color, the more prior probability is assigned to that location. Conversely, areas with darker blue represent that our priors assign very low probability to that location.
```python
figsize(12.5, 5)
fig = plt.figure()
plt.subplot(121)
exp_x = stats.expon.pdf(x, scale=3)
exp_y = stats.expon.pdf(x, scale=10)
M = np.dot(exp_x[:, None], exp_y[None, :])
CS = plt.contour(X, Y, M)
im = plt.imshow(M, interpolation='none', origin='lower',
cmap=jet, extent=(0, 5, 0, 5))
#plt.xlabel("prior on $p_1$")
#plt.ylabel("prior on $p_2$")
plt.title("$Exp(3), Exp(10)$ prior landscape")
ax = fig.add_subplot(122, projection='3d')
ax.plot_surface(X, Y, M, cmap=jet)
ax.view_init(azim=390)
plt.title("$Exp(3), Exp(10)$ prior landscape; \nalternate view");
```
These are simple examples in 2D space, where our brains can understand surfaces well. In practice, spaces and surfaces generated by our priors can be much higher dimensional.
If these surfaces describe our *prior distributions* on the unknowns, what happens to our space after we incorporate our observed data $X$? The data $X$ does not change the space, but it changes the surface of the space by *pulling and stretching the fabric of the prior surface* to reflect where the true parameters likely live. More data means more pulling and stretching, and our original shape becomes mangled or insignificant compared to the newly formed shape. Less data, and our original shape is more present. Regardless, the resulting surface describes the *posterior distribution*.
Again I must stress that it is, unfortunately, impossible to visualize this in large dimensions. For two dimensions, the data essentially *pushes up* the original surface to make *tall mountains*. The tendency of the observed data to *push up* the posterior probability in certain areas is checked by the prior probability distribution, so that less prior probability means more resistance. Thus in the double-exponential prior case above, a mountain (or multiple mountains) that might erupt near the (0,0) corner would be much higher than mountains that erupt closer to (5,5), since there is more resistance (low prior probability) near (5,5). The peak reflects the posterior probability of where the true parameters are likely to be found. Importantly, if the prior has assigned a probability of 0, then no posterior probability will be assigned there.
Suppose the priors mentioned above represent different parameters $\lambda$ of two Poisson distributions. We observe a few data points and visualize the new landscape:
```python
# create the observed data
# sample size of data we observe, trying varying this (keep it less than 100 ;)
N = 1
# the true parameters, but of course we do not see these values...
lambda_1_true = 1
lambda_2_true = 3
#...we see the data generated, dependent on the above two values.
data = np.concatenate([
stats.poisson.rvs(lambda_1_true, size=(N, 1)),
stats.poisson.rvs(lambda_2_true, size=(N, 1))
], axis=1)
print("observed (2-dimensional,sample size = %d):" % N, data)
# plotting details.
x = y = np.linspace(.01, 5, 100)
likelihood_x = np.array([stats.poisson.pmf(data[:, 0], _x)
for _x in x]).prod(axis=1)
likelihood_y = np.array([stats.poisson.pmf(data[:, 1], _y)
for _y in y]).prod(axis=1)
L = np.dot(likelihood_x[:, None], likelihood_y[None, :])
```
observed (2-dimensional,sample size = 1): [[1 3]]
```python
figsize(12.5, 12)
# matplotlib heavy lifting below, beware!
plt.subplot(221)
uni_x = stats.uniform.pdf(x, loc=0, scale=5)
uni_y = stats.uniform.pdf(x, loc=0, scale=5)
M = np.dot(uni_x[:, None], uni_y[None, :])
im = plt.imshow(M, interpolation='none', origin='lower',
cmap=jet, vmax=1, vmin=-.15, extent=(0, 5, 0, 5))
plt.scatter(lambda_2_true, lambda_1_true, c="k", s=50, edgecolor="none")
plt.xlim(0, 5)
plt.ylim(0, 5)
plt.title("Landscape formed by Uniform priors on $p_1, p_2$.")
plt.subplot(223)
plt.contour(x, y, M * L)
im = plt.imshow(M * L, interpolation='none', origin='lower',
cmap=jet, extent=(0, 5, 0, 5))
plt.title("Landscape warped by %d data observation;\n Uniform priors on $p_1, p_2$." % N)
plt.scatter(lambda_2_true, lambda_1_true, c="k", s=50, edgecolor="none")
plt.xlim(0, 5)
plt.ylim(0, 5)
plt.subplot(222)
exp_x = stats.expon.pdf(x, loc=0, scale=3)
exp_y = stats.expon.pdf(x, loc=0, scale=10)
M = np.dot(exp_x[:, None], exp_y[None, :])
plt.contour(x, y, M)
im = plt.imshow(M, interpolation='none', origin='lower',
cmap=jet, extent=(0, 5, 0, 5))
plt.scatter(lambda_2_true, lambda_1_true, c="k", s=50, edgecolor="none")
plt.xlim(0, 5)
plt.ylim(0, 5)
plt.title("Landscape formed by Exponential priors on $p_1, p_2$.")
plt.subplot(224)
# This is the likelihood times prior, that results in the posterior.
plt.contour(x, y, M * L)
im = plt.imshow(M * L, interpolation='none', origin='lower',
cmap=jet, extent=(0, 5, 0, 5))
plt.scatter(lambda_2_true, lambda_1_true, c="k", s=50, edgecolor="none")
plt.title("Landscape warped by %d data observation;\n Exponential priors on \
$p_1, p_2$." % N)
plt.xlim(0, 5)
plt.ylim(0, 5);
```
The plot on the left is the deformed landscape with the $\text{Uniform}(0,5)$ priors, and the plot on the right is the deformed landscape with the exponential priors. Notice that the posterior landscapes look different from one another, though the data observed is identical in both cases. The reason is as follows. Notice the exponential-prior landscape, bottom right figure, puts very little *posterior* weight on values in the upper right corner of the figure: this is because *the prior does not put much weight there*. On the other hand, the uniform-prior landscape is happy to put posterior weight in the upper-right corner, as the prior puts more weight there.
Notice also that the highest point, corresponding to the darkest red, is biased towards (0,0) in the exponential case, which results from the exponential prior putting more prior weight in the (0,0) corner.
The black dot represents the true parameters. Even with 1 sample point, the mountain attempts to contain the true parameters. Of course, inference with a sample size of 1 is incredibly naive, and choosing such a small sample size was only illustrative.
It's a great exercise to try changing the sample size to other values (try 2,5,10,100?...) and observing how our "mountain" posterior changes.
### Exploring the landscape using the MCMC
We should explore the deformed posterior space generated by our prior surface and observed data to find the posterior mountain. However, we cannot naively search the space: any computer scientist will tell you that traversing $N$-dimensional space is exponentially difficult in $N$: the size of the space quickly blows up as we increase $N$ (see [the curse of dimensionality](http://en.wikipedia.org/wiki/Curse_of_dimensionality)). What hope do we have to find these hidden mountains? The idea behind MCMC is to perform an intelligent search of the space. To say "search" implies we are looking for a particular point, which is perhaps not accurate, as we are really looking for a broad mountain.
Recall that MCMC returns *samples* from the posterior distribution, not the distribution itself. Stretching our mountainous analogy to its limit, MCMC performs a task similar to repeatedly asking "How likely is this pebble I found to be from the mountain I am searching for?", and completes its task by returning thousands of accepted pebbles in hopes of reconstructing the original mountain. In MCMC and PyMC3 lingo, the returned sequence of "pebbles" are the samples, cumulatively called the *traces*.
When I say MCMC intelligently searches, I really am saying MCMC will *hopefully* converge towards the areas of high posterior probability. MCMC does this by exploring nearby positions and moving into areas with higher probability. Again, perhaps "converge" is not an accurate term to describe MCMC's progression. Converging usually implies moving towards a point in space, but MCMC moves towards a *broader area* in the space and randomly walks in that area, picking up samples from that area.
#### Why Thousands of Samples?
At first, returning thousands of samples to the user might sound like an inefficient way to describe the posterior distributions. I would argue that this is extremely efficient. Consider the alternative possibilities:
1. Returning a mathematical formula for the "mountain ranges" would involve describing a N-dimensional surface with arbitrary peaks and valleys.
2. Returning the "peak" of the landscape, while mathematically possible and a sensible thing to do as the highest point corresponds to most probable estimate of the unknowns, ignores the shape of the landscape, which we have previously argued is very important in determining posterior confidence in unknowns.
Besides computational reasons, likely the strongest reason for returning samples is that we can easily use *The Law of Large Numbers* to solve otherwise intractable problems. I postpone this discussion for the next chapter. With the thousands of samples, we can reconstruct the posterior surface by organizing them in a histogram.
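As a tiny illustration of that last point (not from the original text): a histogram of enough samples recovers the shape of the distribution they came from, which is exactly how we will summarize the posterior "mountain". This sketch reuses the `numpy`, `scipy.stats` and `matplotlib` imports from above.
```python
# Pretend these are posterior samples; with enough of them the histogram traces out the density.
samples = np.random.exponential(scale=3., size=50000)
plt.hist(samples, bins=100, density=True, histtype="stepfilled", alpha=0.5,
         label="histogram of samples")
x_grid = np.linspace(0, 20, 200)
plt.plot(x_grid, stats.expon.pdf(x_grid, scale=3.), lw=2,
         label="true density the samples came from")
plt.legend();
```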
### Algorithms to perform MCMC
There is a large family of algorithms that perform MCMC. Most of these algorithms can be expressed at a high level as follows: (Mathematical details can be found in the appendix.)
1. Start at current position.
2. Propose moving to a new position (investigate a pebble near you).
3. Accept/Reject the new position based on the position's adherence to the data and prior distributions (ask if the pebble likely came from the mountain).
4. 1. If you accept: Move to the new position. Return to Step 1.
2. Else: Do not move to new position. Return to Step 1.
5. After a large number of iterations, return all accepted positions.
This way we move in the general direction towards the regions where the posterior distributions exist, and collect samples sparingly on the journey. Once we reach the posterior distribution, we can easily collect samples as they likely all belong to the posterior distribution.
If the current position of the MCMC algorithm is in an area of extremely low probability, which is often the case when the algorithm begins (typically at a random location in the space), the algorithm will move in positions *that are likely not from the posterior* but better than everything else nearby. Thus the first moves of the algorithm are not reflective of the posterior.
In the above algorithm's pseudocode, notice that only the current position matters (new positions are investigated only near the current position). We can describe this property as *memorylessness*, i.e. the algorithm does not care *how* it arrived at its current position, only that it is there.
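To make the pseudocode concrete, here is a bare-bones random-walk Metropolis sketch for a one-dimensional toy posterior. This is purely illustrative — it is not the sampler PyMC3 uses internally, and `log_posterior` below is a stand-in you would replace with your own un-normalized log posterior.
```python
def log_posterior(z):
    # toy "mountain": a Normal log-density with its peak at 3
    return stats.norm.logpdf(z, loc=3., scale=1.)

def metropolis(n_steps=10000, step_size=1.0, z0=-10.0):
    z = z0
    samples = []
    for _ in range(n_steps):
        proposal = z + step_size * np.random.randn()        # propose a nearby "pebble"
        log_accept_ratio = log_posterior(proposal) - log_posterior(z)
        if np.log(np.random.rand()) < log_accept_ratio:     # accept with prob min(1, ratio)
            z = proposal
        samples.append(z)                                    # rejected steps repeat the old position
    return np.array(samples)

trace_toy = metropolis()
plt.hist(trace_toy[1000:], bins=60, density=True);           # discard the first steps (burn-in)
```
Note how the early samples, starting far from the peak at $z_0 = -10$, are not representative of the posterior — exactly the burn-in behaviour described above.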
### Other approximation solutions to the posterior
Besides MCMC, there are other procedures available for determining the posterior distributions. A Laplace approximation is an approximation of the posterior using simple functions. A more advanced method is [Variational Bayes](http://en.wikipedia.org/wiki/Variational_Bayesian_methods). All three methods, Laplace Approximations, Variational Bayes, and classical MCMC have their pros and cons. We will only focus on MCMC in this book. That being said, my friend Imri Sofar likes to classify MCMC algorithms as either "they suck", or "they really suck". He classifies the particular flavour of MCMC used by PyMC3 as just *sucks* ;)
##### Example: Unsupervised Clustering using a Mixture Model
Suppose we are given the following dataset:
```python
figsize(12.5, 4)
data = np.loadtxt("data/mixture_data.csv", delimiter=",")
plt.hist(data, bins=20, color="k", histtype="stepfilled", alpha=0.8)
plt.title("Histogram of the dataset")
plt.ylim([0, None]);
print(data[:10], "...")
```
What does the data suggest? It appears the data has a bimodal form, that is, it appears to have two peaks, one near 120 and the other near 200. Perhaps there are *two clusters* within this dataset.
This dataset is a good example of the data-generation modeling technique from last chapter. We can propose *how* the data might have been created. I suggest the following data generation algorithm:
1. For each data point, choose cluster 1 with probability $p$, else choose cluster 2.
2. Draw a random variate from a Normal distribution with parameters $\mu_i$ and $\sigma_i$ where $i$ was chosen in step 1.
3. Repeat.
This algorithm would create a similar effect as the observed dataset, so we choose this as our model. Of course, we do not know $p$ or the parameters of the Normal distributions. Hence we must infer, or *learn*, these unknowns.
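A quick sketch of the three-step recipe above, with made-up parameters (this is not the code that produced `data/mixture_data.csv`; it only illustrates the generative story):
```python
p_true = 0.4
centers_true = [120, 200]
sds_true = [10, 25]
n_points = 300
cluster = np.random.binomial(1, 1 - p_true, size=n_points)    # step 1: cluster 0 w.p. p, else cluster 1
toy_data = np.random.normal(np.array(centers_true)[cluster],  # step 2: draw from the chosen Normal
                            np.array(sds_true)[cluster])
plt.hist(toy_data, bins=20, histtype="stepfilled", alpha=0.8);
```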
Denote the Normal distributions $\text{N}_0$ and $\text{N}_1$ (having variables' index start at 0 is just Pythonic). Both currently have unknown mean and standard deviation, denoted $\mu_i$ and $\sigma_i, \; i =0,1$ respectively. A specific data point can be from either $\text{N}_0$ or $\text{N}_1$, and we assume that the data point is assigned to $\text{N}_0$ with probability $p$.
An appropriate way to assign data points to clusters is to use a PyMC3 `Categorical` stochastic variable. Its parameter is a $k$-length array of probabilities that must sum to one and its `value` attribute is an integer between 0 and $k-1$ randomly chosen according to the crafted array of probabilities (in our case $k=2$). *A priori*, we do not know what the probability of assignment to cluster 1 is, so we form a uniform variable on $(0, 1)$. We call this $p_1$, so the probability of belonging to cluster 2 is therefore $p_2 = 1 - p_1$.
Unfortunately, we can't just give `[p1, p2]` to our `Categorical` variable. PyMC3 uses Theano under the hood to construct the models so we need to use `theano.tensor.stack()` to combine $p_1$ and $p_2$ into a vector that it can understand. We pass this vector into the `Categorical` variable as well as the `testval` parameter to give our variable an idea of where to start from.
```python
import pymc3 as pm
import theano.tensor as T
with pm.Model() as model:
p1 = pm.Uniform('p', 0, 1)
p2 = 1 - p1
p = T.stack([p1, p2])
assignment = pm.Categorical("assignment", p,
shape=data.shape[0],
testval=np.random.randint(0, 2, data.shape[0]))
print("prior assignment, with p = %.2f:" % p1.tag.test_value)
print(assignment.tag.test_value[:10])
```
prior assignment, with p = 0.50:
[0 1 1 0 1 1 0 1 1 0]
Looking at the above dataset, I would guess that the standard deviations of the two Normals are different. To maintain ignorance of what the standard deviations might be, we will initially model them as uniform on 0 to 100. We will include both standard deviations in our model using a single line of PyMC3 code:
sds = pm.Uniform("sds", 0, 100, shape=2)
Notice that we specified `shape=2`: we are modeling both $\sigma$s as a single PyMC3 variable. Note that this does not induce a necessary relationship between the two $\sigma$s, it is simply for succinctness.
We also need to specify priors on the centers of the clusters. The centers are really the $\mu$ parameters in these Normal distributions. Their priors can be modeled by a Normal distribution. Looking at the data, I have an idea where the two centers might be — I would guess somewhere around 120 and 190 respectively, though I am not very confident in these eyeballed estimates. Hence I will set $\mu_0 = 120, \mu_1 = 190$ and $\sigma_0 = \sigma_1 = 10$.
```python
with model:
sds = pm.Uniform("sds", 0, 100, shape=2)
centers = pm.Normal("centers",
mu=np.array([120, 190]),
sd=np.array([10, 10]),
shape=2)
center_i = pm.Deterministic('center_i', centers[assignment])
sd_i = pm.Deterministic('sd_i', sds[assignment])
# and to combine it with the observations:
observations = pm.Normal("obs", mu=center_i, sd=sd_i, observed=data)
print("Random assignments: ", assignment.tag.test_value[:4], "...")
print("Assigned center: ", center_i.tag.test_value[:4], "...")
print("Assigned standard deviation: ", sd_i.tag.test_value[:4])
```
Random assignments: [0 1 1 0] ...
Assigned center: [120. 190. 190. 120.] ...
Assigned standard deviation: [50. 50. 50. 50.]
Notice how we continue to build the model within the context of `Model()`. This automatically adds the variables that we create to our model. As long as we work within this context we will be working with the same variables that we have already defined.
Similarly, any sampling that we do within the context of `Model()` will be done only on the model whose context in which we are working. We will tell our model to explore the space that we have so far defined by defining the sampling methods, in this case `Metropolis()` for our continuous variables and `ElemwiseCategorical()` for our categorical variable. We will use these sampling methods together to explore the space by using `sample( iterations, step )`, where `iterations` is the number of steps you wish the algorithm to perform and `step` is the way in which you want to handle those steps. We use our combination of `Metropolis()` and `ElemwiseCategorical()` for the `step` and sample 25000 `iterations` below.
```python
with model:
step1 = pm.Metropolis(vars=[p, sds, centers])
step2 = pm.ElemwiseCategorical(vars=[assignment])
trace = pm.sample(25000, step=[step1, step2])
```
<ipython-input-7-d859ea8a62e5>:3: DeprecationWarning: ElemwiseCategorical is deprecated, switch to CategoricalGibbsMetropolis.
step2 = pm.ElemwiseCategorical(vars=[assignment])
Multiprocess sampling (2 chains in 2 jobs)
CompoundStep
>CompoundStep
>>Metropolis: [centers]
>>Metropolis: [sds]
>>Metropolis: [p]
>ElemwiseCategorical: [assignment]
Sampling 2 chains, 0 divergences: 100%|██████████| 51000/51000 [06:42<00:00, 126.79draws/s]
The number of effective samples is smaller than 10% for some parameters.
We have stored the paths of all our variables, or "traces", in the `trace` variable. These paths are the routes the unknown parameters (centers, precisions, and $p$) have taken thus far. The individual path of each variable is indexed by the PyMC3 variable `name` that we gave that variable when defining it within our model. For example, `trace["sds"]` will return a `numpy array` object that we can then index and slice as we would any other `numpy array` object.
```python
figsize(12.5, 9)
plt.subplot(311)
lw = 1
center_trace = trace["centers"]
# for pretty colors later in the book.
colors = ["#348ABD", "#A60628"] if center_trace[-1, 0] > center_trace[-1, 1] \
else ["#A60628", "#348ABD"]
plt.plot(center_trace[:, 0], label="trace of center 0", c=colors[0], lw=lw)
plt.plot(center_trace[:, 1], label="trace of center 1", c=colors[1], lw=lw)
plt.title("Traces of unknown parameters")
leg = plt.legend(loc="upper right")
leg.get_frame().set_alpha(0.7)
plt.subplot(312)
std_trace = trace["sds"]
plt.plot(std_trace[:, 0], label="trace of standard deviation of cluster 0",
c=colors[0], lw=lw)
plt.plot(std_trace[:, 1], label="trace of standard deviation of cluster 1",
c=colors[1], lw=lw)
plt.legend(loc="upper left")
plt.subplot(313)
p_trace = trace["p"]
plt.plot(p_trace, label="$p$: frequency of assignment to cluster 0",
color=colors[0], lw=lw)
plt.xlabel("Steps")
plt.ylim(0, 1)
plt.legend();
```
Notice the following characteristics:
1. The traces converge, not to a single point, but to a *distribution* of possible points. This is what *convergence* means in an MCMC algorithm.
2. Inference using the first few thousand points is a bad idea, as they are unrelated to the final distribution we are interested in. Thus it is a good idea to discard those samples before using them for inference. We call this period before convergence the *burn-in period*.
3. The traces appear as a random "walk" around the space, that is, the paths exhibit correlation with previous positions. This is both good and bad. We will always have correlation between current positions and the previous positions, but too much of it means we are not exploring the space well. This will be detailed in the Diagnostics section later in this chapter.
To achieve further convergence, we will perform more MCMC steps. In the pseudo-code algorithm of MCMC above, the only position that matters is the current position (new positions are investigated near the current position), implicitly stored as part of the `trace` object. To continue where we left off, we pass the `trace` that we have already stored into the `sample()` function with the same step value. The values that we have already calculated will not be overwritten. This ensures that our sampling continues where it left off in the same way that it left off.
We will sample the MCMC fifty thousand more times and visualize the progress below:
```python
with model:
trace = pm.sample(50000, step=[step1, step2], trace=trace)
```
Multiprocess sampling (2 chains in 2 jobs)
CompoundStep
>CompoundStep
>>Metropolis: [centers]
>>Metropolis: [sds]
>>Metropolis: [p]
>ElemwiseCategorical: [assignment]
Sampling 2 chains, 0 divergences: 100%|██████████| 101000/101000 [13:09<00:00, 127.88draws/s]
The number of effective samples is smaller than 10% for some parameters.
```python
x1 = np.arange(25000, 75000)
center_trace.shape
```
(125000, 2)
```python
figsize(12.5, 4)
center_trace = trace["centers"][25000:]
prev_center_trace = trace["centers"][:25000]
x = np.arange(25000)
plt.plot(x, prev_center_trace[:, 0], label="previous trace of center 0",
lw=lw, alpha=0.4, c=colors[1])
plt.plot(x, prev_center_trace[:, 1], label="previous trace of center 1",
lw=lw, alpha=0.4, c=colors[0])
x = np.arange(25000, 150000)
plt.plot(x, center_trace[:, 0], label="new trace of center 0", lw=lw, c="#348ABD")
plt.plot(x, center_trace[:, 1], label="new trace of center 1", lw=lw, c="#A60628")
plt.title("Traces of unknown center parameters")
leg = plt.legend(loc="upper right")
leg.get_frame().set_alpha(0.8)
plt.xlabel("Steps");
```
#### Cluster Investigation
We have not forgotten our main challenge: identify the clusters. We have determined posterior distributions for our unknowns. We plot the posterior distributions of the center and standard deviation variables below:
```python
figsize(11.0, 4)
std_trace = trace["sds"][25000:]
prev_std_trace = trace["sds"][:25000]
_i = [1, 2, 3, 4]
for i in range(2):
plt.subplot(2, 2, _i[2 * i])
plt.title("Posterior of center of cluster %d" % i)
plt.hist(center_trace[:, i], color=colors[i], bins=30,
histtype="stepfilled")
plt.subplot(2, 2, _i[2 * i + 1])
plt.title("Posterior of standard deviation of cluster %d" % i)
plt.hist(std_trace[:, i], color=colors[i], bins=30,
histtype="stepfilled")
# plt.autoscale(tight=True)
plt.tight_layout()
```
The MCMC algorithm has proposed that the most likely centers of the two clusters are near 120 and 200 respectively. Similar inference can be applied to the standard deviation.
We are also given the posterior distributions for the labels of the data point, which is present in `trace["assignment"]`. Below is a visualization of this. The y-axis represents a subsample of the posterior labels for each data point. The x-axis are the sorted values of the data points. A red square is an assignment to cluster 1, and a blue square is an assignment to cluster 0.
```python
import matplotlib as mpl
figsize(12.5, 4.5)
plt.cmap = mpl.colors.ListedColormap(colors)
plt.imshow(trace["assignment"][::400, np.argsort(data)],
cmap=plt.cmap, aspect=.4, alpha=.9)
plt.xticks(np.arange(0, data.shape[0], 40),
["%.2f" % s for s in np.sort(data)[::40]])
plt.ylabel("posterior sample")
plt.xlabel("value of $i$th data point")
plt.title("Posterior labels of data points");
```
Looking at the above plot, it appears that the greatest uncertainty is between 150 and 170. The above plot slightly misrepresents things, as the x-axis is not a true scale (it displays the value of the $i$th sorted data point). A clearer diagram is below, where we have estimated the *frequency* with which each data point belongs to labels 0 and 1.
```python
cmap = mpl.colors.LinearSegmentedColormap.from_list("BMH", colors)
assign_trace = trace["assignment"]
plt.scatter(data, 1 - assign_trace.mean(axis=0), cmap=cmap,
c=assign_trace.mean(axis=0), s=50)
plt.ylim(-0.05, 1.05)
plt.xlim(35, 300)
plt.title("Probability of data point belonging to cluster 0")
plt.ylabel("probability")
plt.xlabel("value of data point");
```
Even though we modeled the clusters using Normal distributions, we didn't get just a single Normal distribution that *best* fits the data (whatever our definition of best is), but a distribution of values for the Normal's parameters. How can we choose just a single pair of values for the mean and variance and determine a *sorta-best-fit* gaussian?
One quick and dirty way (which has nice theoretical properties we will see in Chapter 5), is to use the *mean* of the posterior distributions. Below we overlay the Normal density functions, using the mean of the posterior distributions as the chosen parameters, with our observed data:
```python
norm = stats.norm
x = np.linspace(20, 300, 500)
posterior_center_means = center_trace.mean(axis=0)
posterior_std_means = std_trace.mean(axis=0)
posterior_p_mean = trace["p"].mean()
plt.hist(data, bins=20, histtype="step", density=True, color="k",
lw=2, label="histogram of data")
y = posterior_p_mean * norm.pdf(x, loc=posterior_center_means[0],
scale=posterior_std_means[0])
plt.plot(x, y, label="Cluster 0 (using posterior-mean parameters)", lw=3)
plt.fill_between(x, y, color=colors[1], alpha=0.3)
y = (1 - posterior_p_mean) * norm.pdf(x, loc=posterior_center_means[1],
scale=posterior_std_means[1])
plt.plot(x, y, label="Cluster 1 (using posterior-mean parameters)", lw=3)
plt.fill_between(x, y, color=colors[0], alpha=0.3)
plt.legend(loc="upper left")
plt.title("Visualizing Clusters using posterior-mean parameters");
```
### Important: Don't mix posterior samples
In the above example, a possible (though less likely) scenario is that cluster 0 has a very large standard deviation, and cluster 1 has a small standard deviation. This would still satisfy the evidence, albeit less so than our original inference. Alternatively, it would be incredibly unlikely for *both* distributions to have a small standard deviation, as the data does not support this hypothesis at all. Thus the two standard deviations are *dependent* on each other: if one is small, the other must be large. In fact, *all* the unknowns are related in a similar manner. For example, if a standard deviation is large, the mean has a wider possible space of realizations. Conversely, a small standard deviation restricts the mean to a small area.
During MCMC, we are returned vectors representing samples from the unknown posteriors. Elements of different vectors cannot be used together, as this would break the above logic: perhaps a sample has returned that cluster 1 has a small standard deviation, hence all the other variables in that sample would incorporate that and be adjusted accordingly. It is easy to avoid this problem though, just make sure you are indexing traces correctly.
Another small example to illustrate the point. Suppose two variables, $x$ and $y$, are related by $x+y=10$. We model $x$ as a Normal random variable with mean 4 and draw 10,000 samples.
```python
import pymc3 as pm
with pm.Model() as model:
x = pm.Normal("x", mu=4, tau=10)
y = pm.Deterministic("y", 10 - x)
trace_2 = pm.sample(10000, pm.Metropolis())
plt.plot(trace_2["x"])
plt.plot(trace_2["y"])
plt.title("Displaying (extreme) case of dependence between unknowns");
```
As you can see, the two variables are not unrelated, and it would be wrong to add the $i$th sample of $x$ to the $j$th sample of $y$, unless $i = j$.
#### Returning to Clustering: Prediction
The above clustering can be generalized to $k$ clusters. Choosing $k=2$ allowed us to visualize the MCMC better, and examine some very interesting plots.
What about prediction? Suppose we observe a new data point, say $x = 175$, and we wish to label it to a cluster. It is foolish to simply assign it to the *closer* cluster center, as this ignores the standard deviation of the clusters, and we have seen from the plots above that this consideration is very important. More formally: we are interested in the *probability* (as we cannot be certain about labels) of assigning $x=175$ to cluster 1. Denote the assignment of $x$ as $L_x$, which is equal to 0 or 1, and we are interested in $P(L_x = 1 \;|\; x = 175 )$.
A naive method to compute this is to re-run the above MCMC with the additional data point appended. The disadvantage with this method is that it will be slow to infer for each novel data point. Alternatively, we can try a *less precise*, but much quicker method.
We will use Bayes Theorem for this. If you recall, Bayes Theorem looks like:
$$ P( A | X ) = \frac{ P( X | A )P(A) }{P(X) }$$
In our case, $A$ represents $L_x = 1$ and $X$ is the evidence we have: we observe that $x = 175$. For a particular sample set of parameters for our posterior distribution, $( \mu_0, \sigma_0, \mu_1, \sigma_1, p)$, we are interested in asking "Is the probability that $x$ is in cluster 1 *greater* than the probability it is in cluster 0?", where the probability is dependent on the chosen parameters.
\begin{align}
& P(L_x = 1| x = 175 ) \gt P(L_x = 0| x = 175 ) \\\\[5pt]
& \frac{ P( x=175 | L_x = 1 )P( L_x = 1 ) }{P(x = 175) } \gt \frac{ P( x=175 | L_x = 0 )P( L_x = 0 )}{P(x = 175) }
\end{align}
As the denominators are equal, they can be ignored (and good riddance, because computing the quantity $P(x = 175)$ can be difficult).
$$ P( x=175 | L_x = 1 )P( L_x = 1 ) \gt P( x=175 | L_x = 0 )P( L_x = 0 ) $$
```python
norm_pdf = stats.norm.pdf
p_trace = trace["p"][25000:]
prev_p_trace = trace["p"][:25000]
x = 175
v = p_trace * norm_pdf(x, loc=center_trace[:, 0], scale=std_trace[:, 0]) > \
(1 - p_trace) * norm_pdf(x, loc=center_trace[:, 1], scale=std_trace[:, 1])
print("Probability of belonging to cluster 1:", v.mean())
```
Probability of belonging to cluster 1: 0.006376
Giving us a probability instead of a label is a very useful thing. Instead of the naive
L = 1 if prob > 0.5 else 0
we can optimize our guesses using a *loss function*, which the entire fifth chapter is devoted to.
### Using `MAP` to improve convergence
If you ran the above example yourself, you may have noticed that our results were not consistent: perhaps your cluster division was more scattered, or perhaps less scattered. The problem is that our traces are a function of the *starting values* of the MCMC algorithm.
It can be mathematically shown that letting the MCMC run long enough, by performing many steps, the algorithm *should forget its initial position*. In fact, this is what it means to say the MCMC converged (in practice though we can never achieve total convergence). Hence if we observe different posterior analyses, it is likely because our MCMC has not fully converged yet, and we should not use samples from it yet (we should use a larger burn-in period).
In fact, poor starting values can prevent any convergence, or significantly slow it down. Ideally, we would like to have the chain start at the *peak* of our landscape, as this is exactly where the posterior distributions exist. Hence, if we started at the "peak", we could avoid a lengthy burn-in period and incorrect inference. Generally, we call this "peak" the *maximum a posteriori* or, more simply, the *MAP*.
Of course, we do not know where the MAP is. PyMC3 provides a function that will approximate, if not find, the MAP location. In the PyMC3 main namespace is the `find_MAP` function. If you call this function within the context of `Model()`, it will calculate the MAP which you can then pass to `pm.sample()` as a `start` parameter.
start = pm.find_MAP()
trace = pm.sample(2000, step=pm.Metropolis, start=start)
The `find_MAP()` function has the flexibility of allowing the user to choose which optimization algorithm to use (after all, this is an optimization problem: we are looking for the values that maximize our landscape), as not all optimization algorithms are created equal. The default optimization algorithm in the function call is the Broyden-Fletcher-Goldfarb-Shanno ([BFGS](https://en.wikipedia.org/wiki/Broyden-Fletcher-Goldfarb-Shanno_algorithm)) algorithm to find the maximum of the log-posterior. As an alternative, you can use other optimization algorithms from the `scipy.optimize` module. For example, you can use Powell's Method, a favourite of PyMC blogger [Abraham Flaxman](http://healthyalgorithms.com/) [1], by calling `find_MAP(fmin=scipy.optimize.fmin_powell)`. The default works well enough, but if convergence is slow or not guaranteed, feel free to experiment with Powell's method or the other algorithms available.
The MAP can also be used as a solution to the inference problem, as mathematically it is the *most likely* value for the unknowns. But as mentioned earlier in this chapter, this location ignores the uncertainty and doesn't return a distribution.
#### Speaking of the burn-in period
It is still a good idea to decide on a burn-in period, even if we are using `find_MAP()` prior to sampling, just to be safe. We can no longer automatically discard samples with a `burn` parameter in the `sample()` function as we could in PyMC2, but it is easy enough to simply discard the beginning section of the trace through array slicing. As one does not know when the chain has fully converged, a good rule of thumb is to discard the first *half* of your samples, sometimes up to 90% of the samples for longer runs. To continue the clustering example from above, the new code would look something like:
with pm.Model() as model:
start = pm.find_MAP()
step = pm.Metropolis()
trace = pm.sample(100000, step=step, start=start)
burned_trace = trace[50000:]
## Diagnosing Convergence
### Autocorrelation
Autocorrelation is a measure of how related a series of numbers is with itself. A measurement of 1.0 is perfect positive autocorrelation, 0 means no autocorrelation, and -1.0 is perfect negative autocorrelation. If you are familiar with standard *correlation*, then autocorrelation is just how correlated a series, $x_t$, at time $t$ is with the series at time $t-k$:
$$R(k) = Corr( x_t, x_{t-k} ) $$
For example, consider the two series:
$$x_t \sim \text{Normal}(0,1), \;\; x_0 = 0$$
$$y_t \sim \text{Normal}(y_{t-1}, 1 ), \;\; y_0 = 0$$
which have example paths like:
```python
figsize(12.5, 4)
import pymc3 as pm
x_t = np.random.normal(0, 1, 200)
x_t[0] = 0
y_t = np.zeros(200)
for i in range(1, 200):
y_t[i] = np.random.normal(y_t[i - 1], 1)
plt.plot(y_t, label="$y_t$", lw=3)
plt.plot(x_t, label="$x_t$", lw=3)
plt.xlabel("time, $t$")
plt.legend();
```
One way to think of autocorrelation is "If I know the position of the series at time $s$, can it help me know where I am at time $t$?" In the series $x_t$, the answer is No. By construction, the $x_t$ are independent random variables. If I told you that $x_2 = 0.5$, could you give me a better guess about $x_3$? No.
On the other hand, $y_t$ is autocorrelated. By construction, if I knew that $y_2 = 10$, I can be very confident that $y_3$ will not be very far from 10. Similarly, I can even make a (less confident) guess about $y_4$: it will probably not be near 0 or 20, but a value of 5 is not too unlikely. I can make a similar argument about $y_5$, but again, I am less confident. Taking this to its logical conclusion, we must concede that as $k$, the lag between time points, increases the autocorrelation decreases. We can visualize this:
```python
def autocorr(x):
# from http://tinyurl.com/afz57c4
result = np.correlate(x, x, mode='full')
result = result / np.max(result)
return result[result.size // 2:]
colors = ["#348ABD", "#A60628", "#7A68A6"]
x = np.arange(1, 200)
plt.bar(x, autocorr(y_t)[1:], width=1, label="$y_t$",
edgecolor=colors[0], color=colors[0])
plt.bar(x, autocorr(x_t)[1:], width=1, label="$x_t$",
color=colors[1], edgecolor=colors[1])
plt.legend(title="Autocorrelation")
plt.ylabel("measured correlation \nbetween $y_t$ and $y_{t-k}$.")
plt.xlabel("k (lag)")
plt.title("Autocorrelation plot of $y_t$ and $x_t$ for differing $k$ lags.");
```
Notice that as $k$ increases, the autocorrelation of $y_t$ decreases from a very high point. Compare with the autocorrelation of $x_t$ which looks like noise (which it really is), hence we can conclude no autocorrelation exists in this series.
#### How does this relate to MCMC convergence?
By the nature of the MCMC algorithm, we will always be returned samples that exhibit autocorrelation (this is because of the step `from your current position, move to a position near you`).
A chain that is not exploring the space well will exhibit very high autocorrelation. Visually, if the trace seems to meander like a river, and not settle down, the chain will have high autocorrelation.
This does not imply that a converged MCMC has low autocorrelation. Hence low autocorrelation is not necessary for convergence, but it is sufficient. PyMC3 has a built-in autocorrelation plotting function in the `plots` module.
### Thinning
Another issue can arise if there is high-autocorrelation between posterior samples. Many post-processing algorithms require samples to be *independent* of each other. This can be solved, or at least reduced, by only returning to the user every $n$th sample, thus removing some autocorrelation. Below we perform an autocorrelation plot for $y_t$ with differing levels of thinning:
```python
max_x = 200 // 3 + 1
x = np.arange(1, max_x)
plt.bar(x, autocorr(y_t)[1:max_x], edgecolor=colors[0],
label="no thinning", color=colors[0], width=1)
plt.bar(x, autocorr(y_t[::2])[1:max_x], edgecolor=colors[1],
label="keeping every 2nd sample", color=colors[1], width=1)
plt.bar(x, autocorr(y_t[::3])[1:max_x], width=1, edgecolor=colors[2],
label="keeping every 3rd sample", color=colors[2])
plt.autoscale(tight=True)
plt.legend(title="Autocorrelation plot for $y_t$", loc="lower left")
plt.ylabel("measured correlation \nbetween $y_t$ and $y_{t-k}$.")
plt.xlabel("k (lag)")
plt.title("Autocorrelation of $y_t$ (no thinning vs. thinning) \
at differing $k$ lags.");
```
With more thinning, the autocorrelation drops quicker. There is a tradeoff though: higher thinning requires more MCMC iterations to achieve the same number of returned samples. For example, 10 000 samples unthinned is 100 000 with a thinning of 10 (though the latter has less autocorrelation).
What is a good amount of thinning? The returned samples will always exhibit some autocorrelation, regardless of how much thinning is done. So long as the autocorrelation tends to zero, you are probably ok. Typically thinning of more than 10 is not necessary.
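In practice, thinning a trace is just array slicing; the numbers below (25,000 discarded as burn-in, keep every 10th sample) are illustrative.
```python
# Drop a burn-in, then keep every 10th sample of the centers.
thinned_centers = trace["centers"][25000::10]
print(trace["centers"].shape, "->", thinned_centers.shape)
```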
### `pymc3.plots`
It seems silly to have to manually create histograms, autocorrelation plots and trace plots each time we perform MCMC. The authors of PyMC3 have included a visualization tool for just this purpose.
The `pymc3.plots` module contains a few different plotting functions that you might find useful. For each different plotting function contained therein, you simply pass a `trace` returned from sampling as well as a list, `varnames`, of the variables that you are interested in. This module can provide you with plots of autocorrelation and the posterior distributions of each variable and their traces, among others.
Below we use the tool to plot the centers of the clusters.
```python
pm.plots.traceplot(trace=trace, varnames=["centers"])
pm.plots.plot_posterior(trace=trace["centers"][:,0])
pm.plots.plot_posterior(trace=trace["centers"][:,1])
pm.plots.autocorrplot(trace=trace, varnames=["centers"]);
```
The first plotting function gives us the posterior density of each unknown in the `centers` variable as well as the `trace` of each. The trace plot is useful for inspecting that possible "meandering" property that is a result of non-convergence. The density plot gives us an idea of the shape of the distribution of each unknown, but it is better to look at each of them individually.
The second plotting function(s) provides us with a histogram of the samples with a few added features. The text overlay in the center shows us the posterior mean, which is a good summary of posterior distribution. The interval marked by the horizontal black line overlay represents the *95% credible interval*, sometimes called the *highest posterior density interval* and not to be confused with a *95% confidence interval*. We won't get into the latter, but the former can be interpreted as "there is a 95% chance the parameter of interest lies in this interval". When communicating your results to others, it is incredibly important to state this interval. One of our purposes for studying Bayesian methods is to have a clear understanding of our uncertainty in unknowns. Combined with the posterior mean, the 95% credible interval provides a reliable interval to communicate the likely location of the unknown (provided by the mean) *and* the uncertainty (represented by the width of the interval).
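A very similar 95% interval can be read directly off the raw samples as the 2.5th and 97.5th percentiles (strictly speaking `plot_posterior` shows a highest-density interval, but for roughly symmetric posteriors like these the percentile interval is very close), which is handy when you want numbers rather than a plot:
```python
# Percentile-based 95% credible intervals from the posterior samples of the two centers.
ci_center_0 = np.percentile(trace["centers"][:, 0], [2.5, 97.5])
ci_center_1 = np.percentile(trace["centers"][:, 1], [2.5, 97.5])
print("center 0: 95% credible interval ~", ci_center_0)
print("center 1: 95% credible interval ~", ci_center_1)
```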
The last plots, titled `center_0` and `center_1` are the generated autocorrelation plots, similar to the ones displayed above.
## Useful tips for MCMC
Bayesian inference would be the *de facto* method if it weren't for MCMC's computational difficulties. In fact, MCMC is what turns most users off practical Bayesian inference. Below I present some good heuristics to help convergence and speed up the MCMC engine:
### Intelligent starting values
It would be great to start the MCMC algorithm off near the posterior distribution, so that it will take little time to start sampling correctly. We can aid the algorithm by telling it where we *think* the posterior distribution will be by specifying the `testval` parameter in the `Stochastic` variable creation. In many cases we can produce a reasonable guess for the parameter. For example, if we have data from a Normal distribution, and we wish to estimate the $\mu$ parameter, then a good starting value would be the *mean* of the data.
mu = pm.Uniform( "mu", 0, 100, testval = data.mean() )
For most parameters in models, there is a frequentist estimate of it. These estimates are a good starting value for our MCMC algorithms. Of course, this is not always possible for some variables, but including as many appropriate initial values is always a good idea. Even if your guesses are wrong, the MCMC will still converge to the proper distribution, so there is little to lose.
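For instance, a rough (purely illustrative) way to get frequentist-style starting guesses for the mixture model above is to split the data at an eyeballed cut-point; the value 155 and the names `guess_centers`, `guess_sds` below are my own choices, not from the original text.
```python
# Split the data at an eyeballed cut-point and use group means/standard deviations as starting guesses.
rough_split = data < 155
guess_centers = np.array([data[rough_split].mean(), data[~rough_split].mean()])
guess_sds = np.array([data[rough_split].std(), data[~rough_split].std()])
print(guess_centers, guess_sds)
# These could then be supplied when creating the variables, e.g.
#   centers = pm.Normal("centers", mu=guess_centers, sd=10, shape=2, testval=guess_centers)
```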
This is what using `MAP` tries to do, by giving good initial values to the MCMC. So why bother specifying user-defined values? Well, even giving `MAP` good values will help it find the maximum a posteriori.
Also important, *bad initial values* are a source of major bugs in PyMC3 and can hurt convergence.
#### Priors
If the priors are poorly chosen, the MCMC algorithm may not converge, or at least have difficulty converging. Consider what may happen if the prior chosen does not even contain the true parameter: the prior assigns 0 probability to the unknown, hence the posterior will assign 0 probability as well. This can cause pathological results.
For this reason, it is best to carefully choose the priors. Often, lack of convergence or evidence of samples crowding to boundaries implies something is wrong with the chosen priors (see the *Folk Theorem of Statistical Computing* below).
#### Covariance matrices and eliminating parameters
### The Folk Theorem of Statistical Computing
> *If you are having computational problems, probably your model is wrong.*
## Conclusion
PyMC3 provides a very strong backend to performing Bayesian inference, mostly because it has abstracted the inner mechanics of MCMC from the user. Despite this, some care must be applied to ensure your inference is not being biased by the iterative nature of MCMC.
### References
1. Flaxman, Abraham. "Powell's Methods for Maximization in PyMC." Healthy Algorithms. N.p., 9 02 2012. Web. 28 Feb 2013. <http://healthyalgorithms.com/2012/02/09/powells-method-for-maximization-in-pymc/>.
```python
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
<style>
@font-face {
font-family: "Computer Modern";
src: url('http://9dbb143991406a7c655e-aa5fcb0a5a4ec34cff238a2d56ca4144.r56.cf5.rackcdn.com/cmunss.otf');
}
@font-face {
font-family: "Computer Modern";
font-weight: bold;
src: url('http://9dbb143991406a7c655e-aa5fcb0a5a4ec34cff238a2d56ca4144.r56.cf5.rackcdn.com/cmunsx.otf');
}
@font-face {
font-family: "Computer Modern";
font-style: oblique;
src: url('http://9dbb143991406a7c655e-aa5fcb0a5a4ec34cff238a2d56ca4144.r56.cf5.rackcdn.com/cmunsi.otf');
}
@font-face {
font-family: "Computer Modern";
font-weight: bold;
font-style: oblique;
src: url('http://9dbb143991406a7c655e-aa5fcb0a5a4ec34cff238a2d56ca4144.r56.cf5.rackcdn.com/cmunso.otf');
}
div.cell{
width:800px;
margin-left:16% !important;
margin-right:auto;
}
h1 {
font-family: Helvetica, serif;
}
h4{
margin-top:12px;
margin-bottom: 3px;
}
div.text_cell_render{
font-family: Computer Modern, "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif;
line-height: 145%;
font-size: 130%;
width:800px;
margin-left:auto;
margin-right:auto;
}
.CodeMirror{
font-family: "Source Code Pro", source-code-pro,Consolas, monospace;
}
.prompt{
display: None;
}
.text_cell_render h5 {
font-weight: 300;
font-size: 22pt;
color: #4057A1;
font-style: italic;
margin-bottom: .5em;
margin-top: 0.5em;
display: block;
}
.warning{
color: rgb( 240, 20, 20 )
}
</style>
```python
```
| e3e3185496c0a286d0960cba3329f2fb756070de | 979,447 | ipynb | Jupyter Notebook | Chapter3_MCMC/Ch3_IntroMCMC_PyMC3.ipynb | Amirgav/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers-AG | ["MIT"] |
```python
%%html
<!--Script block to left align Markdown Tables-->
<style>
table {margin-left: 0 !important;}
</style>
```
##### Notes
Leave script block above in place to left justify the table.
This problem can also be used as laboratory exercise in `matplotlib` lesson.
Dependencies: `matplotlib` and `math`; could also be solved using `numpy` and/or `pandas`
```python
```
```python
```
# Problem XX
Graphing Functions Special Functions
Consider the two functions listed below:
\begin{equation}
f(x) = e^{-\alpha x}
\label{eqn:fofx}
\end{equation}
\begin{equation}
g(x) = \gamma sin(\beta x)
\label{eqn:gofx}
\end{equation}
Prepare a plot of the two functions on the same graph.
Use the values in Table below for $\alpha$, $\beta$, and $\gamma$.
|Parameter|Value|
|:---|---:|
|$\alpha$|0.50|
|$\beta$|3.00|
|$\gamma$|$\frac{\pi}{2}$|
The plot should have $x$ values ranging from $0$ to $10$ (inclusive) in sufficiently small increments to see curvature in the two functions as well as to identify the number and approximate locations of intersections. In this problem, intersections are locations in the $x-y$ plane where the two "curves" cross one another of the two plots.
```python
# By-hand evaluate f(x) for x=1, alpha = 1/2
```
```python
# By-hand evaluate g(x) for x=3.14, beta = 1/2, gamma = 2
```
```python
# Define the first function f(x,alpha), test the function using your by hand answer
def f(x,alpha):
import math
f = math.exp(-1.0*alpha*x)
return f
f(1,0.5)
```
0.6065306597126334
```python
# Define the second function g(x,beta,gamma), test the function using your by hand answer
def g(x,beta,gamma):
import math
f = gamma*math.sin(beta*x)
return f
g(3.14,0.5,2.0)
```
1.9999993658636692
```python
# Build a list for x that ranges from 0 to 10, inclusive, with adjustable step sizes for plotting later on
howMany = 100 # enough points to resolve the curvature of g(x); adjust as needed
scale = 10.0/howMany
xvector = []
for i in range(0,howMany+1):
xvector.append(scale*i)
#xvector # activate to display
```
```python
# Build a plotting function that plots both functions on the same chart, using the Table values for the parameters
import math
alpha = 0.5
beta = 3.0            # Table value (the spot-check cells above used beta = 1/2, gamma = 2)
gamma = math.pi/2     # Table value
yf = []
yg = []
for i in range(0,howMany+1):
yf.append(f(xvector[i],alpha))
yg.append(g(xvector[i],beta,gamma))
def plot2lines(list11,list21,list12,list22,strx,stry,strtitle): # plot list1 on x, list2 on y, xlabel, ylabel, title
    from matplotlib import pyplot as plt # import the plotting library from matplotlib
    plt.plot( list11, list21, color ='green', marker ='o', linestyle ='none' , label = "f(x)" ) # markers only for f(x)
    plt.plot( list12, list22, color ='red', marker ='o', linestyle ='solid' , label = "g(x)") # line and markers for g(x)
plt.legend()
plt.title(strtitle)# add a title
plt.ylabel(stry)# add a label to the x and y-axes
plt.xlabel(strx)
plt.show() # display the plot
return #null return
plot2lines(xvector,yf,xvector,yg,'x-value','y-value','plot of f and g')
```
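As a possible follow-up (not part of the original solution notebook), the intersections requested in the problem statement can be located numerically by scanning $h(x) = f(x) - g(x)$ for sign changes on a fine grid and refining each bracket by bisection. The helper names `h`, `bisect_root`, and `crossings` are illustrative.
```python
import math

def h(x, alpha=0.5, beta=3.0, gamma=math.pi/2):
    # difference of the two functions; a root of h is an intersection of f and g
    return f(x, alpha) - g(x, beta, gamma)

def bisect_root(lo, hi, tol=1e-8):
    # assumes h(lo) and h(hi) have opposite signs
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(lo) * h(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

crossings = []
grid = [0.01 * i for i in range(0, 1001)]   # 0 to 10 in steps of 0.01
for left, right in zip(grid[:-1], grid[1:]):
    if h(left) * h(right) < 0:
        crossings.append(bisect_root(left, right))
print(len(crossings), "intersections near:", [round(c, 3) for c in crossings])
```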
```python
```
```python
```
| 722aac519ac612352924adafc72965fa2b4c8f33 | 23,630 | ipynb | Jupyter Notebook | 5-ExamProblems/.src/ProblemXX/ProblemXX-Dev-Solution.ipynb | dustykat/engr-1330-psuedo-course | ["CC0-1.0"] |
# Exercise session nº 5
---
# Furrow Constriction in Animal Cell Cytokinesis
__*Sacha Ichbiah, 21/02/22, ENS Paris*__
This subject is extracted from :
> Hervé Turlier et al., *Furrow Constriction in Animal Cell Cytokinesis*, Biophysical Journal, 2014. \
> https://doi.org/10.1016/j.bpj.2013.11.014
Cytokinesis is the process of physical cleavage at the end of cell division; it proceeds by ingression of an acto- myosin furrow at the equator of the cell. Its failure leads to multinucleated cells and is a possible cause of tumorigenesis. Despite its ubiquity in developmental biology, its precise description and understanding have challenged biologists and physicists.
In this paper, the authors propose a model based on a minimal geometry and scaling arguments that gives an interpretation of the processes at play during cytokinesis. It notably demonstrates that, because of the incompressibility of the cytoplasm, cytokinesis leads to a competition between the furrow line tension and the surface tension of the cell poles. This competition sets a threshold for cytokinesis completion, and explains cytokinesis dynamics.
During this session, we will derive the equations of this scaling model of furrow constriction, and will integrate these equations to see the constriction dynamics. We will show that the model predicts a cytokinesis duration independent of cell size, as has been observed in C. elegans.
---
## I - The Scaling Model
The geometry of the dividing cell is described by the apposition of two spherical caps, parametrized by an angle $\theta$ as shown on the left sketch.
The volume of a spherical cap (in blue) is : $\mathcal{V}_{sc}(r,h) = \dfrac{\pi}{3} h^2 (3r - h)$, and its area : $\mathcal{A}_{sc}(r,h)=2\pi r h $ (right sketch).
#### **Question 1 :**
> Noting that the cytoplasm is an incompressible fluid, establish that $R_0 = R\, F(\theta)$, where $F(\theta)$ is a geometric function of $\theta$ that you will determine.
We define a dimensionless parameter $\kappa$ to express the competition between the contractile line tension $\gamma$ at the furrow and the active tension $N^a_0$ at the cell poles: $\kappa = \dfrac{\gamma}{2R_0N^a_0}$.
The polar contractility tends to reduce the surface area $A_p = 2\pi R^2 (1+\cos\theta)$ of each cell pole, whereas the line tension tends to reduce the circumference $2\pi r_f$ of the contractile ring. These effects are captured by a simple mechanical energy $\mathcal{E} = 2\pi r_f \gamma + 2 A_p N^a_0$.
#### **Question 2 :**
> Rescale the energy $\mathcal{E}$ by an energy $\mathcal{E}_0 = 4 \pi R_0^2 N^a_0$ to make it only depend on $\theta$ and $\kappa$.
## II - Mechanical Equilibrium
The local minimum of the energy gives the equilibrium configuration of the cell.
To find this minimum, we will use sympy, a Python library for symbolic computation. This will allow us to compute the derivatives effortlessly.
### Symbolic Computation with Sympy *(in French: "Calcul Formel")*
We will use sympy, a library that allows us to do symbolic computation. Analytical results are always best, but sometimes the equations do not lead to nice simplifications. If we are interested in the numerical result of the equations, we can use sympy to work on the analytical expression directly, obtain derivatives, etc., before evaluating it on real values. There are three main functions that we will use in sympy, which we present briefly below. If interested, the most powerful symbolic calculus tools are found in Wolfram Mathematica, which requires a license.
#### a) Defining expression with symbols and trigonometric functions, and obtain derivatives :
```python
!pip install sympy
import sympy
from sympy import symbols, diff,lambdify, simplify
from sympy import cos, sin
import numpy as np
from scipy.optimize import minimize
from scipy.optimize import fsolve
import matplotlib.pyplot as plt
```
```python
#Symbols are unknown variables, that we aim to replace with real values in the end. We define them with the function symbols :
a,b,c = symbols("a b c")
#We can then define a symbolic expression, containing or not trigonometric functions (among many other possibilities !)
E = a**2 + a*b + cos(c)
#And obtain its derivatives with respect to any variables, eg a :
First_derivative = E.diff(a)
Second_derivative = First_derivative.diff(a)
First_derivative, Second_derivative
```
(2*a + b, 2)
#### b) Substituting variables and evaluating symbolic expressions :
```python
# We can replace symbols with real variables with the method subs :
print("c = pi gives :",E.subs([(c,np.pi)]))
print("Subs method : ",E.subs([(a,2),(b,1),(c,0)]) )
#We can also transform a symbolic expression into a lambda function
#This is a faster process than subs if we need to evaluate the function on many points :
f = lambdify((a,b,c),E,"numpy")
print("Lambify method : ",f(2,1,0))
#We can combine both to replace certain variables before creating a lambda function with the remaining variables :
g = lambdify(a,E.subs([(b,1),(c,0)]))
print("Subs and lambdify combined :",g(2))
#Short benchmarking
from time import time
values_evaluated = np.linspace(0,np.pi,1000)
t1 = time()
g = lambdify(a,E.subs([(b,2),(c,0.2)]))
g(values_evaluated)
t2 = time()
for value in values_evaluated :
E.subs([(a,value),(b,2),(c,0.2)])
t3 = time()
print("Time with lambdify :",round((t2-t1),4))
print("Time with subs :",round((t3-t2),4))
```
c = pi gives : a**2 + a*b - 1.0
Subs method : 7
Lambify method : 7.0
Subs and lambdify combined : 7
Time with lambdify : 0.0029
Time with subs : 0.9684
### The equilibrium configuration during cytokinesis
Let's go back to the initial problem. Our goal is to study the properties of the normalized energy $\overline{\mathcal{E}} = \mathcal{E}/\mathcal{E}_0$.
```python
x, k = symbols("x k")
F = 1 +1.5*cos(x) - 0.5*(cos(x))**3
energy = k * sin(x)/(F**(1/3)) + (1+ cos(x))/(F**(2/3))
#We see that there is no simplification easily given by sympy :
print(simplify(energy),'\n')
#We can replace the values of k and x :
print(energy.subs([(x,np.pi/3),(k,.5)]))
energy
```
(0.873580464736299*k*(-0.333333333333333*cos(x)**3 + cos(x) + 0.666666666666667)**0.666666666666667*sin(x) + 0.763142828368888*(cos(x) + 1)*(-0.333333333333333*cos(x)**3 + cos(x) + 0.666666666666667)**0.333333333333333)*(-0.333333333333333*cos(x)**3 + cos(x) + 0.666666666666667)**(-1.0)
1.42197524663604
$\displaystyle \frac{0.873580464736299 k \sin{\left(x \right)}}{\left(- 0.333333333333333 \cos^{3}{\left(x \right)} + \cos{\left(x \right)} + 0.666666666666667\right)^{0.333333333333333}} + \frac{0.763142828368888 \left(\cos{\left(x \right)} + 1\right)}{\left(- 0.333333333333333 \cos^{3}{\left(x \right)} + \cos{\left(x \right)} + 0.666666666666667\right)^{0.666666666666667}}$
Now that we have implemented our energy in sympy, we can automatically obtain the derivatives with the diff engine. We see that the analytical formulas are quite long, and obtaining the derivatives by hand would be both painful and error-prone.
#### **Question 3 :**
> Obtain the expression of the first and second derivatives of the energy $\overline{\mathcal{E}}$ with respect to theta (i.e. x) with the diff function:
```python
```
#### **Question 4 :**
> Plot the energy profile $\overline{\mathcal{E}}(\theta), \theta \in [0,\dfrac{\pi}{2}]$ for $\kappa \in \{0.0,0.1,0.2,0.3,0.4,0.5\}$. What do you observe ?
```python
plt.figure(figsize = (15,25))
vals_theta = np.linspace(0,np.pi/2,10000)
for k_val in np.linspace(0,0.5,6):
print("k_value :",k_val)
...
plt.plot(vals_theta,...,label = k_val.round(2))
plt.legend()
```
#### **Question 5 :**
> Starting with $\theta = \pi/4$, find the angle giving the local energy minimum for each k in values_k. Plot the equilibrium angle $\theta_{min}$, and the value of the derivatives of the energy $\left. \dfrac{\partial \mathcal{E}}{\partial \theta}\right\rvert_{\theta_{min}}$, $\left. \dfrac{\partial^2 \mathcal{E}}{\partial \theta^2}\right\rvert_{\theta_{min}}$, $\left (\left. \dfrac{\partial^2 \mathcal{E}}{\partial \theta^2}\right\rvert_{\theta_{min}}\right)^2$ at this angle for each k.
__Tip :__ Lambdify the energy function into a function f and find its minimum with the function minimize from scipy (see below)
```python
e = lambdify((x,k), energy, "numpy")
npoints = 1001
values_k = np.linspace(0, 1, npoints)
...
for j,k_val in enumerate(values_k) :
f = lambda x : e(x,k_val)
...
sols = minimize(fun = f,x0 = (np.pi/4),method = "SLSQP",bounds=[(0,np.pi/2)])
assert (sols.success)
min_theta = sols['x']
...
fig,ax = plt.subplots(1,4,figsize = (21,3))
ax[0].plot(values_k, ...)
ax[0].set_xlabel("k")
ax[0].set_ylabel("minimum found")
ax[1].plot(values_k, ...)
ax[1].set_xlabel("k")
ax[1].set_ylabel("derivative value at minimum")
ax[2].plot(values_k, ...)
ax[2].set_xlabel("k")
ax[2].set_ylabel("second derivative value at minimum")
ax[3].plot(values_k, ...)
ax[3].set_xlabel("k")
ax[3].set_ylabel("square of second derivative value at minimum")
```
#### **Question 6 :**
> Estimate the value $k_c$ where this local minimum disappears, and its associated angle $\theta_c$. Compute the values of the first two derivatives $\left. \dfrac{\partial \mathcal{E}}{\partial \theta}\right\rvert_{\theta_c} $ and $\left. \dfrac{\partial^2 \mathcal{E}}{\partial \theta^2}\right\rvert_{\theta_c} $. What do you deduce on the order of the phase transition ?
__Tip :__ Find the value where the second derivative evaluated at $\theta_c$ is equal to 0 (i.e. the minimum of the square of the second derivative evaluated at $\theta_c$)
#### **Optional : Question 7 :**
> Plot the equilibrium angles and draw the shape of the cells for $\kappa \in \{0,0.2,0.4,0.6\}$
```python
fig,ax = plt.subplots(1,5,figsize =(25,5))
colors = ['tab:blue','tab:orange','tab:green','tab:red']
R0 = 1
for j,idx_val in enumerate([0,2000,4000,6000]) :
theta_sol = Solutions[idx_val]
k_value = values_k[idx_val]
R = R0/((F.subs(x,theta_sol[0]))**(1/3))
e = lambdify((x), energy.subs(k,k_value), "numpy")
theta_values_k = np.linspace(theta_sol, 2*np.pi-theta_sol,100)
circle_x = R*np.cos(theta_values_k)
circle_y = R*np.sin(theta_values_k)
ax[0].plot(vals_theta*180/np.pi,e(vals_theta))
ax[0].scatter(theta_sol*180/np.pi,e(theta_sol),s = 180)
ax[0].set_ylabel("Energy")
ax[0].set_xlabel("Angle value")
ax[0].set_xlim(-5,95)
ax[j+1].plot(circle_x-R*np.cos(theta_sol),circle_y,color=colors[j],linewidth=5)
ax[j+1].plot(R*np.cos(theta_sol)-circle_x,circle_y,color=colors[j],linewidth=5)
ax[j+1].set_title("Equilibrium angle value :" + str((theta_sol[0]*180/np.pi).round(2)))
ax[j+1].set_xlim(-2,2)
ax[j+1].set_ylim(-2,2)
ax[j+1].set_aspect('equal')#, adjustable='box')
```
## III - Dynamics
We now want to study the furrow constriction dynamics, i.e. the temporal evolution of $\dfrac{r_f}{R_0}$. We will establish these dynamics by expressing the derivative of this quantity with respect to $\theta$. As before, we will use the symbolic computation library sympy to evaluate numerical quantities.
To establish the dynamic equation, we note that the power of active effects is exactly dissipated by viscous cell deformations. The viscous dissipation is made of two contributions, the stretching of the poles and the constriction of the ring, which we estimate in scaling. The volume of acto-myosin in the poles is $V_p = 2A_p e_p$ and in the ring $V_f = 2\pi r_f w e_f$, where $w$ and $e_f$ are the width and thickness of the contractile ring. (Recall that the surface of each cell pole is $A_p = 2\pi R^2 (1+\text{cos}(\theta))$.) The pole thickness $e_p \approx e_0$ and the ring thickness $e_f$ reach steady-state values that depend on turnover. This yields the viscous dissipated power:
$\begin{align}
P_d &= \dfrac{1}{2} \eta \left[ V_p \left(\dfrac{1}{R} \dfrac{dr_f}{dt} \right)^2 + V_f \left( \dfrac{1}{r_f} \dfrac{dr_f}{dt}\right)^2 \right] \newline
P_d &= \dfrac{1}{2} \eta \left[ e_p 4 \pi R^2 (1+\text{cos}\theta) \dfrac{1}{R^2} \left(\dfrac{dr_f}{dt} \right)^2 + 2\pi w e_f r_f \dfrac{1}{r_f^2} \left( \dfrac{dr_f}{dt}\right)^2 \right] \newline
&\approx \dfrac{1}{2} \eta \left[ 4 e_0 \pi(1+\text{cos}\theta) \left(\dfrac{dr_f}{dt} \right)^2 + 2\pi w e_f\dfrac{1}{r_f} \left( \dfrac{dr_f}{dt}\right)^2 \right] \newline
&= \left(\dfrac{dr_f}{dt} \right)^2 \dfrac{1}{2} \eta \left[ 4 \pi e_0 (1+\text{cos}\theta) + \dfrac{4\pi}{2} w e_f\dfrac{F(\theta)^{1/3}}{R_0 \sin \theta} \right] \newline
&= \left(\dfrac{dr_f}{dt} \right)^2 4 \pi e_0 \eta \left[ (1+\text{cos}\theta) + \dfrac{1}{2 R_0} w \dfrac{e_f}{e_0} \dfrac{F(\theta)^{1/3}}{\sin \theta} \right] \newline
&= \left(\dfrac{dr_f}{dt} \right)^2 4 \pi e_0 \eta \left[ (1+\text{cos}\theta) + \lambda \dfrac{F(\theta)^{1/3}}{\sin \theta} \right] \newline
\end{align}
$
The balance of mechanical and dissipated powers yields :
$\dfrac{d \mathcal{E}}{dt} + P_d = 0$
Besides :
$
\dfrac{1}{\mathcal{E}_0} \dfrac{d\mathcal{E}}{dt} = \dfrac{\partial \mathcal{E}/\mathcal{E}_0}{\partial \theta} \dfrac{\partial \theta}{\partial r_f} \dfrac{d r_f}{dt} = \dfrac{\partial \mathcal{E}/\mathcal{E}_0}{\partial \theta} \left(\dfrac{\partial r_f}{\partial \theta}\right)^{-1} \dfrac{d r_f}{dt}
$
And, with $T_a=\frac{\eta e_0}{N^a_0}$ :
$\dfrac{1}{\mathcal{E}_0} P_d = \left(\dfrac{dr_f}{dt} \right)^2 \dfrac{4\pi e_0 \eta}{4 \pi R_0^2 N^a_0} \left[ (1+\text{cos}\theta) + \dfrac{\lambda}{2} \dfrac{F(\theta)^{1/3}}{\sin \theta} \right] = \left(\dfrac{dr_f}{dt} \right)^2 \dfrac{T_a}{ R_0^2} \left[ (1+\text{cos}\theta) + \dfrac{\lambda}{2} \dfrac{F(\theta)^{1/3}}{\sin \theta} \right] $
We have thus from $\dfrac{d \mathcal{E}/\mathcal{E}_0}{dt} = - P_d/\mathcal{E}_0$ :
$
\begin{align}
\dfrac{\partial \mathcal{E}/\mathcal{E}_0}{\partial \theta} \left(\dfrac{\partial r_f}{\partial \theta}\right)^{-1} \dfrac{d r_f}{dt} &= - \left(\dfrac{dr_f}{dt} \right)^2 \dfrac{T_a}{ R_0^2} \left[ (1+\text{cos}\theta) + \dfrac{\lambda}{2} \dfrac{F(\theta)^{1/3}}{\sin \theta} \right]
\newline
\dfrac{\partial \mathcal{E}/\mathcal{E}_0}{\partial \theta} \left(\dfrac{\partial r_f/R_0}{\partial \theta}\right)^{-1} &= - \dfrac{dr_f}{dt} \dfrac{T_a}{ R_0} \left[ (1+\text{cos}\theta) + \dfrac{\lambda}{2} \dfrac{F(\theta)^{1/3}}{\sin \theta} \right] \newline
\dfrac{dr_f}{dt} \dfrac{T_a}{ R_0} &= - \dfrac{\partial \mathcal{E}/\mathcal{E}_0}{\partial \theta} \left(\dfrac{\partial r_f/R_0}{\partial \theta}\right)^{-1} \left[ (1+\text{cos}\theta) + \dfrac{\lambda}{2} \dfrac{F(\theta)^{1/3}}{\sin \theta} \right]^{-1} = -\mathcal{H}(\theta,\kappa,\lambda)
\end{align}
$
We will compute numerically the values of this function $\mathcal{H}$ to obtain the evolution of the furrow radius $r_f$.
#### **Question 8 :**
> From the last equation, express the angle temporal variation $\dot \theta$.
#### **Question 9 :**
> Compute numerically $\dot \theta$ with sympy, and integrate the evolution of $\theta$, $r_f$ in time with a forward Euler scheme for $t \in [0,15]$, starting with $\theta(0)=\pi/2$, with $\lambda = 1$ and $\kappa \in \{0.1,0.25,0.4,0.5,0.75,1\}$. Check that it is compatible with the previous results obtained from the static analysis.
```python
x, k, l = symbols("x k l")
F = 1 +1.5*cos(x) - 0.5*(cos(x))**3
energy = k * sin(x)/(F**(1/3)) + (1+ cos(x))/(F**(2/3))
r_f = sin(x)*(F**(-1/3))
dr_f = diff(r_f,x)
first_derivative = diff(energy,x)
H = ((1+cos(x)) + l*(F**(1/3))/sin(x))**(-1) * first_derivative/dr_f
dtheta = - H/dr_f
dtheta = dtheta.subs(l,0.1)
#We can already lambdify the function r_f as it does not depend on k :
func_rf = lambdify(x,r_f,"numpy")
```
```python
theta0 = np.pi/2
R0 = 1
for k_value in [0.1,0.25,0.4,0.5,0.75,1]:
...
npoints = 1000
timepoints = np.linspace(0,15,npoints)
dt = timepoints[1]-timepoints[0]
...
for j,t in enumerate(timepoints) :
...
plt.plot(timepoints,...,label = k_value)
plt.title("Constriction completion or failure")
plt.xlabel("time t/Ta")
plt.ylabel("furrow radius r_f/R_0")
plt.legend()
```
#### **Question 10 :**
> Determine the cytokinesis duration with $R_0 \in \{0.5,1,2,4\}$. Show that in case of cytokinesis completion $\lambda = 1, \kappa = 0.75$, the initial cell radius R0 has no impact on the cytokinesis time.
```python
theta0 = np.pi/2
k_value = 0.75
for R0 in [0.5,1,2,4]:
...
npoints = 1000
timepoints = np.linspace(0,6,npoints)
dt = timepoints[1]-timepoints[0]
...
for j,t in enumerate(timepoints) :
...
plt.plot(timepoints,...,label ="R0: " +str(R0))
plt.title("Cytokinesis time is independant of the cell initial radius")
plt.xlabel("time t/Ta")
plt.ylabel("furrow radius r_f/R_0")
plt.legend()
```
| 981121f584e2e30ca2961318f5cf173510a6e0ef | 150,520 | ipynb | Jupyter Notebook | Ichbiah/TD_5-Cytokinesis/TD_5_Cytokinesis.ipynb | hturlier/M2ICFP | 1d91ff837b05a6058ee34a03fdc8062893287c6e | [
"MIT"
]
| 4 | 2022-02-14T10:17:11.000Z | 2022-03-22T21:16:42.000Z | Ichbiah/TD_5-Cytokinesis/.ipynb_checkpoints/TD_5_Cytokinesis-checkpoint.ipynb | hturlier/M2ICFP | 1d91ff837b05a6058ee34a03fdc8062893287c6e | [
"MIT"
]
| null | null | null | Ichbiah/TD_5-Cytokinesis/.ipynb_checkpoints/TD_5_Cytokinesis-checkpoint.ipynb | hturlier/M2ICFP | 1d91ff837b05a6058ee34a03fdc8062893287c6e | [
"MIT"
]
| 2 | 2022-01-24T15:08:21.000Z | 2022-02-14T10:17:00.000Z | 235.924765 | 68,560 | 0.897575 | true | 5,457 | Qwen/Qwen-72B | 1. YES
2. YES | 0.828939 | 0.849971 | 0.704574 | __label__eng_Latn | 0.92358 | 0.475293 |
# <center>Applied Stochastic Processes HW02</center>
<center>**11510691 程远$\DeclareMathOperator*{\argmin}{argmin}
\newcommand{\using}[1]{\stackrel{\mathrm{#1}}{=}}
\newcommand{\ffrac}{\displaystyle \frac}
\newcommand{\space}{\text{ }}
\newcommand{\bspace}{\;\;\;\;}
\newcommand{\QQQ}{\boxed{?\:}}
\newcommand{\CB}[1]{\left\{ #1 \right\}}
\newcommand{\SB}[1]{\left[ #1 \right]}
\newcommand{\P}[1]{\left( #1 \right)}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\Tran}[1]{{#1}^{\mathrm{T}}}
\newcommand{\d}[1]{\displaystyle{#1}}
\newcommand{\EE}[2][\,\!]{\mathbb{E}_{#1}\left[#2\right]}
\newcommand{\Var}[2][\,\!]{\mathrm{Var}_{#1}\left[#2\right]}
\newcommand{\Cov}[2][\,\!]{\mathrm{Cov}_{#1}\left(#2\right)}
\newcommand{\Corr}[2][\,\!]{\mathrm{Corr}_{#1}\left(#2\right)}
\newcommand{\I}[1]{\mathrm{I}\left( #1 \right)}
\newcommand{\N}[1]{\mathrm{N} \left( #1 \right)}
\newcommand{\ow}{\text{otherwise}}$星**</center>
## Question 1
$\bspace \begin{align}
\EE{X \mid X > 1} &= \int_{0}^{\infty} x \cdot f_X\P{x \mid X > 1} \;\dd{x} \\
&= \ffrac{1} {P\CB{X > 1}}\int_{1}^{\infty} x \cdot \lambda e^{-\lambda x} \;\dd{x} \\
&= e^{\lambda}\left.\SB{-\P{x + \ffrac{1} {\lambda}}e^{-\lambda x}}\right|_{1}^{\infty} = e^{\lambda}\P{1 + \ffrac{1} {\lambda}}e^{-\lambda} = 1 + \ffrac{1} {\lambda}
\end{align}$
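$\bspace$ A quick Monte Carlo check of this result (not part of the homework); the rate $\lambda = 2$ is an arbitrary choice:
```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0
x = rng.exponential(scale=1 / lam, size=2_000_000)  # X ~ Exp(lambda)
print(x[x > 1].mean(), 1 + 1 / lam)                 # both close to 1.5
```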
## Question 2
$\P{\text a}$
$\bspace \begin{align}
f_Y\P{y} &= \int_{-\infty}^{\infty} f\P{x,y} \;\dd{x} \\
&= \int_{0}^{\infty} \ffrac{e^{-x/y} e^{-y}} {y} \;\dd{x} \\
&= \ffrac{1} {y} e^{-y} \cdot \left. \P{-y\exp\P{-\ffrac{1} {y}x}}\right|_{0}^{\infty} \\
&\using{y>0} \ffrac{1} {y} e^{-y} \cdot y = e^{-y}, y > 0
\end{align}$
$\bspace$ and $f_Y\P{y} = 0$ otherwise.
$\P{\text b}$
$\bspace \begin{align}
\EE{X \mid Y = y} &= \int_{-\infty}^{\infty} x \cdot f_X\P{x \mid Y = y}\;\dd{x} \\
&= \int_{0}^{\infty} x \cdot \ffrac{f_{X,Y}\P{x,y}} {f_Y\P{y}}\;\dd{x} \\
&= \int_{0}^{\infty} x \cdot \ffrac{e^{-x/y} e^{-y}} {y} \cdot {e^{y}}\;\dd{x} \\
&= \left.-\P{x+y} \exp\P{-\ffrac{1} {y} x}\right|_{0}^{\infty} \using{y>0} y
\end{align}$
$\P{\text C}$
$\bspace \begin{align}
\EE{X} &= \EE{\EE{X\mid Y}} \\
&= \EE{Y} \\
&= \int_{-\infty}^{\infty} y\cdot f_Y\P{y} \;\dd{y} \\
&= \int_{0}^{\infty} y \cdot e^{-y} \;\dd{y} \\
&= \Big.-\P{y+1}e^{-y}\Big|_{0}^{\infty} = 1
\end{align}$
$\bspace$ And to calculate its variance we first condition $X^2$ on $Y$, giving:
$\bspace \begin{align}
\EE{X^2 \mid Y = y} &= \int_{0}^{\infty} x^2 \cdot \ffrac{e^{-x/y} e^{-y}} {y} \cdot {e^{y}}\;\dd{x} \\
&= \left.-\P{x^2+2yx + 2y^2} \exp\P{-\ffrac{1} {y} x}\right|_{0}^{\infty} \using{y>0} 2y^2
\end{align}$
Thus we can compute $\EE{X^2}$ like:
$\bspace \begin{align}
\EE{X^2} &= \EE{\EE{X^2\mid Y}} \\
&= \EE{2Y^2} \\
&= \int_{-\infty}^{\infty} 2y^2\cdot f_Y\P{y} \;\dd{y} \\
&= 2 \int_{0}^{\infty} y^2 \cdot e^{-y} \;\dd{y} \\
&= \Big.-\P{2y^2+ 4y + 4}e^{-y}\Big|_{0}^{\infty} = 4
\end{align}$
$\bspace \Var{X} = \EE{X^2} - \P{\EE{X}}^2 = 4 - 1^2 = 3$
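$\bspace$ As a numerical check (not part of the homework): sampling $Y \sim \text{Exp}(1)$ and then $X \mid Y = y \sim \text{Exp}(\text{mean } y)$ reproduces $E[X] = 1$ and $\mathrm{Var}[X] = 3$.
```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
y = rng.exponential(1.0, size=n)       # Y ~ Exp(1), density e^{-y}
x = y * rng.exponential(1.0, size=n)   # X | Y = y ~ Exp with mean y
print(x.mean(), x.var())               # ~ 1 and ~ 3
```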
## Question 3
$\bspace \begin{align}
\text{RHS} &= \Big(\EE{\EE{\P{X - \EE{X \mid Y}}^2 \mid Y}}\Big) + \Big(\EE{\P{\EE{X \mid Y}}^2} - \P{\EE{\EE{X \mid Y}}}^2 \Big) \\
&= \EE{\P{X - \EE{X \mid Y}}^2} + \EE{\P{\EE{X \mid Y}}^2} - \P{\EE{X}}^2 \\
&= \EE{X^2} - 2\cdot \EE{X \cdot \EE{X \mid Y}} + 2 \cdot \EE{\P{\EE{X \mid Y}}^2} - \P{\EE{X}}^2 \\
&= \EE{X^2} - \P{\EE{X}}^2 - 2 \cdot \EE{\EE{X \cdot \EE{X \mid Y} \mid Y}}+ 2 \cdot \EE{\P{\EE{X \mid Y}}^2} \\
&= \Var{X} - 2 \cdot \EE{\P{\EE{X \mid Y}}^2} + 2 \cdot \EE{\P{\EE{X \mid Y}}^2} = \Var{X} = \text{LHS}
\end{align}$
## Question 4
$\P{\text a}$
$\bspace \begin{align}
X &= \sum_{i=1}^{N}T_i
\end{align}$
$\P{\text b}$
$\bspace$ To find $\EE{N}$ we condition that on $Y$, the first choice:
$\bspace \begin{align}
\EE{N} &= \EE{N \mid Y = 1} \cdot P\CB{Y = 1} + \EE{N \mid Y = 2} \cdot P\CB{Y = 2} + \EE{N \mid Y = 3} \cdot P\CB{Y = 3} \\
&= \ffrac{1} {3} \P{1 + 1 + \EE{N} + 1 + \EE{N}}
\end{align}$
$\bspace$ Then we solve the preceding equation and find that the solution is $\EE{N} = 3$
$\P{\text c}$
$\bspace$ Since $N$ is the total number of choices, or equivalently, there's no $N+1$ choice. Thus $T_N \equiv 2$ and $\EE{T_N} = 2$.
$\P{\text d}$
$\bspace \begin{align}
\EE{\sum_{i=1}^{N}T_i \mid N = n} &= \EE{\sum_{i=1}^{n-1}T_i \mid N = n} + \EE{T_n \mid N = n} \\
&= \EE{\sum_{i=1}^{n-1}T_i} + \EE{T_n} \\
&= \P{n-1} \cdot \ffrac{1} {2} \P{3+5} + 2 = 4n - 2
\end{align}$
$\P{\text e}$
$\bspace \begin{align}
\EE{X} &= \EE{\EE{\sum_{i=1}^{N}T_i \mid N} } \\
&= \EE{4N - 2} \\
&= 4\cdot \EE{N} - 2 = 4 \times 3 - 2 = 10
\end{align}$
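$\bspace$ A simulation sketch (not part of the homework), under the assumed mapping consistent with the equations above: option 1 leads out after 2 hours, while options 2 and 3 take 3 and 5 hours and return to the start. It reproduces $E[N] = 3$ and $E[X] = 10$.
```python
import numpy as np

rng = np.random.default_rng(2)
times = {1: 2.0, 2: 3.0, 3: 5.0}   # assumed durations for each choice

def one_run():
    total, n_choices = 0.0, 0
    while True:
        choice = rng.integers(1, 4)  # uniform over {1, 2, 3}
        n_choices += 1
        total += times[choice]
        if choice == 1:              # this choice ends the process
            return n_choices, total

samples = np.array([one_run() for _ in range(100_000)])
print(samples[:, 0].mean(), samples[:, 1].mean())  # ~ 3 and ~ 10
```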
## Question 5
$\P{\text a}$
$\bspace \begin{align}
\EE{X_1} = 1 \times \ffrac{r} {r+b} + 0 \times \ffrac{b} {r+b} = \ffrac{r} {r+b}
\end{align}$
$\P{\text b}$
$\bspace$ First we let $Y_i = 1$ if the $i\texttt{-th}$ draw is a red one and $0$ otherwise.
$\bspace \begin{align}
\EE{X_2} &= \EE{X_2 \mid Y_1 = 1} \cdot P\CB{Y_1 = 1} + \EE{X_2 \mid Y_1 = 0} \cdot P\CB{Y_1 = 0} \\
&= \P{2\times \ffrac{r+m} {r+b+m} \times \ffrac{r} {r+b} + \ffrac{b} {r+b + m} \cdot \ffrac{r} {r+b}} + \ffrac{r} {r+b + m} \cdot \ffrac{b} {r+b} = \ffrac{2r} {r+b}
\end{align}$
$\P{\text c}$
$\bspace \begin{align}
\EE{X_3} &= \EE{\EE{\EE{X_3 \mid Y_2}}\mid Y_1} \\
&= \EE{ \EE{X_3 \mid Y_2 = 1}\cdot P\CB{Y_2 = 1} + \EE{X_3 \mid Y_2 = 0}\cdot P\CB{Y_2 = 0}\mid Y_1} \\
&= \EE{X_3 \mid Y_2 = 1, Y_1 = 1} \cdot P\CB{Y_2 = 1 \mid Y_1 = 1} \cdot P\CB{Y_1 = 1} \\
&\bspace + \EE{X_3 \mid Y_2 = 1, Y_1 = 0} \cdot P\CB{Y_2 = 1 \mid Y_1 = 0} \cdot P\CB{Y_1 = 0} \\
&\bspace + \EE{X_3 \mid Y_2 = 0, Y_1 = 1} \cdot P\CB{Y_2 = 0 \mid Y_1 = 1} \cdot P\CB{Y_1 = 1} \\
&\bspace + \EE{X_3 \mid Y_2 = 0, Y_1 = 0} \cdot P\CB{Y_2 = 0 \mid Y_1 = 0} \cdot P\CB{Y_1 = 0} \\
&= \P{3 \times \ffrac{r+2m} {r+b+2m} + 2 \times \ffrac{b} {r+b+2m}}\cdot \ffrac{r+m} {r+b+m} \cdot \ffrac{r} {r+b} \\
&\bspace + \P{2\times\ffrac{r+m} {r+b+2m} + \ffrac{b+m} {r+b+2m}} \cdot \ffrac{r} {r+b+m} \cdot \ffrac{b} {r+b} \\
&\bspace + \P{2\times\ffrac{r+m} {r+b+2m} + \ffrac{b+m} {r+b+2m}} \cdot \ffrac{b} {r+b+m} \cdot \ffrac{r} {r+b} \\
&\bspace + \ffrac{r} {r+b+2m} \cdot \ffrac{b+m} {r+b+m} \cdot \ffrac{b} {r+b} \\
&= \ffrac{3r} {r+b}
\end{align}$
$\P{\text d}$
$\bspace$ Based on the preceding results, I conjecture that $\EE{X_k} = \ffrac{kr} {r+b}$ for $k = 1, 2, \dots$. To prove this we first let $r_i$ and $b_i$ denote the number of red balls and blue balls at the $i\texttt{-th}$ draw. Then $r_1 = r$ and $b_1 = b$; by induction, we let this equation hold for $k =1,2,\dots,n-1$ and condition $X_n$ on $Y_{n-1}$, finding that:
$\bspace \begin{align}
\EE{X_n} &= \EE{\EE{X_n \mid Y_{n-1}}} \\
&= \EE{X_n \mid Y_{n-1} = 1} \cdot P\CB{Y_{n-1} = 1} + \EE{X_n \mid Y_{n-1} = 0} \cdot P\CB{Y_{n-1} = 0} \\
&= \P{\P{2 + \ffrac{\P{k-2}r} {r+b}}\cdot\ffrac{r_{n-1} + m} {r_{n-1} + b_{n-1} + m} + \P{1+\ffrac{\P{k-2}r} {r+b}} \cdot \ffrac{b_{n-1}} {r_{n-1} + b_{n-1} + m}} \cdot \ffrac{r_{n-1}} {r_{n-1} + b_{n-1}} \\
&\bspace + \P{\P{1+\ffrac{\P{k-2}r} {r+b}} \cdot \ffrac{r_{n-1}} {r_{n-1} + b_{n-1} + m} + \ffrac{\P{k-2}r} {r+b} \cdot \ffrac{b_{n-1} + m} {r_{n-1} + b_{n-1} + m}} \cdot \ffrac{b_{n-1}} {r_{n-1} + b_{n-1}} \\
&= \ffrac{nr} {r+b}
\end{align}$
$\bspace$ solved...😜😜😜
$\P{\text e}$
$\bspace$ To be intuitive, honestly, my first impression on this problem is:
$$\sum_{i=0}^{n} \P{\binom{n} {i} \cdot i \cdot \ffrac{\d{\prod_{j=0}^{i-1}\SB{r+jm}}\d{\prod_{j=0}^{n-i-1} \SB{b+jm}}} {\d{\prod_{k=0}^{n-1}\SB{r+b+km}}}}$$
$\bspace$ That's not something that's gonna help, so here is another thought: breaking $X_n$ apart into the $Y_i$. Once $\EE{Y_2}$ is obtained, since $\EE{Y_2} = \EE{Y_1}$, an intuitive thought is that at this point the urn is in the same "**state**" as before. So $\EE{Y_{n-1}} = \EE{Y_n}$ will always hold and thus $\EE{X_n} = \sum_n \EE{Y_i} = n\ffrac{r} {r+b}$
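$\bspace$ A quick simulation (not part of the homework) supporting the conjecture $\EE{X_n} = nr/(r+b)$; the parameters $r=3$, $b=5$, $m=2$, $n=10$ are arbitrary.
```python
import numpy as np

rng = np.random.default_rng(3)

def red_draws(r, b, m, n_draws):
    red, blue, count = r, b, 0
    for _ in range(n_draws):
        if rng.random() < red / (red + blue):
            red += m      # red drawn: return it and add m red balls
            count += 1
        else:
            blue += m     # blue drawn: return it and add m blue balls
    return count

r, b, m, n_draws = 3, 5, 2, 10
est = np.mean([red_draws(r, b, m, n_draws) for _ in range(50_000)])
print(est, n_draws * r / (r + b))   # both close to 3.75
```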
| 2a9e71dc9cbf74e45e386d8876986a0f8a1ff29a | 10,934 | ipynb | Jupyter Notebook | Probability and Statistics/Applied Random Process/HW/HW_02.ipynb | XavierOwen/Notes | d262a9103b29ee043aa198b475654aabd7a2818d | [
"MIT"
]
| 2 | 2018-11-27T10:31:08.000Z | 2019-01-20T03:11:58.000Z | Probability and Statistics/Applied Random Process/HW/HW_02.ipynb | XavierOwen/Notes | d262a9103b29ee043aa198b475654aabd7a2818d | [
"MIT"
]
| null | null | null | Probability and Statistics/Applied Random Process/HW/HW_02.ipynb | XavierOwen/Notes | d262a9103b29ee043aa198b475654aabd7a2818d | [
"MIT"
]
| 1 | 2020-07-14T19:57:23.000Z | 2020-07-14T19:57:23.000Z | 45.181818 | 384 | 0.432321 | true | 3,901 | Qwen/Qwen-72B | 1. YES
2. YES | 0.731059 | 0.76908 | 0.562243 | __label__yue_Hant | 0.425233 | 0.144608 |
# Sympy
```python
from sympy import *
# init_printing()
x, y, z = symbols("x y z")
```
```python
simplify(sin(x) ** 2 + cos(x) ** 2)
```
$\displaystyle 1$
```python
expand((x + 1) ** 3)
```
$\displaystyle x^{3} + 3 x^{2} + 3 x + 1$
```python
a = 3
b = 8
c = 2
y = a * x ** 2 + b * x + c
plot(y)
```
```python
solveset(y)
```
$\displaystyle \left\{- \frac{4}{3} - \frac{\sqrt{10}}{3}, - \frac{4}{3} + \frac{\sqrt{10}}{3}\right\}$
```python
dy = y.diff(x)
plot(dy)
```
```python
iy = integrate(y, x)
plot(iy)
```
| b1fe1ede2a968d43399a6f4d8a8466b3b7a28ba7 | 163,372 | ipynb | Jupyter Notebook | docs/Library/ThirdParty/sympy.ipynb | yoannmos/PythonGuide | b7885f7da4193801e53edc441ecc4de9ee8ea6f7 | [
"MIT"
]
| 2 | 2021-09-22T02:29:09.000Z | 2021-09-27T09:44:51.000Z | docs/Library/ThirdParty/sympy.ipynb | yoannmos/PythonGuide | b7885f7da4193801e53edc441ecc4de9ee8ea6f7 | [
"MIT"
]
| null | null | null | docs/Library/ThirdParty/sympy.ipynb | yoannmos/PythonGuide | b7885f7da4193801e53edc441ecc4de9ee8ea6f7 | [
"MIT"
]
| null | null | null | 781.684211 | 41,542 | 0.729605 | true | 225 | Qwen/Qwen-72B | 1. YES
2. YES | 0.96378 | 0.859664 | 0.828527 | __label__yue_Hant | 0.252912 | 0.763278 |
```python
%matplotlib inline
import numpy as np
import scipy as sc
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import sympy as sp
import itertools
sns.set();
```
```python
def extract(it):
r"""
Extract the values from a iterable of iterables.
The function extracts the values from a iterable of iterables (eg. a list of tuples) to a list
of coordinates. For example,
[(1, 10), (2, 20), (3, 30), (4, 40)] -> [[1, 2, 3, 4], [10, 20, 30, 40]]
If `it` is a list of M tuples each one with N elements, then `extract` returns
a list of N lists each one with M elements.
Parameters
----------
it : iterable
An iterable of iterables.
Returns
------
A list with the lists of first-elements, second-elements and so on.
"""
return list(zip(*it))
```
# Runge-Kutta methods
The Runge-Kutta methods are in fact a family of methods designed to solve an ODE of the form:
$$y' = f(t, y)$$
with initial condition
$$y(t_{0}) = y_{0}$$
In other words, an initial condition problem.
## Two-stage Runge-Kutta methods
The so-called two-stage Runge-Kutta method has equations:
$$y_{k+1} = y_{k} + h\left(\left(1 - \frac{1}{2\lambda}\right)k_{1} + \frac{1}{2\lambda}k_{2}\right)$$
where
$$k_{1} = f(x_{k}, y_{k})$$
and
$$k_{2} = f(x_{k} + \lambda h, y_{k} + \lambda h k_{1})$$
The name "two-stage" comes from the fact that it is actually computed in two stages. First, we have to find $k_{1}$. Then we use that value to compute $k_{2}$.
Depending on the value given to $\lambda$, the method has different names. If $\lambda$ equals $1$, then it is called _improved Euler's method_. If $\lambda$ equals $2/3$, then it is called _Heun's method_.
### Improved Euler's method
By making $\lambda$ equals to $1$ in the general equation of the two-stage Runge-Kutta, we get the improved Euler's method. The equations are the following:
$$y_{k+1} = y_{k} + h\left(\frac{1}{2}k_{1} + \frac{1}{2}k_{2}\right)$$
with
$$k_{1} = f(x_{k}, y_{k})$$
and
$$k_{2} = f(x_{k} + h, y_{k} + hk_{1})$$
### Heun's method
As I said, the Heun's method comes from making $\lambda$ equals to $2/3$ in the general formula of the two-stage Runge-Kutta. The resulting equations are the following:
$$y_{k+1} = y_{k} + h\left(\frac{1}{4}k_{1} + \frac{3}{4}k_{2}\right)$$
where
$$k_{1} = f(x_{k}, y_{k})$$
and
$$k_{2} = f(x_{k} + \frac{2}{3}h, y_{k} + \frac{2}{3}hk_{1})$$
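As a minimal illustration (separate from the full implementation given later in this notebook), a single two-stage step can be written directly from the general formula; here it is applied to $y' = y$, $y(0) = 1$ with $h = 0.1$, for which the exact value is $e^{0.1} \approx 1.10517$.
```python
import numpy as np

def two_stage_step(f, x, y, h, lam):
    k1 = f(x, y)
    k2 = f(x + lam * h, y + lam * h * k1)
    return y + h * ((1 - 1 / (2 * lam)) * k1 + (1 / (2 * lam)) * k2)

f = lambda x, y: y
print(two_stage_step(f, 0.0, 1.0, 0.1, 1.0))      # improved Euler -> 1.105
print(two_stage_step(f, 0.0, 1.0, 0.1, 2 / 3))    # Heun           -> 1.105
print(np.exp(0.1))                                 # exact          -> 1.10517...
```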
## Four-stage Runge-Kutta methods
### Classical Runge-Kutta method (RK4)
The method often referred as _classical Runge-Kutta method_ or simply _RK4_ is the Runge-Kutta method of 4 stages given by equations below:
$$y_{k+1} = y_{k} + \frac{h}{6}(k_{1} + 2k_{2} + 2k_{3} + k_{4}), n = 0, 1, 2, \dots$$
with
$$k_{1} = f(x_{k}, y_{k})$$
$$k_{2} = f\left(x_{k} + \frac{h}{2}, y_{k} + \frac{h}{2}k_{1}\right)$$
$$k_{3} = f\left(x_{k} + \frac{h}{2}, y_{k} + \frac{h}{2}k_{2}\right)$$
$$k_{4} = f(x_{k} + h, y_{k} + h k_{3})$$
### Variant of the classical Runge-Kutta method
There is a variation of the classical Runge-Kutta method (RK4) method. It is given by the following equations:
$$y_{k+1} = y_{k} + \frac{h}{8}(k_{1} + 3k_{2} + 3k_{3} + k_{4}), n = 0, 1, 2, \dots$$
with
$$k_{1} = f(x_{k}, y_{k})$$
$$k_{2} = f\left(x_{k} + \frac{h}{3}, y_{k} + \frac{h}{3}k_{1}\right)$$
$$k_{3} = f\left(x_{k} + \frac{2h}{3}, y_{k} - \frac{h}{3}k_{1} + hk_{2}\right)$$
$$k_{4} = f(x_{k} + h, y_{k} + hk_{1} - hk_{2} + hk_{3})$$
## General form and more theory
In general, the whole family of Runge-Kutta methods can be written as
$$y_{k+1} = y_{k} + h \sum_{i=1}^{s}b_{i}k_{i}$$
where
$$k_{1} = f(x_{k}, y_{k})$$
$$k_{2} = f(x_{k} + c_{2}h, y_{k} + h(a_{21}k_{1}))$$
$$k_{3} = f(x_{k} + c_{3}h, y_{k} + h(a_{31}k_{1} + a_{32}k_{2}))$$
$$\vdots$$
$$k_{s} = f(x_{k} + c_{s}h, y_{k} + h(a_{s1}k_{1} + a_{s2}k_{2} + \cdots + a_{ss-1}k_{s-1}))$$
A Runge-Kutta method is specified by
$s \doteq$ the number of stages, for $s \geq 1$,
$b_{i} \doteq$ the weights, for $i \in \{1, 2, \cdots, s\}$,
$c_{i} \doteq$ the loadings, for $i \in \{2, 2, \cdots, s\}$,
$a_{ij} \doteq$ the coefficients of $k_{j}$ in equation of $k_{i}$, for $1 \leq j < i \leq s$.
### Butcher tableau
A compact way of summarising these parameters is the Butcher tableau. Its general form is shown below.
|$0$ | | | | | |
|:-------:|---------|---------|---------|-----------|--------|
|$c_{2}$ |$a_{21}$ | | | | |
|$c_{3}$ |$a_{31}$ |$a_{32}$ | | | |
|$\vdots$ |$\vdots$ |$\vdots$ |$\ddots$ | | |
|$c_{s}$ |$a_{s1}$ |$a_{s2}$ |$\cdots$ |$a_{ss-1}$ | |
| |$b_{1}$ |$b_{2}$ |$\cdots$ |$b_{s-1}$ |$b_{s}$ |
### Method's order
A method is said to have order $p$ if the local truncation error is $O(h^{p+1})$. The minimum number of stages, $s$, required for a method to be of order $p$ until order 8 is given by the following table
|$p$ |1 |2 |3 |4 |5 |6 |7 |8 |
|:------:|---|---|---|---|---|---|---|---|
|min $s$ |1 |2 |3 |4 |6 |7 |9 |11 |
### Consistency
The method is said to be consistent if
$$\sum_{j=1}^{i-1}a_{ij} = c_{i}, \text{for } i = 2, \cdots, s$$
In other words, it is consistent if the sum of $a_{ij}$ on the $i$-th row is equal to the respective $c_{i}$.
### Runge-Kutta matrix
The $s$-by-$s$ Runge-Kutta matrix $M_{\mathit{RK}}$ is the lower triangular matrix defined by the coefficients $a_{ij}$ as shown below
$$M_{\mathit{RK}} = \begin{bmatrix}
0 & 0 & \cdots & 0 & 0 \\[0.3em]
a_{21} & 0 & \cdots & 0 & 0 \\[0.3em]
a_{31} & a_{32} & \cdots & 0 & 0 \\[0.3em]
\vdots & \vdots & \ddots & \vdots & \vdots \\[0.3em]
a_{s1} & a_{s2} & \cdots & a_{ss-1} & 0
\end{bmatrix}$$
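To make the connection between the tableau and the update rule concrete, here is a sketch (not from the original notebook) of a generic explicit Runge-Kutta step driven entirely by $(A, b, c)$, using the classical RK4 coefficients listed in the next section; the consistency condition above is checked with an `assert`.
```python
import numpy as np

# classical RK4 tableau
A = np.array([[0,   0,   0, 0],
              [1/2, 0,   0, 0],
              [0,   1/2, 0, 0],
              [0,   0,   1, 0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = np.array([0, 1/2, 1/2, 1])

assert np.allclose(A.sum(axis=1), c)  # consistency: rows of A sum to c

def rk_step(f, x, y, h, A, b, c):
    """One explicit Runge-Kutta step for a scalar ODE y' = f(x, y)."""
    s = len(b)
    k = np.zeros(s)
    for i in range(s):
        k[i] = f(x + c[i] * h, y + h * np.dot(A[i, :i], k[:i]))
    return y + h * np.dot(b, k)

print(rk_step(lambda x, y: y, 0.0, 1.0, 0.1, A, b, c))  # ~ exp(0.1)
```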
## Butcher tableau examples
* Euler's method:
$$y_{n+1} = y_{n} + hf(x_{n}, y_{n})$$
|$0$ | |
|:--:|----|
| |$1$ |
* Improved Euler's method:
$$y_{k+1} = y_{k} + h\left(\frac{1}{2}k_{1} + \frac{1}{2}k_{2}\right)$$
$$k_{1} = f(x_{k}, y_{k})$$
$$k_{2} = f(x_{k} + h, y_{k} + hk_{1})$$
|$0$ | | |
|:--:|--------------|--------------|
|$1$ |$1$ | |
| |$\frac{1}{2}$ |$\frac{1}{2}$ |
* Heun's method:
$$y_{k+1} = y_{k} + h\left(\frac{1}{4}k_{1} + \frac{3}{4}k_{2}\right)$$
$$k_{1} = f(x_{k}, y_{k})$$
$$k_{2} = f(x_{k} + \frac{2}{3}h, y_{k} + \frac{2}{3}hk_{1})$$
|$0$ | | |
|:------------:|--------------|--------------|
|$\frac{2}{3}$ |$\frac{2}{3}$ | |
| |$\frac{1}{4}$ |$\frac{3}{4}$ |
* Classical Runge-Kutta (RK4):
$$y_{k+1} = y_{k} + \frac{h}{6}(k_{1} + 2k_{2} + 2k_{3} + k_{4}), n = 0, 1, 2, \dots$$
$$k_{1} = f(x_{k}, y_{k})$$
$$k_{2} = f\left(x_{k} + \frac{h}{2}, y_{k} + \frac{h}{2}k_{1}\right)$$
$$k_{3} = f\left(x_{k} + \frac{h}{2}, y_{k} + \frac{h}{2}k_{2}\right)$$
$$k_{4} = f(x_{k} + h, y_{k} + h k_{3})$$
|$0$ | | | | |
|:------------:|--------------|--------------|--------------|--------------|
|$\frac{1}{2}$ |$\frac{1}{2}$ | | | |
|$\frac{1}{2}$ |$0$ |$\frac{1}{2}$ | | |
|$1$ |$0$ |$0$ |$1$ | |
| |$\frac{1}{6}$ |$\frac{1}{3}$ |$\frac{1}{3}$ |$\frac{1}{6}$ |
* Variant of the classical Runge-Kutta:
$$y_{k+1} = y_{k} + \frac{h}{8}(k_{1} + 3k_{2} + 3k_{3} + k_{4}), n = 0, 1, 2, \dots$$
$$k_{1} = f(x_{k}, y_{k})$$
$$k_{2} = f\left(x_{k} + \frac{h}{3}, y_{k} + \frac{h}{3}k_{1}\right)$$
$$k_{3} = f\left(x_{k} + \frac{2h}{3}, y_{k} - \frac{h}{3}k_{1} + hk_{2}\right)$$
$$k_{4} = f(x_{k} + h, y_{k} + hk_{1} - hk_{2} + hk_{3})$$
|$0$ | | | | |
|:------------:|---------------|--------------|--------------|--------------|
|$\frac{1}{3}$ |$\frac{1}{3}$ | | | |
|$\frac{2}{3}$ |$\frac{-1}{3}$ |$1$ | | |
|$1$ |$1$ |$-1$ |$1$ | |
| |$\frac{1}{8}$ |$\frac{3}{8}$ |$\frac{3}{8}$ |$\frac{1}{8}$ |
## Code
### Two-stage Runge-Kutta methods implementation
```python
def rk2(x_0, y_0, f, step=0.001, k_max=None, method='improved_euler'):
r"""
Two-stage Runge-Kutta method for solving first-order ODE.
The function computes `k_max` iterations from the initial conditions `x_0` and `y_0` with
steps of size `step`. It yields a total of `k_max` + 1 values. Being h_{k} the step at x_{k},
the recorrent equation is:
y_{k+1} = y_{k} + h_{k} * (1-(1/(2*lambda)) k_{1} + (1/(2*lambda)) k_{2})
where
k_{1} = f(x_{k}, y_{k})
k_{2} = f(x_{k} + lambda * h_{k}, y_{k} + lambda * h_{k} * k_{1})
When `method` is 'improved_euler', `lambda` is set to 1.
When `method` is 'heun', `lambda` is set to 2/3.
Parameters
----------
x_0 : float
The initial value for the independent variable.
y_0 : array_like
1-D array of initial values for the dependent variable evaluated at `x_0`.
f : callable
The function that represents the first derivative of y with respect to x.
It must accept two arguments: the point x at which it will be evaluated and
the value of y at this point.
step : float, optional
The step size between each iteration.
k_max : number
The maximum number of iterations.
method : ["improved_euler", "heun"]
The specific two-stage method to use.
Yields
------
x_k : float
The point at which the function was evaluated in the last iteration.
y_k : float
The value of the function in the last iteration.
Raises
------
TypeError
If the method argument is invalid or not supported.
"""
if k_max is None: counter = itertools.count()
else: counter = range(k_max)
if method == 'improved_euler':
b1, b2 = 1/2.0, 1/2.0
c2 = 1
a21 = 1
elif method == 'heun':
b1, b2 = 1/4.0, 3/4.0
c2 = 2/3.0
a21 = 2/3.0
else:
raise TypeError("The method {} is not valid or supported.".format(method))
x_k = x_0
y_k = y_0
yield (x_k, y_k)
for k in counter:
k1 = f(x_k, y_k)
k2 = f(x_k + c2 * step, y_k + a21 * step * k1)
y_k = y_k + step * (b1 * k1 + b2 * k2)
x_k = x_k + step
yield (x_k, y_k)
```
### Four-stage Runge-Kutta methods implementation
```python
def rk4(x_0, y_0, f, step=0.001, k_max=None, method='classical'):
r"""
Four-stage Runge-Kutta methods for solving first-order ODE.
The function computes `k_max` iterations from the initial conditions `x_0` and `y_0` with
steps of size `step`. It yields a total of `k_max` + 1 values. We call h_{k} the step at x_{k}.
Classical Runge-Kutta method (RK4):
y_{k+1} = y_{k} + h/6 * (k_{1} + 2*k_{2} + 2*k_{3} + k_{4})
where
k_{1} = f(x_{k}, y_{k})
k_{2} = f(x_{k} + h_{k}/2, y_{k} + h_{k}/2 * k_{1})
k_{3} = f(x_{k} + h_{k}/2, y_{k} + h_{k}/2 * k_{2})
k_{4} = f(x_{k} + h_{k}, y_{k} + h_{k} * k_{3})
Variant of the classical Runge-Kutta method:
y_{k+1} = y_{k} + h/8 * (k_{1} + 3*k_{2} + 3*k_{3} + k_{4})
where
k_{1} = f(x_{k}, y_{k})
k_{2} = f(x_{k} + h_{k}/3, y_{k} + h_{k}/3 * k_{1})
k_{3} = f(x_{k} + 2*h_{k}/3, y_{k} - h_{k}/3 * k_{1} + h_{k} * k_{2})
k_{4} = f(x_{k} + h_{k}, y_{k} + h_{k} * k_{1} - h_{k} * k_{2} + h_{k} * k_{3})
Parameters
----------
x_0 : float
The initial value for the independent variable.
y_0 : array_like
1-D array of initial values for the dependent variable evaluated at `x_0`.
f : callable
The function that represents the first derivative of y with respect to x.
It must accept two arguments: the point x at which it will be evaluated and
the value of y at this point.
step : float, optional
The step size between each iteration.
k_max : number
The maximum number of iterations.
method : ["classical", "variant"]
The specific four-stage method to use.
Yields
------
x_k : float
The point at which the function was evaluated in the last iteration.
y_k : float
The value of the function in the last iteration.
Raises
------
TypeError
If the method argument is invalid or not supported.
"""
if k_max is None: counter = itertools.count()
else: counter = range(k_max)
if method == 'classical':
b1, b2, b3, b4 = 1/6.0, 1/3.0, 1/3.0, 1/6.0
c2, c3, c4 = 1/2.0, 1/2.0, 1
a21, a31, a32, a41, a42, a43 = 1/2.0, 0, 1/2.0, 0, 0, 1
elif method == 'variant':
b1, b2, b3, b4 = 1/8.0, 3/8.0, 3/8.0, 1/8.0
c2, c3, c4 = 1/3.0, 2/3.0, 1
a21, a31, a32, a41, a42, a43 = 1/3.0, -1/3.0, 1, 1, -1, 1
else:
raise TypeError("The method {} is not valid or supported.".format(method))
x_k = x_0
y_k = y_0
yield (x_k, y_k)
for k in counter:
k1 = f(x_k, y_k)
k2 = f(x_k + c2 * step, y_k + a21 * step * k1)
k3 = f(x_k + c3 * step, y_k + a31 * step * k1 + a32 * step * k2)
k4 = f(x_k + c4 * step, y_k + a41 * step * k1 + a42 * step * k2 + a43 * step * k3)
y_k = y_k + step * (b1 * k1 + b2 * k2 + b3 * k3 + b4 * k4)
x_k = x_k + step
yield (x_k, y_k)
```
## Examples
### Example 1: two-stage Runge-Kutta methods
Consider the following IVP:
$$y' = x^{2} + y^{2}$$
with
$$y(0) = 0$$
We will solve this IVP using the improved Euler's method and the Heun's method.
```python
def example1(x_k, y_k):
return x_k**2 + y_k**2
results = rk2(x_0=0.0, y_0=0.0, f=example1, step=0.1, k_max=10, method='improved_euler')
x, y_improved_euler = extract(results)
results = rk2(x_0=0.0, y_0=0.0, f=example1, step=0.1, k_max=10, method='heun')
x, y_heun = extract(results)
df1 = pd.DataFrame({"x": x, "y_improved_euler": y_improved_euler, "y_heun": y_heun})
df1
```
<div>
<style>
.dataframe thead tr:only-child th {
text-align: right;
}
.dataframe thead th {
text-align: left;
}
.dataframe tbody tr th {
vertical-align: top;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>x</th>
<th>y_heun</th>
<th>y_improved_euler</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.0</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>1</th>
<td>0.1</td>
<td>0.000333</td>
<td>0.000500</td>
</tr>
<tr>
<th>2</th>
<td>0.2</td>
<td>0.002667</td>
<td>0.003000</td>
</tr>
<tr>
<th>3</th>
<td>0.3</td>
<td>0.009002</td>
<td>0.009503</td>
</tr>
<tr>
<th>4</th>
<td>0.4</td>
<td>0.021355</td>
<td>0.022025</td>
</tr>
<tr>
<th>5</th>
<td>0.5</td>
<td>0.041776</td>
<td>0.042621</td>
</tr>
<tr>
<th>6</th>
<td>0.6</td>
<td>0.072411</td>
<td>0.073442</td>
</tr>
<tr>
<th>7</th>
<td>0.7</td>
<td>0.115577</td>
<td>0.116817</td>
</tr>
<tr>
<th>8</th>
<td>0.8</td>
<td>0.173913</td>
<td>0.175396</td>
</tr>
<tr>
<th>9</th>
<td>0.9</td>
<td>0.250586</td>
<td>0.252374</td>
</tr>
<tr>
<th>10</th>
<td>1.0</td>
<td>0.349640</td>
<td>0.351830</td>
</tr>
</tbody>
</table>
</div>
```python
fig, ax = plt.subplots(figsize=(13, 8))
plt.plot(df1['x'], df1['y_improved_euler'], label='Improved Euler approximation with step 0.1', color='blue')
plt.plot(df1['x'], df1['y_heun'], label='Heun approximation with step 0.1', color='red')
plt.legend(loc='upper left', fancybox=True, framealpha=1, shadow=True, borderpad=1, frameon=True)
ax.set(title="Two-stage Runge-Kutta methods", xlabel="x", ylabel="y");
```
As we can see from the figure above, the solutions are nearly identical (we almost cannot distinguish between them).
### Example 2: four-stage Runge-Kutta methods
Consider the same IVP of example 1:
$$y' = x^{2} + y^{2}$$
with
$$y(0) = 0$$
We will solve this IVP using both the classical Runge-Kutta method (RK4) and its variant.
```python
def example2(x_k, y_k):
return x_k**2 + y_k**2
results = rk4(x_0=0.0, y_0=0.0, f=example2, step=0.1, k_max=10, method='classical')
x, y_classical_rk4 = extract(results)
results = rk4(x_0=0.0, y_0=0.0, f=example2, step=0.1, k_max=10, method='variant')
x, y_variant_rk4 = extract(results)
df2 = pd.DataFrame({"x": x,
"y_classical_rk4": y_classical_rk4,
"y_variant_rk4": y_variant_rk4})
df2
```
<div>
<style>
.dataframe thead tr:only-child th {
text-align: right;
}
.dataframe thead th {
text-align: left;
}
.dataframe tbody tr th {
vertical-align: top;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>x</th>
<th>y_classical_rk4</th>
<th>y_variant_rk4</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.0</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>1</th>
<td>0.1</td>
<td>0.000333</td>
<td>0.000333</td>
</tr>
<tr>
<th>2</th>
<td>0.2</td>
<td>0.002667</td>
<td>0.002667</td>
</tr>
<tr>
<th>3</th>
<td>0.3</td>
<td>0.009003</td>
<td>0.009003</td>
</tr>
<tr>
<th>4</th>
<td>0.4</td>
<td>0.021359</td>
<td>0.021359</td>
</tr>
<tr>
<th>5</th>
<td>0.5</td>
<td>0.041791</td>
<td>0.041791</td>
</tr>
<tr>
<th>6</th>
<td>0.6</td>
<td>0.072448</td>
<td>0.072448</td>
</tr>
<tr>
<th>7</th>
<td>0.7</td>
<td>0.115660</td>
<td>0.115660</td>
</tr>
<tr>
<th>8</th>
<td>0.8</td>
<td>0.174081</td>
<td>0.174081</td>
</tr>
<tr>
<th>9</th>
<td>0.9</td>
<td>0.250908</td>
<td>0.250908</td>
</tr>
<tr>
<th>10</th>
<td>1.0</td>
<td>0.350234</td>
<td>0.350233</td>
</tr>
</tbody>
</table>
</div>
```python
fig, ax = plt.subplots(figsize=(13, 8))
plt.plot(df2['x'], df2['y_classical_rk4'],
label='Classical Runge-Kutta approximation with step 0.1', color='blue')
plt.plot(df2['x'], df2['y_variant_rk4'],
label='Variant of the classical Runge-Kutta approximation with step 0.1', color='red')
plt.legend(loc='upper left', fancybox=True, framealpha=1, shadow=True, borderpad=1, frameon=True)
ax.set(title="Four-stage Runge-Kutta methods", xlabel="x", ylabel="y");
```
As we can see from the figure above, the solutions are nearly identical (we almost cannot distinguish between them).
### Example 3
Consider the following IVP:
$$y' = tan(y) + 1$$
with
$$y(0) = 1$$
for $t \in [1, 1.1]$.
We will solve this IVP using Heun's method.
```python
def example3(x_k, y_k):
return np.tan(y_k) + 1
results = rk2(x_0=1.0, y_0=1.0, f=example3, step=0.025, k_max=4, method='heun')
x, y_heun = extract(results)
df3 = pd.DataFrame({"x": x, "y_heun": y_heun})
df3
```
<div>
<style>
.dataframe thead tr:only-child th {
text-align: right;
}
.dataframe thead th {
text-align: left;
}
.dataframe tbody tr th {
vertical-align: top;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>x</th>
<th>y_heun</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1.000</td>
<td>1.000000</td>
</tr>
<tr>
<th>1</th>
<td>1.025</td>
<td>1.066869</td>
</tr>
<tr>
<th>2</th>
<td>1.050</td>
<td>1.141332</td>
</tr>
<tr>
<th>3</th>
<td>1.075</td>
<td>1.227418</td>
</tr>
<tr>
<th>4</th>
<td>1.100</td>
<td>1.335079</td>
</tr>
</tbody>
</table>
</div>
### Example 4: Exercise 8 from section 8.6
The following example is actually the exercise 8 from section 8.6 of [Guidi].
Consider the following IVP:
$$y'' + (\exp(y') - 1) + y = -3\cos(t)$$
with
$$y(0) = y'(0) = 0$$
Find an approximation for the solution through the classical Runge-Kutta method for $t \in [0, 50]$ with $h = 0.01$. From the approximation obtained, find an estimate of the oscillation's amplitude for $t \in [43, 50]$ with 4 digits.
As this problem involves a second-order ODE, we must transform the variables so it becomes a system of first-order ODE:
$$u_{1} = y$$
$$u_{2} = y'$$
Since $u_{2}' = y''$, $y'' = f(t, y, y')$ becomes $u_{2}' = g(t, u_{1}, u_{2})$.
The resulting system of first-order ODE is
$$
\begin{cases}
u_{1}' = u_{2} \\
u_{2}' = -3\cos(t) - \exp(u_{2}) + 1 - u_{1}
\end{cases}
$$
```python
def example4(t_k, u_k):
return np.array([u_k[1], -3*np.cos(t_k) - np.exp(u_k[1]) + 1 - u_k[0]])
results = rk4(x_0=0.0, y_0=np.array([0.0, 0.0]), f=example4, step=0.01, k_max=5000, method='classical')
t, ys = extract(results)
y_classical, dy_classical = extract(ys)
df4 = pd.DataFrame({"t": t, "y_classical": y_classical, "dy_classical": dy_classical})
t_interval = (df4.t > 43) & (df4.t < 50)
df4_interval = df4.loc[t_interval, ["t", "y_classical"]]
max_y = df4_interval.loc[:, "y_classical"].max()
min_y = df4_interval.loc[:, "y_classical"].min()
print("The amplitude of oscilattion for t in [43, 50] is {0:.3f}.".format(max_y - min_y))
```
The amplitude of oscillation for t in [43, 50] is 4.457.
```python
fig, ax = plt.subplots(figsize=(13, 8))
plt.plot(df4['t'], df4['y_classical'],
label="Classical Runge-Kutta approximation with step 0.01", color='blue')
plt.plot(df4_interval['t'], df4_interval['y_classical'],
label="Interval of interest, $t \in [43, 50]$", color='red')
plt.legend(loc='upper right', fancybox=True, framealpha=1, shadow=True, borderpad=1, frameon=True)
ax.set(title=r"Solution of y'' + (exp(y') - 1) + y = -3cos(t)", xlabel="t", ylabel="y");
```
### Question 2
```python
def rk4_modified(x_0, y_0, f, step=0.001, k_max=None):
if k_max is None: counter = itertools.count()
else: counter = range(k_max)
b1, b2, b3, b4, b5 = 1/6.0, 0.0, 0.0, 2/3.0, 1/6.0
c2, c3, c4, c5 = 1/3.0, 1/3.0, 1/2.0, 1.0
a21, a31, a32, a41, a42, a43, a51, a52, a53, a54 = 1/3.0, 1/6.0, 1/6.0, 1/8.0, 0.0, 3/8.0, 1/2.0, 0.0, -3/2.0, 2.0
x_k = x_0
y_k = y_0
yield (x_k, y_k)
for k in counter:
k1 = f(x_k, y_k)
k2 = f(x_k + c2 * step, y_k + a21 * step * k1)
k3 = f(x_k + c3 * step, y_k + a31 * step * k1 + a32 * step * k2)
k4 = f(x_k + c4 * step, y_k + a41 * step * k1 + a42 * step * k2 + a43 * step * k3)
k5 = f(x_k + c5 * step, y_k + a51 * step * k1 + a52 * step * k2 + a53 * step * k3 + a54 * step * k4)
y_k = y_k + step * (b1 * k1 + b2 * k2 + b3 * k3 + b4 * k4 + b5 * k5)
x_k = x_k + step
yield (x_k, y_k)
```
```python
def question2(t, u_k):
return np.array([(4/5.0) * u_k[0] * u_k[1] - (1/4.0) * u_k[0], -(4/5.0) * u_k[0] * u_k[1]])
results = rk4_modified(x_0=0.0, y_0=np.array([0.005, 0.995]), f=question2, step=0.0125, k_max=800)
t, i_s = extract(results)
i, s = extract(i_s)
i, s = np.array(i), np.array(s)
df5 = pd.DataFrame({"t": t, "I": i, "S": s, "R": (1 - (i + s))})
df5 = df5[["t", "I", "S", "R"]]
print("Ratio I(10)/R(10) is {:.2f}.".format(df5["I"].iloc[-1]/df5["R"].iloc[-1]))
```
Ratio I(10)/R(10) is 1.16.
```python
fig, ax = plt.subplots(figsize=(13, 8))
plt.plot(df5['t'], df5['I'],
label="$I(t)$: infected", color='blue')
plt.plot(df5['t'], df5['S'],
label="$S(t)$: non-infected", color='green')
plt.plot(df5['t'], df5['R'],
label="$R(t)$: recovered", color='red')
plt.legend(loc='upper right', fancybox=True, framealpha=1, shadow=True, borderpad=1, frameon=True)
ax.set(title=r"Epidemic evolution: Kermack–McKendrick SIR model", xlabel="t", ylabel="y");
```
## References
* Guidi, L., Notas da disciplina Cálculo Numérico. Disponível em [Notas da disciplina Cálculo Numérico](http://www.mat.ufrgs.br/~guidi/grad/MAT01169/calculo_numerico.pdf).
* Heath, M. T., Scientific Computing: An Introductory Survey, 2nd Edition, McGraw Hill, 2002.
* Wikipedia, [Runge-Kutta methods](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods).
| f9a26b8b2d21caaa8a6e42959e2911e0826a0b3d | 232,668 | ipynb | Jupyter Notebook | notebooks/math/runge_kutta.ipynb | kmyokoyama/machine-learning | 05c41cfa1d2c070ce4f476a20f5ad0c5bd6a1fe7 | [
"MIT"
]
| null | null | null | notebooks/math/runge_kutta.ipynb | kmyokoyama/machine-learning | 05c41cfa1d2c070ce4f476a20f5ad0c5bd6a1fe7 | [
"MIT"
]
| null | null | null | notebooks/math/runge_kutta.ipynb | kmyokoyama/machine-learning | 05c41cfa1d2c070ce4f476a20f5ad0c5bd6a1fe7 | [
"MIT"
]
| null | null | null | 175.731118 | 69,070 | 0.860153 | true | 9,532 | Qwen/Qwen-72B | 1. YES
2. YES | 0.877477 | 0.897695 | 0.787707 | __label__eng_Latn | 0.711373 | 0.668439 |
```python
import grouptesting
from grouptesting.model import *
from grouptesting.algorithms import *
import autograd.numpy as np
from autograd import grad
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import math
from scipy.stats import bernoulli
from scipy.optimize import minimize, rosen, rosen_der
from sympy import *
def nCr(n,k):
return math.factorial(n) // math.factorial(k) // math.factorial(n - k)
np.random.seed(10)
```
## Plot ideas:
1. NCOMP vs. NDD for some q, or for three different qs (just to show empirical performance of NDD vs NCOMP) x
2. NCOMP with an achievability bound for some q x
4. NDD with achievability found for some q x
### 2. NCOMP with an achievability bound for some q
```python
# Define initial parameters
num_tests = 100
n = 1000 # population size
theta = 0.5
C = 2
n_theta = C * (n**theta)
k = round(n_theta) # number of infected
alpha = 0.5
p = alpha/k # Bernoulli test design probability parameter
eta = 0.001
q_results = []
############ RUN EXPERIMENT #####################
for q in [0.1]:
NCOMP_ber_acc = []
############## Achievable bound 1 for T ################
## Choose delta according to z
T_array = np.linspace(1, n, 25)
z = ((1- alpha/(k) * (1-q))**(k-1) + (1/q)*(1 - alpha/(k) * (1-q))**k) / 2
l = (1 - alpha / k * (1-q))**(k-1)
u = (1 - alpha / k * (1-q))**(k) * (1/q)
print("l-1, u-1:", l-1, u-1)
delta = z - 1
assert (l-1) < delta < (u-1)
########### Achievability Bound 2 Method 2 #######
eta_1, alpha_1, theta_1, q_1 = eta, alpha, theta, q
n_1 = n
n_theta_1 = n_theta
d_1 = round(n_theta_1)
eps = 1e-12
# Objective
Tminus = lambda delta_1: ((1 + eta_1) * theta_1 * (1/(q_1**2)) * (d_1) * (np.log(n_1))) / (alpha_1 * (1 - np.exp(-2)) * (eps + (1+delta_1) - (1- alpha_1/(d_1) * (1-q_1))**(d_1-1))**2)
Tplus = lambda delta_1: ((1 + eta_1) * (1/(q_1**2)) * (d_1) * (np.log(n_1))) / (alpha_1 * (1 - np.exp(-2)) * (eps + (1+delta_1) - (1/q_1) * (1- alpha_1/(d_1) * (1-q_1))**d_1)**2)
ff = lambda delta_1: max(Tminus(delta_1), Tplus(delta_1))
# Constraints
l = (1 - alpha_1 / d_1 * (1-q_1))**(d_1-1)
u = (1 - alpha_1 / d_1 * (1-q_1))**(d_1) * (1/q_1)
bound = (l-1, u-1)
x0 = delta
options={'disp': None, 'maxcor': 100, 'ftol': 1e-14, 'gtol': 1e-012, 'eps': 1e-08, 'maxfun': 15000, 'maxiter': 15000, 'iprint': 1, 'maxls': 20}
gff = grad(ff)
res = minimize(ff, x0, tol=1e-12, bounds=[bound], jac=gff)
achiev_bound = res.fun[0]
###### Main Experiment loop #####
for T in T_array:
NCOMP_ber_error = []
NDD_ber_error = []
COMP_ber_error = []
print("T: ", int(round(T)))
for test in range(num_tests):
sigma = D(n, k) # Generate the vector of defectives
X_ber = Ber(n, int(round(T)), p)
y_ber = Y(dilution_noise(X_ber, q), sigma)
# NCOMP - Bernoulli
sigma_hat_ber = NCOMP(X_ber, y_ber, q=q, delta=delta)
err = error(sigma, sigma_hat_ber)
NCOMP_ber_error.append(error(sigma, sigma_hat_ber))
acc = (num_tests - np.sum(np.array(NCOMP_ber_error))) / num_tests
NCOMP_ber_acc.append(acc)
################ Plot ####################
PAL= ['#f2f0f7','#cbc9e2','#9e9ac8','#756bb1','#54278f']
plt.plot(T_array, NCOMP_ber_acc,'o-', c=PAL[-2], label="NCOMP - q = " + str(round(q, 2)))
# use LaTeX fonts in the plot
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
matplotlib.rcParams['text.usetex'] = True
plt.axvline(x=float(achiev_bound), ymin=0, ymax=1, color='black', linestyle='dotted', label="Achievability Bound", linewidth=3)
plt.xticks(ticks=[10, round(float(achiev_bound)), round(n)])
plt.yticks(ticks=[0.0, 1.0])
plt.xlabel(r'Number of tests (T)', fontsize=11)
plt.ylabel(r'Success Probability', fontsize=11)
plt.legend()
plt.savefig('plots/NCOMP Report Gabriel 2' + str(round(C, 2)) + 'qs, q = ' + str(round(q, 2)) + ', delta = ' + str(round(delta, 2)) + ', n = ' + str(n) + ', theta = ' + str(round(theta, 2)) + '.png')
plt.savefig('plots/NCOMP Report Gabriel 2' + str(round(C, 2)) + 'qs, q = ' + str(round(q, 2)) + ', delta = ' + str(round(delta, 2)) + ', n = ' + str(n) + ', theta = ' + str(round(theta, 2)) + '.eps')
plt.savefig('plots/NCOMP Report Gabriel 2' + str(round(C, 2)) + 'qs, q = ' + str(round(q, 2)) + ', delta = ' + str(round(delta, 2)) + ', n = ' + str(n) + ', theta = ' + str(round(theta, 2)) + '.pdf')
plt.show()
```
```python
## Redo the plot because the latex font only comes in the second time you save it for some reason.
PAL= ['#f2f0f7','#cbc9e2','#9e9ac8','#756bb1','#54278f']
plt.plot(T_array, NCOMP_ber_acc,'o-', c=PAL[-2], label="NCOMP - q = " + str(round(q, 2)))
# use LaTeX fonts in the plot
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
matplotlib.rcParams['text.usetex'] = True
plt.axvline(x=float(achiev_bound), ymin=0, ymax=1, color='black', linestyle='dotted', label="Achievability Bound", linewidth=3)
plt.xticks(ticks=[10, round(float(achiev_bound)), round(n)])
plt.yticks(ticks=[0.0, 1.0])
plt.xlabel(r'Number of tests (T)', fontsize=11)
plt.ylabel(r'Success Probability', fontsize=11)
plt.legend()
plt.savefig('plots/NCOMP Report Gabriel 2' + str(round(C, 2)) + 'qs, q = ' + str(round(q, 2)) + ', delta = ' + str(round(delta, 2)) + ', n = ' + str(n) + ', theta = ' + str(round(theta, 2)) + '.png')
plt.savefig('plots/NCOMP Report Gabriel 2' + str(round(C, 2)) + 'qs, q = ' + str(round(q, 2)) + ', delta = ' + str(round(delta, 2)) + ', n = ' + str(n) + ', theta = ' + str(round(theta, 2)) + '.eps')
plt.savefig('plots/NCOMP Report Gabriel 2' + str(round(C, 2)) + 'qs, q = ' + str(round(q, 2)) + ', delta = ' + str(round(delta, 2)) + ', n = ' + str(n) + ', theta = ' + str(round(theta, 2)) + '.pdf')
plt.show()
```
### 4. NDD with Achievability bound for some q
```python
# Define initial parameters
num_tests = 100
n = 1000 # population size
theta = 0.15
C = 2
n_theta = C * (n**theta)
k = round(n_theta) # number of infected
eta = 0.001
q_results = []
############ RUN EXPERIMENT #####################
for q in [0.1]:
# alpha = np.log(2)/(1-q) # 0.77
alpha = 0.5 # 0.77
p = alpha/k # Bernoulli test design probability parameter
print("q: ", q)
NDD_ber_acc = []
NCOMP_ber_acc = []
T_array = np.linspace(1, n, 25)
pi_init = (q+0.07)
########### Achievability Bound 2 Method 2 #######
eta_1, alpha_1, theta_1, q_1 = eta, alpha, theta, q
n_1 = n
n_theta_1 = n_theta
d_1 = round(n_theta_1)
eps = 1e-12
# Objective
D_ = lambda _eps: (_eps)*np.log(_eps) - _eps + 1
T1 = lambda pi: (d_1 * np.log(d_1) * np.exp(alpha_1 * (1-q_1)) / ((alpha_1 * q_1)*D_(pi / (q_1 * np.exp(-alpha_1 * (1-q_1))))))
T2 = lambda pi: ((1 - (theta_1 - eps)) * d_1 * np.log(n) * np.exp(alpha_1*(1-q_1)) / (alpha_1 * D_(pi * np.exp(alpha_1 * (1-q_1)))))
T3 = lambda pi: (np.exp(alpha_1) * d_1 * np.log(d_1) / (alpha_1 * (1-q_1)))
ff = lambda pi: max(T1(pi), T2(pi), T3(pi))
bound = (q, np.exp(-alpha * (1-q))-0.01)
x0 = pi_init
gff = grad(ff)
res = minimize(ff, x0, tol=1e-12, bounds=[bound], jac=gff)
achiev_bound = res.fun[0]
pi_NDD = res.x[0]
assert q <= pi_NDD <= 1
###### Main Experiment loop #####
for T in T_array:
NDD_ber_error = []
DD_ber_error = []
NCOMP_ber_error = []
print("T: ", int(round(T)))
for test in range(num_tests):
sigma = D(n, k) # Generate the vector of defectives
X_ber = Ber(n, int(round(T)), p)
y_ber = Y(dilution_noise(X_ber, q), sigma)
# NCOMP - Bernoulli
# Choose delta according to z
z = ((1- alpha/(k) * (1-q))**(k-1) + (1/q)*(1 - alpha/(k) * (1-q))**k) / 2
l = (1 - alpha / k * (1-q))**(k-1)
u = (1 - alpha / k * (1-q))**(k) * (1/q)
delta_NCOMP = z - 1
assert (l-1) < delta_NCOMP < (u-1)
sigma_hat_ber = NCOMP(X_ber, y_ber, q=q, delta=delta_NCOMP)
err = error(sigma, sigma_hat_ber)
NCOMP_ber_error.append(error(sigma, sigma_hat_ber))
# NDD - Bernoulli
sigma_hat_ber = NDD(X_ber, y_ber, pi=(pi_NDD), alpha=alpha, T = T, d = k)
err = error(sigma, sigma_hat_ber)
NDD_ber_error.append(error(sigma, sigma_hat_ber))
acc = (num_tests - np.sum(np.array(NDD_ber_error))) / num_tests
NDD_ber_acc.append(acc)
acc = (num_tests - np.sum(np.array(NCOMP_ber_error))) / num_tests
NCOMP_ber_acc.append(acc)
################ Plot ####################
PAL = ['#edf8e9','#bae4b3','#74c476','#31a354','#006d2c']
plt.plot(T_array, NDD_ber_acc,'o-', c=PAL[-2], label="NDD - q = " + str(round(q, 2)))
# use LaTeX fonts in the plot
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
matplotlib.rcParams['text.usetex'] = True
plt.axvline(x=ff(pi_NDD), ymin=0, ymax=1, color='black', linestyle='dotted', label="Achievability Bound", linewidth=3)
plt.xticks(ticks=[round(10.0), round(ff(pi_NDD)), round(n)])
plt.yticks(ticks=[0.0, 1.0])
plt.xlabel(r'Number of tests (T)', fontsize=11)
plt.ylabel(r'Success Probability', fontsize=11)
plt.legend()
plt.savefig('plots/NDD Report Gabriel 2' + str(round(C, 2)) + 'qs, q = ' + str(round(q, 2)) + ', pi = ' + str(round(pi_NDD, 2)) + ', n = ' + str(n) + ', theta = ' + str(round(theta, 2)) + '.png')
plt.savefig('plots/NDD Report Gabriel 2' + str(round(C, 2)) + 'qs, q = ' + str(round(q, 2)) + ', pi = ' + str(round(pi_NDD, 2)) + ', n = ' + str(n) + ', theta = ' + str(round(theta, 2)) + '.eps')
plt.savefig('plots/NDD Report Gabriel 2' + str(round(C, 2)) + 'qs, q = ' + str(round(q, 2)) + ', pi = ' + str(round(pi_NDD, 2)) + ', n = ' + str(n) + ', theta = ' + str(round(theta, 2)) + '.pdf')
plt.show()
```
### 1. NDD vs NCOMP for some/different qs
```python
# Define initial parameters
num_tests = 100
n = 1000 # population size
theta = 0.15
C = 2
n_theta = C * (n**theta)
k = round(n_theta) # number of infected
eta = 0.001
q_results = []
############ RUN EXPERIMENT #####################
for q in [0.00001, 0.1, 0.3, 0.5]:
alpha = 0.5
p = alpha/k # Bernoulli test design probability parameter
print("q: ", q)
NDD_ber_acc = []
DD_ber_acc = []
NCOMP_ber_acc = []
T_array = np.linspace(1, n, 25)
pi_init = (q+0.07)
########### Achievability Bound 2 Method 2 #######
eta_1, alpha_1, theta_1, q_1 = eta, alpha, theta, q
n_1 = n
n_theta_1 = n_theta
d_1 = round(n_theta_1)
eps = 1e-12
# Objective
D_ = lambda _eps: (_eps)*np.log(_eps) - _eps + 1
T1 = lambda pi: (d_1 * np.log(d_1) * np.exp(alpha_1 * (1-q_1)) / ((alpha_1 * q_1)*D_(pi / (q_1 * np.exp(-alpha_1 * (1-q_1))))))
T2 = lambda pi: ((1 - (theta_1 - eps)) * d_1 * np.log(n) * np.exp(alpha_1*(1-q_1)) / (alpha_1 * D_(pi * np.exp(alpha_1 * (1-q_1)))))
T3 = lambda pi: (np.exp(alpha_1) * d_1 * np.log(d_1) / (alpha_1 * (1-q_1)))
ff = lambda pi: max(T1(pi), T2(pi), T3(pi))
bound = (q, np.exp(-alpha * (1-q))-0.01)
x0 = pi_init
gff = grad(ff)
res = minimize(ff, x0, tol=1e-6, bounds=[bound], jac=gff)
achiev_bound = res.fun[0]
pi_NDD = res.x[0]
assert q <= pi_NDD <= 1
###### Main Experiment loop #####
for T in T_array:
NDD_ber_error = []
DD_ber_error = []
NCOMP_ber_error = []
print("T: ", int(round(T)))
for test in range(num_tests):
sigma = D(n, k) # Generate the vector of defectives
X_ber = Ber(n, int(round(T)), p)
y_ber = Y(dilution_noise(X_ber, q), sigma)
# NCOMP - Bernoulli
# Choose delta according to z
z = ((1- alpha/(k) * (1-q))**(k-1) + (1/q)*(1 - alpha/(k) * (1-q))**k) / 2
l = (1 - alpha / k * (1-q))**(k-1)
u = (1 - alpha / k * (1-q))**(k) * (1/q)
delta_NCOMP = z - 1
assert (l-1) < delta_NCOMP < (u-1)
sigma_hat_ber = NCOMP(X_ber, y_ber, q=q, delta=delta_NCOMP)
err = error(sigma, sigma_hat_ber)
NCOMP_ber_error.append(error(sigma, sigma_hat_ber))
# NDD - Bernoulli
sigma_hat_ber = NDD(X_ber, y_ber, pi=(pi_NDD), alpha=alpha, T = T, d = k)
err = error(sigma, sigma_hat_ber)
NDD_ber_error.append(error(sigma, sigma_hat_ber))
acc = (num_tests - np.sum(np.array(NDD_ber_error))) / num_tests
NDD_ber_acc.append(acc)
acc = (num_tests - np.sum(np.array(NCOMP_ber_error))) / num_tests
NCOMP_ber_acc.append(acc)
q_results.append([q, T_array, NCOMP_ber_acc, NDD_ber_acc])
```
q: 1e-05
T: 1
T: 43
T: 84
T: 126
T: 168
T: 209
T: 251
T: 292
T: 334
T: 376
T: 417
T: 459
T: 500
T: 542
T: 584
T: 625
T: 667
T: 709
T: 750
T: 792
T: 834
T: 875
T: 917
T: 958
T: 1000
q: 0.1
T: 1
T: 43
T: 84
T: 126
T: 168
T: 209
T: 251
T: 292
T: 334
T: 376
T: 417
T: 459
T: 500
T: 542
T: 584
T: 625
T: 667
T: 709
T: 750
T: 792
T: 834
T: 875
T: 917
T: 958
T: 1000
q: 0.3
T: 1
T: 43
T: 84
T: 126
T: 168
T: 209
T: 251
T: 292
T: 334
T: 376
T: 417
T: 459
T: 500
T: 542
T: 584
T: 625
T: 667
T: 709
T: 750
T: 792
T: 834
T: 875
T: 917
T: 958
T: 1000
q: 0.5
T: 1
T: 43
T: 84
T: 126
T: 168
T: 209
T: 251
T: 292
T: 334
T: 376
T: 417
T: 459
T: 500
T: 542
T: 584
T: 625
T: 667
T: 709
T: 750
T: 792
T: 834
T: 875
T: 917
T: 958
T: 1000
```python
PAL1 = ['#edf8e9','#bae4b3','#74c476','#31a354','#006d2c']
PAL2 = ['#f2f0f7','#cbc9e2','#9e9ac8','#756bb1','#54278f']
for i in range(len(q_results)):
print(i)
q, T_array, NCOMP_ber_acc, NDD_ber_acc = q_results[i]
if (i==0):
plt.plot(T_array, NDD_ber_acc,'o-', c=PAL1[-1-i], label="NDD - q = 0.0 to 0.5", linewidth=2)
plt.plot(T_array, NCOMP_ber_acc,'--', c=PAL2[-1-i], label="NCOMP - q = 0.0 to 0.5", linewidth=2)
else:
plt.plot(T_array, NDD_ber_acc,'o-', c=PAL1[-1-i], linewidth=2)
plt.plot(T_array, NCOMP_ber_acc,'--', c=PAL2[-1-i], linewidth=2)
# use LaTeX fonts in the plot
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
matplotlib.rcParams['text.usetex'] = True
plt.yticks(ticks=[0.0, 1.0])
plt.xlabel(r'Number of tests (T)', fontsize=11)
plt.ylabel(r'Success Probability', fontsize=11)
plt.legend()
plt.savefig('plots/NCOMP vs NDD Report Gabriel 2' + str(round(C, 2)) + 'qs, q = ' + str(round(q, 2)) + ', delta = ' + str(round(delta, 2)) + ', n = ' + str(n) + ', theta = ' + str(round(theta, 2)) + '.png')
plt.savefig('plots/NCOMP vs NDD Report Gabriel 2' + str(round(C, 2)) + 'qs, q = ' + str(round(q, 2)) + ', delta = ' + str(round(delta, 2)) + ', n = ' + str(n) + ', theta = ' + str(round(theta, 2)) + '.pdf')
plt.show()
```
| f971dd125cd5616aaf94fa40093a89e76d3edcf6 | 113,746 | ipynb | Jupyter Notebook | notebooks/.ipynb_checkpoints/Semester Project Plots Gabriel-checkpoint.ipynb | gabrielarpino/grouptesting | 32e48a019ccf099b8672a3e0daba3313dd3b6512 | [
"MIT"
]
| null | null | null | notebooks/.ipynb_checkpoints/Semester Project Plots Gabriel-checkpoint.ipynb | gabrielarpino/grouptesting | 32e48a019ccf099b8672a3e0daba3313dd3b6512 | [
"MIT"
]
| null | null | null | notebooks/.ipynb_checkpoints/Semester Project Plots Gabriel-checkpoint.ipynb | gabrielarpino/grouptesting | 32e48a019ccf099b8672a3e0daba3313dd3b6512 | [
"MIT"
]
| 1 | 2020-10-15T14:11:46.000Z | 2020-10-15T14:11:46.000Z | 164.610709 | 43,728 | 0.858351 | true | 5,648 | Qwen/Qwen-72B | 1. YES
2. YES | 0.803174 | 0.752013 | 0.603997 | __label__eng_Latn | 0.187922 | 0.241617 |
Copyright 2019 Carsten Blank
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```python
%load_ext autoreload
```
```python
%autoreload 2
%aimport lib_experimental_utils
%aimport lib_experiment_setups
```
```python
import numpy as np
import lib_experimental_utils as lib
from lib_experimental_utils import FinishedExperiment, save
import lib_experiment_setups as expset
```
```python
readout_swap = {}
id1 = expset.create_regular_experiment_and_then_simulation(backend_enum=expset.BackendEnum.IBMQ_OURENSE,
instead_general_weights_use_hadamard=False,
use_barriers=False, readout_swap=readout_swap,
no_experiment=True, dont_use_dask=False)
```
```python
expset.get_ids()
```
```python
loaded_data = expset.load_by_index(0, 'exp_sim_regular_')
loaded_data
```
```python
experiment: FinishedExperiment = loaded_data[0]
simulation: FinishedExperiment = loaded_data[1]
experiment.backend_name
```
```python
w_1 = 0.5
w_2 = 1 - w_1
theta = np.asarray(experiment.theta)
theory_classification = w_1 * np.sin(theta/2 + np.pi/4)**2 - w_2 * np.cos(theta/2 + np.pi/4)**2
experiment.show_plot(compare_classification=theory_classification, classification_label='experiment', compare_classification_label='theory')
simulation.show_plot(compare_classification=theory_classification, classification_label='simulation (noise)', compare_classification_label='theory')
experiment.show_plot(compare_classification=simulation.get_classification(), classification_label='experiment', compare_classification_label='simulation')
```
```python
from scipy.optimize import minimize
def theory_expectation(w_1, w_2):
def inner(x):
return w_1 * np.sin(x/2 + np.pi/4)**2 - w_2 * np.cos(x/2 + np.pi/4)**2
return inner
def mse(classification, theta):
classification = np.asarray(classification)
def inner(x):
a, vartheta, w_1 = x
reference = np.asarray([
a*theory_expectation(w_1=w_1, w_2=1 - w_1)(t - vartheta) for t in theta
])
return np.sqrt(sum(np.power(classification - reference, 2)))
return inner
fun = mse(experiment.get_classification(), theta)
x_0 = [1.0, 0, 0]
result = minimize(fun, x_0)
from sympy import nsimplify
[a, vartheta, w_1] = result.x
"amplitude dampening: {:.4}, shift: {} pi, approx. w_1: {:.4}".format(
a,
nsimplify(vartheta/np.pi, tolerance=0.1),
w_1)
```
```python
lib.save(directory="../experiment_results", experiment=experiment, simulation=simulation)
```
```python
#simulation.parameters['device_properties']
```
| 878f2c0d99ec5d49d8750d23eed0466b2f1490af | 6,399 | ipynb | Jupyter Notebook | notebooks/experiments_paper.ipynb | carstenblank/Quantum-classifier-with-tailored-quantum-kernels---Supplemental | 7c3188f0b71e825bc8ce2b1577a93d10b34abdbc | [
"Apache-2.0"
]
| 11 | 2020-02-18T14:14:40.000Z | 2021-10-10T12:19:23.000Z | notebooks/experiments_paper.ipynb | carstenblank/Quantum-classifier-with-tailored-quantum-kernels---Supplemental | 7c3188f0b71e825bc8ce2b1577a93d10b34abdbc | [
"Apache-2.0"
]
| null | null | null | notebooks/experiments_paper.ipynb | carstenblank/Quantum-classifier-with-tailored-quantum-kernels---Supplemental | 7c3188f0b71e825bc8ce2b1577a93d10b34abdbc | [
"Apache-2.0"
]
| 2 | 2020-07-08T23:17:01.000Z | 2021-09-27T03:13:32.000Z | 25.094118 | 160 | 0.540553 | true | 759 | Qwen/Qwen-72B | 1. YES
2. YES | 0.841826 | 0.76908 | 0.647431 | __label__eng_Latn | 0.594273 | 0.342531 |
## EEML2019: ConvNets and Computer Vision Tutorial (PART II)
### Knowledge distillation: Distilling a pre-trained teacher model into a smaller student model
* Define student model (custom Resnet-21)
* Load pre-trained teacher model (Resnet-50)
* Add KL distillation loss between teacher and student
* Observe the impact of the softmax temperature on the teacher predictions
* Train student with the joint loss
### The total loss for the student is:
\begin{equation}
\mathcal{L} = \mathcal{L}_{\text{classif}} + \lambda \mathcal{L}_{\text{distill}}
\end{equation}
For classification loss we use the regular cross-entropy and for the distillation loss, we use Kullback-Leibler (KL) divergence. $\lambda$ is a normalisation factor explained below.
**Reminder**:
Given two distributions $t$ and $s$, we define their cross-entropy over a given set as:
$$H(t,s) = H(t) + \text{KL}(t,s),$$
where $H(t)$ is the entropy of $t$, i.e. $H(t) = \sum_{i=1}^{N}t(x_i) \cdot \log t(x_i)$
and $\text{KL}(t,s)$ is the KL divergence between $t$ and $s$, i.e. $\text{KL}(t,s) = \sum_{i=1}^{N}t(x_i) \cdot \log \frac{t(x_i)}{s(x_i)} . $
However, in most cases of interest to us, $t$ is a constant (either ground truth labels or teacher predictions also considered as constant), so the entropy term can be ignored since its gradient is 0.
Hence we can use cross-entropy $H(t,s)$ for both losses:
- the mismatch between ground truth and student predictions.
- the mismatch between teacher and student distributions.
In the context of distillation, it is useful to also remember that the outputs of the network are logits, which we interpret as probabilities when passed through softmax:
$$p_i^{(T)} =\frac{\exp{(\text{logits}_i / T) }}{\sum_j \exp{(\text{logits}_j / T) }}. $$
$T$ is the softmax temperature, usually set to 1. Setting it to a higher value smooths the output probability distribution, an effect that is desirable in distillation. More precisely, we will use
\begin{equation}
\mathcal{L}_{\text{distill}} = H(\text{p}_{\text{teacher}}^{(T)}, \text{p}_{\text{student}}^{(T)}),
\end{equation}
**The normalisation factor**
$\lambda$ is a normalisation factor that ensures the gradients of the two loss terms are comparable in scale. Note that the gradients of the distill loss term scale as $\frac{1}{T^2}$ due to the logits being divided by $T$. Hence we use $$\lambda = T^2$$ to bring distillation term gradients to the same scale as the classification term gradients.
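Before wiring this into the TensorFlow graph, the effect of the temperature can be checked with a minimal, self-contained NumPy sketch (this cell is ours, not part of the original tutorial; the logits values are made up for illustration):
```
import numpy as np

def softmax_with_temperature(logits, T=1.0):
    # Dividing the logits by T before the softmax flattens the distribution for T > 1.
    z = logits / T
    z = z - z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

example_logits = np.array([4.0, 1.0, 0.5, 0.1])  # made-up teacher logits
print(softmax_with_temperature(example_logits, T=1.0))  # peaked distribution
print(softmax_with_temperature(example_logits, T=5.0))  # softened soft targets
# The gradient of the distillation term w.r.t. the logits carries a 1/T**2 factor,
# which is why the distillation loss is weighted by lambda = T**2 below.
```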
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
import time
import tensorflow as tf
# Don't forget to select GPU runtime environment in Runtime -> Change runtime type
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
import numpy as np
# Plotting library.
from matplotlib import pyplot as plt
import pylab as pl
from IPython import display
import collections
import enum
```
Found GPU at: /device:GPU:0
```
# Reset graph
tf.reset_default_graph()
```
### Copy the pretrained weights of teacher model on the virtual machine
- we won't do this today, as the checkpoint is about 250 MB
```
# from google.colab import files
# uploaded = files.upload()
# for fn in uploaded.keys():
# print('User uploaded file "{name}" with length {length} bytes'.format(
# name=fn, length=len(uploaded[fn])))
```
## Download dataset to be used for training and testing
* CIFAR-10, the equivalent of MNIST for natural RGB images
* 60000 32x32 colour images in 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck
* train: 50000; test: 10000
```
cifar10 = tf.keras.datasets.cifar10
# (down)load dataset
(train_images, train_labels), (test_images, test_labels) = cifar10.load_data()
# Check sizes of tensors
print ('Size of training images')
print (train_images.shape)
print ('Size of training labels')
print (train_labels.shape)
print ('Size of test images')
print (test_images.shape)
print ('Size of test labels')
print (test_labels.shape)
assert train_images.shape[0] == train_labels.shape[0]
```
## Display the images
The gallery function below shows sample images from the data, together with their labels.
```
MAX_IMAGES = 10
def gallery(images, label, title='Input images'):
class_dict = [u'airplane', u'automobile', u'bird', u'cat', u'deer', u'dog', u'frog', u'horse', u'ship', u'truck']
num_frames, h, w, num_channels = images.shape
num_frames = min(num_frames, MAX_IMAGES)
ff, axes = plt.subplots(1, num_frames,
figsize=(num_frames, 1),
subplot_kw={'xticks': [], 'yticks': []})
for i in range(0, num_frames):
if num_channels == 3:
axes[i].imshow(np.squeeze(images[i]))
else:
axes[i].imshow(np.squeeze(images[i]), cmap='gray')
axes[i].set_title(class_dict[label[i][0]])
plt.setp(axes[i].get_xticklabels(), visible=False)
plt.setp(axes[i].get_yticklabels(), visible=False)
ff.subplots_adjust(wspace=0.1)
plt.show()
```
```
gallery(train_images, train_labels)
```
## Prepare the data for training and testing
* for training, we use stochastic optimizers (e.g. SGD, Adam), so we need to sample at random mini-batches from the training dataset
* for testing, we iterate sequentially through the test set
```
# define dimension of the batches to sample from the datasets
BATCH_SIZE_TRAIN = 100 #@param
BATCH_SIZE_TEST = 100 #@param
# create Dataset objects using the data previously downloaded
dataset_train = tf.data.Dataset.from_tensor_slices((train_images, train_labels))
# we shuffle the data and sample repeatedly batches for training
batched_dataset_train = dataset_train.shuffle(100000).repeat().batch(BATCH_SIZE_TRAIN)
# create iterator to retrieve batches
iterator_train = batched_dataset_train.make_one_shot_iterator()
# get a training batch of images and labels
(batch_train_images, batch_train_labels) = iterator_train.get_next()
# check that the shape of the training batches is the expected one
print ('Shape of training images')
print (batch_train_images)
print ('Shape of training labels')
print (batch_train_labels)
```
```
# we do the same for test dataset
dataset_test = tf.data.Dataset.from_tensor_slices((test_images, test_labels))
batched_dataset_test = dataset_test.repeat().batch(BATCH_SIZE_TEST)
iterator_test = batched_dataset_test.make_one_shot_iterator()
(batch_test_images, batch_test_labels) = iterator_test.get_next()
print ('Shape of test images')
print (batch_test_images)
print ('Shape of test labels')
print (batch_test_labels)
```
```
# Squeeze labels and convert from uint8 to int32 - required below by the loss op
batch_test_labels = tf.cast(tf.squeeze(batch_test_labels), tf.int32)
batch_train_labels = tf.cast(tf.squeeze(batch_train_labels), tf.int32)
```
## Preprocess input for training and testing
```
# Data augmentation
# - scale image to [-1 , 1]
# - during training: apply horizontal flip randomly
# - random crop after padding
def train_image_preprocess(h, w, num_transf=None):
def fn(image):
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
image = image * 2 - 1
# image = tf.reshape(image, (32, 32, 3))
image = tf.image.random_flip_left_right(image)
# Data augmentation: pad images and randomly sample a (h, w) patch.
image = tf.pad(image, [[0, 0], [4, 4], [4, 4], [0, 0]], mode='CONSTANT')
image = tf.random_crop(image, size=(BATCH_SIZE_TRAIN, h, w, 3))
return image
return fn
def test_image_preprocess():
def fn(image):
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
image = image * 2 - 1
# image = tf.reshape(image, (32, 32, 3))
return image
return fn
```
## Define the models
```
# define parameters of resnet blocks for two resnet models
ResNetBlockParams = collections.namedtuple(
"ResNetBlockParams", ["output_channels", "bottleneck_channels", "stride"])
# teacher
BLOCKS_50 = (
(ResNetBlockParams(256, 64, 1),) * 2 + (ResNetBlockParams(256, 64, 2),),
(ResNetBlockParams(512, 128, 1),) * 3 + (ResNetBlockParams(512, 128, 2),),
(ResNetBlockParams(1024, 256, 1),) * 5 + (ResNetBlockParams(1024, 256, 2),),
(ResNetBlockParams(2048, 512, 1),) * 3)
# student
BLOCKS_21 = (
(ResNetBlockParams(256, 64, 1),) + (ResNetBlockParams(256, 64, 2),),
(ResNetBlockParams(512, 128, 1),) + (ResNetBlockParams(512, 128, 2),),
(ResNetBlockParams(1024, 256, 1),) + (ResNetBlockParams(1024, 256, 2),),
(ResNetBlockParams(2048, 512, 1),))
```
```
#@title Utils
# initializer
he_initializer = tf.contrib.layers.variance_scaling_initializer()
# helper functions
def _fixed_padding(inputs, kernel_size):
"""Pads the input along the spatial dimensions."""
pad_total = kernel_size - 1
pad_begin = pad_total // 2
pad_end = pad_total - pad_begin
padded_inputs = tf.pad(inputs, [[0, 0], [pad_begin, pad_end],
[pad_begin, pad_end], [0, 0]])
return padded_inputs
def _max_pool2d_same(inputs, kernel_size, stride, padding):
"""Strided 2-D max-pooling with fixed padding.
When padding='SAME' and stride > 1, we do fixed zero-padding followed by
max_pool2d with 'VALID' padding."""
if padding == "SAME" and stride > 1:
padding = "VALID"
inputs = _fixed_padding(inputs, kernel_size)
return tf.layers.MaxPooling2D(kernel_size, strides=stride, padding=padding)(inputs)
def _conv2d_same(inputs, num_outputs, kernel_size, stride, use_bias=False,
name="conv_2d_same"):
"""Strided 2-D convolution with 'SAME' padding. If stride > 1, we do fixed
zero-padding, followed by conv2d with 'VALID' padding."""
if stride == 1:
padding = "SAME"
else:
padding = "VALID"
inputs = _fixed_padding(inputs, kernel_size)
return tf.layers.Conv2D(num_outputs, kernel_size, strides=stride,
padding=padding, use_bias=use_bias, name=name,
kernel_initializer=he_initializer)(inputs)
```
```
# define resnet block v2
def resnet_block(inputs, output_channels, bottleneck_channels, stride,
training=None, name="resnet_block"):
"""Create a resnet block."""
num_input_channels = inputs.get_shape()[-1]
batch_norm_args = {
"training": training
}
# ResNet V2 uses pre-activation, where the batch norm and relu are before
# convolutions, rather than after as in ResNet V1.
preact = tf.layers.BatchNormalization(name=name+"/bn_preact")(inputs,
**batch_norm_args)
preact = tf.nn.relu(preact)
if output_channels == num_input_channels:
# Use subsampling to match output size.
# Note we always use `inputs` in this case, not `preact`.
if stride == 1:
shortcut = inputs
else:
shortcut = _max_pool2d_same(inputs, 1, stride=stride, padding="SAME")
else:
# Use 1x1 convolution shortcut to increase channels to `output_channels`.
shortcut = tf.layers.Conv2D(output_channels, 1, stride,
use_bias=False,
name=name+"/conv_shortcut")(preact)
###########################
# YOUR CODE HERE copy the code you implemented in Part 1
output = shortcut + residual
return output
```
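One possible completion of the `# YOUR CODE HERE` placeholder in the cell above is sketched below, to be pasted right before `output = shortcut + residual`. It is the standard ResNet v2 bottleneck branch (1x1 reduction, 3x3 convolution carrying the block stride, 1x1 expansion), with pre-activation batch norm and ReLU between the convolutions. The layer names `conv1`/`bn1`/... are our own choice, and the snippet relies on `preact`, `bottleneck_channels`, `stride`, `output_channels`, `name`, `batch_norm_args`, `he_initializer` and `_conv2d_same` already defined above:
```
  # Bottleneck residual branch (one possible implementation, pre-activation style).
  residual = tf.layers.Conv2D(bottleneck_channels, 1, strides=1, use_bias=False,
                              name=name+"/conv1",
                              kernel_initializer=he_initializer)(preact)
  residual = tf.layers.BatchNormalization(name=name+"/bn1")(residual,
                                                            **batch_norm_args)
  residual = tf.nn.relu(residual)
  residual = _conv2d_same(residual, bottleneck_channels, 3, stride=stride,
                          name=name+"/conv2")
  residual = tf.layers.BatchNormalization(name=name+"/bn2")(residual,
                                                            **batch_norm_args)
  residual = tf.nn.relu(residual)
  residual = tf.layers.Conv2D(output_channels, 1, strides=1, use_bias=True,
                              name=name+"/conv3",
                              kernel_initializer=he_initializer)(residual)
```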
```
# stack resnet blocks
def _build_resnet_blocks(inputs, blocks, batch_norm_args):
"""Connects the resnet block into the graph."""
outputs = []
for num, subblocks in enumerate(blocks):
with tf.variable_scope("block_{}".format(num)):
for i, block in enumerate(subblocks):
args = {
"name": "resnet_block_{}".format(i)
}
args.update(block._asdict())
args.update(batch_norm_args)
inputs = resnet_block(inputs, **args)
outputs += [inputs]
return outputs
```
```
# define full architecture: input convs, resnet blocks, output classifier
def resnet_v2(inputs, blocks, is_training=True,
num_classes=1000,
use_global_pool=True, name="resnet_v2"):
"""ResNet V2."""
blocks = tuple(blocks)
batch_norm_args = {
"training": is_training
}
outputs = []
with tf.variable_scope(name, reuse=tf.AUTO_REUSE):
# Add initial non-resnet conv layer and max_pool
inputs = _conv2d_same(inputs, 64, 7, stride=2, name="root")
inputs = _max_pool2d_same(inputs, 3, stride=2, padding="SAME")
outputs += [inputs]
# Stack resnet blocks
resnet_outputs = _build_resnet_blocks(inputs, blocks, batch_norm_args)
outputs += resnet_outputs
# Take the activations of the last resnet block.
inputs = resnet_outputs[-1]
inputs = tf.layers.BatchNormalization(name="bn_postnorm")(inputs,
**batch_norm_args)
inputs = tf.nn.relu(inputs)
outputs += [inputs]
if use_global_pool:
inputs = tf.reduce_mean(inputs, [1, 2], name="use_global_pool",
keepdims=True)
outputs += [inputs]
# Add output classifier
inputs = tf.layers.Conv2D(num_classes, 1, name="logits")(inputs)
inputs = tf.squeeze(inputs, axis=[1, 2])
outputs += [inputs]
return outputs[-1]
```
## Set up training pipeline
```
# First define the preprocessing ops for the train/test data
crop_height = 32 #@param
crop_width = 32 #@param
preprocess_fn_train = train_image_preprocess(crop_height, crop_width)
preprocess_fn_test = test_image_preprocess()
NUM_CLASSES = 10 #@param
```
### Instantiate teacher
```
teacher_blocks = BLOCKS_50
# teacher runs in inference mode
with tf.variable_scope("teacher"):
teacher_predictions = resnet_v2(preprocess_fn_train(batch_train_images),
teacher_blocks,
num_classes=NUM_CLASSES, is_training=False)
```
### We do not want to alter the teacher weights, so apply `tf.stop_gradients` to `teacher_predictions`
```
################
# YOUR CODE HERE teacher_predictions = tf.stop_gradient...
```
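A possible completion of the placeholder above is a single call, matching the hint already present in the comment (our sketch):
```
# Freeze the teacher: gradients of the student loss must not flow into its weights.
teacher_predictions = tf.stop_gradient(teacher_predictions)
```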
### Load pre-trained weights
- we won't do this today since the checkpoint was not uploaded
```
# # Create saver to restore the pre-trained model
# saver = tf.train.Saver(var_map, reshape=True)
```
### Instantiate student and get predictions
```
student_blocks = BLOCKS_21
with tf.variable_scope("student"):
student_train_predictions = resnet_v2(preprocess_fn_train(batch_train_images),
student_blocks,
num_classes=NUM_CLASSES,
is_training=True)
print (student_train_predictions)
student_test_predictions = resnet_v2(preprocess_fn_test(batch_test_images),
student_blocks,
num_classes=NUM_CLASSES,
is_training=False)
print (student_test_predictions)
```
```
# Get number of parameters in a scope by iterating through the trainable variables
def get_num_params(scope):
total_parameters = 0
for variable in tf.trainable_variables(scope):
# shape is an array of tf.Dimension
shape = variable.get_shape()
variable_parameters = 1
for dim in shape:
variable_parameters *= dim.value
total_parameters += variable_parameters
return total_parameters
```
```
# Get number of parameters in the models.
print ("Total number of parameters of teacher model")
print (get_num_params("teacher"))
print ("Total number of parameters of student model")
print (get_num_params("student"))
```
Total number of parameters of teacher model
23520842
Total number of parameters of student model
9496394
### Set up the training for the student, adding the distillation loss weighted by the square of the temperature, as explained above.
Normally we use T = 1, but for distillation we use T > 1, e.g. T = 5. We will visualise the impact of T on the logits later.
```
T_distill = 5.0
T_normal = 1.0
```
#### First define the regular cross-entropy classification loss
```
def classification_loss(logits=None, labels=None):
# We reduce over batch dimension, to ensure the loss is a scalar.
return tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=labels, logits=logits))
```
#### Define the distillation loss
You can do this either with
* `tf.distributions.kl_divergence` between the teacher and student distributions, respectively; or
* `softmax_cross_entropy_with_logits`. Remember that in this case the labels are expected to sum to 1, while the output of the teacher network is logits. So we need to apply `softmax` on the `teacher_predictions`.
```
# Using tf.distributions.kl_divergence
# pp = tf.distributions.Categorical(logits=teacher_predictions)
# qq = tf.distributions.Categorical(logits=student_train_predictions)
# distill_kl_loss = tf.reduce_mean(tf.distributions.kl_divergence(pp, qq))
```
```
# OR simpler, using cross entropy
################
# YOUR CODE HERE distill_kl_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(...
```
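A sketch of the cross-entropy variant suggested above (our completion of the placeholder, not an official solution; it assumes `teacher_predictions`, `student_train_predictions` and `T_distill` from the cells above): the teacher logits are turned into soft targets with a softmax at temperature `T_distill`, and the student logits are divided by the same temperature.
```
# Possible completion: cross-entropy distillation loss with temperature T_distill.
distill_kl_loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(
        labels=tf.nn.softmax(teacher_predictions / T_distill),
        logits=student_train_predictions / T_distill))
```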
#### Define the joint training loss
```
################
# YOUR CODE HERE lambda_ = ...
# YOUR CODE HERE train_loss = classification_loss...
# YOUR CODE HERE add the weighted distillation term
```
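One way to fill in the cell above (a sketch assuming `distill_kl_loss` from the previous step): weight the distillation term by lambda = T^2, as discussed in the introduction, so that both gradient contributions have comparable scale.
```
# Possible completion of the joint student loss.
lambda_ = T_distill ** 2
train_loss = classification_loss(logits=student_train_predictions,
                                 labels=batch_train_labels)
train_loss = train_loss + lambda_ * distill_kl_loss
```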
```
# For evaluation, we look at top_k_accuracy since it's easier to interpret; normally k=1 or k=5
def top_k_accuracy(k, labels, logits):
in_top_k = tf.nn.in_top_k(predictions=tf.squeeze(logits), targets=labels, k=k)
return tf.reduce_mean(tf.cast(in_top_k, tf.float32))
```
```
#@title Set up the training; better to start with lower lr and longer training schedule
def get_optimizer(step):
"""Get the optimizer used for training."""
lr_schedule = (80e3, 100e3, 110e3)
lr_schedule = tf.to_int64(lr_schedule)
lr_factor = 0.1
lr_init = 0.01
num_epochs = tf.reduce_sum(tf.to_float(step >= lr_schedule))
lr = lr_init * lr_factor**num_epochs
return tf.train.MomentumOptimizer(learning_rate=lr, momentum=0.9)
# Create a global step that is incremented during training; useful for e.g. learning rate annealing
global_step = tf.train.get_or_create_global_step()
# instantiate the optimizer
optimizer = get_optimizer(global_step)
# Get training ops, including BatchNorm update ops
training_op = optimizer.minimize(train_loss, global_step)
update_ops = tf.group(*tf.get_collection(tf.GraphKeys.UPDATE_OPS))
training_op = tf.group(training_op, update_ops)
# Display loss function
def plot_losses(loss_list, steps):
display.clear_output(wait=True)
display.display(pl.gcf())
pl.plot(steps, loss_list, c='b')
time.sleep(1.0)
```
### Teacher and student accuracy
```
test_acc = top_k_accuracy(1, batch_test_labels, student_test_predictions)
# We compute the accuracy of the teacher on the train set to make sure that
# the loading of the pre-trained weights was successful; this should be above 90%;
# today it is close to random since the teacher doesn't use pretrained weights
acc_teacher = top_k_accuracy(1, batch_train_labels, teacher_predictions)
```
### Define ops to visualise the impact of softmax temperature on output distributions
```
probs_high_temp = tf.nn.softmax(tf.div(teacher_predictions, T_distill))
probs_low_temp = tf.nn.softmax(tf.div(teacher_predictions, T_normal))
```
### Define training parameters
```
# Define number of training iterations and reporting intervals
TRAIN_ITERS = 90e3 #@param
REPORT_TRAIN_EVERY = 100 #@param
PLOT_EVERY = 500 #@param
REPORT_TEST_EVERY = 1000 #@param
TEST_ITERS = 100 #@param
```
### Create the session and initialise variables
```
sess = tf.Session()
sess.run(tf.global_variables_initializer())
```
### Load pre-trained weights for teacher, and check accuracy to make sure the import was successful
```
# saver.restore(sess, "resnet50.ckpt")
num_batches = 100 # 100 batches * BATCH_SIZE_TRAIN (= 100) samples per batch = 10000 out of 50000 training images
avg_accuracy = 0.0
for _ in range(num_batches):
accuracy = sess.run(acc_teacher)
avg_accuracy += accuracy
avg_accuracy /= num_batches
# expected_accuracy > 90% if we had loaded a pretrained checkpoint
print ("Teacher accuracy on a subset of the train set {:.3f}%".format(avg_accuracy))
```
### Visualize the impact of temperature on the logits
```
probs_ht, probs_lt, gt = sess.run([probs_high_temp, probs_low_temp, tf.one_hot(batch_train_labels, NUM_CLASSES)])
# pick one sample and plot
idx = 10
plt.plot(probs_ht[idx], c='r', label='High Temp')
plt.plot(probs_lt[idx], c='g', label='Low Temp')
plt.plot(gt[idx], 'b--', label='GT')
plt.xlim([0,9])
plt.legend()
plt.show()
```
### Train the model.
If you run out of memory, reduce BATCH_SIZE_TRAIN, e.g. to 32 or 16.
Note that the execution is slower and more memory is needed now, since for each training iteration of the student we need to run the forward pass for the teacher as well.
### Training the model
```
# Get test ops
test_acc_op = top_k_accuracy(1, batch_test_labels, student_test_predictions)
train_acc_op = top_k_accuracy(1, batch_train_labels, student_train_predictions)
```
```
train_iter = 0
losses = []
steps = []
for train_iter in range(int(TRAIN_ITERS)):
_, train_loss_np = sess.run([training_op, train_loss])
if (train_iter % REPORT_TRAIN_EVERY) == 0:
losses.append(train_loss_np)
steps.append(train_iter)
if (train_iter % PLOT_EVERY) == 0:
pass
# plot_losses(losses, steps)
if (train_iter % REPORT_TEST_EVERY) == 0:
avg_acc = 0.0
train_avg_acc = 0.0
for test_iter in range(TEST_ITERS):
acc, acc_train = sess.run([test_acc_op, train_acc_op])
avg_acc += acc
train_avg_acc += acc_train
avg_acc /= (TEST_ITERS)
train_avg_acc /= (TEST_ITERS)
print ('Test acc at iter {0:5d} out of {1:5d} is {2:.2f}%'.format(int(train_iter), int(TRAIN_ITERS), avg_acc*100.0))
print ('Train acc at iter {0:5d} out of {1:5d} is {2:.2f}%'.format(int(train_iter), int(TRAIN_ITERS), train_avg_acc*100.0))
```
| 9be1927f2e2d03f643518c097c54d4a7de296fd4 | 91,351 | ipynb | Jupyter Notebook | vision/Part2_start.ipynb | EvaBr/PracticalSessions | fc2d6e87a4ea4b0e4eed140f1f36fcd59274051d | [
"MIT"
]
| null | null | null | vision/Part2_start.ipynb | EvaBr/PracticalSessions | fc2d6e87a4ea4b0e4eed140f1f36fcd59274051d | [
"MIT"
]
| null | null | null | vision/Part2_start.ipynb | EvaBr/PracticalSessions | fc2d6e87a4ea4b0e4eed140f1f36fcd59274051d | [
"MIT"
]
| null | null | null | 72.789641 | 49,320 | 0.744644 | true | 5,587 | Qwen/Qwen-72B | 1. YES
2. YES | 0.795658 | 0.73412 | 0.584108 | __label__eng_Latn | 0.860526 | 0.195409 |
### Initialization
#### Notebook stuff
```python
from IPython.display import display, Latex, HTML
display(HTML(open('01.css').read()))
```
#### Numpy and Scipy
```python
import numpy as np
from numpy import array, cos, diag, eye, linspace, pi
from numpy import poly1d, sign, sin, sqrt, where, zeros
from scipy.linalg import eigh, inv, det
```
#### Matplotlib
```python
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-paper')
plt.rcParams['figure.dpi'] = 115
plt.rcParams['figure.figsize'] = (7.5, 2.5)
plt.rcParams['axes.grid'] = True
```
#### Miscellaneous definitions
In the following, `ld` and `pmat` are used to display mathematical formulas generated by the program, `rounder` ensures that a floating point number _close_ to an integer will be rounded correctly when formatted as an integer, `p` is a shorthand for calling `poly1d` (whose name is long and which requires a single sequence argument), `vw` computes the virtual work done by the moments `m` for the curvatures `c` when the lengths of the beams are `l`, and finally
`p0_p1`, given an array of values `p`, returns the successive pairs `p[0], p[1]`, then `p[1], p[2]`, and so on. A short usage example follows the definitions below.
```python
def ld(*items):
display(Latex('$$' + ' '.join(items) + '$$'))
def pmat(mat, env='bmatrix', fmt='%+f'):
opener = '\\begin{'+env+'}\n '
closer = '\n\\end{'+env+'}'
formatted = '\\\\\n '.join('&'.join(fmt%elt for elt in row) for row in mat)
return opener+formatted+closer
def rounder(mat): return mat+0.01*sign(mat)
def p(*l): return poly1d(l)
def vw(emme, chi, L):
return sum(((m*c).integ()(l)-(m*c).integ()(0)) for (m, c, l) in zip(emme, chi, L))
def p0_p1(p):
from itertools import tee
a, b = tee(p)
next(b, None)
return zip(a, b)
```
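As a quick, hypothetical sanity check of these helpers (the numbers below are illustrative only and are not part of the analysis):
```python
# Hypothetical usage of the helpers defined above.
m_diag = [p(1, 0)]             # one linear moment diagram, M(x) = x
chi = [p(1)]                   # a constant curvature, chi(x) = 1
print(vw(m_diag, chi, [2]))    # integral of x*1 over [0, 2] -> 2.0
print(list(p0_p1([1, 2, 3])))  # successive pairs -> [(1, 2), (2, 3)]
```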
# 3 DOF System
## Input motion
We need the imposed displacement, the imposed velocity (an intermediate result) and the imposed acceleration. It is convenient to express these quantities in terms of an adimensional time coordinate $a = \omega_0 t$,
\begin{align}
u &= \frac{4/3\omega_0 t - \sin(4/3\omega_0 t)}{2\pi}
= \frac{\lambda_0 a- \sin(\lambda_0 a)}{2\pi},\\
\dot{u} &= \frac{4}{3}\omega_0 \frac{1-\cos(4/3\omega_0t)}{2\pi}
= \lambda_0 \omega_0 \frac{1-\cos(\lambda_0 a)}{2\pi},\\
\ddot{u} &= \frac{16}{9}\omega_0^2 \frac{\sin(4/3\omega_0t)}{2\pi}
= \lambda_0^2\omega_0^2 \frac{\sin(\lambda_0 a)}{2\pi},
\end{align}
with $\lambda_0=4/3$.
The equations above are valid in the interval
$$ 0 \le t \le \frac{2\pi}{4/3 \omega_0} \rightarrow
0 \le a \le \frac{3\pi}2 $$
(we have multiplied all terms by $\omega_0$ and simplified the last term).
Following a similar reasoning, the plotting interval is equal to $0\le a\le2\pi$.
```python
l0 = 4/3
# define a function to get back the time array and the 3 dependent vars
def a_uA_vA_aA(t0, t1, npoints):
a = linspace(t0, t1, npoints)
uA = where(a<3*pi/2, (l0*a-sin(l0*a))/2/pi, 1)
    vA = where(a<3*pi/2, l0*(1-cos(l0*a))/2/pi, 0)  # include the lambda_0 factor from the formula above
aA = where(a<3*pi/2, 16*sin(l0*a)/18/pi, 0)
return a, uA, vA, aA
# and use it
a, uA, vA, aA = a_uA_vA_aA(0, 2*pi, 501)
```
#### The plots
```python
plt.plot(a/pi, uA)
plt.xlabel(r'$\omega_0 t/\pi$')
plt.ylabel(r'$u_A/\delta$')
plt.title('Imposed support motion');
```
```python
plt.plot(a/pi, vA)
plt.xlabel(r'$\omega_0 t/\pi$')
plt.ylabel(r'$\dot u_A/\delta\omega_0$')
plt.title('Imposed support velocity');
```
```python
plt.plot(a/pi, aA)
plt.xlabel(r'$\omega_0 t/\pi$')
plt.ylabel(r'$\ddot u_A/\delta\omega_0^2$')
plt.title('Imposed support acceleration');
```
## Equation of Motion
The EoM expressed in adimensional coordinates and using adimensional structural matrices is
$$ m\omega_0^2\hat{\boldsymbol M} \frac{\partial^2\boldsymbol x}{\partial a^2}
+ \frac{EJ}{L^3}\hat{\boldsymbol K}\boldsymbol x =
m \hat{\boldsymbol M} \boldsymbol e \omega_0^2 \frac{\partial^2 u_A}{\partial a^2}
$$
using the dot notation to denote derivatives with respect to $a$; if we divide both sides by $m\omega_0^2$ we have
$$ \hat{\boldsymbol M} \ddot{\boldsymbol x}
+ \hat{\boldsymbol K}\boldsymbol x =
\hat{\boldsymbol M} \boldsymbol e \ddot{u}_A.
$$
We must determine the influence vector $\boldsymbol e$ and the adimensional structural matrices.
### Influence vector
To impose a horizontal displacement in $A$ we must remove one constraint, so that the structure has 1 DOF as a rigid system and the influence vector must be determined by a kinematic analysis.
```python
display(HTML(open('figures/trab1kin_conv.svg').read()))
```
<?xml version="1.0" encoding="UTF-8"?>
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="757.158pt" height="381.639pt" viewBox="0 0 757.158 381.639" version="1.1">
<defs>
<g>
<symbol overflow="visible" id="glyph0-0">
<path style="stroke:none;" d=""/>
</symbol>
<symbol overflow="visible" id="glyph0-1">
<path style="stroke:none;" d="M 9.796875 -8.421875 C 9.125 -8.296875 8.875 -7.8125 8.875 -7.421875 C 8.875 -6.921875 9.28125 -6.75 9.5625 -6.75 C 10.1875 -6.75 10.625 -7.296875 10.625 -7.84375 C 10.625 -8.71875 9.625 -9.109375 8.765625 -9.109375 C 7.5 -9.109375 6.796875 -7.875 6.609375 -7.484375 C 6.140625 -9.03125 4.859375 -9.109375 4.484375 -9.109375 C 2.375 -9.109375 1.265625 -6.40625 1.265625 -5.953125 C 1.265625 -5.859375 1.34375 -5.765625 1.484375 -5.765625 C 1.65625 -5.765625 1.6875 -5.890625 1.734375 -5.96875 C 2.4375 -8.265625 3.828125 -8.703125 4.421875 -8.703125 C 5.34375 -8.703125 5.53125 -7.828125 5.53125 -7.328125 C 5.53125 -6.875 5.40625 -6.40625 5.171875 -5.40625 L 4.46875 -2.578125 C 4.15625 -1.34375 3.546875 -0.203125 2.453125 -0.203125 C 2.359375 -0.203125 1.84375 -0.203125 1.40625 -0.46875 C 2.140625 -0.625 2.3125 -1.234375 2.3125 -1.484375 C 2.3125 -1.90625 2 -2.140625 1.609375 -2.140625 C 1.109375 -2.140625 0.578125 -1.71875 0.578125 -1.046875 C 0.578125 -0.1875 1.546875 0.203125 2.4375 0.203125 C 3.421875 0.203125 4.125 -0.578125 4.5625 -1.421875 C 4.890625 -0.203125 5.921875 0.203125 6.6875 0.203125 C 8.796875 0.203125 9.921875 -2.5 9.921875 -2.953125 C 9.921875 -3.0625 9.828125 -3.140625 9.703125 -3.140625 C 9.515625 -3.140625 9.5 -3.03125 9.4375 -2.875 C 8.875 -1.046875 7.6875 -0.203125 6.75 -0.203125 C 6.03125 -0.203125 5.640625 -0.75 5.640625 -1.59375 C 5.640625 -2.046875 5.71875 -2.375 6.046875 -3.734375 L 6.78125 -6.546875 C 7.078125 -7.78125 7.78125 -8.703125 8.734375 -8.703125 C 8.78125 -8.703125 9.359375 -8.703125 9.796875 -8.421875 Z M 9.796875 -8.421875 "/>
</symbol>
<symbol overflow="visible" id="glyph1-0">
<path style="stroke:none;" d=""/>
</symbol>
<symbol overflow="visible" id="glyph1-1">
<path style="stroke:none;" d="M 2.640625 -5.15625 C 2.390625 -5.140625 2.34375 -5.125 2.34375 -4.984375 C 2.34375 -4.84375 2.40625 -4.84375 2.671875 -4.84375 L 3.328125 -4.84375 C 4.546875 -4.84375 5.09375 -3.84375 5.09375 -2.46875 C 5.09375 -0.59375 4.109375 -0.09375 3.40625 -0.09375 C 2.71875 -0.09375 1.546875 -0.421875 1.140625 -1.359375 C 1.59375 -1.296875 2.015625 -1.546875 2.015625 -2.0625 C 2.015625 -2.484375 1.703125 -2.765625 1.3125 -2.765625 C 0.96875 -2.765625 0.59375 -2.5625 0.59375 -2.015625 C 0.59375 -0.75 1.859375 0.296875 3.453125 0.296875 C 5.15625 0.296875 6.421875 -1 6.421875 -2.453125 C 6.421875 -3.765625 5.359375 -4.8125 3.984375 -5.046875 C 5.234375 -5.40625 6.03125 -6.453125 6.03125 -7.578125 C 6.03125 -8.703125 4.859375 -9.53125 3.46875 -9.53125 C 2.03125 -9.53125 0.96875 -8.65625 0.96875 -7.609375 C 0.96875 -7.046875 1.421875 -6.921875 1.640625 -6.921875 C 1.9375 -6.921875 2.28125 -7.140625 2.28125 -7.578125 C 2.28125 -8.03125 1.9375 -8.234375 1.625 -8.234375 C 1.53125 -8.234375 1.5 -8.234375 1.46875 -8.21875 C 2.015625 -9.1875 3.359375 -9.1875 3.421875 -9.1875 C 3.90625 -9.1875 4.828125 -8.984375 4.828125 -7.578125 C 4.828125 -7.296875 4.796875 -6.5 4.375 -5.875 C 3.9375 -5.25 3.453125 -5.203125 3.0625 -5.1875 Z M 2.640625 -5.15625 "/>
</symbol>
<symbol overflow="visible" id="glyph1-2">
<path style="stroke:none;" d="M 6.3125 -2.40625 L 6 -2.40625 C 5.953125 -2.171875 5.84375 -1.375 5.6875 -1.140625 C 5.59375 -1.015625 4.78125 -1.015625 4.34375 -1.015625 L 1.6875 -1.015625 C 2.078125 -1.34375 2.953125 -2.265625 3.328125 -2.609375 C 5.515625 -4.625 6.3125 -5.359375 6.3125 -6.78125 C 6.3125 -8.4375 5 -9.53125 3.34375 -9.53125 C 1.671875 -9.53125 0.703125 -8.125 0.703125 -6.890625 C 0.703125 -6.15625 1.328125 -6.15625 1.375 -6.15625 C 1.671875 -6.15625 2.046875 -6.375 2.046875 -6.828125 C 2.046875 -7.234375 1.78125 -7.5 1.375 -7.5 C 1.25 -7.5 1.21875 -7.5 1.171875 -7.484375 C 1.453125 -8.46875 2.21875 -9.125 3.15625 -9.125 C 4.375 -9.125 5.125 -8.109375 5.125 -6.78125 C 5.125 -5.5625 4.421875 -4.5 3.59375 -3.578125 L 0.703125 -0.34375 L 0.703125 0 L 5.9375 0 Z M 6.3125 -2.40625 "/>
</symbol>
<symbol overflow="visible" id="glyph1-3">
<path style="stroke:none;" d="M 4.125 -9.1875 C 4.125 -9.53125 4.125 -9.53125 3.84375 -9.53125 C 3.5 -9.15625 2.78125 -8.625 1.3125 -8.625 L 1.3125 -8.203125 C 1.640625 -8.203125 2.359375 -8.203125 3.140625 -8.578125 L 3.140625 -1.109375 C 3.140625 -0.59375 3.09375 -0.421875 1.84375 -0.421875 L 1.390625 -0.421875 L 1.390625 0 C 1.78125 -0.03125 3.171875 -0.03125 3.640625 -0.03125 C 4.109375 -0.03125 5.5 -0.03125 5.875 0 L 5.875 -0.421875 L 5.4375 -0.421875 C 4.171875 -0.421875 4.125 -0.59375 4.125 -1.109375 Z M 4.125 -9.1875 "/>
</symbol>
<symbol overflow="visible" id="glyph2-0">
<path style="stroke:none;" d=""/>
</symbol>
<symbol overflow="visible" id="glyph2-1">
<path style="stroke:none;" d="M 7.1875 -2.671875 L 6.875 -2.671875 C 6.703125 -1.453125 6.5625 -1.234375 6.484375 -1.140625 C 6.40625 -1 5.171875 -1 4.921875 -1 L 1.625 -1 C 2.234375 -1.671875 3.4375 -2.890625 4.90625 -4.3125 C 5.953125 -5.296875 7.1875 -6.46875 7.1875 -8.171875 C 7.1875 -10.203125 5.5625 -11.375 3.75 -11.375 C 1.859375 -11.375 0.703125 -9.71875 0.703125 -8.15625 C 0.703125 -7.484375 1.203125 -7.40625 1.40625 -7.40625 C 1.578125 -7.40625 2.09375 -7.5 2.09375 -8.109375 C 2.09375 -8.640625 1.65625 -8.796875 1.40625 -8.796875 C 1.3125 -8.796875 1.203125 -8.78125 1.140625 -8.75 C 1.46875 -10.203125 2.46875 -10.9375 3.515625 -10.9375 C 5.015625 -10.9375 5.984375 -9.75 5.984375 -8.171875 C 5.984375 -6.6875 5.109375 -5.390625 4.125 -4.265625 L 0.703125 -0.390625 L 0.703125 0 L 6.765625 0 Z M 7.1875 -2.671875 "/>
</symbol>
<symbol overflow="visible" id="glyph3-0">
<path style="stroke:none;" d=""/>
</symbol>
<symbol overflow="visible" id="glyph3-1">
<path style="stroke:none;" d="M 5.875 -1 C 6.09375 -0.03125 6.921875 0.171875 7.328125 0.171875 C 7.890625 0.171875 8.296875 -0.1875 8.578125 -0.78125 C 8.875 -1.390625 9.09375 -2.40625 9.09375 -2.46875 C 9.09375 -2.546875 9.015625 -2.625 8.921875 -2.625 C 8.765625 -2.625 8.75 -2.53125 8.671875 -2.265625 C 8.375 -1.078125 8.0625 -0.171875 7.375 -0.171875 C 6.859375 -0.171875 6.859375 -0.734375 6.859375 -0.96875 C 6.859375 -1.359375 6.90625 -1.53125 7.078125 -2.25 C 7.203125 -2.71875 7.3125 -3.1875 7.421875 -3.671875 L 8.125 -6.46875 C 8.25 -6.90625 8.25 -6.9375 8.25 -6.984375 C 8.25 -7.25 8.046875 -7.421875 7.78125 -7.421875 C 7.28125 -7.421875 7.15625 -6.984375 7.0625 -6.5625 C 6.890625 -5.890625 5.953125 -2.1875 5.84375 -1.578125 C 5.8125 -1.578125 5.140625 -0.171875 3.890625 -0.171875 C 3 -0.171875 2.828125 -0.953125 2.828125 -1.578125 C 2.828125 -2.5625 3.3125 -3.9375 3.75 -5.09375 C 3.953125 -5.640625 4.046875 -5.875 4.046875 -6.21875 C 4.046875 -6.953125 3.515625 -7.59375 2.6875 -7.59375 C 1.109375 -7.59375 0.46875 -5.09375 0.46875 -4.953125 C 0.46875 -4.890625 0.53125 -4.796875 0.65625 -4.796875 C 0.8125 -4.796875 0.828125 -4.875 0.890625 -5.109375 C 1.3125 -6.59375 1.984375 -7.25 2.640625 -7.25 C 2.8125 -7.25 3.078125 -7.234375 3.078125 -6.6875 C 3.078125 -6.234375 2.890625 -5.734375 2.640625 -5.078125 C 1.875 -3.03125 1.796875 -2.375 1.796875 -1.859375 C 1.796875 -0.109375 3.109375 0.171875 3.828125 0.171875 C 4.921875 0.171875 5.53125 -0.578125 5.875 -1 Z M 5.875 -1 "/>
</symbol>
<symbol overflow="visible" id="glyph4-0">
<path style="stroke:none;" d=""/>
</symbol>
<symbol overflow="visible" id="glyph4-1">
<path style="stroke:none;" d="M 2.03125 -1.328125 C 1.609375 -0.625 1.203125 -0.375 0.640625 -0.34375 C 0.5 -0.328125 0.40625 -0.328125 0.40625 -0.125 C 0.40625 -0.046875 0.46875 0 0.546875 0 C 0.765625 0 1.296875 -0.03125 1.515625 -0.03125 C 1.859375 -0.03125 2.25 0 2.578125 0 C 2.65625 0 2.796875 0 2.796875 -0.234375 C 2.796875 -0.328125 2.703125 -0.34375 2.625 -0.34375 C 2.359375 -0.375 2.125 -0.46875 2.125 -0.75 C 2.125 -0.921875 2.203125 -1.046875 2.359375 -1.3125 L 3.265625 -2.828125 L 6.3125 -2.828125 C 6.328125 -2.71875 6.328125 -2.625 6.328125 -2.515625 C 6.375 -2.203125 6.515625 -0.953125 6.515625 -0.734375 C 6.515625 -0.375 5.90625 -0.34375 5.71875 -0.34375 C 5.578125 -0.34375 5.453125 -0.34375 5.453125 -0.125 C 5.453125 0 5.5625 0 5.625 0 C 5.828125 0 6.078125 -0.03125 6.28125 -0.03125 L 6.953125 -0.03125 C 7.6875 -0.03125 8.21875 0 8.21875 0 C 8.3125 0 8.4375 0 8.4375 -0.234375 C 8.4375 -0.34375 8.328125 -0.34375 8.15625 -0.34375 C 7.5 -0.34375 7.484375 -0.453125 7.453125 -0.8125 L 6.71875 -8.265625 C 6.6875 -8.515625 6.640625 -8.53125 6.515625 -8.53125 C 6.390625 -8.53125 6.328125 -8.515625 6.21875 -8.328125 Z M 3.46875 -3.171875 L 5.875 -7.1875 L 6.28125 -3.171875 Z M 3.46875 -3.171875 "/>
</symbol>
</g>
<clipPath id="clip1">
<path d="M 4 4 L 753 4 L 753 381.640625 L 4 381.640625 Z M 4 4 "/>
</clipPath>
<clipPath id="clip2">
<path d="M 284 190 L 286 190 L 286 248 L 284 248 Z M 284 190 "/>
</clipPath>
<clipPath id="clip3">
<path d="M 0 381.640625 L 758 381.640625 L 758 -0.359375 L 0 -0.359375 Z M 283.785156 189.828125 L 285.730469 189.828125 L 288.929688 209.71875 L 280.585938 209.71875 Z M 283.785156 189.828125 "/>
</clipPath>
<clipPath id="clip4">
<path d="M 565 190 L 567 190 L 567 248 L 565 248 Z M 565 190 "/>
</clipPath>
<clipPath id="clip5">
<path d="M 0 381.640625 L 758 381.640625 L 758 -0.359375 L 0 -0.359375 Z M 565.441406 189.828125 L 567.386719 189.828125 L 570.585938 209.71875 L 562.242188 209.71875 Z M 565.441406 189.828125 "/>
</clipPath>
<clipPath id="clip6">
<path d="M 666 252 L 724 252 L 724 254 L 666 254 Z M 666 252 "/>
</clipPath>
<clipPath id="clip7">
<path d="M 0 381.640625 L 758 381.640625 L 758 -0.359375 L 0 -0.359375 Z M 723.863281 252.417969 L 723.863281 254.363281 L 703.976562 257.5625 L 703.976562 249.21875 Z M 723.863281 252.417969 "/>
</clipPath>
</defs>
<g id="surface1">
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(90.039062%,90.039062%,90.039062%);fill-opacity:1;" d="M 3.101562 165.765625 L 754.1875 165.765625 L 754.1875 153.246094 L 3.101562 153.246094 Z M 3.101562 165.765625 "/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(90.039062%,90.039062%,90.039062%);fill-opacity:1;" d="M 3.101562 71.878906 L 754.1875 71.878906 L 754.1875 59.359375 L 3.101562 59.359375 Z M 3.101562 71.878906 "/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(90.039062%,90.039062%,90.039062%);fill-opacity:1;" d="M 3.101562 259.652344 L 754.1875 259.652344 L 754.1875 247.132812 L 3.101562 247.132812 Z M 3.101562 259.652344 "/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(90.039062%,90.039062%,90.039062%);fill-opacity:1;" d="M 3.101562 353.535156 L 754.1875 353.535156 L 754.1875 341.019531 L 3.101562 341.019531 Z M 3.101562 353.535156 "/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(90.039062%,90.039062%,90.039062%);fill-opacity:1;" d="M 90.726562 378.574219 L 103.246094 378.574219 L 103.246094 3.03125 L 90.726562 3.03125 Z M 90.726562 378.574219 "/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(90.039062%,90.039062%,90.039062%);fill-opacity:1;" d="M 184.613281 378.574219 L 197.132812 378.574219 L 197.132812 3.03125 L 184.613281 3.03125 Z M 184.613281 378.574219 "/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(90.039062%,90.039062%,90.039062%);fill-opacity:1;" d="M 278.496094 378.574219 L 291.015625 378.574219 L 291.015625 3.03125 L 278.496094 3.03125 Z M 278.496094 378.574219 "/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(90.039062%,90.039062%,90.039062%);fill-opacity:1;" d="M 372.382812 378.574219 L 384.902344 378.574219 L 384.902344 3.03125 L 372.382812 3.03125 Z M 372.382812 378.574219 "/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(90.039062%,90.039062%,90.039062%);fill-opacity:1;" d="M 466.269531 378.574219 L 478.789062 378.574219 L 478.789062 3.03125 L 466.269531 3.03125 Z M 466.269531 378.574219 "/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(90.039062%,90.039062%,90.039062%);fill-opacity:1;" d="M 560.15625 378.574219 L 572.671875 378.574219 L 572.671875 3.03125 L 560.15625 3.03125 Z M 560.15625 378.574219 "/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(90.039062%,90.039062%,90.039062%);fill-opacity:1;" d="M 654.039062 378.574219 L 666.558594 378.574219 L 666.558594 3.03125 L 654.039062 3.03125 Z M 654.039062 378.574219 "/>
<path style="fill:none;stroke-width:20.8635;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(100%,100%,100%);stroke-opacity:1;stroke-miterlimit:10;" d="M 31.015625 2221.311875 L 7541.875 2221.311875 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:20.8635;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(100%,100%,100%);stroke-opacity:1;stroke-miterlimit:10;" d="M 31.015625 3160.179063 L 7541.875 3160.179063 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:20.8635;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(100%,100%,100%);stroke-opacity:1;stroke-miterlimit:10;" d="M 31.015625 1282.48375 L 7541.875 1282.48375 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:20.8635;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(100%,100%,100%);stroke-opacity:1;stroke-miterlimit:10;" d="M 31.015625 343.616563 L 7541.875 343.616563 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:20.8635;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(100%,100%,100%);stroke-opacity:1;stroke-miterlimit:10;" d="M 969.84375 3786.0775 L 969.84375 30.647813 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:20.8635;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(100%,100%,100%);stroke-opacity:1;stroke-miterlimit:10;" d="M 1908.710938 3786.0775 L 1908.710938 30.647813 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:20.8635;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(100%,100%,100%);stroke-opacity:1;stroke-miterlimit:10;" d="M 2847.578125 3786.0775 L 2847.578125 30.647813 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:20.8635;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(100%,100%,100%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3786.445312 3786.0775 L 3786.445312 30.647813 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:20.8635;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(100%,100%,100%);stroke-opacity:1;stroke-miterlimit:10;" d="M 4725.273438 3786.0775 L 4725.273438 30.647813 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:20.8635;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(100%,100%,100%);stroke-opacity:1;stroke-miterlimit:10;" d="M 5664.140625 3786.0775 L 5664.140625 30.647813 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:20.8635;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(100%,100%,100%);stroke-opacity:1;stroke-miterlimit:10;" d="M 6603.007812 3786.0775 L 6603.007812 30.647813 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<g clip-path="url(#clip1)" clip-rule="nonzero">
<path style="fill:none;stroke-width:20.8635;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(38.792419%,38.792419%,38.792419%);stroke-opacity:1;stroke-dasharray:125.181;stroke-miterlimit:10;" d="M 7228.90625 30.647813 L 343.945312 3473.147813 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
</g>
<path style="fill:none;stroke-width:20.8635;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(38.792419%,38.792419%,38.792419%);stroke-opacity:1;stroke-dasharray:125.181;stroke-miterlimit:10;" d="M 969.84375 3473.147813 L 969.84375 30.647813 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 65.691406 153.246094 L 128.28125 153.246094 L 128.28125 121.953125 L 65.691406 121.953125 Z M 65.691406 153.246094 "/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 969.84375 2221.311875 L 1078.359375 2409.124375 L 861.367188 2409.124375 Z M 969.84375 2221.311875 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 844.6875 2471.7025 L 1095.039062 2471.7025 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 6603.007812 343.616563 L 6102.265625 343.616563 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 6603.007812 343.616563 L 6477.8125 218.42125 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 6477.8125 343.616563 L 6352.65625 218.42125 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 6352.65625 343.616563 L 6227.460938 218.42125 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 6227.460938 343.616563 L 6102.265625 218.42125 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 7103.710938 343.616563 L 6603.007812 343.616563 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 7103.710938 343.616563 L 6978.554688 218.42125 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 6978.554688 343.616563 L 6853.359375 218.42125 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 6853.359375 343.616563 L 6728.164062 218.42125 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 6728.164062 343.616563 L 6603.007812 218.42125 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 1470.585938 2471.7025 L 969.84375 2471.7025 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 1470.585938 2471.7025 L 1345.390625 2596.897813 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 1345.390625 2471.7025 L 1220.234375 2596.897813 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 1220.234375 2471.7025 L 1095.039062 2596.897813 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 1095.039062 2471.7025 L 969.84375 2596.897813 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 969.84375 2471.7025 L 469.140625 2471.7025 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 969.84375 2471.7025 L 844.6875 2596.897813 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 844.6875 2471.7025 L 719.492188 2596.897813 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 719.492188 2471.7025 L 594.296875 2596.897813 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 594.296875 2471.7025 L 469.140625 2596.897813 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(0%,68.943787%,68.943787%);fill-opacity:1;" d="M 578.933594 253.390625 C 578.933594 260.304688 573.328125 265.910156 566.414062 265.910156 C 559.5 265.910156 553.894531 260.304688 553.894531 253.390625 C 553.894531 246.476562 559.5 240.871094 566.414062 240.871094 C 573.328125 240.871094 578.933594 246.476562 578.933594 253.390625 "/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 5789.335938 1282.48375 C 5789.335938 1213.343125 5733.28125 1157.288438 5664.140625 1157.288438 C 5595 1157.288438 5538.945312 1213.343125 5538.945312 1282.48375 C 5538.945312 1351.624375 5595 1407.679063 5664.140625 1407.679063 C 5733.28125 1407.679063 5789.335938 1351.624375 5789.335938 1282.48375 Z M 5789.335938 1282.48375 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(0%,68.943787%,68.943787%);fill-opacity:1;" d="M 297.273438 253.390625 C 297.273438 260.304688 291.671875 265.910156 284.757812 265.910156 C 277.84375 265.910156 272.238281 260.304688 272.238281 253.390625 C 272.238281 246.476562 277.84375 240.871094 284.757812 240.871094 C 291.671875 240.871094 297.273438 246.476562 297.273438 253.390625 "/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 2972.734375 1282.48375 C 2972.734375 1213.343125 2916.71875 1157.288438 2847.578125 1157.288438 C 2778.4375 1157.288438 2722.382812 1213.343125 2722.382812 1282.48375 C 2722.382812 1351.624375 2778.4375 1407.679063 2847.578125 1407.679063 C 2916.71875 1407.679063 2972.734375 1351.624375 2972.734375 1282.48375 Z M 2972.734375 1282.48375 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,100%);stroke-opacity:1;stroke-miterlimit:10;" d="M 5977.109375 1532.835313 C 5977.109375 1498.265 5949.0625 1470.257188 5914.492188 1470.257188 C 5879.921875 1470.257188 5851.914062 1498.265 5851.914062 1532.835313 C 5851.914062 1567.405625 5879.921875 1595.413438 5914.492188 1595.413438 C 5949.0625 1595.413438 5977.109375 1567.405625 5977.109375 1532.835313 Z M 5977.109375 1532.835313 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,100%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3160.507812 1532.835313 C 3160.507812 1498.265 3132.5 1470.257188 3097.929688 1470.257188 C 3063.359375 1470.257188 3035.351562 1498.265 3035.351562 1532.835313 C 3035.351562 1567.405625 3063.359375 1595.413438 3097.929688 1595.413438 C 3132.5 1595.413438 3160.507812 1567.405625 3160.507812 1532.835313 Z M 3160.507812 1532.835313 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:41.727;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 969.84375 2221.311875 L 969.84375 1282.48375 L 6603.007812 1282.48375 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:41.727;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 6603.007812 343.616563 L 6603.007812 2221.311875 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<g clip-path="url(#clip2)" clip-rule="nonzero">
<g clip-path="url(#clip3)" clip-rule="evenodd">
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 2847.578125 1345.061875 L 2847.578125 1908.382188 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
</g>
</g>
<path style="fill-rule:evenodd;fill:rgb(0%,0%,0%);fill-opacity:1;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 2889.296875 1719.2025 L 2847.578125 1886.116563 L 2805.859375 1719.2025 Z M 2889.296875 1719.2025 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<g clip-path="url(#clip4)" clip-rule="nonzero">
<g clip-path="url(#clip5)" clip-rule="evenodd">
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 5664.140625 1345.061875 L 5664.140625 1908.382188 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
</g>
</g>
<path style="fill-rule:evenodd;fill:rgb(0%,0%,0%);fill-opacity:1;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 5705.859375 1719.2025 L 5664.140625 1886.116563 L 5622.421875 1719.2025 Z M 5705.859375 1719.2025 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<g clip-path="url(#clip6)" clip-rule="nonzero">
<g clip-path="url(#clip7)" clip-rule="evenodd">
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 6665.585938 1282.48375 L 7228.90625 1282.48375 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
</g>
</g>
<path style="fill-rule:evenodd;fill:rgb(0%,0%,0%);fill-opacity:1;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 7039.765625 1240.765 L 7206.640625 1282.48375 L 7039.765625 1324.2025 Z M 7039.765625 1240.765 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,100%);stroke-opacity:1;stroke-miterlimit:10;" d="M 1095.039062 2221.311875 L 1220.234375 1282.48375 L 4975.664062 1783.186875 L 6853.359375 1282.48375 L 6603.007812 343.616563 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,100%);stroke-opacity:1;stroke-miterlimit:10;" d="M 6853.359375 1282.48375 L 7103.710938 2221.311875 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 969.84375 1094.710313 L 1220.234375 1094.710313 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 969.84375 1157.288438 L 969.84375 1032.093125 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 1220.234375 1157.288438 L 1220.234375 1032.093125 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 2847.578125 1032.093125 L 2847.578125 1157.288438 L 2847.578125 1094.710313 L 3097.929688 1094.710313 L 3097.929688 1157.288438 L 3097.929688 1032.093125 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3223.125 1532.835313 L 3348.28125 1532.835313 L 3285.703125 1532.835313 L 3285.703125 1282.48375 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 5476.367188 1282.48375 L 5476.367188 1532.835313 L 5413.789062 1532.835313 L 5538.945312 1532.835313 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 5664.140625 1094.710313 L 5664.140625 969.515 L 5664.140625 1032.093125 L 5914.492188 1032.093125 L 5914.492188 1094.710313 L 5914.492188 969.515 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 478.789062 253.390625 C 478.789062 256.847656 475.984375 259.652344 472.527344 259.652344 C 469.070312 259.652344 466.269531 256.847656 466.269531 253.390625 C 466.269531 249.933594 469.070312 247.132812 472.527344 247.132812 C 475.984375 247.132812 478.789062 249.933594 478.789062 253.390625 "/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 4787.890625 1282.48375 C 4787.890625 1247.913438 4759.84375 1219.866563 4725.273438 1219.866563 C 4690.703125 1219.866563 4662.695312 1247.913438 4662.695312 1282.48375 C 4662.695312 1317.054063 4690.703125 1345.061875 4725.273438 1345.061875 C 4759.84375 1345.061875 4787.890625 1317.054063 4787.890625 1282.48375 Z M 4787.890625 1282.48375 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 666.558594 347.277344 C 666.558594 350.734375 663.757812 353.535156 660.300781 353.535156 C 656.84375 353.535156 654.039062 350.734375 654.039062 347.277344 C 654.039062 343.820312 656.84375 341.019531 660.300781 341.019531 C 663.757812 341.019531 666.558594 343.820312 666.558594 347.277344 "/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 6665.585938 343.616563 C 6665.585938 309.04625 6637.578125 281.038438 6603.007812 281.038438 C 6568.4375 281.038438 6540.390625 309.04625 6540.390625 343.616563 C 6540.390625 378.186875 6568.4375 406.194688 6603.007812 406.194688 C 6637.578125 406.194688 6665.585938 378.186875 6665.585938 343.616563 Z M 6665.585938 343.616563 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 103.246094 159.507812 C 103.246094 162.960938 100.441406 165.765625 96.984375 165.765625 C 93.53125 165.765625 90.726562 162.960938 90.726562 159.507812 C 90.726562 156.050781 93.53125 153.246094 96.984375 153.246094 C 100.441406 153.246094 103.246094 156.050781 103.246094 159.507812 "/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 1032.460938 2221.311875 C 1032.460938 2186.780625 1004.414062 2158.73375 969.84375 2158.73375 C 935.3125 2158.73375 907.265625 2186.780625 907.265625 2221.311875 C 907.265625 2255.882188 935.3125 2283.929063 969.84375 2283.929063 C 1004.414062 2283.929063 1032.460938 2255.882188 1032.460938 2221.311875 Z M 1032.460938 2221.311875 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 93.925781 137.53125 C 93.925781 139.296875 92.492188 140.726562 90.726562 140.726562 C 88.960938 140.726562 87.527344 139.296875 87.527344 137.53125 C 87.527344 135.761719 88.960938 134.328125 90.726562 134.328125 C 92.492188 134.328125 93.925781 135.761719 93.925781 137.53125 "/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 939.257812 2441.0775 C 939.257812 2423.42125 924.921875 2409.124375 907.265625 2409.124375 C 889.609375 2409.124375 875.273438 2423.42125 875.273438 2441.0775 C 875.273438 2458.772813 889.609375 2473.10875 907.265625 2473.10875 C 924.921875 2473.10875 939.257812 2458.772813 939.257812 2441.0775 Z M 939.257812 2441.0775 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 106.445312 137.53125 C 106.445312 139.296875 105.011719 140.726562 103.246094 140.726562 C 101.476562 140.726562 100.046875 139.296875 100.046875 137.53125 C 100.046875 135.761719 101.476562 134.328125 103.246094 134.328125 C 105.011719 134.328125 106.445312 135.761719 106.445312 137.53125 "/>
<path style="fill:none;stroke-width:10.4318;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 1064.453125 2441.0775 C 1064.453125 2423.42125 1050.117188 2409.124375 1032.460938 2409.124375 C 1014.765625 2409.124375 1000.46875 2423.42125 1000.46875 2441.0775 C 1000.46875 2458.772813 1014.765625 2473.10875 1032.460938 2473.10875 C 1050.117188 2473.10875 1064.453125 2458.772813 1064.453125 2441.0775 Z M 1064.453125 2441.0775 " transform="matrix(0.1,0,0,-0.1,0,381.639)"/>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph0-1" x="578.856" y="203.337"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph1-1" x="590.353" y="206.436"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph0-1" x="297.216" y="203.337"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph1-2" x="308.713" y="206.436"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph0-1" x="710.287" y="284.699"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph1-3" x="721.784" y="287.799"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph2-1" x="508.282" y="240.889"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph3-1" x="516.178" y="240.889"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph4-1" x="525.772" y="243.471"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph2-1" x="597.632" y="284.699"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph3-1" x="605.527" y="284.699"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph4-1" x="615.121" y="287.282"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph2-1" x="128.233" y="284.699"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph3-1" x="136.128" y="284.699"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph4-1" x="145.722" y="287.282"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph2-1" x="315.992" y="284.699"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph3-1" x="323.888" y="284.699"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph4-1" x="333.481" y="287.282"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph2-1" x="334.768" y="247.148"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph3-1" x="342.664" y="247.148"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph4-1" x="352.257" y="249.73"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph3-1" x="115.715" y="165.785"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph4-1" x="125.309" y="168.367"/>
</g>
</g>
</svg>
The left beam is constrained by a roller and by the right beam: the roller requires that the Centre of Instantaneous Rotation (CIR) lie on the vertical line through $A$, while the connection to the right beam requires that the CIR lie on the line joining the hinges
of the right beam.
The rotation angles are $\theta_\text{left} = u_A/L$ and $\theta_\text{right}
= -2 u_A/L$, hence $x_1=x_2=x_3=2u_A$ and
$$ \boldsymbol e = \begin{Bmatrix}2\\2\\2\end{Bmatrix}.$$
```python
e = array((2.0, 2.0, 2.0))
```
### Structural Matrices
```python
display(HTML(open('figures/trab1_conv.svg').read()))
```
<?xml version="1.0" encoding="UTF-8"?>
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="334.147pt" height="147.344pt" viewBox="0 0 334.147 147.344" version="1.1">
<defs>
<g>
<symbol overflow="visible" id="glyph0-0">
<path style="stroke:none;" d=""/>
</symbol>
<symbol overflow="visible" id="glyph0-1">
<path style="stroke:none;" d="M 5.46875 -1.484375 C 5.515625 -1.015625 5.609375 -0.609375 5.671875 -0.375 C 5.78125 0.046875 5.828125 0.265625 6.21875 0.265625 C 6.640625 0.265625 7.34375 -0.140625 7.34375 -0.328125 C 7.34375 -0.34375 7.34375 -0.390625 7.265625 -0.390625 C 7.25 -0.390625 7.140625 -0.375 6.96875 -0.28125 C 6.875 -0.234375 6.859375 -0.234375 6.828125 -0.234375 C 6.5625 -0.234375 6.5 -0.515625 6.390625 -1.046875 C 6.3125 -1.4375 6.25 -1.71875 6.15625 -3.046875 C 6.09375 -3.84375 6.046875 -4.625 6.046875 -5.421875 L 6.046875 -6.265625 C 6.046875 -6.4375 6.046875 -6.484375 5.953125 -6.484375 C 5.84375 -6.484375 5.484375 -6.359375 5.28125 -6.09375 L 5.21875 -5.984375 C 5.203125 -5.96875 4.5625 -4.5625 3.421875 -2.765625 C 2.953125 -2.046875 1.890625 -0.390625 1.375 -0.390625 C 1.125 -0.390625 0.734375 -0.578125 0.640625 -0.890625 C 0.625 -0.953125 0.625 -1 0.578125 -1 C 0.4375 -1 0.234375 -0.53125 0.234375 -0.296875 C 0.234375 0.015625 0.625 0.453125 1.078125 0.453125 C 1.625 0.453125 2.3125 -0.53125 2.96875 -1.484375 Z M 5.265625 -5.390625 L 5.265625 -5 C 5.265625 -3.984375 5.375 -2.484375 5.421875 -1.984375 L 3.734375 -1.984375 C 3.578125 -1.984375 3.453125 -1.984375 3.203125 -1.828125 C 3.765625 -2.6875 4.046875 -3.15625 4.375 -3.71875 C 4.9375 -4.734375 5.125 -5.140625 5.25 -5.390625 Z M 5.265625 -5.390625 "/>
</symbol>
<symbol overflow="visible" id="glyph0-2">
<path style="stroke:none;" d="M 2.5625 -6.25 C 2.5625 -6.3125 2.53125 -6.328125 2.453125 -6.328125 C 2.359375 -6.328125 2.09375 -6.1875 1.90625 -6.09375 C 1.453125 -5.859375 1.125 -5.703125 1.125 -5.546875 C 1.125 -5.5 1.171875 -5.484375 1.21875 -5.484375 C 1.3125 -5.484375 1.578125 -5.625 1.765625 -5.703125 C 1.609375 -4.59375 1.4375 -3.671875 1.28125 -2.921875 C 1.015625 -1.71875 0.828125 -0.953125 0.390625 -0.109375 C 0.28125 0.109375 0.28125 0.125 0.28125 0.140625 C 0.28125 0.203125 0.359375 0.203125 0.375 0.203125 C 0.46875 0.203125 0.90625 0.046875 1.078125 -0.265625 C 1.25 -0.625 1.53125 -1.140625 1.75 -2 C 1.890625 -2.53125 2.265625 -4.015625 2.96875 -4.953125 C 3.328125 -5.421875 3.6875 -5.828125 4.375 -5.828125 C 4.859375 -5.828125 5.3125 -5.546875 5.3125 -5.03125 C 5.3125 -4.3125 4.59375 -4.03125 3.40625 -3.625 C 2.875 -3.453125 2.75 -3.25 2.75 -3.1875 C 2.75 -3.125 2.796875 -3.125 2.84375 -3.125 C 2.90625 -3.125 3.09375 -3.15625 3.296875 -3.15625 C 4.21875 -3.15625 4.984375 -2.625 4.984375 -1.765625 C 4.984375 -0.515625 3.796875 -0.3125 3.171875 -0.3125 C 2.71875 -0.3125 2.328125 -0.453125 2 -0.796875 C 1.953125 -0.859375 1.9375 -0.859375 1.875 -0.859375 C 1.75 -0.859375 1.5625 -0.765625 1.453125 -0.703125 C 1.25 -0.5625 1.25 -0.53125 1.1875 -0.40625 C 1.34375 -0.234375 1.6875 0.203125 2.5625 0.203125 C 3.875 0.203125 5.765625 -0.703125 5.765625 -2.15625 C 5.765625 -3.015625 5.0625 -3.53125 4.296875 -3.625 C 4.828125 -3.890625 6.09375 -4.5 6.09375 -5.421875 C 6.09375 -5.96875 5.625 -6.328125 4.984375 -6.328125 C 4.359375 -6.328125 3.25 -5.984375 2.359375 -4.859375 L 2.34375 -4.859375 Z M 2.5625 -6.25 "/>
</symbol>
<symbol overflow="visible" id="glyph0-3">
<path style="stroke:none;" d="M 4.59375 -1.40625 C 4.59375 -1.453125 4.5625 -1.484375 4.515625 -1.484375 C 4.421875 -1.484375 4.03125 -1.359375 3.828125 -1.0625 C 3.609375 -0.75 3.234375 -0.28125 2.5 -0.28125 C 1.65625 -0.28125 0.90625 -0.890625 0.90625 -2.25 C 0.90625 -2.859375 1.109375 -4.046875 1.890625 -5.03125 C 2.296875 -5.53125 2.8125 -5.828125 3.546875 -5.828125 C 3.96875 -5.828125 4.125 -5.65625 4.125 -5.34375 C 4.125 -5.015625 3.75 -4.34375 3.703125 -4.265625 C 3.625 -4.140625 3.625 -4.109375 3.625 -4.09375 C 3.625 -4.046875 3.6875 -4.03125 3.71875 -4.03125 C 3.875 -4.03125 4.25 -4.21875 4.375 -4.421875 C 4.40625 -4.46875 4.90625 -5.3125 4.90625 -5.734375 C 4.90625 -6.15625 4.65625 -6.328125 4.15625 -6.328125 C 3.078125 -6.328125 2 -5.734375 1.3125 -4.953125 C 0.453125 -3.984375 0.125 -2.6875 0.125 -1.859375 C 0.125 -0.5 0.875 0.21875 1.890625 0.21875 C 3.359375 0.21875 4.59375 -1.203125 4.59375 -1.40625 Z M 4.59375 -1.40625 "/>
</symbol>
<symbol overflow="visible" id="glyph0-4">
<path style="stroke:none;" d="M 1.921875 0 C 4.078125 0 7.0625 -1.59375 7.0625 -4.03125 C 7.0625 -4.828125 6.6875 -5.375 6.078125 -5.71875 C 5.3125 -6.125 4.546875 -6.125 3.6875 -6.125 C 2.921875 -6.125 2.3125 -6.125 1.53125 -5.78125 C 0.3125 -5.21875 0.1875 -4.484375 0.1875 -4.453125 C 0.1875 -4.40625 0.203125 -4.375 0.28125 -4.375 C 0.34375 -4.375 0.515625 -4.421875 0.6875 -4.53125 C 0.921875 -4.6875 0.9375 -4.734375 0.984375 -4.890625 C 1.109375 -5.25 1.3125 -5.5625 2.53125 -5.625 C 2.421875 -3.84375 1.984375 -2.109375 1.3125 -0.46875 C 0.890625 -0.328125 0.765625 -0.109375 0.765625 -0.0625 C 0.765625 -0.015625 0.765625 0 0.984375 0 Z M 1.953125 -0.5 C 2.796875 -2.53125 3.109375 -3.984375 3.296875 -5.625 C 3.953125 -5.625 6.28125 -5.625 6.28125 -3.640625 C 6.28125 -1.859375 4.65625 -0.5 2.46875 -0.5 Z M 1.953125 -0.5 "/>
</symbol>
<symbol overflow="visible" id="glyph1-0">
<path style="stroke:none;" d=""/>
</symbol>
<symbol overflow="visible" id="glyph1-1">
<path style="stroke:none;" d="M 1.6875 -1.40625 C 1.796875 -1.828125 1.96875 -2.484375 1.96875 -2.5625 C 1.984375 -2.609375 2.21875 -3.046875 2.53125 -3.359375 C 2.796875 -3.59375 3.140625 -3.734375 3.484375 -3.734375 C 3.96875 -3.734375 3.96875 -3.265625 3.96875 -3.109375 C 3.96875 -3 3.96875 -2.875 3.859375 -2.4375 L 3.65625 -1.640625 C 3.375 -0.484375 3.296875 -0.203125 3.296875 -0.15625 C 3.296875 -0.046875 3.375 0.09375 3.578125 0.09375 C 3.703125 0.09375 3.84375 0.015625 3.90625 -0.09375 C 3.921875 -0.140625 4 -0.4375 4.046875 -0.609375 L 4.25 -1.40625 C 4.34375 -1.828125 4.515625 -2.484375 4.53125 -2.5625 C 4.546875 -2.609375 4.765625 -3.046875 5.09375 -3.359375 C 5.359375 -3.59375 5.6875 -3.734375 6.046875 -3.734375 C 6.53125 -3.734375 6.53125 -3.265625 6.53125 -3.109375 C 6.53125 -2.5625 6.109375 -1.46875 6.015625 -1.1875 C 5.90625 -0.921875 5.859375 -0.828125 5.859375 -0.65625 C 5.859375 -0.171875 6.234375 0.09375 6.640625 0.09375 C 7.5 0.09375 7.84375 -1.1875 7.84375 -1.28125 C 7.84375 -1.328125 7.828125 -1.390625 7.734375 -1.390625 C 7.625 -1.390625 7.625 -1.34375 7.59375 -1.234375 C 7.359375 -0.46875 6.984375 -0.125 6.65625 -0.125 C 6.59375 -0.125 6.4375 -0.125 6.4375 -0.40625 C 6.4375 -0.640625 6.53125 -0.875 6.59375 -1.0625 C 6.78125 -1.53125 7.140625 -2.46875 7.140625 -2.984375 C 7.140625 -3.78125 6.546875 -3.96875 6.078125 -3.96875 C 5.21875 -3.96875 4.75 -3.328125 4.59375 -3.109375 C 4.5 -3.84375 3.890625 -3.96875 3.515625 -3.96875 C 2.6875 -3.96875 2.25 -3.375 2.09375 -3.1875 C 2.046875 -3.671875 1.671875 -3.96875 1.234375 -3.96875 C 0.859375 -3.96875 0.65625 -3.6875 0.53125 -3.4375 C 0.390625 -3.125 0.265625 -2.625 0.265625 -2.578125 C 0.265625 -2.5 0.328125 -2.46875 0.390625 -2.46875 C 0.484375 -2.46875 0.5 -2.515625 0.546875 -2.703125 C 0.71875 -3.40625 0.90625 -3.734375 1.203125 -3.734375 C 1.484375 -3.734375 1.484375 -3.453125 1.484375 -3.3125 C 1.484375 -3.125 1.40625 -2.859375 1.359375 -2.625 C 1.296875 -2.390625 1.203125 -2 1.171875 -1.890625 L 0.8125 -0.421875 C 0.75 -0.203125 0.75 -0.1875 0.75 -0.15625 C 0.75 -0.046875 0.828125 0.09375 1.015625 0.09375 C 1.140625 0.09375 1.28125 0.015625 1.34375 -0.09375 C 1.375 -0.140625 1.4375 -0.4375 1.484375 -0.609375 Z M 1.6875 -1.40625 "/>
</symbol>
<symbol overflow="visible" id="glyph2-0">
<path style="stroke:none;" d=""/>
</symbol>
<symbol overflow="visible" id="glyph2-1">
<path style="stroke:none;" d="M -6.03125 -3.734375 C -6.390625 -3.8125 -6.5 -3.84375 -6.5 -4.78125 C -6.5 -5.078125 -6.5 -5.15625 -6.6875 -5.15625 C -6.8125 -5.15625 -6.8125 -5.046875 -6.8125 -5 C -6.8125 -4.671875 -6.78125 -3.859375 -6.78125 -3.53125 C -6.78125 -3.234375 -6.8125 -2.5 -6.8125 -2.203125 C -6.8125 -2.140625 -6.8125 -2.015625 -6.609375 -2.015625 C -6.5 -2.015625 -6.5 -2.109375 -6.5 -2.296875 C -6.5 -2.3125 -6.5 -2.5 -6.484375 -2.671875 C -6.453125 -2.84375 -6.453125 -2.9375 -6.3125 -2.9375 C -6.28125 -2.9375 -6.25 -2.9375 -6.125 -2.90625 L -0.78125 -1.5625 C -0.390625 -1.46875 -0.3125 -1.453125 -0.3125 -0.65625 C -0.3125 -0.484375 -0.3125 -0.390625 -0.109375 -0.390625 C 0 -0.390625 0 -0.484375 0 -0.65625 L 0 -5.28125 C 0 -5.515625 0 -5.515625 -0.171875 -5.578125 L -2.328125 -6.375 C -2.4375 -6.40625 -2.453125 -6.40625 -2.46875 -6.40625 C -2.5 -6.40625 -2.578125 -6.375 -2.578125 -6.296875 C -2.578125 -6.203125 -2.515625 -6.1875 -2.359375 -6.125 C -1.453125 -5.78125 -0.3125 -5.34375 -0.3125 -3.625 L -0.3125 -2.6875 C -0.3125 -2.546875 -0.3125 -2.515625 -0.3125 -2.46875 C -0.328125 -2.359375 -0.34375 -2.328125 -0.421875 -2.328125 C -0.453125 -2.328125 -0.46875 -2.328125 -0.640625 -2.375 Z M -6.03125 -3.734375 "/>
</symbol>
<symbol overflow="visible" id="glyph3-0">
<path style="stroke:none;" d=""/>
</symbol>
<symbol overflow="visible" id="glyph3-1">
<path style="stroke:none;" d="M 1.265625 -0.765625 L 2.328125 -1.796875 C 3.875 -3.171875 4.46875 -3.703125 4.46875 -4.703125 C 4.46875 -5.84375 3.578125 -6.640625 2.359375 -6.640625 C 1.234375 -6.640625 0.5 -5.71875 0.5 -4.828125 C 0.5 -4.28125 1 -4.28125 1.03125 -4.28125 C 1.203125 -4.28125 1.546875 -4.390625 1.546875 -4.8125 C 1.546875 -5.0625 1.359375 -5.328125 1.015625 -5.328125 C 0.9375 -5.328125 0.921875 -5.328125 0.890625 -5.3125 C 1.109375 -5.96875 1.65625 -6.328125 2.234375 -6.328125 C 3.140625 -6.328125 3.5625 -5.515625 3.5625 -4.703125 C 3.5625 -3.90625 3.078125 -3.125 2.515625 -2.5 L 0.609375 -0.375 C 0.5 -0.265625 0.5 -0.234375 0.5 0 L 4.203125 0 L 4.46875 -1.734375 L 4.234375 -1.734375 C 4.171875 -1.4375 4.109375 -1 4 -0.84375 C 3.9375 -0.765625 3.28125 -0.765625 3.0625 -0.765625 Z M 1.265625 -0.765625 "/>
</symbol>
<symbol overflow="visible" id="glyph4-0">
<path style="stroke:none;" d=""/>
</symbol>
<symbol overflow="visible" id="glyph4-1">
<path style="stroke:none;" d="M 3.734375 -6.03125 C 3.8125 -6.390625 3.84375 -6.5 4.78125 -6.5 C 5.078125 -6.5 5.15625 -6.5 5.15625 -6.6875 C 5.15625 -6.8125 5.046875 -6.8125 5 -6.8125 C 4.671875 -6.8125 3.859375 -6.78125 3.53125 -6.78125 C 3.234375 -6.78125 2.5 -6.8125 2.203125 -6.8125 C 2.140625 -6.8125 2.015625 -6.8125 2.015625 -6.609375 C 2.015625 -6.5 2.109375 -6.5 2.296875 -6.5 C 2.3125 -6.5 2.5 -6.5 2.671875 -6.484375 C 2.84375 -6.453125 2.9375 -6.453125 2.9375 -6.3125 C 2.9375 -6.28125 2.9375 -6.25 2.90625 -6.125 L 1.5625 -0.78125 C 1.46875 -0.390625 1.453125 -0.3125 0.65625 -0.3125 C 0.484375 -0.3125 0.390625 -0.3125 0.390625 -0.109375 C 0.390625 0 0.484375 0 0.65625 0 L 5.28125 0 C 5.515625 0 5.515625 0 5.578125 -0.171875 L 6.375 -2.328125 C 6.40625 -2.4375 6.40625 -2.453125 6.40625 -2.46875 C 6.40625 -2.5 6.375 -2.578125 6.296875 -2.578125 C 6.203125 -2.578125 6.1875 -2.515625 6.125 -2.359375 C 5.78125 -1.453125 5.34375 -0.3125 3.625 -0.3125 L 2.6875 -0.3125 C 2.546875 -0.3125 2.515625 -0.3125 2.46875 -0.3125 C 2.359375 -0.328125 2.328125 -0.34375 2.328125 -0.421875 C 2.328125 -0.453125 2.328125 -0.46875 2.375 -0.640625 Z M 3.734375 -6.03125 "/>
</symbol>
</g>
<clipPath id="clip1">
<path d="M 322 104 L 334.148438 104 L 334.148438 121 L 322 121 Z M 322 104 "/>
</clipPath>
<clipPath id="clip2">
<path d="M 322 109 L 334.148438 109 L 334.148438 126 L 322 126 Z M 322 109 "/>
</clipPath>
<clipPath id="clip3">
<path d="M 322 114 L 334.148438 114 L 334.148438 131 L 322 131 Z M 322 114 "/>
</clipPath>
<clipPath id="clip4">
<path d="M 322 118 L 334.148438 118 L 334.148438 136 L 322 136 Z M 322 118 "/>
</clipPath>
<clipPath id="clip5">
<path d="M 318 3 L 334.148438 3 L 334.148438 20 L 318 20 Z M 318 3 "/>
</clipPath>
<clipPath id="clip6">
<path d="M 8 71 L 9 71 L 9 121 L 8 121 Z M 8 71 "/>
</clipPath>
<clipPath id="clip7">
<path d="M 0 147.34375 L 335 147.34375 L 335 -0.65625 L 0 -0.65625 Z M 8.582031 120.183594 L 7.835938 120.183594 L 6.601562 112.527344 L 9.816406 112.527344 Z M 7.835938 71.25 L 8.582031 71.25 L 9.816406 78.90625 L 6.601562 78.90625 Z M 7.835938 71.25 "/>
</clipPath>
<clipPath id="clip8">
<path d="M 126 138 L 223 138 L 223 140 L 126 140 Z M 126 138 "/>
</clipPath>
<clipPath id="clip9">
<path d="M 0 147.34375 L 335 147.34375 L 335 -0.65625 L 0 -0.65625 Z M 223.011719 138.707031 L 223.011719 139.457031 L 215.355469 140.691406 L 215.355469 137.476562 Z M 125.890625 139.457031 L 125.890625 138.707031 L 133.546875 137.476562 L 133.546875 140.691406 Z M 125.890625 139.457031 "/>
</clipPath>
<clipPath id="clip10">
<path d="M 222 138 L 272 138 L 272 140 L 222 140 Z M 222 138 "/>
</clipPath>
<clipPath id="clip11">
<path d="M 0 147.34375 L 335 147.34375 L 335 -0.65625 L 0 -0.65625 Z M 222.261719 139.457031 L 222.261719 138.707031 L 229.917969 137.476562 L 229.917969 140.691406 Z M 271.199219 138.707031 L 271.199219 139.457031 L 263.539062 140.691406 L 263.539062 137.476562 Z M 271.199219 138.707031 "/>
</clipPath>
<clipPath id="clip12">
<path d="M 270 138 L 320 138 L 320 140 L 270 140 Z M 270 138 "/>
</clipPath>
<clipPath id="clip13">
<path d="M 0 147.34375 L 335 147.34375 L 335 -0.65625 L 0 -0.65625 Z M 270.449219 139.457031 L 270.449219 138.707031 L 278.105469 137.476562 L 278.105469 140.691406 Z M 319.382812 138.707031 L 319.382812 139.457031 L 311.726562 140.691406 L 311.726562 137.476562 Z M 319.382812 138.707031 "/>
</clipPath>
<clipPath id="clip14">
<path d="M 8 23 L 9 23 L 9 72 L 8 72 Z M 8 23 "/>
</clipPath>
<clipPath id="clip15">
<path d="M 0 147.34375 L 335 147.34375 L 335 -0.65625 L 0 -0.65625 Z M 8.582031 71.996094 L 7.835938 71.996094 L 6.601562 64.339844 L 9.816406 64.339844 Z M 7.835938 23.0625 L 8.582031 23.0625 L 9.816406 30.71875 L 6.601562 30.71875 Z M 7.835938 23.0625 "/>
</clipPath>
<clipPath id="clip16">
<path d="M 29 138 L 127 138 L 127 140 L 29 140 Z M 29 138 "/>
</clipPath>
<clipPath id="clip17">
<path d="M 0 147.34375 L 335 147.34375 L 335 -0.65625 L 0 -0.65625 Z M 126.640625 138.707031 L 126.640625 139.457031 L 118.984375 140.691406 L 118.984375 137.476562 Z M 29.519531 139.457031 L 29.519531 138.707031 L 37.175781 137.476562 L 37.175781 140.691406 Z M 29.519531 139.457031 "/>
</clipPath>
<clipPath id="clip18">
<path d="M 320 110 L 334.148438 110 L 334.148438 125 L 320 125 Z M 320 110 "/>
</clipPath>
<clipPath id="clip19">
<path d="M 320 115 L 334.148438 115 L 334.148438 130 L 320 130 Z M 320 115 "/>
</clipPath>
</defs>
<g id="surface1">
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 395.3125 1239.065 L 202.578125 1239.065 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 395.3125 1239.065 L 347.109375 1287.268125 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 347.109375 1239.065 L 298.945312 1287.268125 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 298.945312 1239.065 L 250.742188 1287.268125 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 250.742188 1239.065 L 202.578125 1287.268125 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3190.078125 1239.065 L 3231.835938 1311.330625 L 3148.320312 1311.330625 Z M 3190.078125 1239.065 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3141.914062 1335.432188 L 3238.28125 1335.432188 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3190.078125 275.354063 L 3262.382812 233.59625 L 3262.382812 317.111875 Z M 3190.078125 275.354063 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3286.445312 323.518125 L 3286.445312 227.150938 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3286.445312 371.72125 L 3286.445312 178.986875 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<g clip-path="url(#clip1)" clip-rule="nonzero">
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3286.445312 371.72125 L 3334.648438 323.518125 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
</g>
<g clip-path="url(#clip2)" clip-rule="nonzero">
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3286.445312 323.518125 L 3334.648438 275.354063 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
</g>
<g clip-path="url(#clip3)" clip-rule="nonzero">
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3286.445312 275.354063 L 3334.648438 227.150938 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
</g>
<g clip-path="url(#clip4)" clip-rule="nonzero">
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3286.445312 227.150938 L 3334.648438 178.986875 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
</g>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3286.445312 1335.432188 L 3093.710938 1335.432188 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<g clip-path="url(#clip5)" clip-rule="nonzero">
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3286.445312 1335.432188 L 3238.28125 1383.635313 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
</g>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3238.28125 1335.432188 L 3190.078125 1383.635313 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3190.078125 1335.432188 L 3141.914062 1383.635313 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3141.914062 1335.432188 L 3093.710938 1383.635313 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(0%,56.054688%,56.054688%);fill-opacity:1;" d="M 275.640625 71.621094 C 275.640625 74.285156 273.484375 76.441406 270.824219 76.441406 C 268.160156 76.441406 266.003906 74.285156 266.003906 71.621094 C 266.003906 68.960938 268.160156 66.804688 270.824219 66.804688 C 273.484375 66.804688 275.640625 68.960938 275.640625 71.621094 "/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 2756.40625 757.229063 C 2756.40625 730.588438 2734.84375 709.025938 2708.242188 709.025938 C 2681.601562 709.025938 2660.039062 730.588438 2660.039062 757.229063 C 2660.039062 783.830625 2681.601562 805.393125 2708.242188 805.393125 C 2734.84375 805.393125 2756.40625 783.830625 2756.40625 757.229063 Z M 2756.40625 757.229063 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(0%,56.054688%,56.054688%);fill-opacity:1;" d="M 131.085938 71.621094 C 131.085938 74.285156 128.925781 76.441406 126.265625 76.441406 C 123.605469 76.441406 121.445312 74.285156 121.445312 71.621094 C 121.445312 68.960938 123.605469 66.804688 126.265625 66.804688 C 128.925781 66.804688 131.085938 68.960938 131.085938 71.621094 "/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 1310.859375 757.229063 C 1310.859375 730.588438 1289.257812 709.025938 1262.65625 709.025938 C 1236.054688 709.025938 1214.453125 730.588438 1214.453125 757.229063 C 1214.453125 783.830625 1236.054688 805.393125 1262.65625 805.393125 C 1289.257812 805.393125 1310.859375 783.830625 1310.859375 757.229063 Z M 1310.859375 757.229063 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 6.054688 757.229063 L 158.125 757.229063 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 6.054688 1239.065 L 158.125 1239.065 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<g clip-path="url(#clip6)" clip-rule="nonzero">
<g clip-path="url(#clip7)" clip-rule="evenodd">
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 82.109375 757.229063 L 82.109375 275.354063 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
</g>
</g>
<path style="fill-rule:evenodd;fill:rgb(0%,0%,0%);fill-opacity:1;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 98.164062 684.3775 L 82.109375 748.635313 L 66.015625 684.3775 Z M 98.164062 684.3775 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill-rule:evenodd;fill:rgb(0%,0%,0%);fill-opacity:1;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 66.015625 348.166563 L 82.109375 283.90875 L 98.164062 348.166563 Z M 66.015625 348.166563 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 6.054688 275.354063 L 158.125 275.354063 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 6.054688 757.229063 L 158.125 757.229063 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 298.945312 158.635313 L 298.945312 6.565 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 1262.65625 158.635313 L 1262.65625 6.565 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<g clip-path="url(#clip8)" clip-rule="nonzero">
<g clip-path="url(#clip9)" clip-rule="evenodd">
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 1262.65625 82.619688 L 2226.367188 82.619688 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
</g>
</g>
<path style="fill-rule:evenodd;fill:rgb(0%,0%,0%);fill-opacity:1;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 1335.46875 98.674375 L 1271.210938 82.619688 L 1335.46875 66.525938 Z M 1335.46875 98.674375 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill-rule:evenodd;fill:rgb(0%,0%,0%);fill-opacity:1;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 2153.554688 66.525938 L 2217.8125 82.619688 L 2153.554688 98.674375 Z M 2153.554688 66.525938 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 1262.65625 158.635313 L 1262.65625 6.565 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 2226.367188 158.635313 L 2226.367188 6.565 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<g clip-path="url(#clip10)" clip-rule="nonzero">
<g clip-path="url(#clip11)" clip-rule="evenodd">
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 2708.242188 82.619688 L 2226.367188 82.619688 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
</g>
</g>
<path style="fill-rule:evenodd;fill:rgb(0%,0%,0%);fill-opacity:1;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 2635.390625 66.525938 L 2699.648438 82.619688 L 2635.390625 98.674375 Z M 2635.390625 66.525938 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill-rule:evenodd;fill:rgb(0%,0%,0%);fill-opacity:1;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 2299.179688 98.674375 L 2234.921875 82.619688 L 2299.179688 66.525938 Z M 2299.179688 98.674375 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 2226.367188 158.635313 L 2226.367188 6.565 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 2708.242188 158.635313 L 2708.242188 6.565 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<g clip-path="url(#clip12)" clip-rule="nonzero">
<g clip-path="url(#clip13)" clip-rule="evenodd">
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3190.078125 82.619688 L 2708.242188 82.619688 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
</g>
</g>
<path style="fill-rule:evenodd;fill:rgb(0%,0%,0%);fill-opacity:1;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3117.265625 66.525938 L 3181.523438 82.619688 L 3117.265625 98.674375 Z M 3117.265625 66.525938 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill-rule:evenodd;fill:rgb(0%,0%,0%);fill-opacity:1;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 2781.054688 98.674375 L 2716.796875 82.619688 L 2781.054688 66.525938 Z M 2781.054688 98.674375 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 2708.242188 158.635313 L 2708.242188 6.565 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3190.078125 158.635313 L 3190.078125 6.565 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<g clip-path="url(#clip14)" clip-rule="nonzero">
<g clip-path="url(#clip15)" clip-rule="evenodd">
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 82.109375 1239.065 L 82.109375 757.229063 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
</g>
</g>
<path style="fill-rule:evenodd;fill:rgb(0%,0%,0%);fill-opacity:1;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 98.164062 1166.2525 L 82.109375 1230.510313 L 66.015625 1166.2525 Z M 98.164062 1166.2525 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill-rule:evenodd;fill:rgb(0%,0%,0%);fill-opacity:1;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 66.015625 830.041563 L 82.109375 765.78375 L 98.164062 830.041563 Z M 66.015625 830.041563 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<g clip-path="url(#clip16)" clip-rule="nonzero">
<g clip-path="url(#clip17)" clip-rule="evenodd">
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 298.945312 82.619688 L 1262.65625 82.619688 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
</g>
</g>
<path style="fill-rule:evenodd;fill:rgb(0%,0%,0%);fill-opacity:1;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 371.757812 98.674375 L 307.5 82.619688 L 371.757812 66.525938 Z M 371.757812 98.674375 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill-rule:evenodd;fill:rgb(0%,0%,0%);fill-opacity:1;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 1189.84375 66.525938 L 1254.0625 82.619688 L 1189.84375 98.674375 Z M 1189.84375 66.525938 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:16.062;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 298.945312 1239.065 L 298.945312 757.229063 L 3190.078125 757.229063 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style="fill:none;stroke-width:16.062;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3190.078125 1239.065 L 3190.078125 275.354063 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 0.605469 107.761719 L 15.8125 107.761719 L 15.8125 83.667969 L 0.605469 83.667969 Z M 0.605469 107.761719 "/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 66.03125 146.6875 L 90.125 146.6875 L 90.125 131.480469 L 66.03125 131.480469 Z M 66.03125 146.6875 "/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 162.40625 146.6875 L 186.496094 146.6875 L 186.496094 131.480469 L 162.40625 131.480469 Z M 162.40625 146.6875 "/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 234.683594 146.6875 L 258.777344 146.6875 L 258.777344 131.480469 L 234.683594 131.480469 Z M 234.683594 146.6875 "/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 282.871094 146.6875 L 306.964844 146.6875 L 306.964844 131.480469 L 282.871094 131.480469 Z M 282.871094 146.6875 "/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 0.605469 59.578125 L 15.8125 59.578125 L 15.8125 35.484375 L 0.605469 35.484375 Z M 0.605469 59.578125 "/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 225.046875 71.621094 C 225.046875 72.953125 223.96875 74.03125 222.636719 74.03125 C 221.308594 74.03125 220.226562 72.953125 220.226562 71.621094 C 220.226562 70.292969 221.308594 69.214844 222.636719 69.214844 C 223.96875 69.214844 225.046875 70.292969 225.046875 71.621094 "/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 2250.46875 757.229063 C 2250.46875 743.90875 2239.6875 733.1275 2226.367188 733.1275 C 2213.085938 733.1275 2202.265625 743.90875 2202.265625 757.229063 C 2202.265625 770.510313 2213.085938 781.291563 2226.367188 781.291563 C 2239.6875 781.291563 2250.46875 770.510313 2250.46875 757.229063 Z M 2250.46875 757.229063 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 32.300781 23.4375 C 32.300781 24.765625 31.222656 25.847656 29.894531 25.847656 C 28.5625 25.847656 27.484375 24.765625 27.484375 23.4375 C 27.484375 22.105469 28.5625 21.027344 29.894531 21.027344 C 31.222656 21.027344 32.300781 22.105469 32.300781 23.4375 "/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 323.007812 1239.065 C 323.007812 1225.78375 312.226562 1214.963438 298.945312 1214.963438 C 285.625 1214.963438 274.84375 1225.78375 274.84375 1239.065 C 274.84375 1252.385313 285.625 1263.166563 298.945312 1263.166563 C 312.226562 1263.166563 323.007812 1252.385313 323.007812 1239.065 Z M 323.007812 1239.065 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 321.417969 23.4375 C 321.417969 24.765625 320.339844 25.847656 319.007812 25.847656 C 317.679688 25.847656 316.601562 24.765625 316.601562 23.4375 C 316.601562 22.105469 317.679688 21.027344 319.007812 21.027344 C 320.339844 21.027344 321.417969 22.105469 321.417969 23.4375 "/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3214.179688 1239.065 C 3214.179688 1225.78375 3203.398438 1214.963438 3190.078125 1214.963438 C 3176.796875 1214.963438 3166.015625 1225.78375 3166.015625 1239.065 C 3166.015625 1252.385313 3176.796875 1263.166563 3190.078125 1263.166563 C 3203.398438 1263.166563 3214.179688 1252.385313 3214.179688 1239.065 Z M 3214.179688 1239.065 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 321.417969 119.808594 C 321.417969 121.140625 320.339844 122.21875 319.007812 122.21875 C 317.679688 122.21875 316.601562 121.140625 316.601562 119.808594 C 316.601562 118.476562 317.679688 117.398438 319.007812 117.398438 C 320.339844 117.398438 321.417969 118.476562 321.417969 119.808594 "/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3214.179688 275.354063 C 3214.179688 262.03375 3203.398438 251.2525 3190.078125 251.2525 C 3176.796875 251.2525 3166.015625 262.03375 3166.015625 275.354063 C 3166.015625 288.674375 3176.796875 299.455625 3190.078125 299.455625 C 3203.398438 299.455625 3214.179688 288.674375 3214.179688 275.354063 Z M 3214.179688 275.354063 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 317.832031 14.976562 C 317.832031 15.65625 317.28125 16.210938 316.601562 16.210938 C 315.921875 16.210938 315.367188 15.65625 315.367188 14.976562 C 315.367188 14.296875 315.921875 13.746094 316.601562 13.746094 C 317.28125 13.746094 317.832031 14.296875 317.832031 14.976562 "/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3178.320312 1323.674375 C 3178.320312 1316.8775 3172.8125 1311.330625 3166.015625 1311.330625 C 3159.21875 1311.330625 3153.671875 1316.8775 3153.671875 1323.674375 C 3153.671875 1330.47125 3159.21875 1335.979063 3166.015625 1335.979063 C 3172.8125 1335.979063 3178.320312 1330.47125 3178.320312 1323.674375 Z M 3178.320312 1323.674375 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 322.648438 14.976562 C 322.648438 15.65625 322.097656 16.210938 321.417969 16.210938 C 320.738281 16.210938 320.1875 15.65625 320.1875 14.976562 C 320.1875 14.296875 320.738281 13.746094 321.417969 13.746094 C 322.097656 13.746094 322.648438 14.296875 322.648438 14.976562 "/>
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3226.484375 1323.674375 C 3226.484375 1316.8775 3220.976562 1311.330625 3214.179688 1311.330625 C 3207.382812 1311.330625 3201.875 1316.8775 3201.875 1323.674375 C 3201.875 1330.47125 3207.382812 1335.979063 3214.179688 1335.979063 C 3220.976562 1335.979063 3226.484375 1330.47125 3226.484375 1323.674375 Z M 3226.484375 1323.674375 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 328.59375 117.398438 C 328.59375 118.050781 328.066406 118.578125 327.414062 118.578125 C 326.765625 118.578125 326.238281 118.050781 326.238281 117.398438 C 326.238281 116.75 326.765625 116.222656 327.414062 116.222656 C 328.066406 116.222656 328.59375 116.75 328.59375 117.398438 "/>
<g clip-path="url(#clip18)" clip-rule="nonzero">
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3285.9375 299.455625 C 3285.9375 292.932188 3280.664062 287.65875 3274.140625 287.65875 C 3267.65625 287.65875 3262.382812 292.932188 3262.382812 299.455625 C 3262.382812 305.94 3267.65625 311.213438 3274.140625 311.213438 C 3280.664062 311.213438 3285.9375 305.94 3285.9375 299.455625 Z M 3285.9375 299.455625 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
</g>
<path style=" stroke:none;fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;" d="M 328.59375 122.21875 C 328.59375 122.867188 328.066406 123.394531 327.414062 123.394531 C 326.765625 123.394531 326.238281 122.867188 326.238281 122.21875 C 326.238281 121.566406 326.765625 121.039062 327.414062 121.039062 C 328.066406 121.039062 328.59375 121.566406 328.59375 122.21875 "/>
<g clip-path="url(#clip19)" clip-rule="nonzero">
<path style="fill:none;stroke-width:4.0155;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 3285.9375 251.2525 C 3285.9375 244.768125 3280.664062 239.494688 3274.140625 239.494688 C 3267.65625 239.494688 3262.382812 244.768125 3262.382812 251.2525 C 3262.382812 257.775938 3267.65625 263.049375 3274.140625 263.049375 C 3280.664062 263.049375 3285.9375 257.775938 3285.9375 251.2525 Z M 3285.9375 251.2525 " transform="matrix(0.1,0,0,-0.1,0,147.344)"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph0-1" x="26.248" y="16.223"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph0-2" x="315.899" y="6.586"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph0-3" x="313.955" y="131.871"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph0-4" x="218.989" y="83.684"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph1-1" x="122.243" y="62"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph1-1" x="266.803784" y="62"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph2-1" x="11.886" y="50.934"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph2-1" x="11.886" y="99.121"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph3-1" x="72.235" y="142.739"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph4-1" x="77.216" y="142.739"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph3-1" x="168.608" y="142.739"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph4-1" x="173.59" y="142.739"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph4-1" x="243.374028" y="142.739"/>
</g>
<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
<use xlink:href="#glyph4-1" x="291.569102" y="142.739"/>
</g>
</g>
</svg>
The $3\times3$ flexibility matrix is computed using the Principle of Virtual Displacements and the $3\times3$ stiffness matrix by inversion, while the mass matrix is assembled directly, taking into account that the lumped mass at $x_1$ is $2m$.
The code uses a data structure `m` in which each of the three rows contains the
computational representation (as polynomial coefficients) of the bending moments due to
a unit load applied at the position of one of the three degrees of freedom;
each row holds six groups of polynomial coefficients, one group for each of the six
intervals into which the structure has been subdivided (a possible seventh interval is omitted because there the bending moment is zero for every unit load). The flexibility coefficients follow from the virtual-work relation recalled below.
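For clarity (this standard relation is not spelled out in the original), considering flexural deformations only, the unit-load moment diagrams $m_j$ give the flexibility coefficients as
$$ F_{jk} = \int \frac{m_j(s)\,m_k(s)}{EJ}\,\mathrm{d}s, $$
which is presumably what `vw` evaluates interval by interval from the polynomial representations in `m` and the interval lengths in `l`.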
```python
l = [1, 2, 2, 1, 1, 1]   # lengths of the six intervals (in units of L)
h = 0.5 ; t = 3*h        # shorthand constants used in the moment tables
# bending moments due to a unit load at each DOF, one polynomial per interval
m = [[p(2,0),p(h,0),p(h,1),p(h,0),p(h,h),p(1,0)],
     [p(2,0),p(1,0),p(0,2),p(1,0),p(1,1),p(2,0)],
     [p(2,0),p(h,0),p(h,1),p(h,0),p(t,h),p(2,0)]]
# flexibility by the PVD, stiffness by inversion, lumped masses (2m at x_1)
F = array([[vw(emme, chi, l) for emme in m] for chi in m])
K = inv(F)
M = array(((2.0, 0.0, 0.0),
           (0.0, 1.0, 0.0),
           (0.0, 0.0, 1.0)))
iM = inv(M)
ld('\\boldsymbol F = \\frac{L^3}{12EJ}\\,', pmat(rounder(F*12), fmt='%+d'))
ld('\\boldsymbol K = \\frac{3 EJ}{1588L^3}\\,',
   pmat(rounder(K*1588/3), fmt='%+d'),
   '= \\frac{EJ}{L^3}\\;\\hat{\\boldsymbol K}.')
ld('\\boldsymbol M = m\\,', pmat(M, fmt='%d'),
   '= m\\;\\hat{\\boldsymbol M}.')
```
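As a quick consistency check (a sketch, not part of the original notebook), the stiffness obtained by inversion must satisfy $\boldsymbol K\,\boldsymbol F=\boldsymbol I$:
```python
from numpy import allclose, eye

# K was computed as inv(F), so K @ F must reproduce the 3x3 identity
print(allclose(K @ F, eye(3)))   # expected: True
```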
$$\boldsymbol F = \frac{L^3}{12EJ}\, \begin{bmatrix}
+92&+128&+101\\
+128&+192&+146\\
+101&+146&+118
\end{bmatrix}$$
$$\boldsymbol K = \frac{3 EJ}{1588L^3}\, \begin{bmatrix}
+1340&-358&-704\\
-358&+655&-504\\
-704&-504&+1280
\end{bmatrix} = \frac{EJ}{L^3}\;\hat{\boldsymbol K}.$$
$$\boldsymbol M = m\, \begin{bmatrix}
2&0&0\\
0&1&0\\
0&0&1
\end{bmatrix} = m\;\hat{\boldsymbol M}.$$
### The eigenvalue problem
We solve the eigenvalue problem right away because, once the shortest modal period of vibration is known, it is possible to choose an integration time step $h$ that avoids numerical instability issues with the linear acceleration algorithm.
```python
wn2, Psi = eigh(K, M)
wn = sqrt(wn2)
li = wn
Lambda2 = diag(wn2)
Lambda = diag(wn)
# eigenvectors are normalized → M* is a unit matrix, as well as its inverse
Mstar, iMstar = eye(3), eye(3)
ld(r'\boldsymbol\Omega^2 = \omega_0^2\,', pmat(Lambda2),
r'=\omega_0^2\,\boldsymbol\Lambda^2.')
ld(r'\boldsymbol\Omega=\omega_0\,', pmat(Lambda),
r'=\omega_0\,\boldsymbol\Lambda.')
ld(r'\boldsymbol T_\text{n}=\frac{2\pi}{\omega_0}\,', pmat(inv(Lambda)),
r'= t_0\,\boldsymbol\Theta.')
ld(r'\Psi=', pmat(Psi), '.')
```
$$\boldsymbol\Omega^2 = \omega_0^2\, \begin{bmatrix}
+0.024831&+0.000000&+0.000000\\
+0.000000&+1.729964&+0.000000\\
+0.000000&+0.000000&+3.166490
\end{bmatrix} =\omega_0^2\,\boldsymbol\Lambda^2.$$
$$\boldsymbol\Omega=\omega_0\, \begin{bmatrix}
+0.157577&+0.000000&+0.000000\\
+0.000000&+1.315281&+0.000000\\
+0.000000&+0.000000&+1.779463
\end{bmatrix} =\omega_0\,\boldsymbol\Lambda.$$
$$\boldsymbol T_\text{n}=\frac{2\pi}{\omega_0}\, \begin{bmatrix}
+6.346086&+0.000000&-0.000000\\
+0.000000&+0.760294&-0.000000\\
+0.000000&+0.000000&+0.561967
\end{bmatrix} = t_0\,\boldsymbol\Theta.$$
$$\Psi= \begin{bmatrix}
-0.431570&+0.504229&-0.243928\\
-0.623940&-0.701061&-0.345271\\
-0.488051&+0.004508&+0.872803
\end{bmatrix} .$$
## Numerical Integration
The shortest period is $T_3 = 2\pi\,0.562/\omega_0 \rightarrow A_3 = 1.124 \pi$, hence to avoid instability of the linear acceleration algorithm we must use a non-dimensional time step $h<0.55A_3\approx0.6\pi$. We can anticipate that the modal response associated with mode 2 is important ($\lambda_2\approx\lambda_0$), so we choose a non-dimensional time step $h=A_2/20=2\pi\,0.760/20\approx0.08\pi$, much smaller than the maximum time step for which the algorithm is stable.
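These figures can be checked directly from the eigenvalues computed above (a small check, not in the original notebook):
```python
# quick check of the stability bound quoted above
A3 = 2*pi/wn[2]      # shortest non-dimensional natural period, ≈ 1.124 π
h_max = 0.551*A3     # stability limit of the linear acceleration method, ≈ 0.62 π
print(A3/pi, h_max/pi)
```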
### Initialization
First a new, longer non-dimensional time vector and the corresponding support acceleration, then the effective load vector (`peff` is an array with 3201 rows and 3 columns, each row being the force vector at a particular instant of time)
```python
nsppi = 200
a, _, _, aA = a_uA_vA_aA(0, 16*pi, nsppi*16+1)
peff = (- M @ e) * aA[:,None]
```
The constants that we need in the linear acceleration algorithm — note that we have an undamped system or, in other words, $\boldsymbol C = \boldsymbol 0$
```python
h = pi/nsppi
K_ = K + 6*M/h**2
F_ = inv(K_)
dp_v = 6*M/h
dp_a = 3*M
```
### The integration loop
First we initialize the containers where the new results will be stored with the initial values at $a=0$; then we loop over the values of the load at times $t_i$ and $t_{i+1}$, with $i=0,\ldots,3199$.
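The generator `p0_p1` is defined earlier in the notebook and is not shown in this excerpt; a minimal equivalent, assuming it simply yields consecutive pairs of rows of the load array, would be:
```python
# hypothetical equivalent of the helper used below: yield the load vectors
# at two consecutive time stations, (p_i, p_{i+1})
def p0_p1(p):
    for p0, p1 in zip(p[:-1], p[1:]):
        yield p0, p1
```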
```python
Xl, Vl = [zeros(3)], [zeros(3)]
for p0, p1 in p0_p1(peff):
x0, v0 = Xl[-1], Vl[-1]
a0 = iM @ (p0 -K@x0)
dp = (p1-p0) + dp_a@a0 + dp_v@v0
dx = F_@dp
dv = 3*dx/h - 3*v0 - a0*h/2
Xl.append(x0+dx), Vl.append(v0+dv)
Xl = array(Xl) ; Vl = array(Vl)
```
#### Plotting
```python
for i, line in enumerate(plt.plot(a/pi, Xl), 1):
line.set_label(r'$x_{%d}$'%i)
plt.xlabel(r'$\omega_0 t/\pi$')
plt.ylabel(r'$x_i/\delta$')
plt.title('Response — numerical integration — lin.acc.')
plt.legend();
```
## Equation of Motion
Denoting with $\boldsymbol x$ the dynamic component of the displacements, with $\boldsymbol x_\text{tot} = \boldsymbol x + \boldsymbol x_\text{stat} = \boldsymbol x + \boldsymbol e \;u_\mathcal{A}$ the equation of motion is (the independent variable being $a=\omega_0t$)
$$ \hat{\boldsymbol M} \ddot{\boldsymbol x} +
\hat{\boldsymbol K} \boldsymbol x =
- \hat{\boldsymbol M} \boldsymbol e \ddot u_\mathcal{A}. $$
Using mass-normalized eigenvectors, with $\boldsymbol x = \delta\boldsymbol\Psi\boldsymbol q$ we have
$$ \boldsymbol I \ddot{\boldsymbol q} +
\boldsymbol\Lambda^2\boldsymbol q =
- \boldsymbol\Psi^T\hat{\boldsymbol M} \boldsymbol e \frac{\ddot u_A}{\delta}.$$
It is $$\frac{\ddot u_A}{\delta} = \frac{1}{2\pi}\,\lambda_0^2\,\sin(\lambda_0a)$$
and $$ \ddot q_i + \lambda_i^2 q_i =
\frac{\Gamma_i}{2\pi}\,\lambda_0^2\,\sin(\lambda_0 a),\qquad\text{with }
\Gamma_i = -\boldsymbol\psi_i^T \hat{\boldsymbol M} \boldsymbol e\text{ and }
\lambda_0 = \frac43.$$
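In the cells below, `G` holds the participation factors $\Gamma_i$, `li` (set equal to `wn` in the eigenvalue cell above) holds the $\lambda_i$, and `l0` is the non-dimensional forcing frequency, presumably defined earlier in the notebook as follows:
```python
# assumed earlier definition, consistent with λ0 = 4/3 in the equations above
l0 = 4/3
```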
```python
G = - Psi.T @ M @ e
```
Substituting a particular integral $\xi_i=C_i\sin(\lambda_0 a)$ in the
modal equation of motion we have
$$(\lambda^2_i-\lambda^2_0)\,C_i\sin(\lambda_0 a) =
\frac{\Gamma_i}{2\pi}\,\lambda_0^2\,\sin(\lambda_0 a)$$
and solving w/r to $C_i$ we have
$$ C_i = \frac{\Gamma_i}{2\pi}\,\frac{\lambda_0^2}{\lambda_i^2-\lambda_0^2}$$
```python
C = G*l0**2/(li**2-l0**2)/2/pi
```
The modal response, taking into account that we start from rest conditions, is
$$ q_i = C_i\left(\sin(\lambda_0 a) -
\frac{\lambda_0}{\lambda_i}\,\sin(\lambda_i a)\right)$$
$$ \dot q_i = \lambda_0 C_i \left(
\cos(\lambda_0 a) - \cos(\lambda_i a) \right).$$
```python
for n in range(3):
i = n+1
ld(r'q_%d=%+10f\left(\sin\frac43a-%10f\sin%1fa\right)' % (i,C[n],l0/li[n],li[n]),
r'\qquad\text{for }0 \le a \le \frac32\pi')
```
$$q_1= -0.637609\left(\sin\frac43a- 8.461449\sin0.157577a\right) \qquad\text{for }0 \le a \le \frac32\pi$$
$$q_2= +3.691468\left(\sin\frac43a- 1.013725\sin1.315281a\right) \qquad\text{for }0 \le a \le \frac32\pi$$
$$q_3= -0.016167\left(\sin\frac43a- 0.749290\sin1.779463a\right) \qquad\text{for }0 \le a \le \frac32\pi$$
### Free vibration phase, $a\ge 3\pi/2 = a_1$
When the forced phase ends, the system is in free vibration and we can determine the constants of integration by requiring that the displacements and velocities of the free vibration equal those of the forced response at $a=a_1$.
\begin{align}
+ (\cos\lambda_i a_1)\, A_i + (\sin\lambda_i a_1)\, B_i &=
q_i(a_1) \\
- (\sin\lambda_i a_1)\, A_i + (\cos\lambda_i a_1)\, B_i &=
\frac{\dot q_i(a_1)}{\lambda_i}
\end{align}
Because the coefficients form an orthogonal matrix,
\begin{align}
A_i &= + (\cos\lambda_i a_1)\, q_i(a_1)
- (\sin\lambda_i a_1)\, \frac{\dot q_i(a_1)}{\lambda_i}\\
B_i &= + (\sin\lambda_i a_1)\, q_i(a_1)
+ (\cos\lambda_i a_1)\, \frac{\dot q_i(a_1)}{\lambda_i}.
\end{align}
```python
a1 = 3*pi/2
q_a1 = C*(sin(l0*a1)-l0*sin(li*a1)/li)
v_a1 = C*l0*(cos(l0*a1)-cos(li*a1))
ABs = []
for i in range(3):
b = array((q_a1[i], v_a1[i]/li[i]))
A = array(((+cos(li[i]*a1), -sin(li[i]*a1)),
(+sin(li[i]*a1), +cos(li[i]*a1))))
ABs.append(A@b)
ABs = array(ABs)
```
#### Analytical expressions
```python
display(Latex(r'Modal responses for $a_1 \le a$.'))
for n in range(3):
i, l, A_, B_ = n+1, li[n], *ABs[n]
display(Latex((r'$$q_{%d} = '+
r'%+6.3f\cos%6.3fa '+
r'%+6.3f\sin%6.3fa$$')%(i, A_, l, B_, l)))
```
Modal responses for $a_1 \le a$.
$$q_{1} = +3.648\cos 0.158a +1.420\sin 0.158a$$
$$q_{2} = +0.318\cos 1.315a -0.014\sin 1.315a$$
$$q_{3} = +0.010\cos 1.779a +0.018\sin 1.779a$$
#### Stitching the two responses
We must evaluate numerically the analytical responses
```python
ac = a[:,None]
q = where(ac<=a1,
C*(sin(l0*ac)-l0*sin(li*ac)/li),
ABs[:,0]*cos(li*ac) + ABs[:,1]*sin(li*ac))
```
#### Plotting the Analytical Response
First, we zoom around $a_1$ to verify the continuity of displacements and velocities
```python
# #### Plot zooming around a1
low, hi = int(0.8*a1*nsppi/pi), int(1.2*a1*nsppi/pi)
for i, line in enumerate(plt.plot(a[low:hi]/pi, q[low:hi]), 1):
line.set_label('$q_{%d}$'%i)
plt.title('Modal Responses, zoom on transition zone')
plt.xlabel(r'$\omega_0 t/\pi$')
plt.legend(loc='best')
plt.show()
```
next, the modal responses over the interval $0 \le a \le 16\pi$
```python
# #### Plot in 0 ≤ a ≤ 16 pi
for i, line in enumerate(plt.plot(a/pi, q), 1):
line.set_label('$q_{%d}$'%i)
plt.title('Modal Responses')
plt.xlabel(r'$\omega_0 t/\pi$')
plt.legend(loc='best');
plt.xticks()
plt.show();
```
### Nodal responses
```python
x = [email protected]
```
Why `x = [email protected]` rather than `x = Psi@q`? Because for different reasons (mostly, ease of use with the plotting libraries) we have all the response arrays organized in the shape of `(Nsteps × 3)`.
That's equivalent to say that `q` and `x`, the Pyton objects, are isomorph to $\boldsymbol q^T$ and $\boldsymbol x^T$ and because it is $$\boldsymbol x^T = (\boldsymbol\Psi \boldsymbol q)^T = \boldsymbol q^T \boldsymbol \Psi^T,$$
in Python to write `x = [email protected]` we have.
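A quick way to convince yourself of the equivalence (a small check, not in the original notebook; `allclose` is a NumPy routine, assumed to be in scope like the other NumPy names used here):
```python
# row-wise Psi @ q_i is the same as q @ Psi.T on the stacked array
x_alt = (Psi @ q.T).T
print(allclose(x_alt, q @ Psi.T))  # True
```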
That said, here are the plots of the nodal responses. Compare them with the numerical solutions obtained earlier.
```python
for i, line in enumerate(plt.plot(a/pi, x), 1):
line.set_label('$x_{%d}/\delta$'%i)
plt.title('Normalized Nodal Displacements — analytical solution')
plt.xlabel(r'$\omega_0 t / \pi$')
plt.legend(loc='best')
plt.show();
```
| 3a584926035e297c67f7c80703bc5b661a72f70b | 375,635 | ipynb | Jupyter Notebook | dati_2017/hw03/01.ipynb | shishitao/boffi_dynamics | 365f16d047fb2dbfc21a2874790f8bef563e0947 | [
"MIT"
]
| null | null | null | dati_2017/hw03/01.ipynb | shishitao/boffi_dynamics | 365f16d047fb2dbfc21a2874790f8bef563e0947 | [
"MIT"
]
| null | null | null | dati_2017/hw03/01.ipynb | shishitao/boffi_dynamics | 365f16d047fb2dbfc21a2874790f8bef563e0947 | [
"MIT"
]
| 2 | 2019-06-23T12:32:39.000Z | 2021-08-15T18:33:55.000Z | 243.12945 | 56,442 | 0.851058 | true | 41,173 | Qwen/Qwen-72B | 1. YES
2. YES | 0.870597 | 0.752013 | 0.6547 | __label__yue_Hant | 0.074375 | 0.359419 |
# ***Introduction to Radar Using Python and MATLAB***
## Andy Harrison - Copyright (C) 2019 Artech House
<br/>
# Bistatic Radar Range Equation
***
The power at the receiving radar for a bistatic configuration is given by (Equation 4.60)
\begin{equation}
P_{radar} = \frac{P_t\, G_t(\theta, \phi)\, G_r(\theta, \phi)\, \sigma(\theta, \phi)\, \lambda^2}{(4\pi)^3\, r_t^2\, r_r^2} \hspace{0.5in} \text{(W)}
\end{equation}
***
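Before turning to the library routine used below, the expression above can be evaluated directly with NumPy. The short example below is an illustration only (the ranges `r_t` and `r_r` are arbitrary assumed values, not part of the tutorial):
```python
# Direct evaluation of the bistatic radar range equation (illustration only)
from numpy import pi

c = 299792458.0                # speed of light (m/s)
frequency_hz = 1e9             # assumed operating frequency (Hz)
wavelength = c / frequency_hz  # λ = c / f

p_t = 50e3                     # peak transmit power (W)
g_t = 10 ** (25.0 / 10.0)      # transmit antenna gain (from dB)
g_r = 10 ** (30.0 / 10.0)      # receive antenna gain (from dB)
sigma = 10 ** (-10.0 / 10.0)   # bistatic target RCS (from dBsm)
r_t, r_r = 10e3, 100e3         # assumed transmitter/receiver-to-target ranges (m)

p_r = (p_t * g_t * g_r * sigma * wavelength ** 2) / ((4.0 * pi) ** 3 * r_t ** 2 * r_r ** 2)
print(p_r)
```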
Begin by getting the library path
```python
import lib_path
```
Set the minimum and maximum range product (m²)
```python
range_product_min = 1e6
range_product_max = 1e7
```
Import the `linspace` routine from `numpy`
```python
from numpy import linspace
```
Set up the range product array
```python
range_product = linspace(range_product_min, range_product_max, 2000)
```
Set up the transmit antenna gain (dB), receive antenna gain (dB), bistatic target RCS (dBsm), peak transmit power (W), and the operating frequency (Hz)
```python
transmit_antenna_gain = 25.0
receive_antenna_gain = 30.0
bistatic_target_rcs = -10.0
peak_power = 50e3
frequency = 1e9
```
Set up the input args
```python
kwargs = {'transmit_target_range': 1.0,
'receive_target_range': range_product,
'peak_power': peak_power,
'transmit_antenna_gain': 10 ** (transmit_antenna_gain / 10.0),
'receive_antenna_gain': 10 ** (receive_antenna_gain / 10.0),
'frequency': frequency,
'bistatic_target_rcs': 10 ** (bistatic_target_rcs / 10.0)}
```
Import the `power_at_radar` routine from `bistatic_radar_range`
```python
from Libs.radar_range.bistatic_radar_range import power_at_radar
```
Calculate the power at the receiving radar
```python
power_at_radar = power_at_radar(**kwargs)
```
Import the `matplotlib` routines and `log10` from `numpy` for displaying the results
```python
from matplotlib import pyplot as plt
from numpy import log10
```
```python
# Set the figure size
plt.rcParams["figure.figsize"] = (15, 10)
# Display the results
plt.plot(range_product / 1.0e6, power_at_radar, '')
# Set the plot title and labels
plt.title('Bistatic Power at the Receiver', size=14)
plt.xlabel('Range Product (km$^2$)', size=12)
plt.ylabel('Power at Receiver (W)', size=12)
# Set the tick label size
plt.tick_params(labelsize=12)
# Turn on the grid
plt.grid(linestyle=':', linewidth=0.5)
```
| 9cbef858f25210473133460f36d59efd5b403753 | 48,051 | ipynb | Jupyter Notebook | jupyter/Chapter04/power_at_radar_bistatic.ipynb | mberkanbicer/software | 89f8004f567129216b92c156bbed658a9c03745a | [
"Apache-2.0"
]
| null | null | null | jupyter/Chapter04/power_at_radar_bistatic.ipynb | mberkanbicer/software | 89f8004f567129216b92c156bbed658a9c03745a | [
"Apache-2.0"
]
| null | null | null | jupyter/Chapter04/power_at_radar_bistatic.ipynb | mberkanbicer/software | 89f8004f567129216b92c156bbed658a9c03745a | [
"Apache-2.0"
]
| null | null | null | 177.309963 | 42,424 | 0.916942 | true | 699 | Qwen/Qwen-72B | 1. YES
2. YES | 0.843895 | 0.782662 | 0.660485 | __label__eng_Latn | 0.86846 | 0.372859 |
```python
from logicqubit.logic import *
from cmath import *
import numpy as np
import sympy as sp
import scipy
from random import randrange
from scipy.optimize import *
import matplotlib.pyplot as plt
```
Cuda is not available!
logicqubit version 1.5.8
```python
gates = Gates()
ID = gates.ID()
X = gates.X()
Y = gates.Y()
Z = gates.Z()
```
```python
IIII = ID.kron(ID).kron(ID).kron(ID)
XXXX = X.kron(X).kron(X).kron(X)
YYYY = Y.kron(Y).kron(Y).kron(Y)
ZZZZ = Z.kron(Z).kron(Z).kron(Z)
signals = [ZZZZ.get()[i,i] for i in range(len(ZZZZ.get()))]
signals # used in <psi|ZZZZ|psi>
```
[1, -1, -1, 1, -1, 1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1]
```python
H = IIII*3 + ZZZZ*8
min(scipy.linalg.eig(H.get())[0])
```
(-5+0j)
```python
def _ansatz(reg, params):
n_qubits = len(reg)
depth = n_qubits
for i in range(depth):
reg[1].CNOT(reg[0])
for j in range(n_qubits):
reg[i].RY(params[j])
def ansatz(reg, params):
n_qubits = len(reg)
depth = n_qubits
for i in range(depth):
for j in range(n_qubits):
if(j < n_qubits-1):
reg[j+1].CNOT(reg[j])
reg[i].RY(params[j])
def ansatz_4q(q1, q2, q3, q4, params):
q1.RY(params[0])
q2.RY(params[1])
q3.RY(params[2])
q4.RY(params[3])
q2.CNOT(q1)
q3.CNOT(q2)
q4.CNOT(q3)
q1.RX(params[4])
q2.RX(params[5])
q3.RX(params[6])
q4.RX(params[7])
q2.CNOT(q1)
q3.CNOT(q2)
q4.CNOT(q3)
q1.RY(params[8])
q2.RY(params[9])
q3.RY(params[10])
q4.RY(params[11])
q2.CNOT(q1)
q3.CNOT(q2)
q4.CNOT(q3)
q1.RY(params[12])
q2.RY(params[13])
q3.RY(params[14])
q4.RY(params[15])
q2.CNOT(q1)
q3.CNOT(q2)
q4.CNOT(q3)
```
```python
def expectation_4q(params):
logicQuBit = LogicQuBit(4)
q1 = Qubit()
q2 = Qubit()
q3 = Qubit()
q4 = Qubit()
    ansatz_4q(q1,q2,q3,q4,params)
#ansatz([q1,q2,q3,q4],params)
psi = logicQuBit.getPsi()
return (psi.adjoint()*H*psi).get()[0][0]
initial_values = np.random.uniform(-np.pi, np.pi, 16)
minimum = minimize(expectation_4q, initial_values, method='Nelder-Mead',options={'xtol': 1e-10, 'ftol': 1e-10})
print(minimum)
```
final_simplex: (array([[ 0.97108118, -0.26330511, -1.30749121, -0.6436676 , -3.14159267,
1.23238916, 0.63362969, -1.27557381, 0.24058204, 0.61156406,
-0.61156407, 2.49663256, 0.96096669, -0.58769253, 1.62413606,
-1.57079633],
[ 0.97108118, -0.26330511, -1.30749121, -0.6436676 , -3.14159267,
1.23238916, 0.63362969, -1.27557381, 0.24058204, 0.61156406,
-0.61156407, 2.49663256, 0.96096669, -0.58769253, 1.62413606,
-1.57079633],
[ 0.97108118, -0.26330511, -1.30749121, -0.6436676 , -3.14159267,
1.23238916, 0.63362969, -1.27557381, 0.24058204, 0.61156406,
-0.61156407, 2.49663256, 0.96096669, -0.58769253, 1.62413606,
-1.57079633],
[ 0.97108118, -0.26330511, -1.30749121, -0.6436676 , -3.14159267,
1.23238916, 0.63362969, -1.27557381, 0.24058204, 0.61156406,
-0.61156407, 2.49663256, 0.96096669, -0.58769253, 1.62413606,
-1.57079633],
[ 0.97108118, -0.26330511, -1.30749121, -0.6436676 , -3.14159267,
1.23238916, 0.63362969, -1.27557381, 0.24058204, 0.61156406,
-0.61156407, 2.49663256, 0.96096669, -0.58769253, 1.62413606,
-1.57079633],
[ 0.97108118, -0.26330511, -1.30749121, -0.6436676 , -3.14159267,
1.23238916, 0.63362969, -1.27557381, 0.24058204, 0.61156406,
-0.61156407, 2.49663256, 0.96096669, -0.58769253, 1.62413606,
-1.57079633],
[ 0.97108118, -0.26330511, -1.30749121, -0.6436676 , -3.14159267,
1.23238916, 0.63362969, -1.27557381, 0.24058204, 0.61156406,
-0.61156407, 2.49663256, 0.96096669, -0.58769253, 1.62413606,
-1.57079633],
[ 0.97108118, -0.26330511, -1.30749121, -0.6436676 , -3.14159267,
1.23238916, 0.63362969, -1.27557381, 0.24058204, 0.61156406,
-0.61156407, 2.49663256, 0.96096669, -0.58769253, 1.62413606,
-1.57079633],
[ 0.97108118, -0.26330511, -1.30749121, -0.6436676 , -3.14159267,
1.23238916, 0.63362969, -1.27557381, 0.24058204, 0.61156406,
-0.61156407, 2.49663256, 0.96096669, -0.58769253, 1.62413606,
-1.57079633],
[ 0.97108118, -0.26330511, -1.30749121, -0.6436676 , -3.14159267,
1.23238916, 0.63362969, -1.27557381, 0.24058204, 0.61156406,
-0.61156407, 2.49663256, 0.96096669, -0.58769253, 1.62413606,
-1.57079633],
[ 0.97108118, -0.26330511, -1.30749121, -0.6436676 , -3.14159267,
1.23238916, 0.63362969, -1.27557381, 0.24058204, 0.61156406,
-0.61156407, 2.49663256, 0.96096669, -0.58769253, 1.62413606,
-1.57079633],
[ 0.97108118, -0.26330511, -1.30749121, -0.6436676 , -3.14159267,
1.23238916, 0.63362969, -1.27557381, 0.24058204, 0.61156406,
-0.61156407, 2.49663256, 0.96096669, -0.58769253, 1.62413606,
-1.57079633],
[ 0.97108118, -0.26330511, -1.30749121, -0.6436676 , -3.14159267,
1.23238916, 0.63362969, -1.27557381, 0.24058204, 0.61156406,
-0.61156407, 2.49663256, 0.96096669, -0.58769253, 1.62413606,
-1.57079633],
[ 0.97108118, -0.26330511, -1.30749121, -0.6436676 , -3.14159267,
1.23238916, 0.63362969, -1.27557381, 0.24058204, 0.61156406,
-0.61156407, 2.49663256, 0.96096669, -0.58769253, 1.62413606,
-1.57079633],
[ 0.97108118, -0.26330511, -1.30749121, -0.6436676 , -3.14159267,
1.23238916, 0.63362969, -1.27557381, 0.24058204, 0.61156406,
-0.61156407, 2.49663256, 0.96096669, -0.58769253, 1.62413606,
-1.57079633],
[ 0.97108118, -0.26330511, -1.30749121, -0.6436676 , -3.14159267,
1.23238916, 0.63362969, -1.27557381, 0.24058204, 0.61156406,
-0.61156407, 2.49663256, 0.96096669, -0.58769253, 1.62413606,
-1.57079633],
[ 0.97108118, -0.26330511, -1.30749121, -0.6436676 , -3.14159267,
1.23238916, 0.63362969, -1.27557381, 0.24058204, 0.61156406,
-0.61156407, 2.49663256, 0.96096669, -0.58769253, 1.62413606,
-1.57079633]]), array([-5., -5., -5., -5., -5., -5., -5., -5., -5., -5., -5., -5., -5.,
-5., -5., -5., -5.]))
fun: -5.000000000000004
message: 'Optimization terminated successfully.'
nfev: 1735
nit: 1043
status: 0
success: True
x: array([ 0.97108118, -0.26330511, -1.30749121, -0.6436676 , -3.14159267,
1.23238916, 0.63362969, -1.27557381, 0.24058204, 0.61156406,
-0.61156407, 2.49663256, 0.96096669, -0.58769253, 1.62413606,
-1.57079633])
```python
def expectation_value(measurements):
probabilities = np.array(measurements)
states = signals
expectation = np.sum(states * probabilities) # <psi|ZZZZ|psi>
return expectation
def sigma_xxxx(params):
logicQuBit = LogicQuBit(4, first_left = False)
q1 = Qubit()
q2 = Qubit()
q3 = Qubit()
q4 = Qubit()
ansatz_4q(q1,q2,q3,q4,params)
    # measurements in the X basis
q1.RY(-pi/2)
q2.RY(-pi/2)
q3.RY(-pi/2)
q4.RY(-pi/2)
result = logicQuBit.Measure([q1,q2,q3,q4])
result = expectation_value(result)
return result
def sigma_yyyy(params):
logicQuBit = LogicQuBit(4, first_left = False)
q1 = Qubit()
q2 = Qubit()
q3 = Qubit()
q4 = Qubit()
ansatz_4q(q1,q2,q3,q4,params)
    # measurements in the Y basis
q1.RX(pi/2)
q2.RX(pi/2)
q3.RX(pi/2)
q4.RX(pi/2)
result = logicQuBit.Measure([q1,q2,q3,q4])
result = expectation_value(result)
return result
def sigma_zzzz(params):
logicQuBit = LogicQuBit(4, first_left = False)
q1 = Qubit()
q2 = Qubit()
q3 = Qubit()
q4 = Qubit()
ansatz_4q(q1,q2,q3,q4,params)
result = logicQuBit.Measure([q1,q2,q3,q4])
result = expectation_value(result)
return result
def expectation_energy(params):
xxxx = sigma_xxxx(params)
yyyy = sigma_yyyy(params)
zzzz = sigma_zzzz(params)
result = 3 + 8*zzzz
return result
```
```python
initial_values = np.random.uniform(-np.pi, np.pi, 16)
minimum = minimize(expectation_energy, initial_values, method='Nelder-Mead')
print(minimum)
```
final_simplex: (array([[-3.14158631e+00, -1.56722459e+00, 4.30413762e-01,
9.60212067e-02, 3.14158852e+00, 1.56202882e+00,
-2.35871198e+00, 3.21365907e+00, 1.15642881e-06,
1.57489919e+00, -9.75093207e-01, 3.13672123e+00,
2.87586517e+00, -2.69623135e-03, -5.14798803e-02,
-1.57040064e+00],
[-3.14159290e+00, -1.56719816e+00, 4.30399183e-01,
9.60238298e-02, 3.14158941e+00, 1.56201745e+00,
-2.35870998e+00, 3.21363982e+00, 2.77304237e-06,
1.57491768e+00, -9.75103211e-01, 3.13669459e+00,
2.87585784e+00, -2.69622332e-03, -5.14830310e-02,
-1.57040753e+00],
[-3.14159594e+00, -1.56716079e+00, 4.30383177e-01,
9.60275892e-02, 3.14159095e+00, 1.56197556e+00,
-2.35872335e+00, 3.21365304e+00, -2.24314318e-06,
1.57496858e+00, -9.75105170e-01, 3.13663393e+00,
2.87590226e+00, -2.69620394e-03, -5.14862760e-02,
-1.57040722e+00],
[-3.14158731e+00, -1.56721675e+00, 4.30410025e-01,
9.60208689e-02, 3.14159749e+00, 1.56200806e+00,
-2.35872571e+00, 3.21365441e+00, 7.88620818e-06,
1.57492017e+00, -9.75093757e-01, 3.13670683e+00,
2.87586357e+00, -2.69622854e-03, -5.14815240e-02,
-1.57040637e+00],
[-3.14159070e+00, -1.56729464e+00, 4.30431946e-01,
9.60176831e-02, 3.14158277e+00, 1.56202599e+00,
-2.35872666e+00, 3.21366770e+00, -1.91714491e-06,
1.57484386e+00, -9.75089789e-01, 3.13681646e+00,
2.87590312e+00, -2.69621071e-03, -5.14734134e-02,
-1.57041014e+00],
[-3.14158315e+00, -1.56728858e+00, 4.30435126e-01,
9.60172187e-02, 3.14159478e+00, 1.56204559e+00,
-2.35870798e+00, 3.21366250e+00, -5.05876665e-07,
1.57484518e+00, -9.75090183e-01, 3.13680752e+00,
2.87587776e+00, -2.69623599e-03, -5.14730307e-02,
-1.57040580e+00],
[-3.14157816e+00, -1.56719712e+00, 4.30411769e-01,
9.60207428e-02, 3.14159264e+00, 1.56200536e+00,
-2.35872259e+00, 3.21366700e+00, 1.04248414e-05,
1.57492879e+00, -9.75087896e-01, 3.13668979e+00,
2.87586088e+00, -2.69624670e-03, -5.14818850e-02,
-1.57040181e+00],
[-3.14160136e+00, -1.56725066e+00, 4.30409984e-01,
9.60226334e-02, 3.14158674e+00, 1.56202695e+00,
-2.35870787e+00, 3.21364738e+00, -7.22034267e-06,
1.57487374e+00, -9.75104499e-01, 3.13675810e+00,
2.87589110e+00, -2.69620398e-03, -5.14782178e-02,
-1.57040323e+00],
[-3.14158783e+00, -1.56724599e+00, 4.30408936e-01,
9.60224538e-02, 3.14159065e+00, 1.56203393e+00,
-2.35871759e+00, 3.21361216e+00, -4.46921288e-06,
1.57488294e+00, -9.75099471e-01, 3.13676144e+00,
2.87589427e+00, -2.69618524e-03, -5.14806801e-02,
-1.57041244e+00],
[-3.14159147e+00, -1.56725034e+00, 4.30417971e-01,
9.60196423e-02, 3.14157799e+00, 1.56204219e+00,
-2.35871434e+00, 3.21365347e+00, 4.36422831e-06,
1.57487782e+00, -9.75097877e-01, 3.13675142e+00,
2.87584030e+00, -2.69623005e-03, -5.14774338e-02,
-1.57041203e+00],
[-3.14159400e+00, -1.56725668e+00, 4.30420379e-01,
9.60200547e-02, 3.14159675e+00, 1.56204103e+00,
-2.35870832e+00, 3.21364758e+00, -4.09476034e-06,
1.57487155e+00, -9.75093423e-01, 3.13676714e+00,
2.87588052e+00, -2.69621979e-03, -5.14781746e-02,
-1.57039645e+00],
[-3.14158998e+00, -1.56719152e+00, 4.30394447e-01,
9.60257389e-02, 3.14159518e+00, 1.56202714e+00,
-2.35870377e+00, 3.21363012e+00, -3.88268472e-06,
1.57492483e+00, -9.75106292e-01, 3.13667522e+00,
2.87586945e+00, -2.69621209e-03, -5.14835957e-02,
-1.57040334e+00],
[-3.14158936e+00, -1.56723445e+00, 4.30400634e-01,
9.60243830e-02, 3.14158351e+00, 1.56200936e+00,
-2.35872892e+00, 3.21362481e+00, -3.33030526e-06,
1.57490035e+00, -9.75107650e-01, 3.13673237e+00,
2.87590532e+00, -2.69617691e-03, -5.14796240e-02,
-1.57040644e+00],
[-3.14159040e+00, -1.56720078e+00, 4.30402090e-01,
9.60229989e-02, 3.14158132e+00, 1.56202323e+00,
-2.35870847e+00, 3.21364298e+00, 5.83893432e-06,
1.57491410e+00, -9.75102822e-01, 3.13669864e+00,
2.87584049e+00, -2.69623280e-03, -5.14821907e-02,
-1.57040586e+00],
[-3.14159408e+00, -1.56716667e+00, 4.30392228e-01,
9.60254556e-02, 3.14158885e+00, 1.56201798e+00,
-2.35869895e+00, 3.21364138e+00, -1.79744302e-06,
1.57494772e+00, -9.75099205e-01, 3.13666209e+00,
2.87585853e+00, -2.69623035e-03, -5.14870897e-02,
-1.57038950e+00],
[-3.14159525e+00, -1.56727306e+00, 4.30421870e-01,
9.60192563e-02, 3.14159534e+00, 1.56200342e+00,
-2.35873442e+00, 3.21366520e+00, 4.92719365e-06,
1.57485962e+00, -9.75097477e-01, 3.13677827e+00,
2.87589651e+00, -2.69620920e-03, -5.14751370e-02,
-1.57040971e+00],
[-3.14158461e+00, -1.56722266e+00, 4.30413830e-01,
9.60223626e-02, 3.14159317e+00, 1.56200423e+00,
-2.35871096e+00, 3.21367190e+00, -3.31131891e-06,
1.57488523e+00, -9.75094012e-01, 3.13673916e+00,
2.87591367e+00, -2.69623017e-03, -5.14784284e-02,
-1.57040153e+00]]), array([-4.99997456, -4.99997456, -4.99997456, -4.99997456, -4.99997456,
-4.99997456, -4.99997456, -4.99997456, -4.99997456, -4.99997456,
-4.99997456, -4.99997456, -4.99997456, -4.99997456, -4.99997456,
-4.99997456, -4.99997456]))
fun: -4.999974564586708
message: 'Optimization terminated successfully.'
nfev: 1774
nit: 1286
status: 0
success: True
x: array([-3.14158631e+00, -1.56722459e+00, 4.30413762e-01, 9.60212067e-02,
3.14158852e+00, 1.56202882e+00, -2.35871198e+00, 3.21365907e+00,
1.15642881e-06, 1.57489919e+00, -9.75093207e-01, 3.13672123e+00,
2.87586517e+00, -2.69623135e-03, -5.14798803e-02, -1.57040064e+00])
```python
def gradient(params, evaluate):
n_params = params.shape[0]
shift = pi/2
gradients = np.zeros(n_params)
for i in range(n_params):
#parameter shift rule
shift_vect = np.array([shift if j==i else 0 for j in range(n_params)])
shift_right = params + shift_vect
shift_left = params - shift_vect
expectation_right = evaluate(shift_right)
expectation_left = evaluate(shift_left)
gradients[i] = expectation_right - expectation_left
return gradients
```
```python
params = np.random.uniform(-np.pi, np.pi, 16)
last_params = np.zeros(16)
```
```python
lr = 0.05
err = 1
while err > 1e-15:
grad = gradient(params, expectation_energy)
params = params - lr*grad
err = abs(sum(params - last_params))
last_params = np.array(params)
print(err)
```
<ipython-input-24-cef20e405c15>:15: ComplexWarning: Casting complex values to real discards the imaginary part
gradients[i] = expectation_right - expectation_left
0.08546257568133853
0.15416069170439184
0.0010030077885998523
0.005848087390222023
0.0014074649937678707
0.00037367540024457746
0.000525058931988287
0.0003059735653656581
0.00021555852567467504
0.00013823997861606152
9.094510252460886e-05
5.9029449401171163e-05
3.842826257582921e-05
2.4959655773759692e-05
1.6213877932491627e-05
1.0527286879780107e-05
6.83443283255869e-06
4.43633101898655e-06
2.8795013184979013e-06
1.868905151303224e-06
1.2129531050675268e-06
7.872115472817853e-07
5.108967460198954e-07
3.315667485015439e-07
2.151822029450301e-07
1.396497364414273e-07
9.063018180377469e-08
5.881727571654949e-08
3.81712628172437e-08
2.4772387252625094e-08
1.607677746484626e-08
1.0433500063911083e-08
6.771128102656121e-09
4.3943220173758846e-09
2.8518238881503066e-09
1.8507753107854796e-09
1.201114319115959e-09
7.794980216857539e-10
5.058791163747856e-10
3.2830427265650997e-10
2.1306334474502364e-10
1.3827272660194012e-10
8.973621845598245e-11
5.823808102434214e-11
3.779376811507973e-11
2.452837932764851e-11
1.5917933637865644e-11
1.0330625244137082e-11
6.70330457808177e-12
4.351852211925689e-12
2.823519196226698e-12
1.8316459460265833e-12
1.1899370377932428e-12
7.709388682997087e-13
5.015987625256457e-13
3.2462921240039577e-13
2.107203300738547e-13
1.3700152123874432e-13
8.859579736508749e-14
5.81756864903582e-14
3.708144902248023e-14
2.4646951146678475e-14
1.6653345369377348e-14
9.769962616701378e-15
6.661338147750939e-15
3.3306690738754696e-15
3.3306690738754696e-15
1.9984014443252818e-15
8.881784197001252e-16
```python
expectation_energy(params)
```
(-4.999999999999998+0j)
```python
```
| 60f8b38358595bf12ac43e1cc5e4c2ca075175e0 | 24,544 | ipynb | Jupyter Notebook | vqe_4q.ipynb | clnrp/quantum_machine_learning | 5528a440d230b0613f1bd44a81a2a352441c76e5 | [
"MIT"
]
| null | null | null | vqe_4q.ipynb | clnrp/quantum_machine_learning | 5528a440d230b0613f1bd44a81a2a352441c76e5 | [
"MIT"
]
| null | null | null | vqe_4q.ipynb | clnrp/quantum_machine_learning | 5528a440d230b0613f1bd44a81a2a352441c76e5 | [
"MIT"
]
| null | null | null | 38.171073 | 121 | 0.504889 | true | 8,157 | Qwen/Qwen-72B | 1. YES
2. YES | 0.868827 | 0.611382 | 0.531185 | __label__krc_Cyrl | 0.117852 | 0.07245 |
```python
from sympy import init_session
init_session()
```
IPython console for SymPy 1.5.1 (Python 3.6.9-64-bit) (ground types: gmpy)
These commands were executed:
>>> from __future__ import division
>>> from sympy import *
>>> x, y, z, t = symbols('x y z t')
>>> k, m, n = symbols('k m n', integer=True)
>>> f, g, h = symbols('f g h', cls=Function)
>>> init_printing()
Documentation can be found at https://docs.sympy.org/1.5.1/
```python
#init_printing(order="grlex")
```
```python
# this is for rac[5,3]
#
# f(kappa)
# lambda0(alpha, beta)(gamma,delta)(epsilon)/(...+omega+rho)
#
k, l0, a, b, g, d, e, o, r = symbols('k l0 a b g d e o r', real=True, positive=True)
```
```python
# Roman's new factorization:
f1 = k**2 + 2*a**2*k + a**4+b**2
f2 = 1 + g**2*k + d**2*k**2
f3 = 1 + e**2*k
A = a**4+b**2
B = 2*a**2 + g**2*A + e**2*A
f = l0 * f1 * f2 * f3 / (A + B*k + o**2*k**2 + r**2*k**3)
f
```
```python
diff(f,l0)
```
```python
da=diff(f,a)
da.factor()
```
```python
db=diff(f,b)
db.factor()
```
```python
dg=diff(f,g)
dg.factor()
```
```python
dd=diff(f,d)
dd.factor()
```
```python
de=diff(f,e)
de.factor()
```
```python
do=diff(f,o)
do.factor()
```
```python
dr=diff(f,r)
dr.factor()
```
```python
```
```python
```
```python
solveset(f1, k)
```
```python
expand((-a**2 + 1j*b)**2)
```
```python
#
# in f2: d2 = 1/(a**4+b**2) and g2=2*a**2/(a**4+b**2)
#
f3 = k**2/(a**4+b**2) + k*2*a**2/(a**4+b**2) + 1
solveset(f3, k)
```
```python
```
| 2cef6dc1e2835a933e88d966e6d17eeb3dd2aa30 | 111,765 | ipynb | Jupyter Notebook | notebooks/RAC-53_derivatives.ipynb | jeremydavis-2/Jolanta-by-dvr | 025f7392ffc40c12ede2f07efefd1f2b0dcd8d35 | [
"Apache-2.0"
]
| null | null | null | notebooks/RAC-53_derivatives.ipynb | jeremydavis-2/Jolanta-by-dvr | 025f7392ffc40c12ede2f07efefd1f2b0dcd8d35 | [
"Apache-2.0"
]
| null | null | null | notebooks/RAC-53_derivatives.ipynb | jeremydavis-2/Jolanta-by-dvr | 025f7392ffc40c12ede2f07efefd1f2b0dcd8d35 | [
"Apache-2.0"
]
| null | null | null | 232.359667 | 12,572 | 0.873789 | true | 609 | Qwen/Qwen-72B | 1. YES
2. YES | 0.893309 | 0.771844 | 0.689495 | __label__eng_Latn | 0.241324 | 0.440259 |
# Example 2: One-dimensional heat flow (exs2.py)
This example is from the CALFEM manual.
**Purpose:**
Analysis of one-dimensional heat flow.
**Description:**
Consider a wall built up of concrete and thermal insulation. The outdoor
temperature is −17 ◦C and the temperature inside is 20 ◦C. At the inside of
the thermal insulation there is a heat source yielding $10 ~W/m^2$.
The wall is subdivided into five elements and the one-dimensional spring
(analogy) element `spring1e` is used. Equivalent spring stiffnesses are
$k_i = λ A/L$ for thermal conductivity and $k_i = A/R$ for thermal
surface resistance. Corresponding spring stiffnesses per $m^2$ of the wall
are:
\begin{align}
k_1 &= 1/0.04 = 25.0 ~W/K \\
k_2 &= 1.7/0.070 = 24.3 ~W/K \\
k_3 &= 0.040/0.100 = 0.4 ~W/K \\
k_4 &= 1.7/0.100 = 17.0 ~W/K \\
k_5 &= 1/0.13 = 7.7 ~W/K
\end{align}
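As a quick check (not part of the original example), these stiffnesses follow directly from the layer data quoted above:
```python
# Spring stiffnesses per m^2 of wall: k = lambda*A/L for conduction,
# k = A/R for surface resistance, with A = 1 m^2 (material assignments
# inferred from the values quoted above).
k1 = 1.0 / 0.04     # outdoor surface resistance, R = 0.04 m^2*K/W
k2 = 1.7 / 0.070    # concrete, lambda = 1.7 W/(m*K), L = 0.070 m
k3 = 0.040 / 0.100  # insulation, lambda = 0.040 W/(m*K), L = 0.100 m
k4 = 1.7 / 0.100    # concrete, lambda = 1.7 W/(m*K), L = 0.100 m
k5 = 1.0 / 0.13     # indoor surface resistance, R = 0.13 m^2*K/W
print(k1, k2, k3, k4, k5)   # ≈ 25.0, 24.3, 0.4, 17.0, 7.7
```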
A global system matrix K and a heat flow vector f are defined. The heat source
inside the wall is considered by setting $f_4 = 10$. The element matrices
`Ke` are computed using `spring1e`, and the function `assem` assembles the
global stiffness matrix.
The system of equations is solved using `solveq`, taking into account the
boundary conditions in `bc` and `bcVal`. The prescribed temperatures are
$T_1 = −17 ~^{\circ}C$ and $T_2 = 20~^{\circ}C$.
Necessary modules are first imported.
```python
import numpy as np
import calfem.core as cfc
```
Next, the element topology is defined
```python
Edof = np.array([
[1,2],
[2,3],
[3,4],
[4,5],
[5,6]
])
```
Create stiffness matrix K and load vector f
```python
K = np.mat(np.zeros((6,6)))
f = np.mat(np.zeros((6,1)))
f[3] = 10.0
```
Define element properties (ep) and create element matrices for the different material layers.
```python
ep1 = 25.0
ep2 = 24.3
ep3 = 0.4
ep4 = 17.0
ep5 = 7.7
```
Element stiffness matrices
```python
Ke1 = cfc.spring1e(ep1)
Ke2 = cfc.spring1e(ep2)
Ke3 = cfc.spring1e(ep3)
Ke4 = cfc.spring1e(ep4)
Ke5 = cfc.spring1e(ep5)
```
Assemble all element matrices into the global stiffness matrix
```python
cfc.assem(Edof[0,:], K, Ke1)
cfc.assem(Edof[1,:], K, Ke2)
cfc.assem(Edof[2,:], K, Ke3)
cfc.assem(Edof[3,:], K, Ke4)
cfc.assem(Edof[4,:], K, Ke5)
print("Stiffness matrix K:")
print(K)
```
Stiffness matrix K:
[[ 25. -25. 0. 0. 0. 0. ]
[-25. 49.3 -24.3 0. 0. 0. ]
[ 0. -24.3 24.7 -0.4 0. 0. ]
[ 0. 0. -0.4 17.4 -17. 0. ]
[ 0. 0. 0. -17. 24.7 -7.7]
[ 0. 0. 0. 0. -7.7 7.7]]
Define the boundary conditions and solve the system of equations
```python
bc = np.array([1,6])
bcVal = np.array([-17.0, 20.0])
a,r = cfc.solveq(K, f, bc, bcVal)
print("Displacements a:")
print(a)
print("Reaction forces r:")
print(r)
```
Displacements a:
[[-17. ]
[-16.43842455]
[-15.86067203]
[ 19.23779344]
[ 19.47540439]
[ 20. ]]
Reaction forces r:
[[-1.40393862e+01]
[ 0.00000000e+00]
[ 0.00000000e+00]
[ 0.00000000e+00]
[ 5.68434189e-14]
[ 4.03938619e+00]]
| 0fc6225c93867b95da28cc22b3461c73cce8efca | 6,234 | ipynb | Jupyter Notebook | examples/.ipynb_checkpoints/exs2-checkpoint.ipynb | Karl-Eriksson/calfem-python | e9a88a85d3a73877ec99f7fbd1a296a44c3c9b22 | [
"MIT"
]
| 54 | 2016-04-11T19:12:13.000Z | 2022-02-22T07:15:39.000Z | examples/.ipynb_checkpoints/exs2-checkpoint.ipynb | Karl-Eriksson/calfem-python | e9a88a85d3a73877ec99f7fbd1a296a44c3c9b22 | [
"MIT"
]
| 13 | 2019-07-01T19:48:38.000Z | 2022-02-11T12:50:02.000Z | examples/.ipynb_checkpoints/exs2-checkpoint.ipynb | Karl-Eriksson/calfem-python | e9a88a85d3a73877ec99f7fbd1a296a44c3c9b22 | [
"MIT"
]
| 273 | 2017-08-01T10:29:09.000Z | 2022-02-16T14:02:36.000Z | 22.751825 | 99 | 0.486846 | true | 1,144 | Qwen/Qwen-72B | 1. YES
2. YES | 0.957912 | 0.833325 | 0.798252 | __label__eng_Latn | 0.910963 | 0.692939 |
# Simplified Arm Mode
## Introduction
This notebook presents the analytical derivations of the equations of motion for
three degrees of freedom and nine muscles arm model, some of them being
bi-articular, appropriately constructed to demonstrate both kinematic and
dynamic redundancy (e.g. $d < n < m$). The model is inspired from [1] with some
minor modifications and improvements.
## Model Constants
Abbreviations:
- DoFs: Degrees of Freedom
- EoMs: Equations of Motion
- KE: Kinetic Energy
- PE: Potential Energy
- CoM: center of mass
The following constants are used in the model:
- $m$ mass of a segment
- $I_{z_i}$ inertia around $z$-axis
- $L_i$ length of a segment
- $L_{c_i}$ length of the CoM as defined in local frame of a body
- $a_i$ muscle origin point as defined in the local frame of a body
- $b_i$ muscle insertion point as defined in the local frame of a body
- $g$ gravity
- $q_i$ are the generalized coordinates
- $u_i$ are the generalized speeds
- $\tau$ are the generalized forces
Please note that there are some differences from [1]: 1) $L_{g_i} \rightarrow
L_{c_i}$, 2) $a_i$ is always the muscle origin, 3) $b_i$ is always the muscle
insertion and 4) we don't use double indexing for the bi-articular muscles.
```python
# notebook general configuration
%load_ext autoreload
%autoreload 2
# imports and utilities
import sympy as sp
from IPython.display import display, Image
sp.interactive.printing.init_printing()
import logging
logging.basicConfig(level=logging.INFO)
# plot
%matplotlib inline
from matplotlib.pyplot import *
rcParams['figure.figsize'] = (10.0, 6.0)
# utility for displaying intermediate results
enable_display = True
def disp(*statement):
if (enable_display):
display(*statement)
```
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
```python
# construct model
from model import ArmModel
model = ArmModel(use_gravity=1, use_coordinate_limits=0, use_viscosity=0)
disp(model.constants)
```
## Dynamics
The simplified arm model has three DoFs and nine muscles, some of them being
bi-articular. The analytical expressions of the EoMs form is given by
\begin{equation}\label{equ:eom-standard-form}
M(q) \ddot{q} + C(q, \dot{q})\dot{q} + \tau_g(q) = \tau
\end{equation}
where $M \in \Re^{n \times n}$ represents the inertia mass matrix, $n$ the DoFs
of the model, $q, \dot{q}, \ddot{q} \in \Re^{n}$ the generalized coordinates and
their derivatives, $C \in \Re^{n \times n}$ the Coriolis and centrifugal matrix,
$\tau_g \in \Re^{n}$ the gravity contribution and $\tau$ the specified
generalized forces.
As the model is an open kinematic chain a simple procedure to derive the EoMs
can be followed. Assuming that the spatial velocity (translational, rotational)
of each body segment is given by $u_b = [v, \omega]^T \in \Re^{6 \times 1}$, the
KE of the system in body local coordinates is defined as
\begin{equation}\label{equ:spatial-ke}
K = \frac{1}{2} \sum\limits_{i=1}^{n_b} (m_i v_i^2 + I_i \omega_i^2) =
\frac{1}{2} \sum\limits_{i=1}^{n_b} u_i^T M_i u_i
\end{equation}
where $M_i = diag(m_i, m_i, m_i, [I_i]_{3 \times 3}) \in \Re^{6 \times 6}$
denotes the spatial inertia mass matrix, $m_i$ the mass and $I_i \in \Re^{3
\times 3}$ the inertia matrix of body $i$. The spatial quantities are related
to the generalized coordinates by the body Jacobian $u_b = J_b \dot{q}, \; J_b
\in \Re^{6 \times n}$. The total KE is coordinate invariant, thus it can be
expressed in different coordinate system
\begin{equation}\label{equ:ke-transformation}
K = \frac{1}{2} \sum\limits_{i=1}^{n_b} q^T J_i^T M_i J_i q
\end{equation}
Following the above definition, the inertia mass matrix of the system can be
written as
\begin{equation}\label{equ:mass-matrix}
M(q) = \sum\limits_{i=1}^{n_b} J_i^T M_i J_i
\end{equation}
Furthermore, the Coriolis and centrifugal forces $C(q, \dot{q}) \dot{q}$ can be
determined directly from the inertia mass matrix
\begin{equation}\label{equ:coriolis-matrix}
C_{ij}(q, \dot{q}) = \sum\limits_{k=1}^{n} \Gamma_{ijk} \; \dot{q}_k, \; i, j
\in [1, \dots n], \;
\Gamma_{ijk} = \frac{1}{2} (
\frac{\partial M_{ij}(q)}{\partial q_k} +
\frac{\partial M_{ik}(q)}{\partial q_j} -
\frac{\partial M_{kj}(q)}{\partial q_i})
\end{equation}
where the functions $\Gamma_{ijk}$ are called the Christoffel symbols. The
gravity contribution can be determined from the PE function
\begin{equation}\label{equ:gravity-pe}
\begin{gathered}
g(q) = \frac{\partial V(q)}{\partial q}, \; V(q) = \sum\limits_{i=1}^{n_b} m_i g h_i(q)
\end{gathered}
\end{equation}
where $h_i(q)$ denotes the vertical displacement of body $i$ with respect to the
ground. In this derivation we chose to collect all forces that act on the system
in the term $f(q, \dot{q})$.
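As an illustration of the Christoffel-symbol construction above (generic SymPy code, not part of the `ArmModel` class), the Coriolis matrix can be built directly from any symbolic, configuration-dependent mass matrix; here a toy 2-DoF example is used:
```python
# Illustration only: Coriolis matrix from the Christoffel symbols of a toy
# 2-DoF mass matrix (not the arm model's actual M).
q1, q2, u1, u2 = sp.symbols('q1 q2 u1 u2')
q_, u_ = [q1, q2], [u1, u2]

M_toy = sp.Matrix([[3 + 2*sp.cos(q2), 1 + sp.cos(q2)],
                   [1 + sp.cos(q2), 1]])

C_toy = sp.zeros(2, 2)
for i in range(2):
    for j in range(2):
        for k in range(2):
            # Christoffel symbol Γ_ijk built from partial derivatives of M
            gamma_ijk = sp.Rational(1, 2)*(sp.diff(M_toy[i, j], q_[k])
                                           + sp.diff(M_toy[i, k], q_[j])
                                           - sp.diff(M_toy[k, j], q_[i]))
            C_toy[i, j] += gamma_ijk*u_[k]

sp.simplify(C_toy)
```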
```python
# define the spatial coordinates for the CoM in terms of Lcs' and q's
disp(model.xc[1:])
# define CoM spatial velocities
disp(model.vc[1:])
#define CoM Jacobian
disp(model.Jc[1:])
```
```python
# generate the inertial mass matrix
M = model.M
for i in range(0, M.shape[0]):
for j in range(0, M.shape[1]):
disp('M_{' + str(i + 1) + ',' + str(j + 1) + '} = ', M[i, j])
```
```python
# total forces from Coriolis, centrafugal and gravity
f = model.f
for i in range(0, f.shape[0]):
disp('f_' + str(i + 1) + ' = ', f[i])
```
## Muscle Moment Arm
The muscle forces $f_m$ are transformed into joint space generalized forces
($\tau$) by the moment arm matrix ($\tau = -R^T f_m$). For an n-lateral polygon
it can be shown that the derivative of the side length with respect to the
opposite angle is the moment arm component. As a consequence, when expressing
the muscle length as a function of the generalized coordinates of the model, the
moment arm matrix is evaluated by $R = \frac{\partial l_{mt}}{\partial q}$. The
analytical expressions of the EoMs following our convention are provided below
\begin{equation}\label{equ:eom-notation}
\begin{gathered}
M(q) \ddot{q} + f(q, \dot{q}) = \tau \\
\tau = -R^T(q) f_m
\end{gathered}
\end{equation}
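As a minimal illustration of the identity $R = \partial l_{mt}/\partial q$ (generic SymPy, independent of the model's actual muscle routing), consider a mono-articular muscle with origin at distance $a$ from the joint on the fixed segment and insertion at distance $b$ on the rotating segment:
```python
# Illustration only: the moment arm as the derivative of the muscle length
# with respect to the joint angle, for a single made-up muscle.
th, a_o, b_i = sp.symbols('theta a b', positive=True)

# muscle length from the law of cosines (both attachment distances measured from the joint)
lm_toy = sp.sqrt(a_o**2 + b_i**2 - 2*a_o*b_i*sp.cos(th))

# R = d l / d theta  ->  a*b*sin(theta)/l, the classic geometric moment arm
R_toy = sp.simplify(sp.diff(lm_toy, th))
R_toy
```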
```python
# assert that moment arm is correctly evaluated
# model.test_muscle_geometry() # slow
# muscle length
disp('l_m = ', model.lm)
# moment arm
disp('R = ', model.R)
```
```python
# draw model
fig, ax = subplots(1, 1, figsize=(10, 10), frameon=False)
model.draw_model([60, 70, 50], True, ax, 1, False)
fig.tight_layout()
fig.savefig('results/arm_model.pdf', dpi=600, format='pdf',
transparent=True, pad_inches=0, bbox_inches='tight')
```
[1] K. Tahara, Z. W. Luo, and S. Arimoto, “On Control Mechanism of Human-Like
Reaching Movements with Musculo-Skeletal Redundancy,” in International
Conference on Intelligent Robots and Systems, 2006, pp. 1402–1409.
| 43e8d935050132554c508013b1b671ac63223bb1 | 333,007 | ipynb | Jupyter Notebook | arm_model/model.ipynb | mitkof6/musculoskeletal-stiffness | 150a43a3d748bb0b630e77cde19ab65df5fb089c | [
"CC-BY-4.0"
]
| 4 | 2019-01-24T08:10:20.000Z | 2021-04-04T18:55:02.000Z | arm_model/model.ipynb | mitkof6/musculoskeletal-stiffness | 150a43a3d748bb0b630e77cde19ab65df5fb089c | [
"CC-BY-4.0"
]
| null | null | null | arm_model/model.ipynb | mitkof6/musculoskeletal-stiffness | 150a43a3d748bb0b630e77cde19ab65df5fb089c | [
"CC-BY-4.0"
]
| null | null | null | 238.372942 | 63,508 | 0.796076 | true | 2,086 | Qwen/Qwen-72B | 1. YES
2. YES | 0.912436 | 0.757794 | 0.691439 | __label__eng_Latn | 0.973864 | 0.444776 |
# Surfinpy
#### Tutorial 3 - Pressure
In the previous tutorials we went through the process of generating a simple phase diagram for bulk phases and introducing temperature dependence for gaseous species. This is useful; however, it can sometimes be more beneficial to convert the chemical potentials (eV) to partial pressures (bar).
Chemical potential can be converted to pressure values using
\begin{align}
P & = \frac{\mu_O}{k_B T} ,
\end{align}
where P is the pressure, $\mu$ is the chemical potential of oxygen, $k_B$ is the Boltzmann constant and T is the temperature.
```python
import matplotlib.pyplot as plt
from surfinpy import bulk_mu_vs_mu as bmvm
from surfinpy import utils as ut
from surfinpy import data
colors = ['#5B9BD5', '#4472C4', '#A5A5A5', '#772C24', '#ED7D31', '#FFC000', '#70AD47']
```
```python
bulk = data.ReferenceDataSet(cation = 1, anion = 1, energy = -92.0, funits = 10)
```
Additionally, SurfinPy allows you to choose which colours are used for each phase: specify them via the `color` argument of the `DataSet` class.
```python
Bulk = data.DataSet(cation = 10, x = 0, y = 0, energy = -92.0, color=colors[0], label = "Bulk")
D = data.DataSet(cation = 10, x = 10, y = 0, energy = -310.0, color=colors[1], label = "D")
B = data.DataSet(cation = 10, x = 0, y = 10, energy = -228.0, color=colors[2], label = "B")
F = data.DataSet(cation = 10, x = 8, y = 10, energy = -398.0, color=colors[3], label = "F")
A = data.DataSet(cation = 10, x = 5, y = 20, energy = -470.0, color=colors[4], label = "A")
C = data.DataSet(cation = 10, x = 10, y = 30, energy = -706.0, color=colors[5], label = "C")
E = data.DataSet(cation = 10, x = 10, y = 50, energy = -972.0, color=colors[6], label = "E")
```
```python
data = [Bulk, A, B, C, D, E, F]
```
```python
x_energy=-20.53412969
y_energy=-12.83725889
```
```python
CO2_exp = ut.fit_nist("ref_files/CO2.txt")[298]
Water_exp = ut.fit_nist("ref_files/H2O.txt")[298]
CO2_corrected = x_energy + CO2_exp
Water_corrected = y_energy + Water_exp
deltaX = {'Range': [ -1, 0.6], 'Label': 'CO_2'}
deltaY = {'Range': [ -1, 0.6], 'Label': 'H_2O'}
```
```python
system = bmvm.calculate(data, bulk, deltaX, deltaY, x_energy=CO2_corrected, y_energy=Water_corrected)
ax = system.plot_phase(figsize=(6, 4.5))
plt.show()
```
To convert chemical potential to pressure use the plot_pressure command and the temperature at which the pressure is calculated. For this example we have used 298 K.
```python
import surfinpy
ax = system.plot_pressure(temperature=298, figsize=(5, 4), cbar_title="Bulk Phase System")
plt.show()
```
```python
```
| 6dfae0e3aaf696d8750822411b77c7c4ccad3aaa | 362,870 | ipynb | Jupyter Notebook | examples/Notebooks/Bulk/Tutorial_3.ipynb | jstse/SurfinPy | ff3a79f9415c170885e109ab881368271f3dcc19 | [
"MIT"
]
| null | null | null | examples/Notebooks/Bulk/Tutorial_3.ipynb | jstse/SurfinPy | ff3a79f9415c170885e109ab881368271f3dcc19 | [
"MIT"
]
| null | null | null | examples/Notebooks/Bulk/Tutorial_3.ipynb | jstse/SurfinPy | ff3a79f9415c170885e109ab881368271f3dcc19 | [
"MIT"
]
| null | null | null | 1,664.541284 | 170,226 | 0.733207 | true | 847 | Qwen/Qwen-72B | 1. YES
2. YES | 0.805632 | 0.682574 | 0.549903 | __label__eng_Latn | 0.896864 | 0.115939 |
```python
from sympy import init_session
init_session()
```
IPython console for SymPy 1.6 (Python 3.7.3-64-bit) (ground types: python)
These commands were executed:
>>> from __future__ import division
>>> from sympy import *
>>> x, y, z, t = symbols('x y z t')
>>> k, m, n = symbols('k m n', integer=True)
>>> f, g, h = symbols('f g h', cls=Function)
>>> init_printing()
Documentation can be found at https://docs.sympy.org/1.6/
```python
a, b, g, d, k, l0 = symbols('a b g d k l0', real=True, positive=True)
```
```python
l0, a, b, g, d
ksq = k**2
A, B, G, D = a**2, b**2, g**2, d**2
TA = 2*A
A2B = A*A + B
C = TA + G*A2B
f1 = ksq + TA*k + A2B
f2 = 1 + G*k + D*ksq
den = A2B + C*k
f = l0 * f1 * f2 / den
f
```
```python
diff(f,l0)
```
```python
da=diff(f,a)
da.factor()
```
```python
db=diff(f,b)
db.factor()
```
```python
dg=diff(f,g)
dg.factor()
```
```python
dd=diff(f,d)
dd.factor()
```
```python
```
| 063e84e995e7e206a99f714c053218ecec8f76fc | 28,765 | ipynb | Jupyter Notebook | notebooks/RAC_gradients/RAC-41_derivatives.ipynb | tsommerfeld/L2-methods_for_resonances | acba48bfede415afd99c89ff2859346e1eb4f96c | [
"MIT"
]
| null | null | null | notebooks/RAC_gradients/RAC-41_derivatives.ipynb | tsommerfeld/L2-methods_for_resonances | acba48bfede415afd99c89ff2859346e1eb4f96c | [
"MIT"
]
| null | null | null | notebooks/RAC_gradients/RAC-41_derivatives.ipynb | tsommerfeld/L2-methods_for_resonances | acba48bfede415afd99c89ff2859346e1eb4f96c | [
"MIT"
]
| null | null | null | 112.363281 | 3,928 | 0.811333 | true | 373 | Qwen/Qwen-72B | 1. YES
2. YES | 0.91611 | 0.863392 | 0.790961 | __label__eng_Latn | 0.319896 | 0.676001 |
# *Ab* *initio* molecular dynamics of the vibrational motion of HF
### Part 1: Generation of *ab* *initio* potential energy surfaces (PES)
We are going to construct what is often referred to as an *ab* *initio* potential energy surface of the diatomic
molecule hydrogen fluoride. That is, we are going to use various electronic structure theories (Hartree-Fock theory (RHF), 2nd-order perturbation theory (MP2), and Coupled Cluster theory with single and double substitutions (CCSD)) to compute the electronic energy at different geometries of a simple diatomic molecule. The same basis set (correlation consistent polarized triple-zeta, cc-pVTZ) will be used for all calculations. We will use Psi4numpy to facilitate the electronic structure calculations, and then the interpolation capabilities of scipy to simplify the evalution of the potential energy at separations for which we did not explicitly evaluate the electronic energy. We will also use scipy to differentiate the interpolated potential energy surface to obtain the forces acting on the atoms at different separations.
We will start by importing the necessary libraries:
```python
"""This will be pre-written"""
import numpy as np
import psi4
from matplotlib import pyplot as plt
from scipy.interpolate import InterpolatedUnivariateSpline
```
We will use a template for the z-matrix which will allow us to automate the
specification of the bond length of our HF molecule for easy computation of our potential
energy surface.
```python
"""This will be pre-written"""
### template for the z-matrix
mol_tmpl = """H
F 1 **R**"""
```
Now let's create arrays for the bond length and energies at each bond length
for three different levels of theory (RHF, MP2, and CCSD). Let's have our bond lengths
span 0.5 - 2.25 $\overset{\circ}{A}$; note that we should use finer resolution for short bond lengths than for longer bond lengths, because we want to be sure we accurately represent the minimum-energy point on the PES!
```python
''' Have students write this block! '''
### array of bond-lengths in anstromgs
r_array = np.array([0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3])
### array for different instances of the HF molecule
molecules =[]
### array for the different RHF energies for different HF bond-lengths
RHF_E_array = []
### array for the different MP2 energies for different HF bond-lengths
MP2_E_array = []
### array for the different CCSD energies for different HF bond-lengths
CCSD_E_array = []
```
Now we will loop over the elements of our r_array, compute the energies at each level of theory for all bond lengths, and store them in their respective arrays.
```python
"""This will be pre-written"""
### loop over the different bond-lengths, create different instances
### of HF molecule
for r in r_array:
molecule = psi4.geometry(mol_tmpl.replace("**R**", str(r)))
molecules.append(molecule)
### loop over instances of molecules, compute the RHF, MP2, and CCSD
### energies and store them in their respective arrays
for mol in molecules:
energy = psi4.energy("SCF/cc-pVTZ", molecule=mol)
RHF_E_array.append(energy)
energy = psi4.energy("MP2/cc-pVTZ", molecule=mol)
MP2_E_array.append(energy)
energy = psi4.energy("CCSD/cc-pVTZ",molecule=mol)
CCSD_E_array.append(energy)
#print(r_array, RHF_E_array, MP2_E_array, CCSD_E_array)
```
```python
for i in range(0,len(r_array)):
print(r_array[i], RHF_E_array[i], MP2_E_array[i], CCSD_E_array[i])
#print(r_array, RHF_E_array, MP2_E_array, CCSD_E_array)
''' Have students write this block! '''
### Plot the 3 different PES
plt.plot(r_array,RHF_E_array,'r*', label='RHF')
plt.plot(r_array,MP2_E_array,'g*', label='MP2')
plt.plot(r_array,CCSD_E_array,'b*', label='CCSD')
plt.legend()
```
Now that you have the raw data, we will interpolate this data using cubic splines. This will permit us to
estimate the potential energy at any arbitrary separation between 0.5 and 2.25 Angstroms.
The general syntax for creating a cubic spline object is as follows:
`spline = InterpolatedUnivariateSpline(x-data, y-data, k=3)`
#### Note on units
The energies we obtained from psi4 are in Hartrees, which are the atomic unit of energy. We have so far been specifying our separation in Angstroms (**not the atomic unit of length**) so we are in a mixed unit system. When we generate our spline, we will use an array of bond lengths in atomic units as the x-data and the energies in atomic units as the y-data, which will yield a PES purely in atomic units. Therefore, the first thing we will do before creating the spline is to create an array of bond lengths in atomic units (~1.89 * bond lengths in Angstroms is the bond length in atomic units); we will then create three cubic splines (RHF_E_Spline, MP2_E_Spline, CCSD_E_SPline) that hold the PES data in atomic units for the three levels of theory.
```python
''' Have students write this block! '''
### get separation vector in atomic units
r_array_au = 1.89*r_array
### spline for RHF Energy
RHF_E_Spline = InterpolatedUnivariateSpline(r_array_au, RHF_E_array, k=3)
### spline for MP2 Energy
MP2_E_Spline = InterpolatedUnivariateSpline(r_array_au, MP2_E_array, k=3)
### spline for CCSD Energy
CCSD_E_Spline = InterpolatedUnivariateSpline(r_array_au, CCSD_E_array, k=3)
```
Now we can plot the splines against the PES data to make sure our splines were generated properly.
```python
''' This will be pre-written '''
### form a much finer grid to evaluate spline object at
r_fine = np.linspace(0.5/0.529,2.3/0.529,200)
### compute the interpolated/extrapolated values for RHF Energy on this grid
RHF_E_fine = RHF_E_Spline(r_fine)
### compute the interpolated/extrapolated values for RHF Energy on this grid
MP2_E_fine = MP2_E_Spline(r_fine)
### compute the interpolated/extrapolated values for RHF Energy on this grid
CCSD_E_fine = CCSD_E_Spline(r_fine)
### plot the interpolated data with lines against computed data in *'s
plt.plot(r_fine, RHF_E_fine, 'red', r_array_au, RHF_E_array, 'r*', label='RHF')
plt.plot(r_fine, MP2_E_fine, 'green', r_array_au, MP2_E_array, 'g*', label='MP2')
plt.plot(r_fine, CCSD_E_fine, 'blue', r_array_au, CCSD_E_array, 'b*', label='CCSD')
plt.legend()
plt.show()
```
### Part 2: Computation of Forces and related quantities and their importance in Newton's law
We can derive a number of important quantities just from the potential energy surfaces we have computed. For example, we estimate the equilibrium bond length by finding the separation at which the potential is minimum; note this would also be the position that the force goes to zero:
\begin{equation}
\frac{d}{dr} V(r_{eq}) = -F(r_{eq}) = 0.
\end{equation}
The force as a function of separation plays a significant role in the vibrational motion of the molecule, as we will see shortly.
First we will compute the forces at each level of theory, storing them in new spline
objects called RHF_Force, MP2_Force, and CCSD_Force. We can use the fact
that the spline objects (which we previously created) can be directly differentiated using the following syntax:
`spline_derivative = spline.derivative()`
Once computed, plot each spline against the r_fine array previously created!
#### What unit system do you think the forces are in?
```python
''' Have students write this block! '''
### take the derivative of the potential to get the negative of the force from RHF
RHF_Force = RHF_E_Spline.derivative()
### negative of the force from MP2
MP2_Force = MP2_E_Spline.derivative()
### negative of the force from CCSD
CCSD_Force = CCSD_E_Spline.derivative()
### let's plot the forces for each level of theory!
### plot the forces... note we need to multiply by -1 since the spline
### derivative gave us the negative of the force!
plt.plot(r_fine, -1*RHF_Force(r_fine), 'red', label='RHF Force')
plt.plot(r_fine, -1*MP2_Force(r_fine), 'green', label='MP2 Force')
plt.plot(r_fine, -1*CCSD_Force(r_fine), 'blue', label='CCSD Force')
plt.legend()
plt.show()
```
#### Equilibrium bond length
Next we will find where the minima of the potential energy surfaces are and use that
to find the equilibrium bond length, making use of numpy's argmin function to find the
index corresponding to the minimum value in a numpy array:
```python
''' This block will be pre-written! '''
### Find Equilibrium Bond-Lengths for each level of theory
RHF_Req_idx = np.argmin(RHF_E_fine)
MP2_Req_idx = np.argmin(MP2_E_fine)
CCSD_Req_idx = np.argmin(CCSD_E_fine)
### find the value of the separation corresponding to that index
RHF_Req = r_fine[RHF_Req_idx]
MP2_Req = r_fine[MP2_Req_idx]
CCSD_Req = r_fine[CCSD_Req_idx]
### print equilibrium bond-lengths at each level of theory!
print(" Equilibrium bond length at RHF/cc-pVDZ level is ",RHF_Req, "atomic units")
print(" Equilibrium bond length at MP2/cc-pVDZ level is ",MP2_Req, "atomic units")
print(" Equilibrium bond lengthat CCSD/cc-pVDZ level is ",CCSD_Req, "atomic units")
```
     Equilibrium bond length at RHF/cc-pVTZ level is  1.6975235344966797 atomic units
     Equilibrium bond length at MP2/cc-pVTZ level is  1.7317209867864842 atomic units
     Equilibrium bond length at CCSD/cc-pVTZ level is  1.7317209867864842 atomic units
#### At this point, take a moment to compare your equilibrium bond lengths across the different levels of theory. Which equilibrium bond length do you expect to be most trustworthy? Is it the case that this method produced the **best** equilibrium bond length in this case? Note that the experimental bond length of HF is ~0.92 $\overset{\circ}{A}$.
#### Harmonic Frequency
You might have learned that the Harmonic Oscillator potential, which is a reasonable model for the vibrational motion of diatomic atomcs near their equilibrium bond length, is given by
\begin{equation}
V(r) = \frac{1}{2} k (r-r_{eq})^2 + V_0
\end{equation}
and that the vibrational frequency of the molecule within the Harmonic oscillator model is given by
\begin{equation}
\nu = \frac{1}{2\pi}\sqrt{\frac{k}{\mu}}
\end{equation}
where $\mu$ is the reduced mass of the molecule and $k$ is known as the force constant.
We can estimate the force constant as
\begin{equation}
k = \frac{d^2}{dr^2} V(r_{eq}),
\end{equation}
and the reduced mass of HF is defined as
\begin{equation}
\mu = \frac{m_H \cdot m_F}{m_H + m_F},
\end{equation}
where $m_H$ and $m_F$ are the masses of Hydrogen and Fluoride, respectively.
Let's go ahead and get the force constants at each level of theory, print the values,
and estimate the potential energy within the Harmonic approximation! Just like we were able to differentiate our PES splines to get a force spline, we can differentiate a force splines to get curvature splines (which we can call RHF_Curvature, MP2_Curvature, and CCSD_Curvature); the force constant will then be the curvature evaluated at the equlibrium bond length.
#### Can we use the same equilibrium bond length for all three curvatures? Why or why not?
```python
''' Have students write this block! '''
### get second derivative of potential energy curve... recall that we fit a spline to
### to the first derivative already and called that spline function X_Force, where
### X is either RHF, MP2, or CCSD
RHF_Curvature = RHF_Force.derivative()
MP2_Curvature = MP2_Force.derivative()
CCSD_Curvature = CCSD_Force.derivative()
### evaluate the second derivative at r_eq to get k
RHF_k = RHF_Curvature(RHF_Req)
MP2_k = MP2_Curvature(MP2_Req)
CCSD_k = CCSD_Curvature(CCSD_Req)
### Print force constants for each level of theory!
print("Hartree-Fock force constant is ",RHF_k," atomic units")
print("MP2 force constant is ",MP2_k," atomic units")
print("CCSD force constant is ",CCSD_k," atomic units")
```
Hartree-Fock force constant is 0.7201715614223737 atomic units
MP2 force constant is 0.6423033492324176 atomic units
CCSD force constant is 0.6393333544724277 atomic units
Now that we have the force constants, let's define three different arrays (RHF_Harm_Pot, MP2_Harm_Pot, and CCSD_Harm_Pot) that store the harmonic potentials at each level of theory evaluated at the different bond lengths (in atomic units) stored in the array r_fine; recall the definition of the Harmonic potential is
\begin{equation}
V(r) = \frac{1}{2} k (r-r_{eq})^2 + V_0,
\end{equation}
where we can use $E(r_{eq})$ as $V_0$.
```python
''' Have students write this block! '''
### define harmonic potential for each level of theory
RHF_Harm_Pot = 0.5*RHF_k*(r_fine-RHF_Req)**2 + RHF_E_Spline(RHF_Req)
MP2_Harm_Pot = 0.5*MP2_k*(r_fine-MP2_Req)**2 + MP2_E_Spline(MP2_Req)
CCSD_Harm_Pot = 0.5*CCSD_k*(r_fine-CCSD_Req)**2 + CCSD_E_Spline(CCSD_Req)
```
Let's plot the resulting Harmonic potentials against the *ab* *initio* potentials near the equilibrium geometry
```python
''' This will be pre-written!'''
### plot!
plt.plot(r_fine, RHF_Harm_Pot, 'red', label='Harmonic')
plt.plot(r_fine, RHF_E_fine, 'b--', label='ab initio' )
plt.xlim(1.0, (1.69+0.69))
plt.ylim(-100.1,-99.6)
plt.legend()
plt.show()
```
Finally, let's actually estimate the fundamental vibrational frequency of the molecule
within this model using the force constant and the reduced mass of the molecule.
#### What is the reduced mass of the HF molecule in atomic units?
#### Use your Harmonic force constants to estimate the vibrational frequency of HF and compare the values obtained from each level of theory. Which frequency do you think should be the most trusthworthy? Is it the case that this method produced the best frequency in this case? Note that the experimental vibrational frequency of HF is 124 THz.
```python
''' Have students write this block! '''
### define reduced mass of HF as m_F * m_H / (m_F + m_H) where mass is in atomic units (electron mass = 1)
m_F = 34883.
m_H = 1836.
mu = (m_F * m_H)/(m_F + m_H)
### compute the fundamental frequency at each level of theory
RHF_nu = 1/(np.pi*2) * np.sqrt(RHF_k/mu)
MP2_nu = 1/(np.pi*2) * np.sqrt(MP2_k/mu)
CCSD_nu = 1/(np.pi*2) * np.sqrt(CCSD_k/mu)
### print the values in atomic units!
print("Vibrational frequency of HF at the RHF/cc-pVDZ level is ",RHF_nu," atomic units")
print("Vibrational frequency of HF at the MP2/cc-pVDZ level is ",MP2_nu," atomic units")
print("Vibrational frequency of HF at the CCSD/cc-pVDZ level is ",CCSD_nu," atomic units")
```
Vibrational frequency of HF at the RHF/cc-pVDZ level is 0.0032340020112283717 atomic units
Vibrational frequency of HF at the MP2/cc-pVDZ level is 0.0030541642910292405 atomic units
Vibrational frequency of HF at the CCSD/cc-pVDZ level is 0.0030470949194360422 atomic units
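To compare these with the experimental value quoted above (~124 THz), divide the atomic-unit frequencies by the atomic unit of time (about 2.4188843e-17 s). A minimal sketch using the printed values:
```python
# Convert the frequencies above from atomic units to THz (sketch).
# 1 atomic unit of time = 2.4188843265857e-17 s, so nu[Hz] = nu[a.u.] / au_time.
au_time_s = 2.4188843265857e-17
for label, nu_au in [("RHF", 0.0032340020112283717),
                     ("MP2", 0.0030541642910292405),
                     ("CCSD", 0.0030470949194360422)]:
    print(label, "frequency ~", round(nu_au / au_time_s * 1e-12, 1), "THz (experiment: ~124 THz)")
```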
### Part 3: Solving Newton's equation of motion to simulate the dynamics
Next, we want to actually simulate the dynamics of the HF molecule on these *ab* *initio* potential energy surfaces. To do so, we need to solve Newton's equations of motion subject to some initial condition for the position (separation) and momentum (in a relative sense) of the particles. Newton's equations can be written
\begin{equation}
F(r) = \mu \frac{d^2 r}{dt^2}
\end{equation}
where $\mu$ is the reduced mass in atomic units and $F(r)$ is the Force vs separation in atomic units that was determined previously.
#### What will be the acceleration of the bond stretch when H is separated from F by 3 atomic units? You can express your acceleration in atomic units, also.
```python
""" Students will write this line! """
accel = RHF_Force(3)/mu
print(accel)
```
7.807549838748958e-05
#### Numerically solving Newton's equation of motion
If the acceleration, position, and velocity of the bond stretch coordinate are known at some instant in
time $t_i$, then the position and velocity can be estimated at some later time $t_{i+1} = t_i + \Delta t$:
\begin{equation}
r(t_i + \Delta t) = r(t_i) + v(t_i)\Delta t + \frac{1}{2}a(t_i)\Delta t^2
\end{equation}
and
\begin{equation}
v(t_i + \Delta t) = v(t_i) + \frac{1}{2} \left(a(t_i) + a(t_i + \Delta t) \right) \Delta t.
\end{equation}
This prescription for updating the velocities and positions is known as the Velocity-Verlet algorithm.
Note that we need to perform 2 force evaluations per Velocity-Verlet iteration: one corresponding
to position $r(t_i)$ to update the position, and then a second time at the updated position $r(t_i + \Delta t)$
to complete the velocity update.
We will create a function called Velocity_Verlet that takes the arguments r_curr, v_curr, mu, force_spline, and timestep and returns a 2-element array containing the updated position (r) and velocity (v) value.
```python
''' Students will write this! '''
def Velocity_Verlet(r_curr, v_curr, mu, f_interp, dt):
### get acceleration at current time
a_curr = -1*f_interp(r_curr)/mu
### use current acceleration and velocity to update position
r_fut = r_curr + v_curr * dt + 0.5 * a_curr * dt**2
### use r_fut to get future acceleration a_fut
a_fut = -1*f_interp(r_fut)/mu
### use current and future acceleration to get future velocity v_fut
v_fut = v_curr + 0.5*(a_curr + a_fut) * dt
result = [r_fut, v_fut]
return result
```
### Validating Velocity-Verlet algorithm with the Harmonic Oscillator
Newton's equation of motion can be solved analytically for the Harmonic oscillator, and we can use this fact to validate our Velocity-Verlet algorithm (which provides an *approximate* solution to Newton's equation of motion for arbitrary potentials). That is,
the vibrational motion of a diatomic subject to a Harmonic potential predicted
by the Velocity-Verlet algorithm should closely match the analytical solution. Analytically,
the bond length as a function of time for a diatomic experiencing a harmonic potential is given by
\begin{equation}
r(t) = A \: {\rm sin}\left(\sqrt{\frac{k}{\mu}} t + \phi \right) + r_{eq},
\end{equation}
where $A = \frac{r(0)-r_{eq}}{{\rm sin}(\phi)}$, $r(0)$ is the initial separation, and $\phi$ is the initial phase of the cycle; note that corresponding to this initial separation is
an initial velocity given by
\begin{equation}
v(0) = A \: \sqrt{\frac{k}{\mu}} {\rm cos}\left( \phi \right).
\end{equation}
Let's define a function harmonic_position that takes arguments of $\sqrt{\frac{k}{\mu}}$ (om), $A$ (amp), $\phi$ (phase), $r_{eq}$ (req), and time (t), and returns the separation.
```python
''' Students will write this! '''
def harmonic_position(om, Amp, phase, req, time):
return Amp * np.sin( om * time + phase ) + req
```
The following code block will call the Velocity Verlet algorithm using
the RHF Harmonic potential 10,000 times with a
timestep of 0.1 atomic units and will compare the resulting trajectory of bond length vs time (all in atomic units) to the analytic result for the Harmonic oscillator; we will initiate the bond length as being 0.2 atomic units **longer** than $r_{eq}$ with an initial phase of $\frac{\pi}{4}$.
```python
''' This will be pre-written! '''
### how many updates do you want to perform?
N_updates = 10000
### establish time-step for integration to be 0.1 atomic units... this is about 0.0024 femtoseconds
### so total time is 10000*0.1 = 1000 atomic units of time, which is ~2.4e-14 s, or about 24 fs
dt = 0.1
### results from VV algorithm
hr_vs_t = np.zeros(N_updates)
hv_vs_t = np.zeros(N_updates)
### analytic result for r(t)
ar_vs_t = np.zeros(N_updates)
### array to store time in atomic units
t_array = np.zeros(N_updates)
### establish some constants relevant for analytic solution
### harmonic freq
om = np.sqrt(RHF_k/mu)
### initial displacement
x0 = 0.2
### amplitude for analytic solution
Amp = x0/(np.sin(np.pi/4))
### initial velocity
v0 = Amp * om * np.cos(np.pi/4)
hr_vs_t[0] = RHF_Req+x0
hv_vs_t[0] = v0
### We need a spline object for the harmonic force to pass to the Velocity Verlet algorithm,
### let's get that now!
### spline for Harmonic potential using RHF_k
RHF_Harm_Pot_Spline = InterpolatedUnivariateSpline(r_fine, RHF_Harm_Pot, k=3)
### RHF harmonic force
RHF_Harm_Force = RHF_Harm_Pot_Spline.derivative()
### first Velocity Verlet update
result_array = Velocity_Verlet(hr_vs_t[0], hv_vs_t[0], mu, RHF_Harm_Force, dt)
### first analytic result
ar_vs_t[0] = harmonic_position(om, Amp, np.pi/4, RHF_Req, 0)
### do the update N_update-1 more times
for i in range(1,N_updates):
### store current time
t_array[i] = dt*i
### Compute VV update
result_array = Velocity_Verlet(result_array[0], result_array[1], mu, RHF_Harm_Force, dt)
### store results from VV update
hr_vs_t[i] = result_array[0]
hv_vs_t[i] = result_array[1]
### compute and store results from analytic solution
ar_vs_t[i] = harmonic_position(om, Amp, np.pi/4, RHF_Req, dt*i)
### Plot result and compare!
plt.plot(t_array, hr_vs_t, 'red', label="Velocity Verlet")
plt.plot(t_array, ar_vs_t, 'b--', label="Analytic")
plt.legend()
plt.show()
```
Now let's simulate the vibrational motion of HF subject to the *ab* *initio* forces we computed earlier and compare them to the Harmonic motion; recall we have already obtained spline objects for RHF, MP2, and CCSD forces called RHF_Force, MP2_Force, and CCSD_Force.
We will also initialize the simulations using the same values as we did with the Harmonic case to aid our comparison.
```python
""" Students will write this! """
### Now use r_init and v_init and run velocity verlet update N_updates times, plot results
### these arrays will store the time, the position vs time, and the velocity vs time
r_vs_t = np.zeros(N_updates)
v_vs_t = np.zeros(N_updates)
### first entry is the intial position and velocity
r_vs_t[0] = RHF_Req+x0
v_vs_t[0] = v0
### first Velocity Verlet update
result_array = Velocity_Verlet(r_vs_t[0], v_vs_t[0], mu, RHF_Force, dt)
for i in range(1,N_updates):
result_array = Velocity_Verlet(result_array[0], result_array[1], mu, RHF_Force, dt)
r_vs_t[i] = result_array[0]
v_vs_t[i] = result_array[1]
plt.plot(t_array, r_vs_t, 'red', label='ab initio')
plt.plot(t_array, hr_vs_t, 'b--', label='Harmonic')
plt.legend()
plt.show()
```
#### How are the dynamics different when the *ab* *initio* forces are used? Try to identify at least two quantitative ways in which you can distinguish the harmonic motion from the motion deriving from the *ab* *initio* forces.
#### Can you estimate the frequency from the *ab* *initio* trajectories? How does this frequency compare with the Harmonic approximation and with the experimental value?
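One simple way to attack the frequency question above is to Fourier-transform the trajectory and pick out the dominant peak; a minimal sketch (assuming the `r_vs_t` array and timestep `dt` defined above):
```python
''' A sketch, not part of the original exercise. '''
import numpy as np

def dominant_frequency(r_traj, dt):
    ### remove the mean so the zero-frequency component does not dominate
    r_centered = r_traj - np.mean(r_traj)
    spectrum = np.abs(np.fft.rfft(r_centered))
    freqs = np.fft.rfftfreq(len(r_traj), d=dt)   # cycles per atomic unit of time
    return freqs[np.argmax(spectrum)]

### e.g. nu_ab_initio = dominant_frequency(r_vs_t, dt)
```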
### For further consideration: What makes a "sensible range of values" for position and velocity?
In this case, we will initialize the position to be a random number between 1.0 and 4.0; for the velocity, we will use the fact that we can estimate the expectation value of kinetic energy for a very similar system (the Harmonic oscillator) in the ground state as follows:
\begin{equation}
\langle T \rangle = \frac{1}{2} E_g,
\end{equation}
where $E_g$ is the ground state of the Harmonic oscillator (this is making use of the Virial theorem). We can easily
find the ground state energy in the Harmonic oscillator approximation of $HF$ using our frequency calculation described above as
\begin{equation}
E_g = \frac{1}{2} h \nu,
\end{equation}
which implies the kinetic energy expectation value is
\begin{equation}
\langle T \rangle = \frac{h}{8 \pi} \sqrt{\frac{k}{\mu}}.
\end{equation}
Since we can say classically that the kinetic energy is given by $T = \frac{1}{2}\mu v^2$, we can estimate the velocity of the bond stretch as follows:
\begin{equation}
v = \sqrt{\frac{2 \langle T \rangle}{\mu}} = \sqrt{ \frac{\hbar \sqrt{\frac{k}{\mu}}}{2\mu}}
\end{equation}
where we have simplified using the fact that $\hbar = \frac{h}{2\pi}$ ($\hbar$ has the value 1 in the atomic unit system we are using up to this point!). We will assume that a reasonable
range of velocities spans plus or minus 3 times this "ground-state" velocity.
```python
### define "ground-state" velocity for each level of theory
v_RHF = np.sqrt( np.sqrt(RHF_k/mu)/(2*mu))
v_MP2 = np.sqrt( np.sqrt(MP2_k/mu)/(2*mu))
v_CCSD = np.sqrt( np.sqrt(CCSD_k/mu)/(2*mu))
### get random position and velocity for RHF HF within a reasonable range
#r_init = np.random.uniform(0.75*RHF_Req,2*RHF_Req)
r_init = RHF_Req
v_init = np.random.uniform(-2*v_RHF,2*v_RHF)
### print initial position and velocity
print("Initial separation is ",r_init, "atomic units")
print("Initial velocity is ",v_init, "atomic units")
### get initial force on the particle based on its separation
RHF_F_init = -1*RHF_Force(r_init)
print("Initial Force is ", RHF_F_init, "atomic units")
```
Initial separation is 1.6975235344966797 atomic units
Initial velocity is -0.0013839204810472618 atomic units
Initial Force is -0.0003233757202937039 atomic units
Now that we have our initial conditions chosen, our force as a function of separation known, and our Velocity Verlet function completed, we are ready to run our simulations!
```python
```
| 2c5be8eacf882f39bdd9f940eb00bb8898179a41 | 163,012 | ipynb | Jupyter Notebook | code/PES_VV_v1.ipynb | MolSSI-Education/ab-initio-md | a749ce15b307603ca8d14fd8927e604ceec47232 | [
"CC-BY-4.0"
]
| 1 | 2020-03-09T23:42:46.000Z | 2020-03-09T23:42:46.000Z | code/PES_VV_v1.ipynb | MolSSI-Education/ab-initio-md | a749ce15b307603ca8d14fd8927e604ceec47232 | [
"CC-BY-4.0"
]
| 1 | 2019-05-22T18:47:51.000Z | 2019-05-22T18:47:51.000Z | code/PES_VV_v1.ipynb | MolSSI-Education/ab-initio-md | a749ce15b307603ca8d14fd8927e604ceec47232 | [
"CC-BY-4.0"
]
| 1 | 2022-02-25T18:36:41.000Z | 2022-02-25T18:36:41.000Z | 178.155191 | 30,300 | 0.888014 | true | 6,960 | Qwen/Qwen-72B | 1. YES
2. YES | 0.888759 | 0.749087 | 0.665758 | __label__eng_Latn | 0.985655 | 0.38511 |
Trusted Notebook" width="500 px" align="left">
# _*Qiskit Aqua: Generating Random Variates*_
The latest version of this notebook is available on https://github.com/Qiskit/qiskit-tutorials.
***
### Contributors
Albert Akhriev<sup>[1]</sup>, Jakub Marecek<sup>[1]</sup>
### Affiliation
- <sup>[1]</sup>IBMQ
## Introduction
While classical computers use only pseudo-random routines, quantum computers
can generate true random variates.
For example, the measurement of a quantum superposition is intrinsically random,
as suggested by Born's rule.
Consequently, some of the
best random-number generators are based on such quantum-mechanical effects.
Further, with a logarithmic amount of random bits, quantum computers can produce
linearly many more bits, a process known as
randomness expansion.
In practical applications, one wishes to use random variates of well-known
distributions, rather than random bits.
In this notebook, we illustrate ways of generating random variates of several popular
distributions on IBM Q.
## Random Bits and the Bernoulli distribution
It is clear that there are many options for generating random bits (i.e., Bernoulli-distributed scalars, taking values either 0 or 1). Starting from a simple circuit such as a Hadamard gate followed by measurement, one can progress to vectors of Bernoulli-distributed elements. By addition of such random variates, we could get binomial distributions. By multiplication we could get geometric distributions, although that may lead to a circuit depth that is impractical at the moment.
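As a purely classical illustration of the addition remark, here is a minimal NumPy sketch (no quantum backend involved) showing that summing independent Bernoulli bits yields a binomial distribution:
```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(10000, 8))     # 8 fair Bernoulli bits per trial
binomial_sample = bits.sum(axis=1)             # Binomial(n=8, p=0.5) by addition
print(np.bincount(binomial_sample, minlength=9) / len(binomial_sample))
```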
Let us start by importing the basic modules and creating a quantum circuit for generating random bits:
```python
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import sys, math, time
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
from qiskit import BasicAer
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute
# In this example we use 'qasm_simulator' backend.
glo_backend = BasicAer.get_backend("qasm_simulator")
```
In the next step we create a quantum circuit, which will be used for generation:
```python
# Number of qubits utilised simultaneously.
glo_num_qubits = 5
def create_circuit(num_target_qubits: int) -> QuantumCircuit:
"""
Creates and returns quantum circuit for random variate generation.
:param num_target_qubits: number of qubits to be used.
    :return: quantum circuit.
"""
assert isinstance(num_target_qubits, int) and num_target_qubits > 0
q = QuantumRegister(num_target_qubits)
c = ClassicalRegister(num_target_qubits)
circuit = QuantumCircuit(q, c)
circuit.h(q)
circuit.barrier()
circuit.measure(q, c)
return circuit
# Create and plot generating quantum circuit.
circuit = create_circuit(glo_num_qubits)
#print(circuit)
circuit.draw(output='mpl')
```
## Uniformly-distributed scalars and vectors
It is clear that there are many options for approximating uniformly-distributed scalars by the choice of an integer from a finite range uniformly at random, e.g., by a binary-code construction from the Bernoulli-distributed vectors. In the following snippet, we generate random bits, which we then convert using the binary-code construction, up to the machine precision of a classical computer.
```python
def uniform_rand_float64(circuit: QuantumCircuit, num_target_qubits: int,
size: int, vmin: float, vmax: float) -> np.ndarray:
"""
Generates a vector of random float64 values in the range [vmin, vmax].
:param circuit: quantum circuit for random variate generation.
:param num_target_qubits: number of qubits to be used.
:param size: length of the vector.
:param vmin: lower bound.
:param vmax: upper bound.
:return: vector of random values.
"""
assert sys.maxsize == np.iinfo(np.int64).max # sizeof(int) == 64 bits
assert isinstance(size, int) and size > 0
assert isinstance(vmin, float) and isinstance(vmax, float) and vmin <= vmax
nbits = 7 * 8 # nbits > mantissa of float64
bit_str_len = (nbits * size + num_target_qubits - 1) // num_target_qubits
job = execute(circuit, glo_backend, shots=bit_str_len, memory=True)
bit_str = ''.join(job.result().get_memory())
scale = float(vmax - vmin) / float(2**nbits - 1)
return np.array([vmin + scale * float(int(bit_str[i:i+nbits], 2))
for i in range(0, nbits * size, nbits)], dtype=np.float64)
def uniform_rand_int64(circuit: QuantumCircuit, num_target_qubits: int,
size: int, vmin: int, vmax: int) -> np.ndarray:
"""
Generates a vector of random int64 values in the range [vmin, vmax].
:param circuit: quantum circuit for random variate generation.
:param num_target_qubits: number of qubits to be used.
:param size: length of the vector.
:param vmin: lower bound.
:param vmax: upper bound.
:return: vector of random values.
"""
assert sys.maxsize == np.iinfo(np.int64).max # sizeof(int) == 64 bits
assert isinstance(size, int) and size > 0
assert isinstance(vmin, int) and isinstance(vmax, int) and vmin <= vmax
assert abs(vmin) <= 2**52 and abs(vmax) <= 2**52 # 52 == mantissa of float64
return np.rint(uniform_rand_float64(circuit, num_target_qubits,
size, float(vmin), float(vmax))).astype(np.int64)
```
### Uniform distribution over floating point numbers.
In this example we draw a random vector of floating-point values uniformly distributed within some arbitrary selected interval:
```python
# Draw a sample from uniform distribution.
start_time = time.time()
sample = uniform_rand_float64(circuit, glo_num_qubits, size=54321, vmin=-7.67, vmax=19.52)
sampling_time = time.time() - start_time
# Print out some details.
print("Uniform distribution over floating point numbers:")
print(" sample type:", type(sample), ", element type:", sample.dtype, ", shape:", sample.shape)
print(" sample min: {:.4f}, max: {:.4f}".format(np.amin(sample), np.amax(sample)))
print(" sampling time: {:.2f} secs".format(sampling_time))
# Plotting the distribution.
plt.hist(sample.ravel(),
bins=min(int(np.ceil(np.sqrt(sample.size))), 100),
density=True, facecolor='b', alpha=0.75)
plt.xlabel("value", size=12)
plt.ylabel("probability", size=12)
plt.title("Uniform distribution over float64 numbers in [{:.2f} ... {:.2f}]".format(
np.amin(sample), np.amax(sample)), size=12)
plt.grid(True)
# plt.savefig("uniform_distrib_float.png", bbox_inches="tight")
plt.show()
```
### Uniform distribution over integers.
Our next example is similar to the previous one, but here we generate a random vector of integers:
```python
# Draw a sample from uniform distribution.
start_time = time.time()
sample = uniform_rand_int64(circuit, glo_num_qubits, size=54321, vmin=37, vmax=841)
sampling_time = time.time() - start_time
# Print out some details.
print("Uniform distribution over bounded integer numbers:")
print(" sample type:", type(sample), ", element type:", sample.dtype, ", shape:", sample.shape)
print(" sample min: {:d}, max: {:d}".format(np.amin(sample), np.amax(sample)))
print(" sampling time: {:.2f} secs".format(sampling_time))
# Plotting the distribution.
plt.hist(sample.ravel(),
bins=min(int(np.ceil(np.sqrt(sample.size))), 100),
density=True, facecolor='g', alpha=0.75)
plt.xlabel("value", size=12)
plt.ylabel("probability", size=12)
plt.title("Uniform distribution over int64 numbers in [{:d} ... {:d}]".format(
np.amin(sample), np.amax(sample)), size=12)
plt.grid(True)
# plt.savefig("uniform_distrib_int.png", bbox_inches="tight")
plt.show()
```
## Normal distribution
To generate random variates with a standard normal distribution using two independent
samples $u_1, u_2$ of the uniform distribution on the unit interval [0, 1], one can
consider the Box-Muller transform to obtain a 2-vector:
\begin{align}
\begin{bmatrix}
%R\cos(\Theta )=
{\sqrt {-2\ln u_{1}}}\cos(2\pi u_{2}) \\
% R\sin(\Theta )=
{\sqrt {-2\ln u_{1}}}\sin(2\pi u_{2})
\end{bmatrix},
\end{align}
wherein we have two independent samples of the standard normal distribution.
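For reference, a minimal classical NumPy sketch of this direct transform (uniform samples from NumPy stand in for the quantum ones):
```python
import numpy as np

def box_muller(u1, u2):
    """Map two uniform (0, 1] samples to two independent standard normal samples."""
    r = np.sqrt(-2.0 * np.log(u1))
    return r * np.cos(2.0 * np.pi * u2), r * np.sin(2.0 * np.pi * u2)

z1, z2 = box_muller(np.random.uniform(1e-12, 1.0, 5), np.random.uniform(0.0, 1.0, 5))
```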
In IBM Q, this is implemented as follows (note that the routine below uses the polar, or Marsaglia, variant of the Box-Muller transform, which replaces the trigonometric calls with rejection sampling):
```python
def normal_rand_float64(circuit: QuantumCircuit, num_target_qubits: int,
size: int, mu: float, sigma: float) -> np.ndarray:
"""
Draws a sample vector from the normal distribution given the mean and standard
deviation, using the Box-Muller method.
"""
TINY = np.sqrt(np.finfo(np.float64).tiny)
assert isinstance(size, int) and size > 0
rand_vec = np.zeros((size,), dtype=np.float64)
# Generate array of uniformly distributed samples, factor 1.5 longer that
# actually needed.
n = (3 * size) // 2
x = np.reshape(uniform_rand_float64(circuit, num_target_qubits,
2*n, 0.0, 1.0), (-1, 2))
x1 = 0.0 # first sample in a pair
c = 0 # counter
for d in range(size):
r2 = 2.0
while r2 >= 1.0 or r2 < TINY:
# Regenerate array of uniformly distributed samples upon shortage.
if c >= n:
c = 0
n = max(size // 10, 1)
x = np.reshape(uniform_rand_float64(circuit, num_target_qubits,
2*n, 0.0, 1.0), (-1, 2))
x1 = 2.0 * x[c, 0] - 1.0 # first sample in a pair
x2 = 2.0 * x[c, 1] - 1.0 # second sample in a pair
r2 = x1 * x1 + x2 * x2
c += 1
f = np.sqrt(np.abs(-2.0 * np.log(r2) / r2))
rand_vec[d] = f * x1
return (rand_vec * sigma + mu)
```
The following example demonstrates how to draw a random vector of normally distributed variates:
```python
# Mean and standard deviation.
mu = 2.4
sigma = 5.1
# Draw a sample from the normal distribution.
start_time = time.time()
sample = normal_rand_float64(circuit, glo_num_qubits, size=4321, mu=mu, sigma=sigma)
sampling_time = time.time() - start_time
# Print out some details.
print("Normal distribution (mu={:.3f}, sigma={:.3f}):".format(mu, sigma))
print(" sample type:", type(sample), ", element type:", sample.dtype, ", shape:", sample.shape)
print(" sample min: {:.4f}, max: {:.4f}".format(np.amin(sample), np.amax(sample)))
print(" sampling time: {:.2f} secs".format(sampling_time))
# Plotting the distribution.
x = np.linspace(mu - 4.0 * sigma, mu + 4.0 * sigma, 1000)
analyt = np.exp(-0.5 * ((x - mu) / sigma)**2) / (sigma * math.sqrt(2.0 * math.pi))
plt.hist(sample.ravel(),
bins=min(int(np.ceil(np.sqrt(sample.size))), 100),
density=True, facecolor='r', alpha=0.75)
plt.plot(x, analyt, '-b', lw=1)
plt.xlabel("value", size=12)
plt.ylabel("probability", size=12)
plt.title("Normal distribution: empirical vs analytic", size=12)
plt.grid(True)
# plt.savefig("normal_distrib.png", bbox_inches="tight")
plt.show()
```
There is a substantial amount of further work needed to either certify the quality of the source of random numbers (cf. NIST SP 800-90B, Recommendation for the Entropy Sources Used for Random Bit Generation) or to use random variates within quantum algorithms (cf. <a href="https://github.com/Qiskit/qiskit-aqua/tree/master/qiskit/aqua/components/uncertainty_models">uncertainty_models</a> within Qiskit Aqua).
```python
```
| 10ead679ccee414c0e72f1ae64c3a20c4c9607fe | 89,921 | ipynb | Jupyter Notebook | qiskit/aqua/generating_random_variates.ipynb | sebhofer/qiskit-tutorials | 1efb5977b00345373b4c4d9889c1823859a248c1 | [
"Apache-2.0"
]
| 2 | 2021-04-29T15:11:27.000Z | 2021-05-09T20:52:21.000Z | qiskit/aqua/generating_random_variates.ipynb | sebhofer/qiskit-tutorials | 1efb5977b00345373b4c4d9889c1823859a248c1 | [
"Apache-2.0"
]
| 1 | 2020-05-08T20:25:11.000Z | 2020-05-08T20:25:11.000Z | qiskit/aqua/generating_random_variates.ipynb | sebhofer/qiskit-tutorials | 1efb5977b00345373b4c4d9889c1823859a248c1 | [
"Apache-2.0"
]
| 1 | 2019-09-02T00:35:21.000Z | 2019-09-02T00:35:21.000Z | 189.706751 | 23,744 | 0.889336 | true | 2,943 | Qwen/Qwen-72B | 1. YES
2. YES | 0.782662 | 0.695958 | 0.5447 | __label__eng_Latn | 0.923521 | 0.103851 |
# Simulating Gate Noise
$$
\newcommand{ket}[1]{\left|{#1}\right\rangle}
\newcommand{bra}[1]{\left\langle {#1}\right|}
\newcommand{tr}{\mathrm{Tr}}
$$
## Pure states vs. mixed states
Errors in quantum computing can introduce classical uncertainty in what the underlying state is.
When this happens we sometimes need to consider not only wavefunctions but also probabilistic sums of
wavefunctions when we are uncertain as to which one we have. For example, if we think that an X gate
was accidentally applied to a qubit with a 50-50 chance then we would say that there is a 50% chance
we have the $\ket{0}$ state and a 50% chance that we have a $\ket{1}$ state.
This is called an "impure" or
"mixed"state in that it isn't just a wavefunction (which is pure) but instead a distribution over
wavefunctions. We describe this with something called a density matrix, which is generally an
operator. Pure states have very simple density matrices that we can write as an outer product of a
ket vector $\ket{\psi}$ with its own bra version $\bra{\psi}=\ket{\psi}^\dagger$.
For a pure state the density matrix is simply
$$
\rho_\psi = \ket{\psi}\bra{\psi}.
$$
The expectation value of an operator for a mixed state is given by
$$
\langle X \rangle_\rho = \tr{X \rho}
$$
where $\tr{A}$ is the trace of an operator, i.e., the sum of its diagonal elements,
which is independent of the choice of basis.
Pure state density matrices satisfy
$$
\rho \text{ is pure } \Leftrightarrow \rho^2 = \rho
$$
which you can easily verify for $\rho_\psi$ assuming that the state is normalized.
If we want to describe a situation with classical uncertainty between states $\rho_1$ and
$\rho_2$, then we can take their weighted sum
$$
\rho = p \rho_1 + (1-p) \rho_2
$$
where $p\in [0,1]$ gives the classical probability that the state is $\rho_1$.
Note that classical uncertainty in the wavefunction is markedly different from superpositions.
We can represent superpositions using wavefunctions, but use density matrices to describe
distributions over wavefunctions. You can read more about density matrices [here](https://en.wikipedia.org/wiki/Density_matrix).
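A small NumPy sketch of these statements for a single qubit (pure state versus a 50/50 mixture):
```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

rho_pure = np.outer(ket0, ket0)                        # |0><0|
rho_mixed = 0.5 * rho_pure + 0.5 * np.outer(ket1, ket1)

print(np.allclose(rho_pure @ rho_pure, rho_pure))      # True:  rho^2 == rho
print(np.allclose(rho_mixed @ rho_mixed, rho_mixed))   # False: the mixture is not pure
print(np.trace(np.diag([1.0, -1.0]) @ rho_mixed))      # <Z> = Tr(Z rho) = 0
```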
# Quantum gate errors
## What are they?
For a quantum gate given by its unitary operator $U$, a "quantum gate error" describes the scenario in which the actually induced transformation deviates from $\ket{\psi} \mapsto U\ket{\psi}$.
There are two basic types of quantum gate errors:
1. **coherent errors** are those that preserve the purity of the input state, i.e., instead of the above mapping we carry out a perturbed, but unitary operation $\ket{\psi} \mapsto \tilde{U}\ket{\psi}$, where $\tilde{U} \neq U$.
2. **incoherent errors** are those that do not preserve the purity of the input state,
in this case we must actually represent the evolution in terms of density matrices.
The state $\rho := \ket{\psi}\bra{\psi}$ is then mapped as
$$
\rho \mapsto \sum_{j=1}^n K_j\rho K_j^\dagger,
$$
where the operators $\{K_1, K_2, \dots, K_m\}$ are called Kraus operators and must obey
$\sum_{j=1}^m K_j^\dagger K_j = I$ to conserve the trace of $\rho$.
Maps expressed in the above form are called Kraus maps. It can be shown that every physical map on a finite
dimensional quantum system can be represented as a Kraus map, though this representation is not generally unique.
[You can find more information about quantum operations here](https://en.wikipedia.org/wiki/Quantum_operation#Kraus_operators)
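Before moving on, here is a compact NumPy sketch of applying a Kraus map to a density matrix (purely illustrative, using a bit-flip channel as the example):
```python
import numpy as np

def apply_kraus(rho, kraus_ops):
    """rho -> sum_j K_j rho K_j^dagger for a list of Kraus operators."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

p = 0.1                                          # bit-flip probability (arbitrary)
K1 = np.sqrt(1 - p) * np.eye(2)
K2 = np.sqrt(p) * np.array([[0.0, 1.0], [1.0, 0.0]])
rho0 = np.diag([1.0, 0.0])                       # |0><0|
rho1 = apply_kraus(rho0, [K1, K2])
print(rho1, np.trace(rho1))                      # trace is preserved (== 1)
```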
In a way, coherent errors can *in principle* be remedied by more precisely calibrated control. Incoherent errors are trickier.
## Why do incoherent errors happen?
When a quantum system (e.g., the qubits on a quantum processor) is not perfectly isolated from its environment it generally co-evolves with the degrees of freedom it couples to. The implication is that while the total time evolution of system and environment can be assumed to be unitary, restriction to the system state generally is not.
**Let's throw some math at this for clarity:**
Let our total Hilbert space be given by the tensor product of system and environment Hilbert spaces:
$\mathcal{H} = \mathcal{H}_S \otimes \mathcal{H}_E$.
Our system "not being perfectly isolated" must be translated to the statement that the global Hamiltonian contains a contribution that couples the system and environment:
$$
H = H_S \otimes I + I \otimes H_E + V
$$
where $V$ non-trivially acts on both the system and the environment.
Consequently, even if we start in an initial state that factorizes over system and environment, $\ket{\psi}_{S,0}\otimes \ket{\psi}_{E,0}$,
and everything evolves by the Schrödinger equation
$$
\ket{\psi_t} = e^{-i \frac{Ht}{\hbar}} \left(\ket{\psi}_{S,0}\otimes \ket{\psi}_{E,0}\right)
$$
the final state will generally not admit such a factorization.
## A toy model
**In this (somewhat technical) section we show how environment interaction can corrupt an identity gate and derive its Kraus map.**
For simplicity, let us assume that we are in a reference frame in which both the system and environment Hamiltonian's vanish $H_S = 0, H_E = 0$ and where the cross-coupling is small even when multiplied by the duration of the time evolution $\|\frac{tV}{\hbar}\|^2 \sim \epsilon \ll 1$ (any operator norm $\|\cdot\|$ will do here).
Let us further assume that $V = \sqrt{\epsilon} V_S \otimes V_E$ (the more general case is given by a sum of such terms) and that
the initial environment state satisfies $\bra{\psi}_{E,0} V_E\ket{\psi}_{E,0} = 0$. This turns out to be a very reasonable assumption in practice but a more thorough discussion exceeds our scope.
Then the joint system + environment state $\rho = \rho_{S,0} \otimes \rho_{E,0}$ (now written as a density matrix) evolves as
$$
\rho \mapsto \rho' := e^{-i \frac{Vt}{\hbar}} \rho e^{+i \frac{Vt}{\hbar}}
$$
Using the Baker-Campbell-Hausdorff theorem we can expand this to second order in $\epsilon$
$$
\rho' = \rho - \frac{it}{\hbar} [V, \rho] - \frac{t^2}{2\hbar^2} [V, [V, \rho]] + O(\epsilon^{3/2})
$$
We can insert the initially factorizable state $\rho = \rho_{S,0} \otimes \rho_{E,0}$ and trace over the environmental degrees of freedom to obtain
\begin{align}
\rho_S' := \tr_E \rho' & = \rho_{S,0} \underbrace{\tr \rho_{E,0}}_{1} - \frac{i\sqrt{\epsilon} t}{\hbar} \underbrace{\left[ V_S \rho_{S,0} \underbrace{\tr V_E\rho_{E,0}}_{\bra{\psi}_{E,0} V_E\ket{\psi}_{E,0} = 0} - \rho_{S,0}V_S \underbrace{\tr \rho_{E,0}V_E}_{\bra{\psi}_{E,0} V_E\ket{\psi}_{E,0} = 0} \right]}_0 \\
& \qquad - \frac{\epsilon t^2}{2\hbar^2} \left[ V_S^2\rho_{S,0}\tr V_E^2 \rho_{E,0} + \rho_{S,0} V_S^2 \tr \rho_{E,0}V_E^2 - 2 V_S\rho_{S,0}V_S\tr V_E \rho_{E,0}V_E\right] \\
& = \rho_{S,0} - \frac{\gamma}{2} \left[ V_S^2\rho_{S,0} + \rho_{S,0} V_S^2 - 2 V_S\rho_{S,0}V_S\right]
\end{align}
where the coefficient in front of the second part is by our initial assumption very small $\gamma := \frac{\epsilon t^2}{2\hbar^2}\tr V_E^2 \rho_{E,0} \ll 1$.
This evolution happens to be approximately equal to a Kraus map with operators $K_1 := I - \frac{\gamma}{2} V_S^2, K_2:= \sqrt{\gamma} V_S$:
\begin{align}
\rho_S \to \rho_S' &= K_1\rho K_1^\dagger + K_2\rho K_2^\dagger
= \rho - \frac{\gamma}{2}\left[ V_S^2 \rho + \rho V_S^2\right] + \gamma V_S\rho_S V_S + O(\gamma^2)
\end{align}
This agrees to $O(\epsilon^{3/2})$ with the result of our derivation above. This type of derivation can be extended to many other cases with little complication and a very similar argument is used to derive the [Lindblad master equation](https://en.wikipedia.org/wiki/Lindblad_equation).
# Support for noisy gates on the Rigetti QVM
As of today, users of our Forest API can annotate their QUIL programs by certain pragma statements that inform the QVM that a particular gate on specific target qubits should be replaced by an imperfect realization given by a Kraus map.
## But the QVM propagates *pure states*: How does it simulate noisy gates?
It does so by yielding the correct outcomes **in the average over many executions of the QUIL program**:
When the noisy version of a gate is to be applied, the QVM makes a random choice of which Kraus operator to apply to the current state, with a probability that ensures that the average over many executions is equivalent to the Kraus map.
Specifically, the Kraus operator $K_j$ is applied to $\ket{\psi}_S$
$$
\ket{\psi'}_S = \frac{1}{\sqrt{p_j}} K_j \ket{\psi}_S
$$
with probability $p_j:= \bra{\psi}_S K_j^\dagger K_j \ket{\psi}_S$.
In the average over many executions, $N \gg 1$, we therefore find that
\begin{align}
\overline{\rho_S'} & = \frac{1}{N} \sum_{n=1}^N \ket{\psi'_n}_S\bra{\psi'_n}_S \\
& = \frac{1}{N} \sum_{n=1}^N p_{j_n}^{-1}K_{j_n}\ket{\psi}_S \bra{\psi}_SK_{j_n}^\dagger
\end{align}
where $j_n$ is the chosen Kraus operator label in the $n$-th trial.
This is clearly a Kraus map itself! And we can group identical terms and rewrite it as
\begin{align}
\overline{\rho_S'} & =
\sum_{\ell=1}^n \frac{N_\ell}{N} p_{\ell}^{-1}K_{\ell}\ket{\psi}_S \bra{\psi}_SK_{\ell}^\dagger
\end{align}
where $N_{\ell}$ is the number of times that Kraus operator label $\ell$ was selected.
For large enough $N$ we know that $N_{\ell} \approx N p_\ell$ and therefore
\begin{align}
\overline{\rho_S'} \approx \sum_{\ell=1}^n K_{\ell}\ket{\psi}_S \bra{\psi}_SK_{\ell}^\dagger
\end{align}
which proves our claim.
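A quick classical Monte-Carlo check of this argument, using an amplitude-damping channel of the same form as in Example 1 below (a sketch; the numbers are arbitrary):
```python
import numpy as np

rng = np.random.default_rng(1)

p = 0.3                                           # damping probability (arbitrary)
K = [np.diag([1.0, np.sqrt(1 - p)]),
     np.sqrt(p) * np.array([[0.0, 1.0], [0.0, 0.0]])]
psi = np.array([1.0, 1.0]) / np.sqrt(2)           # |+> state

exact = sum(Kj @ np.outer(psi, psi) @ Kj.conj().T for Kj in K)

# Stochastic unravelling: pick K_j with probability <psi|Kj^dag Kj|psi>, renormalise.
probs = np.array([psi @ Kj.conj().T @ Kj @ psi for Kj in K])
rho_avg = np.zeros((2, 2))
N = 20000
for _ in range(N):
    j = rng.choice(len(K), p=probs / probs.sum())
    phi = K[j] @ psi / np.sqrt(probs[j])
    rho_avg += np.outer(phi, phi) / N

print(np.round(exact, 3))
print(np.round(rho_avg, 3))                       # agrees up to sampling noise
```
Here the selection probabilities are fixed because the starting state never changes; in a circuit they would be recomputed after every noisy gate.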
**The consequence is that noisy gate simulations must generally be repeated many times to obtain representative results**.
## How do I get started?
1. Come up with a good model for your noise. We will provide some examples below and may add more such
examples to our public repositories over time. Alternatively, you can characterize the gate under
consideration using [Quantum Process Tomography](https://arxiv.org/abs/1202.5344) or
[Gate Set Tomography](http://www.pygsti.info/) and use the resulting process matrices to obtain a
very accurate noise model for a particular QPU.
2. Define your Kraus operators as a list of numpy arrays `kraus_ops = [K1, K2, ..., Km]`.
3. For your QUIL program `p`, call:
```
p.define_noisy_gate("MY_NOISY_GATE", [q1, q2], kraus_ops)
```
where you should replace `MY_NOISY_GATE` with the gate of interest and `q1, q2` the indices of the qubits.
**Scroll down for some examples!**
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import binom
import matplotlib.colors as colors
%matplotlib inline
```
```python
from pyquil import Program, get_qc
from pyquil.gates import CZ, H, I, X, MEASURE
from scipy.linalg import expm
```
```python
# We could ask for "2q-noisy-qvm" but we will be specifying
# our noise model as PRAGMAs on the Program itself.
qc = get_qc('2q-qvm')
```
# Example 1: Amplitude damping
Amplitude damping channels are imperfect identity maps with Kraus operators
$$
K_1 = \begin{pmatrix}
1 & 0 \\
0 & \sqrt{1-p}
\end{pmatrix} \\
K_2 = \begin{pmatrix}
0 & \sqrt{p} \\
0 & 0
\end{pmatrix}
$$
where $p$ is the probability that a qubit in the $\ket{1}$ state decays to the $\ket{0}$ state.
```python
def damping_channel(damp_prob=.1):
"""
Generate the Kraus operators corresponding to an amplitude damping
noise channel.
:params float damp_prob: The one-step damping probability.
:return: A list [k1, k2] of the Kraus operators that parametrize the map.
:rtype: list
"""
damping_op = np.sqrt(damp_prob) * np.array([[0, 1],
[0, 0]])
residual_kraus = np.diag([1, np.sqrt(1-damp_prob)])
return [residual_kraus, damping_op]
def append_kraus_to_gate(kraus_ops, g):
"""
Follow a gate `g` by a Kraus map described by `kraus_ops`.
:param list kraus_ops: The Kraus operators.
:param numpy.ndarray g: The unitary gate.
:return: A list of transformed Kraus operators.
"""
return [kj.dot(g) for kj in kraus_ops]
def append_damping_to_gate(gate, damp_prob=.1):
"""
Generate the Kraus operators corresponding to a given unitary
single qubit gate followed by an amplitude damping noise channel.
:params np.ndarray|list gate: The 2x2 unitary gate matrix.
:params float damp_prob: The one-step damping probability.
:return: A list [k1, k2] of the Kraus operators that parametrize the map.
:rtype: list
"""
return append_kraus_to_gate(damping_channel(damp_prob), gate)
```
```python
%%time
# single step damping probability
damping_per_I = 0.02
# number of program executions
trials = 200
results_damping = []
lengths = np.arange(0, 201, 10, dtype=int)
for jj, num_I in enumerate(lengths):
print("\r{}/{}, ".format(jj, len(lengths)), end="")
p = Program(X(0))
ro = p.declare("ro")
# want increasing number of I-gates
p.inst([I(0) for _ in range(num_I)])
p.inst(MEASURE(0, ro[0]))
# overload identity I on qc 0
p.define_noisy_gate("I", [0], append_damping_to_gate(np.eye(2), damping_per_I))
p.wrap_in_numshots_loop(trials)
qc.qam.random_seed = int(num_I)
res = qc.run(p)
results_damping.append([np.mean(res), np.std(res) / np.sqrt(trials)])
results_damping = np.array(results_damping)
```
20/21, CPU times: user 74.2 ms, sys: 6.16 ms, total: 80.3 ms
Wall time: 744 ms
```python
dense_lengths = np.arange(0, lengths.max()+1, .2)
survival_probs = (1-damping_per_I)**dense_lengths
logpmf = binom.logpmf(np.arange(trials+1)[np.newaxis, :], trials, survival_probs[:, np.newaxis])/np.log(10)
```
```python
DARK_TEAL = '#48737F'
FUSCHIA = "#D6619E"
BEIGE = '#EAE8C6'
cm = colors.LinearSegmentedColormap.from_list('anglemap', ["white", FUSCHIA, BEIGE], N=256, gamma=1.5)
```
```python
plt.figure(figsize=(14, 6))
plt.pcolor(dense_lengths, np.arange(trials+1)/trials, logpmf.T, cmap=cm, vmin=-4, vmax=logpmf.max())
plt.plot(dense_lengths, survival_probs, c=BEIGE, label="Expected mean")
plt.errorbar(lengths, results_damping[:,0], yerr=2*results_damping[:,1], c=DARK_TEAL,
label=r"noisy qvm, errorbars $ = \pm 2\hat{\sigma}$", marker="o")
cb = plt.colorbar()
cb.set_label(r"$\log_{10} \mathrm{Pr}(n_1; n_{\rm trials}, p_{\rm survival}(t))$", size=20)
plt.title("Amplitude damping model of a single qubit", size=20)
plt.xlabel(r"Time $t$ [arb. units]", size=14)
plt.ylabel(r"$n_1/n_{\rm trials}$", size=14)
plt.legend(loc="best", fontsize=18)
plt.xlim(*lengths[[0, -1]])
plt.ylim(0, 1)
```
# Example 2: dephased CZ-gate
Dephasing is usually characterized through a qubit's $T_2$ time.
For a single qubit the dephasing Kraus operators are
$$
K_1(p) = \sqrt{1-p} I_2 \\
K_2(p) = \sqrt{p} \sigma_Z
$$
where $p = 1 - \exp(-T_{\rm gate}/T_2)$ is the probability that the qubit is dephased over the time interval of interest, $I_2$ is the $2\times 2$-identity matrix and $\sigma_Z$ is the Pauli-Z operator.
For two qubits, we must construct a Kraus map that has *four* different outcomes:
1. No dephasing
2. Qubit 1 dephases
3. Qubit 2 dephases
4. Both dephase
The Kraus operators for this are given by
\begin{align}
K'_1(p,q) = K_1(p)\otimes K_1(q) \\
K'_2(p,q) = K_2(p)\otimes K_1(q) \\
K'_3(p,q) = K_1(p)\otimes K_2(q) \\
K'_4(p,q) = K_2(p)\otimes K_2(q)
\end{align}
where we assumed a dephasing probability $p$ for the first qubit and $q$ for the second.
Dephasing is a *diagonal* error channel and the CZ gate is also diagonal, therefore we can get the combined map of dephasing and the CZ gate simply by composing $U_{\rm CZ}$ the unitary representation of CZ with each Kraus operator
\begin{align}
K^{\rm CZ}_1(p,q) = K_1(p)\otimes K_1(q)U_{\rm CZ} \\
K^{\rm CZ}_2(p,q) = K_2(p)\otimes K_1(q)U_{\rm CZ} \\
K^{\rm CZ}_3(p,q) = K_1(p)\otimes K_2(q)U_{\rm CZ} \\
K^{\rm CZ}_4(p,q) = K_2(p)\otimes K_2(q)U_{\rm CZ}
\end{align}
**Note that this is not always accurate, because a CZ gate is often achieved through non-diagonal interaction Hamiltonians! However, for sufficiently small dephasing probabilities it should always provide a good starting point.**
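If you would rather tie $p$ to measured coherence times, a small helper along the lines below can be used (a sketch; the gate duration and $T_2$ values are assumed, not taken from a real device):
```python
import numpy as np

def dephasing_probability(t_gate, T2):
    """Probability that a qubit dephases during a gate of duration t_gate, given T2."""
    return 1.0 - np.exp(-t_gate / T2)

# e.g. a 150 ns two-qubit gate on a qubit with T2 = 20 microseconds (assumed numbers)
print(dephasing_probability(150e-9, 20e-6))
```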
```python
def dephasing_kraus_map(p=.1):
"""
Generate the Kraus operators corresponding to a dephasing channel.
:params float p: The one-step dephasing probability.
:return: A list [k1, k2] of the Kraus operators that parametrize the map.
:rtype: list
"""
return [np.sqrt(1-p)*np.eye(2), np.sqrt(p)*np.diag([1, -1])]
def tensor_kraus_maps(k1, k2):
"""
Generate the Kraus map corresponding to the composition
of two maps on different qubits.
:param list k1: The Kraus operators for the first qubit.
:param list k2: The Kraus operators for the second qubit.
:return: A list of tensored Kraus operators.
"""
return [np.kron(k1j, k2l) for k1j in k1 for k2l in k2]
```
```python
%%time
# single step damping probabilities
ps = np.linspace(.001, .5, 200)
# number of program executions
trials = 500
results = []
for jj, p in enumerate(ps):
corrupted_CZ = append_kraus_to_gate(
tensor_kraus_maps(
dephasing_kraus_map(p),
dephasing_kraus_map(p)
),
np.diag([1, 1, 1, -1]))
print("\r{}/{}, ".format(jj, len(ps)), end="")
# make Bell-state
p = Program(H(0), H(1), CZ(0,1), H(1))
ro = p.declare("ro", memory_size=2)
p.inst(MEASURE(0, ro[0]))
p.inst(MEASURE(1, ro[1]))
# overload identity I on qc 0
p.define_noisy_gate("CZ", [0, 1], corrupted_CZ)
p.wrap_in_numshots_loop(trials)
qc.qam.random_seed = jj
res = qc.run(p)
results.append(res)
results = np.array(results)
```
199/200, CPU times: user 568 ms, sys: 43.2 ms, total: 611 ms
Wall time: 1.89 s
```python
Z1s = (2*results[:,:,0]-1.)
Z2s = (2*results[:,:,1]-1.)
Z1Z2s = Z1s * Z2s
Z1m = np.mean(Z1s, axis=1)
Z2m = np.mean(Z2s, axis=1)
Z1Z2m = np.mean(Z1Z2s, axis=1)
```
```python
plt.figure(figsize=(14, 6))
plt.axhline(y=1.0, color=FUSCHIA, alpha=.5, label="Bell state")
plt.plot(ps, Z1Z2m, "x", c=FUSCHIA, label=r"$\overline{Z_1 Z_2}$")
plt.plot(ps, 1-2*ps, "--", c=FUSCHIA, label=r"$\langle Z_1 Z_2\rangle_{\rm theory}$")
plt.plot(ps, Z1m, "o", c=DARK_TEAL, label=r"$\overline{Z}_1$")
plt.plot(ps, 0*ps, "--", c=DARK_TEAL, label=r"$\langle Z_1\rangle_{\rm theory}$")
plt.plot(ps, Z2m, "d", c="k", label=r"$\overline{Z}_2$")
plt.plot(ps, 0*ps, "--", c="k", label=r"$\langle Z_2\rangle_{\rm theory}$")
plt.xlabel(r"Dephasing probability $p$", size=18)
plt.ylabel(r"$Z$-moment", size=18)
plt.title(r"$Z$-moments for a Bell-state prepared with dephased CZ", size=18)
plt.xlim(0, .5)
plt.legend(fontsize=18)
```
| 8a797983c88a59711f3a56a7d3e8b26c51e62bdc | 192,190 | ipynb | Jupyter Notebook | notebooks/GateNoiseModels.ipynb | stjordanis/forest-tutorials | 39e99e5804891c4eb7420586fc2b691bb7935ddd | [
"Apache-2.0"
]
| 20 | 2020-01-31T03:52:38.000Z | 2022-03-27T16:43:07.000Z | notebooks/GateNoiseModels.ipynb | stjordanis/forest-tutorials | 39e99e5804891c4eb7420586fc2b691bb7935ddd | [
"Apache-2.0"
]
| 3 | 2020-02-05T16:23:50.000Z | 2020-11-13T15:47:38.000Z | notebooks/GateNoiseModels.ipynb | stjordanis/forest-tutorials | 39e99e5804891c4eb7420586fc2b691bb7935ddd | [
"Apache-2.0"
]
| 17 | 2020-01-30T17:07:38.000Z | 2022-01-17T13:57:49.000Z | 307.504 | 88,184 | 0.915074 | true | 5,792 | Qwen/Qwen-72B | 1. YES
2. YES | 0.880797 | 0.839734 | 0.739635 | __label__eng_Latn | 0.975204 | 0.556752 |
# Approximation and Interpolation: solutions to the proposed problems
This notebook presents the solutions to the problems proposed in lecture 01, Approximation and Interpolation.
<!-- TEASER_END -->
```julia
using PyPlot
```
```julia
using Polynomials
```
```julia
using BenchmarkTools
```
```julia
struct Lagrange
x::Vector{Float64}
y::Vector{Float64}
Lagrange(x, y) = new(copy(x), copy(y))
end
Base.Broadcast.broadcastable(lgr::Lagrange) = Ref(lgr)
function lagrange(k, z, x)
h = 1.0
n = length(z)
for i = 1:(k-1)
h *= (x - z[i]) / (z[k] - z[i])
end
for i = (k+1):n
h *= (x - z[i]) / (z[k] - z[i])
end
return h
end
function interp(lgr::Lagrange, x)
y = lgr.y[1] * lagrange(1, lgr.x, x)
for i = 2:length(lgr.x)
y += lgr.y[i] * lagrange(i, lgr.x, x)
end
return y
end
(lgr::Lagrange)(x) = interp(lgr, x)
```
```julia
struct LinearInterp
x::Vector{Float64}
y::Vector{Float64}
LinearInterp(x, y) = new(copy(x), copy(y))
end
Base.Broadcast.broadcastable(lin::LinearInterp) = Ref(lin)
function interp(lin::LinearInterp, x)
if x < lin.x[1] || x > lin.x[end]
error("Fora do Range")
end
index = 2
n = length(lin.x)
for i = 2:n
if lin.x[i] >= x
index = i
break
end
end
i1 = index-1
return lin.y[i1] + (lin.y[index] - lin.y[i1]) * (x - lin.x[i1]) / (lin.x[index] - lin.x[i1])
end
(lin::LinearInterp)(x) = interp(lin, x)
```
```julia
function linfit(x,y)
sx = sum(x)
sx2 = sum(x->x^2, x)
N = length(x)
sy = sum(y)
syx = sum(x[i]*y[i] for i in 1:N)
return [N sx; sx sx2] \ [sy; syx]
end
```
# Exercises
## Problem 1
Interpolate the Runge function on $-1 \le x \le 1$:
$$
f(x) = \frac{1}{1 + 25x^2}
$$
1. Use 11 uniformly spaced points
2. Increase the number of points
3. Try using the points $x_k = \cos\left(\frac{k\pi}{N}\right)$ for $k = 0\ldots N$.
4. Play around with the number of points
```julia
f(x) = 1.0 / (1.0 + 25x^2)
x0 = range(-1.0, 1.0, length=501)
y0 = f.(x0);
```
### Uniformly spaced points
```julia
x = range(-1.0, 1.0, length=11)
y = f.(x);
```
```julia
lgr = Lagrange(x, y)
```
```julia
u = lgr.(x0);
plot(x0, y0, "r-")
plot(x, y, "rs")
plot(x0, u, "b--")
```
**It is interpolating!** But what a mess!!!
Could it be the number of points? Let's see...
```julia
x = range(-1.0, 1.0, length=12)
y = f.(x)
lgr = Lagrange(x, y)
u = lgr.(x0);
plot(x0, y0, "r-")
plot(x, y, "rs")
plot(x0, u, "b--")
```
**With 12 points, the ends improved but the center got worse**. Let's try 13 points
```julia
x = range(-1.0, 1.0, length=13)
y = f.(x)
lgr = Lagrange(x, y)
u = lgr.(x0);
plot(x0, y0, "r-")
plot(x, y, "rs")
plot(x0, u, "b--")
```
**Something strange is going on!!! Let's increase the number of points a lot**
```julia
x = range(-1.0, 1.0, length=51)
y = f.(x)
lgr = Lagrange(x, y)
u = lgr.(x0);
plot(x0, y0, "r-")
plot(x, y, "rs")
plot(x0, u, "b--")
```
**Look at the scale: there is a 1e6 factor, i.e., the value reaches 5 million. Is the function really still interpolating???**
```julia
plot(x0, y0, "r-")
ylim([-5, 5])
plot(x, y, "g.")
plot(x0, u, "b--")
```
**It keeps interpolating!?! But increasing the number of points only makes it worse... Let's see what happens with an even number of points.**
```julia
x = range(-1.0, 1.0, length=52)
y = f.(x)
lgr = Lagrange(x, y)
u = lgr.(x0);
plot(x0, y0, "r-")
plot(x, y, "rs")
plot(x0, u, "b--")
```
```julia
plot(x0, y0, "r-")
ylim([-5, 5])
plot(x, y, "g.")
plot(x0, u, "b--")
```
**Yeah... the more points, the worse it gets! This is the Runge phenomenon**.
### Points $x_k = \cos\left(\frac{k\pi}{N}\right)$
```julia
chebpoints(N) = cos.((0:N) .* π ./ N )
```
```julia
x = chebpoints(11);
y = f.(x);
lgr = Lagrange(x, y)
u = lgr.(x0);
plot(x0, y0, "r-")
plot(x, y, "rs")
plot(x0, u, "b--")
```
**Not great, but better than before**
```julia
for i in 1:length(x)
l0 = lagrange.(i, Ref(x), x0)
plot(x0, l0)
axvline(x[i], color="k", ls=":")
end
axhline(1.0, c="k", ls="--")
```
```julia
x = chebpoints(12);
y = f.(x);
lgr = Lagrange(x, y)
u = lgr.(x0);
plot(x0, y0, "r-")
plot(x, y, "rs")
plot(x0, u, "b--")
```
```julia
x = chebpoints(51);
y = f.(x);
lgr = Lagrange(x, y)
u = lgr.(x0);
plot(x0, y0, "r-")
plot(x, y, "g.")
plot(x0, u, "b--")
```
```julia
plot(x0, y0, "r-")
xlim([-1.0, -0.75])
ylim([0.0, 0.2])
plot(x, y, "g.")
plot(x0, u, "b--")
```
```julia
maximum(abs, u-y0)
```
### Let's see how the error behaves
```julia
function maxerr(fun, n, xpts)
x = chebpoints(n)
y = fun.(x)
lgr = Lagrange(x, y)
y0 = fun.(x0)
u0 = lgr.(x0)
plot(x0, y0, "r-")
plot(x0, u0, "b--")
plot(x, y, "g.")
return maximum(abs, u0 - y0)
end
```
```julia
maxerr(f, 14, x0)
```
```julia
N = 2:50
ε = maxerr.(f, N, Ref(x0));
```
```julia
#semilogy(N, ε, "ro-")
loglog(N, ε, "ro-")
```
## Problem 2
Look up Newton's divided-differences method on the web and use it to interpolate the previous function at the same points. This method is simply a clever way of solving the matrix system presented above.
### Newton's divided differences
```julia
function divdiffmat(x)
n = length(x)
A = zeros(n,n)
for i = 1:n
A[i,1] = 1.0
for k in 2:i
A[i, k] = A[i,k-1] * (x[i] - x[k-1])
end
end
return A
end
```
```julia
using LinearAlgebra
```
```julia
struct DividedDiff
x::Vector{Float64}
a::Vector{Float64}
end
function divideddiff(x, y)
A = LowerTriangular(divdiffmat(x))
a = A\y
return DividedDiff(copy(x), a)
end
Base.Broadcast.broadcastable(ddif::DividedDiff) = Ref(ddif)
function interp(ddif::DividedDiff, x)
xx = ddif.x
a = ddif.a
y = a[end]
for i in (lastindex(a)-1):-1:1
y = a[i] + (x-xx[i])*y
end
return y
end
(ddif::DividedDiff)(x) = interp(ddif, x)
```
## Newton's algorithm
An interesting way to compute the interpolation using divided differences. When solving the linear system, the solution can be written as:
$$
\begin{align}
a_0 &= f(x_0)\\
a_1 & = \frac{f(x_1) - f(x_0)}{x_1 - x_0}\\
a_2 &= \frac{\frac{f(x_2)-f(x_0)}{x_2-x_0} - \frac{f(x_1) - f(x_0)}{x_1-x_0} }{x_2 - x_1}\\
\vdots &= \vdots\\
\end{align}
$$
the $k$-th coefficient is:
$$
a_k = \mathcal{F}\left(x_0, x_1, \ldots, x_k\right)
$$
with
$$
\mathcal{F}\left(x_0, x_1, \ldots, x_k\right) = \frac{\mathcal{F}\left(x_0, x_1, \ldots, x_{k-1}\right) -\mathcal{F}\left(x_1, x_2, \ldots, x_k\right)}{x_0-x_k}
$$
```julia
function newton_divdiff(x, y)
n = length(x)
F = zeros(n)
F0 = zeros(n)
F1 = zeros(n)
F0[1] = y[1]
F1[1] = y[1]
F[1] = y[1]
for i = 2:n
F1[1] = y[i]
for k in 2:i
F1[k] = (F1[k-1] - F0[k-1]) / (x[i] - x[i-k+1])
end
F[i] = F1[i]
for k in 1:i
F0[k] = F1[k]
end
end
return DividedDiff(copy(x), F)
end
```
```julia
x = range(-1.0, 1.0, length=11)
y = f.(x)
ddiff = divideddiff(x, y)
y0 = f.(x0)
u = ddiff.(x0);
plot(x0, y0, "r-")
plot(x0, u, "b--")
plot(x, y, "g.")
```
**Did anyone expect something better? There is only one polynomial passing through a given set of points!!!**
```julia
f1(x) = sin(π*x)
x = range(-1, 1.0, length=6)
y = f1.(x);
ddiff = divideddiff(x, y)
plot(x0, f1.(x0))
plot(x0, interp.(ddiff, x0), "r--")
plot(x, y, "g.")
```
```julia
d1 = divideddiff(x, y)
d2 = newton_divdiff(x, y);
d1.a - d2.a
```
### Problem 3
Use the Interpolations.jl and Dierckx.jl libraries to perform the interpolations. Compare linear interpolation with the splines.
```julia
using Interpolations
```
```julia
x = -1:0.2:1
y = f.(x);
```
```julia
itp1 = interpolate((x,), y, Gridded(Constant()));
itp2 = interpolate((x,), y, Gridded(Linear()));
```
```julia
xx = -1:0.001:1
yy = f.(xx)
yy1 = itp1.(xx);
yy2 = itp2.(xx);
plot(x, y, "rs")
plot(xx, yy, "r-")
plot(xx, yy1, "b-")
plot(xx, yy2, "b--")
```
```julia
itp3 = CubicSplineInterpolation((x,), y; bc=Line(OnGrid()));
yy3 = itp3.(xx)
plot(x, y, "rs")
plot(xx, yy, "r-")
plot(xx, yy3, "b:")
```
```julia
function compare(h, x)
xp = -1:h:1
yp = f.(xp)
itp1 = interpolate((xp,), yp, Gridded(Constant()));
itp2 = interpolate((xp,), yp, Gridded(Linear()));
itp3 = CubicSplineInterpolation((xp,), yp, bc=Flat(OnGrid()));
fx = f(x)
return itp1(x)-fx, itp2(x)-fx, itp3(x)-fx
end
```
```julia
hstep = [1.0, 0.5, 0.4, 0.2, 0.1, 0.05, 0.02, 0.01, 0.001]
nn = 2.0 ./ hstep
erra = [compare(h, -1+h/4) for h in hstep]
errb = [compare(h, h/4) for h in hstep];
errc = [compare(h, -0.9905) for h in hstep];
errd = [compare(h, 0.0005) for h in hstep];
```
```julia
ea1 = [abs(e[1]) for e in erra]
ea2 = [abs(e[2]) for e in erra]
ea3 = [abs(e[3]) for e in erra]
eb1 = [abs(e[1]) for e in errb]
eb2 = [abs(e[2]) for e in errb]
eb3 = [abs(e[3]) for e in errb];
ec1 = [abs(e[1]) for e in errc]
ec2 = [abs(e[2]) for e in errc]
ec3 = [abs(e[3]) for e in errc];
ed1 = [abs(e[1]) for e in errd]
ed2 = [abs(e[2]) for e in errd]
ed3 = [abs(e[3]) for e in errd];
```
```julia
loglog(nn, ea1, "rs-")
loglog(nn, ea2, "bo-")
loglog(nn, ea3, "g^-")
```
```julia
loglog(nn, eb1, "rs-")
loglog(nn, eb2, "bo-")
loglog(nn, eb3, "g^-")
```
### Problem 4
Write functions for the following least-squares fitting problems:
* $y = a_0 x^{a_1}$
* $y = a_0 \exp \left( a_1 \cdot x\right)$
* A generic polynomial of order n
### $y = a_0 x^{a_1}$
This is a nonlinear problem! So we need to transform it into a linear one.
Taking the log of both sides we get an expression of the form:
$$
\log y = \log a_0 + a_1 \cdot \log x
$$
Now we just need to do the fit.
```julia
function powerfit(x, y)
lnx = log.(x)
lny = log.(y)
fit = linfit(lnx, lny)
return [exp(fit[1]), fit[2]]
end
```
```julia
x = 1.0:0.2:5.0
y = 1.1 .* x .^ (-2.2);
```
```julia
powerfit(x, y)
```
### $y = a_0 \exp \left(a_1 \cdot x\right)$
This problem is also nonlinear! So we need to transform it into a linear one.
Taking the log of both sides we get an expression of the form:
$$
\log y = \log a_0 + a_1 \cdot x
$$
```julia
function expfit(x, y)
lny = log.(y)
fit = linfit(x, lny)
return [exp(fit[1]), fit[2]]
end
```
```julia
x = 1.0:0.2:5.0
y = 1.1 .* exp.(2.2 .* x);
```
```julia
expfit(x, y)
```
### Polynomial fitting
Now we want to fit the points with a polynomial of degree $n$:
$$
y = a_0 + a_1\cdot x + a_2\cdot x^2 + \cdots + a_n\cdot x^n
$$
Using the least-squares method, we arrive at the following system of equations:
$$
\left(
\begin{matrix}
\sum_{i=1}^Q 1 & \sum_{i=1}^Q x_i & \cdots & \sum_{i=1}^Q x_i^n \\
\sum_{i=1}^Q x_i & \sum_{i=1}^Q x_i^2 & \cdots & \sum_{i=1}^Q x_i^{n+1}\\
\vdots & \vdots & \ddots & \vdots \\
\sum_{i=1}^Q x_i^n & \sum_{i=1}^Q x_i^{n+1} & \cdots & \sum_{i=1}^Q x_i^{2n} \\
\end{matrix}\right)
\cdot
\left(\begin{matrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{matrix}\right)
=
\left(\begin{matrix} \sum_{i=1}^Q y_i \\ \sum_{i=1}^Q y_i x_i \\ \vdots \\ \sum_{i=1}^Q y_i x_i^n\end{matrix}\right)
$$
```julia
function polyfit(x, y, n, var=:x)
A = zeros(n+1, n+1)
b = zeros(n+1)
S = zeros(2n + 1)
npts = length(x)
S[1] = npts
for i in 1:2n
s = 0.0
for k in 1:npts
s += x[k]^i
end
S[i+1] = s
end
b[1] = sum(y)
for i in 1:n
s = 0.0
for k in 1:npts
s += y[k]*x[k]^i
end
b[i+1] = s
end
for j in 0:n
for i in 0:n
A[i+1,j+1] = S[i+j+1]
end
end
return Polynomial(A\b, var)
end
```
```julia
f4(x) = 1.0 + x + x^2
```
```julia
x = 0:0.2:10
y = f4.(x);
```
```julia
polyfit(x, y, 2)
```
```julia
polyfit(x, y, 3)
```
```julia
polyfit(x, y, 4)
```
```julia
polyfit(x, y, 5)
```
```julia
```
| ff8f1e6a5a968fa7551b994001d44e72659b7626 | 24,460 | ipynb | Jupyter Notebook | 01-sol-aproximacao.ipynb | pjabardo/sci-comp | 2f590363d0b7edd87d8125494eebac1c346da78d | [
"MIT"
]
| null | null | null | 01-sol-aproximacao.ipynb | pjabardo/sci-comp | 2f590363d0b7edd87d8125494eebac1c346da78d | [
"MIT"
]
| null | null | null | 01-sol-aproximacao.ipynb | pjabardo/sci-comp | 2f590363d0b7edd87d8125494eebac1c346da78d | [
"MIT"
]
| null | null | null | 22.075812 | 208 | 0.434832 | true | 4,965 | Qwen/Qwen-72B | 1. YES
2. YES | 0.831143 | 0.855851 | 0.711335 | __label__por_Latn | 0.302561 | 0.491 |
```python
import sympy
from sympy import *
# from sympy.abc import *
from IPython.display import display
init_printing()
```
# SymPy
## Symbolic Computation
Free, Open Source, Python
- solve equations - simplify expressions
- compute derivatives, integrals, limits
- work with matrices, - plotting & printing
- code gen - physics - statitics - combinatorics
- number theory - geometry - logic
----
## Modules
[SymPy Core](http://docs.sympy.org/latest/modules/core.html) - [Combinatorics](http://docs.sympy.org/latest/modules/combinatorics/index.html) - [Number Theory](http://docs.sympy.org/latest/modules/ntheory.html) - [Basic Cryptography](http://docs.sympy.org/latest/modules/crypto.html) - [Concrete Maths](http://docs.sympy.org/latest/modules/concrete.html) - [Numerical Evaluation](http://docs.sympy.org/latest/modules/evalf.html) - [Code Gen](http://docs.sympy.org/latest/modules/codegen.html) - [Numeric Computation](http://docs.sympy.org/latest/modules/numeric-computation.html) - [Functions](http://docs.sympy.org/latest/modules/functions/index.html) - [Geometry](http://docs.sympy.org/latest/modules/geometry/index.html) - [Holonomic Functions](http://docs.sympy.org/latest/modules/holonomic/index.html) - [Symbolic Integrals](http://docs.sympy.org/latest/modules/integrals/integrals.html) - [Numeric Integrals](http://docs.sympy.org/latest/modules/integrals/integrals.html#numeric-integrals) - [Lie Algebra](http://docs.sympy.org/latest/modules/liealgebras/index.html) - [Logic](http://docs.sympy.org/latest/modules/logic.html) - [Matricies](http://docs.sympy.org/latest/modules/matrices/index.html) - [Polynomials](http://docs.sympy.org/latest/modules/polys/index.html) - [Printing](http://docs.sympy.org/latest/modules/printing.html) - [Plotting](http://docs.sympy.org/latest/modules/plotting.html) - [Pyglet Plotting](http://docs.sympy.org/latest/modules/plotting.html#module-sympy.plotting.pygletplot) - [Assumptions](http://docs.sympy.org/latest/modules/assumptions/index.html) - [Term Rewriting](http://docs.sympy.org/latest/modules/rewriting.html) - [Series Module](http://docs.sympy.org/latest/modules/series/index.html) - [Sets](http://docs.sympy.org/latest/modules/sets.html) - [Symplify](http://docs.sympy.org/latest/modules/simplify/simplify.html) - [Hypergeometrtic](http://docs.sympy.org/latest/modules/simplify/hyperexpand.html) - [Stats](http://docs.sympy.org/latest/modules/stats.html) - [ODE](http://docs.sympy.org/latest/modules/solvers/ode.html) - [PDE](http://docs.sympy.org/latest/modules/solvers/pde.html) - [Solvers](http://docs.sympy.org/latest/modules/solvers/solvers.html) - [Diophantine](http://docs.sympy.org/latest/modules/solvers/diophantine.html) - [Inequality Solvers](http://docs.sympy.org/latest/modules/solvers/inequalities.html) - [Solveset](http://docs.sympy.org/latest/modules/solvers/solveset.html) - [Tensor](http://docs.sympy.org/latest/modules/tensor/index.html) - [Utilities](http://docs.sympy.org/latest/modules/utilities/index.html) - [Parsing Input](http://docs.sympy.org/latest/modules/parsing.html) - [Calculus](http://docs.sympy.org/latest/modules/calculus/index.html) - [Physics](http://docs.sympy.org/latest/modules/physics/index.html) - [Categrory Theory](http://docs.sympy.org/latest/modules/categories.html) - [Differential Geometry](http://docs.sympy.org/latest/modules/diffgeom.html) - [Vector](http://docs.sympy.org/latest/modules/vector/index.html)
----
## Simple Expressions
```python
# declare variable first
x, y = symbols('x y')
# Declare expression
expr = x + 3*y
# Print expressions
print("expr =", expr)
print("expr + 1 =", expr + 1)
print("expr - x =", expr - x) # auto-simplify
print("x * expr =", x * expr)
```
expr = x + 3*y
expr + 1 = x + 3*y + 1
expr - x = 3*y
x * expr = x*(x + 3*y)
----
## Substitution
```python
x = symbols('x')
expr = x + 1
print(expr)
display(expr)
```
```python
# Evaluate expression at a point
print("expr(2)=", expr.subs(x, 2))
```
expr(2)= 3
```python
# Replace sub expression with another sub expression
# 1. For expressions with symmetry
x, y = symbols('x y')
expr2 = x ** y
expr2 = expr2.subs(y, x**y)
expr2 = expr2.subs(y, x**x)
display(expr2)
```
```python
# 2. Controlled simplification
expr3 = sin(2*x) + cos(2*x)
print("expr3")
display(expr3)
print(" ")
print("expand_trig(expr3)")
display(expand_trig(expr3))
print(" ")
print("use this to only expand sin(2*x) if desired")
print("expr3.subs(sin(2*x), 2*sin(x)*cos(x))")
display(expr3.subs(sin(2*x), 2*sin(x)*cos(x)))
```
```python
# multi-substitute
x, y, z = symbols('x y z')  # z has not been declared yet, so declare all three symbols here
expr4 = x**3 + 4*x*y - z
args = [(x,2), (y,4), (z,0)]
expr5 = expr4.subs(args)
display(expr4)
print("args = ", args)
display(expr5)
```
```python
expr6 = x**4 - 4*x**3 + 4 * x ** 2 - 2 * x + 3
args = [(x**i, y**i) for i in range(5) if i%2 == 0]
display(expr6)
print(args)
display(expr6.subs(args))
```
----
## Equality & Equivalence
```python
# do not use == between symbols and variables, will return false
x = symbols('x')
x+1==4
```
False
```python
# Create a symbolic equality expression
expr2 = Eq(x+1, 4)
print(expr2)
display(expr2)
print("if x=3, then", expr2.subs(x,3))
```
```python
# two equivalent formulas
expr3 = (x + 1)**2 # we use pythons ** exponentiation (instead of ^)
expr4 = x**2 + 2*x + 1
eq34 = Eq(expr3, expr4)
print("expr3")
display(expr3)
print(" ≡ expr4")
display(expr4)
print("")
print("(expr3 == expr4) => ", expr3 == expr4)
print("(these are equivalent, but not the same symbolically)")
print("")
print("Equal by negating, simplifying and comparing to 0")
print("expr3 - expr4 => ", expr3 - expr4)
print("simplify(expr3-expr4)==0=> ", simplify(expr3 - expr4)==0 )
print("")
print("Equals (test by evaluating 2 random points)")
print("expr3.equals(expr4) => ", expr3.equals(expr4))
```
----
## SymPy Types & Casting
```python
print( "1 =", type(1) )
print( "1.0 =", type(1.0) )
print( "Integer(1) =", type(Integer(1)) )
print( "Integer(1)/Integer(3) =", type(Integer(1)/Integer(3)) )
print( "Rational(0.5) =", type(Rational(0.5)) )
print( "Rational(1/3) =", type(Rational(1,3)) )
```
1 = <class 'int'>
1.0 = <class 'float'>
Integer(1) = <class 'sympy.core.numbers.One'>
Integer(1)/Integer(3) = <class 'sympy.core.numbers.Rational'>
Rational(0.5) = <class 'sympy.core.numbers.Half'>
Rational(1/3) = <class 'sympy.core.numbers.Rational'>
```python
# string to SymPy
sympify("x**2 + 3*x - 1/2")
```
----
## Evaluating Expressions
```python
# evaluate as float using .evalf(), and N
display( sqrt(8) )
display( sqrt(8).evalf() )
display( sympy.N(sqrt(8)) )
```
```python
# evaluate as float to nearest n decimals
display(sympy.pi)
display(sympy.pi.evalf(100))
```
----
## SymPy Types
#### Number Class
[Number](http://docs.sympy.org/latest/modules/core.html#number) - [Float](http://docs.sympy.org/latest/modules/core.html#float) - [Rational](http://docs.sympy.org/latest/modules/core.html#rational) - [Integer](http://docs.sympy.org/latest/modules/core.html#integer) - [RealNumber](http://docs.sympy.org/latest/modules/core.html#realnumber)
#### Numbers
[Zero](http://docs.sympy.org/latest/modules/core.html#zero) - [One](http://docs.sympy.org/latest/modules/core.html#one) - [Negative One](http://docs.sympy.org/latest/modules/core.html#negativeone) - [Half](http://docs.sympy.org/latest/modules/core.html#half) - [NaN](http://docs.sympy.org/latest/modules/core.html#nan) - [Infinity](http://docs.sympy.org/latest/modules/core.html#infinity) - [Negative Infinity](http://docs.sympy.org/latest/modules/core.html#negativeinfinity) - [Complex Infinity](http://docs.sympy.org/latest/modules/core.html#complexinfinity)
#### Constants
[E (Transcendental Constant)](http://docs.sympy.org/latest/modules/core.html#exp1) - [I (Imaginary Unit)](http://docs.sympy.org/latest/modules/core.html#imaginaryunit) - [Pi](http://docs.sympy.org/latest/modules/core.html#pi) - [EulerGamma (Euler-Mascheroni constant)](http://docs.sympy.org/latest/modules/core.html#eulergamma) - [Catalan (Catalan's Constant)](http://docs.sympy.org/latest/modules/core.html#catalan) - [Golden Ratio](http://docs.sympy.org/latest/modules/core.html#goldenratio)
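A minimal sketch printing numeric values for a few of these constants (all of them are available directly after `from sympy import *`):
```python
# A few built-in constants and their numerical values
for const in [pi, E, EulerGamma, Catalan, GoldenRatio]:
    print(const, "=", const.evalf(10))
# Special values: infinities, complex infinity, NaN, and the imaginary unit
print(oo, -oo, zoo, nan, I)
```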
### Rational Numbers
```python
# Rational Numbers
expr_rational = Rational(1)/3
print("expr_rational")
display( type(expr_rational) )
display( expr_rational )
eval_rational = expr_rational.evalf()
print("eval_rational")
display( type(eval_rational) )
display( eval_rational )
neval_rational = N(expr_rational)
print("neval_rational")
display( type(neval_rational) )
display( neval_rational )
```
### Complex Numbers
```python
# Complex Numbers supported.
expr_cplx = 2.0 + 2*sympy.I
print("expr_cplx")
display( type(expr_cplx) )
display( expr_cplx )
print("expr_cplx.evalf()")
display( type(expr_cplx.evalf()) )
display( expr_cplx.evalf() )
print("float() - errors")
print(" ")
# this errors: a complex number cannot be converted to float
#display( float(expr_cplx) )
print("complex() - evaluated to complex number")
display( complex(expr_cplx) )
display( type(complex(expr_cplx)) )
```
```python
# Partial Evaluation if cannot be evaluated as float
display( (sympy.pi*x**2 + x/3).evalf(2) )
```
```python
# use substitution in evalf
expr = cos(2*x)
expr.evalf(subs={x:2.4})
```
```python
# sometimes there are round-offs smaller than the desired precision
one = cos(1)**2 + sin(1)**2
display( (one-1).evalf() )
# chop=True can remove these errors
display( (one-1).evalf(chop=True) )
```
```python
import sys
'gmpy2' in sys.modules.keys()
```
True
```python
```
# Programming Exercise 5:
# Regularized Linear Regression and Bias vs Variance
## Introduction
In this exercise, you will implement regularized linear regression and use it to study models with different bias-variance properties. Before starting on the programming exercise, we strongly recommend watching the video lectures and completing the review questions for the associated topics.
All the information you need for solving this assignment is in this notebook, and all the code you will be implementing will take place within this notebook. The assignment can be promptly submitted to the coursera grader directly from this notebook (code and instructions are included below).
Before we begin with the exercises, we need to import all libraries required for this programming exercise. Throughout the course, we will be using [`numpy`](http://www.numpy.org/) for all arrays and matrix operations, [`matplotlib`](https://matplotlib.org/) for plotting, and [`scipy`](https://docs.scipy.org/doc/scipy/reference/) for scientific and numerical computation functions and tools. You can find instructions on how to install required libraries in the README file in the [github repository](https://github.com/dibgerge/ml-coursera-python-assignments).
```python
# used for manipulating directory paths
import os
# Scientific and vector computation for python
import numpy as np
# Plotting library
from matplotlib import pyplot
# Optimization module in scipy
from scipy import optimize
# will be used to load MATLAB mat datafile format
from scipy.io import loadmat
# library written for this exercise providing additional functions for assignment submission, and others
import utils
# define the submission/grader object for this exercise
grader = utils.Grader()
# tells matplotlib to embed plots within the notebook
%matplotlib inline
```
## Submission and Grading
After completing each part of the assignment, be sure to submit your solutions to the grader. The following is a breakdown of how each part of this exercise is scored.
| Section | Part | Submitted Function | Points |
| :- |:- |:- | :-: |
| 1 | [Regularized Linear Regression Cost Function](#section1) | [`linearRegCostFunction`](#linearRegCostFunction) | 25 |
| 2 | [Regularized Linear Regression Gradient](#section2) | [`linearRegCostFunction`](#linearRegCostFunction) |25 |
| 3 | [Learning Curve](#section3) | [`learningCurve`](#func2) | 20 |
| 4 | [Polynomial Feature Mapping](#section4) | [`polyFeatures`](#polyFeatures) | 10 |
| 5 | [Cross Validation Curve](#section5) | [`validationCurve`](#validationCurve) | 20 |
| | Total Points | |100 |
You are allowed to submit your solutions multiple times, and we will take only the highest score into consideration.
<div class="alert alert-block alert-warning">
At the end of each section in this notebook, we have a cell which contains code for submitting the solutions thus far to the grader. Execute the cell to see your score up to the current section. For all your work to be submitted properly, you must execute those cells at least once.
</div>
<a id="section1"></a>
## 1 Regularized Linear Regression
In the first half of the exercise, you will implement regularized linear regression to predict the amount of water flowing out of a dam using the change of water level in a reservoir. In the next half, you will go through some diagnostics of debugging learning algorithms and examine the effects of bias vs. variance.
### 1.1 Visualizing the dataset
We will begin by visualizing the dataset containing historical records on the change in the water level, $x$, and the amount of water flowing out of the dam, $y$. This dataset is divided into three parts:
- A **training** set that your model will learn on: `X`, `y`
- A **cross validation** set for determining the regularization parameter: `Xval`, `yval`
- A **test** set for evaluating performance. These are “unseen” examples which your model did not see during training: `Xtest`, `ytest`
Run the next cell to plot the training data. In the following parts, you will implement linear regression and use that to fit a straight line to the data and plot learning curves. Following that, you will implement polynomial regression to find a better fit to the data.
```python
# Load from ex5data1.mat, where all variables will be store in a dictionary
data = loadmat(os.path.join('Data', 'ex5data1.mat'))
# Extract train, test, validation data from dictionary
# and also convert y's form 2-D matrix (MATLAB format) to a numpy vector
X, y = data['X'], data['y'][:, 0]
Xtest, ytest = data['Xtest'], data['ytest'][:, 0]
Xval, yval = data['Xval'], data['yval'][:, 0]
# m = Number of examples
m = y.size
# Plot training data
pyplot.plot(X, y, 'ro', ms=10, mec='k', mew=1)
pyplot.xlabel('Change in water level (x)')
pyplot.ylabel('Water flowing out of the dam (y)');
```
### 1.2 Regularized linear regression cost function
Recall that regularized linear regression has the following cost function:
$$ J(\theta) = \frac{1}{2m} \left( \sum_{i=1}^m \left( h_\theta\left( x^{(i)} \right) - y^{(i)} \right)^2 \right) + \frac{\lambda}{2m} \left( \sum_{j=1}^n \theta_j^2 \right)$$
where $\lambda$ is a regularization parameter which controls the degree of regularization (thus helping to prevent overfitting). The regularization term puts a penalty on the overall cost J. As the magnitudes of the model parameters $\theta_j$ increase, the penalty increases as well. Note that you should not regularize
the $\theta_0$ term.
You should now complete the code in the function `linearRegCostFunction` in the next cell. Your task is to calculate the regularized linear regression cost function. If possible, try to vectorize your code and avoid writing loops.
<a id="linearRegCostFunction"></a>
```python
def linearRegCostFunction(X, y, theta, lambda_=0.0):
"""
Compute cost and gradient for regularized linear regression
with multiple variables. Computes the cost of using theta as
the parameter for linear regression to fit the data points in X and y.
Parameters
----------
X : array_like
The dataset. Matrix with shape (m x n + 1) where m is the
total number of examples, and n is the number of features
before adding the bias term.
y : array_like
The functions values at each datapoint. A vector of
shape (m, ).
theta : array_like
The parameters for linear regression. A vector of shape (n+1,).
lambda_ : float, optional
The regularization parameter.
Returns
-------
J : float
The computed cost function.
grad : array_like
The value of the cost function gradient w.r.t theta.
A vector of shape (n+1, ).
Instructions
------------
Compute the cost and gradient of regularized linear regression for
a particular choice of theta.
You should set J to the cost and grad to the gradient.
"""
# Initialize some useful values
m = y.size # number of training examples
# You need to return the following variables correctly
J = 0
grad = np.zeros(theta.shape)
# ====================== YOUR CODE HERE ======================
# ============================================================
return J, grad
```
When you are finished, the next cell will run your cost function using `theta` initialized at `[1, 1]`. You should expect to see an output of 303.993.
```python
theta = np.array([1, 1])
J, _ = linearRegCostFunction(np.concatenate([np.ones((m, 1)), X], axis=1), y, theta, 1)
print('Cost at theta = [1, 1]:\t %f ' % J)
print('(this value should be about 303.993192)\n')
```
After completing a part of the exercise, you can submit your solutions for grading by first adding the function you modified to the submission object, and then sending your function to Coursera for grading.
The submission script will prompt you for your login e-mail and submission token. You can obtain a submission token from the web page for the assignment. You are allowed to submit your solutions multiple times, and we will take only the highest score into consideration.
*Execute the following cell to grade your solution to the first part of this exercise.*
```python
grader[1] = linearRegCostFunction
grader.grade()
```
<a id="section2"></a>
### 1.3 Regularized linear regression gradient
Correspondingly, the partial derivative of the cost function for regularized linear regression is defined as:
$$
\begin{align}
& \frac{\partial J(\theta)}{\partial \theta_0} = \frac{1}{m} \sum_{i=1}^m \left( h_\theta \left(x^{(i)} \right) - y^{(i)} \right) x_j^{(i)} & \qquad \text{for } j = 0 \\
& \frac{\partial J(\theta)}{\partial \theta_j} = \left( \frac{1}{m} \sum_{i=1}^m \left( h_\theta \left( x^{(i)} \right) - y^{(i)} \right) x_j^{(i)} \right) + \frac{\lambda}{m} \theta_j & \qquad \text{for } j \ge 1
\end{align}
$$
In the function [`linearRegCostFunction`](#linearRegCostFunction) above, add code to calculate the gradient, returning it in the variable `grad`. <font color='red'><b>Do not forget to re-execute the cell containing this function to update the function's definition.</b></font>
When you are finished, use the next cell to run your gradient function using theta initialized at `[1, 1]`. You should expect to see a gradient of `[-15.30, 598.250]`.
```python
theta = np.array([1, 1])
J, grad = linearRegCostFunction(np.concatenate([np.ones((m, 1)), X], axis=1), y, theta, 1)
print('Gradient at theta = [1, 1]: [{:.6f}, {:.6f}] '.format(*grad))
print(' (this value should be about [-15.303016, 598.250744])\n')
```
*You should now submit your solutions.*
```python
grader[2] = linearRegCostFunction
grader.grade()
```
### Fitting linear regression
Once your cost function and gradient are working correctly, the next cell will run the code in `trainLinearReg` (found in the module `utils.py`) to compute the optimal values of $\theta$. This training function uses `scipy`'s optimization module to minimize the cost function.
In this part, we set regularization parameter $\lambda$ to zero. Because our current implementation of linear regression is trying to fit a 2-dimensional $\theta$, regularization will not be incredibly helpful for a $\theta$ of such low dimension. In the later parts of the exercise, you will be using polynomial regression with regularization.
Finally, the code in the next cell should also plot the best fit line, which should look like the figure below.
The best fit line tells us that the model is not a good fit to the data because the data has a non-linear pattern. While visualizing the best fit as shown is one possible way to debug your learning algorithm, it is not always easy to visualize the data and model. In the next section, you will implement a function to generate learning curves that can help you debug your learning algorithm even if it is not easy to visualize the
data.
```python
# add a columns of ones for the y-intercept
X_aug = np.concatenate([np.ones((m, 1)), X], axis=1)
theta = utils.trainLinearReg(linearRegCostFunction, X_aug, y, lambda_=0)
# Plot fit over the data
pyplot.plot(X, y, 'ro', ms=10, mec='k', mew=1.5)
pyplot.xlabel('Change in water level (x)')
pyplot.ylabel('Water flowing out of the dam (y)')
pyplot.plot(X, np.dot(X_aug, theta), '--', lw=2);
```
<a id="section3"></a>
## 2 Bias-variance
An important concept in machine learning is the bias-variance tradeoff. Models with high bias are not complex enough for the data and tend to underfit, while models with high variance overfit to the training data.
In this part of the exercise, you will plot training and test errors on a learning curve to diagnose bias-variance problems.
### 2.1 Learning Curves
You will now implement code to generate the learning curves that will be useful in debugging learning algorithms. Recall that a learning curve plots training and cross validation error as a function of training set size. Your job is to fill in the function `learningCurve` in the next cell, so that it returns a vector of errors for the training set and cross validation set.
To plot the learning curve, we need a training and cross validation set error for different training set sizes. To obtain different training set sizes, you should use different subsets of the original training set `X`. Specifically, for a training set size of $i$, you should use the first $i$ examples (i.e., `X[:i, :]`
and `y[:i]`).
You can use the `trainLinearReg` function (by calling `utils.trainLinearReg(...)`) to find the $\theta$ parameters. Note that the `lambda_` is passed as a parameter to the `learningCurve` function.
After learning the $\theta$ parameters, you should compute the error on the training and cross validation sets. Recall that the training error for a dataset is defined as
$$ J_{\text{train}} = \frac{1}{2m} \left[ \sum_{i=1}^m \left(h_\theta \left( x^{(i)} \right) - y^{(i)} \right)^2 \right] $$
In particular, note that the training error does not include the regularization term. One way to compute the training error is to use your existing cost function and set $\lambda$ to 0 only when using it to compute the training error and cross validation error. When you are computing the training set error, make sure you compute it on the training subset (i.e., `X[:i, :]` and `y[:i]`) instead of the entire training set. However, for the cross validation error, you should compute it over the entire cross validation set. You should store
the computed errors in the vectors `error_train` and `error_val`.
<a id="func2"></a>
```python
def learningCurve(X, y, Xval, yval, lambda_=0):
"""
Generates the train and cross validation set errors needed to plot a learning curve
returns the train and cross validation set errors for a learning curve.
In this function, you will compute the train and test errors for
dataset sizes from 1 up to m. In practice, when working with larger
datasets, you might want to do this in larger intervals.
Parameters
----------
X : array_like
The training dataset. Matrix with shape (m x n + 1) where m is the
total number of examples, and n is the number of features
before adding the bias term.
y : array_like
The functions values at each training datapoint. A vector of
shape (m, ).
Xval : array_like
The validation dataset. Matrix with shape (m_val x n + 1) where m is the
total number of examples, and n is the number of features
before adding the bias term.
yval : array_like
The functions values at each validation datapoint. A vector of
shape (m_val, ).
lambda_ : float, optional
The regularization parameter.
Returns
-------
error_train : array_like
A vector of shape m. error_train[i] contains the training error for
i examples.
error_val : array_like
        A vector of shape m. error_val[i] contains the validation error for
i training examples.
Instructions
------------
Fill in this function to return training errors in error_train and the
cross validation errors in error_val. i.e., error_train[i] and
error_val[i] should give you the errors obtained after training on i examples.
Notes
-----
- You should evaluate the training error on the first i training
examples (i.e., X[:i, :] and y[:i]).
For the cross-validation error, you should instead evaluate on
the _entire_ cross validation set (Xval and yval).
- If you are using your cost function (linearRegCostFunction) to compute
the training and cross validation error, you should call the function with
the lambda argument set to 0. Do note that you will still need to use
lambda when running the training to obtain the theta parameters.
Hint
----
You can loop over the examples with the following:
for i in range(1, m+1):
# Compute train/cross validation errors using training examples
# X[:i, :] and y[:i], storing the result in
# error_train[i-1] and error_val[i-1]
....
"""
# Number of training examples
m = y.size
# You need to return these values correctly
error_train = np.zeros(m)
error_val = np.zeros(m)
# ====================== YOUR CODE HERE ======================
# =============================================================
return error_train, error_val
```
When you are finished implementing the function `learningCurve`, executing the next cell prints the learning curves and produces a plot similar to the figure below.
In the learning curve figure, you can observe that both the train error and cross validation error are high when the number of training examples is increased. This reflects a high bias problem in the model - the linear regression model is too simple and is unable to fit our dataset well. In the next section, you will implement polynomial regression to fit a better model for this dataset.
```python
X_aug = np.concatenate([np.ones((m, 1)), X], axis=1)
Xval_aug = np.concatenate([np.ones((yval.size, 1)), Xval], axis=1)
error_train, error_val = learningCurve(X_aug, y, Xval_aug, yval, lambda_=0)
pyplot.plot(np.arange(1, m+1), error_train, np.arange(1, m+1), error_val, lw=2)
pyplot.title('Learning curve for linear regression')
pyplot.legend(['Train', 'Cross Validation'])
pyplot.xlabel('Number of training examples')
pyplot.ylabel('Error')
pyplot.axis([0, 13, 0, 150])
print('# Training Examples\tTrain Error\tCross Validation Error')
for i in range(m):
print(' \t%d\t\t%f\t%f' % (i+1, error_train[i], error_val[i]))
```
*You should now submit your solutions.*
```python
grader[3] = learningCurve
grader.grade()
```
<a id="section4"></a>
## 3 Polynomial regression
The problem with our linear model was that it was too simple for the data
and resulted in underfitting (high bias). In this part of the exercise, you will address this problem by adding more features. For polynomial regression, our hypothesis has the form:
$$
\begin{align}
h_\theta(x) &= \theta_0 + \theta_1 \times (\text{waterLevel}) + \theta_2 \times (\text{waterLevel})^2 + \cdots + \theta_p \times (\text{waterLevel})^p \\
& = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \cdots + \theta_p x_p
\end{align}
$$
Notice that by defining $x_1 = (\text{waterLevel})$, $x_2 = (\text{waterLevel})^2$ , $\cdots$, $x_p =
(\text{waterLevel})^p$, we obtain a linear regression model where the features are the various powers of the original value (waterLevel).
Now, you will add more features using the higher powers of the existing feature $x$ in the dataset. Your task in this part is to complete the code in the function `polyFeatures` in the next cell. The function should map the original training set $X$ of size $m \times 1$ into its higher powers. Specifically, when a training set $X$ of size $m \times 1$ is passed into the function, the function should return a $m \times p$ matrix `X_poly`, where column 1 holds the original values of X, column 2 holds the values of $X^2$, column 3 holds the values of $X^3$, and so on. Note that you don’t have to account for the zero-eth power in this function.
<a id="polyFeatures"></a>
```python
def polyFeatures(X, p):
"""
Maps X (1D vector) into the p-th power.
Parameters
----------
X : array_like
A data vector of size m, where m is the number of examples.
p : int
The polynomial power to map the features.
Returns
-------
X_poly : array_like
A matrix of shape (m x p) where p is the polynomial
power and m is the number of examples. That is:
X_poly[i, :] = [X[i], X[i]**2, X[i]**3 ... X[i]**p]
Instructions
------------
Given a vector X, return a matrix X_poly where the p-th column of
X contains the values of X to the p-th power.
"""
# You need to return the following variables correctly.
X_poly = np.zeros((X.shape[0], p))
# ====================== YOUR CODE HERE ======================
# ============================================================
return X_poly
```
Now you have a function that will map features to a higher dimension. The next cell will apply it to the training set, the test set, and the cross validation set.
```python
p = 8
# Map X onto Polynomial Features and Normalize
X_poly = polyFeatures(X, p)
X_poly, mu, sigma = utils.featureNormalize(X_poly)
X_poly = np.concatenate([np.ones((m, 1)), X_poly], axis=1)
# Map X_poly_test and normalize (using mu and sigma)
X_poly_test = polyFeatures(Xtest, p)
X_poly_test -= mu
X_poly_test /= sigma
X_poly_test = np.concatenate([np.ones((ytest.size, 1)), X_poly_test], axis=1)
# Map X_poly_val and normalize (using mu and sigma)
X_poly_val = polyFeatures(Xval, p)
X_poly_val -= mu
X_poly_val /= sigma
X_poly_val = np.concatenate([np.ones((yval.size, 1)), X_poly_val], axis=1)
print('Normalized Training Example 1:')
X_poly[0, :]
```
*You should now submit your solutions.*
```python
grader[4] = polyFeatures
grader.grade()
```
## 3.1 Learning Polynomial Regression
After you have completed the function `polyFeatures`, we will proceed to train polynomial regression using your linear regression cost function.
Keep in mind that even though we have polynomial terms in our feature vector, we are still solving a linear regression optimization problem. The polynomial terms have simply turned into features that we can use for linear regression. We are using the same cost function and gradient that you wrote for the earlier part of this exercise.
For this part of the exercise, you will be using a polynomial of degree 8. It turns out that if we run the training directly on the projected data, it will not work well as the features would be badly scaled (e.g., an example with $x = 40$ will now have a feature $x_8 = 40^8 = 6.5 \times 10^{12}$). Therefore, you will
need to use feature normalization.
Before learning the parameters $\theta$ for the polynomial regression, we first call `featureNormalize` and normalize the features of the training set, storing the mu, sigma parameters separately. We have already implemented this function for you (in `utils.py` module) and it is the same function from the first exercise.
After learning the parameters $\theta$, you should see two plots generated for polynomial regression with $\lambda = 0$, which should be similar to the ones here:
*(Figures: the polynomial fit for $\lambda = 0$ and the corresponding learning curve; images not included in this export.)*
You should see that the polynomial fit is able to follow the datapoints very well, thus, obtaining a low training error. The figure on the right shows that the training error essentially stays zero for all numbers of training samples. However, the polynomial fit is very complex and even drops off at the extremes. This is an indicator that the polynomial regression model is overfitting the training data and will not generalize well.
To better understand the problems with the unregularized ($\lambda = 0$) model, you can see that the learning curve shows the same effect where the training error is low, but the cross validation error is high. There is a gap between the training and cross validation errors, indicating a high variance problem.
```python
lambda_ = 0
theta = utils.trainLinearReg(linearRegCostFunction, X_poly, y,
lambda_=lambda_, maxiter=55)
# Plot training data and fit
pyplot.plot(X, y, 'ro', ms=10, mew=1.5, mec='k')
utils.plotFit(polyFeatures, np.min(X), np.max(X), mu, sigma, theta, p)
pyplot.xlabel('Change in water level (x)')
pyplot.ylabel('Water flowing out of the dam (y)')
pyplot.title('Polynomial Regression Fit (lambda = %f)' % lambda_)
pyplot.ylim([-20, 50])
pyplot.figure()
error_train, error_val = learningCurve(X_poly, y, X_poly_val, yval, lambda_)
pyplot.plot(np.arange(1, 1+m), error_train, np.arange(1, 1+m), error_val)
pyplot.title('Polynomial Regression Learning Curve (lambda = %f)' % lambda_)
pyplot.xlabel('Number of training examples')
pyplot.ylabel('Error')
pyplot.axis([0, 13, 0, 100])
pyplot.legend(['Train', 'Cross Validation'])
print('Polynomial Regression (lambda = %f)\n' % lambda_)
print('# Training Examples\tTrain Error\tCross Validation Error')
for i in range(m):
print(' \t%d\t\t%f\t%f' % (i+1, error_train[i], error_val[i]))
```
One way to combat the overfitting (high-variance) problem is to add regularization to the model. In the next section, you will get to try different $\lambda$ parameters to see how regularization can lead to a better model.
### 3.2 Optional (ungraded) exercise: Adjusting the regularization parameter
In this section, you will get to observe how the regularization parameter affects the bias-variance of regularized polynomial regression. You should now modify the lambda parameter and try $\lambda = 1, 100$. For each of these values, the script should generate a polynomial fit to the data and also a learning curve.
For $\lambda = 1$, the generated plots should look like the figure below. You should see a polynomial fit that follows the data trend well (left) and a learning curve (right) showing that both the cross validation and training error converge to a relatively low value. This shows the $\lambda = 1$ regularized polynomial regression model does not have the high-bias or high-variance problems. In effect, it achieves a good trade-off between bias and variance.
*(Figures: the polynomial fit for $\lambda = 1$ and the corresponding learning curve; images not included in this export.)*
For $\lambda = 100$, you should see a polynomial fit (figure below) that does not follow the data well. In this case, there is too much regularization and the model is unable to fit the training data.
*You do not need to submit any solutions for this optional (ungraded) exercise.*
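For reference, a minimal (ungraded) sketch of how you might loop over these values of $\lambda$ and regenerate both plots — it simply reuses the objects defined above and assumes your `linearRegCostFunction` and `learningCurve` are implemented:
```python
for lambda_ in [1, 100]:
    theta = utils.trainLinearReg(linearRegCostFunction, X_poly, y,
                                 lambda_=lambda_, maxiter=55)

    # Polynomial fit for this lambda
    pyplot.figure()
    pyplot.plot(X, y, 'ro', ms=10, mew=1.5, mec='k')
    utils.plotFit(polyFeatures, np.min(X), np.max(X), mu, sigma, theta, p)
    pyplot.title('Polynomial Regression Fit (lambda = %f)' % lambda_)

    # Learning curve for this lambda
    error_train, error_val = learningCurve(X_poly, y, X_poly_val, yval, lambda_)
    pyplot.figure()
    pyplot.plot(np.arange(1, 1+m), error_train, np.arange(1, 1+m), error_val)
    pyplot.title('Learning curve (lambda = %f)' % lambda_)
    pyplot.xlabel('Number of training examples')
    pyplot.ylabel('Error')
    pyplot.legend(['Train', 'Cross Validation'])
```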
<a id="section5"></a>
### 3.3 Selecting $\lambda$ using a cross validation set
From the previous parts of the exercise, you observed that the value of $\lambda$ can significantly affect the results of regularized polynomial regression on the training and cross validation set. In particular, a model without regularization ($\lambda = 0$) fits the training set well, but does not generalize. Conversely, a model with too much regularization ($\lambda = 100$) does not fit the training set and testing set well. A good choice of $\lambda$ (e.g., $\lambda = 1$) can provide a good fit to the data.
In this section, you will implement an automated method to select the $\lambda$ parameter. Concretely, you will use a cross validation set to evaluate how good each $\lambda$ value is. After selecting the best $\lambda$ value using the cross validation set, we can then evaluate the model on the test set to estimate
how well the model will perform on actual unseen data.
Your task is to complete the code in the function `validationCurve`. Specifically, you should use the `utils.trainLinearReg` function to train the model using different values of $\lambda$ and compute the training error and cross validation error. You should try $\lambda$ in the following range: {0, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10}.
<a id="validationCurve"></a>
```python
def validationCurve(X, y, Xval, yval):
"""
Generate the train and validation errors needed to plot a validation
curve that we can use to select lambda_.
Parameters
----------
X : array_like
The training dataset. Matrix with shape (m x n) where m is the
total number of training examples, and n is the number of features
including any polynomial features.
y : array_like
The functions values at each training datapoint. A vector of
shape (m, ).
Xval : array_like
The validation dataset. Matrix with shape (m_val x n) where m is the
total number of validation examples, and n is the number of features
including any polynomial features.
yval : array_like
The functions values at each validation datapoint. A vector of
shape (m_val, ).
Returns
-------
lambda_vec : list
The values of the regularization parameters which were used in
cross validation.
error_train : list
The training error computed at each value for the regularization
parameter.
error_val : list
The validation error computed at each value for the regularization
parameter.
Instructions
------------
Fill in this function to return training errors in `error_train` and
the validation errors in `error_val`. The vector `lambda_vec` contains
the different lambda parameters to use for each calculation of the
errors, i.e, `error_train[i]`, and `error_val[i]` should give you the
errors obtained after training with `lambda_ = lambda_vec[i]`.
Note
----
You can loop over lambda_vec with the following:
for i in range(len(lambda_vec))
lambda = lambda_vec[i]
# Compute train / val errors when training linear
# regression with regularization parameter lambda_
# You should store the result in error_train[i]
# and error_val[i]
....
"""
# Selected values of lambda (you should not change this)
lambda_vec = [0, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10]
# You need to return these variables correctly.
error_train = np.zeros(len(lambda_vec))
error_val = np.zeros(len(lambda_vec))
# ====================== YOUR CODE HERE ======================
# ============================================================
return lambda_vec, error_train, error_val
```
After you have completed the code, the next cell will run your function and plot a cross validation curve of error v.s. $\lambda$ that allows you select which $\lambda$ parameter to use. You should see a plot similar to the figure below.
In this figure, we can see that the best value of $\lambda$ is around 3. Due to randomness
in the training and validation splits of the dataset, the cross validation error can sometimes be lower than the training error.
```python
lambda_vec, error_train, error_val = validationCurve(X_poly, y, X_poly_val, yval)
pyplot.plot(lambda_vec, error_train, '-o', lambda_vec, error_val, '-o', lw=2)
pyplot.legend(['Train', 'Cross Validation'])
pyplot.xlabel('lambda')
pyplot.ylabel('Error')
print('lambda\t\tTrain Error\tValidation Error')
for i in range(len(lambda_vec)):
print(' %f\t%f\t%f' % (lambda_vec[i], error_train[i], error_val[i]))
```
*You should now submit your solutions.*
```python
grader[5] = validationCurve
grader.grade()
```
### 3.4 Optional (ungraded) exercise: Computing test set error
In the previous part of the exercise, you implemented code to compute the cross validation error for various values of the regularization parameter $\lambda$. However, to get a better indication of the model’s performance in the real world, it is important to evaluate the “final” model on a test set that was not used in any part of training (that is, it was neither used to select the $\lambda$ parameters, nor to learn the model parameters $\theta$). For this optional (ungraded) exercise, you should compute the test error using the best value of $\lambda$ you found. In our cross validation, we obtained a test error of 3.8599 for $\lambda = 3$.
*You do not need to submit any solutions for this optional (ungraded) exercise.*
```python
```
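For reference, one possible (ungraded) sketch: train on the full polynomial training set with $\lambda = 3$ (the value suggested by the validation curve above) and evaluate the unregularized cost on the test set — again assuming your `linearRegCostFunction` is implemented:
```python
lambda_best = 3
theta = utils.trainLinearReg(linearRegCostFunction, X_poly, y, lambda_=lambda_best)
error_test, _ = linearRegCostFunction(X_poly_test, ytest, theta, lambda_=0)
print('Test error for lambda = %s: %f (expected to be about 3.8599)' % (lambda_best, error_test))
```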
### 3.5 Optional (ungraded) exercise: Plotting learning curves with randomly selected examples
In practice, especially for small training sets, when you plot learning curves to debug your algorithms, it is often helpful to average across multiple sets of randomly selected examples to determine the training error and cross validation error.
Concretely, to determine the training error and cross validation error for $i$ examples, you should first randomly select $i$ examples from the training set and $i$ examples from the cross validation set. You will then learn the parameters $\theta$ using the randomly chosen training set and evaluate the parameters $\theta$ on the randomly chosen training set and cross validation set. The above steps should then be repeated multiple times (say 50) and the averaged error should be used to determine the training error and cross validation error for $i$ examples.
For this optional (ungraded) exercise, you should implement the above strategy for computing the learning curves. For reference, the figure below shows the learning curve we obtained for polynomial regression with $\lambda = 0.01$. Your figure may differ slightly due to the random selection of examples.
*You do not need to submit any solutions for this optional (ungraded) exercise.*
```python
```
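A possible (ungraded) sketch for the randomized learning curve, averaging over 50 random draws; it assumes your `linearRegCostFunction` is implemented and reuses `utils.trainLinearReg`:
```python
def learningCurveRandom(X, y, Xval, yval, lambda_=0.01, num_trials=50):
    m = y.size
    error_train = np.zeros(m)
    error_val = np.zeros(m)
    for i in range(1, m + 1):
        for _ in range(num_trials):
            # Randomly select i training and i cross validation examples
            idx_train = np.random.choice(m, i, replace=False)
            idx_val = np.random.choice(yval.size, i, replace=False)
            theta = utils.trainLinearReg(linearRegCostFunction,
                                         X[idx_train], y[idx_train], lambda_=lambda_)
            error_train[i - 1] += linearRegCostFunction(X[idx_train], y[idx_train], theta, 0)[0]
            error_val[i - 1] += linearRegCostFunction(Xval[idx_val], yval[idx_val], theta, 0)[0]
    return error_train / num_trials, error_val / num_trials

error_train, error_val = learningCurveRandom(X_poly, y, X_poly_val, yval, lambda_=0.01)
pyplot.plot(np.arange(1, m + 1), error_train, np.arange(1, m + 1), error_val)
pyplot.title('Averaged learning curve (lambda = 0.01)')
pyplot.xlabel('Number of training examples')
pyplot.ylabel('Error')
pyplot.legend(['Train', 'Cross Validation']);
```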
# Thermodynamic Model to predict gene expression.
(c) 2020 Tom Röschinger. This work is licensed under a [Creative Commons Attribution License CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/). All code contained herein is licensed under an [MIT license](https://opensource.org/licenses/MIT).
```python
import wgregseq
%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Set default plotting style
wgregseq.plotting_style();
```
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
In this notebook we write down a thermodynamic model to predict gene expression for promoter sequences. The goal is to use these predictions, and try to identify locations of binding sites, without knowing the underlying energy matrix for the binding site.
### Input
The input into the model is the binding energy of a transcription factor. This binding energy is given by an energy matrix that might be created arbitrarily, and by the sequence, which might contain single mutations or scrambles. Since the binding energy is the only thing that is going to vary between the sequences, it is the only input the model needs for each sequence.
## Simple repression motif
From [Chure et al., 2019](https://www.pnas.org/content/116/37/18275.short) the fold change in expression due to the simple repression motif is given by
\begin{equation}
\text{fold-change} = \left( 1 + e^{-\beta\Delta \epsilon_{RA} +\log\left( R_A/N_{NS} \right) } \right)^{-1},
\end{equation}
where $R_A$ is the repressor copy number, $N_{NS}$ the number of non-specific binding sites and $\Delta \epsilon_{RA}$ the binding energy of the repressor to the specific site compared to non-specific background. Since we are interested in the change in binding energy due to mutation, let's look at that.
Labeling fold change by $f$, we can write (for the wild type)
$$
-\log (1/f^{\text{(wt)}} -1) + \log (R_A/N_{NS}) = \beta\Delta\epsilon_{RA}^{\text{(wt)}}.
$$
Now, let's write the difference in binding energies. A nice thing is that the repressor copy number and number of non-specific binding sites cancels,
$$
-\log (1/f^{\text{(wt)}} -1) + \log (1/f^{\text{(mut)}} -1) = \beta(\Delta\epsilon_{RA}^{\text{(wt)}} - \Delta\epsilon_{RA}^{\text{(mut)}}).
$$
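As a small numerical sketch of this relation (the fold-change values below are made up purely to illustrate the computation):
```python
import numpy as np

def delta_binding_energy(fc_wt, fc_mut):
    """Return beta*(eps_wt - eps_mut) from wild-type and mutant fold-changes."""
    return -np.log(1 / fc_wt - 1) + np.log(1 / fc_mut - 1)

# Hypothetical fold-changes for a wild-type and a mutated operator
print(delta_binding_energy(0.1, 0.5))
```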
```python
```
# Extend the concentrated liquidity
From the balencor paper, we have the invariance function defined as (1), where $L$ is a constant.
\begin{equation*}
B_x^{w_x} B_y^{w_y} = L \tag{1}
\end{equation*}
Let $A_i$ and $A_o$ be the amounts of tokens i and o exchanged when a user sends token i (in) to get token o (out).
Out-Given-In:
$$ A_o = B_o \left(1- \left(\frac{B_i}{B_i+A_i}\right)^{\frac{w_i}{w_o}}\right)$$
In-Given-Out:
$$A_i = B_i((\frac{B_o}{B_o - A_o})^\frac{w_o}{w_i} -1) $$
Spotprice:
$$SP_{i}^o = \frac{B_{i} \cdot w_o}{B_o \cdot w_{i}}$$
In-Given-Price:
$$A_i = B_i((\frac{sp_i^{o'}}{sp_i^o})^{w_o} - 1) $$
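As a quick numerical illustration of these swap formulas, a minimal sketch with hypothetical balances and (normalized) weights — none of these numbers come from a real pool:
```python
# Hypothetical pool state: balances and normalized weights (w_i + w_o = 1)
B_i, B_o = 1000.0, 500.0
w_i, w_o = 0.25, 0.75

def out_given_in(A_i):
    return B_o * (1 - (B_i / (B_i + A_i)) ** (w_i / w_o))

def in_given_out(A_o):
    return B_i * ((B_o / (B_o - A_o)) ** (w_o / w_i) - 1)

spot_price = (B_i * w_o) / (B_o * w_i)   # SP_i^o

A_i = 10.0
A_o = out_given_in(A_i)
print(A_o, in_given_out(A_o))            # in_given_out inverts out_given_in, recovering A_i
print(spot_price)
```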
Liquidity fingerprint:
If follow the definition of Uniswap v3, where liquidity is defined as $ L = \frac{\partial y}{\partial \sqrt{P}}$ and let price tick $t_i = log(P)$, the liquidity fingerprint is
\begin{equation}
L(t_i) = 2Lw_x^{w_y}w_y^{w_x}exp( (w_x- \frac{1}{2})t_i)
\end{equation}
Concentrated liquidity
There is a trading function that describes the relationship between the reserves while its liquidity is in the range:
$$ (x + x_{offset})^{w_x} (y+y_{offset})^{w_y} = L $$
$$ L = \frac{\partial y}{\partial P^{w_x}} (\frac{w_x}{w_y})^{w_x} $$
or equivalently, by switching the symbols x and y (and taking the reciprocal of the price)
$$ L = \frac{\partial x}{\partial P^{-w_y}} (\frac{w_y}{w_x})^{w_y} $$
For a price range $[p_a, p_b]$ (where the price is the price of x in terms of y) and letting the tick $t_i = log(P)$:
$$ y_{offset} = L \cdot exp(w_x t_i) (\frac{w_y}{w_x})^{w_x} $$
$$ x_{offset} = L \cdot exp(-w_y t_i) (\frac{w_x}{w_y})^{w_y} $$
### Proof of Liquidity fingerprint:
Start with the invariant trading function:
$$ x^{w_x} \cdot y^{w_y} = L$$
Solving for y:
$$ y = (\frac{L}{x^{w_x}})^{\frac{1}{w_y}}$$
Given the spot price P (the price of x in terms of y):
$$ P = \frac{y\cdot w_x}{x \cdot w_y} $$
We can rewrite P as
$$ P = (\frac{L}{x})^\frac{1}{w_y}\cdot \frac{w_x}{w_y}$$
To find the same price, but as a function of y rather than x, we can switch x and y and take the reciprocal:
$$ P_y = (\frac{y}{L})^\frac{1}{w_x} \cdot \frac{w_x}{w_y}$$
Solving for y:
$$y = L \cdot P^{w_x} (\frac{w_y}{w_x})^{w_x} \tag{2}$$
Following the definition in Uniswap v3, where liquidity is defined as $ L = \frac{\partial y}{\partial \sqrt{P}}$, and letting the price tick $t_i = log(P)$, the liquidity fingerprint is
\begin{equation}
L(t_i) = 2Lw_x^{w_y}w_y^{w_x}exp((w_x- \frac{1}{2})t_i)
\end{equation}
Instead of defining the liquidity fingerprint as $\frac{\partial y}{\partial \sqrt{P}}$ (the special case $w_x = \frac{1}{2}$), we redefine it as
$$ \frac{\partial y}{\partial P^{w_x}} = L \cdot (\frac{w_y}{w_x})^{w_x} $$ This indicates that liquidity is constant for every price tick change in units of $w_i \log(P)$.
To calculate the concentrated liquidity boundary, we take the derivative w.r.t $P^{w_x}$:
$$ \frac{\partial y}{\partial P^{w_x}} = L \cdot (\frac{w_y}{w_x})^{w_x} $$
For a price range $[p_a, p_b]$ (price is price of x interms of y), and let price tick $t_i = log(P)$,
$$ \Delta y = \Delta P^{w_x} L\cdot \frac{w_y}{w_x}^{w_x}$$
$$ y_{offset} = L \cdot exp(w_x t_i) (\frac{w_y}{w_x})^{w_x} $$
$$ x_{offset} = L \cdot exp(-w_y t_i) (\frac{w_x}{w_y})^{w_y} $$
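As a sanity check on these offset formulas, a small sketch (with arbitrary numbers) verifying that the shifted reserves satisfy the stated trading function for every price inside the range:
```python
import math

w_x, w_y = 0.25, 0.75
L = 1000.0
pa, pb = 0.5, 2.0                        # price range for x in terms of y

# Offsets from the formulas above (t = log(p) at the range boundaries)
y_offset = L * math.exp(w_x * math.log(pa)) * (w_y / w_x) ** w_x
x_offset = L * math.exp(-w_y * math.log(pb)) * (w_x / w_y) ** w_y

for p in [0.6, 1.0, 1.8]:                # a few prices inside [pa, pb]
    y_virt = L * p ** w_x * (w_y / w_x) ** w_x      # virtual reserves at price p
    x_virt = L * p ** (-w_y) * (w_x / w_y) ** w_y
    x_real, y_real = x_virt - x_offset, y_virt - y_offset
    # the shifted (real) reserves reproduce the invariant L
    print(p, (x_real + x_offset) ** w_x * (y_real + y_offset) ** w_y)
```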
```python
# Illustrations
```
```python
import scipy
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import random
import pandas as pd
#Liquity fingerprint
w_x=0.25
w_y= 1-w_x
t = np.array(list(range(-40,50)))/5
p = np.exp(t)
L = 1000
lt = L*2*w_x**w_y*w_y**w_x*np.exp((w_x - 0.5)*t) # w.r.t sqrt(P)
#lt1 = L * (w_y/w_x)**w_x * w_x* np.exp(-w_y*t) # w.r.t P
lt2 = L * (w_y/w_x)**w_x # w.r. t p^w_x
fig, ax = plt.subplots()
#sns.lineplot(x=t, y=lt1, label="liquidity space - w.r.t p")
sns.lineplot(x=t, y=lt, label="liquidity space - w.r.t sqrt(p) (t = log(p))")
sns.lineplot(x=t, y=lt2, label="liquidity space 2 - w.r.t p^w_x (t = w_x*log(p))")
ax.set_xticks(range(-10,10))
ax.vlines(x = [0,1], ymin = [0, 0], ymax = [lt2, lt2], linestyle='--', color = 'orange')
ax.vlines(x = 2, ymin=0, ymax= 441, linestyle='--')
ax.set_ylim(0,)
ax.set_xlabel('price tick')
plt.show()
```
```python
```
```python
x = np.array(list(range(1,2500)))
w_x = 0.25
w_y = 1-w_x
L = 1000
y = (L/(x**w_x))**(1/w_y)
# suppose set price range [pa, pb] = [0.1 2], then ti = -/+ 0.69
pa = 0.5
pb = 2
y_offset = L*np.exp(w_x*np.log(pa))*(w_y/w_x)**(w_x)  # follows y_offset = L*exp(w_x*t_a)*(w_y/w_x)**w_x from above
x_offset = L*np.exp(-w_y*np.log(pb))*(w_x/w_y)**(w_y)
print("the virtual reserve of x is {}".format(x_offset))
print("the virtual reserve of y is {}".format(y_offset))
x_tilta = x - x_offset
y_tilta = (L/x**w_x)**(1/w_y) - y_offset
df = pd.DataFrame({'x_reserve':x, 'y_reserve':y, 'x_offset': x_tilta, 'y_offset': y_tilta})
ya = y_offset
xa = (L/(ya**w_y))**(1/w_x)
xb = x_offset
yb = (L/(xb**w_x))**(1/w_y)
fig, ax = plt.subplots()
sns.lineplot(data=df, x='x_reserve', y='y_reserve', color = 'blue', label= 'virtual reserves')
sns.lineplot(data=df, x='x_offset', y='y_offset', color= 'orange', label= 'real reserves')
plt.scatter(x=xb, y=yb, color='r', label = 'Pb')
plt.scatter(x=xa, y=ya, color='g', label = 'Pa')
plt.scatter(x=xa-x_offset, y=ya-y_offset, color='g', label = 'Pa')
plt.scatter(x=xb-x_offset, y=yb-y_offset, color='r', label = 'Pb')
ax.set_xlim(0,)
ax.set_ylim(0, 2500)
plt.show()
```
# PC lab 4: Logistic regression for classification
## Introduction
In a binary classification setting, we are interested in assigning an observation $\mathbf{x}$ to one of two possible classes, denoted by $y$. For example, maybe we would like to tell if a patient has a particular disease (y = 1) or not (y = 0), given certain symptoms $\mathbf{x}$. Generally speaking, we want to predict the probability that the class label $y = 1$, conditional on the data that we have observed, $\mathbf{x}$. This probability is also called the *class posterior* or the *class-membership probability*, which we can denote as follows:
\begin{equation}
Pr(Y=1|X) = P(X) = p(y= 1|\mathbf{x})
\end{equation}
The book uses the statistical notation on the left, but the notation with the feature vector $\mathbf{x}$ is more common in machine learning literature. In any case, both notations mean exactly the same. In this PC lab, we will cover one of the most popular classifiers: logistic regression.
Just like linear regression, logistic regression (LR) is a linear model. However, LR does not model the mean of a continuous outcome, but the logarithm of the [odds](https://en.wikipedia.org/wiki/Odds) of the probability $P(X)$:
\begin{equation}
log \frac{P(X)}{1-P(X)} = w_{0}x_{0} + w_{1}x_{1} + ... + w_{p}x_{p} = \mathbf{w^Tx}
\end{equation}
However, we are really interested in the probability $p$ and not in the odds of p. Therefore, it is common to apply the inverse log-odds transformation on both sides of the equation. This transformation is the **logistic function $\phi(z)$**, hence the name of logistic regression:
\begin{equation}
\phi(z) = \frac{1}{1 + e^{-z}} = \frac{e^{z}}{1+e^{z}}
\end{equation}
Verify for yourself that applying $\phi(z)$ on the log-odds yields $p$.
In other words, we can make predictions for $p$ with logistic regression as follows:
\begin{equation}
p(\mathbf{x}) = \phi(w_{0}x_{0} + w_{1}x_{1} + ... + w_{p}x_{p})
\end{equation}
If we want to classify a data point $\mathbf{x}$, we can calculate $p$ with LR and simply assign it to class 1 if $p$ exceeds a certain probability threshold. A typical threshold is 0.5.
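In code, that decision rule is just a comparison of the predicted probabilities against the threshold; a tiny sketch with some made-up probabilities:
```python
import numpy as np

p_hat = np.array([0.1, 0.4, 0.55, 0.9])    # hypothetical predicted class-membership probabilities
threshold = 0.5
y_pred = (p_hat >= threshold).astype(int)  # assign class 1 when p exceeds the threshold
print(y_pred)                              # [0 0 1 1]
```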
<div class="alert alert-success">
<b>EXERCISE: What would happen to our predictions when we would choose a lower threshold, let's say 0.2? How would this affect the accuracy of our predictions? Can you think of a situation where we would want to do this? </b>
</div>
Let's stop for a moment to have a look at what the logistic transformation does:
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('seaborn-white')
x = np.arange(-8,8,0.01) # Generate a range of x values
y = 1/(1+np.exp(-x)) # Calculate the logistic transformation of these x'es
# Plot them
fig, ax = plt.subplots()
ax.scatter(x,y, marker='.');
ax.set_xlabel('x');
ax.set_ylabel('y');
```
As shown, $\phi$ monotonically maps any number from the real domain to a number in [0,1]. Indeed, this is a desirable property if we want to predict a probability!
## Training a LR model
### Loss function: the cross-entropy loss
Now that we have the logistic regression model to predict the probability of belonging to a certain class, all that remains is the question of how to find the weights of the model on a given set of training data. As always, this is the problem of minimizing a loss function to find an optimal set of weights. Where we used the mean squared error (MSE) for linear regression, we will use the **cross-entropy** loss function for LR. Minimizing the binary cross-entropy loss is equivalent to minimizing the negative log-likelihood of the data under a binomial distribution:
\begin{equation}
l_{log} = \frac{1}{n}\sum\limits_{i=1}^{n}-y_{i}log(p(\mathbf{x}_i))-(1-y_i)log(1-p(\mathbf{x}_i))
\end{equation}
Where $y_i$ is the class of data point $i$ and $p(\mathbf{x}_i)$ is the class-membership probability predicted by logistic regression for the observation $\mathbf{x}_i$. If we look at the cross-entropy loss **for a single data point** $l_{log}^{i}$, we can break it down in two parts:
\begin{equation}
l_{log}^{i} =
\begin{cases}
-log(p(\mathbf{x}_i)) & \text{if} \ y_i = 1\\
-log(1-p(\mathbf{x}_i)) & \text{if} \ y_i = 0
\end{cases}
\end{equation}
It should be clear that the cross-entropy loss will be larger for smaller values of $p(\mathbf{x}_i)$ if $y_i = 1$, and vice versa. Let's visualize the cross-entropy loss for these two cases:
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('seaborn-white')
p = np.arange(0.01,0.99,0.01) # Generate a range of predicted probabilities between zero and 1
l_0 = -np.log(p) # cross-entropy loss if y = 1
l_1 = -np.log(1-p) # cross-entropy loss if y = 0
# Plot them
fig, ax = plt.subplots(figsize=(10,6))
ax.scatter(p,l_0, marker='.');
ax.scatter(p, l_1, marker='.');
ax.set_xlabel('Predicted class-membership probability $p$');
ax.set_ylabel('Cross-entropy loss');
ax.legend(['Cross-entropy loss when y is 1', 'Cross-entropy loss when y is 0']);
```
<div class="alert alert-success">
<b>EXERCISE: Make sure you understand the cross-entropy loss. Verify that it correctly penalizes wrong predictions in both cases. Suppose that we have no information about the data at all, what would be the best guess for p to minimize the cross-entropy loss?</b>
</div>
### Finding the weights with gradient descent
For linear regression, the solutions to the normal equations provide a convenient analytical solution to obtain the optimal set of model weights $\mathbf{w}$ on a set of training data. There is no such solution to find the optimal weights for a logistic regression model, so instead an optimization algorithm such as **gradient descent** is used to train a LR model.
Gradient descent is an iterative optimization algorithm that searches for the optimum of an objective function by making small changes to a set of optimization variables. Gradient descent (and more complex optimization algorithms, but we offer a separate course for that) are widely used in machine learning to find the optimal set of model weights that minimize a certain loss function. Especially when there is no analytical solution for the weights available like for linear regression.
Generally, gradient descent uses the **gradient** of the loss function with respect to the model weights to perform updates to those weights in each iteration. At iteration $k+1$, the algoritm computes the gradient of a loss function $J(\mathbf{w})$ evaluated in the training data. Then, it performs an update to the current parameter values that is relative to the gradient multiplied with the learning rate $\gamma$, which is a constant:
\begin{equation}
\mathbf{w}_{k+1} = \mathbf{w}_{k} - \gamma\nabla{J(\mathbf{w}_{k})}
\end{equation}
Initially, the weights are often initialized with random draws from some distribution. The algorithm continues to do updates, until it converges or until some stopping criterion is reached.
In order to perform gradient descent to find the weights of a logistic regression model, we need to compute the gradient of the loss function with respect to the model parameters. Recall that, for a single data point, the cross-entropy loss function was as follows:
\begin{equation}
l_{log}^{i}(\mathbf{w}) = -y_{i}log(p(\mathbf{x}_i))-(1-y_i)log(1-p(\mathbf{x}_i))
\end{equation}
Where $p(\mathbf{x}_i)$ is nothing else than the weighted sum of the inputs squashed through the sigmoid function:
\begin{equation}
p(\mathbf{x}_i) = \phi(w_{0}x_{0i} + w_{1}x_{1i} + ... + w_{p}x_{pi})
\end{equation}
Before going on, let's first calculate the partial derivative of the sigmoid function:
\begin{equation}
\frac{\partial}{\partial z} \phi(z) = \frac{\partial}{\partial z} \frac{1}{1+e^{-z}} = \frac{e^{-z}}{(1+e^{-z})^2}
\end{equation}
We can rewrite this as follows:
\begin{equation}
\frac{e^{-z}}{(1+e^{-z})^2} = \frac{1 +e^{-z} -1}{(1+e^{-z})^2} = \frac{1}{1+e^{-z}} \Big( 1 - \frac{1}{1+e^{-z}}\Big) = \phi(z)(1 - \phi(z))
\end{equation}
With this result and by applying the chain rule, we can compute the partial derivative of the loss function with respect to the weight $w_j$. We will use the symbol $z$ to denote the weighted sum of the features (i.e., the input for the logistic function) and drop the superscript $i$ for clarity:
\begin{equation}
\frac{\partial l_{log}(\mathbf{w})}{\partial w_j} = \frac{\partial}{\partial w_j} \Big(-ylog(\phi(z))-(1-y)log(1-\phi(z)) \Big) \\ = \Big( \frac{-y}{\phi(z)} + \frac{1-y}{1-\phi(z)} \Big)\frac{\partial}{\partial w_j}\phi(z) \\ = \Big( \frac{-y}{\phi(z)} + \frac{1-y}{1-\phi(z)} \Big) \phi(z)(1-\phi(z))\frac{\partial}{\partial w_j}z
\end{equation}
Since $z = w_{0}x_{0} + w_{1}x_{1} + ... + w_{p}x_{p}$, $\frac{\partial}{\partial w_j}z$ is nothing more than $x_j$, so we can rewrite the above as:
\begin{equation}
\frac{\partial l_{log}(\mathbf{w})}{\partial w_j} = \Big( -y(1-\phi(z)) + (1-y)\phi(z)\Big)x_j \\ = \big( -y + \phi(z) \big)x_j = \big( \phi(z) - y \big)x_j
\end{equation}
With this partial derivative of the loss w.r.t $w_j$, we can write the update rule of the gradient descent algorithm for the $j^{th}$ weight:
\begin{equation}
w_{j,k+1} = w_{j,k} - \gamma(\phi(z_k)-y)x_{j}
\end{equation}
In other words, the algorithm will each time perform an update to the weight $w_{j}$ that is in proportion to the difference between the predicted probability of class membership in the previous iteration and the actual class. Makes sense! The entire gradient is simply the vector that contains the partial derivatives with respect to the entire weight vector $\mathbf{w}$, and in reality gradient descent acts on $\mathbf{w}$ and not on an individual weight $w_j$. Also, the gradient is typically not calculated for one data point, but evaluated over the entire training data set.
In practice, software packages such as [scikit-learn](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegressionscikit-learn) do this optimization under the hood, so there is no need to implement it manually each time we want to use logistic regression.
## Application: predicting the status of a breast cancer tumor
In the first application of logistic regression, we will use the [Breast Cancer Wisconsin (Diagnostic) Data Set](https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29). The dataset contains information on the disease status of 569 breast cancer patients: they were either diagnosed with a malign (status M) or with a benign (status B) tumor.
For each patient, the dataset also contains 30 features that represent statistics of the cell nuclei present in images taken after [fine needle aspirate tissue samples](https://en.wikipedia.org/wiki/Fine-needle_aspiration). These 30 features are the mean, standard deviation and the maximum of 10 measurements on the cell nuclei:
- radius
- texture
- perimeter
- area
- smoothness
- compactness
- concavity
- concave points
- symmetry
- fractal dimension
**Based on these feature of the cell nuclei, we would like to predict whether a patient has a malign or a benign breast cancer tumor.** Let's read in the data:
```python
import pandas as pd
import numpy as np
data = pd.read_csv('./wdbc.data', header=None, index_col=0, names=['Patient ID', 'status'] + list(np.arange(1,31,1)))
status = data['status']
data.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>status</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
<th>9</th>
<th>...</th>
<th>21</th>
<th>22</th>
<th>23</th>
<th>24</th>
<th>25</th>
<th>26</th>
<th>27</th>
<th>28</th>
<th>29</th>
<th>30</th>
</tr>
<tr>
<th>Patient ID</th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>842302</th>
<td>M</td>
<td>17.99</td>
<td>10.38</td>
<td>122.80</td>
<td>1001.0</td>
<td>0.11840</td>
<td>0.27760</td>
<td>0.3001</td>
<td>0.14710</td>
<td>0.2419</td>
<td>...</td>
<td>25.38</td>
<td>17.33</td>
<td>184.60</td>
<td>2019.0</td>
<td>0.1622</td>
<td>0.6656</td>
<td>0.7119</td>
<td>0.2654</td>
<td>0.4601</td>
<td>0.11890</td>
</tr>
<tr>
<th>842517</th>
<td>M</td>
<td>20.57</td>
<td>17.77</td>
<td>132.90</td>
<td>1326.0</td>
<td>0.08474</td>
<td>0.07864</td>
<td>0.0869</td>
<td>0.07017</td>
<td>0.1812</td>
<td>...</td>
<td>24.99</td>
<td>23.41</td>
<td>158.80</td>
<td>1956.0</td>
<td>0.1238</td>
<td>0.1866</td>
<td>0.2416</td>
<td>0.1860</td>
<td>0.2750</td>
<td>0.08902</td>
</tr>
<tr>
<th>84300903</th>
<td>M</td>
<td>19.69</td>
<td>21.25</td>
<td>130.00</td>
<td>1203.0</td>
<td>0.10960</td>
<td>0.15990</td>
<td>0.1974</td>
<td>0.12790</td>
<td>0.2069</td>
<td>...</td>
<td>23.57</td>
<td>25.53</td>
<td>152.50</td>
<td>1709.0</td>
<td>0.1444</td>
<td>0.4245</td>
<td>0.4504</td>
<td>0.2430</td>
<td>0.3613</td>
<td>0.08758</td>
</tr>
<tr>
<th>84348301</th>
<td>M</td>
<td>11.42</td>
<td>20.38</td>
<td>77.58</td>
<td>386.1</td>
<td>0.14250</td>
<td>0.28390</td>
<td>0.2414</td>
<td>0.10520</td>
<td>0.2597</td>
<td>...</td>
<td>14.91</td>
<td>26.50</td>
<td>98.87</td>
<td>567.7</td>
<td>0.2098</td>
<td>0.8663</td>
<td>0.6869</td>
<td>0.2575</td>
<td>0.6638</td>
<td>0.17300</td>
</tr>
<tr>
<th>84358402</th>
<td>M</td>
<td>20.29</td>
<td>14.34</td>
<td>135.10</td>
<td>1297.0</td>
<td>0.10030</td>
<td>0.13280</td>
<td>0.1980</td>
<td>0.10430</td>
<td>0.1809</td>
<td>...</td>
<td>22.54</td>
<td>16.67</td>
<td>152.20</td>
<td>1575.0</td>
<td>0.1374</td>
<td>0.2050</td>
<td>0.4000</td>
<td>0.1625</td>
<td>0.2364</td>
<td>0.07678</td>
</tr>
</tbody>
</table>
<p>5 rows × 31 columns</p>
</div>
First, let's look at the distribution of the disease status:
```python
pd.value_counts(data['status']).plot(kind='bar');
```
There are about 350 benign cases and roughly 200 malign cases. This is a fairly balanced dataset.
<div class="alert alert-success">
<b>EXERCISE: Suppose that the dataset was unbalanced, with 525 B cases and only 25 M cases. Can you think of any problems this could give if we would evaluate the accuracy of our logistic regression predicitions? We will come back to this problem in one of the next labs.</b>
</div>
In order to perform LR, we will encode the disease status as a binary variable.
```python
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder().fit(status)
encoder.classes_ # 'B' will become class 0, 'M' will become class 1
```
array(['B', 'M'], dtype=object)
```python
y = encoder.transform(status)
x = data.drop('status', axis=1).values # Drop the disease status from the dataframe, convert to numpy array
y
```
array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1,
1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1,
0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1,
0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0,
0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1,
1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0,
0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1,
1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1,
0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1,
1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1,
1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1,
0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0,
0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0,
0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1,
0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0,
0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0,
0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0])
<div class="alert alert-success">
<b>EXERCISE: Using scikit-learn, split the data in a 80% training and a 20% test set. Fit a logistic regression model and evaluate trainig and testing accuracy. You should be able to achieve a fairly high accuracy! </b>
</div>
Use [this method](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) for train-test splitting and [this implementation](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) to perform logistic regression. You can use the [score method](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression.score) to evaluate the accuracy of your model. This method computes the accuracy as follows:
\begin{equation}
score = \frac{\text{Number of correctly classified instances}}{\text{Total number of instances}}
\end{equation}
```python
# ** solution **
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.20, random_state=42)
LRmodel = LogisticRegression()
LRmodel.fit(X_train, y_train)
LRmodel.score(X_train, y_train)
```
/home/gaeta/miniconda3/envs/SynBio/lib/python3.7/site-packages/sklearn/linear_model/_logistic.py:765: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
0.9538461538461539
```python
LRmodel.score(X_test, y_test)
# ** solution **
```
0.956140350877193
To get an idea of which features are considered important by the LR model, we can visualize the weights it has learned in a bar plot:
```python
fig, ax = plt.subplots(figsize=(10,5))
pd.Series(LRmodel.coef_.flatten()).plot(ax=ax, kind='bar')
```
<div class="alert alert-success">
<b>Use your LR model to predict the class probabilities and the classes for the training data. Use the [```predict_proba()```](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression.predict_proba) method to generate the predicted probabilities. Use the code below to plot the two against each other. Which data points are most likely to be misclassified?</b>
</div>
```python
# ** solution **
predicted_class_probabilities = LRmodel.predict_proba(X_train)[:,1]
predicted_classes = LRmodel.predict(X_train)
#** solution **
misclassified = predicted_classes != y_train
colors = ['#b2182b' if wrong else '#2166ac' for wrong in misclassified ]
fig, ax = plt.subplots(figsize=(8,6))
ax.scatter(predicted_class_probabilities, predicted_classes, marker='.', s=100, color=colors)
ax.set_xlabel('Predicted class probabilies').set_fontsize(20)
ax.set_ylabel('Predicted classes').set_fontsize(20)
ax.legend(['Correctly classified'])
```
Clearly, the misclassified points are those points where the predicted probability of class membership is rather close to 0.5.
# Multiclass classification
## One-versus-one classification
One-versus-one classification is another approach to a multiclass classification problem. For a K-class problem, the strategy consists of training $\frac{K(K-1)}{2}$ classifiers. Each of these classifiers much learn to distinguish to classes. One the classifiers are trained, a voting scheme is applied to make a prediction for an unseen data point: each classifier has to decide between two possible classes. The final predicted class is that class that gets the largest number of votes.
## One-versus-all classification
In one-versus-all (OvA) classification, a single classifier is trained per class, with the samples of that class as positive samples and all other samples as negatives. The strategy proceeds as follows for a K-class classification problem:
**Inputs:**
* a classification algorithm L (learner)
* feature matrix $\mathbf{X}$
* label vector y where $y_i \in {1,...,K}$
**Procedure:**
for each k in {1,...,K}:
* construct a new label vector z where $z_i$ is 1 if $y_i$ = k and 0 otherwise
* train L on $\mathbf{X}$ to obtain a classifier $f_k$. The classifier should return class probabilities and not hard labels.
**Returns**
A list of trained classifiers $f_k$ for each k in {1,...,K}
To make predictions for a new sample $\mathbf{x}$, the $k$ classifiers are applied to $\mathbf{x}$ and the final predicted label is the label that is predicted with the highest confidence (probability):
$\hat{y} = \underset{k \in {1,...,K}}{\mathrm{argmax}} \, f_k(\mathbf{x})$
Let's simulate a toy dataset with three classes and two features, and split it in training and test data:
```python
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
X, y = make_blobs(n_samples=1000, centers= [[-2.5, 0], [0, 1], [3.5, -1]], random_state=42)
#train-test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```
```python
# Make the plot
fig, ax = plt.subplots(figsize=(15,10))
colors=['#66c2a5', '#fc8d62', '#8da0cb']
for i, color in enumerate(colors):
idx_train = np.where(y_train==i)
idx_test = np.where(y_test==i)
plt.scatter(X_train[idx_train,0], X_train[idx_train,1], c=color, edgecolor='black', s=30)
plt.scatter(X_test[idx_test,0], X_test[idx_test, 1],c='white', edgecolor=color, s=70)
ax.legend(['Class 1 - train',
'Class 1 - test',
'Class 2 - train',
'Class 2 - test',
'Class 3 - train',
'Class 3 - test']);
ax.set_xlabel('Feature 1');
ax.set_ylabel('Feature 2');
ax.set_title('Toy dataset for multiclass classification').set_fontsize(20);
```
```python
i = 0
```
```python
z_train = np.zeros(len(y_train))
z_train[np.where(y_train==i)] = 1
```
```python
z_train
```
array([0., 1., 0., 1., 0., 1., 0., 0., 0., 0., 1., 0., 0., 1., 0., 1., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 0., 1., 1., 0., 0., 0.,
0., 0., 1., 0., 0., 0., 0., 0., 1., 0., 1., 1., 0., 0., 0., 0., 1.,
1., 1., 0., 0., 0., 1., 0., 0., 1., 1., 0., 1., 1., 1., 0., 1., 0.,
0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 1., 0., 1.,
0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 1., 1., 1., 0.,
1., 1., 0., 1., 1., 0., 1., 0., 1., 0., 0., 1., 0., 1., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 1., 0., 1., 1., 0., 1., 0., 0., 1., 0., 1.,
0., 1., 0., 1., 0., 1., 0., 1., 1., 0., 1., 0., 1., 1., 0., 1., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1.,
1., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 1., 1.,
1., 0., 0., 1., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 1., 1., 1., 0., 1., 1., 1., 0., 0., 0., 0., 1., 0., 0.,
1., 0., 1., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 1., 0., 1., 0., 1., 1., 0., 0., 0., 1., 0., 0., 0., 1.,
0., 1., 0., 1., 0., 1., 1., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1.,
1., 0., 1., 1., 1., 1., 1., 0., 0., 1., 0., 0., 0., 0., 0., 0., 1.,
0., 0., 1., 0., 0., 1., 0., 1., 1., 1., 0., 0., 0., 0., 1., 1., 0.,
0., 1., 0., 1., 0., 0., 1., 0., 0., 1., 0., 0., 0., 1., 0., 1., 1.,
1., 0., 1., 0., 0., 0., 0., 1., 0., 0., 1., 0., 1., 1., 0., 0., 0.,
1., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 1., 1., 0., 0., 1., 1.,
0., 0., 1., 1., 0., 0., 0., 1., 0., 0., 1., 1., 0., 1., 0., 0., 1.,
0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 1., 0.,
1., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 1., 1., 1., 0., 1., 0.,
1., 0., 1., 0., 0., 1., 0., 0., 0., 1., 0., 1., 0., 1., 0., 0., 0.,
0., 0., 0., 1., 1., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 1., 1.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 0.,
0., 1., 0., 0., 0., 1., 1., 0., 1., 1., 0., 0., 0., 0., 0., 0., 1.,
1., 0., 0., 1., 0., 1., 1., 0., 0., 0., 1., 1., 0., 1., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 1., 1., 0.,
0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 1.,
0., 0., 0., 1., 0., 0., 0., 0., 0., 1., 0., 1., 0., 1., 1., 0., 0.,
0., 1., 0., 0., 0., 0., 1., 0., 1., 0., 1., 1., 0., 0., 0., 0., 1.,
1., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 1., 0., 1.,
0., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 0., 1., 0.,
1., 1., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 1., 0., 1.,
1., 0., 0., 1., 1., 1., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 1., 0., 0., 1., 1., 0., 1., 0., 0., 0., 0., 1., 0., 0., 1., 0.,
0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 1., 1., 1., 0., 1.,
0., 0., 0., 1., 0., 1., 0., 0., 0., 1., 1., 1., 0., 0., 0., 1., 0.,
1., 0., 1., 1., 0., 1., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 1., 0., 0., 0., 0., 0.,
0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 0., 0., 1.,
0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0.,
1., 0., 1., 1., 0., 1., 0., 0., 0., 0., 1., 1., 0., 0., 0., 1., 0.,
0., 1., 0., 0., 0., 1., 1., 1., 1., 0., 1., 1., 0., 0., 0., 1., 1.,
0.])
<div class="alert alert-success">
<b>Implement a one-versus-all loop to tackle this classification problem. Train a list of classifiers on the training data. Make predictions on the test data. You can use the code below to get started. </b>
</div>
```python
# ***solution***
L1 = LogisticRegression()
L2 = LogisticRegression()
L3 = LogisticRegression()
L = [L1, L2, L3]
# Train the list of classifiers in one-v-all fashion
for i,l in enumerate(L):
z_train = (y_train==i)
l.fit(X_train, z_train)
# Make predictions on the test data
predictions = []
for l in L:
predictions.append(l.predict_proba(X_test)[:,1])
predicted_classes = np.array([np.argmax([pred[i] for pred in predictions]) for i in range(len(X_test))])
# ***solution***
```
<div class="alert alert-success">
<b>Run the code below to visualize your predictions. </b>
</div>
```python
classification_accuracy=np.round(np.mean(y_test == predicted_classes)*100,2)
```
```python
# Visualize the predictions
fig, ax = plt.subplots(figsize=(15,10))
colors=['#66c2a5', '#fc8d62', '#8da0cb']
for i, color in enumerate(colors):
idx_train = np.where(y_train==i)
idx_test = np.where(y_test==i)
plt.scatter(X_train[idx_train,0], X_train[idx_train,1], c=color, edgecolor='black', s=30)
plt.scatter(X_test[idx_test,0], X_test[idx_test, 1],c='white', edgecolor=color, s=70)
ax.legend(['Class 1 - train',
'Class 1 - test',
'Class 2 - train',
'Class 2 - test',
'Class 3 - train',
'Class 3 - test']);
# add predictions
for i, color in enumerate(colors):
idx_predicted = np.where(predicted_classes==i)
plt.scatter(X_test[idx_predicted,0], X_test[idx_predicted,1], c=color, marker='s', s=2)
ax.set_xlabel('Feature 1');
ax.set_ylabel('Feature 2');
ax.set_title('Toy dataset for multiclass classification - classification accuracy: {}%'.format(classification_accuracy)).set_fontsize(20);
```
```python
```
```python
```
| ea750cd7b7d3c45c9444d4422cd7d96a6a4c5cbc | 377,923 | ipynb | Jupyter Notebook | predmod/lab4/PClab04_logreg_SOLVED__.ipynb | gdewael/teaching | a78155041918422a843f31c863dd11e8afc5646a | [
"MIT"
]
| null | null | null | predmod/lab4/PClab04_logreg_SOLVED__.ipynb | gdewael/teaching | a78155041918422a843f31c863dd11e8afc5646a | [
"MIT"
]
| null | null | null | predmod/lab4/PClab04_logreg_SOLVED__.ipynb | gdewael/teaching | a78155041918422a843f31c863dd11e8afc5646a | [
"MIT"
]
| null | null | null | 296.875884 | 129,880 | 0.908873 | true | 11,232 | Qwen/Qwen-72B | 1. YES
2. YES | 0.96378 | 0.831143 | 0.801039 | __label__eng_Latn | 0.906044 | 0.699415 |
# Homework - 1
#####Vectors and Matrices
Consider the matrix X and the vectors y and z below:
$$
\mathbf{X} =
\begin{bmatrix}
2&4 \\
1&3
\end{bmatrix}
$$
$$\mathbf{y} = \begin{bmatrix} 1 \\ 3 \end{bmatrix}$$
$$\mathbf{z} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}$$
**1. What is the inner product of the vectors y and z? (this is also sometimes called the dot product, and is sometimes written $y^Tz$)**
Inner Product of y and z ($y^T z$) = $(1 * 2) + ( 3 * 3) = 11$
```
# Inner Product of two vectors using Numpy
import numpy as np
y = np.array([[1], [3]])
z = np.array([[2], [3]])
print np.vdot(y,z)
```
11
**2. What is the product Xy?**
Xy = $\begin{bmatrix} (2 * 1) + ( 1 * 3) \\(4 * 1) + (3 * 3) \end{bmatrix}$
Xy = $\begin{bmatrix} 5 \\ 13 \end{bmatrix}$
```
# Matrix Vector multiplication using Numpy
X = np.array([[2,1],[4,3]])
y = np.array([[1], [3]])
print X.dot(y)
```
[[ 5]
[13]]
**3. Is X invertible? If so, give the inverse, and if no, explain why not.**
A n x n square matrix A is said to be invertible or nonsingular if there exists any n x n square matrix B such that $$ AB = BA = I_n $$ , where $I_n$ is a n x n Identity matrix.
The inverse matrix of $X$, denoted by $X^{-1}$ is $$\mathbf{X^{-1}} = \begin{bmatrix} 1.5&-0.5 \\ -2&1 \end{bmatrix}$$
```
# Inverse of X in Numpy
from numpy.linalg import inv
X_inv = inv(X)
print 'Inverse of X is:'
print X_inv
print 'X * X_inv is an Identity Matrix'
print X.dot(X_inv)
```
Inverse of X is:
[[ 1.5 -0.5]
[-2. 1. ]]
X * X_inv is an Identity Matrix
[[ 1. 0.]
[ 0. 1.]]
**4. What is the rank of X?**
The rank of X is 2, since the column rank is 2.
##### Calculus
**1. If $y = x^3 + x − 5$ then what is the derivative of y with respect to x?**
Derivative of y w.r.t x is: $ \frac{dy}{dx} = 3x^2 + 1$
**2. If $y = x\:sin(z)\:e^{−x}$ then what is the partial derivative of y with respect to x?**
Using Product Rule and the fact that derivative of $e^{-x} = -e^{-x}$, $x = 1$, we have
$\frac{\partial y}{\partial x} = sin(z)\:e^{-x} - x\:sin(z)\:e^{-x}$
##### Probability and Statistics
Consider a sample of data S = {1, 1, 0, 1, 0} created by flipping a coin x five times, where 0 denotes that the coin turned up heads and 1 denotes that it turned up tails.
**1. What is the sample mean for this data?**
Sample Mean is $\frac{1 + 1 + 0 + 1 + 0}{5} = \frac{3}{5}$
**2. What is the sample variance for this data?**
Sample Variance is $\frac{6}{25}$
** 3. What is the probability of observing this data, assuming it was generated by flipping a coin with an equal probability of heads and tails (i.e. the probability distribution is p(x = 1) = 0.5, p(x = 0) = 0.5).**
Since it is a sequence of five independent tosses with P(H) = P(T) = 0.5,
P(S) = $(\frac{1}{2})^5 = \frac{1}{32}$
** 4. Note that the probability of this data sample would be greater if the value of p(x = 1) was not 0.5, but instead some other value. What is the value that maximizes the probability of the sample S. Please justify your answer.**
Answer:
Let p(x=1) = p. Therefore, we have p(x=0) = 1-p. If n is the number of tosses, we have the total probability as $\prod\limits_{i=1}^n p^{x_i} \: (1-p)^{x_i} = p^{\sum_{i=1}^n x_i} \: (1-p)^{n - \sum_{i=1}^n x_i}$
Let y = $ {\sum_{i=1}^n x_i} $.
The above equation can be now written as $ p^{y} \: (1-p)^{n - y}$
Since we need to find the value of p that maximizes the probability of sample S, we can take the log of the above equation, take its derivative w.r.t p, set it to zero and solve for p.
Taking the log, we have $y\>log(p) + (n-y)\>log(1-p)$, since $log(a^p) = p\>log(a)$ and $log(a * b) = log(a) + log(b)$.
Taking the derivative w.r.t p and setting it to 0 and solving for p, we have:
$$\begin{equation}
\begin{split}
\frac{dl(p)}{dp} & = \frac{y}{p} + \frac{n-y}{(1-p)} = 0 \\
\implies 0 & = \frac{(1-p)\>y + p\>(n - y)}{p\>(1-p)} \\
\implies 0 & = \frac{y - py + py - pn}{p\>(1-p)} \\
\implies 0 & = \frac{y - pn}{p\>(1-p)} \\
pn & = y \\
p & = \frac{1}{n} y \\
p & = \frac{1}{n} {\sum_{i=1}^n x_i} \\
\end{split}
\end{equation}$$
Plugging in the values for $x_i$ and n=5, we get $\boxed{p = \frac{3}{5}}$
```
# Custom Styling - Please ignore
# Custom CSS Styling
from IPython.core.display import HTML
def css_styling():
styles = open("../../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
<style>
@font-face {
font-family: "Computer Modern";
src: url('http://9dbb143991406a7c655e-aa5fcb0a5a4ec34cff238a2d56ca4144.r56.cf5.rackcdn.com/cmunss.otf');
}
@font-face {
font-family: "Computer Modern";
font-weight: bold;
src: url('http://9dbb143991406a7c655e-aa5fcb0a5a4ec34cff238a2d56ca4144.r56.cf5.rackcdn.com/cmunsx.otf');
}
@font-face {
font-family: "Computer Modern";
font-style: oblique;
src: url('http://9dbb143991406a7c655e-aa5fcb0a5a4ec34cff238a2d56ca4144.r56.cf5.rackcdn.com/cmunsi.otf');
}
@font-face {
font-family: "Computer Modern";
font-weight: bold;
font-style: oblique;
src: url('http://9dbb143991406a7c655e-aa5fcb0a5a4ec34cff238a2d56ca4144.r56.cf5.rackcdn.com/cmunso.otf');
}
div.cell{
width:800px;
margin-left:16% !important;
margin-right:auto;
}
h1 {
font-family: Helvetica, serif;
}
h4{
margin-top:12px;
margin-bottom: 3px;
}
div.text_cell_render{
font-family: Computer Modern, "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif;
line-height: 145%;
font-size: 130%;
width:800px;
margin-left:auto;
margin-right:auto;
}
.CodeMirror{
font-family: "Source Code Pro", source-code-pro,Consolas, monospace;
}
.prompt{
display: None;
}
.text_cell_render h5 {
font-weight: 300;
font-size: 22pt;
color: #4057A1;
font-style: italic;
margin-bottom: .5em;
margin-top: 0.5em;
display: block;
}
.warning{
color: rgb( 240, 20, 20 )
}
</style>
| e05c8b6d1cb8649cf037333ba6cfbc39747e7a4f | 11,770 | ipynb | Jupyter Notebook | coursework/CMU 10-601/Homework 1.ipynb | mathkann/ML | 65ace09c7327c2625ed176bc7d0e7ad46794218e | [
"MIT"
]
| 1 | 2015-08-15T11:16:14.000Z | 2015-08-15T11:16:14.000Z | coursework/CMU 10-601/Homework 1.ipynb | mathkann/ML | 65ace09c7327c2625ed176bc7d0e7ad46794218e | [
"MIT"
]
| null | null | null | coursework/CMU 10-601/Homework 1.ipynb | mathkann/ML | 65ace09c7327c2625ed176bc7d0e7ad46794218e | [
"MIT"
]
| null | null | null | 31.810811 | 243 | 0.434155 | true | 2,142 | Qwen/Qwen-72B | 1. YES
2. YES | 0.92523 | 0.959154 | 0.887438 | __label__eng_Latn | 0.93346 | 0.90015 |
<a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/pyro_intro.ipynb" target="_parent"></a>
[Pyro](https://pyro.ai/) is a probabilistic programming system built on top of PyTorch. It supports posterior inference based on MCMC and stochastic variational inference; discrete latent variables can be marginalized out exactly using dynamic programmming.
```python
!pip install pyro-ppl
```
Collecting pyro-ppl
[?25l Downloading https://files.pythonhosted.org/packages/aa/7a/fbab572fd385154a0c07b0fa138683aa52e14603bb83d37b198e5f9269b1/pyro_ppl-1.6.0-py3-none-any.whl (634kB)
[K |████████████████████████████████| 634kB 5.4MB/s
[?25hRequirement already satisfied: torch>=1.8.0 in /usr/local/lib/python3.7/dist-packages (from pyro-ppl) (1.8.1+cu101)
Collecting pyro-api>=0.1.1
Downloading https://files.pythonhosted.org/packages/fc/81/957ae78e6398460a7230b0eb9b8f1cb954c5e913e868e48d89324c68cec7/pyro_api-0.1.2-py3-none-any.whl
Requirement already satisfied: numpy>=1.7 in /usr/local/lib/python3.7/dist-packages (from pyro-ppl) (1.19.5)
Requirement already satisfied: tqdm>=4.36 in /usr/local/lib/python3.7/dist-packages (from pyro-ppl) (4.41.1)
Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.7/dist-packages (from pyro-ppl) (3.3.0)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch>=1.8.0->pyro-ppl) (3.7.4.3)
Installing collected packages: pyro-api, pyro-ppl
Successfully installed pyro-api-0.1.2 pyro-ppl-1.6.0
```python
import matplotlib.pyplot as plt
import numpy as np
import torch
import pyro
import pyro.infer
import pyro.optim
import pyro.distributions as dist
from torch.distributions import constraints
from pyro.infer import MCMC, NUTS, Predictive, HMC
from pyro.infer import SVI, Trace_ELBO
from pyro.infer import EmpiricalMarginal
from pyro.distributions import Beta, Binomial, HalfCauchy, Normal, Pareto, Uniform
from pyro.distributions.util import scalar_like
from pyro.infer.mcmc.util import initialize_model, summary
from pyro.util import ignore_experimental_warning
pyro.set_rng_seed(101)
```
# Example: inferring mean of 1d Gaussian .
We use the simple example from the [Pyro intro](https://pyro.ai/examples/intro_part_ii.html#A-Simple-Example). The goal is to infer the weight $\theta$ of an object, given noisy measurements $y$. We assume the following model:
$$
\begin{align}
\theta &\sim N(\mu=8.5, \tau^2=1.0)\\
y \sim &N(\theta, \sigma^2=0.75^2)
\end{align}
$$
Where $\mu=8.5$ is the initial guess.
```python
def model(hparams, data=None):
prior_mean, prior_sd, obs_sd = hparams
theta = pyro.sample("theta", dist.Normal(prior_mean, prior_sd))
y = pyro.sample("y", dist.Normal(theta, obs_sd), obs=data)
return y
```
## Exact inference
By Bayes rule for Gaussians, we know that the exact posterior,
given a single observation $y=9.5$, is given by
$$
\begin{align}
\theta|y &\sim N(m, s^s) \\
m &=\frac{\sigma^2 \mu + \tau^2 y}{\sigma^2 + \tau^2}
= \frac{0.75^2 \times 8.5 + 1 \times 9.5}{0.75^2 + 1^2}
= 9.14 \\
s^2 &= \frac{\sigma^2 \tau^2}{\sigma^2 + \tau^2}
= \frac{0.75^2 \times 1^2}{0.75^2 + 1^2}= 0.6^2
\end{align}
$$
```python
mu = 8.5
tau = 1.0
sigma = 0.75
hparams = (mu, tau, sigma)
y = 9.5
m = (sigma**2 * mu + tau**2 * y) / (sigma**2 + tau**2) # posterior mean
s2 = (sigma**2 * tau**2) / (sigma**2 + tau**2) # posterior variance
s = np.sqrt(s2)
print(m)
print(s)
```
9.14
0.6
## Ancestral sampling
```python
def model2(hparams, data=None):
prior_mean, prior_sd, obs_sd = hparams
theta = pyro.sample("theta", dist.Normal(prior_mean, prior_sd))
y = pyro.sample("y", dist.Normal(theta, obs_sd), obs=data)
return theta, y
for i in range(5):
theta, y = model2(hparams)
print([theta, y])
```
[tensor(9.1529), tensor(8.7116)]
[tensor(8.7306), tensor(9.3978)]
[tensor(9.0740), tensor(9.4240)]
[tensor(7.3040), tensor(7.8569)]
[tensor(7.8939), tensor(8.0257)]
## MCMC
See [the documentation](http://docs.pyro.ai/en/stable/mcmc.html)
```python
nuts_kernel = NUTS(model)
obs = torch.tensor(y)
mcmc = MCMC(nuts_kernel, num_samples=1000, warmup_steps=50)
mcmc.run(hparams, obs)
print(type(mcmc))
```
Sample: 100%|██████████| 1050/1050 [00:03, 326.67it/s, step size=1.30e+00, acc. prob=0.880]
<class 'pyro.infer.mcmc.api.MCMC'>
```python
samples = mcmc.get_samples()
print(type(samples))
print(samples.keys())
print(samples["theta"].shape)
```
<class 'dict'>
dict_keys(['theta'])
torch.Size([1000])
```python
mcmc.diagnostics()
```
{'acceptance rate': {'chain 0': 0.924},
'divergences': {'chain 0': []},
'theta': OrderedDict([('n_eff', tensor(500.2368)),
('r_hat', tensor(1.0050))])}
```python
thetas = samples["theta"].numpy()
print(np.mean(thetas))
print(np.std(thetas))
```
9.152181
0.625822
## Variational Inference
See [the documentation](http://docs.pyro.ai/en/stable/inference_algos.html)
For the guide (approximate posterior), we use a [pytorch.distributions.normal](https://pytorch.org/docs/master/distributions.html#torch.distributions.normal.Normal).
```python
# the guide must have the same signature as the model
def guide(hparams, data):
y = data
prior_mean, prior_sd, obs_sd = hparams
m = pyro.param("m", torch.tensor(y)) # location
s = pyro.param("s", torch.tensor(prior_sd), constraint=constraints.positive) # scale
return pyro.sample("theta", dist.Normal(m, s))
# initialize variational parameters
pyro.clear_param_store()
# set up the optimizer
# optimizer = pyro.optim.Adam({"lr": 0.001, "betas": (0.90, 0.999)})
optimizer = pyro.optim.SGD({"lr": 0.001, "momentum": 0.1})
# setup the inference algorithm
svi = SVI(model, guide, optimizer, loss=Trace_ELBO())
n_steps = 2000
# do gradient steps
obs = torch.tensor(y)
loss_history, m_history, s_history = [], [], []
for t in range(num_steps):
loss_history.append(svi.step(hparams, obs))
m_history.append(pyro.param("m").item())
s_history.append(pyro.param("s").item())
plt.plot(loss_history)
plt.title("ELBO")
plt.xlabel("step")
plt.ylabel("loss")
post_mean = pyro.param("m").item()
post_std = pyro.param("s").item()
print([post_mean, post_std])
```
# Example: beta-bernoulli model
Example is from [SVI tutorial](https://pyro.ai/examples/svi_part_i.html)
The model is
$$
\begin{align}
\theta &\sim \text{Beta}(\alpha, \beta) \\
x_i &\sim \text{Ber}(\theta)
\end{align}
$$
where $\alpha=\beta=10$.
```python
alpha0 = 10.0
beta0 = 10.0
def model(data):
alpha0_tt = torch.tensor(alpha0)
beta0_tt = torch.tensor(beta0)
f = pyro.sample("theta", dist.Beta(alpha0_tt, beta0_tt))
# loop over the observed data
for i in range(len(data)):
pyro.sample("obs_{}".format(i), dist.Bernoulli(f), obs=data[i])
def model_binom(data):
alpha0_tt = torch.tensor(alpha0)
beta0_tt = torch.tensor(beta0)
theta = pyro.sample("theta", dist.Beta(alpha0_tt, beta0_tt))
data_np = [x.item() for x in data]
N = len(data_np)
N1 = np.sum(data_np)
N0 = N - N1
pyro.sample("obs", dist.Binomial(N, theta))
```
```python
# create some data with 6 observed heads and 4 observed tails
data = []
for _ in range(6):
data.append(torch.tensor(1.0))
for _ in range(4):
data.append(torch.tensor(0.0))
data_np = [x.item() for x in data]
print(data)
print(data_np)
N = len(data_np)
N1 = np.sum(data_np)
N0 = N - N1
print([N1, N0])
```
[tensor(1.), tensor(1.), tensor(1.), tensor(1.), tensor(1.), tensor(1.), tensor(0.), tensor(0.), tensor(0.), tensor(0.)]
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]
[6.0, 4.0]
```python
```
[tensor(1.), tensor(1.), tensor(1.), tensor(1.), tensor(1.), tensor(1.), tensor(0.), tensor(0.), tensor(0.), tensor(0.)]
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]
## Exact inference
The posterior is given by
$$
\begin{align}
\theta &\sim \text{Ber}(\alpha + N_1, \beta + N_0) \\
N_1 &= \sum_{i=1}^N [x_i=1] \\
N_0 &= \sum_{i=1}^N [x_i=0]
\end{align}
$$
```python
alpha1 = alpha0 + N1
beta1 = beta0 + N0
print("exact posterior: alpha={:0.3f}, beta={:0.3f}".format(alpha1, beta1))
post_mean = alpha1 / (alpha1 + beta1)
post_var = (post_mean * beta1) / ((alpha1 + beta1) * (alpha1 + beta1 + 1))
post_std = np.sqrt(post_var)
print([post_mean, post_std])
```
exact posterior: alpha=16.000, beta=14.000
[0.5333333333333333, 0.08960286733763294]
## MCMC
```python
nuts_kernel = NUTS(model)
mcmc = MCMC(nuts_kernel, num_samples=1000, warmup_steps=50)
mcmc.run(data)
print(mcmc.diagnostics())
samples = mcmc.get_samples()
print(samples["theta"].shape)
```
Sample: 100%|██████████| 1050/1050 [00:09, 111.12it/s, step size=1.50e+00, acc. prob=0.803]
{'theta': OrderedDict([('n_eff', tensor(443.8569)), ('r_hat', tensor(0.9992))]), 'divergences': {'chain 0': []}, 'acceptance rate': {'chain 0': 0.864}}
torch.Size([1000])
```python
thetas = samples["theta"].numpy()
print(np.mean(thetas))
print(np.std(thetas))
```
0.5330437
0.09079484
```python
nuts_kernel = NUTS(model_binom)
mcmc = MCMC(nuts_kernel, num_samples=1000, warmup_steps=50)
mcmc.run(data)
print(mcmc.diagnostics())
samples = mcmc.get_samples()
print(samples["theta"].shape)
```
Sample: 100%|██████████| 1050/1050 [00:08, 117.55it/s, step size=9.57e-01, acc. prob=0.919]
{'theta': OrderedDict([('n_eff', tensor(269.4737)), ('r_hat', tensor(0.9990))]), 'divergences': {'chain 0': []}, 'acceptance rate': {'chain 0': 0.951}}
torch.Size([1000])
```python
thetas = samples["theta"].numpy()
print(np.mean(thetas))
print(np.std(thetas))
```
0.48617417
0.112258926
## Variational inference
```python
def guide(data):
alpha_q = pyro.param("alpha_q", torch.tensor(15.0), constraint=constraints.positive)
beta_q = pyro.param("beta_q", torch.tensor(15.0), constraint=constraints.positive)
pyro.sample("theta", dist.Beta(alpha_q, beta_q))
```
```python
# optimizer = pyro.optim.Adam({"lr": 0.0005, "betas": (0.90, 0.999)})
optimizer = pyro.optim.SGD({"lr": 0.001, "momentum": 0.1})
svi = SVI(model, guide, optimizer, loss=Trace_ELBO())
n_steps = 2000
loss_history = []
for step in range(n_steps):
loss_history.append(svi.step(data))
plt.plot(loss_history)
plt.title("ELBO")
plt.xlabel("step")
plt.ylabel("loss");
```
```python
# grab the learned variational parameters
alpha_q = pyro.param("alpha_q").item()
beta_q = pyro.param("beta_q").item()
print("variational posterior: alpha={:0.3f}, beta={:0.3f}".format(alpha_q, beta_q))
post_mean = alpha_q / (alpha_q + beta_q)
post_var = (post_mean * beta_q) / ((alpha_q + beta_q) * (alpha_q + beta_q + 1))
post_std = np.sqrt(post_var)
print([post_mean, post_std])
```
variational posterior: alpha=15.414, beta=14.094
[0.5223745147578196, 0.09043264875842827]
```python
```
| 7e0b391aa07c003fcefc56a388f6960fc8c43d78 | 61,852 | ipynb | Jupyter Notebook | notebooks/misc/pyro_intro.ipynb | karm-patel/pyprobml | af8230a0bc0d01bb0f779582d87e5856d25e6211 | [
"MIT"
]
| null | null | null | notebooks/misc/pyro_intro.ipynb | karm-patel/pyprobml | af8230a0bc0d01bb0f779582d87e5856d25e6211 | [
"MIT"
]
| 1 | 2022-03-27T04:59:50.000Z | 2022-03-27T04:59:50.000Z | notebooks/misc/pyro_intro.ipynb | karm-patel/pyprobml | af8230a0bc0d01bb0f779582d87e5856d25e6211 | [
"MIT"
]
| 2 | 2022-03-26T11:52:36.000Z | 2022-03-27T05:17:48.000Z | 69.10838 | 22,352 | 0.800297 | true | 3,706 | Qwen/Qwen-72B | 1. YES
2. YES | 0.787931 | 0.853913 | 0.672824 | __label__eng_Latn | 0.282645 | 0.401528 |
```python
import semicon
import sympy
sympy.init_printing()
```
```python
model = semicon.models.ZincBlende(
components=['foreman', 'zeeman'],
bands=['gamma_6c'],
default_databank='winkler',
)
```
```python
model.hamiltonian
```
$$\left[\begin{matrix}\frac{B_{z} g_{c}}{2} \mu_{B} + E_{0} + E_{v} + \frac{\gamma_{0} \hbar^{2} k_{x}^{2}}{2 m_{0}} + \frac{\gamma_{0} \hbar^{2} k_{y}^{2}}{2 m_{0}} + \frac{\gamma_{0} \hbar^{2} k_{z}^{2}}{2 m_{0}} & \frac{g_{c} \mu_{B}}{2} \left(B_{x} - i B_{y}\right)\\\frac{g_{c} \mu_{B}}{2} \left(B_{x} + i B_{y}\right) & - \frac{B_{z} g_{c}}{2} \mu_{B} + E_{0} + E_{v} + \frac{\gamma_{0} \hbar^{2} k_{x}^{2}}{2 m_{0}} + \frac{\gamma_{0} \hbar^{2} k_{y}^{2}}{2 m_{0}} + \frac{\gamma_{0} \hbar^{2} k_{z}^{2}}{2 m_{0}}\end{matrix}\right]$$
```python
model.bands
```
['gamma_6c']
```python
model.parameters(material='InAs')
```
{'E_0': 0.418, 'Delta_0': 0.38, 'P': 0.9197, 'g_c': -14.9, 'gamma_1': 2.6959795187642825, 'gamma_2': -0.5520102406178573, 'gamma_3': 0.24798975938214163, 'kappa': -1.2520102406178584, 'q': 0.39, 'gamma_0': 43.66812227074236, 'E_v': 0, 'm_0': 510998.94609999994, 'phi_0': 4135.667662, 'mu_B': 5.7883818012e-05, 'hbar': 197.3269788}
# continuum dispersion
```python
import kwant
import numpy as np
import scipy.linalg as la
import matplotlib.pyplot as plt
%matplotlib inline
```
```python
def plot(bands, style, new_gamma_0=None):
model = semicon.models.ZincBlende(
bands=bands,
components=('foreman',),
default_databank='winkler'
)
if new_gamma_0 is not None:
params = model.parameters(material='InAs').renormalize(new_gamma_0=1)
else:
params = model.parameters(material='InAs')
disp = kwant.continuum.lambdify(str(model.hamiltonian), locals=params)
h_k = lambda kx, ky, kz: disp(k_x=kx, k_y=ky, k_z=kz)
k = np.linspace(-.5, .5, 101)
e = np.array([la.eigvalsh(h_k(ki, 0, 0)) for ki in k])
plt.plot(k, e, style)
plt.plot([], [], style, label=bands)
plt.figure(figsize=(12, 10))
plot(bands=('gamma_6c',), style='C0')
plot(bands=('gamma_8v', 'gamma_7v'), style='C1')
plot(bands=('gamma_6c', 'gamma_8v', 'gamma_7v'), style='k--')
plt.legend(prop={'size': 18})
```
| 2ea43d15edb0b1b364753de77d1ffc1dff32c8f5 | 80,286 | ipynb | Jupyter Notebook | notebooks/hamiltonian_and_bulk_bands.ipynb | quantum-tinkerer/semicon | 3b4fc8c3f9a25553fc181a4cb9e5e4109c59a5e2 | [
"BSD-2-Clause"
]
| null | null | null | notebooks/hamiltonian_and_bulk_bands.ipynb | quantum-tinkerer/semicon | 3b4fc8c3f9a25553fc181a4cb9e5e4109c59a5e2 | [
"BSD-2-Clause"
]
| null | null | null | notebooks/hamiltonian_and_bulk_bands.ipynb | quantum-tinkerer/semicon | 3b4fc8c3f9a25553fc181a4cb9e5e4109c59a5e2 | [
"BSD-2-Clause"
]
| 1 | 2019-12-30T00:29:36.000Z | 2019-12-30T00:29:36.000Z | 376.929577 | 74,028 | 0.92315 | true | 907 | Qwen/Qwen-72B | 1. YES
2. YES | 0.870597 | 0.692642 | 0.603012 | __label__yue_Hant | 0.118153 | 0.23933 |
## Breakeven Analysis: 3D Printing vs. Injection Molding ##
Detemine the breakeven point when comparing the production of the plastic enclosure for the SomniCloud
- __Given__: Enclosure volume is $2.57 in^3$
```python
part_vol = 2.57 # in^3
```
### 3D Printing Specs ###
- \$4.25 / cubic inch of ABS
- Tooling Cost: \$0.00
- Machine Time Cost: \$12.00 / hour
- Print Time: 4 hours
```python
# Set 3D ABS variable constants
usd_in3_3d = 4.25 # $/in^3
setup_3d = 0.00 # $ Setup Cost
mc_3d = 12.00 # $/hr
time_3d = 4.0 # hrs
```
### Injection Mold Data ###
- \$4.50 / lb ABS
- Mold Cost: \$6500.00
- Machine Time Cost: \$80.00 / hour
- Cycle Time: 30 seconds
```python
# Set Injection Mold ABS variable constants
usd_lb_im = 6.50 # $/lb
setup_im = 50000.00 # $ Setup Cost
mc_im = 120.00 # $/hr
time_im = 60 / 3600 # hrs
```
### Step 1: Convert \$ / lb to \$ / cubic inch ###
- Density of ABS: $\rho_{ABS} = 1.07 g/cm^{3}$
\begin{equation}
\left(\frac{\$4.50}{lb}\right) \
\left(\frac{1.07g}{cm^{3}}\right) \
\left(\frac{2.2lb}{1000g}\right) \
\left(\frac{2.54cm}{1in}\right)^{3} \
= X \frac{\$}{in_{3}}
\end{equation}
```python
# Create variables for given constants
rho_abs = 1.07 # g/cm^3
```
```python
# Calculate $ per in^3 of injection molded ABS
usd_in3_im = (usd_lb_im) * (rho_abs) * (2.2 / 1000) * ((2.54 / 1)**3) # $/in^3
print(f"Injection Molded ABS [$/in^3]: {usd_in3_im}")
```
Injection Molded ABS [$/in^3]: 0.25073846626400004
### Step 2: Calculate Cost Functions ###
- $Total\ Cost = (Cost\ per\ Unit) * (Number\ of\ Units) + (Setup\ Cost)$
- $Cost\ per\ Unit = (ABS\ Cost\ per\ Unit) + (Machine\ Time\ Cost\ per\ Unit)$
- $ABS\ Cost\ per\ Unit = (ABS\ Cost\ per\ in^{3}) * (Part Volume)$
- $Cost\ per\ Unit = (Machine\ Time\ Cost\ per\ Hour) + (Machine\ Time)$
```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import math
N = np.linspace(0,1000,10000)
```
### Cost Function for 3D Printing ###
```python
# Cost per unit for 3D printing
abs_cpu_3d = usd_in3_3d * part_vol # ABS Cost per Unit
mc_cpu_3d = mc_3d * time_3d # Machine Cost per Unit
t_cpu_3d = abs_cpu_3d + mc_cpu_3d # Total per unit cost
# Cost function for 3D printing
C_3d = (t_cpu_3d)*N + setup_3d
print(f"3D Printing Cost per Unit: ${t_cpu_3d:.2f}")
print(f"3D Printing Setup Cost: ${setup_3d:.2f}")
```
3D Printing Cost per Unit: $58.92
3D Printing Setup Cost: $0.00
### Cost Function for Injection Molding Printing ###
```python
# Cost per unit for 3D printing
abs_cpu_im = usd_in3_im * part_vol # ABS Cost per Unit
mc_cpu_im = mc_im * time_im # Machine Cost per Unit
t_cpu_im = abs_cpu_im + mc_cpu_im # Total per unit cost
# Cost function for 3D printing
C_im = (t_cpu_im)*N + setup_im
print(f"3D Printing Cost per Unit: ${t_cpu_im:.2f}")
print(f"3D Printing Setup Cost: ${setup_im:.2f}")
```
3D Printing Cost per Unit: $2.64
3D Printing Setup Cost: $50000.00
### Step 3: Plot Results ###
```python
fig, ax = plt.subplots(figsize=(12,8))
ax.plot(N,C_3d, label=r"3D Printing")
ax.plot(N,C_im, label=r"Injection Molding")
idx = np.argwhere(np.diff(np.sign(C_3d - C_im)) != 0).reshape(-1) + 0
ax.plot(N[idx], C_3d[idx], 'o', markersize=8, color='black')
ax.legend(loc=2) # upper left corner
ax.set_xlabel(r'# of Units', fontsize=18)
ax.set_ylabel(r'Total Cost', fontsize=18)
ax.set_title('Breakeven Analysis')
ax.set_xlim([min(N), max(N)])
ax.set_ylim([0, max(C_3d)])
plt.show()
print(f"Breakeven point = {math.ceil(N[idx])} units")
```
```python
```
| 5318f29c8da0a7954804928595f7af7ed97554f7 | 43,279 | ipynb | Jupyter Notebook | Breakeven Analysis.ipynb | jrmcclure/MAE3501 | f36fac38184001f6b530250a960d5fdbaf9b00ec | [
"MIT"
]
| null | null | null | Breakeven Analysis.ipynb | jrmcclure/MAE3501 | f36fac38184001f6b530250a960d5fdbaf9b00ec | [
"MIT"
]
| null | null | null | Breakeven Analysis.ipynb | jrmcclure/MAE3501 | f36fac38184001f6b530250a960d5fdbaf9b00ec | [
"MIT"
]
| null | null | null | 139.160772 | 35,722 | 0.871832 | true | 1,264 | Qwen/Qwen-72B | 1. YES
2. YES | 0.924142 | 0.808067 | 0.746769 | __label__eng_Latn | 0.227813 | 0.573326 |
<table>
<tr align=left><td>
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli</td>
</table>
Note: The presentation below largely follows part II in "Finite Difference Methods for Ordinary and Partial Differential Equations" by LeVeque (SIAM, 2007).
```python
```
```python
from __future__ import print_function
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
```
# Numerical Solution to ODE Initial Value Problems - Part 1
Many physical, biological, and societal systems can be written as a system of ordinary differential equations (ODEs). In the case where the initial state (value) is know the problems can be written as
$$
\frac{\text{d}\mathbf{u}}{\text{d}t} = \mathbf{f}(t, \mathbf{u}) \quad \mathbf{u}(0) = \mathbf{u}_0
$$
where
- $\mathbf{\!u}(t)$ is the state vector
- $\mathbf{\!f}(t, \mathbf{\!u})$ is a vector-valued function that controls the growth of $\mathbf{u}$ with time
- $\mathbf{\!u}(0)$ is the initial condition at time $t = 0$
### The Example of our time: A non-linear model of Epidemics
Classical [Kermack and McKendrick (1927)](https://royalsocietypublishing.org/doi/10.1098/rspa.1927.0118) SIR model of epidemics (with reinfection)
$$
\begin{align}
\frac{ds}{dt} &= -si + kr \\
\frac{di}{dt} &= si -\sigma i \\
\frac{dr}{dt} &= \sigma i - kr\\
\end{align}
$$
Where the variable $s$ represents the fraction of a population that is **Susceptible** to infection, $i$ is the proportion **Infected** and $r$ the fraction **Recovered**. For this model $s+i+r =1$. The parameters $\sigma, k \geq 0$ control the relative rates of infection and recovery.
For this problem
$$
\mathbf{u}(t) = \begin{bmatrix} s(t)\\ i(t)\\ r(t)\\\end{bmatrix},\quad\quad
\mathbf{f}(t,\mathbf{u}) = \begin{bmatrix} -si + kr \\ si -\sigma i \\ \sigma i - kr\\\end{bmatrix}
$$
### Numerical Solutions
```python
# Solve using SciPy's ODE integrator solve_ivp
from scipy.integrate import solve_ivp
# define the RHS of our system of ODE's
def f_sir(t, u, sigma, k):
s,i,r = u
return numpy.array([-s*i + k*r,
(s - sigma)*i,
sigma*i - k*r ])
```
```python
sigma = .2
k = 0.01
t_max = 100
u_0 = [0.999, 0.001, 0.]
sol = solve_ivp(f_sir, [0, t_max] , u_0, args=(sigma, k), rtol=1.e-6, atol=1.e-9,dense_output = True)
```
```python
t = numpy.linspace(0, t_max, 300)
z = sol.sol(t)
fig = plt.figure(figsize=(20,7))
axes = fig.add_subplot(1,2,1)
axes.plot(t,z[0],'r',label='s', linewidth=2)
axes.plot(t,z[1],'b',label='i', linewidth=2)
axes.plot(t,z[2],'g',label='r', linewidth=2)
axes.plot(t,sigma*numpy.ones(t.shape),'k--',label='$\sigma$')
axes.legend(loc='best',shadow=True, fontsize=14)
axes.set_xlabel('Time',fontsize=16)
axes.set_ylabel('Population',fontsize=16)
axes.grid()
axes.set_title('SIR system: $\sigma={}$, $k={}$'.format(sigma,k),fontsize=18)
plt.show()
```
### Questions an epidemiologist might ask:
- What are the dynamics of this system? does it predict steady or oscillatory solutions?
- Can we estimate critical parameters ($\sigma$, $k$) from data?
- Can we reliably use this model to predict the future?
- How do we evaluate whether this is a *useful* model?
- How might I modify/improve this model.
### Questions a Computational Mathematician might ask:
- Does a solution to the model even exist and is it unique?
- Is our approximate numerical solution accurate?
- What are the dynamics of this system? does it predict steady or oscillatory solutions?
- how do we understand the sensitivity to parameters?
### Existence and Uniqueness of solutions (n-D autonomous systems)
For proof see [Hirsch, Smale, Devaney, Dynamical Systems](https://www.amazon.com/Differential-Equations-Dynamical-Systems-Introduction/dp/0123820103)
#### Theorem: (Picard-Lindelhof)
Given an Autonomous, dynamical system
$$
\frac{\text{d}\mathbf{u}}{\text{d}t} = \mathbf{f}(\mathbf{u}) \quad \mathbf{u}(0) = \mathbf{u}_0
$$
with $\mathbf{u}\in\mathbb{R}^n$ and $\mathbf{f}:\mathbb{R}^n\rightarrow\mathbb{R}^n$
If $\mathbf{f}$ is
* Bounded: $|\mathbf{f}| < M$
* Lipshitz Continuous: $$|\mathbf{f}(\mathbf{x}) - \mathbf{f}(\mathbf{y})| < K|\mathbf{x}- \mathbf{y} |$$
On a ball of radius $\rho$ around the initial condition $\mathbf{u}_0$. Then a unique solution exists to the ODE IVP for some interval of time $t\in[-a,a]$ where $0 < a < \min(\rho/M, 1/K)$
### Geometric Picture
<table>
<tr align=center><td></td>
<td>$$
\frac{\text{d}\mathbf{u}}{\text{d}t} = \mathbf{f}(\mathbf{u}), \quad \mathbf{u}(0) = \mathbf{u}_0
$$</td>
</table>
* **Short Version**: If $\mathbf{f}$ is sufficiently smooth, then a local solution to the ODE exists and is unique
* **Caveat**: The theorem itself gives *NO* constructive way to find that solution
#### Other Examples: Simple radioactive decay
$$
\mathbf{\!u} = [c], \quad \mathbf{f} = [-\lambda c]
$$
$$
\frac{\text{d} c}{\text{d}t} = -\lambda c \quad c(0) = c_0
$$
which has solutions of the form $c(t) = c_0 e^{-\lambda t}$
```python
decay_constant = -numpy.log(2.)/1600.
t = numpy.linspace(0., 4800, 11)
y = numpy.linspace(0., 1.2, 11)
T, Y = numpy.meshgrid(t,y)
dt = numpy.ones(T.shape)
dy = -dt*Y
tp = numpy.linspace(0., 4800, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.quiver(T,Y, dt,dy,linewidth=0.1,color='gray')
axes.plot(tp,numpy.exp(decay_constant*tp),linewidth=3)
axes.plot(0.,1.,'ro', markersize=10)
axes.plot(1600., 0.5,'rx',markersize=10)
axes.grid()
axes.set_title("Radioactive Decay, $u' = - \lambda u$, $u(0)=1$, $t_{1/2}=1600$ yr", fontsize=18)
axes.set_xlabel('t (years)', fontsize=16)
axes.set_ylabel('u', fontsize=16)
axes.set_ylim((-.1,1.2))
plt.show()
```
#### Examples: Complex radioactive decay (or chemical system).
Chain of decays from one species to another.
$$\begin{aligned}
\frac{\text{d} c_1}{\text{d}t} &= -\lambda_1 c_1 \\
\frac{\text{d} c_2}{\text{d}t} &= \lambda_1 c_1 - \lambda_2 c_2 \\
\frac{\text{d} c_3}{\text{d}t} &= \lambda_2 c_2 - \lambda_3 c_3
\end{aligned}$$
$$\frac{\text{d} \mathbf{u}}{\text{d}t} = \frac{\text{d}}{\text{d}t}\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} =
\begin{bmatrix}
-\lambda_1 & 0 & 0 \\
\lambda_1 & -\lambda_2 & 0 \\
0 & \lambda_2 & -\lambda_3
\end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix}$$
$$\frac{\text{d} \mathbf{u}}{\text{d}t} = A \mathbf{u}$$
For systems of equations like this the general solution to the ODE is the matrix exponential:
$$\mathbf{u}(t) = \mathbf{u}_0 e^{A t}$$
which can be solved given the eigenvalues and eigenvectors of $A$.
#### Examples: Particle tracking in a fluid
$$\frac{\text{d} \mathbf{X}}{\text{d}t} = \mathbf{V}(t, \mathbf{X})$$
In fact all ODE IVP systems can be thought of as tracking particles through a flow field (dynamical system). In 1-dimension the flow "manifold" we are on is fixed by the initial condition.
```python
x = numpy.linspace(0., 1., 11)
y = numpy.linspace(0., 1., 11)
x_fine = numpy.linspace(0., 1.)
y_fine = numpy.linspace(0., 1.)
X, Y = numpy.meshgrid(x,y)
X_fine, Y_fine = numpy.meshgrid(x_fine, y_fine)
pi = numpy.pi
psi = numpy.sin(pi*X_fine)*numpy.sin(pi*Y_fine)
U = pi*numpy.sin(pi*X)*numpy.cos(pi*Y)
V = -pi*numpy.cos(pi*X)*numpy.sin(pi*Y)
x0 = 0.75
y0 = 0.75
psi0 = numpy.sin(pi*x0)*numpy.sin(pi*y0)
fig = plt.figure(figsize=(8,8))
axes = fig.add_subplot(1, 1, 1)
axes.quiver(X,Y, U, V)
axes.plot(.75, 0.75,'ro')
axes.contour(X_fine, Y_fine, psi, [ psi0 ])
axes.grid()
axes.set_title("Particle tracking", fontsize=18)
axes.set_xlabel('y', fontsize=16)
axes.set_ylabel('x', fontsize=16)
plt.show()
```
#### Examples: Van der Pol Oscillator
$$y'' - \mu (1 - y^2) y' + y = 0 \quad \quad \text{with} \quad \quad y(0) = y_0, \quad y'(0) = v_0$$
$$\mathbf{u} = \begin{bmatrix} y \\ y' \end{bmatrix} = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}$$
$$\frac{\text{d}}{\text{d}t} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} u_2 \\ \mu (1 - u_1^2) u_2 - u_1 \end{bmatrix} = \mathbf{f}(t, \mathbf{u})$$
```python
from scipy.integrate import solve_ivp
def f_vanderpol(t, u, mu=5.):
return numpy.array([u[1], mu * (1.0 - u[0]**2) * u[1] - u[0]])
# N = 100
N = 500
t_span = (0., 200.)
u0 = [ 1., 10. ]
f = lambda t, u: f_vanderpol(t, u, mu=50.)
sol = solve_ivp(f, t_span, u0,method='BDF')
```
```python
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(sol.t, sol.y[0])
axes.set_title("Solution to Van der Pol Oscillator", fontsize=18)
axes.set_xlabel("t", fontsize=16)
axes.set_ylabel("y(t)", fontsize=16)
axes.grid()
axes = fig.add_subplot(1, 2, 2)
axes.plot(sol.y[0],sol.y[1],'r')
axes.set_title("Phase Diagram for Van der Pol Oscillator", fontsize=18)
axes.set_xlabel("y(t)", fontsize=16)
axes.set_ylabel("y'(t)", fontsize=16)
axes.grid()
plt.show()
```
## The Big Questions
Given a RHS $\mathbf{f}(t,\mathbf{u})$ and initial condition $\mathbf{u}(t_0) \ldots$
* How do you find a discrete numerical solution that approximates the trajectory $\mathbf{u}(t)$?
* How do you control the accuracy of the approximation?
* How do you improve the efficiency of the approximation?
* How do you understand stability and convergence?
## Some Notation: Basic Stepping schemes
Introducing some notation to simplify things
$$\begin{aligned}
t_0 &= 0 \\
t_1 &= t_0 + \Delta t \\
t_n &= t_{n-1} + \Delta t = n \Delta t + t_0 \\
u_0 &= u(t_0) \approx U_0 \\
u_1 &= u(t_1) \approx U_1 \\
u_n &= u(t_n) \approx U_2 \\
\end{aligned}$$
where lower-case letters are "exact".
Looking back at our work on numerical differentiation why not approximate the derivative as a finite difference:
$$
\frac{u(t + \Delta t) - u(t)}{\Delta t} = f(t, u)
$$
We still need to decide how to evaluate the $f(t, u)$ term however.
One obvious way to do this, is to just use $f(t, u(t))$ and write the update scheme as
$$
u(t + \Delta t) = u(t) + \Delta t f(t,u(t))
$$
Which is our first integration scheme (which goes by the name Euler's method). As usual, though the first scheme is often the worst scheme, but with a bit of understanding we can do much better with not a lot more work.
### Integral form of ODE IVP's: the Relationship to quadrature
Euler's method is an example of a "Single Step, multi-stage" scheme of which there are many. However, to derive them it is actually more instructive to work with the integral form of an ODE, which will put the equations in a form where we can use our ideas from quadrature to make progress
Given a a system of ODE's
$$
\frac{d\mathbf{u}}{dt} = \mathbf{f}(t,\mathbf{u})
$$
We can integrate both sides
$$
\int^{t + \Delta t}_t \frac{d\mathbf{u}}{d \tau} d\tau = \int^{t + \Delta t}_t \mathbf{f}(\tau, \mathbf{u}) d\tau
$$
which is equivalent to the differential form. However using the fundamental theorem of calculus tells us that the LHS is $u(t + \Delta t) - u(t)$ so we can write the ODE as
$$
\mathbf{u}(t + \Delta t) = \mathbf{u}(t) + \int^{t + \Delta t}_t \mathbf{f}(\tau, \mathbf{u}(\tau)) d\tau
$$
## Single-Step Multi-Stage Schemes
The integral form of an ODE initial value problem can be written
$$
u(t + \Delta t) = u(t) + \int^{t + \Delta t}_t f(\tau, u(\tau)) d\tau
$$
Which says that our solution $u$, if it exists at some time $\Delta t$ in the future, is $u(t)$ plus a *number*
$$
K = \int^{t + \Delta t}_t f(\tau, u(\tau) )d\tau
$$
which is a definite *line integral* (along an unknown solution).
An important class of ODE solvers are called *Single Step, Multi-stage schemes* which can be most easily understood as extensions of the Newton-Cotes quadrature schemes for approximating $K$ (plus an error term that will scale as $\Delta t^p$)
### The Geometric picture
```python
t = numpy.linspace(0., 4800, 11)
y = numpy.linspace(0., 1.2, 11)
T, Y = numpy.meshgrid(t,y)
dt = numpy.ones(T.shape)
dy = -dt*Y
tK = 2000.
uK = numpy.exp(decay_constant*tK)
K = uK -1.
tp = numpy.linspace(0., 4800, 100)
tk = numpy.linspace(0., tK, 100)
fig = plt.figure(figsize=(10,8))
axes = fig.add_subplot(1, 1, 1)
axes.quiver(T,Y, dt,dy, color='gray')
axes.plot(tp,numpy.exp(decay_constant*tp))
axes.plot(0.,1.,'ro')
axes.plot(tk,numpy.exp(decay_constant*tk),'r--')
axes.plot(tK, uK, 'ro')
axes.plot([0.,0.], [1., uK], 'r--')
axes.text(10., 0.72, '$K$', fontsize=24, color='red')
axes.plot([0.,tK],[uK, uK], 'r--')
axes.text(900., uK - .1, '$\Delta t$', fontsize=24, color='red')
axes.text(-10, 1., '$U_0$', fontsize=24, color='blue')
axes.text(tK+10, uK, '$U_1$', fontsize=24, color='blue')
axes.grid()
axes.set_title("Direction Set, $u' = - \lambda u$, $u(0)=1$", fontsize=18)
axes.set_xlabel('t (years)', fontsize=16)
axes.set_ylabel('u', fontsize=16)
axes.set_ylim((-.1,1.2))
plt.show()
```
#### Forward Euler scheme
For example, if we approximate $K$ with a left-sided quadrature rule
$$
K = \int^{t + \Delta t}_t f(\tau, u(\tau)) d\tau \approx \Delta t f(t, u(t))
$$
then our first ODE algorithm can be written
$$
u(t + \Delta t) = u(t) + \Delta t f(t, u(t))
$$
Which is exactly Euler's method that we derived previously
in terms of our discrete approximation $U$
$$
\begin{align}
K_1 &= \Delta t f(t_n, U_n)\\
U_{n+1} &= U_n + K_1\\
\end{align}
$$
known as the *forward Euler method*. In essence we are approximating the derivative with the value of the function at the point we are at $t_n$.
```python
t = numpy.linspace(0.0, 1.6e3, 100)
c_0 = 1.0
decay_constant = -numpy.log(2.0) / 1600.0
# Euler step
dt = 1e3
u_np = c_0 + dt * (decay_constant * c_0)
```
```python
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, c_0 * numpy.exp(decay_constant * t), label="True Solution")
axes.plot(0., 1., 'ro')
axes.text(0., 1.01, '$U_0$', fontsize=16)
axes.plot((0.0, dt), (c_0, u_np), 'k')
axes.plot((dt, dt), (u_np, c_0 * numpy.exp(decay_constant * dt)), 'k--')
axes.plot((0.0, 0.0), (c_0, u_np), 'k--')
axes.text(10., 0.75, '$K_1$', fontsize=16)
axes.plot((0.0, dt), (u_np, u_np), 'k--')
axes.text(400, u_np - 0.05, '$\Delta t$', fontsize=16)
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel('t (years)')
axes.set_ylabel('$c$')
axes.set_xlim(-1e2, 1.6e3)
axes.set_ylim((0.5,1.05))
axes.grid()
plt.show()
```
```python
# Implement Forward Euler
def euler(f, t_span, u0, N):
""" simple implementation of constant step-size forward euler method
This doc string should have so much more in it
"""
t = numpy.linspace(t_span[0], t_span[1],N)
u = numpy.empty(t.shape)
u[0] = u0
delta_t = t[1] - t[0]
for (n, t_n) in enumerate(t[:-1]):
K1 = delta_t * f(t_n, u[n])
u[n + 1] = u[n] + K1
return t, u
```
```python
decay_constant = -numpy.log(2.0) / 1600.0
f = lambda t, u: decay_constant * u
t_span = [0.0, 1.6e3]
u0 = 1.
N = 40
t_euler, u_euler = euler(f, t_span, u0, N)
```
```python
t_exact = numpy.linspace(0.0, 1.6e3, 100)
u_exact = lambda t : c_0 * numpy.exp(decay_constant * t)
fig = plt.figure(figsize=(16, 6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(t_euler, u_euler, 'or', label="Euler")
axes.plot(t_exact, u_exact(t_exact), 'k--', label="True Solution")
axes.set_title("Forward Euler")
axes.set_xlabel("t (years)")
axes.set_ylabel("$c(t)$")
axes.set_ylim((0.4,1.1))
axes.grid()
axes.legend()
abs_err = numpy.abs(u_euler - u_exact(t_euler))
rel_err = abs_err/u_exact(t_euler)
axes = fig.add_subplot(1, 2, 2)
axes.plot(t_euler,abs_err,'ro',label='absolute error')
axes.plot(t_euler,rel_err,'bo',label='relative error')
axes.set_xlabel("t (years)")
axes.set_ylabel("error")
axes.set_title('Error')
axes.legend(loc='best')
axes.grid()
plt.show()
```
### Backward Euler
Similar to forward Euler is the *backward Euler* method, which uses a right-rectangle rule to estimate $K$ using $f$ at a future time, i.e.
$$
K\approx \Delta t f(t_{n+1}, U_{n+1})
$$
However, the update scheme now becomes
$$
U_{n+1} = U_n + \Delta t f(t_{n+1}, U_{n+1}).
$$
which requires a (usually non-linear) solve for $U_{n+1}$. Schemes where the function $f$ is evaluated at the unknown time are called *implicit methods*.
For some cases we can solve the equation by hand. For instance in the case of our example problem, $f=\lambda U$, we have:
$$
U_{n+1} = U_n + \Delta t f(t_{n+1}, U_{n+1}) = U_n + \Delta t (\lambda U_{n+1})
$$
which can be solved for $U_{n+1}$ to find
$$\begin{aligned}
U_{n+1} &= U_n + \Delta t (\lambda U_{n+1}) \\
U_{n+1} \left[ 1 - \Delta t \lambda \right ] &= U_n \\
U_{n+1} &= \frac{U_n}{1 - \Delta t \lambda}
\end{aligned}$$
```python
t = numpy.linspace(0.0, 1.6e3, 100)
c_0 = 1.0
decay_constant = -numpy.log(2.0) / 1600.0
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, c_0 * numpy.exp(decay_constant * t), label="True Solution")
# Plot Backwards Euler step
dt = 1e3
u_np = c_0 + dt * (decay_constant * c_0 * numpy.exp(decay_constant * dt))
axes.plot((0.0, dt), (c_0, u_np), 'k')
axes.plot(dt, u_np, 'ro')
axes.text(dt+ 10., u_np, '$U_1$', fontsize=16)
axes.plot((0.0, 0.0), (c_0, u_np), 'k--')
axes.plot((0.0, dt), (u_np, u_np), 'k--')
axes.text(400, u_np - 0.05, '$\Delta t$', fontsize=16)
axes.text(10., 0.85, '$K_1$', fontsize=16)
axes.grid()
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel('t (years)')
axes.set_ylabel('$c$')
axes.set_xlim(-1e2, 1.6e3)
axes.set_ylim((0.5,1.05))
plt.show()
```
```python
c_0 = 1.0
decay_constant = -numpy.log(2.0) / 1600.0
f = lambda t, u: decay_constant * u
n_steps = 20
```
```python
t_exact = numpy.linspace(0.0, 1.6e3, 100)
# Implement backwards Euler
t_backwards = numpy.linspace(0.0, 1.6e3, n_steps)
delta_t = t_backwards[1] - t_backwards[0]
u_backwards = numpy.empty(t_backwards.shape)
u_backwards[0] = c_0
for n in range(0, t_backwards.shape[0] - 1):
u_backwards[n + 1] = u_backwards[n] / (1.0 - decay_constant * delta_t)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(t_backwards, u_backwards, 'or', label="Backwards Euler")
axes.plot(t_exact, u_exact(t_exact), 'k--', label="True Solution")
axes.grid()
axes.set_title("Backwards Euler")
axes.set_xlabel("t (years)")
axes.set_ylabel("$c(t)$")
axes.set_ylim((0.4,1.1))
axes.legend()
abs_err = numpy.abs(u_backwards - u_exact(t_backwards))
rel_err = abs_err/u_exact(t_backwards)
axes = fig.add_subplot(1, 2, 2)
axes.plot(t_backwards,abs_err,'ro',label='absolute error')
axes.plot(t_backwards,rel_err,'bo',label='relative error')
axes.set_xlabel("t (years)")
axes.set_ylabel("error")
axes.set_title('Error')
axes.legend(loc='best')
axes.grid()
plt.show()
```
It's also useful to be able to do this in the case of systems of ODEs. Let $f(U) = A U$, then
$$\begin{aligned}
U_{n+1} &= U_n + \Delta t (A U_{n+1}) \\
\left [ I - \Delta t A \right ]U_{n+1} &= U_n \\
U_{n+1} &= \left [ I - \Delta t A \right]^{-1} U_n
\end{aligned}$$
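For a linear system this update can be coded directly; below is a small sketch (the matrix `A_sys`, the state `U_n`, and the step size are illustrative values, not from the lecture), where we solve the linear system rather than forming the inverse explicitly:
```python
import numpy

# Sketch: one backward Euler step for the linear system u' = A u
A_sys = numpy.array([[-1.0,  1.0],
                     [ 0.0, -2.0]])
U_n = numpy.array([1.0, 1.0])
delta_t = 0.1

# U_{n+1} = [I - delta_t A]^{-1} U_n, computed with a linear solve
U_np1 = numpy.linalg.solve(numpy.eye(2) - delta_t * A_sys, U_n)
print(U_np1)
```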
In general however we are often not able to do this with arbitrary $f$.
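When no closed form is available, each implicit step has to be handed to a root finder. Here is a minimal sketch of a single backward Euler step for a nonlinear right-hand side, using `scipy.optimize.brentq`; the ODE $u' = -u^3$ and all the values are illustrative assumptions:
```python
from scipy.optimize import brentq

# Sketch: one backward Euler step for the nonlinear ODE u' = -u^3
f_nl = lambda t, u: -u**3
u_n, t_np1, delta_t = 1.0, 0.1, 0.1

# solve the implicit equation  U - u_n - delta_t * f(t_{n+1}, U) = 0
g = lambda U: U - u_n - delta_t * f_nl(t_np1, U)
U_np1 = brentq(g, 0.0, u_n)
print(U_np1)
```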
Another simple implicit method is based on quadrature using the trapezoidal method. The scheme is
$$
\frac{U_{n+1} - U_{n}}{\Delta t} = \frac{1}{2} (f(U_n) + f(U_{n+1}))
$$
In this case what is the update scheme for $f(u) = \lambda u$?
$$\begin{aligned}
U_{n+1} &= U_{n} + \frac{\Delta t}{2} (f(U_n) + f(U_{n+1})) \\
U_{n+1} &= U_{n} + \frac{\Delta t}{2} (\lambda U_n + \lambda U_{n+1}) \\
U_{n+1} \left[1 - \frac{\Delta t \lambda}{2} \right] &= U_{n} \left[1 + \frac{\Delta t \lambda}{2} \right] \\
U_{n+1} &= U_{n} \frac{1 + \frac{\Delta t \lambda}{2}}{1 - \frac{\Delta t \lambda}{2}} \\
\end{aligned}$$
```python
n_steps = 20
```
```python
c_0 = 1.0
decay_constant = -numpy.log(2.0) / 1600.0
t_exact = numpy.linspace(0.0, 1.6e3, 100)
# Implement trapezoidal method
t = numpy.linspace(0.0, 1.6e3, n_steps)
delta_t = t[1] - t[0]
u = numpy.empty(t.shape)
u[0] = c_0
integration_constant = (1.0 + decay_constant * delta_t / 2.0) / (1.0 - decay_constant * delta_t / 2.0)
for n in range(t.shape[0] - 1):
u[n + 1] = u[n] * integration_constant
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(t, u, 'or', label="Trapezoidal")
axes.plot(t_exact, u_exact(t_exact), 'k--', label="True Solution")
axes.grid()
axes.set_title("Trapezoidal")
axes.set_xlabel("t (years)")
axes.set_xlabel("$c(t)$")
axes.set_ylim((0.4,1.1))
axes.legend()
abs_err = numpy.abs(u - u_exact(t))
rel_err = abs_err/u_exact(t)
axes = fig.add_subplot(1, 2, 2)
axes.plot(t,abs_err,'ro',label='absolute error')
axes.plot(t,rel_err,'bo',label='relative error')
axes.set_xlabel("t (years)")
axes.set_ylabel("error")
axes.set_title('Error')
axes.legend(loc='best')
axes.grid()
plt.show()
```
## Error Analysis of ODE Methods
At this point it is also helpful to introduce more notation to distinguish between the true solution to the ODE $u(t_n)$ and the approximated value which we will denote $U_n$.
**Definition:** We define the *truncation error* of a scheme by replacing the $U_n$ with the true solution $u(t_n)$ in the finite difference formula and looking at the difference from the exact solution.
For example we will use the difference form of forward Euler
$$
\frac{U_{n+1} - U_n}{\Delta t} = f(t_n,U_n)
$$
and define the truncation error as
$$
T(t, u; \Delta t) = \frac{u(t_{n+1}) - u(t_n)}{\Delta t} - f(t_n, u(t_n)).
$$
**Definition:** A method is called *consistent* if
$$
\lim_{\Delta t \rightarrow 0} T(t, u; \Delta t) = 0.
$$
**Definition:** We say that a method is *order* $p$ accurate if
$$
\lVert T(t, u; \Delta t) \rVert \leq C \Delta t^p
$$
uniformly on $t \in [0, \tau]$. This can also be written as $T(t, u; \Delta t) = \mathcal{O}(\Delta t^p)$. Note that a method is consistent if $p > 0$.
### Error Analysis of Forward Euler
We can analyze the error and convergence order of forward Euler by considering the Taylor series centered at $t_n$:
$$
u(t) = u(t_n) + (t - t_n) u'(t_n) + \frac{u''(t_n)}{2} (t - t_n)^2 + \mathcal{O}((t-t_n)^3)
$$
Evaluating this series at $t_{n+1}$ gives
$$\begin{aligned}
u(t_{n+1}) &= u(t_n) + (t_{n+1} - t_n) u'(t_n) + \frac{u''(t_n)}{2} (t_{n+1} - t_n)^2 + \mathcal{O}((t_{n+1}-t_n)^3)\\
&=u_n + \Delta t f(t_n, u_n) + \frac{u''(t_n)}{2} \Delta t^2 + \mathcal{O}(\Delta t^3)
\end{aligned}$$
From this definition we can use our Taylor series expression to find the truncation error. Take the finite difference form of forward Euler
$$
\frac{U_{n+1} - U_n}{\Delta t} = f(t_n, U_n)
$$
and replace the discrete values $U_n$ with the true solution $u(t_n)$ to find
$$\begin{aligned}
T(t, u; \Delta t) &= \frac{u(t_{n+1}) - u(t_n)}{\Delta t} - f(t_n, u_n) \\
\end{aligned}$$
Given the Taylor series expansion for $u(t_{n+1})$
$$
u(t_{n+1}) =u(t_n) + \Delta t f(t_n, u_n) + \frac{u''(t_n)}{2} \Delta t^2 + \mathcal{O}(\Delta t^3)
$$
We substitute to find
$$
T(t, u; \Delta t) = \frac{u''(t_n)}{2} \Delta t + \mathcal{O}(\Delta t^2).
$$
This implies that forward Euler is first order accurate and therefore consistent.
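As a quick numerical sanity check (a sketch using the radioactive decay problem, whose exact solution we know), the truncation error evaluated from its definition should shrink roughly linearly with $\Delta t$:
```python
import numpy

# Sketch: check that the forward Euler truncation error is O(Δt)
# for u' = lam * u with exact solution u(t) = exp(lam * t)
lam = -numpy.log(2.0) / 1600.0
u_true = lambda t: numpy.exp(lam * t)
f_decay = lambda t, u: lam * u

t_n = 100.0
for trial_dt in [100.0, 10.0, 1.0]:
    T = (u_true(t_n + trial_dt) - u_true(t_n)) / trial_dt - f_decay(t_n, u_true(t_n))
    print(trial_dt, T)
```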
Another equivalent definition of the truncation error uses the form
$$
U_{n+1} = u(t_n) + \Delta t f(t_n, u(t_n))
$$
and the definition
$$
T(t, u; \Delta t) = \frac{1}{\Delta t} \left [ U_{n+1} - u(t_{n+1}) \right]
$$
to find
$$\begin{aligned}
T(t, u; \Delta t) &= \frac{1}{\Delta t} [U_{n+1} - u(t + \Delta t)] \\
&= \frac{1}{\Delta t} \left[ \underbrace{u_n + \Delta t f(t_n, u_n)}_{U_{n+1}} - \underbrace{\left( u_n + \Delta t f(t_n, u_n) + \frac{u''(t_n)}{2} \Delta t^2 + \mathcal{O}(\Delta t^3) \right )}_{u(t_{n+1})}\right ] \\
&= \frac{1}{\Delta t} \left[ - \frac{u''(t_n)}{2} \Delta t^2 - \mathcal{O}(\Delta t^3) \right ] \\
&= - \frac{u''(t_n)}{2} \Delta t - \mathcal{O}(\Delta t^2)
\end{aligned}$$
#### Truncation Error vs Step Error
Sometimes we will also consider the "Step Error" which is the error that is introduced over one step
$$
E_h = | U_{n+1} - u_{n+1} |
$$
This leads to an alternate definition of the truncation error as
$$
T(t,u;\Delta t) = \frac{E_h}{\Delta t} = \frac{1}{\Delta t} [U_{n+1} - u_{n+1}]
$$
so if the Truncation error is $O(\Delta t^p)$ then the step error will be order $O(\Delta t^{p+1})$
So for forward (or backward) Euler the step error is
$$
E_h = O(\Delta t^2)
$$
The step error can be very useful in *adaptive stepping* schemes
## Runge-Kutta Methods
One way to derive higher-order ODE solvers is to use higher order quadrature schemes that sample the function at a number of intermediate stages to provide a more accurate estimate of $K$. These are not *multi-step* methods as they still only require information from the current time step but they raise the order of accuracy by adding *stages*. These types of methods are called **Runge-Kutta** methods.
### Example: Two-stage Runge-Kutta Methods
The basic idea behind the simplest of the Runge-Kutta methods is to approximate $K$ using a mid-point scheme (which should be 2nd order accurate). Unfortunately, we don't know the value of the solution at the mid-point. However, we can use an Euler step of size $\Delta t/2$ to estimate it.
We can write the algorithm as
$$\begin{aligned}
K_1 &= \Delta t f(U_n, t_n) \\
K_2 &= \Delta t f(U_n + K_1/2, t_n + \Delta t/2 )\\
U_{n+1} &= U_n + K_2 \\
\end{aligned}$$
Where we now evaluate the function in two stages $K_1$ and $K_2$.
or for an autonomous ODE
$$
U_{n+1} = U_n + \Delta t f(U_n + \frac{1}{2} \Delta t f(U_n))
$$
```python
decay_constant = -numpy.log(2.0) / 1600.0
f = lambda t, u: decay_constant * u
# RK2 step
dt = 1e3
U0 = 1.0
K1 = dt * f(0., U0)
Y1 = U0 + K1/2
K2 = dt * f(dt/2., Y1)
U1 = U0 + K2
```
```python
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, U0 * numpy.exp(decay_constant * t), label="True Solution")
axes.plot(0., U0, 'ro')
axes.text(0., U0+.01, '$U_0$', fontsize=16)
axes.plot((0.0, dt), (U0, U0 + K1), 'k--')
axes.plot((0.0, dt/2.), (U0, U0 + K1/2.), 'k')
axes.plot(dt/2., U0 + K1/2, 'ro')
axes.plot((0.0, 0.0), (U0, Y1), 'k--')
axes.text(10., 0.85, '$\\frac{K_1}{2}$', fontsize=18)
axes.plot((0.0, dt/2), (Y1, Y1), 'k--')
axes.text(250, Y1 - 0.05, '$\\frac{\Delta t}{2}$', fontsize=18)
axes.plot(dt, U1, 'go')
axes.plot((0., dt), (U0, U1), 'k')
axes.text(dt+20, U1, '$U_1$', fontsize=18)
#axes.plot((0.0, 0.0), (U0, U1), 'g--')
#axes.plot((0.0, dt), (U1, U1), 'g--')
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel('t (years)')
axes.set_ylabel('$c$')
axes.set_xlim(-1e2, 1.6e3)
axes.set_ylim((0.5,1.05))
axes.grid()
plt.show()
```
#### Error analysis RK2
The truncation error can be computed similarly to before, but we do need to figure out how to expand the function evaluated at the intermediate stage. Note that since
$$
f(u(t_n)) = u'(t_n)
$$
differentiating this with respect to $t$ leads to
$$
f'(u(t_n)) u'(t_n) = u''(t_n)
$$
leading to
$$\begin{aligned}
f\left(u(t_n) + \frac{1}{2} \Delta t f(u(t_n)) \right ) &= f\left(u(t_n) +\frac{1}{2} \Delta t u'(t_n) \right ) \\
&= f(u(t_n)) + \frac{1}{2} \Delta t u'(t_n) f'(u(t_n)) + \frac{1}{8} \Delta t^2 (u'(t_n))^2 f''(u(t_n)) + \mathcal{O}(\Delta t^3) \\
&=u'(t_n) + \frac{1}{2} \Delta t u''(t_n) + \mathcal{O}(\Delta t^2)
\end{aligned}$$
Using our alternative definition of the truncation error we have
$$
T(t, u; \Delta t) = \frac{1}{\Delta t} \left[U_{n+1} - u_{n+1} \right]
$$
or
$$\begin{aligned}
T(t, u; \Delta t) &= \frac{1}{\Delta t} \left[u_n + \Delta t f\left(u_n + \frac{1}{2} \Delta t f(u_n)\right) - \left(u_n + \Delta t f(t_n, u_n) + \frac{u''(t_n)}{2} \Delta t^2 + \mathcal{O}(\Delta t^3) \right ) \right] \\
&=\frac{1}{\Delta t} \left[\Delta t u'(t_n) + \frac{1}{2} \Delta t^2 u''(t_n) + \mathcal{O}(\Delta t^3) - \Delta t u'(t_n) - \frac{u''(t_n)}{2} \Delta t^2 + \mathcal{O}(\Delta t^3) \right] \\
&= \mathcal{O}(\Delta t^2)
\end{aligned}$$
so this method is second order accurate.
### Example: Improved Euler's method
The improved Euler method is another two-stage Runge-Kutta scheme, but instead of approximating a mid-point quadrature rule, it approximates a trapezoidal rule.
We can write the algorithm as
$$\begin{aligned}
K_1 &= \Delta t f(U_n, t_n) \\
K_2 &= \Delta t f(U_n + K1, t_n + \Delta t )\\
U_{n+1} &= U_n + \frac{1}{2}\left[K_1 +K_2\right] \\
\end{aligned}$$
Where we now use function evaluations at both the initial point and at the Euler point, and take the average of those slopes.
Again, error analysis shows that this scheme also has a truncation error $T(t,u;\Delta t) = \mathcal{O}(\Delta t^2)$.
```python
decay_constant = -numpy.log(2.0) / 1600.0
f = lambda t, u: decay_constant * u
# Improved Euler step
dt = 1e3
U0 = 1.0
K1 = dt * f(0., U0)
Y1 = U0 + K1
K2 = dt * f(dt, Y1)
U1 = U0 + 0.5*(K1 + K2)
```
```python
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, U0 * numpy.exp(decay_constant * t), label="True Solution")
axes.plot(0., U0, 'ro')
axes.text(0., U0+.01, '$U_0$', fontsize=16)
axes.plot((0.0, dt), (U0, U0 + K1), 'k--')
axes.plot((0.0, dt), (U0, U0 + K1), 'k')
axes.plot(dt, Y1, 'ro')
axes.text(dt+10, Y1, '$Y_1$', fontsize=18)
axes.plot((0.0, 0.0), (U0, Y1), 'k--')
axes.plot((0.0, dt), (Y1, Y1), 'k--')
axes.text(350, Y1 - 0.05, '$\\frac{\Delta t}{2}$', fontsize=18)
axes.plot(dt, U1, 'go')
axes.plot((0.0, 0.0), (U0, Y1), 'k--')
axes.plot(0., Y1, 'ko')
axes.text(10., Y1, '$K_1$', fontsize=18)
axes.plot((0., 0.), (U0, U0+K2),'b--')
axes.plot(0., U0+K2,'bo--')
axes.text(10., U0+K2, '$K_2$', fontsize=18)
axes.plot(0., U1,'gx', markersize=15)
axes.text(10., U1, '$0.5*(K_1 +K_2)$', fontsize=18)
axes.plot(dt, U1, 'go')
axes.plot((0., dt), (U0, U1), 'k')
axes.text(dt+20, U1, '$U_1$', fontsize=18)
#axes.plot((0.0, 0.0), (U0, U1), 'g--')
#axes.plot((0.0, dt), (U1, U1), 'g--')
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel('t (years)')
axes.set_ylabel('$c$')
axes.set_xlim(-1e2, 1.6e3)
axes.set_ylim((0.5,1.05))
axes.grid()
plt.show()
```
### Example: 4-stage Runge-Kutta Method
If RK2 is related to a mid-point quadrature scheme, then the classic 4-stage, 4th order Runge-Kutta scheme should be reminiscent of Simpson's quadrature rule. It requires 4 samples of $f(t,u)$: one at the beginning of the step, two in the middle and one at the end, and then takes a linear combination of those samples
$$\begin{aligned}
K_1 &= \Delta t f(t_n, U_n) \\
K_2 &= \Delta t f(t_n + \Delta t/2, U_n + K_1/2) \\
K_3 &= \Delta t f(t_n + \Delta t/2, U_n + K_2/2) \\
K_4 &= \Delta t f(t_n + \Delta t, U_n + K_3) \\
& \\
U_{n+1} &= U_n + \frac{1}{6} \left [K_1 + 2(K_2 + K_3) + K_4) \right ]
\end{aligned}$$
With truncation error $T = O(\Delta t^4)$
```python
decay_constant = -numpy.log(2.0) / 1600.0
f = lambda t, u: decay_constant * u
# RK4 step
dt = 1e3
U0 = 1.0
K1 = dt * f(0., U0)
K2 = dt * f(dt/2., U0 + K1/2)
K3 = dt * f(dt/2., U0 + K2/2)
K4 = dt * f(dt, U0 + K3)
U1 = U0 + 1./6. *( K1 + 2 * (K2 + K3) + K4)
```
```python
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, U0 * numpy.exp(decay_constant * t), label="True Solution")
axes.plot(0., U0, 'ro')
axes.text(0.-20, U0-.04, '$K_1$', color='red',fontsize=16)
axes.text(0., U0+.01, '$U_0$', fontsize=16)
axes.plot((0.0, dt/2.), (U0, U0 + K1/2.), 'k--')
axes.plot(dt/2., U0 + K1/2, 'ro')
axes.text(dt/2-20, U0 + K1/2-.04, '$K_2$', color='red',fontsize=16)
axes.plot((0.0, dt/2.), (U0, U0 + K2/2.), 'k--')
axes.plot(dt/2., U0 + K2/2, 'ro')
axes.text(dt/2-20, U0 + K2/2+.02, '$K_3$', color='red',fontsize=16)
axes.plot((0.0, dt), (U0, U0 + K3), 'k--')
axes.plot(dt, U0 + K3, 'ro')
axes.text(dt-20, U0 + K3-.04, '$K_4$', color='red',fontsize=16)
axes.plot(dt, U1, 'go')
#axes.plot((0., dt), (U0, U1), 'k')
axes.text(dt+20, U1, '$U_1$', fontsize=18)
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel('t (years)')
axes.set_ylabel('$c$')
axes.set_xlim(-1e2, 1.6e3)
axes.set_ylim((0.5,1.05))
axes.grid()
plt.show()
```
```python
def RK2(f, t_span, u0, N):
""" implement constant step size 2 stage Runge-Kutta Method RK2"""
t = numpy.linspace(t_span[0], t_span[1], N)
delta_t = t[1] - t[0]
u = numpy.empty(t.shape)
u[0] = u0
for (n, t_n) in enumerate(t[:-1]):
K_1 = delta_t * f(t_n, u[n])
K_2 = delta_t * f(t_n + delta_t/2., u[n] + K_1/2.)
u[n+1] = u[n] + K_2
return t, u
def improved_euler(f, t_span, u0, N):
""" implement constant step size 2 stage Improved Euler Method trapezoidal rule"""
t = numpy.linspace(t_span[0], t_span[1], N)
delta_t = t[1] - t[0]
u = numpy.empty(t.shape)
u[0] = u0
for (n, t_n) in enumerate(t[:-1]):
K_1 = delta_t * f(t_n, u[n])
K_2 = delta_t * f(t_n + delta_t, u[n] + K_1)
u[n+1] = u[n] + 0.5 * (K_1 + K_2)
return t, u
```
```python
def RK4(f, t_span, u0, N):
""" implement constant step size 4 stage Runge-Kutta Method RK4"""
t = numpy.linspace(t_span[0], t_span[1], N)
delta_t = t[1] - t[0]
u = numpy.empty(t.shape)
u[0] = u0
for (n, t_n) in enumerate(t[:-1]):
K_1 = delta_t * f(t_n, u[n])
K_2 = delta_t * f(t_n + delta_t/2., u[n] + K_1/2.)
K_3 = delta_t * f(t_n + delta_t/2., u[n] + K_2/2.)
K_4 = delta_t * f(t_n + delta_t, u[n] + K_3)
u[n+1] = u[n] + 1./6. * (K_1 + 2.*( K_2 + K_3) + K_4)
return t, u
```
```python
# Implement and compare the two-stage and 4-stage Runge-Kutta methods
f = lambda t, u: -u
N = 20
t_span = [ 0., 5.0 ]
u0 = 1.
u_exact = lambda t: u0*numpy.exp(-t)
t_exact = numpy.linspace(t_span[0], t_span[1], 100)
t_euler, u_euler = euler(f, t_span, u0, N)
t_ieuler, u_ieuler = improved_euler(f, t_span, u0, N)
t_RK2, u_RK2 = RK2(f, t_span, u0, N)
t_RK4, u_RK4 = RK4(f, t_span, u0, N)
```
```python
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(t_exact,u_exact(t_exact),'k',label='exact')
axes.plot(t_euler, u_euler, 'ro', label='euler')
axes.plot(t_ieuler, u_ieuler, 'co', label='improved euler')
axes.plot(t_RK2, u_RK2, 'go', label='RK2')
axes.plot(t_RK4, u_RK4, 'bo', label='RK4')
axes.grid()
axes.set_xlabel('t', fontsize=16)
axes.set_ylabel('u', fontsize=16)
axes.legend(loc='best')
err = lambda u, t: numpy.abs(u - u_exact(t))/u_exact(t)
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(t_euler,err(u_euler,t_euler),'ro',label='euler')
axes.semilogy(t_ieuler,err(u_ieuler,t_ieuler),'co',label='improved euler')
axes.semilogy(t_RK2,err(u_RK2,t_RK2),'go',label='RK2')
axes.semilogy(t_RK4,err(u_RK4,t_RK4),'bo',label='RK4')
axes.set_xlabel("t (years)")
axes.set_ylabel("Rel. error")
axes.set_title('Error')
axes.legend(loc='best')
axes.grid()
plt.show()
```
### Convergence of Single Step Multi-Stage schemes
All of the above schemes are consistent and have truncation errors $T\propto\Delta t^p$
```python
N = numpy.array([ 2**n for n in range(4,10)])
err_euler = numpy.zeros(len(N))
err_ieuler = numpy.zeros(len(N))
err_RK2 = numpy.zeros(len(N))
err_RK4 = numpy.zeros(len(N))
t_span = [ 0., 4.]
dt = t_span[1]/N
u0 = 1.
u_exact = u0*numpy.exp(-t_span[1])
for i, n in enumerate(N):
t, u_euler = euler(f, t_span, u0, n)
err_euler[i] = numpy.abs(u_euler[-1] - u_exact)
t, u_ieuler = improved_euler(f, t_span, u0, n)
err_ieuler[i] = numpy.abs(u_ieuler[-1] - u_exact)
t, u_RK2 = RK2(f, t_span, u0, n)
err_RK2[i] = numpy.abs(u_RK2[-1] - u_exact)
t, u_RK4 = RK4(f, t_span, u0, n)
err_RK4[i] = numpy.abs(u_RK4[-1] - u_exact)
err_fit = lambda dt, p: numpy.exp(p[1])*dt**p[0]
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
# Euler
p = numpy.polyfit(numpy.log(dt[2:]), numpy.log(err_euler[2:]),1)
line = axes.loglog(dt, err_euler, 'o', label='euler, p={:3.2f}'.format(p[0]))
axes.loglog(dt, err_fit(dt,p),'--', color=line[0].get_color())
# Improved Euler
p = numpy.polyfit(numpy.log(dt[2:]), numpy.log(err_ieuler[2:]),1)
line = axes.loglog(dt, err_ieuler, 'o', label='improved euler, p={:3.2f}'.format(p[0]))
axes.loglog(dt, err_fit(dt,p),'--', color=line[0].get_color())
# RK2
p = numpy.polyfit(numpy.log(dt[2:]), numpy.log(err_RK2[2:]),1)
line = axes.loglog(dt, err_RK2, 'o', label='rk2, p={:3.2f}'.format(p[0]))
axes.loglog(dt, err_fit(dt,p),'--', color=line[0].get_color())
#RK4
p = numpy.polyfit(numpy.log(dt[2:]), numpy.log(err_RK4[2:]),1)
line = axes.loglog(dt, err_RK4, 'o', label='rk4, p={:3.2f}'.format(p[0]))
axes.loglog(dt, err_fit(dt,p),'--', color=line[0].get_color())
axes.grid()
axes.set_xlabel('$\Delta t$', fontsize=16)
axes.set_ylabel('$Error$', fontsize=16)
axes.set_title('Convergence: Single Step Schemes', fontsize=18)
axes.legend(loc='best', fontsize=14)
plt.show()
```
## Summary: Single-Step Multi-Stage Schemes
The integral form of an ODE initial value problem can be written
$$
u(t + \Delta t) = u(t) + \int^{t + \Delta t}_t f(\tilde{t}, u(\tilde{t}))\, d\tilde{t}
$$
Which says that our solution $u$, if it exists at some time $\Delta t$ in the future, is $u(t)$ plus a *number*
$$
K = \int^{t + \Delta t}_t f(\tilde{t}, u(\tilde{t}))\, d\tilde{t}
$$
which is a definite *line integral* (along an unknown solution).
#### Single Step, Multi-stage schemes
are most easily understood as extensions of the Newton-Cotes quadrature schemes for approximating $K$ (plus an error term that will scale as $\Delta t^p$)
**Explicit Schemes**
<table width="80%">
<tr align="center"><th>Name</th> <th align="center">Stages</th> <th align="center">"Quadrature"</th><th align="center">$$T$$</th></tr>
<tr align="center"><td>Euler</td> <td align="center">1</td> <td align="center">Left-Rectangle</td><td align="center">$$O(\Delta t)$$</td></tr>
<tr align="center"><td>Improved Euler</td> <td align="center">2</td> <td align="center">Trapezoidal</td><td align="center">$$O(\Delta t^2)$$</td></tr>
<tr align="center"><td>RK2</td> <td align="center">2</td> <td align="center">Mid-Point</td><td align="center">$$O(\Delta t^2)$$</td></tr>
<tr align="center"><td>RK4</td> <td align="center">4</td> <td align="center">Simpson</td><td align="center">$$O(\Delta t^4)$$</td></tr>
</table>
**Implicit Schemes**
<table width="80%">
<tr align="center"><th>Name</th> <th align="center">Stages</th> <th align="center">"Quadrature"</th><th align="center">$$T$$</th></tr>
<tr align="center"><td>Backwards-Euler</td> <td align="center">1</td> <td align="center">Right-Rectangle</td><td align="center">$$O(\Delta t)$$</td></tr>
<tr align="center"><td>Trapezoidal</td> <td align="center">2</td> <td align="center">Trapezoidal</td><td align="center">$$O(\Delta t^2)$$</td></tr>
</table>
## Adaptive Time Stepping
#### Why should we care about all of these schemes and their errors?
* Even though we know the formal error, it is with respect to a true solution that we don't know.
* By themselves, the error estimates don't tell us how to choose a time step $\Delta t$ to keep the solution within a given tolerance.
* However, in combination, we can use multiple methods to control the error and provide **adaptive** time stepping.
#### Example: Compare 1 step of Euler to one step of RK2
```python
decay_constant = -numpy.log(2.0) / 1600.0
f = lambda t, u: decay_constant * u
# RK2 step
dt = 1e3
U0 = 1.0
K1 = dt * f(0., U0)
Y1 = U0 + K1/2
K2 = dt * f(dt/2., Y1)
U1 = U0 + K2
t = numpy.linspace(0., 1600.)
u_true = U0 * numpy.exp(decay_constant * t)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, u_true, label="True Solution")
axes.plot(0., U0, 'ro')
axes.text(0., U0+.01, '$U_0$', fontsize=16)
axes.plot((0.0, dt), (U0, U0 + K1), 'k')
#axes.plot((0.0, dt/2.), (U0, U0 + K1/2.), 'k')
#Euler step
axes.plot(dt, U0 + K1, 'ro')
#axes.plot((0.0, 0.0), (U0, Y1), 'k--')
axes.text(dt + 10., U0 + K1, '$U_{euler}$', fontsize=18)
axes.plot(dt, U1, 'go')
#axes.plot((0.0, 0.0), (U0, Y1), 'k--')
#axes.text(10., 0.85, '$\\frac{K_1}{2}$', fontsize=18)
#axes.plot((0.0, dt/2), (Y1, Y1), 'k--')
#axes.text(250, Y1 - 0.05, '$\\frac{\Delta t}{2}$', fontsize=18)
# RK2 Step
axes.plot(dt, U1, 'go')
axes.plot((0., dt), (U0, U1), 'k')
axes.text(dt+20, U1, '$U_{RK2}$', fontsize=18)
#axes.plot((0.0, 0.0), (U0, U1), 'g--')
#axes.plot((0.0, dt), (U1, U1), 'g--')
# difference
axes.plot((dt, dt), (U1, U0+K1),'k--')
axes.text(dt+40, 0.5*(U1 + U0+K1), '$\Delta\propto\Delta t^2$', fontsize=18)
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel('t (years)')
axes.set_ylabel('$c$')
axes.set_xlim(-1e2, 1.6e3)
axes.set_ylim((0.5,1.05))
axes.legend(loc='best')
axes.grid()
plt.show()
```
#### Relative Truncation Error
If we consider the *Step Error* for each of our schemes, we know that
$$
\begin{align}
u_{n+1} &= U^{euler}_{n+1} + O(\Delta t^2)\\
u_{n+1} &= U^{RK2}_{n+1} + O(\Delta t^3)\\
\end{align}
$$
Therefore we can compute the *relative truncation error* as
$$
\Delta = | U^{euler}_{n+1} - U^{RK2}_{n+1} | = O(\Delta t^{?})
$$
* $\Delta$ is computable! (see the quick check below)
* $\Delta$ has a known dependence on the step size $\Delta t$
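For the single Euler and RK2 steps computed in the cell above we can evaluate $\Delta$ directly; this quick check just reuses `U0`, `K1` and `U1` from that cell:
```python
# relative truncation error between the Euler and RK2 estimates from the cell above
U_euler = U0 + K1
Delta = numpy.abs(U_euler - U1)
print(Delta)
```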
### Adaptive Time Stepping
Given the relative truncation error and its scaling with $\Delta t$, we can now use this to choose a single good time step.
#### Example:
Suppose we want our relative truncation error to be small relative to the solution (or bounded absolutely when the solution is near zero); we could set
$$
\Delta_{target} = \mathtt{rtol}\,U^{RK2}_{n+1} + \mathtt{atol}
$$
where $\mathtt{rtol}$ and $\mathtt{atol}$ are relative and absolute tolerances (and we assume that $U^{RK2}_{n+1}$ is a reasonably good estimate of the true solution)
Moreover, we know how the relative truncation error should scale with time step, i.e.
$$
\Delta_{target} \propto \Delta t_{target}^2
$$
But our measured relative truncation error $\Delta$ depends on the step size we just took, i.e.
$$
\Delta_{measured} \propto \Delta t_n^2
$$
### Adaptive Time Stepping
If we take the ratio of both relationships we get
$$
\frac{\Delta_{target}}{\Delta_{measured}} = \left[\frac{\Delta t_{target}}{\Delta t_{n}}\right]^2
$$
or rearranging, our target step size is
$$
\Delta t_{target} = \Delta t_{n}\left[\frac{\Delta_{target}}{\Delta_{measured}}\right]^{\frac{1}{2}}
$$
which tells us how to grow or shrink our time step to maintain accuracy.
In general, if we have two methods with different step errors such that
$$
\Delta \propto \Delta t^p
$$
then our adaptive stepper will look like
$$
\Delta t_{target} = \Delta t_{n}\left[\frac{\Delta_{target}}{\Delta_{measured}}\right]^{1/p}
$$
This leads to all sorts of adaptive schemes, most of which are included in standard libraries.
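As a minimal sketch of how this might look in code (not how production libraries implement it), here is an adaptive stepper built from the Euler/RK2 pair above; the tolerances, the safety factor, and the small floor on $\Delta_{measured}$ are illustrative choices:
```python
def adaptive_euler_rk2(f, t_span, u0, dt0, rtol=1e-4, atol=1e-8, safety=0.9):
    """Sketch of an adaptive stepper using an Euler/RK2 pair (so p = 2)."""
    t, u = [t_span[0]], [u0]
    dt = dt0
    while t[-1] < t_span[1]:
        dt = min(dt, t_span[1] - t[-1])
        K1 = dt * f(t[-1], u[-1])
        K2 = dt * f(t[-1] + dt / 2., u[-1] + K1 / 2.)
        u_euler, u_rk2 = u[-1] + K1, u[-1] + K2
        delta = numpy.abs(u_euler - u_rk2)
        delta_target = rtol * numpy.abs(u_rk2) + atol
        if delta <= delta_target:
            # accept the (more accurate) RK2 value; otherwise retry with a smaller step
            t.append(t[-1] + dt)
            u.append(u_rk2)
        # grow or shrink the step: dt_target = dt * (target / measured)^(1/2)
        dt *= safety * (delta_target / max(delta, 1e-16))**0.5
    return numpy.array(t), numpy.array(u)

t_adapt, u_adapt = adaptive_euler_rk2(lambda t, u: -u, (0., 5.), 1., 0.1)
print(len(t_adapt), numpy.abs(u_adapt[-1] - numpy.exp(-5.)))
```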
### Embedded Runge-Kutta Schemes
There is in fact a whole family of **Embedded RK** schemes: $N$-stage schemes that combine the $N$ function evaluations in two different ways to produce methods with different error estimates.
A popular one is **RK45** (available in `SciPy`) which is based on the [Dormand-Prince 5(4)](https://doi.org/10.1016/0771-050X(80)90013-3) pair which uses 6 function evaluations per step to produce a 4th order and 5th order scheme. The 4th order scheme controls the time step, and the 5th order scheme actually is the solution.
```python
from scipy.integrate import solve_ivp
def f_vanderpol(t, u, mu=5):
return numpy.array([u[1], mu * (1.0 - u[0]**2) * u[1] - u[0]])
t_span = (0., 50.)
u0 = [ 1., 0. ]
f = lambda t, u : f_vanderpol(t, u, mu=20)
sol = solve_ivp(f, t_span, u0, method='RK45',rtol=1.e-3,atol=1.e-8)
```
```python
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(sol.t, sol.y[0],'o-')
axes.set_title("Solution to Van der Pol Oscillator", fontsize=18)
axes.set_xlabel("t", fontsize=16)
axes.set_ylabel("y(t)", fontsize=16)
axes.grid()
axes = fig.add_subplot(1, 2, 2)
delta_t = sol.t[1:] - sol.t[:-1]
axes.plot(sol.t[:-1], delta_t)
axes.grid()
axes.set_xlabel('$t$', fontsize=16)
axes.set_ylabel('$\Delta t$', fontsize=16)
axes.set_title('Timesteps, N = {}'.format(len(sol.t)), fontsize=18)
plt.show()
```
## Taylor Series Methods
A **Taylor series method** can be derived by direct substitution of the right-hand-side function $f(t, u)$ and its appropriate derivatives into the Taylor series expansion for $u(t_{n+1})$. For a $p$th order method we would look at the Taylor series up to that order and replace all the derivatives of $u$ with derivatives of $f$ instead.
For the general case we have
$$\begin{align*}
u(t_{n+1}) = u(t_n) + \Delta t u'(t_n) + \frac{\Delta t^2}{2} u''(t_n) + \frac{\Delta t^3}{6} u'''(t_n) + \cdots + \frac{\Delta t^p}{p!} u^{(p)}(t_n)
\end{align*}$$
which contains derivatives of $u$ up to $p$th order.
We then replace these derivatives with the appropriate derivative of $f$, which will always be of one order lower than the corresponding derivative of $u$ (due to the original ODE)
$$
u^{(p)}(t_n) = f^{(p-1)}(t_n, u(t_n))
$$
leading to the method
$$
\begin{align}
u(t_{n+1}) &= u(t_n) + \Delta t f(t_n, u(t_n)) + \frac{\Delta t^2}{2} f'(t_n, u(t_n)) \\
&+ \frac{\Delta t^3}{6} f''(t_n, u(t_n)) + \cdots + \frac{\Delta t^p}{p!} f^{(p-1)}(t_n, u(t_n)).
\end{align}
$$
### 2nd Order Taylor Series Method
We want terms up to second order so we need to take the derivative of $u' = f(t, u)$ once to find $u'' = f'(t, u)$ and therefore
$$\begin{align*}
u(t_{n+1}) &= u(t_n) + \Delta t u'(t_n) + \frac{\Delta t^2}{2} u''(t_n) \\
&=u(t_n) + \Delta t f(t_n, u(t_n)) + \frac{\Delta t^2}{2} f'(t_n, u(t_n)) ~~~ \text{or} \\
U_{n+1} &= U_n + \Delta t f(t_n, U_n) + \frac{\Delta t^2}{2} f'(t_n, U_n).
\end{align*}$$
With step error $O(\Delta t^3)$ and truncation error $T = O(\Delta t^2)$.
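A generic implementation needs to be handed both $f$ and its total derivative, which already hints at the main drawback of Taylor series methods. Below is a minimal sketch (the names `taylor2` and `df` are mine, not from the lecture):
```python
def taylor2(f, df, t_span, u0, N):
    """Sketch of a constant step-size 2nd order Taylor series method.

    df(t, u) must return the *total* derivative f' = f_t + f_u f at (t, u);
    both f and df have to be supplied by the user.
    """
    t = numpy.linspace(t_span[0], t_span[1], N)
    delta_t = t[1] - t[0]
    u = numpy.empty(t.shape)
    u[0] = u0
    for n, t_n in enumerate(t[:-1]):
        u[n + 1] = u[n] + delta_t * f(t_n, u[n]) + 0.5 * delta_t**2 * df(t_n, u[n])
    return t, u

# e.g. for u' = -t u the total derivative is f' = (t^2 - 1) u
t_t2, u_t2 = taylor2(lambda t, u: -t * u,
                     lambda t, u: (t**2 - 1.0) * u,
                     [0., 3.], 1., 31)
```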
### Example
Let's use our simplest problem $u'(t) = \lambda u$ with $f=\lambda u$. Therefore
$$\begin{align*}
f(t,u) &= \lambda u\\
f'(t,u) &= \lambda u' = \lambda f = \lambda^2 u\\
f''(t,u) &= \lambda^2 u' = \lambda^2 f = \lambda^3 u
\end{align*}$$
so a third order scheme would look like
$$
\begin{align}
U(t_{n+1}) &= U(t_n)\left[ 1 + \lambda\Delta t + \frac{(\lambda\Delta t)^2}{2} + \frac{(\lambda\Delta t)^3}{6}\right]+ O(\Delta t^4)
\end{align}
$$
```python
def Taylor_3_flambda_u(lamda, t_span, u0, N):
""" implement constant step size 3rd order Taylor Series method for f(t,u) = \lambda u"""
t = numpy.linspace(t_span[0], t_span[1], N)
lambda_dt = lamda*(t[1] - t[0])
u = numpy.empty(t.shape)
u[0] = u0
for (n, t_n) in enumerate(t[:-1]):
u[n+1] = u[n] * ( 1. + lambda_dt + (lambda_dt**2)/2. + (lambda_dt**3)/6.)
return t, u
```
```python
lam = -1.
t_span = [0., 5.]
u0 = 1.
f = lambda t,u : -u
t_exact = numpy.linspace(t_span[0], t_span[1], 100)
u_exact = u0*numpy.exp(-t_exact)
N = 20
t_taylor, u_taylor = Taylor_3_flambda_u(lam, t_span, u0, N)
t_euler, u_euler = euler(f, t_span, u0, N)
```
```python
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_exact,u_exact,'k',label='exact')
axes.plot(t_euler, u_euler, 'ro', label='euler')
axes.plot(t_taylor, u_taylor, 'bo', label='Taylor3')
axes.grid()
axes.set_xlabel('t', fontsize=16)
axes.set_ylabel('u', fontsize=16)
axes.legend(loc='best')
plt.show()
```
### Some Drawbacks
**Taylor Series methods**
- require differentiating the given equation which can be cumbersome and difficult to implement
- require a new routine for every $f$
**General one-step/multi-stage methods**
- higher order methods often require a large number of evaluations of $f$ per time step
# Overview -- so far
So far we have discussed 3 basic techniques for integration of ODE IVP's
* **Single-Step Multi-Stage** schemes (explicit and implicit)
* **Taylor's Series** Methods
* **Linear Multi-step** schemes (just started this)
as well as
* **truncation error** of each method (and its relation to step error)
* **adaptive stepping** for Single-Step Schemes
In general Single-Step Multi-Stage methods (e.g. Embedded RK schemes) plus adaptive time stepping make for a very robust family of solvers. However there are some other classical schemes worth mentioning that have some advantages
## Linear Multi-Step Methods
**Multi-step methods** are ODE methods that
- require only *one* new function evaluation per time step to work.
- reuse values and function evaluations at some number of previous time steps
**Disadvantages over single step methods**
- Methods are not self-starting, i.e. they require other methods to find the initial values
- Difficult to adapt. The time step $\Delta t$ in one-step methods can be changed at any time, while for multi-step methods this is much more complex
### Simplest example: The leap-frog method
The leap-frog method is similar to Euler's method except that it uses information from the two previous time steps to advance the solution. We can write the scheme as a centered first derivative about the point $(t_{n+1}, U_{n+1})$, i.e.
$$\frac{U_{n+2} - U_{n}}{2\Delta t} = f(t_{n+1}, U_{n+1})$$
or
$$
U_{n+2} = U_{n} + 2\Delta t\, f(t_{n+1}, U_{n+1})
$$
this method is known as the leap-frog method
```python
t = numpy.linspace(0.0, 1.6e3, 100)
c_0 = 1.0
decay_constant = -numpy.log(2.0) / 1600.0
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, c_0 * numpy.exp(decay_constant * t), label="True Solution")
# Plot Leap-Frog step
dt = 1e3
u1 = c_0 * numpy.exp(decay_constant * dt / 2.0)
u_np = c_0 + dt * (decay_constant * u1)
axes.plot((0.0, dt), (c_0, u_np), 'k')
axes.plot((dt, dt), (u_np, c_0 * numpy.exp(decay_constant * dt)), 'k--')
axes.plot((0.0, 0.0), (c_0, u_np), 'k--')
axes.plot((0.0, dt), (u_np, u_np), 'k--')
axes.text(400, u_np - 0.05, '$\Delta t$', fontsize=16)
axes.plot([0., dt/2, dt], [ c_0, u1, u_np],'ro')
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel('t (years)')
axes.set_ylabel('$c$')
axes.grid()
axes.set_xlim(-1e2, 1.6e3)
axes.set_ylim((0.5,1.0))
plt.show()
```
```python
def leap_frog(f, t_span, u0, N, start=RK2):
""" calculate fixed step with leap-frog iterator with a single step starter
"""
t = numpy.linspace(t_span[0], t_span[1], N)
delta_t = t[1] - t[0]
u = numpy.zeros(t.shape)
u[0] = u0
# use a single-step multi-stage method to start
t_start, u_start = start(f, (t[0],t[1]), u0, 2)
u[1] = u_start[-1]
for (n, t_np) in enumerate(t[1:-1]):
u[n+2] = u[n] + 2 *delta_t * f(t_np, u[n+1])
return t, u
```
```python
u0 = 1.0
t_span = (0., 1600.)
N = 7
# Stable example
decay_constant = -numpy.log(2.0) / 1600.0
f = lambda t, u: decay_constant * u
t_exact = numpy.linspace(t_span[0], t_span[1], N)
u_exact = u0 * numpy.exp( decay_constant * t_exact)
t_leapfrog, u_leapfrog = leap_frog(f, t_span, u0, N, start=RK4)
```
```python
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_leapfrog, u_leapfrog, 'or-', label="Leap-Frog")
axes.plot(t_exact, u_exact, 'k--', label="True Solution")
axes.grid()
axes.set_title("Leap-Frog", fontsize=18)
axes.set_xlabel("t (years)", fontsize=16)
axes.set_xlabel("$c(t)$", fontsize=16)
axes.legend(loc='best', fontsize=14)
plt.show()
```
### Error Analysis of Leap-Frog Method
To easily analyze this method we will expand the Taylor series around time $t_{n+1}$ to yield
$$\begin{aligned}
u_{n+2} &= u_{n+1} + \Delta t f(t_{n+1},u_{n+1}) + \Delta t^2 \frac{u''(t_{n+1})}{2} + \Delta t^3 \frac{u'''(t_{n+1})}{6} + \mathcal{O}(\Delta t^4)
\end{aligned}$$
We need one more expansion however due to leap-frog. Recall that leap-frog has the form
$$
U_{n+2} = U_{n} + 2 \Delta t f(t_{n+1}, U_{n+1}).
$$
To handle the $U_{n}$ term we need to write this with relation to $u(t_{n+1})$. Again we use the Taylor series
$$
u(t_n) = u_{n+1} - \Delta t f_{n+1} + \Delta t^2 \frac{u''(t_{n+1})}{2} - \Delta t^3 \frac{u'''(t_{n+1})}{6} + \mathcal{O}(\Delta t^4)
$$
$$\begin{aligned}
u(t_{n+2}) &= u_{n+1} + \Delta t f_{n+1} + \Delta t^2 \frac{u''(t_{n+1})}{2} + \Delta t^3 \frac{u'''(t_{n+1})}{6} + \mathcal{O}(\Delta t^4)\\
u(t_{n}) &= u_{n+1} - \Delta t f_{n+1} + \Delta t^2 \frac{u''(t_{n+1})}{2} - \Delta t^3 \frac{u'''(t_{n+1})}{6} + \mathcal{O}(\Delta t^4)
\end{aligned}$$
Plugging these into our definition of the truncation error along with the leap-frog method definition leads to
$$\begin{aligned}
T(t, u; \Delta t) &= \frac{1}{\Delta t} \left [\underbrace{U_{n} + 2 \Delta t f_{n+1}}_{U_{n+2}} - \underbrace{\left(u_{n+1} + \Delta t f_{n+1} + \Delta t^2 \frac{u''(t_{n+1})}{2} + \Delta t^3 \frac{u'''(t_{n+1})}{6} + \mathcal{O}(\Delta t^4) \right )}_{u(t_{n+2})} \right ] \\
&=\frac{1}{\Delta t} \left [ \underbrace{ \left(u_{n+1} - \Delta t f_{n+1} + \Delta t^2 \frac{u''(t_{n+1})}{2} - \Delta t^3 \frac{u'''(t_{n+1})}{6} + \mathcal{O}(\Delta t^4)\right)}_{u_{n}} + 2\Delta t f_n - \underbrace{\left(u_{n+1} + \Delta t f_{n+1} + \Delta t^2 \frac{u''(t_{n+1})}{2} + \Delta t^3 \frac{u'''(t_{n+1})}{6} + \mathcal{O}(\Delta t^4) \right )}_{u(t_{n+2})} \right ] \\
&=\frac{1}{\Delta t} \left [- \Delta t^3 \frac{u'''(t_n)}{3} + \mathcal{O}(\Delta t^4) \right ] \\
&=- \Delta t^2 \frac{u'''(t_n)}{3} + \mathcal{O}(\Delta t^3)
\end{aligned}$$
Therefore the method is second order accurate and is consistent theoretically. In practice it's a bit more complicated than that.
```python
# Compare accuracy between Euler, RK2 and Leap-Frog
f = lambda t, u: -u
u_exact = lambda t: numpy.exp(-t)
u_0 = 1.0
t_span = (0.0, 10.0)
num_steps = [2**n for n in range(4,11)]
```
```python
delta_t = numpy.empty(len(num_steps))
error_euler = numpy.empty(len(num_steps))
error_RK2 = numpy.empty(len(num_steps))
error_leapfrog = numpy.empty(len(num_steps))
for (i, N) in enumerate(num_steps):
t = numpy.linspace(t_span[0], t_span[1], N)
tt, u_euler = euler(f, t_span, u_0, N )
tt, u_rk2 = RK2(f, t_span, u_0, N)
tt, u_leapfrog = leap_frog(f, t_span, u_0, N, start=euler)
delta_t[i] = t[1] - t[0]
# Compute error for each
error_euler[i] = numpy.linalg.norm(delta_t[i] * (u_euler - u_exact(t)), ord=1)
error_RK2[i] = numpy.linalg.norm(delta_t[i] * (u_rk2 - u_exact(t)), ord=1)
error_leapfrog[i] = numpy.linalg.norm(delta_t[i] * (u_leapfrog - u_exact(t)), ord=1)
# Plot error vs. delta_t
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
order_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x))
axes.loglog(delta_t, error_euler, 'bo', label='Forward Euler, $n=1$')
axes.loglog(delta_t, error_RK2, 'ro', label='RK2, $n=2$')
axes.loglog(delta_t, error_leapfrog, 'go', label="Leap-Frog, $n=2$")
axes.loglog(delta_t, order_C(delta_t[2], error_euler[2], 1.0) * delta_t**1.0, '--b')
axes.loglog(delta_t, order_C(delta_t[2], error_RK2[2], 2.0) * delta_t**2.0, '--r')
axes.loglog(delta_t, order_C(delta_t[2], error_leapfrog[2], 2.0) * delta_t**2.0, '--r')
axes.grid()
axes.legend(loc=2, fontsize=14)
axes.set_title("Comparison of Errors", fontsize=18)
axes.set_xlabel("$\Delta t$",fontsize=16)
axes.set_ylabel("$|U(t_f) - u(t_f)|$", fontsize=16)
plt.show()
```
### Look at the errors for Leap-Frog
They're actually quite large... If you make a quick plot of `u_leapfrog` vs $t$ you'll see what's happening (and it is a good example of an issue we will need to address in future lectures).
```python
N= 100
t_leapfrog, u_leapfrog = leap_frog(f, t_span, u_0, N, start=euler)
## Your plotting code here
plt.figure()
plt.plot(t_leapfrog, u_leapfrog)
plt.grid()
plt.show()
```
### General Linear Multi-Step Methods
Leap-frog is perhaps the simplest of multi-step methods but all linear multi-step methods can be written as the linear combination of past, present and future solutions:
$$
\sum^r_{j=0} \alpha_j U_{n+j} = \Delta t \sum^r_{j=0} \beta_j f(U_{n+j}, t_{n+j})
$$
If $\beta_r = 0$ then the method is explicit (only requires previous time steps). Note that the coefficients are not unique as we can multiply both sides by a constant. In practice a normalization of $\alpha_r = 1$ is used.
For example, our leap-frog method can be written using $r=2$, $\alpha = \begin{bmatrix} -1 & 0 & 1 \end{bmatrix}$, $\beta = \begin{bmatrix} 0 & 2 & 0 \end{bmatrix}$
#### Example: Adams Methods
$$
U_{n+r} = U_{n+r-1} + \Delta t \sum^r_{j=0} \beta_j f(U_{n+j}).
$$
All these methods have $\alpha_r = 1$, $\alpha_{r-1} = -1$ and $\alpha_j=0$ for $j < r - 1$.
### Adams-Bashforth Methods
The **Adams-Bashforth** methods are explicit solvers that maximize the order of accuracy given a number of steps $r$. This is accomplished by looking at the Taylor series and picking the coefficients $\beta_j$ to eliminate as many terms in the Taylor series as possible.
$$\begin{aligned}
\text{1-step:} & ~ & U_{n+1} &= U_n +\Delta t f(U_n) \\
\text{2-step:} & ~ & U_{n+2} &= U_{n+1} + \frac{\Delta t}{2} (-f(U_n) + 3 f(U_{n+1})) \\
\text{3-step:} & ~ & U_{n+3} &= U_{n+2} + \frac{\Delta t}{12} (5 f(U_n) - 16 f(U_{n+1}) + 23 f(U_{n+2})) \\
\text{4-step:} & ~ & U_{n+4} &= U_{n+3} + \frac{\Delta t}{24} (-9 f(U_n) + 37 f(U_{n+1}) -59 f(U_{n+2}) + 55 f(U_{n+3}))
\end{aligned}$$
```python
def AB2(f, t_span, u0, N, start=RK2):
""" calculate fixed step Adams-Bashforth 2-step method with a single step starter
reuses previous function evaluations
"""
t = numpy.linspace(t_span[0], t_span[1], N)
delta_t = t[1] - t[0]
u = numpy.zeros(t.shape)
u[0] = u0
# use a single-step multi-stage method to start
t_start, u_start = start(f, (t[0],t[1]), u0, 2)
u[1] = u_start[-1]
# set initial function evaluations
fn = f(t[0], u[0])
fnp = f(t[1], u[1])
for (n, t_np) in enumerate(t[2:]):
u[n+2] = u[n + 1] + delta_t / 2.0 * (-fn + 3.0 * fnp)
fn = fnp
fnp = f(t_np, u[n+2])
return t, u
```
```python
# Use 2-step Adams-Bashforth to compute solution
f = lambda t, u: -u
t_span = (0., 10.)
u0 = 1.0
N = 20
t, u_ab2 = AB2(f, t_span, u0, N, start=RK2)
```
```python
t_exact = numpy.linspace(t_span[0], t_span[1], 100)
u_exact = numpy.exp(-t_exact)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_exact, u_exact, 'k', label="True")
axes.plot(t, u_ab2, 'ro', label="2-step A-B")
axes.set_title("Adams-Bashforth Method", fontsize=18)
axes.set_xlabel("t", fontsize=16)
axes.set_ylabel("u(t)",fontsize=16)
axes.legend(loc=1, fontsize=14)
axes.grid()
plt.show()
```
### Adams-Moulton Methods
The **Adams-Moulton** methods are the implicit versions of the Adams-Bashforth methods. Since this gives one additional parameter to use, $\beta_r$, these methods are generally one order of accuracy greater than their explicit counterparts.
$$\begin{aligned}
\text{1-step:} & ~ & U_{n+1} &= U_n + \frac{\Delta t}{2} (f(U_n) + f(U_{n+1})) \\
\text{2-step:} & ~ & U_{n+2} &= U_{n+1} + \frac{\Delta t}{12} (-f(U_n) + 8f(U_{n+1}) + 5f(U_{n+2})) \\
\text{3-step:} & ~ & U_{n+3} &= U_{n+2} + \frac{\Delta t}{24} (f(U_n) - 5f(U_{n+1}) + 19f(U_{n+2}) + 9f(U_{n+3})) \\
\text{4-step:} & ~ & U_{n+4} &= U_{n+3} + \frac{\Delta t}{720}(-19 f(U_n) + 106 f(U_{n+1}) -264 f(U_{n+2}) + 646 f(U_{n+3}) + 251 f(U_{n+4}))
\end{aligned}$$
```python
# Use 2-step Adams-Moulton to compute solution
# u' = - decay u
decay_constant = 1.0
f = lambda t, u: -decay_constant * u
t_exact = numpy.linspace(0.0, 10.0, 100)
u_exact = numpy.exp(-t_exact)
N = 20
# N = 10
# N = 5
t = numpy.linspace(0, 10.0, N)
delta_t = t[1] - t[0]
U = numpy.empty(t.shape)
U[0] = 1.0
U[1] = U[0] + 0.5 * delta_t * f(t[0], U[0])
U[1] = U[0] + delta_t * f(t[0], U[1])
integration_constant = 1.0 / (1.0 + 5.0 * decay_constant * delta_t / 12.0)
for n in range(t.shape[0] - 2):
U[n+2] = (U[n+1] + decay_constant * delta_t / 12.0 * (U[n] - 8.0 * U[n+1])) * integration_constant
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_exact, u_exact, 'k', label="True")
axes.plot(t, U, 'ro', label="2-step A-M")
axes.set_title("Adams-Moulton Method ($f=-u$)", fontsize=18)
axes.set_xlabel("t", fontsize=16)
axes.set_ylabel("u(t)", fontsize=16)
axes.legend(loc=1, fontsize=14)
axes.grid()
plt.show()
```
### Truncation Error for Multi-Step Methods
We can again find the truncation error in general for linear multi-step methods:
$$\begin{aligned}
T(t, u; \Delta t) &= \frac{1}{\Delta t} \left [\sum^r_{j=0} \alpha_j u_{n+j} - \Delta t \sum^r_{j=0} \beta_j f(u_{n+j}, t_{n+j}) \right ]
\end{aligned}$$
Using the general expansion and evaluation of the Taylor series about $t_n$ we have
$$\begin{aligned}
u(t_{n+j}) &= u(t_n) + j \Delta t u'(t_n) + \frac{1}{2} (j \Delta t)^2 u''(t_n) + \mathcal{O}(\Delta t^3) \\
u'(t_{n+j}) &= u'(t_n) + j \Delta t u''(t_n) + \frac{1}{2} (j \Delta t)^2 u'''(t_n) + \mathcal{O}(\Delta t^3)
\end{aligned}$$
collecting terms of order $u^{(p)}$
$$
\begin{aligned}
T(t, u; \Delta t) &= \frac{1}{\Delta t}\left( \sum^r_{j=0} \alpha_j\right) u(t_n) + \left(\sum^r_{j=0} (j\alpha_j - \beta_j)\right) u'(t_n) + \Delta t \left(\sum^r_{j=0} \left (\frac{1}{2}j^2 \alpha_j - j \beta_j \right) \right) u''(t_n) \\
& \quad \quad + \cdots + \Delta t^{q - 1} \left (\sum^r_{j=0} \left(\frac{1}{q!} j^q \alpha_j - \frac{1}{(q-1)!} j^{q-1} \beta_j \right) \right) u^{(q)}(t_n) + \cdots
\end{aligned}$$
The method is *consistent* if the first two terms of the expansion vanish, i.e. $\sum^r_{j=0} \alpha_j = 0$ and $\sum^r_{j=0} j \alpha_j = \sum^r_{j=0} \beta_j$.
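These two conditions are easy to check numerically. A small sketch (the coefficient lists are ordered $\alpha_0, \dots, \alpha_r$ and are read off from the leap-frog and 2-step Adams-Bashforth formulas above):
```python
def is_consistent(alpha, beta):
    """Sketch: check the two consistency conditions for a linear multi-step method."""
    alpha, beta = numpy.asarray(alpha), numpy.asarray(beta)
    j = numpy.arange(len(alpha))
    return numpy.isclose(alpha.sum(), 0.0) and numpy.isclose((j * alpha).sum(), beta.sum())

print(is_consistent([-1., 0., 1.], [0., 2., 0.]))       # leap-frog
print(is_consistent([0., -1., 1.], [-0.5, 1.5, 0.]))    # 2-step Adams-Bashforth
```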
```python
# Compare accuracy between RK-2, AB-2 and AM-2, RK-4
f = lambda t, u: -u
u_exact = lambda t: numpy.exp(-t)
t_f = 10.0
t_span = (0.0, t_f)
num_steps = [2**n for n in range(4,10)]
delta_t = numpy.empty(len(num_steps))
error_rk = numpy.empty(len(num_steps))
error_rk4 = numpy.empty(len(num_steps))
error_ab = numpy.empty(len(num_steps))
error_am = numpy.empty(len(num_steps))
for (i, N) in enumerate(num_steps):
t = numpy.linspace(0, t_f, N)
delta_t[i] = t[1] - t[0]
# Compute RK2
tt, U_rk = RK2(f, t_span, u0, N)
# Compute RK4
tt, U_rk4 = RK4(f, t_span, u0, N)
# Compute Adams-Bashforth 2-stage
tt, U_ab = AB2(f, t_span, u0, N)
# Compute Adama-Moulton 2-stage
U_am = numpy.empty(t.shape)
U_am[:2] = U_rk[:2]
decay_constant = 1.0
integration_constant = 1.0 / (1.0 + 5.0 * decay_constant * delta_t[i] / 12.0)
for n in range(t.shape[0] - 2):
U_am[n+2] = (U_am[n+1] + decay_constant * delta_t[i] / 12.0 * (U_am[n] - 8.0 * U_am[n+1])) * integration_constant
# Compute error for each
error_rk[i] = numpy.linalg.norm(delta_t[i] * (U_rk - u_exact(t)), ord=1)
error_rk4[i] = numpy.linalg.norm(delta_t[i] * (U_rk4 - u_exact(t)), ord=1)
error_ab[i] = numpy.linalg.norm(delta_t[i] * (U_ab - u_exact(t)), ord=1)
error_am[i] = numpy.linalg.norm(delta_t[i] * (U_am - u_exact(t)), ord=1)
# Plot error vs. delta_t
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.loglog(delta_t, error_rk, 'ko', label='RK-2')
axes.loglog(delta_t, error_ab, 'bo', label='AB-2')
axes.loglog(delta_t, error_am, 'go', label="AM-2")
axes.loglog(delta_t, error_rk4, 'co', label='RK-4')
order_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x))
axes.loglog(delta_t, order_C(delta_t[0], error_ab[0], 1.0) * delta_t**1.0, '--r')
axes.loglog(delta_t, order_C(delta_t[1], error_ab[1], 2.0) * delta_t**2.0, '--b')
axes.loglog(delta_t, order_C(delta_t[1], error_am[1], 3.0) * delta_t**3.0, '--g')
axes.loglog(delta_t, order_C(delta_t[1], error_rk4[1], 4.0) * delta_t**4.0, '--c')
axes.legend(loc=4, fontsize=14)
axes.set_title("Comparison of Errors",fontsize=18)
axes.set_xlabel("$\Delta t$",fontsize=16)
axes.set_ylabel("$|U(t) - u(t)|$", fontsize=16)
axes.grid()
plt.show()
```
### Predictor-Corrector Methods
One way to simplify the Adams-Moulton methods so that implicit evaluations are not needed is by estimating the required implicit function evaluations with an explicit method. These are often called **predictor-corrector** methods as the explicit method provides a *prediction* of what the solution might be and the now explicit *corrector* step works to make that estimate more accurate.
#### Example: One-Step Adams-Bashforth-Moulton
Use the One-step Adams-Bashforth method to predict the value of $U_{n+1}$ and then use the Adams-Moulton method to correct that value:
$$\begin{aligned}
\hat{U}_{n+1} &= U_n + \Delta t f(U_n) \\
U_{n+1} &= U_n + \frac{1}{2} \Delta t \left[f(U_n) + f(\hat{U}_{n+1}) \right]
\end{aligned}$$
leading to a second order accurate method. Note this algorithm is identical to \_\_\_\_\_\_\_\_\_\_\_\__________?
```python
# One-step Adams-Bashforth-Moulton
f = lambda t, u: -u
t_exact = numpy.linspace(0.0, 10.0, 100)
u_exact = numpy.exp(-t_exact)
N = 50
t = numpy.linspace(0, 10.0, N)
delta_t = t[1] - t[0]
U = numpy.empty(t.shape)
U[0] = 1.0
for n in range(t.shape[0] - 1):
U[n+1] = U[n] + delta_t * f(t[n], U[n])
U[n+1] = U[n] + 0.5 * delta_t * (f(t[n], U[n]) + f(t[n+1], U[n+1]))
t_ie, u_ieuler = improved_euler(f, (0.0, 10.), 1., N)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_exact, u_exact, 'k', label="True")
axes.plot(t, U, 'ro', label="2-step A-B")
axes.plot(t_ie, u_ieuler, 'bx', label="Improved Euler")
axes.set_title("Adams-Bashforth-Moulton P/C Method", fontsize=18)
axes.set_xlabel("t", fontsize=18)
axes.set_ylabel("u(t)", fontsize=18)
axes.legend(loc='best', fontsize=14)
axes.grid()
plt.show()
```
# 14 Linear Algebra: Singular Value Decomposition
One can always decompose a matrix $\mathsf{A}$
\begin{gather}
\mathsf{A} = \mathsf{U}\,\text{diag}(w_j)\,\mathsf{V}^{T}\\
\mathsf{U}^T \mathsf{U} = \mathsf{U} \mathsf{U}^T = 1\\
\mathsf{V}^T \mathsf{V} = \mathsf{V} \mathsf{V}^T = 1
\end{gather}
where $\mathsf{U}$ and $\mathsf{V}$ are orthogonal matrices and the $w_j$ are the _singular values_ that are assembled into a diagonal matrix $\mathsf{W}$.
$$
\mathsf{W} = \text{diag}(w_j)
$$
The inverse (if it exists) can be directly calculated from the SVD:
$$
\mathsf{A}^{-1} = \mathsf{V} \text{diag}(1/w_j) \mathsf{U}^T
$$
## Solving ill-conditioned coupled linear equations
```python
import numpy as np
```
### Non-singular matrix
Solve the linear system of equations
$$
\mathsf{A}\mathbf{x} = \mathbf{b}
$$
Using the standard linear solver in numpy:
```python
A = np.array([
[1, 2, 3],
[3, 2, 1],
[-1, -2, -6],
])
b = np.array([0, 1, -1])
```
```python
np.linalg.solve(A, b)
```
array([ 0.83333333, -0.91666667, 0.33333333])
Using the inverse from SVD:
$$
\mathbf{x} = \mathsf{A}^{-1} \mathbf{b}
$$
```python
U, w, VT = np.linalg.svd(A)
print(w)
```
[7.74140616 2.96605874 0.52261473]
First check that the SVD really factors $\mathsf{A} = \mathsf{U}\,\text{diag}(w_j)\,\mathsf{V}^{T}$:
```python
U.dot(np.diag(w).dot(VT))
```
array([[ 1., 2., 3.],
[ 3., 2., 1.],
[-1., -2., -6.]])
```python
np.allclose(A, U.dot(np.diag(w).dot(VT)))
```
True
Now calculate the matrix inverse $\mathsf{A}^{-1} = \mathsf{V} \text{diag}(1/w_j) \mathsf{U}^T$:
```python
inv_w = 1/w
print(inv_w)
```
[0.1291755 0.33714774 1.91345545]
```python
A_inv = VT.T.dot(np.diag(inv_w)).dot(U.T)
print(A_inv)
```
[[-8.33333333e-01 5.00000000e-01 -3.33333333e-01]
[ 1.41666667e+00 -2.50000000e-01 6.66666667e-01]
[-3.33333333e-01 -1.08335035e-16 -3.33333333e-01]]
Check that this is the same that we get from `numpy.linalg.inv()`:
```python
np.allclose(A_inv, np.linalg.inv(A))
```
True
Now, *finally* solve (and check against `numpy.linalg.solve()`):
```python
x = A_inv.dot(b)
print(x)
np.allclose(x, np.linalg.solve(A, b))
```
[ 0.83333333 -0.91666667 0.33333333]
True
```python
A.dot(x)
```
array([-7.77156117e-16, 1.00000000e+00, -1.00000000e+00])
```python
np.allclose(A.dot(x), b)
```
True
### Singular matrix
If the matrix $\mathsf{A}$ is *singular*, i.e., its rank (the number of linearly independent rows or columns) is less than its dimension, then the linear system of equations does not have a unique solution.
For example, the following matrix has the same row twice:
```python
C = np.array([
[ 0.87119148, 0.9330127, -0.9330127],
[ 1.1160254, 0.04736717, -0.04736717],
[ 1.1160254, 0.04736717, -0.04736717],
])
b1 = np.array([ 2.3674474, -0.24813392, -0.24813392])
b2 = np.array([0, 1, 1])
```
```python
np.linalg.solve(C, b1)
```
Here `np.linalg.solve` raises a `LinAlgError` because the matrix is exactly singular. NOTE: failure is not always that obvious: numerically, a matrix can be *almost* singular.
Try solving the linear system of equations
$$
\mathsf{D}\mathbf{x} = \mathbf{b}_1
$$
with matrix $\mathsf{D}$ below:
```python
D = C.copy()
D[2, :] = C[0] - 3*C[1]
D
```
array([[ 0.87119148, 0.9330127 , -0.9330127 ],
[ 1.1160254 , 0.04736717, -0.04736717],
[-2.47688472, 0.79091119, -0.79091119]])
```python
np.linalg.solve(D, b1)
```
array([1.61493184e+00, 2.01760247e+16, 2.01760247e+16])
Note that some of the values are huge, suspiciously close to the inverse of machine precision. This is a sign of a nearly singular matrix.
**Note**: *Just because a function did not throw an exception it does not mean that the answer is correct.* **Always check your output!**
Now back to the example with $\mathsf{C}$:
#### SVD for singular matrices
If a matrix is *singular* or *near singular* then one can *still* apply SVD.
One can then compute the *pseudo inverse*
\begin{align}
\mathsf{A}^{-1} &= \mathsf{V} \text{diag}(\alpha_j) \mathsf{U}^T \\
\alpha_j &= \begin{cases}
\frac{1}{w_j}, &\quad\text{if}\ w_j \neq 0\\
0, &\quad\text{if}\ w_j = 0
\end{cases}
\end{align}
i.e., any singular $w_j = 0$ is being "augmented" by setting
$$
\frac{1}{w_j} \rightarrow 0 \quad\text{if}\quad w_j = 0
$$
in $\text{diag}(1/w_j)$.
Perform the SVD for the singular matrix $\mathsf{C}$:
```python
U, w, VT = np.linalg.svd(C)
print(w)
```
[1.99999999e+00 1.00000000e+00 2.46519033e-32]
Note the third value $w_2 \approx 0$: sign of a singular matrix.
Test that the SVD really decomposes $\mathsf{A} = \mathsf{U}\,\text{diag}(w_j)\,\mathsf{V}^{T}$:
```python
U.dot(np.diag(w).dot(VT))
```
array([[ 0.87119148, 0.9330127 , -0.9330127 ],
[ 1.1160254 , 0.04736717, -0.04736717],
[ 1.1160254 , 0.04736717, -0.04736717]])
```python
np.allclose(C, U.dot(np.diag(w).dot(VT)))
```
True
Identify the (effectively zero) **singular values** (let's say, $|w_i| < 10^{-12}$):
```python
singular_values = np.abs(w) < 1e-12
print(singular_values)
```
[False False True]
#### Pseudo-inverse
Calculate the **pseudo-inverse** from the SVD
\begin{align}
\mathsf{A}^{-1} &= \mathsf{V} \text{diag}(\alpha_j) \mathsf{U}^T \\
\alpha_j &= \begin{cases}
\frac{1}{w_j}, &\quad\text{if}\ w_j \neq 0\\
0, &\quad\text{if}\ w_j = 0
\end{cases}
\end{align}
Augment:
```python
inv_w = 1/w
inv_w[singular_values] = 0
print(inv_w)
```
[0.5 1. 0. ]
```python
C_inv = VT.T.dot(np.diag(inv_w)).dot(U.T)
print(C_inv)
```
[[-0.04736717 0.46650635 0.46650635]
[ 0.5580127 -0.21779787 -0.21779787]
[-0.5580127 0.21779787 0.21779787]]
#### Solution for $\mathbf{b}_1$
Now solve the linear problem with SVD:
```python
x1 = C_inv.dot(b1)
print(x1)
```
[-0.34365138 1.4291518 -1.4291518 ]
```python
C.dot(x1)
```
array([ 2.3674474 , -0.24813392, -0.24813392])
```python
np.allclose(C.dot(x1), b1)
```
True
Thus, using the pseudo-inverse $\mathsf{C}^{-1}$ we can obtain solutions to the equation
$$
\mathsf{C} \mathbf{x}_1 = \mathbf{b}_1
$$
However, $\mathbf{x}_1$ is not the only solution: there's a whole line of solutions that are formed by the special solution and a combination of the basis vectors in the *null space* of the matrix:
The (right) *kernel* or *null space* contains all vectors $\mathbf{x^0}$ for which
$$
\mathsf{C} \mathbf{x^0} = 0
$$
(The dimension of the null space corresponds to the number of zero singular values.) You can find a basis that spans the null space. Any linear combination of null space basis vectors is also mapped to $\mathbf{0}$ when $\mathsf{C}$ is applied to it.
Specifically, if $\mathbf{x}_1$ is a special solution and $\lambda_1 \mathbf{x}^0_1 + \lambda_2 \mathbf{x}^0_2 + \dots$ is a vector in the null space then
$$
\mathbf{x} = \mathbf{x}_1 + ( \lambda_1 \mathbf{x}^0_1 + \lambda_2 \mathbf{x}^0_2 + \dots )
$$
is **also a solution** because
$$
\mathsf{C} \mathbf{x} = \mathsf{C} \mathbf{x_1} + \mathsf{C} ( \lambda_1 \mathbf{x}^0_1 + \lambda_2 \mathbf{x}^0_2 + \dots ) = \mathsf{C} \mathbf{x_1} + 0 = \mathbf{b}_1 + 0 = \mathbf{b}_1
$$
The $\lambda_i$ are arbitrary real numbers and hence there is an infinite number of solutions.
In SVD:
* The columns $U_{\cdot, i}$ of $\mathsf{U}$ (i.e. `U.T[i]` or `U[:, i]`) corresponding to non-zero $w_i$, i.e. $\{i : w_i \neq 0\}$, form the basis for the _range_ of the matrix $\mathsf{A}$.
* The columns $V_{\cdot, i}$ of $\mathsf{V}$ (i.e. `V.T[i]` or `V[:, i]`) corresponding to zero $w_i$, i.e. $\{i : w_i = 0\}$, form the basis for the _null space_ of the matrix $\mathsf{A}$.
```python
x1
```
array([-0.34365138, 1.4291518 , -1.4291518 ])
The rank space comes from $\mathsf{U}^T$:
```python
U.T
```
array([[-7.07106782e-01, -4.99999999e-01, -4.99999999e-01],
[ 7.07106780e-01, -5.00000001e-01, -5.00000001e-01],
[-2.47010760e-16, -7.07106781e-01, 7.07106781e-01]])
The basis vectors for the rank space (``~ bool_array`` applies a logical ``NOT`` operation to the entries in the boolean array so that we can pick out "not singular values"):
```python
U.T[~singular_values]
```
array([[-0.70710678, -0.5 , -0.5 ],
[ 0.70710678, -0.5 , -0.5 ]])
The null space comes from $\mathsf{V}^T$:
```python
VT
```
array([[-0.8660254 , -0.35355339, 0.35355339],
[-0.5 , 0.61237244, -0.61237244],
[-0. , -0.70710678, -0.70710678]])
The basis vector for the null space:
```python
VT[singular_values]
```
array([[-0. , -0.70710678, -0.70710678]])
The component of $\mathbf{x}_1$ along the basis vector of the null space of $\mathsf{C}$ (here a 1D space) – note that this component is zero, i.e., the special solution lives in the rank space:
```python
x1.dot(VT[singular_values][0])
```
2.220446049250313e-16
We can create a family of solutions by adding vectors in the null space to the special solution $\mathbf{x}_1$, e.g. $\lambda_1 = 2$:
```python
lambda_1 = 2
x1_1 = x1 + lambda_1 * VT[2]
print(x1_1)
np.allclose(C.dot(x1_1), b1)
```
[-0.34365138 0.01493824 -2.84336536]
True
Thus, **all** solutions are
```
x1 + lambda * VT[2]
```
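As a quick numerical check (my own addition), any choice of $\lambda$ should satisfy the original equation:
```python
for lam in (-3.0, 0.5, 10.0):            # arbitrary values of lambda
    x_general = x1 + lam * VT[2]
    print(lam, np.allclose(C.dot(x_general), b1))
```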
#### Solution for $\mathbf{b}_2$
The solution vector $x_2$ solves
$$
\mathsf{C}\mathbf{x}_2 = \mathbf{b}_2
$$
```python
b2
```
array([0, 1, 1])
```python
x2 = C_inv.dot(b2)
print(x2)
print(C.dot(x2))
np.allclose(C.dot(x2), b2)
```
[ 0.9330127 -0.43559574 0.43559574]
[-4.4408921e-16 1.0000000e+00 1.0000000e+00]
True
... and the general solution will again be obtained by adding any multiple of the null space basis vector.
#### Null space
The null space is spanned by the following basis vectors (just one in this example):
```python
null_basis = VT[singular_values]
null_basis
```
array([[-0. , -0.70710678, -0.70710678]])
Show that
$$
\mathsf{C}\mathbf{x}^0 = 0
$$
```python
C.dot(null_basis.T)
```
array([[ 0.0000000e+00],
[-6.9388939e-18],
[-6.9388939e-18]])
## SVD for fewer equations than unknowns
$N$ equations for $M$ unknowns with $N < M$:
* no unique solutions (underdetermined)
* $M-N$ dimensional family of solutions
* SVD: at least $M-N$ zero or negligible $w_j$: columns of $\mathsf{V}$ corresponding to singular $w_j$ span the solution space when added to a particular solution.
Same as the above [**Solving ill-conditioned coupled linear equations**](#Solving-ill-conditioned-coupled-linear-equations).
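A minimal sketch (my own, with made-up numbers): for an underdetermined system, `numpy.linalg.lstsq` (which is SVD-based) returns one particular solution, and the null-space columns of $\mathsf{V}$ parametrize the remaining family of solutions.
```python
# Hypothetical 2 equations / 3 unknowns example
A_under = np.array([[1.0, 2.0, 3.0],
                    [4.0, 5.0, 6.0]])
b_under = np.array([1.0, 2.0])
x_part, *_ = np.linalg.lstsq(A_under, b_under, rcond=None)
print(x_part, np.allclose(A_under.dot(x_part), b_under))
```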
## SVD for more equations than unknowns
$N$ equations for $M$ unknowns with $N > M$:
* no exact solutions in general (overdetermined)
* but: SVD can provide best solution in the least-square sense
$$
\mathbf{x} = \mathsf{V}\, \text{diag}(1/w_j)\, \mathsf{U}^{T}\, \mathbf{b}
$$
where
* $\mathbf{x}$ is a $M$-dimensional vector of the unknowns (parameters of the fit),
* $\mathsf{V}$ is an $M \times M$ matrix,
* the $w_j$ form a square $M \times M$ matrix,
* $\mathsf{U}$ is an $N \times M$ matrix (and $\mathsf{U}^T$ is an $M \times N$ matrix), and
* $\mathbf{b}$ is the $N$-dimensional vector of the given values (data)
It can be shown that $\mathbf{x}$ minimizes the residual
$$
\mathbf{r} := |\mathsf{A}\mathbf{x} - \mathbf{b}|.
$$
where the matrix $\mathsf{A}$ will be described below and will contain the evaluation of the fit function for each data point in $\mathbf{b}$.
(For $N \le M$, one can find $\mathbf{x}$ so that $\mathbf{r} = 0$ – see above.)
(In the following, we will switch notation and denote the vector of $M$ unknown parameters of the model as $\mathbf{a}$; this $\mathbf{a}$ corresponds to $\mathbf{x}$ above. $N$ is the number of observations.)
### Linear least-squares fitting
This is the *linear least-squares fitting problem*: Given $N$ data points $(x_i, y_i)$ (where $1 \le i \le N$), fit to a linear model $y(x)$, which can be any linear combination of $M$ functions of $x$.
For example, if we have $M$ functions $x^k$ with parameters $a_k$
$$
y(x) = a_1 + a_2 x + a_3 x^2 + \dots + a_M x^{M-1}
$$
or in general
$$
y(x) = \sum_{k=1}^M a_k X_k(x)
$$
The goal is to determine the $M$ coefficients $a_k$.
Define the **merit function**
$$
\chi^2 = \sum_{i=1}^N \left[ \frac{y_i - \sum_{k=1}^M a_k X_k(x_i)}{\sigma_i}\right]^2
$$
(sum of squared deviations, weighted with standard deviations $\sigma_i$ on the $y_i$).
Best parameters $a_k$ are the ones that *minimize $\chi^2$*.
*Design matrix* $\mathsf{A}$ ($N \times M$, $N \geq M$), vector of measurements $\mathbf{b}$ ($N$-dim) and parameter vector $\mathbf{a}$ ($M$-dim):
\begin{align}
A_{ij} &= \frac{X_j(x_i)}{\sigma_i}\\
b_i &= \frac{y_i}{\sigma_i}\\
\mathbf{a} &= (a_1, a_2, \dots, a_M)
\end{align}
The design matrix $\mathsf{A}$ contains the *predicted* values from the basis functions for all values $x_i$ of the independent variable $x$ for which we have measured data $y_i$.
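For example (a sketch of my own, not part of the original), for the polynomial basis $X_k(x) = x^{k-1}$ and unit errors $\sigma_i = 1$, the design matrix is a Vandermonde-type matrix:
```python
# Hypothetical illustration of a design matrix: N = 5 observations, M = 3 basis functions
x_data = np.linspace(0, 1, 5)
A_design = np.vander(x_data, 3, increasing=True)   # columns: 1, x, x**2
print(A_design.shape)                              # (N, M) = (5, 3)
```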
Minimum occurs when the derivative vanishes:
$$
0 = \frac{\partial\chi^2}{\partial a_k} = \sum_{i=1}^N {\sigma_i}^{-2} \left[ y_i - \sum_{j=1}^M a_j X_j(x_i) \right] X_k(x_i), \quad 1 \leq k \leq M
$$
($M$ coupled equations)
To simplify the notation, define the $M \times M$ matrix
\begin{align}
\alpha_{kj} &= \sum_{i=1}^N \frac{X_k(x_i) X_j(x_i)}{\sigma_i^2}\\
\mathsf{\alpha} &= \mathsf{A}^T \mathsf{A}
\end{align}
and the vector of length $M$
\begin{align}
\beta_{k} &= \sum_{i=1}^N \frac{y_i X_k(x_i)}{\sigma_i^2}\\
\boldsymbol{\beta} &= \mathsf{A}^T \mathbf{b}
\end{align}
Then the $M$ coupled equations can be compactly written as
\begin{align}
\sum_{j=1}^{M} \alpha_{kj} a_j &= \beta_k\\
\mathsf{\alpha}\mathbf{a} = \boldsymbol{\beta}
\end{align}
$\mathsf{\alpha}$ and $\boldsymbol{\beta}$ are known, so we have to solve this matrix equation for the vector of the unknown parameters $\mathbf{a}$.
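In code, $\mathsf{\alpha}$ and $\boldsymbol{\beta}$ can be assembled directly from the design matrix (a vectorized sketch on the hypothetical polynomial example above; the notebook below builds them with explicit loops instead):
```python
x_data = np.linspace(0, 1, 5)                      # same toy grid as before
A_design = np.vander(x_data, 3, increasing=True)
b_data = 1.0 + 2.0 * x_data                        # made-up, noise-free "measurements"
alpha_ex = A_design.T @ A_design                   # M x M
beta_ex = A_design.T @ b_data                      # length M
a_fit = np.linalg.solve(alpha_ex, beta_ex)
print(a_fit)                                       # approximately [1, 2, 0]
```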
#### Error estimates for the parameters
The inverse of $\mathsf{\alpha}$ is related to the uncertainties in the parameters:
$$
\mathsf{C} := \mathsf{\alpha}^{-1}
$$
in particular
$$
\sigma^2(a_i) = C_{ii}
$$
(and the $C_{ij}$ are the co-variances).
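As a sketch (continuing the toy example above; the names are my own), the parameter uncertainties are then read off the diagonal of $\mathsf{\alpha}^{-1}$:
```python
C_cov = np.linalg.inv(alpha_ex)        # covariance matrix of the fit parameters
print(np.sqrt(np.diag(C_cov)))         # 1-sigma uncertainties of the a_i
```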
#### Solution of the linear least-squares fitting problem with SVD
We need to solve the system of $M$ coupled equations (the normal equations)
\begin{align}
\sum_{j=1}^{M} \alpha_{kj} a_j &= \beta_k\\
\mathsf{\alpha}\mathbf{a} = \boldsymbol{\beta}
\end{align}
We can solve the above equation with SVD.
SVD finds $\mathbf{a}$ that minimizes
$$
\chi^2 = |\mathsf{A}\mathbf{a} - \mathbf{b}|^2
$$
(proof in _Numerical Recipes_ Ch 2.) and so SVD is suitable to solve the equation above.
We can alternatively use SVD to directly solve the overdetermined system
$$
\mathsf{A}\mathbf{a} = \mathbf{b}
$$
and we should get the same answer.
The first approach might be preferable because the matrix $\mathsf{\alpha}$ is a relatively small $M \times M$ matrix, i.e., its size depends on the number of parameters. The design matrix $\mathsf{A}$ is a $N \times M$ matrix and can be large (in one dimension) because its size depends on the possibly large number $N$ of data points.
The errors are
$$
\sigma^2(a_j) = \sum_{i=1}^{M} \left(\frac{V_{ji}}{w_i}\right)^2
$$
(see also _Numerical Recipes_ Ch. 15) where $V_{ji}$ are elements of $\mathsf{V}$.
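In numpy this formula is a one-liner over the SVD of the design matrix (again a sketch on the toy example above; `w_d` and `VT_d` are my own names):
```python
_, w_d, VT_d = np.linalg.svd(A_design, full_matrices=False)
param_var = np.sum((VT_d.T / w_d) ** 2, axis=1)    # sigma^2(a_j) for j = 1..M
print(np.sqrt(param_var))
```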
#### Example
Synthetic data
$$
y(x) = 3\sin x - 2\sin 3x + \sin 4x
$$
with noise $r$ added (uniform in range $-5 < r < 5$).
```python
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.style.use('ggplot')
import numpy as np
```
```python
def signal(x, noise=0):
r = np.random.uniform(-noise, noise, len(x))
return 3*np.sin(x) - 2*np.sin(3*x) + np.sin(4*x) + r
```
```python
X = np.linspace(-10, 10, 500)
Y = signal(X, noise=5)
```
```python
plt.plot(X, Y, 'r-', X, signal(X, noise=0), 'k--')
```
Define our fit function (the model) and the basis functions. We need the basis functions for setting up the problem and we will later use the fitfunction together with our parameter estimates to compare our fit to the true underlying function.
```python
def fitfunc(x, a):
return a[0]*np.cos(x) + a[1]*np.sin(x) + \
a[2]*np.cos(2*x) + a[3]*np.sin(2*x) + \
a[4]*np.cos(3*x) + a[5]*np.sin(3*x) + \
a[6]*np.cos(4*x) + a[7]*np.sin(4*x)
def basisfuncs(x):
return np.array([np.cos(x), np.sin(x),
np.cos(2*x), np.sin(2*x),
np.cos(3*x), np.sin(3*x),
np.cos(4*x), np.sin(4*x)])
```
(Note that we could have used the `basisfuncs()` in `fitfunc()` – left as an exercise for the keen reader...)
Set up the $\mathsf{\alpha}$ matrix and the $\boldsymbol{\beta}$ vector (here we assume that all observations have the same error $\sigma = 1$):
```python
M = 8
sigma = 1.
alpha = np.zeros((M, M))
beta = np.zeros(M)
for x in X:
Xk = basisfuncs(x)
for k in range(M):
for j in range(M):
            alpha[k, j] += Xk[k]*Xk[j]/sigma**2
for x, y in zip(X, Y):
    beta += y * basisfuncs(x)/sigma**2
```
Finally, solving the problem follows the same procedure as before:
Get the SVD:
```python
U, w, VT = np.linalg.svd(alpha)
V = VT.T
```
In this case, the singular values do not immediately show if any basis functions are superfluous (this would be the case for values close to 0).
```python
w
```
array([296.92809624, 282.94804954, 243.7895787 , 235.7300808 ,
235.15938555, 235.14838812, 235.14821093, 235.14821013])
... nevertheless, remember to routinely mask any singular or near-singular values:
```python
w_inv = 1/w
w_inv[np.abs(w) < 1e-12] = 0
alpha_inv = V.dot(np.diag(w_inv)).dot(U.T)
```
Solve the system of equations with the pseudo-inverse:
```python
a_values = alpha_inv.dot(beta)
print(a_values)
```
[-0.08436957 3.01646 0.30062685 0.03492872 -0.03711539 -1.70048432
0.12217945 1.03404778]
Compare the fitted values to the original parameters $a_j = 0, +3, 0, 0, 0, -2, 0, +1$.
The original nonzero parameters are recovered as approximately 3.02, -1.70, and 1.03, but the other parameters also have appreciable values. Given that the noise was sizable, this is not unreasonable.
Compare the plot of the underlying true function ("signal", dashed line) to the model ("fit", solid line):
```python
plt.plot(X, fitfunc(X, a_values), 'b-', label="fit")
plt.plot(X, signal(X, noise=0), 'k--', label="signal")
plt.legend(loc="best", fontsize="small")
```
We get some spurious oscillations but overall the result looks reasonable.
```python
```
#### Direct calculation
Instead of solving the compact $M \times M$ matrix equation $\mathsf{\alpha}\mathbf{a} = \boldsymbol{\beta}$, we can try to directly solve the overdetermined $M \times N$ equation
$$
\mathsf{A}\mathbf{a} = \mathbf{b}
$$
Creating the design matrix is straightforward (but because of the way that `basisfuncs()` returns values, we need to transpose the output to get the proper $N \times M$ matrix $\mathsf{A}$):
```python
A = np.transpose(basisfuncs(X))
b = Y
```
```python
A.shape
```
(500, 8)
Calculate the pseudo-inverse $\mathsf{A}^{-1}$.
Note that we need to explicitly construct the matrix with the inverses of the singular values by filling an $M \times N$ matrix with $\text{diag}(1/w_i)$.
```python
U, w, VT = np.linalg.svd(A)
V = VT.T
singular_values = np.abs(w) < 1e-12
winv = 1/w
winv[singular_values] = 0
winvmat = np.zeros((V.shape[0], U.shape[0]))
winvmat[:len(winv), :len(winv)] = np.diag(winv)
Ainv = V.dot(winvmat).dot(U.T)
```
The singular values are all well behaved:
```python
w
```
array([17.23160167, 16.8210597 , 15.61376248, 15.35350386, 15.33490742,
15.33454884, 15.33454306, 15.33454304])
```python
V.shape, winvmat.shape, U.T.shape
```
((8, 8), (8, 500), (500, 500))
```python
Ainv.shape
```
(8, 500)
```python
A.shape
```
(500, 8)
Now solve directly
$$
\mathsf{A}^{-1} \mathbf{b} = \mathbf{a}
$$
```python
a = Ainv.dot(b)
a
```
array([-0.08436957, 3.01646 , 0.30062685, 0.03492872, -0.03711539,
-1.70048432, 0.12217945, 1.03404778])
The parameter estimates are the same as above.
```python
a_values - a
```
array([-6.80011603e-16, 5.32907052e-15, -1.11022302e-16, -2.74780199e-15,
-1.03389519e-15, -4.44089210e-16, 5.55111512e-17, 2.22044605e-15])
and hence the plot looks the same:
```python
plt.plot(X, fitfunc(X, a_values), 'b-', label="fit")
plt.plot(X, signal(X, noise=0), 'k--', label="signal")
plt.legend(loc="best", fontsize="small")
```
```python
```
| eaaa04ee7940cdcc3fbcdcc367abb9f4698fac47 | 162,710 | ipynb | Jupyter Notebook | 14_linear_algebra/14_SVD.ipynb | ASU-CompMethodsPhysics-PHY494/PHY494-resources-2019 | e6114b49d28df887abe37c8144df8f4ae8cf6419 | [
"CC-BY-4.0"
]
| null | null | null | 14_linear_algebra/14_SVD.ipynb | ASU-CompMethodsPhysics-PHY494/PHY494-resources-2019 | e6114b49d28df887abe37c8144df8f4ae8cf6419 | [
"CC-BY-4.0"
]
| null | null | null | 14_linear_algebra/14_SVD.ipynb | ASU-CompMethodsPhysics-PHY494/PHY494-resources-2019 | e6114b49d28df887abe37c8144df8f4ae8cf6419 | [
"CC-BY-4.0"
]
| null | null | null | 83.100102 | 43,968 | 0.830508 | true | 7,251 | Qwen/Qwen-72B | 1. YES
2. YES | 0.933431 | 0.835484 | 0.779866 | __label__eng_Latn | 0.912166 | 0.650223 |
# Scenario C - Peak Number Variation (results evaluation)
This file is used to evaluate the inference (numerical) results.
The model used in the inference of the parameters is formulated as follows:
\begin{equation}
\large y = f(x) = \sum\limits_{m=1}^M \big[A_m \cdot e^{-\frac{(x-\mu_m)^2}{2\cdot\sigma_m^2}}\big] + \epsilon
\end{equation}
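To make the model concrete, here is a small numpy sketch of $f(x)$ (my own illustration with made-up peak parameters; the actual inference code lives in the local modules imported below):
```python
import numpy as np

def sum_of_gaussians(x, amplitudes, centers, widths):
    """f(x) = sum_m A_m * exp(-(x - mu_m)**2 / (2 * sigma_m**2)), without the noise term."""
    x = np.asarray(x)[:, None]
    amplitudes, centers, widths = map(np.asarray, (amplitudes, centers, widths))
    return np.sum(amplitudes * np.exp(-(x - centers) ** 2 / (2 * widths ** 2)), axis=1)

# e.g. a 2-peak curve on a toy grid
y_toy = sum_of_gaussians(np.linspace(0, 10, 100), [1.0, 0.5], [3.0, 7.0], [0.5, 1.0])
```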
```python
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pymc3 as pm
import arviz as az
import seaborn as sns
#az.style.use('arviz-darkgrid')
print('Running on PyMC3 v{}'.format(pm.__version__))
```
WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.
Running on PyMC3 v3.8
## Import local modules
```python
import sys
sys.path.append('../../modules')
import results as res
import figures as fig
```
## Load results and extract convergence information
```python
#filelst = ['./output_4x4/scenario_peaks.csv']
filelst = ['./scenario_peaks_mruns_01.csv', './scenario_peaks_mruns_02.csv']
ldf = res.load_results(filelst)
```
reading file: ./scenario_peaks_mruns_01.csv
reading file: ./scenario_peaks_mruns_02.csv
```python
# extract the convergence results per model
peaklist = [2,3,4,5,6]
dres = res.get_model_summary(ldf, peaklist)
```
processing dataframe: 1
number of runs : 4
processing dataframe: 2
number of runs : 4
```python
# figure size and color mapping
figs=(8,8)
#coolwarm, *bone, gray, binary, BuPu, YlGn, Blues, *Greens, Purples
col = "Greens"
col_r = col + "_r"
# axis labels
labels = ['2p','3p','4p','5p','6p']
```
## Heatmaps of n-peak model vs. n-peak number in dataset
### WAIC
```python
fig.plot_heatmap(dres['waic'], labels, title="", color=col, fsize=figs, fname="hmap_waic", precision=".0f")
```
### Rhat
```python
fig.plot_heatmap(dres['rhat'], labels, title="", color=col, fsize=figs, fname="hmap_rhat", precision=".2f")
```
### R2
```python
fig.plot_heatmap(dres['r2'], labels, title="", color=col_r, fsize=figs, fname="hmap_r2", precision=".2f")
```
### BFMI
```python
fig.plot_heatmap(dres['bfmi'], labels, title="", color=col_r, fsize=figs, fname="hmap_bfmi", precision=".2f")
```
### MCSE
```python
fig.plot_heatmap(dres['mcse'], labels, title="", color=col, fsize=figs, fname="hmap_mcse", precision=".2f")
```
### Noise
```python
fig.plot_heatmap(dres['noise'], labels, title="", color=col, fsize=figs, fname="hmap_noise", precision=".2f")
```
### ESS
```python
fig.plot_heatmap(dres['ess'], labels, title="", color=col_r, fsize=figs, fname="hmap_ess", precision=".0f")
```
```python
```
| af2a6f7c576739be1a15ff5a00cb7117ae748654 | 341,245 | ipynb | Jupyter Notebook | code/scenarios/scenario_c/scenario_peaks_evaluation.ipynb | jnispen/PPSDA | 910261551dd08768a72ab0a3e81bd73c706a143a | [
"MIT"
]
| 1 | 2021-01-07T02:22:25.000Z | 2021-01-07T02:22:25.000Z | code/scenarios/scenario_c/scenario_peaks_evaluation.ipynb | jnispen/PPSDA | 910261551dd08768a72ab0a3e81bd73c706a143a | [
"MIT"
]
| null | null | null | code/scenarios/scenario_c/scenario_peaks_evaluation.ipynb | jnispen/PPSDA | 910261551dd08768a72ab0a3e81bd73c706a143a | [
"MIT"
]
| null | null | null | 932.363388 | 58,920 | 0.95401 | true | 819 | Qwen/Qwen-72B | 1. YES
2. YES | 0.83762 | 0.661923 | 0.55444 | __label__eng_Latn | 0.549705 | 0.126479 |
```python
from sympy import *
x, C, D = symbols('x C D')
i, j = symbols('i j', integer=True, positive=True)
psi_i = (1-x)**(i+1)
psi_j = psi_i.subs(i, j)
integrand = diff(psi_i, x)*diff(psi_j, x)
integrand = simplify(integrand)
A_ij = integrate(integrand, (x, 0, 1))
A_ij = simplify(A_ij)
print(('A_ij:', A_ij))
f = 2
b_i = integrate(f*psi_i, (x, 0, 1)) - \
integrate(diff(D*x, x)*diff(psi_i, x), (x, 0, 1)) - \
C*psi_i.subs(x, 0)
b_i = simplify(b_i)
print(('b_i:', b_i))
N = 1
A = zeros(N+1, N+1)
b = zeros(N+1)
print(('fresh b:', b))
for r in range(N+1):
for s in range(N+1):
A[r,s] = A_ij.subs(i, r).subs(j, s)
b[r,0] = b_i.subs(i, r)
print(('A:', A))
print(('b:', b[:,0]))
c = A.LUsolve(b)
print(('c:', c[:,0]))
u = sum(c[r,0]*psi_i.subs(i, r) for r in range(N+1)) + D*x
print(('u:', simplify(u)))
print(("u'':", simplify(diff(u, x, x))))
print(('BC x=0:', simplify(diff(u, x).subs(x, 0))))
print(('BC x=1:', simplify(u.subs(x, 1))))
```
('A_ij:', (i + 1)*(j + 1)/(i + j + 1))
('b_i:', ((-C + D)*(i + 2) + 2)/(i + 2))
('fresh b:', Matrix([
[0, 0],
[0, 0]]))
('A:', Matrix([
[1, 1],
[1, 4/3]]))
('b:', Matrix([
[ -C + D + 1],
[-C + D + 2/3]]))
('c:', Matrix([
[-C + D + 2],
[ -1]]))
('u:', C*x - C + D - x**2 + 1)
("u'':", -2)
('BC x=0:', C)
('BC x=1:', D)
```python
```
| 8044911c94c9a92e2eedfcd0bb0b35a728f8cc5d | 2,544 | ipynb | Jupyter Notebook | Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/SRC/38_U_XX_2_CD.ipynb | okara83/Becoming-a-Data-Scientist | f09a15f7f239b96b77a2f080c403b2f3e95c9650 | [
"MIT"
]
| null | null | null | Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/SRC/38_U_XX_2_CD.ipynb | okara83/Becoming-a-Data-Scientist | f09a15f7f239b96b77a2f080c403b2f3e95c9650 | [
"MIT"
]
| null | null | null | Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/SRC/38_U_XX_2_CD.ipynb | okara83/Becoming-a-Data-Scientist | f09a15f7f239b96b77a2f080c403b2f3e95c9650 | [
"MIT"
]
| 2 | 2022-02-09T15:41:33.000Z | 2022-02-11T07:47:40.000Z | 25.69697 | 69 | 0.411557 | true | 587 | Qwen/Qwen-72B | 1. YES
2. YES | 0.938124 | 0.828939 | 0.777647 | __label__glg_Latn | 0.176586 | 0.645068 |
```python
import numpy as np
import pandas as pd
from scipy.sparse import coo_matrix, eye
import networkx as nx
import matplotlib.pyplot as plt
import graphblas
from graphblas import Matrix, Vector, Scalar
from graphblas import descriptor
from graphblas import unary, binary, monoid, semiring, op
from graphblas import io as gio
```
## Create and visualize a Matrix
```python
# 23 // The input matrix A must be symmetric. Self-edges (diagonal entries) are
# 24 // OK, and are ignored. The values and type of A are ignored; just its
# 25 // pattern is accessed.
row_col = np.array(
[
[0, 0, 0, 1, 2, 2, 3, 6, 6, 9, 9],
[1, 2, 3, 2, 4, 5, 4, 7, 8, 10, 11],
]
)
rows, cols = row_col
data = np.full_like(rows, fill_value=1)
```
```python
A = coo_matrix((data, (rows, cols)), shape=(12, 12)).tolil()
A[cols, rows] = A[rows, cols] # symmetrize matrix
A = A.tocoo()
```
```python
# Draw A using spring layout which may even reveal the connected components
G = nx.convert_matrix.from_scipy_sparse_matrix(A)
layout = nx.drawing.layout.spring_layout(G, k=0.6, scale=1, threshold=1e-10)
nx.draw_networkx(G, with_labels=True, node_size=500, font_color="w", pos=layout)
```
```python
A = gio.from_scipy_sparse_matrix(A, name="A")
# Size of the sparse matrix is 12x12 with 22 non-zero elements of type INT64
```
```python
A
# This is an adjacency matrix
# Reading along a row shows the out-nodes of a vertex
# Reading along a column shows the in-nodes of a vertex
```
gb.Matrix 'A': nvals=22, nrows=12, ncols=12, dtype=INT64, format=bitmapr (iso)

      0  1  2  3  4  5  6  7  8  9 10 11
 0    .  1  1  1  .  .  .  .  .  .  .  .
 1    1  .  1  .  .  .  .  .  .  .  .  .
 2    1  1  .  .  1  1  .  .  .  .  .  .
 3    1  .  .  .  1  .  .  .  .  .  .  .
 4    .  .  1  1  .  .  .  .  .  .  .  .
 5    .  .  1  .  .  .  .  .  .  .  .  .
 6    .  .  .  .  .  .  .  1  1  .  .  .
 7    .  .  .  .  .  .  1  .  .  .  .  .
 8    .  .  .  .  .  .  1  .  .  .  .  .
 9    .  .  .  .  .  .  .  .  .  .  1  1
10    .  .  .  .  .  .  .  .  .  1  .  .
11    .  .  .  .  .  .  .  .  .  1  .  .

(dots mark empty entries)
```python
# graphblas.io.draw could do with a few more tunable options to improve pretty display
gio.draw(A)
```
## Connected Components
https://github.com/GraphBLAS/LAGraph/blob/reorg/src/algorithm/LAGraph_ConnectedComponents.c
Sections of the C-code found at the above link are reproduced here in comments and translated into python
```python
# 10 // Code is based on the algorithm described in the following paper
# 11 // Zhang, Azad, Hu. FastSV: FastSV: A Distributed-Memory Connected Component
# 12 // Algorithm with Fast Convergence (SIAM PP20)
# 13
# 14 // A subsequent update to the algorithm is here (which might not be reflected
# 15 // in this code):
# 16 //
# 17 // Yongzhe Zhang, Ariful Azad, Aydin Buluc: Parallel algorithms for finding
# 18 // connected components using linear algebra. J. Parallel Distributed Comput.
# 19 // 144: 14-27 (2020).
```
```python
# 342 GrB_TRY (GrB_Matrix_nrows (&n, S)) ;
# 343 GrB_TRY (GrB_Matrix_nvals (&nnz, S)) ;
n = A.nrows
nnz = A.nvals
```
```python
# 370 // vectors
# 371 GrB_TRY (GrB_Vector_new (&f, GrB_UINT32, n)) ;
# 372 GrB_TRY (GrB_Vector_new (&gp_new, GrB_UINT32, n)) ;
# 373 GrB_TRY (GrB_Vector_new (&mod, GrB_BOOL, n)) ;
dtype = np.uint32
f = Vector(dtype=dtype, size=n, name="parents") # parent of each vertex
gp_new = Vector(dtype=dtype, size=n, name="grandparents") # grandparent of each vertex
mod = Vector(dtype=bool, size=n, name="modified?") # boolean flag for each vertex
f
```
gb.Vector 'parents': nvals=0, size=12, dtype=UINT32, format=sparse  (no entries yet)
```python
mod
```
gb.Vector 'modified?': nvals=0, size=12, dtype=BOOL, format=sparse  (no entries yet)
```python
# 387 GrB_TRY (GrB_Vector_build (f, I, V32, n, GrB_PLUS_UINT32)) ;
# 388 GrB_TRY (GrB_Vector_dup (&gp, f)) ;
# 389 GrB_TRY (GrB_Vector_dup (&mngp, f)) ;
I = np.arange(n)
V32 = I.astype(dtype)
f.build(I, V32) # The parent of each vertex is initialized to be the vertex itself
gp = f.dup() # grandparent of each vertex initialized to parent
mngp = f.dup(name="Minimum grandparent") # minimum grandparent of each vertex belonging to a star
```
```python
f
# The parent of each vertex is initialized to the vertex itself
```
gb.Vector 'parents': nvals=12, size=12, dtype=UINT32, format=full
index:  0  1  2  3  4  5  6  7  8  9 10 11
value:  0  1  2  3  4  5  6  7  8  9 10 11
```python
change = Scalar(dtype=bool, name="changed?") # flag to terminate FastSV algorithm
```
This uses the ***min_second*** semiring with the *GrB_mxv()* function where *min* returns the minimum of its two inputs and *second* returns its second input.
```python
# 703 // hooking & shortcutting
# 704 GrB_TRY (GrB_mxv (mngp, NULL, GrB_MIN_UINT32,
# 705 GrB_MIN_SECOND_SEMIRING_UINT32, T, gp, NULL)) ;
mngp(binary.min) << op.min_second(A @ gp)
mngp
```
gb.Vector 'Minimum grandparent': nvals=12, size=12, dtype=UINT32, format=full
index:  0  1  2  3  4  5  6  7  8  9 10 11
value:  0  0  0  0  2  2  6  6  6  9  9  9
It is not yet clear to me if the function ***Reduce_assign32*** (described in the C-code) instead of ***GrB_assign*** is really required for the algorithm to work, as it is not referred to in any of the authors' papers. Nevertheless, I'm choosing ***GrB_assign***, in accordance with authors' papers. This seems to work anyway for the example graph used here.
```python
# 706 GrB_TRY (Reduce_assign32 (&f, &mngp, V32, n, nthreads, ht_key,
# 707 ht_val, &seed, msg)) ;
#
#
# 139 //------------------------------------------------------------------------------
# 140 // Reduce_assign32: w (index) += s, using MIN as the "+=" accum operator
# 141 //------------------------------------------------------------------------------
# 142
# 143 // mask = NULL, accumulator = GrB_MIN_UINT32, descriptor = NULL.
# 144 // Duplicates are summed with the accumulator, which differs from how
# 145 // GrB_assign works. GrB_assign states that the presence of duplicates results
# 146 // in undefined behavior. GrB_assign in SuiteSparse:GraphBLAS follows the
# 147 // MATLAB rule, which discards all but the first of the duplicates.
# 148
# 149 // todo: add this to GraphBLAS as a variant of GrB_assign, either as
# 150 // GxB_assign_accum (or another name), or as a GxB_* descriptor setting.
# et cetera
#
#
f(binary.min)[V32] << mngp
```
```python
# 708 GrB_TRY (GrB_eWiseAdd (f, NULL, GrB_MIN_UINT32, GrB_MIN_UINT32,
# 709 mngp, gp, NULL)) ;
f(binary.min) << op.min(mngp | gp)
```
```python
# 710 // calculate grandparent
# 711 // fixme: NULL parameter is SS:GrB extension
# 712 GrB_TRY (GrB_Vector_extractTuples (NULL, V32, &n, f)) ; // fixme
_, V32 = f.to_values()
V32
```
array([0, 0, 0, 0, 2, 2, 6, 6, 6, 9, 9, 9], dtype=uint32)
```python
I = V32.astype(I.dtype)
```
```python
# 719 GrB_TRY (GrB_extract (gp_new, NULL, NULL, f, I, n, NULL)) ;
gp_new << f[I]
```
```python
# 721 // check termination
# 722 GrB_TRY (GrB_eWiseMult (mod, NULL, NULL, GrB_NE_UINT32, gp_new, gp,
# 723 NULL)) ;
# 724 GrB_TRY (GrB_reduce (&change, NULL, GrB_LOR_MONOID_BOOL, mod, NULL)) ;
mod << gp_new.ewise_mult(gp, binary.ne)
change << mod.reduce(binary.lor)
```
```python
mod
```
gb.Vector 'modified?': nvals=12, size=12, dtype=BOOL, format=full
index:  0      1     2     3     4     5     6      7     8     9      10    11
value:  False  True  True  True  True  True  False  True  True  False  True  True
```python
change
```
<div class="gb-scalar"><tt>changed?</tt><div>
<table class="gb-info-table">
<tr>
<td rowspan="2" class="gb-info-name-cell"><pre>gb.Scalar</pre></td>
<td><pre>value</pre></td>
<td><pre>dtype</pre></td>
</tr>
<tr>
<td>True</td>
<td>BOOL</td>
</tr>
</table>
</div>
</div>
```python
change.value
```
True
```python
# 726 // swap gp and gp_new
# 727 GrB_Vector t = gp ; gp = gp_new ; gp_new = t ;
gp, gp_new = gp_new, gp
```
The algorithm repeats until a new computation is the same as the previous result.
Here is the full Python listing, updated using changes from the authors' paper (Yongzhe Zhang et al., J. Parallel Distributed Comput. 144: 14-27 (2020)):
```python
def fastSV(A):
n = A.nrows
I = np.arange(n)
# The parent of each vertex is initialized to be the vertex itself:
f = Vector.from_values(I, I, name="parents")
gp = f.dup() # grandparent of each vertex initialized to parent
gp_dup = gp.dup() # duplicate grandparents
mngp = f.dup() # minimum grandparent of each star-vertex
# boolean flag for each vertex
mod = Vector(dtype=bool, size=n, name="modified?")
# flag to terminate FastSV algorithm
change = Scalar.from_value(True, dtype=bool, name="changed?")
while change:
# Step 1: Hooking phase
mngp << op.min_second(A @ gp)
f(binary.min)[I] << mngp
f << op.min(f | mngp)
# Step 2: Shortcutting
f << op.min(f | gp)
# Step 3: Calculate grandparents
_, I = f.to_values()
gp << f[I]
# Check termination
mod << op.ne(gp_dup & gp)
change << mod.reduce(binary.lor)
gp_dup << gp
return f
```
```python
connected_components = fastSV(A)
connected_components
```
gb.Vector 'parents': nvals=12, size=12, dtype=INT64, format=full
index:  0  1  2  3  4  5  6  7  8  9 10 11
value:  0  0  0  0  0  0  6  6  6  9  9  9
*connected_components* gives the label of the component to which each vertex belongs.
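For example (a small check added here), extracting the values and taking the unique labels shows that there are three components:
```python
_, labels = connected_components.to_values()
print(np.unique(labels))   # expected: [0 6 9], i.e. three connected components
```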
Compare with the graph drawing to check result:
```python
nx.draw_networkx(G, with_labels=True, node_size=500, font_color="w", pos=layout)
```
Each component has been identified and labeled with the least vertex ID in that component.
### And that's FastSV in essentially 10 very readable lines of Python, thanks to GraphBLAS
Now let's test the algorithm a bit further by applying a random permutation to the vertex labels of the graph:
```python
p = np.random.permutation(A.shape[0])
```
```python
p
```
array([ 6, 1, 7, 4, 0, 9, 10, 3, 8, 5, 2, 11])
The permutation $\mathsf{p}$ can be viewed not only as a rearrangement of the vertex labels, but also as a bijection
$$p: V \rightarrow V $$
from the set of vertices $V \subset \mathbb{Z}$ to itself. So, for example,
$$p(0) = \mathsf{p[0]}\mbox{,}\;\;p(1) = \mathsf{p[1]}\mbox{, ...} $$
I do not know if GraphBLAS provides primitives for permuting vertex labels. It might be worthwhile to check. Here I'll try using graphblas:
Let's build the above permutation's matrix $\mathbf{P}$ whose components are defined by:
$$P_{i\,j} \equiv \delta_{p(i)\, j},$$
where $\delta_{i\,j} = 1$ when $i=j$, otherwise $\delta_{i\,j} = 0$.
Note that,
$$ \sum_{j} j\,P_{i\,j} = \sum_{j} j\,\delta_{p(i)\, j} = p(i). $$
Also, it can be shown that
$$ \sum_{j} P^{\phantom{\mathrm{T}}}_{i\,j}P^{\mathrm{T}}_{j\,k} = \sum_{j} P_{i\,j}P_{k\,j} = \delta_{i\, k},$$
where $\mathrm{T}$ denotes the matrix transpose, and
$$P_{i\,j} = \delta_{p(i)\, j} \iff P_{i\,p(j)} = \delta_{i\, j}.$$
We will now use the last equation above to build $\mathbf{P}$:
```python
rows, cols = np.arange(p.size), p
data = np.full_like(rows, fill_value=1)
P = Matrix.from_values(rows, cols, data, name="P")
```
Check from the definition, $P_{i\,j} \equiv \delta_{p(i)\, j}$, that the nonzero matrix elements are indeed correctly placed:
```python
P
```
gb.Matrix 'P': nvals=12, nrows=12, ncols=12, dtype=INT64, format=csr (iso)

      0  1  2  3  4  5  6  7  8  9 10 11
 0    .  .  .  .  .  .  1  .  .  .  .  .
 1    .  1  .  .  .  .  .  .  .  .  .  .
 2    .  .  .  .  .  .  .  1  .  .  .  .
 3    .  .  .  .  1  .  .  .  .  .  .  .
 4    1  .  .  .  .  .  .  .  .  .  .  .
 5    .  .  .  .  .  .  .  .  .  1  .  .
 6    .  .  .  .  .  .  .  .  .  .  1  .
 7    .  .  .  1  .  .  .  .  .  .  .  .
 8    .  .  .  .  .  .  .  .  1  .  .  .
 9    .  .  .  .  .  1  .  .  .  .  .  .
10    .  .  1  .  .  .  .  .  .  .  .  .
11    .  .  .  .  .  .  .  .  .  .  .  1

(dots mark empty entries; the single 1 in row i sits in column p[i])
Now let us transform the adjacency matrix $\mathbf{A}$, using the permutation matrix $\mathbf{P}$, into
$$\mathbf{A}' = \mathbf{P}^{\mathrm{T}} \cdot \mathbf{A} \cdot \mathbf{P},$$
which ensures that the graph edges are preserved after permutation, that is,
\begin{equation}
\boxed{A_{i\,j} = A'_{p(i)\,p(j)}}
\end{equation}
for all $i$, $j$.
```python
AA = A.dup(name="AA")
AA << P.T @ A @ P
```
Let's redraw the graph with the new labels and compare with the permutation array and graph-drawing above.
```python
A_sci = gio.to_scipy_sparse_matrix(AA, format="csr")
G_perm = nx.convert_matrix.from_scipy_sparse_matrix(A_sci)
layout_perm = {p[k]: layout[k] for k in layout}
nx.draw_networkx(G_perm, with_labels=True, node_size=500, font_color="w", pos=layout_perm)
```
Now let's re-apply the algorithm:
```python
connected_components_perm = fastSV(AA)
connected_components_perm
```
gb.Vector 'parents': nvals=12, size=12, dtype=INT64, format=full
index:  0  1  2  3  4  5  6  7  8  9 10 11
value:  0  0  2  3  0  2  0  0  3  0  3  2
It looks like once again the algorithm worked as expected. Let's confirm this programmatically by undoing the permutation on the result:
```python
_, components_perm = connected_components_perm.to_values()
```
```python
_, components = connected_components.to_values()
```
```python
def assert_components_equal(components, components_perm, p):
"""
This function undoes the vertex-label permutation p in
components_perm and compares the result to the original
components obtained before the permutation was applied.
"""
# Undo the permutation in components_perm:
components_unperm_not_min = components_perm[p]
# Note that the resulting component-labels are not
# necessarily the minimum vertex-labels for each
# component.
# Extract minimum vertex-label for each component:
non_min_vertices, min_vertices = np.unique(components_unperm_not_min, return_index=True)
# create a mapping from the non-minimum to minimum
# component labels:
q = p.copy()
q[non_min_vertices] = min_vertices
# apply the map:
components_perm_undone = q[components_unperm_not_min]
assert np.all(components == components_perm_undone)
```
```python
assert_components_equal(components, components_perm, p)
```
To further test this assertion function, let us apply a second random permutation $p_2$ to the previous permutation:
```python
p2 = np.random.permutation(A.shape[0])
p2
```
array([ 8, 1, 7, 2, 9, 11, 5, 3, 4, 0, 6, 10])
```python
rows, cols = np.arange(p2.size), p2
data = np.full_like(rows, fill_value=1)
P = Matrix.from_values(rows, cols, data)
```
```python
AAA = A.dup()
AAA << P.T @ AA @ P
```
```python
AAA_sci = gio.to_scipy_sparse_matrix(AAA, format="csr")
G_perm2 = nx.convert_matrix.from_scipy_sparse_matrix(AAA_sci)
layout_perm2 = {p2[k]: layout_perm[k] for k in layout_perm}
nx.draw_networkx(G_perm2, with_labels=True, node_size=500, font_color="w", pos=layout_perm2)
```
```python
connected_components_perm2 = fastSV(AAA)
connected_components_perm2
```
gb.Vector 'parents': nvals=12, size=12, dtype=INT64, format=full
index:  0  1  2  3  4  5  6  7  8  9 10 11
value:  0  0  2  0  2  0  2  7  0  0  7  7
```python
_, components_perm2 = connected_components_perm2.to_values()
```
```python
assert_components_equal(components_perm, components_perm2, p2)
```
```python
```
| dcb0c8b86e24e257304cc7fe98348e73521b86ce | 164,665 | ipynb | Jupyter Notebook | notebooks/Connected Components -- FastSV.ipynb | ParticularMiner/grblas | f5cfae47f68aa9b8e7c82c364e8eb16c0051b409 | [
"Apache-2.0"
]
| null | null | null | notebooks/Connected Components -- FastSV.ipynb | ParticularMiner/grblas | f5cfae47f68aa9b8e7c82c364e8eb16c0051b409 | [
"Apache-2.0"
]
| null | null | null | notebooks/Connected Components -- FastSV.ipynb | ParticularMiner/grblas | f5cfae47f68aa9b8e7c82c364e8eb16c0051b409 | [
"Apache-2.0"
]
| null | null | null | 49.271394 | 13,836 | 0.637349 | true | 14,715 | Qwen/Qwen-72B | 1. YES
2. YES | 0.868827 | 0.879147 | 0.763826 | __label__eng_Latn | 0.342345 | 0.612956 |
# Classification Problem (Assignment 5, TAO, Spring 2019)
### Instructor: Dr. Pawan Kumar
## Classification Problem
### Given a set of input vectors corresponding to objects (or features), decide which of the N classes each object belongs to.
### Reference (some figures for illustration below are taken from this):
1. SVM without Tears, https://med.nyu.edu/chibi/sites/default/files/chibi/Final.pdf
## Possible Methods for Classification Problem
1. Perceptron
2. SVM
3. Neural Networks
## SVM: Support Vector Machines
We will briefly describe the idea behind support vector machines for classification problems. We first describe the linear SVM used to classify linearly separable data, and then we describe how this algorithm can handle non-linearly separable data via so-called kernels. Kernels are functions that map non-linearly separable data to a (usually higher-dimensional) space where the data becomes linearly separable. Let us quickly start with the linear SVM.
### Linear SVM for two class classification
We recall the separating hyperplane theorem: if there are two non-intersecting convex sets, then there exists a hyperplane that separates them. This is the assumption we will make: we assume that the convex hulls of the given data form two convex sets, one per class, such that a hyperplane exists that separates the two convex hulls.
### Main idea of SVM:
Not just find a hyperplane (as in perceptrons), but find one that keeps a good (largest possible) gap from the data samples of each class. This gap is popularly called the margin.
### Illustration of problem, and kewords
Consider the dataset of cancer and normal patients, hence it is a two class problem. Let us visualize the data:
Let us notice the following about the given data:
1. There are two classes: blue shaded stars and red shaded circles.
2. The input vector is two dimensional, hence it is of the form $(x_1, x_2).$
3. Here $x_1, x_2$ are the values of the two gene features: Gene X and Gene Y.
4. Here the red line is the linear classifier or hyperplane that separates the given input data.
5. There are two dotted lines: one passes through a blue star point, and another dotted line passes through two red shaded circle points.
6. The distance between the two dotted lines is the gap or margin that we mentioned before.
7. The goal of SVM, compared to the perceptron, is to maximize this margin.
## Formulation of Optimization Model for Linear SVM
We now assume that the red hyperplane above with maximum margin is given by $$w \cdot x + b = 0.$$
We further assume that the dotted lines above are given by $$w \cdot x + b = -1, \quad w \cdot x + b = +1.$$
The reason why we can choose such a hyperplane is shown in the slides of Lecture 16 of TAO. Since we want to maximize the margin, i.e., the distance between the dotted lines, we recall the formula for the distance between parallel planes. Let $D$ denote this distance; then
$$D = 2/ \| w \|.$$
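For completeness, here is a short standard derivation of this formula (not from the lecture slides): pick a point $x_1$ on the plane $w \cdot x + b = +1$ and a point $x_2$ on $w \cdot x + b = -1$ such that $x_1 - x_2$ is parallel to $w$. Projecting onto the unit normal $w/\|w\|$ gives
$$
D = \frac{w}{\|w\|} \cdot (x_1 - x_2) = \frac{(w \cdot x_1 + b) - (w \cdot x_2 + b)}{\|w\|} = \frac{1 - (-1)}{\|w\|} = \frac{2}{\|w\|}.
$$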
So, to maximize the margin $D,$ we need to minimize $\| w \|.$ For convenience when writing algorithms (so that the objective is differentiable), we note that minimizing $\| w \|$ is equivalent to minimizing $1/2 \| w \|^2.$ Hence
### Objective function: $\dfrac{1}{2} \| w \|^2$
For our hyperplane to classify correctly, we need the points of one class on one side of the corresponding dotted line, more concretely
$$w \cdot x + b \leq -1,$$
and we want the samples of the other class (the red ones) to be on the other side of the other dotted line, i.e.,
$$ w \cdot x + b \geq +1.$$
Let us now look what constraints mean in figure:
With this we are all set to write the constraints for our optimization model for SVM.
### Constraints:
$$
\begin{align}
&w \cdot x_i + b \leq -1, \quad \text{if}~y_i = -1\\
&w \cdot x_i + b \geq +1, \quad \text{if}~y_i = +1
\end{align}
$$
Hence, the objective function together with the constraints gives us the full model. The data for which the label $y_i$ is $-1$ satisfies $w \cdot x + b \leq -1,$ and the data for which the label $y_i$ is $+1$ satisfies $w \cdot x + b \geq +1.$ Both conditions can be combined to get
$$
\begin{align}
y_i (w \cdot x_i + b) \geq 1
\end{align}
$$
## Optimization Model (Primal Form):
$$
\begin{align}
\text{minimize} \quad & \dfrac{1}{2} \| w \|^2 \\
\text{subject to} \quad &y_i (w \cdot x_i + b) \geq 1, \quad i=1,\dots,m,
\end{align}
$$
where $m$ is the number of samples $x_i,$ and $w \in \mathbb{R}^n.$
$\color{red}{Question:}$ Prove that the primal objective is convex.
$\color{blue}{Answer}:$ Put your answer here.
$\color{red}{Question}:$ Write the primal problem in standard form.
$\color{blue}{Answer}:$ Put your answer here.
## Optimization Model (Dual Form)
The dual form was derived in lecture 16:
$$
\begin{align*}
&\text{maximize} \quad \sum_{i=1}^m{\lambda_i} - \dfrac{1}{2} \sum_{i=1}^m \lambda_i \lambda_j y_i y_j (x_i \cdot x_j) \\
&\text{subject to} \quad \lambda_i \geq 0, \quad \sum_{i=1}^m{\lambda_i y_i} = 0, \quad i = 1, \dots, m
\end{align*},
$$
where $\lambda_i$ is the Lagrange multiplier. We claim that strong duality holds.
$\color{red}{Question:}$ Show the derivation of dual.
$\color{blue}{Answer:}$ Put your answer here (use latex)
$\color{red}{Question:}$ Prove that strong duality holds.
$\color{blue}{Answer:}$ Put your answer here (use latex)
$\color{red}{Question:}$ Prove that the dual objective is concave.
$\color{blue}{Answer}:$ Put your answer here.
$\color{red}{Question}:$ Write the dual problem in standard form.
$\color{blue}{Answer}:$ Put your answer here.
## Soft Margin SVM
In a variant of soft margin SVM, we assume that some data samples may be outliers or noise, and this prevents the data from being linearly separable. For example, see the figure below
In the figure, we see that
- We believe that two red samples and one blue sample are noisy or outliers.
- To take into account that real-life data is noisy, we decide to allow some of the noisy data inside the margin.
- Let $\xi_i$ denote how far a data sample is from the middle plane (the margin is the area between the dotted lines).
- For example, one of the noisy red data points is 0.6 away from the middle red plane.
- We introduce this slack variable $\xi_i \geq 0$ for each data sample $x_i.$
## Optimization Model: Primal Soft-Margin
We can then write the primal soft-margin optimization model as follows:
$$
\begin{align*}
&\text{minimize} \quad \dfrac{1}{2} \| w \|^2 + C \sum_{i=1}^m \xi_i \\
&\text{subject to} \quad y_i (w \cdot x_i + b) \geq 1 - \xi_i, \quad \xi_i \geq 0, \quad i = 1, \dots, m.
\end{align*}
$$
## Optimization Model: Dual Soft-Margin
We can also write the dual form of soft-margin SVM as follows:
$$
\begin{align*}
\text{Maximize} \quad &\sum_{i=1}^m \lambda_i - \dfrac{1}{2} \sum_{i,j=1}^m \lambda_i \lambda_j \: y_i y_j \: x_i \cdot x_j \\
\text{subject to} \quad &0 \leq \lambda_i \leq C, \quad i=1, \dots, m, \\
&\sum_{i=1}^m \lambda_i y_i = 0.
\end{align*}
$$
$\color{red}{Question:}$ Show the derivation of dual.
$\color{blue}{Answer:}$ Put your answer here (use latex)
$\color{red}{Question:}$ List advantages of dual over primal.
$\color{blue}{Answer:}$ Put your answer here (use latex)
# Kernels in SVM
## Non-Linear Classifiers
- For nonlinear data, we may map the data to a higher dimensional feature space where it is separable. See the figure below:
Such non-linear transformation can be implemented more effectively using the dual formulation.
- If we solve the dual form of linear SVM, then the predictions is given by
$$
\begin{align*}
f(x) &= \text{sign}(w \cdot x + b) \\
w &= \sum_{i=1}^m \alpha_i y_i x_i
\end{align*}
$$
If we assume that we did some transform $\Phi,$ then the classifier is given by
$$
\begin{align*}
f(x) &= \text{sign} (w \cdot \Phi(x) + b) \\
w &= \sum_{i=1}^m \alpha_i y_i \Phi(x_i)
\end{align*}
$$
If we substitute $w$ in $f(x),$ we observe that
$$
\begin{align*}
f(x) = \text{sign} \left ( \sum_{i=1}^m \alpha_i y_i \, \Phi(x_i) \cdot \Phi(x) + b \right) = \text{sign} \left( \sum_{i=1}^m \alpha_i y_i \, K(x_i, x) + b \right)
\end{align*}
$$
Note that computing dot products such as $\Phi(x_i) \cdot \Phi(x)$ can be expensive if $\Phi(x)$ is a long vector! The important observation is to define this dot product, i.e., the kernel $K(x,z)$, such that the computation happens in the input space rather than in the feature space. We can see this with the following example:
$$
\begin{align*}
K(x, z) &= (x \cdot z)^2 = \left( \begin{bmatrix}
x_{(1)} \\ x_{(2)}
\end{bmatrix} \cdot \begin{bmatrix}
z_{(1)} \\ z_{(2)}
\end{bmatrix} \right)^2 = (x_{(1)} z_{(1)} + x_{(2)} z_{(2)})^2 \\
&= x_{(1)}^2 z_{(1)}^2 + 2x_{(1)} z_{(1)} x_{(2)} z_{(2)} + x_{(2)}^2 z_{(2)}^2 = \begin{bmatrix}
x_{(1)}^2 \\ \sqrt{2} x_{(1)} x_{(2)} \\ x_{(2)}^2
\end{bmatrix} \cdot \begin{bmatrix}
z_{(1)}^2 \\ \sqrt{2} z_{(1)} z_{(2)} \\ z_{(2)}^2
\end{bmatrix} \\
&= \Phi(x) \cdot \Phi(z)
\end{align*}
$$
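The identity above can be checked numerically. Here is a small sketch of my own (not part of the original assignment code) that compares the kernel evaluated in the input space with the dot product of the feature maps:
```python
# Quick numerical check that the quadratic kernel (x . z)^2 equals
# the dot product of the feature maps Phi(x) and Phi(z) derived above.
import numpy as np

def phi(u):
    # Feature map for the 2D quadratic kernel
    return np.array([u[0]**2, np.sqrt(2)*u[0]*u[1], u[1]**2])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])

k_input_space = np.dot(x, z)**2            # computed in the input space (cheap)
k_feature_space = np.dot(phi(x), phi(z))   # computed in the feature space
print(k_input_space, k_feature_space)      # both equal 1.0 for this choice of x, z
```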
$\color{red}{Question:}$ Let the kernel be defined by $K(x,z) = (x \cdot z)^3.$ Define $\Phi(x).$ Assuming that one multiplication is 1 FLOP and one addition is 1 FLOP, how many FLOPs do you need to compute $K(x, z)$ in the input space versus the feature space?
$\color{blue}{Answer:}$ Write your answer in this cell.
## Optimization Model: Dual Soft Margin Kernel SVM
We can now write the dual form of soft-margin Kernel SVM as follows:
$$
\begin{align*}
\text{Maximize} \quad &\sum_{i=1}^m \lambda_i - \dfrac{1}{2} \sum_{i, \, j=1}^m \lambda_i \lambda_j \: y_i y_j \: \Phi(x_i) \cdot \Phi(x_j) \\
\text{subject to} \quad &0 \leq \lambda_i \leq C, \quad i=1, \dots, m, \\
&\sum_{i=1}^m \lambda_i y_i = 0.
\end{align*}
$$
## Solver for Optimization Problem: Quadratic Programming
We aspire to solve the above optimization problem using an existing quadratic programming library. But we have a problem: the standard libraries expect the quadratic optimization problem in a standard form that looks like the following:
$$
\begin{align*}
\text{minimize} \quad &\dfrac{1}{2} x^T P x + q^T x, \\
\text{subject to} \quad &Gx \leq h, \\
&Ax = b
\end{align*}
$$
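Before assembling the SVM matrices, it may help to see the solver interface on a toy problem. The following sketch (my own illustration with made-up numbers, not the assignment solution) solves a small QP in exactly this standard form with `cvxopt`:
```python
# Toy QP in standard form: minimize 1/2 x^T P x + q^T x  s.t.  Gx <= h, Ax = b
import numpy as np
import cvxopt
from cvxopt import solvers

P = cvxopt.matrix(np.array([[2., 0.], [0., 2.]]))    # positive definite
q = cvxopt.matrix(np.array([-2., -5.]))
G = cvxopt.matrix(np.array([[-1., 0.], [0., -1.]]))  # encodes x >= 0 as -x <= 0
h = cvxopt.matrix(np.zeros(2))
A = cvxopt.matrix(np.array([[1., 1.]]))              # x_1 + x_2 = 1
b = cvxopt.matrix(np.array([1.]))

sol = solvers.qp(P, q, G, h, A, b)
print(np.array(sol['x']))                            # optimal x
```
The dual soft-margin kernel SVM is mapped onto this interface in exactly the same way, as described next.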
# Dual Soft-Margin Kernel SVM in Standard QP: Assemble Matrices Vectors
To put the dual Kernel SVM in standard form, we need to set
- matrix $P$
- vector $x$
- vector $q$
- vector $h$
- vector $b$
- matrix $G$
- matrix $A$
### Matrix $P$
Let $$K(x_i, x_j) = \Phi(x_i) \cdot \Phi(x_j),$$ and set the $(i,j)$ entry of the matrix $P$ as $$P_{ij} = y_i y_j K(x_i,x_j)$$
## Vector $x$
Set $$x = \begin{bmatrix}
\lambda_1 \\
\lambda_2 \\
\vdots \\
\lambda_m
\end{bmatrix}
$$
## Vector $q$
Set $q \in \mathbb{R}^m$
$$ q =
\begin{bmatrix}
-1 \\ -1 \\ \vdots \\ -1
\end{bmatrix}
$$
## Matrix $A$
Set the matrix (in fact vector) $A$ as
$$
A = [y_1, y_2, \dots, y_m]
$$
## Vector $b$
In fact vector $b$ is a scalar here: $$b = 0$$
## Matrix $G$
$$
\begin{align*}
G = \begin{bmatrix}
1 & 0 & \dots & 0 \\
0 & 1 & \dots & 0 \\
\vdots & \ddots & \dots & \vdots \\
0 & 0 & \dots & 1 \\ \hline
-1 & 0 & \dots & 0 \\
\vdots & \ddots & \dots & \vdots \\
0 & 0 & \dots& -1
\end{bmatrix}
\end{align*}
$$
## Vector $h$
Set $h$ as
$$
h = \begin{bmatrix}
C \\
C \\
\vdots \\
C \\
0 \\
\vdots \\
0
\end{bmatrix}
$$
# Implementation of Kernel SVM
We are all set to try out coding the classifier using Kernel SVM. We will first import some libraries. Some of these libraries may not be available in your system. You may install them as follows:
- conda install numpy
- conda install -c conda-forge cvxopt
- sudo apt-get install python-scipy python-matplotlib
Try a Google search if these do not work.
```python
import pylab as pl
import cvxopt as cvxopt
from cvxopt import solvers
import numpy as np
```
We will now define a class: svm
This class will have the following functions:
- __init__: where we will define initial default parameters
- *construct_kernel*: here we will define some kernels such as polynomial and RBF (radial basis or Gaussian kernel)
- *train_kernel_svm*: Here we will train, i.e., we will call a quadratic programming solver from cvxopt
- *classify*: Here we will test our classifier
$\color{red}{Question:}$ Fill the TODO below.
```python
class svm:
def __init__(self, kernel='linear', C=None, sigma=1., degree=1., threshold=1e-5):
self.kernel = kernel
if self.kernel == 'linear':
self.degree = 1.
self.kernel = 'poly'
self.C = C
self.sigma = sigma
self.threshold = threshold
self.degree = degree
def construct_kernel(self, X):
self.K = np.dot(X, X.T)
if self.kernel == 'poly':
self.K = (1. + 1./self.sigma*self.K)**self.degree
elif self.kernel == 'rbf':
self.xsquared = (np.diag(self.K)*np.ones((1, self.N))).T
b = np.ones((self.N, 1))
self.K -= 0.5*(np.dot(self.xsquared, b.T) +
np.dot(b, self.xsquared.T))
self.K = np.exp(self.K/(2.*self.sigma**2))
def train_kernel_svm(self, X, targets):
self.N = np.shape(X)[0]
self.construct_kernel(X)
# Assemble the matrices for the constraints
P = TODO
q = TODO
G = TODO
h = TODO
A = TODO
b = TODO
# Call the the quadratic solver of cvxopt library.
sol = cvxopt.solvers.qp(cvxopt.matrix(P), cvxopt.matrix(q), cvxopt.matrix(
G), cvxopt.matrix(h), cvxopt.matrix(A), cvxopt.matrix(b))
# Get the Lagrange multipliers out of the solution dictionary
lambdas = np.array(sol['x'])
# Find the (indices of the) support vectors, which are the vectors with non-zero Lagrange multipliers
self.sv = np.where(lambdas > self.threshold)[0]
self.nsupport = len(self.sv)
print ("Number of support vectors = ", self.nsupport)
# Keep the data corresponding to the support vectors
self.X = X[self.sv, :]
self.lambdas = lambdas[self.sv]
self.targets = targets[self.sv]
self.b = np.sum(self.targets)
for n in range(self.nsupport):
self.b -= np.sum(self.lambdas*self.targets *
np.reshape(self.K[self.sv[n], self.sv], (self.nsupport, 1)))
self.b /= len(self.lambdas)
if self.kernel == 'poly':
def classify(Y, soft=False):
K = (1. + 1./self.sigma*np.dot(Y, self.X.T))**self.degree
self.y = np.zeros((np.shape(Y)[0], 1))
for j in range(np.shape(Y)[0]):
for i in range(self.nsupport):
self.y[j] += self.lambdas[i]*self.targets[i]*K[j, i]
self.y[j] += self.b
if soft:
return self.y
else:
return np.sign(self.y)
elif self.kernel == 'rbf':
def classify(Y, soft=False):
K = np.dot(Y, self.X.T)
c = (1./self.sigma * np.sum(Y**2, axis=1)
* np.ones((1, np.shape(Y)[0]))).T
c = np.dot(c, np.ones((1, np.shape(K)[1])))
aa = np.dot(self.xsquared[self.sv],
np.ones((1, np.shape(K)[0]))).T
K = K - 0.5*c - 0.5*aa
K = np.exp(K/(2.*self.sigma**2))
self.y = np.zeros((np.shape(Y)[0], 1))
for j in range(np.shape(Y)[0]):
for i in range(self.nsupport):
self.y[j] += self.lambdas[i]*self.targets[i]*K[j, i]
self.y[j] += self.b
if soft:
return self.y
else:
return np.sign(self.y)
else:
print ("Error: Invalid kernel")
return
self.classify = classify
```
$\color{red}{Question:}$ How $b$ was computed?
$\color{blue}{Answer:}$ Write your answer here.
# Test the Classifier
In the following, we will now test our classifier.
```python
from importlib import reload
import pylab as pl
import numpy as np
iris = np.loadtxt('iris_proc.data', delimiter=',')
imax = np.concatenate((iris.max(axis=0)*np.ones((1, 5)),
iris.min(axis=0)*np.ones((1, 5))), axis=0).max(axis=0)
target = -np.ones((np.shape(iris)[0], 3), dtype=float)
indices = np.where(iris[:, 4] == 0)
target[indices, 0] = 1.
indices = np.where(iris[:, 4] == 1)
target[indices, 1] = 1.
indices = np.where(iris[:, 4] == 2)
target[indices, 2] = 1.
train = iris[::2, 0:4]
traint = target[::2]
test = iris[1::2, 0:4]
testt = target[1::2]
```
```python
# Training the machines
output = np.zeros((np.shape(test)[0], 3))
# Train for the first set of train data
#svm0 = svm(kernel='linear')
#svm0 = svm(kernel='linear')
#svm0 = svm.svm(kernel='poly',C=0.1,degree=1)
svm0 = svm(kernel='rbf')
svm0.train_kernel_svm(train, np.reshape(traint[:, 0], (np.shape(train[:, :2])[0], 1)))
output[:, 0] = svm0.classify(test, soft=True).T
# Train for the second set of train data
#svm1 = svm(kernel='linear')
#svm1 = svm(kernel='linear')
#svm1 = svm(kernel='poly',degree=3)
svm1 = svm(kernel='rbf')
svm1.train_kernel_svm(train, np.reshape(traint[:, 1], (np.shape(train[:, :2])[0], 1)))
output[:, 1] = svm1.classify(test, soft=True).T
# Train for the third set of train data
#svm2 = svm(kernel='linear')
#svm2 = svm(kernel='linear')
#svm2 = svm(kernel='poly',C=0.1,degree=1)
svm2 = svm(kernel='rbf')
svm2.train_kernel_svm(train, np.reshape(traint[:, 2], (np.shape(train[:, :2])[0], 1)))
output[:, 2] = svm2.classify(test, soft=True).T
```
```python
# Make a decision about which class
# Pick the one with the largest margin
bestclass = np.argmax(output, axis=1)
print (bestclass)
print (iris[1::2, 4])
print("Misclassified locations:")
err = np.where(bestclass != iris[1::2, 4])[0]
print (err)
print (float(np.shape(testt)[0] - len(err)) /
(np.shape(testt)[0]), "test accuracy")
```
## Further Questions
$\color{red}{Question}:$ The IRIS dataset has three classes. Explain by observing the code above how the two class SVM was modified for multiclass classification.
$\color{red}{Question}:$ Write mathematical expressions for the kernels defined above.
$\color{red}{Question}:$ Play with different kernels. Which kernel (linear, polynomial, or RBF) gives the best test accuracy?
| 40125c279c02455b81a704f05d8bbc5791e69b49 | 27,140 | ipynb | Jupyter Notebook | .ipynb_checkpoints/Assignment-5-Question-checkpoint.ipynb | keshavbnsl102/TAO-SVM-assignment | d451f4a9a6e942ea8c6b2b9425c6e3c200cd225d | [
"Apache-2.0"
]
| null | null | null | .ipynb_checkpoints/Assignment-5-Question-checkpoint.ipynb | keshavbnsl102/TAO-SVM-assignment | d451f4a9a6e942ea8c6b2b9425c6e3c200cd225d | [
"Apache-2.0"
]
| null | null | null | .ipynb_checkpoints/Assignment-5-Question-checkpoint.ipynb | keshavbnsl102/TAO-SVM-assignment | d451f4a9a6e942ea8c6b2b9425c6e3c200cd225d | [
"Apache-2.0"
]
| null | null | null | 34.310999 | 467 | 0.526013 | true | 5,423 | Qwen/Qwen-72B | 1. YES
2. YES | 0.843895 | 0.880797 | 0.7433 | __label__eng_Latn | 0.959482 | 0.565268 |
jacobian, hessian
```python
% matplotlib inline
import sympy as sy
import math
sy.init_printing(use_latex='mathjax')
import matplotlib as mpl
style_name = 'bmh' #bmh
mpl.style.use(style_name)
np.set_printoptions(precision=4, linewidth =150)
style = plt.style.library[style_name]
style_colors = [ c['color'] for c in style['axes.prop_cycle'] ]
sy.init_printing()
```
```python
x_1, x_2 = sy.symbols('x_1 x_2')
y = x_1 ** 3 + x_2 ** 3 + 2 * x_1 ** 2 + 3 * x_2 ** 2 - x_1 * x_2 + 2 * x_1 + 4 * x_2
# jacobian
x1_diff = sy.diff(y, x_1)
x2_diff = sy.diff(y, x_2)
f_f1_diff = sy.lambdify((x_1, x_2), x1_diff, 'numpy')
f_f2_diff = sy.lambdify((x_1, x_2), x2_diff, 'numpy')
np.array([f_f1_diff(1,2), f_f2_diff(1,2)])
```
array([ 7, 27])
```python
# The Hessian is a symmetric matrix
x1_x1_diff = sy.diff(y, x_1, x_1)
x1_x2_diff = sy.diff(y, x_1, x_2)
x2_x1_diff = sy.diff(y, x_2, x_1)
x2_x2_diff = sy.diff(y, x_2, x_2)
f_00_diff = sy.lambdify((x_1, x_2), x1_x1_diff, 'numpy')
f_01_diff = sy.lambdify((x_1, x_2), x1_x2_diff, 'numpy')
f_10_diff = sy.lambdify((x_1, x_2), x2_x1_diff, 'numpy')
f_11_diff = sy.lambdify((x_1, x_2), x2_x2_diff, 'numpy')
np.array([[f_00_diff(1,2), f_01_diff(1,2)],[f_10_diff(1,2), f_11_diff(1,2)]])
```
array([[10, -1],
[-1, 18]])
Taylor series
$$ T_f(x) = \sum_{n=0}^{\infty}{\frac{f^{n}(a)}{n!}}(x-a)^{n} $$
$$
T_f(x,y) = \sum_{k=0}^{\infty}\sum_{i=0}^{k}{\frac{(x - a)^{k-i}(y - b)^i}{(k - i)!i!}}\left.
{\frac{\partial^kf}{\partial x^{k-i}\partial y^i}}\right|_{(a,b)}
$$
- Single-variable case
$$
\begin{align}
& T_f(x) = f(x^*) + \frac{d f(x^*)}{dx} (x - x^*) + \frac{1}{2!} \frac{d^2 f(x^*)}{dx^2}(x - x^*)^2 + R\\[1pt]
& (R \approx error)
\end{align}
$$
<br>
- Multivariable case
$$
\begin{align}
& T_f(x, y) = f(x^*, y^*) + \frac{\partial f}{\partial x}(x - x^*) + \frac{\partial f}{\partial y}(y - y^*) + \\[1pt]
& \frac{1}{2} \left[ \frac{\partial^2 f}{\partial x^2}(x - x^*)^2 + 2 \frac{\partial^2 f}{\partial x \partial y} (x - x^*)(y - y^*) + \frac{\partial^2 f}{\partial y^2}(y - y^*)^2 \right] + R\\[1pt]
& (R \approx error)
\end{align}
$$
<br>
- Expressed using the gradient and Hessian
    - Single-variable case
$$
\Delta f = f'(x^*)d + {\frac{1}{2}}f''(x^*)d^2 + R \;\; ,(d = x - x^*)
$$
    - Multivariable case
$$
\Delta f = \triangledown f\left(\mathbf{x}^*\right)^{\text{T}}\mathbf{d} + \frac{1}{2}\mathbf{d}^{\text{T}} \mathbf{H}\left(\mathbf{x}^*\right)\mathbf{d} + R
$$
<br>
- A Taylor series (Taylor expansion) expresses an unknown function f(x) as an approximating polynomial, as in the formulas above.
- Note that the approximation holds only near x = a. That is, the farther x moves away from a, the larger the error of taking f(x) = p(x). On the other hand, the higher the degree of the approximating polynomial, the better it approximates f(x). (A numerical illustration of the multivariable increment formula follows below.)
- Source: [다크프로그래머](http://darkpgmr.tistory.com/59)
- Explanation: [Taylor's theorem for functions of two variables](http://math.kongju.ac.kr/calculus/data/chap9/s6/s6.htm)
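The multivariable form above can be checked numerically with the gradient and Hessian computed earlier for the two-variable function. This is a small sketch of my own; the gradient and Hessian values are the ones printed in the cells above:
```python
# Second-order Taylor increment for y = x1^3 + x2^3 + 2*x1^2 + 3*x2^2 - x1*x2 + 2*x1 + 4*x2
# around the point (1, 2), using Delta f ~ grad.d + 0.5 * d^T H d.
import numpy as np

def f(x1, x2):
    return x1**3 + x2**3 + 2*x1**2 + 3*x2**2 - x1*x2 + 2*x1 + 4*x2

x_star = np.array([1.0, 2.0])
grad = np.array([7.0, 27.0])                 # Jacobian at (1, 2), computed above
H = np.array([[10.0, -1.0], [-1.0, 18.0]])   # Hessian at (1, 2), computed above

d = np.array([0.1, -0.2])                    # a small step away from x_star
delta_exact = f(*(x_star + d)) - f(*x_star)
delta_taylor = grad @ d + 0.5 * d @ H @ d
print('exact increment  : %.6f' % delta_exact)
print('Taylor increment : %.6f' % delta_taylor)
```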
```python
x = sy.Symbol('x')
# f = ln(1 + x)
x0 = 1
f = x ** 4 + 2 * x ** 3 + 3 * x ** 2 # x^4 + 2x^3 + 3x^2
i = 3
f.diff(x, i), f.diff(x, i).subs(x, x0)
```
```python
from sympy.functions import sin, cos, ln
plt.style.use("ggplot")
def factorial(n):
if n <= 0:
return 1
else:
return n * factorial(n - 1)
def taylor(function, x0, n, x = sy.Symbol('x')):
i = 0
p = 0
while i <= n:
p += (function.diff(x, i).subs(x, x0))/ (factorial(i)) * (x - x0) ** i
i += 1
return p
def plot(f, x0 = 0, n = 9, by = 2, x_lims = [-10, 10], y_lims = [-10, 10], npoints = 800, x = sy.Symbol('x')):
x1 = np.linspace(x_lims[0], x_lims[1], npoints)
    # Plot Taylor series approximations of increasing order around x0 (Maclaurin series when x0 = 0)
for j in range(1, n + 1, by):
func = taylor(f, x0, j)
taylor_lambda = sy.lambdify(x, func, "numpy")
print('Taylor expansion at n=' + str(j), func)
plt.plot(x1, taylor_lambda(x1), label = 'Order '+ str(j))
    # Plot the actual function
func_lambda = sy.lambdify(x, f, "numpy")
plt.plot(x1, func_lambda(x1), label = 'f(x)', color = 'black', linestyle = '--')
plt.xlim(x_lims)
plt.ylim(y_lims)
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.grid(True)
plt.title('Taylor series approximation')
plt.show()
```
```python
x = sy.Symbol('x')
# f = ln(1 + x)
f = sin(x)
plot(f)
```
### General concepts of numerical algorithms
- Numerical optimization algorithms generally follow the steps below (a minimal sketch in code follows this list)
    - Step 1. Estimate a reasonable starting point $\mathbf{x}^{(0)}$, set $k=0$
    - Step 2. Compute a search direction $\mathbf{d}^{(k)}$
    - Step 3. Check for convergence
        - First-order necessary condition for a local minimum
$$
\color{Red}{f'(x^*) =0}
$$
        - In addition, the second derivative of the function at $x^*$ must be positive
        - A point that satisfies only the necessary condition may be
            - a local minimum
            - a local maximum
            - an inflection point
    - Step 4. Compute a positive step size $\alpha_k$
    - Step 5. Compute the new design variables $\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} + \alpha_k \mathbf{d}^{(k)}$, $k=k+1$
        - (vector addition produces a new vector whose direction differs from the original one) Go back to Step 2
    - Therefore, the computation of $\alpha_k$ and $\mathbf{d}^{(k)}$ is what matters
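Here is a minimal sketch (my own illustration, not from the original notes) of these five steps, using steepest descent with a fixed step size on the polynomial used earlier in this notebook:
```python
# Steepest descent on f(x) = x^4 + 2x^3 + 3x^2 following Steps 1-5 above.
import numpy as np

def f(x):
    return x**4 + 2*x**3 + 3*x**2

def df(x):
    return 4*x**3 + 6*x**2 + 6*x

x = 2.0           # Step 1: starting point x^(0)
alpha = 0.01      # Step 4: (here fixed) positive step size
for k in range(500):
    if abs(df(x)) < 1e-8:     # Step 3: convergence check (first-order condition)
        break
    d = -df(x)                # Step 2: search direction = negative gradient
    x = x + alpha * d         # Step 5: update the design variable, go back to Step 2
print('iterations: %d, x*: %.6f, f(x*): %.6f' % (k, x, f(x)))
```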
#### 경사도 수치계산 실습
$$
\mu = 0,\; \sigma^2 = 0.2\\
f(x) = {\frac{1}{\sigma\sqrt{2\pi}}}exp\left(-{\frac{(x - \mu)^2}{2\sigma^2}}\right)
$$
```python
x = sy.symbols('x')
m, v = 0, 0.2          # mean and variance of the Gaussian density
s = np.sqrt(v)         # standard deviation
sy.simplify(sy.diff(1 / (s * np.sqrt(2 * np.pi)) * sy.exp(-1*(x - m) ** 2 / (2 * v)), x))
```
```python
def f(x):
m, v =0, 0.2
s = np.sqrt(v)
return 1 / (s * np.sqrt(2 * np.pi)) * np.exp(-1*(x - m) ** 2 / (2 * v))
def df_anal(x):
"""
    The analytic derivative obtained from the sympy result above
"""
return -1 * 4.46031029038193 * x * np.exp(-1 * 2.5 * x ** 2)
def df_numer(x):
"""
    Evaluate the derivative numerically.
    Compare the forward, backward, and central difference schemes and visualize their errors.
"""
    h = 0.1
    #result = (f(x+h) - f(x)) / h        # forward difference
    #result = (f(x) - f(x-h)) / h        # backward difference
    result = (f(x+h/2) - f(x-h/2)) / h   # central difference
return result
```
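As a quick check (my own sketch, reusing `f` and `df_anal` from the cell above), the forward and central differences can be compared against the analytic derivative at a single point:
```python
# Compare forward- and central-difference errors at x0 = 0.5 with step h = 0.1.
x0 = 0.5
h = 0.1
fwd = (f(x0 + h) - f(x0)) / h          # forward difference
ctr = (f(x0 + h/2) - f(x0 - h/2)) / h  # central difference
exact = df_anal(x0)
print('forward error : %.6f' % abs(fwd - exact))
print('central error : %.6f' % abs(ctr - exact))
```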
```python
x = np.linspace(-5, 5, 200)
plt.plot(x, f(x), lw=3, color=style_colors[0], label=r"$f(x)$: Gaussian pdf ($\mu=0, \sigma^2=0.2$)")
plt.plot(x, df_anal(x), lw=10, color=style_colors[1], alpha=0.3 , label=r"$\frac{df}{dx}$")
plt.plot(x, df_numer(x), color=style_colors[1], lw=3, label=r"Numerical derivative")
plt.legend(fontsize=20)
plt.suptitle("Numerical derivative", fontsize=20)
plt.show()
```
```python
```
| f4f3317a0a8aaf7f85c48af0f0b42d82820a9fc6 | 238,917 | ipynb | Jupyter Notebook | 02_optimazation/optimization (Kino).ipynb | seokyeongheo/study-math-with-python | 18266dc137e46ea299cbd89241e474d7fd610122 | [
"MIT"
]
| null | null | null | 02_optimazation/optimization (Kino).ipynb | seokyeongheo/study-math-with-python | 18266dc137e46ea299cbd89241e474d7fd610122 | [
"MIT"
]
| null | null | null | 02_optimazation/optimization (Kino).ipynb | seokyeongheo/study-math-with-python | 18266dc137e46ea299cbd89241e474d7fd610122 | [
"MIT"
]
| 1 | 2018-06-07T05:57:02.000Z | 2018-06-07T05:57:02.000Z | 575.703614 | 117,772 | 0.93905 | true | 2,843 | Qwen/Qwen-72B | 1. YES
2. YES | 0.909907 | 0.845942 | 0.769729 | __label__kor_Hang | 0.787748 | 0.626671 |
# Lecture 10 - Priors on Function Spaces: Gaussian Processes
## Objectives:
+ Express prior knowledge/beliefs about model outputs using Gaussian process (GP)
+ Sample functions from the probability measure defined by GP
## Readings:
Please read the following before lecture:
+ [Chapter 1 from C.E. Rasmussen's textbook on Gaussian processes](http://www.gaussianprocess.org/gpml/chapters/RW1.pdf).
+ (Optional video lecture?) [Neil Lawrence's video lecture on Introduction to Gaussian processes](https://www.youtube.com/watch?v=ewJ3AxKclOg).
### Modeling prior knowledge in Gaussian processes
An experienced scientist or engineer typically has some knowledge about a function of interest $f(\cdot)$ even before observing it anywhere. For example, he/she might know that $f(\cdot)$ cannot exceed, or be smaller than, certain values or that it is periodic or that it shows translational invariance. Such knowledge is known as the *prior knowledge*.
Prior knowledge may be *precise*, e.g., the response is twice differentiable, or it may be vague, e.g., the probability that the periodicity is $T$ is $p(T)$. When one is dealing with vague prior knowledge, he/she may refer to it as *prior belief*. Almost always, prior knowledge a field quantity is a _prior belief_.
Prior beliefs about $f(\cdot)$ can be modeled by a probability measure on the space of functions from $\mathcal{X}$ to $\mathbb{R}$.
A Gaussian process (GP) is a great way to represent this probability measure.
### Introduction to Gaussian Processes.
In many engineering problems we have to deal with functions that are unknown.
For example, in oil reservoir modeling, the permeability tensor or the porosity of
the ground are, generally, unknown quantities.
Therefore, we would like to treat them as if they where random.
That is, we have to talk about probabilities on function spaces.
Such a thing is achieved via the theory of *random fields*.
However, instead of developing the generic mathematical theory of random fields,
we concentrate on a special class of random fields, the *Gaussian random fields*
or *Gaussian processes*.
A Gaussian process (GP) is a generalization of a multivariate Gaussian distribution to
*infinite* dimensions.
It essentially defines a probability measure on a function space.
When we say that $f(\cdot)$ is a GP, we mean that it is a random variable that is actually
a function.
Mathematically, we write:
\begin{equation}
f(\cdot) | m(\cdot), k(\cdot, \cdot) \sim \mbox{GP}\left(f(\cdot) | m(\cdot), k(\cdot, \cdot) \right),
\end{equation}
where
$m:\mathbb{R}^d \rightarrow \mathbb{R}$ is the *mean function* and
$k:\mathbb{R}^d \times \mathbb{R}^d \rightarrow \mathbb{R}$ is the *covariance function*.
So, compared to a multivariate normal we have:
+ A random function $f(\cdot)$ instead of a random vector $\mathbf{x}$.
+ A mean function $m(\cdot)$ instead of a mean vector $\boldsymbol{\mu}$.
+ A covariance function $k(\cdot,\cdot)$ instead of a covariance matrix $\mathbf{\Sigma}$.
But, what does this definition actually mean? Actually, it gets its meaning from the multivariate Gaussian distribution. Here is how:
+ Let $\mathbf{x}_{1:n}=\{\mathbf{x}_1,\dots,\mathbf{x}_n\}$ be $n$ points in $\mathbb{R}^d$.
+ Let $\mathbf{f}\in\mathbb{R}^n$ be the outputs of $f(\cdot)$ on each one of the elements of $\mathbf{x}_{1:n}$, i.e.,
$$
\mathbf{f} =
\left(
\begin{array}{c}
f(\mathbf{x}_1)\\
\vdots\\
f(\mathbf{x}_n)
\end{array}
\right).
$$
+ The fact that $f(\cdot)$ is a GP with mean and covariance function $m(\cdot)$ and $k(\cdot,\cdot)$, respectively, *means* that the vector of outputs $\mathbf{f}$ at
the arbitrary inputs $\mathbf{x}_{1:n}$ is the following multivariate normal:
$$
\mathbf{f} | \mathbf{x}_{1:n}, m(\cdot), k(\cdot, \cdot) \sim \mathcal{N}\left(\mathbf{f} | \mathbf{m}(\mathbf{x}_{1:n}), \mathbf{K}(\mathbf{x}_{1:n}, \mathbf{x}_{1:n}) \right),
$$
with mean vector:
$$
\mathbf{m}(\mathbf{x}_{1:n}) =
\left(
\begin{array}{c}
m(\mathbf{x}_1)\\
\vdots\\
m(\mathbf{x}_n)
\end{array}
\right),
$$
and covariance matrix:
$$
\mathbf{K}(\mathbf{x}_{1:n},\mathbf{x}_{1:n}) = \left(
\begin{array}{ccc}
k(\mathbf{x}_1,\mathbf{x}_1) & \dots & k(\mathbf{x}_1, \mathbf{x}_n)\\
\vdots & \ddots & \vdots\\
k(\mathbf{x}_n, \mathbf{x}_1) & \dots & k(\mathbf{x}_n, \mathbf{x}_n)
\end{array}
\right).
$$
Now that we have defined a Gaussian process (GP), let us talk about we encode our prior beliefs into a GP.
We do so through the mean and covariance functions.
### Interpretation of the mean function.
What is the meaning of $m(\cdot)$?
Well, it is quite easy to grasp.
For any point $\mathbf{x}\in\mathbb{R}^d$, $m(\mathbf{x})$ should give us the value we believe is more probable for
$f(\mathbf{x})$.
Mathematically, $m(\mathbf{x})$ is nothing more than the expected value of the random variable $f(\mathbf{x})$.
That is:
\begin{equation}
m(\mathbf{x}) = \mathbb{E}[f(\mathbf{x})].
\end{equation}
The mean function can be any arbitrary function. Essentially, it tracks generic trends in the response as the input is varied. In practice, we try and make a suitable choice for the mean function that is easy to work with. Such choices include:
+ zero, i.e.,
$$
m(\mathbf{x}) = 0.
$$
+ a constant, i.e.,
$$
m(\mathbf{x}) = c,
$$
where $c$ is a parameter.
+ linear, i.e.,
$$
m(\mathbf{x}) = c_0 + \sum_{i=1}^dc_ix_i,
$$
where $c_i, i=0,\dots,d$ are parameters.
+ using a set of $m$ basis functions (generalized linear model), i.e.,
$$
m(\mathbf{x}) = \sum_{i=1}^mc_i\phi_i(\mathbf{x}),
$$
where $c_i$ and $\phi_i(\cdot)$ are parameters and basis functions.
+ generalized polynomial chaos (gPC), i.e.,
using a set of $d$ polynomial basis functions up to a given degree $\rho$
$m(\mathbf{x}) = \sum_{i=1}^{d}c_i\phi_i(\mathbf{x})$
where the basis functions $\phi_i$ are mutually orthonormal with respect to some
measure $\mu$:
$$
\int \phi_{i}(\mathbf{x}) \phi_{j}(\mathbf{x}) d\mu(\mathbf{x}) = \delta_{ij}
$$
+ and many other possibilities (a small sketch of the generalized linear case follows this list).
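As a small illustration (my own sketch, not from the lecture notes), a generalized linear mean function with monomial basis functions on a one-dimensional input can be evaluated as follows:
```python
# Generalized linear mean function m(x) = sum_i c_i * phi_i(x) with phi_i(x) = x**i.
import numpy as np

def mean_model(X, c):
    # Design matrix whose columns are the basis functions evaluated at the inputs
    Phi = np.hstack([X**i for i in range(len(c))])
    return np.dot(Phi, c)

X = np.linspace(0, 1, 5)[:, None]      # five 1D input points
c = np.array([1.0, -2.0, 0.5])         # coefficients of 1 - 2x + 0.5x^2
print(mean_model(X, c))
```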
### Interpretation of the covariance function.
What is the meaning of $k(\cdot, \cdot)$?
This concept is considerably more challenging than the mean.
Let's try to break it down:
+ Let $\mathbf{x}\in\mathbb{R}^d$. Then $k(\mathbf{x}, \mathbf{x})$ is the variance of the random variable $f(\mathbf{x})$, i.e.,
$$
\mathbb{V}[f(\mathbf{x})] = \mathbb{E}\left[\left(f(\mathbf{x}) - m(\mathbf{x}) \right)^2 \right].
$$
In other words, we believe that there is about $95\%$ probability that the value of
the random variable $f(\mathbf{x})$ fall within the interval:
$$
\left((m(\mathbf{x}) - 2\sqrt{k(\mathbf{x}, \mathbf{x})}, m(\mathbf{x}) + 2\sqrt{k(\mathbf{x},\mathbf{x})}\right).
$$
+ Let $\mathbf{x},\mathbf{x}'\in\mathbb{R}^d$. Then $k(\mathbf{x}, \mathbf{x}')$ tells us how the random variables $f(\mathbf{x})$ and
$f(\mathbf{x}')$ are correlated. In particular, $k(\mathbf{x},\mathbf{x}')$ is equal to the covariance
of the random variables $f(\mathbf{x})$ and $f(\mathbf{x}')$, i.e.,
$$
k(\mathbf{x}, \mathbf{x}') = \mathbb{C}[f(\mathbf{x}), f(\mathbf{x}')]
= \mathbb{E}\left[
\left(f(\mathbf{x}) - m(\mathbf{x})\right)
\left(f(\mathbf{x}') - m(\mathbf{x}')\right)
\right].
$$
Essentially, a covariance function (or covariance kernel) defines a nearness or similarity measure on the input space. We cannot choose any arbitrary function of two variables as a covariance kernel. How we go about choosing a covariance function is discussed in great detail [here](http://www.gaussianprocess.org/gpml/chapters/RW4.pdf). We briefly discuss some properties of covariance functions here and then we shall move onto a discussion of what kind of prior beliefs we can encode through the covariance function.
### Properties of the covariance function
+ There is one property of the covariance function that we can note right away.
Namely, that for any $\mathbf{x}\in\mathbb{R}^d$, $k(\mathbf{x}, \mathbf{x}) > 0$.
This is easily understood from the interpretation of $k(\mathbf{x}, \mathbf{x})$ as the variance
of the random variable $f(\mathbf{x})$.
+ $k(\mathbf{x}, \mathbf{x}')$ becomes smaller as the distance between $\mathbf{x}$ and $\mathbf{x}'$ grows.
+ For any choice of points $\mathbf{X}\in\mathbb{R}^{n\times d}$, the covariance matrix: $\mathbf{K}(\mathbf{X}, \mathbf{X})$ has
to be positive-definite (so that the vector of outputs $\mathbf{f}$ is indeed a multivariate
normal distribution).
### Encoding prior beliefs in the covariance function.
+ **Modeling regularity**. The choice of the covariance function controls the regularity properties of the functions sampled from the probability induced by the GP. For example, if the covariance kernel chosen is the squared exponential kernel, which is infinitely differentiable, then the functions sampled from the GP will also be infinitely differentiable.
+ **Modeling invariance** If the covariance kernel is invariant w.r.t. a transformation $T$, i.e., $k(\mathbf{x}, T\mathbf{x}')=k(T\mathbf{x}, \mathbf{x}')=k(\mathbf{x}, \mathbf{x}')$ then samples from the GP will be invariant w.r.t. the same transformation.
+ Other possibilities include periodicity, additivity etc.
### Squared exponential covariance function
The squared exponential (SE) is the most commonly used covariance function.
Its formula is as follows:
$$
k(\mathbf{x}, \mathbf{x}') = v\exp\left\{-\frac{1}{2}\sum_{i=1}^d\frac{(x_i - x_i')^2}{\ell_i^2}\right\},
$$
where $v,\ell_i>0, i=1,\dots,d$ are parameters.
The interpretation of the parameters is as follows:
+ $v$ is known as the *signal strength*. The bigger it is, the more the GP $f(\cdot)$ will vary
about the mean.
+ $\ell_i$ is known as the *length scale* of the $i$-th input dimension of the GP.
The bigger it is, the smoother the samples of $f(\cdot)$ appear along the $i$-th input dimension.
Let's experiment with this for a while:
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns # Comment this out if you don't have it
sns.set_style('white')
sns.set_context('talk')
import GPy
# The input dimension
dim = 1
# The variance of the covariance kernel
variance = 1.
# The lengthscale of the covariance kernel
ell = 0.3
# Generate the covariance object
k = GPy.kern.RBF(dim, variance=variance, lengthscale=ell)
# Print it
print k
# and plot it
k.plot()
```
### Example 1: Plotting a covariance function
Remember:
> The covariance function $k(x,x')$ measures the similarity of $f(x)$ and $f(x')$.
The interactive tools provided draw $k(\mathbf{x}, \mathbf{x}'=0)$ in one and two dimensions.
Use them to answer the following questions:
+ What is the intuitive meaning of $\ell$?
+ What is the intuitive meaning of $v$?
+ There are many other covariance functions that we could be using. Try changing ``RBF`` to ``Exponential``. What changes do you notice?
+ Repeat the previous steps on a 2D covariance function.
+ If you still have time, try a couple of other covariances, e.g., ``Matern32``, ``Matern52``.
+ If you still have time, explore ``help(GPy.kern)``.
```python
from ipywidgets import interactive
def plot_kernel(variance=1., ell=0.3):
k = GPy.kern.RBF(dim, variance=variance, lengthscale=ell)
k.plot()
plt.ylim(0, 10)
interactive(plot_kernel, variance=(1e-3, 10., 0.01), ell=(1e-3, 10., 0.01))
```
```python
from ipywidgets import interactive
def plot_kernel(variance=1., ell1=0.3, ell2=0.3):
k = GPy.kern.RBF(2, ARD=True, variance=variance,
lengthscale=[ell1, ell2]) # Notice that I just changed the dimension here
k.plot()
interactive(plot_kernel, variance=(1e-3, 10., 0.01), ell1=(1e-3, 10., 0.01), ell2=(1e-3, 10., 0.01))
```
### Example 2: Properties of the covariance matrix
Let $\mathbf{x}_{1:n}$ be an arbitrary set of input points. The covariance matrix $\mathbf{K}\in\mathbb{R}^{n\times n}$ defined by:
$$
\mathbf{K}\equiv\mathbf{K}(\mathbf{x}_{1:n}, \mathbf{x}_{1:n}) = \left(
\begin{array}{ccc}
k(\mathbf{x}_1,\mathbf{x}_1) & \dots & k(\mathbf{x}_1, \mathbf{x}_n)\\
\vdots & \ddots & \vdots\\
k(\mathbf{x}_n, \mathbf{x}_1) & \dots & k(\mathbf{x}_n, \mathbf{x}_n)
\end{array}
\right),
$$
must be [positive definite](https://en.wikipedia.org/wiki/Positive-definite_matrix). Mathematically this can be expressed in two equivalent ways:
+ For all vectors $\mathbf{v}\in\mathbb{R}^T$, we have:
$$
\mathbf{v}^t\mathbf{K}\mathbf{v} > 0,
$$
+ All the eigenvalues of $\mathbf{K}$ are positive.
Using the code provided:
+ Verify that the the sum of two covariance functions is a valid covariance function.
+ Verify that the product of two covariance functions is a valid covariance function.
+ Is the following function a covariance function:
$$
k(x, x') = k_1(x, x')k_2(x, x') + k_3(x, x') + k_4(x, x'),
$$
where all $k_i(x, x')$'s are covariance functions.
+ What about:
$$
k(x, x') = k_1(x, x') / k_2(x, x')?
$$
```python
# Number of dimensions
dim = 1
# Number of input points
n = 20
# The lengthscale
ell = .1
# The variance
variance = 1.
# The covariance function
k1 = GPy.kern.RBF(dim, lengthscale=ell, variance=variance)
# Draw a random set of inputs points in [0, 1]^dim
X = np.random.rand(n, dim)
# Evaluate the covariance matrix on these points
K = k1.K(X)
# Compute the eigenvalues of this matrix
eig_val, eig_vec = np.linalg.eigh(K)
# Plot the eigenvalues (they should all be positive)
print '> plotting eigenvalues of K'
print '> they must all be positive'
fig, ax = plt.subplots()
ax.plot(np.arange(1, n+1), eig_val, '.')
ax.set_xlabel('$i$', fontsize=16)
ax.set_ylabel('$\lambda_i$', fontsize=16)
```
```python
# Now create another (arbitrary) covariance function
k2 = GPy.kern.Exponential(dim, lengthscale=0.2, variance=2.1)
# Create a new covariance function that is the sum of these two:
k_new = k1 + k2
# Let's plot the new covariance
fig, ax = plt.subplots()
k1.plot(ax=ax, label='$k_1$')
k2.plot(ax=ax, label='$k_2$')
k_new.plot(ax=ax, label='$k_1 + k_2$')
plt.legend(fontsize=16);
```
```python
# If this is a valid covariance function, then it must
# be positive definite
# Compute the covariance matrix:
K_new = k_new.K(X)
# and its eigenvalues
eig_val_new, eig_vec_new = np.linalg.eigh(K_new)
# Plot the eigenvalues (they should all be positive)
print '> plotting eigenvalues of K'
print '> they must all be positive'
fig, ax = plt.subplots()
ax.plot(np.arange(1, n+1), eig_val_new, '.')
ax.set_xlabel('$i$', fontsize=16)
ax.set_ylabel('$\lambda_i$', fontsize=16);
```
### Example 3: Sampling from a Gaussian Process.
Samples from a Gaussian process are functions. But functions are infinite-dimensional objects, so
we cannot sample them directly from a GP.
However, if we are interested in the values of $f(\cdot)$ at any given set of test points $\mathbf{x}_{1:n} = \{\mathbf{x}_1,\dots,\mathbf{x}_n\}$, then we have that:
$$
\mathbf{f} | \mathbf{x}_{1:n}, m(\cdot), k(\cdot, \cdot) \sim \mathcal{N}\left(\mathbf{f} | \mathbf{m}(\mathbf{x}_{1:n}), \mathbf{K}(\mathbf{x}_{1:n}, \mathbf{x}_{1:n}) \right),
$$
where all the quantities have been introduced above.
This is exactly what we exploit below.
We pick a dense set of points $\mathbf{x}_{1:n}\in\mathbb{R}^{n\times d}$ and
sample the value of the GP, $\mathbf{f} = (f(\mathbf{x}_1),\dots,f(\mathbf{x}_n))$ on these points.
We saw above that the probability density of $\mathbf{f}$ is just a multivariate normal
with a mean vector that is specified from the mean function and a covariance matrix
that is specified by the covariance function.
Therefore, all we need to know is how to sample from the multivariate normal.
This is how we do it:
+ Compute the Cholesky of $\mathbf{L}$:
$$
\mathbf{K} = \mathbf{L}\mathbf{L}^T.
$$
+ Draw $n$ random samples $\mathbf{z} = (z_1,\dots,z_n)$ independently from a standard normal.
+ Get one sample by:
$$
\mathbf{f} = \mathbf{m} + \mathbf{L}\mathbf{z}.
$$
```python
# To gaurantee reproducibility
np.random.seed(123456)
# Number of test points
num_test = 10
# Pick a covariance function
k = GPy.kern.RBF(input_dim=1, variance=1., lengthscale=.1)
# Pick a mean function
mean_func = lambda(x): np.zeros(x.shape)
# Pick a bunch of points over which you want to sample the GP
X = np.linspace(0, 1, num_test)[:, None]
# Evaluate the mean function at X
m = mean_func(X)
# Compute the covariance function at these points
nugget = 1e-6 # This is a small number required for stability
C = k.K(X) + nugget * np.eye(X.shape[0])
# Compute the Cholesky of the covariance
# Notice that we need to do this only once
L = np.linalg.cholesky(C)
# Number of samples to take
num_samples = 3
# Take 3 samples from the GP and plot them:
fig, ax = plt.subplots()
# Plot the mean function
ax.plot(X, m)
for i in xrange(num_samples):
z = np.random.randn(X.shape[0], 1) # Draw from standard normal
f = m + np.dot(L, z) # f = m + L * z
ax.plot(X, f, color=sns.color_palette()[1], linewidth=1)
#ax.set_ylim(-6., 6.)
ax.set_xlabel('$x$', fontsize=16)
ax.set_ylabel('$y$', fontsize=16)
ax.set_ylim(-5, 5);
```
The thick solid line is the mean function and the thin lines are 3 samples of $f$. These don't look like functions yet. This is because we have used only 10 test points to represent the GP.
#### Questions
1. Edit the code above changing the number of test points ``num_test`` to 20, 50, 100. Rerun the example. How do your samples of f look like now? Do they look more like functions to you? Imagine that the true nature of the GP appears when these test points become infinitely dense.
2. Edit the code above and change the random seed to an arbitrary integer (just make up one). Rerun the example and notice how the sampled functions change.
3. Edit the code above and change the variance first to 0.1 and then to 5 each time rerunning the example. Notice the values on the vertical axis of the plot. What happens to the sampled functions as you do this? What does the variance parameter of the SE control?
4. Edit the code above and now change the length-scale parameter first to 0.05 and then to 1. What happens to the sampled functions as you do this? What does the length- scale parameter of the SE control?
5. Now set the variance and the length-scale back to their original values (1. and 0.1, respectively). Edit the code and change the mean function to:
```
mean_func = lambda(x): 5 * x
```
Re-run the example. What do you observe? Try a couple more. For example, try:
```
mean_func = lambda(x): np.sin(5 * np.pi * x)
```
6. So far, all the samples we have seen are smooth. There is this theorem that says that the samples of the GP will be as smooth as the covariance function we use. Since the SE covariance is infinitely smooth, the samples we draw are infinitely smooth. The [Matern 3-2 covariance function](https://en.wikipedia.org/wiki/Matérn_covariance_function) is twice differentiable. Edit the code and
change ``RBF`` to ``Matern32``. Rerun the example. How smooth are the samples now?
7. The exponential covariance function is continuous but not differentiable. Edit the code and change ``RBF`` to ``Exponential``. Rerun the example. How smooth are the samples now?
8. The covariance function can also be used to model invariances. The periodic exponential covariance function is... a periodic covariance function. Edit line 29 and change ``RBF`` to
```
k = GPy.kern.PeriodicMatern32(input_dim=1, variance=500., lengthscale=0.01, period=0.1)
```
Rerun the example. Do you notice the periodic pattern?
9. How can you encode the information that there are two lengthscales in $f(\cdot)$? There are many ways to do this.
Try summing or multiplying covariance functions.
| 1e636a3d3c563a03153d39e8b62073ffea2ca507 | 128,467 | ipynb | Jupyter Notebook | handouts/handout_10.ipynb | FKShi/uq-course | f8b01ce87472abaed29fa87754816b3b1dd7c353 | [
"MIT"
]
| 1 | 2022-02-20T16:32:35.000Z | 2022-02-20T16:32:35.000Z | handouts/handout_10.ipynb | FKShi/uq-course | f8b01ce87472abaed29fa87754816b3b1dd7c353 | [
"MIT"
]
| null | null | null | handouts/handout_10.ipynb | FKShi/uq-course | f8b01ce87472abaed29fa87754816b3b1dd7c353 | [
"MIT"
]
| null | null | null | 162.616456 | 21,740 | 0.868021 | true | 5,711 | Qwen/Qwen-72B | 1. YES
2. YES | 0.795658 | 0.828939 | 0.659552 | __label__eng_Latn | 0.986861 | 0.370691 |
# Using physics informed neural networks (PINNs) to solve parabolic PDEs
In this notebook, we illustrate physics informed neural networks (PINNs) to solve partial differential equations (PDEs) as proposed in
- Maziar Raissi, Paris Perdikaris, George Em Karniadakis. *Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations*. [arXiv 1711.10561](https://arxiv.org/abs/1711.10561)
- Maziar Raissi, Paris Perdikaris, George Em Karniadakis. *Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations*. [arXiv 1711.10566](https://arxiv.org/abs/1711.10566)
- Maziar Raissi, Paris Perdikaris, George Em Karniadakis. *Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations*. J. Comp. Phys. 378 pp. 686-707 [DOI: 10.1016/j.jcp.2018.10.045](https://www.sciencedirect.com/science/article/pii/S0021999118307125)
This notebook is partially based on another implementation of the PINN approach published on [GitHub by pierremtb](https://github.com/pierremtb/PINNs-TF2.0) as well as the original code, see [Maziar Raissi on GitHub](https://github.com/maziarraissi/PINNs).
[Open this notebook in Google Colab](https://colab.research.google.com/github/janblechschmidt/PDEsByNNs/blob/main/PINN_Solver.ipynb)
## Introduction
We describe the PINN approach for approximating the solution $u:[0,T] \times \mathcal{D} \to \mathbb{R}$ of an evolution equation
$$
\begin{align}
\partial_t u (t,x) + \mathcal{N}[u](t,x) &= 0, && (t,x) \in (0,T] \times \mathcal{D},\\
u(0,x) &= u_0(x) \quad && x \in \mathcal{D},
\end{align}
$$
where $\mathcal{N}$ is a nonlinear differential operator acting on $u$,
$\mathcal{D} \subset \mathbb{R}^d$ a bounded domain,
$T$ denotes the final time and
$u_0: \mathcal{D} \to \mathbb{R}$ the prescribed initial data.
Although the methodology allows for different types of boundary conditions, we restrict our discussion to the inhomogeneous Dirichlet case and prescribe
$$
\begin{align}
\hspace{7em} u(t,x) &= u_b(t,x) && \quad (t,x) \in (0,T] \times \partial \mathcal{D},
\end{align}
$$
where $\partial \mathcal{D}$ denotes the boundary of the domain $\mathcal{D}$ and $u_b: (0,T] \times \partial \mathcal{D} \to \mathbb{R}$ the given boundary data.
## Methodology
The method constructs a neural network approximation
$$
u_\theta(t,x) \approx u(t,x)
$$
of the solution of the nonlinear PDE, where $u_\theta :[0,T] \times \mathcal{D} \to \mathbb{R}$ denotes a function realized by a neural network with parameters $\theta$.
The continuous time approach for the parabolic PDE as described in ([Raissi et al., 2017 (Part I)](https://arxiv.org/abs/1711.10561)) is based on the (strong) residual of a given neural network approximation $u_\theta \colon [0,T] \times \mathcal{D} \to \mathbb{R} $ of the solution $u$, i.e.,
$$
\begin{align}
r_\theta (t,x) := \partial_t u_\theta (t,x) + \mathcal{N}[u_\theta] (t,x).
\end{align}
$$
To incorporate this PDE residual $r_\theta$ into a loss function to be minimized, PINNs require a further differentiation to evaluate the differential operators $\partial_t u_\theta$ and $\mathcal{N}[u_\theta]$.
Thus the PINN term $r_\theta$ shares the same parameters as the original network $u_\theta(t,x)$, but respects the underlying "physics" of the nonlinear PDE.
Both types of derivatives can be easily determined through automatic differentiation with current state-of-the-art machine learning libraries, e.g., TensorFlow or PyTorch.
The PINN approach for the solution of the initial and boundary value problem now proceeds by minimization of the loss functional
$$
\begin{align}
\phi_\theta(X) := \phi_\theta^r(X^r) + \phi_\theta^0(X^0) + \phi_\theta^b(X^b),
\end{align}
$$
where $X$ denotes the collection of training data and the loss function $\phi_\theta$ contains the following terms:
- the mean squared residual
$$
\begin{align*}
\phi_\theta^r(X^r) := \frac{1}{N_r}\sum_{i=1}^{N_r} \left|r_\theta\left(t_i^r, x_i^r\right)\right|^2
\end{align*}
$$
in a number of collocation points $X^r:=\{(t_i^r, x_i^r)\}_{i=1}^{N_r} \subset (0,T] \times \mathcal{D}$, where $r_\theta$ is the physics-informed neural network,
- the mean squared misfit with respect to the initial and boundary conditions
$$
\begin{align*}
\phi_\theta^0(X^0)
:=
\frac{1}{N_0}
\sum_{i=1}^{N_0} \left|u_\theta\left(t_i^0, x_i^0\right) - u_0\left(x_i^0\right)\right|^2
\quad \text{ and } \quad
\phi_\theta^b(X^b)
:=
\frac{1}{N_b}
\sum_{i=1}^{N_b} \left|u_\theta\left(t_i^b, x_i^b\right) - u_b\left(t_i^b, x_i^b\right)\right|^2
\end{align*}
$$
in a number of points $X^0:=\{(t^0_i,x^0_i)\}_{i=1}^{N_0} \subset \{0\} \times \mathcal{D}$ and $X^b:=\{(t^b_i,x^b_i)\}_{i=1}^{N_b} \subset (0,T] \times \partial \mathcal{D}$, where $u_\theta$ is the neural network approximation of the solution $u\colon[0,T] \times \mathcal{D} \to \mathbb{R}$.
Note that the training data $X$ consists entirely of time-space coordinates.
## Example: Burgers equation
To illustrate the PINN approach we consider the one-dimensional Burgers equation on the spatial domain $\mathcal{D} = [-1,1]$
$$
\begin{aligned}
\partial_t u + u \, \partial_x u - (0.01/\pi) \, \partial_{xx} u &= 0, \quad &&\quad (t,x) \in (0,1] \times (-1,1),\\
u(0,x) &= - \sin(\pi \, x), \quad &&\quad x \in [-1,1],\\
u(t,-1) = u(t,1) &= 0, \quad &&\quad t \in (0,1].
\end{aligned}
$$
This PDE arises in various disciplines such as traffic flow, fluid mechanics and gas dynamics, and can be derived from the Navier-Stokes equations, see
([Basdevant et al., 1986](https://www.researchgate.net/publication/222935980_Spectral_and_finite_difference_solutions_of_Burgers_equation)).
### 1. Import necessary packages and set problem specific data
This code runs with TensorFlow version `2.3.0`.
The implementation relies mainly on the scientific computing library [NumPy](https://numpy.org/doc/stable/user/whatisnumpy.html) and the machine learning library [TensorFlow](https://www.tensorflow.org/).
All computations were performed on an Intel i7 CPU (8th Gen) with 16 GByte DDR3 RAM (2133 MHz) within a couple of minutes.
```python
# Import TensorFlow and NumPy
import tensorflow as tf
import numpy as np
# Set data type
DTYPE='float32'
tf.keras.backend.set_floatx(DTYPE)
# Set constants
pi = tf.constant(np.pi, dtype=DTYPE)
viscosity = .01/pi
# Define initial condition
def fun_u_0(x):
return -tf.sin(pi * x)
# Define boundary condition
def fun_u_b(t, x):
n = x.shape[0]
return tf.zeros((n,1), dtype=DTYPE)
# Define residual of the PDE
def fun_r(t, x, u, u_t, u_x, u_xx):
return u_t + u * u_x - viscosity * u_xx
```
### 2. Generate a set of collocation points
We assume that the collocation points $X_r$ as well as the points for the initial time and boundary data $X_0$ and $X_b$ are generated by random sampling from a uniform distribution.
Although uniformly distributed data are sufficient in our experiments, the authors of
([Raissi et al., 2017 (Part I)](https://arxiv.org/abs/1711.10561))
employed a space-filling Latin hypercube sampling strategy ([Stein, 1987](https://www.tandfonline.com/doi/abs/10.1080/00401706.1987.10488205)).
Our numerical experiments indicate that this strategy slightly improves the observed convergence rate, but for simplicity the code examples accompanying this paper employ uniform sampling throughout.
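For readers who want to try the space-filling alternative, the following is a small sketch of my own (not part of the original code) of a basic Latin hypercube sample on the unit square; it is not used in the remainder of this notebook:
```python
# Basic Latin hypercube sampling in [0,1]^d: one stratum per sample and dimension,
# with the strata shuffled independently in each dimension.
import numpy as np

def latin_hypercube(n, d, seed=0):
    rng = np.random.default_rng(seed)
    samples = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        samples[:, j] = rng.permutation(samples[:, j])
    return samples

X_lhs = latin_hypercube(10, 2)
print(X_lhs)
```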
We choose training data of size $N_0 = N_b =50$ and $N_r=10000$.
```python
# Set number of data points
N_0 = 50
N_b = 50
N_r = 10000
# Set boundary
tmin = 0.
tmax = 1.
xmin = -1.
xmax = 1.
# Lower bounds
lb = tf.constant([tmin, xmin], dtype=DTYPE)
# Upper bounds
ub = tf.constant([tmax, xmax], dtype=DTYPE)
# Set random seed for reproducible results
tf.random.set_seed(0)
# Draw uniform sample points for initial boundary data
t_0 = tf.ones((N_0,1), dtype=DTYPE)*lb[0]
x_0 = tf.random.uniform((N_0,1), lb[1], ub[1], dtype=DTYPE)
X_0 = tf.concat([t_0, x_0], axis=1)
# Evaluate intitial condition at x_0
u_0 = fun_u_0(x_0)
# Boundary data
t_b = tf.random.uniform((N_b,1), lb[0], ub[0], dtype=DTYPE)
x_b = lb[1] + (ub[1] - lb[1]) * tf.keras.backend.random_bernoulli((N_b,1), 0.5, dtype=DTYPE)
X_b = tf.concat([t_b, x_b], axis=1)
# Evaluate boundary condition at (t_b,x_b)
u_b = fun_u_b(t_b, x_b)
# Draw uniformly sampled collocation points
t_r = tf.random.uniform((N_r,1), lb[0], ub[0], dtype=DTYPE)
x_r = tf.random.uniform((N_r,1), lb[1], ub[1], dtype=DTYPE)
X_r = tf.concat([t_r, x_r], axis=1)
# Collect boundary and inital data in lists
X_data = [X_0, X_b]
u_data = [u_0, u_b]
```
Next, we illustrate the collocation points (red circles) and the positions where the boundary and initial conditions will be enforced (cross marks, color indicates value).
```python
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(9,6))
plt.scatter(t_0, x_0, c=u_0, marker='X', vmin=-1, vmax=1)
plt.scatter(t_b, x_b, c=u_b, marker='X', vmin=-1, vmax=1)
plt.scatter(t_r, x_r, c='r', marker='.', alpha=0.1)
plt.xlabel('$t$')
plt.ylabel('$x$')
plt.title('Positions of collocation points and boundary data');
#plt.savefig('Xdata_Burgers.pdf', bbox_inches='tight', dpi=300)
```
### 3. Set up network architecture
In this example, adopted from
([Raissi et al., 2017 (Part I)](https://arxiv.org/abs/1711.10561)), we assume a feedforward neural network of the following structure:
- the input is scaled elementwise to lie in the interval $[-1,1]$,
- followed by 8 fully connected layers each containing 20 neurons and each followed by a hyperbolic tangent activation function,
- one fully connected output layer.
This setting results in a network with $3021$ trainable parameters (first hidden layer: $2 \cdot 20 + 20 = 60$; seven intermediate layers: each $20 \cdot 20 + 20 = 420$; output layer: $20 \cdot 1 + 1 = 21$).
```python
def init_model(num_hidden_layers=8, num_neurons_per_layer=20):
# Initialize a feedforward neural network
model = tf.keras.Sequential()
# Input is two-dimensional (time + one spatial dimension)
model.add(tf.keras.Input(2))
# Introduce a scaling layer to map input to [lb, ub]
scaling_layer = tf.keras.layers.Lambda(
lambda x: 2.0*(x - lb)/(ub - lb) - 1.0)
model.add(scaling_layer)
# Append hidden layers
for _ in range(num_hidden_layers):
model.add(tf.keras.layers.Dense(num_neurons_per_layer,
activation=tf.keras.activations.get('tanh'),
kernel_initializer='glorot_normal'))
# Output is one-dimensional
model.add(tf.keras.layers.Dense(1))
return model
```
### 4. Define routines to determine loss and gradient
In the following code cell, we define a function which evaluates the residual
$$
\begin{align}
r_\theta (t,x) := \partial_t u_\theta (t,x) + \mathcal{N}[u_\theta] (t,x).
\end{align}
$$
of the nonlinear PDE in the points $X_r = \{(t^r_i,x^r_i)\}_{i=1}^{N_r}$.
To compute the necessary partial derivatives we use the automatic differentiation capabilities of TensorFlow.
For the Burgers equation, this entails computing $\partial_t u_\theta$, $\partial_x u_\theta$ and $\partial_{xx} u_\theta$.
In TensorFlow, this is done via a `GradientTape`, see also the [documentation](https://www.tensorflow.org/api_docs/python/tf/GradientTape), which keeps track of the `watched` variables, in our case `t` and `x`, in order to compute the derivatives.
```python
def get_r(model, X_r):
# A tf.GradientTape is used to compute derivatives in TensorFlow
with tf.GradientTape(persistent=True) as tape:
# Split t and x to compute partial derivatives
t, x = X_r[:, 0:1], X_r[:,1:2]
# Variables t and x are watched during tape
# to compute derivatives u_t and u_x
tape.watch(t)
tape.watch(x)
# Determine residual
u = model(tf.stack([t[:,0], x[:,0]], axis=1))
# Compute gradient u_x within the GradientTape
# since we need second derivatives
u_x = tape.gradient(u, x)
u_t = tape.gradient(u, t)
u_xx = tape.gradient(u_x, x)
del tape
return fun_r(t, x, u, u_t, u_x, u_xx)
```
The next function computes the loss for our model
$$
\begin{align}
\phi_\theta(X) := \phi_\theta^r(X^r) + \phi_\theta^0(X^0) + \phi_\theta^b(X^b),
\end{align}
$$
as a function of the training data.
The collocation points are given by `X_r`, the initial and boundary data are contained in `X_data = [X_0, X_b]` and `u_data = [u_0, u_b]`.
```python
def compute_loss(model, X_r, X_data, u_data):
# Compute phi^r
r = get_r(model, X_r)
phi_r = tf.reduce_mean(tf.square(r))
# Initialize loss
loss = phi_r
# Add phi^0 and phi^b to the loss
for i in range(len(X_data)):
u_pred = model(X_data[i])
loss += tf.reduce_mean(tf.square(u_data[i] - u_pred))
return loss
```
The next function computes the gradient of the loss function $\phi_\theta$ with respect to the unknown variables in the model, also called `trainable_variables` in TensorFlow, i.e. $\nabla_\theta \phi_\theta$.
This is also done via a `GradientTape`, but now it keeps track of the parameters $\theta$ in our model, which can be accessed by `model.trainable_variables`.
```python
def get_grad(model, X_r, X_data, u_data):
with tf.GradientTape(persistent=True) as tape:
# This tape is for derivatives with
# respect to trainable variables
tape.watch(model.trainable_variables)
loss = compute_loss(model, X_r, X_data, u_data)
g = tape.gradient(loss, model.trainable_variables)
del tape
return loss, g
```
### 5. Set up optimizer and train model
Next we initialize the model, set the learning rate to the step function
$$
\delta(n) = 0.01 \, \textbf{1}_{\{n < 1000\}} + 0.001 \, \textbf{1}_{\{1000 \le n < 3000\}} + 0.0005 \, \textbf{1}_{\{3000 \le n\}}
$$
which decays in a piecewise constant fashion, and set up a `tf.keras.optimizer` to train the model.
```python
# Initialize model aka u_\theta
model = init_model()
# We choose a piecewise decay of the learning rate, i.e., the
# step size in the gradient descent type algorithm
# the first 1000 steps use a learning rate of 0.01
# from 1000 - 3000: learning rate = 0.001
# from 3000 onwards: learning rate = 0.0005
lr = tf.keras.optimizers.schedules.PiecewiseConstantDecay([1000,3000],[1e-2,1e-3,5e-4])
# Choose the optimizer
optim = tf.keras.optimizers.Adam(learning_rate=lr)
```
Train the model for $N=5000$ epochs (takes approximately 3 minutes).
Here, we set up a function `train_step()` which performs one training step.
*Note*: `@tf.function` is a so-called decorator in Python. This particular decorator redefines the function that follows, in our case `train_step`, as a TensorFlow graph, which may speed up the training significantly.
```python
from time import time
# Define one training step as a TensorFlow function to increase speed of training
@tf.function
def train_step():
# Compute current loss and gradient w.r.t. parameters
loss, grad_theta = get_grad(model, X_r, X_data, u_data)
# Perform gradient descent step
optim.apply_gradients(zip(grad_theta, model.trainable_variables))
return loss
# Number of training epochs
N = 5000
hist = []
# Start timer
t0 = time()
for i in range(N+1):
loss = train_step()
# Append current loss to hist
hist.append(loss.numpy())
# Output current loss after 50 iterates
if i%50 == 0:
print('It {:05d}: loss = {:10.8e}'.format(i,loss))
# Print computation time
print('\nComputation time: {} seconds'.format(time()-t0))
```
### Plot solution
```python
from mpl_toolkits.mplot3d import Axes3D
# Set up meshgrid
N = 600
tspace = np.linspace(lb[0], ub[0], N + 1)
xspace = np.linspace(lb[1], ub[1], N + 1)
T, X = np.meshgrid(tspace, xspace)
Xgrid = np.vstack([T.flatten(),X.flatten()]).T
# Determine predictions of u(t, x)
upred = model(tf.cast(Xgrid,DTYPE))
# Reshape upred
U = upred.numpy().reshape(N+1,N+1)
# Surface plot of solution u(t,x)
fig = plt.figure(figsize=(9,6))
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(T, X, U, cmap='viridis');
ax.view_init(35,35)
ax.set_xlabel('$t$')
ax.set_ylabel('$x$')
ax.set_zlabel('$u_\\theta(t,x)$')
ax.set_title('Solution of Burgers equation');
#plt.savefig('Burgers_Solution.pdf', bbox_inches='tight', dpi=300);
```
### Plot the evolution of loss
```python
fig = plt.figure(figsize=(9,6))
ax = fig.add_subplot(111)
ax.semilogy(range(len(hist)), hist,'k-')
ax.set_xlabel('$n_{epoch}$')
ax.set_ylabel('$\\phi_{n_{epoch}}$');
```
## Class implementation of PINNs
In this section, we implement PINNs as a class which can be used for further testing. Here, we derive the class `PINN_NeuralNet` from `tf.keras.Model`.
Required arguments are the lower bound `lb` and upper bound `ub`.
```python
# Define model architecture
class PINN_NeuralNet(tf.keras.Model):
""" Set basic architecture of the PINN model."""
def __init__(self, lb, ub,
output_dim=1,
num_hidden_layers=8,
num_neurons_per_layer=20,
activation='tanh',
kernel_initializer='glorot_normal',
**kwargs):
super().__init__(**kwargs)
self.num_hidden_layers = num_hidden_layers
self.output_dim = output_dim
self.lb = lb
self.ub = ub
# Define NN architecture
self.scale = tf.keras.layers.Lambda(
lambda x: 2.0*(x - lb)/(ub - lb) - 1.0)
self.hidden = [tf.keras.layers.Dense(num_neurons_per_layer,
activation=tf.keras.activations.get(activation),
kernel_initializer=kernel_initializer)
for _ in range(self.num_hidden_layers)]
self.out = tf.keras.layers.Dense(output_dim)
def call(self, X):
"""Forward-pass through neural network."""
Z = self.scale(X)
for i in range(self.num_hidden_layers):
Z = self.hidden[i](Z)
return self.out(Z)
```
Next, we derive a class `PINNSolver` which can be used as a base class.
It possesses two methods to solve the PDE:
1. the method `solve_with_TFoptimizer` uses a `TensorFlow` optimizer object as input, e.g., the `Adam` optimizer above;
2. the method `solve_with_ScipyOptimizer` resembles the L-BFGS approach proposed in the original paper, using the L-BFGS implementation provided by [`SciPy`](https://www.scipy.org/).
```python
import scipy.optimize
class PINNSolver():
def __init__(self, model, X_r):
self.model = model
# Store collocation points
self.t = X_r[:,0:1]
self.x = X_r[:,1:2]
# Initialize history of losses and global iteration counter
self.hist = []
self.iter = 0
def get_r(self):
with tf.GradientTape(persistent=True) as tape:
# Watch variables representing t and x during this GradientTape
tape.watch(self.t)
tape.watch(self.x)
# Compute current values u(t,x)
u = self.model(tf.stack([self.t[:,0], self.x[:,0]], axis=1))
u_x = tape.gradient(u, self.x)
u_t = tape.gradient(u, self.t)
u_xx = tape.gradient(u_x, self.x)
del tape
return self.fun_r(self.t, self.x, u, u_t, u_x, u_xx)
def loss_fn(self, X, u):
# Compute phi_r
r = self.get_r()
phi_r = tf.reduce_mean(tf.square(r))
# Initialize loss
loss = phi_r
# Add phi_0 and phi_b to the loss
for i in range(len(X)):
u_pred = self.model(X[i])
loss += tf.reduce_mean(tf.square(u[i] - u_pred))
return loss
def get_grad(self, X, u):
with tf.GradientTape(persistent=True) as tape:
# This tape is for derivatives with
# respect to trainable variables
tape.watch(self.model.trainable_variables)
loss = self.loss_fn(X, u)
g = tape.gradient(loss, self.model.trainable_variables)
del tape
return loss, g
def fun_r(self, t, x, u, u_t, u_x, u_xx):
"""Residual of the PDE"""
return u_t + u * u_x - viscosity * u_xx
def solve_with_TFoptimizer(self, optimizer, X, u, N=1001):
"""This method performs a gradient descent type optimization."""
@tf.function
def train_step():
loss, grad_theta = self.get_grad(X, u)
# Perform gradient descent step
optimizer.apply_gradients(zip(grad_theta, self.model.trainable_variables))
return loss
for i in range(N):
loss = train_step()
self.current_loss = loss.numpy()
self.callback()
def solve_with_ScipyOptimizer(self, X, u, method='L-BFGS-B', **kwargs):
"""This method provides an interface to solve the learning problem
using a routine from scipy.optimize.minimize.
(Tensorflow 1.xx had an interface implemented, which is no longer
supported in Tensorflow 2.xx.)
Type conversion is necessary since the SciPy routines are implemented in Fortran
and require 64-bit floats instead of 32-bit floats."""
def get_weight_tensor():
"""Function to return current variables of the model
as 1d tensor as well as corresponding shapes as lists."""
weight_list = []
shape_list = []
# Loop over all variables, i.e. weight matrices, bias vectors and unknown parameters
for v in self.model.variables:
shape_list.append(v.shape)
weight_list.extend(v.numpy().flatten())
weight_list = tf.convert_to_tensor(weight_list)
return weight_list, shape_list
x0, shape_list = get_weight_tensor()
def set_weight_tensor(weight_list):
"""Function which sets list of weights
to variables in the model."""
idx = 0
for v in self.model.variables:
vs = v.shape
# Weight matrices
if len(vs) == 2:
sw = vs[0]*vs[1]
new_val = tf.reshape(weight_list[idx:idx+sw],(vs[0],vs[1]))
idx += sw
# Bias vectors
elif len(vs) == 1:
new_val = weight_list[idx:idx+vs[0]]
idx += vs[0]
# Variables (in case of parameter identification setting)
elif len(vs) == 0:
new_val = weight_list[idx]
idx += 1
# Assign variables (Casting necessary since scipy requires float64 type)
v.assign(tf.cast(new_val, DTYPE))
def get_loss_and_grad(w):
"""Function that provides current loss and gradient
w.r.t the trainable variables as vector. This is mandatory
for the LBFGS minimizer from scipy."""
# Update weights in model
set_weight_tensor(w)
# Determine value of \phi and gradient w.r.t. \theta at w
loss, grad = self.get_grad(X, u)
# Store current loss for callback function
loss = loss.numpy().astype(np.float64)
self.current_loss = loss
# Flatten gradient
grad_flat = []
for g in grad:
grad_flat.extend(g.numpy().flatten())
# Gradient list to array
grad_flat = np.array(grad_flat,dtype=np.float64)
# Return value and gradient of \phi as tuple
return loss, grad_flat
return scipy.optimize.minimize(fun=get_loss_and_grad,
x0=x0,
jac=True,
method=method,
callback=self.callback,
**kwargs)
def callback(self, xr=None):
if self.iter % 50 == 0:
print('It {:05d}: loss = {:10.8e}'.format(self.iter,self.current_loss))
self.hist.append(self.current_loss)
self.iter+=1
def plot_solution(self, **kwargs):
N = 600
tspace = np.linspace(self.model.lb[0], self.model.ub[0], N+1)
xspace = np.linspace(self.model.lb[1], self.model.ub[1], N+1)
T, X = np.meshgrid(tspace, xspace)
Xgrid = np.vstack([T.flatten(),X.flatten()]).T
upred = self.model(tf.cast(Xgrid,DTYPE))
U = upred.numpy().reshape(N+1,N+1)
fig = plt.figure(figsize=(9,6))
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(T, X, U, cmap='viridis', **kwargs)
ax.set_xlabel('$t$')
ax.set_ylabel('$x$')
ax.set_zlabel('$u_\\theta(t,x)$')
ax.view_init(35,35)
return ax
def plot_loss_history(self, ax=None):
if not ax:
fig = plt.figure(figsize=(7,5))
ax = fig.add_subplot(111)
ax.semilogy(range(len(self.hist)), self.hist,'k-')
ax.set_xlabel('$n_{epoch}$')
ax.set_ylabel('$\\phi^{n_{epoch}}$')
return ax
```
### Burgers equation with L-BFGS
The following code cell shows how the new classes `PINN_NeuralNet` and `PINNSolver` can be used to solve the Burgers equation, this time using the `SciPy` implementation of L-BFGS (takes around 3 minutes).
```python
# Initialize model
model = PINN_NeuralNet(lb, ub)
model.build(input_shape=(None,2))
# Initialize PINN solver
solver = PINNSolver(model, X_r)
# Decide which optimizer should be used
#mode = 'TFoptimizer'
mode = 'ScipyOptimizer'
# Start timer
t0 = time()
if mode == 'TFoptimizer':
# Choose optimizer
lr = tf.keras.optimizers.schedules.PiecewiseConstantDecay([1000,3000],[1e-2,1e-3,5e-4])
optim = tf.keras.optimizers.Adam(learning_rate=lr)
solver.solve_with_TFoptimizer(optim, X_data, u_data, N=4001)
elif mode == 'ScipyOptimizer':
solver.solve_with_ScipyOptimizer(X_data, u_data,
method='L-BFGS-B',
options={'maxiter': 50000,
'maxfun': 50000,
'maxcor': 50,
'maxls': 50,
'ftol': 1.0*np.finfo(float).eps})
# Print computation time
print('\nComputation time: {} seconds'.format(time()-t0))
```
Plot solution and loss history.
```python
solver.plot_solution();
solver.plot_loss_history();
```
### Solution of a time-dependent Eikonal equation
As a second example we consider the one-dimensional Eikonal equation backward in time on the domain $\mathcal{D}=[-1,1]$
$$
\begin{align}
-\partial_t u(t,x) + |\nabla u|(t,x) &= 1,
\quad & &(t,x) \in [0,T) \times [-1,1],\\
u(T,x) &= 0, \quad & &x \in [-1,1],\\
u(t,-1) = u(t, 1) &= 0, \quad & & t \in [0,T).
\end{align}
$$
Note that the partial differential equation can be equally written as a Hamilton-Jacobi-Bellman equation, viz
$$
-\partial_t u(t,x) + \sup_{|c| \le 1} \{c \, \nabla u(t,x)\} = 1 \quad (t,x) \in [0,T) \times [-1,1],
$$
which characterizes the solution of an optimal control problem seeking to minimize the distance from a point $(t,x)$ to the boundary $[0,T] \times \partial \mathcal{D} \cup \{T\} \times \mathcal{D}$.
As is easily verified, the solution is given by $u(t,x) = \min\{ 1 - t, 1 - |x| \}$.
The fact that the Eikonal equation runs backward in time is in accordance with its interpretation as the optimality condition of a control problem.
Note that this equation can be transformed into a forward evolution problem by the change of variables $\hat t = T - t$.
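Explicitly, setting $\hat u(\hat t, x) := u(T - \hat t, x)$ with $\hat t = T - t$ gives $\partial_{\hat t} \hat u = -\partial_t u$, so the transformed problem is the forward evolution
$$
\begin{align}
\partial_{\hat t} \hat u(\hat t,x) + |\nabla \hat u|(\hat t,x) &= 1,
\quad & &(\hat t,x) \in (0,T] \times [-1,1],\\
\hat u(0,x) &= 0, \quad & &x \in [-1,1],\\
\hat u(\hat t,-1) = \hat u(\hat t, 1) &= 0, \quad & &\hat t \in (0,T],
\end{align}
$$
in which the terminal condition has become an initial condition.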
#### Problem specific definitions
```python
N_0 = 50
N_b = 50
N_r = 10000
# Specify boundaries
lb = tf.constant([0., -1.], dtype=DTYPE)
ub = tf.constant([1., 1.], dtype=DTYPE)
def Eikonal_u_0(x):
n = x.shape[0]
return tf.zeros((n,1), dtype=DTYPE)
def Eikonal_u_b(t, x):
n = x.shape[0]
return tf.zeros((n,1), dtype=DTYPE)
```
#### Generate data
This code snippet is almost identical to the code from above.
We choose $N_b = 50$ and $N_0 = 50$ uniformly distributed initial value and boundary points and sample $N_r = 10000$ collocation points uniformly within the domain boundaries.
We derive a new solver with `PINNSolver` as base class.
```python
tf.random.set_seed(0)
# Final time data
t_0 = tf.ones((N_0,1), dtype=DTYPE) * ub[0]
x_0 = tf.random.uniform((N_0,1), lb[1], ub[1], dtype=DTYPE)
X_0 = tf.concat([t_0, x_0], axis=1)
u_0 = Eikonal_u_0(x_0)
# Boundary data
t_b = tf.random.uniform((N_b,1), lb[0], ub[0], dtype=DTYPE)
x_b = lb[1] + (ub[1] - lb[1]) * tf.keras.backend.random_bernoulli((N_b,1), 0.5, dtype=DTYPE)
X_b = tf.concat([t_b, x_b], axis=1)
u_b = Eikonal_u_b(t_b, x_b)
# Collocation points
t_r = tf.random.uniform((N_r,1), lb[0], ub[0], dtype=DTYPE)
x_r = tf.random.uniform((N_r,1), lb[1], ub[1], dtype=DTYPE)
X_r = tf.concat([t_r, x_r], axis=1)
# Collect boundary and initial data in lists
X_data = [X_0,X_b]
u_data = [u_0,u_b]
```
#### Derive Eikonal solver class
Now, we derive a solver for the Eikonal equation from the `PINNSolver` class. Since the Eikonal equation does not depend on second-order derivatives, we implement a new method `get_r` which avoids the computation of second derivatives.
```python
class EikonalPINNSolver(PINNSolver):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def fun_r(self, t, x, u, u_t, u_x, u_xx):
"""Residual of the PDE"""
return -u_t + tf.abs(u_x) - 1.
def get_r(self):
"""We update get_r since the Eikonal equation is a first-order equation.
Therefore, it is not necessary to compute second derivatives."""
with tf.GradientTape(persistent=True) as tape:
# Watch variables representing t and x during this GradientTape
tape.watch(self.t)
tape.watch(self.x)
# Compute current values u(t,x)
u = self.model(tf.stack([self.t[:,0], self.x[:,0]], axis=1))
u_x = tape.gradient(u, self.x)
u_t = tape.gradient(u, self.t)
del tape
return self.fun_r(self.t, self.x, u, u_t, u_x, None)
```
#### Setting up the neural network architecture
The neural network model chosen for this particular problem can be simpler.
We decided to use only two hidden layers with 20 neurons in each, resulting in $501$ unknown parameters (first hidden layer: $2 \cdot 20 + 20 = 60$; one intermediate layer: $20 \cdot 20 + 20 = 420$; output layer: $20 \cdot 1 + 1 = 21$).
To account for the lack of smoothness of the solution, we choose a non-differentiable activation function, although the hyperbolic tangent function seems to be able to approximate the kinks in the solution sufficiently well.
Here, we decided to use the *leaky rectified linear unit (leaky ReLU)* activation function
$$
\begin{align*}
\sigma(z) = \begin{cases}
z &\text{ if } z \ge 0,\\
0.1 \, z &\text{ otherwise},
\end{cases}
\end{align*}
$$
which displays a non-vanishing gradient when the unit is not active, i.e., when $z < 0$.
```python
# Initialize model
model = PINN_NeuralNet(lb, ub, num_hidden_layers=2,
activation=tf.keras.layers.LeakyReLU(alpha=0.1),
kernel_initializer='he_normal')
model.build(input_shape=(None,2))
# Initialize PINN solver
eikonalSolver = EikonalPINNSolver(model, X_r)
```
Start training (takes approximately 40 seconds).
```python
# Choose step sizes aka learning rate
lr = tf.keras.optimizers.schedules.PiecewiseConstantDecay([3000,7000],[1e-1,1e-2,1e-3])
# Solve with Adam optimizer
optim = tf.keras.optimizers.Adam(learning_rate=lr)
# Start timer
t0 = time()
eikonalSolver.solve_with_TFoptimizer(optim, X_data, u_data, N=10001)
# Print computation time
print('\nComputation time: {} seconds'.format(time()-t0))
```
Plot the results.
```python
eikonalSolver.plot_solution();
#plt.savefig('Eikonal_Solution.pdf', bbox_inches='tight', dpi=300)
eikonalSolver.plot_loss_history();
```
## Parameter identification setting
In this section, we want to demonstrate how the PINN approach could be used to solve partial differential equations with unknown parameters $\lambda$.
To be more precise, we consider the parametric Eikonal equation
$$
\begin{aligned}
- \partial_t u + \sup_{|c|\le1} c \cdot \nabla u = -\partial_t u + |\nabla u| &= \lambda^{-1}\\
u(T,x) &= 0\\
u(t,-1) = u(t,1) &= 0
\end{aligned}
$$
with unknown parameter $\lambda$.
The explicit solution is $u^*(t,x) = \lambda^{-1} \min\{1-t, 1-|x|\}$.
```python
lambd_star = 3.
def u_expl(t, x, lambd_star):
"""Explicit solution of the parametric Eikonal equation."""
y = 1./lambd_star
return y * tf.math.minimum(1-t, 1-tf.abs(x))
```
Next, we draw $N_d = 500$ uniformly distributed measurements of the exact solution.
```python
N_d = 500
noise = 0.0
# Draw points with measurements randomly
t_d = tf.random.uniform((N_d,1), lb[0], ub[0], dtype=DTYPE)
x_d = tf.random.uniform((N_d,1), lb[1], ub[1], dtype=DTYPE)
X_d = tf.concat([t_d, x_d], axis=1)
u_d = u_expl(t_d, x_d, lambd_star)
u_d += noise * tf.random.normal(u_d.shape, dtype=DTYPE)
# Copy original data (use list copies so that X_data and u_data are not modified)
X_param = list(X_data)
u_param = list(u_data)
```
Since both the boundary and initial time data are of Dirichlet type, we may handle the measured data exactly like $X_0$ and $X_b$.
Thus, we can simply append $X_d$ and $u_d$ to `X_param` and `u_param`.
Note that the approach illustrated here is slightly different from the one introduced in ([Raissi et al., 2017 (Part II)](https://arxiv.org/abs/1711.10566)) which takes only measurement data into account.
```python
X_param.append(X_d)
u_param.append(u_d)
```
Next, we derive a new network class which takes the additional parameter $\lambda$ into account.
Note that this parameter has to be part of the model in order to be learnt during training.
```python
class PINNIdentificationNet(PINN_NeuralNet):
def __init__(self, *args, **kwargs):
# Call init of base class
super().__init__(*args,**kwargs)
# Initialize variable for lambda
self.lambd = tf.Variable(1.0, trainable=True, dtype=DTYPE)
self.lambd_list = []
```
Now, we derive a new solver class which only updates the evaluation of the residual `fun_r` which now incorporates the $\lambda$-dependency.
In addition, we modify the `callback` function to store the iterates of $\lambda$ in a list `lambd_list` as well.
```python
class EikonalPINNIdentification(EikonalPINNSolver):
def fun_r(self, t, x, u, u_t, u_x, u_xx):
"""Residual of the PDE"""
return -u_t + tf.abs(u_x) - 1./self.model.lambd
def callback(self, xr=None):
lambd = self.model.lambd.numpy()
self.model.lambd_list.append(lambd)
if self.iter % 50 == 0:
print('It {:05d}: loss = {:10.8e} lambda = {:10.8e}'.format(self.iter, self.current_loss, lambd))
self.hist.append(self.current_loss)
self.iter += 1
def plot_loss_and_param(self, axs=None):
if axs:
ax1, ax2 = axs
self.plot_loss_history(ax1)
else:
ax1 = self.plot_loss_history()
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
color = 'tab:blue'
ax2.tick_params(axis='y', labelcolor=color)
ax2.plot(range(len(self.hist)), self.model.lambd_list,'-',color=color)
ax2.set_ylabel('$\\lambda^{n_{epoch}}$', color=color)
return (ax1,ax2)
```
Finally, we set up the model consisting of only two hidden layers employing the Leaky ReLU function with slope parameter $\alpha = 0.1$, i.e.,
$$
\sigma(z) = \begin{cases} z & \text{ if } z \ge 0\\ \alpha \, z & \text{ otherwise.} \end{cases}
$$
The training for $n_{epochs} = 10000$ epochs takes around 45 seconds.
```python
# Initialize model
model = PINNIdentificationNet(lb, ub, num_hidden_layers=2,
activation=tf.keras.layers.LeakyReLU(alpha=0.1),
kernel_initializer='he_normal')
model.build(input_shape=(None,2))
# Initialize solver
eikonalIdentification = EikonalPINNIdentification(model, X_r)
# Choose step sizes aka learning rate
lr = tf.keras.optimizers.schedules.PiecewiseConstantDecay([3000,7000],[1e-1,1e-2,1e-3])
# Solve with Adam optimizer
optim = tf.keras.optimizers.Adam(learning_rate=lr)
# Start timer
t0 = time()
eikonalIdentification.solve_with_TFoptimizer(optim, X_param, u_param, N=10001)
# Print computation time
print('\nComputation time: {} seconds'.format(time()-t0))
```
Plot the solution $u_\theta(t,x)$ and the evolution of the loss values $\phi^{n_\text{epoch}}$ and of the estimated parameter $\lambda^{n_\text{epoch}}$.
```python
ax = eikonalIdentification.plot_solution()
#plt.savefig('Eikonal_PI_Solution.pdf', bbox_inches='tight', dpi=300)
axs = eikonalIdentification.plot_loss_and_param()
#plt.savefig('Eikonal_PI_LossEvolution.pdf', bbox_inches='tight', dpi=300)
```
Finally, we compute the relative error of the identified parameter $\lambda$.
```python
lambd_rel_error = np.abs((eikonalIdentification.model.lambd.numpy()-lambd_star)/lambd_star)
print('Relative error of lambda ', lambd_rel_error)
```
The next code cell performs the previous training 5 times in order to give a more reliable picture of the convergence since the weight matrices are initialized randomly at each run (takes about 4 minutes).
```python
lambd_hist = []
loss_hist = []
for i in range(5):
print('{:s}\nStart of iteration {:d}\n{:s}'.format(50*'-',i,50*'-'))
# Initialize model
model = PINNIdentificationNet(lb, ub, num_hidden_layers=2,
activation=tf.keras.layers.LeakyReLU(alpha=0.1),
kernel_initializer='he_normal')
model.build(input_shape=(None,2))
# Initialize solver
eikonalIdentification = EikonalPINNIdentification(model, X_r)
# Choose step sizes aka learning rate
lr = tf.keras.optimizers.schedules.PiecewiseConstantDecay([3000,7000],[1e-1,1e-2,1e-3])
N=10001
# Solve with Adam optimizer
optim = tf.keras.optimizers.Adam(learning_rate=lr)
eikonalIdentification.solve_with_TFoptimizer(optim, X_param, u_param, N=N)
# Store evolution of lambdas
lambd_hist.append(model.lambd_list)
# Store evolution of losses
loss_hist.append(eikonalIdentification.hist)
```
Next, we generate a table printing the mean and standard deviations of the identified parameter $\lambda$ obtained for the previous runs.
```python
print(' i Mean of lambda Std. of lambda')
for i in [1000,2000,3000,4000,5000,6000,7000,8000,9000,10000]:
xi = np.array([ x[i] for x in lambd_hist])
print('{:05d} {:6.4e} {:6.4e}'.format(i, xi.mean(), xi.std()))
```
Next, we plot the five evolutions of $\lambda$ (dark gray), its mean (solid blue) and one standard deviation (shaded blue) together with the true value of $\lambda$ (dashed blue).
```python
fig = plt.figure()
ax = fig.add_subplot(111)
color = 'tab:blue'
Lambd = np.stack(lambd_hist)
lmean = Lambd.mean(axis=0)
lstd = Lambd.std(axis=0)
Lambd_RelError = np.abs((Lambd-lambd_star)/lambd_star)
lrange=range(len(lmean))
for i in range(len(lambd_hist)):
ax.plot(lrange, lambd_hist[i],'-',color='black', alpha=0.5)
ax.plot(lrange, lmean,'-',color=color)
ax.plot(lrange, lambd_star*np.ones_like(lmean),'--',color=color)
ax.fill_between(lrange,lmean-lstd,lmean+lstd, alpha=0.2)
ax.set_ylabel('$\\lambda^{n_{epoch}}$')
ax.set_xlabel('$n_{epoch}$')
ax.set_ylim([2.8,3.2])
#plt.savefig('Eikonal_PI_Evolution.pdf', bbox_inches='tight', dpi=300)
```
Finally, we plot the mean relative error of $\lambda$.
```python
fig = plt.figure()
ax = fig.add_subplot(111)
ax.semilogy(lrange,Lambd_RelError.mean(axis=0))
ax.fill_between(lrange,Lambd_RelError.mean(axis=0)-Lambd_RelError.std(axis=0),
Lambd_RelError.mean(axis=0)+Lambd_RelError.std(axis=0), alpha=0.2)
ax.set_xlabel('$n_{epoch}$')
ax.set_ylabel('$e_{\\lambda}^{rel}$')
ax.set_title('Mean relative error of $\\lambda$');
```
| b03c552032359a3d40f07b7e8a11d914c62b756e | 59,466 | ipynb | Jupyter Notebook | PINN_Solver.ipynb | hinofafa/PDESolveByNN | a0a8fc61e5d3003db344ad406e17c7c61534d6dd | [
"MIT"
]
| 52 | 2021-02-24T08:29:18.000Z | 2022-03-31T07:18:39.000Z | PINN_Solver.ipynb | hinofafa/DeepPDELearner | a0a8fc61e5d3003db344ad406e17c7c61534d6dd | [
"MIT"
]
| 1 | 2021-09-28T21:35:03.000Z | 2022-02-28T13:38:06.000Z | PINN_Solver.ipynb | hinofafa/DeepPDELearner | a0a8fc61e5d3003db344ad406e17c7c61534d6dd | [
"MIT"
]
| 29 | 2021-02-24T15:51:30.000Z | 2022-03-12T20:42:50.000Z | 35.166174 | 355 | 0.534995 | true | 11,327 | Qwen/Qwen-72B | 1. YES
2. YES | 0.933431 | 0.754915 | 0.704661 | __label__eng_Latn | 0.880711 | 0.475495 |
# Inner problem
This notebook will use Dedalus to create a minimal working example of the solution to the inner problem:
\begin{align}
(\Gamma - \partial_{x}^2) u &= 0
\end{align}
where the penalty mask $\Gamma$ satisfies
\begin{align}
x &\to +\infty & \Gamma &\to 0\\
x &\to -\infty & \Gamma &\to 1
\end{align}
and the physical solution $u$ satisfies
\begin{align}
x &\to +\infty & \partial_x u &\to 1\\
x &\to -\infty & u &\to 0
\end{align}
# Imports
```python
import numpy as np
import matplotlib.pyplot as plt
import dedalus.public as de
from matplotlib import rc
rc('font',**{'family':'serif','serif':['Computer Modern Roman']})
rc('text', usetex=True)
```
# Standard discontinuous mask
We will solve the problem numerically for a standard discontinuous mask, with finite boundary conditions at distance 10. We enforce Robin boundary conditions within the solid to match onto the analytical exponential behaviour.
One can change the mask function to calculate different solutions.
Masks chosen according to the optimal criterion will achieve zero displacement from the reference solution $u_0 = x$.
It is important to align the numerical grid with discontinuities in the mask function.
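For example, a smoothed mask can be used instead of the sharp step (a sketch only; `eps` is an assumed smoothing width and `smooth_mask` is our own name). With a smooth mask the grid no longer needs to be aligned with a discontinuity.
```python
# Sketch: a smooth tanh mask with an assumed smoothing width eps
eps = 0.1
def smooth_mask(x): return 0.5*(1 - np.tanh(x/eps))
```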
```python
def mask(x): return 1.0*(x<0)
```
```python
# Calculate penalized solution
Nx = [128,128]
xb0 = de.Chebyshev('x0',Nx[0],interval=(-10,0))
xb1 = de.Chebyshev('x1',Nx[1],interval=(0,10))
xbasis = de.Compound('x',[xb0,xb1])
domain = de.Domain([xbasis], grid_dtype=np.float64)
x = xbasis.grid(*domain.dealias)
Γ = domain.new_field(name='Γ',scales=domain.dealias)
Γ['g'] = mask(x)
inner = de.LBVP(domain, variables=['u','ux'])
inner.meta[:]['x']['dirichlet'] = True
inner.parameters['Γ'] = Γ
inner.add_equation("dx(ux) - Γ*u = 0")
inner.add_equation("ux - dx(u) = 0")
inner.add_bc("left(ux) - left(u) = 0")
inner.add_bc("right(ux) = 1")
inner_solver = inner.build_solver()
inner_solver.solve()
u, ux = inner_solver.state['u'], inner_solver.state['ux']
```
```python
# Plot penalized and reference solution
fig, ax = plt.subplots()
ax.plot(x,u['g'],label='Penalized')
ax.plot(x[x>0],x[x>0],'k--',label='Reference')
ax.fill_between(x[x<0],0,10,color='lightgray')
ax.set(aspect=1,xlim=[-10,10],ylim=[0,10],xlabel='$x$',ylabel='$u$')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.legend()
```
| e55b61a6d872e824bffb560966fad832dbea7932 | 5,028 | ipynb | Jupyter Notebook | inner-problem.ipynb | ericwhester/volume-penalty-code | 66a1745daeae2ad71bda0bc9299c8b8271a9871f | [
"MIT"
]
| 4 | 2020-03-14T19:40:40.000Z | 2022-03-18T03:02:33.000Z | inner-problem.ipynb | ericwhester/volume-penalty-code | 66a1745daeae2ad71bda0bc9299c8b8271a9871f | [
"MIT"
]
| null | null | null | inner-problem.ipynb | ericwhester/volume-penalty-code | 66a1745daeae2ad71bda0bc9299c8b8271a9871f | [
"MIT"
]
| null | null | null | 26.1875 | 235 | 0.551512 | true | 705 | Qwen/Qwen-72B | 1. YES
2. YES | 0.877477 | 0.819893 | 0.719437 | __label__eng_Latn | 0.731946 | 0.509826 |
```python
import numpy as np #Import the numerical library
import sympy as sym #symbolic math
import matplotlib.pyplot as plt #import only pyplot from matplotlib
import matplotlib.image as mpimg
from sympy.plotting import plot #to plot in 2 variables
from sympy.plotting import plot3d # for 3 variables
from sympy.plotting import plot3d_parametric_surface
from IPython.display import Image
import ipympl #To import the image manager
sym.init_printing() #enable Jupyter to display output symbolically
%matplotlib inline
```
```python
Image(filename='LAB1_2.png',width=300)
```
```python
Vin, Vo, V3, V2 = sym.symbols('V_{in}, V_o, V_+, V_-')
V2=V3
Ir1, Ir2, Ir3, Ir4 = sym.symbols('I_{R1}, I_{R2},I_{R3},I_{R4}')
R1, R2, R3, R4, RL = sym.symbols('R1, R2, R3, R4, R_{L}')
Irl = sym.Function('I_{R_L}')(Vin,RL) #Define the symbolic variable Irl as a function of Vin and RL
sym.pprint(Irl)
#Sum of currents at node 3 (+) is ZERO =>
eq_Irl = sym.Eq(Irl,Ir1+Ir3)
sym.pprint(eq_Irl)
```
I_{R_L}(V_{in}, R_{L})
I_{R_L}(V_{in}, R_{L}) = I_{R1} + I_{R3}
```python
#The current through R1 is: (Vo-V+)/R1 = Irl - Ir3
# Ir3 = (Vin - V+)/R3
Ir1 = (Vo-V3)/R1
Ir3= (Vin - V3)/R3
eq_Irl = sym.Eq(Irl,Ir1+Ir3)
sym.pprint(eq_Irl)
res=sym.solve(eq_Irl,(Vo-V3))
eq_Vo3=sym.Eq(Vo-V3,sym.expand(res[0]))
sym.pprint(eq_Vo3)
```
-V₊ + V_{in} -V₊ + Vₒ
I_{R_L}(V_{in}, R_{L}) = ──────────── + ────────
R₃ R₁
R₁⋅V₊ R₁⋅V_{in}
-V₊ + Vₒ = R₁⋅I_{R_L}(V_{in}, R_{L}) + ───── - ─────────
R₃ R₃
```python
#Sum of currents at node 2 (-) is ZERO =>
eq_Ir2 = sym.Eq(Ir2,Ir4)
sym.pprint(eq_Ir2)
Ir2 = (Vo-V3)/R2
Ir4= (V2)/R4
eq_Ir2 = sym.Eq(Ir2,Ir4)
sym.pprint(eq_Ir2)
res2=sym.solve(eq_Ir2,(Vo-V3))
eq_Vo3_=sym.Eq(Vo-V3,sym.expand(res2[0]))
sym.pprint(eq_Vo3_)
```
I_{R2} = I_{R4}
-V₊ + Vₒ V₊
──────── = ──
R₂ R₄
R₂⋅V₊
-V₊ + Vₒ = ─────
R₄
```python
#Substituting
eq_=sym.Eq(eq_Vo3.rhs-eq_Vo3_.rhs)
sym.pprint(eq_)
```
R₁⋅V₊ R₁⋅V_{in} R₂⋅V₊
R₁⋅I_{R_L}(V_{in}, R_{L}) + ───── - ───────── - ───── = 0
R₃ R₃ R₄
```python
# Irl = V+/RL => V+=Irl*RL
# R1 = 100Ω; R2 = 10KΩ; R3 = 1KΩ and R4 = 100KΩ
res=sym.solve([eq_.subs({V3:Irl/RL,(R2/R4):(R1/R3)})],Irl)
sym.pprint(res)
```
⎧ V_{in}⎫
⎨I_{R_L}(V_{in}, R_{L}): ──────⎬
⎩ R₃ ⎭
```python
Vo= sym.Function('Vo')(Vin,RL) #Define the symbolic variable Vo as a function of Vin and RL
Irl= sym.Symbol('I_{RL}')
sym.pprint(Vo)
sym.pprint(eq_Vo3) #In that equation, substitute V+ = RL * Irl
sym.pprint(sym.Eq(V3,RL*Irl))
```
Vo(V_{in}, R_{L})
R₁⋅V₊ R₁⋅V_{in}
-V₊ + Vₒ = R₁⋅I_{R_L}(V_{in}, R_{L}) + ───── - ─────────
R₃ R₃
V₊ = I_{RL}⋅R_{L}
```python
eq_Vo=sym.Eq(Vo,Irl*(RL+R1+RL*R1/R3)-Vin*(R1/R3))
sym.pprint(eq_Vo)
#SUBSTITUTING THE RESULT Irl(Vin,RL)
sym.pprint(sym.simplify((eq_Vo.subs(Irl,(Vin/R3)))))
```
⎛ R₁⋅R_{L} ⎞ R₁⋅V_{in}
Vo(V_{in}, R_{L}) = I_{RL}⋅⎜R₁ + ──────── + R_{L}⎟ - ─────────
⎝ R₃ ⎠ R₃
R_{L}⋅V_{in}⋅(R₁ + R₃)
Vo(V_{in}, R_{L}) = ──────────────────────
2
R₃
```python
#RL MAX como VoMAX=10v
#R1 = 100Ω; R2 = 10KΩ; R3 = 1KΩ y R4 = 100KΩ
Vo = sym.Symbol('Vo')
RLM=sym.Function('R_{L_{MAX}}')(Vin)
eq_RLM=sym.Eq(RLM,Vo/(Vin*(R3+R1)/R3**2))
sym.pprint(eq_RLM)
sym.pprint(eq_RLM.subs({Vo:10,R1:100,R2:10e3,R3:1e3,R4:100e3}))
```
2
R₃ ⋅Vo
R_{L_{MAX}}(V_{in}) = ────────────────
V_{in}⋅(R₁ + R₃)
9090.90909090909
R_{L_{MAX}}(V_{in}) = ────────────────
V_{in}
```python
```
| d75574b77024ca6ea2ffeca249126017230ab368 | 97,099 | ipynb | Jupyter Notebook | python/1/LAB1_EJ_2.ipynb | WayraLHD/SRA21 | 1b0447bf925678b8065c28b2767906d1daff2023 | [
"Apache-2.0"
]
| 1 | 2021-09-29T16:38:53.000Z | 2021-09-29T16:38:53.000Z | python/1/LAB1_EJ_2.ipynb | WayraLHD/SRA21 | 1b0447bf925678b8065c28b2767906d1daff2023 | [
"Apache-2.0"
]
| 1 | 2021-08-10T08:24:57.000Z | 2021-08-10T08:24:57.000Z | python/1/LAB1_EJ_2.ipynb | WayraLHD/SRA21 | 1b0447bf925678b8065c28b2767906d1daff2023 | [
"Apache-2.0"
]
| null | null | null | 321.519868 | 89,000 | 0.92003 | true | 1,709 | Qwen/Qwen-72B | 1. YES
2. YES | 0.894789 | 0.817574 | 0.731557 | __label__kor_Hang | 0.112779 | 0.537984 |
<h1>INTERPOLATIONS</h1>
<b>Group 8</b> <br>
Gardyan Priangga Akbar (2301902296)
<h2>When do we need interpolation?</h2>
Interpolation is drawing conclusions from within a set of known information. For example, if we know that 0 is the minimum and 10 is the maximum, we can determine that the number 5 must lie somewhere in between. Interpolation has many real-life applications, such as: <br>
<ul>
<li>When you have the cost of catering for 25 and 100 people, but you need an estimate for the cost of catering for 50 people.</li>
<li>When deciding what laptop to buy and you know the price tag and capabilities of laptops at both the lower and higher ends, interpolation can be used to get the most optimal price and specs out of your budget.</li>
<li>Finding the amount of employees needed to complete a task with the most optimal cost.</li>
<li>And more...</li>
</ul>
There are a number of ways or methods to do interpolation. Two of them, for example, are Lagrange's method and Newton's Divided Difference method. In this notebook, we will be learning about these two and how we can implement them using Python. Strap yourselves in, because at the end of every section there will be a playground area for you to explore and mess around! Let's go!
<h2>Lagrange Interpolation</h2>
<h3>The Theory</h3>
Lagrange's method is one of the ways for data interpolation from a set of known data points. With this method, we can interpolate the value of f(x) from any value of x from within the data set. Here is the formula:
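In the notation used below, where $(x_i, f(x_i))$ are the known data points and $L_i(x)$ are the weighting functions, it reads
$$
f_n(x) = \sum_{i=0}^{n} L_i(x)\, f(x_i)
$$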
Where: <br>
<b>n</b> = the degree of polynomial (for linear n = 1, quadratic n = 2, and so on) <br>
<b>Li(x)</b> = the weighting function
To get the weighting function, the formula is:
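Written out, with the product running over every index $j$ except $i$:
$$
L_i(x) = \prod_{\substack{j=0 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j}
$$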
For some people this formula might seem quite daunting or even scary. However, it is just the product of the simple ratios $(x - x_j)/(x_i - x_j)$ taken over every index $j$ other than $i$.
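For example, for a linear interpolation ($n = 1$) through two points $x_0$ and $x_1$, the two weighting functions are simply
$$
L_0(x) = \frac{x - x_1}{x_0 - x_1}, \qquad L_1(x) = \frac{x - x_0}{x_1 - x_0}
$$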
<h3>Doing it in Python</h3>
First let's make a list of the data points we know.
```python
xy_values = []
#Initialize x and y values (make sure the X values are in order)
xy_values.append([0, 0])
xy_values.append([10, 227.04])
xy_values.append([15, 362.78])
xy_values.append([20, 517.35])
xy_values.append([22.5, 602.97])
xy_values.append([30, 901.67])
xy_values
```
[[0, 0],
[10, 227.04],
[15, 362.78],
[20, 517.35],
[22.5, 602.97],
[30, 901.67]]
Next let's decide on the order of polynomial to interpolate our data with. We will store it in a variable called <i>n</i>. For reference, to do a linear interpolation, we put our <i>n</i> value as 1. For quadratic <i>n</i> = 2, cubic <i>n</i> = 3, and so on.
```python
n = 1
```
Now let's choose a value of <i>x</i> to interpolate. Obviously, the value of <i>x</i> needs to be within our known data points, otherwise we won't be able to interpolate (that would be extrapolation).
```python
xVal = 16
```
Next we need to pick <b>two</b> points from our known data points that sandwich our <i>xVal</i>. We will be keeping track of the indexes. So if our <i>xVal</i> is <b>16</b>, we will be picking the x values <b>15</b> and <b>20</b> because 16 lies between them. As we see in our <i>xy_values</i> list, 15 and 20 are positioned at indexes <b>2</b> and <b>3</b> respectively. Hence, we take a note of that in a new list.
```python
def get_first2_indexes(xy_values, xVal):
indexes = []
for i in range(len(xy_values)-1):
if xy_values[i][0] < xVal and xy_values[i+1][0] > xVal:
indexes.append(i)
indexes.append(i+1)
return indexes
indexes = get_first2_indexes(xy_values, xVal)
indexes
```
[2, 3]
If <i>n</i> = 1 (linear), we can go directly to finding the weighting function. However, when <i>n</i> > 1, we have to also select adjacent x values from our two chosen data points. Take note to always pick the data point closest to <i>xVal</i>.
For example when <b><i>n</i> = 3</b>:
1. Compare <b>10</b> and <b>22.5</b>
2. <b>10</b> is closer to <b>16</b> than 22.5. So we choose that.
3. <b><i>indexes</i></b> will now house [1, 2, 3]. Take note to keep the indexes in ascending order.
For example when <b><i>n</i> = 4</b>:
1. We add one more data point from when <i>n</i> = 3.
2. Compare <b>0</b> and <b>22.5</b>
3. <b>22.5</b> is closer to <b>16</b> than 0. So we choose that.
4. <b><i>indexes</i></b> will now house [1, 2, 3, 4].
```python
def get_remaining_indexes(xy_values, indexes, xVal, n):
for _ in range(n-1):
#find the value nearest to xVal
leftIndex = indexes[0]-1
rightIndex = indexes[len(indexes)-1] + 1
#Check if the adjacent index exists in the given xy_values data
if (leftIndex > -1):
if (rightIndex < len(xy_values)):
#Check which one is closer to xVal
if (abs(xy_values[leftIndex][0] - xVal) < abs(xy_values[rightIndex][0] - xVal)):
indexes.insert(0, leftIndex)
else:
indexes.append(rightIndex)
else:
indexes.insert(0, leftIndex)
elif (rightIndex < len(xy_values)):
indexes.append(rightIndex)
get_remaining_indexes(xy_values, indexes, xVal, n)
indexes
```
[2, 3]
Now we can go ahead and try to find the weighting functions. We will be using <b>Sympy</b> to help us keep track of variables and automatically calculate the final result. Let's start by importing the Sympy library.
```python
import sympy as sp
x = sp.Symbol('x');
```
We will now proceed to determine the weighting functions. Recall the formula $L_i(x) = \prod_{j \neq i} \frac{x - x_j}{x_i - x_j}$ from the theory section above.
```python
def gather_weighting_functions(polynomial):
wFunc = [] #Collection of Ln(x)
for i in range(polynomial+1):
subFunc = [] #Collection of individual (x - xj)/(xi-xj)
for j in range(polynomial+1):
#j != i
if i != j:
#(t - xj)/(xi-xj)
#sub = [i, j]
#sub[0] = xi
#sub[1] = xj
sub = []
sub.append(i)
sub.append(j)
subFunc.append(sub)
wFunc.append(subFunc)
return wFunc
wFunc = gather_weighting_functions(n)
wFunc
```
[[[0, 1]], [[1, 0]]]
The code above simply stores the values i and j in each of their respective iterations.
Recall that the formula for Lagrange's interpolation is $f_n(x) = \sum_{i=0}^{n} L_i(x)\, f(x_i)$.
We will now put <b>fn(x)</b> together with the code below (Sympy has the benefit of automatically simplifying our otherwise very long equation):
```python
def get_equation(xy_values, wFunc, indexes, x_symbol):
total = 0
for i in range(len(wFunc)):
weight_function_prod = 1
for a in range(len(wFunc[i])):
iIndex = wFunc[i][a][0]
index = indexes[iIndex]
xi = xy_values[index][0]
jIndex = wFunc[i][a][1]
index = indexes[jIndex]
xj = xy_values[index][0]
sub = (x_symbol - xj)/ (xi - xj)
weight_function_prod *= sub
#Multiply by f(i)
total += weight_function_prod * xy_values[indexes[i]][1]
return sp.simplify(total)
equation = get_equation(xy_values, wFunc, indexes, x)
equation
```
$\displaystyle 30.914 x - 100.93$
We are not done however, because we are interested in the value of <b>y</b> when x is our <b><i>xVal</i></b>, which is <b>16</b>. To solve this, we can call Sympy's <b>evalf()</b> function on our <i>equation</i> variable.
```python
#Solve for xVal
result = equation.evalf(subs={x : xVal})
result
```
$\displaystyle 393.694$
If we graph our findings it will look like this
```python
def graph_lagrange(xy_values, equation, xVal, result, x_symbol):
#Graphing
%matplotlib inline
import matplotlib.pyplot as plt
#split x and y
x_values = []
y_values = []
for i in range(len(xy_values)):
x_values.append(xy_values[i][0])
y_values.append(xy_values[i][1])
#Generate x and y
new_x_values = []
new_y_values = []
for i in range(int(min(x_values) * 100), int(max(x_values) * 100), 1):
new_x_values.append(i/100)
new_y_values.append(equation.evalf(subs={x_symbol:i/100}))
plt.plot(x_values, y_values, 'o', label='data')
plt.plot(new_x_values, new_y_values, '-', label='equation')
plt.plot([xVal], [result], '+', label="interpolated data")
plt.legend()
plt.xlabel("X")
plt.ylabel("Y")
print("y =", equation)
plt.show()
graph_lagrange(xy_values, equation, xVal, result, x)
```
Notice how our interpolated point lies on our predicted equation line, but not all of our known data points do; some are relatively far away from the line. In order to reduce this, we need to use a <b>higher</b> degree polynomial (a higher value for <b>n</b>). Do take note that the highest degree of polynomial you can use is equal to the number of data points you have minus 1, because beyond that you do not have enough data points.
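To see the effect, you can re-run the same steps with the highest possible degree for this data set (a sketch that reuses the helper functions defined above; the Try it Yourself section below lets you do the same interactively):
```python
# Sketch: repeat the interpolation with the highest possible polynomial degree
n_max = len(xy_values) - 1
idx = get_first2_indexes(xy_values, xVal)
get_remaining_indexes(xy_values, idx, xVal, n_max)
w_max = gather_weighting_functions(n_max)
eq_max = get_equation(xy_values, w_max, idx, x)
print(eq_max.evalf(subs={x: xVal}))
```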
<h3>Try it Yourself!</h3>
Try experimenting with Lagrange's Interpolation yourself with your own data inputs. <br>
Let's start with the data points that you know:
```python
xy_values = []
#Initialize x and y values (make sure the X values are in order)
xy_values.append([0, 0])
xy_values.append([10, 227.04])
xy_values.append([15, 362.78])
xy_values.append([20, 517.35])
xy_values.append([22.5, 602.97])
xy_values.append([30, 901.67])
xy_values
```
[[0, 0],
[10, 227.04],
[15, 362.78],
[20, 517.35],
[22.5, 602.97],
[30, 901.67]]
Now, for what value of x do you want to interpolate?
```python
xVal = 16
xVal
```
16
And what order of polynomial would you like to use? <br>
Note: Be sure to set your value of <b>n</b> to at most the number of data points minus 1. If you have 6 data points, your max value for <b>n</b> is 5.
```python
n = 3 #Order/degree of polynomial
#n = len(xy_values) - 1 #Use this to use the highest possible degree of polynomial
n
```
3
Your inputs are now in! (Don't change anything in the code below)
```python
import sympy as sp
x = sp.Symbol('x');
indexes = get_first2_indexes(xy_values, xVal)
get_remaining_indexes(xy_values, indexes, xVal, n)
wFunc = gather_weighting_functions(n)
equation = get_equation(xy_values, wFunc, indexes, x)
result = equation.evalf(subs={x : xVal})
equation
```
$\displaystyle 0.00543466666666609 x^{3} + 0.132040000000028 x^{2} + 21.2655333333331 x - 4.25399999999809$
The code has been baked and here is the result!
```python
result
```
$\displaystyle 392.057168000002$
Now let's see how that looks like in a graph.
```python
graph_lagrange(xy_values, equation, xVal, result, x)
```
How did your graph turn out? Were you able to line up your known data points with your equation line? Maybe try with a higher value of n, or try with an entirely different data set. Play around!
<h2>Newton Interpolation</h2>
<h3>The Theory</h3>
To do interpolation with Newton's method, we use Newton's Divided Difference Polynomial (NDDP) method. With this method, we will be able to interpolate the equation of the line using the known data points. Here is the formula:
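In standard notation, with known data points $x_0, x_1, \dots, x_n$, it reads
$$
f_n(x) = a_0 + a_1 (x - x_0) + a_2 (x - x_0)(x - x_1) + \dots + a_n (x - x_0)(x - x_1)\cdots(x - x_{n-1})
$$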
Where: <br>
<b>n</b> = the degree of polynomial <br>
<b>a<i>n</i></b> = the divided difference coefficients
Solving for <b>a<i>n</i></b> is quite tricky, and so that part will be discussed as we learn to solve NDDP using Python.
<h3>Solving it with Python</h3>
Similar to what we did in Lagrange's, we first make a list of the data points we know.
```python
xy_values = []
#Initialize x and y values (make sure the X values are in order)
xy_values.append([0, 0])
xy_values.append([10, 227.04])
xy_values.append([15, 362.78])
xy_values.append([20, 517.35])
xy_values.append([22.5, 602.97])
xy_values.append([30, 901.67])
```
The beauty of Newton's method is that we do not need to specify what order of polynomial we want to use. That is determined by the number of data points we have minus 1. So if we have 6 data points, we will be using a degree-5 polynomial. Our next step is to create the divided difference table, so let's do that.
```python
#Initialize divided difference table
def init_table():
table = []
for _ in range(len(xy_values)):
temp = []
for _ in range(len(xy_values) + 1):
temp.append(-1)
table.append(temp)
#Insert x and y values to table
for i in range(len(xy_values)):
table[i][0] = xy_values[i][0]
table[i][1] = xy_values[i][1]
return table
table = init_table()
table
```
[[0, 0, -1, -1, -1, -1, -1],
[10, 227.04, -1, -1, -1, -1, -1],
[15, 362.78, -1, -1, -1, -1, -1],
[20, 517.35, -1, -1, -1, -1, -1],
[22.5, 602.97, -1, -1, -1, -1, -1],
[30, 901.67, -1, -1, -1, -1, -1]]
We first initialize an empty table by flagging all values with -1. Then we insert our known x and y values into it. The table now looks like the output shown above.
Next we need to populate the table and fill in the remaining empty cells. Here is the way to fill in the table in general:
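In standard divided-difference notation the rule for each new column is
$$
f[x_i] = f(x_i), \qquad
f[x_i, x_{i+1}, \dots, x_{i+k}] = \frac{f[x_{i+1}, \dots, x_{i+k}] - f[x_i, \dots, x_{i+k-1}]}{x_{i+k} - x_i},
$$
and the coefficients are $a_k = f[x_0, x_1, \dots, x_k]$. (The code below computes an equivalent variant that always differences against the pivot row of the previous column, so the topmost filled entry of each column is still the coefficient $a_k$.)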
Try taking a look at it carefully. You will see a particular pattern. Let's go ahead and fill the values in.
```python
#Do the divided difference table
def compute_table(table):
y_bound = 1
for col in range(2, len(table[0])):
for row in range(y_bound, len(table)):
try:
delta = (table[row][col-1] - table[y_bound-1][col-1]) / (table[row][0] - table[y_bound-1][0])
except:
delta = 0
#print(table[row][col-1], '-', table[y_bound-1][col-1], "divide", table[row][0], '-', table[y_bound-1][0], '=', delta)
table[row][col] = delta
y_bound += 1
compute_table(table)
table
```
[[0, 0, -1, -1, -1, -1, -1],
[10, 227.04, 22.704, -1, -1, -1, -1],
[15, 362.78, 24.185333333333332, 0.29626666666666635, -1, -1, -1],
[20, 517.35, 25.8675, 0.3163499999999999, 0.004016666666666713, -1, -1],
[22.5,
602.97,
26.79866666666667,
0.3275733333333335,
0.004174222222222287,
6.302222222222958e-05,
-1],
[30,
901.67,
30.055666666666664,
0.36758333333333315,
0.0047544444444444535,
7.377777777777409e-05,
1.434074074072601e-06]]
Next we need to get the values for our <b>a0, a1, a2, ..., an</b>. Since we already built the divided difference table, we can just "steal" the values from it: they are the topmost filled entries of each successive column (the diagonal of the table), which is exactly what the code below picks out:
```python
#Get an values
def get_an_values(table):
an = []
col = 1
for row in range(0, len(table)):
an.append(table[row][col])
col += 1
return an
an = get_an_values(table)
an
```
[0,
22.704,
0.29626666666666635,
0.004016666666666713,
6.302222222222958e-05,
1.434074074072601e-06]
We now have all the pieces to put our NDDP puzzle together. Recall the formula $f_n(x) = a_0 + a_1(x - x_0) + a_2(x - x_0)(x - x_1) + \dots + a_n(x - x_0)\cdots(x - x_{n-1})$ from the theory section.
Alright let's do it in Python now.
```python
import sympy as sp
x = sp.Symbol('x')
def get_equation_newton(an, x_symb):
func = 0
for a in range(len(an)):
product = an[a]
for i in range(a):
product *= (x_symb - xy_values[i][0])
func += product
func = sp.simplify(func)
return func
func = get_equation_newton(an, x)
func
```
$\displaystyle x \left(1.4340740740726 \cdot 10^{-6} x^{4} - 3.3777777777671 \cdot 10^{-5} x^{3} + 0.00356481481481208 x^{2} + 0.211538888888918 x + 20.2515666666666\right)$
Great! We now have our equation line. Let's plot it and see how it looks.
```python
%matplotlib inline
import matplotlib.pyplot as plt
def graph_newton(xy_values, func, x_symb):
#split n and y
x_values = []
y_values = []
for i in range(len(xy_values)):
x_values.append(xy_values[i][0])
y_values.append(xy_values[i][1])
#Generate x and y
new_x_values = []
new_y_values = []
for i in range(int(min(x_values) * 100), int(max(x_values) * 100), 1):
new_x_values.append(i/100)
new_y_values.append(func.evalf(subs={x_symb:i/100}))
plt.plot(x_values, y_values, 'o', label='data')
plt.plot(new_x_values, new_y_values, '-', label='equation')
plt.legend()
plt.xlabel("X")
plt.ylabel("Y")
plt.show()
def graph_newton_with_interpolation(xy_values, func, x_symb, xVal, yVal):
#split n and y
x_values = []
y_values = []
for i in range(len(xy_values)):
x_values.append(xy_values[i][0])
y_values.append(xy_values[i][1])
#Generate x and y
new_x_values = []
new_y_values = []
for i in range(int(min(x_values) * 100), int(max(x_values) * 100), 1):
new_x_values.append(i/100)
new_y_values.append(func.evalf(subs={x_symb:i/100}))
plt.plot(x_values, y_values, 'o', label='data')
plt.plot(new_x_values, new_y_values, '-', label='equation')
plt.plot([xVal], [yVal], '+', label="interpolated data")
plt.legend()
plt.xlabel("X")
plt.ylabel("Y")
plt.show()
print("y =", func)
graph_newton(xy_values, func, x)
```
Well, would you look at that. The graph looks quite nice; all the data points lie on the curve. Much better than Lagrange's method using a polynomial degree of 1. And we didn't even have to specify the order of polynomial with Newton's.
<h3>Try it Yourself!</h3>
Try experimenting with Newton's Divided Difference method yourself with your own datasets. <br>
Let's start with the data points that you know:
```python
xy_values = []
#Initialize x and y values (make sure the X values are in order)
xy_values.append([0, 0])
xy_values.append([10, 227.04])
xy_values.append([15, 362.78])
xy_values.append([20, 517.35])
xy_values.append([22.5, 602.97])
xy_values.append([30, 901.67])
xy_values
```
[[0, 0],
[10, 227.04],
[15, 362.78],
[20, 517.35],
[22.5, 602.97],
[30, 901.67]]
Your inputs are now in! Let's process them.
```python
x = sp.Symbol('x')
table = init_table()
compute_table(table)
an = get_an_values(table)
func = get_equation_newton(an, x)
```
The code has been baked and here is the line equation you get:
```python
func
```
$\displaystyle x \left(1.4340740740726 \cdot 10^{-6} x^{4} - 3.3777777777671 \cdot 10^{-5} x^{3} + 0.00356481481481208 x^{2} + 0.211538888888918 x + 20.2515666666666\right)$
Let's see how it looks in a graph
```python
print("y =", func)
graph_newton(xy_values, func, x)
```
How does your graph look? Are all the data points lined up nicely? Maybe try adding more data points, or try an entirely different data set. Play around!
Perhaps you would like to interpolate a value of y? What value for x would you like to try?
```python
xVal = 16
xVal
```
16
```python
yVal = func.evalf(subs={x: xVal})
yVal
```
$\displaystyle 392.070578915556$
Let's see where that lies in the graph.
```python
graph_newton_with_interpolation(xy_values, func, x, xVal, yVal)
```
Does your interpolated data lie somewhere in the graph? Mess around with more interpolation and enjoy your graph :)
<h3>Conclusion</h3>
Alright, now that we've explored both Lagrange's and Newton's methods to do interpolation, let's do a recap!
Interpolation is all about making an educated guess from within a set of known data. Two methods we can use to do interpolation are Lagrange's and Newton's Divided Difference method. With Lagrange's we can control the degree of polynomial we want to use, trading accuracy against the speed at which our program runs. Newton's method, on the other hand, uses the highest possible degree of polynomial to produce the most accurate and precise interpolation, but that also means it takes the most time to compute. So which one to use? It all depends on you. Use the method that suits you best.
| 05e284f868b813b89b435e979697f4b6b1583279 | 108,450 | ipynb | Jupyter Notebook | Interpolation.ipynb | GiantSweetroll/Computational-Math-Interpolation | 1945649b1be4a9814deed6ab81ebd510841cd11b | [
"Apache-2.0"
]
| null | null | null | Interpolation.ipynb | GiantSweetroll/Computational-Math-Interpolation | 1945649b1be4a9814deed6ab81ebd510841cd11b | [
"Apache-2.0"
]
| null | null | null | Interpolation.ipynb | GiantSweetroll/Computational-Math-Interpolation | 1945649b1be4a9814deed6ab81ebd510841cd11b | [
"Apache-2.0"
]
| null | null | null | 74.84472 | 14,812 | 0.808953 | true | 5,812 | Qwen/Qwen-72B | 1. YES
2. YES | 0.914901 | 0.891811 | 0.815919 | __label__eng_Latn | 0.972604 | 0.733986 |
# Introduction
This notebook gives a quick overview of the capabilities of the visualisation module.
- Easy drawing of plain geometric primitives (rectangles, circles)
- Easy access to interactions (useful for kinematic visualisation)
- Easy access to animation generation (time-dependent functions and configuration space)
```python
%load_ext ipydex.displaytools
import sympy as sp
from sympy import sin, cos
import numpy as np
import scipy.integrate
import symbtools as st
import symbtools.modeltools as mt
import matplotlib.pyplot as plt
import symbtools.visualisation as vt
sp.init_printing()
import importlib
```
```python
theta1, theta2 = theta = st.symb_vector('theta1:3')
dtheta1, dtheta2 = dtheta = st.time_deriv(theta, theta)
tau = sp.symbols('tau')
params = st.symb_vector('m1 l1 J1 mu1 m2 r2 J2 mu2')
st.make_global(params)
```
```python
p0 = sp.Matrix([0, 0])
p1 = sp.Matrix([l1/2*cos(theta1), l1/2*sin(theta1)])
p2 = sp.Matrix([l1*cos(theta1), l1*sin(theta1)])
p3 = p2 + sp.Matrix([r2*cos(theta1+theta2), r2*sin(theta1+theta2)])
dp1 = st.time_deriv(p1, theta)
dp2 = st.time_deriv(p2, theta)
```
```python
T_rot = J1 * dtheta1**2 / 2 + J2 * (dtheta1 + dtheta2)**2 / 2
T_trans = (m1 * dp1.T * dp1 / 2 + m2 * dp2.T * dp2 / 2)[0]
T = T_rot + T_trans
V = p2[1]*9.81*m2 + p1[1]*9.81*m1
friction1 = dtheta1 * mu1
friction2 = dtheta2 * mu2
model = mt.generate_symbolic_model(T, V, theta, [0 - friction1, tau - friction2])
model.tau = [tau]
model.calc_state_eq()
f = model.f
g = model.g
```
```python
m1_val = 0.1
m2_val = 1
l1_val = 0.5
r2_val = 0.2
J1_val = 1/12 * m1_val * l1_val**2
J2_val = 1/2 * m2_val * r2_val**2*5
mu1_val = 0.01
mu2_val = 0.001
param_subs = st.lzip(params, [m1_val, l1_val, J1_val, mu1_val, m2_val, r2_val, J2_val, mu2_val])
f_fun = st.expr_to_func(model.x, f.subs(param_subs))
g_fun = st.expr_to_func(model.x, g.subs(param_subs))
x_init = np.array([-np.pi/2, 0, 0, 0])
tf = 5.0
dt = 1/30
ts = np.arange(0.0, tf, dt)
samples = len(ts)
ys = np.empty((samples, len(model.x)))
ys[0, :] = x_init
T_pulse = 1
# input trajectory
def tau_fun(t, x):
if t<T_pulse:
return np.sin(2*np.pi*t/T_pulse)*4
else:
return 0
# right hand side of the ode
def ode_fun(t, x):
return f_fun(x[0], x[1], x[2], x[3]) + tau_fun(t, x) * g_fun(x[0], x[1], x[2], x[3])
# get state trajectory
for i in range(samples - 1):
result = scipy.integrate.solve_ivp(ode_fun, (ts[i], ts[i+1]), ys[i, :])
ys[i+1, :] = result.y[:, -1]
# get input trajectory after the fact
us = np.empty((samples, 1))
for i in range(samples):
us[i, 0] = tau_fun(ts[i], ys[i, :])
```
```python
importlib.reload(vt)
```
<module 'symbtools.visualisation' from '/media/workcard/workstickdir/projekte/rst_python/symbtools-TUD-RST-Account/symbtools/visualisation.py'>
```python
vis = vt.Visualiser(theta, xlim=(-1, 1), ylim=(-1, 1))
vis.add_linkage([p0.subs(param_subs), p2.subs(param_subs)], color="#1f77b4")
vis.add_disk([p2.subs(param_subs), p3.subs(param_subs)], color="#ff7f0e", lw=2)
```
```python
vis.plot([0.0, 0.0])
```
```python
vis.interact(theta2=(-3.14, 3.14, 0.01))
```
interactive(children=(FloatSlider(value=0.0, description='theta1', max=5.0, min=-5.0), FloatSlider(value=0.0, …
```python
simanim = vt.SimAnimation(model.x, ts, ys)
simanim.add_visualiser(vis)
simanim.display_frame()
```
```python
simanim = vt.SimAnimation(model.x, ts, ys, figsize=(8, 4))
simanim.add_visualiser(vis, 122)
simanim.add_graph(theta, 121)
simanim.display_frame()
```
```python
from matplotlib.gridspec import GridSpec
gs = GridSpec(4, 3, hspace=0.3)
simanim = vt.SimAnimation(model.x, ts, ys, start_pause=0.5, end_pause=0.5, figsize=(12, 10))
simanim.add_visualiser(vis, gs[:, 1:])
simanim.add_graph(theta, gs[0, 0], ax_kwargs=dict(title='Angles')) # most common options can probably be exposed easier
simanim.add_graph([dtheta1, dtheta2], gs[1, 0], ax_kwargs=dict(title='Velocities'))
simanim.add_graph(p3.subs(param_subs), gs[2, 0], ax_kwargs=dict(title='x-y Position'), plot_kwargs=dict(ls='--'))
simanim.add_graph(us[:, 0], gs[3, 0], ax_kwargs=dict(title='Input torque'))
simanim.display_frame() # render just one frame to check the layout
```
```python
# simanim.display() # show the whole animation (increases filesize of jupyter notebook)
```
```python
# this probably needs to be adapted on other systems
# (anaconda version of ffmpeg might lead to an error)
plt.rcParams['animation.ffmpeg_path'] = '/usr/bin/ffmpeg'
fname = 'demo_animation.mp4'
simanim.save(fname, dpi=50) # save to file
```
```python
vt.display_video_file(fname)
```
## Create an onion plot (experimental)
```python
indices1 = np.linspace(0, 15, 5, dtype=int)
frame_states1 = ys[indices1, :2]
indices2 = np.linspace(18, 38, 10, dtype=int)
frame_states2 = ys[indices2, :2]
```
```python
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5));
ax1.axis("equal")
ax1.set_xlim(-1, 1)
ax2.axis("equal")
ax2.set_xlim(-1, 1)
vis.plot_onion_skinned(frame_states1, axes=ax1, change_alpha=True, max_lightness=0.75)
vis.plot_onion_skinned(frame_states2, axes=ax2, change_alpha=True, max_lightness=0.75)
```
| 8a61e6ad657d0deb6958e8347b9c9d617ee99f3e | 175,728 | ipynb | Jupyter Notebook | docs/demo_notebooks/demo_visualisation.ipynb | Xabo-RB/symbtools | d7c771319bc5929ce4bfda09c74c6845749f0c3e | ["BSD-3-Clause"] | 5 | 2017-10-15T16:25:01.000Z | 2022-02-27T19:05:04.000Z | docs/demo_notebooks/demo_visualisation.ipynb | Xabo-RB/symbtools | d7c771319bc5929ce4bfda09c74c6845749f0c3e | ["BSD-3-Clause"] | 5 | 2019-07-16T13:09:17.000Z | 2021-12-21T20:10:16.000Z | docs/demo_notebooks/demo_visualisation.ipynb | Xabo-RB/symbtools | d7c771319bc5929ce4bfda09c74c6845749f0c3e | ["BSD-3-Clause"] | 9 | 2017-02-08T12:24:10.000Z | 2022-02-27T19:22:29.000Z | 382.849673 | 70,940 | 0.938428 | true | 1,731 | Qwen/Qwen-72B | 1. YES 2. YES | 0.853913 | 0.774583 | 0.661427 | __label__eng_Latn | 0.247219 | 0.375047
<a href="https://colab.research.google.com/github/charmerDark/quantum_svm_vqc_comparison/blob/main/hons_vqc.ipynb" target="_parent"></a>
```python
!pip install qiskit
```
    Collecting qiskit
    Collecting qiskit-terra==0.17.0
    Collecting qiskit-aer==0.8.0
    Collecting qiskit-ibmq-provider==0.12.2
    Collecting qiskit-ignis==0.6.0
    Collecting qiskit-aqua==0.9.0
    Collecting ply>=3.10
    Collecting fastjsonschema>=2.10
    Collecting python-constraint>=1.4
    Collecting retworkx>=0.8.0
    Collecting pybind11>=2.6
    Collecting requests-ntlm>=1.1.0
    Collecting websockets>=8
    Collecting quandl<=3.6.0
    Collecting yfinance<=0.1.55
    Collecting dlx<=1.0.4
    Collecting docplex<=2.20.204; sys_platform != "darwin"
    Collecting ntlm-auth>=1.0.2
    Collecting cryptography>=1.3
    Collecting inflection>=0.3.1
    Collecting lxml>=4.5.1
    Building wheels for collected packages: qiskit, python-constraint, yfinance, dlx, docplex
    Successfully built qiskit python-constraint yfinance dlx docplex
    Installing collected packages: ply, fastjsonschema, python-constraint, retworkx, qiskit-terra, pybind11, qiskit-aer, ntlm-auth, cryptography, requests-ntlm, websockets, qiskit-ibmq-provider, qiskit-ignis, inflection, quandl, lxml, yfinance, dlx, docplex, qiskit-aqua, qiskit
      Found existing installation: lxml 4.2.6
        Uninstalling lxml-4.2.6:
          Successfully uninstalled lxml-4.2.6
    Successfully installed cryptography-3.4.7 dlx-1.0.4 docplex-2.20.204 fastjsonschema-2.15.0 inflection-0.5.1 lxml-4.6.3 ntlm-auth-1.5.0 ply-3.11 pybind11-2.6.2 python-constraint-1.4.0 qiskit-0.25.0 qiskit-aer-0.8.0 qiskit-aqua-0.9.0 qiskit-ibmq-provider-0.12.2 qiskit-ignis-0.6.0 qiskit-terra-0.17.0 quandl-3.6.0 requests-ntlm-1.1.0 retworkx-0.8.0 websockets-8.1 yfinance-0.1.55
```python
import numpy as np
from qiskit import BasicAer
from qiskit.aqua import QuantumInstance, aqua_globals
from qiskit.aqua.algorithms import VQC
from qiskit.aqua.components.optimizers import SPSA
from qiskit.circuit.library import TwoLocal, ZFeatureMap,EfficientSU2
from qiskit.aqua.utils import split_dataset_to_data_and_labels, map_label_to_class_name
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.metrics import classification_report
import time
seed = 10599
aqua_globals.random_seed = seed
```
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:18: DeprecationWarning: The variable qiskit.aqua.aqua_globals is deprecated. It was moved/refactored to qiskit.utils.aqua_globals (pip install qiskit-terra). For more information see <https://github.com/Qiskit/qiskit-aqua/blob/master/README.md#migration-guide>
```python
class circuit_result():
'''
class to store details of each circuit and later pickle it
'''
def details(self):
print("Circuit type is: \t",self.circuit_type)
print("feature_map used is :\t",self.feature_map)
print("The paulis used were: \t",self.paulis)
print("The entanglement layout is \t",self.entanglement)
print("The repititions of feature map: \t",self.reps)
print("running time is : \t",self.running_time)
print("Accuracy report: \n",accuracy)
print("Circuit Depth: \t", self.depth)
print("Number of operations \t",self.count_ops)
def __init__(self,circuit_type,feature_map,time,accuracy,depth,count_ops,paulis,reps,entanglement):
self.circuit_type=circuit_type
self.feature_map=feature_map
self.running_time=time
self.accuracy=accuracy
self.depth=depth
self.count_ops=count_ops
self.paulis=paulis
self.reps=reps
self.entanglement=entanglement
```
```python
training_size=120
test_size=30
class_labels = [r'A', r'B', r'C']
data, target = datasets.load_iris(return_X_y=True)
sample_train, sample_test, label_train, label_test =train_test_split(data, target, test_size=30, random_state=42)
std_scale = StandardScaler().fit(sample_train)
sample_train = std_scale.transform(sample_train)
sample_test = std_scale.transform(sample_test)
samples = np.append(sample_train, sample_test, axis=0)
minmax_scale = MinMaxScaler((-1, 1)).fit(samples)
sample_train = minmax_scale.transform(sample_train)
sample_test = minmax_scale.transform(sample_test)
training_input = {key: (sample_train[label_train == k, :])[:training_size]
for k, key in enumerate(class_labels)}
test_input = {key: (sample_test[label_test == k, :])[:test_size]
for k, key in enumerate(class_labels)}
```
```python
feature_map = ZFeatureMap(feature_dimension=4, reps=1)
optimizer = SPSA(maxiter=80)
var_form = TwoLocal(4, ['ry','rz'], 'crz', reps=3,entanglement='circular')
vqc = VQC(optimizer, feature_map, var_form, training_input, test_input)
backend = BasicAer.get_backend('qasm_simulator')
quantum_instance = QuantumInstance(backend, shots=1024, seed_simulator=seed, seed_transpiler=seed)
start=time.time()
result = vqc.run(quantum_instance)
end= time.time()
print("training took ",end-start)
params=vqc.optimal_params
y_pred=vqc.predict(sample_test,quantum_instance,params=params)
accuracy=classification_report(y_pred[1],label_test)
print(accuracy)
```
/usr/local/lib/python3.7/dist-packages/qiskit/aqua/components/optimizers/optimizer.py:50: DeprecationWarning: The package qiskit.aqua.components.optimizers is deprecated. It was moved/refactored to qiskit.algorithms.optimizers (pip install qiskit-terra). For more information see <https://github.com/Qiskit/qiskit-aqua/blob/master/README.md#migration-guide>
'qiskit.algorithms.optimizers', 'qiskit-terra')
/usr/local/lib/python3.7/dist-packages/qiskit/aqua/algorithms/classifiers/vqc.py:98: DeprecationWarning: The package qiskit.aqua.algorithms.classifiers is deprecated. It was moved/refactored to qiskit_machine_learning.algorithms.classifiers (pip install qiskit-machine-learning). For more information see <https://github.com/Qiskit/qiskit-aqua/blob/master/README.md#migration-guide>
'qiskit-machine-learning')
/usr/local/lib/python3.7/dist-packages/qiskit/aqua/algorithms/vq_algorithm.py:72: DeprecationWarning: The class qiskit.aqua.algorithms.VQAlgorithm is deprecated. It was moved/refactored to qiskit.algorithms.VariationalAlgorithm (pip install qiskit-terra). For more information see <https://github.com/Qiskit/qiskit-aqua/blob/master/README.md#migration-guide>
'qiskit-terra')
/usr/local/lib/python3.7/dist-packages/qiskit/aqua/quantum_instance.py:137: DeprecationWarning: The class qiskit.aqua.QuantumInstance is deprecated. It was moved/refactored to qiskit.utils.QuantumInstance (pip install qiskit-terra). For more information see <https://github.com/Qiskit/qiskit-aqua/blob/master/README.md#migration-guide>
'qiskit-terra')
/usr/local/lib/python3.7/dist-packages/qiskit/ml/__init__.py:40: DeprecationWarning: The package qiskit.ml is deprecated. It was moved/refactored to qiskit_machine_learning (pip install qiskit-machine-learning). For more information see <https://github.com/Qiskit/qiskit-aqua/blob/master/README.md#migration-guide>
warn_package('ml', 'qiskit_machine_learning', 'qiskit-machine-learning')
training took 543.2600898742676
precision recall f1-score support
0 1.00 1.00 1.00 10
1 0.89 0.89 0.89 9
2 0.91 0.91 0.91 11
accuracy 0.93 30
macro avg 0.93 0.93 0.93 30
weighted avg 0.93 0.93 0.93 30
```python
```
```python
feature_map = ZFeatureMap(feature_dimension=4, reps=1)
optimizer = SPSA(maxiter=80)
var_form = EfficientSU2(4,reps=3)
vqc = VQC(optimizer, feature_map, var_form, training_input, test_input)
backend = BasicAer.get_backend('qasm_simulator')
quantum_instance = QuantumInstance(backend, shots=1024, seed_simulator=seed, seed_transpiler=seed)
start=time.time()
result = vqc.run(quantum_instance)
end= time.time()
print("training took ",end-start)
params=vqc.optimal_params
y_pred=vqc.predict(sample_test,quantum_instance,params=params)
accuracy=classification_report(y_pred[1],label_test)
print(accuracy)
```
training took 406.8485155105591
precision recall f1-score support
0 1.00 1.00 1.00 10
1 0.89 0.73 0.80 11
2 0.73 0.89 0.80 9
accuracy 0.87 30
macro avg 0.87 0.87 0.87 30
weighted avg 0.88 0.87 0.87 30
```python
TwoLocal(4, ['ry','rz'], 'crz', reps=3,entanglement='sca',insert_barriers=True).draw()
```
<pre style="word-wrap: normal;white-space: pre;background: #fff0;line-height: 1.1;font-family: "Courier New",Courier,monospace"> ┌──────────┐┌──────────┐ ░ ┌──────────┐ »
q_0: ┤ RY(θ[0]) ├┤ RZ(θ[4]) ├─░─┤ RZ(θ[8]) ├─────■───────────────────»
├──────────┤├──────────┤ ░ └────┬─────┘┌────┴─────┐ »
q_1: ┤ RY(θ[1]) ├┤ RZ(θ[5]) ├─░──────┼──────┤ RZ(θ[9]) ├──────■──────»
├──────────┤├──────────┤ ░ │ └──────────┘┌─────┴─────┐»
q_2: ┤ RY(θ[2]) ├┤ RZ(θ[6]) ├─░──────┼──────────────────┤ RZ(θ[10]) ├»
├──────────┤├──────────┤ ░ │ └───────────┘»
q_3: ┤ RY(θ[3]) ├┤ RZ(θ[7]) ├─░──────■───────────────────────────────»
└──────────┘└──────────┘ ░ »
« ░ ┌───────────┐┌───────────┐ ░ »
«q_0: ──────────────░─┤ RY(θ[12]) ├┤ RZ(θ[16]) ├─░────────────────────■──────»
« ░ ├───────────┤├───────────┤ ░ │ »
«q_1: ──────────────░─┤ RY(θ[13]) ├┤ RZ(θ[17]) ├─░────────────────────┼──────»
« ░ ├───────────┤├───────────┤ ░ ┌───────────┐ │ »
«q_2: ──────■───────░─┤ RY(θ[14]) ├┤ RZ(θ[18]) ├─░─┤ RZ(θ[20]) ├──────┼──────»
« ┌─────┴─────┐ ░ ├───────────┤├───────────┤ ░ └─────┬─────┘┌─────┴─────┐»
«q_3: ┤ RZ(θ[11]) ├─░─┤ RY(θ[15]) ├┤ RZ(θ[19]) ├─░───────■──────┤ RZ(θ[21]) ├»
« └───────────┘ ░ └───────────┘└───────────┘ ░ └───────────┘»
« ┌───────────┐ ░ ┌───────────┐┌───────────┐ ░ »
«q_0: ┤ RZ(θ[22]) ├──────────────░─┤ RY(θ[24]) ├┤ RZ(θ[28]) ├─░──────────────»
« └─────┬─────┘┌───────────┐ ░ ├───────────┤├───────────┤ ░ »
«q_1: ──────■──────┤ RZ(θ[23]) ├─░─┤ RY(θ[25]) ├┤ RZ(θ[29]) ├─░───────■──────»
« └─────┬─────┘ ░ ├───────────┤├───────────┤ ░ ┌─────┴─────┐»
«q_2: ───────────────────■───────░─┤ RY(θ[26]) ├┤ RZ(θ[30]) ├─░─┤ RZ(θ[32]) ├»
« ░ ├───────────┤├───────────┤ ░ └───────────┘»
«q_3: ───────────────────────────░─┤ RY(θ[27]) ├┤ RZ(θ[31]) ├─░──────────────»
« ░ └───────────┘└───────────┘ ░ »
« ┌───────────┐ ░ ┌───────────┐┌───────────┐
«q_0: ─────────────┤ RZ(θ[34]) ├──────■───────░─┤ RY(θ[36]) ├┤ RZ(θ[40]) ├
« └─────┬─────┘┌─────┴─────┐ ░ ├───────────┤├───────────┤
«q_1: ───────────────────┼──────┤ RZ(θ[35]) ├─░─┤ RY(θ[37]) ├┤ RZ(θ[41]) ├
« │ └───────────┘ ░ ├───────────┤├───────────┤
«q_2: ──────■────────────┼────────────────────░─┤ RY(θ[38]) ├┤ RZ(θ[42]) ├
« ┌─────┴─────┐ │ ░ ├───────────┤├───────────┤
«q_3: ┤ RZ(θ[33]) ├──────■────────────────────░─┤ RY(θ[39]) ├┤ RZ(θ[43]) ├
« └───────────┘ ░ └───────────┘└───────────┘</pre>
```python
feature_map = ZFeatureMap(feature_dimension=4, reps=1)
optimizer = SPSA(maxiter=100)
var_form = EfficientSU2(4,reps=3)
vqc = VQC(optimizer, feature_map, var_form, training_input, test_input)
backend = BasicAer.get_backend('qasm_simulator')
quantum_instance = QuantumInstance(backend, shots=1024, seed_simulator=seed, seed_transpiler=seed)
start=time.time()
result = vqc.run(quantum_instance)
end= time.time()
print("training took ",end-start)
params=vqc.optimal_params
y_pred=vqc.predict(sample_test,quantum_instance,params=params)
accuracy=classification_report(y_pred[1],label_test)
print(accuracy)
```
training took 509.36247539520264
precision recall f1-score support
0 1.00 1.00 1.00 10
1 1.00 0.82 0.90 11
2 0.82 1.00 0.90 9
accuracy 0.93 30
macro avg 0.94 0.94 0.93 30
weighted avg 0.95 0.93 0.93 30
```python
feature_map = ZFeatureMap(feature_dimension=4, reps=1)
optimizer = SPSA(maxiter=120)
var_form = TwoLocal(4, ['ry','rz'], 'crz', reps=3,entanglement='circular')
vqc = VQC(optimizer, feature_map, var_form, training_input, test_input)
backend = BasicAer.get_backend('qasm_simulator')
quantum_instance = QuantumInstance(backend, shots=1024, seed_simulator=seed, seed_transpiler=seed)
start=time.time()
result = vqc.run(quantum_instance)
end= time.time()
print("training took ",end-start)
params=vqc.optimal_params
y_pred=vqc.predict(sample_test,quantum_instance,params=params)
accuracy=classification_report(y_pred[1],label_test)
print(accuracy)
```
training took 815.8849918842316
precision recall f1-score support
0 1.00 1.00 1.00 10
1 1.00 0.90 0.95 10
2 0.91 1.00 0.95 10
accuracy 0.97 30
macro avg 0.97 0.97 0.97 30
weighted avg 0.97 0.97 0.97 30
```python
feature_map = ZFeatureMap(feature_dimension=4, reps=1)
optimizer = SPSA(maxiter=120)
var_form = TwoLocal(4, ['rz'], 'crz', reps=3,entanglement='circular')
vqc = VQC(optimizer, feature_map, var_form, training_input, test_input)
backend = BasicAer.get_backend('qasm_simulator')
quantum_instance = QuantumInstance(backend, shots=1024, seed_simulator=seed, seed_transpiler=seed)
start=time.time()
result = vqc.run(quantum_instance)
end= time.time()
print("training took ",end-start)
params=vqc.optimal_params
y_pred=vqc.predict(sample_test,quantum_instance,params=params)
accuracy=classification_report(y_pred[1],label_test)
print(accuracy)
```
/usr/local/lib/python3.7/dist-packages/qiskit/aqua/components/optimizers/optimizer.py:50: DeprecationWarning: The package qiskit.aqua.components.optimizers is deprecated. It was moved/refactored to qiskit.algorithms.optimizers (pip install qiskit-terra). For more information see <https://github.com/Qiskit/qiskit-aqua/blob/master/README.md#migration-guide>
'qiskit.algorithms.optimizers', 'qiskit-terra')
/usr/local/lib/python3.7/dist-packages/qiskit/aqua/algorithms/classifiers/vqc.py:98: DeprecationWarning: The package qiskit.aqua.algorithms.classifiers is deprecated. It was moved/refactored to qiskit_machine_learning.algorithms.classifiers (pip install qiskit-machine-learning). For more information see <https://github.com/Qiskit/qiskit-aqua/blob/master/README.md#migration-guide>
'qiskit-machine-learning')
/usr/local/lib/python3.7/dist-packages/qiskit/aqua/algorithms/vq_algorithm.py:72: DeprecationWarning: The class qiskit.aqua.algorithms.VQAlgorithm is deprecated. It was moved/refactored to qiskit.algorithms.VariationalAlgorithm (pip install qiskit-terra). For more information see <https://github.com/Qiskit/qiskit-aqua/blob/master/README.md#migration-guide>
'qiskit-terra')
/usr/local/lib/python3.7/dist-packages/qiskit/aqua/quantum_instance.py:137: DeprecationWarning: The class qiskit.aqua.QuantumInstance is deprecated. It was moved/refactored to qiskit.utils.QuantumInstance (pip install qiskit-terra). For more information see <https://github.com/Qiskit/qiskit-aqua/blob/master/README.md#migration-guide>
'qiskit-terra')
/usr/local/lib/python3.7/dist-packages/qiskit/ml/__init__.py:40: DeprecationWarning: The package qiskit.ml is deprecated. It was moved/refactored to qiskit_machine_learning (pip install qiskit-machine-learning). For more information see <https://github.com/Qiskit/qiskit-aqua/blob/master/README.md#migration-guide>
warn_package('ml', 'qiskit_machine_learning', 'qiskit-machine-learning')
training took 661.326092004776
precision recall f1-score support
0 0.00 0.00 0.00 0
1 1.00 0.30 0.46 30
2 0.00 0.00 0.00 0
accuracy 0.30 30
macro avg 0.33 0.10 0.15 30
weighted avg 1.00 0.30 0.46 30
/usr/local/lib/python3.7/dist-packages/sklearn/metrics/_classification.py:1272: UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
```python
feature_map = ZFeatureMap(feature_dimension=4, reps=1)
optimizer = SPSA(maxiter=120)
var_form = TwoLocal(4, ['rx', 'rz'], 'crz', reps=3,entanglement='circular')
vqc = VQC(optimizer, feature_map, var_form, training_input, test_input)
backend = BasicAer.get_backend('qasm_simulator')
quantum_instance = QuantumInstance(backend, shots=1024, seed_simulator=seed, seed_transpiler=seed)
start=time.time()
result = vqc.run(quantum_instance)
end= time.time()
print("training took ",end-start)
params=vqc.optimal_params
y_pred=vqc.predict(sample_test,quantum_instance,params=params)
accuracy=classification_report(y_pred[1],label_test)
print(accuracy)
```
training took 807.7532076835632
precision recall f1-score support
0 1.00 1.00 1.00 10
1 1.00 0.69 0.82 13
2 0.64 1.00 0.78 7
accuracy 0.87 30
macro avg 0.88 0.90 0.87 30
weighted avg 0.92 0.87 0.87 30
```python
```
| 861767dc3350ce6a72b226f49884916c57cade2a | 42,950 | ipynb | Jupyter Notebook | hons_vqc.ipynb | charmerDark/quantum_svm_vqc_comparison | a30fc4b739b05508c7ec764da705130d56d118d6 | ["MIT"] | null | null | null | hons_vqc.ipynb | charmerDark/quantum_svm_vqc_comparison | a30fc4b739b05508c7ec764da705130d56d118d6 | ["MIT"] | null | null | null | hons_vqc.ipynb | charmerDark/quantum_svm_vqc_comparison | a30fc4b739b05508c7ec764da705130d56d118d6 | ["MIT"] | null | null | null | 60.492958 | 399 | 0.502957 | true | 9,907 | Qwen/Qwen-72B | 1. YES 2. YES | 0.795658 | 0.705785 | 0.561564 | __label__eng_Latn | 0.209693 | 0.14303
# Homework 5
## Due Date: Tuesday, October 3rd at 11:59 PM
# Problem 1
We discussed documentation and testing in lecture and also briefly touched on code coverage. You must write tests for your code for your final project (and in life). There is a nice way to automate the testing process called continuous integration (CI).
This problem will walk you through the basics of CI and show you how to get up and running with some CI software.
### Continuous Integration
The idea behind continuous integration is to automate away the testing of your code.
We will be using it for our projects.
The basic workflow goes something like this:
1. You work on your part of the code in your own branch or fork
2. On every commit you make and push to GitHub, your code is automatically tested on a fresh machine on Travis CI. This ensures that there are no specific dependencies on the structure of your machine that your code needs to run and also ensures that your changes are sane
3. Now you submit a pull request to `master` in the main repo (the one you're hoping to contribute to). The repo manager creates a branch off `master`.
4. This branch is also set to run tests on Travis. If all tests pass, then the pull request is accepted and your code becomes part of master.
We use GitHub to integrate our roots library with Travis CI and Coveralls. Note that this is not the only workflow people use. Google git..github..workflow and feel free to choose another one for your group.
### Part 1: Create a repo
Create a public GitHub repo called `cs207test` and clone it to your local machine.
**Note:** No need to do this in Jupyter.
```bash
%%bash
git clone https://github.com/xuwd11/cs207test
```
Cloning into 'cs207test'...
### Part 2: Create a roots library
Use the example from lecture 7 to create a file called `roots.py`, which contains the `quad_roots` and `linear_roots` functions (along with their documentation).
Also create a file called `test_roots.py`, which contains the tests from lecture.
All of these files should be in your newly created `cs207test` repo. **Don't push yet!!!**
```python
%%file cs207test/roots.py
def linear_roots(a=1.0, b=0.0):
"""Returns the roots of a linear equation: ax+ b = 0.
INPUTS
=======
a: float, optional, default value is 1
Coefficient of linear term
b: float, optional, default value is 0
Coefficient of constant term
RETURNS
========
    root: float
       The single real root, -b / a, unless a = 0
       in which case a ValueError exception is raised
EXAMPLES
=========
>>> linear_roots(1.0, 2.0)
-2.0
"""
if a == 0:
raise ValueError("The linear coefficient is zero. This is not a linear equation.")
else:
        return -b / a
def quad_roots(a=1.0, b=2.0, c=0.0):
"""Returns the roots of a quadratic equation: ax^2 + bx + c = 0.
INPUTS
=======
a: float, optional, default value is 1
Coefficient of quadratic term
b: float, optional, default value is 2
Coefficient of linear term
c: float, optional, default value is 0
Constant term
RETURNS
========
roots: 2-tuple of complex floats
Has the form (root1, root2) unless a = 0
in which case a ValueError exception is raised
EXAMPLES
=========
>>> quad_roots(1.0, 1.0, -12.0)
((3+0j), (-4+0j))
"""
import cmath # Can return complex numbers from square roots
if a == 0:
raise ValueError("The quadratic coefficient is zero. This is not a quadratic equation.")
else:
sqrtdisc = cmath.sqrt(b * b - 4.0 * a * c)
r1 = -b + sqrtdisc
r2 = -b - sqrtdisc
return (r1 / 2.0 / a, r2 / 2.0 / a)
```
Overwriting cs207test/roots.py
```python
%%file cs207test/test_roots.py
import roots
def test_quadroots_result():
assert roots.quad_roots(1.0, 1.0, -12.0) == ((3+0j), (-4+0j))
def test_quadroots_types():
try:
roots.quad_roots("", "green", "hi")
except TypeError as err:
assert(type(err) == TypeError)
def test_quadroots_zerocoeff():
try:
roots.quad_roots(a=0.0)
except ValueError as err:
assert(type(err) == ValueError)
def test_linearroots_result():
assert roots.linear_roots(2.0, -3.0) == 1.5
def test_linearroots_types():
try:
roots.linear_roots("ocean", 6.0)
except TypeError as err:
assert(type(err) == TypeError)
def test_linearroots_zerocoeff():
try:
roots.linear_roots(a=0.0)
except ValueError as err:
assert(type(err) == ValueError)
```
Overwriting cs207test/test_roots.py
### Part 3: Create an account on Travis CI and Start Building
#### Part A:
Create an account on Travis CI and set your `cs207test` repo up for continuous integration once this repo can be seen on Travis.
#### Part B:
Create an instruction to Travis to make sure that
1. Python is installed
2. it's Python 3.5
3. pytest is installed
The file should be called `.travis.yml` and should have the contents:
```yml
language: python
python:
- "3.5"
before_install:
- pip install pytest pytest-cov
script:
- pytest
```
You should also create a configuration file called `setup.cfg`:
```cfg
[tool:pytest]
addopts = --doctest-modules --cov-report term-missing --cov roots
```
#### Part C:
Push the new changes to your `cs207test` repo.
At this point you should be able to see your build on Travis and check whether (and how) your tests pass.
### Part 4: Coveralls Integration
In class, we also discussed code coverage. Just like Travis CI runs tests automatically for you, Coveralls automatically checks your code coverage. One minor drawback of Coveralls is that it can only work with public GitHub accounts. However, this isn't too big of a problem since your projects will be public.
#### Part A:
Create an account on [`Coveralls`](https://coveralls.zendesk.com/hc/en-us), connect your GitHub, and turn Coveralls integration on.
#### Part B:
Update your the `.travis.yml` file as follows:
```yml
language: python
python:
- "3.5"
before_install:
- pip install pytest pytest-cov
- pip install coveralls
script:
- py.test
after_success:
- coveralls
```
Be sure to push the latest changes to your new repo.
### Part 5: Update README.md in repo
You can have your GitHub repo reflect the build status on Travis CI and the code coverage status from Coveralls. To do this, you should modify the `README.md` file in your repo to include some badges. Put the following at the top of your `README.md` file:
```
[](https://travis-ci.org/dsondak/cs207testing.svg?branch=master)
[](https://coveralls.io/github/dsondak/cs207testing?branch=master)
```
Of course, you need to make sure that the links are to your repo and not mine. You can find embed code on the Coveralls and Travis CI sites.
---
# Problem 2
Write a Python module for reaction rate coefficients. Your module should include functions for constant reaction rate coefficients, Arrhenius reaction rate coefficients, and modified Arrhenius reaction rate coefficients. Here are their mathematical forms:
\begin{align}
&k_{\textrm{const}} = k \tag{constant} \\
&k_{\textrm{arr}} = A \exp\left(-\frac{E}{RT}\right) \tag{Arrhenius} \\
&k_{\textrm{mod arr}} = A T^{b} \exp\left(-\frac{E}{RT}\right) \tag{Modified Arrhenius}
\end{align}
Test your functions with the following paramters: $A = 10^7$, $b=0.5$, $E=10^3$. Use $T=10^2$.
A few additional comments / suggestions:
* The Arrhenius prefactor $A$ is strictly positive
* The modified Arrhenius parameter $b$ must be real
* $R = 8.314$ is the ideal gas constant. It should never be changed (except to convert units)
* The temperature $T$ must be positive (assuming a Kelvin scale)
* You may assume that units are consistent
* Document each function!
* You might want to check for overflows and underflows
**Recall:** A Python module is a `.py` file which is not part of the main execution script. The module contains several functions which may be related to each other (like in this problem). Your module will be importable via the execution script. For example, suppose you have called your module `reaction_coeffs.py` and your execution script `kinetics.py`. Inside of `kinetics.py` you will write something like:
```python
import reaction_coeffs
# Some code to do some things
# :
# :
# :
# Time to use a reaction rate coefficient:
reaction_coeffs.const() # Need appropriate arguments, etc
# Continue on...
# :
# :
# :
```
Be sure to include your module in the same directory as your execution script.
```python
%%file reaction_coeffs.py
import numpy as np
def const(k):
    '''Return the constant reaction rate coefficient k_const.
    INPUTS
    =======
    k: float
       Reaction rate coefficient.
RETURNS
========
k_const: float, except the following cases:
If k <= 0, a ValueError exception will be raised;
if k = float('inf'), an OverflowError exception will be raised.
EXAMPLES
=========
>>> const(100)
100
'''
if k <= 0:
raise ValueError('The reaction rate coefficient must be positive.')
if abs(k) == float('inf'):
raise OverflowError
return k
def arr(A, E, T, R=8.314):
'''Returns the Arrhenius reaction rate coefficient.
INPUTS
=======
A: float
The Arrhenius prefactor.
E: float
The activation energy for the reaction (in the same unit as R*T).
T: float
The absolute temperature (in Kelvins).
R: float, optional, default value is 8.314
The universal gas constant.
RETURNS
========
k_arr: float, except the following cases:
If A <= 0, a ValueError exception will be raised;
if T <= 0, a ValueError exception will be raised;
if R <= 0, a ValueError exception will be raised;
if A*exp(-E/R/T) = float('inf'), an OverflowError exception will be raised.
EXAMPLES
=========
>>> arr(10**7, 10**3, 10**2)
3003549.0889639617
'''
if A <= 0:
raise ValueError('The Arrhenius prefactor A must be positive.')
if T <= 0:
raise ValueError('The temperature T must be positive.')
if R <= 0:
raise ValueError('The ideal gas constant R must be positive.')
if R != 8.314:
print('Warning! The ideal gas constant R has been changed from the default value (8.314).')
k_arr = A * np.exp(-E/R/T)
if k_arr == float('inf'):
raise OverflowError
if k_arr == 0:
print('Warning! An underflow error might occur.')
return k_arr
def mod_arr(A, b, E, T, R=8.314):
'''Returns the modified Arrhenius reaction rate coefficient.
INPUTS
=======
A: float
The Arrhenius prefactor.
b: float
The modified Arrhenius parameter. If b is not a float number, a conversion is attempted.
E: float
The activation energy for the reaction (in the same unit as R*T).
T: float
The absolute temperature (in Kelvins).
R: float, optional, default value is 8.314
The universal gas constant.
RETURNS
========
k_mod_arr: float, except the following cases:
If A <= 0, a ValueError exception will be raised;
if b cannot be converted to a float number, a ValueError exception will be raised;
if T <= 0, a ValueError exception will be raised;
if R <= 0, a ValueError exception will be raised;
if A*exp(-E/R/T) = float('inf'), an OverflowError exception will be raised.
EXAMPLES
=========
>>> mod_arr(10**7, 0.5, 10**3, 10**2)
30035490.889639616
'''
if A <= 0:
raise ValueError('The Arrhenius prefactor A must be positive.')
try:
b = float(b)
except:
raise ValueError('The modified Arrhenius parameter b must be real.')
if T <= 0:
raise ValueError('The temperature T must be positive.')
if R <= 0:
raise ValueError('The ideal gas constant R must be positive.')
if R != 8.314:
print('Warning! The ideal gas constant R has been changed from the default value (8.314).')
k_mod_arr = A*T**b*np.exp(-E/R/T)
if k_mod_arr == float('inf'):
raise OverflowError
if k_mod_arr == 0:
print('Warning! An underflow error might occur.')
return k_mod_arr
```
Writing reaction_coeffs.py
```python
# Test
import reaction_coeffs
def test_mod_arr():
assert reaction_coeffs.mod_arr(10**7, 0.5, 10**3, 10**2) == 30035490.889639616
test_mod_arr()
```
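As a quick sanity check, one can exercise the module with the parameters given above ($A = 10^{7}$, $b=0.5$, $E=10^{3}$, $T=10^{2}$). The value passed to `const` below is an arbitrary positive number chosen only for illustration, and the expected outputs are the ones quoted in the module docstrings:
```python
import reaction_coeffs

# parameters from the problem statement
A, b, E, T = 1.0e7, 0.5, 1.0e3, 1.0e2

print(reaction_coeffs.const(1.0e4))         # constant coefficient: 10000.0 (illustrative k)
print(reaction_coeffs.arr(A, E, T))         # Arrhenius: approximately 3003549.09
print(reaction_coeffs.mod_arr(A, b, E, T))  # modified Arrhenius: approximately 30035490.89
```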
---
# Problem 3
Write a function that returns the **progress rate** for a reaction of the following form:
\begin{align}
\nu_{A} A + \nu_{B} B \longrightarrow \nu_{C} C.
\end{align}
Order your concentration vector so that
\begin{align}
\mathbf{x} =
\begin{bmatrix}
\left[A\right] \\
\left[B\right] \\
\left[C\right]
\end{bmatrix}
\end{align}
Test your function with
\begin{align}
\nu_{i}^{\prime} =
\begin{bmatrix}
2.0 \\
1.0 \\
0.0
\end{bmatrix}
\qquad
\mathbf{x} =
\begin{bmatrix}
1.0 \\
2.0 \\
3.0
\end{bmatrix}
\qquad
k = 10.
\end{align}
You must document your function and write some tests in addition to the one suggested. You choose the additional tests, but you must have at least one doctest in addition to a suite of unit tests.
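For reference, the progress rate of this single elementary reaction follows the law of mass action (this formula is standard background, not an extra requirement of the assignment):
\begin{align}
\omega = k\prod_{i}{x_{i}^{\nu_{i}^{\prime}}} = k\left[A\right]^{\nu_{A}^{\prime}}\left[B\right]^{\nu_{B}^{\prime}},
\end{align}
so the suggested test values give $\omega = 10 \times 1.0^{2} \times 2.0^{1} = 20$.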
```python
def progress_rate_1(nu, x, k):
'''Returns the progress rate for a reaction of the form: nu_A A + nu_B B -> nu_C C
INPUTS
=======
nu: 3-element list or array
Stoichiometric coefficient vector which specifies the stoichiometric coefficients of
species A, B and C
x: 3-element list or array
Concentration vector which specifies the concentrations of species A, B and C
k: float
reaction rate coefficient
RETURNS
========
omega: float, except the following cases:
If nu or x is not a 3-element list or array, a TypeError will be raised;
        if nu contains negative element(s), a ValueError will be raised;
if x contains negative element(s), a ValueError will be raised;
if k <= 0, a ValueError will be raised.
EXAMPLES
>>> progress_rate_1([2, 1, 0], [1, 2, 3], 10)
20
>>> progress_rate_1([1, 1, 0], [4, 2, 3], 10)
80
'''
try:
if len(nu) != 3:
raise TypeError('nu must be a 3 element list or array.')
except:
raise TypeError('nu must be a 3 element list or array.')
if not all([nu_ >= 0 for nu_ in nu]):
raise ValueError('All elements in nu must be non-negative.')
try:
if len(x) != 3:
raise TypeError('x must be a 3 element list or array.')
except:
raise TypeError('x must be a 3 element list or array.')
if any([x_ < 0 for x_ in x]):
raise ValueError('All elements in x must be non-negative.')
if k <= 0:
raise ValueError('k must be positive.')
omega = k*x[0]**nu[0]*x[1]**nu[1]
return omega
```
```python
# Tests
def test_progress_rate_1_types():
try:
progress_rate_1([1, 1], [1, 1, 1], 10)
except TypeError as err:
assert(type(err) == TypeError)
def test_progress_rate_1_values():
try:
progress_rate_1([-1, 1, 1], [1, 1, 1], 10)
except ValueError as err:
assert(type(err) == ValueError)
test_progress_rate_1_types()
test_progress_rate_1_values()
```
```python
# doctest
import doctest
doctest.testmod(verbose=True)
```
Trying:
progress_rate_1([2, 1, 0], [1, 2, 3], 10)
Expecting:
20
ok
Trying:
progress_rate_1([1, 1, 0], [4, 2, 3], 10)
Expecting:
80
ok
4 items had no tests:
__main__
__main__.test_mod_arr
__main__.test_progress_rate_1_types
__main__.test_progress_rate_1_values
1 items passed all tests:
2 tests in __main__.progress_rate_1
2 tests in 5 items.
2 passed and 0 failed.
Test passed.
TestResults(failed=0, attempted=2)
---
# Problem 4
Write a function that returns the **progress rate** for a system of reactions of the following form:
\begin{align}
\nu_{11}^{\prime} A + \nu_{21}^{\prime} B \longrightarrow \nu_{31}^{\prime\prime} C \\
\nu_{12}^{\prime} A + \nu_{32}^{\prime} C \longrightarrow \nu_{22}^{\prime\prime} B + \nu_{32}^{\prime\prime} C
\end{align}
Note that $\nu_{ij}^{\prime}$ represents the stoichiometric coefficient of reactant $i$ in reaction $j$ and $\nu_{ij}^{\prime\prime}$ represents the stoichiometric coefficient of product $i$ in reaction $j$. Therefore, in this convention, I have ordered my vector of concentrations as
\begin{align}
\mathbf{x} =
\begin{bmatrix}
\left[A\right] \\
\left[B\right] \\
\left[C\right]
\end{bmatrix}.
\end{align}
Test your function with
\begin{align}
\nu_{ij}^{\prime} =
\begin{bmatrix}
1.0 & 2.0 \\
2.0 & 0.0 \\
0.0 & 2.0
\end{bmatrix}
\qquad
\nu_{ij}^{\prime\prime} =
\begin{bmatrix}
0.0 & 0.0 \\
0.0 & 1.0 \\
2.0 & 1.0
\end{bmatrix}
\qquad
\mathbf{x} =
\begin{bmatrix}
1.0 \\
2.0 \\
1.0
\end{bmatrix}
\qquad
k = 10.
\end{align}
You must document your function and write some tests in addition to the one suggested. You choose the additional tests, but you must have at least one doctest in addition to a suite of unit tests.
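As in the previous problem, each elementary reaction obeys the law of mass action (standard background, not an additional requirement), so the progress rate of reaction $j$ is
\begin{align}
\omega_{j} = k_{j}\prod_{i}{x_{i}^{\nu_{ij}^{\prime}}},
\end{align}
and the suggested inputs should yield $\omega_{1} = 10 \times 1.0^{1} \times 2.0^{2} = 40$ and $\omega_{2} = 10 \times 1.0^{2} \times 1.0^{2} = 10$.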
```python
import numpy as np
def progress_rate_2(nu_1, nu_2, x, k):
'''Returns the progress rate for a system of reactions of the form:
nu'_11 A + nu'_21 B -> nu''_31 C
nu'_12 A + nu'_32 C -> nu''_22 B + nu''_32 C
INPUTS
=======
nu_1: array of shape (3, 2)
Stoichiometric coefficient vector which specifies the stoichiometric coefficients of reactants.
If nu_1 is not an array, a conversion is attempted.
nu_2: array of shape (3, 2)
Stoichiometric coefficient vector which specifies the stoichiometric coefficients of products.
If nu_2 is not an array, a conversion is attempted.
x: 3-element list or array
Concentration vector which specifies the concentrations of species A, B and C
k: float
reaction rate coefficient
RETURNS
========
omega: 2-element tuple
Has the form (float, float) which corresponds to the progress rates of 2 reactions, except
the following cases:
If nu_1 or nu_2 cannot be converted to an array of shape (3, 2), a TypeError will be raised;
if nu_1 or nu_2 contains negative element(s), a ValueError will be raised;
if x is not a 3-element list or array, a TypeError will be raised;
if x contains negative element(s), a ValueError will be raised;
if k <= 0, a ValueError will be raised.
EXAMPLES
>>> progress_rate_2([[1, 2], [2, 0], [0, 2]], [[0, 0], [0, 1], [2, 1]], [1, 2, 1], 10)
(40, 10)
>>> progress_rate_2([[1, 1], [2, 0], [0, 2]], [[0, 0], [0, 1], [2, 1]], [1, 2, 3], 10)
(40, 90)
'''
try:
nu_1 = np.array(nu_1)
if nu_1.shape != (3, 2):
            raise TypeError('nu_1 must be convertible to a 3 x 2 array.')
    except:
        raise TypeError('nu_1 must be convertible to a 3 x 2 array.')
    try:
        nu_2 = np.array(nu_2)
        if nu_2.shape != (3, 2):
            raise TypeError('nu_2 must be convertible to a 3 x 2 array.')
    except:
        raise TypeError('nu_2 must be convertible to a 3 x 2 array.')
if np.any(nu_1 < 0):
raise ValueError('All elements in nu_1 must be non-negative.')
if np.any(nu_2 < 0):
raise ValueError('All elements in nu_2 must be non-negative.')
try:
if len(x) != 3:
raise TypeError('x must be a 3 element list or array.')
except:
raise TypeError('x must be a 3 element list or array.')
if any([x_ < 0 for x_ in x]):
raise ValueError('All elements in x must be non-negative.')
if k <= 0:
raise ValueError('k must be positive.')
omega = (k*np.prod([x[i]**nu_1[i, 0] for i in range(3)]), k*np.prod([x[i]**nu_1[i, 1] for i in range(3)]))
return omega
```
```python
# Tests
def test_progress_rate_2_types():
try:
progress_rate_2([[1, 2], [2, 0], [0, 3, 2]], [[0, 0], [0, 1], [2, 1]], [1, 2, 1], 10)
except TypeError as err:
assert(type(err) == TypeError)
def test_progress_rate_2_values():
try:
progress_rate_2([[1, -1], [2, 0], [0, 2]], [[0, 0], [0, 1], [2, 1]], [1, 2, 1], 10)
except ValueError as err:
assert(type(err) == ValueError)
test_progress_rate_2_types()
test_progress_rate_2_values()
```
```python
# doctest
import doctest
doctest.testmod(verbose=True)
```
Trying:
progress_rate_1([2, 1, 0], [1, 2, 3], 10)
Expecting:
20
ok
Trying:
progress_rate_1([1, 1, 0], [4, 2, 3], 10)
Expecting:
80
ok
Trying:
progress_rate_2([[1, 2], [2, 0], [0, 2]], [[0, 0], [0, 1], [2, 1]], [1, 2, 1], 10)
Expecting:
(40, 10)
ok
Trying:
progress_rate_2([[1, 1], [2, 0], [0, 2]], [[0, 0], [0, 1], [2, 1]], [1, 2, 3], 10)
Expecting:
(40, 90)
ok
6 items had no tests:
__main__
__main__.test_mod_arr
__main__.test_progress_rate_1_types
__main__.test_progress_rate_1_values
__main__.test_progress_rate_2_types
__main__.test_progress_rate_2_values
2 items passed all tests:
2 tests in __main__.progress_rate_1
2 tests in __main__.progress_rate_2
4 tests in 8 items.
4 passed and 0 failed.
Test passed.
TestResults(failed=0, attempted=4)
---
# Problem 5
Write a function that returns the **reaction rate** of a system of irreversible reactions of the form:
\begin{align}
\nu_{11}^{\prime} A + \nu_{21}^{\prime} B &\longrightarrow \nu_{31}^{\prime\prime} C \\
\nu_{32}^{\prime} C &\longrightarrow \nu_{12}^{\prime\prime} A + \nu_{22}^{\prime\prime} B
\end{align}
Once again $\nu_{ij}^{\prime}$ represents the stoichiometric coefficient of reactant $i$ in reaction $j$ and $\nu_{ij}^{\prime\prime}$ represents the stoichiometric coefficient of product $i$ in reaction $j$. In this convention, I have ordered my vector of concentrations as
\begin{align}
\mathbf{x} =
\begin{bmatrix}
\left[A\right] \\
\left[B\right] \\
\left[C\right]
\end{bmatrix}
\end{align}
Test your function with
\begin{align}
\nu_{ij}^{\prime} =
\begin{bmatrix}
1.0 & 0.0 \\
2.0 & 0.0 \\
0.0 & 2.0
\end{bmatrix}
\qquad
\nu_{ij}^{\prime\prime} =
\begin{bmatrix}
0.0 & 1.0 \\
0.0 & 2.0 \\
1.0 & 0.0
\end{bmatrix}
\qquad
\mathbf{x} =
\begin{bmatrix}
1.0 \\
2.0 \\
1.0
\end{bmatrix}
\qquad
k = 10.
\end{align}
You must document your function and write some tests in addition to the one suggested. You choose the additional tests, but you must have at least one doctest in addition to a suite of unit tests.
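The reaction rate of each species is obtained by weighting the progress rates with the net stoichiometric coefficients (again standard kinetics background rather than an extra requirement):
\begin{align}
f_{i} = \sum_{j}{\left(\nu_{ij}^{\prime\prime} - \nu_{ij}^{\prime}\right)\omega_{j}}, \qquad \omega_{j} = k_{j}\prod_{i}{x_{i}^{\nu_{ij}^{\prime}}},
\end{align}
so the suggested inputs give $\omega_{1} = 40$, $\omega_{2} = 10$ and reaction rates $\left(-30, -60, 20\right)$ for $A$, $B$, and $C$, respectively.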
```python
import numpy as np
def reaction_rate_1(nu_1, nu_2, x, k):
'''Returns the reaction rates for species in a system of reactions of the form:
nu'_11 A + nu'_21 B -> nu''_31 C
nu'_32 C -> nu''_12 A + nu''_22 B
INPUTS
=======
nu_1: array of shape (3, 2)
Stoichiometric coefficient vector which specifies the stoichiometric coefficients of reactants.
If nu_1 is not an array, a conversion is attempted.
nu_2: array of shape (3, 2)
Stoichiometric coefficient vector which specifies the stoichiometric coefficients of products.
If nu_2 is not an array, a conversion is attempted.
x: 3-element list or array
Concentration vector which specifies the concentrations of species A, B and C
k: float
reaction rate coefficient
RETURNS
========
f: 3-element array
Has the form (float, float, float) which corresponds to the reaction rates of species A, B, C,
except the following cases:
If nu_1 or nu_2 cannot be converted to an array of shape (3, 2), a TypeError will be raised;
if nu_1 or nu_2 contains negative element(s), a ValueError will be raised;
if x is not a 3-element list or array, a TypeError will be raised;
if x contains negative element(s), a ValueError will be raised;
if k <= 0, a ValueError will be raised.
EXAMPLES
>>> reaction_rate_1([[1, 0], [2, 0], [0, 2]], [[0, 1], [0, 2], [1, 0]], [1, 2, 1], 10)
array([-30, -60, 20])
>>> reaction_rate_1([[1, 0], [2, 0], [0, 1]], [[0, 1], [0, 2], [1, 0]], [1, 2, 3], 10)
array([-10, -20, 10])
'''
try:
nu_1 = np.array(nu_1)
if nu_1.shape != (3, 2):
            raise TypeError('nu_1 must be convertible to a 3 x 2 array.')
    except:
        raise TypeError('nu_1 must be convertible to a 3 x 2 array.')
    try:
        nu_2 = np.array(nu_2)
        if nu_2.shape != (3, 2):
            raise TypeError('nu_2 must be convertible to a 3 x 2 array.')
    except:
        raise TypeError('nu_2 must be convertible to a 3 x 2 array.')
if np.any(nu_1 < 0):
raise ValueError('All elements in nu_1 must be non-negative.')
if np.any(nu_2 < 0):
raise ValueError('All elements in nu_2 must be non-negative.')
try:
if len(x) != 3:
raise TypeError('x must be a 3 element list or array.')
except:
raise TypeError('x must be a 3 element list or array.')
if any([x_ < 0 for x_ in x]):
raise ValueError('All elements in x must be non-negative.')
if k <= 0:
raise ValueError('k must be positive.')
omega = progress_rate_2(nu_1, nu_2, x, k)
omega = np.array(omega).reshape((len(omega), 1))
f = np.sum((np.dot(nu_2, omega) - np.dot(nu_1, omega)), axis=1)
return f
```
```python
# Tests
def test_reaction_rate_1_types():
try:
reaction_rate_1([[1, 2], [2, 0], [0, 3, 2]], [[0, 0], [0, 1], [2, 1]], [1, 2, 1], 10)
except TypeError as err:
assert(type(err) == TypeError)
def test_reaction_rate_1_values():
try:
reaction_rate_1([[1, -1], [2, 0], [0, 2]], [[0, 0], [0, 1], [2, 1]], [1, 2, 1], 10)
except ValueError as err:
assert(type(err) == ValueError)
test_reaction_rate_1_types()
test_reaction_rate_1_values()
```
```python
# doctest
import doctest
doctest.testmod(verbose=True)
```
Trying:
progress_rate_1([2, 1, 0], [1, 2, 3], 10)
Expecting:
20
ok
Trying:
progress_rate_1([1, 1, 0], [4, 2, 3], 10)
Expecting:
80
ok
Trying:
progress_rate_2([[1, 2], [2, 0], [0, 2]], [[0, 0], [0, 1], [2, 1]], [1, 2, 1], 10)
Expecting:
(40, 10)
ok
Trying:
progress_rate_2([[1, 1], [2, 0], [0, 2]], [[0, 0], [0, 1], [2, 1]], [1, 2, 3], 10)
Expecting:
(40, 90)
ok
Trying:
reaction_rate_1([[1, 0], [2, 0], [0, 2]], [[0, 1], [0, 2], [1, 0]], [1, 2, 1], 10)
Expecting:
array([-30, -60, 20])
ok
Trying:
reaction_rate_1([[1, 0], [2, 0], [0, 1]], [[0, 1], [0, 2], [1, 0]], [1, 2, 3], 10)
Expecting:
array([-10, -20, 10])
ok
8 items had no tests:
__main__
__main__.test_mod_arr
__main__.test_progress_rate_1_types
__main__.test_progress_rate_1_values
__main__.test_progress_rate_2_types
__main__.test_progress_rate_2_values
__main__.test_reaction_rate_1_types
__main__.test_reaction_rate_1_values
3 items passed all tests:
2 tests in __main__.progress_rate_1
2 tests in __main__.progress_rate_2
2 tests in __main__.reaction_rate_1
6 tests in 11 items.
6 passed and 0 failed.
Test passed.
TestResults(failed=0, attempted=6)
---
# Problem 6
Put parts 3, 4, and 5 in a module called `chemkin`.
Next, pretend you're a client who needs to compute the reaction rates at three different temperatures ($T = \left\{750, 1500, 2500\right\}$) of the following system of irreversible reactions:
\begin{align}
2H_{2} + O_{2} \longrightarrow 2OH + H_{2} \\
OH + HO_{2} \longrightarrow H_{2}O + O_{2} \\
H_{2}O + O_{2} \longrightarrow HO_{2} + OH
\end{align}
The client also happens to know that reaction 1 is a modified Arrhenius reaction with $A_{1} = 10^{8}$, $b_{1} = 0.5$, $E_{1} = 5\times 10^{4}$, reaction 2 has a constant reaction rate parameter $k = 10^{4}$, and reaction 3 is an Arrhenius reaction with $A_{3} = 10^{7}$ and $E_{3} = 10^{4}$.
You should write a script that imports your `chemkin` module and returns the reaction rates of the species at each temperature of interest given the following species concentrations:
\begin{align}
\mathbf{x} =
\begin{bmatrix}
H_{2} \\
O_{2} \\
OH \\
HO_{2} \\
H_{2}O
\end{bmatrix} =
\begin{bmatrix}
2.0 \\
1.0 \\
0.5 \\
1.0 \\
1.0
\end{bmatrix}
\end{align}
You may assume that these are elementary reactions.
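One way to encode this system before calling the module (a sketch only; the variable names and the dictionary format for the rate-coefficient parameters are illustrative choices, not something prescribed by the assignment) is to store the reactant and product stoichiometric coefficients as $5 \times 3$ arrays, with species ordered as in $\mathbf{x}$ and one column per reaction:
```python
import numpy as np

# species order: [H2, O2, OH, HO2, H2O]; one column per reaction
nu_react = np.array([[2, 0, 0],   # H2
                     [1, 0, 1],   # O2
                     [0, 1, 0],   # OH
                     [0, 1, 0],   # HO2
                     [0, 0, 1]])  # H2O
nu_prod = np.array([[1, 0, 0],    # H2
                    [0, 1, 0],    # O2
                    [2, 0, 1],    # OH
                    [0, 0, 1],    # HO2
                    [0, 1, 0]])   # H2O
x = np.array([2.0, 1.0, 0.5, 1.0, 1.0])

# rate-coefficient parameters for the three reactions (illustrative format)
k_params = [{'A': 1.0e8, 'b': 0.5, 'E': 5.0e4},  # reaction 1: modified Arrhenius
            {'k': 1.0e4},                        # reaction 2: constant
            {'A': 1.0e7, 'E': 1.0e4}]            # reaction 3: Arrhenius
temperatures = [750, 1500, 2500]
```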
```python
%%file chemkin.py
import numpy as np
import reaction_coeffs
def progress_rate_1(nu, x, k):
'''Returns the progress rate for a reaction of the form: nu_A A + nu_B B -> nu_C C
INPUTS
=======
nu: 3-element list or array
Stoichiometric coefficient vector which specifies the stoichiometric coefficients of
species A, B and C
x: 3-element list or array
Concentration vector which specifies the concentrations of species A, B and C
k: float
reaction rate coefficient
RETURNS
========
omega: float, except the following cases:
If nu or x is not a 3-element list or array, a TypeError will be raised;
if nu contains non-positive element(s), a ValueError will be raised;
if x contains negative element(s), a ValueError will be raised;
if k <= 0, a ValueError will be raised.
EXAMPLES
>>> progress_rate_1([2, 1, 1], [1, 2, 3], 10)
20
>>> progress_rate_1([1, 1, 1], [4, 2, 3], 10)
80
'''
try:
if len(nu) != 3:
raise TypeError('nu must be a 3 element list or array.')
except:
raise TypeError('nu must be a 3 element list or array.')
if not all([nu_ > 0 for nu_ in nu]):
raise ValueError('All elements in nu must be positive.')
try:
if len(x) != 3:
raise TypeError('x must be a 3 element list or array.')
except:
raise TypeError('x must be a 3 element list or array.')
if any([x_ < 0 for x_ in x]):
raise ValueError('All elements in x must be non-negative.')
if k <= 0:
raise ValueError('k must be positive.')
omega = k*x[0]**nu[0]*x[1]**nu[1]
return omega
def progress_rate_2(nu_1, nu_2, x, k):
'''Returns the progress rate for a system of reactions of the form:
nu'_11 A + nu'_21 B -> nu''_31 C
nu'_12 A + nu'_32 C -> nu''_22 B + nu''_32 C
INPUTS
=======
nu_1: array of shape (3, 2)
Stoichiometric coefficient vector which specifies the stoichiometric coefficients of reactants.
If nu_1 is not an array, a conversion is attempted.
nu_2: array of shape (3, 2)
Stoichiometric coefficient vector which specifies the stoichiometric coefficients of products.
If nu_2 is not an array, a conversion is attempted.
x: 3-element list or array
Concentration vector which specifies the concentrations of species A, B and C
k: float
reaction rate coefficient
RETURNS
========
omega: 2-element tuple
Has the form (float, float) which corresponds to the progress rates of 2 reactions, except
the following cases:
If nu_1 or nu_2 cannot be converted to an array of shape (3, 2), a TypeError will be raised;
if nu_1 or nu_2 contains negative element(s), a ValueError will be raised;
if x is not a 3-element list or array, a TypeError will be raised;
if x contains negative element(s), a ValueError will be raised;
if k <= 0, a ValueError will be raised.
EXAMPLES
>>> progress_rate_2([[1, 2], [2, 0], [0, 2]], [[0, 0], [0, 1], [2, 1]], [1, 2, 1], 10)
(40, 10)
>>> progress_rate_2([[1, 1], [2, 0], [0, 2]], [[0, 0], [0, 1], [2, 1]], [1, 2, 3], 10)
(40, 90)
'''
try:
nu_1 = np.array(nu_1)
if nu_1.shape != (3, 2):
            raise TypeError('nu_1 must be convertible to a 3 x 2 array.')
    except:
        raise TypeError('nu_1 must be convertible to a 3 x 2 array.')
    try:
        nu_2 = np.array(nu_2)
        if nu_2.shape != (3, 2):
            raise TypeError('nu_2 must be convertible to a 3 x 2 array.')
    except:
        raise TypeError('nu_2 must be convertible to a 3 x 2 array.')
if np.any(nu_1 < 0):
raise ValueError('All elements in nu_1 must be non-negative.')
if np.any(nu_2 < 0):
raise ValueError('All elements in nu_2 must be non-negative.')
try:
if len(x) != 3:
raise TypeError('x must be a 3 element list or array.')
except:
raise TypeError('x must be a 3 element list or array.')
if any([x_ < 0 for x_ in x]):
raise ValueError('All elements in x must be non-negative.')
if k <= 0:
raise ValueError('k must be positive.')
omega = (k*np.prod([x[i]**nu_1[i, 0] for i in range(3)]), k*np.prod([x[i]**nu_1[i, 1] for i in range(3)]))
return omega
def reaction_rate_1(nu_1, nu_2, x, k):
'''Returns the reaction rates for species in a system of reactions of the form:
nu'_11 A + nu'_21 B -> nu''_31 C
nu'_32 C -> nu''_12 A + nu''_22 B
INPUTS
=======
nu_1: array of shape (3, 2)
Stoichiometric coefficient vector which specifies the stoichiometric coefficients of reactants.
If nu_1 is not an array, a conversion is attempted.
nu_2: array of shape (3, 2)
Stoichiometric coefficient vector which specifies the stoichiometric coefficients of products.
If nu_2 is not an array, a conversion is attempted.
x: 3-element list or array
Concentration vector which specifies the concentrations of species A, B and C
k: float
reaction rate coefficient
RETURNS
========
f: 3-element array
Has the form (float, float, float) which corresponds to the reaction rates of species A, B, C,
except the following cases:
If nu_1 or nu_2 cannot be converted to an array of shape (3, 2), a TypeError will be raised;
if nu_1 or nu_2 contains negative element(s), a ValueError will be raised;
if x is not a 3-element list or array, a TypeError will be raised;
if x contains negative element(s), a ValueError will be raised;
if k <= 0, a ValueError will be raised.
EXAMPLES
>>> reaction_rate_1([[1, 0], [2, 0], [0, 2]], [[0, 1], [0, 2], [1, 0]], [1, 2, 1], 10)
array([-30, -60, 20])
>>> reaction_rate_1([[1, 0], [2, 0], [0, 1]], [[0, 1], [0, 2], [1, 0]], [1, 2, 3], 10)
array([-10, -20, 10])
'''
try:
nu_1 = np.array(nu_1)
if nu_1.shape != (3, 2):
            raise TypeError('nu_1 must be able to be converted to a 3 X 2 array.')
    except:
        raise TypeError('nu_1 must be able to be converted to a 3 X 2 array.')
try:
nu_2 = np.array(nu_2)
if nu_2.shape != (3, 2):
            raise TypeError('nu_2 must be able to be converted to a 3 X 2 array.')
    except:
        raise TypeError('nu_2 must be able to be converted to a 3 X 2 array.')
if np.any(nu_1 < 0):
raise ValueError('All elements in nu_1 must be non-negative.')
if np.any(nu_2 < 0):
raise ValueError('All elements in nu_2 must be non-negative.')
try:
if len(x) != 3:
raise TypeError('x must be a 3 element list or array.')
except:
raise TypeError('x must be a 3 element list or array.')
if any([x_ < 0 for x_ in x]):
raise ValueError('All elements in x must be non-negative.')
if k <= 0:
raise ValueError('k must be positive.')
omega = progress_rate_2(nu_1, nu_2, x, k)
omega = np.array(omega).reshape((len(omega), 1))
f = np.sum((np.dot(nu_2, omega) - np.dot(nu_1, omega)), axis=1)
return f
```
Writing chemkin.py
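The module above imports `reaction_coeffs`, which was written earlier in the assignment and is not repeated here. A minimal sketch consistent with how it is called below, `arr(A, E, T, R)` and `mod_arr(A, b, E, T, R)`, is given for reference; the actual module may include additional input checking.
```python
# reaction_coeffs.py (sketch only; the real module is defined earlier in the assignment)
import numpy as np

def arr(A, E, T, R=8.314):
    '''Arrhenius rate coefficient: k = A * exp(-E / (R * T)).'''
    return A * np.exp(-E / (R * T))

def mod_arr(A, b, E, T, R=8.314):
    '''Modified Arrhenius rate coefficient: k = A * T**b * exp(-E / (R * T)).'''
    return A * T**b * np.exp(-E / (R * T))
```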
```python
import chemkin
import numpy as np
class reaction:
def __init__(self, nu_1, nu_2, k_paras, specie_names=None):
self.nu_1 = np.array(nu_1)
self.nu_2 = np.array(nu_2)
self.rate_coeffs = self.f_rate_coeffs(k_paras)
        if specie_names is None:
self.specie_names = ['Specie{}'.format(i+1) for i in range(self.nu_1.shape[0])]
else:
self.specie_names = specie_names
def f_rate_coeffs(self, k_paras):
def rate_coeffs(T, R=8.314):
k = []
for k_para in k_paras:
if 'k' in k_para:
k.append(k_para['k'])
continue
elif 'b' in k_para:
k.append(chemkin.reaction_coeffs.mod_arr(k_para['A'], k_para['b'], k_para['E'], T, R))
continue
else:
k.append(chemkin.reaction_coeffs.arr(k_para['A'], k_para['E'], T, R))
return k
return rate_coeffs
def progress_rate(self, x, k, **kwargs):
nu_1 = self.nu_1
nu_2 = self.nu_2
if 'nu_1' in kwargs:
nu_1 = kwargs['nu_1']
if 'nu_2' in kwargs:
nu_2 = kwargs['nu_2']
dim = nu_1.shape
omega = [k[j]*np.prod([x[i]**nu_1[i, j] for i in range(dim[0])]) for j in range(dim[1])]
return omega
def reaction_rate(self, x, k, **kwargs):
nu_1 = self.nu_1
nu_2 = self.nu_2
if 'nu_1' in kwargs:
nu_1 = kwargs['nu_1']
if 'nu_2' in kwargs:
nu_2 = kwargs['nu_2']
dim = nu_1.shape
omega = np.array(self.progress_rate(x, k, nu_1=nu_1, nu_2=nu_2)).reshape(len(nu_1[1]), 1)
f = np.sum((np.dot(nu_2, omega) - np.dot(nu_1, omega)), axis=1)
return f
def cal_reaction_rate(self, x, T, R=8.314):
self.x = x
self.T = T
self.k = self.rate_coeffs(self.T, R)
self.f = self.reaction_rate(self.x, self.k)
return self.f
def print_reaction_rate(self):
print('At {} K ({}), the reaction rates are as follows:'\
.format(self.T, ', '.join(['[{}] = {}'.format(specie_name, self.x[i])\
for i, specie_name in enumerate(self.specie_names)])))
print('\n'.join(['f({}) = {}'.format(specie_name, self.f[i])\
for i, specie_name in enumerate(self.specie_names)]))
print()
```
```python
# Test
specie_names = ['H2', 'O2', 'OH', 'HO2', 'H2O']
nu_1 = [[2, 0, 0], [1, 0, 1], [0, 1, 0], [0, 1, 0], [0, 0, 1]]
nu_2 = [[1, 0, 0], [0, 1, 0], [2, 0, 1], [0, 0, 1], [0, 1, 0]]
x = [2, 1, 0.5, 1, 1]
k = [{'A':10**8, 'b':0.5, 'E':5*10**4}, {'k':10**4}, {'A':10**7, 'E':10**4}]
r = reaction(nu_1, nu_2, k, specie_names)
T_list = [750, 1500, 2500]
for T in T_list:
r.cal_reaction_rate(x, T)
r.print_reaction_rate()
```
At 750 K ([H2] = 2, [O2] = 1, [OH] = 0.5, [HO2] = 1, [H2O] = 1), the reaction rates are as follows:
f(H2) = -3607077.8728040676
f(O2) = -5613545.183620796
f(OH) = 9220623.056424864
f(HO2) = 2006467.3108167283
f(H2O) = -2006467.3108167283
At 1500 K ([H2] = 2, [O2] = 1, [OH] = 0.5, [HO2] = 1, [H2O] = 1), the reaction rates are as follows:
f(H2) = -281117620.76487046
f(O2) = -285597559.2380457
f(OH) = 566715180.0029161
f(HO2) = 4479938.47317522
f(H2O) = -4479938.47317522
At 2500 K ([H2] = 2, [O2] = 1, [OH] = 0.5, [HO2] = 1, [H2O] = 1), the reaction rates are as follows:
f(H2) = -1804261425.9632487
f(O2) = -1810437356.938906
f(OH) = 3614698782.902155
f(HO2) = 6175930.975657232
f(H2O) = -6175930.975657232
---
# Problem 7
Get together with your project team, form a GitHub organization (with a descriptive team name), and give the teaching staff access. You can have as many repositories as you like within your organization. However, we will grade the repository called **`cs207-FinalProject`**.
Within the `cs207-FinalProject` repo, you must set up Travis CI and Coveralls. Make sure your `README.md` file includes badges indicating how many tests are passing and the coverage of your code.
| 266b9866d69d3b74f653b64ba8fc0a014626a898 | 55,305 | ipynb | Jupyter Notebook | homeworks/HW5/HW5-final.ipynb | xuwd11/cs207_Weidong_Xu | 00442657239c7a4040501bf7fa0f6697c731fe94 | [
"MIT"
]
| null | null | null | homeworks/HW5/HW5-final.ipynb | xuwd11/cs207_Weidong_Xu | 00442657239c7a4040501bf7fa0f6697c731fe94 | [
"MIT"
]
| null | null | null | homeworks/HW5/HW5-final.ipynb | xuwd11/cs207_Weidong_Xu | 00442657239c7a4040501bf7fa0f6697c731fe94 | [
"MIT"
]
| null | null | null | 37.571332 | 424 | 0.511183 | true | 12,292 | Qwen/Qwen-72B | 1. YES
2. YES | 0.774583 | 0.896251 | 0.694221 | __label__eng_Latn | 0.955191 | 0.45124 |
# CBE 60553, Fall 2017, Homework 1
## Problem 1: Choose your path wisely
A particular system has the equation of state $U = \frac{5}{2} PV + C$, where $C$ is an undetermined constant.
### 1. The system starts at state $A$, in which $P={0.2}\ {MPa}$ and $V = {0.01}\ {m^{3}}$. It is taken quasistatically along the path shown in the figure ($A \rightarrow B$, $B \rightarrow C$, $C \rightarrow A$ ). Calculate the heat transferred from the surroundings, $q$, and the work done on the system, $w$, for each step along the path.
#### $i$) $A \rightarrow B$
$$U_{AB} = U_B - U_A = \frac{5}{2} (P_B V_B - P_A V_A) = {10000}\ J$$
$$W_{AB} = - \int_{V_A}^{V_B}P dV = -P(V_B - V_A) = -4000\ J$$
$$\therefore \ Q_{AB} = U_{AB} - W_{AB} = 14000\ J$$
#### $ii$) $B \rightarrow C$
Need pressure as a function of volume along this path. From the figure, the relationship is linear and given by
$$ P(V)= -15 \times 10^{6} V + 0.65 \times 10^{6} $$
Integrate to find the work
$$W_{BC} = - \int_{V_B}^{V_C}P dV = -\left[\frac{-15 \times 10^{6} V^{2}}{2} + 0.65 \times 10^{6} V \right]_{V_B}^{V_C} = 7000\ J$$
From our expression for U
$$U_{BC} = U_C - U_B = \frac{5}{2} (P_C V_C - P_B V_B) = -2500 \ J$$
$$\therefore \ Q_{BC} = U_{BC} - W_{BC} = -9500\ J$$
#### $iii$) $C \rightarrow A$
$$U_{CA} = U_A - U_C = \frac{5}{2} (P_A V_A - P_C V_C) = -7500\ J$$
Since volume is constant
$$W_{CA} = - \int_{V_C}^{V_A}P dV = 0 $$
$$\therefore \ Q_{CA} = U_{CA} - W_{CA} = -7500\ J$$
### 2. Calculate $q$ and $w$ for a quasistatic process starting at $A$ and ending at $B$ along the path $P=a + b(V-c)^{2}$, where $a = {0.1}\ {MPa}$, $b= 1 \times 10^{3}\ {MPa \cdot m^{-6}}$, and $c = {0.02}\ {m^{3}}$.
$A \rightarrow B$
Along the Parabola
$$ P = 10^{5} + 10^{9} \times (V-0.02)^{2} $$
the work can be found by integration
$$W_{AB} = - \int_{V_A}^{V_B}P dV = - \int_{V_A}^{V_B}\left[10^{5} + 10^{9} \times (V-0.02)^{2} \right] dV = -\left[10^{5} V + \frac {10^{9}} {3} (V-0.02)^{3} \right]_{0.01}^{0.03} = -2666.67\ J$$
Since
$$ U_{AB} = 10000\ J $$
then
$$ Q_{AB} = U_{AB} - W_{AB} = 10000\ J - (-2666.67\ J) = 12666.67\ J$$
### 3. The system exchanges both heat and work with its surroundings along the paths above. An /adiabat/ is a particular quasistatic path along which work is done but no heat is transferred. Find the form of the adiabats $P=P(V)$ for the system described by $U = \frac{5}{2} PV +C$. (Hint: If $\bar{d}q_\text{qs} = 0$, then $dU = \bar{d} w_\text{qs} = -PdV$. What else does $dU$ equal?)
For an adiabatic system,
$$ dU = dQ - PdV = -PdV $$
and we can also write
$$ dU = \left. \frac{\partial U}{\partial V} \right|_{P} dV + \left. \frac{\partial U}{\partial P} \right|_{V} dP = 2.5PdV +
2.5VdP = -PdV $$
$$ 3.5P\,dV + 2.5V\,dP = 0 \quad \Rightarrow \quad \frac {7}{V}\, dV = - \frac{5}{P}\,dP $$
$$ \left. ln{V^{7}} \right|_{V_{0}}^{V} = -\left. ln{P^{5}} \right|_{P_{0}}^{P} $$
$$ ln{P^{5}V^{7}} = C'\ (C' = const)$$
$$ P^{5}V^{7} = C\ (C = const)$$
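As an optional check (not part of the original solution), sympy recovers the same adiabat from the differential relation $3.5P\,dV + 2.5V\,dP = 0$:
```python
import sympy as sp

V = sp.symbols('V', positive=True)
P = sp.Function('P')

# Along an adiabat: dP/dV = -(7/5) * P / V
sol = sp.dsolve(sp.Eq(P(V).diff(V), -sp.Rational(7, 5) * P(V) / V), P(V))
print(sol)  # P(V) = C1*V**(-7/5), equivalent to P**5 * V**7 = const
```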
## Problem 2: Is it fundamental enough?
The following ten equations are purported to be fundamental equations for
various thermodynamic systems. Five, however, are inconsisent with the basic
postulates of a fundamental equation and are thus unphysical. For each, plot
the relationship between $S$ and $U$ and identify the five that are
unacceptable. $v_0$, $\theta$, and $R$ are all positive constants and, in the
case of fractional exponents, the real positive root is to be implied.
$ (1)\ S = \left ( \frac{R^2}{v_0\theta} \right )^{1/3}\left ( NVU \right
)^{1/3}\hspace{20pt}
(2)\ S = \left ( \frac{R}{\theta^2} \right )^{1/3}\left ( \frac{NU}{V} \right)^{2/3} $
$ (3)\ S = \left ( \frac{R}{\theta} \right )^{1/2}\left ( NU + \frac{R\theta V^2}{v_0^2} \right)^{1/2}\hspace{20pt}
(4)\ S = \left ( \frac{R^2\theta}{v_0^3} \right ) \frac{V^3}{NU} $
$ (5)\ S = \left ( \frac{R^3}{v_0\theta^2} \right )^{1/5}\left ( N^2U^2V \right)^{1/5}\hspace{20pt}
(6)\ S = NR \ln \left ( \frac{UV}{N^2 R \theta v_0} \right ) $
$ (7)\ S = \left ( \frac{NRU}{\theta} \right )^{1/2}\exp \left (-\frac{V^2}{2N^2v_0^2} \right )\hspace{20pt}
(8)\ S = \left ( \frac{NRU}{\theta} \right )^{1/2}\exp
\left (-\frac{UV}{NR\theta v_0} \right ) $
$ (9)\ U = \left ( \frac{NR\theta V}{v_0} \right ) \left ( 1+\frac{S}{NR} \right ) \exp \left (-S/NR \right)\hspace{20pt}
(10)\ U = \left ( \frac{v_0\theta}{R} \right ) \frac{S^2}{V} \exp\left ( S/NR \right) $
There are three postulates we are testing for
$(i)\ S(\lambda U,\lambda V, \lambda N) = \lambda S(U,V,N)$ : Postulate 3
$(ii)\ \frac{\partial S}{\partial U} > 0 $ : Postulate 2
$ (iii)\ \frac{\partial U}{\partial S} = 0,\ as\ S \rightarrow 0$ : Postulate 4
We assume $v_{0} = 1$, $R = 1$, $\theta = 1$, and $N$ and $V$ are constants.
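A small numerical helper (not part of the original solution) can be used to spot-check postulate $(i)$ before looking at the plots; candidates (1) and (2) are shown as examples:
```python
# Spot-check of first-degree homogeneity, S(l*U, l*V, l*N) = l*S(U, V, N),
# with R = v0 = theta = 1 as assumed above.
def is_extensive(S, U=1.3, V=0.7, N=2.0, lam=3.0, tol=1e-9):
    return abs(S(lam * U, lam * V, lam * N) - lam * S(U, V, N)) < tol

S1 = lambda U, V, N: (N * V * U)**(1. / 3)   # candidate (1)
S2 = lambda U, V, N: (N * U / V)**(2. / 3)   # candidate (2)
print(is_extensive(S1), is_extensive(S2))    # True, False
```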
$(1)\ S = \left ( \frac{R^2}{v_0\theta} \right )^{1/3}\left ( NVU \right)^{1/3} = \left (NVU \right)^{1/3}$
$\hspace{10pt}$ $(i)\ S(\lambda U,\lambda V, \lambda N) = (\lambda^{3}NVU)^{1/3} = \lambda \left(NVU\right)^{1/3} = \lambda S(U,V,N) $
$\hspace{10pt}$ $(ii)\ \frac{\partial S}{\partial U} > 0 $
$\hspace{10pt}$ $(iii)\ \frac{\partial U}{\partial S} = 0,\ as\ S \rightarrow 0$
$\therefore$ $(1)$ is acceptable.
```python
import matplotlib.pyplot as plt
import numpy as np
U = np.linspace(0,2,100)
S = []
for u in U:
s = u **(1./3) # assume N = 1 and V = 1
S.append(s)
plt.plot(U, S,'-')
plt.xlabel('U')
plt.ylabel('S')
plt.show()
```
$(2)\ S = \left ( \frac{R}{\theta^2} \right )^{1/3}\left ( \frac{NU}{V} \right)^{2/3} = \left ( \frac{NU}{V} \right)^{2/3} $
$\hspace{10pt}$ $(i)\ S(\lambda U,\lambda V, \lambda N) = \left (\lambda \frac{NU}{V}\right)^{2/3} = \lambda^{2/3} \left(\frac{NU}{V}\right) ^{2/3} \neq \lambda S(U,V,N) $
$\hspace{10pt}$ $(ii)\ \frac{\partial S}{\partial U} > 0 $
$\hspace{10pt}$ $ (iii)\ \frac{\partial U}{\partial S} = 0,\ as\ S \rightarrow 0$
$\therefore$ $(2)$ is not acceptable.
```python
import matplotlib.pyplot as plt
import numpy as np
U = np.linspace(0,2,100)
S = []
for u in U:
s = u **(2./3) # assume N = 1 and V = 1
S.append(s)
plt.plot(U, S,'-')
plt.xlabel('U')
plt.ylabel('S')
plt.show()
```
$(3)\ S = \left ( \frac{R}{\theta} \right )^{1/2}\left ( NU + \frac{R\theta V^2}{v_0^2} \right)^{1/2} = \left ( NU + V^2 \right)^{1/2}$
$\hspace{10pt}$ $(i)\ S(\lambda U,\lambda V, \lambda N) = \left (\lambda^2 NU + \lambda^2 V^2\right)^{1/2} = \lambda \left(NU + V^2\right) ^{1/2} = \lambda S(U,V,N) $
$\hspace{10pt}$ $(ii)\ \frac{\partial S}{\partial U} > 0 $
$\hspace{10pt}$ $ (iii)\ \frac{\partial U}{\partial S} = 0,\ as\ S \rightarrow 0$
$\therefore$ $(3)$ is acceptable.
```python
import matplotlib.pyplot as plt
import numpy as np
U = np.linspace(-1,2,100)
S = []
for u in U:
s = (u + 1**2)**(1./2) # assume N = 1 and V = 1
S.append(s)
plt.plot(U, S,'-')
plt.xlabel('U')
plt.ylabel('S')
plt.show()
```
$(4)\ S = \left ( \frac{R^2\theta}{v_0^3} \right ) \frac{V^3}{NU} = \frac{V^3}{NU} $
$\hspace{10pt}$ $(i)\ S(\lambda U,\lambda V, \lambda N) = \lambda \frac{V^3}{NU}
= \lambda S(U,V,N) $
$\hspace{10pt}$ $(ii)\ \frac{\partial S}{\partial U} < 0 $
$\hspace{10pt}$ $ (iii)\ \frac{\partial U}{\partial S} \neq 0,\ as\ S \rightarrow 0$
$\therefore$ $(4)$ is not acceptable.
```python
import matplotlib.pyplot as plt
import numpy as np
U = np.linspace(0.1,2,100)
S = []
for u in U:
s = (1**3) / (1 * u) # assume N = 1 and V = 1
S.append(s)
plt.plot(U, S,'-')
plt.xlabel('U')
plt.ylabel('S')
plt.show()
```
$(5)\ S = \left ( \frac{R^3}{v_0\theta^2} \right )^{1/5}\left ( N^2U^2V \right)^{1/5} = \left (N^2 U^2 V \right)^{1/5} $
$\hspace{10pt}$ $(i)\ S(\lambda U,\lambda V, \lambda N) = \lambda \left (N^2 U^2 V \right)^{1/5} = \lambda S(U,V,N) $
$\hspace{10pt}$ $(ii)\ \frac{\partial S}{\partial U} > 0 $
$\hspace{10pt}$ $ (iii)\ \frac{\partial U}{\partial S} = 0,\ as\ S \rightarrow 0$
$\therefore$ $(5)$ is acceptable.
```python
import matplotlib.pyplot as plt
import numpy as np
U = np.linspace(0,2,100)
S = []
for u in U:
s = (u**2)**(1./5) # assume N = 1 and V = 1
S.append(s)
plt.plot(U, S,'-')
plt.xlabel('U')
plt.ylabel('S')
plt.show()
```
$(6)\ S = NR \ln \left ( \frac{UV}{N^2 R \theta v_0} \right) = N \ln \left ( \frac{UV}{N^2} \right)$
$\hspace{10pt}$ $(i)\ S(\lambda U,\lambda V, \lambda N) = \lambda N \ln \left (\frac{UV}{N^2} \right) = \lambda S(U,V,N) $
$\hspace{10pt}$ $(ii)\ \frac{\partial S}{\partial U} > 0 $
$\hspace{10pt}$ $ (iii)\ \frac{\partial U}{\partial S} \neq 0,\ as\ S \rightarrow 0$
$\therefore$ $(6)$ is not acceptable.
```python
import matplotlib.pyplot as plt
import numpy as np
U = np.linspace(0.01,2,100)
S = []
for u in U:
s = np.log(u) # assume N = 1 and V = 1
S.append(s)
plt.plot(U, S,'-')
plt.xlabel('U')
plt.ylabel('S')
plt.show()
```
$(7)\ S = \left ( \frac{NRU}{\theta} \right )^{1/2}\exp \left (-\frac{V^2}{2N^2v_0^2} \right) = \left (NU \right )^{1/2}\exp \left (-\frac{V^2}{2N^2} \right)$
$\hspace{10pt}$ $(i)\ S(\lambda U,\lambda V, \lambda N) = \lambda \left (NU \right )^{1/2}\exp \left (-\frac{V^2}{2N^2} \right)= \lambda S(U,V,N) $
$\hspace{10pt}$ $(ii)\ \frac{\partial S}{\partial U} > 0 $
$\hspace{10pt}$ $ (iii)\ \frac{\partial U}{\partial S} = 0,\ as\ S \rightarrow 0$
$\therefore$ $(7)$ is acceptable.
```python
import matplotlib.pyplot as plt
import numpy as np
U = np.linspace(0,2,100)
S = []
for u in U:
s = (u**(0.5)) * np.exp(-0.5) # assume N = 1 and V = 1
S.append(s)
plt.plot(U, S,'-')
plt.xlabel('U')
plt.ylabel('S')
plt.show()
```
$(8)\ S = \left ( \frac{NRU}{\theta} \right )^{1/2}\exp
\left (-\frac{UV}{NR\theta v_0} \right) = \left (NU \right )^{1/2}\exp \left (-\frac{UV}{N} \right)$
$\hspace{10pt}$ $(i)\ S(\lambda U,\lambda V, \lambda N) = \lambda \left (NU \right )^{1/2}\exp \left (-\lambda \frac{UV}{N} \right) \neq \lambda S(U,V,N) $
$\hspace{10pt}$ $(ii)\ \frac{\partial S}{\partial U}$ is not always positive (it changes sign), so $S$ is not monotonically increasing in $U$.
$\hspace{10pt}$ $(iii)\ \frac{\partial U}{\partial S} = 0,\ as\ S \rightarrow 0$
$\therefore$ $(8)$ is not acceptable.
```python
import matplotlib.pyplot as plt
import numpy as np
U = np.linspace(0,2,100)
S = []
for u in U:
s = (u**(0.5)) * np.exp(-u) # assume N = 1 and V = 1
S.append(s)
plt.plot(U, S,'-')
plt.xlabel('U')
plt.ylabel('S')
plt.show()
```
$(9)\ U = \left ( \frac{NR\theta V}{v_0} \right ) \left ( 1+\frac{S}{NR} \right ) \exp \left (-\frac{S}{NR} \right)= \left ( NV \right ) \left ( 1+\frac{S}{N} \right ) \exp \left (-\frac{S}{N} \right)$
$\hspace{10pt}$ $(i)\ U(\lambda S,\lambda V, \lambda N) = \lambda^{2} \left (NV \right ) \left(1 + \frac{S}{N}\right) \exp \left (-\frac{S}{N} \right) \neq \lambda U(S,V,N) $
$\hspace{10pt}$ $(ii)\ \frac{\partial S}{\partial U} < 0$ for $S > 0$, so $S$ is not a monotonically increasing function of $U$.
$\hspace{10pt}$ $ (iii)$ assume $N = 1$ and $V = 1$,
$\hspace{26pt}$ then $ \frac{\partial U}{\partial S} = \exp \left(-S \right) - \exp \left(-S \right)(1 + S) $
$\hspace{26pt}$ thus, $ \frac{\partial U}{\partial S} = 0,\ as\ S \rightarrow 0$
$\therefore$ $(9)$ is not acceptable.
```python
import matplotlib.pyplot as plt
import numpy as np
S = np.linspace(0,2,100)
U = []
for s in S:
u = (1 + s) * np.exp(-s) # assume N = 1 and V = 1
U.append(u)
plt.plot(S, U,'-')
plt.xlabel('S')
plt.ylabel('U')
plt.xlim(0,2)
plt.ylim(0,1.2)
plt.show()
```
$(10)\ U = \left ( \frac{v_0\theta}{R} \right ) \frac{S^2}{V} \exp\left (\frac{S}{NR} \right) = \frac{S^2}{V} \exp\left (\frac{S}{N} \right) $
$\hspace{10pt}$ $(i)\ U(\lambda S,\lambda V, \lambda N) = \lambda \frac{S^2}{V}
\exp \left (\frac{S}{N} \right) = \lambda U(S,V,N) $
$\hspace{10pt}$ $(ii)\ \frac{\partial S}{\partial U}$ > 0.
$\hspace{10pt}$ $ (iii)\ \frac{\partial U}{\partial S} = 0,\ as\ S \rightarrow 0$
$\therefore$ $(10)$ is acceptable.
```python
import matplotlib.pyplot as plt
import numpy as np
S = np.linspace(0,2,100)
U = []
for s in S:
u = (s**(2)) * np.exp(s) # assume N = 1 and V = 1
U.append(u)
plt.plot(S, U,'-')
plt.xlabel('S')
plt.ylabel('U')
plt.xlim(0,2)
plt.show()
```
Therefore, (2),(4),(6),(8) and (9) are not physically permissible.
## Problem 3: Find your equilibrium
The fundamental equations of both systems $A$ and $B$ are
$$ S = \left (
\frac{R^2}{v_0\theta} \right )^{1/3} \left ( N V U \right )^{1/3} $$
The volume and mole number of system $A$ are $ 9 \times 10^{-6}\ m^3 $ and $3$ mol, respectively, and of system $B$ are $ 4 \times 10^{-6}\ m^3 $ and $2$ mol,
respectively. First suppose $A$ and $B$ are completely isolated from one
another. Plot the total entropy $S_A + S_B$ as function of $U_A/(U_A + U_B)$,
where $U_A + U_B = 80$ J. If $A$ and $B$ were connected by a diathermal wall and
the pair allowed to come to equilibrium, what would $U_A$ and $U_B$ be?
Call
$$ X = \frac{U_A}{U_A + U_B}$$
we know $U_A + U_B = 80$, therefore
$$ U_A = 80X,\hspace{20pt} U_B = 80(1 - X) $$
Then setting $R, v_0, \theta = 1 $ and plugging in $V_A$, $V_B$, $N_A$ and $N_B$.
$S = S_A + S_B = \left(3 \times 9 \times 10^{-6} \times 80X \right)^{1/3} + \left(2 \times 4 \times 10^{-6} \times 80(1-X)\right)^{1/3} = 0.086(1-X)^{1/3} + 0.129X^{1/3}$
Entropy is maximized when $X = 0.65$, which is where we would expect the system to go at equilibrium once the internal wall is made diathermal.
```python
import matplotlib.pyplot as plt
import numpy as np
X = np.linspace(0,1,100)
S = []
for x in X:
s = 0.086 * (1 - x)**(1./3) + 0.129 * (x**(1./3))
S.append(s)
plt.plot(X, S,'-')
plt.xlabel('X')
plt.ylabel('S')
plt.show()
```
From this graph, S is maximized when $X = 0.65$.
Therefore, $U_A = 80X = 52\ J$ and $U_B = 28\ J$.
An alternative non-graphical method is to solve for the value of $X$ where
$$ \frac{\partial S}{\partial X} = 0 $$
```python
from sympy import *
X = Symbol('X', real = True)
S = 0.086 * (1 - X)**(1./3) + 0.129 * (X**(1./3))
Sprime = S.diff(X) # differentiate S in terms of X
sol = solve(Sprime, X) # solve Sprime = 0 with respect to X
print('X =', sol[0])
print('UA =', 80 * sol[0])
print('UB =', 80 * (1 - sol[0]))
```
X = 0.647529554910575
UA = 51.8023643928460
UB = 28.1976356071540
## Problem 4: Exactly right
The Helmholtz energy $A$ is a thermodynamic state function. Show that
$ \left (\frac{\partial A}{\partial V}\right )_T = - P $ and $ \left(\frac{\partial A}{\partial T}\right )_V = - S\ $
implies $ \left (\frac{\partial S}{\partial V}\right )_T = \left
(\frac{\partial P}{\partial T}\right )_V $
$$ dA = \left (\frac{\partial A}{\partial V}\right)_{T} dV + \left (\frac{\partial A}{\partial T}\right)_{V} dT $$
$$ \left. \frac {\partial}{\partial T} \left(\frac {\partial A}{\partial V} \right)_T \right|_V = \left. \frac {\partial}{\partial V} \left(\frac {\partial A}{\partial T} \right)_V \right|_T $$
$$ \therefore \ \left. \frac{\partial (-P)}{\partial T} \right |_{V} = \left. \frac{\partial (-S)}{\partial V} \right |_{T} $$
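As an illustration (using a hypothetical Helmholtz energy with all constants set to 1, not part of the original problem), sympy confirms the relation:
```python
import sympy as sp

T, V = sp.symbols('T V', positive=True)
A = -T * sp.log(V) - sp.Rational(3, 2) * T * sp.log(T)   # hypothetical A(T, V)

P = -sp.diff(A, V)   # P = -(dA/dV)_T
S = -sp.diff(A, T)   # S = -(dA/dT)_V

# Maxwell relation: (dS/dV)_T - (dP/dT)_V should vanish
print(sp.simplify(sp.diff(S, V) - sp.diff(P, T)))   # 0
```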
## Problem 5: A difference of degree
Determine whether the following five expressions are homogeneous and, if so, what their degree of homogeneity is:
$ (1)\ u=x^2y + xy^2 +3xyz $
$ (2)\ u=\sqrt{x+y} $
$ (3)\ u=\frac{x^3+x^2y+y^3}{x^2+xy+y^2} $
$ (4)\ u=e^{-y/x} $
$ (5)\ u=\frac{x^2+3xy+2y^3}{y^2} $
$(1)\ u(\lambda x,\lambda y, \lambda z) = \lambda^{3} \left(x^2y + xy^2 +3xyz \right) = \lambda^{3} u(x,y,z) $
$ \therefore$ $u$ is homogeneous and the degree of homogeneity is 3.
$(2)\ u(\lambda x,\lambda y, \lambda z) = \lambda^{1/2} \sqrt{x + y} = \lambda^{1/2} u(x,y,z) $
$ \therefore$ $u$ is homogeneous and the degree of homogeneity is 1/2.
$(3)\ u(\lambda x,\lambda y, \lambda z) = \lambda \frac{x^3 + x^2 y + y^3}{x^2 + xy + y^2} = \lambda u(x,y,z) $
$ \therefore$ $u$ is homogeneous and the degree of homogeneity is 1.
$(4)\ u(\lambda x,\lambda y, \lambda z) = e^{-y/x} = u(x,y,z) $
$ \therefore$ $u$ is homogeneous and the degree of homogeneity is 0.
$(5)\ u(\lambda x,\lambda y, \lambda z) = \frac{x^2+3xy+2\lambda y^3}{y^2} $
$ \therefore$ $u$ is not homogeneous.
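As an optional numerical spot-check (not part of the original solution), the degree $k$ in $u(\lambda x,\lambda y) = \lambda^{k} u(x,y)$ can be estimated directly; three of the two-variable cases are shown:
```python
import numpy as np

def degree(u, x=1.3, y=0.7, lam=2.0):
    # k such that u(lam*x, lam*y) = lam**k * u(x, y)
    return np.log(u(lam * x, lam * y) / u(x, y)) / np.log(lam)

u2 = lambda x, y: np.sqrt(x + y)                                     # expect 1/2
u3 = lambda x, y: (x**3 + x**2 * y + y**3) / (x**2 + x * y + y**2)   # expect 1
u4 = lambda x, y: np.exp(-y / x)                                     # expect 0
print(degree(u2), degree(u3), degree(u4))
```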
| 09ba226fb7f5ab67369df9ef6d53345330cb5407 | 206,783 | ipynb | Jupyter Notebook | 02_thermo/.ipynb_checkpoints/HW1-Fa17-soln-checkpoint.ipynb | DPotoyan/Statmech4ChemBio | bc38f04545e1f64848d09c390caad7b54ba3adfd | [
"MIT"
]
| 3 | 2021-04-11T18:03:17.000Z | 2022-03-22T21:32:03.000Z | 02_thermo/.ipynb_checkpoints/HW1-Fa17-soln-checkpoint.ipynb | DPotoyan/Statmech4ChemBio | bc38f04545e1f64848d09c390caad7b54ba3adfd | [
"MIT"
]
| null | null | null | 02_thermo/.ipynb_checkpoints/HW1-Fa17-soln-checkpoint.ipynb | DPotoyan/Statmech4ChemBio | bc38f04545e1f64848d09c390caad7b54ba3adfd | [
"MIT"
]
| 1 | 2022-01-28T18:18:49.000Z | 2022-01-28T18:18:49.000Z | 226.735746 | 20,766 | 0.891156 | true | 6,415 | Qwen/Qwen-72B | 1. YES
2. YES | 0.795658 | 0.851953 | 0.677863 | __label__eng_Latn | 0.760854 | 0.413234 |
```python
# Importing standard Qiskit libraries
from qiskit import QuantumCircuit, execute, Aer, IBMQ, QuantumRegister
from qiskit.compiler import transpile, assemble
from qiskit.tools.jupyter import *
from qiskit.visualization import *
from ibm_quantum_widgets import *
import numpy as np
import qiskit as qk
import matplotlib.pyplot as plt
from fractions import Fraction
from qiskit.providers.ibmq import least_busy
from qiskit.tools.monitor import job_monitor
# Loading your IBM Q account(s)
provider = IBMQ.load_account()
```
```python
simulator = qk.BasicAer.get_backend('qasm_simulator')
real = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits > 4,
operational=True, simulator=False))
print(real)
```
ibmq_belem
### The Bernstein-Vazirani Problem
We are given a black-box function that assigns either 0 or 1 to a bit string $\underline{x} = (x_1, x_2, ..., x_n)$:<br><br>\begin{equation}
f(x_1, x_2, ..., x_n) = 0 \text{ or } 1.
\end{equation}
(For every $i$, $x_i$ is either 0 or 1.) We know that for every bit string $\underline{x}$ the function returns the remainder modulo 2 of its bitwise product with a bit string $\underline{s}$. By bitwise product we mean the following operation: $\underline{s}\cdot\underline{x} = x_1\cdot s_1 +x_2\cdot s_2 +...+x_n\cdot s_n$. So the function can be written in the form $f(\underline{x}) = \underline{s}\cdot\underline{x} \text{ mod } 2$. The task is to figure out what the bit string $\underline{s}$ is.<br>
**Exercise 1**
Think through how you would solve the Bernstein-Vazirani problem with a classical computer. How many times would the operation implementing the function $f$ have to be performed?
```python
n = 4
s = np.random.randint(0, 2, n)
def f(x, s): # x and s are numpy arrays of the same length
if len(x) != n:
raise ValueError("x and s have to be of the same length")
return np.dot(x, s)%2
```
**Exercise 2**
Determine the randomly generated bit string $s$ without printing out its value. Use the function f(x, s), which returns the value $\underline{s}\cdot\underline{x} \text{ mod } 2$.
```python
# solution goes here
guess = np.zeros(n, dtype=int)
for i in range(n):
x = np.zeros(n)
x[i] = 1
guess[i] = f(x, s)
print(guess, s)
```
[0 0 1 1] [0 0 1 1]
**Claim**: After running the quantum circuit shown in the figure below, we measure exactly the bit string $s$. (The last qubit does not need to be measured; we are not interested in its value.) So it is enough to perform the operation implementing the function $f$ only once.
The $|-\rangle$ state can be prepared by acting on the $|{1}\rangle$ state with an $H$ gate.
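A brief sketch of why the claim holds (the standard Bernstein-Vazirani argument): with the last qubit in the $|-\rangle$ state the black box acts as a phase oracle, and the final layer of $H$ gates maps the phase pattern back to the computational-basis state $|\underline{s}\rangle$:
$$
H^{\otimes n}|0\rangle^{\otimes n} = \frac{1}{\sqrt{2^n}}\sum_{\underline{x}} |\underline{x}\rangle,
\qquad
|\underline{x}\rangle|-\rangle \mapsto (-1)^{\underline{s}\cdot\underline{x}}\,|\underline{x}\rangle|-\rangle,
\qquad
H^{\otimes n}\left(\frac{1}{\sqrt{2^n}}\sum_{\underline{x}} (-1)^{\underline{s}\cdot\underline{x}} |\underline{x}\rangle\right) = |\underline{s}\rangle .
$$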
```python
def black_box(s): # s is a bit string
n = len(s)
qc = QuantumCircuit(n+1)
for i in range(len(s)):
if s[n-i-1] == 1:
qc.cx(i, n)
qc.name = "f"
return qc
```
```python
"""Writes the binary representation of the integer i into n qubits."""
def encode(i, n):
if 2**n <= i:
raise ValueError("'i' is too big to be stored on n qubits")
bits = np.array(list(format(i, "b")), dtype=int)
while len(bits) < n:
bits = np.insert(bits, 0, 0)
qc = QuantumCircuit(n)
for j in range(len(bits)):
if bits[j] == 1:
qc.x(n-j-1)
qc.name = "%i" %i
return qc
```
**Exercise 3 (optional)**
Check that the black_box(s) quantum gate works the way we expect it to for the bit string $\underline{s}=(1, 0, 1, 1)$:
- Create a quantum circuit with 5 qubits.
- Write a bit string $\underline{x}$ into the first 4 qubits. You can use the encode($i$, $n$) function for this, which writes the binary representation of the integer $i$ into $n$ qubits, but the task is easy to solve without the function as well.
- Apply the black_box(s) gate to the 5 qubits, then measure the 5th qubit.
- If the black_box(s) quantum gate works correctly for the input bit strings $\underline{x}_0 = (0, 0, 0, 1)$, $\underline{x}_1 = (0, 0, 1, 0)$, $\underline{x}_2 = (0, 1, 0, 0)$, $\underline{x}_3 = (1, 0, 0, 0)$, then it works correctly for every input bit string.
```python
s = np.array([1, 0, 1, 1])
# solution goes here
for i in range(4):
x = np.zeros(4)
x[i] = 1
qc = QuantumCircuit(5, 1)
qc.append(encode(2**i, 4), range(4))
qc.append(black_box(s), range(5))
qc.measure(4, 0)
counts = execute(qc, simulator, shots=1).result().get_counts()
print(counts)
```
{'1': 1}
{'1': 1}
{'0': 1}
{'1': 1}
**Exercise 4**
Assemble the circuit shown in the figure above. In place of the gate labeled $f$, put the black_box($\underline{s}$) gate. Let $\underline{s} = (1, 0, 1, 1)$.
Check that when the quantum circuit is measured we really get back the bit string $s$. (The circuit can also be run for other values of $\underline{s}$.) Try the quantum circuit both on a simulator and on a real quantum computer.
```python
# solution goes here
qc = QuantumCircuit(5, 4)
qc.x(4)
qc.h(list(range(5)))
qc.append(black_box(s), range(5))
qc.h(list(range(4)))
qc.measure(range(4), range(4))
job = execute(qc, real, shots=100)
job_monitor(job)
counts = job.result().get_counts()
print(counts)
qc.draw()
```
Job Status: job is being validated
```python
plt.bar(counts.keys(), counts.values())
```
```python
```
| d2276f4b11f01a7727c780e1eaab37d7690bc851 | 46,901 | ipynb | Jupyter Notebook | szakkor_files/Berstein-Vazirani-mo.ipynb | thundergoth/KvantumSzakkor_2022 | afc966e11f484c90ae9804d478d1c0d1d8f3f8fd | [
"Apache-2.0",
"CC-BY-4.0"
]
| 2 | 2022-03-30T04:56:20.000Z | 2022-03-30T04:56:34.000Z | szakkor_files/Berstein-Vazirani-mo.ipynb | thundergoth/KvantumSzakkor_2022 | afc966e11f484c90ae9804d478d1c0d1d8f3f8fd | [
"Apache-2.0",
"CC-BY-4.0"
]
| null | null | null | szakkor_files/Berstein-Vazirani-mo.ipynb | thundergoth/KvantumSzakkor_2022 | afc966e11f484c90ae9804d478d1c0d1d8f3f8fd | [
"Apache-2.0",
"CC-BY-4.0"
]
| null | null | null | 159.527211 | 38,200 | 0.883947 | true | 1,994 | Qwen/Qwen-72B | 1. YES
2. YES | 0.766294 | 0.749087 | 0.574021 | __label__hun_Latn | 0.997846 | 0.171972 |
# Optimization in Python
You might have noticed that we didn't do anything related to sparsity with scikit-learn models. A lot of the work we covered in the machine learning class is very recent research, and as such is typically not implemented by the popular libraries.
If we want to do things like sparse regression, we're going to have to roll up our sleeves and do it ourselves. For that, we need to be able to solve optimization problems. In Julia, we did this with JuMP. In Python, we'll use a similar library called *pyomo*.
# Installing pyomo
You can run the following command to install pyomo if you haven't already.
```python
!pip install pyomo --user
```
# Intro to pyomo
Let's see how we translate a simple, 2 variable LP to pyomo code.
$$
\begin{align*}
\max_{x,y} \quad& x + 2y \\
\text{s.t.}\quad& x + y \leq 1 \\
& x, y \geq 0.
\end{align*}
$$
First thing is to import the pyomo functions:
```python
from pyomo.environ import *
from pyomo.opt import SolverFactory
```
Next, we construct a model object. This is a container for everything in our optimization problem: variables, constraints, solver options, etc.
```python
m = ConcreteModel()
```
Next, we define the two decision variables in our optimization problem. We use the ``Var`` function to create the variables. The `within` keyword is used to specify the bounds on the variables, or equivalently the `bounds` keyword. The variables are added to the model object with names `x` and `y`.
```python
m.x = Var(within=NonNegativeReals)
m.y = Var(bounds=(0, float('inf')))
```
We now add the single constraint of our problem using the ``Constraint`` function. We write it algebraically, and save the result to the model.
```python
m.con = Constraint(expr=m.x + m.y <= 1)
```
We specify the objective function with the `Objective` function:
```python
m.obj = Objective(sense=maximize, expr=m.x + 2 * m.y)
```
We solve the optimization problem by first specifying a solver using `SolverFactory` and then using this solver to solve the model:
```python
solver = SolverFactory('gurobi')
solver.solve(m)
```
We can now inspect the solution values and optimal cost.
```python
m.obj()
```
```python
m.x.value
```
```python
m.y.value
```
Let's put it all together to compare with Julia/JuMP
```python
# Create model
m = ConcreteModel()
# Add variables
m.x = Var(within=NonNegativeReals)
m.y = Var(bounds=(0, float('inf')))
# Add constraint
m.con = Constraint(expr=m.x + m.y <= 1)
# Add objective
m.obj = Objective(sense=maximize, expr=m.x + 2 * m.y)
# Solve model
solver = SolverFactory('gurobi')
solver.solve(m)
# Inspect solution
print(m.obj())
print(m.x.value)
print(m.y.value)
```
```julia
# Create model
m = Model(solver=GurobiSolver())
# Add variables
@variable(m, x >= 0)
@variable(m, y >= 0)
# Add constraint
@constraint(m, x + y <= 1)
# Add objective
@objective(m, Max, x + 2y)
# Solve model
solve(m)
# Inspect solution
@show getobjectivevalue(m)
@show getvalue(x)
@show getvalue(y)
```
### Exercise
Code and solve the following optimization problem:
$$
\begin{align*}
\min_{x,y} \quad& 3x - y \\
\text{s.t.}\quad& x + 2y \geq 1 \\
& x \geq 0 \\
& 0 \leq y \leq 1.
\end{align*}
$$
```python
# Create the model
m = ConcreteModel()
# Add the variables
m.x = Var(within=NonNegativeReals)
m.y = Var(bounds=(0, 1))
# Add the constraint
m.con = Constraint(expr=m.x + 2 * m.y >= 1)
# Add the objective
m.obj = Objective(sense=minimize, expr=3 * m.x - m.y)
solver = SolverFactory('gurobi')
solver.solve(m)
print(m.x.value, m.y.value)
```
```python
for v in m.component_data_objects(Var, active=True):
print(v, value(v)) # doctest: +SKIP
```
```python
m.pprint()
```
# Index sets
Let's now move to a more complicated problem. We'll look at a transportation problem:
$$
\begin{align}
\min & \sum\limits_{i = 1}^{m} \sum\limits_{j = 1}^{n} c_{ij} x_{ij}\\
& \sum\limits_{j = 1}^{n} x_{ij} \leq b_i && i = 1, \ldots, m\\
& \sum\limits_{i = 1}^{m} x_{ij} = d_j && j = 1, \ldots, n\\
& x_{ij} \ge 0 && i = 1, \ldots, m, j = 1, \ldots, n
\end{align}
$$
And with some data:
```python
import numpy as np
m = 2 # Number of supply nodes
n = 5 # Number of demand nodes
# Supplies
b = np.array([1000, 4000])
# Demands
d = np.array([500, 900, 1800, 200, 700])
# Costs
c = np.array([[2, 4, 5, 2, 1],
[3, 1, 3, 2, 3]])
```
Now we can formulate the problem with pyomo
```python
model = ConcreteModel()
```
First step is adding variables. We can add variables with indices by passing the relevant index sets to the `Var` constructor. In this case, we need a $m$-by$n$ matrix of variables:
```python
model.x = Var(range(m), range(n), within=NonNegativeReals)
```
Now to add the constraints. We have to add one supply constraint for each factory, so we might try something like:
```python
for i in range(m):
model.supply = Constraint(expr=sum(model.x[i, j] for j in range(n)) <= b[i])
```
Can you see the problem? We are overwriting `model.supply` in each iteration of the loop, and so only the last constraint is applied.
Luckily, pyomo has a (not-so-easy) way to add multiple constraints at a time. We first define a *rule* that takes in the model and any required indices, and then returns the expression for the constraint:
```python
def supply_rule(model, i):
return sum(model.x[i, j] for j in range(n)) <= b[i]
```
We then add the constraint by referencing this rule along with the index set we want the constraint to be defined over:
```python
model.supply2 = Constraint(range(m), rule=supply_rule)
```
We then apply the same approach for the demand constraints
```python
def demand_rule(model, j):
return sum(model.x[i, j] for i in range(m)) == d[j]
model.demand = Constraint(range(n), rule=demand_rule)
```
Finally, we add the objective:
```python
model.obj = Objective(sense=minimize,
expr=sum(c[i, j] * model.x[i, j]
for i in range(m) for j in range(n)))
```
Now we can solve the problem
```python
solver = SolverFactory('gurobi')
solver.solve(model)
```
It solved, so we can extract the results
```python
flows = np.array([[model.x[i, j].value for j in range(n)] for i in range(m)])
flows
```
We can also check the objective value for the cost of this flow
```python
model.obj()
```
For simplicity, here is the entire formulation and solving code together:
```python
model = ConcreteModel()
# Variables
model.x = Var(range(m), range(n), within=NonNegativeReals)
# Supply constraint
def supply_rule(model, i):
return sum(model.x[i, j] for j in range(n)) <= b[i]
model.supply2 = Constraint(range(m), rule=supply_rule)
# Demand constraint
def demand_rule(model, j):
return sum(model.x[i, j] for i in range(m)) == d[j]
model.demand = Constraint(range(n), rule=demand_rule)
# Objective
model.obj = Objective(sense=minimize,
expr=sum(c[i, j] * model.x[i, j]
for i in range(m) for j in range(n)))
# Solve
solver = SolverFactory('gurobi')
solver.solve(model)
# Get results
flows = np.array([[model.x[i, j].value for j in range(n)] for i in range(m)])
model.obj()
```
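As a quick sanity check (not in the original notebook), we can confirm that the optimal flows respect the supplies and meet the demands exactly:
```python
# Row sums: amount shipped from each supply node; column sums: amount delivered to each demand node
print(flows.sum(axis=1), b)   # each row sum should be <= the corresponding supply
print(flows.sum(axis=0), d)   # each column sum should equal the corresponding demand
print(np.all(flows.sum(axis=1) <= b + 1e-6), np.allclose(flows.sum(axis=0), d))
```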
# Machine Learning
Now let's put our pyomo knowledge to use and implement some of the same methods we saw in the machine learning class
First, specify your solver executable location:
```python
executable='C:/Users/omars/.julia/v0.6/Ipopt/deps/usr/bin/ipopt.exe'
```
To use the version left over from Julia
### On MacOS and Linux
`executable="~/.julia/v0.6/Homebrew/deps/usr/Cellar/ipopt/3.12.4_1/bin/ipopt"`
### On Windows
The path is probably under WinRPM:
`executable='%HOME%/.julia/v0.6/WinRPM/...'`
# Linear Regression
Let's just try a simple linear regression
```python
def linear_regression(X, y):
n, p = X.shape
# Create model
m = ConcreteModel()
# Add variables
m.beta = Var(range(p))
# Add constraints
# Add objective
m.obj = Objective(sense=minimize, expr=sum(
pow(y[i] - sum(X[i, j] * m.beta[j] for j in range(p)), 2)
for i in range(n)))
solver = SolverFactory('ipopt', executable=executable)
## tee=True enables solver output
# results = solver.solve(m, tee=True)
results = solver.solve(m, tee=False)
return [m.beta[j].value for j in range(p)]
```
Let's load up some data to test it out on:
```python
from sklearn.datasets import load_boston
data = load_boston()
X = data.data
y = data.target
```
Try our linear regression function:
```python
print(linear_regression(X, y))
```
We can compare with sklearn to make sure it's right:
```python
from sklearn.linear_model import LinearRegression
m = LinearRegression(fit_intercept=False)
m.fit(X, y)
m.coef_
```
Just for reference, let's look back at how we do the same thing in JuMP!
```julia
using JuMP, Gurobi
function linear_regression(X, y)
n, p = size(X)
m = Model(solver=GurobiSolver())
@variable(m, beta[1:p])
@objective(m, Min, sum((y[i] - sum(X[i, j] * beta[j] for j = 1:p)) ^ 2 for i = 1:n))
solve(m)
getvalue(beta)
end
```
or even
```julia
using JuMP, Gurobi
function linear_regression(X, y)
n, p = size(X)
m = Model(solver=GurobiSolver())
@variable(m, beta[1:p])
@objective(m, Min, sum((y - X * beta) .^ 2))
solve(m)
getvalue(beta)
end
```
Much simpler!
### Exercise
Modify the linear regression formulation to include an intercept term, and compare to scikit-learn's LinearRegression with `fit_intercept=True` to make sure it's the same
```python
def linear_regression_intercept(X, y):
n, p = X.shape
# Create model
m = ConcreteModel()
# Add variables
m.beta = Var(range(p))
m.b0 = Var()
# Add constraints
# Add objective
m.obj = Objective(sense=minimize, expr=sum(
pow(y[i] - sum(X[i, j] * m.beta[j] for j in range(p)) - m.b0, 2)
for i in range(n)))
solver = SolverFactory('ipopt', executable=executable)
## tee=True enables solver output
# results = solver.solve(m, tee=True)
results = solver.solve(m, tee=False)
    return [m.beta[j].value for j in range(p)], m.b0.value
linear_regression_intercept(X, y)
```
```python
m = LinearRegression(fit_intercept=True)
m.fit(X, y)
m.coef_
```
# Robust Regression
We saw in the class that both ridge and lasso regression were robust versions of linear regression. Both of these are provided by `sklearn`, but we need to know how to implement them if we want to extend regression ourselves
```python
def ridge_regression(X, y, rho):
n, p = X.shape
# Create model
m = ConcreteModel()
# Add variables
m.beta = Var(range(p))
# Add objective
m.obj = Objective(sense=minimize, expr=sum(
pow(y[i] - sum(X[i, j] * m.beta[j] for j in range(p)),2)
for i in range(n)) + rho * sum(pow(m.beta[j], 2) for j in range(p)))
solver = SolverFactory('ipopt', executable=executable)
## tee=True enables solver output
# results = solver.solve(m, tee=True)
results = solver.solve(m, tee=False)
return [m.beta[j].value for j in range(p)]
```
```python
ridge_regression(X, y, 100000)
```
```python
def lasso(X, y, rho):
n, p = X.shape
# Create model
m = ConcreteModel()
# Add variables
m.beta = Var(range(p))
# Add objective
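    # NOTE: as written, this objective still applies the squared (ridge) penalty to beta;
    # the L1 (lasso) penalty version is implemented in the exercise solution below.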
m.obj = Objective(sense=minimize, expr=sum(
pow(y[i] - sum(X[i, j] * m.beta[j] for j in range(p)),2)
for i in range(n)) + rho * sum(pow(m.beta[j], 2) for j in range(p)))
solver = SolverFactory('ipopt', executable=executable)
## tee=True enables solver output
# results = solver.solve(m, tee=True)
results = solver.solve(m, tee=False)
return [m.beta[j].value for j in range(p)]
```
### Exercise
Implement Lasso regression
```python
def lasso_regression(X, y, rho):
n, p = X.shape
# Create model
m = ConcreteModel()
# Add variables
m.beta = Var(range(p))
m.absb = Var(range(p))
# Add constraints
def absbeta1(m, j):
return m.beta[j] <= m.absb[j]
m.absb1 = Constraint(range(p), rule=absbeta1)
def absbeta2(m, j):
return -m.beta[j] <= m.absb[j]
m.absb2 = Constraint(range(p), rule=absbeta2)
# Add objective
m.obj = Objective(sense=minimize, expr=sum(
pow(y[i] - sum(X[i, j] * m.beta[j] for j in range(p)), 2)
for i in range(n)) + rho * sum(m.absb[j] for j in range(p)))
solver = SolverFactory('ipopt', executable=executable)
## tee=True enables solver output
# results = solver.solve(m, tee=True)
results = solver.solve(m, tee=False)
return [m.beta[j].value for j in range(p)]
```
```python
lasso_regression(X, y, 1000)
```
# Sparse Regression
```python
def sparse_regression(X, y, k):
n, p = X.shape
M = 1000
# Create model
m = ConcreteModel()
# Add variables
m.beta = Var(range(p))
m.z = Var(range(p), within=Binary)
# Add constraints
def bigm1(m, j):
return m.beta[j] <= M * m.z[j]
m.bigm1 = Constraint(range(p), rule=bigm1)
def bigm2(m, j):
return m.beta[j] >= -M * m.z[j]
m.bigm2 = Constraint(range(p), rule=bigm2)
m.sparsity = Constraint(expr=sum(m.z[j] for j in range(p)) <= k)
# Add objective
m.obj = Objective(sense=minimize, expr=sum(
pow(y[i] - sum(X[i, j] * m.beta[j] for j in range(p)), 2)
for i in range(n)))
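    # NOTE: Ipopt is a continuous NLP solver and will not enforce the Binary
    # restriction on z; a mixed-integer solver (e.g. Gurobi) is needed for the
    # cardinality constraint to hold exactly.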
solver = SolverFactory('ipopt', executable=executable)
## tee=True enables solver output
# results = solver.solve(m, tee=True)
results = solver.solve(m, tee=False)
return [m.beta[j].value for j in range(p)]
```
```python
sparse_regression(X, y, 10)
```
```python
import numpy as np
l = np.array([1,2,3,4])
print(l**2)
print([np.sqrt(i) for i in l])
```
### Exercise
Try implementing the algorithmic framework for linear regression:
- sparsity constraints
- lasso regularization
- restrict highly correlated pairs of features
- nonlinear transformations (just $\sqrt(x)$ and $x^2$)
```python
import numpy as np
from sklearn.preprocessing import normalize
def all_regression(X_orig, y, k, rho):
n, p_orig = X_orig.shape
M = 10
X = np.concatenate(
[X_orig, np.sqrt(X_orig), np.square(X_orig)], axis=1
)
p = X.shape[1]
# Normalize data
X = normalize(X, axis=0)
y = (y - np.mean(y)) / np.linalg.norm(y)
# Create model
m = ConcreteModel()
# Add variables
m.beta = Var(range(p))
m.z = Var(range(p), within=Binary)
m.absb = Var(range(p))
# Sparsity constraints
def bigm1(m, j):
return m.beta[j] <= M * m.z[j]
m.bigm1 = Constraint(range(p), rule=bigm1)
def bigm2(m, j):
return m.beta[j] >= -M * m.z[j]
m.bigm2 = Constraint(range(p), rule=bigm2)
m.sparsity = Constraint(expr=sum(m.z[j] for j in range(p)) <= k)
# Lasso constraints
def absbeta1(m, j):
return m.beta[j] <= m.absb[j]
m.absb1 = Constraint(range(p), rule=absbeta1)
def absbeta2(m, j):
return -m.beta[j] <= m.absb[j]
m.absb2 = Constraint(range(p), rule=absbeta2)
# Correlation constraints
corX = np.corrcoef(np.transpose(X))
def cor_rule(m, i, j):
if i > j and abs(corX[i, j]) > 0.8:
return (sum(m.z[k] for k in range(i, p, p_orig)) +
sum(m.z[k] for k in range(j, p, p_orig)) <= 1)
else:
return Constraint.Skip
m.cor = Constraint(range(p_orig), range(p_orig), rule=cor_rule)
# Nonlinear constraints
def nl_rule(m, i):
return sum(m.z[k] for k in range(i, p, p_orig)) <= 1
m.nl = Constraint(range(p_orig), rule=nl_rule)
# Add objective
m.obj = Objective(sense=minimize, expr=sum(
pow(y[i] - sum(X[i, j] * m.beta[j] for j in range(p)), 2)
for i in range(n)) + rho * sum(m.absb[j] for j in range(p)))
solver = SolverFactory('ipopt', executable=executable)
## tee=True enables solver output
# results = solver.solve(m, tee=True)
results = solver.solve(m, tee=False)
return np.array([m.beta[j].value for j in range(p)]).reshape(-1, p_orig)
```
```python
all_regression(X, y, 6, 0)
```
# Logistic Regression
Like JuMP, we need to use a new solver for the nonlinear problem. We can use Ipopt as before, except we have to set it up manually. You'll need to download Ipopt and add it to the PATH.
On Mac, you can do this with Homebrew if you have it.
The other way is to download a copy of ipopt and specify the path to it exactly when creating the solver. For example, I have a copy of Ipopt left over from JuMP, which I can use by modifying the SolverFactory line as indicated below:
```python
def logistic_regression(X, y):
n, p = X.shape
# Convert y to (-1, +1)
assert np.min(y) == 0
assert np.max(y) == 1
Y = y * 2 - 1
assert np.min(Y) == -1
assert np.max(Y) == 1
# Create the model
m = ConcreteModel()
# Add variables
m.b = Var(range(p))
m.b0 = Var()
# Set nonlinear objective function
m.obj = Objective(sense=maximize, expr=-sum(
log(1 + exp(-Y[i] * (sum(X[i, j] * m.b[j] for j in range(p)) + m.b0)))
for i in range(n)))
# Solve the model and get the optimal solutions
solver = SolverFactory('ipopt', executable=executable)
solver.solve(m)
return [m.b[j].value for j in range(p)], m.b0.value
```
Load up some data
```python
from sklearn.datasets import load_breast_cancer
data = load_breast_cancer()
X = data.data
y = data.target
```
```python
logistic_regression(X, y)
```
### Exercise
Implement the regularized versions of logistic regression that scikit-learn provides:
```python
def logistic_regression_l1(X, y, C):
n, p = X.shape
# Convert y to (-1, +1)
assert np.min(y) == 0
assert np.max(y) == 1
Y = y * 2 - 1
assert np.min(Y) == -1
assert np.max(Y) == 1
# Create the model
m = ConcreteModel()
# Add variables
m.b = Var(range(p))
m.b0 = Var()
# Lasso constraints
m.absb = Var(range(p))
def absbeta1(m, j):
return m.b[j] <= m.absb[j]
m.absb1 = Constraint(range(p), rule=absbeta1)
def absbeta2(m, j):
return -m.b[j] <= m.absb[j]
m.absb2 = Constraint(range(p), rule=absbeta2)
# Set nonlinear objective function
m.obj = Objective(sense=minimize, expr=sum(m.absb[j] for j in range(p)) + C * sum(
log(1 + exp(-Y[i] * (sum(X[i, j] * m.b[j] for j in range(p)) + m.b0)))
for i in range(n)))
# Solve the model and get the optimal solutions
solver = SolverFactory('ipopt', executable=executable)
solver.solve(m)
return [m.b[j].value for j in range(p)], m.b0.value
```
```python
logistic_regression_l1(X, y, 100)
```
```python
def logistic_regression_l2(X, y, C):
n, p = X.shape
# Convert y to (-1, +1)
assert np.min(y) == 0
assert np.max(y) == 1
Y = y * 2 - 1
assert np.min(Y) == -1
assert np.max(Y) == 1
# Create the model
m = ConcreteModel()
# Add variables
m.b = Var(range(p))
m.b0 = Var()
# Set nonlinear objective function
m.obj = Objective(sense=minimize, expr=0.5 * sum(pow(m.b[j], 2) for j in range(p)) + C * sum(
log(1 + exp(-Y[i] * (sum(X[i, j] * m.b[j] for j in range(p)) + m.b0)))
for i in range(n)))
# Solve the model and get the optimal solutions
solver = SolverFactory('ipopt', executable=executable)
solver.solve(m)
return [m.b[j].value for j in range(p)], m.b0.value
```
```python
logistic_regression_l2(X, y, 1000)
```
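For comparison (not in the original notebook), scikit-learn's L2-penalized logistic regression minimizes the same objective, $\tfrac{1}{2}\|w\|^2 + C\sum_i \log\left(1+e^{-y_i(x_i^\top w + b)}\right)$, so for a matching $C$ the coefficients should be close:
```python
from sklearn.linear_model import LogisticRegression

skl = LogisticRegression(penalty='l2', C=1000, max_iter=10000)
skl.fit(X, y)
print(skl.coef_, skl.intercept_)
```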
```python
```
| 53c0c5aa6501dc8ad0b566476cc1816921d3a560 | 33,877 | ipynb | Jupyter Notebook | ML3 - Optimization Modeling (Complete).ipynb | oskali/mban_softwareTools | 60b73c798a1f8447de22c46070d023de41d33a30 | [
"MIT"
]
| 1 | 2021-03-06T21:16:13.000Z | 2021-03-06T21:16:13.000Z | ML3 - Optimization Modeling (Complete).ipynb | oskali/mban_softwareTools | 60b73c798a1f8447de22c46070d023de41d33a30 | [
"MIT"
]
| null | null | null | ML3 - Optimization Modeling (Complete).ipynb | oskali/mban_softwareTools | 60b73c798a1f8447de22c46070d023de41d33a30 | [
"MIT"
]
| 6 | 2019-12-03T22:35:28.000Z | 2021-03-04T00:28:02.000Z | 26.13966 | 305 | 0.507099 | true | 5,687 | Qwen/Qwen-72B | 1. YES
2. YES | 0.927363 | 0.931463 | 0.863804 | __label__eng_Latn | 0.853141 | 0.84524 |
# Implementation of Bayesian Neural Network Regression via Hamiltonian Monte Carlo and Black-Box Variational Inference
## Overview
This article explores regression with neural networks from a Bayesian perspective. Priors are placed on the network parameters $W$, and the posterior $\mathbb{P}(W|\text{Data})$ is inferred with Hamiltonian Monte Carlo (HMC), a sampling method, and Black-Box Variational Inference (BBVI), an optimization-based approximation. Implementation and analysis are performed on a toy dataset to keep things as simple as possible.
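Concretely, a typical setup for this kind of model (the exact prior and noise scales are modeling choices) is
$$
W \sim \mathcal{N}\big(0, \sigma_W^{2} I\big), \qquad
y_n \mid x_n, W \sim \mathcal{N}\big(f_W(x_n), \sigma_y^{2}\big), \qquad
\mathbb{P}(W \mid \text{Data}) \propto \mathbb{P}(W)\prod_{n} \mathbb{P}(y_n \mid x_n, W),
$$
where $f_W$ denotes the network output. HMC draws (asymptotically exact) samples from this unnormalized posterior, while BBVI fits a tractable approximating distribution $q_\lambda(W)$ by maximizing the evidence lower bound (ELBO).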
```python
# Import relevant libraries
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
from autograd import numpy as np
from autograd import scipy as sp
from autograd import grad
from autograd.misc.optimizers import adam, sgd
from utils import Feedforward
# Aesthetics and other settings
plt.style.use('seaborn-notebook')
sns.set_style('darkgrid')
warnings.simplefilter('ignore')
%matplotlib inline
```
```python
# Load the data
df = pd.read_csv('data/toy_data.csv')
X_train = df['x'].values.reshape(1,-1)
y_train = df['y'].values.reshape(1,-1)
# Inspect the data
display(df.head())
```
|   | x    | y         |
|---|------|-----------|
| 0 | -6.0 | -3.380284 |
| 1 | -5.6 | -2.892117 |
| 2 | -5.2 | -2.690059 |
| 3 | -4.8 | -2.040000 |
| 4 | -4.4 | -1.399942 |
We first instantiate a neural network with 1 hidden layer, 5 hidden nodes, and `relu` activation. We then fit the neural network to the data and visualize the fit:
```python
# Set random seed for reproducibility
random = np.random.RandomState(0)
# RELU activation
activation_fn_type = 'relu'
activation_fn = lambda x: np.maximum(np.zeros(x.shape), x)
# NN model design choices
width = 5
hidden_layers = 1
input_dim = 1
output_dim = 1
architecture = {'width': width,
'hidden_layers': hidden_layers,
'input_dim': input_dim,
'output_dim': output_dim,
'activation_fn_type': 'relu',
'activation_fn_params': 'rate=1',
'activation_fn': activation_fn}
# Design choices for stochastic gradient descent
params = {'step_size':1e-3,
'max_iteration':15000,
'random_restarts':1}
# Instantiate and fit NN
nn = Feedforward(architecture, random)
nn.fit(X_train, y_train, params)
```
Iteration 0 lower bound 1188.2522119587577; gradient mag: 3294.9472005145494
Iteration 100 lower bound 528.2005162282456; gradient mag: 2014.0223481861008
Iteration 200 lower bound 231.7192875708107; gradient mag: 1226.9831178661013
Iteration 300 lower bound 103.99541756891784; gradient mag: 716.7107003654073
Iteration 400 lower bound 55.24716113156888; gradient mag: 376.2334633599828
Iteration 500 lower bound 39.94989749994618; gradient mag: 177.96417730567353
Iteration 600 lower bound 35.60307206137053; gradient mag: 77.1935848441468
Iteration 700 lower bound 34.02585610754244; gradient mag: 38.67687513287843
Iteration 800 lower bound 32.961429361488406; gradient mag: 31.1294903391371
Iteration 900 lower bound 31.989641942996613; gradient mag: 24.752496721932584
Iteration 1000 lower bound 31.082564536611727; gradient mag: 24.616299563320236
Iteration 1100 lower bound 30.155032233758106; gradient mag: 24.248289796695605
Iteration 1200 lower bound 29.21001380959729; gradient mag: 23.699589428153832
Iteration 1300 lower bound 28.253006614697387; gradient mag: 23.095433479427737
Iteration 1400 lower bound 27.28927363692245; gradient mag: 22.475317737860422
Iteration 1500 lower bound 26.323662523356898; gradient mag: 21.85037167539629
Iteration 1600 lower bound 25.36059928299808; gradient mag: 21.2251268262558
Iteration 1700 lower bound 24.404090286894164; gradient mag: 20.602713937891444
Iteration 1800 lower bound 23.45772579649591; gradient mag: 19.985836957824546
Iteration 1900 lower bound 22.524686048678994; gradient mag: 19.37690284993496
Iteration 2000 lower bound 21.60775070123689; gradient mag: 18.778013849409064
Iteration 2100 lower bound 20.70931205242157; gradient mag: 18.190943350296987
Iteration 2200 lower bound 19.831392143669202; gradient mag: 17.617113686328434
Iteration 2300 lower bound 18.975663613203803; gradient mag: 17.057579413745877
Iteration 2400 lower bound 18.14347397435544; gradient mag: 16.513017921438415
Iteration 2500 lower bound 17.335872838126043; gradient mag: 15.983728786591485
Iteration 2600 lower bound 16.553641480811155; gradient mag: 15.46964287439822
Iteration 2700 lower bound 15.799798452926806; gradient mag: 14.634100216702958
Iteration 2800 lower bound 15.089224741859164; gradient mag: 13.79947750065518
Iteration 2900 lower bound 14.412878302812285; gradient mag: 13.252437242012407
Iteration 3000 lower bound 13.766721311122602; gradient mag: 12.795482048078652
Iteration 3100 lower bound 13.147000834095966; gradient mag: 12.399165758017642
Iteration 3200 lower bound 12.550715085203827; gradient mag: 12.04347723415768
Iteration 3300 lower bound 11.987988709794728; gradient mag: 11.671698244924366
Iteration 3400 lower bound 11.470331425797704; gradient mag: 11.552845486196434
Iteration 3500 lower bound 10.989896461078452; gradient mag: 11.570145170353353
Iteration 3600 lower bound 10.545865383076634; gradient mag: 9.157301353805481
Iteration 3700 lower bound 10.134390330913082; gradient mag: 8.739366437372636
Iteration 3800 lower bound 9.744699797627584; gradient mag: 8.484502053292161
Iteration 3900 lower bound 9.36943762284221; gradient mag: 8.279364759162567
Iteration 4000 lower bound 9.023880352484412; gradient mag: 12.814088131372873
Iteration 4100 lower bound 8.714826491071893; gradient mag: 8.636545905932076
Iteration 4200 lower bound 8.436787581496317; gradient mag: 9.251058145534484
Iteration 4300 lower bound 8.187373568381847; gradient mag: 7.44204269008863
Iteration 4400 lower bound 7.964500968027262; gradient mag: 6.053454321911013
Iteration 4500 lower bound 7.766266930187459; gradient mag: 5.05121153533447
Iteration 4600 lower bound 7.589279565032135; gradient mag: 4.597739322147365
Iteration 4700 lower bound 7.4224106345582115; gradient mag: 4.456507453357349
Iteration 4800 lower bound 7.258760999455526; gradient mag: 4.341240693065324
Iteration 4900 lower bound 7.106644631620016; gradient mag: 4.52127279264325
Iteration 5000 lower bound 6.974070400942564; gradient mag: 5.2015643395559845
Iteration 5100 lower bound 6.856972561289586; gradient mag: 6.119331677603739
Iteration 5200 lower bound 6.752919981452238; gradient mag: 7.1337121451354495
Iteration 5300 lower bound 6.66013857904494; gradient mag: 8.18215855264054
Iteration 5400 lower bound 6.575879541247726; gradient mag: 9.166712555050198
Iteration 5500 lower bound 6.498596062906515; gradient mag: 3.0663031457822774
Iteration 5600 lower bound 6.426834605421496; gradient mag: 2.433138113554713
Iteration 5700 lower bound 6.3574338849415994; gradient mag: 2.260365420122545
Iteration 5800 lower bound 6.286273937604812; gradient mag: 2.2395351518992523
Iteration 5900 lower bound 6.213043834805279; gradient mag: 2.225849214024534
Iteration 6000 lower bound 6.137748546533556; gradient mag: 2.2122700356057563
Iteration 6100 lower bound 6.060402578608478; gradient mag: 2.198351168395091
Iteration 6200 lower bound 5.9820274881058095; gradient mag: 2.1337008921111766
Iteration 6300 lower bound 5.905113988567324; gradient mag: 2.432126797813235
Iteration 6400 lower bound 5.828319129576143; gradient mag: 2.8272278412165197
Iteration 6500 lower bound 5.751830639546263; gradient mag: 3.396839218046544
Iteration 6600 lower bound 5.672819183849375; gradient mag: 3.511172106456357
Iteration 6700 lower bound 5.593319432198854; gradient mag: 3.9361854982209437
Iteration 6800 lower bound 5.511770386421801; gradient mag: 4.246984794757429
Iteration 6900 lower bound 5.428607645781734; gradient mag: 14.014120586219295
Iteration 7000 lower bound 5.344999645585856; gradient mag: 4.847156007867572
Iteration 7100 lower bound 5.25877317403805; gradient mag: 5.037990425758398
Iteration 7200 lower bound 5.170245548651126; gradient mag: 4.91389612736589
Iteration 7300 lower bound 5.081990798375701; gradient mag: 5.148618362507432
Iteration 7400 lower bound 4.992323968370188; gradient mag: 5.06261285166088
Iteration 7500 lower bound 4.902196879553126; gradient mag: 5.078233305615261
Iteration 7600 lower bound 4.811691076733208; gradient mag: 12.726071077503896
Iteration 7700 lower bound 4.722034252465465; gradient mag: 5.627923598935851
Iteration 7800 lower bound 4.630466155240868; gradient mag: 5.396531979077115
Iteration 7900 lower bound 4.539356504132502; gradient mag: 5.211667295312678
Iteration 8000 lower bound 4.450955469392411; gradient mag: 5.442647223211132
Iteration 8100 lower bound 4.360062097497529; gradient mag: 5.203891398270254
Iteration 8200 lower bound 4.271237398872259; gradient mag: 5.050050070322328
Iteration 8300 lower bound 4.184440979576096; gradient mag: 5.246792589389067
Iteration 8400 lower bound 4.098659993997213; gradient mag: 5.578840619974291
Iteration 8500 lower bound 4.012291494325346; gradient mag: 5.381437473766014
Iteration 8600 lower bound 3.9284516457036753; gradient mag: 5.346155288618992
Iteration 8700 lower bound 3.8439395365195828; gradient mag: 9.992393975865777
Iteration 8800 lower bound 3.7623541691665157; gradient mag: 9.797914768973802
Iteration 8900 lower bound 3.6809356262359367; gradient mag: 9.233951145662385
Iteration 9000 lower bound 3.6026984128920083; gradient mag: 9.590713589034012
Iteration 9100 lower bound 3.5227414053741506; gradient mag: 4.831535832412828
Iteration 9200 lower bound 3.4454461281570956; gradient mag: 4.5469785507301985
Iteration 9300 lower bound 3.3682333282315478; gradient mag: 8.077544206848351
Iteration 9400 lower bound 3.2931617717908908; gradient mag: 5.198762473040969
Iteration 9500 lower bound 3.215814184041517; gradient mag: 3.906186144306806
Iteration 9600 lower bound 3.1406962333053885; gradient mag: 3.899243090106902
Iteration 9700 lower bound 3.0658663969571465; gradient mag: 6.912089942111438
Iteration 9800 lower bound 2.9908245411967576; gradient mag: 3.9861290594186682
Iteration 9900 lower bound 2.9166903463356264; gradient mag: 6.789802894150422
Iteration 10000 lower bound 2.842561978388943; gradient mag: 3.2771432447272884
Iteration 10100 lower bound 2.768815835197236; gradient mag: 2.643640620631951
Iteration 10200 lower bound 2.6967620063896334; gradient mag: 1.755418238034846
Iteration 10300 lower bound 2.626726890192882; gradient mag: 1.9079914587406486
Iteration 10400 lower bound 2.5588304285321377; gradient mag: 1.0932264174027129
Iteration 10500 lower bound 2.4941680185437374; gradient mag: 0.7730587380367278
Iteration 10600 lower bound 2.43331649017813; gradient mag: 0.767066193237107
Iteration 10700 lower bound 2.3765553493366998; gradient mag: 0.7553579421582776
Iteration 10800 lower bound 2.3238459422563555; gradient mag: 0.7228024683428615
Iteration 10900 lower bound 2.2753308343301706; gradient mag: 0.6845211470545377
Iteration 11000 lower bound 2.231249650994413; gradient mag: 0.6413826587050766
Iteration 11100 lower bound 2.1918278044347406; gradient mag: 0.59432252611375
Iteration 11200 lower bound 2.1572058127250546; gradient mag: 0.5444130874954227
Iteration 11300 lower bound 2.1273989093957435; gradient mag: 0.49279021686788194
Iteration 11400 lower bound 2.102281353226884; gradient mag: 0.4406015166810438
Iteration 11500 lower bound 2.0815910694099835; gradient mag: 0.38895956298851503
Iteration 11600 lower bound 2.064950099998062; gradient mag: 0.33889610830740396
Iteration 11700 lower bound 2.05189577096822; gradient mag: 0.2913192491318837
Iteration 11800 lower bound 2.0419171399915643; gradient mag: 0.24697771788258466
Iteration 11900 lower bound 2.0344915040292215; gradient mag: 0.20643621913052787
Iteration 12000 lower bound 2.0291165873537538; gradient mag: 0.17006423386266364
Iteration 12100 lower bound 2.025335369827159; gradient mag: 0.13803877718614158
Iteration 12200 lower bound 2.022752096644942; gradient mag: 0.11035976785042882
Iteration 12300 lower bound 2.0210395323013937; gradient mag: 0.08687528646333166
Iteration 12400 lower bound 2.0199387199884957; gradient mag: 0.06731321537537878
Iteration 12500 lower bound 2.019253225668882; gradient mag: 0.051315577914202284
Iteration 12600 lower bound 2.018840059462057; gradient mag: 0.038472237005877125
Iteration 12700 lower bound 2.018599266629261; gradient mag: 0.02835131706863521
Iteration 12800 lower bound 2.0184637229425633; gradient mag: 0.02052459535499165
Iteration 12900 lower bound 2.018390120143663; gradient mag: 0.014586998208449047
Iteration 13000 lower bound 2.0183516178077516; gradient mag: 0.010170104881193975
Iteration 13100 lower bound 2.0183322452743533; gradient mag: 0.006950134375176586
Iteration 13200 lower bound 2.0183228858875393; gradient mag: 0.0046512512565712325
Iteration 13300 lower bound 2.0183185522950637; gradient mag: 0.003045195905328622
Iteration 13400 lower bound 2.0183166332413265; gradient mag: 0.001948265584582132
Iteration 13500 lower bound 2.0183158223137188; gradient mag: 0.0012165923762486204
Iteration 13600 lower bound 2.0183154961261223; gradient mag: 0.0007405247328450365
Iteration 13700 lower bound 2.018315371562024; gradient mag: 0.00043875394237372057
Iteration 13800 lower bound 2.0183153265300593; gradient mag: 0.00025265818491997387
Iteration 13900 lower bound 2.0183153111653755; gradient mag: 0.00014117995463813228
Iteration 14000 lower bound 2.0183153062338812; gradient mag: 7.641641289882984e-05
Iteration 14100 lower bound 2.0183153047501134; gradient mag: 3.9991531278196136e-05
Iteration 14200 lower bound 2.0183153043331865; gradient mag: 2.019550450242747e-05
Iteration 14300 lower bound 2.018315304224203; gradient mag: 9.82027514829592e-06
Iteration 14400 lower bound 2.0183153041978206; gradient mag: 4.587610018884338e-06
Iteration 14500 lower bound 2.0183153041919346; gradient mag: 2.053921124382295e-06
Iteration 14600 lower bound 2.018315304190734; gradient mag: 8.78978965159312e-07
Iteration 14700 lower bound 2.0183153041905055; gradient mag: 3.584887470453335e-07
Iteration 14800 lower bound 2.0183232872033057; gradient mag: 0.38994676261739214
Iteration 14900 lower bound 2.018315304287419; gradient mag: 0.0013325344134402757
```python
# Create test data
X_test = np.linspace(-8, 8, 100).reshape(1,-1)
# Predict on X_test
y_pred_test = nn.forward(nn.weights, X_test)
# Visualize the learned function
fig, ax = plt.subplots(figsize=(10,6))
ax.scatter(X_train.flatten(), y_train.flatten(), color='green', label='Training Data')
ax.plot(X_test.flatten(), y_pred_test.flatten(), color='blue', label='Learned Function')
ax.legend();
```
## Performing Inference via Hamiltonian Monte Carlo
We implement the following Bayesian model for the data:
$$
\begin{align}
\mathbf{W} &\sim \mathcal{N}(0, 5^2 \mathbf{I}_{D\times D})\\
\mu^{(n)} &= g_{\mathbf{W}}(\mathbf{X}^{(n)})\\
Y^{(n)} &\sim \mathcal{N}(\mu^{(n)}, 0.5^2)\\
\end{align}
$$
where $g_{\mathbf{W}}$ is a neural network with parameters $\mathbf{W}$, represented as a vector in $\mathbb{R}^{D}$, with $D$ the total number of parameters (including biases).
We first sample from the model posterior via HMC before visualizing the posterior predictive. Lastly, we evaluate the fit of the model to the data as well as the nature of the posterior predictive.
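Concretely, the HMC sampler below only needs the unnormalized log posterior, which follows directly from the model above (up to an additive constant that does not depend on $\mathbf{W}$):
$$
\log \mathbb{P}(\mathbf{W} \mid \text{Data}) = \underbrace{-\frac{1}{2 \cdot 5^2} \mathbf{W}^\top \mathbf{W}}_{\text{log prior}} \; \underbrace{- \frac{1}{2 \cdot 0.5^2} \sum_{n=1}^{N} \left(Y^{(n)} - g_{\mathbf{W}}(\mathbf{X}^{(n)})\right)^2}_{\text{log likelihood}} + \text{const.}
$$
This is exactly the quantity computed by `log_joint_density` in the next cell, and its negative serves as the potential energy in the HMC sampler.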
```python
# Model parameters
sigma_w = 25 * np.eye(nn.D)
sigma_y = 0.5
# Log prior distribution
def log_prior(W, Σ):
d = W.shape[0]
det_Σ = np.linalg.det(Σ)
inv_Σ = np.linalg.inv(Σ)
const = -0.5 * (d * np.log(2 * np.pi) + np.log(det_Σ)) # constant term
quad = -0.5 * np.diag(np.dot(np.dot(W, inv_Σ), W.T)) # quadratic term
return const + quad
# Log likelihood
def log_likelihood(W, σ):
n = len(y_train.reshape(-1,1))
const = n * (-np.log(σ) - 0.5 * np.log(2 * np.pi)) # constant term
quad = -0.5 * σ**(-2) * np.sum((y_train.reshape((1,1,n)) - nn.forward(W, X_train))**2, axis=2).flatten() # quadratic term
return const + quad
# Log joint density - to be used with our HMC sampler
log_joint_density = lambda w: log_likelihood(w, sigma_y) + log_prior(w, sigma_w)
```
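As a quick sanity check (a minimal sketch, assuming the `nn`, `X_train`, and `y_train` objects from the earlier cells are still in scope), the log joint density can be evaluated at the fitted weights; it should return a single finite value:
```python
# Evaluate the unnormalized log posterior at the current (MLE) weights.
# nn.weights has shape (1, nn.D), matching the batched convention used above,
# so the result is a length-1 array.
val = log_joint_density(nn.weights)
print(val)
assert np.all(np.isfinite(val)), "log joint density should be finite at the fitted weights"
```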
```python
# HMC sampler settings
M = np.eye(nn.D) # mass
step_size = 1e-3
leapfrog_steps = 50
total_samples = 10000
burn_in = 0.1
thinning_factor = 2
position_init = nn.weights
# Potential energy function
potential_energy = lambda w: -log_joint_density(w)
# Gradient of potential energy function
grad_potential_energy = grad(potential_energy)
# Kinetic energy function
kinetic_energy = lambda p: 0.5 * np.log(np.linalg.det(M)) + 0.5 * p.shape[0] * np.log(2 * np.pi)\
+ 0.5 * np.dot(np.dot(p, np.linalg.inv(M)), p.T)
# Gradient of kinetic energy function
grad_kinetic_energy = lambda p: p
# Total energy function
total_energy = lambda p, q: potential_energy(q) + kinetic_energy(p)
# Simulate movement via leap frog
def leap_frog(position_init, momentum_init, ε, n_steps, potential_energy, kinetic_energy):
position, momentum = position_init, momentum_init
    # Leap frog steps (note: as written, this performs n_steps - 1 full leapfrog updates)
    for _ in range(n_steps-1):
momentum = momentum - ε/2 * grad_potential_energy(position) # Half-step update of momentum
position = position + ε * grad_kinetic_energy(momentum) # Full-step update of position
momentum = momentum - ε/2 * grad_potential_energy(position) # Half-step update of momentum
assert not np.any(np.isnan(position))
assert not np.any(np.isnan(momentum))
# Reverse momentum
momentum = - momentum
return position, momentum
# HMC implementation
def hmc(position_current, ε, n_steps, potential_energy, kinetic_energy):
# Sample momentum
    momentum_current = np.random.multivariate_normal(np.zeros(nn.D), M).reshape((1, nn.D))  # match the (1, D) weight layout
# Simulate hamiltonian dynamics using leap frog
position_proposal, momentum_proposal = leap_frog(position_current.copy(), momentum_current.copy(), ε, n_steps,
potential_energy, kinetic_energy)
# Compute total energy in current position and proposal position
current_total_energy = total_energy(position_current, momentum_current)
proposal_total_energy = total_energy(position_proposal, momentum_proposal)
# Compute acceptance probability
accept_prob = np.min((1, np.exp(current_total_energy-proposal_total_energy)))
# Accept proposal using acceptance probability
if np.random.rand() < accept_prob:
position_current = np.copy(position_proposal)
momentum_current = np.copy(momentum_proposal)
else:
pass
return position_current, momentum_current
# Sample from posterior
np.random.seed(207)
samples = [position_init]
for _ in range(total_samples):
sample = hmc(samples[-1], step_size, leapfrog_steps, potential_energy, kinetic_energy)[0]
samples.append(sample)
samples = np.array(samples)
samples = samples[int(burn_in*total_samples):]
samples = samples[::thinning_factor]
```
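Before moving on, it can be useful to eyeball how well the chain mixed. The snippet below is a minimal sketch (assuming the `samples` array produced by the cell above is in scope): it estimates a rough acceptance-rate proxy from how often consecutive kept samples differ, and plots the trace of a single weight.
```python
import numpy as np
import matplotlib.pyplot as plt

chain = samples.reshape(samples.shape[0], -1)  # (n_kept_samples, D)

# Fraction of consecutive kept samples that differ: a rough proxy for the
# acceptance rate (an upper bound when thinning is applied).
moved = np.any(np.diff(chain, axis=0) != 0, axis=1)
print(f"Approximate acceptance rate: {moved.mean():.2f}")

# Trace plot of the first weight to check mixing
fig, ax = plt.subplots(figsize=(10, 3))
ax.plot(chain[:, 0], lw=0.8)
ax.set_xlabel('Kept sample index')
ax.set_ylabel('First weight')
ax.set_title('HMC trace of a single parameter');
```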
We visualize the posterior predictive by drawing 100 of our posterior samples of $\mathbf{W}$ and, for each, plotting the network's predictions plus additive noise $\epsilon \sim \mathcal{N}(0, 0.5^2)$ at 100 equally spaced inputs in the interval $[-8, 8]$.
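Formally, this is a Monte Carlo approximation of the posterior predictive: for a test input $\mathbf{x}^*$ and posterior draws $\mathbf{W}^{(1)}, \dots, \mathbf{W}^{(S)}$,
$$
\mathbb{P}(y^* \mid \mathbf{x}^*, \text{Data}) \approx \frac{1}{S} \sum_{s=1}^{S} \mathcal{N}\left(y^*;\, g_{\mathbf{W}^{(s)}}(\mathbf{x}^*),\, 0.5^2\right),
$$
which is why each red curve below is a forward pass through one sampled $\mathbf{W}^{(s)}$ plus observation noise.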
```python
y_pred_test = nn.forward(nn.weights, X_test.reshape((1, -1)))
# Visualize posterior predictive
fig, ax = plt.subplots(figsize = (10, 6))
fig.suptitle('Posterior Predictive', fontsize=22)
ax.scatter(X_train.flatten(), y_train.flatten(), color='green', label='Training Data')
ax.plot(X_test.flatten(), y_pred_test.flatten(), color='blue', label='Fitted NN Function')
inds = np.random.choice(len(samples), size=100, replace=False)
for sample in samples[inds]:
X_test = np.linspace(-8, 8, 100)
y_test = nn.forward(sample, X_test.reshape((1, -1)))
y_test += np.random.normal(0, 0.5, size=y_test.shape)
ax.plot(X_test.flatten(), y_test.flatten(), alpha = 0.1, color='red')
ax.plot(X_test.flatten(), y_test.flatten(), alpha = 0.1, color='red', label='Posterior Predictive')
ax.legend();
```
From the above, we see that the model fit is reasonable: the posterior predictive is narrow in regions containing in-sample data and closely follows the MLE fit there. We also see that the predictive distribution widens in out-of-sample regions, reflecting the model's inherent uncertainty there. In other words, the posterior predictive captures the epistemic uncertainty of the model, which is a desirable property.
## Performing Inference via Black-Box Variational Inference with the Reparametrization Trick
We now implement BBVI with the reparametrization trick for approximating an arbitrary posterior $\mathbb{P}(w \mid \text{Data})$ with a mean-field (fully factorized) Gaussian $\mathcal{N}(\mu, \Sigma)$, where $\Sigma$ is a diagonal covariance matrix. We then use this implementation to approximate and sample from the posterior of the Bayesian neural network from the previous section. Finally, as before, we visualize and evaluate the posterior predictive.
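As a reminder of the mechanics, the variational family is $q_{\mu, \sigma}(\mathbf{w}) = \mathcal{N}(\mathbf{w}; \mu, \operatorname{diag}(\sigma^2))$, and the reparametrization trick writes each sample as a deterministic transform of parameter-free noise so that gradients of the Monte Carlo ELBO estimate can flow through the sampling step:
$$
\mathbf{w} = \mu + \sigma \odot \epsilon, \quad \epsilon \sim \mathcal{N}(0, \mathbf{I}), \qquad
\mathcal{L}(\mu, \sigma) = \mathbb{E}_{\epsilon}\left[\log \mathbb{P}(\mathbf{w}, \text{Data})\right] + \mathcal{H}\left[q_{\mu, \sigma}\right].
$$
In the implementation below this corresponds to the line `samples = rs.randn(num_samples, D) * np.exp(log_std) + mean`, with `gaussian_entropy` supplying the closed-form entropy term.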
```python
# Implementation of BBVI
def black_box_variational_inference(logprob, D, num_samples):
def unpack_params(params):
# Variational dist is a diagonal Gaussian
mean, log_std = params[:D], params[D:]
return mean, log_std
def gaussian_entropy(log_std):
return 0.5 * D * (1.0 + np.log(2*np.pi)) + np.sum(log_std)
rs = np.random.RandomState(0)
def variational_objective(params, t):
"""Provides a stochastic estimate of the variational lower bound."""
mean, log_std = unpack_params(params)
samples = rs.randn(num_samples, D) * np.exp(log_std) + mean
lower_bound = gaussian_entropy(log_std) + np.mean(logprob(samples, t))
return -lower_bound
gradient = grad(variational_objective)
return variational_objective, gradient, unpack_params
# Approximate posterior via BBVI using mean-field gaussian variational family
def variational_inference(Σ, σ, y_train, x_train, S, max_iter, step_size, verbose=False):
d = Σ.shape[0]
inv_Σ = np.linalg.inv(Σ)
det_Σ = np.linalg.det(Σ)
def log_prior(w):
const = -0.5 * (d * np.log(2 * np.pi) + np.log(det_Σ))
quad = -0.5 * np.diag(np.dot(np.dot(w, inv_Σ), w.T))
return const + quad
def log_likelihood(w):
S = w.shape[0]
N = x_train.flatten().shape[0]
const = N * (-np.log(σ) - 0.5 * np.log(2 * np.pi))
quad = -0.5 * σ**(-2) * np.sum((y_train.reshape((1,1,N)) - nn.forward(w, x_train))**2, axis=2).flatten()
return const + quad
log_density = lambda w, t: log_likelihood(w) + log_prior(w)
objective, gradient, unpack_params = black_box_variational_inference(log_density, d, num_samples=S)
def callback(params, t, g):
if verbose:
if t % 100 == 0:
print(f"Iteration {t} lower bound {-objective(params, t)}; gradient mag: {np.linalg.norm(gradient(params, t))}")
print("Optimizing variational parameters...")
    init_mean = nn.weights[0]  # initialize the variational mean at the fitted weights
    init_log_std = -100 * np.ones(d)  # exp(-100) ~ 0, i.e. a near-point-mass initialization
    init_var_params = np.concatenate([init_mean, init_log_std])
variational_params = adam(gradient, init_var_params, step_size=step_size, num_iters=max_iter, callback=callback)
return variational_params
# Set model parameters
Σ = 25 * np.eye(nn.D)
σ = 0.5
# Approximate posterior
post_vi = variational_inference(Σ, σ, y_train, X_train, S=10, max_iter=50000, step_size=1e-2, verbose=True)
```
Optimizing variational parameters...
Iteration 0 lower bound -1625.87910636996; gradient mag: 4.013795829089467
Iteration 100 lower bound -1609.8448622791204; gradient mag: 4.306968219275584
Iteration 200 lower bound -1593.8131248051534; gradient mag: 4.009766141474757
Iteration 300 lower bound -1577.7776106428853; gradient mag: 4.00920827971278
Iteration 400 lower bound -1561.740692526161; gradient mag: 4.0091148458088846
Iteration 500 lower bound -1545.7036380219329; gradient mag: 4.008879503977277
Iteration 600 lower bound -1529.6673740263857; gradient mag: 4.008358490304666
Iteration 700 lower bound -1513.6326624063222; gradient mag: 4.007619718851674
Iteration 800 lower bound -1497.6000755517075; gradient mag: 4.006779059794405
Iteration 900 lower bound -1481.569981206585; gradient mag: 4.005928180175479
Iteration 1000 lower bound -1465.5425622807181; gradient mag: 4.005122607057618
Iteration 1100 lower bound -1449.5178535525542; gradient mag: 4.004390488139093
Iteration 1200 lower bound -1433.495780328836; gradient mag: 4.003742966044396
Iteration 1300 lower bound -1417.4761925104951; gradient mag: 4.003181295625111
Iteration 1400 lower bound -1401.4588923364681; gradient mag: 4.002701090336068
Iteration 1500 lower bound -1385.4436560888034; gradient mag: 4.0023082685577185
Iteration 1600 lower bound -1369.4375713246448; gradient mag: 4.532074111898121
Iteration 1700 lower bound -1353.4280164843774; gradient mag: 4.002517715091629
Iteration 1800 lower bound -1337.4201292604296; gradient mag: 4.001703531330288
Iteration 1900 lower bound -1321.4128904591187; gradient mag: 4.001509248762818
Iteration 2000 lower bound -1305.4128561405416; gradient mag: 6.991522438599369
Iteration 2100 lower bound -1289.4058139387773; gradient mag: 4.0013699913451495
Iteration 2200 lower bound -1273.4009612775474; gradient mag: 4.001234026478513
Iteration 2300 lower bound -1257.3963787506732; gradient mag: 4.001127060897352
Iteration 2400 lower bound -1241.4008047335872; gradient mag: 10.663934907311582
Iteration 2500 lower bound -1225.3926564201954; gradient mag: 4.003288023624756
Iteration 2600 lower bound -1209.3892281234528; gradient mag: 4.001000162876969
Iteration 2700 lower bound -1193.38601114361; gradient mag: 4.000909290618031
Iteration 2800 lower bound -1177.3828701085906; gradient mag: 4.000854892805639
Iteration 2900 lower bound -1161.3851284258976; gradient mag: 5.471890588151078
Iteration 3000 lower bound -1145.3812826772662; gradient mag: 4.000881427058797
Iteration 3100 lower bound -1129.3787590472257; gradient mag: 4.000772450916599
Iteration 3200 lower bound -1113.376299981641; gradient mag: 4.000726878411543
Iteration 3300 lower bound -1097.3741463626807; gradient mag: 4.002919137025768
Iteration 3400 lower bound -1081.3755101120676; gradient mag: 4.11841111212129
Iteration 3500 lower bound -1065.3731407952619; gradient mag: 4.000692932069274
Iteration 3600 lower bound -1049.3711182732595; gradient mag: 4.000638571120524
Iteration 3700 lower bound -1033.369331416136; gradient mag: 4.000725191041065
Iteration 3800 lower bound -1017.3672604866744; gradient mag: 4.000579931670755
Iteration 3900 lower bound -1001.3689196760615; gradient mag: 4.125379930027005
Iteration 4000 lower bound -985.3670204694704; gradient mag: 4.000703895472565
Iteration 4100 lower bound -969.3653259721748; gradient mag: 4.0005472499676396
Iteration 4200 lower bound -953.3636854331241; gradient mag: 4.000522640024147
Iteration 4300 lower bound -937.3620519722504; gradient mag: 4.000503350235954
Iteration 4400 lower bound -921.3922774680084; gradient mag: 27.39652098360508
Iteration 4500 lower bound -905.3620202400215; gradient mag: 4.002589547731495
Iteration 4600 lower bound -889.3605479082636; gradient mag: 4.000490673311918
Iteration 4700 lower bound -873.3591824089159; gradient mag: 4.000461650126223
Iteration 4800 lower bound -857.3580176693251; gradient mag: 4.011799049134225
Iteration 4900 lower bound -841.356555593195; gradient mag: 4.000428929337808
Iteration 5000 lower bound -825.3603426353931; gradient mag: 7.6254600886913595
Iteration 5100 lower bound -809.3566547081297; gradient mag: 4.000876413453307
Iteration 5200 lower bound -793.3555838499822; gradient mag: 4.00041921267175
Iteration 5300 lower bound -777.3544236143534; gradient mag: 4.000398182131863
Iteration 5400 lower bound -761.3532723840017; gradient mag: 4.0003847928883705
Iteration 5500 lower bound -745.3521181342569; gradient mag: 4.000372260197398
Iteration 5600 lower bound -729.3970186331018; gradient mag: 32.297941768394374
Iteration 5700 lower bound -713.3526161094505; gradient mag: 4.002515890779893
Iteration 5800 lower bound -697.3514898083534; gradient mag: 4.000371064986454
Iteration 5900 lower bound -681.350508546264; gradient mag: 4.000351861694795
Iteration 6000 lower bound -665.3495290247743; gradient mag: 4.000339893858897
Iteration 6100 lower bound -649.3485416501119; gradient mag: 4.0003289581024974
Iteration 6200 lower bound -633.3477039069578; gradient mag: 4.335425769229691
Iteration 6300 lower bound -617.34927822946; gradient mag: 4.395126139751338
Iteration 6400 lower bound -601.348014567415; gradient mag: 4.000344012725186
Iteration 6500 lower bound -585.347200519332; gradient mag: 4.000313444646826
Iteration 6600 lower bound -569.3464047704256; gradient mag: 4.000346710429985
Iteration 6700 lower bound -553.3455781286775; gradient mag: 4.0002929289492215
Iteration 6800 lower bound -537.3447461334786; gradient mag: 4.000283943484279
Iteration 6900 lower bound -521.5935789104208; gradient mag: 79.95068242757432
Iteration 7000 lower bound -505.3454170738333; gradient mag: 4.002291804717244
Iteration 7100 lower bound -489.3445465742389; gradient mag: 4.000291889693293
Iteration 7200 lower bound -473.3438200102609; gradient mag: 4.0002732087445345
Iteration 7300 lower bound -457.3430902726327; gradient mag: 4.000264234999987
Iteration 7400 lower bound -441.34235009214177; gradient mag: 4.000256187246359
Iteration 7500 lower bound -425.3416003765744; gradient mag: 4.000248129100213
Iteration 7600 lower bound -409.46026581869336; gradient mag: 53.16975681125179
Iteration 7700 lower bound -393.33516642481607; gradient mag: 4.003616651852114
Iteration 7800 lower bound -377.2877358640248; gradient mag: 4.004799024995106
Iteration 7900 lower bound -361.24319760751047; gradient mag: 4.004988562366386
Iteration 8000 lower bound -345.20240748201525; gradient mag: 4.004673664251389
Iteration 8100 lower bound -329.1655922873085; gradient mag: 4.305376537991918
Iteration 8200 lower bound -313.13476410022264; gradient mag: 7.252135575838624
Iteration 8300 lower bound -297.10249086343117; gradient mag: 4.003008066027186
Iteration 8400 lower bound -281.0758284469266; gradient mag: 4.060139243541528
Iteration 8500 lower bound -265.05197123708155; gradient mag: 4.245712571948154
Iteration 8600 lower bound -249.0324125671533; gradient mag: 7.402131566447914
Iteration 8700 lower bound -233.0123750799052; gradient mag: 5.947720246969008
Iteration 8800 lower bound -216.99439321383107; gradient mag: 4.852019229259524
Iteration 8900 lower bound -200.97883871821352; gradient mag: 5.034254329031476
Iteration 9000 lower bound -184.96458883936958; gradient mag: 4.936607421972713
Iteration 9100 lower bound -168.74673774962076; gradient mag: 5.790877545955348
Iteration 9200 lower bound -152.74332470121215; gradient mag: 6.9704316601721485
Iteration 9300 lower bound -136.77013811652583; gradient mag: 17.524423414134183
Iteration 9400 lower bound -120.79086495341599; gradient mag: 33.24881622878726
Iteration 9500 lower bound -105.7362314851873; gradient mag: 80.99667885542071
Iteration 9600 lower bound -91.27003164383453; gradient mag: 63.021878666553036
Iteration 9700 lower bound -79.32407037783005; gradient mag: 81.05238525499848
Iteration 9800 lower bound -71.63877951028587; gradient mag: 12.212610166397551
Iteration 9900 lower bound -70.13298773921629; gradient mag: 49.84584465300977
Iteration 10000 lower bound -62.506201535961715; gradient mag: 118.89985795703868
Iteration 10100 lower bound -62.65599492087436; gradient mag: 35.52401206822941
Iteration 10200 lower bound -67.62255711648194; gradient mag: 52.43439824221355
Iteration 10300 lower bound -58.04543094003037; gradient mag: 35.6000077670576
Iteration 10400 lower bound -59.42192354096022; gradient mag: 39.21093454598852
Iteration 10500 lower bound -53.57041872954048; gradient mag: 27.794921403961563
Iteration 10600 lower bound -54.911569509506265; gradient mag: 120.31433337001889
Iteration 10700 lower bound -53.835641056629804; gradient mag: 79.75204269785986
Iteration 10800 lower bound -52.708810896003385; gradient mag: 24.828485209050427
Iteration 10900 lower bound -51.48840138897686; gradient mag: 37.38290130120406
Iteration 11000 lower bound -53.627916001388144; gradient mag: 57.90266868161564
Iteration 11100 lower bound -50.95114213579967; gradient mag: 58.278568411480585
Iteration 11200 lower bound -51.15834906344856; gradient mag: 18.90674511732361
Iteration 11300 lower bound -50.852830354983666; gradient mag: 39.52828038402916
Iteration 11400 lower bound -51.275581927924094; gradient mag: 45.58086895921578
Iteration 11500 lower bound -51.44110580806054; gradient mag: 26.06223235244458
Iteration 11600 lower bound -50.446049396782854; gradient mag: 38.62925878629038
Iteration 11700 lower bound -50.87712276461153; gradient mag: 26.54801315293595
Iteration 11800 lower bound -69.59490911130008; gradient mag: 57.42672624363568
Iteration 11900 lower bound -52.949582603629366; gradient mag: 30.616688497432897
Iteration 12000 lower bound -49.90754900947188; gradient mag: 61.80218797753596
Iteration 12100 lower bound -51.931514602461405; gradient mag: 37.95695517228861
Iteration 12200 lower bound -48.40895824544853; gradient mag: 29.600493374059205
Iteration 12300 lower bound -51.07770320496959; gradient mag: 49.05524126402155
Iteration 12400 lower bound -47.62168998002703; gradient mag: 42.14301955834423
Iteration 12500 lower bound -46.85423568502193; gradient mag: 45.77160180802963
Iteration 12600 lower bound -48.35243566459255; gradient mag: 94.15834652794071
Iteration 12700 lower bound -46.26752321368986; gradient mag: 63.61052352559436
Iteration 12800 lower bound -46.00535607366691; gradient mag: 43.04260526802315
Iteration 12900 lower bound -45.34282867573735; gradient mag: 102.6302296596487
Iteration 13000 lower bound -53.38567596049293; gradient mag: 46.70746451855649
Iteration 13100 lower bound -46.40011813456245; gradient mag: 39.321148231096906
Iteration 13200 lower bound -45.26019387695592; gradient mag: 50.403560299258885
Iteration 13300 lower bound -46.54611006940012; gradient mag: 21.110412957900053
Iteration 13400 lower bound -46.80141975840739; gradient mag: 35.56501741516475
Iteration 13500 lower bound -45.347424237009356; gradient mag: 32.84819989716104
Iteration 13600 lower bound -55.2336579020845; gradient mag: 181.22928138595358
Iteration 13700 lower bound -46.69273264549909; gradient mag: 38.66491111494481
Iteration 13800 lower bound -48.706393909339404; gradient mag: 35.25904105949783
Iteration 13900 lower bound -46.09909827584119; gradient mag: 105.44459906603639
Iteration 14000 lower bound -55.07972738626578; gradient mag: 26.68592655409844
Iteration 14100 lower bound -46.70169275894972; gradient mag: 44.74273582363923
Iteration 14200 lower bound -47.89132345952628; gradient mag: 46.079502593549236
Iteration 14300 lower bound -45.96519220350343; gradient mag: 10.475707469473274
Iteration 14400 lower bound -45.11569247349698; gradient mag: 76.0594507979519
Iteration 14500 lower bound -44.776955711090054; gradient mag: 48.44918974393442
Iteration 14600 lower bound -46.50853819276421; gradient mag: 16.750217803789234
Iteration 14700 lower bound -45.18925452534144; gradient mag: 50.850530834067186
Iteration 14800 lower bound -45.85166002016605; gradient mag: 40.00916760338907
Iteration 14900 lower bound -45.599055732303626; gradient mag: 14.089881519894774
Iteration 15000 lower bound -45.59783175787383; gradient mag: 33.61832141972716
Iteration 15100 lower bound -43.56892782372161; gradient mag: 89.90613504112775
Iteration 15200 lower bound -45.09808427968346; gradient mag: 7.201796994509679
Iteration 15300 lower bound -43.503478235060655; gradient mag: 42.6556106963265
Iteration 15400 lower bound -44.43759241564126; gradient mag: 41.23983526713167
Iteration 15500 lower bound -47.027281605393995; gradient mag: 43.2352839742545
Iteration 15600 lower bound -45.55949401253657; gradient mag: 54.388833559746985
Iteration 15700 lower bound -43.07998186212866; gradient mag: 39.06799068793347
Iteration 15800 lower bound -44.89047081313578; gradient mag: 38.32739221239101
Iteration 15900 lower bound -42.920827779525005; gradient mag: 64.07058557600675
Iteration 16000 lower bound -42.59920350244291; gradient mag: 24.228899036066426
Iteration 16100 lower bound -46.20318667960322; gradient mag: 35.30241247248203
Iteration 16200 lower bound -45.97105516399943; gradient mag: 57.61947978542411
Iteration 16300 lower bound -44.62310696831878; gradient mag: 37.028035649306155
Iteration 16400 lower bound -45.404892717312485; gradient mag: 33.34837442158758
Iteration 16500 lower bound -48.81857660953459; gradient mag: 31.26905141230444
Iteration 16600 lower bound -43.77616261721574; gradient mag: 11.45123041512909
Iteration 16700 lower bound -44.20000811106816; gradient mag: 55.710559807286806
Iteration 16800 lower bound -44.57451054108455; gradient mag: 19.79462305517491
Iteration 16900 lower bound -43.45917265605853; gradient mag: 31.896656723596596
Iteration 17000 lower bound -45.84082077401179; gradient mag: 12.31560826794601
Iteration 17100 lower bound -43.47314310159472; gradient mag: 31.65999423693999
Iteration 17200 lower bound -46.12553252269958; gradient mag: 14.680370848930918
Iteration 17300 lower bound -43.51905951528269; gradient mag: 61.46582153516011
Iteration 17400 lower bound -44.95014913256781; gradient mag: 23.228310743004865
Iteration 17500 lower bound -43.940795217352374; gradient mag: 5.2298866706948655
Iteration 17600 lower bound -42.80573450060559; gradient mag: 21.245084492039606
Iteration 17700 lower bound -45.77259000282225; gradient mag: 29.168025225342983
Iteration 17800 lower bound -45.388965073335605; gradient mag: 12.060274930286093
Iteration 17900 lower bound -43.28443285101664; gradient mag: 12.937607993905443
Iteration 18000 lower bound -44.112671511810845; gradient mag: 24.677223586757407
Iteration 18100 lower bound -43.65648447461763; gradient mag: 35.96995981893882
Iteration 18200 lower bound -41.80395700743062; gradient mag: 23.68782671676145
Iteration 18300 lower bound -44.940691005928144; gradient mag: 5.912641346216531
Iteration 18400 lower bound -42.03572695371659; gradient mag: 1344.6836038018432
Iteration 18500 lower bound -43.5053553551317; gradient mag: 79.03717360369697
Iteration 18600 lower bound -43.38505078311677; gradient mag: 16.992463887368856
Iteration 18700 lower bound -42.674983199644615; gradient mag: 15.621589921312122
Iteration 18800 lower bound -43.33395386686651; gradient mag: 296.16747460363587
Iteration 18900 lower bound -42.245277141086; gradient mag: 10.716676629582174
Iteration 19000 lower bound -43.86054945892814; gradient mag: 29.5354105810459
Iteration 19100 lower bound -41.538430768778625; gradient mag: 3.2388263724846174
Iteration 19200 lower bound -41.70471121664583; gradient mag: 15.44279283916351
Iteration 19300 lower bound -41.94065031159376; gradient mag: 4.679668491910979
Iteration 19400 lower bound -42.23276640029787; gradient mag: 9.982164129173459
Iteration 19500 lower bound -42.10927403442939; gradient mag: 9.838077880792897
Iteration 19600 lower bound -42.69751717011375; gradient mag: 9.42506783731324
Iteration 19700 lower bound -84.1052704946424; gradient mag: 35.596939133461056
Iteration 19800 lower bound -42.19241996699444; gradient mag: 7.177552513759342
Iteration 19900 lower bound -43.39993114311426; gradient mag: 12.08577077520931
Iteration 20000 lower bound -41.85723395997377; gradient mag: 10.237923941056222
Iteration 20100 lower bound -42.304827912295124; gradient mag: 3.9392002877917442
Iteration 20200 lower bound -40.697600188619624; gradient mag: 59.148342284904345
Iteration 20300 lower bound -45.853668958356536; gradient mag: 5.139135935821038
Iteration 20400 lower bound -41.643269166603645; gradient mag: 13.407450174172613
Iteration 20500 lower bound -41.711967444401424; gradient mag: 11.680029064950167
Iteration 20600 lower bound -42.824628783584274; gradient mag: 6.769574270969551
Iteration 20700 lower bound -41.58110457141607; gradient mag: 20.19140920646898
Iteration 20800 lower bound -42.49417886613912; gradient mag: 3.598371613314762
Iteration 20900 lower bound -42.29797853622559; gradient mag: 7.6576393717942715
Iteration 21000 lower bound -40.822260455666566; gradient mag: 9.843558303466457
Iteration 21100 lower bound -39.90231153995201; gradient mag: 10.251160106206896
Iteration 21200 lower bound -42.606146164339926; gradient mag: 38.878627840746624
Iteration 21300 lower bound -42.446931898652096; gradient mag: 14.483044280113633
Iteration 21400 lower bound -41.98015277438403; gradient mag: 8.631544823177231
Iteration 21500 lower bound -41.46017300791412; gradient mag: 30.359121356351675
Iteration 21600 lower bound -40.83681296820431; gradient mag: 26.93691893254844
Iteration 21700 lower bound -41.54825343711781; gradient mag: 18.694792629957135
Iteration 21800 lower bound -40.802849291591414; gradient mag: 4.2537593317531055
Iteration 21900 lower bound -42.25699617238534; gradient mag: 33.8911144487595
Iteration 22000 lower bound -42.151389508214905; gradient mag: 15.109117651493985
Iteration 22100 lower bound -40.59433908838606; gradient mag: 8.530043183917558
Iteration 22200 lower bound -41.04691811849315; gradient mag: 19.27175691146966
Iteration 22300 lower bound -42.389117261933876; gradient mag: 18.182812623629033
Iteration 22400 lower bound -41.1769593749463; gradient mag: 19.342545306928397
Iteration 22500 lower bound -40.99429404310009; gradient mag: 10.909589971391267
Iteration 22600 lower bound -41.2807466818264; gradient mag: 42.44995187043484
Iteration 22700 lower bound -40.24935235180248; gradient mag: 5.026006173472815
Iteration 22800 lower bound -42.186923684833936; gradient mag: 22.896515551944375
Iteration 22900 lower bound -43.61480313651416; gradient mag: 20.719268381341415
Iteration 23000 lower bound -41.02916947832877; gradient mag: 7.576070184720111
Iteration 23100 lower bound -40.94610423300299; gradient mag: 10.714120575181756
Iteration 23200 lower bound -41.756628143303935; gradient mag: 21.672615966693872
Iteration 23300 lower bound -39.21904457012876; gradient mag: 5.373144620859568
Iteration 23400 lower bound -42.03901515797162; gradient mag: 5.213303593787348
Iteration 23500 lower bound -40.54592588538834; gradient mag: 19.083913120409363
Iteration 23600 lower bound -41.706651520345716; gradient mag: 12.524346447228806
Iteration 23700 lower bound -40.401168401883226; gradient mag: 5.747086138858918
Iteration 23800 lower bound -42.385104186967794; gradient mag: 5.861317302209921
Iteration 23900 lower bound -41.872691495360115; gradient mag: 18.17476041393884
Iteration 24000 lower bound -40.88453884551983; gradient mag: 73.39119081775242
Iteration 24100 lower bound -43.44242804049945; gradient mag: 42.260520870051906
Iteration 24200 lower bound -43.75286243898268; gradient mag: 16.34919148849513
Iteration 24300 lower bound -41.425327328178014; gradient mag: 18.119703276227803
Iteration 24400 lower bound -40.30811532941682; gradient mag: 15.056614938738345
Iteration 24500 lower bound -39.893265480054446; gradient mag: 33.14511633301052
Iteration 24600 lower bound -41.947892967621605; gradient mag: 4.28525836155336
Iteration 24700 lower bound -40.25083533680828; gradient mag: 5.294118993619067
Iteration 24800 lower bound -40.833217398516844; gradient mag: 45.33318936442913
Iteration 24900 lower bound -42.06437526972891; gradient mag: 17.492605061210217
Iteration 25000 lower bound -41.34268483525091; gradient mag: 9.188616892920578
Iteration 25100 lower bound -40.75534574424633; gradient mag: 98.16146952866012
Iteration 25200 lower bound -40.12987926232219; gradient mag: 8.107265488232986
Iteration 25300 lower bound -39.78779039560334; gradient mag: 5.670757755245432
Iteration 25400 lower bound -40.95866196684865; gradient mag: 16.947851006702503
Iteration 25500 lower bound -40.52422246216475; gradient mag: 35.89433489223013
Iteration 25600 lower bound -38.739239387815026; gradient mag: 14.381824911394686
Iteration 25700 lower bound -40.48745430586065; gradient mag: 46.51427852127787
Iteration 25800 lower bound -40.81980913350136; gradient mag: 21.008160618506896
Iteration 25900 lower bound -41.655536588189115; gradient mag: 33.84607852832001
Iteration 26000 lower bound -40.66179438196828; gradient mag: 9.225022033388433
Iteration 26100 lower bound -41.78800116132338; gradient mag: 17.820848796519527
Iteration 26200 lower bound -42.343702744790264; gradient mag: 19.374565603402992
Iteration 26300 lower bound -39.4704198908267; gradient mag: 49.02401017280006
Iteration 26400 lower bound -39.86521856109388; gradient mag: 8.425747390059447
Iteration 26500 lower bound -39.216472970550704; gradient mag: 19.9156551912075
Iteration 26600 lower bound -38.94047790201782; gradient mag: 11.333899064088405
Iteration 26700 lower bound -40.878723316641434; gradient mag: 7.647252491613212
Iteration 26800 lower bound -42.04325945682516; gradient mag: 4.455854979261044
Iteration 26900 lower bound -41.18344849868353; gradient mag: 13.429148450922808
Iteration 27000 lower bound -41.9595954023598; gradient mag: 11.138344999831126
Iteration 27100 lower bound -40.52231330807349; gradient mag: 35.686288138462345
Iteration 27200 lower bound -41.42709089502239; gradient mag: 17.62296077785555
Iteration 27300 lower bound -39.74971010686809; gradient mag: 18.039258984091717
Iteration 27400 lower bound -40.27137878528885; gradient mag: 4.109080044780605
Iteration 27500 lower bound -42.08298618781781; gradient mag: 19.07020541768473
Iteration 27600 lower bound -43.45849620136542; gradient mag: 38.41476825004413
Iteration 27700 lower bound -42.472526226034105; gradient mag: 45.4940864574689
Iteration 27800 lower bound -41.56559626897204; gradient mag: 6.539835820537954
Iteration 27900 lower bound -40.75714659931374; gradient mag: 11.907342732001513
Iteration 28000 lower bound -43.33077622270719; gradient mag: 27.774979505780248
Iteration 28100 lower bound -40.46289669875605; gradient mag: 12.593395660722082
Iteration 28200 lower bound -40.46540358637205; gradient mag: 8.045398785957415
Iteration 28300 lower bound -40.968410986433696; gradient mag: 17.27010609464663
Iteration 28400 lower bound -42.20833633620026; gradient mag: 22.69767078699619
Iteration 28500 lower bound -40.787009030516444; gradient mag: 4.0908058384092945
Iteration 28600 lower bound -40.47083915244033; gradient mag: 14.357115411697478
Iteration 28700 lower bound -40.304233328136156; gradient mag: 24.00061834930836
Iteration 28800 lower bound -41.62305249591732; gradient mag: 24.91649323210363
Iteration 28900 lower bound -42.90947789481002; gradient mag: 16.139085634275627
Iteration 29000 lower bound -41.84174915399224; gradient mag: 20.240619714216063
Iteration 29100 lower bound -42.154078543230256; gradient mag: 8.51225332221481
Iteration 29200 lower bound -41.29895814622562; gradient mag: 9.669368175980042
Iteration 29300 lower bound -40.75918877426852; gradient mag: 11.187104704843989
Iteration 29400 lower bound -41.36550565050271; gradient mag: 33.937594088270494
Iteration 29500 lower bound -40.07568955834945; gradient mag: 18.903871487088693
Iteration 29600 lower bound -40.77592778582261; gradient mag: 5.9927430307167535
Iteration 29700 lower bound -40.53797224597014; gradient mag: 3.8827890144534645
Iteration 29800 lower bound -42.02704864043841; gradient mag: 337.4177505008032
Iteration 29900 lower bound -39.15673281142994; gradient mag: 12.637727837616067
Iteration 30000 lower bound -40.87367459154853; gradient mag: 4.192077475557058
Iteration 30100 lower bound -40.77135502152568; gradient mag: 3.3934033191781294
Iteration 30200 lower bound -43.41140080963966; gradient mag: 63.96777524675244
Iteration 30300 lower bound -40.862383368327386; gradient mag: 17.380713570929643
Iteration 30400 lower bound -40.69576601512579; gradient mag: 29.259658986715486
Iteration 30500 lower bound -38.97199820409049; gradient mag: 38.12610066354613
Iteration 30600 lower bound -39.99464774488991; gradient mag: 4.001980413598756
Iteration 30700 lower bound -40.69607187127798; gradient mag: 36.25140299898785
Iteration 30800 lower bound -44.348147320488906; gradient mag: 3.8122549795464873
Iteration 30900 lower bound -40.649828999093586; gradient mag: 14.39584607496794
Iteration 31000 lower bound -39.92130262699546; gradient mag: 17.550735104325756
Iteration 31100 lower bound -41.3034764103698; gradient mag: 32.53509359322071
Iteration 31200 lower bound -41.68818191309222; gradient mag: 3.412927831792631
Iteration 31300 lower bound -40.11241024287623; gradient mag: 7.270531095108004
Iteration 31400 lower bound -39.79373560308818; gradient mag: 7.379342844031847
Iteration 31500 lower bound -38.99147328517457; gradient mag: 205.3402466540088
Iteration 31600 lower bound -39.620021904683334; gradient mag: 23.541665663695312
Iteration 31700 lower bound -40.86025075637203; gradient mag: 27.02313131623847
Iteration 31800 lower bound -40.5992938006834; gradient mag: 29.251125005190833
Iteration 31900 lower bound -39.40673854172713; gradient mag: 47.499767593372034
Iteration 32000 lower bound -39.829346871143066; gradient mag: 31.123460906063983
Iteration 32100 lower bound -39.80846631461656; gradient mag: 8.946512351520038
Iteration 32200 lower bound -40.626886047024534; gradient mag: 5.429170624225388
Iteration 32300 lower bound -42.07774695030643; gradient mag: 9.748373647562158
Iteration 32400 lower bound -40.53289822360226; gradient mag: 3.7088840972348756
Iteration 32500 lower bound -40.47929272790021; gradient mag: 19.823447730498025
Iteration 32600 lower bound -42.219591066682455; gradient mag: 19.052384237901965
Iteration 32700 lower bound -40.71670175786751; gradient mag: 9.072783945668514
Iteration 32800 lower bound -41.17034821983929; gradient mag: 5.3492124462435715
Iteration 32900 lower bound -40.16042962759377; gradient mag: 3.947890198569576
Iteration 33000 lower bound -40.73294740386811; gradient mag: 6.105242801036117
Iteration 33100 lower bound -41.70407022192923; gradient mag: 20.254224390757244
Iteration 33200 lower bound -40.54233954660783; gradient mag: 24.995469638349174
Iteration 33300 lower bound -40.42514479054592; gradient mag: 4.516186968214805
Iteration 33400 lower bound -40.347858263687264; gradient mag: 19.11846234613438
Iteration 33500 lower bound -40.023943104017626; gradient mag: 19.359566188091243
Iteration 33600 lower bound -41.611870363216156; gradient mag: 11.397260838406718
Iteration 33700 lower bound -41.85127897690619; gradient mag: 25.59971835157442
Iteration 33800 lower bound -40.56486680034263; gradient mag: 16.24280426139206
Iteration 33900 lower bound -41.63332302022338; gradient mag: 3.6141842152767927
Iteration 34000 lower bound -40.266181536250144; gradient mag: 43.50920706050723
Iteration 34100 lower bound -41.792908800930704; gradient mag: 17.61036957495252
Iteration 34200 lower bound -41.04136399285987; gradient mag: 5.627052541685389
Iteration 34300 lower bound -41.752639381287835; gradient mag: 5.551089852895353
Iteration 34400 lower bound -39.74804211834646; gradient mag: 20.680403499366612
Iteration 34500 lower bound -42.41074391482233; gradient mag: 4.1238847918578365
Iteration 34600 lower bound -41.85256376046706; gradient mag: 27.831689332426745
Iteration 34700 lower bound -40.61829790751051; gradient mag: 29.19575884570113
Iteration 34800 lower bound -40.96902805955251; gradient mag: 19.019578800156236
Iteration 34900 lower bound -40.31177434671974; gradient mag: 34.180546815244696
Iteration 35000 lower bound -40.96995206600208; gradient mag: 8.645637658186274
Iteration 35100 lower bound -42.075993485068985; gradient mag: 15.718286332899183
Iteration 35200 lower bound -42.59454613744627; gradient mag: 21.57585703690739
Iteration 35300 lower bound -60.78069458260231; gradient mag: 10.433336228414767
Iteration 35400 lower bound -39.15418913819894; gradient mag: 46.44613677892922
Iteration 35500 lower bound -39.91632812075725; gradient mag: 759.9800803958713
Iteration 35600 lower bound -41.208408465967544; gradient mag: 8.978219578692707
Iteration 35700 lower bound -40.00446140448054; gradient mag: 21.28648478218684
Iteration 35800 lower bound -41.29548013138272; gradient mag: 18.689686323414627
Iteration 35900 lower bound -40.60815337723565; gradient mag: 49.15530733063129
Iteration 36000 lower bound -42.61696657230135; gradient mag: 21.93848290703545
Iteration 36100 lower bound -40.551124666517254; gradient mag: 24.439375836360554
Iteration 36200 lower bound -41.16643340205104; gradient mag: 11.938599356021296
Iteration 36300 lower bound -41.99693328216977; gradient mag: 9.85564109642347
Iteration 36400 lower bound -40.06128208046623; gradient mag: 28.43782765806377
Iteration 36500 lower bound -42.12827693606958; gradient mag: 15.466000404817306
Iteration 36600 lower bound -40.48678494699044; gradient mag: 11.831993053478994
Iteration 36700 lower bound -40.29634915291706; gradient mag: 13.635313357143323
Iteration 36800 lower bound -42.12995724133165; gradient mag: 22.32135977183423
Iteration 36900 lower bound -40.05164065448328; gradient mag: 7.1220454641360575
Iteration 37000 lower bound -43.28754952750044; gradient mag: 6.188035884200535
Iteration 37100 lower bound -40.855209623356764; gradient mag: 9.390349422575898
Iteration 37200 lower bound -40.44294033205584; gradient mag: 48.678094563922365
Iteration 37300 lower bound -42.65912705053914; gradient mag: 13.393264127163388
Iteration 37400 lower bound -41.02541712513137; gradient mag: 35.321555471809795
Iteration 37500 lower bound -41.37238302728897; gradient mag: 7.494522679577336
Iteration 37600 lower bound -42.67177749736082; gradient mag: 4.52396773332363
Iteration 37700 lower bound -40.304338816008354; gradient mag: 8.568225810717138
Iteration 37800 lower bound -41.08975049226933; gradient mag: 3.97771914465068
Iteration 37900 lower bound -40.283093181735126; gradient mag: 6.851410853253076
Iteration 38000 lower bound -40.92401437015869; gradient mag: 5.337519048666161
Iteration 38100 lower bound -41.744371621510595; gradient mag: 26.555427119614674
Iteration 38200 lower bound -41.43930101617468; gradient mag: 18.21307630349004
Iteration 38300 lower bound -40.284761524424184; gradient mag: 33.222932369342004
Iteration 38400 lower bound -41.097054802351266; gradient mag: 22.515467744466733
Iteration 38500 lower bound -39.180620885954895; gradient mag: 14.296423939366969
Iteration 38600 lower bound -40.34442575505553; gradient mag: 5.692274893531532
Iteration 38700 lower bound -42.05885491786497; gradient mag: 24.773218506804547
Iteration 38800 lower bound -40.940940980918626; gradient mag: 3.1436487444766525
Iteration 38900 lower bound -49.24452540978168; gradient mag: 3.7214289651723687
Iteration 39000 lower bound -41.16551655960467; gradient mag: 17.13334775372095
Iteration 39100 lower bound -40.62447058493636; gradient mag: 18.088610286946075
Iteration 39200 lower bound -44.69397045397851; gradient mag: 10.212773388416053
Iteration 39300 lower bound -40.75317252397034; gradient mag: 24.515508322771502
Iteration 39400 lower bound -41.43771330103824; gradient mag: 41.742415517449665
Iteration 39500 lower bound -41.0921374322578; gradient mag: 31.543309730951336
Iteration 39600 lower bound -40.86330300116745; gradient mag: 19.791455982824004
Iteration 39700 lower bound -40.31784957980149; gradient mag: 37.346990549151926
Iteration 39800 lower bound -39.81343797106409; gradient mag: 33.34435468835366
Iteration 39900 lower bound -39.96393440643628; gradient mag: 40.36436526478913
Iteration 40000 lower bound -40.382104461136635; gradient mag: 12.335206884636866
Iteration 40100 lower bound -41.12075559648058; gradient mag: 8.816372522733412
Iteration 40200 lower bound -40.338518416699685; gradient mag: 40.63204693454371
Iteration 40300 lower bound -40.73988057419786; gradient mag: 30.22877693138028
Iteration 40400 lower bound -39.28311891520741; gradient mag: 28.413983876478266
Iteration 40500 lower bound -42.88192983304532; gradient mag: 9.546409872901009
Iteration 40600 lower bound -41.061476895065205; gradient mag: 18.79830324143832
Iteration 40700 lower bound -39.03533382238935; gradient mag: 21.44372856211927
Iteration 40800 lower bound -43.010807900096296; gradient mag: 7.8129459098644505
Iteration 40900 lower bound -42.09853590988604; gradient mag: 15.904792189029845
Iteration 41000 lower bound -40.56125721651851; gradient mag: 54.74558287448544
Iteration 41100 lower bound -42.17729851428234; gradient mag: 24.76855014530481
Iteration 41200 lower bound -41.732961732801336; gradient mag: 4.579385339126911
Iteration 41300 lower bound -42.08879807440975; gradient mag: 9.611865373121075
Iteration 41400 lower bound -40.3804853084285; gradient mag: 31.03905990510176
Iteration 41500 lower bound -40.333309888250014; gradient mag: 7.839587155088635
Iteration 41600 lower bound -40.53756078116343; gradient mag: 15.387680581230434
Iteration 41700 lower bound -41.904942110435016; gradient mag: 36.772866312202964
Iteration 41800 lower bound -41.161833597617644; gradient mag: 5.713689695013303
Iteration 41900 lower bound -40.867507035742726; gradient mag: 9.855194391968114
Iteration 42000 lower bound -40.2839487592759; gradient mag: 23.373542095012947
Iteration 42100 lower bound -39.11662373769641; gradient mag: 9.424668515239915
Iteration 42200 lower bound -39.712777317377174; gradient mag: 3.2800264497060008
Iteration 42300 lower bound -42.44818207634846; gradient mag: 21.759875584242323
Iteration 42400 lower bound -38.94671432918367; gradient mag: 60.60188624260306
Iteration 42500 lower bound -40.8452114752433; gradient mag: 28.335046437598372
Iteration 42600 lower bound -40.868056912900755; gradient mag: 75.87032070719786
Iteration 42700 lower bound -40.58989777524661; gradient mag: 19.69463714762765
Iteration 42800 lower bound -40.060539767672964; gradient mag: 37.620241183452194
Iteration 42900 lower bound -38.78244901849625; gradient mag: 23.769537217423046
Iteration 43000 lower bound -39.81146954898823; gradient mag: 4.322899963122284
Iteration 43100 lower bound -41.898245320109496; gradient mag: 14.19570014500304
Iteration 43200 lower bound -38.661403431912746; gradient mag: 5.982523120800825
Iteration 43300 lower bound -41.38332673749551; gradient mag: 16.163257978888222
Iteration 43400 lower bound -41.243231334338894; gradient mag: 26.351977565668925
Iteration 43500 lower bound -40.8537842105564; gradient mag: 11.134087894726338
Iteration 43600 lower bound -40.37004062842491; gradient mag: 3.8115369109465265
Iteration 43700 lower bound -41.229632549135125; gradient mag: 16.07800140903307
Iteration 43800 lower bound -41.983144698334414; gradient mag: 16.3878401380863
Iteration 43900 lower bound -40.58245507079723; gradient mag: 6.1303880258659795
Iteration 44000 lower bound -40.80527108622783; gradient mag: 21.871483929669317
Iteration 44100 lower bound -42.68162223761894; gradient mag: 8.162918971626071
Iteration 44200 lower bound -41.62064376155771; gradient mag: 29.51903819234498
Iteration 44300 lower bound -40.65145515940564; gradient mag: 29.851923257365133
Iteration 44400 lower bound -40.47715361250888; gradient mag: 6.455492903807987
Iteration 44500 lower bound -41.38396724281864; gradient mag: 39.68975746975647
Iteration 44600 lower bound -41.47396745956451; gradient mag: 8.584069794427215
Iteration 44700 lower bound -41.3778738616673; gradient mag: 32.082686203671784
Iteration 44800 lower bound -39.61895219977851; gradient mag: 44.24795613108385
Iteration 44900 lower bound -41.415761266435716; gradient mag: 3.662529083785682
Iteration 45000 lower bound -41.22182326254192; gradient mag: 31.48578113184102
Iteration 45100 lower bound -41.310534304847906; gradient mag: 18.042647531251326
Iteration 45200 lower bound -43.450497670054006; gradient mag: 52.1193584762525
Iteration 45300 lower bound -40.26724296261452; gradient mag: 8.177035838005589
Iteration 45400 lower bound -41.90831568266844; gradient mag: 49.47360169154712
Iteration 45500 lower bound -42.62893885452414; gradient mag: 33.23681040452274
Iteration 45600 lower bound -40.12495629141701; gradient mag: 32.94823370856854
Iteration 45700 lower bound -41.202984762155765; gradient mag: 12.471480376328913
Iteration 45800 lower bound -39.791088004806575; gradient mag: 21.872953371559547
Iteration 45900 lower bound -39.5340997616474; gradient mag: 33.635652201040365
Iteration 46000 lower bound -44.07981896018327; gradient mag: 5.75795417102624
Iteration 46100 lower bound -38.81030267209606; gradient mag: 4.046492319695925
Iteration 46200 lower bound -40.91214872353917; gradient mag: 26.412697313406063
Iteration 46300 lower bound -41.79426401908966; gradient mag: 4.8896877660743385
Iteration 46400 lower bound -41.228021267853265; gradient mag: 31.637645335073866
Iteration 46500 lower bound -40.46268502784921; gradient mag: 15.61055341613056
Iteration 46600 lower bound -40.59041676493684; gradient mag: 5.115342330281836
Iteration 46700 lower bound -40.669274264225166; gradient mag: 24.69481966447004
Iteration 46800 lower bound -41.468521660156014; gradient mag: 8.65229754475257
Iteration 46900 lower bound -40.36823854538098; gradient mag: 11.944944218164277
Iteration 47000 lower bound -40.41928092259831; gradient mag: 14.83658437494177
Iteration 47100 lower bound -39.73638376413831; gradient mag: 5.839137763655094
Iteration 47200 lower bound -42.53272809461198; gradient mag: 8.38697946244182
Iteration 47300 lower bound -41.01449813801565; gradient mag: 12.718727120316872
Iteration 47400 lower bound -40.88339272910975; gradient mag: 30.219381930729288
Iteration 47500 lower bound -40.533489490935224; gradient mag: 6.8695804830074065
Iteration 47600 lower bound -41.09064819415493; gradient mag: 121.27967794417675
Iteration 47700 lower bound -43.812731299441666; gradient mag: 16.238290133943913
Iteration 47800 lower bound -42.81123662031848; gradient mag: 21.74671565150544
Iteration 47900 lower bound -40.26168716728116; gradient mag: 8.513456372175193
Iteration 48000 lower bound -40.66730198924765; gradient mag: 26.239805531457883
Iteration 48100 lower bound -40.80525479083812; gradient mag: 19.857706548331944
Iteration 48200 lower bound -39.71548625202652; gradient mag: 13.933213658483213
Iteration 48300 lower bound -41.245099279195884; gradient mag: 4.649653255422183
Iteration 48400 lower bound -40.25924720333923; gradient mag: 13.069222051446838
Iteration 48500 lower bound -41.187283131467666; gradient mag: 3.9230795257995994
Iteration 48600 lower bound -41.2485618913248; gradient mag: 3.8903841582816736
Iteration 48700 lower bound -41.199190114004814; gradient mag: 8.353433649617575
Iteration 48800 lower bound -40.29520860700825; gradient mag: 30.99566444115119
Iteration 48900 lower bound -46.30092994265044; gradient mag: 731.0817811830602
Iteration 49000 lower bound -40.20744168335834; gradient mag: 24.313721891923525
Iteration 49100 lower bound -42.25293917234508; gradient mag: 14.345628783707495
Iteration 49200 lower bound -41.127151763305264; gradient mag: 10.882225655455661
Iteration 49300 lower bound -41.7288842068689; gradient mag: 9.476159359763209
Iteration 49400 lower bound -41.77231181892367; gradient mag: 9.220288031913803
Iteration 49500 lower bound -40.15468502665138; gradient mag: 10.46330274393757
Iteration 49600 lower bound -39.13564627395256; gradient mag: 22.320265636189525
Iteration 49700 lower bound -40.3675096214494; gradient mag: 26.73290192767381
Iteration 49800 lower bound -39.33955947802038; gradient mag: 184.04951834960772
Iteration 49900 lower bound -42.42397156231568; gradient mag: 3.4880296455869084
```python
# Set random seed for reproducibility
np.random.seed(207)
samples = np.random.multivariate_normal(post_vi[:nn.D], np.diag(np.exp(post_vi[nn.D:])**2), size=100)  # covariance = diag(std^2); post_vi stores log-stds
y_pred_test = nn.forward(nn.weights, X_test.reshape((1,-1)))
# Visualize posterior predictive
fig, ax = plt.subplots(figsize=(10, 6))
fig.suptitle('Posterior Predictive', fontsize=22)
ax.scatter(X_train.flatten(), y_train.flatten(), color='green', label='Train Data')
ax.plot(X_test.flatten(), y_pred_test.flatten(), color='blue', label='Fitted NN Function')
for sample in samples:
X_test = np.linspace(-8, 8, 100)
y_test = nn.forward(sample.reshape(1,-1), X_test.reshape((1,-1)))
y_test += np.random.normal(0, 0.5, size=y_test.shape)
ax.plot(X_test.flatten(), y_test.flatten(), alpha=0.1, color='red')
ax.plot(X_test.flatten(), y_test.flatten(), alpha=0.1, color='red', label='Posterior Predictive')
ax.legend();
```
From the above, we note that the posterior predictive obtained via BBVI is unable to reflect the epistemic uncertainty in the OOD regions, as indicated by its narrow spread there. It does, however, reflect some uncertainty near the boundaries of the training domain.
## Summary
As noted above, the posterior predictive obtained using HMC is better able to capture uncertainty in the OOD regions. Both approaches are highly "confident" in the in-sample regions, which is consistent with a high log-likelihood on the training data; this result is expected.
Approximating the posterior via BBVI reduces our ability to estimate the epistemic uncertainty of the model.
That said, the samples obtained via HMC are unlikely to be fully representative of the BNN posterior. This may be the result of poorly chosen sampler or model settings, e.g. the step size, number of samples, or the assumed noise standard deviation.
From the above, we see that the mean-field Gaussian is unable to capture the epistemic uncertainty of the model. This agrees with what we know about BBVI: there is a trade-off between computational tractability and the ability to accurately approximate the true posterior.
There does not appear to be a clear positive relationship between the quality of the posterior approximation and the quality of the posterior predictive uncertainties.
BBVI is more computationally tractable than HMC. As mentioned earlier, this may come at the expense of accurately capturing the model's posterior when the underlying assumptions are inappropriate or overly simplistic.
| cdba211b3a4ba2ec6c72bdb3365fe4a3223b7def | 344,891 | ipynb | Jupyter Notebook | BNNs/BNN_Regression.ipynb | alexjlim/projects | d9bdd42d1598ee94c884a855bccbc00c8c3bc57a | [
"MIT"
]
| null | null | null | BNNs/BNN_Regression.ipynb | alexjlim/projects | d9bdd42d1598ee94c884a855bccbc00c8c3bc57a | [
"MIT"
]
| null | null | null | BNNs/BNN_Regression.ipynb | alexjlim/projects | d9bdd42d1598ee94c884a855bccbc00c8c3bc57a | [
"MIT"
]
| null | null | null | 270.290752 | 144,280 | 0.901833 | true | 22,844 | Qwen/Qwen-72B | 1. YES
2. YES | 0.857768 | 0.718594 | 0.616387 | __label__eng_Latn | 0.457101 | 0.270405 |
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D5_DimensionalityReduction/W1D5_Tutorial1.ipynb" target="_parent"></a>
# Neuromatch Academy: Week 1, Day 5, Tutorial 1
# Dimensionality Reduction: Geometric view of data
__Content creators:__ Alex Cayco Gajic, John Murray
__Content reviewers:__ Roozbeh Farhoudi, Matt Krause, Spiros Chavlis, Richard Gao, Michael Waskom
---
# Tutorial Objectives
In this notebook we'll explore how multivariate data can be represented in different orthonormal bases. This will help us build intuition that will be helpful in understanding PCA in the following tutorial.
Overview:
- Generate correlated multivariate data.
- Define an arbitrary orthonormal basis.
- Project the data onto the new basis.
```python
# @title Video 1: Geometric view of data
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="THu9yHnpq9I", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Video available at https://youtube.com/watch?v=THu9yHnpq9I
---
# Setup
```python
# Import
import numpy as np
import matplotlib.pyplot as plt
```
```python
# @title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
```
```python
# @title Helper functions
def get_data(cov_matrix):
"""
Returns a matrix of 1000 samples from a bivariate, zero-mean Gaussian.
Note that samples are sorted in ascending order for the first random variable
Args:
cov_matrix (numpy array of floats): desired covariance matrix
Returns:
(numpy array of floats) : samples from the bivariate Gaussian, with each
column corresponding to a different random
variable
"""
mean = np.array([0, 0])
X = np.random.multivariate_normal(mean, cov_matrix, size=1000)
indices_for_sorting = np.argsort(X[:, 0])
X = X[indices_for_sorting, :]
return X
def plot_data(X):
"""
Plots bivariate data. Includes a plot of each random variable, and a scatter
plot of their joint activity. The title indicates the sample correlation
calculated from the data.
Args:
X (numpy array of floats) : Data matrix each column corresponds to a
different random variable
Returns:
Nothing.
"""
fig = plt.figure(figsize=[8, 4])
gs = fig.add_gridspec(2, 2)
ax1 = fig.add_subplot(gs[0, 0])
ax1.plot(X[:, 0], color='k')
plt.ylabel('Neuron 1')
plt.title('Sample var 1: {:.1f}'.format(np.var(X[:, 0])))
ax1.set_xticklabels([])
ax2 = fig.add_subplot(gs[1, 0])
ax2.plot(X[:, 1], color='k')
plt.xlabel('Sample Number')
plt.ylabel('Neuron 2')
plt.title('Sample var 2: {:.1f}'.format(np.var(X[:, 1])))
ax3 = fig.add_subplot(gs[:, 1])
ax3.plot(X[:, 0], X[:, 1], '.', markerfacecolor=[.5, .5, .5],
markeredgewidth=0)
ax3.axis('equal')
plt.xlabel('Neuron 1 activity')
plt.ylabel('Neuron 2 activity')
plt.title('Sample corr: {:.1f}'.format(np.corrcoef(X[:, 0], X[:, 1])[0, 1]))
plt.show()
def plot_basis_vectors(X, W):
"""
Plots bivariate data as well as new basis vectors.
Args:
X (numpy array of floats) : Data matrix each column corresponds to a
different random variable
W (numpy array of floats) : Square matrix representing new orthonormal
basis each column represents a basis vector
Returns:
Nothing.
"""
plt.figure(figsize=[4, 4])
plt.plot(X[:, 0], X[:, 1], '.', color=[.5, .5, .5], label='Data')
plt.axis('equal')
plt.xlabel('Neuron 1 activity')
plt.ylabel('Neuron 2 activity')
plt.plot([0, W[0, 0]], [0, W[1, 0]], color='r', linewidth=3,
label='Basis vector 1')
plt.plot([0, W[0, 1]], [0, W[1, 1]], color='b', linewidth=3,
label='Basis vector 2')
plt.legend()
plt.show()
def plot_data_new_basis(Y):
"""
Plots bivariate data after transformation to new bases.
Similar to plot_data but with colors corresponding to projections onto
basis 1 (red) and basis 2 (blue). The title indicates the sample correlation
calculated from the data.
Note that samples are re-sorted in ascending order for the first
random variable.
Args:
Y (numpy array of floats): Data matrix in new basis each column
corresponds to a different random variable
Returns:
Nothing.
"""
fig = plt.figure(figsize=[8, 4])
gs = fig.add_gridspec(2, 2)
ax1 = fig.add_subplot(gs[0, 0])
ax1.plot(Y[:, 0], 'r')
plt.xlabel
plt.ylabel('Projection \n basis vector 1')
plt.title('Sample var 1: {:.1f}'.format(np.var(Y[:, 0])))
ax1.set_xticklabels([])
ax2 = fig.add_subplot(gs[1, 0])
ax2.plot(Y[:, 1], 'b')
plt.xlabel('Sample number')
plt.ylabel('Projection \n basis vector 2')
plt.title('Sample var 2: {:.1f}'.format(np.var(Y[:, 1])))
ax3 = fig.add_subplot(gs[:, 1])
ax3.plot(Y[:, 0], Y[:, 1], '.', color=[.5, .5, .5])
ax3.axis('equal')
plt.xlabel('Projection basis vector 1')
plt.ylabel('Projection basis vector 2')
plt.title('Sample corr: {:.1f}'.format(np.corrcoef(Y[:, 0], Y[:, 1])[0, 1]))
plt.show()
```
---
# Section 1: Generate correlated multivariate data
```python
# @title Video 2: Multivariate data
video = YouTubeVideo(id="jcTq2PgU5Vw", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Video available at https://youtube.com/watch?v=jcTq2PgU5Vw
To gain intuition, we will first use a simple model to generate multivariate data. Specifically, we will draw random samples from a *bivariate normal distribution*. This is an extension of the one-dimensional normal distribution to two dimensions, in which each $x_i$ is marginally normal with mean $\mu_i$ and variance $\sigma_i^2$:
\begin{align}
x_i \sim \mathcal{N}(\mu_i,\sigma_i^2).
\end{align}
Additionally, the joint distribution for $x_1$ and $x_2$ has a specified correlation coefficient $\rho$. Recall that the correlation coefficient is a normalized version of the covariance, and ranges between -1 and +1:
\begin{align}
\rho = \frac{\text{cov}(x_1,x_2)}{\sqrt{\sigma_1^2 \sigma_2^2}}.
\end{align}
For simplicity, we will assume that the mean of each variable has already been subtracted, so that $\mu_i=0$. The remaining parameters can be summarized in the covariance matrix, which for two dimensions has the following form:
\begin{equation*}
{\bf \Sigma} =
\begin{pmatrix}
\text{var}(x_1) & \text{cov}(x_1,x_2) \\
\text{cov}(x_1,x_2) &\text{var}(x_2)
\end{pmatrix}.
\end{equation*}
In general, $\bf \Sigma$ is a symmetric matrix with the variances $\text{var}(x_i) = \sigma_i^2$ on the diagonal, and the covariances on the off-diagonal. Later, we will see that the covariance matrix plays a key role in PCA.
## Exercise 1: Draw samples from a distribution
We have provided code to draw random samples from a zero-mean bivariate normal distribution. Throughout this tutorial, we'll imagine these samples represent the activity (firing rates) of two recorded neurons on different trials. Fill in the function below to calculate the covariance matrix given the desired variances and correlation coefficient. The covariance can be found by rearranging the equation above:
\begin{align}
\text{cov}(x_1,x_2) = \rho \sqrt{\sigma_1^2 \sigma_2^2}.
\end{align}
Use these functions to generate and plot data while varying the parameters. You should get a feel for how changing the correlation coefficient affects the geometry of the simulated data.
**Steps**
* Fill in the function `calculate_cov_matrix` to calculate the desired covariance.
* Generate and plot the data for $\sigma_1^2 =1$, $\sigma_2^2 =1$, and $\rho = .8$. Try plotting the data for different values of the correlation coefficient: $\rho = -1, -.5, 0, .5, 1$.
```python
help(plot_data)
help(get_data)
```
Help on function plot_data in module __main__:
plot_data(X)
Plots bivariate data. Includes a plot of each random variable, and a scatter
plot of their joint activity. The title indicates the sample correlation
calculated from the data.
Args:
X (numpy array of floats) : Data matrix each column corresponds to a
different random variable
Returns:
Nothing.
Help on function get_data in module __main__:
get_data(cov_matrix)
Returns a matrix of 1000 samples from a bivariate, zero-mean Gaussian.
Note that samples are sorted in ascending order for the first random variable
Args:
cov_matrix (numpy array of floats): desired covariance matrix
Returns:
(numpy array of floats) : samples from the bivariate Gaussian, with each
column corresponding to a different random
variable
```python
def calculate_cov_matrix(var_1, var_2, corr_coef):
"""
Calculates the covariance matrix based on the variances and correlation
coefficient.
Args:
var_1 (scalar) : variance of the first random variable
var_2 (scalar) : variance of the second random variable
corr_coef (scalar) : correlation coefficient
Returns:
(numpy array of floats) : covariance matrix
"""
#################################################
## TODO for students: calculate the covariance matrix
# Fill out function and remove
  # raise NotImplementedError("Student exercise: calculate the covariance matrix!")
#################################################
# Calculate the covariance from the variances and correlation
  cov = corr_coef*np.sqrt(var_1*var_2)
cov_matrix = np.array([[var_1, cov], [cov, var_2]])
return cov_matrix
###################################################################
## TO DO for students: generate and plot bivariate Gaussian data with variances of 1
## and a correlation coefficients of: 0.8
## repeat while varying the correlation coefficient from -1 to 1
###################################################################
np.random.seed(2020) # set random seed
variance_1 = 1
variance_2 = 1
corr_coef = 0.8
# Uncomment to test your code and plot
# cov_matrix = calculate_cov_matrix(variance_1, variance_2, corr_coef)
# X = get_data(cov_matrix)
# plot_data(X)
```
```python
# to_remove solution
def calculate_cov_matrix(var_1, var_2, corr_coef):
"""
Calculates the covariance matrix based on the variances and correlation
coefficient.
Args:
var_1 (scalar) : variance of the first random variable
var_2 (scalar) : variance of the second random variable
corr_coef (scalar) : correlation coefficient
Returns:
(numpy array of floats) : covariance matrix
"""
# Calculate the covariance from the variances and correlation
cov = corr_coef * np.sqrt(var_1 * var_2)
cov_matrix = np.array([[var_1, cov], [cov, var_2]])
return cov_matrix
np.random.seed(2020) # set random seed
variance_1 = 1
variance_2 = 1
corr_coef = 0.8
# Uncomment to test your code and plot
cov_matrix = calculate_cov_matrix(variance_1, variance_2, corr_coef)
X = get_data(cov_matrix)
with plt.xkcd():
plot_data(X)
```
---
# Section 2: Define a new orthonormal basis
```python
# @title Video 3: Orthonormal bases
video = YouTubeVideo(id="PC1RZELnrIg", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Video available at https://youtube.com/watch?v=PC1RZELnrIg
Next, we will define a new orthonormal basis of vectors ${\bf u} = [u_1,u_2]$ and ${\bf w} = [w_1,w_2]$. As we learned in the video, two vectors are orthonormal if:
1. They are orthogonal (i.e., their dot product is zero):
\begin{equation}
{\bf u\cdot w} = u_1 w_1 + u_2 w_2 = 0
\end{equation}
2. They have unit length:
\begin{equation}
||{\bf u} || = ||{\bf w} || = 1
\end{equation}
In two dimensions, it is easy to make an arbitrary orthonormal basis. All we need is a random vector ${\bf u}$, which we have normalized. If we now define the second basis vector to be ${\bf w} = [-u_2,u_1]$, we can check that both conditions are satisfied:
\begin{equation}
{\bf u\cdot w} = - u_1 u_2 + u_2 u_1 = 0
\end{equation}
and
\begin{equation}
{|| {\bf w} ||} = \sqrt{(-u_2)^2 + u_1^2} = \sqrt{u_1^2 + u_2^2} = 1,
\end{equation}
where we used the fact that ${\bf u}$ is normalized. So, with an arbitrary input vector, we can define an orthonormal basis, which we will write in matrix form by stacking the basis vectors as columns:
\begin{equation}
{{\bf W} } =
\begin{pmatrix}
u_1 & w_1 \\
u_2 & w_2
\end{pmatrix}.
\end{equation}
## Exercise 2: Find an orthonormal basis
In this exercise you will fill in the function below to define an orthonormal basis, given a single arbitrary 2-dimensional vector as an input.
**Steps**
* Modify the function `define_orthonormal_basis` to first normalize the first basis vector $\bf u$.
* Then complete the function by finding a basis vector $\bf w$ that is orthogonal to $\bf u$.
* Test the function using initial basis vector ${\bf u} = [3,1]$. Plot the resulting basis vectors on top of the data scatter plot using the function `plot_basis_vectors`. (For the data, use $\sigma_1^2 =1$, $\sigma_2^2 =1$, and $\rho = .8$).
```python
help(plot_basis_vectors)
```
Help on function plot_basis_vectors in module __main__:
plot_basis_vectors(X, W)
Plots bivariate data as well as new basis vectors.
Args:
X (numpy array of floats) : Data matrix each column corresponds to a
different random variable
W (numpy array of floats) : Square matrix representing new orthonormal
basis each column represents a basis vector
Returns:
Nothing.
```python
def define_orthonormal_basis(u):
"""
Calculates an orthonormal basis given an arbitrary vector u.
Args:
u (numpy array of floats) : arbitrary 2-dimensional vector used for new
basis
Returns:
(numpy array of floats) : new orthonormal basis
columns correspond to basis vectors
"""
#################################################
## TODO for students: calculate the orthonormal basis
# Fill out function and remove
  raise NotImplementedError("Student exercise: implement the orthonormal basis function")
#################################################
# normalize vector u
u = ...
  # calculate vector w that is orthogonal to u
w = ...
W = np.column_stack([u, w])
return W
np.random.seed(2020) # set random seed
variance_1 = 1
variance_2 = 1
corr_coef = 0.8
cov_matrix = calculate_cov_matrix(variance_1, variance_2, corr_coef)
X = get_data(cov_matrix)
u = np.array([3, 1])
# Uncomment and run below to plot the basis vectors
# W = define_orthonormal_basis(u)
# plot_basis_vectors(X, W)
```
```python
# to_remove solution
def define_orthonormal_basis(u):
"""
Calculates an orthonormal basis given an arbitrary vector u.
Args:
u (numpy array of floats) : arbitrary 2-dimensional vector used for new
basis
Returns:
(numpy array of floats) : new orthonormal basis
columns correspond to basis vectors
"""
# normalize vector u
u = u / np.sqrt(u[0] ** 2 + u[1] ** 2)
  # calculate vector w that is orthogonal to u
w = np.array([-u[1], u[0]])
W = np.column_stack((u, w))
return W
np.random.seed(2020) # set random seed
variance_1 = 1
variance_2 = 1
corr_coef = 0.8
cov_matrix = calculate_cov_matrix(variance_1, variance_2, corr_coef)
X = get_data(cov_matrix)
u = np.array([3, 1])
# Uncomment and run below to plot the basis vectors
W = define_orthonormal_basis(u)
with plt.xkcd():
plot_basis_vectors(X, W)
```
---
# Section 3: Project data onto new basis
```python
# @title Video 4: Change of basis
video = YouTubeVideo(id="Mj6BRQPKKUc", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Video available at https://youtube.com/watch?v=Mj6BRQPKKUc
Finally, we will express our data in the new basis that we have just found. Since $\bf W$ is orthonormal, we can project the data into our new basis using simple matrix multiplication :
\begin{equation}
{\bf Y = X W}.
\end{equation}
We will explore the geometry of the transformed data $\bf Y$ as we vary the choice of basis.
## Exercise 3: Define an orthonormal basis
In this exercise you will fill in the function below to define an orthonormal basis, given a single arbitrary vector as an input.
**Steps**
* Complete the function `change_of_basis` to project the data onto the new basis.
* Plot the projected data using the function `plot_data_new_basis`.
* What happens to the correlation coefficient in the new basis? Does it increase or decrease?
* What happens to variance?
```python
def change_of_basis(X, W):
"""
Projects data onto new basis W.
Args:
X (numpy array of floats) : Data matrix each column corresponding to a
different random variable
W (numpy array of floats) : new orthonormal basis columns correspond to
basis vectors
Returns:
(numpy array of floats) : Data matrix expressed in new basis
"""
#################################################
  ## TODO for students: project the data onto the new basis W
# Fill out function and remove
  raise NotImplementedError("Student exercise: implement change of basis")
#################################################
# project data onto new basis described by W
Y = ...
return Y
# Uncomment below to transform the data by projecting it into the new basis
# Y = change_of_basis(X, W)
# plot_data_new_basis(Y)
```
```python
# to_remove solution
def change_of_basis(X, W):
"""
Projects data onto new basis W.
Args:
X (numpy array of floats) : Data matrix each column corresponding to a
different random variable
W (numpy array of floats) : new orthonormal basis columns correspond to
basis vectors
Returns:
(numpy array of floats) : Data matrix expressed in new basis
"""
# project data onto new basis described by W
Y = np.matmul(X, W)
return Y
# Uncomment below to transform the data by projecting it into the new basis
Y = change_of_basis(X, W)
with plt.xkcd():
plot_data_new_basis(Y)
```
## Interactive Demo: Play with the basis vectors
To see what happens to the correlation as we change the basis vectors, run the cell below. The parameter $\theta$ controls the angle of $\bf u$ in degrees. Use the slider to rotate the basis vectors.
```python
# @title
# @markdown Make sure you execute this cell to enable the widget!
def refresh(theta=0):
u = [1, np.tan(theta * np.pi / 180)]
W = define_orthonormal_basis(u)
Y = change_of_basis(X, W)
plot_basis_vectors(X, W)
plot_data_new_basis(Y)
_ = widgets.interact(refresh, theta=(0, 90, 5))
```
## Questions
* What happens to the projected data as you rotate the basis?
* How does the correlation coefficient change? How does the variance of the projection onto each basis vector change?
* Are you able to find a basis in which the projected data is **uncorrelated**?
---
# Summary
- In this tutorial, we learned that multivariate data can be visualized as a cloud of points in a high-dimensional vector space. The geometry of this cloud is shaped by the covariance matrix.
- Multivariate data can be represented in a new orthonormal basis using the dot product. These new basis vectors correspond to specific mixtures of the original variables - for example, in neuroscience, they could represent different ratios of activation across a population of neurons.
- The projected data (after transforming into the new basis) will generally have a different geometry from the original data. In particular, taking basis vectors that are aligned with the spread of the cloud of points decorrelates the data (see the short numerical check below).
* These concepts - covariance, projections, and orthonormal bases - are key for understanding PCA, which will be our focus in the next tutorial.
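As a quick numerical check of the decorrelation point above, here is a minimal sketch; it reuses `X` and `change_of_basis` from this tutorial, and the use of covariance eigenvectors is an assumption that anticipates the PCA tutorial.
```python
# Basis vectors aligned with the spread of the data: eigenvectors of the sample covariance
evals, evecs = np.linalg.eigh(np.cov(X.T))
Y_aligned = change_of_basis(X, evecs)
# Off-diagonal entries are ~0: the projected variables are uncorrelated
print(np.round(np.cov(Y_aligned.T), 3))
```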
| 44c0108f26920b92aa0e7f90a5a92c241cf05d60 | 835,258 | ipynb | Jupyter Notebook | tutorials/W1D5_DimensionalityReduction/W1D5_Tutorial1.ipynb | neurorishika/course-content | d7fd2feabd662c8a32afc2837f45cc7f18e1f4aa | [
"CC-BY-4.0"
]
| null | null | null | tutorials/W1D5_DimensionalityReduction/W1D5_Tutorial1.ipynb | neurorishika/course-content | d7fd2feabd662c8a32afc2837f45cc7f18e1f4aa | [
"CC-BY-4.0"
]
| null | null | null | tutorials/W1D5_DimensionalityReduction/W1D5_Tutorial1.ipynb | neurorishika/course-content | d7fd2feabd662c8a32afc2837f45cc7f18e1f4aa | [
"CC-BY-4.0"
]
| null | null | null | 579.637752 | 194,276 | 0.945128 | true | 5,289 | Qwen/Qwen-72B | 1. YES
2. YES | 0.805632 | 0.72487 | 0.583979 | __label__eng_Latn | 0.95969 | 0.195109 |
# Linear Algebra
> ## Linearity
> ### superposition principle
>> a function $F(x)$ that satisfies the superposition principle is called a linear function
>> ### additivity
>>> ### $F(x_1 + x_2) = F(x_1) + F(x_2)$
>>
>> ### homogeneity
>>> ### $F(ax) = aF(x), \text{ for scalar } a.$
>> $\text{general solution} = \text{homogeneous solution} + \text{particular solution}$
>
> ## Algebra
> ### symbols that stand for numbers
>> ## Geometry of Linear Equations
# Method of Solution
> ## [Row picture](https://twlab.tistory.com/6?category=668741)
>> In space: each row (a dot product) defines a line or a plane
>> $
\begin{bmatrix}
2 & 5 \\ 1 & 3
\end{bmatrix}
\begin{bmatrix}
1 \\ 2
\end{bmatrix} =
\begin{bmatrix}
2 & 5 \\ 0 & 0
\end{bmatrix}
\begin{bmatrix}
1 \\ 2
\end{bmatrix} +
\begin{bmatrix}
0 & 0 \\ 1 & 3
\end{bmatrix}
\begin{bmatrix}
1 \\ 2
\end{bmatrix} =
\begin{bmatrix}
12 \\ 7
\end{bmatrix}
$
> ## [Column picture](https://twlab.tistory.com/6?category=668741)
>> In space: the right-hand side is a linear combination of the column vectors
>> $
\begin{bmatrix}
2 & 5 \\ 1 & 3
\end{bmatrix}
\begin{bmatrix}
1 \\ 2
\end{bmatrix} =
1\:
\begin{bmatrix}
2 \\ 1
\end{bmatrix} +
2 \:
\begin{bmatrix}
5 \\ 3
\end{bmatrix} =
\begin{bmatrix}
12 \\ 7
\end{bmatrix}
$
```python
import sympy as sm
import numpy as np
import matplotlib.pyplot as plt
%matplotlib widget
```
### Find the solution of the system.
> ### $
\begin{cases}
2x & - & y & & & = 0 \\
-x & + & 2y & - &z & = -1 \\
& - &3y & + & 4z & = 4
\end{cases}$
> #### $
\begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -3 & 4 \end{bmatrix} \:
\begin{bmatrix} x \\ y \\ z \end{bmatrix} \: = \:
\begin{bmatrix} 0 \\ -1 \\ 4 \end{bmatrix}
$
```python
fig = plt.figure()
ax = fig.add_subplot(projection = '3d')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_xlim(-10,10)
ax.set_ylim(-10,10)
ax.set_zlim(-10,10)
xi = np.linspace(-5,5,10)
yi = np.linspace(-5,5,10)
xi,yi = np.meshgrid(xi,yi)
ax.plot_surface(xi, 2*xi, yi,alpha=0.5)
ax.plot_surface(xi, yi, -xi+2*yi+1,alpha=0.5)
ax.plot_surface(xi, yi, 3/4*yi+1,alpha=0.5)
```
<mpl_toolkits.mplot3d.art3d.Poly3DCollection at 0x7fc9b82c7040>
<div style="display: inline-block;">
<div class="jupyter-widgets widget-label" style="text-align: center;">
Figure
</div>
</div>
```python
x,y,z = sm.symbols('x y z')
sm.solve([2*x-y,-x+2*y-z+1,-3*y+4*z-4],[x,y,z])
```
{x: 0, y: 0, z: 1}
```python
ax.scatter(0,0,1,marker='o',color='r',s = 100)
```
<mpl_toolkits.mplot3d.art3d.Path3DCollection at 0x7fc97dac4e50>
# Column picture
> ## $
x\:
\begin{bmatrix}
2 \\ -1 \\ 0
\end{bmatrix} +
y\:
\begin{bmatrix}
-1 \\ 2 \\ -3
\end{bmatrix} +
z\:
\begin{bmatrix}
0 \\ -1 \\ 4
\end{bmatrix} \:= \:
\begin{bmatrix}
0 \\ -1 \\ 4
\end{bmatrix}
$
```python
fig1 = plt.figure()
ax = fig1.add_subplot(projection='3d')
ax.set_xlim(-4,4)
ax.set_ylim(-4,4)
ax.set_zlim(-4,4)
ax.quiver(0,0,0,2,-1,0)
ax.quiver(0,0,0,-1,2,-3)
ax.quiver(0,0,0,0,-1,4)
ax.scatter(0,0,0,c='r')
ax.scatter(0,-1,4,c='r')
```
<mpl_toolkits.mplot3d.art3d.Path3DCollection at 0x7fc97c1049d0>
<div style="display: inline-block;">
<div class="jupyter-widgets widget-label" style="text-align: center;">
Figure
</div>
</div>
```python
M = sm.Matrix([[2,-1,0,0],[-1,2,-1,-1],[0,-3,4,4]])
sm.Matrix([(2,-1,0,0),(-1,2,-1,-1),(0,-3,4,4)])
sm.Matrix(((2,-1,0,0),(-1,2,-1,-1),(0,-3,4,4)))
M[:,:-1]
M[:,-1]
```
```python
sm.linsolve(sm.Matrix([[2,-1,0,0],[-1,2,-1,-1],[0,-3,4,4]]),(x,y,z))
```
```python
sm.linsolve((M[:,:-1],M[:,-1]),x,y,z)
```
| dee3075fff5179af7523d1b7e27c656475a4bdab | 236,094 | ipynb | Jupyter Notebook | python/Vectors/algebra.ipynb | karng87/nasm_game | a97fdb09459efffc561d2122058c348c93f1dc87 | [
"MIT"
]
| null | null | null | python/Vectors/algebra.ipynb | karng87/nasm_game | a97fdb09459efffc561d2122058c348c93f1dc87 | [
"MIT"
]
| null | null | null | python/Vectors/algebra.ipynb | karng87/nasm_game | a97fdb09459efffc561d2122058c348c93f1dc87 | [
"MIT"
]
| null | null | null | 659.480447 | 136,055 | 0.943844 | true | 1,441 | Qwen/Qwen-72B | 1. YES
2. YES | 0.872347 | 0.76908 | 0.670905 | __label__eng_Latn | 0.127156 | 0.397068 |
# TALENT Course 11
## Learning from Data: Bayesian Methods and Machine Learning
### York, UK, June 10-28, 2019
$% Some LaTeX definitions we'll use.
\newcommand{\pr}{\textrm{p}}
$
## Model selection (I)
### Bayesian evidence:
Please see the full version of the lecture notes here in [html](pub/model_selection-bs.html) and [pdf](pub/model_selection-minted.pdf) formats. The notes contain a somewhat adapted version of Ch. 4.1 in Sivia's book: "The story of Dr A and Prof B", which is extremely well written. It also contains a summary of Laplace's method for approximating evidence factors.
### Import of modules
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy import optimize
# Not really needed, but nicer plots
import seaborn as sns
sns.set()
sns.set_context("talk")
```
## What order polynomial?
Throughout the rest of this section, we will use data that was generated from a "true model" where x and y satisfy the following:
$$
y_i = x_i \sin(x_i)+\epsilon_i,
$$
where $0 \leq x_i \leq 3$ and the noise is drawn from a normal distribution $\epsilon_i \sim \mathcal{N}(0, \sigma_0)$. The values for 20 regularly spaced points with $\sigma_0=0.1$ are shown below.
```python
#------------------------------------------------------------
# Define our functional form
def true_func(x):
return np.sin(x) * x
def func(x, dy=0.1):
return np.random.normal(true_func(x), dy)
#------------------------------------------------------------
# select the (noisy) data
np.random.seed(0)
num_data = 20
x_max = 3
x = np.linspace(0, x_max, num_data+2)[1:-1]
sig0 = 0.1 # try 0.5 or higher or 0.01
y = func(x, sig0)
```
```python
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(1,1,1)
ax.errorbar(x, y, sig0, fmt='o');
ax.plot(x, true_func(x), color='red')
ax.set(xlabel='x',ylabel='y')
fig.tight_layout()
```
Assume that we have multiple models, with varying degrees of sophistication. In this example we use polynomials of different orders to represent models of increasing complexity and with an increasing number of model parameters.
> Our task is to find which model finds the most support in the given data.
It is clear that a more complicated model with more free parameters should be able to fit the data much more closely. But what is the evidence in the data for such a complicated model? Finding the answer to this question is a task for a Bayesian, and the problem is generally known as *Model selection*.
Below, we will use an approximate way of computing the Bayesian evidence, namely the Laplace method. In some cases one can also use conjugate priors to simplify the computation of the evidence factor. Or one can use certain sampling methods to compute the evidence numerically. The highlight will be the comparison of different models using the evidences to extract odds-ratios.
### The Model
See previous lecture notes, in particular [Parameter estimation III](../bayesian-parameter-estimation/Lecture_Th1a_rjf.pdf) for some more details.
In general, we're fitting an $M$-degree polynomial to data,
$$
y_M(x) = \sum_{i=0}^M \theta_i x^i
$$
where we use $\theta$ to denote our parameter vector of length $M+1$.
Assuming all the points are independent, we can find the full log likelihood by adding the individual likelihoods together:
$$
\begin{align}
\log p(D\mid\theta, I) &= -\frac{1}{2}\sum_{i=1}^N\left(\log(2\pi\sigma_0^2) + \frac{\left[ y_i - y_M(x_i;\theta)\right]^2}{\sigma_0^2}\right) \\
&= \text{constant} - \sum_{i=1}^N \frac{\left[ y_i - y_M(x_i;\theta)\right]^2}{2 \sigma_0^2}
\end{align}
$$
We often define the residuals
$$
R_i = \left[ y_i - y_M(x_i;\theta) \right]/\sigma_0,
$$
so that the relevant chi-square sum reads $- \sum_{i=1}^N R_i^2 / 2$.
```python
def residuals(theta, x=x, y=y, sigma0=sig0):
dy = y - np.polyval(theta,x)
return dy / sigma0
# Standard likelihood with Gaussian errors as specified
# uniform prior for theta
def log_likelihood(theta):
return -0.5 * np.sum(residuals(theta)**2)
```
### Max likelihood fits
We can maximize the likelihood to find $\theta$ within a frequentist paradigm. Let us start with a linear fit:
```python
degree = 1
theta_hat = np.polyfit(x, y, degree)
x_fit = np.linspace(0, x_max, 1000)
y_fit = np.polyval(theta_hat, x_fit)
```
Rather than just plotting this fit, we will compare several different models in the figure below.
```python
def fit_degree_n(degree, ax):
"""Fit a polynomial of order 'degree', return the chi-squared, and plot in axes 'ax'."""
theta_hat = np.polyfit(x, y, degree)
x_fit = np.linspace(0, x_max, 1000)
y_fit = np.polyval(theta_hat, x_fit)
ax.errorbar(x, y, sig0, fmt='o');
ax.plot(x_fit, true_func(x_fit), color='red', alpha=.5)
ax.text(0.03, 0.96, f"d = {degree}", transform=plt.gca().transAxes,
ha='left', va='top',
bbox=dict(ec='k', fc='w', pad=10))
ax.plot(x_fit, y_fit, '-k')
ax.set_xlabel('$x$')
ax.set_ylabel('$y$');
return -2. * log_likelihood(theta_hat) # chi_squared
#------------------------------------------------------------
# First figure: plot points with a linear fit
nrows=1; ncols=2;
fig = plt.figure(figsize=(6*ncols, 6*nrows))
num_plots = nrows * ncols
degrees = np.zeros(num_plots, dtype=int)
chi_sqs_dof = np.zeros(num_plots)
print('degree chi^2/dof')
for i in range(num_plots):
ax = fig.add_subplot(nrows, ncols, i+1)
degrees[i] = i
dof = len(x) - (degrees[i])
chi_sqs_dof[i] = fit_degree_n(i, ax) / dof
fig.tight_layout()
```
### Questions
* Change the degree of the polynomial that is used for the fit. Plot the fits next to each other in a multi-panel figure (e.g., make it 3 by 3).
* Compute the chi-squared value per degree of freedom (the ingredients are already included in the script) and plot that as a function of the degree of the polynomial. Is it decreasing, or is there a peak?
* For which degree polynomials would you say that you're underfitting the data?
* For which degree polynomials would you say that you're overfitting the data?
### Cross validation
This section will introduce the frequentist tool of cross-validation. This approach is used extensively within machine-learning as a way to handle overfitting and underfitting, bias and variance.
```python
# Select the cross-validation points
ncross=5
index_cv = np.random.choice(range(len(x)), ncross, replace=False)
x_cv=x[index_cv]
y_cv=y[index_cv]
```
```python
# The training data is then
x_train = np.delete(x,index_cv)
y_train = np.delete(y,index_cv)
```
```python
# Plot training and CV errors as a function of polynomial degree d
degree_max = 13
d = np.arange(0, degree_max+1)
training_err = np.zeros(d.shape)
crossval_err = np.zeros(d.shape)
fig,ax = plt.subplots(figsize=(8, 6))
for i in range(len(d)):
p = np.polyfit(x_train, y_train, d[i])
training_err[i] = np.sqrt(np.sum((np.polyval(p, x_train) - y_train) ** 2)
/ len(y_train))
crossval_err[i] = np.sqrt(np.sum((np.polyval(p, x_cv) - y_cv) ** 2)
/ len(y_cv))
ax.plot(d, crossval_err, '--k', label='cross-validation')
ax.plot(d, training_err, '-k', label='training')
ax.plot(d, sig0 * np.ones(d.shape), ':k')
ax.set_xlim(0, degree_max)
# You might need to change the y-scale if you make modifications to the training data
ax.set_ylim(0, 0.8)
ax.set_xlabel('polynomial degree')
ax.set_ylabel('rms error')
ax.legend(loc='best');
```
### Questions
* Can you see the transition from underfit to overfit in this figure?
* What would you say is the degree of polynomial that is supported by the data?
* Try changing the size of the cross-validation and training sets. Does the conclusions become more/less clear?
* Does the results change between different runs with the same number of CV samples? If so, why?
* K-fold cross validation is a popular variant of CV. It addresses some issues with the sensitivity to the actual choice of which data is used for training and validation. What do you think that it means, and what is the possible drawback if you have a computationally expensive model? (A minimal sketch is given after these questions.)
* Leave-one-out is another variant. For linear regression problems, this type of cross-validation can actually be performed without having to do multiple fits. What do you think that it means?
* It is common to divide the data into a training set, a cross-validation set, and a test set. What do you think is the purpose of having three different sets?
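For the K-fold question above, here is a minimal sketch of what such a scheme could look like; it assumes scikit-learn is available and reuses `x`, `y` and `degree_max` from the cells above.
```python
from sklearn.model_selection import KFold

kf = KFold(n_splits=5, shuffle=True, random_state=0)
cv_rms = []
for deg in range(degree_max + 1):
    fold_errs = []
    for train_idx, val_idx in kf.split(x):
        # Fit on K-1 folds, validate on the held-out fold
        p = np.polyfit(x[train_idx], y[train_idx], deg)
        resid = np.polyval(p, x[val_idx]) - y[val_idx]
        fold_errs.append(np.sqrt(np.mean(resid**2)))
    cv_rms.append(np.mean(fold_errs))
print(f"Lowest 5-fold CV error at degree {np.argmin(cv_rms)}")
```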
## Bayesian evidence
Let us try the Bayesian approach and actually compute the evidence for these different models. We will use the Laplace method for computing the norm of the posterior distribution (i.e. approximating it as a single Gaussian).
We use simple uniform priors for the model parameters:
$$
p(\theta_i|I) = \left\{
\begin{array}{ll}
\frac{1}{\theta_\mathrm{max} - \theta_\mathrm{min}} & \text{for } \theta_\mathrm{min} \leq \theta_i \leq \theta_\mathrm{max}, \\
0 & \text{otherwise},
\end{array}
\right.
$$
which means that the posterior will be
$$
p(\theta | D, I) = \frac{1}{(\theta_\mathrm{max} - \theta_\mathrm{min})^K} \frac{1}{\sqrt{(2\pi)\sigma_0^2}^N} \exp\left( -\chi^2 / 2\right),
$$
within the allowed prior region for the $K$ parameters and zero elsewhere.
Assuming that the peak of the Gaussian is located at $\theta^*$, well inside the prior region; we can easily approximate the integral
$$
Z_p = \int d^K \theta p(\theta | D, I),
$$
using Laplace's method (see lecture notes here in [html](pub/model_selection-bs.html) and [pdf](pub/model_selection-minted.pdf) formats). With this particular choice of prior, and again under the assumption that the cut at the edges does not change the integral over the multidimensional integral, we get
$$
Z_p \approx \frac{1}{(\theta_\mathrm{max} - \theta_\mathrm{min})^K} \exp\left( -\chi^2(\theta^*) / 2\right) \frac{\sqrt{(2\pi)^K}}{\sqrt{\det(\Sigma^{-1})}},
$$
where $\Sigma^{-1}_{ij} = \partial^2\chi^2/\partial \theta_i \partial \theta_j$ (i.e. the Hessian) evaluated at the maximum $\theta^*$. Note that we removed the constant factor $\sqrt{(2\pi)\sigma_0^2}^N$ since it will be the same for all models.
Note that for this linear regression problem we can get all these quantities ($\theta^*$, $\Sigma$) via linear algebra. See, e.g., Dick's [lecture notes](https://github.com/NuclearTalent/Bayes2019/blob/master/topics/bayesian-parameter-estimation/Lecture_Th1a_rjf.pdf) or Hogg's nice paper: [Data analysis recipes: Fitting a model to data](https://arxiv.org/abs/1008.4686). Below, we will use `numpy.polyfit` to extract the relevant quantities.
```python
# We use a uniform prior for all parameters in [-10,10]
theta_max = 10
theta_min = -10
prior_range = theta_max - theta_min
```
```python
degree_max = 6
evidence = np.zeros(degree_max+1)
print("Degree P* Best fit parameters: ")
for ideg,deg in enumerate(range(degree_max+1)):
theta_hat, Cov = np.polyfit(x, y, deg,cov='unscaled')
if not (np.all(theta_hat < theta_max) and np.all(theta_hat > theta_min)):
print("Outside of prior range")
P_star = np.exp(log_likelihood(theta_hat))
H=np.linalg.inv(Cov)
evidence[ideg] = P_star * np.sqrt((2*np.pi)**deg / np.linalg.det(H)) / prior_range**deg
print (f' {deg} {P_star:.2e} ',('{:5.2f} '*len(theta_hat)).format(*theta_hat))
```
Degree P* Best fit parameters:
0 1.76e-151 1.13
1 1.18e-76 0.50 0.38
2 2.69e-22 -0.59 2.26 -0.54
3 6.15e-03 -0.49 1.61 -0.44 0.21
4 6.44e-03 -0.02 -0.35 1.35 -0.25 0.18
5 7.62e-03 0.06 -0.48 0.88 -0.10 0.43 0.09
6 7.62e-03 0.00 0.03 -0.38 0.72 0.03 0.39 0.09
```python
d = np.arange(0, degree_max+1)
fig,ax = plt.subplots(figsize=(8, 6))
ax.plot(d,evidence,'o-')
ax.set_xlabel('polynomial degree')
ax.set_ylabel('evidence');
```
### Questions
* Can you see the transition from underfit to overfit in this figure?
* What would you say is the degree of polynomial that is supported by the data?
```python
# Odds ratio table
```
### Questions
* What happens when you change the number of the generated data?
* What happens when you change the range of the generated data?
* What happens when you change the error of the generated data?
#### Odds-ratios
Quoting the well-known paper by Trotta:
[Bayes in the sky: Bayesian inference and model selection in cosmology](https://arxiv.org/abs/0803.4089) we can quantify an empirical scale for evaluating the strength of evidence when comparing two models (the scale itself is tabulated in that paper).
Here, the ratio of the evidences of model $M_0$ and $M_1$ is given by,
\begin{equation}
\label{eq:Bayes_factor}
B_{01} = \frac{p(\mathrm{data} | M_0)}{p(\mathrm{data} | M_1)} \; ,
\end{equation}
which is also called _Bayes factor_. That means $|\ln B_{01}| \equiv |\ln p(\mathrm{data} | M_0) - \ln p(\mathrm{data} | M_1)|$ is the relevant quantity for estimating the strength of evidence of the two models (see the empirical scale tabulated in Trotta's paper).
### Questions
* Create a table of odds-ratios to select between pairs of the different-order polynomial models, given that the ratio of prior probabilities for the different models is unity. (One possible sketch is given below the empty cell.)
```python
```
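One possible way to fill in the cell above (a sketch; it assumes equal prior odds for all models and reuses the `evidence` array computed earlier):
```python
import pandas as pd

n_models = len(evidence)
# ln B_ij = ln Z_i - ln Z_j for every pair of polynomial degrees (equal prior odds assumed)
lnB = np.log(evidence[:, None]) - np.log(evidence[None, :])
odds_table = pd.DataFrame(np.round(lnB, 1),
                          index=[f"deg {i}" for i in range(n_models)],
                          columns=[f"deg {j}" for j in range(n_models)])
print(odds_table)
```
On the empirical scale quoted from Trotta, values of $|\ln B_{01}|$ greater than about 5 are usually read as strong evidence in favour of the better model.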
| 11708f3d8a52bda04365792f09be49bdaaf54ca0 | 175,518 | ipynb | Jupyter Notebook | topics/model-selection/model-selection_I.ipynb | asemposki/Bayes2019 | bea9dbe5205fbf5939a154b1c3773e6c3baf39a4 | [
"CC0-1.0"
]
| 13 | 2019-06-06T17:55:08.000Z | 2021-11-16T08:26:26.000Z | topics/model-selection/model-selection_I.ipynb | asemposki/Bayes2019 | bea9dbe5205fbf5939a154b1c3773e6c3baf39a4 | [
"CC0-1.0"
]
| 1 | 2019-06-14T16:17:36.000Z | 2019-06-15T04:41:39.000Z | topics/model-selection/model-selection_I.ipynb | asemposki/Bayes2019 | bea9dbe5205fbf5939a154b1c3773e6c3baf39a4 | [
"CC0-1.0"
]
| 17 | 2019-06-10T18:23:29.000Z | 2021-12-22T15:38:30.000Z | 266.339909 | 54,972 | 0.917815 | true | 3,732 | Qwen/Qwen-72B | 1. YES
2. YES | 0.795658 | 0.812867 | 0.646764 | __label__eng_Latn | 0.981308 | 0.340981 |
# Fitting Logistic Regression Models$
\newcommand{\cond}{{\mkern+2mu} \vert {\mkern+2mu}}
\newcommand{\SetDiff}{\mathrel{\backslash}}
\DeclareMathOperator{\BetaFunc}{Β}
\DeclareMathOperator{\GammaFunc}{Γ}
\DeclareMathOperator{\prob}{p}
\DeclareMathOperator{\cost}{J}
\DeclareMathOperator{\score}{V}
\DeclareMathOperator{\dcategorical}{Categorical}
\DeclareMathOperator{\ddirichlet}{Dirichlet}
$
"The logistic regression model arises from the desire to model the posterior probabilities of the $K$ classes via linear functions in $x$, while at the same time ensuring that they sum to one and remain in $[0, 1]$.", Hastie et al., 2009 (p. 119).
$$
\begin{align}
\log \frac{\prob(Y = k \cond X = x)}{\prob(Y = K \cond X = x)} = \beta_k^{\text{T}} x && \text{for } k = 1, \dotsc, K-1.
\end{align}
$$
The probability for the case $Y = K$ is held out, as the probabilities must sum to one, so there are only $K-1$ free variables.
Thus if there are two categories, there is just a single linear function.
This gives us that
$$
\begin{align}
\prob(Y = k \cond X = x) &= \frac{\exp(\beta_k^{\text{T}}x)}{1 + \sum_{i=1}^{K-1} \exp(\beta_i^{\text{T}}x)} & \text{for } k = 1, \dotsc, K-1 \\[3pt]
\prob(Y = K \cond X = x) &= \frac{1}{1 + \sum_{i=1}^{K-1} \exp(\beta_i^{\text{T}}x)}.
\end{align}
$$
Note that if we fix $\beta_K = 0$, we have the form
$$
\begin{align}
\prob(Y = k \cond X = x) &= \frac{\exp(\beta_k^{\text{T}}x)}{\sum_{i=1}^K \exp(\beta_i^{\text{T}}x)} & \text{for } k = 1, \dotsc, K.
\end{align}
$$
Then, writing $\beta = \{\beta_1^{\text{T}}, \dotsc, \beta_{K}^{\text{T}}\}$, we have that $\prob(Y = k \cond X = x) = \prob_k(x; \beta)$.
The log likelihood for $N$ observations is
$$
\ell(\beta) = \sum_{n=1}^N \log \prob_{y_n}(x_n; \beta).
$$
We can use the cost function
$$
\begin{align}
\cost(\beta)
&= -\frac{1}{N} \left[ \sum_{n=1}^N \sum_{k=1}^K 1[y_n = k] \log \prob_k(x_n; \beta) \right] \\
&= -\frac{1}{N} \left[ \sum_{n=1}^N \sum_{k=1}^K 1[y_n = k] \log \frac{\exp(\beta_k^{\text{T}}x_n)}{\sum_{i=1}^K \exp(\beta_i^{\text{T}}x_n)} \right] \\
&= -\frac{1}{N} \left[ \sum_{n=1}^N \sum_{k=1}^K 1[y_n = k] \left( \beta_k^{\text{T}}x_n - \log \sum_{i=1}^K \exp(\beta_i^{\text{T}}x_n) \right) \right].
\end{align}
$$
Thus the score function is
$$
\score_k(\beta) = \nabla_{\beta_k} \cost(\beta) = -\frac{1}{N} \left[ \sum_{n=1}^N x_n \big( 1[y_n = k] - \prob_k(x_n; \beta) \big) \right].
$$
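To make the derivation concrete, here is a minimal NumPy sketch of the softmax probabilities, the cost $\cost(\beta)$ and the score $\score_k(\beta)$; the toy data and array shapes are assumptions for illustration.
```python
import numpy as np

def softmax_probs(beta, X):
    """p_k(x_n; beta) for all n, k.  beta: (K, d), X: (N, d) -> (N, K)."""
    logits = X @ beta.T                          # beta_k^T x_n
    logits -= logits.max(axis=1, keepdims=True)  # subtract max for numerical stability
    expl = np.exp(logits)
    return expl / expl.sum(axis=1, keepdims=True)

def cost_and_score(beta, X, y, K):
    """Negative average log-likelihood J(beta) and its gradient w.r.t. each beta_k."""
    N = X.shape[0]
    P = softmax_probs(beta, X)            # (N, K)
    Y = np.eye(K)[y]                      # one-hot encoding of 1[y_n = k]
    J = -np.mean(np.sum(Y * np.log(P), axis=1))
    score = -(X.T @ (Y - P)).T / N        # (K, d), matches the score formula above
    return J, score

# Tiny example with made-up data
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3)); y = np.array([0, 1, 2, 1, 0, 2]); K = 3
beta = np.zeros((K, 3))
print(cost_and_score(beta, X, y, K)[0])   # equals log(3) when all probabilities are uniform
```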
| 0c523af19974039c6aad3465514dc0efb6eef906 | 4,078 | ipynb | Jupyter Notebook | GLM/Fitting Logistic Regression Models.ipynb | ConradScott/IJuliaSamples | 0f5d212dcf63bc795e79ac790aa3f1c9b010c89e | [
"Apache-2.0"
]
| null | null | null | GLM/Fitting Logistic Regression Models.ipynb | ConradScott/IJuliaSamples | 0f5d212dcf63bc795e79ac790aa3f1c9b010c89e | [
"Apache-2.0"
]
| null | null | null | GLM/Fitting Logistic Regression Models.ipynb | ConradScott/IJuliaSamples | 0f5d212dcf63bc795e79ac790aa3f1c9b010c89e | [
"Apache-2.0"
]
| null | null | null | 31.859375 | 255 | 0.500736 | true | 966 | Qwen/Qwen-72B | 1. YES
2. YES | 0.960361 | 0.879147 | 0.844298 | __label__eng_Latn | 0.442494 | 0.799921 |
# How Long Do Stars Live?
The Sun produces 400 trillion trillion watts of power - that's enough to cover our current energy use for 500,000 years! But where does all of that energy come from?
In the nineteenth century, this was a major question. Early astronomers assumed that the Sun's energy came from gravitational energy that was stored when a cloud of gas collapsed to form it. Let's estimate how much energy that would be. The gravitational energy of the Sun can be estimated from the following equation:
\begin{equation}
E = \frac{3}{5} \frac{G \, M^2}{R}
\end{equation}
Where G is the gravitational constant, M is the mass of the Sun, and R is the radius of the sun. Use Python below to calculate the gravitational energy of the Sun:
```python
# Example values in CGS units (assumed standard constants)
G = 6.674e-8     # gravitational constant [cm^3 g^-1 s^-2]
M = 1.989e33     # mass of the Sun [g]
R = 6.957e10     # radius of the Sun [cm]
E = (3/5) * G * M**2 / R   # gravitational binding energy [erg]
print(E, "erg")
```
The Sun gives off energy at a rate of $3.8\times10^{33}$ ergs s$^{-1}$. From the gravitational energy that you just calculated, how long could gravitational energy power the Sun's current brightness? Calculate that below:
```python
```
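A minimal sketch of this estimate, reusing the constants from the previous cell; the luminosity is the rate quoted above.
```python
L_sun = 3.8e33               # rate at which the Sun radiates energy [erg/s]
seconds_per_year = 3.154e7
t_gravity = E / L_sun        # how long gravitational energy alone could last [s]
print(t_gravity / seconds_per_year, "years")   # a few tens of millions of years
```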
We know that the Earth is 4.5678 *billion* years old - how does the number you just calculated compare?
It turns out that it took Albert Einstein to solve this problem with his famous equation, $E = m \, c^2$. The physics behind this equation is that mass and energy are two sides of the same coin. Mass can become energy and energy can become mass. Critically, for the Sun, this is exactly what happens when Hydrogen atoms undergo fusion to form Helium atoms. Let's check this. Look up the mass of the Hydrogen atom, and the mass of the Helium atom. Does four times the mass of the Hydrogen atom equal the mass of the Helium atom? Do your calculations below:
```python
```
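One way to do the comparison, using atomic masses in atomic mass units (standard table values):
```python
m_H = 1.007825    # mass of a hydrogen atom [atomic mass units]
m_He = 4.002602   # mass of a helium-4 atom [atomic mass units]
print("4 x H:", 4 * m_H, "u   vs   He:", m_He, "u")
print("missing mass:", 4 * m_H - m_He, "u")   # ~0.029 u does not end up in the helium atom
```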
If you did your calculations correctly, you should have found that 4 Hydrogen atoms are more massive than a single Helium atom. So where does that extra mass go? This is what powers the Sun! Also, see [here](https://www.youtube.com/watch?v=23e-SnQvCaA).
Let's figure out how long the Sun could live while fusing Hydrogen as its source of energy. Let's use Einstein's equation to calculate how much "mass energy" the Sun has stored up:
```python
# Hint: m should be the mass of the Sun, and c is the speed of light
```
In the same way as before, using the rate at which the Sun gives off energy currently, how long could the Sun live for? Calculate below:
```python
```
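A sketch covering the last two cells, assuming the CGS constants defined earlier in this notebook:
```python
c = 3.0e10                   # speed of light [cm/s]
E_mass = M * c**2            # total "mass energy" stored in the Sun [erg]
t_mass = E_mass / L_sun      # naive lifetime if all of the mass could become energy
print(E_mass, "erg")
print(t_mass / seconds_per_year, "years")   # roughly 1e13 years
```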
If we consider the fusion of hydrogen into helium above, only a small fraction of the total mass going into the reaction is converted into energy, with the remainder ending up in the helium nucleus, i.e.
\begin{equation}
4 \, \times \, ^1_1H \rightarrow \, ^4_2He + \epsilon.
\end{equation}
Calculate what fraction of the total mass is actually converted into energy from this reaction:
```python
```
Now, adjust your calculation for the lifetime of the Sun using this efficiency factor:
```python
```
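Putting the two previous steps together (a sketch; the 10% core fraction at the end is an illustrative assumption, since the text only says that part of the core takes part in fusion):
```python
efficiency = (4 * m_H - m_He) / (4 * m_H)        # ~0.7% of the fusing mass becomes energy
t_fusion = efficiency * E_mass / L_sun / seconds_per_year
print("efficiency:", efficiency)
print("lifetime:", t_fusion, "years")            # ~1e11 years
print("core (~10%) only:", 0.1 * t_fusion, "years")  # ~1e10 years, close to the accepted value
```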
In reality, only the central region of the Sun, the "core", is hot enough and dense enough to be fusing Hydrogen into Helium, and so in practice, only some of the core will be used to power the life of a star.
What happens when the core has turned all of its Hydrogen into Helium? Without anything powering it, the Sun will start to collapse in on itself, and eventually the density and temperatures get high enough to start fusing heavier elements into even heavier elements. If a star is massive enough (the Sun isn't!), this will continue until the core is made of iron. Why iron? Well, take a look at this plot:
Atoms can only fuse together if the resulting atom is more tightly bound than its component elements. Once you get up to iron (Fe), it turns out that all more massive elements are less tightly bound, and so fusion can't happen any more. Instead, the star collapses, bounces off of this dense iron core, and explodes in a *supernova*!
```python
```
| b49b798851b859351cf0a9d0e3c693913f3b7e40 | 6,092 | ipynb | Jupyter Notebook | BonusProblems/Module1/BonusChallenge3.ipynb | psheehan/CIERA-HS-Program | 76f7f0ff994e74e646fa34bbb41c314bf7526e9b | [
"Naumen",
"Condor-1.1",
"MS-PL"
]
| 2 | 2019-06-25T02:36:49.000Z | 2020-06-09T21:44:41.000Z | BonusProblems/Module1/BonusChallenge3.ipynb | psheehan/CIERA-HS-Program | 76f7f0ff994e74e646fa34bbb41c314bf7526e9b | [
"Naumen",
"Condor-1.1",
"MS-PL"
]
| null | null | null | BonusProblems/Module1/BonusChallenge3.ipynb | psheehan/CIERA-HS-Program | 76f7f0ff994e74e646fa34bbb41c314bf7526e9b | [
"Naumen",
"Condor-1.1",
"MS-PL"
]
| 7 | 2019-06-25T15:33:10.000Z | 2021-05-12T18:04:36.000Z | 35.418605 | 562 | 0.63132 | true | 910 | Qwen/Qwen-72B | 1. YES
2. YES | 0.944995 | 0.853913 | 0.806943 | __label__eng_Latn | 0.999695 | 0.713132 |
# Non Linear Regression Analysis
## Objectives
* Differentiate between linear and non-linear regression
* Use non-linear regression model in Python
If the data shows a curvy trend, then linear regression will not produce very accurate results when compared to a non-linear regression since linear regression presumes that the data is linear.
Let's learn about non linear regressions and apply an example in python. In this notebook, we fit a non-linear model to the datapoints corrensponding to China's GDP from 1960 to 2014.
<h2 id="importing_libraries">Importing required libraries</h2>
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
Although linear regression can do a great job at modeling some datasets, it cannot be used for all datasets. First recall how linear regression models a dataset. It models the linear relationship between a dependent variable y and the independent variables x. It has a simple equation, of degree 1, for example y = $2x$ + 3.
```python
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 2*(x) + 3
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
#plt.figure(figsize=(8,6))
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
```
Non-linear regression is a method to model the non-linear relationship between the independent variables $x$ and the dependent variable $y$. Essentially any relationship that is not linear can be termed as non-linear, and is usually represented by the polynomial of $k$ degrees (maximum power of $x$). For example:
$$ \ y = a x^3 + b x^2 + c x + d \ $$
Non-linear functions can have elements like exponentials, logarithms, fractions, and so on. For example: $$ y = \log(x)$$
We can have a function that's even more complicated such as :
$$ y = \log(a x^3 + b x^2 + c x + d)$$
Let's take a look at a cubic function's graph.
```python
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 1*(x**3) + 1*(x**2) + 1*x + 3
y_noise = 20 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
```
As you can see, this function has $x^3$ and $x^2$ as independent variables. Also, the graphic of this function is not a straight line over the 2D plane. So this is a non-linear function.
Some other types of non-linear functions are:
### Quadratic
$$ Y = X^2 $$
```python
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = np.power(x,2)
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
```
### Exponential
An exponential function with base c is defined by $$ Y = a + b c^X$$ where b ≠0, c > 0 , c ≠1, and x is any real number. The base, c, is constant and the exponent, x, is a variable.
```python
X = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
Y= np.exp(X)
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
```
### Logarithmic
The response $y$ is a result of applying the logarithmic map from the input $x$ to the output $y$. It is one of the simplest forms of **log()**: i.e. $$ y = \log(x)$$
Please consider that instead of $x$, we can use $X$, which can be a polynomial representation of the $x$ values. In general form it would be written as\
\begin{equation}
y = \log(X)
\end{equation}
```python
X = np.arange(-5.0, 5.0, 0.1)
Y = np.log(X)
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
```
### Sigmoidal/Logistic
$$ Y = a + \frac{b}{1+ c^{(X-d)}}$$
```python
X = np.arange(-5.0, 5.0, 0.1)
Y = 1-4/(1+np.power(3, X-2))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
```
<a id="ref2"></a>
# Non-Linear Regression example
For an example, we're going to try and fit a non-linear model to the datapoints corresponding to China's GDP from 1960 to 2014. We download a dataset with two columns, the first, a year between 1960 and 2014, the second, China's corresponding annual gross domestic income in US dollars for that year.
```python
import numpy as np
import pandas as pd
#downloading dataset
!wget -nv -O china_gdp.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/china_gdp.csv
df = pd.read_csv("china_gdp.csv")
df.head(10)
```
2021-09-15 10:54:35 URL:https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/china_gdp.csv [1218/1218] -> "china_gdp.csv" [1]
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Year</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1960</td>
<td>5.918412e+10</td>
</tr>
<tr>
<th>1</th>
<td>1961</td>
<td>4.955705e+10</td>
</tr>
<tr>
<th>2</th>
<td>1962</td>
<td>4.668518e+10</td>
</tr>
<tr>
<th>3</th>
<td>1963</td>
<td>5.009730e+10</td>
</tr>
<tr>
<th>4</th>
<td>1964</td>
<td>5.906225e+10</td>
</tr>
<tr>
<th>5</th>
<td>1965</td>
<td>6.970915e+10</td>
</tr>
<tr>
<th>6</th>
<td>1966</td>
<td>7.587943e+10</td>
</tr>
<tr>
<th>7</th>
<td>1967</td>
<td>7.205703e+10</td>
</tr>
<tr>
<th>8</th>
<td>1968</td>
<td>6.999350e+10</td>
</tr>
<tr>
<th>9</th>
<td>1969</td>
<td>7.871882e+10</td>
</tr>
</tbody>
</table>
</div>
```python
plt.figure(figsize=(8,5))
x_data, y_data = (df["Year"].values, df["Value"].values)
plt.plot(x_data, y_data, 'ro')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
```
### Choosing a model
From an initial look at the plot, we determine that the logistic function could be a good approximation,
since it has the property of starting with a slow growth, increasing growth in the middle, and then decreasing again at the end; as illustrated below:
```python
X = np.arange(-5.0, 5.0, 0.1)
Y = 1.0 / (1.0 + np.exp(-X))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
```
The formula for the logistic function is the following:
$$ \hat{Y} = \frac{1}{1+e^{-\beta_1(X-\beta_2)}}$$
$\beta_1$: Controls the curve's steepness,
$\beta_2$: Slides the curve on the x-axis.
### Building The Model
Now, let's build our regression model and initialize its parameters.
```python
def sigmoid(x, Beta_1, Beta_2):
y = 1 / (1 + np.exp(-Beta_1*(x-Beta_2)))
return y
```
Lets look at a sample sigmoid line that might fit with the data:
```python
beta_1 = 0.10
beta_2 = 1990.0
#logistic function
Y_pred = sigmoid(x_data, beta_1 , beta_2)
#plot initial prediction against datapoints
plt.plot(x_data, Y_pred*15000000000000.)
plt.plot(x_data, y_data, 'ro')
```
Our task here is to find the best parameters for our model. Lets first normalize our x and y:
```python
# Lets normalize our data
xdata =x_data/max(x_data)
ydata =y_data/max(y_data)
```
#### How we find the best parameters for our fit line?
we can use **curve_fit** which uses non-linear least squares to fit our sigmoid function, to data. Optimal values for the parameters so that the sum of the squared residuals of sigmoid(xdata, \*popt) - ydata is minimized.
popt are our optimized parameters.
```python
from scipy.optimize import curve_fit
popt, pcov = curve_fit(sigmoid, xdata, ydata)
#print the final parameters
print(" beta_1 = %f, beta_2 = %f" % (popt[0], popt[1]))
```
beta_1 = 690.451715, beta_2 = 0.997207
Now we plot our resulting regression model.
```python
x = np.linspace(1960, 2015, 55)
x = x/max(x)
plt.figure(figsize=(8,5))
y = sigmoid(x, *popt)
plt.plot(xdata, ydata, 'ro', label='data')
plt.plot(x,y, linewidth=3.0, label='fit')
plt.legend(loc='best')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
```
## Practice
Can you calculate the accuracy of our model?
```python
# write your code here
msk = np.random.rand(len(df)) < 0.8
train_x = xdata[msk]
test_x = xdata[~msk]
train_y = ydata[msk]
test_y = ydata[~msk]
# build the model using train set
popt, pcov = curve_fit(sigmoid, train_x, train_y)
# predict using test set
y_hat = sigmoid(test_x, *popt)
# evaluation
print("Mean absolute error: %.2f" % np.mean(np.absolute(y_hat - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((y_hat - test_y) ** 2))
from sklearn.metrics import r2_score
print("R2-score: %.2f" % r2_score(y_hat , test_y) )
```
/home/jupyterlab/conda/envs/python/lib/python3.6/site-packages/scipy/optimize/minpack.py:829: OptimizeWarning: Covariance of the parameters could not be estimated
category=OptimizeWarning)
Mean absolute error: 0.24
Residual sum of squares (MSE): 0.15
R2-score: -808961813431844989939715407872.00
<details><summary>Click here for the solution</summary>
```python
# split data into train/test
msk = np.random.rand(len(df)) < 0.8
train_x = xdata[msk]
test_x = xdata[~msk]
train_y = ydata[msk]
test_y = ydata[~msk]
# build the model using train set
popt, pcov = curve_fit(sigmoid, train_x, train_y)
# predict using test set
y_hat = sigmoid(test_x, *popt)
# evaluation
print("Mean absolute error: %.2f" % np.mean(np.absolute(y_hat - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((y_hat - test_y) ** 2))
from sklearn.metrics import r2_score
print("R2-score: %.2f" % r2_score(y_hat , test_y) )
```
</details>
<h2>Want to learn more?</h2>
IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: <a href="https://www.ibm.com/analytics/spss-statistics-software?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01">SPSS Modeler</a>
Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at <a href="https://www.ibm.com/cloud/watson-studio?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01">Watson Studio</a>
| 9d210f25b15a2babf8c093eaf3a52263471d8e20 | 155,629 | ipynb | Jupyter Notebook | MachineLearning_Basics/ML0101EN-Reg-NoneLinearRegression-py-v1.ipynb | niranjan-1/Data_Science_Projects | d6a7677b967f90a7881742cef8030a92e5148871 | [
"MIT"
]
| null | null | null | MachineLearning_Basics/ML0101EN-Reg-NoneLinearRegression-py-v1.ipynb | niranjan-1/Data_Science_Projects | d6a7677b967f90a7881742cef8030a92e5148871 | [
"MIT"
]
| null | null | null | MachineLearning_Basics/ML0101EN-Reg-NoneLinearRegression-py-v1.ipynb | niranjan-1/Data_Science_Projects | d6a7677b967f90a7881742cef8030a92e5148871 | [
"MIT"
]
| null | null | null | 193.568408 | 18,532 | 0.905583 | true | 3,461 | Qwen/Qwen-72B | 1. YES
2. YES | 0.91848 | 0.884039 | 0.811973 | __label__eng_Latn | 0.934099 | 0.724817 |
```python
!pip install pandas
import sympy as sym
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
sym.init_printing()
sym.__version__
```
Requirement already satisfied: pandas in c:\users\usuario\.conda\envs\sistdin\lib\site-packages (0.23.4)
Requirement already satisfied: numpy>=1.9.0 in c:\users\usuario\.conda\envs\sistdin\lib\site-packages (from pandas) (1.16.4)
Requirement already satisfied: python-dateutil>=2.5.0 in c:\users\usuario\.conda\envs\sistdin\lib\site-packages (from pandas) (2.8.2)
Requirement already satisfied: pytz>=2011k in c:\users\usuario\.conda\envs\sistdin\lib\site-packages (from pandas) (2021.1)
Requirement already satisfied: six>=1.5 in c:\users\usuario\.conda\envs\sistdin\lib\site-packages (from python-dateutil>=2.5.0->pandas) (1.16.0)
'1.4'
## Correlation
The correlation between the signals $f(t)$ and $g(t)$ is an operation that indicates how similar the two signals are to each other.
\begin{equation}
(f \; \circ \; g)(\tau) = h(\tau) = \int_{-\infty}^{\infty} f(t) \cdot g(t + \tau) \; dt
\end{equation}
Note that the correlation and the convolution have similar structures.
\begin{equation}
f(t) * g(t) = \int_{-\infty}^{\infty} f(\tau) \cdot g(t - \tau) \; d\tau
\end{equation}
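As a quick numerical illustration (a minimal sketch with arbitrary test arrays, not part of the original notebook), discrete correlation slides one signal over the other as-is, while convolution first time-reverses it:
```python
import numpy as np
f = np.array([1.0, 2.0, 3.0])   # arbitrary test signals
g = np.array([0.0, 1.0, 0.5])
print(np.correlate(f, g, mode='full'))   # correlation: g slides over f as-is
print(np.convolve(f, g, mode='full'))    # convolution: g is time-reversed first
# convolving with the reversed g reproduces the correlation
print(np.allclose(np.correlate(f, g, mode='full'),
                  np.convolve(f, g[::-1], mode='full')))
```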
## Periodic signals
The signal $y(t)$ is periodic if it satisfies the condition $y(t+nT)=y(t)$ for every integer $n$. In this case, $T$ is the period of the signal.
The sine signal is the purest oscillation that can be expressed mathematically. This signal arises when considering the projection of uniform circular motion.
## Fourier Series
If a set of pure oscillations is combined appropriately, as linear combinations of signals shifted and scaled in time and amplitude, any periodic signal could be recreated. This idea gives rise to the Fourier series.
\begin{equation}
y(t) = \sum_{n=0}^{\infty} C_n \cdot cos(n \omega_0 t - \phi_n)
\end{equation}
The signal $y(t)$ is equal to a combination of infinitely many cosine signals, each with an amplitude $C_n$, a frequency $n \omega_0$ and a phase shift $\phi_n$.
It can also be expressed as:
\begin{equation}
y(t) = \sum_{n=0}^{\infty} A_n \cdot cos(n \omega_0 t) + B_n \cdot sin(n \omega_0 t)
\end{equation}
The series is fully defined once the appropriate values of $A_n$ and $B_n$ are found for all values of $n$.
Note that:
- $A_n$ should be larger the more $y(t)$ "resembles" a cosine.
- $B_n$ should be larger the more $y(t)$ "resembles" a sine.
\begin{equation}
y(t) = \sum_{n=0}^{\infty} A_n \cdot cos(n \omega_0 t) + B_n \cdot sin(n \omega_0 t)
\end{equation}
\begin{equation}
(f \; \circ \; g)(\tau) = \int_{-\infty}^{\infty} f(t) \cdot g(t + \tau) \; dt
\end{equation}
\begin{equation}
(y \; \circ \; sin_n)(\tau) = \int_{-\infty}^{\infty} y(t) \cdot sin(n \omega_0(t + \tau)) \; dt
\end{equation}
Considering:
- $\tau=0$ so that no phase shifts are included.
- the signal $y(t)$ is periodic with period $T$.
\begin{equation}
(y \; \circ \; sin_n)(0) = \frac{1}{T} \int_{0}^{T} y(t) \cdot sin(n \omega_0 t) \; dt
\end{equation}
This expression can be interpreted as the resemblance of a signal $y(t)$ to the $sin$ signal with frequency $n \omega_0$, averaged over one period, with no phase shift of the sine.
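A quick numerical sketch of this idea (arbitrary sampling grid and test signal, not part of the original notebook): the averaged product singles out the matching harmonic.
```python
import numpy as np
T0 = 1.0
w0_num = 2*np.pi/T0
tt = np.linspace(0, T0, 1000, endpoint=False)
yy = np.sin(w0_num*tt)                       # test signal: one pure harmonic
for n in range(1, 4):
    # discrete version of (1/T) * integral of y(t)*sin(n*w0*t) over one period
    print(n, round(np.mean(yy*np.sin(n*w0_num*tt)), 4))
```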
Returning to the initial idea
\begin{equation}
y(t) = \sum_{n=0}^{\infty} A_n \cdot cos(n \omega_0 t) + B_n \cdot sin(n \omega_0 t)
\end{equation}
where, for $n \geq 1$ (the constant $n=0$ term is just the mean of $y(t)$, computed with a factor $\frac{1}{T}$),
\begin{equation}
A_n = \frac{2}{T} \int_{0}^{T} y(t) \cdot cos(n \omega_0 t) \; dt
\end{equation}
\begin{equation}
B_n = \frac{2}{T} \int_{0}^{T} y(t) \cdot sin(n \omega_0 t) \; dt
\end{equation}
The student is encouraged to find the relationship between the series above and the following alternative way of representing the Fourier series.
\begin{equation}
y(t) = \sum_{n=-\infty}^{\infty} C_n \cdot e^{j n \omega_0 t}
\end{equation}
where
\begin{equation}
C_n = \frac{1}{T} \int_{0}^{T} y(t) \cdot e^{-j n \omega_0 t} \; dt
\end{equation}
The values $C_n$ are the spectrum of the periodic signal $y(t)$ and are a representation in the frequency domain.
**Example # 1**
The signal $y(t) = sin(2 \pi t)$ is itself a pure oscillation with period $T=1$. (Note that in the code below the signal actually analysed is $y(t)=t^2$; the sine and other alternatives are left commented out.)
```python
# Define the signal y(t) to analyse (here y = t**2; a sine is left commented out below)
t = sym.symbols('t', real=True)
#T = sym.symbols('T', real=True)
T = 1
nw = sym.symbols('n', real=True)
delta = sym.DiracDelta(nw)
w0 = 2 * sym.pi / T
y = t**2
#vy = 4*sym.sin(w0*t + 0.5) - 10
# y = sym.sin(w0*t)
# y = (t-0.5)*(t-0.5)
y
```
Although the Fourier series summation includes infinitely many terms, only the components up to **n_max** will be taken.
```python
n_max = 5
y_ser = 0
C = 0
ns = range(-n_max,n_max+1)
espectro = pd.DataFrame(index = ns,
columns= ['C','C_np','C_real','C_imag','C_mag','C_ang'])
for n in espectro.index:
C_n = (1/T)*sym.integrate(y*sym.exp(-1j*n*w0*t), (t,0,T)).evalf()
C = C + C_n*delta.subs(nw,nw-n)
y_ser = y_ser + C_n*sym.exp(1j*n*w0*t)
espectro['C'][n]=C_n
C_r = float(sym.re(C_n))
C_i = float(sym.im(C_n))
espectro['C_real'][n] = C_r
espectro['C_imag'][n] = C_i
espectro['C_np'][n] = complex(C_r + 1j*C_i)
espectro['C_mag'][n] = np.absolute(espectro['C_np'][n])
espectro['C_ang'][n] = np.angle(espectro['C_np'][n])
espectro
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>C</th>
<th>C_np</th>
<th>C_real</th>
<th>C_imag</th>
<th>C_mag</th>
<th>C_ang</th>
</tr>
</thead>
<tbody>
<tr>
<th>-5</th>
<td>0.00202642367284676 - 0.0318309886183791*I</td>
<td>(0.0020264236728467556-0.03183098861837907j)</td>
<td>0.00202642</td>
<td>-0.031831</td>
<td>0.0318954</td>
<td>-1.50722</td>
</tr>
<tr>
<th>-4</th>
<td>0.00316628698882306 - 0.0397887357729738*I</td>
<td>(0.0031662869888230555-0.039788735772973836j)</td>
<td>0.00316629</td>
<td>-0.0397887</td>
<td>0.0399145</td>
<td>-1.49139</td>
</tr>
<tr>
<th>-3</th>
<td>0.00562895464679654 - 0.0530516476972984*I</td>
<td>(0.005628954646796543-0.05305164769729844j)</td>
<td>0.00562895</td>
<td>-0.0530516</td>
<td>0.0533494</td>
<td>-1.46509</td>
</tr>
<tr>
<th>-2</th>
<td>0.0126651479552922 - 0.0795774715459477*I</td>
<td>(0.012665147955292222-0.07957747154594767j)</td>
<td>0.0126651</td>
<td>-0.0795775</td>
<td>0.080579</td>
<td>-1.41297</td>
</tr>
<tr>
<th>-1</th>
<td>0.0506605918211689 - 0.159154943091895*I</td>
<td>(0.05066059182116889-0.15915494309189535j)</td>
<td>0.0506606</td>
<td>-0.159155</td>
<td>0.167023</td>
<td>-1.26263</td>
</tr>
<tr>
<th>0</th>
<td>0.333333333333333</td>
<td>(0.3333333333333333+0j)</td>
<td>0.333333</td>
<td>0</td>
<td>0.333333</td>
<td>0</td>
</tr>
<tr>
<th>1</th>
<td>0.0506605918211689 + 0.159154943091895*I</td>
<td>(0.05066059182116889+0.15915494309189535j)</td>
<td>0.0506606</td>
<td>0.159155</td>
<td>0.167023</td>
<td>1.26263</td>
</tr>
<tr>
<th>2</th>
<td>0.0126651479552922 + 0.0795774715459477*I</td>
<td>(0.012665147955292222+0.07957747154594767j)</td>
<td>0.0126651</td>
<td>0.0795775</td>
<td>0.080579</td>
<td>1.41297</td>
</tr>
<tr>
<th>3</th>
<td>0.00562895464679654 + 0.0530516476972984*I</td>
<td>(0.005628954646796543+0.05305164769729844j)</td>
<td>0.00562895</td>
<td>0.0530516</td>
<td>0.0533494</td>
<td>1.46509</td>
</tr>
<tr>
<th>4</th>
<td>0.00316628698882306 + 0.0397887357729738*I</td>
<td>(0.0031662869888230555+0.039788735772973836j)</td>
<td>0.00316629</td>
<td>0.0397887</td>
<td>0.0399145</td>
<td>1.49139</td>
</tr>
<tr>
<th>5</th>
<td>0.00202642367284676 + 0.0318309886183791*I</td>
<td>(0.0020264236728467556+0.03183098861837907j)</td>
<td>0.00202642</td>
<td>0.031831</td>
<td>0.0318954</td>
<td>1.50722</td>
</tr>
</tbody>
</table>
</div>
The signal reconstructed with **n_max** components
```python
y_ser
```
```python
plt.rcParams['figure.figsize'] = 7, 2
#g1 = sym.plot(y, (t,0,1), ylabel=r'Amp',show=False,line_color='blue',legend=True, label = 'y(t) original')
#g2 = sym.plot(sym.re(y_ser), (t,-1,2), ylabel=r'Amp',show=False,line_color='red',legend=True, label = 'y(t) reconstruida')
g1 = sym.plot(y, (t,0,1), ylabel=r'Amp',show=False,line_color='blue')
g2 = sym.plot(sym.re(y_ser), (t,-1,2), ylabel=r'Amp',show=False,line_color='red')
g1.extend(g2)
g1.show()
```
```python
C
```
```python
plt.rcParams['figure.figsize'] = 7, 4
plt.stem(espectro.index,espectro['C_mag'])
```
**Exercise**
Use the following functions to define one period of a periodic signal with period $T=1$:
\begin{equation}
y_1(t) = \begin{cases}
-1 & 0 \leq t < 0.5 \\
1 & 0.5 \leq t < 1
\end{cases}
\end{equation}
\begin{equation}
y_2(t) = t
\end{equation}
\begin{equation}
y_3(t) = 3 sin(2 \pi t)
\end{equation}
Vary the number of components used to reconstruct each function and analyse the reconstruction obtained and the values of $C_n$.
```python
```
```python
```
| 31459132614647fb755fdfa5b6df2ccbece0f4a4 | 87,161 | ipynb | Jupyter Notebook | 04_Series_de_Fourier.ipynb | pierrediazp/Se-ales_y_Sistemas | b14bdaf814b0643589660078ddd39b5cdf86b659 | [
"MIT"
]
| null | null | null | 04_Series_de_Fourier.ipynb | pierrediazp/Se-ales_y_Sistemas | b14bdaf814b0643589660078ddd39b5cdf86b659 | [
"MIT"
]
| null | null | null | 04_Series_de_Fourier.ipynb | pierrediazp/Se-ales_y_Sistemas | b14bdaf814b0643589660078ddd39b5cdf86b659 | [
"MIT"
]
| null | null | null | 124.338088 | 19,760 | 0.832115 | true | 3,705 | Qwen/Qwen-72B | 1. YES
2. YES | 0.658418 | 0.868827 | 0.572051 | __label__spa_Latn | 0.318346 | 0.167395 |
# An interactive introduction to polyphase filterbanks
**Author:** Danny Price, UC Berkeley
**License:** [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/)
```python
%matplotlib inline
```
```python
# Import required modules
import numpy as np
import scipy
from scipy.signal import firwin, freqz, lfilter
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("white")
def db(x):
""" Convert linear value to dB value """
return 10*np.log10(x)
```
## Introduction
If you've opened up this notebook, you're probably trying to learn about polyphase filterbanks and/or spectrometers and found it all a bit confusing.
This notebook is here to help.
To get the most out of this notebook, you should supplement it with a more rigorous overview of the PFB and spectrometers. I've written up a [chapter on spectrometers in radio astronomy](http://arxiv.org/abs/1607.03579) which can serve as your noble steed. There is quite a bit of background knowledge about digital signal processing (DSP) that I'm not going to present -- head on over to the free [DSP Guide](http://www.dspguide.com/ch1.htm) by Stephen Smith if you need a refresher.
## What is a PFB?
A polyphase filterbank (PFB) is simply an efficient computational structure used to form a bank of filters. All that is required to form a PFB is to place a "prototype polyphase filter structure" in front of an FFT. The frontend enhances the filter response of the FFT, making it better by using time samples and filter coefficients.
That's it. For more information, have a read of [this chapter](http://arxiv.org/abs/1607.03579). As a first call though, let's look at polyphase decomposition, and how to do it using `Numpy`.
## Polyphase decomposition
Polyphase decomposition is at the heart of the PFB technique, and is just decomposing a signal $x(n)$ into multiple 'phases' or 'branches'. For example, even and odd decomposition is just:
$$\begin{eqnarray}
x_{even}(n') & = & \left\{ x(0),x(2),x(4),...\right\} \\
x_{odd}(n') & = & \left\{ x(1),x(3),x(5),...\right\} .
\end{eqnarray}$$
More generally, we can decompose $x(n)$ into $P$ phases, denoted $x_p(n')$. Below is a simple example of polyphase decomposition using numpy:
```python
x = np.array([1,2,3,4,5,6,7,8,9,10])
P = 5
x_p = x.reshape((len(x)//P, P)).T
print (x_p)
```
[[ 1 6]
[ 2 7]
[ 3 8]
[ 4 9]
[ 5 10]]
### The PFB frontend
Next, let's have a look at the polyphase frontend. This sounds fancy but isn't all that complicated. The purpose of the PFB frontend is to convert your set of $P$ polyphase branches $x_p(n')$ into a set of subfiltered signals, $y_p(n')$
$$
\begin{equation}
y_{p}(n')=\sum_{m=0}^{M-1}h_{p}(m)x_{p}(n'-m),
\end{equation}
$$
where $h_p$ are filter coefficients that have been divided between the $P$ branches.
Here is a diagram showing the operations performed by the frontend, for $M=3$ taps:
The diagram shows an input signal being divided into $M$ taps, each with $P$ points. Within each tap, the signal is multiplied by the filter coefficients, then a sum across taps is performed. After this, another $P$ points are read, and the signals propagate left-to-right into the next tap (following the arrows).
Not 100% sure you really understand that diagram? Well, let's try and code it up, and hopefully get a better handle on what's happening. Here's a simple implementation:
```python
def pfb_fir_frontend(x, win_coeffs, M, P):
W = int(x.shape[0] / M / P)
x_p = x.reshape((W*M, P)).T
h_p = win_coeffs.reshape((M, P)).T
x_summed = np.zeros((P, M * W - M))
for t in range(0, M*W-M):
x_weighted = x_p[:, t:t+M] * h_p
x_summed[:, t] = x_weighted.sum(axis=1)
return x_summed.T
```
Wow. Only 9 lines required! This is short enough for us to go through line by line:
1. Function declaration. The frontend reads in:
* an input signal x (a numpy array). For this simple code, x has to be a multiple of $M*P$
* some window coefficients,
* an integer M representing the number of taps
* an integer P representing the number of branches
2. Compute the number of windows of length $P$ there are in the data.
3. We apply polyphase decomposition on $x(n)$ to get a set of branches $x_p(n')$.
4. We also divide the window coefficients into branches.
5. Instantiate an empty array to store the signal $y_p(n')$. This is a little shorter than the original $x_p(n')$ as it takes a few cycles for the taps to fill up with data.
6. Now we start a loop, so we can multiply through each time step by the filter coefficients.
7. This is the magic line: we take $M$ samples from each branch, $x_p(n')$, and multiply it through by the filter coefficients. We need to march through the entire `x_p` array, hence the loop.
8. Now we sum over taps.
9. Return the data, with a transpose so that axes are returned as (time, branch).
Let's apply this to some example data. To do that, we'll need a function to generate window coefficients. Fortunately, this is built in to `scipy`. We can make a simple function to generate a `sinc` of the right length and multiply it through by the window of our choice:
```python
def generate_win_coeffs(M, P, window_fn="hamming"):
win_coeffs = scipy.signal.get_window(window_fn, M*P)
sinc = scipy.signal.firwin(M * P, cutoff=1.0/P, window="rectangular")
win_coeffs *= sinc
return win_coeffs
```
```python
M = 8
P = 32
x = np.sin(np.arange(0, M*P*10) / np.pi)
win_coeffs = generate_win_coeffs(M, P, window_fn="hamming")
plt.subplot(2,1,1)
plt.title("Time samples")
plt.plot(x)
plt.xlim(0, M*P*3)
plt.subplot(2,1,2)
plt.title("Window function")
plt.plot(win_coeffs)
plt.xlim(0, M*P)
plt.tight_layout(pad=1.0)
plt.show()
```
Now we are ready to try applying `pfb_fir_frontend` to our data:
```python
y_p = pfb_fir_frontend(x, win_coeffs, M, P)
print("n_taps: %i" % M)
print("n_branches: %i" % P)
print("Input signal shape: %i" % x.shape)
print("Window shape: %i" % win_coeffs.shape)
print("Output data shape: %s" % str(y_p.shape))
```
n_taps: 8
n_branches: 32
Input signal shape: 2560
Window shape: 256
Output data shape: (72, 32)
And we can plot the output `y_p` using `imshow`:
```python
plt.figure()
plt.imshow(y_p)
plt.xlabel("Branch")
plt.ylabel("Time")
plt.figure()
plt.plot(y_p[0], label="p=0")
plt.plot(y_p[1], label="p=1")
plt.plot(y_p[2], label="p=2")
plt.xlabel("Time sample, $n'$")
plt.legend()
plt.show()
```
Don't spend too much time trying to interpret this! The frontend only becomes interesting when you follow it up with an FFT.
## Polyphase filterbank
Now that we have a PFB frontend, all we need to do is add on an FFT. Here is the code to implement a simple PFB in python:
```python
def fft(x_p, P, axis=1):
return np.fft.rfft(x_p, P, axis=axis)
def pfb_filterbank(x, win_coeffs, M, P):
x_fir = pfb_fir_frontend(x, win_coeffs, M, P)
x_pfb = fft(x_fir, P)
return x_pfb
```
The first function is just a helper, and uses the in-built `numpy.fft` library. We apply the FFT over a given axis, which in this case is branches (the number of branches == length of FFT).
The actual `pfb_filterbank` function is now just two lines long: apply a `pfb_fir_frontend` to the data, and then apply an `fft` to the output. The final step is taking the output of the `pfb_filterbank`, squaring it, and taking an average over time.
Finally, here's a function that implements a spectrometer:
```python
def pfb_spectrometer(x, n_taps, n_chan, n_int, window_fn="hamming"):
M = n_taps
P = n_chan
# Generate window coefficients
win_coeffs = generate_win_coeffs(M, P, window_fn)
# Apply frontend, take FFT, then take power (i.e. square)
x_fir = pfb_fir_frontend(x, win_coeffs, M, P)
x_pfb = fft(x_fir, P)
x_psd = np.abs(x_pfb)**2
# Trim array so we can do time integration
x_psd = x_psd[:np.round(x_psd.shape[0]//n_int)*n_int]
# Integrate over time, by reshaping and summing over axis (efficient)
x_psd = x_psd.reshape(x_psd.shape[0]//n_int, n_int, x_psd.shape[1])
x_psd = x_psd.mean(axis=1)
return x_psd
```
Let's try it out by generating some data
```python
M = 4 # Number of taps
P = 1024 # Number of 'branches', also fft length
W = 1000 # Number of windows of length M*P in input time stream
n_int = 2 # Number of time integrations on output data
# Generate a test data steam
samples = np.arange(M*P*W)
noise = np.random.normal(loc=0.5, scale=0.1, size=M*P*W)
freq = 1
amp = 0.02
cw_signal = amp * np.sin(samples * freq)
data = noise + cw_signal
```
Which we can have a quick look at first:
```python
plt.subplot(3,1,1)
plt.title("Noise")
plt.plot(noise[:250])
plt.subplot(3,1,2)
plt.title("Sin wave")
plt.plot(cw_signal[:250])
plt.subplot(3,1,3)
plt.title("Noise + sin")
plt.plot(data[:250])
plt.xlabel("Time samples")
plt.tight_layout()
plt.show()
```
Now, let's compute the spectrum and plot it over frequency vs. time using `imshow`
```python
X_psd = pfb_spectrometer(data, n_taps=M, n_chan=P, n_int=2, window_fn="hamming")
plt.imshow(db(X_psd), cmap='viridis', aspect='auto')
plt.colorbar()
plt.xlabel("Channel")
plt.ylabel("Time")
plt.show()
```
This plot over frequency vs. time is known as a *waterfall plot*. At the moment, we can't see the sin wave we put in there. If we integrate longer, the noise integrates down as $\sqrt{t}$ (see the radiometer equation), whereas the sin wave is coherent. Using a longer time integration:
```python
X_psd2 = pfb_spectrometer(data, n_taps=M, n_chan=P, n_int=1000, window_fn="hamming")
plt.plot(db(X_psd[0]), c='#cccccc', label='short integration')
plt.plot(db(X_psd2[1]), c='#cc0000', label='long integration')
plt.ylim(-50, -30)
plt.xlim(0, P/2)
plt.xlabel("Channel")
plt.ylabel("Power [dB]")
plt.legend()
plt.show()
```
### Testing leakage with sin waves
Is the PFB's spectral leakage as good as people claim? We can test this out by sweeping a sine wave input and looking at the response of a few channels as a function of sine wave period.
```python
M, P, W = 6, 512, 256 # taps, channels, windows
period = np.linspace(0, 0.025, 101)
chan0_val = []
chan1_val = []
chan2_val = []
for p in period:
t = np.arange(0, M*P*W)
x = np.sin(t * p) + 0.001
X_psd = pfb_spectrometer(x, n_taps=M, n_chan=P, n_int=256, window_fn="hamming")
chan0_val.append(X_psd[0, 0])
chan1_val.append(X_psd[0, 1])
chan2_val.append(X_psd[0, 2])
plt.plot(period, db(chan0_val))
plt.plot(period, db(chan1_val))
plt.plot(period, db(chan2_val))
plt.xlim(period[0], period[-1])
plt.ylabel("Power [dB]")
plt.xlabel("Input sine wave period")
plt.show()
```
## Where to go from here
The PFB code in this notebook is quite simple, with no bells and whistles. As an exercise, you could:
* add some error handling (e.g. what happens when the time stream isn't a multiple of $M\times P$?),
* make it read from a file and output to another file
* make it work on datasets larger than your computer's memory
* Implement some more fancy features like oversampling
* Implement an inverse PFB
* port it to Julia, Cythonize it, put it in a docker container, print out a figure and stick it on your macbook.
* etcetera.
If you do something that you think would make a great example, please push it to this github repository!
### Open source codes
Are you about to build a new instrument that needs a PFB spectrometer? The good news is that you probably don't have to write your own highly efficient PFB implementation, because people have done it for you. Here's a selection of codes:
* The [CASPER](https://casper.berkeley.edu/wiki/Getting_Started) collaboration provide a FPGA-based PFB and a design environment for making FPGA-based instruments for radio astronomy.
* Karel Adámek, Jan Novotný and Wes Armour wrote very efficient PFB codes for CPU, GPU and Intel Phi, available on [github](https://github.com/wesarmour/astro-accelerate) and detailed in [arXiv](http://arxiv.org/abs/1511.03599)
* Jayanth Chennamangalam created a PFB GPU code, which is used in the [VEGAS spectrometer](http://www.gb.nrao.edu/vegas/). It is available on [github](https://github.com/jayanthc/grating/) and detailed on [arXiv](http://arxiv.org/abs/1411.0436).
### Citing
If you find this notebook useful, please consider referencing the accompanying chapter in your thesis / paper / postcard / sticky note:
Danny C. Price, *Spectrometers and Polyphase Filterbanks in Radio Astronomy*, 2016. Available online at: http://arxiv.org/abs/1607.03579
| 04231a605733441273904fac744a4c109c0b9892 | 295,109 | ipynb | Jupyter Notebook | pfb_introduction.ipynb | telegraphic/pfb_introduction | 8a3c62dcc2c1ff9e4165c0f67c0702f95514d84f | [
"CC-BY-4.0"
]
| 32 | 2017-01-26T22:52:22.000Z | 2022-03-30T21:20:15.000Z | pfb_introduction.ipynb | evanmayer/pfb_introduction | 8a3c62dcc2c1ff9e4165c0f67c0702f95514d84f | [
"CC-BY-4.0"
]
| 5 | 2019-11-10T11:02:42.000Z | 2020-09-22T01:04:22.000Z | pfb_introduction.ipynb | evanmayer/pfb_introduction | 8a3c62dcc2c1ff9e4165c0f67c0702f95514d84f | [
"CC-BY-4.0"
]
| 16 | 2016-10-23T02:53:12.000Z | 2021-08-06T13:58:17.000Z | 492.669449 | 97,480 | 0.941381 | true | 3,610 | Qwen/Qwen-72B | 1. YES
2. YES | 0.822189 | 0.805632 | 0.662382 | __label__eng_Latn | 0.984838 | 0.377266 |
# Taylor problem 5.50
last revised: 21-Jan-2019 by Dick Furnstahl [[email protected]]
Here we are exploring the Fourier series for a waveform defined to be odd about the origin, so $f(-t) = -f(t)$, with period $\tau$. That means that the integrand for the $a_m$ coefficients is odd and so all of the corresponding integrals vanish.
The particular wave of interest here is a sawtooth, such that in the interval $-\tau/2 \leq t \leq \tau/2$, the function takes the form:
$\newcommand{\fmax}{f_{\textrm{max}}}$
$\begin{align}
f(t) = \left\{ \begin{array}{ll}
\fmax(t/\tau) & t < 0 \\
\fmax(t/\tau) & t > 0
\end{array}
\right.
\end{align}$
(we wrote it this way so it looks like the function for problem 5.49).
As already note, the $a_m$ coefficients are zero, so we only calculate the $b_m$ coefficients. Here $\omega \equiv 2\pi/\tau$. The result is:
$\begin{align}
b_m = \frac{2}{\tau} \int_{-\tau/2}^{\tau/2} \sin(m\omega t) f(t)\, dt =
% 2 \fmax \int_0^1 \sin(m\pi t) t\, dt
% &= - \frac{2\fmax}{(m\pi)^2)}\left[\sin(m\pi t)\right]^1_0 \\
% =
\left\{
\begin{array}{ll}
-\frac{ \fmax}{m\pi} & [m\ \mbox{even}] \\
\frac{ \fmax}{m\pi} & [m\ \mbox{odd}]
\end{array}
\right.
\end{align}$
Note that the coefficients are independent of $\tau$. Is this a general result?
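A quick symbolic check of this claim (a minimal sketch using sympy, which this notebook does not otherwise import): the $\tau$ dependence cancels in $b_m$.
```python
import sympy as sp
t_, tau_, fmax_ = sp.symbols('t tau f_max', positive=True)
m_ = sp.symbols('m', integer=True, positive=True)
omega0 = 2*sp.pi/tau_
# b_m = (2/tau) * integral of f(t) sin(m omega t) over one period, with f(t) = fmax*t/tau
b_m = (2/tau_)*sp.integrate(fmax_*(t_/tau_)*sp.sin(m_*omega0*t_),
                            (t_, -tau_/2, tau_/2))
print(sp.simplify(b_m))   # only f_max, m and pi remain; tau has dropped out
```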
## Define the functions we'll need
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad
```
We start by defining a function for the sawtooth wave at any $t$. The definition here is for a scalar function. That is, it won't work to call it with $t$ and array of time points, unlike other functions we have defined. It is possible to make it work, but then the function will be much less clear. When we need to evaluate it for all elements of an array, we will use the construction: `np.array([sawtooth(t) for t in t_pts])` for the array `t_pts`.
```python
def sawtooth(t, tau, f_max=1):
"""Returns the sawtooth wave of amplitude f_max and odd about the
origin at time t. The period is tau. It is defined as a scalar
function (i.e., only one value of t can be passed at a time).
"""
if np.floor(t) % 2 == 0:
t_adjust = t - np.floor(t)
return t_adjust / tau
else:
t_adjust = t - (np.floor(t) + 1)
return t_adjust / tau
```
Now a function that creates an array of Fourier coefficients for the sawtooth wave up to order N_max.
```python
def sawtooth_coeffs_by_hand(N_max, tau=2., f_max=1.):
"""Fourier coefficients calculated by hand and loaded into an array.
Note that these are independent of tau, but we pass it for
consistency with other functions.
"""
coeffs_array = [(0., 0.)] # a_0 and b_0
for n in np.arange(1, N_max, 1):
if (n % 2) == 0: # for even n
b_n = -f_max / (n * np.pi)
else: # for odd n
b_n = f_max / (n * np.pi)
a_n = 0.
coeffs_array.append((a_n, b_n))
return np.array(coeffs_array) # convert to a numpy array
```
We would like a general way to construct the array of Fourier coefficients given any periodic function. Our first pass at that uses a class definition and the scipy integration function quad.
```python
class FourierSeries():
    """
    Fourier series class finds the coefficients in a Fourier series with
    period tau up to a specified order.
    Assume these imports:
        from scipy.integrate import quad
        import numpy as np
    """
    def __init__(self,
                 function,
                 tau=2,
                 N_max=10
                ):
        self.function = function
        self.tau = tau
        self.omega = 2. * np.pi / tau
        self.N_max = N_max
        # add something to quit if Nmax < 0 or not an integer (try and except)
    def a0_calc(self):
        """Calculate the constant Fourier coefficient by integration"""
        answer, error = quad(self.function, -self.tau/2., self.tau/2.,
                             args=(self.tau,))
        return (1./self.tau) * answer
    def an_integrand(self, t, n):
        """Integrand for the nth cosine coefficient"""
        return self.function(t, self.tau) * np.cos(n * self.omega * t)
    def an_calc(self, n):
        """Calculate the nth cosine coefficient (n > 0)"""
        # note comma after n in args
        answer, error = quad(self.an_integrand, -self.tau/2., self.tau/2.,
                             args=(n,))
        return (2./self.tau) * answer
    def bn_integrand(self, t, n):
        """Integrand for the nth sine coefficient"""
        return self.function(t, self.tau) * np.sin(n * self.omega * t)
    def bn_calc(self, n):
        """Calculate the nth sine coefficient (n > 0)"""
        answer, error = quad(self.bn_integrand, -self.tau/2., self.tau/2.,
                             args=(n,))
        return (2./self.tau) * answer
    def coeffs_upto_Nmax(self):
        """Calculate the Fourier series coefficients up to n = N_max"""
        # first generate the constant coefficient
        coeffs_array = [(self.a0_calc(), 0)]  # a_0 and b_0
        for n in np.arange(1, self.N_max, 1):
            a_n = self.an_calc(n)
            b_n = self.bn_calc(n)
            coeffs_array.append((a_n, b_n))  # append a tuple of coefficients
        return np.array(coeffs_array)  # convert to a numpy array
```
Finally, we need a function that can take as input an array of t values and an array of Fourier coefficients and return the function at those t values with terms up to order N_max.
```python
def Fourier_reconstruct(t_pts, coeffs_array, tau, N_max):
"""Sum up the Fourier series up to n = N_max terms."""
omega = 2. * np.pi / tau
result = 0.
# iterate over coefficients but only up to N_max
for n, (a,b) in enumerate(coeffs_array[:N_max+1]):
result = result + a * np.cos(n * omega * t_pts) \
+ b * np.sin(n * omega * t_pts)
return result
```
## Problem 5.50
Ok, now we can do problem 5.49. Calculate the coefficients both ways.
```python
N_max = 20
tau = 2.
f_max = 1.
coeffs_by_hand = sawtooth_coeffs_by_hand(N_max, tau, f_max)
fs = FourierSeries(sawtooth, tau, N_max)
coeffs_by_quad = fs.coeffs_upto_Nmax()
```
Let's check that the exact and numerical calculation of the coefficients agree.
(Note the space in the formats, e.g., `{a1: .6f}`. This means to leave an extra space for a positive number so that it aligns at the decimal point with negative numbers.)
```python
print(' n a_exact a_quad b_exact b_quad')
for n, ((a1,b1), (a2,b2)) in enumerate(zip(coeffs_by_hand,
coeffs_by_quad)):
print(f'{n:2d} {a1: .6f} {a2: .6f} {b1: .6f} {b2: .6f}')
```
n a_exact a_quad b_exact b_quad
0 0.000000 0.000000 0.000000 0.000000
1 0.000000 0.000000 0.318310 0.318310
2 0.000000 0.000000 -0.159155 -0.159155
3 0.000000 0.000000 0.106103 0.106103
4 0.000000 0.000000 -0.079577 -0.079577
5 0.000000 0.000000 0.063662 0.063662
6 0.000000 0.000000 -0.053052 -0.053052
7 0.000000 0.000000 0.045473 0.045473
8 0.000000 0.000000 -0.039789 -0.039789
9 0.000000 0.000000 0.035368 0.035368
10 0.000000 0.000000 -0.031831 -0.031831
11 0.000000 0.000000 0.028937 0.028937
12 0.000000 0.000000 -0.026526 -0.026526
13 0.000000 0.000000 0.024485 0.024485
14 0.000000 0.000000 -0.022736 -0.022736
15 0.000000 0.000000 0.021221 0.021221
16 0.000000 0.000000 -0.019894 -0.019894
17 0.000000 0.000000 0.018724 0.018724
18 0.000000 0.000000 -0.017684 -0.017684
19 0.000000 0.000000 0.016753 0.016753
Make the comparison plot requested: N_max = 2 vs. N_max = 6.
```python
t_pts = np.arange(-2., 6., .01)
f_pts_2 = Fourier_reconstruct(t_pts, coeffs_by_quad, tau, 2)
f_pts_6 = Fourier_reconstruct(t_pts, coeffs_by_quad, tau, 6)
# Python way to evaluate the sawtooth function at an array of points:
# * np.array creates a numpy array;
# * note the []s around the inner statement;
# * sawtooth(t) for t in t_pts
# means step through each element of t_pts, call it t, and
# evaluate sawtooth at that t.
# * This is called a list comprehension. There are more compact ways,
# but this is clear and easy to debug.
sawtooth_t_pts = np.array([sawtooth(t, tau, f_max) for t in t_pts])
```
```python
fig_1 = plt.figure(figsize=(10,5))
ax_1 = fig_1.add_subplot(1,2,1)
ax_1.plot(t_pts, f_pts_2, label='N = 2', color='blue')
ax_1.plot(t_pts, sawtooth_t_pts, label='exact', color='red')
ax_1.set_xlim(-1.1,4.1)
ax_1.set_xlabel('t')
ax_1.set_ylabel('f(t)')
ax_1.set_title('N = 2')
ax_1.legend()
ax_2 = fig_1.add_subplot(1,2,2)
ax_2.plot(t_pts, f_pts_6, label='N = 6', color='blue')
ax_2.plot(t_pts, sawtooth_t_pts, label='exact', color='red')
ax_2.set_xlim(-1.1,4.1)
ax_2.set_xlabel('t')
ax_2.set_ylabel('f(t)')
ax_2.set_title('N = 6')
ax_2.legend();
fig_1.tight_layout()
fig_1.savefig('problem_5.50.png')
```
```python
```
```python
```
| 078e1693370fec579a427daf9fbdac22e40c0bf8 | 68,233 | ipynb | Jupyter Notebook | 2020_week_2/Taylor_problem_5.50.ipynb | CLima86/Physics_5300_CDL | d9e8ee0861d408a85b4be3adfc97e98afb4a1149 | [
"MIT"
]
| null | null | null | 2020_week_2/Taylor_problem_5.50.ipynb | CLima86/Physics_5300_CDL | d9e8ee0861d408a85b4be3adfc97e98afb4a1149 | [
"MIT"
]
| null | null | null | 2020_week_2/Taylor_problem_5.50.ipynb | CLima86/Physics_5300_CDL | d9e8ee0861d408a85b4be3adfc97e98afb4a1149 | [
"MIT"
]
| null | null | null | 169.733831 | 54,440 | 0.876467 | true | 2,973 | Qwen/Qwen-72B | 1. YES
2. YES | 0.874077 | 0.882428 | 0.77131 | __label__eng_Latn | 0.921008 | 0.630344 |
# Exercises and Problems for Module 2
```python
import numpy as np
from pint import UnitRegistry
import matplotlib.pyplot as plt
import Utils16101
import sympy
sympy.init_printing()
%matplotlib inline
```
```python
ureg = UnitRegistry()
Q_ = ureg.Quantity
```
## Exercise 2.4.2: compute lift coefficient
First aircraft (Cessna like)
```python
w1 = Q_(2400.,'lbf')
Sref1 = Q_(180.,'foot**2')
v1 = Q_(140.,'mph')
alt1 = Q_(12e3,'foot')
ρ1 = Q_(1.6e-3,'slug/foot**3')
```
Second aircraft (*B777* like)
```python
w2 = Q_(550e3,'lbf')
Sref2 = Q_(4.6e3,'foot**2')
v2 = Q_(560.,'mph')
alt2 = Q_(35e3,'foot')
ρ2 = Q_(7.4e-4,'slug/foot**3')
```
**Results**
```python
print("First aircraft: ",Utils16101.computeLiftCoeff(w1,Sref1,v1,alt1,ρ1))
print("Second aircraft: ",Utils16101.computeLiftCoeff(w2,Sref2,v2,alt2,ρ2))
```
First aircraft: 0.3953028287585151 dimensionless
Second aircraft: 0.47903177691800075 dimensionless
## Exercise 2.4.3: drag comparison
Hypotheses:
* $C_{Dcyl}\approx1$ and $C_{Dfair}\approx0.01$
* $S_{ref\ cyl} = d\cdot h$, and $S_{ref\ fair} = c\cdot h$, with $c = 10d$
* same $V_{\infty}$
Expression of Drag:
$$D = \frac{1}{2} \cdot C_D \rho V_{\infty}^2 S_{ref}$$
Ratio of Drags
$$\frac{D_{cyl}}{D_{fair}} = \frac{\frac{1}{2} \cdot C_{Dcyl} \rho V_{\infty}^2 S_{ref\ cyl}}{\frac{1}{2} \cdot C_{Dfair} \rho V_{\infty}^2 S_{ref\ fair}} = \frac{C_{Dcyl} \cdot dh}{C_{Dfair} \cdot 10dh} = \frac{1}{0.01 \cdot 10} = 10 $$
so the bare cylinder produces roughly ten times the drag of the much larger fairing.
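A one-line numerical check of this ratio, using the assumed coefficients above:
```python
CD_cyl, CD_fair = 1.0, 0.01      # assumed drag coefficients
S_ratio = 1.0/10.0               # S_ref_cyl / S_ref_fair = d*h / (10*d*h)
print("D_cyl / D_fair =", (CD_cyl/CD_fair)*S_ratio)
```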
## Exercise 2.4.7: _Mach_ and _Reynolds_ number comparisons
First aircraft additional parameters:
```python
c1 = Q_(5.0,'foot')
μ1 = Q_(3.5e-7,'slug/foot/second')
a1 = Q_(1.1e3,'foot/second')
```
Second aircraft additional parameters
```python
c2 = Q_(23.0,'foot')
μ2 = Q_(3.0e-7,'slug/foot/second')
a2 = Q_(9.7e2,'foot/second')
```
```python
Ma1, Re1 = Utils16101.computeMachRe(v1,a1,μ1,c1,ρ1)
Ma2, Re2 = Utils16101.computeMachRe(v2,a2,μ2,c2,ρ2)
print("First aircraft - Ma: {0:10.3e} Re: {1:10.3e}".format(Ma1.magnitude,Re1.magnitude))
print("Second aircraft - Ma: {0:10.3e} Re: {1:10.3e}".format(Ma2.magnitude,Re2.magnitude))
```
First aircraft - Ma: 1.867e-01 Re: 4.693e+06
Second aircraft - Ma: 8.467e-01 Re: 4.660e+07
## Exercise 2.4.10: dynamic similarity
**Wind tunnel** test conditions
```python
ρ_inf = Q_(2.4e-3,'slug/ft**3')
a_inf = Q_(1.1e3,'ft/s')
μ_inf = Q_(3.7e-7,'slug/ft/s')
v = Q_(200.,'mph')
c = c1/4
```
```python
Ma_wt, Re_wt = Utils16101.computeMachRe(v,a_inf,μ_inf,c,ρ_inf)
print("Wind tunnel - Ma: {0:10.3e} Re: {1:10.3e}".format(Ma_wt.magnitude,Re_wt.magnitude))
```
Wind tunnel - Ma: 2.667e-01 Re: 2.378e+06
## Exercise 2.5.2: minimum Takeoff velocity
Minimum required lift: **L = W** as $V_{\infty} \perp \vec{g} $
$$L = W = \frac{1}{2} \cdot \rho V_{\infty}^2 C_L * S_{ref} $$
```python
W = Q_(650e3,'lbf')
Sref = Q_(4.6e3,'ft**2')
ρ_inf = Q_(2.4e-3,'slug/ft**3')
CL_max = 2.5
```
```python
V_inf = np.sqrt(2*W.to('slug*ft/s**2')/(ρ_inf*CL_max*Sref))
print(V_inf.to('mph'))
```
147.97411698442224 mph
## Exercise 2.6.2: Range estimate
**Breguet** equation for determining range (level flight, no _takeoff_ or _landing_):
$$R = \eta_0 \cdot \frac{L}{D} \cdot \frac{Q_R}{g} \cdot \ln \left(1+\frac{W_{fuel}}{W_{final}}\right)$$
```python
η0 = Q_(0.32,'dimensionless')
LoverD = Q_(17.,'dimensionless')
QR = Q_(42.,'MJ/kg')
g = Q_(9.80665,'m/s**2')
W_in = Q_(400e3,'kg')
W_fuel = Q_(175e3,'kg')
W_final = W_in - W_fuel
```
```python
R = η0 * LoverD * QR.to('m**2/s**2')/g*np.log(1+W_fuel/W_final)
print("Range = {0:10.3e}".format(R.to('km')))
```
Range = 1.341e+04 kilometer
# Sample Problems
## Problem 2.7.1: Lift and Drag for flat plate in supersonic flow
Hypotheses:
* $\Delta p = p_l - p_u > 0$
* $p_l , p_u constant $
* $\alpha \ small \rightarrow \cos(\alpha) \approx 1, \sin(\alpha) \approx \alpha$
Relations:
$$
\begin{align}
L &= \Delta p \cdot S \cos(\alpha) \\
D &= \Delta p \cdot S \sin(\alpha)
\end{align}
$$
**Lift** and **Drag** coefficients:
$$
\begin{align}
C_L &= \frac{L}{\frac{1}{2}\rho_{\infty} V_{\infty}^2S} &\approx \frac{\Delta p}{\frac{1}{2}\rho_{\infty} V_{\infty}^2} \\
C_D &= \frac{D}{\frac{1}{2}\rho_{\infty} V_{\infty}^2S} &\approx \frac{\Delta p \alpha}{\frac{1}{2}\rho_{\infty} V_{\infty}^2}
\end{align}
$$
$\Delta p \propto \alpha$ for *supersonic flow* and *small angle*
$$
\begin{align}
C_L &\approx \frac{\Delta p}{\frac{1}{2}\rho_{\infty} V_{\infty}^2} &\propto \frac{\alpha}{\frac{1}{2}\rho_{\infty} V_{\infty}^2}\\
C_D &\approx \frac{\Delta p \alpha}{\frac{1}{2}\rho_{\infty} V_{\infty}^2} &\propto \frac{\alpha^2}{\frac{1}{2}\rho_{\infty} V_{\infty}^2}
\end{align}
$$
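A small illustrative plot of these scalings (the proportionality constant is arbitrary):
```python
alpha = np.linspace(0.0, 0.2, 50)   # small angles of attack [rad]
k = 1.0                             # arbitrary proportionality constant
plt.plot(alpha, k*alpha, label=r'$C_L \propto \alpha$')
plt.plot(alpha, k*alpha**2, label=r'$C_D \propto \alpha^2$')
plt.xlabel(r'$\alpha$ [rad]')
plt.legend();
```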
## Problem 2.7.2: Aerodynamic performance
Aircraft parameters:
```python
W = Q_(550e3,'lbf')
Sref = Q_(4.6e3,'ft**2')
AR = Q_(9.,'dimensionless')
```
Air parameters at two different altitudes
```python
ρ_inf1 = Q_(1.6e-3,'slug/ft**3') #1.2e4 ft
ρ_inf2 = Q_(7.3e-4,'slug/ft**3') #3.5e4 ft
a_inf1 = Q_(1069.,'ft/s')
a_inf2 = Q_(973.,'ft/s')
```
Aircraft speed
```python
Ma = Q_(0.85,'dimensionless')
```
**Parabolic drag model**
$$C_D = C_{D0} + \frac{C_L^2}{\pi e AR}$$
with:
* _AR_: Aspect ratio
* _e_: **Oswald** span efficiency
```python
C_D0 = Q_(0.05,'dimensionless')
e_osw = Q_(0.8,'dimensionless')
```
```python
V_inf1 = Ma*a_inf1
V_inf2 = Ma*a_inf2
C_L1 = W.to('slug*ft/s**2')/(0.5*ρ_inf1*V_inf1**2*Sref)
C_L2 = W.to('slug*ft/s**2')/(0.5*ρ_inf2*V_inf2**2*Sref)
print("Lift coefficient at 12000ft: {0:10.3e}".format(C_L1))
print("Lift coefficient at 35000ft: {0:10.3e}".format(C_L2))
```
Lift coefficient at 12000ft: 1.810e-01 dimensionless
Lift coefficient at 35000ft: 4.789e-01 dimensionless
**NB**: _Drag count_ $\rightarrow C_D \cdot 10^4$
```python
C_D1 = C_D0 + C_L1**2/(np.pi*e_osw*AR)
C_D2 = C_D0 + C_L2**2/(np.pi*e_osw*AR)
print("Drag count at 12000ft: {0:10.1f}".format(C_D1*1e4))
print("Drag count at 35000ft: {0:10.1f}".format(C_D2*1e4))
```
Drag count at 12000ft: 514.5 dimensionless
Drag count at 35000ft: 601.4 dimensionless
Lift to Drag ratio:
```python
L_D1 = C_L1/C_D1
L_D2 = C_L2/C_D2
print("Lift to Drag ratio at 12000ft: {0:10.3e}".format(L_D1))
print("Lift to Drag ratio at 35000ft: {0:10.3e}".format(L_D2))
```
Lift to Drag ratio at 12000ft: 3.518e+00 dimensionless
Lift to Drag ratio at 35000ft: 7.963e+00 dimensionless
**Required Thrust**: $T = D$
```python
T1 = 0.5*C_D1*ρ_inf1*V_inf1**2*Sref
T2 = 0.5*C_D2*ρ_inf2*V_inf2**2*Sref
print("Thrust required at 12000ft: {0:10.3e}".format(T1.to('lbf')))
print("Thrust required at 35000ft: {0:10.3e}".format(T2.to('lbf')))
```
Thrust required at 12000ft: 1.563e+05 force_pound
Thrust required at 35000ft: 6.907e+04 force_pound
**Required Power**: $P = T \cdot V_{\infty}$
```python
P1 = T1.to('lbf')*V_inf1
P2 = T2.to('lbf')*V_inf2
print("Power required at 12000ft: {0:10.3e}".format(P1))
print("Power required at 35000ft: {0:10.3e}".format(P2))
```
Power required at 12000ft: 1.420e+08 foot * force_pound / second
Power required at 35000ft: 5.712e+07 foot * force_pound / second
## Problem 2.7.3: sensitivity of payload
Using **Breguet** equation and comparing terms to get the same range
$$ 0.99 \eta_0 \frac{L}{D} \cdot \frac{Q_R}{g} \ln \left(\frac{W_{in}-100n}{W_{fin}-100n}\right) =
\eta_0 \frac{L}{D} \cdot \frac{Q_R}{g} \ln \left(\frac{W_{in}}{W_{fin}}\right)$$
which gives:
$$ \left(\frac{W_{in}-100n}{W_{fin}-100n}\right)^{0.99} = \left(\frac{W_{in}}{W_{fin}}\right)$$
```python
Win = 400e3
Wfin = 400e3-175e3
```
```python
n = np.arange(25.,35.)
y = ((Win-100*n)/(Wfin-100*n))**0.99 - Win/Wfin
```
```python
plt.figure(figsize=(16,10), dpi=300)
plt.plot(n, y, lw=3.)
plt.grid();
```
```python
zero_crossing = np.where(np.diff(np.sign(y)))[0]+1
```
```python
print("number of passengers: {0:d}".format(int(n[zero_crossing])))
```
number of passengers: 30
## Problem 2.7.4: rate of climb
Relations:
- $\dot{h} = V_{\infty} \sin(\theta)$
- $ T = D + W \sin(\theta)$
so:
$$ \dot{h} = V_{\infty} \cdot \frac{T-D}{W}$$
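A minimal numerical sketch (the speed, thrust, drag and weight below are assumed round numbers, not values from the problem statement):
```python
V_cl = Q_(900., 'ft/s')    # assumed flight speed
T_cl = Q_(180e3, 'lbf')    # assumed available thrust
D_cl = Q_(156e3, 'lbf')    # assumed drag
W_cl = Q_(550e3, 'lbf')    # assumed weight
hdot = V_cl*(T_cl - D_cl)/W_cl
print("Rate of climb: {0:10.3f}".format(hdot.to('ft/s')))
```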
## Problem 2.7.5: maximum lift-to-drag ratio
```python
Cd, Cd0, K = sympy.symbols('C_D C_D0 K')
```
```python
expr = sympy.sqrt((Cd-Cd0)*K)/Cd
expr
```
```python
sympy.simplify(sympy.diff(expr,Cd))
```
Maximum lift to drag ratio for $C_D = 2C_{D0}$
$$ \left(\frac{L}{D} \right)_{max} = \frac{1}{2}\sqrt{\frac{\pi e AR}{C_{D0}}}$$
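A quick numerical check, reusing the Problem 2.7.2 parameters ($C_{D0}=0.05$, $e=0.8$, $AR=9$) purely as an illustration:
```python
LD_max = 0.5*np.sqrt(np.pi*0.8*9./0.05)   # e = 0.8, AR = 9, C_D0 = 0.05
print("Maximum L/D: {0:10.3f}".format(LD_max))
```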
# Homework
## Problem 2.8.1: cryogenic wind tunnel test
Small aircraft flying at following conditions:
```python
V_full = Q_(10.,'m/s')
ρ_full = Q_(0.5,'kg/m**3')
T_full = Q_(233.,'K')
```
Air supposed to be ideal gas:
```python
R = Q_(287,'J/kg/K')
γ = Q_(1.4,'dimensionless')
```
Temperature - viscosity dependence: $\frac{\mu_1}{\mu_2} = \sqrt{\frac{T_1}{T_2}}$
**Freestream pressure**
```python
p_full = ρ_full*R*T_full
print("Freestream pressure: {0:10.3e}".format(p_full.to('Pa')))
```
Freestream pressure: 3.344e+04 pascal
** Mach number**
```python
a_full = np.sqrt(γ*R.to('m**2/s**2/K')*T_full)
Ma_full = V_full/a_full
print("Fullscale Mach number: {0:10.3e}".format(Ma_full))
```
Fullscale Mach number: 3.268e-02 dimensionless
```python
scale = Q_(0.2,'dimensionless')
p_scale = Q_(1e5,'Pa')
```
** Compare Reynolds and Mach numbers:**
$$
\begin{align}
Re: & \frac{\rho_f V_f l_f }{\mu_f} &=& \frac{\rho_s V_s l_s}{\mu_s} &\rightarrow & \frac{\rho_s}{\rho_f} &=&
\frac{\mu_s}{\mu_f} \cdot \frac{V_f}{V_s} \cdot \frac{1}{scale} \\
Mach: & \frac{V_f}{a_f} &=& \frac{V_s}{a_s} &\rightarrow & \frac{V_s}{V_f} &=&
\sqrt{\frac{T_s}{T_f}} \\
\end{align}
$$
Using the temperature - viscosity dependence:
$$ \frac{\rho_s}{\rho_f} = \frac{1}{scale} $$
Knowing $\rho_s$ from relation above and $p_s$ and using $p = \rho RT$ we find $T_s$
From _Mach number_ relation we find $V_s$
```python
ρ_scale = ρ_full / scale
T_scale = p_scale.to('kg/m/s**2')/R.to('m**2/s**2/K')/ρ_scale
V_scale = np.sqrt(T_scale/T_full)*V_full
```
```python
print("Scaled model density: {0:10.3f}".format(ρ_scale))
print("Scaled model Temperature: {0:10.3f}".format(T_scale))
print("Scaled model velocity: {0:10.3f}".format(V_scale))
```
Scaled model density: 2.500 kilogram / meter ** 3
Scaled model Temperature: 139.373 kelvin
Scaled model velocity: 7.734 meter / second
** Drag comparison **
$$D = \frac{1}{2}C_D\rho V_{\infty}^2S_{ref}$$
comparing drag:
$$\frac{D_f}{D_s} = \frac{\rho_f V_{\infty f}^2}{\rho_s V_{\infty s}^2} \cdot \frac{1}{scale^2}$$
```python
D_scale = Q_(100.,'N')
D_full = D_scale*ρ_full/ρ_scale*(V_full/V_scale)**2/(scale**2)
print("Full model Drag: {0:10.3f}".format(D_full))
```
Full model Drag: 835.887 newton
## Problem 2.8.2: impact of winglet on performance
Data:
```python
η0 = Q_(0.34,'dimensionless')
LD = Q_(16.,'dimensionless')
Win = Q_(225e3,'kg')
Wfuel = Q_(105e3,'kg')
Wfinal = Win-Wfuel
Qr = Q_(42.,'MJ/kg')
g = Q_(9.81,'m/s**2')
```
```python
rng0 = LD*η0*Qr.to('m**2/s**2')/g*np.log(Win/Wfinal)
print("Original range: {0:10.3f}".format(rng0.to('km')))
```
Original range: 14640.622 kilometer
**Winglets** give a 5% reduction in drag:
Fuel consumption over the same range
$$
\begin{align}
\eta_0 \frac{L}{D} \frac{Q_R}{g} \ln \left(1+\frac{W_{fuel0}}{W_{final}}\right) &= \eta_0 \frac{L}{0.95D} \frac{Q_R}{g} \ln \left(1+\frac{W_{fuel1}}{W_{final}}\right) \\
\left(1+\frac{W_{fuel0}}{W_{final}}\right)^{0.95} &= \left(1+\frac{W_{fuel1}}{W_{final}}\right)
\end{align}
$$
```python
Wfuel1 = Wfinal*( (1+Wfuel/Wfinal)**0.95 -1)
print("Improved fuel consumption: {0:10.3f}".format(Wfuel1))
```
Improved fuel consumption: 98038.133 kilogram
```python
Fuel_dens = Q_(0.81,'kg/l')
Fuel_cost = Q_(0.75,'mol/l') # just joking... can we define new units?
```
```python
fuel_savings = (Wfuel-Wfuel1)*Q_(365,'1/year')/Fuel_dens*Fuel_cost
print("Annual savings: {0:10.3e}".format(fuel_savings))
```
Annual savings: 2.353e+06 mole / year
**Winglets** again give a 5% reduction in drag:
Allowed weight increase over the same range, given a 1% reduction in fuel consumption
$$
\begin{align}
\eta_0 \frac{L}{D} \frac{Q_R}{g} \ln \left(1+\frac{W_{fuel}}{W_{final}}\right) &= \eta_0 \frac{L}{0.95D} \frac{Q_R}{g} \ln \left(1+\frac{0.99W_{fuel}}{W_{final1}}\right) \\
\left(1+\frac{W_{fuel}}{W_{final}}\right)^{0.95} &= \left(1+\frac{0.99W_{fuel}}{W_{final1}}\right)
\end{align}
$$
```python
Wfinal1 = 0.99*Wfuel/((1+Wfuel/Wfinal)**0.95-1)
print("Aircraft mass increment: {0:10.3f}".format(Wfinal1-Wfinal))
```
Aircraft mass increment: 7236.205 kilogram
```python
fuel_savings1 = 0.01*Wfuel*Q_(365,'1/year')/Fuel_dens*Fuel_cost
print("Annual savings: {0:10.3e}".format(fuel_savings1))
```
Annual savings: 3.549e+05 mole / year
## Problem 2.8.3: Minimum power flight with *parabolic Drag Model*
Power consumption $P = D \cdot V_{\infty}$
$$
\begin{align}
D &= \frac{1}{2}C_D\rho_{\infty}V_{\infty}^2S_{ref}\\
L &= W \\
L &= \frac{1}{2}C_L\rho_{\infty}V_{\infty}^2S_{ref}
\end{align}
$$
From the above relations:
$$
\begin{align}
P &= \frac{1}{2}C_D\rho_{\infty}V_{\infty}^3S_{ref}\\
V_{\infty} &= \sqrt{\frac{2W}{C_L \rho_{\infty} S_{ref}}}\\
P &= W \cdot \sqrt{\frac{2W}{\rho_{\infty}S_{ref}}} \cdot C_D \cdot C_L^{-\frac{3}{2}}
\end{align}
$$
$C_L$ that minimizes power consumption
```python
Cl, Cd0, K, e, AR, rho, Sr, W = sympy.symbols('C_L C_D0 K e AR rho S_r W')
```
```python
P_expr = sympy.sqrt(2*W/(rho*Sr))*W*(Cd0+Cl**2/(sympy.pi*e*AR))*sympy.sqrt(Cl**(-3))
P_expr
```
```python
sympy.simplify(sympy.diff(P_expr,Cl))
```
Lift coefficient at minimum power consumption: $C_L = \sqrt{3 \pi e AR C_{D0}}$
Induced Drag - Total Drag ratio: $C_D = C_{D0} + \frac{C_L^2}{\pi e AR} = C_{D0} + 3 C_{D0}$
$$\frac{C_{Di}}{C_D} = \frac{3}{4}$$
Case of autonomous aircraft
```python
Splan = Q_(0.3,'m**2')
W = Q_(3.5,'N').to('kg*m/s**2')
ρ = Q_(1.225,'kg/m**3')
AR = Q_(10,'dimensionless')
e = Q_(0.95,'dimensionless')
Cd0 = Q_(0.02,'dimensionless')
```
```python
Cl_min = np.sqrt(3*np.pi*e*AR*Cd0)
print("Lift Coefficient at minimum power consumption: {0:10.3f}".format(Cl_min))
```
Lift Coefficient at minimum power consumption: 1.338 dimensionless
```python
Cd_min = 4*Cd0
print("Drag Coefficient at minimum power consumption: {0:10.3f}".format(Cd_min))
```
Drag Coefficient at minimum power consumption: 0.080 dimensionless
```python
Vinf = np.sqrt(2*W/(Cl_min*ρ*Splan))
print("Velocity at minimum power consumption: {0:10.3f}".format(Vinf))
```
Velocity at minimum power consumption: 3.773 meter / second
```python
T = (0.5*Cd_min*ρ*Vinf**2*Splan).to('N')
print("Thrust required at minimum power consumption: {0:10.3f}".format(T))
```
Thrust required at minimum power consumption: 0.209 newton
```python
P = (T*Vinf).to('W')
print("Power required at minimum power consumption: {0:10.3f}".format(P))
```
Power required at minimum power consumption: 0.789 watt
```python
```
| 1aa1c6750764d686a821dbc99ac254363c9cfe58 | 81,053 | ipynb | Jupyter Notebook | problems02.ipynb | Ccaccia73/Intro2Aero_Edx | 28714f3d937fe4738b0cf72f2fdc44010503ae39 | [
"Artistic-2.0"
]
| null | null | null | problems02.ipynb | Ccaccia73/Intro2Aero_Edx | 28714f3d937fe4738b0cf72f2fdc44010503ae39 | [
"Artistic-2.0"
]
| null | null | null | problems02.ipynb | Ccaccia73/Intro2Aero_Edx | 28714f3d937fe4738b0cf72f2fdc44010503ae39 | [
"Artistic-2.0"
]
| null | null | null | 52.906658 | 35,332 | 0.75078 | true | 5,951 | Qwen/Qwen-72B | 1. YES
2. YES | 0.890294 | 0.851953 | 0.758489 | __label__eng_Latn | 0.224387 | 0.600555 |
<h1><center>Modelling Montesinho Natural Park's conflagrations</center></h1>
<h3><center>University of Cyprus - Project for MAS451</center></h3>
<h3><center>Ifigeneia Galanou, Evi Zaou, Marios Andreou</center></h3>
---
## Introduction
Modelling instances of conflagrations with respect to various parameters and variables can be crucial for predicting what affects the probability distribution of dependent variables such as fire occurrence, burnt area and rate of spread. While controlled fires are sometimes used to maintain the balance of various forest ecosystems, for example to control the areas where carnivorous plants are prevalent, failing to take the necessary precautions to counteract and put out malicious or accidental conflagrations could have cataclysmic consequences for individuals, in economic and health-related terms, for the flora and fauna, and for rural communities. This is where the data from the fires that plagued Portugal's Montesinho Natural Park from January 2000 to December 2003 come in handy. The dataset gathered in this time interval contains 517 instances and is defined by 13 columns; 1 is the dependent variable, **total burnt area**, while the other 12 are the explanatory variables, of which 4 are discrete-valued and the other 8 are continuous in nature.
Attribute/Variable Description:
---
* $X$: the x-axis spatial coordinate within the Montesinho park map; $X\in\{1,\dots,9\}$
* $Y$: the y-axis spatial coordinate within the Montesinho park map; $Y\in\{1,\dots,9\}$
* For these refer to map below:
* **Month**: the month that the fire occurred. In the current state of the data, this variable is character-valued with values from 'jan' to 'dec'. _These were translated to numerical values where_ month $\in\{1,\dots,12\}$.
* **Day**: the day the fire occurred. In the current state of the data, this variable is character-valued with values from 'mon' to 'sun'. _These were translated to numerical values where_ day $\in\{1,\dots,7\}$.
* **Fine Fuel Moisture Code (FFMC)**: a numeric rating of the moisture content of litter and other cured fine fuels. *FFMC provides a measure of ease of fire inception and flammability of the top fuel layer, where initial ignition usually occurs.* This is a percentage so FFMC $\in[0,100]$
* **Duff Moisture Code (DMC)**: a numeric rating of the average moisture content of loosely compacted organic layers of moderate depth. _This code gives an indication of fuel consumption in moderate duff layers and medium-size woody material._
* **Drought Code (DC)**: a numeric rating of the average moisture content of deep, compact organic layers. _This code is a useful indicator of seasonal drought effects on forest fuels and the amount of smoldering in deep duff layers and large logs._
* **Initial Spread Index (ISI)**: _a numeric rating that approximates the expected rate of fire spread._ **It is based on wind speed and FFMC.** Like the rest of the FWI system components, ISI does not take fuel type into account. Actual spread rates vary between fuel types at the same ISI.
* Mathematically speaking this is approximately the derivative of the theoretical expected value of our dependent variable in this instance; $\mathbf{E}[Y|\vec{X}=\vec{x}]$, with respect to time. Indeed,
\begin{equation} \label{eq:1}
ISI\approxeq\mathbf{E}[\frac{d(Y|\vec{X}=\vec{x})}{dt}]\stackrel{\because \mathbf{E}[\cdot] \text{is linear}}{=}\frac{d(\mathbf{E}[Y|\vec{X}=\vec{x}])}{dt}
\end{equation}
* **Temperature (temp)**: the temperature in Celsius degrees
* **Relative Humidity (RH)**: the relative humidity present in air expressed as a percentage
* **wind**: wind speed in $km/h$
* **rain**: outside rain in $mm/m^2$
* **area**: the burnt area of the forest (in ha - hectares where $1ha=10000m^2$). In the current state of the data, our **dependent variable** is very skewed towards $0$.
FWI
---
---
The aforementioned indices are found in the **Canadian Forest Fire Weather Index (FWI) System, which is adopted nowadays by all governing bodies overseeing the preservation and safety of forests.**
**The Fire Weather Index (FWI) is a numeric rating of fire intensity. It is based on the ISI _(included in our dataset - affected by FFMC and Wind speed)_ and the BUI _(NOT included in our dataset - affected by DC and DMC)_, and is used as a general index of fire danger throughout the forested areas of Canada and the world.**
_**Summary**_
_The Canadian Forest Fire Weather Index (FWI) System consists of six components that account for the effects of fuel moisture and weather conditions on fire behavior._
_The first three components are fuel moisture codes, which are numeric ratings of the moisture content of the forest floor and other dead organic matter. Their values rise as the moisture content decreases. There is one fuel moisture code for each of three layers of fuel: litter and other fine fuels; loosely compacted organic layers of moderate depth; and deep, compact organic layers._
_The remaining three components are fire behavior indices, which represent the rate of fire spread, the fuel available for combustion, and the frontal fire intensity; these three values rise as the fire danger increases_
_**Structure of the FWI System**_
_The diagram below illustrates the components of the FWI System. Calculation of the components is based on consecutive daily observations of temperature, relative humidity, wind speed, and 24-hour precipitation. The six standard components provide numeric ratings of relative potential for wildland fire._
## Changes made to the data
---
For several reasons, some changes were made to the dataset to make it more usable for the task at hand. Those changes will be showcased in an interactive manner here: just as we discovered the necessity of these changes while working through the project, we will show where issues with the "raw" material arose.
```R
# Importing the data from the csv
data<-read.csv("forestfiresdata.csv")
names(data)[1]<-"X" # Due to poor encoding from .xlsx to .csv,
# there needs to be a slight renaming of
# the first variable.
n=nrow(data)
# Printing the data to get an idea of what we have
head(data, 5)
```
<table class="dataframe">
<caption>A data.frame: 5 × 13</caption>
<thead>
<tr><th></th><th scope=col>X</th><th scope=col>Y</th><th scope=col>month</th><th scope=col>day</th><th scope=col>FFMC</th><th scope=col>DMC</th><th scope=col>DC</th><th scope=col>ISI</th><th scope=col>temp</th><th scope=col>RH</th><th scope=col>wind</th><th scope=col>rain</th><th scope=col>area</th></tr>
<tr><th></th><th scope=col><int></th><th scope=col><int></th><th scope=col><int></th><th scope=col><int></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><int></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th></tr>
</thead>
<tbody>
<tr><th scope=row>1</th><td>7</td><td>5</td><td> 3</td><td>5</td><td>86.2</td><td>26.2</td><td> 94.3</td><td>5.1</td><td> 8.2</td><td>51</td><td>6.7</td><td>0.0</td><td>0</td></tr>
<tr><th scope=row>2</th><td>7</td><td>4</td><td>10</td><td>2</td><td>90.6</td><td>35.4</td><td>669.1</td><td>6.7</td><td>18.0</td><td>33</td><td>0.9</td><td>0.0</td><td>0</td></tr>
<tr><th scope=row>3</th><td>7</td><td>4</td><td>10</td><td>6</td><td>90.6</td><td>43.7</td><td>686.9</td><td>6.7</td><td>14.6</td><td>33</td><td>1.3</td><td>0.0</td><td>0</td></tr>
<tr><th scope=row>4</th><td>8</td><td>6</td><td> 3</td><td>5</td><td>91.7</td><td>33.3</td><td> 77.5</td><td>9.0</td><td> 8.3</td><td>97</td><td>4.0</td><td>0.2</td><td>0</td></tr>
<tr><th scope=row>5</th><td>8</td><td>6</td><td> 3</td><td>7</td><td>89.3</td><td>51.3</td><td>102.2</td><td>9.6</td><td>11.4</td><td>99</td><td>1.8</td><td>0.0</td><td>0</td></tr>
</tbody>
</table>
Firstly, we can observe that, because our dependent variable is heavily skewed towards 0, we need to apply some transformation to the data to increase its "variability". This can be done using the logarithmic transformation $log(area+1)$, which reduces or even removes the skewness of our original data. This leaves 0 unchanged, because $log1=0$, and because the logarithm is a strictly increasing function we have: area $\in[0,1090.84] \Rightarrow \ log(area+1)\in[0,6.9956]$.
*It is also important to mention that, while some fires were recorded as having an area of 0ha, this does not mean that those fires were not significant. Having an area of 0ha means that the recorded fire was below 3600 $m^2$ in area, because the lowest non-zero measurement in our dataset is just 0.36ha. This inaccuracy in the measurements of smaller fires can possibly be attributed to the way the burnt area was measured, most likely with lasers or satellites, which are known to be inaccurate for small-scale measurements.*
```R
data$area=log(data$area+1)
```
```R
# We notice more fires in the park occuring in the 4-month period of June-September
barplot(table(data$month)/n,ylim=c(0,0.5))
# Actual percentage of fires in the interval June-September
percent=1-length(which(data$month!=6&data$month!=7&data$month!=8&data$month!=9))/n
cat("Percentage of fires in the interval June-September: ", percent)
```
We can observe that a significant portion of the fires; $\sim79\%$ lies in the 4-month period of June-September. This leads us to **"aggregate"** the values of this explanatory variable to the following: **We define the month attribute so that it is a boolean-valued variable that takes the value 1 if the month that the fire was recorded was any of the following: June, July, August, September and 0 elsewhere.**
This could lead many to question our decision, as it would be more explanatory, simple or just logical to have month take the values between 1 through 12. Actually, we took this course of action and had month be as described above. But unfortunately, this led to some very misleading results. The regression line that was produced for that model, when only the month variable was kept varied and others where constants in various levels, it showed that even in months where fires were non-existent like in October through December, there was a strong possibility of very disastrous fires taking place. This of course is ridiculous as it can be seen from the bar chart above. This phenomenon can be ascribed to the fact that the months that the most fires happened (as well as the most devastating ones), **were in a sequence**; June-September and not only that but **the frequency of the fires in these months also followed a near strictly ascending order**. This leads to the problem that for whatever values that we attributed to the other explanatory variables, the regression line on that level had a positive slope - this means that any month following September, will have (if it has) more devastasting fires that any of these 4, which is non-sense because we have insufficient data for any of the months following September (and even if fires were recorded in those months; October and December, they had their area recorded to 0ha).
Another reason that this encoding is misleading is that, for us humans, the difference between December and January is just one month, while for the model the difference between these two months would be eleven!
Another possible course of action, which we did not explore, would be to split month into 12 distinct boolean variables, one per month of the year, each taking the value 1 if the fire was recorded in that month and 0 otherwise (a sketch of this encoding is given below). This would of course increase the **perplexity** (complexity) of our model, as it introduces 12 new variables in place of 1. Adding features (predictors) tends to decrease bias at the expense of introducing additional variance, as we know. This could potentially increase the MSE of our model, if the trade-off between bias and variance favoured variance.
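As a rough illustration only, a minimal R sketch of that alternative one-indicator-per-month encoding (which we did not pursue) could look like the following; it assumes `data$month` still holds the original 1-12 values at this point:
```R
# Hypothetical sketch: one 0/1 indicator column per calendar month
month_dummies <- sapply(1:12, function(m) as.integer(data$month == m))
colnames(month_dummies) <- paste0("month_", 1:12)
head(month_dummies)
```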
```R
# Making the aforementioned changes
data[which(data$month!=6&data$month!=7&data$month!=8&data$month!=9),]$month=0
data[which(data$month!=0),]$month=1
```
## Some plots and an exploration of the connections between the explanatory variables and the other independent variables or the dependent variable
---
```R
cor(data, use="everything") # The correlations between the dependent and
# independent variables of the dataset
```
<table class="dataframe">
<caption>A matrix: 13 × 13 of type dbl</caption>
<thead>
<tr><th></th><th scope=col>X</th><th scope=col>Y</th><th scope=col>month</th><th scope=col>day</th><th scope=col>FFMC</th><th scope=col>DMC</th><th scope=col>DC</th><th scope=col>ISI</th><th scope=col>temp</th><th scope=col>RH</th><th scope=col>wind</th><th scope=col>rain</th><th scope=col>area</th></tr>
</thead>
<tbody>
<tr><th scope=row>X</th><td> 1.000000000</td><td> 0.539548171</td><td>-0.077277663</td><td>-0.0249218945</td><td>-0.02103927</td><td>-0.048384178</td><td>-0.0859161229</td><td> 0.006209941</td><td>-0.05125826</td><td> 0.085223194</td><td> 0.01879782</td><td> 0.06538717</td><td> 0.0619949083</td></tr>
<tr><th scope=row>Y</th><td> 0.539548171</td><td> 1.000000000</td><td>-0.062752510</td><td>-0.0054533368</td><td>-0.04630755</td><td> 0.007781561</td><td>-0.1011777674</td><td>-0.024487992</td><td>-0.02410308</td><td> 0.062220731</td><td>-0.02034085</td><td> 0.03323410</td><td> 0.0388382135</td></tr>
<tr><th scope=row>month</th><td>-0.077277663</td><td>-0.062752510</td><td> 1.000000000</td><td> 0.0340745434</td><td> 0.35376232</td><td> 0.682605401</td><td> 0.7933953370</td><td> 0.370698052</td><td> 0.61456238</td><td> 0.003824557</td><td>-0.19287971</td><td> 0.03535387</td><td> 0.0355575605</td></tr>
<tr><th scope=row>day</th><td>-0.024921895</td><td>-0.005453337</td><td> 0.034074543</td><td> 1.0000000000</td><td>-0.04106833</td><td> 0.062870397</td><td> 0.0001049027</td><td> 0.032909260</td><td> 0.05219034</td><td> 0.092151437</td><td> 0.03247816</td><td>-0.04834015</td><td> 0.0002081962</td></tr>
<tr><th scope=row>FFMC</th><td>-0.021039272</td><td>-0.046307546</td><td> 0.353762321</td><td>-0.0410683308</td><td> 1.00000000</td><td> 0.382618800</td><td> 0.3305117952</td><td> 0.531804931</td><td> 0.43153226</td><td>-0.300995416</td><td>-0.02848481</td><td> 0.05670153</td><td> 0.0467985637</td></tr>
<tr><th scope=row>DMC</th><td>-0.048384178</td><td> 0.007781561</td><td> 0.682605401</td><td> 0.0628703973</td><td> 0.38261880</td><td> 1.000000000</td><td> 0.6821916120</td><td> 0.305127835</td><td> 0.46959384</td><td> 0.073794941</td><td>-0.10534225</td><td> 0.07478998</td><td> 0.0671527398</td></tr>
<tr><th scope=row>DC</th><td>-0.085916123</td><td>-0.101177767</td><td> 0.793395337</td><td> 0.0001049027</td><td> 0.33051180</td><td> 0.682191612</td><td> 1.0000000000</td><td> 0.229154169</td><td> 0.49620805</td><td>-0.039191647</td><td>-0.20346569</td><td> 0.03586086</td><td> 0.0663597560</td></tr>
<tr><th scope=row>ISI</th><td> 0.006209941</td><td>-0.024487992</td><td> 0.370698052</td><td> 0.0329092595</td><td> 0.53180493</td><td> 0.305127835</td><td> 0.2291541691</td><td> 1.000000000</td><td> 0.39428710</td><td>-0.132517177</td><td> 0.10682589</td><td> 0.06766819</td><td>-0.0103468787</td></tr>
<tr><th scope=row>temp</th><td>-0.051258262</td><td>-0.024103084</td><td> 0.614562376</td><td> 0.0521903410</td><td> 0.43153226</td><td> 0.469593844</td><td> 0.4962080531</td><td> 0.394287104</td><td> 1.00000000</td><td>-0.527390339</td><td>-0.22711622</td><td> 0.06949055</td><td> 0.0534865490</td></tr>
<tr><th scope=row>RH</th><td> 0.085223194</td><td> 0.062220731</td><td> 0.003824557</td><td> 0.0921514374</td><td>-0.30099542</td><td> 0.073794941</td><td>-0.0391916472</td><td>-0.132517177</td><td>-0.52739034</td><td> 1.000000000</td><td> 0.06941007</td><td> 0.09975122</td><td>-0.0536621583</td></tr>
<tr><th scope=row>wind</th><td> 0.018797818</td><td>-0.020340852</td><td>-0.192879712</td><td> 0.0324781638</td><td>-0.02848481</td><td>-0.105342253</td><td>-0.2034656909</td><td> 0.106825888</td><td>-0.22711622</td><td> 0.069410067</td><td> 1.00000000</td><td> 0.06111888</td><td> 0.0669734893</td></tr>
<tr><th scope=row>rain</th><td> 0.065387168</td><td> 0.033234103</td><td> 0.035353869</td><td>-0.0483401530</td><td> 0.05670153</td><td> 0.074789982</td><td> 0.0358608620</td><td> 0.067668190</td><td> 0.06949055</td><td> 0.099751223</td><td> 0.06111888</td><td> 1.00000000</td><td> 0.0233113127</td></tr>
<tr><th scope=row>area</th><td> 0.061994908</td><td> 0.038838213</td><td> 0.035557561</td><td> 0.0002081962</td><td> 0.04679856</td><td> 0.067152740</td><td> 0.0663597560</td><td>-0.010346879</td><td> 0.05348655</td><td>-0.053662158</td><td> 0.06697349</td><td> 0.02331131</td><td> 1.0000000000</td></tr>
</tbody>
</table>
```R
# Creating the linear model, while NOT including the spatial independent variables,
# as these will be studied differently at the end.
fit<-lm(area~.-X-Y,data)
```
```R
summary(fit)
```
Call:
lm(formula = area ~ . - X - Y, data = data)
Residuals:
Min 1Q Median 3Q Max
-1.5460 -1.1067 -0.6090 0.8758 5.7138
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.1277109 1.3802175 0.093 0.9263
month -0.1424500 0.3058480 -0.466 0.6416
day 0.0017721 0.0303403 0.058 0.9534
FFMC 0.0076873 0.0145341 0.529 0.5971
DMC 0.0012756 0.0014796 0.862 0.3890
DC 0.0003966 0.0004425 0.896 0.3706
ISI -0.0226305 0.0171961 -1.316 0.1888
temp 0.0064831 0.0196563 0.330 0.7417
RH -0.0043101 0.0056076 -0.769 0.4425
wind 0.0752979 0.0367373 2.050 0.0409 *
rain 0.0876678 0.2143783 0.409 0.6828
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1.398 on 506 degrees of freedom
Multiple R-squared: 0.02031, Adjusted R-squared: 0.0009523
F-statistic: 1.049 on 10 and 506 DF, p-value: 0.4006
```R
library(geometry) # To use the dot function of this package
b_coeff = c(1,rep(c(0),each=10))
for (i in 3:(ncol(data)-1)){
b_coeff[i-1] = mean(data[,i])
}
# This is just to take the "best" cross-section of the values
# of the other explanatory variables that are NOT presented in the graph,
# to draw a "good" line from the regression plane onto the corresponding
# plane created by the dependent and corresponding independent variable.
```
Warning message:
"package 'geometry' was built under R version 4.0.5"
```R
plot(data$month,data$area, xlab="month", ylab="area",
main="Scatter plot of area and month")
barplot(table(data$month)/n,ylim=c(0,1), main="Frequency of the month variable")
```
```R
day_fun=function(x){fit$coeff[3]*x+dot(b_coeff[-3],fit$coeff[-3])}
plot(data$day,data$area, xlab="day", ylab="area",
main="Scatter plot of area and day")
lines(x=data$day,y=day_fun(data$day))
# As expected, there seems to be zero statistical significance between the day
# the fire occurred and its destructiveness.
barplot(table(data$day)/n,ylim=c(0,1),
main="Fire frequency on the basis of days")
barplot(table(data[which(data$area!=0),]$day)/n,ylim=c(0,1),
main="Fire frequency of SIGNIFICANT fires on the basis of days")
# As expected, there also seems to be no statistically significant pattern in the
# frequency of fires with respect to the day on which they occurred.
```
```R
FFMC_fun=function(x){fit$coeff[4]*x+dot(b_coeff[-4],fit$coeff[-4])}
plot(data$FFMC,data$area,xlab="FFMC", ylab="area",
main="Scatter plot of area with respect to FFMC")
lines(x=data$FFMC,y=FFMC_fun(data$FFMC))
```
---
Here we are able to see a trend: for higher values of the fuel moisture index FFMC, more intense fires were recorded. Actually, according to fire science, subtracting the FFMC value from 100 provides an estimate of the equivalent (approximately 10h) fuel moisture content (**FMC**; $FMC=100-FFMC$), most accurate when FFMC values are roughly above 80, which is exactly the case for our data. Now, according to the paper **"Moisture content thresholds for ignition and rate of fire spread for various dead fuels in northeast forest ecosystems of China"**, cited below, _"Fuel moisture content is one of the important factors that determine ignition probability and fire behaviour in forest ecosystems."_ **It supports that as FMC decreases, and thus FFMC increases in the same manner, the area of the fire increases.** **This is also supported by the FWI structure flow chart provided at the beginning**, as the fire behaviour index ISI (Initial Spread Index) is affected by FFMC (we will see that ISI increases as FFMC increases), and as ISI increases it is logical that the area of the fire increases as well, by the approximate functional relation that we proved at (\ref{eq:1}).
---
```R
DMC_fun=function(x){fit$coeff[5]*x+dot(b_coeff[-5],fit$coeff[-5])}
plot(data$DMC,data$area,xlab="DMC", ylab="area",
main="Scatter plot of area with respect to DMC")
lines(x=data$DMC,y=DMC_fun(data$DMC))
# Moisture in the DMC layer is expected to help prevent burning
# in material deeper down in the available fuel.
# Thus DMC decreasing => burnt area decreasing. This is apparent
# from the many 0 values that we have for burnt area for smaller DMC.
```
```R
DC_fun=function(x){fit$coeff[6]*x+dot(b_coeff[-6],fit$coeff[-6])}
plot(data$DC,data$area,xlab="DC", ylab="area",
main="Scatter plot of area with respect to DC")
lines(x=data$DC,y=DC_fun(data$DC))
# Drought Code increasing => burnt area increasing, as expected.
# But still many ~0 area fires are noticed for higher values of DC
# which could be attributed to many other factors.
```
```R
ISI_fun=function(x){fit$coeff[7]*x+dot(b_coeff[-7],fit$coeff[-7])}
plot(data$ISI,data$area,xlab="ISI", ylab="area",
main="Scatter plot of area with respect to ISI")
plot(data$wind,data$ISI,xlab="wind", ylab="ISI",
main="Scatter plot of ISI with respect to wind")
# Checking the validity of the theory we referred to in the introduction
cat("Covariance and correlation between ISI and wind speed: ",
cov(data$ISI,data$wind)," and ",cor(data$ISI,data$wind), "respectively. \n")
plot(data$FFMC,data$ISI,xlab="FFMC", ylab="ISI",
main="Scatter plot of ISI with respect to FFMC")
# Checking the validity of the theory we referred to in the introduction
cat("Covariance between ISI and FFMC: ",
cov(data$ISI,data$FFMC))
```
---
It is clear from the scatter plot between ISI and FFMC that the theory is validated: ISI increases as FFMC increases, in a quadratic or maybe even exponential way.
The problem we see is between ISI and wind speed. While the theory states that ISI is positively affected by wind speed, this does not appear in the corresponding scatter plot, and their sample covariance and correlation could be considered rather small. This is mostly attributed to the fact that _while wind speed is by nature a continuously-valued variable, this does not show up in our data. In all of the 517 observations collected, wind speed only takes values from the set:_ $\{0.4, 0.9, 1.3, 1.8, 2.2,2.7,3.1,3.6,4,4.5,4.9,5.4,5.8,6.3,6.7,7.2,7.6,8,8.5,8.9,9.4 \}$ - **only 21 values, which gives it a kind of discrete nature** and leaves us unable to make cogent inferences just from the scatter plot; further statistical analysis is required before we make any conclusive statements.
---
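As a quick sanity check of the claim about the discrete nature of the recorded wind speeds, one could run something like:
```R
# Number of distinct wind-speed values in the dataset, and the values themselves
length(unique(data$wind))
sort(unique(data$wind))
```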
```R
temp_fun=function(x){fit$coeff[8]*x+dot(b_coeff[-8],fit$coeff[-8])}
plot(data$temp,data$area, xlab="Temperature", ylab="area",
main="Scatter plot of area with respect to temperature")
lines(x=data$temp,y=temp_fun(data$temp))
plot(data$temp,data$month, xlab="Temperature", ylab="month",
main="Scatter plot of month with respect to temperature")
cat("Covariance and correlation between month and temperature: ",
cov(data$month,data$temp)," and ",cor(data$month,data$temp), "respectively. \n")
plot(data$temp,data$FFMC, xlab="Temperature", ylab="FFMC",
main="Scatter plot of FFMC with respect to temperature")
cat("Covariance and correlation between FFMC and temperature: ",
cov(data$FFMC,data$temp)," and ",cor(data$FFMC,data$temp), "respectively. \n")
plot(data$temp,data$DMC, xlab="Temperature", ylab="DMC",
main="Scatter plot of DMC with respect to temperature")
cat("Covariance and correlation between DMC and temperature: ",
cov(data$DMC,data$temp)," and ",cor(data$DMC,data$temp), "respectively. \n")
plot(data$temp,data$DC,xlab="Temperature", ylab="DC",
main="Scatter plot of DC with respect to temperature")
cat("Covariance and correlation between DC and temperature: ",
cov(data$DC,data$temp)," and ",cor(data$DC,data$temp), "respectively. \n")
```
---
Here we notice clearly that area seems to be positively correlated with temperature (see also the covariance and correlation matrix provided above).
Also, it is logical to check the scatter plot between month and temperature, where we see that, as should be expected, higher temperatures (>22 degrees Celsius) **only** appear in the "hot-months" group of June through September, and with great density. Not only that, but we also see the strong positive correlation between these two independent variables.
Lastly, the FWI states that temperature affects **ALL** of the Fuel Moisture codes; FFMC, DMC and DC and this is established by the above scatter plots and calculations of their pairwise covariances and correlations.
---
```R
RH_fun=function(x){fit$coeff[9]*x+dot(b_coeff[-9],fit$coeff[-9])}
plot(data$RH,data$area,xlab="RH", ylab="area",
main="Scatter plot of area with respect to relative humidity")
lines(x=data$RH,y=RH_fun(data$RH))
plot(data$RH,data$FFMC,ylim=c(75,100),xlab="RH", ylab="FFMC",
main="Scatter plot of FFMC with respect to relative humidity")
cat("Covariance and correlation between RH and FFMC: ",
cov(data$RH,data$FFMC)," and ",cor(data$RH,data$FFMC), "respectively. \n")
plot(data$RH,data$DMC,xlab="RH", ylab="DMC",
main="Scatter plot of DMC with respect to relative humidity")
cat("Covariance and correlation between RH and DMC: ",
cov(data$RH,data$DMC)," and ",cor(data$RH,data$DMC), "respectively. \n")
```
---
Since the theory states that the lower the relative humidity of the air (measured as a percentage), the higher the Fine Fuel Moisture Code (FFMC), the most destructive fires should be witnessed at lower percentages of air humidity. This is of course due to the fact that as FFMC increases, so does the likelihood of the fire being more destructive!
Now, while the scatter plot of FFMC vs. RH does support the above statement, the same cannot be said for DMC vs. RH: according to the FWI's structure, DMC is affected by RH, but their scatter plot shows no sign of any pattern.
---
```R
wind_fun=function(x){fit$coeff[10]*x+dot(b_coeff[-10],fit$coeff[-10])}
plot(data$wind,data$area,xlab="Wind", ylab="area",
main="Scatter plot of area with respect to wind speed")
lines(x=data$wind,y=wind_fun(data$wind))
```
```R
rain_fun=function(x){fit$coeff[11]*x+dot(b_coeff[-11],fit$coeff[-11])}
plot(data$rain,data$area,xlab="Rain", ylab="area",
main="Scatter plot of area with respect to outside rain")
lines(x=data$rain,y=rain_fun(data$rain))
```
---
As expected, **NEARLY ALL** of the fires that were recorded in this time frame at Montesinho Natural Park, had a common attribute - **there was no (or negligible) outside rain recorded at that time.**
___
Residual Analysis
---
___
```R
res=fit$residuals
plot(fit$fitted,res,xlab="Fitted values", ylab="Residuals",
main="Residuals vs Fitted Values")
abline(h=0)
```
From the shape of the graph, it looks like the residuals **might be correlated**, while homoscedasticity seems to hold. This impression is due to the "line" that forms in this plot.
This "line" can be attributed to several factors. Firstly, the error terms might indeed be correlated/not independent. This might be the case for two prevalent reasons:
1. It is possible that, in cases of rekindling, the people who gathered the data recorded these "fires" as new ones. Because of how close they happened in time, the weather-observation variables were likely the same and the fuel moisture codes were affected by the previous fire, leading to correlated measurements.
2. If a fire happened somewhere in the park where a previous fire was recorded, it is possible that the effects of the previous one affected the extent of the next one, again leading to correlation between the two measurements. **This would of course need to be validated using the spatial variables that we excluded from our model**.
But this "line" could also be attributed to the fact that a significant number of fires were recorded as having 0ha of burnt area, so the phenomenon we see **might have nothing to do with correlation/dependence between the measurements**.
We need to plot the residuals against the independent variables as well, to validate this. This will also help us establish that homoscedasticity holds.
```R
par(mfrow=c(2,1))
plot(data$month,res,xlab="month", ylab="Residuals",
main="Residuals vs month")
abline(h=0)
plot(data$day,res,xlab="day", ylab="Residuals",
main="Residuals vs day")
abline(h=0)
par(mfrow=c(2,1))
plot(data$FFMC,res,xlab="FFMC", ylab="Residuals",
main="Residuals vs FFMC")
abline(h=0)
plot(data$DMC,res,xlab="DMC", ylab="Residuals",
main="Residuals vs DMC")
abline(h=0)
par(mfrow=c(2,1))
plot(data$DC,res,xlab="DC", ylab="Residuals",
main="Residuals vs DC")
abline(h=0)
plot(data$ISI,res,xlab="ISI", ylab="Residuals",
main="Residuals vs ISI")
abline(h=0)
par(mfrow=c(2,1))
plot(data$temp,res,xlab="Temperature", ylab="Residuals",
main="Residuals vs Temperature")
abline(h=0)
plot(data$RH,res,xlab="RH", ylab="Residuals",
main="Residuals vs RH")
abline(h=0)
par(mfrow=c(2,1))
plot(data$wind,res,xlab="Wind speed", ylab="Residuals",
main="Residuals vs Wind speed")
abline(h=0)
plot(data$rain,res,xlab="Rain", ylab="Residuals",
main="Residuals vs Rain")
abline(h=0)
```
So it is quite safe to assume that:
\begin{equation} \label{eq:2}
Var(\vec{ε})= σ^2\mathbf{I}_n
\end{equation}
where $\sigma>0$ is a constant, from now on.
As for the normality of the residuals:
```R
e = fit$residuals
sigma=summary(fit)$sigma
# The diagonal of the Hat/Influence matrix
h = hatvalues(fit)
e_studentized = e/(sigma*sqrt(1-h))
qqnorm(e_studentized)
qqline(e_studentized)
grid()
```
While a kind of "divergence" is apparent for the negative residuals - **which is of course attributed to the "line" that we observed in the scatter plot of the residuals against the fitted values** - the plot shows that, to a good degree of precision, the normality assumption on the error terms of our model holds. This means that (\ref{eq:2}) becomes:
\begin{equation} \label{eq:3}
\vec{ε}\stackrel{\text{i.i.d}}{\sim}\mathcal{N}(\vec{0},\sigma^2\mathbf{I}_n)
\end{equation}
Also, under (\ref{eq:3}), we know that $\text{Cov}(\vec{e},\vec{\hat{Y}})=\mathbf{0}\in\mathbb{R}^{n\times n}$, which holds in our case (so we can feel better about our assumption (\ref{eq:3})):
```R
cat(cov(res,fit$fitted),"~= 0")
```
4.157096e-17 ~= 0
---
## Multicollinearity
___
Now, from the FWI structure and from the scatter plots between the explanatory variables that we analyzed above, it looks like **there is a possibility of having strong correlations between the explanatory variables**, leading to _multicollinearity_. This can be checked by looking at the determinant of the matrix $\mathbf{X}^T\mathbf{X}$: the determinant is a continuous function of its elements, so if it is close to 0, some columns are really close to being a linear combination of some others.
```R
X_matrix = model.matrix(fit, data[,-c(1,2)])
det(t(X_matrix)%*%X_matrix)
```
4.33437756101153e+42
So our previous assumption is clearly wrong, as the determinant is **huge**! This also means **that the variance of the estimates of the coefficients is going to be quite small**, as the diagonal values of the inverse matrix $(\mathbf{X}^T\mathbf{X})^{-1}$ are expected to be quite small in size, which they are, as seen below. **This means that our confidence intervals further down the line** (and accordingly the corresponding t-tests about the coefficients) **are going to be rather "narrow"!**
```R
diag(solve(t(X_matrix)%*%X_matrix))
```
<style>
.dl-inline {width: auto; margin:0; padding: 0}
.dl-inline>dt, .dl-inline>dd {float: none; width: auto; display: inline-block}
.dl-inline>dt::after {content: ":\0020"; padding-right: .5ex}
.dl-inline>dt:not(:first-of-type) {padding-left: .5ex}
</style><dl class=dl-inline><dt>(Intercept)</dt><dd>0.975042721092408</dd><dt>month</dt><dd>0.0478784150623763</dd><dt>day</dt><dd>0.000471161194951573</dd><dt>FFMC</dt><dd>0.000108119315559541</dd><dt>DMC</dt><dd>1.12049918735283e-06</dd><dt>DC</dt><dd>1.00215763862933e-07</dd><dt>ISI</dt><dd>0.000151352150298896</dd><dt>temp</dt><dd>0.000197756304152428</dd><dt>RH</dt><dd>1.60944143860101e-05</dd><dt>wind</dt><dd>0.000690786515789251</dd><dt>rain</dt><dd>0.0235228602209352</dd></dl>
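As a complementary diagnostic, not part of the original analysis, one could also inspect the condition number of $\mathbf{X}^T\mathbf{X}$; a rough R sketch:
```R
# Condition number of X^T X; very large values would indicate near-linear
# dependence among the columns of the design matrix
kappa(t(X_matrix) %*% X_matrix, exact = TRUE)
```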
___
Some hypothesis tests
---
___
Now we will conduct some hypothesis tests that will help us better understand the results that will emerge from our model selection algorithms, which we will see later in this paper. First we begin by analyzing the summary of the full linear model, given by R.
```R
sum_fit<-summary(fit)
sum_fit
```
Call:
lm(formula = area ~ . - X - Y, data = data)
Residuals:
Min 1Q Median 3Q Max
-1.5460 -1.1067 -0.6090 0.8758 5.7138
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.1277109 1.3802175 0.093 0.9263
month -0.1424500 0.3058480 -0.466 0.6416
day 0.0017721 0.0303403 0.058 0.9534
FFMC 0.0076873 0.0145341 0.529 0.5971
DMC 0.0012756 0.0014796 0.862 0.3890
DC 0.0003966 0.0004425 0.896 0.3706
ISI -0.0226305 0.0171961 -1.316 0.1888
temp 0.0064831 0.0196563 0.330 0.7417
RH -0.0043101 0.0056076 -0.769 0.4425
wind 0.0752979 0.0367373 2.050 0.0409 *
rain 0.0876678 0.2143783 0.409 0.6828
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1.398 on 506 degrees of freedom
Multiple R-squared: 0.02031, Adjusted R-squared: 0.0009523
F-statistic: 1.049 on 10 and 506 DF, p-value: 0.4006
First, from the significance codes provided, we see that only the **wind** explanatory parameter is statistically significant at the 0.05 significance level: the p-value of the two-tail test $H_0: \ \beta_{9}=0 \text{ vs } H_1: \ \beta_{9}\neq0$, where $\beta_{9}$ is the parameter corresponding to the explanatory variable for wind speed $X_9$, is equal to $0.0409$, which is less than $0.05$, and as such $H_0$ is rejected.
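For completeness, the corresponding 95% confidence interval for the wind coefficient (not shown in the original output) can be obtained with:
```R
# 95% confidence interval for the wind coefficient of the full model
confint(fit, "wind", level = 0.95)
```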
As for the F-test:
\begin{equation} \label{eq:4}
H_0: \beta_1=\dots=\beta_{10}=0 \text{ vs } H_1: \exists i\in\{1,\dots,10\}: \ \beta_i\neq0
\end{equation}
We see that its p-value is equal to $0.4006$, which means that even for a high significance level like $α=0.1$ (for higher significance levels we reject the null hypothesis more "easily"), the null hypothesis is **NOT REJECTED**. In other words, under the null hypothesis that all of the regression parameters are equal to zero, we observe that the model, in its current "full" state, is not better than a random one. This is because even under the null hypothesis, the probability of seeing an outcome at least as extreme as this one is quite large and equal to 40%, which is exactly the p-value. This can be attributed to several factors: some variables might be affecting the model to such a severe degree that this hypothesis test has no choice but to **fail to reject** the null hypothesis, even if some other variables are significant in this situation. **But most likely, the reason we observe this has to do with the significant number of zeroes recorded in our dependent variable! This is briefly investigated at the end, in the "Modelling the significant conflagrations" section.**
Now, we see that wind is quite significant in our model, which could be attributed either to the fact that the theory behind the FWI system supports that it contributes explicitly (and implicitly through FFMC) to the value of ISI, or to the discrete nature of its recorded values, when it should be more continuous in reality. Here we will start by taking it and ISI out of the model, and see what the resulting model's summary tells us. This, as mentioned before, could possibly help us better understand the results that will come out of the model selection algorithms. Also, it is important to notice here that our model has an abysmally low $R^2$ and an even lower adjusted $R^2_{\text{adj}}$, which is evidence that our fit is not "well-suited" due to its high perplexity in the current state - there are too many explanatory variables.
```R
fit_simpler=lm(area~.-X-Y-wind-ISI,data)
summary(fit_simpler)
```
Call:
lm(formula = area ~ . - X - Y - wind - ISI, data = data)
Residuals:
Min 1Q Median 3Q Max
-1.4374 -1.1013 -0.6246 0.8754 5.7822
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.0874179 1.2687913 0.857 0.392
month -0.2001786 0.3024238 -0.662 0.508
day 0.0041600 0.0303750 0.137 0.891
FFMC 0.0010909 0.0131150 0.083 0.934
DMC 0.0014892 0.0014801 1.006 0.315
DC 0.0003906 0.0004366 0.895 0.371
temp -0.0036554 0.0191740 -0.191 0.849
RH -0.0056195 0.0055903 -1.005 0.315
rain 0.1203099 0.2141800 0.562 0.575
Residual standard error: 1.402 on 508 degrees of freedom
Multiple R-squared: 0.01058, Adjusted R-squared: -0.004999
F-statistic: 0.6792 on 8 and 508 DF, p-value: 0.7101
Now we perform the corresponding F-test to see whether this model is "better" at describing/modelling the task at hand than the "full" one:
```R
anova(fit_simpler,fit)
```
<table class="dataframe">
<caption>A anova: 2 × 6</caption>
<thead>
<tr><th></th><th scope=col>Res.Df</th><th scope=col>RSS</th><th scope=col>Df</th><th scope=col>Sum of Sq</th><th scope=col>F</th><th scope=col>Pr(>F)</th></tr>
<tr><th></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th><th scope=col><dbl></th></tr>
</thead>
<tbody>
<tr><th scope=row>1</th><td>508</td><td>998.4229</td><td>NA</td><td> NA</td><td> NA</td><td> NA</td></tr>
<tr><th scope=row>2</th><td>506</td><td>988.6030</td><td> 2</td><td>9.819886</td><td>2.513073</td><td>0.08202974</td></tr>
</tbody>
</table>
The p-value of this F-test ($\approx 0.082$) is small enough that, at the significance level of $\alpha=0.1$, we have to reject the null hypothesis of taking out the explanatory variables wind and ISI; in other words, they should stay in the model.
## Model Selection Algorithms
---
**We begin with the subset selection methods that focus on penalizing the perplexity of our model.** We use the Best Subset Selection method: the number of our explanatory variables is rather small, so we know that the time complexity of the algorithm, which is $\mathcal{O}(2^q)$, is not too high for $q=10$.
```R
library(MASS)
library(leaps)
bss_penalty = regsubsets(area~.-X-Y, data, nvmax=10)
bss_summ=summary(bss_penalty)
bss_summ
```
Warning message:
"package 'leaps' was built under R version 4.0.5"
Subset selection object
Call: regsubsets.formula(area ~ . - X - Y, data, nvmax = 10)
10 Variables (and intercept)
Forced in Forced out
month FALSE FALSE
day FALSE FALSE
FFMC FALSE FALSE
DMC FALSE FALSE
DC FALSE FALSE
ISI FALSE FALSE
temp FALSE FALSE
RH FALSE FALSE
wind FALSE FALSE
rain FALSE FALSE
1 subsets of each size up to 10
Selection Algorithm: exhaustive
month day FFMC DMC DC ISI temp RH wind rain
1 ( 1 ) " " " " " " "*" " " " " " " " " " " " "
2 ( 1 ) " " " " " " " " "*" " " " " " " "*" " "
3 ( 1 ) " " " " " " " " "*" " " " " "*" "*" " "
4 ( 1 ) " " " " " " "*" " " "*" " " "*" "*" " "
5 ( 1 ) " " " " " " "*" "*" "*" " " "*" "*" " "
6 ( 1 ) " " " " "*" "*" "*" "*" " " "*" "*" " "
7 ( 1 ) " " " " "*" "*" "*" "*" " " "*" "*" "*"
8 ( 1 ) "*" " " "*" "*" "*" "*" " " "*" "*" "*"
9 ( 1 ) "*" " " "*" "*" "*" "*" "*" "*" "*" "*"
10 ( 1 ) "*" "*" "*" "*" "*" "*" "*" "*" "*" "*"
The table above shows the linear models that had the **lowest** $SS_{res}$ for each possible number of variables (shown in the first column). We observe that the models **are not nested**, thus **if we used the forward inclusion or backward exclusion methods, we would "miss" some of the "best" models**. This might be the case because most of our explanatory parameters - **especially the Fuel Moisture Codes** - are affected by the fire weather observations (for example, temperature and rain affect both DMC and DC).
Starting with the first model in the list, the variable that is included in it is surprisingly DMC (Duff Moisture Code), which means that it is the model that has the lowest $SS_{res}$ compared to all the other models with just one non-intercept parameter - this could be attributed to the fact that we are "missing" the BUI - Buildup Index from our data, which is the second Fire Behaviour index that affects FWI. For the model with two explanatory variables which has the lowest residual sum of squares, we see that it picks up the explanatory variables Wind and DC (Drought Code), which is something that we did expect, especially wind being one of them, as it is the most statistically significant parameter in our "full" model.
To get a clearer picture about which model is the “best”, using the results from the best subset selection algorithm with penalty criteria, we plot the corresponding graph for every criterion (Adjusted R-Squared, Mallow's $C_p$ and BIC), that shows the value of each indicator according to the number of variables in the model.
```R
par(mfrow=c(2,2))
plot(bss_summ$rss, xlab="Number of Variables", ylab="SSres", type="l")
K0=which.min(bss_summ$rss)
points(K0, bss_summ$rss[K0], col="red",cex=2,pch=20)
plot(bss_summ$adjr2,xlab="Number of Variables", ylab="Adjusted RSq", type="l")
K1=which.max(bss_summ$adjr2)
points(K1, bss_summ$adjr2[K1], col="red",cex=2,pch=20)
plot(bss_summ$cp,xlab="Number of Variables", ylab="C_p", type="l")
K2=which.min(bss_summ$cp)
points(K2, bss_summ$cp[K2], col="red",cex=2,pch=20)
plot(bss_summ$bic,xlab="Number of Variables", ylab="BIC", type="l")
K3=which.min(bss_summ$bic)
points(K3, bss_summ$bic[K3], col="red",cex=2,pch=20)
```
Surprisingly, the criteria **do not seem to agree on which model is best, in terms of the number of explanatory variables used, according to their corresponding indicators**.
Firstly, the $SS_{res}$, as expected, decreases monotonically as the number of variables increases, due to how it is defined by the method of Least Squares.
The Adjusted R-Squared criterion, which penalizes complexity if it does not decrease SSres "significantly", suggests that the best model is the one with the 4 parameters of DMC, ISI, RH and Wind.
Using the Mallow’s $C_p$ method, we conclude that the ideal model uses only 2 variables (Wind and DC). However, it is worth mentioning that the models with 3 and 4 parameters maintain a low Mallow's $C_p$ indicator too.
Lastly, the BIC suggests the simplest model (with just 1 variable) as the penalty term is much larger for this criterion, so it favours models with "lower" perplexity.
We can also visualize these results more clearly by comparing the model scores for each criterion, using shaded contour plots.
```R
par(mfrow=c(1,3))
plot(bss_penalty, scale = "Cp")
title("Cp")
plot(bss_penalty, scale = "adjr2")
title("AdjR2")
plot(bss_penalty, scale = "bic")
title("BIC")
```
We proceed with the Cross-Validation Method for choosing the "best" model. The Cross-Validation method uses "different portions of the data" to estimate the performance of the model on some "new" independent data.
Unfortunately, to use this method, we have to make calculations of the form: $\mathbf{X}_{Μ_p^{-k}}\hat{\beta}_{Μ_p^{-k}}$, where the matrix $\mathbf{X}_{Μ_p^{-k}}$ has as columns the ones that correspond to the $p$ parameters that are included in the model ${Μ_p^{-k}}$ that was fitted/trained using the data from the set $\mathcal{T}\smallsetminus\mathcal{T}_k$, where $\mathcal{T}$ is our "full" training set that was "folded" into the partition $\mathcal{T}_1,\dots,\mathcal{T}_\mathcal{K}$, and as rows those that correspond to the validation set $\mathcal{T}_k$ from our original training data set. This calculation is used to compute the cross-validation error of the model $Μ_p^{-k}$, using as validation set the one that was excluded from our complete training set, $\mathcal{T}_k$.
As such, we create a function that takes as input the "best" models (from the corresponding training/validation set) of each **size** (as this function will be executed in each _inner_ for-loop - see below) that were generated by the Best Subset selection method, alongside the data that will be used to validate the model.
```R
predict.regsubsets=function(model, validation_set, num_par,...){
  # Recover the formula used in the regsubsets call
  form=as.formula(model$call[[2]])
  # Build the design matrix of the validation set for that formula
  design_matrix=model.matrix(form, validation_set)
  # Coefficients of the "best" model with num_par variables
  beta_M =coef(model, id=num_par)
  # Keep only the columns that appear in that model
  xvars=names(beta_M)
  X_M = design_matrix[,xvars]
  # Return the predicted values X_M * beta_M
  X_M%*%beta_M
}
```
```R
K=10 # number of "folds"
q=10 # number of explanatory variables
set.seed(4) # setting a seed number for consistent outputs
partition = sample(1:K, nrow(data), replace=TRUE)
table(partition)
```
We "folded" our training set into a partition of 10 roughly equal in size subsets, using a random sample from the discrete uniform distribution. We validate this statement by noting all 10 sets have "roughly" 52 members.
```R
cv_errors_matrix = matrix(
NA, K, q, dimnames=list(paste("Model with validation set T_",1:K),
paste(1:q," variables model")))
```
We create the matrix that will have in its $(k,p)$ spot, the number (mean squared prediction error):
\begin{equation}
\hat{Err}_{VS;\mathcal{T}_k}(Μ_p^{-k})=\frac{1}{|\mathcal{T}_k|}\sum_{(x_i,y_i)\in\mathcal{T}_k}(y_i-\hat{f}_{Μ_p^{-k}}(x_i))^2
\end{equation}
```R
for(k in 1:K){
bss_cv_train=regsubsets(area~.-X-Y,data=data[partition!=k,],nvmax=10)
for(p in 1:q){
cv_validate=predict(model=bss_cv_train, validation_set=data[partition==k,],num_par=p)
cv_errors_matrix[k,p]=mean((data$area[partition==k]-cv_validate)^2)
}
}
cv_errors_matrix
```
We now take the mean error of all the models of each size, which means taking the column-wise mean of the matrix above, to find which "best" model size (essentially the number of variables, $\bar{p}$) has the lowest mean error.
```R
mean_cv_errors = apply(cv_errors_matrix, MARGIN=2, FUN=mean)
as.matrix(mean_cv_errors)
```
```R
p.cv=which.min(mean_cv_errors)
plot(mean_cv_errors,type='b')
points(p.cv, mean_cv_errors[p.cv], col="red",cex=2,pch=20)
```
This tells us that the model with just 3 explanatory variables that the Best Subset selection algorithm gives us, using **ALL** of our data to "train" it, is the "best" one according to the C-V method. This is the model that includes the explanatory variables DC, RH and wind. It is also important to mention that the models with 2 and just 1 variable have mean errors close to that of the 3-variable model, while the ones with 4 and more variables increase considerably in mean error, which means that the C-V method favours less perplexing models.
Also, changing the seed number from 4 to something else, we see that we always get either $\bar{p}=2$ or $3$.
**So C-V agrees with either Mallow's $C_p$ or with none of the criteria that penalize perplexity.**
```R
bss_cv=regsubsets(area~.-X-Y,data,nvmax=10)
summary(bss_cv)
```
```R
coef(bss_cv,3)
```
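Since other seeds sometimes favour the 2-variable model, it may also be worth inspecting its coefficients (a small addition, not part of the original run):
```R
# Coefficients of the best 2-variable subset model (DC and wind)
coef(bss_cv, 2)
```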
With all of the above, we can now use these "best" models to make predictions, for example of the mean burnt area, using confidence intervals, supposing that at the time of writing this (21/11/2021 16:28) there was a fire at Montesinho park (where we ignore **where** in the park it *may* happen, due to how we analyzed our data). We gathered data from https://weawow.com/c9561135, where we have: $wind=7.2km/h$, $RH=57$, $month=0$. Because the site does not provide data for DC and DMC, we take a rather low value for DC (since the park had quite some rain in the last 10 days), and for DMC we take the mean of the DMC values in our dataset from the "cold-month" group as a rough indicator. These yield the following intervals:
```R
best_model_bic= lm(area~DMC,data)
best_model_cp= lm(area~DC+wind,data)
best_model_cv=lm(area~DC+RH+wind,data)
best_model_r2adj= lm(area~DMC+ISI+RH+wind,data)
bic = data.frame(DMC=mean(data[which(data$month==0),]$DMC))
cp = data.frame(DC=80, wind=7.2)
cv = data.frame(DC=80, RH=57, wind=7.2)
r2adj = data.frame(DMC=mean(data[which(data$month==0),]$DMC),
ISI=mean(data[which(data$month==0),]$ISI),
RH=57, wind=7.2)
predict(best_model_bic, newdata=bic,interval='conf')
predict(best_model_cp,cp,interval='conf')
predict(best_model_cv,cv,interval='conf')
predict(best_model_r2adj,r2adj,interval='conf')
# We see that these intervals are indeed rather "narrow"
# due to the fact that the corresponding
# determinant of each model's X^T*X matrix is quite large.
```
And lastly, so that it does not feel like we left part of the provided data unused, we create a heatmap that indicates where the more destructive fires happened in the park. We see a rather uniform distribution of the fires, so we do not feel awkward about having "ignored" the spatial coordinates in the data analysis above.
**Code used to create the heat map:**
```{r}
data<-read.csv("forestfires.csv")
library(ggplot2)
coord2<-data[,c(1,2,13)]
coord2$area<-log(coord2$area+1)
coord2$Y<-10-coord2$Y
ggplot(coord2, aes(x = X, y = Y, fill = area)) +
geom_tile() +
scale_fill_gradient(low = "yellow", high = "red")
```
---
## Modelling the significant conflagrations
In all of the above sections, the common culprit in our explanations for some of the troubling observations was the abundance of zeroes in our dataset. Even if the fires recorded were not actually of "zero hectares" in area, they were still put down as 0, which negates the information they could provide to the model. This means that having different values of the fire weather observations, like temperature, wind or relative humidity, all pointing to zero area, kills any information or correlation that these could contribute to the dependent variable.
These "zero area" fires, if **NOT** ignored bring in extraneous factors that are NOT explained by our data. This is because a deciding factor in the area of the fire, is also to the speed of reaction of the firefighters, not just the fire weather indeces of our dataset. If firefighters were not responsive enough, to maintain the fire to be one that would be put down as having "zero area" (<0.36ha), then the variables that we have in our model, start to affect the fire to a severe degree, which is exactly what we want (to the expense of burning Montesinho's forests), as these will help us understand their effect on the dependent variable.
Also, we still have enough "non-zero area" fires to make statistical inferences - 270 observations, to be exact.
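A quick way to verify this count (assuming `data` is the log-transformed dataset loaded above, in which the zero areas remain exactly 0) is:
```R
# Observations whose (transformed) burnt area is non-zero
length(which(data$area != 0))
```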
```R
data_no_zeroes<-read.csv("forestfiresdata.csv")
names(data_no_zeroes)[1]<-"X" # Due to poor encoding from .xlsx to .csv,
# there needs to be a slight renaming of
# the first variable.
data_no_zeroes=data_no_zeroes[which(data_no_zeroes$area!=0),]
data_no_zeroes$area=log(data_no_zeroes$area+1)
data_no_zeroes[which(data_no_zeroes$month!=6&data_no_zeroes$month!=7&data_no_zeroes$month!=8&data_no_zeroes$month!=9),]$month=0
data_no_zeroes[which(data_no_zeroes$month!=0),]$month=1
fit_no_zeroes = lm(area~.-X-Y, data=data_no_zeroes)
summary(fit_no_zeroes)
e = fit_no_zeroes$residuals
sigma=summary(fit_no_zeroes)$sigma
# The diagonal of the Hat/Influence matrix
h = hatvalues(fit_no_zeroes)
e_studentized = e/(sigma*sqrt(1-h))
qqnorm(e_studentized)
qqline(e_studentized)
grid()
res=fit_no_zeroes$residuals
plot(fit_no_zeroes$fitted,res,xlab="Fitted values", ylab="Residuals",
main="Residuals vs Fitted Values")
abline(h=0)
```
```R
bss_penalty = regsubsets(area~.-X-Y, data_no_zeroes, nvmax=10)
bss_summ=summary(bss_penalty)
bss_summ
par(mfrow=c(2,2))
plot(bss_summ$rss, xlab="Number of Variables", ylab="SSres", type="l")
K0=which.min(bss_summ$rss)
points(K0, bss_summ$rss[K0], col="red",cex=2,pch=20)
plot(bss_summ$adjr2,xlab="Number of Variables", ylab="Adjusted RSq", type="l")
K1=which.max(bss_summ$adjr2)
points(K1, bss_summ$adjr2[K1], col="red",cex=2,pch=20)
plot(bss_summ$cp,xlab="Number of Variables", ylab="C_p", type="l")
K2=which.min(bss_summ$cp)
points(K2, bss_summ$cp[K2], col="red",cex=2,pch=20)
plot(bss_summ$bic,xlab="Number of Variables", ylab="BIC", type="l")
K3=which.min(bss_summ$bic)
points(K3, bss_summ$bic[K3], col="red",cex=2,pch=20)
```
```R
K=10 # number of "folds"
q=10 # number of explanatory variables
set.seed(4) # setting a seed number for consistent outputs
partition = sample(1:K, nrow(data_no_zeroes), replace=TRUE)
table(partition)
cv_errors_matrix = matrix(
NA, K, q, dimnames=list(paste("Model with validation set T_",1:K),
paste(1:q," variables model")))
for(k in 1:K){
bss_cv_train=regsubsets(area~.-X-Y,data=data_no_zeroes[partition!=k,],nvmax=10)
for(p in 1:q){
cv_validate=predict(model=bss_cv_train, validation_set=data_no_zeroes[partition==k,],num_par=p)
cv_errors_matrix[k,p]=mean((data_no_zeroes$area[partition==k]-cv_validate)^2)
}
}
cv_errors_matrix
mean_cv_errors = apply(cv_errors_matrix, MARGIN=2, FUN=mean)
as.matrix(mean_cv_errors)
p.cv=which.min(mean_cv_errors)
plot(mean_cv_errors,type='b')
points(p.cv, mean_cv_errors[p.cv], col="red",cex=2,pch=20)
bss_cv=regsubsets(area~.-X-Y,data_no_zeroes,nvmax=10)
summary(bss_cv)
```
We see that there was quite an improvement in our output just by taking the "troublesome" zeroes out of the equation. The errors no longer seem to be correlated, and it is now more obvious that the error term is normally distributed, even though the number of observations was essentially halved. There is still disagreement, though, between the model selection algorithms.
So while we see that our data can be "altered" in a clever way to help us model the effects of fire weather observations and fuel moisture codes on the dependent variable, there is still a long way to go to truly utilize this dataset to its fullest extent. But unfortunately this demands a lot more time for investigation...
___
## Bibliography/Citations:
---
* Canada, N. R. (n.d.). Canadian wildland fire information system: Canadian forest fire weather index (FWI) system. Canadian Wildland Fire Information System | Canadian Forest Fire Weather Index (FWI) System. Retrieved November 25, 2021, from https://cwfis.cfs.nrcan.gc.ca/background/summary/fwi.
* Fire Weather Index (FWI) System. (n.d.). Retrieved from https://www.nwcg.gov/publications/pms437/cffdrs/fire-weather-index-system
* Masinda, M.M., Sun, L., Wang, G. et al. Moisture content thresholds for ignition and rate of fire spread for various dead fuels in northeast forest ecosystems of China. J. For. Res. 32, 1147–1155 (2021). https://doi.org/10.1007/s11676-020-01162-2
* Carmine Maffei, Massimo Menenti, Predicting forest fires burned area and rate of spread from pre-fire multispectral satellite measurements, ISPRS Journal of Photogrammetry and Remote Sensing, Volume 158, 2019, Pages 263-278, ISSN 0924-2716, https://doi.org/10.1016/j.isprsjprs.2019.10.013.
| faedceea7ef070ddb56fa2f209763bb706f8a27e | 570,827 | ipynb | Jupyter Notebook | Modelling Montesinho Natural Park's conflagrations.ipynb | marandmath/Modelling-Montesinho-Forest-Fires | 1eaa15b2c6863989b62f04a6c76a9032d209bbd1 | [
"Apache-2.0"
]
| 1 | 2022-01-23T19:41:18.000Z | 2022-01-23T19:41:18.000Z | Modelling Montesinho Natural Park's conflagrations.ipynb | marandmath/Modelling-Montesinho-Forest-Fires | 1eaa15b2c6863989b62f04a6c76a9032d209bbd1 | [
"Apache-2.0"
]
| null | null | null | Modelling Montesinho Natural Park's conflagrations.ipynb | marandmath/Modelling-Montesinho-Forest-Fires | 1eaa15b2c6863989b62f04a6c76a9032d209bbd1 | [
"Apache-2.0"
]
| null | null | null | 249.924256 | 27,648 | 0.890419 | true | 17,657 | Qwen/Qwen-72B | 1. YES
2. YES | 0.867036 | 0.847968 | 0.735218 | __label__eng_Latn | 0.98354 | 0.54649 |
<figure>
<IMG SRC="gfx/Logo_norsk_pos.png" WIDTH=100 ALIGN="right">
</figure>
# Particle in a two-dimensional box
*Roberto Di Remigio*, *Luca Frediani*
After discussing and experimenting with the one-dimensional particle in a box model, we now move on to the two-dimensional case. The particle is now confined into a two-dimensional box with sides $L_x$ and $L_y$ long by the appropriate potential energy operator:
\begin{equation}
V(x, y) =
\begin{cases}
0 \quad\quad \text{if} \,\, 0\leq x \leq L_x \,\, \text{and} \,\, 0 \leq y \leq L_y \\
+\infty \quad\quad \text{otherwise}
\end{cases}
\end{equation}
Notice that $L_x$ can differ from $L_y$. In the general case, the particle can be confined inside
a rectangular well. The geometry of the potential will determine the properties of the solutions.
How does the quantum particle behave? We need to find the **eigenfunctions** and **eigenvalues** of the **Hamiltonian operator**, that is we have to solve the following ordinary differential equation:
\begin{equation}
-\frac{\hbar^2}{2m}\left(\frac{\mathrm{\partial}^2}{\mathrm{\partial}x^2}
+ \frac{\mathrm{\partial}^2}{\mathrm{\partial}y^2}\right)
\psi_{nm}(x,y) = E_{nm}\psi_{nm}(x,y)
\end{equation}
with **boundary conditions**:
\begin{equation}
\begin{aligned}
\psi_{nm}(0, y) &= 0 \\
\psi_{nm}(L_x, y) &= 0
\end{aligned}
\end{equation}
and:
\begin{equation}
\begin{aligned}
\psi_{nm}(x, 0) &= 0 \\
\psi_{nm}(x, L_y) &= 0
\end{aligned}
\end{equation}
You will notice that, not only the eigenfunctions now depend on two **degrees of freedom** (the $x$ and $y$ coordinates) but they also carry **two** quantum numbers $n$ and $m$.
Given that the kinetic energy operator is **separable**, an acceptable form for the solutions is
the product of one-dimensional states:
\begin{equation}
\psi_{nm}(x, y) = \psi_{n}(x)\psi_{m}(y)
\end{equation}
that is, states that are eigenfunctions of the one-dimensional particle in a box problem.
A more explicit form is:
\begin{equation}
\psi_{nm}(x, y) = \sqrt{\frac{2}{L_x}}\sin\left(\frac{n\pi x}{L_x}\right)
\sqrt{\frac{2}{L_y}}\sin\left(\frac{m\pi y}{L_y}\right) \quad \forall n, m \neq 0
\end{equation}
We can then derive the form of the eigenvalues by inserting this form of the wavefunction into the Schrödinger equation:
\begin{equation}
E_{nm} = \frac{h^2}{8M}\left(\frac{n^2}{L_x^2} + \frac{m^2}{L_y^2} \right)
\end{equation}
Of course, if the box is square the expression for the eigenvalues would simplify to:
\begin{equation}
E_{nm} = \frac{h^2}{8ML^2}\left(n^2 + m^2\right) \quad \forall n, m\neq 0
\end{equation}
## Exercise 1: Normalization
The one-dimensional eigenfunctions $\psi_n(x)$ and $\psi_m(y)$ are orthonormal. What about the two-dimensional eigenfunctions $\psi_{nm}$? Are they orthogonal? Are they normalized?
Given a linear combination of two-dimensional normalized, eigenfunctions, is it still normalized? That is,
is
\begin{equation}
\Psi(x, y) = \psi_{11}(x, y) + \psi_{21}(x, y)
\end{equation}
normalized? If not, find the normalization constant.
Define also a function to calculate the value of the two-dimensional eigenfunctions on a grid of points. We will use this function to plot the eigenfunctions.
The function should take the following arguments: the quantum numbers $n$ and $m$, the box lengths $L_x$ and $L_y$, the NumPy arrays with $x$ and $y$ values:
```Python
def eigenfunction2D(n, m, Lx, Ly, x, y):
""" Normalized eigenfunction for the 2D particle in a box.
n -- the quantum number, relative to the x axis
m -- the quantum number, relative to the y axis
Lx -- the size of the box on the x axis
Ly -- the size of the box on the y axis
x -- the NumPy array with the x values
y -- the NumPy array with the y values
"""
```
Once this function is defined, we can obtain the respective probability distribution by taking its square.
**Hint** Notice that you can re-use the function for the one-dimensional particle in a box to write this one!
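One possible (non-unique) sketch of such a function, assuming a 1D helper like the one written for the previous notebook; the grid convention used here (values on the meshgrid built from `x` and `y`) is only one reasonable choice, and the quick check at the end addresses Exercise 1:
```python
import numpy as np

def eigenfunction1D(n, L, x):
    """Normalized 1D particle-in-a-box eigenfunction (assumed helper from the 1D notebook)."""
    return np.sqrt(2.0/L)*np.sin(n*np.pi*x/L)

def eigenfunction2D(n, m, Lx, Ly, x, y):
    """Normalized 2D eigenfunction evaluated on the grid built from the 1D arrays x and y."""
    X, Y = np.meshgrid(x, y)
    return eigenfunction1D(n, Lx, X)*eigenfunction1D(m, Ly, Y)

# Quick numerical check for Exercise 1: is psi_11 + psi_21 normalized?
Lx = Ly = 1.0
x = np.linspace(0, Lx, 500)
y = np.linspace(0, Ly, 500)
Psi = eigenfunction2D(1, 1, Lx, Ly, x, y) + eigenfunction2D(2, 1, Lx, Ly, x, y)
norm2 = np.trapz(np.trapz(Psi**2, x, axis=1), y)
print(norm2)  # ~2, so a factor of 1/sqrt(2) is needed to normalize the combination
```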
## Interval: 3D plots with `matplotlib`
Since these will be 3D plots, the `matplotlib` commands are slightly more complicated.
The following commands will set up two 3D plots side by side. Put the plot of the eigenfunction on the left panel and the probability density on the right panel.
```Python
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
from matplotlib import cm
import numpy as np
from scipy.constants import *
# make sure we see it on this notebook
%matplotlib inline
# Generate points on x and y axes
x = np.linspace(0, pi, 100)
y = np.linspace(0, pi, 100)
# Generate grid in the xy plane
X, Y = np.meshgrid(x, y)
# Tell matplotlib to create a figure with two panels
fig = plt.figure(figsize=plt.figaspect(0.5))
# Tell matplotlib to add axes for a plot on the left panel
ax = fig.add_subplot(1, 2, 1, projection='3d')
# Generate function values for the first plot
Z = (np.sin(X)*np.cos(Y)).T
max_val = np.max(Z)
ax.plot_surface(X, Y, Z, rstride=8, cstride=8, alpha=0.3)
# Contour plots on the xy plane
cset = ax.contour(X, Y, Z, zdir='z', offset=-max_val)
ax.set_xlabel('X')
ax.set_xlim([0, pi])
ax.set_ylabel('Y')
ax.set_ylim([0, pi])
ax.set_zlabel('Z')
ax.set_zlim(-max_val, max_val)
# Tell matplotlib to add axes for a plot on the right panel
ax = fig.add_subplot(1, 2, 2, projection='3d')
Z1 = ((np.sin(X)*np.cos(Y)).T )**2
max_val = np.max(Z1)
ax.plot_surface(X, Y, Z1, rstride=8, cstride=8, alpha=0.3)
# Contour plots on the xy plane
cset = ax.contour(X, Y, Z1, zdir='z', offset=-max_val)
plt.show()
```
```python
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
from matplotlib import cm
import numpy as np
from scipy.constants import *
# make sure we see it on this notebook
%matplotlib inline
# Generate points on x and y axes
x = np.linspace(0, pi, 100)
y = np.linspace(0, pi, 100)
# Generate grid in the xy plane
X, Y = np.meshgrid(x, y)
# Tell matplotlib to create a figure with two panels
fig = plt.figure(figsize=plt.figaspect(0.5))
# Tell matplotlib to add axes for a plot on the left panel
ax = fig.add_subplot(1, 2, 1, projection='3d')
# Generate function values for the first plot
Z = (np.sin(X)*np.cos(Y)).T
max_val = np.max(Z)
ax.plot_surface(X, Y, Z, rstride=8, cstride=8, alpha=0.3)
# Contour plots on the xy plane
cset = ax.contour(X, Y, Z, zdir='z', offset=-max_val)
ax.set_xlabel('X')
ax.set_xlim([0, pi])
ax.set_ylabel('Y')
ax.set_ylim([0, pi])
ax.set_zlabel('Z')
ax.set_zlim(-max_val, max_val)
# Tell matplotlib to add axes for a plot on the right panel
ax = fig.add_subplot(1, 2, 2, projection='3d')
Z1 = ((np.sin(X)*np.cos(Y)).T)**2
max_val = np.max(Z1)
ax.plot_surface(X, Y, Z1, rstride=8, cstride=8, alpha=0.3)
# Contour plots on the xy plane
cset = ax.contour(X, Y, Z1, zdir='z', offset=-max_val)
```
## Exercise 2: Eigenfunction or not?
Given the following linear combinations of two-dimensional eigenfunctions:
\begin{equation}
\begin{aligned}
\psi_a(x, y) &= \psi_{11}(x, y) + \psi_{22}(x, y) \\
\psi_b(x, y) &= \psi_{11}(x, y) - \psi_{22}(x, y) \\
\psi_c(x, y) &= \psi_{23}(x, y) + \psi_{32}(x, y) \\
\psi_d(x, y) &= \psi_{23}(x, y) - \psi_{32}(x, y) \\
\end{aligned}
\end{equation}
Are these eigenfunctions of the two-dimensional particle-in-a-box Hamiltonian when the box is rectangular ($L_x\neq L_y$)? What happens when the box is square ($L_x = L_y$)?
Normalize all the linear combinations and plot them, together with their probability distributions.
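A small numerical illustration of why the square box is special (the box sizes below are arbitrary values, chosen only for this sketch):
```python
from scipy.constants import h, m_e

def energy(n, m, Lx, Ly, M=m_e):
    """Particle-in-a-box energy E_nm in joules for a box of sides Lx, Ly and mass M."""
    return h**2/(8.0*M)*((n/Lx)**2 + (m/Ly)**2)

# Square box: E_23 == E_32, so psi_c and psi_d mix degenerate eigenfunctions
print(energy(2, 3, 1e-9, 1e-9), energy(3, 2, 1e-9, 1e-9))
# Rectangular box: the degeneracy is lifted
print(energy(2, 3, 1e-9, 2e-9), energy(3, 2, 1e-9, 2e-9))
```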
## Exercise 3: Hybridization
The eigenfunctions for the particle in a square box can be used to visualize orbitals similar to the $sp$ and $sp^2$ hybridized orbitals, as explained [here].
Hybridized orbitals are linear combinations of eigenfunctions for a given problem that exhibit a peculiar structure of the probability density.
Plot the wavefunctions and probability densities for the following linear combinations:
\begin{equation}
\Psi(x, y) = \psi_{11}(x, y) + \psi_{21}(x, y) \quad\quad \Psi(x, y) = \psi_{11}(x, y) - \psi_{21}(x, y)
\end{equation}
[here]: http://pubs.acs.org/doi/abs/10.1021/ed067p866
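A minimal, self-contained sketch for building these two combinations on a square box (plotting can then reuse the 3D template shown earlier):
```python
import numpy as np

def psi2d(n, m, L, X, Y):
    # Normalized 2D eigenfunction for a square box of side L (inline helper for this sketch)
    return (2.0/L)*np.sin(n*np.pi*X/L)*np.sin(m*np.pi*Y/L)

L = 1.0
X, Y = np.meshgrid(np.linspace(0, L, 200), np.linspace(0, L, 200))
sp_plus  = psi2d(1, 1, L, X, Y) + psi2d(2, 1, L, X, Y)
sp_minus = psi2d(1, 1, L, X, Y) - psi2d(2, 1, L, X, Y)
# sp_plus**2 piles probability into one half of the box and sp_minus**2 into the other,
# giving the two sp-like "lobes" described in the reference above.
```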
| 24037dd6adfba42caa0bb9d36b27a45eb9166656 | 119,537 | ipynb | Jupyter Notebook | 06_2D-particle_in_a_box.ipynb | ilfreddy/seminars | c7e13874b41cc906a45b672e5b85c57d6880473e | [
"MIT"
]
| 4 | 2017-02-04T01:34:33.000Z | 2021-06-12T12:27:37.000Z | 06_2D-particle_in_a_box.ipynb | ilfreddy/seminars | c7e13874b41cc906a45b672e5b85c57d6880473e | [
"MIT"
]
| 3 | 2020-03-30T11:00:35.000Z | 2020-05-12T05:42:24.000Z | 06_2D-particle_in_a_box.ipynb | ilfreddy/seminars | c7e13874b41cc906a45b672e5b85c57d6880473e | [
"MIT"
]
| 7 | 2016-04-26T20:42:43.000Z | 2022-02-06T11:12:57.000Z | 405.210169 | 108,122 | 0.918134 | true | 2,480 | Qwen/Qwen-72B | 1. YES
2. YES | 0.841826 | 0.893309 | 0.752011 | __label__eng_Latn | 0.967493 | 0.585505 |
# Physics 256
## Random Number Generators
http://www.idquantique.com/random-number-generation/
## Last Time
- Error scaling for high dimensional quadrature
- Monte Carlo Integration
## Today
- Generation and testing of pseudorandom numbers
- Tower sampling
## Setting up the Notebook
```python
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
plt.style.use('notebook');
%config InlineBackend.figure_format = 'retina'
colors = ["#2078B5", "#FF7F0F", "#2CA12C", "#D72827", "#9467BE", "#8C574B",
"#E478C2", "#808080", "#BCBE20", "#17BED0", "#AEC8E9", "#FFBC79",
"#98E08B", "#FF9896", "#C6B1D6", "#C59D94", "#F8B7D3", "#C8C8C8",
"#DCDC8E", "#9EDAE6"]
```
## Generation of Random Numbers
### What is a random number?
- There is really no such thing, definitely not on a deterministic classical computer
- Loose term applied to a sequence of independent numbers drawn randomly from some distribution
- Typically we select integer or real values on some finite domain
One of the simplest ways to generate uniformly distributed random numbers on $[0,1]$ is the **Linear Congruential Generator** (LCG).
Consider the map (recursion relation) which generates integers between $0$ and $m-1$:
\begin{equation}
X_{n+1} = (a X_n + c) \mod m
\end{equation}
where $a$ is known as the multiplier, $c$ is the increment and $m$ the modulus. Starting from an initial **seed** value $X_0$ we generate the list of numbers:
\begin{align*}
X_0 &= \text{seed} \newline
X_1 &= (a X_0 + c) \mod m \newline
X_2 &= \{a [(a X_0 + c) \mod m] + c\} \mod m \newline
\vdots &
\end{align*}
Then, a uniform number $x_n \in \mathcal{U}_{[0,1]}$ can be computed as:
\begin{equation}
x_n = \frac{X_n}{m} .
\end{equation}
<div class="span alert alert-success">
<h2>Programming challenge </h2>
Write a LCG function with $a=16807$, $c=0$, $m=2^{31}-1$ and $seed=332$ that generates a list of *pseudorandom* uniform numbers of length $N=1000$ on [0,1].
</div>
<!--
m = 2**32-1
a = 6
c = 7
U = 3
-->
```python
def lcg_rand(a,c,m,seed,N=1):
    '''A linear congruential pseudorandom number generator.'''
    x = np.zeros([N])
    X = seed
    x[0] = X/m
    for n in range(N-1):
        # Apply the LCG recursion and store the normalized value X_n/m
        X = (a*X + c) % m
        x[n+1] = X/m
    return x
N = 1000
a,c,m,seed = 16807,0,2**31-1,332
# Override the challenge parameters above with a different choice of a, c, m and seed
m = 2**32-1
a = 7
c = 7
seed = 7
x = lcg_rand(a,c,m,seed,N)
print(x)
```
[ 1.62981451e-09 1.30385160e-08 9.28994268e-08 6.51925802e-07
4.56511043e-06 3.19574028e-05 2.23703450e-04 1.56592578e-03
1.09614821e-02 7.67303761e-02 5.37112635e-01 7.59788443e-01
3.18519105e-01 2.29633734e-01 6.07436137e-01 2.52052964e-01
7.64370746e-01 3.50595226e-01 4.54166586e-01 1.79166105e-01
2.54162734e-01 7.79139143e-01 4.53974000e-01 1.77818004e-01
2.44726027e-01 7.13082190e-01 9.91575334e-01 9.41027338e-01
5.87191367e-01 1.10339570e-01 7.72376994e-01 4.06638960e-01
8.46472723e-01 9.25309064e-01 4.77163447e-01 3.40144130e-01
3.81008913e-01 6.67062395e-01 6.69436768e-01 6.86057377e-01
8.02401638e-01 6.16811467e-01 3.17680267e-01 2.23761874e-01
5.66333118e-01 9.64331831e-01 7.50322819e-01 2.52259736e-01
7.65818152e-01 3.60727066e-01 5.25089464e-01 6.75626248e-01
7.29383740e-01 1.05686179e-01 7.39803252e-01 1.78622769e-01
2.50359383e-01 7.52515683e-01 2.67609780e-01 8.73268463e-01
1.12879245e-01 7.90154719e-01 5.31083034e-01 7.17581243e-01
2.30687023e-02 1.61480918e-01 1.30366427e-01 9.12564988e-01
3.87954921e-01 7.15684448e-01 9.79113486e-03 6.85379456e-02
4.79765621e-01 3.58359350e-01 5.08515448e-01 5.59608139e-01
9.17256975e-01 4.20798824e-01 9.45591768e-01 6.19142379e-01
3.33996657e-01 3.37976598e-01 3.65836189e-01 5.60853328e-01
9.25973296e-01 4.81813073e-01 3.72691513e-01 6.08840592e-01
2.61884145e-01 8.33189015e-01 8.32323108e-01 8.26261760e-01
7.83832319e-01 4.86826238e-01 4.07783668e-01 8.54485680e-01
9.81399762e-01 8.69798337e-01 8.85883612e-02 6.20118530e-01
3.40829711e-01 3.85807979e-01 7.00655853e-01 9.04590973e-01
3.32136814e-01 3.24957697e-01 2.74703881e-01 9.22927167e-01
4.60490174e-01 2.23431220e-01 5.64018539e-01 9.48129775e-01
6.36908427e-01 4.58358988e-01 2.08512915e-01 4.59590404e-01
2.17132831e-01 5.19929817e-01 6.39508723e-01 4.76561061e-01
3.35927427e-01 3.51491990e-01 4.60443931e-01 2.23107517e-01
5.61752623e-01 9.32268360e-01 5.25878521e-01 6.81149649e-01
7.68047544e-01 3.76332809e-01 6.34329664e-01 4.40307646e-01
8.21535248e-02 5.75074675e-01 2.55227261e-02 1.78659085e-01
2.50613593e-01 7.54295155e-01 2.80066086e-01 9.60462602e-01
7.23238218e-01 6.26675263e-02 4.38672686e-01 7.07088027e-02
4.94961620e-01 4.64731345e-01 2.53119414e-01 7.71835898e-01
4.02851289e-01 8.19959026e-01 7.39713185e-01 1.77992299e-01
2.45946098e-01 7.21622687e-01 5.13588123e-02 3.59511688e-01
5.16581814e-01 6.16072701e-01 3.12508907e-01 1.87562349e-01
3.12936445e-01 1.90555117e-01 3.33885822e-01 3.37200753e-01
3.60405271e-01 5.22836899e-01 6.59858293e-01 6.19008055e-01
3.33056384e-01 3.31394690e-01 3.19762833e-01 2.38339831e-01
6.68378820e-01 6.78651741e-01 7.50562187e-01 2.53935307e-01
7.77547153e-01 4.42830072e-01 9.98105048e-02 6.98673536e-01
8.90714750e-01 2.35003254e-01 6.45022781e-01 5.15159469e-01
6.06116283e-01 2.42813982e-01 6.99697875e-01 8.97885126e-01
2.85195882e-01 9.96371177e-01 9.74598244e-01 8.22187710e-01
7.55313972e-01 2.87197805e-01 1.03846388e-02 7.26924732e-02
5.08847314e-01 5.61931200e-01 9.33518402e-01 5.34628813e-01
7.42401690e-01 1.96811830e-01 3.77682808e-01 6.43779659e-01
5.06457614e-01 5.45203297e-01 8.16423083e-01 7.14961580e-01
4.73106443e-03 3.31174526e-02 2.31822170e-01 6.22755193e-01
3.59286350e-01 5.15004450e-01 6.05031154e-01 2.35218080e-01
6.46526560e-01 5.25685924e-01 6.79801470e-01 7.58610289e-01
3.10272023e-01 1.71904164e-01 2.03329148e-01 4.23304035e-01
9.63128245e-01 7.41897715e-01 1.93284005e-01 3.52988035e-01
4.70916245e-01 2.96413718e-01 7.48960299e-02 5.24272211e-01
6.69905479e-01 6.89338353e-01 8.25368472e-01 7.77579306e-01
4.43055146e-01 1.01386024e-01 7.09702168e-01 9.67915175e-01
7.75406225e-01 4.27843577e-01 9.94905041e-01 9.64335291e-01
7.50347037e-01 2.52429258e-01 7.67004805e-01 3.69033639e-01
5.83235474e-01 8.26483211e-02 5.78538249e-01 4.97677457e-02
3.48374221e-01 4.38619551e-01 7.03368578e-02 4.92358006e-01
4.46506047e-01 1.25542328e-01 8.78796295e-01 1.51574063e-01
6.10184453e-02 4.27129119e-01 9.89903833e-01 9.29326834e-01
5.05287842e-01 5.37014897e-01 7.59104283e-01 3.13729983e-01
1.96109880e-01 3.72769164e-01 6.09384146e-01 2.65689027e-01
8.59823188e-01 1.87623177e-02 1.31336226e-01 9.19353581e-01
4.35475072e-01 4.83255056e-02 3.38278541e-01 3.67949787e-01
5.75648508e-01 2.95395604e-02 2.06776925e-01 4.47438475e-01
1.32069324e-01 9.24485269e-01 4.71396886e-01 2.99778205e-01
9.84474346e-02 6.89132044e-01 8.23924310e-01 7.67470170e-01
3.72291192e-01 6.06038343e-01 2.42268400e-01 6.95878801e-01
8.71151606e-01 9.80612440e-02 6.86428710e-01 8.05000970e-01
6.35006794e-01 4.45047559e-01 1.15332915e-01 8.07330408e-01
6.51312860e-01 5.59190021e-01 9.14330147e-01 4.00311032e-01
8.02177225e-01 6.15240578e-01 3.06684048e-01 1.46788334e-01
2.75183416e-02 1.92628393e-01 3.48398749e-01 4.38791247e-01
7.15387303e-02 5.00771114e-01 5.05397798e-01 5.37784589e-01
7.64492122e-01 3.51444857e-01 4.60113999e-01 2.20797997e-01
5.45585979e-01 8.19101856e-01 7.33712991e-01 1.35990937e-01
9.51936558e-01 6.63555905e-01 6.44891338e-01 5.14239365e-01
5.99675557e-01 1.97728904e-01 3.84102330e-01 6.88716310e-01
8.21014168e-01 7.47099180e-01 2.29694265e-01 6.07859853e-01
2.55018975e-01 7.85132826e-01 4.95929785e-01 4.71508500e-01
3.00559499e-01 1.03916492e-01 7.27415448e-01 9.19081376e-02
6.43356965e-01 5.03498754e-01 5.24491279e-01 6.71438953e-01
7.00072675e-01 9.00508725e-01 3.03561074e-01 1.24927518e-01
8.74492629e-01 1.21448406e-01 8.50138842e-01 9.50971898e-01
6.56803287e-01 5.97623011e-01 1.83361077e-01 2.83527541e-01
9.84692787e-01 8.92849513e-01 2.49946590e-01 7.49626131e-01
2.47382917e-01 7.31680419e-01 1.21762936e-01 8.52340555e-01
9.66383890e-01 7.64687232e-01 3.52810623e-01 4.69674366e-01
2.87720562e-01 1.40439326e-02 9.83075295e-02 6.88152708e-01
8.17068958e-01 7.19482707e-01 3.63789508e-02 2.54652657e-01
7.82568603e-01 4.77980224e-01 3.45861572e-01 4.21031003e-01
9.47217026e-01 6.30519182e-01 4.13634276e-01 8.95439936e-01
2.68079554e-01 8.76556880e-01 1.35898160e-01 9.51287124e-01
6.59009868e-01 6.13069081e-01 2.91483566e-01 4.03849653e-02
2.82694758e-01 9.78863311e-01 8.52043177e-01 9.64302240e-01
7.50115684e-01 2.50809792e-01 7.55668549e-01 2.89679845e-01
2.77589161e-02 1.94312415e-01 3.60186904e-01 5.21308332e-01
6.49158323e-01 5.44108263e-01 8.08757845e-01 6.61304917e-01
6.29134420e-01 4.03940940e-01 8.27586581e-01 7.93106068e-01
5.51742480e-01 8.62197362e-01 3.53815346e-02 2.47670744e-01
7.33695209e-01 1.35866464e-01 9.51065252e-01 6.57456768e-01
6.02197375e-01 2.15381629e-01 5.07671402e-01 5.53699817e-01
8.75898720e-01 1.31291043e-01 9.19037303e-01 4.33261122e-01
3.28278570e-02 2.29795000e-01 6.08565004e-01 2.59955030e-01
8.19685208e-01 7.37796460e-01 1.64575222e-01 1.52026552e-01
6.41858676e-02 4.49301075e-01 1.45107524e-01 1.57526683e-02
1.10268679e-01 7.71880758e-01 4.03165307e-01 8.22157152e-01
7.55100067e-01 2.85700470e-01 9.99903294e-01 9.99323057e-01
9.95261397e-01 9.66829784e-01 7.67808488e-01 3.74659416e-01
6.22615916e-01 3.58311414e-01 5.08179902e-01 5.57259316e-01
9.00815210e-01 3.05706474e-01 1.39945317e-01 9.79617220e-01
8.57320539e-01 1.24377571e-03 8.70643161e-03 6.09450229e-02
4.26615162e-01 9.86306133e-01 9.04142936e-01 3.29000552e-01
3.03003864e-01 1.21027051e-01 8.47189359e-01 9.30325514e-01
5.12278597e-01 5.85950184e-01 1.01651288e-01 7.11559020e-01
9.80913143e-01 8.66392000e-01 6.47440038e-02 4.53208028e-01
1.72456201e-01 2.07193407e-01 4.50353849e-01 1.52476947e-01
6.73386296e-02 4.71370409e-01 2.99592865e-01 9.71500599e-02
6.80050421e-01 7.60352948e-01 3.22470640e-01 2.57294484e-01
8.01061391e-01 6.07429741e-01 2.52008188e-01 7.64057315e-01
3.48401208e-01 4.38808455e-01 7.16591834e-02 5.01614285e-01
5.11299999e-01 5.79099992e-01 5.36999453e-02 3.75899619e-01
6.31297335e-01 4.19081346e-01 9.33569425e-01 5.34985978e-01
7.44901850e-01 2.14312949e-01 5.00190643e-01 5.01334504e-01
5.09341532e-01 5.65390726e-01 9.57735082e-01 7.04145575e-01
9.29019026e-01 5.03133182e-01 5.21932274e-01 6.53525923e-01
5.74681461e-01 2.27702318e-02 1.59391624e-01 1.15741373e-01
8.10189610e-01 6.71327270e-01 6.99290895e-01 8.95036267e-01
2.65253871e-01 8.56777101e-01 9.97439706e-01 9.82077940e-01
8.74545584e-01 1.21819091e-01 8.52733637e-01 9.69135463e-01
7.83948246e-01 4.87637723e-01 4.13464061e-01 8.94248428e-01
2.59738995e-01 8.18172964e-01 7.27210748e-01 9.04752373e-02
6.33326662e-01 4.33286638e-01 3.30064700e-02 2.31045291e-01
6.17317041e-01 3.21219289e-01 2.48535025e-01 7.39745177e-01
1.78216238e-01 2.47513665e-01 7.32595656e-01 1.28169595e-01
8.97187169e-01 2.80310182e-01 9.62171279e-01 7.35198952e-01
1.46392667e-01 2.47486706e-02 1.73240696e-01 2.12684874e-01
4.88794123e-01 4.21558862e-01 9.50912037e-01 6.56384260e-01
5.94689822e-01 1.62828757e-01 1.39801301e-01 9.78609106e-01
8.50263744e-01 9.51846209e-01 6.62923462e-01 6.40464237e-01
4.83249658e-01 3.82747609e-01 6.79233262e-01 7.54632832e-01
2.82429829e-01 9.77008802e-01 8.39061615e-01 8.73431305e-01
1.14019138e-01 7.98133965e-01 5.86937755e-01 1.08564290e-01
7.59950032e-01 3.19650224e-01 2.37551567e-01 6.62860967e-01
6.40026772e-01 4.80187403e-01 3.61311822e-01 5.29182756e-01
7.04279293e-01 9.29955056e-01 5.09685393e-01 5.67797753e-01
9.74584274e-01 8.22089918e-01 7.54629427e-01 2.82405991e-01
9.76841938e-01 8.37893569e-01 8.65254985e-01 5.67848992e-02
3.97494296e-01 7.82460075e-01 4.77220525e-01 3.40543679e-01
3.83805757e-01 6.86640297e-01 8.06482084e-01 6.45374588e-01
5.17622120e-01 6.23354842e-01 3.63483898e-01 5.44387287e-01
8.10711013e-01 6.74977093e-01 7.24839655e-01 7.38775893e-02
5.17143127e-01 6.20001890e-01 3.40013233e-01 3.80092630e-01
6.60648412e-01 6.24538884e-01 3.71772189e-01 6.02405326e-01
2.16837286e-01 5.17861004e-01 6.25027030e-01 3.75189214e-01
6.26324502e-01 3.84271515e-01 6.89900609e-01 8.29304267e-01
8.05129872e-01 6.35909103e-01 4.51363724e-01 1.59546072e-01
1.16822504e-01 8.17757526e-01 7.24302686e-01 7.01188052e-02
4.90831638e-01 4.35821467e-01 5.07502686e-02 3.55251882e-01
4.86763176e-01 4.07342235e-01 8.51395645e-01 9.59769514e-01
7.18386602e-01 2.87062177e-02 2.00943525e-01 4.06604678e-01
8.46232749e-01 9.23629246e-01 4.65404726e-01 2.57833087e-01
8.04831609e-01 6.33821262e-01 4.36748835e-01 5.72418491e-02
4.00692946e-01 8.04850621e-01 6.33954351e-01 4.37680459e-01
6.37632113e-02 4.46342480e-01 1.24397365e-01 8.70781556e-01
9.54708951e-02 6.68296267e-01 6.78073872e-01 7.46517106e-01
2.25619743e-01 5.79338202e-01 5.53674167e-02 3.87571919e-01
7.13003432e-01 9.91024026e-01 9.37168185e-01 5.60177296e-01
9.21241072e-01 4.48687509e-01 1.40812563e-01 9.85687945e-01
8.99815614e-01 2.98709297e-01 9.09650801e-02 6.36755562e-01
4.57288938e-01 2.01022566e-01 4.07157963e-01 8.50105741e-01
9.50740189e-01 6.55181323e-01 5.86269265e-01 1.03884856e-01
7.27193995e-01 9.03579688e-02 6.32505783e-01 4.27540486e-01
9.92783404e-01 9.49483831e-01 6.46386818e-01 5.24707729e-01
6.72954104e-01 7.10678728e-01 9.74751101e-01 8.23257708e-01
7.62803956e-01 3.39627691e-01 3.77393840e-01 6.41756880e-01
4.92298160e-01 4.46087122e-01 1.22609858e-01 8.58269006e-01
7.88304187e-03 5.51812947e-02 3.86269064e-01 7.03883453e-01
9.27184172e-01 4.90289209e-01 4.32024462e-01 2.41712341e-02
1.69198640e-01 1.84390485e-01 2.90733394e-01 3.51337600e-02
2.45936321e-01 7.21554251e-01 5.08797609e-02 3.56158328e-01
4.93108297e-01 4.51758078e-01 1.62306549e-01 1.36145843e-01
9.53020906e-01 6.71146342e-01 6.98024399e-01 8.86170793e-01
2.03195555e-01 4.22368885e-01 9.56582194e-01 6.96075361e-01
8.72527528e-01 1.07692698e-01 7.53848888e-01 2.76942217e-01
9.38595521e-01 5.70168650e-01 9.91180554e-01 9.38263881e-01
5.67847172e-01 9.74930205e-01 8.24511438e-01 7.71580065e-01
4.01060457e-01 8.07423202e-01 6.51962416e-01 5.63736913e-01
9.46158394e-01 6.23108758e-01 3.61761306e-01 5.32329145e-01
7.26304020e-01 8.41281414e-02 5.88896992e-01 1.22278943e-01
8.55952601e-01 9.91668208e-01 9.41677458e-01 5.91742207e-01
1.42195449e-01 9.95368143e-01 9.67577003e-01 7.73039024e-01
4.11273168e-01 8.78912179e-01 1.52385258e-01 6.66968071e-02
4.66877651e-01 2.68143559e-01 8.77004914e-01 1.39034401e-01
9.73240808e-01 8.12685660e-01 6.88799622e-01 8.21597355e-01
7.51181488e-01 2.58270415e-01 8.07892907e-01 6.55250352e-01
5.86752463e-01 1.07267240e-01 7.50870679e-01 2.56094755e-01
7.92663283e-01 5.48642984e-01 8.40500887e-01 8.83506207e-01
1.84543452e-01 2.91804168e-01 4.26291742e-02 2.98404221e-01
8.88295488e-02 6.21806843e-01 3.52647905e-01 4.68535337e-01
2.79747362e-01 9.58231538e-01 7.07620766e-01 9.53345367e-01
6.73417567e-01 7.13922972e-01 9.97460806e-01 9.82225642e-01
8.75579498e-01 1.29056486e-01 9.03395403e-01 3.23767820e-01
2.66374745e-01 8.64623215e-01 5.23625068e-02 3.66537549e-01
5.65762844e-01 9.60339910e-01 7.22379372e-01 5.66556084e-02
3.96589260e-01 7.76124823e-01 4.32873762e-01 3.01163329e-02
2.10814332e-01 4.75700327e-01 3.29902290e-01 3.09316028e-01
1.65212199e-01 1.56485396e-01 9.53977704e-02 6.67784395e-01
6.74490764e-01 7.21435349e-01 5.00474472e-02 3.50332132e-01
4.52324927e-01 1.66274489e-01 1.63921423e-01 1.47449960e-01
3.21497202e-02 2.25048043e-01 5.75336305e-01 2.73541373e-02
1.91478963e-01 3.40352740e-01 3.82469180e-01 6.77284261e-01
7.40989828e-01 1.86928797e-01 3.08501583e-01 1.59511082e-01
1.16577575e-01 8.16043027e-01 7.12301189e-01 9.86108321e-01
9.02758252e-01 3.19307765e-01 2.35154354e-01 6.46080478e-01
5.22563351e-01 6.57943458e-01 6.05604211e-01 2.39229477e-01
6.74606341e-01 7.22244389e-01 5.57107267e-02 3.89975088e-01
7.29825620e-01 1.08779338e-01 7.61455371e-01 3.30187597e-01
3.11313180e-01 1.79192262e-01 2.54345838e-01 7.80420865e-01
4.62946054e-01 2.40622381e-01 6.84356667e-01 7.90496672e-01
5.33476709e-01 7.34336963e-01 1.40358745e-01 9.82511216e-01
8.77578514e-01 1.43049601e-01 1.34720560e-03 9.43044084e-03
6.60130875e-02 4.62091614e-01 2.34641302e-01 6.42489117e-01
4.97423821e-01 4.81966747e-01 3.73767231e-01 6.16370620e-01
3.14594342e-01 2.02160399e-01 4.15122793e-01 9.05859552e-01
3.41016866e-01 3.87118064e-01 7.09826453e-01 9.68785170e-01
7.81496190e-01 4.70473331e-01 2.93313321e-01 5.31932456e-02
3.72352721e-01 6.06469046e-01 2.45283320e-01 7.16983244e-01
1.88827112e-02 1.32178980e-01 9.25252861e-01 4.76770031e-01
3.37390218e-01 3.61731528e-01 5.32120695e-01 7.24844870e-01
7.39140907e-02 5.17398636e-01 6.21790455e-01 3.52533187e-01
4.67732313e-01 2.74126192e-01 9.18883348e-01 4.32183440e-01
2.52840812e-02 1.76988570e-01 2.38919994e-01 6.72439957e-01
7.07079704e-01 9.49557927e-01 6.46905490e-01 5.28338434e-01
6.98369040e-01 8.88583282e-01 2.20082977e-01 5.40580842e-01
7.84065893e-01 4.88461252e-01 4.19228764e-01 9.34601347e-01
5.42209432e-01 7.95466025e-01 5.68262176e-01 9.77835231e-01
8.44846615e-01 9.13926309e-01 3.97484163e-01 7.82389140e-01
4.76723984e-01 3.37067893e-01 3.59475252e-01 5.16326763e-01
6.14287342e-01 3.00011393e-01 1.00079753e-01 7.00558269e-01
9.03907887e-01 3.27355210e-01 2.91486472e-01 4.04053053e-02
2.82837139e-01 9.79859975e-01 8.59019826e-01 1.31387846e-02
9.19714936e-02 6.43800457e-01 5.06603198e-01 5.46222391e-01
8.23556738e-01 7.64897170e-01 3.54280195e-01 4.79961364e-01]
We can test for uniformity by examining a histogram
```python
# the histogram of the data
n, bins, patches = plt.hist(x, 20, density=True, ec='w')  # 'density' replaces the deprecated 'normed' argument
plt.xlabel('x')
plt.ylabel('p(x)')
```
## Optimal Values for the LCG
1. $c$ is relatively prime to m
2. $b=a-1$ is a multiple of $p$ for every prime number $p$ dividing $m$
3. $b$ is a multiple of 4 if $m$ is a multiple of 4
Numerical Recipes suggests:
\begin{align*}
a &= 1664525 \newline
c &= 1013904223 \newline
m &= 2^{32}
\end{align*}
We can also test the overall statistics (but not correlations) by looking at the mean and variance over the uniform probability distribution $p(x) = 1$.
\begin{equation}
\langle x \rangle = \mu = \int_0^1 p(x) x dx
= \int_0^1 x dx
= \left. \frac{x^2}{2}\right \rvert_0^1
= \frac{1}{2}
\end{equation}
\begin{equation}
\sigma^2 = \langle(x-\mu)^2\rangle = \int_0^1 \left(x-\frac{1}{2}\right)^2 dx = \int_0^1 \left(x^2 - x +\frac{1}{4}\right) dx
= \frac{1}{3} - \frac{1}{2} + \frac{1}{4} = \frac{1}{12}
\end{equation}
so $\sigma = \frac{1}{\sqrt{12}} \simeq 0.2886$
```python
a,c,m,seed = 1664525,1013904223,2**32,13523
x = lcg_rand(a,c,m,seed,10000)
print(np.average(x),np.std(x))
```
0.50449334808 0.289062854736
We can also visually inspect for correlations by plotting $x_i$ vs. $x_{i+1}$
```python
plt.figure(figsize=(5,5))
plt.plot(x[:-1],x[1:],'o', ms=3, mew=0)
plt.xlabel(r'$x_i$')
plt.ylabel(r'$x_{i+1}$')
```
## Tower Sampling
We can use the uniform distribution of (pseudo) random numbers in many ways, including sampling $N$ discrete events, each with their own probabilities $p_0,p_1,\ldots,p_{N-1}$. Since something *must* happen, we know
\begin{equation}
\sum_{i=0}^{N-1} p_i = 1
\end{equation}
We can use our uniformly distributed random numbers $x\in \mathcal{U}_{[0,1]}$ to sample this discrete distribution by exploiting the fact that each event occupies a width $p_i$ in the probability interval. i.e. for a given random number $x$:
\begin{align*}
0 &\leftarrow 0 \le x < p_0 \newline
1 &\leftarrow p_0 \le x < p_0 + p_1 \newline
2 &\leftarrow p_0 + p_1 \le x < p_0 + p_1 + p_2 \newline
&\vdots \newline
N-1 &\leftarrow p_0 + \cdots + p_{N-2} \le x < 1 .
\end{align*}
Note that the relevant quantity here is the **cumulative probability**:
\begin{equation}
{P}_i = \sum_{k=0}^i p_k
\end{equation}
which for continuous distribution is:
\begin{equation}
{P}(x) = \int_{-\infty}^x p(x') \, dx'
\end{equation}
In practice, we simply make a list of cumulative probabilities and figure out where to insert $x$.
```python
# Suppose we have 6 outcomes (e.g. an unfair die) with the following probabilities
p = [0.22181816, 0.16939565, 0.16688735, 0.06891783, 0.19622408, 0.17675693]
# generate the CDF
P = [np.sum(p[:i+1]) for i in range(len(p))]
plt.plot(P)
plt.xlabel('n')
plt.ylabel('P(n)')
plt.title('Cumulative Probability Distribution')
```
```python
P
```
[0.22181815999999999,
0.39121381,
0.55810115999999999,
0.62701898999999994,
0.82324306999999997,
1.0]
```python
# Generate N random numbers sampled according to the tower, searchsorted is *fast*
N = 1000000
events = np.searchsorted(P,np.random.random(N))
plt.plot(p,'o', mec='None')
plt.hist(events, bins=len(p), density=True, range=(-0.5,len(p) - 0.5), ec='w')  # 'density' replaces the deprecated 'normed' argument
plt.xlabel('x')
plt.ylabel('p(x)')
plt.xlim(-0.5,5.5)
plt.title('Tower Sampling')
```
## Sampling Continuous Distributions
We can extend the *tower sampling* concept to any continuous probability distribution. Our starting point will be the uniform distribution which satisfies:
\begin{equation}
p(x) dx = \left \{
\begin{array}{rcl}
dx & ; & 0 \le x \le 1 \\
0 & ; & \text{otherwise}
\end{array}
\right.
\end{equation}
such that:
\begin{equation}
\int_{-\infty}^{\infty} p(x) dx = 1.
\end{equation}
Now, we want to sample some new random variables $y$ from some probability distribution $p(y)$. This requires we identify a mapping $x\leftrightarrow y$ such that probability is conserved, i.e.
\begin{equation}
p(y) dy = p(x) dx \Rightarrow p(y) = \frac{dx}{dy} p(x) .
\end{equation}
We can integrate both sides:
\begin{equation}
P(y) = \int_{-\infty}^y p(y') dy' = \int_{-\infty}^{y} \frac{dx}{dy'} p(x) dy' = \int_0^y \frac{dx}{dy'} dy' = x(y)
\end{equation}
Therefore, if we can invert the CDF $P(y)$ we can get $y = P^{-1}(x)$ for a uniformly distributed $x$.
Let's see how this works for a few specific examples.
### Example 1
Generate a uniform random number $y$ on the domain $[a,b]$.
We have:
\begin{equation}
p(y) = \left \{
\begin{array}{rcl}
\frac{1}{b-a} & ; & a \le y \le b \\
0 & ; & \text{otherwise}
\end{array}
\right.
\end{equation}
So
\begin{equation}
x(y) =\int_a^y p(y') dy' = \int_a^y \frac{dy'}{b-a} = \left.\frac{y'}{b-a} \right \rvert_a^y = \frac{y-a}{b-a}
\end{equation}
so $y = (b-a)x + a$, our well known result.
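As a quick numerical check of this result (a minimal sketch, with $a=2$ and $b=5$ chosen arbitrarily), we can map uniform variates through $y=(b-a)x+a$ and compare the sample mean and standard deviation with the exact values $(a+b)/2$ and $(b-a)/\sqrt{12}$:
```python
import numpy as np  # already imported above

# map U[0,1] samples onto [a,b] via the inverse CDF y = (b-a)x + a
a, b = 2.0, 5.0                 # arbitrary example interval
x = np.random.random(100000)    # x ~ U[0,1]
y = (b - a)*x + a               # y should be ~ U[a,b]

# compare sample statistics with the exact mean (a+b)/2 and std (b-a)/sqrt(12)
print(np.average(y), (a + b)/2)
print(np.std(y), (b - a)/np.sqrt(12))
```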
## Next Time:
What happens if we can't analytically invert $P(y)$?
```python
```
| 3b6b7e5391b02bcf2c979bf29ba5eb09547cad88 | 470,881 | ipynb | Jupyter Notebook | 4-assets/BOOKS/Jupyter-Notebooks/Overflow/29_RandomNumbers.ipynb | impastasyndrome/Lambda-Resource-Static-Assets | 7070672038620d29844991250f2476d0f1a60b0a | [
"MIT"
]
| null | null | null | 4-assets/BOOKS/Jupyter-Notebooks/Overflow/29_RandomNumbers.ipynb | impastasyndrome/Lambda-Resource-Static-Assets | 7070672038620d29844991250f2476d0f1a60b0a | [
"MIT"
]
| null | null | null | 4-assets/BOOKS/Jupyter-Notebooks/Overflow/29_RandomNumbers.ipynb | impastasyndrome/Lambda-Resource-Static-Assets | 7070672038620d29844991250f2476d0f1a60b0a | [
"MIT"
]
| 1 | 2021-11-05T07:48:26.000Z | 2021-11-05T07:48:26.000Z | 585.672886 | 297,076 | 0.928596 | true | 12,810 | Qwen/Qwen-72B | 1. YES
2. YES | 0.73412 | 0.73412 | 0.538931 | __label__yue_Hant | 0.251732 | 0.090448 |
## Moving average filter
The moving average (MA) filter is a low-pass FIR filter used for smoothing signals. The filter sums L consecutive elements of the input vector and divides the sum by L, so each sum produces a single output point.
As the parameter L increases, the output becomes smoother, whereas sharp transitions in the data are made increasingly blunt. This implies that the filter has an excellent time-domain response but a poor frequency response.
### Implementation
\begin{align}
y[n]=\frac{1}{L}\sum_{k=0}^{L-1} x[n-k]
\end{align}
Where,
y: output vector<br>
x: input vector<br>
L: number of consecutive data points (filter length)
<center>
Figure 1: Discrete-time 4-point Moving Average FIR filter
</center>
## Modules
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import freqz, dimpulse, dstep
from math import sin, cos, sqrt, pi,pow
```
## Parameters
```python
fsampling = 100
L = 4
```
## Coefficients
```python
#coefficients
b = np.ones(L) #numerator coeffs of filter transfer function
#[1. 1. 1. 1.]
#b = (np.ones(L))/L #numerator coeffs of filter transfer function
a = np.array([L] + [0]*(L-1)) #denominator coeffs of filter transfer function
#[4 0 0 0]
#a = np.ones(1) #denominator coeffs of filter transfer function
```
## Frequency response
```python
#frequency response
w, h = freqz(b, a, worN=4096)
#w, h = freqz(b, a)
w *= fsampling / (2 * pi)
# Plot the amplitude response
plt.figure(dpi=100)
plt.subplot(2, 1, 1)
plt.suptitle('Bode Plot')
plt.plot(w, 20 * np.log10(abs(h)))
plt.ylabel('Magnitude [dB]')
plt.xlim(0, fsampling / 2)
plt.ylim(-60, 10)
plt.axhline(-6.01, linewidth=0.8, color='black', linestyle=':')
# Plot the phase response
plt.subplot(2, 1, 2)
plt.plot(w, 180 * np.angle(h) / pi)
plt.xlabel('Frequency [Hz]')
plt.ylabel('Phase [°]')
plt.xlim(0, fsampling / 2)
plt.ylim(-180, 90)
plt.yticks([-180, -135, -90, -45, 0, 45, 90])
plt.show()
print("Figure 2: Magnitude and phase response of L=4-point Moving Average filter")
```
## Impulse response
```python
t, y = dimpulse((b, a, 1/fsampling), n=2*L)
plt.figure(dpi=100)
plt.suptitle('Impulse Response')
_, _, baseline = plt.stem(t, y[0], basefmt='k:')
plt.setp(baseline, 'linewidth', 1)
baseline.set_xdata([0,1])
baseline.set_transform(plt.gca().get_yaxis_transform())
plt.xlabel('Time [seconds]')
plt.ylabel('Output')
plt.xlim(-1/fsampling, 2*L/fsampling)
plt.yticks([0, 0.5/L, 1.0/L])
plt.show()
print("Figure 3: Plot the impulse response of discrete-time system.")
```
### Testing our equation
\begin{align}
|H(e^{j\omega})|=\sqrt{\frac{1}{16}((1+cos(\omega)+cos(2\omega)+cos(3\omega))^{2}+(sin(\omega)+sin(2\omega)+sin(3\omega))^{2})}
\end{align}
```python
N=4096 #Elements for w vector
w=(np.linspace(0, pi, N, endpoint=True)).reshape(N, )
H=np.zeros((N,1))
for i in range(N):
H[i]=sqrt(pow(1/L,2)*(pow(1+cos(w[i])+cos(2*w[i])+cos(3*w[i]),2)+ pow(sin(w[i])+sin(2*w[i])+sin(3*w[i]),2)))
```
#### Comparison of results
```python
plt.figure(dpi=100)
plt.plot(w*fsampling / (2 * pi),20*np.log10(H+0.000000001))
plt.plot(w*fsampling / (2 * pi), 20*np.log10(abs(h)),linestyle='dashed')
plt.ylabel('Magnitude [dB]')
plt.xlabel('Frequency [Hz]')
plt.xlim(0, fsampling / 2)
plt.ylim(-60, 10)
plt.axhline(-6.01, linewidth=0.8, color='black', linestyle=':')
plt.show()
print("Figure 4: Comparison of freqz vs our equation")
```
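To see the smoothing behaviour in the time domain as well, here is a small illustrative sketch (the 2 Hz sine plus Gaussian noise test signal is an arbitrary choice) that applies the same $L$-point moving average with `np.convolve`:
```python
import numpy as np               # already imported above
import matplotlib.pyplot as plt  # already imported above

# Illustrative time-domain test: smooth a noisy sine with the L-point moving average
t = np.arange(0, 1, 1/fsampling)
x = np.sin(2*np.pi*2*t) + 0.3*np.random.randn(len(t))  # 2 Hz sine + Gaussian noise
y = np.convolve(x, np.ones(L)/L, mode='same')          # same coefficients as b/a above

plt.figure(dpi=100)
plt.plot(t, x, label='noisy input', alpha=0.5)
plt.plot(t, y, label='MA filtered (L=%d)' % L)
plt.xlabel('Time [s]')
plt.ylabel('Amplitude')
plt.legend()
plt.show()
```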
## Resources
SciPy Documentation: <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.freqz.html"><code>scipy.signal.freqz</code></a><br>
```python
```
| 3f736266b1dba2a7ff02b725899f08bb7bc2bcb1 | 113,330 | ipynb | Jupyter Notebook | MA filter/Moving average filter.ipynb | frhaedo/dsp | a6941f915b602e9daf4b53c69c63be28e3e9df1d | [
"Apache-2.0"
]
| null | null | null | MA filter/Moving average filter.ipynb | frhaedo/dsp | a6941f915b602e9daf4b53c69c63be28e3e9df1d | [
"Apache-2.0"
]
| 1 | 2021-04-03T19:15:05.000Z | 2021-04-03T19:15:05.000Z | MA filter/Moving average filter.ipynb | frhaedo/dsp | a6941f915b602e9daf4b53c69c63be28e3e9df1d | [
"Apache-2.0"
]
| null | null | null | 336.290801 | 33,584 | 0.936548 | true | 1,129 | Qwen/Qwen-72B | 1. YES
2. YES | 0.903294 | 0.913677 | 0.825319 | __label__eng_Latn | 0.688966 | 0.755825 |
# Black-Scholes European Option Pricing Script
```python
# File Contains: Python code containing closed-form solutions for the valuation of European Options,
# for backward compatability with Python 2.7
from __future__ import division
# import necessary libaries
import math
import numpy as np
from scipy.stats import norm
from scipy.stats import mvn
# Plotting
import matplotlib.pylab as pl
import numpy as np
```
# Option Pricing Theory: Black-Scholes model
Black Scholes genre option models are widely used to value European options. The original “Black Scholes” model was published in 1973 for non-dividend paying stocks. Since that time, a wide variety of extensions to the original Black Scholes model have been created. Modifications of the formula are used to price other financial instruments like dividend paying stocks, commodity futures, and FX forwards. Mathematically, these formulas are nearly identical. The primary difference between these models is whether the asset has a carrying cost (if the asset has a cost or benefit associated with holding it) and how the asset gets present valued. To illustrate this relationship, a “generalized” form of the Black Scholes equation is shown below.
The Black Scholes model is based on number of assumptions about how financial markets operate. Black Scholes style models assume:
1. **Arbitrage Free Markets**. Black Scholes formulas assume that traders try to maximize their personal profits and don’t allow arbitrage opportunities (riskless opportunities to make a profit) to persist.
2. **Frictionless, Continuous Markets**. This assumption of frictionless markets assumes that it is possible to buy and sell any amount of the underlying at any time without transaction costs.
3. **Risk Free Rates**. It is possible to borrow and lend money at a risk-free interest rate
4. **Log-normally Distributed Price Movements**. Prices are log-normally distributed and described by Geometric Brownian Motion
5. **Constant Volatility**. The Black Scholes genre options formulas assume that volatility is constant across the life of the option contract.
In practice, these assumptions are not particularly limiting. The primary limitation imposed by these models is that it is possible to (reasonably) describe the dispersion of prices at some point in the future in a mathematical equation.
An important concept of Black Scholes models is that the actual way that the underlying asset drifts over time isn't important to the valuation. Since European options can only be exercised when the contract expires, it is only the distribution of possible prices on that date that matters - the path that the underlying took to that point doesn't affect the value of the option. This is why the primary limitation of the model is being able to describe the dispersion of prices at some point in the future, not that the dispersion process is simplistic.
The generalized Black-Scholes formula can be found below (see *Figure 1 – Generalized Black Scholes Formula*). While these formulas may look complicated at first glance, most of the terms can be found as part of an options contract or are prices readily available in the market. The only term that is difficult to calculate is the implied volatility (σ). Implied volatility is typically calculated using prices of other options that have recently been traded.
>*Call Price*
>\begin{equation}
C = Fe^{(b-r)T} N(D_1) - Xe^{-rT} N(D_2)
\end{equation}
>*Put Price*
>\begin{equation}
P = Xe^{-rT} N(-D_2) - Fe^{(b-r)T} N(-D_1)
\end{equation}
>*with the following intermediate calculations*
>\begin{equation}
D_1 = \frac{ln\frac{F}{X} + (b+\frac{V^2}{2})T}{V*\sqrt{T}}
\end{equation}
>\begin{equation}
D_2 = D_1 - V\sqrt{T}
\end{equation}
>*and the following inputs*
>| Symbol | Meaning |
>|--------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
>| F or S | **Underlying Price**. The price of the underlying asset on the valuation date. S is used commonly used to represent a spot price, F a forward price |
>| X | **Strike Price**. The strike, or exercise, price of the option. |
>| T | **Time to expiration**. The time to expiration in years. This can be calculated by comparing the time between the expiration date and the valuation date. T = (t_1 - t_0)/365 |
>| t_0 | **Valuation Date**. The date on which the option is being valued. For example, it might be today’s date if the option we being valued today. |
>| t_1 | **Expiration Date**. The date on which the option must be exercised. |
>| V | **Volatility**. The volatility of the underlying security. This factor usually cannot be directly observed in the market. It is most often calculated by looking at the prices for recent option transactions and back-solving a Black Scholes style equation to find the volatility that would result in the observed price. This is commonly abbreviated with the greek letter sigma,σ, although V is used here for consistency with the code below. |
>| q | **Continuous Yield**. Used in the Merton model, this is the continuous yield of the underlying security. Option holders are typically not paid dividends or other payments until they exercise the option. As a result, this factor decreases the value of an option. |
>| r            | **Risk Free Rate**. This is the expected return on a risk-free investment. This is commonly approximated by the yield on a low-risk government bond or the rate that large banks borrow between themselves (LIBOR). The rate depends on the tenor of the cash flow. For example, a 10-year risk-free bond is likely to have a different rate than a 20-year risk-free bond. |
>| rf | **Foreign Risk Free Rate**. Used in the Garman Kohlhagen model, this is the risk free rate of the foreign currency. Each currency will have a risk free rate. |
>*Figure 1 - Generalized Black Scholes Formula*
## Model Implementation
These functions encapsulate a generic version of the pricing formulas. They are primarily intended to be called by the other functions within this library. The following functions will have a fixed interface so that they can be called directly for academic applications that use the cost-of-carry (b) notation:
_GBS() A generalized European option model
_GBS_ImpliedVol() A generalized European option implied vol calculator
The other functions in this library are called by the main functions and are not expected to be interface safe (the implementation and interface may change over time).
### Implementation for European Options
```python
# The primary class for calculating Generalized Black Scholes option prices and deltas
# It is not intended to be part of this module's public interface
# Inputs: option_type = "p" or "c", fs = price of underlying, x = strike, t = time to expiration, r = risk free rate
# b = cost of carry, v = implied volatility
# Outputs: value, delta, gamma, theta, vega, rho
def _gbs(option_type, fs, x, t, r, b, v):
_debug("Debugging Information: _gbs()")
# -----------
# Create preliminary calculations
t__sqrt = math.sqrt(t)
d1 = (math.log(fs / x) + (b + (v * v) / 2) * t) / (v * t__sqrt)
d2 = d1 - v * t__sqrt
if option_type == "c":
# it's a call
_debug(" Call Option")
value = fs * math.exp((b - r) * t) * norm.cdf(d1) - x * math.exp(-r * t) * norm.cdf(d2)
delta = math.exp((b - r) * t) * norm.cdf(d1)
gamma = math.exp((b - r) * t) * norm.pdf(d1) / (fs * v * t__sqrt)
theta = -(fs * v * math.exp((b - r) * t) * norm.pdf(d1)) / (2 * t__sqrt) - (b - r) * fs * math.exp(
(b - r) * t) * norm.cdf(d1) - r * x * math.exp(-r * t) * norm.cdf(d2)
vega = math.exp((b - r) * t) * fs * t__sqrt * norm.pdf(d1)
rho = x * t * math.exp(-r * t) * norm.cdf(d2)
else:
# it's a put
_debug(" Put Option")
value = x * math.exp(-r * t) * norm.cdf(-d2) - (fs * math.exp((b - r) * t) * norm.cdf(-d1))
delta = -math.exp((b - r) * t) * norm.cdf(-d1)
gamma = math.exp((b - r) * t) * norm.pdf(d1) / (fs * v * t__sqrt)
theta = -(fs * v * math.exp((b - r) * t) * norm.pdf(d1)) / (2 * t__sqrt) + (b - r) * fs * math.exp(
(b - r) * t) * norm.cdf(-d1) + r * x * math.exp(-r * t) * norm.cdf(-d2)
vega = math.exp((b - r) * t) * fs * t__sqrt * norm.pdf(d1)
rho = -x * t * math.exp(-r * t) * norm.cdf(-d2)
_debug(" d1= {0}\n d2 = {1}".format(d1, d2))
_debug(" delta = {0}\n gamma = {1}\n theta = {2}\n vega = {3}\n rho={4}".format(delta, gamma,
theta, vega,
rho))
return value, delta, gamma, theta, vega, rho
```
### Implementation: Implied Volatility
This section implements implied volatility calculations. It contains implementation of a **Newton-Raphson Search.** This is a fast implied volatility search that can be used when there is a reliable estimate of Vega (i.e., European options)
```python
# ----------
# Find the Implied Volatility of an European (GBS) Option given a price
# using Newton-Raphson method for greater speed since Vega is available
#def _gbs_implied_vol(option_type, fs, x, t, r, b, cp, precision=.00001, max_steps=100):
# return _newton_implied_vol(_gbs, option_type, x, fs, t, b, r, cp, precision, max_steps)
```
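Since the `_newton_implied_vol()` helper referenced above is not included in this script, the following is a minimal sketch (the function name, initial guess and tolerance defaults are assumptions) of such a Newton-Raphson search, reusing the analytic vega returned by `_gbs()`:
```python
# Minimal sketch of a Newton-Raphson implied-volatility search (assumed implementation;
# the original _newton_implied_vol helper is not part of this script)
def _newton_implied_vol_sketch(option_type, fs, x, t, r, b, cp, precision=.00001, max_steps=100):
    v = 0.5  # initial guess for the implied volatility (assumption)
    for _ in range(max_steps):
        value, delta, gamma, theta, vega, rho = _gbs(option_type, fs, x, t, r, b, v)
        diff = value - cp                  # pricing error at the current volatility
        if abs(diff) < precision:
            break
        if vega < 1e-12:                   # avoid dividing by a vanishing vega
            break
        v = v - diff / vega                # Newton-Raphson update using the analytic vega
    return v
```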
### Public Interface for valuation functions
This section encapsulates the functions that user will call to value certain options. These function primarily figure out the cost-of-carry term (b) and then call the generic version of the function (like _GBS() or _American). All of these functions return an array containg the premium and the greeks.
```python
# ---------------------------
# Black Scholes: stock Options (no dividend yield)
# Inputs:
# option_type = "p" or "c"
# fs = price of underlying
# x = strike
# t = time to expiration
# v = implied volatility
# r = risk free rate
# q = dividend payment
# b = cost of carry
# Outputs:
# value = price of the option
# delta = first derivative of value with respect to price of underlying
# gamma = second derivative of value w.r.t price of underlying
# theta = first derivative of value w.r.t. time to expiration
# vega = first derivative of value w.r.t. implied volatility
# rho = first derivative of value w.r.t. risk free rates
def BlackScholes(option_type, fs, x, t, r, v):
b = r
return _gbs(option_type, fs, x, t, r, b, v)
```
### Public Interface for implied Volatility Functions
```python
# Inputs:
# option_type = "p" or "c"
# fs = price of underlying
# x = strike
# t = time to expiration
# v = implied volatility
# r = risk free rate
# q = dividend payment
# b = cost of carry
# Outputs:
# value = price of the option
# delta = first derivative of value with respect to price of underlying
# gamma = second derivative of value w.r.t price of underlying
# theta = first derivative of value w.r.t. time to expiration
# vega = first derivative of value w.r.t. implied volatility
# rho = first derivative of value w.r.t. risk free rates
```
```python
#def euro_implied_vol(option_type, fs, x, t, r, q, cp):
# b = r - q
# return _gbs_implied_vol(option_type, fs, x, t, r, b, cp)
```
### Implementation: Helper Functions
These functions aren't part of the main code but serve as utility function mostly used for debugging
```python
# ---------------------------
# Helper Function for Debugging
# Prints a message if running code from this module and _DEBUG is set to true
# otherwise, do nothing
# Developer can toggle _DEBUG to True for more messages
# normally this is set to False
_DEBUG = False
def _debug(debug_input):
    if (__name__ == "__main__") and (_DEBUG is True):
print(debug_input)
```
## Real Calculations of Options Prices
```python
bs = BlackScholes('c', fs=60, x=65, t=0.25, r=0.08, v=0.30)
optionPrice = bs[0]
optionPrice
```
2.1333684449162007
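As a quick sanity check on the implementation (a small sketch using the same inputs as above), put-call parity for a non-dividend-paying stock, $C - P = S - Xe^{-rT}$, should hold to rounding error:
```python
# Put-call parity check: C - P should equal S - X*exp(-r*T) when b = r (no dividends)
call = BlackScholes('c', fs=60, x=65, t=0.25, r=0.08, v=0.30)[0]
put = BlackScholes('p', fs=60, x=65, t=0.25, r=0.08, v=0.30)[0]
print(call - put, 60 - 65 * math.exp(-0.08 * 0.25))
```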
## Option prices charts
```python
stockPrices = np.arange(50, 100, 1)
prices = stockPrices * 0.0    # float array so computed option prices are not truncated to integers
stockPrice = 60
strike = 65
timeToExpiration = 0.25
impliedVolatility = 0.30
riskFreeRate = 0.05
pl.title('Stock Option Price')
for i in range(len(stockPrices)):
prices[i] = BlackScholes('c', stockPrices[i], strike, t = timeToExpiration, r = riskFreeRate, v = impliedVolatility)[0]
pl.plot(stockPrices, prices, label = 'Option Price')
pl.xlabel("Stock Price")
pl.ylabel("Option Price")
pl.grid(True)
pl.show()
```
```python
timeToExpiration = np.arange(0.1, 1, 0.05)
prices = timeToExpiration * 0
stockPrice = 60
strike = 65
#timeToExpiration = 0.25
impliedVolatility = 0.30
riskFreeRate = 0.05
pl.title('Stock Option Price')
for i in range(len(prices)):
prices[i] = BlackScholes('c', stockPrice, strike, t = timeToExpiration[i], r = riskFreeRate, v = impliedVolatility)[0]
pl.plot(timeToExpiration, prices, label = 'Option Price')
pl.xlabel("Time to Expiry")
pl.ylabel("Option Price")
pl.grid(True)
pl.show()
```
```python
strikes = np.arange(50, 80, 1)
prices = strikes * 0.0    # float array so computed option prices are not truncated to integers
stockPrice = 60
strike = 65
timeToExpiration = 0.25
impliedVolatility = 0.30
riskFreeRate = 0.05
pl.title('Stock Option Price')
for i in range(len(prices)):
prices[i] = BlackScholes('c', stockPrice, strikes[i], t = timeToExpiration, r = riskFreeRate, v = impliedVolatility)[0]
pl.plot(strikes, prices, label = 'Option Price')
pl.xlabel("Striking Price")
pl.ylabel("Option Price")
pl.grid(True)
pl.show()
```
```python
strikes = np.arange(50, 80, 1)
prices = strikes * 0.0    # float array so computed option prices are not truncated to integers
stockPrice = 60
strike = 65
timeToExpiration = 0.25
impliedVolatility = 0.30
riskFreeRate = 0.05
pl.title('Stock Put Option Price')
for i in range(len(prices)):
prices[i] = BlackScholes('p', stockPrice, strikes[i], t = timeToExpiration, r = riskFreeRate, v = impliedVolatility)[0]
pl.plot(strikes, prices, label = 'Option Price')
pl.xlabel("Striking Price")
pl.ylabel("Option Price")
pl.grid(True)
pl.show()
```
```python
```
| 3bcbcf5990c14325f9cf93119860a2c65851509c | 83,120 | ipynb | Jupyter Notebook | .ipynb_checkpoints/OptionsPricingEvaluation-checkpoint.ipynb | SolitonScientific/Option_Pricing | 8e1ba226583f3f03a2d978d332696129bafa83cc | [
"MIT"
]
| null | null | null | .ipynb_checkpoints/OptionsPricingEvaluation-checkpoint.ipynb | SolitonScientific/Option_Pricing | 8e1ba226583f3f03a2d978d332696129bafa83cc | [
"MIT"
]
| null | null | null | .ipynb_checkpoints/OptionsPricingEvaluation-checkpoint.ipynb | SolitonScientific/Option_Pricing | 8e1ba226583f3f03a2d978d332696129bafa83cc | [
"MIT"
]
| null | null | null | 165.248509 | 16,818 | 0.857507 | true | 3,801 | Qwen/Qwen-72B | 1. YES
2. YES | 0.91848 | 0.812867 | 0.746603 | __label__eng_Latn | 0.981937 | 0.57294 |
$$
\newcommand{\dt}{\Delta t}
\newcommand{\udt}[1]{u^{({#1})}(T)}
\newcommand{\Edt}[1]{E^{({#1})}}
\newcommand{\uone}[1]{u_{1}^{({#1})}}
$$
This is the third in a series of posts on testing scientific software. For this to make sense, you'll need to have skimmed [the motivation and background](http://ianhawke.github.io/blog/close-enough.html). The [first in the series](http://ianhawke.github.io/blog/close-enough-part-1.html) assumed we only cared about the answer, and whether we'd implemented the expected algorithm wasn't important. The [second](http://ianhawke.github.io/blog/close-enough-part-2.html) assumed we cared about the convergence rate of the algorithm, not the specific answer, nor the precise algorithm itself.
In the previous cases we've focused on the *behaviour* of the algorithm: whether it will give the correct answer in the limit, or whether it converges as expected. This is really what you want to do: you're trying to do science, to get an answer, and so implementing the precise algorithm should be secondary. If you are trying to implement a precise algorithm, it should be because of its (expected) behaviour, and so you should be testing for that!
However, let's put that aside and see if we can work out how to test whether we've implemented exactly the algorithm we want: Euler's method. Checking convergence alone is not enough: the [Backwards Euler method](http://en.wikipedia.org/wiki/Backward_Euler_method) has identical convergence behaviour, as do whole families of other methods. We need a check that characterizes the method uniquely.
The *local truncation error* $\Edt{\dt}$ would be exactly such a check. This is the error produced by a single step from exact data, e.g.
$$
\begin{equation}
\Edt{\dt} = u_1 - u(\dt).
\end{equation}
$$
This is enough as Euler's method is independent of the data, so each individual step performs the same operations.
For Euler's method we have
$$
\begin{equation}
u_{n+1} = u_n + \dt f(t_n, u_n)
\end{equation}
$$
and so
$$
\begin{equation}
\Edt{\dt} = \left| u_0 + \dt f(0, u_0) - u(\dt) \right| = \left| \frac{\dt^2}{2} \left. u''\right|_{t=0} \right| + {\cal O}(\dt^3).
\end{equation}
$$
This is all well and good, but we don't know the exact solution (in principle) at any point other than $t=0$, so cannot compute $u(\dt)$, so cannot compute $\Edt{\dt}$. We only know $\uone{\dt}$ for whichever values of $\dt$ we wish to compute.
We can use repeated Richardson extrapolation to get the solution $u(\dt)$ to sufficient accuracy, however. On the *assumption* that the algorithm is first order (we can use the previous techniques to check this), we can use Richardson extrapolation to repeatedly remove the highest order error terms. We can thus find the local truncation errors.
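As a toy illustration of what repeated Richardson extrapolation does (a minimal sketch with an arbitrary first-order approximation whose exact limit is $1$, using the same update that appears in the loop below):
```python
import numpy

# Toy first-order approximation: A(h) = (exp(h) - 1)/h -> 1 as h -> 0,
# with an error expansion in integer powers of h.
hs = numpy.array([0.1 / 2**i for i in range(6)])
A = (numpy.exp(hs) - 1.0) / hs

# Repeatedly remove the leading error term, assuming successive powers h, h^2, ...
A_next = A.copy()
for s in range(1, len(A)):
    A_next = (2**s * A_next[1:] - A_next[:-1]) / (2**s - 1)

print(abs(A[0] - 1.0), abs(A_next[0] - 1.0))  # raw error vs extrapolated error
```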
```python
from math import sin, cos, log, ceil
import numpy
from matplotlib import pyplot
%matplotlib inline
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
```
We will again need the code implementing Euler's method from [the full phugoid model notebook](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/01_phugoid/01_03_PhugoidFullModel.ipynb).
```python
# model parameters:
g = 9.8 # gravity in m s^{-2}
v_t = 30.0 # trim velocity in m s^{-1}
C_D = 1/40. # drag coefficient --- or D/L if C_L=1
C_L = 1.0 # for convenience, use C_L = 1
### set initial conditions ###
v0 = v_t # start at the trim velocity (or add a delta)
theta0 = 0.0 # initial angle of trajectory
x0 = 0.0     # horizontal position is arbitrary
y0 = 1000.0 # initial altitude
```
```python
def f(u):
"""Returns the right-hand side of the phugoid system of equations.
Parameters
----------
u : array of float
array containing the solution at time n.
Returns
-------
dudt : array of float
array containing the RHS given u.
"""
v = u[0]
theta = u[1]
x = u[2]
y = u[3]
return numpy.array([-g*sin(theta) - C_D/C_L*g/v_t**2*v**2,
-g*cos(theta)/v + g/v_t**2*v,
v*cos(theta),
v*sin(theta)])
```
```python
def euler_step(u, f, dt):
"""Returns the solution at the next time-step using Euler's method.
Parameters
----------
u : array of float
solution at the previous time-step.
f : function
function to compute the right hand-side of the system of equation.
dt : float
time-increment.
Returns
-------
u_n_plus_1 : array of float
approximate solution at the next time step.
"""
return u + dt * f(u)
```
This time we will need lots of solutions in order to measure anything. We will construct ten local truncation errors. For each, we take a single step and store the result for $v$. Then, for each single step result, we use eight other calculations using our algorithm. Each will take multiple, smaller, steps to get a more accurate result for $v$ at the same time. We then use repeated Richardson extrapolation to find, to very high accuracy, the "true" result for $v$ at this time. Finally, we can compare against the original single step results to find the local truncation errors.
This *only* works if the algorithm is converging at the expected rate, as we checked in [the second post](http://ianhawke.github.io/blog/close-enough-part-2.html). It does not rely on the algorithm being exactly Euler's method (thankfully, or the argument would be circular!), but does need the convergence rate to be known.
```python
T_values = numpy.array([0.001*2**(i) for i in range(10)])
lte_values = numpy.zeros_like(T_values)
for j, T in enumerate(T_values):
dt_values = numpy.array([T*2**(i-8) for i in range(8)])
v_values = numpy.zeros_like(dt_values)
for i, dt in enumerate(dt_values):
N = int(T/dt)+1
t = numpy.linspace(0.0, T, N)
u = numpy.empty((N, 4))
u[0] = numpy.array([v0, theta0, x0, y0])
for n in range(N-1):
u[n+1] = euler_step(u[n], f, dt)
v_values[i] = u[-1,0]
v_next = v_values
    for s in range(1, len(v_values)):  # repeatedly extrapolate until a single value remains
v_next = (2**s*v_next[1:]-v_next[0:-1])/(2**s-1)
lte_values[j] = abs(v_values[0]-v_next)
```
This gives us a set of local truncation errors at given timesteps:
```python
for dt, lte in zip(T_values, lte_values):
print("For dt={} the local truncation error is {}.".format(dt, lte))
```
For dt=0.001 the local truncation error is 1.99954897084e-09.
For dt=0.002 the local truncation error is 8.05573563412e-09.
For dt=0.004 the local truncation error is 3.26825286834e-08.
For dt=0.008 the local truncation error is 1.34407251551e-07.
For dt=0.016 the local truncation error is 5.67045063349e-07.
For dt=0.032 the local truncation error is 2.50349015474e-06.
For dt=0.064 the local truncation error is 1.18961298661e-05.
For dt=0.128 the local truncation error is 6.26360622817e-05.
For dt=0.256 the local truncation error is 0.000370836831557.
For dt=0.512 the local truncation error is 0.00244291651282.
We now have many values for the local truncation error. We can thus compute the convergence rate of the local truncation error itself (which should be two), and check that it is close enough to the expected value using the [same techniques as in the second post in the series](http://ianhawke.github.io/blog/close-enough-part-2.html):
```python
s_m = numpy.zeros(2)
for i in range(2):
s_m[i] = log(abs((lte_values[2+i]-lte_values[1+i])/
(lte_values[1+i]-lte_values[0+i]))) / log(2.0)
print("Measured convergence rate (base dt {}) is {:.6g} (error is {:.4g}).".format(
T_values[i], s_m[i], abs(s_m[i]-2)))
print("Convergence error has reduced by factor {:.4g}.".format(
abs(s_m[0]-2)/abs(s_m[1]-2)))
```
Measured convergence rate (base dt 0.001) is 2.02375 (error is 0.02375).
Measured convergence rate (base dt 0.002) is 2.04637 (error is 0.04637).
Convergence error has reduced by factor 0.5121.
So the error has gone down considerably, and certainly $0.51 > 1/3$, so the convergence rate of the local truncation error is close enough to 2.
However, that alone isn't enough to determine that this really is Euler's method: as noted above, the convergence rate of the local truncation error isn't the key point: the key point is that we can predict its *actual value* as
$$
\begin{equation}
\Edt{\dt} = \frac{\dt^2}{2} \left| \left. u''\right|_{t=0} \right| + {\cal O}(\dt^3) = \frac{\dt^2}{2} \left| \left( \left. \frac{\partial f}{\partial t} \right|_{t=0} + f(0, u_0) \left. \frac{\partial f}{\partial u} \right|_{t=0, u=u_0} \right) \right|.
\end{equation}
$$
For the specific problem considered here we have
$$
\begin{equation}
u = \begin{pmatrix} v \\ \theta \\ x \\ y \end{pmatrix}, \quad f = \begin{pmatrix} -g\sin \theta - \frac{C_D}{C_L} \frac{g}{v_t^2} v^2 \\ -\frac{g}{v}\cos \theta + \frac{g}{v_t^2} v \\ v \cos \theta \\ v \sin \theta \end{pmatrix}.
\end{equation}
$$
We note that $f$ does not explicitly depend on $t$ (so $\partial f / \partial t \equiv 0$), and that the values of the parameters $g, C_D, C_L$ and $v_t$ are given above, along with the initial data $u_0 = (v_0, \theta_0, x_0, y_0)$.
So, let's find what the local truncation error should be.
```python
import sympy
sympy.init_printing()
v, theta, x, y, g, CD, CL, vt, dt = sympy.symbols('v, theta, x, y, g, C_D, C_L, v_t, {\Delta}t')
u = sympy.Matrix([v, theta, x, y])
f = sympy.Matrix([-g*sympy.sin(theta)-CD/CL*g/vt**2*v**2,
-g/v*sympy.cos(theta)+g/vt**2*v,
v*sympy.cos(theta),
v*sympy.sin(theta)])
dfdu = f.jacobian(u)
lte=dt**2/2*dfdu*f
```
```python
lte_0=lte.subs([(g,9.8),(vt,30.0),(CD,1.0/40.0),(CL,1.0),(v,30.0),(theta,0.0),(x,0.0),(y,1000.0)])
lte_0
```
So let us check the local truncation error values, which are computed for `v`:
```python
lte_exact = float(lte_0[0]/dt**2)
lte_values/T_values**2
```
array([ 0.00199955, 0.00201393, 0.00204266, 0.00210011, 0.00221502,
0.00244481, 0.00290433, 0.003823 , 0.00565852, 0.00931899])
These ratios are indeed converging towards $0.002$ as they should, confirming that $\Edt{\dt} \simeq 0.002 \, \dt^2$. To check this quantitatively, we use that our model is
$$
\begin{equation}
\Edt{\dt} = \alpha \dt^2 + {\cal O}(\dt^3),
\end{equation}
$$
with the exact value $\alpha_e \simeq 0.002$. So we can use our usual Richardson extrapolation methods applied to $\Edt{\dt}/\dt^2$, to get a measured value for $\alpha$ with an error interval:
$$
\begin{equation}
\alpha_m = \frac{8\Edt{\dt} - \Edt{2\dt} \pm \left| \Edt{\dt} - \Edt{2\dt} \right|}{4\dt^2}.
\end{equation}
$$
```python
for i in range(len(lte_values)-1):
Edt = lte_values[i]
E2dt = lte_values[i+1]
dt = T_values[i]
err1 = abs(Edt - E2dt)
a_lo = (8.0*Edt - E2dt - err1)/(4.0*dt**2)
a_hi = (8.0*Edt - E2dt + err1)/(4.0*dt**2)
print("Base dt={:.4g}: the measured alpha is in [{:.5g}, {:.5g}]".format(
dt, a_lo, a_hi))
print("Does this contain the exact value? {}".format(
a_lo <= lte_exact <= a_hi))
```
Base dt=0.001: the measured alpha is in [0.00047112, 0.0034992]
Does this contain the exact value? True
Base dt=0.002: the measured alpha is in [0.00044604, 0.0035244]
Does this contain the exact value? True
Base dt=0.004: the measured alpha is in [0.00039575, 0.0035747]
Does this contain the exact value? True
Base dt=0.008: the measured alpha is in [0.00029522, 0.0036752]
Does this contain the exact value? True
Base dt=0.016: the measured alpha is in [9.4165e-05, 0.0038763]
Does this contain the exact value? True
Base dt=0.032: the measured alpha is in [-0.00030782, 0.0042784]
Does this contain the exact value? True
Base dt=0.064: the measured alpha is in [-0.0011113, 0.0050826]
Does this contain the exact value? True
Base dt=0.128: the measured alpha is in [-0.0027153, 0.0066903]
Does this contain the exact value? True
Base dt=0.256: the measured alpha is in [-0.0059063, 0.0099024]
Does this contain the exact value? True
So, to the limits that we can measure the local truncation error, we have implemented Euler's method.
| ccd4090952b650f396b5ca971e820c8e1095587c | 23,186 | ipynb | Jupyter Notebook | content/notebooks/03-Close-Enough-Just-Euler.ipynb | IanHawke/blog | aa47807bf5a96cc97ecfbe48e41b8f795b88cba9 | [
"MIT"
]
| 3 | 2015-03-10T23:49:33.000Z | 2016-06-01T23:53:24.000Z | content/notebooks/03-Close-Enough-Just-Euler.ipynb | IanHawke/blog | aa47807bf5a96cc97ecfbe48e41b8f795b88cba9 | [
"MIT"
]
| null | null | null | content/notebooks/03-Close-Enough-Just-Euler.ipynb | IanHawke/blog | aa47807bf5a96cc97ecfbe48e41b8f795b88cba9 | [
"MIT"
]
| null | null | null | 45.285156 | 3,229 | 0.587898 | true | 3,832 | Qwen/Qwen-72B | 1. YES
2. YES | 0.763484 | 0.72487 | 0.553427 | __label__eng_Latn | 0.975753 | 0.124125 |
# AS-AD-model with long-run growth and bubbles.
In the following, we analyze a basic AS-AD model containing equilibria in the goods and services markets, an inflation-targeting Taylor rule, short-run aggregate supply determined by a Phillips curve with nominal wage rigidities, as well as rational expectations for inflation.
The model has been extended with exogenous growth in structural output, $\overline{y}$, in order to simulate an economy with nominal volatility around an underlying real growth path.
The model has furthermore been extended with three types of shocks:
- Biased noise in the supply side of the economy, resulting in a negative correlation between inflation and output, the degree of which is decided by the trade-off set in the Taylor rule. We have made the Central Bank relatively averse to inflation gaps from the target of 2 pct., setting a higher h-value than b.
- White noise in the demand side of the economy, which is, however, almost completely neutralized by Central Bank monetary policy and thus does not affect the economy much.
- A risk of a bubble bursting when the economy is bullish, i.e. has a positive output gap. Whenever this is the case, we have implemented a given risk of nominal GDP, $y_t$, dropping a certain relative amount in the supply side of the economy: say, a 0.4 pct. chance of $y_t$ falling by 20 pct. in each period with a positive output gap.
We are interested in seeing the effects of crashes with a Central Bank following a given Taylor-rule.
The model has been calibrated for one period lasting around a week. For a 1000-period simulation this means around 20 years.
```python
import numpy as np
import csv
import matplotlib.pyplot as plt
import pandas as pd
import math
from sympy import symbols, Eq, solve
from scipy import optimize
```
## The Model:
The long-run part of the model can be described by:
$$\tag{GROWTH} \overline{y}_t = \overline{y}_{t-1} + k$$
In which $k \sim N(\mu_g,\sigma^2_g)$ is a stochastic chock with drift $\mu_g$ and variance $\sigma^2_g$. Thus, the structural output grows exogenously over time. As a baseline we have sat $\sigma^2_g = 0$ in order to isolate short-run fluctuations from long run ones, and sat $\mu_g=0.0008$, as this growth-rate on a weekly basis leads to an annual growth of around 2,3 pct.
The short-run part of the model can be described by:
$$ \tag{IS} y_t - \overline{y} = \alpha_1(g-\overline{g}) - \alpha_2(r-\overline{r}) - \alpha_3(\tau - \overline{\tau}) + d_t$$
$$ \tag{MP} i_t^p = \pi_{t+1}^e + \overline{r} + h(\pi - \pi^*) + b(y-\overline{y})$$
$$ \tag{AS} y_t - \overline{y} = \frac{1}{\gamma}(\pi - \pi^*) + \frac{v_t}{\gamma}$$
$$ \tag{IE} \pi^e_{t+1} = \pi_t^* $$
$$ \tag{FE} i_t^p - \pi_t = r_t $$
This results in the final model being solved by the system:
$$ \tag{AS} y_t - \overline{y} = \frac{1}{\gamma}(\pi - \pi^*) + \frac{v_t}{\gamma}$$
$$ \tag{AD} y_t - \overline{y} = -\frac{\alpha_2 h}{1+\alpha_2 b}(\pi_t-\pi^*) + \frac{\alpha_1}{1+\alpha_2 b}(g-\overline{g}) - \frac{\alpha_3}{1+\alpha_2 b}(\tau-\overline{\tau})+\frac{d_t}{1+\alpha_2 b}$$
$$ \tag{SC} v_{t} = v_{t-1}+x_t + c_t$$
$$ \tag{DC} d_{t} = d_{t-1} + z_t $$
Where:
$x_t \sim N(\mu_s,\sigma_s^2)$ is the supply shock and $\mu_s$ is a drift of nominal short-run supply. This has been set to 0.001, greater than that of structural supply, in order to create an effect of nominally faster growth than the structural one, which consequently offsets the negative effects of bubble bursts.
$z_t \sim N(\mu_d,\sigma_s^2)$ is the white-noise demand shock, having $\mu_d = 0$.
$c_t = \begin{cases}-\rho y_t \hspace{2mm} \textrm{with probability} \hspace{2mm} p, \hspace{2mm} 0 \hspace{2mm} \textrm{with probability} \hspace{2mm} (1-p) & \text{if } \hspace{2mm} (y_t-\overline{y}) > 0 \\
0 & \text{if } \hspace{2mm} (y_t-\overline{y})\le 0
\end{cases}$
is a stochastic crash variable, carrying a risk of $p$ of reducing nominal supply in each period with a positive output gap.
Note that all lower-case variables are in logs, so differences are approximately percentage deviations.
```python
#Defining exogenous variables:
alpha1 = 1 #Weight of public spending on consumption demand
alpha2 = 1 #Weight of real interest rate on consumption demand
alpha3 = 1 #Weight of taxation changes on consumption demand
gamma = 1.5 #Constant, philips
h = 1 #Central Banks inflation-rule
b = 0.5 #Central Banks output-rule
tau = 1 #Taxation
taubar = 1 #Baseline taxation
g = 1 #Public spending
gbar = 1 #Baseline public spending
ybar = 2 #Structural GDP
pibar = 2 #Inflation target
pi_exp = pibar #Inflationary expectations
p = 0.004 #Risk of crash each period
rho = 0.2 #Share of supply cut if crash
v = np.random.normal(loc=0.001, scale=0.01, size=None)#Noise with drift (SUPPLY)
#Setting number of periods in simulation:
simsize = 1000
#Defining the time series:
data = pd.DataFrame()
data['tid'] = range(0,1000)
```
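With the parameters in place, the static part of the model can also be solved symbolically for a single period. The sketch below (symbol names are purely illustrative, and the shocks $v_t$ and $d_t$ are left as symbols) uses sympy to express $y$ and $\pi$ as functions of the shocks; the simulation further below solves the same system numerically in every period.
```python
from sympy import symbols, Eq, solve  # already imported above

# Symbolic one-period solution of the AS-AD system (illustrative sketch)
y_s, pi_s, v_s, d_s = symbols('y pi v_t d_t')

AS_eq = Eq(y_s - ybar, (1/gamma)*(pi_s - pibar) + v_s/gamma)
AD_eq = Eq(y_s - ybar, -(alpha2*h/(1 + alpha2*b))*(pi_s - pibar)
           + (alpha1/(1 + alpha2*b))*(g - gbar)
           - (alpha3/(1 + alpha2*b))*(tau - taubar)
           + d_s/(1 + alpha2*b))

solution = solve([AS_eq, AD_eq], [y_s, pi_s])
solution  # y and pi as linear functions of the shocks v_t and d_t
```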
## Diagram
Firstly, the model is set up for intuitive purposes, showing the AS-AD-diagram and correlation:
```python
#Defining the functions:
def AS(ybar,pi,pi_exp,gamma):
y = ybar + (1/gamma)*(pi-pi_exp)
return y
def AD(ybar,pi,pibar,alpha1,g,gbar,alpha2,alpha3,h,b,tau,taubar):
y = ybar + (alpha1/(1+alpha2*b))*(g-gbar) -(alpha2*h/(1+alpha2*b))*(pi-pibar) -(alpha3/(1+alpha2*b))*(tau-taubar) + (1/(1+alpha2*b))*v
return y
#Solving the partial equilibria with the given values:
AS_liste = []
for i in range(0,10):
AS_liste.append(AS(ybar,i,pi_exp,gamma))
AD_liste = []
for i in range(0,10):
AD_liste.append(AD(ybar,i,pibar,alpha1,g,gbar,alpha2,alpha3,h,b,tau,taubar))
#Showing the AS-AD-diagram:
fig, ax1 = plt.subplots(figsize=(6,4))
plt.title('AS-AD',fontsize=15,weight='bold',pad=23)
goldman_blue = '#64a8f0'
plt.plot(AD_liste, label = 'AD', c=goldman_blue)
plt.plot(AS_liste, label = 'SRAS', c='maroon')
plt.axvline(pibar,linestyle='--',c='black', label = 'LRAS')
plt.xlabel('$y$',fontsize=15)
plt.ylabel('$\pi$',fontsize=15)
ax1.set_xlim(0,5)
ax1.set_ylim(0,5)
plt.legend(frameon=False,fontsize=12)
plt.savefig('AS-AD.pdf')
```
The graph above shows the relationship between supply (SRAS) and demand (AD) in the short run, with equilibrium at $(\overline{y},\overline{\pi})=(2,2)$, which represents the long-run equilibrium.
## The Growth
Creating the underlying exogenous growth in structural GDP, without delving further into the source of that growth:
```python
#Created as a random walk with drift and zero variance as baseline:
drift = 0.0008
y_bar = [1]
for i in range(1,simsize):
    s = np.random.normal(loc=0,scale=0,size=None) #Growth shock; zero variance in the baseline
    y_bar.append(
        y_bar[i-1] + drift + s
    )
data['ybar'] = pd.DataFrame(y_bar)
```
## Simulation
The model is solved and simulated.
Note: the simulation takes a couple of minutes to finish, since the model is solved numerically in every iteration. This could be avoided by solving the model analytically, as sketched below.
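Since the AS and AD equations are linear in $\pi_t$ and $y_t$, each period can also be solved by hand. A minimal sketch of such an analytical solution (my own addition, reusing the parameter names defined above as defaults; it is not used in the simulation below):
```python
# A minimal sketch (my own addition): AS = AD solved analytically for (pi, y),
# instead of calling a numerical root-finder in every iteration.
def solve_as_ad(ybar_t, v_t, d_t, pi_exp,
                alpha1=1, alpha2=1, alpha3=1, gamma=1.5, h=1, b=0.5,
                g=1, gbar=1, tau=1, taubar=1, pibar=2):
    B = alpha2 * h / (1 + alpha2 * b)   # slope of the AD curve
    A = (alpha1 * (g - gbar) - alpha3 * (tau - taubar) + d_t) / (1 + alpha2 * b)
    pi = (A + B * pibar + pi_exp / gamma - v_t / gamma) / (1 / gamma + B)
    y = ybar_t + (pi - pi_exp) / gamma + v_t / gamma
    return pi, y
```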
```python
#Setting a seed, 10:
np.random.seed(10)
#Setting baseline-values and defining lists of solved values to later graph:
y_løsninger = []
pi_løsninger = []
v_vektor = [0]
d_vektor = [0]
c = 0
gab = [0]
crises = []
#Solving the AS-AD-model with regards to inflation and output by a for-loop:
for i in range(1,simsize):
    #Creating stochastic shocks in both the supply and demand side of the economy in every iteration:
v = np.random.normal(loc=0.001, scale=0.01, size=None) #Noise with drift (SUPPLY)
d = np.random.normal(loc=0, scale=0.01, size=None) #White noise (DEMAND)
v_vektor.append(
v_vektor[i-1] + v + c
)
d_vektor.append(
d_vektor[i-1] + d
)
#Solving the model using optimizer from scipy:
pi, y = symbols('pi y')
obj1 = lambda pi, y : data['ybar'][i] + (1/gamma)*(pi-pi_exp) + (v_vektor[i])/gamma - y
obj2 = lambda pi, y : data['ybar'][i] + (alpha1/(1+alpha2*b))*(g-gbar) -(alpha2*h/(1+alpha2*b))*(pi-pibar) -(alpha3/(1+alpha2*b))*(tau-taubar) + (1/(1+alpha2*b)*d_vektor[i] - y)
obj = lambda x : [obj1(x[0],x[1]),obj2(x[0],x[1])]
sol = optimize.root(obj,x0= [pibar,ybar])
løsning = {y:sol.x[1],pi:sol.x[0]}
gab.append(løsning[y]-data['ybar'][i])
#risk of crash.
if gab[i] > 0:
#c = np.random.choice(a=[0,100],p=(1-(gab[i]**2)*0.177,(gab[i]**2)*0.177)) Alternative crash, size-dependent on output-gap
c = np.random.choice(a=[0,-løsning[y]*rho],p=(1-p,p))
crises.append(c)
else:
c=0
#Appending realisations of the model:
pi_løsninger.append(løsning[pi])
y_løsninger.append(løsning[y])
pi_exp = løsning[pi]
data['y_løs'] = pd.DataFrame(y_løsninger)
data['pi_løs'] = pd.DataFrame(pi_løsninger)
#Calculating inflationary- and output-gaps:
outputgab = [0]
for i in range(1,len(data['y_løs'])):
outputgab.append(data['y_løs'][i] - data['ybar'][i])
inflationgab = [0]
for i in range(1,len(data['pi_løs'])):
inflationgab.append(data['pi_løs'][i] - pibar)
data['outputgab'] = outputgab
data['inflationgab'] = inflationgab
```
```python
#Setting up the figure:
fig = plt.figure(figsize=(16,12), frameon=False)
title_font = {'size':'16', 'color':'black', 'weight':'bold',
'verticalalignment':'bottom'}
plt.title('AS-AD - Short Run Simulation',**title_font)
plt.axis('off')
#Figure 1
ax = fig.add_subplot(4,1,(1,2))
plt.title('Levels',fontsize=15,weight='normal',pad=-20)
plt.plot(data['y_løs'][5:], label = '$y$ (left axis)',c=goldman_blue)
plt.plot(data['ybar'][5:], label = '$\overline{y}$ (left axis)', c='lightgrey', linestyle='--')
plt.ylabel('$y$')
legend1 = plt.legend(loc = 'upper left', frameon=False)
ax2 = ax.twinx()
plt.plot(data['pi_løs'][5:], label = '$\pi$ (right axis)',c='tomato')
plt.tick_params()
plt.ylabel('$\pi$', )
legend2 = plt.legend(loc = 'upper left', frameon=False,bbox_to_anchor=(0,0.885))
ax.add_artist(legend1)
ax2.set_ylim(1,3)
#Figure 2:
ax = fig.add_subplot(4,1,(3,4))
plt.title('Gaps',fontsize=15,weight='normal',pad=-20)
plt.plot(data['outputgab'][5:], label = 'Output gap: $y_t - \overline{y}$',c=goldman_blue)
plt.plot(data['inflationgab'][5:], label = 'Inflationary gap: $\pi_t - \overline{\pi}$',c='tomato')
ax.axhline(0,linestyle='--',c='black')
plt.legend(frameon=False,loc='upper left',bbox_to_anchor=(0,1))
plt.xlabel('Time')
plt.savefig('Simulation.pdf')
print(len(v_vektor),len(y_løsninger),len(data['y_løs']))
```
```python
#printing descriptive statistics of the second graph.
print('The mean of the inflationary gap')
print(data['inflationgab'].mean())
print('The standard deviation of inflationary gap')
print(data['inflationgab'].std())
print('The mean of the output gap')
print(data['outputgab'].mean())
print('The standard deviation of output gap')
print(data['outputgab'].std())
#correlation between inflationary gap and output gap:
#correlation = data['outputgab'].corr(data['inflationgab'])
```
The mean of the inflationary gap
0.06377045294937161
The standard deviation of inflationary gap
0.10493075628627888
The mean of the output gap
0.0050212492615558865
The standard deviation of output gap
0.09403345656070142
```python
from scipy.stats import pearsonr
data1=data.copy()
data1.dropna(inplace=True)
print('The correlation between the two gaps are:')
print(pearsonr(data1['outputgab'],data1['inflationgab'])[0])
```
The correlation between the two gaps are:
-0.7291005797863483
## Interpretation
- First of all, we see a relatively realistic cycle, although with unrealistically sudden one-period crashes. Nominal GDP, $y_t$, tends to grow slightly faster than its structural counterpart, $\overline{y}$, but is regularly pulled back by the bubble bursts.
- Secondly, we see relatively stable inflation between 1.8 and 2.2 pct. (the target rate being 2), due to a relatively inflation-conservative Central Bank. We opted for one of the heavy-weights, Paul Volcker.
- Thirdly, we see strongly - but not perfectly(!) - inversely correlated inflation and output. This is because the supply shocks hit the economy harder (producing a perfectly inversely correlated change between inflation and output, with relative sizes decided by the slope of the AD curve, which in turn reflects the Central Bank's trade-off in the Taylor rule). Demand shocks can to a large degree be smoothed out by the Central Bank through monetary policy, which is why demand shocks, although present, do not affect the economy much. They are, however, the reason the inverse correlation between the two variables is not *perfect*.
- Fourth, this iteration has resulted in two large bubble bursts over the twenty-year period, both of which cut output by 20 pct., followed by steady convergence back towards the trend.
- Fifth, the economy grows by around $ln(1.8) \approx 58$ pct. over the period of 1000 weeks (around 20 years), equivalent to $1.58^{\frac{1}{20}} - 1 \approx 2.3$ pct. per year. This seems realistic and consistent with empirical findings for the Western world.
- From the descriptive statistics we lastly conclude that the correlation between the inflationary and output gap is negative, which corresponds with the graph and follows from the (downward-sloping) AD curve.
## Conclusion
The model has produced a relatively realistic-looking simulation of real-world business cycles, though it has its flaws, some of which have been mentioned throughout the text.
```python
```
| 74471302d74b777fc7b05aa309308dce1a153d1f | 399,357 | ipynb | Jupyter Notebook | AS-AD-model (IPNA assignment)/modelproject.ipynb | Holger-Harmsen/NumEcon | 20c61548c8889cbc17b9d9e83a7ce0398ef0761e | [
"MIT"
]
| null | null | null | AS-AD-model (IPNA assignment)/modelproject.ipynb | Holger-Harmsen/NumEcon | 20c61548c8889cbc17b9d9e83a7ce0398ef0761e | [
"MIT"
]
| null | null | null | AS-AD-model (IPNA assignment)/modelproject.ipynb | Holger-Harmsen/NumEcon | 20c61548c8889cbc17b9d9e83a7ce0398ef0761e | [
"MIT"
]
| 1 | 2020-04-26T08:53:10.000Z | 2020-04-26T08:53:10.000Z | 856.98927 | 176,898 | 0.808682 | true | 3,996 | Qwen/Qwen-72B | 1. YES
2. YES | 0.826712 | 0.79053 | 0.653541 | __label__eng_Latn | 0.954449 | 0.356725 |
```python
from sympy.physics.units import *
from sympy import *
EA, l, F = var("EA, l, F")
# def k(phi):
# """ computes element stiffness matrix """
# # phi is angle between:
# # 1. vector along global x axis
# # 2. vector along 1-2-axis of truss
# # phi is counted positively about z.
# (c, s) = ( cos(phi), sin(phi) )
# (cc, ss, sc) = ( c*c, s*s, s*c)
# return Matrix(
# [
# [ cc, sc, -cc, -sc],
# [ sc, ss, -sc, -ss],
# [-cc, -sc, cc, sc],
# [-sc, -ss, sc, ss],
# ])
#
# (l1, l2, l3) = (l, l*sqrt(2), l)
# (p1, p2, p3) = (0 *pi/180, 135 *pi/180, 90 *pi/180)
# (k1, k2, k3) = (EA/l1*k(p1), EA/l2*k(p2), EA/l3*k(p3))
#
p = sqrt(2)/S(4)
K = EA/l*Matrix(
[
[1,0,-1,0 ,0,0],
[0,1,0,0 ,0,-1],
[-1,0,p+1,-p,-p,p],
[0,0,-p,p,p,-p],
[0,0,-p,p,p,-p],
[0,-1,p,-p,-p,p+1]
]
)
pprint("\nK / (EA/l):")
pprint(K/ (EA/l))
u1x,u1y,u2x,u2y,u3x,u3y = var("u1x,u1y,u2x,u2y,u3x,u3y")
F1x,F1y,F2x,F2y,F3x,F3y = var("F1x,F1y,F2x,F2y,F3x,F3y")
u = Matrix([u1x,u1y,u2x,u2y,u3x,u3y])
f = Matrix([F1x,F1y,F2x,F2y,F3x,F3y])
unknowns = [u1x,u1y,u2x,u2y,u3x,u3y, F1x,F1y,F2x,F2y,F3x,F3y]
# boundary conditions:
# --- a ---
sub_list_u_a=[
(u1x, 0),
(u1y, 0),
(u3x, 0),
]
sub_list_f_a=[
(F2x, 0),
(F2y, -F),
(F3y, 0),
]
# --- b ---
sub_list_u_b=[
(u1x, 0),
(u3x, 0),
(u3y, 0),
]
sub_list_f_b = sub_list_f_a
pprint("\na:")
ua = u.subs(sub_list_u_a)
fa = f.subs(sub_list_f_a)
eq = Eq(K*ua , fa)
sol = solve(eq, unknowns)
for s in sol:
pprint("\n")
pprint(s)
pprint(sol[s])
pprint("\nb:")
ub = u.subs(sub_list_u_b)
fb = f.subs(sub_list_f_b)
eq = Eq(K*ub , fb)
sol = solve(eq, unknowns)
for s in sol:
pprint("\n")
pprint(s)
pprint(sol[s])
# K / (EA/l):
# ⎡1 0 -1 0 0 0 ⎤
# ⎢ ⎥
# ⎢0 1 0 0 0 -1 ⎥
# ⎢ ⎥
# ⎢ √2 -√2 -√2 √2 ⎥
# ⎢-1 0 ── + 1 ──── ──── ── ⎥
# ⎢ 4 4 4 4 ⎥
# ⎢ ⎥
# ⎢ -√2 √2 √2 -√2 ⎥
# ⎢0 0 ──── ── ── ──── ⎥
# ⎢ 4 4 4 4 ⎥
# ⎢ ⎥
# ⎢ -√2 √2 √2 -√2 ⎥
# ⎢0 0 ──── ── ── ──── ⎥
# ⎢ 4 4 4 4 ⎥
# ⎢ ⎥
# ⎢ √2 -√2 -√2 √2 ⎥
# ⎢0 -1 ── ──── ──── ── + 1⎥
# ⎣ 4 4 4 4 ⎦
#
# a:
#
# F1y
# F
#
# F3x
# -F
#
# F1x
# F
#
# u2y
# -2⋅F⋅l⋅(1 + √2)
# ────────────────
# EA
#
# u3y
# -F⋅l
# ─────
# EA
#
# u2x
# -F⋅l
# ─────
# EA
#
# b:
#
# F1y
# F
#
# F3x
# -F
#
# F1x
# F
#
# u2y
# -F⋅l⋅(1 + 2⋅√2)
# ────────────────
# EA
#
# u1y
# F⋅l
# ───
# EA
#
# u2x
# -F⋅l
# ─────
# EA
```
| abc6f379e814f5ba6d67997480951b046d1b999a | 6,390 | ipynb | Jupyter Notebook | ipynb/EMS_02A/Selbst/5.1.ipynb | kassbohm/wb-snippets | f1ac5194e9f60a9260d096ba5ed1ce40b844a3fe | [
"MIT"
]
| null | null | null | ipynb/EMS_02A/Selbst/5.1.ipynb | kassbohm/wb-snippets | f1ac5194e9f60a9260d096ba5ed1ce40b844a3fe | [
"MIT"
]
| null | null | null | ipynb/EMS_02A/Selbst/5.1.ipynb | kassbohm/wb-snippets | f1ac5194e9f60a9260d096ba5ed1ce40b844a3fe | [
"MIT"
]
| null | null | null | 32.769231 | 121 | 0.362441 | true | 1,448 | Qwen/Qwen-72B | 1. YES
2. YES | 0.955319 | 0.824462 | 0.787624 | __label__kor_Hang | 0.110077 | 0.668248 |
# Frequentist Inference Case Study - Part A
## 1. Learning objectives
Welcome to part A of the Frequentist inference case study! The purpose of this case study is to help you apply the concepts associated with Frequentist inference in Python. Frequentist inference is the process of deriving conclusions about an underlying distribution via the observation of data. In particular, you'll practice writing Python code to apply the following statistical concepts:
* the _z_-statistic
* the _t_-statistic
* the difference and relationship between the two
* the Central Limit Theorem, including its assumptions and consequences
* how to estimate the population mean and standard deviation from a sample
* the concept of a sampling distribution of a test statistic, particularly for the mean
* how to combine these concepts to calculate a confidence interval
## Prerequisites
To be able to complete this notebook, you are expected to have a basic understanding of:
* what a random variable is (p.400 of Professor Spiegelhalter's *The Art of Statistics, hereinafter AoS*)
* what a population, and a population distribution, are (p. 397 of *AoS*)
* a high-level sense of what the normal distribution is (p. 394 of *AoS*)
* what the t-statistic is (p. 275 of *AoS*)
Happily, these should all be concepts with which you are reasonably familiar after having read ten chapters of Professor Spiegelhalter's book, *The Art of Statistics*.
We'll try to relate the concepts in this case study back to page numbers in *The Art of Statistics* so that you can focus on the Python aspects of this case study. The second part (part B) of this case study will involve another, more real-world application of these tools.
For this notebook, we will use data sampled from a known normal distribution. This allows us to compare our results with theoretical expectations.
## 2. An introduction to sampling from the normal distribution
First, let's explore the ways we can generate the normal distribution. While there's a fair amount of interest in [sklearn](https://scikit-learn.org/stable/) within the machine learning community, you're likely to have heard of [scipy](https://docs.scipy.org/doc/scipy-0.15.1/reference/index.html) if you're coming from the sciences. For this assignment, you'll use [scipy.stats](https://docs.scipy.org/doc/scipy-0.15.1/reference/tutorial/stats.html) to complete your work.
This assignment will require some digging around and getting your hands dirty (your learning is maximized that way)! You should have the research skills and the tenacity to do these tasks independently, but if you struggle, reach out to your immediate community and your mentor for help.
```python
from scipy.stats import norm
from scipy.stats import t
import numpy as np
import pandas as pd
from numpy.random import seed
import matplotlib.pyplot as plt
```
__Q1:__ Call up the documentation for the `norm` function imported above. (Hint: that documentation is [here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html)). What is the second listed method?
```python
#norm?
```
__A:__
The second-listed method for the `norm` object is the probability density function: `norm.pdf(x, loc=0, scale=1)`.
__Q2:__ Use the method that generates random variates to draw five samples from the standard normal distribution.
__A:__ Using the norm.rvs() method:
```python
seed(47)
# draw five samples here
norm_samples = norm.rvs(size=5)
print(norm_samples)
```
[-0.84800948 1.30590636 0.92420797 0.6404118 -1.05473698]
__Q3:__ What is the mean of this sample? Is it exactly equal to the value you expected? Hint: the sample was drawn from the standard normal distribution. If you want a reminder of the properties of this distribution, check out p. 85 of *AoS*.
__A:__
I didn't expect any particular value because the sample size was small, but, yes, the mean is relatively close to zero.
```python
# Calculate and print the mean here, hint: use np.mean()
norm_samples_mean = norm_samples.mean()
print(norm_samples_mean)
```
0.19355593334131074
__Q4:__ What is the standard deviation of these numbers? Calculate this manually here as $\sqrt{\frac{\sum_i(x_i - \bar{x})^2}{n}}$ (This is just the definition of **standard deviation** given by Professor Spiegelhalter on p.403 of *AoS*). Hint: np.sqrt() and np.sum() will be useful here and remember that numPy supports [broadcasting](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
__A:__ The standard deviation should be around 1:
```python
manual_std = np.sqrt(
np.sum(
((norm_samples - norm_samples_mean) ** 2)
) /
len(norm_samples)
)
assert manual_std == np.std(norm_samples)
print(manual_std)
```
0.9606195639478641
Here we have calculated the actual standard deviation of a small data set (of size 5). But in this case, this small data set is actually a sample from our larger (infinite) population. In this case, the population is infinite because we could keep drawing our normal random variates until our computers die!
In general, the sample mean we calculate will not be equal to the population mean (as we saw above). A consequence of this is that the sum of squares of the deviations from the _population_ mean will be bigger than the sum of squares of the deviations from the _sample_ mean. In other words, the sum of squares of the deviations from the _sample_ mean is too small to give an unbiased estimate of the _population_ variance. An example of this effect is given [here](https://en.wikipedia.org/wiki/Bessel%27s_correction#Source_of_bias). Scaling our estimate of the variance by the factor $n/(n-1)$ gives an unbiased estimator of the population variance. This factor is known as [Bessel's correction](https://en.wikipedia.org/wiki/Bessel%27s_correction). The consequence of this is that the $n$ in the denominator is replaced by $n-1$.
You can see Bessel's correction reflected in Professor Spiegelhalter's definition of **variance** on p. 405 of *AoS*.
__Q5:__ If all we had to go on was our five samples, what would be our best estimate of the population standard deviation? Use Bessel's correction ($n-1$ in the denominator), thus $\sqrt{\frac{\sum_i(x_i - \bar{x})^2}{n-1}}$.
__A:__ To calculate the unbiased estimator, we set `ddof=1` or normalize the sum of squares by N-1:
```python
manual_sample_std = np.sqrt(
np.sum(
((norm_samples - norm_samples_mean) ** 2)
) /
(len(norm_samples) - 1)
)
assert manual_sample_std == np.std(norm_samples, ddof=1)
print(manual_sample_std)
```
1.0740053227518152
__Q6:__ Now use numpy's std function to calculate the standard deviation of our random samples. Which of the above standard deviations did it return?
__A:__ Numpy's default is to use the biased estimator (`ddof=0`). Note that this behavior is **different** from Pandas, which normalizes by N-1 by default! In a way, this makes sense: a numeric library should not guess at what you are trying to do, but in Pandas, we are typically working with real-world samples, so the sample standard deviation is what we usually want.
```python
print(np.std(norm_samples))
```
0.9606195639478641
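As a quick check of the pandas behaviour mentioned above (my own addition, not part of the exercise):
```python
# pandas normalizes by N-1 by default, numpy by N:
print(np.std(norm_samples), pd.Series(norm_samples).std())
```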
__Q7:__ Consult the documentation for np.std() to see how to apply the correction for estimating the population parameter and verify this produces the expected result.
__A:__
```python
np.std(norm_samples, ddof=1)
```
1.0740053227518152
```python
if manual_sample_std == np.std(norm_samples, ddof=1):
print("Yes, setting ddof=1 normalizes by N-1!")
```
Yes, setting ddof=1 normalizes by N-1!
### Summary of section
In this section, you've been introduced to the scipy.stats package and used it to draw a small sample from the standard normal distribution. You've calculated the average (the mean) of this sample and seen that this is not exactly equal to the expected population parameter (which we know because we're generating the random variates from a specific, known distribution). You've been introduced to two ways of calculating the standard deviation; one uses $n$ in the denominator and the other uses $n-1$ (Bessel's correction). You've also seen which of these calculations np.std() performs by default and how to get it to generate the other.
You use $n$ as the denominator if you want to calculate the standard deviation of a sequence of numbers. You use $n-1$ if you are using this sequence of numbers to estimate the population parameter. This brings us to some terminology that can be a little confusing.
The population parameter is traditionally written as $\sigma$ and the sample statistic as $s$. Rather unhelpfully, $s$ is also called the sample standard deviation (using $n-1$) whereas the standard deviation of the sample uses $n$. That's right, we have the sample standard deviation and the standard deviation of the sample and they're not the same thing!
The sample standard deviation
\begin{equation}
s = \sqrt{\frac{\sum_i(x_i - \bar{x})^2}{n-1}} \approx \sigma,
\end{equation}
is our best (unbiased) estimate of the population parameter ($\sigma$).
If your dataset _is_ your entire population, you simply want to calculate the population parameter, $\sigma$, via
\begin{equation}
\sigma = \sqrt{\frac{\sum_i(x_i - \bar{x})^2}{n}}
\end{equation}
as you have complete, full knowledge of your population. In other words, your sample _is_ your population. It's worth noting that we're dealing with what Professor Spiegehalter describes on p. 92 of *AoS* as a **metaphorical population**: we have all the data, and we act as if the data-point is taken from a population at random. We can think of this population as an imaginary space of possibilities.
If, however, you have sampled _from_ your population, you only have partial knowledge of the state of your population. In this case, the standard deviation of your sample is not an unbiased estimate of the standard deviation of the population, in which case you seek to estimate that population parameter via the sample standard deviation, which uses the $n-1$ denominator.
Great work so far! Now let's dive deeper.
## 3. Sampling distributions
So far we've been dealing with the concept of taking a sample from a population to infer the population parameters. One statistic we calculated for a sample was the mean. As our samples will be expected to vary from one draw to another, so will our sample statistics. If we were to perform repeat draws of size $n$ and calculate the mean of each, we would expect to obtain a distribution of values. This is the sampling distribution of the mean. **The Central Limit Theorem (CLT)** tells us that such a distribution will approach a normal distribution as $n$ increases (the intuitions behind the CLT are covered in full on p. 236 of *AoS*). For the sampling distribution of the mean, the standard deviation of this distribution is given by
\begin{equation}
\sigma_{mean} = \frac{\sigma}{\sqrt n}
\end{equation}
where $\sigma_{mean}$ is the standard deviation of the sampling distribution of the mean and $\sigma$ is the standard deviation of the population (the population parameter).
This is important because typically we are dealing with samples from populations and all we know about the population is what we see in the sample. From this sample, we want to make inferences about the population. We may do this, for example, by looking at the histogram of the values and by calculating the mean and standard deviation (as estimates of the population parameters), and so we are intrinsically interested in how these quantities vary across samples.
In other words, now that we've taken one sample of size $n$ and made some claims about the general population, what if we were to take another sample of size $n$? Would we get the same result? Would we make the same claims about the general population? This brings us to a fundamental question: _when we make some inference about a population based on our sample, how confident can we be that we've got it 'right'?_
We need to think about **estimates and confidence intervals**: those concepts covered in Chapter 7, p. 189, of *AoS*.
Now, the standard normal distribution (with its variance equal to its standard deviation of one) would not be a great illustration of a key point. Instead, let's imagine we live in a town of 50,000 people and we know the height of everyone in this town. We will have 50,000 numbers that tell us everything about our population. We'll simulate these numbers now and put ourselves in one particular town, called 'town 47', where the population mean height is 172 cm and population standard deviation is 5 cm.
```python
seed(47)
pop_heights = norm.rvs(172, 5, size=50000)
```
```python
_ = plt.hist(pop_heights, bins=30)
_ = plt.xlabel('height (cm)')
_ = plt.ylabel('number of people')
_ = plt.title('Distribution of heights in entire town population')
_ = plt.axvline(172, color='r')
_ = plt.axvline(172+5, color='r', linestyle='--')
_ = plt.axvline(172-5, color='r', linestyle='--')
_ = plt.axvline(172+10, color='r', linestyle='-.')
_ = plt.axvline(172-10, color='r', linestyle='-.')
```
Now, 50,000 people is rather a lot to chase after with a tape measure. If all you want to know is the average height of the townsfolk, then can you just go out and measure a sample to get a pretty good estimate of the average height?
```python
def townsfolk_sampler(n):
return np.random.choice(pop_heights, n)
```
Let's say you go out one day and randomly sample 10 people to measure.
```python
seed(47)
daily_sample1 = townsfolk_sampler(10)
```
```python
_ = plt.hist(daily_sample1, bins=10)
_ = plt.xlabel('height (cm)')
_ = plt.ylabel('number of people')
_ = plt.title('Distribution of heights in sample size 10')
```
The sample distribution doesn't resemble what we take the population distribution to be. What do we get for the mean?
```python
np.mean(daily_sample1)
```
173.47911444163503
And if we went out and repeated this experiment?
```python
daily_sample2 = townsfolk_sampler(10)
```
```python
np.mean(daily_sample2)
```
173.7317666636263
__Q8:__ Simulate performing this random trial every day for a year, calculating the mean of each daily sample of 10, and plot the resultant sampling distribution of the mean.
__A:__
```python
def townsfolk_sample_mean(n):
return np.mean(townsfolk_sampler(n))
```
Note that the supplied function above `townsfolk_sampler()` samples *with* replacement, which is probably not what we want here. The CLT still applies, of course; it's just always good to be aware of defaults!
```python
seed(47)
# take your samples here
townsfolk_sample_means = np.array([townsfolk_sample_mean(10) for _ in range(365)])
```
```python
plt.figure(figsize=(10,8))
_ = plt.hist(townsfolk_sample_means, bins='auto', histtype='step')
_ = plt.xlabel('mean sample height (cm)')
_ = plt.ylabel('Number of samples')
_ = plt.title('Distribution of 365 sample means')
_ = plt.axvline(172, color='r', label='pop. mean')
_ = plt.axvline(172+5, color='r', linestyle='--')
_ = plt.axvline(172-5, color='r', linestyle='--', label='+/- 1 pop. sd')
_ = plt.axvline(np.mean(townsfolk_sample_means), color='dodgerblue', label='sample mean')
_ = plt.legend()
```
The above is the distribution of the means of samples of size 10 taken from our population. The Central Limit Theorem tells us the expected mean of this distribution will be equal to the population mean, and standard deviation will be $\sigma / \sqrt n$, which, in this case, should be approximately 1.58.
__Q9:__ Verify the above results from the CLT.
__A:__
```python
print(f"""
The mean of our sample means is {np.mean(townsfolk_sample_means):.2f} cm.
The standard deviation of our sample means is {np.std(townsfolk_sample_means):.2f} cm.
""")
```
The mean of our sample means is 171.87 cm.
The standard deviation of our sample means is 1.58 cm.
Remember, in this instance, we knew our population parameters, that the average height really is 172 cm and the standard deviation is 5 cm, and we see some of our daily estimates of the population mean were as low as around 168 and some as high as 176.
__Q10:__ Repeat the above year's worth of samples but for a sample size of 50 (perhaps you had a bigger budget for conducting surveys that year)! Would you expect your distribution of sample means to be wider (more variable) or narrower (more consistent)? Compare your resultant summary statistics to those predicted by the CLT.
__A:__
```python
seed(47)
# calculate daily means from the larger sample size here
townsfolk_sample_means_50 = np.array([townsfolk_sample_mean(50) for _ in range(365)])
```
```python
print(f"""
Given 365 samples of 50 people:
The mean of our sample means is {np.mean(townsfolk_sample_means_50):.2f}.
The standard deviation of our sample means is {np.std(townsfolk_sample_means_50):.2f}.
""")
```
Given 365 samples of 50 people:
The mean of our sample means is 171.94.
The standard deviation of our sample means is 0.67.
What we've seen so far, then, is that we can estimate population parameters from a sample from the population, and that samples have their own distributions. Furthermore, the larger the sample size, the narrower are those sampling distributions.
### Normally testing time!
All of the above is well and good. We've been sampling from a population we know is normally distributed, we've come to understand when to use $n$ and when to use $n-1$ in the denominator to calculate the spread of a distribution, and we've seen the Central Limit Theorem in action for a sampling distribution. All seems very well behaved in Frequentist land. But, well, why should we really care?
Remember, we rarely (if ever) actually know our population parameters, but we still have to estimate them somehow. If we want to draw conclusions like "this observation is unusual" or "my population mean has changed" then we need to have some idea of what the underlying distribution is so we can calculate relevant probabilities. In frequentist inference, we use the formulae above to deduce these population parameters. Take a moment in the next part of this assignment to refresh your understanding of how these probabilities work.
Recall some basic properties of the standard normal distribution, such as that about 68% of observations are within plus or minus 1 standard deviation of the mean. Check out the precise definition of a normal distribution on p. 394 of *AoS*.
__Q11:__ Using this fact, calculate the probability of observing the value 1 or less in a single observation from the standard normal distribution. Hint: you may find it helpful to sketch the standard normal distribution (the familiar bell shape) and mark the number of standard deviations from the mean on the x-axis and shade the regions of the curve that contain certain percentages of the population.
__A:__ The probability of observing a value of 1 or less in a single observation from the standard normal distribution is about **84%** (50% probability of less than 0 + 68% / 2 = 34% probabliity of within one standard deviation above the mean.
Calculating this probability involved calculating the area under the curve from the value of 1 and below. To put it in mathematical terms, we need to *integrate* the probability density function. We could just add together the known areas of chunks (from -Inf to 0 and then 0 to $+\sigma$ in the example above). One way to do this is to look up tables (literally). Fortunately, scipy has this functionality built in with the cdf() function.
__Q12:__ Use the cdf() function to answer the question above again and verify you get the same answer.
__A:__
```python
print(f"The probability observing a value less than 1 in a single observation from the standard normal distribution is {norm.cdf(1) * 100:.1f}%.")
```
The probability observing a value less than 1 in a single observation from the standard normal distribution is 84.1%.
__Q13:__ Using our knowledge of the population parameters for our townsfolks' heights, what is the probability of selecting one person at random and their height being 177 cm or less? Calculate this using both of the approaches given above.
__A:__ The probability of selecting one person at random and their height being 177 cm or less is the probability of that observation being less than or equal to one standard deviation above the mean, which is about 84%, using the same reasoning as above.
```python
print(f"""The probability of selecting one person at random
and their height being 177 cm or less is {norm.cdf(177, loc=172, scale=5) * 100:.1f}%.
""")
```
The probability of selecting one person at random
and their height being 177 cm or less is 84.1%.
__Q14:__ Turning this question around — suppose we randomly pick one person and measure their height and find they are 2.00 m tall. How surprised should we be at this result, given what we know about the population distribution? In other words, how likely would it be to obtain a value at least as extreme as this? Express this as a probability.
__A:__
```python
print(f"""
The probability of selecting one person at random
and their height being 200 cm or more is {(1 - norm.cdf(200, loc=172, scale=5))};
in other words, an outlier.""")
```
The probability of selecting one person at random
and their height being 200 cm or more is 1.0717590259723409e-08;
in other words, an outlier.
What we've just done is calculate the ***p-value*** of the observation of someone 2.00m tall (review *p*-values if you need to on p. 399 of *AoS*). We could calculate this probability by virtue of knowing the population parameters. We were then able to use the known properties of the relevant normal distribution to calculate the probability of observing a value at least as extreme as our test value.
We're about to come to a pinch, though. We've said a couple of times that we rarely, if ever, know the true population parameters; we have to estimate them from our sample and we cannot even begin to estimate the standard deviation from a single observation.
This is very true and usually we have sample sizes larger than one. This means we can calculate the mean of the sample as our best estimate of the population mean and the standard deviation as our best estimate of the population standard deviation.
In other words, we are now coming to deal with the sampling distributions we mentioned above as we are generally concerned with the properties of the sample means we obtain.
Above, we highlighted one result from the CLT, whereby the sampling distribution (of the mean) becomes narrower and narrower with the square root of the sample size. We remind ourselves that another result from the CLT is that _even if the underlying population distribution is not normal, the sampling distribution will tend to become normal with sufficiently large sample size_. (**Check out p. 199 of AoS if you need to revise this**). This is the key driver for us 'requiring' a certain sample size, for example you may frequently see a minimum sample size of 30 stated in many places. In reality this is simply a rule of thumb; if the underlying distribution is approximately normal then your sampling distribution will already be pretty normal, but if the underlying distribution is heavily skewed then you'd want to increase your sample size.
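As a quick illustration of this point (my own addition, not part of the original exercise), we can draw samples from a heavily skewed distribution, such as the exponential, and watch the sampling distribution of the mean become increasingly bell-shaped as the sample size grows:
```python
# My own illustration: means of samples drawn from a heavily skewed
# (exponential) population look increasingly normal as n grows.
from scipy.stats import expon

seed(47)
for n in [2, 10, 50]:
    sample_means = expon.rvs(scale=1, size=(1000, n)).mean(axis=1)
    _ = plt.hist(sample_means, bins=30, histtype='step', label=f'n = {n}')
_ = plt.xlabel('sample mean')
_ = plt.ylabel('count')
_ = plt.legend()
```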
__Q15:__ Let's now start from the position of knowing nothing about the heights of people in our town.
* Use the random seed of 47, to randomly sample the heights of 50 townsfolk
* Estimate the population mean using np.mean
* Estimate the population standard deviation using np.std (remember which denominator to use!)
* Calculate the (95%) [margin of error](https://www.statisticshowto.datasciencecentral.com/probability-and-statistics/hypothesis-testing/margin-of-error/#WhatMofE) (use the exact critial z value to 2 decimal places - [look this up](https://www.statisticshowto.datasciencecentral.com/probability-and-statistics/find-critical-values/) or use norm.ppf()) Recall that the ***margin of error*** is mentioned on p. 189 of the *AoS* and discussed in depth in that chapter).
* Calculate the 95% Confidence Interval of the mean (***confidence intervals*** are defined on p. 385 of *AoS*)
* Does this interval include the true population mean?
__A:__
```python
seed(47)
# take your sample now
single_sample_50 = townsfolk_sampler(50)
n = len(single_sample_50)
```
```python
estimate = np.mean(single_sample_50)
print(f"Our estimate of the population mean is {estimate:.2f} cm.")
```
Our estimate of the population mean is 172.78 cm.
```python
sd = np.std(single_sample_50, ddof=1)
print(f"Our estimate of the population standard deviation is {sd:.2f} cm.")
```
Our estimate of the population standard deviation is 4.20 cm.
```python
se = sd / np.sqrt(n)
print(f"The standard error of our estimate is {se:.3f} cm.")
print(f"The 95% critical z-value is {norm.ppf(0.975):.2f}.")
print(f"Using the normal distribution, the 95% confidence interval is {np.round(estimate + norm.ppf([0.025, 0.975]) * se, 2)} cm.")
```
The standard error of our estimate is 0.593 cm.
The 95% critical z-value is 1.96.
Using the normal distribution, the 95% confidence interval is [171.62 173.94] cm.
__Q16:__ Above, we calculated the confidence interval using the critical z value. What is the problem with this? What requirement, or requirements, are we (strictly) failing?
__A:__ By using the z-value, we are failing to account for the uncertainty in using 50 data points to estimate the standard error. By using the t-distribution, as below, which has heavier tails than the normal distribution for low degrees of freedom, we can account for this uncertainty in our estimate.
__Q17:__ Calculate the 95% confidence interval for the mean using the _t_ distribution. Is this wider or narrower than that based on the normal distribution above? If you're unsure, you may find this [resource](https://www.statisticshowto.datasciencecentral.com/probability-and-statistics/confidence-interval/) useful. For calculating the critical value, remember how you could calculate this for the normal distribution using norm.ppf().
__A:__
```python
se = sd / np.sqrt(n)
print(f"The standard error of our estimate is {se:.3f} cm.")
print(f"The 95% critical t-value for {n-1} degrees of freedom is {t.ppf(0.975, n-1):.2f}.")
print(f"Using the t distribution, the 95% confidence interval is {np.round(estimate + t.ppf([0.025, 0.975], n-1) * se, 2)} cm.")
```
The standard error of our estimate is 0.593 cm.
The 95% critical t-value for 49 degrees of freedom is 2.01.
Using the t distribution, the 95% confidence interval is [171.59 173.97] cm.
This is slightly wider than the previous confidence interval. This reflects the greater uncertainty given that we are estimating population parameters from a sample.
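As a cross-check (my own addition), scipy can produce the same t-based interval directly from the quantities computed above:
```python
# Cross-check: scipy's t.interval reproduces the interval computed above.
print(t.interval(0.95, n - 1, loc=estimate, scale=se))
```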
## 4. Learning outcomes
Having completed this project notebook, you now have hands-on experience:
* sampling and calculating probabilities from a normal distribution
* identifying the correct way to estimate the standard deviation of a population (the population parameter) from a sample
* with sampling distribution and now know how the Central Limit Theorem applies
* with how to calculate critical values and confidence intervals
| 823752dbe2e6a77c817b2d6f11c62123857b9c53 | 87,819 | ipynb | Jupyter Notebook | frequentist-case-study/frequentist-case-study-part-A.ipynb | reppertj/Data-Science-Examples | ee2690f07a9f606ecdb47cf1f3538641ade24312 | [
"MIT"
]
| null | null | null | frequentist-case-study/frequentist-case-study-part-A.ipynb | reppertj/Data-Science-Examples | ee2690f07a9f606ecdb47cf1f3538641ade24312 | [
"MIT"
]
| null | null | null | frequentist-case-study/frequentist-case-study-part-A.ipynb | reppertj/Data-Science-Examples | ee2690f07a9f606ecdb47cf1f3538641ade24312 | [
"MIT"
]
| null | null | null | 77.784765 | 20,684 | 0.812432 | true | 6,535 | Qwen/Qwen-72B | 1. YES
2. YES | 0.828939 | 0.760651 | 0.630533 | __label__eng_Latn | 0.999025 | 0.30327 |
# Classical Support Vector Machines
This notebook will serve as a summary of some of the resources below and is not meant to be used as a stand-alone reading material for Classical Support Vector Machines. We encourage you to complete reading the resources below before going forward with the notebook.
### Resources:
1. MIT Open Courseware lecture: https://youtu.be/_PwhiWxHK8o
2. MIT lecture slides: http://web.mit.edu/6.034/wwwbob/svm-notes-long-08.pdf
3. SVM Wikipedia page: https://en.wikipedia.org/wiki/Support_vector_machine
4. SVM tutorial using sklearn: https://jakevdp.github.io/PythonDataScienceHandbook/05.07-support-vector-machines.html
## Contents
1. [Introduction](#intro)
2. [SVMs as Linear Classifiers](#linear)
3. [Lagrange Multipliers and the Primal and Dual form](#primal)
4. [Class Prediction For a New Datapoint](#pred)
5. [Classifying Linearly Separable Data](#class-linear)
6. [Dealing With Non-Linearly Separable Data](#non-linear)
7. [Feature Map and Kernel](#kernel)
8. [Additional Resources](#add)
## Introduction <a id="intro"></a>
```python
# installing a few dependencies
!pip install --upgrade seaborn==0.10.1
!pip install --upgrade scipy==1.4.1
!pip install --upgrade scikit-learn==0.23.1
!pip install --upgrade matplotlib==3.2.0
# the output will be cleared after installation
from IPython.display import clear_output
clear_output()
```
Suppose you are a Botanist trying to distinguish which one of the **three species** a flower belongs to just by looking at **four features** of a flower - the length and the width of the sepals and petals. As part of your research you create a **dataset** of these features for a set of flowers for which the **species is already known**, where each **datapoint** of this dataset corresponds to a single flower. Now, your colleague brings in a new flower and asks you which species it belongs to. You could go into the lab and do the necessary tests to figure out what species it is, however, the lab is under renovation. So, left with no other choice you pull up the dataset that you created earlier and after a few minutes of trying to find a pattern you realise that this new flower has a petal width and sepal length similar to all the flowers of species 1. Thus, you **predict** this new flower to be of the species 1. This process of assigning a new datapoint to one of the known **classes** (flower species) is called **classfication**. And, as we used a dataset where we knew the classes corresponding to the datapoints before-hand, thus, this classification procedure comes under the umbrella of [**supervised learning**](https://en.wikipedia.org/wiki/Supervised_learning).
Support Vector Machines (SVMs) are **supervised learning models** that are mainly used for **classification** and **regression** tasks. In the context of classification, which is the topic of discussion, we use SVMs to find a **linear decision boundary with maximum width** splitting the space such that datapoints belonging to different classes are on either side of the boundary. Classification takes place based on which side of the decision boundary a new datapoint lands.
Before we try to understand how SVMs work, let's take a look at the [Iris dataset](https://en.wikipedia.org/wiki/Iris_flower_data_set) which was the dataset mentioned in the first paragraph.
```python
# importing the iris dataset
from sklearn.datasets import load_iris
import numpy as np
iris = load_iris()
print("Number of datapoints: {}".format(iris['data'].shape[0]))
print("Number of features: {}".format(iris['data'].shape[1]))
print("Sample of the dataset:")
print(iris['data'][:5])
print("Unique species : {}".format(np.unique(iris['target'])))
```
Number of datapoints: 150
Number of features: 4
Sample of the dataset:
[[5.1 3.5 1.4 0.2]
[4.9 3. 1.4 0.2]
[4.7 3.2 1.3 0.2]
[4.6 3.1 1.5 0.2]
[5. 3.6 1.4 0.2]]
Unique species : [0 1 2]
Looking at the first 5 datapoints of the Iris dataset, we see that each datapoint is an array with four features. The number of features in a dataset is called the **dimension of the dataset**. Further, there are three unique species, which implies three **classes** in the dataset. It's important to note that SVMs are natively binary classification algorithms, i.e., they can only separate 2 classes. However, there are methods to convert a binary classifier into a multi-class classifier, mentioned [here](https://datascience.stackexchange.com/questions/46514/how-to-convert-binary-classifier-to-multiclass-classifier). Let us now dig deeper into the mathematics of how SVMs work.
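Before doing so, here is a minimal sketch of one such conversion (my own addition, not part of the original tutorial): scikit-learn's `OneVsRestClassifier` fits one binary SVM per species and combines their decisions.
```python
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Fit one binary SVM per species and combine their decisions (one-vs-rest).
ovr = OneVsRestClassifier(SVC(kernel='linear'))
ovr.fit(iris['data'], iris['target'])
print(ovr.predict(iris['data'][:5]))   # predicted species for the first five flowers
```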
**Reminder:** Read the resources provided above to understand the next section with a greater degree of clarity.
## SVMs as Linear Classifiers <a id="linear"></a>
Source: [wikipedia](https://en.wikipedia.org/wiki/Support_vector_machine)
Our input dataset is of the form $(\vec{x}_{1}, y_{1}), ..., (\vec{x}_{n}, y_{n})$,
where $\vec{x}$ is a $d$-dimensional vector ($d$ is the number of features) and the $y_{i}$'s are the labels, $y_{i} \in \{-1, +1\}$, as it is a binary classification problem.
$\vec{w}$ is a vector perpendicular to the **decision boundary** (the hyperplane that cuts the space into two parts and is the result of the classification). Let $\vec{u}$ be a vector representing a point in our feature space. To decide whether a point is on the +ve side or the -ve side, we project $\vec{u}$ onto $\vec{w}$, which gives a scaled version of $\vec{u}$'s component in the direction perpendicular to the decision boundary. Depending on the value of this quantity the point is either on the +ve side or the -ve side. This can be represented mathematically as
$$\begin{equation} \vec{w}\cdot\vec{x}_{+} + b \geq 1 \label{eq:vector_ray} \tag{1}\end{equation}$$
$$\begin{equation} \vec{w}\cdot\vec{x}_{-} + b \leq -1 \tag{2}\end{equation}$$
where, $\vec{x}_{+}$ is a datapoint with label $y_{i} = +1$,<br>
$\vec{x}_{-}$ is a datapoint with label $y_{i} = -1$ and<br>
b is parameter that has to be learnt
These two lines are separated by a distance of $\frac{2}{||{\vec{w}}||}$. The line in the middle of both of these, i.e.,
$$\begin{equation} \vec{w}\cdot\vec{u} + b = 0 \tag{3}\end{equation}$$
is the equation of the hyperplane denoting our decision boundary. Together, the space between (1) and (2) forms what is usually known as the **street** or the **gutter**.
Equations (1) and (2) can be conveniently combined to give
$$y_{i}(\vec{w}\cdot\vec{x}_{i} + b) \geq 1\tag{4}$$
And the limiting case would be
$$y_{i}(\vec{w}\cdot\vec{x}_{i} + b) -1 = 0 \tag{5}$$
This equality is attained when the points lie on the edges of the street, i.e., on (1) or (2). These points determine the width of the street and are called **support vectors**. Once the support vectors are found in the training phase, we only need these vectors to classify new datapoints during the prediction phase, which reduces the computational load significantly. Equation (4) is a constraint in the optimization process of maximizing the street width $\frac{2}{||{\vec{w}}||}$. In the next section let us see how we can combine the optimization problem and the constraints into a single optimization equation using the concept of Lagrange multipliers.
## Lagrange Multipliers and the Primal and Dual form <a id="primal"></a>
Support Vector Machines are trying to solve the optimization problem of maximizing the street width $\frac{2}{||{\vec{w}}||}$ (which is equivalent to minimizing $\frac{||w||^2}{2}$) with the contraint $y_{i}(\vec{w}\cdot\vec{x}_{i} + b) \geq 1$. This can be elegantly written in a single equation with the help of [lagrange multipliers](https://en.wikipedia.org/wiki/Lagrange_multiplier). The resulting equation to be minimized is called the **primal form** (6).
**Primal form:** $$ L_{p} = \frac{||w||^2}{2} - \sum_{i}{\alpha_{i}[y_{i}(\vec{w}\cdot\vec{x}_{i} + b) -1]}\tag{6}$$
$$\frac{{\partial L}}{\partial \vec{w}} = \vec{w} - \sum_{i}{\alpha_{i}y_{i}\vec{x_{i}}}$$
equating $\frac{{\partial L}}{\partial \vec{w}}$ to 0 we get,
$$ \vec{w} = \sum_{i}{\alpha_{i}y_{i}\vec{x_{i}}}\tag{7}$$
$$\frac{{\partial L}}{\partial b} = -\sum_{i}{\alpha_{i}y_{i}}$$
and equating $\frac{{\partial L}}{\partial b}$ to 0 gives the constraint
$$\sum_{i}{\alpha_{i}y_{i}} = 0\tag{8}$$
Substituting (7) and (8) back into the primal form gives
$$L = \frac{1}{2}(\sum_{i}{\alpha_{i}y_{i}\vec{x_{i}}})(\sum_{j}{\alpha_{j}y_{j}\vec{x_{j}}}) - (\sum_{i}{\alpha_{i}y_{i}\vec{x_{i}}})(\sum_{j}{\alpha_{j}y_{j}\vec{x_{j}}}) - \sum_{i}{\alpha_{i}y_{i}b} + \sum_{i}{\alpha_{i}}$$
**Dual form:** $$L_{d} = \sum_{i}{\alpha_{i}} - \frac{1}{2}\sum_{i}\sum_{j}\alpha_{i}\alpha_{j}y_{i}y_{j}(\vec{x}_{i}\cdot\vec{x}_{j})\tag{9}$$
subject to: $$\sum_{i}{\alpha_{i}y_{i}} = 0$$
Taking a closer look at the dual form $L_{d}$, we can see that it is quadratic in the Lagrange multiplier terms and can therefore be solved efficiently on a classical computer using [quadratic programming](https://en.wikipedia.org/wiki/Quadratic_programming) techniques. However, note that computing the dot products $\vec{x}_{i}\cdot\vec{x}_{j}$ becomes computationally expensive as the dimension of our data increases. In the days to come we'll learn how a quantum computer could be used to classify a classical dataset using an algorithm called the Variational Quantum Classifier (VQC) algorithm, as given in [this paper](https://arxiv.org/abs/1804.11326). Understanding of classical SVM may not be required; however, some of the concepts, such as kernels and feature maps, will be crucial in understanding the VQC algorithm.
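To make this concrete, here is a minimal sketch (my own addition, not part of the original text) that maximizes $L_d$ for a tiny linearly separable toy dataset using a general-purpose scipy optimizer; in practice a dedicated quadratic programming solver would be used:
```python
import numpy as np
from scipy.optimize import minimize

X_toy = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0], [3.0, 1.0]])
y_toy = np.array([-1.0, -1.0, 1.0, 1.0])
K = X_toy @ X_toy.T                 # linear kernel: k(x_i, x_j) = x_i . x_j

def neg_dual(alpha):
    # negative of L_d, since scipy minimizes rather than maximizes
    return -(alpha.sum() - 0.5 * np.sum(np.outer(alpha * y_toy, alpha * y_toy) * K))

constraint = {'type': 'eq', 'fun': lambda a: a @ y_toy}   # sum_i alpha_i y_i = 0
result = minimize(neg_dual, x0=np.zeros(len(y_toy)),
                  bounds=[(0, None)] * len(y_toy), constraints=constraint)
alpha = result.x
w = (alpha * y_toy) @ X_toy         # recover w = sum_i alpha_i y_i x_i
print(alpha.round(3), w.round(3))
```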
## Class Prediction for a New Datapoint <a id="pred"></a>
The output of the training step is the set of Lagrange multiplier values. Now, when a new datapoint $\vec z$ is given, the classification result can be found as follows (a small worked sketch follows the steps below):
* Step 1: Use the obtained values of lagrange multipliers to calculate the value of $\vec{w}$ using $(7)$.
* Step 2: Substitute the value of $\vec{w}$ in equation $(5)$ and substitute a support vector in the place of $\vec{x}_{i}$ to find the value of $b$.
* Step 3: Find the value of $\vec{w}\cdot\vec{z} + b$. If it $>0$ then assign $\vec{z}$ a label $y_{z} = 1$ and $y_{z} = -1$ if the obtained value is $< 0$.
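A minimal sketch of these three steps (my own addition), using the attributes of a scikit-learn SVC fitted on a tiny toy problem; `dual_coef_` stores $\alpha_i y_i$ for the support vectors and `intercept_` stores $b$:
```python
import numpy as np
from sklearn.svm import SVC

X_toy = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0], [3.0, 1.0]])
y_toy = np.array([-1, -1, 1, 1])
toy = SVC(kernel='linear', C=1e6).fit(X_toy, y_toy)

w = toy.dual_coef_ @ toy.support_vectors_    # step 1: w = sum_i alpha_i y_i x_i
b = toy.intercept_                           # step 2: b (stored directly by sklearn)
z = np.array([2.5, 0.5])                     # a hypothetical new datapoint
print(np.sign(w @ z + b), toy.predict([z]))  # step 3: sign of w.z + b agrees with predict()
```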
## Classifying Linearly Separable Data <a id="class-linear"></a>
Let's switch gears and look at how we can use scikit-learn's Support Vector Classifier to draw a decision boundary on a linearly separable dataset. This section of the notebook is a recap of resource \[4\] and we recommend reading it before going forward. The code used in this section is from the corresponding GitHub [repo](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.07-Support-Vector-Machines.ipynb).
```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# use seaborn plotting defaults
import seaborn as sns; sns.set()
```
### Importing the dataset
```python
# we are importing the make_blobs dataset as it can be clearly seen to be linearly separable
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=100, centers=2,
random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn');
```
```python
from sklearn.svm import SVC # "Support vector classifier"
model = SVC(kernel='linear', C=1E10)
model.fit(X, y)
```
SVC(C=10000000000.0, kernel='linear')
```python
# helper plotting function
def plot_svc_decision_function(model, ax=None, plot_support=True):
if ax is None:
ax = plt.gca()
xlim = ax.get_xlim()
ylim = ax.get_ylim()
# create grid to evaluate model
x = np.linspace(xlim[0], xlim[1], 30)
y = np.linspace(ylim[0], ylim[1], 30)
Y, X = np.meshgrid(y, x)
xy = np.vstack([X.ravel(), Y.ravel()]).T
P = model.decision_function(xy).reshape(X.shape)
# plot decision boundary and margins
ax.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
# plot support vectors
if plot_support:
ax.scatter(model.support_vectors_[:, 0],
model.support_vectors_[:, 1],
s=300, linewidth=1, facecolors='none', edgecolors='b');
ax.set_xlim(xlim)
ax.set_ylim(ylim)
```
```python
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(model);
```
As we can see, SVM works quite well when it deals with linearly separable datasets. The points which lie on the dotted lines, denoted by $y_{i}(\vec{w}\cdot\vec{x}_{i} + b) = \pm1$, are the **support vectors**. Part of the reason SVMs are popular is that, during the classification step, only support vectors are used to classify a new point, which reduces the computational load significantly. This is because the Lagrange multipliers turn out to be zero for all vectors which are not support vectors.
```python
model.support_vectors_
```
array([[0.74083668, 2.47610149],
[1.54209773, 2.65998103],
[1.50347711, 2.48342509]])
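As a quick check (my own addition): scikit-learn stores the non-zero multipliers (as $\alpha_i y_i$) only for these support vectors, in the `dual_coef_` attribute:
```python
print(model.dual_coef_)        # alpha_i * y_i, one entry per support vector
print(model.dual_coef_.shape[1], 'support vectors out of', len(X), 'training points')
```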
## Dealing With Non-Linearly Separable Data <a id="non-linear"></a>
In the previous example we've seen how to find a model that classifies linearly separable data. Let's look at an example to see whether an SVM can find a solution when the data is not linearly separable.
```python
from sklearn.datasets import make_circles
X, y = make_circles(100, factor=.1, noise=.1)
clf = SVC(kernel='linear').fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(clf, plot_support=False);
```
When the data is circular, like in the example above, SVM fails to find a satisfactory linear classification model. However, if we cleverly introduce a new parameter $r$ such that $r = e^{-x^{2}}$ and use it as a third coordinate to construct a new dataset (see picture below), we observe that a horizontal plane passing through, say, $r=0.7$ can classify the dataset! This method, in which we map our dataset into a higher dimension to be able to find a linear boundary there, is called a **feature map**.
```python
r = np.exp(-(X ** 2).sum(1))
```
```python
from mpl_toolkits import mplot3d
# from ipywidgets import interact, fixed
def plot_3D(elev=30, azim=30, X=X, y=y):
ax = plt.subplot(projection='3d')
ax.scatter3D(X[:, 0], X[:, 1], r, c=y, s=50, cmap='autumn')
ax.view_init(elev=elev, azim=azim)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('r')
plot_3D()
# interact(plot_3D, elev=[-90, 90], azip=(-180, 180),
# X=fixed(X), y=fixed(y));
```
## Feature Map and Kernel <a id="kernel"></a>
As we have seen earlier a **feature map** maps our (non-linearly separable) input data to a higher dimensional **feature space** where our data is now linearly separable. This helps circumvent the problem of dealing with non-linearly separable data, however, a new problem arises. As we keep increasing the dimension of our data, computing the coordinates of our data and the dot product $\phi(\vec{x}_{i})\cdot\phi(\vec{x}_{j})$ in this higher dimentional feature space becomes computationally expensive. This is where the idea of the [Kernel functions](https://en.wikipedia.org/wiki/Kernel_method) comes in.
Kernel functions allow us to deal with our data in the higher dimensional feature space (where our data is linearly separable) without ever having to compute the dot product in that space.
if $\phi(\vec{x})$ is the feature map, then the corresponding kernel function is the dot product $\phi(\vec{x}_{i})\cdot\phi(\vec{x}_{j})$, therefore, the kernel function $k$ is
$$k(x_{i},x_{j}) = \phi(\vec{x}_{i})\cdot\phi(\vec{x}_{j})$$
Therefore, the corresponding transformed optimization problem can be written as,
**Primal form:** $$ L_{p} = \frac{||w||^2}{2} - \sum_{i}{\alpha_{i}[y_{i}(\vec{w}\cdot\phi(\vec{x}_{i}) + b) -1]}\tag{6}$$
**Dual form:** $$L_{d} = \sum_{i}{\alpha_{i}} - \frac{1}{2}\sum_{i}\sum_{j}\alpha_{i}\alpha_{j}y_{i}y_{j}(\phi(\vec{x}_{i})\cdot\phi(\vec{x}_{j}))$$
or $$L_{d} = \sum_{i}{\alpha_{i}} - \frac{1}{2}\sum_{i}\sum_{j}\alpha_{i}\alpha_{j}y_{i}y_{j}k(x_{i},x_{j})$$
subject to: $$\sum_{i}{\alpha_{i}y_{i}} = 0$$
where $$ \vec{w} = \sum_{i}{\alpha_{i}y_{i}\phi(\vec{x_{i}})}$$
To understand why kernel functions are useful, let's look at an example using the Radial Basis Function (rbf) kernel.
the rbf kernel is written as,
$$k(x_{i},x_{j}) = exp(-||x_{i} - x_{j}||^{2}/2\sigma^{2}) $$
where $\sigma$ is a tunable parameter
What we should understand here is that the rbf kernel projects our data into an infinite-dimensional feature space; however, the computational power required to compute the kernel function's value is negligible! As you see, we don't have to compute the dot product of the infinite-dimensional vectors. This is how kernels help SVMs tackle non-linearly separable data.
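As a small illustration (my own addition), the kernel value between two of the points above can be computed by hand in a few operations and agrees with scikit-learn's implementation (here $\gamma = 1/2\sigma^2$):
```python
from sklearn.metrics.pairwise import rbf_kernel

x_i, x_j = X[0].reshape(1, -1), X[1].reshape(1, -1)
gamma = 1.0                                   # gamma = 1 / (2 * sigma**2)
by_hand = np.exp(-gamma * np.sum((x_i - x_j) ** 2))
print(by_hand, rbf_kernel(x_i, x_j, gamma=gamma)[0, 0])   # identical values
```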
Rbf kernel in action:
```python
clf = SVC(kernel='rbf', C=1E6)
clf.fit(X, y)
```
SVC(C=1000000.0)
```python
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=300, lw=1, facecolors='none');
```
In the next notebook we will learn how to use Quantum Computers to do the same task of classification and why it may be advantageous in the future.
## Additional Resources <a id="add"></a>
1. Andrew NG notes: http://cs229.stanford.edu/notes/cs229-notes3.pdf
2. Andrew NG lecture: https://youtu.be/lDwow4aOrtg
| 242f0db00020d38625ca75e3c50e61e3db889aff | 209,825 | ipynb | Jupyter Notebook | Day 5/Classical Support Vector Machines.ipynb | AnDa-creator/Qiskit-India-Challenge | 69ed61d2b4258a217116c07dcbd13451479d11a7 | [
"MIT"
]
| null | null | null | Day 5/Classical Support Vector Machines.ipynb | AnDa-creator/Qiskit-India-Challenge | 69ed61d2b4258a217116c07dcbd13451479d11a7 | [
"MIT"
]
| null | null | null | Day 5/Classical Support Vector Machines.ipynb | AnDa-creator/Qiskit-India-Challenge | 69ed61d2b4258a217116c07dcbd13451479d11a7 | [
"MIT"
]
| null | null | null | 338.974152 | 45,788 | 0.920377 | true | 5,008 | Qwen/Qwen-72B | 1. YES
2. YES | 0.7773 | 0.760651 | 0.591254 | __label__eng_Latn | 0.987792 | 0.21201 |
# Trajectory Optimization
## Notebook Setup
### Julia Setup
```julia
using LaTeXStrings
using Plots
using Polynomials
import Base: ctranspose
# The Poly class has an odd quirk in that is defines the conjugate transpose
# operator A' as differentiation of the polynomial. While this makes some
# sense for an isolated polynomial objects, for arrays it has unexpected
# behavior: [p q]' == [polyder(p), polyder(q)']. In this notebook, I will be
# manipulating vectors and matrices of polynomial objects; I do not want these
# objects to be differentiated when transposed. Therefore, I'm redefining the
# ctranspose operator to simply return the original polynomial object.
ctranspose{T}(p::Poly{T}) = p
# The line selects which plotting backend will be used to generate the plots
# shown in this notebook.
pyplot();
# The function below allows quick printing of arrays of rational numbers
function print_rational(title::String, data::Array{Rational{Int64},2})
@printf("%s:\n", title)
for i in 1:size(data,1)
for j in 1:size(data,2)
@printf("%4d/%-4d", data[i,j].num, data[i,j].den)
end
println()
end
println()
end
```
WARNING: Method definition ctranspose
print_rational (generic function with 1 method)
### Latex Macros
- Vector Typeface: $ \newcommand{\vec}[1]{\boldsymbol{#1}} $
- Matrix Typeface: $ \newcommand{\mat}[1]{\boldsymbol{#1}} $
## Overview
The objective of this project is to develop a trajectory optimization tool that can be used to assess the maximum range of a Conventional Prompt Global Strike (CPGS) weapon system. CPGS weapon systems have been a major research focus of the US Department of Defense for over a decade, with several development programs currently underway ([Ref. 1](https://fas.org/sgp/crs/nuke/R41464.pdf)). This project is specifically interested in CPGS systems that employ the “boost-glide” technique whereby a maneuverable glide vehicle is accelerated to hypersonic velocities using a multi-stage solid rocket booster.
## Trajectory Computation via Implicit CG Method
One of the key elements of the trajectory optimization problem is discretizing the equations of motion that define how the vehicle responds to the control input. This discretization can be accomplished in many ways, with modern state-of-the-art codes predominantly utilizing high-order
collocation methods. In this project we discretize the governing equations using an implicit Continuous Galerkin (CG) method in conjunction with quintic Hermite basis functions.
### Governing Differential Equations
The trajectory of a hypersonic boost-glide system can be modeled using a pair of coupled second-order differential equations, similar to those shown below.
\begin{align}
\label{eq:governing_x} \ddot{x} &= f^x(t, x, y, \dot{x}, \dot{y}, \alpha) \\
\label{eq:governing_y} \ddot{y} &= f^y(t, x, y, \dot{x}, \dot{y}, \alpha)
\end{align}
| Variables | Description
|:-----------:|-------------
| $x$,$y$ | Vehicle position relative to inertial space
| $f^x$,$f^y$ | Force acting on the vehicle (per unit mass)
| $\alpha$ | Vehicle Angle of attack (control variable)
Note that for the purposes of this study, we consider the angle of attack to be the control variable that allows us to shape the trajectory. In reality, the angle of attack is a consequence of how the vehicle thrust is vectored and/or how the aerodynamic control surfaces are deflected. However, modeling how the vehicle attitude changes in response to control inputs is beyond the scope of what is necessary for this project. Instead, we simply assume that a control system exists and that it can trim the vehicle to follow the desired angle of attack profile. We can always place limits on the angle of attack profile if necessary during the optimization process.
### CG System of Equations
The CG discretization proceeds by converting the differential equations above into an equivalent weak form by multiplying each equation by an arbitrary “weighting” or “test” function, $w(t)$, and integrating. (Note: from here on out, we will only show the $x$-equation; the $y$-equation is handled in an identical fashion.)
\begin{align}
\int_{t_0}^{t_f} w\ddot{x}~dt & = \int_{t_0}^{t_f} wf^x~dt \\
w\dot{x}\big|_{t_0}^{t_f} - \int_{t_0}^{t_f} \dot{w}\dot{x}~dt & = \int_{t_0}^{t_f} wf^x~dt
\end{align}
Introducing the notation $(u,v)=\int_{t_0}^{t_f}uv~dt$ yields a concise statement of the governing equation in weak form:
\begin{equation}
(\dot{w},\dot{x}) + (w,f^x) + w\dot{x}\big|_{t_0} - w\dot{x}\big|_{t_f} = 0
\end{equation}
At this point, it is necessary to select a method of parameterizing the unknown functions $x(t)$ and $w(t)$. For this application, we will utilize high-order (quintic) Hermite basis functions. The reason for selecting this basis set is as follows:
1. Since we can safely assume the vehicle's acceleration history will be at least $C^0$ continuous, the vehicle position history will be at minimum $C^2$ continuous. Using quintic Hermite shape functions ensures that the computed trajectory belongs to the space of $C^2$ continuous functions. (Note: high-jerk events like stage separation will be addressed by stitching together two separate CG discretizations using optimizer equality constraints).
2. Since Hermite shape functions utilize the nodal values of position, velocity, etc. as the shape function coefficients, implementing boundary conditions and trajectory constraints (altitude limits, terminal velocity requirements, etc.) is trivial.
3. Quintic Hermite interpolation is 6-th order accurate, which enables highly accurate trajectory solutions with small degrees of freedom counts.
Let $t_i,~i = 1,...,N$ be a series of ordered points spanning the interval $[t_0, t_f]$; the points need not be uniformly spaced. The vector-valued function $\vec{\phi}_i(t)$ represents the Hermite basis functions associated with the $i$-th node. The vector $\vec{x}_i$ represents the coefficients used to scale the basis
functions for that node. In the case of quintic Hermite basis functions, these coefficients are simply the value of the solution function and its first two derivatives at the $i$-th node:
\begin{equation}
\vec{x}_i = \left[ x(t_i), \dot{x}(t_i), \ddot{x}(t_i) \right]^T
\end{equation}
The function $x(t)$ is then constructed via a linear super-position of the coefficient-weighted basis functions for all nodes, as shown below. The Galerkin test function, $w(t)$, is parameterized similarly.
\begin{align}
x(t) &= \sum_{i=1}^{N} \vec{\phi}_i^T(t)\cdot \vec{x}_i \\
w(t) &= \sum_{i=1}^{N} \vec{\phi}_i^T(t)\cdot \vec{w}_i
\end{align}
We then substitute these parameterizations into the weak form of the governing equation. For brevity, I will drop the explicit summation symbols; summation is implied by products with repeated indices.
\begin{equation}
\left( \vec{\dot\phi}_i^T\vec{w}_i, \vec{\dot\phi}_j^T\vec{w}_j \right) +
\left( \vec{\phi}_i^T\vec{w}_i, f^x \right) +
\left( \vec{\phi}_i^T(t_0)\vec{w}_i \right) \cdot \left( \vec{\dot\phi}_j^T(t_0)\vec{x}_j \right) -
\left( \vec{\phi}_i^T(t_f)\vec{w}_i \right) \cdot \left( \vec{\dot\phi}_j^T(t_f)\vec{x}_j \right) =
0
\end{equation}
Note that since $\vec{\phi}_i^T\vec{w}_i = \vec{w}_i^T\vec{\phi}_i$, we can rewrite the above equation and factor out parameterization constants for the basis functions as shown below:
\begin{equation}
\vec{w}_i^T \left[
\left[
\left( \vec{\dot\phi}_i, \vec{\dot\phi}^T_j \right) +
\left. \vec{\phi}_i\vec{\dot\phi}_j^T \right|_{t_0} -
\left. \vec{\phi}_i\vec{\dot\phi}_j^T \right|_{t_f}
\right] \vec{x}_j +
\left( \vec{\phi}_i, f^x \right)
\right] = 0
\end{equation}
Since the weighting function is arbitrary, the values of the $\vec{w}_i$’s are free parameters. Thus, the only way for the above equation to be uniformly zero for any possible value of the $\vec{w}_i$’s is for the bracketed expression itself to be zero for every value of $i = 1,...,N$. The result is a system of simultaneous, non-linear equations that must be solved for the unknown values of the $\vec{x}_j$’s:
\begin{equation}
\label{eq:cg_system_of_equations}
\left[
\left( \vec{\dot\phi}_i, \vec{\dot\phi}^T_j \right) +
\left. \vec{\phi}_i\vec{\dot\phi}_j^T \right|_{t_0} -
\left. \vec{\phi}_i\vec{\dot\phi}_j^T \right|_{t_f}
\right] \vec{x}_j +
\left( \vec{\phi}_i, f^x \right) =
0, ~~~~
\forall i = 1,...,N
\end{equation}
### Quintic Hermite Shape Functions
Hermite shape functions enable construction of a continuous interpolant on the interval $\tau \in [0,1]$ given the value of a function and its derivatives at the endpoints of the interval. The canonical forms of the quintic (5th-order) Hermite shape functions are:
\begin{align}
H_0^5(\tau) &= 1 - 10\tau^3 + 15\tau^4 - 6\tau^5 \\
H_1^5(\tau) &= \tau - 6\tau^3 + 8\tau^4 - 3\tau^5 \\
H_2^5(\tau) &= \frac{1}{2}\left( \tau^2 - 3\tau^3 + 3\tau^4 - \tau^5 \right) \\
H_3^5(\tau) &= 10\tau^3 - 15\tau^4 + 6\tau^5 \\
H_4^5(\tau) &= -4\tau^3 + 7\tau^4 -3\tau^5 \\
H_5^5(\tau) &= \frac{1}{2}\left( \tau^3 - 2\tau^4 + \tau^5 \right)
\end{align}
```julia
# Define and plot quintic Hermite shape functions
H = [
Poly([ 1, 0, 0, -10, 15, -6 ]//1, :τ)
Poly([ 0, 1, 0, -6, 8, -3 ]//1, :τ)
Poly([ 0, 0, 1, -3, 3, -1 ]//2, :τ)
Poly([ 0, 0, 0, 10, -15, 6 ]//1, :τ)
Poly([ 0, 0, 0, -4, 7, -3 ]//1, :τ)
Poly([ 0, 0, 0, 1, -2, 1 ]//2, :τ)
]
τ = linspace(0,1)
H1 = polyder.(H)
H2 = polyder.(H1)
plot(
plot(τ, polyval.(H',τ), ylabel="Function Value"),
plot(τ, polyval.(H1',τ), ylabel="Function 1st Derivative"),
plot(τ, polyval.(H2',τ), ylabel="Function 2nd Derivative"),
label=["\$H^5_$i\$" for i in (0:5)'],
xlabel=L"\tau",
layout=(3,1),
size=(600,1200),
)
```
(Polynomials.Poly{#T<:Any}) in module Polynomials at C:\Users\Jeff\.julia\v0.5\Polynomials\src\Polynomials.jl:440 overwritten in module Main at In[2]:13.
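As a quick cross-check of these definitions (an illustrative aside in Python/SymPy, whereas the notebook itself uses Julia), each quintic Hermite basis function should be unity for exactly one combination of endpoint and derivative order, and zero for all others:
```python
import sympy as sp

t = sp.symbols('tau')
H = [1 - 10*t**3 + 15*t**4 - 6*t**5,
     t - 6*t**3 + 8*t**4 - 3*t**5,
     sp.Rational(1, 2)*(t**2 - 3*t**3 + 3*t**4 - t**5),
     10*t**3 - 15*t**4 + 6*t**5,
     -4*t**3 + 7*t**4 - 3*t**5,
     sp.Rational(1, 2)*(t**3 - 2*t**4 + t**5)]

# For each H_k, list (value, 1st derivative, 2nd derivative) at tau=0 and tau=1.
# The result is the 6x6 identity pattern that makes Hermite interpolation work.
for k, Hk in enumerate(H):
    row = [sp.diff(Hk, t, n).subs(t, pt) for pt in (0, 1) for n in range(3)]
    print(k, row)
```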
The interpolant is constructed from the shape functions as follows:
\begin{equation}
\begin{split}
x(\tau) &= H_0^5(\tau)\cdot x\bigg|_{\tau=0}
+ H_1^5(\tau)\cdot \frac{dx}{d\tau}\bigg|_{\tau=0}
+ H_2^5(\tau)\cdot \frac{d^2x}{d\tau^2}\bigg|_{\tau=0} \\
&+ H_3^5(\tau)\cdot x\bigg|_{\tau=1}
+ H_4^5(\tau)\cdot \frac{dx}{d\tau}\bigg|_{\tau=1}
+ H_5^5(\tau)\cdot \frac{d^2x}{d\tau^2}\bigg|_{\tau=1}
\end{split}
\end{equation}
Three of the shape functions are associated with the degrees of freedom on the left end of the interval while the other three are associated with the right end of the interval. Therefore, let's define
\begin{align}
\vec{h}_L(\tau) &= [H_0^5(\tau), H_1^5(\tau), H_2^5(\tau)]^T \\
\vec{h}_R(\tau) &= [H_3^5(\tau), H_4^5(\tau), H_5^5(\tau)]^T \\
\end{align}
This allows us to write the interpolant as
\begin{equation}
x(\tau) =
\vec{h}_L^T\cdot\left[
\begin{array}{c}
x \\
\frac{dx}{d\tau} \\
\frac{d^2x}{d\tau^2} \\
\end{array}
\right]_{\tau=0}
+
\vec{h}_R^T\cdot\left[
\begin{array}{c}
x \\
\frac{dx}{d\tau} \\
\frac{d^2x}{d\tau^2} \\
\end{array}
\right]_{\tau=1}
\end{equation}
Notice that the derivatives required for this interpolant are with respect to $\tau$, the interval-local coordinate. We previously defined the $\vec{x}_i$’s as specifying the true time derivatives, so a scaling must be applied. (Note: given that the node spacing need not be uniform, the relationships between $t$ and $\tau$ may differ on either side of a given node implying $\lim_{t \rightarrow t_i+}\frac{dx}{d\tau} \neq \lim_{t \rightarrow t_i-}\frac{dx}{d\tau}$). If we define $\Delta{}t_i = t_{i+1} - t_i$ as the length of the $i$-th interval, we can then construct $\mat{S}_i$, a matrix that scales global derivatives by the Jacobian of the coordinate transformation:
\begin{equation}
\mat{S}_i = \left[
\begin{array}{ccc}
1 & 0 & 0 \\
0 & \Delta{}t_i & 0 \\
0 & 0 & \Delta{}t_i^2 \\
\end{array}
\right]
\end{equation}
Therefore, the interpolant for the $i$-th interval can be written as:
\begin{equation}
\label{eq:interval_interpolation_equation}
x(\tau) = \vec{h}_L^T\mat{S}_i\vec{x}_i
+ \vec{h}_R^T\mat{S}_i\vec{x}_{i+1}
\end{equation}
Finally, if we consider two adjacent intervals, we can write the functional form of the global shape functions associated with the $i$-th node as follows:
\begin{equation}
\label{eq:global_basis_functions}
\vec{\phi}_i = \left\{
\begin{array}{cl}
\mat{S}_{i-1}\vec{h}_R, & t_{i-1} \lt t \leq t_{ i } \\
\mat{S}_{i}\vec{h}_L, & t_{ i }~~~ \lt t \leq t_{i+1} \\
0, & \text{otherwise}
\end{array}
\right.
\end{equation}
### Evaluation of Basis Function Integrals
With the functional form of the global shape functions defined, we will now return to the fundamental non-linear equations shown in Equation $\ref{eq:cg_system_of_equations}$, and evaluate the various shape function integrals.
First, consider the term $\left( \vec{\dot\phi}_i,\vec{\dot\phi}_j^T \right)\vec{x}_j$. Due to the compact support of the nodal basis functions, the weighting matrix $\left( \vec{\dot\phi}_i,\vec{\dot\phi}_j^T \right)$ will be zero unless $j \in \{i-1, i, i+1\}$. Assuming $j=i-1$, we can write an explicit expression for the weighting matrix as follows:
\begin{split}
\left(\vec{\dot\phi}_i,\vec{\dot\phi}_{i-1}^T \right)
& = \int_{t_0}^{t_f} \vec{\dot\phi}_i\vec{\dot\phi}_{i-1}^T~dt \\
& = \int_{t_{i-1}}^{t_i}
\left(\mat{S}_{i-1}\dot{h}_R\right)
\left(\mat{S}_{i-1}\dot{h}_L\right)^T
~dt \\
& = \mat{S}_{i-1} \cdot \int_0^1
\left( \frac{d\tau}{dt}\frac{d\vec{h}_R}{d\tau} \right)
\left( \frac{d\tau}{dt}\frac{d\vec{h}_L}{d\tau} \right)^T
\frac{dt}{d\tau}~d\tau \cdot \mat{S}_{i-1}\\
& = \frac{1}{\Delta t_{i-1}} \mat{S}_{i-1} \cdot \int_0^1
\vec{h}_R^\prime \vec{h}_L^{\prime T}~d\tau\ \cdot \mat{S}_{i-1} \\
& = \frac{1}{\Delta t_{i-1}} \mat{S}_{i-1} \mat{K}^{RL} \mat{S}_{i-1}
\end{split}
A similar derivation can be performed for $j = i,i+1$. The result is a block tri-diagonal system of equations. The tri-diagonal structure means this matrix can be inverted efficiently, which suggests that we should consider using a gradient-based method to converge the discrete system of equations.
\begin{align}
\label{eq:stiffness_matrix}
\left(\vec{\dot\phi}_i,\vec{\dot\phi}_{j}^T \right)\vec{x}_j = \left\{
\begin{array}{ll}
\frac{1}{\Delta t_1} \mat{S}_1 \mat{K}^{LL} \mat{S}_1 \vec{x}_1 +
\frac{1}{\Delta t_1} \mat{S}_1 \mat{K}^{LR} \mat{S}_1 \vec{x}_2,
& i = 1 \\
\frac{1}{\Delta t_{i-1}} \mat{S}_{i-1} \mat{K}^{RL} \mat{S}_{i-1} \vec{x}_{i-1} +
\left(
\frac{1}{\Delta t_{i-1}} \mat{S}_{i-1} \mat{K}^{RR} \mat{S}_{i-1} +
\frac{1}{\Delta t_{i}} \mat{S}_{i} \mat{K}^{LL} \mat{S}_{i}
\right)\vec{x}_i +
\frac{1}{\Delta t_{i}} \mat{S}_{i} \mat{K}^{LR} \mat{S}_{i} \vec{x}_{i+1},
& i = 2,...,N-1\\
\frac{1}{\Delta t_{N-1}} \mat{S}_{N-1} \mat{K}^{RL} \mat{S}_{N-1} \vec{x}_{N-1} +
\frac{1}{\Delta t_{N-1}} \mat{S}_{N-1} \mat{K}^{RR} \mat{S}_{N-1} \vec{x}_{N}
& i = N \\
\end{array}
\right.
\end{align}
The code snippet below evaluates the various stiffness matrices $\mat{K}^{RL}, \mat{K}^{RR},$ etc. to obtain their exact numerical values. Note that the matrices are not symmetric as would be the case for a standard linear finite element method. This is because of the anti-symmetry of the basis function associated with the function derivative at a given node.
```julia
# Compute shape function integrals for K_RR, K_RL, etc.
hlp = H1[1:3]
hrp = H1[4:6]
print_rational("K_RL", polyval.(polyint.(hrp * hlp'), 1))
print_rational("K_RR", polyval.(polyint.(hrp * hrp'), 1))
print_rational("K_LL", polyval.(polyint.(hlp * hlp'), 1))
print_rational("K_LR", polyval.(polyint.(hlp * hrp'), 1))
```
K_RL:
-10/7 -3/14 -1/84
3/14 -1/70 -1/210
-1/84 1/210 1/1260
K_RR:
10/7 -3/14 1/84
-3/14 8/35 -1/60
1/84 -1/60 1/630
K_LL:
10/7 3/14 1/84
3/14 8/35 1/60
1/84 1/60 1/630
K_LR:
-10/7 3/14 -1/84
-3/14 -1/70 1/210
-1/84 -1/210 1/1260
Next, let's consider the boundary terms that result from the integration by parts. For these terms, we note that the only shape functions that are non-zero at the boundaries are those that are associated with the boundary degrees of freedom, $\vec\phi_1$ and $\vec\phi_N$. Thus, the boundary terms are non-zero only when $i = j = 1,N$.
\begin{array}{lll}
\vec\phi_1\vec{\dot\phi}_j^T\bigg|_{t_0}\vec{x}_j
&= \left(\mat{S}_1\vec{h}_L\right)
\left(\mat{S}_1\vec{\dot{h}}_L\right)^T\bigg|_{t_0}
\vec{x}_1
&= \frac{1}{\Delta t_1} \left(
\mat{S}_1
\left[ \vec{h}_L \vec{h}_L^{\prime T} \right]_{\tau=0}
\mat{S}_1
\right) \vec{x}_1\\
\vec\phi_N\vec{\dot\phi}_j^T\bigg|_{t_f}\vec{x}_j
&= \left(\mat{S}_{N-1}\vec{h}_R\right)
\left(\mat{S}_{N-1}\vec{\dot{h}}_R\right)^T\bigg|_{t_f}
\vec{x}_N
&= \frac{1}{\Delta t_{N-1}} \left(
\mat{S}_{N-1}
\left[ \vec{h}_R \vec{h}_R^{\prime T} \right]_{\tau=1}
\mat{S}_{N-1}
\right)
\vec{x}_N
\end{array}
The code below evaluates the shape function products $\vec{h}_L\vec{h}_L^{\prime T}$ and $\vec{h}_R\vec{h}_R^{\prime T}$ at their respective endpoints. There is only one non-zero element in the array: it picks out the first derivative of the unknown function at the boundary which will then be added or subtracted from the function value at the boundary.
```julia
# Compute boundary terms
hl = H[1:3]
hr = H[4:6]
print_rational("hl*hlp' @ tau=0:", polyval.(hl*hlp', 0))
print_rational("hr*hrp' @ tau=1:", polyval.(hr*hrp', 1))
```
hl*hlp' @ tau=0::
0/1 1/1 0/1
0/1 0/1 0/1
0/1 0/1 0/1
hr*hrp' @ tau=1::
0/1 1/1 0/1
0/1 0/1 0/1
0/1 0/1 0/1
The last term we need to evaluate is the integral of the basis functions multiplied by the forcing function.
\begin{split}
\left( \vec{\phi}_i, f^x \right)
&= \int_{t_0}^{t_f} \vec{\phi}_i \cdot f_x ~dt \\
&= \int_{t_{i-1}}^{t_{ i }} \left(\mat{S}_{i-1}\vec{h}_R\right) \cdot f^x ~dt
+ \int_{t_{ i }}^{t_{i+1}} \left(\mat{S}_{ i }\vec{h}_L\right) \cdot f^x ~dt \\
&= \Delta t_{i-1} \mat{S}_{i-1} \int_0^1 \vec{h}_R \cdot f^x ~d\tau
+ \Delta t_{ i } \mat{S}_{ i } \int_0^1 \vec{h}_L \cdot f^x ~d\tau
\end{split}
For most practical problems we will not have a closed-form expression for the forcing function, so the integrals in the final line of the above equation must be evaluated approximately using quadrature:
\begin{equation}
\int_0^1 \vec{h} \cdot f^x ~d\tau
\approx
\sum_k w_k \cdot \vec{h}(\tau_k) \cdot f^x(\tau_k)
\end{equation}
Defining
\begin{align}
\mat{H}^R &= [ \vec{h}_R(\tau_1), \vec{h}_R(\tau_2), ... ] \\
\mat{H}^L &= [ \vec{h}_L(\tau_1), \vec{h}_L(\tau_2), ... ] \\
\vec{f}^x &= [ f^x(\tau_1), f^x(\tau_2), ... ]^T \\
\mat{W} &= \left[ \begin{array}{ccc}
w_1, & 0, & \\
0, & w_2, & \\
& & \ddots
\end{array} \right]
\end{align}
We can express the forcing function integral as follows:
\begin{equation}
\left( \vec{\phi}_i, f^x \right)
= \Delta t_{i-1} \mat{S}_{i-1} \mat{H}^R \mat{W} \vec{f}^x_{i-1}
+ \Delta t_{ i } \mat{S}_{ i } \mat{H}^L \mat{W} \vec{f}^x_{ i }
\end{equation}
Interestingly, all matrices that multiply $\vec{f}_{i-1}^x$ and $\vec{f}_{i}^x$ are constants, so they may be precomputed and stored as a single weighting matrix. Furthermore, if the node spacing is uniform, these matrices are the same for every node. This feature will enable the design of highly memory- and compute-efficient solution algorithms.
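As an illustration of this precomputation (a minimal sketch in Python/NumPy rather than the Julia used elsewhere in this notebook; the interval length and quadrature order below are arbitrary assumptions), the combined weighting matrix $\mat{\bar{H}}^{L}_i\mat{W}$ for one interval can be formed once from Gauss–Legendre points mapped onto $\tau\in[0,1]$:
```python
import numpy as np

# Quintic Hermite shape functions associated with the left node, h_L(tau)
def h_L(tau):
    return np.array([1 - 10*tau**3 + 15*tau**4 - 6*tau**5,
                     tau - 6*tau**3 + 8*tau**4 - 3*tau**5,
                     0.5*(tau**2 - 3*tau**3 + 3*tau**4 - tau**5)])

dt = 0.1                                     # example interval length (assumed)
S = np.diag([1.0, dt, dt**2])                # derivative-scaling matrix S_i

# 5-point Gauss-Legendre rule on [-1,1], mapped to tau in [0,1]
x, w = np.polynomial.legendre.leggauss(5)
tau = 0.5*(x + 1.0)
W = np.diag(0.5*w)                           # weights absorb the 1/2 Jacobian

HL = np.column_stack([h_L(t) for t in tau])  # columns are h_L(tau_k)
HbarL_W = dt * S @ HL @ W                    # constant 3x5 weighting matrix
print(HbarL_W.shape)                         # (3, 5)
```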
### Discrete System Summary
General Form:
\begin{equation}
\label{eq:discrete_system_summary}
\mat{\bar{K}}\vec{x} + \vec{\bar{f}}(\vec{x}) = 0
\end{equation}
Matrix Definitions:
\begin{array}{ll}
i = 1: & 0 = \left( \mat{\bar{K}}^{LL}_1 + \mat{\bar{B}}^L \right)\vec{x}_1
+ \mat{\bar{K}}^{LR}_1\vec{x}_2
+ \mat{\bar{H}}^L_1\mat{W}\vec{f}_1 \\
i = 2,\dots,N-1: & 0 = \mat{\bar{K}}^{RL}_{i-1}\vec{x}_{i-1}
+ \left( \mat{\bar{K}}^{RR}_{i-1} + \mat{\bar{K}}^{LL}_{i} \right) \vec{x}_{i}
+ \mat{\bar{K}}^{LR}_i\vec{x}_{i+1}
+ \mat{\bar{H}}^{R}_{i-1}\mat{W}\vec{f}_{i-1} + \mat{\bar{H}}^{L}_{i}\mat{W}\vec{f}_{i} \\
i = N: & 0 = \mat{\bar{K}}^{RL}_{N-1}\vec{x}_{N-1}
+ \left( \mat{\bar{K}}^{RR}_{N-1} - \mat{\bar{B}}^R \right)\vec{x}_N
+ \mat{\bar{H}}^R_{N-1}\mat{W}\vec{f}_{N-1}
\end{array}
| Coefficient | Definition$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$ |
|:----------------------:|------------------------------------------------------------------------------------------------------------------|
| $\mat{\bar{K}}^{RL}_i$ | $\frac{1}{\Delta{t}_i} \mat{S}_i \left( \int_0^1 \vec{h}_R^\prime \vec{h}_L^{\prime T}~d\tau \right)\mat{S}_i$ |
| $\mat{\bar{K}}^{RR}_i$ | $\frac{1}{\Delta{t}_i} \mat{S}_i \left( \int_0^1 \vec{h}_R^\prime \vec{h}_R^{\prime T}~d\tau \right)\mat{S}_i$ |
| $\mat{\bar{K}}^{LL}_i$ | $\frac{1}{\Delta{t}_i} \mat{S}_i \left( \int_0^1 \vec{h}_L^\prime \vec{h}_L^{\prime T}~d\tau \right)\mat{S}_i$ |
| $\mat{\bar{K}}^{LR}_i$ | $\frac{1}{\Delta{t}_i} \mat{S}_i \left( \int_0^1 \vec{h}_L^\prime \vec{h}_R^{\prime T}~d\tau \right)\mat{S}_i$ |
| $\mat{\bar{H}}^{L}_i$ | $\Delta t_i \mat{S}_i \left[ \vec{h}_L(\tau_1), \vec{h}_L(\tau_2) ... \right]$ |
| $\mat{\bar{H}}^{R}_i$ | $\Delta t_i \mat{S}_i \left[ \vec{h}_R(\tau_1), \vec{h}_R(\tau_2) ... \right]$ |
| $\mat{\bar{B}}^{L}$ | $\frac{1}{\Delta t_{ 1 }} \mat{S}_{ 1 } \left[ \vec{h}_L \vec{h}_L^{\prime T} \right]_{\tau=0} \mat{S}_{ 1 }$ |
| $\mat{\bar{B}}^{R}$ | $\frac{1}{\Delta t_{N-1}} \mat{S}_{N-1} \left[ \vec{h}_R \vec{h}_R^{\prime T} \right]_{\tau=1} \mat{S}_{N-1}$ |
### Boundary Conditions
In the derivation above, we have neglected discussion of boundary conditions. In general, a trajectory integration problem requires specification of the full initial state (position and velocity in both the $x$ and $y$ directions) or specification of an equivalent number of initial and final states. We will also require that the integration time be specified a priori; this requirement can be relaxed in the future once we get the fixed-time integrator running.
Imposing Dirichlet or Neumann boundary conditions using the discrete equations above is relatively straightforward. Since we have chosen to use Hermite basis functions, the discrete system contains explicit equations for position and velocity at each end of the interval. To impose a specified boundary condition, e.g. $\dot{x}_N = v_f$,
1. We zero out the associated row in $\mat{\bar{K}}$ and set the diagonal entry to unity.
2. We set the associated row in $\vec{\bar{f}}$ equal to the specified value.
Note that this procedure can be trivially generalized to allow assigning specified values to internal degrees of freedom. For example, we may choose to ensure that the trajectory apogee or perigee occurs at the mid-point of the trajectory, so we set the vertical velocity to zero at the mid-point node. This capability may be worth implementing further down the road.
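The two-step procedure above can be sketched as follows (in Python/NumPy purely for illustration — the notebook's code is written in Julia — using a made-up toy system in the generic form $\mat{K}\vec{x} = \vec{f}$):
```python
import numpy as np

# Toy linear system K x = f standing in for the discrete equations
# (size and entries are made up for illustration)
rng = np.random.default_rng(0)
K = rng.standard_normal((6, 6)) + 6.0*np.eye(6)
f = rng.standard_normal(6)

row, v_f = 4, 2.5        # pretend row 4 is the velocity DOF xdot_N

K[row, :] = 0.0          # 1. zero the associated row...
K[row, row] = 1.0        #    ...and set the diagonal entry to unity
f[row] = v_f             # 2. the RHS carries the specified value

x = np.linalg.solve(K, f)
print(np.isclose(x[row], v_f))   # True: the boundary condition is enforced
```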
Another interesting generalization that would be fairly easy to implement would be boundary conditions that are linear combinations of the state variables:
\begin{equation}
\sum_k a_k x_{i,k} = c
\end{equation}
Perhaps most importantly, this could be used to ensure specific flight path constraints are met at impact. In cases like this, instead of setting the diagonal entries to unity in $\mat{\bar{K}}$, we would inject the coefficients for each state and then assign the constant term to the forcing vector. However, in this case it's not clear which row in $\mat{\bar{K}}$ should be replaced: do we replace the equation for the first state involved in the boundary condition, or should we always prefer replacing the velocity variables? These types of boundary conditions are also amenable to Lagrange multiplier techniques, so perhaps that method would be preferred (that will destroy the diagonal structure of the LHS matrix, but an efficient solution may still be possible via partitioning).
General non-linear boundary conditions will require some more thought (although I can't think of a practical example of one, so maybe we don't need it). I suspect we would just add a hook that allows the user to call a custom function that computes an appropriate RHS for the constraint and then use some kind of automatic differentiation or finite differences to linearize that function if needed.
### Linearization
Efficiently solving the non-linear system shown in Equation $\ref{eq:discrete_system_summary}$ will require linearizing the residual with respect to the unknown state variables. Let's consider the discrete residual equations for the $i$-th interior node:
\begin{align}
\label{eq:x_residual}
\vec{R}_i^x &= \mat{\bar{K}}^{RL}_{i-1}\vec{x}_{i-1}
+ \left( \mat{\bar{K}}^{RR}_{i-1} + \mat{\bar{K}}^{LL}_i \right) \vec{x}_{i}
+ \mat{\bar{K}}^{LR}_i\vec{x}_{i+1}
+ \mat{\bar{H}}^{R}_{i-1}\mat{W}\vec{f}_{i-1}^x + \mat{\bar{H}}^{L}_i\mat{W}\vec{f}_i^x \\
\vec{R}_i^y &= \mat{\bar{K}}^{RL}_{i-1}\vec{y}_{i-1}
+ \left( \mat{\bar{K}}^{RR}_{i-1} + \mat{\bar{K}}^{LL}_i \right) \vec{y}_{i}
+ \mat{\bar{K}}^{LR}_i\vec{y}_{i+1}
+ \mat{\bar{H}}^{R}_{i-1}\mat{W}\vec{f}_{i-1}^y + \mat{\bar{H}}^{L}_i\mat{W}\vec{f}_i^y \\
\end{align}
As shown in Equations $\ref{eq:governing_x}$ and $\ref{eq:governing_y}$, the force terms are in general a function of $t,x,y,\dot{x},\dot{y},\alpha$. Therefore, the differential of a forcing function evaluated at the $k$-th quadrature point of the $i$-th interval is given by:
\begin{equation}
\label{eq:nodal_differential}
\delta f_k = \left.\frac{\partial f}{\partial t}\right|_{\tau_k} \delta t
+ \left.\frac{\partial f}{\partial x}\right|_{\tau_k} \delta x
+ \left.\frac{\partial f}{\partial y}\right|_{\tau_k} \delta y
+ \left.\frac{\partial f}{\partial \dot{x}}\right|_{\tau_k} \delta\dot{x}
+ \left.\frac{\partial f}{\partial \dot{y}}\right|_{\tau_k} \delta\dot{y}
+ \left.\frac{\partial f}{\partial \alpha}\right|_{\tau_k} \delta\alpha
\end{equation}
For the time being, we will consider $t$ and $\alpha$ to be specified, and therefore they do not require linearization. Furthermore, since we have selected Hermite basis functions, $x,\dot{x}$ on the $i$-th interval are functions of only $\vec{x}_i,\vec{x}_{i+1}$. A similar relationship exists for the $y$ variable. Therefore, the total differential can be rewritten as:
\begin{equation}
\begin{split}
\delta f_k
&=
\left[
\frac{\partial f}{\partial x} \frac{\partial x}{\partial \vec{x}_i} +
\frac{\partial f}{\partial \dot{x}} \frac{\partial \dot{x}}{\partial \vec{x}_i}
\right]_{\tau_k} \delta \vec{x}_i
+
\left[
\frac{\partial f}{\partial x} \frac{\partial x}{\partial \vec{x}_{i+1}} +
\frac{\partial f}{\partial \dot{x}} \frac{\partial \dot{x}}{\partial \vec{x}_{i+1}}
\right]_{\tau_k} \delta \vec{x}_{i+1} \\
&+
\left[
\frac{\partial f}{\partial y} \frac{\partial y}{\partial \vec{y}_i} +
\frac{\partial f}{\partial \dot{y}} \frac{\partial \dot{y}}{\partial \vec{y}_i}
\right]_{\tau_k} \delta \vec{y}_i
+
\left[
\frac{\partial f}{\partial y} \frac{\partial y}{\partial \vec{y}_{i+1}} +
\frac{\partial f}{\partial \dot{y}} \frac{\partial \dot{y}}{\partial \vec{y}_{i+1}}
\right]_{\tau_k} \delta \vec{y}_{i+1}
\end{split}
\end{equation}
From this expression, we see that in order to compute the partial derivatives of a given $\vec{f}_i$ with respect to the nodal degrees of freedom we need two things:
1. The sensitivity of the forcing function with respect to its input parameters ($x$,$\dot{x}$,etc.), evaluated at each quadrature point in the interval.
2. The sensitivity of the forcing function inputs ($x$,$\dot{x}$, etc.) with respect to the nodal degrees of freedom ($\vec{x}_i, \vec{x}_{i+1}$, etc.) for each quadrature point in the interval.
While the former will in general be a function of the solution and must be re-computed as the solution evolves, the latter can be precomputed and stored. Differentiating Equation $\ref{eq:interval_interpolation_equation}$ with respect to $\vec{x}_i, \vec{x}_{i+1}$ yields:
\begin{align}
\frac{\partial x}{\partial \vec{x}_i} &= \vec{h}_L^T\mat{S}_i \\
\frac{\partial x}{\partial \vec{x}_{i+1}} &= \vec{h}_R^T\mat{S}_i
\end{align}
These expressions can then be differentiated with respect to time:
\begin{align}
\frac{\partial \dot{x}}{\partial \vec{x}_i} &= \frac{1}{\Delta t_i} \vec{h}_L^{\prime T}\mat{S}_i \\
\frac{\partial \dot{x}}{\partial \vec{x}_{i+1}} &= \frac{1}{\Delta t_i} \vec{h}_R^{\prime T}\mat{S}_{i} \\
\end{align}
Substituting these expressions into Equation $\ref{eq:nodal_differential}$ and stacking the results for each quadrature points, we can derive the following expression for the linearization of the force vector on the $i$-th interval with respect to the nodal degrees of freedom:
\begin{align}
\frac{\partial}{\partial x_{ i }}\vec{f}_i =
\frac{1}{\Delta t_i^2}\left(
\mat{\bar{H}}^{L}_i \mat{F}^{xx} +
\mat{\bar{H}}^{\prime L}_i \mat{F}^{x\dot{x}}
\right)^T \\
\frac{\partial}{\partial x_{i+1}}\vec{f}_i =
\frac{1}{\Delta t_i^2}\left(
\mat{\bar{H}}^{R}_i \mat{F}^{xx} +
\mat{\bar{H}}^{\prime R}_i \mat{F}^{x\dot{x}}
\right)^T \\
\mat{F}^{pq} = \left[ \begin{array}{ccc}
\left.\frac{\partial f^p}{\partial q}\right|_{\tau_1} & & \\
& \left.\frac{\partial f^p}{\partial q}\right|_{\tau_2} & \\
& & \ddots
\end{array}\right]
\end{align}
With the identities above, we may now write the linearization of Equation $\ref{eq:x_residual}$ as follows:
... this gets ugly fast. Doable, but ugly.
| f95ea64773d7f7bc94a07bebe689b6e50fbe81ad | 238,400 | ipynb | Jupyter Notebook | doc/theory.ipynb | flying-tiger/TrajOpt.jl | 5892b3e580c26c752e1565b5e2f635d112d143cb | [
"MIT"
]
| null | null | null | doc/theory.ipynb | flying-tiger/TrajOpt.jl | 5892b3e580c26c752e1565b5e2f635d112d143cb | [
"MIT"
]
| null | null | null | doc/theory.ipynb | flying-tiger/TrajOpt.jl | 5892b3e580c26c752e1565b5e2f635d112d143cb | [
"MIT"
]
| null | null | null | 272.768879 | 196,083 | 0.886598 | true | 10,432 | Qwen/Qwen-72B | 1. YES
2. YES | 0.76908 | 0.731059 | 0.562243 | __label__eng_Latn | 0.937662 | 0.144608 |
# Cheme 512- Method of Engineering Analysis
## Analysis of Problem 1.1.2 from Conduction Heat Solution Manual
#### Maria Politi
#### Diagram :
To determine a solution to this problem, a slab composed of two different layers was chosen. A similar procedure to the one displayed below can be used to find the solution for a slab with multiple layers.
#### The Differential Equation:
The energy conservation equation in Cartesian coordinates can be written as:
$$\frac{\partial T}{\partial t} + v_x\frac{\partial T}{\partial x} + v_y\frac{\partial T}{\partial y} + v_z\frac{\partial T}{\partial z} = \alpha \bigg[\frac{\partial^2T}{\partial x^2}+\frac{\partial^2T}{\partial y^2}+\frac{\partial^2T}{\partial z^2}\bigg] + \frac{H_v}{\rho \hat C_p }$$
#### Assumptions:
Given the problem statement provided, the following assumptions can be made to simplify the governing equation:
> * Steady-state heat transfer
> * 1-dimensional heat transfer
> * No bulk flow (static problem)
> * No heat generation
#### The simplified ODE:
Using the assumptions laid out above, the differential equation can be simplified to the following form:
$$ \frac{d^2T}{dx^2}=0$$
Note: the partial derivatives were replaced with full derivatives since the temperature is only a function of the position along the x-direction.
The temperature of each slab section can then be expressed as:
$$ \frac{dT_1}{dx}=a_1 \space \rightarrow \space\space\space\space T_1(x)= a_1x+b_1 $$
$$ \frac{dT_2}{dx}=a_2 \space \rightarrow \space\space\space\space T_2(x)= a_2x+b_2$$
#### Boundary Conditions (BCs) :
In order to solve the two equations describing the temperature distribution in the slab, four boundary conditions are needed. These can be found as follows:
1. Convective heat flux at $(x = 0)$
$$h_i[T_1(0)- T_i] = k_1\frac{dT_1}{dx}\space$$
2. Constant heat flux through the slab at $(x=L_1)$
$$ -k_1\frac{dT_1}{dx}\mid_{x=L_1}=-k_2\frac{dT_2}{dx}\mid_{x=L_1}$$
3. Thermal equilibrium at $(x=L_1)$
$$ T_1(L_1)=T_2(L_1) \space $$
4. Convective heat flux at $(x = L_1+L_2)$
$$h_o[T_2(L_1+L_2) - T_o] = -k\frac{dT_2}{dx}$$
Rewriting them in terms of the constants of integration of $T_1(x)$ and $T_2(x)$ yields:
1. $h_i[b_1-T_i]=k_1a_1$
2. $-k_1a_1=-k_2a_2$
3. $ a_1L_1+b_1=a_2L_1+b_2$
4. $ h_o[a_2(L_1+L_2)+b_2-T_o]=-k_2a_2$
```python
import sympy as sp
from sympy.solvers import solve
from sympy import Symbol
#Define the symbols for the conductive coefficients
k_1= Symbol('k_1')
k_2= Symbol('k_2')
#Define the symbols for the constants of integration
a_1= Symbol('a_1')
a_2= Symbol('a_2')
b_1= Symbol('b_1')
b_2= Symbol('b_2')
#Define the symbols for the convective coefficients
h_i= Symbol('h_i')
h_o= Symbol('h_o')
#Define the symbols for the temperature distribution in the slabs
T_i= Symbol('T_i')
T_o= Symbol('T_o')
#Define the symbols for the thickness of each slab section
L_1= Symbol('L_1')
L_2= Symbol('L_2')
#Write the symbolic form of the boundary conditions as functions of the constants of integration
eqn1= (k_1*a_1-h_i*(b_1-T_i))
eqn2= (-k_1*a_1+k_2*a_2)
eqn3= (a_1*L_1+b_1-a_2*L_1-b_2)
eqn4= (-k_2*a_2-h_o*(a_2*(L_2+L_1)+b_2-T_o))
#Solve for the constants of integration as function of known variables
sol= sp.nonlinsolve((eqn1,eqn2,eqn3,eqn4),(a_1,a_2,b_1, b_2)).args[0]
print("Constant of integration a1")
display(sol[0])
print("Constant of integration a2")
display(sol[1])
print("Constant of integration b1")
display(sol[2])
print("Constant of integration b2")
display(sol[3])
```
Constant of integration a1
$\displaystyle - \frac{h_{i} h_{o} k_{2} \left(T_{i} - T_{o}\right)}{L_{1} h_{i} h_{o} k_{2} + L_{2} h_{i} h_{o} k_{1} + h_{i} k_{1} k_{2} + h_{o} k_{1} k_{2}}$
Constant of integration a2
$\displaystyle - \frac{h_{i} h_{o} k_{1} \left(T_{i} - T_{o}\right)}{L_{1} h_{i} h_{o} k_{2} + L_{2} h_{i} h_{o} k_{1} + h_{i} k_{1} k_{2} + h_{o} k_{1} k_{2}}$
Constant of integration b1
$\displaystyle \frac{L_{1} T_{i} h_{i} h_{o} k_{2} + L_{2} T_{i} h_{i} h_{o} k_{1} + T_{i} h_{i} k_{1} k_{2} + T_{o} h_{o} k_{1} k_{2}}{L_{1} h_{i} h_{o} k_{2} + L_{2} h_{i} h_{o} k_{1} + h_{i} k_{1} k_{2} + h_{o} k_{1} k_{2}}$
Constant of integration b2
$\displaystyle \frac{L_{1} T_{i} h_{i} h_{o} k_{1} - L_{1} T_{o} h_{i} h_{o} k_{1} + L_{1} T_{o} h_{i} h_{o} k_{2} + L_{2} T_{i} h_{i} h_{o} k_{1} + T_{i} h_{i} k_{1} k_{2} + T_{o} h_{o} k_{1} k_{2}}{L_{1} h_{i} h_{o} k_{2} + L_{2} h_{i} h_{o} k_{1} + h_{i} k_{1} k_{2} + h_{o} k_{1} k_{2}}$
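As a quick sanity check (an illustrative aside not in the original notebook; it assumes the cell above has already been run so that `eqn1`–`eqn4` and `sol` are defined), substituting the solved constants back into the boundary-condition residuals should give identically zero:
```python
# Substitute the solved constants of integration back into each
# boundary-condition residual; every expression should simplify to 0.
subs = dict(zip((a_1, a_2, b_1, b_2), sol))
for eqn in (eqn1, eqn2, eqn3, eqn4):
    print(sp.simplify(eqn.subs(subs)))
```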
#### Plot of the solution:
```python
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.cm as cm
from mpl_toolkits.mplot3d import axes3d
#Choosing values for the convection inside and outside the slab
hi= 20
ho= 0.5
#Defining values for the conductive coefficients of the two materials composing the slab
k1= 15
k2= 0.5
# Inner and outer temperatures. Assuming the inner temperature is higher than the outside one
Ti= 21
To= 10
# Define the thickness of each slab section
L1= 5
L2= 2
x1= np.linspace(0,L1, 100)
x2= np.linspace(L1,L2+L1,100)
# Calculate the constants of integration based on the solution found by sympy above
a1= -(hi*ho*k2*(Ti-To))/(L1*hi*ho*k2+L2*hi*ho*k1+hi*k1*k2+ho*k2*k1)
a2= -(hi*ho*k1*(Ti-To))/(L1*hi*ho*k2+L2*hi*ho*k1+hi*k1*k2+ho*k2*k1)
b1= (L1*Ti*hi*ho*k2+L2*Ti*hi*ho*k1+Ti*hi*k1*k2+To*ho*k1*k2)/(L1*hi*ho*k2+L2*hi*ho*k1+hi*k1*k2+ho*k2*k1)
b2= (L1*Ti*hi*ho*k1-L1*To*hi*ho*k1+L1*To*hi*ho*k2+L2*Ti*hi*ho*k1+Ti*hi*k2*k1+To*ho*k1*k2)/(L1*hi*ho*k2+L2*hi*ho*k1+hi*k1*k2+ho*k2*k1)
# Calculate the temperature gradient in each slab section.
T1= a1*x1+b1
T2= a2*x2+b2
# Plot the solution
fig, ax = plt.subplots(figsize=(15,8))
ax.plot(x1,T1, label= '$T_1(x)$')
ax.plot(x2,T2, label= '$T_2(x)$')
plt.legend()
plt.xlabel('x [mm]', fontsize=16)
plt.ylabel('T(x)[$^oC$]', fontsize=16)
plt.axvline(5, c='k')
```
#### Investigate the effect of the conductive coefficient
```python
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.cm as cm
from mpl_toolkits.mplot3d import axes3d
#Repeat the solution and investigate the effect of the conductive coefficient of slab 1.
hi= 20
ho= 0.5
k1= [0.1,1,5,15,50,100]
k2= 1
Ti= 21
To= 10
L1= 5
L2= 2
x1= np.linspace(0,L1, 100)
x2= np.linspace(L1,L2+L1,100)
fig, ax = plt.subplots(figsize=(15,8))
for i,k1 in enumerate(k1):
a1= -(hi*ho*k2*(Ti-To))/(L1*hi*ho*k2+L2*hi*ho*k1+hi*k1*k2+ho*k2*k1)
a2= -(hi*ho*k1*(Ti-To))/(L1*hi*ho*k2+L2*hi*ho*k1+hi*k1*k2+ho*k2*k1)
b1= (L1*Ti*hi*ho*k2+L2*Ti*hi*ho*k1+Ti*hi*k1*k2+To*ho*k1*k2)/(L1*hi*ho*k2+L2*hi*ho*k1+hi*k1*k2+ho*k2*k1)
b2= (L1*Ti*hi*ho*k1-L1*To*hi*ho*k1+L1*To*hi*ho*k2+L2*Ti*hi*ho*k1+Ti*hi*k2*k1+To*ho*k1*k2)/(L1*hi*ho*k2+L2*hi*ho*k1+hi*k1*k2+ho*k2*k1)
T1= a1*x1+b1
T2= a2*x2+b2
ax.plot(x1,T1, label= '$T_1, k1= {}$'.format(k1))
ax.plot(x2,T2, label= '$T_2(x)$')
plt.legend()
plt.xlabel('x [mm]', fontsize=16)
plt.ylabel('T(x)[$^oC$]', fontsize=16)
plt.axvline(5, c='k')
```
```python
```
| aa3edeafcea567832617a4d01c611e88646a60ff | 123,723 | ipynb | Jupyter Notebook | presentations/10_17_19_Politi.ipynb | uw-cheme512/uw-cheme512.github.io | 6dad7a9554eafb6eba347462d30c62bf9c0ec4da | [
"BSD-3-Clause"
]
| null | null | null | presentations/10_17_19_Politi.ipynb | uw-cheme512/uw-cheme512.github.io | 6dad7a9554eafb6eba347462d30c62bf9c0ec4da | [
"BSD-3-Clause"
]
| null | null | null | presentations/10_17_19_Politi.ipynb | uw-cheme512/uw-cheme512.github.io | 6dad7a9554eafb6eba347462d30c62bf9c0ec4da | [
"BSD-3-Clause"
]
| null | null | null | 296.697842 | 75,488 | 0.919902 | true | 2,834 | Qwen/Qwen-72B | 1. YES
2. YES | 0.899121 | 0.868827 | 0.781181 | __label__eng_Latn | 0.711325 | 0.653277 |
# Nonlinear Equations and their Roots
## CH EN 2450 - Numerical Methods
**Prof. Tony Saad (<a>www.tsaad.net</a>) <br/>Department of Chemical Engineering <br/>University of Utah**
<hr/>
The purpose of root finding methods for nonlinear functions is to find the roots - or values of the independent variable, e.g. x - that make the nonlinear function equal to zero. The general form of a nonlinear root finding proposition is given as:
find $x$ such that
\begin{equation}
f(x) = 0
\end{equation}
Alternatively, a problem may be presented as:
find $x$ such that
\begin{equation}
f(x) = a
\end{equation}
then, one redefines this into what is called "residual" form
\begin{equation}
r(x) \equiv f(x) - a
\end{equation}
and reformulates the problem to find $x$ such that
\begin{equation}
r(x) = 0
\end{equation}
In all examples below, we will explore the roots of the following function
\begin{equation}
\ln(x) + \cos(x)e^{-0.1x} = 2
\end{equation}
or, in residual form
\begin{equation}
r(x) \equiv \ln(x) + \cos(x)e^{-0.1x} - 2 = 0
\end{equation}
This function has three roots, 5.309, 8.045, and 10.02
```python
import numpy as np
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
import matplotlib.pyplot as plt
from scipy.optimize import fsolve
```
```python
res = lambda x: np.log(x) + np.cos(x)*np.exp(-0.1*x)-2.0
x = np.linspace(4,15,200)
plt.grid()
plt.axhline(y=0,color='k')
plt.plot(x,res(x))
```
[<matplotlib.lines.Line2D at 0x101282f9e8>]
# Root Finding Methods
There are two classes of methods to find roots of equations:
1. Closed domain methods,
2. Open domain methods
Closed domain methods work by bracketing the root while open domain methods can start with an arbitrary initial guess.
# Close Domain Methods
## Bisection Method
Perhaps the most popular and intuitive root finding method, the Bisection method works by first bracketing a root and then successively improving the bracket by taking the midway point. An algorithm looks like the following
1. Choose values $a$ and $b$ such that the root, $x_0$ is $a \leq x_0 \leq b$
2. Calculate $c = \frac{a+b}{2}$ as the midway point between $a$ and $b$
3. Check which side the root is: if $f(a)\times f(c) < 0$ then $b = c$ else $a=c$
4. Check for convergence and repeat as necessary
Below is an example implementation of the bisection method
```python
def bisect(f,a,b,tol, maxiter):
err = tol + 100
niter = 0
print('{:<12} {:<12} {:<12} {:<12}'.format('Iteration','a','b','error'))
while err > tol and niter < maxiter:
niter +=1
c = (a + b)/2.0
fc = f(c)
fa = f(a)
if (fa * fc < 0.0):
b = c
else:
a = c
err = abs(fc)
print('{:<12} {:<12} {:<12} {:<12}'.format(niter, round(a,6), round(b,6), round(err,10)))
print('Iterations=',niter,' Root=',c)
return c
```
```python
bisect(res,4,5.5,1e-5,10)
```
Iteration a b error
1 4.75 5.5 0.418471165
2 5.125 5.5 0.1256704432
3 5.125 5.3125 0.0020525841
4 5.21875 5.3125 0.0599408749
5 5.265625 5.3125 0.0284566567
6 5.289062 5.3125 0.0130777303
7 5.300781 5.3125 0.0054812004
8 5.306641 5.3125 0.0017064285
9 5.306641 5.30957 0.0001750523
10 5.308105 5.30957 0.0007651951
Iterations= 10 Root= 5.30810546875
5.30810546875
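For reference (a standard property of the bisection method, added here as an aside), each iteration halves the bracketing interval, so after $n$ iterations the root is known to within
\begin{equation}
\frac{b-a}{2^n} = \frac{5.5 - 4}{2^{10}} \approx 1.5\times10^{-3}
\end{equation}
for the run above. The implementation stops on the residual $|f(c)|$ rather than the bracket width, which is why it hit the iteration limit before reaching the requested tolerance of $10^{-5}$.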
## Method of False Position (Regula-Falsi)
The Method of False Position, or Regula-Falsi, takes into consideration how close a guess might be to a root. It requires two initial guesses that bracket the root, but instead of cutting the bracket in half, the Regula-Falsi method connects the two guesses via a straight line and then finds the point at which this straight line intersects the x-axis and uses that as a new guess.
Here's the algorithm for the Regula-Falsi method:
1. Choose values $a$ and $b$ such that the root, $x_0$ is $a \leq x_0 \leq b$
2. Calculate the slope of the line connecting $a$ and $b$: $m = \frac{f(b) - f(a)}{b-a}$
3. Find the point at which this line intersects the x-axis: $c = b - \frac{f(b)}{m}$
4. Check which side the root is: if $f(a)\times f(c) < 0$ then $b = c$ else $a=c$
5. Check for convergence and repeat as necessary
```python
def falsi(f,a,b,tol):
niter = 0
err = tol + 100
print('{:<12} {:<12} {:<12} {:<12}'.format('Iteration','a','b','error'))
while err > tol:
fa = f(a)
fb = f(b)
m = (fb - fa)/(b - a)
c = b - fb/m
fc = f(c)
err = abs(fc)
if fc * fa < 0.0:
b = c
else:
a = c
err = abs(fc)
print('{:<12} {:<12} {:<12} {:<12}'.format(niter, round(a,6), round(b,6), round(err,10)))
niter += 1
print('Iterations:', niter, 'Root=',c)
return c
```
```python
falsi(res,4,5.5,1e-5)
```
Iteration a b error
0 4 5.353774 0.0280786749
1 4 5.318575 0.0059333086
2 4 5.311179 0.0012065276
3 4 5.309677 0.0002433628
4 4 5.309374 4.90066e-05
5 4 5.309313 9.8653e-06
Iterations: 6 Root= 5.30931285069788
5.30931285069788
# Open Domain Methods
Closed domain methods require two initial guesses that bracket a root and are guaranteed to converge to the root, but in general are not practical when the root's location is unknown. Open domain methods relax this requirement and do not need initial guesses that bracket a root.
## The Secant Method
The secant method is identical to the regula-falsi method but does not require the initial guesses to bracket a root. Here's an algorithm for the secant method:
1. Choose values $a$ and $b$ that do not necessarily bracket a root
2. Calculate the slope of the line connecting $a$ and $b$: $m = \frac{f(b) - f(a)}{b-a}$
3. Find the point at which this line intersects the x-axis: $c = b - \frac{f(b)}{m}$
4. Set $a = b$ and $b = c$
5. Check for convergence and repeat as necessary
```python
def secant(f,a,b,tol):
niter = 0
err = tol + 100
print('{:<12} {:<12} {:<12} {:<12}'.format('Iteration','a','b','error'))
while err > tol:
fa = f(a)
fb = f(b)
m = (fb - fa)/(b - a)
c = b - fb/m
fc = f(c)
err = abs(fc)
a = b
b = c
print('{:<12} {:<12} {:<12} {:<12}'.format(niter, round(a,6), round(b,6), round(err,10)))
niter += 1
print('Iterations:', niter, 'Root=',c)
return c
```
```python
secant(res,7,7.5,1e-5)
```
Iteration a b error
0 7.5 8.130598 0.0254859413
1 8.130598 8.051866 0.0019750738
2 8.051866 8.045252 4.76383e-05
3 8.045252 8.045407 7.92e-08
Iterations: 4 Root= 8.04540742136555
8.04540742136555
## Newton's Method
Newton's method is one of the most popular open domain nonlinear solvers. It is based on a two-term approximation of the Taylor series of a function $f(x)$. Given an initial guess $x_0$, Newton's method is implemented in the following steps:
1. Choose $x_0$ as an initial guess
2. Compute $f(x_0)$ and $f'(x_0)$
3. Compute $x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}$
4. Set $x_0 = x_1$
5. Check for convergence and repeat as necessary
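The update in step 3 follows from the two-term Taylor approximation mentioned above: setting the linearized function to zero,
\begin{equation}
0 = f(x) \approx f(x_0) + f'(x_0)\,(x - x_0) \quad \Longrightarrow \quad x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}.
\end{equation}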
```python
def newton(f,df,x0,tol):
niter = 0
err = tol + 100
while err > tol and niter < 100:
x1 = x0 - f(x0)/df(x0)
x0 = x1
err = abs(f(x0))
niter += 1
print('Iterations:', niter, 'Root=',x1)
return x1
```
In many cases, the derivative is not known and one must use a finite difference approximation to the derivative.
```python
def newton2(f,x0,tol):
niter = 0
err = tol + 100
while err > tol and niter < 100:
delta = 1e-4 * x0 + 1e-12
df = (f(x0 + delta) - f(x0))/delta
x1 = x0 - f(x0)/df
x0 = x1
err = abs(f(x0))
niter += 1
print('Iterations:', niter, 'Root=',x1)
return x1
```
```python
newton2(res,1,1e-5)
```
Iterations: 7 Root= 5.309297481151985
5.309297481151985
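As an independent check (a small aside; `fsolve` was imported in the setup cell but has not been used yet), SciPy's general-purpose solver recovers the three roots quoted at the top of the notebook when started near each of them:
```python
# Cross-check the three roots using scipy.optimize.fsolve
# (assumes the setup cell defining res and importing fsolve has been run)
for guess in (5.0, 8.0, 10.0):
    print(fsolve(res, guess)[0])
```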
```python
import urllib
import requests
from IPython.core.display import HTML
def css_styling():
styles = requests.get("https://raw.githubusercontent.com/saadtony/NumericalMethods/master/styles/custom.css")
return HTML(styles.text)
css_styling()
```
CSS style adapted from https://github.com/barbagroup/CFDPython. Copyright (c) Barba group
| b8a80900fc2b1f0b9ad0aef6e48e4d31f3e32943 | 48,868 | ipynb | Jupyter Notebook | topics/nonlinear-equations/Root Finding Methods.ipynb | jomorodi/NumericalMethods | e040693001941079b2e0acc12e0c3ee5c917671c | [
"MIT"
]
| 3 | 2019-03-27T05:22:34.000Z | 2021-01-27T10:49:13.000Z | topics/nonlinear-equations/Root Finding Methods.ipynb | jomorodi/NumericalMethods | e040693001941079b2e0acc12e0c3ee5c917671c | [
"MIT"
]
| null | null | null | topics/nonlinear-equations/Root Finding Methods.ipynb | jomorodi/NumericalMethods | e040693001941079b2e0acc12e0c3ee5c917671c | [
"MIT"
]
| 7 | 2019-12-29T23:31:56.000Z | 2021-12-28T19:04:10.000Z | 37.275362 | 382 | 0.466583 | true | 3,587 | Qwen/Qwen-72B | 1. YES
2. YES | 0.888759 | 0.855851 | 0.760645 | __label__eng_Latn | 0.845248 | 0.605566 |
(Evaluate block to execute LaTeX definitions)
$ \newcommand{\ybar}{\overline{y}} $
$ \renewcommand{\d}[2]{\frac{d #1}{d #2}} $
$ \newcommand{\dd}[2]{\frac{d^2 #1}{d #2^2}} $
$ \newcommand{\pd}[2]{\frac{\partial #1}{\partial #2}} $
$ \newcommand{\pdd}[2]{\frac{\partial^2 #1}{\partial #2^2}} $
$ \renewcommand{\b}{\beta} $
$ \newcommand{\m}{\mu} $
$ \renewcommand{\v}[1]{\mathbf{#1}} $
$ \newcommand{\N}{\mathcal{N}} $
$ \renewcommand{\l}{\lambda} $
$ \newcommand{\ol}[1]{\overline{#1}} $
# Calculating the Dispersion Relation for Zonostrophic Instability
This is the supplementary material for the paper [Dynamics of zonal flows: failure of wave-kinetic theory, and new geometrical optics approximations](https://www.cambridge.org/core/journals/journal-of-plasma-physics/article/dynamics-of-zonal-flows-failure-of-wave-kinetic-theory-and-new-geometrical-optics-approximations/5B51BB7D026E19E8F21801568ED6EA75). The article may also be found on arXiv [here](https://arxiv.org/abs/1604.06904).
The dispersion relation for zonostrophic instability is shown in Figure 2 in the paper. We show how to derive the general form of the dispersion relation for zonostrophic instability, and then how to compute it numerically for a given set of parameters. Python code is included.
In each case, we reduce the general form of the dispersion relation to a simpler form by using a fluctuation spectrum that consists of a thin ring in wavevector space $\sim \delta(k - k_f)$ centered at $k_f$. This reduction was first considered for CE2 by Srinivasan and Young (2012); more details can be found in Parker (Ph.D. Thesis, 2014).
If this document is being read as a PDF or webpage, an interactive ipython notebook can be found [here](https://github.com/jeffbparker/ZonalFlowWaveKinetic), or at a [direct link](https://github.com/jeffbparker/ZonalFlowWaveKinetic/raw/master/Supplemental_Material.ipynb) (right click, save as). In the ipython notebook one can follow along the mathematics and run the Python code inside the notebook using the [Jupyter](http://jupyter.org/) software. To view it non-interactively within a browser without the Jupyter software, see [here](http://nbviewer.jupyter.org/github/jeffbparker/ZonalFlowWaveKinetic/blob/master/Supplemental_Material.ipynb).
## Coordinate Convention
This document uses the geophysical convention for coordinates. To use the standard plasma coordinates, let $(x,y,\ybar,\b,U) \mapsto (-y, x, \ol{x}, \kappa, -U)$.
## General Form for Zonostrophic Instability
### Asymptotic WKE
We start from Eq. (3.2). There is a homogeneous equilibrium, independent of $\ybar$, at $\N_H(k_x,k_y) = F(k_x,k_y) / 2\b\m$, $U=0$. Using the form
\begin{equation}
\N = \N_H + e^{iq\ybar} e^{\l t} \N_1(k_x, k_y),
\end{equation}
\begin{equation}
U = e^{iq\ybar} e^{\l t} U_1,
\end{equation}
we linearize Eq. (3.2) for perturbations about the equilibrium. We obtain
\begin{equation}
\l \N_1 = iqk_x U_1 \pd{\N_H}{k_y} - \frac{2i\b q k_x k_y}{\ol{k}^4} \N_1 - 2\m \N_1
\end{equation}
\begin{equation}
(\l + \m) U_1 = iq \int \frac{d\v{k}}{(2\pi)^2} \frac{k_x k_y}{\ol{k}^4} \b \N_1.
\end{equation}
The first equation is solved for $\N_1$ in terms of $U_1$ and then substituted into the second equation. This procedure yields a single nonlinear equation for the eigenvalues $\l$ corresponding to zonostrophic instability. One finds
\begin{equation}
\l + \m = -q^2 \int \frac{d\v{k}}{(2\pi)^2} \frac{k_x^2 k_y}{(\l + 2\m)\ol{k}^4 + 2i\b q k_x k_y} \b \pd{\N_H}{k_y}
\end{equation}
When $q$ is large, $\l \sim q$.
### Zonostrophic Instability in CE2-GO
We follow the same procedure, starting from Eq. (4.1). After linearizing about the homogeneous equilibrium $W_H(k_x,k_y) = F(k_x,k_y) / 2\m$, we find
\begin{equation}
\l W_1 = iqk_x U_1 \pd{}{k_y} \left[ \left( 1 - \frac{q^2}{\ol{k}^2} \right) W_H \right] - \frac{2i\b q k_x k_y}{\ol{k}^4} W_1 - 2\m W_1,
\end{equation}
\begin{equation}
(\l + \m)U_1 = iq \int \frac{d\v{k}}{(2\pi)^2} \frac{k_x k_y}{\ol{k}^4} W_1.
\end{equation}
Solving for $W_1$ and substituting into the second equation yields
\begin{equation}
\l + \m = -q^2 \int \frac{d\v{k}}{(2\pi)^2} \frac{k_x^2 k_y}{(\l + 2\m)\ol{k}^4 + 2i\b q k_x k_y} \pd{}{k_y} \left[ \left( 1 - \frac{q^2}{\ol{k}^2} \right) W_H \right]
\end{equation}
### Zonostrophic Instability in the WKE
We follow the same procedure, starting from Eq. (4.3). We linearize about the homogeneous equilibrium $\N_H(k_x,k_y) = F(k_x,k_y) / 2\m \b$, and find
\begin{equation}
\l \N_1 = iqk_x U_1 \left( 1 - \frac{q^2}{\ol{k}^2} \right) \pd{\N_H}{k_y} - \frac{2i\b q k_x k_y}{\ol{k}^4} \N_1 - \frac{q^2 F}{\b^2} U_1 - 2 \m \N_1,
\end{equation}
\begin{equation}
(\l + \m) U_1 = iq \int \frac{d\v{k}}{(2\pi)^2} \frac{k_x k_y}{\ol{k}^4} (\b \N_1 - U_1'' \N_H).
\end{equation}
Under the assumption that $F$ and $\N_H$ are even in $k_x$ and in $k_y$, the integral over $U_1'' \N_H$ vanishes. Solving for $\N_1$ gives
\begin{equation}
\N_1 = \frac{iqk_x U_1 (1 - q^2/\ol{k}^2)}{\l + 2\m + 2i\b q k_x k_y / \ol{k}^4} \pd{\N_H}{k_y} - \frac{q^2 F U_1}{\b^2 (\l + 2\m + 2i\b q k_x k_y / \ol{k}^4)}
\end{equation}
Plugging this into the zonal flow equation yields the dispersion relation,
\begin{equation}
\l + \m = -q^2 \int \frac{d\v{k}}{(2\pi)^2} \left[ \frac{\b k_x^2 k_y (1 - q^2/\ol{k}^2)}{(\l+2\m) \ol{k}^4 + 2i\b q k_x k_y} \pd{\N_H}{k_y} + \frac{qF}{\b} \frac{i k_x k_y}{(\l + 2\m) \ol{k}^4 + 2i\b q k_x k_y} \right]
\end{equation}
The last term here has no analog in the other models and its presence is a consequence of the invalidity of this model --- wave action is not conserved during this instability.
### Zonostrophic Instability in CE2
The same procedure can be applied to Eq. (2.3) with a little more algebraic complexity. See Srinivasan and Young (2012) or Parker (PhD Thesis, Section 3.2) for the derivation. Using Eqs. (3.25) and (3.26) of Parker (PhD Thesis), and taking $\ol{q}^2 = q^2$ (corresponding to $L_d^{-2} = 0$ for the zonal flows) and the limit of zero viscosity $\nu=0$, the dispersion relation is
\begin{equation}
\l + \m = -q \int \frac{d\v{k}}{(2\pi)^2} \frac{k_x^2 k_y}{(\l + 2\m) \ol{h}^2_1 \ol{h}^2_2 + 2i\b qk_xk_y } \left[ \left( 1 - \frac{q^2}{\ol{h}_1^2} \right) W_H(k_x, k_y + \tfrac{1}{2}q) - \left(1 - \frac{q^2}{\ol{h}_2^2} \right) W_H(k_x, k_y - \tfrac{1}{2}q) \right],
\end{equation}
where $\ol{h}_{1,2}^2 = k_x^2 + (k_y \pm \tfrac{1}{2} q)^2 + L_d^{-2}$. After a Taylor expansion for small $q$, this equation reduces to the CE2-GO dispersion relation.
## Reduction for Thin-ring, isotropic forcing
A thin-ring forcing isotropic in wavevector space is a convenient simplification. We use
\begin{equation}
F(k) = 4\pi \varepsilon k_f \delta(k-k_f).
\end{equation}
If $L_d^{-2} = 0$, then $\varepsilon$ is equal to the energy input into the system by the forcing. With this forcing, the dispersion relations above can be simplified by converting the integrals to polar coordinates. One is left with only one integral, the angle integral, that must be computed numerically.
For some of these computations we will integrate by parts, and we need the relation
\begin{equation}
\pd{}{k_y} \frac{k_y}{(\l + 2\m) \ol{k}^4 + 2i\b qk_x k_y} = \frac{(\l + 2\m) \ol{k}^2 (\ol{k}^2 - 4k_y^2)}{[(\l + 2\m) \ol{k}^4 + 2i\b qk_x k_y]^2}
\end{equation}
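As a quick symbolic cross-check of this identity (an illustrative aside; SymPy is not used in the original notebook), with $m = L_d^{-2}$:
```python
import sympy as sp

kx, ky, q, beta, lam, mu, m = sp.symbols('k_x k_y q beta lambda mu m', real=True)
kbar2 = kx**2 + ky**2 + m                    # kbar^2, with m = L_d^(-2)
D = (lam + 2*mu)*kbar2**2 + 2*sp.I*beta*q*kx*ky

lhs = sp.diff(ky / D, ky)
rhs = (lam + 2*mu)*kbar2*(kbar2 - 4*ky**2) / D**2
print(sp.simplify(lhs - rhs))                # prints 0
```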
### Asymptotic WKE
First we apply integration by parts in $k_y$ for the dispersion relation, then substitute $\N_H = F/2\m \b$. This yields
\begin{equation}
\l + \m = q^2 \int \frac{d\v{k}}{(2\pi)^2} \frac{F}{2\m} \frac{(\l + 2\m) \ol{k}^2 k_x^2 (\ol{k}^2 - 4k_y^2)} {[(\l + 2\m) \ol{k}^4 + 2i\b qk_x k_y]^2}
\end{equation}
For isotropic $F$, we use polar coordinates with $k_x = k\sin \phi$, $k_y = -k \cos \phi$. The dispersion relation becomes
\begin{equation}
\l + \m = q^2 \int_0^\infty dk\, \frac{F(k)}{4\pi \m} k^2 (\l + 2\m)(1+m) \times \int_0^{2\pi} \frac{d\phi}{2\pi} \frac{\sin^2 \phi (1 + m - 4\cos^2 \phi)}{ [(\l + 2\m)k^2 (1+m)^2 - 2i\b q \cos\phi \sin\phi]^2}
\end{equation}
where $m = (k L_d)^{-2}$. Substituting the thin-ring forcing $F = 4\pi \varepsilon k_f \delta(k-k_f)$, we obtain the final form of the dispersion relation,
\begin{equation}
\l + \m = q^2 \frac{\varepsilon}{\m} k_f^4 (\l + 2\m) (1 + m_f) \int_0^{2\pi} \frac{d\phi}{2\pi} \frac{(1 + m_f - 4\cos^2\phi) \sin^2 \phi}{[(\l + 2\m)k_f^2 (1 + m_f)^2 - 2i\b q \cos\phi \sin \phi]^2}
\end{equation}
with $m_f = (k_f L_d)^{-2}$.
Now, this is a nonlinear equation that we can solve for $\l$. For most choices of the spectrum $W_H$, an unstable perturbation with $\text{Re}\,\l > 0$ has an eigenvalue $\l$ that is purely real. It can be seen that if $\l$ is real, the imaginary part of the $\phi$ integral vanishes, so that one can work only with the real part. Working only with real $\l$ rather than complex $\l$ has the advantage that robust one-dimensional root finders can be used. For the integrand above, we have
\begin{equation}
\frac{A}{(B + iC)^2} = \frac{A(B^2 - C^2)}{(B^2 + C^2)^2} + \text{imag. part}
\end{equation}
where $A$, $B$, and $C$ are assumed real. If $\l$ is real, the imaginary part vanishes after integration, so we drop this term. We do indeed find real solutions for $\l$.
Below is Python code to solve the dispersion relation for zonostrophic instability within the asymptotic WKE.
```python
# Some python setup
from __future__ import division
import numpy as np
import scipy.optimize
import scipy.integrate
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.style.use('ggplot')
plt.rcParams['figure.figsize'] = (8.0, 8.0)
font = {'family' : 'normal',
'weight' : 'bold',
'size' : 14}
plt.rc('font', **font)
```
```python
def lambda_of_params(beta, mu, epsilon, Ldinvsq, kf, q):
"""Given parameters, calculate the eigenvalue lambda for the asymptotic WKE."""
funres = lambda lamb: disp_relation_residual(beta, mu, epsilon, Ldinvsq, kf, q, lamb)
domainmin = -2*mu
domainmax = 100
lamb = scipy.optimize.brentq(funres, domainmin, domainmax) #1D root finder
return lamb
def disp_relation_residual(beta, mu, epsilon, Ldinvsq, kf, q, lamb):
"""Compute the residual for the dispersion relation. lamb must be real."""
mf = Ldinvsq / (kf*kf)
lhs = lamb + mu
rhs = q*q*epsilon/mu * kf**4 * (lamb + 2*mu) * (1+mf) * polarint(beta, mu, epsilon, Ldinvsq, kf, q, lamb)
res = rhs - lhs
return res
def polarint(beta, mu, epsilon, Ldinvsq, kf, q, lamb):
"""Carry out the polar integral"""
fun = lambda phi: polarint_integrand(beta, mu, epsilon, Ldinvsq, kf, q, lamb, phi)
out, err = scipy.integrate.quad(fun, 0, 2*np.pi, epsabs=2e-6, epsrel=2e-6)
out = out / (2*np.pi)
return out
def polarint_integrand(beta, mu, epsilon, Ldinvsq, kf, q, lamb, phi):
""" As explained above, integrand is A/(B+iC)^2. Keep only the real part.
So use A(B^2 - C^2) / (B^2 + C^2)^2."""
c = np.cos(phi)
s = np.sin(phi)
mf = Ldinvsq / (kf*kf)
A = (1 + mf - 4*c*c) * s * s
B = (lamb + 2*mu) * kf**2 * (1+mf)**2
C = -2*beta*q*c*s
out = A*(B*B - C*C) / (B*B + C*C)**2
return out
# Now, let's go and calculate the dispersion relation lambda(q)
beta = 1
mu = 0.02
epsilon = 1
Ldinvsq = 1
kf = 1
q1 = np.logspace(-4, 0, 150)
q2 = np.linspace(1.02, 2.2, 150)
q = np.concatenate([q1, q2])
lamb = np.zeros_like(q)
for j in range(len(q)):
lamb[j] = lambda_of_params(beta, mu, epsilon, Ldinvsq, kf, q[j])
fig = plt.figure()
plt.plot(q, lamb)
plt.xlabel(r'$q$')
plt.ylabel(r'$\lambda$ (Asymptotic WKE)')
```
### CE2-GO
The only difference for the general dispersion relation in CE2-GO as compared to the asymptotic WKE is the presence of a $(1 - q^2 / \ol{k}^2)$ term. For the same isotropic thin-ring forcing, the dispersion relation reduces to the same result except that in CE2-GO there is an additional factor of
\begin{equation}
1 - \frac{q^2}{k_f^2 (1+m_f)}
\end{equation}
in front of the polar integral.
```python
def CE2GO_lambda_of_params(beta, mu, epsilon, Ldinvsq, kf, q):
"""Given parameters, calculate the eigenvalue lambda."""
funres = lambda lamb: CE2GO_disp_relation_residual(beta, mu, epsilon, Ldinvsq, kf, q, lamb)
domainmin = -2*mu
domainmax = 100
lamb = scipy.optimize.brentq(funres, domainmin, domainmax) # 1D root finder
return lamb
def CE2GO_disp_relation_residual(beta, mu, epsilon, Ldinvsq, kf, q, lamb):
"""Compute the residual for the dispersion relation. lamb must be real."""
mf = Ldinvsq / (kf*kf)
lhs = lamb + mu
rhs = q*q*epsilon/mu * kf**4 * (lamb + 2*mu) * (1+mf) * (1-q*q/(kf*kf*(1+mf))) * CE2GO_polarint(beta, mu, epsilon, Ldinvsq, kf, q, lamb)
res = rhs - lhs
return res
def CE2GO_polarint(beta, mu, epsilon, Ldinvsq, kf, q, lamb):
"""Carry out the polar integral. Same as for asymptotic WKE"""
fun = lambda phi: CE2GO_polarint_integrand(beta, mu, epsilon, Ldinvsq, kf, q, lamb, phi)
out, err = scipy.integrate.quad(fun, 0, 2*np.pi, epsabs=2e-6, epsrel=2e-6)
out = out / (2*np.pi)
return out
def CE2GO_polarint_integrand(beta, mu, epsilon, Ldinvsq, kf, q, lamb, phi):
""" As explained above, integrand is A/(B+iC)^2. Keep only the real part.
So use A(B^2 - C^2) / (B^2 + C^2)^2."""
c = np.cos(phi)
s = np.sin(phi)
mf = Ldinvsq / (kf*kf)
A = (1 + mf - 4*c*c) * s * s
B = (lamb + 2*mu) * kf**2 * (1+mf)**2
C = -2*beta*q*c*s
out = A*(B*B - C*C) / (B*B + C*C)**2
return out
# Now, let's go and calculate the dispersion relation lambda(q)
beta = 1
mu = 0.02
epsilon = 1
Ldinvsq = 1
kf = 1
q1 = np.logspace(-4, 0, 150)
q2 = np.linspace(1.02, 2.2, 150)
q = np.concatenate([q1, q2])
lamb = np.zeros_like(q)
for j in range(len(q)):
lamb[j] = CE2GO_lambda_of_params(beta, mu, epsilon, Ldinvsq, kf, q[j])
fig = plt.figure()
plt.plot(q, lamb)
plt.xlabel(r'$q$')
plt.ylabel(r'$\lambda$ (CE2-GO)')
```
### WKE
For the WKE, the dispersion relation is calculated similarly (details and code are omitted here). However, a brief note is warranted on how the dispersion relation in Figure 2 is obtained. Because the WKE is invalid for calculating the dispersion relation, it is not wholly surprising to see strange behavior. When one calculates the dispersion relation about an equilibrium that balances forcing and dissipation, one finds that for some values of $q$, the eigenvalue $\l$ becomes complex, even though the correct answer is that $\l$ is real. The source of this can be traced to a linearization of the $F/(\b - U'')$ term, which gives $F U_1'' / \b^2$.
The WKE dispersion relation in Figure 2 is obtained by neglecting the $F U_1'' / \b^2$ term, which is justifiable through an alternate procedure. There are two ways of obtaining the equilibrium incoherent spectrum that we linearize about. The first way to realize the equilibrium is the route we have been using, which is a balance between external forcing and linear dissipation. An alternate route is to remove forcing and dissipation, in which case *any* homogeneous spectrum is an equilibrium. This alternate procedure yields effectively the same dispersion relation, although within the WKE linearization it has the effect of removing the $FU_1''/\b^2$ term in the linearization. This procedure yields a real $\l$ and is the one shown in Figure 2.
The main point here is solely to demonstrate quantitatively that the WKE is not correct, which is to be expected because one had to assume that $U$ varied slowly in time in order to derive the WKE.
### CE2
The CE2 dispersion relation is obtained in a similar way (details and code omitted). Since there is no $k_y$ derivative, one does not use an integration by parts but rather a shift of the integration variable, after which the isotropic form for $W_H$ can be inserted. For details, see Srinivasan and Young (2012) or Parker (Ph.D. thesis, section 3.2.3).
# Running a statistical trial for a machine learning regression model
Imagine you have been given an imaging dataset and you have trained a [convolutional neural network](https://en.wikipedia.org/wiki/Convolutional_neural_network) to count the number of cells in the image for a medical-based task. On a held-out test set you observe an average error ±5 cells. In order to be considered reliable enough for clinical use, the algorithm needs to be further validated on a prospective trial. The purpose of a such a trial is to 1) ensure that the algorithm's performance is close to what was observed on a test set, and 2) increase the rigour of assessment through a pre-specified protocol. How should such a statistical trial be conducted and what statistical quantities should be estimated? In this post I outline a two-stage method to conduct a validation study for a regression model. This strategy amounts to answering two questions:
1. What is the upper-bound error the algorithm could have before it is was no longer of practical use (e.g. ±10 cells)?
2. Conditional on this upper bound, how many samples will be needed in a prospective trial to establish that the error is *at most* a certain size?
In a previous [post](http://www.erikdrysdale.com/threshold_and_power) I discussed how to calibrate a machine learning (ML) model for a binary classification task in the context of a statistical trial. The classical ML pipeline is to train and tune a model on a training and validation set, and then make predictions (only once) on a test set to get an "unbiased" estimate of a specific performance metric.[^1] A statistical trial represents a further iteration on the ML pipeline: collecting data prospectively to "confirm" that the model works as well as you expect. For binary classifiers there were two statistical procedures when preparing for a prospective trial: 1) using the test set to establish a conservative threshold for a target performance level (e.g. 90% sensitivity), 2) picking a slightly worse trial goal (e.g. 80% sensitivity) and calculating a sample size necessary based on this spread. The first procedure relied on the statistical properties of the threshold (which is a random variable) for a given fixed hypothesis. The second procedure could be trivially calculated using statistical tests for the difference in two binomial proportions.
The regression case is more complicated because the desired performance cannot be chosen in advance: the result is what it is. One possibility is to pre-specify a null hypothesis (e.g. R-squared greater than 10%), and only run prospective trials for algorithms that rejected this null. However, such an approach would create a statistical significance [filter](http://www.erikdrysdale.com/winners_curse) that would, conditional on success (i.e. rejection of null), cause the expected test set performance to be biased upwards. Such a bias would lead to algorithms which fail to generalize and underestimate the prospective sample size that will be needed.
I have developed a two-stage testing strategy that avoids the problem of statistical significance filters and relies of classical statistical hypothesis testing paradigms. This approach has several advantages:
1. Model performance will be unbiased
2. Classical statistical techniques can be used to obtain valid inference
3. The upper bound can be chosen with respect to power considerations or the application use case, or both
4. **The analysis applies to any ML algorithm and any performance metric** (conditional on some regularity conditions)
The rest of the post is structured as follows: section (1) provides the statistical framework for a two-stage testing strategy for estimating the mean of a Gaussian, section (2) shows how two common regression performance metrics can be used and approximated by a Gaussian distribution, and section (3) provides an example pipeline of how this framework can be used and statistical simulation results.
[^1]: If the test set is non-representative of the future data generating process then the results of this subsequent analysis will not hold. Dealing with dataset shift is a large topic area that is beyond the scope of this post.
## (1) Two-stage testing approach
Imagine you will have access to two independent datasets, and your goal is to establish an upper-bound on the "true" mean of a Gaussian distribution. In the first stage, a sample is drawn, and the distribution of the sample mean is used to estimate the null hypothesis. In the second stage, a new sample is drawn, and the null from stage 1 is used to calculate a test statistic and a p-value. Assume the data is IID and comes from a normal distribution with a known variance: $X_i \sim N(\mu, \sigma^2)$.
$$
\begin{align*}
&\text{Step 1: Establish null} \\
\hat{\mu}_0 &= \hat\mu_1 + k \sqrt{\sigma^2 / n_1} \\
H_0&: \mu \geq \hat{\mu}_0 \\
H_A&: \mu < \hat{\mu}_0 \\
&\text{Step 2: Test on second sample} \\
s_2 &= \frac{\hat\mu_2 - \hat\mu_0}{\sqrt{\sigma^2 / n_2}} \\
\text{Reject }H_0&: s_2 < \Phi_{s_2}^{-1}(\alpha|H_0) = t_\alpha
\end{align*}
$$
In the first stage, the null is estimated as the point estimate of the sample mean plus $k$ standard deviations above it. As the value of $k$ increases, the power of the second-stage test increases. However, the "information" about the true mean decreases since the bound becomes larger. Consider the distribution of $s_2$, which is the test statistic in the second stage of the procedure:
$$
\begin{align*}
s_2&= \frac{\hat\mu_2-[\hat\mu_1+k\sqrt{\sigma^2/n_1}]}{\sqrt{\sigma^2/n_2}} \\
&= z_2 - \sqrt{n_2/n_1}(z_1 + k) \\
z_i &= \frac{\hat\mu_i - \mu}{\sqrt{\sigma^2 / n_i}}
\end{align*}
$$
The unconditional distribution of the statistic can be seen to have a normal distribution:
$$
\begin{align*}
s_2&\sim N\big(-\sqrt{n_2/n_1}\cdot k, 1+n_2/n_1\big)
\end{align*}
$$
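A quick Monte-Carlo check of this unconditional distribution (a sketch with illustrative parameter values, independent of the simulations further below):
```python
# Monte-Carlo check of the unconditional distribution of s2 (illustrative values only)
import numpy as np

np.random.seed(1)
n1, n2, k, nsim = 100, 300, 1.0, 200000
z1, z2 = np.random.randn(nsim), np.random.randn(nsim)
s2 = z2 - np.sqrt(n2/n1)*(z1 + k)
print(s2.mean(), -np.sqrt(n2/n1)*k)   # empirical vs. theoretical mean
print(s2.var(), 1 + n2/n1)            # empirical vs. theoretical variance
```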
For a given $n_2$, as $n_1 \to \infty$ the probability of rejecting the null, $P(s_2 < t_\alpha)$, approaches $\alpha$ since $s_2 \to N(0,1)$. For a given $n_1$, as $n_2 \to \infty$ the probability of rejecting the null approaches $\Phi(k)$.[^2] However, in order to calculate the type-I and type-II errors of a testing procedure we need to know the distribution of $s_2$ conditional on the status of the null. Such a distribution is more complex:
$$
\begin{align}
s_2 | \{\text{$H_0$ is false}\} &\sim \frac{\hat\mu_2-\hat\mu_0}{\sqrt{\sigma^2/n_2}} \hspace{1mm} \Big|\hspace{1mm} \hat\mu_0 > \mu \nonumber \\
&\sim z_2 - r\cdot z_1^k | z_1^k > 0 \label{eq:dist_cond} \\
z_1^k &= z_1 + k, \hspace{3mm} r = \sqrt{n_2/n_1} \nonumber
\end{align}
$$
The conditional test statistic in \eqref{eq:dist_cond} is equivalent to a weighted sum of a standard normal and a truncated normal distribution. How can we characterize this distribution? The first step is to define a bivariate normal distribution as a function of $z_2\sim N(0,1)$ and $z_1^k \sim N(k,1)$.
$$
\begin{align}
X&= z_2 - r\cdot Y \nonumber \\
Y&= z_1^k \nonumber\\
\begin{pmatrix} X \\ Y \end{pmatrix} &\sim \text{MVN}\Bigg[ \begin{pmatrix} -rk \\ k \end{pmatrix}, \begin{pmatrix} 1+r^2 & -r \\ -r & 1 \end{pmatrix} \Bigg] \label{eq:dist_MVN} \\
\rho &= \frac{-r}{\sqrt{1+r^2}} \nonumber
\end{align}
$$
How does this help us? Luckily the distribution of a truncated bivariate normal distribution for one of the variables has been [characterized](https://link.springer.com/article/10.1007/BF02294652) already by *Arnold et. al*. After working out some of the math, the marginal density function of $X$ in \eqref{eq:dist_MVN}, conditional on $Y > 0$ or $Y<0$, can be written as follows:
$$
\begin{align*}
f_X(x) &= \frac{1}{\Phi(s\cdot k)} \cdot \frac{1}{\sqrt{1+r^2}} \cdot \phi\Bigg(\frac{x+rk}{\sqrt{1+r^2}} \Bigg) \cdot \Phi\Bigg(s\cdot \Bigg[ -r\cdot \frac{x+rk}{\sqrt{1+r^2}} + \sqrt{1+r^2}\cdot k \Bigg] \Bigg) \\
s &= \begin{cases}
+1 &\text{ if } \text{$H_0$ is false, } \hspace{2mm} (Y>0) \\
-1 &\text{ if } \text{$H_0$ is true, } \hspace{2mm} (Y<0)
\end{cases} \\
f_W(w) &= \frac{1}{\Phi(k)} \cdot \frac{1}{\sigma_W} \cdot \phi(w) \cdot \Phi(a + b\cdot w), \hspace{3mm} w=\frac{x+rk}{\sqrt{1+r^2}}
\end{align*}
$$
Next, we can use the result from Owen's classic [paper](https://www.tandfonline.com/doi/abs/10.1080/03610918008812164) which shows that the [integral](https://mathoverflow.net/questions/283928/closed-form-solution-for-an-integral-involving-the-p-d-f-and-c-d-f-of-a-n0-1) needed for calculating $F_W$ can be calculated from the CDF of a bivariate normal:
$$
\begin{align*}
F_W(w;s) &= \frac{1}{\Phi(s\cdot k)} \int_{-\infty}^w \frac{1}{\sigma_W} \cdot \phi(u) \cdot \Phi(a(s) + b(s)\cdot u) du \\
&= \frac{1}{\Phi(s\cdot k)} \text{BVN}\Big( X_1 \leq \frac{a(s)}{\sqrt{1+b(s)^2}}, X_2 \leq w, \rho=-b(s)/\sqrt{1+b(s)^2} \Big) \\
F_X(x;s) &= F_W\big((x+rk) / \sqrt{1+r^2};s\big) \label{eq:cdf_X}
\end{align*}
$$
The first code block below shows how to calculate the CDF of \eqref{eq:dist_cond} using \eqref{eq:cdf_X}. Simulations are run to demonstrate the accuracy of this approach.
[^2]: Throughout this post $\Phi$ and $\phi$ denote the standard normal CDF and PDF, respectively.
```python
from time import time
import numpy as np
from scipy.stats import norm
from scipy.stats import multivariate_normal as MVN
import pandas as pd
import plotnine
from plotnine import *
from scipy.optimize import minimize_scalar
class cond_dist():
def __init__(self, k, n1, n2, null=False):
self.s = +1
if null is True:
self.s = -1
self.k, self.n1, self.n2 = k, n1, n2
self.r = np.sqrt(n2 / n1)
def cdf_w(self, w):
a = np.sqrt(1+self.r**2) * self.k * self.s
b = -self.r * self.s
rho = -b/np.sqrt(1+b**2)
Sigma = np.array([[1,rho],[rho,1]])
dist_MVN = MVN(mean=np.repeat(0,2),cov=Sigma)
x1 = a / np.sqrt(1+b**2)
if isinstance(w, float):
X = [x1, w]
else:
X = np.c_[np.repeat(x1,len(w)), w]
pval = dist_MVN.cdf(X)
return pval
def cdf_x(self, x):
const = 1 / norm.cdf(self.s * self.k)
w = (x + self.r * self.k) / np.sqrt(1+self.r**2)
pval = self.cdf_w(w) * const
return pval
def quantile(self, p):
res = minimize_scalar(fun=lambda x: (self.cdf_x(x)-p)**2, method='brent').x
return res
seed = 1234
n1 = 100
sig2 = 2
mu = 3
nsim = 5000
n2_seq = [100, 300, 500]
k_seq = list(np.arange(0,2.5,0.5))
np.random.seed(seed)
holder = []
for k in k_seq:
for n2 in n2_seq:
c = k * np.sqrt(sig2/n1)
# Draw samples for two-stages
x1 = mu + np.sqrt(sig2)*np.random.randn(n1, nsim)
x2 = mu + np.sqrt(sig2)*np.random.randn(n2, nsim)
xbar1, xbar2 = x1.mean(0), x2.mean(0)
null_mu = xbar1 + c
s2 = (xbar2 - null_mu)/np.sqrt(sig2/n2)
# Calculate the p-values
pval_h0false = cond_dist(k=k, n1=n1, n2=n2, null=False).cdf_x(s2)
pval_h0true = cond_dist(k=k, n1=n1, n2=n2, null=True).cdf_x(s2)
pval_uncond = norm(loc=-np.sqrt(n2/n1)*k,scale=np.sqrt(1+n2/n1)).cdf(s2)
tmp = pd.DataFrame({'s2':s2,'mu0':null_mu,'n2':n2,'k':k,
'h0_false':pval_h0false, 'h0_true':pval_h0true,'pval_uncond':pval_uncond})
holder.append(tmp)
del tmp
df_res = pd.concat(holder).assign(null=lambda x: np.where(x.mu0 > mu, False, True))
cn_gg = ['null','n2','k']
df_res = df_res.sort_values(cn_gg+['s2']).reset_index(None,True)
df_res = df_res.assign(idx=df_res.groupby(cn_gg).cumcount())
df_res.idx = df_res.groupby(cn_gg).apply(lambda x: x.idx/x.idx.max()).values
# Compare the conditional distribution
df_res = df_res.assign(pval_cond=lambda x: np.where(x.null==False,x.h0_false,x.h0_true))
df_res_long = df_res.melt(cn_gg+['idx'],['pval_cond','pval_uncond'],'tt')
# Make a pp-plot
tmp = df_res_long.groupby(cn_gg+['tt']).sample(n=250, random_state=seed,replace=True)
plotnine.options.figure_size = (8, 7)
gg_pp = (ggplot(tmp, aes(x='value',y='idx',color='tt')) + theme_bw() +
geom_point() + labs(x='Theoretical percentile',y='Empirical percentile') +
ggtitle('Figure 1: P-P plot for test statistic') +
facet_grid('n2+null~k',labeller=label_both) +
scale_color_discrete(name='Distribution',labels=['Conditional','Unconditional']) +
geom_abline(slope=1,intercept=0,linetype='--',color='black',size=1))
gg_pp
```
Figure 1 shows that the CDF for the conditional distribution in \eqref{eq:cdf_X} accurately captures the distribution of the test statistic when the null is both false and true. When the null is false ($z_1^k > 0$), for larger values of $k$, the unconditional distribution of $s_2$ is a close approximation. This result makes sense: when the null hypothesis is set many standard deviations above the point estimate, the null will be false for almost all realizations, so the conditioning event excludes very few realizations.
In classical statistics we pick a critical value such that, when the null is true, the rejection event happens at most $\alpha$ percent of the time. We can use \eqref{eq:cdf_X} to find the $\alpha$-quantile of the distribution when the null is true so that we reject it at most $\alpha$ percent of the time:
$$
\begin{align}
F^{-1}_W(\alpha;-1) &= \sup_w: \{ F_W(w;-1)\leq \alpha \} \label{eq:quantile} \\
&= t_\alpha \nonumber
\end{align}
$$
```python
np.random.seed(seed)
# Calculate power for a range of n2's/k's
alpha = 0.05
n2_seq = np.arange(50,1001, 50)
k_seq = np.arange(0.0, 2.51, 0.50)
holder = []
for k in k_seq:
for n2 in n2_seq:
dd_true = cond_dist(k=k, n1=n1, n2=n2, null=True)
dd_false = cond_dist(k=k, n1=n1, n2=n2, null=False)
crit = dd_true.quantile(alpha)
power_theory = dd_false.cdf_x(crit)
# --- simulation --- #
c = k * np.sqrt(sig2/n1)
# Draw samples for two-stages
x1 = mu + np.sqrt(sig2)*np.random.randn(n1, nsim)
x2 = mu + np.sqrt(sig2)*np.random.randn(n2, nsim)
xbar1, xbar2 = x1.mean(0), x2.mean(0)
null_mu = xbar1 + c
s2 = (xbar2 - null_mu)/np.sqrt(sig2/n2)
power_emp = np.mean(s2[null_mu > mu] < crit)
type1_emp = np.mean(s2[null_mu < mu] < crit)
# ------------------ #
tmp = pd.Series({'k':k,'n2':n2,'Critical-Value':crit,'Power':power_theory,
'emp_power':power_emp, 'emp_type1':type1_emp})
holder.append(tmp)
del tmp
df_power = pd.concat(holder,1).T
df_power_long = df_power.melt(['k','n2'],['Critical-Value','Power'],'measure')
df_power_long.measure = pd.Categorical(df_power_long.measure,['Power','Critical-Value'])
plotnine.options.figure_size = (8, 4)
gg_power = (ggplot(df_power_long,aes(x='n2',y='value',color='k',group='k')) + theme_bw() +
geom_line() + labs(x='n2',y='Value') +
facet_wrap('~measure',labeller=label_both,scales='free_y') +
ggtitle('Figure 1B: Power by second-stage sample size and k') +
theme(subplots_adjust={'wspace': 0.20}) +
scale_x_continuous(limits=[0,1001]))
gg_power
```
Figure 1B reveals that as the second-stage sample size ($n_2$) or the value of $k$ grows, the power of the test increases. Higher values of $k$ ensure that the expected value of $\hat\mu_2 - \hat\mu_0$ becomes increasingly negative, raising the probability of rejection. A higher second-stage sample size decreases the variation of $\hat\mu_2$, ensuring that the average negative difference is more consistently around the expectation, once again increasing the probability of rejection.
```python
df_emp_long = df_power.melt(['k','n2'],['emp_power','emp_type1'],'tt')
df_emp_long.tt = df_emp_long.tt.map({'emp_power':'Empirical Power','emp_type1':'Empirical Type-I'})
tmp = pd.DataFrame({'tt':'Empirical Type-I', 'vv':0.05},index=[0])
plotnine.options.figure_size = (8, 4)
gg_emp = (ggplot(df_emp_long,aes(x='n2',y='value',color='k',group='k')) + theme_bw() +
geom_line() + labs(x='n2',y='Value') + facet_wrap('~tt',scales='free_y') +
ggtitle('Figure 1C: Empirical results match theory') +
theme(subplots_adjust={'wspace': 0.15}) +
scale_x_continuous(limits=[0,1001]) +
geom_hline(aes(yintercept='vv'),data=tmp,inherit_aes=False,linetype='--'))
gg_emp
```
Figure 1C shows that the empirical power curves line up with the theoretical expectation, and that the type-I error rates average to the expected level: 5%. Note that the empirical type-I error rates are not exactly 5% by random chance alone. For a sufficiently large number of simulation draws, the estimates will converge to the 5% line.
## (2) Regression statistic inference example
This section will show how to apply the principle of two-stage testing to example regression performance metrics: mean absolute error (MAE) and mean square error (MSE). In addition to being common metrics, these statistics also have known distributional properties when a linear regression model is used with Gaussian data. Hence, the statistical simulations can be benchmarked against a ground truth. However, in practice any regression statistic whose density function is reasonably smooth, and any regression model can be used. To repeat, the simple linear model and choice of statistics is only for convenience and does not signify a loss of generality to any other regression instance. Formally we are interested in the risk of the MAE & MSE loss functions:
$$
\begin{align*}
R_{MAE}(\theta) = E_{y\sim g(x)}[\text{MAE}(y,f_\theta(x))] \\
R_{MSE}(\theta) = E_{y\sim g(x)}[\text{MSE}(y,f_\theta(x))]
\end{align*}
$$
Where $y$ and $x$ are the response and the feature set, respectively, and $f_\theta(x)$ is a linear regression model indexed by its coefficient vector: $\theta$. Assume that the joint distribution is Gaussian:
$$
\begin{align*}
y &= x^T \theta^0 + u \\
u_i&\sim N(0,\sigma^2_u), \hspace{3mm} x \sim \text{MVN}(0,I) \\
e &= y - f_\theta(x) \\
&= x^T(\theta^0 - \theta) + u \\
&\sim N(0, \sigma^2_u + \| \theta^0 - \theta \|^2_2 )
\end{align*}
$$
The errors of such a model have a Gaussian distribution, with a variance equal to the irreducible error ($\sigma^2_u$) plus the squared L2-norm of the coefficient error: $\sum_j (\theta_j^0 - \theta_j)^2$. The risk for the MSE or MAE can be easily calculated since the error, $e$, has a known distribution:
$$
\begin{align}
R_{MSE}(\theta) &= E(e^2) = \sigma^2_u + \| \theta^0 - \theta \|^2_2 \label{eq:risk_mse} \\
R_{MAE}(\theta) &= E( |e| ) = \sqrt{\sigma^2_u + \| \theta^0 - \theta \|^2_2}\cdot\sqrt{2/\pi} \label{eq:risk_mae}
\end{align}
$$
Where the risk for the MAE comes from the [half-normal](https://en.wikipedia.org/wiki/Half-normal_distribution) distribution. The empirical estimate of the MAE and MSE is simply their sample average:
$$
\begin{align*}
\hat{\text{MSE}}(\theta) &= n^{-1} \sum_{i=1}^n [y_i - f_\theta(x_i)]^2 \\
\hat{\text{MAE}}(\theta) &= n^{-1} \sum_{i=1}^n |y_i - f_\theta(x_i)|
\end{align*}
$$
On a test set with a sufficiently large $n$, $\hat{\text{MSE}}(\theta) \to R_{MSE}(\theta)$. However, for a finite sample it is clear that $\hat{\text{MSE}}(\theta)$ is a random variable whose first moment will be centered around the risk.
```python
from sklearn.linear_model import LinearRegression
from arch.bootstrap import IIDBootstrap
from sklearn.metrics import mean_squared_error as MSE
from sklearn.metrics import mean_absolute_error as MAE
def dgp_yX(n,p,t0=1,sig2=1,theta0=None):
X = np.random.randn(n,p)
if theta0 is None:
theta0 = np.repeat(t0,p) * np.sign(np.random.rand(p)-0.5)
eta = X.dot(theta0)
error = np.sqrt(sig2)*np.random.randn(n)
y = eta + error
return y, X, theta0
np.random.seed(seed)
n, p, t0 = 100, 20, 0.5
sig2 = t0**2*p
nsim = 250
holder = []
for ii in range(nsim):
y, X, theta0 = dgp_yX(n, p, t0=t0, sig2=sig2)
mdl = LinearRegression(fit_intercept=True).fit(y=y,X=X)
l2_error = np.sum((theta0 - mdl.coef_)**2)
ytest, Xtest, _ = dgp_yX(100*n, p, theta0=theta0, sig2=sig2)
eta_test = mdl.predict(Xtest)
# Calculate theoretical R(MSE), R(MAE)
risk_mse, risk_mae = sig2 + l2_error, np.sqrt((sig2 + l2_error)*2/np.pi)
hat_mse, hat_mae = MSE(ytest, eta_test), MAE(ytest, eta_test)
tmp = pd.DataFrame({'risk_mse':risk_mse, 'hat_mse':hat_mse, 'risk_mae':risk_mae, 'hat_mae':hat_mae}, index=[ii])
holder.append(tmp)
del tmp
df_risk = pd.concat(holder).rename_axis('idx').reset_index().melt('idx',None,'tmp')
df_risk = df_risk.assign(tt=lambda x: x.tmp.str.split('_',2,True).iloc[:,0],
metric=lambda x: x.tmp.str.split('_',2,True).iloc[:,1]).drop(columns=['tmp'])
df_risk = df_risk.pivot_table('value',['idx','metric'],'tt').reset_index()
plotnine.options.figure_size = (8, 4)
gg_risk = (ggplot(df_risk,aes(x='hat',y='risk',color='metric')) + theme_bw() +
geom_point() + labs(x='Empirical risk',y='Theoretical risk') +
facet_wrap('~metric',labeller=labeller(metric={'mae':'MAE','mse':'MSE'}),scales='free') +
theme(subplots_adjust={'wspace': 0.15}) + guides(color=False) +
ggtitle('Figure 2A: Empirical and theoretical risk estimates') +
geom_abline(slope=1,intercept=0,linetype='--'))
gg_risk
```
Figure 2A confirms that the empirical risk estimates are closely aligned with their theoretical counterparts. Once again, with a sufficient sample size, the scatter plot would show no variation outside the line going through the origin. In section (1), knowledge of the population standard deviation of the statistic ($\sigma$) was needed in order to calculate the test statistic ($s_2$). Because this quantity is unknown, the [bootstrap](https://en.wikipedia.org/wiki/Bootstrapping_(statistics)) can be used to estimate the variance of the performance metric of interest (e.g. MSE & MAE). If $\hat\sigma^2_{BS}$ is the empirical variance of the bootstrapped statistic, then the population variance can be estimated by multiplying it by the number of samples $n$ (equivalently, the standard deviation scales with $\sqrt{n}$).
In the simulation below, the accuracy of the bootstrap standard deviation will be compared to the true population standard deviation for the MSE. I am using the MSE rather than the MAE because the former can be characterized by a chi-square distribution:
$$
\begin{align*}
\frac{1}{\sigma^2_u + \|\theta_0 - \theta \|_2^2}\sum_{i=1}^n e_i^2 &\sim \chi^2_n \\
Var(\chi^2) &= 2n \\
Var(\hat{\text{MSE}}(\theta)) &= \frac{\sigma^2_{MSE}}{n} = \frac{2[\sigma_u^2+\|\theta_0 - \theta \|_2^2]^2}{n}
\end{align*}
$$
The quality of the bootstrapped variance can be compared to its true second moment (that is unknown in practice):
$$
\begin{align}
\sigma^2_{MSE} &\approx n\cdot \hat\sigma^2_{BS} \label{eq:sig2_bs}
\end{align}
$$
Figure 2B below shows that the bootstrapped variance is almost an unbiased estimator of the true variance.
```python
n_bs = 1000
nsim = 1000
np.random.seed(seed)
ntrain = 100
ntest = 1000
holder = np.zeros([nsim,2])
for ii in range(nsim):
y, X, theta0 = dgp_yX(ntrain, p, t0=t0, sig2=sig2)
mdl = LinearRegression(fit_intercept=True).fit(y=y,X=X)
l2_error = np.sum((theta0 - mdl.coef_)**2)
sig2e = l2_error + sig2
ytest, Xtest, _ = dgp_yX(ntest, p, theta0=theta0, sig2=sig2)
eta_test = mdl.predict(Xtest)
eta_boot = pd.Series(eta_test).sample(ntest*n_bs,replace=True)
y_boot = ytest[eta_boot.index].reshape([ntest,n_bs])
eta_boot = eta_boot.values.reshape([ntest,n_bs])
res_boot = y_boot - eta_boot
mse_boot = MSE(y_boot, eta_boot,multioutput='raw_values')
sig2_mse = mse_boot.var()*len(ytest)
sig2_gt = 2*sig2e**2
holder[ii] = [sig2_mse, sig2_gt]
dat_sig2 = pd.DataFrame(holder,columns=['hat','gt'])
dat_sig2 = np.sqrt(dat_sig2)
xx, xm = np.ceil(dat_sig2.max().max()), np.floor(dat_sig2.min().min())
plotnine.options.figure_size = (4.5, 4)
gg_sig2 = (ggplot(dat_sig2, aes(x='gt',y='hat')) + theme_bw() +
geom_point(alpha=0.5,size=1) +
labs(y='Estimated standard deviation',x='True standard deviation') +
ggtitle('Figure 2B: Bootstrap standard deviation as an approximation') +
geom_abline(slope=1,intercept=0,color='blue') +
scale_x_continuous(limits=[xm, xx]) + scale_y_continuous(limits=[xm, xx]))
gg_sig2
```
## (3) Applied example
At this point we are ready to run a simulation for the MSE & MAE by modifying the procedure outlined in section (1). Instead of estimating the true mean of the Gaussian, the risk of a linear regression model's MSE & MAE will be estimated. The pipeline is as follows:
1. Learn $f_\theta$ on an independent training dataset
2. Calculate MSE & MAE on an independent test set
3. Use the test set to obtain the bootstrap variance of the MSE or MAE: $\hat\sigma^2_{1,BS}$
4. Get an upper-estimate of performance for the null: $\hat{\text{MSE}}_0 = \hat{\text{MSE}}_1 + k \cdot \hat\sigma_{1,BS}$
5. Set the null hypothesis: $H_0: \text{MSE} \geq \hat{\text{MSE}}_0$
6. Find the sample size needed to obtain 80% power and its associated critical value $t_\alpha$ using \eqref{eq:quantile}
7. Estimate model performance on the prospective test set
8. Reject the null if $(\hat{\text{MSE}}_2-\hat{\text{MSE}}_0)/\hat\sigma_{2,BS} < t_\alpha$
In the code block below I am using a [studentized bootstrap](https://www.textbook.ds100.org/ch/18/hyp_studentized.html) to estimate the standard error on both the testing and prospective validation sets. A one-sided confidence interval (CI) of $k \cdot \hat\sigma_{BS}$ is an approximation on the CI at the $\Phi(k)$ level: $z_{\Phi(-k)}\cdot \sqrt{\sigma^2 / n}$. Since the estimate of $\hat\sigma_{BS}$ can be biased downwards for smaller sample sizes and skewed distributions, the studentized bootstrap can "correct" for this. Specifically, the bootstrapped statistic is mapped to a t-score:
$$
\begin{align*}
t^*_b = \frac{\hat{\text{MSE}}^*_b - \hat{\text{MSE}}}{\hat\sigma^*_b}
\end{align*}
$$
This approach requires re-bootstrapping a given bootstrapped sample and estimating its standard error $\hat\sigma_b^*$. Though this is computationally intensive, it helps give close to exact nominal coverage levels, and, as I show, can be easily vectorized with the `sample` attribute in `pandas` classes and using 3-D arrays in `numpy`. The upper bound of the interval is: $\hat{\text{MSE}} - q_{\alpha} \hat\sigma_{BS}$, where $q_\alpha$ is the empirical quantile of the $t^*_b$ distribution. For example if $k=1.5$, but $q_{\alpha}=-1.6$, then the standard errors are "too small", so we can adjust by rescaling $\hat\sigma_{BS} \gets \hat\sigma_{BS}\cdot (-q_{\alpha}/k)$. The simulations below will target a sample size needed to obtain 80% power, a type-I error rate of 5%, a $k$ of 1.5, and use a training and test set size of 150. The `power_find` function estimates that 399 samples will be needed in the prospective dataset to reject the null at these rates.
```python
def boot_mse_mae(eta, resp, k, n_bs=1000, n_student=250):
nn = len(eta)
mse_hat, mae_hat = MSE(resp, eta), MAE(resp, eta)
eta_boot = pd.Series(eta).sample(nn*n_bs,replace=True)
y_boot = pd.DataFrame(resp[eta_boot.index].reshape([nn,n_bs]))
eta_boot = pd.DataFrame(eta_boot.values.reshape([nn,n_bs]))
mse_boot = MSE(y_boot, eta_boot, multioutput='raw_values')
mae_boot = MAE(y_boot, eta_boot, multioutput='raw_values')
# Run studentized bootstrap for adjustment
# Recall in the studentized bootstrap the 1-a quantile is used for the lowerbound and the a-quantile for the upper
tmp1 = eta_boot.sample(frac=n_student,replace=True,axis=0)
tmp2 = y_boot.iloc[tmp1.index]
tmp1 = tmp1.values.reshape([n_student]+list(eta_boot.shape))
tmp2 = tmp2.values.reshape([n_student]+list(y_boot.shape))
sig_student_mse = np.mean((tmp2 - tmp1)**2,axis=1).std(0)
sig_student_mae = np.mean(np.abs(tmp2 - tmp1),axis=1).std(0)
t_mse = (mse_boot - mse_hat)/sig_student_mse
t_mae = (mae_boot - mae_hat)/sig_student_mae
k_adjust_mse = -np.quantile(t_mse,norm.cdf(-k))
k_adjust_mae = -np.quantile(t_mae,norm.cdf(-k))
# Get standard error estimates
sig2_mse_n, sig2_mae_n = mse_boot.var(), mae_boot.var()
# Scale by studentized-t factor
sig2_mse_n, sig2_mae_n = sig2_mse_n*(k_adjust_mse/k)**2, sig2_mae_n*(k_adjust_mae/k)**2
return sig2_mse_n, sig2_mae_n, k_adjust_mse, k_adjust_mae
def power_est(n2, k, n1, alpha):
dist_true = cond_dist(k=k, n1=n1, n2=n2, null=True)
dist_false = cond_dist(k=k, n1=n1, n2=n2, null=False)
crit_value = dist_true.quantile(alpha)
power = dist_false.cdf_x(crit_value)
return power
def power_find(pp, k, n1, alpha):
n2 = minimize_scalar(fun=lambda x: (power_est(x, k, n1, alpha)-pp)**2,method='brent').x
return n2
ntrain, ntest = 150, 150
p, t0 = 20, 0.5
sig2 = t0**2*p / 2
nsim = 5000
k = 1.5
n_bs = 1000
n_student = 250
alpha = 0.05
power = 0.8
# --------------------------------- #
# (6) Find sample size needed for 80% power and critical value
nprosp = int(np.ceil(power_find(power, k=k, n1=ntest, alpha=alpha)))
# Critical value for prospective set
dist_prosp_true = cond_dist(k=k, n1=ntest, n2=nprosp, null=True)
crit_prosp = dist_prosp_true.quantile(alpha)
import os
if os.path.exists('df_sim.csv'):
print('Not running')
else:
np.random.seed(seed)
stime = time()
holder = []
for ii in range(nsim):
if (ii + 1) % 25 == 0:
nsec, nleft = time() - stime, nsim - (ii+1)
rate = (ii+1) / nsec
tleft = nleft / rate
            print('Iteration (%i of %i), ETA: %i seconds' % (ii+1, nsim, tleft))
# --------------------------------- #
# (1) Learn f_theta on training set
y, X, theta0 = dgp_yX(ntrain, p, t0=t0, sig2=sig2)
mdl = LinearRegression(fit_intercept=True).fit(y=y,X=X)
# --------------------------------- #
# (2) Calculate the MSE and MAE on the test set
ytest, Xtest, _ = dgp_yX(ntest, p, theta0=theta0, sig2=sig2)
eta_test = mdl.predict(Xtest)
mse_point, mae_point = MSE(ytest, eta_test), MAE(ytest, eta_test)
# --------------------------------- #
# (3) & (4) Calculate bootstrap variance and the population variance
sig2_mse_n, sig2_mae_n, k_mse, k_mae = boot_mse_mae(eta=eta_test, resp=ytest, k=k, n_bs=n_bs, n_student=n_student)
sig2_mse, sig2_mae = sig2_mse_n * ntest, sig2_mae_n * ntest # Population level estimate
# --------------------------------- #
# (5) Set upper bound on null
mse_null = mse_point + k*np.sqrt(sig2_mse_n)
mae_null = mae_point + k*np.sqrt(sig2_mae_n)
del mse_point, mae_point
# --------------------------------- #
# (7) Generate prospective test set
yprosp, Xprosp, _ = dgp_yX(nprosp, p, theta0=theta0, sig2=sig2)
# --------------------------------- #
# (8) Run inference using studentized bootstrap
eta_prosp = mdl.predict(Xprosp)
mse_prosp, mae_prosp = MSE(yprosp, eta_prosp), MAE(yprosp, eta_prosp)
sig2_mse_n_prosp, sig2_mae_n_prosp, k_mse_prosp, k_mae_prosp = boot_mse_mae(eta=eta_prosp, resp=yprosp,
k=k, n_bs=n_bs, n_student=n_student)
z_mse, z_mae = (mse_prosp - mse_null) / np.sqrt(sig2_mse_n_prosp), (mae_prosp - mae_null) / np.sqrt(sig2_mae_n_prosp)
#print('Z-score for MSE: %0.2f, MAE: %0.2f' % (z_mse, z_mae))
# These are statistical quantities known only to the "simulator"
l2_error = np.sum((theta0 - mdl.coef_)**2)
mse_gt = l2_error + sig2
mae_gt = np.sqrt(mse_gt * 2 / np.pi)
sig2_mse_gt = 2*(l2_error + sig2)**2
tmp = pd.DataFrame({'mse_gt':mse_gt, 'mse_null':mse_null, 'mae_gt':mae_gt, 'mae_null':mae_null,
'k_mse':k_mse, 'k_mae':k_mae, 'k_mse_prosp':k_mse_prosp, 'k_mae_prosp':k_mae_prosp,
'z_mse':z_mse, 'z_mae':z_mae,'crit':crit_prosp, 'nprosp':nprosp,
'sig2_prosp':sig2_mse_n_prosp*nprosp, 'sig2_test':sig2_mse_n*ntest,'sig2_gt':sig2_mse_gt},
index=[ii])
holder.append(tmp)
df_sim = pd.concat(holder).rename_axis('idx').reset_index()
df_sim.to_csv('df_sim.csv',index=False)
# Combine results
df_sim = pd.read_csv('df_sim.csv')
dat_coverage = df_sim.melt('idx',['mse_gt','mse_null','mae_gt','mae_null'],'tmp')
dat_coverage = dat_coverage.assign(metric=lambda x: x.tmp.str.split('_',2,True).iloc[:,0],
tt=lambda x: x.tmp.str.split('_',2,True).iloc[:,1]).drop(columns=['tmp'])
dat_coverage = dat_coverage.pivot_table('value',['idx','metric'],'tt').reset_index()
dat_coverage = dat_coverage.assign(null_is_false=lambda x: x['null'] > x['gt'])
dat_power = df_sim.melt(['idx','crit'],['z_mse','z_mae'],'metric')
dat_power = dat_power.assign(reject=lambda x: x.value < x.crit, metric=lambda x: x.metric.str.replace('z_',''))
dat_power = dat_power.merge(dat_coverage[['idx','metric','null_is_false']],'left',['idx','metric'])
plotnine.options.figure_size = (7, 5)
brks = list(np.concatenate((np.linspace(-13,crit_prosp,14),np.linspace(crit_prosp,6,9)[1:])))
dat_txt = dat_power.groupby(['metric','null_is_false']).reject.mean().reset_index()
dat_txt = dat_txt.assign(lbls=lambda x: (x.reject*100).map('{:,.1f}%'.format))
dat_txt = dat_txt.assign(x=lambda x: np.where(x.null_is_false==True,-8, -8),
y=lambda x: np.where(x.null_is_false==True,750, 90))
di_metric = {'mse':'MSE', 'mae':'MAE'}
di_null = {'False':'Null is True', 'True':'Null is False'}
gg_zdist = (ggplot(dat_power, aes(x='value',fill='reject')) + theme_bw() +
geom_histogram(alpha=0.5,color='black',breaks=brks) +
facet_grid('null_is_false~metric',scales='free',
labeller=labeller(metric=di_metric, null_is_false=di_null)) +
labs(x='Z-score', y='Count (out of 5000)') + guides(fill=False,color=False) +
ggtitle('Figure 3A: Distribution of second-stage z-scores\nVertical line shows critical value') +
geom_vline(xintercept=crit_prosp,color='black') +
geom_text(aes(x='x',y='y',label='lbls'),color="#00BFC4",data=dat_txt,size=10,inherit_aes=False))
gg_zdist
```
Figure 3A shows that the two-stage approach is extremely accurate! The simulated power frequency is between 80-81% and the type-I error between 3-5%, just as was expected.
```python
dat_sig2 = df_sim.melt(['idx','sig2_gt'],['sig2_prosp','sig2_test'],'msr').assign(msr=lambda x: x.msr.str.replace('sig2_',''))
di_msr = {'prosp':'Prospective', 'test':'Test', 'gt':'Ground-Truth'}
plotnine.options.figure_size = (6, 3)
gg_sig2 = (ggplot(dat_sig2,aes(x='np.sqrt(value)',y='np.sqrt(sig2_gt)',color='msr')) + theme_bw() +
geom_point(size=1,alpha=0.5) + geom_abline(slope=1,intercept=0) +
facet_wrap('~msr',labeller=labeller(msr=di_msr)) +
labs(x='Estimated standard deviation',y='Actual standard deviation') +
ggtitle('Figure 3B: Bootstrap variance quality for MSE') +
guides(color=False) + theme(panel_spacing_x=0.15))
print(gg_sig2)
dat_k = df_sim.melt('idx',['k_mse','k_mae','k_mse_prosp','k_mae_prosp'],'tmp').assign(tmp=lambda x: x.tmp.str.replace('k_',''))
dat_k = dat_k.assign(metric=lambda x: np.where(x.tmp.str.contains('mse'),'mse','mae'),
dset=lambda x: np.where(x.tmp.str.contains('prosp'),'prosp','test')).drop(columns='tmp')
dat_k = dat_k.assign(metric=lambda x: pd.Categorical(x.metric,['mse','mae']))
plotnine.options.figure_size = (7, 3)
gg_k = (ggplot(dat_k,aes(x='value',fill='dset')) + theme_bw() +
geom_density(alpha=0.5) + geom_vline(xintercept=k) +
facet_wrap('~metric',scales='free',labeller=labeller(metric={'mse':'MSE','mae':'MAE'})) +
scale_fill_discrete(name='Data',labels=['Prospective','Test']) +
theme(subplots_adjust={'wspace': 0.15},legend_position=(0.39,0.70)) +
labs(x='q_alpha',y='Density') + ggtitle('Figure 3C: Distribution of studentized quantiles'))
print(gg_k)
```
Figure 3B shows that the estimate of the population variance for the MSE is reasonably close to the one obtained from the studentized bootstrap. The variance tends to be overestimated slightly, and Figure 3C shows that this is the case because the (negative) $\alpha$ quantile of the studentized bootstrapped statistics tends to be larger than the target of $k=1.5$. In other words, the bootstrap standard error is usually adjusted upwards. Though not shown here, using a vanilla bootstrap approach will cause the type-II errors and proportion of true nulls to be slightly too large. In fact the empirical coverage of the null, after the studentized adjustment, is basically spot on. For a properly estimated variance, the null hypothesis should be true/false $\Phi(-k)$/$\Phi(k)$ percent of the time.
```python
print(np.round(dat_coverage.groupby(['metric']).null_is_false.mean().reset_index().assign(expectation=[norm.cdf(k),norm.cdf(k)]),3))
```
metric null_is_false expectation
0 mae 0.939 0.933
1 mse 0.935 0.933
## (4) Conclusion
This post has shown how to construct a prospective trial to validate any ML regression model for any performance metric of interest. There are a few caveats. First, the statistic of interest needs to have a bootstrapped distribution that is reasonably "smooth". Discontinuities or extreme skewness will limit the quality of the estimate. Second, the distribution of the data for the test set and prospective trial needs to be representative.
On a statistical level there are several, what I find to be, surprising conclusions that this analysis has shown:
1. The sum of a truncated normal and standard Gaussian can be re-written as a conditionally correlated bivariate normal whose density has a known closed-form solution, and whose CDF can be calculated by leveraging the CDF of a bivariate normal distribution \eqref{eq:cdf_X}.
2. The conditional distribution noted above is only a function of the ratio of the two sample sizes, rather than their absolute level, and $k$.
3. Sample-size calculations can be determined by specifying only three of the four terms: $n_2/n_1$, $k$, $\alpha$, or $1-\beta$ (the power).
What is remarkable about these three conclusions is that they are completely independent of the choice of ML model or performance metric. In other words, the sample size calculation used in section (3) would work just as well for a [Random Forest](https://en.wikipedia.org/wiki/Random_forest) predicting house prices and evaluated by its [Tweedie deviance](https://en.wikipedia.org/wiki/Tweedie_distribution#The_Tweedie_deviance) as it would for a [Gaussian Process Regression](https://en.wikipedia.org/wiki/Kriging) predicting patient volumes and evaluated using [R-squared](https://en.wikipedia.org/wiki/Coefficient_of_determination). This approach proves to be generalizable because the variance of the performance metric is the unknown quantity that is allowed to vary. In the binary classification case the variance of the performance metric was known because the variance of a binomial proportion is a function of its mean, whereas the threshold was the random variable.
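As a purely illustrative sketch of this point (not part of the original analysis), the same machinery can be pointed at a completely different model/metric pair — here a random forest scored by the median absolute error. The snippet reuses `dgp_yX`, `power_find`, and `cond_dist` from above and a plain (non-studentized) bootstrap for brevity. Note that the required prospective sample size is identical to the MSE/MAE case because it depends only on $n_2/n_1$, $k$, and $\alpha$; only the bootstrap standard error and the null bound change with the model and metric.
```python
# Sketch: two-stage set-up for an arbitrary model (random forest) and metric (median absolute error)
from sklearn.ensemble import RandomForestRegressor

def med_ae(y, yhat):
    """Median absolute error: a stand-in for 'any' performance metric."""
    return np.median(np.abs(y - yhat))

np.random.seed(seed)
ytr, Xtr, theta0 = dgp_yX(ntrain, p, t0=t0, sig2=sig2)
yte, Xte, _ = dgp_yX(ntest, p, theta0=theta0, sig2=sig2)
mdl_rf = RandomForestRegressor(n_estimators=100, random_state=seed).fit(Xtr, ytr)
eta_te = mdl_rf.predict(Xte)
stat_hat = med_ae(yte, eta_te)
# Plain bootstrap of the statistic on the test set
idx = np.random.randint(0, ntest, size=(1000, ntest))
stat_bs = np.array([med_ae(yte[i], eta_te[i]) for i in idx])
stat_null = stat_hat + k*stat_bs.std()                                # step 4: upper bound for the null
n2_rf = int(np.ceil(power_find(power, k=k, n1=ntest, alpha=alpha)))   # step 6: prospective sample size
crit_rf = cond_dist(k=k, n1=ntest, n2=n2_rf, null=True).quantile(alpha)
print('null bound: %0.3f, prospective n: %i, critical value: %0.2f' % (stat_null, n2_rf, crit_rf))
```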
The two-stage approach also means that the "posterior" distribution of outcomes on the prospective validation set can be defined by the following 2x2 table:
|  | $H_0$ is true | $H_0$ is false |
| ----------- | -----: | ------: |
| Reject $H_0$ | $\alpha\cdot\Phi(-k)$ | $(1-\beta)\cdot\Phi(k)$ |
| Do not reject $H_0$ | $(1-\alpha)\cdot\Phi(-k)$ | $\beta\cdot\Phi(k)$ |
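For the parameter values used in the simulations above ($\alpha=0.05$, power $=0.8$, $k=1.5$), these cells work out as follows (a quick illustrative computation; the local names mirror, but do not overwrite, the values used earlier):
```python
# Cell probabilities of the 2x2 outcome table for alpha=0.05, power=0.8, k=1.5
from scipy.stats import norm

alpha_, beta_, k_ = 0.05, 1 - 0.80, 1.5
print('Reject & H0 true:         %0.4f' % (alpha_*norm.cdf(-k_)))
print('Reject & H0 false:        %0.4f' % ((1-beta_)*norm.cdf(k_)))
print('Do not reject & H0 true:  %0.4f' % ((1-alpha_)*norm.cdf(-k_)))
print('Do not reject & H0 false: %0.4f' % (beta_*norm.cdf(k_)))
```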
This gives researchers significant freedom to control the uncertainty for each of these outcome categories. Lastly, it should be noted that the choice of using $k$-standard deviations above the point estimate is for mathematical tractability, and probably not for actually applied use. In almost all applied use-cases, the upper-bound will be picked by subject matter experts and that value of $k$ backed-out from this choice, rather than the other way around. Though the ordering of this decision is essential for real-world applications it is immaterial to the mathematics and hence the simpler form is described.
## <span style = "color:blue">Causal Analysis in Settings where the control group is orders of magnitude larger than the treatment group </span>
```python
%load_ext autoreload
%autoreload 2
```
```python
import numpy as np
import pandas as pd
#dowhy
import dowhy
from dowhy import CausalModel
import dowhy.datasets, dowhy.plotter
#econml-scikit-learn
import econml
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LassoCV
from sklearn.ensemble import GradientBoostingRegressor
```
#### Filter Out Unnecessary Warnings
```python
import logging
logging.getLogger("dowhy").setLevel(logging.WARNING)
import warnings
from sklearn.exceptions import DataConversionWarning
warnings.filterwarnings(action='ignore', category=DataConversionWarning)
```
```python
np.random.seed(seed = 0)
```
**For the clustering** we use two means $\mu_1, \mu_2$. We choose $w$ to be parallel to the line connecting the two means. <br> $\begin{align}w=\frac{\mu_2-\mu_1}{||\mu_2-\mu_1||}\end{align}$<br>
The bias $b$ can be adjusted such that the hyperplane passes through a point connecting the two means:<br>$
\begin{align}
b = \left(\frac{\mu_2-\mu_1}{||\mu_2-\mu_1||}\right)^{T}(\mu_1 + \lambda (\mu_2-\mu_1))
\end{align}$
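As a quick sanity check (a sketch, not part of the original notebook): with $\mu_1 = \mathbf{0}$, $\mu_2 = \mathbf{1}$ and $\lambda = 0.85$ (the split used in `generate_X` below), the two means fall on opposite sides of the hyperplane. Note that `generate_X` normalizes $w$ by the component sum rather than the norm, which defines the same hyperplane since $w$ and $b$ are rescaled together.
```python
# Sanity check: the two cluster means lie on opposite sides of the hyperplane w.x = b
import numpy as np

mu1, mu2, lam = np.zeros(5), np.ones(5), 0.85
w = (mu2 - mu1) / np.linalg.norm(mu2 - mu1)   # unit normal from the formula above
b = w @ (mu1 + lam*(mu2 - mu1))               # plane sits 85% of the way from mu1 to mu2
print(mu1 @ w - b, mu2 @ w - b)               # negative for mu1, positive for mu2
```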
```python
def generate_state(X, w = None, b = None, cluster = False):
"""
Separates Patients
with a linear hyperplane
    Patients above the hyperplane are sick
    and below are healthy
"""
dims = X.shape[1]
if not cluster:
w = np.random.rand(dims, 1)
w = w/np.sum(w)
b = 1
S = ((X@w-b) > 0).astype(int)
return S
def generate_X(dims:int = 5, N:int = 10**4, cluster = False):
"""
    Generates X from a multidimensional normal distribution
with the identity matrix for covariance
"""
if not cluster:
mu = np.zeros(dims)
loc = np.eye(dims)
X = np.random.multivariate_normal(mean = mu, cov = loc, size = N)
w = None
b = None
else:
mu1 = np.zeros(dims)
mu2 = np.ones(dims)
loc = np.eye(dims)
N1 = int(N*0.97)
N2 = N-N1
X1 = np.random.multivariate_normal(mean = mu1, cov = loc, size = N1)
X2 = np.random.multivariate_normal(mean = mu2, cov = loc, size = N2)
w = np.expand_dims((mu2-mu1)/np.sum(mu2-mu1), axis = 1)
l = 0.85
b = w.T@((1-l)*mu1[:,np.newaxis] + l*(mu2[:,np.newaxis]))
X = np.concatenate((X1, X2), axis = 0)
np.random.shuffle(X)
print("{} people from cluster 1 generated".format(N1))
print("{} people from cluster 2 generated".format(N2))
return X, w, b
def treatment_assignment(S, M):
"""
assign sick in treatment with p1
    assign healthy in treatment with p2
S:State
M:Number of people in treatment group
"""
#generate a uniform sample
uniform = np.random.uniform(0,1,len(S))
helper = np.zeros_like(uniform)
    #assign sick people to pseudo treatment
    #with probability 0.98
    helper[(uniform <= 0.98) & (S[:,0] == 1)] = 1
    #assign healthy people to pseudo treatment
    #with probability 0.05
    helper[(uniform >= 0.95) & (S[:,0] == 0)] = 1
T = np.zeros_like(helper)
index = np.where(helper == 1)[0]
choose = np.random.choice(index, size = M, replace = False)
#take a random sample of M from the pseudo treatment
# and assign it to real treatment
T[choose] = 1
T =np.expand_dims(T, 1).astype(int)
T = np.where(T==1, True, False)
return T
def make_experiment(dims, N, M, cluster = False):
X, w, b = generate_X(dims, N, cluster = cluster)
S = generate_state(X, w = w, b = b, cluster = cluster)
T = treatment_assignment(S,M)
print_stats(S,T)
Yf, Ycf, Y = create_outcome(S,T)
data = create_pandas(X, S, T, Yf, Ycf, Y)
return X, S, T, data
def create_pandas(X, S, T, Yf, Ycf, Y):
columns = ['f'+str(i) for i in range(X.shape[1])]
columns.extend(['S', 'Tr', 'Yf', 'Ycf', 'Y'])
data = pd.DataFrame(np.concatenate([X,S,T, Yf, Ycf, Y], axis = 1), columns = columns)
return data
def print_stats(S,T):
print("Population Size:", len(S))
print("Sick Population Size:", (S==1).sum())
print("Treatment group Size:", T.sum())
print("Sick People in Treatment group:", ((S==1)&(T==1)).sum())
def create_outcome(S,T):
Yf = np.ones_like(S)
Ycf = np.ones_like(S)
Ycf[(S==1)] = 0
Y = np.ones_like(S)
Y[(S==1)&(T==0)] = 0
return Yf, Ycf, Y
```
```python
dims = 5
X, S, T, data= make_experiment(dims, 10**6, 1000, cluster=True)
data.Tr = data.Tr.astype(bool)
```
970000 people from cluster 1 generated
30000 people from cluster 2 generated
Population Size: 1000000
Sick Population Size: 46787
Treatment group Size: 1000
Sick People in Treatment group: 465
```python
data.head()
```
|   | f0 | f1 | f2 | f3 | f4 | S | Tr | Yf | Ycf | Y |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.900766 | -0.040178 | -1.083424 | 0.121346 | -0.247905 | 0.0 | False | 1.0 | 1.0 | 1.0 |
| 1 | 0.178655 | -0.498405 | 1.631524 | 1.267724 | -0.245090 | 0.0 | False | 1.0 | 1.0 | 1.0 |
| 2 | -1.532263 | -1.673195 | -0.825910 | -2.163854 | -1.181748 | 0.0 | False | 1.0 | 1.0 | 1.0 |
| 3 | 0.598833 | -0.745410 | 0.673457 | -0.178242 | 0.286164 | 0.0 | False | 1.0 | 1.0 | 1.0 |
| 4 | 1.044861 | -1.624495 | -1.035050 | 0.136189 | 0.205054 | 0.0 | False | 1.0 | 1.0 | 1.0 |
```python
common_causes = ['f'+str(i) for i in range(dims)]
#common_causes.append('S')
data_dict = {'df':data,
'treatment_name': 'Tr',
'outcome_name': 'Y',
'common_causes_names': common_causes,
'time_val': None,
'instrument_names': None,
'dot_graph': None,
'gml_graph': None,
'ate': None}
```
```python
model = CausalModel(
data=data_dict['df'],
treatment=data_dict["treatment_name"],
outcome=data_dict["outcome_name"],
common_causes=data_dict["common_causes_names"],
)
```
WARNING:dowhy.causal_model:Causal Graph not provided. DoWhy will construct a graph based on data inputs.
```python
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
```
WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly.
```python
estimate = model.estimate_effect(identified_estimand,
method_name="backdoor.propensity_score_stratification",
target_units='ate',
test_significance = None,
evaluate_effect_strength = False,
confidence_intervals=False)
print(estimate)
print("Causal Estimate is " + str(estimate.value))
```
*** Causal Estimate ***
## Identified estimand
Estimand type: nonparametric-ate
### Estimand : 1
Estimand name: backdoor
Estimand expression:
d
─────(Expectation(Y|f3,f4,f0,f1,f2))
d[Tr]
Estimand assumption 1, Unconfoundedness: If U→{Tr} and U→Y then P(Y|Tr,f3,f4,f0,f1,f2,U) = P(Y|Tr,f3,f4,f0,f1,f2)
### Estimand : 2
Estimand name: iv
No such variable found!
## Realized estimand
b: Y~Tr+f3+f4+f0+f1+f2
Target units: ate
## Estimate
Mean value: 0.09541018707595271
Causal Estimate is 0.09541018707595271
```python
data.Yf.mean() - data.Ycf.mean()
```
```python
data[data.Tr==1].Y.mean() - data[data.Tr == 0].Y.mean()
```
```python
data[(data.S==1) & (data.Tr ==1)].Y.mean() - data[(data.S==1) & (data.Tr ==0)].Y.mean()
```
```python
data[(data.S==0) & (data.Tr ==1)].Y.mean() - data[(data.S==0) & (data.Tr ==0)].Y.mean()
```
```python
model.view_model()
```
```python
data.head()
```
|   | f0 | f1 | f2 | f3 | f4 | S | Tr | Yf | Ycf | Y | propensity_score | strata | dbar | d_y | dbar_y |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.900766 | -0.040178 | -1.083424 | 0.121346 | -0.247905 | 0.0 | False | 1.0 | 1.0 | 1.0 | 0.000556 | 21.0 | 1 | 0.0 | 1.0 |
| 1 | 0.178655 | -0.498405 | 1.631524 | 1.267724 | -0.245090 | 0.0 | False | 1.0 | 1.0 | 1.0 | 0.001522 | 42.0 | 1 | 0.0 | 1.0 |
| 2 | -1.532263 | -1.673195 | -0.825910 | -2.163854 | -1.181748 | 0.0 | False | 1.0 | 1.0 | 1.0 | 0.000049 | 0.0 | 1 | 0.0 | 1.0 |
| 3 | 0.598833 | -0.745410 | 0.673457 | -0.178242 | 0.286164 | 0.0 | False | 1.0 | 1.0 | 1.0 | 0.000833 | 31.0 | 1 | 0.0 | 1.0 |
| 4 | 1.044861 | -1.624495 | -1.035050 | 0.136189 | 0.205054 | 0.0 | False | 1.0 | 1.0 | 1.0 | 0.000429 | 15.0 | 1 | 0.0 | 1.0 |
## Using Econ ML
```python
dml_estimate = model.estimate_effect(identified_estimand,
                                     method_name="backdoor.econml.dml.DMLCateEstimator",
                                     method_params={
                                         'init_params': {'model_y': GradientBoostingRegressor(),
                                                         'model_t': GradientBoostingRegressor(),
                                                         'model_final': LassoCV(fit_intercept=False), },
                                         'fit_params': {}
                                     })
dml_estimate.params['cate_estimates'].mean()
```
```python
from sklearn.cluster import KMeans
```
```python
kmeans = KMeans(n_clusters=2)
prediction = kmeans.fit_predict(data[common_causes].values)
```
```python
cl1 = prediction == 1
data1 = data[cl1].reset_index(drop = True)
data2 = data[~cl1].reset_index(drop = True)
```
```python
data1.describe()
```
|   | f0 | f1 | f2 | f3 | f4 | S | Yf | Ycf | Y | propensity_score | strata | dbar | d_y | dbar_y |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| count | 531616.000000 | 531616.000000 | 531616.000000 | 531616.000000 | 531616.000000 | 531616.0 | 531616.0 | 531616.0 | 531616.0 | 531616.000000 | 531616.000000 | 531616.000000 | 531616.000000 | 531616.000000 |
| mean | -0.287588 | -0.348828 | -0.371957 | -0.316373 | -0.281822 | 0.0 | 1.0 | 1.0 | 1.0 | 0.000408 | 13.412177 | 0.999426 | 0.000574 | 0.999426 |
| std | 0.950336 | 0.926534 | 0.915637 | 0.939860 | 0.950877 | 0.0 | 0.0 | 0.0 | 0.0 | 0.000171 | 7.890180 | 0.023946 | 0.023946 | 0.023946 |
| min | -4.895454 | -4.852118 | -4.802920 | -4.820940 | -5.031764 | 0.0 | 1.0 | 1.0 | 1.0 | 0.000013 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| 25% | -0.927748 | -0.971112 | -0.986030 | -0.947566 | -0.922035 | 0.0 | 1.0 | 1.0 | 1.0 | 0.000272 | 7.000000 | 1.000000 | 0.000000 | 1.000000 |
| 50% | -0.284145 | -0.343608 | -0.365165 | -0.313446 | -0.279516 | 0.0 | 1.0 | 1.0 | 1.0 | 0.000401 | 13.000000 | 1.000000 | 0.000000 | 1.000000 |
| 75% | 0.355717 | 0.277973 | 0.249408 | 0.319344 | 0.361252 | 0.0 | 1.0 | 1.0 | 1.0 | 0.000539 | 20.000000 | 1.000000 | 0.000000 | 1.000000 |
| max | 4.065940 | 3.783802 | 3.907693 | 3.831790 | 4.116536 | 0.0 | 1.0 | 1.0 | 1.0 | 0.001065 | 36.000000 | 1.000000 | 1.000000 | 1.000000 |
```python
data1.Tr.sum(), ((data1.Tr == 1) & (data1.S==1)).sum()
```
(305, 0)
```python
data2.Tr.sum(), ((data2.Tr == 1) & (data2.S==1)).sum()
```
(695, 465)
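As a quick orientation before modeling each cluster separately, we can compute a naive (unadjusted) difference in mean outcome between treated and untreated rows within each cluster. This is only a sketch, mirroring the simple difference-in-means computed on the full data earlier and assuming the same column conventions (`Tr`, `Y`); it is not an adjusted causal estimate.
```python
# Naive treated-vs-untreated difference in mean outcome within each cluster.
for name, d in [("cluster data1", data1), ("cluster data2", data2)]:
    naive_diff = d[d.Tr == 1].Y.mean() - d[d.Tr == 0].Y.mean()
    print(name, naive_diff)
```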
#### Modeling of 2 clusters
```python
data_dict1 = {'df':data1,
'treatment_name': 'Tr',
'outcome_name': 'Y',
'common_causes_names': common_causes,
'time_val': None,
'instrument_names': None,
'dot_graph': None,
'gml_graph': None,
'ate': None}
data_dict2 = {'df':data2,
'treatment_name': 'Tr',
'outcome_name': 'Y',
'common_causes_names': common_causes,
'time_val': None,
'instrument_names': None,
'dot_graph': None,
'gml_graph': None,
'ate': None}
model1 = CausalModel(
data=data_dict1['df'],
treatment=data_dict1["treatment_name"],
outcome=data_dict1["outcome_name"],
common_causes=data_dict1["common_causes_names"],
)
model2 = CausalModel(
data=data_dict2['df'],
treatment=data_dict2["treatment_name"],
outcome=data_dict2["outcome_name"],
common_causes=data_dict2["common_causes_names"],
)
identified_estimand1 = model1.identify_effect(proceed_when_unidentifiable=True)
identified_estimand2 = model2.identify_effect(proceed_when_unidentifiable=True)
```
WARNING:dowhy.causal_model:Causal Graph not provided. DoWhy will construct a graph based on data inputs.
WARNING:dowhy.causal_model:Causal Graph not provided. DoWhy will construct a graph based on data inputs.
WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly.
WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly.
```python
estimate1 = model1.estimate_effect(identified_estimand1,
method_name="backdoor.propensity_score_stratification",
target_units='ate',
test_significance = None,
evaluate_effect_strength = False,
confidence_intervals=False)
print(estimate1)
print("Causal Estimate is " + str(estimate1.value))
```
*** Causal Estimate ***
## Identified estimand
Estimand type: nonparametric-ate
### Estimand : 1
Estimand name: backdoor
Estimand expression:
d
─────(Expectation(Y|f3,f4,f0,f1,f2))
d[Tr]
Estimand assumption 1, Unconfoundedness: If U→{Tr} and U→Y then P(Y|Tr,f3,f4,f0,f1,f2,U) = P(Y|Tr,f3,f4,f0,f1,f2)
### Estimand : 2
Estimand name: iv
No such variable found!
## Realized estimand
b: Y~Tr+f3+f4+f0+f1+f2
Target units: ate
## Estimate
Mean value: 0.0
Causal Estimate is 0.0
```python
estimate2 = model2.estimate_effect(identified_estimand2,
method_name="backdoor.propensity_score_stratification",
target_units='ate',
test_significance = None,
evaluate_effect_strength = False,
confidence_intervals=False)
print(estimate2)
print("Causal Estimate is " + str(estimate2.value))
```
*** Causal Estimate ***
## Identified estimand
Estimand type: nonparametric-ate
### Estimand : 1
Estimand name: backdoor
Estimand expression:
d
─────(Expectation(Y|f3,f4,f0,f1,f2))
d[Tr]
Estimand assumption 1, Unconfoundedness: If U→{Tr} and U→Y then P(Y|Tr,f3,f4,f0,f1,f2,U) = P(Y|Tr,f3,f4,f0,f1,f2)
### Estimand : 2
Estimand name: iv
No such variable found!
## Realized estimand
b: Y~Tr+f3+f4+f0+f1+f2
Target units: ate
## Estimate
Mean value: 0.6641850953003251
Causal Estimate is 0.6641850953003251
```python
```
| 1493a4c982bf9a5ec65ac80d160bfbe10145d4e0 | 63,104 | ipynb | Jupyter Notebook | notebooks/Causality.ipynb | jorje1908/causality | 926de35abef1b1a7e300c5399bb2dd6ec313d0c1 | [
"MIT"
]
| null | null | null | notebooks/Causality.ipynb | jorje1908/causality | 926de35abef1b1a7e300c5399bb2dd6ec313d0c1 | [
"MIT"
]
| null | null | null | notebooks/Causality.ipynb | jorje1908/causality | 926de35abef1b1a7e300c5399bb2dd6ec313d0c1 | [
"MIT"
]
| null | null | null | 49.532182 | 19,232 | 0.620769 | true | 6,869 | Qwen/Qwen-72B | 1. YES
2. YES | 0.760651 | 0.715424 | 0.544188 | __label__eng_Latn | 0.308996 | 0.10266 |
#### _Speech Processing Labs 2020: Signals: Module 2_
```python
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import cmath
from math import floor
from matplotlib.animation import FuncAnimation
from IPython.display import HTML
plt.style.use('ggplot')
from dspMisc import *
```
# 2 Filtering the Source: Finite Impulse Response Filters
### Learning Outcomes
* Be able to describe what an FIR filter is
* Be able to explain what the impulse response of an FIR filter is
* See how an FIR filter can be used as a lowpass frequency filter.
* See how changing the coefficients of an FIR filter can change its frequency response
### Need to know
* Topic Videos: Spectral Envelope, Filter, Impulse Train
* [Interpreting the DFT](./sp-m1-5-interpreting-the-dft.ipynb)
* [Building the source: impulse trains](sp-m2-1-impulse-as-source.ipynb)
<div class="alert alert-warning">
<strong>Equation alert</strong>: If you're viewing this on github, please note that the equation rendering is not always perfect. You should view the notebooks through a jupyter notebook server for an accurate view.
</div>
## 2.0 Filters
We've seen in the past notebooks that sometimes our input signal isn't exactly what we want. There is a vast literature in signal processing about designing filters to transform one signal into another. In speech processing, our signals often include some sort of noise that we'd like to get rid of. However, we can also use filters to shape a simple input, like an impulse train, into something much more complicated, like a speech waveform.
In class you've seen two types of filters:
* Finite Impulse Response (FIR)
* Infinite Impulse Response (IIR)
Both perform a transform on an input sequence $x[n]$ to give us some desired output sequence $y[n]$. The difference between the two types of filters is basically whether we only use the inputs to derive each output $y[n]$ (FIR), or whether we also use previous outputs (IIR).
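For reference, the general form of an IIR filter (not used further in this notebook, and with sign conventions that vary between textbooks) also feeds previous *outputs* back into the sum:
$$ y[n] = \sum_{k=0}^{K-1} b[k] x[n-k] - \sum_{j=1}^{M} a[j] y[n-j] $$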
In the following we'll illustrate some of the properties of FIR filters.
## 2.1 Finite Impulse Response Filters
Finite Impulse Response (FIR) filters have the following form:
$$
\begin{align}
y[n] &= b[0]x[n] + b[1]x[n-1] + \dots + b[K-1]x[n-(K-1)] \\
&= \sum_{k=0}^{K-1} b[k] x[n-k]
\end{align}
$$
Here, we have:
* an input sequence $x[n]$ of length $N$
* a set of $K$ filter coefficients.
We can read the equation as saying that the $n$th output of the filter, $y[n]$, is a weighted sum of the previous $K$ inputs $x[n],...,x[n-(K-1)]$.
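As a quick illustration of this sum (a sketch, assuming `b` and `x` are NumPy arrays, that `n >= K-1`, and with made-up values), a single output sample is just a dot product between the coefficients and the most recent `K` inputs taken in reverse time order:
```python
import numpy as np

b = np.array([1.1, 1.2, 1.5, 1.2, 1.1])   # K = 5 hypothetical filter coefficients
x = np.arange(10, dtype=float)            # some hypothetical input sequence
n, K = 7, len(b)

# y[n] = sum_k b[k] * x[n-k]  ==  dot(b, [x[n], x[n-1], ..., x[n-K+1]])
y_n = np.dot(b, x[n-K+1:n+1][::-1])
print(y_n)
```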
### Example
Let's plot the $b[k]\cdot x[n-k]$ terms where $x$ is a sinusoid of 4 Hz and the filter coefficients are `b=[1.1, 1.2, 1.5, 1.2, 1.1]`
```python
## Plotting a filter input window
## Set the number of samples N, sampling rate f_s
N=64
f_s = 64
t_s = 1/f_s
print("sampling rate: f_s = %f\nsampling time: t_s: %f" % (f_s, t_s))
x, time_steps = gen_sinusoid(frequency=4, phase=0, amplitude=1, sample_rate=f_s, seq_length=N, gen_function=np.cos)
## Plot the sinusoid we've just created:
fig, timedom = plt.subplots(figsize=(16, 4))
timedom.plot(time_steps, x, 'o', color='grey')
timedom.set_xlabel('Time (s)')
timedom.set_ylabel('Amplitude')
## Filter coefficients (you could try changing these)
b = [1.1, 1.2, 1.5, 1.2, 1.1]
## K is the number of filter coefficients
K=len(b)
## Let's see what happens when n=19 (or try changing this variable!)
n=19
print("filter using b at n=%d, K=%d\n" % (n, K))
## Plot the values that are input to the filter
## +1's because python indexing/slicing doesn't include the end point
timedom.plot(time_steps[n+1-K:n+1], x[n+1-K:n+1], 'o', color='red')
## Calculate the b[k]*x[n-k] terms and add them to a list
filtered_n = []
for k in range(len(b)):
## print out the variables here
print("%d:, b[%d]=%f, x[%d-%d]=%f, b[%d]*x[%d-%d]=%f" % (n-k, k, b[k], n, k, x[n-k], k, n, k, b[k]*x[n-k]))
filtered_n.append(b[k]*x[n-k])
## reverse the list so that they're in time order
filtered_n.reverse()
## Plot the b[k]*x[n-k] terms
timedom.plot(time_steps[n+1-K:n+1], filtered_n, 'o', color='blue')
## Calculate the filter output (add up the product terms)
print("\ny[%d] = %f" % (n, sum(filtered_n)))
```
In the plot above, you should see:
* A cosine wave with frequency 4 Hz in grey.
* The inputs to the filter, x[19],...,x[15], in red.
* 5 input values for 5 filter coefficients
* The product $b[k]*x[n-k]$ for $n=19$, and $k=0,...,4$
So with `b = [1.1, 1.2, 1.5, 1.2, 1.1]` all the input values get scaled up, with the middle of the filter window getting the biggest relative increase.
When we add all the product terms together, we get y[19] = 4.87 - a lot bigger than any of the input values!
## 2.2 An FIR moving average filter
A useful special case of an FIR filter is where each of the filter coefficients is just $1/K$. In this case our FIR equation looks like this:
$$
\begin{align}
y[n] &= \sum_{k=0}^{K-1} \frac{1}{K} x[n-k] \\
&= \frac{1}{K} \sum_{k=0}^{K-1} x[n-k]
\end{align}
$$
This equation says that when we apply the filter, we step through the input. At each step, we output the average of the previous $K$ inputs. You might know this by another more intuitive name: a _moving average_. You might also have seen this as a method to 'smooth' an input.
Let's play around with this idea a bit and see how it relates to our notion of frequency response.
### Example:
Let's look at this 5-point moving average filter. In this case all the filter coefficients $b[k] = 1/5$ for $k=0,..,K-1=4$
$$
\begin{align}
y[n] &= \frac{1}{5} \sum_{k=0}^4 x[n-k] \\
&= \frac{1}{5}x[n] + \frac{1}{5}x[n-1] + \frac{1}{5}x[n-2] + \frac{1}{5}x[n-3] + \frac{1}{5}x[n-4]
\end{align}
$$
Now, let's code this specific filter up and apply it to some sequences!
```python
# Apply a moving average filter of size K to input sequence x
def moving_average(x, K=5):
## There are quicker ways to do this in numpy, but let's just do it this way for now for transparency
## We know that we'll have as many outputs as inputs, so we can initialize y to all zeros
N = len(x)
y = np.zeros(N)
## Go through the input one step at a time
for n in range(N):
## Add up the last K inputs (including the current one)
for k in range(K):
## Exercise: why do we have to put this conditional in here?
if n-k >= 0:
y[n] = y[n] + x[n-k]
## Divide by the size of the input window to get an average
y[n] = (1/K) * y[n]
return y
```
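As a quick check (a sketch, assuming the `moving_average` function defined in the cell above is in scope), the same result can be obtained with NumPy's built-in convolution, since an unweighted moving average is just a convolution with a constant kernel of height 1/K:
```python
import numpy as np

x_test = np.random.randn(20)          # any test sequence will do
K = 5
y_loop = moving_average(x_test, K=K)
y_conv = np.convolve(x_test, np.ones(K) / K)[:len(x_test)]  # keep the first N samples

print(np.allclose(y_loop, y_conv))    # expected: True
```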
### Generate a 'noisy' sinusoid
Let's generate a compound sinusoid with one low frequency and one high(er) frequency component. We can take the higher frequency component as representing some sort of periodic noise in the signal.
```python
## Set the number of samples N, sampling rate f_s
N=64
f_s = 64
t_s = 1/f_s
print("sampling rate: f_s = %f\nsampling time: t_s: %f" % (f_s, t_s))
```
sampling rate: f_s = 64.000000
sampling time: t_s: 0.015625
```python
## make some sinusoids:
## Since the sample rate and sequence length is the same, the generated time steps will match for
## x1 and x2
x1, time_steps = gen_sinusoid(frequency=4, phase=0, amplitude=1, sample_rate=f_s, seq_length=N, gen_function=np.cos)
x2, time_steps = gen_sinusoid(frequency=24, phase=0, amplitude=1, sample_rate=f_s, seq_length=N, gen_function=np.cos)
# add them up!
x_noisy = x1 + x2
```
```python
## Plot the compound sinusoid we've just created:
fig, timedom = plt.subplots(figsize=(16, 4))
timedom.plot(time_steps, x_noisy, color='magenta')
timedom.set_xlabel('Time (s)')
timedom.set_ylabel('Amplitude')
```
You should be able to see that the 4Hz cosine wave is perturbed by a 24 Hz cosine. So, this could represent a situation where there is high frequency noise in our signal that we'd like to get rid of. Let's see if we can use our filter to smooth out this high frequency noise.
### Apply the moving average filter
```python
## Apply our moving average filter
K=5
y_mov = moving_average(x_noisy,K=K)
```
```python
## Plot the results
fig, timedom = plt.subplots(figsize=(16, 4))
timedom.set_xlabel('Time (s)')
timedom.set_ylabel('Amplitude')
## The input signal
timedom.plot(time_steps, x_noisy, color='magenta', label='Noisy input')
## The underlying 4Hz signal
timedom.plot(time_steps, x1, color='grey', label='low freq input component')
## After the moving average has been applied
timedom.plot(time_steps, y_mov, color='blue', label='filter output')
timedom.set_title('%d-point moving average filter applied to a noisy sinusoid' % K)
timedom.legend()
```
You should see:
* the original signal in magenta
* the low frequency cosine component in grey
* the output of the filter in blue
### Exercise:
* From the plot, does it appear that the moving average recovered the original 4Hz signal in terms of:
* frequency?
* amplitude?
* phase?
* Are there still high frequency components in the filter output?
Can you see some issues with applying this filter? What happens at the end points? Are they valid?
### Notes
```python
# recovery:
# frequency: yes
# amplitude: yes
# phase: no
# yes but they're very smoothed out
# issue: the end points don't start at 1 or 0
```
### Get the DFT of the filtered signal
We can apply the DFT to our output to check our observations from the time-domain output.
```python
## DFT of the original input
mags_x, phases_x = get_dft_mag_phase(x_noisy, N)
## DFT of the filter output
mags_y, phases_y = get_dft_mag_phase(y_mov, N)
```
```python
dft_freqs = get_dft_freqs_all(sample_rate=f_s, seq_len=N)
fig, mags = plt.subplots(figsize=(16, 4))
mags.set(xlim=(-1, N/2), ylim=(-1, N))
mags.scatter(dft_freqs, mags_x, color='magenta', label='noisy input')
mags.scatter(dft_freqs, mags_y, color='blue', label='filtered output')
mags.set_xlabel("Frequency (Hz)")
mags.set_ylabel("Magnitude")
mags.set_title("DFT Magnitude before and after moving average filter")
mags.legend()
#Let's not worry about phase right now, but feel free to uncomment and have a look!
#fig, phases = plt.subplots(figsize=(16, 4))
#phases.set(xlim=(-1, N/2), ylim=(-10, 10))
#phases.scatter(dft_freqs, phases_x, color='magenta', label='noisy input')
#phases.scatter(dft_freqs, phases_y, color='blue', label='filtered output')
#phases.set_xlabel("Frequency (Hz)")
#phases.set_ylabel("Phase (rad)")
#phases.set_title("DFT Phase before and after moving average filter")
#phases.legend()
```
### Exercise
* Based on the magnitude spectrum:
* Did the filter get rid of the 24Hz component?
* Do you see any signs of leakage?
* What happens if you change the frequency of the second sinusoid to something lower (e.g. 6Hz)?
### Notes
```python
# Yes, pretty well, almost completely
# no
# yes, there's leakage
```
## 2.3 FIR as convolution
An FIR Filter that takes $K$ previous elements of $x[n]$ as input has the following general form:
$$ y[n] = \sum_{k=0}^{K-1} b[k] x[n-k] $$
You might recognize this as a **convolution** of the two sequences $b$ and $x$ (i.e. $b * x$).
So, we can theoretically set our filter coefficients to whatever we want. Here's a function that generalizes our moving average filter to allow for this:
```python
def fir_filter(x, filter_coeffs):
N = len(x)
K = len(filter_coeffs)
y = np.zeros(N)
for n in range(N):
for k in range(K):
if n-k >= 0:
#print("y[%d]=%f, b[%d]=%f, x[%d]=%f" % (n, y[n], k, filter_coeffs[k], n-k, x[n-k]))
y[n] = y[n] + (filter_coeffs[k]*x[n-k])
#print("y[%d]=%f" % (n, y[n]))
return y
```
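If SciPy happens to be available in your environment (an assumption, since it is not imported anywhere else in this notebook), `scipy.signal.lfilter` computes exactly this kind of FIR filtering with zero initial conditions, which makes a handy cross-check for `fir_filter`:
```python
import numpy as np
from scipy.signal import lfilter

b = np.array([1/5, 1/5, 1/5, 1/5, 1/5])
x_test = np.random.randn(50)

y_ours = fir_filter(x_test, b)
y_scipy = lfilter(b, [1.0], x_test)   # denominator [1.0] means a pure FIR filter

print(np.allclose(y_ours, y_scipy))   # expected: True
```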
### Changing the filter coefficients
Let's try it out with different coefficient values, comparing our unweighted average `h_avg` filter with a weighted average `h_wavg` filter.
```python
## The 5-point moving average from before
h_avg = np.array([1/5, 1/5, 1/5, 1/5, 1/5])
y_avg = fir_filter(x_noisy, h_avg)
## A 5-point symmetrically weighted average
h_wavg = np.array([1/5, 1/3, 1, 1/3, 1/5])
y_wavg = fir_filter(x_noisy, h_wavg)
```
### Filter effects in the time domain
```python
## Plot the filter outputs
fig, timedom = plt.subplots(figsize=(16, 4))
## The original "noisy" input
timedom.plot(time_steps, x_noisy, color='magenta', label='input x_noisy')
timedom.scatter(time_steps, x_noisy, color='magenta')
## The 5-point moving average
timedom.plot(time_steps, y_avg, color='blue', label='unweighted average: y_avg')
timedom.scatter(time_steps, y_avg, color='blue')
## The 5-point weighted average
timedom.plot(time_steps, y_wavg, color='orange', label='weighted average: y_wavg')
timedom.scatter(time_steps, y_wavg, color='orange')
timedom.legend()
timedom.set_xlabel('Time (s)')
timedom.set_ylabel('Amplitude')
```
In this time vs amplitude graph, you should see:
* the 'noisy input' in magenta
* the output of the unweighted average filter in blue (`y_avg`)
* the output of the weighted average filter in orange (`y_wavg`)
### Exercise
Q: Why is the output of `y_wavg` more spikey than that of `y_avg`?
### Notes
```python
# ???
```
## 2.4 FIR Filters in the Frequency Domain
### The DFT of the filtered outputs
We can look at the effect of the two FIR filters defined above in the frequency domain by performing a DFT on the filter outputs.
```python
## DFT of the original input
mags_x, phases_x = get_dft_mag_phase(x_noisy, N)
## DFT after weighted average filter: h_wavg = np.array([1/5, 1/3, 1, 1/3, 1/5])
mags_wavg, phases_wavg = get_dft_mag_phase(y_wavg, N)
## DFT after unweighted average filter: h_avg = np.array([1/5, 1/5, 1/5, 1/5, 1/5])
mags_avg, phases_avg = get_dft_mag_phase(y_avg, N)
dft_freqs = get_dft_freqs_all(sample_rate=f_s, seq_len=N)
## Plot magnitude spectrums
fig, mags = plt.subplots(figsize=(16, 4))
mags.set(xlim=(-1, N/2), ylim=(-1, N))
mags.scatter(dft_freqs, mags_x, color='magenta', label='input')
mags.scatter(dft_freqs, mags_avg, color='blue', label='unweighted average')
mags.scatter(dft_freqs, mags_wavg, color='orange', label='weighted average')
mags.legend()
## Plot phase spectrums
fig, phases = plt.subplots(figsize=(16, 4))
phases.set(xlim=(-1, N/2), ylim=(-10, 10))
phases.scatter(dft_freqs, phases_x, color='magenta', label='input')
phases.scatter(dft_freqs, phases_avg, color='blue', label='unweighted average')
phases.scatter(dft_freqs, phases_wavg, color='orange', label='weighted average')
phases.legend()
```
### Exercise:
* Describe the difference between the different FIR filters based on the frequency magnitude and phases responses plotted above.
* Does the weighted average filter do as good a job at filtering out the higher frequency signals?
### Notes
```python
# the weighted filter increases the original frequencies but the unweighted one decreases them
# both filters phase shifts the input positively, but only the weighted average phase shifts the frequencies
# with high amplitude negatively
```
## 2.5 Convolution in Time, Multiplication in Frequency
Now we get to the really cool bit. We know that the DFT allows us to go from the time domain to the frequency domain (and the Inverse DFT let's us go back). But it also has this very important property:
$$ h[k] * x[n] \mapsto \text{ DFT} \mapsto H(m) \cdot X(m) $$
That is, convolving an input sequence $x$ with a set of filter coefficients, $h$ in the time domain ($h*x$) is the same as (pointwise) multiplication of the DFT of $h$ with the DFT of $x$. So, if we know what type of frequency response $h$ has, we can treat this as apply a sort of mask to the DFT of $x$. This property is known as the **convolution theorem**.
Another way to think about it is that if some of the DFT outputs of $h$ have zero magnitude, then applying that filter to an input will also zero out those frequencies in the filter output!
We'll see some visualizations of this shortly, but first, we can also note that you can go back the other way using the Inverse DFT:
$$ H(m) \cdot X(m) \mapsto \text{ IDFT} \mapsto h[k] * x[n] $$
And also it works for multiplication in time domain too:
$$ h[k] \cdot x[n] \mapsto \text{DFT} \mapsto H(m) * X(m) $$
This is useful for understanding why the leakage graph around each DFT bin looks the way it does (though we won't go into it here!).
In fact, we can use this to not only show that the moving average acts as a low pass filter, but also to be able to calculate exactly the type of filter it will be.
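Here is a small numerical check of this property (a sketch using NumPy's FFT, which computes the DFT). Multiplying DFTs corresponds to *circular* convolution, so we zero pad both sequences to a common length of at least N+K-1 before comparing against ordinary (linear) convolution:
```python
import numpy as np

h = np.array([1/5, 1/3, 1, 1/3, 1/5])    # some filter coefficients
x = np.random.randn(32)                  # some input sequence

L = len(x) + len(h) - 1                  # length needed to avoid wrap-around
time_domain = np.convolve(h, x)                                       # h * x
freq_domain = np.fft.ifft(np.fft.fft(h, L) * np.fft.fft(x, L)).real   # IDFT(H . X)

print(np.allclose(time_domain, freq_domain))  # expected: True
```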
### Plotting different FIR filter frequency responses
In order to multiply together the filter and the input frequency responses, we need to make sure the filter and the input have the same number of samples. We can do this by just padding out the filter with zeros (you can see why in the optional extra material at the end of this notebook). We can then look at how changing the 'shape' of a filter changes its frequency response. The following exercise shows some examples.
### Exercise
* Run the function plot_filter_freq_responses (defined in the next cell) to plot the frequency responses of filters with different shapes (e.g. `h_plat`, `h_tri`, `h_rect` in the cell below the next).
* What's the difference in frequency response of the triangular filter and the rectangular filter?
* What's the difference between the moving average of size 5 and one of size 9 (h_rect9)?
* Try some other FIR filters!
```python
# Given a list of filters, the sample rate and a specific sequence length
# plot the DFT frequency response of each filter.
# Each filter should be defined as a list of coeffients (b[k])
def plot_filter_freq_responses(filters, sample_rate, seq_length):
## Get the set of DFT output frequencies given the sample rate and desired sequence length
dft_freqs_filter = (sample_rate/seq_length) * np.arange(seq_length)
## Calculate the time steps for each filter value given the sample rate and sequence length
time_steps = (1/sample_rate) * np.arange(seq_length)
## Set up some plots:
# the filter itself (time v amplitude)
fig_time, sinusoid = plt.subplots(figsize=(16, 4))
# the frequency response (freq v magnitude)
fig_freq, fresponse = plt.subplots(figsize=(16, 4))
x_filters = {}
## For each filter:
for i, h in enumerate(filters):
# pad the filter coefficients with zeros until we get the desired sequence length
x_zeros = np.zeros(seq_length - len(h))
x = np.concatenate([h, x_zeros])
# Get the DFT outputs
mags, phases = get_dft_mag_phase(x, seq_length)
# Plot the filter
sinusoid.scatter(time_steps, x)
sinusoid.plot(time_steps, x, label=repr(h))
# plot the magnitude response
fresponse.scatter(dft_freqs_filter, mags)
fresponse.plot(dft_freqs_filter, mags, label=repr(h))
fresponse.set(xlim=(-1,seq_length/2))
# return the filters and the DFT responses just in case
        x_filters[i] = {'x':x, 'mags':mags, 'phases':phases, 'coeffs':h}
sinusoid.set_xlabel('Time(s)')
sinusoid.set_ylabel('Amplitude')
sinusoid.set_title('Zero padded filters of different shapes')
sinusoid.legend()
fresponse.set_xlabel('Frequency (Hz)')
fresponse.set_ylabel('Magnitude')
    fresponse.set_title('DFT magnitude response of zero padded filters of different shapes')
fresponse.legend()
return x_filters
```
```python
h_plat = np.array([0.1, 0.2, 0.2, 0.2, 0.1])
h_tri = np.array([0.04, 0.12, 0.15, 0.12, 0.01])
h_rect = np.array([1/5, 1/5, 1/5, 1/5, 1/5])
h_rect9 = np.array([1/9, 1/9, 1/9, 1/9, 1/9, 1/9, 1/9, 1/9, 1/9])
## Try some others if you like!
N=64
f_s=64
filter_dict = plot_filter_freq_responses(filters=[h_tri, h_plat, h_rect, h_rect9], sample_rate=f_s, seq_length=N)
```
### Notes
```python
# triangular vs rectangular:
# rectangular filters have bumps in their magnitude response
# moving average size:
# the larger moving average filter filters out more higher-frequency waves than the smaller one
```
## 2.6 Applying an FIR filter to an impulse train
Now, finally we can look at the effect of applying an FIR filter to an impulse train. Remember, we're using impulse trains to represent a sound source (i.e. vocal pulses at a specific frequency). Eventually, we want to be able to define filters that capture the effects of the vocal tract.
First, we use a helper function, `make_impulse_train` (imported from `dspMisc` at the top of this notebook), that produces a sequence of pulses at a given frequency and sample rate. We'll then apply a triangular-shaped filter and look at the frequency response of the output.
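For reference, here is a minimal sketch of what such a helper might look like (this is an assumption about its behaviour, not the actual `dspMisc` implementation): place a unit-height sample roughly every `sample_rate/frequency` samples, with zeros elsewhere.
```python
import numpy as np

def make_impulse_train_sketch(sample_rate, frequency, n_samples):
    """Hypothetical helper: one unit impulse every sample_rate/frequency samples."""
    x = np.zeros(n_samples)
    period_in_samples = int(round(sample_rate / frequency))
    x[::period_in_samples] = 1.0
    time_steps = np.arange(n_samples) / sample_rate
    return x, time_steps
```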
```python
## Let's make an impulse train
N=200
f_s = 8000
t_s = 1/f_s
frequency = 200
x_imp, time_steps = make_impulse_train(sample_rate=f_s, frequency=frequency, n_samples=N)
```
```python
## Plot the impulse train made in the cell before
fig, td = plt.subplots(figsize=(16, 4))
td.scatter(time_steps, x_imp)
td.plot(time_steps, x_imp)
td.set_xlabel('Time (s)')
td.set_ylabel('Amplitude')
td.set_title('Impulse train')
```
You should see an impulse train, with an impulse frequency of 200 Hz. Let's see what happens when we apply the filter.
```python
## define our FIR filter coefficients
h_mov = np.array([0.1, 0.3, 0.5, 0.3, 0.1])
#try this later!
#h_mov = np.array([1])
## Apply the filter to our impulse train
y_mov = fir_filter(x_imp, h_mov)
## plot the impulse train and the filtered version of it
fig, td = plt.subplots(figsize=(16, 4))
td.scatter(time_steps, x_imp)
td.plot(time_steps, x_imp, label='impulse train')
td.plot(time_steps, y_mov, label='filtered impulse train')
td.set_xlabel('Time (s)')
td.set_ylabel('Amplitude')
td.legend()
td.set_title('Impulse train and filtered impulse train')
```
Here you should see our impulse train (red) and the filtered impulse train (blue). The filtered version has smaller amplitude and a broader peak (more samples have non-zero value). Also, the filtered peaks are slightly after the original impulses.
Now let's look at the DFT output:
```python
## Get the DFT the filter output
mags, phases = get_dft_mag_phase(y_mov, N)
## Plot the DFT frequencies rather than DFT output indices
dft_freqs = get_dft_freqs_all(f_s, N)
## Plot the magnitude spectrum
fig, ax = plt.subplots(figsize=(16, 4))
ax.scatter(dft_freqs[0:int(N/2)],mags[0:int(N/2)])
ax.plot(dft_freqs[0:int(N/2)], mags[0:int(N/2)])
```
### Exercises
* What do the spikes in the DFT outputs represent?
* What does the first spike after the 0 frequency one represent?
* What does this filter appear to do?
* Remember applying the FIR filter in the time domain (via convolution) is the same as multiplying the DFT of the filter to the DFT of the input signal
### Notes
```python
# the spikes represent the harmonics
# the fundamental frequency
# lowpass filter
```
### Exercises
* What happens when the impulse train frequency doesn't fall on one of the bins? e.g. `frequency = 130`, for 200 samples, with sampling rate 8000 samples/second
* Does the magnitude spectrum have the harmonic structure you'd expect?
### Notes
```python
# less tidy magnitude spectrum
# somewhat, but there are lots of bleeding
```
## 2.7 (Extra Extension) The moving average filter as a rectangular function
<div class="alert alert-warning">
<em>This section (2.7) illustrates how we can use the convolution theorem to understand why moving average type filters act like low pass filters, and also the connection with leakage in the DFT magnitude response we saw previously. This is optional extra material.
If you want to see an example of the convolution theorem working in the frequency domain, you can just run the code and have a look at the graph titled 'Frequency response after applying 5-point unweighted average filter'</em>
</div>
The convolution theorem tells us that, if we know the frequency response of an FIR filter, we know how it will affect the frequency response of its input (we just multiply the individual frequency responses together).
To understand what the filter's frequency response will look like, it's helpful to first observe that our unweighted moving average filter is pretty much a rectangular window function. It's easy to see what this means when we plot it. The following function allows us to generate rectangular functions:
```python
def gen_rect_window(start_index, end_index, sample_rate=64, seq_length=64):
nsteps = np.array(range(seq_length))
t_s = 1/sample_rate
time_steps = t_s * nsteps
## Let's make a rectangular window
x_rect = np.zeros(seq_length)
x_rect[start_index:end_index] = 1
return x_rect, time_steps
```
### Now, we make a rectangular window
```python
## Make rectangular window
N=64
K=16
f_s=64
start_index=24
end_index=start_index+K
x_rect, time_steps = gen_rect_window(start_index=start_index, end_index=end_index, sample_rate=f_s, seq_length=N)
fig, timedom = plt.subplots(figsize=(16, 4))
timedom.scatter(time_steps, x_rect, color='magenta')
timedom.plot(time_steps, x_rect, color='magenta')
timedom.set_xlabel('Time (s)')
timedom.set_ylabel('Amplitude')
timedom.set_title('a rectangular window')
```
You should see a sequence of 64 points where the middle 16 points have value 1 and the rest have value 0 (i.e., it looks like a rectangle in the middle).
### Now, let's look at the frequency response of the rectangular window
```python
## Now we do the DFT on the rectangular function:
## get the magnitudes and phases
mags_rect, phases_rect = get_dft_mag_phase(x_rect, N)
## the DFT output frequencies
dft_freqs_rect = get_dft_freqs_all(f_s, N)
## let's just look at the magnitudes
fig, fdom = plt.subplots(figsize=(16, 4))
fdom.set(xlim=(-1, N/2))
fdom.scatter(dft_freqs_rect, mags_rect)
fdom.set_xlabel("Frequency (Hz)")
fdom.set_ylabel('Magnitude')
fdom.set_title('Frequency response of rectangular window')
## Looks leaky!
```
### Leaky windows?
The plot of the frequency magnitude response of our rectangular window has the hallmarks of leakiness. That is, the frequency response looks scalloped, with the biggest peak occurring around 0Hz. That is, it looks like a low pass filter!
With a bit of algebra we can derive the frequency response for any $m$ (not just the DFT output bin indices) to be the following:
If $x[n]$ is a rectangular function of N samples with $K$ contiguous samples of value 1 (starting at index $n_0$), we can figure out what the DFT output will be:
$$X[m] = e^{i(2\pi m/N)(n_0-(K-1)/2)} \cdot \frac{\sin(\pi m K/N)}{\sin(\pi m /N)}$$
This is called the **Dirichlet kernel**. It has the **sinc** shape we saw when we looked at spectral leakage.
How is this useful? Since we know what the frequency response of a rectangular window is, we know what convolving this with different input sequences will look like in the frequency domain. We just multiply the frequency magnitude responses together.
<div class="alert alert-success">
On a more general note, this sort of convolution with a (short) window is how we do frequency analysis of speech: we take windows of speech (aka frames) through time and apply the DFT to get a frequency response. A rectangular window is the simplest type of window we can take. The equation above tells us that the sinc shaped response is an inherent part of using this sort of window. In fact, we can use other window types (e.g. Hanning) to suppress the sidelobes (at the cost of a somewhat wider main lobe), but we never really get away from this sinc shape in real world applications. This is a key feature of this sort of <strong>short term analysis</strong>.
</div>
Let's write this up in a function:
```python
def gen_rect_response(n_0, K, N, stepsize=0.01, polar=True, amplitude=1):
ms = np.arange(0.01, N, stepsize)
qs = 2*np.pi*ms/N
## Infact, we can work the frequency response to be the Dirichlet Kernel:
response = (np.exp(-1j*qs*(n_0-(K-1)/2)) * np.sin(qs*K/2))/np.sin(qs/2)
if polar:
response_polar = [cmath.polar(z) for z in response]
mags = np.array([m for m, _ in response_polar]) * amplitude
phases = np.array([ph if round(mag) > 0 else 0 for mag, ph in response_polar])
return (mags, phases, ms)
return response, ms
```
Now we can plot the dirichlet kernel with the leaky looking DFT magnitudes we calculated earlier for our rectangular window.
```python
## Overlay the dirichlet kernel onto the DFT magnitudes we calculated earlier
## You should be able to see that the DFT magnitudes appear as discrete samples of the Dirichlet Kernel
mags_rect, phases_rect = get_dft_mag_phase(x_rect, N)
mags_rect_sinc , _ , ms = response = gen_rect_response(start_index, K, N)
fig, ax = plt.subplots(figsize=(16, 4))
ax.scatter(dft_freqs_rect, mags_rect, label='rectangular window')
ax.plot((f_s/N)*ms, mags_rect_sinc, color='C2', label='dirichlet')
ax.set(xlim=(-1,N/2))
ax.set_xlabel('Frequency (Hz)')
ax.set_ylabel('Magnitude')
ax.set_title('Frequency response of a rectangular sequence, %d samples with %d contiguous ones' % (N, K))
```
You should be able to see that the DFT magnitudes appear as discrete samples of the sinc shaped Dirichlet Kernel
### The unweighted average filter as a rectangular function
We can think of our 5-point unweighted average filter as a 5-point input sequence with all values set to 1/5. We can then deduce that the frequency response of the filter will have the same shape as the frequency response of a rectangular window of all ones, but scaled down by 1/5.
Now let's check:
```python
N_h=5
f_s=64
start_index=0
end_index=N_h - start_index
## A 5 point rectangular window of all ones
h_avg, time_steps = gen_rect_window(start_index=start_index, end_index=end_index, sample_rate=f_s, seq_length=N_h)
h_avg = h_avg/N_h
fig, td = plt.subplots(figsize=(16, 4))
td.scatter(time_steps, h_avg, color='magenta')
td.plot(time_steps, h_avg, color='magenta')
td.set_xlabel('Time (s)')
td.set_ylabel('Amplitude')
td.set_title('5 point unweighted average as a rectangular function')
## Not very exciting looking!
print("h_avg:", h_avg)
```
You should just see 5 points in a row, all with value 1/5. Now, we can plot the DFT magnitude response, as well as its idealized continuous version:
```python
## Get the frequency magnitude response for our rectangular function
mags_h_avg, phases_h_avg = get_dft_mag_phase(h_avg, N_h)
## Get the continuous (idealized) frequency response
rect_mags_h_avg, _ , ms = gen_rect_response(start_index, N_h, N_h, amplitude=np.max(h_avg))
## x-axis as frequencies rather than indices
ms_freqs_h_avg = (f_s/N_h) * ms
dft_freqs_h_avg = (f_s/N_h) * np.arange(N_h)
## Plot the frequency magnitude response
fig, fd = plt.subplots(figsize=(16, 4))
fd.set(xlim=(-1, N/2))
fd.scatter(dft_freqs_h_avg, mags_h_avg)
fd.set_xlabel('Frequency (Hz)')
fd.set_ylabel('Magnitude')
fd.set_title('Frequency response of 5-point unweighted average filter')
#fd.scatter(dft_freqs_rect, mags_rect)
fd.plot(ms_freqs_h_avg, rect_mags_h_avg, color="C2")
```
You should see $\lfloor N_h/2 \rfloor = 2$ DFT points above 0 Hz, with a main lobe peaking at 0 Hz, and side lobes peaking between each of the DFT output frequencies.
So, DFT frequencies sit exactly at the zeros of this function when the window size K is the same as the number of samples.
### Matching the filter and input size with zero padding
The theorem we saw above told us that we can calculate the frequency response of applying the FIR filter to an input sequence (via convolution) by multiplying the DFT outputs of the filter and the input sequence.
Now, the x-axis range matches that of our noisy input sequence because that is determined by the sampling rate. However, the filter frequency response we have above only has 5 outputs, while our input sample size was 64, because the number of DFT outputs is determined by the number of samples we put into the DFT.
To get things in the right form, we need to do some **zero padding** of the filter. We'll see that this basically gives us more samples of the Dirichlet Kernel corresponding to the filter frequency response.
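As an aside (a sketch using NumPy's FFT rather than our own DFT helpers), `np.fft.fft` can do this zero padding for you via its `n` argument, which pads the input with zeros up to the requested length:
```python
import numpy as np

h = np.ones(5) / 5                      # the 5-point unweighted average
N = 64

manual = np.fft.fft(np.concatenate([h, np.zeros(N - len(h))]))   # pad by hand
auto = np.fft.fft(h, n=N)                                         # let fft pad for us

print(np.allclose(manual, auto))        # expected: True
```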
```python
N=64
K=5
f_s=64
start_index=0
end_index=K
## Make a rectangular filter: K ones at the start
h_avg_pad, time_steps = gen_rect_window(start_index=start_index, end_index=end_index, sample_rate=f_s, seq_length=N)
## Divide by K to make it an average
h_avg_pad = h_avg_pad/K
## Plot the filter
fig, td = plt.subplots(figsize=(16, 4))
td.scatter(time_steps, h_avg_pad, color='magenta')
td.plot(time_steps, h_avg_pad, color='magenta')
td.set_xlabel('Time (s)')
td.set_title('5 point unweighted average FIR filter padded with zeros')
#print("N=%d, K=%d, start=%d, end=%d" % (N, K, start_index, end_index))
```
```python
## Get the frequency magnitude response for our rectangular function
mags_havg, phases_havg = get_dft_mag_phase(h_avg_pad, N)
## Plot the frequency magnitude response
## x-axis as actual frequencies rather that DFT indices
dft_freqs_havg = (f_s/N) * np.arange(N)
fig, fd = plt.subplots(figsize=(16, 4))
fd.set(xlim=(-1,N/2))
fd.scatter(dft_freqs_havg, mags_havg)
fd.set_xlabel('Frequency (Hz)')
fd.set_ylabel('Magnitude')
fd.set_title('Magnitude response of 5-point unweighted average filter zero padded to 64 samples')
```
You should be able to see more clearly in the frequency response graph that the zero padding doesn't change the basic shape of the filter's frequency response; we just get a finer grained representation in terms of samples (red dots).
### Calculate the input and filter frequency responses
```python
## Now let's calculate frequency responses of the original input
mags, phases = get_dft_mag_phase(x_noisy, N)
## ... the filter
mags_filter, phases_filter = get_dft_mag_phase(h_avg_pad, N)
## ... and the filtered output that we calculated above
mags_avg, phases_avg = get_dft_mag_phase(y_avg, N)
## Plot with actual frequencies on the x-axis
dft_freqs = get_dft_freqs_all(f_s, N)
```
```python
## plot frequency responses
fig, fd = plt.subplots(figsize=(16, 4))
fd.set(xlim=(-1,N/2), ylim=(-1, N))
# DFT(input)
fd.scatter(dft_freqs, mags, color='magenta', label='DFT(input)')
# DFT(filter) * DFT(input)
fd.scatter(dft_freqs, mags_filter*mags, color='blue', label='DFT(filter).DFT(input)')
# DFT(filtered input)
fd.scatter(dft_freqs, mags_avg, color='red', label='DFT(filter*input)')
fd.set_xlabel('Frequency (Hz)')
fd.set_ylabel('Magnitude')
fd.set_title('Frequency response after applying 5-point unweighted average filter')
fd.legend()
```
You should see that the result of multiplying the DFT magnitudes of the input and the filter (blue) is (more or less) the same as the DFT of applying the filter in the time domain via convolution (red).
* Notice that there are some small differences between the time domain application of the filter (red) and the frequency domain multiplication (blue). This is mainly an edge effect: multiplying DFTs corresponds to *circular* convolution, while our time-domain filter treats samples before the start of the sequence as zero, so the first few output samples differ slightly.
### Exercise
* Try changing the frequency of the second cosine component of our compound wave in the code below.
* Does the amount of attenuation of the high frequency component change as suggested by the DFT of the filter?
* e.g. try 26 Hz vs 19 Hz
* What does this tell you about how well this low pass filter gets rid of high frequency noise?
```python
## Change the frequency of x2
x1, time_steps = gen_sinusoid(frequency=4, phase=0, amplitude=1, sample_rate=f_s, seq_length=N, gen_function=np.cos)
x2, time_steps = gen_sinusoid(frequency=19, phase=0, amplitude=1, sample_rate=f_s, seq_length=N, gen_function=np.cos)
# add them up!
x_noisy = x1 + x2
## Now let's calculate frequency responses of the original input
mags, phases = get_dft_mag_phase(x_noisy, N)
## ... the filter
mags_filter, phases_filter = get_dft_mag_phase(h_avg_pad, N)
## Plot with actual frequencies on the x-axis
dft_freqs = get_dft_freqs_all(f_s, N)
## plot frequency responses
fig, fd = plt.subplots(figsize=(16, 4))
fd.set(xlim=(-1,N/2), ylim=(-1, N))
# DFT(input)
fd.scatter(dft_freqs, mags, color='magenta', label='DFT(input)')
# DFT(filter) * DFT(input)
fd.scatter(dft_freqs, mags_filter*mags, color='blue', label='DFT(filter)*DFT(input)')
fd.set_xlabel('Frequency (Hz)')
fd.set_ylabel('Magnitude')
fd.set_title('Frequency response after applying 5-point unweighted average filter')
fd.legend()
```
### Notes
```python
```
```python
```
```python
```
| f2662c89666aa0c9469607c9fb88d92abc36a417 | 797,317 | ipynb | Jupyter Notebook | signals/sp-m2-2-fir-filters.ipynb | vatnid/uoe_speech_processing_course | ac479566f2d7b911cac8c94ecac92dda2b80bdb3 | [
"MIT"
]
| null | null | null | signals/sp-m2-2-fir-filters.ipynb | vatnid/uoe_speech_processing_course | ac479566f2d7b911cac8c94ecac92dda2b80bdb3 | [
"MIT"
]
| null | null | null | signals/sp-m2-2-fir-filters.ipynb | vatnid/uoe_speech_processing_course | ac479566f2d7b911cac8c94ecac92dda2b80bdb3 | [
"MIT"
]
| null | null | null | 414.190649 | 88,564 | 0.934572 | true | 9,978 | Qwen/Qwen-72B | 1. YES
2. YES | 0.853913 | 0.815232 | 0.696137 | __label__eng_Latn | 0.987226 | 0.455692 |
# SMU Honors Physics: Adventures in Spacetime
**Authors**: Stephen Sekula
In this python notebook, we will explore numbers, especially vectors - numbers that define both a length and a direction. Vectors are a crucial class of numbers in physics. Many quantities in the natural world must be specified by more than just a size or a length; they indicate where to go next, and also need to specify directional information. In this unit, we explore simple numbers without direction ("scalars"), then move on to numbers with direction ("vectors"). This will all be done with a programming language, called PYTHON, which you will use over and over this semester to explore different questions about the natural world.
In this unit, you will learn the following:
* How to define scalars in PYTHON.
* How to operate on scalars to make them do things, like make other scalars.
* How to define vectors in general
* How to define vectors in PYTHON
* How to operate on vectors
* How to use vectors to represent physical quantities, like those that describe motion in space and time
* Apply vectors to model a physical process
### Difficulty level of a section
Sections below are marked by one of three graphics: green circle, blue square, and black diamond. If you have ever gone skiing before, you might recognize these as ski trail difficulty markers. "Green circle" indicates a beginner-level trail; "Blue square" is intermediate-level; "Black diamond" is advanced level.
Everyone should be able to make progress on green circle material. Once you have a good comfort level with green circle, blue square will feel much more approachable. Black diamond may challenge even experienced people, but nonetheless it is important to see what is possible and aspire!
* Green Circle: BEGINNER
* Blue Square: INTERMEDIATE
* Black Diamond: ADVANCED
## Scalars (in general, and in PYTHON)
A "scalar" is a number with magnitude, but no directions. Here are some physical quantities that are represented by scalars:
* Your age: "I am 19 years old" is a description of the *magnitude* of your age. Of course, we all know that time only ever seems to move in one direction, so we never bother to specify the direction of age.
* The temperature of a room: "It's 70F in this office right now" is a statement about the magnitude of the temperature (heat energy content) of the room. It has no directional information. For instance, knowing that it is 70F in this room doesn't tell us about the temperature in the hall, or the next room. It has no information about what temperature is doing elsewhere.
* Speed: "I am driving at 70 miles per hour" is a statement about how far you go over a certain amount of time. Nobody can figure out, from just this information, whether you are driving north, south, east, west, or some combination of those. Speed is a scalar that gives no information about what your speed will be later, what it was earlier, or in what direction your motion is aimed.
Now that you have familiarized yourself with scalars in the physical world, let us represent them using the PYTHON programming language and then learn to operate on scalars to do things - like make other scalars.
### Scalars in PYTHON - integers and floating point numbers
PYTHON has two built-in simple scalar types: integers (0,1,2,3,... or -1,-2,-3,...) and "floating point" numbers, more commonly known as "decimal numbers" (1.1, 3.14159, 75.008). PYTHON allows you to have a symbol represent a number (like in algebra) and then use the symbol to conduct operations on the numbers. Consider the following first steps into PYTHON to store a decimal number using a "variable" (a symbol that stands for a number and whose value can be altered), and then act on the variable with mathematical operations (addition, +, subtraction, -, multiplication, \*, and division, /).
To make the next block of "code" (programming commands) do something, click your mouse cursor into the block and press the SHIFT key and the ENTER key at the same time.
```python
# This is a comment. PYTHON ignores these, but they are helpful for people!
# Any line that begins with a # symbol is a comment.
# Let's define a variable, x:
x = 5.0
# By just placing the variable on its own line, we can print its value:
x
# That's not very pretty. Just a number on a line. Let's print this with some helpful text:
print("The value of x is %f" %(x))
# How about printing a multiplication operation?
print("%f" %(x*5.0))
# We don't need all those decimal places. Let's print just a few (or maybe lots more! Play around!)
print("%.2f" % (x*5.0))
# Let's define a new variable, y, based on x:
y = x*2.0
print("The value of y is:")
print(y)
# Let's print out some arithmetic operations between x and y:
print("Addition")
print(x+y)
print("Subtraction")
print(x-y)
print("Multiplication")
print(x*y)
print("Division")
print(x/y)
# Congratulations - you just learned how to make a computer do whatever you want. Imagine the endless applications!
```
The value of x is 5.000000
25.000000
25.00
The value of y is:
10.0
Addition
15.0
Subtraction
-5.0
Multiplication
50.0
Division
0.5
## Vectors
A "vector" is a collection of scalars that, together, represent both magnitude and direction. Many physical quantities can only be accurately described by vectors. For example:
* Velocity: "I am driving northeast at 70 miles per hour" is a statement both about direction ("northeast") and magnitude ("70 miles per hour"). Not only do you know how much space is being covered, and in what time it is covered - you know where to find the person at a later time because you know their direction! If you know the starting point, you can keep track of a journey.
* Acceleration: "I am falling down to the earth and my speed increases by 9.8 meters per second, every second" is a statement about the rate of change of speed, and the direction in which that change occurs. The earth's gravitational force causes such a change in speed near the surface of the earth, and always seems to cause the change in a direction that points toward the center of the earth ("down").
* Electric force: "The lightning jumped from the storm cloud to the ground" is a statement about how much charge was drained from a storm cloud (in the sky) to the ground (down below the storm). This is a vector, containing both direction (downward) and magnitude (lightning is a number of electric charges that all make the jump). Even without actual motion, electric charges can exert forces on each other and those forces have directionality and strength.
There are many notations for a vector. For instance, let us try to represent the first example above - velocity - using a notation.
**"I am driving northeast at 70 miles per hour."**
Let us assume that the driver is moving an equal amount of distance north for every amount east that they move. We can represent one step in their motion as follows:
```python
# We'll need this more later, but this code snippet will draw arrows. We need them to represent the above statement
import matplotlib.pyplot as plt
import math
%matplotlib inline
plt.axis('on')
plt.xlim(-1.0,1.0)
plt.ylim(-1.0,1.0)
# Draw an east-pointing arrow
plt.arrow(0, 0, 1/math.sqrt(2)-0.1, 0, head_width=0.05, head_length=0.1, fc='k', ec='k');
# Draw a north-pointing arrow
plt.arrow(1/math.sqrt(2), 0, 0, 1/math.sqrt(2)-0.1, head_width=0.05, head_length=0.1, fc='k', ec='k');
# Draw the actual arrow representing the total motion of the car
plt.arrow(0, 0, 0.9/math.sqrt(2), 0.9/math.sqrt(2), head_width=0.05, head_length=0.1, fc='c', ec='c');
```
The above code snippet will help us later in doing more vector visualization, but for now focus on notating the above picture. For every bit of distance, x, that we go eastward, we go a distance, y, northward. In the above picture, x=y . . . but that doesn't always have to be the case. In fact, most of the time it's not.
Let us look at two popular notations for the above:
$\vec{v} = (x,y)$
or
$\vec{v} = x\hat{i} + y\hat{j}$
The first notation uses parentheses and commas to separate the horizontal component (x) from the vertical component (y). The second notation uses an additional kind of vector, a *unit vector*, to represent "direction along horizontal" ($\hat{i}$) and "direction along vertical" ($\hat{j}$).
For this exercise, we prefer the first notation since it's very similar to what we will do in PYTHON to represent vectors. Let's look at that.
### Mathematical Operations on Vectors
* Adding two vectors is as simple as adding the like-components: the first component in each vector is added to the first component in the next; the second component in each vector is added to the second component of the next; etc. Mathematically, we can represent this as follows: ${\vec v}_1 + {\vec v}_2 = (1,5) + (4,5) = (1+4, 5+5) = (5,10)$. Note that the sum of two vectors yields a new vector.
* Subtracting two vectors is just like addition, but with minus signs. For example, ${\vec v}_1 - {\vec v}_2 = (1,5)-(4,5) = (1-4, 5-5) = (-3, 0)$. Again, subtraction yields a new vector.
* Scalar multiplication involves multiplying a vector by a scalar. In this case, you change the overall length of the vector, but not its direction (we will see this in PYTHON below; note that a negative scalar would also flip the direction). For example, let $a=5$ and ${\vec v} = (1,5)$, and then multiply them: $a{\vec v} = 5(1,5) = (5 \cdot 1, 5 \cdot 5) = (5,25)$. Note that scalar multiplication yields a vector.
* Dot-product vector multiplication: one way to multiply two vectors is to take the "dot product," in which you multiply each component by the same component from another vector and sum the resulting products. For example: ${\vec v}_1 \cdot {\vec v}_2 = (1,5) \cdot (4,5) = (1 \cdot 4) + (5 \cdot 5) = 4 + 25 = 29$. Note that the dot-product of two vectors is a scalar - a pure number.
### The Length of a Vector
A vector is an arrow. It has direction, but also length ("magnitude"). We now have enough mathematical information to calculate the length of any vector, knowing its components.
We can determine the length of a vector using the Pythagorean Theorem. Consider the picture above of driving east and then driving north. The eastward movement creates a horizontal vector. The northward movement creates a vertical vector. The resulting total motion - driving northeast - is the sum of two independent motions, one eastward and one northward. We can write the eastward vector (of length $e$) as:
\begin{equation} \vec{e} = (e,0) \end{equation}
and the northward vector (of length $n$) as:
\begin{equation} \vec{n} = (0,n) \end{equation}
If we add them, using the rules above, we arrive at the total motion vector:
\begin{equation} \vec{v} = (e,0) + (0,n) = (e,n). \end{equation}
But what is the *length* of this total vector? The two components form the base and height of a right triangle. You can imagine, then, that if we know the length of the base and the length of the height, we can find the total length of the hypotenuse - which is just the length ($L$) of the total vector! This would look like this:
\begin{equation} L^2 = e^2 + n^2 \longrightarrow L = \sqrt{e^2 + n^2} \end{equation}
Consider the mathematical operations we introduced above. Which operation does the length-squared, $L^2$, resemble?
If you said "dot-product," you're correct. In fact, the dot-product of a vector with its own self gives you the square of the length of that vector:
\begin{equation} {\vec v} \cdot {\vec v} = (e,n) \cdot (e,n) = (e \cdot e) + (n \cdot n) = e^2 + n^2 = L^2 \end{equation}
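For a concrete example, take ${\vec v} = (3,4)$:
\begin{equation} {\vec v} \cdot {\vec v} = (3 \cdot 3) + (4 \cdot 4) = 9 + 16 = 25 = L^2 \quad \Rightarrow \quad L = \sqrt{25} = 5 \end{equation}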
Let us now play with all of this in PYTHON.
## Vectors in PYTHON
Below, we show you how to write vectors using PYTHON and then act on vectors with various algebraic operations. Play around with these and see if you can get a feel for what it means to define and then manipulate a vector. The visualization below will update when you make changes.
```python
# import a library of numerical tools for defining and manipulating vectors
import numpy as np
# Let's define a vector using two scalar variables to set the components
x = 1.0
y = 5.0
v_1 = np.array([x,y])
# Let us print the components of the vector
print("The components of the vector, v_1, are as follows:")
print("x = %f" % (v_1[0]))
print("y = %f" % (v_1[1]))
# We see that adding the operator [] after the vector, with a number inside the square brackets, gains us
# access to the components of the vector.
# Let us define a second vector
v_2 = np.array([4,5])
# Let us do vector addition:
print("Vector addition of v_1 + v_2 yields: (%f, %f)" % ((v_1+v_2)[0], (v_1+v_2)[1]))
# We see that this, indeed, yields what we expect from the definition discussed above!
# Let us subtract them:
print("Vector subtraction of v_1 - v_2 yields: (%f, %f)" % ((v_1-v_2)[0], (v_1-v_2)[1]))
# Let us now multiply the vector by a scalar
a = 5
print("Scalar multiplication of a * v_1 yields: (%f, %f)" % ((a*v_1)[0],(a*v_1)[1]))
# Just as we expected from the definitions above!
# Let us now do the dot-product multiplication of two vectors.
# Numpy provides a special "function" to execute the dot product:
print("Dot-product multiplication of v_1 * v_2 yields: %f" % (np.dot(v_1, v_2)))
# Again, this gives us what we expected from the definition of the dot product.
# Let us now calculate the length of the vector, v_1, in two ways.
# The first way is "brute-force" - do the dot-product multiplication and then square-root it:
length = math.sqrt( np.dot(v_1, v_1))
print("Length Method A: the length of v_1 is %f" % (length))
# The second way is to use numpy's built-in function for computing the length:
length = np.linalg.norm(v_1)
print("Length Method B: the length of v_1 is %f" % (length))
# Oh, look. They are the same!
# Method B utilizes the "Linear Algebra" tools, and specifically the "norm()" function, which is
# designed to return the length of a vector of any size (we used a 2-dimensional vector, but you
# could give it a 10-dimensional vector and it still works!). "Linear Algebra" is a mathematics class
# you might take later, and it's the algebra of structured numbers like vectors and matrices, generically
# known as "tensors".
# Here is a 5-dimensional vector:
v_3 = np.array([1,3,5,7,9])
print("The length of our 5-D vector is: %f" % (np.linalg.norm(v_3)))
# Neat, huh? Try computing the length yourself and see if the above answer is right.
```
The components of the vector, v_1, are as follows:
x = 1.000000
y = 5.000000
Vector addition of v_1 + v_2 yields: (5.000000, 10.000000)
Vector subtraction of v_1 - v_2 yields: (-3.000000, 0.000000)
Scalar multiplication of a * v_1 yields: (5.000000, 25.000000)
Dot-product multiplication of v_1 * v_2 yields: 29.000000
Length Method A: the length of v_1 is 5.099020
Length Method B: the length of v_1 is 5.099020
The length of our 5-D vector is: 12.845233
## Play with vectors
The interactive PYTHON game below lets you play with a vector, interacting by using sliders to change things like the total length of the vector or the lengths of its components. Consider the following critical questions:
1. What happens to the *direction* of the total vector when you change its *total length*?
2. What happens to the *direction* of the total vector when you change the length of either of its components?
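One way to check your answers to the questions above numerically is with a small helper like the sketch below. It assumes we measure direction as the angle measured counter-clockwise from the positive x-axis (an extra convention not used elsewhere in this notebook).
```python
import math

def length_and_angle(x, y):
    """Return the length of the vector (x, y) and its direction in degrees."""
    length = math.sqrt(x**2 + y**2)
    angle = math.degrees(math.atan2(y, x))
    return length, angle

print(length_and_angle(1, 1))   # same direction ...
print(length_and_angle(3, 3))   # ... but three times the length
print(length_and_angle(1, 3))   # changing one component changes the direction
```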
```python
from ipywidgets import widgets
from ipywidgets import interact
import matplotlib.pyplot as plt
import math
%matplotlib inline
def f(x):
print(x)
global x_last
x_last = 0.0
global y_last
y_last = 0.0
def draw_vector(x,y):
global x_last
global y_last
plt.axis('on')
plt.xlim(-5.0,5.0)
plt.ylim(-5.0,5.0)
xlength = x
ylength = y
length = math.sqrt(x**2 + y**2)
# Draw the previous vector in a fainter color/shade
    if (x_last != 0 or y_last != 0):
length_last = math.sqrt(x_last**2 + y_last**2)
plt.arrow(0, 0, x_last, y_last, head_width=0.2, head_length=0.2, fc='c', ec='c');
# Draw the vector
plt.arrow(0, 0, xlength, ylength, head_width=0.2, head_length=0.2, fc='k', ec='k');
plt.show()
x_last = x
y_last = y
interact(draw_vector,x=widgets.IntSlider(min=-4,max=4,step=1,value=1),y=widgets.IntSlider(min=-4,max=4,step=1,value=1))
print("Note: the current vector is in black, while the previous vector is in cyan (light blue).")
```
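Here is a small optional hint for the two questions above (not part of the interactive game): the *direction* of a 2-D vector can be summarized by the angle it makes with the x-axis, which you can compute with `math.atan2`.
```python
import math
def direction_degrees(x, y):
    return math.degrees(math.atan2(y, x))
print("v = (1, 1) points at %f degrees" % (direction_degrees(1, 1)))
print("v = (3, 3) points at %f degrees (same direction, just a longer vector)" % (direction_degrees(3, 3)))
print("v = (1, 4) points at %f degrees (changing one component changes the direction)" % (direction_degrees(1, 4)))
```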
# Fun With Vectors - Moving About
Let us now program a simple "animation" by adding TIME as an additional dimension to the problem. The above graphic of a northeast-pointing vector is static - it represents a moment in frozen time. It never changes. Let's add time as a step, and have some fun with this by making something that moves.
Let us do this. Let us:
* Define a "game board" of a fixed size.
* Draw a vector on the game board that fits inside the board.
* Wait a second, then draw a new vector on the game board that starts at the tip of the previous vector and points in the same direction as the original one. If the length of this vector exceeds the game board size, "bend" it to point back into the board. In essence, "bounce" it off the wall.
* Keep this going. Draw a new vector. If it fits in the board, draw the next one in the same direction. If it doesn't, "bounce" it off the wall.
How do we do the "bounce"? Let's choose the following:
* If the horizontal component of the vector exceeds the horizontal boundary, flip the horizontal component around and start the next vector on the "wall" of the board.
* if the vertical component of the vector exceeds the vertical boundary, flip the vertical component around and again start the next vector on the wall.
* If BOTH exceed the boundary, flip both and start on the wall.
You can make up your own rules. Here is what the program looks like in PYTHON if we implement these.
Note:
Try playing around with the total number of steps in the game ("total_steps") and the direction and length of the original vector (look for the comment, "initial vector," preceding those lines of code).
```python
# Library setups
import numpy as np
import math
import matplotlib.pyplot as plt
# Define variables accessible (and modifiable) by functions
global board_length
board_length = 1
global board_height
board_height = 1
# colors that we can assign to the arrow and change in the game
global colors
colors=['k','b','g','r','c','m'] # black, blue, green, red, cyan, magenta
global vector_color
vector_color = 0
# Define some functions for determining if our vector is on the board
# A "function" takes input and does something to it. It allows you to
# encapsulate repetitive tasks.
# Is the vector's front end inside the bounds of the board's horizontal size?
def vector_in_board_horizontally(x_start = 0, v = np.array([])):
if (x_start + v[0]) < 0:
return -1
if (x_start + v[0]) > board_length:
return 1
return 0
# Is the vector's front end inside the bounds of the board's vertical size?
def vector_in_board_vertically(y_start = 0, v = np.array([])):
if (y_start + v[1]) < 0:
return -1
    if (y_start + v[1]) > board_height:
return 1
return 0
# Define some functions for flipping vector components
def flip_horizontal(v = np.array([])):
return np.array([-v[0],v[1]])
def flip_vertical(v = np.array([])):
return np.array([v[0],-v[1]])
# function to draw vector
def draw_vector(x_initial = 0.5, y_initial = 0.5, v = np.array([])):
plt.arrow(x_initial, y_initial, v[0], v[1], head_width=0.05, head_length=0.1, fc=colors[vector_color], ec=colors[vector_color]);
return v
def next_color():
global vector_color
vector_color += 1
if vector_color >= len(colors):
vector_color = 0
return vector_color
# Draw the board and then play the game
# The code below sets up the plot and draws the arrows that represent the vector as it moves.
import matplotlib.pyplot as plt
import math
%matplotlib inline
plt.axis('on')
plt.xlim(0,board_length)
plt.ylim(0,board_height)
# initial vector
x_start = 0.32
y_start = 0.5
v = draw_vector(x_initial = x_start, y_initial = y_start, v=np.array([0.1,0.2]))
# Place a dot at the starting point - this helps us find the start point!
plt.plot(x_start,y_start, marker='o', color=colors[vector_color], ls='')
# Step the arrow around the board, applying our rules when a vector reaches the boundaries
# When we "bounce" off a wall, change the color!
total_steps = 50
# a "for loop" repeats a task over and over again, up to a limit that you define
for step in range(total_steps):
v_next = v
# update the start point
x_start = x_start + v[0]
y_start = y_start + v[1]
horizontal_choice = vector_in_board_horizontally(x_start, v)
if horizontal_choice == 0:
v_next = v_next
elif horizontal_choice == -1:
v_next = flip_horizontal(v_next)
x_start = 0
next_color()
elif horizontal_choice == 1:
v_next = flip_horizontal(v_next)
x_start = board_length
next_color()
vertical_choice = vector_in_board_vertically(y_start, v)
if vertical_choice == 0:
v_next = v_next
elif vertical_choice == -1:
v_next = flip_vertical(v_next)
y_start = 0
next_color()
elif vertical_choice == 1:
v_next = flip_vertical(v_next)
        y_start = board_height
next_color()
v = v_next
draw_vector(x_start, y_start, v)
```
# Acknowledgements
The graphics used for difficult ratings are from openclipart.org:
* https://openclipart.org/detail/30529/led-triangular-black
* https://openclipart.org/detail/212963/blue-square-button
* https://openclipart.org/detail/26193/button-green
```python
```
| ca076ee41a54458712790ad9db62780d87761359 | 73,945 | ipynb | Jupyter Notebook | AdventuresInSpacetime/Adventures in Spacetime.ipynb | stephensekula/smu-honors-physics | 8c98408c7693a30023e125ad29f98aa42c3002f6 | [
"MIT"
]
| 1 | 2018-11-01T21:50:36.000Z | 2018-11-01T21:50:36.000Z | AdventuresInSpacetime/Adventures in Spacetime.ipynb | stephensekula/smu-honors-physics | 8c98408c7693a30023e125ad29f98aa42c3002f6 | [
"MIT"
]
| null | null | null | AdventuresInSpacetime/Adventures in Spacetime.ipynb | stephensekula/smu-honors-physics | 8c98408c7693a30023e125ad29f98aa42c3002f6 | [
"MIT"
]
| null | null | null | 114.289026 | 29,634 | 0.81873 | true | 5,698 | Qwen/Qwen-72B | 1. YES
2. YES | 0.859664 | 0.887205 | 0.762698 | __label__eng_Latn | 0.998192 | 0.610334 |
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D1_ModelTypes/W1D1_Tutorial2.ipynb" target="_parent"></a>
# Neuromatch Academy: Week 1, Day 1, Tutorial 2
# Model Types: "How" models
__Content creators:__ Matt Laporte, Byron Galbraith, Konrad Kording
__Content reviewers:__ Dalin Guo, Aishwarya Balwani, Madineh Sarvestani, Maryam Vaziri-Pashkam, Michael Waskom
___
# Tutorial Objectives
This is tutorial 2 of a 3-part series on different flavors of models used to understand neural data. In this tutorial we will explore models that can potentially explain *how* the spiking data we have observed is produced.
To understand the mechanisms that give rise to the neural data we saw in Tutorial 1, we will build simple neuronal models and compare their spiking response to real data. We will:
- Write code to simulate a simple "leaky integrate-and-fire" neuron model
- Make the model more complicated — but also more realistic — by adding more physiologically-inspired details
```python
#@title Video 1: "How" models
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "//player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1yV41167Di', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
```
Video available at https://www.bilibili.com/video/BV1yV41167Di
# Setup
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
```
```python
#@title Figure Settings
import ipywidgets as widgets #interactive display
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
```
```python
#@title Helper Functions
def histogram(counts, bins, vlines=(), ax=None, ax_args=None, **kwargs):
"""Plot a step histogram given counts over bins."""
if ax is None:
_, ax = plt.subplots()
# duplicate the first element of `counts` to match bin edges
counts = np.insert(counts, 0, counts[0])
ax.fill_between(bins, counts, step="pre", alpha=0.4, **kwargs) # area shading
ax.plot(bins, counts, drawstyle="steps", **kwargs) # lines
for x in vlines:
ax.axvline(x, color='r', linestyle='dotted') # vertical line
if ax_args is None:
ax_args = {}
# heuristically set max y to leave a bit of room
ymin, ymax = ax_args.get('ylim', [None, None])
if ymax is None:
ymax = np.max(counts)
if ax_args.get('yscale', 'linear') == 'log':
ymax *= 1.5
else:
ymax *= 1.1
if ymin is None:
ymin = 0
if ymax == ymin:
ymax = None
ax_args['ylim'] = [ymin, ymax]
ax.set(**ax_args)
ax.autoscale(enable=False, axis='x', tight=True)
def plot_neuron_stats(v, spike_times):
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5))
# membrane voltage trace
ax1.plot(v[0:100])
ax1.set(xlabel='Time', ylabel='Voltage')
# plot spike events
for x in spike_times:
if x >= 100:
break
ax1.axvline(x, color='red')
# ISI distribution
isi = np.diff(spike_times)
n_bins = np.arange(isi.min(), isi.max() + 2) - .5
counts, bins = np.histogram(isi, n_bins)
vlines = []
if len(isi) > 0:
vlines = [np.mean(isi)]
xmax = max(20, int(bins[-1])+5)
histogram(counts, bins, vlines=vlines, ax=ax2, ax_args={
'xlabel': 'Inter-spike interval',
'ylabel': 'Number of intervals',
'xlim': [0, xmax]
})
plt.show()
```
# Section 1: The Linear Integrate-and-Fire Neuron
How does a neuron spike?
A neuron charges and discharges an electric field across its cell membrane. The state of this electric field can be described by the _membrane potential_. The membrane potential rises due to excitation of the neuron, and when it reaches a threshold a spike occurs. The potential resets, and must rise to a threshold again before the next spike occurs.
One of the simplest models of spiking neuron behavior is the linear integrate-and-fire model neuron. In this model, the neuron increases its membrane potential $V_m$ over time in response to excitatory input currents $I$ scaled by some factor $\alpha$:
\begin{align}
dV_m = {\alpha}I
\end{align}
Once $V_m$ reaches a threshold value a spike is produced, $V_m$ is reset to a starting value, and the process continues.
Here, we will take the starting and threshold potentials as $0$ and $1$, respectively. So, for example, if $\alpha I=0.1$ is constant---that is, the input current is constant---then $dV_m=0.1$, and at each timestep the membrane potential $V_m$ increases by $0.1$ until after $(1-0)/0.1 = 10$ timesteps it reaches the threshold and resets to $V_m=0$, and so on.
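A quick, optional sanity check of that arithmetic (this is not part of the exercises below):
```python
# With a constant input of alpha*I = 0.1 per timestep, starting from V_m = 0,
# the threshold V_m = 1 is reached after (1 - 0)/0.1 = 10 timesteps.
threshold = 1.0
dv_per_step = 0.1  # alpha * I
print("Timesteps to reach threshold:", round((threshold - 0.0) / dv_per_step))
```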
Note that we define the membrane potential $V_m$ as a scalar: a single real (or floating point) number. However, a biological neuron's membrane potential will not be exactly constant at all points on its cell membrane at a given time. We could capture this variation with a more complex model (e.g. with more numbers). Do we need to?
The proposed model is a 1D simplification. There are many details we could add to it, to preserve different parts of the complex structure and dynamics of a real neuron. If we were interested in small or local changes in the membrane potential, our 1D simplification could be a problem. However, we'll assume an idealized "point" neuron model for our current purpose.
#### Spiking Inputs
Given our simplified model for the neuron dynamics, we still need to consider what form the input $I$ will take. How should we specify the firing behavior of the presynaptic neuron(s) providing the inputs to our model neuron?
Unlike in the simple example above, where $\alpha I=0.1$, the input current is generally not constant. Physical inputs tend to vary with time. We can describe this variation with a distribution.
We'll assume the input current $I$ over a timestep is due to equal contributions from a non-negative ($\ge 0$) integer number of input spikes arriving in that timestep. Our model neuron might integrate currents from 3 input spikes in one timestep, and 7 spikes in the next timestep. We should see similar behavior when sampling from our distribution.
Given no other information about the input neurons, we will also assume that the distribution has a mean (i.e. mean rate, or number of spikes received per timestep), and that the spiking events of the input neuron(s) are independent in time. Are these reasonable assumptions in the context of real neurons?
A suitable distribution given these assumptions is the Poisson distribution, which we'll use to model $I$:
\begin{align}
I \sim \mathrm{Poisson}(\lambda)
\end{align}
where $\lambda$ is the mean of the distribution: the average rate of spikes received per timestep.
### Exercise 1: Compute $dV_m$
For your first exercise, you will write the code to compute the change in voltage $dV_m$ (per timestep) of the linear integrate-and-fire model neuron. The rest of the code to handle numerical integration is provided for you, so you just need to fill in a definition for `dv` in the `lif_neuron` function below. The value of $\lambda$ for the Poisson random variable is given by the function argument `rate`.
The [`scipy.stats`](https://docs.scipy.org/doc/scipy/reference/stats.html) package is a great resource for working with and sampling from various probability distributions. We will use the `scipy.stats.poisson` class and its method `rvs` to produce Poisson-distributed random samples. In this tutorial, we have imported this package with the alias `stats`, so you should refer to it in your code as `stats.poisson`.
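For example, you might warm up by sampling from the Poisson distribution directly and checking that the sample mean is close to $\lambda$ (the numbers here are purely illustrative):
```python
from scipy import stats

rate = 10                                # lambda: mean number of input spikes per timestep
samples = stats.poisson(rate).rvs(1000)  # spike counts for 1000 timesteps
print("First ten samples:", samples[:10])
print("Sample mean: %.2f (should be close to %d)" % (samples.mean(), rate))
```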
```python
def lif_neuron(n_steps=1000, alpha=0.01, rate=10):
""" Simulate a linear integrate-and-fire neuron.
Args:
n_steps (int): The number of time steps to simulate the neuron's activity.
alpha (float): The input scaling factor
rate (int): The mean rate of incoming spikes
"""
# precompute Poisson samples for speed
exc = stats.poisson(rate).rvs(n_steps)
v = np.zeros(n_steps)
spike_times = []
################################################################################
# Students: compute dv, then comment out or remove the next line
  raise NotImplementedError("Exercise: compute the change in membrane potential")
################################################################################
for i in range(1, n_steps):
dv = ...
v[i] = v[i-1] + dv
if v[i] > 1:
spike_times.append(i)
v[i] = 0
return v, spike_times
# Uncomment these lines after completing the lif_neuron function
# v, spike_times = lif_neuron()
# plot_neuron_stats(v, spike_times)
```
```python
# to_remove solution
def lif_neuron(n_steps=1000, alpha=0.01, rate=10):
""" Simulate a linear integrate-and-fire neuron.
Args:
n_steps (int): The number of time steps to simulate the neuron's activity.
alpha (float): The input scaling factor
rate (int): The mean rate of incoming spikes
"""
# precompute Poisson samples for speed
exc = stats.poisson(rate).rvs(n_steps)
v = np.zeros(n_steps)
spike_times = []
for i in range(1, n_steps):
dv = alpha * exc[i]
v[i] = v[i-1] + dv
if v[i] > 1:
spike_times.append(i)
v[i] = 0
return v, spike_times
v, spike_times = lif_neuron()
with plt.xkcd():
plot_neuron_stats(v, spike_times)
```
## Interactive Demo: Linear-IF neuron
Like last time, you can now explore how various parameters of the LIF model influence the ISI distribution.
```python
#@title
#@markdown You don't need to worry about how the code works – but you do need to **run the cell** to enable the sliders.
def _lif_neuron(n_steps=1000, alpha=0.01, rate=10):
exc = stats.poisson(rate).rvs(n_steps)
v = np.zeros(n_steps)
spike_times = []
for i in range(1, n_steps):
dv = alpha * exc[i]
v[i] = v[i-1] + dv
if v[i] > 1:
spike_times.append(i)
v[i] = 0
return v, spike_times
@widgets.interact(
n_steps=widgets.FloatLogSlider(1000.0, min=2, max=4),
alpha=widgets.FloatLogSlider(0.01, min=-2, max=-1),
rate=widgets.IntSlider(10, min=5, max=20)
)
def plot_lif_neuron(n_steps=1000, alpha=0.01, rate=10):
v, spike_times = _lif_neuron(int(n_steps), alpha, rate)
plot_neuron_stats(v, spike_times)
```
```python
#@title Video 2: Linear-IF models
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "//player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1iZ4y1u7en', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
```
Video available at https://www.bilibili.com/video/BV1iZ4y1u7en
# Section 2: Inhibitory signals
Our linear integrate-and-fire neuron from the previous section was indeed able to produce spikes. However, our ISI histogram doesn't look much like empirical ISI histograms seen in Tutorial 1, which had an exponential-like shape. What is our model neuron missing, given that it doesn't behave like a real neuron?
In the previous model we only considered excitatory behavior -- the only way the membrane potential could decrease was upon a spike event. We know, however, that there are other factors that can drive $V_m$ down. First is the natural tendency of the neuron to return to some steady state or resting potential. We can update our previous model as follows:
\begin{align}
dV_m = -{\beta}V_m + {\alpha}I
\end{align}
where $V_m$ is the current membrane potential and $\beta$ is some leakage factor. This is a basic form of the popular Leaky Integrate-and-Fire model neuron (for a more detailed discussion of the LIF Neuron, see the Appendix).
We also know that in addition to excitatory presynaptic neurons, we can have inhibitory presynaptic neurons as well. We can model these inhibitory neurons with another Poisson random variable:
\begin{align}
I = I_{exc} - I_{inh} \\
I_{exc} \sim \mathrm{Poisson}(\lambda_{exc}) \\
I_{inh} \sim \mathrm{Poisson}(\lambda_{inh})
\end{align}
where $\lambda_{exc}$ and $\lambda_{inh}$ are the average spike rates (per timestep) of the excitatory and inhibitory presynaptic neurons, respectively.
### Exercise 2: Compute $dV_m$ with inhibitory signals
For your second exercise, you will again write the code to compute the change in voltage $dV_m$, though now of the LIF model neuron described above. Like last time, the rest of the code needed to handle the neuron dynamics are provided for you, so you just need to fill in a definition for `dv` below.
```python
def lif_neuron_inh(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):
""" Simulate a simplified leaky integrate-and-fire neuron with both excitatory
and inhibitory inputs.
Args:
n_steps (int): The number of time steps to simulate the neuron's activity.
alpha (float): The input scaling factor
beta (float): The membrane potential leakage factor
exc_rate (int): The mean rate of the incoming excitatory spikes
inh_rate (int): The mean rate of the incoming inhibitory spikes
"""
# precompute Poisson samples for speed
exc = stats.poisson(exc_rate).rvs(n_steps)
inh = stats.poisson(inh_rate).rvs(n_steps)
v = np.zeros(n_steps)
spike_times = []
###############################################################################
# Students: compute dv, then comment out or remove the next line
  raise NotImplementedError("Exercise: compute the change in membrane potential")
################################################################################
for i in range(1, n_steps):
dv = ...
v[i] = v[i-1] + dv
if v[i] > 1:
spike_times.append(i)
v[i] = 0
return v, spike_times
# Uncomment these lines to make the plot once you've completed the function
#v, spike_times = lif_neuron_inh()
#plot_neuron_stats(v, spike_times)
```
```python
# to_remove solution
def lif_neuron_inh(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):
""" Simulate a simplified leaky integrate-and-fire neuron with both excitatory
and inhibitory inputs.
Args:
n_steps (int): The number of time steps to simulate the neuron's activity.
alpha (float): The input scaling factor
beta (float): The membrane potential leakage factor
exc_rate (int): The mean rate of the incoming excitatory spikes
inh_rate (int): The mean rate of the incoming inhibitory spikes
"""
# precompute Poisson samples for speed
exc = stats.poisson(exc_rate).rvs(n_steps)
inh = stats.poisson(inh_rate).rvs(n_steps)
v = np.zeros(n_steps)
spike_times = []
for i in range(1, n_steps):
dv = -beta * v[i-1] + alpha * (exc[i] - inh[i])
v[i] = v[i-1] + dv
if v[i] > 1:
spike_times.append(i)
v[i] = 0
return v, spike_times
v, spike_times = lif_neuron_inh()
with plt.xkcd():
plot_neuron_stats(v, spike_times)
```
## Interactive Demo: LIF + inhibition neuron
```python
#@title
#@markdown **Run the cell** to enable the sliders.
def _lif_neuron_inh(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):
""" Simulate a simplified leaky integrate-and-fire neuron with both excitatory
and inhibitory inputs.
Args:
n_steps (int): The number of time steps to simulate the neuron's activity.
alpha (float): The input scaling factor
beta (float): The membrane potential leakage factor
exc_rate (int): The mean rate of the incoming excitatory spikes
inh_rate (int): The mean rate of the incoming inhibitory spikes
"""
# precompute Poisson samples for speed
exc = stats.poisson(exc_rate).rvs(n_steps)
inh = stats.poisson(inh_rate).rvs(n_steps)
v = np.zeros(n_steps)
spike_times = []
for i in range(1, n_steps):
dv = -beta * v[i-1] + alpha * (exc[i] - inh[i])
v[i] = v[i-1] + dv
if v[i] > 1:
spike_times.append(i)
v[i] = 0
return v, spike_times
@widgets.interact(n_steps=widgets.FloatLogSlider(1000.0, min=2.5, max=4),
alpha=widgets.FloatLogSlider(0.5, min=-1, max=1),
beta=widgets.FloatLogSlider(0.1, min=-1, max=0),
exc_rate=widgets.IntSlider(12, min=10, max=20),
inh_rate=widgets.IntSlider(12, min=10, max=20))
def plot_lif_neuron(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):
v, spike_times = _lif_neuron_inh(int(n_steps), alpha, beta, exc_rate, inh_rate)
plot_neuron_stats(v, spike_times)
```
```python
#@title Video 3: LIF + inhibition
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "//player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1nV41167mS', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
```
Video available at https://www.bilibili.com/video/BV1nV41167mS
# Summary
In this tutorial we gained some intuition for the mechanisms that produce the observed behavior in our real neural data. First, we built a simple neuron model with excitatory input and saw that its behavior, measured using the ISI distribution, did not match that of our real neurons. We then improved our model by adding leakiness and inhibitory input. The behavior of this balanced model was much closer to the real neural data.
# Bonus
### Why do neurons spike?
A neuron stores energy in an electric field across its cell membrane, by controlling the distribution of charges (ions) on either side of the membrane. This energy is rapidly discharged to generate a spike when the field potential (or membrane potential) crosses a threshold. The membrane potential may be driven toward or away from this threshold, depending on inputs from other neurons: excitatory or inhibitory, respectively. The membrane potential tends to revert to a resting potential, for example due to the leakage of ions across the membrane, so that reaching the spiking threshold depends not only on the amount of input ever received following the last spike, but also the timing of the inputs.
The storage of energy by maintaining a field potential across an insulating membrane can be modeled by a capacitor. The leakage of charge across the membrane can be modeled by a resistor. This is the basis for the leaky integrate-and-fire neuron model.
### The LIF Model Neuron
The full equation for the LIF neuron is
\begin{align}
C_{m}\frac{dV_m}{dt} = -(V_m - V_{rest})/R_{m} + I
\end{align}
where $C_m$ is the membrane capacitance, $R_m$ is the membrane resistance, $V_{rest}$ is the resting potential, and $I$ is some input current (from other neurons, an electrode, ...).
In our above examples we set many of these parameters to convenient values ($C_m = R_m = dt = 1$, $V_{rest} = 0$) to focus more on the general behavior of the model. However, these too can be manipulated to achieve different dynamics, or to ensure the dimensions of the problem are preserved between simulation units and experimental units (e.g. with $V_m$ given in millivolts, $R_m$ in megaohms, $t$ in milliseconds).
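As a rough, optional illustration of this dimensional form (the parameter values below are made up for demonstration, not fit to any neuron), a forward-Euler integration looks like this:
```python
# Minimal Euler integration of C_m dV/dt = -(V - V_rest)/R_m + I, with illustrative parameters.
C_m, R_m = 1.0, 10.0      # membrane capacitance and resistance (arbitrary units)
V_rest, V_th = 0.0, 1.0   # resting potential and spike threshold
I, dt = 0.15, 0.1         # constant input current and timestep

V = V_rest
spike_steps = []
for step in range(1000):
  dV = (-(V - V_rest) / R_m + I) / C_m * dt
  V += dV
  if V > V_th:
    spike_steps.append(step)
    V = V_rest
print("Number of spikes in 1000 steps:", len(spike_steps))
```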
| 2a892b0c92bf5a6469d64810d0e20b1b5a56cda7 | 722,016 | ipynb | Jupyter Notebook | tutorials/W1D1_ModelTypes/W1D1_Tutorial2.ipynb | Jaycob-jh/course-content | 6b2db614a7a357c16c1c108dfd4266dc0b2e9ea5 | [
"CC-BY-4.0"
]
| null | null | null | tutorials/W1D1_ModelTypes/W1D1_Tutorial2.ipynb | Jaycob-jh/course-content | 6b2db614a7a357c16c1c108dfd4266dc0b2e9ea5 | [
"CC-BY-4.0"
]
| null | null | null | tutorials/W1D1_ModelTypes/W1D1_Tutorial2.ipynb | Jaycob-jh/course-content | 6b2db614a7a357c16c1c108dfd4266dc0b2e9ea5 | [
"CC-BY-4.0"
]
| 1 | 2021-08-06T08:05:01.000Z | 2021-08-06T08:05:01.000Z | 357.256804 | 214,488 | 0.930464 | true | 5,162 | Qwen/Qwen-72B | 1. YES
2. YES | 0.675765 | 0.793106 | 0.535953 | __label__eng_Latn | 0.977364 | 0.083528 |
# Integrating Orbits
This is a demonstration of Euler's method for integrating orbits.
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
We consider low mass objects orbiting the Sun. We work in units of AU, yr, and solar masses. From Kepler's third law:
\begin{equation}
4 \pi^2 a^3 = G M P^2
\end{equation}
If $a$ is in AU, $P$ is in yr, and $M$ is in solar masses, then
\begin{equation}
a^3 = P^2
\end{equation}
and therefore
\begin{equation}
4 \pi^2 = G
\end{equation}
We work in coordinates with the Sun at the origin.
Equations:
\begin{align*}
\frac{dx}{dt} &= u \\
\frac{dy}{dt} &= v \\
\frac{du}{dt} &= - \frac{GMx}{r^3} \\
\frac{dv}{dt} &= - \frac{GMy}{r^3}
\end{align*}
```python
# assuming 1 solar mass
GM = 4*np.pi**2
```
```python
def rhs(x, y, u, v):
""" RHS of the equations of motion. X is the input coordinate
vector and V is the input velocity vector """
# current radius
r = np.sqrt(x**2 + y**2)
# position derivatives
xdot = u
ydot = v
# velocity derivatives
udot = -GM*x/r**3
vdot = -GM*y/r**3
return xdot, ydot, udot, vdot
```
A simple class for storing the solution history and plotting it
```python
class Orbit:
def __init__(self):
self.t = []
self.x = []
self.y = []
self.u = []
self.v = []
    def add_point(self, time, xnew, ynew, unew, vnew):
        self.t.append(time)
        self.x.append(xnew)
        self.y.append(ynew)
        self.u.append(unew)
        self.v.append(vnew)
def plot(self):
fig = plt.figure()
ax = fig.add_subplot(111)
        _ = ax.plot(self.x, self.y)
_ = ax.set_aspect("equal", "datalim")
return fig
```
```python
def integrate(x0, y0, u0, v0, tmax, dt):
"""integrate the orbit with initial conditions X0, V0, using a
timestep dt for a duration tmax"""
t = 0
x = x0
y = y0
u = u0
v = v0
o = Orbit()
o.add_point(t, x, y, u, v)
while (t < tmax):
xdot, ydot, udot, vdot = rhs(x, y, u, v)
x += dt * xdot
y += dt * ydot
u += dt * udot
v += dt * vdot
t += dt
o.add_point(t, x, y, u, v)
if t + dt > tmax:
dt = tmax - t
return o
```
Initial conditions are at perihelion with counter clockwise orbit:
\begin{align*}
x &= 0 \\
y &= a(1-e) \\
u &= -\sqrt{\frac{GM}{a} \frac{1+e}{1-e}} \\
v &= 0
\end{align*}
```python
def init_conditions(a, e):
x0 = 0.0
y0 = a*(1.0 - e)
u0 = -np.sqrt((GM/a)* (1.0 + e) / (1.0 - e))
v0 = 0.0
return x0, y0, u0, v0
```
To integrate the orbit, we set the semi-major axis and eccentricity, specify one period (`tmax = 1`), and a timestep
```python
a = 1.0
e = 0.0
x0, y0, u0, v0 = init_conditions(a, e)
orbit = integrate(x0, y0, u0, v0, 1.0, 0.00001)
```
```python
fig = orbit.plot()
```
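Since this is a first-order (Euler) scheme, it is worth quantifying the error. The optional check below assumes the cells above have been run, so `orbit`, `a`, and `np` are already defined; for `e = 0` the exact orbit is a circle of radius `a`, so any deviation of `r` from `a` is numerical error:
```python
r = np.sqrt(np.array(orbit.x)**2 + np.array(orbit.y)**2)
print("maximum |r - a| over the orbit = {:.2e}".format(np.abs(r - a).max()))
print("radius after one period        = {:.6f}".format(r[-1]))
```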
```python
```
| 8bb1dae09216a0581509b25c73bce31e3d20c222 | 22,315 | ipynb | Jupyter Notebook | orbits_example/orbit.ipynb | zingale/ast341_examples | 0a15b9bf0b268b00021c59504eb7a2006b7e5ada | [
"BSD-3-Clause"
]
| 3 | 2020-09-09T15:48:41.000Z | 2021-08-09T16:08:51.000Z | orbits_example/orbit.ipynb | zingale/ast341_examples | 0a15b9bf0b268b00021c59504eb7a2006b7e5ada | [
"BSD-3-Clause"
]
| null | null | null | orbits_example/orbit.ipynb | zingale/ast341_examples | 0a15b9bf0b268b00021c59504eb7a2006b7e5ada | [
"BSD-3-Clause"
]
| null | null | null | 85.826923 | 16,332 | 0.825633 | true | 1,001 | Qwen/Qwen-72B | 1. YES
2. YES | 0.939025 | 0.861538 | 0.809006 | __label__eng_Latn | 0.755735 | 0.717924 |
```python
import tensorflow as tf
#import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
import seaborn as sns
import collections
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
from PIL import Image
```
```python
```
```python
```
```python
def convert_to_circuit(image):
values = np.ndarray.flatten(image)
qubits = cirq.GridQubit.rect(4, 4)
circuit = cirq.Circuit()
for i, value in enumerate(values):
if value:
circuit.append(cirq.X(qubits[i]))
return circuit
```
```python
image=np.random.randint(255,size=(2,2))
#image=np.ones(255,size=(2,2))
plt.imshow(image,cmap='gray',vmin=0,vmax=255)
plt.show()
plt.imsave('grayimg.jpeg',image,cmap='gray')
```
```python
FILENAME='grayimg.jpeg'
image2 = Image.open(FILENAME)
pixels=np.asarray(image2)
pixels=pixels.astype('float32')
pixels2=pixels/255.0
print(pixels2)
THRESHOLD = 0.5
image_new_bin=np.array(pixels2 > THRESHOLD, dtype=np.float32)
image_new_bin
image_circuit=convert_to_circuit(image_new_bin)
```
```python
pix_val = list(image2.getdata())
pix_val
pix_val_flat = [x for sets in pix_val for x in sets]
pix_val_flat
```
```python
pixel_mat = np.array(image2.getdata())
width = image2.size[0]
pixel_ind = np.where((pixel_mat[:, :3] ==pix_val_flat[0]).any(axis=1))[0]
coordinate = np.concatenate(
[
(pixel_ind % width).reshape(-1, 1),
(pixel_ind // width).reshape(-1, 1),
],
axis=1,
)
coord=coordinate[0]
#xprime=list(coordinate[[0],coordinate[1])
xprime=coord.tolist()
xprime
```
```python
SVGCircuit(image_circuit)
```
```python
"""Get qubits to use in the circuit for Grover's algorithm."""
# Number of qubits n.
nqubits = 2
# Get qubit registers.
qubits = cirq.LineQubit.range(nqubits)
ancilla = cirq.NamedQubit("Ancilla")
```
```python
def make_oracle(qubits, ancilla, xprime):
"""Implements the function {f(x) = 1 if x == x', f(x) = 0 if x != x'}."""
# For x' = (1, 1), the oracle is just a Toffoli gate.
# For a general x', we negate the zero bits and implement a Toffoli.
# Negate zero bits, if necessary.
yield (cirq.X(q) for (q, bit) in zip(qubits, xprime) if not bit)
# Do the Toffoli.
yield (cirq.TOFFOLI(qubits[0], qubits[1], ancilla))
# Negate zero bits, if necessary.
yield (cirq.X(q) for (q, bit) in zip(qubits, xprime) if not bit)
```
```python
def grover_iteration(qubits, ancilla, oracle):
"""Performs one round of the Grover iteration."""
circuit = cirq.Circuit()
# Create an equal superposition over input qubits.
circuit.append(cirq.H.on_each(*qubits))
# Put the output qubit in the |-⟩ state.
circuit.append([cirq.X(ancilla), cirq.H(ancilla)])
# Query the oracle.
circuit.append(oracle)
# Construct Grover operator.
circuit.append(cirq.H.on_each(*qubits))
circuit.append(cirq.X.on_each(*qubits))
circuit.append(cirq.H.on(qubits[1]))
circuit.append(cirq.CNOT(qubits[0], qubits[1]))
circuit.append(cirq.H.on(qubits[1]))
circuit.append(cirq.X.on_each(*qubits))
circuit.append(cirq.H.on_each(*qubits))
# Measure the input register.
circuit.append(cirq.measure(*qubits, key="result"))
return circuit
```
```python
"""Select a 'marked' bitstring x' at random."""
xprime = [np.random.randint(0, 1) for _ in range(nqubits)]
xprime=list(coordinate)
xprime=coord.tolist()
print(f"Marked bitstring: {xprime}")
```
```python
"""Create the circuit for Grover's algorithm."""
# Make oracle (black box)
oracle = make_oracle(qubits, ancilla, xprime)
# Embed the oracle into a quantum circuit implementing Grover's algorithm.
circuit = grover_iteration(qubits, ancilla, oracle)
print("Circuit for Grover's algorithm:")
print(circuit)
```
```python
"""Simulate the circuit for Grover's algorithm and check the output."""
# Helper function.
def bitstring(bits):
return "".join(str(int(b)) for b in bits)
# Sample from the circuit a couple times.
simulator = cirq.Simulator()
result = simulator.run(circuit, repetitions=10)
# Look at the sampled bitstrings.
frequencies = result.histogram(key="result", fold_func=bitstring)
print('Sampled results:\n{}'.format(frequencies))
# Check if we actually found the secret value.
most_common_bitstring = frequencies.most_common(1)[0][0]
print("\nMost common bitstring: {}".format(most_common_bitstring))
print("Found a match? {}".format(most_common_bitstring == bitstring(xprime)))
```
| da290a1b88ef644d7cb066fd96c53926be155660 | 7,680 | ipynb | Jupyter Notebook | Quantum_Encoding_Image_data_v2.ipynb | Rukhsan/Quantum_Encoding | ad3fecc4d9124583e5ffd546be9250e3d9fac99c | [
"Apache-2.0"
]
| 1 | 2022-02-22T02:40:51.000Z | 2022-02-22T02:40:51.000Z | Quantum_Encoding_Image_data_v2.ipynb | Rukhsan/Quantum_Encoding | ad3fecc4d9124583e5ffd546be9250e3d9fac99c | [
"Apache-2.0"
]
| null | null | null | Quantum_Encoding_Image_data_v2.ipynb | Rukhsan/Quantum_Encoding | ad3fecc4d9124583e5ffd546be9250e3d9fac99c | [
"Apache-2.0"
]
| null | null | null | 27.725632 | 92 | 0.545703 | true | 1,260 | Qwen/Qwen-72B | 1. YES
2. YES | 0.831143 | 0.771844 | 0.641512 | __label__eng_Latn | 0.465298 | 0.328779 |
# Appendix A: Some Useful Identities for the Black-Scholes-Merton Model
The BSM price expression is as follows:
$$C(S,K,\tau,\sigma,r)=SN(d_1)-Ke^{-r\tau}N(d_2)$$
$$d_1=\frac{\ln\left(\frac{S_F}{K}\right)+\frac{\sigma^2}{2}\tau}{\sigma\sqrt{\tau}}$$
$$d_2=\frac{\ln\left(\frac{S_F}{K}\right)-\frac{\sigma^2}{2}\tau}{\sigma\sqrt{\tau}}$$
$$S_F=e^{r\tau}S$$
$$N(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}{e^{-\frac{1}{2}y^2}\mathrm{d}y}$$
## Useful Identities
$$N^{\prime}(x)=\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}$$
$$\frac{\partial C}{\partial\sigma}=\frac{1}{\sqrt{2\pi}}Se^{-\frac{1}{2}d_1^2}\sqrt{\tau}$$
$$KN^{\prime}(d_2)=S_FN^{\prime}(d_1)$$
$$\frac{\partial C}{\partial S}=N(d_1)$$
$$\frac{\partial d_{1,2}}{\partial K}=\frac{-1}{K\sigma\sqrt{\tau}}$$
$$\frac{\partial C}{\partial K}=-e^{-r\tau}N(d_2)$$
$$\frac{\partial d_{1,2}}{\partial\sigma}=\frac{-1}{\sigma^2\sqrt{\tau}}\ln\left(\frac{S_F}{K}\right)\pm\frac{1}{2}\sqrt{\tau}$$
## Proofs
### Setup
```python
import sympy as sy
import sympy.stats as systats
sy.init_printing()
```
```python
S, K, tau, sigma, r = sy.symbols("S K tau sigma r")
def S_F(tau, r, s):
return sy.exp(tau*r)*s
def d1(S, K, tau, sigma, r):
return (sy.ln(S_F(tau, r, S) / K) + 1/2 * sigma ** 2 * tau) / (sigma * sy.sqrt(tau))
def d2(S, K, tau, sigma, r):
return (sy.ln(S_F(tau, r, S) / K) - 1/2 * sigma ** 2 * tau) / (sigma * sy.sqrt(tau))
def N(x):
y = sy.Symbol('y')
return 1/sy.sqrt(2*sy.pi)*sy.Integral(sy.exp(-0.5*y**2), (y, -sy.oo , x))
def C(S, K, tau, sigma, r):
return S*N(d1(S, K, tau, sigma, r))-K*sy.exp(-r*tau)*N(d2(S, K, tau, sigma, r))
```
### 1. $$N(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}{e^{-\frac{1}{2}y^2}\mathrm{d}y}$$
```python
x = sy.Symbol("x")
N(x)
```
### 2. $$N^{\prime}(x)=\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}$$
```python
sy.diff(N(x),x)
```
### 3. $$KN^{\prime}(d_2)=S_FN^{\prime}(d_1)$$
```python
def Nprime(x):
return 1/sy.sqrt(2*sy.pi)*sy.exp(-1/2*x**2)
Nprime(x)
```
```python
K*Nprime(d2(S, K, tau, sigma, r))
```
```python
S*sy.exp(r*tau)*Nprime(d1(S, K, tau, sigma, r))
```
$$KN^\prime(d_2)=\frac{K}{\sqrt{2\pi}}e^{-\frac{1}{2}d_2^2}=\frac{K}{\sqrt{2\pi}}e^{-\frac{1}{2}\frac{\left[\ln\left(\frac{S_F}{K}\right)-\frac{\sigma^2}{2}\tau\right]^2}{\sigma^2\tau}}$$
$$S_FN^\prime(d_1)=\frac{S_F}{\sqrt{2\pi}}e^{-\frac{1}{2}d_1^2}=\frac{S_F}{\sqrt{2\pi}}e^{-\frac{1}{2}\frac{\left[\ln\left(\frac{S_F}{K}\right)+\frac{\sigma^2}{2}\tau\right]^2}{\sigma^2\tau}}$$
```python
sy.simplify(K*Nprime(d2(S, K, tau, sigma, r))/(S*sy.exp(r*tau)*Nprime(d1(S, K, tau, sigma, r))))
```
$$\frac{KN^\prime(d_2)}{S_FN^\prime(d_1)}=\frac{K}{S_F}e^{\frac{1}{2}\frac{2\ln{\left(\frac{S_F}{K}\right)}\sigma^2\tau}{\sigma^2\tau}}=1$$
### 4. $$\frac{\partial d_{1,2}}{\partial\sigma}=\frac{-1}{\sigma^2\sqrt{\tau}}\ln\left(\frac{S_F}{K}\right)\pm\frac{1}{2}\sqrt{\tau}$$
```python
sy.diff(d1(S, K, tau, sigma, r), sigma)
```
```python
sy.diff(d2(S, K, tau, sigma, r), sigma)
```
### 5. $$\frac{\partial C}{\partial\sigma}=\frac{1}{\sqrt{2\pi}}Se^{-\frac{1}{2}d_1^2}\sqrt{\tau}$$
```python
sy.diff(C(S, K, tau, sigma, r), sigma)
```
$$d_1=\frac{\ln\left(\frac{S_F}{K}\right)+\frac{\sigma^2}{2}\tau}{\sigma\sqrt{\tau}}$$
$$d_2=\frac{\ln\left(\frac{S_F}{K}\right)-\frac{\sigma^2}{2}\tau}{\sigma\sqrt{\tau}}$$
$$\frac{Ke^{-r\tau}\sqrt{\tau}}{\sqrt{2\pi}}(1+\frac{d_2}{\sigma\sqrt{\tau}})e^{-\frac{1}{2}d_2^2}+\frac{S\sqrt{\tau}}{\sqrt{2\pi}}(1-\frac{d_1}{\sigma\sqrt{\tau}})e^{-\frac{1}{2}d_1^2}$$
$$C(S,K,\tau,\sigma,r)=SN(d_1)-Ke^{-r\tau}N(d_2)$$
$$\frac{\partial C(S,K,\tau,\sigma,r)}{\partial\sigma}=SN^\prime(d_1)\frac{\partial d_1}{\partial\sigma}-Ke^{-r\tau}N^\prime(d_2)\frac{\partial d_2}{\partial\sigma}=SN^\prime(d_1)\left(\frac{\partial d_1}{\partial\sigma}-\frac{\partial d_2}{\partial\sigma}\right)$$
$$\frac{\partial d_{1,2}}{\partial\sigma}=\frac{-1}{\sigma^2\sqrt{\tau}}\ln\left(\frac{S_F}{K}\right)\pm\frac{1}{2}\sqrt{\tau}$$
$$\frac{\partial C(S,K,\tau,\sigma,r)}{\partial\sigma}=SN^\prime(d_1)\sqrt{\tau}=\frac{1}{\sqrt{2\pi}}Se^{-\frac{1}{2}d_1^2}\sqrt{\tau}$$
### 6. $$\frac{\partial d_{1,2}}{\partial K}=\frac{-1}{K\sigma\sqrt{\tau}}$$
```python
sy.diff(d1(S, K, tau, sigma, r), K)
```
```python
sy.diff(d2(S, K, tau, sigma, r), K)
```
### 7. $$\frac{\partial C}{\partial S}=N(d_1)$$
$$\frac{\partial C(S,K,\tau,\sigma,r)}{\partial S}=N(d_1)+SN^\prime(d_1)\frac{\partial d_1}{\partial S}-Ke^{-r\tau}N^\prime(d_2)\frac{\partial d_2}{\partial S}=N(d_1)$$
### 8. $$\frac{\partial C}{\partial K}=-e^{-r\tau}N(d_2)$$
$$\frac{\partial C(S,K,\tau,\sigma,r)}{\partial K}=SN^\prime(d_1)\frac{\partial d_1}{\partial K}-Ke^{-r\tau}N^\prime(d_2)\frac{\partial d_2}{\partial K}-e^{-r\tau}N(d_2)=-e^{-r\tau}N(d_2)$$
### 9. $$\frac{\partial C}{\partial\sigma^2}=\frac{1}{2\sigma\sqrt{2\pi}}Se^{-\frac{1}{2}d_1^2}\sqrt{\tau}$$
$$\frac{\partial C(S,K,\tau,\sigma,r)}{\partial\sigma^2}=SN^\prime(d_1)\frac{\partial d_1}{\partial\sigma^2}-Ke^{-r\tau}N^\prime(d_2)\frac{\partial d_2}{\partial\sigma^2}=SN^\prime(d_1)\left(\frac{\partial d_1}{\partial\sigma^2}-\frac{\partial d_2}{\partial\sigma^2}\right)$$
$$\frac{\partial d_1}{\partial\sigma^2}-\frac{\partial d_2}{\partial\sigma^2}=\frac{\partial\left(\sigma\sqrt{\tau}\right)}{\partial\sigma^2}=\sqrt{\tau}\frac{\partial\sigma}{\partial\sigma^2}=\frac{\sqrt{\tau}}{2\sigma}$$
$$\frac{\partial C(S,K,\tau,\sigma,r)}{\partial\sigma^2}=\frac{1}{2\sigma\sqrt{2\pi}}Se^{-\frac{1}{2}d_1^2}\sqrt{\tau}$$
| 1d633fbcd22af56b79f18579f766486da1b089d0 | 64,508 | ipynb | Jupyter Notebook | volatility-smile/.ipynb_checkpoints/appendix-a-checkpoint.ipynb | chenxin1-5/option-study | 722f425e4a3cad05ec8fbb3fc980fbdb9e85b655 | [
"Unlicense"
]
| 3 | 2021-04-05T14:50:01.000Z | 2021-11-12T11:27:02.000Z | volatility-smile/appendix-a.ipynb | chenxin1-5/option-study | 722f425e4a3cad05ec8fbb3fc980fbdb9e85b655 | [
"Unlicense"
]
| null | null | null | volatility-smile/appendix-a.ipynb | chenxin1-5/option-study | 722f425e4a3cad05ec8fbb3fc980fbdb9e85b655 | [
"Unlicense"
]
| null | null | null | 93.761628 | 15,492 | 0.757673 | true | 2,482 | Qwen/Qwen-72B | 1. YES
2. YES | 0.805632 | 0.70253 | 0.565981 | __label__kor_Hang | 0.078181 | 0.153293 |
# Grain Boundary Coupled Tilt Phase Field Model
```
# ----------- Importing The Libraries ------------ #
import numpy as np
import math as mt
from sympy.vector import Del
from sympy import integrals as intp
```
```
# -------------- Defining Parameters -------------- #
# Length of the simulation box
Ly = 1
# Normalized reference value
n0 = 1
# Chemical Potential at Equilibrium
muE = -1.33215
# Chemical Potential of Solid Phase
muS = 0.9 * muE
# Chemical Potential of Liquid Phase
muL = 1.1 * muE
# paramter used in evolution equation
alpha = 0.1
# Velocity of the crystals
v = 1.25 * mt.pow(10, -4)
# Parameter used in free energy density
epsilon = 0.25
# mesh - spacing in x direction
dx = 2 * np.pi / 8
# mesh - spacing in y direction
dy = 2 * np.pi / 8
# time-step
dt = 0.1
# length of unit box
a = 2 * np.pi
# paramter used in Chemical Potential Function
b = 10 * np.pi
# Parameter used in Chemical Potential Function
xi = 4 * np.pi
# Parameter used in G(y) Function
sigma = 2 * np.pi
# A value chosen such that
# the pulling is applied to solid strips
# near the bottom and top surfaces that are not melted
d = 12 * a
# ratio of the magnitudes of reciprocal lattice vectors
Q1 = mt.sqrt(2)
```
## Chemical Potential Function
### $\mu$ = $ \mu_{l} + \frac{(\mu_{l} - \mu_{s})}{2}(tanh[(y - L_{y} + b)/ \xi] - tanh[(y - b)/ \xi]) $
```
# chemical potential function
def mew(y):
    result1 = muL + ((muL - muS) / 2) * (mt.tanh((y - Ly + b) / xi) - mt.tanh((y - b) / xi))
return result1
```
## Crystal Field Density
### $\psi(x,y,t) = \frac{(n(x,y,t) - n_{0})}{n_{0}}$
```
# crystal field density
def Psi(x,y,t) :
result2 = (n(x,y,t) - n0) / n0
return result2
```
## Free Energy Density
### $f = \frac{\psi}{2} (-\epsilon +(\nabla^{2} + 1)^{2}[(\nabla ^{2} + Q_{1}^{2})])\psi + \frac{\psi^{4}}{4}$
```
# free energy density
def f(Psi) :
    result3 = (Psi / 2) * (-epsilon + (Del()**2 + 1)*((Del()**2 + Q1**2)**2))*Psi + (Psi**4)/4
return result3
```
## External Force for Shearing
### $ F_{ext} = \int dx dy [ G(y - d)(\psi (x,y,t) - \psi_{0}(x - vt,y))^{2} + G(y - L_{y} + d)(\psi (x,y,t) - \psi_{0}(x + vt,y))^{2} ]$
```
# defining external force
def Fext(x,y,t,G,Xi):
    function = G(y - d) * ( Xi(x,y,t) - Xi(x - v*t,y,0) ) ** 2 + G(y - Ly + d) * ( Xi(x,y,t) - Xi(x + v*t,y,0) ) ** 2
intwrty = intp.integrate(function,y) # integration wrt y
intwrtx = intp.integrate(intwrty,x) # integration wrt x
return intwrtx
```
## G(y) function inside $F_{ext}$
### $ G(y) = exp(-y^{2}/2\sigma^{2}) / \sqrt {2 \pi \sigma^{2}} $
```
def G(y):
    result4 = mt.exp(-(y*y) / (2 * sigma * sigma)) / mt.sqrt(2 * np.pi * (sigma*sigma))
return result4
```
## Free Energy Functional
$F = \int drf + F_{ext} $
```
# free energy functional
def F(x,y,f,Fext) :
    result5 = intp.integrate(intp.integrate(f, y), x) + Fext  # double integral of f over the domain, plus the external term
return result5
```
| 37988ecca2e26543bfe9592aa60b2c2fbc69e0a7 | 5,379 | ipynb | Jupyter Notebook | gbCoupledTiltPFM.ipynb | tapashreepradhan/phase-field-modelling-nptel | 8da1b8271481c3032b9e486620ab86d5eff3a20c | [
"Apache-2.0"
]
| null | null | null | gbCoupledTiltPFM.ipynb | tapashreepradhan/phase-field-modelling-nptel | 8da1b8271481c3032b9e486620ab86d5eff3a20c | [
"Apache-2.0"
]
| null | null | null | gbCoupledTiltPFM.ipynb | tapashreepradhan/phase-field-modelling-nptel | 8da1b8271481c3032b9e486620ab86d5eff3a20c | [
"Apache-2.0"
]
| null | null | null | 5,379 | 5,379 | 0.59881 | true | 1,041 | Qwen/Qwen-72B | 1. YES
2. YES | 0.909907 | 0.661923 | 0.602288 | __label__eng_Latn | 0.683684 | 0.237648 |
# Start-to-Finish Example: [TOV](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) Neutron Star Simulation: The "Hydro without Hydro" Test
## Authors: Zach Etienne & Phil Chang
### Formatting improvements courtesy Brandon Clark
## This module sets up initial data for a neutron star on a spherical numerical grid, using the approach [documented in the previous NRPy+ module](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb), and then evolves these initial data forward in time. The aim is to reproduce the results from [Baumgarte, Hughes, and Shapiro]( https://arxiv.org/abs/gr-qc/9902024) (which were performed using Cartesian grids); demonstrating that the extrinsic curvature and Hamiltonian constraint violation converge to zero with increasing numerical resolution
**Notebook Status:** <font color='green'><b> Validated </b></font>
**Validation Notes:** This module has been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution (see [plot](#convergence) at bottom). Note that convergence in the region causally influenced by the surface of the star will possess lower convergence order due to the sharp drop to zero in $T^{\mu\nu}$.
### NRPy+ Source Code for this module:
* [TOV/TOV_Solver.py](../edit/TOV/TOV_Solver.py); ([**NRPy+ Tutorial module reviewing mathematical formulation and equations solved**](Tutorial-ADM_Initial_Data-TOV.ipynb)); ([**start-to-finish NRPy+ Tutorial module demonstrating that initial data satisfy Hamiltonian constraint**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb)): Tolman-Oppenheimer-Volkoff (TOV) initial data; defines all ADM variables and nonzero $T^{\mu\nu}$ components in Spherical basis.
* [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb): *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function
* [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates
## Introduction:
Here we use NRPy+ to evolve initial data for a [simple polytrope TOV star](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation), keeping the $T^{\mu\nu}$ source terms fixed. As the hydrodynamical fields that go into $T^{\mu\nu}$ are not updated, this is called the "Hydro without Hydro" test.
The entire algorithm is outlined as follows, with links to the relevant NRPy+ tutorial notebooks listed at each step:
1. Allocate memory for gridfunctions, including temporary storage for the Method of Lines time integration [(**NRPy+ tutorial on NRPy+ Method of Lines algorithm**)](Tutorial-Method_of_Lines-C_Code_Generation.ipynb).
1. Set gridfunction values to initial data
* [**NRPy+ tutorial on TOV initial data**](Tutorial-ADM_Initial_Data-TOV.ipynb)
* [**NRPy+ tutorial on validating TOV initial data**](Tutorial-Start_to_Finish-BSSNCurvilinear-Setting_up_TOV_initial_data.ipynb).
1. Next, integrate the initial data forward in time using the Method of Lines coupled to a Runge-Kutta explicit timestepping algorithm:
1. At the start of each iteration in time, output the Hamiltonian constraint violation
* [**NRPy+ tutorial on BSSN constraints**](Tutorial-BSSN_constraints.ipynb).
1. At each RK time substep, do the following:
1. Evaluate BSSN RHS expressions
* [**NRPy+ tutorial on BSSN right-hand sides**](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb)
* [**NRPy+ tutorial on BSSN gauge condition right-hand sides**](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb)
* [**NRPy+ tutorial on adding stress-energy source terms to BSSN RHSs**](Tutorial-BSSN_stress_energy_source_terms.ipynb).
1. Apply singular, curvilinear coordinate boundary conditions [*a la* the SENR/NRPy+ paper](https://arxiv.org/abs/1712.07658)
* [**NRPy+ tutorial on setting up singular, curvilinear boundary conditions**](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
1. Enforce constraint on conformal 3-metric: $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$
* [**NRPy+ tutorial on enforcing $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint**](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb)
1. Repeat above steps at two numerical resolutions to confirm convergence to zero.
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows
1. [Step 1](#initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric
1. [Step 1.a](#cfl) Output needed C code for finding the minimum proper distance between grid points, needed for [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673)-limited timestep
1. [Step 2](#adm_id_tov): Set up ADM initial data for polytropic TOV Star
1. [Step 2.a](#tov_interp): Interpolate the TOV data file as needed to set up ADM spacetime quantities in spherical basis (for input into the `Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear` module) and $T^{\mu\nu}$ in the chosen reference metric basis
1. [Step 3](#adm_id_spacetime): Convert ADM spacetime quantity initial data to BSSN-in-curvilinear-coordinates
1. [Step 4](#bssn): Output C code for BSSN spacetime solve
1. [Step 4.a](#bssnrhs): Set up the BSSN right-hand-side (RHS) expressions, and add the *rescaled* $T^{\mu\nu}$ source terms
1. [Step 4.b](#hamconstraint): Output C code for Hamiltonian constraint
1. [Step 4.c](#enforce3metric): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$
1. [Step 4.d](#ccodegen): Generate C code kernels for BSSN expressions, in parallel if possible
1. [Step 4.e](#cparams_rfm_and_domainsize): Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h`
1. [Step 5](#bc_functs): Set up boundary condition functions for chosen singular, curvilinear coordinate system
1. [Step 6](#mainc): `TOV_Playground.c`: The Main C Code
1. [Step 7](#visualize): Data Visualization Animations
1. [Step 7.a](#installdownload): Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded
1. [Step 7.b](#genimages): Generate images for visualization animation
1. [Step 7.c](#genvideo): Generate visualization animation
1. [Step 8](#convergence): Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero
1. [Step 9](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='initializenrpy'></a>
# Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](#toc)\]
$$\label{initializenrpy}$$
```python
# Step P1: Import needed NRPy+ core modules:
from outputC import * # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("BSSN_Hydro_without_Hydro_Ccodes/")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
# !rm -r ScalarWaveCurvilinear_Playground_Ccodes
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(outdir)
# Step 1: Set the spatial dimension parameter
# to three this time, and then read
# the parameter as DIM.
par.set_parval_from_str("grid::DIM",3)
DIM = par.parval_from_str("grid::DIM")
# Step 2: Set some core parameters, including CoordSystem MoL timestepping algorithm,
# FD order, floating point precision, and CFL factor:
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
CoordSystem = "Spherical"
# Step 2.a: Set defaults for Coordinate system parameters.
# These are perhaps the most commonly adjusted parameters,
# so we enable modifications at this high level.
# domain_size = 7.5 # SET BELOW BASED ON TOV STELLAR RADIUS
# sinh_width sets the default value for:
# * SinhSpherical's params.SINHW
# * SinhCylindrical's params.SINHW{RHO,Z}
# * SinhSymTP's params.SINHWAA
sinh_width = 0.4 # If Sinh* coordinates chosen
# sinhv2_const_dr sets the default value for:
# * SinhSphericalv2's params.const_dr
# * SinhCylindricalv2's params.const_d{rho,z}
sinhv2_const_dr = 0.05# If Sinh*v2 coordinates chosen
# SymTP_bScale sets the default value for:
# * SinhSymTP's params.bScale
SymTP_bScale = 0.5 # If SymTP chosen
# Step 2.b: Set the order of spatial and temporal derivatives;
# the core data type, and the CFL factor.
# RK_method choices include: Euler, "RK2 Heun", "RK2 MP", "RK2 Ralston", RK3, "RK3 Heun", "RK3 Ralston",
# SSPRK3, RK4, DP5, DP5alt, CK5, DP6, L6, DP8
RK_method = "RK4"
FD_order = 4 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
REAL = "double" # Best to use double here.
CFL_FACTOR= 0.5 # (GETS OVERWRITTEN WHEN EXECUTED.) In pure axisymmetry (symmetry_axes = 2 below) 1.0 works fine. Otherwise 0.5 or lower.
# Set the lapse & shift to be consistent with the original Hydro without Hydro paper.
LapseCondition = "HarmonicSlicing"
ShiftCondition = "Frozen"
# Step 3: Generate Runge-Kutta-based (RK-based) timestepping code.
# As described above the Table of Contents, this is a 3-step process:
# 3.A: Evaluate RHSs (RHS_string)
# 3.B: Apply boundary conditions (post_RHS_string, pt 1)
# 3.C: Enforce det(gammabar) = det(gammahat) constraint (post_RHS_string, pt 2)
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
MoL.MoL_C_Code_Generation(RK_method,
RHS_string = """
Ricci_eval(&rfmstruct, ¶ms, RK_INPUT_GFS, auxevol_gfs);
rhs_eval(&rfmstruct, ¶ms, auxevol_gfs, RK_INPUT_GFS, RK_OUTPUT_GFS);""",
post_RHS_string = """
apply_bcs_curvilinear(¶ms, &bcstruct, NUM_EVOL_GFS, evol_gf_parity, RK_OUTPUT_GFS);
enforce_detgammabar_constraint(&rfmstruct, ¶ms, RK_OUTPUT_GFS);\n""",
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
# Step 4: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
# Step 5: Set the finite differencing order to FD_order (set above).
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", FD_order)
# Step 6: Copy SIMD/SIMD_intrinsics.h to $Ccodesdir/SIMD/SIMD_intrinsics.h
cmd.mkdir(os.path.join(Ccodesdir,"SIMD"))
shutil.copy(os.path.join("SIMD/")+"SIMD_intrinsics.h",os.path.join(Ccodesdir,"SIMD/"))
# Step 7: Set the direction=2 (phi) axis to be the symmetry axis; i.e.,
# axis "2", corresponding to the i2 direction.
# This sets all spatial derivatives in the phi direction to zero.
par.set_parval_from_str("indexedexp::symmetry_axes","2")
```
<a id='cfl'></a>
## Step 1.a: Output needed C code for finding the minimum proper distance between grid points, needed for [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673)-limited timestep \[Back to [top](#toc)\]
$$\label{cfl}$$
In order for our explicit-timestepping numerical solution to be stable, it must satisfy the [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673) condition:
$$
\Delta t \le \frac{\min(ds_i)}{c},
$$
where $c$ is the wavespeed, and
$$ds_i = h_i \Delta x^i$$
is the proper distance between neighboring gridpoints in the $i$th direction (in 3D, there are 3 directions), $h_i$ is the $i$th reference metric scale factor, and $\Delta x^i$ is the uniform grid spacing in the $i$th direction:
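For intuition only (the C function `find_timestep()` generated below is the authoritative implementation), a rough NumPy sketch of this logic for an ordinary Spherical grid, where the scale factors are $h_r=1$, $h_\theta=r$, and $h_\phi=r\sin\theta$, might look like the following; the grid sizes are illustrative:
```python
import numpy as np

# Illustrative-only sketch of the CFL timestep estimate on a uniform Spherical grid.
def find_timestep_spherical(rmax, Nr, Nth, Nph, CFL_FACTOR=0.5, wavespeed=1.0):
    dr, dth, dph = rmax / Nr, np.pi / Nth, 2.0 * np.pi / Nph
    r  = (np.arange(Nr)  + 0.5) * dr    # cell-centered radii
    th = (np.arange(Nth) + 0.5) * dth   # cell-centered polar angles
    R, TH = np.meshgrid(r, th, indexing="ij")
    ds_r  = dr * np.ones_like(R)        # h_r     * Delta r
    ds_th = R * dth                     # h_theta * Delta theta
    ds_ph = R * np.sin(TH) * dph        # h_phi   * Delta phi
    ds_min = min(ds_r.min(), ds_th.min(), ds_ph.min())
    return CFL_FACTOR * ds_min / wavespeed

print(find_timestep_spherical(rmax=1.62, Nr=96, Nth=16, Nph=2))
```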
```python
# Output the find_timestep() function to a C file.
rfm.out_timestep_func_to_file(os.path.join(Ccodesdir,"find_timestep.h"))
```
<a id='adm_id_tov'></a>
# Step 2: Set up ADM initial data for polytropic TOV Star \[Back to [top](#toc)\]
$$\label{adm_id_tov}$$
As documented [in the TOV Initial Data NRPy+ Tutorial Module](Tutorial-TOV_Initial_Data.ipynb) ([older version here](Tutorial-GRMHD_UnitConversion.ipynb)), we will now set up TOV initial data, storing the densely-sampled result to file (***Courtesy Phil Chang***).
The TOV solver uses an ODE integration routine provided by scipy, so we first make sure that scipy is installed:
```python
!pip install scipy > /dev/null
```
Next we call the [TOV.TOV_Solver() function](../edit/TOV/TOV_Solver.py) ([NRPy+ Tutorial module](Tutorial-ADM_Initial_Data-TOV.ipynb)) to set up the initial data, using the default parameters for initial data. This function outputs the solution to a file named "outputTOVpolytrope.txt".
```python
############################
# Single polytrope example #
############################
import TOV.Polytropic_EOSs as ppeos
# Set neos = 1 (single polytrope)
neos = 1
# Set rho_poly_tab (not needed for a single polytrope)
rho_poly_tab = []
# Set Gamma_poly_tab
Gamma_poly_tab = [2.0]
# Set K_poly_tab0
K_poly_tab0 = 1. # ZACH NOTES: CHANGED FROM 100.
# Set the eos quantities
eos = ppeos.set_up_EOS_parameters__complete_set_of_input_variables(neos,rho_poly_tab,Gamma_poly_tab,K_poly_tab0)
import TOV.TOV_Solver as TOV
M_TOV, R_Schw_TOV, R_iso_TOV = TOV.TOV_Solver(eos,
outfile="outputTOVpolytrope.txt",
rho_baryon_central=0.129285,
return_M_RSchw_and_Riso = True,
verbose = True)
domain_size = 2.0 * R_iso_TOV
```
1256 1256 1256 1256 1256 1256
Just generated a TOV star with
* M = 1.405030336771405e-01 ,
* R_Schw = 9.566044579232513e-01 ,
* R_iso = 8.100085557410308e-01 ,
* M/R_Schw = 1.468768334847266e-01
<a id='tov_interp'></a>
## Step 2.a: Interpolate the TOV data file as needed to set up ADM spacetime quantities in spherical basis (for input into the `Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear` module) and $T^{\mu\nu}$ in the chosen reference metric basis \[Back to [top](#toc)\]
$$\label{tov_interp}$$
The TOV data file just written stored $\left(r,\rho(r),P(r),M(r),e^{\nu(r)}\right)$, where $\rho(r)$ is the total mass-energy density (cf. $\rho_{\text{baryonic}}$).
**METRIC DATA IN TERMS OF ADM QUANTITIES**
The [TOV line element](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation) in *Schwarzschild coordinates* is written (in the $-+++$ form):
$$
ds^2 = - c^2 e^\nu dt^2 + \left(1 - \frac{2GM}{rc^2}\right)^{-1} dr^2 + r^2 d\Omega^2.
$$
In *isotropic coordinates* with $G=c=1$ (i.e., the coordinate system we'd prefer to use), the ($-+++$ form) line element is written:
$$
ds^2 = - e^{\nu} dt^2 + e^{4\phi} \left(d\bar{r}^2 + \bar{r}^2 d\Omega^2\right),
$$
where $\phi$ here is the *conformal factor*.
The ADM 3+1 line element for this diagonal metric in isotropic spherical coordinates is given by:
$$
ds^2 = (-\alpha^2 + \beta_k \beta^k) dt^2 + \gamma_{\bar{r}\bar{r}} d\bar{r}^2 + \gamma_{\theta\theta} d\theta^2+ \gamma_{\phi\phi} d\phi^2,
$$
from which we can immediately read off the ADM quantities:
\begin{align}
\alpha &= e^{\nu(\bar{r})/2} \\
\beta^k &= 0 \\
\gamma_{\bar{r}\bar{r}} &= e^{4\phi}\\
\gamma_{\theta\theta} &= e^{4\phi} \bar{r}^2 \\
\gamma_{\phi\phi} &= e^{4\phi} \bar{r}^2 \sin^2 \theta \\
\end{align}
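As a quick sanity check of this read-off (the following cell is an addition for illustration only and is not part of the original pipeline), one can reassemble the 4-metric from $\alpha$, $\beta^i=0$, and $\gamma_{ij}$ and confirm that it reproduces the isotropic line element above:
```python
# Sanity check (illustrative, not used by the codegen below): the ADM quantities read off
# above must reassemble into the isotropic TOV line element.
import sympy as sp

rbar, th, nu, phi = sp.symbols("rbar theta nu phi", positive=True)
alpha = sp.exp(nu/2)                                        # lapse
gammaDD = sp.diag(sp.exp(4*phi),                            # gamma_{rbar rbar}
                  sp.exp(4*phi)*rbar**2,                    # gamma_{theta theta}
                  sp.exp(4*phi)*rbar**2*sp.sin(th)**2)      # gamma_{phi phi}
g4 = sp.diag(-alpha**2, *[gammaDD[i, i] for i in range(3)]) # beta^i = 0
g4_expected = sp.diag(-sp.exp(nu), sp.exp(4*phi),
                      sp.exp(4*phi)*rbar**2, sp.exp(4*phi)*rbar**2*sp.sin(th)**2)
print(sp.simplify(g4 - g4_expected))  # expect the zero matrix
```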
**STRESS-ENERGY TENSOR $T^{\mu\nu}$**
We will also need the stress-energy tensor $T^{\mu\nu}$. [As discussed here](https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation), the stress-energy tensor is diagonal:
\begin{align}
T^t_t &= -\rho \\
T^i_j &= P \delta^i_j \\
\text{All other components of }T^\mu_\nu &= 0.
\end{align}
Since $\beta^i=0$ the inverse metric expression simplifies to (Eq. 4.49 in [Gourgoulhon](https://arxiv.org/pdf/gr-qc/0703035.pdf)):
$$
g^{\mu\nu} = \begin{pmatrix}
-\frac{1}{\alpha^2} & \frac{\beta^i}{\alpha^2} \\
\frac{\beta^i}{\alpha^2} & \gamma^{ij} - \frac{\beta^i\beta^j}{\alpha^2}
\end{pmatrix} =
\begin{pmatrix}
-\frac{1}{\alpha^2} & 0 \\
0 & \gamma^{ij}
\end{pmatrix},
$$
and since the 3-metric is diagonal we get
\begin{align}
\gamma^{\bar{r}\bar{r}} &= e^{-4\phi}\\
\gamma^{\theta\theta} &= e^{-4\phi}\frac{1}{\bar{r}^2} \\
\gamma^{\phi\phi} &= e^{-4\phi}\frac{1}{\bar{r}^2 \sin^2 \theta}.
\end{align}
Thus raising $T^\mu_\nu$ yields a diagonal $T^{\mu\nu}$
\begin{align}
T^{tt} &= -g^{tt} \rho = \frac{1}{\alpha^2} \rho = e^{-\nu(\bar{r})} \rho \\
T^{\bar{r}\bar{r}} &= g^{\bar{r}\bar{r}} P = \frac{1}{e^{4 \phi}} P \\
T^{\theta\theta} &= g^{\theta\theta} P = \frac{1}{e^{4 \phi}\bar{r}^2} P\\
T^{\phi\phi} &= g^{\phi\phi} P = \frac{1}{e^{4\phi}\bar{r}^2 \sin^2 \theta} P
\end{align}
```python
thismodule = "HydrowithoutHydro"
rbar,theta,rho,P,expnu,exp4phi = par.Cparameters("REAL",thismodule,
["rbar","theta","rho","P","expnu","exp4phi"],1e300)
IDalpha = sp.sqrt(expnu)
gammaSphDD = ixp.zerorank2(DIM=3)
gammaSphDD[0][0] = exp4phi
gammaSphDD[1][1] = exp4phi*rbar**2
gammaSphDD[2][2] = exp4phi*rbar**2*sp.sin(theta)**2
T4SphUU = ixp.zerorank2(DIM=4)
T4SphUU[0][0] = rho/expnu
T4SphUU[1][1] = P/exp4phi
T4SphUU[2][2] = P/(exp4phi*rbar**2)
T4SphUU[3][3] = P/(exp4phi*rbar**2*sp.sin(theta)**2)
```
```python
expr_list = [IDalpha]
name_list = ["*alpha"]
for i in range(3):
for j in range(i,3):
expr_list.append(gammaSphDD[i][j])
name_list.append("*gammaDD"+str(i)+str(j))
desc = """This function takes as input either (x,y,z) or (r,th,ph) and outputs
all ADM quantities in the Cartesian or Spherical basis, respectively."""
name = "ID_TOV_ADM_quantities"
outCparams = "preindent=1,outCverbose=False,includebraces=False"
outCfunction(
outfile=os.path.join(Ccodesdir, name + ".h"), desc=desc, name=name,
params=""" const REAL xyz_or_rthph[3],
const ID_inputs other_inputs,
REAL *gammaDD00,REAL *gammaDD01,REAL *gammaDD02,REAL *gammaDD11,REAL *gammaDD12,REAL *gammaDD22,
REAL *KDD00,REAL *KDD01,REAL *KDD02,REAL *KDD11,REAL *KDD12,REAL *KDD22,
REAL *alpha,
REAL *betaU0,REAL *betaU1,REAL *betaU2,
REAL *BU0,REAL *BU1,REAL *BU2""",
body="""
// Set trivial metric quantities:
*KDD00 = *KDD01 = *KDD02 = 0.0;
/**/ *KDD11 = *KDD12 = 0.0;
/**/ *KDD22 = 0.0;
*betaU0 = *betaU1 = *betaU2 = 0.0;
*BU0 = *BU1 = *BU2 = 0.0;
// Next set gamma_{ij} in spherical basis
const REAL rbar = xyz_or_rthph[0];
const REAL theta = xyz_or_rthph[1];
const REAL phi = xyz_or_rthph[2];
REAL rho,rho_baryon,P,M,expnu,exp4phi;
TOV_interpolate_1D(rbar,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.rho_baryon_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&rho_baryon,&P,&M,&expnu,&exp4phi);\n"""+
outputC(expr_list,name_list, "returnstring",outCparams),
opts="DisableCparameters")
```
Output C function ID_TOV_ADM_quantities() to file BSSN_Hydro_without_Hydro_Ccodes/ID_TOV_ADM_quantities.h
As all input quantities are functions of $r$, we will simply read the solution from file and interpolate it to the values of $r$ needed by the initial data.
1. First we define functions `ID_TOV_ADM_quantities()` and `ID_TOV_TUPMUNU_xx0xx1xx2()` that call the [1D TOV interpolator function](../edit/TOV/tov_interp.h) to evaluate the ADM spacetime quantities and $T^{\mu\nu}$, respectively, at any given point $(r,\theta,\phi)$ in the Spherical basis. All quantities are defined as above.
1. Next we will construct the BSSN/ADM source terms $\{S_{ij},S_{i},S,\rho\}$ in the Spherical basis
1. Then we will perform the Jacobian transformation on $\{S_{ij},S_{i},S,\rho\}$ to the desired `(xx0,xx1,xx2)` basis
1. Next we call the *Numerical* Spherical ADM$\to$Curvilinear BSSN converter function to convert the above ADM quantities to the rescaled BSSN quantities in the desired curvilinear coordinate system: [BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py); [\[**tutorial**\]](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).
$$
{\rm Jac\_dUSph\_dDrfmUD[mu][nu]} = \frac{\partial x^\mu_{\rm Sph}}{\partial x^\nu_{\rm rfm}},
$$
via exact differentiation (courtesy SymPy), and the inverse Jacobian
$$
{\rm Jac\_dUrfm\_dDSphUD[mu][nu]} = \frac{\partial x^\mu_{\rm rfm}}{\partial x^\nu_{\rm Sph}},
$$
using NRPy+'s `generic_matrix_inverter4x4()` function (as in the code below). In terms of these, the transformation of the stress-energy tensor from Spherical to `"reference_metric::CoordSystem"` coordinates may be written:
$$
T^{\mu\nu}_{\rm rfm} =
\frac{\partial x^\mu_{\rm rfm}}{\partial x^\delta_{\rm Sph}}
\frac{\partial x^\nu_{\rm rfm}}{\partial x^\sigma_{\rm Sph}} T^{\delta\sigma}_{\rm Sph}
$$
```python
r_th_ph_or_Cart_xyz_oID_xx = []
CoordType_in = "Spherical"
if CoordType_in == "Spherical":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xxSph
elif CoordType_in == "Cartesian":
r_th_ph_or_Cart_xyz_oID_xx = rfm.xxCart
else:
print("Error: Can only convert ADM Cartesian or Spherical initial data to BSSN Curvilinear coords.")
exit(1)
# Next apply Jacobian transformations to convert into the (xx0,xx1,xx2) basis
# rho and S are scalar, so no Jacobian transformations are necessary.
Jac4_dUSphorCart_dDrfmUD = ixp.zerorank2(DIM=4)
Jac4_dUSphorCart_dDrfmUD[0][0] = sp.sympify(1)
for i in range(DIM):
for j in range(DIM):
Jac4_dUSphorCart_dDrfmUD[i+1][j+1] = sp.diff(r_th_ph_or_Cart_xyz_oID_xx[i],rfm.xx[j])
Jac4_dUrfm_dDSphorCartUD, dummyDET = ixp.generic_matrix_inverter4x4(Jac4_dUSphorCart_dDrfmUD)
# Perform Jacobian operations on T^{mu nu} and gamma_{ij}
T4UU = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","T4UU","sym01",DIM=4)
IDT4UU = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
for delta in range(4):
for sigma in range(4):
IDT4UU[mu][nu] += \
Jac4_dUrfm_dDSphorCartUD[mu][delta]*Jac4_dUrfm_dDSphorCartUD[nu][sigma]*T4SphUU[delta][sigma]
lhrh_list = []
for mu in range(4):
for nu in range(mu,4):
lhrh_list.append(lhrh(lhs=gri.gfaccess("auxevol_gfs","T4UU"+str(mu)+str(nu)),rhs=IDT4UU[mu][nu]))
desc = """This function takes as input either (x,y,z) or (r,th,ph) and outputs
all ADM quantities in the Cartesian or Spherical basis, respectively."""
name = "ID_TOV_TUPMUNU_xx0xx1xx2"
outCparams = "preindent=1,outCverbose=False,includebraces=False"
outCfunction(
outfile=os.path.join(Ccodesdir, name + ".h"), desc=desc, name=name,
params="""const paramstruct *restrict params,REAL *restrict xx[3],
const ID_inputs other_inputs,REAL *restrict auxevol_gfs""",
body=outputC([rfm.xxSph[0],rfm.xxSph[1],rfm.xxSph[2]],
["const REAL rbar","const REAL theta","const REAL ph"],"returnstring",
"CSE_enable=False,includebraces=False")+"""
REAL rho,rho_baryon,P,M,expnu,exp4phi;
TOV_interpolate_1D(rbar,other_inputs.Rbar,other_inputs.Rbar_idx,other_inputs.interp_stencil_size,
other_inputs.numlines_in_file,
other_inputs.r_Schw_arr,other_inputs.rho_arr,other_inputs.rho_baryon_arr,other_inputs.P_arr,other_inputs.M_arr,
other_inputs.expnu_arr,other_inputs.exp4phi_arr,other_inputs.rbar_arr,
&rho,&rho_baryon,&P,&M,&expnu,&exp4phi);\n"""+
fin.FD_outputC("returnstring",lhrh_list,params="outCverbose=False,includebraces=False").replace("IDX4","IDX4S"),
loopopts="AllPoints,Read_xxs")
```
Output C function ID_TOV_TUPMUNU_xx0xx1xx2() to file BSSN_Hydro_without_Hydro_Ccodes/ID_TOV_TUPMUNU_xx0xx1xx2.h
<a id='adm_id_spacetime'></a>
# Step 3: Convert ADM initial data to BSSN-in-curvilinear coordinates \[Back to [top](#toc)\]
$$\label{adm_id_spacetime}$$
This is an automated process, taken care of by [`BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear`](../edit/BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear.py), and documented [in this tutorial notebook](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb).
```python
import BSSN.ADM_Numerical_Spherical_or_Cartesian_to_BSSNCurvilinear as AtoBnum
AtoBnum.Convert_Spherical_or_Cartesian_ADM_to_BSSN_curvilinear("Spherical","ID_TOV_ADM_quantities",
Ccodesdir=Ccodesdir,loopopts="")
```
Output C function ID_BSSN_lambdas() to file BSSN_Hydro_without_Hydro_Ccodes/ID_BSSN_lambdas.h
Output C function ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs() to file BSSN_Hydro_without_Hydro_Ccodes/ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h
Output C function ID_BSSN__ALL_BUT_LAMBDAs() to file BSSN_Hydro_without_Hydro_Ccodes/ID_BSSN__ALL_BUT_LAMBDAs.h
<a id='bssn'></a>
# Step 4: Output C code for BSSN spacetime solve \[Back to [top](#toc)\]
$$\label{bssn}$$
<a id='bssnrhs'></a>
## Step 4.a: Set up the BSSN right-hand-side (RHS) expressions, and add the *rescaled* $T^{\mu\nu}$ source terms \[Back to [top](#toc)\]
$$\label{bssnrhs}$$
`BSSN.BSSN_RHSs()` sets up the RHSs assuming a spacetime vacuum: $T^{\mu\nu}=0$. (This might seem weird, but remember that, for example, *spacetimes containing only single or binary black holes are vacuum spacetimes*.) Here, using the [`BSSN.BSSN_stress_energy_source_terms`](../edit/BSSN/BSSN_stress_energy_source_terms.py) ([**tutorial**](Tutorial-BSSN_stress_energy_source_terms.ipynb)) NRPy+ module, we add the $T^{\mu\nu}$ source terms to these equations.
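For orientation, these matter couplings take the standard BSSN form; for example, the evolution equation for the trace of the extrinsic curvature picks up the term
$$
\partial_t K \ni 4\pi \alpha \left(\rho + S\right), \qquad \rho = n_\mu n_\nu T^{\mu\nu}, \quad S = \gamma^{ij} S_{ij},
$$
and analogous terms (the `Bsest.sourceterm_*` expressions added in the next cell) appear in the $\bar{\Lambda}^i$, $\lambda^i$, and $a_{ij}$ right-hand sides; see the linked stress-energy source terms tutorial for the full set.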
```python
import time
import BSSN.BSSN_RHSs as rhs
import BSSN.BSSN_gauge_RHSs as gaugerhs
par.set_parval_from_str("BSSN.BSSN_gauge_RHSs::LapseEvolutionOption", LapseCondition)
par.set_parval_from_str("BSSN.BSSN_gauge_RHSs::ShiftEvolutionOption", ShiftCondition)
print("Generating symbolic expressions for BSSN RHSs...")
start = time.time()
# Enable rfm_precompute infrastructure, which results in
# BSSN RHSs that are free of transcendental functions,
# even in curvilinear coordinates, so long as
# ConformalFactor is set to "W" (default).
cmd.mkdir(os.path.join(Ccodesdir,"rfm_files/"))
par.set_parval_from_str("reference_metric::enable_rfm_precompute","True")
par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir",os.path.join(Ccodesdir,"rfm_files/"))
# Evaluate BSSN + BSSN gauge RHSs with rfm_precompute enabled:
import BSSN.BSSN_quantities as Bq
par.set_parval_from_str("BSSN.BSSN_quantities::LeaveRicciSymbolic","True")
rhs.BSSN_RHSs()
import BSSN.BSSN_stress_energy_source_terms as Bsest
Bsest.BSSN_source_terms_for_BSSN_RHSs(T4UU)
rhs.trK_rhs += Bsest.sourceterm_trK_rhs
for i in range(DIM):
# Needed for Gamma-driving shift RHSs:
rhs.Lambdabar_rhsU[i] += Bsest.sourceterm_Lambdabar_rhsU[i]
# Needed for BSSN RHSs:
rhs.lambda_rhsU[i] += Bsest.sourceterm_lambda_rhsU[i]
for j in range(DIM):
rhs.a_rhsDD[i][j] += Bsest.sourceterm_a_rhsDD[i][j]
gaugerhs.BSSN_gauge_RHSs()
# We use betaU as our upwinding control vector:
Bq.BSSN_basic_tensors()
betaU = Bq.betaU
import BSSN.Enforce_Detgammabar_Constraint as EGC
enforce_detg_constraint_symb_expressions = EGC.Enforce_Detgammabar_Constraint_symb_expressions()
# Next compute Ricci tensor
par.set_parval_from_str("BSSN.BSSN_quantities::LeaveRicciSymbolic","False")
Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
# Now register the Hamiltonian as a gridfunction.
H = gri.register_gridfunctions("AUX","H")
# Then define the Hamiltonian constraint and output the optimized C code.
import BSSN.BSSN_constraints as bssncon
bssncon.BSSN_constraints(add_T4UUmunu_source_terms=False)
Bsest.BSSN_source_terms_for_BSSN_constraints(T4UU)
bssncon.H += Bsest.sourceterm_H
# Now that we are finished with all the rfm hatted
# quantities in generic precomputed functional
# form, let's restore them to their closed-
# form expressions.
par.set_parval_from_str("reference_metric::enable_rfm_precompute","False") # Reset to False to disable rfm_precompute.
rfm.ref_metric__hatted_quantities()
end = time.time()
print("Finished BSSN symbolic expressions in "+str(end-start)+" seconds.")
def BSSN_RHSs():
print("Generating C code for BSSN RHSs in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
# Construct the left-hand sides and right-hand-side expressions for all BSSN RHSs
lhs_names = [ "alpha", "cf", "trK"]
rhs_exprs = [gaugerhs.alpha_rhs, rhs.cf_rhs, rhs.trK_rhs]
for i in range(3):
lhs_names.append( "betU"+str(i))
rhs_exprs.append(gaugerhs.bet_rhsU[i])
lhs_names.append( "lambdaU"+str(i))
rhs_exprs.append(rhs.lambda_rhsU[i])
lhs_names.append( "vetU"+str(i))
rhs_exprs.append(gaugerhs.vet_rhsU[i])
for j in range(i,3):
lhs_names.append( "aDD"+str(i)+str(j))
rhs_exprs.append(rhs.a_rhsDD[i][j])
lhs_names.append( "hDD"+str(i)+str(j))
rhs_exprs.append(rhs.h_rhsDD[i][j])
# Sort the lhss list alphabetically, and rhss to match.
# This ensures the RHSs are evaluated in the same order
# they're allocated in memory:
lhs_names,rhs_exprs = [list(x) for x in zip(*sorted(zip(lhs_names,rhs_exprs), key=lambda pair: pair[0]))]
# Declare the list of lhrh's
BSSN_evol_rhss = []
for var in range(len(lhs_names)):
BSSN_evol_rhss.append(lhrh(lhs=gri.gfaccess("rhs_gfs",lhs_names[var]),rhs=rhs_exprs[var]))
# Set up the C function for the BSSN RHSs
desc="Evaluate the BSSN RHSs"
name="rhs_eval"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
const REAL *restrict auxevol_gfs,const REAL *restrict in_gfs,REAL *restrict rhs_gfs""",
body = fin.FD_outputC("returnstring",BSSN_evol_rhss, params="outCverbose=False,SIMD_enable=True",
upwindcontrolvec=betaU).replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,EnableSIMD,Enable_rfm_precompute")
end = time.time()
print("Finished BSSN_RHS C codegen in " + str(end - start) + " seconds.")
def Ricci():
print("Generating C code for Ricci tensor in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
desc="Evaluate the Ricci tensor"
name="Ricci_eval"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
const REAL *restrict in_gfs,REAL *restrict auxevol_gfs""",
body = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD00"),rhs=Bq.RbarDD[0][0]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD01"),rhs=Bq.RbarDD[0][1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD02"),rhs=Bq.RbarDD[0][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD11"),rhs=Bq.RbarDD[1][1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD12"),rhs=Bq.RbarDD[1][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","RbarDD22"),rhs=Bq.RbarDD[2][2])],
params="outCverbose=False,SIMD_enable=True").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,EnableSIMD,Enable_rfm_precompute")
end = time.time()
print("Finished Ricci C codegen in " + str(end - start) + " seconds.")
```
Generating symbolic expressions for BSSN RHSs...
Finished BSSN symbolic expressions in 5.461156368255615 seconds.
<a id='hamconstraint'></a>
## Step 4.b: Output the Hamiltonian constraint \[Back to [top](#toc)\]
$$\label{hamconstraint}$$
Next, we output the C code for evaluating the Hamiltonian constraint [(**Tutorial**)](Tutorial-BSSN_constraints.ipynb). In the absence of numerical error, this constraint should evaluate to zero; in practice it does not, due to numerical (typically truncation and roundoff) error. We will therefore measure the Hamiltonian constraint violation to gauge the accuracy of our simulation and, ultimately, to determine whether errors are dominated by numerical finite-differencing (truncation) error, as expected.
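For reference, the quantity being evaluated is (up to the overall normalization adopted in the NRPy+ constraints module) the ADM Hamiltonian constraint with the matter source added above:
$$
\mathcal{H} = R + K^2 - K_{ij}K^{ij} - 16\pi\rho = 0,
$$
where $R$ is the Ricci scalar of the spatial 3-metric, $K_{ij}$ is the extrinsic curvature with trace $K$, and $\rho = n_\mu n_\nu T^{\mu\nu}$ is the mass-energy density measured by normal observers.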
```python
def Hamiltonian():
start = time.time()
print("Generating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.")
# Set up the C function for the Hamiltonian RHS
desc="Evaluate the Hamiltonian constraint"
name="Hamiltonian_constraint"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
REAL *restrict in_gfs, REAL *restrict auxevol_gfs, REAL *restrict aux_gfs""",
body = fin.FD_outputC("returnstring",lhrh(lhs=gri.gfaccess("aux_gfs", "H"), rhs=bssncon.H),
params="outCverbose=False").replace("IDX4","IDX4S"),
loopopts = "InteriorPoints,Enable_rfm_precompute")
end = time.time()
print("Finished Hamiltonian C codegen in " + str(end - start) + " seconds.")
```
<a id='enforce3metric'></a>
## Step 4.c: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](#toc)\]
$$\label{enforce3metric}$$
Then we enforce the conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb).
Applying curvilinear boundary conditions affects the initial data at the outer boundary and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus, after applying these boundary conditions, we must always call the routine that enforces the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint:
```python
def gammadet():
start = time.time()
print("Generating optimized C code for gamma constraint. May take a while, depending on CoordSystem.")
# Set up the C function for the det(gammahat) = det(gammabar)
EGC.output_Enforce_Detgammabar_Constraint_Ccode(Ccodesdir,exprs=enforce_detg_constraint_symb_expressions)
end = time.time()
print("Finished gamma constraint C codegen in " + str(end - start) + " seconds.")
```
<a id='ccodegen'></a>
## Step 4.d: Generate C code kernels for BSSN expressions, in parallel if possible \[Back to [top](#toc)\]
$$\label{ccodegen}$$
```python
# Step 1: Create a list of functions we wish to evaluate in parallel
funcs = [BSSN_RHSs,Ricci,Hamiltonian,gammadet]
try:
if os.name == 'nt':
# It's a mess to get working in Windows, so we don't bother. :/
# https://medium.com/@grvsinghal/speed-up-your-python-code-using-multiprocessing-on-windows-and-jupyter-or-ipython-2714b49d6fac
raise Exception("Parallel codegen currently not available in Windows")
# Step 1.a: Import the multiprocessing module.
import multiprocessing
# Step 1.b: Define master function for parallelization.
# Note that lambdifying this doesn't work in Python 3
def master_func(arg):
funcs[arg]()
# Step 1.c: Evaluate list of functions in parallel if possible;
# otherwise fallback to serial evaluation:
pool = multiprocessing.Pool()
pool.map(master_func,range(len(funcs)))
except:
# Steps 1.b-1.c, alternate: As fallback, evaluate functions in serial.
for func in funcs:
func()
```
Generating optimized C code for Hamiltonian constraint. May take a while, depending on CoordSystem.
Generating C code for BSSN RHSs in Spherical coordinates.
Generating optimized C code for gamma constraint. May take a while, depending on CoordSystem.
Generating C code for Ricci tensor in Spherical coordinates.
Output C function enforce_detgammabar_constraint() to file BSSN_Hydro_without_Hydro_Ccodes/enforce_detgammabar_constraint.h
Finished gamma constraint C codegen in 0.09400129318237305 seconds.
Output C function Hamiltonian_constraint() to file BSSN_Hydro_without_Hydro_Ccodes/Hamiltonian_constraint.h
Finished Hamiltonian C codegen in 6.726282358169556 seconds.
Output C function rhs_eval() to file BSSN_Hydro_without_Hydro_Ccodes/rhs_eval.h
Finished BSSN_RHS C codegen in 15.022368431091309 seconds.
Output C function Ricci_eval() to file BSSN_Hydro_without_Hydro_Ccodes/Ricci_eval.h
Finished Ricci C codegen in 45.88155126571655 seconds.
<a id='cparams_rfm_and_domainsize'></a>
## Step 4.e: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](#toc)\]
$$\label{cparams_rfm_and_domainsize}$$
Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.
Then we output `free_parameters.h`, which sets initial-data parameters as well as grid-domain and reference-metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above.
```python
# Step 3.d.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Step 3.d.ii: Set free_parameters.h
# Output to $Ccodesdir/free_parameters.h reference metric parameters based on generic
# domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale,
# parameters set above.
rfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,"free_parameters.h"),
domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale)
# Step 1.c.ii: Generate set_Nxx_dxx_invdx_params__and__xx.h:
rfm.set_Nxx_dxx_invdx_params__and__xx_h(Ccodesdir)
# Step 1.c.iii: Generate xxCart.h, which contains xxCart() for
# (the mapping from xx->Cartesian) for the chosen
# CoordSystem:
rfm.xxCart_h("xxCart","./set_Cparameters.h",os.path.join(Ccodesdir,"xxCart.h"))
# Step 1.c.iv: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
```
<a id='bc_functs'></a>
# Step 5: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](#toc)\]
$$\label{bc_functs}$$
Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
```python
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),Cparamspath=os.path.join("../"))
```
Wrote to file "BSSN_Hydro_without_Hydro_Ccodes/boundary_conditions/parity_conditions_symbolic_dot_products.h"
Evolved gridfunction "aDD00" has parity type 4.
Evolved gridfunction "aDD01" has parity type 5.
Evolved gridfunction "aDD02" has parity type 6.
Evolved gridfunction "aDD11" has parity type 7.
Evolved gridfunction "aDD12" has parity type 8.
Evolved gridfunction "aDD22" has parity type 9.
Evolved gridfunction "alpha" has parity type 0.
Evolved gridfunction "betU0" has parity type 1.
Evolved gridfunction "betU1" has parity type 2.
Evolved gridfunction "betU2" has parity type 3.
Evolved gridfunction "cf" has parity type 0.
Evolved gridfunction "hDD00" has parity type 4.
Evolved gridfunction "hDD01" has parity type 5.
Evolved gridfunction "hDD02" has parity type 6.
Evolved gridfunction "hDD11" has parity type 7.
Evolved gridfunction "hDD12" has parity type 8.
Evolved gridfunction "hDD22" has parity type 9.
Evolved gridfunction "lambdaU0" has parity type 1.
Evolved gridfunction "lambdaU1" has parity type 2.
Evolved gridfunction "lambdaU2" has parity type 3.
Evolved gridfunction "trK" has parity type 0.
Evolved gridfunction "vetU0" has parity type 1.
Evolved gridfunction "vetU1" has parity type 2.
Evolved gridfunction "vetU2" has parity type 3.
Auxiliary gridfunction "H" has parity type 0.
AuxEvol gridfunction "RbarDD00" has parity type 4.
AuxEvol gridfunction "RbarDD01" has parity type 5.
AuxEvol gridfunction "RbarDD02" has parity type 6.
AuxEvol gridfunction "RbarDD11" has parity type 7.
AuxEvol gridfunction "RbarDD12" has parity type 8.
AuxEvol gridfunction "RbarDD22" has parity type 9.
AuxEvol gridfunction "T4UU00" has parity type 0.
AuxEvol gridfunction "T4UU01" has parity type 1.
AuxEvol gridfunction "T4UU02" has parity type 2.
AuxEvol gridfunction "T4UU03" has parity type 3.
AuxEvol gridfunction "T4UU11" has parity type 4.
AuxEvol gridfunction "T4UU12" has parity type 5.
AuxEvol gridfunction "T4UU13" has parity type 6.
AuxEvol gridfunction "T4UU22" has parity type 7.
AuxEvol gridfunction "T4UU23" has parity type 8.
AuxEvol gridfunction "T4UU33" has parity type 9.
Wrote to file "BSSN_Hydro_without_Hydro_Ccodes/boundary_conditions/EigenCoord_Cart_to_xx.h"
<a id='mainc'></a>
# Step 6: `Hydro_without_Hydro_Playground.c`: The Main C Code \[Back to [top](#toc)\]
$$\label{mainc}$$
```python
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER),
# and set the CFL_FACTOR (which can be overwritten at the command line)
with open(os.path.join(Ccodesdir,"Hydro_without_Hydro_Playground_REAL__NGHOSTS__CFL_FACTOR.h"), "w") as file:
file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(FD_order/2)+1)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set the CFL factor, which may be overwritten at the command line
REAL CFL_FACTOR = """+str(CFL_FACTOR)+"""; // Set the CFL Factor. Can be overwritten at command line.
// Part P0.d: Set TOV stellar parameters
#define TOV_Mass """+str(M_TOV)+"""
#define TOV_Riso """+str(R_iso_TOV)+"\n")
```
```python
%%writefile $Ccodesdir/Hydro_without_Hydro_Playground.c
// Step P0: Define REAL and NGHOSTS; and declare CFL_FACTOR. This header is generated in NRPy+.
#include "Hydro_without_Hydro_Playground_REAL__NGHOSTS__CFL_FACTOR.h"
#include "rfm_files/rfm_struct__declare.h"
#include "declare_Cparameters_struct.h"
// All SIMD intrinsics used in SIMD-enabled C code loops are defined here:
#include "SIMD/SIMD_intrinsics.h"
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#include "time.h"
#include "stdint.h" // Needed for Windows GCC 6.x compatibility
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
#define wavespeed 1.0 // Set CFL-based "wavespeed" to 1.0.
// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS0 elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )
#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
// Step P3: Set UUGF and VVGF macros, as well as xxCart()
#include "boundary_conditions/gridfunction_defines.h"
// Step P4: Set xxCart(const paramstruct *restrict params,
// REAL *restrict xx[3],
// const int i0,const int i1,const int i2,
// REAL xCart[3]),
// which maps xx->Cartesian via
// {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]}
#include "xxCart.h"
// Step P5: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3],
// paramstruct *restrict params, REAL *restrict xx[3]),
// which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for
// the chosen Eigen-CoordSystem if EigenCoord==1, or
// CoordSystem if EigenCoord==0.
#include "set_Nxx_dxx_invdx_params__and__xx.h"
// Step P6: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "boundary_conditions/CurviBC_include_Cfunctions.h"
// Step P7: Implement the algorithm for upwinding.
// *NOTE*: This upwinding is backwards from
// usual upwinding algorithms, because the
// upwinding control vector in BSSN (the shift)
// acts like a *negative* velocity.
//#define UPWIND_ALG(UpwindVecU) UpwindVecU > 0.0 ? 1.0 : 0.0
// Step P8: Include function for enforcing detgammabar constraint.
#include "enforce_detgammabar_constraint.h"
// Step P9: Find the CFL-constrained timestep
#include "find_timestep.h"
// Step P4: Declare initial data input struct:
// stores data from initial data solver,
// so they can be put on the numerical grid.
typedef struct __ID_inputs {
REAL Rbar;
int Rbar_idx;
int interp_stencil_size;
int numlines_in_file;
REAL *r_Schw_arr,*rho_arr,*rho_baryon_arr,*P_arr,*M_arr,*expnu_arr,*exp4phi_arr,*rbar_arr;
} ID_inputs;
// Part P11: Declare all functions for setting up TOV initial data.
/* Routines to interpolate the TOV solution and convert to ADM & T^{munu}: */
#include "../TOV/tov_interp.h"
#include "ID_TOV_ADM_quantities.h"
#include "ID_TOV_TUPMUNU_xx0xx1xx2.h"
/* Next perform the basis conversion and compute all needed BSSN quantities */
#include "ID_ADM_xx0xx1xx2_to_BSSN_xx0xx1xx2__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN__ALL_BUT_LAMBDAs.h"
#include "ID_BSSN_lambdas.h"
// Step P10: Declare function necessary for setting up the initial data.
// Step P10.a: Define BSSN_ID() for BrillLindquist initial data
// Step P10.b: Set the generic driver function for setting up BSSN initial data
void initial_data(const paramstruct *restrict params,const bc_struct *restrict bcstruct,
const rfm_struct *restrict rfmstruct,
REAL *restrict xx[3], REAL *restrict auxevol_gfs, REAL *restrict in_gfs) {
#include "set_Cparameters.h"
// Step 1: Set up TOV initial data
// Step 1.a: Read TOV initial data from data file
// Open the data file:
char filename[100];
sprintf(filename,"./outputTOVpolytrope.txt");
FILE *in1Dpolytrope = fopen(filename, "r");
if (in1Dpolytrope == NULL) {
fprintf(stderr,"ERROR: could not open file %s\n",filename);
exit(1);
}
// Count the number of lines in the data file:
int numlines_in_file = count_num_lines_in_file(in1Dpolytrope);
// Allocate space for all data arrays:
REAL *r_Schw_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rho_baryon_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *P_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *M_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *expnu_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *exp4phi_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
REAL *rbar_arr = (REAL *)malloc(sizeof(REAL)*numlines_in_file);
// Read from the data file, filling in arrays
// read_datafile__set_arrays() may be found in TOV/tov_interp.h
if(read_datafile__set_arrays(in1Dpolytrope, r_Schw_arr,rho_arr,rho_baryon_arr,P_arr,M_arr,expnu_arr,exp4phi_arr,rbar_arr) == 1) {
fprintf(stderr,"ERROR WHEN READING FILE %s!\n",filename);
exit(1);
}
fclose(in1Dpolytrope);
REAL Rbar = -100;
int Rbar_idx = -100;
for(int i=1;i<numlines_in_file;i++) {
if(rho_arr[i-1]>0 && rho_arr[i]==0) { Rbar = rbar_arr[i-1]; Rbar_idx = i-1; }
}
if(Rbar<0) {
fprintf(stderr,"Error: could not find rbar=Rbar from data file.\n");
exit(1);
}
ID_inputs TOV_in;
TOV_in.Rbar = Rbar;
TOV_in.Rbar_idx = Rbar_idx;
const int interp_stencil_size = 12;
TOV_in.interp_stencil_size = interp_stencil_size;
TOV_in.numlines_in_file = numlines_in_file;
TOV_in.r_Schw_arr = r_Schw_arr;
TOV_in.rho_arr = rho_arr;
TOV_in.rho_baryon_arr = rho_baryon_arr;
TOV_in.P_arr = P_arr;
TOV_in.M_arr = M_arr;
TOV_in.expnu_arr = expnu_arr;
TOV_in.exp4phi_arr = exp4phi_arr;
TOV_in.rbar_arr = rbar_arr;
/* END TOV INPUT ROUTINE */
// Step 1.b: Interpolate data from data file to set BSSN gridfunctions
ID_BSSN__ALL_BUT_LAMBDAs(params,xx,TOV_in, in_gfs);
apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);
enforce_detgammabar_constraint(rfmstruct, params, in_gfs);
ID_BSSN_lambdas(params, xx, in_gfs);
apply_bcs_curvilinear(params, bcstruct, NUM_EVOL_GFS, evol_gf_parity, in_gfs);
enforce_detgammabar_constraint(rfmstruct, params, in_gfs);
ID_TOV_TUPMUNU_xx0xx1xx2(params,xx,TOV_in,auxevol_gfs);
free(rbar_arr);
free(rho_arr);
free(rho_baryon_arr);
free(P_arr);
free(M_arr);
free(expnu_arr);
// Also free the remaining TOV arrays allocated above, to avoid a small memory leak:
free(r_Schw_arr);
free(exp4phi_arr);
}
// Step P11: Declare function for evaluating Hamiltonian constraint (diagnostic)
#include "Hamiltonian_constraint.h"
// Step P12: Declare rhs_eval function, which evaluates BSSN RHSs
#include "rhs_eval.h"
// Step P13: Declare Ricci_eval function, which evaluates Ricci tensor
#include "Ricci_eval.h"
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up initial data to an exact solution
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
// Step 3.a: Output 2D data file periodically, for visualization
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
// Step 3.d: Progress indicator printing to stderr
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
if((argc != 4 && argc != 5) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
fprintf(stderr,"Error: Expected three command-line arguments: ./BrillLindquist_Playground Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions.\n");
fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
if(argc == 5) {
CFL_FACTOR = strtod(argv[4],NULL);
if(CFL_FACTOR > 0.5 && atoi(argv[3])!=2) {
fprintf(stderr,"WARNING: CFL_FACTOR was set to %e, which is > 0.5.\n",CFL_FACTOR);
fprintf(stderr," This will generally only be stable if the simulation is purely axisymmetric\n");
fprintf(stderr," However, Nx2 was set to %d>2, which implies a non-axisymmetric simulation\n",atoi(argv[3]));
}
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
// Step 0d: Uniform coordinate grids are stored to *xx[3]
REAL *xx[3];
// Step 0d.i: Set bcstruct
bc_struct bcstruct;
{
int EigenCoord = 1;
// Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen Eigen-CoordSystem.
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx);
// Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0e: Find ghostzone mappings; set up bcstruct
#include "boundary_conditions/driver_bcstruct.h"
// Step 0e.i: Free allocated space for xx[][] array
for(int i=0;i<3;i++) free(xx[i]);
}
// Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen (non-Eigen) CoordSystem.
int EigenCoord = 0;
set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx);
// Step 0g: Set all C parameters "blah" for params.blah, including
// Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc.
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0h: Time coordinate parameters
const REAL t_final = 1.8*TOV_Mass; /* Final time is set so that at t=t_final,
* data at the origin have not been corrupted
* by the approximate outer boundary condition */
// Step 0i: Set timestep based on smallest proper distance between gridpoints and CFL factor
REAL dt = find_timestep(&params, xx);
//fprintf(stderr,"# Timestep set to = %e\n",(double)dt);
int N_final = (int)(t_final / dt + 0.5); // The number of points in time.
// Add 0.5 to account for C rounding down
// typecasts to integers.
int output_every_N = (int)((REAL)N_final/800.0);
if(output_every_N == 0) output_every_N = 1;
// Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions.
// This is a limitation of the RK method. You are always welcome to declare & allocate
// additional gridfunctions by hand.
if(NUM_AUX_GFS > NUM_EVOL_GFS) {
fprintf(stderr,"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n");
fprintf(stderr," or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n");
exit(1);
}
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
REAL *restrict auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0l: Set up precomputed reference metric arrays
// Step 0l.i: Allocate space for precomputed reference metric arrays.
#include "rfm_files/rfm_struct__malloc.h"
// Step 0l.ii: Define precomputed reference metric arrays.
{
#include "set_Cparameters-nopointer.h"
#include "rfm_files/rfm_struct__define.h"
}
// Step 1: Set up initial data to an exact solution
initial_data(&params,&bcstruct, &rfmstruct, xx, auxevol_gfs, y_n_gfs);
// Step 1b: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
apply_bcs_curvilinear(&params, &bcstruct, NUM_EVOL_GFS,evol_gf_parity, y_n_gfs);
enforce_detgammabar_constraint(&rfmstruct, &params, y_n_gfs);
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
#ifdef __linux__ // Use high-precision timer in Linux.
struct timespec start, end;
clock_gettime(CLOCK_REALTIME, &start);
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
// http://www.cplusplus.com/reference/ctime/time/
time_t start_timer,end_timer;
time(&start_timer); // Resolution of one second...
#endif
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
for(int n=0;n<=N_final;n++) { // Main loop to progress forward in time.
// Step 3.a: Output 2D data file periodically, for visualization
if(n%100 == 0) {
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(&rfmstruct, &params, y_n_gfs,auxevol_gfs, diagnostic_output_gfs);
char filename[100];
sprintf(filename,"out%d-%08d.txt",Nxx[0],n);
FILE *out2D = fopen(filename, "w");
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) {
const int idx = IDX3S(i0,i1,i2);
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
xxCart(&params,xx,i0,i1,i2,xCart);
fprintf(out2D,"%e %e %e %e\n",
xCart[1]/TOV_Mass,xCart[2]/TOV_Mass,
y_n_gfs[IDX4ptS(CFGF,idx)],log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])));
}
fclose(out2D);
}
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
#include "MoLtimestepping/RK_MoL.h"
// Step 3.c: If t=t_final, output conformal factor & Hamiltonian
// constraint violation to 2D data file
if(n==N_final-1) {
// Evaluate Hamiltonian constraint violation
Hamiltonian_constraint(&rfmstruct, &params, y_n_gfs,auxevol_gfs, diagnostic_output_gfs);
char filename[100];
sprintf(filename,"out%d.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
const int i0MIN=NGHOSTS; // In spherical, r=Delta r/2.
const int i1mid=Nxx_plus_2NGHOSTS1/2;
const int i2mid=Nxx_plus_2NGHOSTS2/2;
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS0-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS1-NGHOSTS,
NGHOSTS,Nxx_plus_2NGHOSTS2-NGHOSTS) {
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
xxCart(&params,xx,i0,i1,i2,xCart);
int idx = IDX3S(i0,i1,i2);
fprintf(out2D,"%e %e %e %e\n",xCart[1]/TOV_Mass,xCart[2]/TOV_Mass, y_n_gfs[IDX4ptS(CFGF,idx)],
log10(fabs(diagnostic_output_gfs[IDX4ptS(HGF,idx)])));
}
fclose(out2D);
}
// Step 3.d: Progress indicator printing to stderr
// Step 3.d.i: Measure average time per iteration
#ifdef __linux__ // Use high-precision timer in Linux.
clock_gettime(CLOCK_REALTIME, &end);
const long long unsigned int time_in_ns = 1000000000L * (end.tv_sec - start.tv_sec) + end.tv_nsec - start.tv_nsec;
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
time(&end_timer); // Resolution of one second...
REAL time_in_ns = difftime(end_timer,start_timer)*1.0e9+0.5; // Round up to avoid divide-by-zero.
#endif
const REAL s_per_iteration_avg = ((REAL)time_in_ns / (REAL)n) / 1.0e9;
const int iterations_remaining = N_final - n;
const REAL time_remaining_in_mins = s_per_iteration_avg * (REAL)iterations_remaining / 60.0;
const REAL num_RHS_pt_evals = (REAL)(Nxx[0]*Nxx[1]*Nxx[2]) * 4.0 * (REAL)n; // 4 RHS evals per gridpoint for RK4
const REAL RHS_pt_evals_per_sec = num_RHS_pt_evals / ((REAL)time_in_ns / 1.0e9);
// Step 3.d.ii: Output simulation progress to stderr
if(n % 10 == 0) {
fprintf(stderr,"%c[2K", 27); // Clear the line
fprintf(stderr,"It: %d t/M=%.2f dt/M=%.2e | %.1f%%; ETA %.0f s | t/M/h %.2f | gp/s %.2e\r", // \r is carriage return, move cursor to the beginning of the line
n, n * (double)dt/TOV_Mass, (double)dt/TOV_Mass, (double)(100.0 * (REAL)n / (REAL)N_final),
(double)time_remaining_in_mins*60, (double)(dt/TOV_Mass * 3600.0 / s_per_iteration_avg), (double)RHS_pt_evals_per_sec);
fflush(stderr); // Flush the stderr buffer
} // End progress indicator if(n % 10 == 0)
} // End main loop to progress forward in time.
fprintf(stderr,"\n"); // Clear the final line of output from progress indicator.
// Step 4: Free all allocated memory
#include "rfm_files/rfm_struct__freemem.h"
#include "boundary_conditions/bcstruct_freemem.h"
#include "MoLtimestepping/RK_Free_Memory.h"
free(auxevol_gfs);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
```
Writing BSSN_Hydro_without_Hydro_Ccodes//Hydro_without_Hydro_Playground.c
```python
import cmdline_helper as cmd
print("Now compiling, should take ~20 seconds...\n")
start = time.time()
cmd.C_compile(os.path.join(Ccodesdir,"Hydro_without_Hydro_Playground.c"), "Hydro_without_Hydro_Playground")
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n")
cmd.delete_existing_files("out96*.txt")
cmd.delete_existing_files("out96-00*.txt.png")
print("Now running, should take ~10 seconds...\n")
start = time.time()
cmd.Execute("Hydro_without_Hydro_Playground", "96 16 2 "+str(CFL_FACTOR),"out96.txt")
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n")
```
Now compiling, should take ~20 seconds...
Compiling executable...
Executing `gcc -Ofast -fopenmp -march=native -funroll-loops BSSN_Hydro_without_Hydro_Ccodes/Hydro_without_Hydro_Playground.c -o Hydro_without_Hydro_Playground -lm`...
Finished executing in 4.623743295669556 seconds.
Finished compilation.
Finished in 4.636880159378052 seconds.
Now running, should take ~10 seconds...
Executing `taskset -c 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ./Hydro_without_Hydro_Playground 96 16 2 0.5`...
It: 300 t/M=1.77 dt/M=5.90e-03 | 98.4%; ETA 0 s | t/M/h 7401.69 | gp/s 4.29e+06
Finished executing in 1.0138895511627197 seconds.
Finished in 1.0257987976074219 seconds.
<a id='visualize'></a>
# Step 7: Data Visualization Animations \[Back to [top](#toc)\]
$$\label{visualize}$$
<a id='installdownload'></a>
## Step 7.a: Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded \[Back to [top](#toc)\]
$$\label{installdownload}$$
Note that if you are not running this within `mybinder` but on a Windows system, `ffmpeg` must be installed separately (it can be downloaded from [this site](http://ffmpeg.org/)); alternatively, if you are running Jupyter within Anaconda, use the command `conda install -c conda-forge ffmpeg`.
```python
print("Ignore any warnings or errors from the following command:")
!pip install scipy > /dev/null
check_for_ffmpeg = !which ffmpeg >/dev/null && echo $?
if check_for_ffmpeg != ['0']:
print("Couldn't find ffmpeg, so I'll download it.")
# Courtesy https://johnvansickle.com/ffmpeg/
!wget http://astro.phys.wvu.edu/zetienne/ffmpeg-static-amd64-johnvansickle.tar.xz
!tar Jxf ffmpeg-static-amd64-johnvansickle.tar.xz
print("Copying ffmpeg to ~/.local/bin/. Assumes ~/.local/bin is in the PATH.")
!mkdir ~/.local/bin/
!cp ffmpeg-static-amd64-johnvansickle/ffmpeg ~/.local/bin/
print("If this doesn't work, then install ffmpeg yourself. It should work fine on mybinder.")
```
Ignore any warnings or errors from the following command:
<a id='genimages'></a>
## Step 7.b: Generate images for visualization animation \[Back to [top](#toc)\]
$$\label{genimages}$$
Here we loop through the data files output by the executable compiled and run in [the previous step](#mainc), generating a [png](https://en.wikipedia.org/wiki/Portable_Network_Graphics) image for each data file.
**Special thanks to Terrence Pierre Jacques. His work with the first versions of these scripts greatly contributed to the scripts as they exist below.**
```python
## VISUALIZATION ANIMATION, PART 1: Generate PNGs, one per frame of movie ##
import numpy as np
from scipy.interpolate import griddata
import matplotlib.pyplot as plt
from matplotlib.pyplot import savefig
from IPython.display import HTML
import matplotlib.image as mgimg
import glob
import sys
from matplotlib import animation
globby = glob.glob('out96-00*.txt')
file_list = []
for x in sorted(globby):
file_list.append(x)
bound=7.5
pl_xmin = -bound
pl_xmax = +bound
pl_ymin = -bound
pl_ymax = +bound
N_interp_pts = 300
N_interp_ptsj = 300j
for filename in file_list:
fig = plt.figure()
x,y,other,Ham = np.loadtxt(filename).T #Transposed for easier unpacking
plotquantity = Ham
plotdescription = "Numerical Soln."
plt.title("Single Neutron Star (Ham. constraint)")
plt.xlabel("y/M")
plt.ylabel("z/M")
grid_x, grid_y = np.mgrid[pl_xmin:pl_xmax:N_interp_ptsj, pl_ymin:pl_ymax:N_interp_ptsj]
points = np.zeros((len(x), 2))
for i in range(len(x)):
# Zach says: No idea why x and y get flipped...
points[i][0] = y[i]
points[i][1] = x[i]
grid = griddata(points, plotquantity, (grid_x, grid_y), method='nearest')
gridcub = griddata(points, plotquantity, (grid_x, grid_y), method='cubic')
im = plt.imshow(grid, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
#plt.pcolormesh(grid_y,grid_x, grid, vmin=-8, vmax=0) # Set colorbar range from -8 to 0
ax = plt.colorbar()
plt.clim(-9, -2)
ax.set_label(plotdescription)
savefig(filename+".png",dpi=150)
plt.close(fig)
sys.stdout.write("%c[2K" % 27)
sys.stdout.write("Processing file "+filename+"\r")
sys.stdout.flush()
```
Processing file out96-00000300.txt
<a id='genvideo'></a>
## Step 7.c: Generate visualization animation \[Back to [top](#toc)\]
$$\label{genvideo}$$
In the following step, [ffmpeg](http://ffmpeg.org) is used to generate an [mp4](https://en.wikipedia.org/wiki/MPEG-4) video file, which can be played directly from this Jupyter notebook.
```python
## VISUALIZATION ANIMATION, PART 2: Combine PNGs to generate movie ##
# https://stackoverflow.com/questions/14908576/how-to-remove-frame-from-matplotlib-pyplot-figure-vs-matplotlib-figure-frame
# https://stackoverflow.com/questions/23176161/animating-pngs-in-matplotlib-using-artistanimation
fig = plt.figure(frameon=False)
ax = fig.add_axes([0, 0, 1, 1])
ax.axis('off')
myimages = []
for i in range(len(file_list)):
img = mgimg.imread(file_list[i]+".png")
imgplot = plt.imshow(img)
myimages.append([imgplot])
ani = animation.ArtistAnimation(fig, myimages, interval=100, repeat_delay=1000)
plt.close()
ani.save('SingleNS.mp4', fps=5,dpi=150)
```
```python
## VISUALIZATION ANIMATION, PART 3: Display movie as embedded HTML5 (see next cell) ##
# https://stackoverflow.com/questions/18019477/how-can-i-play-a-local-video-in-my-ipython-notebook
```
```python
%%HTML
```
<a id='convergence'></a>
# Step 8: Validation: Convergence of numerical errors (Hamiltonian constraint violation) to zero \[Back to [top](#toc)\]
$$\label{convergence}$$
The equations behind these initial data solve Einstein's equations exactly, at a single instant in time. One reflection of this solution is that the Hamiltonian constraint violation should be exactly zero in the initial data.
However, when evaluated on numerical grids, the Hamiltonian constraint violation will *not* generally evaluate to zero, because the associated numerical derivatives are not exact. These numerical derivatives (finite-difference derivatives in this case) should nevertheless *converge* to the exact derivatives as the density of numerical sampling points approaches infinity.
In this case, all of our finite-difference derivatives agree with the exact solution up to an error term that drops with the uniform gridspacing to the fourth power: $\left(\Delta x^i\right)^4$.
Here, as in the [Start-to-Finish Scalar Wave (Cartesian grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWave.ipynb) and the [Start-to-Finish Scalar Wave (curvilinear grids) NRPy+ tutorial](Tutorial-Start_to_Finish-ScalarWaveCurvilinear.ipynb), we confirm this convergence.
First, let's take a look at the numerical error on the x-y plane at a given numerical resolution, plotting $\log_{10}|H|$, where $H$ is the Hamiltonian constraint violation:
```python
grid96 = griddata(points, plotquantity, (grid_x, grid_y), method='nearest')
grid96cub = griddata(points, plotquantity, (grid_x, grid_y), method='cubic')
# fig, ax = plt.subplots()
plt.clf()
plt.title("96^3 Numerical Err.: log_{10}|Ham|")
plt.xlabel("x/M")
plt.ylabel("y/M")
fig96cub = plt.imshow(grid96cub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig96cub)
```
Next, we set up the same initial data but on a lower-resolution, $48\times 8\times 2$ grid (axisymmetric in the $\phi$ direction). Since the constraint violation (the numerical error associated with the fourth-order-accurate finite-difference derivatives) should converge to zero as the fourth power of the uniform gridspacing, $\left(\Delta x^i\right)^4$, we expect the constraint violation to increase (relative to the $96\times 16\times 2$ grid) by a factor of $\left(96/48\right)^4$. Here we demonstrate that this order of convergence is indeed observed as expected, *except* in the region causally influenced by the star's surface at $\bar{r}=\bar{R}\approx 0.8$, where the stress-energy tensor $T^{\mu\nu}$ sharply drops to zero.
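Concretely, fourth-order convergence predicts
$$
\frac{|H|_{48}}{|H|_{96}} \approx \left(\frac{96}{48}\right)^4 = 16
\quad\Longrightarrow\quad
\log_{10}|H|_{48} + \log_{10}\!\left[\left(\frac{48}{96}\right)^4\right] \approx \log_{10}|H|_{96},
$$
i.e., shifting the $48\times 8\times 2$ curve down by $\log_{10}16\approx 1.2$ (exactly the `np.log10((48./96.)**4)` term added in the plotting cell below) should make it overlap the $96\times 16\times 2$ curve wherever fourth-order convergence holds.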
```python
cmd.delete_existing_files("out48*.txt")
cmd.delete_existing_files("out48-00*.txt.png")
print("Now running, should take ~10 seconds...\n")
start = time.time()
cmd.Execute("Hydro_without_Hydro_Playground", "48 8 2 "+str(CFL_FACTOR), "out48.txt")
end = time.time()
print("Finished in "+str(end-start)+" seconds.")
```
Now running, should take ~10 seconds...
Executing `taskset -c 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 ./Hydro_without_Hydro_Playground 48 8 2 0.5`...
It: 70 t/M=1.65 dt/M=2.36e-02 | 92.1%; ETA 0 s | t/M/h 82436.31 | gp/s 2.98e+06
Finished executing in 0.21425819396972656 seconds.
Finished in 0.22713994979858398 seconds.
```python
x48,y48,valuesother48,valuesHam48 = np.loadtxt('out48.txt').T #Transposed for easier unpacking
points48 = np.zeros((len(x48), 2))
for i in range(len(x48)):
points48[i][0] = x48[i]
points48[i][1] = y48[i]
grid48 = griddata(points48, valuesHam48, (grid_x, grid_y), method='cubic')
griddiff_48_minus_96 = np.zeros((N_interp_pts,N_interp_pts))
griddiff_48_minus_96_1darray = np.zeros(N_interp_pts*N_interp_pts)
gridx_1darray_yeq0 = np.zeros(N_interp_pts)
grid48_1darray_yeq0 = np.zeros(N_interp_pts)
grid96_1darray_yeq0 = np.zeros(N_interp_pts)
count = 0
outarray = []
for i in range(N_interp_pts):
for j in range(N_interp_pts):
griddiff_48_minus_96[i][j] = grid48[i][j] - grid96[i][j]
griddiff_48_minus_96_1darray[count] = griddiff_48_minus_96[i][j]
if j==N_interp_pts/2-1:
gridx_1darray_yeq0[i] = grid_x[i][j]
grid48_1darray_yeq0[i] = grid48[i][j] + np.log10((48./96.)**4)
grid96_1darray_yeq0[i] = grid96[i][j]
count = count + 1
plt.clf()
fig, ax = plt.subplots()
plt.title("Plot Demonstrating 4th-order Convergence")
plt.xlabel("x/M")
plt.ylabel("log10(Relative error)")
ax.plot(gridx_1darray_yeq0, grid96_1darray_yeq0, 'k-', label='Nr=96')
ax.plot(gridx_1darray_yeq0, grid48_1darray_yeq0, 'k--', label='Nr=48, mult by (48/96)^4')
ax.set_ylim([-9.5,-1.5])
legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
```
<a id='latex_pdf_output'></a>
# Step 9: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-Start_to_Finish-BSSNCurvilinear-Neutron_Star-Hydro_without_Hydro.pdf](Tutorial-Start_to_Finish-BSSNCurvilinear-Neutron_Star-Hydro_without_Hydro.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```python
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-Start_to_Finish-BSSNCurvilinear-Neutron_Star-Hydro_without_Hydro.ipynb
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Neutron_Star-Hydro_without_Hydro.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Neutron_Star-Hydro_without_Hydro.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-BSSNCurvilinear-Neutron_Star-Hydro_without_Hydro.tex
!rm -f Tut*.out Tut*.aux Tut*.log
```
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
| af199090461c696c87fa04af0185ac5882372401 | 246,705 | ipynb | Jupyter Notebook | Tutorial-Start_to_Finish-BSSNCurvilinear-Neutron_Star-Hydro_without_Hydro.ipynb | Steve-Hawk/nrpytutorial | 42d7450dba8bf43aa9c2d8f38f85f18803de69b7 | ["BSD-2-Clause"] | null | null | null | Tutorial-Start_to_Finish-BSSNCurvilinear-Neutron_Star-Hydro_without_Hydro.ipynb | Steve-Hawk/nrpytutorial | 42d7450dba8bf43aa9c2d8f38f85f18803de69b7 | ["BSD-2-Clause"] | null | null | null | Tutorial-Start_to_Finish-BSSNCurvilinear-Neutron_Star-Hydro_without_Hydro.ipynb | Steve-Hawk/nrpytutorial | 42d7450dba8bf43aa9c2d8f38f85f18803de69b7 | ["BSD-2-Clause"] | 1 | 2021-03-02T12:51:56.000Z | 2021-03-02T12:51:56.000Z | 125.103955 | 111,768 | 0.82128 | true | 22,594 | Qwen/Qwen-72B | 1. YES 2. YES | 0.841826 | 0.727975 | 0.612828 | __label__eng_Latn | 0.666567 | 0.262136
```python
# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)
# Toggle cell visibility
from IPython.display import HTML
tag = HTML('''
Promijeni vidljivost <a href="javascript:code_toggle()">ovdje</a>.''')
display(tag)
```
Promijeni vidljivost <a href="javascript:code_toggle()">ovdje</a>.
```python
%matplotlib notebook
import numpy as np
import control as control
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from ipywidgets import widgets
from ipywidgets import interact
import scipy.signal as signal
import sympy as sym
```
## Mechanical systems
#### General mass-spring-damper model
> The mass-spring-damper model consists of discrete mass nodes distributed throughout an object and interconnected by a network of springs and dampers. This model is well suited for describing objects with complex material properties such as nonlinearity and viscoelasticity. (source: [Wikipedia](https://en.wikipedia.org/wiki/Mass-spring-damper_model "Mass-spring-model"))
#### Quarter-car model
> The quarter-car model is used to analyse the ride quality of automotive suspension systems. The mass $m_1$ is the "sprung mass", representing one quarter of the vehicle mass supported by the suspension system. The mass $m_2$ is the "unsprung mass", representing the load of one wheel and half-axle assembly, including the shock absorber and springs. The stiffness and damping of the suspension system are modelled by an ideal spring constant $k_1$ and a friction coefficient $B$, respectively. The tyre stiffness is modelled by a spring constant $k_2$. (source: [Chegg Study](https://www.chegg.com/homework-help/questions-and-answers/figure-p230-shows-1-4-car-model-used-analyze-ride-quality-automotive-suspension-systems-ma-q26244005 "1/4 car model"))
---
### How to use this interactive example?
1. Choose between the *mass-spring-damper* and *quarter-car* models by clicking the corresponding button.
2. Select the force function $F$ from the options offered: *step function*, *impulse function*, *ramp function* and *sine function*.
3. Move the sliders to set the values of the masses ($m$; $m_1$ and $m_2$), the spring constants ($k$; $k_1$ and $k_2$), the damping constant ($B$), the input-signal gain and the initial conditions ($x_0$, $\dot{x}_0$, $y_0$, $\dot{y}_0$). A minimal sketch of the underlying transfer-function model is given right after the table below.
<table>
<tr>
<th style="text-align:center">Model masa-opruga-prigušivač</th>
<th style="text-align:center">Model 1/4 automobila</th>
</tr>
<tr>
<td style="width:170px; height:150px"></td>
<td style="width:170px; height:150px"></td>
</tr>
<tr>
</tr>
</table>
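Before the full interactive implementation below, here is a minimal sketch of the model it is built on: the mass-spring-damper transfer function $X(s)/F(s) = 1/(m s^2 + B s + k)$ driven by a unit step force. The parameter values are illustrative (they match the default slider values), and `step_response` is used here for brevity even though the notebook itself uses `forced_response`:

```python
import numpy as np
import control

m, B, k = 0.1, 0.1, 1.0                          # illustrative values, same as the slider defaults below
sys = control.TransferFunction([1], [m, B, k])   # X(s)/F(s) = 1/(m s^2 + B s + k)

t = np.linspace(0, 25, 1000)
t_out, x = control.step_response(sys, t)         # response to a unit step force
print(x[-1])                                     # settles near the static deflection F/k = 1.0
```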
```python
# create figure
fig = plt.figure(figsize=(9.8, 4),num='Mehanički sustavi')
# add sublot
ax = fig.add_subplot(111)
ax.set_title('Vremenski odziv')
ax.set_ylabel('ulaz, izlaz')
ax.set_xlabel('$t$ [s]')
ax.grid(which='both', axis='both', color='lightgray')
inputf, = ax.plot([], [])
responsef, = ax.plot([], [])
responsef2, = ax.plot([], [])
arrowf, = ax.plot([],[])
style = {'description_width': 'initial'}
selectSystem=widgets.ToggleButtons(
options=[('masa-opruga-prigušivač',0),('1/4 automobila',1)],
description='Odaberi sustav: ', style=style) # define toggle buttons
selectForce = widgets.ToggleButtons(
options=[('step funkcija', 0), ('impulsna funkcija', 1), ('rampa funkcija', 2), ('funkcija sinus', 3)],
description='Odaberi $F$ funkciju: ', style=style)
display(selectSystem)
display(selectForce)
def build_model(M,K,B,M1,M2,B1,K1,K2,amp,x0,xpika0,y0,ypika0,select_System,index):
num_of_samples = 1000
total_time = 25
t = np.linspace(0, total_time, num_of_samples) # time for which response is calculated (start, stop, step)
global inputf, responsef, responsef2, arrowf
if select_System==0:
system0 = control.TransferFunction([1], [M, B, K])
if index==0:
inputfunc = np.ones(len(t))*amp
inputfunc[0]=0
time, response, xx = control.forced_response(system0, t, inputfunc, X0=[xpika0,x0])
elif index==1:
inputfunc=signal.unit_impulse(1000, 0)*amp
time, response, xx = control.forced_response(system0, t, inputfunc, X0=[xpika0,x0])
elif index==2:
inputfunc=t;
time, response, xx = control.forced_response(system0, t, inputfunc, X0=[xpika0,x0])
elif index==3:
inputfunc=np.sin(t)*amp
time, response, xx = control.forced_response(system0, t, inputfunc, X0=[xpika0,x0])
elif select_System==1:
system1=control.TransferFunction([M2, B1, K1+K2], [M1*M2, M1*B1+M2*B1, M2*K1+M1*(K1+K2), K2*B1, K1*K2])
system2=control.TransferFunction([B1*K1*M2**2, B1**2*K1*M2, B1*K1**2*M2 + 2*B1*K1*K2*M2,
B1**2*K1*K2, B1*K1**2*K2 + B1*K1*K2**2],
[M1**2*M2**2, B1*M1**2*M2 + 2*B1*M1*M2**2,
B1**2*M1*M2 + B1**2*M2**2 + K1*M1**2*M2 + 2*K1*M1*M2**2 + 2*K2*M1**2*M2 + K2*M1*M2**2,
2*B1*K1*M1*M2 + 2*B1*K1*M2**2 + B1*K2*M1**2 + 5*B1*K2*M1*M2 + B1*K2*M2**2,
B1**2*K2*M1 + 2*B1**2*K2*M2 + K1**2*M1*M2 + K1**2*M2**2 + K1*K2*M1**2 + 5*K1*K2*M1*M2 + K1*K2*M2**2 + K2**2*M1**2 + 2*K2**2*M1*M2,
2*B1*K1*K2*M1 + 4*B1*K1*K2*M2 + 3*B1*K2**2*M1 + 2*B1*K2**2*M2,
B1**2*K2**2 + K1**2*K2*M1 + 2*K1**2*K2*M2 + 3*K1*K2**2*M1 + 2*K1*K2**2*M2 + K2**3*M1,
2*B1*K1*K2**2 + B1*K2**3,
K1**2*K2**2 + K1*K2**3])
if index==0:
inputfunc = np.ones(len(t))*amp
inputfunc[0]=0
time, response, xx = control.forced_response(system1, t, inputfunc, X0=[0,0,xpika0,x0])
time2, response2, xx2 = control.forced_response(system2, t, inputfunc, X0=[0,0,0,0,0,0,ypika0,y0])
elif index==1:
inputfunc=signal.unit_impulse(1000, 0)*amp
time, response, xx = control.forced_response(system1, t, inputfunc, X0=[0,0,xpika0,x0])
time2, response2, xx2 = control.forced_response(system2, t, inputfunc, X0=[0,0,0,0,0,0,ypika0,y0])
elif index==2:
inputfunc=t;
time, response, xx = control.forced_response(system1, t, inputfunc, X0=[0,0,xpika0,x0])
time2, response2, xx2 = control.forced_response(system2, t, inputfunc, X0=[0,0,0,0,0,0,ypika0,y0])
elif index==3:
inputfunc=np.sin(t)*amp
time, response, xx = control.forced_response(system1, t, inputfunc, X0=[0,0,xpika0,x0])
time2, response2, xx2 = control.forced_response(system2, t, inputfunc, X0=[0,0,0,0,0,0,ypika0,y0])
ax.lines.remove(responsef)
ax.lines.remove(inputf)
ax.lines.remove(responsef2)
ax.lines.remove(arrowf)
inputf, = ax.plot(t,inputfunc,label='$F$',color='C0')
responsef, = ax.plot(time, response,label='$x$',color='C3')
if select_System==1:
responsef2, = ax.plot(time, response2,label='$y$',color='C2')
elif select_System==0:
responsef2, = ax.plot([],[])
if index==1:
if amp>0:
arrowf, = ax.plot([-0.1,0,0.1],[amp-((amp*0.05)/2),amp,amp-((amp*0.05)/2)],color='C0',linewidth=4)
elif amp==0:
arrowf, = ax.plot([],[])
elif amp<0:
arrowf, = ax.plot([-0.1,0,0.1],[amp-((amp*0.05)/2),amp,amp-(amp*(0.05)/2)],color='C0',linewidth=4)
else:
arrowf, = ax.plot([],[])
ax.relim()
ax.autoscale_view()
ax.legend()
def update_sliders(index):
global m1_slider, b1_slider, k1_slider, m21_slider, m22_slider, b2_slider, k21_slider, k22_slider
global x0_slider, xpika0_slider, y0_slider, ypika0_slider
m1val = [0.1,0.1,0.1,0.1]
k1val = [1,1,1,1]
b1val = [0.1,0.1,0.1,0.1]
m21val = [0.1,0.1,0.1,0.1]
m22val = [0.1,0.1,0.1,0.1]
b2val = [0.1,0.1,0.1,0.1]
k21val = [1,1,1,1]
k22val = [1,1,1,1]
x0val = [0,0,0,0]
xpika0val = [0,0,0,0]
y0val = [0,0,0,0]
ypika0val = [0,0,0,0]
m1_slider.value = m1val[index]
k1_slider.value = k1val[index]
b1_slider.value = b1val[index]
m21_slider.value = m21val[index]
m22_slider.value = m22val[index]
b2_slider.value = b2val[index]
k21_slider.value = k21val[index]
k22_slider.value = k22val[index]
x0_slider.value = x0val[index]
xpika0_slider.value = xpika0val[index]
y0_slider.value = y0val[index]
ypika0_slider.value = ypika0val[index]
def draw_controllers(type_select,index):
global m1_slider, b1_slider, k1_slider, m21_slider, m22_slider, b2_slider, k21_slider, k22_slider
global x0_slider, xpika0_slider, y0_slider, ypika0_slider
x0_slider=widgets.FloatSlider(value=0, min=-1, max=1., step=.1,
description='$x_0$ [dm]:',disabled=False,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.2f',)
xpika0_slider=widgets.FloatSlider(value=0, min=-1, max=1., step=.1,
description='${\dot{x}}_0$ [dm/s]:',disabled=False,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.2f',)
if type_select==0:
amp_slider = widgets.FloatSlider(value=1.,min=-2.,max=2.,step=0.1,
description='Pojačanje ulaznog signala:',disabled=False,continuous_update=False,orientation='horizontal',readout=True,readout_format='.1f',style=style)
m1_slider = widgets.FloatSlider(value=.1, min=.01, max=1., step=.01,
description='$m$ [kg]:',disabled=False,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.2f',)
k1_slider = widgets.FloatSlider(value=1.,min=0.,max=20.,step=.1,
description='$k$ [N/m]:',disabled=False,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.1f',)
b1_slider = widgets.FloatSlider(value=.1,min=0.0,max=0.5,step=.01,
description='$B$ [Ns/m]:',disabled=False,continuous_update=False,
                                    orientation='horizontal',readout=True,readout_format='.2f',)
m21_slider = widgets.FloatSlider(value=.1,min=.01,max=1.,step=.01,
description='$m_1$ [kg]:',disabled=True,continuous_update=False,orientation='horizontal',readout=True,readout_format='.2f',
)
m22_slider = widgets.FloatSlider(value=.1,min=.0,max=1.,step=.01,
description='$m_2$ [kg]:',disabled=True,continuous_update=False,orientation='horizontal',readout=True,readout_format='.2f',
)
b2_slider = widgets.FloatSlider(value=.1,min=0.0,max=2,step=.01,
description='$B$ [Ns/m]:',disabled=True,continuous_update=False,orientation='horizontal',readout=True,readout_format='.2f',
)
k21_slider = widgets.FloatSlider(value=1.,min=0.,max=20.,step=.1,
description='$k_1$ [N/m]:',disabled=True,continuous_update=False,orientation='horizontal',readout=True,readout_format='.1f',
)
k22_slider = widgets.FloatSlider(value=1.,min=0.,max=20.,step=.1,
description='$k_2$ [N/m]:',disabled=True,continuous_update=False,orientation='horizontal',readout=True,readout_format='.1f',
)
y0_slider=widgets.FloatSlider(value=0, min=-1, max=1., step=.1,
description='$y_0$ [dm]:',disabled=True,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.2f',)
ypika0_slider=widgets.FloatSlider(value=0, min=-1, max=1., step=.1,
description='${\dot{y}}_0$ [dm/s]:',disabled=True,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.2f',)
elif type_select==1:
amp_slider = widgets.FloatSlider(value=1.,min=-2.,max=2.,step=0.1,
description='Pojačanje ulaznog signala:',disabled=False,continuous_update=False,orientation='horizontal',readout=True,readout_format='.1f',style=style)
m1_slider = widgets.FloatSlider(value=.1, min=.01, max=1., step=.01,
description='$m$ [kg]:',disabled=True,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.2f',)
k1_slider = widgets.FloatSlider(value=1.,min=0.,max=20.,step=.1,
description='$k$ [N/m]:',disabled=True,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.1f',)
b1_slider = widgets.FloatSlider(value=.1,min=0.0,max=0.5,step=.01,
description='$B$ [Ns/m]:',disabled=True,continuous_update=False,
                                    orientation='horizontal',readout=True,readout_format='.2f',)
m21_slider = widgets.FloatSlider(value=.1,min=.01,max=1.,step=.01,
description='$m_1$ [kg]:',disabled=False,continuous_update=False,orientation='horizontal',readout=True,readout_format='.2f',
)
m22_slider = widgets.FloatSlider(value=.1,min=.0,max=1.,step=.01,
description='$m_2$ [kg]:',disabled=False,continuous_update=False,orientation='horizontal',readout=True,readout_format='.2f',
)
b2_slider = widgets.FloatSlider(value=.1,min=0.0,max=2,step=.01,
description='$B$ [Ns/m]:',disabled=False,continuous_update=False,orientation='horizontal',readout=True,readout_format='.2f',
)
k21_slider = widgets.FloatSlider(value=1.,min=0.,max=20.,step=.1,
description='$k_1$ [N/m]:',disabled=False,continuous_update=False,orientation='horizontal',readout=True,readout_format='.1f',
)
k22_slider = widgets.FloatSlider(value=1.,min=0.,max=20.,step=.1,
description='$k_2$ [N/m]:',disabled=False,continuous_update=False,orientation='horizontal',readout=True,readout_format='.1f',
)
y0_slider=widgets.FloatSlider(value=0, min=-1, max=1., step=.1,
description='$y_0$ [dm]:',disabled=False,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.2f',)
ypika0_slider=widgets.FloatSlider(value=0, min=-1, max=1., step=.1,
description='${\dot{y}}_0$ [dm/s]:',disabled=False,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.2f',)
input_data = widgets.interactive_output(build_model, {'M':m1_slider, 'K':k1_slider, 'B':b1_slider, 'M1':m21_slider,
'M2':m22_slider, 'B1':b2_slider, 'K1':k21_slider, 'K2':k22_slider, 'amp':amp_slider,
'x0':x0_slider,'xpika0':xpika0_slider,'y0':y0_slider,'ypika0':ypika0_slider,
'select_System':selectSystem,'index':selectForce})
input_data2 = widgets.interactive_output(update_sliders, {'index':selectForce})
box_layout = widgets.Layout(border='1px solid black',
width='auto',
height='',
flex_flow='row',
display='flex')
buttons1=widgets.HBox([widgets.VBox([amp_slider],layout=widgets.Layout(width='auto')),
widgets.VBox([x0_slider,xpika0_slider]),
widgets.VBox([y0_slider,ypika0_slider])],layout=box_layout)
display(widgets.VBox([widgets.Label('Odaberite vrijednosti pojačanja ulaznog signala i početnih uvjeta:'), buttons1]))
display(widgets.HBox([widgets.VBox([m1_slider,k1_slider,b1_slider], layout=widgets.Layout(width='45%')),
widgets.VBox([m21_slider,m22_slider,k21_slider,k22_slider,b2_slider], layout=widgets.Layout(width='45%'))]), input_data)
widgets.interactive_output(draw_controllers, {'type_select':selectSystem,'index':selectForce})
```
<IPython.core.display.Javascript object>
ToggleButtons(description='Odaberi sustav: ', options=(('masa-opruga-prigušivač', 0), ('1/4 automobila', 1)), …
ToggleButtons(description='Odaberi $F$ funkciju: ', options=(('step funkcija', 0), ('impulsna funkcija', 1), (…
Output()
```python
```
| f0924928d885d43614315ab047f771c709fed904 | 144,416 | ipynb | Jupyter Notebook | ICCT_hr/examples/02/TD-03-Mehanicki_sustavi.ipynb | ICCTerasmus/ICCT | fcd56ab6b5fddc00f72521cc87accfdbec6068f6 | ["BSD-3-Clause"] | 6 | 2021-05-22T18:42:14.000Z | 2021-10-03T14:10:22.000Z | ICCT_hr/examples/02/.ipynb_checkpoints/TD-03-Mehanicki_sustavi-checkpoint.ipynb | ICCTerasmus/ICCT | fcd56ab6b5fddc00f72521cc87accfdbec6068f6 | ["BSD-3-Clause"] | null | null | null | ICCT_hr/examples/02/.ipynb_checkpoints/TD-03-Mehanicki_sustavi-checkpoint.ipynb | ICCTerasmus/ICCT | fcd56ab6b5fddc00f72521cc87accfdbec6068f6 | ["BSD-3-Clause"] | 2 | 2021-05-24T11:40:09.000Z | 2021-08-29T16:36:18.000Z | 116.746968 | 87,099 | 0.776347 | true | 5,030 | Qwen/Qwen-72B | 1. YES 2. YES | 0.740174 | 0.679179 | 0.502711 | __label__eng_Latn | 0.053483 | 0.006294
# Numerical Linear Algebra
Linear algebra is the area of mathematics that studies vector spaces and the linear maps between them. Regardless of which vector space we work in, as long as it is finite-dimensional, every element of the space can be represented as a **vector** (an $n$-tuple of elements of the space's field) and every linear map can be represented as a **matrix** (a collection of $n \times n$ elements of the space's field).
## The problem $\mathbf{Ax}=\mathbf{b}$ and its equivalent formulations
In mathematics and physics we often run into problems where we have to solve a system of linear equations of the following form:
$$
\begin{align}
a_{11} x_1 + a_{12} x_2 + \ldots + a_{1n} x_n &= b_1 \\
a_{21} x_1 + a_{22} x_2 + \ldots + a_{2n} x_n &= b_2 \\
\vdots \quad \quad \quad \vdots \quad \quad \quad \vdots \\
a_{n1} x_1 + a_{n2} x_2 + \ldots + a_{nn} x_n &= b_n \\
\end{align}
$$
which can be written in matrix form as
$$
\begin{pmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\
a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n,1} & a_{n,2} & \cdots & a_{n,n}
\end{pmatrix}
\begin{pmatrix}
x_1 \\
x_2 \\
\vdots \\
x_n
\end{pmatrix}
=
\begin{pmatrix}
b_1 \\
b_2 \\
\vdots \\
b_n
\end{pmatrix}
$$
If we define
$$
\mathbf{A} =
\begin{pmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\
a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n,1} & a_{n,2} & \cdots & a_{n,n}
\end{pmatrix} \quad , \quad \mathbf{x} = \begin{pmatrix}
x_1 \\
x_2 \\
\vdots \\
x_n
\end{pmatrix} \quad \text{y} \quad \mathbf{b}= \begin{pmatrix}
b_1 \\
b_2 \\
\vdots \\
b_n
\end{pmatrix} $$
we can write the system compactly as:
$$\mathbf{Ax} = \mathbf{b}$$
### Equivalent formulations of the problem
Many linear algebra problems can be posed as solving a system of linear equations. For example:
* Finding the inverse of a matrix, i.e. finding $\mathbf{B}$ such that $\mathbf{A}\mathbf{B} = \mathbf{Id}$
* Finding intersections of lines or planes in space
* Finding change-of-basis matrices
* Finding the coefficients of a linear map applied to $n$ linearly independent vectors
* Solving discretized (partial and ordinary) differential equations numerically
### Problematic systems of linear equations
In principle, the matrix $A$ need not be $n\times n$; we may have fewer or more equations than unknowns. Dealing with those cases is **complicated**, so we will always assume that *the matrix is $n\times n$*.
Even if our matrix $A$ is $n\times n$, it may happen that the system of linear equations **does not admit a unique solution**. Again, for simplicity, we will always assume that the system admits a **unique** solution. Equivalent conditions for this are:
* $\mathbf{A}$ is invertible
* $\det(\mathbf{A}) \neq 0$
* $\operatorname{rank}(\mathbf{A}) = n$
## How do we solve a linear system?
There are several exact, analytical ways to solve a system of linear equations (Cramer's rule, Gaussian elimination, substitution, etc.). However, most of them are **tedious**, **mechanical** and long to carry out by hand. In particular, with more than $4$ equations solving them becomes extremely tedious.
Because of this tedium, and because of their importance in every field, it is ideal to use the computer to solve such systems.
## Computational prelude: vectors and matrices in Julia
Julia can work with matrices and vectors natively. Clearly an **array of floats** can be interpreted as a **vector**, since it is an ordered collection of elements of the same type.
Within Julia, arrays can be operated on in the same way we operate on vectors: with addition and multiplication by a scalar.
```julia
vec1 = [1.0,1.0,0.0]
vec2 = [1.0,0.0,1.0]
print("suma: ")
println(vec1+vec2)
alpha = 5.0
print("producto por un escalar: ")
println(alpha*vec1)
```
suma: [2.0, 1.0, 1.0]
producto por un escalar: [5.0, 5.0, 0.0]
To represent matrices, one of Julia's advantages is that we can build **$n$-dimensional arrays**. For now, 2-dimensional arrays are enough. The syntax to build a 2-dimensional array explicitly is the following:
```julia
# ponemos los renglones como arreglos 1D separados por ESPACIOS, no por comas
# cada renglon se separa entre sí por un punto y coma
A = [[1.0 2.0 3.0] ; [4.0 5.0 6.0] ]
```
2×3 Array{Float64,2}:
1.0 2.0 3.0
4.0 5.0 6.0
We can also concatenate vectors horizontally (i.e. as columns) to build a matrix, using the following syntax
```julia
A2 = [vec1 vec2]
```
3×2 Array{Float64,2}:
1.0 1.0
1.0 0.0
0.0 1.0
A matrix can be transposed using the `transpose` function
```julia
transpose(A2)
```
2×3 LinearAlgebra.Transpose{Float64,Array{Float64,2}}:
1.0 1.0 0.0
1.0 0.0 1.0
If we now ask what the type of a matrix is, we can use the `typeof` function
```julia
typeof(A)
```
Array{Float64,2}
Note that `typeof(A)` now shows a `2` after `Float64`, which tells us that our object `A` is a 2-dimensional array. For ordinary vectors we use arrays of dimension $1$. The dimensions of an array can be queried with the `size` function
```julia
size(A2)
```
(3, 2)
We can multiply a matrix by a vector if the dimensions match
```julia
# matriz de 3x3
A4 = [[1 2 3] ; [2 4 5] ; [7 8 9]]
# vector de 3 entradas
vec = [1,0,0]
# su multiplicación
println(A4*vec)
```
[1, 2, 7]
```julia
# matriz de 4x3
A5 = [[0 1 4] ; [1 2 3] ; [2 4 5] ; [7 8 9]]
# vector de 3 entradas
vec = [1,0,0]
# su multiplicación
println(A5*vec)
```
[0, 1, 2, 7]
If the number of columns of the matrix does not match the length of the vector, attempting the multiplication will throw an error:
```julia
# matriz de 3x2
A6 = [[1 2] ; [2 4] ; [7 8]]
# vector de 3 entradas
vec = [1,0,0]
# no está definida la multiplicación
println(A6*vec)
```
If we ask Julia to print a matrix, the format is rather ugly
```julia
# impresión fea
println(A6)
```
[1 2; 2 4; 7 8]
We can use the `display(A)` function to print it as a matrix. The syntax is the following:
```julia
# impresión bonita
display(A6)
```
3×2 Array{Int64,2}:
1 2
2 4
7 8
To access the elements of a matrix, we use the syntax `A[i,j]`
```julia
display(A)
println(A[1,2])
println(A[2,3])
```
2×3 Array{Float64,2}:
1.0 2.0 3.0
4.0 5.0 6.0
2.0
6.0
We can access whole rows or columns by using `:` in place of `i` or `j`, respectively
```julia
# primera renglón
println(A[1,:])
# tercer columna
println(A[:,3])
```
[1.0, 2.0, 3.0]
[3.0, 6.0]
Note that in Julia we can use a range or an array of integers as an index into another array to obtain the subarray with the elements at those indices. For example:
```julia
arr1 = [2,4,8,16,32,128]
println(arr1[[2,5]])
println(arr1[1:3])
```
[4, 32]
[2, 4, 8]
The same happens with matrices: we can obtain a submatrix in that way
```julia
mat1 = [[1 2 3 4] ; [5 6 7 8]]
display(mat1)
display(mat1[1:2,3:4])
display(mat1[1:2,2:4])
```
2×4 Array{Int64,2}:
1 2 3 4
5 6 7 8
2×2 Array{Int64,2}:
3 4
7 8
2×3 Array{Int64,2}:
2 3 4
6 7 8
## Simple case: $A$ is upper triangular
Going back to solving the problem $\mathbf{Ax} = \mathbf{b}$:
before attacking the problem in general, let us consider the simple case in which the matrix $\mathbf{A}$
is upper triangular, that is, of the form
$$
\mathbf{A} =
\begin{pmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\
0 & a_{2,2} & \cdots & a_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
0 & \ldots & a_{n-1,n-1} & a_{n-1,n} \\
0 & \ldots & 0 & a_{n,n} \\
\end{pmatrix}
$$
Mathematically, this means that for $i>j$, $A_{ij} = 0$.
When the matrix has this form, solving the system of equations is very easy.
$$
\begin{pmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\
0 & a_{2,2} & \cdots & a_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
0 & \ldots & a_{n-1,n-1} & a_{n-1,n} \\
0 & \ldots & 0 & a_{n,n} \\
\end{pmatrix}
\begin{pmatrix}
x_1 \\
x_2 \\
\vdots \\
x_{n-1} \\
x_n
\end{pmatrix}
=
\begin{pmatrix}
b_1 \\
b_2 \\
\vdots \\
b_{n-1} \\
b_n
\end{pmatrix}
$$
### Exercise 1
(i) Assuming $\mathbf{A}$ is upper triangular with no zeros on the diagonal, find the (analytical) expression for $x_n$ in terms of the entries of $A$ and the entries of the right-hand-side vector $\mathbf{b}$.
(ii) Once you have the expression for $x_n$, find an expression for $x_{n-1}$ (which must also depend on the value of $x_n$) and for $x_{n-2}$ (which also depends on the values of $x_n$ and $x_{n-1}$).
(iii) Generalize the previous expressions to obtain a general expression for any $x_i$ in terms of the entries of $A$, the entries of the right-hand-side vector $\mathbf{b}$ and the other values $x_k$ with $k>i$. (For reference, the resulting formula is written out right after Exercise 2.)
### Exercise 2
Implement a function `solTriSuperior(A,b)` that takes a 2D array `A`, representing an **upper triangular** $n\times n$ matrix **with no zeros on the diagonal**, and a right-hand-side vector `b` of length $n$, and returns an array `xs` with the solution of the system $\mathbf{Ax} = \mathbf{b}$ computed using the formula from part (iii) of the previous exercise.
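For reference, the standard back-substitution formula that part (iii) of Exercise 1 leads to (stated here only as a check, not as a substitute for deriving it):

$$
x_n = \frac{b_n}{a_{n,n}}, \qquad
x_i = \frac{1}{a_{i,i}}\left(b_i - \sum_{k=i+1}^{n} a_{i,k}\, x_k\right), \quad i = n-1, \ldots, 1
$$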
## General case: reducing $\mathbf{A}$ to an upper triangular matrix
In general, $\mathbf{A}$ will not be upper triangular. However, the procedure of [**Gaussian elimination**](https://en.wikipedia.org/wiki/Gaussian_elimination) makes it possible to reduce it to one.
### Gaussian elimination
Gaussian elimination works with the **augmented matrix**, denoted $\mathbf{C}$, given by
$$
\mathbf{C} =
\begin{pmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,n} & b_1 \\
a_{2,1} & a_{2,2} & \cdots & a_{2,n} & b_2 \\
\vdots & & \ddots & & \vdots \\
a_{n,1} & a_{n,2} & \cdots & a_{n,n} & b_n
\end{pmatrix}
$$
Using elementary row operations (swapping rows, multiplying them by a scalar, or adding them together), we seek to reduce the matrix $\mathbf{C}$ to a **row echelon** form, denoted $\mathbf{C}^*$, in which all the elements below the diagonal are $0$:
$$
\mathbf{C}^* =
\begin{pmatrix}
a^*_{1,1} & a^*_{1,2} & \cdots & a^*_{1,n} & b^*_1 \\
0 & a^*_{2,2} & \cdots & a^*_{2,n} & b^*_2 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & a^*_{n,n} & b^*_n
\end{pmatrix}
$$
That augmented matrix represents the system of equations:
$$
\begin{pmatrix}
a^*_{1,1} & a^*_{1,2} & \cdots & a^*_{1,n} \\
0 & a^*_{2,2} & \cdots & a^*_{2,n} \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & a^*_{n,n}
\end{pmatrix}
\begin{pmatrix}
x_1 \\
x_2 \\
\vdots \\
x_n
\end{pmatrix}=
\begin{pmatrix}
b^*_1 \\
b^*_2 \\
\vdots \\
b^*_n
\end{pmatrix}
$$
The system of equations represented by $\mathbf{C}^*$ is equivalent to the one represented by $\mathbf{C}$, so their solutions are the same. However, the system associated with the echelon matrix is upper triangular, and therefore much easier to solve, as we saw in the previous exercises.
### Exercise 3:
(i) Suppose $a_{1,1} \neq 0$. Which elementary operation could you apply to row $n$ to make the element $a_{n,1}$ become 0?
**Hint:** the operation is to replace row $n$ with a linear combination of row $1$ and row $n$.
(ii) Can you apply a procedure similar to the previous part, but now to make $a_{n-1,1}=0$? And for any other $a_{k,1}=0$ with $k \neq 1$?
(iii) Now for the second column. Suppose you have already carried out all the elementary operations needed so that $a_{k,1}=0$ for $k \neq 1$. Again assuming $a_{2,2}\neq 0$, how can you do something analogous to the elementary operations of the previous parts to make $a_{k,2}=0$ for $k \neq 2$?
(iv) Generalize all of the above to find the elementary operations needed (and their order) to turn the augmented matrix $\mathbf{C}$ into the echelon matrix $\mathbf{C}^*$. (The resulting row operation is written out just below.)
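For reference, the elementary operation that parts (i)–(iv) build up to is the standard Gaussian elimination step (stated here only as a check):

$$
R_k \;\leftarrow\; R_k - \frac{a_{k,j}}{a_{j,j}}\, R_j, \qquad j = 1, \ldots, n-1, \quad k > j
$$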
### Exercise 4
Implement a function `elimGaussBasica(A,b)` that takes a 2D array `A` of size $n\times n$, **assuming it has no zeros on the diagonal**, and a 1D array `b` of length $n$. The function must build the augmented matrix $\mathbf{C}$ obtained from `A` and `b` and then carry out the elementary operations found in the previous exercise to generate the echelon matrix $\mathbf{C}^*$. The function must return the resulting $n\times (n+1)$ echelon matrix.
**Hint:** the `hcat` function lets you concatenate vectors and matrices horizontally as follows:
```julia
A = [[1 2 3] ; [4 5 6]]
B = [8,9]
display(A)
display(B)
display(hcat(A,B))
```
2×3 Array{Int64,2}:
1 2 3
4 5 6
2-element Array{Int64,1}:
8
9
2×4 Array{Int64,2}:
1 2 3 8
4 5 6 9
## What happens if some elements $a_{ii}=0$?
It is essential that the diagonal elements are not $0$ for our method to work. However, it can happen that some of them are $0$. If $a_{k,k}=0$, we can look for a row $l$ in which $a_{l,k}\neq 0$ and then **swap** row $k$ with row $l$.
Since we are assuming the system has a unique solution, it can never happen that $a_{l,k}=0$ for every $l$, because that would imply the system has no unique solution.
### Exercise 5
Implement a function `checarDiagonal(A,b)` that takes a 2D array `A` of size $n\times n$ (which may have zeros on the diagonal) and a 1D array `b` of length $n$. The function must build the augmented matrix $\mathbf{C}$ obtained from `A` and `b` and then check its diagonal elements to make sure none of them is $0$. If it finds one that is $0$, it must swap that row with another one where the entry is nonzero. The function must return an augmented matrix with no zeros on the diagonal.
### Exercise 6
Implement a function `eliminaciónGaussiana(A,b)` that takes a 2D array `A` of size $n\times n$ (which may have zeros on the diagonal) and a 1D array `b` of length $n$. The function must first use `checarDiagonal(A,b)` to obtain an augmented matrix with no zeros on the diagonal and then apply the elementary operations of Exercise 3 to obtain an echelon matrix $\mathbf{C}^*$. The function must return that $n\times (n+1)$ echelon matrix.
### Exercise 7
Implement a function `ecLineales(A,b)` that takes a 2D array `A` of size $n\times n$ and a 1D array `b` of length $n$. Your function must use `eliminaciónGaussiana(A,b)` and `solTriSuperior(A,b)` to solve the system of linear equations and return a 1D array of length $n$ with the solutions $x_i$.
### Exercise 8
Test your `ecLineales(A,b)` function with the following system
$$
\begin{align}
x+y+z &= 1 \\
3x-2y+w &= -4 \\
y - w &= 2 \\
x-2y+4z-5w &= -6
\end{align}
$$
Its solution is $x= -0.0555$, $y=1.8333$, $z=-0.7777$ and $w=-0.1666$.
| 9941f8a5a2310b67c5d91c4fac1a722258c4d27e | 27,128 | ipynb | Jupyter Notebook | files/fiscomp_2020-4/material/clase11.ipynb | sayeg84/sayeg84.github.io | 18f2e36dd7252603fad8f7093dc5aa00fc721be4 | ["MIT"] | null | null | null | files/fiscomp_2020-4/material/clase11.ipynb | sayeg84/sayeg84.github.io | 18f2e36dd7252603fad8f7093dc5aa00fc721be4 | ["MIT"] | null | null | null | files/fiscomp_2020-4/material/clase11.ipynb | sayeg84/sayeg84.github.io | 18f2e36dd7252603fad8f7093dc5aa00fc721be4 | ["MIT"] | null | null | null | 30.310615 | 554 | 0.5383 | true | 5,649 | Qwen/Qwen-72B | 1. YES 2. YES | 0.855851 | 0.833325 | 0.713202 | __label__spa_Latn | 0.975042 | 0.495338
```python
#### Notebook Imports
import numpy as np
```
```python
from random import randint as rand
```
### CS229 Week 1 Algorithms
---
1. Linear Model (for regression)
2. Least Mean Squares cost function
3. Batch Gradient Descent
4. Stochastic Gradient Descent
5. Normal Equations
### Linear Model (Hypothesis Function)
---
\begin{equation}
h_\theta(x) = \sum_{i=0}^{n} \theta_ix_i
\end{equation}
Here: $\theta_0$ will be bias/intercept of the linear equation and $x_0$ will be a 1 vector
```python
h = lambda theta, x: 1 + np.sum(theta*x)
```
### Least Mean Squares
---
\begin{equation}
J(\theta) = \frac{1}{2} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)})^2 \\
\frac{\partial J}{\partial \theta_j} = \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)})x^{(i)}_j
\end{equation}
```python
J = lambda theta, x, y: (1/2)*np.sum((h(theta, x) - y)**2)  # square each residual before summing, as in the LMS formula above
# dJ_dtheta = lambda theta, x, y, xj: np.sum(h(theta, x) - y)*xj
def dJ_dtheta(theta, x, y, xj):
s = h(theta, np.ones(x.shape)) - y[0]
for i in range(len(x)):
s += (h(theta[i], x[i]) - y[i])
return s
```
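The helpers above work sample by sample; as a cross-check, here is a hedged, fully vectorized version of the same cost and gradient. This is not the notebook's original code: the names `lms_cost`/`lms_grad` are ours, and it assumes a design matrix `X` whose first column is all ones so that `theta[0]` plays the role of the bias:

```python
import numpy as np

def lms_cost(theta, X, y):
    # J(theta) = 1/2 * sum_i (h_theta(x_i) - y_i)^2, with h_theta(x) = X @ theta
    r = X @ theta - y
    return 0.5 * np.sum(r**2)

def lms_grad(theta, X, y):
    # dJ/dtheta_j = sum_i (h_theta(x_i) - y_i) * x_ij, written as one matrix product
    return X.T @ (X @ theta - y)
```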
### Batch Gradient Descent
---
\begin{equation}
\theta_{j+1} := \theta_j - \alpha \frac{\partial J}{\partial \theta_j} \\
\theta_{j+1} := \theta_j - \alpha (\sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)})x^{(i)}_j)
\end{equation}
repeat until convergence {
\begin{equation}
\theta_{j+1} := \theta_j - \alpha (\sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)})x^{(i)}_j)
\end{equation}
}
```python
nextTheta = lambda theta, X, Y, alpha: theta - alpha*dJ_dtheta(theta, X, Y, 1)
```
### Stochastic Gradient Descent
---
\begin{equation}
\theta_{j+1} := \theta_j - \alpha \frac{\partial J}{\partial \theta_j} \\
\theta_{j+1} := \theta_j - \alpha (h_\theta(x^{(i)}) - y^{(i)})x^{(i)}_j
\end{equation}
repeat until converge {
\begin{equation}
\theta_{j+1} := \theta_j - \alpha (h_\theta(x^{(i)}) - y^{(i)})x^{(i)}_j
\end{equation}
}
```python
def stochastic_dJ_dtheta(theta, x, y):
    # use ONE random sample per update; randint is inclusive, so the upper bound is len(x) - 1
    i = rand(0, len(x) - 1)
    return (y[i] - h(theta, x[i])) * x[i]
```
```python
stochastic_nextTheta = lambda theta, X, Y, alpha: theta + alpha*stochastic_dJ_dtheta(theta, X, Y)
```
### Normal Equations
---
\begin{equation}
\theta = (X^TX)^{-1}X^Ty
\end{equation}
```python
ftheta = lambda X, Y: np.linalg.pinv(X.T @ X) @ X.T @ Y  # use matrix products (@); element-wise * would not implement the normal equations
```
```python
from sklearn import datasets
import pandas as pd
```
```python
df = pd.read_csv("ex1data1.txt")
```
```python
df.columns = ['X', 'Y']
```
```python
X = df['X']
Y = df['Y']
```
```python
Y
```
0 9.13020
1 13.66200
2 11.85400
3 6.82330
4 11.88600
...
91 7.20290
92 1.98690
93 0.14454
94 9.05510
95 0.61705
Name: Y, Length: 96, dtype: float64
```python
del df
```
```python
X, Y = X.to_numpy(), Y.to_numpy()
```
```python
X
```
array([ 5.5277, 8.5186, 7.0032, 5.8598, 8.3829, 7.4764, 8.5781,
6.4862, 5.0546, 5.7107, 14.164 , 5.734 , 8.4084, 5.6407,
5.3794, 6.3654, 5.1301, 6.4296, 7.0708, 6.1891, 20.27 ,
5.4901, 6.3261, 5.5649, 18.945 , 12.828 , 10.957 , 13.176 ,
22.203 , 5.2524, 6.5894, 9.2482, 5.8918, 8.2111, 7.9334,
8.0959, 5.6063, 12.836 , 6.3534, 5.4069, 6.8825, 11.708 ,
5.7737, 7.8247, 7.0931, 5.0702, 5.8014, 11.7 , 5.5416,
7.5402, 5.3077, 7.4239, 7.6031, 6.3328, 6.3589, 6.2742,
5.6397, 9.3102, 9.4536, 8.8254, 5.1793, 21.279 , 14.908 ,
18.959 , 7.2182, 8.2951, 10.236 , 5.4994, 20.341 , 10.136 ,
7.3345, 6.0062, 7.2259, 5.0269, 6.5479, 7.5386, 5.0365,
10.274 , 5.1077, 5.7292, 5.1884, 6.3557, 9.7687, 6.5159,
8.5172, 9.1802, 6.002 , 5.5204, 5.0594, 5.7077, 7.6366,
5.8707, 5.3054, 8.2934, 13.394 , 5.4369])
```python
def LinearRegression(x, y, learningRate = 0.01, epochs = 1000):
theta = np.zeros(x.shape)
T = None
for i in range(epochs):
theta = nextTheta(theta, x, y, learningRate)
return theta
```
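As a hedged end-to-end check (not part of the original notebook), both approaches can be compared on the loaded `X`, `Y` data using an explicit design matrix with a column of ones. The gradient-descent loop below is self-contained rather than reusing the helpers above, whose array shapes are ambiguous:

```python
Xd = np.c_[np.ones_like(X), X]              # design matrix: first column of ones for the intercept

# Closed form (normal equations)
theta_ne = np.linalg.pinv(Xd.T @ Xd) @ Xd.T @ Y

# Batch gradient descent on the same least-squares cost
theta_gd = np.zeros(2)
alpha = 0.01
for _ in range(10000):
    theta_gd = theta_gd - alpha * Xd.T @ (Xd @ theta_gd - Y) / len(Y)

print(theta_ne)                             # [intercept, slope]
print(theta_gd)                             # should approach theta_ne after enough iterations
```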
| d8d3c5c805b6f637dab68a2127372f94e8e6e851 | 7,943 | ipynb | Jupyter Notebook | Week1.ipynb | m-yasir/ml-algo-scrach | f369b354ae82c5b31a469c65e5b9c99f0879d79b | ["MIT"] | null | null | null | Week1.ipynb | m-yasir/ml-algo-scrach | f369b354ae82c5b31a469c65e5b9c99f0879d79b | ["MIT"] | null | null | null | Week1.ipynb | m-yasir/ml-algo-scrach | f369b354ae82c5b31a469c65e5b9c99f0879d79b | ["MIT"] | null | null | null | 24.365031 | 119 | 0.463175 | true | 1,789 | Qwen/Qwen-72B | 1. YES 2. YES | 0.90053 | 0.7773 | 0.699982 | __label__yue_Hant | 0.198043 | 0.464623 |
# Class 3b: Yukovski (Joukowski) airfoil
_Believe it or not, __with what we have seen so far you can already do great things__. OK, a Yukovski airfoil is not much aerodynamically, but if we build it in Python... Take a look at the figure: not bad, right? Something like that is what we will try to achieve by the end of this class._
_Since the goal is not to learn (or relearn) aerodynamics, we will give you the mathematical functions and the steps to follow, as well as the structure of the program. You only have to worry about programming each block. You can read all the aerodynamic details in the book Aerodinámica básica by Meseguer Ruiz, J., Sanz Andrés, A. (Editorial Garceta)._
## 1. Importing packages
First things first, let's import the packages:
```python
# Recuerda, utilizaremos arrays y pintaremos gráficas.
```
## 2. Problem parameters
######  <h6 align="right">__Source:__ _Aerodinámica básica, Meseguer Ruiz, J., Sanz Andrés, A._<div>
The Yukovski transformation is: $$\tau=t+\frac{a^{2}}{t}$$
The problem parameters are the ones in the following block; you can change them later:
```python
# Datos para el perfil de Yukovski
# Parámetro de la transformación de Yukovski
a = 1
# Centro de la circunferencia
landa = 0.2 # coordenada x (en valor absoluto)
delta = 0.3 # coordenada y
t0 = a * (-landa + delta * 1j) # centro: plano complejo
# Valor del radio de la circunferencia
R = a * np.sqrt((1 + landa)**2 + delta**2)
# Ángulo de ataque corriente incidente
alfa_grados = 0
alfa = np.deg2rad(alfa_grados)
#Velocidad de la corriente incidente
U = 1
```
## 3. Yukovski airfoil from a circle
### Yukovski transformation function
__The task is to define a function that performs the Yukovski transformation.__ This function will receive the transformation parameter $a$ and the point $t$ of the complex plane. It will return the value $\tau$, the point of the complex plane into which $t$ is mapped. (A possible completion of the function skeleton is sketched right after the skeleton cell below.)
```python
def transf_yukovski(a, t):
"""Dado el punto t (complejo) y el parámetro a
a de la transformación proporciona el punto
tau (complejo) en el que se transforma t."""
```
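One possible completion of the skeleton above, shown separately so the exercise cell itself stays blank (it follows directly from $\tau = t + a^2/t$ and also works element-wise on NumPy arrays of complex numbers):

```python
def transf_yukovski(a, t):
    """Return tau = t + a**2 / t, the Yukovski image of the complex point t."""
    return t + a**2 / t
```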
```python
#comprobamos que la función está bien programada
#puntos del eje real siguen siendo del eje real
err_message = "La transformación de Yukovski no devuelve un resultado correcto"
np.testing.assert_equal(transf_yukovski(1, 1+0j), 2+0j, err_message)
```
### Circle
Now we want to transform the circle of radius $R$ centred at $t_0$ using the previous function:
1. __Create `N` points of the circle__ so that __`Xc` holds the $x$ coordinates and `Yc` holds the $y$ coordinates__ of the points that form it. Control the number of points with a parameter called `N_perfil`.
$$X_c = real(t_0) + R·cos(\theta)$$
$$Y_c = imag(t_0) + R·sin(\theta)$$
2. Once you have the two arrays `Xc` and `Yc`, __plot them with a `scatter`__ to check that everything went well.
3. Also plot the __centre of the circle__.
You should get something like this (a hedged sketch of one way to fill in the blanks is given right after the next cell):
```python
# Número de puntos de la circunferencia que
# vamos a transformar para obtener el perfil
N_perfil =
#se barre un ángulo de 0 a 2 pi
#se crean las coordenadas del los puntos
#de la circunferencia
Xc =
Yc =
#lo visualizamos
```
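One possible way to fill in the blanks of the previous cell (a sketch; it assumes the section 1 imports, `numpy` as `np` and `matplotlib.pyplot` as `plt`, and uses the formulas given above):

```python
N_perfil = 100                                  # number of points on the circle
theta = np.linspace(0, 2 * np.pi, N_perfil)     # sweep the angle from 0 to 2*pi
Xc = np.real(t0) + R * np.cos(theta)
Yc = np.imag(t0) + R * np.sin(theta)

plt.scatter(Xc, Yc, marker='.')
plt.scatter(np.real(t0), np.imag(t0), color='red')   # centre of the circle
plt.gca().set_aspect(1)
```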
```python
# Lo visualizamos más bonito
plt.figure("circunferencia", figsize=(5,5))
plt.title('Circunferencia', {'fontsize':20})
# Esto no tienes por qué entenderlo ahora
p = plt.Polygon(list(zip(Xc, Yc)), color="#cccccc", zorder=3)
plt.gca().add_patch(p)
plt.ylim(-1.5, 2)
plt.xlim(-2, 1.5)
plt.grid()
```
### Transforming the circle into the airfoil
We are now in a position to __transform these points of the circle (`Xc`, `Yc`) into those of the airfoil (`Xp`, `Yp`)__. For this we will use our `transf_yukovski` function. Remember that this function takes and returns complex numbers. Will an airfoil come out?
```python
# Se transforman los puntos de la circunferencia
Xp, Yp =
# Lo visualizamos
```
```python
# Lo visualizamos más bonito
plt.figure('perfil yukovski', figsize=(10,10))
plt.title('Perfil', {'fontsize':20})
p = plt.Polygon(list(zip(Xp, Yp)), color="#cccccc", zorder=3)
plt.gca().add_patch(p)
plt.gca().set_aspect(1)
plt.xlim(-3, 3)
plt.ylim(-0.4,1)
plt.grid()
```
## 4. Flow around the cylinder
To visualize the flow around the cylinder we use the __complex potential__ of a _uniform stream_ at an angle $\alpha$ to the $x$ axis _in the presence of a cylinder_ (applying the circle theorem), adding a vortex with the right strength so that the Kutta condition is satisfied on the airfoil:
\begin{equation}
f(t)=U_{\infty}\left((t-t_{0})\,e^{-i\alpha}+\frac{R^{2}}{t-t_{0}}\,e^{i\alpha}\right)+\frac{i\Gamma}{2\pi}\,\ln(t-t_{0})=\Phi+i\Psi
\end{equation}
where $\Phi$ is the velocity potential and $\Psi$ is the stream function.
$$\Gamma = 4 \pi a U (\delta + (1+\lambda) \alpha)$$
$\Gamma$ is the circulation that has to be added to the cylinder so that, once it is transformed into the airfoil, the Kutta condition is satisfied.
Recalling that the stream function is constant along streamlines, we know that the flow can be visualized by drawing $\Psi = const$.
__We will plot these constant-stream-function lines with the `contour()` function, but first we have to create a circular mesh. That will be the first thing we do:__
1. Create a parameter `N_R` whose value is the number of points the mesh will have in the radial direction. Seen another way, this parameter is the number of concentric circles that make up the mesh.
2. Create two parameters `R_min` and `R_max` for the minimum and maximum radii between which the mesh extends. The minimum radius must be the radius of the circle, because we are computing the air outside the airfoil.
3. The tangential direction needs a single parameter `N_T`, the number of points the mesh will have in that direction; in other words, how many points make up each concentric circle of the mesh.
4. Create an array `R_` going from `R_min` to `R_max` with `N_R` elements. Analogously, create the array `T_`, which, since it represents the angles of the points on the circles, must go from 0 to 2$\pi$ and have `N_T` elements.
5. To work with the mesh we will need Cartesian coordinates. Create the mesh: `XX, YY` will be two matrices of `N_T · N_R` elements. Each element of these matrices corresponds to one mesh point: the matrix `XX` holds the X coordinate of each point and the matrix `YY` the y coordinate.
Generating these matrices is a bit tricky, because they depend on both vectors; a hedged sketch is given right after the next cell.
For each element, $x = real(t_0) + R · cos (T) $ , $y = imag(t_0) + R · sin(T)$.
```python
#se crea la malla donde se va pintar la función de corriente
# Dirección radial
N_R =
R_min =
R_max =
# Dirección tangencial
N_T =
R_ =
T_ =
# Crear la malla:
XX =
YY =
```
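One possible way to fill in the mesh cell above (a sketch; the `reshape` broadcasting trick mirrors the one used in the complete functions further down the notebook):

```python
# Radial direction
N_R = 50                      # number of concentric circles
R_min = R                     # start at the cylinder surface: only the exterior flow is computed
R_max = 10
# Tangential direction
N_T = 180                     # points per circle

R_ = np.linspace(R_min, R_max, N_R)
T_ = np.linspace(0, 2 * np.pi, N_T)

# Reshape T_ into a column so that XX, YY have shape (N_T, N_R)
XX = np.real(t0) + R_ * np.cos(T_).reshape((-1, 1))
YY = np.imag(t0) + R_ * np.sin(T_).reshape((-1, 1))
```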
```python
#pintamos la malla para verla
plt.figure(figsize=(10,10)) #Esto sirve para que se vea grande
plt.???????(XX.flatten(), YY.flatten(), marker='.')
```
__NOTE__: Earlier versions used a rectangular mesh, which caused some problems with the interior points when plotting the streamlines and the velocity and pressure fields. The idea of using a circular mesh is taken from [this exercise](http://nbviewer.ipython.org/github/barbagroup/AeroPython/blob/master/lessons/06_Lesson06_Assignment.ipynb) of the Aerodynamics-Hydrodynamics with Python course by [Prof. Lorena Barba](http://lorenabarba.com/).
### Trying out the mesh transformation
Well, what we wanted was to do things around our airfoil, right?
We will achieve this by plotting the function `psi` (as defined) at the points `XX_tau, YY_tau`, obtained by transforming `XX, YY` through the `transf_yukovski` function; remember that our transformation takes and returns complex numbers. As before, you must separate the real and imaginary parts. In the next cell, build and transform `tt` (where the mesh should be stored in complex form) to obtain `XX_tau, YY_tau` (a possible completion is sketched right after it).
```python
tt =
XX_tau, YY_tau =
```
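A possible completion of the cell above (sketch):

```python
tt = XX + YY * 1j                          # the mesh as complex numbers
tau = transf_yukovski(a, tt)               # transform every mesh point at once
XX_tau, YY_tau = np.real(tau), np.imag(tau)
```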
```python
# Comprobamos que los puntos exteriores a la circunferencia se transforman en los puntos exteriores del perfil
#pintamos la malla para verla
plt.figure(figsize=(10,10))
plt.scatter(XX_tau.flatten(), YY_tau.flatten(), marker='.')
```
### Obtaining the flow
1. Create a variable `T` holding the value of the circulation $\Gamma$.
2. Using the array `tt`, the value `T` and the parameters defined at the beginning (`t0, alfa, U...`), build `f` according to the formula above (no need to create a function).
3. Store the imaginary part of that function (the stream function) in a variable `psi`.
```python
# Circulación que hay que añadir al cilindro para
# que se cumpla la hipótesis de Kutta en el perfil
T = 4 * np.pi * a * U * (delta + (1+landa) * alfa)
# Malla compleja
tt =
# Potencial complejo
f = U * ((tt - t0) * np.exp(-alfa * 1j) + R ** 2 / (tt - t0) * np.exp(alfa * 1j) )
f += 1j * T / (2 * np.pi) * np.log(tt - t0)
# Función de corriente
psi =
```
Since the stream function takes a constant value along each streamline, we can visualize the flow around the cylinder by drawing the lines where `psi` is constant. For that we will use the `contour()` function on the `XX, YY` mesh. If nothing shows up, try changing the number of lines and the maximum and minimum values of the function being plotted.
```python
#lo visualizamos
plt.figure('lineas de corriente', figsize=(10,10))
plt.contour(XX, YY, psi, np.linspace(-5,5,50))
plt.grid()
plt.gca().set_aspect(1)
#plt.xlim(-8, 8)
#plt.ylim(-3, 3)
```
```python
#ponemos el cilindro encima
plt.figure('flujo cilindro', figsize=(10,10))
plt.contour(XX, YY, psi, np.linspace(-5,5,50), colors=['blue', 'blue'])
plt.grid()
plt.gca().set_aspect(1)
p = plt.Polygon(list(zip(Xc, Yc)), color="#cccccc", zorder=3)
plt.gca().add_patch(p)
```
## 5. Flow around the airfoil
```python
plt.figure("flujo perfil", figsize=(12,12))
plt.contour(?????, ?????, psi, np.linspace(-5,5,50))
plt.xlim(-8,8)
plt.ylim(-3,3)
plt.grid()
plt.gca().set_aspect(1)
```
```python
#Ahora ponemos el perfil encima
plt.figure("flujo perfil", figsize=(12,12))
plt.contour(XX_tau, YY_tau, psi, np.linspace(-5, 5, 50), colors=['blue', 'blue'])
p = plt.Polygon(list(zip(Xp, Yp)), color="#cccccc", zorder=10)
plt.gca().add_patch(p)
plt.xlim(-8,8)
plt.ylim(-3,3)
plt.grid()
plt.gca().set_aspect(1)
```
## 6. Interact
__Now is a good moment to play with all the parameters of the problem.
~~Try changing them and run the whole notebook!~~__
__We are going to use an `interact`, aren't we?__
We have to create a function that does all the tasks: it takes the arguments and plots, so that we can call `interact` with it. All we have to do is cut and paste.
```python
def transformacion_geometrica(a, landa, delta, N_perfil=100):
#punto del plano complejo
t0 =
#valor del radio de la circunferencia
R =
#se barre un ángulo de 0 a 2 pi
theta =
#se crean las coordenadas del los puntos
#de la circunferencia
Xc =
Yc =
#se crean las coordenadas del los puntos
#del perfil
Puntos_perfil = transf_yukovski()
Xp, Yp = np.real(Puntos_perfil) , np.imag(Puntos_perfil)
#Se pintan la cirunferencia y el perfil
fig, ax = plt.subplots(1,2)
fig.set_size_inches(15,15)
p_c = plt.Polygon(list(zip(Xc, Yc)), color="#cccccc", zorder=1)
ax[0].add_patch(p_c)
ax[0].plot(Xc,Yc)
ax[0].set_aspect(1)
ax[0].set_xlim(-3, 3)
ax[0].set_ylim(-2,2)
ax[0].grid()
p_p = plt.Polygon(list(zip(Xp, Yp)), color="#cccccc", zorder=1)
ax[1].add_patch(p_p)
ax[1].plot(Xp,Yp)
ax[1].set_aspect(1)
ax[1].set_xlim(-3, 3)
ax[1].set_ylim(-2,2)
ax[1].grid()
```
```python
from IPython.html.widgets import interact
```
```python
w = interact(transformacion_geometrica,
landa=(-1.,1, 0.01),
delta=(-1.,1,0.01),
a=(0,2.,0.1),
N_perfil=(4, 200) )
```
```python
def flujo_perfil_circunferencia(landa, delta, alfa, U=1, N_malla = 100):
N_perfil=100
a=1
#punto del plano complejo
t0 = a * (-landa + delta * 1j)
#valor del radio de la circunferencia
R = a * np.sqrt((1 + landa)**2 + delta**2)
#se barre un ángulo de 0 a 2 pi
theta = np.linspace(0, 2*np.pi, N_perfil)
#se crean las coordenadas del los puntos
#de la circunferencia
Xc = - a * landa + R * np.cos(theta)
Yc = a * delta + R * np.sin(theta)
#se crean las coordenadas del los puntos
#del perfil
Puntos_perfil = transf_yukovski(a, Xc+Yc*1j)
Xp, Yp = np.real(Puntos_perfil) , np.imag(Puntos_perfil)
#se crea la malla donde se va pintar la función de corriente
# Dirección radial
N_R = 50 # Número de puntos en la dirección radial
R_min = R
R_max = 10
# Dirección tangencial
N_T = 180 # Número de puntos en la dirección tangencial
R_ = np.linspace(R_min, R_max, N_R)
T_ = np.linspace(0, 2*np.pi , N_T)
# El menos en la XX es para que el borde de ataque del perfil esté en la izquierda
XX = - (R_ * np.cos(T_).reshape((-1, 1)) - np.real(t0))
YY = R_ * np.sin(T_).reshape((-1, 1)) + np.imag(t0)
tt = XX + YY * 1j
alfa = np.deg2rad(alfa)
# Circulación que hay que añadir al cilindro para
# que se cumpla la hipótesis de Kutta en el perfil
T = 4 * np.pi * a * U * (delta + (1+landa) * alfa)
#Potencial complejo
f = U * ( (tt - t0) * np.exp(-alfa *1j) + R**2 / (tt - t0) * np.exp(alfa * 1j) )
f += 1j * T / (2* np.pi) * np.log(tt - t0)
#Función de corriente
psi = np.imag(f)
Puntos_plano_tau = transf_yukovski(a, tt)
XX_tau, YY_tau = np.real(Puntos_plano_tau) , np.imag(Puntos_plano_tau)
#Se pinta
fig, ax = plt.subplots(1,2)
#lineas de corriente
fig.set_size_inches(15,15)
ax[0].contour(XX, YY, psi, np.linspace(-10,10,50), colors = ['blue', 'blue'])
ax[0].grid()
ax[0].set_aspect(1)
p = plt.Polygon(list(zip(Xc, Yc)), color="#cccccc", zorder=10)
ax[0].add_patch(p)
ax[0].set_xlim(-5, 5)
ax[0].set_ylim(-2,2)
ax[1].contour(XX_tau, YY_tau, psi, np.linspace(-10,10,50), colors = ['blue', 'blue'])
ax[1].grid()
ax[1].set_aspect(1)
p = plt.Polygon(list(zip(Xp, Yp)), color="#cccccc", zorder=10)
ax[1].add_patch(p)
ax[1].set_xlim(-5, 5)
ax[1].set_ylim(-2,2)
```
```python
p = interact(flujo_perfil_circunferencia,
landa=(-1.,1, 0.01),
delta=(-1.,1,0.01),
alfa=(0, 30),
U=(0,10))
```
---
## 7. Let's plot a bit more
With the data we have already handled, and without much more effort, we can easily plot the air velocity and pressure around the airfoil.
```python
#Velocidad conjugada
dfdt = U * ( 1 * np.exp(-alfa * 1j) - R**2 / (tt - t0)**2 * np.exp(alfa * 1j) )
dfdt += 1j * T / (2*np.pi) * 1 / (tt - t0)
#coeficiente de presion
cp = 1 - np.abs(dfdt)**2 / U**2
```
```python
cmap = plt.cm.RdBu
```
```python
#Se pinta
fig, ax = plt.subplots(1,3)
#lineas de corriente
fig.set_size_inches(15,15)
ax[0].contour(XX, YY, psi, np.linspace(-10,10,50), colors = ['blue', 'blue'])
ax[0].grid()
ax[0].set_aspect(1)
p = plt.Polygon(list(zip(Xc, Yc)), color="#cccccc", zorder=10)
ax[0].add_patch(p)
#Campo de velocidades
ax[1].contourf(XX, YY, np.abs(dfdt), 200, cmap=cmap)
p = plt.Polygon(list(zip(Xc, Yc)), color="#cccccc", zorder=10)
ax[1].set_title('campo de velocidades')
ax[1].add_patch(p)
ax[1].set_aspect(1)
ax[1].grid()
#campo de presiones
ax[2].contourf(XX, YY, cp, 200, cmap=cmap)
p = plt.Polygon(list(zip(Xc, Yc)), color="#cccccc", zorder=10)
ax[2].set_title('coeficiente de presión')
ax[2].add_patch(p)
ax[2].set_aspect(1)
ax[2].grid()
```
```python
#Se pinta
fig, ax = plt.subplots(1,3)
#lineas de corriente
fig.set_size_inches(15,15)
ax[0].contour(XX_tau, YY_tau, psi, np.linspace(-10,10,50), colors = ['blue', 'blue'])
ax[0].grid()
ax[0].set_aspect(1)
p = plt.Polygon(list(zip(Xp, Yp)), color="#cccccc", zorder=10)
ax[0].add_patch(p)
#Campo de velocidades
ax[1].contourf(XX_tau, YY_tau, np.abs(dfdt), 200, cmap=cmap)
p = plt.Polygon(list(zip(Xp, Yp)), color="#cccccc", zorder=10)
ax[1].set_title('campo de velocidades')
ax[1].add_patch(p)
ax[1].set_aspect(1)
ax[1].grid()
#campo de presiones
ax[2].contourf(XX_tau, YY_tau, cp, 200, cmap=cmap)
p = plt.Polygon(list(zip(Xp, Yp)), color="#cccccc", zorder=10)
ax[2].set_title('coeficiente de presión')
ax[2].add_patch(p)
ax[2].set_aspect(1)
ax[2].grid()
```
```python
def cp_perfil_circunferencia(landa, delta, alfa, U=1, N_malla = 100):
N_perfil=100
a=1
#punto del plano complejo
t0 = a * (-landa + delta * 1j)
#valor del radio de la circunferencia
R = a * np.sqrt((1 + landa)**2 + delta**2)
#se barre un ángulo de 0 a 2 pi
theta = np.linspace(0, 2*np.pi, N_perfil)
#se crean las coordenadas del los puntos
#de la circunferencia
Xc = - a * landa + R * np.cos(theta)
Yc = a * delta + R * np.sin(theta)
#se crean las coordenadas del los puntos
#del perfil
Puntos_perfil = transf_yukovski(a, Xc+Yc*1j)
Xp, Yp = np.real(Puntos_perfil) , np.imag(Puntos_perfil)
#se crea la malla donde se va pintar la función de corriente
# Dirección radial
N_R = 50 # Número de puntos en la dirección radial
R_min = R
R_max = 10
# Dirección tangencial
N_T = 180 # Número de puntos en la dirección tangencial
R_ = np.linspace(R_min, R_max, N_R)
T_ = np.linspace(0, 2*np.pi, N_T)
# El menos en la XX es para que el borde de ataque del perfil esté en la izquierda
XX = - (R_ * np.cos(T_).reshape((-1, 1)) - np.real(t0))
YY = R_ * np.sin(T_).reshape((-1, 1)) + np.imag(t0)
tt = XX + YY * 1j
alfa = np.deg2rad(alfa)
# Circulación que hay que añadir al cilindro para
# que se cumpla la hipótesis de Kutta en el perfil
T = 4 * np.pi * a * U * (delta + (1+landa) * alfa)
#Velocidad conjugada
dfdt = U * ( 1 * np.exp(-alfa * 1j) - R**2 / (tt - t0)**2 * np.exp(alfa * 1j) )
dfdt = dfdt + 1j * T / (2*np.pi) * 1 / (tt - t0)
#coeficiente de presion
cp = 1 - np.abs(dfdt)**2 / U**2
Puntos_plano_tau = transf_yukovski(a, tt)
XX_tau, YY_tau = np.real(Puntos_plano_tau) , np.imag(Puntos_plano_tau)
#Se pinta
fig, ax = plt.subplots(1,2)
#coeficiente de presión
fig.set_size_inches(15,15)
ax[0].contourf(XX, YY, cp, 200, cmap=cmap)
ax[0].grid()
ax[0].set_aspect(1)
p = plt.Polygon(list(zip(Xc, Yc)), color="#cccccc", zorder=10)
ax[0].add_patch(p)
ax[0].set_xlim(-5, 5)
ax[0].set_ylim(-3,3)
ax[1].contourf(XX_tau, YY_tau, cp, 200, cmap=cmap)
ax[1].grid()
ax[1].set_aspect(1)
p = plt.Polygon(list(zip(Xp, Yp)), color="#cccccc", zorder=10)
ax[1].add_patch(p)
ax[1].set_xlim(-5, 5)
ax[1].set_ylim(-3,3)
```
```python
interact(cp_perfil_circunferencia,
landa=(0.,1, 0.01),
delta=(0.,1, 0.01),
alfa=(0, 30),
U=(0,10))
```
_In this class we have consolidated our knowledge of NumPy, matplotlib and Python in general (functions, loops, conditionals...) by applying it to a very aeronautical example._
If you liked this class:
<a href="https://twitter.com/share" class="twitter-share-button" data-url="https://github.com/AeroPython/Curso_AeroPython" data-text="Aprendiendo Python con" data-via="pybonacci" data-size="large" data-hashtags="AeroPython">Tweet</a>
---
#### <h4 align="right">¡Síguenos en Twitter!
###### <a href="https://twitter.com/Pybonacci" class="twitter-follow-button" data-show-count="false">Follow @Pybonacci</a> <a href="https://twitter.com/Alex__S12" class="twitter-follow-button" data-show-count="false" align="right";>Follow @Alex__S12</a> <a href="https://twitter.com/newlawrence" class="twitter-follow-button" data-show-count="false" align="right";>Follow @newlawrence</a>
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/deed.es"></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">Curso AeroPython</span> por <span xmlns:cc="http://creativecommons.org/ns#" property="cc:attributionName">Juan Luis Cano Rodriguez y Alejandro Sáez Mollejo</span> se distribuye bajo una <a rel="license" href="http://creativecommons.org/licenses/by/4.0/deed.es">Licencia Creative Commons Atribución 4.0 Internacional</a>.
---
_The following cells contain the notebook configuration._
_To display and use the Twitter links, the notebook must be run as [trusted](http://ipython.org/ipython-doc/dev/notebook/security.html)_
File > Trusted Notebook
```python
%%html
<a href="https://twitter.com/Pybonacci" class="twitter-follow-button" data-show-count="false">Follow @Pybonacci</a>
```
<a href="https://twitter.com/Pybonacci" class="twitter-follow-button" data-show-count="false">Follow @Pybonacci</a>
```python
# This cell applies the style to the notebook
from IPython.core.display import HTML
css_file = '../static/styles/style.css'
HTML(open(css_file, "r").read())
```
/* This template is inspired by the one used by Lorena Barba
in the numerical-mooc repository: https://github.com/numerical-mooc/numerical-mooc
We are thankful for her work and hope you also enjoy the look of the notebooks with this style */
<link href='http://fonts.googleapis.com/css?family=Source+Sans+Pro|Josefin+Sans:400,700,400italic|Ubuntu+Condensed' rel='stylesheet' type='text/css'>
The style has been applied =)
<style>
#notebook_panel { /* main background */
background: #f7f7f7;
}
div.cell { /* set cell width */
width: 900px;
}
div #notebook { /* centre the content */
background: #fff; /* white background for content */
width: 950px;
margin: auto;
padding-left: 0em;
}
#notebook li { /* More space between bullet points */
margin-top:0.7em;
}
/* draw border around running cells */
div.cell.border-box-sizing.code_cell.running {
border: 1px solid #111;
}
/* Put a solid color box around each cell and its output, visually linking them*/
div.cell.code_cell {
font-family: 'Source Sans Pro', sans-serif;
background-color: rgb(255,255,255);
font-size: 110%;
border-radius: 0px;
padding: 0.5em;
margin-left:1em;
margin-top: 1em;
}
div.text_cell_render{
font-family: 'Josefin Sans', serif;
line-height: 145%;
font-size: 125%;
font-weight: 500;
width:750px;
margin-left:auto;
margin-right:auto;
}
/* Formatting for header cells */
.text_cell_render h1, .text_cell_render h2, .text_cell_render h3,
.text_cell_render h4, .text_cell_render h5 {
font-family: 'Ubuntu Condensed', sans-serif;
}
/*
.text_cell_render h1 {
font-family: Flux, 'Ubuntu Condensed', serif;
font-style:regular;
font-weight: 400;
font-size: 30pt;
text-align: center;
line-height: 100%;
color: #335082;
margin-bottom: 0.5em;
margin-top: 0.5em;
display: block;
}
*/
.text_cell_render h1 {
font-weight: 600;
font-size: 35pt;
line-height: 100%;
color: #000000;
margin-bottom: 0.1em;
margin-top: 0.3em;
display: block;
}
.text_cell_render h2 {
margin-top:16px;
font-size: 27pt;
font-weight: 550;
margin-bottom: 0.1em;
margin-top: 0.3em;
font-style: normal;
color: #2c6391;
}
.text_cell_render h3 {
font-size: 20pt;
font-weight: 550;
text-align: left;
margin-bottom: 0.1em;
margin-top: 0.3em;
font-style: normal;
color: #387eb8;
}
.text_cell_render h4 { /*Use this for captions*/
font-size: 18pt;
font-weight: 450;
text-align: left;
margin-bottom: 0.1em;
margin-top: 0.3em;
font-style: normal;
color: #5797cc;
}
.text_cell_render h5 { /*Use this for small titles*/
font-size: 18pt;
font-weight: 550;
color: rgb(163,0,0);
font-style: italic;
margin-bottom: .1em;
margin-top: 0.8em;
display: block;
color: #b21c0d;
}
.text_cell_render h6 { /*use this for copyright note*/
font-family: 'Ubuntu Condensed', sans-serif;
font-weight: 300;
font-size: 14pt;
line-height: 100%;
color: #252525;
text-align: right;
margin-bottom: 1px;
margin-top: 1px;
}
.CodeMirror{
font-family: 'Duru Sans', sans-serif;
font-size: 100%;
}
</style>
```python
```